Invalid SSL certs ...

Published: 2007-06-03
Last Updated: 2007-06-03 21:15:33 UTC
by Swa Frantzen (Version: 1)

We all know them: invalid SSL certs. But how bad are they? And what can we do to improve the situation?

Users

Basically, the users are a weak link in multiple directions. If we teach users that it is OK to accept a bad SSL cert and continue as if nothing is wrong, we take away all their defenses against man-in-the-middle attacks.

Equally, we allow our users to accept and continue interacting with websites that, by providing an invalid certificate, have actually proved there is something wrong with them.

We should get to a situation where we can teach our users in awareness sessions *never* to accept an SSL cert that is apparently bad.

In order to get there, we need to make sure we get certificates signed by a recognized CA everywhere we use SSL certs, such as on our websites. One of the things to do is to take care with "temporary" setups and to be proactive in renewing certificates.

Is your calendar marked to renew your certs?
Do you know when they will expire?
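
If the calendar approach is error prone, a small script can do the reminding. Below is a minimal sketch, using only the Python standard library, that reports when the certificate of each host expires; the host names and the 30-day warning window are placeholders to adapt to your own environment.

    # Minimal certificate-expiry check (standard library only).
    import socket
    import ssl
    from datetime import datetime, timedelta, timezone

    HOSTS = ["www.example.com", "mail.example.com"]   # placeholder hosts
    WARN_BEFORE = timedelta(days=30)                  # renewal reminder window

    context = ssl.create_default_context()

    for host in HOSTS:
        with socket.create_connection((host, 443), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        not_after = datetime.fromtimestamp(
            ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
        remaining = not_after - datetime.now(timezone.utc)
        status = "RENEW SOON" if remaining < WARN_BEFORE else "ok"
        print(f"{host}: expires {not_after:%Y-%m-%d}, {remaining.days} days left [{status}]")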

Man in the middle

Man-in-the-middle attacks on SSL-protected connections are prevented precisely by the certificates and the ability to verify them. The security is to a large extent centered in the procedures used by the certificate authorities (CAs) you choose to trust.

As long as we cannot teach or prevent users from accepting bad certificates, we will always lose this fight. Phishers and the like can work right through SSL and strong authentication if we let our users fall prey to man-in-the-middle attacks.

Do you teach your users the hazards of bad certificates?

Self-signed certificates

Self-signed certificates, which have no recognized CA signing them, aren't bad by definition. They do, however, complicate things: users should verify the certificate before accepting it. Such verification can be done with a fingerprint and an out-of-band communication. Since this means additional work, one would expect a recognized CA to be the simpler option, yet self-signed certificates are still found all over the place.

If you use self-signed certificates, make sure you know how your users will (or will not) deal with them, set up that out-of-band verification, and make sure you have the right reasons for doing this. If you run your own PKI, make sure all users have your root certificates installed as needed so they do not get errors.

If you use self-signed certificates, how often do you get called to verify the fingerprint(s)?
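
That out-of-band verification only works if both sides compute the same fingerprint. A minimal sketch of how to produce one, assuming a hypothetical intranet host and using only the Python standard library (publish the output through a channel you already trust, e.g. the phone or a signed mail):

    # Fetch the certificate a server presents and print its fingerprints
    # for out-of-band comparison. Host and port are placeholders.
    import hashlib
    import ssl

    HOST, PORT = "intranet.example.com", 443         # hypothetical self-signed service

    pem = ssl.get_server_certificate((HOST, PORT))   # no validation: we only want the cert
    der = ssl.PEM_cert_to_DER_cert(pem)              # fingerprints are taken over the DER form

    for name in ("sha256", "sha1"):
        digest = hashlib.new(name, der).hexdigest().upper()
        pretty = ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
        print(f"{name.upper()} fingerprint: {pretty}")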

CA

A certificate authority should have very strict procedures to verify that you are who you claim to be before signing your public key. There have been a few problems in the past with reputable companies, e.g. signing certificates claiming to belong to a well-known software vendor, so these procedures are not foolproof. That's why there are revocation lists; unfortunately, many clients neglect to check them.

Not all CAs use the same procedures to verify whose certificate they sign, so choosing the right one is key: you want as many others as possible to recognize the CA as a good and reputable company with strict rules, but you also want it to be flexible enough - especially when it is located in another country - that you can actually work with it and jump through its hoops.

Do you know which CAs are out there, and how strong their procedures are?
How was your CA selected?
Do you know which CAs are in your browser?
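
One way to get a feel for that list is to dump the trust store your platform exposes. The sketch below uses the Python standard library and shows the CA certificates the operating system hands to Python; it is only an approximation of what a browser trusts, since browsers such as Firefox ship their own CA bundle.

    # List the CA certificates in the platform's default trust store.
    import ssl

    context = ssl.create_default_context()   # loads the platform's default CA certs

    for ca in context.get_ca_certs():
        # "subject" is a tuple of RDNs, each a tuple of (key, value) pairs.
        subject = {k: v for rdn in ca["subject"] for k, v in rdn}
        print(subject.get("organizationName", "?"),
              "/", subject.get("commonName", "?"),
              "- expires", ca["notAfter"])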

Browser makers

Most of us think of the users as the weakest link, but honestly, the browsers those users use are the real weakest link. They simply lack any backbone in preventing users from hurting themselves.

Doesn't your car make an annoying noise when you do not wear your seatbelt while driving it?

Then why does your browser only need an obscure "next" to proceed to a website that has a bad cert? Why not:

  • Prevent access to websites with bad SSL certs (the site has basically proved it isn't who it claims to be!), putting the burden of having correct certificates on the website owners.
  • Show a red overlay on every page load/refresh warning the user the site is not to be trusted.
  • Refuse to let forms send data to an https site that has a bad cert.
  • Refuse to load images, scripts, ... from such sites.
  • ...

And as far as bad certificates go, how about telling the user what is wrong with the certificate in understandable language? While we're at it, make that text easy to cut and paste so users can talk to the administrators.
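
For scripts and in-house tools we do not have to wait for the browser makers: we can refuse outright and explain the failure in plain words. A minimal sketch, with a hypothetical host name and error wording of my own choosing:

    # Refuse to talk to a site with a bad certificate and report why,
    # in language that can be cut and pasted into a mail to the site's admins.
    import socket
    import ssl

    HOST = "selfsigned.example.com"           # hypothetical site with a bad cert
    context = ssl.create_default_context()    # verification and hostname check stay on

    try:
        with socket.create_connection((HOST, 443), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=HOST):
                print(f"{HOST}: certificate checked out, safe to proceed.")
    except ssl.SSLCertVerificationError as err:
        print(f"Refusing to talk to {HOST}.")
        print(f"The certificate it presented did not verify: {err.verify_message}")
    except ssl.SSLError as err:
        print(f"Refusing to talk to {HOST}: TLS handshake failed: {err}")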

While this might seem hard to sell to consumers, I'm not sure it would be that hard to sell to administrators in a company wanting to step security up a notch or two.

Since browser makers also choose for most of the world which CAs are trusted and which are not, how about putting that choice a bit more under the control of the administrators of the computer? E.g. if you delete a CA's root cert, how about not adding it back with every patch, forcing the admin to redo the work over and over.

Did you think of the impact of users switching browsers on the list of CAs they trust?

Conclusion

I think we need to eradicate bad certificates on all of our websites. Next, teach our users significantly better habits, and start by making those bad habits progressively harder to keep in the browsers we let our users use.

--
Swa Frantzen -- NET2S


IIS 5.0 authentication bypass exploit -- CVE-2007-2815

Published: 2007-06-03
Last Updated: 2007-06-03 20:04:36 UTC
by Swa Frantzen (Version: 1)

David wrote in pointing us to an exploit against IIS 5.0 and 5.1. The exploit was discovered on December 15, 2006, and has been public since the end of May 2007. The design of IIS 5.x allows basic authentication to be bypassed by using the hit-highlighting feature.

Microsoft's response seems a bit atypical for them, as it includes a section on how to reproduce the exploit. In other words: Microsoft is telling the world how to exploit its products as used by its customers. Not that the worst of those interested did not already know, but the one thing we need from Microsoft is not the exploit but the patch, or at least a decent workaround. And that patch is lacking. Their only defensive advice is to upgrade to IIS 6.0.

This means you would also need to upgrade Windows 2000 or XP to Windows 2003, and such an upgrade is neither free nor easy. So what do we do when Microsoft gives no advice other than to upgrade to IIS 6.0? Let's look at alternatives.

Feel free to write in if you know more effective alternatives:

  • Most probably there is a way to remove something or change a registry setting to prevent this; unfortunately, exactly what is neither documented nor validated.
    Eric told us: "If you don't use the web hits functionality, a simple workaround would be to remove the script mapping for .htw files". Without a script mapping, IIS should treat the file as static content.
  • Try to use application-level firewalls (filters). They aren't the easiest to configure considering all the ways URLs can be encoded, but it's something that might help for a while, even if getting it fully right will be a pain (a rough sketch of such a filter follows after this list). If you have the infrastructure, it can be a temporary measure until you can upgrade IIS and solve the actual problem.
  • URLScan, a URL filter by Microsoft, can actually be used to stop access to .htw files and is reported by some readers to be effective. While a URL scanner inside the web server might know all possible encodings, it remains a poor man's choice, but most likely good enough as a workaround in the short run, provided you do not need the .htw functionality.
  • A number of readers are preventing access by managing permissions on the confidential files or directories themselves. To people used to Apache this sounds odd, but IIS uses OS-level users, so the permissions set in the filesystem can be used to limit rights, and they will protect against server-side scripts walking the document root tree as well.
  • Upgrade to Apache or another web server, with or without a (cross-)upgrade of the OS.
  • Scramble to upgrade to Windows 2003, potentially on more potent hardware.
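
To illustrate the encoding pain mentioned in the filtering option above, here is a rough sketch (illustrative only, not a drop-in ISAPI filter) of the kind of check such a filter needs: repeatedly percent-decode the path before looking at the extension, since attackers can encode characters more than once. The function name and sample URLs are mine.

    # Reject requests that ultimately target a .htw resource, undoing
    # nested percent-encoding first.
    from urllib.parse import unquote, urlsplit

    def is_htw_request(raw_url: str, max_rounds: int = 5) -> bool:
        path = urlsplit(raw_url).path
        for _ in range(max_rounds):          # undo nested %-encoding
            decoded = unquote(path)
            if decoded == path:
                break
            path = decoded
        return path.lower().rstrip(".").endswith(".htw")

    samples = [
        "/null.htw?CiWebhitsfile=/protected.asp&CiRestriction=none",
        "/null%2ehtw?CiWebhitsfile=/protected.asp",
        "/null%252ehtw",                     # double-encoded dot
        "/default.asp",
    ]
    for url in samples:
        print(url, "->", "BLOCK" if is_htw_request(url) else "allow")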

While the public exploits seem to focus on leaking protected information, the ability to execute code is unexplored, but hinted at.

Contrary to my normal habit of not broadcasting exploitable information - since Microsoft themselves are already telling the world - take a look in your IIS logs for hits like:

"/null.htw?CiWebhitsfile=protected_file&CiRestriction=none&CiHiliteType=full"

Don't be blindsided if you do not find "null.htw" in your document root directory: the exploit does not need that file at all. In fact, the reference needs to point to a file that does not exist, and since it can point anywhere, that's not a working workaround either.
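
A quick way to run that check over whole log files is sketched below. The log directory is the usual IIS 5 default on Windows 2000 and the search patterns are only the obvious ones, so adapt both to your own setup; a clever attacker can encode the request differently.

    # Scan IIS W3C extended logs for hits that look like the hit-highlighting exploit.
    import glob
    import re

    LOG_GLOB = r"C:\WINNT\system32\LogFiles\W3SVC1\*.log"   # adjust to your site ID
    PATTERN = re.compile(r"\.htw\b|CiWebhitsfile|CiRestriction", re.IGNORECASE)

    for logfile in glob.glob(LOG_GLOB):
        with open(logfile, errors="replace") as fh:
            for lineno, line in enumerate(fh, 1):
                if PATTERN.search(line):
                    print(f"{logfile}:{lineno}: {line.strip()}")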

The one workaround that seems to be working is to install and configure URLScan, if you have not done so already. Andrew wrote in with: "use URLScan to block all requests for htw files (or, better yet, set URLScan never to permit requests for any extensions but ones you know you need)". URLScan as a workaround remains an ugly solution, as it bolts on filtering as an afterthought instead of proper security by design, but then again, not that many web servers come with security as one of their very top requirements.

A reader pointed us to Aqtronix Webknight as an alternative URL filter that could help stop the exploits against IIS (GNU licensed).

--
Swa Frantzen -- NET2S
