Windows 7 - not so secure?

Published: 2009-01-31
Last Updated: 2009-01-31 21:47:42 UTC
by Swa Frantzen (Version: 2)
1 comment(s)

While it is still a beta program, and as such not very interesting to report on yet, there is a little buzz about Windows 7 sacrificing security for usability.

Basically, the Windows 7 beta "fixed" the annoying Vista security prompts by letting the user configure UAC (and configuring it so by default) to "Notify me only when programs try to make changes to my computer" and "Don't notify me when I make changes to Windows settings". The tricky bit, of course, is being able to differentiate between what a program initiates and what the user initiates (the user is, after all, always controlling some piece of software).

It seems the Windows 7 beta isn't very good at making that critical distinction, as it has already been beaten.

The authors have a workaround ready: change the setting to always prompt (in other words, bring back the annoying prompt from Vista).

The entire thing is typical of an approach that puts a lot of value on what the user decides. While that might make sense at some level for home situations, for corporate situations where control is needed, putting the user in charge of security is hardly ever considered a good solution, as the user can be tricked by various means into choosing the wrong course of action.

Perhaps a solution would lie in radically different ways of working between a "home" edition and a "business" edition (far more than the incomprehensible marketing and fancy GUI sauce it is today), with a series of settings, and ways to control them, that are radically different. In the end, home users are often on their own, so separation of duties and the like is very hard to implement properly. Those same separate roles are already in place in environments that need a more secure setup, but they get hampered by the permissive nature of software designed for home use.

Still, if you look at the past few years' worth of severity ratings in MS0X-0XX bulletins, you'll notice a consistent trend of rating the severity of a problem significantly lower if the user had to confirm something. This isn't just for the OS, but equally for e.g. the office suite. Now, if you want to know how a regular user reacts to these prompts: watch them (without them being aware of your interest) when a pop-up blocks them from doing what they want to do. They won't read it; they'll just find the ok/continue/next/approve/... button and click it as fast as possible.

So what's the real value of reactive user approval?

  • They typically don't read the warnings at all; they just want to get to the good bits.
  • Through social engineering they can be talked into allowing an action even when they should know better.


So how do other OSes handle this?

The traditional unix solution is to separate tasks between regular users and "root": the regular user typically cannot change system settings, only root can do that, and one has to either:

  • log in as root (best practices block this avenue)
  • become root using su (optionally only allowed for users in the wheel group, requires knowledge of the root password)
  • execute a command using sudo for users with the needed privileges (not needing to know the root password)

Note that all of these require an up-front action by the user to get more rights; applications are given no way to take the initiative and prompt users for more rights.

A modern Mac OS X machine has that same unix pedigree (sudo works perfectly fine on a Mac; root by default has no password, so the other two avenues are closed), but it also adds a graphical equivalent of sudo that walks the far more dangerous path of prompting the user for a password as needed, allowing software to take the initiative and leaving the user with a judgment call to make.

Have you seen other security default settings in the Windows 7 beta you don't agree with? Let us know!

UPDATE: A reader wishing to remain anonymous pointed to the follow-up story by the original reporters. Basically, it boils down to this: they say Microsoft persists in this and, worst of all, that Microsoft doesn't consider it a vulnerability.

Swa Frantzen -- Section 66


Google Search Engine's Malware Detection Broken

Published: 2009-01-31
Last Updated: 2009-01-31 18:17:26 UTC
by John Bambenek (Version: 1)
5 comment(s)

As of right now, it appears any Google search you do will return the same results as before.  What has changed is that every site is reported as possibly containing malware (i.e. the "This site may harm your computer" warning shows with every result).  Apparently this has been happening for about the last 15 minutes.  So things are going a little haywire there, and I'm sure it'll be fixed shortly.  Bottom line: there is no massive web-based attack going on.

The interesting backstory is that I discovered this problem via Twitter. Specifically, I use TweetDeck and noticed that all of a sudden "harm", "malware", "harmful" and "google" jumped to the top of the trending list. I took a look, found out about the problem, and confirmed it for myself.  I'm still somewhat skeptical of using Twitter trends to get hard-core intelligence about what is going on around you, but it certainly does point out some things to look at, even for information security professionals.

UPDATE X1: It appears international versions of Google search are also impacted.

UPDATE X2: It appears that the problem has since been fixed.

UPDATE X3: Google's response:

(Weekend humor: I only thought of this after the diary was written, but a better title would have been "Whitelisting: You're Doing It Wrong")

John Bambenek, bambenek /at/ gmail \dot\ com


DNS DDoS - let's use a long-term solution

Published: 2009-01-31
Last Updated: 2009-01-31 17:17:56 UTC
by Swa Frantzen (Version: 2)
2 comment(s)

The current batch of DDoS attacks has now continued for quite a few days. Let's recap a few things and look forward to a long-term solution.

The current attacks:

A spoofed UDP query triggers a reply containing the root cache information, which at ~330 bytes is significantly larger than the query.

The victims are twofold: those who reply to the query (or even those who just get probed and don't reply) and those who are the ultimate targets of the DDoS (and yes, those who reply to it are victims, nothing more).

This results in two things. First, the attackers' botnet is harder for the ISPs involved to trace back, as the spoofed packets are each fairly small (60 bytes) and never come near the ultimate victim; so the white hat community needs to work more closely and harder to track down the source(s). Second, the amplification factor is significant (~5.5x), which means the attackers can use fewer of their bots to completely flood the ultimate victim.
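The amplification figure follows directly from the packet sizes quoted above (a quick sketch; the byte counts are the diary's approximate figures, not exact wire captures):

```python
# Back-of-the-envelope amplification math for the root-cache attack.
# Sizes are the approximate figures from the diary text.
QUERY_BYTES = 60    # spoofed UDP query, including headers
REPLY_BYTES = 330   # reply carrying the root cache information

amplification = REPLY_BYTES / QUERY_BYTES
print(f"amplification factor: ~{amplification:.1f}x")  # → ~5.5x
```

Every spoofed byte the bot sends thus lands as roughly five and a half bytes on the victim's link.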

Quite a few calls are being made to stop handing out root cache copies, some even going as far as calling those who operate the intermediate victims incompetent, or worse.

While I fully sympathize with the ultimate victims, I also sympathize with the intermediate victims, as they've not done that much wrong. In fact, it can be argued they did absolutely nothing wrong.

Past incidents

In the past we have seen similar attacks. They also had large amplification factors. They used open resolvers to query a very large TXT record from a DNS server, and those resolvers sent the huge (cached) replies back to the victim whose IP address had been spoofed.

The call at the time was for us all to stop running open resolvers: something that was originally considered neighborly and friendly became a network hazard, and in fact offensive to continue to offer.

Some reading from 2006 on these attacks:

Longer term solutions

Clearly attacks evolve (even if it takes a few years), so we need to be ready for the next ploy by the time the attackers hit us with it.

What's to stop them, once we shut down enough root cache responders, from starting to ask our name servers questions we have to answer? Something like a large reply in a domain the server is authoritative for? Sure, not all servers will answer the same query, but all things considered that is not very complex for a botnet controller to cope with: keeping a table of which bot gets which query takes no rocket scientist to program.

So what are we going to shut down when they do this? Are we going to start hunting for the long replies people might have and try to make them shorter?

Or are we finally going to put pressure on the ISPs to stop, once and for all, their customers' ability to spoof their source IP address?

The root problem isn't so much that a stateless protocol like UDP replies with something longer than the question. In itself that's not a problem, as long as IP spoofing is made impossible.

So what anti-spoofing measures are we talking about?

Is stopping spoofing at the AS borders enough? Not really: it still allows the bad guys to group their bots per ISP (if they haven't done so already), send their spoofed requests within the ISP, and then have a non-spoofed reply go to the victim. Moreover, this can't be done at transit providers' borders, as it would greatly impact the self-healing nature of the Internet.

Is stopping the spoofing at the border between the ISP and the individual customer enough? Bingo! But these filters aren't trivial to implement:

  • Regular dial-up and even xDSL and cable customers get a dynamic IP address, forcing the filter to be dynamic as well.
  • Some larger customers are mixed in with the consumers but have routed networks, forcing the filter to be derived dynamically from the routing tables.
  • Some customers are multi-homed. They have connections to multiple ISPs and want the ability to send packets out to one ISP even when they'd get the reply on their other connection. Depending on just how this is done (multiple options exist), this can require allowing the addresses the other ISP(s) have allocated to the customer, using information from the customer's ASN, etc.
  • ...

So full ingress/egress filtering isn't easy to achieve, and vendors' equipment in active use might not even be able to support it. IMHO it's the only thing that will make all stateless protocols safer from being abused either to amplify an attack or to hide the real location of the attackers. Some botnets out there are by far large enough to blow just about anybody out of the water; one of the larger botnets doesn't even need amplification as such (do the math: upstream capacity times number of bots).
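That "do the math" remark can be sketched as follows; the bot count and per-bot upstream figures are hypothetical illustration values, not measurements from any real botnet:

```python
# Rough flood capacity of a botnet without any amplification.
# Both numbers below are assumed, illustrative values.
bots = 50_000        # hypothetical botnet size
upstream_kbps = 256  # hypothetical per-bot upstream capacity (kbit/s)

total_mbps = bots * upstream_kbps / 1000
print(f"raw flood capacity: ~{total_mbps:.0f} Mbit/s")  # → ~12800 Mbit/s
```

Even with these modest per-bot numbers, the aggregate dwarfs most victims' links before any amplification is applied.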

More references:

  • BCP 38, RFC 3704 best current practices regarding ingress filtering dating back to May 2000 and March 2004 respectively.
  • Unicast Reverse Path Forwarding: aimed at ISPs; broadly speaking, a "cheap" way to use routing information to decide whether to allow traffic in the reverse direction.

So in the end, it's my opinion that pressure needs to be put on those ISPs that do not have full anti-spoofing measures for all their customers.

Now if you run a sizable network, you can help with this too: prevent any source address that isn't in your officially assigned address space from leaving your network onto the Internet. You won't filter away valid traffic, as you couldn't get answers to it anyway, and you're doing your good deed (hint: log the filtered traffic, it might point to misconfigured and/or infected hosts).
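A minimal sketch of such an egress check, using Python's standard ipaddress module. The prefix below is a documentation range (RFC 5737) standing in for your own assigned space; in practice this logic lives in your border router's ACLs, not in a script:

```python
import ipaddress

# Hypothetical assigned address space; substitute your own prefixes.
ASSIGNED = [ipaddress.ip_network("203.0.113.0/24")]

def egress_allowed(src: str) -> bool:
    """Return True if a packet with this source address may leave the network."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in ASSIGNED)

print(egress_allowed("203.0.113.7"))  # inside assigned space: let it out
print(egress_allowed("192.0.2.99"))   # spoofed source: drop (and log) it
```

Anything failing the check is spoofed by definition, which is exactly why logging it is useful for spotting infected hosts.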

If you're an intermediate victim, please do not see this text as an excuse not to help minimize the ongoing attacks by removing long root cache replies. You're in a position to help (however little each of you can do individually), so please do, even if you're not the root cause of the problem.

Swa Frantzen -- Section 66 

Keywords: DDoS DNS spoofing

VMware updates

Published: 2009-01-31
Last Updated: 2009-01-31 13:39:22 UTC
by Swa Frantzen (Version: 1)
0 comment(s)

VMware issued a number of fixes for VMware ESXi 3.5, VMware ESX 3.5, VMware ESX 3.0.3 and VMware ESX 3.0.2.

These fix CVE-2008-4914 (corrupt VMDK delta file crash), CVE-2008-4309 (SNMP getbulk DoS), and CVE-2008-4226 and CVE-2008-4225 (both in libxml2).


Swa Frantzen -- Section 66

