Yesterday we announced we would write about tips on how people patch their systems after a Black Tuesday. Since Mike is apparently suffering from withdrawal symptoms after Defcon, his fellow handlers will do the honors.
There are basically a few tactics in use. What strikes me in the responses we got is that most of those writing in value not breaking applications significantly more than patching before an exploit hits them. Perhaps a lot of work remains to be done to convince (upper) management of the risks of patching late: patching even an hour after a worm or targeted exploit hits you might cost the company significantly more than losing a few hours here and there over a non-critical system not being 100% healthy after a new patch.
Just patch

The folks doing this take the risk and let the patches roll out across their organization. They expect a few systems to fail more or less randomly and will deal with them as they go. Should one of the patches prove incompatible with one of their critical applications, they will deal with it at that point.
This group by necessity also includes most home users, as they lack the means to do anything else. At best they can wait and see whether others report problems, but if everybody does that it won't work, and waiting leaves you exposed for longer.
Test on a limited scale, roll out carefully

You can use Microsoft tools such as WSUS and delay the rollout a few hours to make sure a few test systems survive and can still run the critical applications. Smaller organizations typically use this with great success for the mass of general-purpose client machines.
Reader Ken wrote: "As we all know, patching any kind of operating system or application is fraught with dangers. In my environment, I don't have the luxury of the full test environment I would love to have for testing each patch against all the applications and services in use; that is just not possible with a limited budget.
In order to minimize the risk of a patch causing harm, I apply patches first to a set of known systems. The first system is my own workstation. I'd rather have it crash there than one of my coworkers' systems. After a day or so, the patches are then deployed to a subset of the systems (about 10) in the office. Finally, if there are still no issues found and no problems have been reported on sites such as the Internet Storm Center or on any of the security lists, the patches are distributed to all systems.
I actually use two tools for patch management. The first is Microsoft's WSUS service. I have all systems pointing there in order to get their updates. There are a couple of advantages. The first is Internet bandwidth usage: the patches are only downloaded once for all the systems, which can be a major savings in terms of time and bandwidth. Second, I can specify how and when the patches are applied via a GPO. Third, I control which patches are installed. If there is a known problem with a specific patch, I can simply not release that one patch to the users. Finally, I can get a status on which patches are applied and which systems have had problems installing them. The other tool is Shavlik's NetChk. This tool allows me to deploy a number of non-Microsoft Windows application patches and also to verify that patches are indeed being installed.
I use a similar process when it comes to applying patches to UNIX systems: first my own system, then a subset of systems, and finally all the systems.
So far, I have not had a major problem when it has come to applying patches. In almost every case where there was an issue, it surfaced within the smaller group of systems and the disruption was minimized.
Of course patching is not the only line of defense. I also have NIDS, firewalls, web proxy servers, virus scanning and log monitoring in place to try to reduce the risk to the office. Recently, user information security awareness sessions have also been started within the organization. This helps bring the users into the equation of defending the company against malicious software and web sites."
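Ken's staged approach amounts to deploying in rings and halting as soon as a ring surfaces problems. A minimal sketch of that logic, with entirely hypothetical ring names and sizes (none of this comes from Ken's actual tooling):

```python
# Hypothetical sketch of a staged ("ring") patch rollout: each ring is
# patched in turn, and the rollout halts as soon as a ring's health
# check fails. Ring names/sizes are made up for illustration.

RINGS = [
    ("admin workstation", 1),   # patch your own machine first
    ("pilot group", 10),        # ~10 office systems next
    ("everyone", 200),          # then the rest of the fleet
]

def deploy(patch, ring_ok):
    """Deploy `patch` ring by ring. `ring_ok(patch, ring_name)` stands in
    for the soak period: critical apps still work, and no problems were
    reported on sites like the Internet Storm Center. Returns the list
    of rings that received the patch."""
    patched = []
    for name, _size in RINGS:
        patched.append(name)          # apply the patch to this ring
        if not ring_ok(patch, name):  # health check after the soak period
            break                     # halt before reaching the next ring
    return patched

# Example: a problem surfaces in the pilot group, so "everyone" is spared.
print(deploy("MS06-040", lambda p, ring: ring != "pilot group"))
# ['admin workstation', 'pilot group']
```

The payoff of this structure is exactly what Ken describes: when a patch does misbehave, the disruption stays contained in the smaller, earlier rings.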
Test applications thoroughly

Testing applications exhaustively is next to impossible; at the very best you can test a few critical operations and will have to trust the patch at some point. This approach is more often used on critical servers. The big drawback is that it takes time and resources. But in those cases where a patch turns out to be incompatible, it spares you the pain of rolling the patch back out and having to repair any damage.
Mike wrote in with his strategy: "Simple strategy really:
Personally I like the last line of his comment, as it shows they are trying to balance the heavy testing scheme with a fast track for getting those "PATCH NOW" patches out.
Fully featured, planned rollouts

Some organizations might (need to) plan all their patching ahead and actually make a roll-out plan that covers a long period before they come full circle and start over.
We had one such anonymous submission: "On the day after Black Tuesday, a task force meets to discuss the recently released patches. There is a set of ~100 users who represent all applications used. They get the patches via MS SMS to test. Once they verify their apps still function as expected, the patches are sent out via SMS each week to four predefined patch groups. This process lasts a month. Lather. Rinse. Repeat."
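The monthly cycle in that submission is essentially a repeating weekly schedule: a test group first, then four patch groups, then back around. A small sketch, with group names invented for illustration (the submission did not name them):

```python
# Hypothetical sketch of the monthly SMS rollout described above:
# a ~100-user group representing every application validates the
# patches first, then four predefined patch groups receive them
# one week at a time, and the cycle restarts the next month.

SCHEDULE = [
    "app-representatives",   # ~100 users covering all applications in use
    "patch-group-1",
    "patch-group-2",
    "patch-group-3",
    "patch-group-4",
]

def group_for_week(week):
    """Which group receives the patches in a given week of the cycle
    (week 0 = the test group, right after Black Tuesday)."""
    return SCHEDULE[week % len(SCHEDULE)]  # lather, rinse, repeat

print(group_for_week(0))  # app-representatives
print(group_for_week(4))  # patch-group-4
print(group_for_week(5))  # app-representatives (cycle restarts)
```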
Divide and conquer

A well-known strategy from the real world can be used here as well: divide the machines to be patched into different categories and tackle each one differently. E.g.:
Let's not forget that one of the reasons Microsoft is slow to release patches -aside from the obvious marketing impact- is that they test these patches. So you only get tested patches to start with ...
Thanks to all those writing in!
Swa Frantzen -- Section 66
Aug 9th 2006