The day after patch Tuesday; sometimes called Wednesday

Published: 2012-05-09
Last Updated: 2012-05-09 16:49:23 UTC
by Dan Goldberg (Version: 1)
4 comment(s)

This is my first diary entry in several years; I am returning as a handler after a lengthy hiatus. I had joined an organization that demanded too much of my time to permit this kind of interaction. It was worth it, but that ride is coming to a close, and I am happy to be able to return to this fine organization.


Today many of us are working through the monthly onslaught of patches and updates. Between the Microsoft May 2012 updates, PHP, ESX, and some Adobe updates, there is quite a bit to think about. This is a monthly occurrence, though, and there are a number of steps organizations can take to prepare for the recurring event. A simple one is to mark the second Tuesday on a team calendar, start to clear the deck on the Friday before, and make sure that test systems are ready to go following the Tuesday release.
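For those who like to automate the calendar reminder, here is a minimal Python sketch. The "prep Friday" offset and the example dates are just illustrations of the idea above, not part of any official schedule.

```python
# Minimal sketch: compute Patch Tuesday (the second Tuesday of a month),
# plus a "clear the deck" reminder on the Friday before.
import calendar
from datetime import date, timedelta

def patch_tuesday(year: int, month: int) -> date:
    """Return the second Tuesday of the given month."""
    cal = calendar.Calendar()
    tuesdays = [d for d in cal.itermonthdates(year, month)
                if d.month == month and d.weekday() == calendar.TUESDAY]
    return tuesdays[1]

def prep_friday(pt: date) -> date:
    """Return the Friday before Patch Tuesday."""
    # Tuesday is weekday 1; the preceding Friday is 4 days earlier.
    return pt - timedelta(days=4)

if __name__ == "__main__":
    pt = patch_tuesday(2012, 5)
    print("Patch Tuesday:", pt)               # 2012-05-08
    print("Start prep on:", prep_friday(pt))  # 2012-05-04
```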

I have seen a number of approaches to patch preparation. At one extreme, all critical systems are replicated in a lab, patches are applied, and a QA team validates key functions. At the other extreme, patches are simply applied and the organization deals with the fallout. Not being an extremist, I like to land somewhere in the middle, depending on the organization's size, mission, and capability.

There is also the triage effort of reviewing updates and deciding how long to wait before applying them. I have seen one organization that waited 10 days after the MSFT release and then applied all released patches, counting on the forums and general buzz about the updates to call out any problems with them. This, of course, can leave the organization exposed to other risks if an exploit is in the wild.
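If you adopt a deferral policy like that, the gate itself is easy to express. This is a hypothetical sketch of the 10-day wait described above, not an endorsement of it:

```python
# Sketch of a deferral policy: only proceed once enough days have passed
# since the month's Patch Tuesday. The window is illustrative.
from datetime import date, timedelta

DEFERRAL_DAYS = 10  # the waiting period mentioned above

def ok_to_deploy(patch_tuesday: date, today: date) -> bool:
    return today >= patch_tuesday + timedelta(days=DEFERRAL_DAYS)

# Example: May 2012 patches (released 2012-05-08) would not be deployed
# until 2012-05-18 under this policy.
print(ok_to_deploy(date(2012, 5, 8), date(2012, 5, 9)))   # False
print(ok_to_deploy(date(2012, 5, 8), date(2012, 5, 18)))  # True
```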

I advocate a more hands-on approach, especially with key systems. The organization just mentioned ran into a problem recently where two RADIUS (IAS) servers were taken offline by a patch that modified the CA certificate. This brought the IAS servers down, impacting wireless access for several hours while the problem was identified, investigated, and resolved. Testing first, or patching one system at a time, could have prevented or mitigated this outage.
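Here is a rough sketch of the one-at-a-time idea. The host names are hypothetical, and apply_patches() / service_is_healthy() are placeholders for whatever your real patch tooling and monitoring provide:

```python
# Hypothetical sketch of a one-at-a-time rollout: patch a server, verify the
# service it provides still responds, and stop before touching its peer.
import sys
import time

SERVERS = ["radius-1.example.org", "radius-2.example.org"]  # hypothetical names

def apply_patches(host: str) -> None:
    print(f"[{host}] applying this month's updates (placeholder)")

def service_is_healthy(host: str) -> bool:
    print(f"[{host}] checking RADIUS authentication (placeholder)")
    return True

for host in SERVERS:
    apply_patches(host)
    time.sleep(30)  # give services time to restart
    if not service_is_healthy(host):
        # Stop here: the unpatched peer keeps wireless auth alive while
        # the problem is identified, investigated, and resolved.
        sys.exit(f"{host} failed post-patch checks; halting rollout")
```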

What are some approaches that work, and some that don't? Care to share?

--
Dan
MADJiC.net


Comments

Sometimes clustered/paired/failover systems have to be at the same patch level. And as you mentioned with the RADIUS servers, that can mean applying a patch to both/all of the servers providing a service at the same time, which can cause a failure - so much for redundancy, eh? :-)

In an ideal world, we have a development environment for things like this where patches can be applied, and tested before deploying to production servers. Unfortunately, that's not always possible (or affordable)...
Our organization is on the small side. Our non-extremist strategy is to immediately deploy patches to a small group of people and then listen for screams. If they're all still alive after a few days, we patch everyone.
Same as John
Dan, I would agree with you. We try to deploy patches to a development server first before we do the same to production, but we also need to make sure our test area is matched as closely as possible to our production hardware, software, etc. If that is not possible, then we will try to update a passive or secondary server in production to see its impact. For many companies these days it is hard to replicate a production environment, so many take the hybrid approach or, like you said, wait until the patches have had time out on the forums.

thanks,
http://mjddesign.wordpress.com
