Diaries

Published: 2012-10-31

Cyber Security Awareness Month - Day 31 - Business Continuity and Disaster Recovery

What better time to talk about business continuity and disaster recovery? The "super storm" Sandy showed how important, and how difficult, adequate preparation can be. Business continuity and disaster recovery are two separate activities, but of course they affect each other and have to be considered together. Business continuity deals with keeping the business going during an event, and disaster recovery relates to getting back to normal after the event has passed. The better you can maintain "business as normal" during the event, the easier it should be to recover. But in some cases, keeping the business open is just not an option.

All too often business continuity and disaster recovery planning (BCP/DRP) is associated with large natural disasters like hurricanes and earthquakes. But I find that it is more useful to start with "little things" that happen regularly and scale your plan up from there. For example, some of these little things are:

- server failures
- component failures (switch, hard drive)
- road closures
- network provider outages

Taken together, the actions you take to cover yourself against these "little issues" can very well add up to a plan that covers the big problems as well. And these little issues are a lot easier to measure and test than the big ones.

Business continuity is covered as part of ISO 27002. The British Standards Institution created a distinct BCP standard, BS 25999, which is also widely referenced. Like all the ISO 27000 series standards, the BCP/DRP guidance is heavy on process improvement. There are, however, a couple of very important items to consider:

First of all, you have to define the goal of your business continuity plan. 100% uptime is not a realistic goal. You need to distinguish business critical from non-critical functions. BCP/DRP is usually only applied for critical functions. For each function, you need to define:

- Recovery Point Objective: how much data are you willing to risk? For example, if you have daily off-site backup tapes, you will risk one day's worth of work. For a development shop, losing one day of work may not be pretty, but it is probably acceptable. For a financial institution, losing one day of transactions is probably catastrophic and not acceptable.

- Recovery Time Objective: how long can you afford to be "out of business"? In some cases, depending on the disaster, there may also be no point in being open for business. If you have a shop in a subway station, and the subway is shut down, it doesn't help you to be open. It is important to be realistic and not to set overly optimistic goals.

For each critical business function, you need to map what resources are needed to fulfill the function (servers, networks, people...).

Once you define the critical business functions and the acceptable downtime, you need to consider different threats and how they affect the resources required for each function. As I mentioned in the beginning: start with "little" events that happen regularly. This will make it easy to define the likelihood and also to test the mitigation techniques. I would use events like "hard disk failure", "network outage" and "power failure". Also consider compound failures ("what if power goes out and, as a result, one of our router's power supplies burns out, cutting off internet access"). These cascade/compound failures are quite common.

As part of this threat analysis, you should be able to figure out how likely it is to suffer a particular outage, and how you are going to react to each event.

Testing your failover plans is of course very important. I actually recommend regular failover even if there is no event. In my experience, if you don't do it at least once a month (better: once a week), it will not work if needed. The problem is that your networks and business processes are not static. They keep changing and your plans need to be updated in response. If you don't test it regularly, you are not going to uncover these changes. And regular tests will force everybody to keep the plan up to date in order to avoid regular failures.

Back to the event at hand: Hurricane Sandy. This is an event of a scale that will challenge any BCP. First of all, keeping the business running is not necessarily a sensible option in many cases (see my subway store example above). Businesses are located in expensive and in many ways "inconvenient" locations like New York City because they derive special advantages from the concentration of businesses in the area. Just packing up and moving to a different location will keep the network running, but you may lose physical proximity to customers and collaborators. For example, the stock exchange would have been able to operate entirely electronically. However, the decision was made to keep it closed, as it wasn't safe for the traders to all come to the trading floor, and having them work remotely from home would remove the personal contact required for some of the trades. Another challenge is to define the worst possible disaster to prepare for. For flooding, a "100 year flood" is usually used to drive planning. The National Flood Insurance Program publishes maps that indicate what a "100 year flood" in your area means. However, Sandy exceeded these levels, and as a result many businesses were not prepared and had equipment like fuel pumps for generators placed in locations that got flooded.

 

------

Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter

0 Comments

Published: 2012-10-30

Cyber Security Awareness Month - Day 30 - DSD 35 mitigating controls

Nearing the end of the month it would be remiss not to mention the DSD 35 mitigating strategies.  Whilst not strictly a standard, the document provides solid guidance. The Defence Signals Directorate, or DSD, is an Australian government body that deals with many things called Cyber.  Amongst other things, it is responsible for providing guidance to Australian Government agencies and has produced the Information Security Manual (ISM) for years.

In the past few years they have expanded on this and produced the DSD 35 mitigating strategies. These strategies are based on the examination of intrusions into government systems and were developed to address the main gaps which, had they been closed, would have prevented the breach in the first place. In fact, DSD states that implementing just the top 4 strategies would have prevented at least 85% of the intrusions.
The top four are:

  • Application whitelisting
  • Patch Applications
  • Patch operating system vulnerabilities
  • Minimise the number of users with domain or local administrative privileges.


Implementing the top 4 (some general hints, anyway)

Application whitelisting
Application whitelisting is one of the more effective methods of preventing malware from executing and therefore the attack from being effective.  The main argument you will hear against it is that application whitelisting is difficult to achieve, which in a sense is correct.  It will take effort to get the environment to the point where everything functions as it should.  However, immediately following the top 4 is a good piece of advice from DSD: "Once organisations have implemented the top four mitigation strategies, firstly on computers used by employees most likely to be targeted by intrusions and then for all users, additional mitigation strategies can then be selected to address system security gaps to reach an acceptable level of residual risk."  In other words, address the high risk users and issues first and then propagate the control to the remainder of the organisation.

There are a number of tools available that will implement application whitelisting and handle the initial profiling of systems in order to get the whitelisting right.  A number of endpoint products are also capable of enforcing it, and of course AppLocker in Windows can also do the job.  When implementing it, make sure you do this in test environments first to sort out the issues.

Patch Applications
Patching applications is something we all should be doing, but it can be difficult to achieve.  One issue that I come across is "the vendor won't let us". Providers of certain applications, usually expensive ones, will not allow the environment to be changed.  If you patch the operating system or supporting products, they will not provide support.  In one extreme case I'm aware of, the vendor of the product insisted the operating system be reverted back to XP SP2 before they would provide support. Those situations are difficult to resolve and unfortunately I can't help you out there. However, going forward it may be an idea to make sure that support contracts allow the operating system and other supporting products to be patched without penalty. As a minimum, identify what really can't change and what can.

So, outside of those applications that are just too hard, implement a process that patches the applications that can be patched, and maybe remove those that are really not needed. For those applications that are too hard, you will have to find some other controls that help you reduce the risk to them; possibly strategy one?

Patch operating system vulnerabilities
Many organisations have this sorted reasonably well.  A number of operating systems provide a relatively robust process to update operating system components. One of the clients I work with does the following, which works for them. When there is an advance notification, the bulletin is analysed. A determination is made whether the patch needs to be applied and how quickly.  Once the patches are released they are implemented in the DEV environment straight away and systems are tested; assuming they do not break anything, the patches are implemented in UAT and other non-production environments. Systems are tested again (a standard testing process, mostly automated).  Production implementations are scheduled for a Sunday/Monday implementation. Assuming there are no issues stopping the implementation, everything is patched by Monday morning.  It takes effort, but with some automation the impact can be reduced.  There are also a number of products on the market that will assist in the patching process, simplifying life.
  
Minimise the number of users with domain or local administrative privileges.
Removing admin rights will also take a little bit of effort. Identify those that have administrative rights, domain or local. Identify what functions or roles they actually perform that require those full rights.  Take local admin rights as an example. There are some applications that really do require the user to have local administrative rights. However, there are also plenty that "need" them for the sake of convenience: rather than figuring out what access is really needed, admin rights are given.  Some applications you come across need admin rights the first time they are run, and after that no more. Your objective should be to remove all local admin rights from users and reduce domain administrative rights to as few accounts as possible in the environment.

You will need to test before implementing in production.

If it all seems overwhelming, break it down into smaller jobs. Do those devices that are critical to the organisation first and then expand the effort. Once done, you will have reduced risk to the organisation and you can start looking at implementing the remaining 31 controls. As I said at the start, it is not necessarily a standard, but how often can you say that you know of a way to reduce the risk of targeted attacks by 85% or more?

I'm interested in finding out how you may have implemented one or more of the top 4 controls, please share by commenting, or let us know via the contact form and I'll add contributions later in the week.

Mark H - Shearwater

http://www.dsd.gov.au/infosec/top35mitigationstrategies.htm

  

 

2 Comments

Published: 2012-10-30

Hurricane Sandy Update

Last night's storm cut power to millions of households across much of the northeast of the US and parts of Canada. The outages affect major population centers, including New York City.

At this point, the damage to infrastructure appears to be substantial and recovery may take days to weeks.

We have not heard of any outages of east coast services like Amazon's cloud or Google web services hosted in the area. We will try to keep you updated as we hear about any larger outages, but right now, only some individual web sites are affected. This may change if power outages persist.

If you reside in the affected area, you are probably best off staying at home. Many roads are blocked by debris and in some cases by downed power lines.

Here are some of the typical issues we see after an event like this:

- outages of communications networks as batteries and generator fuel supplies run out.
- malware using the disaster as a ruse to get people to install the malicious software ("watch this video of the flooding")
- various scams trying to take advantage of disaster victims. 

A couple of ways the internet can help in a disaster like this:

- many power companies offer web pages to report and monitor outages.
- FEMA offers updates on its "ready.gov" and "disasterassistance.gov" web sites.
- local governments offer mobile applications to keep residents informed.

Twitter can provide very fast and localized updates, but beware that Twitter is also used to spread misinformation.

A lot has been made of tweets that suggest organized looting. The posts I have seen appear to be meant as a joke if read with other tweets by the same person. In some cases the person doesn't live in the area, or the account is very new. Remember it is hard to detect irony in 140 characters.

We hope everybody in the affected area will stay safe. The storm is still ongoing and internet outages are probably the least significant issue right now.

------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter

9 Comments

Published: 2012-10-29

Cyber Security Awareness Month - Day 29 - Clear Desk: The Unacquainted Standard

A Clear Standard 

A "Clear Desk Policy" is becoming a more commonly adopted STANDARD in the work place.  The idea that a clean desk is a standard may seem a bit of stretch.  However, it is recognized in the access control domain by ISO [1], NIST [2], and ISC2 [3].  The standard name varies a bit and often includes the "Clear Screen" title and requirements too.  A Clear Desk standard is not primarily targeting the actual cleanliness of the desk, but the often seen clutter of classified information left unattended out in the open.

I have worked in environments as an infosec professional with a Clear Desk policy in effect and without.  The difference between the two environments is drastic.  An ENFORCED Clear Desk standard ultimately reduces risk and nicely facilitates efficiency and effectiveness in the work place.  An unenforced standard is equivalent to no standard and creates an endless list of items for any ambitious auditor.

A highly effective execution of a defined Clear Desk Standard/Policy should include two main components.

  • Awareness
  • Audit
     

Awareness

   Awareness is key.  This can be very simple. Make sure your employees KNOW the policy/standard EXISTS and that it is ENFORCED.  The awareness does not need to include an expensive training module. It can be delivered with mailbox flyers, emails, or simple cascaded conversations by management. Please check out the resource link that SANS provides. [4]

Audit

   Once the awareness piece is in place, regularly auditing the work place is critical.  This, too, does NOT need to be expensive.  It can consist of delegating a champion to schedule and execute a review of the workplace, a spreadsheet for tracking, and a pad of review slips to leave on each desk detailing the review.

Here's a simple review slip example that can be used.  
Keep it simple.  I created this example in MS Word in ten minutes.

Audit Checklist

When the audit slip is left, it keeps the employee/user aware that checks are in place and that the policy is enforced.  This only needs to happen quarterly to be extremely effective.  The spreadsheet can be used to track results and assist in accomplishing the compliance goals of the policy.  Publishing the quarterly numbers is also very effective.

Conclusion

The responsibility lies with the user to comply with any standard/policy.  The responsibility lies with management to enforce the standard/policy.  A lack of policy, or of policy enforcement, can increase risk, loss of reputation and loss of data.  Here is a snapshot of an assessment from a corporate environment where no policy existed.



These monitor notes are a simple example of the endless problems identified within an environment with little policy and enforcement.  A simple expectation of a clean desk can provide an immeasurable amount of decreased risk and a positive image.  The risk reduction is tangible and the positive image is intangible, both of which translate into increased efficiency and effectiveness by the staff and ultimately every line of business.

   

[1] http://csrc.nist.gov/publications/nistpubs/800-53-Rev3/sp800-53-rev3-final.pdf
[2] * Page H9 of link listed on reference [1].
      TABLE H-2:  MAPPING ISO/IEC 27001 (ANNEX A) TO NIST SP 800-53
[3] https://www.isc2.org/cissp-domains/default.aspx
[4] http://www.sans.org/security-resources/policies/desk-top.php

 

-Kevin
--
ISC Handler on Duty

2 Comments

Published: 2012-10-28

Firefox 16.0.2 Released

Just a quick note today to say thank you to one of our readers, Paul, for the note that Firefox 16.0.2 has been released.  Mozilla classifies the fixes addressed in this release as 'Critical'.  Details of the updates can be found here.

tony d0t carothers -gmail

2 Comments

Published: 2012-10-26

Cyber Security Awareness Month - Day 26 - Attackers use trusted domain to propagate Citadel Zeus variant

Here on Day 26 of Cyber Security Awareness Month, as the ISC focuses on standards, we received a very interesting email from David at Lamp Post Group, the IT provider for Access America Transport.

Per David: "Access America owns a US Trademark and the domain accessamericatransport.com. On Tuesday, October 23, a malicious user registered the domain accessamericatransport.net and immediately began sending phishing emails under the domain. Purporting to be Access America Transport, some emails were sent to several of our carriers with a link to a fake "Rate Confirmation" ("rate confirmations" is a normal term in the 3PL industry) or carrier "Claim" which in fact linked to an executable containing a virus."

There are a number of interesting elements here so let me parse them individually.
First, with an eye for security awareness specific to your domain names:
Depending on the "value" of your enterprise name space, you may want to ensure you own the related domain for all the major TLDs (com, net, org) and even consider some of the newer offerings (info, biz, us). Think about close possible squatter matches too. Using the example David sent us, phishers and attackers may buy domain names that closely match those related to your enterprise. While the attackers David reported simply acquired accessamericatransport.net, had that not been available, they might have created the likes of accessamericantransport.com or accessamericatransp0rt.com. It can definitely start to get expensive to buy the near names matches in addition to what should be all your known good domain names, but your Internet presence is your reputation. David's sharing this attack with us all is admirable transparency and an excellent lesson learned.
By the way, as we are weaving in discussion around standards, you should read the primary DNS-related RFCs. I'm always reminded about how little I know about DNS when I dig in here. Yes, DNS dig pun intended.

So, let's dig into the attack against Access America Transport:
Most importantly, they've recovered control of accessamericatransport.net and have posted warnings to their primary page.
The phishing emails sent from accessamericatransport.net included links to Zeus binaries hosted in Ukraine (UA) in Eastern Europe (shocker) at 91.20x.20y.167 (slight obfuscation to protect the innocent). The binaries, when executed, phoned home to 193.10x.1y.163 (Seychelles (SC), off the east coast of Africa) and POSTed victim-identifying data to the C&C app at 193.10x.1y.163/file.php. I spotted config files being downloaded, including the likes of candy.dll and cit_video.module. This is in keeping with the Citadel Zeus variant; there's a nice writeup from 12 MAR 2012 on this behavior here.
Targeted Zeus attacks are nothing new, but in this case the analysis does seem to indicate a ramp-up against the 3rd party logistics (3PL) industry. David indicated that at least four other 3rd party logistics companies have recently suffered similar "attention". The efforts against Access America allegedly even included a vishing attempt.

In summary, here's the BOLO (be on the lookout):
1) Protect your domain name interests with awareness of any names you lose control of that may be used against your consumers
2) 3rd party logistics (transportation) organizations, beware of a possible increase in phishing/vishing activity leading to dangerous malware

The ISC always appreciates your feedback. Readers, if you're seeing similar activity, please feel free to comment or send us samples.
 

Russ McRee | @holisticinfosec

 

3 Comments

Published: 2012-10-26

Securing the Human Special Webcast - October 30, 2012

Special Webcast: How To Create an Engaging Awareness Program People Want To Take - 3rd in Series

Tuesday, October 30, 2012 at 1:00 PM EDT (1700 UTC/GMT)

Featuring: Will Pelgrin, Chair of the MS-ISAC, President & CEO of the Center for Internet Security

Details and sign in at https://www.sans.org/webcasts/create-engaging-program-people-3nd-series-95539

--
Adam Swanger, Web Developer (GWEB, GWAPT)
Internet Storm Center https://isc.sans.edu

0 Comments

Published: 2012-10-25

Cyber Security Awareness Month - Day 25 - Pro Audio & Video Packets on the Wire

Introduction

 

In previous diaries, niche layer 2 protocols for different network areas have been covered. In keeping with that theme, this diary will cover three in particular: two protocols that are widely deployed (and may already be in your network), and one emerging protocol.

Ethernet truly is everywhere and most everything is converging, if not already converged, to an Ethernet transport model. You have data center storage [1] [2], Voice over Internet Protocol (VoIP) [3], and infrastructure management (e.g. SCADA [4]) all converging onto that RJ45 and/or fiber port. You may or may not be aware that professional grade audio converged onto Ethernet as a transport many years ago.

There are several transport protocols but the three that we will discuss today are CobraNet [5] [6], Dante [7] [8] and Audio Video Bridging (AVB) [9] [10] [11] [12].

This article will not attempt to explain the protocols in depth, but rather to increase awareness of them and of the potential risks.

Cobranet

 

 

Let's talk CobraNet. Invented in the 1990s by Cirrus Logic, it is pretty much the first audio transport over Ethernet. It is widely deployed and is a pure Open Systems Interconnection (OSI) [13] model layer 2 protocol. That immediately set off my PacketSense Danger Sense: *must know more* about how it is deployed.

Deployments may vary and range from converged to closed networks. Since it is an Ethernet protocol, it can co-exist with other Ethernet traffic. A quick tcpdump run through network captures could tell you if CobraNet is on your network.

 

tcpdump -vv -e -nn ether proto 0x8819

Dante

Dante sits at layer 3 of the OSI model [13] and is more of a VoIP-style play. It recommends and uses VoIP-style Quality of Service mechanisms. You can find a great technology overview of Dante at www.audinate.com/index.php.

Registering to get access to Dante documentation and marketing/white paper material was easy. This protocol, however, might be harder to find on the wire. It can use both multicast [14] and unicast traffic and looks to be customizable. I will admit openly that I have zero experience in deploying or working with Dante, but thought it important to include its existence.

AVB

 

Audio Video Bridging is the heart of what needed to be discussed today. This protocol is heading to a car near you :) among many other possible applications. Today AVB is mostly audio, but video is quickly ramping up. When first informed about the auto industry's interest in this protocol, I was taken by surprise, but one of the heaviest components in a car is the wire harness, and this protocol may change that. Now, beyond the scary "Networking in my CAR?????", it has other applications as well.

Both Dante and CobraNet are proprietary protocols, very well designed but not open. AVB is an open set of protocols managed by the IEEE [15], so the competition is now open. One thing that bothered me about this protocol is the lack of security controls. Having some contacts in the AVnu Alliance [10] and with the IEEE working group [15], I have brought this up with them.

There are several different protocols to snoop for, but fortunately you are unlikely to have this in your network just yet; the protocol is just ramping up. It is designed to converge with what the pro audio space calls "legacy" traffic :) i.e. email, web, etc. The AVnu team contributed time to the Wireshark project, and the latest versions of Wireshark parse this protocol.
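If you do want to look for it on the wire, here is a minimal Python sketch using Scapy. It assumes 0x22F0, the EtherType registered for IEEE 1722 (AVTP) streams; verify that value against your equipment's documentation, and keep in mind that the companion AVB control protocols use other EtherTypes.

from scapy.all import sniff

def report(pkt):
    print(pkt.summary())

# BPF filter on the assumed IEEE 1722 (AVTP) EtherType; run with capture privileges
sniff(filter="ether proto 0x22f0", prn=report, store=False)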

The AVnu Alliance has a great list of resources to better understand AVB itself at www.avnu.org/resource_library

 

Conclusion

Cyber Security Awareness tip for day 25: EVERYTHING is converging onto Ethernet… and some don't think about the risks of converged networking. Be aware of the niche protocols and services that may make it into your environment!

 

Web References

 

[1] http://tools.ietf.org/html/rfc3720

[2] http://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet

[3] http://en.wikipedia.org/wiki/Voice_over_IP

[4] http://en.wikipedia.org/wiki/SCADA

[5] http://en.wikipedia.org/wiki/CobraNet

[6] http://www.cobranet.info

[7] http://en.wikipedia.org/wiki/Dante_(networking)

[8] http://www.audinate.com/index.php?option=com_content&view=article&id=138

[9] http://en.wikipedia.org/wiki/Audio_Video_Bridging

[10] http://www.avnu.org

[11] http://www.ieee802.org/1/pages/avbridges.html

[12] http://www.wireshark.org/lists/wireshark-bugs/201005/msg00292.html

[13] http://en.wikipedia.org/wiki/OSI_model

[14] http://datatracker.ietf.org/wg/magma/charter/

[15] http://www.ieee802.org/1/pages/avbridges.html 

 

 

Richard Porter 

--- ISC Handler on Duty

Twitter: @packetalien

Email: richard at isc dot sans dot edu

0 Comments

Published: 2012-10-24

Cyber Security Awareness Month - Day 24 - A Standard for Information Security Incident Management - ISO 27035

Rob covered ISO 27005 in his 17 OCT diary, which covers information security risk management. I believe as handlers for the Internet Storm Center we'd be remiss in failing to cover an incident response standard for Cyber Security Awareness Month. ISO 27035 fits the bill perfectly.


ISO/IEC 27035:2011 provides a structured and planned approach to:
1) detect, report and assess information security incidents
2) respond to and manage information security incidents
3) detect, assess and manage information security vulnerabilities
4) continuously improve information security and incident management as a result of managing information security incidents and vulnerabilities

This International Standard cancels and replaces 2004's ISO 18044.
In our Standard Operating Procedures, I provide direct pointers to ISO 27035 as well as NIST's SP 800-61 rev 2.
Aligning your security incident management program with these two documents lends itself well to meeting the security incident management components of ISO and/or PCI compliance. You'll definitely need to validate (with evidence) that your related activities pass muster for the audits, but that is achievable with well written SOPs, documented processes, good case management, and regular drills and exercises (practice). Remember, actual incidents don't count as exercises. :-)  Conduct a drill-like activity on a quarterly basis if possible, report on it, and be sure to incorporate lessons learned.

"No battle plan survives contact with the enemy"...but you can definitely prepare.

Cheers.

Russ McRee | @holisticinfosec

2 Comments

Published: 2012-10-23

Cyber Security Awareness Month - Day 23: Character Encoding Standards - ASCII and Successors

Let me preface this by saying that the "history" part of this ended up being way more complicated than we have space to cover in this story, so I'll try to keep it brief.

Back in the day, I remember the "PC DOS Tech Ref" manual (yes, I was there in 1981 to read this one.  And yes, I still have my copy) - one of the many useful things in that manual was the by-now-very-familiar ASCII table, listing characters 0-127, which had been extended to include the next 128 characters, for an even 0-255 (00-FF in hex).  I think this extension might have been for PC-DOS actually.  I spent a lot of time using this table, as it was handy in transcoding hex and binary data streams to characters (remember, this was before we had sniffers on PC DOS platforms).

At the time, the main competition for ASCII was EBCDIC, the character encoding used by IBM System/36, System/38 and mainframe architectures.  IBM AIX and all the other Unix (this was pre-Linux) vendors used 8 bit ASCII along with everyone else.  But at least we had decent packet sniffers on mainframes, S/3x and Unix platforms!

Enter "the rest of the world" which needed to read and write in characters that exceeded the limited A-Z ASCII character set.  Unicode 1.0 was established back in 1987 (yes, really, it was that long ago!) and has seen regular updates since then.  The current version is 6.2 (released just last month, in September 2012), which supports 110,000 different characters, 100 scripts, including rendering, collation, bidirectional order (to handle right to left scripts).  All of a sudden simple text got a lot less simple !

How does this relate to security?  Because many of today's defence technologies still live in the 1981, 8-bit ASCII world.

Consider a directory traversal attack.  Say you have a website at http://somesite.domain.com/somepage
A directory traversal attack will "traverse" the directory structure to steal files outside of the web root. For instance, http://somesite.domain.com/../../../etc/passwd attempts to steal the "passwd" file from a Unix or Linux system that might host that site.

So, how do we protect from that?  The web server should prevent you from using that pesky "../../.." string, or any variant that looks like it.  But how about straight-up ASCII, where the "./" characters can be encoded in hexadecimal - as 2E 2F?  So now we need to protect against "%2E%2F", and any other variation on that.

Simple so far, but now consider Unicode, where the "." and "/" can also be represented as (again in hexadecimal) 002E 002F.  So we also need to protect against "%002E%002F".  But now factor in that there are hundreds of other alphabets and character sets, each with their own Unicode table, so we have more than a few different hexadecimal representations for the "." and "/" characters!  For instance the "/" character, which we now know as %2F or %002F, can also be represented as %C0AF (this one was missed in an early version of IIS).  Or we can mis-code it intentionally, and %ca%9v also works!

Oh, and remember that if you're on a Windows machine, the subdirectory delimiter is backwards, using the "\" character (hex 5C)?  That means we need to take everything above and double the number of checks!
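A quick way to convince yourself (and your filter) of this is to decode a few variants and see what comes out. The Python sketch below is purely illustrative, and the candidate list is nowhere near exhaustive; note that the double-encoded variant only reveals the traversal on the second decoding pass, which is exactly what a single-pass filter misses.

from urllib.parse import unquote

candidates = [
    "../",                 # plain traversal
    "%2e%2e%2f",           # "." and "/" percent-encoded once
    "..%5c",               # Windows-style backslash, encoded
    "%252e%252e%252f",     # double-encoded: the first pass only yields %2e%2e%2f
]

for c in candidates:
    once = unquote(c)
    print(repr(c), "->", repr(once), "->", repr(unquote(once)))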

Now add every other web attack method (directory traversal is just one of the simplest), and you can see how character encoding can complicate matters tremendously in defending web (and other) applications!

One of the character encoding attacks that we're all expecting to become more common is the use of Unicode in spear-phishing attacks.  We covered this a while back in a diary: http://isc.sans.edu/diary/non-latin+TLD+to+be+issued/8755
Consider your Google searches - it's now easy to redirect you to a site where the "o" characters are actually a different character entirely, from another code page.  It's unlikely that most people would detect an attack like this, and most of our technical controls for things like this are not prepared for non-Latin domain names either.
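As a small illustration (the Cyrillic letter below is just one hypothetical look-alike; attackers have plenty of others to pick from):

latin = "google.com"
spoof = "g\u043eogle.com"     # U+043E, CYRILLIC SMALL LETTER O, in place of the first "o"
print(latin == spoof)          # False, although the two strings render almost identically
print(spoof.encode("idna"))    # the punycode ("xn--") form that actually goes into DNS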
 

What got me started on this you ask?  We received a note from one of our readers (thanks again Larry) - he had captured a cross site scripting attack against his web application (an unsuccessful attack, thankfully).  The neat thing for me was the character encoding used to obfuscate the attacking script - the attack as captured is shown (partially) here:

> GET
> /js/i%20=%200;%20i%20%3C%20parts.length;%20i++)%20%7B%20%20%20%20%20%2
> 0%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20
> %20%20%20%20%20%20%20if%20(parts[i].substr(0,%201)%20==%20'q')%20%7B%2
> 0%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20
> %20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20return%20unescape(part
> s[i].split('=')[1].replace(//+/g,%20'%20'));%20%20%20%20%20%20%20%20%2
> 0%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20
> %20%20%20%20%7D%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%
> 20%20%20%20%20%20%20%20%20%20%20%20%20%20%7D%20%20%20%20%20%20%20%20%2
> 0%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%7D%20%20%20
> %20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%
> 20%20return%20'';%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%2
> 0%20%20%20%20%20%20%7D%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20
> %20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%
> 20goog
> le.load('search',%20'1',%20%7B%20language:%20'en',%20style:%20google.l
> oader.themes.GREENSKY%20%7D);%20%20%20%20%20%20%20%20%20%20%20%20%20%2
> 0%20%20%20%20%20%20%20%20%20%20google.setOnLoadCallback(function()%20%
> 7B%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%2
> 0%20var%20customSearchControl%20=%20new%20google.search.CustomSearchCo
> ntrol('004498978135172075721:ll4byhgudkg');%20%20%20%20%20%20%20%20%20
> %20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20customSearchC
> ontrol.setResultSetSize(google.search.Search.FILTERED_CSE_RESULTSET);%
> 20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%2
> 0%20%20%20%20customSearchControl.draw('cse');%20%20%20%20%20%20%20%20%
> 20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20customSearc
> hControl.execute(getQuery());%20%20%20%20%20%20%20%20%20%20%20%20%20%2
> 0%20%20%20%20%20%20%20%20%20%20%7D,%20true);%20%20%20%20%20%20%20%20%2
> 0%20%20%20%20%20%20%20%20%20%20%20%3C/script%3E%20%20%20%20%20%20%20%2
> 0%20%20%2 0%20%20%20%20%20%20%20%20%20%3Clink%20rel= HTTP/1.1


This looks complicated, but it's essentially an obfuscation attack based on simple 8 bit ASCII. 
%20 is ASCII 32 (decimal), which is a space character.
%7D is }
%7B is {
%3C is <
%3E is >
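If you want to reproduce the substitutions yourself, Python's standard library will do it in a couple of lines; the fragment below is lifted straight from the capture above.

from urllib.parse import unquote

fragment = "if%20(parts[i].substr(0,%201)%20==%20'q')%20%7B"
print(unquote(fragment))      # prints: if (parts[i].substr(0, 1) == 'q') {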
 

After you undo all the encoding and "pretty it up" by putting in appropriate carriage returns and indents, the code looks like:

i = 0;
i < parts.length;
i++) {
    if (parts[i].substr(0, 1) == 'q') {
        return unescape(parts[i].split('=')[1].replace( //+/g, ' '));}}}
        return '';
        }
        google.load('search', '1', {
            language: 'en',
            style: google.loader.themes.GREENSKY
        }); google.setOnLoadCallback(function () {
            var customSearchControl = new google.search.CustomSearchControl('004498978135172075721:ll4byhgudkg');
            customSearchControl.setResultSetSize(google.search.Search.FILTERED_CSE_RESULTSET);
            customSearchControl.draw('cse');
            customSearchControl.execute(getQuery());
        }, true); < /script>

This looks fairly straightforward, but that weird string in the middle '004498978135172075721:ll4byhgudkg' had me stumped.  Bojan Zdrnja (another Handler here at the ISC) clued me in - it's a stored search string on Google.  So what this attack script does, once successful, is pull the "real" attack down from a stored site, indexed and called indirectly courtesy of Google.  This real attack might often be a command and control channel back to a botnet or other controller host, but it could be just about anything really.

Anyway, back to character encoding - you see that the majority of this attack was encoded / obfuscated in 8 bit ASCII - it's not unicode or anything complex at all.  The IPS in front of the website had no trouble dealing with this, it was blocked and sent to our reader as an alert, and he passed it on to us. 

But remember what I mentioned about many of our defences still living in the 1980's world of 8-bit ASCII?  While the attack *looks* complicated to the human eye, it's 10-years-ago complicated, i.e. it looks complex, but if you've got any defences at all, attacks of this nature are likely to be blocked handily.  Throwing in Unicode, especially from one of the less used tables, and doctoring it up with some mis-coded characters might have made this simple XSS attack more likely to avoid detection by a signature-based IPS.

The proper method for an IPS (or "Web Application Firewall", WAF) to deal with this is to have it decode the attack the same way the target host will (this is true of web attacks as well as network based attacks like packet fragmentation methods), rather than relying on a signature database.  If you have multiple hosts, the IPS/WAF may need to decode the attack multiple times to "get it right" for each target.  The tough part is that the IPS or WAF has to decode *everything* before it knows what traffic is good and what is an attack, which is why IPSs these days usually have lots of CPU and memory!

We covered a lot of ground in today's story; I hope that the example made things clearer by showing a real attack you might see on your network today.  If you have any comments, or perhaps a neat attack you may have seen lately that uses character encoding, please use our comment form!

So when you're thinking about attack and defense on the net - until you've had a chance to look at the character encoding, don't believe everything you read!

 

===============
Rob VandenBrink
Metafore

2 Comments

Published: 2012-10-21

Potential Phish for Regular Webmail Accounts

I was looking through my spam folder today and saw an interesting phish.  The phishing email is looking for email account information.  Nothing new about that, except this one seemed to have a broad target range.  Normally, these types of phishes are sent to .edu addresses, not those outside of academia.  From the email headers, this one was sent to the Handlers email, which is a .org.  A non-technical user, like many of my relatives, would probably respond to this.  I could see this being successful against regular webmail users of Gmail, Hotmail, etc., especially if the verbiage was changed slightly.  It could also be targeting those who may be enrolled in online universities.  I was wondering if anyone else has seen this type of phish toward their non-.edu webmail accounts.  I have included the email below:

From: University Webmaster <university.m@usa.com>
Date: Fri, Oct 19, 2012 at 9:34 PM
Subject: Webmail Account Owner
To:

Dear Webmail Account Owner,

This message is  from the University Webmail Messaging Center to all email account owners.

We are currently carrying out scheduled maintenance,upgrade of our web mail service and we are changing our mail host server,as a result your original password will be reset.

We are sorry for any inconvenience caused.

To complete your webmail email account upgrade, you must reply to this email immediately and provide the information requested below.

*********************************************************************************
CONFIRM YOUR EMAIL IDENTITY NOW
E-mail Address:
User Name/ID:
Password:
Re-type Password:

************************************************************************************
Failure to do this will immediately render your email address deactivated from the University Webmail.
************************************************************************************

This E-mail is confidential and privileged. If you are not the intended Recipient please accept our apologies; Please do not Disclose, Copy or Distribute Information in this E-mail or take any action in Reliance on its contents: to do so is strictly prohibited and may be Unlawful.

Please inform us that this Message has gone astray before deleting it.

Thank you for your Co-operation.

Copyright ©2011 University Webmaster. All Rights Reserved

7 Comments

Published: 2012-10-21

Cyber Security Awareness Month - Day 22: Connectors

(We took a break from our "standard fare" this weekend and didn't publish any standards related diaries. Days 20 and 21 will be skipped as a result.)

Over the years, I have collected quite a number of "standard" connectors/cables and interfaces. This is certainly an area where standards seem to proliferate quickly. To stick with our theme of security and security awareness, I would like to focus on a couple of popular standards and in particular outline the security aspects of each.

First of all, pretty much all peripherals connected to a system require drivers to interact with the device. These device drivers are frequently part of the kernel, and a vulnerability in a device driver can lead to a system compromise. I don't think the full potential of this class of vulnerabilities has been realized yet, but there have certainly been some notable exploits that were based on these vulnerabilities. Even simple devices like VGA monitors do send some data to the system, and could potentially be used to exploit vulnerabilities (I am not aware of a VGA vulnerability).

USB

 The "Universal Serial Bus" is by now pretty old and you can't buy a laptop or desktop without a USB port. In the past, the main risk of USB has been the ability to automatically launch software as the USB memory stick is plugged into the system. This vulnerability has been mostly eliminated in modern operating system configurations. However, there are still plenty of possibly issues with USB:

  • USB is not just the "USB memory stick". A memory-stick-like device may also emulate a keyboard. For example, the YubiKey is an interesting security application of a simulated keyboard. But this can also be abused: a USB keyboard may issue commands, just like a user sitting in front of the system. "Teensy" is a very capable USB development board that can be configured to emulate a keyboard [1]. A device based on Teensy could be added to any existing USB device via a simple USB hub. USB devices do not use any meaningful authentication to the host, so there is little that can be done to limit access to "good" USB devices.
  • Some recent work points to possible file system driver vulnerabilities that can be exploited by mounting a specific file system. This would happen even if auto-execute is disabled: the system first needs to mount the file system to provide access to the user.
  • There have been plenty of social engineering based exploits showing that people are just about as likely to click on files on USB sticks as they are to open attachments in e-mail.

Firewire (IEEE 1394)

  A lot of attention has been spent on USB. Firewire, on the other hand, provides an entirely different level of access to the system. Firewire extends the PCI bus, and allows access to the system in ways similar to PCI plugin boards. An attacker with access to the Firewire bus can read and manipulate memory and access devices (like hard drives) connected to the bus.

  • Reading memory: This has been used in forensics to retrieve system memory without having to install additional tools. Of course, an attacker would be able to retrieve encryption keys and the like that are stored in memory.
  • Manipulating memory: Tools exist to "patch" system processes in memory. For example, a proof of concept tool allows bypassing the Windows XP login dialog by patching the password comparison function in memory.
  • Low level system access: Even low level elements, like BIOS passwords, have been read via firewire.

(Sorry for the lack of links/URLs for this section, but the main source of these tools, http://www.storm.net.nz/projects/16 , hasn't been up in a while.)

Thunderbolt (Light Peak)

  This is a relatively new technology, initially introduced by Apple and Intel. Currently, the first non-Apple laptops are starting to appear with Thunderbolt ports. Thunderbolt is pretty much a further development of the Firewire concept. It allows direct access to the newer PCIe bus, and includes a video bus via DisplayPort.  At this point, not a lot of work has been done on exploiting Thunderbolt, but more or less all exploits that worked against Firewire should in principle work with Thunderbolt. The bus is not authenticated, and a device like a monitor may disguise a second internal device that will then read and manipulate data on the system via the Thunderbolt interface. There is very little visibility into the data exchanged via Thunderbolt (we need something like tcpdump for these ports).

[1] http://www.pjrc.com/teensy/

 

------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter

3 Comments

Published: 2012-10-19

Cyber Security Awareness Month - Day 19: Standard log formats and CEE.

Back when I started DShield.org, one of the challenges was dealing with variations in log formats. 10+ years later, this problem hasn't really changed, even though there are some promising solutions (which aren't that different from 10+ years ago).

 Firewall logs are a pretty simple example. The basic information captured is pretty similar across different firewalls: packet header data. Some log formats are more verbose than others, but the idea is the same and it is not too hard to come up with a standard to express these logs. For DShield, we used a smallest common denominator approach. It wasn't our goal to collect all the details offered by different firewalls. For an enterprise log management system, however, you may need to preserve this detail, and the simple tab-delimited format we came up with for DShield wouldn't be extensible enough.

One of the logging standards that is gaining some steam is "CEE", or "Common Event Expression" [1]. To be successful, a logging standard has to address a number of different problems:

  • Log format: This is the basic "syntax" used to express logs. This problem is actually the easier one to solve, and the current approach is to use XML to express the logs. XML isn't exactly efficient, but it is extendable and there is a rich set of libraries and database technologies to create and parse XML. I see it as the "ugly default solution". A more compact binary format may be preferred, but would have a much higher cost to get started.
  • Taxonomy: This is the hard problem. The "magic strings" we assign different events. For firewall logs, this is pretty easy usually. But think about antivirus! You could log the MD5 hash of the sample that was detected as malicious. But this wouldn't be as meaningful as knowing what malware family this sample belongs to. But there is no agreement as to what constitutes a "malware family" or what to call different families. If you have to correlate logs from different vendors, you will need to translate the name each vendor assigns to a particular piece of malware.
  • Vendor Acceptance: There are a lot of great proposals in this space that solve the first two problems. But unless you want to implement it yourself, you need a vendor to support a particular solution. In order for a standard to catch on, there has to be customer demand first. Secondly, the solution has to be economical to implement. It helps if the standard is open and not associated with licensing fees. But first of all, the standard needs to be easy to implement. 

So how does CEE solve these issues?

Log Format

CEE supports two different formats: XML and JSON. XML is the "primary" standard allowing for the most flexibility, but JSON, due to its simple structure, is easier to parse and sufficient in many applications. It is also not terribly hard to convert JSON to XML.

Taxonomy/Vocabulary

CEE doesn't really solve all of this problem, but it starts by defining common labels and data types (like "src.ipv4" for the IPv4 address of a source). In part, CEE refers to other standards like CVE to come up with a vocabulary to use to identify events.
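As an illustration only, a firewall "deny" event expressed in JSON might look like the Python sketch below. The field names follow the CEE-style dotted labels mentioned above, but the exact set of required fields depends on the CEE profile you adopt, so treat the layout as an assumption rather than the specification.

import json

event = {
    "time": "2012-10-19T14:02:11Z",      # example timestamp
    "action": "deny",
    "src.ipv4": "192.0.2.10",
    "src.port": 51512,
    "dst.ipv4": "198.51.100.7",
    "dst.port": 22,
    "proto": "tcp",
}
print(json.dumps(event, sort_keys=True))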

Log Transport

I didn't list this problem above, but it is certainly important to consider how logs are transported. In the Unix world, various versions of syslog have become the de-facto standard for log transport. But once you leave Unix based systems, syslog support is no longer a given. CEE addresses various issues like support for compression and protecting log integrity (which plain old syslog doesn't do well at all).
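For comparison, here is a minimal sketch of pushing such an event over classic UDP syslog with Python's standard library; "loghost" is a placeholder for your collector, and note that this path gives you none of the compression or integrity protection just mentioned.

import json
import logging
import logging.handlers

logger = logging.getLogger("cee-demo")
logger.setLevel(logging.INFO)
# plain UDP syslog on port 514: simple, ubiquitous, and unauthenticated
logger.addHandler(logging.handlers.SysLogHandler(address=("loghost", 514)))
logger.info(json.dumps({"action": "deny", "src.ipv4": "192.0.2.10"}))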

I do think CEE is certainly a standard to watch out for. Right now, the standard is labeled as "beta". The tricky part will be vendor support. The CEE board does include representatives from a number of important vendors, but I don't see a lot (any?) log management vendors on the list. Of course CEE would help the most if devices generating logs would support it.

[1] http://cee.mitre.org

/**

Learn more about log management during my class at CDI in Washington DC (Dec 15/16)

*/

------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter

0 Comments

Published: 2012-10-18

Cyber Security Awareness Month - Day 18 - Vendor Standards: The vSphere Hardening Guide

Many vendors have security hardening guides - step-by-step guides to increasing the security posture of one product or another. We alluded to the Cisco guides earlier this month (Day 11), Microsoft also makes a decent set of hardening guides for Windows server and workstation products, as do most Linux distros - you'll find that most vendors have documents of this type.

VMware's vSphere hardening guide is one I use frequently.  It's seen several iterations over the years - the versions considered current are all stored at:  http://www.vmware.com/support/support-resources/hardening-guides.html

The initial guide for ESX 3.x (back in the day) was mostly CLI based, with commands executed mostly within the Linux shell on the individual ESX hosts.  Things have changed quite a bit since then (and no, that wasn't a reference to the amount of grey in my hair!).  The current version (5.0) covers the entire vSphere environment; it discusses settings for the ESXi hosts, the virtual machine guests, the virtual network (and physical network), the vCenter management platform and vCenter Update Manager.

From both an auditor and a system administrator perspective, there are a number of "oh so cool" factors to this standard that make it a great example for vendor security documentation:

There is a clear description of why you might make any specific configuration change.  The security exposure is clearly explained for each setting discussed, along with the severity.


Not every setting is a recommended setting.  They are very clear that some security changes are recommended in all cases, while others might only be recommended for DMZ settings, or some other exceptional circumstances.  For each setting, they discuss in what situation that change would be deemed necessary.

Some security changes will break functionality that you might be expecting; for instance, a change might disable something in vCenter, or it might break vCLI (a remote command line API) functions.  If a setting affects functionality, it is clearly spelled out.

There are several ways to get the job done.  For each benchmark setting, several methods for effecting that change are discussed.  Often there'll be a setting to tweak within vCenter, but whenever possible they'll also discuss how to accomplish the same task from a remote command line, either from PowerShell (using the PowerCLI API) or from a remote Windows or Linux command line (using the vCLI API command set).  For instance, for something as simple as setting NTP (Network Time Protocol), they cover off:

  • How to set NTP services up for the ESXi host in the vSphere Client application
  • What config file is updated (/etc/ntp.conf)
  • From the vCLI (Virtual Command Line), how to audit this setting using vicfg-ntp.  Note that all the vCLI commands are run from a remote host (Linux or Windows), so this is a great audit tool!
  • How to update this setting using the vCLI, again, using vicfg-ntp
  • How to list the NTP settings from all hosts in an environment using PowerCLI, VMware's PowerShell API.  Again, this is remotely run from a Windows host with PowerShell and the VMware PowerCLI installed.
  • How to update all hosts in an environment using PowerCLI
  • And finally, an external link for more information

Audit is not neglected in this document.  Not only do they tell you how to make each change, they show you how to audit that change, to get the current value of the affected settings.  Again, whenever possible, they discuss how to do the audit steps from as many toolsets as possible.  You'll find that if you are an auditor looking at 10 servers, or a consultant working with a different client each week, the CLI approaches have a lot of appeal.  Not only are they much quicker, but they are less prone to error, and you don't have to rekey anything.

So, if you are an auditor, or a consultant who sees many clients, or a System Administrator who just wants to keep tabs on their environment,  from this guide you can easily and simply create your own audit scripts.  With these scripts in hand you are able to get accurate, repeatable security assessments (based on a published standard) of a vSphere environment. 

This means that you are delivering exactly the same security assessment for each client's environment.   However, while the assessment is the same each time, the recommendations will not be - remember that there is a severity value for each assessed value, and also a discussion of in which situation each setting is recommended - the recommendations will vary quite a bit from one client to the next.   Even if you are an auditor within a single organization, you'll find that results will vary from one audit to the next.  Remember that this is an evolving standard - recommended settings change from one version of the guide to the next.  You'll also find that when you combine security assessments with risk assessments (this is almost always desired), the risk equation will change depending on how the impacts are phrased, what has happened in the organization recently, or who is involved in the discussion.

Security is unique in the fact that while the questions will be consistent over time and between organizations, the answers will change.  You'll see them vary over time, across versions of a product, in different deployment situations and between organizations.  I think this benchmark is a good example of a standard that is well equipped to handle this shifting landscape.

(You'll find the vSphere Hardening Guide covered cover-to-cover in SANS SEC579)

If you have any stories on this article, or on this or other vendor security guides, please share - use our comment form.

===============
Rob VandenBrink
Metafore

1 Comments

Published: 2012-10-17

Cyber Security Awareness Month - Day 17 - A Standard for Risk Management - ISO 27005

A word that I'm hearing a lot these days from clients is "Risk".  And yes, it has a capital R. Every time.

Folks tend to think of any risk as unacceptable to the business.  Every change control form nowadays has Risk Assessment and Risk Remediation sections, and any issue that crops up that wasn't anticipated now becomes a process failure that needs to be addressed.

Don't get me wrong, I'm all for some rigor in Risk Assessment, but every risk can't be an 11 on a scale of 1 to 10.  Enter "ISO/IEC 27005:2011 - Information technology - Security techniques - Information security risk management".
ISO 27005 allows system administrators (change requestors) and managers (change approvers) to use a common approach, the same language and come to an agreement on risk.  Most importantly, this helps parties like this come to an agreement quickly – if you’ve ever had a change approver who has trouble saying either “yes” or “no”, you’ll understand why this is so important.

This standard starts by defining a framework and a flowchart to manage risk (below).  Like all good methodologies, there are decision points and iteration, so you'll need to ensure that you identify decision makers who will actually decide, or you'll never escape!
 

Once “inside” the flowchart, I found that I was impressed with the emphasis on business and organizational language – this standard is written to get buy-in from management (this is a good thing).
 
They’ve also got the obligatory section on qualitative and quantitative risk, but more importantly, the appendices give some clear direction on how to use both approaches.  Even more useful (in my books anyway), they have examples of taking a qualitative assessment and quantifying it, allowing you to apply numeric values to “fuzzy” situations.  This makes the job of the System Administrator easier – when proposing a change, you can use this approach to assign actual values to things.
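As a toy illustration of that qualitative-to-quantitative step, the sketch below maps the usual low/medium/high answers onto numbers and multiplies them out.  The scales and thresholds are placeholders you would agree on with your own decision makers; nothing here is prescribed by the standard.

    # Toy qualitative-to-quantitative risk scoring, loosely in the spirit of the
    # ISO 27005 appendix examples. Scales and thresholds are placeholders -
    # agree on your own mapping with the change approvers before relying on it.
    LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
    IMPACT     = {"low": 1, "medium": 3, "high": 5}

    def risk_score(likelihood, impact):
        """Return a numeric score and a coarse treatment hint."""
        score = LIKELIHOOD[likelihood] * IMPACT[impact]
        if score >= 10:
            action = "treat or escalate before approving"
        elif score >= 4:
            action = "treat or explicitly accept, with a named owner"
        else:
            action = "accept and document"
        return score, action

    if __name__ == "__main__":
        changes = {
            "firewall rule cleanup": ("low", "medium"),
            "core switch upgrade": ("medium", "high"),
        }
        for name, (lik, imp) in changes.items():
            score, action = risk_score(lik, imp)
            print("%-25s score=%2d -> %s" % (name, score, action))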

The Risk Treatment section ensures that a final decision is made.  Too often we see managers “decide not to decide” – following this standard ensures that everyone understands that this is not an option - there are a few choices to make, and yes, assuming the risk is a valid choice.  When all the ducks are lined up and it’s decision time, then a decision there will be!


I can’t cover every aspect of a 68 page standard in 1 page, but suffice to say that this one is well worth the purchase price – yes, it’s an ISO standard so you’ll have to buy it to use it. 

If you've got a "risk management" war story, or a comment on this post, please use our comment form, we'd love to hear from you!

In SANS SEC579, we use the ISO 27005 methodology and apply it to the ENISA Cloud Risk document (see references below)  to contrast the risks of Public and Private Cloud deployments to your organization.

 

References:

(2011). ISO/IEC 27005 - Information technology - Security techniques - Information security risk management (ISO/IEC 27005:2011). Geneva, Switzerland: International Standards Organization

(2009). Cloud Computing: Benefits, risks and recommendations for information security.  Crete, Greece: ENISA - European Network and Information Security Agency.

 

===============
Rob VandenBrink
Metafore

1 Comments

Published: 2012-10-17

Oracle Critical Patch Update October

Oracle has just released their critical patch update http://www.oracle.com/technetwork/topics/security/cpuoct2012-1515893.html

Quite a number of products are being patched. Also, for those of you subject to PCI DSS, there are a significant number of patches addressing issues with a CVSS score of 4 or higher, which must be patched under the standard.
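If you want to turn the advisory into a "must patch for PCI" list, a few lines of scripting will do it; the entries below are made up for illustration - populate them from the risk matrices in the Oracle bulletin.

    # Quick triage helper: under PCI DSS, anything with a CVSS base score of 4.0
    # or higher has to be patched. The entries are illustrative placeholders only.
    advisory = [
        {"cve": "CVE-2012-XXXX", "product": "Database Server", "cvss": 6.5},
        {"cve": "CVE-2012-YYYY", "product": "Fusion Middleware", "cvss": 3.3},
    ]

    must_patch = [e for e in advisory if e["cvss"] >= 4.0]
    for entry in sorted(must_patch, key=lambda e: e["cvss"], reverse=True):
        print("%-16s %-20s CVSS %.1f" % (entry["cve"], entry["product"], entry["cvss"]))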

They have also released a critical patch update for Java http://www.oracle.com/technetwork/topics/security/javacpuoct2012-1515924.html 

The info in the Oracle bulletins is comprehensive and should allow you to identify what needs to be done fairly easily.  Both bulletins have the following wording in the workaround section: "Due to the threat posed by a successful attack, Oracle strongly recommends that customers apply CPU fixes as soon as possible." For most of us this is not new (at least not on the Java side), but it may be a strong argument if you get pushback on patching.

Happy patching, and as always, test before you implement.

Mark H - shearwater
 

2 Comments

Published: 2012-10-16

Cyber Security Awareness Month - Day 16: W3C and HTML

The W3C (World Wide Web Consortium, w3.org) is responsible for defining standards around HTML. One of the most prominent current developments is HTML 5. 

HTML 5 is not just about the HTML "mark-up" language. The standard includes extensive extensions to Javascript APIs around geolocation, storage, media access and other features.

In addition, HTML is defined by the "WHATWG" (Web Hypertext Application Technology Working Group), an organization not associated with W3C. The WHATWG was created by Apple, Opera and Mozilla after the companies felt that the W3C's HTML Working Group (HTMLWG) didn't move fast enough.

These days, the HTMLWG and the WHATWG are working together, but they are taking a different approach to the future development of HTML. The WHATWG is defining HTML as a constantly developing, "living" standard. The HTMLWG is taking various snapshots of the WHATWG standard, and defining them as an HTML version.

Here are some of the more recent notable additions to HTML, which are usually kept under the umbrella of "HTML 5":

- Access to hardware sensors: Most browsers already support GPS geolocation, or access to other geolocation APIs of the hosts (e.g. via WiFi). But sensors like accelerometers commonly found in mobile devices are supported as well. Recently, support for the access to cameras and microphones emerged but support is still spotty.

- Extended storage options: Traditionally, web applications had to store data in cookies. Cookies are rather limited in size, and wouldn't scale to a larger size as they are sent with each request. With HTML 5, web applications can store up to 20 MB on the browser, and if that's not enough, they can ask the user for permission to store more data.

- Offline applications: An application may provide a manifest listing all files (HTML, Javascript) that are required to run an application offline

- Video/Audio codecs: the <video> and <audio> tags allow for the playback of video and audio without the help of plugins like Flash or Java. However, not all browsers support the same codecs.

- Client input validation: Many web applications use javascript to validate user input on the client. In HTML 5, this can be done within the "input" tag by specifying a regular expression. Just like the javascript client validation, this should never be used for security purposes, but can make an application more usable.

There are many more features that are part of the most recent HTML specs, and browsers are starting to implement them. Which features you will find depends on the browser you are using.

But with great power comes great responsibility. All these features need to be implemented correctly in order to avoid security vulnerabilities in the browser. The browser is also very exposed, constantly downloading and executing code from various sites. The fundamental problem in HTML is that data (HTML) and code (Javascript) aren't well separated from each other. This missing separation opens the door to issues like XSS.

There is also no good way to "sign" a piece of javascript like you would sign a desktop application. The best you can do is to protect the transit via SSL.

------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter

1 Comments

Published: 2012-10-16

Cyber Security Awareness Month - Day 15 - Standards Body Soup (pt 2), Same Soup Different Cook

Introduction 

 

There are several new protocols that are on their way to being adopted in some form or another. In the previous article we covered how different standards bodies can cover and sometimes govern similar protocols and standards. Here we will discuss two emerging data center oriented standards and how they compete.
 

TRILL

 
First, I would like to draw your attention to a protocol called TRILL, or TRansparent Interconnection of Lots of Links. [1] There are several good sources for a technical overview, so I will be brief. In short, TRILL is a method for Routing Bridges, or RBridges [4], to exchange link state, and it does so with another protocol called IS-IS [2], or Intermediate System to Intermediate System. 
 
Before we get lost in our first example of too many cooks making the soup, let's be clear: TRILL uses IS-IS, and both are published by the IETF as RFC 6327 and RFC 1142. RFC 1142 is a republication of an ISO standards body routing protocol publication. So RFC 6327 uses a standard that was actually published by the ISO body but repub… You see where I am going.
 

SUPER OVER Simplification (TRILL)

 

TRILL is designed to run at Layer 2 in the OSI model and allows each TRILL switch to exchange link state information. Enough information is shared between TRILL switches that they can make route decisions for optimized pathing. Here is a great write-up http://en.wikipedia.org/wiki/TRILL_(computing) on Wikipedia. So basically: build a tree of L2 states, trade them, and help them to talk, REALLY fast… Well, that's the goal anyway.
 
Why are we talking about this new data center protocol by the IETF and, through republication, the ISO? 
 

SPB

 

Enter protocol number two, brought to you by the good ole folks at the IEEE. If we remember the breakdown from my last diary, we know they govern things like 802.1 [5] and 802.11 [6]. Why is this relevant? They are behind contender number two for data center bridging protocols: SPB, or Shortest Path Bridging. [7] [8] 
 

SUPER OVER Simplification (SPB)

 

Use IS-IS (<------ seeing a trend?) to exchange tree information and compute the shortest path for packets. There is, of course, a lot more to it than the above, but hopefully my point is made. Another great write-up: http://en.wikipedia.org/wiki/IEEE_802.1aq
 

Conclusion

 
So, to recap, the IETF and the IEEE are working on similar protocols to accomplish similar goals. We will see who "markets" best to gain acceptance, but it is important to understand how many standards bodies have influence on the widgets and tools we implement. With SDN [9], or Software Defined Networking, being the new "Cloud" word, it is good to understand who is shaping the SDN protocols. We can now start to see that many standards bodies go into making the "Internet" go....
 
And most of all, awareness of this is good, as we are the ones that have to secure it. 
 
IETF - TRILL
IEEE - SPB
 
[1] http://tools.ietf.org/html/rfc6327
[2] http://tools.ietf.org/html/rfc1142
[3] http://tools.ietf.org/pdf/rfc1142.pdf <-- PDF Warning
[4] http://tools.ietf.org/html/rfc6325
[5] http://www.ieee802.org/1/
[6] http://www.ieee802.org/11/
[7] http://en.wikipedia.org/wiki/IEEE_802.1aq
[8] http://www.ieee802.org/1/pages/802.1aq.html
[9] http://www.technologyreview.com/article/412194/tr10-software-defined-networking/
 
 

 

0 Comments

Published: 2012-10-14

Cyber Security Awareness Month - Day 14 - Poor Man's File Analysis System - Part 1

 

Update: In an attempt to get the link  for the first script, I mistakenly put the link for another script. Fixed now. Thanks Michael for the "oops" :)

Ok ok, the "System" in the title may be a bit too much for what this diary will show, but it will give you a nice idea of how to start building your own analysis system using open source and free tools.

For the first part of this Diary we will focus on PE files, using three different tools for Static Analysis:

1) Pescanner.py - http://code.google.com/p/malwarecookbook/source/browse/trunk/3/8/pescanner.py

2) Adobe Malware Classifier - http://blogs.adobe.com/asset/tag/malware-classification

3) sigcheck.exe (via Wine) - http://technet.microsoft.com/en-us/sysinternals/bb897441

The first tool is from the great book Malware Analyst's Cookbook, and the authors made all the code available via Google Code. It is a collection of Python scripts used throughout the chapters; the one I will show here is called pescanner.py.

This pescanner.py script will give you several pieces of information that will help you with your static analysis. 

The following example is the output for a known malware sample:

 

 

Meta-data

============================================================

File:    wire-report.pdf.exe

Size:    190464 bytes

MD5:     0a0b73f2652f242e255ac9c1a7724dda

SHA1:    5ad43440eaf1c30b9e320a0ea06754ad67e9d66f

Date:    0x29EB59F0 [Tue Apr 14 22:17:20 1992 UTC] [SUSPICIOUS]

EP:      0x402a00 (.text)

 

Resource entries

============================================================

Name               RVA          Size         Type

------------------------------------------------------------

RT_VERSION         0x3b058      0x3ec

 

Suspicious IAT alerts

============================================================

CreateProcessW

CreateProcessA

 

Sections

============================================================

Name       VirtAddr     VirtSize     RawSize      Entropy

------------------------------------------------------------

.text      0x1000       0x7000       0x7000       2.397724

.rdata     0x8000       0x2e000      0x23800      7.231950    [SUSPICIOUS]

.data      0x36000      0x3000       0x2600       2.536738

.ndata     0x39000      0x1000       0x800        3.405680

.ndata     0x3a000      0x1000       0x800        3.405680

.rsrc      0x3b000      0x444        0x600        3.980035

 

Version info

============================================================

LegalCopyright: Copyright (C) 2000-2010 TightVNC Group

InternalName: vncviewer

FileVersion: 1.5.2.0

CompanyName: TightVNC Group

PrivateBuild:

LegalTrademarks:

Comments: Based on VNC by AT&T Research Labs Cambridge, RealVNC Ltd.

ProductName: TightVNC Win32 Viewer

SpecialBuild:

ProductVersion: 1.5.2.0

FileDescription: vncviewer

OriginalFilename: vncviewer.exe

Translation: 0x0409 0x04b0

 ---

 

This report shows a few things. First, a weird creation date of 1992. Second, high entropy in the second PE section of this file, suggesting that it may be packed. Third, while the file name is wire-report.pdf.exe, the version section shows information as if it were a VNC application...
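If you want to reproduce a couple of those checks yourself rather than rely on pescanner.py, the pefile library gets you most of the way there. A minimal sketch (the entropy threshold is the usual rule of thumb, not an absolute):

    # Minimal static triage with the pefile library: compile timestamp and
    # per-section entropy, two of the indicators flagged in the report above.
    import sys
    import time
    import pefile

    def triage(path):
        pe = pefile.PE(path)
        ts = pe.FILE_HEADER.TimeDateStamp
        print("Compile time: %s UTC" % time.asctime(time.gmtime(ts)))
        for section in pe.sections:
            name = section.Name.rstrip(b"\x00").decode(errors="replace")
            entropy = section.get_entropy()
            flag = "[SUSPICIOUS]" if entropy > 7.0 else ""
            print("%-8s entropy=%.6f %s" % (name, entropy, flag))

    if __name__ == "__main__":
        triage(sys.argv[1])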

This script also allows you to integrate with YARA if you want, making it even more powerful.
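The YARA hookup is only a few lines on its own, assuming the yara-python bindings and a rule file of your own (both names below are placeholders):

    # Sketch of the YARA integration using the yara-python bindings.
    import yara

    rules = yara.compile(filepath="rules.yar")       # your own rule file
    for match in rules.match("wire-report.pdf.exe"):
        print("YARA hit: %s" % match.rule)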

Also, if you want to check the packer, you may want to consider the Python script that fellow handler Jim Clausing created a few years ago: http://handlers.sans.org/jclausing/packerid.py

--

The second tool was created by a former co-worker, now working at Adobe. He created another Python script that checks different characteristics of the PE file and returns one of three different results: 1, 0, or UNKNOWN.

 

According to the Adobe page, the "Malware Classifier uses machine learning algorithms to classify Win32 binaries – EXEs and DLLs – into three classes: 0 for “clean,” 1 for “malicious,” or “UNKNOWN.” "

 

Example:

pedros-MacBook-Pro:samples ppbuen0$ python AdobeMalwareClassifier.py -f wire-report.pdf.exe

1

Which means Malicious.

---

The third tool is from Sysinternals and is called Sigcheck. It helps you identify whether a file is signed or not, which helps with your assessment.

If you want to run it on Linux together with the other Python tools, you may want to consider Wine :) .
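If you script that step, it is just a subprocess call wrapped around Wine, so its output can be collected alongside the Python tools; the paths below are placeholders.

    # Sketch of wrapping sigcheck.exe under Wine from Python.
    import subprocess

    def sigcheck(sample, sigcheck_exe="sigcheck.exe"):
        result = subprocess.run(["wine", sigcheck_exe, sample],
                                capture_output=True, text=True, timeout=60)
        return result.stdout

    if __name__ == "__main__":
        print(sigcheck("wire-report.pdf.exe"))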

The output below is from our same exe from previous examples:

 

        Verified:       Unsigned

        File date:      10:59 AM 8/9/2011

        Publisher:      TightVNC Group

        Description:    vncviewer

        Product:        TightVNC Win32 Viewer

        Version:        1.5.2.0

        File version:   1.5.2.0

        Strong Name:    Unsigned

        Original Name:  vncviewer.exe

        Internal Name:  vncviewer

        Copyright:      Copyright (C) 2000-2010 TightVNC Group

        Comments:       Based on VNC by AT&T Research Labs Cambridge, RealVNC Ltd.

Just to compare, these are two outputs from other files:

 

1) Malware:

        Verified:       Unsigned

        File date:      5:41 AM 9/28/2012

        Publisher:      Microsoft Corporation

        Description:    Microsoft (R) Internet Common

        Product:        Microsoft(R) Windows(R) Operating System

        Version:        6, 0, 2900, 3138

        File version:   6, 0, 2900, 3138

2) Windows ARP:

        Verified:       Signed

        Signing date:   10:07 PM 4/13/2008

        Publisher:      Microsoft Corporation

        Description:    TCP/IP Arp Command

        Product:        Microsoft® Windows® Operating System

        Version:        5.1.2600.0

        File version:   5.1.2600.0 (xpclient.010817-1148)

 

--

Hopefully this will help you to start your own analysis system. In the next part of this diary we will look at tools/scripts for non-PE files, and integrate them.

 

--

Pedro Bueno (pbueno /%%/ isc. sans. org)

Twitter: http://twitter.com/besecure

2 Comments

Published: 2012-10-12

Cyber Security Awareness Month - Day 12 PCI DSS

Today I'll provide an overview of what is often the elephant in the room: the Payment Card Industry Data Security Standard (PCI DSS). Unlike ISO 27001, where shades of grey are acceptable, in PCI DSS things are very much black and white, with some wiggle room - although it is limited and realistically only available if you can convince the QSA that what you are doing is OK.  It boils down to this: you either comply with a requirement, or you don't. There is no "kind of".
 

Background
Each of the payment brands has a set of information security requirements that must be met by its merchants. This meant that in order to process Visa transactions you needed to comply with Visa’s Cardholder Information Security Program (CISP).  When dealing with MasterCard you needed to comply with MasterCard’s Site Data Protection (SDP) program, and so on for American Express, Discover and JCB issued cards. In order to simplify the requirements on merchants and to align the different programs, the founding members developed PCI DSS, and the PCI Security Standards Council was created to manage the various different standards.

Founding members and their various compliance requirements:

  • American Express: www.americanexpress.com/datasecurity
  • Discover Financial Services: www.discovernetwork.com/fraudsecurity/disc.html
  • JCB International: www.jcb-global.com/english/pci/index.html
  • MasterCard Worldwide: www.mastercard.com/sdp
  • Visa Inc: www.visa.com/cisp
  • Visa Europe: www.visaeurope.com/ais

The main standard most people will need to comply with is PCI DSS. The other standards have a specific scope. 

  • PA-DSS applies to those that are selling an application that accepts, processes, stores or transmits credit card information.
  • PCI PTS applies to the actual pin pad devices many of us are familiar with and
  • PCI P2PE (Point-to-Point Encryption) which deals with encryption in point to point solutions.  


PCI DSS applies to any organisation that accepts, processes, stores or transmits credit card information. It unfortunately does not matter how small or large you are, you have to meet all the requirements of PCI DSS, although there are some small differences in the standard depending on whether you are a merchant or a service provider. If you accept credit cards, you have to comply. 

Depending on the number of transactions you may be considered a level 1, 2, 3, or for some payment brands even a level 4 merchant or service provider.  In a nutshell, if you are a level 1 merchant or service provider, you will need to have an on-site assessment annually and quarterly Approved Scanning Vendor (ASV) scans. Lower levels may only require you to validate using a self assessment questionnaire (SAQ) and have quarterly ASV scans. The main thing to remember though is that the number of transactions you do only determines the validation requirements, not the compliance requirements.  You will always have to comply with all requirements outlined in the standard, unless they are really not applicable to your situation.

Just to make it slightly more complicated, the number of transactions that determines what level you are depends on each payment brand.  You can be a level 1 merchant for Visa, level 2 for MasterCard and level 3 for Amex. The best place to find the specific levels is via one of the links above. To make it even more complicated, your acquirer or the payment brand themselves may specify that you have to validate as a level 1 merchant or service provider. Some service providers will always have to validate as a level 1 regardless of the number of transactions processed, depending on the type of service being provided.   So life can get confusing.
 
What happens if you don't comply? Well, that depends on the acquiring bank that you deal with as a merchant or service provider. Ultimately the acquirer carries the risk; they are the ones that get yelled at and have to provide evidence of their merchants' (i.e. your) compliance with the standard. When you are not compliant they may impose additional fees on your organisation until you are compliant, or they may refuse service. I have seen both happen in the past twelve months. Should there be a breach you may also be held liable for the costs associated with the breach.  Typically the acquirer or payment brand will bring in their own investigative team and perform an analysis as to how the breach occurred. If, at the time of a breach, you are not compliant with PCI DSS they may try and recover their costs. It is therefore important that once you are compliant you make sure you are able to maintain it.

A longish background I know, but it is important to get these things straight before you try and tackle the rest of the standard. 

Scope and scope reduction
Possibly the most difficult thing to do is scoping of the PCI environment, and it is the one area where, as a QSA, you get the most questions from people.  Scoping can be a pain, but there are a few rules of thumb you can work with that might make it easier for you.

  • Electronic world
    • If the system accepts, stores, processes or transmits credit card information it is in scope.
    • If you use tokenization the systems interacting with the tokenization services are in scope.
    • If you can access systems that process card information you are in scope. For example if I have a web server that shows some static pages (nothing to do with buying anything or processing cards), but it is in the same network segment (i.e. it has access to the other web server) as those web servers dealing with credit card information, then it is in scope.
  • Paper world
    • If it has a credit card written on it, it is in scope

If you are slightly freaked out by those broad rules and you are thinking OMG, that means my entire worldwide network is potentially in scope, then you are starting to get it. Calling PCI DSS the elephant in the room, or bigger than Ben Hur, is quite appropriate. Which is where scope reduction comes into play.

PCI DSS is concerned with specific pieces of information, the cardholder data: credit card number, name, expiry date, card verification value (CVV2/CVC2), and authentication data. Some of this you are allowed to store and use prior to authorisation of a transaction and not after. Some of it must never be stored (e.g. authentication data, unless you are an issuer of course). So by reducing the information kept, you may be reducing the impact of PCI DSS on the organisation.  
The main mechanisms for reducing scope are:

  • Don't store card details. If you do not need the card number for anything, get rid of it once the transaction is complete.  Keep enough information so you can do your settlements and chargebacks if needed;
  • Truncate - only store the first 6 and last 4 digits of the card number (a short sketch follows this list);
  • Tokenize - use an internal or third party tokenization service which takes a credit card and provides a non-identifiable result back which is used in your transactions (i.e. if you lose the token, the actual credit card number cannot be discerned);
  • Network segmentation is also a common method for reducing the scope.  Basically, those servers and other devices that accept, store and process card details are segmented off from the main network and have very limited interaction between the cardholder environment and the rest of the network.  If done correctly the scope for PCI may be reduced; and finally,
  • Get someone else to do it.  Many organisations reduce scope by getting a service provider to perform certain tasks for them, such as storing credit card details. Third party tokenization services are quite common. Likewise, having a third party scan forms and redact the credit card number before it is sent to your organisation is very common.  Depending on how it is implemented, having someone else do the work may reduce your scope.
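As a minimal sketch of that truncation rule (keep at most the first six and last four digits) for anyone wiring it into an application - how you receive and dispose of the full PAN is of course still your problem:

    # Truncate a PAN to the first six and last four digits, the most that PCI DSS
    # permits in truncated form. Illustrative only.
    def truncate_pan(pan):
        digits = "".join(ch for ch in pan if ch.isdigit())
        if len(digits) < 13:
            raise ValueError("does not look like a card number")
        return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

    print(truncate_pan("4111 1111 1111 1111"))   # prints 411111******1111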

The how and why is a bit beyond this intro to the standard. Either way, if you are doing scope reduction exercises, keep your QSA briefed and they can advise on whether what you are doing will help your situation or not.

Requirements
There are twelve requirements in the standard. A number of them have been introduced as a result of breach investigations; the remainder are fairly typical security practices that hopefully you are already doing anyway. Following are brief explanations of the requirements and some general observations regarding each requirement, based on PCI engagements over the years. 

  • Requirement 1: Install and maintain a firewall configuration to protect cardholder data

This requirement outlines the processes that need to be in place with regards to the management of firewalls and routers used in the Cardholder Data Environment.

Most organisations have this reasonably under control. There are however documentation requirements that are not often met and there are those pesky ANY rules (you can have them, but it typically increases the scope).

  • Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters

Systems are often compromised through the use of default passwords and other default configuration items.  This section addresses requirements such as default passwords, but also security/hardening configuration of devices used, such as server configuration standards for servers and network devices.

This varies from organisation to organisation. Basic build documents are usually available, but they do not often address the requirements of the standard. Default passwords are usually changed, but the simple ones are not (do the SNMP community strings public and private ring a bell?). 

  • Requirement 3: Protect stored cardholder data

This requirement addresses the storage requirements for cardholder information and what can be stored and what cannot be stored.   It also addresses encryption requirements and outlines the related documentation requirements.

This is probably the most difficult requirement for most organisations.  Encrypting and managing the keys can be a big challenge.  

  • Requirement 4: Encrypt transmission of cardholder data across open, public networks

This requirement addresses the interaction between the organisation and third parties as well as donors/customers. 

Usually easily met by most organisations as they often already have VPNs or SSL based applications.

  • Requirement 5: Use and regularly update anti-virus software or programs

This requirement addresses the malware management of the environment and helps ensure that malicious software does not adversely affect the environment.

If you don't have this sorted, PCI will be the least of your problems.  Most organisations do OK with this, just read the requirement for regular scanning carefully.

  • Requirement 6: Develop and maintain secure systems and applications

This requirement addresses patching and change control as well as the development and testing of applications used to accept and process cardholder information.

The main issues we find relate to the lack of security training or awareness for developers as well as a deficiency in security related testing of internal and external facing applications.  On the patching side, if it has a CVSS score of 4 or higher, you must patch.

  • Requirement 7: Restrict access to cardholder data by business need to know

Requirement 7 relates to access and privilege management and the processes involved for providing access the cardholder data.  This includes the authorisation process and documentation, typically role based.

Most of the time the deficiencies relate to documentation.

  • Requirement 8: Assign a unique ID to each person with computer access.

This requirement ensures that users have individual accounts on the various systems and that password controls are applied and documented.

Userid and password management is often fine, however privileged accounts management and the use of root in some organisations can be challenge.

  • Requirement 9: Restrict physical access to cardholder data.

In order to protect the cardholder information physical security must be considered.

Usually OK; most of the problems we come across relate to dealing with visitors.  

  • Requirement 10: Track and monitor all access to network resources and cardholder data.

As part of management of cardholder data it is important to have visibility in the environment and have the ability to track activities. 

The daily review of logs and file integrity monitoring is where most people struggle.

  • Requirement 11: Regularly test security systems and processes.

Under PCI DSS it is expected that the security of systems and applications be tested regularly to ensure that cardholder information is safe.

The quarterly wireless checks for rogue devices are usually the main stumbling block, as all sites have to be done. Likewise, the difference between a penetration test and a vulnerability scan is sometimes confused and causes issues.  

  • Requirement 12: Maintain a policy that addresses information security for employees and contractors.

Policies are the cornerstone of compliance as they outline the requirements to be followed within the organisation.

Usually policies are OK, however monitoring of the PCI status of your service providers is often not well developed. 


Becoming compliant
If you are in the process of becoming compliant, then the first step should really be to see where you are at.  So perform a gap analysis: have a look at all of the requirements and see how your organisation stacks up against these.  The council has two documents available here https://www.pcisecuritystandards.org/security_standards/documents.php that will help.  Firstly there is the "navigating the PCI DSS v2.0" document.  It provides guidance on the requirements. Secondly there is the "prioritised approach for PCI DSS Version 2.0" document and spreadsheet that will help with remediation by assisting in prioritising your efforts.  Acquirers have been known to ask for completion of this spreadsheet so they can track your compliance efforts.

Be aware of the self assessment trap that people tend to fall into. Remember all those quizzes in magazines? Can you run 100m in 12 seconds? Sure.  Can you bench press your own weight at least ten times? No problem.  We tend to overestimate our abilities. So when doing the gap analysis, for each requirement you look at, add the following: "how can I prove it?"  That should bring answers back into reality. For example, requirement 1.1.1, verify that there is a formal process for testing and approval of all network connections and changes to firewall and router configurations: "how can I prove it?"  What the QSA will be looking for is a document that states "must be approved, must be tested, etc.", and then typically they'll ask to see the approval for a particular change to a network device.  Who approved it, who executed it, etc. If that is not possible, then you are not compliant.

After the gap analysis and the remediation you are ready for either the on-site assessment, if needed, or the self assessment. There are a number of different self assessment questionnaires that can be completed depending on your situation.  Usually the acquirer stipulates which SAQ needs to be completed. Just remember the magazine quiz trap when completing the SAQ; it is easy to say yes when the answer is really no.     

A bit longer than I had initially intended, so that's where we'll leave it for today. Comments are always welcome; for specific questions I would suggest you use the contact form.

Cheers
Mark H - www.shearwater.com.au

 

10 Comments

Published: 2012-10-11

Cyber Security Awareness Month - Day 11 - Vendor Agnostic Standards (Center for Internet Security)

The Center for Internet Security (CIS) is best known for its Security Benchmarks.  These are security standards for hardening various products and services, making them more resistant to attack, setting them to log and alert better, and so on.  There are a few attractions to using benchmarks from an organization like CIS:

  • The benchmarks are written by volunteers, most of whom do not work for the vendor in question.  This means that each security setting will have seen scrutiny from many people who are NOT the vendor.  Recommended security settings will often match the vendor's recommendations, but you'd be surprised how much further a group of dedicated volunteers will take things! 
  • The benchmarks are written collaboratively by consensus.  There may be a project lead (or leads), but most points see spirited debate before they reach their final form.  A change doesn't get committed to the final document until everyone is convinced that it is "the right thing to do", presented the right way.
  • The benchmarks will usually discuss specific situations where any change is appropriate (or just as important, not appropriate)
  • As each recommended change is considered in the document, there's a discussion about how making that change might affect the service delivered
  • Recommended settings or changes will usually have references for additional background and reading

Discussion of the CIS Benchmarks is particularly timely, as they released updates to several benchmarks earlier this week, for:

  • CIS Apache HTTP Server 2.2.x
  • Google Android 4.0
  • IBM AIX 5.3-6.1
  • Microsoft IIS 7.5
  • Oracle Solaris 10

The focus today will be on the Cisco Device benchmarks, which I use almost daily.  These include standards for both IOS based Routers/Switches and for Firewalls from Cisco.

The benchmark is divided into 2 sections (these are pasted right from the benchmark document):


Level-1 Benchmark
The Level-1 Benchmark for Cisco IOS represents a prudent level of minimum due care.
These settings:
•  Can be easily understood and performed by system administrators with any level of security knowledge and experience
•  Are unlikely to cause an interruption of service to the operating system or the applications that run on it

Level-2  Benchmark
The Level-2 Benchmark for  Cisco  IOS   represents an enhanced level of due care for system security.
• Enhance security beyond the minimum due care level, based on specific network architectures and server function
• Contain some security configuration recommendations that affect functionality, and are therefore of greatest value to system administrators who have sufficient security knowledge to apply them with consideration to the functions and applications running in their particular environments

Each section is in turn divided up in hierarchical fashion, breaking each area of configuration into logical groups.  Each specific setting has a description of the change, the rationale for the change (usually describing any attack vector), as well as the configuration command to make the change.  An audit command is also included, to verify if the setting in question has been made successfully or not.  Finally, references are included for each change - these give you additional reading on other sites and documents such as the NSA's Security Configuration guide, the Cisco documentation site (of course, for the complete documentation of the commands being discussed), or the Cisco Guide for Hardening IOS Devices.

A final win is the Router Assessment Tool (RAT), which is an audit tool that accompanies the benchmark.  RAT will take a saved configuration and assess it against each of the Benchmark settings, either at Level 1 or Level 2.  RAT can also be configured to collect configurations from live devices prior to the audit.  The completed audit ends up being a colour coded HTML doc, which can be used to help in remediation of the platform (Red for non-compliance really gets the attention of the non-technical folks).
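Even without RAT, a few lines of scripting against a saved configuration gets you a surprisingly long way. A toy sketch - the rules below are illustrative examples, not the actual benchmark items or their Level 1/Level 2 assignments:

    # Toy RAT-style assessment of a saved Cisco IOS configuration.
    # Take the real checks (and their levels) from the benchmark document itself.
    import sys

    RULES = [
        ("service password-encryption present", lambda cfg: "service password-encryption" in cfg),
        ("HTTP server disabled",                lambda cfg: "no ip http server" in cfg),
        ("login banner configured",             lambda cfg: "banner login" in cfg or "banner motd" in cfg),
    ]

    def assess(path):
        cfg = open(path).read()
        for description, check in RULES:
            print("%-40s %s" % (description, "PASS" if check(cfg) else "FAIL"))

    if __name__ == "__main__":
        assess(sys.argv[1])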

As always
As with most standards of this type, the recommendation is to either:

  • Audit your environment against the benchmark documents
  • Make changes to your environment as suggested in the document, considering each change individually on its own merits with an eye towards how it will affect both security and service delivery (ie - a risk assessment).

What you DON'T want to do is implement changes from any security benchmark without this risk assessment - as discussed, going this route can have some dire consequences! 

Often organizations will take several security documents like this, and distill them down to a single corporate standard for internal compliance and auditing.  This is a great approach, but it also means that the internal standard will need to be re-addressed as the source documents are updated.

Happy auditing, everyone!

Related Links:
The CIS home page ==> http://www.cisecurity.org/
Security Benchmarks available for Download ==> https://benchmarks.cisecurity.org/en-us/?route=downloads.multiform
Benchmark Assessment Tools (includes RAT) ==> https://benchmarks.cisecurity.org/en-us/?route=downloads.audittools
NSA Router Security Configuration Guide ==> http://www.nsa.gov/ia/_files/routers/C4-040R-02.pdf
Cisco Guide to Harden Cisco IOS Devices ==> http://www.cisco.com/en/US/tech/tk648/tk361/technologies_tech_note09186a0080120f48.shtml
 

===============
Rob VandenBrink
Metafore

1 Comments

Published: 2012-10-11

Firefox 16 / Thunderbird 16 updates

Thanks Mike and others for digging in to the security fixes and changes in the recent Firefox 16 and Thunderbird 16 updates (earlier this week).  Find these details here:

https://www.mozilla.org/security/known-vulnerabilities/firefox.html

https://www.mozilla.org/security/known-vulnerabilities/thunderbird.html

 

.. And thanks to our reader Paul, who let us know that this latest update has been pulled (if you download the latest version right now, it's 15.0.1).  It seems that a critical security vulnerability slipped past in 16.0 (version 15.0.1 is not affected).  Good on the Firefox / Mozilla teams for pulling it so quickly, and posting on it immediately.   More info here:

https://blog.mozilla.org/security/2012/10/10/security-vulnerability-in-firefox-16/

Final Update (we hope):

Another reader has just let us know that 16.0.1 has just been posted - this should get us all back on track!  Happy updating everyone!  The two original links (above) have the security-specific info for version 16.0.1.

===============
Rob VandenBrink
Metafore

5 Comments

Published: 2012-10-10

Cyber Security Awareness Month - Day 10 - Standard Sudo - Part Two


It is Day 10 of Cyber Security Awareness Month.  I am continuing with Part Two of my entry from Day 3 on Standard Sudo - Part One.   We will cover some technical implementation options of sudo with pros and cons of the given examples.

Some Sudo Good Ideas

A.  Central Distribution 

   The distribution control of your sudoers file is key to controlling the risk of your UNIX servers.

       1. LDAP [1]

        Cons
            - Large environments may have challenges to update and true up software on all flavors of UNIX;
            - Sudoers LDAP support began with Sudo v1.6.8;
            - LDAP client software is required for Sudo to work with LDAP support;          
            - Potential for a significant work effort to port existing sudo command sets into the LDAP sudo schema;    

         Pros
           - May make auditors happy;  
           - Ideal for small start up environments;
           - Opportunities for provisioning teams become easier;

       2. Central Server
 
          - Use/develop a sync mechanism like CVS, rsync, scp, nfs, etc.;

B.  Single File / 1:Many 

Use of ONE standardized sudo command file on every server lightens the distribution burdens (sync scripts are an easy fix...).   This does come with risks, and each environment needs to measure its tolerance of this idea.  The biggest gap to consider is that unused sudo commands will likely exist on a server. For instance, if a rule to restart the Apache web server was maintained by the UNIX group "webteam" and it existed on every server, then the servers without Apache may not end up with the "webteam" group.  In many cases the process and procedure that already exists will easily dictate whether this condition is acceptable.

     Cons
       - Requires deliberation of risk; a potential barrier;
       - Potential for unused sudo command sets to be present on any given server; 
       - 1 mistake, Many failures; an undetected critical mistake gets distributed everywhere.

     Pros
       - May make auditors happy;
       - No minimum version requirement; (if command sets are kept basic)
       - One file to manage; (a potentially large file, but a BIG pro..)
       - Easy distribution; (sync scripts easy to develop)

C.  File Based Command Sets / Few:Many

The #includedir directive was released between Sudo v1.7.1 and v1.7.2 in early 2009.[2]  This feature allows the configuration to be managed with multiple files. For instance, all web/app admin command sets can be placed into a file and distributed separately.  An update to one command set does not necessarily jeopardize the remainder of the command sets.  This method can easily be 'profiled'. For instance, the sync scripts can keep track of which file is pushed to which server with configuration/list files.  This way, only webservers get the 'web admin' command sets.

      Cons
       - Large environments will need to true up to 1.7.2 or higher; a potentially significant effort;
       - Potential for unused sudo command sets to be present on any given server
       - Adds complexity; (not much mind you...)

       Pros
       - May make auditors happy;
       - One file per command set (not per command, but set of commands);
       - Easily profiled;
       - Adds simplicity; (how can that be?! )
       - Easy distribution; (sync scripts easy to develop)
 

D.  Structured Formatting

The idea is to create an XML'ish format of the file using comment hashes (#). A copy of each sudoers file is stored centrally, each section is managed at that central point, and the file is distributed to the remote server as needed. The purpose of the standard format (illustrated below) is to provide many opportunities to control, audit, and report on the sudoers environment.

The proposed file can be broken into two or more formatted sections and managed accordingly. My experience has been that three sections work best.  It provides flexibility and room for growth.  These sections can be carved and updated with scripting once they are rolled out.  The layout of a standard file can be as follows, with explanation below:

      << standard_OS >>
        << tier1_support>>
            << password resets >>
            << account aging >>
        <</tier1_support>>
        << tier2_support>>
            << account creations >>
            << account mods >>
            << account deletes >>
        <</tier2_support>>
      <</standard_OS >>
      << profiles >>
        << profile_support_group_one>>
           << DBA support commands >>
        <</profile_support_group_one>>
        << profile_support_group_two>>
           << Web Admin support commands >>
        <</profile_support_group_two>>
      <</profiles >>
      << native >>
        << localized rules >>
      <</native >>
 

     Cons
       - Potentially significant effort to implement;
       - Potential for unused sudo command sets to be present on any given server;
       - Adds much complexity;
       - Development effort required to implement;

      Pros
       - May make auditors happy;
       - Creates efficiencies;
       - Provides the following opportunities;
          - Sudoers file integrity checking;
          - Reporting of compliance;
          - Standardized provisioning of new sudo adds;
          - Distributed provisioning responsibilities for sudo mods;
       - No minimum version requirement; (if command sets are kept basic)

 
STANDARD SECTION
A master command set is created that will be needed on EVERY server in your estate. This command set is stored in the "standard" section of every sudoers file. In some environments, the risks are to be considered, and separate standard sections will need to be created for each Operating System in your environment.  The contents of this section will be a technical work of art. The value is solely created by the needs of each environment.
 
   Example:
   #### STANDARD_SOLARIS #### START ####
   #   Standard rules for Tier 1 Support
   #    User need only be added to tier1 unix group to access
   #
   User_Alias STD_SOL_TIER1_USERS = %tier1
   Cmnd_Alias STD_SOL_TIER1_COMMANDS =                \
                /bin/passwd, !/bin/passwd *root*,    \
                 /bin/usermod, !/bin/usermod *root*, \
                /bin/groupmod
 
   STD_SOL_TIER1_USERS ALL=NOPASSWD: STD_SOL_TIER1_COMMANDS
   #### STANDARD_SOLARIS #### END ####
 
PROFILES SECTION
 
A standard command set is created for each production support team, like the Database Administrators, Web Admins, or even the System Admins. The idea however is NOT to allow these command sets ON EVERY server.  They would only be needed on a group of servers, as not every server has a database or a web server.
 
   Example:
   #### PROFILES #### START ####
   ##### DBA_PROFILE ##### START #####
   #   Standard rules for the DBA's
   #    User need only be added to dba unix group to access
   #
 
   User_Alias DBA_PROFILE_USER = %dba
   Runas_Alias DBA_IDS = oracle
 
   Cmnd_Alias DBA_CAT_PARENT = /bin/cat *../*
   Cmnd_Alias DBA_PROFILE_COMMANDS =                \
                /bin/cat /u01/path/to/tracefiles/[a-zA-Z0-9]*/tracefiles/*trc, \
                /bin/cat /u01/path/to/tracefiles/[a-zA-Z0-9]*/tracefiles/*log, \
               /bin/cat /path/to/whatever/file/you/want
 
   DBA_PROFILE_USER ALL=(DBA_IDS) NOPASSWD:DBA_PROFILE_COMMANDS
 
   ##### DBA_PROFILE ##### END #####
   ##### WEBADMIN_PROFILE ##### START #####
   #   Standard rules for the Web Guys
   #    User need only be added to webadmin unix group to access
   #
 
   User_Alias WEB_PROFILE_USER = %webadmin
   Runas_Alias WEB_IDS = nobody
 
   Cmnd_Alias WEB_PROFILE_COMMANDS_AS_ROOT =                \
               /usr/bin/apachectl
 
   Cmnd_Alias WEB_PROFILE_COMMANDS=                \
               /opt/WebSphere/bin/startServer.sh
 
   WEB_PROFILE_USER ALL=(WEB_IDS) NOPASSWD:WEB_PROFILE_COMMANDS
   WEB_PROFILE_USER ALL=(root) NOPASSWD:WEB_PROFILE_COMMANDS_AS_ROOT
 
   ##### WEBADMIN_PROFILE ##### END #####
   #### PROFILES #### END ####
 
NATIVE SECTION
 
The Native section is a bit of flexibility worked in to ensure you have the ability to support the business. This section is used exclusively for sudo commands needed only on that particular server.  The standard formatting makes it possible for any scripting to leave this section alone.
 
   Example:
   ##### NATIVE ##### START #####
   # Section reserved for any commands only needed on this server.
   #
   ##### NATIVE ##### END ##### 
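To show why the marker format pays off, here is a rough sketch of the kind of scripting it enables: pull a named section out of a sudoers file and hash it, so the central copy can be compared against what is actually deployed. The marker syntax follows the examples above; adjust it to your own layout.

    # Sketch: extract a named section by its START/END markers and hash it for
    # integrity checking against the centrally managed copy.
    import hashlib
    import re

    def extract_section(path, name):
        text = open(path).read()
        pattern = r"#+\s*%s\s*#+\s*START\s*#+(.*?)#+\s*%s\s*#+\s*END\s*#+" % (
            re.escape(name), re.escape(name))
        match = re.search(pattern, text, re.DOTALL)
        return match.group(1) if match else None

    def section_digest(path, name):
        body = extract_section(path, name)
        return hashlib.sha256(body.encode()).hexdigest() if body else "<missing>"

    if __name__ == "__main__":
        for section in ("STANDARD_SOLARIS", "DBA_PROFILE", "NATIVE"):
            print("%-20s %s" % (section, section_digest("/etc/sudoers", section)))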
  

Summary

In summary (if you're still with me), the sudo environment in your organization may not be very complicated, thus much of this may seem overkill.  However, there is much listed above that an organization of any size can take away.  Sudo solves MANY problems, while creating some high risk ones.  How it is configured, much like everything on your UNIX servers, really matters to the security of your environment.  When sudo is set up and managed in a standard framework, it keeps the risks under control, the efficiencies high, and the auditors happier.  The underlying main message is that no one solution fits all, yet standard methods of implementation lower your risks. 

Please keep in mind, I only know what I know. There is always much to learn. Please share any ideas, gaps, or even questions you have about the diary above.  We all benefit from the sharing.

[1] http://www.sudo.ws/sudo/sudoers.ldap.man.html
[2] http://www.gratisoft.us/sudo/maintenance.html#1.7.2

 

-Kevin 

-- 
ISC Handler on Duty

 

 

0 Comments

Published: 2012-10-10

Facebook Scam Spam


We are seeing reports of Facebook scam spam trickle in.  Rene provided us with a detailed anecdote that includes the following image.   The URL provided in the image was investigated a bit.  TinyURL has since taken down the redirect and classified it as spam.   However, the image (and others like it) still propagates as FB users click on the link.  

This type of scam is mostly used without the permission of the vendor noted, in this case Costco.   The idea is to entice the user to click so they get redirected to a site where the business model depends on traffic volume.   If the Facebook user count has hit 1 billion (not something I'm keeping track of.. :) ), then even a small percentage of that makes the Facebook population an easy target, with an easy payout.





If you are a Facebook user, then please be wary of any offers that entice you to "click" to receive.  It's a really bad practice.   The holiday shopping season is beginning and these vectors are going to be heavily used by the scammers in the coming months.


-Kevin
--
ISC Handler on Duty

6 Comments

Published: 2012-10-09

Microsoft October 2012 Black Tuesday Update - Overview

Overview of the October 2012 Microsoft patches and their status.

Each bulletin below lists the affected component, the bulletins it replaces, the CVEs, the KB article, whether exploits are known, the Microsoft rating(**) and the ISC rating(*) for clients and servers.

MS12-064: Remote Code Execution Vulnerability in Microsoft Word
  Replaces: MS12-029, MS10-079, MS12-050
  Affected: Word
  CVEs: CVE-2012-0182, CVE-2012-2528
  KB 2742319 | Known exploits: No | Microsoft rating(**): Critical, Exploitability 1
  ISC rating(*): clients Critical / servers Important

MS12-065: Remote Code Execution Vulnerability in Microsoft Works
  Replaces: MS12-028
  Affected: Works
  CVE: CVE-2012-2550
  KB 2754670 | Known exploits: No | Microsoft rating(**): Important, Exploitability 2
  ISC rating(*): clients Critical / servers N/A

MS12-066: Elevation of Privilege Vulnerability via XSS in HTML Sanitation Component
  Replaces: MS12-039
  Affected: HTML Sanitation
  CVE: CVE-2012-2520
  KB 2741517 | Known exploits: Yes (limited) | Microsoft rating(**): Important, Exploitability 1
  ISC rating(*): clients Important / servers Important

MS12-067: Oracle Outside In and Advanced Filter Pack for FAST Search Server Code Execution Vulnerabilities
  Affected: FAST Search Server 2010 (SharePoint)
  CVEs: CVE-2012-1766, CVE-2012-1767, CVE-2012-1768, CVE-2012-1769, CVE-2012-1770, CVE-2012-1771, CVE-2012-1772, CVE-2012-1773, CVE-2012-3106, CVE-2012-3107, CVE-2012-3108, CVE-2012-3109, CVE-2012-3110
  KB 2742321 | Known exploits: Yes | Microsoft rating(**): Important, Exploitability 1
  ISC rating(*): clients Important / servers Critical

MS12-068: Privilege Escalation in Windows Kernel
  Replaces: MS09-058, MS10-021, MS11-068, MS11-098, MS12-042
  Affected: Kernel
  CVE: CVE-2012-2529
  KB 2724197 | Known exploits: No | Microsoft rating(**): Important, Exploitability 3
  ISC rating(*): clients Important / servers Important

MS12-069: Denial of Service Vulnerability in Kerberos
  Replaces: MS11-013
  Affected: Kerberos
  CVE: CVE-2012-2551
  KB 2743555 | Known exploits: No | Microsoft rating(**): Important, Exploitability 1
  ISC rating(*): clients Important / servers Important

MS12-070: Reflective XSS Vulnerability in SQL Server
  Replaces: MS09-062, MS11-049
  Affected: SQL Server
  CVE: CVE-2012-2552
  KB 2754849 | Known exploits: No | Microsoft rating(**): Important, Exploitability 1
  ISC rating(*): clients N/A / servers Important
We will update issues on this page for about a week or so as they evolve.
We appreciate updates
US based customers can call Microsoft for free patch related support on 1-866-PCSAFETY
(*): ISC rating
  • We use 4 levels:
    • PATCH NOW: Typically used where we see immediate danger of exploitation. Typical environments will want to deploy these patches ASAP. Workarounds are typically not accepted by users or are not possible. This rating is often used when typical deployments make it vulnerable and exploits are being used or easy to obtain or make.
    • Critical: Anything that needs little to become "interesting" for the dark side. Best approach is to test and deploy ASAP. Workarounds can give more time to test.
    • Important: Things where more testing and other measures can help.
    • Less Urgent: Typically we expect the impact if left unpatched to be not that big a deal in the short term. Do not forget them however.
  • The difference between the client and server rating is based on how you use the affected machine. We take into account the typical client and server deployment in the usage of the machine and the common measures people typically have in place already. Measures we presume are simple best practices for servers such as not using outlook, MSIE, word etc. to do traditional office or leisure work.
  • The rating is not a risk analysis as such. It is a rating of importance of the vulnerability and the perceived or even predicted threat for affected systems. The rating does not account for the number of affected systems there are. It is for an affected system in a typical worst-case role.
  • Only the organization itself is in a position to do a full risk analysis involving the presence (or lack of) affected systems, the actually implemented measures, the impact on their operation and the value of the assets involved.
  • All patches released by a vendor are important enough to have a close look if you use the affected systems. There is little incentive for vendors to publicize patches that do not have some form of risk to them.

(**): The exploitability rating we show is the worst of them all, due to the large number of ratings Microsoft assigns to some of the patches.

 

------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter

6 Comments

Published: 2012-10-09

Cyber Security Awareness Month - Day 9 - Request for Comment (RFC)

The Internet Engineering Task Force (IETF) is the main standards body for Internet related protocols. As far as standards bodies go, the IETF is probably the most open. Standards are discussed on mailing lists, and all you need to do is sign up for a mailing list and chime in, attend one of the IETF meetings, or both. There is no "membership", and standards usually require a consensus. 

The RFC Process

RFCs are not only published by the IETF, but also by the Internet Architecture Board (IAB) and the Internet Engineering Steering Group (IESG).  Not all RFCs are "standards". Some just document best practices or are purely informational (for example RFC 1796: "Not All RFCs are Standards"). There are three distinct sub-series: Standards (STD), Best Current Practice (BCP) and Informational (FYI).

The RFC process itself is regulated by RFCs. RFCs start out as Internet Drafts. These drafts have a limited lifetime (default is 6 months) and are discarded unless they are selected to become an RFC by the IESG. 

Once an RFC is published, its content can no longer be changed. Once in a while you will see errata that are added to RFCs, but for the most part, to update an RFC, a new RFC needs to be published. When researching RFCs, it is VERY important to make sure an RFC hasn't been updated by a newer one (I prefer the listing at http://tools.ietf.org/html/ as it links to updates).

There is no enforcement of RFCs other than peer pressure. For the most part, if you want stuff to work, you better follow RFCs. Until about a week ago, one of the expressions of the peer pressure aspect of the RFC system was rfc-ignorant.org.  The site listed networks that chose not to obey some RFCs, in particular those related to spam and abuse reporting. 

RFCs and Security

All RFCs should have a security section. It will summarize any security impact the particular RFC may have. In addition, there are a good number of RFCs that deal with security issues.  I recommend taking a look at new RFCs regularly. Internet standards are very dynamic, and assumptions you make based on old standards can be dangerous, or you may not be taking advantage of some of the newer features.

IETF also publishes a list of security related RFCs here: http://www.apps.ietf.org/rfc/seclist.html

 

------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter

0 Comments

Published: 2012-10-08

Cyber Security Awareness Month - Day 8 ISO 27001

The ISO 27000 series consists of a number of standards that apply to information security.  The main standard that you can actually certify against is ISO 27001. The remaining standards are mainly supporting standards that help you address specific areas of information security.

ISO 27001 is an information security management standard. Its main objective is to make sure that an organisation has the processes in place to manage information security within the organisation. Unlike the Payment Card Industry Data Security Standard (PCI DSS, more on that in a later diary), ISO 27001 is not prescriptive.  It doesn't tell you exactly what to do; it provides high level guidance and you have to work the rest out yourself. This is where the supporting standards come into play.  ISO 27002, for example, provides more information on implementing specific controls and provides examples. If you are stuck on how you should be assessing risk, then you need to take a look at ISO 27005 (ISO 31000 is also excellent; it is the old AS/NZS 4360).

One of the main difficulties of complying with the standard is the first realisation that you are complying with sections 4 through to 8, whereas many people concentrate on the controls in Annex A (Annex A, BTW, is 27002 with less detail provided).  Sections 4 through to 8 outline the system that needs to be in place: the Plan, Do, Check, Act cycle.  The standard is risk based, the idea being that you identify your assets, assess the risk, select controls based on those risks that you are going to implement, monitor how it is all going, and then rinse, lather and repeat the cycle.  The other key idea is that it is a system for the security of information. So not specifically computer systems, but the information they manage and hold, as well as the information used to manage the environment. Many ISO 27001 systems initially concentrate on the technical aspects of IT security: do I have a firewall, do I have AV, do I have processes to manage it, etc.  As the system matures it tends to go up a level and looks at the processes that are being performed by a group or division and the information they need to successfully do this. For example, the CISO needs to report on the status of information security in the organisation. What information is needed? They might need stats from various systems, pentest results, vulnerability analysis results, risk assessments, and so on. All are information assets that the CISO needs to do their job. How is that information generated, and by whom? How reliable is it? So in the ISO 27001 world there are a number of different levels that your system can work at. 

Just going back to sections 4 through 8 for a little bit. One of the first things you will do is define the scope of the system you are about to implement. Typically this will be phrased along the lines of "management of information security for system/group/division/product/application/service by responsible group". Usually it will be a little prettier than that, but you get the general idea. Like a quality system (ISO 9000 series), you define the scope of the environment. If your scope doesn't include the HR function, then HR becomes an input into your system, but not part of it. You may have to ask them to perform certain checks prior to hiring, but in my experience those types of processes are usually mature. Good scoping can be your saviour if you are going for certification.

So certify or just comply?  That is one of the main questions we get when talking about 27001. The choice is quite simple.  If you are going to use it as a marketing tool to improve confidence in your organisation's ability to manage information security, then certify.  If you just want to make sure that you are covering the bases that should be covered, then complying but not certifying may be the right choice for you.  

Where to start? Well, after you have bought your copy of the standard, you could perform a gap analysis of what you currently do against what the standard expects to be done. Be brutally honest. You can use this mechanism to monitor your progress and show improvement as things change. Expect to fail miserably, and make sure that management understands this before you start. You haven't needed to comply with the standard before, so there are going to be gaps. If you've never run a 5 km race before, the chances of finishing one on your first go are pretty slim. Once you have your gaps you will have a starting place, and you can start working on progressing and improving security.
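As a toy illustration of tracking that progress, the sketch below just counts compliant versus outstanding items from a gap analysis. The control names and statuses are made up; the real list comes from the standard itself.

```python
# Invented gap-analysis snapshot; re-run the same assessment each
# quarter to show measurable progress to management.
from collections import Counter

gap_analysis = {
    "Information security policy": "compliant",
    "Change control": "partial",
    "Business continuity plan": "missing",
    "Incident response plan": "partial",
    "Acceptable use policy": "compliant",
}

summary = Counter(gap_analysis.values())
total = len(gap_analysis)
print(f"Compliant: {summary['compliant']}/{total} "
      f"({100 * summary['compliant'] // total}%)")
```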

In order to certify, you must have what are called the documented processes in place (sorry, I can't really list them, as without the standard to provide context they won't make sense). Without these processes being written down, followed, and maintained, you cannot pass a certification audit. Likewise, it will be difficult to pass a certification audit if you do not have an information security policy, change control, a business continuity plan, an incident response plan, an acceptable use policy, and more. What you do or don't have will come out in the gap analysis.

As a management system, ISO 27001 is quite reasonable. If you do it correctly, the overhead on your scarce resources won't be too bad. It makes you document the processes that are actually important to the organisation, which is never a bad idea. It forces you to think about issues that you may not have thought about previously; in fact, that probably goes for most standards. The standard forces the engagement of management in information security matters, and this often results in a better understanding of what you really do, and possibly even more funding. The main thing to remember is: don't work for the standard, make the standard work for you. If you are doing it just to tick a box, you will likely fail.

This is a brief overview of ISO 27001. If there is anything specific you would like covered, let us know via the comments or the contact form.

Cheers

Mark H

4 Comments

Published: 2012-10-07

Cyber Security Awareness Month - Day 7 - Rollup Review of CSAM Week 1

Dr. J started the week with commentary on what we will be attempting to write about this month. One of the things we hope to accomplish this month, with our focus on standards, is awareness of their existence and how they can assist in solving some of the information security challenges we face in our everyday work. Dr. J also mentioned guest diaries, but as of this writing I am not aware of any guest diaries that have been accepted, so if you're interested, please drop us a note.

For CSAM Day 4, Dr. J wrote about crypto standards, prompted by the announcement of the winner of the SHA-3 competition. One key point Dr. J mentions in his article is the discussion of performance. The application of cryptography should always be weighed against the risk of exposure and the impact on performance.
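To put a rough number on the performance side of that trade-off, something like the following Python sketch can help; it assumes a Python 3.6+ hashlib with SHA-3 support, and the figures only reflect your particular platform, not a definitive benchmark.

```python
# Rough throughput comparison of two hash functions via hashlib.
# Results vary widely by platform and interpreter; use only to get a
# feel for the relative cost on your own hardware.
import hashlib
import time

def throughput(name, megabytes=64):
    data = b"\x00" * (1024 * 1024)      # 1 MB buffer
    h = hashlib.new(name)
    start = time.perf_counter()
    for _ in range(megabytes):
        h.update(data)
    h.digest()
    elapsed = time.perf_counter() - start
    return megabytes / elapsed           # MB per second

for algo in ("sha256", "sha3_256"):
    print(f"{algo:10} ~{throughput(algo):6.1f} MB/s")
```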

For CSAM Day 5, Richard Porter wrote about the different groups that publish standards that may be of interest. The Handler group is a very diverse group of individuals, some of whom have actually written some of the standards we use today, with much experience implementing these standards. The task of implementing one of these standards can be daunting, so let us know what we can do to help. There is a ton of great information at each of the links; for example, the NIST publications include the '800 series' of Special Publications, which covers computer security.

For CSAM Day 6, Manuel discussed the North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) standards. NERC CIP is an excellent example of a set of non-government standards that are fairly easy to interpret. "NERC is a non-government organization which has statutory responsibility to regulate bulk power system users, owners, and operators through the adoption and enforcement of standards." Granted, we are not all bulk power system users, owners, or operators; however, the approach is based on solid practices that can be adapted to many environments, regardless of mission.

The challenge with standards has often been trying to interpret or understand the intent, and then to fit that material to the world we work in. The Handlers here at the Internet Storm Center have a very diverse set of experiences, so if you have questions about where to start, what something means, etc., we can certainly assist. Feel free to ask, that's why we are here :)


tony d0t carothers -gmail

0 Comments

Published: 2012-10-06

Cyber Security Awareness Month - Day 6 - NERC: The standard that enforces security on power SCADA

The North American Electric Reliability Corporation (NERC) has published, under its Critical Infrastructure Protection program, a security standard that is mandatory for every SCADA system managing infrastructure within the electrical system. It closely resembles the ISO 27002 control objectives. Look for the Critical Infrastructure Protection section on the NERC website. Let's have a look at the detail of each document:

 

CIP-001-2a (Sabotage Reporting): Its purpose is to define how to handle disturbances or unusual occurrences, suspected or determined to be caused by sabotage. It indicates that companies need to define procedures and guides to handle sabotage and to report incidents to the appropriate systems, governmental agencies, and regulatory bodies.

CIP-002-4a (Cyber Security - Critical Cyber Asset Identification): Its purpose is to require the identification and documentation of the Critical Cyber Assets associated with the Critical Assets that support the reliable operation of the Bulk Electric System. A Critical Cyber Asset must have at least one of the following characteristics:

  • The Cyber Asset uses a routable protocol to communicate outside the Electronic Security Perimeter; or,
  • The Cyber Asset uses a routable protocol within a control center; or,
  • The Cyber Asset is dial-up accessible.

CIP-003-4 (Cyber Security - Security Management Controls): Its purpose is to create and maintain a cyber security policy, designate a senior manager to lead and manage the implementation of the CIP standards, control exceptions to policy, and define and implement access control, change control, configuration management, and information protection methodologies.

CIP-004-4a (Cyber Security - Personnel and Training): It requires that personnel having authorized cyber or authorized unescorted physical access to the Critical Cyber Assets identified in CIP-002-4a, including contractors and service vendors, have an appropriate level of personnel risk assessment, training, and security awareness, as defined by the company's risk assessment model and in compliance with its Information Security Management System.

CIP-005-4a (Cyber Security - Electronic Security Perimeter): It requires the identification and protection of the Electronic Security Perimeter inside which all Critical Cyber Assets reside. This means placing controls such as firewalls with specific support for the SCADA protocols in use, application whitelisting, and IPS, among many others. None of those controls may disrupt or modify the protocol flow between the SCADA entities in place.

CIP-006-4d (Cyber Security - Physical Security of Critical Cyber Assets): This standard is intended to ensure the implementation of a physical security program for the protection of Critical Cyber Assets. This includes physical controls such as special locks, walls, and biometrics, plus a monitoring system checking all those controls for anomalies.

CIP-007-4 (Cyber Security - Systems Security Management): It requires Responsible Entities to define methods, processes, and procedures for securing the systems determined to be Critical Cyber Assets inside the Electronic Security Perimeter, such as test procedures, security baselines for ports and services, security patch management, malicious software prevention, account management, and security status monitoring.

CIP-008-4 (Cyber Security - Incident Reporting and Response Planning): It ensures the identification, classification, response, and reporting of Cyber Security Incidents related to Critical Cyber Assets. For more details on incident response, check the NIST computer security incident handling guide.

CIP-009-4 (Cyber Security - Recovery Plans for Critical Cyber Assets): It ensures that recovery plans are put in place for Critical Cyber Assets and that these plans follow established business continuity and disaster recovery techniques and practices.

 

The implementation of the NERC CIP standards needs to be built from the Information Security Management System directives, and both of them need to agree on the way controls are implemented.

Manuel Humberto Santander Peláez
SANS Internet Storm Center - Handler
Twitter:@manuelsantander
Web:http://manuel.santander.name
e-mail: msantand at isc dot sans dot org

1 Comments

Published: 2012-10-05

Reports of a Distributed Injection Scan

We have received a report from a reader of a large, distributed SQL injection scan. The scan reportedly comes from 9000+ unique IPv4 addresses; each IP sends 4-10 requests to lightly fuzz one form field, then the next IP lightly fuzzes the second form field within the same page, the next IP the next form field, and so on. It looks to be targeting MSSQL and trying to obtain the version.

The reader reports that this scan has been going on for several days.

Sample Payload:

%27%29%29%2F%2A%2A%2For%2F%2A%2A%2F1%3D%40%40version--

%27%2F%2A%2A%2For%2F%2A%2A%2F1%3D%40%40version--

%27%2F%2A%2A%2For%2F%2A%2A%2F1%3D%40%40version%29%29-

%29%29%2F%2A%2A%2For%2F%2A%2A%2F1%3D%40%40version--

%29%2F%2A%2A%2For%2F%2A%2A%2F1%3D%40%40version--
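Decoding those payloads makes the probe obvious; the short snippet below just URL-decodes each one so you can see the comment-obfuscated "or 1=@@version" attempt in plain text.

```python
# URL-decode the reported payloads to reveal the injection attempt.
from urllib.parse import unquote

payloads = [
    "%27%29%29%2F%2A%2A%2For%2F%2A%2A%2F1%3D%40%40version--",
    "%27%2F%2A%2A%2For%2F%2A%2A%2F1%3D%40%40version--",
    "%27%2F%2A%2A%2For%2F%2A%2A%2F1%3D%40%40version%29%29-",
    "%29%29%2F%2A%2A%2For%2F%2A%2A%2F1%3D%40%40version--",
    "%29%2F%2A%2A%2For%2F%2A%2A%2F1%3D%40%40version--",
]

for p in payloads:
    # First payload decodes to: '))/**/or/**/1=@@version--
    print(unquote(p))
```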

 

The User Agent String for all of the attacking IPs is always

User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)

There does not seem to be a Referer header either.

 

If you are seeing this activity and can report it, please let us know.
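If you want a quick way to check your own web logs for this pattern, a rough sketch follows. The log path and combined-log-format assumptions are mine; adapt the patterns to your environment before trusting the output.

```python
# Hedged sketch: flag source IPs that sent the reported User-Agent
# together with an "@@version" probe, assuming an Apache-style
# combined log where the client IP is the first field.
from urllib.parse import unquote

LOG_FILE = "/var/log/apache2/access.log"   # assumption: adjust to your setup
UA = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)"

suspect_ips = set()
with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        if UA not in line:
            continue
        # Decode the request so URL-encoded payloads become searchable.
        if "@@version" in unquote(line).lower():
            suspect_ips.add(line.split()[0])

print(f"{len(suspect_ips)} distinct source IPs matched")
```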

 

Richard Porter

--- ISC Handler on Duty

5 Comments

Published: 2012-10-05

Cyber Security Awareness Month - Day 5: Standards Body Soup, So many Flavors in the bowl.

Introduction

First, I would like to say that without our readers and subscribers we would not exist, and that we genuinely do read every post. A reader posted a request to break down standards bodies, and I decided to take that endeavor on. It has now turned into a larger project than just one diary entry. You will see more on this topic, but hopefully today is a good start. This first pass at understanding the different bodies is not a complete list.

Many of you have likely heard the quote "The problem with standards is that there are so many to choose from." I really don't know who first uttered that phrase, but it holds true from my point of view. This article will take a 10,000 meter or 30,000 foot view (depending on whether you prefer metric [1] or imperial units [2]) of what I am calling standards body soup. Within this bowl of standards groups there are several types of organizations and several methods by which they govern. I can assume that most readers are familiar with Requests for Comments (RFCs), and the group that governs this standards suite is the Internet Engineering Task Force (IETF). So we will start there and break the IETF down into areas for understanding. This will provide a framework for a further list of standards bodies.

Breakdown and Terminology

In order to build a table for understanding different standards bodies we will use the following subject areas for columns.

Abbreviated Name

The short name or acronym used to reference the organization.

Full Name

The complete name. Sometimes we only know the Acronym. 

Web Site

How to find them on the web.

Members and Contributors

Who can and or are members of the standards group.

Role

How do they influence or contribute to industry.

Notable Standards

Standards that might matter to us.

 

Standards Body Profile

Abbreviated Name: IETF

Full Name: Internet Engineering Task Force

Web Site: www.ietf.org

Members and Contributors: Too numerous to list. Membership is open to anyone, and the IETF is comprised of many working groups. A breakdown of working groups can be found at http://www.ietf.org/wg/ but, in summary, they are open to anyone and usually conduct business over open mailing lists. If there is an RFC that you would like to impact, join the mailing list and begin your journey.

The IETF is governed by a group called the Internet Society (ISOC), and the board of trustees can be found at http://www.internetsociety.org/who-we-are/board-trustees. As with most standards bodies, in our experience, the members come from various places. Members will often have a second industry position, and their parent company allows them to contribute.

Role:  Internet Standards Governance

 

Notable Security Based Standards: Again, there are far too many notable standards to list from the IETF, but I will mention a couple of my favorites.

RFC 2350 – Expectations for Computer Security Incident Response 

http://www.rfc-editor.org/rfc/rfc2350.txt

On occasion we are asked things like “My Company/Group/Team/Org is looking to stand up an Incident Response Team, where do I start?” and in the spirit of the world we live in today I am re-coining a popular phrase to “There’s a Standard for that!”

 

RFC 4949 – Internet Security Glossary, Version 2

http://www.rfc-editor.org/rfc/rfc4949.txt

In case you were wondering, yes there are standards for the standards. This is an informational RFC, which means it is not really a standard but a good reference. 

 

RFC 6618 (Experimental) – Mobile IPv6 Security Framework Using Transport Layer Security for Communication between the Mobile Node and Home Agent

http://datatracker.ietf.org/doc/rfc6618/

The title alone is scary, but it is a sign of the mobile world to come. This one is on my watch list.

Table

Please see the spreadsheet for editable details: https://isc.sans.edu/diaryimages/Standards_Framework_Draft.xlsx


 

 

References
[1] http://en.wikipedia.org/wiki/Metric_system
[2] http://en.wikipedia.org/wiki/Imperial_units