RSA posted SP4 Patch 4 of their Authentication Manager product today. There are a few pages of fixes in the README, but the most significant is that Authentication Manager can now be installed on Windows Server 2008 (both 32- and 64-bit).
This is significant: until now, Windows Server 2003 was the most recent Windows version supported, which has been a growing source of frustration for RSA shops.
Long story short: I've installed it in a production environment on Server 2008, and it works exactly as you'd expect. Good on them for catching up!
PS - The native AD integration (via LDAP) also works quite nicely; this is recent, but not new in this release.
Symantec published a report today showing that attacks via social networks (Facebook, Twitter and YouTube) grew in popularity between April and June 2011 as a vehicle for distributing spam, malware and phishing attacks.
Other statistics of interest: most of the spam was launched from botnets, 57% of it originated from the United States, and another 19% originated from various European countries.
Of course, Symantec reminds its readership that "Needless to say, none of these social network sites are behind these spam attacks. Social networks are employing a variety of techniques to protect users from such attacks and fraudulent activities involving user accounts."
Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu
Community SANS SEC 503 coming to Ottawa Sep 2011
SSL (or TLS) is *the* security protocol for encrypting traffic, in particular HTTP. We all know it, love it, and then ignore the various pop-ups telling us, in ever so cryptic ways, that someone is playing man-in-the-middle with us.
I don't want to go over the basics here, but just talk about various tricks and issues that I see sometimes left out.
What about different certificate "Classes"?
SSL plays two important roles: It encrypts traffic AND it verifies that you are connected to the correct server. Your browser knows that it connects to the correct server because the server presents a certificate that includes its host name, among other information, and is signed by a trusted certificate authority.
Certificate authorities vary in how they validate the information in the certificate, and what information is actually validated:
Domain: This is the simplest (and cheapest) type of certificate. All it verifies is the host name. Usually, you can get these certificates in a few minutes, as long as your e-mail address is listed in the domain's whois record. For example, if you own the domain name "bigbank.com", you can get a certificate for it, no matter whether you are affiliated with a company called "bigbank" or not.
Organization: This is the next class of certificates, sometimes called "Class 2" certificates. In this case, the certificate authority verifies that you are associated with the organization that owns the domain name. You typically need to fax in a copy of a photo ID, a business license or other paperwork. Now, the name of the business is validated by the certificate as well.
EV (Extended Validation) Certificates: This type of certificate is the most expensive to get, and requires additional paperwork and validation. The goal is to better validate the business name the certificate is used for. As a "reward", many browsers will display the business name, not just the host name, as part of the URL bar. Banks frequently use this type of certificate.
I need a certificate that covers multiple host names
No problem. You've got two options:
Wildcard certificates are used for a domain, and they will work for all hostnames in that particular domain (e.g. *.example.com)
Multiple Domain Name Certificates can list various host names from different domains. For example, we use one for isc.sans.edu that covers some of the old host names we used, like incidents.org and isc.sans.org.
I am using NameVirtualHosting (1 IP = Multiple Hostnames)
Now this is a tricky issue. If you use SSL, the entire HTTP stream, including headers, is encrypted. In order to figure out which key to use to decrypt it, the server needs to know the host name, which is encrypted... a classic catch-22. As a result, you cannot use multiple SSL certificates on the same IP address unless each server listens on a different port. However, modern browsers have a solution referred to as SNI (Server Name Indication, see RFC 4366). With Server Name Indication, the host name is sent in the clear as part of the client establishing the SSL connection (the SSL Client Hello message). Now the server knows what host name you are trying to connect to, and can use the right key.
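To make the client side of this concrete, here is a minimal Python sketch (not tied to any particular server setup): the `server_hostname` argument is what puts the host name into the Client Hello.

```python
import ssl

# Minimal sketch: Python's ssl module adds the SNI extension to the
# TLS Client Hello whenever server_hostname is supplied to wrap_socket().
# Without SNI, a name-virtual-hosted server cannot tell which certificate
# to present before encryption starts.
print("OpenSSL build supports SNI:", ssl.HAS_SNI)

context = ssl.create_default_context()
# context.wrap_socket(sock, server_hostname="www.example.com") would send
# SNI for www.example.com; the server then picks the matching certificate.
```

Any reasonably current OpenSSL build reports SNI support, which is the client half of the story; the server still needs software that understands the extension.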
Sadly, Windows XP DOES NOT support this extension to SSL, which limits its usefulness at this point. But it is a great option for small sites with limited user groups that don't use Windows XP. Internet Explorer 6 doesn't support it either, but I hope you aren't using that ;-)
In order to support SNI, you also need a recent version of OpenSSL and Apache on the server. In cases where I couldn't update OpenSSL and Apache, I have had good luck using nginx as a proxy (it supports SNI). Microsoft IIS did not support SNI last time I checked.
HTTP Strict Transport Security
This is a new feature, introduced in Firefox 4; other browsers are starting to pick it up as well. The feature tells a browser to only use HTTPS, not HTTP, to connect to a particular host. It protects against attacks that try to redirect the user to an HTTP version of the site. All you have to do is add an HSTS header to your server response:
Strict-Transport-Security: max-age=100 ; includeSubDomains
The "max-age" parameter tells the browser for how many seconds it should remember this setting. The optional "includeSubDomains" parameter extends this preference to any subdomains.
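As a small Python sketch (the helper name is my own invention, not part of any framework), this is all a server-side handler has to emit:

```python
# Sketch: building the Strict-Transport-Security header value.
# build_hsts_header is a hypothetical helper, not a library function.
def build_hsts_header(max_age_seconds, include_subdomains=False):
    value = "max-age=%d" % max_age_seconds
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

# The example from above: remember for 100 seconds, including subdomains.
print(build_hsts_header(100, include_subdomains=True))
# ('Strict-Transport-Security', 'max-age=100; includeSubDomains')
```

In production you would of course use a much larger max-age (a year is common) once you are sure all content is reachable over HTTPS.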
A couple of links related to SSL:
https://www.ssllabs.com/ - great site to check if SSL is configured correctly (make sure to check the "Do not show the results on the boards" checkbox)
http://hacks.mozilla.org/2010/08/firefox-4-http-strict-transport-security-force-https/ - details about HSTS
http://www.ietf.org/rfc/rfc4366.txt - RFC for SNI
Cisco released earlier today a bulletin regarding a vulnerability in the Cisco VPN client for Windows 7. The vulnerability is pretty simple: The client runs as a service, and all users logged in interactively have full access to the executable. A user could now replace the executable, restart the system and have the replacement running under the LocalSystem account.
The fix is pretty simple: Revoke the access rights for interactive users.
The interesting part: NGS Secure Research found the vulnerability, and released the details after Cisco released the patch. The vulnerability is almost identical to one found in 2007 by the same company in the same product.
Very sad at times how some vendors don't learn. Lucky that at least companies like NGS appear to be doing some of the QA for them.
We have covered DNSSEC before. But over the last few months, DNSSEC deployments have increased, and yesterday's DNS poisoning diary by Manuel shows that attacks against unsecured zones certainly happen.
I wanted to put together a couple of tips to avoid common errors:
- Patch your DNS server. Make sure you are running a recent version that supports current encryption algorithms. In particular, look for NSEC3 support.
- Review your overall DNS configuration. Clean it up first before implementing DNSSEC.
- Does your registrar have a facility to upload DS records?
- If you are using DNSSEC on a resolver, make sure the root zone's key is kept up to date. Recent versions of BIND support RFC 5011 and can manage key updates for you.
- Remember to regularly re-sign the zones. Signatures are typically valid for a month.
- Make sure your DNS server supports EDNS0 (this should not be a problem).
- Make sure your firewall isn't blocking UDP DNS replies that are larger than 512 bytes.
- Pick an algorithm that supports NSEC3 (RSASHA1-NSEC3-SHA1, which is #7, is my preferred one as it appears to be well supported compared to other NSEC3 algorithms).
- Only deposit DS records with your parent zone after you have completed the prior steps.
Anything I forgot? Please add a comment...
Couple URLs to use as a reference:
http://dnsviz.net/ - Really nice visualization tool.
http://dnssec-debugger.verisignlabs.com/ - thorough test of DNSSEC settings
http://www.dnssec.net - links to standards and tools
https://addons.mozilla.org/en-US/firefox/addon/dnssec-validator/ - Firefox extension to validate DNSSEC
http://www.iana.org/assignments/dns-sec-alg-numbers/dns-sec-alg-numbers.xml - DNSSEC Algorithm Numbers
http://www.cymru.com/Documents/secure-bind-template.html - secure BIND template. Apply this first.
http://technet.microsoft.com/en-us/library/cc772661%28WS.10%29.aspx - Securing Microsoft DNS
After talking about SQL injection, this is the second part of the mini series to help you protect yourself from simple persistent attacks as we have seen them in the last couple of months. A common MO employed in these attacks is to steal passwords from a database via SQL injection. Later, the attacker will try to use these passwords to break into other sites for which users may have chosen the same password. Of course, part of the problem is password reuse. But for now, we will focus on hashing passwords to make it harder for an attacker to retrieve a user's plain-text password.
First of all: What is hashing? According to NIST, "A hash algorithm (alternatively, hash "function") takes binary data, called the message, and produces a condensed representation, called the message digest. A cryptographic hash algorithm is a hash algorithm that is designed to achieve certain security properties." A good cryptographic hash will make it hard to find two messages with the same hash, or to find the clear text for a specific hash value. Common hashing algorithms are MD5 (old) and the "Secure Hashing Algorithm" (SHA) family (SHA-1, SHA-256...). Another common but less popular algorithm is RIPEMD.
Storing a password as a hash will make it difficult to figure out the actual password a user used. In order to verify the password, it is first hashed, then compared to the stored hash in the database.
A hash isn't foolproof. All hashes are vulnerable to brute forcing: if I can get the hash, I can try various passwords, hash them and check if they match. I may actually end up with a different password than the correct one, since hashes do have collisions (the same hash for two different plain-text values). A good hash is slow enough to calculate to make brute forcing difficult. Brute forcing can be improved by using databases of pre-calculated hash values, so-called rainbow tables. These reduce brute forcing to a simple database look-up, but require storage space. Rainbow tables are practical for strings up to about 10 characters in length (lower-case alphanumeric). Of course, the size of a rainbow table increases quickly as the length or complexity of the plain text increases.
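To illustrate why a fast, unsalted hash is weak, here is a minimal Python sketch of a dictionary-style brute force (the word list and password are made up for the example):

```python
import hashlib

# Sketch: brute forcing an unsalted SHA-1 password hash by hashing
# candidates until one matches. Rainbow tables precompute exactly this
# work, trading storage space for look-up speed.
def crack(target_hash, candidates):
    for candidate in candidates:
        if hashlib.sha1(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None  # not in the word list

stolen = hashlib.sha1(b"letmein").hexdigest()  # hash leaked via SQL injection
print(crack(stolen, ["password", "123456", "letmein"]))  # letmein
```

The loop is embarrassingly cheap for SHA-1, which is exactly why the salting and iteration tricks below matter.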
Probably the most important defense against rainbow tables is the idea of introducing a "salt". First of all, a salt will ensure that two users who happen to use the same password end up with different hashes. A salt can also be used to increase the length of the plain text beyond the point where rainbow tables become practical.
In order to use a salt, the salt value and the user's password are first concatenated, then the resulting string is hashed.
Another trick to harden a hash is to just apply the same algorithm multiple times. For example, if we take the SHA-1 algorithm, and apply it 100 times, we will slow down a brute force attempt by a factor of 100. However, the delay in validating an individual password will be hard to notice.
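A sketch of that iteration idea in Python (for real applications, purpose-built schemes like PBKDF2, available in the standard library as hashlib.pbkdf2_hmac, do this with many more rounds):

```python
import hashlib

# Sketch: applying SHA-1 repeatedly makes each brute-force guess
# roughly 100x more expensive, while a single login stays fast.
def iterated_sha1(password, rounds=100):
    digest = password.encode()
    for _ in range(rounds):
        digest = hashlib.sha1(digest).digest()
    return digest.hex()

print(iterated_sha1("hunter2"))  # 100 rounds of SHA-1
```

With rounds=1 this is plain SHA-1; raising the round count is the knob that slows an attacker down.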
When selecting an algorithm to hash passwords, it is important to select carefully as it is difficult to change the algorithm later. You will have to ask users to change their password if you do as you no longer know what password they picked.
Here is a proposal to create difficult-to-reverse salted hashes:
- As a salt, use a complex string that is different for each user. I like to use the username or e-mail address (of course, this means the user will have to enter their password whenever they change their e-mail address, but that is usually a good idea anyway). You could also just create a salt for each user and store it with the hash (similar to what the Unix /etc/shadow file does).
- First, hash the salt (e-mail address) and the password by themselves. This way, we end up with simple fixed-length strings, and we no longer have to worry about odd characters in either.
- Concatenate the two hashes, and hash them again.
So the complete formula to create our password hash would look like this (using SHA-1 as an example; you could also mix and match hashing algorithms):
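The recipe (hash the salt and password separately, concatenate, hash again) can be sketched in Python like this:

```python
import hashlib

def sha1_hex(text):
    return hashlib.sha1(text.encode()).hexdigest()

# hash = sha1( sha1(salt) + sha1(password) ), with the e-mail address as salt
def password_hash(email, password):
    return sha1_hex(sha1_hex(email) + sha1_hex(password))

# The same password yields different hashes for different users:
print(password_hash("alice@example.com", "letmein"))
print(password_hash("bob@example.com", "letmein"))
```

Because the inner hashes have fixed length, the final plain text is always well past the size where rainbow tables are practical.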
You could also add a secret in addition to the salt. If the secret is not stored in the database, it would not be easily reachable via a SQL injection exploit (yes, you can use SQL injection to read files, but that requires sufficient privileges). The formula would now look like this:
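A variant with a secret can be sketched as follows (the constant below is purely illustrative; it would live in application configuration, never in the database):

```python
import hashlib

SECRET = "some-application-secret"  # hypothetical; kept out of the database

def sha1_hex(text):
    return hashlib.sha1(text.encode()).hexdigest()

# hash = sha1( sha1(salt) + sha1(password) + sha1(secret) )
def password_hash(email, password, secret=SECRET):
    return sha1_hex(sha1_hex(email) + sha1_hex(password) + sha1_hex(secret))

print(password_hash("alice@example.com", "letmein"))
```

An attacker who dumps only the database now lacks one of the three inputs and cannot brute force the hashes offline.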
Introducing minimum password strength requirements may also help, but can also lead to annoyed users. The best defense against brute forcing a password hash is a long password using a diverse character set. Even if you do not require strong passwords, you should at least not restrict the length or the character set. Hashing the password as soon as received (for example as part of input validation) will help mitigate any risks due to odd characters a user may use. Passwords should never be echoed back to the user.
As a user, how do you know if your password is hashed? There isn't a bullet proof way to figure it out. But there are some indications that it is not hashed:
- The length or character set you are allowed to use is limited. If the password is hashed, it doesn't matter how long the original password was.
- As part of password recovery, the site returns your old password (very bad... and, well, proof that the password was not hashed).
If you want to read more about this, Heise.de recently published a nice article about password hashing. This is of course also a topic we cover in our secure coding / defending web application classes, like DEV522.
NIST is currently finalizing a competition to come up with a new hashing standard, which will be known as SHA-3. The winner should be announced in 2012. Until then, SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512 are suggested. I usually recommend sticking to these standards: as programming languages change, it is likely that they will keep supporting these standard algorithms, while less popular algorithms may be dropped.
I was teaching at a university this week. It was a pretty normal class until I heard the following from one of my students:
What happened to Google?
A couple of seconds later, many people started to make the same complaint, and a minute after that nobody had access to Google. I typed the Google URL on my computer and got the following screen:
The first thing I thought was that Google had suffered an attack. Looking further, I queried for the current Google IP and found the following:
When I looked up the owner of that IP address, ARIN said it was not, in fact, Google. I performed an nslookup from another domain and got the correct IP address for Google:
At this point I realized we were victims of a DNS cache poisoning attack. Since the network admin was not in his office (the class was at night), there was nothing I could do but wait for the DNS cache to expire.
How this attack works and how we can protect ourselves
The DNS process works as follows to resolve an IP address from a fully qualified domain name (FQDN):
- The client sends a query to the internal DNS server looking for the IP address of a machine name.
- The internal DNS server performs recursion and, if the answer is not present in its cache, looks up the IP address on the Internet from the authoritative nameserver for the domain.
- The authoritative nameserver answers with the requested IP address.
- The internal DNS server returns the IP address to the client.
The attack works as follows:
- The attacker queries the target DNS server for a FQDN not present in the cache.
- The target DNS server performs recursion and looks up the IP address on the Internet from the authoritative nameserver for the domain.
- The attacker floods the target DNS server with fake responses for the query.
- The target DNS server updates its cache and begins serving the fake IP address every time the FQDN is requested.
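The race between the attacker's flood of fake responses and the real answer can be illustrated with a toy resolver cache in Python (entirely hypothetical, with made-up addresses): whichever answer is accepted first is cached and served until its TTL expires.

```python
import time

cache = {}  # name -> (ip, expiry timestamp)

def accept_answer(name, ip, ttl):
    # A naive resolver caches the first matching answer it accepts; a
    # spoofed flood that guesses the transaction ID and source port wins
    # the race against the legitimate authoritative nameserver.
    if name not in cache:
        cache[name] = (ip, time.time() + ttl)

def resolve(name):
    entry = cache.get(name)
    if entry and entry[1] > time.time():
        return entry[0]
    return None  # cache miss or expired; a real resolver would recurse again

accept_answer("www.example.com", "203.0.113.66", 3600)   # attacker's fake reply lands first
accept_answer("www.example.com", "198.51.100.10", 3600)  # real answer arrives too late
print(resolve("www.example.com"))  # 203.0.113.66, served until the TTL expires
```

This is also why waiting for the cache to expire was the only option that night: every client asking that resolver got the poisoned entry until the TTL ran out.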
How do we protect ourselves from the attack?
- Use the latest version of your DNS server (I really like BIND), as it randomizes the source port of your queries.
- Do not allow recursion from outside of your network. Allow it only from your corporate network computers.
- Use DNSSEC. The root zone has been signed since July 15, 2010, and the protocol allows you to authenticate valid records from a domain's zones.
Any other protection measure you want to share with us? Please use our contact form.
Lately there has been a surge in spam. This past week I've received four messages that impersonate a message from Facebook. The messages are actually a phishing attempt to sell you some drugs. They are very "Facebook"-like, and from an unsuspecting email recipient they would likely capture a click-through. I followed the links and found dead pharmacy pages. It appears there is a spam campaign to sell meds through phishing emails.
A snapshot of one of the emails is below, and all of the emails had a consistent link inside. The links were as follows. The ultimate destinations never loaded and appear to have been removed as of this writing. The pharmacy URLs were all on the same IP block, so someone has caught up with this batch. Be vigilant and on the lookout for more.
hxxp://hajayanee.com/directories.html -> hxxp://controlpills.net
hxxp://carrosserieaerni.ch/ascension.html -> hxxp://medicarerxdrugstore.com
hxxp://mallorcaso.com/postprocessor.html -> hxxp://pillpillspharmacy.net
hxxp://firstclassmotorsports.com/screeching.html -> <no response received>
Feel free to tell us about any of your phishing spam emails.
ISC Handler on Duty
One of the assertions made during the recent run of high-profile attacks was that all networks are vulnerable, and that the groups behind these attacks either had, or could have had, access to many more systems if they wished.
Several articles expanded on this assertion and, using the recent compromises as evidence, considered this a failure of information security. I would like to question the conclusion that the recent attacks prove that all networks are vulnerable, or that they prove a large-scale failure of information security.
First of all, let me state my philosophy of information security: I don't believe it is the goal of information security to prevent every single breach, any more than it is the goal of a guard at a bank to prevent every single bank heist.
As an information security professional, your goal should be to mitigate risks to a level that is small enough to be acceptable to the business. It is much more about risk management than about avoiding every single risk.
With that focus on risk management, information security itself becomes a solvable problem.
But back to Lulzsec. What did Lulzsec prove? They proved that there are insecure networks. They did not prove that all networks are insecure. Lulzsec took very large targets ("the government", "banks", "on-line gaming") and rattled doors until they found an open one.
How do you protect yourself against that? First of all, you don't. Let's get back to the basics of risk: "the probable frequency and probable magnitude of future loss". We can address risk in two ways:
- Reduce the probable frequency of a loss
This comes down to reducing your attack surface and hardening the remaining castle. Most organizations suffer from the diffusion of confidential information. The better you are able to compartmentalize and limit access to confidential information, the less likely it is that some of it will leak. The tricky part, in my opinion, is the labeling or classification of information. This can be difficult and labor intensive, and classifications may also change over time.
- Reduce the probable magnitude of a loss
Limit the information you store to information the business needs. Consider information a liability, not just an asset. Storing credit card numbers will lead to more purchases. But will it be enough to justify the risk?
In the end, doing business on-line is to a large extent about trust. The difficult part is that trust is asymmetric: it is much more easily lost than gained. Last week, when someone announced that Lulzsec may have compromised UK census data, the overall sentiment was to assume the announcement was true, even though there was no evidence to prove it, and Lulzsec later stated that the claim was wrong.
I wrote a diary a while back about process maturity called "Countdown to Tuesday", using people's patching processes as an example. I want to use this diary as a catalyst to understand how people tell their boss, or indeed their CxO-level managers, how well, or badly, they are doing with security response.
Incident response is a classic example of where your ability to respond needs to be measured, and your success or failure in responding presented to the powers that be.
Given that we have some key steps during our incident lifecycle, we can look at gathering data, performing analysis and then producing reports based on that data. The SANS incident response lifecycle is based on PICERL, short for Preparation, Identification, Containment, Eradication, Recovery and Lessons learned.
So when your incident response process is triggered, we have clearly entered the Identification phase. But how do you keep track of when you enter Containment, or Eradication, or indeed Recovery?
If you can measure the time between when an incident happened and when it was identified, you have a metric which shows how good your security monitoring is. If you come up with a metric for how quickly you go from Identification to Containment, you can then show how well your team is working. If you can work out your mean time to achieve something, then you can identify and focus in on the steps in your process where it went wrong, or didn't operate effectively. You can trend, and produce further statistics showing the cost of the incident if your business has an average cost for being unable to do business.
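As a sketch of what such metrics might look like (the timestamps are invented for the example), simple datetime arithmetic is enough to get started:

```python
from datetime import datetime

# Sketch: phase timestamps recorded for one incident (made-up values).
incident = {
    "occurred":   datetime(2011, 6, 20, 2, 15),
    "identified": datetime(2011, 6, 20, 9, 0),
    "contained":  datetime(2011, 6, 20, 13, 30),
}

# Time-to-detect reflects the quality of your security monitoring;
# identification-to-containment reflects how well the team works.
time_to_detect = incident["identified"] - incident["occurred"]
time_to_contain = incident["contained"] - incident["identified"]
print("Time to detect: ", time_to_detect)   # 6:45:00
print("Time to contain:", time_to_contain)  # 4:30:00
```

Record these per incident and averaging them across a quarter gives you the mean times and trends worth presenting to management.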
But we're getting ahead of ourselves here. How do you record this information? Drop me a note via the contact form, and I'll add it to my next diary on how you could do it using free tools and some scripts, and produce the statistics you'll want to present.
Apple has released Mac OS X 10.6.8 and Security Update 2011-004. These updates address 39 CVE entries. They cover many components of the core operating system and many popular applications, so you should probably plan to update ASAP. The bulletin went out on Apple's security-announce mailing list; the security update web page doesn't have the details yet, but they should be there shortly.
Jim Clausing, GIAC GSE #26
jclausing --at-- isc [dot] sans (dot) edu
SANS FOR558 Network Forensics coming to central OH in Sep see http://www.sans.org/mentor/details.php?nid=25749
A former employee of Baltimore Substance Abuse Systems Inc. compromised his boss’ computer during a presentation and replaced some of the content with pornographic material. It is customary to have policies in place that require terminated employees to be escorted out of the building by either a security officer or a member of upper-level administration.
However, when it comes to terminating employees, this case highlights the importance of having a solid corporate termination policy. The actions of this former employee embarrassed the company during a presentation, but what if he had deleted business-critical data and trashed the backups? Or copied the business-critical data (i.e. financial data, client credit card data or employee information) and sold it to the highest bidder?
It is important to have a policy for limiting access to corporate technical resources after an employee has been terminated. Some basic steps include: disabling user account(s), changing or locking all the passwords the former employee had access to, disabling corporate e-mail access and locking down access to their personal workstation.
An email from HR, using a pre-configured template, to all key stakeholders, with a means of reporting back to HR confirming the work has been completed, would help prevent this kind of malicious activity. Of course, the account(s) should be monitored to detect potential unauthorized access. Do you have a similar horror story to share?
Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu
WordPress is currently investigating a series of "[...] suspicious commits to several popular plugins (AddThis, WPtouch, and W3 Total Cache) containing cleverly disguised backdoors. We determined the commits were not from the authors, rolled them back, pushed updates to the plugins, and shut down access to the plugin repository while we looked for anything else unsavory." 
If you have a WordPress.org, bbPress.org or BuddyPress.org account, you will be required to choose a new password. You can change your password here.
Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu
Mozilla released Firefox 3.6.18 for Windows, Mac and Linux, fixing several security and stability issues. Mozilla also released Thunderbird 3.1.11, fixing vulnerabilities reported in version 3.1.10.
Mozilla released Firefox 5.0 for Windows, Mac and Linux and it is the "First Web Browser to Support Do Not Track on Multiple Platforms." This version includes more than 1,000 improvements and performance enhancements. It is available for download here.
Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu
I posted a piece entitled "Log files - are you reviewing yours?" as a gentle reminder that, despite how overworked we are, log files should be mined for data as they form a critical part of our defences. I was lamenting that perhaps some of the breaches we've seen splashed across the media may have been prevented if more attention had been paid to the logs. Well, a link to the Western Australian Auditor General's report on 15 government departments' information systems, published on the 15th of June, turned up in my inbox.
It makes for eye-opening and fairly concerning reading on the poor state of IT security in these government agencies. I suggest you take the time to read through the 32 pages of the report and make your own notes. Some comments in the report will antagonize professional penetration testers reading it, so make sure you haven't had too much caffeine first. The report is split into two sections, and I'm just going to focus on the first: the external attack portion.
The meat of the external attack portion of the report covers a third party, using common security tools, running very aggressive and obviously hostile external scans against the targeted departments. The heavy scanning uncovered a number of exploitable known vulnerabilities. One of the most damning statements is that no one noticed one web application system actually starting to slow down from millions of username and password brute force attempts. If the security team missed this, that's one thing, but the operations team missing the system being hit is another. Operations teams tend to pick up on "wrongness" pretty quickly, either from help desk calls or their own performance alerts. I wonder, if someone did notice, would they have told the security team? If you've ever tested your own systems, any obvious noisy, repetitive attacks should stand out and scream for attention against normal log entries. To me, this highlights the total misuse of monitoring and reporting on logs that could so easily provide warning of an attack and the attacker.
I'm impressed that this type of report is in the public domain, and by the blunt approach it has taken to highlight the key findings*. This open approach is a marked change from the old misdirection of "These aren't the droids, er, massive problems we're pretending don't exist (which anyone with Nessus 0.99.10 could find) you're looking for".
It is easy, and important, to take a number of positive actions and lessons learnt from the report, rather than treat it as another stick to be beaten with. Show this report, and any of the numerous breaches, to management, and use resources such as http://datalossdb.org/ to factor in the financial cost of a breach to your company. This may sway those who control your time and the purse strings to make time for simple, effective security steps, such as testing and log review, and even a bit of training**
Here are a couple of points I've gotten from reading this report:
- Know what normal traffic and logs look like for your environment
- Test your own systems with freely available and widely used scanning and vulnerability assessment tools to see what shows up in the log files ***With PERMISSION only***
- Test username and password brute force attack tools against your publicly facing systems and see what shows up in the log files ***With PERMISSION only***
- Find a simple, automated process to review log files for the alerts or events generated in these scans
- Let people in your company know who to call if they see a possible security incident
- Make sure you have an incident response plan first, then one that works, and finally one that is understood, endorsed and signed off by management
- Show other IT staff what attacks look like and how they can affect system performance
As a final note, if you get an audit result like this on your systems, use it to highlight business risks and produce a plan on how to effectively and realistically address the points and issues raised. IT security is part of the business, there to support and protect it, so fixing it is a group effort involving the business, not just the poor, misunderstood IT security sap in the corner. To the folks in those fifteen agencies trying to get their systems and processes secured: keep at it, work through what's been reported, and next year's audit will paint a very different picture.
As always, if you have any suggestions, insights or tips please feel free to comment.
[2a] Strategies to Mitigate Targeted Cyber Intrusions pdf mentioned in the report http://www.cert.gov.au/www/cert/RWPAttach.nsf/VAP/(3A6790B96C927794AF1031D9395C5C20)~intrusion_mitigations+pdf+for+CERT+website.PDF/$file/intrusion_mitigations+pdf+for+CERT+website.PDF
* Despite the warm and fuzzy thanks that are recorded in the agencies' responses, I suspect there may have been a number of closed door meetings with enraged management waving the report and equally annoyed security teams waving their own reports saying we told you this already.
** http://www.sans.org/security-training/courses.php - learn something new for every, and any, security professional :)
Chris Mohan --- Internet Storm Center Handler on Duty
An attack on web authentication authority StartSSL has led to them suspending their services and stopping issuance of any further certificates.
On the landing page of StartSSL's web site they offer this information:
Due to a security breach that occurred at the 15th of June, issuance of digital certificates and related services has been suspended.
Our services will remain offline until further notice.
Subscribers and holders of valid certificates are not affected in any form.
Visitors to web sites and other parties relying on valid certificates are not affected.
We apologize for the temporary inconvenience and thank you for your understanding
The Register web site has more information on the story 
Chris Mohan --- Internet Storm Center Handler on Duty
The media is full of security horror stories of company after company being breached by attackers, but very little information is actually forthcoming on the real details.
As an incident responder I attempt to understand what occurred and learn from these attacks, so I'm always looking for factual details of what actually happened, rather than conjecture, hearsay or pure guess work.
Back in April, Barracuda Networks, a security solution provider, was compromised and lost names and email addresses. They disclosed the breach and then took the admirable step of publishing how the breach took place, with screenshots of logs, and the lessons they learnt from the attack.
I hope that those unfortunate enough to suffer future breaches are equally generous in sharing their logs and lessons learnt, for the rest of us to understand and adapt for our own systems. The attackers certainly share their tips and tricks, as anyone looking at the chat logs uploaded to public sites like pastebin can attest. We need the very smart folks looking after security at these attacked companies to step up and take the time to write up what really happened, making it accessible for the rest of us to learn from.
Seeing the events of an attack recorded in log files is a terrible, yet beautiful thing. To me it means we, as defenders, did one thing right, since detection is always a must. If the attack couldn't be or wasn't blocked, then being able to replay how a system was compromised is the only way to stop it from occurring again.
Log review should be an intrinsic routine performed by everyone, daily if possible, whether it be a visual, line-by-line review* or done with grep, a simple batch script or a state-of-the-art security information and event management system that parses the logs into a format even a novice IT person can easily read, digest and understand. This should be part of the working day for all levels of support and security staff; drinking that morning coffee while flicking through the highlights of your systems should be part of the job description.
Log files need to be easy to understand and to extract information from. As someone who works with huge Windows IIS log files, automation is your friend here. Jason Fossen's Search_Text_Log.vbs script is a great starting point for scripters, and for a more dynamic analysis tool, Microsoft's Log Parser is well worth taking the time to get to grips with. As an example of the information you can extract from IIS logs, have a read here to see how easy it is to pull pertinent data; this blog piece also has an excellent way to get visual trending of IIS data.
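Even where a full SIEM or Log Parser isn't available, a few lines of script can turn a raw log into a digestible morning summary. As a minimal sketch (assuming a W3C extended format IIS log that declares its columns in a #Fields: directive; field names vary with your logging configuration), counting HTTP status codes might look like:

```python
from collections import Counter

def summarize_iis_log(path):
    """Count HTTP status codes in a W3C extended format IIS log.

    Relies on the #Fields: directive to locate the sc-status column;
    adjust the field name if your logs use a different schema.
    """
    fields = []
    counts = Counter()
    with open(path) as log:
        for line in log:
            line = line.rstrip("\n")
            if line.startswith("#Fields:"):
                # e.g. "#Fields: date time cs-uri-stem sc-status"
                fields = line.split()[1:]
                continue
            if line.startswith("#") or not line:
                continue  # skip other directives and blank lines
            parts = line.split(" ")
            if "sc-status" in fields and len(parts) == len(fields):
                counts[parts[fields.index("sc-status")]] += 1
    return counts
```

A spike in 404s or 500s in the resulting counter is often the first hint that something automated is probing the server.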
If log analysis isn't something you do much of, then a marvellous way to get some practice in is this Honeynet.org challenge.
It's important to note that logging has to be enabled, set up and reviewed on your systems to produce useful information. Multiple logging sources should use the same time source to make correlation easy, so take the time to make sure your environment is configured and logging correctly before you need to review the logs for an incident.
As always, if you have any suggestions, insights or tips please feel free to comment.
 Download log parser from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en
* for your own time management, eyesight and, frankly, sanity, try to avoid this.
Chris Mohan --- Internet Storm Center Handler on Duty
Another gaming company had customer data illegally accessed by hackers who copied e-mail addresses, encrypted passwords and birth dates stored in the Sega Pass database. Sega confirmed that no personal payment information was taken because they use external payment providers. On Friday, they indicated that all user passwords were reset and access was temporarily suspended. As of this writing, the Sega Pass is still unavailable to users with a message stating "We hope to be back up and running very soon."
Going back to Rob's diary on Incident Response, it looks like Sega was well prepared (Preparation) and did a pretty good job at quickly informing its customers that an incident had occurred; they identified that their customer data had been compromised (Identification) and immediately isolated the incident (Containment).
Not sure why at this point so many video game vendors (Nintendo and Sony) have become the prey of hackers. In this case, no credit cards were involved; however, we cannot say the same for potential spam when 1.29 million email addresses have been stolen; that is a sizable target.
Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu
If you were ever curious: yes, the handlers do participate in events that do not involve keyboards, packet analysis tools or malware reverse engineering. At an event here in Phoenix, AZ, USA it became clear that a piece of technology in development deserves some attention. As a lead-in to the discussion: the event clearly posted "no filming". The security staff were very helpful in taking photos of folks during intermission and when the event was not taking place, but vigilant in telling participants to stop during the event itself.
This may seem like a soft subject for a diary piece, but each of the handlers is entrusted with access to information that our readers post. In turn we all hold each other and ourselves to a high level of professional and personal ethics. But not everyone has the same opinion on what is right or what is wrong. That brings me to the technical piece of this entry that is relevant to the above topic.
Fox News is running a story about how Apple has filed a patent for technology that can stop iPhones from filming at live events. After some searching I found a good source explaining the patent in more detail.
In summary, the device will be able to receive commands through the infrared receiver. Keep in mind, Apple has several patents that never seem to surface as technology but this one, due to events last night, strikes as a concept to follow.
At what point do you stop owning your technology? Opposite of that where is the line to cross when it comes to protecting intellectual property?
Considering the world of extreme disclosure we are in, technology like this could be greatly useful in classified spaces and areas of high sensitivity. For security operators that control sensitive spaces this is a technology that could be exciting and useful, but be aware that it could also be a sign of the times to come.
--- ISC Handler on Duty
email: richard at isc dot sans dot edu
This week brought a number of headlines related to Bitcoin--a peer-to-peer online currency that seems to be increasing in popularity. From the security perspective, the rise of Bitcoin offers a peek at the type of financial transactions that may need to be safeguarded in the future and also provides insight into the criminal activities associated with such transactions.
Malware has appeared that steals Bitcoin wallets, the time is near when botnets will be used for Bitcoin mining, and attackers are probably considering whether weaknesses in the Bitcoin design and implementation might be used to game the Bitcoin market. Just as Friendster was the precursor to today's online social networks and Napster foreshadowed modern online music distribution models, so too Bitcoin might be a sign of upcoming approaches to distributed online financial transactions.
Here are a few articles for coming up to speed on Bitcoin and the recent incidents associated with it.
Getting Started With Bitcoin
- Become familiar with the key Bitcoin concepts--what Bitcoin is, why it exists and how it is used--by reading the Bitcoin Wikipedia entry.
- Understand some of the reasons for Bitcoin continuing to increase in value by reading SmartMoney's perspective on the currency's growth streak.
- Take a look at the list of vendors who accept Bitcoin as a form of payment or who can exchange Bitcoins into traditional currencies.
- Consider the perspective that the economic factors behind Bitcoin might be unsustainable and could resemble a Ponzi scheme. Read a related perspective on why Bitcoin might be a poor idea.
- Understand the notion of Bitcoin mining--generating new Bitcoins by solving cryptographic problems. Consider the likely scenario of compromised computers being used for Bitcoin mining--a malicious practice that is not yet widespread, yet will inevitably rise in popularity.
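To make the mining idea above concrete, here is a toy sketch of hashcash-style proof of work. This is not the real Bitcoin protocol (which hashes a structured 80-byte block header against a 256-bit difficulty target), but the brute-force search loop is conceptually the same: keep trying nonces until the hash falls below a target.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    """Toy hashcash-style proof of work.

    Finds a nonce whose double SHA-256 (Bitcoin also double-hashes)
    interprets, as an integer, to less than a target with
    `difficulty_bits` leading zero bits. Raising the difficulty by one
    bit roughly doubles the expected work.
    """
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1
```

The asymmetry is the point: finding the nonce takes many hash attempts, but anyone can verify it with a single hash, which is also why stolen CPU cycles on a botnet are directly convertible into mined coins.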
Recent Bitcoin Incidents
- Read about Silk Road--an online marketplace for drugs such as LSD and Cannabis--that only accepts Bitcoin as the form of payment. This story brought Bitcoin to the attention of many people outside the tech community, including lawmakers.
- Learn the details of the theft where 25,000 Bitcoins, potentially worth $500,000, were reportedly stolen from a person's PC. (Maybe the victim exaggerated the size of the stolen sum?)
- Understand the nature of a recently-discovered trojan that was designed to steal the victim's Bitcoin wallet from the infected Windows computer. Also, read the forum discussion to understand how this malware was probably being distributed. (If you own Bitcoins, remember to safeguard the wallet.)
Potential Bitcoin Implications
- Read the EFF's perspective on Bitcoin's potential to "offer the kind of anonymity and freedom in the digital environment we associate with cash used in the offline world."
- Consider the opportunities for financial arbitrage if the Bitcoin market could be manipulated through the sale of a large quantity of Bitcoins at once.
The notion of Bitcoin as a distributed and anonymous form of currency is capturing the world's attention. The readers of this blog will find it particularly interesting to consider the implications of the role that such currency can play in the criminal marketplace and online attack activities.
Perhaps Bitcoin might be ahead of its time and maybe its design and implementation is flawed--we will know soon enough. Regardless, it is an idea that will inspire creative thinking in the space of online payments. In the words of Edward Z. Yang, "The future of Bitcoin depends on those who will design its successor. If you are investing substantially in Bitcoin, you should at the very least be thinking about who has the keys to the next kingdom."
(This diary is based on the text originally published on my blog.)
-- Lenny Zeltser
All started with iPhone...:)
Some days ago I had to replace the battery in my wife's iPhone, and afterwards I noticed that the WiFi was not working properly anymore... so I decided to check Google for pictures of the iPhone antenna, so that I could open it again and verify whether I had left anything loose (which I later found to be the case...) :) .
A regular search for "iphone wifi antenna" (BE CAREFUL) got me several hits... and as Google is proactive, it also showed some example pictures related to my search.
Well, I decided to view one of the pictures and clicked on it. It started to load and was suddenly redirected to another page, which looked like my Finder screen:
That is exactly what happens on the next screenshot:
Note that everything is really well crafted to look real. On Windows systems they use real detection names for the fake trojans "found" on the system. Here they use detection names that resemble Mac trojan names, which include OSX in the name.
Whenever you press Cancel or Remove All, it will push the "anti-malware.zip" file, which is actually a .DMG file (the disk image format used by Mac OSX).
At the time of this diary, only 17 of 42 AV engines on VirusTotal detect it, some as MacDefender, some as Downloader.OSX.
Pedro Bueno (pbueno /%%/ isc. sans. org)
Overview of the June 2011 Microsoft patches and their status.
|#||Affected||Contra Indications - KB||Known Exploits||Microsoft rating||ISC rating(*)|
|MS11-037||The MHTML (MIME encapsulated HTML) protocol handler is vulnerable to information disclosure through an XSS-like problem.
|KB 2544893||Publicly known vulnerability.||Severity:Important
|MS11-038||WMF processing by OLE allows for arbitrary code execution with the rights of the logged on user.
|OLE - WMF
|KB 2476490||No known exploits||Severity:Critical
|MS11-039||Input validation vulnerabilities in the .NET framework and the Silverlight implementations allow for arbitrary code execution with the rights of the logged on user.|
|.NET - silverlight
|KB 2514842||No known exploits||Severity:Critical
|MS11-040||Improper bounds checking in Microsoft Forefront Threat Management Gateway 2010 Client allows for arbitrary code execution in the context of the service.|
|KB 2520426||No known exploits||Severity:Critical
|MS11-041||An input validation problem in the parsing of OTF (OpenType Font) fonts in 64bit kernels allows for arbitrary code execution in kernel mode. This is remotely exploitable through file sharing, webdav, websites, email and more.
|KB 2525694||No known exploits||Severity:Critical
|MS11-042||Input validation problems in the Distributed File System (DFS) implementation allow for arbitrary code execution in the context of the service or denial of service (DoS) conditions.|
|DFS (Distributed File System)
|KB 2535512||No known exploits||Severity:Critical
|MS11-043||An input validation problem in the parsing of the responses to SMB requests allows for arbitrary code execution in the context of the service.
Replaces MS11-019 and MS11-020.
|KB 2536276||No known exploits||Severity:Critical
|MS11-044||An input validation problem in the JIT optimization of the .NET framework allows for arbitrary code execution in the context of the logged on user, and bypass security measures such as the CAS (Code Access Security) restrictions.
Replaces MS11-028 and MS10-060.
|KB 2538814||Publicly disclosed vulnerability.||Severity:Critical
|MS11-045||Multiple vulnerabilities in Excel allow for arbitrary code execution in the context of the logged on user.
Office for Mac versions are also affected.
Replaces MS11-021 and MS11-022.
|KB 2537146||No known exploits||Severity:Important
|MS11-046||An input validation vulnerability in AFD (Ancillary Function Driver) allows for privilege escalation and arbitrary code execution in kernel mode for logged on users.
|KB 2503665||Publicly disclosed vulnerability, Microsoft claims "limited, targeted attacks attempting to exploit the vulnerability"||Severity:Important
|MS11-047||A Denial of Service (DoS) condition is possible where an authenticated user of a guest system can cause a denial of service on the host system.
|KB 2525835||No known exploits.||Severity:Important
|MS11-048||A parsing error in the SMB server can be used to cause a Denial of Service (DoS) condition.
|KB 2525835||No known exploits.||Severity:Important
|MS11-049||The XML editor can leak file content through nested XML external entities. The XML editor is part of InfoPath, SQL Server, and Visual Studio.
Replaces MS10-039 and MS09-062.
|KB 2543893||No known exploits.||Severity:Important
|MS11-050||Multitude of vulnerabilities in MSIE.
|KB 2543893||No known exploits.||Severity:Critical
|MS11-051||Active Directory Certificate Services Web Enrollment allows for a reflected XSS issue.|
|Active Directory Certificate Services Web Enrollment
|KB 2518295||No known exploits.||Severity:Important
|MS11-052||A VML memory corruption allows arbitrary code execution in MSIE with the rights of the logged on user. IE9 is not affected.|
|VML - MSIE
|KB 2544521||No known exploits.||Severity:Critical
We appreciate updates
US based customers can call Microsoft for free patch related support on 1-866-PCSAFETY
- We use 4 levels:
- PATCH NOW: Typically used where we see immediate danger of exploitation. Typical environments will want to deploy these patches ASAP. Workarounds are typically not accepted by users or are not possible. This rating is often used when typical deployments make it vulnerable and exploits are being used or easy to obtain or make.
- Critical: Anything that needs little to become "interesting" for the dark side. Best approach is to test and deploy ASAP. Workarounds can give more time to test.
- Important: Things where more testing and other measures can help.
- Less Urgent: Typically we expect the impact if left unpatched to be not that big a deal in the short term. Do not forget them however.
- The difference between the client and server rating is based on how you use the affected machine. We take into account the typical client and server deployment and usage of the machine, and the common measures people typically already have in place. Measures we presume in place are simple best practices for servers, such as not using Outlook, MSIE, Word etc. on them for traditional office or leisure work.
- The rating is not a risk analysis as such. It is a rating of importance of the vulnerability and the perceived or even predicted threat for affected systems. The rating does not account for the number of affected systems there are. It is for an affected system in a typical worst-case role.
- Only the organization itself is in a position to do a full risk analysis involving the presence (or lack of) affected systems, the actually implemented measures, the impact on their operation and the value of the assets involved.
- All patches released by a vendor are important enough to have a close look if you use the affected systems. There is little incentive for vendors to publicize patches that do not have some form of risk to them.
Swa Frantzen -- Section 66
As if we will not have enough work on reboot Wednesday, Adobe released their own patches today. The pre-announcement can be found here:
Swa Frantzen -- Section 66
My colleague Branko and I spent a lot of time reversing various FakeAV/RogueAV copies as we were quite interested in how they manage to constantly have < 5 detections on VirusTotal (and therefore successfully evade detection by normal anti-virus programs).
We noticed that various FakeAV versions use pretty advanced obfuscation, basically anything you can think of: anti-disassembly (destroying functions, opaque predicates, long ROP chains ...), anti-emulation, anti-VM, anti-debugging, even with some bugs of their own.
Branko spent a lot of time analyzing this to improve his Optimice plugin for IDA Pro. If you haven’t heard about it, and you spend a lot of time analyzing malware or reverse engineering binaries, be sure to check it at http://code.google.com/p/optimice/. It’s an amazing tool that can cut down on your time spent on reversing by an order of magnitude.
Below is a screenshot of what Optimice can do – on the left side you can see the original FakeAV code, while on the right you can see the same code after Optimice deobfuscated and optimized it. Much easier to analyze, isn't it?
Back to FakeAV now – time to explain the title of this diary. While reversing one of the FakeAV copies we noticed that under certain circumstances (when FakeAV is trying to update itself), it basically calls its own binary with a very interesting argument, as you can see in the screenshot below:
Those Harry Potter fans among you probably immediately noticed the argument BOMBARDAMAXIMUM which, according to some online references is “a spell that, being a stronger version of Bombarda, provokes explosions capable of bringing an entire wall down”. I’m not sure which wall this is about, but at least there is some sense of humor here.
If the argument is supplied, the binary calls two functions: the first one creates a couple of mutexes, while the second connects to a C&C server, sends some data and (probably – we couldn't confirm this since the C&C is down) updates itself. This part of the code is shown below:
Stay tuned, we'll post more interesting things in the next couple of weeks, including a paper.
The cloud means different things to different people. For some it is the new frontier, the way forward. For others it is outsourcing by a different name, with even less control over what happens in the cloud. In true security fashion, and one of my favourite answers: it depends. The reality however is that it is inevitable; in some aspect of your work you will come into contact with the cloud, or you will be asked to secure it.
So let's have a look at a few of the challenges in the cloud world, and if your weekend or Monday is as drab, wet and cold as mine, add your comments to the list. We'll try and keep it to pros and cons.
- Free up resources from performing menial tasks
- Access to resources at a price you can afford
- Affordable offsite storage or backup facilities: these are often cheaper in the cloud than you can provide yourself, especially for smaller businesses
- Quality content filtering solutions
- IDS/IPS services
- Fewer limitations
- e.g. online backups: if you need more space, you purchase it and it is there
- You do not necessarily know where your data is
- Many cloud providers have in their contracts that they can move your services about. So if it is important that your services are delivered locally, some cloud providers may not be what you are after.
- How do you get your data back when the provider refuses access or goes bust?
- Companies go bust. If your core data resides with that company, how do you get it back?
- Who has access to your information?
- The cloud is a shared environment. There will always be at least two parties that have access to your information: you and the provider.
- Legal entities: depending on the jurisdiction you are in, different legal entities may have access to your data.
So that has us started. If sending through comments please state clearly at the start whether your comment is Pro or Con.
I just had a chance to flip through our IPv6 logs from yesterday. There was a significant, but not huge, increase in hosts accessing the site via IPv6. Usually we get maybe 200 or so hosts via IPv6; yesterday we got around 270.
Interestingly, about 25% of the traffic (before IPv6 day as well as during it) is due to hits on our RSS feed. I will try to follow up on this to see why we get so much IPv6 traffic to it.
After an initial look at the logs, I didn't see any attacks via IPv6 against our web application.
A reader emailed in with a question: in short, which is currently the most secure browser, and how do you stay up to date on the different browsers? Given that Chrome had an update today, it seems fitting to post the answer as a diary.
Before the browser war ignites, let me be the first to say: in my opinion, "it depends." Chrome is regarded as a very safe and secure browser, but given the number of lines of code in any browser architecture it is hard to say. There has been some great research on lines of code in different systems, and at that level of complexity errors are bound to occur. There are several schools of thought and many books on this subject, but what I am getting at here is complexity and trust. At some point you have to trust the development team that wrote the browser code, the operating system you are running and how you have deployed your browser.
Second, the browser, or the technology, is only part of the matter. You still have phishing and the human factor. Even on the most secure platform the user can be tricked.
Another commonly accepted deployment strategy is Firefox with the add-ons NoScript and Adblock. Researching your specific deployment scenario and resources is the key to identifying what works in your environment. Infoworld had a great article on securing different browser types; it is a little old but still relevant.
The Pwn2Own contests held at the CanSec conferences can lead to some good reading on this subject.
In the end, a huge browser war will ignite over which is the most secure, but as organically as features and code have grown, it is arguable that the best way to secure your environment is layers of defense. Finally, check out the SANS Reading Room for papers on the subject, specifically a paper written by one of SANS' GIAC students.
And to our Reader who wrote in, stand by for the heavy opinions on the subject. To our readers, please comment on your experiences or how you stay current.
--- ISC Handler on Duty
Email richard at isc dot sans dot edu
If you have not seen it, Chrome has been updated to version 12.0.742.91, and this release brings some nice updates. You can check the official blog post by Google for a long list of enhancements and security fixes. Of particular interest are the safe browsing enhancements: Chrome has added some malicious file detection. Not sure if this is in response to the exploit claim that was made some months back, but one could speculate. If you are running Chrome it is advised that you update when the new version becomes available.
--- ISC Handler on Duty
We keep getting reports from readers about spam being sent from legitimate Hotmail accounts. Like web mail systems in general, Hotmail accounts are targeted as a way to send spam from "trusted" sources: if an e-mail is received from a friend or relative, you are much more likely to open and read it.
These accounts are compromised in many ways, most commonly these days via phishing. The question always is whether it is actually a compromised account, or just someone spoofing the "From" address.
Hotmail adds some characteristic headers that can be used to identify the source as Hotmail. While they may of course be faked, they allow you to narrow down the chances of the account being compromised.
You should see a "Received" header from a hotmail.com host, using Microsoft SMTPSVC. If the e-mail was posted via the web interface, you should also see an "X-Originating-IP" header with the IP address of the sender. Here are some sample headers from an e-mail I sent to myself via Hotmail, using the web interface:
Received: from snt0-omc2-s38.snt0.hotmail.com (snt0-omc2-s38.snt0.hotmail.com [188.8.131.52])
Received: from SNT112-W36 ([184.108.40.206]) by snt0-omc2-s38.snt0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675);
I obfuscated the X-Originating-IP header.
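A quick way to automate this check is to let a script pull the relevant headers out of a raw message. The sketch below uses Python's standard email module; the message is modeled on the sample headers above, and the X-Originating-IP value shown is a made-up documentation address, not a real sender:

```python
import email

# Sample message modeled on the headers above; [203.0.113.7] is a
# placeholder address from the documentation range.
raw = (
    "Received: from snt0-omc2-s38.snt0.hotmail.com "
    "(snt0-omc2-s38.snt0.hotmail.com [188.8.131.52])\n"
    "Received: from SNT112-W36 ([184.108.40.206]) by "
    "snt0-omc2-s38.snt0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675);\n"
    "X-Originating-IP: [203.0.113.7]\n"
    "From: friend@example.com\n"
    "Subject: hello\n"
    "\n"
    "body\n"
)

msg = email.message_from_string(raw)
received = msg.get_all("Received") or []

# Did the mail pass through a hotmail.com host running Microsoft SMTPSVC?
via_hotmail = any("hotmail.com" in r and "SMTPSVC" in r for r in received)

# Web-interface submissions should carry the sender's IP in this header.
originating_ip = msg.get("X-Originating-IP")

print(via_hotmail, originating_ip)
```

Remember the caveat above: headers can be forged, so treat this as a quick triage signal, not proof of compromise.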
Next question we get: what to do if you find out a friend's Hotmail account was compromised? If your friend is "lucky", all that happened was a phishing attack. Your friend only needs to change the password (and, of course, on all sites where he uses the same password). Worst case: your friend is infected with malware that stole the password. Point the friend to some decent anti-malware detection or, if you are a real good friend, help with the cleanup.
IPv6 day officially started at midnight GMT. Over the next 24 hrs, a number of large web sites will be reachable via IPv6. For example Google, Yahoo and Facebook added AAAA records.
You can check whether you are able to receive the AAAA records with these nslookup commands:
nslookup
> set type=AAAA
> www.facebook.com
Non-authoritative answer:
www.facebook.com has AAAA address 2620::1c08:4000:face:b00c:0:2
The next 24 hrs bring a unique opportunity to test IPv6 and experiment with it. I recommend that you set up at least a test system and attempt to connect to IPv6 via a tunnel broker. You may also be able to use auto-configured 6-to-4, but it tends to be less reliable. See the end of this article for a number of free tunnel brokers.
Things to test:
- ping Google: on unix, use ping6 www.google.com, on Windows, ping -6 www.google.com
- measure latency via IPv4 and IPv6 and compare.
- test if you can reach various IPv6 sites (http://isc.sans.edu has been dual stack for a while now)
- can you detect the traffic with whatever tools you use (snort, tcpdump, windump, wireshark...)
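If you prefer scripting your checks over running nslookup and ping by hand, something along these lines can test AAAA resolution and measure a rough connect latency for comparison across address families (a sketch; port 80 and the 5-second timeout are arbitrary choices):

```python
import socket
import time

def aaaa_records(host):
    """Return the IPv6 addresses published for a host ([] if none)."""
    try:
        infos = socket.getaddrinfo(host, 80, socket.AF_INET6,
                                   socket.SOCK_STREAM)
    except socket.gaierror:
        return []
    return sorted({info[4][0] for info in infos})

def tcp_latency(host, family):
    """Rough TCP connect time to port 80 over the given address family
    (socket.AF_INET or socket.AF_INET6)."""
    addr = socket.getaddrinfo(host, 80, family, socket.SOCK_STREAM)[0][4]
    start = time.monotonic()
    with socket.socket(family, socket.SOCK_STREAM) as s:
        s.settimeout(5)
        s.connect(addr)
    return time.monotonic() - start
```

Comparing `tcp_latency(host, socket.AF_INET)` against `tcp_latency(host, socket.AF_INET6)` for a dual-stacked site gives a crude version of the latency comparison suggested above; tunneled IPv6 will often show the extra hop.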
More information about IPv6 day:
RSA issued a press release, offering to replace all tokens if a customer asks for it. As an alternative, RSA also offers to implement additional authentication monitoring.
Aside from the press release, and an interview with the RSA CEO, there have not been any details about how this would work or how long it will take. However, RSA states that this will cover all customers, even if RSA considers them not at risk.
Many of the recent high profile attacks follow a similar pattern. First, a web application is compromised using SQL injection. Next, the attacker dumps the database using the SQL injection vulnerability.
Once the attacker has a hold of the database, the attacker will search it for passwords. In some cases, the password was not hashed, and in other cases, the hash was brute forced. The attacker then used the password to try and breach other accounts.
I will try to write up a few diaries discussing steps to defend against the basic weaknesses exploited by these attacks:
- SQL Injection
- Unhashed or weak passwords
- Password reuse.
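On the second weakness, the standard defense is a salted, deliberately slow hash, so a dumped database cannot be brute forced cheaply. A minimal sketch using PBKDF2 from Python's standard library (the iteration count here is an illustrative choice; pick it to make hashing measurably slow on your hardware):

```python
import hashlib
import hmac
import os

ROUNDS = 100_000  # illustrative; tune so one hash takes a noticeable time

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest) using salted PBKDF2-HMAC-SHA256.

    The per-user random salt defeats precomputed rainbow tables; the
    iteration count makes each brute-force guess expensive.
    """
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
    # constant-time comparison avoids leaking how many bytes matched
    return hmac.compare_digest(digest, expected)
```

Had the breached databases stored passwords this way, the attackers' third step (reusing recovered passwords on other accounts) would have been far harder.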
In this first post, we will take a look at SQL injection.
The Tool: Havij
A few times before, I have shown some of the attacks we see against the ISC website. One notable change over the last couple of years is an increase in SQL injection attacks. In the past, remote file inclusion attacks dominated. Now, SQL injection attacks have increased substantially, in particular attacks using the tool "Havij".
Havij is a simple Windows GUI tool to automate SQL injection attacks. Its capabilities are similar to tools like Absinthe and sqlmap. Personally, I think sqlmap is the more capable tool, but it is not as easy to use as a click-kiddie friendly tool like Havij. Havij is distributed by itsecteam, an Iranian security company. The word "Havij" translates to "carrot" and indeed, Havij uses a carrot as its icon. Havij works fine with simple GET requests. It does support POST, but in my limited testing that appears to be less reliable. In its default configuration, Havij is easily identified by its user agent:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727) Havij
The attack method is pretty straightforward. Havij injects a "UNION SELECT" statement and keeps adding additional fields to the union query to work out how many columns are required. Each statement selects static "random" hex strings to make them easy to identify in the response.
GET /diary.html?storyid=999999.9+UNION+ALL+SELECT+ 0x31303235343830303536%2C0x31303235343830303536--
Again a technique used by other tools as well.
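To make the column-counting step easier to recognize in your own logs (this is for understanding the traffic, not for reproducing the attack), here is a sketch of how such a sequence of probes is built up; the hex marker is the string seen in the request above:

```python
def union_probes(base_url, max_columns=5, marker="0x31303235343830303536"):
    """Illustrate UNION-based column counting.

    The tool appends one more copy of a fixed, recognizable hex string
    per attempt; when the page echoes the marker back, the attacker
    knows how many columns the original SELECT returns. You will see
    this pattern as a run of near-identical requests in access logs.
    """
    for n in range(1, max_columns + 1):
        cols = ",".join([marker] * n)
        yield f"{base_url}+UNION+ALL+SELECT+{cols}--"
```

A burst of requests that differ only in the number of repeated marker columns is a strong log signature of this technique.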
Of course the best defense is to avoid SQL injection vulnerabilities in the first place. Did I mention yet that you should use prepared statements whenever possible? That and decent input validation will pretty much eliminate the problem.
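As a minimal sketch of what "prepared statements" means in practice (using Python's built-in sqlite3 purely for illustration; every serious database driver offers the same placeholder mechanism):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE diary (storyid INTEGER, title TEXT)")
conn.execute("INSERT INTO diary VALUES (1, 'SQL Injection')")

# A hostile "storyid" of the kind Havij sends:
user_input = "1 UNION ALL SELECT 0x31, 0x32--"

# Vulnerable pattern (do NOT do this): string concatenation lets the
# input rewrite the query itself:
#   "SELECT title FROM diary WHERE storyid = " + user_input

# Prepared statement: the driver binds the value as data, never as SQL,
# so the injection attempt is just an odd value that matches no rows.
rows = conn.execute(
    "SELECT title FROM diary WHERE storyid = ?", (user_input,)
).fetchall()
print(rows)
```

With the placeholder, the attacker's UNION text can never reach the SQL parser, which is exactly why prepared statements plus input validation pretty much eliminate the problem.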
Now, I also know that you probably have plenty of legacy applications and applications you didn't code. In these cases, you need a "quick fix". You could, for example, block the Havij user agent at your Intrusion Prevention System or your web application firewall. A little mod_rewrite rule may work too. I find another decent string for detecting the tool (and other SQL injection tools) is "%27+UNION+ALL+SELECT". This string shouldn't have a huge false positive rate.
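A quick-and-dirty log scanner for the signatures mentioned above could be as simple as the following sketch (the signature strings are the ones from this diary: the default Havij user agent token and the URL-encoded UNION probe; tune them before trusting them in production):

```python
import re
from urllib.parse import unquote_plus

# Matches the UNION probe after URL decoding, case-insensitively.
SQLI_PATTERN = re.compile(r"union\s+all\s+select", re.IGNORECASE)

def suspicious(log_line: str) -> bool:
    """Flag access-log lines showing Havij's UA or a UNION SELECT probe."""
    decoded = unquote_plus(log_line)  # '+' and %xx escapes -> plain text
    return "Havij" in log_line or bool(SQLI_PATTERN.search(decoded))
```

Run over an access log, this flags both the tool's default user agent and the generic injection pattern, so it will also catch other tools using the same UNION technique.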
We covered SQL Injection a few times before:
Update: Thanks to a reader for pointing out that Havij means carrot and that itsecteam is Iranian.
I live in a country where theft by electronic means is at fairly high levels. Criminal organizations use various techniques to steal usernames and passwords for online banking sites, duplicating the banks' web pages on sites that have security problems allowing file uploads, which the attackers then use to host these fraudulent fronts.
I want to discuss in this diary a very commonly used technique: spoofing the URL shown in the status bar of browsers and in links sent by e-mail. At the end of this text you will find the Spanish version, which is a translation of this English text. Let's use the following demonstration URL: http://xeyeteam.appspot.com/media/agh4ZXlldGVhbXINCxIFTWVkaWEY-cIwDA/spoof_StatusBar.html. When you hover the mouse over the link, www.google.com appears as the target. If you click it, it takes you to a different site:
Let's see another example. If you go to http://handlers.dshield.org/msantand/spoofexample.html, you will find a link pointing to YouTube. If clicked, it takes you to the SANS Institute website. If you look at the source code, you'll see that when the mouse is not over the link, the URL is modified using the href property of the element:
Unfortunately, at this time the code used in both examples is legitimate to browsers and is executed like any other code. The only solution is to reinforce user awareness: make users keep in mind that legitimate companies or people will not ask them for personal information by e-mail or web sites, and that they should not click links sent by e-mail.
------------------------------------------------START OF SPANISH VERSION------------------------------------------------
Vivo en un país en donde los robos utilizando medios electrónicos se encuentran en niveles bastante altos. Existen organizaciones criminales que se encargan de robar mediante diversas técnicas los usuarios y las claves de acceso de las sucursales bancarias en línea, duplicando las páginas web de los bancos en sitios web que poseen problemas de seguridad permitiendo subir archivos y son utilizados por los atacantes para montar estas fachadas fraudulentas.
Quiero discutir en este diario una técnica que es comunmente utilizada, la cual corresponde a falsificar un URL en la barra de estado de los navegadores o de los enlaces enviados por correo electrónico. Utilicemos el siguiente URL de demostración: http://xeyeteam.appspot.com/media/agh4ZXlldGVhbXINCxIFTWVkaWEY-cIwDA/spoof_StatusBar.html. Al pasar el mouse sobre el URL, aparece el sitio web www.google.com. Si usted da click, lo lleva a otro sitio distinto:
Observemos otro ejemplo. Si usted accede la página http://handlers.dshield.org/msantand/spoofexample.html, encontrará un enlace apuntando al sitio web de youtube. Si le da click, lo llevará al sitio web de SANS Institute. Si usted observa el código fuente, cuando el mouse no se encuentra sobre el enlace, este último es modificado utilizando la propiedad href del elemento
Desafortunadamente el código mostrado para ambos ejemplos es legítimo para los navegadores y por esto se ejecutan como cualquier otro código. La única solución para este problema es reforzar la concienciación y sensibilización al usuario, haciendo que siempre tenga claro en su mente que nadie legítimo le va a solicitar datos personales por correo o sitios web y que no deben hacer click en ningún enlace que reciban por correo electrónico.
------------------------------------------------END OF SPANISH VERSION------------------------------------------------
I am going to stray a little off the beaten path of the ISC today and ask you about personal disaster planning.
I am in the middle of the bald prairie in Western Canada. In the last several months we have experienced record snowfall, a rapid melt, abnormally high precipitation, widespread flooding, wildfires, and tornadoes. Despite all this, my part of the country has gotten off relatively easy. Other parts of Canada and the U.S. have been far less lucky, with significant loss of life on top of devastating property damage. As a matter of fact, my thoughts and prayers this weekend are with the Internet Storm Center's Deb Hale, who is at this moment living through the flooding in Iowa.
As these events were unfolding, companies all over were dusting off their business continuity and disaster recovery plans and making sure they were adequate to get them through the coming crisis, if it were to occur. Although in my part of the world only minor aspects of most plans were implemented, I wondered about all the people whose basements were flooded, whose farm land was submerged, and whose houses were underwater. Did they have a personal disaster recovery plan?
Given the human tendency to look at the bright side and to downplay risks that can’t easily be imagined, most people do not have an adequate personal disaster recovery plan. I am not suggesting that you need to plan for every contingency, but it cannot hurt to go through a couple of the most likely scenarios and see what resources you would need to minimize the impact on your family and put the foundation in place in advance.
The Centers for Disease Control's Zombie Apocalypse preparedness blog from a month or so ago is an excellent place to start for planning for the immediate aftermath of a disaster, but what about after the immediate crisis is over? Would you have the ability to access the resources necessary to recover from a disaster?
I suggest building a file of important documentation and contact information. Keep one copy in a safe place in your house, a fireproof box perhaps, and a second copy at a friend or relative's place, sufficiently far from your house, so you can access it if your house is seriously damaged or inaccessible for an extended period of time. What sort of information should be in the file?
- Insurance Information
- Medical Information
- Contact list of anyone you will need to be able to contact to tell them you are safe
- Contact list of companies and agencies that may be able to help you recover.
Personally I keep the originals in a safe deposit box and scans, in PDF form, on a couple of good quality USB thumb drives as well as on an Internet data storage site. Be sure to throw some money into these files, just in case the problem is widespread enough to put banks and ATMs out of service for a bit.
Would you have enough money to survive and start the rebuilding process? Insurance money, in most cases, will not be immediately forthcoming, and adjustors may not be available for an extended period of time. I recommend setting up a line of credit with your financial institution for at least half of the value of your house and contents. Most institutions do not charge you anything for having a line of credit and not using it, although there may be some fees for establishing the line of credit initially. Be sure your line of credit permits cheques to be written against it and place several of the cheques in your documentation packages.
I think that is a good start, but I am sure you will have many other excellent ideas. What sort of things have you done to aid your personal recovery from a disaster?
Please provide your ideas via comments or through the contact form.
-- Rick Wanner - rwanner at isc dot sans dot org - http://namedeplume.blogspot.com/ - Twitter:namedeplume (Protected)
We have written diaries on Sony's security woes over the past few months: the first was a DDoS against its infrastructure, followed by the hacking of the Sony PlayStation Network that took the network offline for several weeks, affecting all of its PlayStation customers. This week, Sony Pictures was compromised by a group of individuals calling themselves LulzSec, who took over 1,000,000 unencrypted plaintext customer passwords. Last week, another attack took place, this time against the Sony Music Entertainment Greece website, where usernames, passwords, email addresses and phone numbers were taken.
One question comes to mind. With all of this data lost, if a PCI compliant corporation can be this easily targeted and compromised, is PCI a good standard to measure security posture?
Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu
Next Tuesday, Oracle is planning to release a Java SE Critical Patch Update that will contain 17 new security fixes which may be remotely exploitable without authentication. "Due to the threat posed by a successful attack, Oracle strongly recommends that customers apply Critical Patch Update fixes as soon as possible." 
Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu
This is the second release candidate of the upcoming Wireshark 1.6 (stable) branch. The new branch contains several enhancements and bug fixes: for example, support for files greater than 2 GB, export of SSL session keys, export of SMB objects, and graphs that now save as PNG images by default, to name a few. It also supports a large number of new protocols. This update can be downloaded here.
Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu
Now that Apple is pushing out its first daily update to combat the latest MacDefender variant, it's a good time to take a closer look at "XProtect", the Snow Leopard anti-malware engine (or, to use the Apple euphemism, the "safe download list").
OS X heavily relies on XML files for configuration. These "plist" files are easy to read. The same is true for the XProtect configuration, which includes the currently valid signatures. Two files are used:
XProtect.meta.plist: This file appears to track XProtect versions, and when they were applied.
XProtect.plist: This is the actual signature file. For example, one of the MacDefender entries looks like:
<dict>
  <key>Description</key>
  <string>OSX.MacDefender.B</string>
  <key>LaunchServices</key>
  <dict>
    <key>LSItemContentType</key>
    <string>com.apple.installer-package</string>
  </dict>
  <key>Matches</key>
  <array>
    <dict>
      <key>MatchFile</key>
      <dict>
        <key>NSURLNameKey</key>
        <string>Info.plist</string>
      </dict>
      <key>MatchType</key>
      <string>Match</string>
      <key>Pattern</key>
      <string>3C6B65793E43464276B6....F737472696E673E</string>
    </dict>
    [ ... 3 more 'dict' sections deleted ... Also, the pattern string is abbreviated to fit ]
  </array>
</dict>
xpath /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/XProtect.plist /plist/array/dict/string
Kaspersky Lab's Security News Service posted this recently.
“Researchers have identified a second large batch of apps in the Android Market that have been infected with the DroidDream malware, estimating that upwards of 30,000 users have downloaded at least one of the more than 30 infected apps. Google has removed the apps from the market.”
The user does NOT have to run the application to trigger the data theft. An incoming phone call can trigger it by invoking the android.intent.action.PHONE_STATE intent. When that occurs, data including the IMEI, IMSI, installed package list and other information is extracted from the phone and sent to a remote site, and the malware may also install other applications.
Additionally, mylookout.com, a company that makes smartphone security software, posted an analysis of DroidDream Light and a list of infected applications here:
Later today, we are going to roll out a redesign of the ISC website to bring it in line with the current design of www.sans.edu and to refresh the overall look of the site. If you see a problem, please let us know at handlers @ sans. edu or, better yet, via the contact form. Include a screen shot and your browser / OS version.
Most IPv4 networks are managed using DHCP and as a result are subject to attacks via rogue DHCP servers. In response, many switches implement "DHCP Snooping" as a feature to protect from these attacks.
As networks switch to IPv6, DHCP will become less used. Some operating systems, like for example OS X, don't even implement DHCPv6. Instead, router advertisements will be used to help systems discover the network and to configure themselves. But just like DHCP, router advertisements (RA) may be spoofed. To protect your network from rogue RAs, a switch may implement RA-Guard , a feature similar to DHCP Snooping.
RA-Guard will only forward RAs if they are received on a port known to be connected to an authorized router. Additional filtering may happen based on the MAC address of the router.
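As a sketch of what this looks like in practice (assuming a Cisco IOS switch with first-hop security support; the interface name is hypothetical and syntax varies by platform and release), RA-Guard is enabled per port, much like DHCP snooping:

```
! Illustrative only: drop router advertisements received on this access port
interface GigabitEthernet0/1
 ipv6 nd raguard
```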
A recent IETF draft outlines some deficiencies of RA-Guard and how to possibly evade RA-Guard. The basic premise of RA-Guard is that the switch, a layer 2 device, is able to inspect the IPv6 and ICMP6 headers (layer 3) as well as the ICMP6 payload in order to identify and interpret RAs. In particular for IPv6, this is not an easy task and the evasion techniques outlined in the IETF draft are a nice lesson in the difficulties of correctly interpreting IPv6 traffic.
First of all, there is the potential for extension headers. Router advertisements *should not* have any extension headers, but there isn't really anything to prevent that from happening, and per RFC it is legal. However, as soon as we are dealing with extension headers, the "Next Header" field in the IPv6 header can no longer be used to identify the packet as an ICMP6 packet. Instead, the switch has to walk the chain to the last header and use its Next Header field.
Processing the entire header chain takes more resources and, in the end, may limit the throughput of the switch or even lead to denial of service conditions.
Next, the RA messages may be fragmented. This is again a condition that should not be seen in a *normal* network, but then again, attackers may very well craft legal RA packets that are fragmented. The fragmentation could be used to only show the IPv6 header and some extension headers in the first fragment, and move the header indicating the ICMP6 header, as well as the actual ICMP6 header, to the second fragment. Of course, this could be made even more interesting with multiple fragments.
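To see why this is hard, here is a minimal sketch (illustrative Python; real switches do this in hardware, and this simplification ignores some header types) of the header-chain walk a filter must perform, including the two failure modes above: a chain that costs work to follow, and a Fragment header that pushes the rest of the chain into another packet:

```python
# Extension headers that use the standard (next-header, length) layout.
HOP_BY_HOP, ROUTING, FRAGMENT, DEST_OPTS = 0, 43, 44, 60
CHAINED = {HOP_BY_HOP, ROUTING, DEST_OPTS}
ICMPV6 = 58

def upper_layer_protocol(packet: bytes):
    """Walk the IPv6 extension-header chain and return the upper-layer
    protocol number, or None if it cannot be determined from this packet
    alone (truncated chain, or a Fragment header hiding the rest of the
    chain in a later fragment)."""
    next_header = packet[6]   # Next Header field of the fixed 40-byte IPv6 header
    offset = 40
    while next_header in CHAINED:
        if offset + 2 > len(packet):
            return None       # truncated: cannot read the next extension header
        next_header = packet[offset]
        offset += (packet[offset + 1] + 1) * 8  # Hdr Ext Len excludes the first 8 bytes
    if next_header == FRAGMENT:
        return None           # the rest of the chain may be in another fragment
    return next_header
```

A bare RA carries 58 (ICMP6) right in the fixed header; insert a Destination Options header and the filter already has to chase the chain; insert a Fragment header and the answer is simply not in this packet.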
Oh... and remember: Wednesday is IPv6 day :) We will have more about that later.
IPv6 Security Summit is coming July 15th, Washington DC, http://www.sans.org/ipv6-summit-2011
Many operating systems use the EUI-64 algorithm to generate IPv6 addresses. This algorithm derives the last 64 bits of the IPv6 address using the MAC address. Many see this as a privacy problem. The last half of your IP address will never change, and with MAC addresses being somewhat unique, the interface ID becomes close to a unique "cookie" identifying your system.
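The EUI-64 derivation described above is simple enough to sketch (illustrative Python; the MAC address in the example is made up): flip the universal/local bit of the first octet and insert ff:fe in the middle of the MAC.

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the (modified) EUI-64 interface identifier, i.e. the last
    64 bits of an autoconfigured IPv6 address, from a 48-bit MAC address."""
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] ^= 0x02                              # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

# A host with this (made-up) MAC would end every autoconfigured IPv6
# address with the same interface identifier:
print(eui64_interface_id("00:1b:63:aa:bb:cc"))   # 21b:63ff:feaa:bbcc
```

Since the MAC address does not change, neither does this suffix, which is exactly the tracking concern that motivates privacy enhanced addresses.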
As a result, RFC3041 introduces "privacy enhanced" addresses, which change over time and are derived from a hash rather than used verbatim from the MAC address. Of course, each operating system has its own way to enable privacy enhanced addresses.
You can use "netsh" to enable and configure privacy enhanced addresses. Use
netsh interface ipv6 show privacy
to query the status, and
netsh interface ipv6 set privacy state=enabled
to enable it. In my testing, privacy enhanced addresses were enabled and I wasn't actually able to disable them (a possible bug?).
OS X uses the sysctl command to change various kernel parameters, including privacy enhanced addresses. By default, EUI-64 is used.
To enable, run
sudo sysctl -w net.inet6.ip6.use_tempaddr=1
and cycle the interfaces (ifconfig en0 down; ifconfig en0 up). However, to have this setting survive a reboot, create a file called /etc/sysctl.conf and add the line net.inet6.ip6.use_tempaddr=1.
On Linux, as root, update the respective /proc entries:
echo 1 > /proc/sys/net/ipv6/conf/all/use_tempaddr
echo 1 > /proc/sys/net/ipv6/conf/default/use_tempaddr
echo 1 > /proc/sys/net/ipv6/conf/eth0/use_tempaddr
Linux uses an /etc/sysctl.conf file, just like OS X, to make these changes persistent during reboots.