[the following is a guest diary contributed by Russ McRee] Given the extraordinary burst of headlines over the last six months relating to "hacktivist" exploitation of web application vulnerabilities, Critical Control 7: Application Software Security deserves some extra attention. The control describes WAF (Web Application Firewall) use, input validation, testing, backend data system hardening, and other well-defined practices. Not until the sixth suggested step does the control state: "Organizations should verify that security considerations are taken into account throughout the requirements, design, implementation, testing, and other phases of the software development life cycle of all applications." OWASP offers excellent resources to help with SDL/SDLC efforts.
As you look at testing "in-house-developed and third-party-procured web applications for common security weaknesses using automated remote web application scanners," don't fall victim to vendor hype. Test a number of tools before settling on one, as they handle scale and application depth and breadth very differently. If you're considering monthly or ongoing scans of applications that serve thousands of unique "pages" built on very uniform code, you'll want a scanning platform that can be configured to remove duplicate items (same URL and parameters) as well as items with media-type responses or certain file extensions; a simple sketch of that deduplication idea follows.
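To make the deduplication point concrete, here is a minimal, hypothetical sketch in plain Java, not tied to any particular scanner: crawled URLs that differ only in parameter values (or parameter ordering) collapse to a single scan target. The class and method names are illustrative assumptions.

```java
import java.net.URI;
import java.net.URISyntaxException;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeSet;

/**
 * Illustrative only: collapses crawled URLs that differ only in parameter
 * values (or ordering) into one scan target, the way a scanner's
 * "remove duplicate items (same URL and parameters)" option behaves.
 */
public class ScanTargetDeduper {

    /** Builds a key from the path plus the sorted parameter names. */
    static String canonicalKey(String url) throws URISyntaxException {
        URI uri = new URI(url);
        TreeSet<String> paramNames = new TreeSet<>();
        String query = uri.getRawQuery();
        if (query != null) {
            for (String pair : query.split("&")) {
                paramNames.add(pair.split("=", 2)[0]);
            }
        }
        return uri.getPath() + "?" + String.join("&", paramNames);
    }

    public static void main(String[] args) throws URISyntaxException {
        String[] crawled = {
            "https://example.com/item?id=1&lang=en",
            "https://example.com/item?lang=fr&id=2",   // same page, different values
            "https://example.com/search?q=test"
        };
        Map<String, String> targets = new LinkedHashMap<>();
        for (String url : crawled) {
            targets.putIfAbsent(canonicalKey(url), url); // keep first representative
        }
        targets.values().forEach(System.out::println);   // prints two unique targets
    }
}
```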
I'd much rather have new projects embrace a bottom-up security framework such as ESAPI (Enterprise Security API) from OWASP:
https://www.owasp.org/index.php/ESAPI than tick a checkbox next to "have a web application firewall." These firewalls are terrible to configure manually, and if you let them "learn": how do you know all legitimate traffic was seen, and how do you know no attacks occurred during the learning phase?
Swa | Oct 11th 2011
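For readers unfamiliar with ESAPI, a minimal sketch of the whitelist-validation-plus-output-encoding pattern the comment advocates might look like the following. It assumes the OWASP ESAPI library and a stock ESAPI.properties configuration (which defines the "Email" validation pattern) are on the classpath; the class and method names here are illustrative, not a definitive implementation.

```java
import org.owasp.esapi.ESAPI;
import org.owasp.esapi.errors.ValidationException;

/**
 * Sketch of the "bottom-up" approach the comment describes: validate input
 * against a whitelist pattern on the way in, encode output on the way out.
 * Assumes the OWASP ESAPI jar and an ESAPI.properties file (defining the
 * "Email" pattern) are available on the classpath.
 */
public class EsapiValidationSketch {

    static String acceptEmail(String rawInput) {
        try {
            // Whitelist validation: rejects anything that does not match the
            // configured "Email" regex or that exceeds 254 characters.
            return ESAPI.validator().getValidInput(
                    "registration form", rawInput, "Email", 254, false);
        } catch (ValidationException e) {
            // Reject explicitly instead of passing bad data downstream.
            throw new IllegalArgumentException("Invalid email address supplied");
        }
    }

    static String renderName(String userSuppliedName) {
        // Output encoding: neutralizes markup before it reaches the browser.
        return "<span>" + ESAPI.encoder().encodeForHTML(userSuppliedName) + "</span>";
    }
}
```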
I could not agree more. That touches a nerve for me, as I have spent a good portion of my IT career writing software and watching all the problems we have with buggy software. Most of it boils down to a lack of proper input validation: even though the conditions that would indicate a problem are present in the code, they are often simply not checked and acted upon. I can't tell you how many times I have seen sample code that is completely devoid of any error checking used in book after book. Even though the authors usually say you need to add error checking in production code, you still end up seeing this stuff pasted straight into production code.
The result is random function failures, crashes, and of course lots of exploitable holes that we have to work around with things like WAF tools. If a program needs a WAF, then in my twenty-plus years of experience it is not viable as a production tool, especially if it is internet facing. A WAF only knows about the attack vectors that someone else has already identified, just like anti-virus software. Relying on a WAF, even as part of defense in depth, is conceding that you will always be behind the bad guys and never caught up or actually ahead. The vendor should be forced back to the drawing board to fix the product, even if they have to rewrite it. We let them get away with far too low a standard of quality by not demanding that they shore up their products to at least make attacking them very difficult. Or, of course, a more stable product can be chosen instead. It may cost more, but the total cost of ownership will most likely still be less. Occasional bugs are going to happen; we are all human. But there is a big gap between an occasional issue and recurring problems with a common theme. Recurring issues are a bad sign that the design or implementation was sub-par. BC
BGC | Oct 11th 2011
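As a small, hypothetical illustration of the comment's point about sample code versus production code, compare a typical unchecked parse with one that validates its input and acts on every failure condition (plain Java, not drawn from any particular product):

```java
/**
 * Contrasts the "book sample" habit of skipping error checks with code
 * that checks and acts on the conditions it already has available.
 */
public class PortArgument {

    // Typical sample-code style: throws a raw NumberFormatException and
    // happily accepts out-of-range values such as -5 or 99999.
    static int parsePortNaive(String arg) {
        return Integer.parseInt(arg);
    }

    // Production style: validate the input and act on every failure condition.
    static int parsePortChecked(String arg) {
        if (arg == null || arg.trim().isEmpty()) {
            throw new IllegalArgumentException("Port argument is required");
        }
        final int port;
        try {
            port = Integer.parseInt(arg.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("Port must be a number: " + arg);
        }
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("Port out of range 1-65535: " + port);
        }
        return port;
    }
}
```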