Identification and authentication are hard ... finding out intention is even harder
Last Updated: 2014-03-13 00:09:31 UTC
by Daniel Wesemann (Version: 1)
While the drama about the lost airplane in Malaysia is still continuing, our hearts of course go out to the families of the missing. This ISC diary, though, is not about airplanes or terrorism; it is rather about the related discovery that at least two passengers on the plane were using fake passports. Equally startling was Interpol's comment that this is "common". What is the point of maintaining, for example, a no-fly list, if those on it can travel anyway with stolen documents, and if the security checkpoint apparently fails to determine that a 19-year-old doesn't look like a 40-year-old, and that Italians who don't speak at least rudimentary Italian are, well, somewhat rare?
If we translate this to the virtual world, it turns into an everyday problem. How do we know that Joe using Joe's password is actually Joe, and not Jane? I probably should call them "Bob" and "Alice" to make this worthy of a scientific paper :), but the problem still stands: identification and authentication are hard, and finding out intentions is even harder. If we borrow from the airport physical security playbook, then it is "behavior" that makes the difference. The security checkpoint guys are (supposedly) trained to look for "clues" like nervousness, or carry-on baggage that is leaking 1,2,3-trinitroxypropane. Inevitably, there are numerous software products that claim to identify the "unusual" as well. Joe connecting from Connecticut, even though he lives in Idaho? Alert! Joe using Chrome even though he used Firefox last time? Alert! Joe typing his password faster than usual? Alert!
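To make the idea concrete, here is a minimal sketch of the kind of rule-based login anomaly scoring such products rely on. The field names, thresholds, and sample data are all hypothetical, invented purely for illustration; real products use far richer signals.

```python
# Minimal sketch of rule-based login anomaly detection, as described above.
# Field names and the 50% typing-speed threshold are hypothetical.

def anomaly_alerts(last_login: dict, current_login: dict) -> list:
    """Compare the current login attempt to the previous one and
    return a human-readable alert for anything 'unusual'."""
    alerts = []
    if current_login["state"] != last_login["state"]:
        alerts.append("Alert: login from unexpected location")
    if current_login["browser"] != last_login["browser"]:
        alerts.append("Alert: different browser than last time")
    # Typing the password much faster than usual trips the third rule.
    if current_login["typing_ms"] < 0.5 * last_login["typing_ms"]:
        alerts.append("Alert: password typed much faster than usual")
    return alerts

last = {"state": "Idaho", "browser": "Firefox", "typing_ms": 2400}
now = {"state": "Connecticut", "browser": "Chrome", "typing_ms": 900}
print(anomaly_alerts(last, now))  # all three rules fire
```

Each rule on its own looks sensible; the trouble, as the next paragraph explains, is what happens when rules like these meet a diverse user population.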
But as in the physical world, this kind of profiling only works well if you have a fairly homogeneous and static "good guy" population, and a pretty well-defined adversary. The real world, unfortunately, tends to be more diverse and complex than that. Which is why login fraud detection, just like airport security, often drowns in false positives, and as a result de-tunes the sensitivity to the point where real fraud has stellar odds of just slipping by. This is a fundamental issue with many security measures; statisticians call it the "base rate fallacy". If there are many, many more good guys than bad guys, then trying to find the bad guys with a test that has a high error rate is pretty much moot.
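The base rate fallacy is easy to quantify with Bayes' theorem. The numbers below are made up for illustration, but even a detector that sounds impressive on paper (99% detection, only 1% false positives) produces almost entirely false alarms when real fraud is rare:

```python
# Illustrative base-rate-fallacy arithmetic (all numbers are made up).
# Suppose 1 in 100,000 logins is fraudulent, and the detector catches
# 99% of fraud but also flags 1% of legitimate logins.

base_rate = 1 / 100_000      # P(fraud)
sensitivity = 0.99           # P(alert | fraud)
false_positive_rate = 0.01   # P(alert | legitimate)

# Bayes' theorem: P(fraud | alert) = P(alert | fraud) * P(fraud) / P(alert)
p_alert = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_fraud_given_alert = sensitivity * base_rate / p_alert

print(f"P(fraud | alert) = {p_fraud_given_alert:.4%}")  # roughly 0.1%
```

In other words, with these assumptions, fewer than 1 in 1,000 alerts corresponds to actual fraud, which is exactly why analysts drown in false positives and de-tune the sensitivity.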
Checking the passports against the Interpol list of stolen passports wouldn't hurt, though. Not doing so is akin to letting someone log in to a suspended account, or log in with a password that was valid two years ago.
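Unlike behavioral profiling, this kind of check is cheap and has essentially no false positives: it is just a set lookup against a revocation list. A sketch of the login-time equivalent, with hypothetical account names and list contents:

```python
# Sketch of the login-time equivalent of the stolen-passport check:
# before anything else, reject credentials that appear on a revocation
# list. Account names and list contents are hypothetical.

SUSPENDED_ACCOUNTS = {"mallory", "eve"}     # akin to Interpol's stolen-passport list
REVOKED_PASSWORD_HASHES = {"deadbeef"}      # hashes of old, no-longer-valid passwords

def allow_login(user: str, password_hash: str) -> bool:
    if user in SUSPENDED_ACCOUNTS:
        return False  # suspended account: deny regardless of password
    if password_hash in REVOKED_PASSWORD_HASHES:
        return False  # password was valid once, but has been revoked
    return True       # passed the revocation checks (authentication still follows)

print(allow_login("mallory", "abc123"))  # False: account is suspended
print(allow_login("joe", "abc123"))      # True: on neither list
```

The point of the analogy: a high-error-rate behavioral test is of dubious value, but skipping a zero-cost, near-zero-error lookup is simply negligent.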
Yep, and it is also why no number of security appliances will completely replace the time needed to wade through, triage, and analyze the events those very appliances generate. All too often, CEOs seem to think they can just buy an appliance to check the SOX box, then twiddle its settings until it stops taking IT staff time away for evaluating alerts, assuming it will surely still notify the IT staff of anything really important and take care of the rest itself. (face-palm)
All these fancy new security appliances can arm an IT staff with useful intel but there still needs to be someone with some infosec chops to sift through all the intel these appliances could be providing and act on it. And of course they need to be allowed to spend the time necessary on all this too...
Mar 14th 2014