OWASP Belgium Chapter Meeting Wrap Up

I’m back from the latest OWASP Belgium Chapter meeting. Belgium is a small country with a lot of political issues (off-topic here 😉 ) but also a great electronic identity card or “eID“. Almost all Belgian citizens have had an eID for a while now (8.2 million cards have been delivered to date). Coupled with a PIN code, the card can be used for several purposes:

  • To identify the owner;
  • To authenticate the owner (using the PIN code);
  • To sign documents

Of course, the eID can be used online with compatible websites. This was the first topic covered tonight. Erwin Geirnaert and Frank Cornelis presented “The Belgian e-ID: hacker vs developer“. The second part was given by Larry Suto, about the accuracy of web application scanners.

Frank started with a deep technical presentation of the Belgian eID card (from the physical structure to the data contained on it and the software used to access it). Even if the card has information printed on it, other information is only available on the chip (like the home address). The card contains PKI elements (RSA keys), the owner’s picture, an identity file, an address file and a PKCS#15 structure. The Belgian authorities operate a PKI infrastructure to work with the eID. Some actions are available without authentication, like basic identification: just insert your card in a compatible reader and your data is available to everyone. On the other hand, to be authenticated (to prove that you are really the owner of the card), you need to give a PIN code (two-factor authentication). The same applies to digitally signing documents. Then Frank explained how to integrate the eID in web applications. The requirements are:

  • It must be easy;
  • It must be secure (of course!);
  • It must be platform independent (operating systems & browsers);
  • And… idiot proof!
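The on-card files Frank described (identity, address, photo) are plain ISO 7816 files selected via APDUs. Here is a minimal sketch of how middleware might address them; the file paths come from my reading of the public eID content specification and should be treated as assumptions to verify:

```python
# Sketch: building ISO 7816-4 SELECT FILE APDUs for the eID's on-card files.
# The file paths below (identity, address, photo) are assumptions taken from
# the published Belgian eID content specification, not verified here.

EID_FILES = {
    "identity": bytes.fromhex("3F00DF014031"),
    "address":  bytes.fromhex("3F00DF014033"),
    "photo":    bytes.fromhex("3F00DF014035"),
}

def select_file_apdu(path: bytes) -> bytes:
    """SELECT by absolute path from the MF (CLA=00, INS=A4, P1=08, P2=0C)."""
    # P1=08 means "select by path without the MF identifier" per ISO 7816-4,
    # so the leading 3F00 (master file) is stripped from the data field.
    body = path[2:] if path.startswith(b"\x3f\x00") else path
    return bytes([0x00, 0xA4, 0x08, 0x0C, len(body)]) + body

apdu = select_file_apdu(EID_FILES["identity"])
print(apdu.hex())  # 00a4080c04df014031
```

Sending such an APDU to a real card would go through a smartcard stack (e.g. PC/SC); the sketch only shows the command construction.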

Don’t forget that everybody has an eID and not all people are security experts! (even if some pretend to be 😉 ). The latest eID applet is based on Java 6 and does not require any installation on the user side. If you are interested, the code is available on Google Code under the GPLv3 license. One of the remaining problems is the risk of a stolen PIN code via key loggers. That’s why some critical applications require a specific card reader with a built-in keypad. The new applet also implements an integrity control which prevents data read from the card from being altered by a MitM attack. Regarding the digital signature feature, two types of documents/applications are supported: OpenOffice and Microsoft Office.
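The integrity control can be pictured as a server-side digest comparison. This is only a toy sketch: the real applet relies on signatures from the National Register over the card files, for which a bare SHA-256 value stands in here:

```python
import hashlib
import hmac

# Toy sketch of a server-side integrity check: data read from the card is
# accepted only if its digest matches a trusted reference. In the real eID
# flow the reference is a National Register signature, not a bare hash.

def integrity_ok(card_data: bytes, trusted_digest: bytes) -> bool:
    actual = hashlib.sha256(card_data).digest()
    # Constant-time comparison avoids leaking where the digests diverge.
    return hmac.compare_digest(actual, trusted_digest)

identity = b"DOE John 1980-01-01 Brussels"
reference = hashlib.sha256(identity).digest()

print(integrity_ok(identity, reference))                    # True
print(integrity_ok(identity + b" (tampered)", reference))   # False
```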

After the technical details about implementing the eID within web applications, Erwin “the bad guy” gave some bad examples. First, the implementation of the eID into websites will not protect you against the classic web vulnerabilities! It’s just a new way to authenticate the users. Keep your developers aware of this. What are the common bad implementations?

  • Identification is not the same as authorization! (just inserting your card is not a safe way to authenticate you)
  • No implementation of the HTTPS protocol! The eID data can be sniffed!
  • Using an unsecured trust in a 3rd party product (like in the Drupal case)
  • Data automatically intercepted by a reverse proxy (and forwarded in clear text)
  • After a successful authentication, usage of a cookie to keep the session alive.
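The first pitfall above (identification is not authentication) can be illustrated with a toy challenge-response round: reading the identity file proves nothing, while answering a fresh challenge proves possession of the card’s key material. The shared HMAC key below is a stand-in for the eID’s PIN-protected RSA key pair, not the actual protocol:

```python
import hashlib
import hmac
import secrets

# Toy challenge-response: the server issues a fresh nonce, the "card" signs
# it, and the server verifies. A shared HMAC key stands in for the eID's
# PIN-protected RSA key; only the possession proof is being illustrated.

def issue_challenge() -> bytes:
    return secrets.token_bytes(16)          # fresh server-side nonce

def card_sign(key: bytes, challenge: bytes) -> bytes:
    # On a real eID this operation happens only after PIN entry.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)
challenge = issue_challenge()
response = card_sign(key, challenge)
assert server_verify(key, challenge, response)
# A replayed response fails against a new challenge:
assert not server_verify(key, issue_challenge(), response)
```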

Frank performed some demonstrations using Webscarab and demonstrated how easy it is to capture and change the eID data on the fly (without integrity verification, of course). Note that a nice project is ongoing: an official validation of websites providing eID authentication (via L-SEC).

The second part of the meeting was dedicated to Larry Suto from Strategic Data Command. Larry tested several web scanners and measured their accuracy. He presented a summary of his report. The study focused on four points:

  • The accuracy of the scanners using a “point & shoot” configuration (or “out-of-the-box”);
  • The accuracy of the scanners using a fine-tuned configuration;
  • The accuracy of the reported vulnerabilities;
  • And the time required to ensure quality results

He performed his tests using commercial solutions (Acunetix, Appscan, Burpsuite Pro, Hailstorm, NTOspider, Qualys and Webinspect). The tested web sites were those provided by the solution developers themselves. Here are some facts discovered by Larry:

  • Each scanner is different in the way it performs the scans;
  • Scanners did not work best against their own test server;
  • Point & shoot configurations: lots of problems with badly handled authentication;
  • The website language can cause false positives: look at non-English sites (example: “error” in English, “fout” in Dutch, “erreur” in French);
  • Scanning in the cloud (ex: Qualys) is limited in features (like JavaScript support)
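The language issue above can be sketched as a simple multilingual error-pattern check; the word list is illustrative, not taken from any particular scanner:

```python
import re

# Sketch: a scanner that only greps for English error strings misses (or
# mislabels) Dutch and French sites. The word list below is illustrative.

ERROR_PATTERNS = re.compile(
    r"\b(error|exception|fout|foutmelding|erreur)\b", re.IGNORECASE
)

def looks_like_error_page(body: str) -> bool:
    return ERROR_PATTERNS.search(body) is not None

print(looks_like_error_page("SQL error near line 1"))       # True
print(looks_like_error_page("Er is een fout opgetreden"))   # True
print(looks_like_error_page("Une erreur est survenue"))     # True
print(looks_like_error_page("Welkom op onze website"))      # False
```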

In a new version of the report, some open source scanners may be added, like Skipfish or w3af. But at the moment, they give poor results compared to the commercial solutions.

The job performed by web app scanners is difficult compared to classic vulnerability scans. In this case, there are no simple signatures or patterns to detect; a lot of actions must be performed from a human point of view. That’s why web scanner users can be grouped into two categories: a first one which finds the “point & shoot” operating mode enough to reach their expected security level, and a second one which thinks that no automatic scanning can cope with the complexity of modern websites.

Note to Larry: what about a “webscannertotal.com” like virustotal.com, where we could test the same website against several scanners at the same time? 🙂

Judging by the audience (the room was full of known and unknown faces), OWASP meetings are more and more successful in Belgium. That’s good!
