After a cool dinner with other Belgian infosec people, the second day started with a discussion panel about the “Economics of Vulnerabilities“. The panelists were: Lucas Adamski (Mozilla), Steve Adegbite (Adobe), Aaron Portnoy (Tipping Point), Adrian Stone (Blackberry / RIM), Chris Evans (Google), Katie Moussouris (Microsoft) and Dhillon Kannabhiran (HITB, moderator). Almost all vendors are concerned by vulnerabilities in their software. Some decided to develop research programs to involve users in bug hunting (Mozilla & Google are well-known examples). Google estimates the cost of its “bug bounty program” at only 10% of the price consultants would charge for the same job! The main topic was the “black market“. Yes, it exists, and vendors cannot compete against the prices proposed there. If you want to do business, this is the place to be! But people buying on the black market also expect results: working exploits and not just a “vulnerability”. They are ready to pay, but for a good ROI. Another topic was how vendors handle vulnerabilities in their products. Some of them are still old-school and sue security researchers (check out the recent magic_de story) or ignore them. Other vendors establish good communication with the community of security researchers. About vulnerability research programs, Microsoft said that they don’t have one: most vulnerabilities are sent to them privately. Question: isn’t there a risk of an “iceberg” effect, with most bugs being silently fixed? Note that even if 0-days are critical, their impact can be reduced by implementing security at all levels (network access, permissions, etc.): in most 0-day incidents, the malicious code was introduced via a specific channel or user action. Microsoft also mentioned “developer education“. Another good point! Last question: will researchers focus only on bugs with bounties? All participants agreed on the fact that money is not the main incentive. Recognition remains important!
The second talk was presented by Ivan Ristic about “SSL“. What’s the status of SSL implementations today? Is SSL properly configured? First, according to Ivan, SSL must be seen as an “add-on” designed for HTTP as well as other protocols. SSL is not only a piece of code, it’s a full ecosystem, and this is a good example of why security is difficult. The main issue remains weak configuration! SSL suffers from three principal attacks:
- Passive MitM (Example: firesheep)
- Active MitM (Example: sslstrip)
- 3rd party compromise (outsourcing)
Then, Ivan introduced the research he performed via SSL Labs. They wrote an SSL rating guide and provide interesting online tools like the SSL assessment review. What’s the status of SSL implementations on the Internet? SSL Labs performed a big study: in 2010, they scanned web sites to grab their SSL configuration. What did they discover?
- 119M domains checked, 900K servers to assess
- 600K were valid
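Out of curiosity, the kind of per-server check such a scan performs can be reproduced with openssl (a minimal sketch only; www.example.com is a placeholder and the real SSL Labs crawler collects many more parameters):
$ echo | openssl s_client -connect www.example.com:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates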
In 2011, they performed the same exercise but using the EFF’s SSL Observatory DB and scanned 1.2M sites. They built a specific crawler (robot) to visit the top lists of sites. What’s the current usage?
- The most common combination is SHA1/RSA; most servers support strong ciphers.
- Only 20% redirect HTTP to HTTPS and, among the others, some do not even redirect login pages to HTTPS!
- HSTS (HTTP Strict Transport Security) is almost unused: 80 sites out of 250K (a quick check is sketched after this list)
- Cookies remain common in authentication
- The cookie “secure” flag should be used much more
- Mixed content can also be a problem
- Distribution of trust (Google Ads, Analytics, Twitter, Facebook, etc.)
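As a side note, both the HSTS header and the cookie “secure” flag can be spotted quickly with curl: the first command below checks for the Strict-Transport-Security header, the second lists cookies set without the secure flag (www.example.com is a placeholder):
$ curl -sI https://www.example.com/ | grep -i '^strict-transport-security'
$ curl -sI https://www.example.com/ | grep -i '^set-cookie' | grep -iv 'secure'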
What were Ivan’s conclusions? First, the press and the community give bad messages regarding SSL. In most cases, it is the deployment and implementation that break SSL. It’s possible to achieve reasonable security, but most sites choose not to do it! Finally, a mention of the Google project “SPDY”, which offers encryption by default.
“Let me Stuxnet You” was presented before the lunch. I was a bit skeptical when reading the title of Itzik Kotler’s talk. Everything has already been said about Stuxnet! But, good surprise: after a very short introduction about this attack, Itzik explained that software controls hardware, but that malicious software can also take control of it. And software can damage hardware! This is called a “PDoS” or “Permanent Denial of Service“. A PDoS is an attack which causes a piece of hardware to be replaced or reinstalled (example: a bricked device). The reasons to perform a PDoS attack are the same as for attacking a power plant: rival companies, foreign countries, terrorism, etc. There are different types of attacks:
- Phlashing: overwriting the firmware to make the device useless or “bricked“
- Overclocking (burn it)
- Overvolting (burn it)
- Overusing (abusing a mechanical device and forcing it to die, like a CD-ROM tray)
- Power cycling
In a computer, what are the potential targets of such attacks?
- Fans (reduce speed to increase temperature)
- CPU (infinite loop using simple code: “jmp short 0x0”, or bricking the CPU by flashing a new microcode)
- RAM (overclocking or overvolting)
- GPU
- Hard drives (excess of I/O)
- SSD drives (excess of write operations)
- BIOS
- NIC
Some funny, simple examples of “killer” commands (excessive disk I/O, repeated spin-down/spin-up, and tray wear, respectively):
# while true; do dd if=/dev/xxx of=/dev/xxx conv=notrunc; done
# hdparm -S 1 /dev/xxx; while true; do sleep 60; dd if=/dev/random of=foobar count=1; done
# while true; do eject /dev/cdrom; eject -t /dev/cdrom; done
Which countermeasures can be used against those attacks? For firmware upgrades, use digitally signed images and, as usual, apply common basic security! The talk was interesting but presented too slowly: half an hour would have been enough. No need to spend ten minutes on overclocking!
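Regarding the signed-firmware countermeasure, the idea boils down to verifying the image signature before flashing anything, along these lines (the key, file names and flashing tool are placeholders, not a specific vendor procedure):
# gpg --verify firmware-1.2.img.sig firmware-1.2.img && ./flash_firmware firmware-1.2.img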
I attended “Attacking Critical Infrastructure” by Maarten Oosterink, a nice talk just after lunch. Again, everything has already been said about Stuxnet, but some interesting facts were mentioned during the presentation:
- A common issue with systems used in industrial environments: when to patch? (during install, during maintenance cycles of 1, 2 or 4 years, every Tuesday, never)
- Who maintains them? (the IT department, local engineers, vendors)
- Critical infrastructures are based on multiple (un)common protocols and applications. This drastically increases the attack surface!
Regarding those attack vectors:
- The human factor: night shifts, remote locations, computers that look “like home“, noisy rooms, people with poor IT skills, contractor/vendor maintenance.
- Procedural: low patch frequency, manual patching, backups on removable devices, irrelevant policies.
- Used technologies: 90’s networking, bad configurations, no monitoring, and protocols that are not robust at OSI layers above 5.
One of the most anticipated talks was the one by Adam Laurie & Daniele Bianco. Indeed, the room was full. They presented their research about the security of the “EMV” (“Europay Mastercard Visa“) system used by modern readers/credit cards. The system has already been reported as broken by Cambridge University. They explained in depth why the system is vulnerable and showed an EMV skimmer. Compared to magnetic stripe models, those skimmers cannot be detected and require little installation effort.
The EMV domain makes use of a lot of abbreviations like CVM, PIN, SDA, DDA, CVMR and TVR, which made some slides not easy to follow but, most importantly, they performed a nice demo and cloned a credit card.
The last talk was about OpenDLP: “Gone in 60 Seconds”, presented by Andrew Gavin, the creator of OpenDLP. Given the presentation title, I expected a more aggressive approach to OpenDLP: how to use it to really steal information. Andrew presented his tool in the first part. OpenDLP is free and based on two components: a central server (LAMP) and agents deployed on the scanned hosts. It can address:
- Compliance requirements
- Network & system administration tasks
- Pentests
Why was it developed? Commercial DLP solutions are expensive and not based on agents. Most importantly, they do not work in the background! OpenDLP is based on policies that can be extended and reused across multiple agents. Policies have the following features:
- PCRE regexes (a sample pattern is sketched after this list)
- White/black lists
- Concurrent agent deployment
- Memory requirements
- How often agents phone home with results
- Obfuscation of data (ex: xxxxxx-3010)
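To give an idea of the PCRE support, here is the kind of pattern a policy could contain to flag 16-digit card numbers, tested here with grep (the regex and path are illustrative, not OpenDLP’s actual policy syntax):
$ grep -rPl '\b(?:\d[ -]?){15}\d\b' /data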
How is the scan performed? Agents are deployed via SMB and started with Samba’s winexe (a tool to run remote commands over SMB). Once running, an agent is non-intrusive as it limits its CPU and memory usage. Once the scan is done, it asks the server to remove it (winexe again). All the results are available from the server using a browser.
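For the record, winexe executes a command on a Windows host over SMB like this (the credentials, host and agent path are placeholders, not OpenDLP’s exact command line):
$ winexe -U 'DOMAIN/Administrator%password' //192.168.0.10 'cmd.exe /c C:\agent\dlp-agent.exe'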
The second part was a review of the web interface (server side) with all the available features, and a scan demo. Some benchmarks were collected to compare agent-based vs agentless implementations. The list of upcoming features looks cool.
In the main room, it was also the end of the robot contest organized by the hackerspaces. I need such toys too! 😉
The closing keynote was presented by Richard Thieme. It’s now almost time to drive back to Belgium. Thanks to the HITB team for the organization (and the press pass). See you next year for another coverage!