Tag Archives: OSSEC

Playing with IP Reputation with Dshield & OSSEC

[This blog post has also been published as a guest diary on isc.sans.org]

When investigating incidents or searching for malicious activity in your logs, IP reputation is a nice way to increase the reliability of generated alerts and can help to prioritize incidents. Let’s take an example with a WordPress blog: it will, sooner or later, be targeted by a brute-force attack against the default /wp-admin page. In this case, IP reputation can be helpful: an attack performed from an IP address reported as actively scanning the Internet will attract less of my attention. On the contrary, if the same kind of attack comes from an unknown IP address, it could be more suspicious…
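By the way, the DShield data is easy to query. Just to illustrate the idea, here is a minimal sketch, not the integration described in the full post: it checks an IP address against the isc.sans.edu API (the 100-reports threshold is an arbitrary example value):

#!/usr/bin/env python
# Minimal sketch (Python 2): query the DShield/ISC API for an IP
# reputation summary via https://isc.sans.edu/api/ip/<address>?json
import json
import sys
import urllib2

def check_ip(ip):
    url = "https://isc.sans.edu/api/ip/%s?json" % ip
    data = json.load(urllib2.urlopen(url))
    # "count" = number of reports, "attacks" = number of targets
    count = int(data["ip"].get("count") or 0)
    attacks = int(data["ip"].get("attacks") or 0)
    print("%s: %d report(s) against %d target(s)" % (ip, count, attacks))
    return count > 100  # arbitrary threshold: "known scanner"

if __name__ == "__main__":
    if check_ip(sys.argv[1]):
        print("Known scanner: the alert can be deprioritized")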

Read More →

Tracking SSL Issues with the SSL Labs API

The SSL and TLS protocols have been at the front of the stage for months. Besides the many vulnerabilities disclosed in the OpenSSL library, the deployment of SSL and TLS is not always easy: there are weak ciphers (like RC4), weak signatures and certificate issues (self-signed, expired or fake certificates). Other useful features, like PFS (“Perfect Forward Secrecy”), are misunderstood and often not configured. Encryption effectiveness is directly related to the way it is implemented and used; if done badly, encrypted data can be compromised by multiple attack scenarios. To summarize: for users, the presence of a small yellow lock next to the URL in your browser does not mean that you are 100% safe. For administrators and website owners, it’s not because you have a good SSL configuration today that it will remain safe in the coming months or years. Unfortunately, keeping an eye on your SSL configurations is a pain.
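To give you an idea, the SSL Labs assessment engine is reachable via a public API. Here is a minimal sketch (assuming the v2 “analyze” endpoint; the full post goes further) which submits a host and reports the grade of each endpoint:

#!/usr/bin/env python
# Minimal sketch (Python 2): poll the SSL Labs API until the assessment
# of a host is ready, then display the grade of each endpoint.
import json
import sys
import time
import urllib2

API = "https://api.ssllabs.com/api/v2/analyze?host=%s&fromCache=on"

def get_grades(host):
    while True:
        data = json.load(urllib2.urlopen(API % host))
        if data["status"] == "READY":
            return [(e["ipAddress"], e.get("grade", "-")) for e in data["endpoints"]]
        if data["status"] == "ERROR":
            raise Exception(data.get("statusMessage", "assessment failed"))
        time.sleep(30)  # assessments take a few minutes, be nice with the service

if __name__ == "__main__":
    for ip, grade in get_grades(sys.argv[1]):
        print("%s (%s): %s" % (sys.argv[1], ip, grade))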

Read More →

Detecting Suspicious Devices On-The-Fly

Just a link to my guest diary posted today on isc.sans.edu. I briefly introduced a method to perform permanent vulnerability scanning of newly detected hosts. The solution is based on OSSEC, ArpWatch and Nmap.

The article is here.
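To give you an idea of the principle, here is a minimal standalone sketch (the diary itself relies on OSSEC; this version is just for illustration and the paths are examples): it follows the arpwatch messages in syslog and fires an Nmap scan as soon as a “new station” is reported:

#!/usr/bin/env python
# Minimal sketch: watch syslog for arpwatch "new station" events and
# launch an Nmap scan against each newly detected host.
import re
import subprocess
import time

SYSLOG = "/var/log/syslog"  # adapt to your distribution
NEW_STATION = re.compile(r"arpwatch.*new station (\d+\.\d+\.\d+\.\d+)")

def follow(path):
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)
                continue
            yield line

for line in follow(SYSLOG):
    m = NEW_STATION.search(line)
    if m:
        ip = m.group(1)
        # Quick service scan, results stored for later review
        subprocess.call(["nmap", "-sV", "-oN", "/tmp/scan-%s.txt" % ip, ip])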

Tracking Processes/Malwares Using OSSEC

For a while now, malware has been at the front of the security stage and the situation is unlikely to change in the coming months. When I give presentations about malware, I always like to report two interesting statistics in my slides. They come from the 2012 Verizon DBIR: in 66% of investigated incidents, detection was a matter of months or even more, and 69% of data breaches were discovered by third parties. The problem of malware can be addressed at two levels: infection & detection. To protect against infection, more and more solutions are provided by security vendors and some perform quite well, but they don’t fully protect you. To contain the malware, the detection process is also critical. If you can’t prevent some malware from being installed, at least try to detect it as soon as possible. To track malicious activity, there is no magic: you have to search for what’s abnormal, to look for stuff occurring below the radar. Malware tries to remain stealthy but it has to perform some actions like altering the operating system and contacting its C&C. To detect such activity, OSSEC is a wonderful tool. I already blogged about a way to detect malicious DNS traffic with OSSEC and the help of online domain blacklists like malwaredomains.com.
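If you want to try this approach, OSSEC can perform fast lookups against CDB lists. Here is a minimal sketch (the source URL and paths are examples to adapt) which rebuilds such a list from a public domain blacklist:

#!/usr/bin/env python
# Minimal sketch (Python 2): download a list of malicious domains and
# rewrite it in the OSSEC CDB list format ("key:value", one per line).
import urllib2

URL = "http://mirror1.malwaredomains.com/files/justdomains"  # example source
CDB = "/var/ossec/lists/malwaredomains"

domains = urllib2.urlopen(URL).read().splitlines()
with open(CDB, "w") as f:
    for d in domains:
        d = d.strip()
        if d and not d.startswith("#"):
            f.write("%s:malwaredomains\n" % d.lower())

Once the file is generated, don’t forget to recompile the lists with /var/ossec/bin/ossec-makelists and to reference the list in your rules.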

Read More →

Keep an Eye on Your Amazon Cloud with OSSEC

The Amazon conference “re:Invent” is taking place in Las Vegas at the moment. For a while now, I have been using the Amazon cloud services (EC2), mainly to run lab and research systems. Amongst the multiple announcements they already made during the conference, one caught my attention: “CloudTrail“. Everything has already been said about the pros & cons of cloud computing, but one of the cons is particularly frustrating if, like me, you like to know what’s happening and to keep an eye on your infrastructure (mainly from a security point of view): who’s doing what, when and from where with your cloud resources? CloudTrail can help you to increase your visibility and is described by Amazon as follows:

CloudTrail provides increased visibility into AWS user activity that occurs within an AWS account and allows you to track changes that were made to AWS resources. CloudTrail makes it easier for customers to demonstrate compliance with internal policies or regulatory standards.

As explained in the Amazon blog post, once enabled, CloudTrail will generate files with events in a specific S3 bucket (which you configure during the setup). Those files will be available like any other data. What about grabbing the files at regular intervals and creating a local logfile that could be processed by a third-party tool like… OSSEC?

Generated events are stored as JSON data in gzipped files. I wrote a small Python script which downloads these files and generates a flat file:

$ ./getawslog.py -h
Usage: getawslog.py [options]

Options:
  --version             show program's version number and exit  
  -h, --help            show this help message and exit
  -b LOGBUCKET, --bucket=LOGBUCKET
                        Specify the S3 bucket containing AWS logs
  -d, --debug           Increase verbosity
  -l LOGFILE, --log=LOGFILE
                        Local log file
  -j, --json            Reformat JSON message (default: raw)
  -D, --delete          Delete processed files from the AWS S3 bucket

$ ./getawslog.py -b xxxxxx -l foo.log -d -j -D
+++ Debug mode on
+++ Connecting to Amazon S3
+++ Found new log: xxxxxxxxxxxx_CloudTrail_us-east-1_20131114T1325Z_xxx.json.gz
+++ Found new log: xxxxxxxxxxxx_CloudTrail_us-east-1_20131114T1330Z_xxx.json.gz
+++ Found new log: xxxxxxxxxxxx_CloudTrail_us-east-1_20131114T1335Z_xxx.json.gz
+++ Found new log: xxxxxxxxxxxx_CloudTrail_us-east-1_20131115T0745Z_xxx.json.gz
+++ Found new log: xxxxxxxxxxxx_CloudTrail_us-east-1_20131115T0745Z_xxx.json.gz
+++ Found new log: xxxxxxxxxxxx_CloudTrail_us-east-1_20131115T0750Z_xxx.json.gz

By default, the script will just append the JSON data to the specified file. If you use the “-j” switch, it will parse the received events and store them in a much more convenient format for further processing by OSSEC (using “key:value” pairs). Here is an example of a parsed event:

"eventVersion":"1.0","eventTime":"2013-11-15T07:55:53Z", "requestParameters":"{u'instancesSet': {u'items': [{u'instanceId': u'i-415f473b'}]}}","responseElements":"{u'instancesSet': {u'items': [{u'instanceId': u'i-415f473b', u'currentState': {u'code': 32, u'name': u'shutting-down'}, u'previousState': {u'code': 16, u'name': u'running'}}]}}","awsRegion":"us-east-1","eventName":"TerminateInstances","userIdentity":"{u'principalId': u'xxxxxxxxxxxx', u'accessKeyId': u'xxxxxxxxxxxxxxxxxxxx', u'sessionContext': {u'attributes': {u'creationDate': u'2013-11-15T07:48:03Z', u'mfaAuthenticated': u'false'}}, u'type': u'Root', u'arn': u'arn:aws:iam::xxxxxxxxxxxx:root', u'accountId': u'xxxxxxxxxxxx'}","eventSource":"ec2.amazonaws.com","userAgent":"EC2ConsoleBackend","sourceIPAddress":"xxx.xxx.xxx.xxx"

Within OSSEC, create a new decoder which will extract the information you find relevant. Here is mine:

<decoder name="cloudtrail">
 <prematch>^"eventVersion":"\d.\d"</prematch>
 <regex>"awsRegion":"(\S+)"\.+"eventName":"(\S+)"\.+"sourceIPAddress":"(\d+.\d+.\d+.\d+)"$</regex>
 <order>data,action,srcip</order>
</decoder>

And here is the event decoded by OSSEC:

**Phase 1: Completed pre-decoding.
 full event: '"eventVersion":"1.0","eventTime":"2013-11-15T07:55:53Z","requestParameters":"{u'instancesSet': {u'items': [{u'instanceId': u'i-415f473b'}]}}","responseElements":"{u'instancesSet': {u'items': [{u'instanceId': u'i-415f473b', u'currentState': {u'code': 32, u'name': u'shutting-down'}, u'previousState': {u'code': 16, u'name': u'running'}}]}}","awsRegion":"us-east-1","eventName":"TerminateInstances","userIdentity":"{u'principalId': u'xxxxxxxxxxxx', u'accessKeyId': u'xxxxxxxxxxxxxxxxxxxx', u'sessionContext': {u'attributes': {u'creationDate': u'2013-11-15T07:48:03Z', u'mfaAuthenticated': u'false'}}, u'type': u'Root', u'arn': u'arn:aws:iam::xxxxxxxxxxxx:root', u'accountId': u'xxxxxxxxxxxx'}","eventSource":"ec2.amazonaws.com","userAgent":"EC2ConsoleBackend","sourceIPAddress":"xxx.xxx.xxx.xxx"'
 hostname: 'boogey'
 program_name: '(null)'
 log: '"eventVersion":"1.0","eventTime":"2013-11-15T07:55:53Z","requestParameters":"{u'instancesSet': {u'items': [{u'instanceId': u'i-415f473b'}]}}","responseElements":"{u'instancesSet': {u'items': [{u'instanceId': u'i-415f473b', u'currentState': {u'code': 32, u'name': u'shutting-down'}, u'previousState': {u'code': 16, u'name': u'running'}}]}}","awsRegion":"us-east-1","eventName":"TerminateInstances","userIdentity":"{u'principalId': u'xxxxxxxxxxxx', u'accessKeyId': u'xxxxxxxxxxxxxxxxxxxx', u'sessionContext': {u'attributes': {u'creationDate': u'2013-11-15T07:48:03Z', u'mfaAuthenticated': u'false'}}, u'type': u'Root', u'arn': u'arn:aws:iam::xxxxxxxxxxxx:root', u'accountId': u'xxxxxxxxxxxx'}","eventSource":"ec2.amazonaws.com","userAgent":"EC2ConsoleBackend","sourceIPAddress":"xxx.xxx.xxx.xxx"'
**Phase 2: Completed decoding.
 decoder: 'cloudtrail'
 extra_data: 'us-east-1'
 action: 'TerminateInstances'
 srcip: 'xxx.xxx.xxx.xxx'
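
Of course, the decoder alone is not enough: you need at least one rule to generate alerts from the decoded events. Here is a basic example (the rule IDs and levels are mine, adapt them to your local policy):

<group name="cloudtrail,">
  <rule id="100100" level="3">
    <decoded_as>cloudtrail</decoded_as>
    <description>CloudTrail: AWS API call.</description>
  </rule>

  <rule id="100101" level="10">
    <if_sid>100100</if_sid>
    <action>TerminateInstances</action>
    <description>CloudTrail: EC2 instance terminated!</description>
  </rule>
</group>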

So easy! Schedule the script via a cron job to automatically grab new events and happy logging! The CloudTrail service is still in beta and is not (yet) available everywhere (ex: not in the EU region) but it seems to work quite well. My script is available here.
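For completeness, here is how everything can be glued together (paths are examples to adapt). A crontab entry to fetch the new events every five minutes:

*/5 * * * * /usr/local/bin/getawslog.py -b xxxxxx -l /var/log/cloudtrail.log -j -D

And the corresponding entry in ossec.conf to monitor the generated flat file:

<localfile>
  <log_format>syslog</log_format>
  <location>/var/log/cloudtrail.log</location>
</localfile>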

 

Review: Instant OSSEC Host-Based Intrusion Detection System

The guys from Packt Publishing asked me to review a new book from their “Instant” collection: “OSSEC Host-Based Intrusion Detection“. This collection proposes books of less than 100 pages about multiple topics; the goal is to go straight to the point. OSSEC being one of my favorite applications, I could not miss this opportunity! The book’s author is Brad Lhotsky, a major contributor to the OSSEC community. Amongst the list of reviewers, we find JB Cheng, the OSSEC project manager responsible for OSSEC releases. This is a guarantee of quality for the book!

Read More →

Improving File Integrity Monitoring with OSSEC

FIM or “File Integrity Monitoring” can be defined as the process of validating the integrity of operating system and application files, using a verification method based on a hashing algorithm like MD5 or SHA1 and comparing the current file state with a baseline. A hash will allow the detection of changes to a file’s content, but other information can be checked too: owner, permissions, modification time. Implementing file integrity monitoring is a very good way to detect compromised servers. Not only operating system files can be monitored (/etc on UNIX, the registry on Windows, shared libraries, etc) but also applications (monitoring your index.php or index.html can reveal a defaced website).

During its implementation, a file integrity monitoring project may face two common issues:

  • The baseline against which the current file state is compared must of course be trusted. To achieve this, it must be stored in a safe place where an attacker cannot find or alter it!
  • The process must be fine-tuned to react only to important changes, otherwise there are two risks: the really suspicious changes will be hidden in a massive flow of false positives, and the people in charge of the control could miss interesting changes.

There are plenty of tools which implement FIM, commercial as well as free. My choice has been OSSEC for a while. My regular followers know that I already posted a lot of articles about it. I also contributed to the project with a patch to add geolocation to alerts. This time, I wrote another patch to improve the file integrity monitoring feature of OSSEC.
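As a reminder, here is what a basic syscheck configuration looks like in ossec.conf (the directories and frequency are examples; “report_changes” stores a diff of text files when they are modified):

<syscheck>
  <!-- Scan interval in seconds (example: every 6 hours) -->
  <frequency>21600</frequency>

  <!-- Operating system files -->
  <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>

  <!-- Web application files, report content changes -->
  <directories check_all="yes" report_changes="yes">/var/www</directories>
</syscheck>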

Read More →

Malicious DNS Traffic: Detection is Good, Proactivity is Better

It seems that our beloved DNS protocol is again the center of interest for some security $VENDORS. For a while now, I have seen the expression “DNS Firewall” used more and more in papers or presentations. It’s not a new buzz… The DNS protocol is well known to be an excellent vector of infection and/or data exfiltration. But what is a “DNS firewall” or “Strong DNS Resolver“?

Read More →

Howto: Distributed Splunk Architecture

Implementing a good log management solution is not an easy task! If your organisation decides (should I add “finally“?) to deploy “tools” to manage your huge amount of logs, it’s a very good step forward, but it must be properly addressed. Devices and applications have plenty of ways to generate logs: they can send SNMP traps or Syslog messages, write to a flat file, write to a SQL database or even send smoke signals (thanks to our best friends the developers). It’s definitely not an out-of-the-box solution that must be deployed. Please do NOT trust $VENDORS who argue that their killing-top-notch-solution will be installed in a few days and collect everything for you! Before trying to extract the gold from your logs, you must correctly collect events. This means, first of all: do not lose any of them. It’s a good opportunity to recall Murphy’s law here: the lost event will always be the one which contained the most critical piece of information! In most cases, a log management solution will be installed on top of an existing architecture. This involves several constraints:

  • From a security point of view, firewalls will for sure block the flows used by the tools. Their policy must be adapted. The same applies to the applications or devices.
  • From a performance point of view, the tools can’t have a negative impact on the “business” traffic.
  • From a compliance point of view, the events must be properly handled with respect to confidentiality, integrity and availability (you know, the well-known CIA principle).
  • From a human point of view (maybe the most important), you will have to fight with other teams and ask them to change the way they work. Be social! 😉

To achieve those requirements, or at least to try to reach them, your tools must be deployed in a distributed architecture. By “distributed“, I mean using multiple software components deployed in multiple places in your infrastructure. The primary reason for this is to collect the events as close as possible to their original source. If you do this, you will be able to respect the CIA principle and:

  • To control the resources used to process and centralise them
  • To get rid of multiple protocols, proprietary or open
  • To control their correct processing from A to Z.

For those who are regular readers of my blog, you know that I’m a big fan of OSSEC. This solution implements a distributed architecture with agents installed on multiple collection points to grab and centralise the logs:

[OSSEC architecture schema]

OSSEC is great but lacks a good web interface to search for events and generate reports. A lot of people interconnect their OSSEC server with a Splunk instance; there is a very good integration of both products using a dedicated Splunk app. Usually, Splunk is deployed on the OSSEC server itself. The classic way to let Splunk collect OSSEC events is to configure a new Syslog destination for alerts like this (in your ossec.conf file):

<syslog_output>
<server>10.10.10.10</server>
<port>10001</port>
</syslog_output>

This configuration block will send alerts (only!) to Splunk via Syslog messages sent to 10.10.10.10:10001 (where Splunk will listen for them). Note that the latest OSSEC version (2.7) can write native Splunk events over UDP. Personally, I don’t like this way of forwarding events because UDP remains unreliable and only OSSEC alerts are forwarded. I prefer to process the OSSEC files using the file monitoring feature of Splunk:

[monitor:///data/ossec/logs]
whitelist=\.log$

But what if you have multiple OSSEC servers across multiple locations? Splunk also has a solution for this, called the “Universal Forwarder“. Basically, this is a light Splunk instance which is installed without any console. Its goal is just to collect events in their native format and forward them to a central Splunk instance (the “Indexer“):

[Splunk architecture schema]

If you have experience with ArcSight products, you can compare the Splunk Indexer to the ArcSight Logger and the Universal Forwarder to the SmartConnector. The configuration is pretty straightforward. Let’s assume that you already have a Splunk server running. In your $SPLUNK_HOME/etc/system/local/inputs.conf, create a new input:

[splunktcp-ssl:10002]
 disabled = false
 sourcetype = tcp-10002
 queue = indexQueue

[SSL]
 password = xxxxxxxx
 rootCA = $SPLUNK_HOME/etc/auth/cacert.pem
 serverCert = $SPLUNK_HOME/etc/auth/server.pem

Restart Splunk and it will now bind to port 10002 and wait for incoming traffic. Note that you can use the provided certificate or your own; it’s of course recommended to encrypt the traffic over SSL! Now install a Universal Forwarder. Like the regular Splunk, packages are available for most modern operating systems. Let’s play with Ubuntu:

# dpkg -i splunkforwarder-5.0.1-143156-linux-2.6-intel.deb

Configuration can be achieved via the command line but it’s very easy to do it directly by editing the *.conf files. Configure your Indexer in the $SPLUNK_HOME/etc/system/local/outputs.conf:

[tcpout]
 defaultGroup = splunkssl

[tcpout:splunkssl]
 server = splunk.index.tld:10003
 sslVerifyServerCert = false
 sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
 sslPassword = xxxxxxxx
 sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem

The Universal Forwarder’s inputs.conf file is a normal one: just define all your sources there and start the process. It will start forwarding all the collected events to the Indexer. This is a quick example which demonstrates how to improve your log collection process. The Universal Forwarder will take care of the collected events, send them safely to your central Splunk instance (compressed, encrypted) and queue them in case of an outage.
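For example, a minimal inputs.conf on a Universal Forwarder installed next to an OSSEC server could look like this (the path and sourcetype are mine, adapt them):

[monitor:///var/ossec/logs/alerts/alerts.log]
 disabled = false
 sourcetype = ossec_alerts

Restart the forwarder and the OSSEC alerts will flow to your Indexer over the SSL channel configured above.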

A final note, don’t ask me to compare Splunk, OSSEC or ArcSight. I’m not promoting a tool. I just gave you an example of how to deploy a tool, whatever your choice is 😉

MySQL Attacks Self-Detection

I’m currently attending the Hashdays security conference in Lucerne (Switzerland). Yesterday I attended the first round of talks (the management session). Amongst all the interesting presentations, Alexander Kornbrust got my attention with his topic: “Self-Defending Databases“. Alexander explained how databases can be configured to detect suspicious queries and prevent attacks. Great ideas, but there was one negative point for me: only Oracle databases were covered. It sounds logical though; in 2008, Oracle was first (70%) in terms of database deployments, followed by Microsoft SQL Server (68%) and MySQL (50%). I did not find more recent numbers but the top-3 should remain the same. Alexander gave me the idea to investigate how to do the same with MySQL.
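To give you a first idea (this is a minimal sketch of one possible starting point, not necessarily the technique detailed in the full post), the MySQL general query log can be watched for typical injection patterns:

#!/usr/bin/env python
# Minimal sketch: watch the MySQL general query log for typical SQL
# injection patterns. Enable the log first:
#   mysql> SET GLOBAL general_log_file = '/var/log/mysql/query.log';
#   mysql> SET GLOBAL general_log = 'ON';
import re
import time

PATTERNS = re.compile(r"(union\s+select|information_schema|into\s+outfile|benchmark\s*\()", re.I)

with open("/var/log/mysql/query.log") as f:
    f.seek(0, 2)  # start at the end of the file
    while True:
        line = f.readline()
        if not line:
            time.sleep(1)
            continue
        if PATTERNS.search(line):
            print("Suspicious query: %s" % line.strip())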

Read More →