If you follow my blog on a regular basis, you probably already know that I’m a big fan of OSSEC. I use it to monitor all my personal systems (servers, labs, websites, etc). Being a day-to-day user, I always have new ideas to extend the product, by using 3rd party tools or by adding features. One of the missing features (at least for me) is the lack of context when an alert is generated. Tracking the attackers’ source IP addresses is very nice: OSSEC can trigger active-response scripts to blacklist them for a short period, but how can we get more “visibility” into those addresses? When you think about giving more visibility to IP addresses, you immediately think: geolocation! I already posted an article about the power of geolocation (link) to map alerts onto a Google map (example: a brute-force attack). This is very interesting but requires manual actions. As the IP addresses are already known by OSSEC (saved in the variable “srcip“), why not let OSSEC do the job for us in real time?
The problem solved by geolocation is the following: how to convert IP addresses into coordinates (longitude/latitude) first and then map them to a country and/or city? MaxMind has the solution. This company maintains a database of all the assigned IP addresses (over 99.5% accurate on a country level and 78% on a city level for the US within a 40 kilometer radius – source: MaxMind) with their mapping to geographic locations. Note that they provide two databases: IPv4 and IPv6. They offer APIs for several languages, including C. Good news: OSSEC is written in C! So, I wrote a patch which performs a geolocation of the “srcip” attackers to display the location of the IP address. Let’s go…
Step 1: Install the MaxMind GeoIP API
This is as easy as any other open source tool/library: download the source code and install it. It should not be a problem on classic Linux flavors.
# wget http://www.maxmind.com/download/geoip/api/c/GeoIP-1.4.8.tar.gz
# tar xzvf GeoIP-1.4.8.tar.gz
# cd GeoIP-1.4.8
# ./configure
# make
# make install
By default, everything will be installed under /usr/local but you are free to change the paths via the configure script. The API provides C include files and (dynamic/static) libraries, nothing exotic.
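One caveat worth mentioning: on several Linux distributions the runtime linker does not search /usr/local/lib by default, so OSSEC may fail to find libGeoIP at run time. A quick fix could look like this (a sketch; the file name geoip.conf is an arbitrary choice, adjust the path if you changed the prefix at configure time):

# Tell the dynamic linker where the GeoIP library was installed,
# then rebuild the linker cache:
echo "/usr/local/lib" > /etc/ld.so.conf.d/geoip.conf
ldconfig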
Step 2: Install or recompile OSSEC with GeoIP localization
If you already have an OSSEC server running, you can simply apply my patch to the original source tree. Be careful if you already made local changes! I created the patch against a standard 2.6 tarball. Like the MySQL support, the GeoIP feature has to be explicitly enabled at build time.
# wget http://www.ossec.net/files/ossec-hids-2.6.tar.gz
# tar xzvf ossec-hids-2.6.tar.gz
# wget http://blog.rootshell.be/wp-content/uploads/2012/06/ossec-geoip.patch
# cd ossec-hids-2.6
# patch -p1 < ../ossec-geoip.patch
# cd src
# make setgeoip
# cd ..
# ./install.sh
Step 3: Install the MaxMind GeoIP DBs
MaxMind provides different versions of the databases. I’m using GeoLite City, the free version which provides precision up to the city level. That’s precise enough for me. The databases are not provided with the API; they must be installed manually:
# cd /var/ossec/etc
# wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
# gzip -d GeoLiteCity.dat.gz
# wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCityv6-beta/GeoLiteCityv6.dat.gz
# gzip -d GeoLiteCityv6.dat.gz
I suggest downloading both databases (v4 & v6), and don’t forget that they are regularly updated! It’s worth setting up a small cron job to install new versions and keep the results accurate.
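Here is a minimal update script you could call from cron (a sketch: the install path and the schedule are only examples, the download URLs are the same as above):

#!/bin/sh
# update-geoip.sh - refresh the MaxMind GeoLite databases used by OSSEC.
# Example crontab entry (first day of each month at 05:00):
#   0 5 1 * * root /usr/local/bin/update-geoip.sh
cd /var/ossec/etc || exit 1
wget -q http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz && \
    gzip -df GeoLiteCity.dat.gz
wget -q http://geolite.maxmind.com/download/geoip/database/GeoLiteCityv6-beta/GeoLiteCityv6.dat.gz && \
    gzip -df GeoLiteCityv6.dat.gz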
Step 4: Fix the OSSEC configuration files
Once the patch is installed, new parameters are available to set up the GeoIP environment. First, in the ossec.conf “global” section, define the path of your databases:
<global>
  <geoip_db_path>/etc/GeoLiteCity.dat</geoip_db_path>
  <geoip6_db_path>/etc/GeoLiteCityv6.dat</geoip6_db_path>
</global>
In the “alerts” section, activate the GeoIP feature:
<alerts>
  <use_geoip>yes</use_geoip>
</alerts>
Finally, in the internal_options.conf, enable (or disable) the display of GeoIP information in the notification emails:
# Maild GeoIP support (0=disabled, 1=enabled)
maild.geoip=1
How does it work? The OSSEC process which performs the GeoIP lookups is “ossec-analysisd“. When an alert must be logged and a “srcip” has been decoded, the IP address is passed to a new function, GeoIPLookup(), which calls the MaxMind API and returns a string with the geolocation data. This data is appended to the alert text. The second patched component is “ossec-maild“, which parses the alerts and sends emails also containing the GeoIP data (if enabled).
During the configuration, keep in mind that “ossec-analysisd” runs chrooted (in the main OSSEC directory), so the paths to the GeoIP databases must be defined relative to the chroot environment!
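To make this concrete, here is how the paths relate on a default installation under /var/ossec (a sketch only):

# On the real filesystem, the databases live under the OSSEC directory:
ls -l /var/ossec/etc/GeoLiteCity.dat /var/ossec/etc/GeoLiteCityv6.dat

# But ossec-analysisd chroots to /var/ossec, so ossec.conf references them
# with paths relative to that chroot (see the <global> block above):
#   <geoip_db_path>/etc/GeoLiteCity.dat</geoip_db_path>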
Here are some examples of alerts with GeoIP data enabled:
** Alert 1338899194.500996: - apache,invalid_request,
2012 Jun 05 14:26:34 (xxx) x.x.x.x->/var/log/apache/error_log
Rule: 30115 (level 5) -> 'Invalid URI (bad client request).'
Src IP: 22.214.171.124
Src Location: RU,Moscow City,Moscow
[Tue Jun 05 14:26:34 2012] [error] [client 126.96.36.199] [deleted]

** Alert 1338901319.507426: - syslog,postfix,spam,
2012 Jun 05 15:01:59 (xxx) x.x.x.x->/var/log/syslog
Rule: 3303 (level 5) -> 'Sender domain is not found (450: Requested mail action not taken).'
Src IP: 188.8.131.52
Src Location: NL,Zuid-Holland,Alphen
Jun 5 15:01:43 xxx postfix/smtpd: NOQUEUE: reject: [deleted]

Received From: (xxx) x.x.x.x->/var/log/apache/access_log
Rule: 100106 fired (level 15) -> "PHP CGI-bin vulnerability attempt."
Src Location: IL,Tel Aviv,Tel Aviv-yafo
Portion of the log(s):
184.108.40.206 - - [05/Jun/2012:16:42:07 +0200] [deleted]
Note that GeoIP lookups will be successful only for alerts which have a valid “srcip” field! In all other cases, the returned location will be “(null)“. What about the impact on performance? I’ve been using this patch in production for a few days and I did not notice any degradation. The GeoIP lookup is performed only once per alert and the result is simply re-parsed later by ossec-maild. (I have an average of 2,000 alerts logged per day.)
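As a side benefit, the enriched alerts make quick statistics trivial. For example, a rough top of attacking countries can be extracted with a one-liner like this (a sketch, assuming the default alert log location):

# Count the source countries seen in GeoIP-enriched alerts:
grep "Src Location:" /var/ossec/logs/alerts/alerts.log \
    | awk '{print $3}' | cut -d',' -f1 | sort | uniq -c | sort -rn | head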
My OSSEC server is compiled with MySQL support and I did not detect any incompatibility between MySQL and my patch. At this time, the geolocation data are not sent to the MySQL alert table, but this could be done easily: storing the latitude/longitude could be helpful to map attacks in real time using a 3rd party tool.
My patch is available here. Feel free to download it, use it and maybe improve it. Comments and suggestions are appreciated. Final disclaimer: I won’t be responsible if you break your current OSSEC setup…
The Apache Foundation released the new version of their very popular Apache web server. Lots of interesting changes have been introduced in this release. From my point of view (and because it’s one of my favorite topics), a very interesting change is the way Apache now handles its logs. Your web server logs must be properly handled like any other logs. But, because web sites remain an important attack vector today, more attention must be given to them. Let’s quickly review how to manage your logs with Apache.
During the last BruCON edition (0x03), we operated our own DNS resolver. Instead of using public servers or the ones proposed by our ISP, pushing our own DNS resolver to network visitors can be really interesting. Of course, being addicted to logs, I activated the “queries_log” feature of BIND to log every request performed by BruCON visitors.
Important remark: this information was collected for evidence requirements. In case of a security incident, being able to find who resolved a specific hostname is priceless. The information extracted from the log file to write this blog post did not break the privacy of the BruCON visitors!
Back home with plenty of logs, I decided to analyze the huge “queries.log” file (only the first day, for time reasons). Here follow some statistics…
First, there were fewer queries than expected: 414687 queries were logged in the 24-hour logfile. Based on twelve hours (09:00 – 21:00), that’s only 9.5 requests/min for 600 devices (I assumed here 1.5 devices per visitor – laptops, PDAs, tablets,…). It looks like more and more people use open/public DNS servers such as Google or OpenDNS. That’s a first good conclusion: people do not trust the DNS provided by their ISP (in our case, BruCON). This was proven again recently with the Pirate Bay case in Belgium. On the other hand, BruCON attendees are not the “average man in the street” in terms of security.
Let’s give some numbers now:
- 414687 queries in 24 hours
- IPv4 / IPv6 split: 200091 “A” requests / 139617 “AAAA” requests
- 30034 unique FQDNs requested
- 11544 unique TLDs requested (xxx.yyy)
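For reference, numbers like these can be extracted with simple shell one-liners. Here is a sketch for the FQDN ranking (the exact parsing depends on your BIND version and the format of your query logging channel):

# Top-10 requested FQDNs from a BIND query log:
grep ' query: ' queries.log | sed 's/^.* query: //' | awk '{print $1}' \
    | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn | head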
Top-10 TLD resolved:
(brucon.org and pwn3d.be – used by the wall of sheep – were present in the top-10 but were removed due to the close relation with the event)
What do we learn from this top-10? Google remains a killer online service provider and Twitter was used to cover the event (with lots of posted pictures). Facebook, a classic; why am I not surprised? It looks like security people are fans of Apple products, but lots of them are also using Windows Vista or Seven. This is proven by the number of requests to “www.msftncsi.com“: those are due to the “Network Connectivity Status Indicator” feature present in the latest Microsoft operating systems, which puts the little “earth” icon next to the network interface icon in the system tray.
More surprising, no trace of common URL shorteners in the top-50! While people mainly used Twitter to post BruCON news online, api.twitter.com was the top Twitter FQDN: people do not use the native web interface but clients (I suppose mostly on PDAs). Something scarier: I saw a lot of requests to big companies’ domains (no names given here). For me it means two things: people are either using a corporate device while attending a security conference or they connect to their corporate environment via VPN services. Some directly access resources like “owa.company.com“. Don’t do this!
Some interesting stuff:
- Ubuntu seems to be the preferred Linux distribution, given the huge amount of requests to ntp.ubuntu.com.
- Gmail is a common e-mail platform but lots of people manage their emails via IMAP (imap.gmail.com).
- ocsp.verisign.com / ocsp.thawte.com are quite heavily used (“Online Certificate Status Protocol“).
- Bittorrent remains a classic tool to search for content.
- WordPress remains a top platform for security bloggers.
- WPAD (“Web Proxy Autodiscovery Protocol“) is a nice way to detect where your visitors are coming from. Most browsers try to resolve “wpad.company.tld” to configure their proxy settings.
- Special mention to Peter from corelan.be, who was resolved quite often!
Something common but dangerous: typos! Typo-squatting still remains a valid way to catch people! So many errors… A tip for you: bookmark the sites you visit often and access them only from your bookmarks!
Last but not least, some fun:
- We had a fan of COBOL who visited www.opencobol.org!
- Adult sites are everywhere (even if I found fewer requests than expected!)
The final top-100 is composed of domains related to technology websites, social media and information gathering. Then come sites related to “real life”: restaurants, traveling, bars, etc. This proves that people can be profiled just by inspecting their DNS traffic. Sometimes critical information is disclosed just by reading the FQDNs, such as the applications running on the computer or the operating system.
This week is the third annual OSSEC week! A good initiative to promote this open source log management solution. This post is my first contribution to the OSSEC community; I hope to publish more if I have enough time. OSSEC is an excellent tool to collect and analyze the events generated by your (multiple) hosts and applications. But, being based on a command line interface, OSSEC lacks “visibility” (IMHO). As you know, “one picture is worth a thousand words“. That’s why displaying a “map” of your alerts can be very helpful to quickly detect suspicious activity or to analyze security incidents. My goal was to add a feature like the one present in the ArcSight ESM tool (called “Event Graph“).
OSSEC has offered an interface with Picviz for a while. Picviz is a nice tool but the integration is very basic and does not allow filtering events. The generated graphs can quickly become unreadable if you have a lot of alerts. I’m a big fan of another visualization tool called AfterGlow. Basically, this tool helps to understand the relations between “objects“. In the context of OSSEC, the useful objects are:
- The attackers (source IP address or user)
- The alert description
- The destination (the OSSEC location based on the agent name / log source)
[220.127.116.11] -> [Attempt to access forbidden file or directory.] -> [web1->/var/log/apache2/access.log]
[10.0.0.1] -> [SSHD authentication success.] -> [unix1->/var/log/auth.log]
[18.104.22.168] -> [Access attempt blocked by Mod Security.] -> [web1->/var/log/apache2/error.log]
My first idea was to add an interface like the one implemented for Picviz (using a named pipe). But the required information is already available in the OSSEC MySQL database (if you enabled this feature). To feed AfterGlow with OSSEC data, I’m using a Perl script which reads the database. The script syntax is:
Usage: ./alerts2afterglow.pl --dbpass=password
          [--dbhost=127.0.0.1] [--dbport=3306] [--dbname=ossec] [--dbuser=ossec]
          [--logfile=./alerts2afterglow.log] [--exclude-alerts=id1[,id2,...]]
          [--time-interval="30 minute"] [--do-reverse] [--show-duplicate]
          [--help] [--debug]
The most important parameters are:
- “--time-interval” allows you to specify the time window of alerts to export, counting back from now(). Supported values are “second”, “minute”, “hour”, “day” or “week”.
- “--exclude-alerts” allows you to exclude a list of OSSEC alert IDs. This is useful to remove “noise” from your graphs. IDs are separated by commas.
- “--do-reverse” performs a reverse DNS lookup of all IP addresses extracted from the database. Sometimes a hostname makes it easier to interpret the source of the attacks.
To generate a complete graph, combine the Perl script with Afterglow and a dot rendering tool:
$ ./alerts2afterglow.pl --dbpass=xxx \
    --exclude-alerts=3302,3303 \
    --time-interval="1 hour" \
  | ./afterglow.pl -c ossec.properties \
  | circo -v -Tgif -o /var/www/ossec-alerts-1h.gif
And here are some examples of generated maps:
The Perl script is available here. Comments and contributions are welcome!
The primary goal of a log management solution is to receive events from multiple sources, to parse them and to make them available for multiple purposes: searching, alerting and reporting. But why not send some interesting events to another log management system or application? Usually, some inputs are added to the log management environment, like IP address blacklists, lists of vulnerabilities, etc. But we can also generate some interesting outputs. By receiving data from multiple systems, it is possible to extract even more interesting stuff. That’s what dshield.org has been doing for years! Dshield is a service operated by the Internet Storm Center (ISC). Many volunteers from all around the world feed a huge database with events collected from systems like firewalls, routers, IDS, etc. Based on this information, reports and valuable content are generated which can be re-used in log management or SIEM environments; the loop is complete! Once you have created an account, you can install a client which will collect your logs and send them at regular intervals to dshield.org. Clients are available for the common types of firewalls.
But what if you already collect your firewall logs via a log management tool (like OSSEC, just an example)? Why install a second client or agent? Once the logs are collected and centralized, why not send them directly from OSSEC? Dshield describes how to write your own client and the format is quite simple, so I wrote a small Perl script which works as described in the following schema:
It reads the OSSEC firewall.log file and generates events in Dshield format. The script syntax is simple:
$ ossec2dshield.pl --log=file --userid=dshieldid --statefile=file --from=email --mta=hostname
       [--help] [--debug] [--test] [--obfuscate] [--ports=port1,port2,...]
Where:
  --help                   : This help
  --debug                  : Display processing details to stdout
  --test                   : Test only, do not mail the info to dshield.org
  --obfuscate              : Obfuscate the destination address (10.x.x.x)
  --ports=port1,!port2,... : Filter destination ports ex: !25,!80,445,53
  --log=file               : Your OSSEC firewall.log
  --userid=dshieldid       : Your dshield.org UserID (see http://www.dshield.org)
  --statefile=file         : File to write the state of log processing
  --from=email             : Your e-mail address (From:)
  --mta=hostname           : Your Mail Transfer Agent (to send mail to dshield.org)
$ ./ossec2dshield.pl --log=/ossec/logs/firewall/firewall.log \
    --statefile=/ossec/logs/firewall/firewall.log.state \
    --userid=12345 --from=firstname.lastname@example.org \
    --mta=localhost --ports="!80,!443"
You will need your dshield.org UserID and a mail relay (MTA). Very important: the state file will contain the timestamp of the last processed event, which prevents events from being sent twice to dshield.org. Once processed, the data will be submitted to register(at)dshield(dot)org. Using “--ports“, you can exclude some ports or restrict the report to interesting ones. Example: “--ports='!80,!22,!443'” will report all blocked firewall traffic except for the destination ports 80, 22 and 443.
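If you want to automate the submission, a crontab entry along these lines could do it (a sketch in /etc/cron.d format; the script location is a placeholder, the paths mirror the example above and the hourly schedule is just a suggestion):

# Submit the latest collected firewall events to dshield.org every hour:
0 * * * * root /path/to/ossec2dshield.pl --log=/ossec/logs/firewall/firewall.log --statefile=/ossec/logs/firewall/firewall.log.state --userid=12345 --from=firstname.lastname@example.org --mta=localhost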
There are multiple advantages to using the OSSEC firewall log:
- You don’t need an extra piece of software installed on the firewalls.
- You don’t need to send data to the Internet from the firewalls.
- You don’t need multiple clients: your logs are already processed and normalized by OSSEC.
The current script is not as powerful as the regular Dshield clients but it works. If you have ideas or suggestions, contact me. (Note: your OSSEC must be properly configured to collect and store firewall logs. Check out the OSSEC documentation for more details about this setup.)
The script is available here on github.com. Feel free to re-use my code or to add features.
Log management… A hot topic! There are plenty of solutions to manage your logs. Like in all IT domains, there are two major categories: free and commercial tools. Both have pros and cons. No big debate here; on the contrary, I’ll show you a good example of a mix between both worlds. Let’s take two products: OSSEC & ArcSight. OSSEC is a free log management solution. If you follow my blog, you know that it’s one of my favorite toys at the moment. But I’m also involved in ArcSight projects. It’s a very robust commercial log management / SIEM solution (let’s stop the promotion here).
One of the main issues in log management is the way of handling events. All of them are generated by multiple devices or applications in several formats. Once collected, those events must be parsed and normalized. ArcSight developed a specific format called CEF (“Common Event Format“). It is described on their website as follows:
“The Common Event Format (CEF) is an open log management standard that improves the interoperability of security-related information from different security and network devices and applications. CEF is based on expertise from building support for over 275 products across more than 35 solution categories and is the first log management standard to support a broad range of device types. CEF enables technology companies and customers to use a common event log format so that data can easily be collected and aggregated for analysis by an enterprise management system.“
More information is available here if you are interested. Honestly, there are not so many products which support CEF today. But it looks like this is changing and big players have started to implement the generation of native CEF events in their products. What makes CEF important? By generating CEF events, you remove an important step in the log management process: the normalization. CEF events already contain all the important information. Basically, a CEF event is based on the following structure:
CEF:Version|Device Vendor|Device Product|Device Version|Signature ID|\
    Name|Severity|Extension
The field names are self-explanatory. The last one (“Extension“) is used to pass extra information; it contains a collection of key=value pairs.
CEF:0|ArcSight|Logger|22.214.171.12455.2|sensor:115|Logger Internal Event|1|\
    cat=/Monitor/Sensor/Fan5 cs2=Current Value cnt=1 dvc=10.0.0.1 cs3=Ok \
    cs1=null type=0 cs1Label=unit rt=1305034099211 cs3Label=Status cn1Label=value \
    cs2Label=timeframe
Let’s go back to OSSEC. This software can very easily grab events from multiple devices and applications, process them and generate useful alerts. OSSEC also has an interesting “remote syslog” feature to forward alerts to a remote Syslog server. So why not let OSSEC generate CEF events by itself? ArcSight could be used to centralize logs coming from multiple sources and agents as well as from OSSEC. OSSEC is very light and free, why not integrate it with ArcSight? A winning team!
I wrote a small patch to add CEF support to the csyslogd daemon, the one responsible for sending alerts to the remote Syslog server. My patch adds a new format that can be specified in your ossec.conf file:
<syslog_output>
  <server>10.0.0.1</server>
  <port>514</port>
  <format>cef</format>
</syslog_output>
Once you have applied the patch, recompile the os_csyslogd module, adapt your ossec.conf and restart OSSEC! Alerts will now be sent in CEF format. The following information is mapped into the new event:
- Vendor (Trend Micro Inc.)
- Product (OSSEC HIDS)
- Version (v2.5.1)
- Rule ID
- Rule Name
- Message (which will contain the “srcip” and “user” fields)
CEF:0|Trend Micro Inc.|OSSEC HIDS|v2.5.1|5302|\
    User missed the password to change UID to root.|9|dvc=ubuntusvr \
    cs2=ubuntusvr->/var/log/auth.log cs2Label=Location src= suser=root \
    msg=May 11 21:16:05 ubuntusvr su: - /dev/pts/1 xavier:root
If you specify in the <syslog_output> XML block the IP address of your ArcSight device (properly configured to receive CEF events over UDP), the alerts will be smoothly indexed and stored in the database.
But, you will ask, why implement this? Here are some scenarios:
- OSSEC integrates nice features like rootkit detection and FIM (“File Integrity Monitoring“) that ArcSight does not have; this could add more value to your SIEM.
- During a transition period between the two products (and to avoid multiple agents on your servers).
- To use the powerful ArcSight correlation engine without deploying a huge platform (log archiving remains at the OSSEC level).
- OSSEC decoders are much easier to write and deploy compared to the ArcSight FlexConnectors (which also require a developer license!).
Note that this patch does not implement the classification of the OSSEC alerts using the ArcSight mechanism (via the field “categorySignificance“). It’s just a dumb rewrite of the alerts using a CEF pattern. If you have ideas or comments, they are welcome!
My patch can be downloaded here and, as usual, it comes for free and without any warranty.
I would like to tell you about a situation I experienced this afternoon. The goal of a log management solution is to collect and store events from several devices and applications in a central and safe place. By using search and reporting tools, useful information can be extracted from those events to investigate incidents or suspicious behaviors. During a live implementation, I started to collect Syslog messages from a bunch of Cisco switches and routers. While checking whether the events were correctly normalized and processed, I discovered a lot of “traceback” messages like the following one:
-Process= "xxx", level= 0, pid= 172
-Traceback= 1A32 1FB4 5478 B172 1054 1860 ...
For the Cisco administrators amongst you, this means a problem: the device dumped debug information, which is always useful when reporting the issue to Cisco. I’m not a Cisco admin, but it looked suspicious to me and I reported this information to the local network admin:
Me: “FYI, I detected a suspicious behavior on the router xxx, there are regular tracebacks generated by IOS”
Admin: (He checks) “Hmmm… My device is working as expected.”
Me: “Maybe but there is an issue on this device!”
Admin: “Could you implement a filter on the log management platform to get rid of those events? They are not important for me.”
Me: “Technically, I could. But they’re generated for a good reason! You should investigate…“
Some minutes later…
Admin: “Ok, I reduced the log level. You shouldn’t see them anymore.“
Indeed, no more traceback events were collected by the log management platform. I suppose he applied the following configuration:
Router# conf t
Router(config)# no service log backtrace
Router(config)# end
From a technical point of view, this guy was right: it’s always possible to filter some “unwanted” events and prevent them from being processed and indexed. However, how do you define an “unwanted” event? The admin was wrong to reduce the log level. Again, the goal of a log management solution is to distinguish critical or suspicious events from the continuous flow of events generated by your infrastructure. If you implement filters that are too strict, you risk missing interesting events.
Don’t be an ostrich! A log management solution without alerts, or a dashboard always full of green indicators, is not reliable: it gives a false sense of security. Instead of getting rid of annoying events by implementing filters, search for their root causes!
For a while, I was looking for a good solution to display my OSSEC server status in (near) real time. For most of us, the classic log file monitoring tool still remains the “tail | grep | awk | less” command chain. While it perfectly catches the events you are looking for, you can miss very important ones. OSSEC has its own WebUI but it is quite old (the latest release dates from 2008) and, even if it comes with lots of interesting features, it does not match my main requirement: to have a unique dashboard with relevant live information about my OSSEC infrastructure.
Designing a dashboard is not an easy task! I always remember my statistics professor who said that numbers can be manipulated: it is always possible to express quantitative results in different ways. How do you make your dashboard relevant? This topic was also discussed by Wim Remes during the latest BlackHat Europe in Barcelona. I don’t pretend to have the best dashboard ever. Even more, I’m not a developer. Here is my current dashboard:
Current features are:
- Configurable time windows (30 mins, 1 hour, 3 hours, …).
- Auto-refresh (to be displayed on a standalone screen or beamer).
- Based on Portlets which can be organized, minimized (and restored!) as you want.
- Some graphical indicators (because a picture is worth a thousand words!).
First, I needed to find a good interface as I don’t have the knowledge to build my own. I looked for cool examples based on jQuery and found this one. Why reinvent the wheel? The link with the OSSEC server is performed via the DB output module: OSSEC writes all the required information into its database. Each portlet makes its own connection to the database to execute SQL queries and display the results (a query sketch follows the portlet list below). The following portlets are available at the moment (all of them based on the selected time period):
- Top-10 Alerts: Reports the 10 most reported alerts.
- Top-10 Suspicious: Reports the 10 least reported alerts (can be useful to detect activities occurring “below the radar“).
- Top-10 Agents: Reports the agents/log files which generated the largest number of alerts.
- Top-10 Attackers: Reports the IP addresses which generated the largest number of alerts (click on an IP address to perform a Whois request).
- Top-10 Locations: Performs geolocation on the attackers’ IP addresses and reports the most suspicious countries.
- Events Timeline: Displays the number of alerts generated over the last ten time periods.
- Trend Level: Displays the current average alert level and, based on the previous time interval, a trend represented by a colored arrow.
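To give an idea of what happens behind a portlet, here is the kind of query that could sit behind “Top-10 Alerts”, run here from the shell (a sketch only: it assumes the standard OSSEC MySQL schema with the alert and signature tables, a Unix-timestamp column, and a one-hour window):

# Ten most frequent alerts over the last hour (hypothetical portlet query):
mysql -u ossec -p ossec -e "
    SELECT s.description, COUNT(*) AS hits
    FROM alert a JOIN signature s ON a.rule_id = s.rule_id
    WHERE a.timestamp > UNIX_TIMESTAMP(NOW() - INTERVAL 1 HOUR)
    GROUP BY s.description
    ORDER BY hits DESC LIMIT 10;"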
The next step will be to implement:
- A caching system to improve performance.
- A search engine to search across the alerts based on regular expressions.
Any other ideas are welcome! The installation is pretty straightforward: configure your OSSEC server with database support (a sketch is shown below), then install the PHP code on a LAMP server which can access the OSSEC MySQL database. The code is available here. Feel free to re-use it and contribute.
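For the record, enabling the database output usually boils down to something like this (a sketch, assuming OSSEC was compiled with MySQL support via “make setdb” and that the ossec database and user already exist; the values below are examples):

# Tell OSSEC to load the database output module:
/var/ossec/bin/ossec-control enable database

# Add a <database_output> block to ossec.conf (adjust to your MySQL setup):
cat >> /var/ossec/etc/ossec.conf <<'EOF'
<ossec_config>
  <database_output>
    <hostname>127.0.0.1</hostname>
    <username>ossec</username>
    <password>xxx</password>
    <database>ossec</database>
    <type>mysql</type>
  </database_output>
</ossec_config>
EOF

/var/ossec/bin/ossec-control restart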
I was invited by the ISSA Belgium chapter to talk last night about log management & SIEM (“Security Information and Event Management“). This is a very interesting topic but almost everything has already been said (good as well as bad) about SIEM. I decided to innovate and to use some articles posted on this blog as practical examples of fraud detection. After the theory, some practice is always welcome! Let’s make your logs more valuable…
Fraud can be defined as “a deliberate deception, trickery, or cheating intended to gain an advantage“. This term is often closely linked to the world of finance. That’s why I prefer to use the word “suspicious“. An event can be flagged as suspicious if it does not follow strict baselines. Four practical examples of suspicious activities were discussed:
- MySQL Database changes
- USB stick detection
- Rogue access to resources
- Mapping events to Google Maps
Each example was reviewed as a quick recipe to detect the suspicious event, all of them reported by OSSEC. The goal was to explain how to gain more visibility and more value from your logs at… an affordable price, read: without an (expensive) SIEM solution. Even if small organizations don’t have the budgets and resources, they can implement solutions to increase their security.
The presentation is available on Slideshare.com.