My Linux servers are all protected by a local iptables firewall. It is an excellent firewall which implements all the core features expected from a decent firewall system. Except… logging and reporting! By default, iptables sends its logs using the kernel logging facilities. Those can be intercepted by common Syslog daemons: events are collected and stored in a flat file. Note that some Syslog implementations, like rsyslog, have a built-in mechanism to store logs in a MySQL database. But messages are stored “as is”, without processing or normalization, which makes them difficult to use. Of course, solutions exist to parse Syslog flat files and generate firewall stats (have a look at fwlogwatch), but I’m looking for something more “visual”. Visibility is a key point!
My idea is to store the iptables events in a MySQL database, but with some parsing, so that every important field (source IP, destination IP, ports, …) can be indexed and re-used in queries later. As usual, the goal is to give more value to the huge amount of logs produced by iptables. An example of interesting information: knowing where the traffic to my server is coming from and plotting it on a Google map. To achieve this, iptables fortunately has another way to send logs to user space: the ULOG target:
The ULOG target is used to provide user-space logging of matching packets. If a packet is matched and the ULOG target is set, the packet information is multicasted together with the whole packet through a netlink socket. One or more user-space processes may then subscribe to various multicast groups and receive the packet. This is in other words a more complete and more sophisticated logging facility that is only used by iptables and Netfilter so far, and it contains much better facilities for logging packets. This target enables us to log information to MySQL databases, and other databases, making it much simpler to search for specific packets, and to group log entries. (Source: linuxtopia.org)
To collect the packets sent using the ULOG target, a daemon exists and is called – logically – ulogd. It allows you to store the received packets to the following destinations:
- Databases (MySQL, SQLite, PostgreSQL)
- Flat files
- pcap files
By the way, the last option is really cool: it allows you to store a dump of all packets matching specific conditions and analyze them later with your favorite pcap file processor (tcpdump, Wireshark, …). In my case, I just store the packets in a MySQL DB. First, download and install the ulogd software (the example below is based on an Ubuntu server):
# apt-get install ulogd ulogd-mysql
Create a MySQL database. Here it lives on the same host, but you could use a remote DB server:
# cat <<END | mysql -u root -p
create database ulogd;
grant all privileges on ulogd.* to "ulogd"@"localhost" identified by "strOngP4ss";
END
# mysql -u root -p ulogd </usr/share/doc/ulogd/mysql.table
Configure ulogd to send packets to the MySQL database via the configuration file “ulogd.conf”:
# cat ulogd.conf
nlgroup=1
logfile="/var/log/ulog/ulogd.log"
loglevel=1
rmem=131071
bufsize=150000
plugin="/usr/lib/ulogd/ulogd_BASE.so"
plugin="/usr/lib/ulogd/ulogd_MYSQL.so"
[MYSQL]
table="ulog"
pass="strOngP4ss"
user="ulogd"
db="ulogd"
host="localhost"
If you need help, check out the ulogd manpage for details. Start the daemon:
# service ulogd start
Now, let’s reconfigure your iptables rulebase to log packets using the ULOG target. An important recommendation: take care when logging your packets! Using a MySQL database might have a huge impact on system performance! In my example, I’ll just log the incoming traffic. Ubuntu uses a kind of wrapper to manage the firewall, called “UFW” or “Uncomplicated FireWall”. It’s convenient for basic configuration tasks but limited for specific changes like this one. It’s not possible to enable ULOG support via ufw; you will have to manually edit the “before.rules” file. Add the following line:
# allow logging to ulogd
-A ufw-before-input -i eth0 -j ULOG --ulog-nlgroup 1 --ulog-prefix ULOG
And restart ufw! You should immediately see packets logged in your MySQL DB. Now we have a database full of very interesting information! Let’s extract some of it and add more value. The schema below gives an overview of the components:
A small Perl script will extract the source IP addresses based on a specific SQL query like:
- The last 10000 IP addresses that hit port 80
- The IP addresses detected for the last 10 minutes
- …
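Such queries can be prototyped before wiring them into the script. Below is a minimal sketch using Python’s sqlite3 module (ulogd can also write straight to SQLite); the column names `ip_saddr` (the source address, stored as a 32-bit integer) and `tcp_dport` follow ulogd’s MySQL schema, and the sample rows are made up for illustration.

```python
import sqlite3
import socket
import struct

def ip_to_int(ip):
    """Pack a dotted-quad IP into the 32-bit integer form ulogd stores."""
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def int_to_ip(n):
    """Reverse: 32-bit integer back to dotted quad (MySQL: INET_NTOA())."""
    return socket.inet_ntoa(struct.pack("!I", n))

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ulog (id INTEGER PRIMARY KEY, ip_saddr INTEGER, tcp_dport INTEGER)"
)
# Fake sample traffic: (source IP, destination port)
sample = [("192.0.2.1", 80), ("198.51.100.7", 22), ("203.0.113.9", 80)]
conn.executemany(
    "INSERT INTO ulog (ip_saddr, tcp_dport) VALUES (?, ?)",
    [(ip_to_int(ip), port) for ip, port in sample],
)

# "The last 10000 IP addresses that hit port 80"
rows = conn.execute(
    "SELECT ip_saddr FROM ulog WHERE tcp_dport = 80 ORDER BY id DESC LIMIT 10000"
).fetchall()
print([int_to_ip(r[0]) for r in rows])  # ['203.0.113.9', '192.0.2.1']
```

Against the real MySQL database, the equivalent query would be `SELECT INET_NTOA(ip_saddr) FROM ulog WHERE tcp_dport = 80 ORDER BY id DESC LIMIT 10000;`.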
The IP addresses will be geolocated using the MaxMind database and its Perl API. I’m using the GeoLiteCity database. The Perl script will produce an XML file containing all the information required by Google:
<?xml version="1.0" encoding="UTF-8" ?>
<markers entries="2526">
<marker country_code="US" country_name="United States" lng="-122.2995" lat="47.5839"/>
<marker country_code="BE" country_name="Belgium" lng="4.3500" lat="50.6667"/>
<marker country_code="BE" country_name="Belgium" lng="4.3500" lat="50.6667"/>
<marker country_code="US" country_name="United States" lng="-122.3933" lat="37.7697"/>
<marker country_code="BE" country_name="Belgium" lng="4.3500" lat="50.6667"/>
<marker country_code="FR" country_name="France" lng="2.3333" lat="48.8667"/>
...
</markers>
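A marker file in this format can be produced with a few lines of script. Here is a minimal sketch in Python rather than Perl, with the MaxMind lookup replaced by hard-coded sample records (in the real script each tuple would come from a GeoLiteCity query on a source IP):

```python
import xml.etree.ElementTree as ET

# In the real script, these tuples come from MaxMind GeoLiteCity lookups
# of the source IPs extracted from the ulog table; here they are hard-coded.
records = [
    ("US", "United States", "-122.2995", "47.5839"),
    ("BE", "Belgium", "4.3500", "50.6667"),
]

# The "entries" attribute simply carries the marker count.
markers = ET.Element("markers", entries=str(len(records)))
for cc, name, lng, lat in records:
    ET.SubElement(markers, "marker", country_code=cc, country_name=name,
                  lng=lng, lat=lat)

xml_bytes = ET.tostring(markers, encoding="UTF-8")
print(xml_bytes.decode("UTF-8"))
```

Note that only country and coordinate attributes are ever written to the file.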
As you can see, only the coordinates will be used by Google; IP addresses are not used! (Better for confidentiality.) Last step: generate the Google map using the API. This is achieved with a few lines of Javascript. Here is the final result:
I did not write the HTML and Javascript code myself. It comes from the Orange Security Blog, where Jean-François Audenard wrote a similar post a few weeks ago about mapping botnets on Google Maps. Many thanks to him for allowing me to re-use it here.
Finally, it’s quite easy to automate the process with a simple crontab that creates a new XML file every x minutes and auto-refreshes the HTML page. Don’t forget to clean up the MySQL data by removing the old logs (my main system logged 4.6 million packets in two days!).
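The cleanup step boils down to a single DELETE on the packet timestamp. A minimal sketch, again with sqlite3 standing in for MySQL; it assumes ulogd’s `oob_time_sec` column (the packet time in Unix seconds) and a two-day retention period:

```python
import sqlite3
import time

RETENTION_DAYS = 2  # my main system logged ~4.6M packets in two days

conn = sqlite3.connect(":memory:")
# oob_time_sec holds the packet timestamp (Unix seconds) in ulogd's schema
conn.execute("CREATE TABLE ulog (id INTEGER PRIMARY KEY, oob_time_sec INTEGER)")
now = int(time.time())
# One fresh row and two stale ones (3 and 10 days old)
conn.executemany("INSERT INTO ulog (oob_time_sec) VALUES (?)",
                 [(now,), (now - 3 * 86400,), (now - 10 * 86400,)])

cutoff = now - RETENTION_DAYS * 86400
deleted = conn.execute("DELETE FROM ulog WHERE oob_time_sec < ?", (cutoff,)).rowcount
print(f"purged {deleted} old rows")  # purged 2 old rows
```

On MySQL, the same cron job could run `DELETE FROM ulog WHERE oob_time_sec < UNIX_TIMESTAMP() - 2*86400;`.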
I wanted to give this a try but the perl script gives this error:
perl ulog2xml.pl
DBD::mysql::st execute failed: Unknown column ‘ip_saddr’ in ‘field list’ at ulog2xml.pl line 32.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at ulog2xml.pl line 33.
Anyone have an idea how to solve this?
I get an error when executing the ulog2xml.pl script:
Can’t call method “country_code” on an undefined value at ulog2xml.pl line 50.
any ideas?
Hi sha8e,
You’re welcome! Enjoy!
Sorry Xavier, I found the problem to my situation (dumb me) :$
I didn’t enable the MySQL plugin 😀
Sorry again, and thanks for your tut. again.
I added the ulog rule to log anything coming to the INPUT chain, but I am not seeing anything stored in the db! All configuration is 100% correct and as you explained. What do you think might be the problem?
Regardless of answers, thanks Xavier for the tutorial.
Hello David,
I know, the idea of this post came from Bruno’s work 😉
Just to add some “visualization” like the Palo Alto dashboard.
Regards,
Xavier
you have to talk with Bruno, he did the same (iptables, MySQL). And I must say, he made a great and nice-looking web interface for the logs. I urgently need to take a look at iptables by the way :p Time for a crash course?
Hello Karl,
Thanks for the feedback.
In fact, the data stored in the MySQL DB comes from ulogd, which processes the iptables events. The packet payloads are NOT saved => there is no risk of having Javascript or SQL injection stored in the records.
The script doesn’t seem to have any sanitization feature, which is not recommended at all. If someone manages to send strings into the logs that have meaning in HTML and/or SQL, you get a nice injection security issue.