File Integrity Monitoring for the Poor

For most organizations, security has a huge impact on budgets… except if you’re called the NSA and must deploy a massive surveillance program! Every time you need money, you have to fight with your boss or the finance guys to get some bucks, explaining why a new piece of software, appliance or consultant will help you improve the security of the company’s data. But sometimes you can use data generated by non-security related solutions and extract some added value from it. When I say “non-security related”, it’s not 100% true; let me explain…

Even if information security is difficult to explain to the business, C-level people generally understand and agree on the need for backup systems. Ok, even today not all organizations have a strong backup procedure (and even fewer have a strong restore procedure!), but let’s assume they do. Basically (I’m not a backup expert), there are two major ways to perform backups. At the beginning of the week, we make a full backup on Monday and then:

  • Perform a differential backup every day (based on the last full backup)
  • Or perform an incremental backup every day (based on the previous day’s backup) — see the sketch just below
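
To make the difference concrete, here is a minimal sketch of the selection logic behind both strategies. The /data path and the timestamps are placeholders for this example, not part of any real backup tool:

```python
import os
import time

def files_changed_since(root, since_ts):
    """Yield files under 'root' whose mtime is newer than the given timestamp."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > since_ts:
                    yield path
            except OSError:
                pass  # file vanished between walk() and stat(), skip it

# Placeholder timestamps for the example
last_full_ts = time.time() - 3 * 86400    # Monday's full backup
last_backup_ts = time.time() - 1 * 86400  # yesterday's backup

# Differential: everything modified since the last FULL backup
differential = list(files_changed_since("/data", last_full_ts))

# Incremental: only what changed since the LAST backup, full or not
incremental = list(files_changed_since("/data", last_backup_ts))
```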

The next Monday, a new full backup is performed and closes the loop.

Another very interesting tool to track changes on a server is a FIM (“File Integrity Monitor”). Such a solution helps to detect suspicious changes in directories on a server. Classic examples of directories to monitor on a UNIX server are /etc (where configuration files reside) and /usr/bin & /usr/sbin (where system binaries reside). Usually, they don’t change often. But deploying a commercial FIM solution can be expensive, even though the principle itself is simple, as the sketch below shows.
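
Here is a minimal sketch of what a FIM boils down to: hash every watched file, store a baseline, and report differences on the next run. The BASELINE path is just an example; a real deployment would also protect the baseline itself:

```python
import hashlib
import json
import os

# Directories mentioned above; the baseline location is an example
WATCHED = ["/etc", "/usr/bin", "/usr/sbin"]
BASELINE = "/var/lib/fim/baseline.json"

def sha256(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot():
    """Hash every readable file under the watched directories."""
    state = {}
    for root in WATCHED:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    state[path] = sha256(path)
                except OSError:
                    pass  # unreadable or vanished file, skip it
    return state

current = snapshot()
if os.path.exists(BASELINE):
    with open(BASELINE) as f:
        old = json.load(f)
    for path, digest in sorted(current.items()):
        if path not in old:
            print("NEW:      " + path)
        elif old[path] != digest:
            print("MODIFIED: " + path)
    for path in sorted(set(old) - set(current)):
        print("DELETED:  " + path)
else:
    # First run: store the baseline for later comparisons
    os.makedirs(os.path.dirname(BASELINE), exist_ok=True)
    with open(BASELINE, "w") as f:
        json.dump(current, f)
```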

Idea: what other kind of tool also scans filesystems for changes? Backup tools of course! In my case, my servers are backed up every night via rsync to a central storage server, and rsync writes all the files modified since the last backup to a log file. Why not parse this file and search for suspicious modifications? You could also process this file with Splunk and do some correlation or alerting on the indexed data. Finding a reference to /etc/passwd in my nightly rsync log could be very suspicious if no new user was created by me or another admin!
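
As an illustration, here is a minimal sketch of such a parser. It assumes the nightly job dumps one modified path per line (something rsync can produce with --out-format='%n', or by post-processing its --log-file output); the log path and the watch list are examples to adapt to your own setup:

```python
import re

RSYNC_LOG = "/var/log/backup/rsync-changes.log"  # example path, adapt to your setup
SUSPICIOUS = [r"^/etc/", r"^/usr/s?bin/"]        # paths that should rarely change

patterns = [re.compile(p) for p in SUSPICIOUS]

with open(RSYNC_LOG) as log:
    for line in log:
        path = line.strip()
        if not path:
            continue
        # rsync reports paths relative to the source; normalize to absolute
        if not path.startswith("/"):
            path = "/" + path
        if any(p.match(path) for p in patterns):
            print("ALERT: sensitive file changed during last backup: " + path)
```

Run from cron right after the backup job, this turns the nightly backup log into a poor man’s FIM report at zero extra cost.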

Conclusion: if you don’t have money, have ideas! Any data source or logfile can be valuable and help you increase your overall security.

5 comments

  1. Hi,
    You’re right. There are other tools which use the same technique, like OSSEC.
    But the goal of my blog post was to demonstrate that tools that are normally not dedicated to such tasks could also report interesting information.

  2. True for Splunk! 🙂 Indeed “dev” tools could be used to increase security too! I should also write a blog post on this topic!

  3. Conclusion: if you don’t have money .. don’t waste it on Splunk, spend time with Logstash or Graylog2 ..

    But apart from that … if you are doing deployment the automated way .. you don’t need a dedicated FIM as you get it for free .. packaging, config mgmt .. they all help you there ..

    But that’s content for a different blog entry .. or a panel at #brucon 🙂
