Malicious DNS Traffic: Detection is Good, Proactivity is Better

It looks like our beloved DNS protocol is again the center of interest for some security $VENDORS. For a while now, I have seen the expression “DNS Firewall” used more and more in papers and presentations. It’s not a new buzzword… The DNS protocol is well known to be an excellent vector of infection and/or data exfiltration. But what is a “DNS firewall” or “strong DNS resolver“?

It’s a fact: without DNS, no Internet! We have to live with DNS server infrastructures. Most organizations have internal DNS resolvers. Their primary goal is to translate FQDNs (“Fully Qualified Domain Names“) into IP addresses (v4 or v6). This is a mandatory network service and attackers know it! While the registration of domain names was quite expensive a few years ago, today it has become very cheap (when not free!). Malware authors and attackers register thousands of domain names, usually based on random strings. Malicious DNS activity may occur at different steps of a computer infection. First, a user can be redirected to a malicious website which will send a payload to the browser (or any other application). Another step is, once the infection is completed, the communication with a C&C server to request actions to perform or to exfiltrate data. The DNS protocol can also be used to transport other data. A DNS firewall has a good knowledge of all those malicious domains and prevents users/applications from accessing them. Two actions are performed by those systems: redirecting the traffic to a black-hole (usually the loopback address) and generating an alert to warn the security teams that a device tried to reach a blacklisted domain.

Did you know that this feature can be easily implemented? Here is a quick overview of my setup. First step: detection! It is indeed critical to be notified of connection attempts to a malicious domain. To achieve this, I’m using OSSEC to collect BIND logs and search for interesting domains. This can be achieved with the “CDB list” feature implemented in OSSEC. CDB is a convenient way to index constant databases and perform lookups on them. The classic types of information that can be found in CDB lists are users, IP addresses, domains, ports, etc. By indexing a list of bad domains, it’s possible to detect suspicious DNS traffic with a simple rule like this one:

<rule id="100000" level="10">
  <list field="url">lists/bad-domains.txt</list>
  <description>DNS query for malicious domain!</description>
</rule>
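The CDB source list referenced by the rule is just a text file of “key:value” lines (the value may be left empty). A minimal sketch of building it from a raw one-domain-per-line feed; the paths and domains here are placeholders, not the actual files from my setup:

```shell
# Sketch: turn a plain list of domains into an OSSEC CDB source list.
SRC=/tmp/raw-domains.txt
OUT=/tmp/bad-domains.txt      # in production: e.g. /var/ossec/lists/bad-domains.txt

printf 'evil.example\nBAD.example\n' > "$SRC"   # demo input (placeholder domains)

# CDB entries are "key:value" lines; lowercase the keys and leave the value empty
tr 'A-Z' 'a-z' < "$SRC" | sed 's/$/:/' > "$OUT"

cat "$OUT"
# evil.example:
# bad.example:
```

After editing the list (and declaring it in ossec.conf), it must be compiled into its binary CDB form by running ossec-makelists on the OSSEC manager.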

The “url” field is extracted from the BIND logs by the existing OSSEC decoder. Great! But when this rule fires, it is already too late and the risk that the user got infected remains high. This is good for detection, but what about adding some proactivity? What about configuring your local resolver to prevent it from resolving malicious domains? This can also be achieved with a simple configuration change. Include in your “named.conf” a list of malicious domains:

include "/etc/named/bad-domains.zones";

This file will have the following format:

zone "evil-domain1.example" {
  type master; file "/etc/named/blackhole.hosts"; };
zone "evil-domain2.example" {
  type master; file "/etc/named/blackhole.hosts"; };
zone "evil-domain3.example" {
  type master; file "/etc/named/blackhole.hosts"; };
zone "evil-domain4.example" {
  type master; file "/etc/named/blackhole.hosts"; };
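Maintaining this include file by hand does not scale. It can be generated from a plain one-domain-per-line list with a one-liner; the paths and domains below are examples, adapt them to your environment:

```shell
# Sketch: generate the named.conf include from a one-domain-per-line list.
LIST=/tmp/raw-domains.txt
ZONES=/tmp/bad-domains.zones   # in production: /etc/named/bad-domains.zones

printf 'evil.example\nbad.example\n' > "$LIST"   # demo input (placeholder domains)

# Emit one sinkhole zone statement per domain
awk '{ printf "zone \"%s\" {\n  type master; file \"/etc/named/blackhole.hosts\"; };\n", $1 }' \
  "$LIST" > "$ZONES"

cat "$ZONES"
```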

Create a blackhole.hosts file that will look like a basic zone file:

$TTL 3600
@ IN SOA localhost. root.localhost. (
                         2013012801 ; Serial
                         28800      ; Refresh (8 hours)
                         7200       ; Retry (2 hours)
                         604800     ; Expire (7 days)
                         3600 )     ; Minimum (1 hour)
  IN NS localhost.
* IN A 127.0.0.1

The queries will still be logged and fire the OSSEC rule described above, but in parallel, all listed domains will resolve to the loopback IP address! This can be very efficient, but the next question is: how to build a (good) list of malicious domains? Don’t panic! The Internet is full of interesting resources: several sites provide lists of malicious domain names and update them frequently. To implement the rules described here, two files are required: the domain list indexed by OSSEC and the zones file included in “named.conf”.

Honestly, I did not invent anything here. This technique has been known for a while, but it’s always good to remember it. The automatic update of those lists can be scheduled via a crontab (e.g. once a week) and some system commands. An RSS feed is available with all updates of the files above. Finally, don’t forget to add a layer of reporting on top of this (example: by injecting all logs into a Splunk instance) and you will greatly reduce the attack surface opened by the DNS protocol… without spending huge amounts of money on $VENDORS solutions!
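The weekly refresh can be sketched as a small cron job. The feed URL below is hypothetical (use the list provider of your choice) and the “go live” commands are left commented out so the sketch is side-effect free:

```shell
#!/bin/sh
# Sketch of a weekly refresh job (hypothetical feed URL, example paths).
# Example crontab entry (every Sunday at 04:00):
#   0 4 * * 0 /usr/local/bin/update-bad-domains.sh

LIST_URL="https://feeds.example/bad-domains.txt"   # hypothetical feed URL
RAW=/tmp/bad-domains.raw
ZONES=/tmp/bad-domains.gen.zones

# Fetch the latest list; fall back to a small demo list if the feed is unreachable
curl -fsS "$LIST_URL" -o "$RAW" 2>/dev/null \
  || printf 'evil.example\nbad.example\n' > "$RAW"

# Rebuild the BIND include: one sinkhole zone statement per domain
awk '{ printf "zone \"%s\" { type master; file \"/etc/named/blackhole.hosts\"; };\n", $1 }' \
  "$RAW" > "$ZONES"

# Go live (uncomment in production):
#   cp "$ZONES" /etc/named/bad-domains.zones && rndc reload
#   # and recompile the OSSEC CDB list:
#   # /var/ossec/bin/ossec-makelists
```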

You don’t have (or don’t want to manage) a local resolver? There are of course alternatives like OpenDNS. They provide the same kind of protection but with a big limitation: you don’t control the list of blacklisted domains!


  1. Interesting. I am developing a freeware DNS filter called NxFilter. Currently I am reviewing the possibility of implementing malware/botnet detection in NxFilter. Anyway, it already has whitelist/blacklist, blocking by category, and unlimited custom categories. And you get a built-in GUI as well: dashboard, query history and reports, all there for free. You can go with NxFilter instead of implementing your own blacklist. It will save you a lot of time.

  2. Great article, been using DNS Redirector software since 2003 to locally blacklist & whitelist domains, while still using my ISP or Google DNS, or whoever I find provides the fastest resolvers.

    An equally important part of this security method is to prevent all access to sites by IP address (usually a regex or firewall HTTP inspection rule can do this) – any site that didn’t at least register a domain and set up DNS for it shouldn’t be trusted.

  3. Courtland from OpenDNS here. Great article, very informative. One thing I’d like to add about OpenDNS and their security service, Umbrella, is that the service does let you control your list of blacklisted and whitelisted domains.
