Enhancing Home DNS: Merging PiHole & Bind9 for Superior DNS

It’s Always DNS. It might be a joke, but more times than I would like to admit, it was the case.

I have gone through various DNS setups in the past, and while they all worked, they were never quite what I really wanted. Then again, I also did not have a concrete definition of what I wanted.

So when my Synology NAS decided to die on me, it was time to redesign my DNS setup again.

A quick history of my DNS setups goes as follows:

  • Windows Home Server handling DNS and AD – This was a good solution, but always a single point of failure, and not really lightweight.
  • PiHole on a Raspberry Pi 3 – This never really worked; trying to get PiHole to serve my local DNS was a mission.
  • Windows Server 2019 – A redundant pair of Windows Servers was a great way to learn Microsoft DNS, just not lightweight, and automating against it proved to be quite a challenge.
  • Synology NAS DNS Package – The best solution so far, but under the hood it's just Bind with a GUI, so we can do better.

Requirements

To set out the requirements for my DNS solution, I actually wrote them down.

  • Containerized
  • Lightweight
  • Multiple Internal Domain support for forward and reverse lookups
  • URL Filtering
  • Redundant
  • IaC managed
  • OpenSource / Free

From a hardware perspective I wanted this to run separately from my lab. I still have a Synology NAS and the Raspberry Pi available, but I also have a new Zimaboard to play with.

Design

As DNS is an integral part of any network, I decided to attach the primary DNS instance directly to my Unifi Dream Router. This means the only network infrastructure component that can take DNS down is the central router itself.

For IaC I am using a free Terraform Cloud organization with an on-premises agent to manage my internal resources. All the config for the DNS stack, as well as all entries, is stored in a GitHub organization linked to Terraform Cloud.

I decided to go with a combination of PiHole for content filtering and Bind9 for DNS management. PiHole and Bind9 will run in containers on the same host. I played around with having either PiHole or Bind9 as the client-facing instance, and for me it worked better having Bind9 client-facing with PiHole as the upstream resolver for Bind9. The problem came in with reverse lookups going to PiHole, as handling multiple domains and VLANs is not really what PiHole was designed for.

Another consideration was the presence of two DNS servers on a single host. To get around this I am only exposing port 53 for the Bind9 container. Bind9 then resolves upstream requests via the Docker network to PiHole. I did expose the PiHole DNS on port 5053 to allow me to test its resolution directly.

PiHole uses Cloudflare DNS as its upstream resolver. Now, I could go the recursive DNS route to completely bypass public DNS services, but I already trust Cloudflare to tunnel traffic to my publicly exposed services, so what is one more data point?

Building Blocks

To start, we need the following in place:

  • A Terraform Cloud workspace backed by a Git repo and using an on-premises agent.
  • Ubuntu 22.04 on both my primary and secondary nodes.
  • Docker and Docker Compose installed (a quick install sketch follows).
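
For the Docker part, Docker's convenience script is the quickest route on a fresh Ubuntu box. A minimal sketch, assuming you are comfortable piping a script from get.docker.com (the compose plugin ships with it):

# Install Docker Engine and the compose plugin via the convenience script
curl -fsSL https://get.docker.com | sudo sh

# Let the current user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER

# Verify both are available
docker --version
docker compose version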

Deploying PiHole and Bind9

I started by getting the primary Bind9 and PiHole instances deployed using Docker.

docker-compose.yaml


version: "3"

services:
#---Primary Bind9 Instance---
  bind9-master:
    container_name: Bind9-Master
    restart: unless-stopped
    image: ubuntu/bind9:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8053:8053"
    environment:
      - "BIND9_USER=root"
      - "TZ=Europe/Zurich"
    volumes:
      - ./config:/etc/bind
      - ./cache:/var/cache/bind
      - ./records:/var/lib/bind
    networks:
      dnsnet:
        ipv4_address: 172.53.53.3

    security_opt:
      - no-new-privileges:true
    depends_on:
      - pihole

#---PiHole---
  pihole:
    container_name: PiHole
    hostname: PiHole
    restart: unless-stopped
    image: pihole/pihole:latest
    ports:
      - "5053:53/tcp"
      - "5053:53/udp"
      - "80:80/tcp"
    environment:
      TZ: "Europe/Zurich"
      WEBPASSWORD: "SuperDuperPassword!"
      PIHOLE_DNS_: "1.1.1.1;1.0.0.1"
      DNSSEC: "true"
      FTLCONF_LOCAL_IPV4: 172.53.53.2
    volumes:
      - "./etc-pihole:/etc/pihole"
      - "./etc-dnsmasq.d:/etc/dnsmasq.d"
    networks:
      dnsnet:
        ipv4_address: 172.53.53.2
    security_opt:
      - no-new-privileges:true

#---Network for DNS---
networks:
  dnsnet:
    ipam:
      driver: default
      config:
        - subnet: 172.53.53.0/24
          gateway: 172.53.53.1

I tried doing all the config in the Docker-Compose file to make this easy to redeploy.

The above does require the ./config, ./cache, and ./records directories to already exist.

The ./records folder needs to be writable by the container if you want to use something like Terraform to dynamically update the records (chmod 777 ./records is not the right way, but it works).
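
A quick sketch of the host prep, run from the directory holding the docker-compose.yaml:

# Create the directories the compose file bind-mounts
mkdir -p config cache records

# Make the records folder writable for dynamic updates
# (the blunt 777 approach mentioned above, not best practice)
chmod 777 records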

Now we also need to create the config and zone files.

In ./config you need two files:

named.conf is the Bind9 configuration, and named.conf.key holds the TSIG key I use for dynamic updates.
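
The key file itself is not shown here, but it can be generated with tsig-keygen from the bind9 utilities. A minimal sketch, assuming the key is named tsig-key to match the update-policy statements below (the secret is a placeholder, generate your own):

# Generate the key file once, on the host
tsig-keygen -a hmac-sha256 tsig-key > config/named.conf.key

The generated named.conf.key then looks something like this:

key "tsig-key" {
    algorithm hmac-sha256;
    secret "base64-placeholder-secret==";   # placeholder, use your generated value
};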

named.conf


include "/etc/bind/named.conf.key";

acl internal {
    10.0.0.0/8;
    192.168.0.0/16;
    172.16.0.0/12;
};

options {
    recursion yes;                 # enables recursive queries
    allow-recursion { internal; };  # allows recursive queries from "trusted" clients
    allow-transfer { none; };      # disable zone transfers by default

    forwarders {
      172.53.53.2;
    };
    allow-query { internal; };
    forward only;
};

statistics-channels {
    inet 0.0.0.0 port 8053 allow { any; };
};

zone "my.zone" {
    type master;
    file "/var/lib/bind/my.zone.zone";
    update-policy { grant tsig-key zonesub any; };
    allow-transfer { 10.70.10.11; };
};

zone "70.10.in-addr.arpa" {
    type master;
    file "/var/lib/bind/70.10.zone";  # Location of your zone file
    update-policy { grant tsig-key zonesub any; };
    allow-transfer { 10.70.10.11; };
};
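
The statistics-channels block above exposes Bind9's built-in metrics over HTTP on port 8053, which is why the compose file publishes that port. A quick way to peek at them, assuming the host is reachable at 10.70.10.10 and your build has JSON support (Ubuntu's does); otherwise use the /xml/v3 endpoint:

# Fetch query statistics from the Bind9 statistics channel
curl -s http://10.70.10.10:8053/json/v1 | jq '.opcodes'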

my.zone.zone


$ORIGIN .
$TTL 300        ; 5 minutes
my.zone           IN SOA  my.zone. my.zone.my.zone. (
                                2023072626 ; serial
                                43200      ; refresh (12 hours)
                                900        ; retry (15 minutes)
                                1814400    ; expire (3 weeks)
                                3600       ; minimum (1 hour)
                                )
                        NS      ns1.my.zone.
                        NS      ns2.my.zone.
$ORIGIN my.zone.
AriaAutoConfig          A       10.70.13.53
AriaAutomation          A       10.70.13.52
AriaIDM                 A       10.70.13.51
AriaLifecycle           A       10.70.13.50
AriaOperations          A       10.70.13.54
ns1                     A       10.70.10.10
ns2                     A       10.70.10.11
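
The reverse zone file (/var/lib/bind/70.10.zone) is not shown in full, but it follows the same pattern with PTR records instead of A records. A minimal sketch matching the name servers above; note the owner names are the last two octets in reverse, since the zone covers 10.70.0.0/16:

$ORIGIN .
$TTL 300        ; 5 minutes
70.10.in-addr.arpa      IN SOA  my.zone. my.zone.my.zone. (
                                2023072601 ; serial
                                43200      ; refresh (12 hours)
                                900        ; retry (15 minutes)
                                1814400    ; expire (3 weeks)
                                3600       ; minimum (1 hour)
                                )
                        NS      ns1.my.zone.
                        NS      ns2.my.zone.
$ORIGIN 70.10.in-addr.arpa.
10.10                   PTR     ns1.my.zone.
11.10                   PTR     ns2.my.zone.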

Now, with a docker compose up -d, we are good to go on the primary node.
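
A few dig queries make for a quick sanity check, assuming the primary host sits at 10.70.10.10 as in the zone file:

# Forward lookup against Bind9 on the standard port
dig @10.70.10.10 ns1.my.zone +short

# Reverse lookup, served from the 70.10.in-addr.arpa zone
# (assuming the PTR records exist, as in the sketch above)
dig @10.70.10.10 -x 10.70.10.10 +short

# Hit PiHole directly on its test port to check upstream resolution
dig @10.70.10.10 -p 5053 example.com +short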

The docker compose for the secondary node is basically the same, and can actually be exactly the same if run on another machine; the only difference is in the named.conf file. You also do not need a zone file in the records folder, but the records folder itself is still required.

Secondary node named.conf


include "/etc/bind/named.conf.key";

acl internal {
    10.0.0.0/8;
    192.168.0.0/16;
    172.16.0.0/12;
};

options {
    recursion yes;                 # enables recursive queries
    allow-recursion { internal; };  # allows recursive queries from "trusted" clients
    allow-transfer { none; };      # disable zone transfers by default

    forwarders {
      172.53.53.2;
    };
    allow-query { internal; };
    forward only;
};

statistics-channels {
    inet 0.0.0.0 port 8053 allow { internal; };
};

zone "my.zone" {
    type slave;
    file "slaves/my-zone.zone";
    masters { 10.70.10.10; };  # IP address of the primary DNS server
    allow-transfer { none; };
};

zone "70.10.in-addr.arpa" {
    type slave;
    file "slaves/70.10.zone";
    masters { 10.70.10.10; };  # IP address of the primary DNS server
    allow-transfer { none; };
};
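
Once the secondary is up, a manual zone transfer confirms replication. Run it from the secondary node (10.70.10.11), since that is the only address the primary allows transfers from:

# Request a full zone transfer from the primary; this should print every record
dig @10.70.10.10 my.zone AXFR

# Confirm the secondary now answers for the zone itself
dig @10.70.10.11 ns2.my.zone +short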

Conclusion

This is now a working DNS server. In the next post I will go over the creation of DNS records using Terraform.
