The Critical Need for Redundant DNS


For many enterprises, DNS is deployed in a single-threaded fashion: there is no backup, and if something goes wrong, it becomes a single point of failure – often for the company’s entire digital estate. Much of the country was effectively taken offline recently by a large-scale cyberattack that hit major brands, halting ecommerce and many types of cloud-based work.

Organizations of all sizes across the U.S. that rely on the cloud felt the pain of a slow to non-existent internet – an example of how the democratization of technology makes everyone equally vulnerable. Given the state of the Internet, it’s becoming more common for enterprises to deploy redundant DNS to mitigate these risks as much as possible.

What DNS Redundancy Is – and Is Not

Before discussing how to mitigate risk with redundant DNS, it’s important to debunk a common misconception. The term Secondary DNS is often confused with data center failover or disaster recovery – the notion that the second set of name servers is a backup of sorts. Secondary DNS is neither of these, but rather a means of ensuring end users aren’t left with that dreaded “Server Not Found” message, which occurs when their request for a DNS lookup is not answered.

In a traditional DNS setup, there are a set of name servers that the domain owner delegates at their registrar.

$ dig example.com ns +short
ns1.primarydnsserver.net.
ns2.primarydnsserver.net.
ns3.primarydnsserver.net.
ns4.primarydnsserver.net.

Resolvers select from these four name servers when making a query against the domain name example.com. While there is often a good deal of redundancy built into such setups, they can be vulnerable to targeted attacks. If all four of these servers in the delegation are under duress, visitors will have a very hard time getting a DNS response, and may not get one at all. Recent attacks have demonstrated this reality.
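The failure mode described above can be illustrated with a small sketch. This is not a real resolver – the function and server names are hypothetical stand-ins based on the example delegation – but it shows why a single-provider delegation fails completely once every server in it stops answering:

```python
import random

# Hypothetical sketch of resolver behavior: pick name servers from the
# delegation (in some order) and retry until one answers. If every
# server in the set is under duress, the lookup fails entirely.
def resolve(delegation, healthy):
    servers = list(delegation)
    random.shuffle(servers)  # real resolvers vary in selection strategy
    for ns in servers:
        if ns in healthy:
            return ns  # this server answered the query
    return None  # "Server Not Found" -- no server responded

single_provider = [
    "ns1.primarydnsserver.net", "ns2.primarydnsserver.net",
    "ns3.primarydnsserver.net", "ns4.primarydnsserver.net",
]

# With the whole provider under attack, no server answers:
print(resolve(single_provider, healthy=set()))  # None
```

However clever the resolver's retry strategy, there is no escaping a delegation whose servers all sit on one attacked network.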

How Does Redundant DNS Work?

When adding a secondary DNS provider, the pool of available name servers is enlarged and spread across two different DNS networks.

$ dig example.com ns +short
ns1.primarydnsserver.net.
ns2.primarydnsserver.net.
ns3.primarydnsserver.net.
ns4.primarydnsserver.net.
ns1.secondaryforthewin.com.
ns2.secondaryforthewin.com.
ns3.secondaryforthewin.com.
ns4.secondaryforthewin.com.

With eight available servers in the delegation across two separate DNS networks, the risk of an attack causing a customer-impacting outage is reduced. If one network becomes overly latent, resolvers will try another server. While this process of timing out and retrying another option during an attack does add latency to a DNS transaction, what is crucial is that the end user will still be able to get to where they intended to go.
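The timeout-and-retry trade-off can be sketched as follows. The names and the per-attempt timeout are hypothetical, but the shape of the behavior is the point: dead servers cost latency, yet a surviving provider still produces an answer.

```python
import random

# Sketch of resolver retry across two provider networks (hypothetical
# names and timeout). Each attempt against a dead server costs a
# timeout before the resolver moves on to the next candidate.
PRIMARY = [f"ns{i}.primarydnsserver.net" for i in range(1, 5)]
SECONDARY = [f"ns{i}.secondaryforthewin.com" for i in range(1, 5)]

def resolve_with_latency(delegation, healthy, timeout_ms=800):
    servers = list(delegation)
    random.shuffle(servers)
    elapsed = 0
    for ns in servers:
        if ns in healthy:
            return ns, elapsed  # answered; extra latency accrued so far
        elapsed += timeout_ms   # dead server: wait out the timeout, retry
    return None, elapsed

# Primary network under attack; the secondary still answers, at the
# cost of some timeouts along the way.
ns, extra = resolve_with_latency(PRIMARY + SECONDARY, healthy=set(SECONDARY))
print(ns, extra)
```

The lookup is slower than normal while the attack is underway, but it succeeds – which is exactly the trade the paragraph above describes.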

Adding a redundant DNS service to your existing primary is not as daunting as it may seem. Some of the largest enterprises in the world hedge their bets with this strategy, ensuring that their presence is secured across multiple providers. This is not unlike the recent trend towards diversifying an enterprise’s CDN or cloud footprint, which has been gaining traction throughout the industry.

Before you begin, it’s absolutely critical that your primary DNS provider allows zone transfers (AXFR). If this is not the case, you would be stuck having to manually push changes to your zones to both providers, which is a rather inconvenient and unfortunate way to go about achieving replication.
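One quick way to check is to request a transfer directly, in the style of the earlier dig examples (ns1.primarydnsserver.net is the placeholder server from above; your own provider's name server goes there). A server that permits transfers from your address returns the full zone; one that does not typically answers with a refused status:

```
$ dig @ns1.primarydnsserver.net example.com axfr
```

Note that providers usually restrict AXFR to an allow-list of secondary addresses, so a refusal from your workstation does not by itself mean transfers are unsupported – check the provider's documentation or settings.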

The replication flow works as follows: the user pushes a DNS change to their primary DNS provider. When this change is enacted in the primary DNS, the zone’s serial number increments. This means that the secondary zone file at the other provider is now behind the times and needs to be updated. An optional NOTIFY can be sent to the secondary, alerting it that a new version of the zone is ready; otherwise, the secondary discovers the change by polling the primary at the refresh interval defined in the zone’s SOA record. The process of zone transfer, or AXFR, is then initiated, and the updated zone file is replicated in the secondary.
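As a rough illustration of what this arrangement looks like in practice, here is a BIND-style configuration sketch. The file paths and the RFC 5737 documentation addresses are placeholders, and a managed provider would expose equivalent settings through its own interface rather than a named.conf:

```
// On the primary: allow the secondary to transfer the zone and
// notify it when the serial number changes.
zone "example.com" {
    type master;
    file "zones/example.com.db";
    allow-transfer { 198.51.100.1; };  // secondary's transfer address
    also-notify { 198.51.100.1; };
    notify yes;
};

// On the secondary: pull the zone from the primary via AXFR.
zone "example.com" {
    type slave;
    masters { 203.0.113.1; };          // primary's transfer address
    file "secondary/example.com.db";
};
```

In a dual managed-provider setup, both sides of this handshake are configured through each vendor's dashboard or API, but the underlying NOTIFY/AXFR mechanics are the same.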

Advanced Traffic Steering

While the design of AXFR suits the needs of basic DNS record types, just about every managed DNS provider has built its own way of doing advanced traffic steering on top of the protocol. This functionality extends beyond what’s covered in the DNS RFCs and, as a result, these advanced features and record types can’t be synchronized across providers using AXFR. Running dual primary servers solves this by allowing the DNS administrator to push changes to both providers, while leveraging each platform’s advanced feature functionality, albeit largely in a manual fashion. When both primaries are included in the delegation, query traffic is split across them.

Integrated Dual Primary Redundancy and Automation

This is where leveraging an API on both sides to manipulate the advanced features can provide a bit of relief in operating two separate DNS networks. Simple middleware can be written to translate intended changes to work across both networks; however, the intricacies and breadth of a specific vendor’s advanced features may result in inconsistent application performance from one to the other, and you are limited to the lowest common denominator of the functionality each platform offers.
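The shape of such middleware can be sketched in a few lines. Everything here is hypothetical – the client class and its update_record method stand in for whatever API each vendor actually exposes – but it makes the lowest-common-denominator constraint concrete:

```python
# Hypothetical middleware sketch: ProviderClient and update_record are
# stand-ins for a vendor's real API client, not any actual product.
class ProviderClient:
    """Stand-in for a managed DNS provider's API client."""
    def __init__(self, name):
        self.name = name
        self.zone = {}

    def update_record(self, record_name, rtype, values):
        # A real client would issue an authenticated HTTP call here.
        self.zone[(record_name, rtype)] = values

# Only record types both platforms support identically can be pushed
# this way -- the lowest-common-denominator caveat from above.
COMMON_TYPES = {"A", "AAAA", "CNAME", "MX", "TXT", "NS"}

def push_everywhere(providers, record_name, rtype, values):
    if rtype not in COMMON_TYPES:
        raise ValueError(f"{rtype} is provider-specific; update manually")
    for p in providers:
        p.update_record(record_name, rtype, values)

primary = ProviderClient("primary")
secondary = ProviderClient("secondary")
push_everywhere([primary, secondary], "www.example.com", "A", ["192.0.2.10"])
```

Standard records flow to both networks through one call; each vendor's proprietary steering features still have to be configured separately on each platform.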

Dual DNS Reduces Risk

Your best bet against “Server Not Found” is to have two cutting-edge DNS providers that can offer the exact same features and functionality, along with complete API interoperability. Managed DNS services not only relieve enterprises of the burden of running their own DNS, but they often provide value-added capabilities not available in purely RFC-compliant (typically open source) implementations. With advanced traffic routing features, managed DNS providers are addressing technical needs in the marketplace that the original designers of DNS did not anticipate. A well-architected dual-DNS deployment reduces the risk of business losses due to DNS failure. It can also improve day-to-day end user quality of experience by reducing latency in DNS queries.
