The DNS vs Privacy Conundrum: Where are we headed?
Caution: This is a long read. It has been some time in the making.
The DNS (Domain Name System) is the granddaddy of the Internet, the addressing backbone that has allowed it to achieve global spread since 1983. In brief, it provides an addressing mechanism for Internet-connected devices and a mechanism to convert human-readable domain names to IP addresses. Without this, the Internet would perhaps never have reached the scale it has today. Check out History of the Domain Name System (harvard.edu) to know more about the history of DNS, especially from 1983 onwards. Check out the short video below to understand how DNS works.
Now, DNS was born at a time when cyber security was still a thing of the future and most things were built on trust. The various actors in a DNS name-resolution chain, such as DNS resolvers, root servers and name servers, communicate with each other by default using the UDP protocol over port 53. This is plain, unencrypted traffic and includes data like who is requesting the name resolution and which domain they want resolved.
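To make the plaintext nature of classic DNS concrete, here is a minimal sketch of what a DNS query looks like on the wire. The packet layout follows RFC 1035; the resolver address and domain in the commented usage are just illustrative, and this is a bare-bones example, not a full DNS client.

```python
import secrets
import struct

def build_query(domain: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query packet (qtype 1 = A record).

    Header: random 16-bit transaction ID, RD flag set, one question.
    Everything here travels in the clear in classic DNS.
    """
    header = struct.pack(">HHHHHH",
                         secrets.randbits(16),  # transaction ID
                         0x0100,                # flags: recursion desired
                         1, 0, 0, 0)            # QDCOUNT=1, rest 0
    # QNAME: each label prefixed by its length, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

# Classic DNS would send this over UDP port 53, readable by anyone on the path:
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.sendto(build_query("example.com"), ("8.8.8.8", 53))
```

Anyone sniffing the wire sees both the requester's IP address and the domain being resolved, which is exactly the privacy gap the rest of this article is about.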
A DNS response, answering the DNS request, includes the IP address of the target. Being a clear-text protocol, DNS leaves itself vulnerable to many types of attacks, the most famous being the DNS cache poisoning attack discovered by Dan Kaminsky (An Illustrated Guide to the Kaminsky DNS Vulnerability (unixwiz.net)), who found a way to hijack DNS authority records using query-ID guessing and timing attacks. That caused quite a furor in the security community at the time; DNS servers were quickly patched and the world was great again. By the way, that ghost recently came back from the dead. Read this article by Ars Technica to know more: DNS cache poisoning, the Internet attack from 2008, is back from the dead | Ars Technica.
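A rough back-of-the-envelope calculation shows why query-ID guessing was so feasible before source-port randomization: the transaction ID is only 16 bits, so an attacker flooding spoofed responses has decent odds of winning the race. This is a simplified model (it ignores timing windows and port randomization), purely to illustrate the scale of the problem.

```python
# The DNS transaction ID is a single 16-bit field.
ID_SPACE = 2 ** 16  # 65,536 possible IDs

def hit_probability(spoofed_responses: int) -> float:
    """Chance that at least one spoofed response matches the real query ID,
    assuming the ID is the only secret (pre-port-randomization model)."""
    return 1 - (1 - 1 / ID_SPACE) ** spoofed_responses

if __name__ == "__main__":
    for n in (100, 1000, 65536):
        print(f"{n:>6} spoofed packets -> {hit_probability(n):.1%} chance of a hit")
```

With the full ID space worth of spoofed packets the attacker succeeds well over half the time, which is why modern resolvers also randomize the UDP source port to expand the search space.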
There were other issues with DNS. Even if it was not poisoned, anybody with access to the traffic could read the requests and responses and carry out traffic analysis. People could be identified by the sites they were visiting on the Internet. With privacy gaining a foothold in recent times, this was just not acceptable. Now, imposing privacy (essentially confidentiality) is best achieved through encryption, and that is the path taken by many peer protocols of DNS: HTTP giving way to HTTPS, FTP to FTPS/SFTP, Telnet to SSH and many more. These were relatively straightforward evolutions, since the payload can be encrypted between the client and the server, making it invisible to any network node that may process the packet in between. With DNS, this is not possible. There are many nodes in the DNS ecosystem that must process the payload itself, and thus the content has to be exposed to them.
The Evolution of Privacy Conscious DNS
The early efforts to secure DNS were not for confidentiality but integrity. DNS cache poisoning attacks were a huge threat, and thus the first efforts addressed this integrity problem by digitally signing DNS responses to prevent cache poisoning. This led to the birth of DNSSEC around 2006. Check out the timeline at DNSSEC Deployment (dnssec-deployment.org) to know more.
The past decade has seen an increased focus on confidentiality, with many privacy laws implemented all over the globe. Clear-text data transactions were just not acceptable anymore. Since DNS itself did not support any encryption, the best possible way to secure it was by tunneling it through another protocol that would provide the encryption layer. And thus were born the twin recent initiatives of DNS over HTTPS (DoH) and DNS over TLS (DoT). Both broadly encapsulate DNS traffic between the endpoint and the resolver, improving confidentiality but introducing a new set of concerns.
DNS Over TLS (DoT)
DoT works by encapsulating plain old DNS traffic in a TLS-encrypted channel over TCP port 853, providing confidentiality and integrity between stub resolvers (e.g. an Android phone or laptop) and recursive DNS servers (e.g. Google's 8.8.8.8). In the example below, DoT encrypts data marked by communications 1 and 8. But what about links 2 to 7? What about the recursive resolver itself? Who controls it?
In DoT implementations, the recursive resolver is owned and provided by DNS providers such as Cloudflare, Google, Akamai etc. They have 100% visibility into each and every DNS request. Also, the security of links 2 to 7 is dependent upon the provider. Typically, these are not encrypted for privacy, though most implement integrity checks. Thus, while part of the DNS request is encrypted for confidentiality, the onus of securing it further shifts to the DNS provider. They also get all the data, which can be considered private data. That may not matter to home users, but if you are a large enterprise, letting go of that control does carry some risk.
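For the curious, the mechanics of DoT are simple: the same wire-format DNS message, sent over TLS to port 853 with a 2-byte length prefix (the standard DNS-over-TCP framing, per RFC 7858). The sketch below uses Cloudflare's 1.1.1.1 as an illustrative resolver; certificate validation against a bare IP works here because Cloudflare's certificate carries IP SANs, which is not something every resolver guarantees.

```python
import socket
import ssl
import struct

def frame_tcp(msg: bytes) -> bytes:
    """DNS over TCP/TLS prefixes each message with its 2-byte length (RFC 7858)."""
    return struct.pack(">H", len(msg)) + msg

def query_dot(query: bytes, resolver: str = "1.1.1.1") -> bytes:
    """Send a pre-built wire-format DNS query over TLS to port 853 (DoT).

    Note: the resolver terminates the TLS tunnel, so it still sees
    the full query in the clear on its side.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((resolver, 853), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=resolver) as tls:
            tls.sendall(frame_tcp(query))
            length = struct.unpack(">H", tls.recv(2))[0]
            data = b""
            while len(data) < length:
                data += tls.recv(length - len(data))
            return data
```

The point to notice is that nothing about DNS itself changed; only the pipe did, which is exactly the trade-off discussed above.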
DNS Over HTTPS (DoH)
DoH is very similar to DoT except that it is more democratic. As mentioned above, DoT works on TCP port 853. Anyone monitoring that port can look at the traffic metadata and do traffic analysis, even if they cannot see the contents of the DNS request and response. DoH, on the other hand, does almost the same thing but over the standard HTTPS TCP port 443, making DNS traffic indistinguishable from regular HTTPS traffic. This cuts both ways: it also reduces visibility for defenders, who can no longer pick DNS traffic out of the HTTPS stream.
DoT and DoH Implementations
Whether we like it or not, DoH and DoT are already a reality and are being widely adopted, with DoH being the preferred option for browser vendors. Check out this article How to Enable DNS Over HTTPS in Your Web Browser (lifehacker.com) to know how to enable it in most popular browsers. Windows support for DoH is already in preview builds and will become mainstream soon.
Read more about it at Windows 10 21H1 new features up to build 21277 * Pureinfotech
Considerations with DoT and DoH
Most home users will simply not care one way or another. DNS resolution is not something to be excited about as a home user. The only considerations here are that the DoH/DoT provider has full access to your DNS requests (private data) and can potentially monetize them. They can also change their terms of service at any time. This, to some degree, negates the privacy benefits of using a public resolver. Also, depending upon local law, this record may be demanded by governments. In addition, since your ISP can no longer access DNS records, any service dependent upon them, such as parental control, malware protection etc., can no longer be provided. As usual, there is no free lunch; there is always a trade-off.
If you are an enterprise though, you care about controlling DNS much more than a home user. You want to be able to sinkhole malware domains, quickly disable malicious domains at the perimeter, allow internal domain lookups, and monitor and analyze DNS queries for possible attacks or exfiltration (you must, actually). In brief, enterprises really must be in control of their DNS. Currently, the only way is by explicitly disabling DoH/DoT to public resolvers. Slowly but surely, DoH is becoming the default in web browsers and has to be explicitly disabled to continue using the existing DNS. Enterprises allowing browser updates should be aware of that and implement mechanisms to make this change at scale. There are many ways to do this, including deploying enterprise versions of the browser, which allow much greater centralized control over settings. Check out Get Firefox for your enterprise with ESR and Rapid Release (mozilla.org) and Chrome Browser for Business Productivity — Chrome Enterprise for example.
DNS logs are also a great resource for security monitoring and threat hunting for enterprises. Some of these include:
- Review hosts with a high volume of uncommon record types (TXT, NULL, CNAME, etc.).
- Explore uncommon TLDs (.xyz, .me, .biz) and TLDs for geographical regions in which your organization does not regularly operate.
- Look for large volumes of NXDOMAIN (domain does not exist) response codes to detect possible DNS C2 via Domain Generation Algorithms (DGA).
- Look for hosts with high DNS request volume across multiple subdomains of a single parent domain.
- Identify suspicious requests by reviewing queries of domains that have a high level of entropy. Be careful though, many legitimate cloud URLs also match this pattern.
- Many more….
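The entropy heuristic from the list above is easy to prototype. The sketch below computes Shannon entropy per domain label; the 3.0-bit threshold and the sample names are illustrative choices, not a production-tuned detector, and as noted above, legitimate cloud-generated hostnames will also trip it.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a domain label."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# DGA-looking random labels score high; human-readable names score low.
# Threshold of 3.0 bits is an illustrative starting point, not a standard.
suspects = ["google", "x3b9qz7kfp2m", "facebook", "qpzm8v1xk2c9d4"]
for name in suspects:
    flag = "suspicious" if shannon_entropy(name) > 3.0 else "ok"
    print(f"{name:16} {shannon_entropy(name):.2f} bits  {flag}")
```

In practice you would run this over the query field of your DNS logs and combine it with the other signals in the list (NXDOMAIN volume, rare TLDs) to cut down false positives.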
With DoH/DoT, enterprises lose the ability to extract DNS data over network and make real time detections. There could be alternatives as we shall see later.
One of the key concerns with DoH/DoT is the shift of control over DNS from the end user to DNS resolvers. Privacy advocacy generally holds that users should be in charge of their private data; DoH/DoT is the exact opposite.
Enter Oblivious DoH. To prevent the DNS resolver from linking the requester to the request, this model introduces a proxy between the end user and the DNS resolver. The resolver only sees the proxy server for all end users behind it and thus has no idea who actually requested that domain. This has been co-designed and proposed by Cloudflare, Apple and Fastly. ZDNet had a nice article on it at Oblivious DoH: Cloudflare supports new privacy, security-focused DNS standard | ZDNet.
This could perhaps work for enterprises who could place the proxy at perimeter and route all DoH/DoT traffic through that.
Looks all good, right? Except that none of the above alters DNS itself in any way. All they do is layer TLS encryption on top of it, whether directly or via HTTPS (which also uses TLS, by the way). That means increased complexity and increased latency (even if it is negligible).
DNS over QUIC (DoQ)
QUIC is an evolution of the SPDY protocol, Google's attempt to develop an alternative to the ageing TCP protocol. SPDY was considered a success and was adopted as the data transport layer for the HTTP/2 web protocol. QUIC promises to be faster and more reliable, with built-in support for TLS encryption. QUIC has also been formally adopted for use in the HTTP/3 protocol. Thus, QUIC seems to be the future of web traffic.
DoQ is very different from DoH or DoT in that it replaces the underlying transport of DNS, swapping UDP for QUIC and providing encryption capability from the ground up. It is still early days, and DoQ faces the same issue of the DNS resolver having full visibility into requests. Something like Oblivious DoQ may evolve in the future. ZDNet recently reported the first-ever deployment of a DoQ resolver (Ad-blocker AdGuard deploys world's first DNS-over-QUIC resolver | ZDNet).
What does all this mean for Cyber Defenders?
As mentioned earlier, enterprises want to control DNS due to privacy and other issues, but cyber defenders love DNS due to the wide range of potential detection use cases. DNS also forms a major component of the practice of Network Security Monitoring (NSM), with tools such as Zeek creating a dedicated dns.log that can be leveraged for pivoting and analysis. All this encryption over DNS blinds that entire practice of DNS monitoring.
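To illustrate what defenders stand to lose, here is a minimal parser for Zeek's tab-separated dns.log that pulls out NXDOMAIN responses (one of the hunting signals listed earlier). The sample line is fabricated for illustration and uses only a subset of the fields a real dns.log carries.

```python
# Illustrative two-line excerpt in Zeek's TSV format; the field set is
# a subset of a real dns.log and the record is made up.
SAMPLE = (
    "#fields\tts\tid.orig_h\tquery\tqtype_name\trcode_name\n"
    "1609459200.0\t10.0.0.5\tx3b9qz7kfp2m.example.com\tA\tNXDOMAIN\n"
)

def parse_dns_log(text: str):
    """Yield one dict per record, keyed by the #fields header line."""
    fields = []
    for line in text.splitlines():
        if line.startswith("#fields"):
            fields = line.split("\t")[1:]
        elif line and not line.startswith("#"):
            yield dict(zip(fields, line.split("\t")))

nxdomains = [r for r in parse_dns_log(SAMPLE) if r["rcode_name"] == "NXDOMAIN"]
for rec in nxdomains:
    print(f"{rec['id.orig_h']} asked for {rec['query']} -> NXDOMAIN")
```

Once DNS rides inside TLS, Zeek's view of this traffic goes dark and this whole class of pivoting disappears unless the logs come from somewhere else.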
Just like NSM practices were forced to evolve for monitoring other protocols that went encrypted, such as HTTP to HTTPS over multiple iterations, DNS will also have to go down that path. However, even that aspect is evolving, and the gates providing monitoring opportunities are getting closed one by one. Initially, when most of the TLS handshake process got encrypted, as in TLS 1.3, it still left behind enough clues about the exchange, such as the Server Name Indication (SNI) field in the Client Hello message, which gave an indication of the server on the other side. Security solutions started using metadata such as JA3 and JA3S TLS fingerprints for detecting TLS exchanges (more on it at GitHub — salesforce/ja3: JA3 is a standard for creating SSL client fingerprints in an easy to produce and shareable way). But JA3 is passive, dependent upon visibility into network traffic. As more and more of the traffic gets encrypted, including the digital certificate, JA3S is losing its potency.
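For reference, a JA3 fingerprint is just an MD5 hash over five Client Hello fields (TLS version, cipher suites, extensions, elliptic curves, point formats), each rendered in decimal and dash-joined. The field values in the sketch below are hypothetical, chosen only to show the string format.

```python
import hashlib

def ja3_fingerprint(version, ciphers, extensions, curves, point_formats):
    """Build the JA3 string from Client Hello fields and MD5-hash it.

    Fields are decimal values, dash-joined within each group and
    comma-joined between groups, per the salesforce/ja3 convention.
    """
    ja3 = ",".join([
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ])
    return ja3, hashlib.md5(ja3.encode("ascii")).hexdigest()

# Hypothetical Client Hello field values, for illustration only:
ja3_str, ja3_md5 = ja3_fingerprint(771, [4865, 4866], [0, 23, 65281], [29, 23], [0])
```

Because the fingerprint is built entirely from the Client Hello, it survives TLS 1.3's handshake encryption; it is the server-side JA3S half that suffers as certificates and more of the exchange get encrypted.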
The next evolution of SNI was Encrypted SNI (ESNI), which relied on DNS for key distribution, so an unencrypted DNS lookup could still potentially reveal the server on the other side. Even so, ESNI closed the door on defensive solutions that relied on SNI fingerprinting.
ESNI is giving way to Encrypted Client Hello (ECH). This aims to fully encrypt the TLS handshake, including the Client Hello, making the exchange fully resistant to traffic analysis. A great read on this subject can be found at Good-bye ESNI, hello ECH! (cloudflare.com). All this makes it harder and harder for the cyber defenders to extract information on DNS exchanges.
One potential solution is JARM. JARM actively fingerprints a TLS server by sending 10 specifically crafted Client Hello packets and hashing the responses. Read more on it at Easily Identify Malicious Servers on the Internet with JARM | by John Althouse | Nov, 2020 | Salesforce Engineering. Even with ECH, JARM may be able to identify the server, but given that it is active and involves multiple TLS handshakes, it can perhaps never scale. It could be of much help in investigations though.
After over 35 years, the big daddy of the Internet, DNS, is changing, and how! With so much going on in the privacy space and more and more traffic getting encrypted, DNS could not afford to be left behind. But with so much in flux, it is difficult to forecast a direction. Will DoH be the new ubiquitous standard, or Oblivious DoH? Maybe DoQ? Who knows.
What can be predicted with certainty is that DNS is going to get encrypted, and cyber defenders will have to look at alternate means to get DNS logs for detection and threat hunting use cases. Since picking DNS out of network traffic is going to become near impossible, one alternative is to get the raw DNS logs from the DNS servers themselves. But hardly anyone does that, since it is cumbersome and can also lead to performance issues.
Microsoft DNS debug logs are notorious for being painful. That was the only logging option until Server 2012, and Microsoft themselves have this to say about it:
“Debug logging can affect overall server performance and also consumes disk space, therefore it is recommended to enable debug logging only temporarily when detailed DNS transaction information is needed.”
Thankfully, Microsoft introduced DNS Analytic Logs in Server 2016 and back ported it to Server 2012R2. This should improve DNS logging and performance and perhaps can be the best option to get DNS logs in future. Read more about DNS Analytic logging at DNS Logging and Diagnostics | Microsoft Docs.
BIND, the dominant DNS implementation on Linux, is no cakewalk either: its logs are also very cumbersome, and finding the optimal configuration requires detailed knowledge.
CrowdStrike, in their 2019 report at Lateral Movement Explained | What is Lateral Movement? (crowdstrike.com), identified a breakout time of 1 hour 58 minutes. They explain:
“Breakout time is the time it takes for an intruder to begin moving laterally into other systems in the network after initially compromising a machine. Last year, CrowdStrike tracked an average breakout time of 1 hour and 58 minutes. This means an organization has roughly two hours to detect, investigate and remediate or contain the threat. If it takes longer, you run the risk of the adversary stealing or destroying your critical data and assets.”
When attackers go dark and start using legitimate credentials inside your environment, detecting them becomes near impossible. This became evident during the recent SUNBURST/SOLORIGATE supply chain attacks, where even the best-defended enterprises remained oblivious to targeted attacks for a considerable period of time. If this can happen to them, most of us stand no chance.
Thus, we cannot afford to lose DNS as a high fidelity log source. We have to find alternatives to continue to enrich our use cases based on DNS. The future is not clear, but the next few years will determine the direction for the Internet of the future. Will you be ready for that?
Originally published at https://www.linkedin.com.