I don’t know how the DNS protects itself against requests for the IP address of a nonexistent domain name. Innocent errors are bad enough, but a spamming technique that generates plausible domain names could be devastating. One corrupted personal computer could place a substantial burden on the top-level servers.

I believe that the DNS is a hierarchy of servers, with higher levels answering such requests from lower levels. Lower levels cache responses for a time. You could cache negative responses, but that would not help against a spammer attack that generates random domain names.
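To make the point concrete, here is a minimal sketch of a negative cache with a TTL; the class name and TTL value are assumptions for illustration. It shows why random names defeat such a cache: each freshly generated name misses and the request still reaches the parent server.

```python
import time

class NegativeCache:
    """Remember names that recently got a negative response."""

    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self.entries = {}  # name -> expiry time

    def is_known_negative(self, name, now=None):
        now = time.monotonic() if now is None else now
        expiry = self.entries.get(name)
        return expiry is not None and expiry > now

    def record_negative(self, name, now=None):
        now = time.monotonic() if now is None else now
        self.entries[name] = now + self.ttl
```

A repeated query for the same bad name is absorbed locally until the TTL expires, but every new random name is a cache miss.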

I have come up with the following simple scheme that works pretty well, I think.

Each node in the DNS service hierarchy keeps a decaying counter for each of its clients. Each time a DNS request results in a negative response, innocent or not, the counter is incremented. The counter decays with time. There is a counter threshold above which requests are delayed until the counter decays.
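The decaying counter above can be sketched as follows. The decay rate, threshold, and per-client bookkeeping are all assumptions for illustration; the counter is decayed lazily, only when a client is touched.

```python
import time

class NegativeResponseThrottle:
    """Per-client decaying counter of negative DNS responses (illustrative)."""

    def __init__(self, threshold=100.0, decay_per_second=1.0):
        self.threshold = threshold
        self.decay = decay_per_second
        self.counters = {}  # client id -> (count, time of last update)

    def _decayed(self, client, now):
        # Apply linear decay for the time elapsed since the last update.
        count, last = self.counters.get(client, (0.0, now))
        return max(0.0, count - self.decay * (now - last))

    def record_negative(self, client, now=None):
        """Call when a request from this client drew a negative response."""
        now = time.monotonic() if now is None else now
        self.counters[client] = (self._decayed(client, now) + 1.0, now)

    def delay_for(self, client, now=None):
        """Seconds to delay the client's next request; 0 if under threshold."""
        now = time.monotonic() if now is None else now
        excess = self._decayed(client, now) - self.threshold
        # Delay until the counter would have decayed back below threshold.
        return max(0.0, excess / self.decay)
```

A well-behaved client whose occasional typos decay away is never delayed; a client spraying random names accumulates a counter faster than it decays and is throttled.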

An ISP now has an incentive to apply this protocol to each of its customers, at least if this sort of attack emerges. If the ISP does not, then its own DNS server is throttled. This is a disguised charge for DNS, perhaps a form of congestion pricing.

This server behavior can be put in place without much coordination between systems, starting at or near the top. It simulates a congested server, but only for those asking the wrong questions.

Even the load due to innocent mistakes seems substantial, and this scheme leaves that part of the problem unsolved.

Related idea

This is to record an idea that was floating around for implementing DNS. The idea is that some level of domain name, for instance all names ending in “.com”, would be served by a single node, but through a hierarchy of caches. Each DNS cache server would have at most 8 clients. Each cache would recall which of its clients had been informed of which names, and would retain any cache entry that its clients know. The advantage comes when it is necessary to change an IP address: the system remembers all the caches that know the old address and can invalidate exactly those. LRU eviction is difficult here because higher caches don’t see the traffic on popular names; lower caches answer those requests from their own entries, so the hits never reach the higher levels.
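The invalidation scheme can be sketched as below. Everything here, including the class name and the way the root is preloaded, is a hypothetical illustration of the idea, not a real DNS implementation: each cache remembers which children it has told about each name, so an address change propagates down to exactly the caches holding the old entry.

```python
class CacheNode:
    """One node in the cache hierarchy; at most 8 clients per node."""

    MAX_CLIENTS = 8

    def __init__(self, parent=None):
        self.parent = parent
        self.entries = {}   # name -> IP address
        self.told = {}      # name -> set of child caches informed of it
        self.clients = []
        if parent is not None:
            assert len(parent.clients) < CacheNode.MAX_CLIENTS
            parent.clients.append(self)

    def lookup(self, name, child=None):
        if name not in self.entries:
            if self.parent is None:
                raise KeyError(name)  # the root is authoritative in this sketch
            self.entries[name] = self.parent.lookup(name, child=self)
        if child is not None:
            # Remember which child now knows this name.
            self.told.setdefault(name, set()).add(child)
        return self.entries[name]

    def update(self, name, new_ip):
        """Change an address, invalidating exactly the caches that know it."""
        self.entries[name] = new_ip
        for child in self.told.pop(name, set()):
            child.invalidate(name)

    def invalidate(self, name):
        self.entries.pop(name, None)
        for child in self.told.pop(name, set()):
            child.invalidate(name)
```

After an `update` at the root, the stale entry is gone from every cache that held it, and the next lookup from below fetches the new address; caches that never saw the name are untouched.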