
...making Linux just a little more fun!

DNS source port randomisation

Kapil Hari Paranjape [kapil at imsc.res.in]


Thu, 10 Jul 2008 17:29:27 +0530

Hello,

Most of you must have heard about Dan Kaminsky's discovery of a flaw in the DNS protocol and its standard implementation (in glibc and bind 8).

I thought of a quick fix for source port randomisation for DNS queries using iptables.

http://www.imsc.res.in/~kapil/blog/dns_quickfix-2008-07-10-17-07

Basically, the idea is to use iptables feature of source nat coupled with source randomisation.

iptables -t nat -A POSTROUTING -o ! lo -p udp --dport 53 \
    -j MASQUERADE --to-ports 1024-65535 --random
 
iptables -t nat -A POSTROUTING -o ! lo -p tcp --dport 53 \
    -j MASQUERADE --to-ports 1024-65535 --random

After writing this I realised that the randomisation only works with kernel versions newer than 2.6.22.
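A quick way to check that the rewrite actually happens (a sketch; "eth0" is an assumption, substitute your outbound interface):

tcpdump -n -i eth0 udp dst port 53

If the rules work, the port after the source address should jump around from query to query instead of staying fixed.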

Regards,

Kapil.




Jimmy O'Regan [joregan at gmail.com]


Thu, 10 Jul 2008 13:42:42 +0100

2008/7/10 Kapil Hari Paranjape <kapil@imsc.res.in>:

> Hello,
>
> Most of you must have heard about Dan Kaminsky's discovery of a flaw
> in the DNS protocol and its standard implementation (in glibc and
> bind 8).
>
> I thought of a quick fix for source port randomisation for DNS
> queries using iptables.
>
>  http://www.imsc.res.in/~kapil/blog/dns_quickfix-2008-07-10-17-07

Maybe you mean this: http://www.imsc.res.in/~kapil/blog/lg/dns_quickfix-2008-07-10-17-07




Kapil Hari Paranjape [kapil at imsc.res.in]


Thu, 10 Jul 2008 20:39:46 +0530

Hello,

On Thu, 10 Jul 2008, Jimmy O'Regan wrote:

> Maybe you mean this:
> http://www.imsc.res.in/~kapil/blog/lg/dns_quickfix-2008-07-10-17-07

Yes. Thanks.

The question is: does the fix work?

Kapil.




René Pfeiffer [lynx at luchs.at]


Thu, 10 Jul 2008 17:20:42 +0200

On Jul 10, 2008 at 2039 +0530, Kapil Hari Paranjape appeared and said:

> On Thu, 10 Jul 2008, Jimmy O'Regan wrote:
> > Maybe you mean this:
> > http://www.imsc.res.in/~kapil/blog/lg/dns_quickfix-2008-07-10-17-07
>
> Yes. Thanks.
>
> The question is: does the fix work?

The problem I see is Netfilter's idea of "random". I have looked at neither the Netfilter source nor the resolver's, but I suspect that the quality of the fix is tied to the random number generation involved. A statistical analysis could help. :)
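For a crude first look (a sketch; "eth0" is again just a placeholder interface), one could collect the source ports of a couple of hundred outgoing queries and inspect the spread:

tcpdump -n -c 200 -i eth0 udp dst port 53 2>/dev/null \
  | awk '{ split($3, a, "."); print a[5] }' \
  | sort -n | uniq -c | sort -rn | head

Heavily repeated ports, or ports marching upward in sequence, would be a bad sign.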

Best, René.




René Pfeiffer [lynx at luchs.at]


Tue, 15 Jul 2008 17:35:17 +0200

On Jul 10, 2008 at 2039 +0530, Kapil Hari Paranjape appeared and said:

> Hello,
>
> On Thu, 10 Jul 2008, Jimmy O'Regan wrote:
> > Maybe you mean this:
> > http://www.imsc.res.in/~kapil/blog/lg/dns_quickfix-2008-07-10-17-07
>
> Yes. Thanks.
>
> The question is: does the fix work?

I still don't know, but today I read Paul Vixie's opinion about fixing the issue by using PAT/NAT:

"Tom Cross of ISS-XForce correctly pointed out that if your recursive nameserver is behind most forms of NAT/PAT device, the patch won't do you any good since your port numbers will be rewritten on the way out, often using some pretty nonrandom looking substitute port numbers."

http://www.circleid.com/posts/87143_dns_not_a_guessing_game/

I still haven't compared BIND's/the resolver's source code with the Netfilter code, but I doubt that port randomisation in the network code is doing very complicated things. The above comment is a good hint about which other components can ruin your port selection strategy, though.

Best, René.

-- 
  )\._.,--....,'``.  fL  Let GNU/Linux work for you while you take a nap.
 /,   _.. \   _\  (`._ ,. R. Pfeiffer <lynx at luchs.at> + http://web.luchs.at/
`._.-(,_..'--(,_..'`-.;.'  - System administration + Consulting + Teaching -
Got mail delivery problems?  http://web.luchs.at/information/blockedmail.php




Rick Moen [rick at linuxmafia.com]


Wed, 23 Jul 2008 17:18:55 -0700

Quoting Kapil Hari Paranjape (kapil@imsc.res.in):

> Hello,
> 
> Most of you must have heard about Dan Kaminsky's discovery of a flaw
> in the DNS protocol and its standard implementation (in glibc and
> bind 8).
> 
> I thought of a quick fix for source port randomisation for DNS
> queries using iptables.
> 
>  http://www.imsc.res.in/~kapil/blog/lg/dns_quickfix-2008-07-10-17-07
> 
> Basically, the idea is to use iptables feature of source nat coupled
> with source randomisation.
> 
> iptables -t nat -A POSTROUTING -o ! lo -p udp --dport 53 \
>     -j MASQUERADE --to-ports 1024-65535 --random
> 
> iptables -t nat -A POSTROUTING -o ! lo -p tcp --dport 53 \
>     -j MASQUERADE --to-ports 1024-65535 --random
> 
> After writing this I realised that the randomisation only works with
> kernel versions newer than 2.6.22.

Here's one of the Matasano Chargen guys telling the full story about the probable attack mode. Yes, making all resolvers (including the cruddy BIND8-descended resolver library built into your libc) randomise source ports, one way or another, is necessary in order to foil transaction forgery. (The referenced Halvar Flake is a German mathematician who speculated quite well about this problem on his blog: http://addxorrol.blogspot.com/2008/07/on-dans-request-for-no-speculation.html)

"Reliable DNS Forgery in 2008: Kaminsky's Discovery", from Matasano Chargen, by ecopeland:

0. The cat is out of the bag. Yes, Halvar Flake figured out the flaw Dan Kaminsky will announce at Black Hat.

1. Pretend for the moment that you know only the basic function of DNS -- that it translates WWW.VICTIM.COM into 1.2.3.4. The code that does this is called a resolver. Each time the resolver contacts the DNS to translate names to addresses, it creates a packet called a query. The exchange of packets is called a transaction. Since the number of packets flying about on the internet requires scientific notation to express, you can imagine there has to be some way of not mixing them up.

Bob goes to a deli, to get a sandwich. Bob walks up to the counter, takes a pointy ticket from a round red dispenser. The ticket has a number on it. This will be Bob's unique identifier for his sandwich acquisition transaction. Note that the number will probably be used twice -- once when he is called to the counter to place his order and again when he's called back to get his sandwich. If you're wondering, Bob likes ham on rye with no onions.

If you've got this, you have the concept of transaction IDs, which are numbers assigned to keep different transactions in order. Conveniently, the first sixteen bits of a DNS packet are just such a unique identifier. It's called a query id (QID). And with the efficiency of the deli, the QID is used for multiple transactions.

2. Until very recently, there were two basic classes of DNS vulnerabilities. One of them involves mucking about with the QID in DNS packets and the other requires you to know the Deep Magic. First, QIDs.

Bob's a resolver and Alice is a content DNS server. Bob asks Alice for the address of WWW.VICTIM.COM. The answer is 1.2.3.4. Mallory would like the answer to be 6.6.6.0. It is a (now not) secret shame of mine that for a great deal of my career, creating and sending packets was, to me, Deep Magic. Then it became part of my job, and I learned that it is surprisingly trivial. So put aside the idea that forging IP packets is the hard part of poisoning DNS. If I'm Mallory and I'm attacking Bob, how can he distinguish my packets from Alice's? Because I can't see the QID in his request, and the QID in my response won't match. The QID is the only thing protecting the DNS from Mallory (me).

QID attacks began in the olden days, when BIND simply incremented the QID with every query response. If you can remember 1995, here's a workable DNS attack. Think fast: 9372 + 1. Did you get 9373, or even miss and get 9374? You win, Alice loses. Mallory sends a constant stream of DNS responses for WWW.VICTIM.COM. All are quietly discarded -- until Mallory gets Bob to query for WWW.VICTIM.COM. If Mallory's response gets to your computer before the legitimate response arrives from your ISP's name server, you will be redirected where Mallory tells you you're going.

Obvious fix: you want the QID to be randomly generated. Now Alice and Mallory are in a race. Alice sees Bob's request and knows the QID. Mallory has to guess it. The first one to land a packet with the correct QID wins. Randomized QIDs give Alice a big advantage in this race.

But there are a bunch more problems here:

* If you convince Bob to ask Alice the same question 1000 times all at once, and Bob uses a different QID for each packet, you have made the race 1000 times easier for Mallory to win (see the sketch below).

* If Bob uses a crappy random number generator, Mallory can get Bob to ask for names she controls, like WWW.EVIL.COM, and watch how the QIDs bounce around; eventually, she'll break the RNG and be able to predict its outputs.

* 16 bits just isn't big enough to provide real security at the traffic rates we deal with in 2008.
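A back-of-the-envelope sketch of that first point (the figures 1000 and 100 are arbitrary):

# With q outstanding queries for one name, each bearing a random
# 16-bit QID, a single spoofed packet matches some QID with
# probability q/65536; n spoofed packets succeed with probability:
awk 'BEGIN { q = 1000; n = 100; print 1 - (1 - q/65536)^n }'
# prints roughly 0.79 -- rather good odds for Mallory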

Your computer's resolver is probably a stub. Which means it won't really save the response. You don't want it to. The stub asks a real DNS server, probably run by your ISP. That server doesn't know everything. It can't, and shouldn't, because the whole idea of DNS is to compensate for the organic and shifting nature of internet naming and addressing. Frequently, that server has to go ask another, and so on. The cool kids call this "recursion".

Responses carry another value, too, called a time to live (TTL). This number tells your name server how long to cache the answer. Why? Because they deal with zillions of queries. Whoever wins the race between Alice and Mallory, their answer gets cached. All subsequent responses will be dropped. All future requests for that same data, within the TTL, come from that answer. This is good for whoever wins the race. If Alice wins, it means Mallory can't poison the cache for that name. If Mallory wins, the next 10,000 or so people that ask that cache where WWW.VICTIM.COM is go to 6.6.6.0.

3. Then there's that other set of DNS vulnerabilities. These require you to pay attention in class. They haven't really been talked about since 1997. And they're hard to find, because you have to understand how DNS works. In other words, you have to be completely crazy. Lazlo Hollyfeld crazy. I'm speaking of course of RRset poisoning.

DNS has a complicated architecture. Not only that, but not all name servers run the same code. So not all of them implement DNS in exactly the same way. And not only that, but not all name servers are configured properly.

I just described a QID attack that poisons the name server's cache. This attack requires speed, agility and luck, because if the "real" answer happens to arrive before your spoofed one, you're locked out.

Fortunately for those of you that have a time machine, some versions of DNS provide you with another way to poison the name server's cache anyway. To explain it, I will have to explain more about the format of a DNS packet.

DNS packets are variable in length and consist of a header, some flags and resource records (RRs). RRs are where the goods ride around. There are up to three sets of RRs in a DNS packet, along with the original query. These are:

* Answer RRs, which contain the answer to whatever question you asked (such as the A record that says WWW.VICTIM.COM is 1.2.3.4)

* Authority RRs, which tell resolvers which name servers to refer to to get the complete answer for a question

* Additional RRs, sometimes called "glue", which contain any additional information needed to make the response effective.

A word about the Additional RRs. Think about an NS record, like the one that COM's name server uses to tell us that, to find out where WWW.VICTIM.COM is, you have to ask NS1.VICTIM.COM. That's good to know, but it's not going to help you unless you know where to find NS1.VICTIM.COM. Names are not addresses. This is a chicken and egg problem. The answer is, you provide both the NS record pointing VICTIM.COM to NS1.VICTIM.COM, and the A record pointing NS1.VICTIM.COM to 1.2.3.1.

Now, let's party like it's 1995.

Download the source code for a DNS implementation and hack it up such that every time it sends out a response, it also sends out a little bit of evil -- an extra Additional RR with bad information. Then let's set up an evil server with it, and register it as EVIL.COM. Now get a bunch of web pages up with IMG tags pointing to names hosted at that server. Bob innocently loads up a page with the malicious tags which coerces his browser into resolving that name. Bob asks Alice to resolve that name. Here comes recursion: eventually the query arrives at our evil server. Which sends back a response with an unexpected (evil) Additional RR. If Alice's cache honors the unexpected record, it's 1995 - buy CSCO! - and you just poisoned their cache. Worse, it will replace the "real" data already in the cache with the fake data. You asked where WWW.EVIL.COM was (or rather, the image tags did). But Alice also "found out" where WWW.VICTIM.COM was: 6.6.6.0. Every resolver that points to that name server will now gladly forward you to the website of the beast.

4. It's not 1995. It's 2008. There are fixes for the attacks I have described.

Fix 1:

The QID race is fixed with random IDs, and by using a strong random number generator and being careful with the state you keep for queries. 16 bit query IDs are still too short, which fills us with dread. There are hacks to get around this. For instance, DJBDNS randomizes the source port on requests as well, and thus won't honor responses unless they come from someone who guesses the ~16 bit source port. This brings us close to 32 bits, which is much harder to guess.
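To put rough numbers on the difference (a sketch; the flood size n is arbitrary):

# Chance that a flood of n spoofed answers hits one pending query,
# guessing a random 16-bit QID alone versus QID plus source port:
awk 'BEGIN { n = 50000;
             print "QID only: ", 1 - (1 - 1/2^16)^n;
             print "QID+port: ", 1 - (1 - 1/2^32)^n }'
# prints roughly 0.53 versus 0.00001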

Fix 2:

The RR set poisoning attack is fixed by bailiwick checking, which is a quirky way of saying that resolvers simply remember that if they're asking where WWW.VICTIM.COM is, they're not interested in caching a new address for WWW.GOOGLE.COM in the same transaction.

Remember how these fixes work. They're very important.

And so we arrive at the present day.

5. Let's try again to convince Bob that WWW.VICTIM.COM is 6.6.6.0. This time though, instead of getting Bob to look up WWW.VICTIM.COM and then beating Alice in the race, or getting Bob to look up WWW.EVIL.COM and slipping strychnine into his ham sandwich, we're going to be clever (sneaky).

Get Bob to look up AAAAA.VICTIM.COM. Race Alice. Alice's answer is NXDOMAIN, because there's no such name as AAAAA.VICTIM.COM. Mallory has an answer. We'll come back to it. Alice has an advantage in the race, and so she likely beats Mallory. NXDOMAIN for AAAAA.VICTIM.COM. Alice's advantage is not insurmountable. Mallory repeats with AAAAB.VICTIM.COM. Then AAAAC.VICTIM.COM. And so on. Sometime, perhaps around CXOPQ.VICTIM.COM, Mallory wins! Bob believes CXOPQ.VICTIM.COM is 6.6.6.0!

Poisoning CXOPQ.VICTIM.COM is not super valuable to Mallory. But Mallory has another trick up her sleeve. Because her response didn't just say CXOPQ.VICTIM.COM was 6.6.6.0. It also contained Additional RRs pointing WWW.VICTIM.COM to 6.6.6.0. Those records are in-bailiwick: Bob is in fact interested in VICTIM.COM for this query. Mallory has combined attack #1 with attack #2, defeating fix #1 and fix #2. Mallory can conduct this attack in less than 10 seconds on a fast Internet link.




Kapil Hari Paranjape [kapil at imsc.res.in]


Thu, 24 Jul 2008 08:32:24 +0530

Hello,

On Wed, 23 Jul 2008, Rick Moen wrote:

> Here's one of the Matasano Chargen guys telling the full
> story about the probable attack mode.  Yes, making all resolvers
> (including the cruddy BIND8-descended resolver library built into your
> libc) randomise source ports, one way or another, is necessary in order
> to foil transaction forgery.

Yet ... reading through to the end of the write-up you have sent, it may not be enough.

The problem with using source port randomisation in a real DNS resolving daemon is avoiding attempts to bind to ports that are already in use. Presumably asking the kernel to bind a random port solves this problem, as the kernel already "knows" which ports are bound.

The problem with using source port randomisation via iptables is that if you also happen to be running a DNS name server on the same machine, then you do not want to randomise the source port of the DNS answers sent by this machine. After all, the machine sending the query may be "dumb" and be using source port 53 itself! So the iptables setup needs to be a little more complex to exclude such packets.
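Something along these lines might do it (a sketch only; it assumes the local name server answers from port 53, and note that it would also exempt queries that a BIND 8-style resolver itself sends from source port 53):

# In the nat table, ACCEPT means "leave this packet un-NATed": answers
# our own name server sends from port 53 keep their source port, so
# the querier does not discard them.
iptables -t nat -A POSTROUTING -o ! lo -p udp --sport 53 --dport 53 -j ACCEPT
# Everything else bound for a remote port 53 gets a random source port.
# (Repeat for -p tcp, as in the original rules.)
iptables -t nat -A POSTROUTING -o ! lo -p udp --dport 53 \
    -j MASQUERADE --to-ports 1024-65535 --random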

So:

 - if you just use the libc resolver, then upgrade to kernel version 2.6.24 and use iptables to randomise the source port.

 - if you have a caching-only name server, then this may not be enough, and it would be better to update your caching software to avoid the additional RRs attack.

 - if you have a DNS server that does caching as well as authority service, then you probably cannot employ iptables. So it is best to update your software. This means bind 8 has to go out of circulation unless someone takes the trouble to fix it.

I seem to recall that Dan Bernstein always recommended that combining an authority server with a caching server was a bad idea!

Regards,

Kapil.




Rick Moen [rick at linuxmafia.com]


Thu, 24 Jul 2008 14:10:23 -0700

Quoting Kapil Hari Paranjape (kapil@imsc.res.in):

> On Wed, 23 Jul 2008, Rick Moen wrote:
> > Here's one of the Matasano Chargen guys telling the full
> > story about the probable attack mode.  Yes, making all resolvers
> > (including the cruddy BIND8-descended resolver library built into your
> > libc) randomise source ports, one way or another, is necessary in order
> > to foil transaction forgery.
> 
> Yet ... reading through to the end of the write-up you have sent ---
> it may not be enough.

The quoted text of mine above, on reflection, is just a bit alarmist: The "stub" resolver built into Linux libc/glibc is indeed deficient in not bothering to randomise the source ports of outgoing recursive-resolver DNS queries. However, this is not a huge problem in itself, because the results of those queries are not cached. Therefore, to borrow the metaphor I used in one of my other postings, the "poison" gets flushed from the system pretty much immediately. (On the other hand, systems running "nscd", the name service cache daemon, would have a concern, because that combines caching with responses to queries from a process, libc's resolver library, that uses 16-bit QIDs and no source-port randomisation.)

So, SOHO and similar systems that have no DNS infrastructure of their own are probably relatively safe, ironically, although their operators should be concerned about the condition of whatever caching services they do use. The fatal combination involves poor/absent source port randomisation along with results caching.

> The problem with using source port randomisation with a real DNS
> resolving daemon is to avoid trying to bind to sockets that are
> already being utilised. Presumably asking the kernel to bind a random
> port solves this problem as it already "knows" what ports are bound.
> 
> The problem with using source port randomisation with iptables is
> that if you happen to also be running a DNS name server on the same
> machine then you do not want to randomise the source port of the
> DNS answers sent by this machine. After all the machine sending the
> query may be "dumb" and be using the source port 53! So the iptables
> setup is a little more complex to exclude such packets.

Honestly, although wrapping recursive-resolver processes of various sorts with iptables scripts to randomise the source ports of DNS queries is an ingenious and delightful solution -- and it's good to know that it's an option -- the right solution, actually, is to just upgrade any caching nameserver charged with handling outbound recursive-resolver queries, to versions that randomise those queries' source ports.

Also, one needs to ensure that poorly randomising NAT/PAT between that host and the outside world doesn't de-randomise the query.

Also, make sure the nameserver config doesn't prevent use of the source port randomising option by locking the queries to port 53. (This pitfall is going to be quite common, because of IP/port filtering.)
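With BIND 9, for instance, the classic pitfall is a "query-source" directive that pins the port. A quick way to look for it (a sketch; the config paths are Debian-flavoured assumptions):

grep -n 'query-source' /etc/bind/named.conf /etc/bind/named.conf.options 2>/dev/null

A hit like "query-source address * port 53;" defeats the randomisation; "query-source address *;", or no directive at all, lets the server pick random ports.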

> So:
>  - if you just use the libc resolver then upgrade to kernel version
>  2.6.24 and use iptables to randomise the source port.

I'd be curious to hear how this works out -- but, realistically, this isn't a credible threat model in the absence of results caching.

>  - if you have a caching-only name server, then this may not be enough
>  and it would be better to update your caching software to avoid the
>  additional RRs attack.

Utterly mandatory. Urgent.

>  - if you have a DNS server that does caching as well as authority
>  service, then you probably cannot employ iptables. So it is best to
>  update your software. This means bind 8 has to go out of
>  circulation unless someone takes the trouble to fix it.

It's worth noting that Paul Vixie, Dan Kaminsky, and others behind the July 8 mass-upgrade announcement quietly worked with the only significant large BIND8 site remaining, Yahoo, Inc., to finally retire that codebase from their installation. BIND8's unfixable, and always has been. Anyone who is still running it has something of a death wish.

> I seem to recall that Dan Bernstein always recommended that combining
> an authority server with a caching server was a bad idea!

It's at least sub-optimal, and I don't want to seem like a BIND9 cheerleader (which I'm not), but: These problems can be properly addressed within the codebase of a do-it-all nameserver like BIND9, just fine. But -- you're right -- not by wrapping it using iptables.

[NAT/PAT "firewall" appliances:]

> This seems to be quite a serious problem!
> 
> Some solutions:
>  a. Use the appliance in "bridge mode" and use a real computer
>  as the gateway.
>  b. Use a real computer with two NICs as the gateway and use the
>  appliance as a router alone.
>  c. Buy an appliance that uses Linux under the hood and update it to a
>  recent kernel.

You are so very right! Other solutions are also possible, such as moving your caching recursive-resolver nameserver to be in front of the NAT box, on the outside network, or forwarding your queries to an outside box.

By the way, this posting from Halvar Flake's blog explains pretty well why, although nameservers doing bailiwick checking is A Good Thing, it still leaves room for (some) credible cache-poisoning threats:

   Egill H said...
 
   Here is why it works:
 
   Mallory wants to poison the server ns.polya.com
 
   Mallory sends NS requests for ulam00001.com, ulam00002.com ... to
   ns.polya.com.
 
   Mallory then sends a forged answer, saying that the NS for
   www.ulam00002.com is ns.google.com AND puts a glue record saying that
   ns.google.com is 66.6.6.6
 
   Because the glue records correspond with the answer record (same
   domain), the targeted nameserver will cache or replace its current
   record of ns.google.com to be 66.6.6.6

"Mallory", here, presumably has an M-name to indicate that he/she is a Man-in-the-Middle attacker, trying to sabotage communications between implied traditional characters Alice (a DNS user) and Bob (in this case ns.polya.com, a recursive-resolver nameserver that Alice uses).

Anyway, to explain: Traditional bailiwick checking would ensure that ns.polya.com will not commit to its cache any glue records from incoming query results that lie outside the queried domain. Here's an example of in-domain "glue records":

$ dig  -t a  linuxmafia.com  @ns1.linuxmafia.com
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26390
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 2
 
;; ANSWER SECTION:
linuxmafia.com.         86400   IN      A       198.144.195.186
 
;; AUTHORITY SECTION:
linuxmafia.com.         86400   IN      NS      ns2.linuxmafia.com.
linuxmafia.com.         86400   IN      NS      ns.tx.primate.net.
linuxmafia.com.         86400   IN      NS      ns.primate.net.
linuxmafia.com.         86400   IN      NS      ns1.thecoop.net.
linuxmafia.com.         86400   IN      NS      ns1.linuxmafia.com.
 
;; ADDITIONAL SECTION:
ns1.linuxmafia.com.     86400   IN      A       198.144.195.186
ns2.linuxmafia.com.     86400   IN      A       63.193.123.122

Notice the two records in the "ADDITIONAL SECTION" data: I actually hadn't asked in my "dig" query where those nameservers are. I'd only asked what the "A" (forward-address) record for linuxmafia.com is -- but the answering nameserver not only provided the answer (within "ANSWER SECTION") but also the fully-qualified hostnames of all five authoritative nameservers and the corresponding IP addresses of two of them (in order to save a second lookup, if the querying host should need to talk to them -- an example of "glue records").

Why those two rather than all five? Because those are the only two whose names are within the .COM TLD, making the data in-bailiwick for a nameserver handling .COM namespace and answering queries about a .COM record. E.g., ns1.linuxmafia.com would have no business purporting to hand out allegedly reliable data about nameservers within the .NET namespace, so it doesn't try.

However, Egill H's point is that bailiwick checking would not block acceptance of a forged response carefully crafted to be in-bailiwick.

Bailiwick checking is further explained here: http://www.linuxjournal.com/article/9905




Kapil Hari Paranjape [kapil at imsc.res.in]


Thu, 24 Jul 2008 08:34:36 +0530

Dear Rick,

On Wed, 23 Jul 2008, Rick Moen wrote:

> Here's one of the Matasano Chargen guys telling the full
> story about the probable attack mode.

Great stuff.

> (The referenced Halvar Flake is a German
> mathematician who speculated quite well about this problem on his blog:
> http://addxorrol.blogspot.com/2008/07/on-dans-request-for-no-speculation.html)

Added to my "rawdog" configuration!

On Wed, 23 Jul 2008, Rick Moen wrote:

> On UDP, starting only with 2.6.24.  See:
> http://cipherdyne.org/blog/2008/07/mitigating-dns-cache-poisoning-attacks-with-iptables.html

Thanks for all these links!

Kapil.




Rick Moen [rick at linuxmafia.com]


Wed, 23 Jul 2008 21:30:05 -0700

[Forwarding this from the conspire@linuxmafia.com mailing list]

Let me expand on that, particularly for the benefit of the majority of you who do not run your own DNS nameservers -- because everyone has a horse in this race, not just sysadmins:

When your app (say, a Web browser or e-mail client) needs to communicate with a remote host, it invokes the system DNS service. On Linux boxes, that's a small library called the resolver (disturbingly, derived from horrible, ancient BIND8 spaghetti code) built into the system C library. The resolver, which is each TCP/IP system's DNS client piece, has (on Linux) /etc/resolv.conf as its configuration file.

For the 98% of you who get your IP address, nameservice details, routing, and so on via DHCP, that file gets overwritten frequently, each time your DHCP lease is renewed, FYI. Please do have a look at your /etc/resolv.conf. It's really simple. My server's own resolv.conf:

search linuxmafia.com deirdre.org
nameserver 198.144.192.2
nameserver 198.144.192.4
nameserver 198.144.195.186

The first line says "If the silly human gives a less-than-fully-specified hostname, try 'linuxmafia.com' as the domain suffix, and then also 'deirdre.org', before giving up." The remaining three lines give the IPs of three DNS servers where the resolver client can deliver queries. The first two are (excellently run!) nameservers at my upstream link, Raw Bandwidth Communications. The third is my own nameserver.

You folks on DHCP get all such lines from your DHCP server. (There are also ways to configure your DHCP client to modify this behaviour.) If you control your own DHCP server, e.g., as part of a "firewall" appliance, then you can determine what configuration data gets passed out with the DHCP IP leases.
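For example, with ISC dhclient (a sketch; the config file path varies by distribution), one line keeps DHCP renewals from pointing you elsewhere:

# Always use the local caching nameserver, whatever the DHCP server says.
echo 'supersede domain-name-servers 127.0.0.1;' >> /etc/dhcp3/dhclient.conf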

Anyhow, your resolver library lobs a query off to one of the DNS nameservers listed in resolv.conf. What does the nameserver do? It's like asking a research librarian: Either he/she knows, or knows whom to ask. "Knowing" is the model called authoritative DNS service: That's where your query happens to go to one of the nameservers that are the recognised, worldwide authorities for what DNS data that domain is serving at this moment. "Knowing whom to ask" is called recursive-resolver service, where your query reaches a nameserver that is not authoritative for the queried domain, but happens to have the requested data in its cache of data that others have asked for in the recent past ("I know a guy, who knows a guy...."), and that the accompanying "use by date" stamp (the "time to live" value) suggests that data are still good.
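You can watch both models with dig (a sketch; a.gtld-servers.net is one of the real .COM authoritative servers):

# Ask a .COM authoritative server directly, recursion disabled: you
# get a referral to linuxmafia.com's nameservers, not an answer.
dig +norecurse @a.gtld-servers.net linuxmafia.com A

# Ask your recursive resolver the same question: it chases the
# referrals for you and hands back the A record.
dig linuxmafia.com A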

It's been known for a long, long time that recursive-resolver service is technically difficult, and has huge security pitfalls. Among the worst hazards is a malicious party "poisoning" the cache data of a recursive-resolver server your local resolver library queries. Such caches can be poisoned only via queries from resolvers (DNS clients) on the servers' lists of IPs permitted to send them recursive queries. Remember when you signed up with your ISP and they gave you a small list of IPs that you can use as nameservers? (Maybe you don't, because you're using 100% DHCP. In that case, you're getting those IPs with your lease.) Those are nameservers your ISP is exposing to a huge number of users for recursive service -- at minimum, all of its customers, and some ISPs leave their public nameservers open to recursive queries from anywhere at all.

So, lesson #1: One of the easiest ways to reduce your security exposure to all DNS security issues is to avoid using (most) ISP nameservers for your general DNS needs. You can do that by setting up your own recursive-resolver nameserver package. The thing hardly anyone knows except sysadmins is that doing so is dead-simple. You pretty much just install the package and it works great by default. No tweaking, no futzing around. You just have to make sure resolv.conf points to it. It costs you a bit of RAM, and that's about it. Anyone can and should consider doing that -- yes, even on laptops.
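On a Debian-ish system, the whole exercise can look like this (a sketch; pdns-recursor is just one suitable package among several, and mind the DHCP overwriting issue mentioned above):

# Install a caching recursive resolver; it listens on 127.0.0.1 with
# sane defaults.
apt-get install pdns-recursor

# Point the system resolver at it.
echo 'nameserver 127.0.0.1' > /etc/resolv.conf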

Basically, ISP nameservers are (in general) Typhoid Marys. Don't use them! The fact that I'm still relying in part on Raw Bandwidth's nameservers reflects the high esteem in which I hold Mike Durkin's operation, there, but that does NOT generalise to other ISPs.

A lot of people including Dan Bernstein pointed out, starting many years ago, that recursive queries are dangerously easy to forge (I mean, to forge a faked response loaded with bogus data that is then accepted as having come from the real nameserver the resolver actually asked). Recursive queries have a (sort of) unique transaction ID number, called a query ID (QID) -- but that's just a 16-bit number, which is rather too few, making forged responses much more likely to succeed than if QIDs were, say, 32-bit integers.

Since it's not practical to switch to longer QIDs, the only other logical way to make it more difficult to convincingly forge responses to recursive queries is to make the queries originate on a random TCP or UDP port, rather than the standard DNS port 53. Guess what? Most nameservers prior to the patches released on July 8, 2008 did the very, very dumb thing, and always sent out their queries from port 53. The nameserver you use today probably does, too. That's very, very bad, because, as the "Matasano Chargen" guy and German mathematician Halvar Flake (http://addxorrol.blogspot.com/2008/07/on-dans-request-for-no-speculation.html) have pointed out, the bad guys have recently figured out -- or are right about to figure out -- how to easily poison the caches of vulnerable recursive-resolver nameservers. And nothing increases that vulnerability as much as always sending out recursive queries from the same port.

(The Matasano Chargen piece also talks about a second part of the problem: nameservers willing to accept "out of bailiwick" recursive response data: extra "oh, by the way" data thrown in along with the requested response that is about a different domain entirely. Fortunately, most modern nameservers are pretty good about that, and it's not addressed by the July 8 patches.)

Something a lot of people don't think much about is that your libc DNS code is a "stub" (limited) recursive-resolver of a sort: It originates DNS queries with the recursive bit set, which is the "if you don't know, please ask some other nameserver that does" signal. Aren't they also potentially attackable by the sort of forgery that the Matasano Chargen guy discusses? Yes, but "stub" resolvers don't cache their received data, so it's not much of a threat. (The "poison" gets flushed immediately.) Oddly enough, the desktop software components aren't the problem, this time. It's the working nameservers out on people's (and ISPs') server machines.

And people's "firewall" boxes are going to be a big problem. Two reasons:

1. Many firewall appliances have built-in recursive-resolver nameservers. Guess how many of those are likely to get patched? Right, almost none. (Fortunately, probably most of them are non-caching.)

2. Let's say you follow my advice and run a caching nameserver on your local machine -- and that you operate behind a "firewall" gateway appliance that connects your DSL or cable link to upstream, and that does NAT / port address translation (as they pretty much all do) so you can get by with a single IP. You're wary and so patch your systems to get the July 8 patches -- so that your resolver is originating its queries from a random port, instead of always sending them from port 53.

Good, right? Except, then, the firewall appliance's network address translation / port address translation (NAT/PAT) algorithm kicks in, and rewrites the outbound traffic. The originating port was random, so the firewall's rewritten version of that same packet should likewise have a random source port, right? Because all $40 cheap plastic appliances have excellent random number generators, right? Oops. Sorry, your originating port assignment probably doesn't end up being quite so random, any more. See: http://www.circleid.com/posts/87143_dns_not_a_guessing_game/ Basically, a typical firewall box makes a rather efficient de-randomiser.

Testing your nameserver's randomness of source port selection:

Do:

$ dig @[nameserver IP or hostname] porttest.dns-oarc.net in txt

The result string will include an editorial comment like "GOOD", "FAIR", or "POOR" about randomness quality.

Or use this Web facility: https://www.dns-oarc.net/oarc/services/dnsentropy

You really do want to attend to this now. It's not Somebody Else's Problem.




Kapil Hari Paranjape [kapil at imsc.res.in]


Thu, 24 Jul 2008 11:02:22 +0530

Dear Rick,

You are really making me very nervous!

On Wed, 23 Jul 2008, Rick Moen wrote:

> Good, right?  Except, then, the firewall appliance's network address
> translation / port address translation (NAT/PAT) algorithm kicks in, and
> rewrites the outbound traffic.  The originating port was random, so the
> firewall's rewritten version of that same packet should likewise have a
> random source port, right?  Because all $40 cheap plastic appliances
> have excellent random number generators, right?  Oops.  Sorry, your
> originating port assignment probably doesn't end up being quite so
> random, any more.  See:
> http://www.circleid.com/posts/87143_dns_not_a_guessing_game/  Basically,
> a typical firewall box makes a rather efficient de-randomiser.

This seems to be quite a serious problem!

Some solutions:

 a. Use the appliance in "bridge mode" and use a real computer
 as the gateway.
 b. Use a real computer with two NICs as the gateway and use the
 appliance as a router alone.
 c. Buy an appliance that uses Linux under the hood and update it to a
 recent kernel.

Regards,

Kapil.




Rick Moen [rick at linuxmafia.com]


Wed, 23 Jul 2008 18:01:25 -0700

Quoting Kapil Hari Paranjape (kapil@imsc.res.in):

> Basically, the idea is to use iptables feature of source nat coupled
> with source randomisation.
> 
> iptables -t nat -A POSTROUTING -o ! lo -p udp --dport 53 \
>     -j MASQUERADE --to-ports 1024-65535 --random
> 
> iptables -t nat -A POSTROUTING -o ! lo -p tcp --dport 53 \
>     -j MASQUERADE --to-ports 1024-65535 --random
> 
> After writing this I realised that the randomisation only works with
> kernel versions newer than 2.6.22.

On UDP, starting only with 2.6.24. See: http://cipherdyne.org/blog/2008/07/mitigating-dns-cache-poisoning-attacks-with-iptables.html

