LSN/CGN/NAT44 vs. DNS64/NAT64, Part 2 of 3

Last time we looked at NAT444; this time we will examine DNS64/NAT64.  I always write the mechanism that way because the DNS lookup (and possible manipulation) comes first, then the NAT operation.

The DNS64/NAT64 mechanism (RFC 6147, RFC 6146) provides a way for an IPv6-only client to exchange IP packets with an IPv4-only server.  Because (of course) IPv6-only nodes can send only IPv6 packets, and IPv4-only nodes only IPv4 packets, translation must be performed on the packets somewhere along the path.

Translating IPv6 packets to IPv4 packets is not a particularly modern capability.  For example, NAT-PT specified a similar capability (RFC 2766, February 2000).  NAT-PT does not scale well, however, because it requires so much manual state to be developed and distributed to the participating nodes.  As a result, NAT-PT was moved to “Historic” status (RFC 4966, July 2007).  DNS64/NAT64 builds on NAT-PT, but enlists the DNS server capability, with some DNS64-specific extensions, to make IPv6-IPv4 packet translation scalable and deployable.  One significant difference between NAT-PT and DNS64/NAT64 is that in the latter the IPv6-only node must initiate the connection.  That is why above I use the language “IPv6-only client” and “IPv4-only server”.

As we often do, let’s explore DNS64/NAT64 via an example.

The diagram shows a simplified and idealized provider network.  This provider has chosen to deploy IPv6-only services towards subscribers, but wants to provide an IPv4 service as well, so that subscribers can reach both IPv6-only and IPv4-only services (and of course dual-stack services as well, on either stack).  IPv6 services have been built out to the subscriber edge.  The node at bottom left, for example, has the IPv6 address 2001:db8:5:4::a.

The provider has also built out the DNS infrastructure, and has included the DNS64 extensions.  In a nutshell (and there are RFCs on this whole process, so this is *very* terse), this is the DNS64 capability:

1) upon receiving a AAAA query from a downstream client, behave as a “regular” DNS caching server and query upstream towards the authoritative DNS server

2) if an IPv6 address is returned, behave as a “regular” DNS caching server and return the query result to the client

3) if **no IPv6 address is returned**, use the DNS64 extension and perform the following process:

3a) send an A query for the same name

3b) if no IPv4 address returned, behave as a “regular” DNS caching server and return “no such name”

3c) if an IPv4 address is returned, create a *synthetic* IPv6 address using the well-known prefix (64:ff9b::/96, as defined in RFC 6052), embedding the returned IPv4 address in the low 32 bits of the IPv6 address, and return that to the client as the AAAA query answer
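The address synthesis in step 3c is simple bit manipulation, and can be sketched in a few lines of Python (the IPv4 address used here is from the documentation range and purely illustrative):

```python
import ipaddress

WELL_KNOWN = ipaddress.IPv6Network("64:ff9b::/96")  # RFC 6052 well-known prefix

def synthesize_aaaa(ipv4_str: str) -> str:
    """Step 3c: embed the A-record answer in the low 32 bits of the
    well-known NAT64 prefix to form the synthetic AAAA answer."""
    v4 = ipaddress.IPv4Address(ipv4_str)
    v6_int = int(WELL_KNOWN.network_address) | int(v4)
    return str(ipaddress.IPv6Address(v6_int))

print(synthesize_aaaa("192.0.2.205"))  # 64:ff9b::c000:2cd
```

Note that 192.0.2.205 is 0xC00002CD in hex, which is why the last two groups of the synthetic address read c000:2cd.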

Going back to the example, make “Client-A” the client, and the IPv4-only node “Server-A” the server.  Following the DNS and DNS64 process above, the recursive AAAA query would fail, and the DNS64-initiated A query would return Server-A’s IPv4 address, 192.0.2.205.  DNS64 would then create the synthetic IPv6 address 64:ff9b::192.0.2.205 (same as 64:ff9b::c000:2cd), and return that to Client-A.

At this point DNS64 has completed.  On to NAT64.

Client-A will launch the IPv6 session, convinced that Server-A is also an IPv6-capable node.  The IPv6 packets will be sourced by Client-A from its IPv6 address and destined for the synthetic IPv6 address manufactured by DNS64.  These packets will be routed within the IPv6 enclave (the network to the left in the diagram) to the NAT64 inside-facing interface (following the default route in this case).  The NAT64 function knows that an IPv6 packet sourced internally, arriving on the inside interface, with a destination within the prefix 64:ff9b::/96, is a packet to be translated.

The NAT64 platform will translate the packet into IPv4, making the source address its own IPv4-Internet-facing address, and making the destination the 32-bit value “plucked” from the low 32 bits of the IPv6 destination address (the synthetic address) – 192.0.2.205 in this example.  The NAT64 device will also make a state entry for this session, using L4 port multiplexing, much as an IPv4 hide-NAT stateful device would to track and manage sessions.
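The “plucking” is just a mask of the low 32 bits, the inverse of the DNS64 synthesis; a minimal sketch (the session-state tracking is omitted):

```python
import ipaddress

def extract_ipv4(synthetic: str) -> str:
    """NAT64's half of the trick: recover the embedded IPv4
    destination from the low 32 bits of a 64:ff9b::/96 address."""
    v6 = ipaddress.IPv6Address(synthetic)
    return str(ipaddress.IPv4Address(int(v6) & 0xFFFF_FFFF))

print(extract_ipv4("64:ff9b::c000:2cd"))  # 192.0.2.205
```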

In this manner, many internal IPv6-only clients can access any IPv4-based server with an IPv4 address in the global DNS, all with no explicit manual configuration of sources and destinations.

And that’s it for DNS64/NAT64.

A few last quick points:

1) DNS64/NAT64 does have downsides; for one, not all applications work well through the mechanism, and for another, any content that uses IPv4 address literals rather than Fully Qualified Domain Names (FQDNs) (like an embedded link to a graphic within a webpage) will not work – the translation depends on the DNS lookup.

2) Note that IPv6 native traffic is simply routed at the enclave perimeter.  This means that, for Client-A, sessions to IPv4-only Internet-based servers are translated, which is probably an “acceptable, but not great” solution, while sessions to IPv6-only Internet-based services run as native IPv6 end-to-end, providing an excellent solution.

3) Now that World IPv6 Launch Day has come and gone, and as more content providers make their content reachable over IPv6, less and less traffic in the example scenario will use the DNS64/NAT64 translation service and more traffic will be carried natively.

That’s it for this blog entry.  In the next installment of the NAT444 and DNS64/NAT64 story we’ll talk briefly about relative strengths and weaknesses of the two mechanisms and provide some practical deployment considerations.

LSN/CGN/NAT44 vs. DNS64/NAT64, Part 1 of 3

There may be a little confusion about the roles of these two technologies.  Plus, I have heard a few provider architects say “we were thinking of deploying LSN, because we will quickly run short of routable IPv4 addresses, but now we think maybe DNS64/NAT64 would be better”.  Both solutions conserve the routable IPv4 addresses provisioned to the subscriber (let’s take the broadband case – specifically cable), but in other ways they are very different, both in how they work and in their relative strengths and weaknesses.

I’m going to cover this topic over several posts.  For this post, let’s review basic elements of LSN.


Large Scale NAT (LSN) involves providers assigning non-routable IPv4 addresses to their subscribers (either the subscriber host – say, Windows 7 – or more commonly the subscriber’s edge router or Home Gateway (HGW)).  The provider then implements, at the edge of its network, the LSN device, which performs IPv4 “Hide NAT” on the non-routable source address of subscriber packets.

As an example, let’s take the case of a subscriber with a Home Gateway (HGW), as shown below.  The graphic shows two subscribers – we will focus on the top subscriber.  Inside the subscriber home, the network is numbered from a private (non-routable) range.  The subscriber’s HGW provider-facing interface is likewise assigned a non-routable address.  The HGW performs IPv4 “Hide NAT” on outbound packets.  The packet is carried across the provider’s IPv4 infrastructure to the LSN device, whose subscriber-facing interface is also assigned a non-routable address.  The LSN then performs IPv4 “Hide NAT” a second time, this time to a routable address (the example uses an address taken from the “documentation” range for IPv4, which is not really routable, but in a real deployment a routable address would be used).
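The state either Hide-NAT stage keeps can be sketched as a tiny table: each inside (address, port) flow is multiplexed onto its own L4 port on the single shared outside address.  The addresses here (a 100.64.0.0/10 provider-internal range per RFC 6598, and a 192.0.2.x documentation address standing in for the routable outside address) and the naive sequential port allocator are illustrative assumptions, not how a production LSN behaves:

```python
# Minimal sketch of per-session Hide-NAT state (addresses illustrative).
class HideNat:
    def __init__(self, outside_addr: str, first_port: int = 1024):
        self.outside_addr = outside_addr
        self.next_port = first_port
        self.table = {}  # (inside_addr, inside_port) -> outside_port

    def translate(self, inside_addr: str, inside_port: int):
        key = (inside_addr, inside_port)
        if key not in self.table:        # new session: allocate a fresh port
            self.table[key] = self.next_port
            self.next_port += 1
        return (self.outside_addr, self.table[key])

# Two subscribers sharing one routable (documentation-range) address:
lsn = HideNat("192.0.2.1")
print(lsn.translate("100.64.0.10", 49152))  # ('192.0.2.1', 1024)
print(lsn.translate("100.64.0.11", 49152))  # ('192.0.2.1', 1025)
```

The same inside flow always maps back to the same outside port, which is how return traffic finds its way to the right subscriber.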

Therefore this solution is an *all IPv4* solution, and shares a single routable IPv4 address across multiple subscribers.  There is no simple rule for deciding how many subscribers can be supported through a single outside routable address, but for this example let’s say we’ve chosen 100 subscribers per routable address. Some key considerations for LSN include:

  1. Application failure or degradation, where some applications will not run properly, or at all, through the NAT444 mechanism (for example, some peer-to-peer file sharing solutions, some online gaming, video streaming, and other applications do not work across NAT444, and the list is longer and growing).  For a comprehensive document with supporting testing, read the excellent IETF draft authored by a number of cable industry IPv6 leaders.
  2. L4 port exhaustion when supporting too many subscribers behind a single routable address, where port-hungry applications (Google Maps is a common example of a single function that opens many L4 connections immediately) use up the shared 64K TCP and 64K UDP ports available for a single routable address
  3. Blacklisting, where one subscriber performs some action that results in the shared IPv4 address being blacklisted, resulting in all 99 other subscribers losing the use of the blacklisted service in addition to the perpetrator
  4. Some providers are short of non-routable IPv4 addresses for use within the provider network, so internal addressing is also a constraint
  5. Any provisioning, logging, billing, lawful intercept, or monitoring systems within the provider environment that assume a “1 routable address = 1 subscriber” scheme will be impacted and will have to be modified to understand the NAT444 solution; the amount of stored history on subscriber activity will also likely be much larger
  6. Subscriber service provisioning limitations, where a subscriber cannot, for example, host a website using HTTP (TCP port 80), because only one subscriber behind the shared address can have use of that port
  7. LSN/NAT444 requires no investment at all in IPv6 – in fact it has nothing to do with IPv6
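The port-exhaustion consideration (item 2 above) lends itself to some back-of-the-envelope arithmetic.  The per-subscriber peak-session figures below are illustrative assumptions, not measured values:

```python
# Rough sizing: subscribers per shared routable address before TCP
# port exhaustion. Figures are illustrative assumptions.
USABLE_TCP_PORTS = 65536 - 1024  # reserve the well-known port range

def max_subscribers(peak_sessions_per_sub: int) -> int:
    """Subscribers that can share one routable address at a given
    peak concurrent TCP session load per subscriber."""
    return USABLE_TCP_PORTS // peak_sessions_per_sub

print(max_subscribers(100))  # light users: 645 subscribers per address
print(max_subscribers(600))  # port-hungry apps: 107 subscribers per address
```

This is why there is no simple universal rule for subscribers-per-address: the answer depends entirely on the application mix.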

So then, to wrap up this entry, LSN/NAT444 is an IPv4-only solution that allows providers to ration their shrinking pool of routable IPv4 addresses.  This comes with some functionality limitations for subscribers, will still require the provider to implement new platforms and new functionality, and will almost certainly impact some provider back-end support systems.  No investment in IPv6 is required.

The most likely outcome is that providers deploy LSN only in the lowest-cost service tier, where subscribers can run only “simple applications”.  Subscribers that want better service would pay more, and still get a single routable IPv4 address allocation.

By the way, Jeff Doyle wrote an excellent article that takes a deeper look at NAT444/LSN/CGN, which I highly recommend.

In the next blog entry we will examine how DNS64/NAT64 works and some key considerations with that technology.

IPv6 and “Teardrop” Attacks

Way back in 1997, CERT issued an advisory about “Teardrop” attacks.  These are attacks where overlapping IP fragments are sent by a malicious actor in an attempt to either A) gain access to a protected system by evading IDS, firewall, and/or platform-based defenses, or B) disrupt the capabilities of the target environment – ideally in several parts of the network but especially the target node (the destination IP interface) itself.

In the 1997 CERT advisory, I am sure the concern was IPv4 systems.  In brief, CERT’s advice was to contact vendors for a solution, and that solution was almost always to upgrade the platform OS version – it seemed the vendors were able to patch implementations.  Presumably, this was done by discarding fragments associated with a Teardrop attack (any set of fragments where any two overlap).  Juniper, for example, added a user configurable switch on platforms to explicitly drop overlapping fragments.

A node sending overlapping fragments makes no sense at all from a protocol standpoint.  Yet my understanding is that some old IP stacks did sometimes send overlapping fragments, not as an attack but as a result of a sloppy or buggy IP implementation.  Thus, in an effort to promote interoperability with these stacks, other vendors would accept overlapping fragments.  This philosophy goes all the way back to Jon Postel, who said “be conservative in what you do, be liberal in what you accept from others”.

On the downside, this flexibility provided an exploitable vulnerability to malicious actors. Mostly, the attack was more directed at OS platform bugs than the IP stack.  Modern platforms for the most part have implementations hardened against Teardrop-style attacks on IPv4 or IPv6.

And while it is true that the IETF IPv6 specification (RFC 2460) does not explicitly prohibit overlapping fragments, most vendors implemented protections against those IPv6 fragments as a matter of best practice.  Later (December 2009), RFC 5722 made it out-of-spec for source nodes to send overlapping fragments, and required the reassembling node (the ultimate destination) to silently discard all fragments associated with a fragmented packet when any two fragments overlap.  Fragments are silently discarded, as opposed to having the reassembling node send ICMPv6 error messages, to guard against reflection attacks (where the attacker uses a victim IPv6 address as the source of the offending packets), and because the sending node is already acting “fishy”, so it is best not to burden any system (including the originator) with processing ICMPv6 error messages.
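The overlap test RFC 5722 requires is straightforward; here is a sketch treating each fragment as a (byte offset, length) pair (real IPv6 Fragment headers carry the offset in 8-octet units, so a real implementation would scale accordingly):

```python
def fragments_overlap(frags):
    """Return True if any two fragments overlap. Each fragment is a
    (start, length) pair in bytes. Per RFC 5722, a reassembling node
    silently discards the whole packet when this returns True."""
    spans = sorted(frags)
    for (start_a, len_a), (start_b, _) in zip(spans, spans[1:]):
        if start_a + len_a > start_b:  # previous fragment runs past the next
            return True
    return False

print(fragments_overlap([(0, 1232), (1232, 1232)]))  # contiguous: False
print(fragments_overlap([(0, 1232), (616, 1232)]))   # overlapping: True
```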

There is also the issue of fragmentation attacks against “atomic fragments”.  These are IPv6 packets that are not actually fragmented, but carry the IPv6 Fragmentation Extension Header (with “Fragment Offset” set to zero and the “M” bit not set).  The reason these are allowed in the IPv6 specification is to support certain IPv6-to-IPv4 translation solutions.  To protect the capability from exploitation, there is an IETF draft (“Processing of IPv6 “Atomic” Fragments”) discussing ways to defend against fragmentation-centric attacks on this kind of packet.

With the improvements made to stack implementations over time (in the case of IPv4), and the fact that most IPv6 implementations were always “tighter”, the Teardrop attack has become a spent force on the Internet – really only affecting older platforms.  Thanks to RFC 5722, there is now official IETF guidance on how IPv6 stacks should be implemented in terms of handling overlapping fragments, and giving platform vendors the green light to discard these corrupted fragments aggressively.

It is a classic example of the “arms race” nature of IP security – whether for IPv4 or IPv6.  The bad guys discover a new vulnerability and an exploit, and the good guys close it off and harden everything else related to the vulnerability they can think of.   A good reminder that IPv6 security is only “better” than IPv4 security in a few ways, and secure environments really depend on rigorous deployment of security best practices and constant vigilance by an informed and talented security team.

What’s New with the Flow Label?

The IPv6 Flow Label specification has been updated by the IETF as of November 2011, and includes changes and refinements that should make the Flow Label more useful in the near term.

IT networking specialists have considered the Flow Label “underspecified” since the first IPv6 specifications were written in the mid-1990s.  As a result, this 20-bit field carried in the IPv6 base header has been a bit of a waste, and has seen little use.

In brief, three (3) IETF RFCs related to the Flow Label have recently been published.  Below is a brief digest of each:

RFC 6436 (Informational) is titled “Rationale for Update to the IPv6 Flow Label Specification”, and provides just that.  It provides a description of perceived shortcomings of the previous specification (RFC 3697) and makes recommendations on changes.  The most important bits of information in this RFC are:

  • RFC 3697 requires that only the source node set a Flow Label, and that the Flow Label be delivered intact to the destination node.  This means the Flow Label cannot be set by intermediate nodes, unlike the DSCP sub-field in the IPv6 Traffic Class field.
  • Because the Flow Label is not covered by any checksum, and it is not covered by IPsec protections (not even the Authentication Header), the field cannot really be trusted to not be changed accidentally or overtly along the path from source to destination.
  • The new RFC makes recommendations on updates to the Flow Label, which are specified more authoritatively in RFC 6437.

RFC 6437 (Proposed Standard) is titled “IPv6 Flow Label Specification”, and replaces the previous standard (RFC 3697).  Important elements of the new standard include:

  • Noting that the default implementation of the Flow Label is “stateless”, but that future other uses based on a signaling mechanism are not precluded.
  • Noting that the envisioned use case for stateless Flow Labels involves load-balancing traffic, either in the case of Equal Cost MultiPath (ECMP) or Link Aggregation (LAG) implementations.
  • Encouraging source nodes to set Flow Labels, with a unique label per flow
  • Restating that Flow Labels should be well-distributed (random) and not guessable
  • IMPORTANT – removes the restriction that only the source node may set a Flow Label, making it permissible for any device to set a non-zero Flow Label in an IPv6 header where the Flow Label was previously zero (so, an intermediate node may set a Flow Label, but not re-set it)
  • UPSHOT HERE – this new permissiveness means that a router – perhaps the distribution-layer router – can set a Flow Label for flows that do not have them, making the Flow Label useful to other routers, further downstream, that may be performing ECMP or LAG.  The router can take action on the Flow Label where the host did not.  This is a little like an ingress router doing QoS classification and marking for the benefit of downstream routers.
  • The new specification also makes it permissible, on occasion, in high-security environments, for an intermediate node to set a non-zero Flow Label to zero, in an effort to eliminate the possibility of a covert channel being implemented in Flow Label values.
  • The RFC also discusses numerous other possible security issues related to the Flow Label

RFC 6438 (Proposed Standard) is titled “Using the IPv6 Flow Label for ECMP and LAG in Tunnels”.  It describes a way by which the Flow Label can be used as the title suggests.  The scenario is that a tunnel has been built where the tunneled traffic passes through a set of devices implementing LAG.  In brief it works like this:

  • Normally, tunnel traffic frustrates the LAG load-balancing algorithm.  Think about it.  If the traffic were not in a tunnel, the individual flows would be apparent – the traffic would be a collection of flows between varying sources and destinations, and for different protocols and L4 services (the “5-tuple” would be available for load-balancing).  Because the traffic is tunneled, all traffic yields the same 5-tuple at the downstream LAG: a single source (one end of the tunnel), a single destination (the other end of the tunnel), and one protocol (perhaps IP-in-IP, or perhaps GRE).  In other words, with the tunnel all traffic looks like a single “massive flow” to the LAG.
  • For a solution, the RFC describes a mechanism whereby the ingress Tunnel End-Points (TEP) examine the tunnel traffic as it enters the tunnel, and then write a Flow Label into the *outer* IPv6 base header based on the 5-tuple of the *inside* packets.  In short, the TEP and the Flow Label turn the “single massive flow” back into individual flows.  The LAG then load-balances on the IPv6 3-tuple (source, destination, Flow Label), and the balancing can be efficient.
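The TEP behavior can be sketched as a hash of the inner 5-tuple down to the 20-bit Flow Label field.  The specific hash function here (truncated SHA-256) is an illustrative choice – the RFC requires only a well-distributed, stateless mapping, not any particular algorithm:

```python
import hashlib

def flow_label_from_5tuple(src: str, dst: str, proto: int,
                           sport: int, dport: int) -> int:
    """Hash an inner packet's 5-tuple down to a 20-bit Flow Label,
    as a tunnel ingress (TEP) might per RFC 6438."""
    key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") & 0xFFFFF  # keep low 20 bits

label = flow_label_from_5tuple("2001:db8::1", "2001:db8:ffff::2", 6, 49152, 443)
print(f"{label:05x}")  # the same inner 5-tuple always yields the same label
```

Determinism is the point: packets of one inner flow must always carry the same outer label, so the downstream LAG keeps each flow on one link while spreading different flows across links.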

That’s the quick wrap-up for all three (3) related RFCs.  IPv6 refinement continues apace, and the protocol continues to evolve gracefully to match the needs of a global scope network.

A Less Secure IPv6?

There has been a relatively recent revision at the IETF – still in Draft but shaping up – to change the IPv6 stack requirement for IPsec implementation from “MUST” to “SHOULD”.  This has been under discussion at the IETF for a few revisions of the Draft, but appears to be “solid” now.  This change has been controversial, and deserves a bit of a closer look and some careful thought about the impact on IPv6’s ongoing evolution and deployment future.

The base IPv6 IETF specification (RFC 2460) mandates IPsec support (“MUST”) in all “full implementations” of IPv6.  In the early years of IPv6 promotion, this gave rise to the overstatement that “IPv6 has security built-in” or “IPv6 is more secure than IPv4”.  Both these taglines for IPv6 are inaccurate.  IPsec is a (terrific) layer-3 solution to provide in-transit security services to IPv6 (or IPv4, in many, many IPv4 stack implementations) packet flows.  It is not the right security tool to solve all security problems, and IPv6 – like IPv4 – faces a number of very different kinds of attack strategies.

More specifically, RFC 2460 mandates support of AH (Authentication Header) and ESP (Encapsulating Security Payload) in IPv6.

The latest twist in this story is the change in “IPv6 Node Requirements” (“draft-ietf-6man-node-req-bis-XX”), on track for Informational status, moving IPsec support in IPv6 stacks from “MUST” to “SHOULD”.

A reasonable question at this point is “why is the IETF removing IPsec as a mandatory element of IPv6” and maybe “does this really matter”?  I don’t have answers to those questions, but I will suggest a way to look at this issue.

For the first, my understanding is that some vendors object to a blanket requirement for IPsec in IPv6 stacks because: (a) their customers are not asking for it; and/or (b) they believe their product will run better or faster or use less power or have a smaller footprint if they are able to omit IPsec support; and perhaps they simply want to make their own decisions about IPsec’s relative priority in their product.  This generates a reasonable question – essentially “what price is paid by unique implementations for the standard that all IPv6 stacks MUST support IPsec, and what is the value of a standards-driven requirement?”.

The second question is harder.  There are some IPv6 standards that assume IPsec is present, as “promised” by RFC 2460.  Perhaps most notable is OSPFv3, which removes all L4+ security services in favor of leveraging the node’s underlying IPsec service for security.  It would not be good to have OSPFv3 unable to use a secured routing plane in cases where IPsec is not provided by the platform vendor, or to make vendor OSPFv3 implementations non-interoperable by driving vendors to build varying security solutions into their OSPFv3 implementations.

And it is simpler – and simpler is often better – to have a straightforward rule like “all IPv6 stacks have IPsec” than something more complex like “most IPv6 stacks have IPsec, but not all, in all cases, so careful evaluation is required by buyers/implementers”.

A careful read of the Node Requirements Draft actually shows it does two things concurrently.  On the “add complexity side”, it does make IPsec support by IPv6 stacks optional.  Vendors are free to implement IPsec in their IPv6 stacks or not, at their discretion. That means buyers need to evaluate their need for IPsec separately from their need for IPv6, and consider the tradeoffs of buying an “IPsec-free” IPv6 implementation.

On the “add clarity side” of the balance sheet, the Draft tightens the exact requirements for IPv6 IPsec implementation **when the IPsec implementation is present**.  As an example, the Draft makes support for RFC 4301 (“Security Architecture for the Internet Protocol”) mandatory for IPv6 IPsec implementations.  That RFC in turn requires support for IKE (Internet Key Exchange) automatic key management within those IPsec implementations, which makes IPsec much more deployable and secure.  RFC 4301 also requires support for a minimum set of supported cryptographic algorithms, which makes IPsec more interoperable across vendor implementations.

To wrap up, then, these new IPsec requirements in the Draft have two significant impacts:

  1. IPsec becomes optional to implement on IPv6 stacks
  2. Where IPsec is implemented for IPv6, it will be more interoperable and more robust

My own opinion is that these revised requirements are a good outcome.  They allow vendors to serve their customers best, by implementing IPsec (or not) as their particular solution demands.  And where IPsec is a market requirement (even if no longer a strict RFC requirement), as an implementer I will have increased confidence IPsec will be deployable, interoperable, and robust – thanks to additions and clarifications above and beyond what RFC 2460 required that are described in the Node Requirements Draft.  I am willing to trade off the uncertainty of the former for the assurance of the latter.