OpenStack Havana and IPv6

OpenStack Havana (8th release) is available on October 17 with 400+ new features. However, it brought very little excitement for IPv6 fans: users still cannot launch IPv6-based VMs.

After running out of patience, we immersed ourselves in the lab for several weeks and achieved the following goals:

  • All OpenStack infrastructure nodes communicated with each other over IPv6
  • Users could spin up dual-stack VMs in a multi-tenant environment
  • VMs gained connectivity to the existing IPv6 network beyond the OpenStack boundary

Although the Havana release notes didn’t mention any IPv6 limitations, here is what we found:

  • Router Advertisements were not sent to the internal tenant network by default
  • The DHCP process was bound to an IP other than the default gateway of the tenant network
  • Neighbor Discovery packets were dropped by default by ip6tables filter rules
  • NAT and GARP were turned on for IPv6 subnets. Not desirable!

More details can be found in our whitepaper published at http://www.nephos6.com/pdf/OpenStack-Havana-on-IPv6.pdf


IPv6 and Taxes

How many times have you heard skeptics ask about the ephemeral Business Case for IPv6? How many times have you asked yourself that question? Even if you are one of its cheerleaders, there are those lonely, rainy nights when you doubt and wonder. The pressure has come off a bit lately. By now we all know there is no way around it, so that might be a business case in itself. But then … does it matter? Who cares?

Everyone will soon care.

Picture this nightmare: It is April 15. Yeah, that April 15, the IRS day. You procrastinated (much like many organizations that should have moved to IPv6 by now) and you are ready to do what it takes to file by the deadline. You are ready in your tax filing sweatpants and headband, the pile of supporting papers is to your right on the desk, a gallon of coffee is to your left, it is game time! Now all you need is the forms.

The Facebook lover, Twitter-savvy, Google master you sees no problem. You confidently type www.irs.gov into your browser and … you wait, and wait … and wait some more. Odd, but it must be crowded, you say. You get to the forms and start the download. The progress bar is barely moving while the clock on the wall seems to have learned about the theory of relativity and is racing to the deadline. You keep your cool. You smash a few knickknacks from your desk. You sob like a baby but the download is still dragging. Pride be damned, you call a friend and … what do you know? He has no problem downloading the forms, many times over. You suspect he might be doing it even for fun, at your expense.

What in the IRS World is going on?

You, my friend, are accessing the IRS website over IPv6, and the IRS webpage is only 61% effective over IPv6. Here is the proof: the blue line in the graph below shows the Global IPv6 Effectiveness of www.irs.gov on April 22 as measured by v6Sonar (the only superhero APM in the IPv6 space: www.v6sonar.com).

[Graph: Global IPv6 Effectiveness of www.irs.gov, as measured by v6Sonar]

OK, so the nightmare scenario happened a few days before April 15 but … it did happen!

Morals of the story:

1) IPv6 does matter, and it might be your Iron Man (just to stay current) or your Freddy

2) Dear IRS, don’t take the OMB mandate for granted. Not even you can just wave a magic wand and expect IPv6 to work well. You have to work for it like everyone else.

What is your IPv6 story?

For questions, bashing, or tax filing audits, please e-mail: contact@nephos6.com

An example of “trying and failing” to really do IPv6 …

One of the most common complaints an organization has when trying to move forward with an IPv6 deployment is “lack of vendor support”.  Whether that means your ISP cannot get you the connectivity you need (cough DISA FAIL cough) or that critical components of your infrastructure just can’t do it (yet?) – in either type of scenario, this is clearly suboptimal.

Another problem we run into is vendors that say they “do IPv6”, and even seem to live up to that at first glance.  Only when a deployment commences do you find out about some “little things” that aren’t quite right …

Case in point: F5’s Big-IP Load Balancers.
These devices are fairly popular and claim pretty strong IPv6 capabilities.  And we can configure the virtual IPs (IPv4 and IPv6) that will be the public-facing side of a service being offered – nice, right?

However, these devices don’t do a couple things that we expected … 
* The “virtual inside address” – a Link Local IPv6 address that nodes will use as a default gateway – isn’t used properly.  The Big-IPs source the Router Advertisements from the “physical Link Local Address”, not the virtual one.  FAIL!
* Additionally, the current version of code does not support managing the device over IPv6.  Even the newer version of code supports IPv4 *or* IPv6 for management, but not both concurrently.  LAME!

(In both cases, we are working with the vendor to try to mitigate this … do you have any similar stories to share?  Send them along!)
Just some quick thoughts on the types of things you need to think through as you deploy IPv6 in your network … I mean, you are (at the very least) starting this process, aren’t you??
/TJ

PS – for reference:
http://support.f5.com/kb/en-us/solutions/public/12000/600/sol12648.html
http://support.f5.com/kb/en-us/solutions/public/12000/400/sol12430.html
… feel free to drop by there and let them know how important these items are for you :).

As another year closes, how is IPv6 looking for you?

While a bit cliche, the last days of each year are a good opportunity to reflect on the year – progress made, problems solved, insights gained – and to look towards those same things for the upcoming year.

2012 saw Google measuring IPv6 traffic clearing over 1% of overall traffic – while 1% is still too low, check the chart.  The phrase “hockey stick” comes to mind – it will be very interesting to see if this exponential growth trend continues (or accelerates).

2012 saw “World IPv6 Launch” happen, a very successful follow-up to “World IPv6 Day” in 2011.  “This time it is for real”, meaning not just a 24-hour light-up of some IPv6-capable site, but a permanent light-up of IPv6 on your primary site.  And getting ISPs to commit to lighting up some customers as well.

(Sidenote: I personally benefit from this in that Comcast has deployed native IPv6 in my service region.  I have native IPv6 at my home; not because I ‘know someone’ and not because I configured a tunnel or loaded custom hacked-up firmware on my CPE.  I have native IPv6 because my ISP supports it, my cable modem happens to be DOCSIS 3.0 capable, my off-the-shelf CPE (Linksys 4200v2, if you must ask) does DHCPv6, and because all of the computing things in my house support it.  Win!!)

2012 also saw the US federal government’s “OMB 2012” deadline come to pass.  And while many agencies failed to meet it, many did (kudos to the Department of Veterans Affairs, va.gov), and even those that didn’t have hopefully started down the right path.  A great guide to these requirements, and to the now-on-deck 2014 deadline, is available in the “Planning Guide/Roadmap Toward IPv6 Adoption within the U.S. Government”.

In our view, 2012 is also the year when having an IPv6 presence on the Internet became An Important Thing.  Sadly, many environments that have taken this bold step fail to maintain the same level of service, support, and monitoring as their IPv4 offerings.  To that end, we encourage the use of something like v6Sonar to monitor the status and performance of your site, over both IPv4 and IPv6.

Anecdotally, 2012 has also seen the training work continue to accelerate.  This is a Good Thing, as understanding IPv6 is an important step in getting it deployed – and we have a long way to go in spreading this knowledge!

On the topic of sharing information, another pet peeve of mine: articles authored in such a way that they could easily be misunderstood.  For example, this article makes several valid points – but also raises points that require more clarification to avoid misunderstandings.
(Also, note that NAT-PT has officially been deprecated – DNS64/NAT64* is where it is at; go read about that here and here!)

* – as a final aside: IPv6-only devices are something many have said will not happen in the near future, but clearly that is short-sighted and ignores one very important aspect of certain deployment scenarios.  Such as my phone.  In the interest of mitigating certain technical and economic impacts of dual-stacking cellular devices, my carrier** has elected to make IPv6-only an option for connecting to their network (and it is only an option for now; the user needs to reconfigure the phone to do this).  Naturally, I continue to need access to IPv4-only sites – and this happens via DNS64/NAT64!

** – OK, I lied.  One more aside – while my carrier is doing this great work in getting IPv6-only devices deployed, note that their website is IPv4-only.  That’s right, I actually need to use their DNS64/NAT64 implementation to get to their own website.  Insert the “sad trombone” sound here …


And I close with a smile – feel free to take a minute (less, actually!) to watch our video about how you might approach your IPv6 training needs :).


I hope you had a fantastic 2012, and are looking forward to an even more IPv6-enabled 2013!
/Your humble IPv6 servant

LSN/CGN/NAT44 vs. DNS64/NAT64, Part 2 of 3

Last time we looked at NAT444; this time we will examine DNS64/NAT64.  I always write the mechanism that way because the DNS lookup (and possible manipulation) comes first, then the NAT operation.

The DNS64/NAT64 mechanism (RFC 6147, RFC 6146) provides a way for an IPv6-only client to exchange IP packets with an IPv4-only server.  Because (of course) IPv6 nodes can only send IPv6 packets, and vice-versa for IPv4-only nodes, translation must be performed on the packets somewhere along the path.

Translating IPv6 packets to IPv4 packets is not a particularly modern capability.  For example, NAT-PT specified a similar capability (RFC 2766, February 2000).  NAT-PT did not scale well, however, because so much manual state had to be developed and distributed to the participating nodes.  As a result, NAT-PT was moved to “Historic” status (RFC 4966, July 2007).  DNS64/NAT64 builds on NAT-PT, but enlists the DNS server capability, with some DNS64-specific extensions, to make IPv6-IPv4 packet translation scalable and deployable.  One significant difference between NAT-PT and DNS64/NAT64 is that in the latter the IPv6-only node must initiate the connection.  That is why above I use the language “IPv6-only client” and “IPv4-only server”.

As we often do, let’s explore DNS64/NAT64 via an example.

The diagram shows a simplified and idealized provider network.  This provider has chosen to deploy IPv6-only services towards subscribers, but wants to provide an IPv4 service as well, so that  subscribers can reach both IPv6-only and IPv4-only services (and of course dual-stack services as well, on either stack).  IPv6 services have been built out to the subscriber edge.  The node at bottom left, for example, has the IPv6 address 2001:db8:5:4::a.

The provider has also built out the DNS infrastructure, and has included the DNS64 extensions.  In a nutshell (and there are RFCs on this whole process, so this is *very* terse), this is the DNS64 capability:

1) upon receiving an AAAA query from a downstream client, behave as a “regular” DNS caching server and query upstream towards the authoritative DNS server

2) if an IPv6 address is returned, behave as a “regular” DNS caching server and return query result to the client

3) if **no IPv6 address is returned**, use the DNS64 extension and perform the following process:

3a) send an A query for the same name

3b) if no IPv4 address returned, behave as a “regular” DNS caching server and return “no such name”

3c) if an IPv4 address is returned, create a *synthetic* IPv6 address using the well-known prefix (64:ff9b::/96, as defined in RFC 6052), embedding the returned IPv4 address in the low 32 bits of the IPv6 address, and return it to the client as the AAAA query answer

Going back to the example, make “Client-A” the client, and the IPv4-only node “Server-A” the server.  Following the DNS64 steps above, the recursive AAAA query would fail, and the DNS64-initiated A query would return 192.0.2.205.  DNS64 would create the synthetic IPv6 address 64:ff9b::192.0.2.205 (same as 64:ff9b::c000:2cd), and return that to Client-A.
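To make step 3c concrete, here is a minimal sketch of the address synthesis using Python’s standard library. The function name is illustrative, and the sketch assumes only the well-known prefix is in use; it is not a description of any particular DNS64 implementation (which may also be configured with a network-specific prefix).

```python
import ipaddress

WELL_KNOWN_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")  # RFC 6052

def synthesize_aaaa(ipv4_literal: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the well-known /96 prefix."""
    v4 = ipaddress.IPv4Address(ipv4_literal)
    return ipaddress.IPv6Address(int(WELL_KNOWN_PREFIX.network_address) + int(v4))

# Server-A's A record from the example returns 192.0.2.205
print(synthesize_aaaa("192.0.2.205"))  # -> 64:ff9b::c000:2cd
```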

At this point DNS64 has completed.  On to NAT64.

Client-A will launch the IPv6 session, convinced that Server-A is also an IPv6-capable node.  The IPv6 packets will be sourced by Client-A from its IPv6 address and destined for the synthetic IPv6 address manufactured by DNS64.  These packets will be routed within the IPv6 enclave (the network to the left in the diagram) to the NAT64 inside-facing interface (following the default route in this case).  The NAT64 function knows that if an IPv6 packet, sourced internally and arriving on the inside interface, has a destination within the prefix 64:ff9b::/96, it is a packet that will be translated.

The NAT64 platform will translate the packet into IPv4, making the source address the IPv4-Internet-facing address (192.0.2.15) and making the destination the 32-bit value “plucked” from the low 32-bits of the IPv6 destination address (the synthetic address) – 192.0.2.205.  The NAT64 device will also make a state entry for this session, using L4 port multiplexing, much like an IPv4 hide-NAT stateful device would use to track and manage sessions.
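Here is a hedged sketch of that inside-to-outside decision, again in Python and again purely illustrative: the state table, port allocator, and the 192.0.2.15 outside address follow the example above, not any vendor’s actual data structures.

```python
import ipaddress, itertools

WELL_KNOWN_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")
NAT64_OUTSIDE_V4 = "192.0.2.15"   # the NAT64 device's IPv4-Internet-facing address
state_table = {}                  # flow key -> allocated outside L4 port
port_alloc = itertools.count(1024)

def translate_outbound(v6_src, sport, v6_dst, dport):
    """Translate an inside IPv6 packet's addresses/ports to the outside IPv4 values."""
    dst = ipaddress.IPv6Address(v6_dst)
    if dst not in WELL_KNOWN_PREFIX:
        raise ValueError("destination is native IPv6 - route it, don't translate")
    v4_dst = ipaddress.IPv4Address(int(dst) & 0xFFFFFFFF)  # low 32 bits = embedded IPv4
    key = (v6_src, sport, str(v4_dst), dport)
    if key not in state_table:        # new session: allocate a shared outside port
        state_table[key] = next(port_alloc)
    return NAT64_OUTSIDE_V4, state_table[key], str(v4_dst), dport

print(translate_outbound("2001:db8:5:4::a", 54321, "64:ff9b::c000:2cd", 80))
# -> ('192.0.2.15', 1024, '192.0.2.205', 80)
```

Return traffic simply reverses the mapping via the state table, which is also why the IPv6-only side must initiate the session.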

In this manner, many internal IPv6-only clients can access any IPv4-based server with an IPv4 address in the global DNS, all with no explicit manual configuration of sources and destinations.

And that’s it for DNS64/NAT64.

A few last quick points:

1) DNS64/NAT64 does have downsides: for one, not all applications work well through the mechanism, and for another, any content that uses IPv4 addresses rather than Fully Qualified Domain Names (FQDNs) (like an embedded link to a graphic within a webpage) will not work – the translation depends on the DNS lookup.

2) Note that native IPv6 traffic is simply routed at the enclave perimeter.  This means that, for Client-A, sessions to IPv4-only Internet-based servers are translated, which is probably an “acceptable, but not great” solution, and sessions to IPv6-capable Internet-based services run as native IPv6 end-to-end, providing an excellent solution.

3) Now that World IPv6 Launch Day has come and gone, and as more content providers make their content reachable over IPv6, less and less traffic in the example scenario will use the DNS64/NAT64 translation service and more traffic will be carried natively.

That’s it for this blog entry.  In the next installment of the NAT444 and DNS64/NAT64 story we’ll talk briefly about relative strengths and weaknesses of the two mechanisms and provide some practical deployment considerations.

LSN/CGN/NAT44 vs. DNS64/NAT64, Part 1 of 3

There may be a little confusion about the roles of these two technologies.  Plus, I have heard a few provider architects say “we were thinking of deploying LSN, because we will quickly run short of routable IPv4 addresses, but now we think maybe DNS64/NAT64 would be better”.  Both solutions do conserve the routable IPv4 addresses provisioned to subscribers (let’s take the broadband case – specifically cable), but the two are otherwise very different in how they work and in their relative strengths and weaknesses.

I’m going to cover this topic over several posts.  For this post, let’s review basic elements of LSN.

LSN/CGN/NAT444

Large Scale NAT (LSN) involves Providers assigning non-routable IPv4 addresses to their subscribers (either the subscriber host – say Windows 7 – or more commonly the subscriber’s edge router or Home Gateway (HGW)).  The provider then implements, at the edge of their network, the LSN device, which performs IPv4 “Hide NAT” on the non-routable source address of subscriber packets.

As an example, let’s take the case of a subscriber with a Home Gateway (HGW), as shown below.  The graphic shows two subscribers – we will focus on the top subscriber.  Inside the subscriber home, the network is numbered using 192.168.1.0/24.  The subscriber’s HGW provider-facing interface is assigned 10.1.1.3.  The HGW performs IPv4 “Hide NAT” on outbound packets.  The packet is carried across the provider’s IPv4 infrastructure to the LSN device, whose subscriber-facing interface is assigned 10.5.5.5.  The LSN then also performs IPv4 “Hide NAT”, this time to a routable address, 192.0.2.15 (note that this address is taken from the “documentation” range for IPv4, and is not really routable, but in a real deployment a routable address would be used).
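To trace the two translation steps, here is a toy sketch using the example addresses. The destination 203.0.113.80 and the port numbers are made-up illustration values, and a real HGW and LSN of course keep full per-flow state, timers, and ALGs.

```python
def hide_nat(packet, outside_ip, new_sport):
    """One IPv4 hide-NAT step: rewrite the source address/port to the shared outside values."""
    return {**packet, "src": outside_ip, "sport": new_sport}

pkt = {"src": "192.168.1.10", "sport": 40000, "dst": "203.0.113.80", "dport": 80}
pkt = hide_nat(pkt, "10.1.1.3", 20000)    # first NAT44, at the subscriber's HGW
pkt = hide_nat(pkt, "192.0.2.15", 30000)  # second NAT44, at the provider's LSN
print(pkt)  # {'src': '192.0.2.15', 'sport': 30000, 'dst': '203.0.113.80', 'dport': 80}

# With ~100 subscribers sharing one routable address, each subscriber's budget is
# roughly 65535 // 100, i.e. about 655 TCP ports (and 655 UDP ports) at any one time.
print(65535 // 100)  # 655
```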

Therefore this solution is an *all IPv4* solution, and shares a single routable IPv4 address across multiple subscribers.  There is no simple rule for deciding how many subscribers can be supported through a single outside routable address, but for this example let’s say we’ve chosen 100 subscribers per routable address. Some key considerations for LSN include:

  1. Application failure or degradation, where some applications will not run properly, or at all, through the NAT444 mechanism (for example, although the list is longer and growing, some peer-to-peer file sharing solutions, some online gaming, video streaming, and other applications do not work across NAT444).  For a comprehensive document with supporting testing, read this excellent IETF draft authored by a number of cable industry IPv6 leaders at http://tools.ietf.org/html/draft-donley-nat444-impacts-04.
  2. L4 port exhaustion when supporting too many subscribers behind a single routable address, where  port-hungry applications (Google Maps is a common example of a single function that opens many L4 connections immediately) use up the shared 64K TCP and 64K UDP ports available for a single routable address
  3. Blacklisting, where one subscriber performs some action that results in the shared IPv4 address being blacklisted, so that all 99 other subscribers lose the use of the blacklisted service in addition to the perpetrator
  4. Some providers are running short of non-routable IPv4 addresses for use within the provider network, so this is also a constraint
  5. Any provisioning, logging, billing, lawful intercept, or monitoring systems within the provider environment that assume a “1 routable address = 1 subscriber” scheme will be impacted and will have to be modified to understand the NAT444 solution, and where the amount of stored history on subscriber activity will likely be much larger
  6. Subscriber service provisioning limitations, where a subscriber might want to host a website using HTTP (TCP port 80), but only one subscriber behind each shared routable address can use that port
  7. LSN/NAT444 requires no investment at all in IPv6 – in fact it has nothing to do with IPv6

So then, to wrap up this entry, LSN/NAT444 is an IPv4-only solution that allows providers to ration their shrinking pool of routable IPv4 addresses.  This comes with some functionality limitations for subscribers, it still requires the provider to implement new platforms and new functionality, and it will almost certainly impact some provider back-end support systems.  No investment in IPv6 is required.

The most likely outcome of LSN implementation by providers is for deployment only in the lowest-cost service tier, where these subscribers can only run “simple applications”.  Subscribers that want better service would pay more, and still get a single routable IPv4 address allocation.

By the way, Jeff Doyle wrote an excellent article that takes a deeper look at NAT444/LSN/CGN which I highly recommend (http://www.networkworld.com/community/node/45776).

In the next blog entry we will examine how DNS64/NAT64 works and some key considerations with that technology.

Nephos6 Helps Define the IPv6 Forum Security Certification Standards

Excerpt from the IPv6 Forum Press Release:

The IPv6 Forum Launches the IPv6 Education Security Certification Logo Program, Accelerating Adoption and Integration of IPv6 in the Education Curriculum Worldwide


PENANG, MALAYSIA – SAN JOSE, USA – LUXEMBOURG,  May 25, 2012 – The IPv6 Forum Education Logo Program Committee releases a new program: The IPv6 Education Security Certification Logo Program. This program will certify Security Courses, Trainers and Engineers with the Gold Logo level.

In order to be certified, the candidates must cover all mandatory topics outlined in section 3.5 in the requirements specifications document (attached). Only the Gold level of certification is provided by the IPv6 Forum Certified Security program.

“The IPv6 security & privacy are going to be implemented again as an after-thought similar to IPv4 simply due to lack of in-depth knowledge in this area. The IPv6 Forum Security Certification Logo program formalizes a concrete curriculum for everyone to benefit from.” states Latif Ladid, President, IPv6 Forum, Senior Researcher at University of Luxembourg, Security and Trust (SnT) Center.

“Security is top of mind for any decision maker facing the two major inflexion points ahead of IT organizations worldwide: IPv6 transition and Cloud adoption. The IPv6 Forum leverages its global network of IPv6 SMEs to define the educational and expertise standards that will provide the industry with the proven talent needed to successfully tackle the security challenges and opportunities presented by the IPv6 transition.” states Ciprian Popoviciu, CEO, Nephos6, Certified IPv6 Forum Trainer (Gold).

To obtain the IPv6 Security Certification Logo, please visit the following web site and apply by filling out the application form: http://www.ipv6forum.com/ipv6_enabled/ipv6_education.php


IPv6 and “Teardrop” Attacks

Way back in 1997, CERT issued an advisory about “Teardrop” attacks (http://www.cert.org/advisories/CA-1997-28.html).  These are attacks where overlapping IP fragments are sent by a malicious actor in an attempt to either A) gain access to a protected system by evading IDS, firewall, and/or platform-based defenses, or B) disrupt the capabilities of the target environment – ideally in several parts of the network but especially the target node (the destination IP interface) itself.

In the 1997 CERT advisory, I am sure the concern was IPv4 systems.  In brief, CERT’s advice was to contact vendors for a solution, and that solution was almost always to upgrade the platform OS version – it seemed the vendors were able to patch implementations.  Presumably, this was done by discarding fragments associated with a Teardrop attack (any set of fragments where any two overlap).  Juniper, for example, added a user configurable switch on platforms to explicitly drop overlapping fragments.

A node sending overlapping fragments makes no sense at all, from a protocol standpoint.  Yet my understanding is that some old IP stacks did sometimes send overlapping fragments, not as an attack but as a result of a sloppy or buggy IP implementation.  Thus, in an effort to promote interoperability with these stacks, other vendors would accept overlapping fragments.  This philosophy goes all the way back to Jon Postel, who said “be conservative in what you do, be liberal in what you accept from others”.

On the downside, this flexibility provided an exploitable vulnerability to malicious actors. Mostly, the attack was more directed at OS platform bugs than the IP stack.  Modern platforms for the most part have implementations hardened against Teardrop-style attacks on IPv4 or IPv6.

And while it is true that the IETF IPv6 specification (RFC 2460) does not explicitly prohibit overlapping fragments, most vendors implemented protections against such IPv6 fragments as a matter of best practice.  Later (December 2009), RFC 5722 made it out-of-spec for source nodes to send overlapping fragments, and required the reassembling node (the ultimate destination) to silently discard all fragments associated with a fragmented packet when any two fragments overlap.  Fragments are silently discarded, as opposed to having the reassembling node send ICMPv6 error messages, to guard against reflection attacks (where the attacker uses a victim IPv6 address as the source of the offending packets), and because the sending node is already acting “fishy”, so it is best not to burden any system (including the originator) with processing ICMPv6 error messages.
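As a minimal sketch of that RFC 5722 rule, here is an illustrative overlap check; the (offset, length) representation in octets is a simplification of what a real reassembly buffer tracks.

```python
def must_discard(fragments):
    """RFC 5722: if any two fragments of the same packet overlap, drop them all.

    `fragments` is a list of (offset, length) pairs in octets (in the real
    Fragment header the offset is expressed in 8-octet units).
    """
    spans = sorted(fragments)                      # order by starting offset
    for (off_a, len_a), (off_b, _) in zip(spans, spans[1:]):
        if off_a + len_a > off_b:                  # previous fragment runs into the next
            return True
    return False

print(must_discard([(0, 1232), (1232, 1232), (2464, 500)]))  # False: reassemble normally
print(must_discard([(0, 1232), (1000, 1232)]))               # True: Teardrop-style overlap
```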

There is also the issue of fragmentation attacks against “atomic fragments”.  These are IPv6 packets that are not actually fragmented, but carry the IPv6 Fragmentation Extension Header (with “Fragment Offset” set to zero and the “M” bit not set).  The reason for these to be allowed in the IPv6 specification is to support certain IPv6-to-IPv4 translation solutions.  To protect the capability from exploitation, there is an IETF draft (“Processing of IPv6 “Atomic” Fragments”) discussing ways to defend against fragmentation-centric attacks on this kind of packet.

With the improvements made to stack implementations over time (in the case of IPv4), and the fact that most IPv6 implementations were always “tighter”, the Teardrop attack has become a spent force on the Internet – really only affecting older platforms.  Thanks to RFC 5722, there is now official IETF guidance on how IPv6 stacks should be implemented in terms of handling overlapping fragments, and giving platform vendors the green light to discard these corrupted fragments aggressively.

It is a classic example of the “arms race” nature of IP security – whether for IPv4 or IPv6.  The bad guys discover a new vulnerability and an exploit, and the good guys close it off and harden everything else related to the vulnerability they can think of.   A good reminder that IPv6 security is only “better” than IPv4 security in a few ways, and secure environments really depend on rigorous deployment of security best practices and constant vigilance by an informed and talented security team.

What’s New with the Flow Label?

The IPv6 Flow Label specification was updated by the IETF as of November 2011, and the update includes changes and refinements that should make the Flow Label more useful in the near term.

IT networking specialists have considered the Flow Label “underspecified” since the first IPv6 specifications were written in the mid 1990s.  As a result, this 20-bit field carried in the IPv6 base header has been a bit of a waste, and has been little used.

In brief, three (3) IETF RFCs related to the Flow Label were recently published.  Below is a brief digest of each:

RFC 6436 (Informational) is titled “Rationale for Update to the IPv6 Flow Label Specification”, and provides just that.  It provides a description of perceived shortcomings of the previous specification (RFC 3697) and makes recommendations on changes.  The most important bits of information in this RFC are:

  • RFC 3697 requires that only the source node set a Flow Label, and that the Flow Label be delivered intact to the destination node.  This means the Flow Label cannot be set by intermediate nodes, unlike the DSCP sub-field in the IPv6 Traffic Class field.
  • Because the Flow Label is not covered by any checksum, and it is not covered by IPsec protections (not even the Authentication Header), the field cannot really be trusted to not be changed accidentally or overtly along the path from source to destination.
  • The new RFC makes recommendations on updates to the Flow Label, which are specified more authoritatively in RFC 6437.

RFC 6437 (Proposed Standard) is titled “IPv6 Flow Label Specification”, and replaces the previous standard (RFC 3697).  Important elements of the new standard include:

  • Noting that the default implementation of the Flow Label is “stateless”, but that future other uses based on a signaling mechanism are not precluded.
  • Noting that the envisioned use case for stateless Flow Labels involves load-balancing traffic, either in the case of Equal Cost MultiPath (ECMP) or Link Aggregation (LAG) implementations.
  • Encourages source nodes to set Flow Labels, with a unique label per flow
  • Restates that Flow Labels should be well-distributed (random) and not guessable
  • IMPORTANT – removes the restriction that only the source node may set a Flow Label, making it permissible for any device to set a non-zero Flow Label in an IPv6 header where the Flow Label was previously zero (so, an intermediate node may set a Flow Label, but not re-set it)
  • UPSHOT HERE – this new permissiveness means that a router – perhaps the distribution-layer router – can set a Flow Label for flows that do not have one, making the Flow Label useful to other routers, further downstream, that may be performing ECMP or LAG.  The router can take action on the Flow Label where the host did not.  This is a little like an ingress router doing QoS classification and marking for the benefit of downstream routers (see the sketch after this list).
  • The new specification also makes it permissible, on occasion, in high-security environments, for an intermediate node to set a non-zero Flow Label to zero, in an effort to eliminate the possibility of a covert channel being implemented in Flow Label values.
  • The RFC states numerous other possible security issues related to the Flow Label
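Here is the sketch promised above: one illustrative (not RFC-mandated) way an ingress device might derive a well-distributed, hard-to-guess 20-bit label from the 5-tuple plus a local secret, setting it only where the label is still zero.

```python
import hashlib, os

LOCAL_SECRET = os.urandom(16)   # per-device secret, so labels are not guessable

def derive_flow_label(src, dst, proto, sport, dport):
    """Hash the 5-tuple into a stable, well-distributed 20-bit Flow Label."""
    key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    digest = hashlib.sha256(LOCAL_SECRET + key).digest()
    return int.from_bytes(digest[:3], "big") & 0xFFFFF   # keep 20 bits

def maybe_set_label(current_label, src, dst, proto, sport, dport):
    """Per RFC 6437, an intermediate node may set a label only where it is still zero."""
    if current_label != 0:
        return current_label          # never re-set a non-zero label
    return derive_flow_label(src, dst, proto, sport, dport)

print(hex(maybe_set_label(0, "2001:db8::1", "2001:db8::2", 6, 52000, 443)))
```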

RFC 6438 (Proposed Standard) is titled “Using the IPv6 Flow Label for ECMP and LAG in Tunnels”.  It describes a way by which the Flow Label can be used as the title suggests.  The scenario is that a tunnel has been built where the tunneled traffic passes through a set of devices implementing LAG.  In brief, it works like this:

  • Normally, tunnel traffic frustrates the LAG load-balancing algorithm.  Think about it.  If the traffic were not in a tunnel, the individual flows would be apparent – the traffic would be a collection of flows between varying sources and destinations, and for different protocols and L4 services (the “5-tuple” would be available for load-balancing).  Because the traffic is tunneled, all traffic yields the same 5-tuple at the downstream LAG: there is a single source (one end of the tunnel), a single destination (the other end of the tunnel), and one protocol (perhaps IP-in-IP, or perhaps GRE).  In other words, with the tunnel, all traffic looks like a single “massive flow” to the LAG.
  • For a solution, the RFC describes a mechanism whereby the ingress Tunnel End-Points (TEP) examine the tunnel traffic as it enters the tunnel, and then write a Flow Label into the *outer* IPv6 base header based on the 5-tuple of the *inside* packets.  In short, the TEP and the Flow Label turn the “single massive flow” back into individual flows.  The LAG then load-balances on the IPv6 3-tuple (source, destination, Flow Label), and the balancing can be efficient.
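And a matching sketch of the downstream side: a LAG (or ECMP) device hashing the outer 3-tuple to pick an outgoing member link. The hash function and the addresses are illustrative assumptions; the point is simply that the distinct Flow Labels written by the TEP give the hash something to vary on, so the tunnel’s traffic is no longer forced onto a single member.

```python
import hashlib

def lag_member(src, dst, flow_label, n_links):
    """Pick a LAG member link by hashing the IPv6 3-tuple (source, destination, Flow Label)."""
    key = f"{src}|{dst}|{flow_label:05x}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_links

# Same tunnel endpoints every time, but the TEP has copied distinct Flow Labels
# from the inner 5-tuples, so different flows can hash to different members of
# a 4-link bundle.
for label in (0x1a2b3, 0x0c0de, 0xfeed5):
    print(lag_member("2001:db8:ffff::1", "2001:db8:ffff::2", label, 4))
```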

That’s the quick wrap-up for all three (3) related RFCs.  IPv6 refinement continues apace, and the protocol continues to evolve gracefully to match the needs of a global scope network.