Google IPv6 Implementors Conference: State of the IPv6 Internet & Transition Mechanisms and Tools

Uploaded by GoogleTechTalks on 16.06.2010

VAN DE VELDE: So this section is broadly about the state of the IPv6 Internet, brokenness and things like that. Actually, I'm wearing two hats, but the main hat I'm wearing today is that I'm also the chair of the Belgian IPv6 Council. So, just like we saw yesterday in one of those flash talks, in Belgium the IPv6 forum has been reinvented again and we're trying to get some things done, hopefully. Hopefully in about a year or two's time it won't actually be needed anymore. Actually, I even have doubts right now; it could be too late, and the use could be very minimal, so I'm still thinking about those things. So what I'm going to speak
about: I'm going to try to set the scene for the next three talks which will come in the rest of this particular section. The next things are going to be measurements for v6 in different environments; we're going to speak about the brokenness of IPv6; and I'm also going to speak about whitelisting. Some of these things are actually caused by non-managed tunnels. And the reflection I want to get to, the key message of this slide, is: are these tunnels actually the future of the Internet? Can we actually live on the Internet with the use of non-managed tunnels? So, by the way, just to wake everybody up again, also because we have like a 95% male population here, I put the face of Angelina Jolie here. So hopefully it will draw some attention to the happy eyeballs slides, huh? So, it's content, you know. It's content. Now, some of these things
also tend to be a little bit controversial. So what I've done is I've created this particular slide here, which will help people jump up and down in their seats when they see everything here. I have like this switch: highly controversial, medium controversial, whatever that means, and not really controversial. I have very few green points here, so I hope I will really satisfy your 15 minutes. So how are these things carved up? First, before I start speaking about some of the artifacts of non-managed tunnels, I would like to give you an understanding of what a managed tunnel is. Once you understand that, it's much easier to explain: a managed tunnel is that, and everything else I consider a non-managed tunnel. The next thing is also that some people
actually are happy with what these non-managed tunnel things do. So I'm going to give you some reflections from different angles: from the user case, the enterprise case, and from a service provider case. Then a little bit of philosophy; this is probably one of my red items: what is the goal of the Internet and how do non-managed tunnels fit into it? Followed by some of the negative properties of non-managed tunnels. And what you will see is that I will try to give a balanced overview, which means I will tell some good things about non-managed tunnels, and, since I have the unfortunate luck that Lorenzo is sitting very close by with a microphone, I will also tell some negative things about non-managed tunnels. And we're going to conclude; basically, I will let you draw the conclusion from some of the content I've been telling you about. So basically,
what is a managed tunnel? Now, a managed tunnel is what I consider a thing for which you can contact somebody. Somebody has set it up for you; you have an agreed service level agreement; it works perfectly. It works just as your current native IPv4 connectivity works, and it should actually even work almost as well as your native IPv6 connectivity, if you had it. Of course, if you had native IPv6, you probably wouldn't need these kinds of tunnels. It also means that you have a reliable set of security, performance and integrity parameters attached to the tunnel itself, and that the administrative realm is something you can actually contact: somebody you trust, somebody you pay for delivering this particular kind of service. So basically, what I'm trying to say is that anything which doesn't fall into these categories I count as a non-managed tunnel. So, some tunnel experiences. When I speak to people and I tell them, "Yes, 6to4 is maybe not the best thing to do," the answer I often get back is, "Yeah, but it works really, really well for me, no problems. It's my only way to get to the Internet." That may be true for, you know, 10 or 20 or 100 users or so, technologists, but imagine if you have one million of them; there could be a potential issue there. Another comment I hear is, "Oh, I didn't know I was using IPv6." That's funny. Good that you tell me, but I don't really care so much. Then also the enterprise view
is another thing. So when you're looking at 6to4, then often what people say is, "Yes, 6to4 gives me very asymmetric traffic paths and perverse traffic directions." Now, that is actually true in some cases, but not in all of them, because sometimes 6to4 can actually follow the IPv4 traffic paths within the network, if you go from one 6to4 site to another 6to4 site. Only certain circumstances actually have the potential for asymmetric traffic paths. And now for the service provider: what do I often hear them say
about these non-managed tunnels? Some of the good service providers actually care about their customers, and they want to give them the best possible Internet experience. So what they do is provide 6to4 relays in the network, and they either make them available for everybody over the whole world, or they restrict them to their own customer base. Now, one of the elements there, of course, is that this costs money, and the question is how much money a service provider is willing to invest to help non-customers, basically. So that's always a question they're thinking about. And then the other thing about these non-managed tunnels, what I hear from content providers, is, "Yeah, whatever. But what I do see on the Internet is that with these tunnels, I see a measurable difference in my round trip time and in the quality I get from these guys. And as a result, I cannot really enable all my v6 content right now just yet." And as a result, they will have to use different measures, different techniques, to enable v6 for the community in a different kind of way. More on that later.
So, the way I see how these things fit into the Internet as such: the Internet, as it grows, should be a platform which can grow beyond what it is right now. It should be a services platform, and it should provide a simple control plane for end-to-end connectivity. And the idea should also be that every person and every machine should be able to connect to the Internet. Now, the question here is: do non-managed tunnels actually follow this fundamental? That's a big question. And especially, going further, the one we are probably going to break the most is that v6 Internet connectivity should be as good as or better than the perceived quality of the v4 Internet. And right now, that is not the case, and I believe we will see some measurements later on, but that actually is not the case. So now, going further on the question
of why people initially invented non-managed tunnels: the main thing is that in the beginning there were early adopters. Everybody was connecting over IPv4; some people wanted to have IPv6 connectivity, and these tunnels fit perfectly. At the same time, they also provided, for example, a de-coupling between the infrastructure readiness for IPv6 and the application readiness for IPv6. So by using these tunnels, an application could potentially experience some v6 connectivity there. Then, of course, if you implement IPv6 in an infrastructure and you want to do it in controlled steps, these tunnels might also help a little. Now, going further to
the properties. One of the first things we often see with non-managed tunnels is that they use well-known IP addresses. An artifact of that is that it creates lots of potential asymmetric paths. The same thing could potentially happen with IPv4, but with non-managed tunnels in IPv6, the probability is actually much higher. So I put a little drawing in there; for example, you see on the bottom the laptop with the routers. It will probably select the well-known tunnel relay, the first router at the top there, and then the other one will take the router on the other side of the slide. Another thing is performance. There is a serious de-coupling between what the guy using the tunnel expects from it and the guy delivering the elements to create the tunnel; there is no direct correlation between them. And one of the questions you can ask yourself is: do you actually want to provide a managed service over the Internet on top of unmanaged infrastructure elements? That's a question you have to think about. Another element is: if it suddenly stops working correctly, how does the end user complain to the guy providing the relay router that it is not working correctly? How does the end user even know who owns that particular relay router? I've just been flagged that I have one more minute, so...
>> [INDISTINCT]
>> VAN DE VELDE: Okay. So just two more slides, and this is a great one actually. One of the things also is the realm of control. What is very important here is that with a non-managed tunnel, you use third-party involvement to create your IPv6 connectivity. The same thing is actually also happening, you know, in the [INDISTINCT] of course, if you send a packet from one end of the Internet to the other, but that is just forwarding traffic. In this case, you actually use third-party middleware, which is a completely different equation, yeah? A whole new set of parameters and variables which you have to take into account. That results in things like sub-optimal flows if your middleware is not working perfectly correctly, which will increase your round trip time and packet loss. If you have a low-performance relay router, the whole thing could be screwed up, because suddenly with this tunnel you get a much worse experience than you are seeing with IPv4. And then the other question is: who is going to be responsible for this degraded service you are getting from the Internet? Then a few words about security: as we have also seen earlier, doing things
in tunnels creates problems. It creates issues with firewalls; it creates issues for deep packet inspection. There is a particular draft about tunnel security concerns, and those concerns are valid for both managed and non-managed tunnels. But the importance of the security elements is higher for non-managed tunnels, because you don't have control over all the different elements playing a part in generating these tunnels as such. And then of course, 6to4 is a special case, and it also has its own security considerations section. So now, going to the conclusion. What I've seen is
that, yeah, early adopters had been working just fine with non-managed tunnels in the beginning, yeah? If you speak to Tony Hain, he's super happy with his 6to4 connection. But imagine if there were, like, one million Tony Hains. That's going to be an issue, for him and for many other things, and also for the Internet in that case, because they will all go to the relays, and the relays will crash and burn. Which may be a good thing for Cisco, because you will buy more of them if you want to reinvest, but that's maybe not a good way to build up the Internet infrastructure. And so if you go to mainstream usage then, you
know, the things I can see as a result of the unmanaged nature of these tunnels are blackholing, perverse traffic paths, nobody really wanting to invest in relays, hard to manage, a difficult security model. And the consequence I see directly out of this is that content providers cannot just switch on IPv6 for everybody right now just yet. You just saw it earlier with the previous talk: Google is also having some issues there with switching it on for everybody. And at the same time, one of the consequences is that we are now starting to discuss all these things, all the complexity of whitelisting, you know, good citizens on the Internet who actually have proper IPv6 connectivity. So if it were up to me, I would just deprecate all these non-managed tunnel things going forward, but that's maybe too high an ambition. But, you know, we can only try, so that's what actually...
>> [INDISTINCT]
>> VAN DE VELDE: Native connectivity, you know? Just native.
>> [INDISTINCT] >> VAN DE VELDE: Yeah. So, any questions?
>> [INDISTINCT]. Two points I would like to make. The first one, about 6to4: relay tunneling is asymmetric by nature, and even if a service provider wants to offer a relay, it can only offer a relay on the outbound path, from the 6to4 customers to the native Internet. The real problem is when the packet comes back.
>> VAN DE VELDE: Yes.
>> Because it has to go to somebody who is going to advertise 2002::/16, essentially acting as an open relay for the entire Internet, and there's absolutely no way to restrict it to your own customers. So we essentially rely on somebody on the Internet doing free transit, and you have no way of knowing who is going to be the one that is selected by the packet unless we don't [INDISTINCT]. And that's a really, really serious issue. So this stuff is fine for early adopters. I'm not sure if this is a question of whether we need to deprecate it or not. But it will create a problem when home gateways turn this thing on by default and the customer is not aware of it.
>> VAN DE VELDE: Yes. >> And maybe a recommendation should be "This
is fine if you want to use it but don't make this on by default."
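For context on the 2002::/16 discussion: a 6to4 prefix embeds the site's public IPv4 address directly after the 2002::/16 bits (RFC 3056), which is why return traffic can come back through any relay that advertises that prefix. A minimal sketch of the derivation (the helper name is mine, not from the talk):

```python
import ipaddress

def sixtofour_prefix(public_ipv4: str) -> ipaddress.IPv6Network:
    """Derive a site's 6to4 /48 prefix from its public IPv4 address:
    the 16 bits of 2002::/16 followed by the 32-bit IPv4 address."""
    v4 = int(ipaddress.IPv4Address(public_ipv4))
    # Place 0x2002 in the top 16 bits and the IPv4 address in the next 32.
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

print(sixtofour_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

Any host can compute this mapping, which is exactly why a 6to4 relay advertising 2002::/16 ends up carrying return traffic for arbitrary 6to4 sites.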
>> VAN DE VELDE: Yes. That is true. And I think also, next to that, to regain some degree of control, you could potentially announce in the Internet something like 2002: followed by the prefix of that particular service provider, to limit it a little bit there. But that's maybe not really allowed, you know; there are some procedures.
>> It's against what is said in the RFC, for a really good reason, because if you start down that path, you import the entire v4 routing table into v6.
>> VAN DE VELDE: Yup. But it's a way out.
>> [INDISTINCT]
>> There are possible things content providers can do too. For instance, we have for a long time investigated trying to encapsulate 6to4 responses directly to users. But that sort of [INDISTINCT] to--but, you know, we're not totally in control of our return path, so yes, it's quite problematic.
>> VAN DE VELDE: That's it. On time. Thank you.
>> KISTELEKI: I'm Robert Kisteleki from the RIPE NCC. I lead the science group there, and I'm actually just going to present the work of my colleague Emile Aben. For those of you who don't know, the RIPE NCC is the Regional Internet Registry for Europe and the surrounding regions, so just like [INDISTINCT] around here. First of all, we would like to have more insight into the v6 deployment and how many clients are really using v6. Basically, most of you are interested in how this goes, but that's especially true for an RIR, because we are in the numbers business, so it's really important for us. It is also
true that we have heard a couple of different numbers about how much v6 is really deployed out there. For example, if we look at our statistics, roughly 25% of our members actually have v6 allocations. But if you look at the routing table, you'll see that only roughly 6.1% or so of the ASes actually announce any kind of v6 prefix. We also hear different numbers, well, today between 0.25% and 2%, of web clients actually connecting over v6 to different services. So that made us wonder: where does the difference come from between the 0.25% and the 6% and 2% or 1% or whatever that number is today? We devised a method to try to make a distinction between the clients themselves and the infrastructure they are using, and we were hoping that that would explain some of the details.
So how does that work? This is the good old methodology; everyone knows it. The end users connect to some participating website and fetch some kind of JavaScript or embedded code there that redirects them to some measurement network. In some cases, that's the participating website itself, but it could be a different network. That's all fine and well. Most of the time, what happens is some background image or object is fetched, either over v4 or v6 or dual stack, so that you can check which one makes it there and which one doesn't. The other twist that we tried to put in here is to measure the provider infrastructure, and one way of doing that is to observe the DNS queries that the users themselves are making while doing the queries to the measurement network. So what we did here is we built a really small DNS server as well, behind this measurement network, and we force the clients to do specially named object fetches; those names include a unique domain name for each and every request. Also, as you can see, the URLs contain some unique IDs, so that we can distinguish between different clients. So we can try different combinations of forcing the user to go through v4, v6 or dual stack HTTP, and providing different responses on DNS over v4, v6 or dual stack. So if you draw up a matrix of what is possible to do
here, you have this matrix of nine cells. If you really want to go for the full experience, you can do all of them, but most of the time it's just not worth it; it's just too much. These are actually the four measurements that we do. The easy one is v4-v4, and we only take measurements where the v4-v4 actually made it through.
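The uniquely named object fetches just described can be sketched as follows; the naming scheme and domain here are hypothetical, not RIPE NCC's actual one:

```python
import uuid

def measurement_url(base_domain: str, http_transport: str, dns_transport: str) -> str:
    """Build an object URL with a unique hostname, so the client's resolver
    must send a fresh query that the measurement DNS server can attribute
    to exactly one client and one cell of the 3x3 test matrix."""
    uid = uuid.uuid4().hex
    # e.g. http_transport and dns_transport in {"v4", "v6", "dual"}.
    host = f"{uid}.{http_transport}-http.{dns_transport}-dns.{base_domain}"
    return f"http://{host}/1x1.gif?client={uid}"

print(measurement_url("example.net", "v4", "dual"))
```

Because the hostname is unique per request, no resolver can answer from cache; every fetch forces a query to the measurement DNS server, which reveals whether the client's resolver infrastructure can use v6 independently of the client itself.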
Then, for example, we only provide DNS responses over v6 but we require the client to connect to us over v4, and so on and so forth. And finally, on the lower right, we do the dual stack-dual stack combination. That actually covers most of the missing parts, so there's not much point in doing the remaining ones. Okay. What we did was we started to measure
at that, but ever since then we have expanded to other sites as well. We know that there is an operator skew there; this is a technical audience, somewhat. And we also know that there is a RIPE-region skew, which is kind of intentional, so we don't really want to expand to the whole world there. We also know that not all of the clients actually use their own default DNS resolver. That's fine; some of them are using OpenDNS, some of them are using Google. So that's okay. We also know that not all of the clients have JavaScript turned on. Now, you can say that there is some relation between the clients that turn off JavaScript and how IPv6 capable they are, because most of the time those are the techy people, but we can live with that assumption. As long as we are aware that it might be there, it's okay. So, results. Well, you cannot really
see the lines there, but the good news here is that you no longer have to use a microscope; it's perfectly enough to use a looking glass. So if we use the looking glass, yeah, that's good news, I believe. Okay. So this is what you see, again. This actually excludes our own internal infrastructure, so we don't skew the measurements there, because then v6 would be much higher. What you can see here: the different colored lines mean different things, of course, as you can see on the right-hand side. I would like to draw your attention to the line for clients with an IPv6 capable resolver. That's the one that shows, more or less, the infrastructure component that the clients are using. The blue line is v6 capability: when the clients can actually connect over v6. And the red one is preference: given the choice, they actually use v6. You can see a bump roughly in early May. That's because of the RIPE meeting that we had in Prague, where we provided native v6 connectivity.
So, yeah, there you go. Now, if we filter out the non-managed tunnels, thank you for the term; we sometimes internally call it auto-tunneling, but it doesn't matter what you call it. The picture changes, just slightly. If I go back and forth, you can see that they are almost exactly the same. The interesting thing here, and if you download the presentation you can check it out yourself, is that the line for clients with a v6 capable resolver changes as well. Strange, because that means that the DNS resolvers themselves sometimes, not very often, very rarely, but sometimes, do auto-tunneling as well. Strange. For comparison, this is from web log analysis on www, again on a longer time scale; you can see the peaks. Those are RIPE meetings as well. And, yeah, these are roughly the same numbers, but one interesting thing to see here is the change of behavior roughly in March this year. And that is probably because of [INDISTINCT] changing behavior in 10.5 or so. Ever since then, they don't prefer 6to4 anymore. Okay, a different view: the weekend effect.
You can actually see that on the weekends, the client IPv6 capability goes up. You might expect it to go down, because in the office you might have more v6 connectivity, but it turns out that, probably due to auto-tunneling at home, more people have the capability to actually use v6. The other numbers don't really change. I'm not really going to go through this, but some people will love it, especially Slovenians and Estonians; they are really, really high above the average in terms of v6 usage. Some pockets of native v6: we picked some places where we can see much more native v6 than anywhere else and, of course, Slovenia is on top. That's good. Here is a different view, and this
might be interesting. We have seen roughly 12,000 AS numbers that had resolvers in them, and almost 14,000 that had web clients in them. And you can see the difference: we still have relatively more ASes with v6 capable resolvers than with v6 capable clients. And if I also add the two numbers from before, that roughly 1% of the clients actually use v6 and roughly 25% of the ISPs have v6 allocations, you can make this hierarchy: 25% have v6 allocations, 4.9% have actual v6 resolvers, roughly 4% have v6 capable web clients, and roughly 1% actually use it. Random facts that we have discovered: GoogleBot actually does JavaScript. So that
was surprising at first sight, but then we realized, you know, that actually makes sense. In some of the measurements, we see that the client AS is not the same as the resolver AS. So we take the IP address of the resolver, we take the IP address of the client, and we compare whether they're in the same AS or not. This is due to Google DNS and OpenDNS most of the time. We tried to draw up interesting graphs about this, where we had blobs of ASes and arrows between them if they use another AS for resolving stuff. And it's really, really cool to see that there is a huge crowd, and in the middle there's Google, and people are just pointing at it, because many, many, many of the ASes are actually using Google. And it's also true that in many cases, we do see that the client's v4 resolver AS is not the same as its v6 resolver AS, which is also kind of strange. But why? According to our understanding, that is mostly because of the tunnel broker thing: on v6 you end up somewhere completely different than on v4, and then it's fine. What's next? We want
to keep this running, especially because of the v4 run-out; we really, really want to see the changes, and that would be really fun. Also, something that's not mentioned here: we want to introduce more precise timings so that we can actually know if there are delay differences between v4 and v6. But many other people are measuring that already, so it's not really our focus. And finally, we know that our own and the participating satellite websites are really skewed to the techy crowd. So we really want to measure average users, and we started up a program, especially in Europe, where you can just join in: you only have to host this tiny bit of JavaScript that we will give you, and you, the site, get statistics about your v4 and v6 clients. And that's it. I hope I'm in time. Any questions? Questions? Okay. Thank you. Thank you.
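The client-AS versus resolver-AS comparison described in this talk can be sketched as follows; `lookup_asn` stands in for a real IP-to-origin-AS mapping (typically built from BGP table dumps), which the talk does not detail:

```python
def uses_third_party_resolver(client_ip: str, resolver_ip: str, lookup_asn) -> bool:
    """True when the web client and its DNS resolver sit in different ASes,
    which usually signals a public resolver such as Google DNS or OpenDNS."""
    return lookup_asn(client_ip) != lookup_asn(resolver_ip)

# Toy mapping standing in for a real IP-to-ASN database (illustrative ASNs).
asn_table = {"198.51.100.7": 64500, "203.0.113.9": 15169}
print(uses_third_party_resolver("198.51.100.7", "203.0.113.9", asn_table.get))  # True
```

The same comparison run separately on the v4 and v6 resolver addresses exposes the tunnel broker effect mentioned above, where a client's v6 queries emerge from a different AS than its v4 queries.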
>> COLITTI: All right. So, again, these slides were [INDISTINCT] last minute. So this is some of the data that we've just recently started collecting, literally over the last few weeks, about brokenness in the Internet. We have had this running for a long time, and there have been several challenges to it: some on the methodology, some on the sheer volume of data that we were able to collect previously. So this is more recent data than we've presented so far. But first of all, let's look at the problem. What is the problem? The problem is that, the way this is all supposed
to work, you know, was laid down by itojun in, I think, 1998. And it's basically: your dual-stack application makes a DNS request, you get all the addresses that you want to connect to, in order, and you try to connect to them in order, right? You take the first one that comes back. And the idea was, well, if you have a local failure, it's fast: if you don't have any connectivity, it will fail immediately, and you'll just go on to the next one that works. And that's the way applications are written, because it provides a sort of seamless upgrade path: when v6 becomes available, you just start to use it. To that end, getaddrinfo(), which is the function that most browsers, or applications, use to look up names or IP addresses, usually returns IPv6 first. There's RFC 3484 that says what it actually does, but typically, if you have both a v4 address and a v6 address, it will return the v6 address first. The application will try IPv6 and fall back to IPv4. So you basically try all the v6 addresses one by one, and if they all fail, you start trying the v4 addresses one by one. If they all fail, you give up. So, what really is the problem? How bad can this
get? So, there are various failure modes here, right? There's a host-local error: if your host attempts to connect to a v6 address and has no v6 address configured, it'll fail. The kernel will give you a destination unreachable error and the application will just say "okay", and that's essentially instant; it takes microseconds, or however fast the system call completes with ENETUNREACH, right. So that's no problem if the application does fall back the way [INDISTINCT] intended it. But some applications, notably Java, do not do this. They have the concept of an InetAddress, and a socket may only connect to one address. The canonical implementations of Java, though not the Android implementation, connect to only one address; and if it fails, then it fails and gives up. But most applications are fine, with the exception of Java, and I personally put this down to the fact that the Java APIs were designed so early on in the v6 transition that they didn't anticipate this. The good news is that you can fairly easily fix this in Java by making a connect-by-name that connects to all the addresses in sequence, although that's not compatible with other implementations. So, okay, but this case is usually
easy. You'll get a nice fast failure, you know it happened, and you can try again. You could also get a network error. For example, your router could be telling you: look, I'm your default router, but you can't get there from here, there's no way you can get there. And the canonical way to do this is to send ICMP unreachables back, right? A lot of implementations actually completely ignore unreachables. They just say, "Okay, unreachable; well, I'm going to try anyway, because somebody might be spoofing the unreachable," or whatever. So what we've seen happen in the past, with networks that have v6 addresses but no connectivity, is that some element inside the network will fake RST packets, and it'll say, "No, nothing here. Actually, the port is closed; try the next address." By and large this actually works, and it's a reasonably fast failure, in the sense that the time of the failure is only the packet getting to whatever element is spoofing your packets, and the packet coming back with the reset. I don't know; I think the people that run this network are in this room; I don't know where it's done. Presumably it's done pretty close. The important thing is it's not done at the server side, right; it's not very far away. I would expect it to be less than that. So it's reasonably fast. Compared to
the numbers you'll see on the next slides, it's, you know, golden. And then there's blackholing, right: the router could be advertising a default route and then proceeding to blackhole the packets, either locally because it's a bad implementation, or sending them into the void, or sending them to some relay that's not working; or the packets could be blackholed anywhere in the network, right, lost in the core, whatever. And then there are MTU holes, typically misconfigured firewalls dropping ICMP, but Maz has presented a whole slew of cases where this can happen; Maz's compilation is much more comprehensive than this. Anyway, MTU holes are particularly bad because TCP never recovers; and I say never recovers, but [INDISTINCT] corrects me, it does recover, but it takes a long time, because it has to wait for various timeouts.
So, anyway, what do OSes do in these cases? I did some testing; I actually managed to obtain a Windows laptop under duress and tried to see what it does, because, I mean, ultimately there are users that are using Windows, so we need to understand what happens. In most cases--and I was testing browsers, because our main motivation for making this fast is HTTP, right. With SMTP, if you get a 20-second delay on failure, well, maybe that's still okay; but if you're trying to wait 20 seconds for a Google search, then it's not okay. So: local failures are fast. For unreachables, the time really depends
on the OS. If you're on Windows it'll take you 20 seconds, and this is per AAAA record,
remember. So, for every record that you try, it will take you 20 seconds. Mac takes 4 seconds
and Linux just says, "Oh, unreachable. Let me try the next address," and it just, bang,
it goes there. Blackholing is similar, except Linux has a 3-minute connection timeout. When I blackholed my router by screwing up my v6 connectivity, I actually saw my Linux box faithfully trying to connect for 19 minutes before finally connecting over v4. It just sat there spinning. In hindsight, we could have improved that. But MTU holes, again: "TCP never recovers"--not true. I would have thought that it would recover on the order of seconds, but I don't have data on this, so that statement is incorrect. Even if failures
are fast--and thanks again to Miles for pointing this out to me--applications may have other limits. For example, IE 7 and above gives up completely: it tries five times, then says, "Well, you know, there's nobody here; move on. I don't want to waste my time." So it fails; it just gives up. That's not good, right? Now, our
current implementation may have up to six AAAA records. And this is for various details of our internal load balancing, and because we wanted to have exactly the same number of AAAA records as A records--we want everything to look the same. Well, it turns out that this is not a good idea if you're considering broken users. On a Mac that means 24 seconds, on Windows it means 2 minutes, and on Linux it either gets through or, again, takes 19 minutes. If you're on IE 7 and you're trying to connect
it will not work. It'll just fail. It'll take a long time and it'll just give up. It'll
say, "Page cannot be displayed." The nice thing is that if it tries again, it remembers
that those three addresses failed and it goes to the other three and then it connects again.
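As an aside, the failure times quoted in this section are just the per-connect timeout multiplied by the number of AAAA records tried in sequence. A quick sketch of that arithmetic--the per-connect timeouts are the figures quoted in this talk, not authoritative OS constants, and six records is the count implied by the 24-second and 2-minute totals:

```python
# Per-connect timeout (seconds) before a host gives up on one AAAA and
# tries the next -- the figures quoted in this talk, not OS documentation.
CONNECT_TIMEOUT_S = {"windows": 20, "mac": 4, "linux": 180}

def worst_case_delay(os_name: str, aaaa_records: int) -> int:
    """Seconds spent before falling back to IPv4 when every AAAA black-holes."""
    return CONNECT_TIMEOUT_S[os_name] * aaaa_records

print(worst_case_delay("mac", 6))      # 24 -> the "24 seconds" quoted here
print(worst_case_delay("windows", 6))  # 120 -> the "2 minutes" quoted here
```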
But you have to reload the page. And so, needless to say, this is broken. We can't accept that, right? So we are going to mitigate this damage by publishing one AAAA record--we haven't rolled out the code to do that quite yet, but, you know, we're going to do it. That's still 20 seconds, right? So, would you like to wait 20 seconds when you go to Google? Sure, with persistent HTTP connections the impact is only on the first connection; but if there are XMLHttpRequests or things like that, each one will count as a new connection--each map tile will take 20 seconds to load. Again, would you like that? No. Neither would we. Okay. So,
what's going on? We have some indications, some data points, some sample of things that are broken. So--and this is, you know, maybe just me seeing the 6to4 bogeyman all over the place--what I've seen happen, both seen myself and had firsthand reports of, is that hosts will turn on 6to4 and go through broken relays; at best you'll see a latency increase, or the relay might drop your packets, or might refuse to route your packets if you're coming from a native address. So even if you have native connectivity at the same time, the 6to4 relay will impact your connectivity and at best introduce only a latency increase. Routers from various vendors have been known to turn on 6to4 with private addresses--guess what, that won't work. It really won't. But some implementations do it anyway, and the host has no idea, right--well, I suppose it could know. There's stuff in the host that could be fixed as well. The host might prefer either the 6to4 router or the native router as its default--for example, if the 6to4 router sends RAs more frequently, or if it sends a higher priority. So we might be able to fix this by setting the RA priority to high on native routers, but that's the only hammer we've got, and there's nothing higher than high, I think. So we might want to think carefully about that. So the host may prefer the 6to4 router--that's
which router I send my packet to. And there's another thing the host may do: it might decide to use the 6to4 address as the source address for the query. It probably wouldn't prefer a 6to4 address over native, because the RFC explicitly says not to do that, but it might prefer a 6to4 address over an IPv4 address as a source address. So it might prefer to use 6to4 over IPv4, either if it's not using a properly RFC-compliant getaddrinfo, or if it's using private addresses where it would use public addresses [INDISTINCT]. And this is a known issue in RFC 3484. It's being fixed, but some of the implementations that are out there--notably the Mac implementation--still do this. Similar considerations
for Teredo: Teredo is an absolute nightmare for short-lived requests. There are high setup times; it might or might not work as it cycles through the various possibilities it can have. And most implementations don't do this--they know that Teredo is to be used only for v6-only destinations. And this is my favorite. So, please don't look at the MAC addresses
and try to find out which implementation it is, I didn't have time to blur it out in the
slides. But, anyway, we have this home gateway that's sending out a router advertisement with the prefix 0::/64. Now that is just beautiful to me. So it's sending out a null prefix, and the host is accepting it, saying, "This is my RA. This is my address." So it's forming a nice address here that's not even a v4-compatible address; it's a broken address. It's not a global unicast address, because it's not in 2000::/3; it's just broken. The host happily configures this address on its interface and tries to use it. And the router is saying, "No, you can't get there from here. I don't have a route." And the host says "SYN" and the router says "Nope"; "SYN," "Nope," "SYN"--and after four tries it says, "Oh, well, okay. Let me do neighbor unreachability detection to figure out if that router is actually still here." And then it sends a SYN to another address--you'll note that first it's trying to connect to the address ending in 93 and then to the one ending in 63. So this took 24 seconds, because, you know, each of these takes four seconds.
And so--yes, so these are the problems and later on we'll talk about how we can fix it.
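A host could have protected itself here with a simple sanity check before configuring an address from an RA: global unicast IPv6 space is currently allocated from 2000::/3, so anything formed from a 0::/64 prefix fails the test. A minimal sketch using Python's standard ipaddress module (the sample addresses are invented):

```python
import ipaddress

GLOBAL_UNICAST = ipaddress.ip_network("2000::/3")

def plausible_global_unicast(addr: str) -> bool:
    """True if addr is inside the currently allocated global unicast range.

    A host applying this check would refuse to autoconfigure an address
    derived from a bogus RA prefix such as 0::/64.
    """
    return ipaddress.ip_address(addr) in GLOBAL_UNICAST

print(plausible_global_unicast("2001:db8::1"))          # True
print(plausible_global_unicast("::aabb:ccff:fe00:93"))  # False - formed from 0::/64
```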
So, what do we do to measure them? First of all, we don't know how many there are. Robert talked about how this type of connectivity measurement is done. We've done it before; others have done it before. We basically ask the browser to connect over v4 and to a dual-stack website and see what happens. We made a few tweaks with respect to our initial implementation. We use long-lived websites, where people stay for longer periods of time, so we can actually check after 30 seconds or whatever: are you still here? For example, YouTube or Gmail. We use JavaScript to make multiple measurements in one session, so we can group these measurements for a single session--we know what a user was doing--and that also allows us to do multiple measurements, like adding MTU checks or checks for hosts with v6-only glue and so on. And we have the sentinel again, after a given amount of time, saying, "Are you still there?" This is useful because if, say, you ask the browser to connect to the dual-stack host first and the user disconnects before the v4 request would have come in, you can ignore that measurement instead of interpreting it as brokenness. We also use one-time hostnames--dynamically generated new hostnames--because then you can associate measurements: you can find out whether browsers asked for AAAAs and As, you can find out whether it took them 30 seconds to resolve the AAAA, whether they asked in sequence, and so on. And it prevents caching. We have about 10 million samples per day at the moment. It all depends on which frequency you want to run this experiment.
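The one-time-hostname trick described above is easy to sketch: every measurement resolves a globally unique name, so caches can never satisfy the lookup, and the authoritative DNS logs can later be joined against the web logs per session. The label layout and domain below are invented for illustration:

```python
import uuid

def one_time_hostname(session_id: str, probe: str,
                      domain: str = "v6test.example.com") -> str:
    """Build a unique, never-cacheable hostname for one measurement.

    Embedding the session id and probe type means the DNS logs can be
    joined against the web logs offline: did this client ask for the
    AAAA? How long after the A? Did the fetch ever arrive?
    """
    nonce = uuid.uuid4().hex  # globally unique label per measurement
    return f"{nonce}.{probe}.{session_id}.{domain}"

a = one_time_hostname("sess123", "dualstack")
b = one_time_hostname("sess123", "dualstack")
print(a != b)  # True - every measurement resolves a fresh name
```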
Only web requests--and our analysis is still in the initial phase. v4 also has non-zero failure rates: we see the request that was scheduled after 30 seconds show up before the request that was scheduled immediately. So, I just ran this number; I don't even know exactly which date it's from. But again, the number is clearly high, right? That really is still not acceptable--it's on the order of one out of a thousand users. And this is the whole of the Internet, and there's stuff that's broken over v4 as well, so we need to factor that out; this is the raw number. What is the effect on specific networks? There's a sample large ISP at 0.064%--this is a residential ISP with sort of whatever home gateways they have. 0.064%: if that's 10 million people, that's, you know, 6,000-odd users, which is still not acceptable. For a whitelisted ISP, given our current measurement models, we don't have a really good handle on it--we'd need to be able to measure on one of the whitelisted websites. But in this case the spread against v4 is less significant than the above; it's much closer to v4, and that's because whitelisting masks the brokenness. So we still have some work to do there. Different OSes have different numbers: for the large ISP above, if you take out Mac--which accounts for a share, but certainly not the majority, of the access from this ISP--you see a dramatic drop in the level of brokenness. So maybe this can be addressed in the implementations--it's probably because Mac
prefers 6to4. So, how do we fix this? You can't fix the home gateway. Well, you could.
In theory, you can upgrade it. But users don't upgrade it: they don't know what to do, they don't know it's broken, and the firmware upgrades aren't available anyway. So that's a non-starter, really, from my point of view. You can't wait for them to be fixed, because we don't have time--the shelf life of these things is multiple years. So to me that's a non-starter; this problem is not going to go away, and we need to find a fix. One thing we could do is ship all these users new CPEs; maybe that would work, but we'd have to know who they were, and so on. Host problems we can work around in applications: for example, we could put fixes in Chrome, which is open source, or fix Firefox. But only Microsoft can fix Internet Explorer and only Apple can fix Safari, so that's a set of limited-scope solutions. To fix all the applications you need an OS upgrade, and that will also fix your router problems. So, what do we do on the host side? There's Dan's draft
of Happy Eyeballs. That's general, and perhaps a little more complex than we need to fix this specific problem, and it also needs to be implemented either as a shared library or in each application. For Mac OS X, I think Apple's plan, as I recall, is to do parallel connects. That won't fix MTU holes unless it also recovers from those, but it'll get most of the way there. And one thing that Igor and I were discussing at--where was it--the IETF was to have implementations probe the network on attach. I think Windows already does something like this: it checks for captive portals by making an HTTP request and seeing if it gets what it expected to get. If it doesn't, it says, "You may be behind a captive portal. Click here to find out, or log in to your Internet." So you could do this, and you could pop up a little bubble saying to the user, "Your v6 is broken. Please fix it, or please call your ISP, or please disable it." Because given the numbers that we have, disabling v6 is a perfectly fine solution for me. If 0.05% of users disable v6, sure--the native users will still get there; that's not a problem. So, yeah, this is one thing that I definitely see. But all these solutions require OS vendor buy-in, and unfortunately they are not in this room, even though we did try to contact them. And--yeah, that's it.
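The parallel-connect approach mentioned for Mac OS X (and generalized by the Happy Eyeballs work) can be sketched as a race: start the IPv6 attempt, give it a short head start, then start IPv4 and take whichever succeeds first. This sketch races two placeholder callables with threads instead of real sockets, so only the timing logic is shown:

```python
import queue
import threading
import time

def race(connect_v6, connect_v4, head_start=0.3):
    """Return ("v6"|"v4", connection) from whichever attempt succeeds first.

    connect_v6/connect_v4 are callables that return a connection object
    (or raise on failure); head_start is the grace period IPv6 gets
    before IPv4 is even tried, so a working v6 path keeps winning.
    """
    results = queue.Queue()

    def attempt(fn, label):
        try:
            results.put((label, fn()))
        except Exception:
            pass  # a failed family simply never posts a result

    threading.Thread(target=attempt, args=(connect_v6, "v6"), daemon=True).start()
    time.sleep(head_start)
    threading.Thread(target=attempt, args=(connect_v4, "v4"), daemon=True).start()
    return results.get(timeout=30)  # first posted success wins

# Simulate a black-holed v6 path (hangs) alongside working v4:
family, conn = race(lambda: time.sleep(60), lambda: "ipv4-socket")
print(family)  # v4
```

The head start is what keeps a healthy v6 path preferred while bounding the damage of a broken one to a fraction of a second instead of 20 seconds per address.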
So, any questions?
>> I have one. I think it would be highly useful if you could actually track these numbers--the brokenness numbers--over time. And it would be even more useful if you could publish them.
>> COLITTI: We know.
>> I know you know.
>> COLITTI: So, first things first: we haven't really even got off the ground. These are preliminary numbers. I fully appreciate the value of these numbers, and I think it would be useful for ISPs to have them. It depends, of course, on what level of aggregation you want to publish the data at, and how you allow only ISPs to see their own numbers--you certainly don't want to publish individual user numbers, for privacy--so there needs to be a balance there. But I agree; there's also a certain amount of infrastructural work that needs to be done to actually get this data published in a reliable way, and it sort of takes time.
>> I think we completely agree. The scary
part of it is that I have seen a couple of presentations in the last two, two and a half years, and all of them mentioned a single number that you presented, like, three years ago in Berlin or something. And then I asked the [INDISTINCT] and they said, "This is the amount of brokenness and we don't want to lose these guys." And then I had questions like, "So what's that number?" "Oh, that's the number from Lorenzo." "Okay."
>> COLITTI: And this is our network, right? We, you know...
>> So, I completely agree that, you know, you have a big network, you know your numbers. But it is scary that those numbers are quoted and used, and business decisions are made based on those numbers, and people don't know what the numbers mean--they just know the numbers. That's a problem.
>> COLITTI: Well--yes. So, the reason--so
we have published a paper on v6 adoption, as you may know. The reason we didn't include these numbers--well, not these actual numbers, the numbers that we had earlier--is that we didn't have space in the paper, and we also didn't have complete faith in the quality of those numbers and the methodology that we had at the time. I personally think--and others may disagree--that these numbers are more solid than the ones we had, and we could think about publishing these. It would be more useful to publish them on the web than to write them in a paper. But the point is also that ours are not the only numbers, right? There's Tore Anderson, who publishes a monthly newsletter of the brokenness he sees. So, did I answer all your questions? Of course, I mean, it's only fair to
expect that, you know, every ISP makes their own business decisions based on their own
numbers if they have them, right? >> All right. I have a two part question.
Number one, with the brokenness, do you have any feeling as to what the percentages are for brokenness effectively inside the home versus outside the home? The reason is, obviously, as an ISP I'm trying to do native connectivity. What I was thinking about was: if it's outside the home, it's possible to effectively build your own 6to4 gateway and Teredo gateway and put a captive portal behind it--in effect, suck in all those requests and be able to tell your customers that they've got broken connectivity. But if it's inside the home, there's nothing I can do about it. So that's an interesting question for me: can I fix it for people and tell them? Because I'm not content, I don't get them connecting to me, so I can't see it as an ISP--and people hate it when I look at their packets.
>> COLITTI: Yeah. So I don't have
an answer for that, because from our perspective, right, we're at the very end; it's hard to tell. I mean, if you were able to collect anonymized captures of IPv4--I'm sorry, of 6to4 coming from private source addresses, which will never work--then that would be one way.
>> It occurred to me that you can do that
in a sort of an almost, you know, voice-SBC kind of way: because we know it's probably come through a NAT and the outside is being translated, you can actually guess where to send it back to. So I'm just trying to think--this is why I'm asking: is there some way of effectively, as an ISP, being able to tell people they're broken?
>> COLITTI: I don't know. >> Okay.
>> [INDISTINCT]. I had a question about one of your slides.
>> COLITTI: Yeah. >> It looked--correct me if I'm wrong, but
it looked like you're endorsing the OS X solution of actually doing kind of [INDISTINCT], right? I would have thought that you would really [INDISTINCT] as Google, but is that your opinion
or...
>> COLITTI: So to us, I mean, if a user can't get to us, that's a bigger problem than if a user SYNs us and we then close the connection. I think we can work with that. One of these is a scaling issue; the other one is an impossible problem, right? Because, you know, we might not like it if we get double the connection numbers--in fact, I haven't run any math--but we can do whatever it takes, right? It's not really imposing any sort of front-end server load or backend server load; really, it's all load-balancer load. We can scale to that. What we can't scale to is fixing people's home routers, because we have no access to that.
>> Right, but, like, does Mac OS--like, the
last time I saw, Mac OS is actually not completing the connection--just kind of letting it hang there on your side, right?
>> COLITTI: No, I didn't think--I think it does--I think it resets the connection once
it's done. >> Okay.
>> COLITTI: Well, I would hope so. Otherwise, they have to maintain state as well, right?
>> Last time I saw, it didn't, so.
>> COLITTI: And wouldn't they leak file descriptors? I mean...
>> [INDISTINCT]
>> COLITTI: Yeah. Okay, well, yeah, that sounds like a bug.
>> Well, there are no bugs.
>> COLITTI: So, just curious: who else is conducting measurement stuff like this? Anyone? Wow. Nobody cares? See, I'll buy each of you guys beer.
>> What are you going to do?
>> COLITTI: I will gladly buy you guys beer later. Great. Fantastic. It'll be a cheap night for me. But, I mean, seriously: anybody who runs a network here? We've talked about this for a while--we're trying to do the same stuff--and, you know, having data available is really important, right? This is a pretty big problem. I'm kind of wondering why nobody else is doing it. I mean, I was kind of hoping half the room would raise their hand.
>> There are people right in this room that have done that as well. I think Nathan Wood has a script that's publicly available that you can use to do this--you put it on your website. And then there's Tore Anderson, right?
>> Yeah.
>> COLITTI: There are two more people. >> There are two more people.
>> [INDISTINCT] >> COLITTI: Yeah, but I guess that was my
answer. >> TONY: So, but, you know, if our network
somehow, you know, turns one in a thousand packets into smoke on the backbone then, you know, you're trusting our numbers [INDISTINCT]. You know, we'll trust our own numbers, but, you know, your mileage may vary.
>> Yeah, but Tony, the thing is, I mean, like this gentleman over here said: they're Lorenzo's numbers. They're not, you know, insert-company-name-here's numbers. I mean, you'd figure that if you care enough about it, you'd [INDISTINCT] generate your own information. Naive, am I?
>> COLITTI: It depends where you are on the
long tail of it. >> TONY: You know, in fact, that was my point
yesterday was that we need a common set of tools for how these measurements get made
so that we don't have--you're doing a set of measurements and your numbers don't correlate
with what he's doing because if you all go off and build your own tool set to do this
monitoring the numbers won't match and people get confused. If everybody's got a common
tool suite--that was where I got up yesterday, it's like, you know, we need to think about
how we come up with a common metric for how we do this measurement and what it means because,
you know, I was saying earlier. Anybody that directly peers with Lorenzo and has control
over the endpoints, mobile carriers, right, will not see any of this brokenness. They
will have absolutely zero. >> Yeah?
>> TONY: Because they're not going to be doing, you know, any of these things behind that.
>> Thanks, Tony. Now, I can sit down. That was my only point.
>> TONY: No, that in fact, that was [INDISTINCT]. It's like if he's got complete control over
the endpoint and he's directly peered, he's not going to see the brokenness that you're
going to see because you don't have control over the entire system. And so where the measurement
gets made gives you completely different answers and we need consistent measurement process,
not... >> This is all based on assumptions. You're
basing it on assumptions that host stacks work fine that everything in the chain works
fine. Just a sort of end-to-end data, which it could be affected by--if you install--if
you're a tethered machine--if you tether and you got a firewall in your laptop that doesn't
do v6, your shop is broken, so. >> TONY: Let's make that a goal then.
>> Let's make a what? >> TONY: Let's make IPv6 end-to-end a goal.
>> It is the goal. But we have to get there somehow.
>> [INDISTINCT] >> COLITTI: There's a new idea.
>> We have to keep the line, so last couple of questions, last couple of quick statements.
>> COLITTI: Okay, I'll be quick. Hear, hear. But it's also true that we don't need 10,000 sets of measurements, right? I mean, if we have 10 sets of measurements and they all point the same way, then perhaps there is a problem and we need to fix it, right?
>> I'm not sure I agree with that. I think it's valuable to have multiple sets of measurements, because the body of users hitting a given website can be different, and the level of capability, the level of brokenness, may be different for different types of applications.
I think the question [INDISTINCT] to you, two things. First, with regards to the measurements
that you detailed here, is it proprietary enough to Google that you wouldn't be willing
to share that and make it publicly available so a few more people could use it because
it looks like you kind of... >> COLITTI: Share what? The methodology or
the implementation? >> The actual implementation that this code
is for something not specific to your implementation but more specific to something that people
could put on their website and use it as... >> COLITTI: The code is--the code is the implementation.
It's--by definition specific, it's the implementation. We can share the methodology probably and
I'm not the person who would need to approve such a thing. But we did share the methodology
for measuring adoption so I don't think, you know, personally, I don't think that would
be controversial, but... >> Okay.
>> COLITTI: The--you wouldn't be able to do much with the code, I can tell you.
>> Well, that's kind of where I was going with this: it may end up having to be an offshoot, where the methodology gets implemented in a more generic code form that the community could use. I mean, you could publish that as, "Look, this is out on Google Code. Go download it, go put it on your websites so you can get that information."
>> I think such a thing already exists, right? Nathan Wood took it already.
>> Yeah, [INDISTINCT] and also it's the JavaScript
where you could get it and like, you know... >> Right, great, so...
>> So the problem is that this is a measurement between two endpoints, right? And nothing
in the middle can really snoop and know that, "Oh, this is a measurement and I should be
watching it and that's an example of brokenness." >> Right.
>> We know that we didn't see this or, you know, so...
>> Right. So it's sort of a web content specific thing at the moment but it's still something
that could be valuable. >> Sure it's definitely application specific.
We're not measuring what happens with SMTP or anything like that.
>> The other thing I was going to bring up was just a curiosity about your methodology. If you're looking at this on a day-by-day slice, you're going to have some N number of users that are broken, based on your review. Are you doing anything to try and correlate between one day and another? You know, if you've got the same set of broken users, and they're hitting your website the same amount across multiple days, that sort of skews your percentage, because it's not actually multiple discrete broken users; it's a single discrete broken user that hits the same website every day to check his Gmail, and therefore, you know, your...
>> COLITTI: The question is then, you know, if there's a user or an IP address, right,
that hits you a hundred times a day and one that hits you one time a day, which one do
you care about the most? Do you care--you might care differently or you might care the
same? >> Yeah, I'm not trying to necessarily say
that it's one's more important than the other. I'm saying that it's a valuable point of data
to have when you're looking at how broken it actually is.
>> You're into a fuzzy area where you're talking about identifying users.
>> Yes. >> Yeah, he--exactly.
>> And I fully understand the ground I'm treading; I'm just making the point.
>> Yeah. >> Okay.
>> Yeah, I'll be quick. I have one quick question about broken DNS servers, about AAAA record handling. Some years ago we saw DNS servers [INDISTINCT] return NXDOMAIN or just drop the query, and so on. Have you seen these kinds of servers in your experience today?
>> COLITTI: We don't measure for that; we're only counting--only looking at people trying to get to us who are asking our DNS servers.
>> Yeah, I understand that. >> COLITTI: I've seen that, and I totally--yes,
there are certain domains that do that. I remember implementing in 2000 something that
[INDISTINCT] domains preference in Firefox that...
>> Right, right, yeah.
>> COLITTI: I think [INDISTINCT] was mentioned and...
>> So I'm wondering, with that: do you have any evidence of improvement in this area? But perhaps you don't know that.
>> COLITTI: I think it's getting better but I have no data.
>> KLINE: My name is Erik Kline. I, obviously, do some IPv6 stuff for Google. I was just going to talk briefly about whitelisting--potentially about the idea of whitelist automation, or the discussion that I'm trying to have around it, except that there's not a lot of operational experience with it outside of our own, per se--and, anyway, I'd like some feedback. And also just to discuss whitelisting practice in general, what it is, and see where this discussion goes. So there is sort
of, you know, a fundamental difficulty: DNS resolution of AAAAs is the one and only control knob for v6 traffic. We add a AAAA for YouTube, we get v6 traffic; you turn it off, it goes away; that's it. For HTTP there really aren't any other control knobs. And, you know, RFC 3596 says that the DNS database has to be consistent: whatever transport you're asking over, you have to get the same answer. And that makes sense--I think it's obviously fundamentally required, especially for a transition--but it does break the sort of fate-sharing, in that you can have someone asking for a AAAA who has absolutely no guarantee of actually having v6 at all. So there's been some discussion about how to restore some semblance of fate-sharing. There's Igor Bryskin's work and some other stuff about disabling AAAAs over v4 transport in BIND, which you can do; the concept is to turn that on at the first level of resolver, near the client, so that if they ask for a AAAA over v4, you just don't even resolve it--you just pass on that. Our own Wilmer van der Gaast and Carlo Contavalli, and some others, have been working on an EDNS extension to pass the client IP up through all the resolvers, so that geographically load-balancing DNS servers can receive this information from resolvers: even if you go through several chains of resolvers, you can still know where the client is and try to route them to the right datacenter. And then there's also the whitelisting concept.
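The whitelisting concept--an authoritative server returning AAAAs only to resolvers on a vetted list--can be sketched like this; the prefixes and record data are invented for illustration:

```python
import ipaddress

# Resolver prefixes that have been vetted (invented example data).
WHITELIST = [ipaddress.ip_network(p)
             for p in ("192.0.2.0/24", "2001:db8:53::/48")]

def answer_aaaa(resolver_ip: str, aaaa_records: list) -> list:
    """Return AAAA records only to whitelisted resolvers.

    Non-whitelisted resolvers get an empty answer (NOERROR, no data),
    exactly as if the name had no AAAA records at all.
    """
    src = ipaddress.ip_address(resolver_ip)
    if any(src in net for net in WHITELIST):
        return aaaa_records
    return []

records = ["2001:db8::1", "2001:db8::2"]
print(answer_aaaa("192.0.2.7", records))     # whitelisted resolver: full answer
print(answer_aaaa("198.51.100.9", records))  # not whitelisted: []
```

Note that keying on the resolver address, not the end client, is exactly why the vetting process described below has to operate on whole networks and their resolvers.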
So why whitelist, when you can still get this other fate-sharing signal? Because fate-sharing isn't really enough, right? That just sort of proves that you can get a 512-byte packet through, and that's really a low bar for network operations. Sadly, it's a bar that some networks don't meet, but it's still not quite good enough. Forget about not even reaching the site--what about people who have seconds, or minutes, worth of latency? And not all IPv6 connectivity is equal, right? With the default preference to prefer IPv6, if we're, for example, peered with some network [INDISTINCT]--we have 10 exchange points with them--and they dual-stack one of their peering links, well, all of a sudden all of the v6 traffic is going to slip over onto that one link that's not redundant. And clearly we would rather serve them over the redundant links, right, for basic operational necessity. And sometimes we've contacted a network saying, "Hey, would you like to be whitelisted?" and they say, "No, please, God, don't." Then there's some discussion, but, you know, they don't necessarily want the surprise. Or they're working on v6 and it's very tender--it's almost like some sort of egg that needs to be sat on some more before it hatches, right? If we turned on v6 [INDISTINCT] v6 traffic, their operations staff would just be very upset and then they would turn off v6. So we could actually damage some IPv6 deployments that are in progress by attempting to serve them v6. So the idea is that, by being able to whitelist, we can take all of this into account--more than we could in any sort of automated, application-level sense. You can see roughly how it works: when our DNS server receives a request and it's for a AAAA, we just check a whitelist. If the resolver is in the whitelist, then we return AAAAs--if we actually have them. If not, we just pretend they don't exist and return no error and no results. This is the process that we go through, and the
reason that it takes so long and all these things just pile up for a million bucks and
I don't get to it. We receive a list of resolvers or prefixes, we then have to like try to verify
that, oh, this person, you know, they probably know who they're saying is, maybe that's somebody's
work. I then take that stuff, I look at, I convert all that to ASN(s), I get all their
v4 and v6 prefixes, I look at all of our routes between them like, you know, for example the
situation where we only have--we only see them v6 over through one connection. We'd
really rather not do that. Then this maybe, you know, some pMTUd testing and now we have
the ability to do some more sort of per AS per prefix brokenness analysis. I have to
get, you know, a record in writing, an email, you know, "do you commit to support this,"
and you have to exchange a lot of contacts and all that kind of stuff. And then some
people really request, can we get like a really specific go-live time--it's 7 a.m. in some
bizarre time zone, so that, you know, we have to be up at odd hours, or something like this.
And then you have to deal with an emergency rollback or revert if there's a problem. I
think we've only had one actual revert, and it lasted like a day or two. And then you
have to sort of, you know, iterate on all of these, right. And it's a long time--this
takes a long time, and it obviously doesn't scale. There's a huge amount of human effort involved
here. I was thinking about how we can get around this, and I couldn't really think of
a whole lot to alleviate some of this stuff, because you still need to perform some of
these checks. But I thought, well, rather than having everybody send in the email, why
don't they just signal their readiness somehow and I'll just scrape my logs. They can
just put a magic text record on the reverse for their resolvers and, you know, obviously
it works for v6 too. I'll scrape these every once in a while by looking through the
resolver logs and doing offline lookups. Anyway, so, with this, I would pick it up, it
would go into the whitelist, possibly, and then you could actively monitor--we would
actually monitor as well for traffic dips and trouble reports, and continue to monitor
our brokenness metrics, so we can sort of debug and iterate. But, specifically, what
the--like I said, [INDISTINCT], the whitelist, a lot of people ask what it is, right?
It's really just--this proposal, anyway, is a method to signal your readiness to receive
AAAAs; that's really all it is. We use reverse DNS for sort of some loose verification of
operational ownership: if you can modify your reverse zone, we assume that
you probably own it. We could use the TTLs to express desired lifetimes, but operational
reality may trump this, right, because there could be millions of resolvers a day that
we need to go and perform these queries on, so now there's [INDISTINCT] DNS qps trying
to figure out who has these text records and so on, so this could be a bad idea.
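As a rough illustration of the opt-in idea--a crawler that checks each logged resolver's reverse DNS for a magic text record--here's a minimal sketch. The token `v6-optin=yes` and the stubbed lookup table are hypothetical; the actual record format lives in the proposal document, and a real crawler would do live TXT lookups instead of consulting a dictionary.

```python
import ipaddress

def reverse_name(addr: str) -> str:
    """Reverse-DNS name for a resolver address (in-addr.arpa / ip6.arpa)."""
    return ipaddress.ip_address(addr).reverse_pointer

# Stub of the TXT data a crawler would fetch with real DNS queries.
# Record name and token are hypothetical, not from any standard.
TXT_RECORDS = {
    "2.0.0.10.in-addr.arpa": ["v6-optin=yes"],
}

def wants_aaaa(resolver_ip: str, lookup=TXT_RECORDS.get) -> bool:
    """True if the resolver's reverse zone carries the opt-in token."""
    txts = lookup(reverse_name(resolver_ip)) or []
    return any(t.strip() == "v6-optin=yes" for t in txts)

def crawl(resolver_ips):
    """Periodic pass over logged resolver IPs -> candidate whitelist entries."""
    return sorted(ip for ip in resolver_ips if wants_aaaa(ip))
```

The periodic re-crawl is what replaces the email exchange: readiness is published once in the reverse zone, and the content side picks it up on its own schedule.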
It's fairly simple and it involves less communication. But--so what is the whitelist not,
by the way? Just because there's a lot of confusion out there, some position statements:
It's not a membership-restricted club, you know. This whole--this proposal is not 100%
automated, it's not maintenance-free, it's not necessarily guaranteed to be handled by all
providers, and it's certainly not perfect and certainly not a long-term solution, right?
Nobody really wants to deal with this sort of thing for the long term. In the long term,
we do expect to have a list that the DNS server must consult. We would like it to be a
blacklist, right? Serve AAAAs to the world except for the following known networks that
are just really, just never going to work. They're just bad, whatever it is. And that
includes things like: don't answer AAAAs if you receive a query over 2002::/16 or
2001::/32--I'm not going to answer AAAAs for that--but that would be like a long-term
solution. Here's sort of like the syntax,
sort of, that came with the proposal; I could go over this if it's interesting--there's
a whole proposal documented. On the content provider side,
I log all of the resolver IPs, I do the reverse lookups, and I process this sort of format.
There's a slightly more expressive syntax documented, but I didn't necessarily want to go
into it; it's maybe operationally infeasible. I then still have to do some automated
testing and sort of merge all this stuff into the whitelist and blacklist, and so on and
so forth. And I just repeat this on a daily or weekly basis. Now, there's clearly a lot
of limitations with this. It does reduce some communication in the common case where
everything is working.
But, for other people to do the same thing, the implementation or process
is not--it's a non-trivial effort; there's definitely some development here. It's possible
that timeliness is not necessarily going to be respected. If you try to use TTLs and you
say, "I want this whitelisted for only one day," that would mean I have to sort of expire
it in a day and then re-crawl. That might be more than I want to do for crawling hundreds
of thousands, or millions, of resolvers. And if I do impact analysis and you aren't automatically
of thousands, or millions of resolvers. And if I do impact analysis and you aren't automatically
added to the whitelist, you still have to have some sort of personal communication to
figure out what's wrong and privacy requirements actually hamper helping, right? If I could
– if I could say please use these IP addresses, if I was even allowed to do that analysis,
I certainly, probably, couldn't share that, right? So this is difficult. And there's a
sort of a syntax attempt for a pair-wise opt-in, opt-out but that, again, may not be operationally
feasible. That was really, really fast and possibly incoherent.
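The core serving mechanism described in the talk--return AAAAs only to whitelisted resolvers, and an empty NOERROR answer to everyone else--might be sketched like this. The prefixes and the list itself are placeholders, not anyone's actual whitelist; a real server would also consult per-AS data from the vetting process.

```python
import ipaddress

# Hypothetical whitelist of resolver prefixes, as produced by the manual
# vetting process described above (ASN mapping, route checks, pMTUd tests).
WHITELIST = [ipaddress.ip_network(p) for p in ("192.0.2.0/24", "2001:db8::/32")]

def answer_aaaa(resolver_ip: str, aaaas: list) -> list:
    """Return AAAAs only to whitelisted resolvers; everyone else gets
    NOERROR with an empty answer section (modeled here as an empty list)."""
    ip = ipaddress.ip_address(resolver_ip)
    if any(ip in net for net in WHITELIST):
        return aaaas
    return []
```

Note that the gate keys on the recursive resolver's address, not the end user's, which is exactly why per-resolver quality metrics and fate-sharing come up later in the discussion.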
>> So can you comment, Erik. Is this [INDISTINCT] proposal or is it operationally already implemented?
>> KLINE: Oh, it's a proposal. I have some code that does this and there's one or two
friends out there who've--we've put this up. Actually, I think Jason has done it as well.
I have some code to run [INDISTINCT] stuff. But I'm not actually running against any of
our resolver logs. That would be quite a lot. >> Who, besides Google, is currently doing
the whitelist approach? >> KLINE: Well, this is indeed the problem,
right? This is what I said: there's no operational experience besides ours.
>> Okay. So there's nobody that's joined, other than maybe articles about it happening?
>> KLINE: Well, yeah. So, several of us have been together at various meetings. We've had
discussions about it and people are interested in it as a method to get around the brokenness,
right, because you can't necessarily express the quality of someone's IPv6 connection through
regular DNS, right? You might be able to express fate-sharing, but you can't necessarily
express the quality. >> Actually, Wikipedia has a very
small whitelist for a few hosts. >> KLINE: Who?
>> Wikipedia. >> KLINE: Oh, Wikipedia.
>> So I'm just curious about the content providers here. Who would be interested in following
this type of approach? Anybody? What about on the access side? Would you actually put
reverse pointers in for this? >> KLINE: Well, you don't need to worry, [INDISTINCT].
>> Oh, really? >> [INDISTINCT]
>> KLINE: Thanks, [INDISTINCT]. >> Follow up question for the room. Would
you do this?--nobody raises their hands--would you not do this?--nobody raises their hands.
If you wouldn't want to do this, what would you do? Do you
expect to be able to, you know, email us and have us do it? Do you think
that'll scale? Do you have any other suggestions? I think it's clear that this problem is not
going to go away, right? And it's also clear that if we want to make progress, we have
to find a compromise. So what would you like to do?
>> Well, let me strengthen that a bit. So I'm the NAT guy. And I wear my flame retardant
suit all the time and... >> KLINE: I get the prefix though...
>> Yeah. >> KLINE: I get it. I get it. As long as it
preserves the intent for--anyway, go ahead. >> This is--we have a choice between the lesser
of two evils. Do we want no v6? Do we want everyone to have to beg the content providers:
"I'm good enough to get a AAAA, really. I've done my homework, and we're going to lose a
couple of users but, you know, I'm trying my best." Or would something like this be okay?
There's still going to be a blacklist behind something like this. But this may be, and seems
from what I've seen, the best proposal yet. >> KLINE: But also, you know, to the emotional
argument, it's not necessarily about whether or not an ISP has done their homework,
right? They can have fully redundant links, they're monitoring drop queues for various
QoS levels for IPv6 and v4, and they're all the same, and so on and so forth, and they
could still just have a bunch of users who all have broken gear, right? They have crappy--this
is... >> They have their users. Absolutely.
>> KLINE: And there's nothing they can necessarily do about that, unless they want to go and find
them and replace them and fix them. So you're kind of held hostage by your customer base.
>> Sure. Sure. >> [INDISTINCT], T-Mobile. So, just to answer
the question: I think it's easy, I think it's a good idea, I support it. Number two, for
folks that don't release AAAAs, the DNS64, which we'll talk about in the next presentation,
will produce a AAAA record for your domain. You probably want to produce the AAAA record
yourself and host the traffic yourself rather than have the DNS64 and NAT64 do it for you.
>> [INDISTINCT]. I'm concerned about the whitelist approach--the approach of folks
opting in by setting a reverse PTR. That's because--well, we already have a behavior
that's default in the clients out there, which is asking for the AAAAs to begin with, and
that's a problem. Now it's going to be this new behavior. People are going to throw that
into their products as something that is done by default. They'll update this new
thing in the reverse as part of installing the product.
>> KLINE: But this is the access network signaling to the content network, we're ready to move
forward with it. >> Right. And so access ...
>> KLINE: We're going to take the call... >> ...that perform this. We'll say, "Oh, [INDISTINCT]
ready. We'll set this for you." And it's off your hands. If
not then, then later on, the access networks--you know, they change over time.
Maybe their IPv6 breaks. Maybe that guy quit and nobody knows about it. And that's still
in the [INDISTINCT]. There's going to be a problem maintaining the...
>> KLINE: Nothing [INDISTINCT] the impact analysis that you have to do.
>> I think as a strategy, you're a little better off doing something a little more passive,
something a lot more like a stability--what do you call it? We do it for spam. I'm blanking
on the word now. >> Reputation.
>> KLINE: Reputation. >> Reputation. Yes. For IPv6, build a reputation
trend. >> KLINE: So we could just automatically do
it, right, because you look at resolvers and say, "Oh, yeah. This is pretty clean. Well,
just turn it on," except that we're in a situation where I could be harming someone's IPv6 deployment
that may be very fragile at the moment. Well, yeah, maybe the connectivity is good enough,
but maybe their operations staff isn't ready: they haven't done operational procedures,
they're not taking a pager for this stuff, it's not monitored. All that kind of stuff.
>> Okay. That's an issue. >> KLINE: Okay. I should sit down.
>> Last thing before I sit down--I'm kind of monopolizing the lectern--is, if you do go
further with the PTR idea, I'd suggest you prepend an underscore: _ipv6optin.reverse. Just--so
that when people are seeing these queries and wondering what they are, they'll know,
first of all, and second because of some silly requirements about PTR being kind of like
[INDISTINCT] but not, and it's just--it would work out better that way for me.
>> KLINE: Thank you. >> [INDISTINCT]. You put up statistics in these slides
about how much brokenness there is, and there are similar stats in many places, but I've not
seen really a detailed dive into what is really broken. And I think that until we see what
is really broken, nobody is going to fix it. >> KLINE: Well, Lorenzo had a presentation
about a variety of brokenness types. I mean, it's not broken out with statistics like
this percentage is 6to4 and that percentage is broken IE6, or whatever. Actually,
IE6 is fine for IPv6--but I've never even tried it, and so...
>> My larger point is this is just like the bugs in implementations. They will be [INDISTINCT]
when the code is exercised, not before. So this is essentially
a self-correcting problem, and I'm really concerned that the whitelist type of approach is
essentially taking a big gun to kill a little fly. >> KLINE: Possible. Possible.
>> I just want to respond to Mr. Hankins. The reverse pointer--that would be a signal;
it won't be a complete Boolean release. In my case, I would take it as advisement. It
would be coupled with our own actual metrics from testing users: what resolvers they're
coming through, what percentage are good or bad. If they're--you know, if they're borderline,
[INDISTINCT] yes, let's go do it. If they're really bad, maybe I still hold back.
>> KLINE: Can we just--can we get to the next presentation or comment?
>> Yes. >> KLINE: Sorry, we have--we're like way behind
time. We have lunch in like 25 minutes and I have to get through, like, two presentations.
And here's Marc--please talk about--do you want this?
>> BLANCHET: Let's be fast with it. >> KLINE: All right. Thank you. Marc Blanchet.
>> BLANCHET: Okay. Thanks, Erik. I've been working on trying to [INDISTINCT] IPv6 end-to-end
nice. Many of you probably know I've been in the business of selling some products that
tried to do v6-to-v6 over the current v4. Well, now I'm in the business of doing what I
thought was the ugliest thing: DNS64 and NAT64. Anyway, we've been doing experiments at the IETF
and other conferences. If you want to try it right now, that's the information,
which is essentially: turn off v4 and then put in your DNS server address as this address.
We tried to coordinate with Erik to have a better setup here, but there were some issues,
so this is kind of an ugly hack right now that I installed this morning. So it might not
work, or it might be slow; it might be related to either the setup or your OS implementation
which, you know, as you have seen, has some issues. It actually includes a really ugly
hack for Mac OS, to help Mac OS machines, which are pretty popular. So, having said that,
try it out and, you know, send me email and we'll see if it works or not. It usually
works pretty well in real environments where we had time to set it up correctly.
We ran it over the last IETF for the whole week. Okay, presentation: use case, why NAT64 and
DNS64, basic architecture, the components, implementation, some words about our experiments,
and real network improvements. So here's today: we have a lot of v4, a few dual stacks,
and very few IPv6-only networks or devices. And the problem we have is that many
operating systems and applications do not necessarily behave right in IPv6-only environments.
When you file a bug or you talk to the vendors, they say, you know, that's because--not
enough money, right? Not enough users. However, tomorrow, because of the IPv4 addresses running
out, we will have IPv6 single-stack users, and they will need to access the content
of the v4 Internet. So that's roughly the biggest use case for DNS64 and NAT64:
to connect the v6-only and the v4-only together. However, as was well described
in the behave work, the location of the initiator, the direction, and the
number of servers on each side have a bearing on the solution and deployment scenario. So
what I wrote here is that NAT64/DNS64 is mostly for v6-only to v4-only, where most
servers are on the v4-only side. You can add things, you know, but mostly that
is where the use case for NAT64/DNS64 is the right thing. So I'll take
two cases. Access Network: the provider provisions only v6 addresses to end-users on its access
network, for different reasons. IPv6-only end-users need to access content and services on the
IPv4-only Internet, or their provider's specific services that are only v4, or
an ESP or something--as you've seen over the last two days, there are a few examples
of these. IPv6-only end-users are clients for connections; they don't advertise
services, so it's a typical NAT environment. So here's the drawing: the computer on the
IPv6-only access network can, you know, connect to v6 content, no problem, but cannot
access the v4 Internet, and obviously the answer is the NAT64 to fill that gap.
Content Network Use Case. We've been talking to a few content providers about using the
software we wrote. So this is an example of, essentially, who is helping whom.
On the previous slide, or use case, it's the provider that, you know, helps its end-users
connect to v4. On the other side, it's the content provider that has v4-only content,
because their service is not converted to v6 yet, or will never be converted to v6.
So, therefore, they want to offer that content to the v6 Internet, and that's the use case.
So you see, the difference is the NAT64 is then on the other side. So, the architecture:
there are two components. DNS64 is a DNS server that answers a AAAA request from an IPv6-only
client, and it does that by synthesizing a AAAA reply using the A answer received from
the IPv4 side. The NAT64 translates IP packets between IPv6 and IPv4. The two components
do not share state or need synchronization. They can be in different places, and you will
see on the slides that they are in different, you know, places. They could be in the same
device or completely separate, but they need to be configured with the same special prefix.
The IPv6-only client is unaware of NAT64/DNS64, so it's unchanged. The IPv4-only server is unaware too.
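The synthesis logic just described--pass real AAAAs through untouched, otherwise embed the A answer in the special prefix--can be sketched as follows. This is a minimal illustration assuming the well-known 64:ff9b::/96 prefix, not the Ecdysis code itself; a network-specific prefix from the provider's own space works the same way.

```python
import ipaddress

# Well-known DNS64/NAT64 prefix; a provider-assigned /96 works identically.
DNS64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize(a_record: str, prefix=DNS64_PREFIX) -> str:
    """Embed an IPv4 address in the low 32 bits of the special prefix."""
    v4 = int(ipaddress.IPv4Address(a_record))
    return str(ipaddress.IPv6Address(int(prefix.network_address) | v4))

def dns64_answer(aaaa_records, a_records):
    """DNS64 decision: real AAAAs pass through; otherwise AAAAs are
    synthesized from the A answers."""
    if aaaa_records:
        return list(aaaa_records)
    return [synthesize(a) for a in a_records]
```

The NAT64 then reverses the same mapping on the wire: it strips the /96 prefix off the destination address to recover the IPv4 target, which is why both boxes must agree on the prefix.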
So this is an example, you know, of the very basic components, where the client does DNS queries
to the DNS64 and gets an answer. And that answer contains the special prefix, which is synthesized
together with a part taken from the A record on the Internet. Then the client essentially
sends the packet--the TCP connection--to the NAT64, which actually translates
the packet. There is an example with the flow diagram, so you'll see a bit more. So this
is the--the v6 network is 2001:db8:: and the NAT64 is 1, 2--you'll get the thing. The key here
is that the special prefix that you use for the, you know, AAAA synthesis is actually
either a special prefix--actually, there's a typo here--or part of your own provider
prefix; both cases work. So here's the flow example
where the IPv6 client's DNS query for a AAAA record has
to go through the DNS64. The DNS64, you know, asks that same question, just passes it through.
If it gets a AAAA answer, then it just passes it back to the IPv6 client; same thing, no problem.
If it doesn't get any response, then it tries the A record--it could be done in parallel--
then receives the A answer and synthesizes the DNS response. And then the client connects,
and connects through the NAT64. So, our project: open-source implementations, funded
by NLnet and ourselves. Ecdysis refers to the moulting of the cuticle in arthropods,
which is an analogy for IPv4 moulting into IPv6. So after moulting, the arthropod is fresh
and ready to grow. We actually did many implementations: DNS64 in Perl, a patch against Bind, and
a patch against Unbound. NAT64 was done in user space, as a Linux Netfilter module, and then
a pf patch. So you have your choices. It's pretty simple for Unbound, because Unbound is
modular, so you just write another module, and then you can see the configuration: you
just need to add the special prefix for the DNS64. Bind, well, it's a patch, not a module,
and we added a configuration variable, dns64, and then the prefix. That's pretty straightforward.
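For reference, such a dns64 stanza might look roughly like this; the directive names here follow the syntax that later shipped in stock BIND, and the Ecdysis patch's exact configuration variable may differ:

```
options {
    // Synthesize AAAAs under the special prefix for any client.
    dns64 64:ff9b::/96 {
        clients { any; };
    };
};
```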
NAT64 in Linux is kernel-space; you know, there's a module. What is good with the integration
with Netfilter, and with pf, is the fact that you can apply security policies as you want;
it's all integrated. NAT64 in OpenBSD's pf is, you know, another configuration command. We
don't do redirection right now. Network Experiments: we did the Anaheim IETF--the whole
week with a specific SSID. We presented some slides at the behave working group about
this. And now today--and we plan for Maastricht too. And what we found: it surprisingly works
well for typical end-user traffic because, essentially, the Internet is over HTTP, as we all
know. However, as you have seen, there are many things that work and don't work. But roughly
speaking, I was pretty surprised; you know, we've been running it in our network and,
you know, for the typical end-user connecting to a few typical websites, it works fine. Network
managers think you're offline. IPv4-only apps don't work, obviously, and IP address
literals are not good. And we do have one OS that is really picky about IPv6-only networks,
which affects NAT64 deployments. I'll leave it there, because time is over. We will have
more work to do, and it will obviously be open-source. Conclusion: it's a solution for connecting
v6-only to v4-only, useful for access networks and content network providers, and source code is
available. Please try it. And thank you very much for your attention.
>> You did a great job explaining how DNS64/NAT64 works, but I think it's also important to
explain how it doesn't work. It's not needed for the cases where there's IPv6. And so,
if you live in, like, the mobile provider world, like I do, where there's lots and lots of...
>> BLANCHET: Yeah. Please do not use it if you don't need it. Please.
>> So if you live in a world where a lot of NAT and people throw around the words like
CGN a lot, there's no light at the end of the tunnel, you know, in IPv4. It's very scary.
It's very daunting. You see your expenses go up and up every year. You see state go
up and up. Every year, there's new ALG issues. DNS64/NAT64 is beautiful in the sense that
it goes away over time. As an architecture, it's very elegant, in that it drops out of
the network as the IPv6 content comes online. That's why I really want to see a lot of IPv6
content, so it doesn't have to go through this, even though this works pretty well.
>> Thank you. [INDISTINCT] our user. >> Thank you, Marc
>> BLANCHET: Thank you very much. >> ARKKO: All right. So talking about experiences
around IPv6-only and NAT64. Basically, me and Frederick and a few other people going
to the bleeding edge and then coming back here to report what we found, what we learned.
So, what did we do and why? So, for background, our sites had been in dual stack for ages,
basically. The office had been in dual stack for like over 10 years. And it all worked
very well, so clearly, we had to try something else. And, you know, of course, eventually,
people will have to go to IPv6-only, or go beyond dual stack. It's just a question of
time, and of choosing a suitable network where this can be done. And as you heard during
these two days, there are some of these mobile players that are considering this very seriously.
So we wanted to know what does this mean and we wanted to find out what it means for the
end-user, report back to the community on what will actually happen. We're also building
a NAT64 device of our own, you know, at Ericsson, so we wanted to obviously test an early version
of that. And a little bit more internally, we wanted to build an understanding of when
can we recommend this mode of operation as opposed to regular dual stack. So, what did
we do? You know, it's really--I mean, I can't boast, you know, hundreds or millions
of users; it's just a handful of people, really. It's our research lab and my home. It's
a small group of users that all opted in--there's probably, like, 10 people who have been
on the network, and maybe five of them are there in a permanent sense. And we built an
alternate network alongside the existing one that has its own wireless, has its NAT64 for
access to the IPv4 Internet, has routing... routing, you know, whitelisting, all kinds of
things like that were already in place, so we didn't have to deal with that; that was
already working quite well. And experiences: it is possible. I mean, just speaking from a personal
experience, I don't need to go back, I am permanently in IPv6-only. I've said goodbye
to the IPv4 Internet. But some pain is involved. So I don't necessarily recommend this to everyone,
just yet. In more detail, many things do break; it's mostly about lack of IPv6 support
in applications and some types of devices. There are lots of bugs because, you know, people
just haven't tested this in a network that doesn't have v4 as well. And some of our users
went back because of these issues, because we couldn't really ask them to, you know,
be without whatever they needed. And overall, I felt that the key issue or that the one
thing that we need to address is this IPv6 support rather than some things in the NAT64
part which generally seemed to be working relatively okay. So what does actually work
and what does break? So many things work well, obviously browsing works. Yesterday, we talked
about the IPv4 literals. I've only seen two instances of that in my personal browsing
the last two months. So it's not a huge issue. Well, of course, it depends on where you go
and so forth. Email software updates, many chat systems, streaming, lots of things do
work, and then there's an interesting thing if you compare the general-purpose computing
environment and the mobile environment. If you get to choose the terminals that you use,
then you can actually get everything working in v6-only and you don't lose anything. But
that's not necessarily true in the general computing environment. And then there's issues
there. There are host operating system issues. The lack of testing. Some applications fail.
Many appliances support--do not support IPv6. There are some issues with firewalls particularly
with the fragmentation support on v6. And we actually had to change our NAT64 code a
little to deal with this. Example issues with operating systems: RA info is not aged or removed as
you move from one network to the other without going through the sleep state, in Ubuntu
for instance. You have to tell several operating systems that they should not expect v4; that's
like a manual action that is needed. DNS discovery continues to be an issue, and there are several
types of manual things you have to do for that. Example issues with applications: in
our particular user group, the biggest problem was Skype. And I actually sent a message
to Jonathan Rosenberg about that, and he said that they do have a plan to fix this, but not
a specific timeline for when that actually happens. So, you know, maybe if all of us call
him tonight, he'll change his mind and do it quickly. Some chat accounts fail--MSN,
AIM/ICQ--but some other stuff works. Secondlife doesn't work. And that's a problem for me,
because the IST has decided to have some of its meetings in Secondlife. Many games fail
in peer-network or LAN mode. Messaging--I won't go through the details,
but these are the things that we tested. Anything on the web works, anything over XMPP works;
that's great. Some other stuff doesn't work. Then I also enrolled my kids in this exercise
and, you know, moved their computers to IPv6-only and asked them to test the games that they had
on their table. And it's not pretty; they don't work. Even some things that are supposed to
work, like the stuff on the web, don't work. There are some IPv4 [INDISTINCT] or
something like that. NAT64 generally worked relatively well--some issues, perhaps; I'll
leave that due to the lack of time. We did run some experiments, actually: we
looked at the Alexa top website list and tried to compare what happens in a regular
IPv4-only network, an IPv6-only network, and then IPv6-only plus NAT64. Obviously, with IPv6-only,
lots of things fail, because it's like Google and a few others on the list that are v6, so
nothing else works. But with the NAT64, it's basically the same error rate as with the
regular v4, but then you lose the sites that have IPv4 literals, and that's pretty much
the issue. There aren't too many, but some. And if you look at that, that's 0.2% on the
first thousand, and if you go to the first 10,000 then that's 2%. We haven't [INDISTINCT]
on that; we will continue this test. It's really hard to tell whether this is an issue
or not--it wasn't an issue for me personally, but it could be. And just having an error
could mean some advertisement that was not displayed and, you know, no big deal, for
me at least. So, conclusions, recommendations: I think this confirms that dual stack would
still be our preferred mode of operation for the general public, or networks in general.
It may be possible to do IPv6-only in special environments, such as mobile networks, particularly
if you have control of the terminals--that's really key--and, you know, enthusiast networks
and special things like that, with effort. I mean, this will, of course, change in time.
Maybe in two years this will be possible for everyone, but we actually do have to work.
I have a list of things here that we need to do in order to make things better. We have
to improve the DNS discovery. We're doing some things about that in the IETF. Some implementation
things also have to be done about it. We have to fix bugs. We have to add IPv6 support
to Skype, messaging, gaming. We need to do more measurements to actually understand
what's going on and what breaks. And we actually didn't turn on any ALGs or proxies in this
particular test, so there are things you can do manually or operationally to remove some
of the breakage that we saw, but we wanted it to be pure in our test here. So, that's
it. My plan is to report in more detail; we intend to write a draft about this, so hopefully
we'll have, you know, more information for you in the future. Thank you.
>> Real quick question. So, as a sophisticated user, have you noticed your user habits changing?
I mean, you poke around the network, you type in IPv4 addresses, you type in IPv6
addresses. I'm just wondering, as a sophisticated user, do you find yourself avoiding typing
in IPv6 addresses because they're long and hard to read, or do you just use the same
expert mode that you've always had? >> Well, mostly I use DNS, obviously, but
now I do remember my IPv6 address, so that's a change.
>> But you forgot your Social Security number? >> So have you noticed any difference in
the--like the experienced time [INDISTINCT], for example, the response time or the download
time or something like that? >> That's an excellent question, and I don't
have any data about that. There's, like, anecdotal evidence that, you know, one of our users
felt that this was slower, but it can't really be quantified, and we didn't really see it
in ping times. But this is definitely something that needs to be measured--not just for this
NAT64 stuff but also for the general dual stack usage in the global Internet.
>> I can answer from our experience. It really depends on the operating system you're using.
Very different, if, you know, some operating system, you know, time outs and problems but
some are better for the same, you know, connecting to the same sites, exactly the same sites.
>> I think your list of games might be sort of a tad bit too negative. At
least if you want to go to, like, older games--if you want to see [INDISTINCT], that's open
source, it's been supported for, like, ten years. Or if you'd like to play a Command &
Conquer type of [INDISTINCT], I have a binary patch for that.
>> That's good. We really need to fix the games. My selection of games was, however,
very random. It was exactly the games that were on the table--on the top of the table--
at the time that I asked for this to be done. And, you know, I removed the ones that don't
do network mode. >> Quick response to Marc's comment. I understand
that this aspect of the experience would depend on the operating system implementation,
but I also think it would depend on the reachability, or the network status or stability,
or something like that. So, my question is: was the overall experience taken into account,
or... >> Yeah, and I think we need to separate the
error cases, where things fail or time out, from the usual case, when it works but takes
a millisecond or so longer--and that's the one that I really want to measure, but
I haven't done it yet. >> Thank you, Jari. Thank you very much.