How can I see raw AYIYA traffic at the IPv4 layer?
Shadow Hawkins on Wednesday, 19 April 2017 16:32:22
How can I use Wireshark or other tools to see what AYIYA is doing over my IPv4 connection? Wireshark always seems to show the IPv6 payload.
I'd like to see exactly how AYIYA is encapsulating the payload, so I may figure out why I am seeing problems with 6in4 (slowness, stalled traffic) not present with AYIYA.
How can I see raw AYIYA traffic at the IPv4 layer?
Jeroen Massar on Wednesday, 19 April 2017 16:35:18
Wireshark always seems to show the IPv6 payload.
Dump the correct interface: the IPv4 interface, not the tunnel.
I'd like to see exactly how AYIYA is encapsulating the payload, so I may figure out why I am seeing problems with 6in4 (slowness, stalled traffic) not present with AYIYA.
Wireshark has an AYIYA dissector, and thus will show you Ethernet -> IPv4 -> UDP -> AYIYA -> IPv6.
"slowness, stalled traffic" can be caused by many many factors. Though, most very likely, due to the latter "stalled" part: you are having MTU issues. See the FAQ for more details.
How can I see raw AYIYA traffic at the IPv4 layer?
Shadow Hawkins on Thursday, 20 April 2017 10:10:27
Jeroen Massar wrote:
Dump the correct interface: the IPv4 interface, not the tunnel.
[...]
Wireshark has an AYIYA dissector, and thus will show you Ethernet -> IPv4 -> UDP -> AYIYA -> IPv6.
Wireshark always dissects AYIYA and IPv6, regardless of the interface. Searching for "disable dissector" helped: I needed to disable IPv6 and/or AYIYA under "Enabled Protocols".
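The same works from the command line with tshark (a sketch, assuming a tshark version that supports --disable-protocol, and assuming the dissector names are ipv6 and ayiya):
$ tshark -i eth0 -f "udp port 5072" --disable-protocol ipv6 --disable-protocol ayiya -x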
"slowness, stalled traffic" can be caused by many many factors. Though, most very likely, due to the latter "stalled" part: you are having MTU issues. See the FAQ for more details.
In case you have not guessed, this is about migrating tunnels away from SixXS to other tunnelbrokers. One tunnel that used to work fine as a static 6in4 tunnel, first at SixXS, later elsewhere (also fine), suddenly is showing stalled transfers when loads are high. Another tunnel that used to run AYIYA to SixXS now runs at 10% of the IPv4 speed using 6in4 to elsewhere.
For both networks, I can replicate the problems I see over a 6in4 tunnel to a colocation Linux server under my control, using private IPv6 tunnel addresses. So the other tunnelbroker is not to blame. But notably, for both these IPv4 networks, an AYIYA tunnel works just fine, fast and reliable.
In the case of the "stalled" network, I guess AYIYA is less prone to MTU and fragmentation issues. For the "slow" network, it could be the ISP is throttling proto 41 but not AYIYA's UDP.
The (business oriented) ISP for the "stalled" network has promised me native IPv6 within a year, but not before 6-6-2017. They are willing to help me with my current tunneling issues, but do not appear to be able to find the cause. Tracepath and ping tests (with the DF bit set and large packet sizes) do not show problems. I will need to dig deep.
The ISP for the "slow" network offers me either DS Lite or an expensive upgrade from a consumer to a business subscription. I guess I will have to opt for one of them. Or prove and complain that they are not net-neutral.
How can I see raw AYIYA traffic at the IPv4 layer?
Jeroen Massar on Thursday, 20 April 2017 11:20:46
In case you have not guessed, this is about migrating tunnels away from SixXS to other tunnelbrokers.
Instead of asking your ISP for Native IPv6.... or moving to an ISP that does provide what you actually want.
... suddenly is showing stalled transfers when loads are high
You state "load"; what do you mean by "load"?
Another tunnel that used to run AYIYA to SixXS now runs at 10% of the IPv4 speed using 6in4 to elsewhere.
From where to where? See the FAQ entry "The tunnel is slow", which mostly also applies to any other tunnel in the world...
I can replicate the problems I see over a 6in4 tunnel to a colocation Linux server under my control
That only partially excludes the common path, and might just be an indicator. Remember that your source node is still the same. There are CPEs which have issues with non-TCP/UDP packets, for instance.
In the case of the "stalled" network, I guess AYIYA is less prone to MTU and fragmentation issues
AYIYA does nothing magical for fragmentation. Just a correctly configured MTU and properly configured endpoints that do proper ICMPv6 PTB sending, nothing else.
You will "just" (as it is far from that easy) need to verify if your setup is correct and that the nodes you are talking to properly handle ICMPv6 PTB.
The (business oriented) ISP for the "stalled" network has promised me native IPv6 within a year, but not before 6-6-2017.
Did you ask them what they have been doing for the last 20 years? IPv6 is very old by now....
The ISP for the "slow" network offers me either DS Lite or an expensive upgrade from a consumer to a business subscription. I guess I will have to opt for one of them. Or prove and complain that they are not net-neutral.
Sounds like the standard monopoly that is plaguing most of Europe; better to complain to your government about unfair business practices.
How can I see raw AYIYA traffic at the IPv4 layer?
Shadow Hawkins on Thursday, 20 April 2017 12:57:52
Jeroen Massar wrote:
> In case you have not guessed, this is about migrating tunnels away from SixXS to other tunnelbrokers.
Instead of asking your ISP for Native IPv6.... or moving to an ISP that does provide what you actually want.
Oh but I have, extensively. I have negotiated native IPv6 within a year, or the right to break an otherwise three-year contract. I/they just need some more time. I need to give them (FTB Nederland) a break; they are a new small ISP that took over a fiber network previously run by Vodafone NL, which Vodafone had abandoned as unprofitable. FTB are about to invest more in IPv6 than they will ever earn from my contract. And they are actively helping me to find the cause of the current tunneling issues.
> ... suddenly is showing stalled transfers when loads are high
You state "load"; what do you mean by "load"?
Downloading 100GB over HTTP (wget to apache) stalls at around 30 GB.
rsync over ssh stalls almost immediately.
> Another tunnel that used to run AYIYA to SixXS now runs at 10% of the IPv4 speed using 6in4 to elsewhere.
From where to where? See the FAQ entry "The tunnel is slow", which mostly also applies to any other tunnel in the world...
The "slow" network is my home network, a Ziggo consumer cable network (300/30 mbit). With 6in4 it is slow to anywhere. Most notably to a TransIP colocation server under my control, with native IPv6, close to AMS-IX, close to the new tunnelbroker.
$ traceroute -4 bol.macroscoop.nl
traceroute to bol.macroscoop.nl (80.69.71.122), 30 hops max, 60 byte packets
1 * * *
2 gv-rc0011-cr101-xe-0-1-0-0.core.as9143.net (213.51.181.113) 15.037 ms 14.652 ms 16.357 ms
3 asd-tr0042-cr101-ae8-0.core.as9143.net (213.51.158.12) 18.312 ms 19.244 ms 18.951 ms
4 m6.r1.ams0.transip.net (80.249.208.244) 25.353 ms 21.996 ms 25.118 ms
5 bol.macroscoop.nl (80.69.71.122) 22.208 ms !X 24.885 ms !X 24.623 ms !X
$ traceroute -6 bol.macroscoop.nl
traceroute to bol.macroscoop.nl (2a01:7c8:c055:1701::1), 30 hops max, 80 byte packets
1 pimzand-2.tunnel.tserv11.ams1.ipv6.he.net (2001:470:1f14:a33::1) 20.490 ms 25.575 ms 22.690 ms
2 ve213.core1.ams1.he.net (2001:470:0:7d::1) 40.008 ms 38.533 ms 39.644 ms
3 m6.r1.ams0.transip.net (2001:7f8:1::a502:857:1) 13.610 ms 14.338 ms 23.855 ms
4 bol.macroscoop.nl (2a01:7c8:c055:1701::1) 23.308 ms !X 23.105 ms !X 23.607 ms !X
> I can replicate the problems I see over a 6in4 tunnel to a colocation Linux server under my control
That only partially excludes the common path, and might just be an indicator. Remember that your source node is still the same. There are CPEs which have issues with non-TCP/UDP packets, for instance.
For the "stalled" network: FTB swears nothing has changed in the infrastructure under their control.The issue has started long after the tunnelbroker migration and the Vodafone to FTB migration. We both are suspecting a change in the Vodafone infrastructure between FTB and AMS-IX.
This is from us (FTB) to colo (TransIP)
$ traceroute -4 bol.macroscoop.nl
traceroute to bol.macroscoop.nl (80.69.71.122), 30 hops max, 60 byte packets
1 wisper-gw.macroscoop.nl (85.146.253.158) 0.753 ms 1.017 ms 1.236 ms
2 static-51-197-117-93.thenetworkfactory.nl (93.117.197.51) 0.122 ms 0.125 ms 0.115 ms
3 10.99.9.41 (10.99.9.41) 4.126 ms 4.158 ms 4.205 ms
4 80.112.229.253 (80.112.229.253) 4.444 ms 4.613 ms 80.112.229.249 (80.112.229.249) 4.279 ms
5 m6.r1.ams0.transip.net (80.249.208.244) 4.231 ms 4.232 ms 4.578 ms
6 bol.macroscoop.nl (80.69.71.122) 2008.227 ms 2008.032 ms 2007.999 ms
I particularly suspect the hop with the private IPv4 address. Would that hop be able to send an ICMP message?
> In the case of the "stalled" network, I guess AYIYA is less prone to MTU and fragmentation issues
AYIYA does nothing magical for fragmentation. Just a correctly configured MTU and properly configured endpoints that do proper ICMPv6 PTB sending, nothing else.
You will "just" (as it is far from that easy) need to verify if your setup is correct and that the nodes you are talking to properly handle ICMPv6 PTB.
So in my simulation, with a private tunnel between endpoints under my control, it would be sufficient that _my_ endpoints handle ICMPv6 PTB. Right? Everything in between just needs to pass proto 41 packets and react to ICMPv4 pmtud correctly. Right?
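To verify the IPv4 leg I plan to ping the remote 6in4 endpoint with DF set at full size (a sketch, assuming a Linux host, a 1500-byte path, and 203.0.113.1 as a placeholder for the endpoint; 1472 bytes of payload + 8 bytes ICMP header + 20 bytes IPv4 header = 1500):
$ ping -4 -M do -s 1472 203.0.113.1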
Thanks for your responses.
How can I see raw AYIYA traffic at the IPv4 layer?
Jeroen Massar on Thursday, 20 April 2017 13:14:11
Oh but I have, extensively. I have negotiated native IPv6 within a year, or the right to break an otherwise three-year contract. I/they just need some more time. I need to give them (FTB Nederland) a break; they are a new small ISP that took over a fiber network previously run by Vodafone NL, which Vodafone had abandoned as unprofitable. FTB are about to invest more in IPv6 than they will ever earn from my contract. And they are actively helping me to find the cause of the current tunneling issues.
Not too unreasonable to give them a wee bit of time in that case.
Their shortest path: set up a 6rd box and voila, all customers that want it can do IPv6.
Then, the longer path: go native IPv6.
Though as it is "fiber", it is very likely Ethernet based and thus enabling IPv6 should not be too complicated.
Downloading 100GB over HTTP (wget to apache) stalls at around 30 GB. rsync over ssh stalls almost immediately.
A "stall" definitely sounds like a PathMTU issue.
Though if TCP loses packets, similar weird effects can happen.
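One way to see whether the PathMTU signaling actually happens (a sketch, assuming Linux; eth0 and the tunnel interface name he-ipv6 are placeholders) is to watch for ICMPv4 fragmentation-needed and ICMPv6 Packet Too Big while reproducing the stall:
$ tcpdump -ni eth0 'icmp[icmptype] == icmp-unreach and icmp[icmpcode] == 4'
$ tcpdump -ni he-ipv6 'icmp6 and ip6[40] == 2'
(ip6[40] is the ICMPv6 type byte when no extension headers are present; type 2 is Packet Too Big.)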
... consumer cable network (300/30 mbit).
Note that those are MAXimum speeds, they are not what you will always get, especially at peak time.
With 6in4 it is slow to anywhere.
Ziggo/LibertyGlobal are known to do QoS-style packet prioritization. They have never admitted it publicly, but it is seen all over the place.
Also, LibertyGlobal has a nasty peering policy, which means that the transit port you are going over might just be full and they are not bothering to upgrade it, instead letting the remote ISP pay for buying transit from them. That is what you get with monopoly companies unfortunately.
5 bol.macroscoop.nl (80.69.71.122) 22.208 ms !X 24.885 ms !X 24.623 ms !X
Note that traceroutes are one-way, you do not see the return path with them which might take a completely different route.
But as there is an !X there, it also shows that some kind of packet filtering is happening, which can also cause problems for tunnels.
We both suspect a change in the Vodafone infrastructure between FTB and AMS-IX.
AMS-IX only provides a switch. But those ports on the switch might be 'full'. That can happen on both sides of the link though. That said, IX'es should not be used for transit, that is what private peering is for.
I particularly suspect the hop with the private IPv4 address. Would that hop be able to send an ICMP message?
Obviously it is sending an ICMP message (most traceroute implementations use that method) or another packet type. Nevertheless, that packet is being sourced from an RFC1918 address and is being returned without issues to your host. RFC1918 addresses should never exist on the public Internet.
Which shows that the networks you are on are susceptible to spoofing.
Time to teach your ISP some MANRS!
Note that this indeed can cause all kinds of weird issues, as packets originating from such a host will be dropped by properly configured networks.
it would be sufficient that _my_ endpoints handle ICMPv6 PTB.
No. Because that is just the first hop of a traceroute. All nodes between the source and destination need to properly support ICMPv6 PTB, and the rest of the IPv6 Node Requirements (there are quite a few more).
But you are forgetting that if a too-large IPv4 packet is sent, the IPv4 network might silently drop that packet too. IPv4 might fragment, but maybe the non-compliant node does not properly do that; such is the fun with unknown nodes.
How can I see raw AYIYA traffic at the IPv4 layer?
Shadow Hawkins on Thursday, 20 April 2017 18:06:23
Jeroen Massar wrote:
Though as it is "fiber", it is very likely Ethernet based and thus enabling IPv6 should not be too complicated.
Provisioning should not be a problem. They tell me they are in the process of acquiring IPv6 address space (they lease their IPv4 address space from Vodafone). They also depend on Vodafone for actual connectivity.
> ... consumer cable network (300/30 mbit).
Note that those are MAXimum speeds, they are not what you will always get, especially at peak time.
I am actually seeing that speed in IPv4, and almost that speed in an AYIYA tunnel.
The slowness starts with 6in4. Ten times slower.
Ziggo/LibertyGlobal are known to do QoS-style packet prioritization. They have never admitted it publicly, but it is seen all over the place.
I asked exactly this in their official forum. No response yet.
Note that traceroutes are one-way, you do not see the return path with them which might take a completely different route.
But I can see it: bol.macroscoop.nl is our colo server, so I can traceroute/tracepath back. What is interesting is that a tracepath from there back to our FTB ethernet switch shows some gaps:
$ tracepath -n magritte.macroscoop.nl
1?: [LOCALHOST] pmtu 1500
1: 80.69.71.120 0.236ms asymm 2
1: 80.69.71.120 0.202ms asymm 2
2: 80.249.209.143 1.475ms
3: no reply
4: no reply
5: 93.117.197.50 4.774ms
6: 85.146.253.145 4.462ms reached
And what about this: this is an IPv6 ping from our colo server, using native IPv6, to our tunnel endpoint, via a non-SixXS tunnelbroker. We see a PTB, but the ping continues. Should this fail completely?
$ ping -6 -M do -s 1500 magritte.macroscoop.nl
PING magritte.macroscoop.nl(magritte.macroscoop.nl (2001:470:7bc9::1)) 1500 data bytes
From tserv1.ams1.he.net (2001:470:0:7d::2) icmp_seq=1 Packet too big: mtu=1280
1508 bytes from magritte.macroscoop.nl (2001:470:7bc9::1): icmp_seq=2 ttl=61 time=6.14 ms
1508 bytes from magritte.macroscoop.nl (2001:470:7bc9::1): icmp_seq=3 ttl=61 time=5.83 ms
$ traceroute -6 magritte.macroscoop.nl
traceroute to magritte.macroscoop.nl (2001:470:7bc9::1), 30 hops max, 80 byte packets
1 v871.router1.dcg.transip.net (2a01:7c8:c055::2) 19.674 ms 19.642 ms 19.613 ms
2 30gigabitethernet1-3.core1.ams1.he.net (2001:7f8:1::a500:6939:1) 1.800 ms 1.786 ms 1.762 ms
3 tserv1.ams1.he.net (2001:470:0:7d::2) 5.550 ms 14.106 ms 9.533 ms
4 magritte.macroscoop.nl (2001:470:7bc9::1) 5.509 ms 5.474 ms *
> it would be sufficient that _my_ endpoints handle ICMPv6 PTB.
No. Because that is just the first hop of a traceroute. All nodes between the source and destination need to properly support ICMPv6 PTB, and the rest of the IPv6 Node Requirements (there are quite a few more).
Are you sure? In _this_ context "my endpoints" are the only IPv6 hops. Here I am referring to my private tunnel, not dependent on any tunnelbroker, just used for simulation.
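For reference, the private simulation tunnel is set up roughly like this (a sketch, assuming Linux with iproute2; the addresses are documentation placeholders, not my real ones, and the mirror image runs on the other host):
# 6in4 (protocol 41) tunnel between two hosts under my control
$ ip tunnel add sixtest mode sit local 192.0.2.1 remote 198.51.100.1 ttl 64
$ ip link set sixtest up mtu 1480
$ ip -6 addr add fd00:641::1/64 dev sixtest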
But you are forgetting that if a too-large IPv4 packet is sent, the IPv4 network might silently drop that packet too. IPv4 might fragment, but maybe the non-compliant node does not properly do that; such is the fun with unknown nodes.
Right, this is my focus now.
Thanks again.
How can I see raw AYIYA traffic at the IPv4 layer?
Shadow Hawkins on Tuesday, 09 May 2017 12:08:30
For the record:
It appears all the problems I was seeing with 6in4 tunnels are related to protocol 41 packets being throttled (Ziggo NL) or dropped (Vodafone NL).
My solution was to get myself a VPS, use that as the endpoint for the 6in4 tunnel, and tunnel the routed /48 subnet to home and work using WireGuard. WireGuard uses UDP, supports NAT and dynamic endpoints just like AYIYA, and as a bonus encrypts the traffic.
That works just fine. Fast and reliable.
This will help me while waiting for native IPv6 hopefully before the end of this year.
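For the record, the WireGuard side looks roughly like this (a sketch of wg0.conf on the home router, for use with wg-quick; keys, names and addresses are placeholders, and 2001:db8::/48 stands in for the routed /48):
[Interface]
PrivateKey = <home-private-key>
# transfer address inside the routed /48
Address = 2001:db8:0:1::2/64
[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.net:51820
# send all IPv6 via the VPS, which terminates the 6in4 tunnel
AllowedIPs = ::/0
# keep the NAT mapping alive, much like AYIYA does
PersistentKeepalive = 25
Bring it up with "wg-quick up wg0".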