Delays when connecting to machine on sixxs ipv6 subnet
Shadow Hawkins on Thursday, 10 February 2011 16:14:12
Hi all,
I've been setting up an internal subnet which is autoconfigured with radvd. Everything is working, but there's an odd performance problem. Taking the host hufflepuff.codelibre.net (2001:770:1d5:0:211:24ff:fe75:6d56) as an example, I can ping6 it instantly by address, and name lookups are near-instant too. But if I ping6 it by hostname, or ssh to it by hostname, there's a delay of roughly 10 seconds for each ping and for the initial ssh connection. The delay occurs whether I try this on the host itself or from any remote host.
Has anyone else seen this odd behaviour before? One suspect is that something is attempting a DNS lookup over IPv6, failing, and falling back to v4, but I'm not sure how best to trace that. strace seems to indicate that the lookup goes over v4, but I may be misinterpreting it.
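I suppose one way to trace it would be to capture the DNS traffic while repeating the slow ping, to see which server and protocol the resolver actually talks to; something along these lines (just a sketch; tcpdump needs root):
ravenclaw% sudo tcpdump -ni any port 53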
Name lookup and ping6:
ravenclaw% host hufflepuff.codelibre.net
hufflepuff.codelibre.net has IPv6 address 2001:770:1d5:0:211:24ff:fe75:6d56
ravenclaw% ping6 hufflepuff.codelibre.net -c 5
PING hufflepuff.codelibre.net(2001:770:1d5:0:211:24ff:fe75:6d56) 56 data bytes
64 bytes from 2001:770:1d5:0:211:24ff:fe75:6d56: icmp_seq=1 ttl=64 time=3.74 ms
64 bytes from 2001:770:1d5:0:211:24ff:fe75:6d56: icmp_seq=2 ttl=64 time=0.211 ms
64 bytes from 2001:770:1d5:0:211:24ff:fe75:6d56: icmp_seq=3 ttl=64 time=0.213 ms
64 bytes from 2001:770:1d5:0:211:24ff:fe75:6d56: icmp_seq=4 ttl=64 time=0.204 ms
64 bytes from 2001:770:1d5:0:211:24ff:fe75:6d56: icmp_seq=5 ttl=64 time=0.243 ms
--- hufflepuff.codelibre.net ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 40044ms
rtt min/avg/max/mdev = 0.204/0.923/3.746/1.411 ms
ravenclaw% ping6 hufflepuff.codelibre.net -c 5 -n
PING hufflepuff.codelibre.net(2001:770:1d5:0:211:24ff:fe75:6d56) 56 data bytes
64 bytes from 2001:770:1d5:0:211:24ff:fe75:6d56: icmp_seq=1 ttl=64 time=0.252 ms
64 bytes from 2001:770:1d5:0:211:24ff:fe75:6d56: icmp_seq=2 ttl=64 time=0.215 ms
64 bytes from 2001:770:1d5:0:211:24ff:fe75:6d56: icmp_seq=3 ttl=64 time=0.195 ms
64 bytes from 2001:770:1d5:0:211:24ff:fe75:6d56: icmp_seq=4 ttl=64 time=0.194 ms
64 bytes from 2001:770:1d5:0:211:24ff:fe75:6d56: icmp_seq=5 ttl=64 time=0.205 ms
--- hufflepuff.codelibre.net ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3997ms
rtt min/avg/max/mdev = 0.194/0.212/0.252/0.023 ms
You can see that while the individual ping times are all fast, the total time is very different with the -n option (~4 s) compared to without it (~40 s). So it looks very much like name lookups are at fault here. I know for a fact that the ISP nameserver isn't yet reachable over IPv6; it's IPv4 only, but I don't know whether that's the cause, or how to test it. If that is the issue, is there any way of disabling name lookups over IPv6 until the nameserver gains IPv6 support? Note that /etc/resolv.conf contains only a single IPv4 entry.
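Timing the forward and reverse lookups separately might show where the stall is; something along these lines (using my host's address and the shell's time builtin):
ravenclaw% time host hufflepuff.codelibre.net
ravenclaw% time host 2001:770:1d5:0:211:24ff:fe75:6d56
The second command performs a PTR (reverse) lookup for the address.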
I don't think there are any issues with the configuration, but I've included it below anyway, just in case.
The host:
hufflepuff% ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:11:24:75:6d:56 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
inet6 2001:770:1d5:0:211:24ff:fe75:6d56/64 scope global dynamic
valid_lft 86195sec preferred_lft 14195sec
inet6 fe80::211:24ff:fe75:6d56/64 scope link
valid_lft forever preferred_lft forever
hufflepuff% ip -6 route
2001:770:1d5::/64 dev eth0 proto kernel metric 256 expires 86172sec mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
default via fe80::222:15ff:fe1b:4d10 dev eth0 proto kernel metric 1024 expires 1572sec mtu 1500 advmss 1440 hoplimit 64
On the router and tunnel endpoint:
ravenclaw% ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:22:15:1b:4d:10 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.3/24 brd 192.168.1.255 scope global eth0
inet6 2001:770:1d5::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::222:15ff:fe1b:4d10/64 scope link
valid_lft forever preferred_lft forever
ravenclaw% ip addr show dev sixxs
15: sixxs: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc pfifo_fast state UNKNOWN qlen 500
link/none
inet6 2001:770:100:ca::2/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::470:100:ca:2/64 scope link
valid_lft forever preferred_lft forever
% ip -6 route
2001:770:100:ca::/64 dev sixxs proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0
2001:770:1d5::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev sixxs proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0
default via 2001:770:100:ca::1 dev sixxs metric 1024 mtu 1280 advmss 1220 hoplimit 0
ravenclaw% cat /etc/radvd.conf
interface eth0
{
AdvSendAdvert on;
prefix 2001:770:1d5::/64
{
};
};
Thanks for any suggestions,
Roger Leigh
Delays when connecting to machine on sixxs ipv6 subnet
Jeroen Massar on Thursday, 10 February 2011 16:19:57
Sounds like broken reverse DNS (ping6 also reverse-resolves addresses to hostnames, and the SSH server does a reverse lookup and checks that it matches the forward).
Let's see if that theory is correct:
$ ~/bin/ip6_arpa.pl 2001:770:1d5:0:211:24ff:fe75:6d56
6.5.d.6.5.7.e.f.f.f.4.2.1.1.2.0.0.0.0.0.5.d.1.0.0.7.7.0.1.0.0.2.ip6.arpa
$ dig +trace 6.5.d.6.5.7.e.f.f.f.4.2.1.1.2.0.0.0.0.0.5.d.1.0.0.7.7.0.1.0.0.2.ip6.arpa
[.. these go well and then ..]
5.d.1.0.0.7.7.0.1.0.0.2.ip6.arpa. 604800 IN NS b.ns.bytemark.co.uk.
5.d.1.0.0.7.7.0.1.0.0.2.ip6.arpa. 604800 IN NS a.ns.bytemark.co.uk.
5.d.1.0.0.7.7.0.1.0.0.2.ip6.arpa. 604800 IN NS c.ns.bytemark.co.uk.
;; Received 155 bytes from 2001:7b8:3:1e:290:27ff:fe0c:5c5e#53(ns2.sixxs.net) in 28 ms
that last step takes forever... so, trying the delegated nameservers directly:
$ dig @a.ns.bytemark.co.uk. 6.5.d.6.5.7.e.f.f.f.4.2.1.1.2.0.0.0.0.0.5.d.1.0.0.7.7.0.1.0.0.2.ip6.arpa ptr
and likewise with the b and c variants; it just never returns... thus there is your issue.
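If you do not have a helper script like ip6_arpa.pl to hand, dig can build the nibble name for you with -x, for example:
$ dig +trace -x 2001:770:1d5:0:211:24ff:fe75:6d56
$ dig @a.ns.bytemark.co.uk. -x 2001:770:1d5:0:211:24ff:fe75:6d56
which query the same ip6.arpa name as above.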
Delays when connecting to machine on sixxs ipv6 subnet
Shadow Hawkins on Thursday, 10 February 2011 16:42:13
Ah, that makes sense, thanks. So it's delegated to the correct nameservers; they're just not configured correctly for the reverse. I'll have a look into that.
Many thanks!
Roger
Delays when connecting to machine on sixxs ipv6 subnet
Shadow Hawkins on Thursday, 10 February 2011 17:24:26
Setting up the reverse mappings correctly has fixed the long timeouts.
In case anyone else has difficulties with this, I'll post the solution here. I'm using hosting at Bytemark (bytemark.co.uk), which uses tinydns for configuring customer DNS. I needed to add the following to my configuration:
.0.0.0.0.5.d.1.0.0.7.7.0.1.0.0.2.ip6.arpa::a.ns.bytemark.co.uk:86400
.0.0.0.0.5.d.1.0.0.7.7.0.1.0.0.2.ip6.arpa::b.ns.bytemark.co.uk:86400
.0.0.0.0.5.d.1.0.0.7.7.0.1.0.0.2.ip6.arpa::c.ns.bytemark.co.uk:86400
This sets up the reverse delegation authority. Then, when you add hosts such as:
6ravenclaw.codelibre.net:2001077001d500000000000000000001:86400
6hufflepuff.codelibre.net:2001077001d50000021124fffe756d56:86400
tinydns will automatically set up the reverse (PTR) mappings for these AAAA records as well. If you use a "3" prefix rather than "6", it sets up the AAAA record without the reverse mapping (useful when you don't own the IP address, or when the reverse is already configured elsewhere).
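For anyone not using tinydns, the equivalent in ordinary BIND-style zone file syntax would be a PTR record in the delegated /64 zone, something like:
; in the zone 0.0.0.0.5.d.1.0.0.7.7.0.1.0.0.2.ip6.arpa
6.5.d.6.5.7.e.f.f.f.4.2.1.1.2.0 IN PTR hufflepuff.codelibre.net.
(the owner name is the host's remaining 16 nibbles, reversed).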
Regards,
Roger