Discussion:
Steam In Home Streaming on ath9k wifi
Caleb Cushing
2017-11-18 22:14:44 UTC
In-Home Streaming is basically a high-bandwidth, low-latency way of
streaming a game so you can play it on another machine. I'm not sure
whether this is the best list to talk about this on, but since it
involves wifi, fq_codel, and cake, it seems like the best place. I did
start by talking about this on the forums, but I think I've basically
reached the end of what documented configuration can do on current LEDE,
and perhaps what can be done at all right now. That said, maybe I can
offer up this use case and help things improve in the future.

https://forum.lede-project.org/t/sqm-for-video-streaming-and-steam-in-home-streaming-over-wifi/8494/23


I've actually managed to get game streaming mostly smooth, but there's
still an occasional stutter (latency increases from ~5ms to around 100ms,
then goes away again) that I haven't figured out. Happy to provide more
information if it'll help this use case. Note: the additional src/dsthost
options mentioned in the forum actually seem to make things worse.

config queue
	option debug_logging '0'
	option verbosity '5'
	option enabled '1'
	option interface 'eth0.2'
	option upload '5500'
	option linklayer 'ethernet'
	option overhead '18'
	option qdisc_advanced '0'
	option qdisc 'cake'
	option script 'layer_cake.qos'
	option download '38000'

config queue
	option enabled '1'
	option interface 'wlan1'
	option qdisc_advanced '1'
	option squash_dscp '1'
	option squash_ingress '1'
	option ingress_ecn 'ECN'
	option egress_ecn 'NOECN'
	option qdisc_really_really_advanced '1'
	option itarget '1ms'
	option etarget '1ms'
	option download '260000'
	option upload '260000'
	option linklayer 'none'
	option debug_logging '1'
	option verbosity '10'
	option qdisc 'fq_codel'
	option script 'simplest.qos'
--
Caleb Cushing

http://xenoterracide.com
Toke Høiland-Jørgensen
2017-11-19 16:28:33 UTC
Post by Caleb Cushing
I've actually managed to get game streaming mostly smooth, but there's
still an occasional stutter (latency increases from ~5ms to around
100ms, then goes away again) that I haven't figured out. Happy to
provide more information if it'll help this use case. Note: the
additional src/dsthost options mentioned in the forum actually seem to
make things worse.
So this is two computers talking to each other over WiFi? What
chipsets/drivers are they using for WiFi? It may very well be that what
you're seeing is hiccups in the WiFi connection at the computer, not at
the router. In which case there's nothing you can do on the router to
fix it.

Another possibility is that it's an occasional signal drop that causes
excessive retries (either at the router or the AP). We have not gotten
around to limiting the retries in the drivers yet, so that can cause
quite a bit of very intermittent head of line blocking as well.

-Toke
Caleb Cushing
2017-11-19 21:18:58 UTC
Post by Toke Høiland-Jørgensen
So this is two computers talking to each other over WiFi? What
chipsets/drivers are they using for WiFi? It may very well be that what
you're seeing is hiccups in the WiFi connection at the computer, not at
the router. In which case there's nothing you can do on the router to
fix it.
Sure, possible; can't say it's not. I did manage to find a channel that
only I'm on in the 5 GHz range, and all machines are within 2 meters of
the AP. But of course it's wifi, so no real guarantees.

Both computers are running Windows 10. The server is running a Qualcomm
device; it says it's using the Killer Wireless-N/A/C driver, version
4.0.2.26 (fishy, I know; Killer Wireless is sort of its own QoS, though
I think I've effectively disabled it). The client is running an Intel
Dual Band Wireless-AC 8260, driver 19.71.1.1.
Post by Toke Høiland-Jørgensen
Another possibility is that it's an occasional signal drop that causes
excessive retries (either at the router or the AP). We have not gotten
around to limiting the retries in the drivers yet, so that can cause
quite a bit of very intermittent head of line blocking as well.
Could also be. Do those retries happen when it's UDP? 'Cause I think
Steam In-Home Streaming is basically all UDP; maybe the packets are just
getting dropped?
--
Caleb Cushing

http://xenoterracide.com
Toke Høiland-Jørgensen
2017-11-19 21:27:02 UTC
Post by Caleb Cushing
Post by Toke Høiland-Jørgensen
So this is two computers talking to each other over WiFi? What
chipsets/drivers are they using for WiFi? It may very well be that what
you're seeing is hiccups in the WiFi connection at the computer, not at
the router. In which case there's nothing you can do on the router to
fix it.
Sure, possible; can't say it's not. I did manage to find a channel that
only I'm on in the 5 GHz range, and all machines are within 2 meters of
the AP. But of course it's wifi, so no real guarantees.
Both computers are running Windows 10. The server is running a Qualcomm
device; it says it's using the Killer Wireless-N/A/C driver, version
4.0.2.26 (fishy, I know; Killer Wireless is sort of its own QoS, though
I think I've effectively disabled it). The client is running an Intel
Dual Band Wireless-AC 8260, driver 19.71.1.1.
Right, no idea how Windows drivers behave. But odds are that the
bottleneck is at the client, since that often has worse antennas than
the AP. If you're in a position to take packet captures at both clients
and AP you may be able to figure it out; may require tightly
synchronised clocks to do properly, though.

How do you see the latency spikes? An end-to-end ping? You could try
pinging the AP from both clients and see which one sees the latency
spike...
Post by Caleb Cushing
Post by Toke Høiland-Jørgensen
Another possibility is that it's an occasional signal drop that causes
excessive retries (either at the router or the AP). We have not gotten
around to limiting the retries in the drivers yet, so that can cause
quite a bit of very intermittent head of line blocking as well.
Could also be. Do those retries happen when it's UDP? 'Cause I think
Steam In-Home Streaming is basically all UDP; maybe the packets are just
getting dropped?
Yeah, those are link-layer retransmissions. The ath9k driver sometimes
retries individual packets up to 30 times, which is obviously way too
much...

-Toke
Caleb Cushing
2017-11-19 22:03:51 UTC
Post by Toke Høiland-Jørgensen
Right, no idea how Windows drivers behave. But odds are that the
bottleneck is at the client, since that often has worse antennas than
the AP. If you're in a position to take packet captures at both clients
and AP you may be able to figure it out; may require tightly
synchronised clocks to do properly, though.
I could try running Wireshark on both; it's a ton of packets, and I'll be
honest, I don't know that *I* would know what to do with them.
Post by Toke Høiland-Jørgensen
How do you see the latency spikes? An end-to-end ping? You could try
pinging the AP from both clients and see which one sees the latency
spike...
Steam In-Home Streaming has an in-game performance monitor; it shows you
your ping, estimated bandwidth, and how fast it's currently streaming. So
when the game stutters I also notice a recorded ping/latency increase.
Sadly, though, it doesn't include highly detailed logs to my knowledge.

Post by Toke Høiland-Jørgensen
Yeah, those are link-layer retransmissions. The ath9k driver sometimes
retries individual packets up to 30 times, which is obviously way too
much...
that's fun...
--
Caleb Cushing

http://xenoterracide.com
Dave Taht
2017-11-19 22:13:32 UTC
Post by Caleb Cushing
Post by Toke Høiland-Jørgensen
Right, no idea how Windows drivers behave. But odds are that the
bottleneck is at the client, since that often has worse antennas than
the AP. If you're in a position to take packet captures at both clients
and AP you may be able to figure it out; may require tightly
synchronised clocks to do properly, though.
I could try running wireshark on both, it's a ton of packets, and I'll be
honest, I don't know that *I* would know what to do with them.
Post by Toke Høiland-Jørgensen
How do you see the latency spikes? An end-to-end ping? You could try
pinging the AP from both clients and see which one sees the latency
spike...
Steam In-Home Streaming has an in-game performance monitor; it shows you
your ping, estimated bandwidth, and how fast it's currently streaming. So
when the game stutters I also notice a recorded ping/latency increase.
Sadly, though, it doesn't include highly detailed logs to my knowledge.
Post by Toke Høiland-Jørgensen
Yeah, those are link-layer retransmissions. The ath9k driver sometimes
retries individual packets up to 30 times, which is obviously way too
much...
that's fun...
Ideally we should retry at most twice at the lowest rate, and a bit
more as we go up.
Post by Caleb Cushing
--
Caleb Cushing
http://xenoterracide.com
_______________________________________________
Bloat mailing list
https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619
Hal Murray
2017-11-21 19:08:26 UTC
Right, no idea how Windows drivers behave. But odds are that the bottleneck
is at the client, since that often has worse antennas than the AP. If you're
in a position to take packet captures at both clients and AP you may be able
to figure it out; may require tightly synchronised clocks to do properly,
though.
It should be reasonable to synchronize the clocks at both ends well enough.

If that doesn't work and/or is inconvenient, you could post-process one
trace to adjust the time stamps. The idea is to scan both traces in
parallel to find the minimum transit times in each direction, then adjust
the time stamps on one end to allocate half the total time to each
direction. It would obviously be handy to have a few pings during a
period of low traffic for calibration.
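The post-processing idea above can be sketched as follows. This is a
hypothetical illustration (the function name and toy numbers are mine,
not from the thread), assuming you've already extracted per-packet
transit times from the two traces: receive timestamp minus send
timestamp, each read on its own host's clock.

```python
# Hypothetical sketch of the trace-alignment idea above.  Inputs are
# lists of measured transit times (receiver timestamp minus sender
# timestamp), one list per direction, taken from the two captures.
def estimate_clock_offset(ab_transits, ba_transits):
    """Estimate host B's clock offset relative to host A.

    Each measured A->B transit is (true one-way delay + offset); each
    B->A transit is (true delay - offset).  Taking the minimum in each
    direction filters out queueing delay, and splitting the difference
    allocates half the total minimum round trip to each direction.
    """
    min_ab = min(ab_transits)            # d_ab + offset
    min_ba = min(ba_transits)            # d_ba - offset
    return (min_ab - min_ba) / 2.0       # assumes d_ab == d_ba

# Toy example: true delay 2 ms each way, B's clock 5 ms ahead of A's.
ab = [0.0070, 0.0090, 0.0071]            # 2 ms + 5 ms (+ queueing)
ba = [-0.0030, -0.0025]                  # 2 ms - 5 ms (+ queueing)
offset = estimate_clock_offset(ab, ba)   # recovers ~0.005 s
corrected_ab = [t - offset for t in ab]  # now ~true one-way delays
```

The low-traffic pings mentioned above would supply the near-minimum
samples that make the two minima trustworthy.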

-------

There are two dimensions to clocks. One is the current time. The other is
the frequency. If the frequency is off, the clock will drift. (ntpd's
drift correction is usually stored someplace like /var/lib/ntp/ntp.drift.)

Unless you are interested in long runs, the clock will not drift far
enough to be a serious problem, so all you have to do is get the time
right before starting a run.

You might want to kill ntpd on the wifi end so it doesn't get confused
and yank the clock around.
--
These are my opinions. I hate spam.
Neil Davies
2017-11-22 10:31:36 UTC
Hal

We use this approach to automatically manage measurements.

There are a few more issues: the relative drift between the two clocks
can be as high as 200 ppm (though typically we observe 50-75 ppm), and
this drift is monotonic.

Also NTP can make changes at one (or both) ends - they show up as distinct
direction changes in the drift.

This gives a limit (or a measurement error function you have to work within).
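To put those ppm figures in perspective, here is a back-of-envelope
calculation (my own arithmetic, using only the drift rates quoted
above): accumulated timestamp error is simply drift rate times elapsed
capture time.

```python
# Back-of-envelope: one-way-delay error accumulated from clock drift.
def drift_error_ms(drift_ppm, run_seconds):
    """Timestamp error (ms) accumulated over a capture of given length."""
    return drift_ppm * 1e-6 * run_seconds * 1e3

typical = drift_error_ms(75, 60)     # 75 ppm over 1 minute  -> 4.5 ms
worst = drift_error_ms(200, 600)     # 200 ppm over 10 min   -> 120 ms
```

Which is why a short capture barely notices the drift, while a run of
several hundred seconds can be off by milliseconds unless the linear
drift is corrected for.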

Neil
Post by Hal Murray
Right, no idea how Windows drivers behave. But odds are that the bottleneck
is at the client, since that often has worse antennas than the AP. If you're
in a position to take packet captures at both clients and AP you may be able
to figure it out; may require tightly synchronised clocks to do properly,
though.
It should be reasonable to synchronize the clocks at both ends well enough.
If that doesn't work and/or is inconvenient, you could post-process one
trace to adjust the time stamps. The idea is to scan both traces in
parallel to find the minimum transit times in each direction, then adjust
the time stamps on one end to allocate half the total time to each
direction. It would obviously be handy to have a few pings during a
period of low traffic for calibration.
-------
There are two dimensions to clocks. One is the current time. The other is
the frequency. If the frequency is off, the clock will drift. (ntpd's drift
correction is usually stored in someplace like /var/lib/ntp/ntp.drift)
Unless you are interested in long runs, the clock will not drift far enough
to be a serious problem, so all you have to do is get the time right before
starting a run.
You might want to kill ntpd on the wifi end so it doesn't get confused and yank the clock around.
--
These are my opinions. I hate spam.
--
Caleb Cushing
2017-11-23 17:48:09 UTC
I'll try to get some measurements from both clients before the end of
the holiday weekend, but it's hard for me to say I'll get the clock sync
right. I have done some client-side tuning and driver updates, and that
seems to have improved things, but there are still some bumps that don't
make sense.

As a side note, noticing the request for cake testing: on the wlan, cake
was inferior, performance-wise, to fq_codel for this problem. I tried it
but got worse results (I also tried no optimizations on the wlan; also
worse performance). Not sure why that would be, but... well, there it is.
Post by Neil Davies
Hal
We use this approach to automatically manage measurements.
There are a few more issues - the relative drift between the two clocks can be
as high as 200ppm, though typically 50-75ppm is what we observe, but this drift
is monotonic.
Also NTP can make changes at one (or both) ends - they show up as distinct
direction changes in the drift.
This gives a limit (or a measurement error function you have to work within).
Neil
Post by Hal Murray
Post by Toke Høiland-Jørgensen
Right, no idea how Windows drivers behave. But odds are that the
bottleneck is at the client, since that often has worse antennas than
the AP. If you're in a position to take packet captures at both clients
and AP you may be able to figure it out; may require tightly
synchronised clocks to do properly, though.
It should be reasonable to synchronize the clocks at both ends well
enough.
If that doesn't work and/or is inconvenient, you could post-process one
trace to adjust the time stamps. The idea is to scan both traces in
parallel to find the minimum transit times in each direction, then
adjust the time stamps on one end to allocate half the total time to
each direction. It would obviously be handy to have a few pings during
a period of low traffic for calibration.
-------
There are two dimensions to clocks. One is the current time. The other
is the frequency. If the frequency is off, the clock will drift.
(ntpd's drift correction is usually stored someplace like
/var/lib/ntp/ntp.drift.)
Unless you are interested in long runs, the clock will not drift far
enough to be a serious problem, so all you have to do is get the time
right before starting a run.
You might want to kill ntpd on the wifi end so it doesn't get confused
and yank the clock around.
--
These are my opinions. I hate spam.
--
Caleb Cushing

http://xenoterracide.com
Hal Murray
2017-11-24 09:20:21 UTC
Post by Neil Davies
There are a few more issues - the relative drift between the two clocks
can be as high as 200ppm, though typically 50-75ppm is what we observe, but
this drift is monotonic.
200 ppm seems pretty high, but not off scale. If ntpd is running and not
getting confused by long queuing delays, it should correct the drift to well
under 1 ppm. If you turn on loopstats, you can graph it.
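For anyone wanting to follow the loopstats suggestion, a minimal parsing
sketch. The field order here is an assumption taken from ntpd's
documented loopstats record format; verify it against your ntpd version
before relying on it.

```python
# Sketch of reading ntpd loopstats records for graphing drift.
# Assumed field order (per ntpd's monitoring documentation): MJD day,
# second-of-day, clock offset (s), frequency offset (ppm), RMS jitter
# (s), Allan deviation (ppm), clock discipline time constant.
def parse_loopstats_line(line):
    day, sec, offset, freq = line.split()[:4]
    return {
        "time": int(day) * 86400 + float(sec),  # one monotonic time axis
        "offset_s": float(offset),
        "freq_ppm": float(freq),
    }

rec = parse_loopstats_line(
    "57570 63891.031 0.000086 13.778 0.000211 0.013380 6")
# plot rec["offset_s"] and rec["freq_ppm"] against rec["time"]
```

Graphing `freq_ppm` over time is exactly the drift plot being discussed:
a stable slope means well-disciplined clocks, sharp steps mean ntpd
changed its correction.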

If you are blasting the network and adding long queuing delays, ntpd can
easily get confused.

There is another quirk to keep in mind. The temperature coefficient of
the crystal is in the ballpark of 1 ppm per degree C. Things can change
significantly if an idle system starts flinging lots of bits around.
Post by Neil Davies
Also NTP can make changes at one (or both) ends - they show up as distinct
direction changes in the drift.
I'm not sure what you mean by "direction change". I'd expect a graph of
the time offset vs. time to be linear, and the slope would have a sharp
change if ntpd changed its "drift" correction, and/or maybe a rounded
bend as a system warmed up.

----------

Are you happy with whatever you are doing? Should we try to set things up
so ntpd works well enough? How close would you like the times to be? ...
--
These are my opinions. I hate spam.
Neil Davies
2017-11-24 09:34:59 UTC
Post by Hal Murray
Post by Neil Davies
There are a few more issues - the relative drift between the two clocks
can be as high as 200ppm, though typically 50-75ppm is what we observe, but
this drift is monotonic.
200 ppm seems pretty high, but not off scale. If ntpd is running and not
getting confused by long queuing delays, it should correct the drift to well
under 1 ppm. If you turn on loopstats, you can graph it.
I'm saying that is the maximum rate of drift between two clocks even
when they are under NTP control. As you say below, the clock rates
are not completely stable; they are temperature dependent.
When we did this with the guys at CERN we could
correlate the results with the workload (see below for references).

We've got ~1M experiments using this approach across various networks;
the numbers are what we are seeing in practice.

The caveat is that, after a while (i.e. several hundred seconds), the
clock drift can make a significant difference (i.e. a few ms) in the
one-way delay estimation.
Post by Hal Murray
If you are blasting the network and adding long queuing delays, ntpd can
easily get confused.
There is another quirk to keep in mind. The temperature coefficient of the
crystal is ballpark of 1 ppm per C. Things can change significantly if an
idle system starts flinging lots of bits around.
Post by Neil Davies
Also NTP can make changes at one (or both) ends - they show up as distinct
direction changes in the drift.
I'm not sure what you mean by "direction change". I'd expect a graph of the
time offset vs time to be linear and the slope would have a sharp change if
ntpd changed it's "drift" correction and/or maybe a rounded bend as a system
warmed up.
Don't forget you are measuring the difference in the rates between two
NTP clocks; hence, when one of the NTP systems decides to change the
drift rate, the relative rate can change direction.
Post by Hal Murray
----------
Are you happy with whatever you are doing? Should we try to set things up
so ntpd works well enough? How close would you like the times to be? ...

Yep, we're very happy - we don't care that there is a linear clock drift
(we can correct for that), and the step changes are infrequent and can
be eliminated from the long-term analysis.

You might find §4.4 (esp. §4.4.5) and §5.6 in
https://cds.cern.ch/record/1504817/files/CERN-THESIS-2013-004.pdf
interesting. It illustrates these sorts of issues.
Post by Hal Murray
--
These are my opinions. I hate spam.
Caleb Cushing
2017-11-26 07:25:36 UTC
ping from laptop

C:\Users\xeno>ping 192.168.1.105 -n 100

Pinging 192.168.1.105 with 32 bytes of data:
Reply from 192.168.1.105: bytes=32 time=3ms TTL=128
Reply from 192.168.1.105: bytes=32 time=3ms TTL=128
Reply from 192.168.1.105: bytes=32 time=3ms TTL=128
Reply from 192.168.1.105: bytes=32 time=3ms TTL=128
Reply from 192.168.1.105: bytes=32 time=4ms TTL=128
Reply from 192.168.1.105: bytes=32 time=3ms TTL=128
Reply from 192.168.1.105: bytes=32 time=3ms TTL=128
Reply from 192.168.1.105: bytes=32 time=3ms TTL=128
[92 further replies omitted; all between 3 ms and 5 ms]

Ping statistics for 192.168.1.105:
Packets: Sent = 100, Received = 100, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 3ms, Maximum = 5ms, Average = 3ms
Post by Neil Davies
Post by Hal Murray
Post by Neil Davies
There are a few more issues - the relative drift between the two clocks
can be as high as 200 ppm (though typically we observe 50-75 ppm), and
this drift is monotonic.
200 ppm seems pretty high, but not off scale. If ntpd is running and not
getting confused by long queuing delays, it should correct the drift to
well under 1 ppm. If you turn on loopstats, you can graph it.
I'm saying that is the maximum rate of drift between two clocks even
when they are under NTP control. As you say below, the clock rates
are not completely stable; they are temperature dependent.
When we did this with the guys at CERN we could
correlate the results with the workload (see below for references).
We've got ~1M experiments using this approach across various networks;
the numbers are what we are seeing in practice.
The caveat is that, after a while (i.e. several hundred seconds), the
clock drift can make a significant difference (i.e. a few ms) in the
one-way delay estimation.
Post by Hal Murray
If you are blasting the network and adding long queuing delays, ntpd can
easily get confused.
There is another quirk to keep in mind. The temperature coefficient of
the crystal is in the ballpark of 1 ppm per degree C. Things can change
significantly if an idle system starts flinging lots of bits around.
Post by Neil Davies
Also NTP can make changes at one (or both) ends - they show up as
distinct direction changes in the drift.
I'm not sure what you mean by "direction change". I'd expect a graph of
the time offset vs. time to be linear, and the slope would have a sharp
change if ntpd changed its "drift" correction, and/or maybe a rounded
bend as a system warmed up.
Don't forget you are measuring the difference in the rates between two
NTP clocks; hence, when one of the NTP systems decides to change the
drift rate, the relative rate can change direction.
Post by Hal Murray
----------
Are you happy with whatever you are doing? Should we try to set things
up so ntpd works well enough? How close would you like the times to
be? ...
Yep, we're very happy - we don't care that there is a linear clock drift
(we can correct for that), and the step changes are infrequent and can
be eliminated from the long-term analysis.
You might find §4.4 (esp. §4.4.5) and §5.6 in
https://cds.cern.ch/record/1504817/files/CERN-THESIS-2013-004.pdf
interesting. It illustrates these sorts of issues.
--
These are my opinions. I hate spam.
--
Caleb Cushing

http://xenoterracide.com
Caleb Cushing
2017-11-26 07:29:35 UTC
from desktop

Pinging 192.168.1.105 with 32 bytes of data:
Reply from 192.168.1.105: bytes=32 time<1ms TTL=128
Reply from 192.168.1.105: bytes=32 time<1ms TTL=128
Reply from 192.168.1.105: bytes=32 time<1ms TTL=128
Reply from 192.168.1.105: bytes=32 time<1ms TTL=128
[96 further identical replies omitted; all time<1ms]

Ping statistics for 192.168.1.105:
Packets: Sent = 100, Received = 100, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\Users\xenot>ping -n 100 192.168.1.130

Pinging 192.168.1.130 with 32 bytes of data:
Reply from 192.168.1.130: bytes=32 time=3ms TTL=128
Reply from 192.168.1.130: bytes=32 time=4ms TTL=128
Reply from 192.168.1.130: bytes=32 time=3ms TTL=128
[... 97 further replies, all between 3ms and 8ms, elided ...]

Ping statistics for 192.168.1.130:
Packets: Sent = 100, Received = 100, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 3ms, Maximum = 8ms, Average = 3ms
Post by Caleb Cushing
ping from laptop
C:\Users\xeno>ping 192.168.1.105 -n 100
Reply from 192.168.1.105: bytes=32 time=3ms TTL=128
Reply from 192.168.1.105: bytes=32 time=3ms TTL=128
Reply from 192.168.1.105: bytes=32 time=3ms TTL=128
[... 97 further replies, all between 3ms and 5ms, elided ...]
Packets: Sent = 100, Received = 100, Lost = 0 (0% loss),
Minimum = 3ms, Maximum = 5ms, Average = 3ms
Post by Neil Davies
There are a few more issues - the relative drift between the two clocks
can be as high as 200 ppm, though we typically observe 50-75 ppm, and
this drift is monotonic.
200 ppm seems pretty high, but not off scale. If ntpd is running and not
getting confused by long queuing delays, it should correct the drift to well
under 1 ppm. If you turn on loopstats, you can graph it.
I’m saying that is the maximum rate of drift between two clocks even
when they are under NTP control. As you say below the clock rates
are not completely stable they are temperature dependent.
When we did this with the guys at CERN we could
correlate the results with the workload (see below for references).
We’ve got ~1M experiments using this approach across various networks,
the numbers are what we are seeing in practice.
The caveat is that, after a while (i.e. several hundred seconds), the clock
drift can make a significant difference (i.e. a few ms) in the one-way delay
estimation.
If you are blasting the network and adding long queuing delays, ntpd can
easily get confused.
There is another quirk to keep in mind. The temperature coefficient of the
crystal is in the ballpark of 1 ppm per °C. Things can change significantly
if an idle system starts flinging lots of bits around.
Also NTP can make changes at one (or both) ends - they show up as distinct
direction changes in the drift.
I'm not sure what you mean by "direction change". I'd expect a graph of the
time offset vs. time to be linear, and the slope would show a sharp change if
ntpd changed its "drift" correction, and/or maybe a rounded bend as a system
warmed up.
Don’t forget you are measuring the difference in rates between two NTP-disciplined
clocks; hence, when one of the NTP systems decides to change its drift rate,
the relative rate can change direction.
----------
Are you happy with whatever you are doing? Should we try to set things up
so ntpd works well enough? How close would you like the times to be? 

Yep, we’re very happy - we don’t care that there is a linear clock drift (we
can correct for that) and the step changes are infrequent and can be eliminated
from the long term analysis.
You might find §4.4 (esp. §4.4.5) and §5.6 in
https://cds.cern.ch/record/1504817/files/CERN-THESIS-2013-004.pdf
interesting. They illustrate these sorts of issues.
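The "linear clock drift (we can correct for that)" point above can be sketched numerically. A 50 ppm relative drift accumulates 50 µs of apparent one-way delay (OWD) change per second, i.e. several ms over a few hundred seconds, which matches the caveat earlier in the thread. This is a minimal illustration on synthetic data using an ordinary least-squares fit, not the actual analysis pipeline discussed here:

```python
# Remove a linear clock-drift trend from one-way delay samples.
# Assumes the drift is linear over the window, which holds between
# NTP rate-step changes (the "direction changes" discussed above).

def detrend_owd(times_s, owd_ms):
    """Least-squares fit owd = a + b*t; return residuals and drift in ppm."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_d = sum(owd_ms) / n
    cov = sum((t - mean_t) * (d - mean_d) for t, d in zip(times_s, owd_ms))
    var = sum((t - mean_t) ** 2 for t in times_s)
    slope = cov / var                       # ms of apparent OWD per second
    intercept = mean_d - slope * mean_t
    residuals = [d - (intercept + slope * t) for t, d in zip(times_s, owd_ms)]
    drift_ppm = slope * 1000.0              # 1 ms/s of slope == 1000 ppm
    return residuals, drift_ppm

# Synthetic example: true OWD 2 ms, clocks drifting apart at 50 ppm.
ts = list(range(0, 300, 10))                # samples at 0..290 s
owd = [2.0 + 0.05 * t for t in ts]          # 50 ppm -> 0.05 ms/s
resid, ppm = detrend_owd(ts, owd)
# At t=290 s the raw OWD is inflated by 14.5 ms; the residuals recover
# the flat 2 ms baseline (up to floating-point noise).
```

Step changes from NTP adjustments would show up as a kink in the fit residuals, which is why they have to be segmented out of a long-term analysis.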
--
These are my opinions. I hate spam.
_______________________________________________
Bloat mailing list
https://lists.bufferbloat.net/listinfo/bloat
--
Caleb Cushing

http://xenoterracide.com
Jan Ceuleers
2017-11-26 07:55:30 UTC
Post by Caleb Cushing
from desktop
Reply from 192.168.1.105 <http://192.168.1.105>: bytes=32 time<1ms TTL=128
I've also performed a ping test via wifi to the AP. Here are the stats
for 100 pings:

100 packets transmitted, 100 received, 0% packet loss, time 99156ms
rtt min/avg/max/mdev = 1.280/6.938/93.746/17.846 ms
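For reference, the mdev figure in that summary line is a population standard deviation of the RTT samples, which iputils ping computes from running sums of rtt and rtt². A short sketch (the sample trace below is synthetic, chosen to resemble the mostly-sub-2ms-with-one-spike behaviour described next):

```python
import math

def ping_stats(rtts_ms):
    """min/avg/max/mdev the way iputils ping reports them."""
    n = len(rtts_ms)
    avg = sum(rtts_ms) / n
    # mdev = sqrt(mean of squares - square of mean), i.e. population stddev
    mdev = math.sqrt(sum(r * r for r in rtts_ms) / n - avg * avg)
    return min(rtts_ms), avg, max(rtts_ms), mdev

# 99 quiet samples plus a single ~94 ms spike: the one outlier dominates
# mdev, which is why an occasional scan shows up so clearly in the summary.
sample = [1.3] * 99 + [93.7]
lo, avg, hi, mdev = ping_stats(sample)
```

This is why a large mdev relative to the average is a good quick indicator of intermittent stalls rather than uniformly high latency.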

Both the laptop and the AP have ath9k cards. The laptop runs Ubuntu
16.04 and the AP runs Debian Jessie. Both have achieved NTP sync.

The vast majority of the ping RTTs are below 2ms, but there are
occasional spikes of multiple tens of ms. I expect that this is due to
the fact that the laptop isn't the only STA that is associated with the AP.
Jonathan Morton
2017-11-26 10:05:21 UTC
Another explanation for latency spikes on the order of 100ms is that a
periodic (and wholly unnecessary) scan for other APs is run, which requires
the wifi radio to be temporarily tuned away from the currently associated
AP's frequency.

- Jonathan Morton
Jan Ceuleers
2017-11-26 10:53:04 UTC
Post by Jonathan Morton
Another explanation for latency spikes on the order of 100ms is that a
periodic (and wholly unnecessary) scan for other APs is run, which
requires the wifi radio to be temporarily tuned away from the currently
associated AP's frequency.
My understanding is that APs that operate on channels that are wider
than 20MHz must back off to 20MHz in case they detect other SSIDs that
are competing for the wider channel. So this is clearly "unnecessary"
from the point of view of the AP itself, but it does make the AP a
better citizen to be aware of who else is trying to access the medium.
Jonathan Morton
2017-11-26 10:55:59 UTC
Even if true, you can detect competing traffic without having to retune the
radio or do a complete scan.

- Jonathan Morton
Steinar H. Gunderson
2017-11-26 11:54:04 UTC
Post by Jan Ceuleers
Post by Jonathan Morton
Another explanation for latency spikes on the order of 100ms is that a
periodic (and wholly unnecessary) scan for other APs is run, which
requires the wifi radio to be temporarily tuned away from the currently
associated AP's frequency.
My understanding is that APs that operate on channels that are wider
than 20MHz must back off to 20MHz in case they detect other SSIDs that
are competing for the wider channel.
It's not about the AP, it's about the client. (APs can detect extension
channel interference without doing a scan, although I don't know if you need
this fallback at all on 5 GHz.)

/* Steinar */
--
Homepage: https://www.sesse.net/
Jan Ceuleers
2017-11-26 13:03:24 UTC
Post by Steinar H. Gunderson
It's not about the AP, it's about the client. (APs can detect extension
channel interference without doing a scan, although I don't know if you need
this fallback at all on 5 GHz.)
You are absolutely right. I have disabled the scans and now the stats
for the same test look like this:

100 packets transmitted, 100 received, 0% packet loss, time 99153ms
rtt min/avg/max/mdev = 1.225/2.403/49.129/5.933 ms

Disabling the scans needed to be done on the AP side though (in
hostapd.conf):

# If set non-zero, require stations to perform scans of overlapping
# channels to test for stations which would be affected by 40 MHz traffic.
# This parameter sets the interval in seconds between these scans. Setting
# this to non-zero allows 2.4 GHz band AP to move dynamically to a 40 MHz
# channel if no co-existence issues with neighboring devices are found.

This was set to 10 when I did the first test. I have now reset it to 0
(the default) which results in the above improved stats.

Thanks, Jan
Jan Ceuleers
2017-11-26 13:05:47 UTC
Post by Jan Ceuleers
Disabling the scans needed to be done on the AP side though (in
hostapd.conf):
# If set non-zero, require stations to perform scans of overlapping
# channels to test for stations which would be affected by 40 MHz traffic.
# This parameter sets the interval in seconds between these scans. Setting
# this to non-zero allows 2.4 GHz band AP to move dynamically to a 40 MHz
# channel if no co-existence issues with neighboring devices are found.
This was set to 10 when I did the first test. I have now reset it to 0
(the default) which results in the above improved stats.
I failed to mention the name of the hostapd.conf parameter in question.
Sorry about that. It's

obss_interval=0

Jan
Steinar H. Gunderson
2017-11-26 13:13:52 UTC
Post by Jan Ceuleers
Disabling the scans needed to be done on the AP side though (in
hostapd.conf):
# If set non-zero, require stations to perform scans of overlapping
# channels to test for stations which would be affected by 40 MHz traffic.
# This parameter sets the interval in seconds between these scans. Setting
# this to non-zero allows 2.4 GHz band AP to move dynamically to a 40 MHz
# channel if no co-existence issues with neighboring devices are found.
Note again, though, that this is 2.4 GHz only. In general, you don't want
40 MHz channels on 2.4 GHz, since the band is so crowded anyway. Use 5 GHz :-)

/* Steinar */
--
Homepage: https://www.sesse.net/
Dave Taht
2017-11-26 17:53:51 UTC
Post by Jonathan Morton
Another explanation for latency spikes on the order of 100ms is that a
periodic (and wholly unnecessary) scan for other APs is run, which requires
the wifi radio to be temporarily tuned away from the currently associated
AP's frequency.
I'd written that up here:

http://blog.cerowrt.org/post/disabling_channel_scans/

Which was improved in some release of network manager

https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/373680
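For readers on NetworkManager, one commonly suggested client-side mitigation (sketched here as a hypothetical keyfile fragment; the file name, SSID, and BSSID below are placeholders, not values from this thread) is to pin the Wi-Fi profile to a single BSSID, so the client has less reason to keep scanning for roaming candidates:

```ini
; /etc/NetworkManager/system-connections/home.nmconnection (hypothetical fragment)
[wifi]
ssid=HomeNet
; Pinning the BSSID ties the profile to one specific AP, which in
; practice suppresses periodic background scans for roaming candidates.
bssid=AA:BB:CC:DD:EE:FF
```

The trade-off is that the client will no longer roam to another AP broadcasting the same SSID.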
Post by Jonathan Morton
- Jonathan Morton
--
Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619
Jan Ceuleers
2017-11-26 18:43:41 UTC
Resending with the from-address with which I'm subscribed to the list
Post by Dave Taht
Post by Jonathan Morton
Another explanation for latency spikes on the order of 100ms is that a
periodic (and wholly unnecessary) scan for other APs is run, which requires
the wifi radio to be temporarily tuned away from the currently associated
AP's frequency.
http://blog.cerowrt.org/post/disabling_channel_scans/
Which was improved in some release of network manager
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/373680
Dave,

Thanks, but that's not the same problem I experienced. Yours was
entirely client-side (i.e. it was behaviour of Network Manager). My
problem was due to the AP asking the client to periodically perform
scans (by means of hostapd's obss_interval parameter).

Similar (but not the same) symptoms - different cause.

Jan
Dave Taht
2017-11-26 23:11:55 UTC
Post by Jan Ceuleers
Resending with the from-address with which I'm subscribed to the list
Post by Dave Taht
Post by Jonathan Morton
Another explanation for latency spikes on the order of 100ms is that a
periodic (and wholly unnecessary) scan for other APs is run, which requires
the wifi radio to be temporarily tuned away from the currently associated
AP's frequency.
http://blog.cerowrt.org/post/disabling_channel_scans/
Which was improved in some release of network manager
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/373680
Dave,
Thanks, but that's not the same problem I experienced. Yours was
entirely client-side (i.e. it was behaviour of Network Manager). My
problem was due to the AP asking the client to periodically perform
scans (by means of hostapd's obss_interval parameter).
Got it. Thanks!
Post by Jan Ceuleers
Similar (but not the same) symptoms - different cause.
Jan
Caleb Cushing
2017-12-04 07:24:47 UTC
So I was going to upload a couple of very large Wireshark dumps, but then I
looked at something again and realized I was reading one of Steam's poorly
documented performance metrics wrong. It appears that more often than not,
my problem is actually host- or client-side in the CPU/encoding, so my
network setup is good. There may also have been some client-side network
driver issues. It also seems that even slight alternate network usage (such
as leaving a web browser open) can affect the stream, which seems strange,
as I can't imagine how the router could affect that one way or the other.
So thanks for the help. Looking forward to more improvements on your end.
Post by Jonathan Morton
Post by Jan Ceuleers
Resending with the from-address with which I'm subscribed to the list
Post by Dave Taht
Post by Jonathan Morton
Another explanation for latency spikes on the order of 100ms is that a
periodic (and wholly unnecessary) scan for other APs is run, which requires
the wifi radio to be temporarily tuned away from the currently associated
AP's frequency.
http://blog.cerowrt.org/post/disabling_channel_scans/
Which was improved in some release of network manager
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/373680
Dave,
Thanks, but that's not the same problem I experienced. Yours was
entirely client-side (i.e. it was behaviour of Network Manager). My
problem was due to the AP asking the client to periodically perform
scans (by means of hostapd's obss_interval parameter).
Got it. Thanks!
Post by Jan Ceuleers
Similar (but not the same) symptoms - different cause.
Jan
--
Caleb Cushing

http://xenoterracide.com