Discussion:
[Bloat] debloats/day metric?
Dave Taht
2018-08-25 16:57:36 UTC
I'm always casting about for some simple metric, some simple phrase,
that we can use to describe what we're about. Lately - without
formally defining it mathematically as yet - I've been talking about
"badwidth" - what you get from your typical ISP - and "goodwidth",
what debloating does - which originally sprang from me typo-ing
"bandwidth".

More recently I tried combating the perception that packet
drops/marks are "bad" by renaming them to "debloats/day".

Codel kicks in rarely, but I'm pretty sure every time it does it saves
on a bit of emotional upset and jitter for the user. For example, I get
about 3000 drops/ECN marks a day on one inbound 100mbit/20mbit campus
link (about 12,000 on the wifi links), and outbound a mere ~100 or so.

But: Every one of those comforts me 'cause I feel like I'm saving a
~500ms latency excursion for all the users of this (640ms badwidth
down/280ms up) comcast link.

I am kind of curious as to y'all's regular "debloats/day"?
--
Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619
Pete Heist
2018-08-27 07:44:26 UTC
Not sure how to answer exactly, but I’m interested in some way to measure bloat more directly, both instantaneously and over time. My current plan for instantaneous measurement is to modify irtt to send request pairs, one given strict priority at the bottleneck and one best effort, and then measure the difference between the two (for both RTT and OWD).

Assuming this yields useful data, maybe there is a long-term stat that could be derived from it that indicates how much bloat there is for a given link, or how well it has been mitigated.
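
If it helps picture it, the pair-probe idea might look roughly like the sketch below. This is not irtt itself, just a standalone toy in Go assuming a plain UDP echo reflector at a made-up address, and it sends the two probes back to back rather than as a true simultaneous pair:

// Not irtt, just a minimal sketch: send one EF-marked probe and one
// best-effort probe to a hypothetical UDP echo reflector and compare RTTs.
package main

import (
	"fmt"
	"log"
	"net"
	"time"

	"golang.org/x/net/ipv4"
)

// probe sends one datagram with the given DSCP codepoint and returns the
// time until the echo comes back.
func probe(reflector string, dscp int) (time.Duration, error) {
	c, err := net.Dial("udp4", reflector)
	if err != nil {
		return 0, err
	}
	defer c.Close()

	// DSCP lives in the upper six bits of the TOS byte.
	if err := ipv4.NewConn(c).SetTOS(dscp << 2); err != nil {
		return 0, err
	}

	start := time.Now()
	if _, err := c.Write([]byte("probe")); err != nil {
		return 0, err
	}
	c.SetReadDeadline(start.Add(2 * time.Second))
	buf := make([]byte, 1500)
	if _, err := c.Read(buf); err != nil {
		return 0, err
	}
	return time.Since(start), nil
}

func main() {
	const reflector = "192.0.2.1:2112" // hypothetical echo server

	prio, err := probe(reflector, 46) // EF, given strict priority at the bottleneck
	if err != nil {
		log.Fatal(err)
	}
	be, err := probe(reflector, 0) // best effort
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("prio %v  best-effort %v  difference %v\n", prio, be, be-prio)
}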

For interest, here are abbreviated cake stats from 12 hours at the camp (~40Mbit p2p WiFi), so double those for an approximate daily stat:

Egress:
Tin 0
pkts 12327019
bytes 3838819739
way_inds 903650
way_miss 261326
drops 929
marks 8

Ingress:
Tin 0
pkts 28783116
bytes 36517120801
way_inds 2031504
way_miss 250153
drops 12648
marks 24
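
For what it’s worth, turning those counters into Dave’s "debloats/day" is just (drops + marks) scaled up to 24 hours; a trivial sketch using the 12-hour snapshots above:

// Back-of-the-envelope only: scale a (drops + marks) counter delta, taken
// over some sampling interval, up to a daily rate. The figures in main()
// are just the 12-hour egress/ingress snapshots quoted above.
package main

import (
	"fmt"
	"time"
)

// counters is a snapshot of the cake drop and ECN-mark totals for one
// direction, as read from `tc -s qdisc`.
type counters struct {
	drops, marks uint64
}

// debloatsPerDay extrapolates the counter growth over the sampling
// interval to a 24-hour rate.
func debloatsPerDay(start, end counters, interval time.Duration) float64 {
	delta := float64((end.drops - start.drops) + (end.marks - start.marks))
	return delta * float64(24*time.Hour) / float64(interval)
}

func main() {
	egress := counters{drops: 929, marks: 8}     // 12 h egress
	ingress := counters{drops: 12648, marks: 24} // 12 h ingress

	fmt.Printf("egress  ~%.0f debloats/day\n", debloatsPerDay(counters{}, egress, 12*time.Hour))
	fmt.Printf("ingress ~%.0f debloats/day\n", debloatsPerDay(counters{}, ingress, 12*time.Hour))
}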
Jonathan Morton
2018-08-27 08:02:02 UTC
…request pairs, one given strict priority at the bottleneck and one best effort, then measure the difference between the two (for both RTT and OWD).
Assuming this yields useful data…
For the overwhelming majority of bloated bottlenecks, it will not - because they have zero concept of "strict priority".

- Jonathan Morton
Pete Heist
2018-08-27 09:32:46 UTC
Post by Jonathan Morton

request pairs, one given strict priority at the bottleneck and one best effort, then measure the difference between the two (for both RTT and OWD).
Assuming this yields useful data

For the overwhelming majority of bloated bottlenecks, it will not - because they have zero concept of "strict priority".
To clarify, I control the bottleneck link and would give strict priority with tc to a chosen DSCP marking, then use that marking in one of the two requests in the pair. That would be required to make this measurement. I realize, though, that there are other places in the stack where I can’t give priority, which may pollute the results enough to matter.

But, as a rough illustration, attached are two flent rrul_be runs with and without cake on my home connection (p2p WiFi with an airOS device). In the plot without cake, it’s pretty clear that ICMP is being prioritized, as its RTT stays relatively stable under load, while UDP RTT increases. (I believe this prioritization happens when airMAX is enabled in airOS, because when it’s disabled, ICMP and UDP RTTs under load are almost identical.)

Presumably, the difference between the two RTTs would approximate bloat. Or if not, why not? And presumably, I could create a similar effect by giving priority to one of two requests in a pair...
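
As for the long-term stat, one purely hypothetical aggregation over a day of pairs could just be the mean and a high percentile of that difference, something like (made-up sample deltas, crude nearest-rank percentile):

// Sketch only: summarize best-effort minus prioritized RTT over many pairs.
package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// bloatSummary returns the mean and the pct-th percentile (nearest rank)
// of the per-pair queueing-delay estimates.
func bloatSummary(deltas []time.Duration, pct float64) (mean, p time.Duration) {
	if len(deltas) == 0 {
		return 0, 0
	}
	sorted := append([]time.Duration(nil), deltas...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })

	var sum time.Duration
	for _, d := range sorted {
		sum += d
	}
	mean = sum / time.Duration(len(sorted))

	idx := int(math.Ceil(pct/100*float64(len(sorted)))) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return mean, sorted[idx]
}

func main() {
	// Fake samples standing in for a day of pair probes.
	samples := []time.Duration{
		1 * time.Millisecond, 2 * time.Millisecond, 3 * time.Millisecond,
		4 * time.Millisecond, 45 * time.Millisecond, 120 * time.Millisecond,
	}
	mean, p95 := bloatSummary(samples, 95)
	fmt.Printf("mean bloat ~%v, p95 ~%v\n", mean, p95)
}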

Pete
