Discussion:
Tuning fq_codel: are there more best practices for slow connections? (<1mbit)
Y
2017-11-02 06:42:10 UTC
hi.

My connection is 810 kbps (<= 1 Mbps).

This is my setting for fq_codel:

quantum=300

target=20ms
interval=400ms

MTU=1478 (for PPPoA)

I cannot compare well, but latency is around 14ms-40ms.

Yutaka.
I'm trying to gather advice for people stuck on older connections. It
appears that having dedicated / micromanaged tc classes greatly
outperforms the "no knobs" fq_codel approach for connections with
slow upload speed. Competing ICMP traffic quickly begins to drop
(fq_codel) or be delayed considerably (under SFQ). From reading the
tuning best practices page
(https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/),
fq_codel is not optimized for this scenario (<2.5 Mbps).
Of particular concern is that a no-knobs SFQ works better for me than
an untuned codel (more delay but much less loss for small flows).
People just flipping the fq_codel button on their router at these low
speeds could be doing themselves a disservice.
I've toyed with increasing the target and this does solve the
excessive drops. I haven't played with limit and quantum all that much.
My go-to solution for this would be different classes,
a.k.a. traditional QoS. But wouldn't it be possible to tune
fq_codel to punish the large flows 'properly' for this very low bandwidth
scenario? Surely <1 kB ICMP packets can squeeze through properly
without being dropped if there is 350 kbps available, if the competing
flow is managed correctly.
I could create a class filter by packet length, thereby moving
ICMP/VoIP to its own tc class, but this goes against "no knobs";
it seems like I'm re-inventing the wheel of fair queueing - shouldn't
the smallest flows automatically never be delayed/dropped?
Lowering quantum below 1500 is confusing - serving a fractional packet
in a time interval?
Is there real value in tuning fq_codel for these connections or should
people migrate to something else like nfq_codel?
Jonathan Morton
2017-11-02 07:11:46 UTC
Have you tried Cake? It has automatic tuning of interval/target, based on
the bandwidth setting of its internal shaper. Also lots of other goodies.

- Jonathan Morton
Sebastian Moeller
2017-11-02 08:23:20 UTC
Hi cloneman,
I'm trying to gather advice for people stuck on older connections. It appears that having dedicated / micromanaged tc classes greatly outperforms the "no knobs" fq_codel approach for connections with slow upload speed.
(https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/) fq_codel
This page was last updated in 2014; it seems we have learned a few tricks since then. May I recommend you look into sqm-scripts (see https://lede-project.org/docs/user-guide/sqm for a user guide), if only to see how we currently recommend configuring fq_codel.
One of the biggest issues with fq_codel at slow speeds (specifically speeds below (1526*8)/0.005 = 2441600 bps) is that even a single packet might use up most of, or even exceed, the 5ms default "target" duration that fq_codel/codel defaults to. This is a problem because exceeding that time will cause fq_codel to switch into drop mode and your experience will be choppy. We now recommend simply increasing target to at least around 1.5 MTUs' worth of transfer time; for fast links this will be smaller than the default 5ms, so we do nothing, but for slower links it will not be, so we adjust things. fq_codel has no way of knowing the available bandwidth and hence cannot auto-compensate for that.
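For the 810 kbps uplink discussed in this thread, that rule of thumb works out to roughly 1643 bytes (the ATM-expanded wire size) * 8 / 810000 bit/s = ~16 ms per packet, i.e. a target of about 1.5 * 16 = 24-25 ms. A minimal sketch of overriding the defaults on an existing HTB leaf - the device eno1, the class handle 1:26 and the exact values are only placeholders inspired by the stats later in the thread:

  tc qdisc replace dev eno1 parent 1:26 fq_codel limit 300 quantum 300 target 25ms interval 250ms noecn

Here interval is simply stretched so that target stays within the 5-10% band mentioned below; leaving interval at its 100ms default is also a defensible choice, see the later discussion of interval.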
Of particular concern is that a no-knobs SFQ works better for me than an untuned codel (more delay but much less loss for small flows). People just flipping the fq_codel button on their router at these low speeds could be doing themselves a disservice.
I would in all modesty recommend that people rather look into using sqm-scripts (at least if using LEDE/OpenWrt or other Linux-based distributions), which should give a better starting point for their own experiments and modifications. So I am not saying do not experiment yourself, but rather start from a known decent starting point (also, sqm-scripts easily handles user-supplied scripts and makes it quite comfortable to get started with playing with qdiscs and traffic shapers).
I've toyed with increasing the target and this does solve the excessive drops. I haven't played with limit and quantum all that much.
Target is exactly the value of interest here (except it seems reasonable to also adjust interval, as target is supposed to be in the range of 5-10% of interval, so if target changes, interval might need to change as well).
My go-to solution for this would be different classes, a.k.a. traditional QoS. But wouldn't it be possible to tune fq_codel to punish the large flows 'properly' for this very low bandwidth scenario? Surely <1 kB ICMP packets can squeeze through properly without being dropped if there is 350 kbps available, if the competing flow is managed correctly.
At 2.5 Mbps, one full MTU packet in transfer will make all other flows with queued packets exceed the default 5ms target and hence put them into drop mode; extending target is the right thing to do here... Other than that, fq_codel does try to boost sparse flows, but you will only see this once you have extended target appropriately.
I could create a class filter by packet length, thereby moving ICMP/VoIP to its own tc class, but this goes against "no knobs"; it seems like I'm re-inventing the wheel of fair queueing - shouldn't the smallest flows automatically never be delayed/dropped?
Not dropping based on a feature like length will simply invite abuse, so I would not go there.
Lowering Quantum below 1500 is confusing, serving a fractional packet in a time interval?
No, not serving fractional packets, but taking multiple rounds through the "scheduler" before a flow with a large queued packet will be allowed to send; this should also allow smaller packets to squeeze by faster (without affecting bandwidth fairness).
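Rough arithmetic for the quantum=300 value used elsewhere in this thread: a flow holding a 1643-byte (ATM-expanded) packet needs about six deficit rounds (1643 / 300 = ~5.5) before it has accumulated enough credit to transmit, while a ~100-byte ICMP or DNS packet still goes out in a single round, so small packets interleave ahead of bulk ones without changing the long-run per-flow byte shares.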
Is there real value in tuning fq_codel for these connections or should people migrate to something else like nfq_codel?
Again, maybe just using sqm-scripts might solve most of these issues in a user-friendly fashion. sqm-scripts also makes it easy to use the experimental cake qdisc, which has quite a number of cool tricks up its sleeve, like piercing through NAT to get the "real" source and destination IP addresses; these can be used to easily configure a mode in which cake tries to achieve fairness by the number of concurrently active internal host IPs (and for many end users that seems to be sufficient to avoid having to bother any further with twiddling QoS/AQM settings).
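For reference, a rough sketch of what such an sqm-scripts setup looks like on LEDE/OpenWrt (UCI file /etc/config/sqm) - the section name, interface name and rates here are placeholders, and the option names should be checked against the user guide linked above:

  config queue 'wan'
          option enabled '1'
          option interface 'pppoa-wan'
          option download '10500'
          option upload '800'
          option script 'simple.qos'
          option qdisc 'fq_codel'
          option linklayer 'atm'
          option overhead '10'

The scripts are then meant to pick target/interval values appropriate for the configured rates instead of leaving them at the 5ms/100ms defaults.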

Best Regards
Sebastian Moeller
2017-11-02 08:25:39 UTC
Hi Y.
Post by Y
hi.
My connection is 810 kbps (<= 1 Mbps).
This is my setting for fq_codel:
quantum=300
target=20ms
interval=400ms
MTU=1478 (for PPPoA)
I cannot compare well, but latency is around 14ms-40ms.
Under full saturation, in theory you would expect the average latency to equal the sum of the upstream target and the downstream target (which in your case would be 20 + ???); in reality I often see something like 1.5 to 2 times the expected value (but I have never inquired any deeper, so that might be a measurement artifact)...

Best Regards
Kathleen Nichols
2017-11-02 16:33:05 UTC
Post by Sebastian Moeller
Hi Y.
Post by Y
hi.
My connection is 810 kbps (<= 1 Mbps).
This is my setting for fq_codel:
quantum=300
target=20ms
interval=400ms
MTU=1478 (for PPPoA)
I cannot compare well, but latency is around 14ms-40ms.
Under full saturation, in theory you would expect the average latency to equal the sum of the upstream target and the downstream target (which in your case would be 20 + ???); in reality I often see something like 1.5 to 2 times the expected value (but I have never inquired any deeper, so that might be a measurement artifact)...
An MTU packet would cause 14.6ms of delay. To cause a codel drop, you'd
need to have a queue of more than one packet hang around for 400ms. I
would suspect that if you looked at the dynamics of the delay you'd see it
going up and down, probably averaging to something less than two
packet times. Delay vs. time is probably going to be oscillatory.
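For reference, the arithmetic behind that estimate at 810 kbit/s: 1478 bytes * 8 / 810000 bit/s = 14.6 ms for one MTU-sized packet before ATM expansion, and 1643 * 8 / 810000 = 16.2 ms for the ATM-expanded wire size quoted elsewhere in the thread, so even two queued full-size packets already amount to roughly 30 ms of delay.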

Is the unloaded RTT on the order of 2-300 ms?
Y
2017-11-02 16:53:50 UTC
Hi, Kathleen.

The formula for target is 1643 bytes / 810 kbps = 0.015846836 s.

That includes the ATM link-layer padding.
Post by Kathleen Nichols
Is the unloaded RTT on the order of 2-300 ms?
(When I do a speedtest upload with ping to 8.8.8.8)
the ping RTT is around 30ms-80ms.
The average is around 40ms-50ms.
There is no delay of over 100ms.
Delay vs time is probably going to be oscillatory.

yes :)
Y
2017-11-02 16:58:29 UTC
Hi, Moeller.

The formula for target is 1643 bytes / 810 kbps = 0.015846836 s.

That includes the ATM link-layer padding.

16ms plus 4ms is my sense :P

My connection is a 12Mbps/1Mbps ADSL PPPoA line,
and I set 7Mbps/810kbps to bypass the router buffer.

I changed target to 27ms and interval to 540ms as you say (down delay plus
upload delay).

It works well now.
Thank you.

Yutaka.
Sebastian Moeller
2017-11-02 20:31:41 UTC
Hi Yutaka,
Post by Y
Hi, Moeller.
The formula for target is 1643 bytes / 810 kbps = 0.015846836 s.
That includes the ATM link-layer padding.
16ms plus 4ms is my sense :P
My connection is a 12Mbps/1Mbps ADSL PPPoA line,
and I set 7Mbps/810kbps to bypass the router buffer.
That sounds quite extreme; on the uplink, with the proper link layer adjustments, you should be able to go up to 100% of the sync rate as reported by the modem (unless your ISP has another traffic shaper at a higher level). And going from 12 to 7 is also quite extreme, given that the ATM link layer adjustments will cost you another 9% of bandwidth. Then again, 12/1 might be the contracted maximal rate; what are the sync rates as reported by your modem?
Post by Y
I changed target to 27ms and interval to 540ms as you say (down delay plus upload delay).
I could be out to lunch, but this large interval seems counter-intuitive. The idea (and please, anybody, correct me if I am wrong) is that interval should be long enough for both end points to realize a drop/ECN marking; in essence that would be the RTT of a flow (plus a small add-on to allow for some variation). In practice you will need to set one interval for all flows, and empirically 100ms works well, unless most of your flows go to more remote places, in which case setting interval to the real RTT would be better. But an interval of 540ms seems quite extreme (unless you often use connections to hosts with only satellite links). Have you tried something smaller?
Post by Y
It works well now.
Could you post the output of "tc -d qdisc" and "tc -s qdisc", please, so I have a better idea of what your configuration currently is?

Best Regards
Sebastian
Yutaka
2017-11-03 00:31:23 UTC
Hi, Sebastian.
Post by Sebastian Moeller
Hi Yutaka,
That sounds quite extreme; on the uplink, with the proper link layer adjustments, you should be able to go up to 100% of the sync rate as reported by the modem (unless your ISP has another traffic shaper at a higher level). And going from 12 to 7 is also quite extreme, given that the ATM link layer adjustments will cost you another 9% of bandwidth. Then again, 12/1 might be the contracted maximal rate; what are the sync rates as reported by your modem?
Link speed is
11872 kbps download
832 kbps upload

Why did I reduce download from 12 to 7? Because according to this page - please
see especially the download rate.
But I know that I can make around 11Mbps download rate work :)
And I will set 11Mbps for download.
Post by Sebastian Moeller
Post by Y
I changed target to 27ms and interval to 540ms as you say (down delay plus upload delay).
I could be out to lunch, but this large interval seems counter-intuitive. The idea (and please, anybody, correct me if I am wrong) is that interval should be long enough for both end points to realize a drop/ECN marking; in essence that would be the RTT of a flow (plus a small add-on to allow for some variation). In practice you will need to set one interval for all flows, and empirically 100ms works well, unless most of your flows go to more remote places, in which case setting interval to the real RTT would be better. But an interval of 540ms seems quite extreme (unless you often use connections to hosts with only satellite links). Have you tried something smaller?
I did try something smaller.
I thought that the dropping rate was increasing.
Post by Sebastian Moeller
Post by Y
It works well now.
Could you post the output of "tc -d qdisc" and "tc -s qdisc", please, so I have a better idea of what your configuration currently is?
Best Regards
Sebastian
My dirty stat :P

[***@localhost ~]$ tc -d -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26
direct_packets_stat 0 ver 3.17 direct_qlen 1000
 linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
 Sent 161531280 bytes 138625 pkt (dropped 1078, overlimits 331194
requeues 0)
 backlog 1590b 1p requeues 0
qdisc fq_codel 2: dev eno1 parent 1:2 limit 300p flows 256 quantum 300
target 5.0ms interval 100.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 260: dev eno1 parent 1:26 limit 300p flows 256 quantum
300 target 36.0ms interval 720.0ms
 Sent 151066695 bytes 99742 pkt (dropped 1078, overlimits 0 requeues 0)
 backlog 1590b 1p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 5997 ecn_mark 0
  new_flows_len 1 old_flows_len 1
qdisc fq_codel 110: dev eno1 parent 1:10 limit 300p flows 256 quantum
300 target 5.0ms interval 100.0ms
 Sent 1451034 bytes 13689 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 106 drop_overlimit 0 new_flow_count 2050 ecn_mark 0
  new_flows_len 1 old_flows_len 7
qdisc fq_codel 120: dev eno1 parent 1:20 limit 300p flows 256 quantum
300 target 36.0ms interval 720.0ms
 Sent 9013551 bytes 25194 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 2004 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc ingress ffff: dev eno1 parent ffff:fff1 ----------------
 Sent 59600088 bytes 149809 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 26
direct_packets_stat 0 ver 3.17 direct_qlen 32
 linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
 Sent 71997532 bytes 149750 pkt (dropped 59, overlimits 42426 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 200: dev ifb0 parent 1:20 limit 300p flows 1024 quantum
300 target 27.0ms interval 540.0ms ecn
 Sent 34641860 bytes 27640 pkt (dropped 1, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 1736 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc fq_codel 260: dev ifb0 parent 1:26 limit 300p flows 1024 quantum
300 target 27.0ms interval 540.0ms ecn
 Sent 37355672 bytes 122110 pkt (dropped 58, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 8033 ecn_mark 0
  new_flows_len 1 old_flows_len 2
qdisc noqueue 0: dev virbr0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap  1 2 2
2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
[***@localhost ~]$ tc -d -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26
direct_packets_stat 0 ver 3.17 direct_qlen 1000
 linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
 Sent 168960078 bytes 145643 pkt (dropped 1094, overlimits 344078
requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 2: dev eno1 parent 1:2 limit 300p flows 256 quantum 300
target 5.0ms interval 100.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 260: dev eno1 parent 1:26 limit 300p flows 256 quantum
300 target 36.0ms interval 720.0ms
 Sent 157686660 bytes 104157 pkt (dropped 1094, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 6547 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc fq_codel 110: dev eno1 parent 1:10 limit 300p flows 256 quantum
300 target 5.0ms interval 100.0ms
 Sent 1465132 bytes 13822 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 106 drop_overlimit 0 new_flow_count 2112 ecn_mark 0
  new_flows_len 0 old_flows_len 6
qdisc fq_codel 120: dev eno1 parent 1:20 limit 300p flows 256 quantum
300 target 36.0ms interval 720.0ms
 Sent 9808286 bytes 27664 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 2280 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc ingress ffff: dev eno1 parent ffff:fff1 ----------------
 Sent 62426837 bytes 155632 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 26
direct_packets_stat 0 ver 3.17 direct_qlen 32
 linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
 Sent 75349888 bytes 155573 pkt (dropped 59, overlimits 43545 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 200: dev ifb0 parent 1:20 limit 300p flows 1024 quantum
300 target 27.0ms interval 540.0ms ecn
 Sent 37624117 bytes 30196 pkt (dropped 1, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 1967 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc fq_codel 260: dev ifb0 parent 1:26 limit 300p flows 1024 quantum
300 target 27.0ms interval 540.0ms ecn
 Sent 37725771 bytes 125377 pkt (dropped 58, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 8613 ecn_mark 0
  new_flows_len 0 old_flows_len 2
qdisc noqueue 0: dev virbr0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap  1 2 2
2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0

I wrote my script according to this mailing list and sqm-scripts.
Thanks to Sebastian and all.

Maybe this works without problems.
From now on, I need to think about this more rigorously.

Yutaka.
Sebastian Moeller
2017-11-03 09:53:01 UTC
Hi Yutaka,
Post by Yutaka
Link speed is
11872 kbps download
832 kbps upload
Thanks. With proper link layer adjustments I would aim for 11872 * 0.9 = 10684.8 and 832 * 0.995 = 827.84; downstream shaping is a bit approximate (even though there is a feature in cake's development branch that promises to make it less approximate), so I would go to 90 or 85% of the sync bandwidth. As you know, Linux shapers (with the proper overhead specified) shape gross bandwidth, so due to ATM's 48/53 encoding the measurable goodput will be around 9% lower than one would otherwise expect:

10685 * (48/53) * ((1478 - 2 - 20 - 20)/(1478 + 10)) = 9338.8
827.84 * (48/53) * ((1478 - 2 - 20 - 20)/(1478 + 10)) = 723.5
This actually excludes the typical HTTP part of your web-based speedtest, but that should be in the noise. I realize what you did with the MTU/MSS ((1478 + 10) / 48 = 31, so for full-sized packets you have no ATM/AAL5 cell padding) - clever; I never bothered to go to this level of detail, so respect!
Post by Yutaka
Why did I reduce download from 12 to 7? Because according to this page - please see especially the download rate.
Which page?
Post by Yutaka
But I know that I can make around 11Mbps download rate work :)
And I will set 11Mbps for download.
As stated above, I would aim for something in the range of 10500 initially and then test.
Post by Yutaka
I did try something smaller.
I thought that the dropping rate was increasing.
My mental model for interval is that this is the reaction time you are willing to give a flow's endpoints to react before you drop more aggressively; if set too high you might be trading more bandwidth for higher latency under load (which is a valid trade-off as long as you make it consciously ;) ).
Post by Yutaka
qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 1000
linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
So you are shaping on an Ethernet device (eno1) but trying to adjust for a PPPoA, VC/Mux RFC-2364 link (since the kernel adds 14 bytes for Ethernet interfaces, you specify -4 to get the desired IP+10; protocol overhead in bytes: PPP (2), ATM AAL5 SAR (8): total 10). But both MPU and MTU seem wrong to me.
For tcstab the tcMTU parameter really does not need to match the real MTU, but needs to be larger than the largest packet size you expect to encounter, so we default to 2047, since that is larger than the 48/53-expanded packet size. Together with tsize, tcMTU is used to create the look-up table that the kernel uses to map from real packet size to estimated on-the-wire packet size; the default 2047, 128 will make a table that increments in units of 16 bytes (as (2047+1)/128 = 16), which will correctly deal with the 48-byte quantisation that linklayer atm creates (48 = 3*16). Your values, (1478+1)/128 = 11.5546875, will be somewhat odd. And yes, the tcstab thing is somewhat opaque.
Finally, mpu 64 is correct for any Ethernet-based transport (or rather any transport that uses full L2 Ethernet frames including the frame check sequence), but most ATM links a) do not use the FCS (and hence are not bound to Ethernet's 64-byte minimum) and b) your link does not use Ethernet framing at all (as you can see from your overhead, which is smaller than the Ethernet srcmac, dstmac and ethertype).
So I would set tcMPU to 0, tcMTU to 2047 and leave tsize at 128.
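As a rough sketch, the root invocation with those values would then look something like the following (the device, handle and htb default are simply the ones visible in the stats above; the rate and class setup are omitted for brevity):

  tc qdisc replace dev eno1 root handle 1: stab mtu 2047 tsize 128 mpu 0 overhead -4 linklayer atm htb default 26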
Or I would give cake a trial (it needs to be used in combination with a patched tc utility); cake can do its own overhead accounting, which is way simpler than tcstab (it should also be slightly more efficient and will deal with all possible packet sizes).
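A hedged example of what that could look like, assuming a cake-capable kernel module and the matching patched tc (the keyword spelling follows cake's documentation, and the 810 kbit rate is the upload shaper setting from earlier in the thread):

  tc qdisc replace dev eno1 root cake bandwidth 810kbit pppoa-vcmux

The pppoa-vcmux keyword is shorthand for "atm overhead 10", and cake also derives its own target/interval from the configured bandwidth, which is the auto-tuning Jonathan mentioned at the top of the thread.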
Post by Yutaka
qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 1000
linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
Same comments apply as above.
Post by Yutaka
I wrote my script according to this mailing list and sqm-scripts.
Would you be willing to share your script?

Best Regards
Sebastian
Yutaka
2017-11-03 10:10:36 UTC
Hi, Sebastian.

Thank you for your reply.
I have added the URL below; I am still reading :)
Post by Sebastian Moeller
Post by Yutaka
Why did I reduce download from 12 to 7? Because according to this page - please see especially the download rate.
Which page?
I forgot to paste the page URL, sorry.
http://tldp.org/HOWTO/ADSL-Bandwidth-Management-HOWTO/implementation.html
Post by Sebastian Moeller
Post by Yutaka
But I know that I can let around 11mbps download rate work :)
And I will set 11mbps for download
As stated above I would aim for in the range of 10500 initially and then test.
Post by Yutaka
Post by Sebastian Moeller
Post by Y
I changed Target 27ms Interval 540ms as you say( down delay plus upload delay).
I could be out to lunch, but this large interval seems counter-intuitive. The idea (and please anybody correct me if I am wrong) is that interval should be long enough for both end points to realize a drop/ecn marking, in essence that would be the RTT of a flow (plus a small add-on to allow some variation; in practice you will need to set one interval for all flows and empirically 100ms works well, unless most of your flows go to more remote places then setting interval to the real RTT would be better. But an interval of 540ms seems quite extreme (unless you often use connections to hosts with only satellite links). Have you tried something smaller?
I did smaller something.
I thought that dropping rate is getting increased.
My mental model for interval is that this is the reaction time you are willing to give a flows endpoint to react before you drop more aggressively, if set too high you might be trading of more bandwidth for a higher latency under load increase (which is a valid trade-off as long as you make it consciously ;) ).
Post by Yutaka
Post by Sebastian Moeller
Post by Y
It works well , now .
Could you post the output of "tc -d qdisc" and "tc -s qdisc please" so I have a better idea what your configuration currently is?
Best Regards
Sebastian
My dirty stat :P
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 1000
linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
So you are shaping on an ethernet device (eno1) but you try to adjust for a PPPoA, VC/Mux RFC-2364 link (so since the kernel adds 14 bytes for ethernet interfaces, you specify -4 to get the desired IP+10; Protocol (bytes): PPP (2), ATM AAL5 SAR (8) : Total 10), but both MPU and MTU seem wrong to me.
For tcstab the tcMTU parameter really does not need to match the real MTU, but needs to be larger than the largest packet size you expect to encounter so we default to 2047 since that is larger than the 48/53 expanded packet size. Together with tsize tcMTU is used to create the look-up table that the kernel uses to calculate from real packet size to estimated on-the-wire packetsize, the defaulf 2047, 128 will make a table that increments in units of 16 bytes (as (2047+1)/128 = 16) which will correctly deal will the 48 byte quantisation that linklayer atm will create (48 = 3*16). , your values (1478+1)/128 = 11.5546875 will be somewhat odd. And yes the tcstab thing is somewhat opaque.
Finally mpu64 is correct for any ethernet based transport (or rather any transport that uses full L2 ethernet frames including the frame check sequence), but most ATM links do a) not use the FCS (and hence are not bound to ethernets 64 byte minimum) and b) your link does not use ethernet framing at all (as you can see from your overhead that is smaller than the ethernet srcmac. dstmac and ethertype).
So I would set tcMPU to 0, tcMTU to 2047 and let tsize at 128.
Or I would give cake a trial (needs to be used in combination with a patches tc utility); cake can do its own overhead accounting which is way simpler than tcstab (it should also be slightly more efficient and will deal with all possible packet sizes).
Post by Yutaka
Sent 161531280 bytes 138625 pkt (dropped 1078, overlimits 331194 requeues 0)
backlog 1590b 1p requeues 0
qdisc fq_codel 2: dev eno1 parent 1:2 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 260: dev eno1 parent 1:26 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
Sent 151066695 bytes 99742 pkt (dropped 1078, overlimits 0 requeues 0)
backlog 1590b 1p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 5997 ecn_mark 0
new_flows_len 1 old_flows_len 1
qdisc fq_codel 110: dev eno1 parent 1:10 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
Sent 1451034 bytes 13689 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 106 drop_overlimit 0 new_flow_count 2050 ecn_mark 0
new_flows_len 1 old_flows_len 7
qdisc fq_codel 120: dev eno1 parent 1:20 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
Sent 9013551 bytes 25194 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 2004 ecn_mark 0
new_flows_len 0 old_flows_len 1
qdisc ingress ffff: dev eno1 parent ffff:fff1 ----------------
Sent 59600088 bytes 149809 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 32
linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
Sent 71997532 bytes 149750 pkt (dropped 59, overlimits 42426 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 200: dev ifb0 parent 1:20 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
Sent 34641860 bytes 27640 pkt (dropped 1, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 1736 ecn_mark 0
new_flows_len 0 old_flows_len 1
qdisc fq_codel 260: dev ifb0 parent 1:26 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
Sent 37355672 bytes 122110 pkt (dropped 58, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 8033 ecn_mark 0
new_flows_len 1 old_flows_len 2
qdisc noqueue 0: dev virbr0 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 1000
linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
Same comments apply as above.
Post by Yutaka
Sent 168960078 bytes 145643 pkt (dropped 1094, overlimits 344078 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 2: dev eno1 parent 1:2 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 260: dev eno1 parent 1:26 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
Sent 157686660 bytes 104157 pkt (dropped 1094, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 6547 ecn_mark 0
new_flows_len 0 old_flows_len 1
qdisc fq_codel 110: dev eno1 parent 1:10 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
Sent 1465132 bytes 13822 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 106 drop_overlimit 0 new_flow_count 2112 ecn_mark 0
new_flows_len 0 old_flows_len 6
qdisc fq_codel 120: dev eno1 parent 1:20 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
Sent 9808286 bytes 27664 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 2280 ecn_mark 0
new_flows_len 0 old_flows_len 1
qdisc ingress ffff: dev eno1 parent ffff:fff1 ----------------
Sent 62426837 bytes 155632 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 32
linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
Sent 75349888 bytes 155573 pkt (dropped 59, overlimits 43545 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 200: dev ifb0 parent 1:20 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
Sent 37624117 bytes 30196 pkt (dropped 1, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 1967 ecn_mark 0
new_flows_len 0 old_flows_len 1
qdisc fq_codel 260: dev ifb0 parent 1:26 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
Sent 37725771 bytes 125377 pkt (dropped 58, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 8613 ecn_mark 0
new_flows_len 0 old_flows_len 2
qdisc noqueue 0: dev virbr0 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
I wrote script according to this mailinglist and sqm-script.
Would you be willing to share your script?
Best Regards
Sebastian
Post by Yutaka
Thanks to Sebastian and all
Maybe, This works without problem.
From now on , I need strict thinking.
Yutaka.
Post by Sebastian Moeller
Post by Y
Thank you.
Yutaka.
Post by Sebastian Moeller
Hi Y.
Post by Y
hi.
My connection is 810kbps( <= 1Mbps).
This is my setting For Fq_codel,
quantum=300
target=20ms
interval=400ms
MTU=1478 (for PPPoA)
I cannot compare well. But A Latency is around 14ms-40ms.
Under full saturation in theory you would expect the average latency to equal the sum of upstream target and downstream target (which in your case would be 20 + ???) in reality I often see something like 1.5 to 2 times the expected value (but I have never inquired any deeper, so that might be a measuring artifact)...
Best Regards
Post by Y
Yutaka.
I'm trying to gather advice for people stuck on older connections. It appears that having dedictated /micromanged tc classes greatly outperforms the "no knobs" fq_codel approach for connections with slow upload speed.
(https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/) fq_codel
Of particular concern is that a no-knobs SFQ works better for me than an untuned codel ( more delay but much less loss for small flows). People just flipping the fq_codel button on their router at these low speeds could be doing themselves a disservice.
I've toyed with increasing the target and this does solve the excessive drops. I haven't played with limit and quantum all that much.
My go-to solution for this would be different classes, a.k.a. traditional QoS. But , wouldn't it be possible to tune fq_codel punish the large flows 'properly' for this very low bandwidth scenario? Surely <1kb ICMP packets can squeeze through properly without being dropped if there is 350kbps available, if the competing flow is managed correctly.
I could create a class filter by packet length, thereby moving ICMP/VoIP to its own tc class, but this goes against "no knobs" it seems like I'm re-inventing the wheel of fair queuing - shouldn't the smallest flows never be delayed/dropped automatically?
Lowering Quantum below 1500 is confusing, serving a fractional packet in a time interval?
Is there real value in tuning fq_codel for these connections or should people migrate to something else like nfq_codel?
_______________________________________________
Bloat mailing list
https://lists.bufferbloat.net/listinfo/bloat
Sebastian Moeller
2017-11-03 10:31:56 UTC
Permalink
Raw Message
Hi Yutaka,

thanks for the link. I believe it is quite interesting, but shaping the downstream down to <50% of the sync rate seems a bit drastic. With the link layer adjustment features that Linux has acquired in the meantime, my observation is that going to 80 to 90% of the sync speed should be sufficient for most usage patterns (if you have excessively many concurrent flows this might not be good enough). In that case (many concurrent flows) it might be worth looking into the development version of cake, as it has a mode that allows stricter ingress shaping (it tries to throttle so that the rate of incoming packets matches the defined bandwidth, instead of the usual shaping of the outgoing packet rate).
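For illustration, an untested sketch of what that could look like on your devices (assuming the ifb0 redirect from your script stays in place; the bandwidth numbers are just the ~10500 kbit/s download figure discussed earlier and your current upload setting, and the exact keyword set depends on which cake/tc version you end up building):

tc qdisc replace dev eno1 root cake bandwidth 830kbit atm overhead 10
tc qdisc replace dev ifb0 root cake bandwidth 10500kbit atm overhead 10 ingress

cake derives target/interval from the configured bandwidth and does the ATM/overhead accounting internally, so none of the stab or target/interval arithmetic is needed in that case.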


Best Regards
Sebastian
Post by Yutaka
Hi , Sebastian.
Thank you for your reply .
I added URL while I am reading :)
Post by Sebastian Moeller
Hi Yutaka,
Post by Yutaka
Hi , Sebastian.
Post by Sebastian Moeller
Hi Yutaka,
Post by Y
Hi ,Moeller.
Fomula of target is 1643 bytes / 810kbps = 0.015846836.
It added ATM linklayer padding.
16ms plus 4ms as my sence :P
My connection is 12mbps/1mbps ADSL PPPoA line.
and I set 7Mbps/810kbps for bypass router buffer.
That sounds quite extreme: on uplink, with the proper link layer adjustments, you should be able to go up to 100% of the sync rate as reported by the modem (unless your ISP has another traffic shaper at a higher level). And going from 12 to 7 is also quite extreme, given that the ATM link layer adjustments will cost you another 9% of bandwidth. Then again, 12/1 might be the contracted maximal rate; what are the sync rates as reported by your modem?
Link speed is
11872 kbps download
832 kbps upload
10685 * (48/53) * ((1478 - 2 - 20 - 20)/(1478 + 10)) = 9338.8
878 * (48/53) * ((1478 - 2 - 20 - 20)/(1478 + 10)) = 767.4
This actually excludes the typical HTTP part of your web based speedtest but that should be in the noise. I realize what you did with the MTU/MSS ((1478 + 10) / 48 = 31; so for full sized packets you have no atm/aal5 cell padding), clever; I never bothered to go to this level of detail, so respect!
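(If anyone wants to reproduce the arithmetic, e.g. with bc: the 48/53 factor is the ATM cell tax, and the second factor is the TCP payload share of a full 1478 byte frame plus the 10 bytes of PPPoA/AAL5 overhead.)

echo '10685 * (48/53) * ((1478 - 2 - 20 - 20)/(1478 + 10))' | bc -l   # ~9338.8 kbit/s down
echo '878 * (48/53) * ((1478 - 2 - 20 - 20)/(1478 + 10))' | bc -l     # ~767.4 kbit/s up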
Post by Yutaka
Why do I reduce the download from 12 to 7? Because of this page; please see especially the download rate.
Which page?
I forgot to paste the page URL, sorry.
http://tldp.org/HOWTO/ADSL-Bandwidth-Management-HOWTO/implementation.html
Post by Sebastian Moeller
Post by Yutaka
But I know that I can let around 11mbps download rate work :)
And I will set 11mbps for download
As stated above, I would aim for something in the range of 10500 initially and then test.
Post by Yutaka
Post by Sebastian Moeller
Post by Y
I changed Target 27ms Interval 540ms as you say( down delay plus upload delay).
I could be out to lunch, but this large interval seems counter-intuitive. The idea (and please anybody correct me if I am wrong) is that interval should be long enough for both endpoints to notice a drop/ECN marking; in essence that would be the RTT of a flow, plus a small add-on to allow for some variation. In practice you will need to set one interval for all flows, and empirically 100ms works well, unless most of your flows go to more remote places, in which case setting interval to the real RTT would be better. But an interval of 540ms seems quite extreme (unless you often use connections to hosts with only satellite links). Have you tried something smaller?
I did try something smaller.
I thought the dropping rate was increasing.
My mental model for interval is that it is the reaction time you are willing to give a flow's endpoints before you drop more aggressively; if set too high you might be trading more bandwidth for higher latency under load (which is a valid trade-off as long as you make it consciously ;) ).
Post by Yutaka
Post by Sebastian Moeller
Post by Y
It works well , now .
Could you post the output of "tc -d qdisc" and "tc -s qdisc" please, so I have a better idea what your configuration currently is?
Best Regards
Sebastian
My dirty stat :P
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 1000
linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
So you are shaping on an ethernet device (eno1) but you are trying to adjust for a PPPoA, VC/Mux RFC-2364 link (since the kernel adds 14 bytes for ethernet interfaces, you specify -4 to get the desired IP+10; protocol overhead in bytes: PPP (2), ATM AAL5 SAR (8): total 10), but both MPU and MTU seem wrong to me.
For tcstab the tcMTU parameter really does not need to match the real MTU, but it needs to be larger than the largest packet size you expect to encounter, so we default to 2047 since that is larger than the 48/53-expanded packet size. Together with tsize, tcMTU is used to create the look-up table that the kernel uses to map from real packet size to estimated on-the-wire packet size; the default 2047, 128 will make a table that increments in units of 16 bytes (as (2047+1)/128 = 16), which will correctly deal with the 48 byte quantisation that linklayer atm creates (48 = 3*16). Your values, (1478+1)/128 = 11.5546875, will be somewhat odd. And yes, the tcstab thing is somewhat opaque.
Finally, mpu 64 is correct for any ethernet based transport (or rather any transport that uses full L2 ethernet frames including the frame check sequence), but most ATM links a) do not use the FCS (and hence are not bound to ethernet's 64 byte minimum) and b) your link does not use ethernet framing at all (as you can see from your overhead, which is smaller than the ethernet srcmac, dstmac and ethertype).
So I would set tcMPU to 0, tcMTU to 2047 and leave tsize at 128.
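In tc terms that would be something like the following (a sketch only, keeping your overhead -4 trick on the ethernet device; mpu 0 is the tc default, so it can simply be left out):

tc qdisc add dev eno1 root handle 1: stab overhead -4 linklayer atm mtu 2047 tsize 128 htb default 26
tc qdisc add dev ifb0 root handle 1: stab overhead -4 linklayer atm mtu 2047 tsize 128 htb default 26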
Or I would give cake a trial (it needs to be used in combination with a patched tc utility); cake can do its own overhead accounting, which is way simpler than tcstab (it should also be slightly more efficient and will deal with all possible packet sizes).
Post by Yutaka
Sent 161531280 bytes 138625 pkt (dropped 1078, overlimits 331194 requeues 0)
backlog 1590b 1p requeues 0
qdisc fq_codel 2: dev eno1 parent 1:2 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 260: dev eno1 parent 1:26 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
Sent 151066695 bytes 99742 pkt (dropped 1078, overlimits 0 requeues 0)
backlog 1590b 1p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 5997 ecn_mark 0
new_flows_len 1 old_flows_len 1
qdisc fq_codel 110: dev eno1 parent 1:10 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
Sent 1451034 bytes 13689 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 106 drop_overlimit 0 new_flow_count 2050 ecn_mark 0
new_flows_len 1 old_flows_len 7
qdisc fq_codel 120: dev eno1 parent 1:20 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
Sent 9013551 bytes 25194 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 2004 ecn_mark 0
new_flows_len 0 old_flows_len 1
qdisc ingress ffff: dev eno1 parent ffff:fff1 ----------------
Sent 59600088 bytes 149809 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 32
linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
Sent 71997532 bytes 149750 pkt (dropped 59, overlimits 42426 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 200: dev ifb0 parent 1:20 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
Sent 34641860 bytes 27640 pkt (dropped 1, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 1736 ecn_mark 0
new_flows_len 0 old_flows_len 1
qdisc fq_codel 260: dev ifb0 parent 1:26 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
Sent 37355672 bytes 122110 pkt (dropped 58, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 8033 ecn_mark 0
new_flows_len 1 old_flows_len 2
qdisc noqueue 0: dev virbr0 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 1000
linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
Same comments apply as above.
Post by Yutaka
Sent 168960078 bytes 145643 pkt (dropped 1094, overlimits 344078 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 2: dev eno1 parent 1:2 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 260: dev eno1 parent 1:26 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
Sent 157686660 bytes 104157 pkt (dropped 1094, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 6547 ecn_mark 0
new_flows_len 0 old_flows_len 1
qdisc fq_codel 110: dev eno1 parent 1:10 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
Sent 1465132 bytes 13822 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 106 drop_overlimit 0 new_flow_count 2112 ecn_mark 0
new_flows_len 0 old_flows_len 6
qdisc fq_codel 120: dev eno1 parent 1:20 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
Sent 9808286 bytes 27664 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 2280 ecn_mark 0
new_flows_len 0 old_flows_len 1
qdisc ingress ffff: dev eno1 parent ffff:fff1 ----------------
Sent 62426837 bytes 155632 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 32
linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
Sent 75349888 bytes 155573 pkt (dropped 59, overlimits 43545 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 200: dev ifb0 parent 1:20 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
Sent 37624117 bytes 30196 pkt (dropped 1, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 1967 ecn_mark 0
new_flows_len 0 old_flows_len 1
qdisc fq_codel 260: dev ifb0 parent 1:26 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
Sent 37725771 bytes 125377 pkt (dropped 58, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1643 drop_overlimit 0 new_flow_count 8613 ecn_mark 0
new_flows_len 0 old_flows_len 2
qdisc noqueue 0: dev virbr0 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
I wrote script according to this mailinglist and sqm-script.
Would you be willing to share your script?
Best Regards
Sebastian
Post by Yutaka
Thanks to Sebastian and all
Maybe, This works without problem.
From now on , I need strict thinking.
Yutaka.
Post by Sebastian Moeller
Post by Y
Thank you.
Yutaka.
Post by Sebastian Moeller
Hi Y.
Post by Y
hi.
My connection is 810kbps( <= 1Mbps).
This is my setting For Fq_codel,
quantum=300
target=20ms
interval=400ms
MTU=1478 (for PPPoA)
I cannot compare well. But A Latency is around 14ms-40ms.
Under full saturation in theory you would expect the average latency to equal the sum of upstream target and downstream target (which in your case would be 20 + ???) in reality I often see something like 1.5 to 2 times the expected value (but I have never inquired any deeper, so that might be a measuring artifact)...
Best Regards
Post by Y
Yutaka.
I'm trying to gather advice for people stuck on older connections. It appears that having dedictated /micromanged tc classes greatly outperforms the "no knobs" fq_codel approach for connections with slow upload speed.
(https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/) fq_codel
Of particular concern is that a no-knobs SFQ works better for me than an untuned codel ( more delay but much less loss for small flows). People just flipping the fq_codel button on their router at these low speeds could be doing themselves a disservice.
I've toyed with increasing the target and this does solve the excessive drops. I haven't played with limit and quantum all that much.
My go-to solution for this would be different classes, a.k.a. traditional QoS. But , wouldn't it be possible to tune fq_codel punish the large flows 'properly' for this very low bandwidth scenario? Surely <1kb ICMP packets can squeeze through properly without being dropped if there is 350kbps available, if the competing flow is managed correctly.
I could create a class filter by packet length, thereby moving ICMP/VoIP to its own tc class, but this goes against "no knobs" it seems like I'm re-inventing the wheel of fair queuing - shouldn't the smallest flows never be delayed/dropped automatically?
Lowering Quantum below 1500 is confusing, serving a fractional packet in a time interval?
Is there real value in tuning fq_codel for these connections or should people migrate to something else like nfq_codel?
_______________________________________________
Bloat mailing list
https://lists.bufferbloat.net/listinfo/bloat
Yutaka
2017-11-03 10:51:15 UTC
Permalink
Raw Message
Hi, Sebastian.

I paste my script below.
It is a 3-class HTB shaper script, plus an iptables script, on CentOS 7.
It runs via systemctl and iptables save.
I tried to install the cake qdisc, but I could not get it to install on CentOS 7 (it failed with some errors).

I think I do not fully understand fq_codel's target and interval yet.

----Notice----
The iptables script is only for egress, mainly for ACK or SYN packets.
I tried to match packets to 192.168.0.0 on ingress and failed :P
Ingress only uses 2 classes.
-----
script for 3 class htb shaper
-----
#!/usr/bin/env bash

# Definitions
ext=eno1        # Change for your device!

ext_ingress=ifb0
ext_down=9.5Mbit

ext_up=829397

q=300
up_burst=2k
quantum=300        # fq_codel quantum 300 gives a boost to interactive flows
            # At higher bandwidths (50Mbit+) don't bother
MTU=1478
BQL=2956
target=36ms
interval=720ms
target_prio0=36ms
interval_prio0=720ms
target_ingress=27ms
interval_ingress=540ms

stab_mtu=2047

txqlen=50

limit=300
flows=256
modprobe sch_fq_codel

LAN=95Mbit

modprobe ifb
modprobe act_mirred

# Clear old queuing disciplines (qdisc) on the interfaces
tc qdisc del dev "$ext" root
tc qdisc del dev "$ext" ingress
tc qdisc del dev "$ext_ingress" root
tc qdisc del dev "$ext_ingress" ingress

#########
# INGRESS
#########

# Create ingress on external interface

tc qdisc add dev "$ext" handle ffff: ingress

#ifconfig "$ext_ingress" up # if the interface is not up bad things happen
ip link set "$ext_ingress" up
# Forward all ingress traffic to the IFB device
tc filter add dev "$ext" parent ffff: protocol all u32 match u32 0 0 action mirred egress redirect dev "$ext_ingress"

tc qdisc add dev "$ext_ingress" root handle 1: stab overhead -4 linklayer atm mtu "$stab_mtu" tsize 128 htb default 26

tc class add dev "$ext_ingress" parent 1: classid 1:1 htb rate "$ext_down" ceil "$ext_down" quantum "$q" burst "$up_burst" cburst "$up_burst"

tc class add dev "$ext_ingress" parent 1:1 classid 1:20 htb rate 200kbit ceil "$ext_down" prio 0 quantum "$q" burst "$up_burst" cburst "$up_burst"
tc class add dev "$ext_ingress" parent 1:1 classid 1:26 htb rate 200kbit ceil "$ext_down" prio 1 quantum "$q" burst "$up_burst" cburst "$up_burst"

tc filter add dev "$ext_ingress" parent 1: protocol ip u32 match ip sport 80 0xffff flowid 1:20
tc filter add dev "$ext_ingress" parent 1: protocol ip u32 match ip dport 80 0xffff flowid 1:20

tc filter add dev "$ext_ingress" parent 1:0 protocol ip u32 match ip sport 53 0xffff flowid 1:20
tc filter add dev "$ext_ingress" parent 1:0 protocol ip u32 match ip dport 53 0xffff flowid 1:20
tc filter add dev "$ext_ingress" parent 1:0 protocol arp u32 match u32 0 0 flowid 1:20

tc qdisc add dev "$ext_ingress" parent 1:20 handle 200: fq_codel quantum "$quantum" limit "$limit" target "$target_ingress" interval "$interval_ingress"
tc qdisc add dev "$ext_ingress" parent 1:26 handle 260: fq_codel quantum "$quantum" limit "$limit" target "$target_ingress" interval "$interval_ingress"


#########
# EGRESS
#########
ethtool -K "$ext" tso off gso off gro off # Also turn off tso/gso/gro on ALL interfaces

tc qdisc add dev "$ext" root handle 1: stab overhead -4 linklayer atm mtu "$stab_mtu" tsize 128 htb default 26

tc class add dev "$ext" parent 1: classid 1:1 htb rate "$ext_up" ceil "$ext_up" quantum "$q" burst "$up_burst" cburst "$up_burst"

tc class add dev "$ext" parent 1:1 classid 1:10 htb rate 200kbit ceil "$ext_up" prio 0 quantum "$q" burst "$up_burst" cburst "$up_burst"
tc class add dev "$ext" parent 1:1 classid 1:20 htb rate 200kbit ceil "$ext_up" prio 1 quantum "$q" burst "$up_burst" cburst "$up_burst"
tc class add dev "$ext" parent 1:1 classid 1:26 htb rate 200kbit ceil "$ext_up" prio 2 quantum "$q" burst "$up_burst" cburst "$up_burst"


tc class add dev "$ext" parent 1: classid 1:2 htb rate 200kbit ceil "$LAN" quantum "$q" burst "$up_burst" cburst "$up_burst"
tc filter add dev "$ext" parent 1: protocol ip u32 match ip dst 192.168.0.0/24 flowid 1:2
tc qdisc add dev "$ext" parent 1:2 handle 2: fq_codel quantum "$quantum" limit "$limit" flows "$flows" noecn

tc filter add dev "$ext" parent 1: protocol ip handle 10 fw flowid 1:10

tc filter add dev "$ext" parent 1:0 protocol ip u32 match ip sport 53 0xffff flowid 1:10
tc filter add dev "$ext" parent 1:0 protocol ip u32 match ip dport 53 0xffff flowid 1:10
tc filter add dev "$ext" parent 1:0 protocol arp u32 match u32 0 0 flowid 1:10

tc filter add dev "$ext" parent 1: protocol ip u32 match ip dport 80 0xffff flowid 1:20

tc qdisc add dev "$ext" parent 1:26 handle 260: fq_codel quantum "$quantum" limit "$limit" target "$target" interval "$interval" noecn flows "$flows"
tc qdisc add dev "$ext" parent 1:10 handle 110: fq_codel quantum "$quantum" limit "$limit" flows "$flows" noecn

tc qdisc add dev "$ext" parent 1:20 handle 120: fq_codel quantum "$quantum" limit "$limit" target "$target" interval "$interval" noecn flows "$flows"

ip link set dev "$ext" qlen "$txqlen"

echo "$BQL" > /sys/class/net/"$ext"/queues/tx-0/byte_queue_limits/limit_max
echo "$BQL" > /sys/class/net/"$ext_ingress"/queues/tx-0/byte_queue_limits/limit_max

ip link set mtu "$MTU" dev "$ext"
ip link set mtu "$MTU" dev "$ext_ingress"

ethtool -G "$ext" tx 64 rx 64
ethtool -C "$ext" rx-usecs 1
ethtool -s "$ext" autoneg on
ethtool -s "$ext" wol d

------
script for iptables
-----
#!/usr/bin/env bash

iptables -F -t mangle

iptables -t mangle -N ack
iptables -t mangle -A ack -m dscp ! --dscp 0 -j RETURN
iptables -t mangle -A ack -p tcp -m length --length 0:128 -j DSCP --set-dscp-class AF11
iptables -t mangle -A ack -p tcp -m length --length 128: -j ECN --ecn-tcp-remove


iptables -t mangle -A ack -m dscp --dscp-class AF11 -j MARK --set-mark 10

iptables -t mangle -A ack -j RETURN

iptables -t mangle -A POSTROUTING -p tcp -m tcp --tcp-flags SYN,RST,ACK ACK -j ack

-----
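The mark 10 set by this chain is what the egress filter "handle 10 fw flowid 1:10" in the shaper script matches on. One way to check that the chain actually matches traffic is to look at its packet counters, e.g.:

iptables -t mangle -L ack -n -v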