> On Jun 21, 2018, at 21:54, Dave Taht <***@gmail.com> wrote:
> On Thu, Jun 21, 2018 at 12:41 PM, Sebastian Moeller <***@gmx.de> wrote:
>> Hi All,
>>> On Jun 21, 2018, at 21:17, Dave Taht <***@gmail.com> wrote:
>>> On Thu, Jun 21, 2018 at 9:43 AM, Kathleen Nichols <***@pollere.com> wrote:
>>>> On 6/21/18 8:18 AM, Dave Taht wrote:
>>>>> This is a case where inserting a teeny bit more latency to fill up the
>>>>> queue (ugh!), or a driver having some way to ask the probability of
>>>>> seeing more data in the
>>>>> next 10us, or... something like that, could help.
>>>> Well, if the driver sees the arriving packets, it could infer that an
>>>> ack will be produced shortly and will need a sending opportunity.
>>> Certainly in the case of wifi and lte and other simplex technologies
>>> this seems feasible...
>>> 'cept that we're all busy finding ways to do ack compression this
>>> month and thus the
>>> month and thus the two big tcp packets = 1 ack rule is going away.
>>> Still, an estimate with a short timeout might help.
>> That short timeout seems essential: just because a link is wireless does not mean the ACKs for passing TCP packets will appear shortly; who knows what routing happens after the wireless link (think city-wide mesh network). In a way such a solution should first figure out whether waiting has any chance of being useful, by looking at the typical delay between data packets and the matching ACKs.
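The "wait only if waiting has a chance of being useful" idea above can be sketched as a tiny estimator: track the typical data-to-ACK delay with an EWMA and only hold a send opportunity when that delay is under a short cutoff. A minimal illustrative sketch; the class name, the EWMA gain, and the 10 ms cutoff are all assumptions, not anything from a real driver:

```python
class AckDelayEstimator:
    """Track the typical delay between a data packet and its matching ACK
    (EWMA, as TCP does for SRTT) and decide whether holding a send
    opportunity for the ACK is likely to pay off."""

    def __init__(self, alpha=0.125, max_useful_wait=0.010):
        self.alpha = alpha                    # EWMA gain
        self.max_useful_wait = max_useful_wait  # short timeout, seconds
        self.srtt = None                      # smoothed data->ACK delay

    def record_ack(self, data_tx_time, ack_rx_time):
        sample = ack_rx_time - data_tx_time
        if self.srtt is None:
            self.srtt = sample
        else:
            self.srtt += self.alpha * (sample - self.srtt)

    def worth_waiting(self):
        # Never wait before we have an estimate, and never plan to wait
        # longer than the short timeout the thread suggests.
        return self.srtt is not None and self.srtt <= self.max_useful_wait

est = AckDelayEstimator()
est.record_ack(0.000, 0.002)   # ACK arrived 2 ms after its data packet
est.record_ack(0.010, 0.013)   # 3 ms this time
print(est.worth_waiting())     # True: typical delay well under 10 ms
```

On a city-wide mesh path the smoothed delay would quickly exceed the cutoff and `worth_waiting()` would stay `False`, which is the point of the heuristic.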
> We are in this discussion, having a few issues with multiple contexts.
> Mine (and eric's) is in improving wifi clients (laptops, handhelds)
> behavior, where the tcp stack is local.
Ah, sorry, I got this wrong and was looking at this from the AP's perspective; sorry for the noise, and thanks for the patience.
> packet pairing estimates on routers... well, if you get an aggregate
> "in", you should be able to get an aggregate "out" when it traverses
> the same driver. routerwise, ack compression "done right" will help a
> bit... it's the "done right" part that's the sticking point.
How will ACK compression help? If done aggressively it will sparse out the ACK stream, potentially making aggregating ACKs infeasible, no? On the other hand, if the stream is sparse enough, maybe not aggregating is not too painful? I guess I am just slow today...
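For concreteness, here is a toy model of the "done right" part being discussed: per flow, a newer cumulative ACK supersedes an older one still sitting in the queue, while data packets pass through untouched. Everything here (the tuple format, the function name) is an illustrative assumption, and real ACK compression also has to cope with SACK blocks, ECN marks, and duplicate ACKs, which this sketch ignores:

```python
from collections import OrderedDict

def compress_acks(queue):
    """Coalesce queued pure ACKs per flow, keeping only the newest
    cumulative ACK in the position of the first one; data packets are
    forwarded unchanged. Packets are (flow_id, kind, ack_no) tuples."""
    surviving = OrderedDict()   # flow_id -> index of surviving ACK in out
    out = []
    for pkt in queue:
        flow, kind, ack_no = pkt
        if kind != "ack":
            out.append(pkt)
            continue
        if flow in surviving:
            # Newer cumulative ACK replaces the older one in place.
            out[surviving[flow]] = pkt
        else:
            surviving[flow] = len(out)
            out.append(pkt)
    return out

q = [("f1", "ack", 1000), ("f2", "data", 0),
     ("f1", "ack", 2000), ("f1", "ack", 3000)]
print(compress_acks(q))  # [('f1', 'ack', 3000), ('f2', 'data', 0)]
```

The tension raised above is visible even in the toy: after compression only one ACK per flow remains, so a downstream aggregator has fewer ACKs to batch into a single transmission opportunity.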
>>> Another thing I've longed for (sometimes) is whether or not an
>>> application like a web
>>> browser signalling the OS that it has a batch of network packets
>>> coming would help...
>> To make up for the fact that wireless unfortunately has a very high per-packet overhead, it just tries to "hide" that overhead by amortizing it over more than one data packet. How about trying to find a better, less wasteful MAC instead ;) (and now we have two problems...)
> On my bad days I'd really like to have a do-over on wifi. The only
> hope I've had has been for LiFi or a resurrection of
> I haven't poked into what's going on in 5G lately (the mac is
> "better", but towers being distant does not help), nor have I been
> tracking 802.11ax for a few years. Lower latency was all over the
> 802.11ax standard when I last paid attention.
> Has 802.11ad gone anywhere?
>> Now, really, from a latency perspective it is clearly better to avoid overhead rather than use "batching" to better amortize it, since batching increases latency (I stipulate that there are conditions in which clever batching will not increase the noticeable latency, if it can hide inside another latency-increasing process).
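The amortization-versus-latency trade-off above reduces to simple arithmetic: a fixed per-transmission overhead (preamble, contention, acknowledgement exchange) amortized over n aggregated packets improves airtime efficiency, while the last packet in the batch pays the queueing cost. The 200 us and 50 us figures below are illustrative assumptions, not measured 802.11 numbers:

```python
def batching_tradeoff(n_packets, payload_us=50.0, fixed_overhead_us=200.0):
    """Back-of-the-envelope: airtime efficiency and added queueing delay
    when n packets share one fixed per-transmission overhead."""
    airtime = fixed_overhead_us + n_packets * payload_us
    efficiency = (n_packets * payload_us) / airtime   # useful fraction
    extra_queueing_us = (n_packets - 1) * payload_us  # last packet's wait
    return efficiency, extra_queueing_us

for n in (1, 8, 32):
    eff, wait = batching_tradeoff(n)
    print(f"n={n:2d}  efficiency={eff:.0%}  added queueing={wait:.0f} us")
```

With these toy numbers, efficiency climbs from 20% at n=1 to near 90% at n=32, but the added queueing delay grows linearly, which is exactly why a less wasteful MAC beats batching from a latency perspective.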
>>> web browser:
>>> parse the web page, generate all your dns, tcp requests, etc, etc
>>>> (we tried this mechanism out for cable data head ends at Com21 and it
>>>> went into a patent that probably belongs to Arris now. But that was for
>>>> cable. It is a fact universally acknowledged that a packet of data must
>>>> be in want of an acknowledgement.)
>>> voip doesn't behave this way, but for recognisable protocols like tcp
>>> and perhaps quic...
>> I note that for voip, waiting does not make sense, as all packets carry information and keeping jitter low will noticeably increase a call's perceived quality (if only by allowing the application to use a smaller de-jitter buffer and hence less latency). There is a reason why wifi's voice access class both has the highest probability to get the next tx-slot and also is not allowed to send aggregates (whether that is fully sane is another question, which I do not feel competent to answer).
>> I also think that on a docsis system it is probably a decent heuristic to assume that the endpoints will be a few milliseconds away at most (and only due to the coarse docsis grant-request clock).
>> Best Regards
>>>> Bloat mailing list
>>> Dave Täht
>>> CEO, TekLibre, LLC
>>> Tel: 1-669-226-2619