0 measured packet loss over 5GHz Wi-Fi (802.11a) and questions...

Soldato
Joined
30 Jun 2019
Posts
7,868
I wanted to try to get 0 packet loss over short-range 5GHz Wi-Fi, to use with things like xCloud (Microsoft game streaming) and for reliable Skype / video calls.

The main difference I found was setting the Wi-Fi channel to 100 on my router, and the channel width to 20MHz. This gave me 0 packet retransmissions while downloading a 1GB file, which I measured in the router web interface.

Today, I ran a test here:
https://speed.measurementlab.net/#/

It measures the packet loss between your device and a test server over the Internet.

My adapter was set to 802.11ac, which is the highest-bandwidth mode it supports.

I ran the test a few times and was getting 0.1-0.3% packet loss, which is already pretty good.

Then I changed the protocol type to 802.11a, which reduced the packet loss further. I also set USB RF select to 'BY DEV/HOST', which brought the result down to 0% packet loss.

The download rate is limited to around 24Mbps with 802.11a.

So, if 0% packet loss is achievable over 802.11a (at least at short range), will the 'Wi-Fi Alliance', who design network protocols like 802.11a and 802.11ac, create a protocol designed to transmit and receive packets with no loss, at a higher bandwidth than is possible over 802.11a?
 
Associate
Joined
4 Jun 2021
Posts
456
Location
Yorkshire
will the 'Wi-Fi Alliance', who design network protocols like 802.11a and 802.11ac, create a protocol designed to transmit and receive packets with no loss, at a higher bandwidth than is possible over 802.11a?

I'm guessing you actually mean the IEEE, not the Wi-Fi Alliance as the latter only do conformance testing/branding and marketing. Wireless network protocols are developed by the IEEE.

But no, it's not likely they'll put significant effort into getting to zero loss, because it's not necessary. It's not like they'll go out of their way to make wireless networking unreliable, but neither is it worth putting huge amounts of effort into never dropping a packet. If an application needs a reliable transport, it'll use TCP.

If you want a more reliable link layer, then use a (good) wire. Plus using a remote service to test packet loss in the link layer is an odd choice.
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,868
@VersionMonkey Standard 'use a wire' advice lol, thanks, couldn't have figured that out myself :p

Packet loss is gonna become a bigger problem with Wi-Fi, as people increasingly use it for online game streaming at high bandwidths, and for video calls. I think it's a matter of choice: with Wi-Fi, the IEEE seem to have prioritized total bandwidth (we've all seen the 'OMG 9.6 Gbps' type of marketing that goes on some Wi-Fi products) over delivering packets reliably for real-time applications.

There are some technologies that, in theory, could help with delivering packets reliably, like STBC, which transmits the same packets over multiple antennas, so that the data has a much higher probability of reaching its destination without needing to be retransmitted.
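To put a rough number on that idea, here's a back-of-the-envelope sketch (not how STBC actually works at the PHY layer, and the 5% loss figure is just an assumed example): if each transmission independently fails with probability p, sending the same frame over two independent paths only fails when both copies are lost, roughly p squared, but it burns twice the airtime doing it.

```python
import random

def simulate(loss_prob: float, copies: int, frames: int = 100_000) -> float:
    """Fraction of frames lost when each of `copies` independent
    transmissions is dropped with probability `loss_prob`."""
    lost = sum(
        all(random.random() < loss_prob for _ in range(copies))
        for _ in range(frames)
    )
    return lost / frames

p = 0.05  # assumed per-transmission loss rate (5%), purely illustrative
print(f"single transmission: ~{simulate(p, 1):.2%} of frames lost")   # ~5%
print(f"sent twice:          ~{simulate(p, 2):.2%} of frames lost")   # ~0.25%, at double the airtime cost
```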
 
Associate
Joined
4 Jun 2021
Posts
456
Location
Yorkshire
@VersionMonkey Standard 'use a wire' advice lol, thanks, couldn't have figured that out myself :p

It's standard advice for a good reason. RF is a shared resource subject to interference, especially on unlicensed bands. Wi-Fi will always involve a compromise somewhere.

Packet loss is gonna become a bigger problem with Wi-Fi, as people increasingly use it for online game streaming at high bandwidths, and for video calls. I think it's a matter of choice: with Wi-Fi, the IEEE seem to have prioritized total bandwidth (we've all seen the 'OMG 9.6 Gbps' type of marketing that goes on some Wi-Fi products) over delivering packets reliably for real-time applications.

While device manufacturers' marketing departments may well concentrate on slapping the biggest numbers they can find on the side of the box, that's not necessarily the focus of the IEEE working groups. Sure, higher data transfer speeds with each iteration are a big part of their work, but getting better results with lots of devices on the same wireless network, for example, is a major part of the newer standards.

There are some technologies that, in theory, could help with delivering packets reliably, like STBC, which transmits the same packets over multiple antennas, so that the data has a much higher probability of reaching its destination without needing to be retransmitted.

Maybe so. But that is probably not appropriate for a general-purpose technology. As you say yourself, that only gives a higher probability of successful transmission, and it is very wasteful of the available bandwidth while it does it. Those multiple transmissions could be better spent carrying data to other devices.

If the latency is good, retransmitting an occasional lost packet isn't that big a deal for the majority of use cases.
 
Don
Joined
19 May 2012
Posts
17,057
Location
Spalding, Lincolnshire
Packet loss is gonna become a bigger problem with Wi-Fi, as people increasingly use it for online game streaming at high bandwidths, and for video calls. I think it's a matter of choice: with Wi-Fi, the IEEE seem to have prioritized total bandwidth (we've all seen the 'OMG 9.6 Gbps' type of marketing that goes on some Wi-Fi products) over delivering packets reliably for real-time applications.

Eh? Packet loss matters less in the scheme of things, especially with VoIP/video calls/game streaming, than making sure time-sensitive packets are received on time and in order (e.g. it's pointless having 0 packet loss if some packets arrive late or out of order, where they are no use and end up discarded anyway).
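To illustrate the 'late is as good as lost' point, here's a toy sketch (the arrival times and buffer depth are made up, not taken from any real client): a receiver with a fixed playout deadline has to discard anything that turns up after that deadline, so a packet delayed by a second is no more useful than one that never arrived.

```python
# Toy playout buffer: each packet must be played at seq * 20ms plus a fixed
# jitter-buffer delay; anything arriving after that is discarded, same as a loss.
PLAYOUT_DELAY_MS = 60        # assumed jitter-buffer depth
PACKET_INTERVAL_MS = 20      # assumed packetisation interval

arrivals = [(0, 15), (1, 38), (2, 250), (3, 75), (4, 95)]  # (sequence, arrival time in ms); seq 2 is very late

for seq, arrival_ms in arrivals:
    deadline_ms = seq * PACKET_INTERVAL_MS + PLAYOUT_DELAY_MS
    verdict = "played" if arrival_ms <= deadline_ms else "discarded (missed its slot)"
    print(f"packet {seq}: arrived at {arrival_ms}ms, deadline {deadline_ms}ms -> {verdict}")
```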

Streaming and communication platforms have long since learned to cope with packet loss, so there is no benefit in trying to solve it at the Wi-Fi network standards level (not that you even could, due to the numerous causes).
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,868
I can see that packets often arrive late when sent in 'real time' via Wi-Fi, by up to a second or more in some cases (tested with WebRTC, which uses the UDP protocol by default). There's an online test here to check this:
https://packetlosstest.com/

I definitely think the IEEE should prioritize this over more channel bandwidth. Do we need more than 10Gbps if the packets are slow to arrive or, in some cases, have to be resent?
 
Don
Joined
19 May 2012
Posts
17,057
Location
Spalding, Lincolnshire
I definitely think the IEEE should prioritize this over more channel bandwidth. Do we need more than 10Gbps if the packets are slow to arrive or, in some cases, have to be resent?

Honestly, it doesn't matter - packets are far more likely to get lost over the internet than via Wi-Fi. There is zero real-world benefit to having lossless Wi-Fi.
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,868
It's probably possible though: with 5G (which travels much longer distances), latencies of 1-2ms are possible. Applications this would help with include voice calls and gaming. So surely people would want this for Wi-Fi as well?
 
Caporegime
Joined
18 Oct 2002
Posts
26,053
If you need packets to arrive then you use protocols that support retransmission. If you're running a mission-critical voice service then you use private connections that can have QoS applied end-to-end.

Nobody is going to make a Wi-Fi protocol that involves two radios connected at two different frequencies and sends all the data twice just because you want Call of Duty to perform better, because it's already solved by plugging a cable in.

If you're getting packets delayed by a second or lost on that WebRTC test then you have an issue that isn't simply due to being on Wi-Fi, as I don't have that problem.

[attached screenshot of the poster's test result]
 
Associate
Joined
4 Jun 2021
Posts
456
Location
Yorkshire
I can see that packets often arrive late when sent in 'real time' via Wi-Fi, by up to a second or more in some cases (tested with WebRTC, which uses the UDP protocol by default). There's an online test here to check this:
https://packetlosstest.com/

I definitely think the IEEE should prioritize this over more channel bandwidth. Do we need more than 10Gbps if the packets are slow to arrive or, in some cases, have to be resent?

If you want to test the packet loss on your Wi-Fi, you should test just that, instead of testing some third party's service, their internet connection, internet transit, your ISP's network, your internet connection and your Wi-Fi all at once.
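For what it's worth, a minimal sketch of doing exactly that (assuming a Linux/macOS-style ping and that 192.168.1.1 is the router's address, both of which are assumptions for illustration): ping the access point itself and count the replies, which isolates the wireless hop from everything beyond it.

```python
import re
import subprocess

GATEWAY = "192.168.1.1"  # assumed router/AP address; substitute your own
COUNT = 100              # ~100 seconds at ping's default one-packet-per-second rate

# Run the system ping against the access point and pull the loss figure out
# of its summary line ("... 0% packet loss ...").
result = subprocess.run(
    ["ping", "-c", str(COUNT), GATEWAY],
    capture_output=True, text=True,
)
match = re.search(r"([\d.]+)% packet loss", result.stdout)
print(f"loss across the Wi-Fi hop: {match.group(1)}%" if match else result.stdout)
```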

"tested with WebRTC, which uses the UDP protocol by default": UDP is an unreliable transport - the choice to use it by the designers of WebRTC is an implicit acceptance that some packet loss may occur. If it was necessary for every packet to arrive and be processed
in the right order for WebRTC to work acceptably well, they would default to TCP. They don't because some packet loss isn't as much of a problem as the overhead in running a reliable transport. And it doesn't matter which layer you try to run the reliable transport -
there will still be an overhead.
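As a rough sketch of what that trade-off looks like at the application layer (a toy, not WebRTC's actual mechanism, which runs RTP/RTCP and its own feedback over UDP; the port number is just an assumption): the sender tags each datagram with a sequence number and the receiver simply notes the gaps rather than stalling to recover them.

```python
import socket

# Toy UDP receiver: UDP gives no delivery guarantee, so the application numbers
# its own datagrams and just counts the gaps instead of blocking on retransmission.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 50000))  # assumed test port
sock.settimeout(2.0)

expected, lost = 0, 0
try:
    while True:
        data, _ = sock.recvfrom(2048)
        seq = int.from_bytes(data[:4], "big")  # first 4 bytes carry the sequence number
        if seq > expected:
            lost += seq - expected             # a gap: those datagrams never showed up (or are late)
        expected = max(expected, seq + 1)
except socket.timeout:
    pass  # sender has gone quiet; stop counting

print(f"highest sequence seen: {expected - 1}, datagrams missing: {lost}")
```

A matching sender only needs to call `sock.sendto(seq.to_bytes(4, 'big') + payload, (host, 50000))` with an increasing seq; late or reordered datagrams stay counted as missing here, which echoes the point made above about late packets being as good as lost.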
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,868
If you need packets to arrive then you use protocols that support retransmission. If you're running a mission-critical voice service then you use private connections that can have QoS applied end-to-end.

Nobody is going to make a Wi-Fi protocol that involves two radios connected at two different frequencies and sends all the data twice just because you want Call of Duty to perform better, because it's already solved by plugging a cable in.

If you're getting packets delayed by a second or lost on that WebRTC test then you have an issue that isn't simply due to being on Wi-Fi, as I don't have that problem.

[attached screenshot of the poster's test result]


Delays this long only occurred with the largest packet size set, at a frequency of 300 packets per second, for 14 seconds. I had to run the test a few times for this to occur.
 
Caporegime
Joined
18 Oct 2002
Posts
26,053
So you're sending packets larger than a standard Ethernet frame (requiring fragmentation) 300 times a second. What real-time use case requires that much data?

You're starting to look a bit foolish here by cranking up the sliders on a random test you've found online and then declaring that Wi-Fi should be improved because games need it.
 
Caporegime
Joined
18 Oct 2002
Posts
26,053
Yes, but you need to understand what is being sampled. If it's voice, for instance, then packets are kept small to keep the delay low, and a consequence of this is that a single packet going missing results in only a tiny gap in the audio stream.
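To put rough numbers on it: a typical VoIP stream sends one packet every 20ms, so even at 64kbps that's only about 160 bytes of audio per packet (64,000 bits/s x 0.02s = 1,280 bits), and one lost packet is a 20ms gap that concealment can usually paper over.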

Gaming is the same - it certainly doesn't send a constant ~6Mbps of data to the server.
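(For what it's worth, that ~6Mbps figure falls straight out of the test settings above: if each oversized test packet were roughly 2,500 bytes (an assumed value, since the exact size isn't stated), then 2,500 bytes x 8 bits x 300 packets/s = 6Mbps.)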
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,868
For streaming games and video you are right, though: Microsoft's xCloud game streaming tends to vary between about 5-15Mbps at 1080p 60fps (more with 4K, if that ever happens). The streaming requirements of similar services like Stadia at 4K resolution are pretty high.
 
Soldato
Joined
30 Jan 2009
Posts
17,175
Location
Aquilonem Londinensi
Cable it and be happy. I've even had good results with powerline adapters as far as latency/loss is concerned, compared to older wireless standards anyway. I haven't got any AX kit to test with, as anything important is over a wire anyway.
 