EDIT: It probably also needs to be clarified which TCP congestion-control algorithm they are using. The TCP standards dictate framing, windowing, etc., but implementations are free to use their own strategies for retransmission and bursting, and the algorithm in use makes a big difference across different loss scenarios.
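On Linux the congestion controller is even selectable per socket, so a benchmark could at least report (or pin) which one it ran with. A minimal sketch, assuming Linux and the `libc` crate:

```rust
use std::net::TcpStream;
use std::os::unix::io::AsRawFd;

// Pin a specific congestion controller (e.g. "cubic", "reno", "bbr")
// on one socket. The named algorithm must be available in the kernel.
fn set_congestion_control(stream: &TcpStream, algo: &str) -> std::io::Result<()> {
    let ret = unsafe {
        libc::setsockopt(
            stream.as_raw_fd(),
            libc::IPPROTO_TCP,
            libc::TCP_CONGESTION,
            algo.as_ptr() as *const libc::c_void,
            algo.len() as libc::socklen_t,
        )
    };
    if ret == 0 { Ok(()) } else { Err(std::io::Error::last_os_error()) }
}
```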
EDIT 2: I just noticed the number in parens is transfer success rate. Seeing 0% for 10 and 50% loss for TCP sounds about right. I'm not sure I still understand their UDP #'s as UDP isn't a stream protocol, so raw transferred data would be 100% minus loss %, unless they are using some protocol on top of it.
> https://github.com/nyxpsi/nyxpsi/blob/bbe84472aa2f92e1e82103...
This is not how you "simulate packet loss". You are not "dropping TCP packets". You are never giving your data to the TCP stack in the first place.
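A real loss simulation drops datagrams on the wire instead of skipping writes -- netem does this below the stack for any protocol, TCP included (e.g. `tc qdisc add dev lo root netem loss 10%`). For UDP you can approximate it in userspace; a minimal sketch of a lossy one-way relay, assuming the `rand` crate and made-up addresses:

```rust
use std::net::UdpSocket;

// Forward datagrams to an upstream address, dropping each one
// independently with probability LOSS. UDP only: TCP loss has to be
// injected below the stack (netem, a TUN device, etc.).
const LOSS: f64 = 0.10;

fn main() -> std::io::Result<()> {
    let sock = UdpSocket::bind("127.0.0.1:5000")?; // made-up addresses
    let upstream = "127.0.0.1:6000";
    let mut buf = [0u8; 65535];
    loop {
        let (n, _from) = sock.recv_from(&mut buf)?;
        if rand::random::<f64>() < LOSS {
            continue; // dropped on the simulated wire
        }
        sock.send_to(&buf[..n], upstream)?;
    }
}
```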
UDP is incomparable to TCP. Your protocol is incomparable to TCP. Your entire benchmark is misguided and quite frankly irrelevant.
As far as I can tell, no attempt whatsoever is made to retransmit lost packets. Any sporadic failure (for example, a five-second wifi dropout) will result in catastrophic data loss. I also don't see any connection logic, so your protocol cannot distinguish between connections other than by hoping that ports are never reused.
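To make the gap concrete: the bare minimum a reliable protocol keeps around is a map of unacked sequence numbers with send timestamps, so it can resend on timeout. A generic ARQ sketch (not this repo's code):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Track unacked payloads and surface anything that has outlived the
// retransmission timeout so the caller can resend it.
struct RetransmitQueue {
    unacked: HashMap<u64, (Vec<u8>, Instant)>, // seq -> (payload, last send)
    rto: Duration,
}

impl RetransmitQueue {
    fn new(rto: Duration) -> Self {
        Self { unacked: HashMap::new(), rto }
    }
    fn on_send(&mut self, seq: u64, payload: Vec<u8>) {
        self.unacked.insert(seq, (payload, Instant::now()));
    }
    fn on_ack(&mut self, seq: u64) {
        self.unacked.remove(&seq);
    }
    /// Payloads due for retransmission; restamps them as just sent.
    fn due(&mut self) -> Vec<(u64, Vec<u8>)> {
        let now = Instant::now();
        let mut out = Vec::new();
        for (&seq, (payload, sent)) in self.unacked.iter_mut() {
            if now.duration_since(*sent) >= self.rto {
                *sent = now;
                out.push((seq, payload.clone()));
            }
        }
        out
    }
}
```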
Have you considered spending less time on branding and more time on technical matters? Or was this supposed to be a troll?
edit: There's no congestion control or pacing; every packet is sent as fast as possible. The "protocol" is entirely incapable of streaming operation, and message order isn't even considered. The entire project is just a thin wrapper over the raptorq crate. Why even bother comparing this to TCP?
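Even crude pacing is only a couple dozen lines. A token-bucket sketch (rate and burst numbers are illustrative, not from the repo):

```rust
use std::time::{Duration, Instant};

// Allow `rate_bps` bits per second with bursts up to `burst_bits`.
// A real stack paces per packet with timestamps; this just blocks.
// Assumes each request is for at most `burst_bits` bits.
struct Pacer {
    rate_bps: f64,
    burst_bits: f64,
    tokens: f64,
    last: Instant,
}

impl Pacer {
    fn new(rate_bps: f64, burst_bits: f64) -> Self {
        Self { rate_bps, burst_bits, tokens: burst_bits, last: Instant::now() }
    }

    /// Block until `bits` may be sent, then consume that many tokens.
    fn acquire(&mut self, bits: f64) {
        loop {
            let now = Instant::now();
            let refill = now.duration_since(self.last).as_secs_f64() * self.rate_bps;
            self.tokens = (self.tokens + refill).min(self.burst_bits);
            self.last = now;
            if self.tokens >= bits {
                self.tokens -= bits;
                return;
            }
            let wait = (bits - self.tokens) / self.rate_bps;
            std::thread::sleep(Duration::from_secs_f64(wait));
        }
    }
}
```

A sender would call something like `acquire(1500.0 * 8.0)` before each datagram instead of blasting the whole encoded block at once.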
The reason why TCP will usually fall over at high loss rates is because many commonly-used congestion controllers assume that loss is likely due to congestion. If you were to replace the congestion control algorithm with a fixed send window, it'd do just fine under these conditions, with the caveat that you'd either end up underutilizing your links, or you'd run into congestive collapse.
I'm also not at all sure that the benchmark is even measuring what you'd want it to. I can't see any indication of retransmitting packets under TCP -- we're just sometimes writing the bytes to a socket (and simulating the delay), then declaring the transfer incomplete when not all the bytes show up at the other side? You can see that there's something especially fishy in the benchmark results -- the TCP benchmark at 50% loss finishes in 50% of the time... because you're skipping all the logic in those cases.
https://github.com/nyxpsi/nyxpsi/blob/main/benches/network_b...
The Mathis equation suggests that even with 50% packet loss at 1 ms RTT, the maximum achievable TCP [edit: Reno, IIRC] throughput is about 16.5 Mbps.
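Worked through in code, using the usual simplified form BW <= (MSS / RTT) * C / sqrt(p), assuming MSS = 1460 bytes and the constant C folded to ~1:

```rust
// Mathis et al. steady-state bound: BW <= (MSS / RTT) * C / sqrt(p)
fn main() {
    let mss_bits = 1460.0 * 8.0; // assumed MSS
    let rtt = 0.001;             // 1 ms, in seconds
    let p: f64 = 0.5;            // loss probability
    let bw = mss_bits / rtt / p.sqrt();
    println!("{:.1} Mbit/s", bw / 1e6); // prints ~16.5
}
```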
I can imagine noisy RF, industrial environments, congested links, or queueing at the extremes in densely loaded switches, but the thing is: for congestion there are usually already strategies out there to reduce it. External noise (factory, industrial, adversarial), sure, that's going to exist.
All I can see are hardcoded ping/pong “meow” messages going over a hardcoded client and server.
But maybe the ping/pong is part of the protocol?
It’s not clear.
Anyway, this redundancy-based protocol doesn't seem to take into account that pushing extra packets onto the network can itself be a cause of the bad, "overloaded" network conditions.
Raptorq is a nice addition, though.
The patents are hopefully expiring soon.
Generic tornado codes are likely patent-free, the patents having expired a few years ago: https://en.wikipedia.org/wiki/Tornado_code
EDIT: looking a bit deeper into this repo, it's really just a wrapper over the raptorq crate with a basic UDP layer on top. Nothing really novel here that I can see over `raptorq` itself.
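For comparison, this is roughly all it takes to use `raptorq` directly (adapted from the crate's README; the loss loop is my own stand-in for the network):

```rust
use raptorq::{Decoder, Encoder, EncodingPacket};

fn main() {
    let data: Vec<u8> = (0..10_000u32).map(|i| (i % 256) as u8).collect();

    // The encoder takes the whole object up front: fountain codes don't
    // give you streaming of not-yet-known data for free.
    let encoder = Encoder::with_defaults(&data, 1400);

    // 15 repair packets per block on top of the source packets.
    let packets: Vec<Vec<u8>> = encoder
        .get_encoded_packets(15)
        .iter()
        .map(|p| p.serialize())
        .collect();

    // Drop every tenth packet (~10% loss), decode from the rest.
    let mut decoder = Decoder::new(encoder.get_config());
    let mut result = None;
    for (i, pkt) in packets.iter().enumerate() {
        if i % 10 == 9 {
            continue; // "lost"
        }
        result = decoder.decode(EncodingPacket::deserialize(pkt));
        if result.is_some() {
            break;
        }
    }
    assert_eq!(result.unwrap(), data);
}
```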
Is 10% loss common for backbone networks? Maybe if you're dealing with horribly underprovisioned routers that don't have enough RAM for the traffic they're processing? Otherwise I'm not sure what the use case is…
Deep Space Communications, Satellite Network Reliability, First Responder Communication in Disaster Zones, Search and Rescue Operations Communication, Disaster Relief Network Communication, Underwater Sensors
> And all it takes to win the game is to transmit classical bits with digital error correction using hidden variables?
What it does _not_ do is anything resembling intelligent determination of appropriate bandwidth, let alone real congestion control. And it does not obviously handle the part that fountain codes don't give you for free: a way to stream out data that the sender didn't have in hand from the very beginning.