amluto
Just reading the description, I guessed it would be a fountain code. And, indeed, it’s a very, very thin wrapper around RaptorQ. So it gets data through a link that drops each packet independently with moderate probability, at approximately the maximum possible rate, in the sense of packets transmitted divided by total message size.

What it does _not_ do is anything resembling intelligent determination of appropriate bandwidth, let alone real congestion control. And it does not obviously handle the part that fountain codes don’t give you for free: a way to stream out data that the sender didn’t know in full from the very beginning.
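For reference, the entire happy path with the raptorq crate is roughly the following (a sketch based on the crate's documented API; the MTU and repair-packet count are illustrative). Note that the Encoder takes the whole message up front, which is exactly the streaming limitation above:

    use raptorq::{Decoder, Encoder};

    fn main() {
        // The encoder needs the entire message before it can emit a
        // single packet, hence no streaming for free.
        let data: Vec<u8> = (0..10_000u32).map(|i| (i % 256) as u8).collect();

        // Split into ~1400-byte packets (illustrative MTU).
        let encoder = Encoder::with_defaults(&data, 1400);

        // Source packets plus 15 repair packets per block; any
        // sufficiently large subset reconstructs the message.
        let packets = encoder.get_encoded_packets(15);

        // The receiver needs only the small transmission config and
        // "enough" packets; it doesn't matter which ones survive.
        let mut decoder = Decoder::new(encoder.get_config());
        for packet in packets {
            // In real use, some of these never arrive.
            if let Some(recovered) = decoder.decode(packet) {
                assert_eq!(recovered, data);
                break;
            }
        }
    }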

nu11ptr
This is neat, but I'm a little confused about these benchmark numbers and what exactly they mean. For example, with 10% or 50% packet loss you aren't going to get a TCP stream to do anything reasonable. It will seem to just "pause" and make very, very slow progress. When we talk about loss scenarios, we typically mean single-digit loss, and more often well under 1%. Scenarios of 10 to 50% loss are catastrophic, where TCP effectively ceases to function, so if this protocol works well in that environment, it is an impressive feat.

EDIT: It probably also needs to be clarified which TCP congestion control algorithm they are using. TCP standards just dictate framing, windowing, etc., but algorithms (e.g. Reno, CUBIC, BBR) are free to use their own strategies for retransmission and bursting, and the algorithm used makes a big difference in differing loss scenarios.
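For instance, on Linux the controller can be selected per socket; a rough sketch in Rust (Linux-only, assumes the libc crate, and the named algorithm must be available in the running kernel):

    use std::net::TcpStream;
    use std::os::unix::io::AsRawFd;

    // Ask the kernel to use a specific congestion control algorithm
    // ("reno", "cubic", "bbr", ...) for this socket.
    fn set_congestion_control(stream: &TcpStream, algo: &str) -> std::io::Result<()> {
        let ret = unsafe {
            libc::setsockopt(
                stream.as_raw_fd(),
                libc::IPPROTO_TCP,
                libc::TCP_CONGESTION,
                algo.as_ptr() as *const libc::c_void,
                algo.len() as libc::socklen_t,
            )
        };
        if ret == 0 { Ok(()) } else { Err(std::io::Error::last_os_error()) }
    }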

EDIT 2: I just noticed the number in parens is transfer success rate. Seeing 0% at 10 and 50% loss for TCP sounds about right. I'm still not sure I understand their UDP numbers, though: UDP isn't a stream protocol, so the raw transferred data should be 100% minus the loss %, unless they are running some protocol on top of it.
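(I.e. at 10% loss you'd expect roughly 90% of the datagrams to arrive, and at 50% loss roughly half.)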

iczero
Hi, respectfully, you do not appear to understand how networking works.

> https://github.com/nyxpsi/nyxpsi/blob/bbe84472aa2f92e1e82103...

This is not how you "simulate packet loss". You are not "dropping TCP packets". You are never giving your data to the TCP stack in the first place.
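If you want loss that the TCP stack actually has to recover from, emulate it below the socket, e.g. with netem on Linux (`tc qdisc add dev lo root netem loss 10%`), and run a real transfer through that.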

UDP is incomparable to TCP. Your protocol is incomparable to TCP. Your entire benchmark is misguided and quite frankly irrelevant.

As far as I can tell, absolutely no attempt is made whatsoever to retransmit lost packets. Any sporadic failure (for example, wifi dropout for 5 seconds) will result in catastrophic data loss. I do not see any connection logic, so your protocol cannot distinguish between connections other than hoping that ports are never reused.

Have you considered spending less time on branding and more time on technical matters? Or was this supposed to be a troll?

edit: There's no congestion control nor pacing. Every packet is sent as fast as possible. The "protocol" is entirely incapable of streaming operation, and hence message order is not even considered. The entire project is just a thin wrapper over the raptorq crate. Why even bother comparing this to TCP?

vitus
Are we assuming that ~all packet loss is due to the physical medium, and almost none due to congestion?

The reason why TCP will usually fall over at high loss rates is because many commonly-used congestion controllers assume that loss is likely due to congestion. If you were to replace the congestion control algorithm with a fixed send window, it'd do just fine under these conditions, with the caveat that you'd either end up underutilizing your links, or you'd run into congestive collapse.

I'm also not at all sure that the benchmark is even measuring what you'd want it to. I cannot see any indication of attempting to retransmit packets under TCP -- we're just sometimes writing the bytes to a socket (and simulating the delay), then declaring the transfer incomplete when not all the bytes showed up at the other side? You can also see something especially fishy in the benchmark results -- the TCP benchmark at 50% loss finishes in 50% of the time... because you're skipping all the logic in those cases.

https://github.com/nyxpsi/nyxpsi/blob/main/benches/network_b...

The Mathis equation suggests that even with 50% packet loss at 1ms RTT, the max achievable TCP [edit: Reno, IIRC] throughput is about 16.5 Mbps.
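Back of the envelope, using the simplified form (constant factor dropped) and a 1460-byte MSS:

    throughput <= MSS / (RTT * sqrt(p))
                = (1460 * 8) bits / (0.001 s * sqrt(0.5))
               ~= 16.5 Mbps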

HippoBaro
It would be great to know a bit more about the protocol itself in the readme. I’m left wondering whether it’s reliable, connection-oriented, stream- or message-based, etc.
dools
Since it is so stable under lossy conditions, it might be a good candidate for VoIP applications. The protocol du jour for VoIP is UDP, because most of the time you really want to just drop a packet and move on rather than try to retransmit it. But since the transfer speed here seems immune to the effects of packet loss, and it performs just as well as UDP in a 0% packet loss environment, it seems like it would produce superior call quality more consistently than either TCP or UDP.
ggm
What are the conditions leading to extreme packet loss in layers 2&3 in the first place?

I can imagine noisy RF, industrial interference, congested links, or queueing at the extremes in densely loaded switches, but the thing is: for congestion, there are usually strategies out there to reduce it. External noise, whether factory/industrial or adversarial, sure, that is going to exist.

e-dant
What’s the protocol actually?

All I can see are hardcoded ping/pong “meow” messages going over a hardcoded client and server.

But maybe the ping/pong is part of the protocol?

It’s not clear.

Anyway, this redundancy-based protocol doesn’t seem to take into account that sending extra packets can itself be a cause of bad, “overloaded” network conditions.

Raptorq is a nice addition, though.

mmastrac
Went looking for fountain codes, was not disappointed [1]. It's a shame these have been locked up for so long -- there's a lot of network infrastructure that could be improved by them.

The patents are hopefully expiring soon.

Generic tornado codes are likely patent-free, the patents having expired a few years ago: https://en.wikipedia.org/wiki/Tornado_code

EDIT: looking a bit deeper into this repo, it's really just a wrapper over the raptorq crate with a basic UDP layer on top. Nothing really novel here that I can see over `raptorq` itself.

[1] https://crates.io/crates/raptorq

rasengan
First time I'm hearing of raptor codes; they seem a lot more efficient for this use case, for which I had until now been using KCP.
vlovich123
IIUC you could tunnel TCP/UDP traffic over this on WiFi and cellular links, so that you get the advantages without needing to migrate software. But then again, those links already employ FEC (maybe not raptor codes, but that’s a relatively simple update). Or does tunneling not work?

Is 10% loss common on backbone networks? Maybe if you’re dealing with horribly underprovisioned routers that don’t have enough RAM for the traffic they’re processing? Otherwise I’m not sure what the use case is…

adamch
This is interesting. But how would you use it? You'd need to open up a new type of socket (neither TCP nor UDP but nyxpsi) and everything along your network route would need to support it. So it wouldn't be useful with existing mobile networks (middle boxes won't speak the protocol) nor within the data center (because it's used for high packet loss situations). So what's the use case? Custom wireless networks with embedded devices?
RobinHirst11
This could be great for well-funded domains where links are inherently lossy and reliability is worth paying for:

Deep Space Communications, Satellite Network Reliability, First Responder Communication in Disaster Zones, Search and Rescue Operations Communication, Disaster Relief Network Communication, Underwater Sensors

westurner
Re: information sharing by entanglement, CHSH, and the Bell test: https://news.ycombinator.com/item?id=41480957#41515663 :

> And all it takes to win the game is to transmit classical bits with digital error correction using hidden variables?

notpushkin
Show HN?
nodeshiftcloud
Interesting project! It's great to see efforts to tackle high packet loss scenarios using fountain codes like RaptorQ. However, the benchmarks could use more clarity. Comparing Nyxpsi to TCP and UDP without considering factors like congestion control, latency, and retransmission strategies might not give a complete picture. It would be helpful to see how Nyxpsi performs under different network conditions, especially with varying latencies and in real-world environments. Also, providing more detailed documentation about the protocol's operation and its handling of issues like streaming and congestion control would be beneficial. Looking forward to seeing how this evolves!