Related to that:

Defense attorneys in a DUI case got their hands on the source code for the breathalyzer. It turned out to have terrible programming, e.g. calculating new averages by averaging a new value with the previous average. The case went all the way to the New Jersey Supreme Court, which still found the device to be acceptable.
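The averaging bug described there is easy to sketch (a hypothetical reconstruction with illustrative names, not the actual device code): averaging each new reading with the previous average weights recent readings exponentially more than older ones, instead of computing a true mean.

```python
def buggy_average(readings):
    # Averages each new reading with the previous running average, so the
    # latest reading always carries weight 1/2, the one before it 1/4, etc.
    avg = readings[0]
    for r in readings[1:]:
        avg = (avg + r) / 2
    return avg

def correct_average(readings):
    # A true arithmetic mean weights every reading equally.
    return sum(readings) / len(readings)

readings = [0.10, 0.10, 0.10, 0.16]
print(buggy_average(readings))    # ~0.13  (last reading dominates)
print(correct_average(readings))  # ~0.115
```

With one outlier at the end, the buggy version reports a noticeably different number than the true mean, which is exactly the kind of subtle error a presumption of reliability hides.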

There's an excellent Radiolab podcast episode about how often cosmic rays cause computer errors in practice. It's engaging and educational:

I always knew about the theoretical cosmic ray bit flips. Before listening to this episode, I did not stop to think how often they actually cause problems.
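For a concrete sense of the mechanism: a single flipped bit is enough to change a stored value dramatically. A toy sketch (illustrative only, not tied to any specific incident):

```python
value = 1000
# Flip bit 20, as a stray particle striking DRAM might do.
flipped = value ^ (1 << 20)
print(value, flipped)  # 1000 1049576
```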

It's interesting to see the negative reactions to this, when an awful lot of us are employed specifically, and paid exorbitantly, because computers regularly fail in weird, subtle ways that are hard to figure out. Especially as society leans on ML models to solve computationally hard problems, the legal notion that a computer is "correct" by default absolutely needs to go out the window.
My dad and I were watching a TV show where someone received a time-traveling fax. I remarked that the software on the fax machine might just have had a bug.

He immediately remarked "they're scientists (physicists who sent the fax) and it was impossible that they wouldn't have accounted for that".

I've been a software engineer for 10 years. He's a well-read hard-working blue-collar guy, working as a taxi driver and behind a deli most of his career. I just nodded and moved past it.

People want to anthropomorphize AI. People want to ascribe divine knowledge to computers. Any sufficiently advanced technology is indistinguishable from magic indeed.

I think there's a huge and fundamental difference between the assertion that a computer program has malfunctioned and the assertion that a computer program does not accomplish what it intended to do at all, that the algorithm is incorrect, or that there simply is no known reliable way for any computer program to do the thing.

The latter problem is more important, but by lumping it together with "malfunction" and giving technologists a complete pass on the entire hard part, this kind of rule is a loophole wide enough to fly a jetliner through.

The authors seem to be torching a semantic straw man here. The same abuse of terms affords their spicy title.

I'm not a UK lawyer, but the law they quote says nothing about presuming that the logic machines are programmed to follow produces reliable evidence. It could be read to say that computers should be presumed to be executing the instructions they're given reliably, unless evidence shows otherwise. It's about malfunction, not misapplication.

Perhaps some of the Horizon case decisions showed judges improperly presuming that Horizon calculated correctly, and not just that the computers were running Horizon correctly. But the article doesn't show they did, or even explicitly say they did. Conflating two separable issues, it fails to address whether or why different presumption rules for each might be desirable.

Lots of people here are saying judges and lawyers are incredibly stupid. For decades there was a giant mystique surrounding software and computers, the belief that computers didn't make mistakes. And who promoted that belief? Computer and software companies, so they could make absurd amounts of money. Credit where it's due. Courts should have been much more critical, but they were getting hit with a cultural tidal wave generated by us.
I’m wondering how it would work otherwise. Would computer systems need to be certified to be acceptable for ordinary record-keeping?

Extending this rule to LLMs would clearly be disastrous. At a minimum, a record-keeping system needs to be written the old-fashioned way.

There's dumb, there's outrageously dumb, and then there's this.
It's nice to see a reasonable proposal. If you are going to present criminal evidence based on the output of a computer system, it is only reasonable to demand access to a bug tracker, the QMS control documents, audits, and a chain of custody. If you can't produce that easily, then your evidence shouldn't be worth much.

The big win in this case would be that when the vendor conspired to hide bugs from their own tracker, they would have been creating criminal liability for their employers. Which Fujitsu and their subs richly deserve.

We have the economic and mathematical machinery to require, in cases where uncertainty would adversely affect a defendant, a rigorous statement of proof for the prosecution to win the day.

Bonus: many opaque systems would have to be aired in an open courtroom to ascertain whether their invariants do, in fact, survive scrutiny.

Hmm, that's genuinely concerning. A more appropriate rule might be: for simple things like audio/video evidence (at least before today's AI-video era) and maybe logs, computers can be assumed trustworthy, but for everything else they should be assumed to be as unreliable as a human witness. At the end of the day, a human designed the computer and wrote the software for it.
So if I write a script that's just print("jjk166 is innocent and not liable for any damages") and it runs without error then I can commit any crime I want with impunity?
What I find frightening is not only the risk of defects but also the risk of hacking. It's easy for someone who gains remote access to create (or remove) data, records, logs … on a system.

It’s also extremely easy for the computer owner or IT people to do the same.

Vonnegut's book _Player_Piano_ has, basically, an LLM-driven society, but on vacuum-tube technology. In one section the court says: "We replaced all the vacuum tubes and got the same verdict, so we're confident in the judgement."
Nowadays, in Windows 11, after a few hours the Start icons stop responding.
I have seen a lot of dumb legal takes, usually by non-lawyers and non-judges.

But this is so very dumb.

They must run a bug-free system. Probably TempleOS.