mindcrime
The opening few pages of this read like they were written by somebody with an axe to grind, which makes me suspicious of the rest. Why? Well, because having an "axe to grind" can motivate one to start with a conclusion and go looking for ways to justify it. And you can almost always talk yourself into believing you've proven something you already really want to believe.

"But mindcrime, there's a mathematical proof. How can you argue with math?"

To be fair, I didn't read their entire proof. I skimmed some bits of it, and while I can't say it's wrong, I didn't find it very convincing at first blush. My initial read left me thinking that the proof rests on assumptions that may be unfounded and may not hold up.

Some of my skepticism may also be rooted in the way the paper seemed to weave back and forth between claiming to show that "AGI is computationally intractable" and that "AGI is unachievable in the short term". Those are two substantially different arguments, and it's still not clear to me which one the authors were really aiming for.

I dunno. I gave up before getting through it all. I'll wait to see if others find it compelling and decide whether or not it's worth going back to.

Also, see earlier discussion:

https://news.ycombinator.com/item?id=41689558

alexander2002
eli5 from chatgpt: Imagine human thinking is like a super complicated puzzle. When cognitive science (studying how we think and learn) was just starting, people thought of Artificial Intelligence (AI) as a special toolbox that could help solve parts of this puzzle. But now, many people working on AI are trying to build robots or computers that can solve the entire puzzle by themselves, just like a human would. This paper says that's really, really hard—so hard that we probably can't do it. The paper also says that if we believe these robots or computers are just like us, we're getting the wrong idea about how our own minds work. It's like using a map of a different place to try and find your way home—it doesn't work and just makes things confusing. The paper suggests we should use AI like a toolbox again, to help us understand our minds better, but we need to be careful not to make the same mistakes we did before.
keikobadthebad
This doesn't feel like it will age well.