proc0
There's the term "narrow AI," which is usually used for specialized systems like visual detection, and I personally think that what we still have is narrow AI; we just now have it for language and, recently, for video. Even if all of these modalities are stitched together, I think it's still narrow AI... so at what point does it become AGI? I think it becomes AGI when the system understands the world the way humans do and also demonstrates human-level agency. You would have a system that can maintain itself: it changes its own batteries, it fixes its own bugs (to some extent, not absolutely), and it understands the world of humans well enough to engage in it without supervision. Until then, I think we'll just have really, really good narrow AI, or systems of narrow AI.
SirensOfTitan
AI has always had a terminology problem, perhaps because cognitive science does too; one of the key features of the controversial but highly entertaining “The Origin of Consciousness in the Breakdown of the Bicameral Mind” was defining what consciousness isn’t.

My mom was diagnosed three years ago with early-onset Alzheimer’s disease, and it has progressed rapidly. One thing I’ve noticed is how much projection is involved in appraising intelligence: it’s hard not to get worked up around someone with dementia because, in your mind, it’s a version of you and your capabilities looking back at you. The actual state of the brain with dementia is entirely ineffable to the perceiver.

I wonder whether a similar effect occurs when people appraise LLMs. LLMs can sound human enough to trigger that automatic projection, so actually evaluating the machine’s intelligence becomes extremely nontrivial: the observer’s theory of mind gets projected onto the LLM.

I’m largely with Yann LeCun here: I think current LLMs are magical, but their key power is extremely effective fuzzy search. They will find a pragmatic way into software, but barring further innovations they will not be a revolution, though more innovations are on the horizon.

The traditional strong-AI characteristics are theory of mind and consciousness, so ultimately defining those terms narrowly and precisely is going to be important. I doubt we’ll see a clear and unambiguous leap to AGI; it will be gradual, so agreeing on terms will only become more important. To my eyes, real metacognition does feel like a clear attribute of intelligence.

Jensson
> As capabilities are currently being developed in exponential time

What do you mean? It doesn't seem exponential to me: exponential would mean things happen faster and faster, but the capabilities seem to be developing slower and slower.

Or do you mean a negative exponential?
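To make the distinction concrete, here is a minimal sketch (Python, with illustrative numbers of my own, not anything from the thread) contrasting the two shapes: true exponential growth accelerates at every step, while a saturating "negative exponential" curve grows quickly at first and then slower and slower as it approaches a ceiling.

    import math

    # Illustrative comparison of the two growth shapes over a few time steps.
    for t in range(6):
        exponential = 2 ** t                   # doubles every step: 1, 2, 4, 8, 16, 32
        saturating = 100 * (1 - math.exp(-t))  # climbs toward a ceiling of 100, ever more slowly
        print(f"t={t}: exponential={exponential:>2}, saturating={saturating:5.1f}")

The first column matches the "faster and faster" intuition; the second matches "slower and slower," which is the pattern a negative exponential describes.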