In other words: here, as in many areas, there is no incentive to dig deep, while there are plenty of incentives to stay on the surface and tell scary stories about AGI doomsday to journalists who barely have writing skills, let alone any philosophical or logical grounding.
See, for example, the Buddhist descriptions of the jhanas, progressive levels of consciousness in which meditators peel back the layers of personality and ordinary human awareness until only pure awareness, and beyond, remains. It's hard to read about (and experience, albeit only the initial stage in my case) such states and still believe that consciousness derives from thought, as philosophers like to claim (no, Descartes, you are not just because you think).
It's for this reason I don't buy the AGI hype. Maybe fundamental breakthroughs in computation and storage will someday allow better simulations, but not any time soon, since these traditions suggest consciousness isn't emergent. Most AGI researchers are barking up the wrong tree. Still, the hype boosts valuations, so perhaps it's in their best interests anyway.
Philosophers can get so wrapped up in thoughts that they say nonsense like "I can't comprehend not having an internal monologue," a state you can experience any time you watch a film, listen to music, etc. Anyone with even the smallest experience of meditation shouldn't fall into such thought traps.
1. Sloppy, unclear thinking. I see this constantly in discussions about AGI, superintelligence, etc.: unclear definitions, bad arguments, speculation about a future religion "worshipping" AIs, treating sci-fi scenarios as somehow indicative of the field's future progress, on and on. It makes me long for the days of the early-to-mid 20th century, when scientists and technicians were both technically and philosophically educated.
2. The complete and utter lack of ethical knowledge, which in practice means AI companies adopt whatever flavor-of-the-day ideology is being touted as "ethical." Today, that seems to be DEI, although it appears to have peaked; tomorrow, it'll be something else. For most researchers, the depth of "ethics of AI" or "AI safety" seems to extend no further than whatever society at large currently finds unpleasant.
I have been kicking around the idea of starting a blog/Substack about the philosophy of AI and technology, mostly because of this exact issue. My only hesitation is that I'm unsure what the monetization model would be, and I already have enough work to do and bills to pay. If anyone would find this interesting, please let me know.