keiferski
As someone with a background in philosophy: very few researchers seem to care about philosophy, unfortunately. (At least from the outside looking in: I don't actually work in the AI field, so perhaps all the philosophically-inclined researchers simply don't talk much.) This has two major negative consequences:

1. Sloppy, unclear thinking. I see this constantly in discussions about AGI, superintelligence, etc. Unclear definitions, bad arguments, speculation about a future religion "worshipping" AIs, using sci-fi scenarios as somehow indicative of the field's future progress, on and on. It makes me long for the days of the early-to-mid 20th century, when scientists and technicians were both technically and philosophically educated people.

2. The complete and utter lack of ethical knowledge, which in practice means AI companies adopt whatever flavor-of-the-day ideology is being touted as "ethical." Today that's DEI, though it appears to have peaked. Tomorrow it'll be something else. For most researchers, the depth of "AI ethics" or "AI safety" seems to depend entirely on whatever society at large currently finds unpleasant.

I have been kicking around the idea of starting a blog/Substack about the philosophy of AI and technology, mostly because of this exact issue. My only hesitation is that I'm unsure what the monetization model would be, and I already have enough work to do and bills to pay. If anyone would find this interesting, please let me know.

aristofun
If they did, we wouldn't have the bubble we have today, NVidia stock wouldn't be overpriced, etc.

In other words, here as in many areas there is no incentive to dig deep, while there are plenty of incentives to stay on the surface and tell scary stories about AGI doomsday to journalists who barely have writing skills, let alone any philosophical or logical foundations.

uptownfunk
I am more interested in meta-cognition, advanced cognitive architectures, and learning about how humans learn, then figuring out how we get machines to do that better and faster.

nprateem
Philosophy is just thoughts and thoughts about thoughts. To really understand consciousness, researchers should study meditation (for direct experience) and traditions such as Buddhism that have been studying consciousness for millennia.

See, for example, the Buddhist descriptions of the jhanas, progressive levels of consciousness in which meditators peel back the layers of their personality and ordinary human awareness and end up in pure awareness and beyond. It's hard to read about (and experience, albeit only the initial stage in my case) such things and not come away convinced that consciousness doesn't derive from thought, as philosophers like to believe (no, Descartes, you are not just because you think).

It's for this reason I don't buy the AGI hype. Maybe after fundamental breakthroughs in computation and storage allow better simulations, but not any time soon, since these traditions tell us consciousness isn't emergent. Most AGI researchers are barking up the wrong tree. Still, the hype boosts valuations, so perhaps it's in their best interests anyway.

Philosophers can get so wrapped up in thoughts that they say nonsense like "I can't comprehend not having an internal monologue", which you can experience any time you watch a film, listen to music, etc. Anyone with even the smallest experience of meditation shouldn't fall into such thought traps.

orobus
I've got a background in philosophy and I'm constantly asking myself these questions too. There seems to be a two-way failure: first, the ML folks failing to engage with the extant literature, and second, academic philosophy's failure to produce anything remotely resembling concrete, practical, or really even relevant philosophical work. The former is pretty much par for the course (in academic philosophy you get used to being ignored early on), but it's the latter I find egregious, especially given (as others here have pointed out) the almost universally lazy and magical thinking in the ML space. For example, there's much hand-wringing about the benefits and perils of "AGI" with barely any attempt to establish that "AGI" is even a coherent concept. I'm skeptical that it is, but I'd be happy to entertain arguments to the contrary, if there were any! "AI" has become a marketing term for increasingly sophisticated statistical methods for approximating functions. I think some sober discussion about whether such brute-force induction is the right sort of thing to warrant the term "AI" would be a welcome addition.

akasakahakada
Doing real stuff vs. just talking. The universe doesn't cohere with the results of your philosophical thought experiments, e.g. quantum mechanics, black holes, etc. Scientific discoveries have given people evidence to pursue science, but what about philosophy? What evidence do you have to prove that you have value?