idempotent_
I recall having a few interviews for MLOps jobs in 2022/2023 that had a sort of AI safety component to them. Not necessarily a deep philosophical debate, but a conversation about AI limits, user discovery and usability, that sort of thing, though I did touch on alignment, guardrails, and responsible computing.

Recently I did a few interviews in the same space and this wasn't even a consideration - I think the genie is out of the bottle. Signaling about AI safety and alignment is being shed in favor of technical execution and speed, probably because we're starting to see a plateau forming in machine intelligence and realizing how far off AGI/ASI truly is.