rozenmd
Have them perform a task similar to what you want to hire them for.

In my case, I had them build a feature in a React app. One candidate solved it faster than I did while explaining their approach and trade-offs live, so we hired them.

solardev
Maybe a contrary opinion, but who cares if they use AI? We shouldn't expect juniors to use a different, artificially handicapped workflow from the rest of us.

Test them on the actual work, whatever it is. Write multiple example questions, or one big take-home project similar to the real work they'd be doing, and let them solve it organically, as they would on the job. If they pass it with AI, cool, they managed to use it effectively. If they fail it with AI, that's on them. You don't need to go out of your way to purposely confuse the AI OR the human. Working together is going to be the new norm, and the AI is just another assistant or teammate.

IMO you're not testing "is this person super smart and able to recite LeetCode from memory without any internet access or reference materials". You're testing "is this person going to work well on our team and contribute meaningfully". You wouldn't disrespect your current coworkers by asking "how much AI did you use in this PR" if the code otherwise works, passes tests, and seems readable and maintainable (and if it doesn't, that's a code quality concern, not a question of human vs AI provenance).

Juniors should have the same ability to use the tooling available to the rest of us. If they choose never to understand AI output more deeply, well, the same could be said of any third-party lib or Stack Overflow post. It's usually not a deal-breaker in run-of-the-mill business apps anyway. In fact, there's an argument to be made for adopting good-enough popular solutions over reinventing the wheel and adding more tech debt for minimal gains.

muzani
I just treat ChatGPT more like a framework than anything. Don't interview for problems that can be solved with a framework. You wouldn't test a web dev by asking them to set up a blog.

Red, green, refactor. Make sure ChatGPT fails the test. Make sure a human can pass the test. Then adjust accordingly. If you can't do this, then you should pay for a senior who can.

euvin
I would think the answer is to peer into the applicant's thought process through conversation and decide if they're capable enough for the job, LLM or not.

Are you looking for specific criteria on how to judge junior applicants?

austin-cheney
In big corporate software, most developers are beginners; some have just remained beginners for 8-12 years. That is what you want to avoid.

Expect non-juniors to be able to write software on their own. This is a colossal ask, because most developers cannot do it. Instead they bullshit with tech stacks and configurations, and expect open source tools to do their jobs for them. When real problems occur, they are worthless.

So, expect non-juniors to actually write software. For junior developers, you are looking for potential: the potential to independently write software in the future.

I would look for people who can read, write, and follow instructions. Be extremely critical about this. Can they figure things out, or do they require tremendous hand-holding? It's not about what they already know but what they can do and what they can figure out.

Hopefully the software industry will figure this out. It's why I stopped developing: I got tired of working with people who are grossly incompetent, fully reliant on tech stacks, and extremely superficial and insecure. A bunch of expert developers inventing unnecessary complexity to justify their many years of inexperience in the line of work.