What you’ll learn
Resume-level signals, the AI-proficiency probe (tourists vs. power users), and how to grade customer-facing readiness in a 30-minute screen.
This is the most important new screening dimension in 2026, and the one most prone to performative answers. The probes below reveal whether a candidate has actually shipped with AI tools or just memorized the vocabulary.
| Probe | Tourist | Power user |
|---|---|---|
| Show me your CLAUDE.md or .cursorrules file. | ‘I don’t have one.’ | Shares it; explains each line. |
| Difference between a Cursor rule, skill, and command? | Confused. | Rules guide (always-on context); skills do (procedural, on-demand); commands trigger (saved prompts). |
| When do you use plan mode vs. agent mode? | Doesn’t know. | Specific examples (‘plan mode for >2-file changes’). |
| Last time the AI was confidently wrong, how did you catch it? | Vague or ‘I didn’t.’ | Specific story with rejection signal. |
| How do you keep context windows from filling up? | ‘I just start a new chat.’ | References checkpointing, sub-agents, scoped @file mentions. |
| MCP: what is it, and what have you wired up? | Doesn’t know, or gives only a broad definition. | Names servers used (github, postgres, custom internal). |
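For interviewers who haven't seen one of these files, here is a sketch of the kind of CLAUDE.md a power user might share. Every rule below is invented for illustration; the tell is not the content but whether the candidate can explain the incident behind each line.

```markdown
# CLAUDE.md — project conventions (hypothetical example)

## Context
- Monorepo: `api/` (Go services), `web/` (TypeScript/React)
- Run `make test` before proposing any commit

## Rules
- Never edit generated files under `gen/`
- Prefer table-driven tests; match the existing style in each package
- For changes touching more than 2 files, propose a plan before editing

## Gotchas
- `api/internal/auth` is security-sensitive: flag any change for human review
```

A power user can tie most lines to a specific past AI mistake; a tourist either has no such file or can't explain why any line is there.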
Cheating is rising fast
Fabric's data shows detected interview cheating jumped from 15% in June 2025 to 35% by December 2025. Behavioral signals include 4–5-second response delays, robotic eye movements, burst typing, and vocabulary mismatches between conversation and answer.
Probe 1
Tell me about the last time you sat with a non-engineer end-user and watched them use software you built. What did you change as a result?
Probe 2
Tell me about a time you said no, or pushed back, to a customer or stakeholder.
Reference-check version: ‘Give me an example of when [candidate] said no to a customer or pushed back.’
Key takeaways