Here’s a story about a job candidate I interviewed who seemed to be engaging in some sort of AI-assisted fraud.
Before the interview even occurred, there was already suspicion around the candidate. A couple of other interviewers observed a lag between the voice and the video, which they thought might be a sign of AI-generated video. And the resume felt way too perfect, claiming extensive experience in basically everything that we do.
I don’t know about the video lag. To the extent there are AI tools out there, I don’t presume to know exactly how they work, nor do I presume that I’m capable of distinguishing AI video from bad internet latency. So I took an approach that was different from the other interviewers: I looked at his LinkedIn page.
His LinkedIn page was a major red flag for me. His resume listed jobs at several staffing companies, each time working for a different client company. His LinkedIn page listed the staffing companies, but not the client companies. So there was basically no information about what he did or who he worked for. I thought he was likely making fake resumes tailored to each job application by changing the identities of the client companies, with the LinkedIn page kept deliberately vague to avoid obvious contradictions. However, he was not very careful: his LinkedIn page included an “about” section that mentioned expertise in healthcare. Healthcare wasn’t anywhere in his resume, so I planned to ask him about it.
I also found a Medium blog via one of his LinkedIn posts. It was one of those irritating professional blogs where people write about tech stuff not out of passion, but just to pretend that they’re actively engaged in the field they’re looking for work in. And it was very obviously AI-generated. Hereabouts I’m one of the more “pro” AI folks, but this is a bad look. Why would you advertise this about yourself? It doesn’t demonstrate knowledge or enthusiasm; it demonstrates an eagerness to show off, paired with a lack of anything to show.
I actively discussed this with the hiring manager, and there was back and forth about whether I should interview the candidate at all. Like, this is obviously great story material, but generating story material is not properly part of my job description. If he’s a fraud, the correct course of action is to not engage. But several other people interviewed him first, and nobody else seemed very confident about the fraud hypothesis. So I ended up having the interview.
From the very start, I asked him to introduce himself, and he listed client companies that did not match his resume! I pointed out the mismatch, and said we would discuss it later in the interview.
So the first part of my interview covered the take-home assignment that we give to all candidates. I pick an arbitrary section of their work, put it up on a shared screen, and ask them to discuss what they did and why they did it that way.
Initially I showed a notebook, and asked him to confirm it was his. He confirmed. Then I said, oops my mistake, that’s from a different candidate. After I switched to the right notebook, I asked him to explain the section on screen. He started talking about the general assignment, but I interrupted saying, please talk about what’s on the screen right now, section 5?
Then he dropped the call.
Five minutes later, he came back on, and claimed he couldn’t see the shared screen. We wasted more time figuring out what to do about that, and I eventually got him to share his own screen. This was the only part of the interview that proceeded as normal, and although it’s not really relevant to this story, it is my opinion that even under good faith assumptions he was one of the worst candidates I’ve ever interviewed.
So next we discussed his resume. I pointed out again the contradiction between his story and the resume. He claimed that I had the incorrect resume! I said, so this isn’t you? Am I interviewing the wrong person? I sent him his own resume and asked him to send me the correct one. He went silent for a bit, ostensibly to retrieve his resume. I never received it.
So I moved on and asked him to confirm that I had the correct LinkedIn page. I asked, “I saw on your LinkedIn that you have experience in healthcare, could you tell me about that?” He asked where I got that. I repeated, “It’s on your LinkedIn page.”
How do you think he got out of that one? I thought maybe he’d make something up about briefly working for a healthcare company, and claim that he left it out of the resume because it wasn’t relevant to the role he was currently interviewing for. Folks, that is not what he did.
Instead, he dropped the call, 5 minutes before the end of the interview. That was the last I heard from him.

Obviously it’s very funny how badly the scammer fucked up. It’s ego-boosting to me that I caught it.
But to be serious for a moment, it’s the wrong takeaway to think “scammers are easy to catch”. People get scammed because they’re overconfident in their ability to spot scams. When people get caught in a scam they have difficulty asking for help, because they think it’s embarrassing that they missed the scam to begin with.
This guy had been interviewed by four people before me, and none of them found definitive signs of fraud, even when they were looking for it. I respect my colleagues, so maybe it was just chance that it was obvious to me and not to them. Maybe I was speaking to a different guy, and my guy was incompetent. Maybe screen-sharing is an interview technique that trips up current AI tools. And AI tools develop over time, so what worked for me may not work universally.
I’m guessing that this guy was part of a fraud ring: an employee of an operation that lands jobs under false pretenses in order to hand them over to paying customers. He probably dropped the call a couple of times to talk to a supervisor; that’s how he came up with the lie that the screen share was broken. I suspect that the paying customers are not actually getting a good deal, and are also being scammed. But what do I know?
So what’s the takeaway? I don’t know, scammers bad?

telling whether text is AI is one area where most people overestimate their abilities very badly. i saw a guy being haughty in a comment section, claiming not only that his opponent was obviously using AI, but that his day job involved telling the difference. to me, his opponent was obviously a slightly pretentious ESL speaker. who was right?
the AI transvestigator should pretty much get fucked whether he was right or not. even if his opponent was using an LLM, he was making cogent points that deserved to be addressed – unlike the creep you interviewed.
my job does involve talking with scammers often enough and i’d be lying if i said i had a 100% success rate sussing them out. con artists are pretty much my most hated category of scum in the world and i’d gladly strangle them to death.
my experiences with them have retroactively ruined the movie “the sting” for me. paul newman and robert redford will stay dead if they know what’s good for them.
christ i use pretty much like napoleon dynamite
Scammers aren’t easy to detect, and scamming evolves. It used to be that someone who could talk a good game had an advantage; help from an AI will leverage that advantage.
Way back before the internet (1987) I was with a tiny startup that hired someone who turned out to be a fraud. I don’t recall if I interviewed him (the company was maybe a dozen people), but if I did, I missed it. The references were hard to check, as the claimed PhD was from overseas (Turkey?).
It was only after we hired him and worked with him for a few months that it became clear that his resume wasn’t reflective of his abilities, and our management did some additional digging to determine that we shouldn’t have hired him.
I deemphasized the AI angle because:
1) Fraud predates AI. A candidate can bullshit using traditional methods, as with the anecdote from Some Old Programmer.
2) It’s common to use AI to generate resumes and take-home assignments, and candidates are allowed to do that. The important thing is that they don’t misrepresent themselves and that they understand the material.
3) I think overfocusing on the AI isn’t the best path to fraud detection. IMO my colleagues were too caught up in the idea of identifying AI-generated video, when we don’t even know that he was doing that! He could just have been reading out an LLM response. The more durable fraud-detection method is validating against external information, such as LinkedIn.