CrowdStrike's latest Threat Report includes new information about China's increased targeting of North American telecommunications companies, Russia's continued efforts to support its invasion of Ukraine with cyberespionage, and other trends the security firm witnessed from July 2024 to June 2025. (Presumably excluding the period during which a faulty update to its software brought down global infrastructure.) But of particular interest is the sheer scale of North Korea's AI-supported tech worker schemes.
The company said that in the last 12 months, it has "investigated over 320 incidents where [North Korean] operatives obtained fraudulent employment as remote software developers" and that the hackers have "been able to sustain this pace by interweaving GenAI-powered tools that automate and optimize workflows at every stage of the hiring and employment process." Resumes? Fake. Social accounts? Fake. The person shown during a video call, the headshots, the messages they send? Fake, fake, fake.
"Once hired, [these] workers use GenAI code assistants [and] translation tools to assist with daily tasks and correspondence related to their legitimate job functions," CrowdStrike said. "Though an average employee may use GenAI in a similar manner, these tools—especially those enabling English-language communication—are especially crucial [to this group]. These operatives are not fluent in English, likely work three or four jobs simultaneously, and require GenAI to complete their work and manage and respond to multiple streams of communication."
We knew this had been happening: the Justice Department announced a flurry of arrests, sanctions, and investigations related to North Korea's fake tech workers in July. I noted at the time that U.S. officials started issuing warnings about these schemes in 2022, and that Google reported a similar uptick in activity in March, so CrowdStrike isn't pulling back the mask for the first time, as it were. But this new Threat Report drives home just how big the problem is.
It's kind of like watching an episode of "Scooby Doo" where the gang first reveals that some normal-seeming dude is a criminal. But so is that other dude, and this one at that other company, and... wait, actually they're all the same person, using a combination of laptop farms and chatbots to seem like different people. And whoops, it turns out Velma's an imposter too, which is why that HBO show was so bad. Oh, and unlike a cartoon villain, North Korea will continue to get away with this.
CrowdStrike's recommendations for identifying these imposter hackers include, among other things, the adoption of "enhanced identity verification processes during the hiring phase that include rigorous background investigations and corroboration of online professional profiles" and the implementation of "real-time deepfake challenges during interview or employment assessment sessions." But those approaches incur additional costs, and North Korea will find ways to circumvent them.
The masks are being pulled back. It doesn't seem to be making a difference. So what now?