Sometime last year, Ian Lamont's inbox began piling up with inquiries about a job listing. The Boston-based owner of a how-to guide company hadn't opened any new positions, but when he logged onto LinkedIn, he found one for a "Data Entry Clerk" linked to his business's name and logo.
As reported by Business Insider, Lamont soon realized his brand was being used in a scam, a suspicion he confirmed when he came across the profile of someone purporting to be his company's "manager." The account had fewer than a dozen connections and an AI-generated face.
He spent the next few days warning visitors to his company's site about the scam and convincing LinkedIn to take down the fake profile and listing. By then, more than twenty people had reached out to him directly about the job, and he suspects many more had applied.
Generative AI's potential to bolster business is staggering. According to one 2023 estimate from McKinsey, in the coming years it's expected to add more value to the global economy annually than the entire GDP of the United Kingdom. At the same time, GenAI's ability to almost instantaneously produce authentic-seeming content at mass scale has created the equally staggering potential to harm businesses.
Since ChatGPT's debut in 2022, online businesses have had to navigate a rapidly expanding deepfake economy, where it's increasingly difficult to discern whether any text, call, or email is real or a scam. In the past year alone, GenAI-enabled scams have quadrupled, according to the scam reporting platform Chainabuse.
In a Nationwide insurance survey of small business owners last fall, a quarter reported having faced at least one AI scam in the past year. Microsoft says it now shuts down nearly 1.6 million bot-based signup attempts every hour. Renée DiResta, who researches online adversarial abuse at Georgetown University, tells me she calls the GenAI boom the "industrial revolution for scams," as it automates frauds, lowers barriers to entry, reduces costs, and increases access to targets.
The consequences of falling for an AI-manipulated scam can be devastating. Last year, a finance clerk at the engineering firm Arup joined a video call with people he believed were his colleagues. In fact, every attendee was a deepfake recreation of a real coworker, including the organization's chief financial officer. The fraudsters asked the clerk to approve overseas transfers totaling more than US$25 million, and believing the request came from the CFO, he green-lit the transactions.
Business Insider spoke with professionals in several industries — including recruitment, graphic design, publishing, and healthcare — who are scrambling to keep themselves and their customers safe against AI's ever-evolving threats. Many feel like they're playing an endless game of whack-a-mole, and the moles are only multiplying and getting more cunning.
Last year, fraudsters used AI to build a French-language replica of Oishya, an online Japanese knife store, and sent automated scam offers to the company's 10,000-plus followers on Instagram. The fake company told the real company's customers that they had won a free knife and needed only to pay a small shipping fee to claim it; nearly 100 people fell for it. Kamila Hankiewicz, who has run Oishya for nine years, learned about the scam only after several victims contacted her asking how long they needed to wait for the parcel to arrive.
It was a rude awakening for Hankiewicz. She's since ramped up the company's cybersecurity and now runs campaigns to teach customers how to spot fake communications. Though many of her customers were upset about getting defrauded, Hankiewicz helped them file reports with their financial institutions for refunds. Rattling as the experience was, "the incident actually strengthened our relationship with many customers who appreciated our proactive approach," she says.
For now, small business owners should stay vigilant, says Robin Pugh, the executive director of Intelligence for Good, a non-profit that helps victims of internet-enabled crimes. They should always verify that they're dealing with an actual human and that any money they send is going where they intend it to go.