
Deepfake Scams Just Stole $25 Million From a Single Company

That day, the conference room most likely appeared unremarkable: a glowing computer screen on a desk, a spreadsheet open somewhere in the background, the customary tension that permeates finance departments whenever significant transfers come up. Yet the meeting itself, as captured by a laptop camera, was completely fabricated.

In early 2024, an employee of the multinational engineering firm Arup authorized a series of wire transfers worth over $25 million. The decision seemed legitimate at the time. The worker had joined a video conference with coworkers, one of whom appeared to be the company's chief financial officer. The faces were recognizable. The voices sounded right. The instructions were clear.

Incident: $25 Million Deepfake Fraud Case
Company Involved: Arup
Industry: Engineering and design consultancy
Year of Incident: 2024
Amount Stolen: Approximately $25.6 million (HK$200 million)
Method Used: AI-generated deepfake video and audio impersonating executives
Location of Fraudulent Transfers: Hong Kong financial accounts
Investigation: Ongoing with Hong Kong authorities
Reference: https://www.weforum.org/stories/2025/02/deepfake-scam-arup/

The issue was that none of those individuals were genuine. Deepfake recreations—AI-generated faces and voices assembled from publicly accessible footage of the company’s executives—were what showed up on the video call. The thieves had meticulously researched their targets, compiling video footage from interviews and business gatherings. They created a digital illusion that was convincing enough to pass for an actual conversation using those fragments.

The details make it hard not to feel uneasy. The attack did not exploit software flaws or breach servers. Instead, it exploited something older and more brittle: human trust. After receiving an urgent email about a “secret transaction,” the employee initially suspected something was wrong. Once the video meeting started, however, that skepticism subsided. The request felt authentic when several coworkers who looked exactly like the real ones nodded on screen.

The transfers were finished in a matter of hours. Approximately 200 million Hong Kong dollars were transferred into fraudsters’ accounts through fifteen different transactions. The entire operation took place in a single day, indicating that the attackers were sufficiently familiar with corporate procedures to overcome the victim’s reluctance before doubts reappeared.

The realization didn’t come until much later. Investigators claim that the worker eventually got in touch with the corporate office to inquire about the status of the private agreement. There, executives were perplexed. There was no such transaction. Such a meeting had not been planned. Finding out that everyone in the video call had been a digital puppet must have been a terrifying moment.

As this story develops, it seems that the way cybercrime operates has fundamentally changed. Phishing emails and fictitious invoices were the mainstays of corporate fraud for many years, but they frequently fell apart as soon as someone answered the phone. That dynamic is altered by deepfakes. By producing visual confirmation, they strengthen the impression that decision-makers are present and in agreement.

This is sometimes referred to by experts as “technology-enhanced social engineering.” The underlying idea is straightforward, despite the phrase’s technical sound. With increasingly realistic digital tools, criminals are learning to manipulate human psychology.

The technology is surprisingly user-friendly. You can now download or buy software that can produce synthetic video or clone voices online. Sometimes all it takes to create a convincing deepfake is a few minutes of recorded speech. In less than an hour, a cybersecurity executive who experimented with the technology claimed to have created a rough replica of his own face.

That particular detail lingers. It implies that the barrier to entry for these crimes is declining faster than businesses can adapt their defenses. Traditional cybersecurity tools such as intrusion detection, firewalls, and antivirus software are designed to keep attackers out of systems. They are far less effective when the entire deception takes place through ordinary communication channels.

That vulnerability was made uncomfortably clear by the Arup incident. Financial transfer decisions within large organizations frequently depend on approval from higher-ups. Employees are trained to respond promptly when a request seems to originate from upper management. In corporate culture, authority, secrecy, and urgency are all potent signals.

Attackers using deepfakes are aware of this. They create scenarios that appear to be authentic executive orders by imitating internal processes. According to reports, the employee in this instance thought the transaction was a part of a private agreement that called for discretion. The likelihood of verification with other coworkers was probably diminished by that detail alone.

As of early 2025, the stolen money had not been recovered. Hong Kong authorities are still investigating, though little information about the perpetrators has been made public. That uncertainty adds to the tension: cybercrime investigations frequently span jurisdictions, tracking digital traces that wind through banks and servers dispersed across continents.

Meanwhile, companies everywhere are quietly reassessing their procedures. Some now require multi-step verification for significant financial transfers, even when the order appears to come from senior executives. Others are experimenting with “safe words” or out-of-band confirmation channels, requiring staff to verify requests through a separate, pre-arranged means of communication.
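The out-of-band idea can be sketched in a few lines. This is a hypothetical illustration, not any company's actual system; the names (`approve_transfer`, `CALLBACK_DIRECTORY`, the threshold) are invented for the example. The key design choice is that the callback channel comes from an internal directory, never from the incoming request, so an attacker who controls the video call cannot also supply the verification channel.

```python
# Hypothetical gate for high-value transfers: the request channel
# (email, video call) is never sufficient on its own.

# Pre-registered callback numbers, maintained internally. The number
# is looked up here, never taken from the incoming request.
CALLBACK_DIRECTORY = {
    "cfo": "+852-5550-0100",  # invented example number
}

APPROVAL_THRESHOLD = 100_000  # transfers above this need a callback


def approve_transfer(amount: int, requester_role: str,
                     confirmed_via_callback: bool = False) -> bool:
    """Approve only if a large request was re-confirmed out-of-band."""
    if amount <= APPROVAL_THRESHOLD:
        return True  # routine amounts follow the normal workflow
    if requester_role not in CALLBACK_DIRECTORY:
        return False  # no registered channel to verify against
    # For large amounts, approval requires that someone actually called
    # the directory number and re-confirmed the request.
    return confirmed_via_callback


# A $25M request approved only inside a video call is rejected:
print(approve_transfer(25_000_000, "cfo"))  # False
```

In the Arup scenario, such a gate would have forced the employee to dial a known number for the CFO before any money moved, regardless of how convincing the faces on screen were.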

It’s a reasonable response. However, the speed at which the technology is developing raises the question of whether such measures can keep up.

The increasing availability of training data and advancements in machine learning are driving the rapid improvement of deepfakes. Corporate executives are frequently captured on camera during conference panels, earnings calls, and interviews, building up a sizable archive of content that could be used by criminals.

It is hard to deny that reality. Businesses seem to be venturing into uncharted territory as AI-driven fraud becomes more prevalent. For many years, hearing someone’s voice or seeing them on a screen implied authenticity. That presumption is now subtly eroding.

A finance employee who previously trusted a video call has discovered how costly that shift can be, somewhere in a quiet office.
