
Fake It ’til You Make a Fortune: Financial Scams Enter the Deepfakes Era

Recent scams leveraging AI-based deepfake technology have demonstrated that video and audio can no longer be fully trusted.

We’ve known for a long time that emails and texts can’t be 100-percent trusted. Senders’ addresses can be spoofed. Business contacts can be impersonated.

But what if you hear your boss’s voice in a voicemail asking you to wire money? Or what if, instead of a voicemail, the request comes over a live online video chat? Many employees wouldn’t even give such an exchange a second thought.

Unfortunately, recent scams leveraging AI-based deepfake technology have demonstrated that video and audio can also no longer be fully trusted.

In a shocking incident – reported on Feb. 4, 2024 by CNN – an international finance firm was defrauded out of roughly $25 million after scammers used deepfake technology in a video conference call to trick a Hong Kong-based employee into executing a payment.

According to the Hong Kong police, the finance worker was fooled into thinking he was conversing with his firm’s CFO and several other staffers. But in reality, all of the other participants in the video call were synthesized images of company executives and staffers.

“…[I]t turns out that everyone [he saw] was fake,” Senior Superintendent Baron Chan Shun-ching explained to Hong Kong’s public broadcaster RTHK, per CNN.

Reportedly, the employee was prompted to join the conference call via a phishing email. The police also noted several other recent incidents in which deepfake technology was used to create lifelike images of people whose Hong Kong identity cards had been stolen – all in an effort to trick facial recognition programs.

There has also been concern that deepfake technology could be misused to impersonate politicians for the purposes of election disinformation or voter suppression. Sure enough, on Jan. 21, 2024, New Hampshire residents received robocalls featuring what sounded like the voice of President Joe Biden. The message was intended to discourage voters from casting a ballot in the New Hampshire Presidential Primary Election.

A press release by N.H. Attorney General John Formella alleged that the source of the robocalls was “Texas-based Life Corporation and an individual named Walter Monk,” while “the originating voice service provider for many of these calls [was] Texas-based Lingo Telecom.”

“AI-generated recordings used to deceive voters have the potential to have devastating effects on the democratic election process,” said Formella in the release. Life Corporation and Lingo Telecom have both been issued cease-and-desist orders, and they could face further government action.

In the future, it will become even more difficult to ascertain whether audio or video clips are authentic recordings or fabrications. In theory, deepfake technology could be used to extort executives with embarrassing or controversial videos designed to tarnish their reputations.

As the technology advances, distinguishing faked videos from real ones will increasingly require the advanced tools of forensic investigators. But at least there are some procedures companies can put in place to avoid getting scammed in the heat of the moment.

Just as they should with ordinary email, employees should watch out for telltale indicators of phishing and impersonation scams in audio and video exchanges. These signs include unusually urgent and secretive instructions, especially those involving large financial transactions or requests for sensitive information.

Before remitting any money, employees should go through a pre-established process for confirming such requests – perhaps directly calling the potentially impersonated party or messaging them via a communications platform such as Slack. Crucially, confirming does not mean clicking the conference call link contained within the possible phishing email, as that link would be part of the scam.

If both parties are in the same office building, then the employee can confirm the transaction by stopping by the other person’s workspace for an in-person chat. It’s safe to say that’s one interaction that cybercriminals can’t fake.
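To make that process concrete, here is a minimal sketch of what such a pre-established confirmation policy could look like in code. Everything in it is hypothetical and purely illustrative – the PaymentRequest structure, the approved-channel list, and the dollar threshold are assumptions, not any particular company’s system – but it captures the core rule: a large payment request must be confirmed through a channel independent of the one it arrived on.

```python
# Hypothetical sketch of an out-of-band payment-confirmation policy.
# All names and thresholds here are illustrative assumptions, not a
# real treasury system's API.

from dataclasses import dataclass, field

# Channels considered independent of email/video-call phishing vectors.
APPROVED_CHANNELS = {"phone_callback", "slack_dm", "in_person"}


@dataclass
class PaymentRequest:
    requester: str        # who appears to be asking (e.g., "CFO")
    amount_usd: float
    origin_channel: str   # how the request arrived (e.g., "video_call")
    confirmations: set = field(default_factory=set)


def record_confirmation(req: PaymentRequest, channel: str) -> None:
    """Log a confirmation, but only if it came through an approved
    out-of-band channel that differs from the request's origin."""
    if channel in APPROVED_CHANNELS and channel != req.origin_channel:
        req.confirmations.add(channel)


def may_execute(req: PaymentRequest, threshold_usd: float = 10_000) -> bool:
    """Small payments pass; large ones need at least one independent,
    out-of-band confirmation before funds move."""
    if req.amount_usd < threshold_usd:
        return True
    return bool(req.confirmations)


# Example: a request arriving on a video call must be confirmed
# elsewhere, say by calling the CFO back on a known number.
req = PaymentRequest("CFO", 25_000_000, origin_channel="video_call")
assert not may_execute(req)              # blocked: no independent confirmation
record_confirmation(req, "phone_callback")
assert may_execute(req)                  # now allowed
```

The key design choice is that the confirmation channel must differ from the channel the request arrived on: a scammer who controls the video call, the email, and the link inside it cannot also control a callback to a phone number the employee already knows.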
