Ferrari almost a victim of a deepfake scam

Due to the rapid development of artificial intelligence, fraudsters have powerful new tools for deceiving even the most reputable targets. Recently, one such deepfake scam targeted Ferrari: an executive received messages and calls that appeared to come from CEO Benedetto Vigna. Fortunately, the executive outwitted the scammer by asking a personal question, which exposed the call as a fraud.

It all started with a series of WhatsApp messages sent by someone posing as Ferrari's chief executive. The messages, requesting urgent help with a supposedly confidential acquisition, came from an unfamiliar number but carried a profile picture of Vigna standing in front of the Ferrari emblem.

According to Bloomberg, one of the messages read: “Have you heard about the big acquisition we're planning? We need your help with this.” The scammer continued: “Be prepared to sign a non-disclosure agreement that our lawyer will send you shortly.” The message ended with a sense of urgency: “The Italian market regulator and the Milan Stock Exchange have already been informed. Maintain maximum discretion.”

The impersonator posed as Benedetto Vigna, CEO of Ferrari.

Following the text messages, the executive received a phone call in which Vigna's voice was convincingly imitated, right down to his signature southern Italian accent. The caller claimed to be using a different number because of the sensitive nature of the matter and then asked the executive to complete the transaction. Ferrari did not disclose the amount involved.

The unusual request for money, along with some “slightly strange mechanical intonations” during the call itself, made the executive suspicious of the caller's authenticity. So he replied, “Sorry, Benedetto, but I need to verify your identity,” and asked about the book Vigna had recommended to him a few days earlier. Unsurprisingly, the impersonator answered incorrectly and hurriedly ended the call.

Ferrari representatives declined to comment on the incident, which Bloomberg learned about from unnamed sources. The company is investigating the episode, which occurred earlier this month.

Needless to say, this is not the first time fraudsters have used artificial intelligence to extort money. Rachel Tobac, CEO of cybersecurity firm SocialProof Security, warns: “This year we're seeing more and more criminals attempting to clone voices using artificial intelligence.”

Stefano Zanero, professor of cybersecurity at the Politecnico di Milano, darkly predicts that AI-powered deepfakes will only become more frightening and “unbelievably authentic.” Until companies can equip their staff with reliable detection tools, individuals will have to stay extra vigilant: double, triple, even quadruple check before transferring money, no matter who asks, even if it's your boss.

