Deepfake scams on the rise: Experts warn of celebrity misuse
Experts from NASK are sounding the alarm over the growing number of frauds that use deepfake technology. Videos that falsify celebrities' likenesses can become the starting point for serious scams.
Specialists from the Research and Academic Computer Network (NASK) warn about videos created with deepfake technology. This manipulation method uses artificial intelligence to alter audiovisual material, including the voices and faces of well-known figures. It serves purposes ranging from the entertaining and creative, as in film and art, to more dangerous ones, such as fabricating fake news, blackmail, or manipulating public opinion.
According to NASK experts, a growing number of frauds are built on videos that exploit public images found online, for example footage of footballers or a government minister. Using the "lip sync" technique, an artificially generated voice is matched to the mouth movements and gestures of the figure on screen, so the person appears to say words they never actually spoke. Worse still, when the technique is applied skillfully, the result gives the illusion of natural speech.
How do deepfakes work?
The head of NASK's deepfake analysis team explains that today's technologies let criminals manipulate audiovisual material with ease. "Text-to-speech" allows them to create a new audio track from just a few seconds of recorded voice, which they can then synchronize with any video, such as a speech or a political rally. "Speech-to-speech" technology, which also reproduces intonation and emotion, requires a longer sample of at least a minute of the original material.
It is worth adding that "text-to-speech" (TTS) technology analyzes voice samples, learning their specifics and intonation, and then generates speech based on the entered text. According to studies conducted by Google, the TTS model named WaveNet can generate very natural-sounding speech that is difficult to distinguish from an authentic voice.
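The text-to-waveform pipeline of a model like WaveNet cannot be reproduced in a few lines, but the overall shape of a TTS pipeline (text in, audio out) can be sketched with a toy synthesizer. Everything below is illustrative only: the fixed character-to-pitch table stands in for the acoustic model that a real system learns from voice samples.

```python
import io
import math
import struct
import wave

SAMPLE_RATE = 16000

def toy_tts(text, base_freq=200.0, dur=0.08):
    """Map each character to a pitch and synthesize a short sine tone per
    character -- a toy stand-in for the text -> acoustic features ->
    waveform stages of a real TTS model (which learns these mappings
    from recorded voice samples instead of using a fixed table)."""
    samples = []
    for ch in text.lower():
        # Non-letters (spaces, punctuation) become silence.
        freq = base_freq + 15.0 * (ord(ch) - ord("a")) if ch.isalpha() else 0.0
        n = int(SAMPLE_RATE * dur)
        for i in range(n):
            v = math.sin(2 * math.pi * freq * i / SAMPLE_RATE) if freq else 0.0
            samples.append(int(v * 20000))  # scale to 16-bit range
    return samples

def write_wav(samples, fileobj):
    """Pack 16-bit mono samples into a WAV container."""
    with wave.open(fileobj, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

buf = io.BytesIO()
write_wav(toy_tts("hello world"), buf)
```

A real TTS system replaces the fixed lookup with a neural network trained on recordings, which is precisely what makes cloning a voice from a few seconds of audio possible.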
Meanwhile, the advanced algorithms behind speech-to-speech (STS) conversion can capture subtle vocal nuances such as modulation, tempo, and emotion. This makes it hard for traditional forgery-detection methods to distinguish generated speech from the authentic voice. Even modern biometric systems may struggle to identify such forgeries, posing a serious challenge for security experts and for the defense against misinformation.
A NASK expert appealed to social media users to treat unverified or suspicious video content with caution, especially material that could shape public perception of prominent figures and institutions.
Using deepfakes in political campaigns, for example to discredit candidates, can affect election results and destabilize democratic processes. Moreover, spreading false information through deepfakes can undermine the credibility of authentic content, breeding skepticism among audiences.
How can one verify if a material is a deepfake?
NASK emphasizes that such frauds are becoming harder to detect as artificial intelligence advances, enabling ever more convincing fake voices. Nevertheless, experts note that detecting such material is still possible, provided the video and its content undergo a thorough technical analysis.
Specialists point to several possible signs of fraud, including distortions around the mouth, poorly rendered teeth, unnatural head movements and facial expressions, errors in word inflection, and unusual intonation. Bartuzi-Trokielewicz, who heads the analysis team, adds that criminals increasingly add noise and spots to the image in order to reduce its clarity, conceal artifacts generated by artificial intelligence, and confuse deepfake detection algorithms.
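The noise-injection trick described above suggests one crude counter-heuristic: measure how much high-frequency variation each frame carries and flag outliers. The sketch below is a minimal, hypothetical illustration on synthetic grayscale frames, not a production detector; real tools combine many such signals with learned models.

```python
def noise_score(frame):
    """Mean absolute difference between horizontally adjacent pixels --
    a crude proxy for high-frequency noise in a grayscale frame
    (given as a list of rows of integer pixel values)."""
    total, count = 0, 0
    for row in frame:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count

def flag_noisy_frames(frames, threshold=10.0):
    """Return the indices of frames whose noise score exceeds the
    threshold, e.g. frames where noise may have been injected to
    mask AI-generated artifacts."""
    return [i for i, f in enumerate(frames) if noise_score(f) > threshold]

# Synthetic example: a smooth frame vs. one with injected "noise".
smooth = [[100, 101, 102, 103] for _ in range(4)]
noisy = [[100, 160, 90, 170] for _ in range(4)]
print(flag_noisy_frames([smooth, noisy]))  # -> [1]
```

The fixed threshold here is an assumption chosen for the toy data; in practice a detector would calibrate it per video, since legitimate footage also varies in grain and compression noise.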
NASK notes that such videos also employ various social engineering techniques to manipulate viewers: promises of quick profit, time-limited exclusivity of the offer, pressure to act quickly, and appeals to the viewer's emotions.
How much has the number of deepfakes increased in recent years?
In recent years, the number of deepfakes has increased significantly, as confirmed by various sources. The 2023 Sumsub report indicates a tenfold increase in the number of deepfakes worldwide between 2022 and 2023, with notable regional rises of 1,740% in North America, 1,530% in Asia-Pacific, 780% in Europe, 450% in the Middle East and Africa, and 410% in Latin America.
Onfido, in turn, reports that fraud attempts using deepfakes increased 31-fold in 2023, a 3,000% rise over the previous year. Meanwhile, Sentinel reported that the number of deepfakes online grew from 14,678 in 2019 to 145,227 in 2020, an increase of about 900%.
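The percentages quoted above can be re-derived from the raw figures; a quick check confirms that a 31-fold rise corresponds to a 3,000% increase and that the Sentinel counts work out to roughly 900%.

```python
def pct_increase(old, new):
    """Percentage increase from an old value to a new one."""
    return (new - old) / old * 100

# Sentinel: deepfakes online, 2019 -> 2020.
sentinel = pct_increase(14_678, 145_227)
print(round(sentinel))  # 889, i.e. "about 900%"

# Onfido: a 31-fold rise means 31x the old value, i.e. +3000%.
onfido_multiplier = 31
print((onfido_multiplier - 1) * 100)  # 3000
```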
These figures point to a rapid acceleration in the development and use of deepfake technology, which poses a growing challenge to information security and privacy. Many experts therefore stress the need to implement more advanced deepfake-detection methods and to strengthen legal regulation against their harmful effects.