Lawmakers propose harsher punishments if crimes are committed using AI-generated media
The Liberal Democratic Party of Russia (LDPR) is preparing a bill that would introduce criminal liability for the use of ‘deepfakes’ in crimes such as fraud, extortion, and theft. The details of the proposal were posted to the party’s Telegram channel on Friday.
Yaroslav Nilov, deputy head of the LDPR faction in the lower house of parliament, the State Duma, noted that deepfakes – images and videos that have been altered using a neural network – can be used by plotters to deceive victims and obtain money from them.
The party has suggested updating the Russian criminal code to recognize the use of deepfakes as an aggravating circumstance under several articles.
“If a deepfake is used, those guilty of fraud will face up to five years in prison or a fine, for theft – up to two years, for extortion – up to four years, and for misappropriation or embezzlement of someone else’s property – up to two years,” the party wrote.
State Duma Deputy Arkadiy Svistunov explained that fraud in the banking and financial sector is growing, and that deepfake technology is becoming an additional tool for attackers who use social engineering to convince victims to reveal their passwords or personal data, or even to transfer money to an account. “It is difficult for banks to deal with this, because formally everything is according to the law,” he said.
Andrey Svintsov, deputy head of the State Duma Committee on Information Policy, pointed out that while deepfakes can be used to create content on social media, “revive” the dead and help in psychotherapy, they can also be used by criminals for deception and extortion. “We, as legislators, must respond to this. Our bill is correct and timely, it acts ahead of the curve,” Svintsov added.
Last month, a report by The Intercept revealed that the Pentagon’s secret operations branch was allegedly seeking capabilities to deploy “next generation” deepfakes for use in information warfare.
According to procurement documents obtained by the outlet, the US Special Operations Command’s Military Information Support Operations (MISO) is seeking a “next generation of ‘deep fake’ or other similar technology to generate messages and influence operations via non-traditional channels in relevant peer/near peer environments.”
Despite seeking to weaponize the technology itself, Washington has repeatedly voiced concerns that deepfakes could pose a threat to democracy and public trust, and be used by adversaries such as China and Russia to launch disinformation campaigns.