The senator also touched on the issue of labeling when using

tanjimajuha20
Posts: 490
Joined: Thu Jan 02, 2025 7:24 am


Post by tanjimajuha20 »

Artem Sheikin, Russian senator and Deputy Chairman of the Council for the Development of the Digital Economy under the Federation Council, spoke at a press conference in Moscow about an initiative to protect the human voice.

"We see that voice synthesis is being used in a variety of areas. So far, the concept of a 'citizen's voice' has not been defined in law. By analogy with the civil-law norm on 'protection of a citizen's image', we propose to extend that norm to cover the voice as well. Its publication should be allowed only with the citizen's consent," explained Artem Sheikin.

The senator also touched on the issue of labeling when using a synthesized voice: "The company must disclose that the information in question was produced using artificial intelligence (AI). In addition, a dedicated department should be created to handle issues around the functioning and use of neural networks. The department would consider complaints and appeals from citizens, and where violations are found, the prohibited content would have to be removed or blocked."

Vyacheslav Beresnev, Executive Director of the Association of Laboratories for the Development of Artificial Intelligence (ALRII), supported Artem Sheikin's position: "The situation with the protection of biometric data, the voice and other data used by neural networks is complicated, since it sits at the junction of several areas of legislation and requires a balanced, consolidated position. Excessive restrictions would certainly lead to technological lag, but without them we risk getting lost in a world of digital twins and illusions. Given the novelty, complexity and scale of the potential consequences, the state's steps in this area are cautious and, as it may seem, unhurried."

Sergey Kosetsky, Commercial Director of the system integrator X-Com, proposed one way to apply the initiative: "The question of regulating deepfake technology is high on the agenda, and the initiative is intended to bring the technology's use within a legal framework. One tool for this could be the labeling of generative-AI output now under discussion: it would be enough to build an automatic generator of unique content labels into the algorithm. This would not require significant time or money from developers, yet it would significantly reduce the use of AI for criminal purposes."
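Kosetsky does not specify how such a label generator would work. A minimal sketch of one possible scheme, assuming the generator simply derives a label from the raw output bytes and the generating model's identifier (all names and the label format here are illustrative, not any proposed standard):

```python
import hashlib
import json
import time


def make_content_label(audio_bytes: bytes, model_id: str) -> str:
    """Derive a unique, reproducible label for a piece of generated audio.

    Combines a hash of the raw output with the generating model's
    identifier, so any copy of the file can be traced to an AI origin.
    """
    digest = hashlib.sha256(audio_bytes).hexdigest()[:16]
    return f"AI-GEN:{model_id}:{digest}"


def label_metadata(audio_bytes: bytes, model_id: str) -> str:
    """Package the label with a timestamp as JSON metadata that a
    distribution platform could attach alongside the audio file."""
    return json.dumps({
        "label": make_content_label(audio_bytes, model_id),
        "generated_at": int(time.time()),
        "synthetic": True,
    })
```

Because the label is derived from the content itself, identical output always carries the same label, while any change to the audio produces a different one; a platform could check the label against the file to detect tampering.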

Evgeny Surkov, product manager at Innostage, a Russian developer of information-security services and solutions, described the steps needed to implement the initiative: "Mass generation of copies of citizens' voices is possible because the necessary samples are easy to obtain (for example, via a call from an unknown number). Making sample collection harder will require a number of expensive technical measures and improvements to the legislative framework: a clear link between phone numbers and the final beneficiary (an individual, a legal entity or a group of persons), an end to number anonymization, and a process for vetting, for illegal activity, the persons from whose numbers mass calls are made. Even once these loopholes are closed, there will remain the question of protection against leaks from the electronic databases used to store recordings of citizens' voices, for example in companies' support services."

Dmitry Parshin, Director of the Development Center at Artezio, a company specializing in digital business transformation, believes fraud cannot be eliminated entirely: "With effective tools in place, the number of fraudulent acts will drop significantly after the voice-protection measures are introduced, since attackers' risk of detection and punishment will rise. Potential victims will also have more ways to verify the authenticity of a voice and to defend their rights if a violation occurs. However, the possibility of fraud cannot be ruled out completely, since voice-synthesis technologies are constantly improving. As early as 2025 we will reach the point where distinguishing a fake from an original with full certainty will be almost impossible without sophisticated tools and laboratory analysis."