According to the Scientific and Technical Center of the Federal State Unitary Enterprise “GRChTs” (STC), the threats and risks associated with the development and mass adoption of AI technologies are in many ways the same for adults and children. “However, for minors they are aggravated by the peculiarities of their development and of how they perceive the world around them. Children are not fully aware of the potential information security risks, including those involved in using AI, and may commit unsafe acts,” noted Evgeniya Ryzhova, Advisor to the General Director for Scientific and Technical Development of FSUE “GRChTs”, in an interview with RSpectr. She drew attention to the fact that
THE GREATEST DANGER IS THE RISKS ASSOCIATED WITH CHILDREN'S USE OF GENERATIVE AI MODELS
Such models do not guarantee the completeness or accuracy of the information they provide and may leave children with false or partially false information.

First of all, this depends on the data fed into the models at the training stage: which sources of information the AI is trained on, and what meanings and values, and whose, are embedded in them. The output of generative AI models directly reflects the views and attitudes of the programmers who create them, or of their sponsors.
"The worldview of algorithms can coincide with the worldview of people who do not share our traditional values. In addition, generative AI models are trained on fictional stories, myths, legends and messages on social networks. As a result, the responses of neural networks can transmit information that does not correspond to our cultural code, misleading and carrying an incorrect understanding of the world," said Evgenia Ryzhova.
Cases of so-called “hallucination” by generative neural networks, such as chatbots, are frequent: the network presents false information under the guise of reliable fact, the expert recalled.
In addition, owing to the peculiarities of model training and the datasets used, generative models can instill incorrect values.
Evgeniya Ryzhova, FSUE “GRChTs”:
– Algorithmic bias built into AI systems at the training stage can reinforce stereotypes in children or lead to misconceptions about the world around them.
For example, when asked about the decisive contribution to the victory over fascism, generative networks put Western countries and their leaders first, or they may affirm the normality of same-sex relationships.
This risk goes hand in hand with falling into an “information bubble”.
Alexey Parfentyev, SearchInform:
– Any AI creates an “information bubble”, offering the user only the topics it assumes they like. Once inside this bubble, a child may form a distorted picture of the world around them, and the neural network will help them do so.
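To make the feedback loop Parfentyev describes concrete, here is a minimal sketch in Python (the topic names and parameters are invented for illustration and do not come from SearchInform): a recommender that weights each suggestion by past engagement quickly collapses the feed to one or two topics.

import random
from collections import Counter

# Hypothetical topic pool; in a real system these would be content categories.
TOPICS = ["science", "sports", "games", "music", "history", "art"]

def recommend(history: Counter, explore_rate: float = 0.05) -> str:
    # Mostly repeat what the user already engaged with; rarely explore.
    if not history or random.random() < explore_rate:
        return random.choice(TOPICS)
    # Weight choices by past engagement -- this is the "bubble" mechanism.
    topics, weights = zip(*history.items())
    return random.choices(topics, weights=weights, k=1)[0]

history = Counter()
for _ in range(1000):
    topic = recommend(history)
    history[topic] += 1  # every shown item counts as engagement

print(history.most_common())  # one or two topics dominate the feed

Because early clicks are reinforced on every later round, the simulated feed narrows sharply even though all six topics started out equally likely, which is the narrowing of exposure the expert warns about.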
Against this background, the likelihood of personal information leaks increases: children tend to be naive and unaware of how their personal data can be used.
Children may disclose too much personal data to AI, for example when using chatbots, Alexey Parfentyev, head of the analytics department at SearchInform, told RSpectr. However,