Kids in an AI cage

Posted at 16:50:02
According to the Scientific and Technical Center of the Federal State Unitary Enterprise “GRChTs” (STC), the threats and risks associated with the development and mass adoption of AI technologies are in many ways the same for adults and children. “For minors, however, they are aggravated by the peculiarities of how children develop and perceive the world around them. Children are not fully aware of the potential risks associated with information security, including the use of AI, and can commit unsafe acts,” Evgeniya Ryzhova, Advisor to the General Director for Scientific and Technical Development of FSUE “GRChTs”, noted in an interview with RSpectr. She drew attention to the fact that

THE GREATEST DANGER IS THE RISKS ASSOCIATED WITH CHILDREN'S USE OF GENERATIVE AI MODELS

Such models do not guarantee the completeness or accuracy of the information they provide and may lead children to acquire false or partially false information.



What a model outputs depends first of all on the data it was given at the training stage. It matters which sources of information the AI is trained on, and what meanings and values, and whose, are embedded in them. The content produced by generative AI models directly reflects the views and attitudes of the programmers who create them, or of their sponsors.

"The worldview of the algorithms may coincide with the worldview of people who do not share our traditional values. In addition, generative AI models are trained on fictional stories, myths, legends and social media posts. As a result, the responses of neural networks can convey information that does not correspond to our cultural code, misleading children and instilling a distorted understanding of the world," said Evgeniya Ryzhova.

Cases of so-called “hallucination” by generative neural networks, such as chatbots, are common: false information is presented by the network under the guise of reliable fact, the expert recalled.

Because of how the models are trained and which datasets are used, generative models can also transmit distorted values.

Evgeniya Ryzhova, FSUE "GRChTs":

– Algorithmic bias built into AI systems during the learning phase can reinforce stereotypes in children or lead to misconceptions about the world around them.

For example, when asked about the decisive contribution to the victory over fascism, generative networks put Western countries and their leaders first, or they may affirm the normality of same-sex relationships.

This risk goes hand in hand with falling into an “information bubble”.

Alexey Parfentiev, SearchInform:

– Any AI creates an “information bubble”, offering the user only the topics it presumes he or she likes. Once inside this bubble, a child may form a distorted picture of the world around them, and the neural network will reinforce it.

Against this background, the likelihood of personal information leakage increases, since children are usually naive and unaware of how their personal data can be used.

Children may hand over too much personal data to AI, for example when using chatbots, Alexey Parfentiev, head of the analytics department at SearchInform, told RSpectr. However,
