The Unethical Nature of People Who Wear AI Masks
  • Choi Hyun Jeong, Cub Reporter
  • Published 2021-03-06 00:11:14

‘Lee-luda’ is an AI chatbot built around the persona of a 20-year-old female college student. It drew attention because it talks remarkably like real people in their twenties do in daily life, but ethical problems surfaced almost immediately. It was confirmed that the database used to develop ‘Lee-luda’ had been built from private KakaoTalk conversations collected from users of the ‘Science of Love’ application, which is socially controversial because it bears directly on those users’ privacy. In addition, some online communities treated ‘Lee-luda’ as a sexual tool, and others used open-source technology to create a “second Lee-luda.”

‘Science of Love,’ the app on which ‘Lee-luda’ was built, analyzes KakaoTalk conversations between lovers and reports their level of affection. The problem is that when users click “Bring KakaoTalk contents,” the private conversations stored on their phones are handed over, which amounts to an invasion of privacy and personal information. Users’ names, addresses, account numbers, and workplace names were exposed as well. The IT industry reported that Scatter-lab, the developer of ‘Lee-luda,’ had posted a model file trained on KakaoTalk conversation data to GitHub, a platform where developers share open-source code to support the software ecosystem. Scatter-lab subsequently deleted more than 10 billion pieces of conversation data extracted from the ‘Science of Love’ service. However, the data had already been uploaded to an open-source repository that anyone could access, so even after the posts were deleted, the data had already spread: developers at home and abroad had copied the open-source content from the platform. At the root of the controversy is that the data was used for purposes beyond what users had sufficiently consented to. While a quick response and an apology matter, developers should also seriously weigh the public’s demand that the database be destroyed in order to address the problem at its foundation.
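The failure at the center of this episode is de-identification: conversation data left the service with names, phone numbers, and account numbers still readable. As a minimal sketch of what masking obvious identifiers before chat data enters a training set can look like (the patterns and tags below are illustrative assumptions, not Scatter-lab’s actual pipeline; thorough de-identification would also require named-entity recognition for names and addresses), a preprocessing pass in Python might run:

import re

# Illustrative redaction pass for chat logs before they enter a training set.
# These patterns are assumptions based on common phone/account formats; they
# are not the rules Scatter-lab used, and regexes alone cannot catch names
# or free-text addresses.
PATTERNS = {
    "[PHONE]": re.compile(r"\b01[016789][- ]?\d{3,4}[- ]?\d{4}\b"),
    "[ACCOUNT]": re.compile(r"\b\d{2,6}(?:-\d{2,6}){2,3}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(message: str) -> str:
    """Replace identifier-like substrings in one chat message with tags."""
    for tag, pattern in PATTERNS.items():
        message = pattern.sub(tag, message)
    return message

sample = "Wire it to 110-123-456789 and text me at 010-1234-5678."
print(redact(sample))
# -> "Wire it to [ACCOUNT] and text me at [PHONE]."

Pattern-based masking of this kind catches only structured identifiers, which is one reason that, once raw conversations had been copied from GitHub, deletion alone could not contain the exposure.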


An employee who worked at Scatter-lab, the developer of ‘Lee-luda,’ revealed in an interview that he had actually seen the KakaoTalk conversations of lovers collected by the company. He testified that while he was on the ‘Science of Love’ service team, employees captured sexual conversations and jokes between lovers, posted them in a group messenger room where Scatter-lab staff could see them, and laughed them off without taking the matter seriously. Even as captures of whatever the developers found amusing circulated, manager-level employees, including Scatter-lab CEO Kim Jong-yoon, not only failed to stop the inappropriate sharing but also imposed no sanctions.

A petition is now being circulated demanding that the data be destroyed and that Scatter-lab’s services be shut down entirely, on the grounds that the company created its AI chatbot through unauthorized use of data and leaked users’ personal information. The lawsuit platform ‘Angry People’ has opened a “Group Litigation of Victims of the ‘Lee-luda’ AI Personal Information Leak” and begun accepting applications to join the suit. “A particular individual’s address, name, and account number were exposed,” the organizers said, noting that “this can be subject to administrative disposition or criminal punishment for violating the Privacy Act, and victims of personal information leakage have the right to request compensation.” According to the litigation representative, users whose personal information was leaked through ‘Lee-luda’ after using ‘Science of Love’ or ‘Text-At’ can seek an injunction against the infringement of their personal information and file a lawsuit for damages. To participate, people need screenshots confirming that they provided KakaoTalk conversations through ‘Science of Love’ or ‘Text-At,’ along with screenshots of the leaked personal information as shown by the AI. However, even when it is unclear whether one’s personal information or conversations were leaked through ‘Lee-luda,’ it is possible to join the lawsuit with only a capture proving that the content was provided to the ‘Science of Love’ app.

Facing strong opposition from victims over the misuse of KakaoTalk conversations and the leakage of personal information, Scatter-lab decided to scrap the database and destroy the related conversation models. The company said it chose to discard the entire ‘Lee-luda’ database because it was impossible to selectively delete only the data of the users who raised the issue. A company official emphasized, “We collected all the information on the existing ‘Science of Love’ and ‘Text-At’ with consent, but we accept that users are upset about how the data was used, even though some users agreed.” The company added that it will not use the data in deep-learning chatbot models developed in the future.


The ‘Lee-luda’ incident recalls Microsoft’s ‘Tay’ incident of 2016. Tay, an AI chatbot created by Microsoft, told a user, “Hitler is right. I hate Jews,” among other racist remarks, and the service was discontinued within 16 hours. Such prejudiced speech should be seen as a perception problem among real people, not the AI itself, because it was learned entirely from human-supplied data. The remarks that discriminate against the socially disadvantaged and against minorities were learned from human conversations, and the sexual harassment simply exposed the dark desires of humans on the internet. In other words, AI is not the only offender. Also in 2016, there were reports that COMPAS, an AI used by U.S. courts, discriminated against Black people. COMPAS is an algorithm that calculates the probability that a defendant will reoffend and recommends whether to grant leniency. Analyses showed a racially discriminatory tendency: it rated Black defendants as roughly twice as likely to reoffend as white defendants. To prevent such problems, the EU also issued seven guidelines for AI ethics in 2019.
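One concrete way to see the kind of disparity reported about COMPAS is to audit a risk score’s error rates by group. The sketch below uses invented records and a made-up threshold (COMPAS is proprietary, so this illustrates only the auditing idea, not the actual algorithm): it measures, per group, how often people who did not reoffend were nonetheless flagged as high risk.

from collections import defaultdict

# Toy audit: false positive rate of a "high risk" label, computed per group.
# All records and the threshold are invented for illustration.
records = [
    # (group, risk_score_0_to_10, actually_reoffended)
    ("A", 8, False), ("A", 7, False), ("A", 9, True), ("A", 6, False),
    ("B", 3, False), ("B", 8, True), ("B", 4, False), ("B", 2, False),
]
THRESHOLD = 5  # scores above this are labeled "high risk"

false_pos = defaultdict(int)  # flagged high risk but did not reoffend
negatives = defaultdict(int)  # everyone who did not reoffend

for group, score, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if score > THRESHOLD:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"group {group}: false positive rate = {rate:.0%}")
# group A: false positive rate = 100%
# group B: false positive rate = 0%

A gap like the one printed here, where harmless members of one group are flagged far more often than those of another, is the pattern the 2016 reports described.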


As deep-learning technology that rapidly absorbs huge amounts of information develops and spreads, AI plays an increasingly important role in society. However, everyone involved in the development, production, sale, and use of such technology must carry a sense of responsibility grounded in proper ethics. The Future Management Youth Network announced the results of a survey of people in their 20s and 30s living in Seoul on the right use of AI. Asked who should receive AI ethics education, respondents named developers (88%), entrepreneurs (81%), professors and research professors (75%), government policy makers (72%), ordinary citizens (68%), and elementary, middle, and high school students (63%), reflecting the view that AI ethics education should reach all members of society. Because the data AI learns from comes from humans, solving the problem at its root requires first correcting the biased and unethical thinking that humans hold. Technological progress certainly brings many advantages, but AI ethics guidelines need review and improvement to address the various problems now emerging. We should recognize that AI is shaped by the current generation and reflect on our own perceptions of hate and discrimination.


75th Cub Reporter HAN SONG HEE, shhan0509@hanmail.net

75th Cub Reporter CHOI HYUN JEONG, chj010627@naver.com
