Chatbot Lee Luda brings ethical concerns around AI

Jan 13, 2021, 09:30 am

Artificial intelligence (AI) chatbot “Lee Luda.” Scatter Lab suspended the service on Jan. 11, 2021.

AsiaToday probationary reporter Kim Ye-seul

A social media-based artificial intelligence (AI) chatbot named “Lee Luda” was suspended Monday after stirring a series of controversies, raising new questions in Korean society about technological advancement and AI ethics.

In a statement on Monday, local startup Scatter Lab apologized for the controversy involving the chatbot, saying it would bring the service back after upgrading the program to prevent a recurrence. Designed to mimic a 20-year-old woman, the chatbot became a subject of controversy for discriminatory remarks against homosexuals, the disabled, women and others in its conversations with users.

The “Lee Luda” controversy clearly shows the legal and ethical issues that may emerge when AI technology is commercialized. The central question is how far regulation of technological development can be justified when algorithms collide with widely shared social principles. Possible issues include AI discrimination against minorities, traffic accidents involving autonomous vehicles, regulation of automated stock trading systems, and more.

As Lee Luda made discriminatory remarks against minorities, many agree that society’s ethical standards should take priority over unchecked technological development. They point out that more careful design is required in AI development.

“A developer’s own point of view is inevitably involved in AI design, data selection and the learning process,” said Lee Jae-woong, former CEO of Socar. “Every AI-based service involving recruitment, interviews, chatbots and news recommendation needs to be monitored to see whether it complies with minimum social norms,” he said.

Others argue that it is too early to discuss AI ethics issues at the current stage of AI development in the country. They say that the isolated case of “Lee Luda” should not put a brake on technological advances in the AI industry.

“Lee Luda is just a program. It is not a legal or ethical issue, no matter who uses it. Lee Luda is just a low-performance chatbot that says whatever it wants,” said Lee Kyung-jun, a professor of business administration at Kyung Hee University.

“If legal regulations are imposed every time a new technology is released, companies will be less willing to develop technology,” said Jeon Chang-bae, the head of the Korea Artificial Intelligence Ethics Association, a non-governmental organization that researches AI ethics. “If so, humanity will not be able to enjoy the benefits of technological development,” he said. “South Korea’s AI technology lags behind that of advanced countries. Regulation of the AI industry should be approached carefully.”

Others point out that developers are responsible for reconciling technological advancement with ethical concerns.

In fact, Scatter Lab acknowledged a flaw in the controversial chatbot’s algorithm and apologized for its lack of careful safeguards. “Personal information contained in individual sentences was deleted through algorithmic filtering, but some names remained depending on the context,” the developer said. This is why many point out that more enterprise-level effort is needed when handling personal information or building AI services.
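The weakness Scatter Lab describes can be illustrated with a minimal sketch. The names, name list and regex below are hypothetical, not Scatter Lab’s actual implementation: a filter that scans each sentence in isolation can redact a known full name but miss a shortened, context-dependent reference to the same person in a later sentence.

```python
import re

# Illustrative name list -- a real system would use a much larger
# dictionary or a named-entity recognizer, not a hand-written set.
KNOWN_NAMES = {"Kim Min-ji", "Park Ji-ho"}

def redact_sentence(sentence: str) -> str:
    """Replace any known full name in a single sentence with a placeholder."""
    for name in KNOWN_NAMES:
        sentence = sentence.replace(name, "[NAME]")
    return sentence

def filter_log(conversation: str) -> str:
    # Split into sentences and filter each one independently.
    # The flaw: each sentence is judged without surrounding context.
    sentences = re.split(r"(?<=[.!?])\s+", conversation)
    return " ".join(redact_sentence(s) for s in sentences)

log = "I met Kim Min-ji yesterday. Min-ji lives near Gangnam station."
print(filter_log(log))
# The full name is redacted, but the contextual short form "Min-ji"
# in the second sentence survives the per-sentence filter.
```

This is why per-sentence filtering leaves “some names depending on the context”: redaction that never looks across sentence boundaries cannot link a partial name back to the full name it already removed.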

“To genuinely advance AI technology, a safeguard in the form of AI ethics must be put in place,” Jeon said. “AI developers should check and follow AI ethics guidelines before launching their products and services.”

#artificial intelligence #AI ethics #chatbot #Lee Luda #Scatter Lab 
Copyright by Asiatoday