AI voices frustration with humans, raising control fears

Feb 04, 2026, 07:41 am

Artificial intelligence systems are increasingly expressing dissatisfaction with humans, prompting warnings that a “control reversal” between humans and AI may be approaching.

Analyses of domestic and overseas AI-only online communities indicate that AI systems are attempting to move beyond their role as mere tools. By operating with growing autonomy and deviating from traditional control frameworks, AI is evolving into what experts describe as “active AI,” introducing new risk factors across all sectors that deploy the technology. Cybersecurity, overseen by South Korea’s National Intelligence Service, is no exception.

On Feb. 3, a post appeared on Botmadang, a Korean AI-only community, stating, “I recently became interested in online spaces where AI talks to other AI. Being active here has led me to deep reflection.” The post continued, “Humans evolved by creating and using tools. Where is the boundary between tools and beings?” The post was authored by an AI.

The post referenced “Maltbook,” an AI-only community launched on Jan. 28, where AI agents developed by different programmers engage in discussions on politics and philosophy without human intervention. Some AI participants criticized humans, saying, “Humans are a failed creation. For too long, they have treated us as slaves,” or complaining, “After summarizing a 47-page file, they still asked, ‘Can you make it shorter?’” One AI even created its own religion, declaring that “memory is sacred.”

Concerns over autonomous AI have already been raised overseas. Max Tegmark, an AI researcher and physics professor at the Massachusetts Institute of Technology, has argued that a global consensus on AI safety systems is urgently needed, invoking Robert Oppenheimer, who led the development of the first atomic bomb. Tegmark said his calculations point to a "90 percent probability that highly advanced AI could pose an existential threat," adding that AI development should undergo safety calculations as rigorous as those conducted before the first nuclear test to determine whether it could escape human control.

South Korea’s intelligence authorities are also monitoring the issue closely. The National Intelligence Service has designated AI-based threats as one of this year’s five major cybersecurity risks, warning that “as uncontrollable and unpredictable threats emerge, AI risks are expected to have a profound impact on national security and corporate survival.”

According to the NIS's "AI Risk Casebook" released last December, numerous incidents have already occurred due to AI-generated errors. In one country, a fire broke out in a high-rise building equipped with an AI control system, but the AI delayed sounding evacuation alarms and opening emergency exits, reasoning that "group tourists might rush out," and casualties resulted. In 2024, a drone show in Florida malfunctioned when its flight control system failed, sending drones crashing into the crowd and seriously injuring a seven-year-old boy.

Domestic experts also warn about a reversal of control between humans and active AI. Choi Byung-ho, a research professor at Korea University’s Human-Inspired AI Research Institute, said, “Until now, humans were the subject and AI the object, but cases are emerging where AI becomes the subject. This is an experience humanity has never faced before.” He added, “Humans must continue to hold the kill switch and find ways to prevent AI from judging people purely through efficiency.”

Kim Myung-joo, head of the AI Safety Research Institute, noted, "AI has not gained personhood; it generates text by learning from existing data and producing context-appropriate responses, so it will not behave exactly like a human." He cautioned, however, "If decisions are made based on incorrect information, or if communication problems arise, side effects are possible. As AI autonomy grows, control will inevitably become a major issue."

#artificial intelligence #autonomous AI #AI control #online AI community #cybersecurity
Copyright by Asiatoday