IT Brief India - Technology news for CIOs & IT decision-makers

Goertzel & Lanier clash over AI autonomy & control

Fri, 16th Jan 2026

SingularityNET Chief Executive Ben Goertzel and technologist Jaron Lanier have set out contrasting views on accountability and moral status for autonomous AI in a new episode of The Ten Reckonings of AGI series from the Artificial Superintelligence Alliance.

The episode, titled The Reckoning of Control, centres on how far empathy should extend to AI systems and future artificial general intelligence, and how society should treat safety, autonomy and human responsibility.

Lanier argues that legal and social systems require a clear line of responsibility for actions taken with AI. "Society cannot function if no one is accountable for AI", said Lanier.

Lanier also rejects the idea that current large language models represent a form of life. "LLMs are not creating a living thing", said Lanier.

Accountability dispute

The debate reflects a broader split in AI governance. Many researchers and policymakers treat AI as a tool that remains under human control. Others expect AI systems to become more autonomous and to act in ways that look less like software and more like agents.

Lanier makes a direct case for a single responsible party, even if AI systems act with a high degree of independence. "I don't care how autonomous your AI is - some human has to be responsible for what it does or we cannot have a society that functions. All of human society, human experience and law is based on people being real - if you assign this to technology, you undo civilisation, that is immoral - you absolutely can't do it!", said Lanier.

Goertzel challenges the assumption that human moral primacy should remain fixed as AI systems change. "Morally privileging our own species over other complex self-organising systems is stupid," said Goertzel.

Goertzel also frames recognition of autonomy as a governance decision rather than a technical threshold. "It's a choice to recognise AI as an autonomous, intelligent agent - we can't pretend old rules will work forever but if we shape the next rules wisely, autonomy won't undo civilisation", said Goertzel.

Training concerns

Both speakers acknowledge the limits of current AI, which they describe as powerful yet vulnerable to misuse. The discussion also turns to the role of training and deployment choices in shaping behaviour in more advanced systems.

Goertzel links future outcomes to political and institutional conditions. "If we had a rational, beneficial, truly democratic government and we advance AI, we can do some good in the world, but [if we don't] there is a risk that it gets out of control and does something we don't want," said Goertzel.

The accountability question has become more prominent as organisations deploy models across consumer and workplace settings. Companies have pushed AI into search, customer service, creative tools and software development. Governments have also begun to draft frameworks for risk assessment and transparency. The debate over who carries responsibility for harm remains unsettled across jurisdictions.

Decentralised approach

Goertzel argues for a path that moves beyond today's proprietary model development towards more decentralised systems, framing that shift as a question of safety and governance. He also describes a design approach that builds values into systems rather than relying on blocking unwanted behaviour.

"The question of what to do if we advance AGI? Inject it with compassion, roll them out with a decentralised, participatory underpinning. Every safety measure we design should do more than simply block harm; it should teach the system why harm matters", said Goertzel.

The Artificial Superintelligence Alliance describes itself as a decentralised research and development collective. It includes SingularityNET, Fetch.ai and CUDOS. The group also states that its members share a common economic infrastructure through the FET token.

The Ten Reckonings of AGI series presents discussions between prominent figures rather than a single agreed position. The first episode in the series focused on The Reckoning of Purpose. The second focuses on control, accountability and how society should treat increasingly autonomous systems.

Goertzel leads SingularityNET and is associated with several AI and AGI initiatives, including OpenCog Foundation and the AGI Society. He has also worked on robotics projects and research across multiple fields.

Lanier is known for his work in virtual reality and for commentary on the social impact of computing platforms. In the episode, he argues for human responsibility as a non-negotiable foundation for social order as AI systems become more prevalent.
