The mythical creatures crowding the American Film Institute’s list of 100 Heroes & Villains are accompanied by a seemingly realistic threat: HAL 9000, an artificial intelligence-powered computer that thinks for itself and is ready to kill.
As AI finds a place in homes, minds, and companies across the world, many are aflutter over agentic AI and the rise of the agentic Internet. Analysts, policy wonks, and academics appear to believe that the agentic Internet will usher in an age of increased productivity and innovation for consumers and organizations.
However, as the protagonists of 2001: A Space Odyssey found out, with every technological advance comes the potential for misuse. With society on the cusp of a fully functioning agentic Internet, some experts cite the need to put guardrails in place or risk serious harm.
Agency or Obedience: Finding the Right Balance
At its most basic, the agentic Internet is a web of autonomous, AI-driven agents, each of which makes at least one decision on a user’s behalf. Much of the discussion around the agentic Internet centers on commerce and information. Consumer use cases include asking AI agents to take over research and shopping, to curate information, and to handle mundane tasks such as paying bills.
Business use cases are appealing and numerous. AI agents already are looking for ways to better target cancer research, handle software development, optimize manufacturing, and more. It’s really a question of what an AI agent can do, rather than what it can’t. But what does it mean for a society that turns over to such agents all the major and minor decisions humans have always made? That’s the most critical question, says Kate O’Neill, founder and CEO of strategic advisory firm KO Insights and author of What Matters Next.
“The easier technology makes things, the harder we must think about its effects,” said O’Neill. “When AI agents handle our routine choices, they don’t just save time; they shape our behaviors, preferences, and ultimately, our autonomy. The real question isn’t whether AI can make good decisions, but whether those decisions align with human flourishing.”
That’s a difficult question, especially when the AI we have today often makes mistakes or hallucinates, and tells humans exactly what they want to hear: the perfect little electronic ego boost. In addition, AI is trained on data. If the wrong people do the training or use the wrong data, the agent’s answers will reflect what it has been taught. Even when training and data are provided with good intent, the results can be flawed due to bias.
These problems become magnified when AI agents interact with each other, according to Merve Hickok, a lecturer in the School of Information at the University of Michigan and founder of AIethicist.org. An expert on AI policy, ethics, and governance, Hickok said that while one biased or dangerous agent can be worked around, when agents interact, the impact grows exponentially.
“Since we do not have a solution for value alignment or hallucinations, we should be worried about agentic AI systems based on language models,” Hickok said. “The same ethical concerns apply, with the additional element of more complexity. An individual agentic AI might contain bias or errors. Interconnected agentic AI might snowball and complicate the issues. Or individually acceptable systems might have risks when they operate together.”
The Barn Door Is Already Open
The easiest fix for out-of-control AI agents is imbuing them with ethical pathways and common-sense rules before they are turned loose on the world. Having such rules built in could make it more difficult for AI agents to do harm to users or other systems. Yet some experts say we’re already past the point of no return, and that creating or retrofitting AI agents with ethics will be nearly impossible. The main hurdle is that there is no single point of development and usage, said Faisal Hoque, founder of several companies and author of Transcend: Unlocking Humanity in the Age of AI. Said Hoque, “Who’s responsible? It’s really a multi-tiered effort. …The first tier is the platform vendors who are building these agents.”
The second and third tiers, Hoque said, are governments and the developers and users who work with the agents. Getting all of these entities to agree on the same goals, much less to put limits on AI agents, is unlikely ever to happen. What corporate developers and professional users can do, however, is create complete transparency and, in some cases, build in a ‘kill switch’ to be deployed in case of a problem, Hoque said.
“You cannot have a kill switch for all AI because that’s the entire Internet, or entire connected network, but you can have a kill switch for a particular application in a particular setup.”
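Hoque’s distinction suggests a simple engineering pattern: an application-scoped halt signal that a human operator can trip, checked before every agent action. The following minimal Python sketch illustrates the idea; the `KillSwitch` class and the agent interface (`agent.plan`, `step.execute`) are hypothetical, not drawn from any particular platform.

```python
import threading

class KillSwitch:
    """An application-scoped halt signal a human operator can trip at any time."""
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        """Called by the human operator (or a monitoring process)."""
        self._tripped.set()

    def is_tripped(self) -> bool:
        return self._tripped.is_set()

def run_agent(agent, task, kill_switch):
    """Run the agent's plan, but stop before any further step once the switch is tripped."""
    for step in agent.plan(task):  # hypothetical agent interface
        if kill_switch.is_tripped():
            raise RuntimeError("Agent halted by operator kill switch")
        step.execute()
```

Because the check sits inside one application’s loop, tripping the switch halts that agent alone and leaves the rest of the network untouched, which is exactly the scoping Hoque describes.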
Power to the People
It’s important to note that, with a kill switch, actual control of the agent stays in a human’s hands, something that must continue in order to avoid catastrophes, explains Ece Kamar, distinguished scientist and managing director of Microsoft’s AI Frontiers Lab. Kamar sees the potential benefits of the agentic Internet, saying it provides an opportunity for co-evolution with models to create real value for the people using agentic systems, but with some caveats.
“[Agentic AI] allows better understanding of user needs, taking actions to get things done, and being able to interact with the environment. The innovations on reasoning and model capabilities are contributing to a new technology stack towards creating reliable and capable agents,” Kamar said. “With this higher value that agents can foster, there is a new set of risks we should be aware of, which prompts new research questions around mitigating those risks and how to enable effective human-agent collaboration that puts people in control.”
This should be coupled with auditing from an outside source, whether an industry group, government entity, or third-party vendor, that keeps track of how AI agents are built, tested, executed, and monitored.
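Such auditing presupposes that agents leave a trail worth inspecting. One common approach, sketched below, is an append-only log with one structured record per consequential agent action; the field names here are illustrative, not taken from any standard or from the article’s sources.

```python
import json
import time
import uuid

def audit_record(agent_id, action, inputs, output, model_version):
    """Build one append-only entry describing a single agent decision."""
    return {
        "event_id": str(uuid.uuid4()),   # unique ID so records can be referenced
        "timestamp": time.time(),
        "agent_id": agent_id,
        "model_version": model_version,  # which build/weights made the call
        "action": action,
        "inputs": inputs,
        "output": output,
    }

# Append a record for a hypothetical shopping agent's purchase decision.
with open("agent_audit.jsonl", "a") as log:
    record = audit_record("shopping-agent-7", "purchase",
                          {"item": "toothpaste", "budget": 5.00},
                          "order placed: #123", "model-v2.4")
    log.write(json.dumps(record) + "\n")
```

An outside auditor can then sample or replay the log without needing access to the agent’s internals.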
Transparency can also be built in by programming the AI agent to tell users how it makes decisions and arrives at answers, says Eelco Herder, an associate professor in the Interaction Group at Utrecht University and chair of the ACM Special Interest Group on Hypertext and the Web (SIGWEB).
“There is a strong research area focused on transparent and fair recommender systems that use explanations as a main mechanism,” Herder said. For instance, if you asked an AI agent to find you a dentist, the agent would show you how it arrived at that recommendation. This could expose some biases and hallucinations.
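In code, such an explanation can travel with the recommendation itself. The sketch below pairs a choice with the ranking criteria and sources behind it; the structure is illustrative and assumes no particular recommender library.

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedRecommendation:
    """A recommendation bundled with the evidence that produced it."""
    choice: str
    criteria: dict                                # factor -> weight used in ranking
    sources: list = field(default_factory=list)   # where each supporting fact came from

# A hypothetical answer to "find me a dentist."
rec = ExplainedRecommendation(
    choice="Dr. Patel, Elm Street Dental",
    criteria={"distance": 0.5, "average_rating": 0.3, "accepts_insurance": 0.2},
    sources=["maps listing", "review aggregator", "insurer directory"],
)

# Surfacing weights and sources lets a user spot a skewed criterion
# or a source that does not exist, i.e., bias or hallucination.
print(rec.choice, rec.criteria, rec.sources)
```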
Finally, organizations and users must start putting pressure on developers so that they don’t forget what’s at stake. Liz Miller, vice president and principal analyst at Constellation Research, said this is imperative.
“As an enterprise, we have to ask harder questions when we are actually considering and bringing these tools in. What are the governance models? What are the training guidelines? What are the training limitations [of] organizations who are fine-tuning these large models, or organizations who are building these large models? It is our responsibility to ask them the hard questions, to demand to know what those policies and what those details are.”
Added Hickok, “I do not think we yet know the full extent of safeguards which may be necessary, or the effectiveness of the current methods we have.”
K.J. Bannan is a writer and editor based in Massapequa, NY, USA. She began her career on the PM Magazine First Looks team reviewing all the latest and greatest technologies. Today, she is a freelancer who covers business, technology, health, personal finance, and lifestyle topics.