IT Brief India - Technology news for CIOs & IT decision-makers

Enterprises pivot to risk-aware, human-centric AI in 2026

Sat, 13th Dec 2025

Enterprises are preparing for a more cautious phase of artificial intelligence deployment in 2026, as security concerns, stalled pilots and pressure to prove financial returns prompt a shift from rapid experimentation to more structured, risk-aware programmes.

Security specialists and enterprise technology leaders expect some organisations to roll back earlier generative AI initiatives, tighten governance and reframe AI systems as human-supervised tools rather than fully autonomous agents.

Risk concerns

Security risk is at the centre of the reassessment. Inconsistent outputs from generative models are emerging as a problem for organisations that rely on repeatable processes and must comply with regulations.

Jake Williams, Faculty, IANS Research & VP of R&D, Hunter Strategy, said, "Enterprises will continue to struggle to mitigate risks in generative AI applications. There are definitely situations where generative AI can provide great value, but rarely within the risk tolerance of enterprises. The LLMs that underpin most agents and gen-AI solutions do not create consistent output, leading to unpredictable risk. Enterprises value repeatability, yet most LLM-enabled applications are, at best, close to correct most of the time."

"Enterprises today struggle to address the security risks introduced by the inconsistent output of LLMs. In 2026, we'll see more organisations roll back their adoption of AI initiatives as they realise they can't effectively mitigate risks, particularly those that introduce regulatory exposure. In some cases, this will result in re-scoping applications and use cases to counter that regulatory risk. In other cases, it will result in entire projects being abandoned. Smart technology leaders will perform threat modeling exercises before solution development begins to identify and eliminate risks that will be unacceptable to the organisation," said Williams. 

The emphasis on threat modelling before development suggests a more traditional security engineering mindset being applied to AI projects that have often been run as fast-moving pilots.

Strategy shift

Enterprise AI teams are also rethinking how projects are initiated. Commentators expect a move away from tool-led experimentation towards programmes tied to specific business outcomes and governed by cross-functional teams.

John Dawson, Director, Creative ITC, said, "Almost every CEO is tasking their leadership to implement AI, but so far, most companies have focused on rushing to adopt out-of-the-box tools, before actually defining what they want to achieve with them. It may sound simple, but leaders must first know what challenge they want to solve, rather than trying to retrofit a tool to that use case. Consequently, many have yet to see the payout of their investment."

"In the year ahead, we expect a greater focus on organisations flipping this trodden narrative to help unlock and prove the ROI of AI. Starting with identifying clear objectives - such as revenue growth or operational efficiencies - we'll see firms building a common data environment (CDE) for their specific use case and then developing proprietary systems. The business case must then be 'proven' within the first three to six months, or the pilot will be put on hold. This will enable firms to test AI securely, analyse its outcomes and pinpoint areas of improvement before scaling it across the enterprise."

"Critically, success won't be achieved by the IT team alone. It must be driven by key stakeholders - end-users to road-test applications; business leads to put in place the right governance for safe experimentation; analysts to quantify and report impacts; and third-party partners to guide change management and AI implementation. Those that pivot to a use case and outcomes-driven approach over being tempted by the latest tools will be at the forefront of this shift towards ROI-focused strategies."

This focus on common data environments and time-bound pilots reflects a growing insistence on measurable returns before scaling deployments.

Human role

Alongside the rebalancing of risk and return, some enterprise leaders expect AI to be repositioned as a colleague-like system that works within human-led processes, rather than as a standalone automation layer.

Laura Wenzel, Global Market & Insights Director, iManage, said, "In 2026, organisations will shift from a scrambled approach to AI adoption to a strategic, considered implementation, driven by practitioner-led teams comprising business decision-makers, users, domain experts, and technical professionals."

Wenzel links this to a reassessment of projects launched during the recent hype cycle.

"Over 2024 and 2025, the generative AI hype cycle created a significant disparity between leadership enthusiasm for AI adoption and end-user adoption. This reflects a fundamental misalignment - organisations deployed AI solutions without adequately defining the business problems they were trying to solve. Therefore, while creating an early momentum to deploy AI, it resulted in a widespread 'abandonment' of AI as the technology failed to deliver meaningful value to end-users."

She expects the correction to be accompanied by the creation of new roles focused on the interface between people, processes and AI systems.

"The correction to define and evaluate AI's practical utility will result in demand for new skills and competencies and the creation of brand new roles, from AI Conversation Designer and AI Integration Specialist through to Agent Trainer and Ethical AI Architect, as examples," said Wenzel.

Teammate model

Wenzel also anticipates a mindset shift in how enterprises frame AI's purpose and limitations.

"2026 will be marked by the realisation that AI isn't simply a single standalone technology solution to throw problems at, but that its effectiveness depends entirely on integration with humans as a collaborative partner rather than an augmentation tool."

She expects greater attention to the boundaries of automation and the need for human judgment.

"After 18 months of intense focus on AI's technical capabilities - such as its power to automate, accelerate, and augment - 2026 will see a marked shift towards understanding the critical role of the human in making its adoption a success. Enterprises will recognise that AI's value is fundamentally limited by the quality of human involvement surrounding it. While AI excels at numerous tasks, it cannot replicate the nuanced judgment, contextual knowledge, and adaptive decision-making that humans provide," said Wenzel.

Organisations testing autonomous agents are expected to confront these limits directly.

"Organisations experimenting with autonomous AI agents will encounter friction and unintended business consequences when human oversight is absent. The winners in 2026 will be companies that reframe AI not as technology, but as a new teammate designed to boost their existing workforce. This human-centric approach to AI deployment will separate thriving organisations from those struggling with adverse business outcomes," said Wenzel.

Agentic systems

Some firms are preparing for more complex, agent-based architectures, underpinned by shared standards designed to connect different models and data sources.

"With the widespread availability of Model Context Protocol (MCP), the deployment of autonomous AI agents will gather pace. A handful of organisations with mature standard operating procedures will lead adoption. In contrast, the others will significantly invest time and effort in documenting and optimising internal workflows to support autonomous operations," said Wenzel.

Wenzel concluded, "MCP creates an interoperability layer that connects different large language models, enabling AI agents to function seamlessly across platforms. However, agentic AI requires autonomous operation with minimal human intervention, with people involved only for exceptions. Therefore, centralised data and the existence of optimised workflows will determine when organisations are in a position to leverage agentic AI. Without this, the risks will far outweigh the benefits."
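For readers unfamiliar with the mechanics, MCP frames every exchange between an agent and a tool-providing server as a JSON-RPC 2.0 message; `tools/call` is the method an agent uses to invoke a capability a server advertises. The sketch below builds such a request in Python. The tool name `search_documents` and its arguments are hypothetical, chosen only to illustrate the message shape, and do not refer to any real server.

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialise an MCP 'tools/call' request as a JSON-RPC 2.0 message.

    MCP transports (stdio or HTTP) carry messages in this envelope;
    the 'params' object names the tool and passes its arguments.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical example: an agent asking a document-management server
# to run a search. Tool name and arguments are illustrative only.
msg = mcp_tool_call(1, "search_documents", {"query": "Q3 contract renewals"})
parsed = json.loads(msg)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # search_documents
```

Because every model and data source speaks this one envelope, an agent can be pointed at a new server without bespoke integration work, which is the interoperability layer Wenzel describes.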
