AI supply chain to mature with trust & oversight by 2026
The AI supply chain is set for major consolidation and maturity by 2026, according to Duncan Curtis, Senior Vice President for GenAI at Sama. Curtis said the current fragmented ecosystem will transform into a more unified structure built around integrated infrastructure and robust human oversight.
The market faces pressure to address persistent weaknesses in data, model evaluation, and operational reliability. Until now, companies have managed disparate services and relied on disconnected vendors. Curtis predicted that businesses will restructure their AI development practices to ensure resilience and speed, building supply chains with clear step-by-step lineage from data collection through to deployment.
Integrated Infrastructure
Cognitive infrastructure will play a central role in this evolution. Companies will stop treating data operations and human-in-the-loop (HITL) processes as afterthoughts; instead, these will become tightly integrated steps within AI supply chains.
Curtis said organisations that fail to embed these processes may lose significant value in their AI initiatives. He added that vertical integration, in which companies control each major link in their supply chains, will become a defining strategy. Firms following this model will build stacks that manage entire segments of AI workflows. Examples set by companies such as Google and Tesla indicate the benefits of end-to-end control over user experience and model deployment.
Cognitive Infrastructure
By 2026, Curtis predicted that advanced AI businesses will view their data workforces and human oversight systems as essential cognitive infrastructure. These functions will shift from back-office roles to centre stage in AI innovation.
He said that continuous human-AI collaboration, used for monitoring bias, identifying rare data cases, and retraining models, will accelerate rather than slow development. Organisations will develop workflows in which human expertise regularly improves model performance in the field.
The companies that blend automation with expert validation will set themselves apart. Their systems will adapt more quickly and generate greater trust among users and clients compared with those that rely on automation alone.
Business Model Pressure
Curtis said model development firms will face increasing demand for clear, viable business models by 2026. Investors and the wider market are scrutinising companies such as Anthropic, which forecasts annual revenues of US$20 billion to US$26 billion, and are requiring businesses to move beyond the promise of long-term returns.
Business leaders will develop a range of monetisation strategies. These could include advertising, freemium approaches with premium features, and in-product purchases, reflecting shifts seen in consumer technology and gaming. As a result, the threshold for new entrants to succeed in the AI sector will rise, due to higher expectations for both technical validation and financial sustainability.
This process is expected to reduce the number of foundation model providers. The market will concentrate around firms able to demonstrate sustainable revenue and profitability.
Regulatory Changes
The AI sector will see tighter regulation focused on child protection by 2026. New laws in California covering how children interact with AI and social platforms will likely serve as a model for global standards.
Policymakers are expected to pass these rules with little opposition, following precedents from GDPR and CCPA on privacy issues. The spread of these regulations could create obstacles for companies operating in more restrictive environments. Curtis noted that this will force governments to weigh the benefits of data protection against potential loss of competitive edge.
Despite the tighter rules, many lawmakers will continue to struggle to understand the technical workings of AI. New laws will therefore define required outcomes and protections, leaving companies themselves to determine how to meet those benchmarks.
Trust as Differentiator
Issues of trust and safety will move from back-end compliance functions to core features of AI products. Curtis said successful companies must make red teaming, safety checking, and model traceability fundamental to their platforms.
"Responsible scale will be the defining characteristic of successful AI companies in 2026. Trust, safety, and governance will shift from compliance add-ons to core product features and competitive differentiators," said Curtis.
Customers and partners will increasingly demand evidence that providers operate responsibly, particularly in high-risk areas such as healthcare, finance, and autonomous systems. Companies investing in explainable AI and regular validation by human experts will have an advantage as the sector matures into a new phase of scrutiny and adoption.