Exclusive: Cloudera warns enterprises of looming AI cost crisis
According to Manasi Vartak, Chief AI Architect at Cloudera, the AI revolution is already here.
But she's clear about one thing: enterprises are charging ahead without realising just how high the price of AI will be.
"Right now, someone else is footing the bill - OpenAI and others - but that bill will come due," Vartak warned during an exclusive interview at EVOLVE25 in New York.
"How do we help our customers think ahead and prepare for that? That's the question we're tackling now."
Vartak, who oversees the development of Cloudera's AI strategy, believes too many businesses are entering the AI race with a false sense of security.
Whether it's spiralling infrastructure costs, ethical blind spots, or fragile data architecture, the risks are piling up - and few are equipped to manage them.
"In today's environment, you cannot build software without AI. And you can't build any data product without it," she said. "But if you're building AI on a foundation that isn't governed, secure, or even cost-effective, you're just asking for trouble."
AI without trusted data is just hype
Much of the current hype around AI, Vartak said, misses a key point: models are only as good as the data they run on. And most enterprises are still struggling to make their data usable.
"LLMs already know the public Internet," she explained. "What they don't know is your enterprise - your customers, your support tickets, your legal documents, your HR. For agents and AI to be truly useful, they need to connect with your data and understand it deeply."
That requires a shift in thinking across several layers. First, companies need to establish robust data architecture: "Do you have the right ingest mechanisms? Are you capturing the right metadata?" she asked. "Are you thinking about data residency - what data needs to stay on-prem, what can go into the cloud?"
Then comes governance. "There's a bot accessing this data. You need to make sure I can't ask that bot what my colleague's salary is," she said. "That's where strong data governance becomes absolutely critical."
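The salary example boils down to field-level access control sitting between the bot and the data. A minimal sketch of that idea might look like the following; the names (`can_access`, `ROLE_GRANTS`) are illustrative, not a Cloudera API.

```python
# Hypothetical field-level access control for a data-connected bot.
# Restricted fields are only answerable if the caller's role holds a grant.

RESTRICTED_FIELDS = {"salary", "ssn", "performance_review"}

ROLE_GRANTS = {
    "hr_admin": {"salary", "performance_review"},
    "employee": set(),
}

def can_access(role: str, field: str) -> bool:
    """Allow a field if it is unrestricted or explicitly granted to the role."""
    if field not in RESTRICTED_FIELDS:
        return True
    return field in ROLE_GRANTS.get(role, set())

def answer_query(role: str, field: str, record: dict) -> str:
    """Refuse restricted fields before the bot ever sees the value."""
    if not can_access(role, field):
        return "Access denied: this field is restricted for your role."
    return str(record.get(field, "unknown"))
```

The key design point is that the check happens before retrieval, so the model never receives data the user could not have queried directly.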
Finally, there's AI governance: ensuring the models themselves are used responsibly. "We've built tools into the platform so users can trace and observe AI outputs - check if responses are grounded in data, or if the model is hallucinating," Vartak said. "These are safeguards our customers shouldn't have to build from scratch."
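As a toy illustration of the groundedness checks Vartak describes, one crude heuristic is to measure how much of an answer's vocabulary actually appears in the retrieved source passages. Real observability tooling is far more sophisticated; this sketch only conveys the shape of the idea.

```python
# Toy groundedness heuristic: flag answers whose words overlap little
# with the retrieved source passages. Not a production technique.

def grounding_score(answer: str, sources: list[str]) -> float:
    """Fraction of answer words that appear in at least one source passage."""
    answer_words = set(answer.lower().split())
    if not answer_words:
        return 0.0
    source_words: set[str] = set()
    for passage in sources:
        source_words.update(passage.lower().split())
    return len(answer_words & source_words) / len(answer_words)

def is_grounded(answer: str, sources: list[str], threshold: float = 0.6) -> bool:
    """A low score suggests the answer may not be grounded in the sources."""
    return grounding_score(answer, sources) >= threshold
```

An answer scoring below the threshold would be surfaced for review rather than returned silently, which is the kind of safeguard Vartak argues customers shouldn't have to build themselves.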
A slow crisis is building
While many companies are racing to adopt AI, few are thinking about long-term consequences - particularly financial ones.
"There's this misconception that AI is free. You use an API, it works, and that's it. But when you try to scale to thousands of users, or plug in customer data, that's when the costs explode," she said. "Inference, hosting, latency - it all adds up."
And when businesses do try to move from proof-of-concept to production, they often hit a wall. "POCs are easy - usually using non-sensitive, de-identified data," Vartak explained. "But going to production means implementing data controls, meeting higher quality standards, and managing cost at scale."
That last point, she said, is one of the biggest threats most businesses aren't preparing for. "Someone else is covering the costs now, but that won't last. Enterprises will have to absorb that, and many aren't ready."
Governance must scale, or fail
With the rapid rise of generative AI, the governance challenge is growing faster than most organisations can handle.
"What we're seeing is that automation is now accessible to such a wide range of users. These risks that once affected five people in a data science team are now affecting hundreds," she said. "And that's where it gets dangerous."
Cloudera's answer has been to establish an AI steering committee - a model more of its customers are starting to adopt.
"It includes legal, InfoSec, IT, engineering - all debating how AI should be used, what's allowed, and what's not," Vartak explained. "You have to align your stakeholders; otherwise your tools are only as useful as the people backing them."
She added that moving fast doesn't always mean being reckless, but it does require segmentation. "Internal tools that use public data go fast. But anything touching customer data or PII, you slow down," she said. "There's no one-size-fits-all. You need structure."
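The segmentation rule she outlines can be expressed as a simple tiering function: public-data tools take the fast track, while anything touching customer data or PII is routed to heavier review. The tier names below are illustrative assumptions, not Cloudera's terminology.

```python
# Hypothetical risk-tiering rule following the segmentation described above.
# Tier labels are illustrative placeholders.

def review_track(uses_customer_data: bool, uses_pii: bool) -> str:
    """Route an AI project to a review track based on data sensitivity."""
    if uses_pii:
        return "full-governance-review"
    if uses_customer_data:
        return "security-and-legal-review"
    return "fast-track"
```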
Lineage is the foundation
At the heart of Cloudera's AI strategy is data lineage - a concept many businesses still overlook.
"Lineage means you know where the data came from, how it was processed, who touched it, and what it's used for," Vartak explained. "If an AI model makes a decision or a prediction, you need to be able to trace exactly how it got there."
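The four questions in that definition map naturally onto a per-dataset event log. A minimal sketch, with field names that are illustrative rather than any product schema, might look like this:

```python
# Minimal lineage record capturing the four questions above: where data
# came from, how it was processed, who touched it, and what it's used for.
# Field names are illustrative, not a product schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageEvent:
    dataset: str
    source: str          # where the data came from
    transformation: str  # how it was processed
    actor: str           # who touched it
    purpose: str         # what it's used for
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def trace(events: list[LineageEvent], dataset: str) -> list[LineageEvent]:
    """Return the ordered chain of events recorded for one dataset."""
    return [e for e in events if e.dataset == dataset]
```

With a log like this, tracing how a model's prediction was fed becomes a query over events rather than guesswork.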
Without lineage, companies are exposed. "Look at the lawsuits against OpenAI and Anthropic - claims that they used data they shouldn't have. If they had good lineage, they could say exactly what was used and what wasn't," she said. "Without it, you're in the dark."
Cloudera's acquisition of Octopai last year strengthened its position in this area. "Octopai collects metadata across sources, even outside Cloudera, and builds a map of how data flows. That's what helps stop scenarios like someone accessing data they're not supposed to see."
AI isn't slowing down, and neither should oversight
Looking ahead, Vartak is particularly excited by the advances in agent-based systems - automated tools that reason, act, and communicate. But she's also cautious.
"Agent-to-agent communication, improved reasoning models, human feedback loops - there's so much innovation happening," she said. "But these are still early days, and we need to ask hard questions."
Her role, she believes, will continue to evolve.
"Every company will need to figure out how to leverage the next wave of AI - internally and in their products. And that means this job will stay exciting, but also more demanding."
As the hype grows louder, Vartak's message remains grounded in reality: AI can transform industries - but only if enterprises do the hard work first.
"AI is going to change everything we do," she said. "But if we want it to be safe, useful, and fair - we have to get the foundations right."