IT Brief India - Technology news for CIOs & IT decision-makers

Organisations face risk with open-source AI & cloud use


Tenable has released a new report warning that organisations are increasingly exposed to cybersecurity risks as they adopt open-source AI tools and cloud services at a pace that exceeds their security readiness.

According to Tenable's Cloud AI Risk Report 2025, a growing reliance on open-source frameworks and rapidly configured cloud AI services is creating vulnerabilities, misconfigurations, and elevated risk of data exposure across cloud environments.

Tenable's findings are supported by broader industry trends. A McKinsey Global Survey cited in the report revealed that by early 2024, 72 percent of organisations globally had adopted AI in at least one business function, compared to 50 percent two years prior. This widespread uptake has introduced greater complexity in securing AI workloads and the supporting ecosystem of open-source libraries and managed services.

The report is based on analysis of real-world environments spanning Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) between December 2022 and November 2024. It found that AI development environments are heavily reliant on open-source packages, many of which are integrated at speed and lack adequate review or security assessment. Among the most commonly observed were Scikit-learn and Ollama, present in nearly 28 percent and 23 percent of AI workloads, respectively.

While such frameworks help accelerate the development of machine learning solutions, they can also introduce hidden vulnerabilities due to their open-source nature and multiple dependency chains. Tenable points out that this risk is heightened when AI workloads run on Unix-based systems, which have long depended on open-source libraries. This scenario increases the likelihood of unpatched vulnerabilities persisting, presenting opportunities for attackers to access sensitive data or compromise AI models.
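The kind of dependency-chain check the report implies can be sketched in a few lines: inventory the open-source packages in an environment and flag any whose installed version predates a known patched release. The minimum-version table below is purely illustrative; in practice it would be populated from a vulnerability feed such as the OSV database.

```python
# Minimal sketch: flag installed open-source packages that fall below a
# known patched version. The MIN_PATCHED table is hypothetical -- real
# floors would come from a vulnerability feed, not be hard-coded.
from importlib.metadata import distributions

# Hypothetical "first patched release" per package (illustrative values only).
MIN_PATCHED = {
    "scikit-learn": (1, 0, 0),
    "ollama": (0, 2, 0),
}

def parse_version(text):
    """Parse a dotted version string into a comparable tuple of ints."""
    parts = []
    for piece in text.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def flag_outdated(installed):
    """Return package names whose installed version predates the patched one."""
    outdated = []
    for name, version in installed.items():
        floor = MIN_PATCHED.get(name.lower())
        if floor and parse_version(version) < floor:
            outdated.append(name)
    return outdated

def scan_environment():
    """Collect name/version pairs for every package in this environment."""
    return {d.metadata["Name"]: d.version for d in distributions()}
```

Running `flag_outdated(scan_environment())` would surface any package still on a vulnerable release, which is the kind of continuous check the report argues is missing when packages are integrated at speed.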

The report also highlights a strong correlation between AI adoption and heavy use of managed cloud services. On Microsoft Azure, for example, 60 percent of organisations analysed had configured Azure Cognitive Services, 40 percent had deployed Azure Machine Learning, and 28 percent had used Azure AI Bot Service. Within AWS environments, 25 percent of users configured Amazon SageMaker and 20 percent deployed Amazon Bedrock. On GCP, Vertex AI Workbench was detected in 20 percent of environments.

Tenable warns that high rates of adoption of these cloud services increase the complexity of securing environments. Security risks are amplified by improper configurations or excessive permissions—many of which are enabled through default settings—potentially leaving critical infrastructure and sensitive training data open to cyber attacks.
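One common form of the excessive-permissions problem the report describes is a policy statement that grants wildcard actions or applies to all resources. A simple check can be sketched as below; the policy shape mirrors AWS IAM JSON, but the logic is generic and the example policy is hypothetical.

```python
# Illustrative sketch: flag IAM-style policy statements that grant
# wildcard actions or apply to every resource -- the kind of overly
# permissive default the report warns about.
def find_overly_permissive(policy):
    """Return indices of Allow statements using '*' actions or resources."""
    flagged = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM JSON permits either a single string or a list; normalise both.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions or "*" in resources:
            flagged.append(i)
    return flagged
```

A statement like `{"Effect": "Allow", "Action": "*", "Resource": "arn:aws:s3:::bucket"}` would be flagged, while a narrowly scoped `s3:GetObject` grant would pass, which is the least-privilege distinction the report draws.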

"Organisations are rapidly adopting open-source AI frameworks and cloud services to accelerate innovation, but few are pausing to assess the security impact," said Nigel Ng, Senior Vice President at Tenable APJ. "The very openness and flexibility that make these tools powerful also create pathways for attackers. Without proper oversight, these hidden exposures could erode trust in AI-driven outcomes and compromise the competitive advantage businesses are chasing."

Tenable's report provides several mitigation strategies to help organisations address these risks. Organisations are urged to manage AI exposure holistically by continuously monitoring cloud infrastructure, workloads, identities, data, and AI tools to maintain contextual visibility and prioritise mitigation efforts. The company also recommends classifying AI assets—such as models, datasets, and tools—as sensitive, treating them as high-value targets for regular scanning and protection.
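The classification step the report recommends amounts to separating AI assets from the rest of a cloud inventory so they can be scanned and protected as high-value targets. A minimal sketch, assuming a hypothetical inventory format and illustrative resource-type names:

```python
# Hedged sketch: split a cloud inventory into sensitive AI assets and
# everything else, per the report's advice to treat models, datasets and
# AI tools as high-value targets. The type labels are illustrative.
AI_ASSET_TYPES = {"model", "training-dataset", "notebook", "vector-store"}

def classify_assets(inventory):
    """Split an inventory into (sensitive AI asset names, other names)."""
    sensitive, other = [], []
    for asset in inventory:
        if asset.get("type") in AI_ASSET_TYPES:
            sensitive.append(asset["name"])
        else:
            other.append(asset["name"])
    return sensitive, other
```

The sensitive list would then feed the regular scanning and access-control reviews the report calls for, rather than being treated like ordinary infrastructure.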

Tenable advises organisations to stay updated on regulatory requirements and best practices, including mapping AI data stores, implementing strict access controls, and ensuring development aligns with frameworks such as NIST's AI Risk Management Framework. Additional recommendations include enforcing least-privilege access by reviewing permissions, managing cloud identities tightly, and verifying that configurations match provider security best practices, as default settings may be overly permissive. Tenable also advocates prioritising the remediation of critical vulnerabilities with advanced tools to reduce alert fatigue and improve efficiency.

"AI will shape the future of business, but only if it is built on a secure foundation," said Ng. "Open-source tools and cloud services are essential, but they must be managed with care. Without visibility into what is being deployed and how it is configured, organisations risk losing control of their AI environments and the outcomes those systems produce."
