IT Brief India - Technology news for CIOs & IT decision-makers

StarTree unveils AI features & new Kubernetes deployment option


StarTree has introduced new capabilities to its real-time data platform, including Model Context Protocol (MCP) support and vector embedding model hosting, to enhance enterprise workloads that leverage artificial intelligence.

Model Context Protocol (MCP) support offers a standardised method for AI applications to interact with external data sources and tools. It allows Large Language Models (LLMs) to tap into real-time insights within StarTree and perform actions that extend beyond their inherent training knowledge. The feature is scheduled to be available in June 2025.
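To illustrate the pattern MCP standardises, here is a minimal, toy sketch of a server-side tool registry: the server advertises tools, and a connected LLM can request calls against live data it was never trained on. The tool name, schema, and data are invented for illustration; a real MCP server speaks JSON-RPC and would query a live store such as StarTree rather than an in-memory dict.

```python
# Toy, MCP-style tool registry (illustrative only; names and schemas are hypothetical).
import json

TOOLS = {
    "query_orders": {
        "description": "Return live order counts for a city from the analytics store.",
        "input_schema": {"type": "object", "properties": {"city": {"type": "string"}}},
    }
}

# Stand-in for a real-time database; a production server would query the live platform.
LIVE_ORDERS = {"Mumbai": 1280, "Delhi": 950}

def list_tools() -> str:
    """What the server advertises to a connecting AI client."""
    return json.dumps({"tools": [dict(name=n, **t) for n, t in TOOLS.items()]})

def call_tool(name: str, arguments: dict) -> dict:
    """Dispatch a tool call requested by the model."""
    if name == "query_orders":
        city = arguments["city"]
        return {"city": city, "orders": LIVE_ORDERS.get(city, 0)}
    raise ValueError(f"unknown tool: {name}")

# A model that has read list_tools() can now fetch fresh data on demand:
result = call_tool("query_orders", {"city": "Mumbai"})
print(result)  # {'city': 'Mumbai', 'orders': 1280}
```

The value of the standard is that any MCP-aware AI application can discover and call these tools without a bespoke integration per data source.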

The vector auto-embedding feature, drawing on Amazon Bedrock, is designed to streamline and accelerate the generation and ingestion of vector embeddings for real-time retrieval-augmented generation (RAG) use cases. It will become available in autumn 2025.

With these updates, StarTree's platform now supports three main areas of AI-enabled enterprise analytics. Agent-facing applications are enabled through MCP support, permitting AI agents to process live, structured enterprise data dynamically. StarTree's architecture, built for high concurrency, is intended to allow enterprises to support large numbers of autonomous agents making rapid decisions on tasks such as optimising delivery routes, adjusting pricing, or mitigating service disruptions.

Conversational querying also benefits from MCP's standardisation of LLM-to-database integration, making natural language to SQL (NL2SQL) conversion easier to deploy at scale. Enterprises can let users ask questions in natural language and receive immediate, contextualised answers: a driver might ask, "How much money have I made today?", then follow up with "What about this month?" and "Where and when am I making the most money?" The platform's real-time context delivery is essential for this kind of sequential, conversational interaction.
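A toy sketch of why follow-up questions like "What about this month?" need session context: the translation layer has to remember what the previous question was about. The table, columns, and matching rules below are invented for illustration and are not StarTree's NL2SQL implementation.

```python
# Hedged sketch of conversational NL2SQL with remembered session context.
# Table name, columns, and keyword matching are hypothetical.
SESSION = {"period": "day"}  # context carried between questions

def to_sql(question: str, driver_id: int) -> str:
    q = question.lower()
    # Update session context when the question names a time period.
    if "month" in q:
        SESSION["period"] = "month"
    elif "today" in q:
        SESSION["period"] = "day"
    period = SESSION["period"]
    if "where" in q and "when" in q:
        # e.g. "Where and when am I making the most money?"
        return (f"SELECT city, hour, SUM(earnings) AS total FROM trips "
                f"WHERE driver_id = {driver_id} AND period = '{period}' "
                f"GROUP BY city, hour ORDER BY total DESC LIMIT 1")
    # e.g. "How much money have I made today?" / "What about this month?"
    return (f"SELECT SUM(earnings) FROM trips "
            f"WHERE driver_id = {driver_id} AND period = '{period}'")

print(to_sql("How much money have I made today?", 42))
print(to_sql("What about this month?", 42))  # inherits context, only the period changes
```

The second query names no metric at all; it only works because the session carries forward what "it" refers to, which is why real-time context delivery matters for interactive use.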

StarTree's vector auto-embedding supports pluggable vector models for organisations building real-time RAG pipelines, simplifying the workflow from data source to embedding creation and ingestion. This enhancement makes it easier to scale AI-driven use cases, such as financial market monitoring and system observability, by removing the complexity of manual or piecemeal workflow integrations.

StarTree has also announced a new deployment option, Bring Your Own Kubernetes (BYOK). This option gives organisations full control over StarTree's analytics infrastructure within their own Kubernetes environments, and supports cloud, on-premises, and hybrid deployments.

BYOK provides organisations with full governance and infrastructure control, while retaining StarTree's real-time performance and usability. The deployment model is described as suitable for sectors with strict data residency, compliance, and security policies, including financial services and healthcare, where traditional software-as-a-service (SaaS) options may not be viable. The model also targets organisations with predictable workloads, offering cost reductions on compute and data egress fees.

Kishore Gopalakrishna, Co-founder and Chief Executive Officer of StarTree, said, "The next wave of AI innovation will be driven by real-time context—understanding what's happening now. StarTree's heritage as a real-time analytics foundation perfectly complements where AI is going by delivering fresh insights at scale. What is changing is the shift from apps as the consumer to autonomous agents."

"Real-time insights are no longer optional, but too often, enterprises are blocked by infrastructure constraints. With BYOK, we remove those barriers. Companies can now deploy StarTree wherever they need it, without compromising on performance, security, or cost control."

BYOK is now available in private preview and joins StarTree's current deployment models, including fully managed SaaS and Bring Your Own Cloud (BYOC) deployments. This suite of deployment options is designed to provide flexibility for customers to address various operational and regulatory requirements.
