Confluent unveils AI Model Inference for seamless data-AI integration

Tue, 7th May 2024

Data streaming pioneer Confluent has unveiled AI Model Inference, a new capability designed to streamline the incorporation of machine learning into data pipelines. It lets engineers use simple SQL statements from within Apache Flink to call AI models, including those hosted on OpenAI, AWS SageMaker, GCP Vertex AI, and Microsoft Azure. This reduces the need for specialist tools and languages, bridging data processing and AI workflows, and ultimately enabling more accurate, real-time AI-driven decision-making on fresh, contextual streaming data, the company states.
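To illustrate the workflow, here is a minimal Flink SQL sketch based on Confluent's CREATE MODEL and ML_PREDICT statements; the table, column, and connection names are hypothetical, and exact option keys may vary by provider:

    -- Register a remote AI model as a first-class Flink SQL resource
    -- ('my-openai-connection' is an assumed, pre-configured connection)
    CREATE MODEL review_classifier
    INPUT (review_text STRING)
    OUTPUT (sentiment STRING)
    WITH (
      'provider' = 'openai',
      'task' = 'classification',
      'openai.connection' = 'my-openai-connection'
    );

    -- Invoke the model on each streaming row with ML_PREDICT
    SELECT review_text, sentiment
    FROM product_reviews,
      LATERAL TABLE(ML_PREDICT('review_classifier', review_text));

In this sketch, every record arriving on the hypothetical product_reviews stream is scored inline by the remote model, with no separate inference service or glue code between the pipeline and the AI provider.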

As part of the announcement, Confluent has also introduced Freight clusters, a cost-effective solution for handling large, non-time-sensitive data volumes, such as logging or telemetry data. Shaun Clowes, Chief Product Officer at Confluent, said, "Apache Kafka and Flink are the critical links to fuel machine learning and artificial intelligence applications with the most timely and accurate data." He added that the new AI Model Inference "removes the complexity involved when using streaming data for AI development."

The development of AI and ML applications typically involves several tools and languages for working with AI models and data processing pipelines, leading to complex and fragmented workflows. This fragmentation can prevent teams from using the most current and relevant data for decision-making, compromising the accuracy and reliability of AI-driven insights. AI Model Inference addresses these challenges, potentially reducing development time and easing the maintenance and scaling of AI applications.

Stewart Bond, Vice President of Data Intelligence and Integration Software at IDC, commented: "Leveraging fresh, contextual data is paramount for training and refining AI models, and for use at the time of inference to improve the accuracy and relevancy of outcomes. Flink can now treat foundational models as first-class resources, enabling the unification of real-time data processing with AI tasks to streamline workflows, enhance efficiency, and reduce operational complexity."

AI Model Inference is currently available in early access to select customers. Confluent has also extended Apache Flink stream processing to private cloud and on-premises environments: alongside unrestricted use of Confluent Platform for Apache Flink, customers receive three years of support for every release from launch, keeping stream processing applications secure and up to date.

Furthermore, many organisations use Confluent Cloud to process logging and telemetry data. These use cases typically involve large data volumes but relaxed latency requirements, and Confluent's newly introduced Freight clusters offer a cost-efficient option for exactly that profile. Simplifying things further, Freight clusters come equipped with Elastic CKUs, which auto-scale based on demand, eliminating the need for manual sizing or capacity planning.
