
Dell launches new AI innovations for enterprise & research

Dell has announced a range of advancements in enterprise AI infrastructure, solutions and services to support organisations seeking to adopt and scale artificial intelligence.

The company reported that 75% of organisations view AI as central to their strategy, but high costs and data security concerns remain significant obstacles. Dell aims to address these challenges by simplifying deployment, reducing expenses and enabling secure, scalable AI adoption through its expanded Dell AI Factory, enhanced infrastructure and a growing partner ecosystem.

The Dell AI Factory, launched a year ago, has received more than 200 updates and now supports AI workloads at any scale with new infrastructure, software improvements and collaborations with partners such as NVIDIA, Meta and Google. The company states its approach to on-premises AI inference can be up to 62% more cost effective for large language models compared to public cloud options.

Among the notable product introductions is the Dell Pro Max Plus laptop, equipped with the Qualcomm AI 100 PC Inference Card, which the company states is the world's first mobile workstation to include an enterprise-grade discrete NPU. This platform is intended to provide rapid, secure on-device inferencing for large AI models, facilitating edge deployments outside traditional data centres. The Qualcomm AI 100 PC Inference Card offers 32 AI cores and 64 GB of memory to support engineers and scientists working with sizeable data models.

Addressing the energy demands of AI workloads, Dell introduced the PowerCool Enclosed Rear Door Heat Exchanger (eRDHx), designed to capture 100% of IT-generated heat and reduce cooling costs by up to 60% compared with current solutions. It supports water temperatures between 32°C and 36°C, enables organisations to deploy up to 16% more racks of dense compute without raising power consumption, and provides advanced leak detection and unified management features.

For high performance computing and AI, Dell's PowerEdge XE9785 and XE9785L servers will support AMD Instinct MI350 series GPUs, promising up to 35 times greater inferencing performance. Both liquid- and air-cooled versions will be available to further reduce facility cooling costs.

In terms of data management, Dell's AI Data Platform now includes updates designed to improve access to structured and unstructured data, with Project Lightning, a parallel file system, reported to deliver up to two times greater throughput than alternatives. Enhancements to the Data Lakehouse further streamline AI workflows for use cases like recommendation engines and semantic search, and the introduction of Linear Pluggable Optics aims to lower power use and boost networking efficiency.

Dr Paul Calleja, Director of the Cambridge Open Zettascale Lab and Research Computing Services at the University of Cambridge, commented: "We're excited to work with Dell to support our cutting-edge AI initiatives, and we expect Project Lightning to be a critical storage technology for our AI innovations."

Dell has also broadened its partner ecosystem to include on-premises deployments with platforms such as Cohere North, Google Gemini, Glean's Work AI platform and Meta's Llama Stack, as well as joint solutions with Mistral AI. The company is providing enhancements to its AI platform with AMD and Intel technologies, including upgraded networking, software stack improvements, container support and integration with Intel Gaudi 3 AI accelerators.

Updates to the Dell AI Factory with NVIDIA include new PowerEdge servers supporting up to 192 NVIDIA Blackwell Ultra GPUs per standard configuration and up to 256 per Dell IR7000 rack with direct-to-chip liquid cooling. These advancements aim to simplify data centre integration, speed up rack-scale AI deployment and are reported to deliver up to four times faster large language model training compared to the previous generation.

The PowerEdge XE9712, featuring NVIDIA GB300 NVL72, targets efficiency at rack scale for training and is said to offer up to 50 times more inference output and a five-times improvement in throughput, with new PowerCool technology supporting power efficiency in high-demand environments. The company intends to support the NVIDIA Vera CPU and Vera Rubin platform in future server offerings.

In networking, Dell has extended its portfolio with new PowerSwitch and InfiniBand switches, which deliver up to 800 Gbps of throughput and are now supported by ProSupport and Deployment Services. Further software platform updates include direct availability of NVIDIA NIM, NeMo microservices and Blueprints, plus Red Hat OpenShift integration on the Dell AI Factory with NVIDIA.

To streamline AI operations, Dell has introduced Managed Services for the AI Factory with NVIDIA, providing 24/7 monitoring, reporting, upgrades and patching for the stack, supported by Dell's technical teams.

Michael Dell, Chairman and Chief Executive Officer, Dell Technologies, said: "We're on a mission to bring AI to millions of customers around the world. Our job is to make AI more accessible. With the Dell AI Factory with NVIDIA, enterprises can manage the entire AI lifecycle across use cases, from training to deployment, at any scale."

Jensen Huang, Founder and Chief Executive Officer, NVIDIA, added: "AI factories are the infrastructure of modern industry, generating intelligence to power work across healthcare, finance and manufacturing. With Dell Technologies, we're offering the broadest line of Blackwell AI systems to serve AI factories in clouds, enterprises and at the edge."

Jeff Clarke, Chief Operating Officer, Dell Technologies, stated: "It has been a non-stop year of innovating for enterprises, and we're not slowing down. We have introduced more than 200 updates to the Dell AI Factory since last year. Our latest AI advancements — from groundbreaking AI PCs to cutting-edge data centre solutions — are designed to help organisations of every size to seamlessly adopt AI, drive faster insights, improve efficiency and accelerate their results."

Christopher M. Sullivan, Director of Research and Academic Computing for the College of Earth, Ocean and Atmospheric Sciences at Oregon State University, said: "We leverage the Dell AI Factory for our oceanic research at Oregon State University to revolutionise and address some of the planet's most critical challenges. Through advanced AI solutions, we're accelerating insights that empower global decision-makers to tackle climate change, safeguard marine ecosystems and drive meaningful progress for humanity."
