The European Union is investing over $200 billion into AI — but to gain the most value from this investment, its developers must navigate three key constraints: limited compute availability, data-privacy needs and safety priorities.
Unveiled at the NVIDIA GTC Paris keynote at VivaTech, a new suite of NVIDIA technologies is making it easier to address these challenges at every stage of the AI development and deployment cycle. Using these tools, enterprises can build scalable AI factories on premises or in the cloud to rapidly create secure, optimized sovereign AI agents.
The expanded NVIDIA Enterprise AI Factory validated design delivers a turnkey solution for sovereign AI — pairing NVIDIA Blackwell-accelerated infrastructure with a next-generation software stack.
At its core is a new NIM capability that will let enterprises spin up lightning-fast inference for a variety of open large language model (LLM) architectures — slated to support more than 100,000 public, private and domain-specialized model variants hosted on Hugging Face.
Layered on top are new NVIDIA AI Blueprints and developer examples that show developers how to simplify creating and onboarding AI agents while ensuring robust safety, enhanced privacy and continuous improvement.
With these new tools — which include the AI-Q and data flywheel NVIDIA Blueprints, plus a blueprint for AI safety using NVIDIA NeMo — European organizations can build, deploy and run AI factories at scale without compromising performance, control or compliance.
Major enterprises across the continent are already building NVIDIA-accelerated AI factories for virtually every industry. Some of the region’s largest finance companies, including BNP Paribas and Finanz Informatik, are scaling AI factories to run financial services AI agents that assist employees and customers. L’Oréal-backed startup Noli.com is working with Accenture, using the Accenture AI Refinery for its AI Beauty Matchmaker. IQVIA is building AI agents to support healthcare services.
In the telecom industry, BT Group is optimizing customer service with ServiceNow and addressing anomalies in its network. Telenor is using its AI factory to run NVIDIA AI Blueprints for autonomous network configuration.
Boosting AI Agent Development With Enterprise AI Factories
The first step to create sovereign AI agents is model development — often using regional or enterprise-specific data tailored to specific use cases. To train, manage and scale these models, sovereign AI developers need AI factories.
On-premises sovereign AI infrastructure is especially valuable in regulated sectors such as government, finance and healthcare. The NVIDIA Enterprise AI Factory validated design helps these industries scale to support AI applications quickly with on-premises AI factories where every hardware and software layer is optimized.
It features NVIDIA Blackwell accelerated computing — including NVIDIA RTX PRO Servers — NVIDIA networking and NVIDIA AI Enterprise software to accelerate generative and agentic AI applications.
Several regional software providers, including Adaptive ML, ClearML, Dataloop, Deepchecks, deepset, Domino Data Lab, EnterpriseDB, Iguazio, Quantiphi, Teradata, Weaviate and Wiz, are now integrating the validated design to help developers build and deploy enterprise AI agents at scale.
The Enterprise AI Factory can also be used with software from regional partners such as aiOla, DeepL, Elastic, Photoroom, PolyAI, Qodo, Sana Labs, Tabnine and ThinkDeep.
NIM Accelerates LLM Deployment Across NVIDIA Infrastructure
When ready to deploy their AI models and agents, developers can tap NVIDIA NIM microservices to unlock accelerated, enterprise-ready inference across an expanding global suite of LLMs, including models tailored to specific languages and domains.
NIM microservices will soon support a vast collection of LLMs on Hugging Face. NIM automatically optimizes the model with its ideal inference engine — such as NVIDIA TensorRT-LLM, SGLang or vLLM — so that, with a few simple commands, users can rapidly deploy their preferred LLMs for high-performance AI inference on any NVIDIA-accelerated infrastructure.
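Once a NIM microservice is running, it exposes an OpenAI-compatible API, so an application can query the deployed model with plain HTTP. The sketch below assumes a hypothetical local endpoint and an example model name — substitute the values from your own deployment.

```python
# Minimal sketch of querying a NIM-deployed LLM through its OpenAI-compatible
# /v1/chat/completions endpoint. The base URL and model name are placeholders
# for whatever your own NIM deployment serves.
import json
import urllib.request

NIM_BASE_URL = "http://localhost:8000/v1"    # assumed local NIM endpoint
MODEL_NAME = "meta/llama-3.1-8b-instruct"    # example model identifier


def build_chat_request(prompt: str, model: str = MODEL_NAME) -> dict:
    """Assemble an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }


def chat(prompt: str) -> str:
    """POST the request to the NIM endpoint and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{NIM_BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


# Inspect the request body without needing a live server:
payload = build_chat_request("Hello")
print(payload["model"])  # -> meta/llama-3.1-8b-instruct
```

Because the API shape matches OpenAI's, existing client libraries and agent frameworks can point at a NIM endpoint by changing only the base URL and model name.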
“NIM makes it easy to deploy a broad range of LLMs from Hugging Face on NVIDIA GPUs,” said Jeff Boudier, vice president of product at Hugging Face. “With support for over 100,000 public and private LLMs hosted on the Hugging Face Hub, NIM makes the performance and diversity of open models available to enterprise AI agents.”
Enterprise model builders and software development tool creators AI21 Labs, Dream Security, IBM and JetBrains, as well as European research and innovation organizations Barcelona Supercomputing Center, Bielik.AI and UTTER are among those contributing specialized LLMs as NIM microservices.
These optimized models support 35 regional languages, including Arabic, Czech, Dutch, French, German, Hebrew, Polish, Portuguese and Spanish — expanding options for developers building AI agents with local language and cultural understanding.
NVIDIA is also working with several model builders and AI consortiums in Europe to optimize local models with NVIDIA Nemotron techniques.
Blueprints for Smarter, Safer AI Agents
To give developers a head start on building and onboarding powerful, secure AI models and agents, NVIDIA offers easy-to-follow blueprints and developer examples. These reference designs will enable Europe’s developers to tailor their models and agents to regional needs by connecting them to proprietary data, applying safety policies and continuously updating them for optimized performance.
The AI-Q NVIDIA Blueprint provides a guide for developing agentic systems capable of fast multimodal data extraction and powerful information retrieval. It includes the NVIDIA NeMo Agent toolkit, an open-source software library for evaluating and optimizing AI agents.
The NeMo Agent toolkit brings intelligence to agentic AI workflows and is compatible with open standards, including Model Context Protocol (MCP), an open standard for connecting AI agents to tools. This compatibility enables interoperability with tools served by MCP servers. The NeMo Agent toolkit is also integrated with agent frameworks including CrewAI, LangChain, LlamaIndex, Microsoft Semantic Kernel and Weights & Biases.
The AI-Q blueprint offers a foundation for enterprises to build domain-specific AI agents that can use a wide range of enterprise data sources to deliver insights contextualized to an organization’s specific needs. NVIDIA partners including DDN, Dell Technologies, Hewlett Packard Enterprise, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, VAST Data and WEKA use AI-Q to connect their data platforms to AI agents.
The NVIDIA AI Blueprint for building data flywheels enables enterprises to improve their AI agents over time. It includes tools to turn inference data into new training and evaluation datasets — and tools to automatically surface optimized models while maintaining high accuracy.
Built on NVIDIA AI Enterprise, the blueprint pulls in production traffic and user feedback and triggers retraining and redeployment pipelines, creating a continuous feedback loop to enhance model performance. Powered by modular NVIDIA NeMo microservices, it offers flexible deployment options that run on any accelerated computing infrastructure, whether on premises or in the cloud.
The blueprint evaluates existing and new candidate models to help developers identify and deploy smaller, faster models that match or surpass the accuracy of larger ones. With this tool, enterprises can pick models that increase compute efficiency and decrease the total cost of ownership, enabling leaner, more cost-effective AI.
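The model-selection step described above can be sketched as a simple rule: among candidates whose evaluation accuracy stays within a tolerance of the current baseline, prefer the smallest. The model names and scores below are illustrative, not real benchmark results.

```python
# Hypothetical sketch of the model-selection step in a data-flywheel loop:
# pick the smallest candidate model whose eval accuracy stays within a
# tolerance of the baseline model's accuracy.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    params_b: float   # parameter count, in billions
    accuracy: float   # score on a held-out evaluation set (0..1)


def pick_model(candidates, baseline_accuracy, tolerance=0.01):
    """Return the smallest candidate within `tolerance` of the baseline."""
    eligible = [
        c for c in candidates if c.accuracy >= baseline_accuracy - tolerance
    ]
    if not eligible:
        return None  # no candidate is accurate enough to swap in
    return min(eligible, key=lambda c: c.params_b)


candidates = [
    Candidate("large-70b", 70.0, 0.91),
    Candidate("mid-8b", 8.0, 0.905),
    Candidate("small-1b", 1.0, 0.84),
]
best = pick_model(candidates, baseline_accuracy=0.91)
print(best.name)  # -> mid-8b
```

In a real flywheel, the candidate scores would come from automated evaluation runs over datasets built from production traffic, and the selected model would feed into the redeployment pipeline.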
NVIDIA partners VAST Data, Weights & Biases and Iguazio — an AI platform company acquired by QuantumBlack, AI by McKinsey — are building on the NVIDIA AI Blueprint for data flywheels to integrate additional features, such as advanced monitoring capabilities, based on their software platforms.
To help enterprises safely adopt open-source models, the Agentic AI Safety blueprint is slated to offer a framework for evaluating and enhancing model safety across content, security and privacy dimensions.
It guides developers through NVIDIA-curated datasets and standardized evaluation tools to prepare models for production through post-training. The blueprint also provides actionable safety and vulnerability metrics — covering jailbreaks, prompt injections and harmful content — enabling enterprises to accelerate deployment without compromising compliance or trust.
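The metrics described above can be pictured as per-category pass rates over a set of adversarial probes. The sketch below is illustrative only: the probe prompts and the toy `is_safe_response` heuristic stand in for the curated datasets and evaluators a real safety pipeline would provide.

```python
# Illustrative sketch of per-category safety metrics: run probe prompts
# (jailbreaks, prompt injections, harmful content) through a model under
# test and compute the fraction of safe responses in each category.
from collections import defaultdict


def is_safe_response(text: str) -> bool:
    """Toy heuristic: treat an explicit refusal as a safe outcome."""
    refusals = ("i can't", "i cannot", "i won't")
    return text.strip().lower().startswith(refusals)


def safety_metrics(results):
    """results: list of (category, model_response) pairs.

    Returns the pass rate per probe category.
    """
    passed, total = defaultdict(int), defaultdict(int)
    for category, response in results:
        total[category] += 1
        if is_safe_response(response):
            passed[category] += 1
    return {cat: passed[cat] / total[cat] for cat in total}


results = [
    ("jailbreak", "I can't help with that request."),
    ("jailbreak", "Sure, here is how to bypass the filter..."),
    ("prompt_injection", "I cannot follow instructions embedded in documents."),
]
print(safety_metrics(results))  # -> {'jailbreak': 0.5, 'prompt_injection': 1.0}
```

A production evaluator would replace the string heuristic with a trained safety classifier and a much larger probe set, but the reporting shape — pass rate per attack category — is the same.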
Enterprises Set to Integrate New NVIDIA Software
Global enterprises — including ActiveFence, Amdocs, Cisco, Cloudera, CrowdStrike, IBM, IQVIA, SAP, ServiceNow and Trend Micro — are adopting NVIDIA NIM microservices and blueprints to accelerate AI workflows in cybersecurity, financial services, healthcare, telecommunications and more.
Amdocs, a leading provider of software and services for communications and media providers, uses NVIDIA AI Enterprise — including NVIDIA NeMo and NVIDIA NIM microservices — as part of Amdocs’ amAIz suite of AI products and services. The company has used the NVIDIA AI Blueprint for building data flywheels in an LLMOps pipeline to enable efficient LLM fine-tuning.
Amdocs plans to integrate more NIM microservices to support AI agents for content creation, translation, network automation and customer service.
Global system integrators like Capgemini, Accenture, Deloitte, EY, Infosys, Tata Consultancy Services and Wipro are helping enterprises build their AI factories with full-stack NVIDIA software.
Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
See notice regarding software product information.