NVIDIA DGX Spark and DGX Station Power the Latest Open-Source and Frontier Models From the Desktop

by Chris Marriott

Open-source AI is accelerating innovation across industries, and NVIDIA DGX Spark and DGX Station are built to help developers turn innovation into impact.

NVIDIA today unveiled at the CES trade show how the DGX Spark and DGX Station deskside AI supercomputers let developers harness the latest open and frontier AI models locally, from 100-billion-parameter models on DGX Spark to 1-trillion-parameter models on DGX Station.

Powered by the NVIDIA Grace Blackwell architecture, with large unified memory and petaflop-level AI performance, these systems give developers new capabilities to develop locally and easily scale to the cloud.

Advancing Performance Across Open-Source AI Models

Thanks to continual advancements in model optimization and collaborations with the open-source community, a breadth of highly optimized open models that previously required a data center to run can now be accelerated at the desktop on DGX Spark and DGX Station.

Preconfigured with NVIDIA AI software and NVIDIA CUDA-X libraries, DGX Spark provides powerful, plug-and-play optimization for developers, researchers and data scientists to build, fine-tune and run AI.

DGX Spark provides a foundation for all developers to run the latest AI models at their desks, while DGX Station enables enterprises and research labs to run more advanced, large-scale frontier AI models. Both systems support the latest frameworks and open-source models — including the recently announced NVIDIA Nemotron 3 models — right from the desktop.

The NVIDIA Blackwell architecture powering DGX Spark includes the NVFP4 data format, which compresses AI models by up to 70% and boosts performance with minimal loss of accuracy.
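The exact NVFP4 encoding is NVIDIA's own (FP4 values with shared per-block scales); the toy sketch below uses signed 4-bit integers with a per-block float scale purely to illustrate how block-scaled 4-bit quantization trades a small reconstruction error for a roughly 70% storage reduction versus FP16. The function names and block size are illustrative, not part of any NVIDIA API.

```python
# Simplified illustration of block-scaled 4-bit quantization.
# NOTE: this is NOT the actual NVFP4 format -- NVFP4 uses FP4 values with
# shared per-block scales; signed 4-bit integers and a float scale are used
# here only to show the storage/accuracy trade-off.

def quantize_block(values, levels=7):
    """Quantize a block of floats to signed 4-bit ints (-7..7) plus one scale."""
    scale = max(abs(v) for v in values) / levels or 1.0
    q = [max(-levels, min(levels, round(v / scale))) for v in values]
    return scale, q

def dequantize_block(scale, q):
    return [scale * v for v in q]

block = [0.12, -0.5, 0.33, 0.9, -0.07, 0.41, -0.88, 0.05]
scale, q = quantize_block(block)
restored = dequantize_block(scale, q)

# Storage: 8 FP16 values = 16 bytes; 8 x 4-bit ints + a 1-byte scale = 5 bytes,
# i.e. roughly a 70% reduction.
max_err = max(abs(a - b) for a, b in zip(block, restored))
print(q)        # [1, -4, 3, 7, -1, 3, -7, 0]
print(max_err)  # bounded by half a quantization step (scale / 2)
```

The per-block scale is what keeps the error small: each group of values is normalized to its own maximum before rounding, so outliers in one block don't destroy precision in another.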

NVIDIA’s collaborations with the open-source software ecosystem, such as its work with llama.cpp, are pushing performance further, delivering a 35% average performance uplift when running state-of-the-art AI models on DGX Spark. Llama.cpp has also added a quality-of-life upgrade that speeds up LLM loading times.

DGX Station, with the GB300 Grace Blackwell Ultra superchip and 775GB of coherent memory, can run models of up to 1 trillion parameters at FP4 precision — giving frontier AI labs cutting-edge compute capability for large-scale models from the desktop. Supported models include Kimi-K2 Thinking, DeepSeek-V3.2, Mistral Large 3, Meta Llama 4 Maverick, Qwen3 and OpenAI gpt-oss-120b.
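Rough back-of-envelope arithmetic shows why FP4 precision is what makes a 1-trillion-parameter model fit in that memory budget (this ignores the KV cache, activations and runtime overhead, which consume part of the remaining headroom):

```python
# Back-of-envelope: why FP4 lets a ~1T-parameter model fit deskside.
# Rough arithmetic only -- ignores KV cache, activations and runtime overhead.

params = 1_000_000_000_000        # 1 trillion parameters
bits_per_param = 4                # FP4 precision

weight_gb = params * bits_per_param / 8 / 1e9
print(f"{weight_gb:.0f} GB of weights at FP4")    # 500 GB -- fits in 775GB

fp16_gb = params * 16 / 8 / 1e9
print(f"{fp16_gb:.0f} GB of weights at FP16")     # 2000 GB -- would not fit
```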

“NVIDIA GB300 is typically deployed as a rack-scale system,” said Kaichao You, core maintainer of vLLM. “This makes it difficult for projects like vLLM to test and develop directly on the powerful GB300 superchip. DGX Station changes this dynamic. By delivering GB300 in a compact, single-system form factor deskside, DGX Station enables vLLM to test and develop GB300-specific features at a significantly lower cost. This accelerates development cycles and makes it easy for vLLM to continuously validate and optimize against GB300.”

“DGX Station brings data-center-class GPU capability directly into my room,” said Jerry Zhou, community contributor to SGLang. “It is powerful enough to serve very large models like Qwen3-235B, test training frameworks with large model configurations and develop CUDA kernels with extremely large matrix sizes, all locally without relying on cloud racks. This dramatically shortens the iteration loop for systems and framework development.”

NVIDIA will be showcasing the capabilities of DGX Station live at CES, demonstrating:

  • LLM pretraining that moves at a blistering 250,000 tokens per second.
  • A large data visualization of millions of data points in category clusters. The topic modeling workflow uses machine learning techniques and algorithms accelerated by the NVIDIA cuML library.
  • Visualizing massive knowledge databases with high accuracy using Text to Knowledge Graph and Llama 3.3 Nemotron Super 49B.
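The clustering step at the heart of a topic-modeling workflow like the one above is typically an algorithm such as k-means, which cuML provides in a GPU-accelerated, scikit-learn-style form (e.g. `cuml.cluster.KMeans`). The dependency-free sketch below illustrates the algorithm itself on toy 2D points standing in for document embeddings; it is not the cuML API.

```python
# Minimal k-means (Lloyd's algorithm) sketch of the clustering step behind a
# topic-modeling pipeline. Production workflows would use GPU-accelerated
# estimators such as cuml.cluster.KMeans, which mirrors the scikit-learn API;
# this pure-Python version only illustrates the algorithm.

def kmeans(points, centers, iters=10):
    """Assign each point to its nearest center, then recompute each center
    as the mean of its assigned points; repeat for a fixed number of passes."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(
                range(len(centers)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])),
            )
            clusters[nearest].append(p)
        centers = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Two obvious groups of 2D points (stand-ins for document embeddings):
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(pts, centers=[(0.0, 0.0), (1.0, 1.0)])
print(centers)  # one center near (0.1, 0.1), one near (5.0, 5.0)
```

On millions of points the assignment step is embarrassingly parallel, which is exactly the part a GPU library like cuML accelerates.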

Expanding AI and Creator Workflows

DGX Spark and Station are purpose-built to support the full AI development lifecycle, from prototyping and fine-tuning to inference and data science, for a wide range of industry-specific AI applications in healthcare, robotics, retail, creative workflows and more.

For creators, the latest diffusion and video generation models, including Black Forest Labs’ FLUX.2 and FLUX.1, and Alibaba’s Qwen-Image, now support NVFP4, reducing memory footprint and accelerating performance. And Lightricks’ new LTX-2 video model is now available for download, including NVFP8 quantized checkpoints for NVIDIA GPUs, delivering quality on par with the top cloud models.

Live CES demonstrations highlight how DGX Spark can offload demanding video generation workloads from creator laptops, delivering 8x acceleration compared with a top-of-the-line MacBook Pro with M4 Max, freeing local systems for uninterrupted creative work.

The open-source RTX Remix modding platform is expected to soon empower 3D artists and modders to use DGX Spark to create faster with generative AI. Additional CES demonstrations showcase how a mod team can offload all of their asset creation to DGX Spark, freeing up their PCs to mod without pauses and enabling them to view in-game changes in real time.

AI coding assistants are also transforming developer productivity. At CES, NVIDIA is demonstrating a local CUDA coding assistant powered by NVIDIA Nsight on DGX Spark, which allows developers to keep source code local and secure while benefiting from AI-assisted enterprise development.

Industry Leaders Validate the Shift to Local AI

As demand grows for secure, high-performance AI at the edge, DGX Spark is gaining momentum across the industry.

Software leaders, open-source innovators and global workstation partners are adopting DGX Spark to power local inference, agentic workflows and retrieval-augmented generation without the complexity of centralized infrastructure.

Their perspectives underscore how DGX Spark is enabling faster iteration, greater control over data and IP, and new, more interactive AI experiences on the desktop.

At CES, NVIDIA is demonstrating how to use the processing power of DGX Spark with the Hugging Face Reachy Mini robot to bring AI agents into the real world.

“Open models give developers the freedom to build AI their way, and DGX Spark brings that power straight to the desktop,” said Jeff Boudier, vice president of product at Hugging Face. “When you connect it to Reachy Mini, your local AI agents become embodied and gain a voice of their own. They can see you, listen to you and respond with expressive motion — turning powerful AI into something you can truly interact with.”

Hugging Face and NVIDIA have released a step-by-step guide to building an interactive AI agent using DGX Spark and Reachy Mini.

“DGX Spark brings AI inference to the edge,” said Ed Anuff, vice president of data and AI platform strategy at IBM. “With OpenRAG on Spark, users get a complete, self-contained RAG stack in a box — extraction, embedding, retrieval and inference.”
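The "RAG stack in a box" described above chains extraction, embedding, retrieval and inference. The sketch below illustrates only the retrieval step common to any such pipeline, using toy bag-of-words vectors and cosine similarity so it stays self-contained; a real stack like OpenRAG uses learned embedding models and a vector store, and the document strings here are invented examples.

```python
# Minimal sketch of the retrieval step in a RAG pipeline: embed the documents
# and the query, rank by cosine similarity, and hand the best chunk to the LLM
# as context. Bag-of-words vectors stand in for a real embedding model.
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "DGX Spark is a deskside AI supercomputer",
    "The cafeteria menu changes every Tuesday",
    "Unified memory lets large models run locally",
]
index = [(d, embed(d)) for d in docs]

query = "what memory lets models run locally"
qv = embed(query)
best_doc, _ = max(index, key=lambda item: cosine(qv, item[1]))
print(best_doc)  # the unified-memory document

# The retrieved chunk becomes context for the inference step:
prompt = f"Context: {best_doc}\n\nQuestion: {query}"
```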

“For organizations that need full control over security, governance and intellectual property, NVIDIA DGX Spark brings petaflop-class AI performance to JetBrains customers,” said Kirill Skrygan, CEO of JetBrains. “Whether the customers prefer cloud, on-premises or hybrid deployments, JetBrains AI is designed to meet them where they are.”

TRINITY, an intelligent, self-balancing, three-wheeled single-passenger vehicle designed for urban transportation, will be on display at CES, using DGX Spark as its onboard AI brain for real-time inference of open-source vision language model workloads.

“TRINITY represents the future of micromobility — where humans, vehicles, and AI agents work together seamlessly,” said will.i.am. “With NVIDIA DGX Spark as its AI brain, TRINITY delivers conversational, goal-tracking workflows that transform how people interact with mobility in connected cities. It’s brains on wheels, designed from the agent up.”

Accelerating AI Developer Adoption

DGX Spark playbooks help developers rapidly get started with real-world AI projects. At CES, NVIDIA is expanding this library with six new playbooks and four major updates, spanning topics such as the latest NVIDIA Nemotron 3 Nano model, robotics training, vision language models, fine-tuning AI models using two DGX Spark systems, genomics and financial analysis.

As DGX Station becomes available later this year, more playbooks will be added for developers to get started with NVIDIA GB300 systems.

NVIDIA AI Enterprise software support is now available for DGX Spark and GB10 systems from manufacturer partners. The software includes libraries, frameworks and microservices for AI application development and model installation, as well as operators and drivers for GPU optimization, enabling fast and reliable AI engineering and deployment. Licenses are expected to be available at the end of January.

Availability

DGX Spark and manufacturer partner GB10 systems are available from Acer, Amazon, ASUS, Dell Technologies, GIGABYTE, HP Inc., Lenovo, Micro Center, MSI and PNY.

DGX Station will be available from ASUS, Boxx, Dell Technologies, GIGABYTE, HP Inc., MSI and Supermicro starting in spring 2026.

Dive deeper into DGX Spark in this technical blog.

See notice regarding software product information.