NVIDIA Blackwell Now Generally Available in the Cloud
AI reasoning models and agents are set to transform industries, but delivering their full potential at scale requires massive compute and optimized software. The “reasoning” process involves multiple models, generating… Read Article
Fast, Low-Cost Inference Offers Key to Profitable AI
Businesses across every industry are rolling out AI services this year. For Microsoft, Oracle, Perplexity, Snap and hundreds of other leading companies, using the NVIDIA AI inference platform — a… Read Article
NVIDIA and Microsoft Showcase Blackwell Preview, Omniverse Industrial AI and RTX AI PCs at Microsoft Ignite
NVIDIA and Microsoft today unveiled product integrations designed to advance full-stack NVIDIA AI development on Microsoft platforms and applications. At Microsoft Ignite, Microsoft announced the launch of the first cloud… Read Article
Peak Training: Blackwell Delivers Next-Level MLPerf Training Performance
Generative AI applications that use text, computer code, protein chains, summaries, video and even 3D graphics require data-center-scale accelerated computing to efficiently train the large language models (LLMs) that power… Read Article
What’s the ROI? Getting the Most Out of LLM Inference
Large language models and the applications they power enable unprecedented opportunities for organizations to get deeper insights from their data reservoirs and to build entirely new classes of applications. But… Read Article
NVIDIA and Oracle to Accelerate AI and Data Processing for Enterprises
Enterprises are looking for increasingly powerful compute to support their AI workloads and accelerate data processing. The efficiency gained can translate to better returns for their investments in AI training… Read Article
NVIDIA Blackwell Sets New Standard for Generative AI in MLPerf Inference Debut
As enterprises race to adopt generative AI and bring new services to market, the demands on data center infrastructure have never been greater. Training large language models is one challenge,… Read Article
NVIDIA to Present Innovations at Hot Chips That Boost Data Center Performance and Energy Efficiency
A deep technology conference for processor and system architects from industry and academia has become a key forum for the trillion-dollar data center computing market. At Hot Chips 2024 next… Read Article
Scaling to New Heights: NVIDIA MLPerf Training Results Showcase Unprecedented Performance and Elasticity
The full-stack NVIDIA accelerated computing platform has once again demonstrated exceptional performance in the latest MLPerf Training v4.0 benchmarks. NVIDIA more than tripled the performance on the large language model… Read Article