
Ankit Patel
Ankit Patel is a Senior Director at NVIDIA, leading developer engagement for the company’s ecosystem of libraries, compilers and developer tools. Ankit joined NVIDIA in 2011 as a GPU product manager, helping pioneer GPU-accelerated virtual machines and pushing the boundaries of server GPUs, including NVIDIA's early 8-GPU server appliance. He later transitioned to software, leading product management for NVIDIA's OptiX library for ray tracing and AI denoising, a key development that leveraged RT Cores and deepened his connection to deep learning. Prior to NVIDIA, he held product management roles at Matrox Video and Blackmagic Design. Working on full systems for more than two decades has given him perspective and expertise at the intersection of silicon and software. Ankit holds a bachelor's degree in computer science from Concordia University and an MBA from Cornell University, and currently serves on the PyTorch Governing Board.
Open for Development: NVIDIA Works With Cloud-Native Community to Advance AI and ML
Cloud-native technologies have become crucial for developers to create and implement scalable applications in dynamic cloud environments. This week at KubeCon + CloudNativeCon North America 2024, one of the most-attended…
NVIDIA Releases Open Synthetic Data Generation Pipeline for Training Large Language Models
NVIDIA today announced Nemotron-4 340B, a family of open models that developers can use to generate synthetic data for training large language models (LLMs) for commercial applications across healthcare, finance,…
Small and Mighty: NVIDIA Accelerates Microsoft’s Open Phi-3 Mini Language Models
NVIDIA announced today its acceleration of Microsoft’s new Phi-3 Mini open language model with NVIDIA TensorRT-LLM, an open-source library for optimizing large language model inference when running on NVIDIA GPUs…
Wide Open: NVIDIA Accelerates Inference on Meta Llama 3
NVIDIA today announced optimizations across all its platforms to accelerate Meta Llama 3, the latest generation of the large language model (LLM). The open model combined with NVIDIA accelerated computing…
Shining Brighter Together: Google’s Gemma Optimized to Run on NVIDIA GPUs
NVIDIA, in collaboration with Google, today launched optimizations across all NVIDIA AI platforms for Gemma — Google’s state-of-the-art new lightweight 2 billion– and 7 billion-parameter open language models that can…
NVIDIA Launches New, Updated Accelerated Computing Libraries: NVIDIA cuOpt, cuQuantum, cuNumeric, cuGraph, Modulus, Morpheus, NeMo Megatron, Riva, RAPIDS, DOCA and Dozens More
NVIDIA has introduced 65 new and updated software development kits — including libraries, code samples and guides — that bring improved features and capabilities to data scientists, researchers, students and…