Surrounded by AI Devices that Do Everything from Flying to Farming, NVIDIA Launches Jetson TX2

Powerful factory robots. Commercial drones. Smart cameras. NVIDIA Tuesday unveiled the NVIDIA Jetson TX2, a credit card-sized platform that puts AI computing to work in the world all around us.

Over the past five years, the mobile revolution has brought more and more devices online, Deepu Talla, vice president and general manager of the Tegra business at NVIDIA, told an audience of press, analysts and robotics enthusiasts at an event in San Francisco.

At the same time, GPU-based deep learning has given computers the ability to understand — and react to — the data streaming in from all these devices in uncanny new ways. This happens both through training, which creates smart systems, and through inference, which lets those systems react intelligently to the world around them in real time.

“We’re seeing a lot of this inference not just in the cloud, but also moving towards the edge, whether it’s a robot, or a drone or a security camera,” Talla said. “And Jetson is our platform for doing inference and AI computing at the edge.”

Talla spoke surrounded by a dozen examples of how Jetson is already bringing AI to connected devices that can be found rolling, crawling and flying through homes, factories, farms and more.

LTaaS: Lettuce Thinning as a Service

These devices are created by companies ranging from startups with just a handful of engineers to Cisco, which showed off its new Cisco Spark Board, a tool that lets teams dispersed around the globe collaborate in real time.

A few highlights:

  • Startup Blue River Technologies offers “lettuce thinning as a service” to farmers in the Salinas Valley that allows them to manage the year-round lettuce crop in one of the nation’s most productive agricultural regions.
  • VIMOC uses Jetson as part of its hardware and software platform to integrate data from a wide array of different sensors to help manage complex buildings, such as parking garages.
  • Enroute demonstrated how it’s using Jetson TX1 to create autonomous search and rescue drones that can bring payloads of up to 20 pounds to where they’re needed most.
  • Fellow Robots is using Jetson to power a fleet of robots that help manage inventory, or even help customers find exactly what they need in sprawling big box stores.

The Jetson TX2 joins the Jetson TX1 and TK1 products for embedded computing. Jetson is an open platform, so it's accessible to anyone putting advanced AI to work "at the edge," in devices in the world all around us.

Double team: Jetson TX1, left, and Jetson TX2, right.

Jetson TX2 doubles the performance of its predecessor, or it can run at more than twice the power efficiency while drawing less than 7.5 watts of power.

So Jetson TX2 can run larger, deeper neural networks on edge devices. That will bring more accuracy and speed to devices tackling jobs such as image classification, navigation and speech recognition.

Availability

Our NVIDIA Jetson TX2 Developer Kit can be preordered today for $599 in the United States and Europe and will begin shipping March 14. It will be available in other regions in the coming weeks. The Jetson TX2 module will be available in the second quarter for $399 in quantities of 1,000 or more.

For more details, including system specs and software, see our Jetson TX2 page, or Tuesday’s press release.

Comments

  • Pixellus

    Can it run the planned Windows on ARM?

  • Gary_Rainville

    Thanks for your question. We currently support Linux for Tegra. The majority of our customers are very satisfied with Linux, so we’re not planning to support other operating systems for Jetson at this time.

  • Pixellus

    Thanks

  • Siby Jose

    Is it possible to do inference using Deep Learning models trained using Google’s TensorFlow on the Jetson TX2?

  • Brian Caulfield

    Yes, you can install deep learning frameworks like Caffe, Torch, PyTorch, Theano, and TensorFlow on Jetson TX2 with cuDNN acceleration.
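
    As a minimal sketch of what that looks like in practice, here's how one might run inference with PyTorch once it's installed on the module. The tiny network below is a hypothetical stand-in for a real pretrained model (e.g. a torchvision ResNet), and the code falls back to CPU when no CUDA device is available, so it runs anywhere:

    ```python
    # Sketch: single-frame inference with PyTorch. The model here is a toy
    # stand-in; a real deployment would load pretrained weights instead.
    import torch
    import torch.nn as nn

    # Use the Jetson's GPU when present, otherwise fall back to CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 10),          # 10 hypothetical output classes
    ).to(device).eval()

    # One 224x224 RGB frame, as a camera pipeline might supply.
    frame = torch.rand(1, 3, 224, 224, device=device)

    with torch.no_grad():          # inference only: no gradients needed
        logits = model(frame)
        predicted_class = logits.argmax(dim=1).item()
    ```

    On Jetson, cuDNN handles the convolution under the hood once the tensors live on the `cuda` device; the Python code itself is unchanged between desktop and edge.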

  • Pixellus

    Just a couple more questions:
    1. Why are there still Cortex-A57 cores and not the newer Cortex-A72?
    2. How do the custom Denver 2 cores work: in hardware decode mode or in software dynamic code translation mode?