The cars weren’t the only things ready to growl at this year’s Los Angeles, Paris and Shanghai auto shows.
Luxury automaker Jaguar surprised attendees with a wall-sized digital display that featured a life-like, interactive — and vocal — jaguar that responded to their every move.
Jaguar wanted a display that would showcase how responsive its latest car model can be. So global brand experience agency Imagination and The Mill, a premier London-based visual effects studio, created a photorealistic, 3D version of the Jaguar logo for the global auto shows.
The team at The Mill used NVIDIA Quadro GPUs to render the jaguar in real time. And with the help of wrnch, a leader in AI-based markerless motion capture, they brought a new level of interactivity to the mixed reality experience that left viewers dazzled.
Quadro Shows Off Cat-Like Reflexes
The aim was to create a stunning display that people could interact with as naturally as possible. But since the installation would be moving around the world from one auto show to another, they also needed a simple system that still delivered powerful graphics performance without requiring multiple PCs for multiple screens.
On a single PC with NVIDIA Quadro GPUs, everything in the environment — from the car models and trees in the background to the actual jaguar cat — was rendered in real time using Unreal Engine.
Taking advantage of Quadro’s NVIDIA Mosaic features, The Mill could connect and run the experience across three 65-inch, 4K displays while getting the maximum graphics performance needed.
“Running a real-time experience with interactive elements has its challenges, but the NVIDIA Quadro cards gave us the stability and performance we needed to create a full, real-time environment,” said Martin Thelwell, lead engineer at The Mill. “It gives us a huge confidence boost, knowing we can achieve real-time rendering and still get the polished, visual look we wanted.”

The next step was building a connection between the jaguar and the viewer by having it react to attendees. That’s when the team turned to wrnch, a Montreal-based AI company, to add real-time motion and gesture tracking to the experience.
AI Brings Audience Interaction to the Next Level
To help the jaguar recognize the movements of attendees, wrnch developed the visual cortex of the jaguar using wrnchAI, a high-performance deep learning engine that extracts human motion from video feeds.
For this display, a webcam was plugged into one of the screens to function as the jaguar’s “eyes.” From that video feed, the AI system would extract the viewer’s pose in each frame so the jaguar could follow their movements.
The AI gesture recognition enabled the jaguar to observe a person’s proximity and position, so it knew exactly where to look and how to respond.
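The proximity-and-position step comes down to simple geometry. The sketch below is illustrative only, not wrnch’s actual code: it assumes the pose engine reports a normalized bounding box for the detected person, then converts that box into a horizontal gaze angle and a rough closeness cue for the rendered character.

```python
import math
from dataclasses import dataclass


@dataclass
class PersonBox:
    # Normalized frame coordinates in [0, 1]; (0, 0) is the top-left corner.
    x_min: float
    y_min: float
    x_max: float
    y_max: float


def gaze_angle_deg(box: PersonBox, horizontal_fov_deg: float = 70.0) -> float:
    """Horizontal angle the character should turn toward, in degrees.

    0 means the viewer is centered; negative is toward the camera's left.
    The field-of-view value is an assumed webcam parameter.
    """
    center_x = (box.x_min + box.x_max) / 2.0
    # Map the [0, 1] frame position onto the camera's field of view.
    return (center_x - 0.5) * horizontal_fov_deg


def proximity(box: PersonBox, reference_height: float = 0.9) -> float:
    """Rough closeness cue in (0, 1]: 1.0 means the person fills the frame.

    Uses apparent height, which grows as the viewer approaches the camera.
    """
    height = box.y_max - box.y_min
    return min(height / reference_height, 1.0)


# A viewer standing slightly left of center, fairly close to the screen.
viewer = PersonBox(x_min=0.2, y_min=0.1, x_max=0.5, y_max=0.85)
print(round(gaze_angle_deg(viewer), 1))  # -10.5
print(round(proximity(viewer), 2))       # 0.83
```

The character controller can feed the angle straight into a head-turn animation and use the proximity cue to pick between idle, curious and close-up behaviors.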
With inference accelerated by NVIDIA TensorRT, wrnch taught the system to recognize certain commands, including two unique gestures: clapping and raised arms. Each gesture triggered a specific reaction from the jaguar, so audiences could interact with it.
A second PC with NVIDIA graphics was used for gesture tracking, ensuring the system had its own GPU with enough horsepower to get the high performance and high frame rate needed for the display to work smoothly.
“This is an excellent example of how computer graphics and AI can work together to create new mixed reality experiences — it’s the future of interactive digital characters,” said Paul Kruszewski, CEO of wrnch. “The beauty is that both the AI inferencing and rendering is done on NVIDIA GPUs. It is a visually beautiful way for people to understand the power and potential of AI.”
Learn more about the Jaguar project in the video below: