AI is transforming the broadcast industry by enhancing the way content is created, distributed and consumed — but integrating the technology can be challenging.
Launched this week in limited availability, NVIDIA Holoscan for Media is a software-defined, AI-enabled platform that helps developers easily integrate AI into their live media applications and allows media companies to run live media pipelines on the same infrastructure as AI.
NVIDIA RTX AI workstations and PCs, powered by NVIDIA GPUs for real-time graphics processing and AI computing, provide an ideal foundation for developing these applications.
At the IBC broadcast and media tech show in Amsterdam, NVIDIA partners including Adobe, Blackmagic Design and Topaz Labs will showcase the latest RTX AI-powered video editing tools and technologies powering live media advancements.
NVIDIA Holoscan for Media: Building the Future of Live Production
Building a robust AI software stack for application development in live media is an intricate process that requires substantial expertise and resources.
This technical complexity, coupled with the need for large amounts of high-quality data and the difficulty of scaling pilot programs to production-level performance, often prevents these initiatives from reaching full deployment. Additionally, traditional software development is tied to dedicated hardware, further limiting innovation and making upgrades cumbersome.
Addressing these challenges, NVIDIA Holoscan for Media integrates with NVIDIA’s extensive suite of AI software development kits (SDKs), letting developers incorporate advanced AI capabilities into their applications and focus on building more sophisticated, intelligent media experiences. Media companies can then seamlessly connect those applications to live video pipelines running on top of the platform.
Another typical challenge in live media application development is inefficiency in deployment. Developers often find themselves needing to create separate builds for different deployment types, whether on premises, in the cloud or at the edge. This increases costs and can extend development timelines. Developers must also allocate resources to build additional infrastructure services, such as authentication and timing protocols, further straining budgets.
Holoscan for Media’s cloud-native architecture enables applications to run from anywhere. Applications developed for the cloud, edge or on-premises deployments can run across environments, eliminating the need for separate builds.
Holoscan for Media is available on premises today, with cloud and edge deployments coming soon. The platform also includes Precision Time Protocol for audio-video synchronization in live broadcasts and Networked Media Open Specifications for seamless communication between applications — simplifying the management of complex systems.
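As a concrete illustration of the NMOS side, the hedged Python sketch below queries an IS-04 registry for the senders an application could discover and connect to. The registry address is a placeholder and the snippet is not part of Holoscan for Media itself; adjust the API version to match your deployment.

```python
# Minimal sketch, not Holoscan for Media code: list the senders registered
# with an NMOS IS-04 Query API so an application can discover live streams.
# The registry URL below is hypothetical.
import requests

QUERY_API = "http://nmos-registry.example.local/x-nmos/query/v1.3"  # placeholder

def list_senders():
    """Return (label, transport) pairs for every registered NMOS sender."""
    resp = requests.get(f"{QUERY_API}/senders", timeout=5)
    resp.raise_for_status()
    return [(s.get("label", "<unnamed>"), s.get("transport", "unknown"))
            for s in resp.json()]

if __name__ == "__main__":
    for label, transport in list_senders():
        print(f"{label}: {transport}")
```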
Enhancing Development With RTX AI PCs and Workstations
NVIDIA RTX AI PCs and workstations complement Holoscan for Media by offering a robust foundation for developing immersive media experiences.
The CUDA ecosystem available on RTX AI PCs and workstations offers access to a vast array of NVIDIA SDKs and tools optimized for media and AI workloads. This allows developers to build applications that can seamlessly transition from workstation to deployment environments, ensuring that their creations are both robust and scalable.
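As a small, hedged illustration of that portability (not an NVIDIA sample), the snippet below selects whichever CUDA device is available, whether on an RTX workstation during development or a server GPU in deployment, and falls back to CPU otherwise.

```python
# Illustrative only: the same CUDA-backed code path runs on an RTX
# workstation during development and on a server GPU in deployment.
import torch

def pick_device() -> torch.device:
    """Prefer the first CUDA GPU; fall back to CPU if none is present."""
    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"Using {props.name} ({props.total_memory // 2**20} MiB)")
        return torch.device("cuda:0")
    print("No CUDA device found; falling back to CPU")
    return torch.device("cpu")

# A mock 1080p video frame allocated on whatever device was selected.
frame = torch.rand(1, 3, 1080, 1920, device=pick_device())
```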
NVIDIA AI Enterprise offers further enhancements by putting a comprehensive suite of AI software, tools and frameworks optimized for NVIDIA GPUs into the hands of enterprise developers who require secure, stable and scalable production environments for AI applications. This enterprise-grade AI platform includes popular frameworks like TensorFlow, PyTorch and RAPIDS for streamlined deployment.
Using NVIDIA AI Enterprise, developers can build advanced AI capabilities such as computer vision, natural language processing and recommendation systems directly into their media applications. And they can prototype, test and deploy sophisticated AI models within their media workflows.
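As a hedged sketch of that kind of capability, the snippet below uses PyTorch and torchvision (two of the packaged frameworks, not NVIDIA AI Enterprise-specific APIs) to run a stock object detector over a decoded video frame, the sort of computer-vision step a media application might add. The model choice and score threshold are illustrative.

```python
# Illustrative computer-vision step for a media app: detect objects in a frame.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
device = "cuda" if torch.cuda.is_available() else "cpu"
model = fasterrcnn_resnet50_fpn(weights=weights).eval().to(device)

def detect(frame: torch.Tensor, score_threshold: float = 0.8):
    """frame: float tensor of shape (3, H, W) scaled to [0, 1]."""
    with torch.no_grad():
        result = model([frame.to(device)])[0]
    keep = result["scores"] > score_threshold
    labels = [weights.meta["categories"][int(i)] for i in result["labels"][keep]]
    return list(zip(labels, result["scores"][keep].tolist()))

# Example: run on a synthetic 1080p frame (stand-in for a decoded video frame).
print(detect(torch.rand(3, 1080, 1920)))
```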
Video Editors and Enthusiasts — Rejoice!
Holoscan for Media will be on display at IBC, running Sept. 13-16. At the Dell Technologies booth 7.A45, attendees can witness live demonstrations that showcase how to seamlessly transition from application development to live deployment.
A number of NVIDIA partners will spotlight their latest RTX AI-powered video editing tools and technologies at the show.
Blackmagic Design’s DaVinci Resolve 19 Studio is now available, introducing AI features that streamline editing workflows:
- IntelliTrack AI makes it fast and easy to stabilize footage during the editing process. It can be used in DaVinci Resolve’s Fairlight tool to track on-screen subjects and automatically generate audio panning as they move across 2D and 3D spaces. With the AI-powered feature, editors can quickly pan or move audio across the stereo field, controlling the voice positions of multiple actors in the mix environment.
- UltraNR is an AI-accelerated denoise mode in DaVinci Resolve’s spatial noise reduction palette. Editors can use it to dramatically reduce digital noise — undesired color or luminance fluctuations that obscure detail — from a frame while maintaining image clarity. Editors can also combine the tool with temporal noise reduction for even more effective denoising in images with motion, where fluctuations can be more noticeable.
- RTX Video Super Resolution uses AI to sharpen low-resolution video. It can detect and remove compression artifacts, greatly enhancing lower-quality video.
- RTX Video HDR uses an AI-enhanced algorithm to remap standard dynamic range video into the vibrant HDR10 color space. This lets video editors create high dynamic range content even if they don’t have cameras capable of recording in HDR.
IntelliTrack and UltraNR get a performance boost when running on NVIDIA RTX PCs and workstations. With NVIDIA TensorRT, they run up to 3x faster on a GeForce RTX 4090 laptop GPU than on a MacBook Pro with an M3 Max.
All DaVinci Resolve AI effects are accelerated on RTX GPUs by TensorRT. The Resolve update includes GPU acceleration for its Beauty, Edge Detect and Watercolor effects, doubling their performance on NVIDIA GPUs.
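Resolve’s TensorRT integration is internal to the application, but as a hedged sketch of what TensorRT acceleration involves in general, the snippet below converts an ONNX model into a serialized engine with the TensorRT 8.x Python API. The file paths are placeholders.

```python
# Illustrative sketch (TensorRT 8.x Python API): build a serialized TensorRT
# engine from an ONNX model so inference runs through optimized GPU kernels.
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path: str, engine_path: str) -> None:
    builder = trt.Builder(LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(f"ONNX parse failed: {parser.get_error(0)}")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # half precision where the GPU supports it
    engine = builder.build_serialized_network(network, config)

    with open(engine_path, "wb") as f:
        f.write(engine)

build_engine("effect_model.onnx", "effect_model.engine")  # placeholder paths
```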
The update also introduces NVIDIA’s H.265 Ultra-High-Quality (UHQ) mode, which utilizes NVENC to boost HEVC encoding efficiency by 10%.
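Resolve toggles that mode internally. For reference, NVENC HEVC encoding in general can be driven from FFmpeg, as in this hedged sketch; whether the UHQ tuning itself is exposed as an FFmpeg option depends on the FFmpeg build and driver, so only long-standing hevc_nvenc options appear here and the file names are placeholders.

```python
# Hedged example: invoke FFmpeg's hevc_nvenc encoder (NVENC-backed HEVC).
# This shows NVENC HEVC encoding generally, not Resolve's UHQ toggle.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-i", "input.mov",            # placeholder source clip
    "-c:v", "hevc_nvenc",         # NVIDIA NVENC HEVC encoder
    "-preset", "p7",              # slowest / highest-quality NVENC preset
    "-rc", "vbr", "-b:v", "20M",  # variable bitrate targeting 20 Mbps
    "output.mp4",
], check=True)
```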
Pixel-Perfect Partners: Topaz Video AI and Adobe After Effects
This year, Topaz Labs introduced an Adobe After Effects plug-in for Video AI, a leading solution for video upscaling and frame interpolation. The plug-in integrates the full range of enhancement and frame interpolation models directly into the industry-standard motion graphics software.
It also allows users to access AI upscaling tools in their After Effects compositions, providing greater flexibility and faster compositing without the need to transfer large files between different tools.
A standout feature of Topaz Video AI is its ability to create dramatic slow-motion videos with Topaz’s Apollo AI model, which can convert footage to up to 16x slow motion.
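To put that figure in perspective, here is a quick back-of-the-envelope sketch (not Topaz code) of how many frames a 16x conversion asks the interpolation model to synthesize when playback stays at the original frame rate.

```python
# Back-of-the-envelope: frame count for 16x slow motion at constant playback fps.
def interpolation_budget(src_frames: int, slowdown: int = 16) -> tuple[int, int]:
    """Return (total output frames, newly synthesized frames)."""
    out_frames = src_frames * slowdown
    return out_frames, out_frames - src_frames

# A 5-second clip at 24 fps (120 frames) slowed 16x:
total, synthesized = interpolation_budget(120)
print(total, synthesized)  # 1920 frames total, 1800 of them generated by the model
```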
The plug-in also excels at upscaling, ideal for integrating low-resolution assets into larger projects without compromising quality. It includes all of Topaz’s enhancement models, like the Rhea model for 4x upscaling. Check out Adobe’s blog to learn more about After Effects plug-ins and how to use them.
Built for speed, the plug-in is accelerated on RTX GPUs by NVIDIA TensorRT, boosting AI performance by up to 70%. A future update to Video AI will introduce further TensorRT performance improvements and efficiency optimizations, including a significant reduction in the number of AI model files required as part of the app installation.
With the rapid integration of AI, the future of broadcasting is brighter and more innovative than ever.