Anyone who’s circled a busy parking lot or city block knows that finding an open spot can be tricky. Faded line markings. Big trucks hiding smaller cars. Other drivers on …
Turning a traditional car into an autonomous vehicle is an enormous challenge. At NVIDIA, we’re tackling the problem by building the essential blocks of autonomous driving — categorized into perception, …
Lane markings are critical guides for autonomous vehicles, providing vital context for where they are and where they’re going. That’s why detecting them with pixel-level precision is fundamentally important for self-driving cars.
Simple rule: If you can’t judge distances, you shouldn’t drive. The problem: judging distances is anything but simple. We humans, of course, have two high-resolution, highly synchronized visual sensors — our eyes — that let us gauge distances using stereo-vision processing in our brains.
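The stereo-vision principle mentioned above can be sketched in a few lines: for two rectified cameras, depth is inversely proportional to the pixel disparity between the left and right images (Z = f · B / d). This is a minimal illustration of the geometry, not NVIDIA's implementation; the focal length, baseline, and disparity values below are invented for the example.

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Depth in meters for a pixel disparity between two rectified
    stereo images: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: a feature shifted 20 px between the left and
# right images, with a 700 px focal length and 0.5 m camera baseline,
# sits about 17.5 m away.
print(depth_from_disparity(20.0, 700.0, 0.5))  # 17.5
```

Note how the formula degrades gracefully: distant objects produce small disparities, so small pixel errors translate into large depth errors at range — one reason depth estimation is harder than it looks.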
Autonomous vehicles must use computational methods and sensor data, such as a sequence of images, to figure out how an object is moving over time.
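One simple way to recover motion from an image sequence, as a hedged sketch rather than the article's actual method, is to track an object's position across frames and take finite differences. The track coordinates and frame interval below are illustrative assumptions.

```python
def estimate_velocity(positions, dt):
    """Average per-axis velocity from successive (x, y) positions
    (e.g., tracked object centroids) sampled every dt seconds."""
    steps = list(zip(positions, positions[1:]))
    vx = sum((b[0] - a[0]) / dt for a, b in steps) / len(steps)
    vy = sum((b[1] - a[1]) / dt for a, b in steps) / len(steps)
    return vx, vy

# Hypothetical track of a vehicle's centroid in meters, one sample
# per frame at 10 fps (dt = 0.1 s):
track = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
print(estimate_velocity(track, dt=0.1))  # (10.0, 5.0)
```

Real systems replace this naive differencing with filtering (e.g., a Kalman filter) to smooth out detection noise, but the core idea — position deltas over known time intervals — is the same.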
Navigating a traffic-light controlled intersection may seem routine. But when the NVIDIA BB8 autonomous test vehicle first performed that task last year, it had our engineers smiling.
Having confidence in a self-driving car’s ability to perceive and choose the correct drivable path in real time is critical. We call this path perception confidence.