Send an untrained photographer to capture the action at a children’s soccer game with a digital single-lens reflex (DSLR) camera, and the result is almost always a mess. The equipment may cost $1,500 or more, but without the expertise needed to capture an image of a moving ball – or to account for the variations in light and shade in each scene – even the most technically advanced cameras can look mighty dumb.

Smartphones need big brains, not big lenses.

The problem isn’t the camera. It’s the person behind the lens. For novices, a good camera’s manual mode is too hard to use, and its automated mode is designed to capture portraits of people – not on-field action. What’s needed are cameras programmable enough that developers can build software to take good photos in conditions that don’t occur every day. That flexibility is key when taking snapshots in the long tail of photography situations, such as soccer games, explains NVIDIA Senior Director of Research Kari Pulli.

Pulli and a team of researchers at NVIDIA are working on technologies that let more developers take advantage of the powerful application processors being built into smartphones and cameras. FCam, short for ‘Frankencamera,’ is an open-source C++ application programming interface (API) aimed at giving developers precise control over all of a camera’s parameters; it is part of a joint research project with a team led by Marc Levoy at Stanford University’s Computer Graphics Laboratory.

For example, Pulli says, FCam could make it possible for an application developer to create a ‘sport mode’ for a camera that would automatically focus on a moving object – such as a soccer ball. Or, in low-light situations, it could let a camera take photos with two different settings: one with a short exposure time, and another with a longer exposure time. One image may have more noise and the other may be blurry, but the two less-than-perfect images can be merged into a third that retains the best aspects of both.
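
The two-exposure idea can be sketched in code. The `Shot` struct below is a hypothetical stand-in for the kind of per-shot parameter set an FCam-style API exposes (the real FCam API has its own `Shot` class; the field names and units here are illustrative only):

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for an FCam-style per-shot parameter set.
// Field names and units are illustrative, not the real FCam API.
struct Shot {
    int   exposureMicroseconds = 10000; // exposure time
    float gain                 = 1.0f;  // sensor gain (ISO-like)
};

// Build the low-light burst described in the article: one short,
// motion-freezing exposure and one long, low-noise exposure.
std::vector<Shot> lowLightBurst() {
    Shot shortShot;
    shortShot.exposureMicroseconds = 5000;   // 1/200 s: sharp but noisy
    shortShot.gain = 8.0f;
    Shot longShot;
    longShot.exposureMicroseconds = 80000;   // 1/12.5 s: clean but may blur
    longShot.gain = 1.0f;
    return {shortShot, longShot};
}
```

The point of per-shot control is that both captures can be queued with different parameters in a single burst, rather than relying on whatever the camera’s auto mode picks.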

The ability to take multiple images with different exposure times also makes it possible to capture scenes that couldn’t be captured well before. Take the case of a photographer who wants a picture of a room with a window. A shot exposed to capture the details of the furniture inside the room can be merged with a second shot exposed so that the bright region outside the window is visible.
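
One simple way to merge two such exposures – not necessarily the team’s method – is to weight each pixel by how well exposed it is, favoring values near mid-gray. A minimal sketch, assuming pixel values normalized to [0, 1]:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Gaussian "well-exposedness" weight centered at mid-gray (0.5):
// near-black and near-white pixels get low weight.
double wellExposedness(double v) {
    return std::exp(-(v - 0.5) * (v - 0.5) / (2.0 * 0.2 * 0.2));
}

// Merge a dark (short-exposure) and bright (long-exposure) image,
// pixel by pixel, favoring whichever input is better exposed.
std::vector<double> fuseExposures(const std::vector<double>& dark,
                                  const std::vector<double>& bright) {
    std::vector<double> out(dark.size());
    for (size_t i = 0; i < dark.size(); ++i) {
        double wd = wellExposedness(dark[i]);
        double wb = wellExposedness(bright[i]);
        out[i] = (wd * dark[i] + wb * bright[i]) / (wd + wb + 1e-9);
    }
    return out;
}
```

In the window example, pixels that are blown out to white in the long exposure get almost no weight, so the detail outside the window comes from the short exposure, while the furniture detail comes from the long one.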

This approach, usually referred to as high-dynamic-range (HDR) imaging, involves several technical challenges, many of which have been addressed by Pulli’s team. First, they designed an algorithm that automatically selects the optimal sequence of images to capture for a specific scene. With fewer, cleverly selected images, they achieve higher-quality results than standard methods (see the first circle in the figure, below).
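
The selection problem can be illustrated with a much simpler stand-in than the team’s algorithm: given the scene’s dynamic range and the sensor’s per-shot range (both in photographic stops), space exposures so the whole range is covered with as few captures as possible.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Toy exposure-sequence picker (NOT the NVIDIA team's algorithm):
// cover sceneRangeStops of dynamic range using shots that each
// capture sensorRangeStops, spacing them by powers of two.
std::vector<double> chooseExposures(double sceneRangeStops,
                                    double sensorRangeStops,
                                    double baseExposureSeconds) {
    int shots = std::max(1, (int)std::ceil(sceneRangeStops / sensorRangeStops));
    std::vector<double> exposures;
    for (int i = 0; i < shots; ++i) {
        // Each successive shot is sensorRangeStops brighter than the last.
        exposures.push_back(baseExposureSeconds *
                            std::pow(2.0, i * sensorRangeStops));
    }
    return exposures;
}
```

A sunlit room seen through a window might span 14 stops; a sensor capturing 8 stops per shot then needs only two well-placed exposures rather than a long bracket.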

Pulli’s team also worked on removing the ‘artifacts’ caused when the camera moves between shots (see the second circle). That has long been a major problem for HDR. To solve it, they developed an algorithm that finds where each pixel “moved” across a stack of pictures. With this information, each pixel can be moved back so that the images are perfectly aligned and can be merged without generating artifacts (see the third circle).
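
The alignment idea can be illustrated in a heavily simplified 1-D form (the team’s algorithm works per pixel, in 2-D): test candidate shifts between two captures and keep the one that makes the signals agree best.

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Toy shift estimator (not the team's algorithm): find the integer
// shift s that minimizes the summed absolute difference between
// signal a and signal b displaced by s, normalized for overlap.
int estimateShift(const std::vector<int>& a, const std::vector<int>& b,
                  int maxShift) {
    int best = 0;
    long bestCost = -1;
    for (int s = -maxShift; s <= maxShift; ++s) {
        long cost = 0;
        int count = 0;
        for (size_t i = 0; i < a.size(); ++i) {
            int j = (int)i + s;
            if (j < 0 || j >= (int)b.size()) continue;
            cost += std::abs(a[i] - b[j]);
            ++count;
        }
        if (count == 0) continue;
        cost = cost * (long)a.size() / count;  // penalize small overlap
        if (bestCost < 0 || cost < bestCost) { bestCost = cost; best = s; }
    }
    return best;
}
```

Once the displacement of each region is known, the pixels can be warped back into alignment before merging, which is what removes the ghosting artifacts.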

Removing artifacts from photos.

Another effort that NVIDIA is supporting is OpenCV, a popular computer vision library. When optimized for Tegra 3 with its four ARM-based CPU cores and 12 GPU cores, OpenCV can be used to create 3D images, or build augmented reality experiences that let users interact with virtual objects layered over images of the real world, among other applications.
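
The kind of multi-core speedup a Tegra-optimized build exploits can be sketched without OpenCV at all: an image operation (here, simple thresholding) split row-wise across CPU threads. This is an illustrative sketch of the parallelization pattern, not actual OpenCV code.

```cpp
#include <cassert>
#include <thread>
#include <vector>

// Threshold an image in parallel: each thread processes an
// interleaved subset of rows, so the work splits evenly across
// the available CPU cores (e.g., Tegra 3's four).
void thresholdRows(std::vector<std::vector<int>>& image, int cutoff,
                   int numThreads) {
    std::vector<std::thread> workers;
    for (int t = 0; t < numThreads; ++t) {
        workers.emplace_back([&image, cutoff, t, numThreads] {
            for (size_t r = t; r < image.size(); r += numThreads)
                for (int& px : image[r])
                    px = (px >= cutoff) ? 255 : 0;
        });
    }
    for (auto& w : workers) w.join();
}
```

Because each thread touches disjoint rows, no locking is needed; the same pattern applies to filters, feature detection, and the other per-pixel operations a vision library runs.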

Of course, NVIDIA isn’t the only company pushing computational photography into the mainstream. Microsoft’s Kinect controller for the Xbox 360 videogame console is perhaps the most high-profile example of photographic computing. But while Kinect makes for a lively living room experience, the technology will be critical in smartphones. The small size of modern phones is at odds with making high-quality cameras, Pulli says, making computation the best way to wring a high-quality photo out of a tiny smartphone.

– Orazio Gallo contributed to this report.

Related link: OpenCV for Tegra Demo app on Google Play


  • Sagar Rawal

    Now this research truly piques my interest.

    For me, my smartphone is a better camera primarily because I carry it more often due to it being more convenient compared to my DSLRs. The camera I have on me is a better camera than the one I left at home!

    From my assessment of Nokia’s PureView technology, the amazing image quality generated from the PureView 808’s comparatively small lens (next to my DSLR) combined with their oversampling algorithms has convinced me that software can truly enhance the quality of pictures generated by phones. 

  • http://www.facebook.com/profile.php?id=680745410 Antonio Quintanilla

    Exactly, and also, you are only carrying one piece of equipment. Plus the camera occupies more space, and you can’t share your content directly from the camera; with a smartphone you can in seconds! FTW Smartphones.

  • https://twitter.com/xarinatan Alexander ypema

    You guys should create a DSLR with that tech, that’d be pretty epic, a DSLR from Canon or Nikon but have the main image processor be a cooperatively designed chip. Like a http://en.wikipedia.org/wiki/DIGIC except better and more intelligent.

  • http://www.facebook.com/profile.php?id=576756216 Aleksander Torset Eriksen

    There is no replacement for great optics.

  • giorginho eponumous

    I’m guessing the chief editor spent his weekly income on a DSLR only to get blurry images at his kid’s soccer game.
    After all, all soccer game photographers use smartphones to capture the action!

    I don’t know whether or not you believe you can change the laws of physics, but the fact remains that with equal “brains” (all kinds included) bigger lenses = better pictures.
    Try getting a photo with my Tegra 3 Nexus tablet for example. Or with my galaxy smartphone, even on a tripod, with timer and equator-like sunlight. Seriously, better processors for DSLRs would be nice, but claiming that smartphones take better pictures than DSLRs only because a rookie failed to properly use the latter is out of line. Hell, I get better motion from my mini-dv camcorder than I see on samples of the unbelievably praised Lumia 920!

    Why don’t you stick to a field which would suit you better, say, http://www.geforce.com/hardware/technology/3d-vision/supported-gpus
    and issue support for more (even older) GPUs? The 8800, 9600 and the whole 9800 series are supported but my laptop’s 9800M isn’t. Sony managed (with or without your help?) to 3D-fy a 7800 on the PS3; your writing us off is extremely annoying.

  • rndtechnologies786

    Thank you for this
