by Gary Rainville

[Editor’s note: Nearly three dozen companies participated in the Emerging Companies Summit, held during NVIDIA’s GPU Technology Conference in May. Below is one in a series of company profiles showcasing how startups are innovating with GPU technology.]

Think of Cortexica Vision Systems as turning the camera on your smartphone into a virtual encyclopedia. The device no longer just captures an image but can access a galaxy of information associated with it.

Point your phone or tablet at a bottle in your local wine store and get served up tasting notes, the price and what food to pair it with. Snap a picture of an ad and learn more about the product, where to buy it and maybe get a discount coupon. Take a shot of a movie or TV show and find out more about it and where to rent or buy it.

Cortexica CEO Iain McCready presents at the Emerging Companies Summit

Cortexica makes this possible by mimicking the brain’s visual cortex, according to Iain McCready, CEO of the London-based startup. When you look at something, your visual cortex doesn’t read the image from one side to the other or scan it from top to bottom. Instead, it rapidly picks out many points of interest.

Cortexica’s algorithms mimic that scattershot process by creating a fingerprint of a target image’s points of interest and then matching it against databases of stored images created by retailers or advertisers. People can snap items with their mobile devices and receive, via the cloud, information associated with any matches.
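To make the idea concrete, here is a minimal, illustrative sketch of fingerprint-style image matching. This is not Cortexica's proprietary algorithm; it simply demonstrates the general pattern the article describes: pick high-contrast "points of interest," hash the small patch around each one into a fingerprint, and score a query against a database of stored fingerprints by counting shared hashes. The function names and thresholds are assumptions for illustration.

```python
# Illustrative sketch only, not Cortexica's actual pipeline: fingerprint an
# image by hashing patches around its strongest points of interest, then
# match a query fingerprint against a database of stored fingerprints.
import numpy as np

def fingerprint(image, patch=4, top_k=50):
    """Return a set of patch hashes at the image's strongest gradient points."""
    gy, gx = np.gradient(image.astype(float))
    strength = np.hypot(gx, gy)
    # Zero out borders so every selected point has a full patch around it.
    strength[:patch, :] = strength[-patch:, :] = 0
    strength[:, :patch] = strength[:, -patch:] = 0
    # Take the top_k strongest points of interest.
    flat = np.argsort(strength, axis=None)[::-1][:top_k]
    hashes = set()
    for idx in flat:
        y, x = np.unravel_index(idx, image.shape)
        block = image[y - patch:y + patch, x - patch:x + patch]
        hashes.add(hash(block.tobytes()))  # crude local "fingerprint"
    return hashes

def match_score(query, stored):
    """Fraction of the query's interest-point hashes found in storage."""
    return len(query & stored) / max(len(query), 1)

# Demo: match a query image against a two-entry "retailer database".
rng = np.random.default_rng(42)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
other = rng.integers(0, 256, (64, 64), dtype=np.uint8)

db = {"product_a": fingerprint(img), "product_b": fingerprint(other)}
query = fingerprint(img)
best = max(db, key=lambda name: match_score(query, db[name]))
print(best)  # product_a
```

A production system like Cortexica's would use descriptors robust to lighting, viewpoint, and partial occlusion rather than exact patch hashes, and would run the extraction in parallel on the GPU, which is where the article's point about GPU acceleration comes in.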

The beauty of this is that a company’s own brand or product becomes the focus of the search. People can satisfy their curiosity by scanning the actual item they see, potentially replacing links to websites and QR codes, the crossword puzzle-like images often seen in magazine ads.

The people behind Cortexica’s technology have used GPUs to accelerate their on-the-fly image recognition since before the company’s founding in 2009. Like the brain, GPUs handle many tasks simultaneously. With more powerful GPUs increasingly built into mobile devices, images can be processed and information can be delivered even more quickly.

Given this greater processing power, one area Cortexica is looking to develop is video, where a person could scan an entire room and get information about potentially any of the objects present.

McCready says some novel applications of Cortexica’s technology have been suggested over the years – from a detergent company seeking to build a stain finder to naturalists wanting a tree and plant identifier.

Another suggestion was to apply the technology to identifying fish. Who knows? The next time you pull up Charlie at the end of your fishing line, you just might ask him to say “cheese.”

Download and watch the Cortexica presentation at the Emerging Companies Summit last May on the GTC On-Demand website.

Comments

  • James Peters

    This is a really exciting prospect. I’m curious as to how the approach differs from Google Goggles, which, at the moment, doesn’t seem to pick up anything other than company logos shot straight on.

  • bonsai_in_SF

    Begin rant… Google Goggles had promise. But as with most Google products, they’ve failed to execute fully on an incredible vision…End rant.

    Hope to see Cortexica’s technology in action sometime soon! 

  • Will Park

    Hi James. That’s a good question. While I can’t speak for Cortexica, you might want to check out the company’s product showcase. You can see how their tech works here:

  • Jeffrey Ng

Google has attempted to “boil the ocean” with Goggles and, even with its vast image database, cannot guarantee to recognise every incoming image and return an appropriate call to action.

    Cortexica’s technology and approach start from a point that works, i.e. our own visual cortex. Drawing inspiration from a wonderful piece of computational architecture shaped by millions of years of evolution, we have reverse engineered a comprehensive visual technology stack that copes with real-life imaging conditions and provides a lot of invariance to environmental factors such as lighting, partial views, change in viewpoint, complex backgrounds, etc. 

    The GPU, and CUDA in particular, has helped us loads to create a vision system that is close to real-time and modular to tackle many different verticals.

    Lastly, we have created a platform that targets verticals and image silos first, to ensure a good user experience.

  • rndtechnologies786

    Thank you for this