How AI Is Breathing New Life Into Digital SLR Cameras
To the throngs who’ve put down their complicated digital SLR cameras in favor of easy-to-use smartphones, Ryan Stout has a message: Not so fast.
Stout is the founder, CEO and lead developer of Arsenal, a six-person, Montana-based startup that’s using computer vision to intelligently automate the abundant capabilities of DSLR cameras.
Stout admits he himself had become a smartphone-dependent photographer. But several years ago, he decided to pull his camera out and start taking night photos. From setting shutter speeds, apertures and ISOs to choosing just the right filter, he was quickly reminded that photography is an intensely technical undertaking.
He suspected he wasn’t alone in wanting a simpler photography experience. And having earlier founded a company that used recurrent neural networks for natural language processing, he thought he might have a solution: machine learning.
Enter the Smart Camera
Arsenal was born when he decided he could tap AI to essentially create a camera that was just as smart as a smartphone, but took significantly better pictures.
“It became pretty apparent that I could take some of the recent computer vision advances and create something that would help people in the early stages of photography take better photos,” he said. “What we’re doing is taking technology that’s made it into phones but hasn’t made it into DSLR cameras yet.”
Just how much pent-up demand there was for simpler DSLR camera settings became clear when Arsenal introduced itself on Kickstarter earlier this year. The company hoped to generate $50,000 in pre-sales; Stout said it passed that goal in seven hours and ultimately raised $2.6 million on the back of 16,000 pre-orders, making it the most successful camera fundraiser in Kickstarter’s history. (Arsenal has continued to accept pre-orders at $175 each on Indiegogo.)
How Arsenal Works
Slimmer than a deck of cards, the Arsenal device slips into a camera’s hot shoe mount and plugs into the USB port. Then, the companion mobile app takes over, literally seeing what the camera’s sensors are picking up and enabling the photographer to establish ideal settings with the slide of a finger — and without having to know anything about shutter speed, aperture or white balance.
Arsenal trains its model on a dataset of 200,000 professional photos, effectively teaching the device what constitutes a great shot and which settings will recreate it. The main training occurs on machines powered by NVIDIA GeForce GTX GPUs, while the deeper training runs on a Tesla GPU-enabled Amazon instance, with CUDA and cuDNN handling the deep learning workloads.
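The core idea of learning settings from well-shot photos can be sketched, very roughly, as looking up the reference photo whose scene most resembles the current one and borrowing its settings. This is a deliberate simplification of Arsenal’s actual deep learning models; the feature values, settings table and function name below are all hypothetical:

```python
import numpy as np

# Each row is a compact descriptor of a reference photo (stand-ins for
# learned features, e.g. brightness, contrast, estimated subject motion).
reference_features = np.array([
    [0.9, 0.4, 0.1],   # bright, static landscape
    [0.2, 0.7, 0.8],   # dim scene, fast-moving subject
    [0.1, 0.3, 0.0],   # night sky, static
])

# Settings used for each reference photo: (shutter seconds, f-stop, ISO).
reference_settings = [
    (1 / 500, 8.0, 100),
    (1 / 1000, 2.8, 3200),
    (20.0, 2.8, 1600),
]

def recommend_settings(scene_features):
    """Return the settings of the nearest reference photo."""
    dists = np.linalg.norm(reference_features - scene_features, axis=1)
    return reference_settings[int(np.argmin(dists))]
```

A dark, static scene would match the night-sky reference and inherit its long exposure; a production system would instead regress settings with a trained network, but the mapping from scene to settings is the same shape.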
Once ready, the fixed models are loaded onto the device, where all inference occurs locally; firmware updates keep devices running the latest models.
While the Arsenal team will work to constantly improve its models, the focus at this point is on shoring up the software running both the app and the device, as well as addressing quality assurance, logistics and customer support. The company aims to get its device into the hands of early buyers by January.
“Most of the big work is ramping up to ship 17,000 of these,” said Stout. “It takes a long time to scale up to a large production run.”
Judging from their enthusiastic response, the early buyers of Arsenal probably won’t mind waiting.
Watch how Arsenal works.