He may never command the fame of Linus Torvalds, the father of Linux, but fellow Finn Antti Honkela recently helped clear a big barrier to digital privacy.
The associate professor of data science at the University of Helsinki works on differential privacy, a method for guaranteeing that a computation based on personal data will keep that data private. In March, the emerging field made MIT Technology Review’s list of top 10 breakthrough technologies poised to have a profound impact.
Pieces of the technology are already widely used in smartphones and cloud computing. The 2020 U.S. census will even employ it.
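As a toy illustration of the core idea (the function name and parameters below are illustrative, not from the project): a count query over personal records can be released with an epsilon-differential-privacy guarantee by adding calibrated Laplace noise.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Release a differentially private count via the Laplace mechanism.

    The count of records matching `predicate` has sensitivity 1
    (adding or removing one person changes it by at most 1), so noise
    drawn from Laplace(0, 1/epsilon) yields an epsilon-DP release.
    """
    true_count = sum(1 for x in data if predicate(x))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(seed=0)
ages = [34, 29, 51, 47, 62, 38]
# Noisy answer to "how many people are 40 or older?" (true count: 3)
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
```

Smaller epsilon means more noise and a stronger guarantee; the performance cost Honkela mentions arises when this kind of noise calibration is woven into every step of model training.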
“Differential privacy rests on a strong theoretical foundation, so if you follow the algorithm you get privacy guarantees, but to date the performance cost has been quite significant,” said Honkela.
“Now we could close this gap,” he said of the first project in a broad, multi-year collaboration between NVIDIA and AI researchers in Finland.
100x Speedup for Differential Privacy
Honkela and Niki Loppi, a solutions architect at NVIDIA, demonstrated a way to accelerate differentially private training 100x by running it on GPUs.
“We often see these kinds of speedups with GPUs, but the exciting thing here was that the penalty for adding differential privacy to standard training was only 2-3x, rather than the 20x observed on CPU systems,” said Loppi.
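The penalty Loppi describes comes largely from the per-example work that differentially private training adds on top of ordinary gradient descent. A minimal sketch of a DP-SGD-style aggregation step (illustrative only, not the team's implementation):

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One differentially private gradient aggregation step.

    Each example's gradient is clipped to a maximum L2 norm, the
    clipped gradients are summed, and Gaussian noise scaled to the
    clipping bound is added before averaging. The per-example
    clipping is what makes DP training costly: it prevents the usual
    fused, batched gradient computation unless implemented carefully.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

rng = np.random.default_rng(seed=0)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # L2 norms 5.0 and 0.5
step = dp_gradient_step(grads, clip_norm=1.0, noise_multiplier=0.5, rng=rng)
```

On a GPU, the per-example clipping can be vectorized across the whole batch, which is one reason the DP overhead shrinks relative to CPU implementations.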
Their work shows how to make anonymous versions of highly valuable datasets that currently must remain private because they contain sensitive personal information. Releasing privacy-protected versions of such data would let any AI developer build much better models, accelerating the whole field.
As a follow-up, Loppi’s colleagues at NVIDIA are exploring an efficient GPU-accelerated approach to random subsampling in AI training. That work could narrow the performance gap of differentially private training even further.
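Random subsampling matters because differential privacy analyses typically assume each training example is drawn independently, a scheme known as Poisson subsampling. A minimal sketch of the idea (names are illustrative, not NVIDIA's implementation):

```python
import numpy as np

def poisson_subsample(n_examples, sample_rate, rng):
    """Poisson subsampling for differentially private training.

    Each example is included independently with probability
    `sample_rate`, so batch sizes vary from step to step. That
    independence underpins the privacy-amplification guarantees,
    but variable-size batches are awkward to run efficiently on
    GPUs, which prefer fixed shapes.
    """
    mask = rng.random(n_examples) < sample_rate
    return np.flatnonzero(mask)

rng = np.random.default_rng(seed=0)
# Expected batch size is 10,000 * 0.01 = 100, but it fluctuates each step
batch = poisson_subsample(n_examples=10_000, sample_rate=0.01, rng=rng)
```

Making this sampling step fast without weakening its statistical properties is the kind of systems problem the follow-up work targets.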
The effort was the first of many, varied projects in the collaboration between NVIDIA and two powerhouse partners in Finland. The Finnish Center for AI (FCAI) is a national effort that pools top researchers from the University of Helsinki, Aalto University and the VTT Technical Research Centre of Finland.
Finland’s national supercomputing center, known as CSC, is the other partner with NVIDIA and FCAI. It will run the group’s research projects on its 2.7-petaflops system that includes 320 NVIDIA V100 Tensor Core GPUs.
A Wide Range of AI Targets
The collaboration in Finland comes on the heels of one forged in January in Modena, Italy. They join a growing global community of NVIDIA AI Technology Centers (NVAITC) driving technology forward.
The work in Finland will tap the local partners’ broad expertise. The collaboration between AI researchers and GPU experts “is a good model,” said Honkela, a coordinating professor at FCAI.
“Obviously, researchers have to know the code, but sometimes understanding how to run this work efficiently is a specialty of its own that not all researchers have,” he said.
“Through this cooperation, we are able to boost AI research in Finland and better support local scientists already doing great work in the field,” said Simon See, senior director of NVAITC at NVIDIA.
And who knows what goodness may emerge. Honkela notes that a modern, efficient version of backpropagation, an algorithm at the heart of all neural-network training, was first published in 1970 as a master’s thesis by a University of Helsinki researcher.