An iconic puzzle from the 1980s — the Rubik’s Cube — is being used to bridge a very contemporary gap between deep learning and advanced mathematics.
Pierre Baldi, the computer science professor overseeing the effort by a team of researchers at the University of California, Irvine, described this gap as the greatest conundrum facing AI today.
“People complain that deep learning is a black box, and that they don’t know what the network is doing,” Baldi said. In this case, though, the opposite was true: “We could see that the network was learning mathematics.”
The multicolored Rubik’s Cube has, of course, compelled and confounded people since its invention in 1974 by Hungarian sculptor and architecture professor Ernő Rubik.
The research team’s discovery — that a deep learning model can be used to teach a machine how to do math, in this case the branch of abstract algebra known as group theory — is what Baldi called a “small step in the grand challenge of AI.”
Not the Original Goal
That wasn’t what the researchers set out to do. Rather, they were looking to build a deep learning model that could solve the Rubik’s Cube without any human help, much the way earlier models have mastered the games of chess and Go.
They did this by teaching it to approach the cube as a child might.
Starting with a solved puzzle, the model first took one move backward before solving it. Then it took two moves back and solved it, then three moves back, and so on. This forced the algorithm to learn a little more on each attempt. Baldi likens it to learning golf by starting with tap-in putts, then moving farther from the hole as accuracy improves.
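This backward-from-solved curriculum can be sketched in a few lines of Python. The snippet below is illustrative only: `apply_move` and `solved_state` stand in for whatever cube simulator is used, and the function names are ours, not the team’s.

```python
import random

# The 12 face turns of a 3x3 cube: each face clockwise or counterclockwise.
MOVES = [face + direction for face in "UDLRFB" for direction in ("", "'")]

def scramble(apply_move, solved_state, k):
    """Walk k random moves backward from the solved cube.

    `apply_move(state, move)` and `solved_state` are assumed hooks into
    a cube simulator; any implementation with that shape will do.
    """
    state = solved_state
    for _ in range(k):
        state = apply_move(state, random.choice(MOVES))
    return state

def curriculum(apply_move, solved_state, max_depth, per_depth):
    """Yield (state, depth) training pairs of gradually increasing
    difficulty: all 1-move scrambles first, then 2-move scrambles,
    and so on — the golf-putt progression described above."""
    for k in range(1, max_depth + 1):
        for _ in range(per_depth):
            yield scramble(apply_move, solved_state, k), k
```

Because every training state is generated by walking backward from the goal, the model always begins with positions it can almost solve, and difficulty ramps up only as competence does.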
In a recently published paper detailing the work, the team gave the reinforcement learning algorithm it developed the moniker “autodidactic iteration.” It was able to solve 100 percent of scrambled cubes in an average of 30 moves, or as quickly as the fastest human solvers.
The model was trained on dozens of machines running NVIDIA GPUs, most of them TITANs, in conjunction with the CUDA parallel computing platform, the TensorFlow machine learning framework and the Keras neural network API.
Baldi estimates that the GPUs sped things up by a factor of 5 to 10, and that there’s no limit to his team’s ability to put more GPUs to good use in furthering its deep learning research.
“We’re starving for GPUs,” he said. “They are essential to this work.”
An Advancement Ripe with Possibilities
Baldi said that the Rubik’s Cube presents a unique deep learning challenge in that it has only one correct configuration and quintillions of incorrect alternatives. And that’s just working with a traditional three-by-three Rubik’s Cube with nine squares on each side.
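“Quintillions” is no exaggeration. The number of reachable configurations of a standard 3x3x3 cube follows from a well-known counting argument, which a few lines of Python can reproduce:

```python
import math

# Reachable states of a standard 3x3x3 Rubik's Cube:
#   8 corner pieces can be permuted (8!) and 7 oriented freely (3^7),
#   12 edge pieces can be permuted (12!) and 11 flipped freely (2^11);
#   dividing by 2 removes the corner/edge permutation-parity mismatches.
states = math.factorial(8) * 3**7 * math.factorial(12) * 2**11 // 2
print(states)  # → 43252003274489856000 (about 4.3 * 10**19)
```

Only one of those roughly 43 quintillion configurations is correct, which is what makes the reward signal so sparse for a learning algorithm.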
Solving larger versions of the puzzle represents the next frontier for the team’s work. They’re interested in seeing how the autodidactic iteration approach works with four-by-four and five-by-five cubes. But first the team has to tweak its approach to take on significant added complexity.
“If you slow down by a factor of two, that’s fine,” Baldi said. “But if you slow down to the speed of continental drift, that’s a problem.”
Baldi also sees opportunities to use the approach of starting with the solution to teach the autodidactic iteration model to master other games.
He believes the work has potential applications in other areas of math beyond group theory, especially math above a high school level, which he said AI has struggled with to date.
If Baldi’s team has any say about it, that struggle could soon become a thing of the past. In the meantime, solving the biggest, baddest puzzles will do.
Try your hand at solving a digital Rubik’s Cube, or watch the UCI team’s deep learning algorithm solve it, at http://deepcube.igb.uci.edu/.