From AI to Zzzz: MIT, Mass General Aim Deep Learning at Study of Sleep Stages

by Scott Martin

Sleeplessness is a national epidemic. It’s hard to solve and complicated by how difficult it is to study.

One in three U.S. adults doesn’t get enough sleep, according to the Centers for Disease Control and Prevention, which recommends at least seven hours of sleep per night.

Chest straps, nasal probes and head electrodes are among the traditional sensors routinely attached to patients whose sleep patterns need monitoring. These uncomfortable methods can themselves cause sleeplessness, rendering the collected data unrepresentative.

Hoping to provide a better night’s sleep for these patients, researchers from MIT and Massachusetts General Hospital are studying the use of AI and a Wi-Fi-like signal that monitors a person without any sensors attached.

“We don’t really know enough about sleep because we aren’t able to continuously monitor sleep,” said Dina Katabi, a professor of electrical engineering and computer science at MIT and the director of the institute’s wireless center.

The paper describing this effort was authored by Katabi; Matt Bianchi, chief of the Division of Sleep Medicine at Mass General; and Tommi Jaakkola, a professor of electrical engineering and computer science at MIT, along with MIT graduate students Mingmin Zhao and Shichao Yue.

Thanks to a special wireless device installed in their bedrooms, people in the study were able to sleep at home. The device measures signals bouncing off the subjects, and sends data back to the researchers via the cloud.

Tapping into how a person in a room affects radio frequencies, the researchers can translate measurements of pulse, breathing rate and movement into sleep stages: light sleep, deep sleep, rapid eye movement (REM) or wakefulness.

The research uses a new neural network design called a conditional adversarial architecture, which processes the radio frequency signal to eliminate information irrelevant to sleep — such as characteristics specific to an individual or a room — and focuses on what matters for distinguishing sleep stages. This design allows the authors to achieve much higher accuracy than previous studies of sleep stages using radio signals.
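The core idea can be sketched with a toy objective. In the sketch below, a stage predictor and a subject discriminator share features from an encoder; the encoder is trained to minimize stage-prediction loss while *maximizing* the discriminator's loss, discouraging subject-specific information. All weights, dimensions and the loss weight `lam` here are hypothetical illustrations, not values from the paper:

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def nll(logits, label):
    """Negative log-likelihood (cross-entropy) of the true label."""
    return -np.log(softmax(logits)[label])

# Hypothetical encoder output for one 30-second window of RF signal.
features = np.array([0.5, -0.3, 0.8])

# Two heads share the encoder's features:
#   - a stage predictor over 4 classes (awake, light, deep, REM)
#   - a subject discriminator (here 3 hypothetical subjects) that the
#     encoder is trained to fool, stripping subject-specific information
W_stage = np.array([[0.1, 0.2, 0.3],
                    [0.0, -0.1, 0.2],
                    [0.4, 0.1, -0.2],
                    [0.2, 0.0, 0.1]])
W_subj = np.array([[0.3, -0.2, 0.1],
                   [0.1, 0.1, 0.0],
                   [-0.1, 0.2, 0.3]])

stage_loss = nll(W_stage @ features, label=2)  # true stage: deep sleep
subj_loss = nll(W_subj @ features, label=0)    # true subject id

lam = 0.5  # adversarial weight (hypothetical)
# The discriminator alone minimizes subj_loss; the encoder and stage
# predictor minimize stage_loss minus the discriminator's loss, so the
# encoder is rewarded for making the subject unrecognizable.
encoder_loss = stage_loss - lam * subj_loss
```

In the real system the heads are deep networks trained by gradient descent, but the alternating minimize/maximize structure of the objective is the same.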

In fact, MIT’s wireless research boosts prediction accuracy to nearly 80 percent compared with 64 percent for radio frequency methods previously in use.

Data Crunched

The researchers studied 100 nights of sleep from 25 people, labeling each 30-second epoch with a sleep stage and keeping the subjects used for training separate from those used for testing.
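Splitting by subject rather than by night matters: it prevents a person's idiosyncratic signal patterns from leaking into the test set. A minimal sketch of such a subject-wise split (function name and parameters are illustrative, not from the paper):

```python
import random

def subject_wise_split(nights, test_fraction=0.2, seed=0):
    """Split recordings so no subject appears in both train and test.

    `nights` is a list of (subject_id, recording) pairs; all of a
    subject's nights go entirely to one side of the split.
    """
    subjects = sorted({s for s, _ in nights})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_fraction))
    test_subjects = set(subjects[:n_test])
    train = [(s, r) for s, r in nights if s not in test_subjects]
    test = [(s, r) for s, r in nights if s in test_subjects]
    return train, test

# Toy example: 5 subjects with 4 nights each.
nights = [(s, f"night-{s}-{n}") for s in range(5) for n in range(4)]
train, test = subject_wise_split(nights)
train_subjects = {s for s, _ in train}
test_subjects = {s for s, _ in test}
```

Evaluating on unseen people is what makes the reported accuracy a fair estimate of how the system generalizes.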

The system’s cloud-based service collects the signals remotely and runs the models. It takes just a few seconds to analyze a full night of sleep, making the approach commercially viable, Katabi said.

“The deep learning model can analyze the signal and spit out the sleep stages of the person,” she said.

MIT’s researchers used an NVIDIA TITAN X GPU for model training and for inference on the back-end cloud service. They also used the NVIDIA cuDNN library and the TensorFlow deep learning framework.

Research Implications

Advances in the study of sleep stages offer wide-reaching applications. Sleep stage detection can be used in monitoring for depression, for example, in which REM sleep tends to occur earlier in the night. This is one area of focus for drugmakers.

Studies of Alzheimer’s disease focus on sleep stages, particularly whether people are getting deep sleep and how that affects the disease. Likewise for those with Parkinson’s disease.

“Sleep is a problem for Parkinson’s patients because it has implications for the progression of the disease. Sleep problems can also be an early sign of Parkinson’s disease,” Katabi said.

Other areas of interest include detecting sleep apnea events, in which breathing becomes obstructed during sleep. Physicians can also monitor cardiac patients and those with multiple sclerosis by watching for changes in sleep patterns.