Researchers Offer Jet Lag Advice In Return For Data About Your Sleep

What time is it?
Doug Griswold Bay Area News Group/MCT via Getty Images

Companies like Google and Fitbit gather all kinds of data on how people behave. Why couldn't scientists use an app to do the same thing?

Two years ago, mathematicians at the University of Michigan released an app called Entrain to help people get over jet lag. Users enter their time zone, when they sleep and what kind of light they're exposed to, and the app gives them an ideal schedule for recovering.

If app users consent, the data get sent back to the researchers. The trade-off — useful jet lag advice in exchange for data — motivates people to send in accurate reports, says Olivia Walch, a doctoral candidate working on the project. "It's the path forward for academics," she says.
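
The paper doesn't spell out the app's data format, but each opt-in user is essentially contributing a small self-reported record. Below is a minimal sketch in Python of what one such record might look like; the field names are hypothetical and are not Entrain's actual format.

    # A hypothetical sketch of the kind of record a jet lag app might send
    # back to researchers once a user opts in. Field names are illustrative.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class SleepReport:
        user_id: str        # anonymous identifier, not a real account name
        timezone: str       # IANA time zone reported by the user
        bedtime: str        # local clock time the user says they go to bed
        wake_time: str      # local clock time the user says they wake up
        light_setting: str  # e.g. "outdoor bright" or "indoor dim"

    report = SleepReport(
        user_id="anon-0001",
        timezone="America/Detroit",
        bedtime="23:30",
        wake_time="07:00",
        light_setting="indoor dim",
    )

    # Serialized payload that could be uploaded with the user's consent.
    print(json.dumps(asdict(report), indent=2))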


The researchers collected data from more than 5,000 users of the Entrain app from 100 countries. The analysis was published last Friday in the journal Science Advances.

Among the findings: Women schedule around 30 minutes more sleep than men; people who spend time in outdoor light tend to go to bed earlier and sleep more than people who spend most of their time in indoor light; and sleep patterns converge as people age, suggesting that there's a narrower window for when they can fall and stay asleep.

The authors and others say the work shows their app can accurately collect sleep data.

The approach has potential, says Daniel Forger, professor of mathematics and computational medicine and bioinformatics at the University of Michigan and one of the authors. "You could call a million people up, most of those people won't answer your survey, and spend millions of dollars doing this," he says. "But what was so remarkable was that — for almost no amount of money and almost instantaneously — we could collect this kind of data."

Typically, sleep researchers get data from controlled lab environments or large studies with subjects reporting back on what they did, says Jamie Zeitzer of the Stanford Center for Sleep Sciences and Medicine. An app bridges the two, he says.


"This is real life, this is what's actually happening and ... especially in something like sleep, there's a dichotomy between what people remember they do ... and what they actually do," Zeitzer adds. "This fits quite nicely into that gap and helps us understand actual behavioral patterns."

Other researchers wonder about generalizations drawn from the data. Among them: Diane Lauderdale, chair of the department of public health sciences at the University of Chicago and a sleep researcher, who says that while the patterns in this paper are plausible, she's not sure the data support some of the findings.

Her concern is that people who use the app — they have a smartphone, agree to send data back and presumably travel quite a bit — may not be representative of the people in their countries.

"There's a general challenge as we move into big data sources about how to weigh the attractiveness of using these ... to answer questions ... that we have not been able to answer and the real limitation that we don't really have control over, or knowledge about, exactly who we're getting the data from," she says. "It's not just unique to this study."

Michigan's Walch agrees, and she says she has spent nights stressing out over this potential selection bias. She points out that the patterns they describe match what sleep researchers have previously established in more controlled studies.

"The very first wave of data analysis we did, I was almost distraught," she says. "A lot of our things are confirmatory, and I came to realize, 'No, this is great that they're confirmatory of these smaller studies with fewer people, because it tethers us to reality.' But then, stepping back, it's still a problem."

She says she hopes the spread of the technology and improvements in its ease of use will help.

Ida Sim, co-director of biomedical informatics at the University of California, San Francisco Clinical and Translational Sciences Institute, shares Lauderdale's concern about selection bias and adds that big data researchers have to go out of their way to get a random representative sample.

Researchers need to hold people's hands through tech problems and motivate them to report good data, she says. As an example of something that could meet that standard, she points to the Precision Medicine Initiative from the National Institutes of Health, which aims to recruit 1 million or more people in the U.S. to study treatments that take into account different genes, environments and lifestyles. The president called for $215 million for the program in 2016.

Sim says another issue is making sure that researchers are measuring the same thing. For instance, an app that reports a blood glucose value isn't very useful to other researchers unless they know whether it's fasting blood glucose level, a random level, an average over the past week or a single reading.

"It's like if people are speaking different languages, and they all use a slightly different word ... and it turns out everybody's talking about the same thing, but the words are slightly different and so communication is impeded."

She co-founded a nonprofit called Open mHealth that aims to develop open common standards, taking inspiration from the Internet's open architecture, as explained in a 2010 article in Science.

As for the study on sleep patterns, she says "the findings weren't that earth-shattering, but the methods and approach are illustrative" and that we can expect more big data research like this in the future.

Alan Yu is a freelance reporter in Hong Kong who contributes regularly to the South China Morning Post. You can follow him on Twitter: @Alan_Yu039

Copyright 2016 NPR. To see more, visit http://www.npr.org/.