Airport security personnel take a body temperature reading of a boy as he arrives at Hong Kong International Airport on April 9, 2013, following concerns over a deadly strain of bird flu. Photograph: Tyrone Siu/Reuters

Google Flu Trends is no longer good at predicting flu, scientists find


Researchers warn of 'big data hubris' and stress the importance of updating analytical models, after finding Google's system made inaccurate forecasts for 100 of 108 weeks

Science researchers have discovered a problem with Google's Flu Trends system: it's no longer any good at predicting trends in flu cases.

According to research carried out by a team at Northeastern University and Harvard University, Google's Flu Trends (GFT) prediction system has overestimated the number of influenza cases in the US for 100 of the past 108 weeks - and in February 2013 forecast twice as many cases as actually occurred.

A more accurate prediction of the number of cases in the forthcoming week could be generated simply from the number of cases recorded by the US Centers for Disease Control and Prevention (CDC) in the preceding week, the team found.
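For illustration, here is a minimal sketch of the kind of week-on-week baseline the researchers describe - forecasting each week's CDC count from the previous week's report. The weekly figures below are invented for the example, not real CDC data.

```python
# Sketch of a naive lag-one baseline: predict this week's CDC flu count
# as simply last week's reported count. Figures are illustrative only.

cdc_weekly_cases = [4200, 4800, 5600, 6900, 8100, 7600, 6400, 5200]

def lag_one_forecast(history):
    """Forecast each week as the previous week's reported count."""
    return [history[i - 1] for i in range(1, len(history))]

forecasts = lag_one_forecast(cdc_weekly_cases)
actuals = cdc_weekly_cases[1:]

# Mean absolute percentage error of the naive baseline.
mape = sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)
print(f"Lag-one baseline error (MAPE): {mape:.1%}")
```

The point is not that such a baseline is sophisticated, but that a model built on millions of search queries ought at least to beat it.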

The discovery has led them to warn of "big data hubris" in which organisations or companies give too much weight to analyses which are inherently flawed – but whose flaws are not easily revealed except through experience.

The apparent reason for GFT's failure, the scientists suggest, is tweaks made by Google itself to its search algorithm, together with its "autosuggest" feature introduced in November 2009.

Wrong weighting

"GFT was like the bathroom scale where the spring slowly loosens up and no one ever recalibrated," David Lazer, an associate professor of computer and information science at Northeastern University, who led the research, told the Guardian.

"You know scales are going to need to be recalibrated, yet when GFT started missing by a lot (which started years ago, before it got any media attention) no one tweaked the mechanism."

Lazer notes that GFT was built on correlation with the CDC's reported figures, and so is intended to forecast the CDC data, rather than any "absolute" number of flu cases. "We are not assuming that the CDC data are 'right'," he noted. "[But technically] GFT is a predictive model of future CDC reports about the present." That makes its deviation from the CDC's figures all the more notable.

Big data headache

Google's own autosuggest feature may have driven more people to make flu-related searches - and misled its Flu Trends forecasting system. Photograph: Guardian

In the paper, published in the journal Science, the team led by Lazer notes that even from its inception in 2009 "the initial version of GFT was a particularly problematic marriage of big and small data." They note that "essentially, the methodology was to find the best matches among 50m search terms to fit 1,152 data points." The chances of finding search terms that seemed to match the incidence of flu - but in fact were unrelated - "were quite high", the team commented.
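To see why the odds of chance matches were so high, consider a rough simulation (not Google's method, and the pool size and series length are arbitrary): screen a large pool of pure-noise series against a short target series, and some will correlate with it quite strongly by luck alone.

```python
# Rough simulation of the multiple-comparisons problem the researchers flag:
# with enough candidate series and few data points, some noise series will
# appear to "match" the target purely by chance.
import numpy as np

rng = np.random.default_rng(0)
n_weeks = 128          # a short target series
n_candidates = 50_000  # stand-in for a very large pool of search terms

# A seasonal-looking target plus noise, standing in for a flu-like curve.
target = np.sin(np.linspace(0, 8 * np.pi, n_weeks)) + rng.normal(0, 0.3, n_weeks)
candidates = rng.normal(size=(n_candidates, n_weeks))  # pure noise

# Correlate every candidate with the target and keep the best match.
best = max(np.corrcoef(c, target)[0, 1] for c in candidates)
print(f"Best correlation found among pure-noise series: {best:.2f}")
```

Even though every candidate here is random noise, the best of 50,000 typically correlates with the target at around 0.4 - which is why selecting terms purely on historical fit risks picking up flukes.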

Google constantly makes tweaks to its general search algorithm, averaging more than one a day, and the introduction of its "autosuggest" feature may make people more likely to search on terms related to influenza.

One problem in finding out why GFT has run amok is that Google has never disclosed which 45 search terms it uses, nor how it weights them, to generate its forecast.

"We do find evidence that Google changed how it serves up health-related information that likely resulted in more searches for terms related to flu cures, and that these terms tend to be more correlated with GFT than the CDC data," Lazer commented in an email.

"This suggests that part of the answer is that those (unknown) GFT search terms are related to flu cures, and that the change of the search algorithm drove counts of those search terms up. But we don't know that for sure. And even if the algorithm did not change at all, how people use tools changes over time – maybe people didn't think of using Google for health-related information a decade ago (where the training data for GFT came from) and now they are more likely to."

Correlation - but not causation?

Google's FAQ page about GFT says that "certain search terms are good indicators of flu activity" and that "We have found a close relationship between how many people search for flu-related topics and how many people actually have flu symptoms".

Google published its work in a paper in the science journal Nature in November 2008 (PDF), which it said showed a correlation of 0.90 with CDC flu data. Correlation coefficients range from -1 to 1, with 1 indicating a perfect positive match.

But correlation does not demonstrate causation - something that Flu Trends' loss of predictive power now seems to be demonstrating. "The initial version of GFT was part flu detector, part winter detector," the researchers note - because flu cases are highly seasonal, tending to rise in winter.
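A toy example of the "winter detector" effect (synthetic numbers, not from the study): a search term with no causal link to flu at all will still correlate strongly with flu cases if both simply peak every winter.

```python
# Synthetic illustration of the "winter detector" point: a purely seasonal
# query correlates strongly with flu incidence because both follow the same
# annual cycle, with no causal connection between them.
import numpy as np

rng = np.random.default_rng(1)
weeks = np.arange(156)  # three years of weekly data
winter_cycle = np.cos(2 * np.pi * weeks / 52)

flu_cases = 5000 + 3000 * winter_cycle + rng.normal(0, 400, weeks.size)
seasonal_query = 100 + 80 * winter_cycle + rng.normal(0, 20, weeks.size)

r = np.corrcoef(flu_cases, seasonal_query)[0, 1]
print(f"Correlation of an unrelated seasonal query with flu cases: {r:.2f}")
```

The correlation comes out very high even though the query tells you nothing about flu beyond the time of year.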

A Google spokesperson said: "We review the Flu Trends model each year to determine how we can improve — our last update was made in October 2013 in advance of the 2013-2014 flu season. We welcome feedback on how we can continue to refine Flu Trends to help estimate flu levels."

The scientists say there are broader lessons about the use of "big data" in GFT's failure to do its expected job of forecasting flu trends. Google's failure to explain its algorithms or to release its data in effect blocks scientific research on that work; "making money 'without doing evil' (paraphrasing Google's motto) is not enough when it is feasible to do so much good," they write.

Lazer added: "I do think a general lesson here is that you have to build and maintain a model like this based on the assumption that these types of relationships are elastic - there is no natural constant here like pi - and constantly, or at least regularly, recalibrate."
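A hedged sketch of what that regular recalibration might look like in practice - refitting the mapping from a search-based signal to the latest official figures rather than freezing it. The series and the simple least-squares refit here are illustrative assumptions, not a description of Google's model.

```python
# Illustrative recalibration step: periodically refit a simple linear mapping
# from a search-based signal to recent CDC reports, rather than relying on a
# mapping fitted once and left alone. All numbers are placeholders.
import numpy as np

def recalibrate(search_signal, cdc_reports):
    """Least-squares refit of cases ~ slope * signal + intercept."""
    design = np.vstack([search_signal, np.ones_like(search_signal)]).T
    slope, intercept = np.linalg.lstsq(design, cdc_reports, rcond=None)[0]
    return slope, intercept

# Refit each season on the most recent weeks where both series overlap.
recent_signal = np.array([12.0, 15.0, 19.0, 26.0, 31.0, 28.0])
recent_cdc = np.array([4100.0, 5000.0, 6300.0, 8700.0, 10200.0, 9300.0])

slope, intercept = recalibrate(recent_signal, recent_cdc)
print(f"cases ≈ {slope:.0f} * signal + {intercept:.0f}")
```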
