Writing in the Financial Post, Macdonald-Laurier Institute Senior Fellow Philip Cross says that the error in Statistics Canada's job numbers is a reminder of the role the human element plays in the collection and processing of data.
Despite the mistake, which currently has the agency scrambling to produce a corrected Labour Force Survey before the end of the week, Cross stresses that data collection and processing are, for the most part, improving.
By Philip Cross, Aug. 14, 2014
The hullabaloo surrounding Statistics Canada's announcement that an error had been made in the labour force survey estimates, and that a correction would be issued Friday, is a reminder of the growing importance of data to our society and of the need to preserve the reputation of the agency charged with collecting and disseminating statistics.
Without knowing the specifics of the error, we know some of the general outlines of what happened. First, the mistake was not due to a lack of resources. While there have been cutbacks at Statistics Canada recently, the agency knows not to cut the ‘Crown jewels’ of its statistical program — the labour force survey, the consumer price index and GDP. It is one thing to make an error in an estimate of livestock; it is quite another with the labour force survey, where merely admitting to a mistake of unknown magnitude is front-page news. Statistics Canada has always understood this priority.
While we are at it, let’s discard the conspiracy theories about the government ordering up a better set of numbers than those published Friday. No government would ever try something so ham-handed; all Statistics Canada would have to do is call a press conference and state that an attempt had been made to interfere with the data, and the government’s credibility would be destroyed.
Why, then, is Statscan waiting until Friday to publish the correction — a delay that has drawn widespread complaints in markets? The cryptic reference Statscan makes to a “processing” error suggests that the initial mistake may not have been the only potential error. “Processing” also implies the error was not in the raw data collected from its sample, but in how the raw data were turned into a publishable form (if processing sounds like manufacturing, that is why Statscan is often called a numbers factory). The only thing more embarrassing for Statscan than admitting it made a mistake would be correcting an error, then announcing two days later that it had found another one. At that point, you become fodder for late-night TV humourists. While the mayor of Toronto might survive such ridicule, the reputation of a statistical agency would not.
It is also a reminder that there is a human element to all statistics. Every major data series published by Statscan has at some point had something similar happen: someone uses the wrong spreadsheet, forgets to check whether all the responses are included, or mindlessly inputs last month’s data for this month; the list of possible mistakes is a census of human imperfection. There is even a seasonal component to these errors — it is no coincidence that a similar event last August led Statscan to delay its release of national household survey data. People are on vacation in summer, and those on duty are simply not as vigilant as in other months.
However, this does not mean we should simply sit back and accept mistakes as inevitable. One of the first things Munir Sheikh did upon becoming chief statistician was emphasize the importance of improving data quality and reducing the frequency of published mistakes. By emphasize, I mean starting every meeting with a reminder of its importance and holding senior managers accountable for it in their evaluations. As a result, the number of errors published in The Daily fell precipitously. This demonstrates that it is attitude, not resources, that counts in improving data quality. And despite the impression given by recent events, the long-term trend in data quality is improvement, not deterioration.
Philip Cross is Research Co-ordinator at the Macdonald-Laurier Institute and former Chief Economic Analyst at Statistics Canada.