Reuters has published an interesting article by Felix Salmon in which he examines more closely the basis for the data behind TomTom's congestion indices.
Some of his points:
- TomTom's data comes from people who have its devices in their cars, switched on and in use. Peak-time commuters are far less likely to use satellite navigation for a familiar daily commute than occasional users of the road are. As a result, the driving habits, speeds and traffic-volume weightings inferred from those devices will all be skewed away from regular users.
Now I think that over time this may change, as these systems come to advise on traffic conditions more reliably. Anything that encourages people to keep the system on will help, but for now it is at least questionable whether the sample of peak-hour users is representative.
- TomTom has no measure of confidence in its data, because it hasn't validated the congestion estimates against any other source. That makes the indices curious, but hardly a sound basis for major public policy decisions.
- Measuring congestion on a percentage basis distorts delays on short trips relative to longer ones. A half-hour delay on a one-hour journey (50%) looks smaller than a 10-minute delay on a 15-minute journey (67%), which it is, in one sense. Yet 10,000 people each enduring a half-hour delay lose far more time in total than the same number enduring a 10-minute delay.
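The distortion above is easy to make concrete. The sketch below uses the article's two journeys plus an assumed traveller count of 10,000 (an illustrative figure, not one from the article) to contrast the percentage measure with total person-minutes lost:

```python
def pct_delay(free_flow_min, delay_min):
    """Delay expressed as a percentage of free-flow travel time."""
    return 100 * delay_min / free_flow_min

# A one-hour journey delayed 30 minutes vs a 15-minute journey
# delayed 10 minutes: the shorter trip looks worse in percentage terms.
long_trip = pct_delay(60, 30)    # 50.0
short_trip = pct_delay(15, 10)   # about 66.7
assert short_trip > long_trip

# But the aggregate cost depends on travellers x absolute delay.
# The 10,000 figure is an assumption for illustration.
travellers = 10_000
long_loss = travellers * 30      # 300,000 person-minutes
short_loss = travellers * 10     # 100,000 person-minutes
assert long_loss > short_loss
```

A percentage index ranks the short trip as the worse congestion, while the person-minutes total ranks the long trip as the bigger problem, which is the core of the objection.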
What this all means is that, beyond individual corridors, it is astonishingly difficult to generalise accurately about cities or to compare performance between them. That doesn't mean TomTom should be pilloried for what it has done. What it has compiled is interesting, but it isn't much more than that.