I think they are using the term 'resolution' very loosely here, but I do appreciate the update, Due.
By combining imagery from all five SPOT satellites, it is now possible to generate data at four levels of resolution (2.5 m, 5 m, 10 m and 20 m), in black and white and in colour, across the same 60 km swath.
This multi-resolution approach offers users the geospatial information they need at different scales.
The key here is 'combining imagery' - and it IS commercial grade, i.e. it costs.
Combined lower-resolution images may MATCH, at least visually, actual higher-resolution images when combined properly. This is, though, basically a high-pass form of pixel interpolation. Signal theory applies here: the same noise-reduction equations work equally well in the audio and the visual realms.
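To illustrate the signal-theory point, here is a minimal sketch of my own (not anything SPOT has published): averaging N independent captures of the same scene knocks uncorrelated noise down by roughly the square root of N, which is exactly the same math whether the samples are audio or pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

truth = rng.uniform(0.0, 1.0, size=(64, 64))   # stand-in "ideal" scene
sigma = 0.1                                    # per-capture noise std

def capture(n):
    """Simulate n noisy captures of the scene and average them pixel-wise."""
    noisy = truth + rng.normal(0.0, sigma, size=(n, 64, 64))
    return noisy.mean(axis=0)

for n in (1, 4, 16):
    err = np.std(capture(n) - truth)
    # residual noise shrinks roughly as sigma / sqrt(n)
    print(f"{n:2d} frames -> residual noise std ~ {err:.3f}")
```

Note this only removes noise; it does not, on its own, add detail that was never sampled - that is where the sub-pixel combination comes in.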
Yeah - that's a bit 'beyond the scope' of most people caring - much less understanding.
The 'stacking' of images is what is yielding the apparent higher resolutions, imo. They aren't very specific on what the ACTUAL resolution of the raw imagery is, nor on the method of combination. But ... that's ok.
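Since they don't describe the combination method, here is my own toy sketch of the general idea behind stacking: if two coarse captures of the same scene are offset by half a coarse pixel, interleaving their samples onto a finer grid recovers the finer signal. (Real imagery also has sensor blur and imperfectly known offsets, which this deliberately ignores.)

```python
import numpy as np

hi = np.arange(16, dtype=float)   # stand-in for one hi-res scan line

low_a = hi[0::2]                  # capture sampled at even positions (2x coarser)
low_b = hi[1::2]                  # same scene, shifted half a low-res pixel

recon = np.empty_like(hi)
recon[0::2] = low_a               # interleave the two captures onto a fine grid
recon[1::2] = low_b

# With exactly known offsets and no blur, recovery is perfect
assert np.array_equal(recon, hi)
```

In practice the offsets are estimated and the captures are blurred, so the result only approaches - never equals - a native high-resolution image, which is the "apparent" part.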
Irrespective of what 'wikipedia' has to say on the matter - I have dealt with MOST of the datasets that Google is using, and I will guarantee that MOST of it is from the USDA aerial recon photos. At 2.5 meter resolution a car is about 1.5 pixels wide and 3-4 pixels long, and the Google dataset is MUCH higher resolution than that over most of the U.S. - down to 1' for most areas, and down to 1.5 inches in others. That is DEFINITELY aerial photo recon that has been scanned properly and orthorectified.
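The arithmetic behind those pixel counts is just object size divided by ground sample distance (GSD). A quick back-of-the-envelope sketch - the car dimensions here (~4.5 m by 1.8 m) are my own assumption, and exact counts also depend on how you treat the mixed pixels at object edges:

```python
# Rough pixel footprint of an object at a given ground sample distance.
def pixels(object_m, gsd_m):
    return object_m / gsd_m

car_len_m, car_wid_m = 4.5, 1.8        # assumed typical car dimensions

# 2.5 m, ~1 foot (0.3048 m), ~1.5 inches (0.0381 m)
for gsd in (2.5, 0.3048, 0.0381):
    w = pixels(car_wid_m, gsd)
    l = pixels(car_len_m, gsd)
    print(f"GSD {gsd:6.4f} m: car spans ~{w:.1f} x {l:.1f} px")
```

At foot-scale GSD a car is already dozens of pixels long, which is why the Google imagery over most of the U.S. is clearly flown, not satellite-derived.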

Finally, brethren, whatsoever things are true, whatsoever things are honest, whatsoever things are just, whatsoever things are pure, whatsoever things are lovely, whatsoever things are of good ...