This week dozens of news outlets invented a scientific journal and reported on the reporting of the reporting of “a study” that was published (and reported on) at least six months ago. This ridiculous game of Chinese whispers gave us an insight into the way science journalism can (and does) go terribly wrong.

It started when Reuters published a story about the positive cognitive effects of bad moods.

Reuters reported that the study “was published in the November/December edition of the Australian Science journal”.

Wow! That makes it sound like it was published in the equivalent of Science, only for Australia.

The only thing is, there is no peer-reviewed journal called Australian Science and there was no single relevant study.

Rather, it was several studies conducted over the past five years, none of which was published more recently than six months ago. Indeed, the most recent study was widely reported on at the time.

And what about the Australian Science journal? Well, there is a popular magazine called Australasian Science that carried an interesting feature about the various studies published over the past five years. But that is a long way from what the Reuters story communicated.

The Reuters piece was eventually (partly) corrected, but not before it had been syndicated to many news outlets in Australia and around the world. Other outlets made matters worse by “reporting” on Reuters’ reporting.

In doing so, the Daily Mail in the UK, notorious for its bad science reporting, changed the magazine’s name yet again, this time to The Australian Science Journal, endowing it with a false air of authority.

The result is dozens of articles published around the world giving the false impression that a new study was published in a science journal that doesn’t exist.

Good science reporting is essential if the public is to make informed choices, and it must be better than this.

Michael Slezak is a freelance journalist and philosophy teacher, and runs the science blog Good, Bad, and Bogus.