Robot reporters had a very human frisson last week when one of the elders, the LA Times quake bot, made a very personal error in reporting a 6.8 magnitude earthquake off the coast of Santa Barbara, California.

Like all good reporters, the bot was quick to make clear that it wasn’t really its fault. It was just bad information. Having been first with the news, the bot made sure it was first with the correction.

Built in about 2012 by journalist Ken Schwencke, the LA Times quake bot works something like this: it reads an alert from the United States Geological Survey (USGS) about earthquakes and applies a journalistic rule of thumb to assess newsworthiness based on magnitude and distance from Los Angeles. Then it applies basic journalist writing style to turn the USGS alert into a news alert, which can be distributed through the LA Times channels.
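The pipeline described above can be sketched in a few lines of Python. This is only an illustration: the thresholds, distance rule, function names and alert wording here are hypothetical, not the actual Quakebot code, though the overall shape (read feed, score newsworthiness by magnitude and distance, fill a template) follows the description.

```python
# Hypothetical sketch of a quake-bot pipeline: the rule-of-thumb
# numbers and wording are illustrative assumptions, not the real code.
from math import radians, sin, cos, asin, sqrt

LA = (34.05, -118.25)  # approximate latitude/longitude of Los Angeles

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth's mean radius ~6371 km

def is_newsworthy(magnitude, lat, lon):
    """Journalistic rule of thumb: big enough and close enough to LA.
    The cut-offs (magnitude 3.0, 250 km) are invented for illustration."""
    return magnitude >= 3.0 and distance_km(*LA, lat, lon) <= 250

def write_alert(magnitude, place, time_str):
    """Pour the feed's fields into boilerplate news style."""
    return (f"A magnitude {magnitude} earthquake struck {place} "
            f"at {time_str}, according to the USGS.")

# A nearby moderate quake passes the filter and becomes an alert;
# a distant one is silently dropped.
if is_newsworthy(4.2, 34.2, -118.5):
    print(write_alert(4.2, "near Northridge, California", "6:25 a.m."))
```

The same shape explains last week's error: the bot applies its rules to whatever the feed says, so a spurious feed entry that passes the filter becomes a spurious alert.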

Of course, being a bot (in this case, code sitting on an LA Times server), the bot reporter does all this faster than it takes to read this sentence, and certainly much faster than it would take a human reporter to scan the survey, make the same assessment, punch out the keystrokes and press send.

Where this particular bot reporter got into trouble was that very old computing principle: garbage in, garbage out (with a touch of a Y2K hangover). A USGS researcher was correcting the historical record on the exact epicentre of a genuine 1925 earthquake. The USGS warning system read the correction as a new event and put out an alert, which the LA Times quake bot read, parsed according to its rules and turned into breaking news.

The error was quickly corrected by the USGS and by the quake bot in turn, within a matter of minutes, proving again the online adage: “If it’s wrong, it’s not wrong for long!”

OK, a wryly amusing story about the world of bots. But more than that, it alerts us to the increasing role that bots are playing in the news ecosystem, whether we’re aware of it or not. And we’re only at the beginning.

For a lot of journalists, there’s something existentially terrifying about this: You’re telling me the creative principles of journalism can be boiled down to lines of code?

Yep: in this case, What + Where + When = Newsworthiness.

Hang on there, computer! That’s OUR job.

It’s in commodified data-dependent news that bots work best: sport match reports, financial announcements, government reports, police alerts and, of course, earth tremors. They are most common when speed matters. For example, The Washington Post used a bot to turn around results from the Rio Olympics.


But before we get too panicked, let’s remember a lot of this is formulaic drudge work. If we’re honest, no human really wanted to do it. Often it was “training” work that, apparently, taught the value of accuracy. Now that major news organisations have about half as many journalists as a decade ago, getting a bot to do this drudge work is just good use of resources.

More significant is that bots are now enabling us to personalise those stories. Most of the billions of bot stories generated each year are for an audience of one. For example, if you play fantasy football, you can receive a “news” report as if your fantasy team actually played. Or a report on your daughter’s under-10 cricket team’s best plays of the weekend. You provide the data and get back a journalistically written match report for publishing to family and friends.

So, in commodified news, bots aren’t so much replacing reporters as back-filling the gaps the job losses are leaving and creating new opportunities, particularly in what we can call audience-of-one news.

Now, bots that use artificial intelligence are being designed to work with human reporters to uncover stories that would otherwise be missed, particularly in data-heavy work such as politicians’ expenses. This strengthens journalism by expanding both the scope and the scale of the stories we can tell.

As a delaying tactic, politicians and bureaucrats tend to gum up the works by making data hard to access, often relying on handwriting or protected PDFs, which require hacking, specialised optical character recognition or re-keying to get the data into a form a bot can process.

But it’s just a delay. Journalists are experimenting with bots and AI to process data, write reports and even handle the bad bots of fake news. This will only increase. Soon, it will be universal in news organisations.

So what will we need people for once they’ve coded the bots? Creativity, empathy, character: the very human side of story-telling.

*Disclosure: Christopher Warren is developing a conversation platform using bots and AI to tell personal stories in superannuation.