AI image (Image: DALL-E)

Australians have just caught a break. The government, rather than subject society to whatever results from big tech’s release of generative artificial intelligence technology such as ChatGPT into the wild, is giving us a say.

It wants to know whether and how we want the technology to be regulated, allowing us to benefit from its potential without suffering its misery-inducing or dangerous consequences. 

If you’re intimidated by the prospect of preparing a submission because you’re knowledgeable about neither AI nor the best regulatory regimes for managing high-risk discoveries, please don’t be. The key thing the government needs to know is we do not want another tech disruption such as AI without serious consideration and mitigation of the harms it may cause — such as what it may do to our mental health and well-being, as well as the cultures and institutions we have built around our values and needs. 

It needs to know that we don’t consent to being used as guinea pigs, and that we recognise the previous harms wreaked by unregulated tech. Harms such as the public health hazard of loneliness caused by social media, which has undermined young people’s acceptance of their bodies, mental health and resilience. Or the invasion of privacy, the assault on our attention, and the continuing struggle we face to discern basic truths about the world, all of which have eroded the social trust needed for social relationships and institutions, including democracy, to flourish. 

It’s important to articulate that the community demands all perilous technologies be regulated in the public interest by democratically accountable authorities who are in constant conversation with the community. We cannot tolerate big tech deciding for reasons of profit and power when and what life-altering and dangerous interventions it will release.

How can this future be achieved? One option is establishing a time-limited Australian AI commission based in the prime minister’s portfolio, which would bring together academics, industry representatives, public servants and young Australians to protect against the intrusions of — in the words of philosopher Yuval Noah Harari in a brilliant talk with the Frontiers Forum — an “alien intelligence”. 

But even if this approach doesn’t float your boat, what matters is that our political leaders know citizens consider the risks posed by AI comparable to those presented by pharmaceuticals, nuclear energy and pathogens. Which means that whatever their claimed benefits, they cannot be pursued until the risks of harm have been clearly identified, discussed and mitigated to levels the community will tolerate.

Regulation and governance strategies such as licensing and investments in public research are useful places to start. In the meantime, I agree with Harari: “Governments must immediately ban the release into the public domain of any more revolutionary AI tools before they are made safe.” Such a ban would give us the breathing space experts have been begging for since ChatGPT appeared late last year. 

This would allow us time to prepare our education system, which has been suddenly and unfairly disrupted by chatbots allowing students to cheat in largely undetectable ways, undermining trust between students and teachers.

And time to ready the labour market and income support system for technological advances expected to replace an estimated 5 million jobs as soon as the next decade, including the loss of career prospects for young people still paying off degrees for careers in law, computer programming and manufacturing that may never eventuate.

And time, too, for the helping professions to prepare for the influx of Australians rendered financially dependent and/or surplus to the productive requirements of society, including those whose efforts at excellence are rendered pointless. (Who can forget Go champion Lee Se-dol’s decision to quit the game after being consistently defeated by AI: “I’ve realised that I’m not at the top even if I become the No. 1 … There is an entity that cannot be defeated.”)

We also need to confront the limits of transparency and consent in protecting us from software harms. Ensuring such protections are built into future systems is vital, because once these tools become part of how the place is run, they won’t be turned off, no matter the damage they’re causing.

There are a lot of crises that demand our attention: the decline of Western democracies; the deteriorating climate; political instability in Europe; the cost-of-living crisis — the list goes on. 

But unlike all of these, the AI crisis caused by large language models such as ChatGPT has a silver bullet, if only we act quickly and regulate well. Your voice could make the difference.

The Australian government’s community consultation on AI closes on July 26.