
Many uses of facial recognition technology would need to be publicly registered and assessed for potential harms, and some would be banned outright, under regulation proposed by a group of leading academics.
Today the University of Technology Sydney’s Human Technology Institute will publish “Facial recognition technology: towards a model law”, which lays out a framework for regulating facial recognition technology in Australia.
While acknowledging the possible benefits of the technology, the report proposes a “risk-based approach” that would require users and sellers of facial recognition to consider and publish how it is used. Additionally, the model law would ban some high-risk uses.
As it stands, regulation of facial recognition happens through the federal Privacy Act, which covers the collection of biometric data, including face data. There are many exemptions to the act (including for small business) and it does not specifically address the unique challenges of facial recognition data.
In the report’s foreword, the experts argue Australian law does not effectively regulate facial recognition technology: “Our law does not reliably uphold human rights, nor does it incentivise positive innovation.”
The academics include former Australian human rights commissioner Edward Santow, who in that role called for a moratorium on the use of the technology in high-risk situations.
A model for regulating the technology
The model law proposes requiring developers and most users of facial recognition (called “deployers” in the report) to complete an assessment of the potential harms — including risks to human rights — and how they can minimise these harms.
A “facial recognition impact assessment” would consider factors such as where the technology is being used, how it’s used, how well it works, what decisions are being made using the data, and whether individuals are giving consent. The report also includes guidance on the potential risks associated with some of these factors; for example, use of facial recognition in a public space is considered a greater risk than use in a private space.
These assessments would be registered, published publicly and updated as necessary. Individuals using facial recognition for non-commercial purposes in a way that’s covered by a previous assessment would not need to assess or register the technology; users deploying it for commercial purposes would need to register their use but may rely on an existing assessment if the technology is used in the same way.
The model law would ban uses of facial recognition that are assessed as high risk unless they are for national security or law enforcement, for academic research, or where the regulator grants an exemption. Failure to comply would attract civil penalties, and injunctions could be granted against unauthorised use. The Office of the Australian Information Commissioner is named as the relevant regulator, and the report calls for the office to be adequately funded.
In addition to the law, the report proposes a technical standard for facial recognition technology. Such a standard could require certain levels of security, audit logging, data quality and performance testing.
“Australia needs a dedicated facial recognition law. This report urges the federal attorney-general to lead this pressing and important reform process,” the report says.
Are you worried about the increasing use of facial recognition technology? Let us know your thoughts by writing to letters@crikey.com.au. Please include your full name to be considered for publication. We reserve the right to edit for length and clarity.
Private companies should NOT be allowed to use facial recognition technology. Period. Currently it is being used in shopping centres and department stores – Wesfarmers are reported to be an enthusiastic user of the technology. But to what purpose? I don’t go into a Bunnings store now without a hat, glasses and a face mask. Technology continues to advance faster than the regulations and laws developed to control it. Governments should immediately establish technology assessment groups or committees to evaluate technology and advise on whether it should be adopted. The criterion for making those decisions should be simple: is the introduction of any particular technology in the public interest? If the answer is no, or if the technology benefits only vested interests at the possible expense of society’s interests generally, then it should be rejected.
Bunnings sought to justify their use of facial recognition on the basis of shopper and staff safety… oh, and helping to reduce shoplifting. There must have been no alternatives, poor things. So they could store your image for as long as they liked, and they reassured objecting customers that their image would be uploaded ONLY to appropriate government authorities.
Bunnings made no comment about HOW they would protect your image from being hacked, or how their own management would store, use and possibly delete your image. They could keep it as long as they like. Compared with their aims, your privacy was of no importance. WHY? Because under current laws, they can.
Legislation to restrict any organisation’s use of facial recognition is badly needed. It was only public reaction, prompted by CHOICE, that forced Bunnings to back down and stop using facial recognition.
You can be fairly sure that once the government gets involved, its main concern will be to see its agencies plugged into the live feed. After that, the concern will be that there are too many dark little corners where there’s no coverage. Just what are you doing in the bathroom so long with the door shut? Any devices in there?