Sarah Harte: Fast-tracking facial recognition technology laws would be irresponsible
The whole area of artificial intelligence and its legal regulation is in flux. It is not the time to rush legislation through the back door. Picture: Stefan Rousseau/PA
Last week was a tough week for An Garda Síochána. The acquittal of Gerard Hutch was quickly followed by the revelation that a member of Gsoc (who has since resigned) attended a party to celebrate the acquittal, which did little to inspire confidence in the force.
Not easy if you are rank and file, out walking the beat doing a tough job.
Recently, in Cork, an ugly clash between anti-racist protesters and ‘Ireland for the Irish’ types, which involved the latter screaming abuse at a line of guards acting as a buffer (many of them young), made me question who would be a garda.
As the recent recruitment campaign run by the force conceded: “Policing is not a career for the faint of heart.”
Meanwhile, tensions are mounting within the Government about a plan to fast-track the introduction of facial recognition technology (FRT), which would give the guards far greater powers.
There’s no doubt that this technology could be a game-changer. The gardaí could feed the image of a suspect into a computer which would swiftly compare it with thousands of faces captured on camera.
It was reported that in recent weeks Garda Commissioner Drew Harris wrote to Simon Harris arguing the case for the police use of FRT, which is understandable from the guards’ point of view.
However, the words ‘fast-track’ should never appear in the same sentence as ‘facial recognition technology’ or ‘artificial intelligence’.
Justice Minister Simon Harris is proposing to introduce FRT by way of a controversial amendment to the Garda Síochána (Recording Devices) Bill 2022.
Last week, Tánaiste Micheál Martin said that he supports Simon Harris’ approach, which is disappointing.
Introducing one of the most privacy-intrusive tools in policing without a comprehensive public consultation and proper pre-legislative scrutiny is ill-advised.
The Irish Council for Civil Liberties has called for “proper scrutiny of [the] Government proposals” because of a “serious risk to people’s fundamental rights”.
Labour Justice spokesperson Aodhán Ó Ríordáin and many other TDs agree.
The Green Party wants facial recognition technology to be rolled out through a standalone law. It believes — correctly — that the issue of the technology being used by the gardaí is too complex to be dispatched by way of an amendment.
Naturally, introducing a standalone piece of legislation, with pre-legislative scrutiny by a special committee examining the proposals, would take significantly longer than the amendment being pushed through by Simon Harris.
Far-reaching consequences
Far better to accept a delay and enact appropriately balanced legislation, if we decide to push ahead at all, because the introduction of this technology is a consequential decision with far-reaching implications.
Human rights groups here and abroad have consistently hit out at facial recognition technology for being less accurate when it comes to black people, women, and people under 20.
At the beginning of the month, the British Metropolitan Police “welcomed” a research report it had commissioned, which found that the “true positive identification rate” of live facial recognition was running at 89%, alongside a false-match rate of roughly one in 6,000 people scanned.
As one British civil rights campaigner pointed out: “One in 6,000 people being wrongly flagged by facial recognition is nothing to boast about, particularly at deployments in large cities where tens of thousands of people are scanned per day.”
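The campaigner’s point can be put in concrete terms with some back-of-envelope arithmetic; the 30,000 daily footfall below is a hypothetical figure for illustration, not one taken from the Met’s report:

```python
# Rough sketch of the false-match arithmetic quoted above (illustrative only).
FALSE_MATCH_RATE = 1 / 6000  # roughly one false match per 6,000 people scanned


def expected_false_matches(people_scanned: int) -> float:
    """Expected number of people wrongly flagged at the quoted rate."""
    return people_scanned * FALSE_MATCH_RATE


# At a hypothetical city-centre deployment scanning 30,000 passers-by in a day:
print(expected_false_matches(30_000))  # → 5.0
```

At that rate, a single busy deployment could wrongly flag several innocent people every day.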
According to the Met, there were “minimal discrepancies for race and sex” when the technology was used at certain settings.
Leaving aside the question of what “used at certain settings” means, most people would be unwilling to rely on the Met’s view on any ethical question, given that a recent review found it to be “institutionally racist, misogynist and homophobic”.
An Garda Síochána is closely connected to the communities it serves in a way that is not true in many other countries, including Britain, and we should keep it that way.
No technology is inherently good or bad; what matters is how it is used.
Facial recognition can potentially serve various objectives in public safety and law enforcement, but its current use presents levels of risk that cannot be overlooked and which, at a bare minimum, must be properly examined.
The reality is that AI facial recognition technology can’t be trained without access to vast amounts of data (or faces) to build pattern recognition.
Privately owned facial recognition companies such as Clearview AI gather data from publicly available sources building a massive dataset (or bank) of faces. They then sell this technology to the police and others.
So the processing of personal data in a law enforcement context relies on a database built through the massive and indiscriminate collection of personal data, ‘scraped’ by private companies from online facial pictures and photographs on social media networks.
'Threat to fundamental rights'
This practice doesn’t meet the standards of European law. In fact, the European Data Protection Board views facial recognition technology as “a serious threat to fundamental rights”.
Last year facial recognition software provider Clearview AI was fined in Britain and ordered to stop scraping the personal data of British people from the public internet.
And it was directed to delete the data it had already gathered of British residents from its systems. Clearview AI has also been under fire in Australia, Canada, France, Italy, and the US.
And there is a broader question of whether we want the expansion of mass surveillance tools on our streets. We are being told that the technology will be used in bounded ways but can we really trust that promise off the bat?
In October 2021, the European Parliament voted in favour of a resolution calling for a ban on police use of facial recognition in public places.
It also removed Hikvision thermal cameras from its own security systems. Chinese surveillance firm Hikvision is part of the systems used in China’s internment camps for the country’s minority Muslim Uighurs in Xinjiang.
China, with its long history of aggressively controlling its society, loves FRT and ethnic profiling, which isn’t a ringing endorsement.
There are multiple calls to ban the use of facial recognition technology under the proposed EU Artificial Intelligence Act, which is currently being scrutinised by EU legislators. The whole area of artificial intelligence and its legal regulation is in flux. This is no time to be rushing legislation through the back door.
Last week, Micheál Martin said: “Once the adequate safeguards are put in place, I do believe it’s moving in the right direction”. With all respect, this isn’t good enough.
Civil liberties — and their erosion — are not to be taken lightly. When rights are gone, they’re hard to get back.
Citizens in countries such as China or Russia have no choice but to live under Orwellian scrutiny in a Big Brother-style society, but we allegedly take fundamental rights seriously.
We need to thrash out the risks of this technology, and the appropriate safeguards that would need to be enshrined in any law.
Let’s have a public consultation on the issue involving all stakeholders. To do anything else is irresponsible.