Britain has a close relationship with security cameras. London alone has one of the highest ratios of surveillance cameras per citizen in the developed world. Estimates from 2002 put the number of surveillance cameras in Greater London at more than 500,000; around 110 are used by the City of London Police, according to data obtained through a 2018 Freedom of Information request.
Apparently, being recorded is not enough; London’s Metropolitan Police Service has been testing facial-recognition cameras, an effort that has the support of Home Secretary Sajid Javid, who oversees immigration, citizenship, the police force, and the security service. “I think it’s right they look at that,” he said, according to the BBC.
Although the upcoming election will decide the new leader of the Conservative Party, who will also become Prime Minister, it is unlikely that government attitudes toward facial recognition will change. Javid might move to another part of the government, but the civil libertarian side of the Conservative Party has been relatively quiet of late.
The problem is that facial recognition, as it currently stands, is often inaccurate. London police have been using facial recognition since 2016, but an independent report released last week showed that four out of five people flagged by the system as possible suspects were actually innocent, a distinct failing of the machine learning underpinning the system.
Professor Pete Fussey and Dr. Daragh Murray, from the University of Essex, analyzed the accuracy of six of the ten police trials. Of the 42 matches the system generated, only eight could be verified as correct, and in four of those 42 cases the flagged individuals were lost in the crowd and never identified.
Nevertheless, the Metropolitan Police sees the trials as a success and was “disappointed with the negative and unbalanced tone of this report,” a deputy assistant commissioner told Sky News. The Met measures accuracy by comparing successful and unsuccessful matches against the total number of faces processed; by that rubric, the error rate was only 0.1 percent.
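To make the gap between the two figures concrete, here is a minimal sketch that recomputes both metrics from the report’s numbers (42 alerts, eight verified correct). The article does not say how many faces the cameras scanned in total, so the 34,000 figure below is purely an illustrative assumption, chosen so that the Met-style rate lands near the quoted 0.1 percent.

```python
# A sketch of the two competing error metrics, using the report's figures.
# The total number of faces scanned is NOT given in the article; 34,000 is
# an illustrative assumption only.
total_alerts = 42          # faces the system flagged as watchlist matches
correct_alerts = 8         # matches the researchers could verify as correct
false_alerts = total_alerts - correct_alerts

faces_processed = 34_000   # hypothetical count of all faces scanned (assumption)

# Researchers' rubric: what share of the system's alerts were wrong?
error_among_alerts = false_alerts / total_alerts

# Met's rubric: wrong alerts as a share of every face the cameras processed.
error_among_all_faces = false_alerts / faces_processed

print(f"Error rate among alerts:       {error_among_alerts:.1%}")    # ~81%
print(f"Error rate over faces scanned: {error_among_all_faces:.2%}") # ~0.10%
```

Both figures come from the same raw counts; only the denominator changes, which is why the trial can be described as both roughly 80 percent wrong and 99.9 percent accurate.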
That was not the only error, however. The database used by the police was out of date, and therefore flagged people whose cases had already been closed. There is also “significant ambiguity” over the criteria for deciding who is placed on the watchlist, the report states.
The Metropolitan Police informed citizens about the trials by handing out leaflets and tweeting, but the report deems this insufficient. “The information provided transparency regarding the time and location of the [live facial recognition] test deployments yet there was less clarity over the purpose of the deployment, who was likely to be the subject of surveillance, and how additional information could be ascertained,” the report says. Moreover, treating those who tried to avoid the cameras “as suspicious…undermines the premise of informed consent.”
The report concludes that it’s “highly possible [the trial] would be held unlawful if challenged before the courts.” The implicit legal authority “coupled with the absence of publicly available, clear, online guidance is likely inadequate” when compared to human rights law, which requires that interference with individuals’ human rights be “in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society.’”
Controversy Across the Pond
The United Kingdom isn’t the only country struggling with this problem. In the United States, facial-recognition algorithms came under fire after the Government Accountability Office found that the systems used by the FBI were inaccurate 14 percent of the time. Moreover, reporting that figure without “the accompanying false-positive rate presents an incomplete view of the system’s…