By Jonathan Drake*
“You are worse than a fool; you have no care for your species. For thousands of years men dreamed of pacts with demons. Only now are such things possible.”
When William Gibson wrote those words in his groundbreaking 1984 novel Neuromancer, artificial intelligence remained almost entirely within the realm of science fiction.
Today, however, the convergence of complex algorithms, big data, and exponential increases in computational power has produced a world in which AI raises significant ethical and human rights dilemmas, implicating rights ranging from privacy to due process.
Addressing these issues will require considerable input from experts across an extremely broad array of disciplines such as public policy, business, criminal justice, ethics, and even philosophy.
Unintended consequences result from many new inventions. AI, however, is unique in that the decisions that give rise to these consequences are often made without human input.
The most severe of these potential adverse outcomes arise from systems that are designed to cause harm from the outset, such as weapons systems. Long a staple of science fiction films, weapons incorporating varying degrees of autonomous functionality have in fact existed for some time, with landmines being one of the simplest—and for human rights, most problematic—examples of this technology.
Today, however, the science of AI has advanced to the point that the construction of sophisticated fully autonomous robots is a possibility.
In response, a coalition of NGOs launched the “Campaign to Stop Killer Robots” in 2013, seeking to ensure that life-or-death decisions remain firmly in human hands.
Although not associated with the campaign, the US Department of Defense had issued Directive 3000.09 the previous year, defining its policy that fully autonomous weapon systems may only “be used to apply non-lethal, non-kinetic force such as some forms of electronic attack.” Lethal force, under current policy, requires human control.
Although less dramatic than military applications, the development of AI in the domestic sector also opens the door to significant human rights issues such as discrimination and systemic racism.
Police forces across the country, for example, are increasingly turning to automated “predictive policing” systems that ingest large amounts of data on criminal activity, demographics, and geospatial patterns to produce maps of where crime is predicted to occur. The human rights implications of this technology are even more acute when such a system looks beyond predicting the location of crime to consider which individuals are likely to offend.
This is the approach that has been taken by the city of Chicago, which has used AI to produce a “Strategic Subject List” of potential criminals who are subsequently visited by the police and informed that they are considered to be high-risk.
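To make the place-based approach concrete, the following is a minimal illustrative sketch, in Python, of how such a system might rank areas for patrol: historical incident records are aggregated by grid cell, and the highest-scoring cells are flagged as predicted hotspots. The data, cell names, and scoring rule are invented for illustration and do not reflect any vendor’s actual algorithm.

```python
# A toy illustration of place-based "predictive policing": rank grid cells
# by historical incident counts and flag the top scorers as "hotspots".
# All data and the scoring rule are invented; real systems are far more
# complex and proprietary.
from collections import Counter

# Hypothetical incident records: (grid cell, offense type)
incidents = [
    ("cell_12", "burglary"), ("cell_12", "assault"), ("cell_07", "theft"),
    ("cell_12", "theft"), ("cell_03", "burglary"), ("cell_07", "assault"),
]

def predict_hotspots(records, top_k=2):
    """Score each cell by its raw incident count; return the top_k cells."""
    counts = Counter(cell for cell, _offense in records)
    return [cell for cell, _count in counts.most_common(top_k)]

print(predict_hotspots(incidents))  # ['cell_12', 'cell_07']
```

Even this toy version makes the core issue visible: the “prediction” is little more than a restatement of where incidents were recorded in the past.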
Although proponents of these systems claim that they can help bring down crime rates while reducing bias in policing, so far the evidence for this is mixed at best.
Furthermore, skeptics have pointed out that AI may in fact end up amplifying, rather than mitigating, any pre-existing bias, and not just in police work. The algorithms, they note, are informed by current government practices that are often unjust and produce disparate impacts, whether in law enforcement, public infrastructure investment, access to due process, freedom of assembly, or many other areas.
When trained with such inputs, AI has the potential to reinforce and deepen such systemic discrimination and, perhaps most concerning, may remove the opportunities for those policies to be understood and reformed. In this way, AI predictions could become self-fulfilling prophecies, violating freedom of information and due process, which has implications for the full range of civil, political, economic, social, and cultural rights.
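A toy simulation can illustrate the feedback loop the skeptics describe. In this sketch (the rates, starting counts, and recording rule are invented for illustration, not drawn from any study), two areas have an identical underlying crime rate, but patrols are sent wherever past recorded crime is highest, and crime is only recorded where officers are present to observe it:

```python
# A toy feedback-loop simulation: equal true crime rates, but patrol
# allocation follows recorded counts, and only patrolled areas generate
# new records. A small initial disparity then snowballs.
import random

random.seed(0)
TRUE_RATE = 0.3                # identical underlying weekly crime rate
recorded = {"A": 5, "B": 4}    # area A starts with one extra recorded incident

for week in range(52):
    patrolled = max(recorded, key=recorded.get)   # patrol the predicted "hotspot"
    for area in recorded:
        crime_occurred = random.random() < TRUE_RATE
        if crime_occurred and area == patrolled:  # only observed crime is recorded
            recorded[area] += 1

print(recorded)  # e.g. {'A': ~20, 'B': 4}: the head start becomes a "prophecy"
```

Nothing in the simulation makes area A more dangerous than area B; the growing disparity is produced entirely by where the data is collected.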
Even in seemingly innocuous settings, the use of AI can raise thorny questions about rights such as an individual’s expectation of privacy. In 2012, for example, US retail giant Target was using predictive analytics to determine which of its customers were likely pregnant, in order to market more effectively to expectant parents.
At one point, however, its system made that determination about a teenage girl and revealed its judgment by sending coupons for baby products to her home before her parents had learned the news.
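The mechanism behind such inferences can be sketched in a few lines. In this hypothetical example (the products, weights, and threshold are all invented; Target’s actual model has never been published), certain purchases are treated as weak signals, and a shopper whose basket score crosses a threshold is flagged for targeted coupons:

```python
# A hypothetical purchase-scoring sketch in the spirit of the Target anecdote.
# Product weights and the threshold are invented for illustration only.
PREGNANCY_SIGNALS = {
    "unscented_lotion": 0.4,     # illustrative weight, not a real model parameter
    "calcium_supplement": 0.3,
    "cotton_balls_bulk": 0.2,
    "snack_chips": 0.0,          # neutral purchase contributes nothing
}
THRESHOLD = 0.6

def pregnancy_score(purchases):
    """Sum the signal weights of a shopper's purchases (unknown items score 0)."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchases)

basket = ["unscented_lotion", "calcium_supplement", "snack_chips"]
if pregnancy_score(basket) >= THRESHOLD:
    print("flag shopper for baby-product coupons")  # the inference that leaked the news
```

Each individual purchase reveals little, but aggregation turns mundane data into a sensitive disclosure, which is precisely the privacy concern.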
The increasing prevalence of virtual personal assistants like Apple’s Siri and Amazon’s Alexa raises similar privacy issues, as the effectiveness of the underlying AI depends on amassing and analyzing large quantities of personal information about their users.
When AI is coupled with machinery that interacts with the physical world, the potential for disruptive effects is further enhanced, on both an individual and societal level.
Since the end of the most recent recession, for example, manufacturing output in the United States has increased by over 20%, but employment in the same sector has risen by only a quarter as much.
This trend may only be the beginning. According to a 2013 study by researchers from Oxford University, as general-purpose robotic systems become increasingly capable and more easily programmable, an estimated 47% of US jobs may be at high risk of automation. The policy and ethical implications of such a development would be particularly acute since, as the study notes, many of the tasks most likely to be automated correspond to low-skilled jobs that today are disproportionately held by the working poor.
In countries that uphold the right to protection against unemployment (Article 23.1 of the Universal Declaration of Human Rights), such developments may have significant legal implications as well.
In Gibson’s novel, the protagonist inhabits a grim future in which humans and their artificial intelligences exist in a constant state of mutual antagonism. “Nobody trusts those [things],” the novel explains, “the minute, I mean the nanosecond, that one starts figuring out ways to make itself smarter, [they’ll] wipe it.”
Although it may yet come to pass, the popular image of the threat posed by artificial intelligence, in which a sentient computer seeks to augment its intelligence at the expense of its human creators, remains speculative. The threats posed by this technology to other areas of human rights, however, are already with us. Violations of the rights to privacy and due process may be only the beginning.
*Jonathan Drake is a Senior Program Associate in the Center for Science, Policy, and Society at the American Association for the Advancement of Science.
Jonathan Drake’s article was originally published in openDemocracy.
Image: Shadow Dexterous Robot Hand holding a lightbulb | Authors: Richard Greenhill and Hugo Elias of the Shadow Robot Company | Creative Commons Attribution-Share Alike 3.0 Unported license.
2017 Human Wrongs Watch