Today I opened my Facebook and discovered that two of my very good friends in another country were getting married. It seemed odd: they have been together for many years, they have grown-up children, and they had made no announcement. But the FB post looked genuine; there were relevant pictures and various congratulations. So I sent a message to the ‘bride’, who confirmed they are not getting married. When she had tried to enter a new email address, FB asked her whether she was married and to whom, and created the post all by itself.
We are used to dismissing all kinds of fake news posted on FB, whether out of malice, gullibility or dark reasons impossible to fathom, but in general we can detect some kind of human intention behind them.
The fact that FB can produce fake news out of an algorithmic accident creates a different kind of issue. I have been following with interest the discussion between Mark Zuckerberg and Elon Musk about the future of AI.
‘The groundwork for the world’s nerdiest fight was laid by Musk, the Tesla and SpaceX CEO, earlier this month, when he pushed again for the proactive regulation of artificial intelligence because he believes it poses a “fundamental risk to the existence of civilization”…“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal,” he said.’ When asked for his opinion on the matter, ‘Zuckerberg said: “I have pretty strong opinions on this. I am optimistic. And I think people who are naysayers and try to drum up these doomsday scenarios – I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.”’ (The Guardian)
But it seems that before we see the robots coming to take over the world, we may see some pretty worrying accidents, simply because the algorithms that handle most of the automated tasks of the online giants are given increasingly complex tasks they may not be able to cope with. Small samples are already visible: the fake news about my friends sounds harmless enough, but who knows what permutations may produce other, less benign ‘information’ to influence the behaviour of those who believe everything they see on social media.
Another software accident happened at FB when a mistake left the names of those who moderate content available to the public:
‘Revealed: Facebook exposed identities of moderators to suspected terrorists. A security lapse that affected more than 1,000 workers forced one moderator into hiding – and he still lives in constant fear for his safety’ (The Guardian)
Human intentionality, the way consciousness structures data from the senses and memory, has a direction and an emotional tone. The whole of humanity tries to move away from suffering and towards meaningful happiness. Even when it gets this wrong (for example, through increasing violence), it is possible to understand the roots of the malfunction: fear, frustration, injustice, indoctrination, greed.
If algorithms organise their own structuring, we should not attribute intentionality to them. Only humans can give direction. Machines will follow a certain logic given by their human creators, but the search for AI necessarily introduces the possibility of random connections, a certain ‘freedom’ that distinguishes them from simple calculators.
Since the moment Alan Turing, the father of modern computers, created the Turing test (whose objective was to detect the point at which responses given by a computer can no longer be distinguished from those given by a human), the race to escape the constraints of human control was on. More worryingly, human ethics have lost a lot of ground in research in a society where profit is the highest value. And in the era of transnational cyberspace, no nation state can hope to apply rules and regulations set up by its own, however imperfect, democratic mechanisms. In other words, a new kind of totalitarian regime is deciding what type of technology will shape our lives.
So, this is not a case for becoming a Luddite(1) or a paranoid technophobe and fearing the robots. The danger right now lies in human intentions, which will decide whether Artificial Intelligence or Artificial Stupidity will prevail. Only humanising the values of society can move technology in the right direction.
1: English 19th-century textile workers who destroyed newly introduced machinery to protect their employment.