By Ignacio Torres, Community Manager and Humanist Activist.
The terrorist attack perpetrated by white supremacists in New Zealand had an even more brutal feature: the live streaming, by one of the killers, of the act of committing the crime. The real-time broadcast of the murder of dozens of people is an example of the extreme fascism that has invaded social networks, and its spread through these platforms leads us to consider what content should be shared and how to combat the brutality that, in addition to taking place in the real world, is also advancing in the virtual world.
Social networks have been an invaluable communication and information resource that has had a profound impact on contemporary societies. But also, and increasingly, they have become a breeding ground for cyberbullying, harassment, fake news and the spread of hate speech. The reiteration of these practices puts at risk the psychological and physical integrity of people, weakens historical advances in terms of respect for human rights and opens the doors to authoritarian, discriminatory and violent movements.
In technical terms, the great contribution of social networks is that, thanks to technology and its configuration, they have allowed anyone to generate multimedia content and to circulate it for viewing, listening or reading. Prior to social networks, this capacity was concentrated in the hands of media companies that could generate audio-visual or written content and circulate it thanks to the fact that they controlled the material resources to do so: recording, printing and broadcasting technologies. In 1985, the only way to see a live video of an event was for a TV station to go on location with one of its sophisticated mobile TV units and broadcast from there, and this could only be viewed on a TV set. By 2019, anyone with a cell phone connected to the Internet could livestream and become a viral phenomenon with a very high audience, available on multiple devices.
However, this expansion of the capacity to produce, broadcast and circulate audio, video, pictures and written content has not necessarily had its correlate in the expansion of reflection on what should be produced and circulated. Historically, the evolution of the media has been accompanied by editorial reflection and this has led to the elaboration of ethical guidelines and protocols, together with the legal regulation of media activity. The mass media have had to discern what is suitable for broadcasting, at what time, and then take responsibility for their mistakes, which may include changing their guidelines.
But when it comes to social networks, that reflection and regulation has been remarkably scarce on the part of the users themselves. One possible reason for this lack of critical judgement about our own activity on these platforms is the belief that personal publications have a very low impact and are irrelevant in moral or social terms. In terms of coverage, many publications on different social networks do reach an extremely limited audience compared to large media conglomerates, but the reach is always greater than that of a person-to-person conversation. Moreover, the audience reached by social network publications is usually made up of people who know the sender directly and personally, and for whom what is said is not irrelevant: it is often a relative, a loved one, a classmate or a work colleague. The great current trend is to consider social networks as communities, because through Facebook, Instagram and Twitter profiles you cannot easily speak to everyone, but you can speak to the communities of which you are a part, and what you say, or don’t say, there is even more relevant than what a multinational media company located on the other side of the world is able to communicate.
It is precisely in this community configuration where the greatest strength of social networks, and their greatest risk potential, lie. People who consider what they publish or promote on their social networks irrelevant – when in reality they are communicating with those close to them, those who hold them in personal esteem – may spread hate speech, harass or discriminate simply by failing to reflect on the suitability of what they are about to publish or share, by not developing even a minimal editorial judgement of their own.
The Feminist Movement has been eloquent in this regard. In the recent Feminist Strike, multiple feminist organizations were clear in their call to men: if they wanted to contribute to the cause, the first thing they should do was leave WhatsApp groups where publications degrading to women are shared. There are many groups on this platform composed only of men, in which pornographic images, videos, GIFs and memes are shared and degrading jokes are made about women. Feminists have a point when they say that what is communicated among groups of former schoolmates, work colleagues and cousins is far from irrelevant; therefore, stopping publications that degrade women is an irreplaceable step towards ending violence against them. If women are degraded in intimate, personal networks, the first step has been taken towards normalizing their degradation in more general terms.
Continuing with this example, a first, minimal exercise in responsible use of networks would be to evaluate whether the publication you are about to post will offend or harm someone else, or a group. This is precisely what has been raised by various organizations concerned with making the Internet a safe space for everyone. They propose a very concrete exercise: ask yourself whether you would say in a public space what you are about to write or publish. It may seem surprising, but many denigrating comments uttered on social networks would never be said in a public space. This is one of the distortions produced by the virtual world: believing that what is published on social networks remains in the cloud, when in reality it is seen, read and suffered by people of flesh and blood, day after day.
Thus the first and most fundamental action in combatting discrimination, violence and hate speech on the Internet is to adjust our own personal actions to the principles of respect, valuing diversity and the intrinsic dignity of every individual. The first action is the elaboration and implementation of a personal editorial line that determines which content to publish and which is unacceptable for sharing – not even in order to criticize it – on social networks.
It is in this regard that the question arises of whether it is valid to disseminate the video of the terrorist who, in New Zealand, live-streamed part of the murder of at least 49 people. As we know, the original audio-visual piece was promptly removed from Facebook and the terrorist’s accounts were blocked on that network, so there is no way to share the video from its original source. But it quickly leapt from that platform to WhatsApp, where it has been circulating from group to group. Upon receiving such a video, the question arises whether it should be shared with others. The answer, in light of all of the above, is categorically no.
These kinds of videos are part of the brutality that threatens the Internet and that can be combated with the simple action of not being part of it and not propagating it. In particular, it is harmful to replicate the New Zealand video because, in the first place, it is morbid: it provides not an iota more information about the attack, and it shows brutal images of an unacceptable event that only satisfy some people’s morbid thirst.
Secondly, the dissemination of such images has a normalizing effect on actions that are in no way acceptable or normal. This particular footage, moreover, simulates a video game, as if the wilful murder of people were a joke, which of course it never is. The sharing of unacceptable images in order to denounce them or show their gravity generates desensitization with respect to those same images and ends up making something that is not normal pass for normal. In this sense, the exercise of asking yourself if what is being shared would be shown in a public space is very pertinent.
In addition to normalization, there is a deeper element in the dilemma of showing brutal images such as those from the video of the attack in New Zealand. We all have standards by which there are videos we would not show. An extreme hypothetical example would be the video of the murder of a loved one. Surely, we would doubt whether images showing the violent and painful death of someone close to us should be broadcast; so why do we find it acceptable to broadcast images of the violent and painful death of others? Because of something that is difficult to recognize, but that exists and that we can combat: the consideration that the lives of others, of people from different cultures, are less relevant than our own, and that it is therefore acceptable for their deaths to be shown.
Finally, a third argument for not sharing the New Zealand video has to do with the very logic of white supremacist terrorists, who believe that spreading their actions is a triumph for their cause, because it informs and frightens as many people as possible about the threat they pose. To contribute to the advancement, in whatever form, of that fascist vision should be something that anyone who believes in the minimum values of humanity refuses to do.
Acting to block and prevent the advance of fascism on social networks is something that all of us who use these platforms should commit ourselves to doing.