Is the Internet immoral?

A colleague recently described the internet as “immoral”. Some might think it is “amoral” – a neutral place, where it is we, human beings, who decide whether it is used for good or bad. But this colleague was adamant. The internet, right from its very foundations (even deeper, from its roots), is substantially immoral. It is bad, rotten to the core. That immorality may have been built in ignorance by its originators, or it may have been consciously chosen. So says my colleague.

That would suggest that the architecture of the internet itself has been founded upon immoral premises and motives. Now, that is strange, as the origins of the internet lie in an attempt to enable the sharing of content across physical distance. Universities could collaborate. Knowledge could be shared. People could message each other without burning planet-warming gasoline. That was the original idea, before all digital hell broke loose.

My colleague claims it is the binary nature of the internet that “outs” it as immoral. That is because, in his view, binary thinking, applied to the human-social sphere, demeans and even harms the human being. Binary thinking serves mathematics, but it can never hold the human “essence” in any morally good way. That is what he says.

Binary thinking arises from resolving our transactions, interactions, processes, even goals and will impulses, into “either-or”. Complicated systems of ones and zeros, ons and offs, either-ors, can then be built up to model and even mimic qualitative reality – the world (internal and external) that we experience with our senses.

Essentially, a picture of the Mona Lisa, made up of a billion little pin-prick (or smaller) dots of light and colour (each built up from little lines of binary code), can be made to fool those senses of ours until we can no longer tell the difference between the über-complex binary-built model and the “real thing” (made from oil paint and human creativity and effort).
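To make that concrete, here is a minimal sketch (the colour values are invented purely for illustration) of how one such dot of light and colour is, underneath, nothing but ones and zeros:

```python
# One "pin-prick" of colour in a digital image is typically stored as three
# numbers (red, green, blue); each number is simply a pattern of bits.
# The values below are invented purely for illustration.

pixel = (123, 87, 201)  # a single dot of a digital "Mona Lisa"

# Reduce each colour channel to its either-or form: ones and zeros.
as_binary = [format(channel, "08b") for channel in pixel]

print(as_binary)  # ['01111011', '01010111', '11001001']

# Millions of such triplets, and nothing else, make up the picture that
# fools our senses into seeing brushstrokes, skin and a smile.
```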

We can reach a stage where the binary copy fools the senses so well, and we adapt so quickly to the illusion, normalising it, that it becomes perfectly functional in social life. In a few decades (or less) this will become even more pronounced, as holographic versions of our friends and family appear compellingly before us, in the room, with crystal-clear, pitch-perfect reproductive quality. Mum may be in New York, but she is there, as sure as hell, in our living room as if she were really here.

The trick is complete and we can no longer tell the difference. My colleague would call it a trick, an illusion, an attempt to deceive – and even if that deception is for benevolently functional reasons, it is, nonetheless, a lie. And, he would say, lying is immoral.

If he is right, every time the digital realm offers versions of reality as if they are true, we are being delivered a lie, sometimes compounded by other lies on top. For example, in the realm of social media, the corporations want to target us with advertising, but will often present functionality as if its only motive were to free us up, to empower us, to make us smile, or to get us a bargain. Even as the social media companies snoop on our behaviour in the “background”, they claim that transparency is almost a moral duty. Transparency and openness are core values at Facebook, even as it uses complex and largely secret algorithms (programs to mine and manipulate our data) to aim buy-sell impulses at us.

I can see my colleague nodding and saying, “See? Immoral! Just as I told you!”

But surely we are free to switch off? Surely it is we who decide what content we share or what we choose to look at? If the internet were truly amoral (neutral in the realm of right and wrong), then the providers of products and services wouldn’t be trying to force us to behave in certain ways: we can’t easily delete our accounts; we are told we have done something too often, or that we have broken the guidelines; we are regularly notified and emailed; and it is made tiresome and hard to turn those features off.

Recently, I heard a new phrase: inadvertent algorithmic cruelty. It arose in a blog post by an author whose daughter had sadly died some time before. He had an image thrust in his face on his Facebook timeline when Facebook decided to show him what a review of his year might look like. The “algorithm” had no way of discerning which images might cause pain and upset. Its simplicity, clumsy to say the least, resulted in an immoral act – immoral because it harmed someone who neither wished nor asked to be harmed.

The author, Eric Meyer, described the experience and offered an explanation:

“To show me Rebecca’s face and say ‘Here’s what your year looked like!’ is jarring. It feels wrong, and coming from an actual person, it would be wrong. Coming from code, it’s just unfortunate. These are hard, hard problems. It isn’t easy to programmatically figure out if a picture has a ton of ‘Likes’ because it’s hilarious, astounding, or heartbreaking.”

He goes on to discuss the algorithms that created this upsetting experience for him: “Algorithms are essentially thoughtless. They model certain decision flows, but once you run them, no more thought occurs. To call a person ‘thoughtless’ is usually considered a slight, or an outright insult; and yet, we unleash so many literally thoughtless processes on our users, on our lives, on ourselves.”
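A minimal sketch of the kind of thoughtless decision flow Meyer describes might look like the following (the data and the selection rule here are hypothetical; Facebook’s actual code is not public):

```python
# A hypothetical, deliberately naive "Year in Review" cover-photo picker.
# It models exactly one decision flow - "most Likes wins" - and nothing more.

photos = [
    {"caption": "Holiday in Rome",  "likes": 48},
    {"caption": "A lost loved one", "likes": 312},  # liked in grief, not in joy
    {"caption": "New job!",         "likes": 95},
]

def pick_cover_photo(photos):
    # The code has no concept of WHY a photo was liked: a ton of Likes
    # could mean hilarious, astounding, or heartbreaking.
    return max(photos, key=lambda photo: photo["likes"])

print(pick_cover_photo(photos)["caption"])  # "A lost loved one", chosen thoughtlessly
```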

Note his use of the word “unleash”. This word is used a lot online. It is almost as if the internet were an animal, on a leash, pulling insistently, aching to be let go to run all over the place. Once unleashed, it might make some smile; it might make others run in fear. Clumsy algorithms show their lack of quality most clearly when they cannot connect with the specific qualities and needs of a unique person. And they often can’t do this. When they do “match”, it is often because a person has role-played a simplified version of themselves to enable the digital program to “get them right” – a questionnaire, some fixed settings, only answering “yes” or “no”. Without this collusion, the program will reveal just how dumb and simple it really is. Of course, computing is becoming faster and closer to “artificial intelligence”, and in the future we may well find much better “matching”. For now, the matching is often very poor, often hurtful and sometimes dangerous, especially when it is forced upon us.
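As a hedged illustration of that collusion (the questions, answers and scoring are entirely invented), a yes/no matcher can only “understand” a person who has already flattened themselves into binary answers:

```python
# A hypothetical yes/no "matching" program. It only works to the extent that
# people have already role-played simplified, binary versions of themselves.

questions = ["Do you like the outdoors?", "Are you a morning person?", "Do you enjoy crowds?"]

my_answers    = [True, False, False]   # the unique person, reduced to yes/no
their_answers = [True, True,  False]   # a potential "match", reduced the same way

def match_score(a, b):
    # Count the either-or agreements. Anything that falls between
    # a yes and a no simply does not exist for this program.
    return sum(1 for mine, theirs in zip(a, b) if mine == theirs)

print(match_score(my_answers, their_answers), "out of", len(questions))  # 2 out of 3
```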

Forcing – which can range from the horrors of rape to behaviours such as bullying and blackmail – is immoral because it undermines the freedom of another. In the case of “inadvertent algorithmic cruelty”, cruelty results because a choice has been made to sheep-dip human beings in a generalised and unasked-for forcing of pictures upon us (they could be of ex-partners, of companies that made us redundant, or of deceased loved ones).

The core nature of these algorithms is that they attempt to personalise and target, but try to do this using binary language. When one person’s uniqueness falls between the cracks of a one or a zero, pain or even death might occur. Usually what results is a feeling of being undervalued, ignored or misunderstood. Algorithms that generalise and then attempt to target sit at the heart of search engines and of “customisation” on the internet. The core assumption is that a best guess based on an evolving data set is good enough. The guesses may get better as the digital program “learns and refines”, but there will be collateral damage – people will get hurt. My colleague would call that immoral too, citing Oskar Schindler: “If you save one life, then you save humanity.”

If we upgrade systems knowing it always means that 5% of the crowd will be harmed, frustrated or deleted, then we have created a system that harms all of us, because such decisions are immoral to the core. For some, this is a naive and sentimental view of the world. Isn’t it better to genetically modify crops, despite possible long-term risks, if it means that more people get fed and fewer starve? These are difficult decisions, sometimes heartbreaking and seemingly impossible. Yet if we root our culture in normalising such a view, then soon enough we will always have acceptable losses. In medicine and in food safety we try to ensure that foods will not kill anyone (at least, that is the theory and what the law says). Strict health and safety ensures our flights are as safe as possible. We have Kitemarks for electrical products, and only when standards drop do people get burned or electrocuted.

Yet in the digital realm, the notion of “acceptable losses” is fundamental to the model of innovation, commerce and functioning. The binary world, writ large, is so complex that corporations discover many glitches and bugs only through the cries of pain of their users. Many people are sold products that are marketed as perfect but that, in reality, are riddled with bugs. Users are then expected to pay to get help, or are directed to communities of fellow sufferers (help forums), even as the corporation eyes its next product launch. Help and support for products and programs are withdrawn after ever shorter periods of time. Is this just a big whinge of mine? No, it is my colleague pointing out that the internet is built on corporations (app, program and hardware makers) who see the human being not in individual terms – not as a qualitative, revealing, unique mystery – but as a mass, a crowd, something to be controlled en masse. Personalisation is part of an attempt to customise for the majority (as this sells more products and services and is often called “customer care”), but the 5% who don’t fit can fit in, conform, or die “and decrease the surplus population” (to quote Ebenezer Scrooge).

The internet is a binary beast. As such, the quality of nuance, the unpredictable, the uniqueness that is each person, eludes it. But that is no major problem for it, because the commercial model it has evolved allows for acceptable losses, allows it to ignore those who don’t fit. Many customers, when their product fails, experience the providers as distant, indifferent, even cold. My colleague calls that immoral.

Is that immoral or moral? Is it merely amoral? No, my colleague says, and he is very definite on this – it is immoral. In much of the digital and binary world, if you don’t conform to the majority, then you become part of the problem. “Either you are with us or you are against us.” Attempts are made to solve the problem only as long as it submits to the binary algorithm. Anything or anyone left out – and my colleague believes that “left out” bit could be our specialness, what makes us beautifully unique and different – must be ignored, swept aside, even obliterated.

And here I agree with my colleague. That is immoral.

About Paul Levy
Paul Levy is a writer, a facilitator, senior researcher at the University of Brighton, founder of FringeReview, and author of the book Digital Inferno, published in 2014 by Clairview Books.
