The end


Many other stories could be told of those times.

But now the cheerful company wanted to return to the city; after their time together telling each other stories, they were curious to be back, almost anxious. Whatever had happened during their absence, they knew there was much to do.

Maybe the world hadn’t changed, but they certainly had, and that was enough.

The fire was still crackling. Delia revived it one last time, looked around and sighed.

“How do you feel? Are you all right?” asked Samir.

“Yes, yes, just a little melancholy. Do you remember that guy who was obsessed with losing his loved ones…?”

Quizzical expression on everyone’s face. Time for the final story.

Life beyond death

Marius Jacobi was born in Romania. In the early 2010s he entered MIT, the Massachusetts Institute of Technology in Cambridge, Massachusetts, the famous cradle of hackers. One bad day, a close friend of his died in a car accident. After his friend’s death, Marius watched and rewatched the videos of his public appearances, as if trying to bring him back, to revive him.

Slowly he came to the idea of virtual immortality.

In 2014 he founded Eternime, an app capable of creating an avatar of a deceased loved one. To do this, the app had to gather information while the person was still alive. The more information it collected, the more precise the avatar became: messages, geolocation data, information extracted from other applications, photographs, social posts, purchase records, readings from biomedical sensors and so on.

Eternime stored life experiences and then reassembled them through software. The avatar had the tastes of the deceased and could chat; it really felt like the original! Apart from the fact that it could not give you hugs and did not drink dark beer: still, it showed the same expertise in hops and fermentation that the missing friend had.

A British TV series, Black Mirror, had made the idea of the replacement android familiar. In the first episode of the second season, Be Right Back, a young woman, devastated by the death of her partner, subscribes to a service capable of absorbing his digital life. Making him talk is the first step. After that, with the premium version, you can also get an android of plastic and circuits, indistinguishable from the original. Indeed, perhaps even better: more available, certainly more programmable…

Other startups were born with the same goal: to realize the old dream of life beyond death. Usually the idea grew out of a tragedy: Eugenia Kowalsky, too, had lost a close friend, Jules, in a car accident. He was thirty years old, with a whole life ahead of him. She missed him terribly, so she collected tens of thousands of his messages and created a bot, a software program, capable of replicating the way Jules spoke.

Thus Replika was born: a sort of digital diary, a place where everything you wrote became a confidence shared with a piece of software that gradually got to know you. From a private experiment, it quickly became a business with millions of users.
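In essence, such a bot can start out as something very simple. A minimal sketch, with invented messages and a plain retrieval approach (not Replika’s actual technique, which relied on far more sophisticated language models): the bot answers with whichever archived message most resembles what you just said.

    # Toy "bot from a message archive": reply with the archived message
    # most similar to the user's input. All messages here are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    archive = [
        "ha, I'd rather walk than take that bus",
        "dark beer again tonight? you know my answer",
        "send me the photos from the lake, please",
        "I'm always late, you know me",
    ]

    vectorizer = TfidfVectorizer()
    archive_vectors = vectorizer.fit_transform(archive)

    def reply(user_message: str) -> str:
        """Return the archived message closest to what the user wrote."""
        query = vectorizer.transform([user_message])
        scores = cosine_similarity(query, archive_vectors)[0]
        return archive[scores.argmax()]

    print(reply("are you coming by bus?"))  # -> "ha, I'd rather walk than take that bus"
    print(reply("fancy a beer later?"))     # -> "dark beer again tonight? you know my answer"

The voice of the “original” lives entirely in the archive; the more messages it holds, the more convincingly the echo answers.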

The character of the avatars

They were called “Artificial Intelligence”, and they certainly exhibited intelligent behavior. On the other hand, in a world of people increasingly accustomed to relationships mediated by devices, screens and sound waves, bodily presence mattered less and less; it became superfluous with the mass diffusion of virtual reality headsets, even if, for humans obsessed with the body, there were services that would deliver to your home an android identical to the departed loved one. In any case, people were already wondering how the character of these so-called intelligences would evolve.

The only obvious thing was that the behavior of these avatars depended on the food they were fed, that is, on the data. One of the first such experiments known to the general public was Tay, a bot released on Twitter in 2016 by the multinational Microsoft (in China it had been preceded by Xiaoice a couple of years earlier).

Within a few hours Tay was shut down: learning from other users, it had started using sexist language, praising Nazism and mocking the police. Twitter users had trained it well!

Meanwhile, the character of avatars was being studied at MIT. Shelley was born in 2017: it was capable of collaborating with humans in writing horror stories, named of course in honor of Mary Wollstonecraft Godwin, known as Mary Shelley, the author of Frankenstein. Its main database was Reddit’s r/nosleep forum.

Shortly thereafter it was the turn of Deep Empathy, an artificial intelligence designed to increase empathy for the victims of distant disasters by creating images that simulate disasters closer to home. And finally they developed Norman, the first psychopathic AI (in reference to Norman Bates, of Psycho).

The idea was simple and apparently entirely correct: the data used to train a machine learning algorithm can significantly influence its behavior. So when people talked about biased and unfair artificial intelligence algorithms, the culprit was not the algorithm itself but the biased data it had been fed. Following this principle, if an algorithm “trains” on a “wrong” (or “right”) dataset, its behavior will change accordingly.
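A minimal sketch of the principle, with invented captions and an off-the-shelf classifier (scikit-learn’s Naive Bayes), not a reconstruction of the MIT experiments: the same algorithm, trained on two differently labeled datasets, ends up describing the same input in opposite ways.

    # Identical code, two training sets with opposite "worldviews".
    # All captions and labels are invented for the example.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    neutral_data = [
        ("a bird sitting on a branch", "nature"),
        ("a couple holding hands in a park", "people"),
        ("a vase of flowers on a table", "still life"),
        ("children playing with a kite", "people"),
    ]
    dark_data = [
        ("a bird sitting on a branch", "omen of death"),
        ("a couple holding hands in a park", "mourners at a grave"),
        ("a vase of flowers on a table", "funeral wreath"),
        ("children playing with a kite", "abandoned playground"),
    ]

    def train(dataset):
        texts, labels = zip(*dataset)
        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(texts, labels)  # the only thing that differs is the data
        return model

    normal_model = train(neutral_data)
    norman_model = train(dark_data)

    probe = ["a vase of flowers on a table"]
    print(normal_model.predict(probe))  # -> ['still life']
    print(norman_model.predict(probe))  # -> ['funeral wreath']

The “psychopath” is not in the code, which is the same in both cases, but in the labels it was fed.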

Norman had suffered prolonged exposure to one of the darker corners of Reddit, a forum full of images of death. It was an artificial intelligence trained to generate captions for images, that is, to describe them. Norman tended to see death in any image, so much so that its answers to the famous Rorschach test (ink blots of varied shapes) were invariably “dead man under a train”, “shot woman”, and so on.

There was, however, a fundamental error in this approach: the assumption that the technology itself was neutral and that everything depended only on the data. Good data, good (human or artificial) intelligence. Bad data, bad (human or artificial) intelligence.

Another piece of evidence was missing entirely: technologies are tools, not data. They are ways of relating, embodiments of worldviews, processes in the making. They depend heavily on interactions; that is, all technologies embody, incorporate, and tend to carry to the extreme the ideologies of the people who created them.

In the case of extremely complex mass technologies involving interactions between humans and non-humans, ideological effects appear as natural conditions that have always existed, when in fact they are entirely artificial: consequences of the adoption of those very tools.

Who wants to live forever?

But AI aside, who really wants to live forever? When I was teaching applied cybernetics, I always asked students during exams: if you could choose, red pill = you will die one day, blue pill = you will live forever, which would you choose? The most frequent answer was: but what about the others? Do they live forever too? Well, then I want to live forever too…

This was the conformism that led to the Great Plague of the Internet, as we now understand: wanting to conform and yet to stand out, wanting to be exceptional but without any effort, with a simple pill.

As my old friend Naief Yehya said in the conclusion of his Homo Cyborg:

The main risk posed by technological development, regardless of the great and very rapid successes of science and technology, is that of living in an era dominated by selfish utopias, marked by the promise of making life eternal (or at least extending it without limits), of offering us a prodigious abundance generated by the new digital economy, and in particular of guaranteeing absolute freedom, not only from authorities, governments, states and institutions, but also from our fellow men [sic] and our own bodies. […] To what extent will concepts such as solidarity or brotherhood [sic] make sense for the digital and interconnected minds of the future? […]

Let’s assume that a mountaineer, before setting out on a climb, makes a backup copy of his own being and stores it on a magnetic disk. The climber sets out, takes excessive risks, is caught in an avalanche and dies buried in the snow. A couple of days later, the police notify his wife of the death. She, more annoyed than sad, thanks the officials and brings a copy of the magnetic disk to a cloning center, where (thanks to a biopsy once performed on her husband) they manufacture a body identical to that of the deceased, after which they reprogram his mind with the disk. The climber regains consciousness. Once he learns of the frustrating outcome of his expedition, he apologizes to his wife for the hassle, thanks the doctors and technicians, pays the bill and returns home in his new body.

There is no doubt that the prospect of changing bodies the way you change cars or apartments is fascinating, but what will become of the human spirit in a world without old age, where eternal life can be bought? Our species is defined by the preeminence and irreversibility of its life cycles. Mortality is the certainty that every moment is unique, and that life is unrepeatable and precious. In a world from which human tragedy has been eradicated, dying without a trace will perhaps be the only revolutionary act.