Is social media an extinction catalyst?
Your human mind is a complex information processing system, made of meat-like stuff. Sadly, it only has a few debugging tools. Social media is a swarming hive of trojan horses ... that can fly.
Is social media an extinction catalyst?
I believe that it is. But to make such a dramatic argument I must first try to convince you of a few radical premises.
To make my case, I will ask you to participate in a thought experiment that will illustrate some of these premises.
Thought experiments. Albert Einstein called them Gedankenexperimente, because he spoke German. I'm not claiming any similarity to Einstein, who was a brilliant physicist. But if you choose to see any similarities between him and me, be my guest!
Imagine that a clairvoyant machine is invented by a group of millennial scientists. Like all millennials, they feel it necessary to take Fridays off, so they never finish an important part of this invention's software. Despite some technical shortcomings that I will explain later, the machine still works. It's a telepathic mind reading device called Percepticon.
When the science team tested it on volunteers, they found it can read a volunteer's thoughts and translate them into a written sentence with total precision, when it's correct, that is. But! It turns out it's perfectly correct 40% of the time, and terribly wrong 60% of the time. Unfortunately, when it makes a clairvoyancy error, the error is completely undetectable to anyone.
It is still incredible. For the 40% of the time when Percepticon is correct, it has accomplished something miraculous: it has actually scanned the brain of another human being and given a readable output of their thoughts. The world of science acknowledges that it is a marvel of technology, even if it is not useful in a court of law. So instead, they sell the patent to Samsung, which produces it for the retail market.
So ... what does a foolish and overconfident person do with Percepticon? The temptation is simply irresistible. So, they think to themselves, well ... I will use it and consider its output. I'm sure my own intuition will sniff out when Percepticon has given a false answer. Admittedly, the whole 60/40 thing isn't a great ratio to work with. But I'm smart enough to work it all out on the fly.
Now, imagine using Percepticon in a few situations where a person's own intuition is both crucial, and probably useless.
Example 1:
Imagine going into a car dealership to buy a used car, and using Percepticon to read the thoughts of the salesman.
During the price haggling, the salesman tells you he absolutely can't go any lower than $9000. You secretly use Percepticon to see what he's really thinking. Percepticon tells you he honestly believes he can't go lower than $9000.
Your human intuition is that he must want to make a deal. Percepticon is probably wrong, because it's usually wrong. But what do you do now? Offer $8000? Why did you even bring it with you?
Example 2:
You're in a serious romantic relationship. But you're worried that your partner might not be faithful to you. They promised to be faithful, and you want to believe them.
You talk with your partner about trust and commitment, and you secretly use Percepticon to scan what they are really thinking. Percepticon tells you, yes, they are cheating on you. Now you're left wondering: are they truly cheating? The chances are actually better that they aren't, or are they?
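The arithmetic of that moment can be sketched in a few lines. This treats Percepticon's stated 40/60 accuracy as the only evidence, and ignores any prior belief about your partner, which is a big simplification, but it shows why the accusation carries less weight than it feels like it does:

```python
# Sketch of the cheating example. Assumption: Percepticon's stated
# accuracy (40% correct, 60% wrong) is the only evidence available,
# and we ignore any prior belief about the partner.

p_correct = 0.40  # Percepticon read the mind correctly
p_wrong = 0.60    # Percepticon's output is an undetectable error

# Percepticon says "they are cheating". For a yes/no question, a
# wrong reading means the opposite is true.
p_cheating = p_correct
p_not_cheating = p_wrong

print(f"chance they are cheating:     {p_cheating:.0%}")
print(f"chance they are not cheating: {p_not_cheating:.0%}")
# The odds favour "not cheating" 60 to 40, yet the accusation
# is now impossible to un-hear.
```

The device's own error rate argues against its own verdict, and still you'll lie awake thinking about the 40%.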
So now you ask yourself, what the hell good is Percepticon anyway?
After Percepticon is on the market for a while, people are having trouble finding a worthwhile use for it. Customer reviews are terrible, but Percepticon is still selling like pinot grigio at an Adele concert.
This motivates a different team of scientists to study the machine and see what's really going on. They examine all the fancy transistors and algebra, and discover that it should make correct readings 100% of the time. Incredibly, they find that it's not wrong at random.
They are stunned to discover that Percepticon is incorrect deliberately, and incorrect in a way that exactly fits the user's intuition. When it gets a mind reading wrong, the output is exactly what you would have thought using your own intuition.
This makes matters worse. At this point, Percepticon seems completely useless. But! You know for certain that 4 times out of 10 it will do a miraculous thing and read the mind of another human being. You can't deny it's wrong more than half the time, and you'll never have any sense of when. But at least you'll feel intuitively correct even when it's wrong. Is that bad?
What you have is a machine that 60% of the time makes you both intuitively confident and completely wrong about other people's mental lives. You'd absolutely be better off without it, and you should throw it in a wood chipper immediately.
But you don't. Because being wrong in your judgments 60% of the time is kind of fine with you, weighed along with feeling intuitively confident about your judgment 100% of the time. So people continue to use Percepticon, and give up on worrying about when it's wrong, because 40/60 isn't so bad, in the big picture.
Naturally, a society with Percepticon deforms into a culture where every person is always 100% confident, while being wrong more often than not.
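That deformed culture can be simulated with a minimal Monte Carlo sketch. The only number taken from the story is the 40/60 ratio; the coin-flip "truth" and "intuition" are made-up assumptions for illustration:

```python
# Monte Carlo sketch of a society of Percepticon users.
# Assumptions: each judgment is a yes/no question where the truth
# and the user's unaided intuition are coin flips; per the story,
# when Percepticon errs, its output is exactly the user's (wrong)
# intuition, so errors never feel like errors.
import random

random.seed(1)

N = 100_000
correct = wrong_but_intuitive = 0

for _ in range(N):
    truth = random.random() < 0.5          # a binary fact about someone's mind
    intuition = random.random() < 0.5      # unaided guess; a coin flip here
    if random.random() < 0.40:
        output = truth                     # the miraculous 40%: a true reading
    else:
        intuition = not truth              # per the story, errors occur exactly
        output = intuition                 # when your intuition is also wrong
    if output == truth:
        correct += 1
    elif output == intuition:
        wrong_but_intuitive += 1

print(f"readings correct:              {correct / N:.0%}")              # ~40%
print(f"wrong but intuitively 'right': {wrong_but_intuitive / N:.0%}")  # ~60%
```

Roughly 40% of judgments are right, roughly 60% are wrong, and every single wrong one feels exactly like being right.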
You may be aware that the items that appear in any person's social media feeds are not served up there at random. You may be aware that social media platforms use something called machine learning and something called algorithms to sort through all your behaviours and interpret them. For the sake of argument, you could call that a form of intelligence.
You may also be aware that the intelligence of social media platforms does more than passively record and analyze intimate personal information. Social media machine learning is active, and uses its substantial data to nudge you, and see what happens next. It tries to influence your behaviour in specific ways, all of which further the ambitions of its master.
One could argue that the influence exerted by social media is limited and small. I can argue back that small is enough for it to deliver what its masters desire. If Percepticon is controlled by its masters, and you're controlled by Percepticon, then you have a couple of new masters.
Example 3:
Your neighbor is a high-ranking member of the Ministry of Security. Sadly, he has also been showing obvious romantic interest in your 16-year-old daughter. Your daughter is quite afraid of his intentions. You know your neighbor is quite ruthless when he wants something. And he knows you will do whatever is in your power to protect your daughter.
One morning you're arrested with no explanation. When the burlap sack is removed from your head, you find yourself in an interrogation room at the Ministry of Security.
An unremarkable and nameless interrogator comes into the room. Without the interrogator even saying a word, you know she has a Percepticon scanning your mind.
The interrogator tells you that she will ask you three questions, and then your punishment will be decided. She asks "Do you love your daughter?" She asks "Do you love Big Brother?" She asks "Who do you love more?"
It is clear to you now that you can't save your daughter, and you can't save yourself. You're doomed. So ... how do you answer the questions?
Fun examples! Now then, how does this relate to privacy?
Privacy and Percepticon
Privacy is about respecting boundaries. I hope you'd agree that reading someone's mind is crossing a private boundary. Even though it is wrong to wish for that intrusion, knowing what other people are thinking is still pretty appealing as a practical matter.
Lucky for the species, we can't do such a thing. So instead, we do a lot to try and guess what other people are thinking. We also put a lot of effort into trying to convince other people of what we want them to think we are thinking, not, of course, what we are actually thinking.
When you scroll through Instagram, you're indulging your advanced lemur voyeurism, although you might not think of it that way. You're on a surveillance mission. You're scrutinizing a boatload of information, all of it performative and false to some degree; in fact, heaping tons of it are false and intended to mislead.
While you're scrolling social media you're actively judging what you perceive, as much as humanly possible. It may never have occurred to you to question that judgment, because scrolling feels like something passive. If so, you have probably also never questioned your instinctive confidence that you're judging what you perceive correctly.
This confidence is misplaced. But the stakes seem so low. You're merely judging celebrities, or strangers you think are hot, or stupid, or your friends (maybe using the same criteria), and your family, and ... some projected version of yourself.
You're quite wrong about the stakes being low, even though that may never have occurred to you.
What is at stake is something very special: your grip on reality.
Not molecular reality, and not the reality of physics and chemistry. Not the reality of your hand at the end of your arm, nor the reality of your boneless face. But the reality of the living things filling your surroundings, that all must be interpreted by a puny human mind.
That reality is material. It is objective and spontaneous, reaching you through your senses. Understanding reality requires all your mental functions working as well as they can manage, which is never 100%, but ideally not 30%. A person's grip on reality is variable, and for some people, it is quite fragile.
Social media is a virtual reality, and virtually the worst kind humans could have created. It is a virtue-less reality, and a bottomless well. We call something a rabbit hole, itself a metaphor, because we have no pre-existing words for what it is. Each of the differences between virtual reality, and reality, corrupts your knowledge of reality.
Recall that your consciousness runs on a beef-platform-architecture. Your mind has limited debugging tools. Those tools are also 99% more similar to a ham than a semiconductor. Those debugging tools are much more useful on the grasslands than they are on campus.
When something new comes along, it is a worthwhile question to ask yourself: is this thing an improvement? When I meditate on that simple question, much becomes clearer.
Extinction catalysts don't need to take the form of a virus, asteroid, or prolonged periods of warm weather. Here's a pessimistic question: in idle societies that consume themselves with themselves, given enough time and self-inflicted misery, how often wouldn't a widespread feeling of dissolution overwhelm them? History seems to show a trend.
I think it's possible that when that dissolution kicks in, some people's will to live might just leave them. It will be long gone before any of those certain extinction deaths actually arrive.
When an actual extinction threat arrives, it won't matter if it's a virus, asteroid, or warm weather that turns up to kill us once and for all. If we're already dead on the inside, the outside is just waiting for the bad news.
Ta Ta For Now!