I think it’s fascinating that we are all watching how science is done in real time, at a scale hitherto undreamt of.
And what we are learning is that we didn’t really know science. The most powerful example of what we didn’t know is how science is connected to truth.
Let’s try to fix that a little bit. I’m going to show how different scientists’ views on truth are from those of the average Joe.
There are three concepts a scientist uses to relate to truth and those are:
- Evidence
- Models
- Falsifiability
With the first one being the simplest and the last one being the most complex.
I’m also going to address how belief and truth are connected. And I’m gonna do that by explaining what “there is no evidence” actually means.
But first, let’s start with what we were taught at school. Just so we are all on the same page.
Step 1: A box of chocolates
You know those big chocolate boxes? The ones where you “never know what you are gonna get” until you open the actual chocolate? They are quite fun, but very frustrating. “Oh, no, this one has mint, let’s open another one”. “Oh, no, this one has… whatever the fuck this creamy pink substance is”.
Now let’s say you open the box, and all of the chocolates look the same. You eat the first one. Standard milk chocolate. You go for the second one. Also a standard milk chocolate. Then you eat a third, fourth, fifth. They all turn out to be standard milk chocolates!
At this point, you might be tempted to conclude the entire box only has milk chocolates. That would be reasonable. The evidence seems to point that way.
Now imagine you have ten really big chocolate boxes. They all look exactly the same, and the chocolates inside them also look exactly the same. The boxes don’t say anything about what’s inside them, apart from the fact they are chocolates. You eat thirty chocolates at random before going to sleep. They are all milk chocolates!
You might go to sleep thinking all of the chocolates are actually milk chocolates. That’s reasonable. At least, it’s more reasonable than thinking that all of the chocolates are mint chocolates, except for the thirty you ate.
There is no evidence to tell you there is even one mint chocolate in your collection. Let alone that you have several of them!
So you come up with this reasonable hypothesis that you only have milk chocolates. You do keep in the back of your mind that it’s possible that there are actually more chocolate varieties in the boxes. But it looks unlikely — you should have encountered at least one of these varieties by now!
You keep eating chocolates every day, hoping to see if your hypothesis is right or wrong. A month of chocolate eating goes by, and you still only get milk chocolates.
Your hypothesis is now officially a model. Your model says “every box contains only milk chocolates”. Models are ideas that help make stuff easier to understand and deal with.
And in the fourth month, it happens. You take a bite, and that fucking disgusting pink stuff shows up. So you were wrong all along; there actually are more chocolate varieties in your boxes!
However, the fact remains that you still only get one pink chocolate every few months. The rest of them are still milk chocolates. You were not fully wrong. You were just missing a very small part of the picture.
Hey, it happens to the best of us.
Let’s pay attention to how exactly you were “wrong” or “missing a part of the picture”. If you had said at the beginning “the great majority of these chocolates are milk chocolates, but some of them are pink” you would have been unreasonable, even though the statement is correct. There was simply no evidence to back up that statement. It was a guess, and you were lucky enough that your guess turned out correct. You could as easily have said “the great majority of these chocolates are milk chocolates, but some of them are mint chocolates”, which is wrong.
So we see that:
- “Evidence” means “we did experiments to test our hypothesis too many fucking times, and none of the experiments showed our hypothesis wrong”. In the case of the chocolate box, you spent four months eating chocolates from your boxes, and they were all milk chocolates.
- It’s really easy to prove a hypothesis wrong (just show me one pink chocolate), but you can never prove it right (no matter how many milk chocolates you eat, you can never be truly sure there is not another flavor among the chocolates you have left). You can just show lots of evidence supporting it (four months of eating milk chocolate is pretty good evidence there are only milk chocolates).
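The limits of that evidence can even be put in numbers. A minimal sketch (the fractions and counts here are invented for illustration, not from the story): if some fraction p of the chocolates were pink, the chance of a milk-only streak of n chocolates is (1 − p)^n.

```python
# Back-of-the-envelope check (illustrative numbers): if a fraction p of
# the chocolates were pink, the chance of drawing n milk chocolates in a
# row is (1 - p) ** n.
def p_all_milk(p: float, n: int) -> float:
    """Probability of n consecutive non-pink chocolates."""
    return (1 - p) ** n

# After 30 milk chocolates, "10% of them are pink" is already hard to believe...
print(p_all_milk(0.10, 30))
# ...but a truly rare flavor could hide for months without contradiction:
print(p_all_milk(0.005, 120))
```

A rare enough exception survives a lot of evidence, which is exactly why the milk-only hypothesis was reasonable but never proven.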
You can see this limits the working scope of the scientist. What matters is the method. That is, your hypothesis needs to be in some way “the most reasonable” explanation for what you are observing. Otherwise, you cannot be sure whether or not you are spouting complete nonsense.
Of course, the devil is in the details. And there are plenty of them (details, not devils).
Step 2: A painting
We are all art fans until we are forced to spend hours in an art museum. Then the true art fans remain, and the rest of us show our true colors.
Except for a precious few who, in their desperation to seem all “cultured” and stuff, put on the act that they are enjoying themselves. But I’m going off-topic here.
I’m going to do a massive disservice to the centuries of knowledge in art criticism, and say (for my own convenience!) that art is mainly about two things:
- Aesthetic value. Simply finding value in a direct, deep sensory experience.
- Interpretation. How our brains struggle to give meaning to said sensory experience.
There you go. Art critics, you can kill me now. Moving on…
Science uses aesthetic value as the intermediary to its relationship with truth. In that way, it’s very similar to art. The differences show up when it comes to interpretation. Where art seeks to reach a unique individual, science is universal.
- Art uses how each different human being connects perception with emotion and ideas in order to create a unique impact.
- Science uses the universality of its method in order to try to have everyone understand the same thing.
The key difference here is that science seeks universality. It seeks precise propositions, so that it’s really, really easy to see when they are wrong. “This is beautiful” is a statement that cannot be conclusively shown to be wrong, in most cases. “The ball will reach the floor in 3 seconds” can be accurately and precisely shown to be wrong.
So while art is all about deriving value from the ability to explore without fear of being wrong, science, on the other hand, is all about being wrong. Yes, the goal is to eventually be less wrong, but being “right” never comes into play, except when we are talking about the rules of the game.
The rules of the game are set mainly by epistemology, ontology and other areas of philosophy, lumped together in a field known as “the philosophy of science”. In basic words, the philosophy of science determines three things:
- How you can be wrong
- How you can be less wrong
- How you can be “not even wrong” (read “much, much worse than wrong”)
Nothing about being right there. “Right” is so much more subtle.
After all is said and done, and all the fancy words are removed, scientists are trying to do two things:
- Relate to the phenomena they are studying
- Predict
We are all intuitively familiar with the first bullet point, since that’s how we understand stuff in daily life. Where most non-scientists lose their shit is in the predict department. Astrology, for instance, helps us relate to the phenomenon of personality… but it doesn’t predict crap. The big five model of personality, however, is predictive!
It is telling that most people know what astrology is, but no one I know has ever heard of the big five model. That said, let’s see how these two things scientists look for are related to being wrong, being less wrong and being not even wrong.
Making predictions is what sets apart the serious scientist from your standard YouTube professor who suddenly figured out what hordes of brilliant minds that dedicated their lives to the field couldn’t. Making the kind of predictions that scientists make is as uncomfortable as standing naked in a crowd: you are completely exposed to readily be proven wrong. There is no handwaving your way out of it. You said that when I dropped the ball, it would reach the floor in 3 seconds. I dropped it, and it took 4 seconds. You were wrong. End of story.
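That “3 seconds” isn’t pulled out of thin air; it comes from a model. A minimal sketch, assuming ideal free fall from rest with no air drag (the 44.1 m drop height is a number I chose so the prediction comes out at about 3 seconds):

```python
import math

G = 9.81  # gravitational acceleration near Earth's surface, m/s^2

def fall_time(height_m: float) -> float:
    """Predicted time (s) for a ball dropped from rest, ignoring drag:
    h = (1/2) g t^2  =>  t = sqrt(2h / g)."""
    return math.sqrt(2 * height_m / G)

print(fall_time(44.1))  # ~3.0 s: a precise, falsifiable number
```

If the stopwatch says 4 seconds, the model or its stated conditions (like “no drag”) are wrong, and there is no wiggling out of it.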
The kind of predictions scientists make, their testable hypotheses, have some defining features:
- They refer to the outcome of an experiment. That is, they say “if you do these things exactly as I describe here, you will get this result”. In other words, you need to describe exactly under what conditions what you are saying is going to happen. This feature is called falsifiability. Since a testable hypothesis describes the outcome of an experiment, to see whether it’s wrong you just need to carry out the experiment.
- They are very precise statements. This is where the universality of interpretation comes in. When you are proven wrong, you don’t get to say “that’s not what I meant”. You have already described how to set up the experiment, you specified exactly what you meant. You are already exposed. Naked in a crowd. Have fun.
Normal discourse is not like that. It’s all about saving face and loopholing your way out of whatever fuckery you believe. If that fails, destroy the questioner and kill the messenger.
In science, the interesting part comes with why you were wrong. What happened, exactly? To answer that, we will need to see how you made your prediction in the first place. We all hope you didn’t just pull that “3 seconds” out of your ass. Hopefully, there is some reasoning making your prediction plausible or necessary. That’s your predictive model, and it is what you are testing against evidence.
If your model comes out on top, meaning it correctly predicts the outcome of every experiment we throw at it, then it graduates to becoming a theory. When it eventually fails a test, we say the model (or theory) is valid in some restricted regime. This is the case for the classical theory of mechanics (reminder to self: I will be making a post about the mathematical foundations of classical mechanics at some point).
This upgrading of models is the way science accesses truth. What this does is define the domain in which a model is useful by seeing where it fails and where it fails to fail. In some way, what we are doing is being less wrong at each step. More precise. Being able to predict more stuff, more accurately. And we can do that only by proving things wrong and by failing to do so. Never by being right.
This brings us to an idea proposed by Stephen Hawking and Leonard Mlodinow. Model-dependent realism states that the underlying nature of reality is actually irrelevant, and all that matters are the models we use to interpret it. In other words, perceptions don’t give rise to models; models give reality to our perceptions. The prime example they give for this is the following. Imagine you exit your dining room. Does your dining table disappear when you are not there? There is no way to know! (You can imagine it is an intelligent table that knows if you are trying to trick it with hidden cameras or whatever other “test”.)
It’s the way we view reality, not reality itself, that dictates that the table stays there. It just does not make sense to believe the table disappears when nothing is watching. When there is no evidence, we default to the simplest way to fill our gaps.
Yes, “simplest” is complicated. But that’s for the next section.
There is an art to building models. As we previously said, we want to relate and feel like we get the thing we are studying. That means it would be cool if our model made some sense to us and was not just a clusterfuck of parameters that just “works”, for whatever reason.
It needs to be as simple as possible, but not simpler. But it also needs to have some aesthetic value, born out of the belief that a beautiful thing is a useful thing in its own right. There is a majestic awe in having a few powerful but very clear statements capture a wide array of phenomena, and seeing how those statements bring light to the underlying nature of what’s going on. Just like a painter expresses himself using a brush and his sense of what is beautiful, a scientist expresses himself using models and his sense of what is true.
Truth is not evidence. Evidence is a part of the game of science. Evidence is the real thing you can directly access. This is where “not even wrong” comes in.
“Not even wrong”, in general, describes a statement that does not make any sense within the context in which it is being made. It pretends to want to play a specific game, but then it turns around and adds new pieces without even telling you what those pieces are for. In science, “not even wrong” transcends the standards by which “wrong” and “less wrong” are judged. Being not even wrong is exactly like bringing a Batman costume and a banana to a traditional boxing match.
The main reason for “things” to be not even wrong in science is failing to predict the outcome of any experiment at all. We then say that your idea is not falsifiable and hence outside the realm of science. But there are more outrageous reasons, which are becoming far more common. In the frenetic exchanges characterizing today’s rhetoric and discourse, everyone needs an opinion to step into the battlefield. It’s no longer OK to not have an opinion on something. If you don’t have one, you grab a cookie cutter one and go out and play.
There is little substance behind doing that. These opinions end up holding no merit or meaning. As they are cookie cutter, they are not nuanced or precise, or anything really. That automatically makes them wildly wrong: reality is complex. When there is not an honest attempt to learn and understand, to improve whatever it is that you managed to put together, then, for a scientist, you are so far lost that you are not even wrong.
You see, science is about the process. The method. It cannot afford unchanging ideas because its core value is changing ideas by seeing where they fail. We also said it was about beauty, elegance, simplicity. So when what you have goes against all available evidence, or is not supported under any reasonable interpretation of available evidence, you are an unnecessary burden. Now we have to explain all of the evidence and why what you have goes against it. Scientists will occasionally go through that trouble, but only when the tremendous value of your explanation outweighs the added complexity.
“Not even wrong” is what happens when you intend to talk about reality, but also want reality to be whatever you want.
Claiming the truth and dismissing the presence or absence of evidence without an extremely powerful reason is insanity. Yes, I said truth is not evidence, but evidence is a signal that can point us to what may be true. Any explanation you have needs to account for why all the evidence that is present is present and why all the evidence that is missing is missing.
Truth itself is something much more profound, something which science can help us understand, but fails to provide.
Truth for the scientist is just like beauty for the artist. A guiding light. Something one can aspire to convey to others. Something that’s worth pursuing, if only because it feeds our souls.
Step 3: An imaginary friend
I’ve had imaginary friends. I’ve had many complete imaginary worlds as a kid.
And hey, sometimes I go back to visit them. There is something magical about creating your world.
What if you applied the same principles you used as a kid to create your world or imaginary friends to reality?
There is a lot to be said about choosing beliefs. It’s a tremendously powerful tool, in many different ways. Science makes such choices, as it has “elegance and simplicity” criteria. But it is also forced to make such choices in order to have a coherent whole and not a disparate mess.
These forced choices come when we ask what the evidence actually means. When we ask what information we can extract from the evidence. When we ask what caused the evidence to come to be in the first place.
Suppose I track US spending in science, space and technology. I also track suicides by hanging, strangulation and suffocation. I find this:
Well that’s a fucking shock! Every time US spending on science, space and technology goes up or down, so do suicides by hanging, strangulation and suffocation. Here is where the mindless data-driven monkey starts looking for a connection. They will probably come up with “the government is using research funding to cover up assassinations as suicides”.
Guess what. There isn’t any connection. I pulled the graph from Spurious Correlations, a site full of weird “apparently related but not really related” stuff. You will find many similar graphs there.
The graph shows what is called a correlation between two sets of data. This correlation would suggest that if we know what the US spends on science, space and technology, we can get a pretty good estimate on the number of suicides by hanging, strangulation and suffocation. That is, it suggests that one variable (US spending) may have predictive power on the phenomenon of interest (certain suicides). And indeed, it did have that power over a pretty long period of time.
That would be all good. We could use US spending to predict suicides, even if we don’t know how they are connected. The thing is, this relation could break down tomorrow. It’s a random coincidence. There is no guarantee that this will continue to hold in the future!
We need to have some sort of guarantee. That usually comes in the form of a way in which the two events are connected causally. This could be:
- One event is the cause of the other (the effect)
- Both events have a common cause
If we know of a cause-and-effect relation, we can be reasonably sure that as long as that relation is not disrupted, we will maintain predictive power.
We see that the rationale behind our model works, in a certain way, as “insurance”. It gives us motive to believe the predictive power of the model will not break down at any time, for no reason. It also, of course, gives us understanding as to what is going on under the hood. Or do you feel the graph above gives you any understanding at all?
Coming up with such a rationale is a matter of choosing beliefs. Of choosing our imaginary friends.
I really like the conception of ideas being imaginary friends or imaginary enemies. How you think is guided by the ideas you consciously or subconsciously value. These are going to dictate the manner in which you understand things and, by extension, how you make decisions. Your imaginary friends and enemies rule your life from the shadows of your mind.
Just as in real life, it’s really important to:
- Actually get to choose your friends and enemies, and not have them be dictated by fiat.
- Choose them well.
In theory, it’s easy: maximize “expected utility” (hello, utilitarians!). In practice, it’s a fucking mess.
The most famous and clear example of choosing beliefs in science comes, perhaps, from Bell’s theorem in quantum mechanics. In simple terms (and leaving out technicalities), this theorem told us that two “obvious things” that were believed to be absolutely true of our reality… just couldn’t be true at the same time. We needed to choose what we wanted, and that would result in two very different conceptions of how reality works.
- The first choice, called a non-local hidden variables theory (never mind the name), had an added layer of complexity. Namely, it had hidden variables: things the theory told us existed, but were nowhere to be found. There was no evidence it was true (as shown by our inability to find these hidden variables), and no theoretical convenience derived from adopting this choice. It was there simply to preserve a long-held conviction.
- The second choice, locality, required us to adapt our intuition of what reality looks like, but it was simpler. Ditching locality would get us in immense trouble. It had the potential to mess with the philosophical underpinnings of science, destroy them, and make us question our own existence (OK, the last two, not really). So unless we come across really good evidence that we have to ditch locality, we won’t. Keeping locality, on the other hand, was not as threatening. It just required us to accept that reality has no obligation to conform to our standards, and then we can just… move on.
(Note: I’m not trying to settle the debate around hidden variables theories in this simplistic manner. It was a long and hard debate in the scientific community until one of the choices emerged as more convenient. Einstein famously argued for them! I’m just using this as an example to make a broader point.)
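For the mathematically curious, the tension can be checked numerically. In the CHSH form of Bell’s argument, any local hidden-variable model must give |S| ≤ 2 for the combination below, while quantum mechanics predicts pair correlations E(a, b) = −cos(a − b) for suitably entangled particles. This is a sketch of the standard textbook computation, not of the full debate:

```python
import math

def E(a: float, b: float) -> float:
    """Quantum-predicted correlation for detector angles a, b (radians)."""
    return -math.cos(a - b)

# Standard angle choices that maximize the violation:
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.83, above the local-hidden-variable bound of 2
```

The point is just that the two “obvious things” are quantitatively incompatible: something has to give.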
The thing is, when your cat jumps, you don’t question gravity. You look at how the cat generated the force to beat it. Similarly, when US science funding and suicide deaths move together, you don’t say the money is going to fund assassinations and cover ups. You don’t say scientists having more money increases the odds of them committing suicide (by asphyxiation, exclusively).
No. Unless you have extraordinary evidence to the contrary, you shout “coincidence”. Random. Dispassionate. Fair to the rich and poor alike.
There is a trickier situation that often plays out in public discourse. It’s when the guy talking gives a rationale for how some stuff is causally connected but… it’s the wrong rationale, and it ignores other likely ways to connect the stuff. Here are some examples.
- A virus starts spreading in Wuhan, China. The media goes wild with how it was deliberately engineered and released to favor certain economic interests. Since there is a virus, and certain people are perceiving a benefit from it, then it must be that said people created the virus, right? But there are other possibilities! What if it was one of the expected cases of gain-of-function in nature? What if it leaked from a BSL lab due to inappropriate safety practices? Maybe after the accident happened, certain people played the circumstances in their favor. Maybe no one saw it coming and we were all just winging it. We just don’t know! At least, we don’t know the extent to which each factor is important.
- A classical example. Sales of ice cream and deaths by drowning are very closely related. When one increases, the other increases too, and vice versa. Does that mean ice cream causes drowning? No, it means both happen during summer.
- Sugar makes you fat. Yes, you will find that people who eat a lot of sugary stuff usually have rounder bellies to show for it. But it is not the sugar. It’s the fact those people are consuming more calories, and the fact it’s easy to exceed your calories by eating sugary stuff (because they have lots of calories and are super tasty). It’s not the sugar itself that makes you fat, you can eat a piece of cake and still lose weight!
These are all examples of confounding factors: extra variables we fail to account for, which then mess up our rationale (how/why a virus started spreading, the time of the year, and calorie intake, respectively).
And to find those factors, we need evidence. In the absence of evidence, we can only enumerate the likely factors and nothing else. Further evidence allows us to adjust the relative likelihoods and incidence of each factor. If we find evidence that the genome of the virus was deliberately engineered (which we didn’t), then yeah — the conspiracy theory gains a stronger footing.
In the case of sugar, we have controlled studies. They change the sugar intake for the subjects while keeping calories the same for everyone. The result? Everyone loses or gains the same amount of fat, independently of whether they ate a lot of sugar or no sugar at all.
This gives a new meaning to “there is no evidence”. Saying “there is no evidence sugar makes you fat” doesn’t just mean “it’s a fifty-fifty, it could or couldn’t make you fat”. No. It means all the available evidence points to the contrary. Similarly, there is no evidence vaccines cause autism, or that homeopathic medicine works.
Let’s go to a more outrageous example: flat-earthers. There is no evidence the Earth is flat. How? Well, because buildings were built using Newtonian gravity, the same model that predicts the shape of the Earth, and those buildings don’t fall down. Because we have telescopes. Because the ISS orbits the Earth. Because telecommunications work. Because we have pictures. Because we went to the moon and saw it for ourselves.
To argue the Earth is flat, you would need to argue against each of those. You need to absolutely demolish entire fields of study! No one is about to do that. That’s why we “assume” the Earth has (approximately) the shape of an oblate spheroid. It’s too hard to argue otherwise.
But when you have no evidence in any direction, you are free to believe. You are free to build your reality how you see fit. To choose your friends and enemies freely…
… or not really. That’s where the arbiter changes. We are now dealing with what is good and bad for you and others, and they are generally not really subjective. They have a large objective component. If you seek to have, for example, a positive impact in yourself and others (as measured by these objective “good and bad” criteria), you are back into the realm of testable hypotheses. You are back in science. Trapped. Naked.
Is there any escape from this? Yes. Science is very limited. There is a huge, very valuable world outside of its reach. But that’s for another post.
And that’s about it.
There are a couple of references to Avengers: Infinity War in this post. Go find them!