How to understand the universe: A story about the scientific method
These days I’m at the very end of my PhD, and I’ve been struggling to maintain my once seemingly-unshakable love for science. So, I turned to a person who never fails to inspire me: Carl Sagan.
I’m re-reading Sagan's book, “The Demon-Haunted World: Science as a Candle in the Dark”, which was first recommended to me by fellow science communicator Barry Fitzgerald, also known as the Superhero Scientist. And this time around, a couple of quotes near the beginning of the book really stood out to me:
“The scientific way of thinking is at once imaginative and disciplined. This is central to its success. Science invites us to let the facts in, even when they don’t conform to our preconceptions.”
“The method of science, as stodgy and grumpy as it may seem, is far more important than the findings of science.”
I found these sentences inspiring because they capture what separates the scientific method of thought from pseudoscience and superstition. As Carl Sagan put it, “There are no sacred truths” in science. Everything is up for challenge. So today, I wanted to share a short anecdote of my own to illustrate how this scientific method is used in practice.
I won’t get too into the details, but recently I had formed a hypothesis that Dengue virus was messing with a certain intracellular pathway, and that the virus’ nasty little tricks would lead to an accumulation of a certain organelle, or sub-cellular organ, within human cells.
In the research group in which I’ve been doing my PhD, we present our raw data and our hypotheses every single week - if not multiple times a week - to get input and inspiration from our colleagues, as well as to cross-check weak points and biases regarding our data. About a month ago, I showed a series of data that I’d collected, and presented a hypothesis that I planned to test over the coming week: “Dengue virus infection leads to accumulation of organelle X in human dendritic cells 48 hours after infection.”
You can see from the way that sentence is formulated that I was already pretty cautious about the boundaries within which my educated guess about the manipulations of the virus might hold true.
I knew that viral replication is an ongoing process with many different parts, so I gave a time constraint: I would only be measuring things at 48 hours post-infection. I knew that different types of cells have different reactions to the same virus, so I specified that I was looking at only human cells, as opposed to, say, mosquito cells. I also got really specific with the type of human cell I was looking at: I would only be looking at one type of human immune cell. And, I knew that different organelles within the same cell would be affected differently (or maybe not affected at all) by Dengue virus infection, so I specified that I was only going to look at one of the many different organelles that reside within our cells.
And then, I set out to test my hypothesis.
I saw the opposite of what I expected.
When I first tested my hypothesis, in cells from two different blood donations (each of which was from a different and much appreciated human blood donor), I saw that actually, Dengue virus infection seemed to be causing a decrease in organelle X in human dendritic cells 48 hours after infection. Interesting!
So, I took my data back to the next lab meeting, and presented my updated hypothesis: “Dengue virus infection leads to a decrease in organelle X in human dendritic cells 48 hours after infection.”
But, it was still just a hypothesis. My work was not even close to being done.
You see, I’d only tested my hypothesis in cells from two different human individuals. But for scientists, a measly two tests are simply not enough. You need to test your hypothesis multiple times, and ideally in multiple different ways, to convince yourself, let alone anybody else, that your hypothesis is correct. Biology is complicated, and a lot of things can happen simply by chance when working with biological materials. So, to make sure that my observation was not simply due to chance, I went back to the lab to repeat my experiment with cells derived from additional blood donations. And, I enlisted one of my excellent colleagues (thanks AR if you’re seeing this!) to also check my hypothesis using a system for measuring organelle X that was different from the type of measurement I'd been using. Taken together, observing the data from many different cells, and measuring the same outcomes in multiple different ways, reduces the likelihood that the observation was simply due to chance.
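To give a rough sense of why two donors aren’t enough, here’s a toy simulation. All the numbers in it are invented for illustration (they are not my actual data): it assumes infection has no real effect at all, and asks how often random donor-to-donor variation alone would still make it look like every donor showed a decrease.

```python
import random

random.seed(1)

def simulated_experiment(n_donors):
    """Simulate one experiment where infection truly has NO effect:
    mock and infected readings are drawn from the same noisy
    distribution (donor-to-donor variation only)."""
    decreases = 0
    for _ in range(n_donors):
        mock = random.gauss(100, 15)      # arbitrary units
        infected = random.gauss(100, 15)  # same true mean: no real effect
        if infected < mock:
            decreases += 1
    # Did EVERY donor appear to show a decrease, purely by chance?
    return decreases == n_donors

def false_alarm_rate(n_donors, trials=10_000):
    """Fraction of no-effect experiments that still look like a
    consistent decrease across all donors."""
    return sum(simulated_experiment(n_donors) for _ in range(trials)) / trials

print(f"2 donors: {false_alarm_rate(2):.0%} of chance-only runs look consistent")
print(f"6 donors: {false_alarm_rate(6):.0%} of chance-only runs look consistent")
```

With no real effect, each donor flips an unbiased coin, so two donors agree on a "decrease" about a quarter of the time by chance alone, while six donors agreeing by chance is rare. That's the intuition behind repeating the experiment in more donors.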
Imagine it like this:
You’re sitting in a room with no windows, and somebody comes up to you and says, “It’s sunny outside!”
If you’re thinking like a scientist, you won’t just accept this as fact without any evidence. And the best way to be sure that it’s indeed sunny is to perform multiple measurements, using different technologies.
So, you might start with, for instance, checking the power produced by a nearby solar panel. If you see that there’s a lot of power being produced, you can infer that the sun is indeed shining. But to be sure that it’s not just a malfunction of that particular solar panel, you’ll also check if other solar panels are also producing lots of power. If they're all upping their power production in comparison to yesterday (and you know yesterday was cloudy), that's some pretty good supporting evidence for the working hypothesis that the sun is shining.
But that’s still only one form of measurement! There could indeed be sun shining, as measured by the power output of the solar panels… But only through clouds and with scattered showers. And if the latter is true, it’s really not worth getting up from your desk to go for your midday walk. So, you will also need to check if it’s truly a sunny day without any unpleasant scattered showers.
To investigate that possibility, you can log in remotely to high-tech rain barrels you’ve set up around your office building. A few months back, you put sensors in the rain barrels to detect water levels to a highly accurate degree, and now you can see from the comfort of your windowless room that the water level is actually slightly decreasing over time. Nice, that tells you that the sun is indeed out, and water is evaporating rather than precipitating!
And thus, you can now be fairly confident in the working hypothesis that it’s a pleasant day out. Although you can always do more tests to be sure...
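As a back-of-the-envelope illustration of why independent measurements are so powerful: suppose each method alone could mislead you some fraction of the time. If the two methods fail for unrelated reasons, the chance that both mislead you at once shrinks multiplicatively. The error rates below are made up purely for the sake of the arithmetic.

```python
# Hypothetical per-method error rates (invented for illustration).
p_solar_wrong = 0.10   # solar panels mislead you (e.g. a sensor glitch)
p_barrel_wrong = 0.10  # rain-barrel sensors mislead you

# If the two methods fail independently, the probability that BOTH
# mislead you at the same time is the product of their error rates.
p_both_wrong = p_solar_wrong * p_barrel_wrong

print(f"One method alone: wrong {p_solar_wrong:.0%} of the time")
print(f"Both methods agreeing, yet both wrong: {p_both_wrong:.0%} of the time")
```

That multiplication only works when the failures really are independent, which is exactly why scientists prize measurement systems based on different underlying technologies.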
Of course this is a contrived example, but it actually approximates what we do in the lab quite well. In immunology or microbiology, for instance, we don’t have the luxury of just getting up and taking a look outside to see what the weather is like. We always have to infer and approximate, based on precise measurements of things that are too small for us to see. So, it’s important to always make sure that we’re measuring what we think we’re measuring, by making measurements at multiple times and in multiple ways.
How did it work out for me in the lab? Well, it turned out that when I measured the impact of Dengue virus infection in more donors and in different ways, there was no consistent impact of Dengue virus infection on organelle X at 48 hours. The new working hypothesis was thus: “Dengue virus infection does not impact levels of organelle X in human dendritic cells 48 hours after infection.”
And, back to the drawing board.
But actually, this story encapsulates two very cool things about the scientific process.
The first is that when you build a hypothesis, you do it within strict boundaries. And that leaves you with a lot of next steps to test! For instance, I could have gone on to test the impact of Dengue virus infection on a different organelle, or on organelle X at a different time point, say 24 hours after infection. Every door that closes truly leads to the opening of a window.
The second is that when you hypothesise that something is true, you do so tentatively. I wasn’t attached or emotional about my hypothesis regarding organelle X. From the beginning, I was open to the very real possibility of being proven completely incorrect. As a scientist, you simply cannot be too egoistic about your beliefs. Being open to being wrong is an important part of the job. I’m wrong all the time, but the important thing to remember is this: every incorrect hypothesis leads me asymptotically towards the truth. I fail over and over again, but I get a little closer to the real state of things with each failure. It’s not an easy way to operate, to be sure. But I think it’s the best possible way to approach the difficult task of teasing apart fact and fiction. And I find that inspiring.
As with many thoughts on this subject, I think Carl Sagan put it best, so I’ll leave you with another quote from him:
“For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring… And if our naive self-confidence is a little undermined in the process, is that altogether such a loss? Is there not cause to welcome it as a maturing and character-building experience?”