Hi folks, we’re very pleased to post our very first student-written blog post! This post and the post coming next week are from Adrian Dutkiewicz, a first-year Neuroscience graduate student.
Part 1: The role of the individual scientist
The purpose of this post is to explain, broadly, how science is carried out. It aims to lay out the basic conventions of science for scientists-in-training and interested members of the general public, and to address concerns raised by science cynics so that their future criticisms are better informed about the precautionary measures scientists routinely take.
For millennia humans have known that children resemble their parents. This “common sense” observation was long explained by the idea that children contain blended characteristics of their ancestors. The pure traits were believed to be irretrievably lost, the way one can dissolve sugar and salt into water but then never separate them out again once combined. This blending notion was intuitively pleasing because it explained the observations that are visible within a lifetime. We now know that this explanation is not correct, and the highly controlled experiments of Gregor Mendel were one key step in dispelling that view.
In 1865, Gregor Mendel published a paper that was the culmination of years of research cross-pollinating pea plants, under very controlled circumstances, to track the appearance of heritable traits, such as color, seed shape, and the position of the flowers on the stem.1 Among his key observations was the fact that some traits were absent in one generation but could reappear in the next (for instance, seeds with green albumen). This astute observation provided the first evidence for a discrete unit of inheritance – which today is known as a gene.
A science cynic might have been horrified that a scientist had spent many difficult years gathering enormous amounts of data to study heredity – particularly if he had been financed through public funds. Surely, everyone knew that children were similar to their parents already – yet such a criticism would miss the point. Mendel had provided strong evidence against the “common sense” notion of blending heredity, and even proposed that heritable traits were discrete and stable over generations. Like Mendel, modern scientists have several basic conventions to ensure that we do not accept superficial explanations for natural phenomena, and that the data we present are not biased by preconceived expectations.
This blog entry discusses some measures that scientists take during the course of their research, and explains the reasons for those steps. I hope to inform you about some of the basic measures scientists take to ensure that their findings accurately reflect reality – as well as to identify their own shortcomings. As such, I won’t be discussing how scientists formulate hypotheses, choose models, or design follow-up experiments – though those steps are very important too.
For the sake of this post, I will define bias as a skewing of experimental methods (or of the way results are presented) so that the results can be interpreted in a preconceived fashion. In other words, bias is when data (or their interpretation) are tainted by beliefs, rather than beliefs being influenced by the data. It should be pointed out that bias doesn’t mean that the findings are wrong – just that the results should be examined more critically. So rather than ignoring biased results, you can weigh their significance against other evidence.
Though it’s tempting to believe that bias is the result of intentional intervention – and some bias certainly is – the most insidious forms of bias are the assumptions we are not aware of. No single measure in science eliminates bias altogether, but there are many steps we can take to mitigate its effects. Broadly speaking, bias exists everywhere, and reassurances of being “unbiased” are pretty hollow. Nonetheless, some conflicts of interest are so pervasive that they become something qualitatively different.
In the scientific literature, researchers are required to disclose conflicts of interest. These conflicts can include financial interests in the results, such as institutional affiliations, relevant patents, or investments.2 In these cases, though, the goal is to reveal a potential bias rather than eliminate it.
Suppose you wanted to measure a lab rat’s water intake over a long period of time, in a highly controlled environment, to see if it had a positive correlation with temperature. In other words, your hypothesis is that as temperature increases, so does the rat’s water intake. You’d need to measure water intake as well as set the temperature for many days, and then determine the average water intake for periods when the temperature was at a given value. You might find that water intake increased gradually with temperature, but you would also need to determine whether the correlation you observed arose from random fluctuations in water intake that were unrelated to the temperature. Why not just take the average water intake at each temperature, stop there, and compare the averages? The answer is that (particularly when sample sizes are small) the averages could have fluctuated randomly in just the way you observed, and by simply looking at the averages you could draw misguided conclusions – even if the actual experiment was carried out correctly. You need a statistical analysis to determine the probability that the variation in water intake was due to random factors, and not temperature.
Luckily, there are many statistical tests designed to measure the correlation between two variables. From such a test you can derive a p-value: the probability of observing a correlation at least as strong as the one you measured if the two variables were, in fact, unrelated – that is, if the “null hypothesis” were true.
Scientists apply mathematical rigor to hypothesis testing in order to rule out the null hypothesis, which is basically a hypothesis stating that the apparent trend was due to random variation within the data. A generic null hypothesis exists for every experiment; there is always the possibility that the two variables you are measuring are unrelated. Under many circumstances, the p-value must be less than 0.05 (i.e., 5%) before you can reject the null hypothesis. To state it a different way, data showing a trend as strong as yours must have less than a 5% chance of arising by coincidence before you can start claiming that the two variables are related. Though this cutoff is somewhat arbitrary, it’s always the case that the smaller the p-value, the stronger the evidence against the null hypothesis. Water intake is of course linked to temperature, but most correlations in scientists’ research aren’t so obviously linked.
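To make this concrete, here is a toy sketch of the rat experiment in Python. The daily measurements are invented, and the permutation approach shown here is just one simple way to estimate a p-value (a real analysis would typically use an established statistics package); the `pearson_r` helper is written out only so the example is self-contained.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def permutation_p_value(xs, ys, n_shuffles=10_000, seed=0):
    """Estimate the p-value: the fraction of shuffled (unrelated) datasets
    whose correlation is at least as strong as the one we observed."""
    rng = random.Random(seed)
    observed = abs(pearson_r(xs, ys))
    ys = list(ys)
    hits = 0
    for _ in range(n_shuffles):
        rng.shuffle(ys)  # breaks any real pairing between x and y
        if abs(pearson_r(xs, ys)) >= observed:
            hits += 1
    return hits / n_shuffles

# Hypothetical daily measurements: temperature (deg C) and water intake (mL)
temps = [18, 20, 22, 24, 26, 28, 30, 32]
intake = [31, 30, 34, 37, 35, 40, 43, 44]

r = pearson_r(temps, intake)
p = permutation_p_value(temps, intake)
```

With these made-up numbers the correlation is strong and the estimated p-value falls well below the conventional 0.05 cutoff, so the null hypothesis – that intake and temperature are unrelated – would be rejected.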
Small samples are much more easily swayed by outliers, but collecting more data points reduces their influence. Statistical tests acknowledge this by, effectively, making it harder for small samples to achieve statistical significance. Thus, when a researcher finds that a trend doesn’t reach statistical significance, the common response is to expand the sample size. This can be incredibly difficult if your data collection involves loading human research subjects into an fMRI scanner and administering a 2-hour intelligence test, but far easier if you’re doing something relatively simple, like a questionnaire. So even though large sample sizes are preferable, they aren’t always feasible in practice.
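A quick simulation shows why small samples deserve this skepticism. Here, two variables that are completely unrelated (both drawn at random) are repeatedly sampled, and we count how often they show a strong correlation purely by chance. All the numbers are illustrative choices, not from any real study.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def spurious_rate(n, threshold=0.8, trials=2000, seed=1):
    """How often do two *unrelated* random variables, each with n samples,
    show a correlation stronger than `threshold` purely by chance?"""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        ys = [rng.random() for _ in range(n)]
        if abs(pearson_r(xs, ys)) > threshold:
            hits += 1
    return hits / trials

small = spurious_rate(n=5)   # tiny sample: strong "correlations" appear often
large = spurious_rate(n=50)  # larger sample: chance correlations nearly vanish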
These statistical tests can provide evidence that two variables are correlated, but causation is not so easily determined.
Causation and correlation
Correlation is not causation; this is probably the most important principle I describe in this post. Two variables can be correlated with each other without one having caused the other. A typical example used to illustrate the concept is that ice cream sales have a positive correlation with crime: the more ice cream sold, the higher the crime rate. Clearly, it’s not the case that ice cream causes crime, or that crime causes ice cream sales. The missing piece of information is that more people are out and about during warm weather, when ice cream is often sold, and so more people are out in public where crimes can be carried out; in other words, the number of people on the street was not held constant.
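The ice cream example can be demonstrated with a toy simulation. Below, temperature is the hidden common cause: it drives both ice cream sales and foot traffic, and the simulated crime count depends only on foot traffic – never on ice cream. Every number here is invented purely for illustration.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

rng = random.Random(42)

ice_cream, crime = [], []
for _ in range(200):  # simulate 200 days
    temp = rng.uniform(0, 35)                  # daily temperature (deg C)
    people_out = temp * 10 + rng.gauss(0, 20)  # foot traffic rises with heat
    # Ice cream sales depend only on temperature...
    ice_cream.append(temp * 5 + rng.gauss(0, 15))
    # ...and crime depends only on how many people are outdoors.
    crime.append(people_out * 0.3 + rng.gauss(0, 10))

r = pearson_r(ice_cream, crime)  # strong correlation, zero causation
```

The simulated ice cream sales and crime counts end up strongly correlated even though, by construction, neither influences the other – the shared dependence on temperature does all the work.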
It’s an arduous, perhaps even impossible, task to prove that one variable causes another. Scientists often provide evidence for causation by controlling every variable within an experiment, changing only one variable (such as temperature), and seeing whether that elicits a change in the variable they are measuring (such as water intake). Correlation here is as close as one can get to causation, because every other variable stays constant throughout the experiment. In human studies, however, it’s ethically difficult to justify locking a human in a cage to measure their water intake at different temperatures. Practically speaking, it’s also much easier – and less expensive – to do the same thing in a lab animal. Causation is easier to test in cell culture models because the environment is much easier to control.
That does not mean that it’s impossible to find evidence for causation in human studies. A recent example is a study that assigned participants to a messy desk or a tidy desk and challenged them to come up with new uses for ping-pong balls. Participants assigned to messy desks were rated (by independent evaluators) as more creative. This is evidence for causation because the researchers randomly assigned their volunteers to the two groups; any personal variation among the research subjects should have been balanced out between the groups (provided the sample was large enough).3 To summarize this section: causally related variables are always correlated, but not all correlations reflect causation.
The most parsimonious explanation of a phenomenon is the one that makes the fewest assumptions to account for what was observed. This is the rule of “Occam’s Razor”: the simplest explanation is often the correct one. Such explanations should also be meaningful; for example, the explanation “something has just happened for some reason” makes no testable predictions and is annoyingly impossible to disprove.
Also falling into the category of parsimonious explanations is the burden of proof: if a researcher believes that the most parsimonious explanation is not the correct one, the burden is on them to provide evidence that a less parsimonious explanation is true. For example, if you found out that both pea aphids and fungi have the same pigment-producing enzymes, the most parsimonious explanation might be that they inherited the gene for the enzyme from a common ancestor (as is the case with nearly any example of gene similarity) – or even that the aphid sample was contaminated with fungal DNA by mistake. Yet follow-up research ruled out those explanations, and it appears that a gene was transferred from fungus to aphids in an extremely rare example of natural horizontal gene transfer.4 So the most parsimonious explanation at the outset isn’t always going to be the correct one, but researchers can introduce new evidence to favor a different parsimonious explanation when they feel the old one fails to explain the new findings. It’s always the challenger’s job to provide that new evidence; it isn’t everyone else’s job to provide evidence against every radical new idea proposed by a renegade scientist.
1. Mendel, G. Experiments in plant hybridisation. (Cosimo, Inc., 2008).
2. Science/AAAS. <http://www.sciencemag.org/site/feature/contribinfo/prep/coi.pdf> (2015).
3. Vohs, K. D., Redden, J. P. & Rahinel, R. Physical Order Produces Healthy Choices, Generosity, and Conventionality, Whereas Disorder Produces Creativity. Psychological Science, doi:10.1177/0956797613480186 (2013).
4. Moran, N. A. & Jarvik, T. Lateral Transfer of Genes from Fungi Underlies Carotenoid Production in Aphids. Science 328, 624-627, doi:10.1126/science.1187113 (2010).
Stay tuned for Part 2: The role of the scientific community!