
Putting your money where your mouth is: could betting fix science?

You may be aware of the ongoing “replication crisis” in science.
If you aren’t: over the last decade, the human and biological sciences (psychology in particular) have seen a dramatic rise in the number of published papers that are poorly executed, fail to replicate, or are never tested through replication at all.
It has been estimated that these irreproducible studies waste $28 billion in
the US alone. 
There are many reasons why these poor studies are conducted and published. Career incentives are a major one: the value of an academic is measured by the number of papers they publish and the grant money they bring in for their institution.
Many believe this has driven down the overall quality of published manuscripts, as more and more researchers feel pressure to publish positive results in order to progress in their careers, or even to keep their jobs at all. As a result, negative results are rarely published, and few people replicate others’ studies for fear that the results won’t be publishable.
For example, a now-landmark study set out to estimate the reproducibility of recently published psychological research. The research group took 100 studies published in 2008 in three prestigious psychology journals and repeated each one as closely to the original as possible.
They found that, although 97 of the original studies reported a statistically significant effect, only 36 of the replications did, and overall just 39% of the replication attempts were judged to have successfully reproduced the original result. There is hope, though: well-designed, higher-quality studies were far more likely to replicate than poorly designed ones. There are obvious generalisations being made here, and the authors are very aware of the limitations of such an exercise, but it was the first in a series of calls to arms for improving the quality, repeatability and credibility of science.


Photo credit Logan Faerber


Now that the extent of the problem is clear, many academics have started looking for ways to fix it. This has led to several journals dedicated solely to negative results, a new Center for Open Science (COS), and the Open Science Framework, which lets readers not only read a publication freely but also easily inspect the raw data and analyses behind it.
The same team of researchers that founded the COS may have found an interesting way of predicting whether a study will replicate. They turned to prediction markets: much like a stock market, except that the ‘stocks’ are studies due to be replicated within the next two months.
The studies used in this research were part of the reproducibility project mentioned above. To set each study’s starting price in the prediction market, the researchers asked experts to estimate, from 0% to 100%, the chance that each study would replicate. The experts were also asked to rate their knowledge of each study’s area, so that answers could be weighted accordingly.
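As a rough illustration of how a knowledge-weighted starting price could be computed (the paper’s exact weighting scheme may well differ; this simple weighted mean is my own sketch):

```python
def weighted_starting_price(estimates, knowledge):
    """Knowledge-weighted mean of expert probability estimates.

    estimates: each expert's estimated chance of replication, in percent (0-100).
    knowledge: each expert's self-rated knowledge of the area (e.g. 0-10),
               used as the weight for that expert's estimate.
    """
    total_weight = sum(knowledge)
    return sum(e * k for e, k in zip(estimates, knowledge)) / total_weight

# Three hypothetical experts rate one study; the confident expert dominates:
price = weighted_starting_price(estimates=[70, 40, 55], knowledge=[8, 2, 5])
# price == 61.0
```

The point of the weighting is that a self-declared novice’s guess moves the starting price less than a specialist’s.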
When the market opened, the same experts bought and sold shares in studies according to how confident they were in a successful replication. They could revise their opinions in line with the rest of the group: if a study’s price was climbing higher than they expected (implying a higher chance of replication), they could update their position. The final share price of a study (up to a maximum of 100) represented the group’s collective estimate of its chance of replication.
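For readers curious how a share price can be read as a probability: many prediction markets use an automated market maker, and a common choice is Hanson’s logarithmic market scoring rule (LMSR). This is purely an illustrative sketch of that general mechanism, not necessarily the one used in the study:

```python
import math

def lmsr_price(q_yes, q_no, b=100.0):
    """Price of a 'will replicate' share under the logarithmic market
    scoring rule. q_yes/q_no are the outstanding shares of each outcome;
    b controls liquidity. The price is read directly as a probability."""
    return math.exp(q_yes / b) / (math.exp(q_yes / b) + math.exp(q_no / b))

# With no trades, the market sits at 50%:
lmsr_price(0, 0)   # 0.5
# Buying 'will replicate' shares pushes the implied probability up:
lmsr_price(50, 0)  # ≈ 0.62
```

Under a rule like this, every trade nudges the price, so the final price aggregates everyone’s bets into a single probability estimate.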
This method correctly predicted the reproducibility of a study 71% of the time, versus an average of 58% for the initial survey of individual experts. Pooling the group’s knowledge let people change their minds based on what others were thinking, raising the accuracy of the estimates by 13 percentage points over the expertise-weighted surveys.
That may not sound like much, but figure 1 in the study clearly shows that the prediction market was far better at predicting the success or failure of a study than individual predictions were. This was particularly true for studies that experts were individually unsure about (rated at around a 50% chance of successful replication): the market subsequently converged on a much more decisive prediction.
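The accuracy figures come from treating each final price as a binary prediction. A sketch, assuming a price above 50 is read as “will replicate” (the thresholding is my assumption; the prices and outcomes below are made up):

```python
def accuracy(prices, replicated):
    """Fraction of studies whose final market price (0-100) called the
    replication outcome correctly, thresholding at 50."""
    correct = sum((p > 50) == r for p, r in zip(prices, replicated))
    return correct / len(prices)

# Hypothetical final prices and true outcomes for five studies:
accuracy([80, 30, 65, 20, 55], [True, False, False, False, True])  # 0.8
```

Scoring the market this way across all the studies is what yields a single figure like the 71% reported for the market versus 58% for the survey.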
Although the improvement is rather modest, further refinements of the market, e.g. increasing the number of participants, prolonging the time the market stays open, and including more studies, may push the accuracy higher still.
I would be interested in seeing how experts treated new studies after seeing a table of previous markets. The full paper and its supplementary information are free to read, and all the files and analyses are freely available on the Open Science Framework.
The prediction market formed much stronger opinions about which studies would replicate, and with higher accuracy than individual experts, even those with deep knowledge of the area. That could let decision makers judge more reliably which studies deserve support from funding bodies and publishers, creating a stronger drive towards high-quality, replicable science. Once larger markets have been run, and repeat studies have verified this as a valid method for predicting replicability, journals could use it to decide whether an article is worth publishing. Alternatively, funding bodies could use it to identify the studies most likely to be reproducible and award funding accordingly.



