The Times Australia
How small differences in data analysis make huge differences in results

  • Written by Hannah Fraser, Postdoctoral Researcher, The University of Melbourne

Over the past 20 years or so, there has been growing concern that many results published in scientific journals can’t be reproduced[1].

Depending on the field of research, studies have found efforts to redo published studies lead to different results in between 23%[2] and 89%[3] of cases.

To understand how different researchers might arrive at different results, we asked hundreds of ecologists and evolutionary biologists to answer two questions by analysing the same sets of data. They arrived at a huge range of answers.

Our study has been accepted by BMC Biology as a stage 1 registered report[4] and is currently available as a preprint[5] ahead of peer review for stage 2.

Why is reproducibility a problem?

The causes of problems with reproducibility[6] are common across science. They include an over-reliance on simplistic measures of “statistical significance” rather than nuanced evaluations, the fact journals prefer to publish “exciting” findings, and questionable research practices[7] that make articles more exciting at the expense of transparency and increase the rate of false results in the literature.

Much of the research on reproducibility and ways it can be improved (such as “open science” initiatives[8]) has been slow to spread between different fields of science.

Read more: Our survey found 'questionable research practices' by ecologists and biologists – here's what that means[9]

Interest in these ideas has been growing among ecologists[10], but so far there has been little research evaluating replicability in ecology. One reason for this is the difficulty of disentangling environmental differences from the influence of researchers’ choices.

One way to get at the replicability of ecological research, separate from environmental effects, is to focus on what happens after the data is collected.

Birds and siblings, grass and seedlings

We were inspired by work led by Raphael Silberzahn[11] which asked social scientists to analyse a dataset to determine whether soccer players’ skin tone predicted the number of red cards they received. The study found a wide range of results.

We emulated this approach in ecology and evolutionary biology with an open call to help us answer two research questions:

  • “To what extent is the growth of nestling blue tits (Cyanistes caeruleus) influenced by competition with siblings?”

  • “How does grass cover influence Eucalyptus spp. seedling recruitment?” (“Eucalyptus spp. seedling recruitment” means how many seedlings of trees from the genus Eucalyptus there are.)

[Photo: eucalyptus seedlings outdoors.] Researchers disagreed over whether grass cover encourages or discourages Eucalyptus seedlings. Shutterstock[12]

Two hundred and forty-six ecologists and evolutionary biologists answered our call. Some worked alone and some in teams, producing 137 written descriptions of their overall answer to the research questions (alongside numeric results). These answers varied substantially for both datasets.

Looking at the effect of grass cover on the number of Eucalyptus seedlings, we had 63 responses. Eighteen teams described a negative effect (more grass means fewer seedlings), 31 described no effect, six described a positive effect (more grass means more seedlings), and eight described a mixed effect (some analyses found positive effects and some found negative effects).

For the effect of sibling competition on blue tit growth, we had 74 responses. Sixty-four teams described a negative effect (more competition means slower growth, though only 37 of these teams thought this negative effect was conclusive), five described no effect, and five described a mixed effect.

What the results mean

Perhaps unsurprisingly, we and our coauthors had a range of views on how these results should be interpreted.

We have asked three of our coauthors to comment on what struck them most.

Peter Vesk, who was the source of the Eucalyptus data, said:

Looking at the mean of all the analyses, it makes sense. Grass has essentially a negligible effect on [the number of] eucalypt tree seedlings, compared to the distance from the nearest mother tree. But the range of estimated effects is gobsmacking. It fits with my own experience that lots of small differences in the analysis workflow can add to large variation [in results].

Simon Griffith collected the blue tit data more than 20 years ago, and it had not previously been analysed because of the complexity of decisions about the right analytical pathway. He said:

This study demonstrates that there isn’t one answer from any set of data. There are a wide range of different outcomes and understanding the underlying biology needs to account for that diversity.

Meta-researcher Fiona Fidler, who studies research itself, said:

The point of these studies isn’t to scare people or to create a crisis. It is to help build our understanding of heterogeneity and what it means for the practice of science. Through metaresearch projects like this we can develop better intuitions about uncertainty and make better calibrated conclusions from our research.

What should we do about it?

In our view, the results suggest three courses of action for researchers, publishers, funders and the broader science community.

First, we should avoid treating published research as fact. A single scientific article is just one piece of evidence, existing in a broader context of limitations and biases.

The push for “novel” science means studying something that has already been investigated is discouraged, and consequently we inflate the value of individual studies. We need to take a step back and consider each article in context, rather than treating them as the final word on the matter.

Read more: The science 'reproducibility crisis' – and what can be done about it[13]

Second, we should conduct more analyses per article and report all of them. If research depends on what analytic choices are made, it makes sense to present multiple analyses to build a fuller picture of the result.

And third, each study should include a description of how the results depend on data analysis decisions. Research publications tend to focus on discussing the ecological implications of their findings, but they should also talk about how different analysis choices influenced the results, and what that means for interpreting the findings.
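To make the idea concrete, here is a minimal sketch of what reporting multiple analyses can look like. It uses simulated data, not the study's datasets, and the three "analyst pathways" are illustrative assumptions: each is a defensible choice, yet each yields a different estimate of the grass-cover effect.

```python
import numpy as np

# Hypothetical illustration (simulated data, not the study's datasets):
# the same dataset analysed three defensible ways gives three estimates.
rng = np.random.default_rng(42)

n = 200
grass = rng.uniform(0, 100, n)   # % grass cover (simulated)
dist = rng.uniform(1, 50, n)     # distance to nearest mother tree, metres (simulated)

# Seedling counts driven mainly by distance, only weakly by grass,
# echoing the article's "negligible effect compared to distance" remark.
lam = np.exp(2.0 - 0.04 * dist - 0.002 * grass)
seedlings = rng.poisson(lam)

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    xc = x - x.mean()
    return float((xc * (y - y.mean())).sum() / (xc * xc).sum())

# Three analyst "pathways" applied to the same data:
estimates = {
    "raw counts": slope(grass, seedlings),
    "log(counts + 1)": slope(grass, np.log1p(seedlings)),
    "zeros excluded": slope(grass[seedlings > 0], seedlings[seedlings > 0]),
}

# Reporting all three, rather than picking one, is the point.
for choice, est in estimates.items():
    print(f"{choice:>16}: estimated slope = {est:+.5f}")
```

Each line of output is one analytic pathway; a paper reporting all of them, with a sentence on why they differ, gives readers a far better sense of how robust the conclusion is than any single number would.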

Read more https://theconversation.com/two-questions-hundreds-of-scientists-no-easy-answers-how-small-differences-in-data-analysis-make-huge-differences-in-results-216177
