The Open Science story of Peter Tamás

Published on September 2, 2024

For this Open Science story, we interviewed Peter Tamás. Peter is a lecturer and researcher in research methodology at the Biometris group. He explains how GenAI technologies and natural language processing (NLP) can improve qualitative analysis and systematic review.

What are the main challenges in qualitative research?

When we conduct interviews, we’re often trying to explore topics where we don’t know what’s important yet. That’s why interviews are so open-ended. However, this also means that the data we collect can be overwhelming and difficult to analyse without bias. Our brains naturally interpret information based on our past experiences, which can lead to unintentional biases in our analysis.

What can we do to avoid some of these problems?

AI and NLP can help by providing a more consistent and systematic way to analyse qualitative data. For example, these tools can detect patterns in the text and identify potential biases that a human researcher might overlook. Using these tools, you can repeat your analysis with different parameters to check whether your results are robust. They can also help structure the analysis of texts in a way that makes it easier to spot errors or inconsistencies.
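To make this concrete, here is a minimal sketch in Python of the kind of repeatable pattern detection Peter describes: topic modelling run several times with different parameters, so that themes which keep reappearing can be treated as more robust. The library (scikit-learn), the transcripts, and the parameter choices are hypothetical illustrations, not the specific software used at Biometris.

```python
# Illustrative sketch (hypothetical data and parameters), assuming scikit-learn >= 1.0.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical example data: one string per interview transcript.
transcripts = [
    "open data sharing improves reproducibility of results",
    "funding pressure pushes researchers toward bold claims",
    "peer review quality depends on transparent methods",
    "collaboration across chair groups needs better incentives",
]

# Turn the transcripts into a document-term matrix.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(transcripts)
vocab = vectorizer.get_feature_names_out()

# Re-run the analysis with different numbers of topics and random seeds;
# themes that keep reappearing are more likely to be robust patterns
# than artefacts of one particular parameter choice.
for n_topics in (2, 3):
    for seed in (0, 1):
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
        lda.fit(doc_term)
        for topic_idx, weights in enumerate(lda.components_):
            top_words = [vocab[i] for i in weights.argsort()[-3:][::-1]]
            print(f"k={n_topics} seed={seed} topic {topic_idx}: {', '.join(top_words)}")
```

Because every step here is open code, the analysis can be inspected, shared, and rerun, which is the kind of transparency Peter finds missing in black-box commercial tools.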

However, many commercial NLP tools are like black boxes—they work, but we don’t always understand how they arrive at their conclusions. The companies behind these tools don’t share the GenAI algorithms and training data. This lack of transparency is a problem because we need to be able to reproduce the results and understand any inherent biases.

How do you envision the future of open science and qualitative research?

One of the biggest challenges in open science is upholding the standards of scientific integrity and assessing the quality of research.
Peter Tamás

Currently, impact factors are a standard metric, but they’re failing us more and more. Scientists often get rewarded for being popular rather than producing good work.

I think the future lies in better use of AI in systematic review and open science practices. This will require collaboration across different fields with libraries playing a central role in supporting and guiding these efforts. The goal should be to create a research environment where quality is measured by the rigour and validity of the work.

What steps should be taken to improve the evaluation of research quality in open science?

First, we need to move away from relying solely on traditional metrics like citation counts and impact factors. Instead, we should focus on creating tools that can assess the quality of research based on more meaningful criteria, such as the appropriateness of the methods used or the coherence of the arguments presented. AI can play a big role here, helping us to analyse large volumes of data more effectively. By developing our own tools to analyse qualitative data more accurately and transparently, we can ensure that research is not only openly accessible but also of high quality. But these tools must remain open and transparent, so that they align with the principles of open science.

How do you view the new Academic Career Framework in this context?

The Academic Career Framework is a positive step forward.

It’s great that the university is trying to move away from individual popularity and impact factors as criteria for quality and excellence.
Peter Tamás

Collaboration will be better rewarded. However, in practice, there are still significant issues. For example, we are always encouraged to share our PhD candidates so that they can work for more than one chair group. Yet the PhD funding goes to the chair groups themselves, so there is no good incentive to let candidates collaborate across chair groups.

How can we address the public distrust in science, so often voiced on social media?

When people distrust the outcomes of research, I see this as a consequence of the current “popularity contest” in science. Scientists are pressured to make bold claims when reporting on their research. This risks creating distrust and cynicism, because those claims often do not hold up.

We need to make gentle but rock-solid claims in science and to develop tools to assess research quality beyond simple popularity metrics.
Peter Tamás

What do you think can be the library’s role in this changing research landscape?

I strongly feel that libraries should lead in developing and maintaining tools for better research assessment, ensuring that they align with open science principles. As centres of knowledge sharing, libraries are well-positioned to facilitate the availability of innovative tools that enhance research quality.

I hope that in the future, new AI tools can enhance the quality of qualitative analysis and systematic literature reviews. It would be great to have tools that help researchers navigate large volumes of data, identify relevant information, and assess the appropriateness of research methods.

This Open Science Story is based on an interview with Peter Tamás by Annemieke Sweere from WUR Library and Ben Excell, WUR’s Open Science community manager.