Longread

Meat in a database


Organic meat is more expensive than non-organic meat. But does it taste better too? Wageningen researcher Hans Spoolder wants to answer this question in the mEATquality project. For this European project, data is being collected in different countries, and a data warehouse will be set up in which all that data can be brought together. “It is essential that everyone measures the same things and submits data in the same way.”

“People who eat organic products say they taste better,” says Hans Spoolder of Wageningen Livestock Research. “But do they? And if so, why?” For the four-year European research project mEATquality, Spoolder will investigate whether pork and chicken from extensive livestock farming taste better than meat from intensive husbandry. The researchers will visit farms in Denmark, Germany, Poland, Spain and Italy, looking at animal welfare, the animal breeds and the type of food the animals are given.

Throughout the project, the meat from the different farms will be presented to taste panels. The meats’ properties will also be investigated in the lab. Spoolder: “We will do a comprehensive chemical and physical analysis of the meat. We want to link this to the origin of the meat. As an example, we are looking for isotopes that indicate whether a pig has been eating Spanish or Polish grass.”

Lots of different kinds of data

All in all, many different kinds of data will be collected from various European countries: scores from the animal welfare questionnaires that researchers complete on farms, data from the taste panels and data from the lab research. It must also be possible to link all this data together: after all, you want to be able to check whether the meat from a German organic pig farm really does taste different and have a different composition from the meat from a conventional Danish pig farm.

We need to set up a watertight coding system that allows you to trace where the meat sample comes from, from start to finish
Hans Spoolder, project leader at Wageningen Livestock Research

Moreover, all data has to comply with the General Data Protection Regulation (GDPR), which means that farmers’ personal data must not be visible. Spoolder: “We try to ensure anonymity as much as possible. Every farmer is given a number and a country indication. But only the researchers from the relevant country know which farmer is behind that code.”
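
A minimal sketch of what this pseudonymisation could look like in practice; the code format and names below are illustrative assumptions, not the project’s actual scheme:

# Hypothetical sketch: each farm gets a code made up of a country indication
# and a sequence number. The lookup table that maps codes back to the real
# farms stays with the national research team and never enters the warehouse.
def pseudonymise_farms(farms, country_code):
    public_codes = []
    private_lookup = {}               # kept only by the researchers in-country
    for number, farm in enumerate(farms, start=1):
        code = f"{country_code}-{number:03d}"
        public_codes.append(code)
        private_lookup[code] = farm
    return public_codes, private_lookup

codes, lookup = pseudonymise_farms(["farm A", "farm B"], "DE")
print(codes)                          # ['DE-001', 'DE-002'] is all the warehouse sees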

Data warehouse

Wageningen University & Research is coordinating this project and is also building the data warehouse where all the data will be assembled. How do you approach something like that? That is the domain of Wouter Hoenderdaal, developer at Wageningen Food Safety Research. Hoenderdaal: “The project is still in its infancy, but the processes that precede data collection are of equal importance. It is essential that everyone measures the same things and submits data in the same way. We send all researchers a specific template that they can use to fill in their data.”

As mentioned, it is important that the data can be linked as part of the mEATquality project. Hoenderdaal: “Part of the animal is sent to the lab, another part to the taste panels. So we need to set up a foolproof coding system that allows you to trace where the meat sample came from: which animal, farm, region and country.”
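
As an illustration of the kind of coding system Hoenderdaal describes, the sketch below chains country, region, farm, animal and sub-sample into one code. The format is a hypothetical example, not the scheme actually used in mEATquality.

from dataclasses import dataclass

@dataclass(frozen=True)
class SampleCode:
    country: str    # e.g. "PL"
    region: str     # e.g. "MAZ"
    farm: str       # anonymised farm number, e.g. "017"
    animal: str     # animal number on that farm, e.g. "0042"
    part: str       # "LAB" for the lab analysis, "PAN" for the taste panel

    def __str__(self):
        return f"{self.country}-{self.region}-{self.farm}-{self.animal}-{self.part}"

# Both samples trace back to the same pig on the same farm.
print(SampleCode("PL", "MAZ", "017", "0042", "LAB"))   # PL-MAZ-017-0042-LAB
print(SampleCode("PL", "MAZ", "017", "0042", "PAN"))   # PL-MAZ-017-0042-PAN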

We ensure that everything is foolproof; after all, not every researcher is equally tech-savvy
Wouter Hoenderdaal, developer at Wageningen Food Safety Research

Two parts

The data warehouse consists of two parts and a kind of gateway. The latter is a file system into which researchers can upload raw data themselves; they are only granted access rights to their own folders. Hoenderdaal: “All the files will also be password protected, so user X can only access their own folder and read their own files there.”
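
How such personal folders might be set up is sketched below, assuming a Unix-style file server; the real gateway may just as well use SFTP accounts or network shares, and the path is invented.

import os
from pathlib import Path

GATEWAY_ROOT = Path("/data/meatquality/uploads")        # hypothetical location

def create_upload_folder(username: str) -> Path:
    """Create a personal upload folder that only its owner can read or write."""
    folder = GATEWAY_ROOT / username
    folder.mkdir(parents=True, exist_ok=True)
    os.chmod(folder, 0o700)    # no access rights for other users
    return folder

create_upload_folder("researcher_dk_01")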

The actual data warehouse consists of a development database and a production database. Hoenderdaal: “In the development database, we build and test, and once everything is correct there, all the data is sent off to the production database. Researchers do not have access to the development or production databases, but they do have access to the file system.” This is to prevent the database from being cluttered with unusable data, or worse, partly deleted by an absent-minded researcher. “We are creating the database in Postgres, an open-source relational database, where the data is stored in a structured way.”
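
A sketch of what that split could look like: the same table structure is first created and tested in the development database and only then rolled out to production. The connection strings and the example table are assumptions for illustration.

import psycopg2   # standard PostgreSQL driver for Python

DDL = """
CREATE TABLE IF NOT EXISTS lab_results (
    sample_code TEXT PRIMARY KEY,   -- e.g. 'PL-MAZ-017-0042-LAB'
    animal_code TEXT NOT NULL,      -- e.g. 'PL-MAZ-017-0042'
    parameter   TEXT NOT NULL,      -- e.g. 'intramuscular_fat'
    value       NUMERIC NOT NULL,
    unit        TEXT NOT NULL
);
"""

for dsn in ("dbname=meatquality_dev", "dbname=meatquality_prod"):
    conn = psycopg2.connect(dsn)
    try:
        with conn, conn.cursor() as cur:   # commits automatically on success
            cur.execute(DDL)
    finally:
        conn.close()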

The transfer of data from the file system to the development database is automated. “We write scripts in Python so that the researchers’ files automatically end up in the right place in the database. The scripts cannot prevent an incorrect file from being uploaded to the file system, but they will not recognise it, so it never ends up in the database. That keeps incorrect data out of the database. We ensure that everything is foolproof; after all, not every researcher is equally tech-savvy.”
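
A simplified sketch of such a script: it walks through the upload folders, only recognises files whose header matches the agreed template, and hands the rows on for loading into the development database. The column names, file layout and paths are illustrative assumptions.

import csv
from pathlib import Path

UPLOAD_ROOT = Path("/data/meatquality/uploads")       # hypothetical gateway path
EXPECTED_COLUMNS = ["sample_code", "animal_code", "parameter", "value", "unit"]

def matches_template(csv_path: Path) -> bool:
    """A file is only recognised if its header is exactly the agreed template."""
    with open(csv_path, newline="") as handle:
        header = next(csv.reader(handle), [])
    return header == EXPECTED_COLUMNS

def collect_valid_rows():
    for csv_path in sorted(UPLOAD_ROOT.glob("*/*.csv")):
        if not matches_template(csv_path):
            print(f"Skipping unrecognised file: {csv_path}")   # never reaches the database
            continue
        with open(csv_path, newline="") as handle:
            yield from csv.DictReader(handle)

for row in collect_valid_rows():
    print(row)   # in the real pipeline these rows are inserted into Postgres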

Researchers do need to be able to search the production database so that they can compare their data with that of others, even though they have no direct access to the databases themselves. How do Hoenderdaal and his colleagues plan to solve this? “We expect that researchers will largely want to see standard datasets, which combine certain data. We can put these in a secure folder. If a researcher has a very specific question, we can compile a customised dataset for them.”
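
One way such a standard dataset could be produced is sketched below: a query that combines lab results with taste-panel scores per animal and writes the result to a file in the secure folder. The table names, the join and the output path are assumptions.

import psycopg2

EXPORT_QUERY = """
COPY (
    SELECT l.animal_code, l.parameter, l.value, l.unit, p.panel_score
    FROM lab_results l
    JOIN panel_scores p USING (animal_code)
) TO STDOUT WITH CSV HEADER
"""

conn = psycopg2.connect("dbname=meatquality_prod")      # hypothetical connection
try:
    with conn.cursor() as cur, open("/data/meatquality/standard_datasets/lab_vs_panel.csv", "w") as out:
        cur.copy_expert(EXPORT_QUERY, out)
finally:
    conn.close()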

Obstacles

What are the obstacles to this kind of international data exchange? Hoenderdaal: “Language can cause problems. The language of communication is English, so errors may occur when a researcher translates from their native language into English. The researchers have built in a check themselves: they first translate an English text into German, for instance, and then back again. If the second English text is the same as the first, they know the translation is correct.”

A second obstacle has to do with the system itself: a relational database like Postgres is well suited to storing structured data, but less so to unstructured data such as PDFs or pieces of text. Hoenderdaal: “You may receive structured data about a particular meat sample, from a lab for example, but you may also receive scans. After all, not everything can be captured in structured data. We still have to come up with something that links the unstructured data to the structured data. So there is a lot for us to learn in this project.”
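
One possible approach, purely as a sketch: keep the scans and PDFs on disk and store a small metadata record that ties each file to a sample code, so the unstructured file can always be found from the structured data. The table and column names are invented for illustration.

import hashlib
from pathlib import Path
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS attachments (
    sample_code TEXT NOT NULL,    -- links the scan to the structured records
    file_path   TEXT NOT NULL,    -- where the PDF or scan is stored on disk
    sha256      TEXT NOT NULL     -- checksum so the file can be verified later
);
"""

def register_attachment(cur, sample_code: str, path: Path) -> None:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    cur.execute(
        "INSERT INTO attachments (sample_code, file_path, sha256) VALUES (%s, %s, %s)",
        (sample_code, str(path), digest),
    )

conn = psycopg2.connect("dbname=meatquality_dev")       # hypothetical connection
try:
    with conn, conn.cursor() as cur:
        cur.execute(DDL)
        register_attachment(cur, "PL-MAZ-017-0042-LAB", Path("scan_0042.pdf"))
finally:
    conn.close()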

Meat database

If it were all up to Hans Spoolder, the mEATquality project would form the basis for a large, European database on meat origins. Spoolder: “This kind of European database exists for wine. The Oritain company is setting up a database for beef and lamb. They are interested in our data on chickens and pigs. The traceability of meat is important in the prevention of meat fraud; think of the horse meat scandal and meat being labelled as organic when it actually comes from factory farming.”

Spoolder: “Identifying meat fraud is a sideline of our project. We don’t have the budget to develop it further, but we will be able to contribute to an international meat database in due course.”