
If Mary has a little lamb, who should know about it?

Open Access | CC-BY-4.0


By Gideon Kruseman and Robert Hijmans

The General Data Protection Regulation (GDPR) went into effect on May 25, enforcing new guidelines and restrictions for the collection and storage of personally identifiable information (PII). This puts many organizations, including IFPRI and the other international agricultural research centers of CGIAR—which are dedicated to providing research data products as global public goods—in a difficult position. Increased open access to such data products is fueling a “big data revolution” in agronomy and international development. The GDPR could significantly impact this progress.

The recent Facebook-Cambridge Analytica scandal and the presumed unmasking of the street artist Banksy via data methods have raised awareness that private and public data can be used for unexpected and possibly undesirable purposes. While these examples are outside the scope of agricultural development research, they suggest how personal data from farmers might be used for unintended purposes.

If the knowledge that Mary has a little lamb is useful to those providing needed, quality services and to wolves alike, who should know?

The scientific community is, therefore, asking what GDPR means for the provision of these global public goods.

The GDPR could jeopardize efforts to bring the big data revolution to farmers in low and middle-income countries. This is because many international organizations that work in these countries have ties to Europe and may be exposed to litigation under the regulation. Clear guidelines for implementation of the GDPR in research are still lacking, and in most research institutions where the relevant policies have not been updated to address the new situation, there is a possibility that legal departments will argue for undue restrictions to be on the safe side.

Depersonalizing data and aggregating it to a level that is beyond all reproach is possible, but that may also mean that the value of the data is lost—for researchers, product developers, and for the data subjects themselves. One of the big opportunities in digital agriculture is in developing personalized services for farmers—if you aggregate out all the specificity in the data, you can’t personalize the information and you are denying the end-users the opportunity to benefit from it.

Before we delve into GDPR and what it may mean for science, we need to be clear that the issues at hand pertain especially to raw human-subjects research (HSR) data. Much other agricultural research data has already been sufficiently de-identified when used for analysis and these data sets do not fall under the GDPR.
Here are the main standards of GDPR pertaining to data that contains personally identifiable information—with which all organizations are expected to comply:

  • Transparency and lawfulness: Personal data must be processed lawfully, fairly, and transparently.
  • Purpose: There has to be a specific and legitimate reason behind the collection and the processing of the data. To put it simply, you have to be able to explain clearly and in great detail why you are gathering the data and what you are going to do with it.
  • Minimization: You should collect only the minimum amount of data needed for your purpose. On top of that, you should keep only the data that is absolutely necessary to keep.
  • Accuracy: Personal data should be accurate and kept up to date. Personal data that is unreliable or outdated should be corrected or deleted.
  • Storage: As soon as the personal data is no longer necessary for your purpose, it has to be deleted. This does not mean all data has to be deleted, but only the personally identifiable information.
  • Confidentiality and integrity: Privacy-sensitive data should be stored securely.

But that is not enough. At least one of the following criteria needs to be met to comply with the first standard mentioned above:

  • Consent: The data subject should provide unambiguous positive consent regarding the processing of his/her personal data for specific purpose(s).
  • Performance of contract: The data in question should be processed in order to allow/facilitate the performance of a contract that the data subject is part of. Data processing can also be a way for the data subject to initiate a contract agreement.
  • Legal requirement: The data manager is obliged by national or European law to process the personal data. This is not relevant for the cases we are discussing here.
  • Vital interests: In cases of emergency (e.g., medical), the processing of personal data is allowed in order to protect interests of vital importance either for the data subject or another person in connection with the subject.

Guidelines for how to manage data collection, storage, analytics, and reporting will vary across scientific domains, but the standards outlined above provide a clear demarcation.

Let’s examine some key examples of data that would fall under GDPR:

  • Household survey data on the adoption of agricultural technology and its effect on livelihoods that is made public.
  • The use of satellite remote sensing data to make inferences about cropping systems.
  • The development and provision of data-driven services to smallholders using multiple data sources including farmer data.

The first case is a classic human-subjects research (HSR) example. We can assume that the research went through an Institutional Review Board or Ethics Committee for clearance, which means the issue of consent should have been addressed in detail. The purpose of the data collection is clearly defined in a research protocol. Raw HSR data should always be stored in a secure location. For research purposes, the individual is usually not the object of investigation; depersonalized data are used to infer broader conclusions. The crucial role of ethics committees is to pass judgment on how the data is analyzed.

Most research is not problematic since it does not need personally identifiable data. Personal data is usually collected for follow-up research and/or for auditing enumerators. Geospatial locations of fields and homesteads are often collected but do not need to be published. Data made public through open access is outside the sphere of influence of an ethics committee. It should therefore be stripped of all personally identifiable data and blurred so that the geospatial coordinates point to a general area rather than a specific household. How best to do this is a research question, as the amount of blurring required depends on location and data type. Data such as household rosters should be aggregated to prevent identification of households by combining the data with other sources such as social media feeds. Finally, any sensitive questions and their answers should be masked.
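The stripping and blurring described above can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed standard: the column names, the 5 km blur radius, and the example record are all invented for the example.

```python
import math
import random

# Direct identifiers to drop before publication (illustrative assumption).
PII_COLUMNS = {"name", "phone", "national_id"}

def blur_coordinates(lat, lon, radius_km=5.0):
    """Displace a point uniformly within radius_km so the coordinates
    point to a general area rather than a specific household."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    dist = radius_km * math.sqrt(random.random())  # uniform over the disc
    dlat = dist * math.sin(angle) / 111.0          # ~111 km per degree of latitude
    dlon = dist * math.cos(angle) / (111.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

def deidentify(record):
    """Copy a survey record, dropping direct identifiers and blurring
    the homestead location; research variables are kept intact."""
    clean = {k: v for k, v in record.items() if k not in PII_COLUMNS}
    if "lat" in clean and "lon" in clean:
        clean["lat"], clean["lon"] = blur_coordinates(clean["lat"], clean["lon"])
    return clean

record = {"name": "Mary", "phone": "555-0100", "lat": -1.29, "lon": 36.82,
          "lambs_owned": 1}
public_record = deidentify(record)  # identifiers gone, location blurred
```

Note that this handles only direct identifiers; as the paragraph above says, quasi-identifiers such as household rosters still need aggregation to prevent re-identification through linkage.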

Providing more detailed metadata on relevant data that has been masked or blurred can provide interested third parties the necessary information to request the original data for specific lawful purposes.

For maintaining high-quality research data, one generally does not want to update personally identifiable or other data; a dataset is a snapshot in time and needs to be curated as it was collected. In contrast, the GDPR prescribes either updating or deletion. The pragmatic thing to do would be to delete PII as data gets older, but we note that the risk of abuse of the data, in fact, diminishes as the data gets older.

Remotely sensed satellite data is publicly available, whether free or at a cost. However, the data pertains to the assets of individuals and the decisions they have made on the management thereof. Ideally, such data are used together with field observations, e.g., on input use and crop response at specific locations. Given the large spatial and temporal variability in agriculture, this type of research needs compilations of data from many field research programs, and it thus depends on the availability of open data.

Two opposing viewpoints emerge. On the one hand, there is the strong conviction that the availability of accurately georeferenced field data provides a large benefit to society, as it finally allows us to properly take spatial variation into account in agricultural research across disciplines, while personal risks are likely minimal. On the other hand, making field data available through open access could be seen as a violation of a subject’s privacy. Moreover, there is no prior informed consent when using satellite data.
Because of the potential for misuse of granular geospatially referenced data, research organizations are increasingly requiring ethics clearance for research using such data. An ethics clearance can address many of the issues related to the data use.

It is, however, very difficult to do an ethics review on open access data, as future data use is unknown.

Open access is fundamental to maintaining public research in agricultural development—otherwise, only private companies and perhaps a few very large research institutions would have relevant data.

The challenge thus is to come up with reasonable guidelines on how to blur PII to strike a balance between individual risk and potential societal benefit. Research is needed to set guidelines about, for example, how much noise to add to location data given the location, and the type of data provided.
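One candidate for such a location-dependent guideline is "donut" geomasking, which displaces each point by at least a minimum and at most a maximum distance, with wider donuts in sparsely populated areas. The density thresholds and radii below are invented for illustration; setting them well is exactly the research question raised above.

```python
import math
import random

def mask_radii_km(pop_density_per_km2):
    """Sparser areas need larger displacement to hide a household.
    These thresholds and radii are illustrative assumptions only."""
    if pop_density_per_km2 > 1000:   # dense urban
        return 0.5, 2.0
    if pop_density_per_km2 > 100:    # peri-urban
        return 2.0, 5.0
    return 5.0, 10.0                 # rural

def donut_mask(lat, lon, pop_density_per_km2):
    """Displace a point by at least r_min km (so the true location is
    never revealed) and at most r_max km (so the data stays useful)."""
    r_min, r_max = mask_radii_km(pop_density_per_km2)
    dist = random.uniform(r_min, r_max)
    angle = random.uniform(0.0, 2.0 * math.pi)
    dlat = dist * math.sin(angle) / 111.0
    dlon = dist * math.cos(angle) / (111.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# A rural point (density 50/km2) is moved somewhere 5-10 km away.
masked_lat, masked_lon = donut_mask(10.0, 10.0, 50)
```

The minimum displacement is what distinguishes the donut from simple jitter: with jitter alone, some points land almost exactly on the true household.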

In the third case, data is used in a contractual relationship between a farmer and a service provider; there is consent, and therefore the use of the data is lawful. It then becomes important for the service provider to adhere to the standards. Fortunately, GDPR captures this case explicitly.

There is a clear role for scientific communities to develop guidelines for specific purposes, as the three examples demonstrate.

An overly cautious approach to data sharing would make it legal, but would also inhibit uses of the data in ways that can benefit the development of the agricultural sector in low and middle-income countries.

The ability to learn from the combined long-term data sets that include field trials, household surveys and remotely sensed data holds enormous promise. We should be careful to not squander the public benefit for only hypothetical risks to individuals. Through careful use of prior informed consent, we could exercise both the right to be forgotten (to not be identified) and the right to be remembered (to assure that research data can be used for maximal benefits).

Pragmatic approaches need to be developed to ensure that the protection of individuals is very high—while maximizing the potential benefits to society.

Gideon Kruseman coordinates the community of practice on socioeconomic data with the CGIAR Platform for Big Data in Agriculture and is the big data focal point for the International Maize and Wheat Improvement Center (CIMMYT). Robert Hijmans is an Associate Professor of Environmental Science and Policy at the University of California-Davis. This post first appeared on the Big Data in Agriculture blog.
