In search of an ‘ontsporende maatschappelijke discussie’
Summary
The University of Amsterdam (UvA) and the Vrije Universiteit Amsterdam (VU) have established a collaboration with Huawei, a global tech company headquartered in China. The firm will contribute funding and data to a project called the DReaMS Lab, which will work on improving search engine technology. This collaboration is deeply troubling: Huawei is actively involved in serious human rights violations in China against the Uighur people, who are bringing a claim of genocide against China through the UK courts. The firm is also accused of setting up surveillance infrastructures and programs that have been used against activists and civil society. Evidence from multiple sources shows that this partnership has not been subject to appropriate ethics review and should not have been approved. We join earlier initiatives in raising our concerns about this collaboration, and we call for a more effective and meaningful ethics review of such research partnerships by universities, for Dutch institutions not to engage with Huawei as a partner, and for a public debate on how such engagements with companies should be pursued.
Statement
A collaboration with industry partners?
How should academic researchers collaborate with the private sector? Academic funders such as the NWO and the European Commission increasingly demand that researchers collaborate with companies and source research funding from the private sector. The collaboration between the University of Amsterdam (UvA), the Vrije Universiteit Amsterdam (VU) and Huawei makes clear that this policy provides no safeguards against collaborations with those complicit in grave human rights violations. Huawei will invest 3.5 million euros and will contribute data to the new ‘DReaMS Lab’, which will work on improving search engine technology. There has been resistance from, among others, employees of the UvA and VU, and from the UvA’s Centrale Ondernemingsraad (COR), the central works council that represents university employees as a group. The COR says: ‘collaboration implies that the company is welcome in the West and appears to legitimise its other practices, despite its dubious recent history’.
We would like to emphasise that our objection to this collaboration is not about drawing a simple line between tech companies of the West and “the rest”, but about assessing proposed collaborations with all private institutions by meaningful standards that include human costs.
Who are the collaborators?
The DReaMS Lab is led by three researchers at the VU and UvA, who are also members of the KNAW (the Royal Netherlands Academy of Arts and Sciences); one of them co-developed the Dutch AI Research Agenda [https://www.nwo.nl/en/news-and-events/news/2019/11/first-national-research-agenda-for-artificial-intelligence.html]. The lab will conduct research on information retrieval, popularly known as search. The project leaders explain that Huawei has a renewed interest in information retrieval due to geopolitical shifts, namely a US boycott that blocked Huawei’s access to Google services, including search. In return for providing access to its data and internal services, the company expects to benefit from the research results as well as from access to ‘talent’.
The funder, Huawei, is a global technology company that also maintains a global track record of human rights violations. In China, Huawei has been using Uighur slave labour provided by the government and has contracted to provide the surveillance apparatus used to control and persecute the Uighur population: activities for which the Chinese government is now facing charges of genocide in the UK courts, and for which numerous international organisations have called on the UN to investigate. Huawei has not merely offered technical support to the Chinese government’s persecution of the Uighur people: the firm has been shown to be actively engaged in R&D with the security services in Xinjiang and with media providers on projects to manipulate public opinion in the province.
Huawei also contracts with governments in other countries to provide illegitimate spying software and services. In Uganda, Huawei engineers have configured NSO Group spyware for the government. NSO Group has been heavily criticised since research by the renowned Citizen Lab (University of Toronto) showed that its software was used to target journalists (amongst others, New York Times journalist Ben Hubbard) and human rights defenders. Huawei’s ‘Safe City’ program, the surveillance product the company markets internationally as a ‘competitive’ way to ‘improve policing efforts’ in cities, has raised further concerns among civil society and activists across the world.
What was the review process for this collaboration?
Neither of the two universities shows any sign of having weighed the ethical or political implications of this deal. Instead, the partnership has been scrutinised on the basis of a limited understanding of academic integrity: will the researchers be free to publish their findings without commercial claims or editorial hindrance, will the research groups in question be free to choose their own staff, and will the products be free and open for others to use? The answer to the first two questions is yes, but to the third, no: Huawei will receive a commercial interest in the results of the research, since it will have the right to apply for patents on the research outputs.
The government, on the other hand, has raised concerns about national security, in response to which data management plans have been reviewed by the Dutch intelligence services. Yet the clearance that the AIVD [the Dutch national security agency] has given the universities’ cooperation with Huawei concerns exclusively the national security of the Netherlands. Both these approaches, focusing on academic integrity and national security, miss the point. As the COR points out, by collaborating with Huawei the UvA and VU are contributing something priceless to the company: legitimacy. If Huawei wants to operate in the EU, it needs high-profile allies and projects to direct attention away from its politics.
The researchers leading the AI project reassure the press that ‘the same rules and guarantees hold’ as with their collaborations with ‘ING, Ahold and Elsevier’. But none of these companies is currently accused of collaborating in genocide. Rather than asking the obvious questions about this partnership, both university and governmental authorities are working hard to avert any discussion of the bigger picture. A leader of the lab has said that ‘the crucial risk is of a derailing social controversy’ (‘ontsporende maatschappelijke discussie’), inadvertently acknowledging that a discussion focused on the actual issues at play might indeed endanger the collaboration. We reject the idea that as researchers we have a responsibility not to rock the boat, or that we are betraying our universities by questioning the enthusiastic welcome this company is receiving into our academic community.
What should be the scope of an ethics analysis?
In Race After Technology, Ruha Benjamin shows how, while many firms are complicit in persecution and human rights abuses, others actively promote them. She offers the example of Polaroid, which during the apartheid era in South Africa developed new photo-identification techniques to capture black faces for the pass-books used to restrict the freedom of black citizens. Polaroid’s support of the South African regime’s aims can be compared to the collaboration between IBM and the Nazi government in Germany in the 1930s, where the firm devised innovative technical and financial constructions to capture the market for identifying and locating the groups eradicated in the Holocaust.
Computer science, as a field of ever-growing importance, has yet to resolve its ethics problem. Most recently, we have observed how research ethics in AI is actively shaped by economic priorities and urgently needs updating so that it can withstand this pressure. Meaningful ethical review must take account of human consequences, yet if the data to be supplied by Huawei to the new lab were considered by an ethics review board, it is unlikely that it would be flagged as problematic. There is a well-documented mismatch between conventional research ethics and practices in big data and AI development, where the subjects of research are often too far downstream of the research to be considered at risk of harm. In this case, we have no information about how the datasets that will be shared came into being, who the downstream subjects of this research will be, or which values will be prioritised as a result of the collaboration. How can research ethics assess harmful (side) effects more effectively and include a transnational dimension? Even if research ethics were fully up to the task, the relevant independent ethical oversight is missing at the UvA. The university’s ‘general institutional ethics commission’ (Algemene Instellingsgebonden Ethische Commissie) has reportedly not met in two years and has neither a chairperson nor an agenda.
What next?
It is time to update university ethics processes to reflect the kinds of collaboration being proposed. The UvA’s COR has called for a collective Code of Conduct to be established and for a university-wide Ethics Committee to review all research partnerships with companies before they are brought to the Executive Board and the COR. We support their position, and call on all Dutch universities to adopt this approach and to institute ethical review at the central level that, at a minimum, takes into account the human rights records of proposed collaboration partners. The case of Huawei, like that of other companies from all parts of the globe, also makes clear that it is necessary to move beyond ethics committees to a public debate. We urge Dutch institutions, including municipalities and other public authorities, not to engage with Huawei as a partner, and we urge the Dutch government to consider not only matters of academic integrity and the security implications of working with companies, but human rights violations as well.
Funding Matters
(If you want to sign on to this statement, send an email to signon@fundingmatters.tech indicating your name and affiliation. If you have other questions, send an email to enquiries@fundingmatters.tech.)
Co-signed:
Global Data Justice project – Tilburg University
DATACTIVE research project – UvA
Faculty Student Council of the Faculty of Humanities (FSR FGw) – UvA
Bits of Freedom
Lonneke van der Velden – UvA
Niels ten Oever – UvA
Stefania Milan – UvA
Linnet Taylor – Tilburg University
Seda Gürses – TU Delft
Hans de Zwart – Amsterdam University of Applied Sciences
Annelies Moors – UvA
Thomas Poell – UvA
Joris van Hoboken – UvA
Sarah Eskens – UvA
Olav Velthuis (AISSR) – UvA
Matthijs Koot – UvA
Alex Gekker – UvA
Zazie van Dorp – UvA
Marjolein Lanzing – UvA
Naomi Appelman – UvA
Kristina Irion – UvA
Jill Toh – UvA
Ot van Daalen – UvA
Giulia Ranzini – VU
Arno Lodder – VU
Miriyam Aouragh – University of Westminster
Angela Wigger – Radboud University
Tamar Sharon – Radboud University
Bart Jacobs – Radboud University
Esther Keymolen – Tilburg University
Aviva de Groot – Tilburg University
Aaron Martin – Tilburg University
Gijs van Maanen – Tilburg University
Tobias Fiebig – TU Delft
Jaap-Henk Hoepman – Radboud University and University of Groningen
Francien Dechesne – Leiden University
Carolina Frossard – UvA
Luiza Bialasiewicz – UvA
Nadya Purtova – Tilburg University
Nanne van Noord – UvA
Robin Celikates – FU Berlin (Ex-UvA)
Josef Früchtl – UvA
Vincent de Rooij – UvA
Natalie Scholz – UvA
Annalisa Pelizza – University of Bologna and University of Twente
David Kuric – UvA
Joël van der Weele – UvA
Virginie Mamadouh – UvA
Julia Hoffmann – UvA
Tineke Broer – Tilburg University
Monique Mann – Deakin University
Nadege Merabet – UvA UMC