AI and COVID-19: A New Era of Surveillance?
In the wake of the coronavirus pandemic, artificial intelligence (AI) and related technologies can make a significant difference in containing the virus, for example by warning individuals of potential contact with infected people. AI, built on data science, machine learning, and ‘big data’, is being applied in medicine, health management, and public policy: in the development of contact tracing apps, drugs, and vaccines, and in the management of healthcare services. In response to the pandemic, many government agencies and private companies have adopted these innovative technologies, which often come hand in hand with invasive surveillance, uncontrolled monitoring, and intrusive detection measures.
Unrestricted and unethical use of AI may dramatically impact human rights, especially those of marginalized and vulnerable populations. ‘The digital age has opened new frontiers of human welfare, knowledge and exploration. Yet new technologies are too often used to violate rights through surveillance, repression and online harassment, and hate. Advances such as facial recognition software, robotics, digital identification, and biotechnology, must not be used to erode human rights, deepen inequality or exacerbate existing discrimination’, acknowledged UN Secretary-General António Guterres in a statement to the UN Human Rights Council on 24 February 2020.
For instance, an Amnesty International investigation reveals that contact tracing apps in Bahrain (‘BeAware Bahrain’), Kuwait (‘Shlonik’), and Norway (‘Smittestopp’) capture GPS location data and upload it to a centralized database, making it possible to track the movements of all users in real time. In the absence of transparency and of limits on data collection, retention, and use, such mobile location tracking programs pose a serious risk to human rights, including the rights to privacy and to freedom of movement, expression, and association. These fundamental rights are particularly threatened in non-democratic states, such as China and Russia, which already practice intrusive surveillance for political reasons and lack an appropriate legal framework.
According to the United Nations Conference on Trade and Development, 132 out of 194 countries (roughly two-thirds) have adopted legislation to secure the protection of data and privacy.
Privacy and Data Protection legal framework in Europe
In 2018, the European Union enacted the world’s toughest online privacy law, the General Data Protection Regulation (GDPR), which empowers regulators to impose fines of up to 4 percent of a company’s annual revenue if its data collection practices put privacy and security at risk. Yet even the EU, with the strongest legal framework, has struggled to enforce the law for lack of funding and resources: since 2018, Google has been the only major company penalized. And a Eurobarometer survey from May 2019 found that only 20 percent of Europeans knew which authority is responsible for protecting their data.
Amid the coronavirus pandemic, the EU member states decided to set rules to help develop contact tracing apps that comply with the GDPR. In April 2020, under the umbrella of the Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project, an international consortium of technologists, legal experts, engineers, and epidemiologists proposed keeping all data on users’ phones: the DP-3T (Decentralized Privacy-Preserving Proximity Tracing) protocol. DP-3T avoids a centralized database, so personal information, including location data, never leaves an individual’s device. The protocol uses Bluetooth Low Energy to identify contacts, so that the system can automatically alert users who were close to a person who later tested positive. Even though Bluetooth is considered the more privacy-friendly option, it is impossible to remain completely anonymous while using such an app: re-identification attacks have succeeded even against anonymized datasets.
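The decentralized idea described above can be sketched in a few lines. This is a minimal illustration, not the real protocol: the key lengths, the number of ephemeral IDs per day, and the use of counter-mode HMAC as a stand-in for the specified PRF/PRG are simplifying assumptions here; the full specification lives in the DP-3T documents repository.

```python
import hashlib
import hmac
from typing import List

# Simplified sketch of a DP-3T-style decentralized design.
# Assumptions: 4 ephemeral IDs per day (the real protocol uses one per
# short epoch) and HMAC-SHA256 in place of the specified PRF/PRG.
BROADCAST_KEY = b"broadcast key"
EPHIDS_PER_DAY = 4

def next_day_key(sk: bytes) -> bytes:
    """Daily secret-key ratchet: SK_{t+1} = H(SK_t)."""
    return hashlib.sha256(sk).digest()

def ephemeral_ids(sk: bytes, n: int = EPHIDS_PER_DAY) -> List[bytes]:
    """Derive the day's ephemeral Bluetooth identifiers from SK_t."""
    prf = hmac.new(sk, BROADCAST_KEY, hashlib.sha256).digest()
    return [hmac.new(prf, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
            for i in range(n)]

# Each phone broadcasts its EphIDs over Bluetooth and locally stores the
# EphIDs it hears nearby. A user who tests positive publishes only their
# past daily keys; every other phone re-derives the EphIDs locally and
# checks for a match. No central database of contacts is ever built.
sk_day0 = hashlib.sha256(b"initial secret").digest()   # hypothetical key
observed = set(ephemeral_ids(sk_day0))                 # IDs heard near that user

published_key = sk_day0                                # infected user uploads SK only
matches = [e for e in ephemeral_ids(published_key) if e in observed]
print(f"exposure detected: {bool(matches)}")           # prints: exposure detected: True
```

The design choice that matters for privacy is visible in the last lines: the server only ever relays the short daily keys of consenting positive users, while the matching happens entirely on each phone.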
The problem of anonymization matters because the GDPR grants no protection to anonymous data; it applies only to data relating to an identifiable person. Moreover, there is no international or regional standard, nor any agreed threshold, defining when data can be de-anonymized without infringing human rights.
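A toy sketch of how a linkage-style re-identification attack works: a dataset stripped of names is joined with a public register on quasi-identifiers such as postcode, birth year, and sex, and a unique join re-attaches a name to a supposedly anonymous record. All names, records, and fields below are invented for illustration.

```python
# "Anonymized" health records (names removed) and a public register.
# Everything here is fabricated example data.
anonymized_health = [
    {"zip": "02138", "birth_year": 1954, "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_year": 1960, "sex": "M", "diagnosis": "diabetes"},
]
public_register = [
    {"name": "Alice Example", "zip": "02138", "birth_year": 1954, "sex": "F"},
    {"name": "Bob Example",   "zip": "02144", "birth_year": 1960, "sex": "M"},
]

QUASI_IDS = ("zip", "birth_year", "sex")

def reidentify(health_rows, register_rows):
    """Join the tables on quasi-identifiers; a unique match
    re-identifies a record despite the missing name."""
    hits = []
    for h in health_rows:
        matches = [r for r in register_rows
                   if all(r[k] == h[k] for k in QUASI_IDS)]
        if len(matches) == 1:            # unique match => re-identified
            hits.append((matches[0]["name"], h["diagnosis"]))
    return hits

print(reidentify(anonymized_health, public_register))
# -> [('Alice Example', 'hypertension')]
```

The point of the sketch is that removing direct identifiers is not enough: whether such a join succeeds depends on how unique the combination of quasi-identifiers is, which is exactly the kind of threshold no current standard defines.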
The Ethical Use of AI and Global Governance
Beyond data protection and privacy regulation, global rules shaping the use and growth of AI are highly relevant. At the global level, the United Nations Interregional Crime and Justice Research Institute (UNICRI) has issued recommendations to help fight the pandemic while avoiding infringements of human rights: a clear timeline for the use of tracking tools; knowledge-sharing and open data access; a guarantee that personal data collected to track the pandemic is not reused for other purposes; and data anonymization.
The concept of purpose limitation is already in force in Europe under the GDPR, and data anonymization is used by Google, Apple, and the many European countries collaborating to increase user privacy (e.g., through the DP-3T protocol). Other states and companies, however, neglect these recommendations: even though data protection and privacy legislation is in place in China, authorities still process vast amounts of citizens’ private data through mandatory GPS contact tracing apps. This highlights the need for specific international or regional regulations and global standards on the ethical use of AI, which could be highly effective in protecting human rights worldwide.
Global governance, i.e., international or regional standards, can support AI policy goals by fostering trust among states, research efforts, technology developers, and users; encouraging the efficient development of increasingly advanced AI systems; and spreading beneficial systems and practices globally. UNESCO, for instance, is already working on the first global standard-setting instrument on the ethics of artificial intelligence, while the Organisation for Economic Co-operation and Development (OECD) and its member states prioritize trustworthy AI through the Recommendation of the Council on Artificial Intelligence.
On the one hand, the coronavirus pandemic has shown that AI can make a significant difference by guiding politicians and scientists in the fight against the virus. On the other, it has provoked heated debate about the rise of intrusive surveillance worldwide. Weak data protection and privacy laws, or the absence of such legislation, together with inadequate anonymization technologies, put human rights at risk, deepen inequality, and exacerbate existing discrimination. Such risks require coordinated global governance responses.
The discussion of data protection and privacy goes hand in hand with the conversation about the ethical and responsible use of AI. The EU, which pioneered a restrictive regulatory approach to data protection and privacy, has served as a model for many jurisdictions: the United Kingdom, Brazil, and several US states have enacted similar legislation. DP-3T is considered the most privacy-friendly protocol for contact tracing, but it is not yet clear whether it is fully effective or can truly guarantee privacy. Meanwhile, no country or region has adopted specific standards or legislation on the responsible and ethical use of AI. Global standards for AI will not achieve every policy goal, but they can help foster trust among states, scientists, developers, and users, spread beneficial practices, and encourage AI’s development.
AI needs more research, and society needs more awareness of the dangers of uncontrolled information sharing. Both would foster the adoption of strong data protection and privacy legislation and of global AI standards that build people’s trust in the technology and make it more effective. AI does not have to mean intrusive surveillance, provided its use is governed by rules and overseen by society.
References
Adam Satariano (2020), ‘Europe’s Privacy Law Hasn’t Shown Its Teeth, Frustrating Advocates’, The New York Times. Available at: https://www.nytimes.com/2020/04/27/technology/GDPR-privacy-law-europe.html [Accessed on 26 June 2020]
Amnesty International (2020), ‘Bahrain, Kuwait and Norway contact tracing apps among most dangerous for privacy’. Available at: https://www.amnesty.org/en/latest/news/2020/06/bahrain-kuwait-norway-contact-tracing-apps-danger-for-privacy/ [Accessed on 29 June 2020]
Council of Europe (2020), ‘Artificial Intelligence and the control of COVID-19’. Available at: https://www.coe.int/en/web/artificial-intelligence/ai-covid19 [Accessed on 30 June 2020]
DP3T – Decentralized Privacy-Preserving Proximity Tracing (2020). Available at: https://github.com/DP-3T/documents [Accessed on 29 June 2020]
European Commission (2020), ‘Data Protection in the EU’, Available at: https://ec.europa.eu/info/law/law-topic/data-protection/data-protection-eu_en [Accessed on 26 June 2020]
Human Rights Watch (2019), ‘Russia: New Law Expands Government Control Online’. Available at: https://www.hrw.org/news/2019/10/31/russia-new-law-expands-government-control-online [Accessed on 26 June 2020]
Human Rights Watch (2020), ‘Covid-19 Apps Pose Serious Human Rights Risks’. Available at: https://www.hrw.org/news/2020/05/13/covid-19-apps-pose-serious-human-rights-risks [Accessed on 26 June 2020]
Irakli Beridze and Maria Eira (2020), ‘Evolution from a social animal to a virtual animal?’, UNICRI. Available at: http://www.unicri.it/news/article/evolution_socialanimal_virtualanimal [Accessed on 25 June 2020]
Kathleen Walch (2020), ‘AI Laws are Coming’, Forbes. Available at: https://www.forbes.com/sites/cognitiveworld/2020/02/20/ai-laws-are-coming/#cb351aa2b48f [Accessed on 29 June 2020]
OECD Legal Instruments (2019), Recommendation of the Council on Artificial Intelligence. Available at: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 [Accessed on 30 June 2020]
Peter Cihon (2019), ‘Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development’, Technical Report, University of Oxford. Available at: https://www.fhi.ox.ac.uk/wp-content/uploads/Standards_-FHI-Technical-Report.pdf [Accessed on 29 June 2020]
Sergio Miracola (2019), ‘How China Uses Artificial Intelligence to Control Society’, ISPI. Available at: https://www.ispionline.it/it/pubblicazione/how-china-uses-artificial-intelligence-control-society-23244 [Accessed on 26 June 2020]
UNCTAD (2020), ‘Data Protection and Privacy Legislation Worldwide’. Available at: https://unctad.org/en/Pages/DTL/STI_and_ICTs/ICT4D-Legislation/eCom-Data-Protection-Laws.aspx [Accessed on 30 June 2020]
UNESCO, ‘Elaboration of a Recommendation on ethics of artificial intelligence’. Available at: https://en.unesco.org/artificial-intelligence/ethics [Accessed on 30 June 2020]