Trust in data and AI depends on effective governance ecosystems
Findings from the World Risk Poll add to the growing evidence that public trust in data and AI depends on effective, trustworthy governance.
This page was written by Dr Aaron Ions Gardner
If you live in a country with a strong food-safety ecosystem, you can walk into any supermarket, pick a sandwich off the shelf and trust that it is safe to eat. If you board a plane at a major international airport, you can take your seat trusting that the engines work and the pilot is trained.
In both these instances, you don't need to think too hard about the risks of the sandwich or the plane (unless you have an allergy or aerophobia). You just intuitively trust that the risks are mitigated, because you know that governance systems exist around both food and aviation safety.
Data and artificial intelligence are not sandwiches or planes. But they are just as pervasive in almost all aspects of everyday life around the world. From healthcare to finance, entertainment to climate change research, big data and AI technologies are shaping the present and future of how we live, how economies work, and how we’ll address the biggest challenges facing our societies.
While the potential benefits of this datafication are huge, from better services to scientific breakthroughs, so are the potential harms. Data and AI technologies risk exacerbating the inequalities and discrimination already present in societies, concentrating power and control in the hands of a relatively small number of organisations and individuals, and shifting the way democracies and vital societal services work.
Part of the problem is that, unlike for sandwiches and aeroplanes, we do not yet have effective, trustworthy governance ecosystems for data and AI.
In a 2021 paper, academics Bran Knowles and John T. Richards argued that ‘public distrust of AI originates from the underdevelopment of a regulatory ecosystem’. In this they recognise that, because data and AI technologies are so novel and the law is catching up, everyday people have no reassurance that the right protections and oversight are in place to minimise the harms and maximise the benefits. (I also credit Knowles for the aeroplane governance comparison.)
The World Risk Poll is an important piece of the puzzle in understanding the link between governance of, and public trust in, data and AI. Three findings in particular stand out.
Firstly, the poll finds that a huge proportion of people around the world are concerned about data protection: 77% are worried about their personal information being stolen, 75% are worried about companies using their personal data without permission, and 68% are worried about the government using their personal data. To find such consensus in a global poll is no small thing. It shows the scale of concern about the datafied world we now live in, where many people feel that personal data is gathered and used without sufficient governance.
Secondly, the poll finds higher optimism about AI in countries with large innovation sectors, strong economies and robust legal systems, such as Germany, Japan and China. All of these are components and products of effective governance.
Thirdly, and perhaps most tellingly, the poll finds that citizens of countries with higher scores on the Rule of Law Index are less worried about their personal information being stolen. In other words, people living in countries with reliable governance systems are more confident that their data won't be misused.
Taken together, these findings contribute to the growing picture that creating effective and robust governance ecosystems for data and AI is vital if members of the public are to trust them.
The poll’s findings also speak to research we’ve conducted at the Ada Lovelace Institute on what people in the UK think about data, AI and governance. Our 2022 review of UK public attitudes research found that the UK public want data-driven innovation but expect it to be ethical and responsible. It also found that building an effective governance ecosystem is crucial to earning the public’s trust in data and the technologies built upon it. And findings from citizens’ juries we conducted in 2021 about the governance of data during the pandemic show what people feel are important principles for effective governance: things like transparency, consent, accountability.
Aviation and food safety standards didn't appear overnight. And public trust in food outlets and in flying is not eternal or unwavering. But over decades, the components of effective governance have developed in both the food and aviation industries: safety inspections, strict codes of conduct, checks and balances, laws, mechanisms to hold bad actors to account, investigations into mistakes, and much more. Because of this, the vast majority of people trust sandwiches and aeroplanes, and both industries thrive. (And where mistakes have happened and trust has faltered, the regulatory ecosystems have had to respond and adapt to re-earn it.)
Data and AI are not sandwiches and aeroplanes, of course. And the governance ecosystems that emerge around novel data-driven technologies won’t be cookie-cutter copies of other governance ecosystems. But the evidence that links effective governance of and public trust in data and AI is mounting, drawing from academic studies, civil society research, and surveys like the World Risk Poll.
That evidence increasingly suggests that building effective, trustworthy governance ecosystems for data and AI is paramount to ensuring their benefits are maximised, their risks are minimised, and public trust in them is earned.