What are the externalities of AI and why do they matter for designers?
To start the conversation, each of the panelists gave a short introductory speech. Here is what I said, adapted for reading.
Please be aware, in this introduction I am going to refer to marginalisation, exploitation and domestic abuse. Read with care.
I am an advocate and a consultant for Responsible Design.
How did I get here? Let me tell you a bit about my journey.
I studied Industrial Design in Milan many years ago, witnessed and contributed to the rise of digital, design thinking and UX. I have been a great supporter of technological innovation, worked with startups and embraced the culture of experimentation that came with it. I did believe that technology could save the world. Maybe you do too?
Technology sometimes feels like magic: when you can push a button and get groceries delivered to your home, when you can write down a thought and get a full essay, when you can fold your phone’s screen. Wow, you can see a whole world of possibilities!
But if you look behind the curtain, you realise that delivering that magic has some ‘externalities’.
Externalities are effects that happen beyond what is declared to be the main scope. They may be intentional or not, expected or unexpected.
It may be that the business model behind getting groceries delivered at home relies on surveillance and data exploitation. It may be that, in order for you to get an essay not filled with hate speech, people in the Global South are reading and labelling the most horrendous hateful content. It may be that, in order for you to get the foldable phone, your old one was designed to expire and end up in a landfill.
These effects are sometimes the actual core business model or main goal of a company; sometimes they are externalities. And sometimes tech and AI are genuinely meant ‘for good’ but simply fail to take certain scenarios into consideration.
Smart home applications, which rely on AI, are designed to help people live more comfortably. The same functionalities are used by domestic abusers, who manipulate the thermostat or the lights to harass their victims.
Does that make smart home technology ‘bad’ or ‘good’? Neither. These technologies are really powerful, and the circumstances they get used in are very complex. We should treat them with responsibility. We cannot leave the full responsibility for what we design to our users. The externalities, the ‘edge cases’, are real people and deserve better consideration.
As designers and UX professionals, we have incredible power in shaping technology and AI: in deciding what we want to work on and, just as importantly, how.
Let’s assume, to take another example, that you decide to work on the most ‘good’ AI application you can think of: say, helping refugees find resources.
How you design it matters. If your application learns from the wrong set of data, it may discriminate against people or perpetuate harmful narratives and stereotypes. If it stores personal information, it may be misused by authorities. If it is not well explained, it may end up wasting resources.
You might tell me, ‘Well, I was only planning to use AI, not to design it.’ The same logic applies, and even more so. The moment you decide which AI to use, in which context, for which purpose, and which prompts you are going to feed it, you are making choices. Those choices matter and may have consequences.
You can outsource a lot of work to AI, but you cannot outsource your critical thinking or your ethics.
This is what Responsible Design is about: it does not determine what is good or what is bad, but it gives you tools and practices to explore the nuances and make informed decisions. It helps us look with a critical eye at established AI, UX, design and business practices, and examine when they become manipulative, extractive, excluding or polluting.
It prioritises respect for people, equality and sustainability in the design of businesses, products, services, interfaces and anything else that needs to be designed.
What are your thoughts? How can we introduce these kinds of considerations in the design, development and utilisation of AI and Tech?
I would like to add a quote from one of my brilliant co-panelists, Marieke Peeters:
For me there is a big difference between AI for Good and Responsible AI; I think the two are often mixed up. People will say, ‘We are working on AI for Good, we are using it with this super nice impact intention.’ But even if you are using AI to bring education to people in rural areas where it is very difficult to go to school, if you are not respecting their privacy, or if you are not explaining to them how you make the decision about who gets to use your tool, or if you are using some weird business model that makes it accessible only to people who have money, for instance, then it may be AI for Good, but you are not doing it Responsibly.
There is a very big difference between these two things and it is a big challenge to hit the mark on both.