Tech Ethics: Here’s Why We Have To Start with Big Data

We have to change our relationship with technology. It’s not just what it can do for us, but what it can do to us. This is an ethics discussion.


For the last six years, my research collaborations across academia and business have seen me studying the impact that emerging technology will have on society. To be more specific, I seem to have permanently set up camp at the intersection of social life, information and technology. My most recent research project has me looking for biases contained within big datasets: the datasets that get used to train machine learning systems that eventually come to life as Artificial Intelligence. In the interests of full disclosure, it has been both one of the most interesting and one of the most challenging (read: infuriating at times) projects I have ever worked on. The reason? I keep ramming my head into Ethics… or rather, the lack thereof. But ethics isn’t where today’s blog starts. We have to talk about data first.
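
To make that concrete, here is a minimal sketch of the kind of first-pass check a bias audit of a training dataset might start with. The tiny in-memory dataset, the column names and the values are all invented for illustration; real audits go far deeper.

```python
# A hypothetical first-pass bias check on a training dataset.
# The in-memory data below stands in for a real dataset; the
# column names and values are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "demographic_group": ["A", "A", "A", "B", "B", "A", "A", "B"],
    "label":             [1,   1,   0,   0,   0,   1,   0,   0],
})

# How evenly are the groups represented in the training data?
print(df["demographic_group"].value_counts(normalize=True))

# Does the positive-label rate differ across groups? Gaps like
# this tend to be learned, and then reproduced, by whatever
# model is trained on the data.
print(df.groupby("demographic_group")["label"].mean())
```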

Something very interesting has happened to AI discourse over the last six years. First by ascribing the word ‘Intelligence’ to machines, and then by depicting AI as either an ethereal brain or a weird shiny robot (one that is often white, may I add), we’ve gone beyond the realms of anthropomorphism. Coupled with threatening, oversimplified headlines, we’ve started to talk about AI as if it were sentient: a conscious ‘other’ we can’t control that is coming to get us.

This is deeply problematic for a number of reasons. Firstly, it has created a distorted and confusing veil around the word ‘autonomy’, and the resulting belief that machine and human autonomy are the same, when they are not. The knock-on effect is profound: it creates a socio-technical blindness in which the created are disconnected from their creators. We focus so much on ‘the machines’ that we forget to talk about the role of humans in the creation, design and deployment of those machines. This blindness is two-fold, though, and presents itself in another way. Dig into API documentation, the reference material developers use to build tech products, and the people using the technology appear as ‘user ID’, ‘contact’ or some other benign label.
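
For a flavour of what that looks like, here is an invented example of the kind of payload you meet in API documentation. The field names are hypothetical but typical; the point is that the human being on the other end is present only as a couple of benign fields.

```python
# An invented example of a typical API event payload. The person
# behind the request exists only as "user_id" and "contact"; the
# human has been abstracted into a pair of benign fields.
event = {
    "user_id": "u_48201",           # a person, reduced to an identifier
    "contact": "jane@example.com",  # a person, reduced to an inbox
    "event_type": "purchase",
    "timestamp": "2018-05-14T09:32:00Z",
}
print(event["user_id"])
```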

How did we get here? I have a theory. Right now we’re in a scarcity trap. There’s a shortage of money, growth is sluggish and we’re killing ourselves trying to find the next big idea. So where do we turn? We turn to science. Science has courted us since modernity and we are loved up with stars in our eyes. And since the boom of social media data, oh boy. We believe we can now track, measure, percentage and pie-chart the human experience to within an inch of its existence, because look, we have the evidence; it’s literally staring us in the face. We’re desperately looking for something to save us when instead we should be looking for something to support us.

As a result, our relationship with technology hinges not only on what it can do for us, but also on the stuff it gives us. Data is our drug. And because we’re in the scarcity trap, we believe the more data we have, the better off we will be. But here’s the thing with data: once you have it, you have to make it mean something, and as it turns out, that’s a pretty tough job. It’s also a pretty dangerous job, because you are a pretty complex being, with a history and prejudices and judgements of your own.

Tech that documents and records social life and behaviour isn’t neutral. It’s personal, subjective and intricately woven into context. There is no such thing as ‘raw’ data, and technology is never just a product. Technology is a system. A system connected to the many systems of society: trust, power and information. At the beating heart of these many systems is ethics. How should we live and behave towards one another? And like data, the answer to that question is personal, subjective and contextual.

As I’m sure you’ve gathered by now, I believe that the tech ethics conversation needs to start with data. We have to understand how data will get used, and start being mindful of how we will measure its impact. There are algorithms that can detect a person’s pulse rate from a YouTube video, and others that claim to infer a person’s sexual orientation from their face. What is more personal than a heartbeat and who you choose to sleep with? Do we really need access to this kind of data? How does it serve us? How will it serve others?

Are you sitting uncomfortably yet?

In 2009, in an interview sharing his thoughts on innovation, Mark Zuckerberg told Business Insider that in order to innovate, you have to “move fast and break things.” “Unless you are breaking stuff, you are not moving fast enough,” he said. He was 25 at the time. I wonder if those words flashed through his mind as he took a seat to face Congress last month. I don’t know if it was his intention, but “move fast and break things” became the anthem of tech start-ups and disruptors everywhere; a kind of war cry against the status quo. The impact is that it is now commonplace to give software engineering teams the autonomy to test and learn at scale, on a live platform, on an unsuspecting audience. Launching in beta isn’t just accepted, it’s expected, and baked into the phrase is the licence to act immediately and, should you need to, ask for forgiveness (or deny accountability) tomorrow. Considering some of the already documented failures of AI systems in action, these don’t feel like sound business strategies to keep pursuing.

What about…

  1. Do we really have to move this fast? There is no such thing as an Ethics drive-through.
  2. How can we make our engineering teams more diverse? What if your next hire was an anthropologist, archaeologist, data ethnographer or feminist AI researcher? Do you really need another data scientist or do you actually need a social scientist?
  3. How can we better distribute technology decision-making throughout the organisation, so that we unlock it from the product and engineering teams and disperse responsibility and accountability across the business? Perhaps we need an in-house ethics council that debates options and sense-checks them for potentially discriminatory or otherwise negative impacts.
  4. What if we spent as much time, energy and financial resources measuring the impact of our creations as we do designing and branding them? Perhaps it’s time to rethink why we hero product over process so much.
  5. What about being both transparent and accountable? Let’s be open about the data we use and why we choose one dataset over another. Let’s build in feedback loops so that the people using our products and services can flag when a product goes rogue and behaves badly. (A minimal sketch of both ideas follows this list.)
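
To make that last point less abstract, here is a minimal sketch of both ideas. Everything in it is hypothetical: the datasheet fields, the flag store and every name are illustrations of the shape such things could take, not a real library or API.

```python
# Hypothetical sketch: documenting a dataset choice, plus a simple
# feedback loop for flagging bad behaviour. All names are invented.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DatasetDatasheet:
    """Record what a dataset is and why it was chosen, so the choice
    can be inspected and questioned later."""
    name: str
    source: str
    collected_for: str  # the purpose the data was originally gathered for
    known_gaps: list    # who or what is under-represented
    chosen_over: list   # alternatives considered, and why they lost

flags = []  # in a real product this would be a persistent store

def flag_bad_behaviour(user_id: str, description: str) -> None:
    """The feedback loop: let people report when the product misbehaves."""
    flags.append({
        "user_id": user_id,
        "description": description,
        "reported_at": datetime.utcnow().isoformat(),
    })

sheet = DatasetDatasheet(
    name="customer-interactions-2017",
    source="in-house CRM export",
    collected_for="billing, not behavioural modelling",
    known_gaps=["customers who phoned rather than emailed"],
    chosen_over=["a third-party dataset with unclear consent provenance"],
)
flag_bad_behaviour("u_48201", "The recommendation I got felt discriminatory.")
```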

We have to change our relationship with technology, and perhaps we can start by inverting the power balance we have with data. It’s not just what technology can do for us, but what it can do to us. Questioning and evaluating the impact of the data we use and the technology we build needs to inform the conversations we have about ethics. Because if we don’t, we risk the biases of a few becoming the lived experience of many.

Machine autonomy combined with unquestioning human authority is a recipe for disaster. It’s time to ask yourself some tough questions.
