The next wave of AI is rooted in human culture and history

"One of the challenges now is: What is the value of our humanity?"

Genevieve Bell is a full-time anthropologist and part-time futurist at Intel.

Understanding humans is essential to the design and experience of any technology. For decades, major corporations have turned to social scientists for insight into human behavior, culture and history. At Intel, Genevieve Bell, a prominent Australian anthropologist with a Ph.D. from Stanford University, has been tracking societal trends across the world to help build technologies that are fine-tuned to the needs of the people who will interact with them.

Bell started working at Intel in 1998. She brought her anthropological research and fieldwork techniques to the world of microprocessors, wearables and artificial intelligence. Over the years, her formal role has evolved from director of user experience at Intel's research lab to VP of corporate strategy. But regardless of the titles, her work has remained firmly focused on studying the patterns and complexities of human behavior across cultures. In her self-proclaimed role as a "full-time anthropologist and part-time futurist," she examines the meaning of "intelligence" within the context of machines, while she continues to trace its cultural impact on humans and their relationships.

At a time when robotic helpers and virtual assistants are starting to infiltrate our personal lives, the need to assess the implications of this new kind of interaction feels more pertinent than ever. I recently called Bell to talk about the social impact of building relationships with our machines and the ways in which the story of AI is deeply connected to the history of human culture.

In what ways does the study of human societies and cultures drive technological innovation? And how does that translate into your work at Intel?

I think of it as this challenge where we have built machinery and then we say, "Here it is, now use it." We need to have a more balanced approach, which is to have a robust understanding of people's pain points and also their aspirations. What are they trying to get done? What are the barriers to that, and what do they wish were possible? And conversely, what do they not want to have happen? With those things, you can develop a set of insights about what people care about, and that should be a part of the design process. It's not just about designing technology; government services and infrastructure should have an eye on those things too.

At Intel, I would like to imagine that is part and parcel of how we do things. What are the experiences this technology can deliver? What can we make possible with our technology that wasn't possible before? When you look at the range of places where Intel is engaged these days, you can see that.

As we're thinking about new technology objects, whether it's wearables or other things like sensor technology that's closer to the body, we think about what people want to get done. How do we build technologies that speak meaningfully to that? Even on our side, where we're thinking about data and the cloud – we're moving to a world where it's more machines talking to machines – we need to be paying attention to the consequences of all that data movement. Paying attention to the experiences that technology could or should deliver, and to the barriers you're trying to remediate, is all part of what it means to think about people when you're building technology.

How does your experience and work as an anthropologist inform your perspective of artificial intelligence?

My mom is also an anthropologist, and the joke in my family is that anthropology is less a vocation and more a mind-set. It's a way of looking at the world that I don't know how to escape from. I had an ex-boyfriend once tell me I was a terrible person to go on a vacation with. He said, "You treat vacation like fieldwork," and I was like, "I treat life like fieldwork."

For me, there are some key tenets to being an anthropologist that I bring everywhere. I want to make sense of something as a story. When I first started thinking about artificial intelligence, I wanted to know two things: What was the work that the phrase artificial intelligence was doing? So what was that language doing, and what did it privilege and erase? And then I wanted to know its history. Where did it come from? What are the other stories and narratives that it's attached to? That leads you to ask questions about who coined the phrase and why.

As an anthropologist, I wanted to interrogate AI not just as a technical agenda but as a cultural category. I wanted to look at the intellectual history of it. I found myself reading [Alan] Turing and his incredibly provocative question: "Can a machine think?" And the whole notion of the Turing test – is there a moment where we as humans can no longer distinguish ourselves from the machines? It's a really interesting formulation of both a technical idea and a cultural one. It's also where you can see the cultural ambivalences and anxieties too.

In the conversations in the press and public culture, AI is often accompanied by everything from the language around the robot apocalypse, the singularity, to the idea that they'll replace or kill us, all depending on the narrative. I was interested in why those two stories were so tightly coupled. Why have the conversations around AI always necessitated this other conversation? Unpicking that was also a very anthropological endeavor.

Turing's ideas are incredibly prominent in the narrative around AI. But there are also ideas generated and popularized by Hollywood. As someone who is examining the language and meaning of AI, how do you define it?

The language of AI is interesting. "Artificial" is a word whose oppositional points are things like natural, real or, increasingly, organic. In the 1950s, artificial was a really good word. It meant things like tidy and scientific and unmired by the messiness of biology. And intelligence stands in opposition to emotionality, affect, irrationality and stupidity. You start to see the notion that AI is rooted in: something clean and tidy, uncomplicated by human emotions and irrationality, and somehow pristine.

For me, artificial intelligence is a catchall term and it's one that's cycled in and out of popularity. It's back at the moment. It's an umbrella term under which you can talk about cognitive compute, machine learning and deep learning, and algorithms. It's a catchall because it means everything and nothing at the same time. It's a cultural category as much as a technical one.

One of the challenges for AI is that it is always and already twinned with the cultural imagination of what it would mean to have technologies that could be like humans. And that's a preoccupation that preexists Hollywood. Mary Shelley wrote Frankenstein 200 years ago and that is in some ways one of the quintessential stories about a technology trying to be human. And that in and of itself is based on earlier stories like the Golem out of Jewish Kabbalism and the notions that thread through almost every major world culture and religion about humans trying to bring something to life and about the consequences of that, which are always complicated and rarely good.

"We talk about artificial intelligence, but we often don't talk about artificial affect or artificial emotionality or morality or artificial souls. All of those are in some ways a part of our intellect."

In most cultural traditions, the only people who get to make things come to life are gods, and humans shouldn't do that work; nothing good will come of it. That's been around for thousands of years. It doesn't surprise me that artificial intelligence kind of echoes back to that history.

For me, it means a constellation of scientific, technical and regulatory agendas to develop a set of technologies that have the capacity for thought that mirrors a piece of human intellect but not all of it. In its most narrow sense it's about technologies that will allow a certain class of decision-making. In its current instantiation, circa 2016, it's a lot about pattern-matching and the capacity to look through data sets.

One of the challenges now is: What is the value of our humanity? Is what it means to be human reducible to our intellect? If so, what do we imagine intellect is? And if not, then what are we designing out of the system that we might actually want to have in the system? We talk about artificial intelligence, but we often don't talk about artificial affect or artificial emotionality or morality or artificial souls. All of those are in some ways a part of our intellect. We don't think about that because we sometimes have the capacity to drive intelligence down to playing chess or Go and the ability to do a Q&A. Frankly, most people's intellect is much more expansive and complicated than that.

In what ways do you think these mainstream ideas of AI get in the way of our understanding of it? Is there something you'd like to see changed in the narrative around it?

We frequently dismiss the fears without acknowledging that they are based in a little bit of truth. Humans have built technical systems for a long time, and they've often had unintended consequences. And sometimes those unintended consequences were quite difficult to live with. We built cars but it took us another 70 or 80 years to build safety equipment into them. We had cars long before we had seat belts or traffic lights and road rules. There is a piece there that says maybe when humans go, "Oh, that's how I feel about that," it's not because they're afraid of the science, they're afraid of themselves.

What would happen if we took the fears seriously enough not to just dismiss them with, "You've watched too many Terminator movies," but actually took them seriously and said, "Here are the guardrails we're implementing; here are the things we're going to do differently this time around; and here are the open questions we still have"? That's an interesting way of doing it, but the other way is to be acutely aware that most science fiction, while based in fact, sells more when it's dystopic and not utopic.

So you have the distinct clash between scientists who are often techno-deterministic and optimistic and science fiction, which is techno-deterministic but pessimistic. How do you diminish the delta between those two things? How do you acknowledge the anxiety and not make it irrational while also moving people through it?

You often talk about the "preoccupation to make something like us, to bring something to life." What are the cultural implications of doing that?

It's different in various cultures. There are lots of stories in cultural narratives about who gets to bring things to life and under what circumstances. Those vary by cultural tradition, and there are also very different notions about what can be sanctioned and what can't. Even within the late-industrial, post-Enlightenment West, our ideas about what can think have changed dramatically.

In the last 20 years, zoology has pushed those boundaries a great deal, and we are now more willing than ever to allow that animals have many things that we once regarded as human: capacity for logic, dreaming, intergenerational transmission of knowledge and language, tool-making – those were the things we thought made us human.

There's also stuff around symbolic and magical thinking in animals that has in some ways finally blown the boundary line. We're now willing to acknowledge that different species of birds recognize things and engage in pattern-matching and recognition; studies of crows, in particular, suggest that.

"In most cultural traditions, the only people who get to make things come to life are gods, and humans shouldn't do that work; nothing good will come of it. That's been around for thousands of years. It doesn't surprise me that artificial intelligence kind of echoes back to that history."

Even in our own tradition we're increasingly blurring the boundaries of what makes humans distinctive versus everything else. For a long time there was a bright line between what was human and what wasn't. Frankly, if you think about the last 300 years, not all humans were on the right side of that line. If you think about the way the West thought about the Aboriginal people in my country 200 years ago, they weren't fully human. If you think about the way most constitutional documents enshrined women, they were mostly human but didn't have the capacity to vote. You know, the idea of who is in and who is out of humanness is contested territory.

Think about even the last 20 years of debates around homosexuality, or the politics of gender and race, and you can clearly see that even the notion of who is human and who isn't human amongst humans is complicated. One of the lines we have historically drawn is about the capacity for thought or intellect; that was one of the things that sanctioned an object to reflect on itself. We've credited different sorts of humans with that capacity over the years. What it means to be human is never as straightforward as it seems. We've been contesting that category for a really long time.

Particular challenges around AI are tied up with all of those stories, too. Part of the reason it feels tricky is that all of the pieces in that equation are not as stable as they appear. Reality is, we've had these internal debates about what's human and what isn't for a very long time, and this is one more complexity on top of that. To stabilize those debates, what people are doing is making the human piece easy, and it clearly isn't.

A still from Frankenstein, 1931. Image: Universal Film Archive/Getty Images.

In constantly engaging with technologies, we've been building a new kind of social relationship. In what ways are we knowingly and unknowingly building that relationship through artificial intelligence? Take voice, for instance: it's a subtle but profound way to connect with a machine, and it's being used across devices now.

I'm right there with you. Voice is a surprisingly intimate way of engaging with things. Text is one thing, and I'm willing to bet many of us type things we would never say. Much of the internet is predicated on people typing things they would never say. With voice, it's not as simple as the only time we talk is to other people. We talk to gods when we pray; some of us pray out loud. We talk to our pets. Some people talk to their cars. But most of those other objects don't talk back. Your pets may recognize your voice and alter their behavior; if they're cats, usually not. But there's a piece there that says those are intimate exchanges because of voice.

I got an Alexa [Amazon's personal assistant] when it first came out last year, and I was shocked at how it felt like a personal relationship because I was talking to it. I had asked Alexa to set an alarm, and when it went off I said thank you. I literally said to myself, "I just thanked a machine, good Lord!" I found that because she and I interact via voice, it feels much more intimate. I've read up on how people are using it; I am by no means alone. There's something about the voice interface that's really interesting as a social experiment.

What's fascinating there is that I suspect you can get to relationships with technology without AI as the backbone – you can have relationships and connection without it necessarily behaving like a human. If it did, you can imagine other values coming into play. Maybe it's about emotional intelligence and not problem-solving intelligence.

Imagine a point when Alexa or another similar object is back-ended by more data than she currently has. If she has spent enough time in my kitchen, then she knows enough about me to say: "You have an 8 o'clock meeting, the traffic is terrible, you're late! Get out of bed, get on the road!" She can start to be not so much bossy as nurturing. So the frame there is not about recommendations, which is where much of AI is now, but is actually about nurture and care. If those become the buzzwords, then you sit in this very interesting moment of being able to pivot from talking about human-computer interactions to human-computer relationships.

Within the context of this new human-computer relationship, what should we be most aware or cautious of?

It's the same things we are cautious of in all relationships. We already have different scales of relationships in our lives. [But] machines will not be humans, so this is a different kind of relationship. It's a reasonable set of challenges for us as citizens and consumers but also for technologists and designers to think about things like: What's the nature of this relationship, and what happens when you break up? Am I keeping my stuff or are you taking it back? How does that manifest itself as data? What are the values built into those things and how do you navigate different value systems? People who are designing AI will build into it assumptions about how the world will be. And, you know, there are places where we'll need to ask: What is the vision of the world that is being enacted here?

Speaking of designing AI and building ideas into it, there's been a lot of talk about the need for diversity. And recently at the AI Now symposium in New York, where you were a speaker, an interesting point was made about the need to include the female perspective. What are your thoughts on that?

In order to build the next generation of technology, you need to have as many points of view in the room as you can. For me, that is absolutely a call to say it'll be good to have more women; in some places, it'll be good to have any women. Not just more women, but also people who come from different economic and cultural backgrounds. People from different interdisciplinary backgrounds are hugely important for the next wave of technology. You need to have people who are historians and philosophers and even poets. There has to be this capacity to think differently about data, time, history and logic. It requires as many different ways of thinking as you can possibly tolerate.

The reality is, managing that is hard. It is easier to have teams that look and think like you. There is less tension. But it also means there's less debate. We need to be more willing to allow that it will be harder but that it is good and the outcome is infinitely better. So for me, it's about: How do you ensure you have as much good thinking in the room and as much diversity of thought and lived experience and positionality as you can get? More women is just the start of that, and it's a critical start. But I worry. We keep saying it's important, but we don't do enough about it.