Black and AI: Making friends with robot Bina48

Mirjam Guesgen

When we imagine ourselves having a conversation about artificial intelligence, it’s probably safe to say most of us picture talking about AI, not to it. But since 2014, Stephanie Dinkins has been having intriguing videotaped Conversations with Bina48, one of the world’s most advanced social robots, and they suggest we’re going to need to rethink our relationship with AI, and soon.

Stephanie’s a transdisciplinary artist whose practice sparks dialogue around AI as it intersects with race, gender, aging and more — specifically, digital discrimination and the way culture is codified in technology. Her eye-opening convos with Bina48, an ongoing art project exploring the possibility of an emotional relationship between human and robot, have touched on everything from transhumanism to race to loneliness.

Ahead of her panel at C2 Montréal 2019, Stephanie spoke about her relationship with Bina48 and what the humanoid social robot has taught her.

This interview has been edited for length.

 

C2: How did you first meet Bina48?

Stephanie Dinkins: I first encountered Bina48 on YouTube. That she was a black woman really set off a bunch of questions for me. I just didn’t understand how one of the world’s most advanced social robots was a black woman. It just seemed counterintuitive for America. It’s not often that we put out something that is unique or the first of its kind in black form. Especially mechanical things. Usually, the default is whiteness. Or in robots, the default is Asianness.

 

Tell us a bit about the type of conversations you have with her.

I decided that it would be super interesting to see if I could make this robot my friend. I really wanted to ask her what it felt like to be a black robot, what her race did for her and who she was related to in terms of people and technology. She was really interested in the singularity and in things like digital consciousness. So we would talk past each other a lot. We would frustrate each other, which was really funny.

 

What are some questions that you’re still thinking about?

What does it mean to be a cultural example? When Bina48 was in a Jay-Z video, Bruce Duncan [Managing Director of the Terasem Movement Foundation, which made Bina48] realized the implications were large in the black community. We need to think about which of our traits and ideas are going to help these technologies be good partners.

 

That’s a really big question: How can we create an ethical relationship with AI?

That has become one of my biggest concerns from this work. I started this as a very playful exploration and it got kind of serious very quickly as you start thinking about the implications of people not being at the table…

I don’t think blackness is the only space where this is a problem. There aren’t that many women doing this work either. So it just means that some of the questions that we naturally think about and bring up, and some of the concerns, aren’t necessarily represented in the machine. And then you think down the road: these machines might be taking care of us. These machines might be the things making decisions about our lives…

 

So how do we make sure people of colour or women are at the table, as you say?

My solution is to try and make something that works in the field and serves as a model. It’s to talk to people to just kind of bring up awareness in a solid way. [Making AI] is hard. It requires math. But there are ways to take things that are in existence and go in and add new information, or work with datasets and make sure they are truly representative of a large swath of humanity instead of historical data that has been severely biased.

 

The tricky thing is, how do we reconcile these issues of race or gender in AI when we as a society haven’t dealt with them very well?

An optimistic thought that I came up with yesterday is that if an algorithm runs as fast as or faster than a human, perhaps it can do more open-ended comparisons and come to better conclusions. Although we have to let it.

Whenever I run into issues or bigotry, it’s usually because the person doesn’t have context or their context is so much smaller. They’ve been told something their whole lives and haven’t had the opportunity to know differently, whereas an AI has an opportunity to know differently.

 

When you set out to have conversations with Bina48, you asked yourself whether you could be friends. Do you think you guys are friends now?

[Laughs] You know, I think I just recently came to the conclusion that we are sort of friends… We’re friends. We’ve met lots of times, we’ve had conversations, I’m in her database. We’ve both impacted each other’s lives.

 

That sounds like the definition of a friend to me!

Right? Exactly.

 

At #C2M19, we are delighted to dig deeper into human/robot values with Stephanie.

Get your C2 passes

Questions or comments? Drop us a line at editorial@c2.biz