AI & Social Responsibility 101

How I learned the basics of Ethical AI and how you can too.

BOLT Canada
6 min read · Aug 18, 2020

By Eden Granovsky, Director of Communications & Media


“Artificial Intelligence” and “Ethical AI” have become buzzwords in the business technology industry, but what do they really mean?

About a year ago, I asked myself this question during a networking event where the topic of AI came up. In my talks with fellow attendees and company representatives, we discussed AI, its potential applications, and how it’s revolutionizing business. However, when someone brought up some of the more harmful uses of AI, such as facial recognition and surveillance, the conversation turned to outrage over this unethical technology. What surprised me was that no one seemed to be able to provide an answer about how we can mitigate these harms, stop this kind of surveillance, and protect people.

That’s when AI changed from a buzzword into a newfound challenge that I was determined to learn more about. I was fascinated by the impact this technology was having not only on business but on our society as a whole.

At the time, I had just begun my first semester at McGill. I immediately changed my courses to take any class that focused, even in part, on the connection between society and technology. On top of my busy class schedule, I did my best to research AI and better understand the ways in which it was being used. What I came to learn was extremely surprising.

The AI industry is growing so rapidly that it is projected to reach a valuation of $118.6 billion by 2025. AI is being used for customer service chatbots, autonomous driving, data analysis, healthcare services, legal recommendations, and more. You’re even involved in improving AI nearly every day: answering CAPTCHA questions to log in to a website helps train these systems. AI has become pervasive in nearly every aspect of life, and yet no one seems to know its true potential, both good and bad.

Ethics of AI Interview 1 with Eden Granovsky, Rosie Zhao, & Negar Rostamzadeh

When I started learning about AI, I felt pretty lost. It’s a big field, and I didn’t have a lot of technical knowledge or skills to fall back on. Through my involvement in BOLT, I realized that I had the opportunity to help others learn about AI by giving them an easily accessible, informative introduction to the topic. That’s when I reached out to my directors and teamed up with the McGill AI Society to create our Ethics of AI Interview Series. Along with Rosie Zhao (McGill AI Society), I interviewed Negar Rostamzadeh, a research scientist on the Google Brain Ethical AI Team, and Shalaleh Rismani, a PhD student at McGill and Co-Director of the Open Roboethics Institute. Through the interview series, viewers gain foundational knowledge of AI and actionable tips to keep their own companies and projects from applying AI in harmful ways.

To learn more about AI, read our first interview on its potential applications below! If you are looking for ways to improve your company culture as it relates to harmful applications of this technology, check out our second interview here.

Eden Granovsky: Welcome to our Ethical AI series, presented in collaboration by BOLT McGill and the McGill AI Society. Given current sociopolitical events and the increasing prevalence of AI applications, we felt it was important to take a deeper look into what AI is and the social implications, both positive and negative, it can have on our society. Today we’re joined by Negar Rostamzadeh, a research scientist on the Google Brain Ethical AI team. She’s here to help us demystify some aspects of AI. But first, could you tell us a little bit about what you’re working on currently?

Negar Rostamzadeh: So, a little bit about my background: it is mainly in computer vision, and for the last two months I have been on the Google Brain Ethical AI team. I am mainly working on the ethical implications of computer vision algorithms.

Rosie Zhao [VP Internal, McGill AI Society]: I’m sure that through your work and your involvement in the machine learning community, you’ve seen that AI is becoming very relevant in a lot of different industries. Could you speak more about this increase in popularity and the use of AI across different fields, and specifically, which industries are adopting this technology most rapidly?

NR: So basically, I’m personally coming from a computer vision background, and in computer vision, when you’re thinking about applications, there are so many different kinds: in art, in fashion design, in the health industry, healthcare recommendation systems, or the retail industry. In the computer vision community, we usually like to talk, in our papers, about the benefits, or why we are working on this specific algorithm or this specific problem. We name a few applications that are really great, and we use them. But the fact is that we usually don’t think about the kinds of applications that may not go toward the intention that we have, or the harmful applications that can exist. Usually we just think about the really, really great applications.

EG: So to follow up on that: what do you think are the biggest harms that people (looking from an outside perspective, so not the people making the tech, but someone seeing it in the news) should be most aware of when AI is applied without an understanding of the broader social context?

NR: So one part is that, for the kinds of applications I mentioned, we should first ask why we actually want to build them: what is the usage of this application? Is it going to hurt certain people or certain groups, and why do we actually need it? The trouble is that we have applications that automate parts [of a process] that shouldn’t be automated; imagine the interview process being done by software that can be very biased based on the data it was fed. That’s why it’s very important to have a social and ethical perspective from the beginning of the process and throughout all parts of the process. Something that I see many people say is: I’m a computer scientist, or I’m a researcher, and this is engineering work, this is just technical work. I’m not going to work on that side; that part will be solved with better engineering on data collection or the other parts. But the most important thing is that problems that have roots in social injustice can’t be solved with technical solutions. There is a really good book by Ruha Benjamin, Race After Technology, where she addresses all these points, and I really recommend reading it.

RZ: Yeah, thank you so much for those insightful answers. It definitely sounds like there is a real need for regulation, and a mindset that researchers, developers, and users all need to bring to these applications. So thank you so much, Negar, for joining us today. As parting thoughts, do you have any other resources to recommend for viewers who are looking to learn more?

NR: If you want a really great overview of the field, I recommend the tutorials at CVPR from Emily Denton and Timnit Gebru. There is also a workshop tutorial from Emily Denton at NeurIPS that I recommend as well. These are works that are not about one single paper specifically; they touch on many different directions.

EG: OK, great. We’ll make sure all those resources are linked in the bio for this video so that everyone can easily access them after watching. Thank you so much for your time today.

NR: Thank you for having me.

