Color Line

Conversation with Branka Panic


AI for Peace

San Juan, Puerto Rico

March 2021

What connects us: hot chocolate at Cocoa Cinnamon.

What does AI for Peace strive to do?

AI for Peace is a network of people covering the social consequences of AI and related technologies. Technology is evolving so quickly that it can create unintended consequences and be misused, especially in the military and security fields. Yet there is a major gap in research on AI's potential for enabling positive peace: strengthening democracy, protecting human rights, and ensuring human security, for example. We want to put humans at the center of that conversation, so we work at the nexus of development, humanitarian work, peacebuilding, and AI.

How has the COVID-19 pandemic impacted your community’s work?

AI for Peace was originally intended to incubate in California, but with COVID-19, the conversation decentralized to different places and different time zones. Now, AI for Peace is not just focused on what’s happening in Silicon Valley, but also in the global north and south and developing contexts. 

We wanted to assess the impacts of COVID-19 policies and lockdowns. So, we mobilized networks of data scientists and machine learning experts. We connected them directly to field experts in local communities with humanitarian and health organizations. It was the first time we realized how difficult it is to have a dialogue between these two groups. Now, we serve as a bridge between them. Through that bridge, data scientists come to understand the social consequences of their algorithms, while field experts gain basic knowledge of AI and how to use technology more effectively in their work.

How do racial equity and social justice intersect with AI?

How do algorithms predict behavior? With historical data. But that data is biased. We lack data for certain populations, while others have been over-analyzed. These systems amplify bias. Looking for a job? An algorithm trained on biased data may be filtering your employment options. Credit ratings? Those are determined by AI. Even pretrial bail decisions can be determined by AI. But how can an algorithm know the likelihood that a person will commit a crime if the data it uses is biased? So much historical injustice is already present in the data. AI amplifies injustice against people who are traditionally discriminated against.

Tell us about a moment when you saw groups mobilize. What did they accomplish?

Joy Buolamwini is an artist, a "poet of code," and an AI researcher who called out biases in facial recognition. She realized these systems are biased against people of color, especially women of color; in her research, the systems repeatedly failed to recognize her face. Then she put a white mask on her face, and the system worked. She went on to create the Algorithmic Justice League to hear directly from people being impacted by AI and from those building it. She asked questions like: How do residents feel about cameras in their buildings? Who is creating the systems that compile the data? The answer to the latter is mainly white, middle-aged men. If you're white and male, you are unlikely to notice that the system fails to recognize others, e.g., Black faces.

Communities started organizing and pushing back against facial recognition systems. They started to advocate and ask critical questions. Should our population be under surveillance? Who is watched and who does the watching? Why and to what extent? Some facial recognition systems can even recognize the shape of your ear. Do we really need this level of surveillance? Because of community advocacy, some cities are actually banning surveillance technology.

I want to point out that not all AI is negative. There are positive applications such as when AI is used to forecast famine or violence. Or when technology such as social media enables a community to project its voice in the absence of a free press. But we see some actors misusing AI in ways that counter human rights and democracy, and we need to be honest about this.

AI for Peace advocates for transparency in technology.

What connects us: hot chocolate at Cocoa Cinnamon, and much more.

Race After Technology book cover: Branka's recommended read for all you need to know about AI.

What can people do individually and in their communities? 

Learn more about the basics of AI and how AI impacts your daily life. Consider what you are doing when you're on social media, and what it means to limit your news to social platforms. Know that you have a basic human right to privacy, to keep certain things to yourself.

Also know that AI presents many contradictions. It can enable better disease diagnosis and prevention, yet it can amplify racial bias in the prediction of healthcare risks in populations. AI can predict natural disasters and improve humanitarian relief, but it can also perpetuate bias in criminal justice. 

These systems need to be transparent. Individuals and impacted communities, informed by their lived experiences, can demand ethical application, fairness, and accountability in AI.

What are some of the best books you have read?

Race After Technology by Ruha Benjamin. It puts everything you need to know about AI on the table.

What brings you hope?

People. I’m surrounded by so many wonderful people. When I see the work they are doing, I believe we can tackle anything in front of us. 

What has been your career defining moment?

It’s not a moment. The fact that I grew up in war and post-war circumstances has pushed me in the direction of peacebuilding. It makes me want to do whatever I can to contribute to building more peaceful societies.

Connect with Branka

Learn more about AI for Peace and follow the organization on Twitter @AI4Peace. Connect with Branka directly @Branka2Panic.
