
Charlotte Researcher Hamed Tabkhi Designs AI For Public Safety

In Hollywood, powerful, evil artificial intelligences take over governments, destroy planets and predict our every move. In real life, the problems with AI are often those of its creators: perpetuating and worsening gender and racial biases. To fight these biases and restore public trust in AI, researchers are turning to the public.

“The question is how to create artificial intelligence responsibly and in an ethical manner to address real-world problems,” said Hamed Tabkhi, who is working with community stakeholders and researchers to “co-create and co-design” AI solutions for public safety. “Public opinion is a priority for our work: what are people’s needs, what are their privacy concerns, what kind of system do we need to design?”

In Tabkhi’s (literally) million-dollar idea, his AI system helps law enforcement without breaking the trust of the people the police are meant to protect. Residents’ crime and policing concerns directly inform the design of an AI system tailored to their community’s neighborhood safety needs, and help prevent misunderstandings about law enforcement’s use of the tools.
