Language and algorithms: how AI acts to stop gender-based violence from its beginnings - International experiences

“Boosting the use of technologies such as artificial intelligence and natural language processing will help improve our response capacity and allow us to generate evidence-based statistics to make more informed decisions,” said Miranda Mejía, Director of Costa Rica's 911 Emergency System.

With these words opened the “Saving Lives” webinar, an initiative promoted by the UNODC-INEGI Center of Excellence and the Chilean 911 service, which brought together on June 16 professionals from emergency services and statistical offices across Latin America to explore how AI can identify cases of violence against women in 911 calls.

The event was divided into three main segments in which institutions from Latin America and the Caribbean shared their experiences and knowledge on the subject. In the first presentation, Costa Rica's 911 Emergency Service shared relevant statistics on the incidents and calls it handles daily. Afterwards, Ignacio Agloni, from the National Institute of Statistics of Chile, gave a detailed introduction to machine learning (supervised and unsupervised) and deep learning.

Following this, Adriana Oropeza and Pablo Guevara, from the UNODC-INEGI Center of Excellence, presented the principles of Natural Language Processing (NLP), highlighting how call transcripts are interpreted and how key entities and patterns that are not evident to traditional human analysis can be identified.
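To make the idea of entity extraction concrete, here is a deliberately minimal sketch using hand-written patterns. It is not the Center's actual pipeline — production NLP systems typically rely on trained statistical models (for example, spaCy's named-entity recognizer) rather than rules — and the pattern labels and example transcript below are invented for illustration:

```python
import re

# Hypothetical patterns for illustration only; a real NLP pipeline would use
# trained models (e.g. spaCy NER) instead of hand-written regular expressions.
PATTERNS = {
    "time": re.compile(r"\b\d{1,2}:\d{2}\b"),
    "phone": re.compile(r"\b\d{3}-\d{4}\b"),
    "weapon": re.compile(r"\b(knife|gun|weapon)\b", re.IGNORECASE),
}

def extract_entities(transcript: str) -> dict:
    """Return, for each label, the pattern matches found in a call transcript."""
    return {label: rx.findall(transcript) for label, rx in PATTERNS.items()}

call = "Caller reports a man with a knife at 22:15, callback 555-0142."
print(extract_entities(call))
```

Even this toy version shows the core benefit the presenters described: structured fields (a time, a contact number, a mention of a weapon) surface automatically from free text that would otherwise require a human to read.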

Another key session led by the Center of Excellence addressed the descriptive characterization of incidents related to extortive lending (known as “drop by drop” loans). Using the BERTopic model, the team identified specific thematic clusters and explained techniques such as text cleaning, tokenization, and frequency analysis, supported by libraries such as spaCy.
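The cleaning, tokenization, and frequency-analysis steps mentioned above can be sketched with the standard library alone. This is only an illustration of those preprocessing steps, assuming an invented stop-word list and example calls; the actual session used spaCy's linguistic resources and BERTopic's embedding-based clustering:

```python
import re
from collections import Counter

# Illustrative stop-word list; real pipelines use curated linguistic resources.
STOPWORDS = {"the", "a", "to", "and", "of", "is", "me", "my", "he", "over", "about"}

def clean_and_tokenize(text: str) -> list:
    """Lowercase the text, keep only word characters, and drop stop words."""
    tokens = re.findall(r"[a-záéíóúñ]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

# Invented example transcripts about extortive lending.
calls = [
    "The lender is threatening me over the loan",
    "He keeps threatening my family about the loan payment",
]
freq = Counter(tok for c in calls for tok in clean_and_tokenize(c))
print(freq.most_common(3))
```

Frequency counts like these are the raw material that topic models such as BERTopic refine into coherent thematic clusters.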

Then, the National Institute of Statistics (INE) of Chile shared its experience with automated crime classification using NLP in the National Urban Survey on Citizen Security (ENUSC). The National Statistics Office (ONE) of the Dominican Republic showed how it uses NLP techniques to classify administrative records on construction licenses based on unstructured narratives.

To round out the presentations, the UNODC-INEGI Center of Excellence shared the results of the automated analysis of 911 calls to detect specific cases of violence against women, explaining each phase of the model in detail: from the selection and cleaning of data to the implementation and validation of the algorithm.

This model analyzes the texts of 911 call transcripts in order to separate them into two groups: those involving violence against women and those that do not. The proposal establishes a new technological option capable of identifying victims' risk levels, taking advantage of the timely information captured when incidents are recorded. The Center is currently seeking to launch new pilots of the model and to consolidate a methodology that statistical offices and 911 emergency services can replicate.
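The binary separation described above can be illustrated with a highly simplified sketch. The Center's model is trained on labeled transcripts; the indicator terms, weights, and threshold below are invented purely to show the shape of such a classifier, not its actual features:

```python
# Toy binary classifier: scores a transcript against weighted indicator terms.
# The real model learns its features from labeled data; these keywords and
# the threshold are hypothetical, chosen only for illustration.
INDICATORS = {"partner": 2, "husband": 2, "hit": 3, "afraid": 1, "threatened": 2}

def classify(transcript: str, threshold: int = 3) -> str:
    """Label a transcript by summing the weights of matching indicator terms."""
    text = transcript.lower()
    score = sum(w for term, w in INDICATORS.items() if term in text)
    return "violence_against_women" if score >= threshold else "other"

print(classify("My husband hit me and I am afraid"))
print(classify("Traffic accident on the highway"))
```

A learned model replaces the hand-picked keywords with features estimated from thousands of labeled calls, which is what allows it to catch cases no fixed keyword list would.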

Finally, experts from different countries shared their experiences with similar algorithms. Chile presented its work on the automated coding of open-ended responses in victimization surveys. The Dominican Republic presented its classification of sensitive content in administrative records. Mexico, for its part, shared how web scraping helps monitor news related to migration and violence and, from that, obtain statistical data in real time.

It was also explained how a model is built for these purposes: from data collection, cleaning and labeling to algorithm training and implementation. It was emphasized that inputs can include free text, temporal and geographic data, while outputs can be automatic classifications, risk levels or suggested referrals.
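The stages just listed — collection, cleaning, labeling, then training and implementation — can be sketched as a simple chain of functions. Every function here is a hypothetical placeholder standing in for a real pipeline component (labeling, in practice, is done by trained analysts, not a keyword rule):

```python
# Sketch of the pipeline stages described above; all logic is placeholder.
def collect(raw_calls):
    """Data collection: keep only non-empty transcripts."""
    return [c for c in raw_calls if c.strip()]

def clean(calls):
    """Cleaning: normalize case and surrounding whitespace."""
    return [c.lower().strip() for c in calls]

def label(calls):
    """Labeling stand-in; real labels come from human analysts."""
    return [(c, "violence" if "threat" in c else "other") for c in calls]

dataset = label(clean(collect(["  He keeps THREATening me ", "", "Lost wallet"])))
print(dataset)
```

The labeled dataset produced at the end of this chain is what feeds the training stage; outputs such as risk levels or suggested referrals come from the trained model, not from the preprocessing itself.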

The closing of the webinar was devoted to the ethical aspects of artificial intelligence. Pablo Guevara, from the UNODC-INEGI Center of Excellence, raised pressing issues: how to protect personal data, what biases models can introduce, how to prevent automation from reinforcing existing inequalities, and how to prevent the misuse of artificial intelligence in this field.

The need to implement anonymization mechanisms, monitor algorithmic biases, and constantly validate the models used was emphasized. On sensitive issues such as gender-based violence, technology should act as an ally of human rights, not as an obstacle.

The “Saving Lives” Webinar was not just a technical training. It was an invitation to transform the way States respond to violence. Instead of waiting for a victim to articulate a perfect complaint, artificial intelligence offers the possibility to listen better, earlier and more accurately.

Because behind every word, even the unspoken ones, there may be a life in danger. And technology, when used responsibly and with an ethical approach, can be key to detecting it in time.