By Fernanda Carlés
In recent years, the use of artificial intelligence tools has become almost inevitable. In many workplaces, and particularly in civil society organizations, AI appears as a way to “do more with less”: write faster, organize ideas, summarize long documents, prepare draft reports, develop project concepts, or analyze information.
The demands of today’s labor market push in that direction: speed, efficiency, and immediate responsiveness are expected. In this context, chatbots can be a useful and even positive tool for work in human rights, environmental advocacy, education, health, or communications. Denying this would be unrealistic.
The problem is that these tools do not exist in a vacuum—at least not in the way they are currently built and made available to the public. They operate within a business logic based on data accumulation. Most current generative AI systems are developed by private companies whose business models rely, directly or indirectly, on collecting information, training models, and scaling products. This has clear implications for privacy and security.
At the same time, many of these technologies are relatively new. They are startups or rapidly evolving products, with documented histories of security incidents, data leaks, and vulnerabilities.1,2 We are not talking about hypothetical risks. Data extraction, unauthorized access, or opaque changes to terms of service have already occurred—and will continue to occur.
In this context, this article does not aim to demonize AI or promote its uncritical use. It seeks something more modest—and more urgent: to propose basic security guidelines for using AI without putting people, communities, or organizations at risk.
Tip 1: Anonymizing prompts is not optional
In social organizations, the risk rarely comes from “malicious” uses of AI, but rather from hurried and routine use. We open a chatbot to unblock a task, type a prompt almost without thinking, and—without realizing it—introduce information that never needed to leave our workspace.
When we work with people, especially in contexts of vulnerability, violence, reporting, or exclusion, that excess information can violate rights—not only the rights of those we support, but also those of our own organizations. It is not uncommon for prompts to include details about team members, internal dynamics, conflicts, ongoing projects, funding sources, or even financial movements, simply because we think it will “help the AI understand better.”
The problem is that once shared, that information leaves our direct control. And it is important to recognize that we do not need to compromise our privacy—or that of our organization or the people we work with—to obtain good results. There are ways to use AI effectively without exposing sensitive information, and one of the most important is prompt anonymization.
Anonymizing the information we share in a chatbot protects us not only against extreme scenarios—such as a massive data breach by the service provider, a risk that, as mentioned, is not merely hypothetical—but also against far more ordinary situations. Personal account security failures, hacking, unauthorized access, or even the theft or loss of devices without adequate protection measures can turn a careless prompt into a source of unnecessary exposure.
What does it really mean to anonymize a prompt?
Anonymizing is not just about removing names. It is a broader process that begins with identifying what constitutes personal or sensitive data, and understanding that identification rarely happens solely through explicit labels like a name or ID number. Very often, people become identifiable through the surrounding information.
Exact age, neighborhood, type of work, approximate date of an event, family composition, health condition, institution involved, role within an organization—each of these data points may seem harmless in isolation. But when combined, they function like a fingerprint. Even without names, that combination can allow someone to recognize the individual, especially in small communities, geographically limited contexts, or highly specific sectors such as civil society work.
This phenomenon is particularly relevant when working with vulnerable populations. In many cases, it does not require malicious intent for someone to identify a person. Sometimes it is enough to know the territory, the team, or the local context for an “anonymized” description to become obvious. For this reason, anonymization cannot be reduced to mechanically deleting data—it requires a conscious exercise of abstraction and care.
As a general rule, certain types of information should never be included in a prompt, even if the intention is “just to ask for help”:
- Real names
- Specific addresses or exact locations
- Phone numbers or email addresses
- Personal identification numbers (ID, tax number, passport)
- Medical information linked to identifiable individuals
- Personal financial data
- Contextual details that could enable identification even without a name
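For teams with some scripting capacity, a simple pre-prompt check can catch the most obvious of these patterns before anything is pasted into a chatbot. The sketch below is a minimal, illustrative example in Python: the regular expressions and placeholder labels are assumptions to be adapted to local formats, and a filter like this only catches structured identifiers such as emails or numbers. It cannot detect the contextual details described above, so it complements, rather than replaces, human review.

```python
import re

# Minimal, illustrative patterns; adapt them to local formats (phone, ID, tax numbers).
PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\+?\d[\d\s\-]{7,}\d"),
    "ID_NUMBER": re.compile(r"\b\d{6,10}\b"),  # rough stand-in for ID or tax numbers
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before text goes into a chatbot."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    draft = (
        "Summarize this intake note: Maria, reachable at maria@example.org "
        "or +595 981 123456, ID 4567890, reported the incident last week."
    )
    print(redact(draft))
    # Names and contextual details (neighborhood, employer, dates) still need manual review.
```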
From there, the logic of using AI changes. Instead of working with individual cases, you work with general profiles. Instead of dated and localized events, you work with patterns. Instead of real stories, you use fictional examples constructed from multiple situations.
For example, it is not the same to ask a chatbot to “rephrase the testimony of a woman who lives in a specific neighborhood and experienced a specific situation,” as it is to ask for help structuring anonymous testimonies about domestic violence while maintaining emotional impact and protecting identity. The objective is the same, but the level of risk is radically different.
Another good practice is to request templates or structures instead of sharing fully written texts containing real information. AI can help you think through how to organize a report, what sections to include, what tone to use, or what questions to ask—without needing access to sensitive content. In this way, you benefit from the tool without transferring data that should not leave the organization.
Tip 2: Beware of hallucinations and false certainty
The second major risk in using AI has less to do with what we provide and more with what we receive. Chatbots generate responses that sound confident, coherent, and well-written—even when they are wrong.
This happens because AI does not “know” in the human sense of the word. It does not verify information, contrast sources, or understand the consequences of what it states.3 It generates probable text based on statistical patterns, without any genuine comprehension that would allow it to evaluate the validity of its own answers. Sometimes it gets things right; sometimes it does not. But it almost always responds with a level of confidence that gives the impression of truth.
In human rights work, this can be especially problematic. Incorrect legal frameworks, outdated data, misinterpretations, or inappropriate recommendations can cause real harm. For this reason, using AI without expert validation is a risk that should not be underestimated.
It is worth noting that the risk of incorporating incorrect information into our work is not new, nor exclusive to artificial intelligence. Human beings have always built knowledge socially, deciding what to believe based on trust in certain people, institutions, or authorities. What changes with AI is not the existence of error, but the way we now access answers.
Not long ago, researching a topic required consulting multiple sources: books, academic articles, websites, institutional documents. Each person, consciously or not, decided which sources to grant more credibility. In the early years of the internet, this logic deepened with unprecedented decentralization: we could choose between a book by a recognized professional, a Wikipedia entry built by consensus, or a blog by an activist whose worldview aligned with our own.
The current use of AI radically transforms this process. It pushes us toward near-instant answers from a machine that may not have updated or reliable information, that assumes no responsibility for what it produces, and that nevertheless speaks with a confidence that can make its output feel unquestionable.
In this scenario—and beyond the fact that AI is already replacing humans in many tasks and even some roles—it becomes clear that human responsibility for AI-generated output should never be optional. There must always be a person accountable for reviewing, verifying, and making decisions: a lawyer, a social worker, a health professional, an environmental engineer, a communicator. AI can assist, suggest, or help organize ideas, but final responsibility is always human.
Using AI responsibly means assuming that no response is definitive and that any relevant information must be checked against external sources, professional expertise, and local knowledge.
Tip 3: Choosing tools is also a political decision
Not all AI tools are the same. Some are designed with greater attention to privacy; others openly prioritize data collection. Choosing one over another is not merely a technical decision—it is also political and ethical.
There are platforms built around privacy by design, with lower levels of identification, limited data retention, or greater transparency. There are also alternatives for using AI without personal accounts, or even local models in contexts where information is especially sensitive.
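For the locally run option, one possible setup (an assumption here for illustration, not an endorsement of any specific product) is a self-hosted model server such as Ollama, which runs on the organization's own machine and exposes a local HTTP endpoint, so prompts never travel to an external provider. A minimal sketch, assuming Ollama is installed, running on its default port, and already has a model downloaded:

```python
import requests  # assumes the 'requests' package is installed

# Ollama's default local endpoint; prompts sent here stay on this machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally hosted model instead of a third-party cloud service."""
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Suggest a section outline for an anonymous incident report template."))
```

Even with a local model, the practices above still apply: running a model on your own infrastructure reduces exposure to third parties, but it does not make the model's answers any more reliable.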
In other cases, the decision may be simpler: not using AI at all for certain types of content. Legal strategies, lists of people at risk, active complaints, or critical information do not always benefit from automation. Sometimes the best security practice is simply not to pass that data through any AI system.
On this point, we have already developed a specific article on privacy and chatbots,4 where we analyze different platforms and approaches. It is worth reading alongside this text to make more informed decisions.
In conclusion
Artificial intelligence is here to stay, including in the work of civil society organizations. Using it is not, in itself, the problem. The problem is using it without reflecting on where it comes from, the logic that shapes it, and the risks it introduces.
Secure prompting is not an advanced technique or a luxury. It is an everyday practice of care—care for the people we work with, for our organizations, and for the information we handle.
If something should not circulate beyond your team, it probably should not be in a chatbot either.
And if an answer sounds too certain, it is always wise to question and verify.
Using AI thoughtfully does not make us less efficient. It makes us more responsible.
This publication has been funded by the European Union. Its contents are the sole responsibility of TEDIC and do not necessarily reflect the views of the European Union.
Notes:
- OpenAI reveals data breach and warns users about possible phishing attacks (Wired, 2025): https://es.wired.com/articulos/openai-revela-filtracion-de-datos-y-alerta-a-sus-usuarios-sobre-posibles-ataques-de-phishing
- AI data breaches: root causes and real-world impact (LayerX, 2025): https://layerxsecurity.com/es/generative-ai/ai-data-breaches/
- A chatbot that doesn’t “hallucinate” is simply impossible (Wired, 2025): https://es.wired.com/articulos/un-chatbot-que-no-alucine-es-simplemente-imposible
- TEDIC article on Privacy and Chatbots.

