{"id":31426,"date":"2026-02-12T17:25:03","date_gmt":"2026-02-12T20:25:03","guid":{"rendered":"https:\/\/www.tedic.org\/?p=31426"},"modified":"2026-02-20T17:44:28","modified_gmt":"2026-02-20T20:44:28","slug":"secure-prompting-for-the-use-of-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.tedic.org\/en\/secure-prompting-for-the-use-of-artificial-intelligence\/","title":{"rendered":"Secure prompting for the use of artificial intelligence"},"content":{"rendered":"\n<p><em>By Fernanda Carl\u00e9s<\/em><\/p>\n\n\n\n<p>In recent years, the use of artificial intelligence tools has become almost inevitable. In many workplaces, and particularly in civil society organizations, AI appears as a way to \u201cdo more with less\u201d: write faster, organize ideas, summarize long documents, prepare draft reports, develop project concepts, or analyze information.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"1000\" src=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA.png\" alt=\"\" class=\"wp-image-31480\" srcset=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA.png 1000w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-300x300.png 300w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-150x150.png 150w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-768x768.png 768w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-120x120.png 120w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/figure>\n\n\n\n<p>The demands of today\u2019s labor market push in that direction: speed, efficiency, and immediate responsiveness are expected. In this context, chatbots can be a useful and even positive tool for work in human rights, environmental advocacy, education, health, or communications. 
Denying this would be unrealistic.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"1000\" src=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-2.png\" alt=\"\" class=\"wp-image-31481\" srcset=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-2.png 1000w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-2-300x300.png 300w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-2-150x150.png 150w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-2-768x768.png 768w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-2-120x120.png 120w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/figure>\n\n\n\n<p>The problem is that these tools do not exist in a vacuum\u2014at least not in the way they are currently built and made available to the public. They operate within a business logic based on data accumulation. Most current generative AI systems are developed by private companies whose models rely, directly or indirectly, on collecting information, training models, and scaling products. This has clear implications for privacy and security.<\/p>\n\n\n\n<p>At the same time, many of these technologies are relatively new. 
They are startups or rapidly evolving products, with documented histories of security incidents, data leaks, and vulnerabilities.<span id='easy-footnote-1-31426' class='easy-footnote-margin-adjust'><\/span><span class='easy-footnote'><a href='https:\/\/www.tedic.org\/en\/secure-prompting-for-the-use-of-artificial-intelligence\/#easy-footnote-bottom-1-31426' title='OpenAI reveals data breach and warns users about possible phishing attacks (Wired, 2025): &lt;a href=&quot;https:\/\/es.wired.com\/articulos\/openai-revela-filtracion-de-datos-y-alerta-a-sus-usuarios-sobre-posibles-ataques-de-phishing&quot;&gt;https:\/\/es.wired.com\/articulos\/openai-revela-filtracion-de-datos-y-alerta-a-sus-usuarios-sobre-posibles-ataques-de-phishing&lt;\/a&gt; '><sup>1<\/sup><\/a><\/span> <span id='easy-footnote-2-31426' class='easy-footnote-margin-adjust'><\/span><span class='easy-footnote'><a href='https:\/\/www.tedic.org\/en\/secure-prompting-for-the-use-of-artificial-intelligence\/#easy-footnote-bottom-2-31426' title='AI data breaches: root causes and real-world impact (LayerX, 2025): &lt;a href=&quot;https:\/\/layerxsecurity.com\/es\/generative-ai\/ai-data-breaches\/&quot;&gt;https:\/\/layerxsecurity.com\/es\/generative-ai\/ai-data-breaches\/&lt;\/a&gt; '><sup>2<\/sup><\/a><\/span> We are not talking about hypothetical risks. Data extraction, unauthorized access, or opaque changes to terms of service have already occurred\u2014and will continue to occur.<\/p>\n\n\n\n<p><strong>In this context, this article does not aim to demonize AI or promote its uncritical use. It seeks something more modest\u2014and more urgent: to propose basic security guidelines for using AI without putting people, communities, or organizations at risk.<\/strong><\/p>\n\n\n\n<p><strong>Tip 1: Anonymizing prompts is not optional<\/strong><\/p>\n\n\n\n<p>In social organizations, the risk rarely comes from \u201cmalicious\u201d uses of AI, but rather from hurried and routine use. 
We open a chatbot to unblock a task, type a prompt almost without thinking, and\u2014without realizing it\u2014introduce information that never needed to leave our workspace.<\/p>\n\n\n\n<p>When we work with people, especially in contexts of vulnerability, violence, reporting, or exclusion, that excess information can violate rights\u2014not only the rights of those we support, but also those of our own organizations. It is not uncommon for prompts to include details about team members, internal dynamics, conflicts, ongoing projects, funding sources, or even financial movements, simply because we think it will \u201chelp the AI understand better.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"1000\" src=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-3.png\" alt=\"\" class=\"wp-image-31483\" srcset=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-3.png 1000w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-3-300x300.png 300w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-3-150x150.png 150w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-3-768x768.png 768w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-3-120x120.png 120w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/figure>\n\n\n\n<p>The problem is that once shared, that information leaves our direct control. And it is important to recognize that we do not need to compromise our privacy\u2014or that of our organization or the people we work with\u2014to obtain good results. 
There are ways to use AI effectively without exposing sensitive information, and one of the most important is prompt anonymization.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"1000\" src=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-4.png\" alt=\"\" class=\"wp-image-31485\" srcset=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-4.png 1000w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-4-300x300.png 300w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-4-150x150.png 150w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-4-768x768.png 768w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-4-120x120.png 120w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/figure>\n\n\n\n<p>Anonymizing the information we share in a chatbot protects us not only against extreme scenarios\u2014such as a massive data breach by the service provider, a risk that, as mentioned, is not merely hypothetical\u2014but also against far more ordinary situations. Personal account security failures, hacking, unauthorized access, or even the theft or loss of devices without adequate protection measures can turn a careless prompt into a source of unnecessary exposure.<\/p>\n\n\n\n<p><strong>What does it really mean to anonymize a prompt?<\/strong><\/p>\n\n\n\n<p>Anonymizing is not just about removing names. It is a broader process that begins with identifying what constitutes personal or sensitive data, and understanding that identification rarely happens solely through explicit labels like a name or ID number. Very often, people become identifiable through the surrounding information.<\/p>\n\n\n\n<p>Exact age, neighborhood, type of work, approximate date of an event, family composition, health condition, institution involved, role within an organization\u2014each of these data points may seem harmless in isolation. 
But when combined, they function like a fingerprint. Even without names, that combination can allow someone to recognize the individual, especially in small communities, geographically limited contexts, or highly specific sectors such as civil society work.<\/p>\n\n\n\n<p>This phenomenon is particularly relevant when working with vulnerable populations. In many cases, it does not require malicious intent for someone to identify a person. Sometimes it is enough to know the territory, the team, or the local context for an \u201canonymized\u201d description to become obvious. For this reason, anonymization cannot be reduced to mechanically deleting data\u2014it requires a conscious exercise of abstraction and care.<\/p>\n\n\n\n<p>As a general rule, certain types of information should never be included in a prompt, even if the intention is \u201cjust to ask for help\u201d:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Real names<\/li>\n\n\n\n<li>Specific addresses or exact locations<\/li>\n\n\n\n<li>Phone numbers or email addresses<\/li>\n\n\n\n<li>Personal identification numbers (ID, tax number, passport)<\/li>\n\n\n\n<li>Medical information linked to identifiable individuals<\/li>\n\n\n\n<li>Personal financial data<\/li>\n\n\n\n<li>Contextual details that could enable identification even without a name<\/li>\n<\/ul>\n\n\n\n<p>From there, the logic of using AI changes. Instead of working with individual cases, you work with general profiles. Instead of dated and localized events, you work with patterns. Instead of real stories, you use fictional examples constructed from multiple situations.<\/p>\n\n\n\n<p>For example, it is not the same to ask a chatbot to \u201crephrase the testimony of a woman who lives in a specific neighborhood and experienced a specific situation,\u201d as it is to ask for help structuring anonymous testimonies about domestic violence while maintaining emotional impact and protecting identity. 
The objective is the same, but the level of risk is radically different.<\/p>\n\n\n\n<p>Another good practice is to request templates or structures instead of sharing fully written texts containing real information. AI can help you think through how to organize a report, what sections to include, what tone to use, or what questions to ask\u2014without needing access to sensitive content. In this way, you benefit from the tool without transferring data that should not leave the organization.<\/p>\n\n\n\n<p><strong>Tip 2: Beware of hallucinations and false certainty<\/strong><\/p>\n\n\n\n<p>The second major risk in using AI has less to do with what we provide and more with what we receive. Chatbots generate responses that sound confident, coherent, and well-written\u2014even when they are wrong.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"1000\" src=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-5.png\" alt=\"\" class=\"wp-image-31487\" srcset=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-5.png 1000w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-5-300x300.png 300w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-5-150x150.png 150w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-5-768x768.png 768w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-5-120x120.png 120w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/figure>\n\n\n\n<p>This happens because AI does not \u201cknow\u201d in the human sense of the word. 
It does not verify information, contrast sources, or understand the consequences of what it states.<span id='easy-footnote-3-31426' class='easy-footnote-margin-adjust'><\/span><span class='easy-footnote'><a href='https:\/\/www.tedic.org\/en\/secure-prompting-for-the-use-of-artificial-intelligence\/#easy-footnote-bottom-3-31426' title='A chatbot that doesn\u2019t \u201challucinate\u201d is simply impossible (Wired, 2025): &lt;a href=&quot;https:\/\/es.wired.com\/articulos\/un-chatbot-que-no-alucine-es-simplemente-imposible&quot;&gt;https:\/\/es.wired.com\/articulos\/un-chatbot-que-no-alucine-es-simplemente-imposible&lt;\/a&gt; '><sup>3<\/sup><\/a><\/span> It generates probable text based on statistical patterns, without any genuine comprehension that would allow it to evaluate the validity of its own answers. Sometimes it gets things right; sometimes it does not. But it almost always responds with a level of confidence that gives the impression of truth.<\/p>\n\n\n\n<p>In human rights work, this can be especially problematic. Incorrect legal frameworks, outdated data, misinterpretations, or inappropriate recommendations can cause real harm. For this reason, using AI without expert validation is a risk that should not be underestimated.<\/p>\n\n\n\n<p>It is worth noting that the risk of incorporating incorrect information into our work is not new, nor exclusive to artificial intelligence. Human beings have always built knowledge socially, deciding what to believe based on trust in certain people, institutions, or authorities. What changes with AI is not the existence of error, but the way we now access answers.<\/p>\n\n\n\n<p>Not long ago, researching a topic required consulting multiple sources: books, academic articles, websites, institutional documents. 
In the early years of the internet, this logic deepened with unprecedented decentralization: we could choose between a book by a recognized professional, a Wikipedia entry built by consensus, or a blog by an activist whose worldview aligned with our own.<\/p>\n\n\n\n<p>The current use of AI radically transforms this process. It pushes us toward near-instant answers from a machine that may not have updated or reliable information, that assumes no responsibility for what it produces, and that nevertheless speaks with a confidence that can make its output feel unquestionable.<\/p>\n\n\n\n<p>In this scenario\u2014and beyond the fact that AI is already replacing humans in many tasks and even some roles\u2014it becomes clear that human responsibility for AI-generated output should never be optional. There must always be a person accountable for reviewing, verifying, and making decisions: a lawyer, a social worker, a health professional, an environmental engineer, a communicator. AI can assist, suggest, or help organize ideas, but final responsibility is always human.<\/p>\n\n\n\n<p>Using AI responsibly means assuming that no response is definitive and that any relevant information must be checked against external sources, professional expertise, and local knowledge.<\/p>\n\n\n\n<p><strong>Tip 3: Choosing tools is also a political decision<\/strong><\/p>\n\n\n\n<p>Not all AI tools are the same. Some are designed with greater attention to privacy; others openly prioritize data collection. Choosing one over another is not merely a technical decision\u2014it is also political and ethical.<\/p>\n\n\n\n<p>There are platforms built around privacy by design, with lower levels of identification, limited data retention, or greater transparency. 
There are also alternatives for using AI without personal accounts, or even local models in contexts where information is especially sensitive.<\/p>\n\n\n\n<p>In other cases, the decision may be simpler: not using AI at all for certain types of content. Legal strategies, lists of people at risk, active complaints, or critical information do not always benefit from automation. Sometimes the best security practice is simply not to pass that data through any AI system.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"1000\" src=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-6.png\" alt=\"\" class=\"wp-image-31486\" srcset=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-6.png 1000w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-6-300x300.png 300w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-6-150x150.png 150w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-6-768x768.png 768w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-6-120x120.png 120w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/figure>\n\n\n\n<p>On this point, we have already developed a specific article on privacy and chatbots,<span id='easy-footnote-4-31426' class='easy-footnote-margin-adjust'><\/span><span class='easy-footnote'><a href='https:\/\/www.tedic.org\/en\/secure-prompting-for-the-use-of-artificial-intelligence\/#easy-footnote-bottom-4-31426' title='TEDIC article on Privacy and Chatbots.'><sup>4<\/sup><\/a><\/span> where we analyze different platforms and approaches. It is worth reading alongside this text to make more informed decisions.<\/p>\n\n\n\n<p><strong>In conclusion<\/strong><\/p>\n\n\n\n<p>Artificial intelligence is here to stay, including in the work of civil society organizations. Using it is not, in itself, the problem. 
The problem is using it without reflecting on where it comes from, the logic that shapes it, and the risks it introduces.<\/p>\n\n\n\n<p>Secure prompting is not an advanced technique or a luxury. It is an everyday practice of care\u2014care for the people we work with, for our organizations, and for the information we handle.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"1000\" src=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-7.png\" alt=\"\" class=\"wp-image-31484\" srcset=\"https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-7.png 1000w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-7-300x300.png 300w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-7-150x150.png 150w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-7-768x768.png 768w, https:\/\/www.tedic.org\/wp-content\/uploads\/2026\/02\/CIDH_IA-7-120x120.png 120w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/figure>\n\n\n\n<p>If something should not circulate beyond your team, it probably should not be in a chatbot either.<br \/>And if an answer sounds too certain, it is always wise to question and verify.<\/p>\n\n\n\n<p>Using AI thoughtfully does not make us less efficient. It makes us more responsible.<\/p>\n\n\n\n<p>This publication has been funded by the European Union. Its contents are the sole responsibility of TEDIC and do not necessarily reflect the views of the European Union.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Fernanda Carl\u00e9s In recent years, the use of artificial intelligence tools has become almost inevitable. 
In many workplaces, and [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":31427,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1233,1249],"tags":[1621,1594],"class_list":["post-31426","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog-en","category-disruptive-technologies","tag-ia-en","tag-seguridad-digital-en"],"_links":{"self":[{"href":"https:\/\/www.tedic.org\/en\/wp-json\/wp\/v2\/posts\/31426","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tedic.org\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tedic.org\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tedic.org\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tedic.org\/en\/wp-json\/wp\/v2\/comments?post=31426"}],"version-history":[{"count":5,"href":"https:\/\/www.tedic.org\/en\/wp-json\/wp\/v2\/posts\/31426\/revisions"}],"predecessor-version":[{"id":31489,"href":"https:\/\/www.tedic.org\/en\/wp-json\/wp\/v2\/posts\/31426\/revisions\/31489"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.tedic.org\/en\/wp-json\/wp\/v2\/media\/31427"}],"wp:attachment":[{"href":"https:\/\/www.tedic.org\/en\/wp-json\/wp\/v2\/media?parent=31426"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tedic.org\/en\/wp-json\/wp\/v2\/categories?post=31426"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tedic.org\/en\/wp-json\/wp\/v2\/tags?post=31426"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}