
The EU AI Act takes full effect in August 2026, with penalties of up to €35 million. Every organization deploying AI in Europe will be affected, including yours. But most compliance guides miss something important: social impact organizations already have what responsible AI actually requires. Mission clarity, community trust, values-driven decision-making. Most corporates spend years trying to build these foundations. Your organization has had them all along. Here are six practical ways to turn that advantage into real AI governance practice, drawn from a conversation we hosted with She Shapes AI.
Most responsible AI advice starts with what organizations lack. We want to start with what social impact organizations already have.
In March 2026, we hosted a conversation with Dr. Julia Stamm, founder and CEO of She Shapes AI, and Lisa Talia Moretti, digital sociologist and responsible AI practitioner. One point came through clearly: the first question any organization should ask before deploying AI is "why are we doing this and whom does it serve?" Social impact organizations have already answered that question through years of direct engagement with their communities.
Dr. Julia Stamm put it directly: social impact organizations have a clear advantage over many well-resourced corporates because they have a mission and they have clarity. They know why they are doing what they are doing and who they are working for. That clarity is precisely the filter that responsible AI adoption requires.
Lisa Talia Moretti reinforced this point by advising organizations to be problem-centric rather than technology-centric. AI is strong at pattern recognition, prediction, and forecasting. The organizations that get AI adoption right are the ones that match those strengths against specific operational problems rather than applying tools broadly in search of a use case.
You do not need a formal AI strategy or a dedicated ethics team to begin. Based on the conversation with She Shapes AI and our own experience working with over 2,000 social impact organizations since 2020, here are six steps any organization can take now.
Step 1: Ask why. Before adopting any AI tool, ask: why are we using AI to solve this specific problem? Both Dr. Julia Stamm and Lisa Talia Moretti identified this as the single most important starting point. This question aligns your AI investments with your mission and filters out tools that are solutions looking for problems.
Step 2: Map AI's strengths against your real bottlenecks. AI is strong at pattern recognition, prediction, and forecasting. Do not try to use every new tool; prioritize the areas where AI's capabilities align most clearly with the work your organization already does.
Step 3: Apply your existing values framework. If transparency and trust are core to your organization, evaluate how you communicate with your community about AI use. What transparency notices do you publish? What feedback mechanisms exist for stakeholders to raise concerns about data and automation? Your existing values framework can guide your AI governance without building one from scratch.
Step 4: Use free, structured toolkits. Three resources recommended during the webinar provide practical guidance. The Alan Turing Institute's AI ethics and governance practice introduction walks organizations through responsible AI step by step. The OECD's good practice principles on data ethics offer a strong framework for the public sector and nonprofits. UNESCO's Red Teaming Playbook helps you test and challenge your own AI tools. None of these require a technical background.
Step 5: Ask vendors to explain themselves in plain language. Dr. Julia Stamm offered a practical filter: if you do not understand what a vendor is telling you, that is a signal about them, not about you. Vendors who hide behind jargon may be using technical vocabulary as a sales tactic. Ask them to explain their product in plain language: what it does, how it affects your workflows, what data it uses, and what happens if something goes wrong. Lisa Talia Moretti shared an example where a salesperson admitted they did not fully understand the terminology they were using. If the people selling the technology cannot explain it, treat that as a warning sign.
Step 6: Find structured support for your stage of AI maturity. At Tech To The Rescue (TTTR), we have worked with over 2,000 social impact organizations across health, education, climate, and economic opportunity since 2020. The organizations that move fastest with AI are the ones with the clearest sense of purpose.
TTTR's AI Impact Lab helps organizations build their first AI prototype through a seven-week cohort. The AI Impact Scaling Program provides long-term support for organizations with existing prototypes or MVPs, including pro bono development partnerships, mentorship, and access to a Responsible AI Hub run in partnership with ecosystem contributors like She Shapes AI. Organizations accepted into the Scaling Program receive the equivalent of $150,000 to $330,000 in services at open market rates. She Shapes AI also runs training programs focused on AI fluency, strategy, and confidence for leaders.
One of the most persistent objections we hear is that responsible AI adds time and cost to development. Lisa Talia Moretti shared a concrete example from her work with a civil aviation organization that proved the opposite.
Before Moretti joined, product teams would complete full development sprints only to have their work fail compliance checks at the very end. She redesigned the AI product development lifecycle to bring data privacy officers, cybersecurity teams, and compliance specialists into conversations during the earliest stages of discovery. Product teams learned where the boundaries were before they started building.
The result: faster development cycles, fewer failed reviews, and products that passed compliance on the first attempt. This tracks with broader industry research. Studies from NIST and CISA, along with IBM's Cost of a Data Breach research, have consistently found that issues identified at the design stage cost roughly one tenth of what they cost to fix after deployment.
The takeaway for nonprofits: bringing ethical questions into your AI planning from the start is not extra work. It prevents the much larger work of fixing problems after they have been built.
Both speakers made an urgent case for civil society organizations to move beyond AI adoption and into active participation in shaping how AI is governed.
Dr. Julia Stamm noted that conversations where AI policy decisions are made are dominated by technology companies and, occasionally, policymakers. Civil society voices are largely absent. Lisa Talia Moretti added that nonprofits hold a unique position: they have both the trust and the direct relationships with the communities that AI systems affect. That makes them natural conduits for carrying community feedback to policymakers and technology builders.
Moretti also pointed to an opportunity that nonprofits are underusing: forming coalitions to establish community-based standards for responsible AI practice. Unlike private sector organizations that compete with one another, nonprofits can collaborate to create shared frameworks, publish case studies, and define what good practice looks like in specific sectors. Organizations like the European AI Fund are already supporting this kind of work within the EU.
At TTTR, our ecosystem model is designed to support exactly this. We connect social impact organizations with pro bono tech partners, mentors, and infrastructure so they can engage with AI from a position of strength.
The EU AI Act is the first comprehensive legal framework for artificial intelligence worldwide. Its high-risk system requirements take effect on August 2, 2026, with penalties for the most serious violations reaching €35 million or 7% of global annual turnover, whichever is higher. Every organization deploying AI in Europe needs to classify its AI systems, document risk assessments, and meet minimum compliance standards.
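To make "classify and document" concrete, here is a minimal sketch of what an internal AI system register could look like. Everything in it is an illustrative assumption: the AISystemRecord structure, its field names, and the simplified risk tiers loosely echo the Act's categories but are not an official EU AI Act schema, and actual classification should be done with legal guidance.

```python
from dataclasses import dataclass, field
from datetime import date

# Simplified tiers loosely mirroring the EU AI Act's risk categories.
# Illustrative only; the Act's real classification rules are more detailed.
RISK_TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI system register."""
    name: str            # what the tool is called internally
    purpose: str         # why are we using AI for this specific problem?
    risk_tier: str       # one of RISK_TIERS, assigned after legal review
    data_used: list      # categories of data the system processes
    owner: str           # the accountable person or team
    last_assessed: date  # date of the most recent risk assessment
    mitigations: list = field(default_factory=list)

    def __post_init__(self):
        # Reject typos before a record enters the register.
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"Unknown risk tier: {self.risk_tier!r}")

# A sample entry for a hypothetical tool.
register = [
    AISystemRecord(
        name="Grant application summarizer",
        purpose="Reduce staff time spent triaging incoming applications",
        risk_tier="limited",
        data_used=["applicant-submitted text"],
        owner="Programs team",
        last_assessed=date(2026, 3, 1),
        mitigations=["human review of every summary", "transparency notice"],
    )
]

for record in register:
    print(f"{record.name}: {record.risk_tier} risk, owned by {record.owner}")
```

Even a lightweight register like this has a useful side effect: it forces the "why are we using AI for this?" question to be answered, in writing, for every tool you deploy.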
But compliance and responsible AI are two different things. Lisa Talia Moretti explained the gap during our conversation with She Shapes AI. Compliance sits in a specific function within an organization, often far from the teams building products. In many organizations, the people designing AI systems do not know their compliance officer. The people handling compliance do not understand how product development works. That disconnect means problems get caught late, products fail reviews at the end of development cycles, and teams waste time rebuilding.
Responsible AI goes further. It requires external conversations with the communities you serve, alignment with their expectations, and ongoing evaluation of how your technology affects them. Meeting EU AI Act requirements is necessary, but the real advantage comes from the responsible AI practices that go beyond those requirements.
Social impact organizations already possess the mission clarity, community trust, and values alignment that responsible AI adoption requires.
Six practical steps, from asking one foundational question to engaging with structured ecosystems, make responsible AI practice accessible to any organization regardless of technical capacity.
Embedding ethical review early in the AI development process speeds things up and reduces costs by preventing late-stage fixes.
Nonprofits have both the opportunity and the credibility to help shape AI governance through coalitions, community standards, and direct advocacy.
EU AI Act compliance is a legal necessity for any organization deploying AI in Europe, but responsible AI practice builds the trust and stakeholder alignment that compliance alone cannot deliver.
Social impact organizations operate with clear missions, defined communities, and established values. These are the exact foundations that responsible AI requires. The first question in any responsible AI process is "why are we using this technology and whom does it serve?" Mission-driven organizations have already answered this through years of direct community engagement. Tech To The Rescue's AI Impact Scaling Program builds on this advantage by providing pro bono tech partnerships and responsible AI support.
Does responsible AI slow development down? Evidence from practice says no. Lisa Talia Moretti's work with a civil aviation organization showed that integrating responsible AI practices throughout development, rather than stacking compliance checks at the end, made product teams more efficient. Industry research from NIST, CISA, and IBM consistently shows that addressing issues at the design stage costs approximately one tenth of what post-deployment fixes require.
Start with one question before any AI adoption: "why are we using AI to solve this problem?" Then use freely available toolkits for structured guidance. The Alan Turing Institute's AI ethics and governance practice introduction, the OECD's good practice principles on data ethics, and UNESCO's Red Teaming Playbook all provide practical approaches that do not require technical backgrounds. Mapping your existing organizational values to AI use cases is also an effective and accessible first step.
If a vendor cannot explain their product in plain language, that is a significant warning sign. Organizations should ask vendors to describe what their tool does, how it affects workflows, what data it uses, and what happens when things go wrong. If the answers are unclear or rely on jargon without substance, consider alternative options.
Nonprofits hold a unique position as trusted intermediaries between the communities affected by AI and the policymakers making decisions about it. Beyond adopting AI responsibly, organizations can form coalitions to establish community-based standards, publish case studies, and advocate for their communities in policy discussions. Tech To The Rescue and She Shapes AI provide ecosystem support for nonprofits engaging in this work.
Tech To The Rescue's AI Impact Lab is a seven-week cohort program where social impact organizations prototype their first AI solution with pro bono tech support. The AI Impact Scaling Program provides long-term enablement for organizations with existing prototypes, including pro bono development partnerships, mentorship, infrastructure, and a dedicated Responsible AI Hub. Since 2020, TTTR has supported over 2,000 social impact organizations working with more than 200 pro bono tech company partners worldwide.
Dr. Julia Stamm is the founder and CEO of She Shapes AI, the leading global catalyst for women-led responsible AI innovation. A sociologist with a PhD in philosophy, she has worked with the European Commission and the G20. Julia is a Fellow of the RSA, a founding committee member of the AI in Media Institute, and a Fellow at the Hertie School's Center for Digital Governance. She was recognized as a LinkedIn Top Voice in January 2026.
Lisa Talia Moretti is a digital sociologist and responsible AI practitioner. She serves as Co-Chair of the AI Council at the British Interactive Media Association and is an Adjunct Senior Industry Fellow at RMIT FORWARD. Lisa was named one of the top 100 people in Britain changing the digital and technology landscape and has shaped policies for the UK government, the UK Parliament, and the World Health Organization.
This article is based on a conversation we hosted with She Shapes AI on March 25, 2026. Watch the full recording on YouTube | Learn about the AI Impact Scaling Program | Explore She Shapes AI