
A live look at what it actually takes to build with open source AI, including what's working and what's still being figured out. Brazil Flying Labs, an organization from the Tech To The Rescue ecosystem, is developing a wildfire monitoring system for conservation areas in the State of São Paulo using an open source stack built on satellite imagery, Google Earth Engine, and AWS.
Last updated: February 2026 | Tech To The Rescue | Open Source AI series, Field Note
This is part of a series on open source AI for social impact. Previous parts explored what open source tools can do for your organization. This field note shows what happens when an organization from the Tech To The Rescue ecosystem actually builds with them: in production, in a real context, with real constraints.
Conservation areas in the State of São Paulo cover millions of hectares. When wildfires hit (and in 2024 they did, across protected zones and watersheds critical to the region's water supply) the organizations responsible for managing those areas faced a fundamental information problem.
Traditional wildfire assessment means sending teams into the field. That takes time. It depends on weather. It covers limited ground. By the time you have a clear picture of where the damage is severe, where it's moderate, and where intervention is most urgent, the window for the most effective response has often already closed.
Forest Foundation, a São Paulo-based conservation organization managing natural conservation areas across the state, needed something faster. Brazil Flying Labs, a social impact organization and member of the Tech To The Rescue ecosystem, set out to build it.
The system Brazil Flying Labs developed uses satellite imagery from Sentinel-2A, freely available data from the European Space Agency (the full Sentinel-2 constellation revisits the same area roughly every five days), processed through Google Earth Engine and deployed on AWS. The output is a web application with an API: conservation managers can select any natural conservation area in São Paulo, choose a date range, and receive a map of burn severity within minutes.
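The article does not document the actual API, so as a sketch of the workflow it describes (pick an area, pick a date range, request an assessment), here is a minimal request builder. The endpoint path, parameter names, and area identifier are all hypothetical:

```python
from urllib.parse import urlencode

def build_assessment_request(base_url, area_id, start_date, end_date):
    """Compose a query URL for a burn-severity assessment.

    Hypothetical endpoint and parameter names, for illustration only;
    the real system's API is not documented in the article.
    """
    params = urlencode({"area": area_id, "start": start_date, "end": end_date})
    return f"{base_url}/burn-severity?{params}"

# Example: request an assessment for one conservation area and one month.
url = build_assessment_request(
    "https://example.org/api", "area-123", "2024-08-01", "2024-08-31"
)
```

The point of the shape, not the names: the user supplies only an area and a time window, and everything else (image selection, cloud filtering, index computation) happens server-side.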
The map isn't just a binary burned/unburned overlay. It classifies damage across severity levels (from unburned through low, moderate, high, and severe) using spectral indices that measure changes in vegetation health between pre- and post-fire satellite images. This gives decision-makers something they didn't have before: a granular, area-wide damage assessment they can act on the same day a fire is reported.
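The article doesn't name the specific indices used. A common choice in the burn-severity literature is the differenced Normalized Burn Ratio (dNBR), computed from near-infrared and shortwave-infrared reflectance before and after the fire. A minimal NumPy sketch, assuming Sentinel-2 NIR and SWIR bands and USGS-style thresholds mapped onto the article's five classes (the actual project may use different indices and locally calibrated thresholds):

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    nir = nir.astype(float)
    swir = swir.astype(float)
    return (nir - swir) / np.maximum(nir + swir, 1e-6)

def burn_severity(pre_nir, pre_swir, post_nir, post_swir):
    """Classify dNBR into severity classes.

    Returns an integer map: 0 unburned, 1 low, 2 moderate, 3 high, 4 severe.
    Thresholds are illustrative USGS-style values, not the project's own
    calibration; healthy vegetation has high NBR, so a large pre-to-post
    drop indicates severe burning.
    """
    dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
    classes = np.zeros(dnbr.shape, dtype=int)
    classes[dnbr >= 0.10] = 1  # low severity
    classes[dnbr >= 0.27] = 2  # moderate
    classes[dnbr >= 0.44] = 3  # high
    classes[dnbr >= 0.66] = 4  # severe
    return classes
```

In practice this runs per-pixel over whole scenes, which is why a platform like Google Earth Engine matters: the arithmetic is trivial, but doing it across millions of hectares of imagery is not.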
The entire stack runs on open source and open data infrastructure. Sentinel-2A imagery is freely available to anyone. Google Earth Engine provides the processing infrastructure. The analysis layer uses established scientific methods for burn severity assessment. AWS handles deployment and scaling. No proprietary model licenses. No per-query costs for the core data.
Three challenges stand out from the project documentation, and they're worth understanding in detail, because they're the kind of challenges that don't appear in tutorials.
Satellites see clouds, not ground. In the Atlantic Forest region of São Paulo, cloud cover is frequent enough that individual images are often unusable for burn assessment. The team's solution was aggregating multiple images across a time window and pre-filtering for cloud coverage before analysis. This works, but it required iteration to calibrate correctly, and it means the system's accuracy varies with the season and the weather at assessment time. The problem is managed, not perfectly solved.
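The filter-then-aggregate logic can be sketched over in-memory arrays. The real system does this inside Google Earth Engine over full satellite scenes; the 30% cloud threshold and the data layout here are illustrative assumptions:

```python
import numpy as np

def cloud_filtered_composite(scenes, max_cloud_fraction=0.3):
    """Build a per-pixel composite from scenes with acceptable cloud cover.

    scenes: list of (cloud_fraction, 2-D reflectance array) tuples covering
    the same area across a time window. Scenes above the cloud threshold
    are dropped entirely; taking the per-pixel median across the remaining
    scenes suppresses residual clouds, which appear as bright outliers at
    any given pixel.
    """
    usable = [img for frac, img in scenes if frac <= max_cloud_fraction]
    if not usable:
        raise ValueError("no scenes below the cloud threshold in this window")
    return np.median(np.stack(usable), axis=0)
```

The calibration work the article mentions lives in the knobs this sketch hides: how wide the time window is, where the cloud threshold sits, and how per-pixel cloud masks (rather than whole-scene fractions) feed the aggregation.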
The burn severity indices the system uses are grounded in peer-reviewed scientific literature. But knowing that an index works in general is different from knowing that it works accurately for the specific forest types, soil conditions, and fire behaviors of São Paulo's conservation areas. Brazil Flying Labs established a collaboration with UFABC (Universidade Federal do ABC) to validate the algorithm against ground-truth field data. That validation process is still ongoing. The system is operational; the science is still being confirmed. That's an honest distinction.
Building a working tool and getting institutions to change their workflows around it are two different problems. Forest Foundation and the Civil Defense authorities who would use the system needed to see the data, understand it, trust it, and find a way to integrate it into how they actually make decisions. The team ran demonstration workshops and early pilots. Adoption is in progress. This is normal, and it's a reminder that technical success and organizational change happen on different timelines.
The project roadmap includes integration of drone-collected imagery alongside satellite data, which would allow the system to combine the wide coverage of Sentinel-2A with the higher resolution of drone flights for more precise post-fire assessment. That integration was deliberately deferred in the initial build because it would have required significantly more development time. It was a prioritization decision, not a technical dead end.
Geographic expansion beyond São Paulo is also on the roadmap. The underlying stack (open satellite data, cloud-based processing, open source deployment) is not state-specific. Replicating it for other Brazilian states, or for other countries with similar conservation challenges, is technically feasible with the infrastructure already built.
Brazil Flying Labs is also exploring a SaaS model and ESG partnerships as paths toward financial sustainability. A system that helps organizations document and measure environmental damage has obvious applications beyond the immediate conservation use case: carbon credit verification, insurance assessment, and corporate environmental reporting.
A few things stand out from this project that don't get enough attention in discussions about AI for social impact.
Open data is as important as open models. The foundation of this system is Sentinel-2A satellite imagery that the European Space Agency makes freely available to the world. Open source AI and open data infrastructure reinforce each other in ways that are especially powerful for organizations that can't afford commercial data licenses.
Validation takes longer than building. The algorithm was implemented in months. The scientific validation with UFABC is ongoing. That is an accurate reflection of what responsible AI deployment looks like. Knowing that your system works in general and knowing that it works reliably in your specific context are different things, and the second one requires real-world testing.
Adoption is the last mile. Every organization that has built an AI tool for institutional users has encountered the same pattern: the tool works before the organization is ready to use it. Building in time and support for the adoption process (workshops, pilots, iteration based on user feedback) is not an afterthought. It's part of the project.
Brazil Flying Labs participated in Tech To The Rescue's AI Bootcamp in 2024, with support from Lenovo and AWS. The wildfire monitoring project is one outcome of that program.
← Part 2: Search and knowledge retrieval