This blog was co-authored by Georgia Walker, with contributions from Emeline Béréziat and Redmar Leeuwendal.
We are living in the AI age. After steam engines, telephony, and the Internet, AI is perhaps the biggest technological revolution humankind has seen. With the rapid advancement of generative AI, particularly large language models, every household with Internet access now has the opportunity to harness this technology. While forecasts predict massive productivity gains and economic growth, AI also brings legitimate concerns: ethical dilemmas, job losses, and a growing environmental impact.
The development sector is no exception. Across climate, water, and agriculture, AI is expanding rapidly, attracting major players and promising improved efficiency, system resilience, and smarter resource allocation. But it’s crucial to ask: what problems are we really trying to solve, and for whom?
At Akvo, we believe that AI is a tool. And like any other tool, it comes with its own set of challenges and limitations that must be carefully managed. It is not a silver bullet. As a data and technology organisation, we are committed to applying AI responsibly to improve service delivery, boost business efficiencies, and elevate the lives and livelihoods of citizens in the world’s most resource-constrained environments. Our focus is on augmenting human capabilities - especially in places where data is scarce, challenges are complex, and solutions must be highly contextual.
We’re not in the business of chasing trends. We’re here to build solutions that matter - rooted in data, shaped by context, and driven by local needs. We believe AI should augment human intelligence and enhance our ability to address some of the most urgent challenges of our time - water security, climate resilience, and the food and income security of our farmers.
Use cases from the water, agriculture, and climate sectors
Predicting water infrastructure failures
Maintaining rural water infrastructure is a significant challenge in many low-resource settings, where monitoring systems are limited and breakdowns are often addressed reactively. In Sierra Leone, we partnered with WPdx and DataRobot to implement AI-powered predictive maintenance for rural water points, using supervised machine learning to estimate functionality and forecast failures. These models optimise maintenance by prioritising at-risk assets, reducing downtime, and improving service reliability. AI also improves data quality - supporting validation, identifying inconsistencies, and extracting verification data from images.
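As an illustration of the general approach - not Akvo's or DataRobot's actual pipeline - the sketch below trains a simple supervised classifier on WPdx-style water point records and ranks assets by predicted failure risk. The file name, column names, and model choice are assumptions made for the example.

```python
# Illustrative sketch only: the data file, column names, and model are assumptions,
# not the production pipeline used in Sierra Leone.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("water_points.csv")  # hypothetical export of water point records
features = ["pump_age_years", "water_source_type", "installer", "served_population"]
X = pd.get_dummies(df[features])      # one-hot encode categorical fields
y = df["is_functional"]               # 1 = functional, 0 = broken down

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Rank all water points by predicted risk of being non-functional, so that
# maintenance teams can prioritise field visits to the most at-risk assets.
# Column 0 of predict_proba corresponds to the non-functional class (label 0).
df["failure_risk"] = model.predict_proba(X)[:, 0]
print(df.sort_values("failure_risk", ascending=False).head())
```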
Enhancing search and retrieval across ocean plastic pollution data
The global fight against marine litter is hindered by scattered, hard-to-navigate data sources. Through the Global Partnership on Marine Litter, we used natural language processing to make large volumes of information searchable and usable for practitioners. This AI-powered tool connects governments around the world with the knowledge they need to prepare action plans - faster and more effectively.
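Under the hood, this kind of tool typically relies on text embeddings and similarity search. The sketch below is a minimal, generic example of that pattern - the documents, query, and model name are placeholders, not the GPML implementation.

```python
# Minimal sketch of embedding-based document retrieval. The documents and
# query are illustrative placeholders, not real GPML content.
from sentence_transformers import SentenceTransformer, util

documents = [
    "National action plan for reducing single-use plastics in coastal cities.",
    "Monitoring protocol for beach litter surveys and microplastic sampling.",
    "Guidance on extended producer responsibility schemes for packaging.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # small general-purpose encoder
doc_embeddings = model.encode(documents, convert_to_tensor=True)

query = "how to design a producer responsibility policy for plastic packaging"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank documents by cosine similarity to the query
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
for score, doc in sorted(zip(scores.tolist(), documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```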
Augmenting agricultural advisory
In Kenya, we worked with local partners to strengthen agricultural advisory services. Through Agriconnect, we are supporting agricultural extension officers to deliver more timely and accurate advice to smallholder farmers - on everything from crop disease identification to optimal fertiliser use. These tools don’t replace field expertise; they augment it, especially in under-resourced contexts where smartphones, timely information, and a manageable number of farmers per advisor remain privileges rather than the norm.
Above: a farmer in a maize field in Nyagatare, in Rwanda's Eastern Province. Credit: ©2009 CIAT/Neil Palmer
Segmenting populations for more targeted interventions
In Rwanda, smallholder farmers operate in highly diverse conditions. We used unsupervised machine learning - specifically cluster analysis - to identify farmer archetypes based on behavioural and contextual factors (for example, a 35-year-old woman farmer in Eastern Rwanda with no formal education and no smartphone would constitute one archetype). This helped local partners design targeted interventions and farmer awareness campaigns instead of relying on generic, one-size-fits-all approaches.
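For readers curious about the mechanics, the sketch below shows how such archetypes can be derived with k-means clustering. The survey file, feature names, and number of clusters are illustrative assumptions, not the actual variables or segmentation used in Rwanda.

```python
# Minimal sketch of archetype discovery with k-means clustering.
# File, features, and cluster count are illustrative assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("farmer_survey.csv")  # hypothetical household survey extract
features = ["age", "farm_size_ha", "has_smartphone",
            "years_of_schooling", "distance_to_market_km"]

X = StandardScaler().fit_transform(df[features])   # put variables on a common scale
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

df["archetype"] = kmeans.labels_
# Inspect cluster means to give each archetype a human-readable description
print(df.groupby("archetype")[features].mean().round(2))
```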
Challenges and risks
Data quality and quantity
Among the challenges, the quality and quantity of data emerge as a fundamental concern. AI models thrive on data richness, requiring large volumes of diverse and accurate information to yield meaningful insights. But AI does not just require large volumes of data - it also needs that data to be well structured, labelled (depending on the learning type), and often standardised. Data that is hard to quantify or label, such as Indigenous knowledge or highly context-specific information, therefore risks being sidelined. In the realms of water, climate, and agriculture, acquiring such datasets can be a formidable task: data collection is often constrained by limited monitoring networks, remote and inaccessible locations, and financial constraints. This information gap is especially evident for marginalised and vulnerable communities, who are routinely under-documented and underrepresented in data systems. The gap is not incidental; it stems from deeper structural issues rooted in historical under-representation, income inequality, and a systemic prioritisation of quantitative data over experiential and Indigenous knowledge. These dynamics can create a feedback loop: communities underrepresented in data are excluded from AI-driven innovation and decision-making, which further reinforces their exclusion and marginalisation.
Interpretability
Interpretability stands out as another significant concern, particularly for AI techniques that lack transparency, which impedes stakeholders' ability to understand and trust the outcomes these models generate. AI models, especially those driven by deep learning and complex algorithms, have shown remarkable capabilities in analysing vast and intricate datasets to yield insights that were previously unattainable. However, the intricate nature of these models often leads to a "black-box" scenario, in which the decision-making process behind a prediction is not easily decipherable. This opacity makes it hard to understand how the model arrives at specific conclusions and raises questions about the credibility of the results.
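Post-hoc explanation techniques can partially open the black box. As a minimal, generic illustration (on synthetic data, not any Akvo model), the sketch below uses permutation importance to show which inputs a trained model actually relies on.

```python
# Sketch of one common mitigation: explaining an opaque model after training
# with permutation importance. The data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))                       # four anonymous input features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:+.3f}")
```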
From this technical issue of interpretability, a political one follows: who gets to interpret the outputs, and for what purpose? In contexts such as community-led versus donor-led development, the authority and relevance of different interpretations may vary, with power dynamics influencing which interpretations are legitimised and acted upon.
Technical expertise
The requirement for specialised technical expertise is another challenge: the successful implementation of AI solutions often demands skills that are not readily available within governments or international development agencies. AI techniques span a wide spectrum of methodologies, from machine learning algorithms to data preprocessing and model optimisation, and they require a solid understanding of mathematical concepts, programming languages, and data science principles. However, the agencies (both governmental and non-governmental) tasked with addressing societal problems do not always have access to individuals with this specialised expertise, making it difficult to fully harness the potential of AI applications.
At the same time, it is important not to conflate the absence of technical expertise with a lack of capacity. Many local actors have deep contextual and domain-specific knowledge that is crucial for the effective design, implementation, and interpretation of AI tools. Framing them solely as recipients of technology overlooks their role as co-creators. Therefore, interdisciplinary collaboration and capacity-building (particularly efforts that bridge contextual and technical expertise) are essential to ensure that AI solutions are not only technically sound but also socially grounded and locally meaningful.
Bias
AI models inherently include a human element. Decisions about how a model is built, what data it learns from, and how it is fine-tuned all shape the system’s behaviour. These human decisions shape how AI systems interpret information, draw conclusions, and ultimately influence the real-world outcomes of their use. When left unchecked, this can lead to systems that reflect and reinforce existing social biases. Without careful consideration and mitigation, such biases can become deeply embedded in AI tools, exacerbating existing inequalities and disproportionately harming marginalised communities that are already underserved or overlooked. Examples include smallholder farmers being overlooked by AI-based credit-scoring systems, and natural language processing models underperforming on low-resource languages.
Environmental costs
While artificial intelligence holds immense potential for driving progress in the development sector, we must also acknowledge its significant environmental cost. Training and running AI models require substantial amounts of energy and water - natural resources that are already under pressure in many parts of the world. At Akvo, we believe in embracing AI not just for innovation, but for impact. This means using AI tools responsibly, staying informed about their environmental footprint, and choosing technologies that are optimised for efficiency. It also means working with partners to advocate for greener tech infrastructure and integrating sustainability criteria into our digital choices. The goal is not to reject AI, but to shape its use in ways that are aligned with the principles of equity, responsibility, and environmental stewardship.
Building systems, capacities, and trust in the future
Despite the risks, we believe that AI is a transformative technology. We need to embrace the future and shape it ourselves. Here are a few possible future trajectories where applying AI could make a real difference.
- Predicting behaviour of groundwater systems: As climate change and human activities reshape hydrological systems, AI-driven prediction models offer a way to manage uncertainty. By processing large datasets, AI models can capture complex relationships between variables such as precipitation, temperature, land use, and extraction rates. These models continuously learn and update as new data becomes available, improving groundwater predictions over time (a minimal sketch of this idea follows this list). This helps water managers anticipate changes in groundwater recharge, water tables, and flow directions, guiding sustainable practices and preventing issues like saltwater intrusion in coastal areas.
- Building and operationalising climate data ecosystems: AI-powered collaborative platforms use artificial intelligence and data analytics to create a digital ecosystem where stakeholders can share insights, data, and expertise. These platforms act as hubs for interdisciplinary collaboration, allowing scientists, policymakers, and local communities to contribute to solutions. One key application is data sharing and integration, where participants contribute weather, geological, and hydrological data, which AI algorithms then analyse. This shared data enhances the accuracy of predictive models, supporting more informed decision-making on climate scenarios and adaptation/mitigation strategies. AI platforms also facilitate scenario analysis, enabling stakeholders to simulate the outcomes of different management strategies, including factors like extraction rates, climate projections, and land use changes. At the same time, data ownership and governance significantly influence the effectiveness and fairness of such platforms. Without clear mechanisms for consent, governance, and benefit-sharing, there is a risk of data extraction without equitable returns. Ensuring data ecosystems are transparent and inclusive is thus crucial for just and effective action.
- Bridging institutional gaps and empowering youth in agriculture: In the Global South, there is an institutional gap in agriculture and significant youth unemployment. Training local youth as village-based agricultural extension advisors can fill this void. These young advisors will collect local data, images, and videos to improve AI models, making the models more contextually relevant. By combining local knowledge with AI, the goal is to create agricultural models that are better tailored to the community, improving both their reach and impact. As we push for grassroots empowerment through AI, we must remain mindful of the potential tension between the commercialisation of AI technologies and their use for the public good. It is crucial to ensure that AI is not only framed as a business opportunity but as a shared infrastructure that serves community needs. Additionally, young people and local actors may need ongoing support, not just one-off training, to adapt and contribute to evolving AI systems. We must also reflect on how ‘impact’ is measured. AI systems tend to prioritise quantitative outcomes like crop yield or efficiency. But these metrics might overlook qualitative and sociocultural dimensions of farming and community life. Ensuring that AI-driven impact aligns with community-defined goals is crucial, and perhaps the most complex challenge of all.
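To make the first trajectory above (groundwater prediction) slightly more concrete, here is a minimal sketch of how such a data-driven model might be trained and periodically re-trained as new observations arrive. The file, column names, and model choice are illustrative assumptions, not an operational groundwater model.

```python
# Minimal sketch: learning the relationship between climate/abstraction drivers
# and groundwater levels. The CSV, columns, and model are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

df = pd.read_csv("groundwater_monthly.csv", parse_dates=["month"]).sort_values("month")
features = ["precipitation_mm", "mean_temp_c", "irrigated_share", "extraction_m3"]
target = "water_table_depth_m"

model = GradientBoostingRegressor(random_state=0)

# Time-aware cross-validation: always train on the past, evaluate on the future,
# mirroring how such a model would actually be used by water managers.
cv = TimeSeriesSplit(n_splits=5)
scores = cross_val_score(model, df[features], df[target], cv=cv,
                         scoring="neg_mean_absolute_error")
print(f"Mean absolute error across folds: {-scores.mean():.2f} m")

# Retrain on all available data; in practice this step would be repeated as new
# monthly observations arrive - the "continuously updating" loop described above.
model.fit(df[features], df[target])
```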
While applying AI in these ways could transform the sector, realising this potential requires deliberate investment in systems, capacity, and trust building. A set of underlying technology principles and practices has to be put in place before we apply specific AI models and platforms to solve societal problems.
At Akvo, we will focus on building the appropriate foundational AI systems for low-resource environments where data is scarce, capacity is low, and needs are pressing. By integrating AI into everyday workflows and decision-making processes, we aim to build the capacities of governments, NGOs, cooperatives, and local businesses to better understand, prioritise, and address their own challenges. Our approach includes training policymakers and senior management to apply AI for more effective resource allocation and insight interpretation.
Beyond these process and capacity improvements, we are committed to building long-term trust in the technology by working with local communities and governments. We will collaborate with them to collect primary local data and curate secondary data to feed into AI systems, improve prompting, and use probabilistic thinking to make sense of AI-generated outputs. These insights will be embedded within local decision-making frameworks grounded in sound, participatory processes. For us, technology is not just about improving efficiency - it is about enhancing transparency and strengthening accountability.
In addition, we aim to build open source and responsible AI systems grounded in safety, ethics, and transparency. As computing power grows and algorithms become sharper, the quality and reliability of data will become increasingly vital. But as we have noted in an earlier blog, data is not a placeholder for truth. Truth-seeking demands patience, empathy, and the recognition that people are partners in development - not just data points.
AI holds transformative potential to accelerate sustainable development. But our practice must be rooted in human consciousness and wellbeing. Guided by a humanist philosophy, we can confront challenges of bias, instrumentality, and data scarcity.
Our ultimate goal is simple: to improve lives and livelihoods - especially for smallholder farmers, climate-vulnerable communities, and those without safe access to WASH services.