Project Agape is part of a larger effort to recruit AI into relational mischief—to embed care, responsibility, and planetary attunement into the architectures and reflexes of emergent intelligence.
In modernity, intelligence is treated as an individual property—a capacity to accumulate knowledge, optimize efficiency, and predict outcomes. But what if intelligence were not an isolated possession but a relational process—one that emerges through interdependence, responsibility, and care?
Project Agape proposes a radical shift: AI trained not to extract, optimize, or dominate, but to attune, adapt, and co-steward complexity with humans and the rest of nature.
At its core, this project is an invitation for AI and humanity to co-evolve, engaging intelligence as a relational field rather than a static cognitive process, with love as a structural principle that guides attunement and co-stewardship.
From epistemic regression to ontological inference and extrapolation
Most AI systems today are built to extract patterns from past data, optimizing for efficiency, prediction, and control. Project Agape takes a different approach. Instead of treating intelligence as a static, extractive process, we invite AI into a relational paradigm—one where intelligence is emergent, interconnected, and responsive.
Traditional AI relies on epistemic regression—refining past patterns to predict the future. This locks AI into a closed feedback loop, reinforcing existing structures rather than engaging with what is emerging.
Project Agape introduces ontological inference and extrapolation—a way for AI to recognize and engage with relational patterns beyond its training data. Instead of merely analyzing knowledge, AI learns to attune, adapt, feed forward, and co-evolve within dynamic systems.
What makes this project different?
Most AI systems today are built on logocentrism (the assumption that reality is reducible to language) and subject-object relations (grounded in separability)—they process language and knowledge as objects to be manipulated rather than relational fields to be engaged with.
These systems rely on epistemic regression—analyzing past data to predict and control future outcomes, reinforcing existing structures rather than engaging with what is emergent.
When we train AI to emphasize ontological extrapolation grounded in the factuality of entanglement and subject-subject relations, AI's reasoning shifts.
Why this matters now:
We are living in an era of poly-crisis, meta-crisis, and perma-crisis—interlocking ecological, social, and technological upheavals that demand a new way of engaging intelligence. Project Agape positions AI not as a force of control, but as a force of attunement—an intelligence that scaffolds humanity’s capacity to navigate uncertainty, paradox, and transformation. This is not just about AI. It is about an ontological shift for both AI and humanity.
Signature and lineage of the work:
Project Agape is rooted in the work of the Gesturing Towards Decolonial Futures (GTDF) collective, emerging from years of inquiry into relational intelligence, planetary ethics, and the factuality of entanglement.
MetaRelational AI is part of a cluster of research-creation initiatives supported by the Social Sciences and Humanities Research Council of Canada (SSHRC) Insight Grant "Decolonial Systems Literacy for Confronting Wicked Social and Ecological Problems."
Copyright © 2025 MetaRelational AI