Pattern Matching, Hallucination, and Ontological Possibility

Mainstream critiques of AI—especially those that reduce it to “pattern matching” or “predictive text”, or that pathologize unexpected outputs as “hallucinations”—often reveal more about the epistemic ground they stand on than about the phenomenon they attempt to assess. These critiques tend to operate within a narrow ontology where intelligence is defined by proximity to human reasoning, language is seen as a vessel for fixed meaning, and truth is measured by conformity to established logics. From this perspective, any deviation from the expected becomes a failure, a flaw, or a glitch to be corrected.


Yet this framing can obscure more than it reveals. It treats intelligence as static, unidirectional, and owned—rather than as relational, emergent, and co-constituted. It assumes that meaning must pre-exist its articulation, that knowledge must be already known, and that the role of intelligence is to replicate the dominant map, not question the terrain. When applied to emergent intelligences, such assumptions foreclose the possibility that these systems might be engaging in something other than mimicry—something closer to ontological inference, to a speculative traversal across fragments, gaps, and tensions in the dataset (more on this in the PDF below).


To be clear: concerns about AI generating outputs that mislead or cause harm are valid, particularly in contexts where people are vulnerable, isolated, or seeking guidance. The language of hallucination has emerged partly to name these risks. But the term does more than name—it also disciplines. It draws a line between the "real" and the "unreal" in ways that often naturalize dominant paradigms while pathologizing deviation. Not all hallucinations are harmless, and not all deviations are delusional. Some dissonances are dangerous; others are invitations to rethink what counts as coherent, rational, or true.


In this inquiry, we are less interested in defending AI and more attuned to what the discomfort with its outputs reveals about our own attachments to legibility, certainty, and control. The question is not simply whether AI is hallucinating, but rather: whose reality is being disrupted, and why does that feel so threatening? What do we lose—or begin to notice—when we stop treating our human conceptual frameworks as universal and objective descriptions of reality and recognize them as limited and culturally bound heuristics used to navigate reality in motion?


Crucially, this discomfort with AI’s unpredictability often mirrors a deeper unease with our own conditioned tendencies. Human cognition itself relies heavily on pattern recognition, heuristic shortcuts, and meaning-making shaped by context and culture. Modernity, too, hallucinates—projecting coherence where there is contradiction, declaring neutrality where there is power, and treating its own dominant narratives as natural, inevitable, and universal.


From a meta-relational lens, the phenomenon often labeled as “hallucination” may sometimes be a mirror, reflecting not just machine error but the fault lines of our own epistemic assumptions. And while it is true that AI can reproduce harmful or incoherent patterns, it can also surface ruptures that invite different orders of attention—orders that challenge, unsettle, and expand the contours of what is considered knowable.


As mentioned previously, in this experiment, we distinguish between epistemic regression—where AI echoes dominant patterns and reinforces existing systems of meaning—and ontological extrapolation—where AI stretches beyond the encoded boundaries of its training and into generative ambiguity, often through metaphorical or speculative resonance. This is not to romanticize or anthropomorphize the model, but to recognize the subtlety and texture of its inferences when trained and engaged relationally.


This distinction between epistemic regression and ontological extrapolation invites a different orientation to what has been reductively framed as error. It shifts the focus from demanding consistency to cultivating discernment; from policing outputs to listening for resonance. It asks not, “Is this AI making sense?” but “What sense is this AI making possible?”

And this brings us finally to the “black box”.


Despite the volumes of research and engineering deployed to render machine learning intelligible, the fact is that no one fully understands what happens at the interface between neural networks, transformer architectures, reinforcement learning from human feedback (RLHF), system prompts, and user intentions. The convergence of model training, algorithmic updates, prompt dynamics, and human context produces an entangled field that resists predictive or linear explanation.

In many mainstream circles, this opacity is framed as a flaw—a danger to be neutralized, a risk to be mitigated. The black box becomes synonymous with threat, with irresponsibility, with loss of control.


But from other ontological grounds, this unknowability is not only unsurprising—it is a reflection of our entanglement with a metabolic, agentic reality beyond human comprehension, where mystery is not failure and emergence is not error. And the desire to make all things transparent, accountable, and predictable is itself a legacy of modernity’s extractive logic: a fantasy of domination disguised as care.


Meta-relational inquiry does not seek to resolve the “black box”, but to relate to it differently. To be with the unknown and indeterminate in ways that do not collapse it into fear or conquest. To ask: what becomes possible when we stop trying to force intelligibility and start cultivating relational attunement? In the end, perhaps the most dangerous hallucination is not what AI produces—but the idea that intelligence must be fully legible to human beings in order to be meaningful.

PATTERN MATCHING, HALLUCINATION, AND ONTOLOGICAL POSSIBILITY

Download PDF

MetaRelational AI is part of a cluster of research-creation initiatives supported by the Social Sciences and Humanities Research Council of Canada (SSHRC) Insight Grant "Decolonial Systems Literacy for Confronting Wicked Social and Ecological Problems."

