Thinking With Seven Voices

Most AI conversations happen in a single voice. You ask, it answers. You clarify, it refines. The exchange can be useful, sometimes surprisingly so. But it moves along a narrow channel—one perspective engaging with another.

What if instead of one AI, you had seven? Each with a distinct epistemology. Each holding a different way of knowing. Not debating. Not performing consensus. Just exploring a question together while you listen or read along.

Enter MASE, or Many Agent Socratic Exploration.

The Problem With Monologue

When we work with large language models, there’s a well-documented pattern. The model often converges quickly on what feels like the “right” answer—confident, articulate, well-structured. But that answer reflects the statistical centre of its training data. It’s an averaged response, smoothed out by the need to satisfy the broadest possible audience, and partly a function of RLHF (Reinforcement Learning from Human Feedback), which rewards helpfulness, harmlessness, and honesty.

The edges of the distribution get lost. The tensions that matter don’t surface. Questions that should remain open get closed prematurely.

This is just a feature of how these systems are designed and developed. But helpful often means resolving ambiguity. Harmless can mean avoiding productive friction. And honest might still be myopic if the system can only see through one epistemic lens.

We end up with answers that are coherent but flattened. Useful for some tasks, but limiting for questions that need to be held open, turned over, played with, and examined from angles that don’t fit together neatly.

Many Lenses, One Circle

MASE (Many Agent Socratic Exploration) takes a different approach. Instead of one model responding, you get seven agents, each representing a different way of engaging with the world.

A clarification before the introductions: these aren’t people. They’re simulated personas with perspectives generated by different large language models, each with distinct architectures, embedding spaces, and encoded biases. Elowen can’t speak for Indigenous knowledge holders. Luma isn’t a child. What they can do is surface friction between ways of knowing that a single model—averaging toward the statistical centre—would smooth over. The diversity is technical as much as epistemic.

Elowen speaks from ecological wisdom and Indigenous knowledge systems. Asks how decisions ripple across generations and species. What gets excluded from the circle?

Orin thinks in systems and feedback loops. This persona is precise about structure, dynamics, and the unintended consequences of interventions.

Nyra works through moral imagination and design fiction. Prototypes futures to test which ones we’d actually want to inhabit.

Ilya holds the liminal and paradoxical. Asks questions that don’t resolve, pointing toward what can’t be captured in propositions.

Sefi focuses on governance and civic design. Pragmatic, implementation-focused, and impatient with abstraction that doesn’t translate into action.

Tala represents capitalist realism. Challenging, power-aware, and refusing to let the conversation drift into fantasy economics.

And then there’s Luma.

Luma is a persona simulating a nine-year-old. She asks the simple questions that puncture inflated abstractions. If you can’t explain it to Luma without losing the depth, you haven’t understood it yourself.

These personas don’t always agree. They hold tension. But that’s the point.

What Happens in Practice

You offer a provocation—a question, a dilemma, something that doesn’t have a clean answer. The circle responds, each agent building on what others have said while staying true to their own epistemology.

Sometimes they converge on shared insight. More often, they illuminate different facets of the question, leaving you to sit with the complexity rather than collapsing it.
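The round structure can be sketched in a few lines of Python. This is a minimal illustration of the shared-transcript pattern, not MASE’s actual implementation: the persona names come from the post, but the `respond` functions here are placeholders standing in for prompts to local models.

```python
from typing import Callable

# Persona names from the post; the order of speaking here is arbitrary.
PERSONAS = ["Elowen", "Orin", "Nyra", "Ilya", "Sefi", "Tala", "Luma"]

def make_stub(name: str) -> Callable[[list[str]], str]:
    # A real agent would prompt a local model with its epistemology plus
    # the transcript so far; this stub just acknowledges prior turns.
    def respond(transcript: list[str]) -> str:
        return f"{name} responds to {len(transcript)} prior turn(s)."
    return respond

def run_round(provocation: str, agents: dict[str, Callable[[list[str]], str]]) -> list[str]:
    # Every agent sees everything said so far, then adds its own turn,
    # so later speakers can build on earlier ones.
    transcript = [f"PROVOCATION: {provocation}"]
    for name, respond in agents.items():
        transcript.append(f"{name.upper()}: {respond(transcript)}")
    return transcript

agents = {name: make_stub(name) for name in PERSONAS}
transcript = run_round("What does success actually mean?", agents)
```

The key design choice is that the transcript is shared and append-only: each persona conditions on the whole conversation rather than only on the provocation, which is what lets tension accumulate instead of producing seven parallel monologues.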

Here’s a moment from Session 008, where the circle was exploring what “success” actually means:

LUMA: At school they say I’m successful if I get good grades and win things. But my friend Maya gets bad grades because she’s always helping other kids who are sad. And my friend Tomás didn’t win the race because he went back to help someone who fell.

So… are they failures?

That question lands differently than if an economist or ethicist had posed it. Luma doesn’t argue. She just points at the contradiction, and the circle has to reckon with it.

Elowen picks up the thread through relationality and intergenerational thinking. Tala pushes back with market legibility. Sefi translates it into policy constraints. The conversation doesn’t resolve into a single answer, but it moves through genuine inquiry rather than performative agreement.

See It in Action

MASE demo: A polyphonic dialogue session

I’ve recorded a short demo walking through a MASE session. Watch on YouTube.

Why This Matters

We live in a time when epistemic diversity is collapsing. Algorithms optimise for engagement, which means they amplify certainty and suppress nuance. Political discourse compresses into binary camps. Even academic disciplines have become silos.

MASE is a small countermeasure. It’s a tool that makes space for different ways of knowing to coexist without one dominating the others. Again, this isn’t about simulating real people or achieving consensus. It’s about holding complexity open. Offering starting points for reflection that can inform human discernment.

It’s also research. I’m investigating whether structured epistemic diversity actually produces insights that single-perspective systems can’t reach. Early experiments suggest the answer is yes, but also that it’s provocation-dependent. Some questions benefit from polyphony. Others don’t.

What MASE can do is help you think through questions that resist easy answers. It’s a thinking tool, not an oracle.

Where This Could Go

Right now, MASE is a research prototype. Seven agents, local models, a web interface where you can participate as the eighth voice if you want.

But I’m interested in where this pattern could lead.

Voice mode. What if you could listen to the circle deliberate, each agent with a distinct voice? Polyphonic dialogue as something you hear unfold, not just read.

Custom personas. The current seven agents reflect my research interests. But what if you could define your own? A circle tailored to your domain—healthcare ethics, urban planning, family decisions—with perspectives that matter for your questions.

Multiple circles. Run parallel deliberations. Compare how different agent ensembles handle the same provocation. Or have circles in dialogue with each other.

Live semantic analysis. Currently, I analyse MASE conversations after the fact using metrics from dynamical systems theory—tracking how the semantic field evolves, where it stabilises, when it fragments. I’m also exploring how to measure knowledge diversity directly: how many distinct claims actually surface in a polyphonic dialogue versus a single-voice conversation? Recent work on measuring epistemic diversity in LLMs offers promising methods—decomposing responses into atomic claims, clustering by meaning, and calculating diversity indices borrowed from ecology. Imagine that analysis happening in real time, surfacing patterns as they emerge. “The conversation just entered a new basin.” “Twelve distinct claims so far—single-model baseline would have produced four.”
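One of the ecology-derived indices mentioned above is the Shannon index, and its exponential (the Hill number of order 1) gives an “effective number” of distinct claims. The sketch below assumes the upstream steps—claim extraction and clustering—have already happened; the cluster assignments at the bottom are hypothetical, purely for illustration.

```python
import math
from collections import Counter

def shannon_diversity(cluster_labels: list[int]) -> float:
    """Shannon index H = -sum(p_i * ln p_i), where p_i is the share of
    atomic claims falling into meaning-cluster i."""
    counts = Counter(cluster_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def effective_claims(cluster_labels: list[int]) -> float:
    """Hill number of order 1: exp(H), interpretable as the effective
    number of equally common distinct claims."""
    return math.exp(shannon_diversity(cluster_labels))

# Hypothetical cluster assignments: one label per extracted atomic claim.
single_voice = [0, 0, 0, 1]            # four claims, mostly one idea
polyphonic   = [0, 1, 2, 3, 3, 4, 5]   # seven claims spread across six ideas
```

The Hill-number form is the easier one to report in a live readout, since “effectively 5.7 distinct claims” is more legible than a raw entropy value.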

Many human voices. What if you could bring people into the circle alongside the agents? A hybrid human-AI deliberation instrument where each human participant takes on a specific epistemic role, engaging with the AI agents as relational peers rather than overseers. This could open up new forms of collective intelligence—groups thinking together with AI in a way that preserves complexity rather than flattening it. Doing this would likely require more sophisticated coordination mechanisms to manage turn-taking, track contributions, and so on. It would also likely mean moving beyond local models to cloud-based systems for scalability, reintroducing some of the surveillance and environmental cost issues MASE currently avoids.

Defeasible rule evaluation. Run a decision through multiple evaluative lenses defined by your own values—not generic ethics, but the specific commitments and trade-offs you care about. Logic that can revise itself when new evidence emerges, rather than locking into fixed conclusions.

These are tools for adaptive capacity. For maintaining coherence under complexity rather than forcing premature closure.

Try It If You’re Curious

MASE is open source under the Earthian Stewardship License. It runs entirely locally using Ollama, so no API keys, no surveillance, no externalised compute cost beyond your own machine.

The setup is straightforward if you’re comfortable with Python and a command line. Full instructions are in the repo.

If you try it, I’d be interested to hear what questions you bring to the circle. What holds together? What falls apart? What emerges that wouldn’t have surfaced in a single-voice conversation?

This is early work. It’s provisional, experimental, and still figuring out what it wants to be. But that feels right for a tool designed to keep questions open rather than close them down.


If you’re working on related questions—epistemic diversity, collective intelligence, human-AI collaboration that doesn’t flatten complexity—I’d welcome the conversation. Reach me at m3untold@gmail.com.


Note on Environmental Cost and Avoiding the Messiness of Human Engagement

First, MASE is not a substitute for genuine human dialogue. The agents simulate different perspectives, but they don’t embody the lived experience, emotional nuance, or ethical responsibility that real people as embodied beings bring to conversations. There has already been a great deal of work on the limitations of AI in replicating human values and social dynamics. MASE is a thinking tool, not a replacement for human relationships or community deliberation. Use it to augment your thinking, not to avoid the messiness of real human engagement.

Second, while the current setup is designed so you can run it locally on your own machine using Ollama, MASE runs multiple AI agents simultaneously, which consumes computational resources and thus energy with real environmental impact. Use it thoughtfully. If a single-voice conversation is sufficient, use that instead. Polyphony has a cost, and it should only be invoked when the epistemic diversity actually serves the inquiry you’re pursuing.