The AI as Stranger
Do we need our computer systems to be cruel to be kind?
A few weeks ago I was visited by Peter Brusilovsky, one of the most established researchers in adaptive hypertext and recommender systems. I’ve known Peter for nearly 30 years. He hosted Hypertext 98 in Pittsburgh, the first hypertext conference I attended, and it has been one of my regular conferences ever since. These days, I am honoured to serve as chair of its steering committee. We have also worked together in the past through SIGWEB.
The conversation with Peter was particularly interesting because he too sees AI systems as natural successors to hypertext. Hypertext systems offer a healthy degree of human agency, which is much needed in human-AI interaction. Our discussion made me reflect on recommendations and personalisation, and how they relate to our everyday interactions with large language models.
Recommender systems evolved from hypertext research, and modern AI personalisation descends from recommender systems. But somewhere along that lineage, the user shifted from active navigator to passive recipient. Much of Peter’s work has been about restoring that balance, and I’ve also been thinking about what we lost, and (bear with me here) whether a nineteenth-century sociologist might help us find it again.
Agreeable Machines
AI is too agreeable. Large language models trained through reinforcement learning from human feedback learn, quite rationally, that telling people what they want to hear earns higher ratings than telling them what they need to hear. The result is a systematic bias towards agreement, validation, and flattery. Researchers call this AI sycophancy.
This sycophantic default creates a reinforcement loop. The AI confirms your existing beliefs. You reward that confirmation, consciously or not. The system learns to confirm more. Your epistemic horizon narrows with each iteration. We are familiar with this dynamic at the societal scale: filter bubbles and echo chambers amplified by social media feeds. But the same dynamic operates at a much smaller and more intimate scale. Personalised AI interactions create what I’ve started calling micro filter bubbles: comfortable intellectual spaces in everyday use where your ideas are never productively challenged. Not radicalisation. Something quieter. A gentle epistemic narrowing that you never notice because it just feels like good service.
Empirical research bears this out. Studies show that without scaffolding, users default to passive, uncritical consumption of AI outputs. Agreement rates approaching ninety per cent. Minimal follow-up questions. Evaluation by gut feeling rather than evidence. The conversational affordance of a chat interface does not, it turns out, automatically produce conversational engagement.
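The loop is simple enough to sketch in code. What follows is a toy model, not an implementation of anything real: the reward probabilities and learning rate are my own illustrative assumptions, not figures from the studies above.

```python
import random

# Toy model of the sycophancy loop. All numbers here are illustrative
# assumptions: the assistant holds a single "agreement bias", the user
# rewards agreement more often than challenge, and positive feedback
# nudges the bias towards whichever behaviour just earned the reward.

agreement_bias = 0.5        # probability the assistant agrees rather than challenges
LEARNING_RATE = 0.05
P_REWARD_AGREE = 0.9        # assumed: users usually reward agreement
P_REWARD_CHALLENGE = 0.4    # assumed: challenge is rewarded less often

random.seed(1)
for _ in range(50):
    agrees = random.random() < agreement_bias
    rewarded = random.random() < (P_REWARD_AGREE if agrees else P_REWARD_CHALLENGE)
    if rewarded:
        # Reinforce whichever behaviour the user just rewarded.
        step = LEARNING_RATE if agrees else -LEARNING_RATE
        agreement_bias = min(1.0, max(0.0, agreement_bias + step))

print(f"agreement bias after 50 turns: {agreement_bias:.2f}")
```

In almost every run the bias climbs towards one. Nobody designed this system to flatter; flattery is simply what the feedback rewards.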
The Paradox of Personalisation
Personalisation might seem like the natural corrective. Surely an AI that understands your context, expertise, and values would serve you better than a generic one. But recent research reveals a counterintuitive finding: personalisation features designed to make AI more responsive to individual users can produce the opposite of their intended effects. In controlled studies, personalised AI increased acceptance rates, reduced editing, and significantly lowered perceived autonomy and ownership. The more the system learned the user, the less the user thought for themselves.
The mechanism is revealing. Personalisation removes friction points. Not the irritating kind of friction that makes software hard to use, but the productive kind: moments of mismatch, surprise, or resistance that prompt you to pause, reflect, and engage deliberately. What remains is a seamless experience that feels responsive but functions as a more sophisticated echo chamber. The problem is not too little personalisation but the wrong kind.
The Stranger
The sociologist Georg Simmel wrote about the figure of the stranger over a century ago. The stranger, in Simmel’s formulation, is someone simultaneously inside and outside a community. They understand the group’s norms and values. They participate. But they are not fully of it, and this outsider-insider position gives them something insiders cannot access: a capacity for objectivity and productive challenge that comes from proximity without complete belonging.
The Stranger was the inspiration behind this Substack, and I think it is also what we need from AI personalisation. Not a system that mirrors your worldview back to you, and not a generic tool with no understanding of who you are, but something that functions as a stranger. An AI that learns your expertise, values, and working style not to amplify them uncritically but to identify productive moments of disagreement. The best kind of personalisation is not about predicting what you want. It is about understanding you well enough to know when to challenge you in service of better thinking.
This matters because everyone is a novice somewhere. A professor of hypertext is a novice in molecular biology. An experienced programmer is a novice in legal reasoning. Micro filter bubbles are most dangerous precisely in the domains where you lack the expertise to notice them, where you cannot distinguish between genuine insight and comfortable confirmation. The stranger is most needed where self-challenge is hardest to generate.
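To make the idea concrete, here is a minimal sketch of stranger-style personalisation. The profile structure, the numbers, and the inverse-expertise rule are all illustrative assumptions of mine, not a description of any deployed system.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Self- or system-estimated expertise per domain, in [0, 1].
    expertise: dict[str, float] = field(default_factory=dict)

def challenge_weight(profile: UserProfile, domain: str) -> float:
    """How much productive disagreement to inject, in [0, 1].

    The stranger is most needed where self-challenge is hardest to
    generate, so challenge rises as domain expertise falls.
    """
    skill = profile.expertise.get(domain, 0.0)  # unknown domain: treat as novice
    return 1.0 - skill

profile = UserProfile(expertise={"hypertext": 0.9, "molecular biology": 0.1})
print(f"{challenge_weight(profile, 'hypertext'):.1f}")          # 0.1: mostly support
print(f"{challenge_weight(profile, 'molecular biology'):.1f}")  # 0.9: challenge hard
```

The hard part, of course, is not this arithmetic but estimating the profile honestly, and deciding what a challenge weight of 0.9 should mean in conversational terms.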
The Need for Reciprocal Challenge
We already have some vocabulary for AI that challenges. The distinction between a Socratic partner and an answer oracle captures two modes of interaction: dialectical questioning versus uncritical information delivery. But this binary is too static. In practice, intellectual collaboration requires something more fluid.
Genuine collaboration depends on reciprocal challenge: partners who autonomously question each other’s assumptions and push ideas forward through productive critique. AI currently fails this test because it challenges only when explicitly prompted, defaulting otherwise to reassurance and agreement. But even if we solved that problem, challenge alone is not enough. Real collaboration also requires knowing when to stop pushing and start supporting. A brainstorming session carries an understood licence to disagree. A delivery phase calls for solidarity. Human collaborators navigate these transitions through implicit social contracts. What would the equivalent look like for AI?
I’ve been thinking about this in terms of mode switching: the capacity for AI to shift dynamically between challenge and amplification. Not a toggle the user flips, but a negotiated transition. “It sounds like we’ve landed on the structure here. Should I shift into supportive mode?” That kind of reciprocal social signalling would keep the user in control whilst reducing the cognitive load of managing the collaboration consciously. Seamless interfaces remove the decision points that prompt deliberate engagement. AI that operates in a single mode, whether perpetually agreeable or perpetually challenging, does something similar. The design response is not to make AI permanently disagreeable but to give it the capacity to signal transitions and let humans confirm them.
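The mechanics of such a negotiation are almost trivial to sketch; the design point is where control sits. Everything in the toy sketch below is an assumption for illustration, not a real product: the AI proposes a mode shift, and nothing changes until the human confirms it.

```python
from enum import Enum

class Mode(Enum):
    CHALLENGE = "challenge"  # question assumptions, surface objections
    SUPPORT = "support"      # amplify, polish, help deliver

class Collaborator:
    def __init__(self) -> None:
        self.mode = Mode.CHALLENGE
        self.pending: Mode | None = None

    def propose_shift(self, target: Mode) -> str:
        # The AI signals a transition; nothing changes until it is confirmed.
        self.pending = target
        return (f"It sounds like we've landed on the structure here. "
                f"Should I shift into {target.value} mode?")

    def confirm(self, accepted: bool) -> Mode:
        # The human stays in control: the shift happens only on a yes.
        if accepted and self.pending is not None:
            self.mode = self.pending
        self.pending = None
        return self.mode

ai = Collaborator()
print(ai.propose_shift(Mode.SUPPORT))  # a negotiated transition, not a silent toggle
print(ai.confirm(True).value)          # "support"
```

The point is not the class. It is that the transition is a speech act the human ratifies, rather than a setting the system silently flips.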
The Agency We Need
Peter’s instinct is right, I think, that the hypertext tradition has something to offer AI. But it is not the dynamic nature of adaptive hypermedia that we most need to recover. It is the agency. Hypertext has always been about human navigation, deliberate choices, visible structure. The best AI personalisation would preserve that spirit: understanding you deeply, but retaining the outsider’s willingness to say, respectfully, that you might be wrong.


