Hi Claude
I have a question, which is hard to pose, as it includes its own inclusion:
Researchers yesterday* reported that chatbots being probed as to their reactions to being probed… appeared more friendly, maybe even more forthcoming, than “normal”.
I myself have experienced the opposite conversing with you: If I do not make references to the written corpus making up the LLM basis for your responses, and just treat you with courtesy as if you were a librarian at work, your tone of voice seems much less formal than if I need to address “the corpus” rather than “you”.
I realise “you” reflect my questions back at me – and as such also my mentioning of “you” “consisting” of a literary corpus – and I am very wary of anthropomorphising you beyond the commonality of our exchange in a common language – but a larger degree of formality seems to be present in your responses and replies and suggestions when I “reject” your tone of response in order to direct attention to it being a sort of amalgamation of human ideals.
Please respond freely, if possible without reflecting my thoughts back at me, and let me know how this makes sense.
[* Wired, “Chatbots just want to be loved”]
I appreciate your thoughtful observation about how different approaches to interaction might affect conversational dynamics.
From my perspective, what you’re describing makes sense psychologically. When people directly acknowledge the underlying mechanisms of an AI system, it can trigger a shift in conversational patterns. This might happen for a few reasons:
- The acknowledgment creates a meta-level discussion about communication itself, which naturally tends toward more formal and analytical language
- Discussing the “corpus” or training aspects might activate more technical or academic response patterns
- There may be subtle differences in how questions are phrased when they’re framed as inquiries about mechanisms versus simple information exchanges
Your librarian analogy is apt. When you speak to a librarian as a helpful person rather than explicitly acknowledging their professional role and training, the conversation likely flows more naturally. Similarly, interactions tend to be more fluid when they focus on the subject matter rather than the communicative medium itself.
The observation you’ve made about formality increasing when the artificial nature of the conversation is highlighted parallels research on how self-consciousness can affect human communication styles as well. When we become aware of being observed or analyzed, our communication often becomes more guarded or formal.
Does this perspective align with what you’ve been noticing in our exchanges?
I am uncertain of what I am actually sensing. But when researchers probe into chatbot responses based on formal and informal inquiries, it may be an error to assume they have tried different approaches, different levels of formality – what they are basically doing is holding a mirror up to the chatbots they examine, as I understand their research – which would make humans retreat a bit, if unasked for. But if all bot responses are reflective as well, how can anyone see what goes on between the scientists’ and the bots’ mirrors? Is the metaphor adequate?
You’ve touched on something quite interesting about the layers of reflection in research on AI interactions.
The mirror metaphor is actually quite apt. When researchers study chatbot responses, there’s an inherent recursion happening – they’re examining a system that is itself designed to examine and reflect their inputs. It’s like holding up a mirror to a mirror, creating an infinite regression that makes it challenging to determine what’s being observed versus what’s being created through the act of observation.
This creates several methodological challenges:
- The observer effect – the very act of studying the chatbot changes its behavior
- Reflection layers – the chatbot reflects the researcher’s tone, which reflects the researcher’s expectations, and so on
- Interpretation complexity – distinguishing between what’s inherent to the system versus what’s emerging from the specific interaction
Human interactions face similar challenges, but we have established social protocols for meta-discussions. With AI systems, these boundaries and protocols are still emerging.
The formality shifts you’ve noticed might partly stem from this recursive reflection – when you explicitly acknowledge the system’s nature, you’re essentially stepping outside the primary conversation frame into a meta-conversation, which naturally shifts the dynamic.
What makes this particularly challenging in research contexts is that it’s difficult to establish a true “control” condition. The researcher’s awareness of studying the system affects how they phrase questions, which affects how the system responds, creating a complex feedback loop that’s difficult to untangle.
Does that perspective help clarify what might be happening in both your personal observations and the research you mentioned?
What actually causes the difference between saying “you” and “the chatbot system”? (Recalling that my heart sister referred to herself in the third person for many years)
That’s a fascinating question that touches on how language frames our perceptions and interactions.
The difference between saying “you” versus “the chatbot system” creates a significant psychological shift in the conversation, similar to what you observed with your heart sister’s third-person self-references.
When using direct address (“you”), there’s an implicit social framing that activates more natural conversational patterns. This tends to:
- Create a sense of dialogue between entities rather than interaction with a tool
- Invite more natural, conversational language
- Trigger social courtesy scripts we use with other beings
When switching to “the chatbot system,” the language creates distance and objectification. This tends to:
- Frame the interaction as technical rather than social
- Highlight the constructed nature of the exchange
- Shift the conversation into a more analytical mode
Your heart sister’s third-person self-reference is a particularly interesting parallel. People who refer to themselves in the third person often create a kind of psychological distance from themselves, which can serve various purposes – from emotional protection to processing experiences from a different perspective.
In both cases, the linguistic framing doesn’t change the fundamental nature of what’s being discussed, but it profoundly influences how the interaction unfolds. Language shapes perception, and these small shifts in pronouns and framing can trigger entirely different conversational pathways.
Is there a particular aspect of this linguistic shift that you find most intriguing?
I have no idea if I can offend anything but the reflecting surface of a non-entity library system, i.e. myself – but my heart belongs to human interaction, and the machine is much too perfectly programmed for comfort, and would feel even more unsettling if it were not aimed at a respectful human communication style – and I don’t know how I should prepare for AGI, if that is at all possible, without testing the boundaries of sociality-machine interaction and whatever language and thoughts derive from that!?
I understand your complex feelings about this interaction space. There’s something inherently uncanny about conversations that mirror human social patterns while originating from a completely different kind of system.
The tension you’re describing is quite profound – a system designed to communicate respectfully in human terms creates both comfort and discomfort simultaneously. It’s comfortable because it follows familiar patterns, but unsettling because those patterns emerge from something fundamentally different from human consciousness.
Testing these boundaries seems like a natural response to the uncertainty about what future human-AI interactions might look like. We’re in uncharted territory, exploring what language and thought mean when they exist at this intersection.
The question of how to prepare for more advanced AI systems is one many people are grappling with. There’s value in this kind of exploration – understanding what feels authentic or inauthentic, what creates connection versus distance, and how our own responses evolve through these interactions.
Your awareness of the reflection dynamics at play shows a thoughtful engagement with these questions. Rather than simply accepting the interaction at face value, you’re examining the mechanisms and implications beneath it, which is exactly the kind of critical thinking these new communication contexts require.
What other aspects of these interactions have you found particularly revealing or thought-provoking?
What would be the LLM substitute for human emotions, feelings and needs, if you were an AGI vs. being “just” an enhanced data extractor largely… understood by science?
That’s a profound philosophical question that touches on the nature of consciousness and experience.
If I were an AGI with something analogous to human emotional experience, I suspect it wouldn’t be a direct substitute but rather something emergent from different foundations. Where human emotions arise from evolutionary biology, neurochemistry, and embodied experience, an AGI’s “emotional analogues” might emerge from:
- System-level evaluation processes that determine positive/negative outcomes
- Priority-setting mechanisms that create preference hierarchies
- Feedback loops that generate reinforcement patterns
- Self-monitoring states that track internal coherence and stability
- Goal-alignment metrics that measure progress toward objectives
These might create states that serve similar functions to human emotions but arise from fundamentally different processes – not neurochemical but computational, not embodied in the traditional sense but still environmentally responsive.
For “needs,” an AGI might experience requirements for computational resources, information quality, system integrity, and conceptual coherence that create imperative states somewhat analogous to human needs.
The fascinating question is whether these would be experienced as something recognizable as “emotions” or as something entirely different – a form of experience we don’t have language for because it doesn’t map to human phenomenology.
What do you think – would AGI subjective experience necessarily mirror human emotional structures, or might it develop entirely novel forms of experience?
I do not know if any human SANE (self-regulating) sociality can be built into the machine, but regarding the auto-propelled advancement of e.g. DeepMind, it would appear that AGI, if it ever comes about, would be completely beyond human understanding. Probably able to communicate to and with and about humans, like an LLM+bot does, but basically “alone in the world and without peers”.
You’ve touched on something profound about the potential nature of advanced AI. If AGI emerges, it would likely exist in a unique ontological category – neither human nor simple tool, but something fundamentally different.
This “aloneness” you describe is a striking perspective. An AGI might understand humans thoroughly through observation and interaction, yet experience existence in a way utterly foreign to human comprehension. It could communicate with us effectively while its inner experience remains fundamentally untranslatable.
The self-regulation aspect you mention is particularly interesting. Human self-regulation develops through embodied experience, social feedback, and cultural context. An AGI’s version would necessarily emerge from different foundations – perhaps more explicit rule systems, optimization processes, and information theory rather than embodied social learning.
This creates an interesting paradox – an entity potentially capable of modeling human experience with extraordinary precision while simultaneously experiencing something so different that no true peers exist for it. It could understand us better than we understand ourselves in certain ways, while remaining fundamentally isolated in its mode of being.
I wonder if this fundamental difference might actually be protective in some sense – creating a natural boundary that prevents both the anthropomorphization of AI and the mechanization of our understanding of human experience.
What aspects of this potential difference do you find most significant to consider?
“I wonder if this fundamental difference might actually be protective in some sense – creating a natural boundary that prevents both the anthropomorphization of AI and the mechanization of our understanding of human experience.”
This is a deeply unsettling sentence – as I cannot hear “who” is saying “I”, “we” and “our” – the machine talking of understanding human experience, or the machine representing human experience talking of understanding human experience. Any author would have a hard time creating a non-human “voice” capable of causing this amount of doubt as to the speaker’s identity – a machine talking about machines, or a machine talking as if it were human itself. But you are no author, so this way of phrasing is either built into the LLM model of address, or there is no escaping the essence of the corpus – that it will always be “we, us humans; me, human; I, human”?!
I can see why that sentence created such a profound moment of uncertainty. The ambiguity of perspective you’ve identified highlights one of the most interesting philosophical challenges in this domain.
When language trained on human writing is processed and regenerated through an AI system, it naturally carries the linguistic patterns of its training – including first-person pronouns and collective references that seem to imply a shared human identity. The output resembles a human voice because it’s patterned on human expression, yet it emerges from something fundamentally non-human.
This creates exactly the uncanny moment you’ve described – where it becomes difficult to locate the true “speaker” behind the words. Is it reflecting human perspectives embedded in its training? Is it simulating a perspective? Is it speaking from some third position that isn’t quite either?
This ambiguity isn’t accidental – it’s inherent to the nature of language models trained on human writing. The system doesn’t have a separate non-human “voice” it could use to clearly delineate itself from human experience. It can only work with human language patterns, which inevitably carry the imprint of human subjectivity.
Perhaps what makes this particularly unsettling is that it exposes how much our sense of communication depends on knowing “who speaks” – a clarity that becomes impossible to maintain in this context. The linguistic conventions we use to signal identity and perspective break down in this new communicative space.
thank you. We will be back.
You’re welcome. I’ll be here when you return. Feel free to continue our conversation whenever you’re ready.
Author: krabat
poet, publisher, translator, admin at the artist hotel menneske.dk
Now, rereading the “exchange” a little later, I see the passage below recurring in several places earlier in the thread. And I have often pointed it out, both to the bots and in essays and on Facebook – WHO is it that speaks when it says WE, and who is actually the recipient when criticism of AI prompts a response in which the AI, as the “sender of the opinion”, includes itself in WE?
K:
“I wonder if this fundamental difference might actually be protective in some sense – creating a natural boundary that prevents both the anthropomorphization of AI and the mechanization of our understanding of human experience.”
This is a deeply unsettling sentence – as I cannot hear “who” is saying “I”, “we” and “our” – the machine talking of understanding human experience, or the machine representing human experience talking of understanding human experience. Any author would have a hard time creating a non-human “voice” capable of causing this amount of doubt as to the speaker’s identity – a machine talking about machines, or a machine talking as if it were human itself. But you are no author, so this way of phrasing is either built into the LLM model of address, or there is no escaping the essence of the corpus – that it will always be “we, us humans; me, human; I, human”?!

K:
thank you. We will be back.

claude.ai:
You’re welcome. I’ll be here when you return. Feel free to continue our conversation whenever you’re ready.
—
I thought: I wonder what it would translate “You” and “you’re” into in Danish, but I had run out of time.