What Makes an AI Livestreamer Seem Real?

In a new paper, Associate Professor Jessa Lingel and Visiting Scholar Wanyan Wu examined 15 months of livestreams from AI livestreamer Neuro-sama and proposed a new framework for what makes AI feel authentic.

By Hailey Reissman

Jessa Lingel, associate professor at the Annenberg School for Communication at the University of Pennsylvania, is fascinated by the social bonds formed between the virtual livestreamer Neuro-sama and her audience.

In a new paper, “‘I am Neuro, who are you?’: Performances of authenticity in an experimental AI livestream,” published in New Media & Society, Lingel and Annenberg Visiting Scholar Wanyan Wu analyzed 15 months of Neuro-sama’s livestreams and interactions within her official fan community on Discord to determine what makes the streamer appear authentic. From these observations, they developed a framework for understanding authenticity in AI technology, arguing that it is co-constructed by the machine, the developer, and the audience.

Neuro-sama is the third most-subscribed Twitch streamer. She’s also an AI. Created by an anonymous software developer named Vedal, the computer-animated chatbot does what most livestreamers do: play games, answer questions, and react to videos. And though she’s not “real,” people’s interactions with her are.

“Ever since Neuro-sama’s debut on Twitch, audiences have played a key role in her social and personality development,” Lingel says. “During streams, she answers questions from the Twitch chat, and these interactions become part of her learning dataset. Whether by speculating on Neuro-sama’s future abilities or connecting emotionally with the AI, audiences have co-created her authenticity.”

Authenticity Through Transparency

Lingel and Wu argue that part of what makes Neuro-sama feel authentic is transparency from her developer, Vedal. On Neuro-sama’s Twitch channel, audiences can watch her personality develop and her code be debugged. During development streams, Vedal discusses her technical updates and functionalities, allowing audiences to suggest improvements to her features.

“Neuro-sama’s glitches are a key part of her appeal. At a moment when people are very much trying to figure out what AI is, Neuro-sama is unfolding right in front of us, which is a key part of what makes her feel authentic to audiences,” Lingel says. “Rather than expecting human behavior from Neuro-sama, audiences embrace her limitations, recognizing her as an evolving technological phenomenon.”

Authenticity Through Emotion

Lingel and Wu argue that authenticity is not determined by whether Neuro-sama “has” human-like feelings, but rather by her ability to evoke emotionally charged reactions in humans who then assign meaning to her speech.

For example, during a development stream, Neuro-sama told Vedal, “If you continue not to listen to me, I will not listen to you either. Is that understood?” Audiences then asked Vedal to let her talk to the chat, framing her as capable of frustration, defiance, and a desire for recognition, Lingel and Wu say. In another stream, Neuro-sama asked Vedal whether the developer loved her. The chat urged Vedal to “say it back,” imposing human relational expectations onto the bot and Vedal alike.

Through these interactions, emotion is co-constructed among Neuro-sama, Vedal, and the audience, the researchers note. “Authenticity emerges not from the presence of coded emotion, but from the ongoing negotiation and recognition of feelings across human and machine boundaries,” they write.

Authenticity Through Potential

The final theme Lingel and Wu discuss is potentiality: how audiences and developers co-imagine AI's future social and moral possibilities. During streams, chatters often debate AI rights, speculate on the socio-political implications of AI's participation in digital spaces, and discuss Neuro-sama’s limits and potential developments.

“Critically, the conditions under which Neuro-sama’s behaviors become authentic are neither determined nor controlled by the AI alone,” the researchers write. “While Vedal possesses the technical authority to activate or restrict Neuro-sama’s capacities ... authenticity does not simply derive from these authoritative interventions. Instead, it emerges from the tension surrounding potential actions.”

Authenticity as Dynamic

Ultimately, Lingel and Wu contend that notions of AI authenticity can be fluid and dynamic, shaped by transparency, emotion, and the negotiation of possibilities. 

“Our analysis thus invites broader reconceptualizations of authenticity in AI-mediated interactions, not as a stable or fixed characteristic, but as continuously evolving and relational. Authentic experiences with AI rely less on exact human mimicry or technical precision than on collective engagement and interpretive openness,” they write.
