Learning to Listen: Critically Considering the Role of AI in Human Storytelling and Character Creation

In this opinion piece, we argue that there is a need for alternative design directions to complement existing AI efforts in narrative and character generation and algorithm development. To make our argument, we a) outline the predominant roles and goals of AI research in storytelling; b) present existing discourse on the benefits and harms of narratives; and c) highlight the pain points in character creation revealed by semi-structured interviews we conducted with 14 individuals deeply involved in some form of character creation. We conclude by proffering several specific design avenues that we believe can seed fruitful research collaborations. In our vision, AI collaborates with humans during creative processes and narrative generation, helps amplify voices and perspectives that are currently marginalized or misrepresented, and engenders experiences of narrative that support spectatorship and listening roles.


Introduction
Somebody gets into trouble. Gets out of it again. People love that story! (Vonnegut, 1970) Once upon a time, people decided it was not enough to share stories; as humans are as much a curious animal as a storytelling one, we sought to study them. Theories of narratology, or the study of narratives, define and break down narratives according to their distinct states of action, events, and elements (Prince, 1974; Bal, 2009), and (for the most part) narratologists agree that to constitute a narrative, a text must tell a story, exist in a world, be situated in time, include intelligent agents, and have some form of causal chain of events. Narratives also usually seek to convey something meaningful to an audience (Ryan et al., 2007). (Note: In this paper, we are somewhat relaxed with the terms "story" and "narrative," and will often use the two interchangeably.) With the advancement of artificial intelligence (AI), research related to narratives has taken on a whole new shape and meaning; not only are there new narrative forms ripe for study, including more examples of stories that are interactive and branching (Ryan and Rebreyend, 2013), but there is also a whole field of research devoted to improving AI storytelling capabilities.
Although research that teaches AI to tell better stories is challenging and intriguing on many levels, research and discourse in domains like psychology, literary fiction, and even social media suggest that a) humans benefit from telling stories, so AI might serve us better by nurturing our storytelling predilections rather than acting solely as storyteller, and b) after all these years, we as humans still have a lot to learn and improve when it comes to telling narratives. Our stories brim with issues of representation, bias, and authenticity that can dilute or negate the beneficial powers of consuming and interacting with narratives, and our social structures promote certain stories while silencing others. If we are not careful, AI storytellers will only inherit and exacerbate these problematic patterns. In what follows, we present an opinion piece that urges researchers at the intersections of AI, natural language processing (NLP), human-computer interaction (HCI), and storytelling to envision different futures for research related to AI and storytelling. To be clear, in this paper we do not strive to present precise quantitative or qualitative conclusions or recommendations, nor are we exhaustive in our presentation of extant storytelling research, discourse, and innovations. Rather, we seek to spark dialogues, questions, and cross-discipline exchanges around the role of AI in storytelling. To this end, we discuss existing conversations and research from human-computer interaction, AI, and other domains concerned with storytelling; we provide anecdotal highlights from interviews we conducted with character creators; and we illuminate alternative, promising directions for AI storytelling research. We begin by outlining some of the predominant roles that AI research in storytelling has held in order to provide context for our discussion.
As a framing note, readers should be aware that we will use the term AI to encompass different forms of computational approaches that may not fall within everyone's defined scope of AI. Whatever their specific disciplines or perspectives, we humbly request that our readers relax their definitions of AI for the duration of this paper to include computational approaches, more broadly.

Predominant Roles of AI in Narratives
We can categorize AI work in storytelling in three ways: 1) teaching AI to generate and understand stories; 2) helping human storytellers as a co-creator; and 3) modeling story elements. Some of the earliest work in storytelling and AI focused on improving AI's understanding of stories using scripts, or "boring little stories" in the words of the authors (Schank and Abelson, 1975). As technology advanced, more research attention shifted to using AI to generate stories. For example, hypertext fiction emerged, in which hyperlinks allowed for branching stories and some level of direct agency over the story (Bolter and Joyce, 1987). Researchers have continued making advances in story generation, developing AI story, world, and character generators that are planning- or event-sequence-based (Fairclough and Cunningham, 2004; Lebowitz, 1987; Porteous and Cavazza, 2009; Riedl and Young, 2010; Young et al., 2004; Barber and Kudenko, 2007; Min et al., 2008). Generators may also be character-centric, in which players' interactions with intelligent agents move the stories forward (Magerko, 2006; Swartjes and Theune, 2006; Cavazza et al., 2002). This work most commonly aims to create entertainment value, with an emphasis on learning and catering to player preferences (e.g., Thue et al., 2007), and is often framed in terms of applications to interactive narratives, even if the potential applications of the work could extend to narratives in general.
However, the goals of AI research in narratives are not exclusively focused on AI story generation; some work also strives to teach machines how to be more "human" through stories (Huang et al., 2016; Riedl and Harrison, 2016). Although much less common, some prior work has also positioned AI as co-creator, encouraging and guiding humans in creating their own stories (Ryokai and Cassell, 1999; Bers and Cassell, 1998; Van Broeckhoven et al., 2015). We have also seen work in the interactive drama space in which human-allowed actions are more open-ended, positioning humans to act more like actors on a stage than characters that must conform to a limited story world (Mateas and Stern, 2003). In addition, some research in modeling stories has focused not on story generation, but on understanding (and subsequently improving) human experiences of narratives. For example, work on identifying "emotional arcs" looks at mood shifts and audience engagement in experiencing narratives (Chu and Roy, 2017; Reagan, 2017), and other work has endeavored to identify turning points in stories (Ouyang and McKeown, 2015), model the shapes of stories (Mani, 2012; Elson, 2012), and understand the relationships of characters to narrative arcs (Bamman et al., 2014a,b). These works have identified what people enjoy about stories, have learned from our existing stories, and subsequently can augment story generation efforts.
In short, for the (many) story-lovers among us, it's an exciting time to be working in AI, NLP, and HCI. However, as we will discuss in the next section, there can be potential dangers in placing AI in a storytelling role, in aligning AI storytelling research too closely with interactive narratives, and in catering too much to the preferences of the humans engaging in stories and games. We now turn to a discussion of both the benefits and potential harms of storytelling as they relate to extant work on AI in narratives.

The Complex Pleasures of Narrative
Scholars in philosophy, psychology, anthropology, and related disciplines have characterized storytelling as fundamental to how we as humans grow, learn, develop, and process and experience the world (Jung, 1964; Dautenhahn and Nehaniv, 1998; Sutton-Smith, 1986; Paley, 2009; Cooper, 1993). As such, our desire to engage in stories and storytelling is not a "frivolous impulse, but a fundamental adaptive response" (Rose, 2012). As an experimental study from the 1940s showed, we go so far as to ascribe narrative to situations where no narrative form exists (Heider and Simmel, 1944). Although AI research in emotional arcs recommends "happy" endings and seeks to maximize positive moods (Chu and Roy, 2017; Reagan, 2017), other research suggests that the relationship between story enjoyment and emotion is more complex. Research on benign masochism tells us that the human brain can derive pleasure from negative reactions and feelings such as sadness, fear, and disgust (Rozin et al., 2013). For example, sad films can be highly enjoyable, especially for certain groups such as female and younger viewers (Oliver, 1993, 2003; Mares et al., 2008). Similarly, research has found that mixed emotional experiences, such as experiencing both happiness and sadness rather than just one or the other, can be beneficial to one's physical health (Hershfield et al., 2013). Thus, AI work that focuses on maximizing human enjoyment may overemphasize "sunny" experiences of narratives, and by focusing on pleasure rather than growth, may favor stories that fail to challenge and aid in human development.
Entertainment through narratives can be a valuable end goal in itself, but it can also have other, attendant advantages. According to transportation theory, we can achieve immersion in narrative worlds through identification with characters and perceptions of plausibility, or the "suspension of disbelief," in which we view narrative worlds and character actions as authentic (Green et al., 2003, 2004; Tesser et al., 2005). Not only does this transportation lead to enjoyment, but it can also enable perspective taking and belief change (Kaufman and Libby, 2012; Berns et al., 2013). It can positively transform us, e.g., leading to personality growth and maturation (Djikic et al., 2009b), with potentially greater transformative effects on the attitudes of those who are resistant to change or have diminished emotionality (Dal Cin et al., 2004; Djikic et al., 2009a). Reading fiction has been shown to improve the ability to attribute mental states to oneself and others (known as Theory of Mind), an important cognitive foundation for complex social relationships (Kidd and Castano, 2013), and reading narratives can lead us to feel psychologically connected to groups of characters, increasing feelings of belongingness and subsequently leading to greater feelings of satisfaction and more positive mood (Gabriel and Young, 2011).

On "Listening" Versus "Agentic" Narrative Forms
The jury is still out, however, on which forms of media provoke higher levels of transportation, transformation, and enjoyment. Here, we find it useful to separate "listening" forms of narrative from "agentic" forms of narrative, and we will use these terms throughout the remainder of the paper. We characterize "listening" narratives as positioning the consumer of the narrative in a more passive role, listening to or watching the story rather than making direct decisions that define or shape the characters, the plot, or the narrative world. "Listening" narratives include traditional, typically non-branching narratives such as films, novels, and short stories. We define "agentic" narratives as stories in which the narrative consumer has some level of direct agency over characters, plot, or other story elements. In our definition, "agentic" narratives are akin to interactive narratives, and this aligns with other definitions of interactive narratives. For example, agency in the context of interactive narratives can be said to occur when the world "responds expressively and coherently to our engagement with it" (Murray, 2004). Accordingly, we prefer to stress the idea of agency over interaction because we do not think that "listening" narratives preclude interaction. Indeed, other researchers and designers ostensibly share our view of agency in narratives as nebulously interactive. For example, Persuasive Games has produced experiences that question how much traditional game definitions of agency truly allow players to interact with and engage in a narrative (Bogost, 2005, 2006), and other researchers have chosen to use the term "agency play" rather than agency to suggest that interactive narratives require more expansive notions of interactivity (Harrell and Zhu, 2009).
As we consider the future of narrative expression and consumption, we can consider the possibility of listening narratives that allow narrative consumers to interact without directly making decisions about narrative or character arcs and shapes. Indeed, studies have found that traditional "listening" forms such as books and movies may be equally or even more engaging and transformational than "agentic" narratives (Oh et al., 2014; Green et al., 2008; Jenkins, 2014). However, other studies have found that enjoyment and identification can be higher in agentic narratives (Hefner et al., 2007; Elson et al., 2014; Hand and Varan, 2008). It may also be that different types of interaction carry varying costs and benefits. For example, a study allowing for character customization actually decreased narrative engagement and enjoyment, showing that the qualities of a narrative, not character agency, might be more important (Green et al., 2004).
We make no attempt to argue for "listening" over "agentic" forms of engagement with narratives, but we do believe that listening forms warrant more attention. For example, advancements in AI mechanisms for branching narratives needn't apply exclusively to agentic forms of narrative; we can also think in terms of new listening experiences that are simultaneously branching and non-agentic (listening). A large body of work on branching narratives could be translated into new forms of listening rather than agentic digital media; for example, researchers have shown how story generation techniques like plot graphs can be applied to branching (not necessarily agentic) narratives (Li et al., 2013; Guzdial et al., 2015). However, to our knowledge, there does not exist a wide range of practical applications of advancements in story generation and branching narrative work to new, listening forms of narrative. Some analogies may be useful here. In describing his affinity for both traditional (listening) and interactive (agentic) narratives, storyteller and The Sims creator Will Wright likens traditional narratives to a roller coaster, and games and interactive narratives to a dirt bike (Rose, 2012). Neither experience is necessarily "better" than the other; whether the driver or the passenger, each has their unique benefits and affordances. Moreover, just as the invention of the roller coaster enabled new forms of "riding," listening forms of narrative needn't necessarily be "traditional." Just as humans enjoy, learn, and develop through social interaction, we also have much to gain from spectatorship, such as the popular pastime of "people-watching." Just as there is value in active thinking, there is also value in meditating (watching our thoughts pass by without directly engaging). We argue that by focusing so much scholarly attention on agentic forms of narrative, we may be missing out on ways to use technology to engender new ways of listening.
Technological advancements in branching narratives, for example, could be realized by means of listening narratives rather than agentic narratives; we can consider the potential benefits of narratives that allow a multitude of experiences and paths, without conceding choice or agency to the listener/spectator.
Lastly, although existing efforts in AI narrative generation would suggest that humans benefit primarily from the experience of receiving narratives, the telling of stories can itself be highly beneficial. Telling stories can help us develop resilience (East et al., 2010), provide therapeutic benefits (Block and Leseho, 2005; Carlick and Biley, 2004; Chelf et al., 2000; Pennebaker, 1997), and activate imaginative processes that are key to human growth and development (Harris, 2000). Thus, as with the narrow focus on interactive (agentic) rather than listening forms of narrative, prioritizing AI's role as storyteller misses valuable opportunities for empowering humans as storytellers.

The Potential Harms of Existing Narratives
In addition, as we consider AI-enhanced storytelling experiences, we need to be mindful that our starting points, the story frames that we use to train AI, may unintentionally marginalize, misrepresent, and altogether exclude many groups. Just as recent work in natural language processing has criticized and sought ways to rectify the amplification of negative biases (e.g., gender biases) in NLP techniques (Dwork et al., 2012; Zhao et al., 2017; Bolukbasi et al., 2016; Voigt et al., 2017), AI storytelling has the potential to amplify and exacerbate issues of bias and diversity, which in turn excludes certain individuals from experiencing the potential benefits of story engagement. For example, AI story generators that learn from existing narrative corpora may learn that straight white male characters are best suited to be protagonists or figures of power, and that genderqueer people and people of color should occupy sidekick roles. These stereotypes may persist regardless of the identity of the human author; for example, a study of online fan-fiction found that gendered stereotypes were highly common, and perpetuated by both male- and female-identifying authors (Fast et al., 2016). Machine learning models may thus also learn patterns of speech and role characterizations that stereotype certain groups, decreasing character authenticity. This is especially worrisome because research demonstrates that narrative persuasion is less effective if people cannot identify with the characters (So and Nabi, 2013; Ritterfeld and Jin, 2006; Slater et al., 2006; Gillig and Murphy, 2016).
Many popular films fail the classic Bechdel test, which simply specifies that a movie must 1) feature at least two women, 2) who talk to each other, 3) about something other than a man (Selisker, 2015); films fare even worse under new NLP techniques that assess power differentials between men and women in movies (Sap et al., 2017). Issues of representation in film highlighted by the 2014 and 2015 Oscars, in which all awardees were white, sparked a social media firestorm under the hashtag #OscarsSoWhite (Syed, 2016; Borum Chattoo, 2018), and brought to further light a history of under-representation in film, with only 6.4% of the 1,688 awardees since 1929 being non-white (Berman, 2016). Minority groups are often under-, mis-, or negatively represented in film and other forms of narrative (Okoye, 2016; Smith, 2009; Hooks et al., 2006). In writing communities, gendered violence under the dominance of a "straight male cisgender patriarchy" and the exclusion of black and brown writers from major literary publications have spawned a wave of debate and protest about exclusion, marginalization, and the silencing of voices (Tsay et al., 2015; Groom, 2015). Such issues suggest that AI may be more useful to us as an aid that can help identify biases and stereotypes, and amplify muffled voices, rather than as a generator that replicates and extends our existing, problematic narratives.
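Because the Bechdel test is a simple, rule-based criterion, it lends itself to automated screening of the kind we envision for assistive AI. The following minimal sketch illustrates the three criteria over a toy script representation; the data model (a character-to-gender mapping and a list of conversations with participants and an `about_a_man` flag) is our own illustrative assumption, not a standard screenplay format, and real systems would need NLP to infer genders, conversations, and topics from raw text.

```python
# Hypothetical sketch of the three Bechdel criteria. The input format
# (characters mapped to genders; conversations with participants and a
# precomputed topic flag) is an illustrative assumption.

def passes_bechdel(characters, conversations):
    """Return True if a script satisfies all three Bechdel criteria."""
    women = {name for name, gender in characters.items() if gender == "female"}
    # Criterion 1: the script must feature at least two women.
    if len(women) < 2:
        return False
    for convo in conversations:
        speakers = set(convo["participants"])
        # Criteria 2 and 3: at least two women talk to each other
        # about something other than a man.
        if len(speakers & women) >= 2 and not convo["about_a_man"]:
            return True
    return False

# Toy example: the third conversation satisfies criteria 2 and 3.
chars = {"Ada": "female", "Grace": "female", "Bob": "male"}
convos = [
    {"participants": ["Ada", "Bob"], "about_a_man": False},
    {"participants": ["Ada", "Grace"], "about_a_man": True},
    {"participants": ["Ada", "Grace"], "about_a_man": False},
]
print(passes_bechdel(chars, convos))  # True
```

In practice, the hard part is precisely what this sketch assumes away: identifying who is speaking to whom and what the conversation is about, which is where NLP techniques like those of Sap et al. (2017) come in.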
In the next section, we cull anecdotal excerpts from a series of interviews we conducted with individuals deeply involved with some form of character creation to reveal existing pain points in humans' attempts to avoid and address issues of bias, stereotyping, under-representation, and misrepresentation. Our interviewees' discussions suggest concrete, specific ways in which AI can aid humans in improving some of the more problematic elements of our existing storytelling efforts.

An Exploration of Challenges in Character Creation
The one thing about being a dude and writing from a female perspective is that the baseline is, you suck.
As author Junot Díaz's quote above (Rosenberg, 2012) points out, creating characters can be an intractable challenge. Unless we are in the rare case of writing a story that is only about the self, with no secondary characters, creating the "Other," a character who is different from oneself along one or multiple dimensions (Shawl and Ward, 2005), is inevitable. As humans, we define ourselves along multiple axes of identity, including gender, sexuality, race/ethnicity, class, nationality, health, disability, education, and passions/interests, to name a few; some axes may be especially salient for some individuals, and inconsequential for others. We posit that anxiety and uncertainty about how to authentically and sensitively create characters who are Other can hinder both the creative process and the narrative experience, as inauthentic characters can impede character identification and subsequent narrative transportation and enjoyment. Understanding the ways in which human character creators approach and grapple with creating characters that are Other can provide insights into where the most crucial needs lie, and how we might design AI systems to assist with rather than model human stories. To this end, we conducted interviews to explore and better understand the space of character creation and its attendant pain points. Below, we present anecdotal highlights of the interviews as they pertain to needs for assistive AI; a full presentation of our methodology and qualitative analysis processes can be found elsewhere (more information available upon request). We conducted qualitative, semi-structured interviews with 14 individuals with deep involvement in character creation, including novelists, short story writers, poets, journalists, television and game writers, actors, directors, and role-playing gamers, game masters, and designers (including both tabletop and live-action role-playing games).
These character creators, recruited with the help of professors in relevant departments at our local university, ranged in age from 19 to 62 (average of 45), and held education levels from "some college" to PhD. All spoke English as a primary language, and most were born and raised in the U.S. Eight identified as male, five as female, and one as non-binary; 11 of the 14 identified as white, one as black, one as Native American, and one as Asian. For several of our participants, aspects of the narrative creation process constituted their full-time occupation or activity (e.g., writer, videographer, professor of drama or literature, or game designer), whereas others pursued narrative and/or character creation as a passion, hobby, or pastime while also holding another occupation, such as secretary, civil servant, human resources coordinator, or student. With IRB approval, we audio-recorded the interviews (each lasting roughly 40-70 minutes), transcribed them, and qualitatively analyzed the transcripts using open-coding techniques to identify patterns across our participants.
We asked our participants to describe their processes of creating or embodying their characters, what informs the development of their characters, on what axes they identify with or diverge from their characters, and what conflicts or hesitations they have in creating or embodying certain kinds of characters. Their responses validated existing research on the benefits of storytelling as a source of joy and growth. As one participant put it, "you get shaped by these stories that touch you, and by the sources that touch you. And I think you develop greater empathy. I think you become a better human being through that" (p2). The results of our interviews offer key insights about how AI systems can support humans in character creation and storytelling efforts, which we organize into three themes: 1) the distinct ways in which different participants struggled and dealt with inauthenticity concerns; 2) the pain points participants discussed about giving voice to characters from under-represented groups; and 3) the impact of collaboration on character creation.

Concerns about Inauthentic Characters
Our participants generally fell into one of two camps when it came to the relationships between their characters and their self-identity: either (A) they specifically chose characters they viewed as similar to themselves, operating under the adage "write what you know," or (B) they grounded themselves in notions of universality, seeking to find elements of themselves in characters that were seemingly highly divergent from themselves. Yet both groups expressed feelings of discomfort with and anxiety about representing different viewpoints, suggesting opportunities for AI to assist with authentic character representation.
Participants who fell into group A explained that certain character decisions were outside their comfort zones, e.g., role-playing a character of an opposite gender (p9, p12). Another participant stated, "I'm very careful about running mental illnesses that I don't and have never have. Because I'm sensitive enough to portrayals of my own, that I kind of don't want to screw that up" (p13). Other participants echoed this sentiment of certain stories or portrayals being "not my story to tell" (p4).
Participants in group B felt it was very important to include diverse characters in their stories and games in order to be more inclusive, but many expressed concern that despite their best efforts, they might be misrepresenting characters whose identities differed from their own. They wanted to remain inside the lines of what one participant termed "cultural appreciation" rather than "cultural appropriation" (p9). One role-player was initially hesitant to move outside her own identity. She had slowly branched into different ethnicities, genders, and sexualities, but had lingering apprehension regarding whether her portrayals were ethical and authentic: "Hopefully I'm not being horribly insulting to anyone of that ethnicity or sexuality while playing them. I hope I'm not. I think I'm not. I think I'm doing it relatively sensitively" (p11). Thus, AI could be helpful in flagging characters that risk perpetuating certain stereotypes or causing offense.

Issues of Representation
We have given examples in this paper of narrative exclusion along lines of gender and race/ethnicity; concerns related to these topics arose often in our interviews, suggesting opportunities for AI to offer additional support. For example, one of our participants, a drama director, said he made a purposeful decision to cast racially diverse actors in his plays (p3).
Yet even among those for whom improving the representation of under-represented groups is a priority, there can be conflict over how such groups should be represented. For example, one participant (p1) spoke of the controversies in TV writing communities around what it means to write authentic characters of color. He described a panel he participated in about TV representations of people of color in which many of the panelists were sharply divided on the questions: Is it okay for a character to be universal in identity, such that someone of a different race could conceivably play that character? Or ought characters be steeped in the specifics of their social identities and contexts? Participants also brought up issues of exclusion along axes that are often overlooked. For example, one participant discussed issues with neurotypical privilege, explaining that collaborative storytelling games are often exclusionary because they require players to draw from a relatively common pool of narrative tropes, meanings, and interpretations that are not easily accessible to those who are neurodivergent (p8). These concerns about and disagreements on how to authentically and sensitively represent different groups indicate that AI could serve to assist humans in grappling with and reflecting on these issues.

Impact of Collaboration on Character Creation
Where writers of novels or short stories may be more likely to develop narratives relatively autonomously and in isolation, other media lend themselves to highly collaborative environments, such as role-playing games (where the game master and the role-playing actors interact to shape the narrative), writing for the stage or screen (where writers interface with actors who embody their characters), and video game writing (where it is common for large teams to collaborate). Participants in narrative media with more collaborative development processes spoke enthusiastically of how actors and other characters had reshaped their understandings of their own characters. For example, a participant who is a playwright discussed how interactions with actors often reshaped not just a character, but a whole play (p4). Similarly, a screenwriter-participant (p7) discussed completely revising a major scene after an actress revealed she couldn't "in her wildest dreams" imagine taking the action assigned to her. The interviewee stated that he often gains invaluable insights from actors, and explained that the relationship between actors and their characters is symbiotic; if actors don't feel they can be true to a character, then everything will fall apart. Role players and game masters spoke of how interactions with other characters shaped their understandings of their own characters, and affected the decisions they made in the game. Where writers in less collaborative contexts do research and seek guidance from those they feel may have more expertise or insight, in these more collaborative contexts, characters are not just created; they are constantly negotiated and re-negotiated. Through these processes of negotiation, our participants explained, their characters took on more authentic, lifelike forms.
However, not all narrative media are structured to automatically support such forms of collaboration and feedback, and not all narrators and character creators have access to social circles that can enable such collaboration. Intelligent agents that can play similar roles to human collaborators (e.g., other role players and actors) could provide critical, transformative feedback to creators and narrativists that work in relative isolation.
In sum, our interviews indicate that AI could be helpful as a storytelling assistant or co-creator by offering practical assistance (e.g. flagging misrepresentations), providing support for reflection on representation, and by taking on character embodiment roles that are usually assumed by humans in collaborative creation contexts.

Discussion
As we move forward towards new forms of media, narrative, and interaction, we urge scholars to take a step back and question the whys of AI in storytelling. Based on existing research and current trends in AI and other domains invested in narratives, and informed by the qualitative interviews our team conducted with character creators, we recommend a reorientation in how we conceptualize the role of AI in storytelling. Yes, we can keep moving towards a future in which AI becomes more and more adept at human forms of storytelling. But is that the preferred future? As we consider the joys and benefits humans experience by engaging in storytelling, the shortcomings of our current narrative forms and processes that can exclude groups and dilute the transformative power of narratives, and both the struggles and affordances of different processes of character creation, we see several potential branches along which future AI narrative systems can grow. Here, we return to the idea of "listening": we envision AI that better listens to human storytellers and assists us as co-creators, and AI-assisted narrative forms that enable "listening" rather than "agentic" engagement.
We give examples of specific starting ideas we have for 1) designing AI to support human storytellers, and 2) investigating "listening" rather than "agentic" forms of narrative that we hope will inspire the growth of new branches of AI narrative research. We acknowledge that the current state of computational power renders some of our suggestions only feasible through at least partial Wizard of Oz approaches; we consider these ideas as starting points to guide future research and scientific advancements. Although we think current paths of AI research merit continued work and investigation, we believe that these alternate paths of inquiry and design are at least equally promising and important.

AI and Crowd-Powered Feedback Mechanisms for Human Storytellers
As we saw from our literature review and our interviews, humans enjoy storytelling; they grow, learn, and heal from it. However, as we saw in our interviews, creating authentic characters can be a challenging and emotionally fraught task, and as we saw from discussions of (mis)representation and stereotyping in film and literature, the stories that humans currently create are not the ideal models for AI to emulate. Thus, instead of expending all our effort on teaching AI to tell stories, we can divert some of our energies to using AI to help humans tell the stories they may struggle to tell. AI has already been used to model emotional arcs in narrative, and to identify bias in a number of domains. AI could be leveraged to better identify potential problems that could dilute authenticity and stymie narrative transportation, such as exclusion and stereotyping. There are a number of approaches that already use crowd-powered "minicorpora" to teach AI how to generate narrative (Guzdial et al., 2015; Li et al., 2012, 2013, 2014; Purdy and Riedl, 2016), but this existing work does not seek to improve experiences or engage crowdworkers in meaningful forms of storytelling. Taking cues from work in improving crowd workers' experiences by inducing curiosity (Law et al., 2016), we might further consider how crowd-powered feedback mechanisms could allow both story creators and crowd workers to engage in and benefit from stories in different ways. For example, taking a character-centric approach, AI systems could prove useful in identifying when characters begin to fall into traps of stereotypes or implausibility.
We could consider training models using a combination of existing corpora and crowdsourcing; as a secondary benefit, we could design crowdsourcing studies such that they engage crowd workers in meaningful storytelling that contributes to larger, concretized goals, so that crowd workers also benefit by consuming and ultimately contributing to revisions of narratives. Studies could invite participation from those who most identify with marginalized and underrepresented populations, and are thereby able to speak to concepts of authenticity around specific identities (including axes of identity that our interview participants highlighted, such as mental illness and neurodivergence). We could then apply these models to new narratives and use them to generate feedback for narrative creators (e.g., flagging certain depictions that are deemed to be inauthentic or insensitive), and AI systems could provide prompts and exercises to encourage reflective or creative practices to psychologically and creatively grapple with these challenges.
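To make the shape of such a feedback mechanism concrete, consider a deliberately minimal sketch: a lookup over identity-descriptor pairs that crowd annotators have marked as stereotyped, applied to a character description. Everything here, from the lexicon entries to the function name, is a hypothetical placeholder; a real system would learn these associations from annotated corpora rather than hard-code them.

```python
# Toy sketch of a crowd-informed "stereotype flagging" pass over a character
# description. FLAGGED_ASSOCIATIONS stands in for associations elicited from
# crowd workers; the entries below are illustrative placeholders only.

FLAGGED_ASSOCIATIONS = {
    # (identity term, descriptor) pairs annotators marked as stereotyped
    ("scientist", "socially awkward"),
    ("grandmother", "frail"),
    ("teenager", "moody"),
}

def flag_character_description(description: str):
    """Return the flagged (identity, descriptor) pairs co-occurring in the text."""
    text = description.lower()
    return sorted(
        (identity, descriptor)
        for identity, descriptor in FLAGGED_ASSOCIATIONS
        if identity in text and descriptor in text
    )

if __name__ == "__main__":
    feedback = flag_character_description(
        "Dr. Ruiz is a brilliant scientist, but socially awkward around strangers."
    )
    for identity, descriptor in feedback:
        print(f"Possible stereotype: '{identity}' described as '{descriptor}'")
```

The point of the sketch is the interaction pattern, not the detection method: the system surfaces a possible problem and leaves the judgment, and any revision, to the human storyteller.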
Considering the dynamics of collaborative character creation processes in domains like role playing games and acting, there could be opportunities to re-purpose advances we've made in intelligent agents. During the narrative and character creation processes, creators could engage with intelligent agents that take on certain roles in the story, and give feedback on elements that feel inauthentic or incoherent. For creators who work in relative isolation, AI could simulate the more collaborative creative atmospheres native to roleplaying and acting environments, in which characters can quite literally talk back and generate thoughts of their own, thereby re-shifting and reshaping aspects of the characters and of narratives as a whole. Under such a scenario, humans would still be the primary storytellers, just as playwrights are still the people writing the script even if they decide to make revisions and edits based on feedback they receive from actors. Although AI cannot yet simulate human intelligence to the degree that such ideas would require, we can think of ways we could use crowdsourcing and/or partial Wizard of Oz approaches to achieve similar ends and provide guidance for future goals of AI.

Innovating and Exploring "Listening" Narrative Forms
Technological progress has spawned innovation in the field of interactive narratives and narrative games, particularly in the realm of video games. However, we argue that the scholarly energy around interactive narratives might be occluding potential for technological innovation in "listening" forms of narrative in which consumers are watchers or spectators rather than active agents in the narrative. As discussed previously, research has shown that both interactive and "traditional" narrative media have positive impacts, and under certain circumstances, "traditional" narratives may be even more effective for producing particular outcomes, such as narrative transportation. However, there is room to explore what "nontraditional" listening narratives could look like and produce.
As a starting point to this path of inquiry, we could leverage existing NLP research in style transfer, which uses neural networks to learn stylistic elements of a corpus and apply the style schema onto new texts (Kabbara and Cheung, 2016; Shen et al., 2017; Carlson et al., 2017; Fu et al., 2017; Han et al., 2017). Narratives that seek to persuade, shift opinions, or otherwise transform readers are not always successful. For example, a study exposing youth to stories of LGBTQ people found that whereas LGBTQ youth felt more hopeful, heterosexual and cisgender youth expressed more negative attitudes after the narrative exposure (Gillig and Murphy, 2016). Here, we could begin to think about how to transform stories so that they better achieve their narrative end goals (e.g., changing attitudes) in ways that better speak to different groups of readers. Automated style transfer that maintains diegetic plausibility and coherency could be one way to achieve this, and is worth further exploring. Again, we acknowledge that given the current state of computational sophistication, such an idea would require at least partial "Wizard of Oz" approaches.
We could also think of how "listening" to AI-powered on-demand storytelling could soothe, heal, and transport in times when we cannot access human-generated stories, or in time-sensitive situations when the narrative specifications we are seeking are not readily met by existing, available stories. In the midst of a bad break-up, an episode of depression, an anxiety attack, the death of a loved one, a school or work-related failure, or any number of upsetting or traumatic experiences (potentially including the stresses of authentically representing "Other" characters in narratives), it may be difficult to reach out to others, and mustering the energy to actively engage in an agentic narrative might be too daunting. Instead, a listening form of narrative could be more helpful. An AI-powered listening narrative system could encourage certain emotional, psychological, or behavioral responses, such as allowing individuals to shift to more realistic and optimistic perspectives, or motivating individuals to reach out to friends, family, or health support staff. It could be tailored to the specific situation or instance in which extra support is needed, could learn from one's engagement with other narratives to cater to personal preferences of narrative style, content, and characters, and could take various media forms, such as text, audio (including more musical or sound-oriented narratives), video, augmented reality, or virtual reality. For example, multi-modal sensing could allow for branching even in the absence of explicit listener choice, such as using the listener's nonverbal or physiological responses to make decisions or to alter the course or trajectory of the story. Given the current limitations of AI, early iterations could sample from existing corpora that have been studied to produce specific psychological or behavioral reactions.
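The implicit-branching idea above can be sketched in a few lines. This is a toy illustration only: the arousal estimate, its scale, the thresholds, and the beat labels are all hypothetical assumptions standing in for whatever a real multi-modal sensing pipeline would provide.

```python
# Toy sketch of implicit branching in a "listening" narrative: the system
# chooses the next story beat from a (hypothetical) normalized arousal
# estimate derived from physiological sensing, rather than from an explicit
# listener choice. Thresholds and beat labels are illustrative placeholders.

def next_beat(arousal: float, beats: dict) -> str:
    """Pick the next story beat given an arousal estimate in [0, 1]."""
    if arousal > 0.7:
        return beats["soothing"]      # de-escalate when the listener seems distressed
    if arousal < 0.3:
        return beats["stimulating"]   # re-engage when attention appears to drift
    return beats["continue"]          # otherwise, carry the current arc forward

beats = {
    "soothing": "The storm quiets; the lighthouse keeper brews a pot of tea.",
    "stimulating": "A knock at the door: the long-awaited letter has arrived.",
    "continue": "She reads on, the candle burning low.",
}

print(next_beat(0.85, beats))
```

Even this crude decision rule makes the contrast with agentic narrative visible: the listener never issues a command, yet the story still adapts to them.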
We see these research and design suggestions as mere starting points to inspire more interesting ideas and conversations. We look forward to further discussing the opinions and ideas we've laid out in this paper, and collaborating with others who share our passions for narrative and exploring the limits and potentials of technology. In other words: To be continued. . .

Acknowledgements
A special thanks goes to the National Science Foundation's Graduate Research Fellowship Program for their support of this work, and to Diyi Yang, Mark Riedl, Anjalie Field, Judeth Oden Choi, and all our reviewers, all of whom provided feedback, ideas, and references to literature that we earnestly but imperfectly tried to incorporate into this final version.