Automated Storytelling in the Future

The AI Narrator is a hypothetical prototype of a smartphone-based artificial intelligence agent to be used in combination with AR glasses and headphones.
In the autumn semester of 2018, TekLab project leader Lars Nyre and I arranged a seminar in London. We gathered people from different universities and colleges in Norway, as well as participants from foreign universities. The seminar was held on November 29 and 30, 2018.
Published: February 21, 2019

What is the AI Narrator?

THE AI NARRATOR: Professor Bjørnar Tessem from the University of Bergen explains the internal logic and functions of the AI Narrator. Photo: Jon Hoem.

The London seminar was focused on the AI Narrator, a hypothetical prototype of a smartphone-based artificial intelligence agent to be used in combination with AR glasses and headphones. The AI Narrator is still in the early planning stages of development.

The AI Narrator started as a position paper for the London seminar, written by Lars Nyre, Professor of Media Design, Journalism and Technology; Bjørnar Tessem, Professor of Information Science; and research assistant Ingvild Abildgaard Jansen.

The position paper presents a collective thought experiment in which a research team can construct a prototype of an artificial intelligence agent if they give it the right instructions. The prototype should have the ability to create and compile content for a meaningful story based on real-time computing, and to present it to the human user in the most suitable form.

Difficult questions were posed in the position paper: What does the AI Narrator need to know about storytelling techniques in order to make genuinely good and valuable stories for its human users? Which aspects of human experience should it pick up on and adjust its stories to?

The scenario describes the AI Narrator as an autonomous storytelling medium that should be able to perform such tasks as presenting local news, stories, documentaries, history programs, factual information, music, advertisements and more. All narratives and media forms will be adjusted to the user's movements, emotional states and other aspects of the user's identity. This knowledge can be used to keep a user profile constantly updated.

Interdisciplinary Discussions

The TekLab Network is always seeking to encourage interdisciplinary collaboration between the humanities and technology-based fields. This is why the eighteen seminar participants and two seminar guests in London had a wide variety of professional and academic backgrounds, including the fields of media science, information science, informatics, computer science, media and interaction design, media arts, journalism, philosophy and new media.

The main purpose of the 2018 London seminar and the 2019 Stavanger seminar is to begin answering some of the crucial questions surrounding storytelling and mediation in the future. Network members are encouraged to relate to the AI Narrator scenario from their own areas of expertise.

Each day of the London seminar featured presentations from the participants, followed by group discussions. Lars Nyre had sorted the discussion groups into themes, based on the expertise of the participants: Group 1: Information Systems, Group 2: Interfaces, and Group 3: Narratives. The sorting was done to ensure that the AI Narrator would be conceptualized and criticized from a number of angles simultaneously. After the discussions, each group's viewpoints were presented in plenary.

Information Systems

Bjørnar Tessem, Professor, University of Bergen

Joakim Vindenes, Doctoral Research Fellow, University of Bergen

Audun Klyve Guldbrandsen, Master's Student, University of Bergen

Cristina Marco, Postdoctoral Fellow, Norwegian University of Science and Technology

Dimitris Apostolou, Assistant Professor, Athens Technical University

Interfaces

Lars Nyre, Professor, University of Bergen

Anja Salzmann, PhD Candidate, University of Bergen

Ola Roth Johnsen, Counsellor/Research Coordinator, University of Bergen

Tormod Utne, Senior Lecturer, Volda University College

Kjetil Vaage Øie, Assistant Professor, Volda University College

John Ellis, Professor, Royal Holloway, University of London

Narratives

Jon Hoem, Associate Professor, Western Norway University of Applied Sciences

Paul Bjerke, Professor, Volda University College

Gunhild Ring Olsen, Associate Professor, Volda University College

Cato Wittusen, Professor, University of Stavanger

Gunnhild Sofie Vestad, Counsellor, Center for New Media, Western Norway University of Applied Sciences

Ingvild Abildgaard Jansen, Research Assistant, University of Bergen

The Technology of the AI Narrator

The first thing we had to ask ourselves was: What are the technological possibilities and limitations of the AI Narrator? Is the project feasible? The Interfaces work group concluded that the technology is available, and that the project is possible.

Two of the technologies the AI Narrator will be using are eye tracking and face recognition. These technologies are already available and in use. Kjetil Vaage Øie is Assistant Professor at the Department of Journalism at Volda University College, and he was part of the Interfaces work group. In his presentation, Øie explained that eye tracking and face recognition technology can do many things, such as:

1) Detect whether a person prefers textual or highly graphical presentations of the news
2) Identify whether the reader has disabilities, such as reading difficulties
3) Learn what time of day you usually prefer long reads
4) Monitor your gaze and mood in real time: what kind of mood you are in while reading
5) Monitor your level of concentration and your current reading speed

If a media system takes this information into account and connects it directly to the interface, the news and its design can adjust themselves while you read. The system could even predict your gaze or scan paths.
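
To make this concrete, here is a minimal sketch of how such real-time signals might drive presentation choices. The names (ReaderState, choose_presentation) and all thresholds are hypothetical illustrations of the idea, not part of any existing system.

```python
# A minimal sketch: real-time gaze and mood signals select a story format.
# All field names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class ReaderState:
    prefers_graphics: bool    # inferred from past gaze patterns
    reading_speed_wpm: float  # estimated live from eye tracking
    concentration: float      # 0.0 (distracted) to 1.0 (focused)
    mood: str                 # e.g. "calm" or "stressed", from face analysis

def choose_presentation(state: ReaderState) -> dict:
    """Pick a story format matching the reader's current state."""
    layout = "visual" if state.prefers_graphics else "textual"
    # Slow reading or low concentration suggests a shorter, simpler version.
    if state.reading_speed_wpm < 150 or state.concentration < 0.4:
        length = "summary"
    else:
        length = "long_read"
    # A stressed reader might get a calmer narration style.
    tone = "gentle" if state.mood == "stressed" else "neutral"
    return {"layout": layout, "length": length, "tone": tone}

print(choose_presentation(ReaderState(True, 120.0, 0.3, "stressed")))
# -> {'layout': 'visual', 'length': 'summary', 'tone': 'gentle'}
```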

The Information Systems work group, however, noted that the AI Narrator would need more accurate positioning than GPS: it would need micro-location and orientation. The group also wondered whether the AI Narrator should be limited to a mobile device. Perhaps it could work across several devices, with cross-platform functionality.
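
As a rough illustration of what micro-location and orientation could add beyond GPS, here is a small sketch of a possible data structure. The fields, and the beacon-zone idea in particular, are assumptions for illustration only, not a design from the work group.

```python
# A sketch of "micro-location and orientation": position plus an indoor zone
# and the direction the user is facing and looking. All fields are invented.
from dataclasses import dataclass

@dataclass
class UserPose:
    lat: float             # coarse position (GPS)
    lon: float
    indoor_zone: str       # e.g. a Bluetooth-beacon zone id (micro-location)
    heading_deg: float     # which way the body is facing (compass)
    gaze_pitch_deg: float  # looking up or down (from AR glasses sensors)

pose = UserPose(60.3913, 5.3221, "museum-hall-2",
                heading_deg=270.0, gaze_pitch_deg=-10.0)
print(f"User in {pose.indoor_zone}, facing {pose.heading_deg} degrees")
```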

The Interfaces work group remarked that humans are context aware when telling stories; you tell stories differently to different people. Some are more visual, some are experts, some are young, etcetera. “Can the AI Narrator deal with that?” they asked. Kids might need a friend that can explain stories in a simpler way, or some users might need a voice designed to warn them about something. Should the AI Narrator have a personality or not? Should it feel like a machine or a human?

Eye tracking and face recognition will make it easier to adjust to some aspects of the users; perhaps these technologies will be able to recognize that the user is a child, for example, so that the system can adjust its story settings accordingly.

As for the latter question, I think it would come down to personal preference. Some users would prefer a machine, while some would probably prefer having interactions with a more human-like AI Narrator. I suggest that people should be able to change the AI Narrator settings to their own liking.

TECHNOLOGY: The Interfaces work group discuss the technological possibilities of the AI Narrator. In the back, from the left: Assistant Professor Kjetil Vaage Øie (Volda), Professor John Ellis (Royal Holloway) and PhD Candidate Anja Salzmann (UiB). In the front, from the left: Senior Lecturer Tormod Utne (Volda), Professor Lars Nyre (UiB) and Counsellor/Research Coordinator Ola Roth Johnsen (UiB). Photo: Gunnhild Sofie Vestad.

Technology: Recommender Systems

Cristina Marco is a Postdoctoral Fellow at NTNU in Trondheim, working on content analysis for news recommender systems. During the London seminar, she was part of the Information Systems work group. In her individual presentation, Marco talked about the RecTech project at NTNU, which focuses on recommendation technology for filtering and recommending news online.

Marco asked: “Which type of information should the AI Narrator provide? Do we want it to provide recommendations or not?” Most of the seminar participants seemed to be in favor of recommendations. As Marco noted in her presentation: “The Internet provides large amounts of information. People are overloaded with tons of it.” The AI Narrator will therefore need some kind of filtering, so the user won’t be bombarded with irrelevant information.

Jon Hoem is Associate Professor of Digital Media at the Western Norway University of Applied Sciences. He was part of the Narratives work group. “How do we get surprise instead of mere recommendations?” asked Hoem. He thinks a central element to the recommendations made by the AI Narrator should be surprise; that it should present the user with things they didn’t know about.

I agree with Hoem; I think the AI Narrator should provide surprising recommendations. Then we might learn about new things, instead of things similar to what we’ve searched for before; i.e. not just “more of the same”, as with recommendations on YouTube. I hope the next seminar will bring suggestions for technological solutions that make these surprises happen.
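
One conceivable direction is to blend a novelty term into the ranking, so that items from unfamiliar topics can outrank familiar ones. The sketch below is a toy example of that idea; the scoring scheme, weights and data are invented for illustration and are not the RecTech design.

```python
# A toy sketch of "surprise" in recommendation: rank by a blend of relevance
# and novelty instead of relevance alone. Weights are illustrative only.
def recommend(candidates, history_topics, surprise_weight=0.3, top_n=3):
    """candidates: list of (title, topic, relevance) tuples."""
    def score(item):
        title, topic, relevance = item
        # Items from topics the user has never engaged with get a bonus.
        novelty = 0.0 if topic in history_topics else 1.0
        return (1 - surprise_weight) * relevance + surprise_weight * novelty
    return sorted(candidates, key=score, reverse=True)[:top_n]

history = {"football", "local politics"}
items = [
    ("Derby preview", "football", 0.9),
    ("Council budget row", "local politics", 0.8),
    ("New mural on Bryggen", "street art", 0.6),     # unfamiliar topic
    ("Harbour history walk", "local history", 0.5),  # unfamiliar topic
]
for title, topic, _ in recommend(items, history):
    print(title, "-", topic)
# The two unfamiliar topics now rank above one of the familiar ones.
```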

During the London seminar, I was part of the Narratives work group. We talked a lot about the concept of the AI Narrator as a guide when travelling locally and internationally; that it might be able to provide us with relevant news stories or stories about local historic sites, and recommendations for places to go, things to eat, etcetera. This might be particularly useful for recommending small, local businesses that might not have a website, or a website in a language the user doesn’t speak.

The group discussed, for instance, the inconvenience of coming home from a trip, only to discover that the area in question had shops, museums or restaurants that you would have liked to visit. Perhaps the AI Narrator could make it easier to discover these things while still on the trip.

The Interfaces work group asked what makes the AI Narrator different from, for example, Google Assistant. However, they suggested that we should still build the AI Narrator, despite its similarities with other services, because we can address and focus on different issues. The general consensus seemed to be: commercial actors will pursue similar projects anyway, so it’s important that academics also gain experience with the possibilities and potential downsides of a project like this.

Content: Augmented Intelligence with Better User Interaction

Some general questions about the content, raised across the seminar rather than by any one person or group, were: How passive or active will the user be? Should the content be participatory or interactive? Will there be user-generated content? To what extent can the content be dynamic?

Perhaps it’s not user profiling that should be the system input, the Interfaces work group suggested; perhaps user interaction should be the main input. It seems we want to put the user at the center.

In his presentation, Jon Hoem argued for a shift of focus towards bringing the human actor more into the equation, leading to a meaning of AI as Augmented Intelligence. “That does by no means leave out Artificial Intelligence”, he said, “but it has a stronger focus on the turn-taking between human and machines.” Hoem explained he’s “not only interested in how machines can make valuable stories, but in how machines can help me make and tell more valuable stories. How these stories are made and told doesn’t concern me that much, as long as it feels like I’m taking an active part.”

The Narratives work group discussed the possibilities of user participation, such as user control of content, user-created content and user-generated content (metadata created through use).

We suggested that if something happens in a local area, perhaps any AI Narrator user should be able to “start a story”: the user as reporter or storyteller. Later, other people could add their views and voices to the story. If a political demonstration were happening, for example, we might be able to see how the story looked from the participants’ point of view, the point of view of people passing by, and perhaps from the politicians’ point of view.

A question we asked ourselves, however, was to what extent this would be any different from a Facebook group with local content. Also, the AI Narrator would need some sort of quality control for the stories started, and would have to choose which ones to circulate; but wouldn’t that mean involving a responsible editor at some point?

WHO IS RESPONSIBLE?: Associate Professor Gunhild Ring Olsen from Volda University College questions the roles of journalists and editors in relation to the AI Narrator. Photo: Gunnhild Sofie Vestad.

Content: Who is Responsible?

Gunhild Ring Olsen is Associate Professor of Journalism at Volda University College, and a journalist. During the seminar, she was part of the Narratives work group. In her presentation, Olsen asked: “Will the AI Narrator provide information or journalism?” This question was echoed by many of the other seminar participants. It was also discussed in the work groups.

The academics from the field of Journalism (who were part of the Narratives work group) were particularly engaged in the topic of the social responsibility of the AI Narrator. “Who is responsible?” asked Paul Bjerke, Professor of Journalism at Volda University College, and a journalist. “Is there an editor? If it’s defined as journalism, there needs to be an editor.”

During her presentation, Gunhild Ring Olsen asked: “If it’s journalism, how do we consider the social responsibility or social mission of the AI Narrator? Or will it be just another tool that produces low cost and low-quality content?”

Olsen also questioned how the project relates to the ideals of the market versus the ideals of democracy. She asked: “Will the AI Narrator help develop reporting that strengthens the ‘good society’ of democracy?” Olsen wanted to know the purpose of the TekLab project, and what it wants to achieve. I’m still not entirely certain on that point either, but I suppose that’s why it’s called “being in the early stages of development”.

The Interfaces work group was not completely convinced that it should be a journalistic project, or only journalism-focused. They suggested a mixture of information, journalism and speculation, meaning the project should also be more expansive.

Paul Bjerke reminded us that journalism is storytelling with a purpose: to provide people with information they need to understand the world. “The first challenge is finding the information people need to live their lives”, he said. “The second is to make it meaningful, relevant and engaging.”

User Ethics: The Personal Boundaries of the User

Anja Salzmann is a PhD Candidate at the Department of Information Science and Media Studies at the University of Bergen. She was part of the Interfaces work group. In her presentation, Salzmann focused on the challenges and suggestions for a responsible design for the AI Narrator. “How do we best protect the user’s privacy and sense of personal boundaries?” Salzmann asked.

This was an issue that Lars Nyre, Bjørnar Tessem and I had already raised in the position paper for the London seminar. Adhering to the GDPR, the AI Narrator will only collect anonymized user data such as location, activity and interests, and these will, as far as possible, only be stored locally on the user’s smartphone. Specialized methods for predicting user movements and upcoming user activities (Taramigkou, Apostolou & Mentzas, 2018) may also be applied to ensure that the user can have a complex localized experience without giving away information about personal identity.
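
A minimal sketch of this local-first, anonymized approach might look like the following. The file location, the coarsening precision and the event fields are all illustrative assumptions, not a specification from the position paper.

```python
# A sketch of "local-first" privacy: keep coarse, identity-free usage data on
# the device only. File name, rounding and fields are invented for illustration.
import json
import time
from pathlib import Path

PROFILE_PATH = Path.home() / ".ai_narrator_profile.json"  # stays on device

def record_event(lat: float, lon: float, interest: str) -> None:
    """Append a coarsened, identity-free event to the local profile."""
    event = {
        # Round coordinates to ~1 km so exact positions are never stored.
        "lat": round(lat, 2),
        "lon": round(lon, 2),
        "interest": interest,
        "hour": time.localtime().tm_hour,  # time of day, not a full timestamp
    }
    events = []
    if PROFILE_PATH.exists():
        events = json.loads(PROFILE_PATH.read_text())
    events.append(event)
    PROFILE_PATH.write_text(json.dumps(events))

record_event(60.3913, 5.3221, "local history")  # example: Bergen city centre
```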

Cato Wittusen is Associate Professor of Philosophy at the University of Stavanger. He was part of the Narratives work group during the London seminar. In his presentation, he mentioned Aristotle’s argument for catharsis: that seeing a tragedy might make people more sympathetic towards other people’s fates. However, when working with the immersive medium of virtual reality, he said, there is also a dilemma concerning how much you can immerse people without traumatizing them.

Is this also a relevant concern when considering the personal boundaries of the AI Narrator user? AR glasses aren’t as immersive as VR glasses, but they’re still part of immersive media. Not every story presented by the AI Narrator will be nice. Some might be brutal, whether it’s a news story or a story about local history, and watching and hearing the story with the AI Narrator will be more intense than watching it on your phone. Will it be too overwhelming for some? Here, I might suggest including a warning about strong images, like on the news, and considering an age limit.

Salzmann’s main challenge for the AI Narrator was to find the right degree of interaction between the AI Narrator and the user. “How much insight do we need in the system in order to feel in control over the machines we use?” Salzmann asked. It’s important, Salzmann thinks, that the user feels as if they have control over their own user info, and that they don’t feel overwhelmed or invaded because the AI is choosing for them.

Nyre, Tessem and I mentioned something similar in the position paper: When considering how the AI Narrator adjusts to the human user, we must be aware of the risk that this might be perceived as invasive; to varying degrees, depending on the user. This will perhaps particularly present us with a challenge when it comes to the use of potentially invasive biometric technology like eye tracking or face recognition, and the AI Narrator adjusting to the user’s emotional states. It’s our duty to ensure that the users don’t end up feeling as if their privacy has somehow been compromised.

ETHICS: Anja Salzmann, PhD Candidate from the University of Bergen, presents her views on the ethical considerations of the AI Narrator.

User Ethics: Responsible Research and Innovation

In her presentation, Anja Salzmann also talked about risk technology: technology that people often reject, such as genetic engineering. “When working with the AI Narrator project, we must consider information and communication technologies as risk technologies”, she said.

In Salzmann’s view, the AI Narrator represents what she would call an “adaptive psycho machine for real-time profiling”, which operates as a new personalized, machine-operated filter for exploring the world around us. It’s “based on several highly convergent risk technologies with a pervasive nature, and the confluence of biometric and other aggregated personal data that are processed in real-time for the purpose of adaptive storytelling”.

Salzmann wanted to remind us that Artificial Intelligence is political and makes science political. Therefore, we need to include the principles of Responsible Research and Innovation (RRI) in ICT development. We need to include ethical principles in AI throughout the entire process.

Because there are ethical risks involved when working with artificial intelligence, the AI Narrator project does follow the principles of RRI. We are, however, aware that this is complicated work that needs to be maintained. As Salzmann explained in her presentation, embedding ideas of RRI into ICT development is not an easy endeavor, and there are several issues that need to be considered:

1) The principles and intentions of RRI need to be thoroughly understood
2) The “open nature” of ICT makes its outcomes and trajectories unpredictable
3) Inventions can go viral the same day they appear
4) Open source projects face “the problem of many hands”
5) Highly convergent technologies have a persuasive nature

User Ethics: Practical Concerns in Using the AI Narrator

A more practical issue concerning personal boundaries came from the Interfaces work group, which asked whether the AI Narrator is always on or not; I wonder the same thing. Many people, including me, are intimidated by the concept of devices that are always on, always “watching” and recording you. I therefore hope the AI Narrator will be easy to shut down or turn off whenever you want. I think people would want that sense of control.

During the work group discussions, I brought up the same practical concern that I’ve brought up before, regarding the Locanews project: Won’t it be impractical and a potential safety hazard to use a smartphone, earphones and AR glasses at the same time, while walking around? We must question how distracted the users would be, and the risks of physical danger.

During the London seminar, Jon Hoem commented that people have now adjusted to looking at a smartphone and using earphones while walking around, and that they usually won’t bump into people or walk into traffic. Their bodies will automatically avoid obstacles.

While I partly agree, I think there are two key differences between using a smartphone with headphones and using the AI Narrator: 1) while using the AI Narrator, you’re distracted by three devices, not just two, and 2) while using a smartphone, you can easily look up from your phone and reorient yourself. With the AI Narrator, however, you’re wearing an extra AR layer between you and the real world, so even if you look up from your phone, your view won’t be unobstructed.

One suggestion from the Interfaces work group was a rule stating that if you move faster than, for example, 30 km per hour, the AI Narrator output is limited to audio. Lars Nyre and I had already discussed something similar in connection with the Locanews project: how one might have to consider switching to audio only in some situations.
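
Such a rule is easy to express in code. Here is a minimal sketch of the idea: the 30 km/h threshold comes from the group’s suggestion, while the function and mode names are invented for illustration.

```python
# A sketch of speed-based modality switching: above a threshold, restrict the
# AI Narrator to audio so the AR overlay never obstructs a moving user's view.
AUDIO_ONLY_SPEED_KMH = 30.0  # threshold suggested in the group discussion

def select_output_mode(speed_kmh: float, walking: bool) -> str:
    if speed_kmh > AUDIO_ONLY_SPEED_KMH:
        return "audio"             # likely in a vehicle: no AR overlay at all
    if walking:
        return "audio+minimal_ar"  # keep the visual field mostly clear
    return "audio+full_ar"         # stationary: full visual storytelling

print(select_output_mode(50.0, walking=False))  # -> audio
print(select_output_mode(4.5, walking=True))    # -> audio+minimal_ar
```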

The Next TekLab Seminar

TekLab member Jon Hoem has secured further funds for the AI Narrator project from UH-Nett Vest (the University and College Network for Western Norway). UH-Nett Vest is a formal collaborative network between five institutions: University of Bergen, University of Stavanger, Western Norway University of Applied Sciences, Volda University College and University of Agder. The network aims to enhance academic activity at the individual institutions.

In March 2019, there will be a new TekLab seminar. This time, the seminar will take place in Stavanger, arranged by Professor Cato Wittusen from the University of Stavanger. During the seminar, the discussions on the AI Narrator that started in London can continue.

During the London seminar, Professor James Bennett from City, University of London, contributed as a guest lecturer. He presented the StoryFutures innovation project, led by Royal Holloway, University of London. StoryFutures sees innovative storytelling as central to next generation technologies and audience-facing experiences.

Professor Bennett mentioned a challenge that seems to affect most AI and AR projects: There’s a lot of money in AI and AR these days, but it remains difficult to distribute these technologies, and to get people to use them.

Therefore, I encourage the Stavanger seminar participants to further consider what they (and their friends, colleagues or family members) might want or need to use the AI Narrator for.

