RIOT is an immersive video installation that places participants inside a protest march that swiftly escalates into a dangerous riot. RIOT responds to each participant’s emotional state in real time to alter their journey.
Viewing “Riot (Prototype)” at the Museo de Arte Contemporáneo in Lima.
The objective is to get through a digitally simulated riot alive, which can be achieved by communicating with a variety of characters. The narrative, however, is governed by the emotional state of the user, measured by bespoke facial expression recognition software and by devices that monitor neurological activity. If the user becomes agitated, the characters become defensive or impatient, and the user is taken deeper into an unknown world.
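The branching logic described above can be sketched in miniature. This is purely illustrative: the field names (`arousal`, `valence`), the threshold, and the branch labels are assumptions for the sake of the example, not the installation’s actual software, which has not been published.

```python
from dataclasses import dataclass


@dataclass
class EmotionReading:
    """One frame of estimated affect (hypothetical schema)."""
    arousal: float   # 0.0 = calm .. 1.0 = highly agitated
    valence: float   # -1.0 = negative .. 1.0 = positive


def next_branch(reading: EmotionReading, agitation_threshold: float = 0.6) -> str:
    """Choose the next scene from the viewer's current emotional state.

    Illustrative rule only: an agitated viewer pushes characters into
    defensive or impatient responses; a calm viewer progresses onward.
    """
    if reading.arousal >= agitation_threshold:
        return "characters_defensive"
    if reading.valence < 0.0:
        return "characters_impatient"
    return "progress_onward"
```

In a real pipeline the `EmotionReading` would be produced continuously by the facial-expression and neurological sensors, and a decision like this would run once per scene transition rather than per frame.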
“‘Riot’ is played out on a large screen, with 3D audio sound surrounding us as a camera watches our facial expressions and computes in real time how we are reacting. Based on this feedback, the algorithm determines how the story unfolds. We see looters, anarchists and police playing their parts and ‘interacting’ directly with us. What happens next is up to us: our reactions and responses determine the story. . . .
“Ochu reacted with jumps and gasps to what was happening around her and ultimately didn’t make it home. . . . As a scientist and storyteller she felt ‘Riot’ was ahead of the curve: ‘This has leapfrogged virtual reality,’ she said.”
SENIOR SOFTWARE ENGINEER
EMOTION ANALYSIS LEAD
Dr Hongying Meng
MACHINE LEARNING ALGORITHM ENGINEERS
NATIONAL THEATRE DIGITAL CURATOR
DIRECTOR OF PHOTOGRAPHY