
Doppelgänger V 

(work-in-progress)

XR
Immersive Storytelling
Spatial Soundscape
AI/GAN
IoT
Installation

Doppelgänger V is a multi-platform XR experience that uses sonic and visual languages as abstract narratives exploring the notion of self and the fragmented existence of bodies and sensoriums. Its series of artistic montages and narratives further inquires into questions such as: how bodies become illusory forms between the digital and the physical, the artificial and the organic; how the sensorium and consciousness exist as fragmented flows between these spaces; how traces of human emotions and behavioural surpluses shape and reconfigure themselves into searchable archives and chains of models; and how the automated and choreographed patterns of social life form an endless feedback loop.

This project explores a speculative future in which machine analysis and prediction of our minds become possible, data monetization becomes uncontrollable, and the boundary between machines and humans becomes blurred: we become amalgamations of digital and organic being, constantly evolving by trading our personal information. By investigating various technologies from the realm of data monopoly, the project aims to further explore the bodily presence of the posthuman, new forms of power structures, and other related socio-political issues in the context of surveillance and consciousness capitalism.

The story follows a journey through a data world. From a first-person perspective, the audience moves along a long scroll that represents our mobile screens or web pages. They can interact with multiple storylines through digital motion (mouse cursor) and physical motion (body movements). Depending on the audience's movement, floating screens surround them from time to time, interfering with their sense of direction and the visibility of the path. They face the challenge of confronting the continuous distraction of multiple screens while moving forward to discover the deeper side of this data world. As the sound and image become more digitally distorted, the audience traverses a portal and is pulled into the central data hub, a panopticon-like space where they may occasionally hear the sound of our familiar reality or see volumetric video of the physical world, until it finally fades out.

In the end, we invite participants to record 10 seconds of humming or text messages (with geographic data) and send them to a cloud database. A spatialized soundscape of synthesized voices, words, and texts is then continuously updated from the collected data. Each individual soundscape is archived on the project website and can be retrieved with an assigned ID. Participants can move around the UI to change their listening positions and receive a morphing soundscape.
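The participant pipeline described above (submit a recording with geographic data, receive an assigned ID, retrieve it later, and hear the mix morph with the listening position) could be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation: the names `SoundscapeArchive` and `listener_mix`, the ID scheme, and the simple inverse-distance panning model are all assumptions.

```python
import math
import uuid

class SoundscapeArchive:
    """In-memory stand-in for the cloud database of participant recordings.
    (Hypothetical sketch; the real project's backend is unspecified.)"""

    def __init__(self):
        self._entries = {}

    def submit(self, clip, geo):
        """Store a 10-second humming clip (or text message) with its
        geographic data; return the assigned ID used for later retrieval."""
        entry_id = uuid.uuid4().hex[:8]
        self._entries[entry_id] = {"clip": clip, "geo": geo}
        return entry_id

    def retrieve(self, entry_id):
        """Look up an archived soundscape entry by its assigned ID."""
        return self._entries[entry_id]


def listener_mix(listener_pos, sources):
    """Compute per-source stereo gains for a listener position, so the
    soundscape morphs as the participant moves around the UI.
    Uses inverse-distance attenuation plus constant-power panning."""
    gains = []
    for sx, sy in sources:
        dx, dy = sx - listener_pos[0], sy - listener_pos[1]
        dist = math.hypot(dx, dy)
        attenuation = 1.0 / (1.0 + dist)        # quieter when farther away
        pan = max(-1.0, min(1.0, dx))           # -1 = hard left, 1 = hard right
        angle = (pan + 1.0) * math.pi / 4.0     # 0 .. pi/2
        gains.append((attenuation * math.cos(angle),   # left-channel gain
                      attenuation * math.sin(angle)))  # right-channel gain
    return gains
```

For example, a source placed to the listener's right would weight the right channel more heavily, and moving the listener toward a source would raise its overall level, which is the basic mechanism behind the morphing soundscape.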

This project draws its context from both physical and digital environments, with a primary focus on studying the behaviour patterns and social structures related to our current transformative, telematic presence in machines. Its sonic and visual elements will be extracted from found footage, synthesized visuals, field recordings, human voices, instrumental and electronic sound, and AI (GAN)-generated media, including portraits, photos, voices, nonsense texts, body images, cursor icons, emojis, household and environmental sounds and images, and synths. Furthermore, the interactive media framework will be built on various IoT agents spread across multiple physical sites, allowing the audience to interact with and alter parts of the narratives from multiple geographical locations. The project comprises several parts: a website, an XR application, a mobile application, and an outdoor participatory installation (actual form and materials TBD).
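One way the IoT layer could let remote sites alter the narrative is through a small shared state that interaction events are applied to. The sketch below is purely illustrative and assumes a design not specified in the project: the class name `NarrativeState`, the sensor types, and the state fields (`screen_density`, `distortion`) are all hypothetical.

```python
import json

class NarrativeState:
    """Shared narrative state that hypothetical IoT agents at multiple
    physical sites could alter remotely (illustrative sketch only)."""

    def __init__(self):
        self.state = {"screen_density": 0.5, "distortion": 0.0}

    def apply(self, event):
        """Apply a JSON-encoded interaction event from a physical site,
        e.g. {"site": "berlin", "sensor": "motion", "value": 0.8}."""
        msg = json.loads(event)
        if msg["sensor"] == "motion":
            # More motion at any site -> more floating screens in the XR scene.
            self.state["screen_density"] = min(
                1.0, self.state["screen_density"] + 0.1 * msg["value"])
        elif msg["sensor"] == "sound_level":
            # Louder physical sites -> more digital distortion in the scene.
            self.state["distortion"] = min(1.0, msg["value"])
        return self.state
```

In a deployment, each IoT agent would publish such events over a messaging protocol (e.g. MQTT) so that audiences in different geographical locations feed the same evolving narrative.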