The Masks We Wear
Created as part of the Performance Art and New Technologies class taught by Daniel Fine, The Masks We Wear was a proof-of-concept telepresent interactive installation exploring visual identity and what happens when we strip it away and instead focus on the underlying person.
Audience members (situated in the same room for this prototype, but ideally in distant locations) were invited to stand facing a large projection surface showing a virtual face within its portal. In a designated area, an audience member took control of a virtual avatar and could communicate telepresently with another location through that unknown virtual form. The unfamiliar facade they inhabited, and the unique experience of conversing with a photorealistic but virtual face, prompted conversation about identity, personality, and personal history by breaking down the initial visual barrier. The inhabited avatars could be swapped at audience request, but participants remained uncertain of their new face unless they conversed with the other end of the line.
Future iterations of this project would be fully telepresent, with participants separated by buildings or even countries. The audience would not see the cameras capturing their faces, only the portal with the virtual avatar they were conversing with. Audience members would also gain more autonomy, choosing the avatar on the other end of the communication, but would be unable to see their choice unless they communicated with the other audience member. The possibility of altering vocal tonality is also intriguing, opening up a host of unintentional customization on the audience's end regarding which mask they choose to wear.
Technical Breakdown: Two 5,400-lumen laser projectors, driven by the media server Isadora, were projection mapped into circles. An Unreal Engine project hosting four MetaHumans, connected through the LiveLink Face app on two iPhones to facilitate facial motion capture, stood in front of four virtual Spout cameras whose feeds were piped into the Isadora patch for projection. The MetaHuman avatars could be manually swapped at audience request to display a variety of identities, though in a final product this would occur through an audience-initiated control panel or similar device. This interactive exhibit functioned as a test for future projects merging virtual and actual elements into a cohesive experience.
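The avatar-swap mechanic described above (an operator manually cycling a participant to a new MetaHuman identity they cannot see) could be driven by a small piece of control logic. The sketch below is illustrative only, assuming a simple in-memory pool of identities; the class and avatar names are invented for this example, not part of the installation's actual Unreal or Isadora setup.

```python
import random

class AvatarSwapper:
    """Hypothetical controller for the audience-initiated swap panel:
    tracks which MetaHuman identity a participant currently inhabits
    and guarantees each swap yields a different face."""

    def __init__(self, avatars):
        self.avatars = list(avatars)    # pool of MetaHuman identities
        self.current = self.avatars[0]  # avatar currently inhabited

    def swap(self):
        # Pick a new identity, never repeating the current one,
        # so the participant always receives an unfamiliar mask.
        choices = [a for a in self.avatars if a != self.current]
        self.current = random.choice(choices)
        return self.current

# Example with four placeholder identities (names are invented).
swapper = AvatarSwapper(["Avatar_A", "Avatar_B", "Avatar_C", "Avatar_D"])
previous = swapper.current
new_face = swapper.swap()
assert new_face != previous  # the swap always changes the mask
```

In a finished version, the `swap()` call would be triggered from the audience control panel and the result relayed to the Unreal project (for example over OSC or a remote-control API) rather than handled in-process.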

University of Iowa, February 2023
Co-Collaborators: Kaelen Novak, Kenneth Collins, Cat Dooley