Cycle 3
Posted: May 7, 2026
Resources
- Motion Lab
- ACCAD space
- Wireless audio system
- Guitars, pedals
- Drums
- Orbbec Femto Depth sensor
- RGB Camera
- MacBook Pro (M1)
- Mac Studio (M4)
- PC
- MoLa Network
- NDI
- Cue-able lighting system
- Portable lights
- Circular rug
- Wall-Mounted TV
- TouchDesigner
- MediaPipe & OpenPose
- StreamDiffusion Operator
- GeoZone operator
- iPhone
- Amazon Echo Tap
- Past ACCAD Projects
- Performance recording from castle project
- Interactive fluid simulation
- Interactive particle system
- Mediapipe control system
- Audio responsive coloration
- Shared memories of impactful moments from prior projects this semester
Themes
- Undermining Expectations
- Intentional Performance with Interactive Systems
- Instability
- Surprise
- Disorientation
- Immersion
- Uncertainty
- Friendship
- Challenge
- Hubris
Value Groundings
For this project I wanted to showcase my abilities and pay homage to the journey that brought me here. I saw this as a sort of culmination of an era of my life that has been both exciting and enriching. In each cycle, I integrated elements and systems that I had built for previous classes with new elements that I learned to implement in the current semester. Cycle 3 was envisioned to be a choreographed performance to demonstrate what a practiced and intentional performer could do with the system.
In addition to showcasing my current skillset, I wanted to feature some of the elements that I have loved about my time at ACCAD. The friendships and camaraderie developed with classmates, the supportive environment that artfully enables vulnerability, and the creative confidence cultivated in every class that encourages and enables ambitious pursuits without judgement.
Undermining Expectations
This theme ran through the entirety of the performance from the pre-show through the show as envisioned.
I have typically tried to build systems for other people to interact with. For Cycle 3, I chose to build a system that would allow me to perform for a more passive audience.
I have typically (not always) steered clear of negative emotional valence, opting for friendly and approachable experiences with some emotional dynamics, but almost always ending on a positive note. The score for this cycle featured many abrupt oscillations between positive and negative emotional cues.
I have typically strived to make digestible experiences with a central focal point separate from the audience intended to provide a “wow factor”. I devised this experience to wrap the audience inside the experience with progressively disclosed wow factors.
I have typically tried to implement experiences with clear beginnings, middles, and endings. The experience as devised for cycle 3 had several false stops to keep the audience on the proverbial balls of their feet. For this cycle I didn’t want audience members to feel completely comfortable at any point. I wanted emotional highs and lows. I wanted a little bit of vigilance.
I like surprise. I needed Michael for this project for a number of reasons including as a co-performer in the original score (he could have nailed it, but I wasn’t comfortable with my own performance on the canceled second song). Even knowing that I could not achieve everything I wanted to do without him, I still tried to find ways to keep certain advancements hidden from him so that he could experience a bit of surprise as well. My main goals for the cycle were to introduce points of intrigue, to increase the polish, and to build an experience bigger and more immersive than I have in the past.
I made a major mistake in my score. Early in Cycle 3, I realized that audience members would likely assume this was an audience-participation experience. I had initially considered using the Orbbec depth sensor to track movement into the stage space and trigger an audio rebuke instructing the audience to find a seat and relax, but I couldn’t immediately devise a way to trigger this response only when I was out of the room. In retrospect there were several easy ways to achieve this. The easiest would have been to trigger a voice-over immediately upon entry instructing audience members how to engage with the experience, which would have helped avoid unintentional disclosure of the experience.
Another simple method would have been to hide a switch near the entrance to the motion lab that I could trigger on entry to deactivate the security mode. This would be easily achieved using a Makey Makey with some conductive tape.
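The gate described above could be modeled as follows — a minimal plain-Python sketch, not the actual implementation (in TouchDesigner this logic would live in CHOP Execute callbacks; the names here are hypothetical):

```python
class SecurityGate:
    """Models the proposed onboarding gate: a hidden entry switch disarms
    'security mode' so the depth sensor's audio rebuke only fires while
    the performer is still out of the room."""

    def __init__(self):
        self.armed = True  # rebuke active until the performer disarms it

    def performer_entry_switch(self):
        """Fired by the hidden switch (e.g. a Makey Makey pad with
        conductive tape near the door) when the performer enters."""
        self.armed = False

    def on_stage_motion(self):
        """Called when the depth sensor detects movement into the stage
        area. Returns True if the audio rebuke should play."""
        return self.armed
```

With this arrangement, audience movement onto the stage before the performer's entrance triggers the rebuke; the same movement after the performer has tripped the hidden switch is ignored.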
The importance of framing cannot be overstated. Especially if you are planning an experience that is going to undermine what people have been conditioned to expect. I tried to do a lot of fancy framing when clear and simple framing would have been a better approach.
Score
The initial score was as follows:
- Audience waits in the holding area listening to muzak while the performance is set up.
- I leave the motion lab through the supply closet; the muzak stops abruptly and the emotional valence of the experience shifts from light to dark as the audience is instructed to enter.
- The audience enters the motion lab to find the system in a dormant state with designed lighting.
- I enter the waiting room, set up the backing light, and begin playing the song before entering the motion lab.
- I enter the motion lab and walk slowly to the robo camera while continuing the song. I use the robo cam to interact with my audio-responsive fluid simulation (powered by MediaPipe) to paint a base on the front screen.
- I would proceed to the center of the rug and perform the song.
- At specific moments during the song where no lyrics were sung, I would step away from the microphone into a zone on the rug, which would trigger a specific prompt weight on the StreamDiffusion operator to rise to 1 and fall to 0 over 10 seconds. I would then step several feet directly behind the microphone, which would cause TouchDesigner to fade between feeds, so the StreamDiffusion output would fade in behind the point cloud and slowly fade back.
- All lights in the room would shift from the initial cool color setting to bright red over the course of four minutes. Gradually shifting the light shining on the performer’s face from soft blue to dark red.
- If at any point in the song I forgot the lyrics (which happened several times in practice), I would have a cheat sheet projected on the wall TV.
- At the end of the song the lights would fade to black and I would recede from the microphone with only feedback playing over the speakers, lay the guitar down on the table in front of the screen, and wiggle my fingers around the glass globe placed at the center of the table.
- When a motion sensor detected the movement in my fingers, an energy animation would appear using a Pepper’s Ghost effect inside the orb, the screen would flash and I would fall to the floor.
- At the drum set on the opposite side of the room, Michael would begin playing a drum solo with each pad linked to a specific portable light, shifting attention from the right side of the screen to the left side.
- During this performance I would be hidden behind the table changing costumes and emerge with a different instrument to join Michael in a final song, David Bowie’s “I’m Afraid of Americans”. The lights would reactivate with a red, white, and blue theme.
- Stepping into a previously avoided zone would provide a new set of prompts to the AI that are relevant to each verse of the second song. The same motion-based weight adjustments would allow me to shift between each prompt.
- This song would end and the lights would fade to black.
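The prompt-weight ramp in the score (rise to 1, then fall back to 0 over 10 seconds) can be sketched as a simple envelope — a plain-Python stand-in for what a CHOP would drive in TouchDesigner. The symmetric triangular shape is my assumption; the score only specifies the endpoints and duration:

```python
def prompt_weight(t: float, duration: float = 10.0) -> float:
    """Weight for a StreamDiffusion prompt, t seconds after the performer
    steps into the GeoZone: ramps 0 -> 1 over the first half of the
    window and 1 -> 0 over the second half."""
    if t <= 0.0 or t >= duration:
        return 0.0
    half = duration / 2.0
    return t / half if t <= half else (duration - t) / half
```

Driving the crossfade between the point-cloud feed and the AI feed from the same envelope would keep the two transitions locked together.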
Simplifications
Full disclosure: it would have been better to have simplified this experience more. With a simpler media system I would have had a more predictable performance. At the same time, even more than a flawless performance I valued this final challenge and an honest assessment of the edge of my abilities as a benchmark. I took risks, I suffered consequences, I have no regrets — though I do have lessons learned.
Truncating the show
While I built most of the system to execute the complete score, it became evident that I was not prepared to actually perform both songs. I was not great at the first song, the second was comparatively worse, and I did not want to deliver an unbalanced experience ending on a sour note, so I chose to cut the second song. Since the Pepper’s Ghost transition was intended to have “transformed” the performer character, it no longer served any narrative purpose and became a cheap party trick.
Changing the onboarding experience
Because I was slow to devise a verbal warning system to keep audience members from interacting with the system and uncovering the surprise enhancements from Cycle 2, I elected to simplify by introducing a new entry experience: a Switch operator with a trigger that would let me hide the actual experience until a moment I specified.
This was wishful thinking and did not serve its primary purpose. The audience’s prior knowledge and experience had strongly reinforced the idea that this would be an interactive audience experience. Luke’s performance also featured a microphone as a mode of interaction, establishing it as a system component that was “in-play”. After starting the song on the guitar I heard audience members speaking or singing the lyrics into the microphone.
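The Switch-based hiding amounts to selecting between a dormant placeholder feed and the live show network. A minimal stand-in for that Switch operator, in plain Python with hypothetical feed names:

```python
class ShowSwitch:
    """Mirrors a Switch operator driven by a manually fired trigger:
    index 0 routes a dormant placeholder look to the output,
    index 1 reveals the full experience."""

    def __init__(self):
        self.feeds = ["dormant_state", "live_show"]
        self.index = 0  # start hidden

    def reveal(self):
        """Fired by the performer's trigger at the chosen moment."""
        self.index = 1

    @property
    def output(self):
        return self.feeds[self.index]
```

As the performance showed, this only hides the system's visuals; it does nothing to reframe the audience's expectation that the space is interactive.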
Where I probably should have simplified, but chose not to
Some elements of complexity were required to achieve the vision. The experience employed three networked computers trading data. This was a big vulnerability that did bite me in the end, but it was necessary if I wanted the AI-generated content and transitions. I could potentially have eliminated my personal laptop from the experience and run everything on two machines, but I kept the patch that required the most work on my personal laptop so that I could work remotely and make updates in the motion lab without needing to transfer files back and forth.
I am not confident that the PC alone could have run all components, so I chose to leave it dedicated to the AI model given the model’s intense processing demands.
This complexity presented a major fault during the performance. On the day of the final I had accidentally opened two versions of my project, which doubled the NDI out feeds and caused the network to dynamically rename my computer. When I set up the AI computer prior to the performance, it did not recognize my NDI feeds as they were coming from a new source. I reconnected the feeds, but made a mistake in feeding the source image into the AI model.
I learned during Cycle 3 that feeding the Front Screen NDI into the AI model caused a feedback loop that ruined the content and drove the model to a stable state of striped colors. I resolved this problem by creating a separate feed named fluidsim-precomp (a bad choice). I only discovered this mistake while cleaning up in the motion lab after the performance. I should have given the NDI output a name corresponding to its purpose, not its content, and added a comment to my network to remind me which feeds are expected.
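A pre-show check along these lines would have caught the renamed NDI sources before the performance. This is a plain-Python sketch of the idea, not code from the project; the purpose-based feed names are hypothetical examples of the naming convention described above:

```python
# Feeds the AI machine expects to see, named for purpose, not content
# (hypothetical names illustrating the convention).
EXPECTED_FEEDS = {
    "ai-source-in",      # image routed into the StreamDiffusion model
    "frontscreen-out",   # composited output for the front screen
}

def missing_feeds(discovered):
    """Return the expected feeds absent from the discovered NDI sources,
    so a setup checklist can flag them before the show starts."""
    return EXPECTED_FEEDS - set(discovered)
```

Running this against the list of discovered NDI sources during setup turns a silent mis-patch into a visible checklist failure.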
Lessons Learned
I had practiced this performance extensively in the motion lab and at home, but only under pristine conditions; I had not practiced playing through unanticipated variables. While I set up the experience and drilled the performance enough that I could execute it with minimal active thought, the unanticipated initial conditions forced me to adapt my performance, and it took longer to settle in and get back out of my head. I was fixated on making sure everything was reset to the baseline state while trying to continue the show. In the process I missed several cues (e.g. entry, delay pedal timing) and failed to notice several important factors (e.g. a lowered microphone, guitar FX pedals set incorrectly) that distracted me early in the show.
I have to work on simplification. I love a challenge. I love devising big, elaborate experiences. I love to stretch the boundaries of what I know how to do. Sometimes it’s worth adding complexity to a system when the risk is justified by the extended capabilities, but there is a penalty to pay. By nature I’m a shy person, but designing experiences allows me an outlet to be bold. I like to take risks. I think it’s okay to do things like this when it’s for my own benefit, but when devising experiences for others (especially where clients are involved) simplification is non-negotiable. I should strive to re-frame simplification as the challenge to exercise that muscle.
I also learned how much I prefer to build experiences for others rather than perform for others. It’s a much more gratifying experience for me as a creator to watch other people play and discover, even if the work is less polished. Even if I’m panicked during an experience that others are interacting with, it’s not necessarily evident to the participants. I don’t particularly like being the center of attention.
Special thank you to Alex, Michael, Lou, Rufus, Zarmeen, and Luke for everything this semester; it was a great joy to work with all of you.
Final Practice before new elements introduced
The GeoZone based prompt switching and updated floor visuals were not yet implemented during this rehearsal.
Final Performance
Note: I have cut this video down to remove the initial audience and performer entry to protect the identities of the classmates who did exactly what I had conditioned them to expect through previous cycles and began interacting with the set. They did nothing wrong; I simply didn’t provide the necessary framing to contextualize the performance, and I fear that sharing that portion of the video publicly might cause undue embarrassment. I have deep respect for the entire cohort and loved working with each member.