Pressure Project 3: Surfing Soundwaves
Posted: April 11, 2025 | Filed under: Uncategorized

When starting this project, it felt daunting and overwhelming to decide which part of my cultural heritage to explore. Theatre initially came to mind, but that felt too easy; I wanted to choose something that would let me share a new side of myself with my peers. With graduation on the horizon, I’ve been thinking a lot about home. I grew up near the water, and there’s this unbreakable need in me to be close to it. That’s where I began: sourcing sounds of the ocean on Pixabay. I got so specific that I only used sounds from the Atlantic Ocean when possible, even finding one clip from Florida. But I realized that the ocean itself didn’t fully capture my heritage; it was a setting, not a story. I had to go deeper.
I began thinking about my dad, who was a professional surfer. I’m sure he had aspirations for me to follow a similar path. I felt a lot of pressure to surf as a kid; it was honestly terrifying. But over time, and on my own terms, I grew to love it. Now it’s how my dad and I connect. We sit out on the water together, talk, and when the waves come, we each ride one in, knowing we’ll meet back up. The more I thought about that ritual, the more homesick I felt. Even though the assignment was meant to be sound-based, I found myself needing a visual anchor. I dug through my camera roll and found a photo that captured the calmness and rhythm of beach life, something soft and personal that helped ground my process.

Once I had that anchor, I started refining the design itself. Since this was primarily a sonic piece, I spent about thirty minutes watching first-person surfing videos. It was a surprisingly helpful strategy for identifying key sounds I usually overlook. From that research, I pinpointed four essential moments: walking on sand, stepping into water, submerging, and the subtle fade of the beach as you move away from it. While wave sounds and paddling were expected, those smaller, transitional sounds became crucial to conveying movement and environment.
I then spent another chunk of time combing through Pixabay and ended up selecting sixteen different sounds to track the journey from the car to the ocean. But as I started laying them out, it became clear that I wasn’t telling a story yet; I was just building an immersive environment.

Avoiding that realization for a while, I focused instead on the technical side. I mapped out a soundscape where the waves panned from right to left, putting the listener in the middle of the ocean. I also created a long-form panning effect for the beach crowd, which built up as you walked from the parking lot and slowly faded once you were in the water. I was proud of how the spatial relationships evolved, but I knew something was still missing.
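As an aside for anyone curious about the mechanics: the effect I was after is basically equal-power pan automation. Here is a minimal Python/numpy sketch of the idea (a stand-in signal and made-up durations, not my actual DAW automation):

```python
import numpy as np

def equal_power_pan(mono, pan):
    """Pan a mono signal into stereo.

    pan: array in [-1, 1], one value per sample
    (-1 = hard left, 0 = center, +1 = hard right).
    Equal-power panning keeps perceived loudness constant
    while the sound sweeps across the stereo field.
    """
    theta = (pan + 1) * np.pi / 4            # map [-1, 1] -> [0, pi/2]
    return np.stack([mono * np.cos(theta),   # left channel
                     mono * np.sin(theta)],  # right channel
                    axis=-1)

sr = 44100
t = np.linspace(0, 10, 10 * sr)              # ten seconds of audio
ocean = np.random.randn(t.size) * 0.1        # stand-in for a wave recording
pan = np.linspace(1.0, -1.0, t.size)         # sweep right to left
stereo = equal_power_pan(ocean, pan)         # shape: (samples, 2)
```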

Late in the process, I returned to the idea of my dad. I created an opening sequence with the sound of submerging followed by a child gasping for air. I used a kid’s breath to imply inexperience, maybe fear. To mark a shift in time, I recorded a voicemail similar to the kinds of messages my dad and I send each other, with the general sentiment “I’ll see you in the water.” It was meant to signal growth, like the child from the beginning was now the adult. I had hoped to use my actual dad’s voicemail, but when I called him, I found his voicemail box wasn’t set up.
I decided to end the piece with the paddling slowing down and the waves settling as I/the character reached my dad. I repeated “Dad” a couple of times, not in panic, just to get his attention, to make it clear he was present and alive. I used very little dialogue overall, letting the sound do most of the storytelling, but I tried to be economical with the few words I did include.
During the presentation, I felt confident in the immersive quality of the work, but I wasn’t prepared for how the tone might be read. The feedback I received was incredibly insightful. While the environment came through clearly, the emotional tone felt ambiguous. Several people thought the piece might be about someone who had passed away, either the child or the father. I had added a soft, ethereal track underneath the ocean to evoke memory, but that layer created a blurred, melancholic vibe. One person brought up the odd urgency in the paddling sounds, which I completely agreed with. I had tried to make my own paddling sounds in my kitchen sink, but they didn’t sound believable, so I settled on a track from Pixabay.
Looking back, I know I missed the mark slightly with this piece. It’s hard to convey the feeling of home when you’re far from it, and I got caught up in the sonic texture at the expense of a clearer narrative. That said, I still stand by the story. It’s true to me. When I shared it with my wife, she immediately recognized the voicemail moment and said, “I’ve heard those exact messages before, so that makes sense.”
If I had more time, I would revisit the paddling and find a better way to include my dad’s voice, either through an actual message or a recorded conversation. That addition alone might have clarified the tone and ensured that people knew this was a story about connection, not loss.
Pressure Project 3: Soundscape of Band
Posted: April 9, 2025 | Filed under: Uncategorized | Tags: Pressure Project 3

For this pressure project, we were tasked with creating a two-minute soundscape that represents part of our culture or heritage. I chose to center my project around my experience in band. My original plan had been to take recordings during rehearsals to create the soundscape, but none of my recordings saved properly, so I had to get creative with where I sourced my sound.
Upon realizing I did not have any recordings from rehearsal, I had a moment of panic. Fortunately, some of our runs from rehearsal are recorded and uploaded to Carmen Canvas so we can practice the music for the upcoming Spring Game show. I also managed to scrounge up some old rehearsal recordings from high school and a recorded playing test for University Band. I had also done a similar project for my Soundscapes of Ohio class last year, and still had many of my sound samples from that (the first and likely last time my inability to delete old files comes in handy).
Then I had the idea to use videos from the Band Dance, meaning I had to scour my camera roll and Instagram so I could screen record posts and convert them to mp3 files. This process worked quite well, but it was no small task sorting through Instagram without getting distracted. It was also a challenge to not panic about having no recordings, but once I found this new solution, I had a lot of fun putting it together.
I spent about the first hour finding and converting video files into audio files and putting them into GarageBand. I made sure everything was labeled in a way that I would be able to immediately understand. I then made a list of all the audio files I had at my disposal.
The next 45 minutes were used to plan out the project, since I knew going straight to GarageBand would get frustrating quickly. I listened to each recording and wrote out what I heard, which was helpful when it came to putting pieces together. I also started to clip longer files into just the parts I needed. I made an outline so I knew which recordings I wanted to use. It was a struggle to limit it to just what would fit into two minutes, but it was helpful going in with a plan, even if I had to cut it down.
During the next 45-minute block, I started creating the soundscape. I already had most of the clips I needed, so this time was spent cutting down to the exact moments I needed everything to start and stop, and figuring out how to layer sounds to create the desired effect. About halfway through this process, I listened to what I had and was close to the two-minute mark. I did not have everything I wanted, so I trimmed the first clip and cut another. I did a quick revision of my outline now that I had a better understanding of my timing.
The last half hour was dedicated to automation! I added fade-ins and fade-outs in the places I felt were most necessary for transition moments. I wanted to prioritize the most necessary edits first to ensure I had a good, effective, and cohesive soundscape. I had plenty of time left, so I went through and did some fine-tuning, particularly to emphasize parts of the sound clips that were quieter but needed more impact.
I wanted my soundscape to reflect my experience. As such, I started with a recording from high school of my trio rehearsing before our small ensemble competition. This moment was about creating beautiful music. It fades into the sound from my recorded playing test for University Band last year, where I was also focused on producing a good sound.
There is an abrupt transition from this into Buckeye Swag, which I chose to represent my first moment in the Athletic Band, a vastly different experience from any previous band experience I had. The rest of the piece features sounds from sports events, rehearsals, and the band dance, and it includes music we have made, inside jokes and bits, and our accomplishments.
I wanted to include audio from the announcers at the Stadium Series game because that was a big moment for me as a performer. I marched Script on Ice in front of more than 94,000 people in the ‘Shoe, performing for the same crowd as twenty one pilots (insane!). My intent with that portion was to make it feel overwhelming. I cut the announcement into clips with the quotes I wanted and layered them together to overlap slightly, with the crowd cheering in the background. I wanted it to feel convoluted and jumbled, like a fever dream almost.
In creating this project, I came to realize what being in band really means to me. It is about community, almost more so than making music. Alex pointed this out during the discussion based on how the piece progresses, and I was hoping it would read like that. At first, it was about learning to read and make music. When I got to high school, I realized that no matter where I went, if I was in band, I had my people. Over time, band became about having a community; it became a big part of who I am, and I wanted to explore that through this project.
Aside from personal reflection, I wanted to share the band experience. My piece represents the duality of being in concert and pep band, both of which I love. There is a very clear change from concert music to hype music, and the change is loud and stark to imitate how I felt going to my first rehearsal and first sporting events with the band. By the end, we are playing a high-energy song while yelling “STICK!” at a hockey game, completely in our element.
Cycle 1: Big Rainbow Piano
Posted: April 6, 2025 | Filed under: Uncategorized

For Cycle 1, I started by identifying a few resources I wanted to explore. Since attending the Touch Designer workshop at the beginning of the semester, I had been searching for an opportunity to utilize the software and explore the tools provided to us by Sudo Magic. I was able to get my hands on the “Geo Zone Detector” COMP, which allows you to easily draw a rectangular volume and use the point cloud data from a depth sensor to detect when a person or object is occupying that zone.
I decided to use the basic framework of a human-sized piano keyboard to test this system. I began by drawing the “geo zones” in the COMP to be approximately the size of a person: one meter by two meters by three meters. This was roughly the width and height of a person, with enough length to occupy a human-sized “key”. The next step was to align the depth sensor in the Motion Lab. As part of the “patch lab” in the Touch Designer workshop, I had learned to use the Point Transform TOP to orient the point cloud to the Motion Lab’s x, y, z coordinate system.
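Conceptually, the Geo Zone Detector is counting how many points of the cloud fall inside an axis-aligned box. A minimal Python sketch of that test (my guess at the underlying logic, not Sudo Magic’s actual implementation):

```python
import numpy as np

def zone_occupied(points, box_min, box_max, min_points=50):
    """Return True if enough point-cloud points fall inside the box.

    points: (N, 3) array of x, y, z positions from the depth sensor.
    box_min, box_max: opposite corners of the axis-aligned geo zone.
    min_points: how many points count as "occupied" (a tunable threshold).
    """
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return np.count_nonzero(inside) >= min_points

# A person-sized key: one meter wide, two tall, three long.
key_min = np.array([0.0, 0.0, 0.0])
key_max = np.array([1.0, 2.0, 3.0])
cloud = np.random.uniform(-2.0, 4.0, size=(5000, 3))  # stand-in sensor data
print(zone_occupied(cloud, key_min, key_max))
```

Lowering something like min_points is the same kind of tweak as the “trigger threshold” adjustment described below.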

Once I had the point cloud and spatial trigger boxes aligned and working in unison, I tried to attach a specific sound frequency to each “geo zone”. I started with the Audio Oscillator CHOP. This produced a pure sine wave that let me dial in the exact frequency of each note on the piano. Getting this to work with the Geo Zone COMP was straightforward, but I did have to alter the settings for the zone “trigger threshold” so that it would detect just an arm or a leg and not only the whole person.
The pure sine wave proved that the concept was working. However, I was disappointed in the tonality of the sound, and I wanted to find a way to trigger an actual piano. I wanted a solution that wasn’t too complicated and did not require too much additional hardware. I have dabbled with Garage Band for many years and was well aware of the plethora of virtual instruments available in the software. My previous experience with a MIDI keyboard led me to believe this would be a plug-and-play type of solution.
Unfortunately, the world of MIDI and Apple drivers is not exactly “plug and play”, but there is a pretty easy-to-use Audio MIDI Setup app that lets you see the devices plugged into your computer. A few Google searches led me to the IAC, or Inter-Application Communication, driver. This allows one software program to send MIDI messages to another without any MIDI hardware attached. This was exactly what I needed. However, the MIDI Out CHOP in Touch Designer does not fill in the necessary message formatting to talk directly to other devices. A few additional Google searches led me to the simple channel-plus-note syntax. Once I entered all of the triggers in this format, the Geo Zone Detector was triggering a MIDI keyboard in Garage Band. Success! I spent over an hour listening to different synths and virtual instruments before I landed on “Gritty Bells”, which had a pleasant tone but also a rhythmic component that essentially “played a song” when you hit different notes together.
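If you want to try the same IAC trick outside of Touch Designer, here is a small Python sketch using the mido library (assuming mido and its default rtmidi backend are installed; “IAC Driver Bus 1” is the default bus name and may differ on your machine):

```python
import time
import mido

# Once the IAC driver is enabled in Audio MIDI Setup, its bus shows up
# as a regular MIDI output port. "IAC Driver Bus 1" is the default name.
port = mido.open_output('IAC Driver Bus 1')

def play_note(note, velocity=100, duration=0.5, channel=0):
    """Send a note-on, hold, then send the matching note-off."""
    port.send(mido.Message('note_on', note=note, velocity=velocity, channel=channel))
    time.sleep(duration)
    port.send(mido.Message('note_off', note=note, velocity=0, channel=channel))

play_note(60)   # middle C, as a geo zone would fire when someone steps in
```

Anything listening to that bus, a Garage Band software instrument included, will receive the notes as if they came from a hardware keyboard.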

In an effort to connect to the Isadora skills I had already learned, I wanted to draw some graphical boxes similar to the “shapes” actor. I found the Rectangle SOP to be essentially the same thing: you can draw a box and define a line color, fill color, and line weight. This process created the “keys” of the keyboard. In addition to triggering MIDI notes, I connected the same triggers to the “fill” parameter of the boxes. The end result was each key lighting up and playing its note simultaneously.
Finally, I projected the boxes on the ground and tweaked the keystone to line up with the boxes already defined in the Geo Zone Detector. I decided to roll out a strip of white Marley to help the top-down projection reflect off of the floor, and then refined the keystone and geo zones further so they aligned. This was a bit of a trial-and-error operation. The Marley was a fixed size, so I started by aligning the projections and then scaled the geo zones to match. I could easily tweak each section, or the entire point cloud, using the Point Transform TOP.
When it came time to present, I added some color projection on the MOLA cylinder scrim. These scrims were already in place for Noah’s cycle 1, so I decided at the last minute to add a rainbow gradient projection onto the cylinder. I had this idea previously, and I got lucky having just enough prep time to quickly make the gradient match the keyboard and load it into Resolume utilizing a previous projection map (resource).
I made everyone leave the lab before they saw the finished cycle 1. This helped reset the vibe after the somewhat chaotic setup process. Some people had already seen the basic setup, but there were also new guests who had never experienced the installation or the Motion Lab. Everyone seemed to approach the installation timidly, but once they were all there, everyone took turns playing notes and testing the reactivity of the “piano”. Some people tried to play a song, and others tried to find the limits of the depth sensor, to see how little of their presence triggered a note and how high or low their foot or hand needed to be.

The feedback I received was generally positive. People enjoyed the colorful and playful nature of the setup as well as the pleasant tones emitted by the speakers. Someone said it reminded them of a Fisher-Price xylophone, which, I must admit, was not something I was going for, but unintentionally recreated almost exactly! Some of the other feedback was that they enjoyed being able to step back and watch others interact and play, since there was essentially a “play” space on the Marley and an “observer” space outside the rectangle. Others commented that more interactive possibilities would have been interesting, such as a different response depending on the height of the hand or foot that triggered the notes.
For Cycle 2, I plan on using the same basic concept but abstracting the entire field so it is a little less rectangular and less obviously a type of piano. I have been experimenting with a spatial synthesizer that can utilize the immersive sound system in the Motion Lab. I also plan to add more “geo zone” boxes to increase the sensitivity and the interactive possibilities.
Cycle One: Immersive Theatre
Posted: April 3, 2025 | Filed under: Uncategorized | Tags: adobe, cycle 1, immersive theatre, Isadora, premiere pro, theater, theatre

For cycle one, I wanted to take the tools I had been equipped with over the first half of the course and utilize them in a theatrical context. Knowing that I would only be working with a bite-sized chunk of work, I decided to revisit a play I had developed in a composition class my first year: a 20-ish minute play called The Forgotten World of Juliette Warner. It is a piece that juxtaposes the heroine’s journey with the stages of Alzheimer’s disease. A theatrical convention of the piece in its initial construction was an ever-changing set, meant to reflect the mind of the titular character, where nothing is truly settled. Having actors constantly move literal mats and blocks was a barrier to establishing the suspension of disbelief. So, newly trained in projection mapping, I developed a score around mapping the environments and bringing the audience inside the world.
My original score:
Resources Needed:
- Software: Isadora
- Hardware: Projector, long HDMI cables
- Space & Physical Elements: MOLA access, boxes, rolling boards, or curtains
Steps to Achieve This:
- Organize blocks, rolling boards, or curtains in the MOLA space as the base setup.
- Map the projector to these surfaces.
- Design and program projections to depict the shifting realities of Juliette’s world.
- Create a control board in Isadora for ease of access.
- Source actors.
- Rehearse.
- Present.
In my original score, I had anticipated projecting onto flat surfaces and possibly a curtain. But after our Motion Lab demonstration, I observed a circular curtain track that I was immediately drawn to. So for the first two days, with the gracious help of my peer Michael, I worked to understand routing NDI sources to 3 projectors in the motion capture lab. Through trial and error, we overcame a major barrier on day 2: when sending an NDI signal over a closed network, many computers, mine included, will not send the signal if a firewall is enabled. After disabling the firewall, I was off to the races.
In Isadora, I utilized EJ McCarthy’s focus grid to understand how to canvas the curtains properly. This was a meticulous effort that took nearly a whole class. I find that I can often get so focused on the specifics of the work that I forget to take a step back and look at the big picture. So toward the end of class, I threw up some stock footage on all 3 curtains, and to my surprise, I found that nearly everything is more forgiving than the focus grid.

With my surfaces properly mapped for projection, it was time to turn to the script. This piece has always been one that makes me nervous, because I want to handle its difficult subject matter with as much care as possible. So to avoid alienating my peers/the audience, I selected a brief snippet from earlier in the play that revolves around a repeated daily occurrence: grabbing coffee. I felt that both the environment and the interaction would be familiar enough to put audiences at ease while also providing a great opportunity to show the evolution of Juliette’s mind. When writing it, I found that the scene occurred at these stages of the heroine’s journey/Alzheimer’s progression:
STAGE 3 – Noticeable Memory Difficulties (3A. The Awakening)
STAGE 4 – More Than Memory Loss (3B. Preparing for The Journey)
With one day left in class to work on this project, it was time to generate. Although this software was not initially in my score, I decided that Adobe Premiere Pro would be the best canvas to create this scene. I sourced stock footage and audio from both Adobe and Pixabay (an EXCELLENT source if you haven’t checked it out).
I had to source footage that could suggest a coffee shop without needing to be in full focus; I didn’t want the projections to be a focal point for the audience. I eventually settled on a nice loopable clip, and to make the transition gradual, I started the scene with a circular Gaussian blur at the center and, over the course of the two-and-a-half-minute scene, allowed it to encompass the entire video. I then created a sound design based on stock noises. With the audience on the inside of the curtains, I felt it was important to surround them not only visually but sonically. I utilized surround reverbs and panning to allow sounds to come from specific points in the room.
I moved this scene into my Isadora file, where it replaced the focus grid and easily projected onto all 3 surfaces.
On the cue line “my mind”, I set up a second scene in Isadora, which would be the doctor’s office. I used a similar approach to the coffee shop but reversed the blur effect. I did this to intentionally throw off the audience, to tell them that we were somewhere different, somewhere with much more sterile light, but I allowed that to be revealed slowly over time.
With my projections designed, it was time to implement actors. I did source a few actors who agreed to take part in a staged reading of the scene. Given the nature of a class project, all my actors eventually backed out, which left me scrambling for new ones. When I presented the piece in class, I was only able to give my actors their given circumstances and the instruction to follow their impulses. This created the sense of a scene but led to some confusion in character dynamics and audience attention. For my personal taste, it created some clunky staging, but I was so thankful to have 2 actors gracious enough to jump in; with a brief rehearsal, we could have ironed this out.
In the feedback, which was extremely valuable, I learned that there was room to go further with the visual design. While the current projections and sound design were immersive according to my peers, the same visual on all 3 surfaces created an uncanny blend that actually took the audience out of it a bit. That being said, I did receive feedback that my approach was tasteful and that the blur effect, while discreet, was noticed. Perhaps my biggest takeaway from the feedback was that there is a real opportunity to continually define and redefine the audience relationship. The coffee shop very much sent the message “I want you to be a part of this”, but the doctor’s office provides an opportunity to flip that on its head and push the audience out. When I continue to work with this project in cycle 3, I will explore how lighting can be a tool for achieving this effect. The question I will investigate is, “When can I afford to truly leave the audience in the dark?”
Overall, I am happy with the shape this project took. While it did not look at all how I originally intended, I was pleased to expand my muscles with NDI AND projection mapping at the same time while providing a unique theatrical experience for the audience. I laid the groundwork for a compelling piece and with an updated score and a bit more time, I can lean into the aspects of this project that were lost to time.
Cycle 1: It Takes Two Magic Mirror
Posted: April 1, 2025 | Filed under: Uncategorized | Tags: cycle 1, Interactive Media, Isadora, magic mirror

My project is a magic mirror of sorts that allows for interaction via an Xbox One Kinect depth sensor. The project is called “It Takes Two” because it takes two people to activate. In its single-user state, the background and user’s shadow are desaturated with a desaturation actor linked to the “bodies” output of the OpenNI Tracker BETA actor. When the sensor detects only 1 body (via an Inside Range actor), it puts the saturation value at 0. When a second body is detected, it sets the saturation value to 100. I have utilized envelope generators to ensure a smooth fade in and fade out of saturation.
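For anyone curious, the core logic is small enough to sketch in a few lines of Python (a hypothetical illustration of the patch’s behavior, not Isadora code): the body count picks a target saturation, and an envelope eases toward it each frame.

```python
def target_saturation(body_count):
    """One body (or none): grayscale. Two or more: full color."""
    return 100.0 if body_count >= 2 else 0.0

def step_envelope(current, target, rate=2.0, dt=1 / 30):
    """Ease part of the way toward the target each frame for a smooth fade."""
    return current + (target - current) * min(1.0, rate * dt)

# Simulate a second person walking into frame at frame 30.
saturation = 0.0
for frame in range(90):
    bodies = 1 if frame < 30 else 2
    saturation = step_envelope(saturation, target_saturation(bodies))
print(round(saturation, 1))   # near 100 after the two-second fade
```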
That saturation patch was added onto the shadow mechanism I created. I did some research on how to achieve this and experimented with a few different actors before concluding that I needed an alpha mask. The LumaKey actor was one I played with briefly, but it did not do what I needed. I found a tutorial by Mark Coniglio, which is how I ended up in alpha-land, and it worked beautifully. I still had to tinker with the specific settings within the OpenNI Tracker (and there is still more to be fine-tuned), but I had a functional shadow.
My goal with Cycle 1 was to establish the base for the rest of my project so I could continue building off it. I sectioned off my time to take full advantage of lab time to get the majority of my work done. I stuck to this schedule well and ended Cycle 1 in a good position, ready to take on Cycle 2. I gave myself a lot of time for troubleshooting and fine-tuning, which allowed me to work at a steady, low-stress pace.
I did not anticipate having so much difficulty finding colorscape videos that would provide texture and contrast without being overwhelming or distracting. I spent about 45 minutes of my time looking for videos and found a few. I also ended up reusing some video from Pressure Project 2 that worked nicely as a placeholder and resulted in some creative insight from a peer during presentations. I will have to continue searching for videos, and I am also considering creating colored backdrops and experimenting with background noise. I spent about 20 minutes of my time searching for a sound effect to play during the saturation of the media. I wanted a sound to draw the users’ attention to the changes that are happening.
Overall, the reactions from my peers were joyful. They were very curious to discover how my project worked (there was admittedly not much to discover at this point, as I only had the base mechanisms done). They seemed excited to see the next iteration and had some helpful ideas for me. One idea was to lean into the ocean video I borrowed from PP2, which they recognized, causing them to expect a certain interaction to occur. I could have different background themes with corresponding effects, such as a ripple effect on the ocean background. This would be a fun idea to play with for Cycle 2 or 3.
The other suggestions matched my plans for the next cycles closely. I did not present on a projector because my project is so small at the moment, but they suggested a bigger display would improve the experience (I agree). My goal is to devise a setup that fits my project. In doing so, I need to keep in mind the robustness of my sensor: it needs a very plain background, as it liked to read parts of a busy background as a body and occasionally refused to see an actual one. Currently, I think the white cyc in the MOLA would be my best bet because it is plain and flat.
The other major suggestion was to add more things to interact with. This is also part of my plan and I have a few ideas that I want to implement. These ‘easter eggs’, we’ll call them, will also be attached to a sound (likely the same magical shimmer). Part of the feedback I received is that the sound was a nice addition to the experience. Adding a sonic element helped extend the experience beyond my computer screen and immerse the user into the experience.

This is a screen recording I took, and it does a great job demonstrating some of the above issues. I included the outputs of the OpenNI Tracker actor specifically to show the body counter (the lowest number output). I am the only person in the sensor’s view, but it was reading something behind me as a body, so I adjusted the sensor to get rid of that and demonstrate the desaturation. Because it saw the object behind me as a body, Isadora responded as such and saturated the image. The video also shows how the image resaturates briefly before desaturating when I step out and step back in, which is a result of the envelope generator. (The sound was not recording properly; please see above for a sound sample.)
My score was my best friend during this project. I had it open any time I was working, and I added to it regularly throughout the process. It became a place where I collected my research via saved links and tracked my progress with screenshots of my Isadora stage. It helped me know where I was with my progress, so I knew what to work on the next time I picked the project up, and how to pace myself across this cycle by itself and all three cycles together. I even used it to store ideas for this or a future cycle. I will continue to build on this document in future cycles, as it was incredibly helpful in keeping my work organized.
Cycle 1: A Touch of Magic
Posted: March 31, 2025 | Filed under: Uncategorized

For this project, my primary strategy was to try to reverse engineer a piece of media I experienced in a queue at Disney World. I was pretty sure I understood how it was working, but I was curious whether I would be able to reconstruct some portion of it on my own. After thinking about the patterns, I knew the media sequence was a series of scenes that were variations on one another, with either some kind of count trigger or time marker that triggered the movement into the next scene. I also wanted to use this as an opportunity to explore a depth sensor, as a tactic to experiment with other interactive media I’ve seen used in various capacities. The two together were challenging, and I think there were ways I could have created a more polished finished product if I had focused on one or the other. I think this is a byproduct of me originally imagining a much bigger/flashier end product at the end of the three cycles, which I have since scaled back from significantly. In the end, these cycles will serve more as vehicles for me to continue to learn and explore so I can better recognize how these tools and technologies work out in the wild, thereby making me a better partner to the people who are actually executing the media side of things.

Figure 1: No Time for Making Things Neat
The tools I used for this were a depth sensor, Touch Designer (mainly because Isadora couldn’t recognize the depth sensor without it), and Isadora. In Isadora, I used a Virtual Stage for the first time in one of my projects. It helped me see where the areas I was isolating with the depth sensor were, while keeping them separate from the area where the interactive pieces were being triggered. I used the Luminance Key, NDI Watcher, and Calc Brightness actors for the first time in a project and continued to enjoy the possibilities of triggers, gates, and sequential triggers. I find that I am always thinking about how the story progresses, what happens next, and how to build in ways for what I’m creating to naturally bring up another scene or next thing once it’s reached a relatively low threshold of interactions or time (a small sketch of that advance-on-count-or-timeout idea follows Figure 2).

Figure 2: An Abundance of Gates and Triggers
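That advance-on-count-or-timeout pattern is simple to express outside of Isadora too; a minimal Python sketch (hypothetical numbers, just to illustrate the mechanic):

```python
import time

class SceneAdvancer:
    """Advance to the next scene after N interactions or T seconds."""

    def __init__(self, max_interactions=5, timeout=30.0):
        self.max_interactions = max_interactions
        self.timeout = timeout
        self.count = 0
        self.start = time.monotonic()

    def interact(self):
        """Call whenever a guest triggers the magic."""
        self.count += 1

    def should_advance(self):
        expired = time.monotonic() - self.start >= self.timeout
        return self.count >= self.max_interactions or expired

advancer = SceneAdvancer(max_interactions=3, timeout=10.0)
for _ in range(3):
    advancer.interact()
print(advancer.should_advance())   # True: the count threshold was hit
```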
Some challenges were that I didn’t quite author to the technology, which caused some issues, and I didn’t fully understand the obstacles I might encounter in moving from one space to another. The very controlled environment of a ride queue is perfect for these kinds of tools, and I feel like I understand much better now how this could work in a more controlled and defined space. I also think focusing on the challenges of a depth sensor alone, and maybe using it to more carefully capture motion rather than a range of depth, could have made this more sensitive and, therefore, successful. I certainly spent more time than I would have liked trying to get the trigger to be less sensitive (so it wouldn’t fire constantly) but still sensitive enough to fire at all, and I never quite figured it out.
I think the decision to trigger both a visual and an auditory reaction when the specific areas were interacted with was a good one. The sound, especially, seemed like a success when my classmates interacted with it, and the simple nature of a little bit of magic on a black screen also worked well. It was noted that the specific visual and sound combination made those playing with it want to do more magical motions and incited joy, so I think those were good choices.

Figure 3: A Touch of Magic
I can’t help wishing the presentation had been visually more polished, but I don’t really have the skills to make a more finalized video product. Adobe Stock leaves something to be desired, but it’s what I’m mostly working with right now, along with whatever sounds I can find in a relatively short amount of time that kind of “do the job.” I think this is where my scenic design abilities need the other, media design part to fully execute the ideas that are in my head. As I said, though, I am primarily using these cycles to continue to explore and learn how things work.
In the end, I think the project elicited the responses I was hoping for, which mostly was for people to realize that a motion in a specific space would trigger magic. I could also see how some magic incited the desire to seek out more, and I think a more comprehensive version of this could absolutely be extrapolated out into the queue entertainment I saw at Disney.* I do think this project was successful, in that I learned what I set out to learn and it incited play, and, depending on how the coming cycles develop and what I learn, there is a possibility some of this might show up again.

Figure 4: The To Be Continued Scene Transition
*In the queue, guests interacted with shadows of either butterflies, bells, or Tinkerbell. After a certain amount of time or a certain number of interactions, the scene would kind of “explode” into either a flurry of butterflies, the bells ringing madly, or Tinkerbell fluttering all around, and move into the next scene. Guests’ shadows would be visible in some scenes and instances and not in others, sometimes a hat would appear on a shadow’s head. I think I mostly figured out what was going on in the queue sequence, and for that, I’m very proud of myself.
Pressure Project 2
Posted: March 5, 2025 | Filed under: Uncategorized

For pressure project two, I took my main inspiration from “A Brief Rant on the Future of Interaction Design” by Bret Victor. I was drawn to the call for tactile user interfaces and the lamentation of the lack of sensation when using our hands and fingers on touch screens, “pictures under glass.” I wanted to create something that not only avoided the traditional user interface but also avoided the use of screens altogether. For this reason, I chose to focus on sound as the main medium and built the experience around auditory sensation.
Having seen the Makey Makeys around the Motion Lab for over a year, I was excited to finally use one and see what all the fuss was about. I was immediately inspired by the many options that became available as means of interacting with the computer: anything with even the slightest ability to exchange electrical charge could be used to trigger events on the computer.
Knowing that I wanted to focus on sound, I started this exploration by recreating the popular Makey Makey example of the banana keyboard. Since I didn’t want to ruin multiple bananas for an experiment, I used grapes instead. I was able to map the grapes to the Tone Generator actor in Isadora, and then it was just a matter of figuring out the frequencies of each note, which, it turns out, is quite specific and requires hard-coding each note as a separate actor.
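For reference, those frequencies come from the standard equal-temperament formula, with A4 (MIDI note 69) at 440 Hz. A quick Python sketch (B4/A4/G4 is my guess at the usual “Hot Cross Buns” notes; the exact pitches used aren’t recorded in this post):

```python
def note_to_hz(midi_note):
    """Equal-temperament frequency, with A4 (MIDI 69) at 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# One grape per note:
for name, n in [('B4', 71), ('A4', 69), ('G4', 67)]:
    print(name, round(note_to_hz(n), 2))   # 493.88, 440.0, 392.0
```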

Since I considered this a simple proof of concept, I didn’t bother to create the whole keyboard and was satisfied with being able to play “Hot Cross Buns” using the three notes I had programmed. When I finally had the patch working well, I quickly realized that the Tone Generator in Isadora was very one-dimensional. It produced a pleasant sine wave, but the result sounded very robotic and computational. This was one component that I knew would need further refinement. While I enjoyed the grapes as an instrument (and a snack), I wanted something a bit more meaningful. I had considered using braille to spell out the notes, but I ultimately decided to deviate further from the “keyboard” example and instead trigger sounds in a more abstract way.
Having successfully completed the basic outline of a patch, I took a few days to think about the different ways I could modify and expand the concept so that it no longer resembled a piano. I was encouraged to explore the “AU” actors to expand Isadora’s audio handling. I found these actors useful, but also a bit mysterious, since I could not find any readily accessible help documentation. Through trial and error, I found the actors that worked for my purposes and gained the control needed to complete the patch.
For the tactile inputs, I was reminded of some basic three-dimensional shapes that have been lying around the Motion Lab. I don’t know their exact origin, but I liked the size and overall appearance of the simple sphere, cube, cone, and cylinder, so I decided to use them as the human-facing element of the project. The shapes’ simple forms reminded me of ancient truths, as if they were the basic elements of the universe. When these basic building blocks are combined, the mystery will be revealed. At last, I had found a way to fulfill the overarching requirement of pressure project 2.
Since I chose to make this a sound-centric project, deciding on the sound effects was one of the most important aspects. I wanted something both soothing and mysterious, and was reminded of the souvenir “Tibetan singing bowls” I had seen in the Christmas markets of Europe. When the rim of the bowl is rubbed with a stick, the bowl starts to resonate like a bell and reverberates loudly in a very calming tone.
I downloaded some pre-recorded singing bowls from freesounds.org and, after a bit of research, decided to cover the “singing shapes” in gold foil so that they resembled the metal bowls, but also so they would conduct electricity for the Makey Makey. Wiring everything up was a breeze, and the basic patch I had made before only needed a few tweaks. The most difficult part was revealing the final “mystery”: when all four shapes were activated at the same time, a fifth sound, a choir singing, was triggered.
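The reveal itself reduces to an AND across the four touch inputs. A minimal Python sketch of that logic (hypothetical file names, not the actual Isadora patch):

```python
SHAPES = ('sphere', 'cube', 'cone', 'cylinder')

def update(touched, play):
    """touched maps each shape to True while its foil is being touched.

    Each touched shape plays its own bowl sample; the hidden choir
    plays only when all four are held at the same time.
    """
    for shape in SHAPES:
        if touched[shape]:
            play(shape + '_bowl.wav')
    if all(touched[shape] for shape in SHAPES):
        play('choir.wav')

update({s: s != 'cone' for s in SHAPES}, print)   # three bowls, no choir
update({s: True for s in SHAPES}, print)          # four bowls, then the choir
```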


Finally, for the actual project presentation, I found myself with an extra hour to spare and an empty Motion Lab, so I decided to take the project to the next level and present a polished version to the class. I had this larger vision ahead of time but was not planning on implementing it, since I didn’t think I would have the opportunity; I’m glad I did. A few mirrors that I found in the same box as the shapes, and a fortuitously focused lighting special, added the finishing touch.
I was happy to watch the class interact with the singing shapes, and it quickly became clear they were putting a lot more thought into the “mystery” than I did. There were a few theories about which shape did what and whether the order you pressed them in mattered; in reality, each played the exact same sound every time, and the order made no difference. Eventually they were all activated at the same time and the choir began to sing. I don’t think it was obvious enough; it was noticed, but there was no “aha” moment, and I think people were expecting a bit more of a payoff. Either way, I had fun making it, and I think others had fun experiencing it, so overall I think it turned out well.



Pressure Project 2: Unveil the Mystery
Posted: March 3, 2025 | Filed under: Uncategorized

For this pressure project, we had to unveil a mystery. Going into it, I had a general idea of what I wanted to create, but I faced many struggles during execution. I spent a good portion of my 7 hours researching the different actors I would need and how to use them. I think I got so focused on the technical aspects that I pushed the storytelling aspect to the side and didn’t flesh the story out as much as I wanted. Overall, though, my project was successful and I learned a lot.
I wanted to hide little mini-mysteries throughout my project, so I built three scenes for the user to explore before getting to the end. Each scene had a base set of actors that I built on to customize it. I created a little avatar that moved around the stage with a Mouse Watcher actor and added a sound effect (unique to each scene) that played when the avatar hit the edge of the stage. The avatar also completed a different action in each scene when the user hit a button on the Makey Makey interface I made.

This is the Makey-Makey interface I made for myself, which allowed me to use it single-handed. I planned to make another one that was more like a video game controller but ran out of time.
In the intro scene, a digital city, the avatar jumped when the key was hit, and after 7 jumps, it inflated to take up a large portion of the scene, then quickly deflated. In the ocean scene, it triggered fish to “swim” across the scene so it could swim with the fish. In the space scene, the avatar teleported (an explosion actor was triggered to scatter the avatar’s pixels and reform them). Every scene had another key plugged into a counter actor, and after that key was pushed so many times, it jumped to the next scene. This project was designed to make the users curious to push all the buttons to figure out the mystery. A great suggestion I received was to put a question mark on the action button, which my peers agreed would be more effective than having it unlabeled.
I received a lot of positive feedback on my project, both verbally and through reactions. There were several moments where they were surprised by something that happened (aka a feature they unlocked), namely when the avatar first exploded in the space scene. I was surprised by this because I didn’t think it would be a big moment, but everyone enjoyed it. They went into it with an unlock-the-mystery mentality and were searching for things (oftentimes things that weren’t there), so they were happy to find the little features I put in for them before reaching the final mystery, which was that they had won a vacation sweepstakes. They said the scenes felt immersive and alive because I used video for the backgrounds. Again, the space scene was a big hit because the stars were moving toward the user, which was more noticeable than the subtler movement of the ocean scene. There were multiple moments of laughter and surprise during the presentation, so I am very pleased with my project.
The main critique was that the mystery was a bit confusing without context, and I do agree with that. One suggestion was to add a little something pointing to the idea of a contest or raffle to add context for the final mystery, and another was to progress from the ocean to land to beach to get to the vacation resort, or something along those lines. My original idea was to have portals between scenes and end with a peaceful beach vacation after traveling so far, but I ran out of time and didn’t dedicate enough time to telling the story, so I ended up throwing something in at the end. They did say the final scene provided closure because the avatar had a smile (unique to this scene) and was jumping up and down in celebration.
I hit a creative rut during this project, so the majority of it was improvised, contributing to the storytelling problem. I started by just making the avatar in the virtual stage because making a lil guy seemed manageable, and it sparked some ideas. I decided on a video-game-esque project where the avatar would move around the scene. As I brainstormed ideas and started researching, I had a list of ideas for what could happen as the user navigated through my little world. I spent about 20-25 minutes on each idea on the list to figure out what worked. This involved research and (attempted) application. Some things didn’t work, and I moved on to try other things.

I broke this project up into several short sessions to make it more manageable, especially with my creative struggles. This gave me time to sit and process and reflect on what I had already done, attempted, and wanted to do. I was able to figure out a few mechanisms I had struggled with and go back later to make them work, which helped me move the project along. One time-based challenge I did not consider was how much time it would take to find images, videos, and sound effects. The idea to add these sonic and visual elements occurred much later in the process, so I did not have a lot of time left to reallocate. I think I would have been better off creating a storyboard before diving right in, to better prepare myself for the project as a whole.
Pressure Project #2
Posted: February 27, 2025 | Filed under: Uncategorized

For Pressure Project 2, my strategy was to explore some of the actors we’d been introduced to in class and think about applications. I also wanted to make something that I could see being extrapolated out into other uses, and I think that was successful. I chose to let the “mystery” be the mechanics (though I ended up making them more straightforward than originally planned), as well as how the thing itself was structured. I also wanted to focus on creating something that was smoother and ran in a more finished way by setting some attainable goals for myself, even though that is somewhat counter to the way we’re supposed to be thinking in this class, I think. I wanted the user-facing side to look more finished than the programming that was holding it together (Figure 1, because everything is theatre). I tried not to spend as much time on story this time, while still allowing for some kind of simple, cohesive structure to hold the project together. Moving forward, I’m trying to focus more on creating interesting visuals with just enough story to hold them together, rather than being very story-heavy at the expense of slightly more complex visuals (though I still think that worked well for Pressure Project 1).

Figure 1: Tape, Glue, and Paint
I spent some time working with the Stage Mouse Watcher and a lot of time with triggers. I used them both as switches within the game and, ultimately, as a way to reset the scene, rather than manually turning all the projectors and effects on and off each time I wanted to test things. I wish I had thought of that much earlier in the process, but at least I have it under my belt for next time. I also explored Gates so that my sequential trigger wouldn’t keep firing and kick the project to the next scene prematurely (a sketch of that mechanic follows Figure 3). I felt especially good when I figured out how to implement that, as it made the unrolling of the game itself possible.

At one point, the stars that replaced the items in the room triggered if you just moved the mouse over the correct spot. Late in the process, I decided to change it to clicking on the item. That mechanic felt like it made more sense, and I’m not sure why I didn’t think of it in the first place.

I worked hard to keep my workspace neat and organized. Not that it really matters to the way the thing functions, but it helped me see the patterns, suss out when something wasn’t working correctly, and then make the necessary changes across various actors. I realized after I built everything that there might have been a better way to use User Actors and simplify things/make the initial workspace neater, but ultimately, I didn’t really mind seeing everything laid out together. I think really cleaning up my workflow is a problem for later, when I’m faster at creating and fixing things. (Figure 2 and Figure 3)

Figure 2: The Mess Behind the Scenes

Figure 3: A Cleaner Scene 2
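The gate-plus-sequential-trigger mechanic mentioned above is easy to sketch in Python (a hypothetical illustration of the pattern, not Isadora itself): once the sequence completes, the gate closes so stray clicks can’t fire it again.

```python
class Gate:
    """Pass triggers through only while open."""

    def __init__(self):
        self.open = True

    def fire(self, downstream):
        if self.open:
            downstream()

gate = Gate()
seq_index = 0

def sequential_trigger():
    """Advance the sequence; on the final step, close the gate and jump."""
    global seq_index
    seq_index += 1
    if seq_index >= 3:
        gate.open = False
        print('jump to next scene')

for _ in range(10):              # ten rapid clicks...
    gate.fire(sequential_trigger)
print(seq_index)                 # ...but the sequence only advanced to 3
```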
The thing I really ran out of time for (and, I admit, I ran over on time in general) was the controller. I had a better, more designed idea, but the connections between the wires and buttons weren’t strong enough to be consistent, and I was more interested in a smooth experience than in the look of the controller. To that end, at the presentation, the controller felt easy to use and understand, which pleased me. I didn’t want the mechanics of use to take away from the experience itself, so that was a win. In the things I’m creating, I’m interested in a level of engagement that doesn’t require training or too much figuring out; I want it to be easily playful and, therefore, usable for a wider array of folks.
If I’d had more time, I would have added more items to the room, though the cleanliness of the mess seemed to appeal to the users. I think there’s something to be said for a streamlined interface and screen, especially where play is involved. Users also noted that sound would have enriched the experience, and I definitely agree with that note/wish I’d had time to think of and include that. It’s, admittedly, the last aspect on the priority list, below making my project work, but something I will certainly try to leave time for in the future. This points to a need, in general, for me moving forward, which is to think through all the pieces I need to figure out for a project and make sure to leave at least some time to address them, even if it’s a little frankensteined.
I think there are uses for the things I figured out in this project, especially in relation to gates and triggers. I can see them being very useful as I move into my Cycles and think about creating an experience and having interactions work within it. (Figure 4)

Figure 4: Success!
Figure 5: Video of Final Project
Pressure Project 2: Creating an Escape
Posted: February 25, 2025 | Filed under: Uncategorized

For this second pressure project, I had seven hours to design an interactive mystery without using a traditional mouse and keyboard. Based on feedback from my first project, which felt like the start of a game, I decided to fully embrace game mechanics this time. However, I initially struggled with defining the mystery itself. My first thought was to tie it into JumpPoint, a podcast series I wrote about time travel, but I quickly realized that the complexity of that narrative wouldn’t fit within a three-minute experience. Instead, I leaned into ambiguity, letting the interaction itself shape the mystery.
My first hour was spent setting up a live video feed that would pose questions to the user about their current state and environment. To achieve this, I utilized actors such as Video Noise and Motion Blur. My initial concept was to have the experience activated via a depth sensor, something I had actively avoided in my first project. I set this up, only to realize that between my high-quality webcam and the Makey Makey, all my USB ports were already in use. So I pivoted to a Sound Level Watcher, which would activate the experience.
My second cue, titled “The Scene”, serves as the sound bed for the experience. There is an Enter Scene Trigger that gives the appearance of bypassing it in real time, with the third scene utilizing an Activate Scene actor to trigger the music.

If you view the next screenshot, you will notice that the project says “One Hour mark”; this is not true. This is also your reminder to save early and save often, as twice in this experience I had the misfortune of having about a half hour of work disappear.
So between hour one and hour two, I set up what would be the meat of the experience: 3 doors with no clear path behind each. This is where I intended to really incorporate the Makey Makey, since the first cue is activated by sound. In the control panel, I created 3 sliders and attached each to a Shape actor and an Inside Range actor. This created the appearance that the door comes closer when you choose to “open” it, while also producing a number that, when hit, triggered a Jump++ actor to take the user to Door 1, 2, or 3’s outcome. A Mouse Watcher was also added to track movement from the Makey Makey, though at this point I had not decided how it would be arranged.

Over the next hour, I set up the outcomes of doors 1 and 3. Wanting to unlock the achievement of “expressing delight”, I decided that Door 1 would utilize an Enter Scene Trigger for three purposes: first, to deactivate the sound bed; second, to start a video clip, the scene from Ace Ventura: Pet Detective where Ace, disheveled, comes out of a room and declares “WOO, I would NOT go in there!!”; and third, to feed a Trigger Delay, set to the duration of the clip, into a Jump++ actor that goes back to the three doors.
Behind Door 3, I decided to set up a riddle as the next step. To set this up, I utilized a Text Draw that would rise via an Envelope Generator. You’ll observe a few other actors, but those were purely for aesthetics. I wanted users to be able to use the Makey Makey in another capacity, so I utilized several Keyboard Watchers to hopefully catch every letter being typed. I made several attempts to figure out exactly how I needed to inventory each letter being typed before emailing my professor, who helped out big time!

While I awaited a response, I spent about half an hour experimenting with the Makey Makey, testing its robustness with and without playdough, which I intended to be the conduit for the experience. Please ignore the messy desk; a mad scientist was at work.

Hour 5 is where my pressure project went from being a stressor to being REALLY fun. (Thanks, Alex!) Instead of a Keyboard Watcher, I created 6 text actors to correspond with W, A, S, D, F, and G. Those text actors connected to a Text Accumulator, which was attached to 2 additional actors: a Text Draw, which put the text on screen as I intended, and a Text Comparator, which, when the text matched the intended answer, sent a trigger to take the user to another scene. Instead of using the WASDFG inputs on the Makey Makey, I stuck to the directional and click inputs I had played around with earlier and created those keys as buttons on the control panel. This would still give the user the experience of typing their answer without adding 6 more tangible controls. As hour five drew to a close, I set up the outcome of door two, which mirrors the features and actors of the other two doors.
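The accumulator-comparator chain boils down to a few lines; here is a hedged Python sketch (the answer string is made up, since the riddle’s real answer isn’t in this post):

```python
ANSWER = 'dog'   # hypothetical; stand-in for the riddle's real answer

buffer = ''

def on_key(letter):
    """Text accumulator: append each typed letter, draw it, compare it."""
    global buffer
    buffer += letter
    print(buffer)                       # the Text Draw: show typing so far
    if buffer == ANSWER:                # the Text Comparator: exact match
        print('match! jump to the next scene')

for key in 'dog':                       # the user presses D, O, G
    on_key(key)
```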

With the heavy use of the Text Draw actor, I was unintentionally creating a narrative voice for the piece, so in the 6th hour I worked on the bookend scenes to make it more cohesive. I added a scene at the top that included a riddle telling the user the controls. I also used an Explode actor on the text, hoping to instill the notion that the user needed to be quiet in order to play the game (the opposite of what they would have to do on the live video scene, a fun trick). I created a scene on the back end where I felt a birthday cake was an interesting twist that didn’t take the plot anywhere too dark. I liked the idea of another choice, so I simply narrowed down from 3 options, like the doors, to 2, still utilizing buttons in the control panel.
It was also in this sixth hour that I realized I didn’t know how this mystery was going to end. I had to spend a bit of time brainstorming, but ultimately felt that this experience was an escape of some kind; to avoid going in a dark direction, I decided that the final scene would lead to an escape room building.
My final hour was spent on 2 things: establishing the scenes that would lead to the escape room ending, and setting up the experience in my office and asking a peer to play it so I could gauge the level of accessibility. Feeling confident after this test, I brought my PP to class, where I received positive feedback.
Much like an escape room, there was collaboration in both the tangible experience of controlling the game and the decision making. I did observe that it wasn’t clear which parts of the controller were the ground and which were the click. In the future, I would distinguish these a bit more with additional labels.
Something else that occurred was that the live video scene was instantly bypassed due to the baseline level of volume in the room. In the future, I would have the actor update the range in real time, as opposed to the fixed baseline I captured with the Hold Range actor.
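What I mean by updating the range in real time could look something like this Python sketch (hypothetical numbers, not Isadora code): track a slowly moving baseline of room volume and trigger only on levels well above it.

```python
class AdaptiveThreshold:
    """Track ambient loudness; trigger only on sounds well above it."""

    def __init__(self, ratio=1.5, smoothing=0.95):
        self.baseline = None
        self.ratio = ratio          # how far above ambient a trigger must be
        self.smoothing = smoothing  # how slowly the baseline drifts

    def update(self, level):
        if self.baseline is None:
            self.baseline = level
        # Follow the room's baseline volume slowly...
        self.baseline = self.smoothing * self.baseline + (1 - self.smoothing) * level
        # ...and fire only when the current level clearly exceeds it.
        return level > self.baseline * self.ratio

detector = AdaptiveThreshold()
for level in [0.2, 0.21, 0.19, 0.2, 0.8]:   # a chatty room, then a shout
    if detector.update(level):
        print('trigger!')                    # fires only on the shout
```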
With how much jumping was occurring between scenes, I struggled throughout with ensuring the values would be where I wanted them upon entry to each scene. It wasn’t until afterwards that I was made aware of the initialized value feature on every actor. This would be a fundamental component if I were to keep working on this project.