Cycle 2 | Mysterious Melodies

For my cycle 2 project, I wanted to expand upon my original idea and add a few resources based on the evaluation I received from the initial performance. I wanted to lean away from the piano-ness of my original design and instead abstract the experience into a soundscape that was a bit more mysterious. I also wanted to create an environment that was less visually overwhelming and played more with the sense of light in space. I have long been an admirer of the work of James Turrell, an installation artist who uses light, color, and space as his main medium. Since my background is primarily in lighting and lighting design, I decided to remove all of the video and projection elements and focus only on light and sound.

ACCAD and the Motion Lab recently acquired a case of Astera Titan Tubes. These are battery-powered LED tubes that resemble the classic fluorescent tube but have full color capabilities and can be separated into 16 “pixels” each. They can also receive wireless data from a lighting console and be controlled and manipulated in real time. I started by trying to figure out an arrangement that made sense with the eight tubes I had available. I thought about making a box with the tubes at the corners, and I also considered a circular formation. I decided against the circle because, arranged end to end, the eight tubes would have made an octagon, and spread out, they were a bit linear and created more of a “border.” Instead, I arranged them into eight “rays”: lines that fanned out from a central point. This arrangement felt a bit more inviting; however, it did create distinct “zones” in between the various sections. The tubes also have stands that allow them to be placed vertically. I considered this as well, but I ended up just setting them flat on the ground.

In order to program the lights, I opted to go directly from the lighting console. This was the most straightforward approach, since the tubes were already patched and working with the ETC Ion in the Motion Lab. I started by putting them into a solid color that would be the “walk in” state. I wanted to animate the pixels inside the tubes, so I began by experimenting with the console’s built-in effects engine. I have used this console many times before, but I have found the effects a bit lacking in usability, and I struggled to manipulate the parameters to get the look I was going for. I was able to get a rainbow chase and a pixel “static” look. This was okay, but I knew I wanted something more robust. I decided to revisit the pixel mapping and virtual media server functions built into the console. These capabilities let the programmer create looks similar to traditional effects, but also looks that would otherwise be incredibly time consuming, if not completely impossible, using traditional methods. It took me a bit to remember how these functions worked, since I had not experimented with them since my time working on “The Curious Incident of the Dog in the Night-Time,” programming the lighting on a set that had pixel tape integrated into a scenic grid. I finally got the pixel mapping to work, but ran out of time before I could fully implement the link to the triggers already being used in Touch Designer. I manually operated the lighting for this cycle and intend to focus more on this in the next cycle.

For the audio portion, I decided to use “Skydust,” a spatial synthesizer that Professor Jean-Yves Munch recommended. This allowed me to use the same basic MIDI integration as the last cycle, but expand the various notes into spatial sound without needing an extra program to handle the panning. Similar to the last cycle, I spent a lot of time listening to the various presets and the wide variety of sounds and experiences they produced, everything from soft and soothing to harsh and scary. I ended up going with the softer side and found a preset called “waves pad,” which produced something a bit ethereal but not too “far out.” I also decided to change up the notes a bit. Instead of using the basic C chord, I decided to use the harmonic series. I had a brief discussion with Professor Marc Ainger, and he recommended this as a possible arrangement that goes beyond traditional chord structure.
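
For reference, the harmonic series is simply the set of integer multiples of a fundamental frequency, so the intervals compress as you climb, and the upper notes drift away from familiar triad spacing. A minimal sketch, assuming a fundamental of roughly C2 (not my exact patch values):

```python
# A minimal sketch of the harmonic series: every partial is an
# integer multiple of the fundamental. The fundamental here (C2,
# ~65.41 Hz) is an assumption for illustration.
FUNDAMENTAL_HZ = 65.41  # roughly C2

for n in range(1, 9):
    print(f"harmonic {n}: {FUNDAMENTAL_HZ * n:.2f} Hz")
```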

For the patch, I kept the same overall structure but moved the trigger boxes around into a more circular shape to fill the space. Additionally, I discovered that I could replace the box with other shapes. Since the overall range of the depth sensor is a circle, I decided to try out the “tube,” or cylinder, shape in Touch Designer. While adding boxes and shapes into the detector, I also noticed that there is a limit of eight shapes per detector. This helped keep things simplified, and I ended up with two detectors: one with boxes in a circular formation, and another with tubes of various sizes. Of the tubes, one was low and covered the entire area, another was activated only in the center, and a third was triggered only if you reached up in the center of the space.
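
Under the hood, each shape is just a containment test run against points from the depth sensor's point cloud. I don't know how the detector implements this internally, so the following is only a hypothetical sketch of the geometry involved; the point values and the two-point threshold are made up for illustration:

```python
# Hypothetical sketch of the geometry behind box and "tube"
# (cylinder) trigger zones. Not the detector's actual internals.

def in_box(p, center, size):
    """True if point p = (x, y, z) is inside an axis-aligned box."""
    return all(abs(p[i] - center[i]) <= size[i] / 2 for i in range(3))

def in_tube(p, center, radius, height):
    """True if p is inside a vertical cylinder (y is up)."""
    dx, dz = p[0] - center[0], p[2] - center[2]
    return (dx * dx + dz * dz <= radius * radius
            and abs(p[1] - center[1]) <= height / 2)

# A zone "triggers" once enough depth-sensor points land inside it.
points = [(0.1, 0.5, 0.2), (2.0, 0.5, 2.0), (0.0, 1.6, 0.1)]
hits = sum(in_tube(p, (0, 1, 0), radius=1.0, height=2.0) for p in points)
print("tube triggered:", hits >= 2)            # threshold of 2 points
print("box hit:", in_box(points[0], (0, 1, 0), (2, 2, 2)))
```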

The showing of this project was very informative, since only one other person had experienced the installation previously. I decided to pull the black scrim in front of the white screen in the Motion Lab to darken the light reflections while still allowing the spatial sound to pass through. I also pulled a few black curtains in front of the control booth to hide the screens and lights inside. I was pleasantly surprised by how long people wanted to explore the installation. There was a definite sense of people wanting to solve or decode a mystery or riddle, even though there was no real secret. The upper center trigger was the one thing I hoped people would discover, and they eventually found it (possibly because I had shown some people before). For my next cycle I plan to continue with this basic idea but flesh out the lighting and sonic components a bit more. I would like to trigger the lighting directly from Touch Designer. I am still searching for the appropriate soundscape and am debating between using pre-recorded samples to form a song or continuing with the MIDI notes and instruments. Maybe a combination of both?


Pressure Project 3 | Highland Soundscapes

Our prompt for pressure project 3 was to create an audio soundscape that reflects a cultural narrative that is meaningful to you. Since I am 75 percent Scottish, I chose to use the general narrative of Braveheart as an inspiration. My biggest challenge for this project was motivation. Since I am concurrently taking Introduction to Immersive Audio, I had already done a similar project just a few weeks ago. I struggled a bit with that project, so this one seemed a bit daunting and unexciting. I must admit, the biggest hurdle was just getting started.

Scene from the movie Braveheart

I wanted to establish the setting of the Scottish countryside near the sea. I imagined the Cliffs of Moher, which are actually in Ireland, but the visual helped me search for and evaluate the many different wave sounds available. I used freesound.org for all of my sound samples. To establish the “countryside” part of the soundscape, I found various animal noises, focusing primarily on “highland cow” and sheep. I found many different samples for these, and eventually I got tired of listening to the many moos and trying to decide which sounded more Scottish. I still don’t know how to differentiate.

Highland Cow by seaside cliffs

To put the various sounds together I used Reaper. This is the main program we have been using in my other class, so it seemed like the logical choice. I was able to lay in a bunch of different tracks easily and then shorten or lengthen them based on the narrative. Reaper is easy to use once you get the hang of it, and it is free to use while “evaluating,” which apparently most people do their entire lives. I enjoyed how simple editing and fading in and out are in Reaper. Additionally, the ability to automate panning and volume levels allowed me to craft my sonic experience easily.

For the narrative portion, I began by playing the seaside cliffs with waves crashing. The sound of the wind and crashing waves set it apart from a tranquil beach. I gradually faded in the sound of hooves and sheep, as if you were walking down a path in the highlands and a herd of cattle or sheep was passing by. The occasional “moo” helped establish the pasture atmosphere I wanted. The climax of the movie Braveheart is a large battle where the rival factions charge one another. For this, I found some sword battle sounds and groups of people yelling. After the loud, intense battle, I gradually faded out the sounds of clashing swords and commotion to signify the end of the battle. To provide a resolution, I brought back the sounds of the cattle and seaside to signify that things more or less went “back to normal,” or, life goes on.

Cliffs of Moher

For the presentation portion, I decided to take an extension, since my procrastination had left me with the samples only roughly arranged in the session and the narrative portion not totally fleshed out. This was valuable not only because I was not fully prepared to present on day 1, but also because I was able to hear some of the other soundscapes, which helped me better prepare and understand the assignment. While I am not proud that I needed an extension, I am glad I didn’t present something that I hadn’t given the proper amount of effort. While I don’t think it was my best work, I do think it was a valuable exercise in using time as a resource, and in how stories can sometimes take on a life of their own when a listener has limited visual and linguistic information and must rely totally on the sounds to establish the scene and story. I felt the pressure of project 3, and I am glad I got the experience, but I’m also glad it’s over 🙂


Cycle 2: The Rapture

For Cycle Two, I wanted to build off the kernel of my first cycle, Isadora in a theatrical context, and put my money where my mouth is by using it in a real production. My first project was exploratory; this one was functional. I wanted to program a show I could tour anywhere that had a screen. I had just revised The Phil Mitchell Radio Hour, a solo show booked for performances April 11th and 12th at UpFront Performance Space in Columbus, and at the Atlanta Fringe Festival May 27th–June 8th. Both venues had access to a projector, so I decided to build a visual component to the show using Isadora.

This meant I needed to create a projection design and keep it user friendly, since I wouldn’t be running the projections myself. It was important to me that the visual media didn’t just exist as a cool bonus, but felt like a performer alongside me, heightening the story, rather than distracting from it. I’d say the project was a major success. I want to tweak a few small moments before Atlanta, but the tweaks are minor and the list is short.

My Score:

Resources:

  • Isadora, Adobe Premiere Pro, Adobe Express
  • Projector, HDMI cable, laptop
  • UpFront Performance Space
  • Projector screen

Steps:

  1. Create the “scenes” of the show in Isadora
  2. Remaster sound design and integrate into scenes using enter triggers
  3. Design projections
  4. Add projections to scenes and build new scenes as needed
  5. Run the full show and make sure it all holds together

For context, here is the official show synopsis, followed by an image from the original production:

“The Phil Mitchell Radio Hour follows a self-assured televangelist delivering a live sermon on divine pruning—only to realize mid-broadcast that the rapture is happening, and he didn’t make the cut. As bizarre callers flood the airwaves and chaos erupts, Phil scrambles to save face, his faith, and maybe even his soul—all while live on air.”

I’m a big image guy, so I sort of skipped step one and jumped straight to step three. I began by designing what would become the “background” projection during moments of neutrality, places in the show where I wanted the projection to exist but not call too much attention to itself. I made a version of this in Adobe Express and pretty quickly realized that the stillness of the image made the show feel static. I exported a few versions and used Premiere to create an MP4 that would keep the energy alive while maintaining that “neutral” feeling. The result is a slow-moving gradient that shifts gently as the show goes on.

My lighting design for the original production was based around a red recording light and a blue tone meant to feel calming; those became my two dominant projection colors. Everything else built off that color story. I also created a recurring visual gag for the phrase “The Words of Triumph,” which was a highlight for audiences.

Once the media was built, I moved into Isadora. This was where the real work of shaping began. While importing the MP4s, I also began remastering the sound design. Some of it had to be rebuilt entirely, and some was carried over from previous drafts as placeholders. The show had 58 cues. I won’t list them all here, but one challenge I faced was how to get the little red dot to appear throughout the show. I had initially imagined it as an MP4 that I could chroma key over a blank scene and then recall with activate/deactivate scene actors. Alex helped me realize that masking was the best solution, and once I got the hang of that, it opened up a new level of design.

All in all, I spent about two weeks on the project. The visual media and updated sound design took about 8–10 hours. Working in Isadora took another 3. By then, I felt really confident navigating the software; it was more about refining what I already understood. I initialized nearly every scene with specific values so it would be consistent regardless of who was operating. I also built some buttons for the stage manager in key moments. For example, when Phil yells, “Did I just see a horse?! Do horses go to Heaven?!” the stage manager presses a button that activates a horse neighing sound.

When it came time to present a sample, I struggled with which moment to show. I chose a segment just before the first emergency alert of the Rapture, a phone call exchange. I thought it was the strongest standalone choice. It features three different projections (caller, emergency alert, and blank screen), has layered sound design, and doesn’t require much setup to understand.

Most of the feedback I received was about the performance itself, which surprised me. I was trying to showcase the Isadora work, but because I didn’t have a stage manager, I had to operate the cues myself, which created some awkward blocking. One suggestion I appreciated was to use a presenter’s remote in the future, so I could trigger cues without disrupting the flow of the scene. There were also some great questions about Phil’s awareness of the projections: should he see them? Should he acknowledge them? That question has stuck with me and influenced how I’ve shaped the rest of the show. Now, rather than pointing to the projections, Phil reacts as though they are happening in real time, letting the audience do the noticing.

Overall, I’m extremely proud of this work. Audience members who had seen earlier drafts of the show commented on how much the projections and sound enhanced the piece. This clip represents the performance version of what was shown in class. Enjoy 🙂


Cycle 2: Up in Flames

               For this project, I wanted to keep stepping up the interactivity and visual polish. I knew I wouldn’t be able to get all the way there with either, but my aim was to keep pushing both. In these cycles, my overall aim isn’t to create one final project, but to use them as opportunities to play and create as much as I can before the semester ends and I have to return all the toys to the Lab. I didn’t do a great job of keeping track of how I spent time on this project. I spent less time than in the past looking for assets, partially because I’m learning how to use my resources better and more quickly. I also found the music I used faster than expected, which cut down on this aspect significantly. I spent most of my time deepening my understanding of actors I’d used before and trying to use them in new ways. Once I’d locked in the main visual, I figured out what to add to increase the interest and then invested some time in designing the iterations of the “embers.”

Exploding Embers

               I mostly explored the Luminance Key, Calc Brightness, Inside Range, and Sequential Triggers actors, seeing what I could figure out with them. I used them in the last cycle, but I wanted to see if I could use them in a way that required less precision from the depth sensor, which was also at play in this project. In the last cycle, I struggled with the depth sensor not being robust enough for the very exact way I wanted to use it; this time, I figured I would use it more to capture motion than precision. I still have to use TouchDesigner to get the sensor to work with Isadora. Because it’s free, I can see myself continuing to explore TouchDesigner after this class to see if I can at least replicate the kinds of things I’ve been exploring in these cycles. For the purposes of class, though, I’m trying to fit in as much Isadora as I can before I have to give the fob back!

Actor Setup

               I think the main challenge continues to be that I want the sensor to be more robust than it is. I think, in many ways, I’m running into hardware issues all around. My computer struggles to keep up with the things I’m creating. But I am working with my resources to create meaningful experiences as best I can.

               With this project, I wanted to add enough to it that it wasn’t immediately discernible exactly how everything was working. In that, I feel I succeeded. While it was eventually all discovered, it did take a few rounds of the song that was playing, and, even then, there were some uncertainties. Sound continues to be a bit of an afterthought for me and is something I want to tie more consciously into my last cycle, if I can leave myself enough time to do so. The impulse I didn’t follow through on was to connect the sound to the sensor once I actually added it. This option was pointed out during the presentation, so it was clearly an opportunity for enhancement. I also think there was a desire to be able to control more of the experience in some way, to be able to affect it more. I think this would have been somewhat mitigated if it was, indeed, tied to the music in some way.

               The presentation itself did immediately inspire movement and interaction, which was my main hope for it. While I did set things up so that getting close would be better, I also think the class felt driven to do that somewhat on their own. I think the choice of color, music, and movement helped a lot with that. The class was able to pick up on the fire aspect of it, as well as the fact that some part of it was reacting to their movement. They also picked up on the fact that the dots were changing size beyond the pulsing they do when in stasis. The one aspect that wasn’t mentioned was that the intensity of the dots was also tied to the sensor. It was brighter when left alone, to draw people toward it, and got less intense when things were closer to the sensor: a kind of inverse relationship. In some ways, the reverse would have encouraged interaction, but I was interested in how people might be drawn to interact with this if it weren’t in a presentation, and landed on this solution.
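
That inverse mapping is simple to express. Here is a small hypothetical sketch of the idea; the 0–100 activity scale and the 0.3–1.0 brightness range are made-up values, not my actual patch:

```python
# Hypothetical sketch of the inverse relationship: more sensor
# activity (someone up close) -> dimmer dots; left alone -> brighter.
# The 0-100 activity scale and 0.3-1.0 range are made-up values.

def intensity(activity, max_activity=100.0, lo=0.3, hi=1.0):
    t = min(max(activity / max_activity, 0.0), 1.0)  # clamp to 0..1
    return hi - t * (hi - lo)

print(intensity(0.0))              # 1.0: nothing nearby, full draw
print(round(intensity(100.0), 2))  # 0.3: up close, least intense
```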

               If I were to take this further, I definitely like the idea of tying the embers to different pieces of the sound/song. There was also interest in increasing the scale of the presentation, which would be something to explore. I think my resources limit my ability to do the sound aspect, at least, but it would be the step I would take if I were going to go further with this. I also liked the idea of different scenes; it is definitely something I was thinking about and would have been interested in adding with more time. I can absolutely see this as a thirty-minute, rotating-scene interactive piece. The final note was that the boxes around the exploding embers could be smoothed out. It’s an issue I’ve run into with other projects and something I would definitely like to address in the future.

Video of Stage

Song Used for Presentation


Pressure Project 3: Going to the Movies

For pressure project 3, I set out to tell a clear story using only sound. I’d done a similar project once before, a handful of years ago, and was in the middle of teaching sound design to my Introduction to Production Design class, so I felt fortunate to have a little experience and to be in the correct mindset. I wanted to make sure there was a clear narrative flow and that the narrative was fleshed out completely with details. I opted for a realistic soundscape, a journey, because that felt like the most logical way to organize the sounds the way I wanted.

In terms of what it would be, I started by thinking broadly about what cultural thing I might create a story for. As I mentioned in class, I did originally consider doing a kind of ancestral origin story. I first considered telling a story about/for my great-grandparents, who came to the United States from Croatia. The initial thought was to follow them from a small town there, across the ocean, to Ellis Island, where they landed, and then perhaps onto the train and to Milwaukee (even though that skips a portion of their story). Ultimately, part of why I abandoned this idea was that it felt too sweeping for two minutes, and I started to realize that a lot of what I would want for that story would have to be recorded rather than found.

In the end, I decided to think much smaller, about a cultural experience specific to me. Because I still had this idea of a “journey” in my head, I came to the idea of walking to the movies. Being able to walk to an independent movie house, preferably an old one, is something I look for whenever we move somewhere new, so it felt like the right way to go. I decided to age up the movie theatre in my head a bit to play up the soundscape; quiet chairs and doors didn’t seem as interesting as someplace that needs a healthy round of WD-40. The theatres from the last three places I’ve lived are pictured below.

Neighborhood Theatres

With a story in mind, I set out to figure out how to actually make the sound story. After a decent amount of digging around, I finally remembered that the program I’d used in the past was Audacity, so I at least knew it would be free and relatively accessible. From there, I searched out sounds on the sound effects websites I’d given my students to use (YouTube Sound Library, Freesound, Free Music Archive, etc.). I used Audacity to get the sound file from the movie I wanted to use at the end of the piece, which came from The Women (1939). The main challenge I faced was that I’ve personally only worked with sound once before, maybe four years ago, when I took a class very much like the one I’m teaching now. The other was that it’s always harder than you want it to be to find the right sound. I was mindful of the relatively limited time I had to work on the project and tried to work as quickly as I could to make a list of the sounds I felt I needed and track them down. I also challenged myself to make the sound locational in the piece, to work with levels and placement so that it felt as if it was actually happening to the listener. In some ways, this was the easiest part of the project. For me, it was easy to pick out when a sound wasn’t where I wanted it to be, or when something was too loud or too quiet to my mind’s ear.

I didn’t keep careful track of how my time worked out or was split up. I did the project, more or less, straight through, moving between tasks as it made sense. I designed it mostly from start to finish, rather than jumping around, finding a sound, placing it, and editing it. I shifted lengths and things like that as I went, and I spent maybe half an hour at the end fine tuning things and making sure the timing made sense.

A Smattering of the Sounds

During the presentation, the class understood that a journey was happening and, eventually, that we were at the movies. They experienced this realization at slightly different points, though it universally seemed to happen once we had gone through the theatre doors and into the theatre itself. They reported that the levels and sound placement worked, that they knew they were inside of an enclosed space. I think the variety of information was also pleasing, in terms of the inclusion of social interactions, environmental information, the transitions between places, etc. I think the inclusion of a visual would have made the sound story come into focus much more quickly. If I had included a picture of a movie theatre, or, better, the painting I had in my head (below), I think the whole journey would have been extremely clear. As it was, once I explained it, I think all the choices made sense and told the story I was trying to tell.

Overall, I think the project was successful, though perhaps in need of visual accompaniment, particularly if this kind of movie going isn’t your usual thing. I also think one of the effects, the popcorn machine in the lobby, needed adjusting; I was worried about it being too loud and overcorrected.

New York Movie (Edward Hopper, 1939)


Cycle 2: It Takes 2 Magic Mirror

My Cycle 2 project was a continuation of my first cycle, which I will finish in Cycle 3. In Cycle 1, I built the base mechanisms for the project to function. My focus in this cycle was to start turning the project into a fuller experience by adding more details and presenting with larger projected images instead of my computer screen.

Overall, there was a great deal of joy. My peers mentioned feeling nostalgic in one of the scenes (pink pastel), like they were in an old Apple iTunes commercial. Noah pointed out that the recurrence of the water scene across multiple experiences (Pressure Project 2 and Cycle 1) has an impact, creating a sense of evolution. Essentially, the water background stays the same but the experience changes each time, starting with just a little guy journeying through space to interacting with your own enlarged shadows.

Video of the beginning of the presentation, as people were just starting to interact with the sensor and shadows.

Alex asked, “How do we recreate that with someone we only get one time to create one experience with?” How do we create a sense of evolution and familiarity when people only experience our work once? I think there is certainly something to coming into a new experience that involves something familiar. I think it helps people feel more comfortable and open to the experience, allowing them the freedom to start exploring and discovering. That familiarity could come from a shared experience or shared place, or even an emotion, possibly prompted by color or soundscape. Being as interested in creating experiences as I am, I have greatly enjoyed chewing on this question and its ramifications.

I got a lot of really great feedback on my project, and tons of great suggestions about how to better the experience. Alex mentioned he really enjoyed being told there was one detail left to discover and then finding it, so they suggested adding that into the project, such as through little riddles to prompt certain movements. There was also a suggestion to move the sensor farther back to encourage people to go deeper into the space, especially to encourage play with the projected shadows up close to the screen. The other major suggestion I got was to use different sounds of the same quality of sound (vibe) for each action. Alex said there is a degree of satisfaction in hearing a different sound because it holds the attention longer and better indicates that a new discovery has been made.

I plan to implement all of this feedback in my Cycle 3. Since I do not think I will be creating the inactive state I initially planned, I want a way to encourage users to get the most out of their experience and discoveries. Riddles are my main idea, but I am playing with the idea of a countdown; I am just unsure how well that would read. Michael said it was possible to put the depth sensor on a tripod and move the computer away so it is just the sensor, which I will do, as this will allow people to fully utilize the space and get up close and personal with the sensor itself. Lastly, I will play with different sounds I find or create, and add fade-ins and fade-outs to smooth the transition from no sound to sound and back.

As I mentioned earlier, the base for this project was already built. Thus, the challenges were in the details. The biggest hurdle was gate logic. I have struggled with understanding gates, so I sat down with the example we walked through in my Pressure Project 2 presentation and wrote out how it worked. I copied the series of actors into a new Isadora file so I could play around with it on its own. I just followed the flow and wrote out each step, which helped me wrap my brain around it. Then I went through the steps and made sure I understood their purpose and why certain values were what they were. I figured out what had confused me in previous attempts at gates and made notes so I wouldn’t forget and get confused again.

After the presentation day, Alex sent out a link (below) with more information about how gates work with computers, and a video with a physical example in an electric plug, which was neat to watch. I think these resources will be valuable as I continue to work with gates in my project.

Khan Academy: https://en.khanacademy.org/computing/computers-and-internet/xcae6f4a7ff015e7d:computers/xcae6f4a7ff015e7d:logic-gates-and-circuits/a/logic-gates

This was my first attempt at a gate, and it does not make any sense. This was before sitting down with my gate example and working out the actual flow of information. It was beneficial in that it helped me see what I was confused about and focus on that. I did not have a way to turn the gate back on, and I did not have the correct actor to read the initial values. I also played with Trigger Value actors and realized they would not work the way I wanted and needed, so I abandoned the idea. I left them in the screenshot for documentation purposes so I wouldn’t forget I had already tried that.
This was my next attempt, in which I realized I was missing the Inside Range actor and had a Limit-Scale actor instead. This was after figuring out the logic. It is not perfect, but after the Cycle 2 presentation, I think the issues I had with consistency were more the result of the sensor not reading well in my environment. Once I got it set up for the presentation, Michael found the sweet spot where he was able to pump his fist and get the scenes to change fairly smoothly each time.
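
For anyone else untangling gates, the behavior I finally wrote out on paper boils down to something very small: a gate passes its input only while it is open, and something has to turn it back on or repeat triggers get blocked. A toy sketch of that logic (not Isadora's actual actors, just the flow I traced):

```python
# Toy model of gate logic, written out the same way I traced the
# actor flow on paper. Not Isadora's internals, just the idea.

class Gate:
    def __init__(self):
        self.open = True

    def send(self, value):
        """Pass value through only while the gate is open."""
        return value if self.open else None

gate = Gate()
print(gate.send(1))   # 1    -> triggers the scene change
gate.open = False     # close the gate so repeat hits do nothing
print(gate.send(1))   # None -> blocked
gate.open = True      # something must turn the gate back on
print(gate.send(1))   # 1    -> works again
```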

Because I spent so much time playing with the gate and the staging, I did not get as far as I wanted with the other aspects. I still need to fix the transparency issue with the shadow and the background, and I realized that my videos and images for them are not all the right size. Aside from fixing the transparency issue, I will probably make my own backgrounds in Photoshop so I can fully ensure contrast between the shadow and background. The main mission for Cycle 3 will be adding in discoverable elements and a way to guide users towards them without giving away how.

The galaxy background is very clearly visible through the shadow, a problem I was not able to fix by simply changing the blend mode. I will likely have to do some research about why this happens and how to fix it.


Pressure Project 3: Surfing Soundwaves

When starting this project, it was a bit daunting and overwhelming to decide which part of my cultural heritage to explore. Theatre initially came to mind, but that felt too easy; I wanted to choose something that would allow me to share a new side of myself with my peers. With graduation on the horizon, I’ve been thinking a lot about home. I grew up near the water, and there’s this unbreakable need in me to be close to it. That’s where I began: sourcing sounds of the ocean on Pixabay. I got so specific as to use only sounds from the Atlantic Ocean when possible, even finding one clip from Florida. But I realized that the ocean itself didn’t fully capture my heritage; it was a setting, not a story. I had to go deeper.

I began thinking about my dad, who was a professional surfer. I’m sure he had aspirations for me to follow a similar path. I felt a lot of pressure to surf as a kid; it was honestly terrifying. But over time, and on my own terms, I grew to love it. Now it’s how my dad and I connect. We sit out on the water together, talk, and when the waves come, we each ride one in, knowing we’ll meet back up. The more I thought about that ritual, the more homesick I felt. Even though the assignment was meant to be sound-based, I found myself needing a visual anchor. I dug through my camera roll and found a photo that captured the calmness and rhythm of beach life, something soft and personal that helped ground my process.

Once I had that anchor, I started refining the design itself. Since this was primarily a sonic piece, I spent about thirty minutes watching first-person surfing videos. It was a surprisingly helpful strategy for identifying key sounds I usually overlook. From that research, I pinpointed four essential moments: walking on sand, stepping into water, submerging, and the subtle fade of the beach as you move away from it. While wave sounds and paddling were expected, those smaller, transitional sounds became crucial to conveying movement and environment.

I then spent another chunk of time combing through Pixabay and ended up selecting sixteen different sounds to track the journey from the car to the ocean. But as I started laying them out, it became clear that I wasn’t telling a story yet; I was just building an immersive environment.

Avoiding that realization for a while, I focused instead on the technical side. I mapped out a soundscape where the waves panned from right to left, putting the listener in the middle of the ocean. I also created a long-form panning effect for the beach crowd, which built up as you walked from the parking lot and slowly faded once you were in the water. I was proud of how the spatial relationships evolved, but I knew something was still missing.
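
For anyone building a similar pan, a common approach (and what I'd assume most editors do under the hood) is an equal-power law, where the left and right gains follow cosine and sine curves so perceived loudness stays constant as the sound travels. A rough sketch of the idea, not tied to any particular editor:

```python
import math

# Equal-power pan sketch: pan = -1.0 (hard left) .. 1.0 (hard right).
# Loudness stays roughly constant because L^2 + R^2 == 1 throughout.

def equal_power_gains(pan):
    angle = (pan + 1.0) * math.pi / 4.0      # map -1..1 to 0..pi/2
    return math.cos(angle), math.sin(angle)  # (left, right)

# A wave traveling right to left over the course of a clip:
for pan in (1.0, 0.5, 0.0, -0.5, -1.0):
    left, right = equal_power_gains(pan)
    print(f"pan {pan:+.1f}: L={left:.2f} R={right:.2f}")
```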

Late in the process, I returned to the idea of my dad. I created an opening sequence with the sound of submerging followed by a child gasping for air. I used a kid’s breath to imply inexperience, maybe fear. To mark a shift in time, I recorded a voicemail similar to the kinds of messages my dad and I send each other, with the general sentiment “I’ll see you in the water.” It was meant to signal growth, as if the child from the beginning was now the adult. I had hoped to use an actual voicemail from my dad, but when I called him, I found his voicemail box wasn’t set up.

I decided to end the piece with the paddling slowing down and the waves settling as I/the character reached dad. I repeated “dad” a couple of times—not in panic, just to get his attention, to make it clear he was present and alive. I used very little dialogue overall, letting the sound do most of the storytelling, but I tried to be economical with the few words I did include.

During the presentation, I felt confident in the immersive quality of the work, but I wasn’t prepared for how the tone might be read. The feedback I received was incredibly insightful. While the environment came through clearly, the emotional tone felt ambiguous. Several people thought the piece might be about someone who had passed away, either the child or the father. I had added a soft, ethereal track underneath the ocean to evoke memory, but that layer created a blurred, melancholic vibe. One person brought up the odd urgency in the paddling sounds, which I completely agreed with. I had tried to make my own paddling sounds in my kitchen sink, but they didn’t sound believable, so I settled on a track from Pixabay. 

Looking back, I know I missed the mark slightly with this piece. It’s hard to convey the feeling of home when you’re far from it, and I got caught up in the sonic texture at the expense of a clearer narrative. That said, I still stand by the story. It’s true to me. When I shared it with my wife, she immediately recognized the voicemail moment and said, “I’ve heard those exact messages before, so that makes sense.” 

If I had more time, I would revisit the paddling and find a better way to include my dad’s voice, either through an actual message or a recorded conversation. That addition alone might have clarified the tone and ensured that people knew this was a story about connection, not loss.


Pressure Project 3: Soundscape of Band

For this pressure project, we were tasked with creating a two-minute soundscape that represents part of our culture or heritage. I chose to center my project around my experience in band. My original plan had been to take recordings during rehearsals to create the soundscape, but none of my recordings saved properly, so I had to get creative about where I sourced my sound.

Upon realizing I did not have any recordings from rehearsal, I had a moment of panic. Fortunately, some of our runs from rehearsal are recorded and uploaded to Carmen Canvas so we can practice the music for the upcoming Spring Game show. I also managed to scrounge up some old rehearsal recordings from high school and a recorded playing test for University Band. I had also done a similar project for my Soundscapes of Ohio class last year, and still had many of my sound samples from that (the first and likely last time my inability to delete old files comes in handy).

Then I had the idea to use videos from the Band Dance, meaning I had to scour my camera roll and Instagram so I could screen record posts and convert them to mp3 files. This process worked quite well, but it was no small task sorting through Instagram without getting distracted. It was also a challenge to not panic about having no recordings, but once I found this new solution, I had a lot of fun putting it together.

I spent about the first hour finding and converting video files into audio files and putting them into GarageBand. I made sure everything was labeled in a way that I would be able to immediately understand. I then made a list of all the audio files I had at my disposal.

The next 45 minutes were used to plan out the project, since I knew going straight to GarageBand would get frustrating quickly. I listened to each recording and wrote out what I heard, which was helpful when it came to putting pieces together. I also started to clip longer files into just the parts I needed. I made an outline so I knew which recordings I wanted to use. It was a struggle to limit it to just what would fit into two minutes, but it was helpful going in with a plan, even if I had to cut it down.

During the next 45-minute block, I started creating the soundscape. I already had most of the clips I needed, so this time was spent cutting down to the exact moments I needed everything to start and stop, and figuring out how to layer sounds to create the desired effect. About halfway through this process, I listened to what I had and was close to the two-minute mark. I did not have everything I wanted, so I cut down the first clip and cut another entirely. I did a quick revision of my outline now that I had a better understanding of my timing.

This is a screenshot of part of my (almost illegible) notes I took on the audio clips. I mostly recorded the vibes to help me figure out how to make the piece flow as I fit clips together.

The last half hour was dedicated to automation! I added fade ins and outs in the places I felt were most necessary for transition moments. I wanted to prioritize the most necessary edits first to ensure I had a good, effective, and cohesive soundscape. I had plenty of time left, so I went through to do some fine tuning, particularly to emphasize parts of the sound clips that were quieter but needed to have more impact.

This screenshot of GarageBand shows some of the automation I added (the thin horizontal lines) to add fade ins/outs and emphasize certain parts of the clips, particularly the CBJ Crowd clips.

I wanted my soundscape to reflect my experience. As such, I started with a recording from high school of my trio rehearsing before our small ensemble competition. This moment was about creating beautiful music. It fades into the sound from my recorded playing test for University Band last year, where I was also focused on producing a good sound. 

There is an abrupt transition from this into Buckeye Swag, which I chose to represent my first moment in the Athletic Band, a vastly different experience from any previous band experience I had. The rest of the piece features sounds from sports events, rehearsals, and the band dance, and it includes music we have made, inside jokes and bits, and our accomplishments.

I wanted to include audio from the announcers at the Stadium Series game because that was a big moment for me as a performer. I marched Script on Ice in front of more than 94,000 people in the ‘Shoe, performing for the same crowd as twenty one pilots (insane!). My intent with that portion was to make it feel overwhelming. I cut the announcement into clips with the quotes I wanted and layered them together to overlap slightly, with the crowd cheering in the background. I wanted it to feel convoluted and jumbled, like a fever dream almost.

In creating this project, I came to realize what being in band really means to me. It is about community, almost more so than making music. Alex pointed this out during the discussion based on how the piece progresses, and I was hoping it would read like that. At first, it was about learning to read and make music. When I got to high school, I realized that no matter where I went, if I was in band, I had my people. So over time, band became about having a community, and it became a big part of who I am, so I wanted to explore that through this project.

Aside from personal reflection, I wanted to share the band experience. My piece represents the duality of being in concert and pep band, both of which I love. There is a very clear change from concert music to hype music, and the change is loud and stark to imitate how I felt going to my first rehearsal and first sporting events with the band. By the end, we are playing a high-energy song while yelling “STICK!” at a hockey game, completely in our element.

A collage of pictures that represent my band experience, which acted as inspiration and a guide.

Cycle 1: Big Rainbow Piano

For Cycle 1, I started by identifying a few resources I wanted to explore. Since attending the Touch Designer workshop at the beginning of the semester, I have been searching for an opportunity to utilize the software and explore the tools that were provided to us by SudoMagic. I was able to get my hands on the “Geo Zone Detector” COMP, which allows you to easily draw a rectangular volume and use the point cloud data from a depth sensor to detect when a person or object is occupying that zone.

I decided to use the basic framework of a human-sized piano keyboard to test this system. I began by drawing the “geo zones” in the COMP to be approximately the size of a person, one meter by two meters by three meters. This was roughly the width and height of a person, with enough length to occupy a human-sized “key.” The next step was to align the depth sensor in the Motion Lab. As part of the “patch lab” in the Touch Designer workshop, I learned to use the Point Transform TOP to orient the point cloud to the Motion Lab x, y, z coordinate system.
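
Conceptually, that alignment is a rigid transform: rotate the sensor's points into the room's axes, then translate them to the room origin. A minimal numpy sketch of the same math; the angle and offset here are placeholders, not my actual calibration values:

```python
import numpy as np

# Sketch of aligning sensor points to room coordinates: rotate about
# the vertical axis, then translate. The angle and offset below are
# placeholders, not the actual Motion Lab calibration.

def align(points, yaw_degrees, offset):
    """Rotate points (N x 3) about the y (up) axis, then translate."""
    a = np.radians(yaw_degrees)
    rot = np.array([[ np.cos(a), 0, np.sin(a)],
                    [ 0,         1, 0        ],
                    [-np.sin(a), 0, np.cos(a)]])
    return points @ rot.T + np.asarray(offset)

cloud = np.array([[0.5, 1.2, 2.0]])  # one point from the sensor
print(align(cloud, yaw_degrees=30, offset=[1.0, 0.0, -2.5]))
```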

Touch Designer Geo Zone Detector

Once I had the point cloud and spatial trigger boxes aligned and working in unison, I tried to attach a specific sound frequency to each “geo zone.” I started by using the Audio Oscillator CHOP. This produced a pure sine wave whose frequency I could dial in exactly to each note on the piano. Getting this to work with the Geo Zone COMP was straightforward, but I did have to alter the settings for the zone “trigger threshold” so that it would detect just an arm or a leg and not only the whole person.
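
The frequencies themselves come from the standard equal-temperament formula, f = 440 * 2^((n - 69) / 12), where n is the MIDI note number. A quick sketch of the kind of values I was dialing in:

```python
# Equal-temperament frequencies for the white keys of one octave,
# using the standard MIDI formula: f = 440 * 2**((n - 69) / 12).

def midi_to_hz(note):
    return 440.0 * 2 ** ((note - 69) / 12)

# C4..C5 white keys (middle C is MIDI note 60; A4 = 69 = 440 Hz)
for name, note in [("C4", 60), ("D4", 62), ("E4", 64), ("F4", 65),
                   ("G4", 67), ("A4", 69), ("B4", 71), ("C5", 72)]:
    print(f"{name}: {midi_to_hz(note):.2f} Hz")
```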

The pure sine wave proved that the concept was working. However, I was disappointed in the tonality of the sound, and I wanted to find a way to trigger an actual piano. I wanted a solution that wasn’t too complicated and did not require too much additional hardware. I have dabbled with GarageBand for many years and was well aware of the plethora of virtual instruments available in the software. My previous experience with a MIDI keyboard led me to believe this would be a plug-and-play type of solution.

Unfortunately, the world of MIDI and Apple drivers is not exactly “plug and play,” but there is a pretty easy-to-use Audio MIDI Setup app that lets you see the devices plugged into your computer. A few Google searches led me to the IAC, or Inter-Application Communication, driver. This allows one software program to send MIDI messages to another without any MIDI hardware attached, which is exactly what I needed. However, the MIDI Out CHOP in Touch Designer does not fill in the necessary message formatting to talk directly to other devices. A few additional searches led me to the simple channel-plus-note syntax. Once I inserted all of the triggers in this format, the Geo Zone Detector was triggering a MIDI keyboard in GarageBand. Success! I spent over an hour listening to different synths and virtual instruments before I landed on “Gritty Bells,” which had a pleasant tone but also a rhythmic component that essentially “played a song” when you hit different notes together.
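
To illustrate the route, here is the same idea sketched in Python with the mido library standing in for the MIDI Out CHOP; “IAC Driver Bus 1” is just the default bus name, so substitute whatever you created in Audio MIDI Setup:

```python
import mido

# Sketch: send note messages to GarageBand through the IAC driver.
# "IAC Driver Bus 1" is the default bus name from Audio MIDI Setup;
# substitute whatever your bus is actually called.
out = mido.open_output("IAC Driver Bus 1")

# Trigger middle C on channel 1 when a geo zone activates...
out.send(mido.Message("note_on", channel=0, note=60, velocity=100))
# ...and release it when the zone clears.
out.send(mido.Message("note_off", channel=0, note=60))
```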

In an effort to connect the Isadora skills I had already learned, I wanted to draw some graphical boxes similar to the “shapes” actor. I found the Rectangle SOP to be essentially the same thing: you can draw a box and also define a line color, fill color, and line weight. This process created the “keys” of the keyboard. In addition to triggering MIDI notes, I connected the same triggers to the “fill” parameter of the boxes. The end result was the key lighting up and playing the note simultaneously.

Finally, I projected the boxes on the ground and tweaked the keystone to line up with the boxes already defined in the Geo Zone Detector. I decided to roll out a strip of white Marley to help the top-down projection reflect off of the floor, and then refined the keystone and geo zones further so they aligned. This was a bit of a trial-and-error operation. The Marley was a fixed size, so I started by aligning the projections and then scaled the geo zones to match. I could easily tweak each section, or the entire point cloud, using the Point Transform TOP.

When it came time to present, I added some color projection on the MOLA cylinder scrim. These scrims were already in place for Noah’s cycle 1, so I decided at the last minute to add a rainbow gradient projection onto the cylinder. I had this idea previously, and I got lucky having just enough prep time to quickly make the gradient match the keyboard and load it into Resolume utilizing a previous projection map (resource).

I made everyone leave the lab before they saw the finished cycle 1. This helped to reset the vibe after the somewhat chaotic setup process. Some people had already seen the basic setup, but there were also some new guests who had never experienced the installation or the Motion Lab. Everyone seemed to approach the installation timidly, but once they were all there, everyone took turns playing notes and testing the reactivity of the “piano.” Some people tried to play a song, and others were trying to find the limits of the depth sensor to see how little of their presence triggered a note and how high or low their foot or hand needed to be.

The feedback I received was generally positive. People enjoyed the colorful and playful nature of the setup, as well as the pleasant tones emitted by the speakers. Someone said it reminded them of a Fisher-Price xylophone, which, I must admit, was not something I was going for, but unintentionally recreated almost exactly! Others said they enjoyed being able to step back and watch people interact and play, since there was essentially a “play” space on the Marley and an “observer” space outside the rectangle. Some others commented that more interactive possibilities would have been interesting, such as a different response based on the height of the hand or foot that triggered the notes.

For Cycle 2 I plan on using the same basic concept, but abstracting the entire field to be a little less rectangular and less obviously a type of piano. I have been experimenting with a spatial synthesizer that can utilize the immersive sound system in the Motion Lab. I also plan to add more “geo zone” boxes to increase the sensitivity and interactive possibilities.


Cycle One: Immersive Theatre

For cycle one, I wanted to take the tools I had been equipped with over the first half of the course and utilize them in a theatrical context. Knowing that I would only be working with a bite-sized chunk of work, I decided to revisit a play I had developed in a composition class my first year: a 20-ish minute play called The Forgotten World of Juliette Warner. It is a piece that juxtaposes the heroine’s journey with the stages of Alzheimer’s disease. A theatrical convention of this piece in its initial construction was an ever-changing set, meant to reflect the mind of the titular character, where nothing is truly settled. Having actors move literal mats and blocks constantly was a barrier to establishing the suspension of disbelief. So, recently trained in projection mapping, I developed a score around mapping the environments and bringing the audience inside the world.

My original score:


Resources Needed:

  • Software: Isadora
  • Hardware: Projector, long HDMI cables
  • Space & Physical Elements: MOLA access, boxes, rolling boards, or curtains

Steps to Achieve This:

  1. Organize blocks, rolling boards, or curtains in the MOLA space as the base setup.
  2. Map the projector to these surfaces.
  3. Design and program projections to depict the shifting realities of Juliette’s world.
  4. Create a control board in Isadora for ease of access.
  5. Source actors.
  6. Rehearse.
  7. Present.

In my original score, I had anticipated projecting onto flat surfaces and possibly a curtain. But after our Motion Lab demonstration, I observed a circular track for curtains, which I was immediately drawn to. So for the first two days, with the gracious help of my peer Michael, I worked to understand routing NDI sources to 3 projectors in the motion capture lab. Through trial and error, we overcame a major barrier on day 2: when sending an NDI signal over a closed network, many computers, such as mine, will not send the signal if a firewall is enabled. After disabling the firewall, I was off to the races.

In Isadora, I utilized EJ McCarthy’s focus grid to understand how to map the curtains properly. This was a meticulous effort that took nearly a whole class. I find that I can often get so focused on the specifics of the work that I forget to take a step back and look at the big picture. So, towards the end of class, I threw up some stock footage on all 3 curtains, and to my surprise, I found that nearly everything is more forgiving than the focus grid.

With my surfaces properly mapped for projection, it was time to turn to the script. This piece has always made me nervous because I want to handle its difficult subject matter with as much care as possible. So, to avoid alienating my peers/the audience, I selected a brief snippet from earlier in the play that revolves around a repeated daily occurrence: grabbing coffee. I felt that both the environment and the interaction would be familiar enough to put audiences at ease while also providing a great opportunity to show the evolution of Juliette’s mind. When writing this scene, I found that it occurred at these stages of the heroine’s journey/Alzheimer’s development:


STAGE 3 – Noticeable Memory Difficulties (3A. The Awakening)

STAGE 4 – More Than Memory Loss (3B. Preparing for The Journey)


With one day left in class to work on this project, it was time to generate. Although I did not have this software initially in my score, I decided that Adobe Premiere Pro would be the best canvas to create this scene. I sourced stock footage and audio from both Adobe and Pixabay (an EXCELLENT source if you haven’t checked it out).

I had to source footage that could suggest a coffee shop without needing to be in full focus, since I didn’t want the projections to be a focal point for the audience. I eventually settled on a nice loopable clip. To make the transition gradual, I started the scene with a circular Gaussian blur at the center and, over the course of the two-and-a-half-minute scene, allowed it to encompass the entire video. I then created a sound design based on stock noises. With the audience on the inside of the curtains, I felt it was important to surround them not only visually, but sonically. I utilized surround reverbs and panning to allow sounds to come from specific points in the room.

I moved this scene into my Isadora file, where it replaced the focus grid and easily projected on all 3 surfaces.

On the cue line “my mind,” I set up a second scene in Isadora, which would be the doctor’s office. I used a similar approach to the coffee shop, but reversed the blur effect. I did this to intentionally throw off the audience, to tell them that we were somewhere different, somewhere with much more sterile light, but I slowly allowed that to be revealed over time.

With my projections designed, it was time to implement actors. I did source a few actors who agreed to take part in a staged reading of the scene. Given the nature of a class project, all my actors eventually backed out, which led me to scramble for new ones. When I presented the piece in class, I was only able to give my actors their given circumstances and the instruction to follow their impulses. This created a sense of a scene, but led to some confusion in character dynamics and audience attention. For my personal taste, it created some clunky staging, but I was so thankful to have 2 actors who were gracious enough to jump in; with a brief rehearsal, we could have ironed this out.

In the feedback, which was extremely valuable, I learned that there was room to go further with the visual design. While my peers found the current projections and sound design immersive, the same visual on all 3 surfaces created an uncanny blend that actually took the audience out of it a bit. That being said, I did receive feedback that my approach was tasteful and that the blur effect, while discreet, was noticed. Perhaps my biggest takeaway was that there is a real opportunity to continually define and redefine the audience relationship. The coffee shop very much sent the message “I want you to be a part of this,” but the doctor’s office provides an opportunity to flip that on its head and push the audience out. When I continue to work with this project in cycle 3, I will explore how lighting can be a tool for achieving this effect. The question I will investigate is, “When can I afford to truly leave the audience in the dark?”

Overall, I am happy with the shape this project took. While it did not look at all how I originally intended, I was pleased to stretch my muscles with NDI and projection mapping at the same time while providing a unique theatrical experience for the audience. I laid the groundwork for a compelling piece, and with an updated score and a bit more time, I can lean into the aspects of this project that were lost to time.