Pressure Project 2: Japanese Sign
Posted: October 18, 2025 | Filed under: Uncategorized | Tags: Pressure Project

As soon as I heard the requirements for this pressure project, I immediately knew I wanted to use a livestream. Not because I didn't want to physically observe people in the real world, but because a livestream allowed me to go anywhere in the world. This seemed like a great idea until I started looking for livestreams. I found a playlist of Japanese livestreams on YouTube, and as I was going through them I realized one major flaw: there really weren't many interactive systems to observe.
There was a famous livestream of a major Tokyo intersection, but this seemed too impersonal and generic. A smaller intersection, while more personal, was still just as generic. (As a side note, barely anyone ever jaywalked in the streams I watched, even when there were clearly no cars at all.) I ended up going with a stream of people walking down a street. There were shops, restaurants, many signs, and it all looked very… Japanese. I chose this one because, on top of looking very characteristic of Japan, there was a sign in a great location for observing. Additionally, I watched from 12-1pm (their time), so there was a lot of activity happening. See below for the sign in a screenshot of the livestream.

What made this sign a good candidate to observe, though? Firstly, as I said above, it's framed in a way where you can tell when people interact with it. Many of the other signs aren't at street level, and since we can't see exactly where people's eyes are looking, you can't tell which sign they are looking at. The sign being on the ground makes it very clear when people look at it. Next, although you can't really tell in the screenshot, this sign has lights around it that flash in a particular way. This was the "system" that people would interact with. Below is a mockup of exactly what the flashing light pattern looked like:

Now you may be thinking: is this really an interactive system? Perhaps it's a bit of a stretch, but first let's cover how people interacted with this sign and with signs in general. In my opinion, there are four key interactions:
Interaction #1:
A person doesn't even see the sign. This is the worst-case scenario. Either our potential observer was too busy looking at their phone or was in a rush to get through the street; who knows. In the end, our sign had no influence on them at all…
Interaction #2:
A person sees the sign but doesn't look. This is what I believe to be the most common interaction. I know I said the first type of interaction was the worst-case scenario, but in a way this one feels worse. People could tell there was a sign there, and maybe even glanced at it to see what it was, but they were so indifferent to its existence that they simply didn't care to look further.
Interaction #3:
A person sees and processes what is going on with the sign but does not stop walking. This is a great improvement over the first two interactions. People are interested enough in the sign to become curious about what it is. This interaction comes in a range but has an ideal scenario: the head turn. If someone turned their head to read the sign, that means it was so interesting to them that, up until the point where they physically could not see the sign anymore, they were still looking. There is room for improvement here though, as these people still walked past the sign when the time came.
Interaction #4:
A person stops to look at the sign. This is the best-case scenario. A person was so interested in the sign that whatever reason they were walking for became overridden. They must learn more about this sign. This is the only acceptable scenario. I will now redesign the sign to accomplish this goal…

Simple Changes
Assuming we want to keep things tame with the changes, let’s focus on the lights before adding new components.
Possible Change #1:
Make the light pattern more coherent and interesting. In the mock-up above, you can see the lights vaguely suggest a clockwise pattern; adding more states and making the rotation unmistakably clockwise could make people more likely to look, if only to follow the pattern.
Possible Change #2:
Add more colors. A pattern of on/off is alright, but a pattern of different colors is definitely more likely to get people’s attention. This also adds an entire new layer to the lights, and that added complexity could keep people’s gaze longer.
Possible Change #3:
Make the pattern flashy. If the pattern has many lights turning on and then off in quick succession, people may be more likely to look, especially someone who isn't really paying attention, as a sudden burst of activity may steal their focus.
Intense Changes
The simple changes are largely superficial. While they may get people to look more often and for longer, they’re less likely to get people to stop, which of course is the only goal that matters.
Possible Change #4:
Add many varied colorful and random patterns. The idea here is that there are just so many crazy lights and patterns and flashes occurring that people can’t possibly understand everything happening in the time it takes to walk down the street. People will have to stop in order to get the full pattern, if there even is one.
Possible Change #5:
Add speakers and proximity detectors to the sign. A speaker that just makes noise could get people to glance over, but if the audio is proximity-based and makes noise depending on people's distance from the sign, the personal aspect is more likely to get people to actually stop. The sign could say things like "Look at me!" or "You there, stop immediately!" in reaction to someone getting close, and in many scenarios that person will stop.
Possible Change #6:
Make the lights extremely bright. Now, this could have the opposite effect, as people can't look at something that's too bright because it hurts, but a light that is extraordinarily bright could cause people to stop in surprise. Although, again, looking away is not ideal, even if they stop.
Stop Them No Matter What
It’s still possible that the above changes won’t stop people. But what can we do to ensure that people stop no matter what?
Possible Change #7:
Add a fake crowd of people in front of the sign. It should really look like everyone wants to see this sign. How could anyone walking down the street resist the intrigue of what they could be looking at? They may even join the crowd and then strengthen its attraction towards other people…
Possible Change #8:
Add a piece of currency on a string on the ground in front of the sign. When people try to grab the money, the sign retracts it back into itself. The act of having to bend down and grab at the string will stop someone, and once they are stopped, they'll likely look at the sign in either intrigue or confusion.
Possible Change #9:
All other changes have a chance of failure. In this change, a motorized system is added to the sign's wheels that allows it to move back and forth freely. A motion detector tracks people's movement and moves the sign to block their path, so they physically must stop and look at the sign. This is the ultimate solution. I suggest all sign manufacturers invest immediately!

After presenting, the class discussed a few things that are worth noting. It was questioned whether people are really "interacting with an automated computer system" by simply looking at a sign; however, the changes I made, especially the ones related to proximity, easily bring the system as a whole up to that specification. In terms of a less invasive approach, proximity lights were brought up as a possible idea. I had moved this idea over to the audio, but it could easily work with lights as well. For example, maybe the color changes depending on your distance from the sign, or maybe more and more lights turn on the closer you get. Either of these could definitely get a person to stop, especially if they notice that they are the ones controlling the lights.
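Out of curiosity, here is what that proximity-light mapping could look like as a minimal Python sketch. Everything in it, the sensor reading, the light count, the distance range, is an invented assumption rather than part of any real sign.

```python
# Hypothetical sketch of the proximity-light idea: the closer a passerby
# gets, the more of the sign's lights turn on. The light count and
# distance range are made up for illustration.

NUM_LIGHTS = 12
MAX_DISTANCE = 5.0   # meters; beyond this the sign stays dark

def lights_for_distance(distance_m: float) -> int:
    """Map a passerby's distance to how many lights to turn on (closer = more)."""
    if distance_m >= MAX_DISTANCE:
        return 0
    closeness = 1.0 - distance_m / MAX_DISTANCE   # 0.0 far .. 1.0 touching
    return round(closeness * NUM_LIGHTS)

print(lights_for_distance(1.5))  # 8 of 12 lights at 1.5 m away
```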
This was definitely a fun project. I was a little disappointed that I couldn’t find something more interesting in a livestream, but I was satisfied with how I was able to spin something extremely simple into something a bit more complex.
Pressure Project 1: Bouncing Idle Screen
Posted: October 15, 2025 | Filed under: Uncategorized | Tags: Isadora, Pressure Project

The idea for this pressure project came to me based on the "achievements" that Alex gave us to work towards. At first I was concerned about how I could possibly keep an audience engaged for over 30 seconds with something non-interactive. But then I thought about something completely non-interactive that does keep people engaged: the DVD bouncing idle screen. I specifically remember watching it myself whenever I saw it, and I knew other people liked doing it too from internet memes and pop-culture references. This idea seemed perfect, as it could definitely be done within Isadora.
The only issue was that it didn't feel like it would be enough work for 5 hours. I then decided that because this idle screen plays on a TV, I could simulate someone changing channels. My first thoughts for other channels were full static and the color bars, since obviously I couldn't animate anything too complex (although maybe a live camera feed could have worked…). This was when I started creating the project in Isadora.
The first thing I made was the TV. I wanted an actual TV to be visible instead of just using the edges of the stage, partly because it looks nicer but mostly because it makes the whole thing feel more authentic. I also wanted it to be in the classic 4:3 resolution that old CRT TVs had. Another aspect of these older TVs that I wanted to emulate was the curved corners of the screen (technically the entire screen bulges out, but this is a 2D representation). With that plan in mind, I created the first TV with two boxes: the outer TV casing and the screen. I made the outer casing a darkish grey hue, and the screen a sort of dark grey/green that I associate with turned-off TVs of this type (the screen also has a thick black border so the image doesn't jump straight from outer casing to screen). The first issue came with adding the curved corners of the screen. The best way I could figure out to do this was to use a shape with odd insets, as that was the closest thing to a negative-space curve. The issue with this, however, was that it couldn't be layered under the border while on top of the screen, as those were both represented by a single square. See below:

To solve this, I recreated the border casing as four individual rectangles so that the layering would allow the corner shape to sit on top of the screen and under the border. This also allowed the entire TV to have softer edges, as the rectangles ended up not perfectly flush. The TV was also made into a user actor where the color of the screen was controllable. The completed turned-off TV is below:

Next was the main attraction: the bouncing idle screen. The first thing I did was create a default white square. I used two wave generators for its vertical and horizontal position, with the wave generators in triangle mode since the movement should stay constant the entire time. To my surprise, this immediately worked in making the square bounce around the screen exactly as I wanted, except that it was bouncing around the borders of the entire stage. After some scaling and position subtracting (mostly done through trial and error), the square correctly bounced within the TV.
Now that I had something bouncing, it was time to make it change colors every time it hit an edge. I did this by using an Inside Range actor connected to the wave generators. Every time the wave generators left the range of 0.5-99.5, it sent out a signal, which corresponds exactly to when the shape bounces off a wall. I then connected this signal to three Random actors and connected those to a Color Maker RGBA actor's red, green, and blue values to generate a random color for the shape. Now every time the square bounces off a wall, it also changes color.
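For anyone curious how this behaves outside of Isadora, below is a rough Python sketch of the same triangle-wave bounce and edge-triggered recolor logic. The wave periods and frame rate are placeholder values, not the ones from my patch; only the 0.5-99.5 edge band comes from the description above.

```python
import random

# Rough sketch of the bounce-and-recolor logic in plain Python rather than
# Isadora actors. Two triangle waves sweep the X and Y positions between
# 0 and 100; whenever either axis leaves the 0.5-99.5 band (a wall hit),
# three random values make a new RGB color.

def triangle(t: float, period: float) -> float:
    """Triangle wave in [0, 100]: a linear sweep up, then back down."""
    phase = (t / period) % 1.0
    return 100 * (2 * phase if phase < 0.5 else 2 * (1 - phase))

color = (255, 255, 255)
was_at_edge = False
t = 0.0
while t < 60.0:                       # simulate 60 seconds
    x = triangle(t, period=7.3)       # unequal periods so the path
    y = triangle(t, period=5.1)       # doesn't repeat too quickly
    at_edge = not (0.5 < x < 99.5) or not (0.5 < y < 99.5)
    if at_edge and not was_at_edge:   # fire once per bounce, like Inside Range
        color = tuple(random.randint(0, 255) for _ in range(3))
    was_at_edge = at_edge
    t += 1 / 30                       # 30 updates per second
```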
The final thing I needed to do was replace the default square shape with something else. I didn’t want to recreate the original exactly, so I replaced the “DVD” text with “ACCAD” and just put a small empty oval underneath it similar to the original. I turned this into a user actor to simplify the views and after a few more operations it looked great. See below:

I was extremely happy with how this turned out, but I still needed a bit more to complete the project. The next thing I created was the static screen. At first I wanted it to be as accurate as possible by having a large number of shapes act as pixels, but this quickly proved impossible. At one point I had over a hundred little "pixels" that would each get a random black-and-white color and position on the screen, but the lag this caused was too great to continue. Not to mention the fact that it looked horrible! I then briefly thought about using several images of a static screen and cycling between them, but I couldn't remember how to display and swap images, and it seemed like the easy way out anyway. I ended up using a couple dozen large "pixels" to simulate a sort of static. By coincidence, they ended up in a pattern similar to the color bars, so I was satisfied enough. The squares simply get a random black or white color at a pretty fast frequency. See below:

The last screen I made was the color bars. This was very simple, as it was just static colors, although getting the positions exactly right was a little annoying at times.
Finally, I decided to simulate the TV turning off as it felt like a pretty iconic old TV action and a satisfying conclusion. For this animation, I used two wave generators set to sawtooth and to only play once. One wave generator squishes a big white square vertically into a line, and then the other squishes it horizontally until it’s gone. The end result was surprisingly satisfying! See below for the color screen into turning off:

Now that I had all the scenes complete, I needed to link them together. For the idle screen, I started a counter for the number of times the logo bounces off the top or bottom wall; after 20 bounces it switches to static. For both the static and the color bars, I simply had a pulse generator activating a counter to switch after a set number of pulses. There was probably a better way to do this, but I was running out of time and there was one more thing I wanted to do.
The very last thing I added was channel text to the corners of the static and color bar scenes. This would further signify that this was a TV the viewer was looking at. Ideally, this would be solid and then slowly fade away, but given the time crunch it was just a very slow constant fade. Because these scenes only play briefly, it isn’t too noticeable.
The complete (although slightly shortened) result can be seen below:

The feedback I received on this project was amazing! It seemed like everyone made at least some noise while it was playing. One person said they were getting physically engaged in the idle bounces. Some people didn't realize it was a TV until it changed channels, which actually surprised me, as it seemed obvious given the context of the idle bouncing. I hadn't thought about how someone who wasn't completely familiar with it wouldn't know what was happening or what the border was supposed to represent. I was extremely happy when someone pointed out the curved corners of the screen, as I thought nobody would even notice or care about them. There were feelings of nostalgia and anticipation among the viewers as well.
This pressure project was a ton of fun! Isadora is just a blast to create things with and pushing its capabilities is something I loved exploring. If I had more time, I definitely could have done a lot more with this project, but I’m looking forward to creating more interactive experiences in future projects!
PP1 - Randomness and Dark Humor
Posted: September 25, 2025 | Filed under: Uncategorized

The day our Pressure Project 1 was assigned, I was immediately excited about the possibilities waiting for me in the endeavor to make someone laugh. We had 15 minutes of class left, and I began working on the patch. My first thought was to create 'a body' through the shapes actors and have something absurd happen to that body. As I was creating the shapes, I began changing the fill colors. The first color I tried was blue, and that made me think of making the body drink water, get full of water, and then have something happen with that water that creates some type of burst.
While I liked the idea at the moment, it wasn’t funny enough for me. After I sat down to work on it longer, I recreated my ‘body’ and stared at it for some time. I wanted to make it come alive by giving it thoughts and feelings beyond actions. I knew I had to have some randomness to what I was doing for the random actor to make sense to me.
So I turned inward. My MFA research utilizes somatic sensations as a resource for creative expression through a queer lens. The inward-outward contrast, the alignments and misalignments, are exciting for me. I enjoy training my mind to look at things in non-normative ways, as both a queer and a neurodivergent artist. While I have a lot of coherent thoughts relative to my situations, I sometimes have hyperfixations or interests in random stuff many people might not think about.
I wanted my Isadora 'body' to be hyperfixated on magic potions. I wanted it to be consumed by the thought of magic potions in a way that led to some sort of absurd outcome, hence the randomness. I searched for magic potion images with .png extensions and found one that I wanted to use. After adding that image, I needed a 'hand' to interact with the potion, so I searched for a .png image of a hand.


To help my 'body' convey its inner experiences, I decided to give it a voice through the text draw actor and added short captions to my scenes. The next part was giving my magic potion a storyline so I would have two characters in my story. I achieved that by showing how the magic potion affected the body beyond the body's own initiated actions, carrying the magic potion from a passive role to an active role.



I connected a wave generator to the magic potion's width, which created a spinning visual, and connected another wave generator to the head's intensity, which created a sense of lightheadedness/dizziness, or some type of feeling funny/not normal.
In the next scene, the head of my body disintegrates after consuming the magic potion. I achieved that with an explode actor.

To exaggerate the explosion and the effect of the magic potion on the person, I connected a random actor to the explode actor and a pulse generator to the random actor.
The last scene reveals the dark truth of the story, using humor. The body disappears, and the only thing in the scene is the magic potion with its inner voice (through text draw) for the first time. I needed to give my magic potion a facial expression, so I searched for a .png image of a smiley face that I could layer on top of the previous image. After finding the image I liked, I looked at my scenes and found myself laughing alone in my room. That's when I decided my work on this project had been satisfactory on my end, and I stayed within the 5-hour limit we had to work on it.

In my presentation, everything went according to plan on my end, and the expected achievement of making someone laugh was accomplished, as I heard people making noises in reaction to the scenes, especially the final scene.
There was feedback on the scale of the images. I worked on the project on my personal computer and presented on the big screen in the Motion Lab. Because I hadn't projected it there before, the images were very big, especially given the proximity of the audience to the screen. But I received the feedback that because the texts were short and readable, both in attention span and time-wise, it still worked.
I am quite content with how the process went and how the product ended up. Having used Isadora in previous classes, building on my skills is very exciting to me. I usually don’t use humor in my artistic works, but I had a craving for it. With the goal of making someone laugh, using Isadora as a canvas or a ‘stage’ for storytelling and connecting beyond the ‘cool’ capabilities of the software was the part I enjoyed the most in this process.
Pressure Project 1 – The best intentions….
Posted: September 21, 2025 | Filed under: Uncategorized | Tags: PP1

I would like to take a moment and marvel that a product like Isadora exists. That we can download some software and, within a few hours, create something involving motion capture and video manipulation is simply mind-blowing. However, I learned that Isadora makes it very easy to play with toys without fully understanding them.
The Idea
When we were introduced to the Motion Lab, we connected our computers into the "magic USB" and were able to access the room's cameras, projectors, etc. I picked a camera to test and randomly chose what turned out to be the ceiling-mounted unit. I'm not sure where the inspiration came from, but I decided right then that I wanted to use that camera to make a Pac-Man-like game where the user would capture ghosts by walking around on a projected game board.
The idea evolved into what I was internally calling the "Fish Chomp" game. The user would take on the role of an anglerfish (the one with the light bulb hanging in front of it). The user would have a light that, if red, would cause projected fish to flee, or, if blue, would cause the fish to come closer. With a red light the user could "chomp" a fish by running into it. When all the fish were gone, a new, much bigger fish would appear that ignored the user's light and constantly chased them, trying to chomp the user. With the user successfully chomped, the game would reset.
How to eat a big fish? One bite at a time.
To turn my idea into reality, it was necessary to identify the key components needed to make the program work. Isadora needed to identify the user and track their location, generate objects that the user could interact with, process collisions between the user and the objects, and process what happens when all the objects have been chomped.
User Tracking:

The location of the user was achieved by passing a camera input through a chroma key actor. The intention was that by removing all other objects in the image the eyes actor would have an easier time of identifying the user. The hope was that the chroma key would reliably identify the red light held by the user. The filtered video was then passed to the eyes++ actor and its associated blob decoder. Together these actors produced the XY location of the user. The location was processed by Limit-Scale actors to convert the blob output to match the projector resolution. The resolution of the projector would determine how all objects in the game interacted, so this value was set as a Global Value that all actors would reference. Likewise, the location of the user was passed to other actors via Global Values.
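As a rough illustration of what that Limit-Scale step does, here is a Python sketch of the same clamp-and-scale conversion. The 0-100 input range is an assumption about the blob decoder's output, and the resolution is a placeholder for the Global Value.

```python
# Sketch of the Limit-Scale conversion: clamp the blob position into its
# input range, then map it linearly onto projector pixels. The 0-100 input
# range is assumed; the resolution stands in for the shared Global Value.

PROJ_W, PROJ_H = 1920, 1080   # placeholder projector resolution

def limit_scale(value: float, in_min: float, in_max: float,
                out_min: float, out_max: float) -> float:
    """Clamp value to [in_min, in_max] and map it linearly to [out_min, out_max]."""
    value = max(in_min, min(in_max, value))
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

def blob_to_pixels(blob_x: float, blob_y: float) -> tuple[float, float]:
    return (limit_scale(blob_x, 0, 100, 0, PROJ_W),
            limit_scale(blob_y, 0, 100, 0, PROJ_H))

print(blob_to_pixels(50, 25))  # (960.0, 270.0): center-x, upper-quarter-y
```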
Fish Generation:

The fish utilized simple shape actors, with the intention of replacing them with images of fish at a later time (unrealized). The fish actor used wave generators to manipulate the XY position of the shape, with either the X or Y generator updated with a random number that would periodically change the speed of the fish.
Chomped?

Each fish actor contained within it a user actor to process collisions with the user. The actor received the user location and the shape position, subtracted their values from each other, and compared the ABS of the result to a definable "kill radius" to determine if the user got a fish. It would be too difficult for the user to chomp a fish if their locations had to be an exact pixel match, so a comparator was used to compare the difference in location to an adjustable radius received from a global variable. When the user and a fish were "close enough" together, as set by the kill radius, the actor would output TRUE, indicating a successful collision. A successful chomp would trigger the shape actor to stop projecting the fish.
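The same "close enough" test, sketched in Python; the coordinates and radius in the example are arbitrary.

```python
# Sketch of the kill-radius test described above: the absolute difference
# between user and fish positions on each axis is compared to an adjustable
# kill radius instead of demanding an exact pixel match.

def chomped(user_x: float, user_y: float,
            fish_x: float, fish_y: float,
            kill_radius: float) -> bool:
    """True when the user is within kill_radius of the fish on both axes."""
    return (abs(user_x - fish_x) <= kill_radius and
            abs(user_y - fish_y) <= kill_radius)

print(chomped(510, 300, 530, 320, kill_radius=40))  # True: within 40 px on both axes
```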
Keeping the fish dead:

The user and the fish would occupy the same space only briefly, causing the shape to reappear after their locations diverged again. To keep the fish from coming back to life, they needed memory to remember that they got chomped. To accomplish this, logic actors were used to construct an SR AND-OR latch. (More info about how they work can be found here: https://en.wikipedia.org/wiki/Flip-flop_(electronics).) This latch, when triggered at its "S" input, causes the output to go HIGH, and critically, the output will not change once triggered. When the collision detection actor recognized a chomp, it would trigger the latch, thus killing the fish.
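A software stand-in for that latch might look like the following sketch; the all() call at the end mirrors the 4-input AND gate described below.

```python
# Sketch of the fish-memory logic: a software stand-in for the SR latch
# built from Isadora logic actors. Once set by a chomp, the output stays
# HIGH regardless of what the collision detector does afterwards.

class SRLatch:
    """Set-reset latch: set() drives the output HIGH and it stays there."""
    def __init__(self) -> None:
        self.output = False

    def set(self) -> None:       # wired to the collision detector's TRUE
        self.output = True

    def reset(self) -> None:     # used when the game resets
        self.output = False

# Four latches, one per fish; all() plays the role of the 4-input AND gate
# that triggers the scene change once every fish has been chomped.
fish_latches = [SRLatch() for _ in range(4)]
fish_latches[2].set()                                       # fish #3 chomped
scene_change = all(latch.output for latch in fish_latches)  # False until all four
```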
All the fish in a bowl:

The experience consisted of the user and four fish actors. For testing purposes, the user location could be projected as a red circle. The four fish actors projected their corresponding shapes until chomped. When all four fish actors' latches indicated that the fish were gone, a 4-input AND gate would trigger a scene change.
We need a bigger fish!


When all the fish were chomped, the scene would change. First, an ominous pair of giant eyes would appear, followed by the eyes turning angry with the addition of some fangs.
The intention was for the user to go from being the chomper to being the chomped! A new fish would appear and chase the user until a collision occurred. Once it did, the scene would change again to a game-over screen.

The magic wand:

To give the user something to interact with, and for the EYES++ actor to track, a flashlight was modified with a piece of red gel and a plastic bubble to make a glowing ball of light.
My fish got fried.
The presentation did not go as intended. First, I forgot that the motion lab ceiling webcam was an NDI input, not a simple USB connection like my test setup at home. I decided to forgo the ceiling camera and demo the project on the main screen in the lab while using my personal webcam as the input. This meant that I had to demo the game instead of handing the wand to a classmate as intended. This was for the best as the system was very unreliable. The fish worked as intended, but the user location system was too inconsistent to provide a smooth experience.
It took a while, but eventually I managed to chomp all the fish. The logic worked as intended, but the scene change to the Big Fish eyes ignored all of the timing I put into the transition. Instead of taking several seconds to show the eyes, it jumped straight to the game over scene. Why this occurred remains a mystery as the scenes successfully transitioned upon a second attempt.
Fish bones
In spite of my egregious interpretation of what counted as "5 hours" of project work, I left many ambitions undone. Getting the Big Fish to chase the user, using images of fish instead of shapes, making the fish swim away from or towards the user, and the idea of adding sound effects were all discarded like the bones of a fish. I simply ran out of time.
Although the final presentation was a shell of what I intended, I learned a lot about Isadora and what it is capable of doing, and I consider the project an overall success.
Fishing for compliments.
My classmates had excellent feedback after witnessing my creation. What surprised me the most was how my project ended up as a piece of performance art. Because of the interactive nature of the project, I became part of the show! In particular, my personal anxiousness stemming from the presentation not going as planned played as much a part of the show as Isadora. Much of the feedback was very positive, with praise given for the concept, the simple visuals, and the use of the flashlight to connect the user to the simulation in a tangible way. I am grateful for the positive reception from the class.
Bumping Alisha’s post
Posted: August 28, 2025 | Filed under: Uncategorized
Reading Alisha Jihn's Cycle 3: PALIMPSEST post and viewing the accompanying videos was insightful. It provided a clear example of how projections can be used in a concert dance context. I've encountered Alisha's work before, both in movement and technology, and seeing her process, her iterations, questions, and curiosities, resonated with me.
Her approach reminded me that stepping away from a piece and returning to it later can offer fresh eyes and new ideas. It also reinforced the idea that just because I can add more to a work doesn't mean I should. Sometimes, subtraction can be more effective than addition.
Bumping Old Discussion
Posted: August 26, 2025 | Filed under: Uncategorized

I found Peter's pressure project utilizing an Arduino to be an interesting idea. Using the microcontroller as a means of creating unusual interfaces is fantastic. He mentions that the hardware caused things to change "on screen," which is particularly fascinating. I assume that the Arduino was providing input to some other application that processed the audio/video aspect of the project. I'm curious whether, in addition to what was described, the Arduino could also receive information from the A/V application while serving as an input device. For example, could touching the fruit controllers also cause lights or motors to activate based on instructions from the computer controlling everything?
Cycle 3 – Interactive Immersive Radio
Posted: May 1, 2025 | Filed under: Uncategorized | Tags: Cycle 3

I started my cycle three process by reflecting on the performance and value action of the last cycle. I identified some key resources that I wanted to use and continue to explore. I also decided to focus a bit more on the scoring of the entire piece, since many of my previous projects were very loose and open-ended. I was drawn to two specific elements based on the feedback I had received previously. One was the desire to "play" the installation more like a traditional instrument. This was something I had deliberately avoided in past cycles, so I decided it was about time to give it a try and make something a little more playable. The other element I wanted to focus on was the desire to discover hidden capabilities and "solve" the installation like a puzzle. Using these two guiding principles, I began to create a rough score for the experience.

In addition to using the basic MIDI instruments, I also wanted to experiment with some backing tracks from a specific song, in this case Radio by Sylvan Esso. In a previous project for the Introduction to Immersive Audio class, I used a program called Spectral Layers to "un-mix" a pre-recorded song. This process takes any song and attempts to separate the various instruments into isolated tracks, with varying degrees of success. It usually takes a few tries, experimenting with various settings and controls, to get a good-sounding track. Luckily, the program lets you easily unmix and separate track components and re-combine elements to get something fairly close to the original. For this song I was able to break it down into four basic tracks: vocals, bass, drums, and synth. The end result is not perfect by any means, but it was good enough to capture the general essence of the song when played together.

Another key element I wanted to focus on was the lighting and the general layout and aesthetic of the space. I really enjoyed the Astera Titan Tubes that I used in the last cycle and wanted to try a more integrated approach to triggering the lighting console from Touch Designer. I received some feedback that people were looking forward to a new experience after previous cycles, so that motivated me to push myself a little harder and come up with a different layout. The light tubes have various mounting options, and I decided to hang them from the curtain track to provide some flexibility in placement. Thankfully, we already had the resources in the Motion Lab to make this happen easily. I used spare track rollers and some tie-line and clips left over from a previous project to hang the lights on a height-adjustable string, which ended up working really well. This took a few hours to put together, but I think this resource will definitely get used in the future by people in the Motion Lab.

In order to make the experience "playable," I decided to break the bass line out into its component notes and link the trigger boxes in Touch Designer to correspond to the musical score. This turned out to be the most difficult part of the process. For starters, I needed to quantify the number of notes and the cycles they repeat in. Essentially, this broke down into 4 notes, each played 5 times sequentially. Then I also needed to map the boxes that would trigger the notes into the space. Since the coordinates are Cartesian x-y and I wanted the boxes arranged in a circle, I had to figure out a way to extract the location data. I didn't want to do the math, so I used my experience in Vectorworks as a resource to map out the note score. This worked out pretty well, and the resulting diagram has an interesting design aesthetic itself. My first real-life attempt in the Motion Lab worked as planned, but actually playing the trigger boxes in time was virtually impossible. I experimented with various sizes and shapes, but nothing worked perfectly. I settled on some large columns that a body would easily trigger.
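For what it's worth, the layout math that Vectorworks solved for me can also be done in a few lines. Here is a hypothetical Python sketch that spaces the 20 trigger boxes (4 notes, each repeated 5 times) evenly around a circle; the note names, radius, and center are placeholders, not values from the actual installation.

```python
import math

# Hypothetical sketch of the trigger-box layout: 20 boxes (4 notes, each
# played 5 times sequentially) spaced evenly around a circle, expressed in
# the Cartesian x-y coordinates Touch Designer expects.

NOTES = ["A", "C", "D", "E"]   # placeholder bass-line notes
RADIUS = 3.0                   # circle radius in meters (placeholder)
CX, CY = 0.0, 0.0              # center of the circle

boxes = []
for i in range(20):
    angle = 2 * math.pi * i / 20          # evenly spaced around the circle
    x = CX + RADIUS * math.cos(angle)
    y = CY + RADIUS * math.sin(angle)
    boxes.append((NOTES[i // 5], round(x, 3), round(y, 3)))

for note, x, y in boxes:
    print(f"{note}: ({x}, {y})")
```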
The last piece was to link the lighting playback to the Touch Designer triggers. I had some experience with this previously, and more recently I have been exploring the OSC functionality more closely. It took a few tries, but I eventually sent the correct commands and got the results I was looking for. Essentially, I programmed all the various lighting looks I wanted to use on "submaster" faders and then sent the commands to move those faders. This allowed me to use variable "fade times" by using the "Lag" CHOP in Touch Designer to control the on and off rate of each trigger. I took another deep dive into the ETC Eos virtual media server and pixel-mapping capabilities, which was sometimes fun and sometimes frustrating. It's nice to have multiple ways to achieve the same effect, but it was sometimes difficult to find the right method based on how I wanted to layer everything. I also maxed out the "speed" parameter, which was unfortunate because I could not match the BPM of the song, even though the speed was set to 800%.
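For anyone wanting to try the same submaster trick without Touch Designer, here is a minimal Python sketch using the python-osc library. The /eos/sub/<n> address reflects ETC's published Eos OSC conventions as I understand them, and the IP, port, and submaster numbers are placeholders; verify them against your own console before relying on this.

```python
from pythonosc.udp_client import SimpleUDPClient

# Sketch of submaster control over OSC in place of Touch Designer's OSC
# output. Console IP and receive port are placeholders for this setup.

eos = SimpleUDPClient("192.168.1.10", 8000)   # console address, OSC rx port

def set_submaster(number: int, level: float) -> None:
    """Move an Eos submaster fader to a level between 0.0 and 1.0."""
    eos.send_message(f"/eos/sub/{number}", max(0.0, min(1.0, level)))

set_submaster(1, 1.0)    # snap lighting look 1 full on
set_submaster(2, 0.35)   # bring look 2 to 35%
```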

I was excited for the performance and really enjoyed the immersive nature of the suspended tubes. Since I was the last person to go, we were already running way over on time and I was a bit rushed to get everything set up. I had decided earlier that I wanted to completely enclose the inner circle with black drape. This involved moving all 12 curtains in the lab onto different tracks, something I knew would take some time, and I considered cutting it since we were running behind schedule. I'm glad I stuck to my original plan and took the extra 10 minutes to move things around, because the black void behind the light tubes really increased the immersive qualities of the space. I enjoyed watching everyone explore and try to figure out how to activate the tracks. Eventually, everyone gathered around the center and the entire song played. Some people did run around the circle and activate the "bass line" notes, but the connection was never officially made. I also hid a rainbow light cue in the top center that was difficult to activate. If I had a bit more time to refine, I would have liked to hide more "Easter eggs" around the space. Overall, I was satisfied with how the experience was received and look forward to possible future cycles and experimentation.
Cycle 3: It Takes 3
Posted: May 1, 2025 | Filed under: Uncategorized | Tags: Cycle 3, Interactive Media, Interactive Shadow, Isadora, magic mirror

This project was the final iteration of my cycles project, and it has changed quite a bit over the course of three cycles. The base concept stayed the same, but the details and functions changed as I received feedback from my peers and shifted my priorities for the project. I even made it so three people could interact with it.
I wanted to focus a bit more on the sonic elements as I worked on this cycle. I started having a lot of ideas about how to incorporate more sound, including adding soundscapes to each scene. Unfortunately, I ran out of time to fully flesh out that particular idea and didn't want to incorporate a half-baked version and end up with an unpleasant cacophony. But I did add sonic elements to all of my mechanisms. I kept the chime when the scene became saturated, as well as the first time someone raised their arms to change a scene background. I added a gate so this only happened the first time, to control the sound.
A new element I added was a Velocity actor that caused the image inside the silhouettes to explode; when it did, it triggered a Sound Player with a POP! sound. This pop was important because it drew attention to the explosion, indicating that something happened and that something they did caused it. The Velocity actor was also plugged into an Inside Range actor set to trigger a riddle at a velocity just below the range that triggered the explosion.
The other new mechanism I added was based on one user's proximity to the sensor. The z-coordinate data for Body 2 was plugged into a Limit-Scale Value actor to translate the coordinate data into numbers I could plug into the volume input, making the sound louder as the user gets closer. I really needed to spend time in the space with people so I could fine-tune the numbers, which I ended up doing during the presentation when it wasn't cooperating. I also ran into the issue of needing that Sound Player to not always be on, as that would have been overwhelming. I decided to have the other users raise their hands to turn it on (it was actually only reading the left hand of Body 3, but for ease of use and riddle-writing, I just said both other people had to have their hands up).
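Here is a minimal sketch of that z-to-volume mapping, assuming the sensor reports depth in meters; the near/far range and volume limits are placeholders that would still need the same in-the-space tuning described above.

```python
# Sketch of the z-to-volume mapping, assuming Body 2's depth arrives in
# meters. Range endpoints are placeholders to be tuned in the space.

Z_NEAR, Z_FAR = 0.8, 4.0      # meters: closest and farthest useful readings
VOL_MIN, VOL_MAX = 0.0, 1.0   # volume input range on the Sound Player

def volume_for_z(z_m: float) -> float:
    """Louder as the user approaches: full volume at Z_NEAR, silent at Z_FAR."""
    z = max(Z_NEAR, min(Z_FAR, z_m))
    nearness = (Z_FAR - z) / (Z_FAR - Z_NEAR)   # 1.0 close .. 0.0 far
    return VOL_MIN + nearness * (VOL_MAX - VOL_MIN)

print(volume_for_z(1.0))  # ~0.94: nearly full volume just inside Z_NEAR
```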
I continued adjusting the patch for the background-change mechanism (raising Body 1's right hand changes the silhouette background, and raising the left hand changes the main background). My main focus here was making the gates work so a change happens only once while the hand is raised (the gate doesn't reopen until the hand goes down), so I moved the gate in front of the Random actor in this patch. As I reflect on this, I think I know why it didn't work: I didn't program the gate to turn on based on hand position; it only holds the trigger until the first one is complete, which is pretty much immediate. I think I would need an Inside Range actor to tell the gate to turn on when the hand is below a certain position, or something to that effect.
I sat down with Alex to work out some issues I had been having, such as my transparency issue. This was happening because the sensor was set to colorize the bodies, so Isadora was seeing red and green silhouettes. This was problematic because the Alpha Mask looks for white, so the color was not allowing a fully opaque mask. We fixed this with the addition of an HCL Adjust actor between the OpenNI Tracker and the Alpha Mask, with the saturation fully down and the luminance fully up.
The other issue Alex helped me fix was the desaturation mechanism. We replaced the Envelope Generators with Trigger Value actors plugged into a Smoother actor. This made for smooth transitions between changes because it allowed Isadora to make changes from where it's already at, rather than from a set value.
The last big change I made to my patch was the backgrounds. Because I was struggling to find decent-quality images of the right size for the shadow silhouettes, I took one image that looked nice as a reference and created six simple backgrounds in Procreate. I wanted them to have bold colors and sharp lines so they would stand out against the moving backgrounds and have enough contrast both saturated and desaturated. I also decided to use recognizable location-based backdrops, since the water and space backdrops seemed to elicit the most emotional responses. In addition to the water and space scenes, I added a forest, mountains, a city, and clouds rolling across the sky.
These images worked really well against the realistic backgrounds. It was also fun to watch the group react, especially to the pink scene. They got really excited if they got a sparkle to land full and clear on their shadow. There was also a moment where they thought the white dots in the rainbow and purple scenes were a puzzle, which could be a cool idea to explore. I even had an idea to create a little bubble-popping game in a scene with a zoomed-in bubble as the main background.
The reactions I got were overwhelmingly positive and joyful. There was a lot of laughter and teamwork during the presentation, and they spent a lot of time playing with it. If we had more time, they likely would have kept playing and figuring it out, and probably would have loved a fourth iteration (I would have loved making one for them). Michael specifically wanted to learn it well enough to manipulate it, especially to match up certain backgrounds (I would have had them go in a set order, because accomplishing this at random would be difficult, though not impossible). Words like "puzzle" and "escape room" were thrown around during the post-experience discussion, which is what I was going for with the riddles I added to help guide users.
The most interesting feedback I got was from Alex, who said he had started to experience himself "in third person." What he meant was that he referred to the shadow as himself while still recognizing it as a separate entity. If someone crossed in front of the other, the sensor stopped being able to see the back person and "erased" them from the screen until it re-found them. This prompted that person to often go "oh look, I've been erased," which is what Alex was referring to with his comment.
I’ve decided to include my Cycle 3 score here as well, because it has a lot of things I didn’t get to explain here, and was functinoally my brain for this project. I think I might go back to this later and give some of the ideas in there a whirl. I think I’ve learned enough Isadora that I can figure out a lot of it, particularly those pesky gates. It took a long time, but I think I’m starting to understand gate logic.
The presentation was recorded in the MOLA so I will add that when I have it :). In the meantime, here’s the test video for the velocity-explode mechanism, where I subbed in a Mouse Watcher to make my life easier.
Cycle Three: The Forgotten World of Juliette Warner
Posted: April 30, 2025 | Filed under: Uncategorized

For my third cycle, I wanted to revisit my cycle one project based on the feedback I had received, which centered mostly on the audience's relationship with the projections. One sticking point from that earlier cycle was that all three projectors featured the same projected surface. This choice was originally made as a preventative measure to keep Isadora's load low. That being said, the first point of focus for my cycle three project was determining whether three separate projected video streams would be sustainable. Once I confirmed that they were, I began sourcing media for the entire script.
After gathering my media, I moved on to an element I've wanted to incorporate since cycle one but hadn't felt fully ready to tackle: depth sensors. I used the Orbec sensor placed in the center of the room, facing down (bird's-eye view), and defined a threshold that actors could enter or exit, which would then trigger cues. I accessed the depth sensor through a pre-built patch (thank you, Alex and Michael) in TouchDesigner, which I connected to Isadora via an OSC Listener actor. This setup allowed Isadora to receive values from TouchDesigner and use them to trigger events. With this in mind, I focused heavily on developing the first scene to ensure the sensor threshold was robust. I marked the boundary on the floor with spike tape so the performers had a clear spatial reference.
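As a sketch of what that threshold logic might look like in code, here is a small Python class that fires only on enter/exit transitions rather than continuously; the threshold value and the direction of comparison are assumptions for illustration.

```python
# Sketch of the enter/exit threshold logic, fired only on transitions so a
# performer standing inside the boundary doesn't retrigger the cue every
# frame. The threshold value and comparison direction are assumptions.

class ThresholdCue:
    def __init__(self, threshold: float) -> None:
        self.threshold = threshold
        self.inside = False

    def update(self, depth_value: float) -> str | None:
        """Return 'enter' or 'exit' on a crossing, otherwise None."""
        now_inside = depth_value < self.threshold   # e.g., a body under the sensor
        event = None
        if now_inside and not self.inside:
            event = "enter"
        elif self.inside and not now_inside:
            event = "exit"
        self.inside = now_inside
        return event

boundary = ThresholdCue(threshold=1.5)   # placeholder depth value
for reading in (2.0, 1.2, 1.3, 2.1):     # simulated sensor values
    print(boundary.update(reading))      # None, enter, None, exit
```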
Outside of patch-building, I devoted significant time to rehearsing with three wonderful actors. We rehearsed for about three hours total. Because class projects sometimes fall through due to cancellations, as happened with my first cycle, I wanted to avoid that issue this time around. It was important to me that the actors understood how the technology would support their performances. To that end, I began our first rehearsal by sharing videos that illustrated how the tech would function in real time. After that, we read through the script, discussed it in our actor-y ways, and I explained how I envisioned the design unfolding across the show.
In our second rehearsal, we got on our feet. I defined entrances and exits based on where the curtains would part and placed mats to indicate where the bench would be in the Motion Lab. These choices helped define the shared "in-the-round" experience for both audience and performers.
Over the weekend, I worked extensively on my patch. The full script juxtaposes the Heroine's Journey with the stages of Alzheimer's, so I wanted the audience to know which stage they were in as the performance unfolded. Using a combination of enter-scene triggers, trigger delays, envelopes, and text-draw actors, I created a massive patch for each scene that displayed the corresponding stage. This was by far my biggest time investment.

When I was able to get into the lab space with my actors for a mini tech run, I realized that these title cards, which I had spent so much time on, were not serving the immersive experience, and, to be honest, they were ugly. The entire concept of the piece was to place the audience alongside Juliette on her journey, sharing her perspective via media. Having text over that media disrupted the illusion, so I cut the text patches, and Isadora's performance load improved as a result.
I spent another two hours between tech and performance fine-tuning the timing of scene transitions, something I had neglected earlier. This led to the addition of a few trigger delays for sound to improve flow.
When it came time to present the piece to a modest audience, I was surprisingly calm. I had come to terms with the idea that each person's experience would be unique, shaped by their own memories and perspectives. I had my own experience watching from outside the curtains, observing it all unfold. There's a moment in the show where Juliette's partner Avery gives her a sponge bath, and we used the depth sensor to trigger sponge noises during this scene. I got emotional there for two reasons. First, the performer took his time, and it came across as such a selfless act of care, exactly as intended. Second, I was struck by the realization that this was my final project of grad school, an evolution of a script I wrote my first year at OSU. It made me appreciate just how much I've grown as an artist, not just in this class, but over the past three years.

The feedback I received was supportive and affirming. It was noted that the media was consistent and that a media language had been established. While not every detail was noticed, most of the media supported the immersive experience. I didn't receive the same critiques I had heard during cycle one, which signaled a clear sense of improvement. One comment that stuck with me was that everyone had just experienced theatre. I hadn't considered how important that framing was. It made me reflect on how presenting something as theatre creates a kind of social contract: it's generally not interactive or responsive until the end. That's something I'm continuing to think about moving forward.
Once the upload is finished, I will include it below:
Cycle 3: Grand Finale
Posted: April 29, 2025 | Filed under: Uncategorized

For the final project, I decided to do some polishing on my Cycle 2 project and then add to it to create a more complete experience. During feedback for Cycle 2, a few people felt the project could be part of a longer set, so I decided to figure out where that set might go. This let me keep playing with some of the actors I had been using before, while also seeing how I could alter the experience in various ways to keep it interesting. I again spent less time than I did at the beginning of the semester looking for assets; I knew what I was looking for more quickly, and I knew the vibe I wanted to maintain from the last cycle. While I, for the second cycle in a row, neglected to actually track my time, a decent portion of it was spent exploring effects and actors I hadn't used before that could alter the visuals in smaller ways, by swapping out one actor for another. As always, a great deal of time went into adjusting and fine-tuning various aspects of the experience. Unlike in the past, far, far too much time was spent trying to recover from system crashes, waiting for my computer to unfreeze, and generally crossing my fingers that my computer would make it through a single run of the project without freezing up. Chances seemed slim.

Cleaned Up and Refined Embers

Flame Set-up
I again worked with the Luminance Key, Calc Brightness, Inside Range, and Sequential Trigger actors. Sequential Triggers, in particular, are a favorite because I like tying "cues" together and then being able to reset the experience without anyone needing to touch a button. Throughout the class, I've been interested in making experiences that cycle and don't have an end, things that can be entered at any time and abandoned at any point. To me, this most closely mimics the way I've encountered experiences in the wild. Think of any museum video, for instance. They run on a never-ending loop, and it's up to the viewer to decide whether to wait for the beginning or just watch through until they return to where they entered the film. I want something that people can come back to and that isn't strictly tied to a start and a stop. I want people to be able to play at will, wait for a sequence they particularly enjoyed to come back around, leave after a few cycles, etc. I am still using TouchDesigner and a depth sensor to explore these ideas. This time around, I also added the TT Edge Effect, Colorizer, and Sound Level Watcher++ actors to the mix. I was conservative with the Sound Level Watcher because my system was already getting overloaded at that point, and while I wanted to incorporate it more, I wanted to err on the side of actually being able to run my show.

"Neuron" Visuals

"Neuron" Stage Set-up
The main challenge I faced this time was that my computer is starting to protest, greatly, and Isadora decided she had had enough. I ran into errors a number of times, and while I didn't lose a significant amount of work, I did lose some, and the time lost to the program crashing, restarting my computer in hopes it would help, and various other troubleshooting attempts was not insignificant. In retrospect, I should have found ways to scale back my project so my computer could run it smoothly, but everything was going fine until it very much wasn't. I removed a few visual touches and pieces to try to get the last scene, in particular, to work, and I'm hoping that will be enough.

Initial Set-up of Final Scene

Final Scene Visuals before Edits
For this project, I was mostly focused on expanding my Cycle 2 project into a longer experience. I don't know that the additional scenes ended up as developed as the first, but I think they still added to the experience as a whole. I worked hard to incorporate music to make the experience more holistic, as well. As stated above, I couldn't do so to the level I would have liked, but I hope the impulse toward it is recognized and appreciated, such as it is. I also wanted to address some of the visual notes I received in the last cycle, which I was able to do with relative ease. I tried to find new ways for people to interact with the project in the added scenes, but I feel they are a little lacking in the interactive department. Ultimately, it was a combination of still struggling with Isadora logic and a lack of ideas. I still feel my creativity in these projects is somewhat stymied by my beginner status with media. I can't quite get to the really creative exploration yet, because I'm still figuring out how to make things work and look somewhat finished. I'm satisfied with what I've been able to put together; I just wish I were able to create at a higher level (it's an unrealistic wish, but a wish, nonetheless).
If I were to continue working on this project, I would increase the interactivity in each scene and make things more sound-reactive. I would continue to look for different ways people could play with the scenes. I think there's a world where something like this is maybe 5 or 6 scenes long and runs 20-30 minutes. This one runs maybe 15 minutes with three scenes, which might work well with more interactivity woven in. Regardless, I'm happy with what I managed to put together and all I managed to cram into my noggin in one semester.
ADDENDUM: I ended up cutting the third scene of the experience before I presented it. By the time I removed enough of the effects to get it to run smoothly, it felt too derivative, and I decided to stick with the two scenes that were working and "complete." If I had more time, I would work on getting the third scene down to a simpler patch that was still engaging.