Cycle 3 Patrick Park

For the last cycle I prepared a play space where the music evolves with the participant's position or triggers one-shot samples. The song I composed plays throughout the entire experience. If the audience moves closer to the camera, the voice switches from the regular recording to a pitched-up voice; vice versa, when the participant moves away from the camera, the singing switches back to normal. In the middle of the space there were triggers that played notes in the scale the song was written in. In the front there were triggers that turned on echoes and delays (although they did not activate this time). In the back there were 808 bass drums and a snare sound. My plan was to create an "out of bounds" area where every track would play in reverse. This did not happen because getting the main interactive functions to actually work took a long time. In this cycle there was more excitement and urge to interact in the room than in the last couple of cycles. Making sound triggers that interact together with a song is a fun idea, and I hope to keep developing it.
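In code terms, the zone logic looks roughly like this (a minimal Python sketch; the distance thresholds and OSC addresses are placeholders, not the actual Isadora/Max/MSP patch):

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # Max/MSP listening for OSC (assumed port)

def on_depth(distance_m: float):
    """Map the participant's distance from the camera onto the song."""
    if distance_m < 1.0:                                  # front zone
        client.send_message("/voice/pitch", "up")         # pitched-up voice
        client.send_message("/fx/echo_delay", 1)          # echoes and delays
    else:
        client.send_message("/voice/pitch", "normal")     # singing back to normal

    if 1.0 <= distance_m < 2.5:                           # mid zone
        client.send_message("/trigger/scale_note", 1)     # notes in the song's scale
    elif distance_m >= 2.5:                               # back zone
        client.send_message("/trigger/808_snare", 1)      # 808 bass drum and snare
```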


Cycle 3 by Jenna Ko

I used Isadora and Astra Orbbec for an experiential media design where a tunnel zooms in and out as the audience walks toward the projection surface.

The tunnel represents my state of mind as I mentally suffer from the news of the Russian invasion of Ukraine, the prosecution reform bill in Korea, and marine pollution. The content begins with an empty tunnel. As the participant walks toward the projection surface, the news content fades in. I wanted to articulate my feelings of hopelessness and powerlessness through the monochromatic content that fills the tunnel. As the participant walks towards the end of the tunnel, hopeful imagery of faith in humanity fades in. That content is in color, contrasting with the tunnel interior.
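A minimal sketch of the two-stage fade (the actual design is built in Isadora; the mapping below is an assumption about how the layers could crossfade):

```python
def layer_opacities(progress: float):
    """progress: 0.0 at the tunnel entrance, 1.0 at the end of the tunnel.

    The monochrome news fills the first half of the walk; the colorful,
    hopeful imagery fades in over the second half.
    """
    news_opacity = min(progress * 2.0, 1.0)
    hope_opacity = max(progress * 2.0 - 1.0, 0.0)
    return news_opacity, hope_opacity
```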

Finally, I was able to implement the motion-sensing function with my media design. If I were to do this again, I would set up the projector in a physical tunnel for a more immersive user experience.


Cycle 2 by Jenna Ko

I used Isadora and Astra Orbbec for an experiential media design where the projected content reverses through Korean political history as a participant walks towards the motion detector.

The content begins with superficial imagery of modern Korea: K-pop, kimchi, and elections. It then reverses through the 2017 impeachment and candlelight vigil, the democratization movement in the 1980s, President Park Jeong-hee and the Miracle on the Han River, and the 6.25 Korean War, ending with the Korean independence movement. As political polarization intensifies, I wanted to remind viewers of each party's contributions to Korean democracy and prosperity, to encourage more respect and gratitude towards each other. This piece reflects my hope that politicians and supporters on each side will cooperate to make a better country rather than condemning and dragging each other down.

I used the yin-yang symbol of the Korean national flag to represent the two political parties. The yin-yang symbol represents the complementary nature of contrary forces and the importance of their harmonization. I translated this meaning into political colors, with red representing the conservative party and blue symbolizing the liberal party. Synchronized with whichever side triumphs in each era, the yin and yang alternate as the participant walks towards the motion sensor, until the participant reaches the Korean independence movement, where the symbol becomes whole.
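A hypothetical sketch of how the sensor's distance reading could step backward through the scenes (the scene list comes from the description above; the thresholds are assumptions):

```python
SCENES = [
    "modern Korea (K-pop, kimchi, elections)",
    "2017 impeachment and candlelight vigil",
    "1980s democratization movement",
    "Park Jeong-hee / Miracle on the Han River",
    "6.25 Korean War",
    "Korean independence movement",   # the yin-yang symbol becomes whole here
]

def scene_for(distance_m: float, near: float = 0.5, far: float = 3.5) -> str:
    """The closer the participant walks to the detector, the further back in time."""
    t = min(max((far - distance_m) / (far - near), 0.0), 1.0)
    return SCENES[min(int(t * len(SCENES)), len(SCENES) - 1)]
```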

I could not make the motion-sensing part work, so I triggered the scenes manually for the presentation and saved the motion-sensing function for Cycle 3.


Cycle 2 Documentation Patrick Park

In Cycle 2 I was able to capture motion data in Isadora and send it to Max/MSP to play the audio. Five tracks built into the whole song. I divided the space into five sections, with each track triggered by the participant's presence in front of the camera: guitar, bass drum, snare, hi-hat, and a pitched-up voice each played when the participant stood in the corresponding area. There were also triggers that activated delays, echoes, and distortion, but they were not set off during the experience. This was better designed than the last cycle; nonetheless, I could see room for improvement. I realized that having sound play only some of the time can be very frustrating, and the song should play throughout the experience. The sound should not stop at all. Rather than placements triggering the tracks, the song should be playing all the time. For the next cycle I plan on adding one-shot instrument samples that can be activated and played along with the song in the background.
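The Isadora-to-Max/MSP handoff can be pictured as sending a gain value whenever the camera sees the participant in a section; a hedged sketch with assumed OSC addresses, not the actual patch:

```python
from pythonosc.udp_client import SimpleUDPClient

# The five tracks, in the order of the floor sections
TRACKS = ["guitar", "bass_drum", "snare", "hi_hat", "pitched_voice"]
client = SimpleUDPClient("127.0.0.1", 8000)   # Max/MSP's listening port (assumed)

def on_presence(section: int, present: bool):
    """Unmute a track while the participant stands in its section (0-4)."""
    client.send_message(f"/track/{TRACKS[section]}/gain", 1.0 if present else 0.0)
```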


Cycle 1 Documentation

For the first cycle I envisioned a motion-detected soundscape. The furthest point from the motion camera played a beach ocean sound; as the audience got closer, the beach sound changed to an underwater sound. Lastly, when the participant got close to the camera, it activated a salsa song. Through this experiment I realized that having only sound respond to movement is not too stimulating. There were only three sound sources, and I feel I could have done more with this. Part of the issue that came up while working on Cycle 1 was that Isadora could not handle audio plugins being processed through the program. In the next cycle, I plan to take motion capture data from Isadora and send it to Max/MSP, where I can work with more audio manipulation.
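The three-zone mapping reduces to a simple lookup by distance (a sketch with assumed thresholds and file names):

```python
def soundscape(distance_m: float) -> str:
    """Pick a sound source by the participant's distance from the motion camera."""
    if distance_m > 3.0:
        return "beach_ocean.wav"   # furthest point: beach
    if distance_m > 1.5:
        return "underwater.wav"    # middle: underwater
    return "salsa_song.wav"        # closest to the camera: salsa song
```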


Cycle 3 – Yujie

In Cycle 3, I kept the key ideas and media design developed during Cycles 1 and 2. Two things are still central to my project. The first is to use the fragmented body parts of the culturally marked dancer to resist the idea of returning to the whole, which represents a so-called cultural essence. The whole is also easily categorized into racial stereotypes; the fragmented body parts can then be seen as a challenge to that. The second is to continue offering a negative mode of seeing. The inverted colors (the negatives) can be seen as a metaphor for undeveloped images or hidden "truths" in the darkroom, for one to see differently.

I also added two things in the final cycle: the preshow projections and a voice-over from my interview with the dancer. From the feedback after the showing, I received helpful comments from the class on these two add-ons. I was told that the preshow projections allowed viewers to explore the space, and that the circular setting formed a more intimate relationship between the dancer and the viewer. Also, the contradictions of performing the Japanese body discussed in the voice recording helped guide the viewer toward my intention.

Here are some Isadora patches:

Here is the video documentation of the final performance:


Cycle 3: Sensory Soul Soil

For my Cycle 3 project I shifted the soundscape a bit and changed two interfaces and the title. I homed in on two of the growers' interviews. I wanted to illustrate their connection to what they've grown through an organic interface rather than the former plastic tree. For instance, grower Sophia Buggs started out growing zucchini in kiddie pools because her grandmother used to make zucchini bread. Highlighting this history, I wanted the participant to feel the electricity Sophia and her grandmother felt with their hands in the soil and on fresh zucchini. I worked with my classmate Patrick to clean up the sound of Sophia's interview, because when I interviewed her we were in a noisy restaurant. He worked with me to level the crowd noise in the background and bring Sophia's voice to the foreground. I used two tomatoes to illustrate grower Julialynn's expression, "It was just me and two tomatoes!" This is her origin story of growing her church's community garden. Sensory Soul Soil is an experiment in listening with your hands, feeling your body, and seeing with your eyes and ears.

As for the setup, I decided to start in front of the sensory soil box. I made this change so that I could better guide the participant through the experience. This go-round I wanted to bring some color to the originally white box. I used Afro-Latinidad fabric to build in a sense of cultural identity through patterns and colors, also representing those who have labored in U.S. soil.

In the last iteration I didn't really know how to bring the space to a close; in this cycle I closed the space with a charge to join the agricultural movement by getting involved any way they can, because the earth and their bodies will appreciate it. This call to action is an invitation to take this work beyond the performative, experiential space and into the world from which the inspiration came.

To get my sound to work the way I intended, I designed the trigger values and gates to start the audio when touched and cut it off when the participant is not connected. I found that WAV files work better in the Sound Player; my MP3s kept showing up in the Movie Player instead, just an FYI.
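In code terms, each gate behaves like a latch that opens on touch and closes on release; a hypothetical sketch of that trigger/gate logic (the class and names below are illustrative, not the Isadora actors themselves):

```python
class TouchGate:
    """Start a sound when the soil/tomato circuit closes; cut it when it opens."""

    def __init__(self, sound_name: str):
        self.sound_name = sound_name
        self.playing = False

    def update(self, touched: bool):
        if touched and not self.playing:
            print(f"start {self.sound_name}")   # e.g. trigger the Sound Player
            self.playing = True
        elif not touched and self.playing:
            print(f"stop {self.sound_name}")    # cut off when disconnected
            self.playing = False
```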

Below you can see one of the Sensory Soul Soil experiences with Julialynn and two tomatoes.

In the video below you will witness how I curated the space to be extremely immersive. My goal was to engage in the true labor of a grower, not only by hearing their stories but also by embodying their movement. I changed the title because the materials in this iteration are no longer trees as before. I figured, since I come from a culture of soul food, and food has the nutrients to feed the soul, which comes from the soil, why not call it what it is: a Sensory Soul Soil experience.


Cycle 3–Allison Smith

I had trouble determining what I wanted to do for my Cycle 3 project, as I was overwhelmed with the possibilities. Alex was helpful in guiding me to focus on one of my previous cycles and lean into one of its elements. I chose to follow up on my Cycle 1 project, which involved live drawing through motion capture of the participant. That was a very glitchy system, though, so I decided to take a new approach.

In my previous approach, I used the Skeleton Decoder to track the position values of the participants' hands, which were then fed into the Live Drawing actor. The biggest problem was that the skeleton would not track well, and the lines didn't correspond to the person's movement. In this new iteration, I chose to use a camera, Eyes++ and the Blob Decoder to track a light that the participant holds. I found this to be a much more robust approach, and while it wasn't what I had originally envisioned in Cycle 1, I am very happy with the results.

I had some extra time and spontaneously decided to add another layer to this cycle, in which the participant's full body would be tracked with a colorful motion blur. With this, they would be drawing, but we would also see the movement their body was creating. I felt this addition leaned into my research in this class on how focusing on one type of interactive system can encourage people to move and dance. With the outline of the body, we were able to see the movement and dancing that the participants probably weren't aware they were doing. I then wanted to put the drawing on a see-through scrim so that the participant would be able to see both visuals being displayed.

A few surprises came when demonstrating this cycle with people. I told viewers they could walk through the space and observe however they wanted, but I didn't consider how their bodies would also be tracked. This brought out an element of play from the "viewers" (the people not drawing with the light) that I found to be the most exciting part of this project. They played with the different ways their bodies were tracked and moved closer to and farther from the tracker to play with depth. They also played with shadows when they were on the other side of the scrim. My original intention in setting up the projectors the way I did (on the floor in the middle of the room) was to keep the projections from spilling onto the other scrims. I never considered how this would allow space for shadows to join in the play, both in the drawing and in the bodily outlines. I've attached a video that illustrates all of the play that happened during the experience:

Something I found interesting after watching the video was that people were hesitant to join in at first. They would walk around a bit, and eventually saw their outlines on the screen. It took a few minutes, though, for people to want to draw and start playing. After that shift happened, there was such a beautiful display of curiosity, innocence, discovery, and joy. Even I found myself discovering much more than I thought I could, and I'm the one who created this experience.

The coding behind this experience is fairly simple, but it took a long time to get there. I had one stage for the drawing and one stage for the body outlines. For the drawing, as mentioned above, I used a Video In Watcher feeding into Eyes++ and the Blob Decoder. The camera I used was one of Alex's, as it had manual exposure, which we found was necessary to keep the "blob" from changing size as the light moved. The Blob Decoder finds bright points in the video and, depending on its settings, will track only one bright light. This fed its position and size into a Live Drawing actor, with a constant change in the colors.
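Outside Isadora, the same bright-spot tracking can be sketched with OpenCV; this is a hypothetical stand-in for Eyes++ and the Blob Decoder, not the actual patch:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)            # webcam feed, like the Video In Watcher
canvas = None                        # the accumulated "live drawing"

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if canvas is None:
        canvas = np.zeros_like(frame)

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (11, 11), 0)    # smooth so one peak wins
    _, max_val, _, max_loc = cv2.minMaxLoc(gray)  # brightest pixel = the light

    if max_val > 200:                             # only draw while a light is visible
        cv2.circle(canvas, max_loc, 6, (0, 255, 255), -1)

    cv2.imshow("drawing", canvas)
    if cv2.waitKey(1) == 27:                      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```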

For the body outline, I used an Astra Orbbec tracker feeding into a Luminance Key and an Alpha Mask. The foreground and mask came from the body with no color, and the background was a colorful version of the body with a motion blur. This created the effect of a white silhouette with a colorful blur. I used the same technique for color in the motion blur as I did with the live drawing.
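As a rough illustration of the layering (again a sketch, not the Isadora patch): threshold the depth image into a mask, keep a decaying accumulator for the blur, and paint the current silhouette white on top:

```python
import cv2
import numpy as np

trail = None  # decaying accumulator that produces the motion-blur effect

def composite(depth_frame: np.ndarray, hue: int) -> np.ndarray:
    """depth_frame: 8-bit image from a depth sensor (assumed input); hue: 0-179."""
    global trail
    # Luminance-key style mask: anything close enough to the sensor is "body"
    _, mask = cv2.threshold(depth_frame, 40, 255, cv2.THRESH_BINARY)
    if trail is None:
        trail = np.zeros_like(mask, dtype=np.float32)
    # Each new silhouette blends in while the old ones fade out
    cv2.accumulateWeighted(mask.astype(np.float32), trail, 0.15)

    # Colorize the trail via HSV (cycle the hue per frame for the color change)
    hsv = np.dstack([np.full_like(mask, hue),
                     np.full_like(mask, 255),
                     trail.astype(np.uint8)])
    out = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    out[mask > 0] = (255, 255, 255)   # white silhouette painted on top
    return out
```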

Screenshot of my full patch
Drawing patch
Silhouette patch

I'm really thankful for how this cycle turned out. I was able to find some answers to my research questions without intentionally thinking about them, and I discovered a lot of new things both within the experience and while reflecting on it. My biggest takeaway is that if I want to encourage people to move, it is beneficial to give everyone an active role in exploration rather than having just one person by themselves. In my previous cycles I focused too much on the tool (drawing, creating music) rather than on the importance of community when it comes to losing movement inhibition and leaning into a sense of play. If I were to continue working on this project, I might add a layer of sound using MIDI. I did enjoy the silence of this iteration, though, and am concerned that adding sound would be too much. Again, I am happy with the results of this cycle and will allow it to influence my projects in the future.


Cycle 3 – Ashley Browne

For Cycle 3, I wanted to finalize the hardware for my video synthesizer and continue working on the game portion so that the experience would be better suited to both players. This resulted in an audio file split between two sets of headphones, where Player 1 heard a separate set of instructions from Player 2. The player using the video synthesizer was instructed to turn the signals on and off and mix the channels to match the tempo of the music playing through the headphones. Player 2 used a MakeyMakey as a game controller, with the goal of collecting as many items as possible with the flower character before the 2-minute timer ran out. Since they both played at the same time, Player 1 could choose whether or not to hinder Player 2's game. It was fun to watch how people interacted together.
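Since a MakeyMakey registers with the computer as an ordinary keyboard, the controller side of the game can be sketched in a few lines; this pygame loop is a hypothetical stand-in for the actual game, showing only the 2-minute timer and movement:

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

ROUND_MS = 2 * 60 * 1000                  # the 2-minute round
start = pygame.time.get_ticks()
items_collected = 0
player = pygame.Rect(320, 240, 24, 24)    # stand-in for the flower character

while pygame.time.get_ticks() - start < ROUND_MS:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            raise SystemExit
    # MakeyMakey pads arrive as plain arrow-key presses
    keys = pygame.key.get_pressed()
    player.x += (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * 5
    player.y += (keys[pygame.K_DOWN] - keys[pygame.K_UP]) * 5
    # ... collision checks against item rects would bump items_collected ...
    screen.fill((0, 0, 0))
    pygame.draw.rect(screen, (255, 200, 80), player)
    pygame.display.flip()
    clock.tick(60)

print("items collected:", items_collected)
```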

Also, for one of the input signals, I used an Isadora patch with a live webcam feed that watched the players as they worked together.

I received a lot of feedback and reactions to the aesthetics of the experience: lots of people enjoyed the nostalgic feeling of seeing the CRT TV and being able to interact with it in a new way. People also enjoyed the free stickers I gave out!

Overall, I'm really happy with how the project turned out. In another iteration or cycle, I'd love to further develop the MakeyMakey controller so that it's similar to the video synthesizer, using the same hardware casing and tactile buttons.

https://vimeo.com/704302386


Cycle 3 Final Presentation, Gabe Carpenter, 4/27/2022

In Cycle 3 I wanted to bring everything together. In Cycle 1 I provided the proof of concept of using physical controls to interact with a virtual environment, and showed that I could create these physical tools out of almost anything. In Cycle 2, I demonstrated how these tools could control a game environment; however, the environment was very limited and lacked sound. For Cycle 3, I really wanted to turn up the heat. My game is based on a popular series called I'm on Observation Duty and shares many of the same concepts. My Cycle 3 presentation includes a small story, 3 rooms to play in, and 9 different anomalies to find.

The first step, and arguably the most difficult, was the animation and modeling itself. I used Maya 2022, as with previous cycles. Each asset in each of the three scenes was modeled from scratch, and no third-party assets were used in creating the game world. This meant that a majority of my time was spent modeling and rendering.

The three images above show a raw view of the newly modeled environments. For a look at the absent third room, check out my Cycle 2 post, where I go more in depth on that room specifically. The last step of the animation process was choosing the placement of the cameras, one of which you can see in the third image above. This was a bit challenging, as I wanted to give the player enough vision to complete the task of the game without giving them too much to focus on at once. Texturing and lighting also took a good deal of my time, as I wanted things to look realistically lit for a nighttime setting.

The next step was to gather the sound and video assets for the final patch. My process involved taking multiple renders of each anomaly from each camera location and stitching them together into one large MP4; in the end, each camera had a 4-minute video. The Isadora patch would then let the user simply switch between which video they were viewing. I then needed to include sound. The music for the game came from a royalty-free music-sharing platform, and the voice acting was done by my good friend Justin Green. His work is awesome!

Finally, I needed to write the Isadora patch.

The full patch is linked here. The patch is split into 4 sections. The bottom left is the control scheme, comprised of Keyboard Watchers and Global Value Receivers. The right side holds all of the video and sound assets, as well as the timers and triggers that control how they are seen. The upper left holds each of the anomalies, with triggers that let the win condition know whether the player reported correctly or not. The very top of the patch is the win condition, which is made of two parts. The first condition is that the player must correctly identify at least 7 of the 9 anomalies. The second is that the player may not submit more than 11 reports total, to prevent spam reporting. Overall, the actual patch took around 7 hours to write, but this was mainly due to some confusion among all of the broadcasters I ended up using.
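The two-part win condition boils down to two counters; here is a minimal sketch of that logic (hypothetical names, not the actual Isadora actors):

```python
TOTAL_ANOMALIES = 9
REQUIRED_CORRECT = 7      # must correctly identify at least 7 of the 9
MAX_REPORTS = 11          # more than 11 total reports is a loss (anti-spam)

def game_result(correct_reports: int, total_reports: int) -> str:
    if total_reports > MAX_REPORTS:
        return "lose: too many reports"
    if correct_reports >= REQUIRED_CORRECT:
        return "win"
    return "lose: missed too many anomalies"

# e.g. 8 correct finds across 10 submitted reports is a win
print(game_result(correct_reports=8, total_reports=10))
```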

In conclusion, I believe my project was very successful. I did what I set out to do at the beginning of Cycle 1, and ended with an Isadora patch that ran without issue.