Lawson: Pressure Project 1

For pressure project one, I was inspired to create a narrative as opposed to an infinite loop. I have recently been interested in outer space and interstellar processes, so I decided to create an animation of a supernova, albeit in a simple, geometric representation.

To create a sense of time and scale, I played with trigger delays, envelope generators, and the fade in/fade out aspects of the jump++ actor. The fades between scenes helped me create the perception of a camera zooming out to show the entire solar system, while the trigger delays and envelope generators let me establish the movement of particular elements and create a sense of passing time. Because I did not yet know about the spinner actor, I used a wave generator to manipulate the planet shapes’ horizontal and vertical positions along an elliptical pathway. To prevent them from traveling together in a straight line, I gave each planet a different scale limit and used trigger delays to offset the start of each planet’s pathway. As a result, the planets appear to move in random pathways across the screen rather than in circular orbits.
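
For anyone curious what that wave-generator setup boils down to, here is a rough Python sketch of the logic (not the actual patch): each planet’s position is a sine/cosine pair, its scale limits become the ellipse radii, and its trigger delay becomes a start offset. All names and numbers below are invented for illustration.

```python
import math

def planet_position(t, radius_x, radius_y, period, start_delay):
    """Rough stand-in for a wave generator driving horizontal/vertical position:
    trace an ellipse once the planet's trigger delay has elapsed."""
    if t < start_delay:                   # trigger delay: this planet hasn't started yet
        return None
    phase = 2 * math.pi * (t - start_delay) / period
    x = radius_x * math.cos(phase)        # horizontal "scale limit" becomes the ellipse width
    y = radius_y * math.sin(phase)        # vertical "scale limit" becomes the ellipse height
    return x, y

# Each planet gets different limits and a different start delay (all invented numbers),
# so they never travel together in a straight line.
planets = [dict(radius_x=20, radius_y=8,  period=12, start_delay=0),
           dict(radius_x=35, radius_y=14, period=20, start_delay=3),
           dict(radius_x=50, radius_y=22, period=30, start_delay=6)]

for t in range(0, 10, 2):
    print(t, [planet_position(t, **p) for p in planets])
```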

To create the sun’s “explosion,” I used an envelope generator to increase its scale over 10 seconds. In the next scene I used the shimmer actor to disrupt the pointed red shape and the dots actor to disrupt the yellow circle. The apparent “fizzling” of the sun was achieved through the cross-fade between scenes.

If I were to improve this patch, I would want to first reduce the load on the Isadora program to offset the limited power of my computer’s processor, and second, create orbits for my planets using the spinner actor. I might also use the explode and particle actors to create a “real” explosion of the sun rather than the illusion I created with the shimmer actor. Additionally, I think I could use the layer functions and blend modes of the projector actors to allow the planets to disappear and reappear around the sun, rather than showing up as bright spots when the images overlap.

Upon further reflection…

From a storytelling and pacing standpoint, I wish that I had allowed the scene in which the planets appear to stay longer. For me, the scene jumps too quickly after the last planet appears, rushing the story rather than establishing the presence of a solar system. For similar reasons, I wish that I had used a similar strategy of trigger delays and envelope generators to let the stars appear at the end of the story. I think the narrative would have had a more satisfying ending had I allowed the stars to slowly establish themselves rather than appear all at once. It would have also been a more satisfying final image if I had used a particle generator to create a background of smaller stars; this would have added significantly more depth to the screen.


PP1 – WIZARD VAN SPACE ADVENTURE .COM

Hello. Welcome to the Wizard Zone. The Wizard will now show you more about his wonderful Isadora Pressure Project.

Just kidding. He doesn’t know anything about Isadora. Anyway. Here’s a video capture of the thing:

How I built it~

For this pressure project I wanted to extend beyond just shapes actors, and create something silly and personally entertaining. I use MaxMSP (and a bit of Pure Data) for my Sonic Arts major, so I am used to this type of interface for audiovisual coding. I knew I wanted to throw some wizards in this project, because that’s what I did for a lot of Marc Ainger’s MaxMSP projects in his class.

For the first hour of the allotted time, I followed the Isadora Guru videos to create a star-field backdrop. I used pulse and (sawtooth) wave generators to create rotating triangles that move left to right on the screen before restarting at a different y-axis location and moving left to right again. I’m sure there is a much more effective way to produce this effect (so that the shapes reset at the right moment), but I found this to be a great start to immerse myself in the software. I threw the stars into user actors to clean up my first stage and create more stars without too much effort.
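
If it helps to see that star logic outside of the patch, here is a loose Python sketch of what each star user actor is doing (rotation omitted); the screen size and speeds are invented, and the real patch uses a sawtooth wave generator rather than code.

```python
import random

class Star:
    """Loose sketch of one star user actor: a sawtooth-style ramp drives the x
    position left to right; when it wraps, the star picks a new y location."""
    def __init__(self, width, height, speed):
        self.width, self.height, self.speed = width, height, speed
        self.x, self.y = 0.0, random.uniform(0, height)

    def update(self, dt):
        self.x += self.speed * dt                        # ramp across the screen
        if self.x > self.width:                          # wrapped: restart at the left edge
            self.x -= self.width
            self.y = random.uniform(0, self.height)      # new y-axis location
        return self.x, self.y

stars = [Star(width=100, height=100, speed=s) for s in (10, 15, 25)]
for frame in range(5):
    print(frame, [star.update(dt=1.0) for star in stars])
```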

After that I started messing with images. I found some good PNGs to work with:

I used the picture player actor with wave generators and limit-scale value actors to create movement for the PNGs, and I used envelope generators to make the PNGs appear, disappear, move in and out of frame, etc. My process was just a lot of playing around and improvising. I added some text draw actors to my stage so that I could give the scene a little bit of absurd context. WIZARD VAN SPACE ADVENTURE .COM was just the first thing that came up in my head. I liked the idea of turning this whole thing into some kind of intro for the Wizard’s personal HTML/Adobe Flash site (rip Flash).

Stage 2 is the only other stage that has User Actors. I set up 5 rotating, falling pentagons which serve as a funky background for the Wizard to appear in front of. I threw on the dots actor to make them a little more retro. For future reference: the computer did not like this. Either do this more efficiently (virtual stages?) or record it as a video to use instead.

Stage 3 is the Chaos Zone. It was at this point that I was just plugging everything in to see what would work, with no regard for neatness. I used a ton of envelope generators for the text to appear then disappear, for the duck to slide in, for the Wizard to run away, etc. Trigger delays really helped time things out, especially with the sounds…

THE SOUNDS

SOUNDS PRO-TIP: As far as I know, your audio clips have to be in .WAV format in order to function properly. I tried using .MP3s and they showed up in a different, non-sound-oriented actor. Beware.

It felt like a bit of a cop-out, but I knew using sound effects would instantly improve the engagement of an audience with my project. I used a bunch of cartoon sound effects I recorded off YouTube. I needed a distinct voice for the Wizard, so I grabbed some .WAV files off the Team Fortress 2 wiki (Soldier’s dying scream 2 and “yaaay!” to be specific).

The sound player actor is fairly uncomplicated. You just need to plug a trigger into the “start” input. Throughout each stage, my sound player actors are plugged into either the end trigger of an envelope generator, or the trig out of a trigger delay. I’m curious about what more I could do with the sound player and its various inputs – whether I could stretch it to do some of the crazy sample things I can do with MaxMSP…
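
To give a sense of how those delayed triggers line up, here is a tiny Python sketch of the idea: each (delay, clip) pair stands in for a trigger delay’s “trig out” wired into a sound player’s “start” input. The delays and file names are placeholders, not my actual cue list.

```python
import sched, time

def play_sound(name):
    # stand-in for a sound player actor receiving a trigger at its "start" input
    print(f"{time.strftime('%H:%M:%S')}  playing {name}")

# Each (delay, clip) pair mimics a trigger delay's "trig out" wired into a sound player.
# Delays and file names are placeholders, not the actual cue list.
timeline = [(0.0, "zap.wav"), (2.5, "quack.wav"), (5.0, "scream.wav")]

scheduler = sched.scheduler(time.time, time.sleep)
for delay, clip in timeline:
    scheduler.enter(delay, 1, play_sound, argument=(clip,))
scheduler.run()
```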

Reflection

During presentation I appreciated Alex’s comment about how my presence impacted the experience of the piece – I served as a cog in the media system, because I came up to the computer (in front of the TV screen), pressed the big red start button, speedwalked away, and came back to shut it off at the end. It was a purposeful decision to be active in the performance (via button-pressing) – it gives me greater control over the performance environment and adds a personable humor to the entire experience. It’s something I will be thinking about more as we have more pressure projects and work more with Isadora – how does the perceived presence (or absence) of a human in a media system impact the audience’s enjoyment and experience of a piece? I suppose that contrast can be seen in the difference between my in-person presentation of this piece and how you experience it in the video recording shown above….


Cycle 3: Dancing with Cody Again – Mollie Wolf

For Cycle 3, I did a second iteration of the digital ecosystem that uses an Xbox Kinect to manipulate footage of Cody dancing in the mountain forest. 

Ideally, I want this part of the installation to feel like a more private experience, but I found out during Cycle 2 that the large scale of the image was important, which presents a conflict, because an image that large requires a large area of wall space. My next idea was to station this in a narrow area or hallway and to use two projectors so that images are on either side of, or surrounding, the person. Cycle 3 was my attempt at adding another clip of footage and another mode of tracking in order to make the digital ecosystem more immersive.

For this, I found some footage of Cody dancing far away, and thought it could be interesting to have the footage zoom in/out when people widen or narrow their arms. In my Isadora patch, this meant changing the settings on the OpenNI Tracker to track body and skeleton (which I hadn’t been asking the actor to do previously). Next, I added a Skeleton Decoder, and had it track the x position of the left and right hand. A Calculator actor then calculates the difference between these two numbers, and a Limit-Scale Value actor translates this number into a percentage of zoom on the Projector. See the images below to track these changes.
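
For reference, here is a small Python sketch of what that Calculator + Limit-Scale Value chain works out to; the skeleton coordinate range and the zoom percentages are guesses, not the values in my patch.

```python
def limit_scale(value, in_min, in_max, out_min, out_max):
    """Rough stand-in for the Limit-Scale Value actor: clamp, then remap the range."""
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

def zoom_from_hands(left_hand_x, right_hand_x):
    """Calculator + Limit-Scale chain: the wider the arms, the bigger the zoom.
    The 0-1 coordinate range and the 100-400% zoom range are guesses, not my patch values."""
    spread = abs(right_hand_x - left_hand_x)               # Calculator: difference of hand positions
    return limit_scale(spread, 0.0, 1.0, 100.0, 400.0)     # percentage of zoom on the Projector

print(zoom_from_hands(0.45, 0.55))   # arms narrow -> small zoom
print(zoom_from_hands(0.10, 0.90))   # arms wide   -> large zoom
```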

My sharing for Cycle 3 was the first time that I got to see the system in action, so I immediately had a lot of notes and thoughts for myself (in addition to the feedback from my peers). My first concern is that the skeleton tracking is finicky. It sometimes had a hard time identifying a body – sometimes trying to map a skeleton onto other objects in the space (the mobile projection screen, for example). And periodically the system would glitch and stop tracking the skeleton altogether. This is a problem for me because while I don’t want the relationship between cause and effect to be obvious, I also want it to be consistent so that people can start to learn how they are affecting the system over time. If it glitches and doesn’t always work, people will be less likely to stay interested. In discussing this with my class, Alex offered an idea: instead of using skeleton tracking, I could use the Eyes++ actor to track the outline of a moving blob (the person moving) and base the zoom on the width or area that the moving blob takes up. This way, I could turn off skeleton tracking, which I think is part of why the system was glitching. I’m planning to try this when I install the system in Urban Arts Space.

Another thought that came up when the class was experimenting with the system was that people were less inclined to move their arms initially. This is interesting because during Cycle 2, people had the impulse to use their arms a lot, even though at the time the system was not tracking their arms. I don’t fully know why people didn’t move their arms this time. Perhaps because they remembered that in Cycle 2 it was tracking depth only, so they automatically started experimenting with depth rather than arm placement? Also, Katie mentioned that having two images made the experience more immersive, which made her slow down in her body. She said that she found herself in a calm state, wanting to sit down and take it in, rather than actively interact. This is an interesting point – that when you are engulfed or surrounded by something, you slow down and want to receive and experience it; whereas when there is only one focal point, you feel more of an impulse to interact. This is something for me to consider with this setup – is leaning toward more immersive experiences discouraging interactivity?

This question led me to challenge the idea that more interactivity is better… why can’t someone see this ecosystem and follow their impulse to sit down and just be? Is that not considered interactivity? Is more physical movement the goal? Not necessarily. However, I would like people to notice that their embodied movement has an effect on their surroundings.

We discussed that the prompting or instructions that people are given could invite them to move, so that people try movement first rather than sitting first. I just need to think through the language that feels appropriate for the context of the larger installation.

Another notable observation from Tamryn was that the AstroTurf was useful because it creates a sensory boundary of where you can move without having to take your eyes off the images in front of you – you can feel when your foot reaches the edge of the turf and you naturally know to stop. At one point Katie said something like this: “I could tell that I’m here [behind Cody on the log] in this image, and over there [where Cody is, far away in the image] at the same time.” This pleased me, because when Cody and I were filming this footage, we were talking about the echoes in the space – sometimes I would accidentally step on a branch, causing a snapping noise, and seconds later I would hear the sound I made bouncing back from miles away, on the other side of the mountain valley. I ended up writing in my journal after our weekend of filming: “Am I here, or am I over there?” I loved the synchronicity of Katie’s observation here, and it made me wonder if I wanted to include some poetry that I was working on for this film…

Please enjoy, below, some of my peers interacting with the system.


Cycle 3: Layering and Gesture: Collective Play

For this third iteration, I decided to set up three digital layers that provided space for play, collaboration, and digital/analog spaces to mingle. My initial idea was to consider how I could introduce the body/model into the space and suggest an opportunity for gestural drawing and experimentation both on physical paper and digitally. As you can see in the image below, participants were actively engaged in working on the paper, viewing what was happening on the projection screen, and interacting with one another across these platforms and planes in space. A third layer not visible in the image below is a Live Drawing actor in Isadora that comes into play in some of the videos below. I stuck with the TT Edge Detect actor in Isadora and played with a Motion Blur actor on the second layer so that the gestural movements would be emphasized.

Note the post-its on Alison’s back below. They were translated into digital space and activated by her drawing and movement, becoming a playful, unexpected surprise!

Alex the superhero!
Isadora Patch/Cycle 3
Interaction between three digital layers.
Drawing together across physical and digital space.

I really appreciated the feedback from this experience and want to share some of the useful comments I received as a record:

  • Alison: I loved that Alison shared it was “confusing in a good way” and that she felt like it was a space where she could play for a long time. She identified that this experience was a social one and that it mattered that they were exploring together rather than alone.
  • Katie: Katie was curious about what would show up and explored in a playful and experimental way. She felt some disorientation with the screens and acknowledged that when Alex was using the live draw tool in the third layer, she didn’t realize that he was following her with the line. I loved that this was a surprise, and I realized that I hadn’t explained this option verbally well enough, so she didn’t know what was drawing the line.
  • Alex: Alex was one of the group who used the live draw tool, and others commented that it felt separated from the group/collaborative experience of the other two layers. Alex used the tool to follow Katie’s movement and traced her gestures playfully. He commented that this was one of his favorite moments in the experience. He also mentioned it was delightful to be drawn when he was posing as a superhero and participants were layering attributes onto his body. There was also a moment when I said, “that’s suggestive,” which was brought up afterward; we discussed that play in this kind of space could bring in inappropriate imagery regardless of whether it is intended. What does it mean that this is possible in such a space? Consider this more. Also think about the artifact on the paper after play: how could this be an opportunity for artifact creation, nostalgia, or documentation?
  • Mila: With each iteration, people discovered new things they could do. Drawing was only one of the tools, not the focus – drawing as a tool for something bigger. Love the jump rope action!
  • Molly: How did we negotiate working together? This creates a space for emergent collaboration. What do we learn from emergent collaboration? How can we set up opportunities for this to happen? The live draw was sort of sneaky and she wondered if there was a way to bring this more into the space where other interactions were happening.

This feedback will help me work toward designing another iteration as a workshop for the pre-service art teachers I am working with in the spring semester. I am considering whether I could stage this workshop in another space or if using the motion lab would be more impactful. If I set it up similarly in the lab, I would integrate the feedback by including some sort of floor anchors or weights connected to the ropes as possibilities. I think I would also keep things open for play, but mention perspective, the tools available, and gesture drawing to these students/participants, who will be familiar with teaching these techniques to students in a K-12 setting.

I have been exploring the possibility of using a cell phone mounted on the ceiling as the birds-eye-view camera and using NDI and a router to send its feed into Isadora. I’ll work on this more in the spring semester as I move toward designing a mini-version for a gallery experience in Hopkins Hall Gallery as part of a research collective exhibition, and also the workshop with the pre-service students. If I can get permission to host the workshop in the motion lab, I would love to bring these students into this space, as my students this semester really appreciated the opportunity to learn about the motion lab and explore some of the possibilities in this unique space.


Mollie Wolf Cycle 2: The WILDS – Dancing w/ Cody

For Cycle 2, I began experimenting with another digital ecosystem for my thesis installation project. I began with a shot I have of one of my collaborators, Cody Brunelle-Potter, dancing, gesturing, and casting spells on the edge of a log overlooking a mountainside. As they do so, I (holding the camera) am slowly walking toward them along the log. I was rewatching this footage recently with the idea of using a depth camera to play the footage forward or backward as you walk – allowing your body to mimic the perspective of the camera, moving toward Cody or away from them.

I wasn’t exactly sure how to make this happen, but the first idea I came up with was to make an Isadora patch that recorded how far someone was from an Xbox Kinect at regular moments in time, and that was always comparing their current location to where they were a moment ago. Then, whether the difference between those two numbers was positive or negative would tell the video whether to play forward or backward.

I explained this idea to Alex; he agreed it was a decent one and helped me figure out which actors to use to do such a thing. We began with the OpenNI Tracker, which has many potential ways to track data using the Kinect. We turned many of the trackers off, because I wasn’t interested in creating any rules in regard to what people were doing, just where they were in space. The Kinect senses depth by bouncing a laser off objects; how bright the light is when it bounces back tells the camera whether an object is close (bright) or far (dim). So the video data that comes from the Kinect is greyscale, based on this brightness (closer objects are nearer to white, farther objects nearer to black). To get a number from this data, we used a Calc Brightness actor, which outputs a steadily changing value corresponding to the brightness of the video. Then we used Pulse Generator and Trigger Value actors to record this number at regular intervals. Finally, we used two Comparator actors: one that checked if the stored number was less than the current brightness from the Calc Brightness actor, and one that checked the opposite, if it was greater. These Comparators each triggered Trigger Value actors that would set the speed of the Movie Player playing the footage of Cody to 1 or -1 (meaning it would play forward or backward at normal speed).
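
In code terms, that comparator logic amounts to something like the Python sketch below. Which sign of the speed corresponds to moving closer is my guess here, and the real patch simply holds the last triggered speed when nothing changes; the sample readings are invented.

```python
def playback_speed(stored_brightness, current_brightness):
    """Sketch of the Comparator pair: brighter than the stored sample means closer,
    dimmer means farther. Which sign plays forward vs. backward is my guess."""
    if current_brightness > stored_brightness:
        return 1     # Movie Player speed 1: forward at normal speed
    if current_brightness < stored_brightness:
        return -1    # Movie Player speed -1: backward at normal speed
    return None      # neither Comparator fires; the patch keeps the last speed

# A Pulse Generator + Trigger Value would sample the Calc Brightness output on a clock;
# these readings are invented stand-ins.
samples = [0.40, 0.42, 0.45, 0.45, 0.38]
for stored, current in zip(samples, samples[1:]):
    print(stored, "->", current, "speed:", playback_speed(stored, current))
```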

Once this basic structure was set up, quite a bit of fine-tuning was needed. Many of the other actors you see in these photos were used to experiment with fine-tuning. Some of them are connected and some are not. Some are even connected but not currently doing anything to manipulate the data (the Calculator, for example). At the moment, I am using the Float to Integer actor to make whole numbers out of the brightness value (as opposed to one with four decimal places). This makes the system less sensitive (which was a goal, because initially the video would jump between forward and backward when a person was just standing still, breathing). Additionally, I am using a Smoother in two locations: one before the data reaches the Trigger Value and Comparator actors, and one before the data reaches the Movie Player. In both cases, the Smoother creates a gradual increase or decrease between values rather than jumping between them. The first helps the sensed brightness data change steadily (or smoothly, if you will); the second helps the video slow to a stop and then speed up into reverse, rather than jumping to reverse, which felt glitchy originally. As I move this into Urban Arts Space, where I will ultimately be presenting this installation, I will need to fine-tune quite a bit more, which is why I have left the other actors around as additional things to try.
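
Here is a rough Python sketch of what the Float to Integer and Smoother combination does to the brightness signal; the sample readings, value range, and smoothing amount are invented for illustration.

```python
class Smoother:
    """Rough stand-in for Isadora's Smoother: ease toward the incoming value
    instead of jumping, so the signal (and later the speed) changes gradually."""
    def __init__(self, smoothing=0.3):
        self.value = None
        self.smoothing = smoothing

    def update(self, target):
        if self.value is None:
            self.value = target
        self.value += self.smoothing * (target - self.value)
        return self.value

def desensitize(brightness):
    # Float to Integer: whole numbers only, so breathing-sized wobbles disappear
    return int(round(brightness))

# Invented Calc Brightness readings: standing still, then stepping forward.
readings = [41.23, 41.41, 41.18, 44.90, 47.05]
smooth_in = Smoother(0.3)   # the first Smoother, before the Comparator / Trigger Value chain
print([desensitize(smooth_in.update(r)) for r in readings])
```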

Once things were fine-tuned and functioning relatively well, I had some play time with it. I noticed that I almost instantly had the impulse to dance with Cody, mimicking their movements. I also knew that depth was what the camera was registering, so I played a lot with moving forward and backward at varying times and speeds. After reflecting on my physical experimentation, I realized I was learning how to interact with the system. I noticed that I intuitively changed my speed and length of step to ones the system more readily registered, so that I could more fluidly feel a responsiveness between myself and the footage. I wondered whether my experience would be common, or if I as a dancer have particular practice noticing how other bodies are responding to my movement and subtly adapting what I’m doing in response to them…

When I shared the system with my classmates, I rolled out a rectangular piece of AstroTurf in the center of the Kinect’s focus (almost like a carpet runway pointing toward the projected footage of Cody). I asked them to remove their shoes and to take turns, one at a time. I noticed that collectively, over time, they also began to learn and adapt to the system. For them, it wasn’t just individual learning but collective learning, because they were watching each other. Some of them tried to game-ify it, almost as though it were a puzzle with an objective (often thinking it was more complicated than it was). Others (mostly the dancers) had the inclination to dance with Cody, as I had. Even though I watched their bodies learn the system, none of them ever quite felt like they ‘figured it out.’ Some seemed unsettled by this and others not so much. My goal is for people to experience a sense of play and responsiveness between themselves and their surroundings, rather than a game with rules to figure out.

Almost everyone said that they enjoyed standing on the AstroTurf – that the sensation brought them into their bodies, and that there was some pleasure in the feeling of stepping and walking on the surface. Along these lines, Katie suggested a diffuser with pine oil to further extend the embodied experience (something I am planning to do in several of the digital ecosystems throughout the installation). I’m hoping that prompting people into their sensorial experience will help them enter the space with a sense of play, rather than needing to ‘figure it out.’

I am picturing this specific digital ecosystem happening in a small hallway or corner in Urban Arts Space, because I would rather this feel like an intimate experience with the digital ecosystem than a public performance with others watching. As an experiment with this hallway idea, I played with the zoom of the projector, making the image smaller or larger as my classmates played with the system. Right away, my classmates and I noticed that we much preferred the full size of the projection (which is MUCH wider than a hallway). So now I have my next predicament – how to make the image large enough to feel immersive in a narrow hallway (meaning it will need to wrap onto multiple walls).


Cycle 3–Allison Smith

I had trouble determining what I wanted to do for my Cycle 3 project, as I was overwhelmed with the possibilities. Alex was helpful in guiding me to focus on one of my previous cycles and lean into one of its elements. I chose to follow up on my Cycle 1 project, which involved live drawing through motion capture of the participant. That was a very glitchy system, though, so I decided to take a new approach.

In my previous approach, I utilized the skeleton decoder to track the position values of the participant’s hands. These numbers were then fed into the live drawing actor. The biggest problem with that, though, was that the skeleton would not track well and the lines didn’t correspond to the person’s movement. In this new iteration, I chose to use a camera, Eyes++, and the blob decoder to track a light that the participant would be holding. I found this to be a much more robust approach, and while it wasn’t what I had originally envisioned in Cycle 1, I am very happy with the results.

I had some extra time and spontaneously decided to add another layer to this cycle, in which the participant’s full body would be tracked with a colorful motion blur. With this, they would be drawing, but we would also see the movement their body was creating. I felt like this addition leaned into my research in this class on how focusing on one type of interactive system can encourage people to move and dance. With the outline of the body, we were able to see the movement and dancing that the participants probably weren’t aware they were doing. I then wanted to put the drawing on a see-through scrim so that the participant would be able to see both visuals being displayed.

A few surprises came when demonstrating this cycle with people. I instructed that viewers could walk through the space and observe however they wanted, but I didn’t consider how their bodies would also be tracked. This brought out an element of play from the “viewers” (aka the people not drawing with the light) that I found the most exciting part of this project. They would play with the different ways their bodies were tracked and would get closer to and farther from the tracker to play with depth. They also played with shadows when they were on the other side of the scrim. My original intention with setting the projectors up the way that they were–on the floor in the middle of the room–was so that the projections wouldn’t mix onto the other scrims. I never considered how this would allow space for shadows to join in the play, both in the drawing and in the bodily outlines. I’ve attached a video that illustrates all of the play that happened during the experience:

Something that I found interesting after watching the video was that people were hesitant to join in at first. They would walk around a bit, and eventually they saw their outlines on the screen. It took a few minutes, though, for people to want to draw and to start playing. After that shift happened, there is such a beautiful display of curiosity, innocence, discovery, and joy. Even I found myself discovering much more than I thought I could, and I’m the one who created this experience.

The coding behind this experience is fairly simple, but it took a long time for me to get there. I had one stage for the drawing and one stage for the body outlines. For the drawing, like I mentioned above, I used a video in watcher feeding into Eyes++ and the blob decoder. The camera I used was one of Alex’s cameras, as it had manual exposure, which we found out was necessary to keep the “blob” from changing sizes when the light moved. The blob decoder finds bright points in the video, and depending on the settings of the decoder, it will only track one bright light. This then fed into a live drawing actor’s position and size, with a constant change in the colors.
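
As a rough illustration (not the actual Isadora internals), the blob tracking plus color cycling amounts to something like this Python sketch; the brightness threshold, frame format, and hue speed are all assumptions.

```python
import colorsys

def brightest_blob(frame, threshold=200):
    """Toy stand-in for Eyes++ / the blob decoder: find the bright region in a
    2D grid of gray values and report its center and rough size."""
    points = [(x, y) for y, row in enumerate(frame)
                     for x, v in enumerate(row) if v >= threshold]
    if not points:
        return None
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return cx, cy, len(points)   # position + rough size for the live drawing actor

def cycling_color(t):
    """Constantly changing line color, cycling through hues over time."""
    r, g, b = colorsys.hsv_to_rgb((t * 0.05) % 1.0, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

frame = [[0, 0, 0, 0],
         [0, 250, 255, 0],
         [0, 240, 0, 0],
         [0, 0, 0, 0]]
print(brightest_blob(frame), cycling_color(t=12))
```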

For the body outline, I used an Orbbec Astra tracker feeding into a luminance key and an alpha mask. The foreground and mask came from the body with no color, and the background was a colorful version of the body with a motion blur. This created the effect of a white silhouette with a colorful blur. I used the same technique for color in the motion blur as I did with the live drawing.
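
Conceptually, the luminance key + alpha mask composite works something like the Python/NumPy sketch below; the threshold and the random stand-in arrays are placeholders for the Astra depth feed and the motion-blurred color layer.

```python
import numpy as np

def silhouette_with_blur(depth_frame, color_trail, threshold=0.2):
    """Sketch of the luminance key + alpha mask idea: the body (bright in the
    depth image) becomes a white silhouette; everything else shows the color trail."""
    mask = (depth_frame > threshold).astype(float)       # luminance key -> alpha mask
    white = np.ones(depth_frame.shape + (3,))            # white silhouette foreground
    return mask[..., None] * white + (1 - mask[..., None]) * color_trail

depth = np.random.rand(4, 4)      # stand-in for the Astra depth image (placeholder data)
trail = np.random.rand(4, 4, 3)   # stand-in for the colorful motion-blurred layer
print(silhouette_with_blur(depth, trail).shape)
```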

Screenshot of my full patch
Drawing patch
Silhouette patch

I’m really thankful for how this cycle turned out. I was able to find some answers to my research questions without intentionally thinking about them, and I was also able to discover a lot of new things within the experience and while reflecting upon it. The biggest takeaway I have is that if I want to encourage people to move, it is beneficial to give everyone an active role in exploration rather than having just one person by themselves. I was focused too much on the tool in my previous cycles (drawing, creating music) rather than the importance of community when it comes to losing movement inhibition and leaning into a sense of play. If I were to continue to work on this project, I might add a layer of sound to it using MIDI. I did enjoy the silence of this iteration, though, and am concerned that adding sound would be too much. Again, I am happy with the results of the cycle, and will allow this to influence my projects in the future.


Cycle 2–Allison Smith

For my cycles, I’m working on practicing different media tools that can interact with movement. For this cycle, I chose to work with the interaction of movement and sound. Similar to my PP2, I had a song with several tracks playing at the same time, and a track’s volume would turn up when it was triggered. The goal was to offer a space to play with movement and affect the sound, allowing that in turn to affect the movement.

This is the first scene, with texts that gave instructions to the participants. Feedback that I received from my previous cycle was to consider the audience and how instructions would be given in different environments. For this cycle, I wanted to make it as independent and “walk-up” as possible. The texts were triggered by a depth sensor, and after the instructions were done, it jumped to the next scene.
This is the user actor that tracked the depth of the volunteer to trigger the texts.
This is my text user actor that gave the instructions for the experience.
This is the next scene that was triggered, which has all of the music. The music was synced up and was triggered to play when the scene was entered.
This is a closeup of my music scene. I fed the motion tracker into user actors, which then went into triggering the volume of each sound.
Finally, this is an example of one of my movement user actors. Whenever the participant entered the specific depth range of the luminance key, it would trigger the volume to turn to 100, and when the participant entered that depth again, it would turn the volume to 0.
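
In plain code, each of those movement user actors behaves roughly like the toggle sketched below; the depth band values are placeholders, since the real thresholds had to be reset for each space.

```python
class DepthVolumeToggle:
    """Sketch of one movement user actor: when the tracked body enters a given
    depth band, flip that track's volume between 100 and 0."""
    def __init__(self, near, far):
        self.near, self.far = near, far     # depth band (placeholder values)
        self.volume = 0
        self.was_inside = False

    def update(self, depth):
        inside = self.near <= depth <= self.far
        if inside and not self.was_inside:  # trigger only on entering the band
            self.volume = 100 if self.volume == 0 else 0
        self.was_inside = inside
        return self.volume

track = DepthVolumeToggle(near=1.0, far=1.5)
for depth in [2.0, 1.2, 1.3, 2.0, 1.1]:
    print(depth, track.update(depth))       # volume toggles on each new entry
```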

I had two possible audiences in mind for this. The first was people who don’t typically dance, who encounter this as a kind of installation and want to play with it. Like I mentioned at the beginning, I’m curious about how completing an activity motivated through exploration can knock down the inhibitions associated with movement. Maybe finding out that the body can create different sounds will inspire people to keep playing. The other audience I had in mind was a dancer who is versed in freestyle dance, specifically in house dance. I created a house song within this project, and I based the movement triggers on basic moves within house dance. Then the dancer could not only freestyle with movement inspired by the music, but their movement could inspire the music, too.

For this demo, I chose to present it in the style for the first audience. Here is a video of the experience:

Thanks to Orlando for volunteering and thank you to Yujie for documenting!

I ran into a few technical difficulties. The biggest challenge was that I had to reset the trigger values for each space I was in. The brightness of the depth image was different in my apartment living room than it was in the MOLA. I also noticed that I had created the different boundaries based on my own body and how I move. No one moves exactly the same way, so sounds will be triggered differently for each person. It was also difficult to keep things consistent. Just as each person moves differently from everyone else, we also never repeat a movement exactly the same way, so when a sound is triggered one time, it may not be triggered again by the same movement. Finally, there was a strange problem where the sounds would stop looping after a minute or so, and I don’t know why.

My goal for this cycle was to have multiple songs to play with that could be switched between in different scenes. If I were to continue to develop this project, I would want to add those songs. Due to time constraints, I was unable to do that for this cycle. I would also like to make this tech more robust. I’m not sure how I would do that, but the consistency would be nice to establish. I am not sure if I will continue this for my next cycle, but these ideas are helpful to consider for any project.


Final Mission: three travelers

Brave and Selfless Volunteers at the MoLAB Finals Performance
photo by Alex Oliszewski

During my Cycle 3 of Choose Your Own Adventure: Live Performance Edition, I explored how to allow for more timelines. I realized that the moments of failure for the audience provide excitement and raise the stakes of the performance. How do I make a system that encourages and provides feedback for the volunteers while also challenging them?

I feel most creative and most myself when creating pieces that play with stakes. I love dance and theatre that encourage heightened reactions to ridiculous situations. The roles of the three travelers started to sink in for me the more we rehearsed. They needed to be both helpless adventurers somewhere distant in time and space and all-knowing, somewhat questionably trustworthy, narrator-like Greek chorus assistants. Tara, Yildiz, and I added cheering on the volunteers to blur those lines of where and who we were.

Emily Craver, Yildiz Guventurk, and Tara Burns as three travelers
photo by Alex Oliszewski

The new system for Choose Your Own Adventure included: a MIDI keyboard as a controller, a live webcam for a live feed of the adventurers and photo capture of their successes, a FocusRite audio hook-up for sound input and a sound level watcher, GLSL shaders of all colors and shapes, and Send MIDI show control in order to trigger light cues.

Note On Watcher for keys on the MIDI keyboard, determining which song to play in order to reveal a clue
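
As a loose illustration of the key-to-song idea, a Note On Watcher feeding a player could be sketched like this in Python; the note numbers and song names are placeholders rather than the actual cue mapping.

```python
# Placeholder mapping from MIDI note numbers to clue songs (not the real cue list).
CLUE_SONGS = {60: "clue_song_a.wav",   # middle C
              62: "clue_song_b.wav",
              64: "clue_song_c.wav"}

def on_note_on(note, velocity):
    """Stand-in for a Note On Watcher feeding a sound/movie player."""
    if velocity == 0:          # many keyboards send note-on with velocity 0 as note-off
        return None
    song = CLUE_SONGS.get(note)
    if song:
        print(f"key {note} -> playing {song} to reveal a clue")
    return song

on_note_on(60, 90)
```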

The new system provided more direct signs of sound level watching and cues to the volunteers. The voice-overs were louder and aided by flashing text reiterating what the audience should be doing. The three travelers became side coaches for the volunteers as well as self-aware performers trying to gain trust. I found myself fully comfortable with the way the volunteers were being taken care of, and I started to question and wonder about the audience observing all of this. How can an audience be let in while others are physically engaging with the material? I thought about perhaps using close camera work to show the decisions being made at the keyboard. Earlier suggestions (shout out to Alex Christmas, who gave this suggestion) included an applause-o-meter to allow the non-volunteers to have a say from their seats. A “Who Wants to Be a Millionaire” style audience interaction comes to mind, with options for volunteers to choose how to interact and have the audience come to their aid. What does giving the audience a voice look like? How can it be respectful, careful, and challenging all at once?

photos by Alex Oliszewski
video by Doug Barber