Dancing Club 3.0

Scene 1
Box Office
two types of tickets
Scene 2
Scene 3
Scene 4
Scene 5
Scene 6

In this project, I explored the digital body and the actual body moving together.

Scene 1: People came into the space and saw the introduction. (Audio & Video)


Scene 2: I used two separate interfaces: the box office and the DJ table. People went to the box office, answered the question, and got their ticket first, and then were permitted to go to the stage. I changed the question to "Which color can represent your feeling at this moment?" People typed their answers, and the answers went to the screens immediately. While people were typing, the screen jumped to scene 2, in which a dancer danced in the middle of a 3D music ball to the music "Alitina by the water." I wanted people to be more patient in this part, since they had to wait for everyone before going into the next scene, so I chose a piece of soft piano music to ease any anxious mood. It seemed to work: people were patient, either watching the dance or talking with others of the same color.


Scene 3: I used a dark red "Level up" instruction to make people aware that a change was happening. But some people still didn't notice, because the side screens did not show the warning clearly. I had to tell them "Level up" myself.

Scene 4: This was the same as in Dancing Club 1.0 and 2.0. Four dancers in different colors danced very fast, and people danced with them.

Scene 5: I changed the instructions into their own scene so people could take them in quickly. It was successful: people noticed and read the instructions, and they prepared to dance or battle with others!

Scene 6: People danced with each other to the music. I still changed the lighting in this part, but some people may have forgotten to follow their light.

Thought: Audiences are uncontrollable! Something unexpected always happens, so I have to think ahead and prepare from different angles and perspectives.


Dancing Club 2.0


After the cycle 1 performance, I added some things for cycle 2. It had basically the same structure as cycle 1. I tried to make each scene flow smoothly into the next and to help the audience better understand why the scene changed. I added some instructions in scene 2 (I recorded some dance through the Kinect sensor and projected it to the screen), through which people could know that the next part was to dance/battle with others. (picture 2) Scene 3 was the Kinect sensor live-recording the dance people did on stage. I also added some light cues in scene 3 (picture 3), so people had to move while recognizing the lighting change and following their light! In this performance, we had some guest audience members, and they gave me some valuable suggestions. They said the change between scene 2 and scene 3 felt a little fast; because I pressed the wrong button, the instruction didn't go as I designed. After this performance, I thought I should make the whole thing more detailed and specific, and guide people step by step with more time so that they can understand better.


Olsen Cycle 3 – Final – Audio Game

Cycle 3:
I decided to move forward with the Audio game as previously mentioned. Over the last few weeks, I had random participants test the game and let me know if any part was particularly confusing.
Some of the comments that were mentioned included:

  • How would this be different if it were a touch screen instead of a mouse?
  • How should I know when I need to listen?
  • Is there a way to make the experience move forward at a quicker pace?
  • Are there modes of play? Easy, Medium, Hard? What would those be?

—I had considered using the Kinect as a means to play. That being said, I took the time to become part of the beta-testing group for Isadora. Using the Kinect data, it was difficult to calibrate all of the settings each time the experience was set up. It was also somewhat buggy, and didn't necessarily follow the body as planned.
—I hooked up the OpenNI and Skeleton decoder to get the x/y data from the hand of the user, but the lag and occasional loss of input made the experience unpleasant. I also attempted to use the depth from the Orbbec sensor in the MoLab. This would use a top-down orientation of the depth to create a 'blob' to be tracked. Unfortunately, that feedback was also buggy and difficult to manage.
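For reference, the top-down depth "blob" idea boils down to thresholding the depth image (anything closer to the sensor than the floor is probably a person) and taking the centroid of those cells. A toy sketch of that logic, with a made-up 4x4 depth grid and threshold:

```python
def blob_centroid(depth, floor):
    """Find the centroid of all cells closer to the sensor than `floor`.
    `depth` is a 2D list of distances (smaller = closer, i.e. a person's head/shoulders)."""
    hits = [(x, y) for y, row in enumerate(depth)
                   for x, d in enumerate(row) if d < floor]
    if not hits:
        return None  # nobody on stage
    return (sum(x for x, _ in hits) / len(hits),
            sum(y for _, y in hits) / len(hits))

# Toy 4x4 depth map: the floor is ~3.0 m away, a "person" occupies the lower right.
depth = [[3.0, 3.0, 3.0, 3.0],
         [3.0, 3.0, 3.0, 3.0],
         [3.0, 3.0, 1.8, 1.8],
         [3.0, 3.0, 1.8, 1.8]]
print(blob_centroid(depth, floor=2.5))  # (2.5, 2.5)
```

The buggy part in practice is exactly what the paragraph describes: the threshold and the sensor position have to be recalibrated every time the space changes.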
—I ended up going with the IR-sensor touch screen placed on top of the monitor as the solution. This allowed a user to use either their hand or a touch-friendly pen, with headphones (simultaneously sent to speakers as well) for the experience.

—Below is a visual representation of the layout, including photos from the final day of people playing the game:

Visual Layout of the Audio Game system.

—Below is an example of the actual game being played. It goes through the Easy / Medium modes. If you happen to win, you get a nice treat of music and visuals at the end! Note: the losing screen is not shown, though you can imagine it: no music and no visuals, just a plain "You Lose!" screen with the same options as the 'Win' screen.

A play-through of the Audio Game — *Using a mouse*

—I decided to move forward with the project in a way that made it as easy as possible for the user to get used to the touch screen. This prevented the player from scribbling all over the screen with a mouse to find the dot, and added a level of difficulty that didn't let the participant "cheat" to win.

I also ended up adding different difficulty levels for the experience.
>>Easy Mode<<
—The user gets a visual trail to track where they've been, and the trail shrinks as the user gets closer to the target. The user's cursor also shrinks as it gets closer. The audio will also aid in their efforts.
>>Medium Mode<<
—The user’s cursor shrinks as it gets closer. The audio will also aid in their efforts.
>>Hard Mode<<
—Only the audio will aid in their efforts. (Using headphones is the best way for this experience)
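In Isadora the shrinking cursor would come out of a Calculator / Limit-Scale Value chain; conceptually it is just a distance-to-size mapping. A sketch of that mapping, where the 10–60 px size range and the 500 px maximum distance are assumptions of mine, not values from the project:

```python
import math

def cursor_size(cx, cy, tx, ty, max_dist=500.0, min_px=10.0, max_px=60.0):
    """Shrink the cursor as it nears the target: far away -> max_px, on target -> min_px."""
    dist = min(math.hypot(cx - tx, cy - ty), max_dist)  # clamp so size never exceeds max_px
    return min_px + (dist / max_dist) * (max_px - min_px)

print(cursor_size(0, 0, 0, 0))    # on the dot -> 10.0
print(cursor_size(500, 0, 0, 0))  # far away   -> 60.0
```

Hard mode simply drops this visual channel and leaves only the audio cue.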

The actors used within the Isadora project include:

  • Keyboard Watcher
  • Mouse Watcher
  • Stage Mouse Watcher
  • Trigger Delay
  • Pulse Generator
  • Text Draw
  • Shape
  • Sound Player
  • Video Player
  • Jump++
  • Enter Scene Trigger
  • Enter Scene Value
  • Gate
  • Dual Trigger Watcher
  • Calculator
  • Math
  • Route
  • Counter
  • Envelope Generator
  • OSC multi-transmit
  • Projector
  • Multi-mix
  • Ease In-Out (2D)
  • Inside Range
  • Limit-Scale Value
  • Shapes

The objects used within the Max / Msp patch include:

  • udpreceive
  • message boxes
  • unpack
  • scale
  • gate
  • toggle
  • cycle~
  • ezdac~

Outcome:
—I think that my game was well received. Everything about the experience taught me more about working within the limitations of Isadora. I would go back and change a few things about the interaction, like adding some idle music during the sections where the user needs to read. Or I might narrate those pages altogether to 'make' the user listen, rather than expect them to read the text to its full extent. I would also want to add some videos to the explanation screens, and having players 'catch' a few of the dots before moving forward seems like it would benefit the experience, rather than throwing them into it too quickly. All in all, I see myself using the program more in tandem with Max/MSP, and perhaps TouchDesigner, for future projects that could be installations!

-Taylor Olsen


Documentation Box Folder Link

You may access the video’s I took during the final, some docs that Oded has collected over the semester or upload any images you wish to share with the group here

https://osu.box.com/s/nzgzphid06c6htflrr980olnjkmw5dz1


Cycle 2: Choose your own Adventure

Cycle 2 of my Mission to Mars developed multiple endings and a switch to a slightly new narrative. Upon discovering that Mission to Mars is actually a Disney theme park ride turned film, funnily enough, I decided to switch the story. There is now a three-person mission that has gone astray, and it is up to volunteers from the audience to find them. Are they rogue? Are they lost? Are they being sabotaged?

Choice of SPACESHIP, TIME MACHINE, or SUBMARINE

This particular crew chose TIME MACHINE, which sent them on a journey to who knows what time, accompanied by this GLSL shader and Cyndi Lauper's sweet voice.

Time after Time
Small Section of Cycle 2

RUN-DOWN of the piece for the PERFORMERS:

LAYOUT of CYCLE 2 CHOREO

scenePRESHOW: set in three corners of space in chair pathways; television and basket preset
scenePREAMBLE: music starts // text on screens; EM TARA YILDIZ enter from three corners {5 walks getting excited, 2 walks trepidatious, 2 walks back track, 3 walks into trio Slice Phrase} slice/insert/round head/press down 2, 3; 3 rotations charging up/dev/kick thru pas de chat/swoop down/walk out; legs try to hug/double slap/fail; drone faces phrase; glob together fancy legs with head going straight frwd, down Left angle, Up above; then look out to audience above heads; {realize something wrong // we share something together // a secret} go to basket and pull three strings to bring basket to center; we lay down the strings and walk away from whence we came
sceneSTARYAY: audience volunteers and cheers
sceneFIRST SCENE: V.O. intro
sceneTITLES: assigned names for players // descriptions out in the space
sceneWHICH PATH: audience decides how to go
sceneSPACESHIP or TIME MACHINE or SUBMARINE: text scroll and transition; decide which noise to pick and who will volunteer
sceneSOUND WATCHER: audience picks a sound for the microphone to decide which character will interact with the hologram
sceneHOLOGRAMZ: TaraEmYildiz run into center light facing the volunteer who won; movement phrase: M with our bodies (Yildi in middle, Tara and Em make the slants down); pick up the phone, lost connection, hang it up with body; up and down, up down and around; FACE THE PERSON AND NOD; dissipate
sceneYOU: V.O.; find the light in the vehicle to find out who will volunteer
sceneVOLUNTEER: instructions
sceneHYPOTHESIS: flashlight decision
sceneMARS or IN THE HOLLOW EARTH or AT THE MALL: we celebrate and explain our body language from the hologram: YOU DID IT!!!
yayyy

MARS: We were approaching Mars when we lost contact with our base, and the turbulence was up and down until we crash landed.
HOLLOW EARTH: We were on a mission to the North Pole when we lost contact with our base, and as we flew over the pole we were sucked into the hollow earth.
AT THE MALL: Meet you at the MALL, I'm so glad you Called, we went Up and Down Up Down and Around.

Next Cycle will include ways to fail, ways to return to previous sections to make a new decision, a MIDI keyboard that will allow for an easier interaction with the system, and potentially more interaction by the live performers.


Cycle 2 – Audio Game

For Cycle 2:
—I decided to completely abandon my idea of using this project as a means to push my thesis work forward, and decided instead to use it as a way to explore an audio-driven game. The game itself is an interactive piece that uses the body as a "human tuning fork". The programs used are Isadora and Max/MSP: Isadora sends OSC messages to Max, which then produces a sound based on those messages. My intention is to use body tracking from a Kinect, or Orbbec, to tune a pitch to match another "fixed" pitch (which would also be sent to Max).
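Isadora's OSC actors handle the wire format, but for reference, an OSC message is just a small UDP packet: a null-padded address string, a type-tag string, then big-endian arguments. A minimal sketch of what travels between the two programs; the port 7000 and the address /player/pitch are made-up values, not ones from the project:

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address, ",f" type tag, big-endian float32."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Send the player's current pitch to a Max patch listening on [udpreceive 7000]
msg = osc_message("/player/pitch", 440.0)
socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 7000))
```

In Max, the message would typically flow through udpreceive, unpack, and scale before reaching cycle~.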

To begin:
—I drew out my initial idea on a whiteboard of how the experience would start. This included:

  • Making a “Welcome” Screen.
  • Having the welcome screen introduce the concept of using your body to tune the pitch to match the fixed tone.
  • Introducing the concept of making it into a game
  • Showing the visuals for the game
  • Giving parameters for how the game will keep score, and how much time is allotted for each round.
  • Game-play screen with proposed visuals
  • Some math for how the determinate pitch will be associated with the fixed pitch

—For now there are 3 screens: Welcome, Goals, and Game-play. The welcome screen includes all of the introductory information for the game, including how the experience will play out and a small example. The goals screen shows what the player needs to do; it's a 3-out-of-5 win/lose setup. The game-play screen shows the timer, a visual representation of the countdown, and whether the player has won/lost a round.
—To elaborate, I proposed the idea to the group and got some feedback/questions in terms of game-play and visuals. The setup thankfully made sense to the group (and myself). The comments are as follows:
>>Game-play:

  • Do you get to choose the pitch you want to listen for?
  • How will you know when you’ve gained a point?
  • Is the pitch always the same? Or does it change?
  • Is our pitch lower or higher than that of the initial pitch to begin?

>>Visuals:

  • Will there be visual indicators of how close/far you are from the desired pitch?
  • How will these visuals be shown? Through a sine-wave graph? Through a visual bar?
  • Does the player have a visual to notate where they are in the space?
  • Is it necessary to have a timer AND a visual for the time? Is this too much? How could the screen space be used in a better way?
  • Is there a way that the X & Y coordinates could be represented as a visual? As in: if I am on the correct X/Y coordinate, something would show this correlation?

—My next steps include (in order, hopefully):

  1. Making the screens for each section
  2. Making the interactivity between Isadora and Max/MSP function correctly.
  3. Using a “Mouse Watcher” actor as the means to test the game-play.
  4. Using the math (Pythagorean theorem) to create the game-play screen pitch sounds. Distance from goal == pitch (scaled).
  5. Making the game-play function correctly; points, movement, etc.
  6. Using the top-down kinect/orbbec depth sensor and the blob-tracking function to take the place of the “mouse-watcher” actor.
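Step 4's math can be sketched quickly: the Pythagorean distance from the player's position to the goal, scaled into a pitch. The unit-square stage and the 220–880 Hz range below are illustrative assumptions, not values from the project:

```python
import math

MAX_DIST = math.hypot(1.0, 1.0)  # farthest possible distance on an assumed unit-square stage

def pitch_for_position(x, y, goal_x, goal_y, lo=220.0, hi=880.0):
    """Pythagorean distance from the goal, scaled to a pitch: closer -> higher."""
    dist = math.hypot(x - goal_x, y - goal_y)      # sqrt(dx^2 + dy^2)
    closeness = 1.0 - min(dist / MAX_DIST, 1.0)    # 1.0 on the goal, 0.0 at the far corner
    return lo + closeness * (hi - lo)

print(pitch_for_position(0.5, 0.5, 0.5, 0.5))  # on the goal -> 880.0
print(pitch_for_position(0.0, 0.0, 1.0, 1.0))  # far corner -> 220.0
```

Swapping the mouse-watcher coordinates for blob-tracking coordinates (step 6) would not change this function at all, which is the point of testing with the mouse first.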

-Taylor Olsen


Cycle 2, Stress Relief Environment

For cycle 2, I wanted to clean up my TouchOSC program and make sure it communicated robustly with Isadora. I worked with Alex on how Isadora could send messages back to TouchOSC. This was so useful because it allowed me to change the page of the TouchOSC interface with the scene in Isadora, essentially allowing the participant to follow along with the program more easily.

This is the first page of the program. When the participant presses the green button, the OSC interface sends a message to Isadora.
Through the 'OSC Listener' actor, Isadora listens for messages from TouchOSC. In this case, the green button sends a trigger through OSC Listener to the 'OSC Multi Transmit' and 'Jump' actors. OSC Multi Transmit lets Isadora send a message back to TouchOSC, which I used to make the next page of the TouchOSC program visible. I also programmed Isadora to jump scenes on the same trigger. So essentially, at the press of the green button, Isadora tells TouchOSC to turn the page and also moves its own program along automatically.
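In other words, the green-button round trip is two small UDP packets. A toy sketch of that loop on localhost, assuming TouchOSC's convention that a message addressed to /2 flips to page two; the button address /1/go and the ports here are made up for illustration:

```python
import socket

def osc_message(address: str) -> bytes:
    """Encode a no-argument OSC message (a TouchOSC page change takes none)."""
    pad = lambda b: b + b"\x00" * (4 - len(b) % 4)  # null-pad to a 4-byte boundary
    return pad(address.encode()) + pad(b",")

def osc_address(packet: bytes) -> str:
    """Pull the address pattern out of an incoming OSC packet."""
    return packet.split(b"\x00", 1)[0].decode()

# Stand-in for Isadora's OSC input port
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.settimeout(2.0)
listener.bind(("127.0.0.1", 0))
port = listener.getsockname()[1]

# Stand-in for the iPad running TouchOSC: press the green button
touchosc = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
touchosc.settimeout(2.0)
touchosc.sendto(osc_message("/1/go"), ("127.0.0.1", port))

# "OSC Listener" matches the address; "OSC Multi Transmit" replies with a page flip
packet, addr = listener.recvfrom(1024)
if osc_address(packet) == "/1/go":
    listener.sendto(osc_message("/2"), addr)

reply, _ = touchosc.recvfrom(1024)
print(osc_address(reply))  # /2
```

Inside Isadora, the same trigger also fires the Jump actor, so the page change and the scene change stay in sync.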
This is the second page of the OSC program that would appear after pressing the green button on the first “Welcome” page.

Also during cycle 2, I collected most of my content for the music and background sound choices. This wasn't hard, but it was really time consuming as I tried to accommodate various preferences for relaxing music.

I wasn't able to present my program as I had hoped for cycle 2 because I am currently struggling to find a common network for my laptop and my TouchOSC interface to work through. Thankfully, I will most likely get them running on a separate router for cycle 3 and the final!


Cycle 2

Booo, tech sucks. Not really, but I was really excited for this cycle because I had figured out the most technologically advanced thing I've ever done in my life to implement in this cycle, only to have my presentation hampered by significant tech problems, from sound to video quality. I think everyone was able to get the gist of my ideas, though, so it's not the end of the world, but it's still disappointing.

Nevertheless, I was super proud of myself for learning basic electrical engineering and how to close MakeyMakey circuits to use in my Isadora patch. Phase 3 of my project brings three videos of my mom and me at home (or rather it will, as I'm making those videos over Thanksgiving break when I go home), and participants will be able to toggle between the three videos by interacting with one of my dolls (also to be retrieved from home! Shout-out to Baby Jesus for standing in for us from Brianna Rae Johnson's desk) and various Black hair products that I have set up on a table. The doll and products are outfitted with MakeyMakey buttons connected to keyboard watcher actors in Isadora, allowing participants to switch between each of the three video-playing scenes.

I learned a lot about Isadora and actors in constructing these mechanics. After I spent two hours figuring out how to make Isadora switch between three videos at random all within the same scene, Alex came to me at the next class with a much simpler option, which is how we ended up putting them in separate scenes and using a keyboard watcher to trigger a Jump++ actor to move forward and backward between scenes.
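Since the MakeyMakey simply registers as keyboard input, the whole mechanic reduces to "a key press steps the scene index", which is what the Keyboard Watcher feeding Jump++ does. A sketch of that logic; the scene names and key names are placeholders, not the actual patch values:

```python
SCENES = ["video_mom_1", "video_mom_2", "video_mom_3"]  # placeholder scene names

def jump(current: int, key: str) -> int:
    """Step forward/backward through the scene list, wrapping at the ends,
    like a Keyboard Watcher driving a Jump++ actor."""
    step = {"right": 1, "left": -1}.get(key, 0)  # unknown keys do nothing
    return (current + step) % len(SCENES)

scene = 0
scene = jump(scene, "right")        # -> scene 1
scene = jump(scene, "right")        # -> scene 2
scene = jump(scene, "right")        # wraps back -> scene 0
print(SCENES[jump(scene, "left")])  # video_mom_3
```

Keeping each video in its own scene means the Jump++ actor does all the bookkeeping; no random-selection logic is needed inside a single scene.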

They all look like this, just playing different videos.

Once I figured out the patchwork, making the buttons was quite simple. I just used some pieces of packing cardboard from a recent Ikea purchase, wrapped them in foil, and clipped the alligator end of each wire to the foil. Baby Jesus (who will be replaced by one of my own Barbies in C3) served as the ground, so by picking up the doll and touching one of the hair products, you could switch the video that was playing.

One additional change I made was removing the written option for the "Thank a Black Woman" phase that opens the experience. I thought it would better streamline entry into the space. A "thank you" is how you gain access to the space.

Feedback was still very good. The "thank you" section continues to resonate deeply. There were more people in the space thanks to visiting guests from the Sonder film, so the playback moment was longer, which was really great. I'm still toying with the idea of pre-recording thank-yous to intersperse throughout Phase 3.

Things I Need to Do/Make for Cycle 3:
* Make larger “Thank a Black Woman” signage for entrance by microphone
* Projection map videos
* Enhance hair product station (table cloth, pictures, etc) and make instructional signage
* Shoot film of my mom and me and edit/compile the footage


Cycle 1, Stress Relief Environment

I am developing an installation that leads a participant through a "create your own" stress relief environment and guided meditation. The concept of this project is to provide a brief escape from the stresses of daily life to help individuals recenter and allow themselves a break. This will be possible by creating a TouchOSC interface on an iPad that communicates with Isadora through the relaxation program I will develop. The program will prompt the participant with questions that let them choose music, calming videos, and a nature sound/background noise. I envision the program being played inside a fort or tent installation with comfy pillows and blankets for a better sense of security and privacy from the outer world.

Here is a sketch of what I intend my stress relief tent to look like. Pillows and blankets are inside with the Touch OSC interface. Projectors and the Isadora program will be running on the exterior (out of sight).

During cycle 1, I set out to begin creating the TouchOSC program that communicates with Isadora. I successfully created the Isadora patch and figured out how to get TouchOSC to communicate with Isadora. The most helpful feedback was about how to prompt the participant with the questions. Instead of projecting the questions and having the participant respond on the OSC interface, which forced them to constantly look up and down to know what to do, my classmates suggested that the questions could appear on the OSC interface itself, making it easier to understand.

These are my notes and prompted questions for the program. They aren't meant to be easily seen, but to show that there are about six questions for the participant.

In addition, I realized that I needed Isadora to send messages back to OSC to have a more robust program. This was something I knew I needed to figure out for cycle 2.


CYCLE 1: Dancing Club

In cycle 1, I made a "Dancing Club" through which the relationship between the digital body and the actual body was experienced. I hoped people could put focus on their bodies in this relaxed and casual club atmosphere.

There was an entrance for people who had a ticket to go into the club. To get a ticket, people had to choose one color and type it into the Isadora interface.

QUESTION INTERFACE
TICKET

When people arrived at the club (the stage), the projector started to play a video that combined my Pressure Project 1 with movement I choreographed. I didn't mean for people to learn these dance phrases, but they did.

As instructed on the ticket, people had to follow and stay in their color of light. This meant I "hired" a lighting designer to manually control four lights: blue, yellow, red, and green. When the light changed, people had to walk to the new spot.

As the music and video faded out, the last scene was people's improvisation. I used the Kinect sensor to capture people dancing on the stage and projected them at the same time. They gradually recognized that it was themselves on the projection screen and started to play with it. And still, they had to pay attention to changes in their color of light.