Cycle III – Bar: Mapped by Alec Reynolds

The Experience

Cycle III was the final presentation stage of the project. Bar: Mapped is a demonstration of an elevated nightclub bar experience that would be treated as a special attraction at the venue. The purpose of this project was to demonstrate a pragmatic and engaging use case for projection mapping. Below I will walk you through the intention behind the Isadora side of the experience:

  • A user will order a drink from the themed menu. The drink will be prepared, and then the bartender will press that drink's button on the iPad kiosk, which will trigger the desired scene in the bar area.
    • The iPad communicates with the patch via the OSC protocol (using the TouchOSC app) over a network shared by the iPad and the computer running Isadora.
  • When the Isadora patch receives the input from the iPad, a trigger is sent from the TouchOSC actor to a scene-jump actor. The patch contains one scene for the Main bar plus one scene for each of the six themed cocktails.
  • Once on the desired scene there are four main pieces of media that are playing:
    1. A large video is mapped to the screen
    2. A smaller video is mapped to the box that the bottles rest on
    3. The smallest video is mapped to four of the five liquor bottles. The fifth bottle is mapped with an image of itself; the design intent was to highlight the main ingredient of that scene's cocktail
    4. A sound effect that complements the theme is played
  • After the large video reaches the end of its loop, the movie player actor triggers a scene jump back to the Main bar scene and a new drink can be ordered.
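To make the transport concrete: TouchOSC buttons send plain OSC messages over UDP. The sketch below is a stdlib-only Python approximation of what travels over the wire when a drink button is pressed — the address, argument value, and IP/port are placeholder assumptions, not the values from my actual patch:

```python
import socket
import struct

def osc_message(address, *args):
    """Encode a minimal OSC message: padded address, type tags, big-endian args."""
    def pad(raw):
        # OSC strings are null-terminated, then padded to a multiple of 4 bytes
        return raw + b"\x00" * (4 - len(raw) % 4)

    tags, payload = ",", b""
    for arg in args:
        if isinstance(arg, float):
            tags += "f"
            payload += struct.pack(">f", arg)  # 32-bit big-endian float
        else:
            tags += "i"
            payload += struct.pack(">i", arg)  # 32-bit big-endian int
    return pad(address.encode()) + pad(tags.encode()) + payload

# A hypothetical button press for one drink (address and value are made up):
packet = osc_message("/bar/drink3", 1.0)

# Sending it to the machine running Isadora (placeholder IP and port):
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("192.168.1.10", 1234))
```

Isadora's OSC listener only needs to be set to the same port; the incoming value can then drive the scene-jump trigger.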

Lessons Learned

Isadora is an extremely sensitive and temperamental piece of software. For my Cycle III performance, I unfortunately was not able to present my work the way I intended. Due to what I believe was a performance issue caused by the high variability in my media's resolutions and codecs, Isadora would reliably crash every time I tried to open a stage in full-screen mode or map any bottle. To salvage a performance, I loosely mapped the Main bar scene bottles and fit my small stage to full screen. TouchOSC still worked and demonstrated the triggering well, but nothing was mapped.

Beyond the complications with Isadora, I came away with a fundamental appreciation for how much skill projection mapping takes. Manipulating the projector's angled light onto a physical asset is nothing short of an artistic craft. Having just scratched the surface of that complexity with small curved bottles, I now have a deeper appreciation for large-scale projections, the likes of which you see at Disney parks and large municipal events.

If I Had To Do It Again

If I were to do this project again, there are a few steps I would take to ensure the performance was more successful. The primary change is that I would ensure all my media and content were formatted identically so that Isadora could process them without crashing. Next, I would try to communicate more clearly that Bar: Mapped is in its own room/section of the nightclub and that the music heard there is bleed-over from the main dance floor. In tech rehearsal the music was prepared so that it played at a quieter volume from only two speakers at the rear of the performance space, from the perspective of Bar: Mapped. During the Cycle III performance, the music was blaring from all channels, making it sound as if the room itself were the nightclub. The final lesson I learned is that in an environment like that, the people who want to be there are going to have fun, and the people who got dragged there are the ones you have to work hardest on to ensure they have a good time. My hope is that the extra layer of immersion the projection mapping provides would fascinate the hesitant patron and keep them drinking and dancing.

Official Website of TAO Nightclub | Chicago
Specific Projection Mapping Example Using a Wall at Tao Nightclub Chicago

Cycle 3 – Seasons – Min Liu

Building on the second cycle, I further developed the experiential system for children to learn about geography and seasons in this final cycle. I built a spherical interface with a Makey Makey, copper tape, and conductive paint. Participants can touch specific locations on the globe and trigger different interactive environments featuring local geographical and climatic characteristics.

I searched for countries and places that are currently (late April) in spring, summer, autumn, and winter. I chose Tokyo (Japan), the Amazon rainforest (Brazil), Mount Macedon (Australia), and Yellowknife (Canada) respectively and marked them on the globe. I used a Kinect as the input device this time, and it worked much better than the webcam. In the autumn and winter scenes I had created in the previous cycle, participants can interact with falling leaves and snow. I built the other two scenes this time. In the spring Tokyo scene, the Kinect tracks people's heads, and sakura flowers fall from right above participants' heads on the screen. In the summer Amazon rainforest scene, rain falls on people and bounces off. I will explore more interactive visual effects in the future. I also added audio to each scene, which made the experience more immersive. Here are the scenes I built in TouchDesigner:
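As a rough illustration of the head-tracking logic (TouchDesigner scripting is Python), the petal spawning in the Tokyo scene amounts to something like the sketch below; the coordinate convention, names, and speed ranges are my own placeholders, not values from the actual network:

```python
import random

def spawn_petal(head_x, head_y, jitter=0.05):
    """Create one falling sakura petal just above a tracked head.

    Coordinates are normalized 0..1 with (0, 0) at the top-left;
    the jitter and fall-speed ranges are illustrative guesses.
    """
    return {
        "x": head_x + random.uniform(-jitter, jitter),   # slight horizontal spread
        "y": max(head_y - 0.1, 0.0),                     # start just above the head
        "fall_speed": random.uniform(0.002, 0.006),      # units per frame
    }

def step(petal):
    """Advance one petal by one frame of falling."""
    petal["y"] += petal["fall_speed"]
    return petal
```

In the real scene the tracked head position would come from the Kinect CHOP each frame, and petals past the bottom of the screen would be recycled.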

In the final presentation, I was happy that participants explored the physical globe and engaged with the different virtual environments both individually and together. They said the experience was fun and educational, and that they didn't even notice they were learning. I also think children would be excited about this experience. It's a very COSI thing.

Here are some points to improve about this system. First, there needs to be some ambient light to make people brighter and more detectable, improving the sensitivity of the interaction. Second, the spring Tokyo scene is too bright, and the flower interaction is too slow. Third, there could be some explanation so that participants know they are experiencing the weather in each area at the current time.

Here is the recording of the experience:


Cycle 3 – Patrick Park

For the last cycle I prepared a play space where the music evolves by position or activates one-shot samples. The song I composed plays throughout the entire experience. If the audience moves closer to the camera, the voice switches from regular to pitched-up; vice versa, when the participant moves away from the camera, the singing switches back to normal. In the mid space there were triggers that played notes in the scale the song was written in. In front there were triggers that turned on echoes and delays (although they did not activate this time). In the back there were 808 bass drums and a snare sound. My plan was to create an "out of bounds" area where every track would play in reverse. This did not happen because getting the main interactive functions working took a long time. In this cycle there was more excitement and urge to interact in the room than in the last couple of cycles. Making sound triggers that interact together with a song is a fun idea. I hope to keep developing this.
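The spatial logic boils down to thresholding the participant's distance from the camera. A small sketch of that mapping — the thresholds (in meters) and zone names are guesses for illustration, not the values from my patch:

```python
def zone_for_depth(depth_m, near=1.0, far=2.5):
    """Map the participant's distance from the camera to a playback zone.

    The near/far thresholds are placeholder assumptions.
    """
    if depth_m < near:
        return "front"   # pitched-up vocal, echo/delay triggers
    if depth_m < far:
        return "mid"     # one-shot notes in the song's scale
    return "back"        # 808 bass drum and snare
```

Each frame, the tracked depth would be run through this function, and a zone change would fire the corresponding triggers in the audio patch.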


Cycle 3 by Jenna Ko

I used Isadora and an Astra Orbbec for an experiential media design where a tunnel zooms in and out as the audience walks toward the projection surface.

The tunnel represents my state of mind as I mentally suffer from the news of the Russian invasion of Ukraine, the prosecution reform bill in Korea, and marine pollution. The content begins with an empty tunnel. As the participant walks toward the projection surface, the news content fades in. I wanted to articulate my feelings of hopelessness and powerlessness through the monochromatic content that fills the tunnel. As the participant walks towards the end of the tunnel, hopeful imagery of faith in humanity fades in. That content is in color, contrasting with the tunnel interior.

Finally, I was able to implement the motion-sensing function with my media design. If I were to do this again, I would set up the projector in a physical tunnel for a more immersive user experience.


Cycle 2 by Jenna Ko

I used Isadora and an Astra Orbbec for an experiential media design where the projection content moves backward through Korean political history as a participant walks towards the motion detector.

The content begins with superficial imagery of modern Korea: K-pop, kimchi, and elections. It then reverses through the 2017 impeachment and candlelight vigil, the democratization movement of the 1980s, President Park Jeong-hee and the Miracle on the Han River, and the 6.25 Korean War, ending with the Korean independence movement. As political polarization intensifies, I wanted to remind viewers of each party's contributions to Korean democracy and prosperity, to encourage more respect and gratitude towards each other. This piece reflects my hope that politicians and supporters on each side will cooperate to make a better country rather than condemning and bogging each other down.

I used the Yinyang symbol of the Korean national flag to represent each political party. The Yinyang symbol represents the complementary nature of contrary forces and the importance of their harmonization. I translated the meaning to political colors, the red representing the conservative party and the blue symbolizing the liberal party. Synchronizing with the triumphing side, the yin and yang alternate as the participant walks towards the motion sensor until the participant reaches the Korean independence movement, where the symbol is complete.

I could not make the motion-sensing part work, so I triggered the scenes manually for the presentation and used the motion-sensing function for Cycle 3.


Cycle 2 Documentation – Patrick Park

In Cycle 2 I was able to successfully capture motion-tracking data in Isadora and send it to MAX/MSP to play the audio. Five tracks built into the whole song: I had five sections, with each track triggered by the participant's presence in front of the camera. Guitar, bass drum, snare, hi-hat, and a pitched-up voice played when the participant stood in a given area. There were triggers that activated delays, echoes, and distortion, but they were not set off during the experience. This was better designed than the last cycle; nonetheless, I could see room for improvement. I realized that having sound play only some of the time can be very frustrating; the song should play throughout the experience, and the sound should not stop at all. Rather than placements triggering the tracks, the song should be playing all the time. For the next cycle I plan on adding one-shot instrument samples that can be activated and played along with the song in the background.
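The five-section layout can be sketched as a simple mapping from the participant's horizontal position to a track index; the normalized coordinates and track order below are illustrative assumptions, not my actual configuration:

```python
def section_for_x(x_norm, n_sections=5):
    """Map a normalized horizontal position (0.0-1.0) to one of n equal sections."""
    clamped = min(max(x_norm, 0.0), 1.0)
    # Clamp the top edge so x_norm == 1.0 still lands in the last section
    return min(int(clamped * n_sections), n_sections - 1)

# Hypothetical layout: standing in a section un-mutes that track in MAX/MSP
TRACKS = ["guitar", "bass_drum", "snare", "hi_hat", "pitched_voice"]
```

In practice the x position from Isadora's tracking would be sent each frame (e.g. over OSC) and the resulting index would gate the matching track in MAX/MSP.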


Cycle 1 Documentation

For the first cycle I envisioned a motion-detected soundscape. The point furthest from the motion camera played a beach-ocean sound; as the audience got closer, the beach sound changed to an underwater sound. Finally, when the participant got close to the camera, it activated a salsa song. Through this experiment I realized that having just sound respond to movement is not too stimulating. There were only three sound sources, and I feel I could have done more with this. Part of the issue that came up while working on Cycle 1 was that Isadora could not handle audio plugins being processed through the program. In the next cycle, I intend to take motion-capture data from Isadora and send it to MAX/MSP, where I can work with more audio manipulation.
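One way to phrase the beach-to-underwater transition is as a distance-driven crossfade between two layers. This sketch uses assumed distance thresholds rather than the patch's real values:

```python
def layer_gains(depth_m, near=1.0, far=3.0):
    """Return (close_gain, far_gain) for a linear crossfade by distance.

    near/far are assumed thresholds in meters: inside `near` the close layer
    (e.g. underwater) is at full volume; beyond `far`, the beach-ocean layer is.
    """
    t = (depth_m - near) / (far - near)
    t = min(max(t, 0.0), 1.0)   # clamp to the crossfade range
    return (1.0 - t, t)
```

A third threshold closer than `near` could then hard-trigger the salsa song on top of the crossfaded bed.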


Cycle 3 – Yujie

In Cycle 3, I kept the key ideas and media design developed during Cycles 1 and 2. Two things are still central to my project. The first is using fragmented body parts of the culturally marked dancer to resist the idea of returning to the whole, which represents a so-called cultural essence. The whole is also easily categorized into racial stereotypes; the fragmented body parts can then be seen as a challenge to that. The second is continuing to offer a negative mode of seeing. The inverted colors (the negatives) can be seen as a metaphor for undeveloped images, or a hidden "truth" in the darkroom, for one to see differently.

I also added two things in the final cycle: the preshow projections and a voice-over from my interview with the dancer. In the feedback after the showing, I received helpful comments from the class on these two add-ons. I was told that they appreciated how the preshow projections allowed them to explore the space, and that the circular setting forms a more intimate relationship between the dancer and the viewer. Also, the contradictions of performing the Japanese body discussed in the voice recording have a guiding effect, helping the viewer grasp my intention.

Here are some Isadora patches:

Here is the video documentation of the final performance: