First Time Round with Tea

I had a very difficult time getting started with my project. I had fairly clear ideas about what I wanted but did not know how to express them in an interactive format. At its base, my project was trying to explore storytelling as an act of time travel. We experience the world linearly, but we categorize it fairly non-sequentially, skipping from detail to detail and making associations with other times. In this way our bodies experience time differently than our minds do, and differently than the way we interpret the world. I have been dwelling on these thoughts for a little while now. There is an idea in theater that the show starts at the first pamphlet and never really ends. As performers we can control what the audience sees, but everything that is not the performance ultimately has a massive effect on the nature of the work.

From these fairly heady notions I wanted to create a format for telling stories free from my control and manipulation of time. I wanted the viewer to experience it as a thought, disconnected from a temporal state. My first project was more about achieving this shuffled time state than about the interactions with the audience. I filmed short, three-second clips of me making tea and of the images I saw while making tea. I then made an Isadora patch that would play these clips at random, either forward or backward.

At this stage in the process the choice of tea was fairly arbitrary. I chose it because of its sequential nature and because it was a process that was familiar to me and to the viewer. I was committed to it being a performative work, and so I read a script while the videos played. The script ran counter to the objective view presented by the camera: it featured nuggets of information that I knew about tea, interlaced with instructions for making tea with the steps reversed.

During my lab time I toyed with projection mapping into corners. I was interested in the effect that would have on immersing the audience in the work. I think this will be a rabbit hole I jump down in the future, but it did not serve this work in an intentional way.

After the performance I got feedback that allowed me to focus more on the content. For my next cycle I wanted to try to make the work more interactive: not necessarily interactive with the audience, but interactive in the sense that content would be generated during the performance.


Peter’s Painting Pressure Project (3)

For our final pressure project, we were told to use physical “dice” in some way and create an interactive system.  I took that prompt and interpreted it to make use of coins!

So the first obstacle to overcome was figuring out how to get coins to communicate digital signals to my media system.  I really wanted to encourage users to flick coins from afar, so I got a box for coins to be tossed into.  Then, I lined the inside of the box with tinfoil strips, alternating between strips connected to ground and strips connected to digital inputs on an Arduino microcontroller.  Using this, every time a coin landed in the box and bridged two strips, it would output a potentially different keyboard button press to the host computer, appearing seemingly random!
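For reference, here is a minimal sketch of what the Arduino side could look like, assuming a Keyboard-capable board (e.g. a Leonardo or Micro) and made-up pin and key assignments; the actual wiring and code may have differed:

```cpp
// Minimal sketch (assumes a Keyboard-capable board such as a Leonardo/Micro).
// Each foil "input" strip goes to one of these pins; the strips between them go to GND.
// A coin bridging an input strip to a ground strip pulls the pin LOW, and we send
// one keystroke per new contact so the host media system sees a key press.
#include <Keyboard.h>

const int NUM_STRIPS = 6;                        // hypothetical: six input strips
const int stripPins[NUM_STRIPS] = {2, 3, 4, 5, 6, 7};
const char stripKeys[NUM_STRIPS] = {'a', 'b', 'c', 'd', 'e', 'f'};
bool wasLow[NUM_STRIPS] = {false};

void setup() {
  for (int i = 0; i < NUM_STRIPS; i++) {
    pinMode(stripPins[i], INPUT_PULLUP);         // internal pull-up: idle reads HIGH
  }
  Keyboard.begin();
}

void loop() {
  for (int i = 0; i < NUM_STRIPS; i++) {
    bool isLow = (digitalRead(stripPins[i]) == LOW);
    if (isLow && !wasLow[i]) {                   // new contact on this strip
      Keyboard.write(stripKeys[i]);              // one keystroke per coin contact
    }
    wasLow[i] = isLow;
  }
  delay(10);                                     // crude debounce between scans
}
```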

Next I needed to decide how the system should respond to the coin-tossing input.  I really enjoy the idea of splattering paint on a canvas, a la Jackson Pollock, and wanted to capture something similar from the chaos and unpredictability of tossing coins.  So depending on where the coin lands, a colour associated with that location begins radiating from a random position on-screen.  This creates a really interesting splattering effect when the coin initially lands, as it rattles the other coins in the box, outputting a wide range of signals in rapid succession.
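As a rough analogy for the splatter logic (the real response lived in the media system, and the palette and numbers here are made up): each incoming key maps to a colour, and each press spawns a splat at a random position whose radius grows every frame.

```cpp
// Illustrative only: key -> colour mapping plus growing "splats" at random positions.
#include <cstdlib>
#include <iostream>
#include <map>
#include <vector>

struct Splat { float x, y, radius; unsigned int rgb; };

// Hypothetical palette: which colour each incoming key stands for.
std::map<char, unsigned int> keyColour = {{'a', 0xE63946}, {'b', 0x457B9D}, {'c', 0x2A9D8F}};
std::vector<Splat> splats;

void onKey(char key, float screenW, float screenH) {
  if (!keyColour.count(key)) return;
  float x = screenW * (std::rand() / (float)RAND_MAX);   // random landing point on screen
  float y = screenH * (std::rand() / (float)RAND_MAX);
  splats.push_back({x, y, 0.0f, keyColour[key]});
}

int main() {
  for (char k : {'a', 'b', 'b', 'c'}) onKey(k, 1920, 1080);   // simulate a rattle of coins
  for (int frame = 0; frame < 3; frame++)
    for (auto &s : splats) s.radius += 2.0f;                  // colour "radiates" outward
  for (const auto &s : splats)
    std::cout << std::hex << s.rgb << std::dec << " splat at (" << s.x << ", " << s.y
              << ") radius " << s.radius << "\n";
}
```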

A neat observation made by the group was that it really encourages passersby to engage with the system as it produces dynamic and vibrant colours.  Since the main way to interface is tossing in coins, it would also turn a pretty nice profit, too!  Haha 🙂

paint


Pressure Project #3

At first, I had no idea what I was going to do for pressure project #3. I wasn’t sure how to make a reactive system based on dice. This led to me doing nothing but occasionally thinking about it from the day it was assigned until a few days before it was due. Once I sat down to work on it, I quickly realized that I was not going to be able to create a system in five hours that could recognize what people rolled with the dice, so I began thinking about other characteristics of dice and decided to explore those instead. I made a system with two scenes using Isadora and Max/MSP. The player begins by rolling the dice and following directions on the computer screen. The webcam tracks the player’s hands moving, and after enough movement it tells them to roll the dice. The loud sound of the dice hitting the box triggers the next scene, where various images of previous rolls appear and the numbers 1-6 flash randomly on the screen with increasing rapidity, while delayed and blurred images of the user(s) fade in, until the system sends us back to the first scene, where we are once again greeted with a friendly “Hello.”
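As a rough analogy for that loudness trigger (the actual version was built with Max/MSP and Isadora, and the threshold here is made up): take the RMS level of an incoming audio buffer and advance to the next scene when the dice hit the box loudly enough.

```cpp
// Illustrative loudness trigger: RMS of a sample buffer vs. a hypothetical threshold.
#include <cmath>
#include <iostream>
#include <vector>

const double kRmsThreshold = 0.3;   // made-up value; in practice tuned to the room and mic

double rmsLevel(const std::vector<double>& samples) {
  double sum = 0.0;
  for (double s : samples) sum += s * s;
  return samples.empty() ? 0.0 : std::sqrt(sum / samples.size());
}

int main() {
  std::vector<double> quiet(512, 0.01), diceHit(512, 0.5);   // fake audio buffers
  for (const auto& buffer : {quiet, diceHit}) {
    if (rmsLevel(buffer) > kRmsThreshold)
      std::cout << "Loud hit detected: jump to scene 2\n";
    else
      std::cout << "Below threshold: stay in scene 1\n";
  }
}
```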

The reactions to this system surprised me. I thought that I had made a fairly simple system that would be easy to figure out, but the mysterious nature of the second scene had people guessing all sorts of things about my project. At first, some people thought that I had actually captured images of the dice in the box in real time, because the first images that appeared in the second scene were very similar to how the roll had turned out. Overall, the reaction seemed very positive, and people showed a genuine interest in it. I would consider going back and expanding on this piece, exploring the narrative a little more. I think that it could be interesting to develop the work into a full story.

Below are several images from a performance of  this work, along with screenshots of the Max patch and Isadora patch.

screenshot-2016-11-03-16-14-56 screenshot-2016-11-03-16-14-51 screenshot-2016-11-03-16-14-38 screenshot-2016-11-03-16-14-32 screenshot-2016-11-03-16-14-23 screenshot-2016-11-03-16-14-20 screenshot-2016-11-03-16-14-12 screenshot-2016-11-03-16-14-07 screenshot-2016-11-03-15-32-50 screenshot-2016-11-03-16-15-09 screenshot-2016-12-09-19-08-44 screenshot-2016-12-09-19-08-40


Cycle 2_Taylor

This patch was created for use in my rehearsal, to give the dancers a look at themselves while experimenting with improvisation to create movement. During this phase of the process we were exploring habits in relation to limitations and anxiety. I asked the dancers to think about how their anxieties manifest physically, calling up those feelings of anxiety and using the patch to view themselves in real time, delayed time, and various freeze frames. From this exploration, the dancers were asked to create solos.

For cycle two, I worked out the bugs from the first cycle.
The main difference for this cycle was working with the Kinect, using its depth camera to track the space and shift from the scenic footage to the live-feed video, instead of using the live feed and tracking brightness levels.
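As a rough analogy for what that depth-camera trigger is doing (this is not from the actual patch, which used Isadora's tools; it assumes a depth frame in millimetres, e.g. 512×424 as from a Kinect v2, and made-up thresholds): count how many pixels fall inside an active zone, and switch to the live feed when enough of the space is occupied.

```cpp
// Illustrative depth-based presence check with hypothetical thresholds.
#include <cstdint>
#include <iostream>
#include <vector>

const int WIDTH = 512, HEIGHT = 424;
const uint16_t NEAR_MM = 500, FAR_MM = 2500;   // hypothetical active zone (depth in mm)
const int MIN_PIXELS = 4000;                   // hypothetical presence threshold

bool someoneInSpace(const std::vector<uint16_t>& depthFrame) {
  int count = 0;
  for (uint16_t d : depthFrame) {
    if (d > NEAR_MM && d < FAR_MM) count++;    // pixel falls inside the zone
  }
  return count > MIN_PIXELS;
}

int main() {
  std::vector<uint16_t> emptyRoom(WIDTH * HEIGHT, 4000);   // everything far away
  std::vector<uint16_t> occupied(WIDTH * HEIGHT, 4000);
  for (int i = 0; i < 10000; i++) occupied[i] = 1200;      // a body inside the zone
  std::cout << "empty room -> " << (someoneInSpace(emptyRoom) ? "live feed" : "scenic footage") << "\n";
  std::cout << "occupied   -> " << (someoneInSpace(occupied) ? "live feed" : "scenic footage") << "\n";
}
```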

screen-shot-2016-12-03-at-10-14-57-am

I also connected toggles and gates to a multimix to switch between pictures and video for the first scene, using pulse generators to randomize the images being played. For rehearsal purposes, these images are content from the dancers’ lives, a technological representation of their memories. I am glad I was able to fix the glitch with the images and videos; before, they were not alternating. I was also able to use this in my previous patch, along with the Kinect depth camera, to switch the images based on movement instead of pulse generators.

screen-shot-2016-12-03-at-10-16-30-am

screen-shot-2016-12-03-at-10-15-34-am

Cycle 2 responses:
The participants thought it was cool that they did not know the system was candidly recording them. This was nice to hear because I had changed the amount of video delay so that the past self would come as more of a surprise. I felt that if the participant was unaware of the playback of the delay, their interaction with the images/videos at the beginning would be more natural and less conscious of being watched (even though our class is also watching the interaction).
Participants (classmates) also thought that more surprises would be interesting, like adding filters such as dots or explosions to the live video feed, but I don’t know how this would fit into my original premise for creating the patch.
Another comment that I wrote down, though I am still a little unsure what they meant, dealt with the placement of performers and asked whether multiple screens might be effective. I did use the patch projected on multiple screens in my rehearsal. It was interesting how the performers were very concerned with the stages being produced, letting that drive their movement, but were also able to stay connected with the group in the real space because they could see the stages from multiple angles. This allowed them to be present in both the virtual and the real space during their performance.
I was also excited about the movement of participants that was generated. I am becoming more and more interested in getting people to move in ways they would not normally, and I think with more development this system could help achieve that.

link to cycle 2 and rehearsal patches: https://osu.box.com/s/5qv9tixqv3pcuma67u2w95jr115k5p0o (also has unedited rehearsal footage from showing)


Cycle 1… more like cycle crash (Taylor)

The struggle.
So, I was disappointed that I couldn’t get the full version (with projector projecting, camera, and a full-bodied person being ‘captured’) up and running. Just last week I did a special tech run two days before my rehearsal using two cameras (an HDMI-connected camera and a webcam). I got everything up, running, and attuned to the correct brightness on Wednesday, and then on Friday it was struggle-bus city for Chewie; Izzy was saying my files were corrupted and didn’t want to stay open. Hopefully, I can figure out this wireless thing for cycle 2, or maybe start working with a Kinect and a cam?…
The patch.
This patch was formulated from, and in conjunction with, PP(2). It starts with a movie player and a picture player switching on and off (alternating) while randomly jumping through videos/images. Recently, though, I am realizing that it is only doing one or the other, so I have been working on how the switching back and forth between the two players works (suggestions for easier ways to do this are welcome). When a certain range of brightness (or amount of motion) is detected from the Video In (fed through Difference), the image/video projector switches off and the three other projectors switch on [connected to Video In – Freeze, Video Delay – Freeze, and another Video In (when the other two are frozen)]. After a certain amount of time the scene jumps to a duplicate scene, ‘resetting’ the patch. To me, these images represent our past and present selves but also provide the ability to take a step back, or step outside of yourself, to observe. In the context of my rehearsal, for which I am developing these patches, this serves as another way of researching our tendencies/habits in relation to inscriptions/incorporations on our bodies and the general nature of our performative selves.
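Just to illustrate what the Difference step is doing under the hood (a rough C++ analogy, not anything from the actual Isadora patch, with a made-up threshold): sum the absolute pixel difference between the current and previous frames, and treat a large total as "enough motion" to switch projectors.

```cpp
// Illustrative frame-differencing motion check over greyscale buffers.
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <vector>

const long MOTION_THRESHOLD = 200000;   // hypothetical; tuned by eye in practice

long frameDifference(const std::vector<uint8_t>& prev, const std::vector<uint8_t>& curr) {
  long total = 0;
  for (size_t i = 0; i < prev.size() && i < curr.size(); i++) {
    total += std::abs((int)curr[i] - (int)prev[i]);
  }
  return total;
}

int main() {
  std::vector<uint8_t> still(320 * 240, 100);
  std::vector<uint8_t> moved(320 * 240, 100);
  for (int i = 0; i < 20000; i++) moved[i] = 220;    // a region of the frame changed
  bool switchProjectors = frameDifference(still, moved) > MOTION_THRESHOLD;
  std::cout << (switchProjectors ? "motion detected: freeze/delay projectors on\n"
                                 : "no motion: keep random images playing\n");
}
```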
The first cycle.
Some comments that I received from this first cycle showing were “I was able to shake hands with my past self” and “I felt like I was painting with my body”, and people were surprised by their past selves. These are all in line with what I was going for; I even adjusted the frame rate of the Video Delay by doubling it right before presenting because I wanted this past/second self to come as more of a surprise. Another comment I received was that the timing of the images/videos was too quick, but as people experimented and the scene regenerated they gained more familiarity with the images. I am still mulling this one over. I made the images quick on purpose, so the dancers would only be able to ‘grab’ what they could from the image in a flash of time (which is more about a spurring of feeling than a digestion of the image). Also, the images used are all sourced from the performers, so they are familiar and already carry certain meanings for them… I don’t quite know how spectators will relate or how to direct their meaning-making in these instances… (ideas on this are also welcomed). I want to set up the systems used in the creation of the work as an installation that spectators can interact with prior to the performers performing, and I am still stewing on the through line between the systems… although I know it’s already there.
Thanks for playing, friends!!!
Also, everyone is invited to view the Performance Practice we are working on. It is on Fridays 9-10 in Mola (this Friday is rescheduled for Mon 11.14, through Dec. 2). Please come play with us… and let me know if you are planning to!


Wizarding Project – VR, Cycle 2

Things are coming along very well! An actual dungeon layout is coming into scope, puzzles are being created and spells are working as designed. Up until now the focus has been to create a proper interface as well as create the building-blocks required to make a real game. Now that the pieces are in place, real level designs can be created and a story constructed.

Another major focus is audio. The story, as it is now, is that the player enters the world and is told they are beginning their wizarding examination in order to become a proper sorcerer. The instructor (myself) will help them both within the experience and outside of it, using trigger-based sound cues inside and myself in character outside.

The final product is coming into view. I am excited.


Pressure Project 3 Rubik’s Cube Shuffle

For this project, I learned the difficulties of chroma detection. I was trying to create a patch that would play a certain song every time the die (a Rubik’s cube) landed on a certain color. Since this computer-vision logic depends highly on lighting, I tried to work in the same space (spiking the table and camera) so that my color ranges would be specific enough to achieve my goal. With Oded’s guidance, I decided to use a Simultaneity actor that would detect the Inside Ranges of two different chroma values through the Color Measure actor, which was connected to a Video In Watcher. I duplicated this setup six times, trying to use the most meaningful RGB color combinations for each side of the Rubik’s cube. The Simultaneity actor was plugged into a Trigger Value that triggered the songs through a Movie Player and Projector.

Later in the process I wanted to use just specific parts of the songs, since I figured there would not be a lot of time between dice rolls and I should put the meaning or connection up front. I did not have enough time to figure out multiple video players and toggles, and I did not have time to edit the music outside of Isadora either, so I picked a starting point in each song that worked relatively well to get the point across. However, this was more challenging when wrong colors were triggering songs. I feel like a little panache was lost by the system’s malfunction, but I think the struggle was mostly with the webcam’s constant refocusing, which forced the use of larger ranges. I am also wondering if a white background might have worked better lighting-wise. (Putting breaks of silence between the songs may also have helped people process the connections between colors and songs.) Still, I think people had a relatively good time.

I had also wanted the video of the die to spin when a song was played, but readjusting the numbers for the lighting conditions, which was done with Min/Max Value Holds to detect the range numbers, was enough to keep me busy. I chose not to write in my notebook in the dark and I do not aurally process well, so I am not remembering others’ comments.
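Outside of Isadora, the Color Measure → Inside Range → Simultaneity chain amounts to something like this (a rough C++ analogy with made-up ranges; the real numbers came from the Min/Max Value Holds under the room’s lighting): a cube face matches only when two measured channels fall inside their ranges at the same time, and the first matching face picks the song.

```cpp
// Illustrative "two channels in range at once" check per cube face.
#include <iostream>
#include <string>
#include <vector>

struct Range { double lo, hi; };
struct FaceRule {
  std::string face, song;
  Range red, green;                 // hypothetical: match on the R and G channels
};

bool inside(double v, Range r) { return v >= r.lo && v <= r.hi; }

std::string matchFace(double r, double g, const std::vector<FaceRule>& rules) {
  for (const auto& rule : rules) {
    if (inside(r, rule.red) && inside(g, rule.green))   // the "Simultaneity" test
      return rule.song;
  }
  return "no match";
}

int main() {
  // Hypothetical ranges for three of the six faces.
  std::vector<FaceRule> rules = {
    {"blue",   "Am I Blue? - Billie Holiday", {0, 80},    {0, 100}},
    {"red",    "Light My Fire - The Doors",   {180, 255}, {0, 90}},
    {"yellow", "My Girl - The Temptations",   {180, 255}, {160, 255}},
  };
  std::cout << matchFace(200, 50, rules) << "\n";   // lands red side up
}
```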
Here are the songs (I was trying to go for different genres and good songs):
Blue – Am I blue?- Billie Holiday
Red – Light My Fire – Live Extended Version. The Doors
Green – Paperbond- Wiz Khalifa
Orange – House of the Rising Sun- The Animals
White – Floating Away – Chill Beats Mix- compiled by Fluidify
Yellow – My girl- The Temptations

Also, Rubik’s Cube chroma detection is not a good idea for use in automating vehicles.
screen-shot-2016-11-03-at-4-15-23-pm

screen-shot-2016-11-03-at-4-16-43-pm

https://osu.box.com/s/5qv9tixqv3pcuma67u2w95jr115k5p0o PresProj3(1)


Cycle #1 – Motion-Composed Music

For my project, I would like to create a piece of music that is composed in real time by the movement of users in a given space, utilizing multi-channel sound, Kinect data, and potentially some other sensors. For the first cycle, I am trying to use motion data tracked by a Kinect through TSPS to control sounds in real time. I would like to develop a basic system that is able to trigger sounds (and potentially manipulate them) based on users’ movements in a space.


Pressure Project #2 – Keys

For my second pressure project, I utilized a Makey Makey, a Kinect, and my computer’s webcam. In the first scene, the user was instructed to touch the keys attached to the Makey Makey while holding onto a bottle opener on a keyring that served as the earth (ground) connection. Each key triggered the playback of a different MIDI file. The idea was to get the user to touch the keys in a certain order, but the logic was not refined enough to require any specific order; instead, the scene would either transition after so many key touches or display “Wrong Order!” and trigger an obnoxious MIDI file. The contrast of the MIDI files led to laughter on at least one person’s part.
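A more refined version of that order check might look something like this hypothetical sketch (not what the actual patch did; the key sequence is made up): advance when the expected key is touched, otherwise fire the obnoxious cue and start over.

```cpp
// Hypothetical key-order checker; each correct touch advances, a wrong one resets.
#include <iostream>
#include <string>

const std::string expectedOrder = "abcde";   // hypothetical key sequence
size_t progress = 0;

void onKeyTouch(char key) {
  if (key == expectedOrder[progress]) {
    progress++;
    std::cout << "MIDI cue " << progress << " plays\n";
    if (progress == expectedOrder.size()) std::cout << "Transition to scene 2\n";
  } else {
    progress = 0;
    std::cout << "Wrong Order! (obnoxious MIDI file)\n";
  }
}

int main() {
  for (char k : std::string("abXabcde")) onKeyTouch(k);   // simulate a run of touches
}
```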
The second scene was set on a timer to transition back to the first scene. This scene featured a MIDI file that was constantly playing, and several more that were triggered by the depth sensor on the Kinect, thus responding not just to one user but to everyone in the room. The webcam also added an alpha channel to shapes that appeared when certain room triggers were initiated by the Kinect. This scene didn’t have as strong an effect as the first one, and I think part of that was due to instruction and Kinect setup. I programmed the areas that the Kinect would use as triggers in the living room of my apartment, which has a lot of things that get in the way of the depth sensor that were not present in the room where this was performed. Another thing that would have benefited this scene is some instruction, as the first scene had fairly detailed instructions and this one did not.

screenshot-2016-10-11-11-54-18 screenshot-2016-10-11-11-53-47 screenshot-2016-10-11-11-53-42 screenshot-2016-10-11-11-53-06 macdonald_pressure_project02_files


The Synesthetic Speakeasy – Cycle 1 Proposal

For our first cycle I’d like to explore narrative composition and storytelling techniques in VR.  I’m interested in creating a vintage speakeasy / jazz lounge environment in which the user passively experiences the mindsets of the patrons by interacting with objects that have significance to the person they’re associated with!  This first cycle will likely be experimentation with interaction mechanics and beginning to form a narrative.