Tara Burns – “a canvasUnbound” (Cycle 3)

Cycle 3: Basement iteration

Goals
*To have panels that disappear when triggered
*To have that disappearance reveal an underlying theme/movie
*To use Oculus Quest footage as the reveal movie

Challenges
*Everything worked in my office, and then when I moved to the basement I had to add a few more features to get it working. I think the version it ended up at will hopefully be able to travel with only slight modifications.
*It is very difficult to create an interactive system without a body in the space to test.
*The Oculus Quest can't track without light. With directional light I did get it working, but then you couldn't see the projection. So in the final video I opted to just use a movie; knowing that it did work is good enough for me at this point, and when/if I'm able to use directional light that doesn't affect the projection, I'll try it again. The upside is that I can interact with the system more: if I'm painting in VR, I can't see when and if I make the panels go away, or where I need to dance in order to make that happen.

Moving forward
I would put this as big as possible and flip the panels to trigger on the same side as myself (the performer). I'd take time to rehearse more inside the system to come up with a score with enough repetition and duration that people could see the connections if they are looking for them. Perhaps I'd use the VR headset if that works out, but I am also okay with painting and then recording (the recording is the score that corresponds with the dance) a new white score specific to the space I am performing in, to then use in performance. If the projection is large enough, I think it would be easy to see what I am triggering when the panels are on the same side as me. In my basement, I chose to trigger the opposite side because my shadow covered the whole image.

The 20 videos, projection mapped to a template I made in Photoshop.
The test panel –> I used these projectors to show the whole image the Orbbec was seeing and the slice of the frame I was using to trigger from the OpenNI Tracker. I used the Picture Player to project my template for the projection mapping above.
These actors above come from the OpenNI Tracker's depth video -> Chroma Key (turns the tracked data into a color, in this case red) -> HSL Adjust (changes the red to white) -> Zoomer (zooms the edges of the space to the exact area I want to track) -> IDlab Effect Horizontal Tilt Shift (allowed me to stretch the body so it would cover the whole sliver we are tracking, whether upstage or downstage) -> Luminance Key (I actually don't think it does anything anymore, but we used it earlier to cut off the top (closest) and bottom (farthest) spaces and close in the area I wanted to track) -> the Panner seen below.
This is almost the whole patch. Continuing from above, the Panner allowed me to isolate the exact vertical space I wanted to track -> Calc Brightness made that brighter -> the Limit Scale Value created the trigger when the value fell between the two numbers requested (45–55). Here you see four of the 20 progressions to the projection-mapped panels, which are all triggered the same way; once I change the Panner/Calc Brightness/Limit Scale Value chain into a user actor, it will be easy to adjust for multiple spaces.
Here is my user actor that counts how many times each panel is triggered. It is attached to the inactive/red lines in the image above this one. The range from the Calc Brightness (the same range that goes into the Limit Scale Value) comes into this input; each trigger adds to the counter, and after 10 triggers the Inside Range actor sends a trigger that turns off the active parameter on that panel's projector. As I move through my score, this actor deletes all the panels to reveal an underlying movie.
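For anyone curious about the logic rather than the patch itself, here is a minimal sketch in plain Python (not Isadora) of the triggering and counting described above: a brightness value entering the 45–55 window counts as one trigger, and after 10 triggers the panel's projector is switched off so the underlying movie shows through. The class and variable names are placeholders; only the 45–55 window and the count of 10 come from the patch description.

```python
# A sketch of the counting logic only (plain Python, not the Isadora actors).
# TRIGGER_RANGE mirrors the Limit Scale Value window; HITS_TO_REVEAL is the
# 10-trigger threshold; class/variable names are placeholders.

TRIGGER_RANGE = (45, 55)
HITS_TO_REVEAL = 10


class Panel:
    def __init__(self, panel_id: int):
        self.panel_id = panel_id
        self.hits = 0
        self.active = True       # mirrors the projector's "active" parameter
        self._inside = False     # so one pass through the range counts once

    def update(self, brightness: float) -> None:
        inside = TRIGGER_RANGE[0] <= brightness <= TRIGGER_RANGE[1]
        if inside and not self._inside:        # rising edge = one trigger
            self.hits += 1
            if self.hits >= HITS_TO_REVEAL:
                self.active = False            # panel disappears; movie shows through
        self._inside = inside


if __name__ == "__main__":
    panel = Panel(1)
    # Each excursion into the 45-55 window counts as one trigger.
    for value in [10, 50, 20, 48, 12] * 5:
        panel.update(value)
    print(panel.hits, panel.active)            # -> 10 False
```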

I recorded both 10 seconds of stationary and 10 seconds of slowly moving video in the Oculus Quest (I wasn't sure which would look better) and cut them into short clips in DaVinci Resolve. I chose to use only the moving clips.

I converted all the movies to the HAP codec, and it cut my 450% load in Isadora to 140%. The conversion was prompted not because Isadora was crashing anymore but because it was freezing when I clicked through tabs.

After some research on HAP, I found a command-line method using ffmpeg. My partner helped me batch convert all my videos at the same time with that method.
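As a rough illustration of that kind of batch conversion (not the exact command we used), here is a sketch that assumes ffmpeg is installed with its HAP encoder; the folder names are placeholders.

```python
# A minimal sketch of a batch HAP conversion, assuming ffmpeg is on the PATH and
# was built with its HAP encoder. Folder names are placeholders, and the exact
# flags here may differ from the command actually used.
import pathlib
import subprocess

SRC = pathlib.Path("source_movies")
DST = pathlib.Path("hap_movies")
DST.mkdir(exist_ok=True)

for movie in sorted(SRC.glob("*.mov")):
    out = DST / f"{movie.stem}_hap.mov"
    # "-c:v hap" selects ffmpeg's HAP encoder; "-format hap_q" is an optional higher-quality variant.
    subprocess.run(["ffmpeg", "-y", "-i", str(movie), "-c:v", "hap", str(out)], check=True)
    print(f"converted {movie.name} -> {out.name}")
```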


Tara Burns – Cycle Two

The 1st trigger corresponds with the audience-right panel, the 2nd with the audience-left panel, the 3rd with the 2nd panel from audience left, and the 4th with the 2nd panel from audience right.

Goals
– Using Cycle 1's setup and extending it into Isadora for manipulation
– Testing and understanding the connection between Isadora and OBS Virtual Camera
– Testing prerecorded video of paintings and live streamed Tilt Brush paintings in Isadora
– Moving to a larger space for position-sensitive tracking through the Isadora OpenNI Tracker
– Projection mapping

Challenges and Solutions
– macOS Catalina doesn't function with Syphon, so I had to use the OBS Virtual Camera in Isadora
– Not having a live body to test motion tracking and pinpoint specific locations required going back and forth. I wouldn't be able to do this in a really large space, but for my smaller space I put my Isadora patch on the projection and showed half the product and half the patch, so I could see what was firing and what the projection looked like at the same time.
– Understanding the difference between the blob and skeleton trackers, and what exactly I was going for, took a while. I spent a lot of time on the blob tracker and then finally realized the skeleton tracker was what I actually needed.
– I realized the headset will need more light to track if I’m to use it live.

Looking Ahead
The final product of this goal wasn't finished for my presentation, but I finished it this week, which brought about some important choices I need to make. In my small space, if I'm standing in front of the projection, it is very hard to see whether I'm affecting it because of my shadow, so either the projection needs to be large enough to see over my head or my costume needs to be able to show the projection.

I am also considering a reveal, where the feed is mixed up (pre-recorded or live or a mix; I haven't decided yet) and as I traverse from left to right the paintings begin to show up in the right order (possibly right to left, the reverse of what I'm doing). Instead of audience participation, I'm thinking of having this performer-triggered: my own position tracking triggers the shift in content perhaps 3–4 times, and then it stays in the live feed. Once I get to the other side, it is a full reveal of the live feed coming from my headset. This will be tricky, as the headset needs more light to work than the projection provides, which is one reason I switched to using movies in my testing: I didn't have the proper lights to light me so the headset could track while the projection stayed visible. I was also considering triggering the height of the mapped projection panel (like Kenny's animation from class) and revealing what is behind it that way, although I do want to keep the fade in and out.

I used the same setup from Cycle 1 to wirelessly connect the headset to the computer and send it to OBS. I created these reminders in my patch to make sure I did all the steps necessary to make things work. Note: the Oculus Quest transmits a 1440×1600 resolution per "eye." To transmit that resolution to Isadora, make sure "Start Live Capture" in OBS is turned off, change to the appropriate resolution, then "Start Live Capture," and Isadora should receive this information.
The Video In Watcher caught the OBS Virtual Camera feed in Isadora from the live capture, and I then projection mapped the four panels of projections and their alternate panels to be triggered. Knowing this works is a big step, and now I need to decide if it is necessary.
Later, I projection mapped movies I downloaded from the Oculus Quest, so I didn’t have to have the headset streaming a live feed of VR footage while testing.
I began using "Eyes++" and the "Blob Decoder" to trigger the panels but wasn't able to differentiate between blobs/areas of space.
This is what happens (although interesting) using the Blob Decoder. It was very difficult to achieve a depth that wasn't being triggered by extraneous elements, even using the threshold. Perhaps a Chroma Key might have helped, but essentially I want the locations to correspond with specific panels, and the Blob Decoder seemed too carefree in that regard.
I switched to using the "Skeleton Decoder" and used "Calc Angle 3D" (see Mark Coniglio's Guru Session #13) to calculate the specific area where I wanted to trigger the fade between movies. Mark explains it better, but essentially you stand (or ideally have someone else stand) in the space where you want the trigger, watch the numbers in x2, y2, and z2, and catch the median numbers they send while you are standing in the space. Then put those numbers into x1, y1, and z1. Send the "dist" output to the value input of a "Limit Scale Value" and determine the range where it can catch the number. In Mark's tutorial he achieves '0'; however, I couldn't do that, so I made a larger range in my Limit Scale Value actor, and that seems to work. I hypothesize that it might be the projection interfering with the depth camera, but I'm not sure. More testing is needed here; perhaps I can reduce my depth range in the OpenNI Tracker.
I did this 4x to trigger each panel. Note: they are all going into the same "Skeleton 1" id on the OpenNI Tracker because I only had one body to test with. So, choreographically, I would have to change the patch if I want more people in the work, by connecting each panel to a different skeleton id.
This is how I achieved the numbers by myself. This way, I was able to watch the screen, remember the numbers and then input them into the actor.
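To make the math behind that setup concrete, here is a small sketch in plain Python (not the Isadora actors) of the distance test: compare the live joint position (x2, y2, z2) against the stored spot (x1, y1, z1) and fire when the distance lands inside a tolerance window instead of requiring an exact 0. The coordinates and tolerance below are placeholders.

```python
# A small sketch of the distance test only (plain Python, not the Isadora actors):
# compare the live joint position (x2, y2, z2) against the stored spot (x1, y1, z1)
# and fire when the distance lands inside a tolerance window instead of exactly 0.
# The target coordinates and tolerance below are placeholders.
import math

TARGET = (0.2, 1.0, 2.5)   # x1, y1, z1 captured while standing on the spot
MAX_DIST = 0.3             # the widened, Limit Scale Value style range


def distance(a, b) -> float:
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))


def panel_triggered(live_joint) -> bool:
    """True when the tracked body is close enough to the stored spot."""
    return distance(live_joint, TARGET) <= MAX_DIST


if __name__ == "__main__":
    print(panel_triggered((0.25, 1.05, 2.45)))   # near the spot -> True
    print(panel_triggered((1.5, 1.0, 0.5)))      # elsewhere on stage -> False
```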

Tara Burns – Iteration X: Extending the Body – Cycle 1 Fall 2020

I began this class with the plan to make space to build the elements of my thesis, and this first cycle was the first iteration of my MFA thesis.

I envision my thesis as a three part process (or more). This first component was part of an evening walk around the OSU Arboretum with my MFA 2021 Cohort. To see the full event around the lake and other projects click here: https://dance.osu.edu/news/tethering-iteration-1-ohio-state-dance-mfa-project

In response to Covid, the OSU Dance MFA 2021 Cohort held a collaborative outdoor event. I placed my first cycle (Iteration X: Extending the Body) in this space. Five scheduled and timed groups were directed through a cultivated experience while simultaneously acting as docents to view sites of art. You see John Cartwright in the video above, directing a small audience toward my work.

In this outdoor space wifi and power were not available. I used a hotspot on my phone to transmit from both my computer and VR headset. I also used a battery to power my phone and computer for the duration.

By following this guide I was able to successfully connect my headset and wirelessly screen copy (scrcpy) my view from the Oculus Quest to my computer.
In OBS (video recording and live streaming software), I transmitted to Twitch.tv.
I then embedded all the components into our interactive website that the audience used on site with their mobile devices and headphones.
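For reference, a rough sketch of this kind of wireless screen copy, written as Python calls to the command-line tools; it assumes adb and scrcpy are installed and the headset was first authorized over USB, and the IP address and crop values are placeholders rather than the exact steps of the guide mentioned above.

```python
# A rough sketch of wireless screen copy from the headset, assuming adb and scrcpy
# are installed and the headset was first authorized over USB. The IP address is a
# placeholder, and the one-eye crop values follow the commonly shared scrcpy trick
# for the Quest rather than the exact guide referenced above.
import subprocess

QUEST_IP = "192.168.1.50"   # replace with the headset's Wi-Fi address

subprocess.run(["adb", "tcpip", "5555"], check=True)               # switch adb to Wi-Fi
subprocess.run(["adb", "connect", f"{QUEST_IP}:5555"], check=True)
subprocess.run(["scrcpy", "--crop", "1440:1600:0:0"], check=True)  # mirror roughly one eye
```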

Iteration X: Extending the Body asked the audience to follow the directions on the screen to listen to the soundscape, view the perspective of the performer, and imagine alternate and simultaneous worlds and bodies as forms of resistance.


Tara Burns – PP3

Goal
To expand my idea of the installation/performance sound box into a very small version while creating a surprise for my user

I feel I did get an opportunity to research the installation more in my first patch, but I aborted that mission to create a cohesive experience, switching the sounds to "shhhs," labeling the box "The SHHHH Box," and adding the prayer.

Challenges
I had a problem with one of my touch points continually completing the circuit. I remedied this by putting a piece of paper under the aluminum. I decided that either glue is conductive (I don’t think so), I accidentally connected the circuit with messy electric finger paint, or the box is recycled and might have bits of metal in that one part.

On my computer, when I opened the patch, it would blurt out the final prayer. So I added an instruction for Kara to connect the toggle to the movie player so the sound wouldn't play and ruin the creepy surprise. Update: Alex showed us a remedy for this: create a snapshot of exactly how it "should" be (even though it keeps reverting to something else), then Enter Scene Trigger –> Recall Snapshot.

I also wish I had connected all the sounds properly. In haste, I put them in a new folder but forgot to relink the sounds, and then Kara had to do that, which made it more difficult for her in the beginning. If the paint reaches the aluminum, my assumption is that, since the aluminum is conductive, touching it should be enough to trigger. But when Kara tested it, she seemed to have trouble triggering it.

The SHHHH Box
It will calm you down or give you nightmares.
In process.
In process: The first patch I made for the box was actually much more complicated. In that patch, each touch would count through 4 different songs, and the songs could overlap on each of the four touch sections, but I thought it would be too hard to figure out and less satisfying, because understanding how to control it would be really difficult. However, that patch might be great for a performance or installation where you don't need people to figure out the intricacies and you want to create interesting sound/movement combinations. In that patch I also continued with the Tilt Brush sounds from PP2.
Directions
1. Plug in USB Cords and Open Isadora file
2. Open box with Care: Tuck fingers under edges
3. Touch silver: One Tree and One Red Column at a time
4. Pray: Hover hands over box and slowly bring hands together
This patch used the hand distance parameter on the Leap Motion Watcher. I used the Comparator to limit the range, and connected the true value to a toggle that turned the sound on when your hands came into prayer. This does loop, so it starts over every time you pray.
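As an illustration of that trigger logic only (plain Python, not the Leap Motion Watcher or Comparator actors): watch the hand-distance value and flip a toggle the moment the hands come within a "prayer" threshold. The threshold and the simulated readings are placeholders.

```python
# An illustration only (plain Python, not the Leap Motion Watcher or Comparator):
# watch the hand-distance value and flip a toggle the moment it drops below a
# "prayer" threshold. The threshold and the simulated readings are placeholders.

PRAYER_DISTANCE = 30.0   # distance between hands that counts as "hands together"


class PrayerTrigger:
    def __init__(self):
        self.playing = False      # the toggle feeding the movie player's sound
        self._was_apart = True

    def update(self, hand_distance: float) -> None:
        together = hand_distance <= PRAYER_DISTANCE
        if together and self._was_apart:
            self.playing = not self.playing   # toggles on each new prayer
        self._was_apart = not together


if __name__ == "__main__":
    trigger = PrayerTrigger()
    for d in [250, 120, 60, 25, 20, 90]:      # hands slowly coming together, then apart
        trigger.update(d)
    print(trigger.playing)                    # -> True
```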
These are pretty simple watchers to toggle the sound on and off. Sound-wise, I found it complicated enough without adding additional parameters. These do loop, so you can “mix” the sounds. You turn them on and off by touching the sensors.


Tara Burns – PP2

Goal:
– To create a sound response to movement/dance
– To test the recording of sound from VR/Tilt Brush brushes for repurposing in Isadora

Challenges:
– Having the wire to ground attached to my body made it precarious and possibly dangerous for extended use.
– This would require A LOT of wire.
– The VR/Tilt Brush sounds recorded pretty soft.

Conclusions and future thoughts:
This project kind of turned into Dance Dance Revolution. However, with help from the class/Alex to make a grounding agent for each pad (provided there is enough wire), I wouldn't have to wear the grounding cable. For a future application, I can imagine this controlling light and sound, perhaps in a small box like a telephone booth (post-Covid), where the sounds roll over one another when touched. As it is, without the numbers, the sounds roll over each other and you can't quite place what is happening, and in an installation or performance that is what I would prefer. However, the wires and the connection to the Makey Makey don't seem like they would stand up to the abuse I would require (as a dancer), so if everything were contained in a box, it would probably be okay. In addition, the sounds in Tilt Brush get louder the faster you move, so this could be an interesting thing to try to add to the patch.

This is the whole patch. I used the same actor components for each sound: a Keyboard Watcher –> Counter –> Comparator –> Toggle –> Movie Player (there was also a projection of what I created in Tilt Brush, but it was only there to create the sound) –> Projector.
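Here is one possible reading of that chain, sketched in plain Python rather than Isadora; the key names and sound labels are placeholders, and the Comparator's exact role in the real patch isn't reproduced here.

```python
# One possible reading of the chain above, sketched in plain Python rather than
# Isadora: each Makey Makey touch arrives as a key press, a counter tracks how many
# times a pad has been hit, and a toggle flips that pad's looping sound on or off.
# Key names and sound labels are placeholders; the Comparator's role is not shown.

class SoundPad:
    def __init__(self, key: str, sound_name: str):
        self.key = key
        self.sound_name = sound_name
        self.count = 0          # Counter: total presses on this pad
        self.playing = False    # Toggle: loop on/off

    def press(self) -> None:
        self.count += 1
        self.playing = not self.playing
        state = "on" if self.playing else "off"
        print(f"{self.sound_name}: press #{self.count}, sound {state}")


if __name__ == "__main__":
    pads = {"w": SoundPad("w", "brush swirl"), "a": SoundPad("a", "brush hum")}
    for key in ["w", "a", "w", "w"]:   # simulated touches
        pads[key].press()
```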
I also added a verbal counting method because I think that was part of the brief, but I didn’t show this in class because the counters actually overpowered the “music” of the sounds so all you heard were random numbers. However, it was interesting to hear what number was being triggered.
These pictures are basically just reiterations with a different sound and then a different number of Speak Text actors to denote which number the computer would say.
This addition included Timecode Calculator –> Text Comparator –> Speak Text actors.
This is more of the same, but just different counts and different sounds in the movie player.


Tara Burns – The Pressure is on (PP1)

Goals:
To use the Live Drawing actor
To deepen my understanding of user actors and macros.

Challenges:
Finessing transitions between patches
Occasional resetting glitches (it sometimes has a different outcome than the first 10 times)
Making things random in the way you want them to be random is difficult.

A User Actor creates daisy chains to help "cars" enter, dance, and exit. I changed the last User Actor into a User Actor Macro so I could change the outputs, stop the lost "car," and terminate him. 🙁 I'm realizing at this moment that I took the picture before I changed the rectangles to circles, but I think I like the rectangles better.
This is inside the user actor.
This is after the termination of said car. 🙁
When the explosion is complete, it triggers the pulse generators and envelope generators (EG) to begin the timing, color change, and brightness change of the "crazy" line drawing.
After 15 seconds the envelope generator completes and triggers another EG to raise the intensity of the projector so that more of the gouache paint fills in the frame, until I press 'a' on the keyboard to exit the scene.
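As a closing illustration of that chained-envelope timing (plain Python, not Isadora's Envelope Generator actors): the first envelope runs for the 15 seconds mentioned above, and its completion starts a second envelope that fades the projector intensity up. The second envelope's length and the 0–100 range are placeholders.

```python
# A small sketch of chained envelopes in plain Python rather than Isadora: the first
# envelope runs for the 15 seconds mentioned above, and when it completes it starts a
# second envelope that fades the projector intensity up. The second envelope's length
# and the 0-100 range are placeholders.
import time


def run_envelope(duration_s: float, start: float, end: float, steps: int = 30):
    """Yield evenly spaced values from start to end over duration_s seconds."""
    for i in range(steps + 1):
        yield start + (end - start) * i / steps
        time.sleep(duration_s / steps)


if __name__ == "__main__":
    # First EG: 15-second envelope driving the "crazy" line drawing.
    for value in run_envelope(15.0, 0.0, 100.0):
        pass  # drive the color / brightness change here

    # Completion triggers the second EG: fade the projector up so the gouache fills the frame.
    for intensity in run_envelope(10.0, 0.0, 100.0):
        print(f"projector intensity: {intensity:.0f}")
```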