Cycle Project 1

For this project I wanted to synthesize some of the work I had done on my previous projects in the class. I wanted to create a kind of photobooth. Users would operate the booth with the Makey Makey, and Isadora would capture a webcam image of the user. From there, the user could select between different themed environments that added hats to the image. For this iteration of the project, I offered a choice between a Western theme and a space theme.

After selecting the desired theme, users could adjust the size of the hat by pressing a button on the Makey Makey interface. From there, they could use the Leap Motion controller to reposition, rotate, and resize the hat.

One of the most difficult parts of this project for me was figuring out the compositing using virtual stages. Additionally, I spent a lot of time trying to find ways to make the prompts (which appeared over the image) disappear before the image was taken.

This is a link to my code for the project.

https://1drv.ms/u/s!Ai2N4YhYaKTvgbYVl0LiupCRh2PFdg?e=wuKpZm

Here is the link for my class presentation.

https://1drv.ms/u/s!Ai2N4YhYaKTvgbYaMqgJRuOsmjBMog?e=rDAOMd


Depth Camera CT Scan Projection System

by Kenneth Olson

(Iteration one)

What makes dynamic projection mapping dynamic?

Recently I have been looking into dynamic projection mapping and questioned what makes dynamic projection mapping “dynamic.” I asked Google and she said: dynamic means characterized by constant change, activity, or progress. I took that to mean that for a projection mapping system to be called “dynamic,” something in the system has to involve actual physical movement of some kind: the audience, the physical projector, or the object being projected onto. So, what makes dynamic projection mapping “dynamic”? By my classification, the use of physical movement within a projection-mapped system is what separates projection mapping from dynamic projection mapping.

How does dynamic projection mapping work?

Most dynamic projection systems use a high-speed projector (one that can project images at a high frame rate, which reduces output lag), an array of focal lenses and drivers (to change the focus of the projector output in real time), a depth camera (to measure the distance between the projector and the object being projected onto), and a computer system with software that allows the projector, depth camera, and focusing lens to talk to each other. After understanding the inner workings of some dynamic projection systems, I started to look further into how a depth camera works and how important depth is within a dynamic projection system.

What is a depth camera and how does depth work?

As I have mentioned before, depth cameras measure distance, specifically the distance between the camera and every pixel captured within the lens. The distance of each pixel is then transcribed into a visual representation like color or value. Over the years depth images have taken many appearances based on different companies and camera systems. Some depth images use grayscale, with brighter values showing objects closer to the camera and darker values signifying objects further in the distance. Each shade of gray is also tied to a specific value, allowing the user to understand visually how far something is from the depth camera. Other systems use color, with warmer versus cooler colors measuring depth visually.
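The grayscale convention described above (near is bright, far is dark) can be sketched in a few lines of Python; the depth values and the near/far working range here are made up for illustration:

```python
import numpy as np

# Hypothetical depth frame: distances in millimeters, 4x4 for illustration.
depth_mm = np.array([
    [500, 800, 1200, 2000],
    [600, 900, 1500, 2500],
    [550, 850, 1300, 3000],
    [700, 1000, 1800, 4000],
], dtype=np.float32)

def depth_to_gray(depth, near=400.0, far=4000.0):
    """Map depth to 8-bit grayscale: nearer pixels brighter, farther darker."""
    clipped = np.clip(depth, near, far)
    # Normalize to 0..1, then invert so near -> 1 (bright), far -> 0 (dark).
    norm = 1.0 - (clipped - near) / (far - near)
    return (norm * 255).astype(np.uint8)

gray = depth_to_gray(depth_mm)
print(gray[0, 0], gray[3, 3])  # nearest pixel is brightest, farthest is 0
```

Because each gray level is tied back to a specific distance through the near/far range, a viewer (or a patch) can recover an approximate distance from the brightness alone.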

How is the distance typically measured on an average depth camera?

Most depth cameras work the same way your eyes create depth: through “stereoscopic vision.” For stereoscopic vision to work you need two cameras (or two eyes). In the top-down diagram (pictured above), the cameras are the two large yellow circles, and the space between them is called the interocular (IN-ter-ocular) distance. This distance never changes, but it has to be precise: if the interocular distance is too close or too far apart, the effect won’t work. On the diagram, the dotted lines show that both cameras are looking at the red circle. The point at which both cameras’ sight lines cross is called the zero parallax plane, and on this plane all objects are in focus. This means every object in front of and behind the zero parallax plane is out of focus. Everyone at home can try this: hold your index finger a foot away from your face and look at your finger, and everything in your view except your finger becomes out of focus. Then, with your eyes still focused on your finger, slide your other hand left and right across your imaginary zero parallax plane; you should notice your other hand is also in focus. There are also different kinds of stereoscopic configurations. Another common type is parallel: on the diagram, the two solid parallel lines coming from the yellow circles point straight out. Parallel means these lines will never meet, which also means everything stays in focus. If you look out your window into the horizon, you will see everything is in focus: the trees, buildings, cars, people, the sky. For those of us who don’t have windows, stereoscopic and parallel vision can also be recreated and simulated inside 3D animation software like Maya or Blender.
For those who understand 3D animation cameras and rendering: if you render an animation with parallel vision and place the rendered video into Nuke (a very expensive and amazing node-based effects and video editing software), you can add the zero parallax plane in post. This is also the system Pixar uses in all of its animated feature films.
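The geometry behind stereoscopic depth reduces to one relationship: for a parallel camera pair, depth is inversely proportional to disparity, the horizontal shift of a feature between the left and right images. A rough sketch, with made-up focal length and baseline values:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a parallel stereo pair: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: interocular distance
    in meters; disparity_px: horizontal pixel shift of a feature between
    the left and right images."""
    if disparity_px <= 0:
        raise ValueError("zero disparity means the point is at infinity")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 6.5 cm baseline (roughly a
# human interocular distance). A larger disparity means a closer object.
near = stereo_depth(700, 0.065, 91)    # about 0.5 m away
far = stereo_depth(700, 0.065, 9.1)    # about 5 m away
print(round(near, 2), round(far, 2))
```

This is why the interocular distance matters so much in the description above: the baseline B sits directly in the depth formula, so changing it rescales every measured distance.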

Prototyping

After understanding a little more about how depth cameras work, I decided to try to conceive a project using an Orbbec Astra (depth camera), a pico projector (small handheld projector), and Isadora (projection mapping software). Using a depth camera, I wanted to prototype a dynamic projection mapping system where the object being projected onto would move in space, causing the projection to change or evolve in some way. I ended up using a set of top-down human brain computed tomography scans (CT scans) as the evolving or “changing” aspect of my system. The CT scans would be projected onto regular printer paper held in front of the projector and depth camera. The depth camera would read the depth at which the paper sits in space. As the piece of paper moves closer to or further from the depth camera, the CT scan images cycle through. (Above is what the system looked like inside of Isadora, and below is a video showing the CT scans evolving in space in real time as the paper moves back and forth from the depth camera.) Within the system I added color signifiers to tell the user at what depth to hold the paper and when to stop moving it. I used the color green to tell the user to start “here” and the color red to tell the user to “stop.” I also added numbers to each CT scan image so the user can identify or reference a specific image.
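The core mapping in a patch like this (paper depth in, CT slice index out) could be sketched like so; the working range and slice count are hypothetical stand-ins for the values in the actual Isadora patch:

```python
def depth_to_slice(depth_mm, num_slices, start_mm=500, stop_mm=1500):
    """Map the paper's distance from the depth camera to a CT-scan
    slice index, the way the patch cycles images as the paper moves.
    start_mm stands in for the 'green' start depth, stop_mm for the
    'red' stop depth (both values are guesses for illustration)."""
    # Clamp to the working range, then scale to 0 .. num_slices-1.
    d = min(max(depth_mm, start_mm), stop_mm)
    t = (d - start_mm) / (stop_mm - start_mm)
    return round(t * (num_slices - 1))

# With 20 slices, the nearest depth shows slice 0 and the farthest slice 19.
print(depth_to_slice(500, 20), depth_to_slice(1000, 20), depth_to_slice(1500, 20))
```

Clamping to the working range mirrors the green/red signifiers: depths outside the band simply pin to the first or last slice instead of producing garbage indices.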

Conclusion 

The finished prototype works fairly well and I am very pleased with the fidelity of the Orbbec depth reading. For my system, I could only work within a specific range in front of my projector, because the projected image would become out of focus if I moved the paper too far from or too close to the projector. While I worked with the projector, I found the human body could also be used in place of the piece of paper, with the projected image of the CT scans filling my shirt front. The projector could also be pointed at a different wall, with a human interacting with the depth camera alone, causing the CT scans to change as well. With a more refined system, I can imagine this could be used in many circumstances: within an interactive medical museum exhibit, or even in a more professional medical setting to explain how CT scans work to child cancer patients. For possible future iterations, I would like to see if I could make the projection better follow the paper; having the projector tilt and scale with the paper would allow the system to become more dynamic and possibly more user friendly.


Cycle 2 – Stanford

I took the Sketchup file I had been working on and put it in VR. I did this using a program called Sentio VR. After I created an account, I was able to install a plugin for Sketchup that allowed me to export scenes. Once the scenes were exported, I could go to the app on the Oculus Quest and input my account code to view my files.

I also had to find a way to mirror the Quest to my MacBook. I used the process outlined by the link below.

https://arvrjourney.com/cast-directly-from-your-oculus-quest-to-macbook-e22d5ceb792c

This gave me a mirrored image, but the result was not what I was looking for. I did not want to see two circles of image, so after I recorded the video, I cropped it to give a better product.

Screenshot of the video before I cropped it
A screen capture while I walked around the set
Another Scene (The Church)
Another Scene (Memphis)

Cycle 1 – Stanford

My final project is to take a design (scenic) that I had done in the past and put it in VR so that you can walk around it and see it from both the audience and actor view. I focused on my Sketchup file for the first cycle.

The design is for a show called Violet. It is a musical set in the South in 1964. It is about a woman named Violet who has a huge scar across her face and is traveling by bus to see a TV preacher in hopes that he can heal her.

I started from a base Sketchup file that had the Thurber already created.

Full Stage View
Close Up of the Truss
Additional Pieces

PP2 – Stanford

For this assignment, I used the Makey Makey to count the points of a card game. I created three buttons for each team with the labels 5, 10, and 20, which are the point values of the cards in the game. My goal was to have Isadora count the points for each team, and when one reached the winning amount, a light would turn on in the winning team’s color.

The buttons hooked up to the Makey Makey

In addition to the Makey Makey, I used an ENTTEC Pro. This allowed me to send a signal to an LED fixture from my computer.

Colorsource PAR

Each of the buttons was assigned a different letter on the Makey Makey. My patch used each letter to count by the value of the button it was associated with. It then added each team’s values together, and a comparator triggered a cue to turn the light fixture either red or blue when a team reached 300 points or more.
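The counting-and-comparator logic of the patch could be sketched in Python like this; the key-to-button assignments are hypothetical, since the actual letters aren't listed above:

```python
# Each Makey Makey key adds its point value to a team total, and a
# comparator lights the team's color at 300 points. Key bindings here
# are made up for illustration.
BUTTON_VALUES = {
    "a": ("red", 5), "s": ("red", 10), "d": ("red", 20),
    "j": ("blue", 5), "k": ("blue", 10), "l": ("blue", 20),
}
WINNING_SCORE = 300

def play(key_presses):
    totals = {"red": 0, "blue": 0}
    for key in key_presses:
        team, value = BUTTON_VALUES[key]
        totals[team] += value
        if totals[team] >= WINNING_SCORE:  # the comparator in the patch
            return f"light {team}"         # cue the LED in the team's color
    return "no winner yet"

# Fifteen presses of the red 20-point button reaches exactly 300.
print(play(["d"] * 15))
```

The comparator fires on "300 or more" rather than exact equality, so a team jumping from 295 to 305 still triggers the cue.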

My Patch
The cue triggered by hitting 300 points

Tara Burns – Cycle Two

The 1st trigger corresponds with the audience-right panel, the 2nd trigger with the audience-left panel, the 3rd trigger with the second panel from audience left, and the 4th trigger with the second panel from audience right.

Goals
– Using Cycle 1’s setup and extending it into Isadora for manipulation
– Testing and understanding the connection between Isadora and OBS Virtual Camera
– Testing prerecorded video of paintings and live streamed Tilt Brush paintings in Isadora
– Moving to a larger space for position sensitive tracking through Isadora Open NDI Tracker
– Projection mapping

Challenges and Solutions
– macOS Catalina doesn’t function with Syphon, so I had to use OBS Virtual Camera in Isadora
– Not having a live body to test motion tracking and pinpointing specific locations required going back and forth. I wouldn’t be able to do this in a really large space, but for my smaller space I put my Isadora patch on the projection and showed half the product and half the patch, so I could see what was firing and what the projection looked like at the same time.
– Understanding the difference between the blob and skeleton trackers, and what exactly I was going for, took a while. I spent a lot of time on the blob tracker and then finally realized the skeleton tracker was probably what I actually needed in the end.
– I realized the headset will need more light to track if I’m to use it live.

Looking Ahead
The final product of this goal wasn’t finished for my presentation, but I finished it this week, which brought about some really important choices I need to make. In my small space, if I’m standing in front of the projection it is very hard to see if I’m affecting it because of my shadow, so either the projection needs to be large enough to see over my head or my costume needs to be able to show the projection.

I am also considering a reveal, where the feed is mixed up (pre-recorded or live or a mix; I haven’t decided yet) and as I traverse from left to right the paintings begin to show up in the right order (possibly right to left, the reverse of what I’m doing). Instead of audience participation, I’m thinking of having this performer-triggered: my own position tracking and triggering the shift in content perhaps 3-4 times, and then it stays in the live feed. Once I get to the other side, it is a full reveal of the live feed coming from my headset. This will be tricky, as the headset needs light to work (more than projection provides), which is a reason I switched to using movies in my testing: I didn’t have the proper lights to light me so that the headset could track and you could still see the projection. I was also considering triggering the height of the mapped projection panel (like Kenny’s animation from class) and revealing what is behind it that way. Although I do want to keep the fade in and out.

I used the same setup from Cycle 1 to wirelessly connect the headset to the computer and send it to OBS. I created these reminders in my patch to make sure I did all the steps necessary to make things work. Note: the Oculus Quest transmits a 1440×1600 resolution per “eye.” To be able to transmit that resolution to Isadora, make sure “Start Live Capture” in OBS is turned off, change to the appropriate resolution, then “Start Live Capture,” and Isadora should receive this information.
The Video In Watcher caught the Virtual Camera in Isadora from the live capture in OBS; I then projection mapped the four panels of projections and their alternate panels to be triggered. Knowing this works is a big step, and now I need to decide if it is necessary.
Later, I projection mapped movies I downloaded from the Oculus Quest, so I didn’t have to have the headset streaming a live feed of VR footage while testing.
I began using “Eyes++” and “Blob Decoder” to trigger the panels but wasn’t able to differentiate between blobs/areas of space.
This is what happens (although interesting) using the blob decoder. It was very difficult to achieve a depth that wasn’t being triggered by extraneous elements, even using threshold. Perhaps using ChromaKey might have helped, but essentially I want the locations to correspond with specific panels, and the blob decoder seemed too carefree in that regard.
I switched to using the “Skeleton Decoder” and used “Calc Angle 3D” (see Mark Coniglio’s Guru Session #13) to calculate the specific area I wanted to trigger the fade between movies. Mark explains it better, but essentially you stand (or ideally have someone else stand) in the space where you want the trigger, watch the numbers in x2, y2, and z2, and catch the median numbers they send while you are standing in the space. Then put those numbers into x1, y1, and z1. Send the “dist” output to the value in a “Limit Scale Value” actor and determine the range where it can catch the number. In Mark’s tutorial he achieves ‘0’; however, I couldn’t do that, so I made a larger range in my Limit Scale Value actor, and that seems to work. I hypothesize that it might be the projection interfering with the depth camera, but I’m not sure. More testing is needed here; perhaps I can reduce my depth range in the Open NDI Tracker.
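The distance test this technique builds (a captured target point, a “dist” output, and an acceptance range) amounts to a simple Euclidean-distance check. A sketch with made-up coordinates:

```python
import math

def within_trigger_zone(pos, target, tolerance):
    """Mimic Calc Angle 3D's 'dist' output feeding a Limit Scale Value:
    trigger when the tracked joint is within `tolerance` of the point
    captured while standing in the zone. Coordinates are illustrative,
    in meters: (x, y, z)."""
    dist = math.dist(pos, target)  # Euclidean distance in 3D
    return dist <= tolerance

# Target captured by standing in the space (hypothetical numbers).
target = (0.4, 1.1, 2.3)
print(within_trigger_zone((0.45, 1.05, 2.35), target, tolerance=0.25))  # True
print(within_trigger_zone((1.5, 1.1, 2.3), target, tolerance=0.25))     # False
```

Widening `tolerance` is the equivalent of enlarging the range in the Limit Scale Value actor: noisy skeleton data that never lands exactly on the captured point still falls inside the acceptance band.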
I did this four times, to trigger each panel. Note: they are all going into the same “Skeleton 1” ID on the Open NDI Tracker because I only had one body to test with. So, choreographically, I would have to change the patch if I want more people in the work, by connecting each panel to a different skeleton ID.
This is how I achieved the numbers by myself. This way, I was able to watch the screen, remember the numbers and then input them into the actor.

Movement Meditation Room Cycle 2

Cycle 2 of this project was to layer the projections onto the mapping of each location in the room. I started with the backgrounds, which would fade in and out as the user moved around the room. A gentle view of leaves would greet them upon entering and while they were meditating, and when they walked to the center of the room it shifted to a soft mossy ground. This was pretty easy because I already had triggers built for each location, so all I had to do was connect the intensity of the backgrounds to a listener used for each location. The multiblockers were added so that it wouldn’t keep triggering itself when the user stayed in the location; they are timed to the duration of the sound that occurs at each place.
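The multiblocker behavior described above (swallow repeat triggers for the length of the sound file) can be sketched as a tiny timing gate; the durations here are illustrative:

```python
# A sketch of what the multiblocker actors do: after a trigger fires,
# further triggers are ignored for the duration of that location's
# sound file, so standing still doesn't retrigger the cue.
class MultiBlocker:
    def __init__(self, block_seconds):
        self.block_seconds = block_seconds
        self.unblock_at = 0.0

    def trigger(self, now):
        """Return True only if enough time has passed since the last
        accepted trigger; otherwise swallow the event."""
        if now >= self.unblock_at:
            self.unblock_at = now + self.block_seconds
            return True
        return False

# Block repeat triggers for the length of a 4-second sound clip.
blocker = MultiBlocker(block_seconds=4.0)
print(blocker.trigger(now=0.0))   # True  - first trigger passes
print(blocker.trigger(now=2.0))   # False - still blocked
print(blocker.trigger(now=4.5))   # True  - block expired
```

Timing the block to each location's sound duration means the cue can refire as soon as its audio finishes, but never stacks on top of itself.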

Patch for cueing background projection.

The part that was more complicated was what I wanted for the experience in the center. I wanted the user to be able to see themselves as a “body of water” and to be able to interact with visual elements on the projection that made them open up. I wanted an experience that was exciting, imaginative, open-ended, and fun, so that the user would be inspired to move their body and be brought into the moment, exploring the possibilities of the room at this location. My lab day with Oded in the Motion Lab is where I got all of the tools for this part of the project.

Patch for the “body of water” and interactive colored globes

I rigged up a second depth sensor so that the user could turn toward the projection and still interact with it, then I created an alpha mask out of that sensor data which allowed me to fill the user’s body outline with a video of moving water. I then created an “aura” of glowing orange light around the person and two glowing globes of light that tracked their hand movements. The colors change based on the z-axis of the hands so there’s a little bit to explore there. All of these fade in using the same trigger for when the user enters the center location of the room.
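Building an alpha mask from depth data, as described above, is essentially thresholding each pixel's distance into an in-body band. A minimal sketch with a toy 2x2 frame (the real sensor resolution and threshold values would differ):

```python
import numpy as np

def body_mask(depth_mm, near=500, far=2000):
    """Pixels within the near/far band count as the body (alpha 255);
    everything else is transparent (alpha 0). The band values are
    hypothetical stand-ins for the actual sensor calibration."""
    mask = (depth_mm >= near) & (depth_mm <= far)
    return mask.astype(np.uint8) * 255

def composite(video_rgb, alpha):
    """Keep the video (e.g. moving water) only inside the body outline."""
    return video_rgb * (alpha[..., None] // 255)

depth = np.array([[400, 900], [1500, 3000]], dtype=np.float32)
alpha = body_mask(depth)
water = np.full((2, 2, 3), 200, dtype=np.uint8)  # stand-in water frame
out = composite(water, alpha)
print(alpha)  # 255 where the body is, 0 elsewhere
```

Per-frame thresholding like this is why the silhouette tracks the user in real time: the mask is recomputed from the live depth stream rather than drawn once.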

I am really proud of all of this! It took me a long time to get all of the kinks out and it all runs really smoothly. Watching Erin (my roommate) go through it I really felt like it was landing exactly how I wanted it to.

Next steps from here mean primarily developing a guide for the user. It could be something written on the door or possibly an audio guide that would happen while the user walks through the room. I also want to figure out how to attach a contact mic to the system so that the user might be able to hear their own heartbeat during the experience.

Here is a link to watch my roommate go through the room: https://osu.box.com/s/hzz8lp5s97qw5q47ar32cgblh5hus8rs

Here is the sound file for the meditation in case you want to do it on your own:


Tara Burns – Iteration X: Extending the Body – Cycle 1 Fall 2020

I began this class with the plan to make space to build the elements of my thesis and this first cycle was the first iteration of my MFA Thesis.

I envision my thesis as a three part process (or more). This first component was part of an evening walk around the OSU Arboretum with my MFA 2021 Cohort. To see the full event around the lake and other projects click here: https://dance.osu.edu/news/tethering-iteration-1-ohio-state-dance-mfa-project

In response to Covid, the OSU Dance MFA 2021 Cohort held a collaborative outdoor event. I placed my first cycle (Iteration X: Extending the Body) in this space. Five scheduled and timed groups were directed through a cultivated experience while simultaneously acting as docents to view sites of art. You see John Cartwright in the video above, directing a small audience toward my work.

In this outdoor space wifi and power were not available. I used a hotspot on my phone to transmit from both my computer and VR headset. I also used a battery to power my phone and computer for the duration.

By following this guide I was able to successfully connect my headset and wirelessly screen copy (scrcpy) my view from the Oculus Quest to my computer.
In OBS (video recording and live streaming software), I transmitted to Twitch.tv.
I then embedded all the components into our interactive website that the audience used on site with their mobile devices and headphones.

Iteration X: Extending the Body asked the audience to follow the directions on the screen to listen to the soundscape, view the perspective of the performer, and imagine alternate and simultaneous worlds and bodies as forms of resistance.


Meditation Room Cycle 1

The goal for my final project is to create an interactive meditation room that responds to the user’s choices about what they need and allows them to feel grounded in the present moment. The first cycle was just to map out the room and the different activities that happen at different points in the space. There were two ways that the user could interact with the system: their location in space, and the relationship of their hands to various points.

This is the programming that tracked the three locations in the room

There were three locations in the room that triggered a response. The first was just upon entering: the system would play a short clip of birdsong to welcome the user into the room. From there the user has two choices. I am not sure right now if I should dictate which experience comes first or if that should be left for the user to decide; I think I can make the program work either way. One option is to sit on the couch, which starts an 8-minute guided meditation that focuses on the breath, heartbeat, and stillness. The other option is to move to the center of the room, which is more of a movement experience where the user is invited to move as they like (with a projection cast across them and responding to their movement, to come in later cycles). As they enter this location, a 4-minute track of ambient sound creates an atmosphere of reflection and might inspire soft movement. This location is primarily where the body tracking triggers are used.

Programming for body tracking triggers

Two of the body tracking triggers are used throughout the room: they trigger a heartbeat and a sound of calm breathing when the user’s arms are near the center of their body. This isn’t always reliable, and sometimes seemed like too much with all of the other sounds layered on top, so I am thinking of shifting it to work just upon entry, and maybe just the heartbeat in the center, using gates the same way I did with the triggers used in the center of the room. The other two body tracking triggers use props in the room that the user can touch: a rock hanging from the ceiling that triggers the sound of water when touched, and a flower attached to the wall that triggers the sound of wind when touched. These both only have an on-gate when the user is in the correct space at the center of the room.
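The on-gate behavior described here (prop touches only count when the user is at the center) amounts to a simple conditional gate. A sketch, with hypothetical prop and location names:

```python
# The rock and flower triggers only pass through when the user is
# standing in the center of the room; touches elsewhere do nothing.
def gated_trigger(prop_touched, user_location, required_location="center"):
    """Return the sound to play, or None if the gate is closed."""
    sounds = {"rock": "water", "flower": "wind"}
    if user_location != required_location:
        return None  # gate closed
    return sounds.get(prop_touched)

print(gated_trigger("rock", "center"))    # water
print(gated_trigger("rock", "entry"))     # None - not at the center
print(gated_trigger("flower", "center"))  # wind
```

Gating on location keeps stray sensor hits near the door from firing the water or wind sounds out of context.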

Overall, I feel good about this cycle. I was able to overcome some of the technical challenges of body tracking and depth tracking, as well as timing on all of the sound files. I was able to prove the concept of triggering events based on the user’s interaction with the space, which was my initial goal.

The next steps from here are to incorporate the projections and possibly a biofeedback system for the heartbeat. I also need to think about how I am going to guide the experience. I think I will have some instructions on the door that help users understand what the space is for, how to engage with it, and what choices they may make throughout. I am also not really sure how to end it. Technically, I have the timers set up so that if someone finished the guided meditation, got up and played with the center space, and then wanted to do the guided meditation again, they totally could. So maybe that is up to the user as well?

Here is a link to me interacting with the space so you can see each of the locations and the possible events that happen as well as some of my thoughts and descriptions throughout the room (the sound is also a lot easier to hear): https://osu.box.com/s/iq2idk432jfn2yzbzre91i2gp3y4bu9d


Pressure Project 3

For this pressure project, I wanted to create a game where the player would attempt to move a conductive ring through an obstacle without contacting the obstacle (similar to the board game Operation). The object is to move the wand through the obstacle and touch the piece of foil labeled “goal,” which triggers a victory scene. However, if any part of the obstacle is touched with the wand, it triggers a failure screen, and the player needs to remove the wand from the obstacle and touch the reset button to go back into the game.
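The scene flow described above (goal wins, obstacle contact fails, reset returns to play) is a small state machine; a sketch, with hypothetical event names standing in for the Makey Makey inputs:

```python
# Scene logic for the Operation-style game: the goal contact wins,
# any obstacle contact fails, and the reset button returns to play.
def step(scene, event):
    if scene == "playing":
        if event == "goal":
            return "victory"
        if event == "obstacle":
            return "failure"
    elif scene == "failure" and event == "reset":
        return "playing"
    return scene  # all other events leave the scene unchanged

scene = "playing"
for event in ["obstacle", "goal", "reset", "goal"]:
    scene = step(scene, event)
print(scene)  # obstacle fails, reset restores play, goal then wins
```

Note that while in the failure scene, further goal or obstacle contacts are ignored; only reset changes the state, matching the described requirement to remove the wand and press reset.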

One thing I realized while designing the game was that it would be incredibly easy to cheat, for example moving the wand directly to the goal without going over the obstacle at all after starting the game. I decided to keep this feature in the game without resolving it because it made it very easy to test the game.

When I was building my box, I bent a metal coat hanger to create the wand and the obstacle. I assumed that the metal coat hanger would be conductive; however, the paint on the obstacle prevented that from happening. I could have sanded down the hanger, but instead I glued and wrapped foil around it. This created a few places where the wand was not conductive.

Another challenge I ran into was the physical construction of the box. I did not anticipate the need for room for the alligator clips to connect to the makey makey device. I ended up having to make a much taller box so the clips could stand up without strain or being bent at an angle inside of the box.

If I had more time to work on this project, I would have liked to have added another button to bring up a scene that would allow the player to play a “challenge mode.” This mode would force the player to complete the course within a certain amount of time. I also thought it would be interesting to force the player to engage with the Leap Motion controller with their other hand while trying to complete the game.


Download game files:

https://1drv.ms/u/s!Ai2N4YhYaKTvgbM4H-iN0GAvYMls6Q?e=nLweiY