The Present

The Past

I am currently designing an audio-video environment in which Isadora reads the audio, harnesses the dynamics (amplitude level) data, and converts those numbers into ways of shaping the live video output; in its next iteration, that output may be mediated through a Kinect sensor and a separate Isadora patch.

My first task was to create a video effect out of the audio. I connected the Sound Level Watcher to the Zoom input on the Projector actor, and later added a Smoother in between to soften the staccato shifts in zoom on the video. (Initial ideation is here.)
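Under the hood, that Smoother is doing something like an exponential moving average before the level reaches the zoom input. Here is a minimal Python sketch of the idea, outside Isadora, with made-up level values and zoom range:

```python
# Sketch of the Sound Level Watcher -> Smoother -> Projector zoom chain.
# The level values and zoom range here are illustrative, not Isadora's actual units.

def smooth_levels(levels, alpha=0.2):
    """Exponential moving average: smaller alpha means heavier smoothing."""
    smoothed = []
    current = levels[0]
    for level in levels:
        current = alpha * level + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

def level_to_zoom(level, level_max=100.0, zoom_min=100.0, zoom_max=200.0):
    """Map a 0-100 audio level onto a zoom percentage."""
    t = max(0.0, min(level / level_max, 1.0))
    return zoom_min + t * (zoom_max - zoom_min)

raw = [5, 80, 10, 90, 20, 95, 15]            # spiky mic levels
for lvl in smooth_levels(raw):
    print(round(level_to_zoom(lvl), 1))       # zoom now ramps instead of jumping
```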

Once I had established my interest in the experience of zoom tied to amplitude, I enlisted the Inside Range actor to create a setting in which, past a certain amplitude, the Sound Level Watcher triggers the Dots actor. In other words, whenever the volume of sound coming into the mic hits a certain point or above, the actors trigger an effect on the live video projection in which the screen disperses into dots. I selected the Dots actor not because I was confident that it would create a magically terrific effect, but because it was a familiar actor with which I could practice manipulating the volume data. I added the Shimmer actor to this effect, still playing with the data range so that these actors trigger only above a certain volume.
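The Inside Range logic is essentially a gate: the Dots/Shimmer chain fires only while the incoming level sits within a chosen band. A rough Python equivalent (the threshold numbers are placeholders, not the values in my patch):

```python
# Sketch of the Inside Range gate: pass the level through only when it is
# loud enough to trigger the Dots/Shimmer effects. Thresholds are placeholders.

def inside_range(level, low=60.0, high=100.0):
    """Return True when the level falls inside [low, high]."""
    return low <= level <= high

def update_effects(level):
    if inside_range(level):
        return f"trigger Dots/Shimmer (level={level})"
    return f"pass video through untouched (level={level})"

for lvl in [20, 45, 63, 88, 30, 95]:
    print(update_effects(lvl))
```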

Massey Isadora screen shot

The Future

User-design Vision:
Through this process my vision has been to make a system adaptable to multiple participants, somewhere between 2 and 30, who can all be simultaneously engaged by the experience and possibly take on different roles by their own self-selection. As with my concert choreography, I am strategizing methods of introducing the experience of “discovery.” I’d like this one to feel, to me, delightful. With a mic available in the room, I am currently playing with the idea of a scrolling karaoke projection with lyrics to a well-known song. My vision includes planning how to “invite” an audience to sing and have them discover that the corresponding projection is in a causal relationship with the audio.

Sound Frequency:
The next step, as seen at the bottom of the screenshot (the right-hand side shows some actors I have laid aside for possible future use), is to start using the Sound Frequency actor to take in data about pitch (frequency) as a means of affecting the video output. To do so I will need to provide an audio file covering a variety of ranges as a source, and experiment to observe how the data shifts at different human voice registers. Then I will take that frequency data range and, through an Inside Range actor, connect it to a video output.
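As a rough assumption about how that frequency data could be derived and bucketed, here is a small numpy sketch: estimate the dominant frequency of an audio buffer with an FFT, then sort it into a voice register, which is approximately the job the Sound Frequency actor plus an Inside Range actor would do. The register boundaries and the test tone are placeholders.

```python
import numpy as np

def dominant_frequency(samples, sample_rate=44100):
    """Return the strongest frequency (Hz) in a mono audio buffer."""
    window = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def register_bucket(freq_hz):
    """Very rough voice-register buckets; the boundaries are placeholders."""
    if freq_hz < 165:
        return "low register"
    elif freq_hz < 330:
        return "mid register"
    return "high register"

# Fake 220 Hz test tone instead of a real recording.
t = np.arange(0, 1.0, 1 / 44100)
tone = np.sin(2 * np.pi * 220 * t)
f = dominant_frequency(tone)
print(round(f, 1), register_bucket(f))   # ~220.0 Hz -> "mid register"
```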

Kinect Collaboration:
I am also considering collaborating with my colleague Josh Poston on his project, which currently uses the Kinect with projection. It could replace the “live video projection” I am currently using, instead driving a motion-sensed image on a rear-projected screen. As I consider joining our projects and expanding their dimensions (so to speak; oh, puns!), I need to start narrowing in on the user-design components. In other words: where does the mic (or mics) live, where does the screen go, how many people is this meant for, where will they be, will participants have different roles, what is the…

Iteration 1 of Final Project

PP1 screen shot, follow circle

Here is my patch for the first iteration:

The small oval shape followed a person around the part of the space that is visible to both the Kinect and the projector. This was difficult since their fields of view were slightly off, but I was able to create it with good accuracy.
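One way to think about lining up the two fields of view (not necessarily how the patch handled it) is a per-axis linear map from Kinect coordinates to projector coordinates, built from two measured reference points. Everything in this sketch, resolutions and reference points included, is hypothetical:

```python
# Hypothetical linear calibration from Kinect blob coordinates to projector
# coordinates, using two measured reference points per axis.

def make_axis_map(kinect_a, kinect_b, proj_a, proj_b):
    """Return f(x) mapping kinect_a -> proj_a and kinect_b -> proj_b linearly."""
    scale = (proj_b - proj_a) / (kinect_b - kinect_a)
    return lambda k: proj_a + (k - kinect_a) * scale

# Example: a person standing at two taped marks on the floor was seen at
# these (made-up) coordinates by each device.
map_x = make_axis_map(kinect_a=80, kinect_b=560, proj_a=0, proj_b=1920)
map_y = make_axis_map(kinect_a=60, kinect_b=420, proj_a=0, proj_b=1080)

kinect_blob = (320, 240)                              # centre of the Kinect image
print(map_x(kinect_blob[0]), map_y(kinect_blob[1]))   # projector position for the oval
```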

I would’ve liked the shape to move a bit more smoothly and conform more closely to the shape of the body. I think using some of the actors that Josh did (Gaussian Blur and Alpha Mask) will help with these issues.


Cycle 2 – Sarah Lawler

Cycle 2 – Using MIDI, Connor and I were able to control light cues by strumming strings on Connor’s guitar.
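As a sketch of the kind of listening that patch does, here is the same idea in Python with the mido library instead of Isadora and the light board; the MIDI port name and the note-to-cue mapping are assumptions, not what we used in class:

```python
import mido

# Hypothetical mapping from guitar string notes (MIDI note numbers) to light
# cues; the numbers here are placeholders, not the cues from class.
NOTE_TO_CUE = {40: "cue 1", 45: "cue 2", 50: "cue 3", 55: "cue 4"}

def watch_for_strums(port_name="Guitar MIDI"):   # assumed port name
    with mido.open_input(port_name) as inport:
        for msg in inport:
            # A strummed string arrives as a note_on with nonzero velocity.
            if msg.type == "note_on" and msg.velocity > 0:
                cue = NOTE_TO_CUE.get(msg.note)
                if cue:
                    print(f"fire {cue} (note {msg.note})")

if __name__ == "__main__":
    watch_for_strums()
```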

Connor:SarahPatch11:9:15

Screen Shot 2015-11-09 at 11.33.43 AM

Screen Shot 2015-11-09 at 11.33.27 AM


Cycle 2 Demo

The demo in class on Wednesday showed the interface responding to 4 scenarios:

  1. No audience presence (displayed “away” on the screen)
  2. Single user detected (the goose went through a rough “greet” animation)
  3. Too much violent movement (the words “scared goose” on the screen)
  4. More than a couple of audience members (the words “too many humans” on the screen)

The interaction was made in a few days, and honestly, I am surprised it was as accurate and reliable as it was…

The user presence was just a blob output. I used a “Brightness Calculator” with the “Difference” actors to judge the violent movement (the blob velocity was unreliable with my equipment). Detecting “too many humans” was just another “Brightness Calculator”. I tried more complicated actors and patches, but these were the ones that worked in the setting.
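The “Difference” plus “Brightness Calculator” combination boils down to frame differencing followed by an average-brightness readout. Here is a sketch of that idea in Python/OpenCV rather than Isadora (the threshold and camera index are placeholders):

```python
import cv2
import numpy as np

def motion_level(prev_frame, frame):
    """Mean brightness of the frame-to-frame difference: a crude motion score."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(np.mean(cv2.absdiff(prev_gray, gray)))

SCARED_THRESHOLD = 25.0          # placeholder: above this, the goose gets "scared"

cap = cv2.VideoCapture(0)        # any webcam stands in for the installation camera
ok, prev = cap.read()
if not ok:
    raise SystemExit("no camera found")
for _ in range(300):             # a few seconds of frames, just for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    score = motion_level(prev, frame)
    if score > SCARED_THRESHOLD:
        print("scared goose", round(score, 1))
    prev = frame
cap.release()
```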

Most of what I have been spending my time on is an issue with interlacing. I hoped I could build something with the lenses I have, order a custom lens (they are only $12 a foot plus the price to cut), or create a parallax barrier. Unfortunately, creating a high-quality lens does not seem possible with the materials I have (two of the 8″ × 10″ sample packs from Microlens), and a parallax barrier blocks more light as the number of viewing angles grows, since each slit passes only 1/N of the light (2 views blocks 50%, 3 blocks ~67%… 10 views blocks 90%). On Sunday I am going to try a patch that blends interlaced pixels to fix the problem of the lines on the screen not lining up with the lenses (it basically blends interlaces so that a non-integral number of pixels can align with the lines per inch of the lens).
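The blending idea, written out as a sketch: compute each pixel column’s fractional position under its lenticule and blend the two adjacent views in proportion, so the interlace pattern can track a lens whose lines per inch don’t divide evenly into the monitor’s pixels. The DPI, LPI, and view count below are placeholders, not my actual lens:

```python
# Sketch of fractional interlacing: give each pixel column a blend of two views
# so a non-integral pixels-per-lenticule ratio still lines up with the lens.
MONITOR_DPI = 94.0     # placeholder, roughly a 24" 1920x1200 panel
LENS_LPI = 10.0        # placeholder lines-per-inch of the lenticular sheet
NUM_VIEWS = 4

pixels_per_lenticule = MONITOR_DPI / LENS_LPI   # 9.4: not an integer

def column_blend(x):
    """Return (view_a, view_b, weight_of_b) for pixel column x."""
    phase = (x / pixels_per_lenticule) % 1.0     # position under the lenticule, 0..1
    pos = phase * NUM_VIEWS
    view_a = int(pos) % NUM_VIEWS
    view_b = (view_a + 1) % NUM_VIEWS
    return view_a, view_b, pos - int(pos)

for x in range(6):
    a, b, w = column_blend(x)
    print(f"column {x}: {1 - w:.2f} * view {a} + {w:.2f} * view {b}")
```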

Worst-case scenario… A ready-to-go lenticular monitor is $500, the lens designed to work with a 23″ monitor is $200, and a 23-inch monitor with a pixel pitch of 0.270 mm is about $130… One way or another, this goose is going to meet the public on 12/07/15…

Links I have found useful are…

Calculate the DPI of a monitor to make a parallax barrier.

https://www.sven.de/dpi/

Specs of one of the common ACCAD 24″ monitors

http://www.pcworld.com/product/1147344/zr2440w-24-inch-led-lcd-monitor.html

MIT student who made a 24″ lenticular 3D monitor.

http://alumni.media.mit.edu/~mhirsch/byo3d/tutorial/lenticular.html



Josh Isadora PP3

For Pressure Project three I used tracking to trigger a movie on the upstage projection screen when a person entered my assigned section of the space. Here is the patch that I utilized.
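Before the screenshots, here is a rough sketch of the trigger logic outside Isadora: check whether the tracked person’s position falls inside the assigned section and fire the movie once when it does. The section bounds and blob positions are placeholders:

```python
# Hypothetical region trigger: play the upstage movie when a tracked person
# enters an assigned rectangle of the floor. Coordinates are placeholders
# (normalized 0..1 stage coordinates).
SECTION = {"x_min": 0.0, "x_max": 0.33, "y_min": 0.0, "y_max": 1.0}

def in_section(x, y, section=SECTION):
    return (section["x_min"] <= x <= section["x_max"]
            and section["y_min"] <= y <= section["y_max"])

movie_playing = False
for x, y in [(0.8, 0.5), (0.5, 0.5), (0.2, 0.5)]:   # fake blob positions over time
    if in_section(x, y) and not movie_playing:
        movie_playing = True
        print(f"person entered section at ({x}, {y}) -> start movie")
```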

This is the main patch.


This is inside of the User Actor



Josh Final Presentation 1 Update

This is my patch, which receives the Kinect data through Syphon into Isadora. There it takes the kdepth data and, using a Luminance Key and Gaussian Blur, creates a solid, smoother image. From there, that smooth image of the person standing in the space is fed into an Alpha Mask and combined with a video feed, which projects a video within the outline of the body.
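Here is the same pipeline sketched outside Isadora in Python/OpenCV, under the assumption that the depth feed arrives as an 8-bit grayscale image; the depth band, blur size, and stand-in images are made up:

```python
import cv2
import numpy as np

def body_matte(depth_gray, near=60, far=180):
    """Luminance-key style matte: keep pixels whose depth value falls in a band."""
    mask = ((depth_gray >= near) & (depth_gray <= far)).astype(np.uint8) * 255
    mask = cv2.GaussianBlur(mask, (21, 21), 0)         # soften the silhouette edge
    return mask.astype(np.float32) / 255.0

def composite(video_frame, depth_gray):
    """Show the video only inside the body outline (alpha-mask composite)."""
    alpha = body_matte(depth_gray)[:, :, np.newaxis]   # HxWx1, 0..1
    return (video_frame.astype(np.float32) * alpha).astype(np.uint8)

# Stand-in data instead of the Syphon/Kinect feed: a gray blob on a dark field.
depth = np.zeros((480, 640), np.uint8)
cv2.circle(depth, (320, 240), 120, 120, -1)            # "person" at mid depth
clip = np.full((480, 640, 3), (0, 128, 255), np.uint8) # orange test frame
cv2.imwrite("masked_frame.png", composite(clip, depth))
```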


This is the Vuo patch that feeds the Kinect data through Syphon to Isadora.



Checking In: Final Project Alpha and Comments

Alpha:

Today marked the first of our cycle presentation days for our final project. While I was initially stressing hard about this day, I was able, with the help of some very important people, to present a very organized “alpha” phase of my final project. While my idea initially took a lot of incubating and planning, I hashed it out and executed the initial connection/visual stage of my performance. Below are a sketch of my ideas for layouts, my beginning patch, and the final alpha patch that I demonstrated in class today.

IMG_2276  IMG_2273

You will see the difference between my first patch and my second.

IMG_2275

The small actor off to the right and below the image is a MIDI actor calibrated to the light board and my guitar. Sarah worked with me through this part and did a wonderful job explaining and visualizing this process.

Comments:

Lexi: Great start to your project! You made a really cool virtual spotlight using the Kinect. I know that the software may be giving you some initial troubles but I know you’ll be able to work through it and deliver an awesome final cut that incorporates your dance background. Keep it up!

Anna: I really like what you are doing with the Mic and Video connection. I believe that when it all comes together it will be something that’s really interactive and fun to play around with. I’m already having fun seeing how my voice influences the live capture. Keep it up!

Sarah: Words cannot describe how much you have helped me in this class and I look forward to working with you to not only complete but also OWN this project. I’m sure the final cut of your lighting project is going to be awesome and I look forward to working with you more.

Josh: How you implemented the Kinect with your personal videos is a very cool take on how someone can actually interact with video in a space. I can’t wait to see what else you add to your project!

Jonathan: Dude, your project is practically self-aware! Even if you explained it to me, I would never know how you were able to do that. I look forward to seeing your final project. Keep it up!


Isadora Updates

http://troikatronix.com/isadora-2-1-release-notes/
Mark recently updated Isadora …
Check out the release notes.

It changes the ways that videos are assigned to the stage.


Luma to Chroma Devolves into a Chromadepth Shadow-puppet Show

I was having trouble getting Eyes++ to distinguish between a viewer and someone behind the viewer, so I changed the luminance to chroma with the attached actor and used “The Edge” to create a mask outlining each object, so Eyes++ would see them as separate blobs. Things quickly devolved into making faces at the Kinect.
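For context on why the outline helps: when two people overlap in the camera’s view they merge into one blob, but carving a thickened edge map out of the foreground mask re-introduces a boundary between them before the blob pass runs. A sketch of that idea in Python/OpenCV, not the actual actor settings:

```python
import cv2
import numpy as np

def split_touching_blobs(foreground_mask, frame_gray):
    """Cut edge lines out of the mask so overlapping objects separate into blobs."""
    edges = cv2.Canny(frame_gray, 50, 150)                   # thin outlines
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))     # thicken into a barrier
    separated = cv2.subtract(foreground_mask, edges)         # carve the boundary
    count, labels = cv2.connectedComponents(separated)
    return count - 1, separated                              # blob count, minus background

# Stand-in frame: two overlapping rectangles of different brightness.
frame = np.zeros((240, 320), np.uint8)
cv2.rectangle(frame, (40, 60), (160, 180), 200, -1)
cv2.rectangle(frame, (150, 60), (280, 180), 120, -1)
mask = (frame > 50).astype(np.uint8) * 255
blobs, _ = split_touching_blobs(mask, frame)
print("blobs found:", blobs)   # 2, even though the mask alone is one blob
```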

The raw video is pretty bad. The only resolution I can get is 80 × 60… I tried adjusting the input, and the image in the OpenNI Streamer looks to be about 640 × 480, but there are only a few adjustable options and none of them deal with resolution… I think it is a problem with the OpenNI streaming.

https://youtu.be/fK1yDxjD2S4

But the depth was there, and it was lighting independent, so I am working with it.

The first few seconds are the patch I am using (note the outline around the objects), the rest of the video is just playing with the pretty colors that were generated as a byproduct.


Vuo + Kinect + Izzy

Screen caps and video of Kinect + Vuo + Izzy. Since we were working with the demo version of Vuo, I couldn’t use the video feed directly in Isadora to practice tracking with Eyes++ and the blob decoder, so a screen-capture video was taken and imported.

We isolated a particular area in space where the Kinect/Vuo could read the depth as a grayscale image to identify that shape.

Screen Shot 2015-10-29 at 1.23.08 AM


Vuo screen shot


Screen Shot 2015-10-28 at 11.49.51 AM

Kinect to Vuo to Isadora Patch


Screen Shot 2015-10-28 at 12.19.17 PM

In the patch connecting the video of depth tracking to Izzy, using Eyes++ and the Blob Decoder, I could get exact coordinates for the blob in space.
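As a rough outside-Isadora analogue of Eyes++ plus the Blob Decoder, a connected-components pass over the thresholded depth image yields the same kind of blob centroid coordinates; the threshold and the stand-in depth frame below are placeholders:

```python
import cv2
import numpy as np

def blob_coordinates(depth_gray, threshold=100):
    """Threshold the depth image and return the centroid of each blob."""
    _, mask = cv2.threshold(depth_gray, threshold, 255, cv2.THRESH_BINARY)
    count, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; the rest are blobs.
    return [tuple(np.round(centroids[i], 1)) for i in range(1, count)]

# Stand-in depth frame: one bright blob where a person stands.
depth = np.zeros((480, 640), np.uint8)
cv2.circle(depth, (400, 300), 60, 200, -1)
print(blob_coordinates(depth))   # -> [(400.0, 300.0)]
```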


https://www.youtube.com/watch?v=E9Gs_QhZiJc&feature=youtu.be