Cycle 2 Demo

The demo in class on Wednesday showed the interface responding to 4 scenarios:

  1. No audience presence (displayed “away” on the screen)
  2. Single user detected (the goose went through a rough “greet” animation)
  3. Too much violent movement (displayed “scared goose” on the screen)
  4. More than a couple of audience members (displayed “too many humans” on the screen)

The interaction was made in a few days, and honestly, I am surprised it was as accurate and reliable as it was…

User presence was just a blob output. I used a “Brightness Calculator” with “Difference” actors to judge violent movement (the blob velocity was unreliable with my equipment). Detecting “too many humans” was just another “Brightness Calculator.” I tried more complicated actors and patches, but these were the ones that worked in this setting.
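
Outside Isadora, the same idea is easy to sketch. Below is a minimal Python/OpenCV version of the Difference-into-Brightness-Calculator chain; the threshold is a made-up placeholder, and this is an illustration of the technique, not my actual patch.

```python
# Frame-difference the video, take the mean brightness of the result,
# and treat a high value as "too much violent movement".
import cv2

cap = cv2.VideoCapture(0)              # default camera
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

MOVEMENT_THRESHOLD = 40.0              # placeholder; tune to the room

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)     # the "Difference" actor
    motion = diff.mean()               # "Brightness Calculator" on the diff
    if motion > MOVEMENT_THRESHOLD:
        print("scared goose")
    prev = gray
```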

Most of my time has gone into solving an issue with interlacing. I hoped I could build something with the lenses I have, order a custom lens (they are only $12 a foot plus the price to cut), or create a parallax barrier. Unfortunately, creating a high-quality lens does not seem possible with the materials I have (two of the 8″ × 10″ sample packs from Microlens), and a parallax barrier blocks light in proportion to the number of views: an N-view barrier passes only 1/N of the light (2 views block 50%, 3 block about 67%… 10 views block 90%). On Sunday I am going to try a patch that blends interlaced pixels to fix the problem of the lines on the screen not lining up with the lenses (it blends adjacent interlaces to align a non-integral number of pixels with the lines per inch of the lens).
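
For the Sunday experiment, here is roughly the math I have in mind, sketched in Python. The numbers (96 DPI, 10 LPI, 3 views) are example assumptions, not my actual hardware: each pixel column gets a fractional view index, and adjacent views are cross-faded by the fractional part.

```python
# Blend interlaces when pixels-per-lenticule is non-integral.
import numpy as np

dpi = 96.0       # assumed horizontal pixel density of the monitor
lpi = 10.0       # lines per inch of the lenticular lens
n_views = 3      # number of interlaced views
width = 1920

px_per_lenticule = dpi / lpi                     # 9.6 -> non-integral
x = np.arange(width)
view_pos = (x / px_per_lenticule * n_views) % n_views
lo = np.floor(view_pos).astype(int) % n_views    # nearer view index
hi = (lo + 1) % n_views                          # next view over
w_hi = view_pos - np.floor(view_pos)             # cross-fade weight

def interlace(views):
    """Blend a list of n_views images (H x W x 3 float arrays)."""
    out = np.zeros_like(views[0], dtype=np.float64)
    for v in range(n_views):
        weight = (lo == v) * (1.0 - w_hi) + (hi == v) * w_hi
        out += views[v] * weight[None, :, None]
    return out
```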

Worst-case scenario… a ready-to-go lenticular monitor is $500, a lens designed to work with a 23″ monitor is $200, and a 23″ monitor with a pixel pitch of 0.270 mm is about $130… One way or another, this goose is going to meet the public on 12/07/15…

Links I have found useful are…

Calculate the DPI of a monitor to make a parallax barrier.

https://www.sven.de/dpi/
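
The calculation behind that page is just the pixel diagonal divided by the physical diagonal; a quick sketch, using the specs of the ZR2440W linked below:

```python
# DPI = pixel diagonal / physical diagonal (what sven.de/dpi computes).
from math import hypot

def dpi(width_px, height_px, diagonal_in):
    return hypot(width_px, height_px) / diagonal_in

print(dpi(1920, 1200, 24.0))   # HP ZR2440W, 1920 x 1200 at 24": ~94.3 DPI
```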

Specs of one of the common ACCAD 24″ monitors

http://www.pcworld.com/product/1147344/zr2440w-24-inch-led-lcd-monitor.html

MIT student who made a 24″ lenticular 3D monitor.

http://alumni.media.mit.edu/~mhirsch/byo3d/tutorial/lenticular.html

 


Josh Isadora PP3

For Pressure Project three, I used tracking to trigger a movie on the upstage projection screen when a person entered my assigned section of the space. Here is the patch that I utilized.

This is the main patch.

This is inside of the User Actor


Josh Final Presentation 1 Update

This is my patch, which receives the Kinect data through Syphon into Isadora. There it takes the Kinect depth data and, using a Luminance Key and a Gaussian Blur, creates a solid, smoother image. From there, that smooth image of the person standing in the space is fed into an alpha mask and combined with a video feed, which projects a video within the outline of the body.
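
For the record, here is a hedged sketch of the same chain in Python/OpenCV; the threshold and blur size are guesses, and the Syphon feeds are stand-in arrays.

```python
# Luminance-key the depth image, blur it, and use it as an alpha mask
# over a video frame so the video shows only inside the body outline.
import cv2
import numpy as np

def composite(depth_gray, video_bgr, key_threshold=30):
    # Luminance key: keep pixels where the depth image is bright enough
    _, mask = cv2.threshold(depth_gray, key_threshold, 255, cv2.THRESH_BINARY)
    # Gaussian blur smooths the ragged silhouette edges
    mask = cv2.GaussianBlur(mask, (21, 21), 0)
    alpha = mask.astype(np.float32)[:, :, None] / 255.0
    return (video_bgr.astype(np.float32) * alpha).astype(np.uint8)
```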

This is the Vuo patch that feeds the Kinect data to Isadora through Syphon.


Checking In: Final Project Alpha and Comments

Alpha:

Today marked the first of our cycle presentation days for our final project. While I was initially stressing hard about this day, I was able, with the help of some very important people, to present a very organized “alpha” phase of my final project. While my idea initially took a lot of incubating and planning, I hashed it out and executed the initial connection/visual stage of my performance. Below is a sketch of my ideas for layouts, my beginning patch, and my final alpha patch that I demonstrated in class today.

IMG_2276  IMG_2273

You will see the difference between my first patch and my second.

IMG_2275

The small actor off to the right and below the image is a MIDI actor calibrated to the light board and my guitar. Sarah worked with me through this part and did a wonderful job explaining and visualizing the process.

Comments:

Lexi: Great start to your project! You made a really cool virtual spotlight using the Kinect. I know that the software may be giving you some initial troubles but I know you’ll be able to work through it and deliver an awesome final cut that incorporates your dance background. Keep it up!

Anna: I really like what you are doing with the Mic and Video connection. I believe that when it all comes together it will be something that’s really interactive and fun to play around with. I’m already having fun seeing how my voice influences the live capture. Keep it up!

Sarah: Words cannot describe how much you have helped me in this class and I look forward to working with you to not only complete but also OWN this project. I’m sure the final cut of your lighting project is going to be awesome and I look forward to working with you more.

Josh: How you implemented the Kinect with your personal videos is a very cool take on how someone can actually interact with video in a space. I can’t wait to see what else you add to your project!

Jonathan: Dude, your project is practically self-aware! Even if you explained it to me, I would never know how you were able to do that. I look forward to seeing your final project. Keep it up!


Isadora Updates

http://troikatronix.com/isadora-2-1-release-notes/
Mark recently updated Isadora …
Check out the release notes.

It changes the way videos are assigned to the stage.


Luma to Chroma Devolves into a Chromadepth Shadow-puppet Show

I was having trouble getting eyes++ to distinguish between a viewer and someone behind the viewer, so I converted luminance to chroma with the attached actor and used “The Edge” to create a mask outlining each object, so eyes++ would see them as different blobs. Things quickly devolved into making faces at the Kinect.
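
A rough sketch of the same trick outside Isadora (thresholds are placeholders): carve detected edges into the mask so touching objects come apart as separate connected components, which is roughly what eyes++ then counts as blobs.

```python
# Draw detected edges into the mask as boundaries so touching objects
# read as separate connected components (i.e., separate blobs).
import cv2
import numpy as np

def count_blobs(gray):
    _, mask = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(gray, 100, 200)          # stand-in for "The Edge" actor
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))
    mask[edges > 0] = 0                        # carve outlines into the mask
    n, _ = cv2.connectedComponents(mask)
    return n - 1                               # minus the background label
```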

The raw video is pretty bad. The only resolution I can get is 80 × 60… I tried adjusting the input; the image in the OpenNI Streamer looks to be about 640 × 480, but there are only a few adjustable options and none of them deal with resolution… I think it is a problem with the OpenNI streaming.

https://youtu.be/fK1yDxjD2S4

But the depth was there, and it was lighting independent, so I am working with it.

The first few seconds are the patch I am using (note the outline around the objects); the rest of the video is just playing with the pretty colors that were generated as a byproduct.


Vuo + Kinect + Izzy

Screen caps and video of Kinect + Vuo + Izzy. Since we were working with the demo version of Vuo, I couldn’t use the video feed directly in Isadora to practice tracking with Eyes++ and the blob decoder, so a screen-capture video was taken and imported.

We isolated a particular area in space where the Kinect/Vuo could read the depth as grayscale to identify that shape.
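
In code terms, the isolation amounts to keeping only the gray values that fall in a chosen band; a minimal sketch, with guessed near/far values:

```python
# Keep only pixels whose depth-as-gray value falls in a chosen range,
# so one slice of the room becomes a trackable white shape.
import cv2

NEAR, FAR = 80, 160      # gray values bracketing the area of interest (guesses)

def isolate_slice(depth_gray):
    return cv2.inRange(depth_gray, NEAR, FAR)   # white where in range
```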

Screen Shot 2015-10-29 at 1.23.08 AM


Vuo screen shot

 

Screen Shot 2015-10-28 at 11.49.51 AM

Kinect to Vuo to Isadora Patch

 

Screen Shot 2015-10-28 at 12.19.17 PM

This patch connects the video of depth tracking to Izzy; using Eyes++ and the Blob Decoder, I could get exact coordinates for the blob in space.
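
A sketch of what the Blob Decoder is reporting, in Python/OpenCV terms: find the largest contour in the mask and take its centroid as the blob’s coordinates.

```python
# Centroid of the largest white blob = the body's coordinates in frame.
import cv2

def blob_center(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    m = cv2.moments(biggest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # (x, y) centroid
```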

 

https://www.youtube.com/watch?v=E9Gs_QhZiJc&feature=youtu.be


Final Project Progress

I don’t have a computer with camera inputs yet, so I have been working on the 3D environment and interlacing. Below is a screenshot of the operator interface and a video testing the system. It is only a test, so the interlacing is not to scale and is oriented laterally. The final project will be on a screen mounted in portrait. I hoped to do about 4 interlaced images, but the software is showing serious lag with 2, so that might not be possible.

Operator Interface

(used to calibrate the virtual environment with the physical)

The object controls are on the left (currently 2 views): Angle Difference (the relative rotation of object 1 vs. object 2), X Difference (how far apart the virtual cameras are), X/Y/Z rotation with a fine adjust, and X/Y/Z translation with a fine adjust.

The backdrop controls are on the right (currently 2 views; I am using MP4 files until I can get a computer with cameras): Angle Difference (the relative rotation of screen 1 vs. screen 2), X/Y/Z rotation with a fine adjust, and X/Y/Z translation with a fine adjust.

In the middle is the interlace control (width of the lines and the distance between them; if I can get more than one perspective to work, I will change this to number of views and width of the lines).
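
The interlace control boils down to generating a stripe mask from those two numbers; a minimal sketch (parameter names are mine, not the patch’s):

```python
# Build a stripe mask from "width of the lines" and the gap between them.
import numpy as np

def stripe_mask(width, height, line_width, gap):
    x = np.arange(width)
    period = line_width + gap
    on = (x % period) < line_width          # True inside each stripe
    return np.tile(on, (height, 1))         # one mask row repeated down
```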

Video of the Working Patch

0:00 – 0:01 changing the relative angle

0:01 – 0:04 changing the relative x position

0:04 – 0:09 changing the XYZ rotation

0:09 – 0:20 adjusting the width and distance between the interlaced lines

0:20 – 0:30 adjusting the scale and XYZ YPR of backdrop 1

0:30 – 0:50 adjusting the scale and XYZ YPR of backdrop 2

0:50 – 1:00 adjusting the scale and XYZ YPR of the model

I have a problem as the object gets closer to and farther from the camera… One of the windows is a 3D Projector, and the other is a render on a 3D screen with a mask for the interlacing. I am not sure whether replacing the 3D Projector with another rendered 3D screen would add more lag, but I am already approaching the processing limits of the computer, and I have not added the tuba or the other views… I could always just add X/Y/Z scale controls to the 3D models, but there is a difference between scale and zoom, so it might look weird.

Hitchcock Zoom

The difference between zooming in and trucking (getting closer) is evident in the “Hitchcock Zoom.”
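
To make the scale/zoom distinction concrete: holding a subject of width w at a constant size on screen while the camera trucks requires the field of view to satisfy w = 2·d·tan(fov/2), so as d shrinks the fov must widen, which is what distorts the background. A quick sketch with example numbers:

```python
# Dolly-zoom relationship: fov needed to keep a subject of width w
# constant on screen at camera distance d.
from math import atan, degrees

def fov_for_distance(subject_width, distance):
    return degrees(2 * atan(subject_width / (2 * distance)))

for d in (4.0, 2.0, 1.0):                          # truck from 4 m in to 1 m
    print(d, round(fov_for_distance(2.0, d), 1))   # 2 m wide subject
```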

A Diagram with the Equations for Creating a Parallax Barrier

ParallaxBarrierDiagram

Image from Baydocks http://displayblocks.org/diycompressivedisplays/parallax-barrier-display/
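
My paraphrase of the standard equations in that diagram, as a sketch (verify against the Displayblocks page before cutting anything): the panel-to-barrier gap comes from the pixel pitch, viewing distance, and eye separation, and the slit pitch works out to slightly less than N pixel pitches.

```python
# Standard parallax-barrier geometry, as I read the diagram (all mm).
# p: pixel pitch, n: views, D: viewing distance, e: eye separation.
def barrier_geometry(p, n, D, e=65.0):
    g = p * D / e                   # panel-to-barrier gap
    b = n * p * D / (D + g)         # barrier (slit) pitch, just under n*p
    return g, b

# Example: the 0.270 mm pixel pitch mentioned above, 2 views, 600 mm away
print(barrier_geometry(0.270, 2, 600.0))   # -> (~2.49 mm, ~0.538 mm)
```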

The First Tests

The resolution is very low because the only lens I could match to a monitor was 10 lines per inch, lowering the horizontal parallax to just 80 pixels on the 8″ × 10″ lens I had. The first test uses nine pre-rendered images; the second uses only three, because Isadora started having trouble. I might pre-render the goose and have the responses trigger a loop (like the old Dragon’s Lair game). The drawback: the character would not be able to look at the person interacting. But with only three possible views, it might not be apparent that he was tracking you.


Ideation of Prototype Summary- C1 + C2

My final project unfolds in 2 cycles that culminate in one performative and experiential environment where a user can enter the space wearing tap-dance shoes and manipulate video and lights via the sounds from the taps and their movement in space. Below is an outline of the two cycles through which I plan to actualize this project.

Cycle one: Frieder Weiss-esque video follow

Screen Shot 2015-10-25 at 10.48.19 PM

Goal: Create a patch in Isadora that can follow a dancer in space with a projection. When the dancer is standing still, the animation is as well; as the performer accelerates through the space, the animation leaves a trail proportionate to the distance and speed traveled.
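
One way to sketch that behavior (names and constants are placeholders, not the eventual Isadora patch): blend each new frame over a fading buffer whose decay follows the tracked speed, so standing still leaves no trail and fast travel leaves a long one.

```python
# Speed-proportional trail: faster movement -> slower decay -> longer trail.
import numpy as np

trail = None

def update_trail(frame, speed, max_speed=5.0):
    global trail
    if trail is None:
        trail = frame.astype(np.float32)
    decay = 0.2 + 0.79 * min(speed / max_speed, 1.0)
    trail = trail * decay + frame.astype(np.float32) * (1 - decay)
    return trail.astype(np.uint8)
```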

Current state: I attempted to use Syphon with Quartz Composer to be able to read skeleton data from the Kinect for use in Isadora, and met some difficulties installing and understanding the software. I referenced Jamie Griffiths’ blogs, linked below.

http://www.jamiegriffiths.com/kinect-into-isadora

http://www.jamiegriffiths.com/new-kinect-into-syphon/

Projected timeline: 3-4 more classes to understand and implement the software in Isadora, then create and utilize the patch.

 

Cycle two: Audible tap interactions

Screen Shot 2015-10-25 at 10.48.28 PM

Goal: Create an environment that can listen to the frequency and/or amplitude of the various steps and sounds a tap shoe can create, and have those interactions affect/control the lighting for the piece.

Current state: I have not attempted this yet, since I am still in C1, but my classmate Anna Brown Massey did some work with sound in our last class that may be helpful when I reach this stage.

Current questions: How do I get the software to recognize frequency in the tap shoes, since the various steps create different sounds? Is this more reliable or more difficult than using volume alone? It is definitely more interesting to me.
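
As a starting point on the frequency question, an FFT over a short window of mic audio yields both a dominant frequency and its amplitude, so the two approaches could be compared directly; a minimal sketch:

```python
# Dominant frequency and amplitude of a short window of audio samples.
import numpy as np

def dominant_frequency(samples, sample_rate=44100):
    window = samples * np.hanning(len(samples))   # soften the window edges
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sample_rate)
    peak = spectrum.argmax()
    return freqs[peak], spectrum[peak]            # (Hz, amplitude)
```

Tap hits are short, broadband transients, so the peak may jump around; comparing the peak’s height against the total energy could indicate how reliable frequency is versus volume alone.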

Projected timeline: 3+ classes; I have not used this kind of software yet, so a lot of experimentation is expected.


PP3 Ideation, Prototype, and other bad poetry

GOALS:

  • Interactive mediated space in the Motion Lab in which the audience creates performance.
  • Interactivity is both with media and with fellow participants.
  • Delight through attention, a sense of an invitation to play, platform that produces friendly play.
  • Documentation through film, recording of the live feed, saving of software patches.
  • Design requirements: must be scalable for an audience of unexpected size, possibly entering at different times.

CONTENT:
Current ideas as of 10/25/15. Open to progressive discovery:

  • A responsive sound system affecting live projection.
  • Motion tracking and responsive projection as interactive
  • How do sound + live projection and motion tracking + live projection intersect?
  • Brainstorm of options: musical instruments in space; how do people affect each other; how can that be aggressive or friendly or something else; what can be unexpected for the director; could I find/use an “FAO Schwarz” floor piano; do people like seeing other people or just themselves; how to take in data for sound that is not only decibel level but also pitch, timbre, rhythm; what might Susan Chess have to offer regarding sound, or Alan Price, or Matt Lewis.

RESOURCES

  • 6+ Videocameras projecting live feed
  • 3 projectors
  • CV top-down camera
  • Multiple standing mics
  • Software: Isadora, possibly DMX, Max MSP

VALUES i.e. experiences I would like my users (“audience” / “interactors” / “participants”) to have:

  • uncertainty –> play –> discovery –> more discovery
  • with constant engagement

Drawing from Forlizzi and Battarbee, this work will proceed by attending to the intersecting levels of fluent, cognitive, and expressive experience. A theater audience will be accustomed to a come-in-and-sit-down-in-the-dark-and-watch-the-thing experience, and subverting that plan will require attention to how to harness their fluent habits: e.g., audiences will sit in chairs that are this close to the tech booth, but if the chairs are this far away, they assume those are allotted for the performance, which the audience doesn’t want to disrupt. Which raises the question: how does an entering audience proceed into a theater space with an absence of chairs? Where are mics (/playthings!) placed, and under what light and sound “direction” that tells them where to go and what to do? A few posts ago, in examining Forlizzi and Battarbee, I posed this question, and it applies again here:

What methods will empower the audience to form an active relationship with the present media and with fellow theater citizens?

LAB: DAY 1
As I worked in the Motion Lab on Friday 10/23, I discovered an unplanned audience: my fellow classmates. Seemingly busy with their own patches and software challenges, once they looked over and determined that sound level was data I had told Isadora to read and feed into a live zoom of myself (via the Mac’s FaceTime camera), they spent the better part of an hour “messing” with my data in order to affect my projection. (I had set the incoming decibel level to alter the zoom level on my live projection.) They were loud, soft, and laughing aggressively, testing the lowest threshold at which they could still affect my zoom output.
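
The mapping itself is a simple scale-and-clamp; a sketch of what the patch is doing (the ranges are placeholders to tune):

```python
# Scale incoming sound level into a zoom factor for the live image.
def level_to_zoom(level, level_min=0.0, level_max=100.0,
                  zoom_min=1.0, zoom_max=3.0):
    t = (level - level_min) / (level_max - level_min)
    t = max(0.0, min(1.0, t))                  # clamp to [0, 1]
    return zoom_min + t * (zoom_max - zoom_min)
```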

SO: the discovery that decibel level affects the live projection of a fellow user seems, through this unexpected prototype courtesy of my co-working colleagues, to offer an opportunity to find that SOUND AFFECTING SOMEONE ELSE’S PROJECTION ENGAGES THE ATTENTION OF USERS ALSO ENGAGED IN OTHER TASKS. Okay, good. Moving forward …