Cycle 2 – Sarah Lawler
Posted: November 9, 2015 Filed under: Final Project, Sarah Lawler
Cycle 2 – Using MIDI, Connor and I were able to control light cues by strumming strings on Connor’s guitar.
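Not the actual Isadora setup, but a minimal sketch of the same idea in Python with the mido library; the port names and the note-to-cue mapping are made-up placeholders.

```python
# Sketch: map guitar strums (MIDI note-on) to light-cue triggers.
# Assumes a guitar-to-MIDI source and a MIDI-addressable light board;
# the port names and note-to-cue mapping below are hypothetical.
import mido

NOTE_TO_CUE = {40: 1, 45: 2, 50: 3, 55: 4, 59: 5, 64: 6}  # open strings -> cue numbers

with mido.open_input('Guitar MIDI') as strings, mido.open_output('Light Board') as board:
    for msg in strings:
        if msg.type == 'note_on' and msg.velocity > 0 and msg.note in NOTE_TO_CUE:
            cue = NOTE_TO_CUE[msg.note]
            # Fire the cue as a note-on; velocity could scale intensity.
            board.send(mido.Message('note_on', note=cue, velocity=msg.velocity))
```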
Cycle 2 Demo
Posted: November 7, 2015 Filed under: Jonathan Welch, Uncategorized | Tags: Jonathan Welch
The demo in class on Wednesday showed the interface responding to 4 scenarios:
- No audience presence (displayed “away” on the screen)
- Single user detected (the goose went through a rough “greet” animation)
- Too much violent movement (the words “scared goose” on the screen)
- More than a couple of audience members (the words “too many humans” on the screen)
The interaction was made in a few days, and honestly, I am surprised it was as accurate and reliable as it was…
The user presence was just a blob output. I used a “Brightness Calculator” with the “Difference” actors to judge the violent movement (the blob velocity was unreliable with my equipment). Detecting “too many humans” was just another “Brightness Calculator”. I tried more complicated actors and patches, but these were the ones that worked in the setting.
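Not the Isadora patch itself, but a rough OpenCV sketch of the brightness-plus-difference logic described above, with the blob-based presence check stood in for by a simple brightness floor; all thresholds are made up and would need tuning to the room.

```python
# Sketch: approximate the "Brightness Calculator" and "Difference" actors with OpenCV.
# Thresholds are arbitrary placeholders.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    brightness = gray.mean()                  # "Brightness Calculator"
    motion = cv2.absdiff(gray, prev).mean()   # "Difference" -> amount of movement
    prev = gray

    if brightness < 10:
        state = 'away'               # no audience presence
    elif motion > 25:
        state = 'scared goose'       # too much violent movement
    elif brightness > 60:
        state = 'too many humans'    # many bodies = much more bright area
    else:
        state = 'greet'              # single user detected
    print(state)
```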
Most of my time has gone into solving an issue with interlacing. I hoped I could build something with the lenses I have, order a custom lens (they are only $12 a foot plus the price to cut), or create a parallax barrier. Unfortunately, creating a high-quality lens does not seem possible with the materials I have (two of the 8″ x 10″ sample packs from Microlens), and a parallax barrier blocks more light the more viewing angles you add, passing only 1/N of the light for N views (2 views blocks 50%, 3 blocks 66%… 10 views blocks 90%). On Sunday I am going to try a patch that blends interlaced pixels to fix the problem of the lines on the screen not lining up with the lenses (it basically blends adjacent interlaced columns so that a non-integer number of pixels can line up with the lines per inch of the lens).
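A quick sketch of the two numbers behind that trade-off; the 94 DPI and 10 LPI figures below are assumptions based on the monitors and lenses discussed in these posts.

```python
# Barrier: only 1 of every N columns is visible from a given angle, so
# transmission = 1/N and the barrier blocks (N - 1)/N of the light.
for views in (2, 3, 10):
    print(views, 'views blocks', round(100 * (views - 1) / views), '% of the light')

# Lens alignment: the screen rarely gives an integer number of pixels per
# lenticule, which is why the interlaced lines drift off the lenses.
# (94 DPI and 10 LPI are assumed values for illustration.)
dpi, lpi = 94.0, 10.0
px_per_lenticule = dpi / lpi     # e.g. 9.4 pixels under each lens
frac = px_per_lenticule % 1      # the 0.4 px that has to be blended away
print(px_per_lenticule, frac)
```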
Worst-case scenario… a ready-to-go lenticular monitor is $500, a lens designed to work with a 23″ monitor is $200, and a 23″ monitor with a pixel pitch of 0.270 mm is about $130… One way or another, this goose is going to meet the public on 12/07/15…
Links I have found useful are…
Calculate the DPI of a monitor to make a parallax barrier.
Specs of one of the common ACCAD 24″ monitors
http://www.pcworld.com/product/1147344/zr2440w-24-inch-led-lcd-monitor.html
MIT student who made a 24″ lenticular 3D monitor.
http://alumni.media.mit.edu/~mhirsch/byo3d/tutorial/lenticular.html
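For the DPI calculation mentioned above, a quick sketch of the arithmetic; the 1920 × 1200 resolution at 24″ is assumed for the ACCAD monitor, and the 0.270 mm pitch is the 23″ monitor mentioned earlier.

```python
# Two ways to get the DPI needed to line a barrier or lens up with the pixels.
import math

# 1) From the pixel pitch on the spec sheet (0.270 mm, the 23" monitor above):
print(25.4 / 0.270)   # ~94 pixels per inch

# 2) From resolution and diagonal size (1920 x 1200 at 24" assumed here):
w_px, h_px, diag_in = 1920, 1200, 24.0
print(math.hypot(w_px, h_px) / diag_in)   # also ~94 DPI
```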
Josh Isadora PP3
Posted: November 6, 2015 Filed under: Josh Poston, Pressure Project 3
For Pressure Project 3 I used the tracking to trigger a movie on the upstage projection screen when a person entered my assigned section of the space. Here is the patch I used.
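Not the Isadora patch, but a minimal sketch of the same enter-the-zone trigger logic in Python, assuming normalized blob coordinates (e.g. from Eyes++); the zone bounds are made up.

```python
# Sketch: trigger the movie once when a tracked person enters the assigned zone.
ZONE = {'x_min': 0, 'x_max': 33, 'y_min': 0, 'y_max': 100}  # hypothetical section bounds
playing = False

def on_blob(x, y):
    """Call once per frame with the tracked person's position (0-100 on each axis)."""
    global playing
    inside = ZONE['x_min'] <= x <= ZONE['x_max'] and ZONE['y_min'] <= y <= ZONE['y_max']
    if inside and not playing:
        playing = True
        print('trigger movie on upstage screen')   # stand-in for starting playback
    elif not inside:
        playing = False                            # re-arm when they leave the zone
```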
Josh Final Presentation 1 Update
Posted: November 6, 2015 Filed under: Josh Poston, Uncategorized
This is my patch, which receives the Kinect data through Syphon into Isadora. It takes the depth data and, using a luminance key and Gaussian blur, creates a solid, smoother image. That smoothed image of the person standing in the space is fed into an alpha mask and combined with a video feed, which projects a video within the outline of the body.
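Roughly the same pipeline, sketched in OpenCV terms as a single-frame example; the file names are placeholders and a single threshold stands in for Isadora’s luminance key.

```python
# Sketch: depth image -> key -> blur -> alpha mask over a video frame.
import cv2
import numpy as np

depth = cv2.imread('kinect_depth_frame.png', cv2.IMREAD_GRAYSCALE)  # placeholder file
video = cv2.imread('video_frame.png')                               # placeholder file

# Keep the bright (near) part of the depth image where the person stands.
_, mask = cv2.threshold(depth, 80, 255, cv2.THRESH_BINARY)

# Gaussian blur to fill holes and soften the silhouette edge.
mask = cv2.GaussianBlur(mask, (21, 21), 0)

# Alpha mask: the video only shows up inside the body outline.
alpha = mask.astype(np.float32) / 255.0
out = (video.astype(np.float32) * alpha[..., None]).astype(np.uint8)
cv2.imwrite('composited.png', out)
```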
Checking In: Final Project Alpha and Comments
Posted: November 4, 2015 Filed under: Connor Wescoat, Isadora
Alpha:
Today marked the first of our cycle presentation days for our final project. Though I was initially stressing hard about this day, with the help of some very important people I was able to present a very organized “alpha” phase of my final project. My idea took a lot of incubating and planning, but I hashed it out and executed the initial connection/visual stage of my performance. Below is a sketch of my ideas for layouts, my beginning patch, and the final alpha patch that I demonstrated in class today.
You will see the difference between my first patch and my second.
The small actor off to the right and below the image is a MIDI actor calibrated to the light board and my guitar. Sarah worked with me through this part and did a wonderful job explaining and visualizing the process.
Comments:
Lexi: Great start to your project! You made a really cool virtual spotlight using the Kinect. I know that the software may be giving you some initial troubles but I know you’ll be able to work through it and deliver an awesome final cut that incorporates your dance background. Keep it up!
Anna: I really like what you are doing with the Mic and Video connection. I believe that when it all comes together it will be something that’s really interactive and fun to play around with. I’m already having fun seeing how my voice influences the live capture. Keep it up!
Sarah: Words cannot describe how much you have helped me in this class and I look forward to working with you to not only complete but also OWN this project. I’m sure the final cut of your lighting project is going to be awesome and I look forward to working with you more.
Josh: How you implemented the Kinect with your personal videos is a very cool take on how someone can actually interact with video in a space. I can’t wait to see what else you add to your project!
Jonathan: Dude, your project is practically self-aware! Even if you explained it to me, I would never know how you were able to do that. I look forward to seeing your final project. Keep it up!
Isadora Updates
Posted: November 2, 2015 Filed under: Uncategorized
http://troikatronix.com/isadora-2-1-release-notes/
Mark recently updated Isadora …
Check out the release notes.
It changes the way videos are assigned to the stage.
Luma to Chroma Devolves into a Chromadepth Shadow-puppet Show
Posted: October 30, 2015 Filed under: Jonathan Welch | Tags: Jonathan Welch
I was having trouble getting Eyes++ to distinguish between a viewer and someone behind the viewer, so I changed the luminance to chroma with the attached actor and used “The Edge” to create a mask outlining each object, so Eyes++ would see them as different blobs. Things quickly devolved into making faces at the Kinect.
The raw video is pretty bad. The only resolution I can get is 80 x 60… I tried adjusting the input; the image in the OpenNI Streamer looks to be about 640 x 480, but there are only a few adjustable options and none of them deal with resolution… I think it is a problem with the OpenNI streaming.
But the depth was there, and it was lighting independent, so I am working with it.
The first few seconds show the patch I am using (note the outline around the objects); the rest of the video is just playing with the pretty colors that were generated as a byproduct.
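A rough OpenCV sketch of the luminance-to-chroma plus edge-outline idea described above; the thresholds and file name are placeholders, and the colormap and Canny calls are only stand-ins for the Isadora actors.

```python
# Sketch: color the depth image (chromadepth-style) and cut edge outlines into
# the mask so touching objects separate into distinct blobs.
import cv2

depth = cv2.imread('kinect_depth.png', cv2.IMREAD_GRAYSCALE)  # placeholder file

chroma = cv2.applyColorMap(depth, cv2.COLORMAP_JET)   # luminance -> chroma
cv2.imwrite('chromadepth.png', chroma)

edges = cv2.Canny(depth, 30, 90)                      # "The Edge": outlines at depth steps
edges = cv2.dilate(edges, None)                       # thicken so the cut is visible

mask = cv2.threshold(depth, 20, 255, cv2.THRESH_BINARY)[1]
mask[edges > 0] = 0                                   # carve the outlines out of the mask

# Count the resulting blobs (what Eyes++ would see as separate objects).
n_labels, _ = cv2.connectedComponents(mask)
print(n_labels - 1, 'blobs')
```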
Vuo + Kinect + Izzy
Posted: October 29, 2015 Filed under: Alexandra Stilianos, Pressure Project 3
Screen caps and video of Kinect + Vuo + Izzy. Since we were working with the demo version of Vuo, I couldn’t use the video feed directly in Isadora to practice tracking with Eyes++ and the blob decoder, so a screen-capture video was taken and imported.
We isolated a particular area in space where the Kinect/Vuo could read the depth as a grayscale image to identify that shape.

With a patch connecting the depth-tracking video to Izzy and using Eyes++ and the Blob Decoder, I could get exact coordinates for the blob in space.
https://www.youtube.com/watch?v=E9Gs_QhZiJc&feature=youtu.be
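A minimal sketch, in OpenCV terms, of what that Eyes++ / Blob Decoder step is doing with the depth image; the depth band and file name are placeholder values.

```python
# Sketch: isolate a grayscale depth band, find the largest blob, report its coordinates.
import cv2

depth = cv2.imread('vuo_depth_capture.png', cv2.IMREAD_GRAYSCALE)  # placeholder file

# Keep only the depth band corresponding to the isolated area in space.
mask = cv2.inRange(depth, 60, 120)

# Largest connected blob -> centroid (the "exact coordinates" for the blob).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    print('blob at', cx, cy)
```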
Final Project Progress
Posted: October 27, 2015 Filed under: Jonathan Welch, Uncategorized
I don’t have a computer with camera inputs yet, so I have been working on the 3D environment and interlacing. Below is a screenshot of the operator interface and a video testing the system. It is only a test, so the interlacing is not to scale and is oriented laterally. The final project will be on a screen that is mounted in portrait. I hoped to do about 4 interlaced images, but the software is showing serious lag with 2, so it might not be possible.
Operator Interface
(used to calibrate the virtual environment with the physical)
The object controls are on the left (currently 2 views): Angle Difference (the relative rotation of object 1 vs. object 2), X Difference (how far apart the virtual cameras are), X/Y/Z rotation with a fine adjust, and X/Y/Z translation with a fine adjust.
The backdrop controls are on the right (currently 2 views; I am using mp4 files until I can get a computer with cameras): Angle Difference (the relative rotation of screen 1 vs. screen 2), X/Y/Z rotation with a fine adjust, and X/Y/Z translation with a fine adjust.
In the middle is the interlace control (the width of the lines and the distance between them; if I can get more than one perspective to work, I will change this to number of views and width of the lines).
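For reference, a minimal sketch of what that interlacing step amounts to: weaving the views into vertical stripes of a given width. This is a generic NumPy illustration, not the patch itself.

```python
# Sketch: interleave N views into vertical stripes, one stripe per view per lenticule.
import numpy as np

def interlace(views, stripe_px):
    """views: list of equally sized H x W x 3 arrays; stripe_px: stripe width in pixels."""
    h, w, _ = views[0].shape
    out = np.zeros_like(views[0])
    for col in range(w):
        which = (col // stripe_px) % len(views)   # cycle through the views column by column
        out[:, col] = views[which][:, col]
    return out

# Usage with two placeholder views (black and white):
a = np.zeros((60, 80, 3), np.uint8)
b = np.full((60, 80, 3), 255, np.uint8)
print(interlace([a, b], stripe_px=4).shape)
```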
Video of the Working Patch
0:00 – 0:01 changing the relative angle
0:01 – 0:04 changing the relative x position
0:04 – 0:09 changing the XYZ rotation
0:09 – 0:20 adjusting the width and distance between the interlaced lines
0:20 – 0:30 adjusting the scale and XYZ YPR of backdrop 1
0:30 – 0:50 adjusting the scale and XYZ YPR of backdrop 2
0:50 – 1:00 adjusting the scale and XYZ YPR of the model
I have a problem as the object gets closer to and farther from the camera… One of the windows is a 3D projector, and the other is a render on a 3D screen with a mask for the interlacing. I am not sure if replacing the 3D projector with another 3D screen with a render on it would add more lag, but I am already approaching the processing limits of the computer, and I have not added the tuba or the other views… I could always just add XYZ scale controls to the 3D models, but there is a difference between scale and zoom, so it might look weird.

The difference between zooming in and trucking (getting closer) is evident in the “Hitchcock Zoom.”
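A quick numeric illustration of that scale-vs.-zoom difference, assuming a simple pinhole model where projected size is proportional to size over distance:

```python
# Why scaling a model is not the same as moving the camera closer:
# near and far parts of the object change by different ratios when the camera
# trucks in (parallax), but by the same ratio when the model is simply scaled.
def projected(size, dist):
    return size / dist

near, far = 2.0, 4.0          # two parts of the object, at different depths

# Truck in by 1 unit: the near part grows more than the far part.
print(projected(1, near - 1) / projected(1, near))   # 2.0x
print(projected(1, far - 1) / projected(1, far))     # ~1.33x

# Scale the model by 2x instead: everything grows by the same factor (zoom-like).
print(projected(2, near) / projected(1, near))       # 2.0x
print(projected(2, far) / projected(1, far))         # 2.0x
```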
A Diagram with the Equations for Creating a Parallax Barrier
The First Tests
The resolution is very low because the only lens I could get to match with a monitor was 10 lines per inch, lowering the horizontal parallax to just 80 pixels across the 8″ x 10″ lens I had. The first test uses 9 pre-rendered images; the second uses only 3 because Isadora started having trouble. I might pre-render the goose and have the responses trigger a loop (like the old Dragon’s Lair game). The drawback is that the character would not be able to look at the person interacting, but with only 3 possible views, it might not be apparent that he wasn’t tracking you.
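The arithmetic behind that 80-pixel figure, for anyone checking the numbers (the 9-view total is the same arithmetic extended):

```python
# A 10 line-per-inch lens over an 8-inch-wide sheet gives 80 lenticules,
# i.e. roughly 80 horizontal samples per view, however many views are woven in.
lpi, width_in = 10, 8
lenticules = lpi * width_in
print(lenticules, 'columns per view')                              # 80

views = 9
print(lenticules * views, 'total interlaced columns on screen')    # 720
```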
Ideation of Prototype Summary- C1 + C2
Posted: October 26, 2015 Filed under: Alexandra Stilianos, Assignments
My final project unfolds in 2 cycles that culminate in one performative and experiential environment where a user can enter the space wearing tap dance shoes and manipulate video and lights via the sounds from the taps and their movement in space. Below is an outline of the two cycles in which I plan to actualize this project.
Cycle one: Friederweiss-esque video follow
Goal: Create a patch in Isadora that can follow a dancer in space with a projection. When the dancer is standing still, the animation is as well; as the performer accelerates through the space, the animation leaves a trail proportionate to the distance and speed traveled (a rough sketch of this behavior follows below).
Current state: I attempted to use Syphon into Quartz Composer to read skeleton data from the Kinect for use in Isadora and met some difficulties installing and understanding the software. I referenced Jamie Griffiths’ blog posts, linked below.
http://www.jamiegriffiths.com/kinect-into-isadora
http://www.jamiegriffiths.com/new-kinect-into-syphon/
Projected timeline: 3-4 more classes to understand and implement the software in Isadora and then create and utilize the patch.
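The sketch mentioned above: a minimal Python outline of the follow-with-trail behavior, assuming (x, y) positions arrive from the tracking patch each frame; the trail scaling is arbitrary.

```python
# Sketch: keep a trail whose length is proportional to how fast the dancer moves.
from collections import deque
import math

trail = deque()
prev = None

def update(pos, max_trail=60):
    """Call once per frame with the dancer's (x, y) position; returns points to draw."""
    global prev
    speed = math.dist(pos, prev) if prev else 0.0
    prev = pos
    trail.appendleft(pos)
    keep = int(min(max_trail, speed * 10))   # faster movement -> longer trail
    while len(trail) > max(1, keep):         # standing still -> trail collapses to a point
        trail.pop()
    return list(trail)
```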
Cycle two: Audible tap interactions
Goal: Create an environment that can listen to the frequency and/or amplitude of the various steps and sounds a tap shoe can create and have those interactions affect/control the lighting for the piece (a rough analysis sketch follows at the end of this post).
Current state: I have not attempted this yet since I am still in C1, but my classmate Anna Brown Massey did some work with sound in our last class that may be helpful to me when I reach this stage.
Current questions: How do I get the software to recognize frequency in the tap shoes, since the various steps create different sounds? Is this more reliable or more difficult than using volume alone? It is definitely more interesting to me.
Projected timeline: 3+ classes; I have not used this kind of software before, so a lot of experimentation is expected.
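The analysis sketch mentioned above: per-window amplitude (RMS) and dominant frequency from a recording of taps, which could then be mapped to lighting values. The file name and thresholds are placeholders, and this is only one possible approach.

```python
# Sketch: for each short window of tap audio, measure loudness and dominant frequency.
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read('tap_recording.wav')   # placeholder file
if audio.ndim > 1:
    audio = audio.mean(axis=1)                    # mix to mono
audio = audio.astype(np.float64)

window = int(rate * 0.05)                         # 50 ms analysis windows
for start in range(0, len(audio) - window, window):
    chunk = audio[start:start + window]
    rms = np.sqrt(np.mean(chunk ** 2))                             # loudness of the tap
    spectrum = np.abs(np.fft.rfft(chunk * np.hanning(window)))
    freq = np.fft.rfftfreq(window, 1.0 / rate)[spectrum.argmax()]  # dominant frequency
    if rms > 500:                                  # arbitrary "a tap happened" threshold
        print(f'tap: {freq:.0f} Hz, rms {rms:.0f}')  # -> map to light intensity/color
```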