PP3 Ideation, Prototype, and other bad poetry

GOALS:

  • Interactive mediated space in the Motion Lab in which the audience creates performance.
  • Interactivity is both with media and with fellow participants.
  • Delight through attention, a sense of an invitation to play, a platform that produces friendly play.
  • Documentation through film, recording of the live feed, saving of software patches.
  • Design requirements: scalable for an audience of unpredictable size, possibly entering at different times.

CONTENT:
Current ideas as of 10/25/15. Open to progressive discovery:

  • A responsive sound system affecting live projection.
  • Motion tracking and responsive projection as interaction
  • How do sound + live projection and motion tracking + live projection intersect?
  • Brainstorm of options: musical instruments in space; how do people affect each other; how can that be aggressive or friendly or something else; what can be unexpected for the director; could I find/use an “FAO Schwarz” floor piano; do people like seeing other people or just themselves; how to take in data for sound that is not only decibel level but also pitch, timbre, rhythm; what might Susan Chess, Alan Price, or Matt Lewis have to offer regarding sound.

RESOURCES

  • 6+ video cameras supplying live feeds for projection
  • 3 projectors
  • CV top-down camera
  • Multiple standing mics
  • Software: Isadora, possibly DMX, Max/MSP

VALUES i.e. experiences I would like my users (“audience” / “interactors” / “participants”) to have:

  • uncertainty –> play –> discovery –> more discovery
  • with constant engagement

Drawing from Forlizzi and Battarbee, this work will proceed with attention to intersecting levels of fluent, cognitive, and expressive experience. A theater audience is accustomed to a come-in-and-sit-down-in-the-dark-and-watch-the-thing experience, and subverting that plan will require attention to how to harness their fluent habits, e.g. audiences will sit in the chairs that are thisclose to the work booth, but if the chairs are this          far away then those must be allotted for the performance, which the audience doesn’t want to disrupt. Which begs the question: how does an entering audience proceed into a theater space with an absence of chairs? Where are mics (/playthings!) placed, and under what light and sound “direction” that tells them where to go and what to do? A few posts ago, in examining Forlizzi and Battarbee, I posed this question, and it applies again here:

What methods will empower the audience to form an active relationship with the present media and with fellow theater citizens?

LAB: DAY 1
As I worked in the Motion Lab on Friday 10/23, I discovered an unplanned audience: my fellow classmates. Though seemingly busy with their own patches and software challenges, once they looked over and determined that sound level was data I had told Isadora to read and feed into a live zoom of myself via the FaceTime camera on the Mac, they spent the better part of an hour “messing” with my data in order to affect my projection. (I had set the incoming decibel level to alter the “zoom” level on my live projection.) They got loud, got soft, laughed aggressively, and hunted for the lowest threshold at which they could still affect my zoom output.

SO, thanks to this unexpected prototype created by the presence of my co-working colleagues, the discovery that decibel level affects the live projection of a fellow user suggests that SOUND AFFECTING SOMEONE ELSE’S PROJECTION ENGAGES THE ATTENTION OF USERS ALSO ENGAGED IN OTHER TASKS. Okay, good. Moving forward …
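(For the curious, the mapping itself is dead simple. Here is a rough Python sketch of the kind of level-to-zoom scaling I had Isadora doing; the dB and zoom ranges below are placeholders, not the values in my actual patch.)

```python
def level_to_zoom(level_db, floor_db=-60.0, ceiling_db=0.0,
                  zoom_min=100.0, zoom_max=400.0):
    """Map an incoming audio level (dB) onto a zoom percentage.

    Quiet room -> no zoom (100%); loud input -> maximum zoom.
    The dB range and zoom range here are placeholders.
    """
    # Clamp the level so stray spikes don't push the zoom past its limits.
    level_db = max(floor_db, min(ceiling_db, level_db))
    # Linear scale from the dB range onto the zoom range.
    t = (level_db - floor_db) / (ceiling_db - floor_db)
    return zoom_min + t * (zoom_max - zoom_min)


if __name__ == "__main__":
    for db in (-60, -40, -20, -6, 0):
        print(db, "dB ->", round(level_to_zoom(db)), "% zoom")
```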

 


Guinea Pigs

Building an Isadora patch for this past project expanded my understanding of methods of enlisting CV (computer vision) to sense a light source (object) and create a projection responsive to the coordinates of that light source.

We (our sextet) selected the top-down camera in the Motion Lab as the visual data our “Video In Watcher” would accept. As I considered our light source, a robotic ball called the Sphero whose movement can be manipulated via a phone application, I was struck by our shift from enlisting a dancer to move through our designed grid to employing an object. This illuminated white ball served us not only because we were no longer dependent on a colleague being present just to walk around for us, but also because its emitted light and small size made our intake of data an easier project. We enlisted the “Difference” actor as a method of discerning light differences in space, which is a nifty way of distinguishing between “blobs.” Through this means, we could tell Isadora to recognize changes in light, a.k.a. changes in the location of the Sphero, which gave us positional data our patches could respond to.
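For anyone wondering what that “Difference”-style blob sensing looks like outside of Isadora, here is a rough Python/OpenCV sketch of the same idea: difference the incoming frames, threshold what changed, and take the centroid of the biggest bright region as the Sphero’s position. The camera index and threshold value are guesses, not what we actually used in the lab.

```python
import cv2

# Open the top-down camera; the device index is a placeholder.
cap = cv2.VideoCapture(0)

ok, prev = cap.read()
if not ok:
    raise RuntimeError("Could not read from the camera")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # "Difference" step: keep only what changed since the last frame
    # (the glowing Sphero rolling across the floor).
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

    # Treat the largest bright blob as the Sphero and take its centroid.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        blob = max(contours, key=cv2.contourArea)
        m = cv2.moments(blob)
        if m["m00"] > 0:
            x = m["m10"] / m["m00"]
            y = m["m01"] / m["m00"]
            print("Sphero at roughly", round(x), round(y))

    prev_gray = gray
```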

My colleague Alexandra Stillianos wrote a succinct explanation of this method: “Only when both the X and Y positions of the Sphero light source were toggled (switched) on, would the scene trigger, and my video would play. In other words, if you were in my row OR column, my video would NOT play. Only in my box (both row AND column) would the camera sense the light source and turn on, and when leaving the box and abandoning that criteria it would turn off. Each person in the class was responsible for a design/scene to activate in their respective space.”

The goal was to use CV (computer vision) to sense a light source (by identifying the Sphero’s X/Y position in the space) and trigger different interactive scenes in the performance space.
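In plain logic, Lexi’s row-AND-column rule reduces to something like this little sketch (the coordinate ranges are invented; each of us plugged in our own floor measurements):

```python
def in_my_box(x, y, x_range=(33.0, 66.0), y_range=(0.0, 50.0)):
    """True only when BOTH coordinates fall inside my cell.

    The ranges here are illustrative placeholders."""
    in_x = x_range[0] <= x <= x_range[1]
    in_y = y_range[0] <= y <= y_range[1]
    return in_x and in_y  # row alone or column alone is not enough


print(in_my_box(40, 80))  # right column, wrong row -> False
print(in_my_box(40, 25))  # inside the box -> True
```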

Considering this newly inanimate object as a source, I discovered the “Text Draw” actor and chose “You are alive” as the text to appear projected when the object moved into the x-y grid space indicated by our initial measurements. (Yes, I found this funny.) My “Listener” actor intercepted channels “1” and “2,” on which we had set up our CV scene to “Broadcast” the incoming light data, and to these I applied the “Inside Range” actor as a way of beginning to inform Isadora which data would trigger my words to appear. I did a quick YouTube search for “slow motion” and found a creepy guinea pig video that, because of its single shot and stationary subjects, looked like smooth fodder for looping. I layered two of these videos on a slight delay to give them a ghostly appearance, then added an overlay of red via the “Shapes” actor.

Isadora Patch PP3

As we imported our video/sound/image files to accompany our Isadora patches into the main frame, we discovered that our patches were being triggered but were failing to end once the object had departed our specific x-y coordinates, as demarcated for ourselves on the floor and, more importantly, as indicated by our “Inside Range” actors. We realized that our initial measurements needed to be refined more precisely. With that shift my own actor was working successfully, but we still had difficulty changing the coordinates on my colleague’s actor, as he had multiple user actors embedded in user actors that continued to run parts of his patch independently. We considered enlisting a “Shapes” actor measured to create a projection of all black, but the all-consuming limitation on time kept us from proceeding further. My own patch was limited by the absence of a “Comparator” as a means of refining the coordinates so that they might toggle on and off.
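The behavior we were missing amounts to a toggle with memory: fire an “on” trigger when the Sphero enters the range and an explicit “off” trigger when it leaves. A rough sketch of that logic outside of Isadora (the ranges are again invented):

```python
class RangeToggle:
    """Track whether the tracked point is inside a coordinate range and
    fire explicit on/off triggers on entry and exit -- the job we wanted
    a Comparator to do so the scene would stop once the ball moved away."""

    def __init__(self, x_range, y_range):
        self.x_range = x_range
        self.y_range = y_range
        self.active = False

    def update(self, x, y):
        inside = (self.x_range[0] <= x <= self.x_range[1]
                  and self.y_range[0] <= y <= self.y_range[1])
        if inside and not self.active:
            self.active = True
            return "scene on"
        if not inside and self.active:
            self.active = False
            return "scene off"
        return None


zone = RangeToggle((33, 66), (0, 50))
for x, y in [(40, 25), (45, 30), (80, 30)]:
    event = zone.update(x, y)
    if event:
        print(event)  # "scene on", then "scene off" when the ball exits
```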

 

 


Lexi PP3 Summary/reflection

I chose to write a blog post on this project on my main WordPress page for another dance-related class –> https://astilianos.wordpress.com/2015/10/13/isadora-patches-computer-vision-oh-my/

 

Follow the link in WordPress to a different WordPress post for WP inception.


Myo to OSC

Hey there everyone.

Here is a video of the missing steps for getting the Myo up and working:

 

And here are the links that you would need:

https://github.com/samyk/myo-osc for the Xcode project

https://www.myo.com/start/ for the Myo software

 

 

Remember that this is a very similar process to connecting the Kinect, or any number of other devices, to your computer and Isadora.
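If you want to sanity-check that the Myo data is actually arriving before pointing Isadora at it, a tiny OSC listener will print whatever shows up. This sketch uses the python-osc package; the port is whichever one you told myo-osc to send to (7777 here is just my placeholder), and the address patterns will be whatever the Xcode project broadcasts.

```python
from pythonosc import dispatcher, osc_server


def print_handler(address, *args):
    # Print every message so you can confirm Myo data is arriving
    # before pointing Isadora's OSC listener at the same port.
    print(address, args)


disp = dispatcher.Dispatcher()
disp.set_default_handler(print_handler)

# The port is an assumption -- match it to your myo-osc settings.
server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 7777), disp)
print("Listening for Myo OSC messages on port 7777 ...")
server.serve_forever()
```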

Please let me know if you have any questions, or if you would like to borrow the Myo and try to do this yourself.

Best!

-Alex


Jonathan PP3 Patch and Video

 

https://youtu.be/HjSSyEbz68Y

CLASS_PP3 CV Patch_151007_1.izz

 


PP3 Isadora Patch

Hi Guys!

Here’s our class patch so far for PP3!

CLASS_PP3 CV Patch_151007_1.izz

-Sarah


Coherence???

I was thinking we might want our scenes to be connected… I have a patch that turns on an outdoor night scene with crickets chirping and animals moving around in the woods… There is also a little parallax as the “performer” moves around in the space (the 33% x 50% piece of the stage before the performer triggers another patch/scene/whatever), but I could adapt it if we have a common vision…

https://youtu.be/HjSSyEbz68Y

The performer’s position is represented by the moving H (Horizontal) and V (Vertical) readout. The numbers would not be visible in the projection.
The trigger position is in the lower left corner.


The “Sextant” and a patch that I was using to activate my Scene/Actor when the performer was in area 1

IMG_2059

I am guessing the area’s range is 100 x 100, so I divided it into 33/33/34 across the width and 50/50 across the height, and wrote a user actor to define area 1. This could be used to activate a scene or turn projectors on/off.

Red quantifies the H axis, blue the V axis, purple the combination, and green the output value.
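For reference, the same 33/33/34 by 50/50 division can be written as a few comparisons; this sketch maps a 0-100 X/Y reading into one of the six areas (the 1-6 numbering, left to right with the near row first, is my own assumption, not necessarily how the class patch labels them):

```python
def sextant_area(x, y):
    """Map a 0-100 X/Y reading from the overhead camera into one of six
    areas: columns split 33/33/34, rows split 50/50."""
    if x < 33:
        col = 0
    elif x < 66:
        col = 1
    else:
        col = 2
    row = 0 if y < 50 else 1
    return row * 3 + col + 1


print(sextant_area(10, 20))  # area 1
print(sextant_area(90, 80))  # area 6
```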

SelectorPatch  PP3 Patch


Computer Vision Patch Post

PP3 Patch

 

Screenshots included within the zip!!!!


Selecting Scenes

https://vimeo.com/141210159

I put together an animatic for my first idea for the scene selection.

The icons are purely arbitrary; we would probably want something more representative of the concept behind the individual scenes.

I am totally flexible on the idea, and it is dependent on being able to get reliable X/Y positional data on the performer from the overhead camera…
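One cheap way to make that positional data feel reliable is to smooth it before using it to select a scene. Here is a sketch of simple exponential smoothing on the overhead camera’s X/Y readout (the alpha value is a guess to be tuned in the space):

```python
class SmoothedPosition:
    """Exponential smoothing for the overhead camera's X/Y readout.

    Raw blob positions jitter; smoothing them before deciding which
    scene is selected keeps the selection from flickering."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.x = None
        self.y = None

    def update(self, raw_x, raw_y):
        if self.x is None:
            self.x, self.y = float(raw_x), float(raw_y)
        else:
            self.x += self.alpha * (raw_x - self.x)
            self.y += self.alpha * (raw_y - self.y)
        return self.x, self.y


pos = SmoothedPosition()
for raw in [(50, 50), (52, 49), (90, 51), (51, 50)]:  # one noisy spike
    print(pos.update(*raw))
```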