Final Project Progress
Posted: October 27, 2015 Filed under: Jonathan Welch, Uncategorized
I don’t have a computer with camera inputs yet, so I have been working on the 3D environment and the interlacing. Below are a screenshot of the operator interface and a video testing the system. It is only a test, so the interlacing is not to scale and is oriented laterally; the final project will be on a screen mounted in portrait. I had hoped to do about 4 interlaced images, but the software is showing serious lag with 2, so that might not be possible.
Operator Interface
(used to calibrate the virtual environment with the physical)
The object controls are on the left (currently 2 views): Angle Difference (the relative rotation of object 1 vs. object 2), X Difference (how far apart the virtual cameras are), X/Y/Z rotation with a fine adjust, and X/Y/Z translation with a fine adjust.
The backdrop controls are on the right (currently 2 views; I am using mp4 files until I can get a computer with cameras): Angle Difference (the relative rotation of screen 1 vs. screen 2), X/Y/Z rotation with a fine adjust, and X/Y/Z translation with a fine adjust.
In the middle is the interlace control (the width of the lines and the distance between them; if I can get more than one perspective to work, I will change this to the number of views and the line width).
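The patch does the interlacing with a mask inside Isadora; purely as a point of reference, here is a minimal sketch in Python/NumPy (not Isadora, and with function names and values of my own choosing) of what a two-view line interlace driven by a line width and a gap amounts to:

```python
import numpy as np

def interlace_two_views(view_a, view_b, line_width, gap):
    """Column-interlace two equally sized images: `line_width` columns from
    view A, then `gap` columns from view B, repeating across the frame."""
    assert view_a.shape == view_b.shape
    w = view_a.shape[1]
    cols = np.arange(w)
    use_a = (cols % (line_width + gap)) < line_width   # stripe pattern
    return np.where(use_a[None, :, None], view_a, view_b)

# Example: two 1080x1920 test frames, 4-pixel lines with a 4-pixel gap.
black = np.zeros((1080, 1920, 3), dtype=np.uint8)
white = np.full((1080, 1920, 3), 255, dtype=np.uint8)
frame = interlace_two_views(black, white, line_width=4, gap=4)
```

For more views, the same modulo test just cycles through n sources instead of two, which is what the “number of views” control would become.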
Video of the Working Patch
0:00 – 0:01 changing the relative angle
0:01 – 0:04 changing the relative x position
0:04 – 0:09 changing the XYZ rotation
0:09 – 0:20 adjusting the width and distance between the interlaced lines
0:20 – 0:30 adjusting the scale and XYZ/YPR (yaw, pitch, roll) of backdrop 1
0:30 – 0:50 adjusting the scale and XYZ/YPR of backdrop 2
0:50 – 1:00 adjusting the scale and XYZ/YPR of the model
I have a problem as the object gets closer to and farther from the camera… One of the windows is a 3D projector, and the other is a render on a 3D screen with a mask for the interlacing. I am not sure whether replacing the 3D projector with another 3D screen with a render on it would add more lag, but I am already approaching the processing limits of the computer, and I have not added the tuba or the other views… I could always just add XYZ scale controls to the 3D models, but there is a difference between scale and zoom, so it might look weird.

The difference between zooming in and physically moving the camera closer (dollying in) is evident in the “Hitchcock zoom” (dolly zoom).
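For reference (standard camera geometry, not from the original post), here is that difference in equation form, with d the camera-to-subject distance and θ the horizontal field of view:

```latex
% Width of scene covered at the subject distance:
W = 2\,d\,\tan\!\left(\tfrac{\theta}{2}\right)
% A dolly (Hitchcock) zoom keeps the subject's coverage W fixed as d changes:
d_1 \tan\!\left(\tfrac{\theta_1}{2}\right) = d_2 \tan\!\left(\tfrac{\theta_2}{2}\right)
% Scaling a 3D model changes the object's size without changing d or theta,
% so the background perspective stays put -- hence scale and zoom read differently.
```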
A Diagram with the Equations for Creating a Parallax Barrier
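(The diagram itself is not reproduced here. For reference, the standard parallax-barrier design relations, which may differ in notation from the diagram, are given below for pixel pitch p, number of views n, eye separation e, and design viewing distance D measured from the barrier.)

```latex
% Gap g between the barrier and the pixel plane: adjacent pixels a distance p
% apart must land on eyes a distance e apart (similar triangles through a slit):
g = \frac{p\,D}{e}
% Barrier (slit) pitch b, slightly less than n pixel widths, so that each
% group of n view-pixels converges at the viewing distance:
b = \frac{n\,p\,D}{D + g}
```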
The First Tests
The resolution is very low because the only lens I could get to match with a monitor was 10 lines per inch, lowering the horizontal parallax to just 80 pixels across the 8″ × 10″ lens I had. The first test uses 9 pre-rendered images; the second uses only 3, because Isadora started having trouble. I might pre-render the goose and have the responses trigger a loop (like the old Dragon’s Lair game). The drawback is that the character would not be able to look at the person interacting, but with only 3 possible views it might not be apparent whether he was tracking you.
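A quick back-of-the-envelope check of that 80-pixel figure (assuming the lens sits with its 8-inch edge horizontal; the numbers are from the post, the variable names are mine):

```python
lens_width_in = 8      # horizontal extent of the 8" x 10" lens sheet
pitch_lpi = 10         # lenticules ("lines") per inch
views = 9              # pre-rendered images in the first test

lenticules = lens_width_in * pitch_lpi   # 80 lenticules across the lens,
print(lenticules)                        # i.e. only 80 horizontal samples per view

# Each lenticule has to hide one column from every view, so the monitor must
# supply views * lenticules addressable columns behind those 8 inches:
print(views * lenticules)                # 720 columns for the 9-view test
```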
Myo to OSC
Posted: October 12, 2015 Filed under: Uncategorized
Hey there everyone.
Here is a video of those missing steps for getting the Myo up and working:
And here are the links that you would need:
https://github.com/samyk/myo-osc for the Xcode project
https://www.myo.com/start/ for the Myo software
Remember that this is a very similar process to getting the Kinect, or any number of other devices, connected to your computer and Isadora.
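If it helps while debugging, here is a minimal sketch (mine, not part of the myo-osc project) of a Python OSC listener that simply prints whatever the Myo bridge sends, so you can confirm data is arriving before wiring it into Isadora. The port (7777) is an assumption; use whichever host/port you launch myo-osc with, and check its README for the actual /myo/… addresses.

```python
# requires: pip install python-osc
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def print_message(address, *args):
    # Print every OSC message so you can see the addresses the Myo bridge uses.
    print(address, args)

dispatcher = Dispatcher()
dispatcher.set_default_handler(print_message)   # catch all addresses

# 7777 is an assumed port -- match it to whatever you pass to myo-osc.
server = BlockingOSCUDPServer(("0.0.0.0", 7777), dispatcher)
server.serve_forever()
```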
Please let me know if you have any questions, or if you would like to borrow the Myo and try to do this yourself.
Best!
-Alex
Jonathan PP3 Patch and Video
Posted: October 12, 2015 Filed under: Pressure Project 3, Uncategorized | Tags: j, Jonathan Welch
https://youtu.be/HjSSyEbz68Y
CLASS_PP3 CV Patch_151007_1.izz
Coherence???
Posted: October 6, 2015 Filed under: Uncategorized
I was thinking we might want our scenes to be connected… I have a patch that turns on an outside night scene with crickets chirping and animals moving around in the woods… There is also a little parallax as the “performer” moves around in the space (the 33% × 50% piece of the stage before the performer triggers another patch/scene/whatever), but I could adapt it if we have a common vision…
The performer’s position is represented by the moving H (Horizontal) and V (Vertical) readout. The numbers would not be visible in the projection.
The trigger position is in the lower left corner.
The “Sextant” and a patch that I was using to activate my Scene/Actor when the performer was in area 1
Posted: October 6, 2015 Filed under: Uncategorized
I am guessing the area’s range is 100 × 100 in both directions, so I divided it into 33/33/34 across W and 50/50 across H, and wrote a user actor to define area 1. This could be used to activate a scene or turn projectors on/off.
The red quantifies the H axis, the blue the V axis, purple is the two combined, and green is the output with a value.
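Not the user actor itself, but here is the same test as a small Python sketch. The 0–100 range is the guess above, and numbering the six cells 1–3 across the first V band and 4–6 across the second is my own assumption:

```python
def in_area_1(h, v):
    """True only in the first cell: both readouts below their first split."""
    return h < 33 and v < 50

def sextant(h, v):
    """Return which of the six cells (1-6) the performer is in."""
    col = 0 if h < 33 else (1 if h < 66 else 2)   # 33/33/34 split across H
    row = 0 if v < 50 else 1                      # 50/50 split across V
    return row * 3 + col + 1

print(sextant(10, 20), in_area_1(10, 20))   # 1 True
print(sextant(70, 80), in_area_1(70, 80))   # 6 False
```

A user actor like this is easy to duplicate for the other five areas by changing the comparison bounds.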
Class notes on 9/30 regarding PP3
Posted: September 30, 2015 Filed under: Alexandra Stilianos, Uncategorized
PP3 GOAL: Create an interactive ‘dance’ piece (as a group); each person should create a moment of reaction from the ‘dancer’
Group goals/questions:
- Focus on art!
- What will we do?
- How will we do it?
- What is my job?
- Establish parameters and workflow?
- Understand the big picture?
- How is the sensing being done?
- What is the system design?
Resources:
- 4 projectors
- 3 projectors on floor
- 2 HDMI Cameras
- Top-down camera with infrared
- Kinect
- Light/sound system
- Isadora
- Max/MSP
- Myo (bracelets)
- MoLa
- Various sensors (?)
Group 1: Sarah, Josh, Connor, “Computer Vision Patch”
- Turn on MoLa, set up the video input, and write a simple computer vision (CV) patch that gives XY coordinates
- Program the lighting system, provide outputs, and know what channels they’re on
Channels (1–10)
- x
- y
- Velocity
- Height
- Width
All ×2 for the 2 cameras (top-down and front); the 2nd camera uses the same identifiers but on channels 6–10.
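As a sketch of that layout (the function and field names here are mine, not an agreed spec), the mapping works out to:

```python
def channel(camera: int, field: str) -> int:
    """Channel number (1-10) for a camera's CV output.

    camera 1 = top-down, camera 2 = front; each camera gets five channels
    in the order x, y, velocity, height, width.
    """
    fields = ["x", "y", "velocity", "height", "width"]
    return (camera - 1) * len(fields) + fields.index(field) + 1

assert channel(1, "x") == 1        # camera 1 starts at channel 1
assert channel(2, "x") == 6        # camera 2: same identifiers, channels 6-10
assert channel(2, "width") == 10
```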
Group 2: John, Lexi, Anna, “Projector System”
- Listen to channels 1–10 from the 2 cameras, take the data, and create a patch and a place for it (6 quadrants were discussed, with each person in charge of one)
Options:
- Create scenes and implement triggers
- Each create our own user actor in the same scene
(This is by no means an exhaustive or exclusive list, just my general notes from today!)
Pictures from the Class MoCap Patch from Monday 09/28
Posted: September 30, 2015 Filed under: Uncategorized
On this screen
- Video In Watcher – Camera input
- Difference – areas that change (e.g., moving around on the screen) are brighter
- Eyes++ – finds “blobs” (e.g., an area where the actor/user is on the screen)
***
On this screen
- Start Live Capture Settings (don’t forget to click Show Preview, or you will not see anything)
***
On this screen
- Eyes vs. Eyes++
- Eyes has no “blob” detector, but it will still give you tracking data (e.g., center, height, width, velocity)
***
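For anyone who wants to see the same pipeline outside Isadora, here is a rough Python/OpenCV analogue of those screens (not the Isadora actors themselves; the threshold and minimum blob area are arbitrary values of mine):

```python
# Camera in -> frame difference (moving areas are brighter) -> blob detection
# with center and size, roughly mirroring Video In Watcher / Difference / Eyes++.
import cv2  # OpenCV 4.x

cap = cv2.VideoCapture(0)                      # default camera
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    diff = cv2.absdiff(gray, prev)             # "Difference"
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # "Eyes++": find blobs and report center / width / height
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:           # ignore tiny blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        print("blob center:", (x + w // 2, y + h // 2), "size:", (w, h))

    cv2.imshow("difference", mask)             # the "show preview" step
    prev = gray
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```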
Isadora Live Camera Settings
Posted: September 25, 2015 Filed under: Isadora, Sarah Lawler, Uncategorized
Here’s the screenshot of the live camera settings for you all!
Computer Vision Example from Class
Posted: September 25, 2015 Filed under: Uncategorized
Here is the CV example from class today.
This file has been compressed, so make sure to unzip it before trying to open it in Isadora.
-Alex
Duck… Duck… Duck. Duck…… Duck Duck……. Duck… Goose!!!
Posted: September 23, 2015 Filed under: Uncategorized
Let’s see…
Delightful: Check
Visually Pleasing: Oh yea!
Self-generating patch of shapes, lines, and color: Check, check, and check (with a goose for good measure)
Level 1: System is fully automatic and only requires being ‘turned on’
Fully automated??? It’s practically self-aware!
Level 2: System produces multiple visual ‘looks’ or ‘feels’
You got your duck look; you got your goose look.
Level 3: Any underlying pattern in the system’s movement and visual state is complex enough that a human takes more than a few seconds to ‘understand the pattern.’
Everything is randomized, even the starting and stopping of the random sequences (see the sketch after these levels).
Bonus Level: System produces unexpected results over time.
No one expects a goose…
Bonus Level: Maintains a watcher’s attention for more than 20-30 seconds. (Much harder than one might guess. How do we do this in the theatre and dance? [Can your system tell a story?])
A complex visual tale of binary, conflicting pairs. Up and down, or left and right; never together.
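The patch itself is Isadora, but as a toy sketch of “randomized sequences whose starting and stopping are also randomized” (everything here, names and probabilities included, is made up for illustration):

```python
import random

def run_sequence(name, steps=30, toggle_chance=0.2):
    """Emit random shape positions, but also start and stop at random."""
    running = False
    for t in range(steps):
        if random.random() < toggle_chance:   # randomized start/stop
            running = not running
            print(f"{t:02d} {name}: {'start' if running else 'stop'}")
        if running:                           # randomized output while running
            x, y = random.random(), random.random()
            print(f"{t:02d} {name}: shape at ({x:.2f}, {y:.2f})")

run_sequence("duck")
run_sequence("goose")
```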
This is a compressed copy with no audio.
I will elaborate on all this after I write the paper that is due tomorrow morning.