Multi-Viewer 3D Displays (why the 3DS is a handheld device)

I have been trying to make a glasses-free, multi-viewer 3D monitor (like the Nintendo 3DS, only 23 inches and multiple viewers), and it is much trickier than it looks.
The parallax barrier cuts the brightness in proportion to the number of views (a 50% light cut for one pair of eyes, 75% for two pairs), so with 2 viewers the screen is dim. But there is more: the horizontal resolution is not just cut to a quarter; each pixel stays as narrow as before, but now there is a huge gap between the pixels each eye actually sees (some test subjects could not even tell what they were looking at)…
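For what it’s worth, the brightness hit is easy to quantify: a parallax barrier only lets each viewing zone see through 1/N of the barrier, so N views pass 1/N of the light. A quick sketch of the arithmetic (plain Python, nothing fancy):

```python
# Light blocked by a parallax barrier: each of N views only receives 1/N of the light.
for views in (2, 4, 10):
    blocked = 1 - 1 / views
    print(f"{views} views -> {blocked:.0%} of the light blocked")
# 2 views (one pair of eyes) -> 50%, 4 views (two pairs) -> 75%, 10 views -> 90%
```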
Lenticular lenses cost about $12 a foot, but none of the ones I got in the sampler pack line up with the pixels on any of the monitors, so I have to sacrifice even more horizontal resolution…
I gave up and started doing head tracking, but isolating individual eyes on that scale is just about impossible, so I wind up wasting 4X the resolution just to have enough margin for a single viewer…
And it still doesn’t isolate the eyes properly!!!
Now I have a 3D display that tracks your head (as long as you do not move faster than the head tracking can follow and stay 2 to 3.5 feet from the monitor), but it will not respond to more than one viewer…
So I decided that when the head tracking detects more than one person, or you move around too fast, the character (Tuba-Goose) will get irritated and leave…
If I can get it to do that, I would consider it a hell of an achievement. And even watching this quasi-3D interface figure out where you are and adjust the perspective to compensate (with a bit of a lag sometimes) is a little unsettling… Kind of like being eyeballed by a goose…

I can work with this, but I might still have to buy the 3D monitor, or at least the lens that is made for the 23 inch display…



Cycle 2 Demo

The demo in class on Wednesday showed the interface responding to 4 scenarios:

  1. No audience presence (displayed “away” on the screen)
  2. Single user detected (the goose went through a rough “greet” animation)
  3. Too much violent movement (displayed “scared goose” on the screen)
  4. More than a couple of audience members (displayed “too many humans” on the screen)

The interaction was made in a few days, and honestly, I am surprised it was as accurate and reliable as it was…

The user presence was just a blob output. I used a “Brightness Calculator” with the “Difference” actors to judge the violent movement (the blob velocity was unreliable with my equipment). Detecting “too many humans” was just another “Brightness Calculator”. I tried more complicated actors and patches, but these were the ones that worked in the setting.
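For anyone curious what that chain amounts to, here is a rough Python/OpenCV analogue (my sketch, not the actual Isadora patch; the thresholds and the assumption that more people means a brighter image are placeholders you would tune for the room):

```python
# Rough analogue of the "Difference" -> "Brightness Calculator" chain:
# frame-difference the camera feed and threshold the mean brightness.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    motion = cv2.absdiff(gray, prev).mean()   # ~ Difference + Brightness Calculator
    presence = gray.mean()                    # ~ Brightness Calculator on the feed
    prev = gray

    if motion > 25:             # placeholder threshold
        print("scared goose")       # too much violent movement
    elif presence > 120:        # placeholder threshold
        print("too many humans")
    elif presence > 60:         # placeholder threshold
        print("greet")              # single user detected
    else:
        print("away")               # no audience presence
```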

Most of my time has gone into solving an interlacing problem. I had hoped I could build something with the lenses I have, order a custom lens (they are only $12 a foot plus the price to cut), or create a parallax barrier. Unfortunately, creating a high-quality lens does not seem possible with the materials I have (two of the 8″ × 10″ sample packs from Microlens), and a parallax barrier blocks more light as views are added, since each view only receives 1/N of the light (2 views blocks 50%, 3 blocks 67%… 10 views blocks 90%). On Sunday I am going to try a patch that blends interlaced pixels to fix the problem of the lines on the screen not lining up with the lenses (it basically blends adjacent interlaces so a non-integral number of pixels can match the lines per inch of the lens).
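Here is the idea behind the blend, sketched in Python/NumPy rather than Isadora (the linear blend between the two nearest views is my guess at the simplest thing that could work, not the actual patch):

```python
# Interlace several views for a lenticular lens whose pitch does not land on a
# whole number of pixels, blending the two nearest views at each column so the
# misalignment shows up as soft cross-talk instead of hard banding.
import numpy as np

def interlace_blended(views, lens_lpi, monitor_ppi):
    """views: list of HxWx3 arrays, one per viewing zone.
    lens_lpi: lenticules (lines) per inch; monitor_ppi: pixels per inch."""
    n_views = len(views)
    h, w, _ = views[0].shape
    px_per_lenticule = monitor_ppi / lens_lpi        # usually non-integral
    px_per_strip = px_per_lenticule / n_views        # width of one view's strip
    out = np.zeros((h, w, 3), dtype=np.float64)
    for x in range(w):
        v = (x / px_per_strip) % n_views             # fractional view index here
        lo = int(v) % n_views
        hi = (lo + 1) % n_views
        frac = v - int(v)
        out[:, x] = (1 - frac) * views[lo][:, x] + frac * views[hi][:, x]
    return out.astype(views[0].dtype)
```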

Worst-case scenario… A ready-to-go lenticular monitor is $500, the lens designed to work with a 23″ monitor is $200, and a 23 inch monitor with a pixel pitch of .270 mm is about $130… One way or another, this goose is going to meet the public on 12/07/15…

Links I have found useful are…

Calculate the DPI of a monitor to make a parallax barrier (the math behind this is sketched after these links).

https://www.sven.de/dpi/

Specs of one of the common ACCAD 24″ monitors

http://www.pcworld.com/product/1147344/zr2440w-24-inch-led-lcd-monitor.html

MIT student who made a 24″ lenticular 3D monitor.

http://alumni.media.mit.edu/~mhirsch/byo3d/tutorial/lenticular.html
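(The DPI calculation itself is just resolution over diagonal; a couple of lines if you would rather not use the site. The 24″ 1920 × 1200 numbers below are only an example.)

```python
# DPI and pixel pitch from native resolution and diagonal size.
import math

def dpi_and_pitch(res_w, res_h, diagonal_in):
    dpi = math.hypot(res_w, res_h) / diagonal_in
    pitch_mm = 25.4 / dpi              # pixel pitch in millimetres
    return dpi, pitch_mm

print(dpi_and_pitch(1920, 1200, 24.1))   # roughly 94 DPI, ~0.270 mm pitch
```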



Josh Final Presentation 1 Update

This is my patch, which receives the Kinect data through Syphon into Isadora. There it takes the Kinect depth data and, using a luminance key and a gaussian blur, creates a solid, smoother image. From there, that smooth image of the person standing in the space is fed into an alpha mask and combined with a video feed, which projects a video within the outline of the body.
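For reference, the same chain in a few lines of Python/OpenCV (my approximation of the idea, not Josh’s Isadora/Vuo setup; the file names and the depth band used for the key are made up, and both images are assumed to be the same size):

```python
# Depth image -> luminance key -> gaussian blur -> alpha mask -> video inside the silhouette.
import cv2
import numpy as np

depth = cv2.imread("kinect_depth.png", cv2.IMREAD_GRAYSCALE)  # depth rendered as 8-bit luminance
fill = cv2.imread("video_frame.png")                          # one frame of the fill video

# "luminance key": keep only the depth values in the band where the body stands
mask = cv2.inRange(depth, 80, 180).astype(np.float32) / 255.0

# gaussian blur to smooth the ragged Kinect edges
mask = cv2.GaussianBlur(mask, (21, 21), 0)

# alpha mask: video inside the outline of the body, black outside
out = (fill.astype(np.float32) * mask[..., None]).astype(np.uint8)
cv2.imwrite("composited.png", out)
```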


This is the Vuo patch that feeds the Kinect data through Syphon to Isadora.



Isadora Updates

http://troikatronix.com/isadora-2-1-release-notes/
Mark recently updated Isadora. Check out the release notes.

It changes the way that videos are assigned to the stage.


Final Project Progress

I don’t have a computer with camera inputs yet, so I have been working on the 3D environment and interlacing. Below is a screenshot of the operator interface and a video testing the system. It is only a test, so the interlacing is not to scale and is oriented laterally; the final project will be on a screen mounted in portrait. I had hoped to do about 4 interlaced views, but the software is showing serious lag with just 2, so that might not be possible.

Operator Interface

(used to calibrate the virtual environment with the physical space)

The object controls are on the left (currently 2 views): Angle Difference (the relative rotation of object 1 vs. object 2), X Difference (how far apart the virtual cameras are), X/Y/Z rotation with a fine adjust, and X/Y/Z translation with a fine adjust.

The backdrop controls are on the right (currently 2 views; I am using mp4 files until I can get a computer with cameras): Angle Difference (the relative rotation of screen 1 vs. screen 2), X/Y/Z rotation with a fine adjust, and X/Y/Z translation with a fine adjust.

In the middle are the interlace controls (width of the lines and distance between them; if I can get more than one perspective to work, I will change this to number of views and width of the lines).

Video of the Working Patch

0:00 – 0:01 changing the relative angle

0:01 – 0:04 changing the relative x position

0:04 – 0:09 changing the XYZ rotation

0:09 – 0:20 adjusting the width and distance between the interlaced lines

0:20 – 0:30 adjusting the scale and XYZ YPR of backdrop 1

0:30 – 0:50 adjusting the scale and XYZ YPR of backdrop 2

0:50 – 1:00 adjusting the scale and XYZ YPR of the model

I have a problem as the object gets closer to and farther from the camera… One of the windows is a 3D projector, and the other is a render on a 3D screen with a mask for the interlacing. I am not sure if replacing the 3D projector with another 3D screen with a render on it would add more lag, but I am already approaching the processing limits of the computer, and I have not added the tuba or the other views… I could always just add XYZ scale controls to the 3D models, but there is a difference between scale and zoom, so it might look weird.

Hitchcock Zoom

The difference between zooming in and trucking (physically getting closer) is evident in the “Hitchcock Zoom,” also known as a dolly zoom.
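The relationship is simple to state: to keep the subject the same apparent size while the camera trucks, the field of view has to change so that 2 · distance · tan(fov/2) stays constant. A small sketch (the numbers are just an example):

```python
# Dolly-zoom: solve for the field of view that keeps the framed width constant
# as the camera distance changes.
import math

def dolly_zoom_fov(initial_fov_deg, initial_dist, new_dist):
    """Return the FOV (degrees) that keeps the framed width constant at new_dist."""
    width = 2 * initial_dist * math.tan(math.radians(initial_fov_deg) / 2)
    return math.degrees(2 * math.atan(width / (2 * new_dist)))

# backing off from 3 ft to 6 ft, a 60 degree lens has to tighten to about 32 degrees
print(dolly_zoom_fov(60, 3, 6))
```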

A Diagram with the Equations for Creating a Parallax Barrier

[Image: parallax barrier diagram]

Image from Baydocks http://displayblocks.org/diycompressivedisplays/parallax-barrier-display/
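For a simple two-view barrier, the standard equations boil down to two numbers: the gap between the pixels and the barrier, and the slit pitch. This is the textbook version, not a transcription of the diagram above, and the viewing distance and eye separation below are just example values:

```python
# Two-view parallax barrier geometry.
# p = pixel pitch, e = eye separation, D = viewing distance (all in mm):
#   gap between pixels and barrier:  g = p * D / (e + p)
#   slit pitch of the barrier:       b = 2 * p * e / (e + p)
def barrier_geometry(pixel_pitch_mm, eye_sep_mm=65.0, view_dist_mm=750.0):
    p, e, D = pixel_pitch_mm, eye_sep_mm, view_dist_mm
    gap = p * D / (e + p)
    slit_pitch = 2 * p * e / (e + p)
    return gap, slit_pitch

# for a 0.270 mm pitch monitor viewed from about 30 inches:
print(barrier_geometry(0.270))   # gap ~3.1 mm, slit pitch ~0.538 mm (just under 2 pixels)
```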

The First Tests

The resolution is very low because the only lens I could get to match a monitor was 10 lines per inch, which drops the horizontal parallax to just 80 pixels across the 8″ × 10″ lens I had (10 lenticules per inch × 8 inches = 80 columns, one pixel per view under each). The first test uses 9 pre-rendered images; the second uses only 3 because Isadora started having trouble. I might pre-render the goose and have the responses trigger a loop (like the old Dragon’s Lair game). The drawback is that the character would not be able to look at the person interacting, but with only 3 possible views, it might not be apparent he was tracking you.


Myo to OSC

Hey there everyone.

Here is a video of those missing steps for getting the Myo up and working:


And here are the links that you would need:

https://github.com/samyk/myo-osc for the Xcode project

https://www.myo.com/start/ for the Myo software


Remember that this is a very similar process to getting the Kinect, or any number of other devices, connected to your computer and Isadora.
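If you want to sanity-check that myo-osc is actually sending anything before pointing Isadora’s OSC listener at it, a few lines of python-osc will dump the messages. The port (7777) and the /myo/* address pattern are assumptions on my part; check the myo-osc README for what it really sends:

```python
# Dump incoming OSC messages from myo-osc (port and address pattern are guesses).
from pythonosc import dispatcher, osc_server

def on_myo(address, *args):
    print(address, args)               # orientation / accel / pose messages, etc.

d = dispatcher.Dispatcher()
d.map("/myo/*", on_myo)                # catch everything under /myo

server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 7777), d)
server.serve_forever()
```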

Please let me know if you have any questions, or if you would like to borrow the Myo and try to do this yourself.

Best!

-Alex


Jonathan PP3 Patch and Video


https://youtu.be/HjSSyEbz68Y

CLASS_PP3 CV Patch_151007_1.izz



Coherence???

I was thinking we might want our scenes to be connected… I have a patch that turns on an outdoor night scene with crickets chirping and animals moving around in the woods… There is also a little parallax as the “performer” moves around in the space (the 33% × 50% piece of the stage before the performer triggers another patch/scene/whatever), but I could adapt it if we have a common vision…

https://youtu.be/HjSSyEbz68Y

The performer’s position is represented by the moving H (Horizontal) and V (Vertical) readout. The numbers would not be visible in the projection.
The trigger position is in the lower left corner.


The “Sextant” and a patch that I was using to activate my Scene/Actor when the performer was in area 1

[Image: IMG_2059, the “Sextant” patch]

I am guessing the tracked area’s range is 0–100 in both directions, so I divided it into 33/33/34 across the width and 50/50 across the height, and wrote a user actor to define area 1. This could be used to activate a scene or turn projectors on/off.

The red quantifies the H axis, blue the V, purple is the combined, and green is the output with a value.
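In plain code, the user actor boils down to a couple of threshold checks; this is a Python sketch of the same 33/33/34 × 50/50 split (the numbering of the six areas is my assumption, not something defined in the patch):

```python
# "Sextant": map a 0-100 x 0-100 blob position to one of six stage areas.
def which_area(h, v):
    """h, v: blob position, assumed to run 0-100 in both directions."""
    col = 0 if h < 33 else (1 if h < 66 else 2)   # 33 / 33 / 34 split across the width
    row = 0 if v < 50 else 1                       # 50 / 50 split across the height
    return row * 3 + col + 1                       # areas numbered 1-6, top row first

def area_1_trigger(h, v):
    """True only in area 1 (the piece of stage that activates the scene)."""
    return which_area(h, v) == 1

print(area_1_trigger(10, 20))   # True
print(area_1_trigger(70, 80))   # False
```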

SelectorPatch  PP3 Patch


Class notes on 9/30 regarding PP3

PP3 GOAL: Create an interactive ‘dance’ piece (as a group); each person should create a moment of reaction from the ‘dancer’.

Group goals/questions:

  • Focus on art!
  • What will we do?
  • How will we do it?
  • What is my job?
  • Establish parameters and workflow?
  • Understand the big picture?
  • How is the sensing being done?
  • What is the system design?

Resources:

  • 4 projectors
  • 3 projectors on floor
  • 2 HDMI Cameras
  • Top-down camera with infrared
  • Kinect
  • Light/sound system
  • Isadora
  • Max/MSP
  • Myo (bracelets)
  • MoLa
  • Various sensors (?)

Group 1: Sarah, Josh, Connor, “Computer Vision Patch”

  • Turn on MoLa, video input, write simple computer vision (CV) patch that gives XY coordinates
  • Program lighting system, provide outputs and know what channels they’re on

Channels (1- 10)

  1. x
  2. y
  3. Velocity
  4. Height
  5. Width

All ×2 for the 2 cameras (top-down and front); the 2nd camera would use the same identifiers but on channels 6–10.


Group 2: John, Lexi, Anna, “Projector System”

  • Listen to channels 1–10 of the 2 cameras, take the data, create a patch and a place for them (6 areas were discussed, each person in charge of one)

Options:

  • Create scenes and implement triggers
  • Each create our own user actor in the same scene


(This is by no means an exhaustive or exclusive list, just my general notes from today!)