Isadora Video Tracking demo
Posted: September 8, 2016 Filed under: Uncategorized
A very basic layout: two blobs are tracked, and their locations are mapped to the positions of two squares.
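The patch itself is a handful of Isadora actors; as a rough stand-in outside Isadora, here is the same idea in OpenCV (the camera index, brightness threshold, and square size are placeholders, not values from the patch):

```python
import cv2

# Track the two largest bright blobs in the camera feed and draw a square
# at each blob's centroid, an OpenCV stand-in for the Isadora patch.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:2]:
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w // 2, y + h // 2
        cv2.rectangle(frame, (cx - 20, cy - 20), (cx + 20, cy + 20), (0, 255, 0), 2)
    cv2.imshow("squares", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```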
Final Documentation Jonathan Welch
Posted: December 20, 2015 Filed under: Jonathan Welch, Uncategorized
OK, I finally got it working.
The Master Patch
Eyes++ tracked the viewer and sent data to the “Emotion Matrix”, which sent the character’s response to the “Player Controller”, which triggered the Players. The background interlacing was done in the “Background”.
This is the “Response Player”
There were 9 responses (6 different animations, 2 actions that generated the same response, a still frame, and a blank “away” response). The players were very similar, but the “Leave” and “Greet” players had a broadcaster that toggled the character’s Here/Away state, so noises made when no one was around wouldn’t trigger the honk response.
The responses were:
1. Leave (triggered if no one was there, or you pissed him off)
Honks, walks off camera, and the subtitles read “Whatever Mammal”
2. Greet (triggered when someone arrived and the “Blob Counter” read 1)
Walks up to the camera
3. Too Close
Honks and the subtitles read “You are freaking me out human”
4. Too Loud
Honks and the subtitles read “Are humans always this loud?”
5. Too much motion
Honks and the subtitles read “You are freaking me out human”
6. Too many Humans
Honks and the subtitles read “You are freaking me out human”
7. Honk
Honks and the subtitles read “Hey”
8. Blessing
Honks and the subtitles read “May your down always be greasy and your pond never go dry”
9. Pause or Away (a still frame or a blank frame, depending on whether the state was “Here” or “Away”)
“Playback Controller”
The response is generated in the Emotion Matrix; the Playback Controller keeps the responses from triggering at the same time.
The “Emotion Matrix”
Too much fast motion, getting too close, being too loud, or having too many people around would generate a response and add negative points to the goose’s attitude. If the score got too high, he would leave. If you did not have any negative points and you said something at normal talking volume, the goose would give you a blessing. If you had been loud or had done something to irritate him, he would just say “Hey”. The irritants would decay over time, but if you racked up too many in too short a time, the goose would walk away.
The “Goodness Elevator”
This took input from the negative response counter and routed the “Honk” response to a blessing if the count was 0 on all negative emotions.
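The patch internals aren’t shown here, so purely as a sketch: the scoring logic described above in Python, with every name, point value, decay rate, and threshold invented for illustration:

```python
import time

IRRITANTS = ("too_close", "too_loud", "too_much_motion", "too_many_humans")

class EmotionMatrix:
    """Toy version of the logic above: irritants add negative points that
    decay over time; too many points and the goose leaves."""

    def __init__(self, leave_threshold=5.0, decay_per_sec=0.2):
        self.scores = {k: 0.0 for k in IRRITANTS}
        self.leave_threshold = leave_threshold
        self.decay_per_sec = decay_per_sec
        self.last = time.time()

    def _decay(self):
        now = time.time()
        dt, self.last = now - self.last, now
        for k in self.scores:
            self.scores[k] = max(0.0, self.scores[k] - self.decay_per_sec * dt)

    def irritate(self, kind):
        """Called by the detectors (too close, too loud, and so on)."""
        self._decay()
        self.scores[kind] += 1.0
        if sum(self.scores.values()) >= self.leave_threshold:
            return "leave"       # you pissed him off
        return kind              # honk plus the matching subtitle

    def honk_or_bless(self):
        """The Goodness Elevator: normal-volume speech earns a blessing
        only if every irritant count is zero; otherwise just "Hey"."""
        self._decay()
        if all(v == 0.0 for v in self.scores.values()):
            return "blessing"
        return "honk"
```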
What went wrong…
The background interlacing reduced the refresh rate to under 1 FPS at times. At such a slow rate, several of the triggers that start the next response arrived simultaneously. There were redundancies to keep them from all happening at once, and to keep a new response from starting when one had already been triggered, but at 1 FPS they all fired together.
A broadcaster sent the player position, which was used to ensure the players did not respond at the same time and to trigger the next response when the last one was over. A “Comparator” and a “Router” kept signals from starting a response while a player was playing, but when too many signals arrived at once and the refresh rate was too low, there was no way to keep one response from starting while another was playing.
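The gating problem itself is generic. Here is a minimal sketch of the lock the Comparator/Router pair approximates (the structure and names are mine, not the patch’s):

```python
class PlaybackGate:
    """Ignore triggers while a response is playing, queue at most one
    follow-up, and release it when the player reports it is done."""

    def __init__(self):
        self.playing = None   # response currently playing, if any
        self.pending = None   # at most one queued response

    def trigger(self, response):
        if self.playing is None:
            self.playing = response    # start immediately
            return response
        self.pending = response        # latest trigger wins, others drop
        return None

    def player_done(self):
        self.playing, self.pending = self.pending, None
        return self.playing            # next response to start, if any
```

Of course, a lock like this only works if it is checked more often than triggers arrive, which is exactly what the sub-1 FPS refresh rate broke.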
I finally fixed it by eliminating the live feed and the moving background. I tried just replacing the live feed with a photo, but interlacing it and moving the background image was more than the computer could handle.
“Trailhead” Final project update
Posted: December 15, 2015 Filed under: Uncategorized
Reflection
I am very pleased with how my final iteration of this project went. Not all of the triggers worked perfectly (the same scene was always finicky and could have used some final tweaking), but the interactivity I was looking for occurred.
Please refer to my video documentation of the participants’ experience and experiments with the light projections:
When I had the opportunity to discuss the participants’ interactions, here is the feedback I received, along with my observations:
- One participant suggested I play with velocity as a reactive element, using slow as well as fast options. I explained that this actually was a step in the creation process, but I preferred the allure of the constant light. In the future I would like to use velocity in a different way, because I agree it would add a lot of interest to the experience.
- When I asked them to “paint” the light with their hands, they actually rubbed their hands on the ground, not in the air as the patch required. I realized after the first cycle of participants that wording more akin to “conducting” would have given them the depth information they needed to interact.
- I found that their impetus after interacting for a while was to try and use their feet to manipulate the light, because the light was projected onto the floor. Telling them that vertical depth was a factor in body recognition would have helped. Then again, I did enjoy watching the discovery process unfold on its own as their legs got higher off the ground and the projection began to appear.
- Another participant, when asked how they decided to move after seeing the experiential media, said it was out of curiosity. They wanted to test the limits of the interaction.
- The most exciting feedback for me was when a final participant expressed their self-proclaimed “non-dancer” status but said my patch encouraged them to move in new ways. Non-movers found themselves moving because of the interest the projection generated. So cool!
I wanted to give the participants a little bit of information about the patch, but not so much as to spoon-feed it to them. The written prompts worked well, but as stated above, the “paint” wording, while poetic, was not effective.
Kinect to Isadora
Isadora Patch Details
Scene Names/Organization
“Come to me,” “Take a knee: Paint the light with your hands,” and “A little faster now” were all text prompts that would trigger when a person exited the space, giving them instructions for their next interaction. The different “mains” were the actual trailing projection itself, separated into different colors. Finally, the “Fin, back to the top!” screen was simply there for a few seconds to reveal that they had reached the end and that the patch would loop all over again momentarily. This could potentially cause an infinite loop of interactivity, allowing participants to observe others and find new ways of spreading, interacting with, hiding, and manipulating the projected light.
I am excited to take this patch, this knowledge, and this feedback forward with me into my dance studies. I am going to integrate a variant of this project immediately into the Department of Dance’s Winter Concert in February 2016, in which I have a piece of choreography and, soon, an experiential projection.
Fin!
Final Project
Posted: December 13, 2015 Filed under: Pressure Project 3, Sarah Lawler, Uncategorized
Below is a zip file of my final project, along with some screenshots of the patch and images embedded in the index of the patch.
The overall design of this project was based around the role of an emergency spotlight operator. There is no practical way to read cues on a sheet of paper while operating a spotlight. This read-only system is a prototype: images are pushed to a tablet sitting in front of the spotlight operator based on the cues in the light board, and the operator can then see, hands-free, which cue to stand by for.
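As a sketch of the idea only (assuming the cue numbers arrive over OSC, which Isadora can send and receive; the address, port, and filenames here are all placeholders, not part of the actual project):

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Hypothetical mapping from light-board cue numbers to the image the
# spotlight operator should see next.
CUE_IMAGES = {1: "standby_spot_1.png", 2: "blackout.png"}

def on_cue(address, cue_number):
    image = CUE_IMAGES.get(int(cue_number), "no_cue.png")
    print(f"cue {cue_number}: show {image}")  # a real app would display it

dispatcher = Dispatcher()
dispatcher.map("/cue", on_cue)
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```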
Documentation Pictures From Alex
Posted: December 12, 2015 Filed under: Uncategorized
Hi all,
You can find copies of the pictures I took from the public showing here:
https://osu.box.com/s/ff86n30b1rh9716kpfkf6lpvdkl821ks
Please let me know if you have any problems accessing them.
-Alex
Cycle… What is this, 3?… Cycle 3D! Autostereoscopy Lenticular Monitor and Interlacing
Posted: November 24, 2015 Filed under: Jonathan Welch, Uncategorized | Tags: Jonathan Welch
One 23″ glasses-free 3D lenticular monitor. I get up to about 13 images at a spread of about 10 to 15 degrees, and a “sweet spot” 2 to 5 or 10 feet out (depending on the number of images); the background blurs with more images (this is 7). The head tracking and animation are not running for this demo (the interlacing is radically different from what I was doing with 3 images, and I have not written the patch or made the changes to the animation). The poor contrast is an artifact of the terrible camera; in person the brightness and contrast are normal, but the resolution on the horizontal axis diminishes with additional images. I still have a few bugs. Honestly, I hoped the lens would be different; it does not seem to be designed specifically for a monitor with a pixel pitch of .265 mm (with a slight adjustment to the interlacing, it works just as well on the 24-inch with a pixel pitch of .27 mm). But it works, and it will do what I need.
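For anyone trying to reproduce the interlacing: the basic column interlace is simple arithmetic once you know the lens pitch and the pixel pitch. A minimal sketch, assuming a lens of lpi lenticules per inch (the function and variable names are mine; the .265 mm default matches the monitor above):

```python
import numpy as np

def interlace(views, lpi, pitch_mm=0.265):
    """Column-interlace equally sized view images for a lenticular lens.
    views: list of HxWx3 uint8 arrays in left-to-right viewing order."""
    n = len(views)
    h, w, _ = views[0].shape
    lenticule_px = (25.4 / lpi) / pitch_mm            # pixel columns per lenticule
    phase = (np.arange(w) % lenticule_px) / lenticule_px
    idx = np.minimum((phase * n).astype(int), n - 1)  # which view owns each column
    out = np.empty_like(views[0])
    for v in range(n):
        out[:, idx == v] = views[v][:, idx == v]
    return out
```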
better, stronger, faster, goosier
No, you are not being paranoid, that goose with a tuba is watching you…
So far… It does head tracking and adjusts the interlacing to keep the viewer in the “sweet spot” (like a Nintendo 3DS, but it is much harder when the viewer is farther away and the eyes are only 1/2 to 1/10th of a degree apart). The goose recognizes a viewer, greets them, and follows their position… There is also recognition of sound, number of viewers, speed of motion, leaving, and over-volume vs. talking, but I have not written the reaction animations for each scenario, so it just looks at you as you move around. The background is from the camera above the monitor. I had 3 cameras, so the background would have been in 3D with parallax, but that was more than the computer could handle, so I just made a slightly blurry background several feet back from the monitor. But it still has a live feed, so…
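For scale, here is the raw angle a viewer’s eyes subtend at the screen, assuming a typical interpupillary distance of about 63 mm (my figure, not a measurement from this project); it shrinks fast with distance, which is why isolating individual eyes gets so hard far away:

```python
import math

def eye_separation_deg(distance_mm, ipd_mm=63.0):
    """Angle subtended at the screen by a viewer's two eyes."""
    return math.degrees(2 * math.atan(ipd_mm / (2 * distance_mm)))

for feet in (2, 10, 24):
    print(f"{feet} ft: {eye_separation_deg(feet * 304.8):.2f} degrees")
# 2 ft: 5.92 degrees, 10 ft: 1.18 degrees, 24 ft: 0.49 degrees
```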
Multi-Viewer 3D Displays (why the 3DS is a handheld device)
Posted: November 12, 2015 Filed under: Uncategorized
I have been trying to make a glasses-free, multi-viewer 3D monitor (like the Nintendo 3DS, only 23 inches and with multiple viewers), and it is much trickier than it looks.
The parallax barrier cuts down the brightness in proportion to the number of views: each view gets 1/N of the light (a 50% cut for 1 pair of eyes, 75% for 2), so with 2 viewers the screen is dim. But there is more: not only is the resolution cut to 1/4; each pixel is just as narrow as before, but now I am adding a huge gap between them (some test subjects could not even tell what they were looking at)…
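For the record, the arithmetic behind those numbers: a barrier serving N views passes only 1/N of the backlight to each eye.

```python
def blocked_percent(n_views: int) -> float:
    """A parallax barrier exposes 1/N of the pixels (and light) per view."""
    return 100.0 * (1.0 - 1.0 / n_views)

for pairs in (1, 2):
    n = 2 * pairs  # two views per pair of eyes
    print(f"{pairs} viewer(s), {n} views: {blocked_percent(n):.0f}% of the light blocked")
# 1 viewer(s), 2 views: 50% of the light blocked
# 2 viewer(s), 4 views: 75% of the light blocked
```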
Lenticular lenses cost about $12 a foot, but none of the ones I got in the sampler pack line up with the pixels on any of the monitors, so I have to sacrifice even more horizontal resolution…
I gave up and started doing head tracking, but isolating individual eyes on that scale is just about impossible, so I wind up wasting 4X the resolution just to have enough margin for a single viewer…
And it still doesn’t isolate the eyes properly!!!
Now I have a 3D display (if you do not move faster than the head tracking can follow, and remain 2 to 3 1/2 feet from the monitor) that tracks your head, but will not respond to more than 1 viewer…
So I decided that when the head tracking detects more than one person, or if you move around too fast, the character (Tuba-Goose) will get irritated and leave…
If I can get it to do that, I would consider it a hell of an achievement. And even watching this quasi-3D interface figure out where you are and adjust the perspective to compensate (with a bit of a lag sometimes) is a little unsettling… Kind of like being eyeballed by a goose…
I can work with this, but I might still have to buy the 3D monitor, or at least the lens that is made for the 23 inch display…
Cycle 2 Demo
Posted: November 7, 2015 Filed under: Jonathan Welch, Uncategorized | Tags: Jonathan Welch
The demo in class on Wednesday showed the interface responding to 4 scenarios:
- No audience presence (displayed “away” on the screen)
- Single user detected (the goose went through a rough “greet” animation)
- Too much violent movement (the words “scared goose” on the screen)
- More than a couple of audience members (the words “too many humans” on the screen)
The interaction was made in a few days, and honestly, I am surprised it was as accurate and reliable as it was…
The user presence was just a blob output. I used a “Brightness Calculator” with the “Difference” actors to judge the violent movement (the blob velocity was unreliable with my equipment). Detecting “too many humans” was just another “Brightness Calculator”. I tried more complicated actors and patches, but these were the ones that worked in the setting.
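Outside Isadora, that “Difference” plus “Brightness Calculator” chain is just frame differencing. A rough OpenCV sketch of the same idea (the threshold is an arbitrary placeholder, not a value from my patch):

```python
import cv2

# Frame-difference the camera feed and take the mean brightness of the
# result as a motion score.
cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev).mean()   # mean brightness of the difference
    if motion > 25.0:                         # placeholder threshold
        print("too much violent movement:", motion)
    prev = gray
```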
Most of my time has gone into solving an issue with interlacing. I had hoped I could build something with the lenses I have, order a custom lens (they are only $12 a foot plus the price to cut), or create a parallax barrier. Unfortunately, creating a high-quality lens does not seem possible with the materials I have (2 of the 8″ × 10″ sample packs from Microlens), and a parallax barrier blocks light in proportion to the number of viewing angles (2 views block 50%, 3 block 66%… 10 views block 90%). On Sunday I am going to try a patch that blends interlaced pixels to fix the problem of the lines on the screen not lining up with the lenses (it basically blends interlaces to align a non-integral number of pixels with the lines per inch of the lens).
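My reading of that blending idea, sketched in Python: when the lenticule width is a non-integral number of pixels, a column can straddle two views’ stripes, so linearly mix the two nearest views instead of hard-assigning the column (the names and the point-sampled blend are my simplification, not the patch):

```python
import numpy as np

def interlace_blended(views, lpi, pitch_mm=0.265):
    """Column interlace that linearly blends the two nearest views when a
    column does not fall cleanly inside one view's stripe."""
    n = len(views)
    h, w, _ = views[0].shape
    stack = np.stack(views).astype(np.float32)
    lenticule_px = (25.4 / lpi) / pitch_mm
    out = np.zeros((h, w, 3), np.float32)
    for x in range(w):
        p = (x % lenticule_px) / lenticule_px * n  # fractional view index
        lo, f = int(p) % n, p - int(p)
        hi = (lo + 1) % n
        out[:, x] = (1 - f) * stack[lo, :, x] + f * stack[hi, :, x]
    return out.astype(np.uint8)
```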
Worst-case scenario… A ready-to-go lenticular monitor is $500, the lens designed to work with a 23″ monitor is $200, and a 23-inch monitor with a pixel pitch of .270 mm is about $130… One way or another, this goose is going to meet the public on 12/07/15…
Links I have found useful are…
Calculate the DPI of a monitor to make a parallax barrier.
Specs of one of the common ACCAD 24″ monitors
http://www.pcworld.com/product/1147344/zr2440w-24-inch-led-lcd-monitor.html
MIT student who made a 24″ lenticular 3D monitor.
http://alumni.media.mit.edu/~mhirsch/byo3d/tutorial/lenticular.html
Josh Final Presentation 1 Update
Posted: November 6, 2015 Filed under: Josh Poston, Uncategorized
This is my patch, which receives the Kinect data through Syphon into Isadora. There it takes the Kinect depth data and, using a luminance key and Gaussian blur, creates a solid, smoother image. That smooth image of the person standing in the space is then fed into an alpha mask and combined with a video feed, which projects a video within the outline of the body.
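In the patch this is a handful of actors; as a sketch of the same pipeline in OpenCV terms (the threshold and kernel size are placeholders, and I am assuming the depth feed maps near to bright):

```python
import cv2
import numpy as np

def body_masked_video(depth_u8, video_frame):
    """Luminance-key the depth image, soften it, and use it as an alpha
    mask so the video only appears inside the body's outline."""
    _, mask = cv2.threshold(depth_u8, 40, 255, cv2.THRESH_BINARY)  # keep bright (near) pixels
    mask = cv2.GaussianBlur(mask, (21, 21), 0)                     # smooth the silhouette edge
    alpha = mask.astype(np.float32)[..., None] / 255.0
    return (video_frame.astype(np.float32) * alpha).astype(np.uint8)
```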
Isadora Updates
Posted: November 2, 2015 Filed under: Uncategorized
http://troikatronix.com/isadora-2-1-release-notes/
Mark recently updated Isadora …
Check out the release notes.
It changes the way that videos are assigned to the stage.