First Video Tracking Demo
Posted: September 8, 2016 | Filed under: Uncategorized

Here are some screenshots from my first attempts at controlling a shape using the Eyes++ actor in class today. A blob is tracked, and its horizontal and vertical center coordinates determine the horizontal and vertical position of the square. The blob's width and height are used to control the red and green color values of the square, and its velocity is used to control the line size of the square.
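For reference, the same mapping logic can be written out in a few lines of Python. This is just an illustration of the scaling idea, not the Isadora patch itself; the blob field names and value ranges here are assumptions.

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a value from one range to another (like Isadora's Scale Value actor)."""
    if in_max == in_min:
        return out_min
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

def blob_to_square(blob):
    """Map tracked blob properties onto square drawing parameters.

    `blob` is a hypothetical dict with fields a tracker like Eyes++ might report:
    center x/y and size in 0-100 stage units, velocity in units per second.
    """
    return {
        "x": scale(blob["center_x"], 0, 100, -50, 50),        # horizontal position
        "y": scale(blob["center_y"], 0, 100, -50, 50),        # vertical position
        "red": scale(blob["width"], 0, 100, 0, 255),          # width -> red channel
        "green": scale(blob["height"], 0, 100, 0, 255),       # height -> green channel
        "line_size": scale(blob["velocity"], 0, 20, 1, 10),   # speed -> outline thickness
    }

# Example: a blob near the upper-right corner, moving quickly
print(blob_to_square({"center_x": 80, "center_y": 20,
                      "width": 30, "height": 45, "velocity": 12}))
```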
Danny Coyle’s game project: Green Hill Paradise – Act 2
Posted: September 8, 2016 | Filed under: Uncategorized

This is a video game/research project of my own design inspired by the Sonic the Hedgehog game series.
——————————–
Green Hill Paradise was born out of a question. A question that has all but torn the Sonic the Hedgehog community asunder:
Can a Sonic the Hedgehog game be made such that it supplies a rich and robust experience in a fully 3D environment while staying true to its platforming roots?
We are here to answer this question, not through an in-depth analysis video, not through a lengthy forum post, but through a fully playable video game experience. After 10 months of research and development, we believe that, yes, Sonic can not only exist in 3D, but he can THRIVE in it. GHP’s massive environment, winding paths, dynamic physics, and hidden collectibles provide players with the freedom to choose where they want to go and how they wish to get there. The only limitations are the laws of physics and the player’s own skill.
No Spline paths.
No Boostpads.
No scripted cameras.
No Boost Button.
Gotta Go Fast?
Earn it.
——————————–
Since the game’s release, it has been featured on various websites and YouTube “Let’s Play” videos. I figured it was prudent to share it here with you all as well. If you have any questions about the project, just ask!
Sept 8th 2016 – Pressure Project 1
Posted: September 8, 2016 | Filed under: Uncategorized

Pressure Project #1: Generator
Time limit: 5 hours max!
Specific resources needed for this project: Isadora, and the Shapes Actor and Jump++ Actor within it.
Required Achievements:
Consider: Conway’s Game of Life
http://www.youtube.com/watch?v=C2vgICfQawE
Further Consider Cellular Automata: https://www.google.com/#q=cellular+automata
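For anyone who hasn’t met it before, here is a minimal Python sketch of the Game of Life update rule (a live cell survives with two or three live neighbors; a dead cell is born with exactly three). It’s only a reference for the rule set, not part of the Isadora patch you will build.

```python
from collections import Counter

def life_step(live_cells):
    """One generation of Conway's Game of Life.

    live_cells is a set of (x, y) tuples for cells that are currently alive.
    """
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "glider" drifts diagonally forever -- easy to verify by printing a few steps.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))
```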
Primary goal:
Create a generative patch in the Isadora programming environment that integrates the Shapes Actor, Jump++ Actor, Eyes or Eyes++, and ANY other actor(s) that you choose, to develop a self-generating patch.
After initializing the patch (“pressing a button”), the patch runs itself through a visual sequence of movement and behavior.
Secondary goal: Maximize the amount of time it takes for the novice viewer of the patch to grok the patch’s behavior. (Grok: http://en.wikipedia.org/wiki/Grok)
Other Isadora Actors of note for this project: all of the actors under the Calculation heading, but specifically Random, Wave Generator, Smoother, Curvature, Comparator, Counter, Scale Value, Calculator, Logical Calculator, Inside Range, and Hold Range.
This Project is: Pass / Fail | Artistic visions/considerations are highly valued.
For further information:
- Getting started: Complete the first 7 Isadora tutorials available at the following link:
- http://www.youtube.com/watch?v=VqNw_4AWvvA
- short version of url: http://goo.gl/Z1sPlQ
- (This link is to the first tutorial. The following tutorials can be found in the associated links that pop up on the right of the YouTube interface.)
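As a rough analogy for what “self-generating” can mean when you chain the Calculation actors listed above, here is a short Python sketch that strings a wave generator, a random source, and a smoother together into a sequence that runs itself after a single start. The actor behaviors are loose approximations, not Isadora’s exact implementations.

```python
import math, random

def wave_generator(t, frequency=0.25, amplitude=50.0):
    """Rough stand-in for a Wave Generator actor: a sine wave in -amplitude..+amplitude."""
    return amplitude * math.sin(2.0 * math.pi * frequency * t)

def smoother(previous, target, smoothing=0.1):
    """Rough stand-in for a Smoother actor: ease toward the target value."""
    return previous + smoothing * (target - previous)

def run_patch(steps=20, dt=0.1):
    """Self-running sequence: a wave drives x, smoothed random values drive size."""
    size = 0.0
    for step in range(steps):
        t = step * dt
        x = wave_generator(t)                         # horizontal drift
        size = smoother(size, random.uniform(5, 40))  # jittery but eased shape size
        print(f"t={t:4.1f}  x={x:6.1f}  size={size:5.1f}")

run_patch()
```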
Paper-based game results and observations:
Posted: September 8, 2016 | Filed under: Uncategorized

Our team of two modified the “Boxes” game, where the objective is to create boxes from a dot grid, racking up combos in order to defeat your opponent. Our change was simple: allow players to create diagonal lines between dots and score based on triangles rather than squares. This simple change made for twice as many possibilities and opportunities for scoring. The meta-game involves trapping your opponent into situations where they will be forced to set you up for victory. The ability to create diagonal lines creates an environment where these traps can be set more frequently, producing a more aggressive style of play.
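To put a number on “twice as many,” here is a toy count in Python. It assumes a square dot grid where each unit cell scores either as one box or, once a diagonal is drawn, as two triangles.

```python
def scoring_regions(dots_per_side, allow_diagonals):
    """Count scoring regions on a square dot grid.

    Classic Boxes: each unit cell is one box. Our variant: one diagonal per
    cell splits it into two triangles, doubling the number of regions.
    """
    cells = (dots_per_side - 1) ** 2
    return cells * 2 if allow_diagonals else cells

for n in (4, 5, 6):
    print(n, "dots per side:",
          scoring_regions(n, False), "boxes vs",
          scoring_regions(n, True), "triangles")
```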
Next I tried out a variant of Hangman. This version included a life system. Every time you chose an incorrect letter, you lost a life. Lose all of them and, no matter your Hangman’s status, you lose. Guess correctly and you will gain lives. This change in the rule-set creates a dynamic of “momentum” that the game did not have prior to the change. If you begin guessing incorrectly, you will lose more quickly than normal. If you begin guessing correctly and gain more lives, then you are able to take more risks. An interesting dynamic, to be sure.
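Here is a minimal Python sketch of the life mechanic only (no word list or gallows drawing); the +1 life per correct guess is an assumption about how we played it.

```python
def guess(state, letter, secret_word):
    """Apply one guess to the hangman-variant state and return the new state.

    Correct guesses add a life (assumed +1); wrong guesses cost one.
    Running out of lives loses the game no matter how much of the word is left.
    """
    lives, found = state
    if letter in secret_word:
        found = found | {letter}
        lives += 1          # momentum: good guesses buy room for risk
    else:
        lives -= 1          # bad guesses accelerate the loss
    return lives, found

state = (5, set())           # start with 5 lives, no letters found
for letter in "aeiosn":
    state = guess(state, letter, "goose")
    print(letter, "->", state)
    if state[0] <= 0:
        print("Out of lives: you lose.")
        break
```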
~Danny
Isadora Video Tracking demo
Posted: September 8, 2016 | Filed under: Uncategorized

A very basic layout. Two blobs are tracked, and their locations are mapped to the locations of squares.
Final Documentation Jonathan Welch
Posted: December 20, 2015 | Filed under: Jonathan Welch, Uncategorized

OK, I finally got it working.
The Master Patch
Eyes++ tracked the viewer and sent data to the “Emotion Matrix,” which sent the character’s response to the “Player Controller,” which triggered the Players. The background interlacing was done in the “Background.”
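Read as a pipeline, the flow is: tracking data in, the Emotion Matrix picks a response, and the Player Controller fires the matching player. Here is that flow as a hypothetical Python sketch; the thresholds and field names are placeholders of mine, not values from the patch.

```python
def emotion_matrix(tracking):
    """Pick a response name from tracking data (thresholds are placeholder values)."""
    if tracking["blob_count"] == 0:
        return "pause_or_away"
    if tracking["blob_count"] > 2:
        return "too_many_humans"
    if tracking["distance"] < 1.0:
        return "too_close"
    if tracking["volume"] > 0.8:
        return "too_loud"
    if tracking["motion"] > 0.7:
        return "too_much_motion"
    return "honk"

def player_controller(response, busy):
    """Only trigger a player when no other response is still playing."""
    if busy:
        return None
    print(f"triggering player: {response}")
    return response

# One frame of the pipeline: Eyes++-style data in, a player trigger out.
frame = {"blob_count": 1, "distance": 0.6, "volume": 0.3, "motion": 0.2}
player_controller(emotion_matrix(frame), busy=False)
```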
This is the “Response Player”
There were 9 responses (6 different animations, 2 actions that generated the same response, a still frame, and a blank/away response). The players were very similar, but the “Leave” and “Greet” players had a broadcaster that toggled the character’s Here/Away state so that noises when no one was around wouldn’t trigger the honk response.
The responses were:
1. Leave (triggered if no one was there, or you pissed him off)
Honks, walks off camera, and the subtitles read “Whatever Mammal”
2. Greet (triggered when someone arrives and the “Blob Counter” was 1)
Walks up to the camera
3. Too Close
Honks and the subtitles read “You are freaking me out human”
4. Too Loud
Honks and the subtitles read “Are humans always this loud?”
5. Too much motion
Honks and the subtitles read “You are freaking me out human”
6. Too many Humans
Honks and the subtitles read “You are freaking me out human”
7. Honk
Honks and the subtitles read “Hey”
8. Blessing
Honks and the subtitles read “May your down always be greasy and your pond never go dry”
9. Pause or Away (a still frame or a blank frame, depending on whether the state was “Here” or “Away”)
“Playback Controller”
The response is generated in the Emotion Matrix; the Playback Controller keeps the responses from triggering at the same time.
The “Emotion Matrix”
Too much fast motion, getting too close, being too loud, or too many people around would generate a response and add negative points to the goose’s attitude. If the score got too high, he would leave. If you did not have any negative points and you said something at normal talking volume, the goose would give you a blessing. If you had been loud or done something to irritate him, he would just say “Hey.” The irritants would go down over time, but if you got too many in too short a time, the goose would walk away.
The “Goodness Elevator”
This took input from the negative response counter and routed the “Honk” response to a blessing if the count was 0 on all negative emotions.
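As a rough Python sketch of that scoring logic (the point values, decay rate, and leave threshold here are placeholders, not the numbers in the patch):

```python
class GooseMood:
    """Tracks irritation from several sources and decays it over time."""

    LEAVE_THRESHOLD = 10   # placeholder: total irritation that makes the goose walk off

    def __init__(self):
        self.irritation = {"close": 0, "loud": 0, "motion": 0, "crowd": 0}

    def irritate(self, source, points=1):
        self.irritation[source] += points

    def decay(self):
        """Irritants fade a little every tick."""
        for source in self.irritation:
            self.irritation[source] = max(0, self.irritation[source] - 1)

    def respond_to_speech(self):
        """The 'Goodness Elevator': speech earns a blessing only with a clean slate."""
        if sum(self.irritation.values()) >= self.LEAVE_THRESHOLD:
            return "leave"
        if all(points == 0 for points in self.irritation.values()):
            return "blessing"
        return "honk"   # he just says "Hey"

mood = GooseMood()
print(mood.respond_to_speech())   # blessing: no irritants yet
mood.irritate("loud", 2)
print(mood.respond_to_speech())   # honk: some irritation remains
```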
What went wrong…
The background interlacing reduced the frame rate to under 1 FPS at times. At such a slow rate, several of the triggers that start the next response arrive at the same time. There were redundancies to keep them from all happening at once and to keep a response from starting when another had already been triggered, but at 1 FPS they were all happening at once.
I had a broadcaster sending the player position, which was used to ensure the players did not respond at the same time and to trigger the next response when the last one was over. A “Comparator” and a “Router” kept the signals from starting a response while a player was playing, but if too many signals came at once and the frame rate was too low, there was no way to keep one response from starting while another was playing.
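The gating idea boils down to a busy flag: block incoming triggers while a player is running, and clear the flag when it finishes. Here is that gate as a Python analogy (not the Isadora actors themselves), with a note on why it falls apart at very low frame rates:

```python
class ResponseGate:
    """Comparator/Router stand-in: let one response through, block the rest."""

    def __init__(self):
        self.playing = None

    def trigger(self, response):
        if self.playing is not None:
            # Blocked: another response is mid-playback.
            return False
        self.playing = response
        return True

    def finished(self):
        """Called when the current player reports it is done."""
        self.playing = None

gate = ResponseGate()
# At a healthy frame rate, triggers arrive one per frame and the gate works:
print(gate.trigger("greet"))      # True  -> plays
print(gate.trigger("too_loud"))   # False -> blocked while "greet" plays
gate.finished()
print(gate.trigger("honk"))       # True
# At ~1 FPS the triggers pile up inside a single frame, so in the real patch
# several of them were evaluated before the "playing" state could update.
```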
I finally fixed it by eliminating the live feed and the moving background. I tried just replacing the live feed with a photo, but interlacing it and moving the background image was more than the computer could handle.
“Trailhead” Final project update
Posted: December 15, 2015 | Filed under: Uncategorized

Reflection
I am very pleased with how my final iteration of this project went. Not all of the triggers worked perfectly (the same scene was always finicky, which could have called for some final tweaking), but the interactivity I was looking for occurred.
Please refer to my video documentation of the participants’ experience and experiments with the light projections:
When I had the opportunity to discuss the participants’ interactions, here is the feedback I received, as well as my observations.
- One participant suggested I play with velocity as a reactive element, using slow as well as fast as options. I explained that this actually was a step in the creation process, but I preferred the allure of the constant light. In the future I would like to use velocity in a different way, because I agree it would add a lot of interest to the experience.
- When I asked them to “paint” the light with their hands, they actually rubbed their hands on the ground, not in the air as the patch required. I realized after the first cycle of participants that using wording more akin to “conducting” would give them the depth information they would need to interact.
- I found that their impetus after interacting for a while was to try to use their feet to manipulate the light, because the light was projected onto the floor. Giving them information that their vertical depth was a factor in body recognition would have helped them. On the other hand, I did enjoy watching the discovery process unfold on its own as their legs got higher off the ground and the projection began to appear.
- Another participant, when asked how they decided to move after seeing the experiential media, said it was out of curiosity. They wanted to test the limits of the interaction.
- The most exciting feedback for me was when a final participant expressed their self-proclaimed “non-dancer” status but said my patch encouraged them to move in new ways. Non-movers found themselves moving because of the interest the projection generated. So cool!
I wanted to give the participants a little bit of information about the patch, but not so much as to spoon-feed it to them. The written prompts worked well, but as stated above, the “paint” wording, while poetic, was not effective.
Kinect to Isadora
Isadora Patch Details
Scene Names/Organization
“Come to me,” “Take a knee: Paint the light with your hands,” and “A little faster now” were all text prompts that would trigger when a person exited the space, to give them instructions for their next interaction. The different “mains” were the actual trailing projection itself, separated into different colors. Finally, the “Fin, back to the top!” screen was simply there for a few seconds to reveal that they had reached the end and that the patch would loop all over again momentarily. This could potentially cause an infinite loop of interactivity, allowing participants to observe others and find new ways of spreading, interacting, hiding, and manipulating the projected light.
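Conceptually, the scene flow is just a loop that advances each time the space empties. Here is that sequencing idea as a Python sketch; the scene names are from the patch, but the ordering and the exit signal are stand-ins.

```python
import itertools

SCENES = [
    "Come to me",
    "main (first color)",
    "Take a knee: Paint the light with your hands",
    "main (second color)",
    "A little faster now",
    "main (third color)",
    "Fin, back to the top!",
]

def run_installation(person_left_events):
    """Advance to the next scene each time a person exits the space.

    `person_left_events` is a stand-in for whatever signal reports an exit
    (in the patch this came from the tracking data, not a Python list).
    """
    scene_cycle = itertools.cycle(SCENES)  # loops forever: "infinite interactivity"
    current = next(scene_cycle)
    print("showing:", current)
    for _ in person_left_events:
        current = next(scene_cycle)
        print("showing:", current)

run_installation(["exit"] * 8)   # eight exits walk through the loop and wrap around
```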
I am excited to take this patch, this intelligence, and this feedback forward with me into my dance studies. I am going to integrate a variant of this project immediately into my performance in the Department of Dance’s Winter Concert in February of 2016, in which I have a piece of choreography and, soon, an experiential projection.
Fin!
Final Project
Posted: December 13, 2015 | Filed under: Pressure Project 3, Sarah Lawler, Uncategorized

Below is a zip file of my final project, along with some screenshots of the patch and images embedded into the index of the patch.
The overall design of this project was based on the role of an emergency spotlight operator. There was no physical way to read cues on a sheet of paper while operating a spotlight. This read-only system is a prototype: the images are pushed to a tablet sitting in front of the spotlight operator based on the cues in the light board. The spotlight operator can then determine, hands-free, which cue they need to be in standby for.
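The core idea — turning the light board’s current cue into the right standby image on the operator’s tablet — can be sketched like this in Python. The cue numbers, file names, and the way the cue arrives are placeholders, not the actual prototype.

```python
# Map light-board cues to the standby image the spotlight operator should see next.
CUE_IMAGES = {
    12: "standby_spot_house_left.png",
    18: "standby_follow_soloist.png",
    25: "standby_blackout.png",
}

def image_for_cue(current_cue):
    """Return the image for the next spotlight standby at or after this cue."""
    upcoming = [cue for cue in sorted(CUE_IMAGES) if cue >= current_cue]
    if not upcoming:
        return "show_complete.png"
    return CUE_IMAGES[upcoming[0]]

# As the board steps through cues, the tablet display updates hands-free.
for cue in (10, 12, 20, 30):
    print(f"cue {cue}: display {image_for_cue(cue)}")
```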
Documentation Pictures From Alex
Posted: December 12, 2015 | Filed under: Uncategorized

Hi all,
You can find copies of the pictures I took from the public showing here:
https://osu.box.com/s/ff86n30b1rh9716kpfkf6lpvdkl821ks
Please let me know if you have any problems accessing them.
-Alex
Cycle… What is this, 3?… Cycle 3D! Autostereoscopic Lenticular Monitor and Interlacing
Posted: November 24, 2015 | Filed under: Jonathan Welch, Uncategorized | Tags: Jonathan Welch

One 23″ glasses-free 3D lenticular monitor. I get up to about 13 images at a spread of about 10 to 15 degrees, and a “sweet spot” 2 to 5 or 10 feet out (depending on the number of images); the background blurs with more images (this is 7). The head tracking and animation are not running for this demo (the interlacing is radically different from what I was doing with 3 images, and I have not written the patch or made the changes to the animation). The poor contrast is an artifact of the terrible camera; the brightness and contrast are actually normal, but the resolution on the horizontal axis diminishes with additional images. I still have a few bugs. Honestly, I hoped the lens would be different; it does not seem to really be designed specifically for a monitor with a pixel pitch of .265 mm (with a slight adjustment to the interlacing, it works just as well on the 24-inch with a pixel pitch of .27 mm). But it works, and it will do what I need.
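For anyone curious, the interlacing itself comes down to deciding which source view each pixel column belongs to, based on the ratio between the lenticule pitch and the panel’s pixel pitch. Here is a heavily simplified Python sketch (it ignores slanted lenses, sub-pixel ordering, and calibration offsets; the 0.265 mm pitch is the monitor’s, but the lens pitch below is a made-up value):

```python
def view_for_column(column, num_views, lens_pitch_mm, pixel_pitch_mm=0.265, offset=0.0):
    """Which of the N source images a given pixel column should sample.

    Each lenticule covers lens_pitch_mm / pixel_pitch_mm columns, and those
    columns are dealt out across the views in order.
    """
    columns_per_lens = lens_pitch_mm / pixel_pitch_mm
    phase = ((column + offset) % columns_per_lens) / columns_per_lens  # 0..1 under the lens
    return int(phase * num_views) % num_views

# 7 views under a hypothetical ~1.87 mm lenticule: view index for the first columns.
print([view_for_column(c, num_views=7, lens_pitch_mm=1.87) for c in range(14)])
```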
better, stronger, faster, goosier
No, you are not being paranoid; that goose with a tuba is watching you…
So far… It does head tracking and adjusts the interlacing to keep the viewer in the “sweet spot” (like a Nintendo 3DS, but it is much harder when the viewer is farther away and the eyes are only 1/2 to 1/10th of a degree apart). The goose recognizes a viewer, greets them, and follows their position… There is also recognition of sound, the number of viewers, speed of motion, leaving, and excessive volume vs. normal talking, but I have not written the animations for the reaction to each scenario, so it just looks at you as you move around. The background is from the camera above the monitor. I had three of them, so the background would be in 3D and have parallax, but that was more than the computer could handle, so I just made a slightly blurry background several feet back from the monitor. But it still has a live feed, so…
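The sweet-spot tracking amounts to shifting the interlace pattern by whole views as the tracked head moves off-axis. A simplified Python sketch of that step (all of the constants are placeholders; the real relationship depends on lens pitch, pixel pitch, and viewing distance):

```python
def sweet_spot_offset(head_angle_deg, degrees_per_view, num_views):
    """Rotate the interlace pattern by whole views to re-center the sweet spot.

    head_angle_deg: viewer's horizontal angle off the screen axis (from head tracking).
    degrees_per_view: angular width of one view zone -- a placeholder value here.
    """
    return round(head_angle_deg / degrees_per_view) % num_views

def reorder_views(views, offset):
    """Shift which source image lands in which view zone."""
    return views[offset:] + views[:offset]

views = list(range(13))   # 13 source images, as in the current build
# A head 4 degrees off-axis with ~1.2-degree view zones shifts the pattern by 3 views.
print(reorder_views(views, sweet_spot_offset(4.0, degrees_per_view=1.2, num_views=13)))
```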