Horse Bird Muffin Cycles

For my final project, I used Isadora, Vuo, a Kinect sensor, buttons and two projectors to create a sort of game or test to determine one’s inherent nature as horse, bird, or muffin, or some combination of them.
img_0963

Through a series of instructed interactions, a person first chooses an environment from three images projected onto a screen. As the person walks toward a particular image, their own moving silhouette is layered onto it. Text appears instructing them to move closer to the place they've chosen, and the closer they get, the louder the ambient sound associated with their chosen image becomes.
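The closer-means-louder behavior can be sketched as a simple distance-to-volume mapping. This is a hypothetical reconstruction, not the actual Isadora patch; the `near`/`far` range values are illustrative assumptions.

```python
# Hypothetical sketch of the proximity-to-volume mapping described above.
# Assumes the sensor reports distance in meters; `near` and `far` are
# illustrative bounds, not values from the actual patch.
def ambient_volume(distance_m, near=0.5, far=4.0):
    """Map distance to a 0..1 volume: closer = louder."""
    # Clamp the reading into the sensing range.
    d = max(near, min(far, distance_m))
    # Linear ramp: full volume at `near`, silent at `far`.
    return (far - d) / (far - near)
```

In Isadora terms this is roughly what a Limit-Scale Value actor does between the depth reading and the sound player's volume input.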


Once within a certain proximity, if all goes well, the first part of the test is complete. So participants start with a subtly self-selected identity.

img_0965

In the next scene, the participant can interact with an image of their choice: the image mirrors their body moving through space and changes size based on the volume of sound the participant and their surroundings create. Loud stomping and claps make the image fill the screen and earn the participant at least one horse ranking, while lots of traveling through space accumulates to earn a bird ranking. Little or in-place movement for a designated time, as many guessed, puts them in ranks with the muffins.
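The three rankings above amount to a small decision rule. Here is an illustrative sketch; the threshold values and function names are my assumptions, not the actual patch logic.

```python
# Illustrative sketch of the horse/bird/muffin ranking described above.
# Thresholds (`loud`, `far`, `time_limit`) are assumptions for the example.
def rank(volume, distance_traveled, elapsed,
         loud=0.8, far=5.0, time_limit=30.0):
    if volume >= loud:                 # stomps and claps -> horse
        return "horse"
    if distance_traveled >= far:       # lots of traveling -> bird
        return "bird"
    if elapsed >= time_limit:          # little movement until time is up -> muffin
        return "muffin"
    return None                        # keep evaluating
```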

img_0863 img_0969img_0953 img_0952 img_0951  img_0954-1

Any of the three events triggers a song affiliated with that rank and begins the next evaluation.

img_0944


This last scene never quite reached my imagined heights, but it was intended for the participant to see themselves on video in real time and on a delay, still with their moving silhouettes tracked and projected through a Difference actor in Isadora. This worked, and participants enjoyed dancing with themselves and their echoes onscreen. The parts that needed work were a few Shapes actors designed to follow movements in different quadrants of the Kinect's RGB camera field, and a patch that counted the number of times the participant crossed the horizontal field completely (to trigger a final horse ranking), moved from high to low in the vertical field, or, again, observed themselves with small or no movements until a set time was up.
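The crossing counter described above can be sketched as a small state machine: a crossing only counts when the participant reaches one edge having last touched the opposite edge. This is a reconstruction of the idea, not the actual patch; the edge thresholds are assumptions.

```python
# Minimal sketch of counting complete horizontal crossings of the camera
# field (x normalized 0..1). Edge thresholds are illustrative assumptions.
class CrossingCounter:
    def __init__(self, left=0.1, right=0.9):
        self.left, self.right = left, right
        self.last_edge = None   # which edge was touched most recently
        self.crossings = 0

    def update(self, x):
        if x <= self.left and self.last_edge == "right":
            self.crossings += 1          # completed right-to-left crossing
            self.last_edge = "left"
        elif x >= self.right and self.last_edge == "left":
            self.crossings += 1          # completed left-to-right crossing
            self.last_edge = "right"
        elif x <= self.left:
            self.last_edge = "left"
        elif x >= self.right:
            self.last_edge = "right"
        return self.crossings
```

The same pattern, rotated, would handle the high-to-low vertical movement.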

These functions worked only intermittently and were not robust.

img_0946

Nonetheless, each participant was instructed to push a button at the end of their assessment and was able to discover whether they were a birdhorsemuffin, a birdhorsehorse, a triple horse, or some other combination.


img_0858img_0870


Pressure Project 3: Only the Lonely

For the third pressure project involving dice, I wanted the dice to trigger a music player, with the locations where the dice landed determining which instrumental parts of a song would play. I decided to use a Kinect's depth sensor to detect the presence of dice in the foreground, mid-ground, and background of a marked range of space.

maxresdefault

I made origami cubes to use as dice so that they were a bit larger and easier to detect, and also less volatile and thereby easier to keep within the range I'd set.

Using a Syphon Receiver in communication with Vuo, I connected the computer vision to three discrete ranges of depth and used a Crop actor to cut out any information about objects, like people, that might come into the camera's field of vision. (This only partially worked and could use some fine-tuning.) In retrospect, I wonder if there is an adjustment on Eyes++ I could use that would require the presence of an object to last a certain amount of time before registering its brightness, in order to filter out interfering messages besides the location of the dice.

screenshot-2016-12-16-18-17-13

Each Eyes actor was connected to a Smoother and an Inside Range actor, which watched for the brightness detected within each range's set luminance band to reach a minimum before triggering a series of signals. The primary signal was to play that section's sound (because the brightness indicated an object was present) or to stop it (because below-minimum brightness indicated the absence of an object).
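The per-range play/stop decision above boils down to a threshold with state. A minimal sketch, assuming a normalized 0..1 brightness reading (the threshold value is an assumption, not the patch's actual setting):

```python
# Sketch of the per-depth-range play/stop logic described above.
# `threshold` is an illustrative value, not the actual Inside Range setting.
def range_gate(brightness, playing, threshold=0.2):
    """Return the new playing state for one depth range."""
    if brightness >= threshold and not playing:
        return True    # object present: start that section's sound
    if brightness < threshold and playing:
        return False   # object gone: stop the sound
    return playing     # no change
```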

Because the three instrumental music files were designed to play together, it was possible for three dice to land in the three respective ranges, the goal being that all three files started together, playing a song with drums, bass line, and keyboard melody (reminiscent of Roy Orbison's Only the Lonely). To make this work without the sound files starting within split seconds of each other, I set up a system of Simultaneity actors, Trigger Delays, and Gates that I can't explain in full detail, but that I understand conceptually: delay action to look first for the simultaneous presence of objects in multiple ranges, then send a shared signal for two or all three parts based on the information gathered, and, barring that, let the signal through for an individual part to play.

screenshot-2016-12-16-18-50-50

For example, if one die landed in the mid-ground and two in the foreground, the Inside Range enter triggers from both of those ranges would fire individual Trigger Delays that, after 2 seconds, send a message to play a single file. BUT, if the Simultaneity actor for the mid-ground and foreground is triggered, it signals a Gate to turn off those two delays, preventing the individual file triggers and sending a shared trigger to both the bass and the piano files so they start at the same time.
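The simultaneity/gate scheme described above can be summarized as: wait out the delay, then issue one shared trigger for every occupied range so the parts start in sync. This is a conceptual reconstruction, not the actual Isadora patch; the range-to-instrument mapping follows the post (foreground: piano, mid-ground: bass, background: drums), and the function name is an assumption.

```python
# Conceptual sketch of the simultaneity/gate scheme described above.
# The 2-second delay comes from the post; the rest is illustrative.
def resolve_triggers(occupied, delay=2.0):
    """occupied: set of range names with a die present.
    Returns (files to start together on one shared trigger, delay in s)."""
    files = {"foreground": "piano", "mid-ground": "bass", "background": "drums"}
    # Simultaneous presence gates off the individual Trigger Delays and
    # sends one shared trigger, so the parts start at the same moment.
    return sorted(files[r] for r in occupied), delay
```

With only one range occupied, the same call reduces to the single-file case that the individual Trigger Delay would have handled.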

For whatever reason I can only get a zip of the GarageBand file to upload, but this is all three parts together: dice-player-band

The entire Isadora patch is here: http://osu.box.com/s/eyjn9zge850ywc81yhdoj4cpbic1rkka


Chance music….


Pressure Project #3: Augmented Backgammon

Backgammon is one of the oldest board games in human history.

screen-shot-2016-12-16-at-7-19-27-pm
It is a mysterious mix of choices, strategy, and luck: the properties of life I enjoy the most.

screen-shot-2016-12-16-at-7-28-32-pm

On my latest visit to Istanbul, Turkey, I got a beautiful backgammon board. I decided then to bring it back and let my fellow artists and coworkers play with the augmented-reality version of this game I created.

screen-shot-2016-12-16-at-7-16-01-pm

Thanks, ACCAD, for the awesome video on this piece!

screen-shot-2016-12-16-at-7-27-01-pm

Axel Cuevas Santamaría VJ axx
https://axxvjs.wordpress.com/


Pressure Project #2: Dancing Depths

After Pressure Project 1: Staring at the Sun, I developed this new iteration.
It was an attempt to deliver a delightful, self-running digital art experiment.

screen-shot-2016-12-16-at-6-49-23-pm

… an individual walks into a room in darkness where a MIDI controller interface awaits.

20160929_152116

The device, once activated by the individual, lets Isadora create a visually pleasing, self-generating patch of shapes, lines, and color.

20160929_151615

Visual textures are assigned to the dancing silhouettes of the people inside the room.

20160929_161734

This was my first experiment using the Kinect v2 and Isadora together.

20160929_161902

It was also my first interactive dancing experience, one where my audience got activated.

axel-cuevas-santamaria-vj-axx-microbial-skin-011

After this, I continued developing a more complex and integrated experiential media system for an interactive dance performance titled Microbial Skin

axel-cuevas-santamaria-vj-axx-microbial-skin-009

Thanks Alex Olizsewski and ACCAD for all the support!

Axel Cuevas Santamaría VJ axx
https://axxvjs.wordpress.com/



Pressure Project #1: Staring at the Sun

Description of the project in this link: https://axxvjs.wordpress.com/2016/10/20/diy-dome/

For this project, we experienced the color yellow invading our spatial perception.

screen-shot-2016-12-16-at-6-08-38-pm

Inspired by Pi, the 1998 American surrealist psychological thriller written and directed by Darren Aronofsky, this is the project I developed.

text_01

The experiential media system for this project provided a mock-up of an immersive environment that allows users to shift their spatial perception.


After this experiment, I built a geodesic dome to test immersive possibilities.

axel_vjaxx_fulldome2016

I started building cardboard domes for delivering immersive experiences.

axel-vjaxx-dome-2016_-0088

Axel Cuevas Santamaría VJ axx
https://axxvjs.wordpress.com/



motiondetection01

As I started with fast motion-detection action, I uploaded a first test patch. It works best with lots of light in the room (darn) and with Input Menu/Start Live Capture enabled. Hope it works!

Download Patch in this link:
axtest03_sept08_motiondetection01-izz


Pressure Project II

For our second pressure project we were tasked with creating a single-person experience with an interface. As I was already thinking about the story for my final project, I wanted to integrate intimacy and the idea of stories told by the audience. I wanted to create a space in which the viewer was prompted to tell a story and then be an audience to other stories about the same topic. I also wanted to use an unconventional tactile interface.

In the end I created a system which read one of my shirts, looking at its color to determine whether it was there or not. If the shirt was present, it would trigger text on the screen instructing the viewer to tell a story. This would then be captured to the computer and played along with all the other stories after the party was done. In theory this would work, but in practice it kind of failed. I didn't give myself the time and space to calibrate the computer's understanding of the shirt under new lighting conditions, and I had not quite figured out how to use Isadora's Capture to Disk actor.

There were ideas in it that I was very attracted to and will continue to work on. I really like the idea of a tactile interface being an everyday object, especially when the object holds significance or comfort. I was also interested in the different stories that may emerge from someone simply talking to a computer.


Pressure Project 3

For Pressure Project 3 I used two colors as a die to activate different interactivity modes. First I created a square, two-sided (red and blue) paper die on a piece of string.

I introduced the die to the viewer and asked her to experiment with it by showing it to the camera. In order to see the "trick," she needs to match the right side of the die to the right side of the camera. I divided the screen into two halves with the Crop object and assigned red or blue to each side: blue is tracked only on the left side of the camera/screen, and red only on the right.
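The split-screen rule above can be sketched as a simple gate on color and position. This is an illustrative reconstruction, not the actual patch; the coordinate convention (x normalized 0..1, left half below 0.5) and scene names are assumptions.

```python
# Illustrative sketch of the split-screen color rule described above.
# x is a normalized 0..1 horizontal position; names are assumptions.
def detect(color, x):
    """Blue only counts on the left half; red only on the right."""
    if color == "blue" and x < 0.5:
        return "blue_scene"    # whisper soundtrack + blue visuals
    if color == "red" and x >= 0.5:
        return "jackpot"       # clap soundtrack + jackpot illustration
    return None                # wrong side: nothing triggers
```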

The blue color activates a different sound and blue visuals; the color picker also triggers a Type object that indicates " ", in addition to a whisper soundtrack in the background. The red color activates a jackpot! illustration with a changing scale based on the movement value, and a clap soundtrack to indicate the "win."

I used the die as a symbol of the two different image and sound results presented to the viewer. Red symbolizes a winner theme, while blue symbolizes a random/neutral trigger with the murmur and whisper sounds in the background. I used blue to keep the viewer trying to find the right side, and red as the finish point of the game (with a win). It seemed to work on the audience: they found both the red and blue sides.

I tried to come up with a simple die design in order to enable the motion and flexibility of the object in front of the camera. Color coding seemed to work well, which is an important feature to use and learn in visual programming. In addition to the Chroma Key object, I used the Inside Range and Measure Color objects for the first time. The Measure Color object works great, especially for measuring exact RGB pixel values. I am surprised how much I learned from a dice concept: a die can be anything that triggers different functions.

Here is the Isadora file: karaca_pp3-izz

dice

Two-sided paper die

The video is captured from the screen's perspective, so when we see the red side of the die on the video, the camera recognizes the blue side, and vice versa.


Cycle 1, 2 & 3

Hello,

I documented all parts of my process on my wordpress blog:

https://ecekaracablog.wordpress.com/2016/10/17/visualizing-the-effects-of-change-in-landscape/

All the visuals and sketches are on the website, and the video is on the way.

Cycle 1:

While analyzing the data, I came up with a very simple demonstration of the variables, placing them on a visual reference (a map) to show their exact locations in the country. I wanted to show the main location using a map, considering that the audience might not know where exactly Syria is.

Text and simple graphic elements formed the starting point of the project. After I shared it with the class, the feedback I got mainly took the form of curious questions. One of my peers asked, "What is sodium nitrate?" I could tell that he understood the chemical was not good for the environment or for health. Another classmate continued, "They attack cultural areas during wars, like they are trying to destroy the culture or history of nations." That comment helped me add more to the project and consider the different results of the war, all connected to the same ending: damage.

After more research and analysis I had to admit that I couldn't use clear satellite imagery or maps for the infographic, which would have been a great tool. I learned from Alex that, since the war is still ongoing, Google Maps has restrictions in the war areas; all I could get from Google Maps were pixelated, blurry maps. I moved on with abstract visualizations applied to map imagery, and also included photographs from the cities in the final outcome.

Cycle 2:

I believe the feedback I got across the Cycles helped me a great deal to consider the thoughts of my peers. When we work on our projects for long periods, being objective and critical toward them becomes harder. Since I worked on the collection, analysis, sketching, and design of the dataset by myself, I was clear about its details while knowing it was still too complex for viewers. In the end, the dataset and the design are still too complex. That was a decision I made for my first complex interactive information-design project, because the analysis showed me that complexity is the nature of this project: there is catastrophic damage done to a country at war, and even the small detail, an environmental issue, that I tried to visualize is connected to multiple variables in the dataset. Therefore I used the wires to show the direct connections between variables and to communicate that the country is wired with these risks and damages.

Since my peers had seen the work and heard me talk about the project, they were more familiar with it. So I was not sure whether the project itself was clear to them, but I got positive feedback in the second cycle: I was told that the project looks more complete.

After I got feedback from Alex, I started to think about an active/passive mode for the project. I included a sound piece from the war area that is activated by audience members walking down the hall in front of my project. The aim of the sound is to draw the audience's attention to the work and give an idea of the topic. A camera tracks the motion of people passing by and then activates the sound. Since I want the audience to focus on the dataset, I targeted the potential audience with the sound part.

I believe the sound completed the work and created an experience.

I know the work is not very clear or ideal for the audience, but I wanted to push the limits of layering and complexity in this project. Taking the risk of failure, I am happy to share that I learned a lot (from everyone in this class)!

Right now I have a general understanding of Unity; I know how to add to and manipulate code (even if I don't know complex coding), and I learned the logic of Unity prefabs, the Inspector, interactivity, and the general interface. In addition to MAX MSP (which I used last year), I learned to use Isadora, which is far more user-friendly. Finally, I learned to use a third information-design tool, Tableau, which helped me develop the images (under Infographic Data) on my blog.

I received very valuable feedback from my peers and professors. Even if the result was not perfect, I learned to use three software tools in total, had fun with the project, experimented, pushed the limits, and learned a lot!

Please read the complete process from my blog. 🙂


Pressure Project #3

At first, I had no idea what I was going to do for Pressure Project #3. I wasn't sure how to make a reactive system based on dice, which led to me doing nothing but occasionally thinking about it from the day it was assigned until a few days before it was due. Once I sat down to work on it, I quickly realized that I was not going to be able to create a system in five hours that could recognize what people rolled with the dice, so I began thinking about other characteristics of dice and decided to explore those instead. I made a system with two scenes using Isadora and Max/MSP. The player begins by rolling the dice and following directions on the computer screen. The webcam tracks the players' hands moving, and after enough movement it tells them to roll the dice. The loud sound of the dice hitting the box triggers the next scene, where various images of previous rolls appear, with the numbers 1-6 randomly appearing on the screen and slowly increasing in rapidity while delayed and blurred images of the user(s) fade in, until the system sends us back to the first scene, where we are once again greeted with a friendly "Hello."
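The sound-driven scene change described above amounts to a loudness threshold that advances (and eventually wraps) the scene. A minimal sketch, assuming a normalized 0..1 sound level; the threshold and scene names are illustrative, not taken from the actual Max/Isadora patches.

```python
# Rough sketch of the scene-advance trigger described above: a loud
# enough sound (the dice hitting the box) flips to the next scene.
# `threshold` and the scene names are illustrative assumptions.
def next_scene(current, sound_level, threshold=0.7):
    scenes = ["hello", "rolls"]
    if sound_level >= threshold:
        i = scenes.index(current)
        return scenes[(i + 1) % len(scenes)]  # wraps back to "hello"
    return current
```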

The reactions to this system surprised me. I thought that I had made a fairly simple system that would be easy to figure out, but the mysterious nature of the second scene had people guessing all sorts of things about my project. At first, some people thought that I had actually captured images of the dice in the box in real time, because the first images that appeared in the second scene were very similar to how the roll had turned out. In general, the reaction was overall very positive, and people showed a genuine interest in it. I would consider going back and expanding on this piece, exploring the narrative a little more; I think it could be interesting to develop the work into a full story.

Below are several images from a performance of  this work, along with screenshots of the Max patch and Isadora patch.

screenshot-2016-11-03-16-14-56 screenshot-2016-11-03-16-14-51 screenshot-2016-11-03-16-14-38 screenshot-2016-11-03-16-14-32 screenshot-2016-11-03-16-14-23 screenshot-2016-11-03-16-14-20 screenshot-2016-11-03-16-14-12 screenshot-2016-11-03-16-14-07 screenshot-2016-11-03-15-32-50 screenshot-2016-11-03-16-15-09 screenshot-2016-12-09-19-08-44 screenshot-2016-12-09-19-08-40