Peter’s Popping Pressure Project (1)

For my first pressure project, I wanted to explore a gamified concept.  The project has two main components: popping digital bubbles by moving within their volume (shown in the attached video – sorry it’s sideways!), and experiencing yourself from a third-person perspective (not in the video, but experienced in class).

My inspiration for the latter aspect came from staying up until 5am one night while demonstrating VR to friends from my dorm.  We ended up streaming local video calls into the head-mounted display and were able to watch ourselves walk through the environment from a fixed perspective.  I found it profound how quickly my mind identified with my third-person self, and how different the experience of navigating a space is that way.  I wanted to take that idea and make it accessible to everyone, so my solution was to place the computer’s camera at a different location from the screen and encourage users to conceptualize fine motor skills from this new perspective.

That leads into the former means of interaction – popping bubbles.  The bubbles function both as an encouragement to move about the space and as a curiosity: how can we as participants affect them?  The mechanic is that they spawn at random locations on the screen (faster if there is no motion in the environment), and a bubble pops once a user collides with it (as seen from the camera’s perspective).  To add curiosity and engage more senses, each bubble plays a MIDI sound when popped, with its pitch determined by the total motion within the camera’s field of view.
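Outside Isadora, that pitch mapping might look roughly like the Python sketch below; the C-major scale and the 48–84 note range are my own assumptions for illustration, not values from the patch.

```python
# Hypothetical sketch: map a total-motion value (0.0-1.0) to a MIDI note number.
# The C-major scale and the 48-84 range are assumptions, not the patch's real values.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets within one octave

def motion_to_midi_note(total_motion: float, low: int = 48, high: int = 84) -> int:
    """Scale total motion in [0, 1] to a note in a C-major scale between low and high."""
    total_motion = max(0.0, min(1.0, total_motion))   # clamp to the expected range
    raw = low + int(round(total_motion * (high - low)))  # linear mapping
    # snap to the nearest scale degree so busy scenes still sound musical
    octave, semitone = divmod(raw, 12)
    nearest = min(C_MAJOR, key=lambda s: abs(s - semitone))
    return octave * 12 + nearest

print(motion_to_midi_note(0.1))  # quiet room -> low note
print(motion_to_midi_note(0.9))  # lots of motion -> high note
```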

This system works by clipping the incoming video signal into small portions and dividing them into a 10 x 10 grid.  Motion is then detected within each grid cell, and a bubble pops when there is enough motion in a cell that currently contains a bubble.
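A minimal Python sketch of that grid logic, assuming grayscale frames arrive as NumPy arrays and motion is approximated by a simple frame difference (the threshold value is a guess, not taken from the patch):

```python
import numpy as np

GRID = 10                  # 10 x 10 grid, as in the patch
MOTION_THRESHOLD = 12.0    # assumed mean-difference threshold per cell (illustrative)

def motion_grid(prev_frame: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Return a GRID x GRID array of mean per-cell differences between two grayscale frames."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    h, w = diff.shape
    cells = np.zeros((GRID, GRID))
    for row in range(GRID):
        for col in range(GRID):
            cell = diff[row * h // GRID:(row + 1) * h // GRID,
                        col * w // GRID:(col + 1) * w // GRID]
            cells[row, col] = cell.mean()
    return cells

def pop_bubbles(bubbles: set, cells: np.ndarray) -> set:
    """Remove (row, col) bubbles whose cell shows enough motion; return the popped ones."""
    popped = {b for b in bubbles if cells[b] > MOTION_THRESHOLD}
    bubbles -= popped
    return popped
```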

It was very enjoyable to observe everyone interact with this system; much like when we stepped back and watched our “Conway’s Game of Life” experience run in the Motion Lab, taking the time to simply observe the users instead of their associated video makes the experience feel truly performative.  The rules set forth by this patch caused users to move both delicately and chaotically in attempts to discover what the system’s rules are.  I found it interesting that many users did not connect the popping of bubbles to their movements, or the audio to the popping, until 2–4 minutes into the experience (though, as the person who designed the system, I was quite blind to how it would look to a first-timer).  Overall this project was very fun, and I enjoyed seeing how everyone interacted with these different systems!

pp1 <– Isadora patch

 


Pressure Project 1 __ Taylor

Under Pressure, dun dun dun dadah dun dun
Alright… so I got super frustrated fiddling around in Isadora, and my system ended up being built from what I learned from my failures in the first go-round. Which was nice, because I was able to plan based on what I couldn’t and didn’t want to do, leading me to make simpler choices. It seemed like everyone was enthusiastically trying to grok how to interact with the system, and it felt like it kept everyone entertained and pretty engaged for more than 3 minutes. Some of the physical responses to my system were getting up on the feet, flailing about, moving left to right and using the depth of the space, clapping, whistling, and waving. Some of the aesthetic responses were that the image reminded them of a microscope or a city. I tried to use slightly random and short intervals of time between scenes and build off simple rules and random generators (and slight variations of the like), in an attempt to distract the brain away from connecting the pattern. For a time this seemed successful, but after many cycles, once people found out that the complexities were more perceived than programmed, enthusiasm waned. I really enjoyed this project and the ideas of scoring and iteration that accompanied it.
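For what it’s worth, the randomized scene timing idea could be sketched like this in Python; the scene names and the 2–6 second hold times are placeholders, not the values in my patch.

```python
import random
import time

SCENES = ["scene_a", "scene_b", "scene_c"]   # placeholder scene names

def run_cycle(cycles: int = 3) -> None:
    """Jump between scenes at short, slightly random intervals to obscure the pattern."""
    for _ in range(cycles):
        scene = random.choice(SCENES)
        hold = random.uniform(2.0, 6.0)       # assumed 2-6 second hold, purely illustrative
        print(f"showing {scene} for {hold:.1f}s")
        time.sleep(hold)

run_cycle(1)
```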
Actors I jotted down that I liked in others’ processes: {user input/output / create inside user actor} {counter / a certain # of things can trigger} {alpha channel / alpha mask} {chroma keying / color tracking} {motion blur / can create pathways with rate of decay?} {gate / can turn things on/off} {trigger value}
Reading things:
media object- representation
interaction, character, performer
scene, prop, Actor, costume, and mirror
space & time | here, there, or virtual / now or then
location anchored to media (aural possibilities), instrumental relationship, autonomous agent / responsive, merging to define identity (cyborg tech), “the medium not only reflects back, but also refracts what is given” (love this). The interplay between dramatic function / space / time is the real power — an expansive range of performative possibilities (around p. 107–108, maybe).

screen-shot-2016-09-21-at-11-12-01-pm

screen-shot-2016-09-21-at-11-11-49-pm

pp1_taylor-izz


Pressure Project 1

Pressure Project 1:
1. I aimed to control the jumps between scenes using the camera feed, and also arranged automatic jumps using a trigger delay. The sudden jumps between scenes created playfulness and unpredictability. The jump from the first to the second scene is controlled by movement: once the triangle is pushed to the upper part of the scene by the movement feed, the scene jumps to the next one.
2. I used the sound watcher to control the density of the explode filter.
3. There are two different colorizers in two different scenes. The first is controlled by a wave generator; the second is controlled by movement from the camera feed.
4. The 4th scene contains a looping animation; based on the horizontal movement of the viewer, the colors change between cyan and magenta or yellow and purple.
5. The last scene is controlled by color. Using chromakey objects, I paired a cyan square with a blue chromakey and a yellow square with a yellow chromakey. Each square therefore follows the object it is paired with (the blue object’s movement controls the cyan square, and the yellow object’s movement controls the yellow square). A rough sketch of this kind of color tracking is included after this list.
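Outside Isadora, that kind of color-based following could be sketched roughly like this; the HSV range and the OpenCV approach are illustrative assumptions, since the actual patch uses Isadora’s chromakey objects rather than code.

```python
import cv2
import numpy as np

# Assumed HSV range for a yellow object; purely illustrative, not taken from the patch.
YELLOW_LOW = np.array([20, 100, 100])
YELLOW_HIGH = np.array([35, 255, 255])

def track_yellow(frame_bgr: np.ndarray):
    """Return the (x, y) centroid of the yellow region in a BGR frame, or None if absent."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, YELLOW_LOW, YELLOW_HIGH)   # 1 where the pixel is "yellow enough"
    moments = cv2.moments(mask)
    if moments["m00"] == 0:
        return None
    return int(moments["m10"] / moments["m00"]), int(moments["m01"] / moments["m00"])

# The yellow square on screen would then be redrawn at the returned centroid each frame.
```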

This was a very helpful assignment; I learned a lot about interactivity. It is also fun to watch the contributions of the classmates, which carries the project to a different level.

karaca_pp1-izz


Folding Four

I realized I haven’t published my drafts yet. So here is the playing with game rules post:

https://www.dropbox.com/home/Camera%20Uploads?preview=2016-09-05+14.21.02.jpg

Axel and I modified Connect 4 by adding new rules to the game.

  1. After every 3rd move by each player, player 1 folds a line.
  2. The folded line is then unfolded after 3 more moves by each player.
  3. The first rule applies again: 3 more moves later, player 2 folds a line.
  4. Repeat rule 2.

This rule set repeats throughout the game (a small sketch of the fold/unfold cycle is below).
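Just to make the cycle concrete, here is a tiny Python sketch of the fold/unfold schedule, where a “round” means one move by each player.

```python
def fold_schedule(total_rounds: int = 12) -> None:
    """Print which player folds a line every 3 rounds, alternating, with unfolds in between."""
    folder = 1
    for rnd in range(1, total_rounds + 1):
        if rnd % 3 != 0:
            continue                      # folds only happen every 3rd round
        step = rnd // 3
        if step % 2 == 1:
            print(f"round {rnd}: player {folder} folds a line")
        else:
            print(f"round {rnd}: the folded line is unfolded")
            folder = 2 if folder == 1 else 1   # the other player folds next time

fold_schedule(12)
```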

Adding and hiding lines for short time frames adds an unexpected and playful aspect to the game. It is now harder to predict who is going to win, which makes it more exciting.

It is fun to modify a game that all of us are familiar with, since the new rules change its dynamics. It takes some time to understand and get used to the modified game, which brings back the excitement of being a kid.

It is also very interesting to see how small modifications and additive elements can change a simple system like a game. Going through each step and rule of the game in order to understand its dynamics, and then changing or modifying it, is a simple but effective exercise for learning about systems.


can i do anything to make it change?

https://osu.box.com/s/vwqcomm5zxzndy0ibt0vxro5xylqm6ah

I learned a ton from seeing folks’ processes and getting to experience other people’s projects.  In particular, I’m beginning to have an inkling of how someone might animate an isolated square of a video.

Cityscape… it is pretty magical what an audience can create.

People thought there were more interactive opportunities than there actually were, because they were primed for it.

I realized I wanted to give more clues as to where to cause reactions, and bigger reactions.


SVRPVRPSV cycles

Resources / Scores / Valuaction / Performance

Understanding many of the activities and everyday components of our lives as sets of Scores is, to me, a curious possibility.  From our shopping lists to class schedules, bar menus, phone contacts, and directions on Google Maps, the cues that drive the direction of our daily processes can be seen and triggered as cue-based instances – as if we were playing out the visuals of our daily performances, controlled by the resources of our body-driven decisions.  I agree that all parts of the process constantly interact, with no specific order.  How else would art creation happen?  There are no specific rules for this, at least in my opinion.


RSVP Cycles

“Scores are symbolizations of processes.” I have been taught that process has a crucial importance in design education. Documenting and reflecting on the process helps the student and the instructor understand possible improvements to the design piece, so the RSVP cycle of each project affects the learning process and future work. For instance, the main focus of an MFA thesis is producing a documented score as an outcome. It might be a project, research, or both, but it should be documented through the thesis writing. Resources should be analyzed through secondary research; Scores should be documented in order to show the process of reaching the actual outcome; Valuaction of the process informs possible revisions of the outcome; and Performance, finally, is the documented thesis work and possible solution, or the project itself. Learning from the process, and reflecting on it, is possible through the RSVP cycle.

“Scores face the possible, goals face the impossible.” Scores also make the impossible possible, because success requires time and the improvement of process.

“Scores are ways of symbolizing reality of communicating experience through devices other than the experience itself.” Then I assume that the software is a score. Very interesting.


Human perception. Virtual Reality. The resolution of your brain. Please watch half of this video!

So, this week – instead of a reading on the subject – please watch the second half of this video:
https://www.youtube.com/watch?v=UDu-cnXI8E8&ab_channel=RuthalasMenovich

(This channel has some fun stuff on it as well.)

At the very least, watch the second, curly-haired presenter. He starts around 42 minutes.  Use full screen to take advantage of the illusions.

For bonus points: watch the whole thing and be ready to talk about FB’s vision for VR.

Bonus Readings:

Just remember to breathe: http://digg.com/video/augmented-reality-future

 

“The basic effect of the virtual reality world is to simulate the neurological conditions that might be experienced in memory and learning disorders. This insight is a good thing, as neuroscientists move closer and closer toward effective treatments for those disorders.” :   http://motherboard.vice.com/read/why-the-brain-cant-make-sense-of-virtual-reality?utm_source=mbfbvn

 


Pressure Project #1

For my first pressure project, I wanted to create a system that was entirely dependent on those observing it. I wanted the system to move between two different visual ideas upon some form of user interaction. I used the Eyes++ actor to control the sizes, positions, and explode parameters of two different shapes, while using the loudness of the sound picked up by the microphone to control explode rates and various frequency bands to control the color of the shapes. The first scene, named "Stars," uses two shapes sent to four different explode actors, with one video output of the shapes actor delayed by 60 frames. The second scene, "Friends," uses the same two shapes, exploding in the same way, but this time adds an alpha mask on the larger shapes using the incoming video stream, with either a motion blur effect or a dots filter.
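A rough sketch of how frequency bands might map to a color; the three-band split and the band-to-channel assignment are my own assumptions, not how the Isadora patch is actually wired.

```python
def bands_to_rgb(low: float, mid: float, high: float) -> tuple:
    """Map three 0-1 frequency-band levels to an 8-bit RGB color (illustrative mapping)."""
    clamp = lambda v: max(0.0, min(1.0, v))
    return (int(clamp(low) * 255),    # bass drives red (assumed)
            int(clamp(mid) * 255),    # mids drive green (assumed)
            int(clamp(high) * 255))   # highs drive blue (assumed)

print(bands_to_rgb(0.8, 0.3, 0.1))   # bass-heavy sound -> reddish shape color
```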

Before settling on the idea of using sound frequencies to determine color, I originally wanted to determine the color based on the video input. Upon implementing this, I found that my frame rate dropped dramatically, from approximately 20-24 FPS to 7-13 FPS, thus leading me to use the sound frequency analyzer actor instead.

While I was working on this project, I spent a lot of time sitting directly in front of my computer, often in public places such as the Ohio Union and the Fine Arts Library, which meant I tested the system with much more subtle movements and quieter audio input. In performance, everyone engaged with the system from much further back than I had the opportunity to work with, and at a much louder volume. Because of this, the shapes moved around the screen much more rapidly, which led to the first scene ending rather quickly and the second scene lasting much longer. The reason is that the transition into the second scene was triggered by a certain number of peaks in volume, whereas the transition back to the first scene was triggered by a certain amount of movement across the incoming video feed.
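A small sketch of the peak-count trigger; the threshold and the number of peaks are placeholders, since the actual values in the patch are not documented here.

```python
PEAK_THRESHOLD = 0.6    # assumed loudness threshold on a 0-1 scale (illustrative)
PEAKS_TO_ADVANCE = 8    # assumed number of peaks before jumping to the second scene

def should_advance(levels) -> bool:
    """Return True once enough loudness peaks (rising edges above the threshold) are seen."""
    peaks, above = 0, False
    for level in levels:
        if level > PEAK_THRESHOLD and not above:
            peaks += 1                          # count only the rising edge of each peak
        above = level > PEAK_THRESHOLD
        if peaks >= PEAKS_TO_ADVANCE:
            return True
    return False

print(should_advance([0.2, 0.7, 0.3, 0.8, 0.2] * 5))   # a loud, lively room trips the transition
```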

If I had more time, one thing I would improve is fine-tuning the movement of the shapes across the screen so they would seem a little less chaotic and a little more fluid. I would also want to spend some time optimizing the transition between the scenes, as I noticed a visible drop in frame rate there.

Zip Folder with the Isadora file: macdonald_pressure_project01


pressureproject1_danny

My pressure project began very ambitiously. The idea was to use the webcam, the Eyes++ actor, and Isadora’s sound recognition actors to have users interact with the scene to the beat of the music. However, my PC’s webcam would often misbehave, and the system would lag as I attempted to run all of these actors in conjunction, especially when I tried to incorporate chroma key-based tracking and interactions. The focus of the idea was more on sound than on webcam interaction, so I scrapped the webcam element and made the process interact with sound, and only sound.
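A minimal sketch of the sound-only idea, where a loudness envelope drives a visual parameter and sharp rises act as a rough beat cue; the numbers are illustrative, not taken from the project, which used Isadora’s sound actors rather than code.

```python
def loudness_to_scale(level: float, prev_level: float) -> float:
    """Map loudness (0-1) to a shape scale, adding a pulse on sharp rises as a rough beat cue."""
    base = 0.5 + level * 1.5                              # louder sound -> bigger shape
    pulse = 0.5 if (level - prev_level) > 0.3 else 0.0    # assumed rise threshold for a "beat"
    return base + pulse

prev = 0.0
for lvl in [0.1, 0.2, 0.7, 0.4, 0.9]:                     # fake loudness readings
    print(round(loudness_to_scale(lvl, prev), 2))
    prev = lvl
```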

Interestingly, the reaction to the project in action during class led to my peers asking me to turn the music off so they could interact with the scene using their own voices and hand claps. It was an unexpected outcome, indeed.