SVRPVRPSV cycles

Resources / Scores / Valuaction / Performance

Understanding many activities and everyday components of our lives as a set of Scores is, to me, a curious possibility. From our shopping lists to class schedules, to bar menus, phone contacts and directions on Google Maps, the cues that drive the direction of our daily processes can be seen and triggered as cue-based instances, as if the visuals of our daily performances were being played back, controlled by the resources behind our body-driven decisions. I agree that all parts of the process constantly interact, with no specific order. How else would art creation happen? There are no specific rules for this, at least in my opinion.


RSVP Cycles

“Scores are symbolizations of processes.” I’ve been taught that process is of crucial importance in design education. Documenting and reflecting on the process helps both the student and the instructor understand possible improvements to the design piece. Therefore the RSVP cycle of each project will affect the learning process and future work. For instance, the main outcome of an MFA thesis is a documented score. It might be a project, research, or both, but it should be documented in the thesis writing. Resources should be analyzed through secondary research; scores should be documented in order to show the process of arriving at the actual outcome; valuaction of the process enables possible revisions of that outcome; and finally, the performance is the documented thesis work and its proposed solution, or the project itself. Learning from the process, and also reflecting on it, is possible through the RSVP cycle.

“Scores face the possible, goals face the impossible.” Scores also make the impossible possible, because success requires time and the improvement of process.

“Scores are ways of symbolizing reality of communicating experience through devices other than the experience itself.” Then I assume that the software is a score. Very interesting.


Human perception. Virtual Reality. The resolution of your brain. Please watch half of this video!

So, this week – instead of a reading on the subject – please watch the second half of this video:
https://www.youtube.com/watch?v=UDu-cnXI8E8&ab_channel=RuthalasMenovich

(This channel has some fun stuff on it as well.)

At the very least, watch the second, curly-haired presenter. He starts around 42 minutes. Use full screen to take advantage of the illusions.

For bonus points: watch the whole thing and be ready to talk about Facebook's vision for VR.

Bonus Readings:

Just remember to breathe: http://digg.com/video/augmented-reality-future

 

“The basic effect of the virtual reality world is to simulate the neurological conditions that might be experienced in memory and learning disorders. This insight is a good thing, as neuroscientists move closer and closer toward effective treatments for those disorders.” :   http://motherboard.vice.com/read/why-the-brain-cant-make-sense-of-virtual-reality?utm_source=mbfbvn

 


Pressure Project #1

For my first pressure project, I wanted to create a system that was entirely dependent on those observing it. I wanted the system to move between two different visual ideas upon some form of user interaction. I used the Eyes++ actor to control the sizes, positions, and explode parameters of two different shapes, while using the loudness of the sound picked up by the microphone to control explode rates and various frequency bands to control the color of the shapes. The first scene, named “Stars,” uses two shapes sent to four different explode actors, with one video output of the shapes actor delayed by 60 frames. The second scene, “Friends,” uses the same two shapes, exploding in the same way, but this time adds an alpha mask on the larger shapes from the incoming video stream, with either a motion blur effect or a dots filter.
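An Isadora patch doesn’t translate one-to-one into code, but the audio-to-visual mapping described above (loudness driving explode rate, frequency bands driving color) boils down to something like the Python sketch below. The ranges and the three-band split are purely illustrative, not the actual values or actor names from my patch.

def scale(value, in_min, in_max, out_min, out_max):
    """Clamp and linearly remap a value, much like Isadora's value-scaling actors."""
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)


def map_audio_to_visuals(loudness, low_band, mid_band, high_band):
    """Illustrative mapping; inputs are assumed to arrive as 0-100 levels."""
    explode_rate = scale(loudness, 0, 100, 0.0, 1.0)   # louder input -> faster explode
    red = scale(low_band, 0, 100, 0, 255)              # bass drives red
    green = scale(mid_band, 0, 100, 0, 255)            # mids drive green
    blue = scale(high_band, 0, 100, 0, 255)            # highs drive blue
    return explode_rate, (red, green, blue)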

Before settling on the idea of using sound frequencies to determine color, I originally wanted to determine the color based on the video input. Upon implementing this, I found that my frame rate dropped dramatically, from approximately 20-24 FPS to 7-13 FPS, thus leading me to use the sound frequency analyzer actor instead.

While I was working on this project, I spent a lot of time sitting directly in front of my computer, often in public places such as the Ohio Union and the Fine Arts Library, which meant I tested the system with much more subtle movements and quieter audio input. In performance, everyone engaged with the system from much further back than I had the opportunity to work with, and at a much louder volume. Because of this, the shapes moved around the screen much more rapidly, which led to the first scene ending rather quickly and the second scene lasting much longer. This was because the transition into the second scene was triggered by a certain number of peaks in volume, whereas the transition back to the first scene was triggered by a certain amount of movement across the incoming video feed.
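In pseudocode terms, those two triggers amount to a pair of counters. Here is a minimal Python sketch of that logic; the threshold numbers are made up for illustration and are not the values in the actual Isadora file.

class SceneController:
    """Sketch of the two transition triggers described above (illustrative thresholds)."""

    PEAK_THRESHOLD = 0.8      # loudness level that counts as a "peak"
    PEAKS_TO_ADVANCE = 10     # peaks needed to move from "Stars" to "Friends"
    MOTION_TO_RETURN = 500.0  # accumulated motion needed to move back to "Stars"

    def __init__(self):
        self.scene = "Stars"
        self.peak_count = 0
        self.motion_total = 0.0

    def update(self, loudness, motion_amount):
        if self.scene == "Stars":
            if loudness > self.PEAK_THRESHOLD:
                self.peak_count += 1
            if self.peak_count >= self.PEAKS_TO_ADVANCE:
                self.scene, self.peak_count = "Friends", 0
        else:  # currently in "Friends"
            self.motion_total += motion_amount
            if self.motion_total >= self.MOTION_TO_RETURN:
                self.scene, self.motion_total = "Stars", 0.0
        return self.scene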

One thing I would want to improve if I had more time is the movement of the shapes across the screen, fine-tuning it so it would seem a little less chaotic and a little more fluid. I would also want to spend some time optimizing the transition between the scenes, as I noticed a visible drop in frame rate there.

Zip Folder with the Isadora file: macdonald_pressure_project01


pressureproject1_danny

My pressure project began very ambitiously. The idea was to use the webcam, the Eyes++ actor, and Isadora’s sound recognition actors to have users interact with the scene to the beat of the music. However, my PC’s webcam would often misbehave, and the system would lag as I attempted to run all of these actors in conjunction, especially when I tried to incorporate chroma-key-based tracking and interactions. The focus of the idea was more on sound than on webcam interaction, so I scrapped the webcam element and made the piece respond to sound, and only sound.

Interestingly, when the project ran during class, the reaction led to my peers asking me to turn the music off so they could interact with the scene using their own voices and hand claps. It was an unexpected outcome, indeed.


The Things #1

First

That thing I was talking about in class.

Assent

Related to that, Currents International New Media is an awesome place to intern, as I did this summer, and also to present work. It is small but truly a one-of-a-kind place.

Second

Get tickets to Driscoll and 3 Acts 2 Dancers 1 Radio Host.


The Fly

This is Fly

Make sure your camera is on.

Try clapping at it.

https://www.dropbox.com/s/rx077164kz1ikai/fly.izz?dl=0

I had no clue where I was going with this, to the point that I was frustrated, and then I realized that this frustration could be one of my resources. One of the more frustrating things in this world is flies. Inspired by these creatures and the need to use the Eyes++ actor, I created a digital fly that moves when you move and then rests on the last thing that moved, presumably you. I am not a computer programmer, and the logic to make this work took most of my time. The need to use the Jump actor forced me to consider how to progress the story. As flies are one of the most annoying things in the world (maybe slight hyperbole), the most satisfying thing (again, hyperbole) is to get them. I integrated a system whereby the scene jumps when movement is detected around the fly and a sound as loud as a clap is heard. To capture the frustration created by a fly, I made it so that once you killed it, more would show up. While doing this, I made a scene in which a fly would just fly around. The previous scene jumps to it at one point; I decided it was fly heaven, and to get back you have to apologize. I worked out the kinks and the counter actors, made it a little more pretty, and here is Fly.
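For anyone curious, the core behavior boils down to something like the Python sketch below. This is just an illustration of the logic, not the Isadora patch itself, and the clap level, swat distance, and screen size are made-up numbers.

import random

class FlyPatch:
    """Illustrative sketch of the fly's behavior (not the actual patch)."""

    CLAP_LEVEL = 0.9     # loudness that counts as a clap
    SWAT_DISTANCE = 60   # how close motion must be to the fly to count as a swat (pixels)

    def __init__(self):
        self.fly_pos = (random.uniform(0, 640), random.uniform(0, 480))
        self.fly_count = 1
        self.scene = "fly"

    def update(self, motion_center, motion_amount, loudness):
        dx = motion_center[0] - self.fly_pos[0]
        dy = motion_center[1] - self.fly_pos[1]
        near_fly = (dx * dx + dy * dy) ** 0.5 < self.SWAT_DISTANCE

        if motion_amount > 0 and near_fly and loudness > self.CLAP_LEVEL:
            # A swat: movement right around the fly plus a clap-loud sound.
            self.fly_count += 1          # once you kill one, more show up
            self.scene = "fly heaven"    # jump to the fly-heaven scene
        elif motion_amount > 0:
            # Otherwise the fly chases the movement and rests on whatever moved last.
            self.fly_pos = motion_center
        return self.scene, self.fly_count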

Once I finished this assignment, I wondered if it was generative. I came to the conclusion that it is not completely generative. There are fairly basic set outcomes, but how you get there is different. The user has a score, but what they do can differ. Also, aspects of the fly are completely randomized, including movement and color.


MuBu (Gesture Follower for Sound/Motion Data)

Hello all,

Here is a link to the audio/gesture recognition software I mentioned in class today. These are external objects for Max/MSP. If you don’t have a license for Max, you should be able to open the help patches in Max Runtime or an unlicensed version of Max/MSP (similar to Isadora, you’ll be able to use it but not able to save).

I found out that there is a version for Windows, but it’s not listed under the downloads; once you navigate to downloads, you’ll need to view the archives for MuBu and download version 1.9.0 instead of 1.9.1. I’ve never used it on Windows so I’m not sure how well it works, but I thought this could be of use to someone.

http://forumnet.ircam.fr/product/mubu-en/


Cirque du Soleil Media Workshop

Hello everyone, here’s a link to the Cirque du Soleil Media workshop I mentioned in class today: http://www.capital.edu/cirque/


SOLAR

Imagine entering a machine, supplying the coordinates of a city and a specific moment in time, and receiving in response the direction, the intensity, and the sensation of the heat and light that the sun radiated at that point in time and space.
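If you are curious what a machine like that actually has to compute, here is a minimal Python sketch of the sun's direction and intensity for a given place and moment, assuming the third-party pysolar package; this has nothing to do with the artists' own implementation.

from datetime import datetime, timezone

from pysolar.radiation import get_radiation_direct
from pysolar.solar import get_altitude, get_azimuth

# Linz, Austria (home of Ars Electronica) at an arbitrary afternoon moment.
lat, lon = 48.3069, 14.2858
when = datetime(2016, 9, 8, 13, 0, tzinfo=timezone.utc)  # pysolar needs an aware datetime

altitude = get_altitude(lat, lon, when)            # degrees above the horizon
azimuth = get_azimuth(lat, lon, when)              # compass direction of the sun
intensity = get_radiation_direct(when, altitude)   # rough direct radiation in W/m^2

print("sun direction: azimuth %.1f deg, altitude %.1f deg" % (azimuth, altitude))
print("approximate intensity: %.0f W/m^2" % intensity)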

((((((((((SOLAR))))))))))))

This is a piece I saw at Ars Electronica; I hope you guys find it useful for your projects.
Axel