PP2: Etch-a-Sketch

For Pressure Project 2, I created a sequential interactive experience that ends in a “human etch-a-sketch.”

The Project

For the earlier scenes, I took inspiration from Charles Csuri’s “Swan Lake” piece. I isolated the user’s outline and applied different effects to it. Here are screencaps of my patches for some of these scenes:

“Split scan” scene in which user’s outline changes color and is split down the middle of the screen
“Lines” scene in which four lines’ length and movement are influenced by user’s movement
“Hexagon” scene in which the size of the hexagons is influenced by the speed of the user’s movement

I was also inspired by Csuri’s “Sine Curve Man” and hummingbird drawing, so I wanted to include something that had a feeling of drawing to it. I stumbled upon the Live Drawing actor in Isadora, and figured out how to attach it to a user and have it draw along with their movement. Patch below:

The Eyes++ actor connects the position of the blob’s centroid to the position of the “cursor” in the Live Drawing actor. In order to erase, I connected a Keyboard Watcher to the Explode actor, so the drawing would seemingly erase into small particles whenever the user pressed the ‘c’ key.
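The patch itself is built from Isadora actors, but the data flow can be sketched in plain code. The following is a hypothetical Python illustration (the class and method names are my own, not Isadora’s): blob-centroid updates drive a drawing cursor, and the ‘c’ key clears the accumulated stroke, standing in for the Explode-based erase.

```python
# Hypothetical sketch of the patch's data flow (not Isadora code):
# blob centroid -> drawing cursor; pressing 'c' erases the drawing.
class LiveDrawing:
    def __init__(self):
        self.stroke = []                 # points drawn so far

    def move_cursor(self, x, y):
        self.stroke.append((x, y))       # Eyes++ centroid drives the cursor

    def on_key(self, key):
        if key == "c":                   # Keyboard Watcher triggers the erase
            self.stroke.clear()

d = LiveDrawing()
d.move_cursor(0.2, 0.5)
d.move_cursor(0.3, 0.6)
d.on_key("c")
print(len(d.stroke))  # 0
```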

As soon as I made this, I got the feeling it was like an etch-a-sketch, a kid’s toy where the user “draws” with two knobs and shakes the board to erase it. This vision gave me a much clearer image of my project as a whole, as I now saw it as a celebration of play and youthfulness. I spent a long time testing the etch-a-sketch scene to make sure the tracking was smooth enough to use intuitively. However, after testing for a long time, I realized I should include instructions as this would be shown both in class and in a gallery-style open house.

Knowing the details of the performance context gave me a new perspective with which to instruct the user. I decided to make them quick and to the point, just instructing them to press certain keyboard buttons to prompt an erase, for instance. I also set a user-controlled loop on the entire project, so it doesn’t require extra setup if a new user wants to start the experience over for themselves.

The Performance

I was very pleased with the outcome of this performance, but still glad we get a chance to iterate on it in preparation for the open house; there were many moments in which I realized new possibilities for “bugs” in the program when watching others test it. When creating the project, I was so used to testing it myself that there were certain user-interaction possibilities I overlooked. For instance, one of the scene changes triggers based on the speed of the user’s movement. After about five seconds, the program prompts them to wave quickly, which should trigger the next scene. However, this scene got skipped in the first run in class because Arvcuken (who did a great job of demonstrating the program) entered the scene while moving.

It was fascinating to see the class’s reactions to the project, especially the interaction aspect. I had a great time watching Arvcuken interact with the program; it gave a lot of new insight as to how someone new to the patch would experience it.

Hearing feedback was especially helpful. I appreciated CG’s comment on the aesthetic of the colors I used. In most scenes involving color, I drove the Colorizer actor with three different Wave Generators: one each for red, green, and blue. This changed the colors somewhat randomly throughout the scenes. I added this aspect after I discovered the playfulness of the entire piece. The program encouraged discovery through movement, which is essential to childhood. I wanted to bring up those feelings of playfulness and discovery in the user by adding bright colors to the scenes.
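The three-wave-generator idea can be sketched outside Isadora. In this hedged Python illustration (the frequencies are made-up values, not from my patch), three sine waves at slightly different rates each drive one color channel, producing colors that drift in a way that feels random without ever repeating quickly:

```python
import math

def wave_color(t, freqs=(0.11, 0.17, 0.23)):
    """Three 'wave generators', one per RGB channel, each mapped from [-1, 1] to [0, 255]."""
    return tuple(
        int((math.sin(2 * math.pi * f * t) + 1) / 2 * 255)
        for f in freqs
    )

# Sampling at a few times gives slowly drifting, quasi-random colors:
for t in (0.0, 1.0, 2.0):
    print(wave_color(t))
```

Because the three frequencies are not simple multiples of one another, the combined color cycle takes a long time to repeat, which is what makes it read as “somewhat random” to a viewer.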

I also appreciated Amy’s comment about the playfulness of the etch-a-sketch scene. Her interpretation of the piece bringing up childhood memories was almost exactly my intention, and I was overjoyed that it actually came through for someone in the experience.

Niko made a good point about the instructions being helpful as a framing device from an accessibility standpoint. This was a new perspective that I hadn’t considered while inputting the instructions, but I greatly valued that as feedback. I originally intended the instructions to provide context and guidance for any user, but knowing that it’s helpful for accessibility is great information to have. Having a range of users in mind while iterating is essential to the RSVP cycle, so it was a welcome reminder of audience awareness.

Overall, I valued the experience of creating and receiving feedback on this project. Seeing people actually interact and “play along” with something I created was new to me, but a very welcome experience. I definitely know how I’m going to iterate on this for PP3, and I look forward to getting to work on it more after testing it with users.


Reflection on PP2

The final piece for this project became a spiral color transition interacting with viewers’ motion.

But originally, I started with shapes floating on a monitor, moving along with motion. Using the x, y coordinates and speeds of blobs from the Eyes actor, I tried to make transformations happen on screen that were generated by a computational process; this idea came from Charles Csuri’s Hummingbird (1967).

This piece could have been the final output, but I found that the Get Stage Image actor can multiply transforming images on screen. I made a spiral structure that creates “echoes” of the image transition by spinning the projector.
It totally changed the visual of my original plan, but I was satisfied that this structure created a relationship with the “past” actions completed just a second before: a more time-transitional motion.


Sound Reactivity Lab Activity

My sound clip was the main theme from the 2010 film The Social Network, as I love the contrast present in the piece. I tried to roughly mimic that with the main scene I worked on in class. I spent some time on other scenes as well, but I ended up liking my first try the best. I tried to make a symmetrical representation of the sound by using the frequency bands actor. I connected different rectangles’ heights to this actor in order to replicate audio “sound bars.”

Above is a clip of the output. I’ve also attached a zip file of the whole project below which includes the two other scenes I played with.


Sound Reactivity LAB

In this sound reactivity patch made with Lexi, I animated three orbiting circles to music. The movement of each circle is generated by Sound Frequency Bands, run through a combination of Smoother, Limit-Scale Value, and Color Maker RGBA. The movement idea came from using Spinner and Wave Generator, and the result is finally filtered through Motion Blur at the end. I really like the way the movement reacts to the beat by jumping in and out of orbit in a symmetrical way.

My biggest takeaway from putting this together was the usefulness of the Limit-Scale Value actor. I also realized I was not clear on what limit min/max vs. out min/max meant, and Lexi helped me understand.
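My working understanding of the distinction: limit min/max clamp the incoming range, and out min/max define the range the clamped value is rescaled into. A minimal Python sketch of that behavior (my own approximation, not Isadora’s implementation):

```python
def limit_scale(value, limit_min, limit_max, out_min, out_max):
    """Clamp value to [limit_min, limit_max], then linearly rescale to [out_min, out_max]."""
    clamped = max(limit_min, min(limit_max, value))
    t = (clamped - limit_min) / (limit_max - limit_min)
    return out_min + t * (out_max - out_min)

# A frequency-band level of 0-100 mapped to, say, a circle radius of 20-80:
print(limit_scale(50, 0, 100, 20, 80))   # 50.0 (midpoint maps to midpoint)
print(limit_scale(150, 0, 100, 20, 80))  # 80.0 (out-of-range input is clamped)
```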


IZZ feeding sound practice

In this practice, I fed sound files of my own humming, my heartbeat, and footsteps into Isadora. The raw numbers from the Sound Frequency Bands actors were manipulated through Limit-Scale Value, Calculator, and Smoother actors before being plugged into the Shapes actor, changing the location, scale, rotation, and color of the objects. I also used duplicated User Actors to create multiples of the same moving object.


Audio Project

My goal for this short project was simply to play with audio-related actors.
I picked multiple values at random from the Sound Frequency Bands actor and connected them to many kinds of outputs, such as shapes, positions, and color transitions/transformations.

Even though there was no order in those many connections between input and output values, the final video appears in sync with the music (of course, since the inputs come from the music).

The intro of Blue Train by John Coltrane shows this especially clearly (as it alternates between peaks and silence).

I really enjoyed the project and got a sense of how I can use sound actors to turn audio data into inputs or triggers.


Lawson: Audio Visualization

Here’s my audio visualization project!

Link to Isadora and Sound Files:

https://buckeyemailosu-my.sharepoint.com/:f:/g/personal/lawson_647_buckeyemail_osu_edu/EiouevvOn99IlmKdcooZq9gBWtffANWgKvOl8XfZ60E7NA?e=9kqVLL


Sound-Reactive Project

For my sound-reactive project, I analyzed the input from the microphone and used different parts of the audio spectrum to shade different parts of a moving, “shooting star”-like visual field.

Ultimately, I wasn’t able to get any audio-routing software working on the ACCAD computer I was using (permissions, etc.). Fortunately, I have prior experience with software like this, and I feel comfortable incorporating it into future projects (on my home desktop).

Link to Isadora file:

visualizer 2.izz


PP1: Experiments in Synchromism

In this first plunge into Isadora, my main goal was to get to know the program through experiments in seeing. Using Synchromism as a guide (an art movement that draws analogies between color and music), I focused on orchestrating colors and the pacing between shapes.

In the first scene, there are three elements at play: the pink transformer emerging from the depths, a green pulsing square sponge, and a blurry animated layer that gently envelops the two. [CG mentioned that the shapes had an organic feel; this may have something to do with the way they ‘breathe.’] The pacing of each shape was an important part of this scene, as a response to the prompt’s challenge of 20 seconds. I felt that if I could let the viewer watch the emerging pink transformer with anticipation, they would get the idea that you have to ‘wait and see’ rather than assume the loop will continue. In the programming, a wave generator altered color (+alpha), scale, and facets.

In the second scene, which was meant to be a continuation of the play of pacing, I encountered some technical difficulties (which Alex helped to resolve!). I wanted to create a prism where each square undulated at a subtly different speed so that they would influence the color of the next layer. You would ideally see all the layers in an unending loop; it sounded like a harp in my head. This worked for a good while until I created a new scene, and then everything synced to one pulse, erasing the careful layering I had done earlier. Having spent so much time on that, I felt quite frustrated and gave up. The programming for this section comprised a multitude of squares using wave generators to influence color + alpha. I placed the squares so they would overlap, three vertically and three horizontally. Next time, I’ll make the squares very small rather than overlapping so they act like pixels.

To be continued.

Conclusion: I learned a lot! I love the suggestion of the class to arrive at a place of clarity. Perhaps, another shape could emerge that is an amalgamation of the prism. I also really enjoyed seeing everyone’s interpretation of the prompts and was particularly impressed by the imaginative storytelling in every project. Kudos!


Pressure Project 1: Square Race

My idea for this project came about when I was considering how to create a drama that held viewers’ attention for at least 30 seconds. So I thought, why not create a race?

The starting screen.

When the “Go!” button in the control panel is pressed, the race starts!

And they’re off!

5 squares of differing colors race against each other, and the result varies every time the race is run. By far the most involved part of the project was the creation of the algorithm that propels each square. In the end, I settled on this:

The heart of the algorithm is just a straightforward ramp envelope. The square is simply moved slowly across the screen over the course of 30 seconds. The random, back-and-forth movement of the squares comes from the addition of random numbers to the envelope. I used a smoother on the random values to make the random movement of the square smooth rather than jumpy. A comparator is used to determine if the square is at any point the winner — if the horizontal position of the square (equal to the envelope output plus the random output from the smoother) is at any point greater than or equal to 49, the comparator’s condition is met, and it triggers a jump actor that changes the scene to the “victory screen” for the corresponding color (just some text):

Blue wins this time! Each color has its own victory screen.

While the outcome of the race is in fact random and changes every time, in truth, due to the nature of the algorithm, the first 29 seconds or so of the race don’t impact the result whatsoever. This is because the random number generator that determines the current random displacement of the square does not take into account the previous random displacement of the square. This also means that the race will always be perceived as “neck and neck” and “anyone’s race to win” until the final second, further escalating the drama of the situation. I think this gives the race appropriate casino machine vibes — an elaborate, meaningless show to create anticipation for an outcome that is actually only determined in a brief moment.
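The algorithm described above can be approximated in plain code. This is a hedged Python sketch of one race, not the Isadora patch itself: the ramp envelope, the smoothed per-step random displacement, and the comparator threshold are all modeled, but the specific colors, noise range, and smoothing factor are my own guesses.

```python
import random

def run_race(duration=30.0, dt=0.1, finish=49.0, smoothing=0.9):
    """Toy model of the race: each square is a linear ramp plus a smoothed random offset."""
    colors = ["red", "blue", "green", "yellow", "purple"]
    positions = {c: 0.0 for c in colors}
    smoothed = {c: 0.0 for c in colors}
    t = 0.0
    while t < duration:
        t += dt
        ramp = finish * (t / duration)          # envelope: 0 -> finish over `duration`
        for color in colors:
            noise = random.uniform(-2.0, 2.0)   # fresh random displacement each step,
                                                # independent of the previous one
            smoothed[color] = smoothing * smoothed[color] + (1 - smoothing) * noise
            positions[color] = ramp + smoothed[color]
            if positions[color] >= finish:      # comparator: first past 49 triggers victory
                return color
    return max(positions, key=positions.get)    # fallback: furthest square wins

print(run_race(), "wins!")
```

Because each step’s noise ignores the previous displacement, the ramp keeps every square within a narrow band of the pack until it nears the threshold, which is exactly why the outcome only resolves in the final moments.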

One thing I didn’t get around to adding were audio effects and music. I wanted to add a starting gun noise that played when “Go!” was pressed, some anticipation-building music that played during the race, and a victory fanfare for each square. If I were to do this again, I would definitely add these things.

Guts of the starting screen.

The guts of one of the victory screens. In this case, it’s the “Blue Wins!” victory screen. The “Go!” button is connected to a Jump++ actor so that a new race can easily be started.