PP2/PP3: Musicaltereality

Hello. Welcome to my Isadora patch.

This project is an experiment in conglomeration and human response. I was inspired by Charles Csuri’s piece Swan Lake – I was intrigued by the essentialisation of human form and movement, particularly how it joins with glitchy computer perception.

I used this pressure project to extend the ideas I had built from our in-class sound patch work from last month. I wanted to make a visual entity which seems to respond and interact with both the musical input and human input (via camera) that it is given, to create an altered reality that combines the two (hence musicaltereality).

So here’s the patch at work. I chose Matmos’ song No Concept as the music input because it has very notable rhythms and unique textures, which provide a great foundation for the layering I wanted to do with my patch.

Photosensitivity/flashing warning – this video gets flashy toward the end

The center dots are a constantly-rotating pentagon shape connected to a “dots” actor. I connected frequency analysis to dot size, which is how the shape transforms into larger and smaller dots throughout the song.

The giant bars on the screen are a similar setup to the center dots. Frequency analysis is connected to a shapes actor, which is connected to a dots actor (with “boxes” selected instead of “dots”). The frequency changes both the dot size and the “src color” of the dot actor, which is how the output visuals are morphing colors based on audio input.
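As a conceptual sketch (plain Python, not Isadora code), the frequency-analysis chain described above can be thought of as one amplitude value driving both the dot size and the “src color”. The numeric ranges below are illustrative assumptions, not values from the actual patch.

```python
def amplitude_to_dot_size(amplitude, min_size=5.0, max_size=60.0):
    """Linearly map a 0-1 band amplitude to a dot size (assumed pixel range)."""
    amplitude = max(0.0, min(1.0, amplitude))  # clamp to the valid range
    return min_size + amplitude * (max_size - min_size)

def amplitude_to_hue(amplitude):
    """Map the same amplitude onto a 0-360 hue, like the morphing 'src color'."""
    return max(0.0, min(1.0, amplitude)) * 360.0

# Quiet passages give small dots near one end of the color wheel;
# loud hits give large dots far around it.
print(amplitude_to_dot_size(0.0))  # 5.0
print(amplitude_to_dot_size(1.0))  # 60.0
print(amplitude_to_hue(0.5))       # 180.0
```

Because both mappings share the same input, size and color stay locked to the music, which is what makes the layers feel synchronized.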

The motion-tracking rotating square is another shapes-dots setup which changes size based on music input. As you can tell, a lot of this patch is made out of repetitive layers with slight alterations.

There is a slit-scan actor which is impacted by volume. This is what creates the bands of color that waterfall up and down. I liked how this created a glitch effect, and directly responded to human movement and changes in camera input.

There are two difference actors: one of them is constantly zooming in and out, which creates an echo effect that follows the regular outlines. The other difference actor is connected to a TT edge detect actor, which adds thickness to the (non-zooming) outlines. I liked how these add confusion to the reality of the visuals.

All of these different inputs are then put through a ton of “mixer” actors to create the muddied visuals you see on screen. I used a ton of “inside range”, “trigger value”, and “value select” actors connected to these different mixers in order to change the color combinations at different points of the music. Figuring this part out (how to actually control the output and sync it up to the song) was what took the majority of my time for pressure project 3.
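The “inside range” / “trigger value” / “value select” logic for syncing mixer settings to the song can be sketched conceptually (plain Python, not Isadora code). The cue times and mix values here are illustrative assumptions.

```python
# Each cue: (start_seconds, end_seconds, mixer preset) -- assumed values.
CUES = [
    (0, 30, {"mix_a": 1.0, "mix_b": 0.0}),    # opening: first layer only
    (30, 60, {"mix_a": 0.5, "mix_b": 0.5}),   # middle: blend in more layers
    (60, 999, {"mix_a": 0.0, "mix_b": 1.0}),  # ending: full layered chaos
]

def select_mix(song_seconds):
    """Return the mixer preset whose [start, end) range contains the time."""
    for start, end, preset in CUES:
        if start <= song_seconds < end:  # the Inside Range test
            return preset
    return CUES[-1][2]

print(select_mix(10))  # {'mix_a': 1.0, 'mix_b': 0.0}
print(select_mix(45))  # {'mix_a': 0.5, 'mix_b': 0.5}
```

Each range acting as a gate into a different preset is what lets the color combinations change at specific points in the music.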

I like the chaos of this project, though I wonder what I can do to make it feel more interactive. The motion-tracking square is a little tacked-on, so if I were to make another project similar to this in the future I would want to see if I can do more with motion-based input.


Pressure Project 3

Pressure project 3 was a continuation of pressure project 2. We were given an additional 8 hours to iterate on pressure project 2 in preparation for the Chuck Csuri open house at ACCAD.

For pp3 I changed the third scene of my project to transition from the spinning ball into a color-changing “light show”.


The ball’s slow fade to a black screen is not triggered by motion; it is driven by a series of timers that start when the scene begins. The background and all four elements of the circle are each connected to their own timer: the first triggers at 10 seconds, the next a second after that, and so on until the ball is gone. To change the color, I connected each Timed Trigger to a Colorizer, so that when a timer went off, its Colorizer would turn on.
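The timer cascade can be sketched conceptually (plain Python, not Isadora code): the first Timed Trigger fires at 10 seconds, and each subsequent element fires one second after the previous. The element names are hypothetical placeholders.

```python
# Hypothetical names for the background and the four circle elements.
ELEMENTS = ["background", "circle_1", "circle_2", "circle_3", "circle_4"]

def trigger_times(first_at=10.0, step=1.0):
    """Map each element to the time (seconds) its Colorizer switches on."""
    return {name: first_at + i * step for i, name in enumerate(ELEMENTS)}

times = trigger_times()
print(times["background"])  # 10.0 -- first Timed Trigger fires
print(times["circle_4"])    # 14.0 -- the last element goes, ball fully gone
```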

The “light show” scene I added after the spinning ball is motion-controlled and uses the Video In Watcher and Eyes++ actors to track the user’s motion. I used a TT Psycho Colors actor, with the brightness output from the Eyes++ actor controlling the bands and the first blob output controlling the width and height of the shape, with both passing through a Smoother in an attempt to make the color changes more gradual. This works unless there is a lot of rapid movement by the user, in which case the colors shift more suddenly, which could negatively impact users with photosensitivity.
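A Smoother’s behavior can be approximated conceptually as exponential smoothing (plain Python, not Isadora code): each new reading is blended with the previous output, so a sudden jump becomes a gradual ramp. The 0.2 smoothing factor is an assumption for illustration.

```python
def smooth(readings, factor=0.2):
    """Exponential smoothing: out = out + factor * (reading - out)."""
    out = readings[0]
    smoothed = [out]
    for reading in readings[1:]:
        out += factor * (reading - out)
        smoothed.append(out)
    return smoothed

# A sudden spike from 0 to 1 is softened into a gradual ramp toward 1.0.
print(smooth([0.0, 1.0, 1.0, 1.0]))
```

This also explains the photosensitivity caveat: a single spike is softened, but rapid repeated movement keeps pushing the smoothed value, so the colors still shift quickly.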

The last thing I added in PP3 was sound playing through the whole experience. To do this, I placed my sound file (an MP3) connected to a projector in a separate scene; then, in each of the scenes I wanted music to play in, I placed a Listener actor connected to a projector. This approach did not work with a WAV file.


Lawson: PP2 Inspired by Chuck Csuri

My second pressure project is inspired by the two Chuck Csuri works below: Lines in Space (1996) and Sine Curve Man (1967). I love the way that each work takes the human form and abstracts it, making the figures appear to melt, warp, and fray into geometric shapes and rich, playful colors.

Lines in Space, 1996
Sine Curve Man, 1967

For my project, I wanted to give the audience a chance to imitate Csuri’s digital, humanoid images in a real-time self-portrait. I also wanted to build my project around the environmental factors of an art gallery: limited space in front of each artwork, a mobile audience with split attention, and ambient noise. In addition to the patch responding to the movement of the audience, I wanted to introduce my interpretation of Chuck Csuri’s work in layers that progressively built into the final composite image. You can see a demonstration of the Isadora self-portrait below.

To draw the audience’s attention to the portrait, I built a webcam motion sensor that would trigger the first scene when a person’s movement was detected in the range of the camera. I built the motion sensor using a chain of a Video In Watcher, a Difference actor, a Calculate Brightness actor, and a Comparator that triggers a Jump Scene actor. If the brightness of the webcam image was determined to be greater than 0.6, the Jump Scene actor was triggered. So that the jump would fire only once, I used a Gate actor and a Trigger Value actor to stop more than one trigger from reaching the Jump Scene actor.
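The one-shot motion sensor chain can be sketched conceptually (plain Python, not Isadora code): brightness above the 0.6 threshold from the text fires once, after which a gate blocks further triggers.

```python
class MotionTrigger:
    """Sketch of Difference -> Calculate Brightness -> Comparator -> Gate."""

    def __init__(self, threshold=0.6):  # 0.6 is the threshold from the patch
        self.threshold = threshold
        self.gate_open = True  # the Gate actor: closes after the first trigger

    def update(self, brightness):
        """Return True exactly once, the first time brightness exceeds the threshold."""
        if self.gate_open and brightness > self.threshold:
            self.gate_open = False  # the Trigger Value closes the gate
            return True             # fire the Jump Scene actor
        return False

sensor = MotionTrigger()
print(sensor.update(0.3))  # False -- not enough motion yet
print(sensor.update(0.7))  # True  -- person detected, jump to the first scene
print(sensor.update(0.9))  # False -- gate closed, no repeat triggers
```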

Once the patch had detected a person in the range of the webcam, the remainder of the patch ran automatically using chains of enter scene triggers, trigger delays, and jump scene actors.

To imitate the colored banding of Csuri’s work, I filtered the image of the webcam through a Difference actor set to color mode. The Difference actor was connected to a Colorizer actor. To create the fluctuating colors of the banding, I connected a series of Envelope Generators to the Colorizer that raised and lowered the saturation of hues on the camera over time.

In the next scene I introduced the sense of melting that I experienced in Csuri’s work by adding a Motion Blur actor to my chain. At the same time, I attached a Sound Level Watcher to the threshold of the Difference actor to manipulate its sensitivity to movement. This way the patch is now subtly responsive to the noise level of the gallery setting: if the gallery is noisy, the image will appear brighter because less movement is required for it to be visible. This visibility then fluctuates with the noise levels in the gallery.
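The Sound Level Watcher wiring can be sketched conceptually (plain Python, not Isadora code): a noisier gallery lowers the Difference actor’s threshold, so less movement is needed for the image to appear. The mapping range is an illustrative assumption, not the patch’s actual values.

```python
def difference_threshold(noise_level, max_threshold=0.8, min_threshold=0.1):
    """Map a 0-1 gallery noise level to a motion threshold: louder -> more sensitive."""
    noise_level = max(0.0, min(1.0, noise_level))
    return max_threshold - noise_level * (max_threshold - min_threshold)

print(difference_threshold(0.0))  # 0.8 -- silent gallery: hard for the image to appear
print(difference_threshold(1.0))  # about 0.1 -- noisy gallery: image appears readily
```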

In the next scene I introduced the warping and manipulation I observe in Csuri’s work. I wanted to play with the ways that Csuri turns real forms into abstract representations. To do this, I introduced a kaleidoscope actor to my chain of logic.

My final play scene is a wild card. In this scene, I connected the Sound Level Watcher to the facets element of the Kaleidoscope actor. Instead of the clarity of the image depending on the noise level of the gallery, the abstraction or warping of the image is determined by the noise levels. I consider this scene a wild card because its effectiveness depends on the audience realizing that their talking or silence impacts their experience.

The patch ends by showing the audience my inspiration images and then resetting.

In thinking about improving this patch for Pressure Project 3, I want to consider the balance of instructions and discoverability and how to draw in and hold an audience member’s attention. I am unsure as to whether my project is “obvious” enough for an audience member to figure out what is happening without instructions but inviting enough to convince the audience member to stay and try to figure it out. I also know that I need to calibrate the length of my black out scenes and inspiration image scenes to make sure that audience members are drawn to my installation, but also don’t stay so long that they discourage another audience member from participating in the experience.


Reflection on PP3

For PP3, I focused more on interactivity and built a very simple structure through which viewers can easily feel a projection interacting with them.
In PP2, I made a big spiral shape on a monitor through which colors ran, interacting with viewers’ motion. While I liked it, I felt that a huge static spiral created a sort of psychological barrier for viewers, like a big abstract painting standing in front of them.
So, for this project, I stepped back to my original inspiration from Charles Csuri’s Hummingbird (the joy of seeing shapes transition through digital computation on a monitor) and put up just a single transforming shape.
The changes of form and color are the same feature as in my PP2 idea, but I also added a sound-interactive element (the alpha value changes along with sound) to make it more playful.

As my interest lies in building interactive relationships between an object and its viewers, working with a webcam through PP2 and PP3, and watching everyone’s joyful projects, was really meaningful for me.


PP3: Etch-a-Sketch Iterated

For pressure project 3, I used my pressure project 2 patch as a starting point and both added and updated scenes to make them work more intuitively when users interact with them. From the PP2 performance, there were certain errors within the original patch that would cause scenes to skip through before the user had prompted them to change.

Above is a screenshot of part of my “split screen” scene patch. I added most of this part of the patch for PP3 to ensure that the scene only jumps ahead after the user has stayed still for around 2 seconds following the text that explains this as the way to move forward. I added the Gate and Enter Scene values specifically to keep the triggers from firing prematurely.
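The “stay still to advance” logic can be sketched conceptually (plain Python, not Isadora code): the jump fires only once the explanatory text is up and the user has been still for about 2 seconds, with the gate blocking any triggers before the text appears.

```python
class StillnessGate:
    """Sketch of the gated scene-advance: text first, then ~2 s of stillness."""

    def __init__(self, hold_seconds=2.0):  # ~2 s, as described in the text
        self.hold_seconds = hold_seconds
        self.text_shown = False  # the gate stays closed until the text appears
        self.still_for = 0.0

    def update(self, moving, dt):
        """Advance the stillness timer; True means jump to the next scene."""
        if not self.text_shown:
            return False  # gate closed: ignore everything
        self.still_for = 0.0 if moving else self.still_for + dt
        return self.still_for >= self.hold_seconds

gate = StillnessGate()
gate.update(False, 1.0)         # ignored: text not shown yet, gate closed
gate.text_shown = True          # the text trigger opens the gate
print(gate.update(False, 1.0))  # False -- still for only 1 s
print(gate.update(True, 1.0))   # False -- movement resets the timer
print(gate.update(False, 1.0))  # False -- still for 1 s again
print(gate.update(False, 1.0))  # True  -- 2 s of stillness, jump ahead
```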

Below is a screenshot of Csuri’s work I used as inspiration. I wanted to encourage the user to create and draw live, like in some of Csuri’s pieces. This piece specifically relates to the final scene in my project, the “etch-a-sketch,” in which the user draws and is able to create a canvas similar to this one:

Above is Csuri’s piece, and below is an example of my etch-a-sketch after some movement.

I also added music to this version, as I thought that might encourage movement from the user. I used the activate scene actor to activate the music in its separate scene at the end of the sequence. This also inspired a scene early on in which the user interaction doesn’t trigger movement on the screen, rather it’s the music that controls it. Below is a screenshot of the patch containing this scene.

I enjoyed getting to iterate on a past project, especially because I enjoyed the point I got to when working on PP2. I found the performance interesting as well, as the rest of the class already knew what to expect in a way. I think I learned more from the performance of PP2, but I still enjoyed getting to show the class how I implemented and took into account their feedback and interaction with the patch. Below is a .zip file of my entire PP3 patch:


PP3: Living and Alive!

Inspired by Csuri’s use of feedback, duplication and circular repetition, as well as the ‘Nervous Tick’ animation, I continue developing the idea of living typography with an added sound-reactive element.

In addition to the motion sensor in the first scene, there is a sound trigger that leads us into the next scene. This was an idea I had for the first iteration but could not achieve without Alex’s help (thanks, Alex!). So, the motion sensor triggers a text that says ‘Clap Clap Clap for me!’, which tells the audience to make sound, which in turn triggers the next scene.

The sound element was an exciting new one to work with. Unlike the motion sensor, a clap is very energetic. Suddenly, the room was filled with life, which fueled the appetite of the creature. In scene 2, claps (or any sound-reactive input) altered the size of the creature, allowing it to emerge from the depths each time sound was detected. After watching it with the class, I wished I had re-exported the video clip onto a black background and added a fade to the movement, so that there was an overlapping effect between each clap and the rectangular edges weren’t so visible.

In the next scene, I introduced a new animation that is purely sound-reactive: a portal, built using heavy feedback, that activates based on claps or other sound engagement from the audience.

Programming-wise, the new elements in this scene include an audio-reactive Shapes actor and double feedback through ‘Get Stage Image’ using text. I’m not sure exactly how I created this look; there was a lot of experimenting going on. In this scene, I felt visually creative for the first time using Isadora, and I would like to explore the combination of these actors further.

To finish off, we use ‘come closer’ through audio detection to return to ‘See you in your dreams’ which goes back to motion detection. Overall, I’m very satisfied by all the rabbit holes I managed to hop into with this project. It felt like a cohesive piece and each experiment was really fun and exciting!


PP2: Living Typography

The inspiration for this project was ‘Nervous Tick,’ an animation by Chuck Csuri. I loved the way the shapes felt ‘alive’ in their movements, almost reacting to those watching them. I also wanted to give the computer presence a voice through the use of typography. The interaction between image and viewer was my impetus for creating this work.

Everything in this first iteration is motion-triggered and set off through time delays. Using a webcam filtered through the ‘Difference’ and then ‘Eyes++’ actors, the motion sensor detects movement and sends triggers for text to appear. One motion sensor actor (above) has a longer delay time and allows for a smoother interaction from human movement to text output. The second motion sensor is set up to be quick and jittery; I did this to give emotion to the creature.
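The two sensors’ contrasting personalities come down to their delay values, which can be sketched conceptually (plain Python, not Isadora code). The delay values here are assumptions for illustration.

```python
def schedule_text(motion_times, delay):
    """Return when each text trigger fires, given motion timestamps and a delay."""
    return [t + delay for t in motion_times]

motion = [0.0, 2.0, 5.0]           # seconds at which movement is detected
print(schedule_text(motion, 1.5))  # smooth sensor:  [1.5, 3.5, 6.5]
print(schedule_text(motion, 0.1))  # jittery sensor: [0.1, 2.1, 5.1]
```

The same movement stream produces either a languid or a twitchy response depending only on the delay, which is what gives the creature its two moods.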

Text in the first scene includes ‘Hey There!’, ‘I’m in here!’, ‘It’s dark..’, ‘Let me out!’

At first, the viewer is not sure whether the character is innocent. From audience reactions, I gathered that people thought the creature was the victim. It is revealed in the next scene that it is an unpleasant, virus-looking thing that has been unleashed and will ‘see you in your dreams.’

A new element in this project is the use of feedback through the ‘Get Stage Image’ Actor plugged into Projector. It creates a melting effect similar to datamosh that really gave it an organic feel.


Pressure Project 1

In this pressure project, I was very much tinkering with the software, trying to figure out what to plug where.


In Scene One, I experimented with the various sliders of the Shapes actor, combining them with the envelope, wave, and pulse generators to create what I deem an interesting animation. I also tuned the frequency, amplitude, and phase of curves for various generators so as to create an orbital effect in the moving agents. I was mindful to include user actors to create independent objects on the stage; these user actors make sense to me from the perspective of object-oriented programming.

In Scene two, I imported a video I made and experimented with the kaleidoscope actor.

Overall, I was not totally satisfied with this project and what it achieved, but I am okay treating it more like a “study” piece.

Here’s my patch for sharing:

https://drive.google.com/file/d/1NSRfCIi78JRKkM_2lV559OdPd70sWCkj/view?usp=sharing


Pressure Project 2

In this project, I was inspired by Charles Csuri’s line work as it appears in his Swan Lake and Hummingbird pieces.

I experimented with various actors to manipulate the webcam image, such as Difference, Gaussian Blur, and Shimmer, in order to affect how the Eyes++ actor responded to the live capture. I wanted more simultaneous capturing of multiple moving objects, and I found the Reflector actor to serve my intended purpose well. The numbers generated by the Eyes++ actor were fed into the scale, position, etc. of the moving elements.

As for the moving visual elements, I created simple line drawings using Photoshop and exported them as PNGs with transparent backgrounds. I managed to devise a way to call various images into the background using the numbers from the Eyes++ actor. See the screenshot below. I will be using this more.
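Calling different background images from the Eyes++ numbers can be sketched conceptually (plain Python, not Isadora code): a numeric output indexes into a list of PNGs. The filenames are hypothetical placeholders.

```python
# Hypothetical filenames for the exported line-drawing PNGs.
IMAGES = ["lines_1.png", "lines_2.png", "lines_3.png"]

def pick_image(eyes_value):
    """Clamp the Eyes++ number to a valid index and return that image."""
    index = max(0, min(len(IMAGES) - 1, int(eyes_value)))
    return IMAGES[index]

print(pick_image(0))    # lines_1.png
print(pick_image(2.7))  # lines_3.png -- fractional values truncate to an index
print(pick_image(9))    # lines_3.png -- values past the end clamp to the last image
```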

In the work, I also experimented with the 3D Particle Generator, but it was not very successful. There were just too many sliders to account for, and after tinkering with it for 2-3 hours, I feel I only have a sense of what 20% of them do. I also managed to work in the Get Stage Image and Capture Stage to Image actors, with which I will experiment more for Cycle 1 of my project.

Here’s my patch to share if you are interested.
https://drive.google.com/file/d/18Tq5N5q7md8nW2IadXUaUh_7EnQI9PIo/view?usp=sharing


Pressure Project 2

Pressure Project 2 had to be completed in 8 hours, had to use the Video In Watcher, had to be interactive, and had to be inspired by Chuck Csuri. I was inspired by the colors in this piece and wanted to use them in some way in my work. One of the biggest problems I ran into throughout this pressure project was keeping Isadora from crashing. I found that just starting live capture with my webcam would increase Isadora’s load by 30% or more, so I decided to use it in only one part of my project rather than the whole thing in an attempt to keep the load down. I also reduced the webcam resolution quite a bit, and this helped to keep Isadora and my computer from crashing.


My project begins with a blank screen that says “Open”, with the ‘O’ in pink and the rest of the word in white.

I used three Text Draw actors in this scene; one for the ‘O’, one for ‘pen’ placed beside the ‘O’ to look like one word, and the third for the hint that is displayed after 20 seconds. To create the timer, I used the Enter Scene Trigger, Trigger Delay, and Timed Trigger actors. I also used a keyboard watcher to trigger the next scene when the letter ‘o’ is pressed.
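The opening scene’s behavior can be sketched conceptually (plain Python, not Isadora code): the hint appears 20 seconds after the scene starts, and pressing ‘o’ jumps to the next scene at any time.

```python
def scene_state(seconds_elapsed, key_pressed=None, hint_delay=20.0):
    """Return what the opening scene shows or does at a given moment."""
    if key_pressed == "o":             # the Keyboard Watcher fires the jump
        return "jump_to_next_scene"
    if seconds_elapsed >= hint_delay:  # the Timed Trigger reveals the hint
        return "show_hint"
    return "show_open_text"

print(scene_state(5.0))                     # show_open_text
print(scene_state(25.0))                    # show_hint
print(scene_state(25.0, key_pressed="o"))   # jump_to_next_scene
```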


The second scene is an eye shape made of concentric circles and a glowing pink center. I actually started with this scene first, so I started playing with the Video In Watcher here.

The biggest challenge in this scene was getting a delay between each eye blink so that the next scene doesn’t trigger too quickly. To do this, I used an Inside Range actor within a sequence connected to the Video In Watcher with a higher minimum value and a small range, then a Trigger Delay actor off of the Inside Range Actor.

Because I wanted the scene to change after so many eye blinks, I used a Counter and an Inside Range actor to count the number of blinks (movement inputs from the Video In Watcher), then after x blinks (I used 10 but this can be changed for a longer experience), the Inside Range actor will trigger an Activate Scene actor to move to the next scene.
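The blink counter can be sketched conceptually (plain Python, not Isadora code): each movement value falling inside the range counts as a blink, and after the target number of blinks the scene change fires. The range bounds are assumptions; the text gives 10 as the blink count.

```python
class BlinkCounter:
    """Sketch of Inside Range -> Counter -> Inside Range -> Activate Scene."""

    def __init__(self, blinks_needed=10, low=0.5, high=0.6):
        self.blinks_needed = blinks_needed      # 10 in the patch, adjustable
        self.low, self.high = low, high         # assumed Inside Range window
        self.count = 0

    def update(self, motion):
        """Count a blink when motion falls in range; True means change scene."""
        if self.low <= motion <= self.high:
            self.count += 1
        return self.count >= self.blinks_needed

counter = BlinkCounter(blinks_needed=3)  # shortened for the example
print(counter.update(0.55))  # False -- blink 1
print(counter.update(0.2))   # False -- below the range, not a blink
print(counter.update(0.55))  # False -- blink 2
print(counter.update(0.58))  # True  -- blink 3, activate the next scene
```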


The third scene is a pink sphere on a gold and green/blue background. This one was mostly a result of playing around with layering Shapes actors to see if I could give the illusion of a 3D object. This scene is not interactive, mostly because I couldn’t decide on an input method or how I wanted it to be interacted with.

My biggest obstacles in this scene were just getting each circle in the right place and finding the right actor for the background gradient, and these took a lot of time to do but were relatively easy overall. Because this scene does not have any interactive components, I used a Wave Generator actor to get the circle to spin. As of now, this scene does not end.