PP1 – Randomness and dark humor

The day our Pressure Project 1 was assigned, I was immediately excited about the possibilities waiting for me in the endeavor to make someone laugh. We had 15 minutes of class left, and I began working on the patch. My first thought was to create ‘a body’ through the Shapes actors and have something absurd happen to that body. As I was creating the shapes, I began changing the fill colors. The first color I tried was blue, which made me think of having the body drink water, get full of water, and then have something happen with that water that creates some type of burst.

While I liked the idea in the moment, it wasn’t funny enough for me. When I sat down to work on it longer, I recreated my ‘body’ and stared at it for some time. I wanted to make it come alive by giving it thoughts and feelings beyond actions. I knew I needed some randomness in what I was doing for the Random actor to make sense to me.

So I turned inward. My MFA research utilizes somatic sensations as a resource for creative expression through a queer lens. The inward-outward contrast, alignments, and misalignments are exciting to me. I enjoy training my mind to look at things in non-normative ways, as both a queer and a neurodivergent artist. While I have plenty of coherent thoughts relative to the situations I am in, I sometimes have hyperfixations or interests in random things many people might not think of.

I wanted my Isadora ‘body’ to be hyperfixated on magic potions. I wanted it to be consumed by the thought of magic potions, leading to some sort of absurd outcome, hence the randomness. I searched for magic potion images with .png extensions and found one that I wanted to use. After adding that image, I needed a ‘hand’ to interact with the potion. So I searched for a .png image of a hand.

To help my ‘body’ convey its inner experiences, I decided to give it a voice through the Text Draw actor and added short captions to my scenes. The next step was giving my magic potion a storyline so the story would have two characters. I achieved that by showing how the magic potion affected the body beyond the body’s initiated actions, carrying the magic potion from a passive role to an active role.

I connected a Wave Generator to the magic potion’s width, which created a spinning visual, and connected another Wave Generator to the head’s intensity, which created a sense of lightheadedness/dizziness, some type of feeling funny or not normal.

In the next scene, the head of my body disintegrates after consuming the magic potion. I achieved that with an Explode actor.

To exaggerate the explosion and the effect of the magic potion on the person, I connected a Random actor to the Explode actor and connected a Pulse Generator to the Random actor.

The last scene reveals the dark truth of the story, using humor. The body disappears, and the only thing in the scene is the magic potion with its inner voice (through Text Draw) for the first time. I needed to give my magic potion a facial expression, so I searched for a .png image of a smiley face that I could layer on top of the previous image. After finding the image I liked, I looked through my scenes and found myself laughing alone in my room. That’s when I decided my work on this project was satisfactory, and I stayed within the 5-hour limit we had to work on it.

In my presentation, everything went according to plan on my end, and the goal of making someone laugh was achieved: I heard people making noises in reaction to the scenes, especially the final one.

There was feedback about the scale of the images. I built the patch on my personal computer and presented it on the big screen in the Motion Lab. Because I hadn’t projected it there before, the images were very big, especially given the proximity of the audience to the screen. But I also received feedback that, because the texts were short and readable both in attention span and time, it still worked.

I am quite content with how the process went and how the product turned out. Having used Isadora in previous classes, I find building on my skills very exciting. I usually don’t use humor in my artistic works, but I had a craving for it. With the goal of making someone laugh, using Isadora as a canvas or a ‘stage’ for storytelling, and connecting beyond the ‘cool’ capabilities of the software, was the part I enjoyed the most in this process.


Pressure Project 1 – The best intentions….

I would like to take a moment and marvel that a product like Isadora exists. That we can download some software and, within a few hours, create something involving motion capture and video manipulation is simply mind-blowing. However, I learned that Isadora makes it very easy to play with toys without fully understanding them.

The Idea

When we were introduced to the Motion Lab, we connected our computers to the “magic USB” and were able to access the room’s cameras, projectors, etc. I picked a camera to test and randomly chose what turned out to be the ceiling-mounted unit. I’m not sure where the inspiration came from, but I decided right then that I wanted to use that camera to make a Pac-Man-like game where the user would capture ghosts by walking around on a projected game board.

The idea evolved into what I was internally calling the “Fish Chomp” game. The user would take on the role of an angler fish (the one with the light bulb hanging in front of it). The user would have a light that, if red, would cause projected fish to flee, or, if blue, would cause the fish to come closer. With the red light the user could “chomp” a fish by running into it. When all the fish were gone, a new, much bigger fish would appear that ignored the user’s light and always chased them, trying to chomp the user. With the user successfully chomped, the game would reset.

How to eat a big fish?  One bite at a time.

To turn my idea into reality, it was necessary to identify the key components needed to make the program work. Isadora needed to identify the user and track their location, generate objects that the user could interact with, process collisions between the user and the objects, and process what happens when all the objects have been chomped.

User Tracking:

The location of the user was obtained by passing a camera input through a Chroma Key actor. The intention was that, by removing all other objects in the image, the Eyes++ actor would have an easier time identifying the user. The hope was that the chroma key would reliably isolate the red light held by the user. The filtered video was then passed to the Eyes++ actor and its associated Blob Decoder. Together these actors produced the XY location of the user. The location was processed by Limit-Scale actors to convert the blob output to match the projector resolution. Because the resolution of the projector would determine how all objects in the game interacted, this value was set as a Global Value that all actors would reference. Likewise, the location of the user was passed to other actors via Global Values.
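As a rough illustration, the Limit-Scale step is essentially a clamp-then-rescale. This is a minimal Python sketch, not the actual patch; the camera and projector resolutions below are assumptions for the example.

```python
# Hypothetical sketch of the Limit-Scale step: mapping a blob position
# reported in camera coordinates onto the projector's resolution.

def limit_scale(value, in_min, in_max, out_min, out_max):
    """Clamp value to [in_min, in_max], then rescale to [out_min, out_max]."""
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# Example: a 640x480 camera blob mapped onto a 1920x1080 projector.
CAM_W, CAM_H = 640, 480
PROJ_W, PROJ_H = 1920, 1080   # the "Global Value" the other actors reference

user_x = limit_scale(320, 0, CAM_W, 0, PROJ_W)   # -> 960.0
user_y = limit_scale(480, 0, CAM_H, 0, PROJ_H)   # -> 1080.0
```

Publishing the projector resolution as a single shared value, as described above, means every downstream actor scales positions consistently.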

Fish Generation:

The fish used simple Shapes actors, with the intention of replacing them with images of fish at a later time (unrealized). Each fish actor used Wave Generators to manipulate the XY position of the shape, with either the X or Y generator updated with a random number that would periodically change the speed of the fish.
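The motion logic can be sketched in Python: two oscillators drive the X/Y position, and one axis’s frequency is periodically replaced with a random value. The canvas size, speed range, and `Fish` class are assumptions for illustration, not values from the patch.

```python
import math
import random

class Fish:
    """Sketch of one fish actor: wave-generator motion with random speed."""
    def __init__(self, w=1920, h=1080):
        self.w, self.h = w, h
        self.speed_x = 0.2    # oscillation frequency in cycles/second (assumed)
        self.speed_y = 0.1
        self.phase_x = self.phase_y = 0.0

    def randomize_speed(self):
        # Mimics the random number periodically updating one generator's speed.
        self.speed_x = random.uniform(0.05, 0.5)

    def position(self, dt):
        # Advance both "wave generators" and map their output onto the canvas.
        self.phase_x += self.speed_x * dt
        self.phase_y += self.speed_y * dt
        x = (math.sin(2 * math.pi * self.phase_x) * 0.5 + 0.5) * self.w
        y = (math.sin(2 * math.pi * self.phase_y) * 0.5 + 0.5) * self.h
        return x, y
```

Using different frequencies on the two axes keeps the path from collapsing into a straight diagonal, which is likely why only one axis needed randomizing to make the fish feel unpredictable.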

Chomped?

Each fish actor contained within it a User Actor to process collisions with the user. The actor received the user location and the shape position, subtracted their values from each other, and compared the ABS of the result to a definable “kill radius” to determine if the user got a fish. It would be too difficult for the user to chomp a fish if their locations had to be an exact pixel match, so a Comparator was used to compare the difference in location to an adjustable radius received from a global variable. When the user and a fish were “close enough” together, as set by the kill radius, the actor would output TRUE, indicating a successful collision. A successful chomp would trigger the Shapes actor to stop projecting the fish.
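The chomp test described above reduces to a few lines. This sketch follows the per-axis ABS comparison as described (which makes the “close enough” zone a square rather than a circle); the default radius is an assumption.

```python
def chomped(user_x, user_y, fish_x, fish_y, kill_radius=40):
    """True when the user is within kill_radius of the fish on both axes.

    Subtract the positions, take the absolute value per axis, and compare
    each against the adjustable kill radius (the comparator step).
    """
    return (abs(user_x - fish_x) <= kill_radius
            and abs(user_y - fish_y) <= kill_radius)
```

A Euclidean distance check would give a circular zone instead, but for a forgiving gameplay radius the square approximation behaves nearly the same.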

Keeping the fish dead:

The user and the fish would occupy the same space only briefly, causing the shape to reappear after their locations diverged again. To keep the fish from coming back to life, they needed memory to remember that they had been chomped. To accomplish this, logic actors were used to construct an SR AND-OR latch. (More info about how these work can be found at https://en.wikipedia.org/wiki/Flip-flop_(electronics).) This circuit, when triggered at its ‘S’ input, causes the output to go HIGH, and, critically, the output will not change once triggered. When the collision-detection actor recognized a chomp, it would trigger the latch, thus killing the fish.
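In software terms, the latch is simply one bit of state that a trigger can set but not unset. A minimal sketch of the same behavior (the class name and reset hook are my own, for illustration):

```python
class SRLatch:
    """One bit of memory: once set, the output stays HIGH until reset."""
    def __init__(self):
        self.q = False          # output starts LOW (fish alive)

    def set(self):              # 'S' input: fired by the collision detector
        self.q = True

    def reset(self):            # 'R' input: e.g. fired when the game restarts
        self.q = False

fish_dead = SRLatch()
fish_dead.set()                 # first chomp kills the fish
fish_dead.set()                 # further triggers change nothing
# fish_dead.q is now True and stays True until reset() is called
```

This is exactly the property needed here: the momentary TRUE pulse from the collision check becomes a persistent “chomped” state.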

All the fish in a bowl:

The experience consisted of the user and four fish actors. For testing purposes the user location could be projected as a red circle. The four fish actors projected their corresponding shapes until chomped. When all four fish actors’ latches indicated that the fish were gone, a 4-input AND gate would trigger a scene change.
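The 4-input AND gate amounts to an all-of check on the four latch outputs; here the latch states are shown as plain booleans for illustration.

```python
def all_fish_chomped(latch_outputs):
    """True only when every fish's latch reads HIGH (4-input AND gate)."""
    return all(latch_outputs)

print(all_fish_chomped([True, True, True, False]))  # False: one fish remains
print(all_fish_chomped([True, True, True, True]))   # True: trigger scene change
```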

We need a bigger fish!

When all the fish were chomped, the scene would change.  First, an ominous pair of giant eyes would appear, followed by the eyes turning angry with the addition of some fangs. 

The intention was for the user to go from being the chomper to being the chomped!  A new fish would appear that would chase the user until a collision occurred. Once this occurred, the scene would change again to a game over screen.

The magic wand:

To give the user something to interact with, and for the Eyes++ actor to track, a flashlight was modified with a piece of red gel and a plastic bubble to make a glowing ball of light.

My fish got fried.

The presentation did not go as intended. First, I forgot that the Motion Lab ceiling webcam was an NDI input, not a simple USB connection like my test setup at home. I decided to forgo the ceiling camera and demo the project on the main screen in the lab while using my personal webcam as the input. This meant that I had to demo the game instead of handing the wand to a classmate as intended. This was for the best, as the system was very unreliable. The fish worked as intended, but the user-location system was too inconsistent to provide a smooth experience.

It took a while, but eventually I managed to chomp all the fish.  The logic worked as intended, but the scene change to the Big Fish eyes ignored all of the timing I put into the transition.  Instead of taking several seconds to show the eyes, it jumped straight to the game over scene.  Why this occurred remains a mystery as the scenes successfully transitioned upon a second attempt.

Fish bones

In spite of my egregious interpretation of what counted as “5 hours” of project work, I left many ambitions undone. Getting the Big Fish to chase the user, using images of fish instead of shapes, making the fish swim away from or towards the user, and adding sound effects were all discarded like the bones of a fish. I simply ran out of time.

Although the final presentation was a shell of what I intended, I learned a lot about Isadora and what it is capable of doing and consider the project an overall success.

Fishing for compliments.

My classmates had excellent feedback after witnessing my creation. What surprised me the most was how my project ended up as a piece of performance art. Because of the interactive nature of the project, I became part of the show! In particular, my personal anxiousness stemming from the presentation not going as planned played as much a part of the show as Isadora did. Much of the feedback was very positive, with praise given for the concept, the simple visuals, and the use of the flashlight to connect the user to the simulation in a tangible way. I am grateful for the positive reception from the class.