Pressure Project 2 (A Walk in the Woods)

I wanted to get away from the keyboard for this assignment and make something that asked people to move and make noise. I considered making some sort of mock fight that got people to dodge or gesture to certain sides of their body, but I didn’t want to make anything violent. Serendipitously, I was at an outdoor gathering while I pondered what to make, and this party was highly attended by bees. So it began: a walk in the woods. I thought I would tell people that things (like bees) were coming at them from a certain angle, and then ask them to either dodge or gesture it away. I thought I would use the Crop actor to crop portions of what my camera was seeing, and read where the movement was to trigger the next scene.

Less serendipitously (in some ways), I was at this party because my partner’s sister’s wedding was the day before, and I was in the wedding party. This meant that I had 2 travel days, a rehearsal dinner, an entire day of getting ready for a wedding (who knew it could take so long?), and the day-after party. WHEN WAS I SUPPOSED TO DO MY WORK?!

So, to simplify, I decided not to worry about cropping, and to trigger through movement, lack of movement, sound, or lack of sound. Then, the fun began: making it work.

I decided to start with a training section that would give the user hints/instructions/practice for how to continue. This also gave me a chance to figure out the patches without worrying about the content of the story. Using Text Draw, I instructed the user to stand up, and told them that they would practice a few things. Then, more words came in to tell the user to run. I created a user actor that would delay text, because I planned to use this in multiple scenes.

Screen Shot 2018-10-04 at 9.27.06 AM

The user actor has an Enter Scene Trigger to a Trigger Delay to a Toggle actor that toggles the projector’s activity between on and off. Since I set the projector to initialize “off” and the Enter Scene Trigger only sends one trigger, the projector’s activity toggles from “off” to “on.” Then, I connected user input actors to “text” in the Text Draw and “delay” in the Trigger Delay actor. This allows me to change the text and the amount of time before it appears every time I use this user actor.
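Outside of Isadora, the Enter Scene Trigger → Trigger Delay → Toggle chain boils down to a timed, one-shot toggle. Here is a minimal Python sketch of that logic (the class and method names are my own invention, not Isadora terms):

```python
class TextDelayActor:
    """Sketch of the Text Delay user actor: a scene-enter trigger,
    a delay, then a toggle that flips the projector layer on."""

    def __init__(self, text, delay_seconds):
        self.text = text
        self.delay = delay_seconds
        self.projector_on = False  # projector initializes "off"
        self.fire_at = None

    def enter_scene(self, now=0.0):
        # The Enter Scene Trigger fires exactly once when the scene
        # starts; the Trigger Delay schedules the toggle for later.
        self.fire_at = now + self.delay

    def update(self, now):
        # Once the delay has elapsed, the toggle flips the projector
        # from "off" to "on" and the text becomes visible.
        if not self.projector_on and self.fire_at is not None and now >= self.fire_at:
            self.projector_on = not self.projector_on
            return self.text
        return None
```

Because `text` and `delay` are constructor inputs, each scene can reuse the same actor with different words and timing, just like the user inputs exposed on the Isadora user actor.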

I also made user actors that trigger with sounds and with movement. I’ll walk through each of these below.

Screen Shot 2018-10-04 at 12.27.31 PM

Any scene that prompted the user to run or stay still used a user actor that I called “Difference Jump++.” Here, I used the Difference actor and the Calc Brightness actor to measure how much movement takes place. (Note: the light in the space really mattered, and this may have been part of what made this not function as planned. However, because this was in a user actor, I could go into the space before showing it, change the values in the Inside Range actor, and it would change those values in EVERY SCENE! I was pretty proud of this, but it still wasn’t working quite right when I showed it.)
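The Difference → Calc Brightness → Inside Range chain amounts to frame differencing: subtract consecutive frames, reduce the result to one brightness number, and fire only when that number falls inside a tunable band. A rough Python sketch, treating a frame as a flat list of pixel values (the function names and thresholds are illustrative, not Isadora’s):

```python
def motion_amount(prev_frame, frame):
    """Difference actor + Calc Brightness: mean absolute pixel
    difference between two consecutive frames."""
    diffs = [abs(a - b) for a, b in zip(prev_frame, frame)]
    return sum(diffs) / len(diffs)

def inside_range(value, low, high):
    """Inside Range actor: true only when the measured motion
    falls between the two tunable thresholds."""
    return low <= value <= high
```

The note about lighting makes sense in these terms: brighter or dimmer rooms shift `motion_amount` up or down, which is why the `low`/`high` band had to be re-tuned in the actual space.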

Screen Shot 2018-10-03 at 10.13.23 PM

I used the Gate actor because I wanted to use movement to trigger this scene, but the scene starts by asking the user to stand up, so I didn’t want movement to trigger the Jump++ actor until they were set up. So, I set this up similarly to the Text Delay user actor, and used the Gate actor to block the connection between Calc Brightness and Inside Range until 17 seconds into the scene. (17 seconds was too long, and something on the screen showing a countdown would have helped the user know what was going on.)
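The Gate actor’s role here can be sketched as a simple condition: a movement trigger only passes through once the scene has been running long enough. A minimal sketch, assuming a 17-second gate as described (class name is hypothetical):

```python
class GatedTrigger:
    """Sketch of the Gate actor: blocks the movement signal until
    `open_after` seconds have elapsed in the scene."""

    def __init__(self, open_after=17.0):
        self.open_after = open_after

    def fire(self, movement_detected, elapsed):
        # Movement only counts once the gate has opened, so the
        # user has time to stand up and get set before the scene
        # starts listening.
        return movement_detected and elapsed >= self.open_after
```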

So, with this, the first scene was built (showing the dancer that their running action can trigger changes). The next scene trained the user that their voice can also trigger changes. For this, I built a user actor that I called “Sound Jump++.” It functions pretty similarly to the Difference Jump++ user actor.

Screen Shot 2018-10-04 at 1.09.17 PM

So, the trigger for most of the scenes is either movement, stillness, sound, or silence. I’ve explained how movement and sound trigger the next scene, but stillness and silence are the absence of movement and sound. So, in addition to including a Difference Jump++ user actor and/or a Sound Jump++ user actor, I had an Enter Scene Trigger to a Trigger Delay to a Jump++ actor. If the user had not triggered a scene change by the time the Trigger Delay fired, it was assumed that the user had chosen stillness and/or silence.

Screen Shot 2018-10-04 at 1.15.45 PM
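Put together, each scene is a race between the user’s triggers and the timeout. A sketch of that decision, with hypothetical scene names standing in for the actual jumps:

```python
def next_scene(trigger, elapsed, timeout):
    """Race between user triggers and the Trigger Delay timeout.
    `trigger` is "movement", "sound", or None; scene names below
    are placeholders for the real Jump++ destinations."""
    if trigger == "movement":
        return "ran_away"        # Difference Jump++ fired first
    if trigger == "sound":
        return "shouted"         # Sound Jump++ fired first
    if elapsed >= timeout:
        return "stayed_still"    # timeout read as stillness/silence
    return None                  # scene keeps waiting
```

This also shows where the in-class failure mode could hide: if the Inside Range band never fires, every run falls through to the timeout branch, which would produce the same storyline each time.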

Then, which scene we jumped to depended on how the scene change was triggered.

Next, I came up with a storyline involving bears, bees, robots, mythical creatures, airlift rescues, and search and rescue units, and I downloaded videos and sounds to represent these events. Unfortunately, either things were jumping to the wrong scenes or the Inside Range values were set incorrectly, because this “choose your own adventure” story didn’t function in performance, and we kept getting the same storyline when we tried it in class. I learned a lot in the process of building it, though!


Robot Gender Assumptions

https://www.wired.com/story/robot-gender-stereotypes/?mbid=synd_digg


Stop Pretending

https://www.pcmag.com/news/364132/california-law-bans-bots-from-pretending-to-be-human

 


Convince me you are human

https://www.sciencemag.org/news/2018/09/want-convince-someone-you-re-human-one-word-could-do-trick?utm_campaign=googleeditorschoice


Loop Diver Artistic approach


Projection mapping


Infrared Tracking

A wonderful tutorial for how to do advanced Infrared Tracking in Isadora.

 


PP2

Task: For our second pressure project, we were tasked with creating a fortune teller machine. I found this task to be interesting and just the right thing to help me focus on learning and exploring a few tools I might use in my thesis. In particular, this project allowed me to explore the abilities of FUNGUS, a Unity plugin used to help organize branching narratives.

Process: I first started by looking a little deeper into the task. After my last project, I didn’t want this system to have an “on the nose” quality. Therefore, I decided to go with spirit animals. The animals have a fortune, so to speak, but also, in the context of a vessel that predicts your future, they have a more personal connection. Your animal is the path.

I started with 10 animals, but ultimately settled on 6 and created a tree diagram, starting with the animals at the bottom. I knew I had to have the user select objects, and the best representations I could find given the time constraints came from a model library from Google. From this library, I extracted 2D images/sprites and used those as the “items” one would choose in order to proceed along their chosen path. The items consisted of hats, shoes, and even some places like the sky.
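The tree diagram maps sequences of item choices down to an animal. A minimal sketch of that structure in Python — the specific nodes, items, and animals here are made up for illustration, since only hats, shoes, and the sky are named above:

```python
# Hypothetical slice of the choice tree: each node maps a chosen
# item to either the next node or a final spirit animal (a leaf).
tree = {
    "start":  {"hat": "node_a", "shoes": "node_b"},
    "node_a": {"sky": "Owl", "river": "Bear"},
    "node_b": {"sky": "Wolf", "river": "Fox"},
}

def walk(tree, choices, node="start"):
    """Follow a sequence of item choices from the root to a leaf."""
    for choice in choices:
        node = tree[node][choice]
    return node
```

FUNGUS represents roughly this as a flowchart of blocks, but the underlying branching is the same: each choice narrows the path until one of the six animals remains.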

Creating branches, laying down some music, and making a few background changes led me to a completed project.

 

Feedback: Most of the feedback was positive.  The audience seemed to really like the ability to identify with a certain animal.  It is something I hope to implement in future work.  It really helped to give them a sense of uniqueness even though the choice was one of six.

One thing I might have done differently, or didn’t expect, was that the very first user was interested in pressing the button as fast as they could. In the future, I suppose I should put in some time gates, even though that limits some of the freedom… not sure. I suppose we are conditioned to play video games, or to click on things in mindless rapid succession. We might need creators to force individuals to pause in certain situations so that they actually notice the intent of the author.
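A time gate of the kind mentioned above is essentially a debounce: presses that arrive too soon after the last accepted one are ignored, forcing a pause. A small sketch (the class and cooldown value are my own, not FUNGUS API):

```python
class TimeGatedButton:
    """Minimal time gate: a press within `cooldown` seconds of the
    last accepted press is ignored, forcing the user to pause."""

    def __init__(self, cooldown=2.0):
        self.cooldown = cooldown
        self.last_accepted = None

    def press(self, now):
        # Accept the press only if enough time has passed since the
        # last accepted one; otherwise swallow it.
        if self.last_accepted is None or now - self.last_accepted >= self.cooldown:
            self.last_accepted = now
            return True
        return False
```

The tradeoff noted above still holds: the gate guarantees a pause, but it also refuses input from users who genuinely read fast.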

https://youtu.be/bFmi-yM_dA8

IMG_20180926_000753


How to make a button in the Isadora Window

Screen Shot 2018-09-25 at 3.58.45 PM


PP1 – narrative sound and music

There are few things I like more than conspiracy theories, old science documentaries, and the king of rock and roll. Rarely do these treasures have a chance to coexist yet, in this first pressure project, I had the opportunity to create an auditory narrative of an urban legend using all three of my favorite things.

To speed up my workflow, and to stay within the allotted 5 hours, I made a list of what I needed to do. It began with coming up with what I wanted the story to be and how I wanted that story to be told, and then the steps I needed to accomplish that plan. My list ended up looking like this:

  1. Elvis was always alive
  2. Use Elvis songs; create sentences using multiple audio tapes (inspired by this song I was listening to at the time); create sound bites that are choppy, to create controlled confusion that evokes the sense of a conspiracy
  3. Gather all the sounds that I might use first, import all sounds into After Effects, and cut out the pieces I want to use. Place those sounds in order and then fill in where there are gaps or information needs to be added to the story.

1. Elvis was Always Alive

To better understand the created narrative, one should know the original urban legend from which it was derived (the full story AVAILABLE HERE).

2. Use Elvis Songs

To begin, I took every Elvis song lyric (thanks, wiki) and put them into a plain text doc. There, I picked apart the song lyrics to find the best songs that would match the story theme. Those songs were It’s Impossible, Lonely Man, You Don’t Know Me, All That I Am, and In the Ghetto.

Create sentences using multiple audio tapes.

From there, I knew that I wanted to be clear on parts of the story, so I found various news reports and took just one word from each video to piece together sentences.

Create sound bites that are choppy to create controlled confusion that evokes the sense of a conspiracy.

I didn’t want the entire story retold in word clips, so I took various sound bites that dealt with what I was trying to say, without actually saying it and put them in order of the story. This was to create a little bit of confusion for the listener, who could maybe get at what the story was trying to tell, but only if they drew their own conclusions or made their own assumptions of what was the truth (similar to a conspiracy).

3. Gather all sounds, then fill in what is needed

After importing all the sounds, I realized that I had not addressed the DNA part of the story. I immediately thought of recent videos I had watched and found similar videos on James Watson and the structure of DNA.

Once time ended, I stopped working. I would have liked to add visuals, but with the time constraints, it would have negatively affected the overall presentation. I am pleased with the final outcome and really appreciated the feedback from the class. I think that everyone was spot on in their assessments, and I’m happy that they were able to make the connections I hoped they could make.