Cycle 1 – Anim. Driven Spatialized Audio
Posted: November 7, 2019 Filed under: Uncategorized
This week of Cycle 1, I made more progress toward using this time to develop my thesis work. I decided to show the class the current state of the work and an example of how the project is progressing toward a ‘finalized’ piece.
My goal for this project is to have the visuals projected onto the ground plane and have the audio react to the visuals being produced, giving the viewer an audio-visual experience of animation beyond that of just watching it projected on a screen.


For the moment, the patch works as an interface for 4 separate ways of using visuals to drive audio. The proposed 4 sources are as follows:
1. Video
2. Image
3. Live
4. Multi-Touch
Currently, the only piece I have invested time in showcasing is the “video” source. Given how much effort and time it takes to connect all of the separate pieces (the position-based object orientation, the GUI setup, and the prototyping), I have decided to keep the video portion the main focus.
Beyond this, I have been considering changing the viewing experience from a screen to a top-down projection onto the floor. I also proposed a series of questions to generate conversation around the piece:
- What would you imagine if the image was projected onto the ground?
- If color is incorporated, what do you expect to happen?
- I have the ability to derive a color palette from live imagery – would you all imagine that this affects all the sounds? Or ones from specifically tracked objects?

Feedback from the class (from what I could write down in time) entailed:
- Give us the understanding that based on the visuals, the audio data is being pulled out of the visual space itself.
- Explain that the video is live and that the audio is reacting directly to it.
- What kind of understanding should be gained through working with the experience? What can we experience/learn from it? Or is it purely something to enjoy?
- What do you want us to notice first? Audio? Visuals?
- Where would this piece actually be located? Within a museum? Open space?
- How do you project the visuals to give us more of an audio-visual sensation that drives our understanding?
- How do you make a curious audience aware that it wasn’t just pre-composed?
What I want to accomplish moving into next week:
- Work with the projector in the MoLab to have the animations play on the ground.
- Work with the audio in the MoLab to have the speakers set up in a way that is conducive to correct audio spatialization.
- Find patches/online resources that help with transferring different pieces of audio between different speakers (a starting point is sketched below).
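Whatever environment the patch ends up living in, the equal-power pan law is a common starting point for moving a sound between two speakers. Here is a minimal JavaScript sketch of the idea (the function name and the 0-to-1 pan range are my own choices, not anything from the patch):

function equalPowerGains(pan){
    // pan runs from 0 (fully left speaker) to 1 (fully right speaker)
    var angle = pan * Math.PI / 2; // map 0..1 onto a quarter circle
    return {
        left: Math.cos(angle),  // 1 at pan = 0, fading to 0 at pan = 1
        right: Math.sin(angle)  // 0 at pan = 0, rising to 1 at pan = 1
    };
}

// equalPowerGains(0.5) gives both speakers a gain of about 0.707, so the
// total power (left^2 + right^2 = 1) stays constant as the sound travels.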
Cycle 1 – DroidCo. Backend Development
Posted: November 6, 2019 Filed under: Uncategorized
For cycle 1, I built the backend system for a game I’ve been referring to as “DroidCo.” The goal of the game is to give non-programmers the opportunity to program their own AI in a head-to-head “code-off.”
The game starts with both players in front of sets of buttons (either digital or physical via Makey Makey). The buttons have keywords for the DroidCo. programming language. By pressing these buttons, both players build a program that each of their team’s droids will run. Here’s what a program might look like:
BEGIN
  IF-next-is-enemy
    hack
  END
  WHILE-next-is-empty
    move
  END
END
In addition to BEGIN and END, the DroidCo. language has five action keywords (move, turn-right, turn-left, hack, and skip) and IF/WHILE blocks over eight conditions (next-is-ally, next-is-not-ally, next-is-enemy, next-is-not-enemy, next-is-wall, next-is-not-wall, next-is-empty, and next-is-not-empty), written as, for example, IF-next-is-enemy or WHILE-next-is-empty.
After both players have completed their code, the second phase of the game begins. In this phase, a grid of spaces is populated by droids, half belonging to one player and half to the other. Each second, a “turn” goes by in which every droid takes an action determined by the code written by its owner. One of these actions is “hack,” which causes a droid to convert an enemy droid in the space it is facing into an ally. The goal is to create droids that “hack” all of your opponent’s droids, putting them all under your control.
The backend development is somewhat involved to explain, so I may put the gritty details on my personal WordPress later, but here’s a boiled-down version. We start with a long string that represents the program. For the sample program above it would be: “BEGIN IF-next-is-enemy hack END WHILE-next-is-empty move END END “. We give this string to the tokenizer, which splits it up into an array of individual words. Here’s the tokenizer:
function tokenizer(tokenString){
    var tokens = [];
    var index = 0;
    var nextSpace = 0;
    while (index < tokenString.length){
        nextSpace = tokenString.indexOf(" ", index);
        // Guard against a missing trailing space; without this the loop
        // never terminates if the string doesn't end in " ".
        if (nextSpace == -1) nextSpace = tokenString.length;
        tokens[tokens.length] = tokenString.substring(index, nextSpace);
        index = nextSpace + 1;
    }
    return tokens;
}
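Given the sample string above, this returns ["BEGIN", "IF-next-is-enemy", "hack", "END", "WHILE-next-is-empty", "move", "END", "END"].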
This array is then given to the code generator, which converts it to a code “roadmap” in the form of an array of integers. Here’s the code generator:
function codeGenerator(code, tokens){
    // Action opcodes (-1 to -5) and condition opcodes (-6 to -13) are negative
    // so they can never be confused with jump targets, which are non-negative
    // indices into the code array.
    var actions = {
        "move": -1, "turn-right": -2, "turn-left": -3, "hack": -4, "skip": -5
    };
    var conditions = {
        "next-is-ally": -6, "next-is-not-ally": -7,
        "next-is-enemy": -8, "next-is-not-enemy": -9,
        "next-is-wall": -10, "next-is-not-wall": -11,
        "next-is-empty": -12, "next-is-not-empty": -13
    };
    var nextToken = tokens.shift();
    var dummy;     // index of the placeholder jump target awaiting backpatching
    var whileloop; // index of the top of the current WHILE loop
    while (tokens.length > 0 && nextToken != "END"){
        if (nextToken in actions){
            code[code.length] = actions[nextToken];
        } else if (nextToken.indexOf("IF-") == 0 && nextToken.substring(3) in conditions){
            // Emit the condition opcode, then a -100 placeholder for the
            // jump-if-false target; backpatch it once the body is compiled.
            code[code.length] = conditions[nextToken.substring(3)];
            dummy = code.length;
            code[code.length] = -100;
            codeGenerator(code, tokens);   // compile the IF body up to its END
            code[dummy] = code.length;     // jump here when the condition fails
        } else if (nextToken.indexOf("WHILE-") == 0 && nextToken.substring(6) in conditions){
            whileloop = code.length;       // remember where the loop's test lives
            code[code.length] = conditions[nextToken.substring(6)];
            dummy = code.length;
            code[code.length] = -100;
            codeGenerator(code, tokens);   // compile the loop body up to its END
            code[code.length] = whileloop; // unconditional jump back to the test
            code[dummy] = code.length;     // exit here when the condition fails
        }
        // BEGIN (and anything unrecognized) emits nothing.
        nextToken = tokens.shift();
    }
}
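Running the generator on the sample program produces the roadmap [-8, 3, -4, -12, 7, -1, 3]: position 0 tests next-is-enemy, position 1 holds the backpatched jump-to-3 taken when that test fails, position 2 is hack, position 3 tests next-is-empty, position 4 jumps to 7 (past the end) when that test fails, position 5 is move, and position 6 jumps back to the loop test at position 3.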
This code is then given to the server, which stores both players’ code and controls the game state. Since the server is over 300 lines of code, I won’t be posting it here. Nonetheless, each turn the server runs through the droids, allowing each of them to take an action that modifies the game state. Once they have all acted, it outputs the new game state to the display actors and back into itself so it can run the next round.
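To make the roadmap concrete, here is a hypothetical sketch of how a droid might step through it each turn; every name here (stepDroid, droid.pc, world.performAction, world.checkCondition) is my own stand-in, not the actual server’s API:

// Advance one droid a single step through its roadmap.
// Negative values are opcodes; non-negative values are jump targets.
function stepDroid(droid, code, world){
    if (droid.pc >= code.length){
        droid.pc = 0; // end of roadmap: restart the program (a guess)
        return false;
    }
    var op = code[droid.pc];
    if (op >= 0){
        droid.pc = op;                  // plain jump target: go there
    } else if (op >= -5){
        world.performAction(droid, op); // -1..-5 are actions (move, hack, ...)
        droid.pc = droid.pc + 1;
        return true;                    // the droid has used its turn
    } else {
        // -6..-13 are conditions; the next cell holds the jump-if-false target
        if (world.checkCondition(droid, op)){
            droid.pc = droid.pc + 2;       // true: fall through into the body
        } else {
            droid.pc = code[droid.pc + 1]; // false: jump past the body
        }
    }
    return false; // no action taken yet; the server would keep stepping
}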
Cycle 1: Collaborative Performance
Posted: November 6, 2019 Filed under: Uncategorized
With my initial thoughts about building a “supportive system,” I set up my cycle 1 as a collaborative performance with the help of the audience. I divided people into three groups: performers, lighting helpers, and audience members. The performers were free to dance in the center stage space. The lighting helpers helped me trigger the lighting cues to keep the dance going.
Actually, the lighting helpers’ mission is the heavier one. I have a Kinect capture the depth data of the lighting helpers. They run and then land on a marked line I preset on the floor so that they appear in the depth sensor’s capture area and register a brightness value. When the brightness value is bigger than a certain number, it triggers my lighting cue. Each lighting cue fades to dark after a certain amount of time, so my helpers keep doing the “run and land” movement to keep the performance space bright enough for the audience members to see the dance.
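The real trigger lives in a patch rather than code, but the logic amounts to something like this sketch (the threshold and function names are stand-ins of my own):

// Fire a lighting cue when the depth image gets bright enough, i.e. when
// a helper lands inside the capture area.
function brightnessTrigger(pixels, threshold, fireCue){
    // pixels: grayscale values (0-255) from the depth camera's capture area
    var sum = 0;
    for (var i = 0; i < pixels.length; i++) sum += pixels[i];
    var mean = sum / pixels.length;
    if (mean > threshold) fireCue(); // threshold would be tuned in rehearsal
}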
I also have a lighting cue that shines five different lights one by one, which I set up to function as a countdown system. This cue tells the lighting helpers to get ready and to run once the cue is done.
I want to use the body’s movement to trigger the lighting instead of pressing the “Go” button on the lighting board, so that audience members can be involved in the process and interact. However, I found that the performer seems to become less and less important in my project, since I want the audience members to help with the lighting and really get a sense of how lighting works in a performance. I want to build a space where people can physically move and contribute to the lighting system. I think I want to develop my project as a “tech rehearsal”: I will become the stage manager calling the cues, and my audience members will become my crew members, all working together to run the lighting board.
Cycle 1
Posted: November 5, 2019 Filed under: Uncategorized
Cycle 1 went really well, and I am very excited to continue working on this project. My work centers Black women and moves from Black women at large in the lives of my audience to my own specific experience as a Black woman.
I was very intentional about wanting to craft an environment in the MOLAB, so I used the white scrims to create a circle in the middle of the room onto which I projected my video. However, before the participants could enter the space, they were met with my “Thank a Black Woman” station, which invited them to thank by name a Black woman in their own lives or to give a thank-you to Black women at large. They had the option to speak their thank-you into the microphone that was set up, write the woman’s name on a sticky note and post it to the poster board on the table, or do both. Once they had recorded their thank-you, they could enter the space.
I used an updated version of my patch from PP3 to capture the audio from the recording (whoops, I switched the camera-to-movie actor to be completely manual and forgot to enter the patch right at the top) and so they could see the flashes of the shapes and colors moving on the walls of the circle they stood inside. After everyone said their thank-you and entered the space, the playback of the voiced thank-yous played. This moment was very meaningful; everyone stood along the outside edges, making a circle, and listened to each other. It was very powerful to hold space for Black women in this way… for women who are too often overlooked to be given their flowers for once.
The next scene of the patch is meant to be a “Black women at home” segment, showing me and my mom doing things like washing and doing our hair (often a shared experience between Black mothers and daughters across the globe) and caring for ourselves and our homes. I don’t have that footage yet, so I was unable to show it. However, I plan to bring some physical objects into the space, connected to a Makey Makey, to open and close circuits and switch between various video feeds.
The last scene speaks most specifically to me and my Black womanhood and is a 3-minute clip of a movement exploration of research I am calling “ancestral crossings,” in which I seek to bring about a kinesthetic shift to spirit through movement in order to shift the temporal planes between past and present so that I might commune with my ancestors once again. The last minute includes pre-recorded spoken text narrated by me. The audience spread out and watched as the video played on the three individual scrims. They commented on the texture of having to watch my movement through another person.
As the video came to a close, everyone settled again onto the periphery, further solidifying the circularity of the space, which felt important to me, as circles and circularity carry a great deal of meaning in Black culture and life across the diaspora. The feedback I received was really spectacular. I was particularly interested to know how the “Thank a Black Woman” station would be received, and it was exciting to hear that my classmates wanted that moment to go on even longer. I’m now considering ways to incorporate more, perhaps by pre-recording some thank-yous to play in the background intermittently throughout the third scene.
It was an immense pleasure to have another Black woman in the room when I presented, as we are who I do my work for. Sage Crump of Complex Movements, a Detroit-based artist collective that was in residency at ACCAD last week, came to visit our class after she heard about the work we were doing. It was an honor and a privilege to be in her presence and to hear her thoughts on not only my own work, but also that of my classmates. Thank you for your time and energy, Sage.
You can listen to the thank-yous at the download link below.
PP3 — You can do it!
Posted: November 5, 2019 Filed under: Uncategorized
When we were first given the parameters of this project, I was immediately anxious because I felt like it was outside my skill and capability. This class has definitely been a crash course in Isadora, and I’m still not the most proficient at using the program. The assignment was to spend no more than 8 hours using Isadora to put together an experience that revealed a mystery; bonus points were given for making a group activity that organized people in the space. The idea of mystery was really confusing to me because the only things that immediately came to mind were TouchOSC and Kinect tools that I don’t yet know how to use independently. However, Alex assured me that he would help me learn the components I was unsure of, and he was very helpful in showing me some actors I could use to explore my idea, which was to play a game of “Telephone” with the class.
After having the class line up one after the other in front of the microphone, I used the mic to record the “secret” as it was passed down the line. I played with locomoting some shapes that reacted to the levels of the voices as they spoke into the mic. Initially, I had wanted to use the sound frequency bands actor in addition to the sound level watcher actor, but when I started working on the project at home, the frequency bands would not output any data. Unbeknownst to the participants, I was recording the “secret” as it passed down the telephone so that at the end, I could “reveal” the mystery of what the initial message was and what it became as it went from person to person.
It was actually really exciting to hear the playback and listen to the initial message be remixed, translated into Chinese and then back into English, and finally end in its last iteration. It was fun for the class to hear their voices and see that the flashing, moving shapes on screen actually meant something and were not just arbitrary.
Overall, the most complicated thing about PP3 was learning how to program the camera to start and stop recording the audio. To do this I used a camera-to-movie actor and played with automating the trigger to start, switch to stop, and then stop. Ultimately, I ended up setting up a gate and a toggle to control the flow to the start/stop trigger, and using a keyboard watcher that I manually controlled to open and close the gate so that the triggers could pass through (a rough model of this flow is sketched below).
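Outside Isadora, that control flow amounts to something like this sketch; the key choice and the function names are stand-ins of my own:

var gateOpen = false;  // opened and closed by the keyboard watcher
var recording = false; // the toggle's current state

function onKeyPress(key){
    if (key === "g") gateOpen = !gateOpen; // "g" is an arbitrary stand-in key
}

function onTrigger(){
    if (!gateOpen) return;  // the gate blocks triggers while closed
    recording = !recording; // the toggle alternates start and stop
    if (recording) startRecording();
    else stopRecording();
}

function startRecording(){ /* the camera-to-movie "start" would fire here */ }
function stopRecording(){ /* the camera-to-movie "stop" would fire here */ }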
In spite of my initial concerns, I found this project to be a nice challenge. I had even been exploring ways to integrate a Makey Makey into the presentation, but realized I don’t have enough USB ports on my personal computer to support an Isadora USB key, the USB mic, and a Makey Makey all at once. This was definitely a project shaped by the resources available to me, in regards to both skill and computer systems.
The shapes actors, and the sound level watchers and envelope generators that controlled them; there were four in total. This data fed into a camera-to-movie actor that recorded all the sound picked up by the mic. An enter-scene trigger sent a value of 2 into the movie player, allowing the projector to play the sound recorded in the previous scene.
PP3 — composer
Posted: November 4, 2019 Filed under: Uncategorized
In my pressure project 3, I used a Makey Makey as an intermediary to connect people and Isadora. My idea for this project was to create a music keyboard that people could play randomly to compose their own musical work. I believe that the process of composition is a mystery: people have no idea what kind of music they will produce until the end.
I created an Isadora interface with a piano keyboard and three mystery buttons that triggered four background music tracks and some funny human/robot/animal voices. Then I drew the same interface on paper and connected it through the Makey Makey to Isadora. At first, I asked people to choose one of my colored marker pens and draw on the paper, which meant they could control the color (button). Because a Makey Makey works when people complete a circuit, I also asked them to touch each other without using their hands; I thought that might be fun.
Then the journey started. I made instructions to guide people to play the piano first, and then the guideline introduced them to the other funny/weird buttons so they could explore mixing all the sounds and music. People enjoyed playing it, but they tended to play along with the background music rather than create something new. I feel this was because my instructions jumped ahead too fast, so people barely saw them. Also, the touching part felt weird to them because of the space limitation.
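Since a Makey Makey presents itself to the computer as an ordinary keyboard, the core of this setup can be sketched in a few lines of browser JavaScript; the key-to-sound mapping and file names here are hypothetical:

// Each Makey Makey pad arrives as a plain key press, so one listener
// covers both the piano keys and the mystery buttons.
var sounds = {
    "a": new Audio("piano-c.wav"),
    "s": new Audio("piano-d.wav"),
    "d": new Audio("piano-e.wav"),
    " ": new Audio("background-music-1.mp3") // a "mystery" button
};

document.addEventListener("keydown", function(event){
    var sound = sounds[event.key];
    if (sound){
        sound.currentTime = 0; // restart so rapid presses retrigger the note
        sound.play();
    }
});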
PP3
Posted: October 29, 2019 Filed under: Uncategorized
Pressure project 3 is about revealing a mystery during an interaction with audience members. I thought for a long time about what mystery I wanted to reveal but ended up with nothing. However, I saw the metal box from my welding exercise in art sculpture class, which has a big gap I failed to fill in. I thought this gap could really mean something, since people can never open the box and can never see the inside clearly just by looking through the gap. The metal box can also connect to a Makey Makey and become a button to trigger the system. So, I decided to make a “Mystery Box.”

This is how I connected the box to my computer: as soon as people touch the box, they can see what is “actually” happening inside it. I used Isadora to fake an inside world and convey it through the computer interface.
At the very beginning, I didn’t plan to give this project a message. But while I was creating it, I saw news that a Korean pop artist had died by suicide at her home. She had encountered horrible internet violence, could not bear it anymore, and chose to leave the world. I was shocked by the news since I knew her pretty well, and she was only 25 years old. So I decided to insert a bit of educational purpose: to let people feel what it is like to be judged badly for no reason, and to ask what we can do when we are in that situation.
I used live capture so that users can see their faces on the screen while the hurtful words keep flying in front of them. They have a way to escape this scene: saying no. But I turned down the captured volume so that users really need to shout “no” to reach the level required to escape. I want users to get angry and really say no to the violence.
It was a really great experience to see my classmates trying all together to increase the volume. But in one scene, where I had three projectors showing the live images on the screen, I typed the word “Ugly” and changed the font, and the word turned into other characters I don’t recognize. I should be careful about this, since I don’t know their exact meaning, and it may affect people’s experience if they do.
PP3 – Haunted Isadora
Posted: October 27, 2019 Filed under: Uncategorized
For PP3, I attempted to create the experience of talking to spirits through a computer interface. To do this, I used a simple sound level controller to move between scenes in Isadora.
When a user begins the program, the Isadora patch randomly generates a wait time that must pass before Isadora begins listening. Once the user speaks (or, to my dismay, when the background music plays… oops) the scenes progress and the “little girl” asks them to play hide and seek.
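For reference, the same wait-then-listen flow could be sketched outside Isadora with the Web Audio API; the timings, the 0.1 threshold, and nextScene are all stand-ins, and browsers may require a user gesture before audio capture starts:

// Wait a random 2-10 seconds, then poll the mic level until it crosses
// a threshold, at which point the next scene fires.
var delayMs = 2000 + Math.random() * 8000;
setTimeout(function(){
    navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream){
        var ctx = new AudioContext();
        var analyser = ctx.createAnalyser();
        ctx.createMediaStreamSource(stream).connect(analyser);
        var samples = new Float32Array(analyser.fftSize);
        var poll = setInterval(function(){
            analyser.getFloatTimeDomainData(samples);
            var sum = 0;
            for (var i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
            var level = Math.sqrt(sum / samples.length); // RMS loudness
            if (level > 0.1){ // threshold is a guess; tune by ear
                clearInterval(poll);
                nextScene();
            }
        }, 100);
    });
}, delayMs);

function nextScene(){ /* advance to the "hide and seek" scene here */ }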
After a moment or two, a jump-scare pops up and the little girl has found you. Overall, I wasn’t super proud of the project. It could have had more substance (the original plan was to have multiple “entities” to talk to, but that didn’t come together). I learned a bit about background scene triggers and text actors, so I suppose it wasn’t a total flub.



Pressure Project 3: Magic Words
Posted: October 21, 2019 Filed under: Uncategorized
For this pressure project, I wanted to move away from Isadora, since I prefer text-based programming and it is just something I am more comfortable doing. I also experienced frequent crashes and glitches with Isadora, and I felt I would have a better experience if I used different technology.
For this experience, we had to reveal a mystery in under 3 minutes. I spent some time searching the web for cool APIs and frameworks that could help me achieve this task. Something that ended up inspiring me was Google’s Speak to Go experiment with the Web Speech API.

You can check it out here: https://speaktogo.withgoogle.com/
I found this little experience very amusing and I wondered how they did the audio processing. That’s how I stumbled upon the Web Speech API and the initial idea for the pressure project was conceived.
I had initially planned on using 3D models with three.js that would reveal items contained inside them. The 3D models would be wrapped presents, they would be triggered by a magic word, and then they would open/explode and reveal a mystery inside. However, I ran into a lot of issues with loading the 3D models and with CORS restrictions, and I decided that I did not have enough time to accomplish what I had originally intended.
So, in the end, I decided to use basic 3D objects included with three.js and have them perform certain actions when triggered. The mystery is which specific words cause an action and what that action is (since some are rather subtle); a sketch of the trigger mechanism is below.
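The word-spotting piece looks roughly like this; the magic words and the action functions are hypothetical stand-ins for the project’s real behaviors, and webkitSpeechRecognition is Chrome’s prefixed constructor for the Web Speech API:

// Listen continuously and fire an action whenever a magic word is heard.
var recognition = new webkitSpeechRecognition();
recognition.continuous = true;
recognition.onresult = function(event){
    var result = event.results[event.results.length - 1];
    var transcript = result[0].transcript.toLowerCase();
    if (transcript.indexOf("abracadabra") !== -1) spinCube();   // hypothetical
    if (transcript.indexOf("presto") !== -1) changeColor();     // hypothetical
};
recognition.start();

function spinCube(){ /* rotate a three.js mesh here */ }
function changeColor(){ /* swap a mesh's material color here */ }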

You can get the source code here: https://drive.google.com/file/d/1RbOzD3Ktrbp2VpqDNj6SQSWb83hOEA2U/view?usp=sharing
Pressure Project 2: Audio Story
Posted: October 21, 2019 Filed under: Uncategorized
Prompt
This Pressure Project was originally offered to me by my Professor, Aisling Kelliher:
Topic – narrative sound and music
Select a story of strong cultural significance. For example this can mean an epic story (e.g. The Odyssey), a fairytale (e.g. Red Riding Hood), an urban legend (e.g. The Kidney Heist) or a story that has particular cultural meaning to you (e.g. WWII for me, Cuchulainn for Kelliher).
Tell us that story using music and audio as the primary media. You can use just audio or combine it with images/movies if you like. You can use audio from your own personal collections, from online resources, or created by you (the same goes for any accompanying visuals). You should aim to tell your story in under a minute.
You have 5 hours to work on this project.
Process
I interpreted “music and audio as the primary media” to mean that the audio should either stand on its own or change the meaning of the visual from what it would mean alone.
I was also working on this project concurrently with a project for my Storytelling for Design class, in which we were required to make a 30-second animation describing a how-to process. The thoughts and techniques employed in this project were directly influenced by hours of work on that one.
After the critique of that project, I had a very good sense for timing, sound, and creating related meaning from the composition of unrelated elements.
I blocked out five hours for my project and began by practicing on the narrative of Little Red Riding Hood. I used the sounds available from soundbible.com, a resource introduced in the previously mentioned design class, to try to recreate this narrative from a straight-ahead viewpoint. After starting on my Little Red Riding Hood prototype, I found that I had spent over thirty seconds just introducing a foreshadowing of the wolf, and that this story wouldn’t do.
I then moved on to other wolf-related stories, including a prototype of The Three Little Pigs. My work with these animal noises brought me close to recent life experiences. I had two friends test the story, and then refined it, completing the assignment.
Result
The resulting recording (included below) told the story of a farmer defending his sheep from a wolf using only sound effects, no dialogue.
Critique
The most intriguing part of this project was not the work itself, but rather the final context in which the work existed on presentation day. Most of the other original stories were about large cultural issues, and I presented last. While presenting the piece, I was struck by how much priming affects perception. From the previous examples, the class was primed for something large and culturally controversial, or making a bold statement.
My piece was simple and different as it used no visuals and no words to tell the story. This simplicity was pretty much lost to a group that was primed for something large and controversial. I found the critique unsuccessful in that I did not receive feedback on the work I had created so much as the work I had not created.
From this exercise, I learned the importance of allowing time to contextualize your work and reset the mood when you are presenting a unique piece among a series of unique pieces. Our minds naturally desire to make connections between unconnected things; this is the root of creativity itself. So in the context of coursework, conference presentations, and every interaction in an era where everything exists within a larger frame, it is vital to be clear about distinguishing work that is meant to stand apart. This can often be achieved through means as simple as a title and an introduction. Giving some understanding of whose work it is, why they created it, and what they desired to learn through it gives much better context to critiquers and helps keep the conversation focused and centered for the best learning experience for everyone.