
Cycle 3 | 400m Dash | MakeyMakey

Continuing with the cycles, for cycle 3 I chose to incorporate a MakeyMakey and foil to create a running surface for participants, replacing the laptop's arrow keys. I expected the setup to be relatively straightforward. In previous cycles, I had trouble with automatic image playback, so I decided to make short videos on Adobe Express (which is free). Using this platform, I created the starting video, the audio cue video, and the 400m dash video with the audio cues.


After finalizing my videos and audio cues to my satisfaction, I encountered difficulties getting the MakeyMakey foil to function properly. Through various tests, troubleshooting, and help from Alex, I discovered that participants needed to hold the "Earth" cord while stepping on the foil. Additionally, they needed either sweaty socks or bare feet to activate the MakeyMakey controls. I copied the 400m dash race onto two separate screens and arranged two running areas for my participants. For the two screens and separate runs to work, I had to devise race logic with User Actors, roughly as sketched below.
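The User Actors themselves are Isadora patches rather than code, but the race logic they implement boils down to something like this rough Python/pygame stand-in. The MakeyMakey just sends ordinary arrow-key events, so each foil step arrives as a keypress; the key mapping and the number of presses to finish are placeholders:

```python
# Rough stand-in for the two-lane race logic (placeholder keys/counts).
import pygame

FINISH_PRESSES = 100  # placeholder: foil steps needed to finish the 400m

pygame.init()
pygame.display.set_mode((400, 200))
steps = {"lane 1": 0, "lane 2": 0}
winner = None

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and winner is None:
            # Each foil pad is wired through the MakeyMakey to an arrow key.
            if event.key == pygame.K_LEFT:
                steps["lane 1"] += 1
            elif event.key == pygame.K_RIGHT:
                steps["lane 2"] += 1
            for lane, count in steps.items():
                if count >= FINISH_PRESSES:
                    winner = lane
                    print(f"{winner} wins!")
pygame.quit()
```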

During the presentation, I encountered technical difficulties again. It became apparent that because the participants had possibly sweaty feet, the foil was sticking to them and keeping the MakeyMakey control activated, which caused issues with the race. We quickly realized that I needed to tape down the foil for the race to function properly.

If I were to work on another cycle, I would prioritize ensuring that the running setup functions smoothly and reliably, with both participants able to hear audio from their devices. Additionally, I would expand the project by incorporating race times, a running clock, and possibly personalized race plans tailored to participants’ goal race times or their best race times.


Cycle 2 | 100m Dash

I found myself lacking motivation for my cycle 2 idea, feeling that sticking with my cycle 1 concept was becoming forced. After a discussion with Alex about my thesis interests, we explored some ideas I had been considering. We thought it might be engaging to develop a running simulation where participants experience a first-person sprint, aided by audio cues for speed adjustments. For cycle 2, we decided participants could use their middle and pointer fingers along with the arrow keys to simulate the run, with each button press incrementally advancing the video (see the sketch below).
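As a rough illustration of that mechanic only (the real piece ran in Isadora; the file name and frames-per-press here are made up), each press just nudges the playhead forward a few frames:

```python
# Sketch: each keypress advances a pre-recorded POV run a few frames.
import cv2

FRAMES_PER_PRESS = 3                         # made-up increment per tap
cap = cv2.VideoCapture("100m_dash_pov.mp4")  # hypothetical clip

pos = 0
while True:
    cap.set(cv2.CAP_PROP_POS_FRAMES, pos)
    ok, frame = cap.read()
    if not ok:
        break  # end of the clip = finish line
    cv2.imshow("dash", frame)
    if cv2.waitKey(0) == 27:  # wait for any key; Esc quits
        break
    pos += FRAMES_PER_PRESS  # the run only moves when you press

cap.release()
cv2.destroyAllWindows()
```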

During the presentation, I encountered some technical issues. I realized I needed a better method for sound implementation, since I was relying on GarageBand on my phone, which was not effective because the first-person POV 100m dash video progressed too rapidly. This led to my first feedback suggestion: it was proposed that instead of a 100m dash, a longer race would better showcase the audio cues, allowing participants more time to hear them. Overall, I was pleased with the feedback. Hearing my classmates' responses to the experience, I decided that for cycle 3 I would incorporate a MakeyMakey and foil to create a running surface for participants, replacing the laptop's arrow keys.


Cycle 3 – Unknown Creators

For this cycle I had a few ideas. My first one was that I wanted to add some amount of interactivity, as the project felt stagnant. I wasn't sure how I wanted to do this, though. At first I thought I could go back and use some of the tech I made for Pressure Project 3 that I didn't end up using. Maybe the panels could have knobs on them or something? After talking about where I wanted the project to go with my mentor, it seemed like maybe interactivity wasn't the way to go. Adding layers of interactivity could potentially confuse people as to what the project is about; instead, Scott emphasized expanding the scope to talk more broadly about the subject outside of game development (he gave an example where one side could be a politician talking about the problems of a certain indigenous group, and the other footage of that group and what they actually had issues with). There are certainly ways of adding interactivity, but I did want to expand towards media in general, since people who don't know about the games industry can't meaningfully interact with the piece anyway.

Oftentimes, when I would talk about the project, I would reference many different kinds of media, like film or theater, and I wanted to incorporate examples of what I was talking about from these areas too. I ended up pulling examples from two people who I think are better known than Todd Howard: Guillermo Del Toro, Academy Award-winning director, and Michael Jackson, who uhh… is Michael Jackson. I went and collected footage from the making of Pinocchio, a relatively recent film of Del Toro's that I knew had a ton of talented stop-motion people working on it. This is the video that I used for that: https://www.youtube.com/watch?v=LWZ_K7oKu-o

I pulled an example of a well-regarded stop-motion animator and puppeteer, Georgina Hayns. She is better known among stop-motion enthusiasts and creators, but most lay people probably don't know who she is, including me. I also pulled an interview of Del Toro from CBS: https://www.youtube.com/watch?v=_7xcED5GoaA

For Michael Jackson, I pulled some old interview footage from 1978: https://www.youtube.com/watch?v=fTTl4Vaow5Y

For the person behind the scenes, I decided to go with Brad Buxer, who I knew from my own personal research and intrigue worked with Michael Jackson and other famous creators like Stevie Wonder (I was first told about this via Harry Brewis's video about the origins of the Roblox oof: https://www.youtube.com/watch?v=0twDETh6QaI). Typically in the music industry there are creators who write lyrics or melodies and don't get credited to the same degree, and Brad talks about how easy it is for big creators to pawn off creation onto the people who work under them. This is a whole different issue, but I think this example is really good since Michael Jackson is such a well-known celebrity. Here is Brad Buxer's Masterclass course that I used for footage: https://www.youtube.com/watch?v=qlYQooIyCAI. What's funny is that it was actually speculated that Michael Jackson did the music for Sonic 3; what's interesting is that Brad Buxer is credited for Sonic 3's music, and because he is known for working with Michael Jackson, the credit is usually given to Jackson. This isn't completely relevant to the project, I just thought it was interesting that there is a tie back to the games industry. Come to think of it, Guillermo Del Toro is also really good friends with another well-known game developer, Hideo Kojima. Small world, I guess.

Anyways, same as before, I had to take the footage and throw it into Premiere Pro to splice it and edit in names and such.

Once again I displayed the project, and people felt like I was getting even closer to properly conveying my intent. Orlando hadn't seen any iterations of this project, and his interpretation was very close to what I had intended. There was a lot of great conversation sparked too, which was great to see! Overall I'm very happy with this iteration of the project; I'm thinking about applying for a motion lab residency to continue work on this, but for now I'm done.


Cycle 2 – Unknown Creators

For this cycle I decided that I wanted to take feedback from the previous cycle and try and incorporate it into this next one.

For starters, Jiara had mentioned that the wrinkle and material of the fabric felt meaningful. I hadn't thought about this, but I think she's right, and I wanted to use these ideas. I wasn't going to change the material, but I did end up making the backsides of the panels more wrinkly and the fronts more clean. The clean side would be the side with Todd, and the messy backside would be the unknown people. I felt this worked really nicely: not only did it convey metaphorically that these are people behind the veil, who typically deal with messiness that we don't see, but it also made the images and text harder to see, which I think was in line with my message about how hard it is to find these people.

I also went into Premiere Pro and edited the footage. I needed the footage of the unknown developers arranged so that there were two different people I could put on the panels. I also added names and job titles under all the developers, which I would use within the project. I wanted to express who these people are more directly while still allowing for ambiguity. I remapped the projections, putting the people on their respective panels. I also took Todd's name and stretched it along the ground to better emphasize the perspective puzzle.

The final thing I did was inspired by talking with my mentor, Scott Swearingen. As I described the project to him, he thought it would be interesting if the unknown people were hard to hear in some way, maybe with the footage jumbled or disjointed. I liked this idea, but I didn't want to manually splice the footage. I figured I could have the video jump to random positions while playing, but I didn't want it to be completely random, and I was trying to figure out how long I wanted footage to play for. I remembered something that Afure and others had said about how it felt like the two pieces of footage were talking to each other, like one was the interviewer and the other was the interviewee. As I was thinking about this, I thought it could be interesting to take the audio data from Todd's clip and use it to jumble the other footage.

At first I thought I could just get the audio frequency bands from the Movie Player actor, but for some reason I couldn't do that. I'm not entirely sure why, and I tried looking into it (main forum post: https://community.troikatronix.com/topic/6262/answered-using-frequency-monitoring-in-isadora-3-movie-player); it seemed like it wasn't possible within the Movie Player in this version of Isadora. After talking with Alex, though, it turned out I really just needed to route the audio through BlackHole on the motion lab Mac and then use live capture to get the audio data. We created a custom audio configuration that would play to both BlackHole and the motion lab speakers, and after getting the data I simply compared the bands with a threshold. If the band values went above that threshold, the unknown-creator footage would jump to a random position (sketched below).
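For anyone curious, the trigger logic amounts to something like this Python sketch. It approximates the Isadora patch rather than reproducing it; the device name is the BlackHole loopback, and the band range and threshold are assumptions:

```python
# Approximation of the trigger logic: listen to BlackHole, measure
# band energy, and fire a "jump" whenever it crosses a threshold.
import random

import numpy as np
import sounddevice as sd

SAMPLE_RATE = 48000
BLOCK = 1024
THRESHOLD = 0.05  # assumed band-energy threshold

def on_audio(indata, frames, time_info, status):
    mono = indata[:, 0]
    spectrum = np.abs(np.fft.rfft(mono))
    freqs = np.fft.rfftfreq(len(mono), 1 / SAMPLE_RATE)
    # Energy in a rough voice band; in the patch this was Isadora's bands.
    band = spectrum[(freqs > 100) & (freqs < 3000)].mean()
    if band > THRESHOLD:
        # The patch scrubbed the Movie Player; here we just print it.
        print(f"jump to position {random.random():.2f}")

# "BlackHole 2ch" is the loopback device name on the motion-lab Mac.
with sd.InputStream(device="BlackHole 2ch", channels=1,
                    samplerate=SAMPLE_RATE, blocksize=BLOCK,
                    callback=on_audio):
    sd.sleep(60_000)  # listen for a minute
```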

In terms of setup, everything stayed mostly the same; the only difference this time was the inclusion of a bench on the side of the unknown creators. I wanted people to linger in this area, and I hoped that providing seating would encourage this (it didn't, but it was worth a shot).

Here is the final video:

After everyone saw it, they noticed the things that I had changed, and it seemed that I was moving in the right direction. The changes this time were small, but even those small changes seemed to make a difference. The piece felt a little more cohesive, which is good, and the most validating thing for me was Nathan's reaction. He hadn't seen the first version of this project, and within the span of around 2 or 3 minutes I heard him say "Oh, I get it". He caught that you could only hear the audio from the unknown people when Todd stopped talking and immediately got what the project was about, and I was really happy to hear this. He knew who these people were, so it seems that people with prior knowledge can potentially grasp the intended message. Still, people without prior knowledge are left alienated, and I wanted to address this going forward.


Pressure Project 3 – SP35 Potions 1

I’m putting a bunch of documentation in here!

Images of my camera system 😀

Prototyping lol, I’m thinking of creating a custom knob?

So this project changed right in the middle. I had been thinking more about the ideas I had with perspective puzzles, and I really wanted to save that idea for my cycles, so I tabled it for now. In DESIGN 6300, I had been doing more research into educational games and the concept of serious games, which are games used for non-entertainment purposes (examples could be games used in the classroom, training simulations for new hires, or training simulations for military personnel like pilots). I wanted to steer my project in a different direction, and I was thinking about a conjecture I did while looking into instructional design.

I had found a lot of research looking into the effects of the pandemic and hybrid education, and I had also found a lot of research about integrating educational tools into devices created for the Internet of Things (in general, integrating educational tools with new technologies was a big theme). In that conjecture I tried to imagine what a game-based hybrid learning space could look like, thinking about how massively multiplayer online games (MMOs) represent their public virtual spaces and how that relates to the public spaces in schools that students gravitate towards. I also tried to think about how collaboration and lessons would be handled, inspired by the work I do on the IFAB VR projects: what if chemistry were taught in AR, so that students could get proper hands-on experience using things like distillation sets? The biggest problem I saw actually related to that idea: students need hands-on experience and engagement, which is a downside of hybrid learning; it seems that there is something about being in the room that's important to cultivating a learning environment.

Thinking about this, I wondered if the tactile nature of Makey-Makey controllers could be used to help with hybrid learning experiences. Building off of how Quest to Learn (q2l.org) gamifies its learning, and a previous project I had made that was a wizard battle game using Makey-Makeys, I wanted to see if I could make a Harry Potter-style "potions" class, where the potions are chemicals and the end goal is some sort of chemistry lesson.

One of the first things I did was think about how I wanted to use the Makey-Makey. At first I was thinking about actually using liquids of some sort. There are a lot of simple chemistry lessons that teach concepts like why oil and water don't mix well, and I was hoping that I could work these kinds of lessons into the hardware (plus, water conducts electricity well and oil doesn't, so there is definitely room for some sort of electronic magic).

I started with a cup of water that could be dipped into to complete the circuit, but I found that I wasn't able to control the circuit very well when using liquids. I then thought about how I could mimic actions that one might have to do in a "potions" class, inspired by the kinds of strange chemistry magic that NileRed does (he has a wild YouTube channel where he does wacky chemistry hijinks; here is a video where he turns vinyl gloves into grape soda: https://youtu.be/zFZ5jQ0yuNA?si=EHvenzlJcoIEZPO6). I wanted to capture the physical motions of grinding up a piece of material, pouring that material into a liquid, and stirring to dissolve it and saturate a solution. To do this, I was going to make a mortar-and-pestle controller and a beaker controller. I didn't have a mortar and pestle or a beaker, and I only had 10 hours (now 5 after pivoting) to do the whole project, so I had to get creative. For my mortar and pestle, I filled the inside of a wooden bowl with aluminum foil and taped some to the bottom of a Wii Remote, which looked like this:

I also ended up making this cup and checking in Isadora whether the person was "stirring" and how fast. I did this by putting two pieces of foil on either side of the inside of the cup. The player used a metal spoon to tap both pieces of foil in rapid succession, which created a back-and-forth motion akin to stirring (see the sketch below).
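In code terms, the stirring check amounts to something like this rough Python/pygame stand-in for the Isadora logic; the arrow-key mapping and the timing window are assumptions:

```python
# Stand-in for the stirring check: the two foil strips arrive as
# left/right arrow presses, and fast alternation counts as stirring.
import time

import pygame

ALTERNATION_WINDOW = 0.5  # assumed max seconds between opposite-side taps

pygame.init()
pygame.display.set_mode((200, 100))
last_key, last_time = None, 0.0

while True:
    event = pygame.event.wait()
    if event.type == pygame.QUIT:
        break
    if event.type == pygame.KEYDOWN and event.key in (pygame.K_LEFT, pygame.K_RIGHT):
        now = time.monotonic()
        if last_key is not None and event.key != last_key:
            gap = now - last_time
            if gap < ALTERNATION_WINDOW:
                # Faster alternation = faster stirring.
                print(f"stirring at {1 / gap:.1f} strokes/sec")
        last_key, last_time = event.key, now

pygame.quit()
```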


Video-Bop: Cycle 3

The shortcomings and lessons learned from cycle 2 provided strong direction for cycle 3.

The audience found the rules section of cycle 2 confusing. Although I intentionally labelled these essentials of spontaneous prose as rules, they were more of a framing mechanism to expose the audience to this style of creativity. Given this feedback, I opted to structure the experience so that the audience witnessed me performing video-bop prior to trying it themselves. Further, I tried to create a video-bop performance with media content outlining some of the influences motivating this art.

I first included a clip from a 1959 interview with Carl Jung in which he is asked:

‘As the world becomes more technically efficient, it seems increasingly necessary for people to behave communally and collectively. Now, do you think it possible that the highest development of man may be to submerge his own individuality in a kind of collective consciousness?’.

To which Jung responds:

‘That’s hardly possible, I think there will be a reaction. A reaction will set in against this communal dissociation.. You know man doesn’t stand forever.. His nullification. Once there will be a reaction and I see it setting in.. you know when I think of my patients, they all seek their own existence and to assure their existence against that complete atomization into nothingness or into meaninglessness. Man cannot stand a meaningless life’

Mind that this is from the year 1959. A quick Google search can hint at the importance of that year for Jazz:

Further, I see another overlapping timeline, that of the beat poets:

I want to point out a few events:

29 June, 1949 – Allen Ginsberg enters Columbia Psychiatric Institute, where he meets Carl Solomon

April, 1951 – Jack Kerouac writes a draft of On the Road on a scroll of paper

25 October, 1951 – Jack Kerouac invents “spontaneous prose”

August, 1955 – Allen Ginsberg writes much of “Howl Part I” in San Francisco

1 November, 1956 – Howl and Other Poems is published by City Lights

8 August, 1957 – Howl and Other Poems goes on trial

1 October, 1959 – William S. Burroughs begins his cut-up experiments

While Jung's awareness is not directly tied to American culture or jazz and adjacent artforms, I think he speaks broadly to post-WW2 society. I see Jung as an important and positive actor in advancing the domain of psychoanalysis; he started working at a psychiatric hospital in Zürich in 1900. Nonetheless, critique of the institutions of power and knowledge surrounding mental illness emerged in the 1960s, most notably in Michel Foucault's 1961 Madness and Civilization and in Ken Kesey's (a friend of the beat poets) 1962 One Flew over the Cuckoo's Nest. It's unfortunate that the psychoanalytic method and theory developed by Jung, which emphasizes dedication to the patient's process of individuation, was absent from the treatment of patients in psychiatric hospitals throughout the 1900s. Clearly, Ginsberg's experiences in a psychiatric institute deeply influenced his writing in Howl. I can't help but connect Jung's statement about seeing a reaction setting in to the abstract expressionism in jazz music and the beat movement occurring at this time.

For this reason I included a clip from the movie Kill Your Darlings (2013), which captures a fictional yet realistic conversation between the beat founders Allen Ginsberg, Jack Kerouac, and Lucien Carr upon meeting at Columbia University.

JACK

A “new vision?”

ALLEN

Yeah.

JACK

Sounds phony. Movements are cooked up by people who can’t write about the people who can.

LUCIEN

Lu, I don’t think he gets what we’re trying to do.

JACK

Listen to me, this whole town’s full of finks on the 30th floor, writing pure chintz. Writers, real writers, gotta be in the beds. In the trenches. In all the broken places. What’re your trenches, Al?

ALLEN

Allen.

JACK

Right.

LUCIEN

First thought, best thought.

ALLEN

Fuck you. What does that even mean?!

JACK

Good. That’s one. What else?

ALLEN

Fuck your one million words.

JACK

Even better.

ALLEN

You don’t know me.

JACK

You’re right. Who is you?

Lucien loves this, raises an eyebrow. Allen pulls out his poem from his pocket.


I think this dialogue captures well the reaction setting in for these writers and how they pushed each other in their craft and their position within society. Interesting is Kerouac's refusal to be associated with a movement, a stance he continued to hold into later life when asked about his role in influencing the American countercultural movements of the 60s (1968 interview with William Buckley). Further, this dialogue shows how Kerouac and Carr incited an artistic development in Ginsberg, giving him the courage to break poetic rules and to be uncomfortably vulnerable in his life and work.

For me, video-bop shares intellectual curiosities with those of the beats and a performative, improvised artistic style with that of jazz. So I thought performing a video-bop tune of sorts for the audience prior to them trying it would be a better way to convey the idea than having them read from Kerouac's rules of spontaneous prose. See a rendition of this performance here:


With this long-winded explanation in mind, there were many technical developments driving improvements for cycle 3 of video-bop. Most important was the realization that interactivity and aesthetic reception are fostered better by play than by rigid, time-bound playback. The first two iterations of video-bop utilized audio-to-text timestamping technologies, which were great at enabling programmed multimedia events to occur in time with a pre-recorded audio file. However, after attending a poetry night at Columbus's Kafe Kerouac, where poets read their own material and performed it live, I was inspired to remove the time-bound nature of the media system and have the audience write their own haikus as opposed to using existing ones.

Most of the technical coding effort was spent on the smart-device web-app controller, making it more robust and ensuring no user actions could break the system. I included better feedback mechanisms to let users know that they had completed the steps correctly to submit media for their video-bop. Further, I made use of google-images-download to allow users to pull media from Google Images as opposed to just YouTube, which was an audience suggestion from cycle 2 (a sketch of the kind of call involved is below).

video-bop-cyc3 (link to web-app)
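For reference, a keyword pull with the google-images-download package looks roughly like this. The exact arguments my web app passed aren't reproduced here, and the package's scraping can be fragile, so treat it as a sketch:

```python
# Sketch of a keyword pull with google-images-download; the arguments
# the web app actually passed aren't reproduced here.
from google_images_download import google_images_download

downloader = google_images_download.googleimagesdownload()
paths = downloader.download({
    "keywords": "autumn leaves",   # whatever the user typed for their haiku
    "limit": 3,                    # a few candidates per request
    "output_directory": "downloads",
})
print(paths)
```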

One challenge that I have yet to tackle was the movement of media files, as downloaded by a Python script on my PC, into the Isadora media environment. During the performance this was a manual step: I ran the script, dragged the files into Isadora, and reformatted the haiku text file path.
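One possible way to automate that step (untested, just a sketch with placeholder folder names) would be a small watcher script that copies new downloads into the folder the Isadora media bin links to:

```python
# Untested sketch: poll the download folder and copy anything new into
# the folder the Isadora file links its media from. Paths are placeholders.
import shutil
import time
from pathlib import Path

DOWNLOADS = Path("downloads")          # where the python script saves media
ISADORA_MEDIA = Path("isadora_media")  # folder the Isadora bin points at

seen = set(DOWNLOADS.glob("*"))
while True:
    current = set(DOWNLOADS.glob("*"))
    for new_file in current - seen:
        shutil.copy(new_file, ISADORA_MEDIA / new_file.name)
        print(f"copied {new_file.name}")
    seen = current
    time.sleep(1.0)  # check once a second
```

See the video-bop process here: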