Cycle One Mirror Room

The concept for the whole project is simply to have fun messing with the different video effects that Isadora can create in real time. One way of messing with your image in real time is with a mirror that is warped in some way. The first cycle was simple: deciding what kinds of effects should be applied to the person in the camera. I wanted each one to be unique, so that the effect is something not available in a usual house of mirrors.

The feedback on the effects that took users a bit of time to figure out was positive. They enjoyed messing around in front of the camera until they had a decent idea of how they affected the outcome. The one they thought was lackluster was the effect with lines trailing their image; they were able to figure out almost immediately how they affected the picture. So, for the next cycle, the plan is to update that one effect screen to make it a bit harder to decipher what is going on. Next on the list is to try to get a configuration set up with projectors and projection mapping so the person can be in view of the camera and see what is happening on the screen or projection without blocking the projection or showing up on screen at a weird angle.


Cycle I: History of Bop

Video Bop interpretation of an Excerpt from Jack Kerouac’s ‘History of Bop’


As a jazz enthusiast and young adult, reading Kerouac’s ‘On The Road’ was a transformative experience. In school, I was predominantly interested in the math and sciences and hardly cared to read a book or pick up a pen to write. However, Kerouac’s style, legacy, and approach to writing (and life for that matter) convinced me of the value in these types of intellectual pursuits. Over the last five years or so, I’ve continued to explore Kerouac and other works from the beat canon; one exciting find was his set of spoken word readings called ‘On the Beat Generation’. For cycle I, I focused on the final minute-and-a-half section from his work ‘History of Bop’. I find this writing to be a triumphant portrayal of the evolution of bebop and the cultural changes in America surrounding the genre.

Upon researching the piece, it was interesting to find that it was originally published in the April 1959 issue of a lewd magazine called Escapade – a hidden gem of writing amongst smutty caricatures and 50s advertisements. I was not entirely surprised by this discordant arrangement, though, given that the hero of the beats (Neal Cassady) was, according to Allen Ginsberg, an “Adonis of Denver—joy to the memory of his innumerable lays of girls in empty lots & diner backyards, moviehouses’ rickety rows, on mountaintops in caves or with gaunt waitresses in familiar roadside lonely petticoat upliftings & especially secret gas-station solipsisms of johns, & hometown alleys too” (Howl, 1956). This quality of the beats did not age well, especially from the vantage of sexual equality, but unconventional behavior and criticism are the norm for this eclectic group. What I’d call the most foundational criticism of Kerouac, and of this piece of writing in particular, is in the realm of black appropriation. Scholars like James Baldwin described this ‘untroubled tribute to youthful spontaneity [as] a double disservice—to the black Americans who were assumed to embody its spirit of spontaneity and to Kerouac’s full literary achievement… a romantic appraisal of black inner vitality’ (Scott Saul, FREEDOM IS, FREEDOM AIN’T, pg 56). It cannot be denied that Kerouac and his writer friends were escaping what they feared as the trap of the middle-class white picket fence, and couldn’t have experienced the true reality of being Black in the mid-20th century. Yet their works speak to a deep respect and profound inspiration for the black art of their time (jazz).

One artistic technique embodied by the beats and inspired by jazz is what Kerouac calls ‘spontaneous prose’. Jazz musicians excelled at this rapid invention of musical structures through heightened sonic sensibility and a borderline phantasmal prosthesis of an instrument to their nervous systems, and souls for that matter. The beats took up this approach with the technology of their time, namely the typewriter. Kerouac even wrote a manifesto called ESSENTIALS OF SPONTANEOUS PROSE in which he describes the procedure of writing as ‘the essence in the purity of speech, sketching language is undisturbed flow from the mind of personal secret idea-words, blowing (as per jazz musician) on subject of image.’ Other writers of his time, such as Truman Capote, didn’t see eye to eye with this artistic style and humorously commented, “That’s not writing, that’s typing.” I don’t mean to argue that thoroughly edited and fully composed art is better or worse than spontaneous artforms like jazz and bop-prose; it is just that spontaneous modes of creation akin to ‘play’ can potentially externalize more honestly the private and elusive inner processes occurring in the lawless relational environment of the mind.

It has been a good 70 years since these creative practices surfaced within the avant-garde art scene, and the technologies at society’s disposal for externalizing thought have improved tremendously, especially thanks to the massive sea of networked image and video objects available to the average internet user. Further, a concentrated development of skills in computer programming may be analogous to the level of discipline required to harness artistic technics like the saxophone, piano, pencil, paper, typewriter, voice recorder, and so on. With this long-winded explanation in mind, these ideas are very much the backbone of my inspirations in new media art and what I wish to explore in this three-cycle project.

In cycle 1, I set out to visualize the final section of ‘History of Bop’. I’ve listened to this poem recited countless times and did a lyrical transcription, meaning I listened to the recording and wrote out all of its words. This is a common practice among jazz musicians and writers, as it helps internalize language. Next, I followed Dmytro Nikolaiev’s implementation of the Vosk speech-recognition model in Python to convert the audio file into a transcribed-text JSON file which contained the words and their positions in time as spoken.

Not all the words were accurately transcribed, so I had to correct them manually; AI models are good, but not nuanced enough to decipher all spoken phrases, especially from an unconventional speaker like Kerouac. Further, I did some post-processing of the data to get it into a form that would play nicely with Isadora’s JSON parser actor. The format involved key-value pairs of timestamps and spoken words. Through experimentation, I found that repeating each word in the JSON list at near-millisecond (ms) frequency ensured that it would appear onscreen and remain illuminated consistently as the corresponding audio is spoken. Although the resolution of the movie player actor’s timecode variable was at the ms scale, it didn’t increment consistently enough to predict the values it would trigger; having a large, widespread array of timestamps between the start and end of a word ensures that it will be triggered. Additionally, Isadora plays in the realm of percentages between 0 and 100 rather than the typical time-duration format of video, so I had to convert each timestamp into a percent completed with respect to the total length of the audio clip.
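A minimal sketch of this post-processing step in Python (the 50 ms repetition step and the data shapes here are illustrative assumptions, not my exact values):

```python
import json

def expand_word_timings(words, total_seconds, step_ms=50):
    """Expand each (word, start, end) triple into many percent-keyed entries.

    Isadora's movie player reports position as percent complete (0-100)
    and its timecode doesn't tick every millisecond, so each word is
    repeated on a dense grid of timestamps between its start and end to
    guarantee at least one trigger lands while it is spoken.
    """
    table = {}
    for word, start, end in words:
        t = start
        while t <= end:
            percent = round(100.0 * t / total_seconds, 3)
            table[str(percent)] = word
            t += step_ms / 1000.0
    return table

# Example: two words from a hypothetical 90-second clip.
words = [("bop", 1.0, 1.4), ("history", 2.0, 2.6)]
timing = expand_word_timings(words, total_seconds=90.0)
print(json.dumps(dict(list(timing.items())[:3]), indent=2))
```

The resulting dictionary can then be saved as the JSON file that Isadora’s JSON parser actor reads.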

This allowed me to funnel the current position of a playing audio file through to the JSON parser actor, such that as the timecode increments, the transcribed text displays onscreen exactly in time with the recited poetry. This was exciting on its own, as it is a semi-autonomous method of generating lyric videos. Also, the style of the text was strobe-like, giving it the quality of spoken words: they appear and vanish in an instant. See below the media circuit implemented in Isadora (with the image player disabled) to see the flow of time triggering text and subsequently displaying it on screen.

The next stage in the process was to find imagery representing the ideas that Kerouac is expressing. Following Kerouac’s ‘Setup’ step – ‘The object is set before the mind, either in reality, as in sketching (before a landscape or teacup or old face) or is set in the memory wherein it becomes the sketching from memory of a definite image-object.’ – I used Kerouac’s speaking of the words to serve as the stimulus (object) for mental imagery. Once an image or set of ideas was established in mind through free association (‘mental image blowing’), I would search YouTube to find a clip which best represented this mental image. Instead of using one of the many available, ad-prone sites for converting YouTube videos to .mp4 files, I adapted a Python script using the yt-dlp/yt-dlp library to do so with better speed and precision. This allowed me to quickly find sections of videos, then copy their video URL along with start/end times into a function which would download the video file with a specific name to a designated folder. This method allows more mental energy to be concentrated on thinking of images and finding existing internet representations rather than on downloading and cutting the video segments. In this way a quick flow can be achieved, better mimicking Kerouac’s spontaneous prose method. To add, just as writing is a negotiation between image-thoughts and the language available to one’s tongue and fingers at that moment, video-bop (my new term for this method) is a negotiation between a visualized animation and the medium of available images and videos online. This medium includes not only the content available and the form it takes, but also the algorithmic recommender process personalized by the user’s previous internet activity. For in this intentionally fast-paced creative process, one relies heavily on differentiating search terms to approach an appropriate visualization.
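A minimal sketch of the clip-grabbing step, here driving yt-dlp’s command-line interface rather than reproducing my exact script (the flags are real yt-dlp options; the folder layout and naming scheme are illustrative):

```python
def build_clip_command(url, start, end, name, out_dir="clips"):
    """Build a yt-dlp command that downloads only the [start, end] section
    of a video as MP4, saved under a chosen name in out_dir.

    --download-sections clips the requested time range (ffmpeg required);
    --force-keyframes-at-cuts re-encodes so the cut points are exact.
    """
    return [
        "yt-dlp",
        "--download-sections", f"*{start}-{end}",
        "--force-keyframes-at-cuts",
        "-f", "mp4",
        "-o", f"{out_dir}/{name}.%(ext)s",
        url,
    ]

# Hypothetical usage: grab seconds 12-27 of a video as clips/dreaming.mp4.
cmd = build_clip_command("https://www.youtube.com/watch?v=EXAMPLE", 12, 27, "dreaming")
print(" ".join(cmd))
```

The command list can then be executed with `subprocess.run(cmd, check=True)` inside a loop over found clips.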

When videos were found, they were named with the first ‘semi-unique’ word within the phrase they belonged to, and the length of each video was chosen to match the duration of that phrase, as calculable from the initial audio transcription step. The grouping of phrases is at the video-bopper’s discretion and in accordance with the aesthetic sensibility eminent in jazz’s musical structuring. It is not necessary to find video images in the order in which they are spoken. I hopped around between Kerouac’s phrases freely and would encourage this approach, as it may follow the flow of thinking more closely and it builds natural structures of moments and transitions between moments. This idea was neatly phrased by Mark Turner, a modern tenor saxophone player: “When I’m in the middle of a solo, whenever I am most certain of the next note I have to play, the more possibilities open up for the notes that follow.” (The Jazz of Physics, Stephon Alexander). To riff on Heisenberg’s uncertainty principle, there is an interdependence in knowing the exactness of both a particle’s momentum and its position. To extend this into the domain of thought and artistic expression, perhaps carelessly, it suggests that there is a tradeoff in awareness. When the improvisor’s awareness is tuned most closely to what idea should come next, they may be unaware of the larger artistic structure to emerge. In contrast, if the improvisor’s awareness is tuned to larger timespans and movements in the piece, they may have less awareness of the idea to come next.

As a continuation on methodology: Isadora unfortunately doesn’t reference media artifacts by file path, and instead relies on an internal numbering system as files are uploaded to the project. To accommodate this structure, I manually updated a JSON object to convert between video trigger words and their indices in Isadora. These index values are passed into Isadora’s movie player actor to allow the clips to be visualized in time with the typography and spoken words.
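That conversion amounts to a small dictionary lookup; a sketch with made-up index numbers (the real values depend entirely on upload order in the Isadora project):

```python
# Illustrative mapping from trigger words to Isadora's internal media
# indices; the numbers here are placeholders, not my actual project's.
WORD_TO_INDEX = {
    "dreaming": 1,
    "turn": 2,
    "bop": 3,
}

def index_for(word, default=0):
    """Look up the movie-player index for a trigger word.

    Falls back to a default index (e.g. a neutral clip) when a word has
    no mapped video, so the patch never receives a missing value.
    """
    return WORD_TO_INDEX.get(word.lower(), default)

print(index_for("Dreaming"))  # → 1
```

The returned index is what gets passed into the movie player actor’s movie input.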

Upon showing the project to my classmates and delving into ideas on how it can be more interactive for cycles 2 and 3 to come, many important suggestions were accumulated to form the next direction:

  • Jiang – Words that are action oriented may be good to include for transitions of images (e.g., ‘turn’).
  • Afure – She liked my interpretation of the word ‘Dreaming’, although everyone would have a different interpretation of that word and what image they’d select.
    • There is a TikTok trend of going on Pinterest, searching words, and displaying whatever image the algorithm connects with them.
  • Kiki – Liked the subject matter – she teaches jazz dance, and it would be helpful to have a more interesting way to teach about the genre.
  • Alex – Ways for interactivity – what sorts of ways to automate image generation and allow user thoughts/personalities to be included. Potential for a custom web app.
  • Nathan – Would like to have clicked on links related to the content being shown, as a way to learn more about each part (as informative).

With these suggestions in mind, I plan to explore the use of the Dispatcher class from python-osc (v1.7.1) to build a simple server-hosted web interface open to smart-device users connected to a LAN. A spoken word poem should be found and disseminated to each of the audience’s device surfaces. As an experience, there should be a listening period in which the audience engages with their own forms of active imagination to see what phrases catch their ears and what images become naturally available to them. From there, they should go through the video-bop process and find a clip which matches what they’ve conceived. Then they will connect to the media server and paste the link of that video, the start and end time, as well as the phrase it connects to. The media server will need to collect these audience responses and run the YouTube extraction script to grab all the associated artifacts and make them available for rendering in time with the spoken word poetry. This is the direction I envision for cycle two, along with a diagram of how I see the interaction occurring:
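Setting aside the OSC plumbing itself, the collection logic for those audience submissions could be sketched like this; the pipe-delimited message format and field names are assumptions for now, not a finalized protocol:

```python
def parse_submission(message):
    """Parse one audience submission of the form 'url|start|end|phrase'
    into a record the media server can act on.

    The pipe-delimited format is a placeholder convention for this sketch.
    """
    url, start, end, phrase = message.split("|", 3)
    start, end = float(start), float(end)
    if end <= start:
        raise ValueError("clip end must come after start")
    return {"url": url, "start": start, "end": end, "phrase": phrase.strip()}

submissions = []

def on_submit(address, message):
    """Handler that an OSC dispatcher could map to a '/submit' address."""
    submissions.append(parse_submission(message))

# Hypothetical incoming message from one audience member's device:
on_submit("/submit", "https://youtu.be/EXAMPLE|4.0|9.5| the history of bop ")
print(submissions[0]["phrase"])
```

Each collected record would then be handed to the YouTube extraction script to download its clip for rendering.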


Cycle 1 – Unknown Creators

For my final project I knew I wanted to do a perspective piece of sorts after my work from the last project. After thinking about it for a while, I decided that I would go back to a social issue that occurs in the games industry and other media that require large teams of people. There is a phenomenon where the creation of a piece of art is attributed to one person, even when we know that there are more people behind the scenes. One might refer to a movie as a Hitchcock or Tarantino film, or a play as something by Sondheim. These individuals that get referenced, mostly male, are often leads in some way – main writers, directors, etc. – and they often get more screen time and interviews. The games industry is no different, and there are several big names that get thrown around: Hideo Kojima, Hidetaka Miyazaki, Shigeru Miyamoto, Warren Spector, the list goes on. In order to address and convey this issue to those that don’t know so much about games, I wanted to take an example of a revered creator and juxtapose that person with some others that are relatively unknown – people who did programming, mocap, animation, etc.

In my piece I wanted to focus on a studio that I know quite well, Bethesda, and its lead designer Todd Howard. I found a short interview from GameInformer about his life and how he got into the industry, linked here: How Skyrim’s Director Todd Howard Got Into The Industry (youtube.com). For the people who were lesser known, I am pulling footage from some documentaries made by ex-employees at Bethesda: How Bethesda built the worlds of Skyrim and Fallout (youtube.com), and A SKYRIM DOCUMENTARY | You’re Finally Awake: Nine Developers Recount the Making of Skyrim (youtube.com). In terms of setup, my idea looked a little like this:

Essentially I wanted to have some tall boxes set up in the center of a space (ideally the motion lab). On one side of these boxes, there would be a large projection of Todd Howard, the footage would be stretched over all the boxes so that when looked at as a whole, the image would become clear. On the other side of the boxes, individuals that worked at Bethesda would be projected. Each person would be projected onto one box. The Isadora part is fairly simple, just a bunch of projection mapping and some videos, the setup and space would really be the big issues.

One of the first things I did was go into the motion lab and establish what sorts of resources I would be using, as well as try to get some projection mapping working just to get familiar with the process and environment; Michael Hesmond was very helpful in this regard. I was going to need to use the Mac Studio, that way I could use two projectors at once. We decided to try two different kinds of projectors, a short throw and a long throw, just to see if one would be better than the other. After getting everything set up and using the grid, I got something that kind of looked like this:

I was technically projecting onto adjacent sides instead of across from each other, but this was okay; I really just needed a proof of concept, and I also wanted to see how pixel smearing would look. After doing all this, I determined that I would need these resources:

  • 2x Power Cables for the projectors
  • 2x Extension cables that would plug into the wall
  • 2x HDMI cables
  • 2x Short-throw BenQ projectors; both projector types worked fine, but I wanted to use the short throws because their color was better and the space was relatively small
  • The motion lab switcher, this was important for getting both of the projectors working; we had to do a little debugging to make sure the projections were going to the correct outputs

The only resources I wasn’t sure about were the boxes. There weren’t a lot of them, and I wanted more uniform shapes, so this would have to be reworked. When I came back, I did a bit more work thinking about the setup and resources with Nico and Michael, and I got another setup going:

This time, the setup was moved diagonally. I wanted to do this to give myself more space to work with and because, once audio was incorporated, I needed speakers that I could project audio from in different locations. Instead of boxes, Nico had a great suggestion of using some draped sheets, so we took them and folded them over some movable coat racks. I also played with getting audio to play out of different speakers; I only wanted sound to come out of the back-left and the front-right speaker. Isadora gave me a little bit of trouble: the ‘snd out’ parameter for my movies had to be set to e1-2 for the front speakers and e3-4 for the back speakers, and then the panning needed to be adjusted so that audio would only play out of a specific speaker. After that was figured out, though, we saved my audio settings to the sound board so that it could be loaded up quickly later.

For the cycle 1 presentation, I took a quick video:

I got a lot of great feedback about peoples’ thoughts and feelings on this project:

  • Jiara felt that there was a resonance between the tone of the voice and the quality of the fabric, or perhaps the wrinkly appearance of the fabric. It seemed that for one coat rack the fabric was more pristine and ironed, while on another the fabric was more crumpled, which could connect to the quality of audio or perhaps the autobiographical nature of the footage
  • There was some confusion about the meaning of the piece, for many the relationship between the people and footage wasn’t clear. Was it connected? Was there a back and forth? It almost seemed like one side of the footage was the interviewer and the other side was the interviewee
  • People liked the spatial aspect of what was going on. Alex noted that the sound could be disorienting because there are multiple sources faced towards each other, which is a very good thing to note. Michael and I had talked about this briefly, and this issue could potentially be solved by having the speakers sit under the coat racks and project in the opposite direction, rather than towards each other. But one thing that was good about this audio setup was the dedicated viewing positions it created, since the audio was less distracting in the corners where the perspective was best framed. Moreover, finding spots where both sets of footage could be viewed and heard was fun
  • The biggest thing is that the messaging isn’t clear for those that don’t have context about these people. In general, there needs to be some conveyance of information about these people’s backgrounds, positions, roles, etc. and there could be more done to imply that one person is more well-known than the others. For example, maybe there is more ornate framing around Todd, maybe there is something about the volume of the space that could be played with, maybe the footage of the normal developers is jumbled and fragmented
  • Someone suggested having the racks move, which would be really interesting though I don’t know how that would work haha


PP3/ Cycle 1- Rhythm Imposter

For my cycle 1 I was inspired by the game Among Us. I wanted to put a twist on the game and use sound and body movement to help players decide who was the imposter and who wasn’t. I immediately knew I needed multiple headphones, but I struggled with where to plug these headphones in for Isadora to send the sound to. I realized that I was going to need a multichannel sound system that allows me to connect multiple headphones. I first used a different version of the MOTU UltraLite mk3 Hybrid sound system that had 8 channels on it. It was pretty simple to use, but when I connected it to my computer and tried to send the sound through, it was only reading 2 channels. It took a couple of classes, reading the manual, and asking Alex how to solve this problem, but I solved it.

First, I had each player choose one number, and that number would assign their role as a “regular” or “imposter”. Here is my patch for where everyone chose their roles:

Here is my patch for getting the sound to be sent through different channels. Again in this instance I had 1 song going to 4 channels (headphones) and another song going to 2 channels (headphones).

The purpose of this cycle was to get the MOTU sound system working and to solve the problem of getting different sounds/songs sent through different channels. My first struggle was that I could only get 2 channels working; however, after switching to a bigger MOTU sound system and making sure Isadora read all 8 channels, I was able to achieve my goal.


Cycle 1: Personal Metrics

Overview:
Cycles 1-3 aim to explore the data behind providing personalized running phase zones to a sprinter based on user input, eventually leading to efficient training strategies that improve performance.

Implementation:
The project will utilize Isadora’s control panel feature for interactive user input (Cycle 1). Algorithms for phase zone calculation will be developed, drawing on secondary research for running phase terminology and incorporating height as a variable (Cycle 2). The system will be designed to present users with clear and actionable results, potentially visualizing phase zones on a track diagram for better comprehension (Cycle 3).

Cycle 1 Features:

  1. User Input Interaction: Utilizing Isadora’s control panel feature, users will input their fastest running time, goal time, and height, providing essential data for the calculation process that I will be workshopping in Cycle 2.
  2. Personalization Based on World Records: Users will be able to set their personal best time and goal time, with reference ranges derived from the current Men’s and Women’s world records for sprint events (100m, 200m, and 400m dashes).
  3. Height Adjustment: Height will be factored into the calculation process, potentially influencing the distribution of running phase zones to accommodate individual physical characteristics.
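As a placeholder for the Cycle 2 algorithm, a phase-zone calculation might look like the sketch below; the 30%/70% phase boundaries and the height adjustment are stand-in assumptions to be workshopped, not validated coaching numbers:

```python
def phase_zones(distance_m, height_cm, base_splits=(0.3, 0.7)):
    """Return (start, end) meters for the drive, max-velocity and
    maintenance phases of a sprint.

    The default splits put the drive phase in the first 30% of the race
    and max velocity up to 70%; taller sprinters get a slightly longer
    drive phase. Both rules are illustrative placeholders.
    """
    drive_end, maxv_end = (distance_m * s for s in base_splits)
    drive_end += (height_cm - 175) * 0.05  # +5 cm of drive per cm over 175 cm
    return {
        "drive": (0.0, round(drive_end, 1)),
        "max_velocity": (round(drive_end, 1), round(maxv_end, 1)),
        "maintenance": (round(maxv_end, 1), float(distance_m)),
    }

print(phase_zones(100, 180))
```

A table like this could later feed the track-diagram visualization planned for Cycle 3.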

Pressure Project 3: Puzzle Box

Well, the thing is that the project didn’t work. The idea was for a box to have a puzzle entirely solvable without any outside information; anyone with any background would have a fair chance at solving it. Because I am not a puzzle-making extraordinaire, I had to take inspiration from elsewhere. It just so happens that a puzzle with exactly those requirements was a gift from my Grandpa. It is called a Caesar’s Codex. The puzzle works by presenting the user with four rows of symbols that can slide up and down, and right next to them a single column of symbols. On the back is a grid full of symbols very similar to the ones in the four rows. Finally, there are symbols on the four sides that are similar to the ones on the column next to the four rows.

Now the challenge was to get this fully mechanical system to interact with the user interface created in Isadora. The solution was to use a Makey Makey kit. So the way the user moves the pieces to solve the puzzle needed to change, but the hints to solve the puzzle needed to stay the same. The mechanical puzzle requires flipping the box over constantly to find the right symbol on the grid and then flipping it over again to move the row to the right position. I opted to just have the grid portion be set up to directly solve the puzzle.

The paperclips are aligned in a grid-like pattern for the users to follow. There is one unique paperclip color to indicate the start. The binder clips are used to indicate when an alligator clip needs to be attached to the paperclip. When the alligator clip is attached to the right paperclip, the screen shown in Isadora will change. Unfortunately, I never tested whether or not the paperclips were conductive enough. I assumed they would be, but the coating on the paperclips was resistive enough to not allow a current to flow through them. So, lesson learned: always test everything before it is too late to make any changes, or before you base your entire design on it.