Cycle 2: Video-Bop
Posted: May 1, 2024 Filed under: Uncategorized | Tags: Cycle 2

Feedback from Cycle 1 directed me toward increased audience interactivity with the video-bop experience.
Continuing in the spirit of Kerouac, I was inspired by one of his recordings, American Haikus, in which he riffs back and forth with tenor saxophone player Zoot Sims. Kerouac, not being a traditional musical instrumentalist (per se), recites his version of American Haikus in call and response with Zoot's improvisations:
“The American Haiku is not exactly the Japanese Haiku. The Japanese Haiku is strictly disciplined to seventeen syllables but since the language structure is different I don’t think American Haikus (short three-line poems intended to be completely packed with Void of Whole) should worry about syllables because American speech is something again… bursting to pop. Above all, a Haiku must be very simple and free of all poetic trickery and make a little picture and yet be as airy and graceful as a Vivaldi Pastorella.” (Kerouac, Book of Haikus)
Kerouac's artistic choice to speak simple yet visually oriented 'haikus' allows him to inhabit and influence the abstract sonic space of group-improvised jazz. These haikus are on par with the musical motifs typical of trading, which is when the members of a jazz ensemble take turns improvising over small musical groupings within the form of the tune they are playing. What I find most cool is how you can feel Zoot representing Kerouac's visual ideas in sound, in real time. In this way, a mental visual landscape is more directly shaped by merging musical expression with the higher cognitive layer of spoken language. It is not new for abstract music to be given visual qualities. Jazz pianist Bill Evans described the prolific 'Kind of Blue' record as akin to "Japanese visual art in which the artist is forced to be spontaneous…paint[ing] on a thin stretched parchment…in such a direct way that deliberation cannot interfere" (Evans, Kind of Blue liner notes, 1959).
As a new media artist, I tried to create a media system that would engage the audience in this creative process of translating abstract ideas from one form into another. I believe this practice can help externalize the mental processes of rapid free association. To do so, I had to build a web application accessible to the audience, connected to a cloud database that could be queried from my PC running the Isadora media software. This web app could handle requests from multiple users on their phones or other smart devices with internet access. I used a framework familiar to me: Dash to code the dynamic website, Heroku to host the public server, and AstraDB to store and recall audience-generated data.
See src code for design details
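In lieu of the full source, here is a minimal, framework-agnostic sketch of the submit-and-render flow; the function and field names are mine, not from the actual app. In the real system, Dash supplies the page and callback layer and AstraDB replaces the dict:

```python
from datetime import datetime, timezone

db = {}  # stand-in for the AstraDB table

def submit(user_id: str, words: list[str]) -> None:
    """Store one audience member's free associations, keyed by device."""
    db[user_id] = {"words": words,
                   "at": datetime.now(timezone.utc).isoformat()}

def collective_thoughts() -> list[str]:
    """Flatten every submission into one anonymous list for rendering."""
    return [w for entry in db.values() for w in entry["words"]]

submit("phone-1", ["neon rain", "saxophone"])
submit("phone-2", ["empty diner"])
print(collective_thoughts())  # all words, with no names attached
```

Rendering from the flattened store is what keeps the 'collective thoughts' page anonymous: words are shown without the keys that identify who submitted them.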
The experience started with the audience scanning a QR code to access the website, effectively tuning their phones into an interactive control surface. Next, they were instructed to read Kerouac's Essentials of Spontaneous Prose, which humorously served as the 'rules' for the experience. This was more of a mood-setting device to frame the audience to think about creativity from this spontaneous, image-oriented angle.
Next, I played a series of Kerouac's haikus and instructed the audience to visit a page on the site where they could jot down their mental events for each haiku as they listened to the spoken words and Zoot's musical interpretation. A submit button then sent all their words to the database, and they were dynamically rendered onto the next page, called 'collective thoughts'. This allowed everyone in the audience to anonymously see each other's free associations.
Example from our demo
After reading through the collective image-sketches from the group, we decided on a crowd-favorite haiku to be visualized. The visualization process was equipped to handle multiple YouTube video links with timecode information to align with the time each spoken word occurs in a prerecorded video. This process followed the form of Cycle 1, in which I quickly explored YouTube to gather imagery I thought expressive of the message within the 'History of Bop' poem. This practice forces a negotiation in expression between the original image-thoughts and the videos available on the medium of YouTube, equipped with its database of uploaded content and recommender systems. An added benefit of having the interaction on their personal phones is that it connects to their existing YouTube app and any behavioral history made prior to entering the experience. The page to add media looked like this:
This was the final step, and it allowed tables to be generated within the cloud database in a form that could be post-processed into a JSON file compatible with the visualizing patch I made in Isadora for Cycle 1. I had written a Python script to query the database, download all of the media artifacts, and place them into the proper format.
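The query-and-format step might look something like the sketch below. This is a guess at the shape: the row field names and output layout are hypothetical, standing in for whatever the Cycle 1 patch actually expects.

```python
import json

# Hypothetical row fields: each database row holds a trigger word, a
# YouTube url, and the start/end trim times entered by an audience member.
def rows_to_isadora_json(rows, out_path):
    """Fold database rows into a JSON file for the Isadora patch."""
    doc = {r["word"]: {"url": r["url"],
                       "start": r["start"],
                       "end": r["end"]}
           for r in rows}
    with open(out_path, "w") as f:
        json.dump(doc, f, indent=2)
    return doc
```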
Unfortunately, I didn't have much time to test the media system prior to presentation day, and the database was overwritten due to a design issue: someone submitted a blank form, which overwrote all of the YouTube data entered by the other audience members. For this reason, I was not able to display the final product of their efforts. Yet it was a good point of failure to improve on in Cycle 3. The audience was disappointed not to see the end product, but I took this as a sign that the experience created an anticipation with powerful buildup.
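The fix for Cycle 3 could be as small as a validation guard on writes, sketched here with names of my own choosing: reject empty submissions and merge with prior data instead of replacing it, so one blank form can't wipe the table.

```python
def safe_upsert(db, key, record):
    """Write one submission, refusing blanks and merging with prior data."""
    if not record or not any(record.values()):
        return False  # blank form: ignore instead of overwriting
    db.setdefault(key, {}).update(record)  # merge, never replace wholesale
    return True
```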
cycle three: something that tingles ~
Posted: May 1, 2024 Filed under: Final Project | Tags: Cycle 3

In this iteration, I begin with an intention to:
– scale up what I have in cycle 2 (e.g. the number of sensors/motors, and imagery?)
– check out the depth camera (will it be more satisfying than webcam tracking?)
– try another score for audience participation based on the feedback from cycle 2
– add some touches on space design with more bubble wrap…
Here is how those went…
/scale up/
I added in more servo motors; this went pretty smoothly, and the effects were instant: the number of servos wiggling gives it more of a sense of a little creature.
I also attempted to add more flex/force sensors, but the data communication became very stuck. At times, Arduino told me that my board was disconnected, and the data did not go into Isadora smoothly at all. What I decided is: keep the sensors, and it is okay that their function is not going to be stable; at the least, they serve as tactile tentacles for touching, whether or not they activate the visuals.
I also tried to add a couple more images to have multiple scenes other than the oceanic scene I have been working with since the first cycle. I did make another three different images, but I felt it kind of became too much information packed in there, and I could not decide their sequence and relationship, so I decided to leave them out for now and stick with my oceanic scene for the final cycle.
/depth cam?/
What I noticed with the depth cam at first is that it kept crashing Isadora, which was a bit frustrating and propelled me to work with it "lightly". My initial intention was to see if it might serve better than the webcam for body-position tracking to animate the rope in my scene. But I also noted that accurate tracking seems not to matter too much in this work, so I just wanted to see the potential of the depth cam. I think it does give more accurate tracking, but the downside is that you have to be at a certain distance, with your feet in the frame, before the cam will start tracking your skeleton position; in this case it becomes less flexible than the eye++ actor. What I find interesting with the depth camera, though, is the white, ghosty body imagery it gives, so I ended up layering that over the video. It works especially well in the dark environment.
Here are the final Isadora patches:
/audience participation/
This time the score I decided to play with is: two people at a time, explore it. The rest are observers who can give two verbal cues to the people who are exploring: "pause" and "reverse". Everyone can move around, in proximity or at a distance, at any time.
/space design/
I wrapped and crocheted more bubble-wrap creatures into the space, tangling them through the wire, wall, charger, whatever happened to be in that corner that day. It's like a mycelium growing on whatever environment there is, leaking out of the constructed space.
Feedback from folks and future iterations?
I really appreciate everyone's engagement with this work and the discussions. Several people touched on the feeling of "jellyfish", "little creature", "fragile", "desire to touch with care", "a bit creepy?". I am interested in all those visceral responses. At the beginning of cycle one, I was really interested in this modulation of touch, especially at a subtle scale, which I then found hard to incite with certain technological mechanisms, so it is so delightful to hear that the way the material is composed actually evokes the kind of touch I am looking for. I am also interested in what Alex mentioned about it being like a "visual ASMR", which I am gonna look into further. How to make visual/audio tactile is something that really intrigues me. Also, I think I mentioned earlier that an idea I am working with in my MFA research is "feral fringe", which is more of a sensation-imagery that comes to me, and making works around this idea is actually helping me approach closer to what "feral fringe" refers to for me. I noticed that a lot of the choices I made in this work are very intuitive (more "feel so" than "think so"): e.g. being in the corner, the position of the curtain, the layered imagery, the tilted projector, etc. Hearing people point those out helps me delve further into: what is a palpable sense of "feral fringe" ~
Cycle 3- Who’s the Alien?
Posted: May 1, 2024 Filed under: Uncategorized

For Cycle 3 I wanted to switch the game up a bit and combine the two game ideas I did before. This game would be played mostly through the headphones but still have visuals to follow. The game was about figuring out who the alien is among the humans on a spaceship. I chose two aliens. There would be a captain for each round, and they would choose which 3 players have to take a test to prove their humanity. Each of the tests would happen through the headphones. I didn't change much in terms of Isadora patches. I wanted to keep the game simple but social.
These were the test options.
For every prompt, two questions were the same and one was slightly off. Here is an original question asked for the Instructions prompt.
cycle two
Posted: May 1, 2024 Filed under: Final Project | Tags: Cycle 2

I started my cycle 2 by exploring the 3D rope actor. I was very curious about it but didn't get to delve into it in cycle 1, so I caught up with that. I began with this tutorial, which is really helpful.
The imagery of strings wildly dangling is really intriguing, echoing the video imagery I had for cycle 1. To make it more engaging for the participants, I thought of using the eye++ actor to track the motion of multiple participants and let that affect the location of the strings.
Along with the sensors connected to Arduino from cycle one, I also made some touch sensors with the makeymakey alligator-clip wires, foil, foam, and bubble-wrap slices to give them a squeezy feeling. And I used some crocheted bubble wrap to wrap my Arduino & makeymakey kit inside, with the wires dangling out along the thin bubble-wrap slices like a jellyfish's tentacles; the intention is to give it the vibe of an amoeba-like creature.
I played with projection mapping and space set-up in the molab in Tuesday's class, but I need more time to mess around with it.
The observation and feedback from the audience were really helpful and interesting 🙂 I didn't come up with a satisfying idea regarding instructions for the audience, so I decided to let the audience (two at a time) freely explore the space. I like how people got to crawling on the ground; I notice that this bodily perspective is interesting for exploring this work. I also love the feedback from Alex saying that it is interesting to watch the two people's silhouettes figuring out what is happening, and the sound of murmuring; and I appreciate Alex pointing out the deliberate choice of having the projector set in a corner, hidden and tilted. I am really interested in the idea of reimagining optimal/normal functionality, and instead doing what I may call "tuning to the glitches" (by which I am not referring to the aesthetic of glitch, but a condition of unpredictability, instability, and "feral")
Thinking along these lines, I relate this mode of audiencing to this artist's work:
https://artscenter.duke.edu/event/amendment-a-social-choreography-by-michael-klien
For the final cycle, I am going to:
– now that I have the sensor and servo working, consider increasing their number to "scale up" a little bit
– incorporate sound
– fine-tune webcam tracking
– play with the set design and projection mapping in molab
– continue wondering about modes of audience participation
Cycle 2- The Village
Posted: May 1, 2024 Filed under: Uncategorized

For Cycle 2 I decided to continue my idea of a multi-channel sound system using the headphones and incorporating a game. The game I chose was Werewolf, which is similar to the game Mafia, but I titled it The Village. I created different visual scenes that I could click through using Isadora so that the game had a flow and I could easily moderate. I created these through Canva.
This is the patch I made for when people chose their roles. Depending on which headphones they chose, that would be their role in the game. I had six different media files sent to six separate headphones. I kept the keyboard watcher the same so that all of the audio played at the same time.
This was the patch I used for clicking through each visual. The keyboard watcher was assigned to different letters so I knew which visual would pop up.
What was successful about this: the game was fun, each role was hidden from the others, and the visuals had a nice flow for the game. What I need to clarify are the game rules and how much replay value the game has. I also had the idea of incorporating some kind of sensor to play with as the game goes on.
Cycle One Mirror Room
Posted: April 10, 2024 Filed under: Uncategorized | Tags: cycle 1, Isadora

The concept for the whole project is to just have fun messing with different video effects that Isadora can create in real time. One way of messing with your image in real time is with a mirror that is warped in some way. The first cycle is simple: just deciding what kinds of effects should be applied to the person in the camera. I wanted each one to be unique so the effect is something not available in a usual house of mirrors.
The feedback was positive for the effects that took the user a bit of time to figure out what was going on. They enjoyed messing around in the camera until they had a decent idea of how they affect the outcome. The one they thought was lackluster was the one with lines trailing their image; they were able to figure out almost immediately how they affect the image. So, for the next cycle, the plan is to update that one effect screen to make it a bit harder to decipher what is going on. Next on the list is to get a configuration set up with projectors and projection mapping so the person can be in view of the camera and see what is happening on the screen or projection without blocking the projection or showing up on screen at a weird angle.
Cycle I: History of Bop
Posted: April 10, 2024 Filed under: Uncategorized | Tags: cycle 1

Video-bop interpretation of an excerpt from Jack Kerouac's 'History of Bop'
The History of Bop by Jack Kerouac
As a jazz enthusiast and young adult, reading Kerouac's 'On The Road' was a transformative experience. In school, I was predominantly interested in math and the sciences and hardly cared to read a book or pick up a pen to write. However, Kerouac's style, legacy, and approach to writing (and life, for that matter) convinced me of the value in these types of intellectual pursuits. Over the last five years or so, I've continued to explore Kerouac and other works from the beat canon. One exciting find was his set of spoken-word readings called 'On the Beat Generation'. For Cycle 1, I focused on the final minute-and-a-half section of his work 'History of Bop'. I find this writing to be a triumphant portrayal of the evolution of bebop and the cultural changes in America surrounding the genre.
Upon researching the piece, it was interesting to find that it was originally published in the April 1959 issue of a lewd magazine called Escapade – a hidden gem of writing amongst smutty caricatures and 50s advertisements. I was not entirely surprised by this discordant arrangement, though, given that the hero of the beats (Neal Cassady) was, according to Allen Ginsberg, an "Adonis of Denver—joy to the memory of his innumerable lays of girls in empty lots & diner backyards, moviehouses' rickety rows, on mountaintops in caves or with gaunt waitresses in familiar roadside lonely petticoat upliftings & especially secret gas-station solipsisms of johns, & hometown alleys too" (Howl, 1956). This quality of the beats did not age well, especially from the vantage of sexual equality, but unconventional behavior and criticism are the norm for this eclectic group. What I'd call the most foundational criticism of Kerouac, and of this piece of writing in particular, is in the realm of Black appropriation. Scholars, echoing James Baldwin's critique, have described this 'untroubled tribute to youthful spontaneity [as] a double disservice—to the black Americans who were assumed to embody its spirit of spontaneity and to Kerouac's full literary achievement… a romantic appraisal of black inner vitality' (Scott Saul, Freedom Is, Freedom Ain't, pg. 56). It cannot be denied that Kerouac and his writer friends were escaping what they feared as the trap of the middle-class white picket fence, and could not have experienced the true reality of being Black in the mid-20th century. Yet their works speak to a deep respect and profound inspiration drawn from the Black art of their time (jazz).
One artistic technique embodied by the beats and inspired by jazz is what Kerouac calls 'spontaneous prose'. Jazz musicians excelled at this rapid invention of musical structures through heightened sonic sensibility and a borderline phantasmal prosthesis of an instrument to their nervous systems (and souls, for that matter). The beats took up this approach with the technology of their time, namely the typewriter. Kerouac even wrote a manifesto called ESSENTIALS OF SPONTANEOUS PROSE, in which he describes the procedure of writing as 'the essence in the purity of speech, sketching language is undisturbed flow from the mind of personal secret idea-words, blowing (as per jazz musician) on subject of image.' Other writers of his time, such as Truman Capote, didn't see eye to eye with this artistic style, humorously commenting, "That's not writing, that's typing." I don't mean to argue that thoroughly edited and fully composed art is better or worse than spontaneous artforms like jazz and bop-prose; it is just that spontaneous modes of creation, akin to 'play', can potentially more honestly externalize such private and elusive inner processes occurring in the lawless relational environment of the mind.
It has been a good 70 years since these creative practices surfaced within the avant-garde art scene, and the technologies at society's disposal for externalizing thought have tremendously improved, especially given the massive sea of networked image and video objects available to the average internet user. Further, a concentrated development of skills in computer programming may be analogous to the level of discipline required to harness artistic tools like the saxophone, piano, pencil, paper, typewriter, voice recorder, and so on. With this long-winded explanation in mind, these ideas are very much the backbone of my inspirations in new media art and what I wish to explore in this three-cycle project.
In Cycle 1, I set out to visualize the final section of 'History of Bop'. I've listened to this poem recited countless times and did a lyrical transcription, meaning I listened to the recording and wrote out all of its words. This is a common practice among jazz musicians and writers, since it helps internalize the language. Next, I followed Dmytro Nikolaiev's implementation of the Vosk speech-recognition model in Python to convert the audio file into a transcribed-text JSON file containing the words and their positions in time as spoken.
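With word timestamps enabled, Vosk's recognizer emits JSON chunks shaped roughly like the sample below; a small helper (my own sketch, not from Nikolaiev's write-up) flattens them into the word/timing list used downstream:

```python
import json

# Each call to KaldiRecognizer.Result() yields a JSON chunk like this
# (the shape Vosk documents when per-word timestamps are enabled):
chunk = '''{"result": [
  {"word": "history", "start": 0.42, "end": 0.93},
  {"word": "of",      "start": 0.93, "end": 1.05},
  {"word": "bop",     "start": 1.05, "end": 1.60}
], "text": "history of bop"}'''

def flatten_words(chunks):
    """Collect (word, start, end) triples from recognizer result chunks."""
    words = []
    for c in chunks:
        words.extend((w["word"], w["start"], w["end"])
                     for w in json.loads(c).get("result", []))
    return words

print(flatten_words([chunk]))
```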
Not all the words were accurately transcribed, so I had to correct them manually; AI models are good, but not nuanced enough to decipher all spoken phrases, especially from an unconventional speaker like Kerouac. Further, I did some post-processing of the data to get it into a form that would play nicely with Isadora's JSON parser actor. The format involved key-value pairs of timestamps and words spoken. Through experimentation, I found that repeating each word in the JSON list near the millisecond (ms) frequency ensured that it would appear onscreen and remain illuminated consistently as the corresponding audio was spoken. Although the resolution of the movie player actor's timecode variable was at the ms scale, it didn't increment consistently enough to predict the values it would trigger; consequently, having a large and widespread array of timestamps between the start and end of a word ensures that it will be triggered. Additionally, Isadora plays in the realm of percentages between 0 and 100 rather than the typical time-duration format of videos, so I had to convert each timestamp to a percent completed with respect to the total length of the audio clip.
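The timing trick above can be sketched as follows (the function and field names are mine): each word is spread over many timestamp keys, expressed as a percent of the clip's total length, so Isadora's jittery timecode is guaranteed to land on at least one key while the word is being spoken.

```python
def word_keys_as_percent(words, total_seconds, step_ms=10):
    """words: (word, start_sec, end_sec) triples from the transcription.

    Returns {percent-string: word}, with each word repeated every
    step_ms between its start and end so coarse timecodes can't miss it.
    """
    table = {}
    for word, start, end in words:
        for t_ms in range(int(start * 1000), int(end * 1000), step_ms):
            pct = 100 * (t_ms / 1000) / total_seconds
            table[f"{pct:.2f}"] = word  # key: percent complete, 2 decimals
    return table

# e.g. a 90-second clip where "bop" is spoken from 1.0s to 1.05s:
table = word_keys_as_percent([("bop", 1.0, 1.05)], total_seconds=90.0)
```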
This allowed me to funnel the current position of a playing audio file through to the JSON parser actor, such that as the timecode increments, the transcribed text displays onscreen exactly in time with the recited poetry. This was exciting on its own, as it is a semi-autonomous method of generating lyric videos. Also, the style of the text was strobe-like, giving it the quality of spoken words: they appear and vanish in an instant. See below the media circuit implemented in Isadora (with the image player disabled) to see the flow of time triggering text and subsequently displaying it on screen.
The next stage in the process was to find imagery representing the ideas Kerouac is expressing. Following Kerouac's 'Setup' step: 'The object is set before the mind, either in reality. as in sketching (before a landscape or teacup or old face) or is set in the memory wherein it becomes the sketching from memory of a definite image-object.', I used Kerouac's speaking of the words as the stimulus (object) for mental imagery. Once an image or set of ideas was established in mind through free association ('mental image blowing'), I would search YouTube to find a clip that best represented this mental image. Instead of using one of the many available, ad-prone sites for converting YouTube videos to .mp4 files, I adapted a Python script using the yt-dlp/yt-dlp library to do so with better speed and precision. This allowed me to quickly find sections of videos and copy their video URL along with start/end times into a function that would download the video file with a specific name to a designated folder. This method allows more mental energy to be concentrated on thinking of images and finding existing internet representations rather than on downloading and cutting the video segments. In this way a quick flow can be achieved, better mimicking Kerouac's spontaneous prose method. To add, just as writing is a negotiation between image-thoughts and the language available to one's tongue and fingers at that moment, video-bop (a new term for this method) is a negotiation between a visualized animation and the medium of available images/videos online. This medium includes not only the content available and the form it takes, but also the algorithmic recommender process personalized by the user's previous internet activity. For in this intentionally fast-paced creative process, one relies heavily on differentiating search terms to approach an appropriate visualization.
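A sketch of that download helper, here building the yt-dlp command line rather than invoking the Python API directly (the naming scheme and output folder are my assumptions; it relies on yt-dlp's --download-sections option):

```python
def clip_command(url, start, end, name, out_dir="clips"):
    """Build a yt-dlp command that downloads only [start, end] of a
    video, saved under a chosen trigger-word name in out_dir."""
    section = f"*{start}-{end}"  # yt-dlp's timestamp-range syntax
    return ["yt-dlp",
            "--download-sections", section,
            "--force-keyframes-at-cuts",  # re-encode cuts for clean edges
            "-o", f"{out_dir}/{name}.%(ext)s",
            url]

# Run with e.g. subprocess.run(clip_command(...), check=True)
cmd = clip_command("https://youtu.be/xyz", "0:15", "0:22", "dreaming")
```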
When videos were found, they were named with the first 'semi-unique' word within the phrase they belonged to, and the length of the video was chosen to match the duration of that phrase, as calculable from the initial audio-transcription step. The grouping of phrases is at the video-bopper's discretion and in accordance with the aesthetic sensibility evident in jazz's musical structuring. It is not necessary to find video images in the order in which they are spoken. I hopped around between Kerouac's phrases freely and would encourage this approach, as it may follow the flow of thinking more closely, and it builds natural structures of moments and transitions between moments. This idea was neatly phrased by Mark Turner, a modern tenor saxophone player, who describes: "When I'm in the middle of a solo, whenever I am most certain of the next note I have to play, the more possibilities open up for the notes that follow." (The Jazz of Physics, Stephon Alexander). To riff on Heisenberg's uncertainty principle, there is an interdependence in knowing the exactness of both a particle's momentum and position. To extend this into the domain of thought and artistic expression, perhaps carelessly, it suggests a tradeoff in awareness: when the improvisor's awareness is tuned most closely to what idea should come next, they may be unaware of the larger artistic structure to emerge. In contrast, if the improvisor's awareness is tuned to larger timespans and movements in the piece, they may have less awareness of the idea to come next.
As a continuation on methodology: Isadora unfortunately doesn't read file paths for media artifacts, relying instead on an internal numbering system as files are uploaded to the project. To accommodate this structure, I manually updated a JSON object to convert between video trigger words and their indices in Isadora. These index values are passed into Isadora's movie player actor to allow the clips to be visualized in time with the typography and spoken words.
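That lookup table amounts to something like this (the entries and default below are hypothetical; the indices must match the upload order in the actual Isadora project):

```python
# Trigger word -> Isadora internal media index (hypothetical values).
media_index = {"dreaming": 3, "turn": 7, "bop": 12}

def index_for(word, default=1):
    """Return the movie-player media index for a trigger word,
    falling back to a default clip when a word has no mapping."""
    return media_index.get(word, default)
```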
Upon showing the project to my classmates and delving into ideas on how it could be more interactive for Cycles 2 and 3, many important suggestions accumulated to form the next direction:
- Jiang – Words that are action-oriented may be good to include for transitions of images (i.e. 'turn').
- Afure – She liked my interpretation of the word 'Dreaming', although everyone would have a different interpretation of that word and which image they'd select.
- There is a TikTok trend of going on Pinterest, searching words, and displaying whatever image is algorithmically connected with them.
- Kiki – Liked the subject matter; she teaches jazz dance, and it would be helpful to have a more interesting way to teach about this genre.
- Alex – Ways for interactivity: what sorts of ways to automate image generation and allow user thoughts/personalities to be included. Potential for a custom web app.
- Nathan – Would like to have clicked on links related to the content being shown, as a way to learn more about each part (as informative).
With these suggestions in mind, I plan to explore the use of Dispatcher — python-osc 1.7.1 documentation to build a simple server-hosted web interface open to smart-device users connected to a LAN. A spoken-word poem should be found and disseminated to each audience member's device surface. As an experience, there should be a listening period in which the audience engages their own forms of active imagination to see what phrases catch their ears and which images become naturally available to them. From there, they should go through the video-bop process and find a clip that matches what they've conceived. Then they will connect to the media server and paste the link of that video, the start and end times, and the phrase it connects to. The media server will need to collect these audience responses, run the YouTube extraction script to grab all the associated artifacts, and make them available for rendering in time with the spoken-word poetry. This is the direction I envision for cycle two, along with a diagram of how I see the interaction occurring:
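A first sketch of the collection handler for that plan (the OSC address and argument order are my assumptions, not a built design). With python-osc, a Dispatcher would map the address to this handler and a ThreadingOSCUDPServer would serve it on the LAN:

```python
# With python-osc, the wiring would look roughly like:
#   disp = dispatcher.Dispatcher()
#   disp.map("/videobop/submit", on_submit)
#   osc_server.ThreadingOSCUDPServer(("0.0.0.0", 5005), disp).serve_forever()

submissions = []  # queue consumed by the YouTube extraction script

def on_submit(address, url, start, end, phrase):
    """Collect one audience clip: a YouTube link, trim times, and the
    phrase it visualizes."""
    submissions.append({"url": url,
                        "start": float(start),
                        "end": float(end),
                        "phrase": phrase})

on_submit("/videobop/submit", "https://youtu.be/xyz", "12.0", "19.5",
          "bursting to pop")
```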
Cycle 1 – Unknown Creators
Posted: April 9, 2024 Filed under: Uncategorized

For my final project, I knew I wanted to do a perspective piece of sorts after my work from the last project. After thinking about it for a while, I decided I would go back to a social issue that occurs in the games industry and other media that require large teams of people. There is a phenomenon where the creation of a piece of art is attributed to one person, even when we know there are more people behind the scenes. One might refer to a movie as a Hitchcock or Tarantino film, or a play as something by Sondheim. These individuals that get referenced, mostly male, are often leads in some way (main writers, directors, etc.), and they often get more screen time and interviews. The games industry is no different, and there are several big names that get thrown around: Hideo Kojima, Hidetaka Miyazaki, Shigeru Miyamoto, Warren Spector; the list goes on. In order to address and convey this issue to those who don't know so much about games, I wanted to take an example of a revered creator and juxtapose that person with others who are relatively unknown, people who did programming, mocap, animation, etc.
In my piece I wanted to focus on a studio that I know quite well, Bethesda, and its lead designer Todd Howard. I found a short interview from GameInformer about his life and how he got into the industry, linked here: How Skyrim's Director Todd Howard Got Into The Industry (youtube.com). For the people who were lesser known, I am pulling footage from some documentaries made by ex-employees at Bethesda: How Bethesda built the worlds of Skyrim and Fallout (youtube.com), and A SKYRIM DOCUMENTARY | You're Finally Awake: Nine Developers Recount the Making of Skyrim (youtube.com). In terms of set up, my idea looked a little like this:
Essentially I wanted to have some tall boxes set up in the center of a space (ideally the motion lab). On one side of these boxes, there would be a large projection of Todd Howard, the footage would be stretched over all the boxes so that when looked at as a whole, the image would become clear. On the other side of the boxes, individuals that worked at Bethesda would be projected. Each person would be projected onto one box. The Isadora part is fairly simple, just a bunch of projection mapping and some videos, the setup and space would really be the big issues.
One of the first things I did was go into the motion lab and establish what sorts of resources I would be using, as well as try to get some projection mapping working just to get familiar with the process and environment; Michael Hesmond was very helpful in this regard. I was going to need to use the Mac Studio so that I could use two projectors at once. We decided to try two different kinds of projectors, a short-throw and a long-throw, just to see if one would be better than the other. After getting everything set up and using the grid, I got something that looked like this:
I was technically projecting on the side instead of across from each other, but this was okay, I really just needed a proof of concept and I wanted to also see how pixel smearing would look. After doing all this, I determined that I would need these resources:
- 2x Power Cables for the projectors
- 2x Extension cables that would plug into the wall
- 2x HDMI cables
- 2x Short-throw BenQ projectors; both projector types worked fine, but I wanted to use the short-throws because their color was better and the space was relatively small
- The motion lab switcher, which was important for getting both projectors working; we had to do a little debugging to make sure the projections were going to the correct outputs
The only resource I wasn’t sure about was the boxes. There weren’t many of them, and I wanted more uniform shapes, so this would have to be reworked. When I came back, I did a bit more work on the setup and resources with Nico and Michael, and I got another setup going:
This time, the setup was moved diagonally. I wanted to do this to give myself more space to work with, and because once audio was incorporated, I would need speakers in different locations to project sound from. Instead of boxes, Nico had a great suggestion of using some draped sheets, so we took them and folded them over some movable coat racks. I also played with getting audio to play out of different speakers. I only wanted sound to come out of the back-left and front-right speakers. Isadora gave me a little trouble: the snd out parameter for my movies had to be set to e1-2 for the front speakers and e3-4 for the back speakers, and then the panning needed to be adjusted so that audio would only play out of a specific speaker. Once that was figured out, we saved my audio settings to the sound board so they could be loaded up quickly later.
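The routing logic here boils down to picking a stereo pair with snd out and then panning fully to one side of that pair. As a rough Python sketch of that arithmetic (not Isadora code; I am assuming a simple linear pan law, which may differ from Isadora's actual curve):

```python
def pan_stereo_pair(sample, pan):
    """Linear pan of a mono sample across one stereo pair.
    pan = 0.0 sends everything to the left speaker, 1.0 to the right.
    (Isadora's actual pan law may differ; this just shows the idea.)"""
    return (sample * (1.0 - pan), sample * pan)

# snd out on e3-4 with pan hard left -> sound only from the back-left speaker
back_left, back_right = pan_stereo_pair(1.0, pan=0.0)
```

Panning hard to one side of the pair is what makes a single speaker the only audible source.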
For the cycle 1 presentation, I took a quick video:
I got a lot of great feedback about peoples’ thoughts and feelings on this project:
- Jiara felt there was a resonance between the tone of the voice and the quality of the fabric, or perhaps its wrinkly appearance. On one coat rack the fabric was pristine and ironed, while on another it was more crumpled, which could connect to the quality of the audio or perhaps the autobiographical nature of the footage
- There was some confusion about the meaning of the piece; for many, the relationship between the people and the footage wasn’t clear. Was it connected? Was there a back and forth? It almost seemed like one side of the footage was the interviewer and the other side was the interviewee
- People liked the spatial aspect of what was going on. Alex noted that the sound could be disorienting because there are multiple sources facing each other, which is a very good thing to note. Michael and I had talked about this briefly; the issue could potentially be solved by putting the speakers under the coat racks and having them project outward rather than toward each other. One good thing about this audio setup, though, was that it created dedicated viewing positions: the audio was least distracting in the corners, where the perspective was best framed. Finding spots where both sets of footage could be seen and heard was fun
- The biggest thing is that the messaging isn’t clear for those without context about these people. In general, there needs to be some conveyance of information about their backgrounds, positions, roles, etc., and more could be done to imply that one person is better known than the others. For example, maybe there is more ornate framing around Todd, maybe the volume of the space could be played with, or maybe the footage of the other developers is jumbled and fragmented
- Someone suggested having the racks move, which would be really interesting, though I don’t know how that would work haha
PP3/ Cycle 1- Rhythm Imposter
Posted: April 7, 2024 Filed under: Uncategorized Leave a comment »For my cycle 1 I was inspired by the game Among Us. I wanted to put a twist on the game and use sound and body movement to help players decide who was the imposter and who wasn’t. I immediately knew I needed multiple headphones, but I struggled with where to plug them in so that Isadora could send sound to them. I realized that I was going to need a multichannel sound system that would let me connect multiple headphones. I first used a different version of the MOTU UltraLite mk3 Hybrid sound system that had 8 channels on it. It was pretty simple to use, but when I connected it to my computer and tried to send the sound through, it was only reading 2 channels. It took a couple of classes, reading the manual, and asking Alex for help, but I solved the problem.
First, I had each player choose one number, and that number would assign their role as a “regular” or the “imposter”. Here is my patch for where everyone chose their roles:
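In code, the role-assignment logic of that patch might look something like this (a Python sketch of the idea, not the actual Isadora patch; the function name and the single-imposter default are my assumptions):

```python
import random

def assign_roles(chosen_numbers, num_imposters=1):
    """Map each player's chosen number to 'imposter' or 'regular'.
    Which number becomes the imposter is picked at random."""
    imposters = set(random.sample(chosen_numbers, num_imposters))
    return {n: ("imposter" if n in imposters else "regular")
            for n in chosen_numbers}

# six players, each identified by the number they picked
roles = assign_roles([1, 2, 3, 4, 5, 6])
```

The key property is that every player gets exactly one role and exactly one of them ends up as the imposter.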
Here is my patch for getting the sound sent through different channels. In this instance I had one song going to 4 channels (headphones) and another song going to 2 channels (headphones).
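The channel routing can be sketched as building one multichannel buffer where each song is copied into its assigned output channels (an illustrative Python/NumPy sketch, not how Isadora or the MOTU driver actually works; the function and channel names are mine):

```python
import numpy as np

def route_songs(songs, channel_map, num_channels=8):
    """Copy each song into its assigned channels of a multichannel
    buffer; unassigned channels stay silent."""
    length = min(len(s) for s in songs.values())
    out = np.zeros((length, num_channels))
    for name, channels in channel_map.items():
        for ch in channels:
            out[:, ch] = songs[name][:length]
    return out

songs = {"song_a": np.random.randn(48000), "song_b": np.random.randn(48000)}
# song A to the first four headphone channels, song B to the next two
buffer = route_songs(songs, {"song_a": [0, 1, 2, 3], "song_b": [4, 5]})
```

Each headphone then simply listens to its own channel of the shared 8-channel output.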
The purpose of this cycle was to get the MOTU sound system working and to solve the problem of sending different sounds/songs through different channels. My first struggle was that I could only get 2 channels working; after switching to a bigger MOTU sound system and making sure Isadora read all 8 channels, I was able to achieve my goal.
Cycle 1: Personal Metrics
Posted: April 4, 2024 Filed under: Uncategorized Leave a comment »Overview:
Cycles 1-3 aim to explore the data behind providing personalized running phase zones to a sprinter based on user input, eventually leading to efficient training strategies that improve performance.
Implementation:
The project will utilize Isadora’s control panel feature for interactive user input (Cycle 1). Algorithms for phase zone calculation will be developed, drawing on secondary research for running phase terminology and incorporating height as a variable (Cycle 2). The system will be designed to present users with clear and actionable results, potentially visualizing phase zones on a track diagram for better comprehension (Cycle 3).
Cycle 1 Features:
- User Input Interaction: Using Isadora’s control panel feature, users will input their fastest running time, goal time, and height, providing the essential data for the calculation process that I will be workshopping in Cycle 2.
- Personalization Based on World Records: Users will be able to set their personal best time and goal time, with reference ranges derived from the current Men’s and Women’s world records for sprint events (100m, 200m, and 400m dashes).
- Height Adjustment: Height will be factored into the calculation process, potentially influencing the distribution of running phase zones to accommodate individual physical characteristics.
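Since the actual phase-zone algorithm is the subject of Cycle 2, here is only a hypothetical sketch of what a height-adjusted calculation could look like; the phase names are standard sprint terminology, but the fractions and the height adjustment are placeholder assumptions of mine:

```python
def phase_zones(distance_m, height_cm):
    """Hypothetical sketch: split a sprint distance into phase zones,
    shifting a small share of the distance from the maintenance phase
    into the acceleration phase for taller runners. The fractions,
    the 175 cm reference, and the 0.1%-per-cm shift are placeholders."""
    fractions = {"drive": 0.15, "acceleration": 0.25,
                 "max velocity": 0.40, "maintenance": 0.20}
    # clamp the shift to at most +/- 2% of the total distance
    shift = max(-0.02, min(0.02, (height_cm - 175) * 0.001))
    fractions["acceleration"] += shift
    fractions["maintenance"] -= shift
    return {phase: round(distance_m * f, 1) for phase, f in fractions.items()}

# e.g. a 183 cm sprinter's zones for the 100m dash
zones = phase_zones(100, height_cm=183)
```

Whatever form the real algorithm takes, the zone lengths should always sum back to the full race distance, which is the invariant this sketch preserves.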