cycle 3

For the third iteration of my project, I added two elements: a contact mic tracking my heartbeat and a large-scale projection of my blinking eye. I stood in front of the large eye projection and performed my movements with the smaller face projection trained on my face at all times.

The performance lasted seven minutes, but in the future I could see it running much longer, potentially 30 to 45. The movements that form the score of the piece are my slow, measured blinks, which sync in and out with the blinking of the image projected onto my face and with the large eye projection, along with the slow, deliberate turns of my head from left to right. My movements are slow to the point that it takes the full seven minutes to turn from center to the right, back to center, and then to the left.

The addition of the contact mic picking up my heartbeat felt vital to the piece. With the help of Michael at MOLA, I was able to isolate the low frequencies of my heart beating and have that play for the duration of the performance. It felt crucial for the sound to be live, as it was evident how much it changed with what was happening in the room: it was quite fast at the beginning and gradually slowed, and it would speed up if someone moved in the audience. The feedback I got was that it clearly wasn't pre-recorded because it felt tied to my physical presence in the room, as audience members could see me breathe.
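This kind of low-frequency isolation can be approximated in software with a one-pole low-pass filter. The sketch below is a generic illustration, not the actual MOLA setup; the 100 Hz cutoff is an assumption about where heartbeat thumps sit relative to room noise.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate):
    """Attenuate content above cutoff_hz, keeping the low thump of a heartbeat."""
    # Coefficient from the standard one-pole RC-filter formula.
    x = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    a0, b1 = 1.0 - x, x
    out, prev = [], 0.0
    for s in samples:
        prev = a0 * s + b1 * prev
        out.append(prev)
    return out

# A 60 Hz "thump" passes through a 100 Hz filter; a 4 kHz hiss is strongly attenuated.
sr = 44100
low = [math.sin(2 * math.pi * 60 * n / sr) for n in range(sr // 10)]
high = [math.sin(2 * math.pi * 4000 * n / sr) for n in range(sr // 10)]
low_out = one_pole_lowpass(low, 100, sr)
high_out = one_pole_lowpass(high, 100, sr)
```

In a live setting the same filtering would run continuously on the contact-mic input rather than on prerecorded buffers.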

Besides increasing the duration of the piece, something I would consider changing in a future iteration is providing subtle cues to the audience that they are allowed to come closer to my body as I perform. Cues I would consider include a sign outside the performance space that says something like “viewers are invited to look closely at the artist, but not to touch her,” or perhaps a variety of seating at different viewing distances. Especially in the case of a longer durational piece, seating would become an invitation to stay and look. I was also appreciative of the monitor that was provided to me so I could “mark” my movements, and it reminded me that I had at one point considered having a live feed in the space that showed the performance in a different light, perhaps with a time delay. Alex pointed out that that could be a good idea, especially because there were already many layers of mediation happening within the performance.

I am really proud of this piece. It’s an idea that I had early on in the semester, and I’m really glad I stuck with it and spent all three cycles broadening the performance.


Cycle Three

For the third phase of my project, I refined the Max patch to improve its responsiveness and overall precision. This interactive setup lets the user experiment with hand movements as a means of generating both real-time visuals and synthesized audio. By integrating Mediapipe-based body tracking, the system captures detailed hand and body motion data, which are then used to drive three independent synthesizer voices and the visual components. The result is an integrated multimedia environment where subtle gestures directly influence pitch, timbre, rhythmic patterns, colors, and shapes, allowing for a fluid, intuitive exploration of sound and image.
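The tracking-to-synthesis routing can be sketched in plain Python. Mediapipe reports landmarks as coordinates normalized to [0, 1]; everything else below (which landmarks drive which voice, and the parameter ranges) is an illustrative assumption rather than a description of the actual patch.

```python
# Hypothetical mapping layer from Mediapipe landmarks to three synth voices.
# Each argument is an (x, y) pair normalized to [0, 1], as Mediapipe provides.
def map_to_voices(left_wrist, right_wrist, nose):
    def scale(v, lo, hi):
        # Clamp to [0, 1] before scaling into the parameter range.
        return lo + max(0.0, min(1.0, v)) * (hi - lo)
    return {
        "voice1": {"pitch_hz": scale(1.0 - left_wrist[1], 110.0, 880.0)},  # hand height -> pitch
        "voice2": {"cutoff_hz": scale(right_wrist[0], 200.0, 8000.0)},     # hand x -> filter cutoff
        "voice3": {"rate_hz": scale(1.0 - nose[1], 0.5, 8.0)},             # body height -> rhythm rate
    }

params = map_to_voices((0.5, 0.5), (0.0, 0.0), (0.5, 1.0))
```

In practice a layer like this would sit between the tracker and Max (e.g. sent over OSC), with the dictionary keys standing in for whatever receive names the patch uses.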

Visual component:

Adaptive Visual Feedback:
A reactive visual system has been incorporated, one that responds to the performer’s hand movements. Rather than serving as mere decoration, these visuals translate the evolving soundscape into a synchronized visual narrative. The result is an immersive, unified audio-visual experience in which the musical and visual elements reinforce one another.

Sound component:

Left Hand – Harmonic Spectrum Shaping:
The left hand focuses on sculpting the harmonic spectrum. Through manipulation of partials and overtones, it introduces complexity and depth to the aural landscape. This control over the harmonic series allows for evolving textures that bring richness and variation to the overall sound.
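One common way to realize this kind of harmonic-spectrum control is additive synthesis, where a single control value sets how quickly the upper partials roll off. The sketch below is a generic illustration; the “brightness” mapping and partial count are assumptions, not the actual patch.

```python
# A sketch of spectrum shaping: one "brightness" value (e.g. derived from
# left-hand height) controls the rolloff of the harmonic series.
def partial_amplitudes(brightness, n_partials=8):
    """brightness in [0, 1]: 0 -> near-sine, 1 -> rich spectrum (1/k rolloff)."""
    rolloff = 1.0 + 7.0 * (1.0 - brightness)          # steeper rolloff when dark
    amps = [1.0 / (k ** rolloff) for k in range(1, n_partials + 1)]
    total = sum(amps)
    return [a / total for a in amps]                   # normalize to unit sum

bright = partial_amplitudes(1.0)   # upper partials clearly present
dark = partial_amplitudes(0.0)     # energy concentrated in the fundamental
```

Feeding these amplitudes to a bank of sine oscillators at integer multiples of the fundamental would give the evolving textures described above.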

Right Hand – Synthesizer Control:
The right hand interfaces with a dedicated synthesizer module. In this role, it manages a range of real-time sound production parameters, including oscillator waveforms, filter cutoff points, modulation rates, and envelope characteristics. By manipulating these elements on the fly, the performer can craft melodic lines and dynamically shape the signal.


Cycle 3

For cycle 3, I continued on with my projects from cycles 1 and 2 to make a completed piece. At the end of cycle 2, I left off with most of the models done and compiled into a scene that people could look around. For cycle 3, I finished making the rest of the models, textured everything, animated it, added lighting, and then did some sound design.

For the models, at the end of cycle 2 I had most of the big furniture pieces done, but I still had to make the smaller props that were going to be animated and tell the story. These props included playing cards, paper clips, a mug, a stuffed bear, a lawn mower, and the tic-tac-toe board. These are all things that carry significant memories of my grandmother for me and represent some of the things I remember interacting with the most in her house. They are all pretty generic, and without the context of the house, the background story, or additional things added to the scene, they would seem like ordinary objects without much significance. But that’s sort of the magical thing about memories connected to material things, so I wanted to refrain from adding too much explicit narrative.

When making the textures for everything in the scene, I tried to balance how things look in real life with a more simplistic, stylized version through the use of color and simple patterns. This abstraction was meant to make the space applicable to more people’s memories while still staying connected to my own.

The animations were relatively simple. I wanted to add life and movement to the objects in my scene to show them being used, and then that use fading away at the end: playing a game with the playing cards, drawing tic-tac-toe in the fluffy carpet, getting a drink in my special mug, making paper clip necklaces, and hearing the riding lawn mower outside.

While the objects told the story of the space when used, I really wanted the lights to tell the story of the house over time. Everything starts out in golden hour with warm colors, then fades to blue as my time spent there dwindled, and then fades to gray at the end.
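The lighting arc described above could be keyframed directly in the 3D software, but the underlying math is a simple piecewise color interpolation. A minimal sketch, with illustrative RGB values standing in for the actual palette:

```python
# Hypothetical look colors: warm golden hour -> blue -> gray.
GOLDEN, BLUE, GRAY = (1.0, 0.75, 0.45), (0.35, 0.45, 0.8), (0.5, 0.5, 0.5)

def light_color(t):
    """t in [0, 1]: 0 = start of the piece, 1 = end."""
    def lerp(a, b, f):
        # Component-wise linear interpolation between two RGB tuples.
        return tuple(x + (y - x) * f for x, y in zip(a, b))
    if t < 0.5:
        return lerp(GOLDEN, BLUE, t / 0.5)   # first half: warmth drains to blue
    return lerp(BLUE, GRAY, (t - 0.5) / 0.5) # second half: blue fades to gray
```

Evaluating `light_color` per frame (or baking it to keyframes) would reproduce the dwindling-time effect with a single timeline parameter.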

I went for more abstraction in the sound design as well. I had originally wanted to use some voicemails that I have of my grandmother and some other recordings, but when I put them in the scene, it felt too jarring to hear a person’s voice, and it disrupted the serenity of the scene. I tried a different route, fading written words in and out of the scene, but I felt the same way about those. Both options added too much direct narration to what was happening, and I wanted to leave it more open. Some of this may have come from the strength of these things in my own memory; I wasn’t able to distinguish my personal feelings toward them from the effect the piece would have on other people, so I thought it best not to include them at all. Instead, I added some ambient sound and the sounds of a few things in the room to fill up the space and round it out.

Overall I really enjoyed this creative experience. I liked getting to divide my work into three different cycles and push through different phases of the project as I went. The smaller chunks made the work feel more manageable, and I was able to work through different ideas without worrying about the end product all of the time. This project turned out pretty similarly to what I was envisioning visually in the beginning, but in concept it turned out much more abstract than I had originally intended. One of the things that I wanted to explore when I started this all the way back in cycle 1 was making projects and telling stories that had personal significance to me. I think that the abstraction and lean away from direct narrative came in part from a hesitance to share. It’s one thing to talk about these stories in a classroom setting, and another to create a world and put a visualization of your thoughts, feelings, and memories on screen for everyone to look at and dissect. In the end, though, I thought that this was a great first trial run of telling this type of story, and I definitely feel more comfortable doing so now than I did when this first started.

Feedback:

One of the biggest pieces of feedback I got was a desire for more of the original stories I had proposed to be incorporated into the work. There was a desire for a more emotional work and to lean further into those previous ideas. I really agree with this feedback and if I could go back and do a fourth cycle of this, that would probably be what I focused on. I struggled in this project to figure out how to balance my own memories with the meaning I was instilling into the objects and how much of that I wanted to show explicitly.

There was also still a strong desire for the project to be more interactive and for viewers to be able to move around the space (possibly in VR). Putting this space in a more interactive environment is totally possible and would add a lot of interesting elements, but I chose the medium I did because I wanted to explore something I didn’t see often in media. While it isn’t as advanced or interactive as VR would be, the underlying language and storytelling abilities of the 360 video medium were something I hadn’t seen much of before and was excited about. It would be interesting to compare the experience I made with the same thing in VR and see how people react to the space.

I was happy to hear that people enjoyed watching it and watched it multiple times to try to catch all the different little things happening in the scene. The intention came through!

https://youtu.be/edGPGmAeuAg


Cycle 2

For cycle 2, it was important to me to begin looking at the projection in the Barnett Theatre with my dancers and to understand what I was working on from the “performance” aspect of the RSVP cycle. Something I had noticed from previous dance performances in the Barnett Theatre was that, because the theater is arranged in the round, a lot of top-down projection felt quite flat due to the close proximity of the audience. As a result, I wanted to focus on creating more dimension in my projections. Much of what I have been designing is video projection that is constantly moving.

In cycle 1, I created a series of different scenes to try out in the Barnett and was able to discern that some of them did not read very well as floor projections. I ended up staying with the projection that had a bit more of a “pinched” effect. Another difference in cycle 2 was moving away from depth sensors. Part of this was a resource issue: I was extremely limited on time, and I was concerned that I would spend too much of it finagling with the depth sensor and not enough actually designing the projection I am hoping to use for my MFA project in February. I was also limited in the amount of time I could spend working in the Barnett Theatre, as it is a shared space within the dance department. All of those factors led me to facilitate interaction through a mouse watcher actor, which has a similar effect to the depth sensor but without a sensor that needed to be hung from the grid of the Barnett.

I ended up designing a projection that could track the movements of my dancers in the space. Below is a video of that exploration.  

My dancers shared with me that the movement of the projection was a bit motion-sickness-inducing for them as they danced, and I heard similar feedback from audience members during our cycle 2 demonstration in class. One of my goals going forward is to adjust the speed at which everything moves so that it does not feel overwhelming for either dancers or audiences. I’m discovering that there truly is a fine line in design. I am hoping that the projection and choreography read well together and that all of the design elements will coalesce into the world I’m building. One of my biggest concerns is that audiences will only watch the projection and not the choreography. As all of these elements are being developed together, I know I will have more information by cycle 3 to tell whether I am overdesigning the video projection and thereby flattening the choreography, or whether the projection really does help highlight some of the nuanced gestures and movements in the choreography.


Cycle Two

In the second phase of development, I dedicated my efforts to enhancing and tuning the main patch to elevate its functionality and the performer’s interactive experience. A major focus was the implementation of a dual-hand control system that enriches both the sonic and visual dimensions of the performance.

– Right Hand – Synthesizer Control:  The right hand now commands a synthesizer module. This setup allows the performer to manipulate real-time sound generation parameters such as oscillators, filter frequencies, modulation depths, and envelope shapes. This direct control facilitates dynamic melodic creation and nuanced timbral shifts during performances.

– Left Hand – Harmonic Series Manipulation: The left hand is dedicated to controlling the harmonic series. By adjusting the overtones and harmonic content, the performer can explore sonic textures and add depth to the sound.

Integration of Visual Elements:

– Responsive Visuals:  I’ve integrated a visual component that reacts dynamically to the movements of both hands. The visuals are not just an accompaniment but are designed to be a visual representation of the sonic elements, ensuring a cohesive audiovisual experience.

– Parameter Mapping to X and Y Axes: Various parameters are mapped to the X (horizontal) and Y (vertical) axes of each hand’s movement.
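A common refinement when mapping axis position to sound is to use linear scaling for amplitude-like parameters and exponential scaling for frequency-like ones, so that equal hand motion produces perceptually equal change. A sketch with illustrative ranges (not the actual patch values):

```python
def map_linear(v, lo, hi):
    """Linear mapping: good for amplitude, pan, mix amounts."""
    return lo + max(0.0, min(1.0, v)) * (hi - lo)

def map_exponential(v, lo, hi):
    """Exponential mapping: good for pitch, filter cutoff, LFO rate."""
    v = max(0.0, min(1.0, v))
    return lo * (hi / lo) ** v

# Hypothetical right-hand assignments: x -> cutoff (exponential), y -> amplitude (linear).
cutoff = map_exponential(0.5, 100.0, 6400.0)  # geometric midpoint of the range
amp = map_linear(0.25, 0.0, 1.0)
```

With the exponential curve, the midpoint of the hand’s travel lands on the geometric (musically even) middle of the range rather than the arithmetic one.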

The overarching goal is to evolve the patch into a more advanced system where the performer has comprehensive control over both auditory and visual components. By expanding the capabilities and refining the interaction mechanisms, the performer can craft a more engaging and immersive experience.


cycle 2

For cycle 2, I played with some ideas for projecting behind my live performance. I made a few different patches with the goal that one could be projected at a large scale in the motion lab as an accompanying piece to my live performance.

For the live performance, I have decided to have the projection of my dad’s face with the cut-out eyes fading in and out of opacity. I think this addition could add to the visual confusion and complicate legibility, potentially leaving the audience unsure of what they are watching.

Because the performance is based around slow movement, there is a lot of subtlety to the piece. I want my movements to feel deliberate and have a score to them, so I will be doing some movement practices to work with that. Alisha gave me the feedback that it might be interesting to explore facial movements as the image that is projected is very placid. This idea terrifies me! I like it. 

I also added a sound element to accompany the performance. I used an app called “Hear My Heart” that essentially turns the phone into a very sensitive mic. The mic picks up your heartbeat, but also a lot of whooshing. The feedback I received was that the sound was difficult to decipher as an actual heart, so I want to experiment with a stronger mic, one that doesn’t have as much interference. It was also unpleasant to have to hold my phone’s mic during the performance.

The patches that I designed in Isadora ultimately feel like they’re from a different world than the performance. I took a very close video of my eye blinking on the same beat as the stop-motion video and the live performance. The initial patches were mirrored and distorted to make the eye feel sort of terrifying, which doesn’t fit the emotional feel of the performance. In class, we stripped away the effects, and the video was then just an eye blinking. For the next cycle, I will re-record this clip and try projecting it behind me.


Cycle 2 Project

Naiya Dawson

For Cycle 2, I wanted to utilize the rest of the time left in the semester, as well as the resources we have in this class, to create examples of concepts that I may use for my senior project. I am still going to use the live drawing aspect that I created in Cycle 1, but for Cycle 2 I wanted to play with a new idea. For this cycle I gathered videos I had of my friend dancing, along with videos of different beaches and bodies of water. I used Isadora to layer and combine the different videos, which I want to later project in the motion lab. I created two different scenes, and within the two scenes there are videos on three stages. Each stage contains two or three videos that I layered together, and I want to continue to play with video effects and new ways videos can be presented in Isadora. I created three stages because I want to present each stage on one of three projectors in the motion lab. I attached videos of my Isadora patches and clips of the videos I used.

For Cycle 3, I want to work on moving this to the motion lab and adding the concepts I created in Cycle 1. I also want to add music and research ways to make part of my project interactive. I am also thinking about actors in Isadora that I might want to add to the videos.


Cycle 2

https://youtu.be/Ymn3bq0i1Y0?si=IvjfVNHiVMXmmv-6

For Cycle 2, I started building the 360-degree environment that I talked about in my cycle 1 post. I used some reference photos that I took of my grandmother’s house back in August and started modeling the big pieces of furniture in that room, as those were the most important in setting the space. I then worked on building up the hallways and walls to make the room an actual living space rather than a bunch of furniture scattered around.

Something that was really challenging in this process was allowing myself to be okay with items not being perfectly accurate to real life. A viewer would never be able to tell that the things I’m modeling aren’t one to one, because there is no reference to the space other than what I’ve made, but I had a hard time looking at my project and not seeing the imperfections in it. I also realized that the rooms are filled with too many objects to model at the scale of this project, so deciding which items are important and which can be left out was hard. I decided to stick with the larger furniture pieces and then only make the smaller objects involved in what I’m going to animate, to lighten the load.

There was a lot of good feedback and ideas given in class after showing everyone my project. The first big takeaway was that people enjoyed (and I enjoyed watching people interact with) the video-style format and looking around the environment on individual screens. Quite a few people wished for more interaction and expressed a desire to move around the space, so that is something to consider. I’m hoping that some of that desire will be satisfied once the full animation is in place, but I also like the idea of only being able to watch what’s happening without being able to interact, just as with a real memory.

I’ve also been having lots of personal debates about the route I want to go down with materials and what style I want for the look of the project. There were some good notes about how the grayscale (which was originally left as a placeholder) could be used purposefully to create additional meaning. The overall feel of the environment was described as “dreamworld,” “astral projection-like,” and “shadow realm,” and I enjoy those interpretations of the work.

Next steps are to finish the short film 🙂


Cycle 1 Documentation

For cycle 1, I performed an early draft of my idea for cycle #3.

I am interested in intergenerational storytelling and in how our bodies are containers and expressions of time. I have lately been doing meditative performances with my family that involve blinking on a slow count of 8, as I’m curious about how our counts fall in and out of sync with each other. To add some visual complication to this otherwise simple thought, I performed this meditation with a video projected on my face. The projection is a sort of stop-motion collage made from photos of my dad’s face. The original image is a black-and-white photo, so I placed a cut-out of his face onto a fleshy pink background. The image is printed on rice paper, which adds a haziness to the lines, and its translucency adds a blush of pink to the grey tones. The stop-motion element is me placing cut-outs of his closed eyes on top of the image. I then scanned 30 variations of this simple collage, imported them into a Premiere sequence, and exported it as a video, which became the projected material.

I performed this for about a minute and a half. I have been thinking about it as a durational performance, so I am interested in how it would feel as a full 15-minute experience, or even longer.

I was really excited by the feedback, especially with questions around the “music” of the piece. I loved the idea of including sound elements, especially ones that solidify this idea of internal time keeping. I’ve been experimenting with some heartbeat recording and want to think about how that could be used in the piece. I was also interested in how the audience experienced forgetting that I was a part of the piece. In the future, I’m going to include some shifts in opacity to narrow in on this question of fading legibility. I also might want to add in some variations on movement, but I want to talk to some more people about this.

I’m also running into some issues with the projector itself: it’s maybe too strong for my eyes. I might play with darkening the overall projection, or try buying a projector with even fewer lumens. Overall, I’m really pleased with this performance.


Cycle One

My main project involves creating an interactive performance where both sound and visuals dynamically respond to the performer’s moves. To achieve this, I have developed a Max patch that translates numerical data into sound, utilizing input from Mediapipe. Additionally, I have initiated the development of the visual components that will react to the performance in real-time.

For the upcoming second cycle, my efforts will be concentrated on further developing and refining both the sound and visual components, with particular attention to the auditory aspects of the project, to ensure a seamless and engaging experience.

In the third cycle, my attention will shift towards thorough troubleshooting and fine-tuning of the entire system. The goal for this phase is to address any technical challenges that emerge and enhance the overall performance quality, ensuring reliability and effectiveness during live interactions. This structured approach ensures that each aspect of the project is developed with precision and integrates flawlessly to produce a compelling interactive experience.