Cycle 2

For cycle 2, it was important to me to begin looking at the projection in the Barnett Theatre with my dancers and to understand what I was working on from the “performance” aspect of the RSVP cycle. Something I had noticed from previous dance performances in the Barnett Theatre is that, because the theater is arranged in the round, a lot of top-down projection felt quite flat due to the close proximity of the audience. As a result, I wanted to focus on creating more dimension in my projections. Much of what I have been designing is video projection that is constantly moving.

In cycle 1, I created a series of different scenes to try out in the Barnett and was able to discern that some of them did not read very well as a floor projection. I ended up staying with the projection that had a bit more of a “pinched effect.” Another difference in cycle 2 was moving away from depth sensors. Part of this was a resource issue: I was extremely limited on time, and I was concerned that I would spend too much time finagling with the depth sensor and not enough time actually designing the projection that I am hoping to use for my MFA project in February. I was also limited in how much time I could spend working in the Barnett Theatre, as it is a shared space within the dance department. All of those factors led me to facilitate interaction through a mouse watcher actor, which has a similar effect to the depth sensor but without needing a sensor hung from the grid of the Barnett.
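To document what that swap actually does, here is a minimal sketch, outside of Isadora, of the logic the mouse watcher approach gives me: normalize the mouse position, smooth it, and map it onto the offset of the projected layer. The resolution, offset range, and smoothing amount below are illustrative assumptions, not the values in my patch.

```python
# Minimal sketch of mouse-driven projection movement (illustrative values only).

SCREEN_W, SCREEN_H = 1920, 1080      # assumed control-screen resolution
PAN_RANGE = (-50.0, 50.0)            # assumed projection offset range, in percent
SMOOTHING = 0.1                      # lower = slower, smoother drift

def normalize(x, y):
    """Convert raw mouse pixels into 0.0-1.0 on each axis."""
    return x / SCREEN_W, y / SCREEN_H

def scale(value, out_min, out_max):
    """Map a 0.0-1.0 value into an output range."""
    return out_min + value * (out_max - out_min)

def smooth(previous, target, amount=SMOOTHING):
    """Exponential smoothing so the projection drifts rather than jumps."""
    return previous + (target - previous) * amount

# Tiny demo: feed in a few fake mouse positions and watch the offsets ease in.
pan_x = pan_y = 0.0
for mouse_x, mouse_y in [(960, 540), (1400, 300), (200, 900)]:
    nx, ny = normalize(mouse_x, mouse_y)
    pan_x = smooth(pan_x, scale(nx, *PAN_RANGE))
    pan_y = smooth(pan_y, scale(ny, *PAN_RANGE))
    print(f"projection offset: x={pan_x:.1f}%, y={pan_y:.1f}%")
```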

I ended up designing a projection that could track the movements of my dancers in the space. Below is a video of that exploration.  

My dancers did share with me that the movement of the projection was a bit motion-sickness inducing for them while they were dancing, and I heard similar feedback from audience members during our cycle 2 demonstration in class. One of my goals going forward is to adjust the speed at which everything is moving so that it does not feel overwhelming for either dancers or audiences. I’m discovering that there truly is a fine line in design. I am hoping that the projection and choreography read well together and that all design elements will coalesce into the world that I’m building. One of my biggest concerns is that audiences will only watch the projection and not any of the choreography. As all of these elements are being developed together, I know that by cycle 3 I will have more information about whether I am overdesigning the video projection and thereby flattening the choreography, or whether the video projection really does help highlight some of the nuanced gestures and movements in the choreography.


Cycle Two

In the second phase of development, I dedicated my efforts to enhancing and tuning the main patch to elevate its functionality and the performer’s interactive experience. A major focus was the implementation of a dual-hand control system that enriches both the sonic and visual dimensions of the performance.

– Right Hand – Synthesizer Control:  The right hand now commands a synthesizer module. This setup allows the performer to manipulate real-time sound generation parameters such as oscillators, filter frequencies, modulation depths, and envelope shapes. This direct control facilitates dynamic melodic creation and nuanced timbral shifts during performances.

– Left Hand – Harmonic Series Manipulation: The left hand is dedicated to controlling the harmonic series. By adjusting the overtones and harmonic content, the performer can explore sonic textures and add depth to the sound.

Integration of Visual Elements:

– Responsive Visuals:  I’ve integrated a visual component that reacts dynamically to the movements of both hands. The visuals are not just an accompaniment but are designed to be a visual representation of the sonic elements, ensuring a cohesive audiovisual experience.

– Parameter Mapping to X and Y Axes: Various parameters are mapped to the X (horizontal) and Y (vertical) axes of each hand’s movement.
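As a rough illustration of what that X/Y mapping means in practice, here is a minimal sketch in Python rather than Max; the parameter names and ranges are assumptions for the example, not the values used in the patch.

```python
# Illustrative sketch of dual-hand X/Y-to-parameter mapping (assumed ranges).

def scale(value, lo, hi):
    """Map a normalized 0.0-1.0 coordinate into a parameter range."""
    return lo + max(0.0, min(1.0, value)) * (hi - lo)

def map_right_hand(x, y):
    """Right hand -> synthesizer control."""
    return {
        "filter_cutoff_hz": scale(x, 200.0, 8000.0),   # horizontal = brightness
        "mod_depth":        scale(y, 0.0, 1.0),        # vertical = modulation depth
    }

def map_left_hand(x, y):
    """Left hand -> harmonic series control."""
    return {
        "num_partials":    int(scale(x, 1, 16)),       # horizontal = overtone count
        "partial_rolloff": scale(y, 0.5, 4.0),         # vertical = how quickly overtones fade
    }

# Hand positions arrive as normalized 0.0-1.0 coordinates from the tracker.
print(map_right_hand(0.72, 0.30))
print(map_left_hand(0.15, 0.85))
```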

The overarching goal is to evolve the patch into a more advanced system where the performer has comprehensive control over both auditory and visual components. By expanding the capabilities and refining the interaction mechanisms, the performer can craft a more engaging and immersive experience.


Cycle 2

For cycle 2, I played with some ideas for projecting behind my live performance. I made a few different patches with the goal that they could be projected at a large scale in the motion lab as an accompanying piece to the live performance.

For the live performance, I have decided to have the projection of my dad’s face with the cut-out eyes fading in and out of opacity. I think that this addition could play with visual confusion and legibility, and could potentially leave the audience unsure of what they are watching.

Because the performance is based around slow movement, there is a lot of subtlety to the piece. I want my movements to feel deliberate and to have a score to them, so I will be doing some movement practices to work with that. Alisha gave me the feedback that it might be interesting to explore facial movements, since the projected image is very placid. This idea terrifies me! I like it.

I also added a sound element to accompany the performance. I used an app called “Hear My Heart” that is essentially a very sensitive mic. The mic picks up your heartbeat, but it also picks up a lot of whooshing. The feedback I received was that the sound was difficult to decipher as an actual heart, so I want to experiment with a stronger mic, one that doesn’t have as much interference. It was also unpleasant to have to hold my phone’s mic during the performance.

The patches that I designed in Isadora ultimately feel like they’re from a different world than the performance. I took a very close video of my eye blinking on the same beat as the stop-motion video/the live performance. The initial patches were mirrored and distorted to make the eye feel sort of terrifying, which doesn’t fit the emotional feel of the performance. In class, we stripped away the effects and the video was then just an eye blinking. For the next cycle, I will re-record this clip and try projecting it behind me.


Cycle 2 Project

Naiya Dawson

For Cycle 2 I wanted to utilize the rest of the time left in the semester, as well as the resources that we have in this class, to create examples of concepts that I may use for my senior project. I am still going to use the live drawing aspect that I created in Cycle 1, but for Cycle 2 I wanted to play with a new idea that I had. For this cycle I gathered videos that I had of my friend dancing, along with videos I had of different beaches and bodies of water. I used Isadora to layer and combine the different videos that I want to later project in the motion lab. I created two different scenes, and within the two scenes there are videos on three stages. Each stage contains two or three videos that I layered together, and I want to continue to play with video effects and new ways videos can be presented in Isadora. I created three stages because I want to present each stage on one of three projectors in the motion lab. I attached videos of my Isadora patches and clips of the videos I used.

For Cycle 3 I want to work on moving this to the motion lab and adding the concepts I created in Cycle 1. I also want to add music and research ways I can have an interactive part of my project. I am also thinking about actors in Isadora that I might want to add to the videos.


Cycle 2

https://youtu.be/Ymn3bq0i1Y0?si=IvjfVNHiVMXmmv-6

For Cycle 2, I started building the 360 degree environment that I talked about in my cycle 1 post. I used some reference photos that I took of my grandmother’s house back in August and started modeling the big pieces of furniture in that room, as those were the most important in setting the space. I then worked on building up the hallways and walls to make the room an actual living space rather than a bunch of furniture scattered around.

Something that was really challenging in this process was allowing myself to be okay with items not being perfectly accurate to real life. A viewer would never be able to tell that the things I’m modeling aren’t one to one, because there is no reference to the space other than what I’ve made, but I had a hard time looking at my project and not seeing the imperfections in it. I also realized that the rooms are filled with too many objects to be modeled at the scale of this project, so deciding which items are important and which can be left out was hard. I decided to stick with the larger furniture items and then only make the smaller objects that relate to what I’m going to animate, to lighten the load.

There was a lot of good feedback and ideas given in class after showing everyone my project. The first big takeaway was that people enjoyed – and I enjoyed watching people interact with – the video-style format and looking around at the environment on individual screens. Quite a few people wished for more interaction and expressed a desire to move around the space, so that is something to consider. I’m hoping that some of that desire will be satisfied once the full animation is in place, but I also like the idea of only being able to watch what’s happening and not being able to interact, just as you can’t with a real memory.

I’ve also been having lots of personal debates about the route I want to go down with materials and what style I want for the look of the project. There were some good notes about how the grayscale – which was originally left as a placeholder – could be used purposefully to create some additional meaning. The overall feel of the environment was described as “dreamworld”, “astral projection-like”, and “shadow realm”, and I enjoy those interpretations of the work.

Next steps are to finish the short film 🙂


Cycle 1 Documentation

For cycle 1, I performed an early draft of my idea for cycle #3.

I am interested in intergenerational storytelling, and in how our bodies are containers and expressions of time. I have lately been doing meditative performances with my family that involve blinking on a slow count of 8, as I’m curious about how our counts fall in and out of sync with each other. To add some visual complication to this otherwise simple thought, I performed this meditation with a video projected onto my face. The projection is a sort of stop-motion collage made out of photos of my dad’s face. The original image is a black-and-white photo, so I placed a cut-out of his face onto a sort of fleshy pink background. The image is printed on rice paper, which adds a haziness to the lines, and its translucency adds a blush of pink to the grey tones. The stop-motion element is me placing cut-outs of his closed eyes on top of the image. I then scanned 30 variations of this simple collage, imported them into a Premiere sequence, and exported it as a video, which became the projected material.
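The same assembly step could also be scripted instead of done in Premiere. Below is a minimal sketch of that idea; the folder name, file pattern, and timing are hypothetical, not taken from the actual project.

```python
# Hypothetical sketch: turn a folder of scanned collage variations into a slow
# stop-motion video by holding each scan for a fixed amount of time.
import glob
import cv2  # pip install opencv-python

frames = sorted(glob.glob("scans/collage_*.png"))    # hypothetical scan filenames
hold_seconds = 1.0                                    # how long each collage is held
fps = 8                                               # output frame rate

first = cv2.imread(frames[0])
height, width = first.shape[:2]
writer = cv2.VideoWriter("projection.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

for path in frames:
    image = cv2.resize(cv2.imread(path), (width, height))   # keep frames uniform
    for _ in range(int(fps * hold_seconds)):                 # repeat to hold the frame
        writer.write(image)

writer.release()
```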

I performed this for about a minute and a half. I have been thinking about it as a durational performance, so I am interested in how it would feel as a full 15 minute experience, or even longer.

I was really excited by the feedback, especially with questions around the “music” of the piece. I loved the idea of including sound elements, especially ones that solidify this idea of internal time keeping. I’ve been experimenting with some heartbeat recording and want to think about how that could be used in the piece. I was also interested in how the audience experienced forgetting that I was a part of the piece. In the future, I’m going to include some shifts in opacity to narrow in on this question of fading legibility. I also might want to add in some variations on movement, but I want to talk to some more people about this.

I’m also running into some issues with the projector itself… It’s maybe too strong for my eyes. I might play with just darkening the overall projection, or try buying a projector with even fewer lumens. Overall, I’m really pleased with this performance.


Cycle One

My main project involves creating an interactive performance where both sound and visuals dynamically respond to the performer’s movements. To achieve this, I have developed a Max patch that translates numerical data into sound, utilizing input from Mediapipe. Additionally, I have initiated the development of the visual components that will react to the performance in real time.
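As an illustration of how that numerical data can reach Max, here is a minimal sketch of one common approach: a small Python script that tracks the hands with Mediapipe and sends normalized wrist coordinates to Max as OSC messages. The OSC address and port are assumptions for the example, not the ones used in my patch.

```python
# Sketch: webcam -> Mediapipe hand tracking -> OSC messages that Max can read
# with a [udpreceive] object. Address pattern and port are illustrative.
import cv2                                          # pip install opencv-python
import mediapipe as mp                              # pip install mediapipe
from pythonosc.udp_client import SimpleUDPClient    # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 7400)         # assumed [udpreceive 7400] in Max
hands = mp.solutions.hands.Hands(max_num_hands=2)
capture = cv2.VideoCapture(0)

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks and results.multi_handedness:
        for landmarks, handedness in zip(results.multi_hand_landmarks,
                                         results.multi_handedness):
            label = handedness.classification[0].label   # "Left" or "Right"
            wrist = landmarks.landmark[0]                 # normalized 0.0-1.0 coords
            client.send_message(f"/hand/{label.lower()}", [wrist.x, wrist.y])
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:                       # press Esc to quit
        break

capture.release()
cv2.destroyAllWindows()
```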

For the upcoming second cycle, my efforts will be concentrated on further developing and refining both the sound and visual components. Concurrently, I will also focus on enhancing the auditory aspects of the project to ensure a seamless and engaging user experience.

In the third cycle, my attention will shift towards thorough troubleshooting and fine-tuning of the entire system. The goal for this phase is to address any technical challenges that emerge and enhance the overall performance quality, ensuring reliability and effectiveness during live interactions. This structured approach ensures that each aspect of the project is developed with precision and integrates flawlessly to produce a compelling interactive experience.


Cycle 1

Naiya Dawson

For my cycle 1 project I started to play with and create patches that I can have as a foundation for the rest of my cycle projects. I used the live drawing actor in Isadora to create what you see in the images and videos above. Working with live drawing was very fun because there are so many ways you can manipulate the actor and add different effects to it. I first worked on how to set up the live drawing actor, then I took some time to go through the different effects I could add to see which ones I would like. I used effects like swirl, color, kaleidoscope, and more. After I found all the effects I liked, I worked on taking a video of the live drawing so I could present my work without actually having to move the mouse and create the live drawing on the spot. Taking videos of the live drawing in Isadora was successful because I was able to layer some videos and see the drawings I was able to create. I used the capture stage to movie actor to create the videos, and then I created a new Isadora patch so that I could just have the videos in each scene. This helped me optimize my work and make it easier to present.

Thinking about the start of my Cycle 2 project, I want to take what I have and make it interactive. I want to start working in the motion lab to get ideas on what I can do and how this will be projected. I have also been thinking about adding music and seeing if there is a way I can have music interact with the live drawing, or maybe with the videos of the live drawing.


Cycle 1: Depth Sensor and Choreography

Regarding the cyclical projects for the rest of the semester, my goal is to create video projection in Isadora to support my MFA thesis project, which will be presented in February 2025. I am choreographing a group ensemble piece that is centered upon the idea of intercultural encounters. I currently view the piece as a moving palimpsest, which refers to a manuscript where text has been effaced and rubbed out to write new text. This allows ghosts or hauntings of previous text to be seen and witnessed alongside something new. Metaphorically, this reads to me as intercultural encounters, where people’s life experiences intersect, conflict, merge, and bubble.

I have been obsessed with the idea of a moving projection that tracks slow, subtle movement and wanted to try this in cycle 1. I initially had a lot of ideas for the video that I wanted to project from a top-down projector, ranging from textural video to gestures to excerpts of my dancers moving in the space. Due to the short amount of time that we had for cycle 1, I chose to focus primarily on developing a patch in Isadora that utilized a depth sensor as a form of tracking. For the projected video itself, I ended up working with a clip of a plastic overhang waving in the wind. In Taiwan, these rows and rows of plastic are used as outdoor covers for cars or for swimming pools.

I spent a large chunk of my time playing with video effects. I knew that I wanted to abstract the image and create a sort of otherworldly feel in the projection. The rest of the time was spent figuring out how to use a depth sensor and have the projection track a body in space. The exploration started with simple shape tracking and eventually built up to having the whole projection track a body. Below is a video of the different scenes that I created.
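As a rough illustration of the tracking idea outside of Isadora, here is a minimal sketch that finds a body in a synthetic top-down depth frame and converts that position into projection coordinates. The threshold, resolutions, and the use of a simple centroid are assumptions for the example, not how my actual patch is built.

```python
# Sketch: locate the nearest blob in a top-down depth image and map it to
# projector coordinates. All values are illustrative.
import numpy as np

STAGE_W, STAGE_H = 1920, 1080          # assumed projector resolution
BODY_THRESHOLD_MM = 2500               # anything closer than this counts as a body

def body_centroid(depth_mm):
    """Return the normalized (x, y) centroid of near pixels, or None if empty."""
    ys, xs = np.nonzero(depth_mm < BODY_THRESHOLD_MM)
    if len(xs) == 0:
        return None
    return xs.mean() / depth_mm.shape[1], ys.mean() / depth_mm.shape[0]

def to_projection(normalized):
    """Scale a normalized 0.0-1.0 position to projector pixels."""
    return int(normalized[0] * STAGE_W), int(normalized[1] * STAGE_H)

# Synthetic test: a floor at 3000 mm with one "dancer" blob at 2000 mm.
frame = np.full((480, 640), 3000, dtype=np.int32)
frame[200:260, 400:460] = 2000
position = body_centroid(frame)
if position is not None:
    print("projection center:", to_projection(position))   # prints (1288, 516)
```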

Choreographically, I imagine that the sensor will be tracking dancers from the top (bird’s-eye view), focusing primarily on their heads as they move in the space. I have been working on a section of choreography that draws from ideas of processions. This emerges from my memories of watching temple processions in the streets of Taiwan, but takes a more hybrid, creature-like approach. The video below is an example of what this looks like at this moment in the creative process.

In our feedback session, I was glad to hear that the abstraction of the video was landing with viewers. I found it fascinating that it felt kind of microscopic to people and was still tethered to ideas and aesthetics of this world. This reframing was very helpful for me to think about the intentional hybridity that I want to play with aesthetically. I really appreciate the feedback on softening the edges of the projection and playing with how color might fade in, grow, and fade out throughout. Looking towards cycle 2, my goals include taking the depth sensor and projection into the Barnett Theatre and seeing what it actually looks like from a top-down perspective, while still continuing to adjust the video effects. In cycle 2, I think it might be helpful to share a performance of this so that I can see what the full effect truly is in space with sound, projection, and dance. 


Cycle 1

I went through a few different ideas for this project and had a hard time settling on something due to the number of directions I thought this project could go in. After a few days and some research, I decided that I wanted to explore the medium of 360 degree animation. From a technical standpoint, I thought it would be interesting to learn, and I left a few different options open for interactivity as well. Then came the challenge of deciding what the animation was going to be about. In my thesis work, I’ve been battling with the idea that storytelling is what I really want to focus on, so this project needed to be more focused on storytelling as a medium than previous projects.

Initially, I started by figuring out what benefits a 360 degree animation has that a traditional film medium doesn’t possess.

  • The ability to move from one screen to another
  • The immersiveness of the space – like you are really there
  • The abundance of details simultaneously included
  • Forcing people to move

And the limitations:

  • Lost details
  • Not everyone gets the same exact story or experience
  • Deciding what is most important to focus on

In this case, the limitations weren’t necessarily bad things, but both lists were things to consider and to design stories around.

My first idea was to do a series of vignettes. The story would take place in a town square and would focus on the lives of people around the town. It would show planes flying around and interacting in the air, people dancing, musicians playing music, people having a conversation, a booth selling food, ants stealing the food, etc. The idea here was a story centered around capturing a location rather than any one particular person’s experience. I used these ideas in the final project idea, but they originated here in this form.

My second idea was centered around the idea of perspective and what it would mean to show two sides of the same story simultaneously. I thought it could be interesting to show a hero and a villain each working their own side of the same conflict at the same time. The idea here was that depending on who the audience watched more, their opinion on the conflict would be swayed. Due to time constraints and a variety of logistical reasons, I didn’t think this was very feasible in the amount of time left in the semester, so I opted for something else.

There was brief consideration of adapting this idea into a story about two paths crossing from different storylines. Each half of the room showed the life of a different person until they ended up crossing paths and bumping into each other. Then the story would continue on and show the aftermath of the meeting.

My next idea was centered around the idea of time. I thought it would be cool to utilize the immersive environment and showcase the shifting of time and how a space changes while you are in it. I did a couple of projects in my undergrad centered around lost things, and I thought it could be cool to continue this thread. I also used ideas from this in the final idea that I settled on.

The biggest challenge I’ve been facing for the past couple of years is figuring out what kind of storyteller I want to be and the kinds of stories that I want to tell. I need to start doing a better job of exploring what is important to me and finding ways to reflect that in my work. With that in mind, I moved on to my final idea for this project.

My grandmother started getting sick right after Thanksgiving last year. After her passing in August, we visited her house to do some housekeeping things and saw that all of her Christmas decorations from the previous year were still up in her house, despite the fact that we didn’t get a chance to actually celebrate Christmas there due to her being in the hospital. It was a super surreal and haunting moment walking into that house. I took pictures of everything and documented the way she left everything the best I could so that I could look back on the memories. Recently, we started clearing out the house and sorting out her affairs. I’ve received a lot of things back that I had originally given to her as gifts and also a lot of things that I loved playing with and looking at when I visited. It’s made me think a lot about both the strength of memories that can be tied to physical things and the fleetingness of physical assets. There’s nothing quite like receiving back a bear that I gave my grandmother twenty years ago with my voice memo telling her I love her in it.

This parallel of objects containing memory is one of the biggest things I want to explore and capture. For myself, I’d like to recreate my grandmother’s living room (with some potential abstractness in the design to better fit the theme) and showcase these fading memories and the past.

Project idea:

  • Time capsule of my grandmother’s living room/house.
  • The concept of living memory, living ghosts, and nostalgia.

Memories to include:

  • Playing and drawing on the carpet. (child’s drawings, tic-tac-toe, etc.)
  • Playing on the piano
  • Sweeping the fireplace broom
  • Dinner at the dining room table
  • Riding lawn mower outside window
  • Making paperclip necklaces
  • Playing board games

Other features:

I want to make a story without people/actors and focus only on the memories living on through objects in the house.

In the beginning, all of the different objects moving around the house and memories being showcased will be present. As the animation goes on, different things will fade away. At the end, there will only be the still, silent house (set up for Christmas) that has been left behind in her wake.

Intentionally, due to the 360 degree design of the animation, not everyone will see every memory or focus on every detail that is happening in the space. This is meant to symbolize the uniqueness of memory and the loss of it as time goes on. The idea is that everyone’s experiences in the space will be slightly different and unique but still represent a holistic experience of memory. Collectively, everyone may be able to talk about what they witnessed and remember, but alone, they are unable to capture everything.

I also spent some time learning about the technical side of things for cycle one. I explored how to use animation software to make a 360 degree animation, as I had only ever seen them made with live-action 360 cameras before. The process wasn’t nearly as difficult as I was anticipating, and I was able to do it with a few careful setting changes.
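For anyone curious what that kind of setting change looks like in script form, here is a minimal sketch assuming Blender as the animation software (an assumption for the example, not a statement about the exact tool or settings used here): switch the camera to a panoramic equirectangular type and render at a 2:1 aspect ratio.

```python
# Sketch: configure a Blender scene for a 360 degree equirectangular render.
# Run from Blender's scripting workspace; it modifies the active scene's camera.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'                     # panoramic cameras need Cycles
scene.render.resolution_x = 4096                   # 2:1 aspect for equirectangular
scene.render.resolution_y = 2048

camera = scene.camera.data
camera.type = 'PANO'                               # panoramic instead of perspective
if hasattr(camera, "panorama_type"):               # newer Blender keeps this on the camera
    camera.panorama_type = 'EQUIRECTANGULAR'
else:                                              # older versions keep it under Cycles
    camera.cycles.panorama_type = 'EQUIRECTANGULAR'
```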

Here is a link to a test I made with a few assets I had lying around from another project.