cycle 2 – simultaneous seeing : the digital self is real

Cycle 2 Documentation

11.22.2022

Katie O’Loughlin

For cycle two, I worked on creating a malleable environment that was affected by the choices people made within the space. I continued to play with feedback loops, although they weren’t the main focus anymore because of the variety of other choices that could be made. I also continued to think about the impact and effect of seeing and being seen, of framing a body other than your own, and of experiencing the digital image of your body as something less familiar, although not necessarily less real.

In the space, there were three cameras, each feeding live video that was projected back into the room. One feed went to the large flat scrim via a capture card, one went to the large, curved scrim via NDI Watcher, and one came from a camera hidden behind the curtains, projecting directly through a pico projector onto a small box. I placed the pico projector in the corner of the space to play with scale. Where it was located, it hid the person from the rest of the play space, giving a break from what could be chaotic experimentation.

The main area was carved into a circular-ish space with two curtains pulled around the curved track. The back scrim and one of the circle scrims had the two different live feeds playing on them. People were allowed to pick up both cameras and choose how to frame the room and the people in it. In the middle of the space there was a table with a magnifying glass, some shiny emergency blankets, and some plastic Fresnel lenses that warped and focused objects/people as you looked through them. These items were objects for the participants to play with to filter the images on the screens and change how they were viewing the space.

This cycle definitely didn’t have an end goal – there was nothing I was secretly trying to get the participants to realize. My research is invested in shifting perspective and understanding how perception affects experience. I am curious about how humans can be many things at once, including being perceived as many things at once. I find myself invested in discovering how to hold multiple truths at once. As I watched the participants maneuver through the space, filter the images, and choose different framings, I was really interested in seeing the similarities and differences between the image on the screen and the person standing right in front of me. All of this work is really making me consider how integrated our digital lives are in society right now, and how much agency we have in how we present ourselves, and others, to the world on digital platforms.

How does the way we frame ourselves and our world affect others’ perceptions as they look into our world? What does real and fake mean in this day and age? If our digital selves are a representation of our identity, what is the impact on our own perception of self? How much choice do we get in how other people see us, or experience us? How carefully are we holding the idea that how we perceive someone else changes our reality of them, which in turn may change theirs as well?

I like giving participants agency in this work, to make choices and hold their own responsibility. As I work with the digital body, I continue to be aware of the power structures, hierarchies, and delicate spaces that arise easily within this topic. One of the aspects of this cycle I found really enjoyable was seeing how all the participants interacted with each other much more than in cycle one, and getting to see the interconnectedness between choices and how that impacted the space as a whole.

footage taken by Alex O and Mollie W

Cycle 2 documentation

Since Cycle 1, I have used Cinema 4D to create the final 3D cloth models I’m going to use for the installation, set up the Kinect and Isadora in the Motion Lab, experimented with projection spots, learned how to project in the first place, and modified the Isadora patch based on the Motion Lab environment. One of the main changes I made is to have four separate scenes that change at least every minute. A big part of this process was optimizing the models in 3DS Max, since the program has a maximum number of polygon faces that can be exported and my original models were much bigger than that:

At the time of the Cycle 2 presentation, my visuals were still in progress, since I am learning how to make materials in 3DS Max, which I have to use because its format is the only one Isadora supports. But my vision is for all the materials to be non-shiny, like the first two scenes

…which was also the feedback I got from the critique – scene 2 was the most visually pleasing one, and I have to figure out how to edit the shiny materials on the other objects (scenes 3 and 4) this week.

During Cycle 2, I decided I want the projection to be on the main curtain at the front of the Motion Lab, and I liked the scale of the projected models, but I need to remove the Kinect’s colored skeletons from the background and have the background be just black.

The feedback from the critique also included experimenting further with introducing more forms of movement to the cloth. I already tried this, but it was laggy and patchy, so I think once I learn how to control the skeleton numbers and outputs better, I can use them to expand the ways the models move. Then I’ll experiment with having them move a little across the projection horizontally and vertically, instead of just scaling along the Z-axis.

Next steps:
My main next step is to keep working on modifying the Isadora patch, since it is really confusing to figure out which numbers are best to use from the skeleton tracking outputs. I’m thinking I might switch back to using brightness/darkness inputs for some scenes, since I liked how much more sensitive the cloth models were when I was using those. But I will first experiment with utilizing the skeleton data more efficiently. I am also going to polish the layout and materials of the third and fourth scenes; I think I’m happy with how the first and second scenes are looking, they just need some interaction refining. On Tuesday I am also going to work on setting up the Kinect much further from the computer, in front of the participants.
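Since most of that confusion comes down to ranges and jitter, here is a minimal sketch (plain Python, with assumed input ranges and a smoothing factor to tune) of the kind of scale-and-smooth step that tames raw skeleton numbers before they drive a model parameter:

```python
# Sketch: remap a raw joint coordinate into a stable 0..1 control value.
# The input range and smoothing factor are assumptions to tune per setup.

def scale(value, in_min, in_max, out_min=0.0, out_max=1.0):
    """Linearly remap value from [in_min, in_max], clamped to the output range."""
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))  # clamp so tracking outliers don't jolt the model
    return out_min + t * (out_max - out_min)

class Smoother:
    """Exponential moving average to calm jittery skeleton data."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.state = None

    def update(self, x):
        if self.state is None:
            self.state = x
        else:
            self.state = self.alpha * x + (1 - self.alpha) * self.state
        return self.state

# e.g. a hand height arriving each frame from the tracker (hypothetical range):
smooth = Smoother(alpha=0.2)

def on_frame(hand_y):
    return smooth.update(scale(hand_y, in_min=-600, in_max=600))
```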

I am also going to render some animations I have of these same cloth models and try importing them into the Isadora patch in addition to the interactive models to see how that combination looks in the projections.


Cycle 2 – Perspective drawing tool

I decided to move from the small handheld book form to the Motion Lab for my next cycle. I have been working with my own students in the Motion Lab. I decided to explore the possibility of designing tools for teaching drawing that involve observation, collaboration, and movement in the process. My initial plan involved using 2-point perspective as a starting point. Teaching 2-point perspective is typically a guided, step-by-step drawing process, most often experienced using a pencil and a ruler. A teacher guides students through the steps for drawing a horizon line and two vanishing points, and then demonstrates how to draw a simple dimensional cube, often using a document camera to project a larger live-capture image of the teacher’s actions. Once students understand the basic process, they are set free to create on their own.

I scaled this process up and envisioned the Motion Lab as a large-format document camera. I set up a drawing space in the center of the room, directly under a bird’s-eye-view camera. In the image below you can see that I projected the live feed onto the main projection screen and applied a TT Edge Detect filter in Isadora. In the photo you can see students using the ropes, each tied to one of the points on our horizon line that bisects the image.
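For reference, the core of that treatment – live video pushed through an edge-detect filter – can be sketched outside Isadora in a few lines of OpenCV (the camera index and Canny thresholds here are assumptions):

```python
# Sketch: approximate the TT Edge Detect look on a live camera feed.
import cv2

cap = cv2.VideoCapture(0)  # bird's-eye camera (index is an assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # white line drawing on black
    cv2.imshow('edges', edges)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```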

The patch I created was fairly simple, see below:

Isadora Patch

Playing with two camera views layered and both processed through TT Edge Detect

After the initial explorations with the one Video In Watcher, I layered in another feed from a camera on a tripod that was directed at a whiteboard. This is when participants began to play even more with what was happening on each screen and how the two feeds were interacting together in the projection.

Layering the bird’s-eye view and the tripod camera; shifting scale by zooming in and turning the camera view

For my final 3rd cycle, I want to continue to build on and play with this tool. My initial idea for pushing it into another phase is to focus on observational drawing and the figure/body in motion, maybe shifting the output 90 degrees so the drawers can face the large projection screen head on. I would use two cameras again, but one would be directed at a body moving in space, and those drawing would try to capture the movement and gestures on the page. I also want to add in the live drawing tool and see how that adds another layer of complexity and interest to the projected image.


Cycle 1 – Ghosts in the Landscape – Mollie Wolf

For cycle 1, I decided to attempt an interactive portion of my installation. My initial idea was to use an Astra or Kinect depth sensor to make the background invisible and layer live-capture footage of audience members into a video of a landscape.

When I asked Alex for help with background subtraction, he suggested I use a webcam instead. We found an example patch that Mark had posted on TroikaTronix for background subtraction, so I started there.

He essentially used an effects mixer with a threshold actor to read the data from a video in watcher and identify when a person had entered the frame – by watching for a change in light compared to a freeze-frame grab of the same image. Then he used an add alpha channel actor to mask anything that hadn’t changed (i.e., the background).
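A rough OpenCV translation of that logic (a sketch of the same idea, not Mark’s actual patch – the threshold value and camera index are assumptions):

```python
# Sketch: freeze-frame background subtraction.
import cv2

cap = cv2.VideoCapture(0)
ok, background = cap.read()            # "freeze frame" grab of the empty space
bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

THRESHOLD = 30                         # tune like the threshold actor's value
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, bg_gray)  # where has the light changed?
    _, mask = cv2.threshold(diff, THRESHOLD, 255, cv2.THRESH_BINARY)
    person = cv2.bitwise_and(frame, frame, mask=mask)  # alpha-style masking
    cv2.imshow('subtracted', person)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
```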

Here are photos of my version of Mark’s background subtraction patch.

When I was first using this patch, it was a little finicky – sometimes only showing portions of the body, flickering images of the background, or showing splotchy images. I noticed that it had a hard time with light colors (if the person was wearing white, or if their skin was reflecting the light in the room). I tried to fix this by adjusting the threshold number and by adding a Gaussian blur to the image, but it was still more finicky than I’d like.

When I brought this issue up to Alex, he suggested that I could use a difference actor instead to do something similar. However, the difference actor mostly recognizes the outlines of bodies rather than picking up the whole image of a person. I decided that I actually liked this better, aesthetically anyway – it projected ghost-like images into the landscape, rather than video images of the people.
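The frame-to-frame version is even simpler to sketch: difference each frame against the previous one, so only pixels that just changed survive (again an OpenCV approximation, not the Isadora actor itself):

```python
# Sketch: consecutive-frame differencing, approximating the difference actor.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    ghost = cv2.absdiff(gray, prev)  # only moving edges show up
    cv2.imshow('ghost', ghost)
    prev = gray                      # a still body fades to black
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
```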

Here are photos of my current patch, using a difference actor. (The top line with the flip, freeze, effect mixer, etc., is still there from the previous attempt, but its projector is not active, so you can ignore it.)

I think this method worked way better because it was no longer dependent on light in the room, and instead just on motion. (Which is way better for me, considering that I’m practicing this all in the Motion Lab with black walls and floors, but will be presenting it in Urban Arts Space with white walls and concrete floors). Conceptually, I like this version more as well, as it is an abstraction of people’s bodies and action, rather than a direct representation of it.

As a last-minute experiment, I laid out some platforms in the space and roughly lined them up with where there were mounds/hills in the landscape – that way, if someone stood on a platform, it would look like their corresponding ghost image had climbed up the hill.

When I presented this version to the class, I was pleased to see how playful everyone became. With it being just an outline of their bodies, they were not looking at an image of themselves, so there didn’t seem to be any of the self-consciousness I would expect if everyone were looking in a mirror, for example. People seemed to take genuine delight in figuring out that they could move through the landscape.

One interesting thing about the difference actor is that non-moving bodies blend into the background (with no motion, there is no difference, frame by frame). So when someone decides to sit or stand still rather than moving around, their ghost image disappears. While this is an inevitable aspect of the difference actor, I kind of like the metaphor behind it – you have to actively, physically engage in order to have an effect on your environment.

We spent some time with the camera in a few different locations (to the side of or in front of people). As we discussed it together, we came to a consensus that it was more magical/interesting when the camera was to the side. It was less obvious how to move (i.e., not as easy as looking into a mirror), which added to the magic. And having the camera to the side also meant that the landscape image was not obscured from view by anything. People only knew where the camera was if they went searching for it.

Here is a video of my peers interacting with the patch/system.


cycle 1: infinite seeing : feedback loops

Cycle 1 Documentation: Katie O’Loughlin

10.27.2022

This first cycle took place in the Motion Lab and was my first experiment with multiple feedback loops happening simultaneously. Originally, I had wanted to try a double feedback loop, which had the scrims in a circle, two cameras facing each other, and two projectors above each camera. But, as it goes, one hand-held camera capture card decided to quit working, and I was left with only one hand-held camera, eliminating the possibility of that idea. I mentally scrolled through the resources I did have, and ones I felt confident working with on my own, which included the single hand-held cam, a top-down cam, a top-down projector, and the regularly used circle and front projectors.

So, instead of facing feedback loops, I put the front projector on a standard loop, and then put the top-down camera and projector on their own loop, projected onto a white sheet laid out on the floor. In addition to the floor, I also projected the top-down feed onto a hanging scrim in the space, at a 90-degree angle to the front scrim. My goal was partially met: there were two feedback loops happening, but since they were not facing each other, neither was picking up the other loop within its own, which was what I was curious about.

The top-down setup was a unique feedback loop: both the projector and the camera were shooting down, but the camera’s frame was turned at a 90-degree angle to the projector’s. Instead of that feedback loop being a replica, it ended up being a rotated loop.
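As a toy illustration of why that rotation compounds, here is a sketch that simulates a few passes of a rotated feedback loop (the image size, seed shape, and blend gains are all made up; gains that sum past 1 are what blow the image out):

```python
# Sketch: simulate a rotated video feedback loop.
import cv2
import numpy as np

frame = np.zeros((480, 480, 3), np.uint8)
cv2.circle(frame, (240, 180), 40, (255, 255, 255), -1)  # a "body" in the frame

for i in range(8):  # eight trips around the loop
    rotated = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
    # Blend the re-captured, rotated image back in; total gain > 1 saturates,
    # like the amoeba shapes eventually bursting into static.
    frame = cv2.addWeighted(frame, 0.6, rotated, 0.6, 0)
    cv2.imwrite(f'loop_{i}.png', frame)
```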

Once people entered the room, it was interesting to see how they interacted with the loops, as they could clearly see their own impact on the image. There was a definite sense of play in their interaction, which I want to hold onto if possible. I manipulated the front projector’s image in Isadora once everyone had landed in the space, mostly by rotating the image and zooming it in/out. The rotation of the image caused a spiral in the loop which would turn into a square at a certain angle. I enjoyed the idea that both the person in the space and the person behind the computer could choose to change the image.

I mentioned to Alex that I was realizing just how impactful lighting is for this work, as it will totally blow out an image’s loop if angled or intensified in a specific way. After I said this, he suggested I turn off all the lights and the front projector, since the front projector was at a much higher resolution than the down projector and the whole top-down setup was getting lost in comparison. When I did, the light from the down projector was enough to create the feedback loop on the floor, and at a much stronger contrast, making it easier to see the results of the loop.

As people began to play with that loop, we realized that they were creating an analog version of the loop because the camera would catch the movement, create a loop, and then continue to see its own loop even after the person moved away from the camera. It created an amoeba-looking shape that would stay for a while and then eventually burst into static before it lost the loop.

I was intrigued by this result, as the top-down was the back-up plan I wasn’t too excited about. Earlier, when I had it running alongside the other loop, that continuous feedback was getting blown out by the surrounding light. Once we could see it more clearly, the loop got longer and more complex, and we could see the reaction to our bodies’ movement on the scrim.

One piece of feedback I received was that there were many “frames” within the experience. Looking at the space, both hanging scrims were rectangles, the cloth on the ground was a square, and the loops were also repeating the rectangular frame within themselves. The camera had its rectangular viewfinder open, and the scrims were at a 90-degree angle from each other. I appreciated the idea that it felt like everything was a camera frame, as how we are seen through a camera is a facet of my research. That being said, I’m glad it was pointed out, because I’m not sure I want such a square space. I wish I could have played with the circle projectors and scrims, but will try to implement them in cycle two.

I’m glad I showed what I did, because new aspects of the feedback loop showed up, and it gave me a bit more motivation to move forward into the next cycle. My goals for cycle two are to set up the wireless router and get the NDI Watcher working with my phone and Isadora. That way, people can play with moving a camera around in the feedback loop.


Cycle 1: Puzzlr Prototype

Project Brief

For the final project, I’ve decided to create a fun and challenging puzzle that will stimulate the player in a variety of ways through 3 distinct play modes. The puzzle consists of six pieces arranged in a hexagonal shape. The goal will be to complete the puzzle each round while navigating changes in the sensory elements of the puzzle. My research thread focuses on microcontroller inputs and the conceptualization of touch, so I see this project as a way to explore creating micro-interaction systems through physical objects.

The first round of the puzzle will be basic; the player just has to complete the puzzle by getting all the pieces onto the board in the correct configuration. The contacts on the bottom of the pieces and the board itself will give a slight hint as to how to approach the puzzle (more on that later). There will be both a visual and auditory feedback component informing the player about their progress.

The second round will involve covering the player’s hands so that they can only use the visual/auditory feedback from the system rather than looking at their hands. The pieces will also include unique textures that will be mirrored on the screen to guide the player on which textures go where on the board.

The third and final round will involve blindfolding the player so they will have to rely on the textural elements of the pieces coupled with the auditory feedback. In this section, the auditory feedback for each piece will be unique so players can keep track of what pieces go where. This final level is intended to be very difficult and will be used mostly as an experiment.

First version of the schematic

Production & First Prototype

For the first phase of iteration, I created a schematic and then used it to laser cut a preliminary model of the puzzle and circuit housing. After laser cutting and setting everything up, I realized some parts of the first prototype needed to change:

  1. The engravings on the bottom plate are too narrow and can’t house the wires and cables necessary for the puzzle piece contacts.
  2. The engravings on the bottom of the puzzle pieces are on the wrong side because I forgot to flip them before printing.
  3. The circuit layout is confusing and requires more thought about how the ground/input wires will be set up in the final board.

After thinking about those changes, I modified the schematic into a new version featuring a new build layout, a new circuit component, and a new circuit layout. I decided to create two levels for the puzzle: one for the circuits and one to put the puzzle pieces on. This allows more room for cables and components under the puzzle and ensures a clean look on top. The new circuit component is a little adapter (in the middle of the schematic) that will serve as a bridge between the circuit housing and the top of the puzzle. The component takes the ground and input cables and attaches them to two pieces of foil sticking out at the top; each puzzle piece will then have a piece of foil underneath that completes the circuit when the piece is placed on the contact.
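On the microcontroller side, the piece detection could read roughly like this – a sketch assuming a CircuitPython-compatible board and placeholder pin assignments, where each contact is a digital input pulled high until its foil bridge closes the circuit to ground:

```python
# Sketch: poll six foil contacts, one per puzzle piece (CircuitPython).
import time
import board
import digitalio

PIECE_PINS = [board.D2, board.D3, board.D4, board.D5, board.D6, board.D7]

contacts = []
for pin in PIECE_PINS:
    c = digitalio.DigitalInOut(pin)
    c.direction = digitalio.Direction.INPUT
    c.pull = digitalio.Pull.UP       # reads False once foil bridges to ground
    contacts.append(c)

while True:
    placed = [not c.value for c in contacts]  # True = piece seated
    if all(placed):
        print("Puzzle complete!")    # hook visual/auditory feedback in here
    time.sleep(0.05)
```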

Schematic V2
New Circuit Layout

Workspace

Next Steps

The next step in the project is to make the second prototype and get the circuit laid out and working for the next cycle!


Cycle 1: Book to map?

Makey Makey connected to small book prototype, linking into Scratch to trigger audio files.

Exploring the book form and text that is interactive and triggers audio. Questioning the form. Leaning towards a map . . .

How do I design an experience for someone to engage with my research in a way that breaks from the typical dissertation format and reading printed text on a page?

Should the experience be intimate? Like our experience reading a book where it is just us and the text? Or, relational to both the text (object), ideas, researcher and reader?

What structure could facilitate a non-linear engagement with the “text” that is different for each individual?

How does one experience the researcher’s voice?

I considered a book and an accordion-style book but these seem limiting. Is a map an option? What are the aesthetics? How do I include layers? How does the experience facilitate connections across distance? A distancing from the research itself and the researcher?

What do I like about the book format?

It relates to a way we come to know things through reading.

A physical book must be held in one’s hand.

A book has weight. Value. Symbolic of knowledge.

What do I like about a map? The process of reading a map?

It still has boundaries and borders, but can be entered from any point? Could include a hierarchy using scale and volume. 

It can still be held, but can an interactive map be hand-held in a convenient way (other than a digital map on our phone)? I like the idea of the reader following along with their finger to “trace” the lines of text, which also triggers and “generates” the text.

What about a wayfinding format, like those trailhead maps? Something tactile with “triggers” built in that combines reading with audio filling in the spaces between, maybe? As you follow from one point on a “path” to another, you make a connection and the audio plays and leads you there.

Tactile topographic maps. . .

If this and this, then this. . .

Point A (Idea A) draw line to point B (Idea B), space between how A relates to B (audio).

Trigger through touch. Running your finger over the text or connection line. Touching on two points to draw a connection between them.

Recording: Idea A, connection, Idea B. Following the route. Using the text to draw the line between them and make the connection.

Still pre-determined by the researcher, who controls what the ideas are and how they relate/connect. The reader is in control of deciding what they are interested in making connections between. I could imagine what this might look like in a VR space, potentially, but what about a physical space?

How can I make a standard, university-defined, formatted dissertation become more experiential?

What if the “texts” were not just printed text and audio, but also video and photographic artifacts?

Aesthetics and materials, topography, cast tracing paper, Plexi, watercolor, cut-out text, wires visible? Connections and “behind-the-scenes” transparent, process exposed, making thinking visible, translations, sound and video/photo projection on the “map”? Is there a backtrack? Could there be subtle changes to the backtrack that are affected by the “routing” on the map?

Is it on a wall, positioned at an angle much like a podium, at about waist height?

An intermedia map that involves human touch, listening, hearing, and viewing, and prompts one to speak?

Is a dimensional, topographic map a good solution? The lines between could be graphite or could be the actual wire that is couched down with embroidery floss. The wires could be “embedded” into the layers of the tracing paper casting so they could be hinted at, and visible, but also underneath the surface. How do you ground the viewer in this case? What if they had to hold a “fake” pen/pencil that connected to the ground and use that to trace? I prefer the idea of using their finger directly to trace though. Maybe they are asked to touch a “key” of some sort as they are also then touching a point and following it with their finger.

How could each point be wired up with a way for a finger to complete the whole circuit?
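Since the Makey Makey presents itself to the computer as a keyboard, one way to sketch the touch-to-audio behavior outside Scratch is to map its default keys to clips – a sketch in Python with pygame, where the audio file names are placeholders:

```python
# Sketch: each Makey Makey touch point arrives as a key press; play a clip.
import pygame

pygame.init()
pygame.display.set_mode((200, 200))  # pygame needs a window to receive keys
pygame.mixer.init()

CLIPS = {                            # Makey Makey default key mappings
    pygame.K_SPACE: pygame.mixer.Sound('idea_a.wav'),
    pygame.K_LEFT:  pygame.mixer.Sound('connection.wav'),
    pygame.K_RIGHT: pygame.mixer.Sound('idea_b.wav'),
}

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and event.key in CLIPS:
            CLIPS[event.key].play()  # finger completes the circuit -> audio
pygame.quit()
```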

Some feedback: my next step is to find a simple story to “map” and test out the form – does the map form work? Physical hyperlinks to audio, video, and image files. Connections to braille, reading in different ways, audiobooks, how we experience making connections between things, our own personal experiences, choose-your-own-adventure.

Small sample of possible tracing paper topographic map.

Cycle 1 documentation – Dynamic Cloth – prototype

3 ideas that I presented before choosing to go with the first idea

Since my research is based on how shared mixed-reality experiences help us relate to virtual worlds, I wanted to use this project to create an experience where users can collaboratively affect and respond to digital cloth shapes through body-movement tracking in the Motion Lab. I love creating experiences and environments that blend physical and virtual worlds, so I thought this would be a good way to explore how physical surroundings and inputs impact virtual objects. I also thought this project direction would be an interesting way to explore how interactive technology creates performative spaces.

Since at the time (and still now) I was a beginner in Isadora, I didn’t really have an idea of how to go about doing this, and I didn’t know whether Isadora was even the right software to use or whether I should just be using the Kinect in a game engine. My goal is to have the users affect the virtual cloth in real time; not knowing how to do this, I initially thought an option could be to pre-render the cloth simulations and then use the Kinect inputs to trigger different animations to create the dynamic effect. However, after learning how to import 3D models into Isadora and affect their lighting, I realized that I would be able to trigger real-time changes to 3D shapes without using pre-made animations. I might still use animations if I think they can benefit the experience, but after the progress I’ve made so far, I have a sense of how to make this work in real time.

After deciding I needed the Kinect and Isadora for this experience, I got some help from Felicity installing the Kinect drivers on the computer in the classroom so I could begin working on an early prototype. After that was set up, I first learned how to import 3D models into Isadora, because I hadn’t figured that out during PP1. I was able to import a placeholder cloth model I made a few months ago and use it to begin figuring out how to cause dynamic changes using Kinect input. Initially, I hooked the Kinect tracking input to the 3D light orientation, and I was already happy with where it was going, since it felt like I was casting a shadow on a virtual object through my movement, but this was just a simple starting point:

After this, I wanted to test changing the depth of the shape through position and motion, so I thought a good initial approach would be plugging the movement inputs into the shape’s size along the y-axis, to make it seem like the object is shrinking and expanding:

I brought this approach into the previous file, and currently I have the Kinect input impacting the lighting, orientation, and y-axis size of the placeholder cloth shapes. In the gif below, I plugged the movement inputs into the brightness calculator: when I’m further away from the Kinect and more light is being let in, the shapes expand along the y-axis, but when I get closer and it gets darker, the shapes flatten down, which feels like putting pressure on them through movement:
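In code terms, the mapping in that gif is roughly the following – a sketch where the brightness range and scale limits are assumptions to tune per room:

```python
# Sketch: a brightness (or depth) reading drives the cloth's y-axis scale.

def brightness_to_y_scale(brightness, lo=0.0, hi=255.0, flat=0.1, tall=1.0):
    """More light (person further from the Kinect) -> taller shape."""
    t = max(0.0, min(1.0, (brightness - lo) / (hi - lo)))
    return flat + t * (tall - flat)

# e.g. three sampled readings, from dark/close to bright/far:
for b in (30, 128, 240):
    print(b, round(brightness_to_y_scale(b), 2))
```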

I’m happy that I’m figuring out how to do what I want, but I want this to be a shared experience, with multiple users’ data influencing the shapes simultaneously, so the next step is to transition from the computer where I made the prototype into the Motion Lab, where I want the experience to be. Currently, I need Isadora on the Motion Lab computers to be updated to the version we are using in class, so I will remind Felicity about that. After the setup is done again with both Isadora and the Kinect, I will keep working on this in the Motion Lab and modifying the patch based on that environment, since they are going to be interdependent. I also finally managed to renew my Cinema 4D license yesterday, so this upcoming week I want to make the final models (and animations?) for this project and replace the current placeholders.

Quick schematic which will be refined

Feedback from the presentation:

I appreciated the feedback from the class, and the patch I currently have seemed to produce positive reactions. The comments I got included that it was intriguing to see how responsive and sensitive the 3D cloth model was. It was nice to see that a lot of people wanted to interact with it in different ways. I realized I need to think more about the scale of the projection since it can impact how people perceive and engage with it.


Apollo 11 Launch Sound Experience

A picture of the moon in space.

Preparing For Launch

The Moon. Arguably the destination of one of the greatest achievements in the history of space travel. It’s wild to think that people have been on that little circle of cheese in the sky, and even wilder that we’re going back. Space has always been a huge passion of mine. So much so that one of my most cherished childhood possessions was my little season pass to the Burke Baker Planetarium at the Houston Museum of Natural Science; whenever there was a new film or experience, my mom and I would go and spend the day gazing at the stars.

My love for science and space travel continued into high school, where I have vivid memories of afternoons with my science teacher Dr. Cote, talking about rockets and the latest NASA news; we even set up a watch party when the Curiosity rover landed back in 2012. All of these events and interactions with the wonders of space have stayed with me my whole life. So, for the sound experience project, I chose the monumental event of the Apollo 11 Moon landing.

Mission Log

As soon as I started the project I knew I wanted to use a piece from the Interstellar soundtrack. It is hands down my favorite movie of all time, and I absolutely love the soundtrack created by Hans Zimmer. The core style of the soundtrack gives me chills every time I hear it, and it sparks a truly unique sense of wonder and curiosity.

The problem I now faced was which song to use. If you’ve seen Interstellar, then you’re probably picturing the film in your head, thinking of some core moments throughout the movie. As I did this myself, I remembered that the track I play most often is Cornfield Chase. This is from the scene where Cooper, Murph, and Tom are chasing a mysterious drone through rows and rows of corn. Take a listen below:

With my song selected, I got right to work collecting sound artifacts relevant to the launch. I knew that NASA kept audio recordings of a lot of famous missions, so I started downloading and trimming to get the very best bits.

I also wanted a sort of preface to the mission recordings, something that could spark wonder and simulate that same fuzzy feeling I get when I watch the film and think about space. After looking back through the events that led up to the launch, I realized that John F. Kennedy’s Rice University speech would be perfect for the job. I trimmed the speech and extracted this memorable line:

A couple more audio bits were collected to form a short chronological story of the Apollo 11 launch. My goal was for listeners to feel like they were progressing through this historic event, just as humanity progressed after JFK’s speech. I cut together countdown audio as well as some other famous lines from the astronauts themselves.
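For anyone curious about the assembly step, the trimming and splicing can be sketched with pydub – the file names and cut points here are placeholders, not my actual edit:

```python
# Sketch: trim clips, splice them chronologically, and lay them over the score.
from pydub import AudioSegment

speech    = AudioSegment.from_file('jfk_rice.mp3')[15_000:23_000]  # ms slice
countdown = AudioSegment.from_file('apollo_countdown.mp3')
music     = AudioSegment.from_file('cornfield_chase.mp3')

pause = AudioSegment.silent(duration=800)        # a breath between clips
voices = speech + pause + countdown              # chronological splice
mix = music.overlay(voices, position=5_000)      # voices ride over the music
mix.export('apollo_experience.mp3', format='mp3')
```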

But instead of giving you more tidbits, listen to the full experience for yourself. Make sure you’re sitting down because you may get some intense goosebumps.

Mission Successful

I really loved the outcome of this project. It helped me experiment with audio like I never had before. I also have a strong interest in accessibility, so getting to do a project like this helped me flex my universal design muscles. In addition to the audio experience, I created a visual component because I wanted to include some enhancements to the audio for people who chose to view it. Sorry in advance for the weird aspect ratio!

I wouldn’t change anything about my final product. I love this audio experience and I plan on posting it somewhere to share with the other kids that have a season pass to the Planetarium. Experiences like this are what sparked my love for space, and I can only hope to do the same for others.


PP3 Sound project

Final sound file

For this project, my first idea was to work with the story I always remember first when thinking of books that stuck with me when I was younger. The story I chose to represent through sound is from the book Palle Alone in the World by the Danish writer Jens Sigsgaard. Since I never read this book in English, I know it as “Pale sam na svijetu,” and I always remembered the look of this cover because I liked the illustration style in the book:

Since my perception and understanding of the book has always been based on the visual, I thought it would be interesting to imagine what the events in the book would sound like. I always associated the book with happy memories, but just thinking about recreating it through sound, I could tell that it would probably end up sounding daunting, portraying an overall stressful experience. In the book, the boy Palle discovers he is totally alone in the world, and so he goes around doing whatever he wants without any restrictions: he tries driving cars (and even crashes one), he steals money from the bank, he eats all the food he can in the grocery store… The approach I took was to depict the sounds of his actions and experiences in the order they occurred, condensed into three minutes, and I also overlaid some slow piano music to create a dreamlike mood, since at the end of the book we find out that it was all just a dream.

I used the sound level watcher in Isadora to listen to the audio of the piece and used that to distort the picture of the book cover. I did this because, as I was working on this project and listening to what happens in the book, my perception of the book and of how it would feel to be Palle started to change.
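Outside Isadora, the same idea – an audio level driving image distortion – can be sketched with sounddevice and OpenCV; the file name and gain here are placeholders:

```python
# Sketch: mic loudness (RMS) drives a sine-wave warp of the cover image.
import cv2
import numpy as np
import sounddevice as sd

cover = cv2.imread('palle_cover.jpg')
level = 0.0

def on_audio(indata, frames, time, status):
    global level
    level = float(np.sqrt(np.mean(indata ** 2)))  # RMS of the incoming block

with sd.InputStream(callback=on_audio):
    while True:
        k = 1 + level * 50.0                      # louder -> stronger wave
        rows = np.arange(cover.shape[0])
        shift = (np.sin(rows / 20.0) * k).astype(int)
        warped = np.stack([np.roll(cover[r], shift[r], axis=0) for r in rows])
        cv2.imshow('cover', warped)
        if cv2.waitKey(30) & 0xFF == ord('q'):
            break
```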

Just the visual:

On the day we were showing the projects, I couldn’t use my Isadora file on the Motion Lab computers, because it relied on some effects I had previously installed as plugins on the classroom computer I’d been using, so I didn’t end up showing the visual portion during the experience. I wish I could have, but it was still very interesting to hear the reactions people were having even without the visual. The visual is still quite abstract if you don’t know it’s a book, but I think the picture does provide some “children’s book” context, and it also adds flow, since it is constantly moving. Maybe I only thought this because I personally prefer to have something to look at, but I realized that’s not a universal preference. Based on the comments I got, a lot of people understood the moods and the narrative I was trying to convey, which was good to hear. The comments included observations that the events are linear, occurring in a specific sequence and happening right after one another – not in a way you would normally expect, yet still feeling continuous; interpreting the sound through a child’s perspective; the feeling of uneasiness; and getting invested in some sounds more than others, like the sound of eating, walking on the grass, or unwrapping a chocolate bar. Another interesting aspect of the experience was being able to add and manipulate the lighting as the audience was listening, which is something I hadn’t thought of before, because I initially wasn’t planning to show it in the Motion Lab but decided to after hearing how immersive the other projects sounded in there.

I also remember thinking of this book sometimes when Covid first started, when I had a very bad experience being stuck in a house with toxic and insane roommates and not being able to see my friends, so for me this book also relates to that time period.