Cycle 3 – Solo+ (the plus is that it is DDR now)
Posted: May 12, 2026 Filed under: Uncategorized
If you have been keeping track of the last two posts, you will know I set out this cycle to make a new control scheme for this iteration of Solo. So I made a dance pad. I also projected the lyric translations onto the circular rug beneath the user. The lyrics appear in real time, thanks to the translated and timed captions on the official Zutomayo music video for the song: I used the captioned footage as a reference to time them out myself, then overlaid them onto the user-controlled imagery. I mapped the three tap inputs onto a drum, which the user held, and the five hold inputs onto a shower liner. The shower liner was there to make sure the user could see the lyrics while navigating the inputs below them; the whole setup was an effort to force the user to stare at the floor, hopefully at the lyrics. The user is not only curating visuals for their audience but also doing a little dance in the process.
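Timing the captions by hand essentially reduces to a list of (start, end, text) cues plus a lookup at playback time. Here is a minimal sketch of that lookup; the cue timings and text below are placeholders, not the actual timings from the video:

```python
# Timed-caption lookup: each cue is (start_sec, end_sec, text).
# These cues are hypothetical placeholders, not the real video timings.
CUES = [
    (0.0, 3.2, "first translated line"),
    (3.2, 6.8, "second translated line"),
    (6.8, 10.0, "third translated line"),
]

def active_caption(t, cues=CUES):
    """Return the caption text showing at playback time t, or None."""
    for start, end, text in cues:
        if start <= t < end:
            return text
    return None
```

With cues like these, the projection loop just calls `active_caption(player_time)` every frame and draws whatever comes back over the user-controlled imagery.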
The user rarely saw the main screen, and the audience saw the lyrics less often than the performers did… The two experiences were meant to be separate: one party was receiving direct guidance on what to produce, while the audience was left to process the user’s creation… Theatrically.
So how did it go?
Well, we ran into a few bumps. It’s quite hard to read while jumping around on a mat… and, in my case, trying not to trip. I was caught up in trying to make cool visuals for the audience, and whenever I lost myself in the music a bit, I would snap out of it, worried I wasn’t making interesting visual combinations. Users reported similar feelings, only picking up some lyrics here and there. That said, I don’t want users to just stand and read, and it’s okay to catch only some of the lyrics for this project. This is a processing and meaning-making project; it’s okay if users differ in their interpretations.
The suggestion that stood out to me most was to involve a second player: one player sits on the side with the bucket, while the other uses the step controls.
I was also asked whether I wanted the footage to be a “story machine” in future iterations… whether the footage (most of which I said I wanted to animate) could go with any song (the answer is yes). And in my brain, I was like, “holy shit, I sort of made my capstone.”
I was remaking a version of my initial idea for my senior Capstone Project in undergrad. I had taken an alternative controller/programming course in my junior year, and I fell in love. So much so that I drummed up the idea of making a game/interactive experience: an audio production app with simplified, visualization-based control schemes. It was inspired by an app I played with as a kid during the dawn of the iPod Touch; I haven’t been able to find it to this day, and it haunts me. From what I remember, there were three categories of audio libraries you could access: bass, melody, and percussion. You would pick from those libraries and place tracks into an “orbit” centered around a sphere. Moving the tracks (also sphere-shaped) around the orbit changed the tempo (and maybe other things), and the closer they were to the center sphere, the louder that audio element was… I loved that app, and if you know what it is, I’ll give you $10.
So basically, I wanted to make that, but what if those audio elements and their placement were connected to animations that could be pieced together into a story? Any story.
I emailed the one professor I knew in the newly budding game design department at CCAD, but it was the summer and I never got a response, so I opted to make a more traditional animated film instead.
So a story machine… I think that would be my cycle four. A creation tool that explores every angle of a narrative constructed by its users and audience… With controls more sophisticated than tin foil taped to a shower liner.
And last but not least, fuck gen AI
fuck data centers,
and fuck tech bros, kids.
Make punk ass art.