Pressure Project 2: One for All, All for One

I named my cell One for All, All for One because it is built around the idea that individuals and communities are constantly shaping each other. The cell itself is a constant conversation between oneself and the communal archive. It takes a live video feed and layers it over a slideshow of images showing communities and people from different parts of the world.

Then interactive sound enters the picture. A glitch effect driven by audio input levels determines how much the live video overlay fractures. The louder the audio, the more the live layer breaks apart and reveals the slideshow underneath. Alongside this, I built in an internal LFO paired with an Edge TOP to create a rhythmic pulse, something I called a “heartbeat”. Even without external input, the system works fine and feels alive.
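The control logic described above can be sketched in plain Python. This is not the actual TouchDesigner network (which does this with CHOPs and TOPs, not scripts); the function names, threshold, and ranges are illustrative assumptions:

```python
import math

def glitch_amount(audio_level, threshold=0.1, max_displace=0.5):
    """Map an audio input level (0..1) to a glitch/displacement strength.
    Below the threshold the live layer stays intact; above it, the
    displacement grows with loudness, revealing the slideshow underneath."""
    if audio_level <= threshold:
        return 0.0
    # Normalize the portion above the threshold into 0..1, then scale.
    t = (audio_level - threshold) / (1.0 - threshold)
    return min(max_displace, t * max_displace)

def heartbeat(time_s, rate_hz=1.0):
    """Internal LFO 'heartbeat': a 0..1 sine pulse that keeps the cell
    feeling alive even with no external input. In the real network this
    value would drive the Edge TOP's response."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * rate_hz * time_s)
```

In the actual patch the same mapping happens through operator parameters rather than code, but the shape of the behavior is the same: quiet input leaves the live layer whole, loud input fractures it, and the LFO pulses regardless of what comes in.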

The audio module of the system, which controls the switch.
The video module, which overlays the live video input with the slideshow using the switch. Also shown here is the glitch-effect module, which uses Noise, Displace, Ramp, Texture 3D, and Time TOPs. The Edge TOP is applied to the very final video output received from all of the previous processes.
The signal module, which plugs into the Edge TOP.

The structure is modular and layered, and honestly not that complicated. There is a live video input and a media player (which controls the slideshow), both of which plug into a switch. The output of the switch goes into a glitch system and a pulse system. Each module could be replaced without breaking the overall logic. The audio input, live video input, and the signal (LFO) are designed so that they can be overridden by an external network signal as well. The cell has its own system, but it is designed to connect, following the true concept of one for all, all for one.
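The override behavior amounts to a simple fallback per channel: use the external network signal when one is present, otherwise use the cell's own internal source. A minimal sketch (function name and values are hypothetical; in the patch this is just a switch routed by an "input connected" flag):

```python
def select_signal(external, internal):
    """Prefer an external network signal when one arrives; otherwise
    fall back to the cell's own internal source, so the cell keeps
    running standalone but yields to the network when connected."""
    return external if external is not None else internal

# Standalone: no external signal, the internal LFO value wins.
standalone = select_signal(None, 0.8)
# Networked: an incoming signal overrides the internal one.
networked = select_signal(0.3, 0.8)
```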

The external network of the cell, which enables it to communicate with other cells in the network. It includes audio input/output, video input/output, and signal input/output.

Reflection:

When all the cells were assembled, things became unstable. Signals were constantly dropping and connections were dying. For a while, I thought something was wrong with my cell because nothing would show up (it turned out to be a problem with the input signals I was receiving). When I finally got it to work, very interesting emergent behaviors appeared. The glitches danced to different rhythms. Video overlays ended up in very interesting stacked outputs. It was interesting because I did design my system knowing it had to be plugged into a bigger system. However, I did not envision the results I got during testing. What I controlled alone became either amplified or distorted by others. The network did not just combine outputs. It reshaped them.

I think where my careful planning fell short was the heartbeat. I had not accounted for the fact that other cells could send signals of different types. Instead of a steady pulse, I got an irregular signal input, which changed the whole heartbeat effect. At first it felt like something had gone wrong. My cell was no longer just reacting to my inputs. It was reacting to everyone. That is exactly what One for All, All for One means. Each cell affects the others. Each signal influences the collective behavior. My cell had a life of its own. In the network, it learned to respond, adapt, and sometimes surrender to the collective.

Project File: pp2_Zarmeen.zip


Pressure Project #1: Pitch, Please.

Description: Pitch, Please is a voice-activated, self-generating patch where your voice runs the entire experience. The patch unfolds across three interactive sequences, each translating the frequency from audio input into something you can see and play with. No keyboard, no mouse, just whatever sounds you’re willing to make in public.

Reflection

I did not know exactly what I wanted for this project, but I knew I wanted something light, colorful, interactive, and fun. While I believe I got what I intended out of this project, I also got some nice surprises!

The patch starts super simple. The first sequence is a screen that says SING! That’s it. And the moment someone makes a sound, the system responds. Font size grows and shrinks, and background colors shift depending on frequency. It worked as both onboarding and instruction, and made everyone realize their voice was doing something.
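The mapping behind that first sequence can be sketched as two small functions: pitch in, visual parameter out. This is a logic sketch, not the actual patch; the vocal range and size limits here are made-up assumptions:

```python
def font_size(freq_hz, f_min=80.0, f_max=1000.0, s_min=24, s_max=200):
    """Map a detected voice frequency (Hz) to a font size for the
    SING! text: low notes shrink it, high notes grow it."""
    # Clamp the pitch into the expected vocal range, then interpolate.
    f = max(f_min, min(f_max, freq_hz))
    t = (f - f_min) / (f_max - f_min)
    return s_min + t * (s_max - s_min)

def background_hue(freq_hz, f_min=80.0, f_max=1000.0):
    """Map the same frequency onto a 0..1 hue value, so the background
    color sweeps through the spectrum as the pitch rises."""
    f = max(f_min, min(f_max, freq_hz))
    return (f - f_min) / (f_max - f_min)
```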

The second sequence is a Flappy Bird-esque game where a ball has to dodge hurdles. The environment was pretty simple and bare-bones, with moving hurdles and a color-changing background. You just have to sing a note to make the ball jump. This is where things got fun. Everyone had gotten comfortable at this point. There was a lot more experimentation, and a lot more freedom.
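The per-frame logic of a Flappy-style note-to-jump mechanic looks roughly like this. Again, a hedged sketch rather than the real patch; the threshold, impulse, and gravity values are placeholders:

```python
def step_ball(y, vy, loudness, jump_threshold=0.3, jump_impulse=0.8,
              gravity=0.05):
    """Advance the ball one frame: a loud enough note fires an upward
    impulse, otherwise gravity pulls the ball down."""
    if loudness >= jump_threshold:
        vy = jump_impulse          # sung note: jump
    else:
        vy -= gravity              # silence: fall
    y = max(0.0, y + vy)           # keep the ball above the floor
    return y, vy
```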

The final sequence is a soothing black screen, with a trail of rings moving across the screen like those old screensavers. Again, audio input controls the ring size and color. Honestly, this one was just made as an afterthought because three sequences sounded about right in my head. So, I was pretty surprised when the majority of the class enjoyed this one the best. It's just something about the old-school screensaver aesthetic. Hard to beat.
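A ring trail like this is essentially a fixed-length queue: each frame appends a new audio-shaped ring and drops the oldest one. A minimal sketch of that idea (the data shape and numbers are assumptions, not the actual patch):

```python
from collections import deque

def update_trail(trail, x, level, max_rings=30):
    """Append a new ring at position x whose radius and hue follow the
    current audio level, dropping the oldest rings so a fixed-length
    trail drifts across the screen like an old screensaver."""
    ring = {"x": x, "radius": 10 + 90 * level, "hue": level}
    trail.append(ring)
    while len(trail) > max_rings:
        trail.popleft()
    return trail
```

Keeping the trail capped is what gives the screensaver feel: old rings quietly disappear instead of the screen filling up.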

What surprised me most was how social it became. I was alone at home when I made this and I didn't have anyone test it, so it wasn't really made with collaboration in mind, but it happened anyway. I thought people would interact one at a time. Instead, it turned into a group activity. There was whistling, clapping, and even opera singing. (Michael sang an aria!) At one point people were even teaming up and giving instructions to each other on what to do.

When I started this project, I had a very different idea in my mind. I couldn't figure it out though, and just wasted a couple hours. I then moved on to this idea of a voice-controlled flappy-duck game, and started thinking about executing it in the most minimal way possible (because again, time). This one took me a while, but I reused the code for the other two sequences and managed to get decent results within the timeframe. There's something about knowing there is a time limit. It just awakens a primal instinct in me that kind of died after the era of formal timed exams in my life ended. In short, I pretty much went into hyperdrive and delivered. I'm sure I would've wasted a lot more time on the same project if there was no time limit. I'm glad there was.

That said, could it be more polished? Yes. Was this the best I could do in this timeframe? I don't know, but it is what it is. If I HAD to work on it further, I'd add a buffer at the start so the stage doesn't just start playing all of a sudden. I would also smooth out the hypersensitivity of the first sequence, which makes it look very glitchy and headache-inducing. But honestly, with the resources that I had, Pitch, Please turned out decent. I mean, I got people to play, loudly, badly, collaboratively, and with zero shame, using nothing but their voice. Which was kind of the whole point.


AI EXPERT AUDIT – DANDADAN

I chose the anime DanDaDan as my topic. I believe I am an expert in a lot of anime/manga-related topics because I have been reading manga and watching anime for more than a decade now. I love DanDaDan especially because it's one of the few recent series that feels a little different in a world of oversaturated genres like level-up game stories. DanDaDan is a breath of fresh air: super weird, fun, and filled with all sorts of absurdity. So, in order to train NotebookLM on this topic, I used some YouTube videos. The videos focused on the storyline, major arcs, characters, and why it is such a hit.

1. Accuracy Check

I wasn't too surprised that it got the gist of the story correct. I did give it sources where the YouTubers summarized the whole storyline and talked about its characters, arcs, and resolutions. So it wasn't a bad generic overview; I would even say it was good for a summary. It's only when you've been thoroughly into a certain subject area that you start understanding its nuances and tiny details. As for what it got wrong, I don't think it said anything outright absurd. It's just that it sometimes mispronounced some names. With the names being Japanese, I am not surprised that they might be mispronounced, but the AI used a range of mispronunciations for the same name.

One of the voices in the podcast was too hung up on making the story into something it is not. Sure, it was justified at some points, but it insisted that the real ideas behind this absurd adventure-comedy are deeper themes like teenage loneliness, and that it's actually a romance story, which it's not (it's a blend of sci-fi and horror). There are sub-themes, as in all anime, but they're not the main theme. The other voice sometimes agreed with this idea. The podcast was not focused enough on just keeping it fun and light, which is what DanDaDan really is.

2. Usefulness for Learning

If I were listening to this topic for the first time, I feel like this podcast wouldn't be a bad starter. Like I mentioned earlier, it gave a pretty decent summary of the whole plot. I think it definitely gets you started if you need a quick explanation of a subject area. I found the mind map to be pretty decent too; it was a decent overview of the characters and the arcs. The infographic, on the other hand… so bad. The design is super cringe, and again, a lot of emphasis is on the romance and how it drives the action, which I disagree with.

3. The Aesthetic of AI

Overall, the conversation was SO very cringe, and it was very difficult to get used to at first. I used the debate mode, and the voices were talking so intensely about a topic that's nowhere near as serious as the AI made it out to be. I had to just stop and remind myself it's just a weird, fun anime they're talking about. AI has this tendency to make everything sound intense, I guess.

4. Trust & Limitations

I would recommend AI to someone who wants a quick summary or overview of a topic; that's what AI is good at. What I wouldn't recommend is dwelling on the details that the AI talks about. If anyone wants details or wants to form an opinion about a topic, they should look into it themselves.

Link to the podcast:

https://notebooklm.google.com/notebook/e5c722e5-dd21-4dc4-ae39-a7f22076b7d8?artifactId=d912ed44-154e-422e-aa93-fc9307c9a2f2

AI-Generated Visuals:

Sources:
https://youtu.be/8XdTF5tnMVU?list=TLGG7J2IoA7cY1QwNTAyMjAyNg