{"id":5840,"date":"2026-05-05T22:38:25","date_gmt":"2026-05-06T02:38:25","guid":{"rendered":"https:\/\/dems.asc.ohio-state.edu\/?p=5840"},"modified":"2026-05-05T22:38:25","modified_gmt":"2026-05-06T02:38:25","slug":"cycle-two-a-3d-movement-based-sound-explorer","status":"publish","type":"post","link":"https:\/\/dems.asc.ohio-state.edu\/?p=5840","title":{"rendered":"Cycle Two: A 3D Movement-Based Sound Explorer"},"content":{"rendered":"\n<p>Going into cycle two, I immediately knew that I wanted to take the ideas I had done in 2 axes of movement in <a href=\"https:\/\/dems.asc.ohio-state.edu\/?p=5784\" data-type=\"link\" data-id=\"https:\/\/dems.asc.ohio-state.edu\/?p=5784\">cycle one<\/a> and translate it to 3D. In order to do this, I needed to rework the analysis portion to plot all the samples in 3 dimensions, as well as figure out a new way for the user to see their movement within the corpus of sounds. For several days, I tried to do the analysis section on my own, but found myself unable to figure out how to plot them in a 3D space as I&#8217;m not great with Jitter, Max&#8217;s visual package. However, just about when I was getting ready to give up, I found someone who had already done <a href=\"https:\/\/discourse.flucoma.org\/t\/3d-sample-scrubber-with-jitter\/1199\/3\" data-type=\"link\" data-id=\"https:\/\/discourse.flucoma.org\/t\/3d-sample-scrubber-with-jitter\/1199\/3\">exactly what I was trying to do<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1566\" height=\"1082\" src=\"https:\/\/dems.asc.ohio-state.edu\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-05-at-4.45.47-PM.png\" alt=\"\" class=\"wp-image-5843\" style=\"aspect-ratio:1.4392032981742249;width:622px;height:auto\" srcset=\"https:\/\/dems.asc.ohio-state.edu\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-05-at-4.45.47-PM.png 1566w, https:\/\/dems.asc.ohio-state.edu\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-05-at-4.45.47-PM-300x207.png 300w, https:\/\/dems.asc.ohio-state.edu\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-05-at-4.45.47-PM-1024x708.png 1024w, https:\/\/dems.asc.ohio-state.edu\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-05-at-4.45.47-PM-768x531.png 768w, https:\/\/dems.asc.ohio-state.edu\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-05-at-4.45.47-PM-1536x1061.png 1536w\" sizes=\"auto, (max-width: 1566px) 100vw, 1566px\" \/><figcaption class=\"wp-element-caption\">My saving grace this cycle<\/figcaption><\/figure>\n\n\n\n<p>I won&#8217;t get too in-depth on how this works. But in short, it reduces the MFCC analysis data to 3 dimensions, as opposed to 2 in cycle 1, unpacks all that data into a matrix with 3 dimensions, and then that matrix data is used to generate an OpenGL environment with all the samples plotted in a 3D space. Then, depending on whatever point the camera is closest to in the Jitter window, the sample that correlates with that point is played back. This camera, however, was driven with WASDQZ, so I had to implement a new system of control that allowed this 3D corpus explorer to interface with the Google Mediapipe data I was receiving from TouchDesigner. I did this by scaling the data to the 3D space, smoothing it out some, and sending it a as list into the position attribute of the jit.gl.camera object. 
![Routing of the MediaPipe data over OSC](https://dems.asc.ohio-state.edu/wp-content/uploads/2026/05/Screenshot-2026-05-05-at-5.56.38-PM.png)

![Zoom-out of the 3D corpus space; each dot represents a sample, and dimmer dots are deeper on the Z-axis](https://dems.asc.ohio-state.edu/wp-content/uploads/2026/05/Screenshot-2026-05-05-at-6.19.17-PM.png)

## Results

Unfortunately, I don't think my vision for this cycle translated to reality as well as I'd hoped it would. One of my biggest issues was that I left on an option that locked the camera's view to (0,0,0) in the space. This caused a major disconnect between a user's movements and the movement in the space, and everyone seemed to struggle with "navigating" the corpus much more than in the 2D iteration. Once I got rid of that attribute during our discussion afterwards, people immediately understood what was happening. I think the camera was also just a little glitchy, since MediaPipe isn't the *best* at detecting depth, which caused it to jump around wildly; heavier filtering on the depth axis might have helped (one idea is sketched below), but if I were to continue with this 3D idea, I would have to incorporate a depth sensor in some way so that all three dimensions are accurate.

In my first cycle, I also included the MediaPipe landmark camera feed in the projected image as a sort of monitor for movement. This time, I decided not to include it, as I'd hoped that movement in a 3D space would translate more intuitively to the user. However, I was really surprised that almost everyone actually missed its presence. I think the playfulness of the little dot stick figure moving around on screen must have provided a nice counter to the cacophony of sound. For this reason, I made a point of including the landmark skeleton in the final version.
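On the depth glitchiness: since the wild jumps read like single-frame outliers, a short median window in front of the usual smoothing is one plausible fix. This is a hedged sketch, not something that was in the cycle-two patch; the window size and smoothing factor are guesses.

```python
# Hypothetical spike rejection for the depth (z) axis only.
from collections import deque
from statistics import median

class DepthFilter:
    def __init__(self, window=5, alpha=0.15):
        self.history = deque(maxlen=window)  # recent raw z estimates
        self.alpha = alpha                   # post-median smoothing factor
        self.value = 0.0

    def update(self, z_raw):
        self.history.append(z_raw)
        z_med = median(self.history)  # the median rejects one-frame spikes
        self.value += self.alpha * (z_med - self.value)
        return self.value

# Run once per frame on the raw MediaPipe depth, before scaling it to the space.
f = DepthFilter()
for z in [0.40, 0.41, 0.95, 0.42, 0.43]:  # 0.95 is a glitch frame
    print(round(f.update(z), 3))
```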
[Video: a short recording of Alex using the system](https://dems.asc.ohio-state.edu/wp-content/uploads/2026/05/IMG_1196-2.mov)

Here's a short recording of Alex using the system. As you can see, it was not very precise. Additionally, something in the analysis created a lot of sample points with that shrill, high-pitched sound that plays so frequently. I don't know where it came from, but it taught me to always check the samples when I'm working with tools like FluCoMa (a rough way to automate that check is sketched at the end of this post).

While this cycle was a bit of a failure in meeting the goals I had, it gave me a chance to step back afterwards and examine what I actually wanted to achieve with these cycles. Out of all of them, the largest goal was to create a *group experience*. This setback caused me to really narrow in on that idea for the third cycle, which I think worked out much better in the long run.
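As a postscript, here's roughly what that "always check the samples" habit could look like as a script: scan the exported slices and flag any whose average spectral centroid is suspiciously high, like that shrill tone. Everything here is an assumption for illustration: the `slices/` folder, the WAV format, the 5 kHz threshold, and the use of librosa.

```python
# Hypothetical corpus audit: flag slices that skew shrill.
from pathlib import Path
import librosa
import numpy as np

THRESHOLD_HZ = 5000.0  # arbitrary cutoff for "suspiciously bright"

for wav in sorted(Path("slices").glob("*.wav")):
    y, sr = librosa.load(wav, sr=None)  # keep the original sample rate
    if len(y) == 0:
        continue  # skip empty or failed slices
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    avg = float(np.mean(centroid))
    if avg > THRESHOLD_HZ:
        print(f"check {wav.name}: mean spectral centroid {avg:.0f} Hz")
```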