<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Processing Foundation on Medium]]></title>
        <description><![CDATA[Stories by Processing Foundation on Medium]]></description>
        <link>https://medium.com/@ProcessingOrg?source=rss-42ab48286a4c------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*AeyT_t6GECwyRP6hYPuhgg.png</url>
            <title>Stories by Processing Foundation on Medium</title>
            <link>https://medium.com/@ProcessingOrg?source=rss-42ab48286a4c------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 03 Apr 2026 20:45:08 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@ProcessingOrg/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Call / Code / Response]]></title>
            <link>https://medium.com/@ProcessingOrg/call-code-response-92918629f555?source=rss-42ab48286a4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/92918629f555</guid>
            <dc:creator><![CDATA[Processing Foundation]]></dc:creator>
            <pubDate>Mon, 23 Mar 2026 15:49:33 GMT</pubDate>
            <atom:updated>2026-03-25T13:28:30.849Z</atom:updated>
            <content:encoded><![CDATA[<h4>How creative technologists and youth activists built LIVE FROM LA — Processing Foundation Fellowship Project 2025</h4><p>Written by Amy B. Woodman; Edited by Patt Vira and Xin Xin</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FztwJwoS_ycs%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DztwJwoS_ycs&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FztwJwoS_ycs%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="640" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/79fb8d3e29bc3f7679d4c60c62b7b8c2/href">https://medium.com/media/79fb8d3e29bc3f7679d4c60c62b7b8c2/href</a></iframe><h4>Fellowship Project: Call / Code / Response</h4><ul><li>Artists: Ana C., <a href="https://www.instagram.com/jiwonhaam/">Jiwon Ham</a>, &amp; <a href="https://instagram.com/paytoncroskey">Payton Croskey</a></li><li>Links: <a href="https://anacardenas.com/p5_sketches_exp/collage.html">collage tool</a>, <a href="https://github.com/anuzk13/p5_sketches">student collage repo</a>, <a href="https://github.com/anuzk13/p5_performances">performance repo</a></li></ul><p>“The most radical thing about this project,” Processing Foundation Co-Executive Director Xin Xin says, “is that we are letting young people lead with their vision, and every adult involved is following their cues.” This approach became the collaboration’s defining philosophy; as Processing Foundation Fellow Jiwon Ham reflects, the project was ultimately about “different disciplines, one purpose: amplifying a story led by youth.”</p><p><em>LIVE FROM LA </em>is a youth-led production that emerged from the Hip Hop Theatre Program founded by Street Poets, Inc., who also served as the play’s lead producer. 
The production was created by Street Poets, Inc., Unusual Suspects Theatre Company, and the Processing Foundation, with Versa-Style Street Dance Company and No Easy Props joining as creative partners. Each organization contributed skill-building workshops across disciplines including theatre, dance, poetry, music, and creative technology. At the heart of the collaboration were 12 committed young people, ages 13–19, who wrote and performed a play drawn from their own lived experiences.</p><p>This project stemmed from the 2025 Processing Foundation Fellowship, an exploratory cohort model in which Fellows were paired directly with a community partner to co-develop software tailored to that partner’s specific needs. The creative technologists awarded this Fellowship were Ana C., Jiwon Ham, and Payton Croskey, who built the technical and artistic architecture for the coded projections that were displayed across the stage during the performance.</p><figure><img alt="A nighttime street scene shows a building facade covered in colorful projected images, text, and code-like visuals. People stand on the sidewalk watching as the projections transform the architecture into a layered collage." src="https://cdn-images-1.medium.com/max/1024/1*QlDNa6Kgx0hAa_dmTTxTng.png" /><figcaption>Projection test on a Los Angeles building facade during the development of <em>LIVE FROM LA</em>. Youth-created collages and coded visuals are mapped onto the architecture, transforming the space into a canvas for storytelling, protest, and collective expression. <em>Photo by Mariana Blanco.</em></figcaption></figure><p>For Jiwon, whose practice evolved from presenting motion graphics on public LED screens in Seoul to building immersive, interactive systems, <em>LIVE FROM LA</em> was an opportunity to root her work in the city she now calls home. Jiwon often begins her projects through conversations and memory-gathering. 
“I’m consistently drawn to how people form emotional relationships with place,” Jiwon explains. In this production, that approach translated into a projection system built to help students curate and layer meaningful images about their own histories, families, and beliefs.</p><p>“I felt very connected to this story,” Ana reflects. “My life and the story have been intertwined throughout this process.” Ana came to creative technology through illustration. Raised in a family of artists and engineers, she grew up finding the connections between analog and digital art making. Over time, her practice moved beyond aesthetics and into research-driven work that focuses on the interplay of technology and identity. Inspired by artists like Doris Salcedo, whose public interventions engage memory and political history, Ana began exploring augmented reality and creative coding as ways to enter into dialogue with place. In <em>LIVE FROM LA, </em>that sensibility shaped her approach to using coding as a medium for empowerment, helping students see technology as something they could use to author and amplify their stories.</p><p>Payton approaches creative technology as both a designer and a theorist. Her early experiments revealed how computation could make objects feel alive, raising questions about futurity, artificiality, and power. Now a PhD researcher at UC Santa Barbara’s Expressive Computation Lab, she examines the intersection of AI, AR, and Afrofuturism, developing community-centered methods for ethically modeling and preserving Black cultural data. “Meaningful innovation happens when process matters as much as product,” says Payton. 
In <em>LIVE FROM LA, </em>Payton brought a liberatory design framework into the theatre space, prioritizing accessibility, dialogue, and shared ownership so that the students’ stories were protected and shaped on their own terms.</p><p>The team analyzed the student-written script scene by scene to determine where projections could best serve the story without disrupting the emotional flow. They chose a collage-style aesthetic to mirror the “raw energy” of youth protest. To make the vision a reality, the fellows co-created a custom digital collage-making tool using p5.js, TypeScript, and React.</p><figure><img alt="An animated collage flower grows against a black background, with petals and leaves made from layered images of protest signs, faces, and text, symbolizing collective resistance and storytelling." src="https://cdn-images-1.medium.com/max/399/1*jtvwLV0Gm0naxOojvnpFRg.gif" /><figcaption>A collage flower blooms from layered images of protest, memory, and identity. Generated from student-created visuals, the animation transforms acts of resistance into a living, growing form, reflecting how collective stories take root and expand across the stage.</figcaption></figure><p>The technical labor was shared with intentionality. Ana built the software and the control interface, ensuring the code was clean and triggerable in real time. Jiwon managed the hardware and used 3D scanning and Blender to simulate how projections would interact with the building facades of the outdoor Hollywood studio lot. And Payton focused on the cultural narratives and developed the lesson plans to introduce students to creative coding.</p><p>The project’s philosophy was rooted in call-and-response as a constant dialogue between the technologists and the performers. One of the most powerful connections was formed with a student named Jorge, who played a “nerdy tech guy,” as he puts it. Though Jorge had never coded before, the team designed a custom curriculum for him. 
Ana recalls the beauty of teaching him in her native language: “I would just put [the variables] in Spanish because that way it’s clear what the variables are … I really admired the voice that they had and all the agency they saw [with making their protest collages].”</p><figure><img alt="An animated collage of protest imagery featuring a portrait of a woman, raised fists, and bold text in English and Spanish reading “We’re still here” and “Seguimos aquí,” with words appearing and shifting over time." src="https://cdn-images-1.medium.com/max/793/1*HtJ9MViKy8YkCmaibnF_3w.gif" /><figcaption>An animated protest collage layers portraits, symbols, and bilingual text “We’re still here / Seguimos aquí” as words emerge and shift across the projection. The piece reflects the persistence of community, memory, and resistance within the youth-led performance.</figcaption></figure><p>The students’ input directly shaped the visuals. When an actor named Natalia performed a poem about resilience, the team created an animation of flowers “blooming” from the crevices of the buildings. Another student actor playing Luna was moved to tears seeing the faces of her own family and friends projected behind her while she spoke. Ana describes the atmosphere as one of “mutual admiration,” where the fellows often found themselves left speechless by the wisdom and courage of the youth.</p><p>The collaboration culminated in a climactic scene staged on the Radford Studio backlot in front of more than 200 audience members, where the students performed across a set of building facades transformed by immersive projections generated from their own digital collages. As the students delivered their spoken-word poetry, the Fellows carefully cued and orchestrated the dynamic projections onto the buildings. As the story reached its peak, the actors staged a protest against city plans to gentrify their neighborhood. Sirens pierced the air. The crowd fled. 
The stage emptied.</p><p>But the projections did not disappear. The students’ collages remained lit, flickering defiantly across the facades long after the performers had exited. In that charged silence, the images of resistance and hope held their ground, a reminder that we’re still here.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1022/1*GNTLrAO4SDTzQoed32eowQ.png" /><figcaption>Cast and collaborators from LIVE FROM LA gather in celebration after the performance, holding bouquets and sharing a moment of joy and accomplishment. Photo courtesy of Unusual Suspects Theatre Company.</figcaption></figure><p>The Processing Foundation Fellowship supports artists and creative technologists developing open-source creative tools and practices that expand access to creative technology. Through financial support, mentorship, and community partnerships, fellows create tools, artistic works, and research that contribute to a more open and accessible creative coding ecosystem.</p><ul><li>Learn more about the Fellowship → <a href="https://processingfoundation.org/fellowships">here</a></li><li>Support programs like this → <a href="https://processingfoundation.org/donate">here</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=92918629f555" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Silence in the Glitch]]></title>
            <link>https://medium.com/@ProcessingOrg/the-silence-in-the-glitch-e00788e80b28?source=rss-42ab48286a4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/e00788e80b28</guid>
            <category><![CDATA[fellowship]]></category>
            <dc:creator><![CDATA[Processing Foundation]]></dc:creator>
            <pubDate>Fri, 20 Mar 2026 12:01:00 GMT</pubDate>
            <atom:updated>2026-03-23T09:48:41.737Z</atom:updated>
            <content:encoded><![CDATA[<h4><strong>Reimagining the Lagos Lagoon Through Speculative Protest</strong> — Processing Foundation Fellowship Project 2025</h4><p>Written by Amy B. Woodman; Edited by Patt Vira and Xin Xin</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FA6Umn0vHhzQ%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DA6Umn0vHhzQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FA6Umn0vHhzQ%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/39fc2d1ad5b4aed4f8db8eff00f4eed2/href">https://medium.com/media/39fc2d1ad5b4aed4f8db8eff00f4eed2/href</a></iframe><h4>Fellowship Project: The Future Protest</h4><ul><li>Artists: <a href="https://maryamkazeem.com/">Maryam Kazeem</a> and <a href="https://github.com/JubrilO">Jubril Olambiwonnu</a></li><li>Links: <a href="https://www.irantipress.com/programmes#future">Project Website</a></li></ul><p>“I’m really interested in prompting the public as a way of making meaning,” Maryam Kazeem says, reflecting on the shift from working with small artistic cohorts to creating a project that is open to “anyone and anybody participating.” She views these collective recorded gestures as a “dream space” where the act of protest functions as a generative “glitch” that creates an opening in the world as we know it.</p><p>In their shared narrative, participants offered a startlingly intimate inventory of their imagined futures: they brought musical instruments, love, sandwiches, their kids, and “the end of war and violence.” They envisioned marching alongside ancestors and the next generation, declaring that the protest would only be over “when the seeds bloom and we are free from pain.” For Maryam and Jubril Olambiwonnu, the technical lead on the project, these contributions represent a form of radical publishing where archival words 
finally touch the water, transforming a digital exercise into a living site of collective research.</p><figure><img alt="A digital illustration of three mangrove-like trees made from translucent plastic bottle shapes, with exposed root systems spreading below. Beside them, a torn-paper graphic displays text prompting a future protest with questions about what to bring, who to go with, and when it ends." src="https://cdn-images-1.medium.com/max/1024/1*L9Rr-cW4LnnfmAhTqoD0Kw.png" /><figcaption>Speculative mangrove forms emerge from digital renderings of plastic waste, visualizing the ecological transformation of the Lagos Lagoon. Alongside them, a “future protest” prompt invites participants to imagine collective action.</figcaption></figure><p>The concept of The Future Protest emerged from a central, unsettling question: how can a body of water that has been systematically erased, sand-filled, and polluted become a site for speculative recovery? The project, a collaboration between Maryam, a writer and researcher based in Lagos, and Jubril, a software engineer and creative coder in London, responds to the ongoing ecological and socio-political displacement along the Lagos Lagoon in Nigeria. Their vision was not merely to document loss but to create a sonic archive: a digital and eventually physical space where the act of imagining a protest creates a “glitch” in the dominant narrative of urban degradation.</p><p>In interpreting this year’s Processing Foundation Fellowship theme “Data Storytelling,” the project explores the friction between data, often seen as a rigid tool of state surveillance or outsider documentation, and storytelling, which in the context of Lagos is expansive, messy, and deeply relational. 
The Future Protest seeks to hold these together, treating the sound of a voice as a living energy capable of doing something in the water.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bipEkg3vFRlnr9YC2SHlgg.png" /><figcaption>A contrast between past and present conditions of the Lagos Lagoon: thriving mangrove ecosystems alongside today’s reality of pollution and plastic waste. The juxtaposition reflects the environmental transformation at the center of The Future Protest.</figcaption></figure><p>The history of the Lagoon is one of forced transformation. Before British colonization, the area was a lush network of wetlands and mangrove forests that protected the land and provided a sanctuary for both wildlife and human communities. Under the guise of malaria mitigation, the colonial administration initiated a campaign of land reclamation, eradicating the mangroves and fundamentally altering the city’s relationship with its water. Today, that legacy persists in the form of untreated industrial waste and the displacement of waterfront communities.</p><p>This reckoning takes a startlingly literal form in the project’s technology. When a participant records a “future protest,” a poethic algorithm developed by Jubril analyzes the recording. Crucially, the algorithm listens not only to the words but also to the pauses and gaps where the speaker catches their breath or searches for a thought. These moments of silence generate 3D animations of speculative mangrove trees. However, these are not the lush green trees of the pre-colonial era; they are constructed from digital renderings of the plastic bottles and trash that currently choke the Lagoon.</p><figure><img alt="A digital simulation of a water landscape with small, stylized mangrove trees emerging from the surface, viewed from a first-person perspective with a recording interface visible on screen." 
src="https://cdn-images-1.medium.com/max/640/1*eLPz-JyW3V4KtR6itC6GuA.gif" /><figcaption>A simulated lagoon environment where speculative mangrove forms emerge in response to recorded voices. The interface invites participants to contribute “future protests,” transforming sound and silence into living digital landscapes.</figcaption></figure><p>“I got so excited when we first came up with this idea of thinking about silence as data,” Maryam recalls. “To take someone speaking and, from a poetic point of view, derive meaning from when they are not speaking felt incredibly impactful.” For Maryam, a writer who describes herself as formerly “technology-averse,” the fellowship transformed her understanding of data from something clinical into something poetic.</p><p>The project’s ethos mirrors the distinct talents of its creators. Jubril brought the technical rigor of a software engineer to a creative experiment, while Maryam provided the conceptual framework of speculative research and radical publishing. This April, Maryam will launch “<a href="https://www.youtube.com/watch?v=50qAiPnhSnU">The Library on the Lagoon</a>”, an immersive installation where passenger responses are transcribed by a typewriter that powers a trash collection wheel in the water.</p><figure><img alt="Two people sit on a small boat on the Lagos Lagoon, with equipment in front of them as they type. The scene is rendered in bright, color-shifted tones, giving it a stylized, surreal appearance." src="https://cdn-images-1.medium.com/max/982/1*DUEOvR33G0W4ynYvkVX_cQ.gif" /><figcaption>Participants navigate the Lagos Lagoon in a speculative performance where typed “future protests” generate energy to propel the boat. 
The gesture transforms writing into action, linking language, movement, and environment within a living, participatory system.</figcaption></figure><p>Through the medium of speculative sonics, The Future Protest suggests that the simple act of saying “no” to the current state of the world can create a pathway for navigating the future. It treats the Lagoon as a site where words can touch the water and do something in the water, turning the act of archival documentation into a living exercise in discovering alternative futures. It is an invitation to look at the waste, the silence, and the water, and see a space where the “glitch” of protest might finally let the ecosystem hear itself again.</p><p>The Processing Foundation Fellowship supports artists and creative technologists developing open-source creative tools and practices that expand access to creative technology. Through financial support, mentorship, and community partnerships, fellows create tools, artistic works, and research that contribute to a more open and accessible creative coding ecosystem.</p><ul><li>Learn more about the Fellowship → <a href="https://processingfoundation.org/fellowships">here</a></li><li>Support programs like this → <a href="https://processingfoundation.org/donate">here</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e00788e80b28" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Negotiating the Movement]]></title>
            <link>https://medium.com/@ProcessingOrg/negotiating-the-movement-9f402ed68a18?source=rss-42ab48286a4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/9f402ed68a18</guid>
            <dc:creator><![CDATA[Processing Foundation]]></dc:creator>
            <pubDate>Thu, 19 Mar 2026 11:51:11 GMT</pubDate>
            <atom:updated>2026-03-19T11:51:11.793Z</atom:updated>
            <content:encoded><![CDATA[<h4>p5.score and the Interplay of Algorithmic Choreography — Processing Foundation Fellowship Project 2025</h4><p>Written by Amy B. Woodman; Edited by Patt Vira and Xin Xin</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fcex5mzhTpKk%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dcex5mzhTpKk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fcex5mzhTpKk%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/2d49ae2340e61485e0a5492ef259ba51/href">https://medium.com/media/2d49ae2340e61485e0a5492ef259ba51/href</a></iframe><h4>Fellowship Project: p5.score</h4><ul><li>Artists: <a href="https://www.instagram.com/sicchio/">Kate Sicchio</a></li><li>Links: <a href="https://www.sicchio.com/p5score">Project website</a></li></ul><p>“I’m interested in this idea of a set of instructions, guidelines, or tasks for a starting point for improv or for generating movement,” says Kate Sicchio, the media artist and choreographer behind <a href="https://www.sicchio.com/p5score">p5.score</a>. “It’s about how the score can become this contract or negotiation point between movement, a performer, and a creator.” Developed during her 2025 Processing Foundation Fellowship, p5.score is a JavaScript library that harmonizes the physical world of dance with the digital logic of p5.js. By translating creative technology into a choreographic language, Kate has built a framework where visual patterns on a screen serve as a catalyst for physical improvisation, positioning the code as a new partner on the stage.</p><figure><img alt="Two dancers perform in a room with purple lighting, responding to projected lines and shapes on the walls. The digital visuals form a grid-like score that guides their improvised movements." 
src="https://cdn-images-1.medium.com/max/1024/1*vFTKbFkGrQbwkcgkgJ0_kg.png" /><figcaption>Dancers interpret a projected p5.score during the workshop, using the visual grid and shapes as prompts for improvisation. The digital score acts as a shared language between code and movement, guiding how performers navigate space and timing.</figcaption></figure><p>The core mission of p5.score is to provide choreographers with an entryway into creative coding. For many dancers, traditional programming can feel like a technical hurdle; Kate bridges this gap by integrating terminology from both dance and computing. Kate states that “simple things lead to complex things” when using p5.score, and the library’s API is intentionally pared down to remain accessible while also allowing for sophisticated structures as the dance evolves. The project draws inspiration from historical scores by choreographers like Trisha Brown, who used systems and annotations rather than rigid notation to indicate movement.</p><p>The library’s “magic sauce” is the <code>Dancer</code> class, which represents a performer within the digital score. To bring the dancer to life, users employ a specific constructor that defines the performer’s initial state:</p><ul><li><code>new Dancer(x, y, durations, positions, color, shape)</code>: This allows the creator to set the starting coordinates, a list of positions for the dancer to loop through, and the duration (in milliseconds) for each moment.</li><li>Movement Qualities: By changing the shape (radius size) or color, choreographers can represent different movement dynamics or qualities, such as a large shape indicating a lumbering flow or a small dot representing fast, sharp movements.</li></ul><p>The movement is then animated using two primary methods: <code>moves()</code>, which initializes the timed sequence, and <code>show()</code>, which renders the dancer on the canvas within the <code>draw()</code> loop familiar from p5.js. One of the most impactful features is the set of Stage Direction functions. 
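</p><p>As a rough sketch of the timing model just described, the looping behavior behind a dancer’s positions can be re-created in plain JavaScript. This is an illustrative, p5-free approximation based only on the constructor shape above; the internal logic and the <code>positionAt</code> helper are assumptions, not the library’s actual source.</p>

```javascript
// Hedged sketch: re-creating the timed position loop suggested by
// new Dancer(x, y, durations, positions, color, shape).
// Internals are assumed for illustration; this is not p5.score's code.
class Dancer {
  constructor(x, y, durations, positions, color, shape) {
    this.x = x;                 // starting coordinates
    this.y = y;
    this.durations = durations; // milliseconds to hold each position
    this.positions = positions; // list of positions to loop through
    this.color = color;         // movement quality: e.g. hue
    this.shape = shape;         // movement quality: radius size
  }

  // Which position is active after t milliseconds?
  // The sequence loops, so t wraps around the total duration.
  positionAt(t) {
    const total = this.durations.reduce((a, b) => a + b, 0);
    let rem = t % total;
    for (let i = 0; i < this.positions.length; i++) {
      if (rem < this.durations[i]) return this.positions[i];
      rem -= this.durations[i];
    }
    return this.positions[0];
  }
}

// A dancer who holds one spot for 1s, then leaps to another for 0.5s.
const d = new Dancer(200, 200,
  [1000, 500],
  [{ x: 200, y: 200 }, { x: 320, y: 120 }],
  'violet', 40);
```

<p>In an actual p5.js sketch, <code>moves()</code> would start this clock and <code>show()</code> would draw the dancer’s shape at the current position on every pass of <code>draw()</code>; here the rendering is omitted so the looping logic stands alone.</p><p>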
In response to community feedback that raw X and Y coordinates were “tricky,” Kate programmed familiar terms like <code>center()</code> (Center Stage), <code>ul()</code> (Upstage Left), and <code>dr()</code> (Downstage Right) to return coordinate objects, allowing artists to plot space using the language of the theatre.</p><figure><img alt="Two dancers interact with a projected floor surface displaying abstract shapes and colored circles, bending and moving in response to the visual cues." src="https://cdn-images-1.medium.com/max/888/1*5_u6VnyNIZNCwAzEuIjiNA.png" /><figcaption>Dancers engage with a floor-based p5.score projection, using colored shapes and spatial cues to guide their movement. The score becomes an interactive surface, inviting performers to respond physically to visual prompts for timing and position.</figcaption></figure><p>Underpinning the project is an open-source ethos rooted in Kate’s background in live coding. She views p5.score not as a static product, but as a tool for the community to “take, adapt, and change.” For Kate, this is a matter of empowerment: “Having agency when you move, having agency when you code.”</p><p>To ground the library in practice, Kate hosted a two-day workshop in Richmond, Virginia, with ten choreographers and creative coders. The process followed a call-and-response between analog and digital mediums. Participants began by drawing scores by hand using paper and markers, a familiar starting point that allowed them to realize that p5.js fundamentals like strokes, weights, and coordinates were simply digital extensions of drawing.</p><figure><img alt="A person sits on the floor drawing diagrams on sheets of paper, each showing simple shapes and arrows that represent movement paths or choreography." src="https://cdn-images-1.medium.com/max/793/1*PEyLz1XMxeTEzvvyz2jdrw.png" /><figcaption>Workshop participants begin by sketching choreographic scores by hand, using shapes, lines, and arrows to map movement. 
These analog drawings serve as a foundation for translating choreography into digital form using p5.score.</figcaption></figure><p>As they transitioned to the screen, participants realized that “when you code, it’s like choreography,” as you are telling an element what to do and how to do it. The workshop atmosphere was one of mutual exploration, where users experimented with adding images and theatrical scenery, transforming the scores into immersive backdrops.</p><p>The collaboration culminated with live dancers performing the generated scores. Two improvisers interpreted the digital patterns, which ranged from intimate floor projections that dancers followed closely to large-scale theatrical visuals. For Kate, witnessing the interpretation was a “sigh of relief,” proving that the tool could generate “beautiful movement” and that the dance community saw it as an innovative and valuable addition to their practice.</p><p>The Processing Foundation Fellowship supports artists and creative technologists developing open-source creative tools and practices that expand access to creative technology. Through financial support, mentorship, and community partnerships, fellows create tools, artistic works, and research that contribute to a more open and accessible creative coding ecosystem.</p><ul><li>Learn more about the Fellowship → <a href="https://processingfoundation.org/fellowships">here</a></li><li>Support programs like this → <a href="https://processingfoundation.org/donate">here</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9f402ed68a18" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Where Has the Lake Gone?]]></title>
            <link>https://medium.com/@ProcessingOrg/where-has-the-lake-gone-df42cb148874?source=rss-42ab48286a4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/df42cb148874</guid>
            <dc:creator><![CDATA[Processing Foundation]]></dc:creator>
            <pubDate>Wed, 18 Mar 2026 11:39:08 GMT</pubDate>
            <atom:updated>2026-03-18T11:39:08.909Z</atom:updated>
            <content:encoded><![CDATA[<h4>Mapping Mexico City’s Hidden Waters with DIY Technology — Processing Foundation Fellowship Project 2025</h4><p>Written by Amy B. Woodman; Edited by Patt Vira and Xin Xin</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FL0ydRLrd44E%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DL0ydRLrd44E&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FL0ydRLrd44E%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/1aeae57e32ab0accc90a17463eb2d706/href">https://medium.com/media/1aeae57e32ab0accc90a17463eb2d706/href</a></iframe><h4><strong>Fellowship Project: Where has the Lake Gone?</strong></h4><ul><li>Artists: <a href="https://www.instagram.com/medialabmx">Leonardo Aranda</a></li><li>Links: <a href="https://github.com/leonardoaranda1981/wherehasthelakegone">GitHub Repository</a></li></ul><p>“[Mexico City’s designers] have been building infrastructure to get rid of the water for 300 years and now we are building new infrastructure to try to bring water back into the city,” says media artist Leonardo Aranda.</p><p>Leonardo Aranda, director of the DIY creative space Medialabmx, developed the artistic research project <em>Where Has the Lake Gone?</em> during his 2025 Processing Foundation Fellowship. Through experimental cartography and open-source technology, the project investigates Mexico City from the perspective of its disappearing lakes, revealing how centuries of urban infrastructure have buried, redirected, and erased the waterways that once defined the region.</p><p>At the heart of Leonardo’s research is the concept of “slow violence.” Drawing from the work of Rob Nixon, Aranda describes this as a form of violence that occurs gradually and almost invisibly over time. 
In Mexico City, this slow violence is the 500-year history of draining the five-lake system upon which the city was originally built. While pre-Hispanic systems were based on coexistence with water, the colonial era introduced a tendency to control and eliminate nature, leading to the modern paradox of a city built on a lake that now suffers from both constant droughts and seasonal flooding.</p><figure><img alt="A translucent 3D point-cloud model of a small aqueduct ventilation tower with a roof and cylindrical base, rendered against a blue background as part of a digital mapping visualization of Mexico City’s hidden water infrastructure." src="https://cdn-images-1.medium.com/max/1024/1*-3Y2OY_6PVQoSiCmk-5HfQ.png" /><figcaption>A 3D point-cloud rendering of an aqueduct ventilation tower. Using spatial data and experimental cartographic methods, Leonardo Aranda visualizes fragments of Mexico City’s buried water infrastructure, revealing traces of the hydraulic systems that once shaped the valley.</figcaption></figure><p>Through his research in the National Archive of Water, Leonardo began to realize that the city’s history is “hidden in plain sight” within its modern layout. This shift in perspective allowed him to identify specific structures within bustling streets or common areas that the public often ignores.</p><p>Small towers scattered across neighborhoods, structures he had played around as a child, turned out to be aqueduct breathing towers, built to ventilate the colonial water system. In Chapultepec Park, the strange circular fields where people gather to play soccer are not merely decorative; they are the massive underground water tanks that help supply the entire city.</p><p>Even the city’s streets hold traces of its submerged past. Certain roads curve into organic Y-shaped patterns that seem oddly irregular in the rigid urban grid … until you realize they follow the paths of ancient rivers and canals. Architectural details offer similar clues. 
The elevated staircases on centuries-old churches begin to make sense when imagined from a different perspective: once, boats could pass directly beneath them. What appears today as an ordinary cityscape is, in fact, a layered map of waterways that once defined the valley.</p><p>To investigate what lies beneath the modern asphalt, Leonardo built a DIY Ground Penetration Radar (GPR) sensor. This device functions like an ultrasound, sending radio signals into the ground to create images of subsurface reflections.</p><figure><img alt="Close-up of a DIY ground-penetrating radar device with stacked circuit boards, an Arduino microcontroller, wiring, and custom printed electronics mounted inside an open case." src="https://cdn-images-1.medium.com/max/1024/1*nUAj0IowVf6CWBG6KY7JFA.png" /><figcaption>The DIY Ground Penetration Radar (GPR) sensor. Using open-source electronics, custom circuitry, and Arduino-based components, the device sends radio signals into the ground to detect buried structures beneath Mexico City’s streets.</figcaption></figure><p>But using scientific equipment in the middle of Mexico City presents its own challenges. Instead of arriving with conspicuous instruments, Leonardo mounted the sensor onto a bicycle disguised as a traditional tamale cart. Moving slowly through traffic and public plazas, he could scan the ground while blending into the everyday rhythms of the streets.</p><p>The method turned research into a kind of urban dérive. Riding through neighborhoods, Leonardo traced the invisible infrastructure beneath the city: the remnants of canals, buried waterways, and forgotten engineering systems that once shaped the valley’s relationship with water. 
The project became what he calls an “embodied investigation,” a speculative walk through history where his curiosity and DIY spirit worked together to reveal the city’s submerged past.</p><figure><img alt="A bicycle attached to a red metal trailer styled like a traditional tamale cart, designed to carry scanning equipment while moving through the city." src="https://cdn-images-1.medium.com/max/1000/1*IgE2JHty6R-h3lfW-6Qehw.jpeg" /><figcaption>A bicycle-mounted cart modeled after a traditional tamale vendor’s setup. The design allowed the ground-penetration radar sensor to move through Mexico City’s streets discreetly, blending into everyday urban life while scanning the ground below.</figcaption></figure><p>The project <em>Where Has the Lake Gone?</em> is deeply rooted in open-source philosophy, which Leonardo views as an ethical necessity for “opening the blackboxed” systems of urban infrastructure. “For me it was important that I would use this idea of how openness has to go all the way through the whole process and the methodologies of the project. It is not only something that you use [as] a way of interrogating the system but something that has to be also applied to the process and the tools that you use,” he explains. He recycled code and circuits from the community and plans to keep his own <a href="https://github.com/leonardoaranda1981/wherehasthelakegone">GitHub repository</a>, including Arduino and Processing code and circuit schematics, publicly available for others to reproduce.</p><p>Technically, Leonardo moved away from traditional digital cartography software to create a custom 3D environment in Processing. 
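</p><p>A minimal sketch of the kind of mapping such an environment relies on, projecting longitude/latitude pairs into scene coordinates, might look like the following (the bounding box, function names, and values here are illustrative, not taken from Leonardo’s repository):</p>

```javascript
// Illustrative bounding box roughly bracketing the Valley of Mexico.
const BOUNDS = { lonMin: -99.35, lonMax: -98.85, latMin: 19.15, latMax: 19.65 };

// Linear interpolation helper, in the spirit of Processing's map().
function mapRange(v, inMin, inMax, outMin, outMax) {
  return outMin + ((v - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Convert a longitude/latitude pair to x/y in a width-by-height scene.
// The z value can then encode which historical infrastructure layer a
// feature belongs to, stacking eras at different depths.
function geoToScene(lon, lat, width, height, depth = 0) {
  return {
    x: mapRange(lon, BOUNDS.lonMin, BOUNDS.lonMax, 0, width),
    // Latitude is flipped: north is up on a map, but screen y grows downward.
    y: mapRange(lat, BOUNDS.latMax, BOUNDS.latMin, 0, height),
    z: depth,
  };
}
```

<p>With a helper like this, each feature extracted from a shapefile can be placed at an on-screen x/y position while its z value encodes the layer of historical “sedimentation” it belongs to.</p><p>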
His process included:</p><ul><li>Data Translation: He used standard shapefiles to extract geographic coordinates, which were then stored in PShape objects for efficient rendering.</li><li>Coordinate Mapping: Leonardo developed custom functions to translate real-world geographic coordinates into the 3D coordinate system within Processing.</li><li>Historical Sedimentation: The resulting visualization represents the city as a “stack of infrastructure layers,” using different depth values to visually map the city’s historical evolution over time.</li></ul><figure><img alt="A digital visualization combining a point-cloud rendering of a city street with colorful horizontal data bands below, representing ground-penetration radar signals scanning beneath the surface" src="https://cdn-images-1.medium.com/max/1024/1*K5PTWZAP4iSjRkH3aMI3PQ.png" /><figcaption>Visualization generated from ground-penetration radar data showing a street scene layered with subsurface signal readings. The image combines spatial mapping and sensor output to reveal hidden structures and infrastructure beneath the city’s surface.</figcaption></figure><p>Ultimately, Leonardo views the project not as a finished artwork but as the beginning of a methodology intended to empower artists, activists, and communities organizing around water. By opening the tools, code, and circuitry behind the project, he hopes others will build upon it, using creative technology to question the infrastructures that shape everyday life.</p><p>“There is a colonial tendency to try to control nature. This project asks how we might restore our coexistence with water,” says Leonardo. What began as a historical excavation becomes something more speculative: a way of imagining different futures for the city. By revealing the hidden waterways beneath streets and plazas, the project reminds us that Mexico City is still, in many ways, a lakebed. The water has not disappeared; it has only been buried. 
His final goal is to move past 500 years of environmental control and interrogate the systems that define modern life, eventually allowing the city to coexist with water once again.</p><figure><img alt="A layered digital map visualization of Mexico City combining satellite imagery, road networks, contour lines, and colored overlays representing historical waterways and infrastructure." src="https://cdn-images-1.medium.com/max/1024/1*uPJzexxZAdT8yJJK5t5KKw.png" /><figcaption>A layered cartographic visualization combining geographic data, infrastructure networks, and historical water systems. The map reveals how present-day streets and urban structures overlay the valley’s former lakes, rivers, and canals.</figcaption></figure><p>The Processing Foundation Fellowship supports artists and creative technologists developing open-source creative tools and practices that expand access to creative technology. Through financial support, mentorship, and community partnerships, fellows create tools, artistic works, and research that contribute to a more open and accessible creative coding ecosystem.</p><ul><li>Learn more about the Fellowship → <a href="https://processingfoundation.org/fellowships">here</a></li><li>Support programs like this → <a href="https://processingfoundation.org/donate">here</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=df42cb148874" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Body as Data]]></title>
            <link>https://medium.com/@ProcessingOrg/body-as-data-df9526ef4107?source=rss-42ab48286a4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/df9526ef4107</guid>
            <dc:creator><![CDATA[Processing Foundation]]></dc:creator>
            <pubDate>Tue, 17 Mar 2026 13:02:51 GMT</pubDate>
            <atom:updated>2026-03-17T13:02:51.594Z</atom:updated>
            <content:encoded><![CDATA[<h4>Projection Mapping with the Luna Library — Processing Foundation Fellowship Project 2025</h4><p>Written by Amy B. Woodman; Edited by Patt Vira and Xin Xin</p><iframe src="https://cdn.embedly.com/widgets/media.html?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DK6crurddeJY&amp;type=text%2Fhtml&amp;schema=youtube&amp;display_name=YouTube&amp;src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FK6crurddeJY%3Ffeature%3Doembed" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/438561e1005797efdcdbe47fcbe09087/href">https://medium.com/media/438561e1005797efdcdbe47fcbe09087/href</a></iframe><h4>Fellowship Project: Body as Data</h4><ul><li>Artist: <a href="https://www.instagram.com/danielcorbani/">Daniel Corbani</a></li><li>Luna Project: <a href="https://github.com/danielcorbani/LunaMapping">GitHub repository</a> or the <a href="https://luna.art.br/">website</a></li></ul><p>“Art doesn’t live in the tools; it lives in the experience we create,” says Daniel Corbani, author of the Luna Video Mapping Library.</p><p>Daniel Corbani, an engineer who became a visual artist and creative coder, dedicated his 2025 Processing Foundation Fellowship to a project titled “Body as Data.” This project expands his <a href="https://github.com/danielcorbani/LunaMapping">Luna Video Mapping library</a>, seeking to bridge the gap between technical engineering and the ephemeral nature of live performance. For Daniel, technology is merely a tool, much like a dancer’s flexibility, and the true art exists in the shared experience between performers and their audience.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/368/1*pL6FI_vNVw69m9Wrq2wB1Q.gif" /><figcaption>Paola Higa interacts with Daniel Corbani’s Luna Library projection system, where virtual guitar strings appear as vertical beams of light. 
As Paola moves through them, the strings play musical notes.</figcaption></figure><p>The fellowship project, “Body as Data,” focuses on interactive and embodied projection mapping. “The secret here is to use projection mapping to match the position of the generative content to the source… as best as possible to create the illusion that the physical body is touching / creating / interacting with the digital image,” says Daniel. His artistic expression is best exemplified in his collaboration with dancer Paola Higa, whose work involves charcoal drawings and mandalas created through full-body movement.</p><p>Luna facilitates this three-stage narrative journey in Paola’s performance:</p><ol><li>The Illusion: Using pre-recorded video to create the sense of a digital double</li><li>The Merge: Blurring the lines between the physical performer and digital content</li><li>The Interaction: A “full merge” where the body’s movement creates images in real time, such as a fluid or smoke simulation that reacts to the performer’s presence.</li></ol><p>This approach transforms the human body into a source of data, using an infrared camera and background subtraction to turn performers into “white blobs” or masks. These masks then interact with digital objects, creating the illusion that the physical body is directly touching or creating the digital image.</p><p>Technically, Luna is designed to be both functional software for non-coders and a flexible platform for developers. 
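</p><p>The background-subtraction step described above can be sketched as simple per-pixel thresholding against a reference frame of the empty stage (a generic illustration of the technique, not Luna’s actual code):</p>

```javascript
// Reduce a camera frame to a binary performer mask by comparing each
// pixel against a reference frame of the empty stage. Frames here are
// flat arrays of grayscale values in 0-255.
function backgroundMask(frame, background, threshold = 30) {
  return frame.map((pixel, i) =>
    // Pixels that differ enough from the empty background are treated
    // as the performer and become white (255); the rest become black (0).
    Math.abs(pixel - background[i]) > threshold ? 255 : 0
  );
}
```

<p>For example, backgroundMask([10, 200, 12], [12, 40, 11]) yields [0, 255, 0]: only the pixel that differs strongly from the background survives as part of the “white blob” mask that the digital content reacts to.</p><p>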
The architecture is built upon several core classes:</p><ul><li>Project: The central manager of the entire system</li><li>Screen: A renderer that handles the output to external displays</li><li>Scene: Manages specific moments in a performance, coordinating multiple media types</li><li>MediaItem: Manages individual videos or images and contains the complex homography math required for mapping visuals onto physical surfaces.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FoKKJKMS2x8P-QVrx-RH9A.png" /><figcaption>Luna’s projection-mapping interface allows artists to arrange and transform media in real time. The software can treat custom generative code as if it were standard video, enabling artists to integrate live simulations and visuals.</figcaption></figure><p>A key technical breakthrough Daniel achieved during the fellowship was the implementation of Java interfaces. These act as a “bridge,” allowing Luna to recognize custom generative code as if it were a standard video file. This enables artists to use libraries like PixelFlow for real-time physics simulations while still utilizing Luna’s mapping and scene-management tools.</p><p>At the heart of Daniel’s work is a profound commitment to the philosophy of open source. He says Luna is open source largely because “I realized that so much of what I’ve been able to do exists thanks to others sharing their work with me,” whether it’s Processing itself, the libraries he uses, or the examples people publish in forums and on their websites. He continues, “It only felt fair to give something back by sharing my own work in the same spirit.” He views open-source software as both a technical choice and a primary method for redistributing power to diminish inequality. 
He recognizes the place of big tech, yet imagines a more communal ecosystem where knowledge, access, and even some economic opportunity circulate freely, allowing creative tools and ideas to reach far beyond their original makers.</p><p>Daniel acknowledges that his own growth was made possible by the generosity of the Processing community and shared libraries. Consequently, he intends Luna to be an open platform for those who cannot afford expensive commercial software licenses, particularly artists from marginalized communities. By keeping the code accessible, he aims to share economic power and knowledge, allowing performers in theater and dance to integrate complex visuals without a high financial barrier.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XCqtiN8e2_IwEk7DkYZkEg.png" /><figcaption>Daniel leads a Luna workshop for creative coders in São Paulo, Brazil. The workshops reflect Daniel’s commitment to sharing knowledge and expanding access to creative coding tools within the arts community.</figcaption></figure><p>Daniel’s work has generated significant excitement within the Brazilian arts community, where cultural workers are eager for free tools to incorporate coding and projection into their work. Looking forward, Daniel sees Luna as a way to facilitate remote collaboration: “What is most special to me is that with Luna I can reach people far… they can run Luna there and I can send my work and help them from a distance because now we have a great tool for that.”</p><p>The Processing Foundation Fellowship supports artists and creative technologists developing open-source creative tools and practices that expand access to creative technology. 
Through financial support, mentorship, and community partnerships, fellows create tools, artistic works, and research that contribute to a more open and accessible creative coding ecosystem.</p><ul><li>Learn more about the Fellowship → <a href="https://processingfoundation.org/fellowships">here</a></li><li>Support programs like this → <a href="https://processingfoundation.org/donate">here</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=df9526ef4107" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Sound of the Day]]></title>
            <link>https://medium.com/@ProcessingOrg/the-sound-of-the-day-c5a112054210?source=rss-42ab48286a4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/c5a112054210</guid>
            <dc:creator><![CDATA[Processing Foundation]]></dc:creator>
            <pubDate>Mon, 16 Mar 2026 17:44:50 GMT</pubDate>
            <atom:updated>2026-03-16T17:44:50.242Z</atom:updated>
            <content:encoded><![CDATA[<h4><em>Building the Network Gong Ensemble Archive</em> — Processing Foundation Fellowship Project 2025</h4><p>Written by Amy B. Woodman; Edited by Patt Vira and Xin Xin</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FqUKK0dlQw5k%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DqUKK0dlQw5k&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FqUKK0dlQw5k%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/9b460587521829b8e1154060a40779b6/href">https://medium.com/media/9b460587521829b8e1154060a40779b6/href</a></iframe><h4>Fellowship Project: Network Gong Ensemble Archive</h4><ul><li>Artists: <a href="https://www.instagram.com/elekhlekha/"><em>elekhlekha อีเหละเขละขละ</em></a></li><li>Link: <a href="https://networkgongensemblearchive.online/">https://networkgongensemblearchive.online/</a></li></ul><p>“We thought people would be shy when we asked them to play their gong,” Keng says, “but once they started, they didn’t want to stop. They were banging on their gongs; they listened to each other and responded to one another’s sounds.” No one wanted to stop and the show lasted much longer than expected.</p><p>The concept of the <a href="https://networkgongensemblearchive.online/">Network Gong Ensemble Archive (NGEA)</a> emerged from a question: could a non-institutional effort create a living archive of Southeast Asian oral histories and musical traditions, especially for marginalized and diasporic communities beyond national state identities? The project responds to the ongoing risk of cultural loss caused by socio-political and environmental displacement across the region. 
The artists’ vision was to create not only an archive, but also a digital space to nurture the human connection of performance, where the act of making noise together creates spiritual and social bonds without the need for verbal communication. In interpreting this year’s Processing Foundation Fellowship theme, the project explores the tension between data, which is often measurable, quantifiable, and shaped by outsider documentation, and storytelling, which in Southeast Asian oral and aural traditions remains expansive, contextual, and relational. NGEA seeks to hold these together, treating sound and oral history as data without separating them from the cultural contexts that give them meaning. Through the medium of sound archiving, this project tells the story of collective resilience.</p><figure><img alt="Two performers of elekhlekha stand behind laptops and electronic music equipment on a dimly lit stage while an audience watches in silhouette. Behind them, a large curved projection screen displays swirling, twisted text fragments and lines of computer code, creating a dynamic visual backdrop for the live performance." src="https://cdn-images-1.medium.com/max/1024/1*_y-O6OUEnt91Uel-iuaMfQ.jpeg" /><figcaption>elekhlekha อีเหละเขละขละ performs at nodes: I organized by Processing Foundation x Artifice x UAAD with support from NYU Tandon</figcaption></figure><p>The duo behind NGEA is Keng (Kengchakaj Kengkarnka) and Fame (Nitcha Tothong), who together are <em>elekhlekha อีเหละเขละขละ</em>, a Thai word that means “chaos” or “nondirection.” The name was chosen because it portrays an ethos that breaks free from dominant systems and the Western music lens. The idea for the project mirrors the distinct talents both Keng and Fame bring to their art. For Keng, who is a jazz pianist and electronics experimentalist, the idea for the project grew out of a line of questioning. 
After completing his college studies in music in Thailand, he noticed “that my head and my ear were geared towards Western music so much that I heard Thai music as out of tune.” He recalls researching how Thai music is often unfairly judged by Western musical standards and even mistranslated in music theory books. “How can I unlearn that?” Keng asks. For Fame, the project is a meditation on memory. Fame is a creative technologist and interdisciplinary artist who built the online experience of NGEA. Inspired by open source technology as a tool for transparency and democratic access to knowledge, she wanted to make the archive less of a museum-like tomb and more of a living, evolving, and communal space that encourages new iterations.</p><p>The Archive is a digital repository of living and breathing sound with particular attention to gong practices shared across the region. It is a digital space that compiles and captures the nuance and embodied sounds that reflect Southeast Asian tradition. When Keng and Fame talk about their collaboration with other musicians for this project, they speak with reverence, holding the heavy yet effervescent weight of ancestral knowledge. While documenting Southeast Asian sounds, Keng reminds us that these sound traditions don’t just differ from ethnic group to ethnic group, but sometimes from village to village. Keng and Fame worked with three sound contributors, who talk about how time and loss have also played a role in the evolving function and nuance of these sounds. Keng reflects on a story that was shared with him by an_outskirt, a duo from the Philippines. Back in the Philippines, an_outskirt had met with a musician who played a traditional flute. However, when they arrived, the musician said he had lost his flute. He directed them to come back in an hour so he could make a new one. He quickly made a new flute and demonstrated its tuning system, as if the sound itself had been pulled from the ground. 
Measuring the bamboo’s circumference with a blade of grass, he shaped the instrument’s holes directly from the land that produced it. “Sometimes sounds come from the ground, and what you are hearing is the sound of that day,” says Keng.</p><p>The first entries in the Archive bring together a geographically dispersed yet culturally connected group of Southeast Asian musicians. These include:</p><ul><li>an_outskirt, currently based in Brooklyn and originally from the Philippines;</li><li>High Alter (a.k.a. Lynn Nandar Htoo), based in Cambodia with roots in Myanmar;</li><li>and Sorawat Ruangamporn “Kru Amp,” now in New Jersey and originally from Thailand.</li></ul><p>Each artist contributed recordings of their gong instruments and created a hand-drawn digital avatar of their gong, forms that were later translated into audio-visual p5.js sketches. Together, these contributions form the foundation of the interactive Network Gong Ensemble Archive website.</p><figure><img alt="Three side-by-side digital visualizations on a black background. Each panel displays a circular diagram composed of many small dots arranged in rings. In the first, a bright green line moves through the circle connecting dots that represent the timing of the gong sound pattern. In the second, multiple curved lines spiral outward from the center toward small circles around the edge. In the third, translucent petal-like shapes form a ring while a white line sweeps outward like a clock hand. Small" src="https://cdn-images-1.medium.com/max/1024/1*XT9PAJDwB5k0ckACVyz5Hg.png" /><figcaption>Screenshots of Kru Amp, an_outskirt, and High Alter’s p5.js sketches, respectively, based on their hand-drawn representations of their gongs.</figcaption></figure><p>While sharing their gong sounds, they also shared their songs and the stories behind their instruments. Sorawat uses the Khong Wong Yai (large gong circle) as his main instrument and has played it for over 30 years. 
“The more I learned, the more I loved it, and I realized that Thai music has a unique charm that only reveals itself through study,” Sorawat remembers. When he moved to New York, he had to cut his gong into three parts and ship it overseas. Once it was reassembled, he noticed it sounded slightly different, but he accepted that as part of its new story. He teaches Thai music today and reminisces about how certain songs that were once tied to a specific activity are now repurposed for this new space with new people, forming a hybridity between the old and new. What emerges from that friction is a third thing: something new born from displacement, memory, and adaptation.</p><figure><img alt="Sorawat Ruangamporn “Kru Amp” sits cross-legged while playing a circular arrangement of small bronze gongs with his padded mallets. He focuses on the instrument as he strikes different gongs around the ring, with additional percussion instruments visible behind him." src="https://cdn-images-1.medium.com/max/1024/1*2Zlx9kN8OCInlAvwEPHqmA.png" /><figcaption>Sorawat Ruangamporn “Kru Amp,” based in New Jersey and originally from Thailand, performs on the <strong>khong wong yai</strong> (large gong circle), the instrument he has studied for more than 30 years.</figcaption></figure><p>High Alter (Lynn Nandar Htoo) contributed Khmer sound cultures during a tumultuous time in their life. Theirs is a story of personal survival as they fled war-torn Myanmar. They wrote, “Tradition persists not when life is easy, but when it becomes essential. The gong sound might be the only available peace on the most chaotic day.” Keng and Fame reflect on how sounds and songs stem from a place called home, and when forced to move by necessity, music has a healing function. Though High Alter is from Myanmar and Sorawat is from Thailand, the sounds they contribute come from essentially the same instrument. Keng believes this is a crucial story to be told. 
Land disputes and forced displacement are based on “imaginary lines that the colonizers made.” We are all actually more alike than we are different, and our sound cultures prove that.</p><p>elekhlekha immersed themselves in these sound cultures as they transposed oral histories into digital simulations of sound. The p5.js sketches of each gong are made with care. Fame says, “If you click on the smaller dot you can get an overtone of the two gong sounds together. We intentionally made the two sounds vary slightly in timing so one is hit before the other, reflecting what human gong expressions sound like.” Keng adds, “This is because the overtones are not fully aligned and it creates a beating noise and a sound that you associate with the gongs.”</p><p>Once you enter the NGEA site, you can explore sound stories and configure your own gong song. The “Ensemble Mode” feature allows anyone with an iPhone or laptop to join a collective, live gong ensemble with other users. First, you are prompted to choose a gong from the library, then to configure your gong song by clicking on the chimes. The song begins looping and projecting around each user’s circle avatar.</p><figure><img alt="Interactive interface from the Network Gong Ensemble Archive showing a circular visualization of a gong pattern. Lines radiate between nodes representing chimes, while a green path traces the rhythm being played. The interface allows users to configure gong sequences and join a shared online ensemble where multiple players’ sounds overlap in real time." src="https://cdn-images-1.medium.com/max/1024/1*UnHjlFQOkj1h2q7eMglUdQ.gif" /><figcaption>Interface view of the Network Gong Ensemble Archive. Users select gongs and compose looping “gong songs” by activating chimes in the circular interface.</figcaption></figure><p>When someone enters your projection sphere by moving around the space with their arrow keys, you can hear their gong song intermingling with yours. 
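</p><p>The beating Keng describes is a general acoustic effect: two tones at slightly different frequencies sum into a sound whose loudness pulses at the difference between them. A small sketch (the frequencies are invented for illustration, not measurements of the archived gongs):</p>

```javascript
// Two gong overtones at slightly different frequencies produce a slow
// amplitude pulse ("beating") at the difference between the frequencies.
function beatFrequency(f1, f2) {
  return Math.abs(f1 - f2);
}

// Sample of the two summed sine tones at time t (in seconds).
function gongPair(f1, f2, t) {
  return Math.sin(2 * Math.PI * f1 * t) + Math.sin(2 * Math.PI * f2 * t);
}
```

<p>Overtones at, say, 220 Hz and 223 Hz would beat three times per second (beatFrequency(220, 223) is 3), which is why leaving the two gong sounds slightly misaligned produces the shimmering pulse associated with gong ensembles.</p><p>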
Participants are encouraged to roam around, intersecting, colliding, and harmonizing with each other in a cacophony of beating chimes and gongs. Can you hear the echoes of your ancestors or, perhaps simply, the sound of the day?</p><p>Fame recounts how “open source tools are essential for our communities because they make creative work accessible to people who can’t afford expensive software and give us the agency to become not just users or consumers, but creators and contributors.” Through the Processing Foundation Fellowship, Keng and Fame learned to see their collaborators as part of a larger ecosystem, and they felt empowered to share knowledge about these sound cultures without having to be “masters” of the work, but rather to share things little by little via long-term relationships. About 300 people have experienced NGEA as of today.</p><p>Open source tools are a labor of care, as Fame puts it. Keng says, “We also want to invite a larger community of people who are interested in Southeast Asian sound to this project. Therefore it’s a living document and helps change the perspective of how music can sound.” Their hope is that people continue to contribute and experiment with The Network Gong Ensemble Archive.</p><p>The process of making this project has also led to personal transformation for the artists. Keng says that now “everything is in tune.” He can’t pinpoint exactly how, but this project has changed his piano playing. In fact, it’s a “completely different” approach to the sounds he chooses. In December 2025, he performed at the <a href="https://youtu.be/ipYh7BzoAPg?si=56LeHkyVPaBAe4zM">Asian Art Initiative Festival</a>, and the panel noticed how different this performance sounded from the album; this performance was an evolution. 
“I finally unlearned my ear,” Keng says, “anything can be in tune now.”</p><p>The Processing Foundation Fellowship supports artists and creative technologists developing open-source creative tools and practices that expand access to creative technology. Through financial support, mentorship, and community partnerships, fellows create tools, artistic works, and research that contribute to a more open and accessible creative coding ecosystem.</p><ul><li>Learn more about the Fellowship → <a href="https://processingfoundation.org/fellowships">here</a></li><li>Support programs like this → <a href="https://processingfoundation.org/donate">here</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c5a112054210" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[p5.js 2.1 and 2.2: Expanding Graphics Avenues with p5.strands improvements and WebGPU]]></title>
            <link>https://medium.com/@ProcessingOrg/p5-js-2-1-and-2-2-expanding-graphics-avenues-with-p5-strands-improvements-and-webgpu-9771d40c8b1d?source=rss-42ab48286a4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/9771d40c8b1d</guid>
            <dc:creator><![CDATA[Processing Foundation]]></dc:creator>
            <pubDate>Mon, 09 Mar 2026 11:17:31 GMT</pubDate>
            <atom:updated>2026-03-09T15:20:00.403Z</atom:updated>
            <content:encoded><![CDATA[<p>We recently released p5.js 2.1 and 2.2, continuing the work that began with the release of p5.js 2.0. —<em> Written by Kit Kuksenok and Amy Woodman</em></p><p>Newly released features are typically experimental and open for community testing and feedback. You can find full release notes in the links below:</p><ul><li><a href="https://github.com/processing/p5.js/releases/tag/v2.1.1">p5.js 2.1</a>: TypeScript integration, add-on Events API, and color contrast checking for web accessibility. Introduced branching (if/else) and looping (for) in p5.strands. This release was co-authored by 31 people!</li><li>p5.js 2.2: Introduced WebGPU-based renderer, as well as in <a href="https://github.com/processing/p5.js/releases/tag/v2.2.1">2.2.1</a>: a simpler, flatter API for p5.strands and in <a href="https://github.com/processing/p5.js/releases/tag/v2.2.2">2.2.2</a>: performance improvements and support for “millis()” inside p5.strands</li></ul><p>These releases build off of the major milestone release of p5.js 2.0 last year, which laid new foundations for the library’s future. Since then, versions 2.1 and 2.2 have focused on stabilizing and extending those foundations. The features and updates in all 2.x releases are based on community feedback since 2023, and as development still continues, we would love for more p5.js artists, learners, teachers, and creators to get involved in shaping the software we all use!</p><p>This post is an overview of the latest releases, and also a moment to acknowledge the people who sustain this library. p5.js has been going strong for over a decade, and like any long-lived open-source software project, its survival is communal. Projects like p5.js persist because of the people who dedicate their time to review pull requests, fix regressions, improve documentation, test edge cases, and respond to issues that most users will never see. 
Much of this work is quiet and ongoing.</p><h3><strong>What’s New in 2.1 and 2.2</strong></h3><p>Versions 2.1 and 2.2 build directly on the work introduced in 2.0. These releases emphasize stability fixes and infrastructure improvements that make future development possible. Some highlights include:</p><ul><li>New <a href="https://github.com/processing/p5.js-addon-template">add-on/template</a> support for ecosystem contributors. This is a starter repository designed to help developers quickly scaffold and publish p5.js add-ons and extensions.</li><li>Advanced graphics work (p5.strands and WebGPU): 2.2 continues refinement and stabilization of p5.strands, the shader programming API introduced in p5.js 2.0, to make it easier to get started with programming visuals using the GPU. Additionally, the experimental WebGPU-based renderer, in development for about half a year, is available directly from the p5.js 2.2 build onwards. WebGL shaders written in p5.strands will ultimately work in WebGPU as well, and support for compute shaders is <a href="https://editor.p5js.org/davepagurek/sketches/MVc84tjJw">in active development</a> — <a href="https://discord.gg/s2P3j792Eq">learn more </a>on the p5.js Discord if you are interested in helping shape this feature!</li><li>Accessibility improvements and maintenance: added accessibility tooling to the core library, including contrast checks, with more utilities planned in future minor releases to help sketches better align with web accessibility standards (WCAG, EAA), alongside ongoing bug fixes, performance improvements, and internal codebase cleanups.</li></ul><p>For full technical details, we encourage you to read the complete release notes here: <a href="https://github.com/processing/p5.js/releases/tag/v2.1.1">2.1</a>, <a href="https://github.com/processing/p5.js/releases/tag/v2.2.1">2.2.1</a>, and <a href="https://github.com/processing/p5.js/releases/tag/v2.2.2">2.2.2</a>!</p><h3>2.x Timeline: We are Here</h3><p>If you are a 
p5.js user: this is also a reminder that p5.js 2.x will become the default version in the editor in July 2026.</p><p>If you maintain sketches, libraries, or teaching materials, now is a good time to review the compatibility guide and make any needed updates: <a href="https://github.com/processing/p5.js-compatibility">https://github.com/processing/p5.js-compatibility</a></p><p>The goal of the 2.x transition is long-term sustainability, making sure p5.js can continue to evolve without accumulating unmanageable technical debt.</p><p>Learn more about some of the major changes here:</p><ul><li><a href="https://timrodenbroeker.de/kit-kuksenok-on-p5-js-2-0/">Interview</a> with Kit Kuksenok and Tim Rodenbröker on the release of p5.js 2.0</li><li><a href="https://www.youtube.com/watch?v=E2OE-FaMkag">Code-along</a> to learn how to use shaders with p5.strands in p5.js 2.0</li><li><a href="https://www.youtube.com/watch?v=1KqQeqZ3R9Y">Learn more</a> about variable fonts, asynchronous loading, text to contours, and 3D text extrusion</li></ul><p>If you have questions, please get in touch via the <a href="https://discord.gg/6D7BfJn95v">p5.js Discord</a>.</p><h3>Friendlier Shaders with p5.strands</h3><p>The recent releases have significantly advanced p5.strands, a shader programming API introduced in p5.js 2.0. It makes it possible to create complex, high-performance graphics using familiar JavaScript-style code. Strands translates that code into GLSL behind the scenes, allowing sketches to run dramatically faster than equivalent JavaScript-only implementations, especially as sketches scale up.</p><p><a href="https://beta.p5js.org/contribute/p5strands/"><strong>What does p5.strands make possible?</strong></a></p><p>First, consider <a href="https://editor.p5js.org/davepagurek/sketches/s9l80gISI">this sketch</a>, which uses JavaScript loops to draw a cube of cubes. It is only 40 lines, but with many more cubes it slows down dramatically. 
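To make the loop-based approach concrete, here is a minimal sketch in the same spirit as the one linked above; the helper name latticePositions and the exact sizes are ours for illustration, not taken from the linked code:

```javascript
// A cube of cubes drawn with plain JavaScript loops, in the spirit of the
// linked sketch (gridSize plays the role of the "15" mentioned above).
const gridSize = 15;
const spacing = 30;

// Pure helper: center coordinates for an n x n x n lattice centered on the origin.
function latticePositions(n, step) {
  const positions = [];
  const offset = ((n - 1) * step) / 2;
  for (let x = 0; x < n; x++) {
    for (let y = 0; y < n; y++) {
      for (let z = 0; z < n; z++) {
        positions.push([x * step - offset, y * step - offset, z * step - offset]);
      }
    }
  }
  return positions;
}

// p5.js sketch functions (these only run in a browser with p5.js loaded).
function setup() {
  createCanvas(400, 400, WEBGL);
}

function draw() {
  background(220);
  // One box per lattice point: gridSize^3 draw calls per frame, which is
  // why this CPU-driven version slows down as gridSize grows.
  for (const [x, y, z] of latticePositions(gridSize, spacing)) {
    push();
    translate(x, y, z);
    box(spacing * 0.5);
    pop();
  }
}
```

Raising gridSize from 15 (3,375 boxes) to 30 (27,000 boxes) multiplies the per-frame draw calls by eight, which is the slowdown described above.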
If it is running smoothly, try changing every “15” to a larger number, such as “30.” As the scene grows, the sketch’s performance suffers noticeably.</p><p>The purpose of a shader is to use parallel, GPU-based computation to speed this up. Instead of for loops, here is <a href="https://editor.p5js.org/davepagurek/sketches/5iSuJWHIN">a second version</a> of the same sketch using GLSL. It is 200 lines of code and, if you are not familiar with GLSL, may be difficult to read. Look for the “15” here, too, and try changing it to a larger number, like “30” or beyond. The shader-based animation remains smooth, showing the performance benefits of GPU rendering.</p><p>Finally, <a href="https://editor.p5js.org/davepagurek/sketches/UfP9NTFYQ">the p5.strands version of this sketch</a> combines a more accessible, readable style of JavaScript with the performance of GLSL.</p><p>With the introduction of the WebGPU-based renderer, p5.strands sketches can seamlessly use either WEBGL or WEBGPU. <a href="https://editor.p5js.org/ksen0/sketches/q5eKBA-OT">Here</a> is the same example as above, but using the WebGPU-based renderer. The only changes needed were to use async/await with createCanvas(…), and to import both the main library and the p5.webgpu.js add-on:</p><pre>&lt;script src=&quot;https://cdn.jsdelivr.net/npm/p5@2.2.2/lib/p5.js&quot;&gt;&lt;/script&gt;</pre><pre>&lt;script src=&quot;https://cdn.jsdelivr.net/npm/p5@2.2.2/lib/p5.webgpu.js&quot;&gt;&lt;/script&gt;</pre><p><em>Special thanks to </em><a href="https://www.davepagurek.com/"><em>@davepagurek</em></a><em> for creating the sketches throughout this post.</em></p><p>As development continues in 2.x, community feedback and experimentation play a central role in shaping this beginner-friendly approach to shader programming. The changes introduced in 2.1.1, 2.2.1, and 2.2.2 reflect contributions not only of code, but of ideas about how the API should develop. 
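In sketch code, that switch can be sketched roughly as follows; this is a hedged minimal example, assuming the p5.webgpu.js add-on exposes a WEBGPU renderer constant (the only change confirmed above is that createCanvas must be awaited):

```javascript
// Hypothetical minimal sketch of the WebGPU switch described above.
// Assumption: the p5.webgpu.js add-on provides the WEBGPU renderer constant.
async function setup() {
  // WebGPU initializes asynchronously, so createCanvas is now awaited.
  await createCanvas(400, 400, WEBGPU);
}

function draw() {
  background(0);
  rotateY(millis() * 0.001);
  box(100);
}
```

The rest of the sketch body is unchanged from its WEBGL counterpart, which is the point of the seamless switch.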
As the WebGPU-based renderer stabilizes and improves, p5.strands shaders written for WebGL will also work immediately when switched to WebGPU.</p><p>If you’re curious to explore, test new capabilities, or help guide the future of p5.strands or the WebGPU-based renderer, we’d love for you to join the conversation on Discord (in <a href="https://discord.gg/2MHKVeV2Dr">#p5strands</a> or <a href="https://discord.gg/nmS4v2qw4K">#webgpu</a>) and get involved!</p><h3>Acknowledging Contributors and Stewards of p5.js</h3><p>p5.js is built and maintained by a global community of contributors and stewards. Over its 10+ year lifespan, more than 800 people have contributed to p5.js, tracked using the <a href="https://github.com/all-contributors/allcontributors.org">all-contributors specification</a>.</p><p>Versions 2.1 and 2.2 include code contributions, reviews, testing, and project <a href="https://p5js.org/contribute/steward_guidelines/">stewardship</a> from ~<strong>50</strong> people. We’d like to give a big thank you to:</p><p>@Aayushdev18, @Abhayaj247, @acgillette, @ayushman1210, @calebfoss, @davepagurek, @dpanshug, @error-four-o-four, @FerrinThreatt, @Geethegreat, @GregStanton, @harishbit, @HughJacks, @IIITM-Jay, @Iron-56, @ksen0, @limzykenneth, @lirenjie95, @lukeplowden, @LalitNarayanYadav, @madhav2348, @nalindalal, @nbogie, @nickmcintyre, @nickswalker, @nking07049925, @Nitin2332, @nakednous, @perminder-17, @Piyushrathoree, @reshma045, @sophyphile, @SoundOfScooting, @tychedelia, @VANSH3104, @shawdm, @pearmini, @Vaivaswat2244, @shivasankaran18, @awood0727, @vietnuyen2358, @shuklaaryan367-byte, @rakesh2005, @aashu2006, @jjnawaaz, @LuLaValva, @dontwanttothink, @Anshumancanrock, @saurabh24thakur</p><p>While this reflects code contributions, it doesn’t yet capture the full range of work that sustains the project, including documentation, education, design, and community care. 
We’re actively working toward better ways to recognize these contributions because recognizing this labor matters. Below are reflections from some of these recent contributors:</p><p><em>“I learnt how to draw from coding, I am very bad at drawing and artistic things on hand and paper, but this project gave me opportunity to explore another aspect of my life.” </em>— Nalin, he/him @nalindalal (Contributor)</p><p><em>“Being a contributor has been such a meaningful experience for me. I’m really proud of helping organize and support community discussions that brought people together and sparked new ideas. Along the way, I’ve learned that even small efforts — a bit of time, a helpful comment, a shared resource — can make a real difference. For anyone thinking about getting involved, I’d say just start small: join a chat, ask questions, and offer help where you can. You’ll be surprised how quickly you start feeling part of something bigger.”</em> — Nitin Rajpoot @Nitin2332 (Contributor)</p><p><em>“Contributing to p5.js has been a deeply rewarding learning experience. I worked primarily on extending and improving noise functionality in p5.strands, including GLSL-based noise, fractal noise, vec2/vec3 support, and performance-related improvements, along with documentation and tooling fixes. Through this process, I learned how to work within a large, community-driven codebase, writing maintainable code, responding to reviews, and thinking carefully about API design and accessibility for creative coders. For anyone interested in getting involved, I would recommend starting with issues labeled “Good First PR,” reading existing code carefully, and not hesitating to ask questions. 
The p5.js community is welcoming, and small contributions quickly build confidence to take on more complex work.” </em>— Lalit Narayan Yadav (she/her) (Contributor)</p><p><em>“I improved p5.js documentation through two PRs … clarified inline examples for better accessibility, and enhanced consistency and tone in instructional text. These updates made the reference pages clearer and more welcoming for new learners. I learned the value of community feedback, clear commit messages, and small, incremental improvements that meaningfully enhance user understanding in open-source projects. Start with “good first issue” tasks, follow contributor guidelines, and don’t hesitate to ask questions. Focus on small, impactful fixes — every improvement helps the community grow.” </em>— Abu Harish @harishbit (Contributor)</p><p><em>“Working on p5.js has been a great learning experience. I focused on identifying and solving issues while continuously learning from the process. I’m proud of the progress I’ve made through contributing and growing as a developer. I’d encourage others to get involved — the p5.js community is incredibly supportive and welcoming, especially for beginner developers. It’s truly been a pleasure working on this project” </em>— Vansh Kabra (he/him) @VANSH3104 (Contributor)</p><p><em>“p5.js has provided me with incredible opportunities to grow as a developer. Under Dave Pagurek’s mentorship I worked on p5.strands, and I feel the project’s values gave me the time and guidance I needed to succeed. I started making sketches soon after beginning to code, and I encourage anyone interested in contributing to do the same. Testing the limits of what I could make, I found myself reading the source code to understand how my sketch ran, and reading contributor docs. 
The way p5.js is set up for its community makes the move from user to contributor a natural step.” </em>— Luke, he/him @lukeplowden (Contributor &amp; Steward)</p><p><em>“In the past few releases, we have made a lot of progress on p5.strands, slowly making it a more capable and easier way to write shaders. I’ve also been toiling away for a while on WebGPU mode, now testable in the latest release. I’m really excited to see people test these features out, make creative things out of them, and also report bugs and suggestions to me! Plenty of these have been logged and fixed/implemented already. If you’re interested in learning, seeing what other people are doing, and helping direct the future of these endeavours, come join the discussion on the p5 discord!” </em>— Dave Pagurek (he/him) @davepagurek (Maintainer &amp; Steward)</p><p>If you contributed to p5.js recently and don’t see yourself credited, or would like to add an additional note, please add your information here so we can acknowledge your work:<br><a href="https://docs.google.com/forms/d/1Nixz3VTes9W3d-V9kj6ouK3UKWHN28u-X8A9OGhLYfM">https://docs.google.com/forms/d/1Nixz3VTes9W3d-V9kj6ouK3UKWHN28u-X8A9OGhLYfM</a></p><h3>Getting Involved</h3><p>Active development on p5.js continues and there are many ways to participate. We host monthly, informal developer chats on Discord — stop by to learn something new or bring your own bugs to get help. In these sessions contributors help each other troubleshoot issues and are welcome to share demos or propose new ideas. 
These are not recorded, intentionally low-pressure, and open to people with a wide range of experience.</p><p>If you use p5.js, we’d love to see you there:</p><ul><li>Join the p5.js Discord channels for <a href="https://discord.gg/2MHKVeV2Dr">#p5strands</a> and <a href="https://discord.gg/nmS4v2qw4K">#webgpu</a></li><li>Check out the contributing guides on GitHub for <a href="https://beta.p5js.org/contribute/p5strands/">p5.strands</a> and <a href="https://beta.p5js.org/contribute/webgpu/">WebGPU</a></li><li>Want to test new features <strong>before</strong> they are released? Check recent releases (<a href="https://github.com/processing/p5.js/releases/tag/v2.1.1">2.1</a>, <a href="https://github.com/processing/p5.js/releases/tag/v2.2.1">2.2.1</a>, and <a href="https://github.com/processing/p5.js/releases/tag/v2.2.2">2.2.2</a>) and look for “release candidates”! These contain instructions for testing, and finding bugs in them is a very helpful way to start contributing!</li></ul><p>Contributing to p5.js goes beyond code: you can also get involved through documentation, education resources, testing, and tooling. Thank you to everyone who has co-created p5.js over the years!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9771d40c8b1d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building Bridges. Wrapping-Up the 2025 pr05 Developer Grant Program]]></title>
            <link>https://medium.com/@ProcessingOrg/building-bridges-wrapping-up-the-2025-pr05-developer-grant-program-79946a353640?source=rss-42ab48286a4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/79946a353640</guid>
            <dc:creator><![CDATA[Processing Foundation]]></dc:creator>
            <pubDate>Mon, 23 Feb 2026 13:45:01 GMT</pubDate>
            <atom:updated>2026-02-23T14:14:50.225Z</atom:updated>
            <content:encoded><![CDATA[<p>pr05 (pronounced “pros”) is the Processing Foundation’s fully-remote grant and mentorship initiative supporting the professional growth of early to mid-career software developers through hands-on contributions to Processing and p5.js. Launched in 2024, the program began with the theme “New Beginnings,” focusing on projects that would enhance and solidify the foundations of these ecosystems.</p><p>The 2025 program followed up with the theme “Building Bridges,” focusing on stronger connections across the ecosystem: creating better connections between Processing and p5.js, strengthening interoperability, and building pathways that make these tools more accessible and powerful together. The spirit of “Building Bridges” was reflected not only in the projects, but also in the way the program evolved, with returning contributors supporting the next cohort.</p><p>Over four months (July-October 2025) our pr05 Developers, Stephan Max, Claire Peng, and Vaivaswat Dubey worked on projects that literally and figuratively built bridges across parts of the Processing ecosystem. Each grantee received 40 hours of mentorship along the way. We want to give a special thanks to pr05 mentors Connie Ye, Claudine Chen, and Stef Tervelde for their unyielding support and guidance.</p><p>In October, the cohort presented their projects at <a href="https://openassembly.processingfoundation.org/">OpenAssembly</a>, sharing the results of their four months of work. We’re excited to revisit what they built and take a closer look at the ideas that came out of last year’s program.</p><p><em>Written by Raphaël de Courville, edited by Patt Vira and Amy B. 
Woodman.</em></p><h4><a href="https://github.com/clairep94"><strong>Claire Peng</strong></a><strong>: Incremental TypeScript Migration for the p5.js Editor</strong></h4><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FsaA9Fb0b8DY%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DsaA9Fb0b8DY&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FsaA9Fb0b8DY%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/a8c28e396067270f9e3224596595aedd/href">https://medium.com/media/a8c28e396067270f9e3224596595aedd/href</a></iframe><p>Claire used her unique perspective to make the p5.js Editor easier to maintain. As a former fashion designer who discovered coding through Daniel Shiffman’s Coding Train tutorials, her non-traditional background shaped her approach to this technical project: making the codebase more approachable for contributors who, like her, learn best through visual hints and pattern-matching rather than dense technical documentation.</p><p>The p5.js Editor is a massive codebase (over 100,000 lines of code built over more than a decade) with layers of legacy dependencies. Rather than attempting to migrate everything, Claire took a strategic approach focused on broad coverage and clean ups across the codebase.</p><blockquote>TypeScript allows your code editor to provide more ‘spell-checks’ (or type-checks, as they are called). It also provides better autocompletion to give you more guardrails as you code, sort of like a game of MadLibs, with hints underneath each blank. — Claire Peng</blockquote><p>As a result of Claire’s work, the p5.js Editor codebase is now 27% TypeScript. 
From a user’s perspective, nothing looks different, but for new contributors, the repo now offers the guardrails that make contributing more visual and intuitive.</p><p>Thanks to <a href="https://github.com/khanniie">Connie Ye</a> for her mentorship and <a href="https://github.com/raclim">Rachel Lim</a> for her guidance as core maintainer of the p5.js Editor.</p><h4>Read More</h4><p>Claire’s blog post: <a href="https://medium.com/@clairepeng94/incremental-typescript-migration-for-the-p5-js-web-editor-7749878e0cbe">Incremental Typescript Migration for the p5.js Editor</a></p><h4>Related Links</h4><ul><li><a href="https://github.com/processing/p5.js-web-editor/tree/develop/contributor_docs/pr05_2025_typescript_migration">Migration Project Technical Documentation</a></li><li><a href="https://www.youtube.com/watch?v=saA9Fb0b8DY">Open Assembly Presentation Video</a></li></ul><h4><a href="https://github.com/Vaivaswat2244"><strong>Vaivaswat Dubey</strong></a><strong>: Building a Visual Regression Testing System for Processing</strong></h4><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FmrfamBu6Rxo%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DmrfamBu6Rxo&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FmrfamBu6Rxo%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/ebe24c42d1316ff8393e1cb393ca21e6/href">https://medium.com/media/ebe24c42d1316ff8393e1cb393ca21e6/href</a></iframe><p>For a platform like Processing, success isn’t just about a sketch compiling without errors, it’s about how things <em>look</em>. If a shape renders even slightly differently on macOS than on Linux, or if a color blend changes subtly after a code refactor, that’s a regression that can break the creative intent of the work.</p><blockquote>Processing has always been a place where art and code intersect. 
It empowers people to express ideas visually, even if they’ve never written code before. But behind its simplicity is a complex rendering system that must stay consistent across updates, platforms, and years of evolution. — Vaivaswat Dubey</blockquote><p>Vaivaswat built a fully native, dependency-free visual testing system for Processing that automatically catches these visual differences. The system runs Processing sketches, captures snapshots of rendered output, compares them to stored baseline images, and highlights even the tiniest differences.</p><figure><img alt="Three side-by-side panels labeled “1st image,” “2nd image,” and “diff image.” The first image shows a red circle and a green square inside a white frame, crossed by a blue diagonal line from top left to bottom right. The second image shows multiple red rectangular shapes arranged radially in a circular pattern on a white background. The third image is a diff (difference) view where pixels that differ between the first and second images are highlighted in red, shown over a checkerboard background" src="https://cdn-images-1.medium.com/max/1024/1*gG_MaDw--ro5q6gB8diA1w.png" /></figure><p>If the color value of a pixel changes, the testing system generates a visual diff (difference) where pixels that differ between the first and second images are highlighted in red.</p><blockquote>Testing might not sound “creative,” but when it protects the integrity of an artistic tool like Processing, it becomes an art form of its own. 
— Vaivaswat Dubey</blockquote><p>Thanks to <a href="https://github.com/mingness">Claudine Chen</a> for her mentorship and guidance throughout the project.</p><h4>Read More</h4><p>Vaivaswat’s blog post: <a href="https://medium.com/@vaivaswat2244/catching-visual-bugs-before-they-happen-building-a-visual-regression-testing-system-for-processing-09b1ab227640">Catching Visual Bugs Before They Happen: Building a Visual Regression Testing System for Processing</a></p><h4>Related Links</h4><ul><li><a href="https://github.com/processing/processing4/tree/visual-testing/core/test/processing/visual">Visual Testing Implementation</a></li></ul><h4><a href="https://github.com/stephanmax">Stephan Max</a>: Creating a New p5.js Mode for the Processing Development Environment</h4><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FHsV7tbOviEw%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DHsV7tbOviEw&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FHsV7tbOviEw%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/9823a81265e98f3489c6234e04f97f64/href">https://medium.com/media/9823a81265e98f3489c6234e04f97f64/href</a></iframe><p>Stephan’s project allows users to create and run p5.js sketches directly inside the Processing Development Environment (PDE) even without internet connection. This new “mode” bridges the gap between web-based and desktop coding.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jHZfaPDK01rR2qvdHEehOw.png" /></figure><p>Now, p5.js users can go beyond the limitations of an internet-dependent, browser-based code editor. 
Some exciting new features include: saving and loading files locally, accessing system resources, using Node packages, and even exporting sketches as standalone desktop applications across Windows, macOS, and Linux.</p><blockquote>My deep appreciation for the Processing Foundation does not come out of nowhere: Processing has been my companion for a while! It all started in 2009, when I bought the book ‘<a href="http://www.generative-gestaltung.de/2/">Generative Gestaltung</a>’… That means I have been working with Processing for over 15 years. — Stephan Max</blockquote><p>The mode is now available in the PDE’s contribution manager. Look for “p5.js Mode (experimental)”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gss0-qL_jp9bqL54SD8qGQ.png" /></figure><p>Thanks to <a href="https://steftervel.de/">Stef Tervelde</a> for his mentorship and <a href="https://linktr.ee/sableraph">Raphaël de Courville</a> for his guidance as Processing Community Lead.</p><h4>Read More</h4><p>Stephan’s blog post: <a href="https://stephanmax.com/pr05-grant-retrospective/">pr05 Grant Retrospective</a></p><h4>Related Links</h4><ul><li><a href="https://github.com/processing/processing-p5.js-mode">processing-p5.js-mode Repository</a></li><li><a href="https://github.com/processing/processing-p5.js-mode/blob/main/README.md">README file</a></li></ul><h3>Moving Forward</h3><p>Through the pr05 grant, we wanted to show that open-source infrastructure work (the kind that happens quietly behind the scenes) creates meaningful bridges between communities and technologies. Our 2025 grantees embraced this fully, approaching their projects with incredible care, technical rigor, and thoughtfulness about the contributor experience.</p><p>I’m genuinely proud of how much intentionality our pr05 grantees put into their contributions. 
There is a lot more to each of their projects than can reasonably fit in this article, and I strongly encourage you to read their stories in their own words through the blog posts linked above. To watch the pr05 talks and the presentations from the Processing Foundation’s 2025 Fellows and 2025 Google Summer of Code contributors, visit <a href="https://openassembly.processingfoundation.org/">https://openassembly.processingfoundation.org/</a>.</p><h3>Acknowledgements</h3><p>Again, a huge thanks to our incredible and supportive mentors, <a href="https://github.com/khanniie">Connie Ye</a>, <a href="https://steftervel.de/">Stef Tervelde</a>, and <a href="https://github.com/mingness">Claudine Chen</a>!</p><h3><strong>Support the Processing Foundation</strong></h3><p>Processing Foundation is the non-profit behind Processing, p5.js, and the p5.js Editor. We’re imagining open-source software that is free, creative, equitable, and accessible to all. However, free software is expensive to make, and we cannot do this work without you.</p><p><a href="https://processingfoundation.org/support"><strong>Donate now</strong></a>!</p><p>Your support is what keeps the Processing ecosystem alive, including core development, infrastructure, community programs, and fellowships like pr05. It helps ensure these tools remain free, creative, and accessible to everyone.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=79946a353640" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Processing 4.5.1 is officially out]]></title>
            <link>https://medium.com/@ProcessingOrg/processing-4-5-1-is-officially-out-1bf4e37ccc05?source=rss-42ab48286a4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/1bf4e37ccc05</guid>
            <dc:creator><![CDATA[Processing Foundation]]></dc:creator>
            <pubDate>Wed, 21 Jan 2026 13:46:02 GMT</pubDate>
            <atom:updated>2026-01-21T13:51:54.069Z</atom:updated>
            <content:encoded><![CDATA[<p><em>We are releasing Processing 4.5.1, featuring redesigned welcome and preference screens.</em></p><p><a href="https://processing.org/download">Download Processing 4.5.1</a> from the Processing website.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TL0fTzKiNNklGmuYmOYDig.png" /></figure><h3>Sustaining Processing</h3><p>Processing has been around for 25 years, which in software years is… a lot.</p><p>That continuity is a testament to the foundations laid by Ben Fry, Casey Reas, and to the many contributors who have cared for Processing over the years. Projects like Processing do not persist by accident. They survive because people keep showing up to do the unglamorous work.</p><p>We know it is hard to get people excited about infrastructure, especially since much of this work is invisible to most users, but it is still essential. Over the past two years, <a href="https://www.youtube.com/watch?v=ngQwedwFyOY">a huge amount of effort</a> has gone into <a href="https://timrodenbroeker.de/the-future-of-processing-with-raphael-and-stef/">bringing Processing back</a> into a state where bug fixes and releases can happen.</p><p>Last September, we welcomed our new Processing Project Lead, <a href="https://medium.com/@ProcessingOrg/moon-dav%C3%A9-joins-processing-foundation-as-processing-project-lead-ef33efea35d4">Moon Davé</a>, who is already focusing on clearing the path for contributors of all skill levels, and collaborating with the contributor community to breathe new life into the project.</p><p>At this point, we are in a place we feel good about. Processing is stable, moving again, and in a shape where meaningful contributions can happen.</p><p>Now feels like a good time to be excited about Processing :)</p><p>If Processing matters to you, there are many ways to get involved! 
Join the <a href="https://discord.processing.org">Processing Discord</a>, or check out our<a href="https://github.com/processing/processing4/tree/main?tab=readme-ov-file#contributing-to-processing"> contributing guide</a> on GitHub.</p><h3>What’s new in Processing 4.5</h3><p><em>Full </em><a href="https://github.com/processing/processing4/releases/tag/processing-1312-4.5.1"><em>release notes for Processing 4.5.1</em></a></p><h3>A refreshed user interface</h3><p>Over the years, the editor has held up remarkably well with its timeless minimalist design. At the same time, some parts of the interface are starting to show their age and feel dated on modern systems.</p><p>More importantly, the underlying UI code has become harder to maintain and extend. Even small fixes can take longer than they should, which slows down development and causes frustration for everyone involved.</p><p>Starting with 4.5.1, the <strong>Welcome</strong> and <strong>Preferences</strong> screens have a new refreshed design, to support better accessibility and make it easier to add new features in future versions.</p><p>The main Processing editor window stays the same, and you should not expect any other visual changes elsewhere in this version.</p><h4>What we’re building on</h4><p>Starting with Processing 4.3.1, new features have been written primarily in the <a href="https://kotlinlang.org/">Kotlin</a> language. Kotlin is fully interoperable with Java, which means we can adopt it incrementally while keeping the rest of the codebase intact. This makes it possible to use newer tooling alongside the existing Java and Swing code.</p><p>For the user interface, we are using <a href="https://www.jetbrains.com/compose-multiplatform/">Jetpack Compose Multiplatform</a>, a modern, reactive UI toolkit built in Kotlin, together with the <a href="https://m3.material.io/">Material Design 3</a> design system. 
This allows us to build new interface components quickly, while still integrating with the existing Swing-based parts of Processing.</p><p>Contributors who are comfortable with Kotlin and interested in this work are very welcome to get involved. Come say hi on the <a href="https://discord.gg/tJvJB6ctUJ">Processing Discord</a>.</p><h4>Accessibility by default</h4><p>Improving accessibility is a key motivation for this work. Processing’s underlying UI tech was not designed with modern accessibility needs in mind. If we kept everything as-is, we would keep shipping the same limitations.</p><p>Jetpack Compose for Desktop provides built-in support for screen readers, keyboard navigation, and other <a href="https://kotlinlang.org/docs/multiplatform/compose-desktop-accessibility.html">desktop accessibility features</a>. As parts of the interface are ported to this system, they gain these accessibility features without additional work.</p><p>That’s the key shift: not “adding accessibility”, but moving toward a UI setup where accessibility is the default.</p><figure><img alt="A gif showing Processing’s new welcome screen where interface buttons receive focus one after another. As each button is focused, a large visible text label appears next to it, showing the button name. The labels mirror what screen readers announce." src="https://cdn-images-1.medium.com/max/1000/1*_rB2oaLEh3jqKR37bM0FFQ.gif" /></figure><h4>New UI components</h4><p>With this foundation in place, parts of the Processing interface are beginning to move to the new UI system in this release.</p><p>Material Design comes with a lot of pre-made components which make it easier to ensure consistency across all parts of the UI.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XrZGZtcqSACZbxHPEbwZTg.png" /><figcaption><em>Left: Default Swing interface. 
Right: Jetpack Compose with the Material Design 3 open source UI design system.</em></figcaption></figure><p>Jetpack Compose also tends to be less verbose than Swing, which makes UI code easier to read and reason about. For example, here is the same component written in Swing and in Jetpack Compose.</p><p><strong>Swing</strong></p><pre>import java.awt.FlowLayout<br>import javax.swing.*<br><br>class Processing : JFrame(&quot;Processing&quot;) {<br>    init {<br>        setDefaultCloseOperation(EXIT_ON_CLOSE)<br>        layout = FlowLayout()<br>        val label = JLabel(&quot;This is a label&quot;)<br>        val button = JButton(&quot;Click me&quot;)<br>        button.addActionListener {<br>            println(&quot;Button clicked&quot;)<br>        }<br>        add(label)<br>        add(button)<br>        pack()<br>        isVisible = true<br>    }<br>    companion object {<br>        @JvmStatic<br>        fun main(args: Array&lt;String&gt;) {<br>            SwingUtilities.invokeLater {<br>                Processing()<br>            }<br>        }<br>    }<br>}</pre><p><strong>Jetpack Compose</strong></p><pre>import androidx.compose.foundation.layout.Column<br>import androidx.compose.material3.Button<br>import androidx.compose.material3.Text<br>import androidx.compose.ui.window.Window<br>import androidx.compose.ui.window.application<br><br>fun main() = application {<br>    Window(onCloseRequest = ::exitApplication, title = &quot;Processing&quot;) {<br>        Column {<br>            Text(&quot;This is a label&quot;)<br>            Button(onClick = {<br>                println(&quot;Button clicked&quot;)<br>            }) {<br>                Text(&quot;Click me&quot;)<br>            }<br>        }<br>    }<br>}</pre><p>The new <strong>Welcome</strong> screen adds faster ways to get started, a list of useful links, and a scrollable collection of example sketches.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*24dgwEuuQedwntCn7iL8sg.png" /></figure><p>In earlier versions of Processing, showing most options on a single screen worked well because there were only a handful of them. 
As more features were added, the <strong>Welcome</strong> and <strong>Preferences</strong> screens became increasingly cluttered, making them harder to use and difficult to expand upon.</p><p>The new <strong>Preferences</strong> screen is searchable and organized into clear categories, which makes it easier to add new settings over time.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MM3QchgFSih6CUoxfNkGBA.png" /></figure><p>The new <strong>Preferences</strong> screen also makes it easier to work with experimental settings.</p><p>Previously, these settings could only be changed by manually editing the preferences.txt file. Now, they can be viewed and edited directly from within the Processing interface.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*p7jL84wJYKJDYZiHBB4auQ.png" /></figure><h4>Incremental approach</h4><p>We chose to migrate the <strong>Welcome</strong> and <strong>Preferences</strong> screens first because they are self-contained and include most of the interface elements needed for the project (menus, cards, dropdowns, checkboxes, sliders, etc.). This makes them a good place to start and test the new design.</p><p>Rather than doing a large redesign or restarting from scratch, we chose to take a careful, incremental approach, so we can demonstrate the new system, gather feedback, and invite contributions as we move forward. This is possible thanks to <a href="https://kotlinlang.org/docs/multiplatform/compose-desktop-swing-interoperability.html">Jetpack Compose’s interoperability with Swing</a>.</p><h3>It takes a village 💙</h3><p>We have already received useful feedback from the community through the beta release of Processing 4.5. Thanks to everyone who tested early versions, shared feedback, reported issues, or contributed ideas.</p><p>Want to help make Processing better? 
<a href="https://processing.org/download">Download Processing 4.5.1</a> from the Processing website, and if you find any bugs or issues in this release, please <a href="https://github.com/processing/processing4/issues/new/choose">open an issue</a> on GitHub.</p><p>Join the <a href="https://discord.processing.org">Processing Community on Discord</a>.</p><h3>Acknowledgment</h3><p>Part of this work was made possible by <a href="https://www.sovereign.tech/tech/processing">funding</a> from the <a href="https://www.sovereign.tech/">Sovereign Tech Agency</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1bf4e37ccc05" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Google Summer of Code 2025 — Wrap-Up and Mentor Summit]]></title>
            <link>https://medium.com/@ProcessingOrg/google-summer-of-code-2025-wrap-up-and-mentor-summit-d1e565e9fe1f?source=rss-42ab48286a4c------2</link>
            <guid isPermaLink="false">https://medium.com/p/d1e565e9fe1f</guid>
            <dc:creator><![CDATA[Processing Foundation]]></dc:creator>
            <pubDate>Thu, 18 Dec 2025 02:07:53 GMT</pubDate>
            <atom:updated>2025-12-18T02:07:53.476Z</atom:updated>
            <content:encoded><![CDATA[<h3>Google Summer of Code 2025 — Wrap-Up and Mentor Summit</h3><p>In 2025, the Processing Foundation celebrated its thirteenth year of participation in Google Summer of Code (GSoC)! The primary goal of the GSoC program is to welcome new contributors to open-source software development. Out of a pool of about 150 submissions, three outstanding projects were <a href="https://medium.com/@ProcessingOrg/announcing-google-summer-of-code-2025-projects-6463d0e49470">selected</a> to improve the <a href="https://p5js.org/">p5.js</a> coding experience. Each project was supported by mentors, and culminated in merged code and a public presentation at the <a href="https://openassembly.processingfoundation.org/">Open Assembly</a> in 2025.</p><p><strong>Project 1: </strong><a href="https://youtu.be/TVIZhfxpnLg?si=nFr-nKhgC-LO7sXl"><strong>Context-Aware Autocomplete and Navigation for the p5.js Editor</strong></a></p><ul><li>Contributor: <a href="https://kamakshi645.medium.com/gsoc25-processing-foundation-final-work-c2069c536ae8">Kamakshi Bali</a></li><li>Mentors: <a href="https://github.com/diyaayay">Diya Solanki</a> and <a href="https://github.com/tespin">Tristan Espinoza</a></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NPmHCbXoQ81kUW2h2vXrUg.png" /></figure><p><strong>Project 2: </strong><a href="https://youtu.be/kUXVl8kwwZs?si=jVl7ceTPnAbvZW-E"><strong>Translation Mapping and Accessibility for p5.js</strong></a></p><ul><li>Contributor: <a href="https://www.linkedin.com/in/divyansh013/">Divyansh Srivastava</a></li><li>Mentor: <a href="https://xnze.ro/">Kit Kuksenok</a></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*I_n1wNE2Liqx7iOhCJE54w.png" /></figure><p><strong>Project 3: </strong><a href="https://youtu.be/7HwWTwmJBcY?si=PVyNOSDycaPx9CPY"><strong>p5.js Sketch Embed Tool for Blogs and Websites</strong></a></p><ul><li>Contributor: <a href="https://www.linkedin.com/in/glory-nwaekpe/">Ego 
Nwaekpe</a></li><li>Mentor: <a href="https://www.doradocodes.com/">Dora Do</a></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Nsv9_ix1a1ttl50BZQPLbQ.png" /></figure><p>During GSoC, contributors work on projects in mentoring organizations, learning how to be a part of the open source software community. In 2025, Processing Foundation was one of <a href="https://opensource.googleblog.com/2025/08/google-summer-of-code-2025-contributor-statistics.html">185 accepted organizations</a>. Each contributor is supported by one or two mentors, but is responsible for completing their own project. The contributors worked over the summer, and shared their work in the <a href="https://openassembly.processingfoundation.org/">Open Assembly</a> alongside other 2025 Processing Foundation grantees and fellows.</p><p>Once the GSoC program was complete in the fall, the mentors from the participating organizations came together in Munich for the GSoC Mentor Summit. This inspiring and energizing unconference connected us with open source mentors and maintainers over conversations on how to make open source software communities more sustainable, inclusive, and welcoming.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*m5WNF3vehXQgLBw90uEkOg.jpeg" /><figcaption>Tristan Espinoza, Kit Kuksenok, and Diya Solanki (from left to right) at the GSoC Mentor Summit</figcaption></figure><p>This year, three mentors from Processing Foundation were able to attend: <a href="https://github.com/diyaayay">Diya Solanki</a>, <a href="https://github.com/tespin">Tristan Espinoza</a>, and <a href="https://xnze.ro/">Kit Kuksenok</a>.</p><p>The Processing Foundation mentor delegates Diya, Tristan, and Kit participated in unconference sessions and discussions that reflected on improving the GSoC application process and contributor engagement, on funding and governance in open source, on diversity and open source community health and sustainability, on open source in academia and in 
computational biology, and on tooling for supply chain security.</p><p>The conversations at the Mentor Summit translated into practical input for ongoing project work and into ideas for projects in internationalization, simulation, and academic collaboration, as well as reflections for future programs. For example, <a href="https://github.com/processing/p5.js/pull/8194">this pull request</a> introduces clear guidelines on how LLM-based tools should interact with the p5.js repository and its contributors. With the goal of removing barriers to contribution, we will improve the application process with a clear template, focusing on custom projects grounded in experience with the Processing and p5.js ecosystem.</p><p>Thank you to our cohort of GSoC contributors, mentors, and advisors for another amazing summer of code. We hope to keep supporting the creative code and open source community, year after year.</p><p>Want to support the Processing Foundation in this work? <a href="https://processingfoundation.org/donate">Donate here</a> to support our ecosystem of open source contributions!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d1e565e9fe1f" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>