<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Render Network on Medium]]></title>
        <description><![CDATA[Stories by Render Network on Medium]]></description>
        <link>https://medium.com/@rendernetwork?source=rss-3d2d407322fb------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*ZifcaMAG2oeWSwB7vnL59g.png</url>
            <title>Stories by Render Network on Medium</title>
            <link>https://medium.com/@rendernetwork?source=rss-3d2d407322fb------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 06 Apr 2026 03:29:21 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@rendernetwork/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Render Network Foundation Monthly Report — February 2026]]></title>
            <link>https://rendernetwork.medium.com/render-network-foundation-monthly-report-february-2026-4582b09d30e8?source=rss-3d2d407322fb------2</link>
            <guid isPermaLink="false">https://medium.com/p/4582b09d30e8</guid>
            <category><![CDATA[decentralized-ai]]></category>
            <category><![CDATA[depin]]></category>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[3d-rendering]]></category>
            <category><![CDATA[decentralized-computing]]></category>
            <dc:creator><![CDATA[Render Network]]></dc:creator>
            <pubDate>Tue, 17 Mar 2026 19:41:21 GMT</pubDate>
            <atom:updated>2026-03-17T19:41:21.020Z</atom:updated>
            <content:encoded><![CDATA[<p>In February’s update, we highlight Dispersed product updates and use cases, RenderCon 2026 featured speakers, Kyle Gordon in the artist spotlight, a recap of Rendr Festival, and ecosystem metrics.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*JlOJqFMuzsEBg6wrpJiC9A.png" /></figure><h3><strong>Refik Anadol. Alex Ross. Francesco Siddi. Emad Mostaque.</strong></h3><p><strong>These are just a few of our announced RenderCon 2026 speakers.</strong></p><p>April 16–17, 2026 · Hollywood, California</p><p>Get <a href="https://luma.com/rendercon2026"><strong>tickets</strong></a>.</p><p><a href="https://rendercon2026.com/">RenderCon 2026</a> is focused on the future of art, media, and technology, and where the most meaningful work needs to happen next.</p><p>Confirmed speakers include:</p><ul><li><strong>Refik Anadol</strong>, Founder, DATALAND, the world’s first Museum of AI Arts</li><li><strong>Alex Ross</strong>, DC Comics and Marvel comic book artist</li><li><strong>Francesco Siddi</strong>, CEO and Chairman, Blender Foundation</li><li><strong>Emad Mostaque</strong>, CEO and Founder, Intelligent Internet</li></ul><p>And more.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/310/1*6wHR2TlmuLsaLEgNHREpTg.gif" /></figure><p>Learn more at <a href="http://rendercon2026.com">rendercon2026.com</a>.</p><p>Watch the 2025 RenderCon recap reel:</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/rendernetwork/status/1912538322083487855&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/032dd0785e9c3ae7ac6361ebdfe49b11/href">https://medium.com/media/032dd0785e9c3ae7ac6361ebdfe49b11/href</a></iframe><h3><strong>Ecosystem metrics</strong></h3><p>The Render Network Foundation tracks various metrics on its network. Below is the aggregate view of the month’s activity, previous month’s activity, and month-over-month change for a few key metrics.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*o-7WGgs5nTENA5-qnXLxHA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1cnMDWHRP6ky43Ai98cwVQ.png" /></figure><p>Looking for definitions of each of these terms? Check out our <a href="https://rendernetwork.medium.com/render-network-foundation-monthly-report-december-2025-43d956808e3f">previous</a> monthly reports.</p><h3><strong>Product updates: Dispersed Product Updates and Use Cases</strong></h3><p><a href="https://dispersed.com/">Dispersed</a>, the distributed GPU network by Render Network designed for general compute and AI workloads, has seen UI updates and an uptick in usage in the past month.</p><p>Image and video generation recipes (think of them as preset templates for LLMs targeting specific use cases) have been added to the interface for convenience.</p>
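<p>To make the idea concrete, here is a rough Python sketch of what such a recipe might encode. This is an illustration only; the field names and values are hypothetical and are not Dispersed’s actual recipe schema:</p><pre># Hypothetical sketch of an image-generation recipe: a stored set of
# defaults that a user's input is merged into. Field names are invented
# for illustration and do not reflect Dispersed's real schema.
PRODUCT_SHOT_RECIPE = {
    "task": "image-generation",
    "model": "example-diffusion-model",  # placeholder model id
    "prompt_template": "studio photo of {subject}, softbox lighting, 4k",
    "negative_prompt": "blurry, watermark",
    "steps": 30,
    "width": 1024,
    "height": 1024,
}

def build_job(recipe: dict, subject: str) -> dict:
    """Fill in the recipe's prompt template and return a submittable job spec."""
    job = dict(recipe)
    job["prompt"] = job.pop("prompt_template").format(subject=subject)
    return job

print(build_job(PRODUCT_SHOT_RECIPE, "a ceramic coffee mug"))</pre>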
<a href="https://otoyinc.mintlify.app/">Documentation</a> for Dispersed has also been updated and will continue to be updated as new features and functionality are added.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XF3_guYG89ymWI0ccVeeVQ.png" /><figcaption>Pre-loaded video and image generation templates available on Dispersed</figcaption></figure><h4><em>Dispersed for AI-Powered Workflows for Art</em></h4><p>Last month we highlighted <a href="https://bitmap.mhxalt.com/">Bitmap</a>, a project by artist MHX, a long-time user of the Render Network whose art piece, <em>Lattice</em>, is featured in <a href="https://www.artechouse.com/submerge"><em>SUBMERGE: Beyond the Render</em></a>. He launched the Bitmap project back in January, powered by Dispersed and the Render Network.</p><p>MHX has graciously shared how he built Bitmap and encourages other creators to build their own generative workflow projects. So much so that he shared the <a href="https://bitmap.mhxalt.com/details">Bitmap workflow</a> publicly.</p><p>We’re working on a full case study of the Bitmap project and will publish it soon.</p><h4><em>Dispersed for Document Parsing</em></h4><p>Another use case we’ve seen on Dispersed is the use of the network to power real-world document parsing and text analysis workloads.</p><p>A document-parsing job run on the network can process multiple file types — including CSV, JSON, XML, PDF, plain text, and images. This particular application automatically detects the format and applies the appropriate parsing method. This enables structured extraction and analysis across a wide range of documents commonly used in data pipelines.</p><p>Importantly, the entire workflow runs 100% on Dispersed, with no reliance on external AI APIs. Model inference is executed on decentralized GPU nodes using vLLM, demonstrating the network’s ability to support real AI workloads end-to-end.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/508/1*0R5tENBOc_f1ZjXNclC57A.png" /></figure><p>The system connects directly to the live Dispersed API with secure HMAC-SHA256 request signing, ensuring every compute job is authenticated and verifiable. GPU inference nodes can be deployed directly from the browser in under two minutes, and workloads run on real network hardware at approximately $0.69 per GPU hour.</p><p>This use case highlights how Dispersed can support practical AI tasks such as document parsing and data extraction, while showcasing the network’s fast deployment, secure API architecture, and decentralized compute infrastructure.</p><p>We’re also supporting a builder who’s taking this concept to scale, specifically focused on analyzing scientific research across hundreds of academic papers. The goal is to create a public, open assessment of scientific evidence across fields such as behavioral science, public policy, and medicine.</p><p>More details on this project and its use of Dispersed are coming soon.</p><h3><strong>Artist spotlight: Kyle Gordon</strong></h3><p>This month we sat down with Kyle Gordon, an award-winning multidisciplinary artist and experiential designer based in San Francisco. 
<h3><strong>Artist spotlight: Kyle Gordon</strong></h3><p>This month we sat down with Kyle Gordon, an award-winning multidisciplinary artist and experiential designer based in San Francisco. Over the past 15 years, Kyle has built large-scale immersive installations and spatial environments for globally recognized brands and festivals including HP, Coachella, and #CarbonDrop.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*igghU7ptfb1vq_wvnn1yTg.jpeg" /></figure><p>In the interview, Kyle walks us through his creative process — from organic pattern systems and emotional storytelling to the full pipeline of concept, 3D modelling, and physical execution. We explore how scalable GPU rendering via Render Network supports the resolution demands and tight deadlines that come with festival and architectural-scale work, and where he sees immersive design heading as AI and decentralized compute become part of the creative toolkit.</p><p>Watch the interview:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FsHJH0HG4jR8%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DsHJH0HG4jR8&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FsHJH0HG4jR8%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/b2a73978762ad6f458e5e8b154eaf6cf/href">https://medium.com/media/b2a73978762ad6f458e5e8b154eaf6cf/href</a></iframe><h3><strong>Announcements</strong></h3><p><strong>The Render Network Team attended RENDR Festival in Belfast, Feb 12–13<br></strong><a href="https://www.rendrfestival.com/">RENDR Festival</a> is a premier event hosted in Belfast that inspires creatives by exploring the space between creativity and technology across film, gaming, animation, and immersive experiences.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*lFETRygXdyeQJ4UPx-VMpw.jpeg" /><figcaption>Render Network hosts a happy hour for Rendr Festival attendees.</figcaption></figure><p>The 2026 event took place on 12–13 February and featured talent behind global hits including The Last of Us, KPop Demon Hunters, and Alien Romulus.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/384/1*a0XwBlgT9_aqPq78cC4Elw.jpeg" /><figcaption>Attendees arriving at the 2026 Rendr Festival in Belfast</figcaption></figure><p>The Render Network team connected with creatives and speakers from leading global entertainment companies and award-winning production studios, all against the backdrop of Belfast, a significant hub for global screen production.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*hUVmOIKJMq-yVWi4XZ4CzA.jpeg" /><figcaption>Camille Balsamo (left) talks about practical effects created by her studio, Pro Machina. Fede Alvarez (right) made a surprise appearance and dived into the practical effects for Alien Romulus.</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/384/1*_sgLK9s0TquxjNILxbSbRw.jpeg" /><figcaption>KPop Demon Hunters CG supervisor, James Carson, shared behind-the-scenes insights with the audience.</figcaption></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4582b09d30e8" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[2025 Annual Financial Overview & Network Milestones]]></title>
            <link>https://rendernetwork.medium.com/2025-annual-financial-overview-network-milestones-936f24e6cbe6?source=rss-3d2d407322fb------2</link>
            <guid isPermaLink="false">https://medium.com/p/936f24e6cbe6</guid>
            <category><![CDATA[depin]]></category>
            <category><![CDATA[decentralized-ai]]></category>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[3d-rendering]]></category>
            <category><![CDATA[decentralized-computing]]></category>
            <dc:creator><![CDATA[Render Network]]></dc:creator>
            <pubDate>Mon, 02 Mar 2026 17:53:06 GMT</pubDate>
            <atom:updated>2026-03-02T20:37:29.808Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ztLhBAGAG_KBA8FaHE0oNQ.png" /></figure><p>As we look back at the Render Network ecosystem in 2025, we’re sharing a comprehensive overview of emissions, expenditures, and network usage across both Network operations and Foundation operations, along with key milestones that shaped the year.</p><p>In our midyear update, we revisited how the burn-mint equilibrium (BME) model works and addressed two recurring questions:</p><ul><li>What is the Foundation doing with emissions allocated to operations?</li><li>Why are emissions front-loaded in the early years?</li></ul><p>Those overviews remain relevant and can be found in our <a href="https://rendernetwork.medium.com/render-network-foundation-monthly-report-24e14ea50e13">July monthly report</a>. In this annual update, we build on that with a full-year view and context around how 2025 marked a pivotal year in the network’s evolution. With <strong>2025 emissions totaling 5,637,150 RENDER</strong>, split evenly between Network operations and Foundation operations, outflows are summarized in a table, followed by contextual explainers.</p><p>We’ve also included <strong>net cumulative token balances</strong> through the end of 2025, for both <strong>Network operations at 4,636,925 RENDER</strong> and <strong>Foundation operations at 2,019,647 RENDER</strong>.</p><p>From immersive, large-scale artistic production to the launch of dedicated AI compute infrastructure called Dispersed, 2025 was foundational for what’s to come for Render Network.</p><h3>Emissions Overview: January 1 — December 31, 2025</h3><p>All figures below are denominated in RENDER unless otherwise noted.</p><h4>Network Operations</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*b7JQ1d79sYmGERYhnMgSuQ.png" /></figure><p>Line items explained for 2025:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QAJLChhzo-rXlM7p4bfw5Q.png" /></figure><p><strong>Accounting note:</strong> As noted at mid-year, small variances versus on-chain figures may occur due to accrual vs. cash accounting, with reconciliation in subsequent periods.</p><p><strong>Takeaway:<br></strong> Network operations in 2025 prioritized creator grants and node incentives while maintaining a surplus for future RNP initiatives.</p><h4>Foundation Operations</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-IrvTkMjqjgU8N4vlsVOBA.png" /></figure><p>Line items explained for 2025:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ftfikQ9HkFdAyXyV9gtARg.png" /></figure><p>What “Provision of Services” covered in 2025:</p><ul><li>Protocol and software development</li><li>Infrastructure maintenance</li><li>Communications</li><li>Marketing and growth initiatives (E.g. 
RenderCon, Breakpoint headline sponsorship, <em>SUBMERGE)</em></li><li>Business development and partnerships</li><li>Grants administration</li><li>Operations</li><li>Risk and compliance</li><li>Staff compensation</li><li>Software and operating expenses</li></ul><p><strong>Takeaway:</strong></p><p>Foundation operations expanded in 2025 to support both the core rendering network and new AI compute infrastructure via Dispersed.</p><p>As the network continues to grow, net surplus resources are deployed for sustained development and ecosystem growth.</p><h4>Network Usage</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KdCHaUOwcoTVZMvfxtMJcg.png" /></figure><p>Line items explained for 2025:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Jq0efi2-yfqkLVbasf6B2g.png" /></figure><p><strong>Takeaway:</strong></p><p>Demand for decentralized GPU rendering continued to grow in 2025, reinforcing real-world utility of the network.</p><p>*Network operations are provided in USDC as well as RENDER to account for RENDER / USDC exchange fluctuations. This more closely reflects growth.</p><h3>2025 Milestones: Scaling Creative &amp; AI Compute</h3><p>To contextualize the numbers, it’s worth recapping how 2025 marked a defining year for the Render Network’s capabilities and direction.</p><h4>Submerge at ARTECHOUSE NYC</h4><p>The Render Network Foundation sponsored <em>SUBMERGE: Beyond the Render</em>, an immersive exhibition held at ARTECHOUSE in New York City. This took place in two parts throughout 2025: an experimental launch in January and a more fully developed program in September.</p><p><em>SUBMERGE: Beyond the Render</em> was the largest use of the Render Network across a single coordinated project to date:</p><ul><li>16 artists</li><li>12 immersive 3D works</li><li>Fully rendered on the Render Network’s distributed GPUs</li></ul><p>This use of the network to power a multi-artist, large-scale immersive exhibition of this complexity showcased what the network has been capable of for some time, at commercial scale. The project demonstrated that decentralized GPU infrastructure can reliably support:</p><ul><li>Studio-grade commercial production</li><li>Large-format immersive installations</li><li>Independent artist experimentation</li><li>Cross-disciplinary creative collaboration</li></ul><p><em>SUBMERGE: Beyond the Render</em> served as proof that decentralized compute is not just viable — it is production-ready at scale. 
Coverage across national and local New York City media, including PCMag, brought the Render Network to new audiences.</p><h4>Launch of the Dispersed Network</h4><p>2025 also marked the launch of <a href="https://dispersed.com/">Dispersed</a>, a dedicated compute subnet by the people behind the Render Network, designed to support emerging AI workloads.</p><p>Built with more than five years of operational experience running a distributed GPU rendering network, Dispersed represents the next phase of infrastructure evolution: purpose-built compute capacity for AI-enabled and general compute workflows.</p><p>While still in its early stages, initial integrations and testing in 2025 laid the groundwork for expanded case studies in 2026, including creative-AI convergence projects such as The Bitmap by artist MHX, which demonstrates how decentralized GPU infrastructure can support generative image and video pipelines.</p><p>The launch of Dispersed signals a long-term strategic direction: positioning decentralized GPU infrastructure at the intersection of creativity and AI. It’s still early days for both Dispersed and the AI industry; the next few years are expected to generate new opportunities and growth for both.</p><h4>AI-Enabled Product Development</h4><p>In parallel, ecosystem partners including OTOY continued expanding AI-focused tools such as OTOY Studio and Canvas, early indicators of how rendering and generative AI workflows may increasingly converge.</p><p>These early experiments in 2025 are setting the stage for:</p><ul><li>AI-assisted 3D content creation</li><li>Generative image and video pipelines</li><li>Hybrid rendering + AI compute workloads</li></ul><p>We believe this convergence will drive sustained demand across both the core rendering network and the Dispersed network in 2026 and beyond.</p><h3><strong>Looking Ahead</strong></h3><p>2025 was a year of experimentation, product expansion, continued growth in network usage, and foundational infrastructure expansion. It was also a year in which teams were brought on to build internal processes and discipline to manage growth into the future in a responsible, performance-based manner.</p><p>Internal discipline is key to iterating and optimizing marketing and growth initiatives, so that we can sustainably and consistently deliver on emerging models of decentralization.</p><p>Key themes from 2025 that carry over into 2026:</p><ul><li>Thoughtful, measured spending of front-loaded emissions</li><li>Continued surplus available for reinvestment in long-term, category-building initiatives</li><li>Sustained network demand despite macro volatility</li><li>Demonstrated capability at immersive, large-scale production</li><li>Entry into dedicated AI compute infrastructure</li></ul><p>As AI application development accelerates globally, demand for GPU compute is expected to grow for years to come. Specifically in 2026, we hypothesize the convergence of AI and traditional media will play an increasingly important role in compute access. 
We expect to be able to support this via both the Render Network and Dispersed.</p><p>By continuing to invest in decentralized infrastructure — both for rendering and AI workloads — the Render Network is positioning itself to compete in a rapidly evolving compute landscape in 2026 and beyond.</p><p>We will continue to share transparent updates as growth programs mature and as new initiatives move from experimentation to production scale.</p><p>Onward into 2026.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=936f24e6cbe6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Artist Panel: Early Adopters of the Render Network]]></title>
            <link>https://rendernetwork.medium.com/artist-panel-early-adopters-of-the-render-network-4a159ce9633c?source=rss-3d2d407322fb------2</link>
            <guid isPermaLink="false">https://medium.com/p/4a159ce9633c</guid>
            <dc:creator><![CDATA[Render Network]]></dc:creator>
            <pubDate>Fri, 13 Feb 2026 19:53:04 GMT</pubDate>
            <atom:updated>2026-02-13T19:53:04.996Z</atom:updated>
            <content:encoded><![CDATA[<iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FiyoJltk75Vs%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DiyoJltk75Vs&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FiyoJltk75Vs%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/47486ce5a5e01e0220fbb99f4dbcacad/href">https://medium.com/media/47486ce5a5e01e0220fbb99f4dbcacad/href</a></iframe><p>At RenderCon 2025, three of the Render Network’s earliest users sat down to talk craft and scale. Director and animator Nick DenBoer, known as Smearballs, has pushed Octane and Render on everything from his Deadmau5 video <em>Pomegranate</em> to other inventive personal shorts. Filmmaker and remix artist Davy Force brought Render into features with the opening sequence for Rodney Ascher’s <em>A Glitch in the Matrix</em>, and continues to blend live action, 3D, and TV satire. Motion designer David Brodeur, known as Brilly, turned experiments into spectacle, from collaborations with Scott Storch to Coca-Cola’s takeover of the Las Vegas Sphere. The conversation was moderated by Phil Gara fromof the OTOY, a key Render Network ecosystem partner.</p><h3>From first Renders to full production</h3><p>The session opened with a highlight reel of the artists’ milestone projects: DenBoer’s <em>Pomegranate</em> music video for Deadmau5, Force’s visuals for Rodney Ascher’s <em>A Glitch in the Matrix</em>, and Brodeur’s Coca-Cola animation wrapping the Las Vegas Sphere. Together, they marked the Render Network’s shift from proof of concept to production-ready tool.</p><p>For DenBoer, Render unlocked professional polish without a studio pipeline. “Yeah, I use Octane pretty much every day,” he said, noting how the network lets him render alternate angles and “frivolous extra shots” that enrich the edit. Force took Render into feature filmmaking, while Brodeur bridged personal experimentation with commercial spectacle. Each artist described Render not as a backup solution but as part of daily creative rhythm — a distributed extension of their own workstation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TJCMzzsgMgiRlh2aYRykdw.png" /><figcaption>(left to right) David Brodeur, Davy Force, and Nick DenBoer discuss the Render Network’s first production uses at RenderCon 2025.</figcaption></figure><h3>Weird work that leads the way</h3><p>For these artists, weirdness is not an accident, it’s a method. 
Each built a career by following personal experiments that felt too strange or impractical for client work, only to find that those projects later defined their style.</p><p>Nick DenBoer went from running a construction company to remixing odd infomercials, to eventually creating 3D music videos and commercials, keeping Smearballs as his sandbox for “weird projects for fun.” Davy Force, who first picked up 3D tools in 1988, said Octane gave him “a whole new, rebirth of excitement about 3D,” reigniting his love of playful, surreal imagery that commercial jobs rarely allow.</p><p>“Clients would be like, you know, that weird blobby, gross thing that you just did this morning,” Brodeur summarized, “and then ask for something only tangentially related.”</p><p>In other words, clients may not want the experimental piece itself, but they hire the creative vision behind it.</p><p>Across the panel, the lesson was consistent: personal work that feels weird at first often becomes the source of the most original commercial ideas.</p><h3>Speed vs. quality</h3><p>When computing power multiplies, ambition follows. DenBoer joked that even with Render, “you still end up pushing scenes until render time fills the schedule.” He considered moving to real-time engines but stayed with Octane because, as he put it, “that extra five or ten percent of quality is worth so much to me.”</p><p>Brodeur echoed that mindset, describing how the Render Network lets artists deliver more without compromising craft. Distributed GPUs do not just shorten deadlines; they make experimentation possible. “I’ll throw extra camera runs or alternate takes,” DenBoer said. “It’s just rendering without me doing any compute power.” For independent creators, that flexibility turns technical access into creative range.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SXQwIAZwkjvtyTYTnSDPqw.png" /><figcaption>Nick DenBoer describes using Octane and Render to generate alternate camera runs at production scale.</figcaption></figure><h3>AI as an assistant, not an author</h3><p>When the discussion turned to generative AI, the tone shifted from excitement to caution. Brodeur explained how he uses AI to upscale renders or interpolate motion “to avoid wasting time on renders that don’t change the creative.” Force embraced AI for cutting matte work and depth maps, admitting, “I never saw it coming.” Both praised AI for removing the tedious parts of production, not for inventing ideas.</p><p>DenBoer noted how some agencies misuse the tools: “Like here’s a difficult idea we made with Midjourney. Now do it for months and make it a reality. It’s just totally backwards.” For these artists, AI belongs in the utility layer, clearing repetitive tasks so human taste can stay at the center. The consensus was simple: use AI to reclaim time, not to replace intention or imagination.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*a7ZHx4v9pPB7vUZfiuOKCA.png" /><figcaption>Brodeur, Force, and DenBoer weigh how AI can save time without taking over the creative process.</figcaption></figure><h3>Beyond JPEGs and toward human craft</h3><p>Looking back, the artists saw the NFT boom as both absurd and transformative. “When something feels like it’s too good to be true, it probably is,” DenBoer said, but he also credited that moment with elevating digital art into the fine art conversation. Brodeur added that the flood of AI imagery has made simple JPEGs feel thin. 
The future, he argued, lies in “richer objects” such as scene graphs and volumetric files that can be re-rendered, reframed, or relit across new displays.</p><p>Their current projects point toward deeper immersion. DenBoer is experimenting with Gaussian splats inside Octane to merge photogrammetry and 3D scenes. Force is co-developing new live-action hybrids with DenBoer, while Brodeur builds physical installations that pair with his digital work. As Brodeur put it during the Q&amp;A, “the pendulum always swings,” and after AI’s surge, audiences will crave human tactility again.</p><p>Technology will keep compounding — more nodes, faster networks — but as this panel made clear, progress only matters when it frees time for judgment and story. As Force closed, “Long live Octane.”</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4a159ce9633c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Render Network Foundation Monthly Report — January 2026]]></title>
            <link>https://rendernetwork.medium.com/render-network-foundation-monthly-report-january-2026-2396f0c8e1a8?source=rss-3d2d407322fb------2</link>
            <guid isPermaLink="false">https://medium.com/p/2396f0c8e1a8</guid>
            <category><![CDATA[decentralized-computing]]></category>
            <category><![CDATA[3d-rendering]]></category>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[decentralized-ai]]></category>
            <category><![CDATA[rendering]]></category>
            <dc:creator><![CDATA[Render Network]]></dc:creator>
            <pubDate>Wed, 11 Feb 2026 21:59:14 GMT</pubDate>
            <atom:updated>2026-02-11T21:59:14.922Z</atom:updated>
            <content:encoded><![CDATA[<p>For the first month of the year, we’re teasing our first Dispersed case study, sharing our limited-time Community Pass to RenderCon 2026, highlighting media coverage of our predictions for 2026, showcasing an artist spotlight with Josh Pierce, and sharing ecosystem metrics.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*PPSExs3q2E7JeLvwqbqSBQ.png" /></figure><h3><strong>RenderCon 2026: Solving the last-mile problem of AI in real creative production.</strong></h3><p><strong>RenderCon is back. 48-hour flash sale! Get your free </strong><a href="https://luma.com/rendercon2026"><strong>tickets</strong></a><strong> now.</strong></p><p>April 16–17, 2026 · Hollywood, CA</p><p>Last year we started RenderCon to explore what’s possible with AI and creative compute. This year we continue on that path.</p><p><a href="https://www.rendercon.ai/2026">RenderCon 2026</a> is focused on the last mile of AI in media production — the gap between experiments and real, production-ready workflows. It’s where issues like control, consistency, and scale still break down, and where the most meaningful work needs to happen next.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*17ole-ldl15ZlED_i0ReqQ.png" /></figure><p>Across two days, we’ll bring together artists, studios, developers, and technologists who are actively shipping work today. Expect practical conversations, real workflows, and fewer hypotheticals. This is about what’s usable now, what’s coming next, and how teams are actually building at the edge of rendering, AI, and media production.</p><p>The future of creative AI is being decided by creators in real, production-ready work.</p><p><strong>From Feb 11th to 13th, a limited number of </strong><a href="https://luma.com/rendercon2026"><strong>free passes</strong></a><strong> are available on a first come, first served basis.</strong></p><p>Starting Friday, February 13th, <a href="https://luma.com/rendercon2026">Community Passes</a> are available for $99. The price goes up to $149 on March 1st. As a Render Network community member, you’re invited to <a href="https://luma.com/rendercon2026">register</a> by Feb 28th before prices go up.</p><p>Learn more at <a href="https://www.rendercon.ai/2026">https://www.rendercon.ai/2026</a></p><p>Get tickets at <a href="https://luma.com/rendercon2026">https://luma.com/rendercon2026</a></p><p>Watch the 2025 RenderCon recap reel:</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/rendernetwork/status/1912538322083487855&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/032dd0785e9c3ae7ac6361ebdfe49b11/href">https://medium.com/media/032dd0785e9c3ae7ac6361ebdfe49b11/href</a></iframe><h3><strong>Ecosystem metrics</strong></h3><p>The Render Network Foundation tracks various metrics on its network. Below is the aggregate view of the month’s activity, previous month’s activity, and month-over-month change for a few key metrics. 
Much of this information is found at <a href="https://stats.renderfoundation.com/">stats.renderfoundation.com</a>.</p><h4><strong>Key Render Network metrics</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1epg_zNBeocNfIScJsfOaw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jY3ybPiOAL4nNAZakUmoVg.png" /></figure><p>Looking for definitions of each of these terms? Check out our <a href="https://rendernetwork.medium.com/render-network-foundation-monthly-report-december-2025-43d956808e3f">previous</a> monthly reports.</p><h3><strong>Governance/Product updates</strong></h3><h4><strong>Dispersed powers the AI-enabled autonomous flows behind MHX’s BITMAP Project</strong></h4><p>The team behind Dispersed (the compute subnet by the Render Network) has been working on product, business, and marketing initiatives to bring the network to life.</p><p>One of the users of Dispersed is the artist known as MHX. You may know this veteran 3D generalist and motion designer for his focus on procedural, data-driven art, including his work, <em>Lattice</em>, which is part of the ongoing <a href="https://www.artechouse.com/program/submerge/">SUBMERGE</a> show at the ARTECHOUSE immersive art space in New York City.</p><p>On January 1st, 2026, artist MHX launched his latest project, the <a href="https://bitmap.mhxalt.com/">BITMAP Project</a>, a generative 3D art machine that transforms a single Bitcoin block each day into a 3D data sculpture. By selecting just one block from the previous day, the system works with thousands of transactions to create a new, time-anchored artwork every 24 hours. It does so without dealing with the scale and noise of the full blockchain.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*alOWt6YSBrp-uLCrvpS00Q.png" /><figcaption>BITMAP project outputs are viewable every day at <a href="https://bitmap.mhxalt.com/">bitmap.mhxalt.com</a></figcaption></figure><p>The goal is to run autonomously and reliably over a year. To that end, BITMAP is built on Dispersed, the decentralized compute network, which acts as the compute layer. BITMAP then prepares and simulates each scene across distributed nodes before sending it to the Render Network for 4K video output. The result is a fully automated, decentralized pipeline that creates, renders, and publishes a new artwork daily without manual intervention.</p><p>On his local computer, each daily piece would take MHX 35 hours of compute. Using Dispersed plus the Render Network for rendering, it takes 15 minutes (roughly a 140x speedup), making the project feasible and able to run smoothly every 24 hours.</p><p>Look out for more details about this case study soon.</p>
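<p>From the description above, the daily loop is simple to picture. The sketch below is our own illustration of that pipeline; every function in it is a stub standing in for MHX’s actual tooling and for the Dispersed and Render Network APIs, none of which are shown here:</p><pre>import datetime

# Hypothetical sketch of BITMAP's daily pipeline, inferred from the prose
# above. Each stub stands in for real tooling; none of this is a real API.

def select_block(day: datetime.date) -> str:
    return f"block-for-{day}"                # stub: one Bitcoin block from yesterday

def fetch_transactions(block: str) -> list:
    return [{"id": i} for i in range(1000)]  # stub: thousands of transactions

def build_scene(txs: list) -> dict:
    return {"elements": len(txs)}            # stub: procedural 3D data sculpture

def dispersed_simulate(scene: dict) -> dict:
    return {"simulated": scene}              # stub: prepared/simulated on Dispersed nodes

def render_4k(sim: dict) -> str:
    return "artwork-of-the-day.mp4"          # stub: 4K output via the Render Network

def run_daily_bitmap() -> str:
    yesterday = datetime.date.today() - datetime.timedelta(days=1)
    scene = build_scene(fetch_transactions(select_block(yesterday)))
    return render_4k(dispersed_simulate(scene))

print(run_daily_bitmap())  # in production, scheduled once every 24 hours</pre>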
<h3><strong>Artist spotlight: Josh Pierce</strong></h3><p>In our first Creative Foundations Instagram Live of the year, we caught up with <a href="https://www.joshpierce.net/home">Josh Pierce</a> to talk through <em>Satori: The Infinite Now</em>, his immersive digital artwork created for <a href="https://www.artechouse.com/submerge-artists/">ARTECHOUSE</a>.</p><p>Josh is a visual artist and art director whose work blends surreal, natural environments with abstract energy forms, designed to draw viewers into a quiet, contemplative state. His practice is heavily influenced by mindfulness and spiritual presence, with a focus on stillness, awe, and the emotional weight of the present moment.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*DB0oDSmDBifqGx07N21_2Q.jpeg" /><figcaption>The Infinite Now by Josh Pierce, as experienced at ARTECHOUSE New York.</figcaption></figure><p>We walked through the project from brief to final delivery, keeping it real about the pressures that come with commissioned work — tight timelines, high expectations, and ambitious visuals. Josh shared how technically complex moments, especially dense atmospheric fog and caustic lighting, would have been extremely limiting to explore using local rendering alone.</p><p>By moving rendering onto the Render Network, he was able to iterate faster, explore subtle variations, and stay focused on creative intent rather than waiting on hardware. It unlocked more freedom in the process and gave him the confidence to push the work further without friction.</p><blockquote>From the Render Network on X: “One takeaway from our recent chat with @j_pierceart: artists don’t need massive rigs or expensive GPUs anymore. A laptop + Render Network = a render farm or supercomputer at your fingertips. Travel the world and keep shipping work, without a giant desk rig or expensive... pic.twitter.com/bVLWG5dIXo”</blockquote><p>We wrapped with a broader conversation around creative growth, presence, and how having the right infrastructure in place can quietly change what artists feel comfortable attempting.</p><p>Catch up with the convo now over on our <a href="https://www.instagram.com/reel/DUG2JVVAbBA/?utm_source=ig_web_copy_link&amp;igsh=MzRlODBiNWFlZA==">Instagram</a>!</p><h3><strong>In the News</strong></h3><h4><strong>CCN — From Cloud to Crowd: How Render Network Scales AI and Rendering With Decentralized GPUs</strong></h4><p>CCN’s Dr. Lorena Nessi sat down with Trevor Harries-Jones, Render Network Foundation board member. 
They talk about:</p><ul><li>How the Render Network has helped scale digital creativity and art as tech advances.</li><li>Why, despite AI already unlocking a lot of opportunity, there’s still a long runway ahead.</li><li>How the biggest opportunities, such as inference, don’t need the latest and greatest hardware.</li><li>How blockchain’s many promises (like smart contracts, no intermediaries, micropayments at scale) align perfectly with AI and creativity working hand in hand.</li></ul><p>Read the <a href="https://www.ccn.com/education/crypto/ai-rendering-decentralized-gpu-computing-render-network/">article here</a>.</p><p>Or watch the full interview on YouTube:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FbV4SIUA5lo0%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DbV4SIUA5lo0&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FbV4SIUA5lo0%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/20f9eac20f3ee628c019817b1d6c2b8b/href">https://medium.com/media/20f9eac20f3ee628c019817b1d6c2b8b/href</a></iframe><h4><strong>TheStreet Roundtable — Render Network director says DePIN could ease AI bottlenecks</strong></h4><p>Trevor Harries-Jones, board member at the Render Network Foundation, shared a perspective on how decentralized GPU networks don’t need to replace centralized systems. They can, instead, take on other AI workloads that are currently bottlenecked in traditional incumbents. That includes inference, which makes up 80% of GPU work.</p><p><a href="https://www.thestreet.com/crypto/innovation/render-network-director-says-depin-could-ease-ai-bottlenecks">https://www.thestreet.com/crypto/innovation/render-network-director-says-depin-could-ease-ai-bottlenecks</a></p><h4><strong>DigiTimes Asia — Render Network predicts fully brand-owned AI experiences and GPU-powered creation surge in 2026</strong></h4><p>The Render Network Foundation shared a few predictions for 2026 and beyond around the future of AI, rendering, and creativity. Foundation board director Trevor Harries-Jones’ prediction was quoted in the article: “The AI systems will power the entire experience, giving brands full control over their digital presence,” he said. The article covered the prediction that brands will shift toward owning their own AI experiences in 2026. The idea, as explained in the article, is that AI systems would autonomously embody a company’s style and identity across the customer journey. From interactive media to support and service and more, brands would gain round-the-clock oversight and input into how AI drives their brand equity.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*JI2gyV7GeLlhy8Di8rjHlQ.jpeg" /></figure><p>For small or medium-sized brands, building fully functional, autonomous AI systems will be prohibitively costly and technically challenging. However, the IP and/or AI models can remain under the brands’ control while the underlying compute is outsourced. That’s where a distributed network like Dispersed, the AI compute-focused network from the Render Network, can reduce risk for brands.</p><p>As for creators and studios, the article went on to further explain the prediction that consumer GPUs will be able to run advanced AI “world models,” turning everyday PCs and laptops into high-performance AI machines.</p><p>Why is that exciting? 
If creators can use them for complex models in real time, Trevor Harries-Jones was quoted as saying, “Suddenly, the same GPUs that power games can generate interactive worlds, smart simulations, and next-level AI experiences directly in the hands of those who use them.”</p><p><a href="https://www.digitimes.com/news/a20260114VL205/2026-gpu.html">https://www.digitimes.com/news/a20260114VL205/2026-gpu.html</a></p><h3><strong>Announcements</strong></h3><h4><strong>ICYMI: The Render Network merchandise store is officially open for business</strong></h4><p>If you didn’t catch our announcement last month, we want to remind you that we’ve launched an official <a href="https://merch.renderfoundation.com/">merchandise store</a> where you can pick up anything from t-shirts to hoodies, hats, and even mugs with designs created by artists who use the Render Network.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zM5vS7myz2h-iwlF3gRnaQ.png" /></figure><h3><strong>What’s Coming Up</strong></h3><h4><strong>The Render Network team at Rendr Festival in Belfast, Northern Ireland, Feb 12–13</strong></h4><p>The Render Network team will be at the inaugural <a href="https://www.rendrfestival.com/">Rendr Festival</a> as a sponsor. The two-day event promises to include some great behind-the-scenes insights from creators of some of the most iconic content, including KPop Demon Hunters, Stranger Things, Fallout, The Last of Us, and Claynosaurz. If you’re in the area, stop by and say hi to our team!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*YvS8unHCG9LeiqNZh4HKLg.png" /><figcaption>Rendr Festival takes place in Belfast, Northern Ireland Feb 12–13</figcaption></figure><h4><strong>SUBMERGE: Beyond the Render has been extended through March</strong></h4><p>The <a href="https://www.artechouse.com/program/submerge/"><em>SUBMERGE: Beyond the Render</em></a> exhibit has been extended to March of 2026.</p><p>If you haven’t already and are near New York City in the next couple of months, check it out and experience one of the most immersive, large-scale compilations of digital art rendered at 18K, all done on the Render Network’s decentralized GPUs.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2396f0c8e1a8" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From Super Bowl LIX to RenderCon 2026: CGI at Scale and the Next Era of Global Compute]]></title>
            <link>https://rendernetwork.medium.com/from-super-bowl-lix-to-rendercon-2026-cgi-at-scale-and-the-next-era-of-global-compute-f2840e850313?source=rss-3d2d407322fb------2</link>
            <guid isPermaLink="false">https://medium.com/p/f2840e850313</guid>
            <dc:creator><![CDATA[Render Network]]></dc:creator>
            <pubDate>Sat, 07 Feb 2026 18:19:09 GMT</pubDate>
            <atom:updated>2026-02-07T22:11:41.104Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4kBOJq974I_OtjjWdk40zg.jpeg" /></figure><p>2026 is moving fast.</p><p>As Super Bowl LIX lands this weekend, it’s another reminder of how spectacular graphics and CGI now sit at the center of live sports, music, and culture.</p><p>From broadcast packages to halftime shows, visuals aren’t just decoration anymore, they are the experience.</p><p>At Render Network, we’re excited about what this means for the next decade: creators tapping global GPU compute to ship cinematic visuals at massive scale.</p><h3><strong>Super Bowl LIX: Where Live Sports Meets Cinematic Graphics</strong></h3><p>Every year, the Super Bowl pushes the bar higher with real-time graphics, immersive stage design, and production pipelines that increasingly resemble film.</p><p>This convergence of live events and CGI is exactly where visuals are headed.</p><h4><strong>Super Bowl LVIII visuals powered by Render Network</strong></h4><p>Render Network user and multi-Emmy award-winning artist Andy Torres has delivered visuals across major moments in live entertainment.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/945/1*ktYQGodiUxVukM-Ydzm-BQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/956/1*9XFsMEuT0MJl_Sjs4xIcXQ.png" /></figure><p>He’s been the creative force behind the Kansas City Chiefs’ visuals since 2018 — including the ultra-high-resolution “Countdown to Kickoff” trailer that wrapped around the Caesars Superdome.</p><p>Fans in New Orleans, and hundreds of millions watching worldwide, saw the piece come alive, with visuals rendered entirely on Render Network using thousands of decentralized GPUs.</p><p>Andy’s work also spans:</p><ul><li>2024 Grammy Awards</li></ul><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/andytorres_a/status/1887690217240318025%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/1e4860082198df7d6a67ca75661cfdb3/href">https://medium.com/media/1e4860082198df7d6a67ca75661cfdb3/href</a></iframe><ul><li>Projects tied to Stranger Things on Render Network</li><li>Super Bowl LVIII</li><li>Maybe visuals connected to Super Bowl LX this weekend</li></ul><p>It’s a real-world example of how cinematic graphics for the biggest sporting events are now being powered by decentralized compute.</p><p>Check out Andy’s work at <a href="https://andytorresfilms.com">andytorresfilms.com</a> or follow him on Instagram <a href="https://www.instagram.com/andytorres_a/">@andytorres_a</a>, and keep an eye out for his visuals during this year’s Super Bowl.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/rendernetwork/status/1899963363129479663%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/2eeea67f157b79b96852fff2d390d978/href">https://medium.com/media/2eeea67f157b79b96852fff2d390d978/href</a></iframe><h3>Crafting Immersive Worlds from Puerto Rico</h3><p>STURDY, the team behind an impressive list of productions for this year’s half-time performer Bad Bunny, led content design and production for his <em>No Me Quiero Ir De Aquí </em>Puerto Rico residency and ongoing <em>DeBí TiRAR MáS FOToS</em> World Tour, translating island culture and 
energy into large-scale immersive visuals.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/614/1*Y5LTrdku2jqpFOBdXbdNNQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/558/1*uYl6yd57KjgUcom9orgGiA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/570/1*EGU1eNtxwwCVN3oAnAdcbw.png" /></figure><p>Those projects were built using Octane, while Render Network has been used by the studio on other live events and productions. Together, these pipelines show how modern workflows are pushing cinematic rendering to new scale — high fidelity, fast iteration, massive displays, and global audiences are now the standard.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/RocNation/status/2019863914474041581%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/a6f156af765f1fc7dfe89f2b86953bea/href">https://medium.com/media/a6f156af765f1fc7dfe89f2b86953bea/href</a></iframe><h3>RenderCon 2026: Early Bird Tickets Release Next Week</h3><p>Looking ahead: Pre-sale (community pass) RenderCon tickets go live early next week, with several main-stage speakers already confirmed and panelists locked in.</p><p><a href="https://www.rendercon.ai/">RenderCon - Explore the future of media, technology, and art</a></p><p><strong>RenderCon 2026 takes place April 16–17 at Nya Studios in Hollywood.</strong></p><p>If you’re planning to join, now’s the time to plan your travel to Los Angeles for two full days of talks, hands-on workshops, and conversations with leaders across media, technology, and digital art.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/rendernetwork/status/2016199499455025351%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/906bde6ff730a66a66b705c9882afa09/href">https://medium.com/media/906bde6ff730a66a66b705c9882afa09/href</a></iframe><h3>Where This Is All Going…</h3><p>High-end visuals are no longer confined to studios; they’re powering moments watched by hundreds of millions. Live sports, music, film-grade pipelines, and real-time CGI are now standard across cultural events. 
Even this year’s Super Bowl ads reflect the shift, like Xfinity’s Jurassic Park-themed spot bringing back legacy characters with modern VFX for a new generation.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/discussingfilm/status/2018338906547040259%3Fs%3D46%26t%3DfnW7msWZVIjVltebwO3rjw&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/eaeeaa9a905d32b5a1388ccad7c11060/href">https://medium.com/media/eaeeaa9a905d32b5a1388ccad7c11060/href</a></iframe><p>That’s the future we’re building toward: artists and production teams creating at massive scale without centralized limits, backed by global GPU infrastructure.</p><p>Enjoy Super Bowl weekend, and take a moment to appreciate the invisible layer underneath it all: the creators, the tools, and the compute making these experiences possible.</p><h3><strong>The Future of Olympic Viewing</strong></h3><p>With the Olympics happening in Italy for Milano Cortina 2026, it’s hard not to think about where sports viewing is going next.</p><p>This is the shift: from watching a game to being inside it.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/bilawalsidhu/status/2019831831110258883%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/530b1c16b173aa4e1ed9d8ce16efc35f/href">https://medium.com/media/530b1c16b173aa4e1ed9d8ce16efc35f/href</a></iframe><p>Arcturus is building 4D <a href="https://x.com/rendernetwork/status/2009361638332485865?s=20">Gaussian splatting</a> tech that captures sporting events from every angle, pushing volumetric video far beyond 360 or 3D-180. 
Experiencing it in a headset puts you closer to the action than any stadium seat, and makes traditional formats feel instantly outdated.</p><p>If this is where we’re headed, the next Olympics won’t just be watched, they’ll be entered.</p><h3><strong>Latest AI Headlines:</strong></h3><ul><li>RTFM (Real-Time Frame Model) explores real-time, interactive frame generation from images, synthesizing new views on the fly without explicit 3D.<br>While Marble focuses on high-fidelity, persistent worlds via 3D Gaussian splats, the interesting frontier is how ideas from both approaches may eventually converge.</li></ul><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/theworldlabs/status/2019454593957319161%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/e810347c8f4b2f2c629f277b75704b38/href">https://medium.com/media/e810347c8f4b2f2c629f277b75704b38/href</a></iframe><ul><li>Google DeepMind introduces Project Genie, its most advanced experimental world model, generating playable worlds in real time from text prompts.</li></ul><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/demishassabis/status/2016925155277361423%3Fs%3D46%26t%3Dg1-X4LSLggP6cHtR8yF_SA&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/9355b81aa0e3384088b7cb6f6b56e77b/href">https://medium.com/media/9355b81aa0e3384088b7cb6f6b56e77b/href</a></iframe><ul><li>LingBot-world launches as an open-source frontier world model, pushing high-fidelity simulation, long-horizon consistency, and physical + game-world modeling.</li></ul><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/yinghaoxu1/status/2016547601563484189%3Fs%3D46%26t%3DfnW7msWZVIjVltebwO3rjw&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/4c0abbfd33534cdfbfe9aa32d5114f8c/href">https://medium.com/media/4c0abbfd33534cdfbfe9aa32d5114f8c/href</a></iframe><ul><li>Technical showcase: peer-to-peer GPU sharing, entirely in the browser. AI Grid runs LLMs locally with WebGPU, letting users donate spare compute or borrow from others: no installs, no cloud, just decentralized inference from a URL.</li></ul><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/webgl_webgpu/status/2019063404418195739%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/233297b9a084570282f3bb0aba0b13dd/href">https://medium.com/media/233297b9a084570282f3bb0aba0b13dd/href</a></iframe><ul><li>Waymo leverages Google’s Genie3 world model for hyperrealistic autonomous driving simulation, modeling some of the rarest and most complex scenarios, from tornadoes to planes landing on freeways, and creating data sets for scenarios with no existing sample data.</li></ul><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/Waymo/status/2019804616746029508%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/524683fced2eb19a662261c05b99813a/href">https://medium.com/media/524683fced2eb19a662261c05b99813a/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f2840e850313" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[GPU Markets and Compute at RenderCon 2025]]></title>
            <link>https://rendernetwork.medium.com/gpu-markets-and-compute-at-rendercon-2025-1b883fad60bc?source=rss-3d2d407322fb------2</link>
            <guid isPermaLink="false">https://medium.com/p/1b883fad60bc</guid>
            <dc:creator><![CDATA[Render Network]]></dc:creator>
            <pubDate>Thu, 05 Feb 2026 00:39:19 GMT</pubDate>
            <atom:updated>2026-02-05T00:39:19.810Z</atom:updated>
            <content:encoded><![CDATA[<p>At RenderCon 2025, the GPU markets and compute panel returned to a simple point: scale comes from decentralization when you design for it.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FXNMmuRmtcHE%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DXNMmuRmtcHE&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FXNMmuRmtcHE%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/5b08b8aa4a54479409a59c9418e6e63d/href">https://medium.com/media/5b08b8aa4a54479409a59c9418e6e63d/href</a></iframe><p>Led by moderator Trevor Harries-Jones, of the Render Network Foundation Board of Directors, the panel featured Divesh Naidoo, CEO of Simulon; Mike Anderson, core contributor at THINK; and Edward Katzin, CEO of Jember. They discussed how an API-driven approach can shift rendering, inference, and other offline jobs to distributed consumer GPUs, rather than to traditional clouds.</p><p>For Harries-Jones, the challenge was never raw GPU power, but whether idle hardware could operate as a reliable market for batched or asynchronous work. If a job can be chunked or queued, it can shift to decentralized infrastructure and free creators from constant capacity hunting. Viable use cases have surfaced across VFX, agents, and finance, with early integrations like Swatchbook proving the rendering model and proposals such as RNP-19 shaping what compute could become. “We really need your help,” he told the room.</p><p>For creators, that AI-compute specific proposal is not abstract; it defines whether tomorrow’s tools can scale on infrastructure priced for experimentation rather than only for incumbents.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Wl7plHI5WFjEDtb5CKNA9g.png" /><figcaption>Trevor Harries-Jones (right) frames RenderCon’s GPU markets panel around an API driven path to decentralized compute, joined from left to right by Divesh Naidoo, Mike Anderson, and Edward Katzin.</figcaption></figure><h3>Simulon: Real time VFX at network scale</h3><p>Simulon founder Divesh Naidoo traced his journey back to Jurassic Park, the film that made him want to tell dinosaur stories of his own. The reality of production, he said, was more limiting than the dream: fragmented tools, steep learning curves, and compute that favored big studios.</p><p>Simulon is his answer. The app wraps a modular ML pipeline in an AR and VR driven interface so that professionals, prosumers, and newcomers can capture performances and output a fully editable scene graph without juggling multiple applications. The workflow is unified in the front end and heavy in the back end, exactly the kind of design that stresses compute when adoption spikes.</p><p>Naidoo described Simulon’s test integration with the Render Network as fast and unusually collaborative. “We were up and running in a day as well,” he said, noting that requested features were turned around within a day during testing. For a young product, that responsiveness matters as much as raw throughput.</p><p>The demo sequence made the point. A car and dinosaur interaction that reads as live action was rendered in Blender at 4K using distributed cloud resources. 
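</p><p>That is the chunk-and-queue pattern Harries-Jones described, and it is simple to sketch. The snippet below is purely illustrative, not the Render Network’s actual interface (the job dictionaries and in-memory queue stand in for a real distributed-GPU API): it splits a frame range into independent jobs that idle GPUs anywhere could pick up in parallel.</p><pre>
# A minimal sketch of chunk-and-queue batch rendering (illustrative only;
# the scene name and in-memory queue stand in for a real distributed-GPU API).
from queue import Queue

def chunk_frames(start, end, size):
    """Split an inclusive frame range into fixed-size chunks."""
    frames = list(range(start, end + 1))
    return [frames[i:i + size] for i in range(0, len(frames), size)]

def enqueue_jobs(scene, start, end, size=24):
    q = Queue()
    for chunk in chunk_frames(start, end, size):
        q.put({"scene": scene, "frames": chunk})  # one independent job per chunk
    return q

jobs = enqueue_jobs("sequence_4k.blend", 1, 360, size=24)
print(jobs.qsize(), "jobs ready for idle GPUs")  # 15 parallel jobs
</pre><p>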
As Naidoo put it, “rendering in about 15 minutes for the whole sequence.” No manual compositing, no per shot babysitting, just shots flowing from capture to render.</p><p>For artists, that speed tightens the feedback loop. When overnight renders become coffee break renders, it is easier to push story, timing, and nuance instead of rationing revisions around a farm schedule.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OiGYixBM2iH-34dD5yMdNw.png" /><figcaption>Naidoo showcases a Simulon capture that drops a fully animated dinosaur into a real scene, rendered in minutes on distributed GPUs.</figcaption></figure><h3>THINK: agents that route their own compute</h3><p>If Simulon is about pipelines, THINK is about agency. Core contributor Mike Anderson presented the THINK Agent Standard, an open source framework for onchain AI agents that can talk to each other, call tools like Blender, and route inference to decentralized providers.</p><p>Onstage, an agent trained on Render’s documentation answered practical questions about the network, then handed off tasks to other agents via a dashboard that mixed creative tools and infrastructure slots. The UI was still a work in progress, but the direction was clear: agents you own, intelligence you can move.</p><p>Anderson argued that this era of AI needs a browser equivalent, a neutral interface that lets builders launch their own “applications of intelligence” on shared infrastructure. “If I — if I was going to be betting on something, I would bet that Render or something that looks exactly like Render is going to become the way that compute becomes a commodity that we can all like, kind of build our lives on top of.”</p><p>In practice, that means routing jobs becomes a policy choice, not a rebuild. An agent can run locally for privacy, burst to decentralized GPUs for heavier inference, and coordinate other agents that specialize in tasks like asset generation or compliance checks. For individual creators, that turns a single workstation into a studio of narrow AIs, each with access to global compute.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*df1hwj9i0CcNwnAUZOfFmw.png" /><figcaption>Anderson demonstrates THINK’s agent dashboard as it drives MCP powered creative tools and routes tasks across decentralized GPUs.</figcaption></figure><h3>Jember: chain of trust on decentralized GPUs</h3><p>Where Simulon and THINK focused on images and agents, Jember CEO Edward Katzin brought a financial systems vantage point. His background is not in VFX but in moving money for hedge funds, banks, and card networks, where “trillions of dollars a day” is a normal scale and compliance failures are existential.</p><p>Jember’s core product is what they call Chain of Trust Choreography, an infrastructure layer that unifies transaction orchestration and compliance monitoring into a shared, verifiable record. The challenge he was handed before RenderCon was specific: could parts of that inference-heavy workflow run on the Render Network without compromising security or reliability?</p><p>The answer was yes, with the right architecture. Working with the Manifest Network for secure compute and Jember as the orchestrator, Katzin’s team redesigned their workflows so that non sensitive, non real time inference runs as asynchronous batch jobs on Render’s GPUs. The results drew a rare moment of financial exuberance from the stage. 
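</p><p>The routing rule behind that redesign is easy to picture. Here is a toy sketch of the split Katzin described, with invented field names: sensitive or real-time work stays in a secure enclave, while everything else queues as asynchronous batches for distributed GPUs.</p><pre>
# Toy version of the workload split described above; field names are invented.
def route(job):
    if job["sensitive"] or job["realtime"]:
        return "secure-enclave"        # private data and logic stay put
    return "distributed-gpu-batch"     # offline inference goes to the network

workflows = [
    {"name": "parse-regulations", "sensitive": False, "realtime": False},
    {"name": "screen-live-transaction", "sensitive": True, "realtime": True},
]
for w in workflows:
    print(w["name"], "->", route(w))
</pre><p>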
The old batch pattern, he said, still works: “and the Render Network performed magnificently at a 10th of the cost.”</p><p>Under the hood, Render’s distributed inference parses thousands of pages of regulations and enforcement guidance. Jember’s AI workflows structure that text into a machine readable schema and build a customer defined knowledge graph that maps obligations by jurisdiction, business unit, actor, and transaction. Manifest handles the most sensitive data and private logic, while Jember choreographs the whole stack.</p><p>For compliance teams, that turns months of audit prep into near real time queries. For the broader ecosystem, it is a proof that decentralized GPUs are not limited to media. The same pattern, a shared source of truth with scoped access, maps neatly onto film or game productions that need provenance, version control, and trust across many contributors.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hs9ZEpyulUN8LQXgzKEGQg.png" /><figcaption>Katzin outlines how Jember uses decentralized GPUs for offline inference, turning compliance heavy workflows into scalable, cost efficient processes.</figcaption></figure><h3>Where GPU markets go next</h3><p>The same architecture kept on showing up across VFX, agents, and finance: push what can be queued to the edge, keep what is sensitive in secure enclaves, and route everything else to the most efficient GPUs you can reach. Consumer hardware runs at incremental cost; centralized clouds charge exponentially more as you go up the stack.</p><p>That hybrid thinking is now baked into how the Render Network expands. Compute-oriented proposals like RNP-19 will guide which jobs are prioritized, how operators of nodes are incentivized, and what guarantees users can rely on. Governance isn’t a side process; it’s part of the infrastructure that partners are already building against.</p><p>For Simulon, it’s faster renders that keep pace with real time capture and democratized tools. THINK’s world is one where agents can treat compute as a commodity, switching providers without losing autonomy. For Jember, it’s compliance infrastructure that scales in lockstep with transaction volume, not cloud bills.</p><p>The cultural impact is cumulative. When compute is abundant and portable, the constraint shifts from hardware to imagination, and from gatekeeping to coordination. Early integrations are rarely glamorous. They are a string of API requests, error logs, and small fixes. Which is precisely why a simple field report from Naidoo resonated so strongly: “Integration was incredibly easy.”</p><p>The signal is clear.</p><p>When the network is responsive, partners ship. When partners ship, GPU markets stop being an idea and start becoming the default fabric behind the stories, agents and systems people are already trying to build.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1b883fad60bc" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From Gaussian Splats to Immersive Experiences Taking Over 2026]]></title>
            <link>https://rendernetwork.medium.com/from-gaussian-splats-to-immersive-experiences-taking-over-2026-171e29dbc284?source=rss-3d2d407322fb------2</link>
            <guid isPermaLink="false">https://medium.com/p/171e29dbc284</guid>
            <dc:creator><![CDATA[Render Network]]></dc:creator>
            <pubDate>Mon, 26 Jan 2026 00:07:11 GMT</pubDate>
            <atom:updated>2026-01-26T00:07:11.994Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2Tcr9ljjZs6eOC0MGpPf-w.jpeg" /></figure><p>2026 continues to move fast. From real-world volumetric production and Gaussian splats at scale to cross-collaboration immersive experiences, and behind-the-scenes momentum toward a bigger RenderCon, here’s what you need to know this week across the Render Network ecosystem.</p><h3>David Ariew at Sphere: Viridian Lightfall</h3><p>David Ariew continues to push large-format digital art forward with “Viridian Lightfall,” the first XO/Art collaboration inspired by <em>The Wizard of Oz</em> at Sphere.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=http%3A//x.com/SphereVegas/status/2014032920714023403%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/9cbdbf5e03d83e74d10280d06b00cd50/href">https://medium.com/media/9cbdbf5e03d83e74d10280d06b00cd50/href</a></iframe><p>The piece traces Dorothy’s journey from sepia Kansas into the vibrant Land of Oz, beginning in fields of golden light and building through a crescendo of color before arriving at the luminous green of Emerald City.</p><p>Designed for an ultra-large, immersive canvas, the work demanded extreme detail, scale, and rapid iteration.</p><p><em>As David shared:</em></p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=http%3A//x.com/DavidAriew/status/2014049940285300914%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/b7aea2c88ec7a1749ab90b4c3e29076b/href">https://medium.com/media/b7aea2c88ec7a1749ab90b4c3e29076b/href</a></iframe><p>The project highlights how distributed GPU rendering enables artists to design for environments like Sphere without compromising ambition, resolution, or creative control.</p><h3>SUBMERGE x VivoBarefoot: Movement Meets Immersive Media</h3><p>SUBMERGE: Beyond the Render remains open through May, with January marking a key moment for immersive, large-format experiences.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=http%3A//x.com/rendernetwork/status/2014830594937573822%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/244a1b5a06a1b6eea62f29411ad91360/href">https://medium.com/media/244a1b5a06a1b6eea62f29411ad91360/href</a></iframe><p>On Friday, SUBMERGE brought together immersive media, movement, and wellness through a collaboration with VivoBarefoot — a footwear and wellness brand known for its minimalist (“barefoot”) philosophy.</p><p>Inside SUBMERGE’s 18K, 270-degree cinematic environment, rendered entirely on distributed GPUs, technologists, creators, journalists, and wellness leaders explored how physical activation and immersive storytelling intersect. 
The experience showcased what’s possible when real-world movement, large-format visuals, and decentralized compute converge into a single environment.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=http%3A//x.com/rendernetwork/status/2013736720097800310%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/3915b4245752eb4e300e68d175be8980/href">https://medium.com/media/3915b4245752eb4e300e68d175be8980/href</a></iframe><h3>The Future of Creation: AI, Decentralized GPUs, and What Comes Next</h3><p>Trevor Harries-Jones sat down with CCN.com to talk all things Render and answer a big question:</p><p><em>How does the future of AI creation and rendering evolve from here?</em></p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=http%3A//x.com/CCNDotComNews/status/2014660666733330945%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/856f0fa8a56ab65c7d0caae629eccdc7/href">https://medium.com/media/856f0fa8a56ab65c7d0caae629eccdc7/href</a></iframe><p>Stay tuned for the full interview.</p><h3>Volumetric Capture at Scale: A$AP Rocky’s Helicopter</h3><p>The new <em>Helicopter</em> music video from A$AP Rocky is a real-world showcase of volumetric capture and Gaussian splats rendered in OctaneRender.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=http%3A//x.com/RadianceFields/status/2011185108917588361%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/938b229e28b0d6ab7e37865f4bdb3511/href">https://medium.com/media/938b229e28b0d6ab7e37865f4bdb3511/href</a></iframe><p>Built using next-generation Octane features, the project demonstrates how radiance-field and splat-based workflows are moving from experimental R&amp;D into full production pipelines. 
Entire scenes were rendered in Octane, enabling complex volumetric environments that would have been impractical just a year ago.</p><p>On Hacker News, David Rhodes, co-founder of CG Nomads, shared how GSOPs (Gaussian Splatting Operators) for Houdini were used in combination with OctaneRender to produce the music video.</p><p><a href="https://news.ycombinator.com/item?id=46670024">Gaussian Splatting - A$AP Rocky &quot;Helicopter&quot; music video | Hacker News</a></p><p>The post offered a rare behind-the-scenes look at how splat-based tools are being integrated into professional DCC pipelines today, along with open-source access for artists who want to experiment themselves.</p><h3><strong>Octane 2026 on Render</strong></h3><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=http%3A//x.com/rendernetwork/status/2010788695335403774%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/24505553012ad18635ca9d30ae8ec615/href">https://medium.com/media/24505553012ad18635ca9d30ae8ec615/href</a></iframe><p>With Octane 2026 now live on the Render Network, artists and studios have production-grade support for Gaussian splats and radiance-field workflows running at scale, unlocking faster iteration, higher fidelity, and real-world deployment across music, film, and immersive media.</p><h3>World Economic Forum 2026: Talk of an AI Bubble Is Overblown…</h3><p>At this year’s World Economic Forum, much of the conversation around AI focused less on speculation and more on <a href="https://www.weforum.org/stories/2026/01/elon-musk-technology-abundant-future-davos-2026/">real economic impact</a>. What’s holding progress back isn’t the technology itself, but the ability to run it at scale. As AI-powered creation and production move into real-world use, demand for flexible, high-performance GPU infrastructure continues to grow.</p><p><a href="https://blogs.nvidia.com/blog/davos-wef-blackrock-ceo-larry-fink-jensen-huang/">&#39;Largest Infrastructure Buildout in Human History&#39;: Jensen Huang on AI&#39;s &#39;Five-Layer Cake&#39; at Davos</a></p><h3>Behind the Scenes: RenderCon 2026 Takes Shape</h3><p>Planning for RenderCon 2026 is well underway.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=http%3A//x.com/rendernetwork/status/2011215981981344148%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/0fb81dab86b110102699205bae3cd626/href">https://medium.com/media/0fb81dab86b110102699205bae3cd626/href</a></iframe><p>This year’s conference expands to two full days, taking place April 16–17, 2026 at Nya Studios in Hollywood, Los Angeles.</p><ul><li>Day One will feature talks and discussions across 3D, AI, and rendering</li><li>Day Two will be a full day of hands-on workshops designed specifically for artists and builders</li></ul><p>Programming and speakers are actively being confirmed, with more details to be shared soon. 
Now is a good time to save the date and begin planning travel.</p><p><a href="https://renderfoundation.com/rendercon">Save the Date | Rendercon 2026</a></p><p>In the meantime, recorded sessions from RenderCon 2025, including conversations on the future of creative workflows, AI, and media, are available to watch at <a href="http://rendercon.ai/"><strong>rendercon.ai</strong></a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=171e29dbc284" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[“AI Compute Is Skyrocketing” - CES 2026, Octane 2026, New Creator Releases, and the Week in Render]]></title>
            <link>https://rendernetwork.medium.com/ai-compute-is-skyrocketing-ces-2026-octane-2026-new-creator-releases-and-the-week-in-render-8a4fc4e9713f?source=rss-3d2d407322fb------2</link>
            <guid isPermaLink="false">https://medium.com/p/8a4fc4e9713f</guid>
            <dc:creator><![CDATA[Render Network]]></dc:creator>
            <pubDate>Sat, 17 Jan 2026 00:50:36 GMT</pubDate>
            <atom:updated>2026-01-17T00:50:36.233Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xS6MJn9ZHtFp6uev-qMLqQ.png" /></figure><p>2026 is already off to a fast start. From open-sourced generative art and real-world production workflows to CES signals, Octane 2026 going live, and early momentum toward RenderCon 2026, here’s what you may have missed this week in the Render Network ecosystem.</p><h3>Octane 2026 Is Live on the Render Network!</h3><p>OTOY officially released Octane 2026, and it’s already being used in production on the Render Network.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=http%3A//x.com/rendernetwork/status/2010788695335403774%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/24505553012ad18635ca9d30ae8ec615/href">https://medium.com/media/24505553012ad18635ca9d30ae8ec615/href</a></iframe><p>One highlight: the visuals for <em>Helicopter</em>, the new music video from A$AP Rocky’s fourth studio album, were built using Gaussian splats and next-generation Octane features. Entire scenes were rendered on Octane.</p><p>If you’re exploring Gaussian splats, path-traced workflows, or large-scale scenes, this release is a major step forward.</p><p>See this thread breaking down Gaussian splats:</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/rendernetwork/status/2009361638332485865%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/ca2f8855fca40af9978e1ca2e729f671/href">https://medium.com/media/ca2f8855fca40af9978e1ca2e729f671/href</a></iframe><h3><strong>Introducing the Signature Series — Limited Edition in the Render Network Merch Store!</strong></h3><p>Human creativity x limited drops. Designed by artists.</p><p><em>ARTISTS ON THE NETWORK — FEATURING:</em><br>Raphael Rau | Silverwing <br>Machina Infinitum</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/459/1*QAk70tcUNc1gHyQpYAqCqQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/435/1*i5LFEyl_ZQtnULs8XgXKGA.png" /></figure><p><em>Shop the collections:</em></p><p><a href="https://merch.renderfoundation.com/">Render Network Merch Store</a></p><h3>Bitmap: Open-Sourced and Rendered Daily</h3><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/MhxAlt/status/2006064782365884783%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/d18bd0ba8831473922ad236d0370f9f5/href">https://medium.com/media/d18bd0ba8831473922ad236d0370f9f5/href</a></iframe><p>This week, MHX published a step-by-step guide breaking down how Bitmap, a live generative 3D data sculpture driven by daily Bitcoin block data, was built.</p><p><a href="https://t.co/KqQwRmmXUz">bitmap.mhxalt.com/details</a></p><p>Every 24 hours, an automated routine runs on the Render Network, constructing and rendering a new visual state using decentralized GPUs. 
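</p><p>The shape of such a routine is easy to sketch. The snippet below is only an illustration of the idea, not MHX’s published pipeline (the palette names are invented): it reads the newest Bitcoin block hash from a public API and derives deterministic scene parameters from it.</p><pre>
# Illustrative daily routine (not the actual open-sourced Bitmap pipeline):
# seed scene parameters deterministically from the newest Bitcoin block hash.
import json, random, urllib.request

def latest_block_hash():
    # Blockstream's public API returns the current chain tip's block hash.
    url = "https://blockstream.info/api/blocks/tip/hash"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode().strip()

def scene_params(block_hash):
    rng = random.Random(block_hash)   # same block always yields the same scene
    return {
        "seed": block_hash[:16],
        "palette": rng.choice(["ember", "glacier", "signal"]),  # invented names
        "density": round(rng.uniform(0.2, 0.9), 3),
    }

params = scene_params(latest_block_hash())
print(json.dumps(params, indent=2))   # handed to the day's render job
</pre><p>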
Each render captures a unique snapshot of on-chain activity, creating an evolving, living artwork.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/MhxAlt/status/2011842564216013273%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/fed31bfed7eaf56858ff54136515235f/href">https://medium.com/media/fed31bfed7eaf56858ff54136515235f/href</a></iframe><p>What makes Bitmap stand out is its transparency: the full pipeline is open-sourced, offering a real-world example of autonomous, generative systems powered by decentralized compute.</p><h3>Community Spaces: Builders in Conversation</h3><p>This week’s community Space sparked thoughtful discussion across rendering, AI, and compute infrastructure. We’ve pulled a standout quote for a shareable graphic, a reminder that these Spaces are working conversations, not just announcements.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/rendernetwork/status/2011875308954402998%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/5476aea894c5ac30e5dfa9ea56c41069/href">https://medium.com/media/5476aea894c5ac30e5dfa9ea56c41069/href</a></iframe><p>If you missed it live, catch the replay and highlights rolling out across social.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3qwK_tcBv_JIQrI4yZjZRA.jpeg" /></figure><p><em>(Timestamp: 51:17)</em></p><h3>CES 2026: XR, Spatial Computing, and AI Converge</h3><p>CES 2026 set the tone early.</p><p>As Jensen Huang put it during NVIDIA’s CES keynote, “the amount of computation required for AI is skyrocketing… models are increasing by an order of magnitude every single year,” driving massive demand for GPU infrastructure.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/BloombergTV/status/2008319121201475897%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/f4e4dba88a403974db4ca4504c39f1f0/href">https://medium.com/media/f4e4dba88a403974db4ca4504c39f1f0/href</a></iframe><p>From there, CES made the throughline clear: the future of AI, spatial media, and real-time 3D ultimately leads back to infrastructure.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/rendernetwork/status/2010039299581063184%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/99f662f5d8214b228b19c8e453e1b665/href">https://medium.com/media/99f662f5d8214b228b19c8e453e1b665/href</a></iframe><h3>SUBMERGE at ARTECHOUSE: Immersive, Large-Format Experiences</h3><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/rendernetwork/status/2009422856665309286%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/dae5da8eb3d3a56e0e7b127b5aaa0d3e/href">https://medium.com/media/dae5da8eb3d3a56e0e7b127b5aaa0d3e/href</a></iframe><p>SUBMERGE at ARTECHOUSE remains open through May, with January marking a key moment for 
large-format immersive visuals.</p><p>On Jan 23, SUBMERGE at ARTECHOUSE becomes a focal point for those experiences.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eC2KLmUdpGO9tfV6Iu2M3g.png" /></figure><p>Bringing together ARTECHOUSE, VivoBarefoot (a footwear and wellness brand best known for its minimalist “barefoot” shoes), and the Render Network, the experience gathers top technologists, creators, journalists, and wellness leaders to explore the future of immersive experiences through movement and wellness, inside SUBMERGE: Beyond the Render’s 18K, 270-degree cinematic world rendered entirely on distributed GPUs.</p><p>SUBMERGE remains open through May, showcasing what’s possible when immersive storytelling, physical activation, and decentralized compute converge.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/DavidAriew/status/2010927433424232641%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/354eb927d45f92388107e479f83ee865/href">https://medium.com/media/354eb927d45f92388107e479f83ee865/href</a></iframe><p>Featured SUBMERGE artist David Ariew described the piece as his most detailed and immersive work to date, one that would have taken years to render on a single machine but was completed in days using Render.</p><h3>Creator Spotlight: Rendering at Scale for Formula 1</h3><p>Next up is a Creator Spotlight on real-world use of the Render Network, featuring Italo Ruan, founder of Perhaps Creation, and his recent full-CGI project for Santander / Formula 1, rendered entirely on the network.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/rendernetwork/status/2006081928995221739%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/63806175b4bd58063d6c76154e0e5661/href">https://medium.com/media/63806175b4bd58063d6c76154e0e5661/href</a></iframe><h3>RenderCon 2026</h3><p>RenderCon 2026 is coming up quickly.</p><p>As shared on Spaces this week, the team is actively shaping programming and confirming speakers for RenderCon 2026’s two-day conference taking place April 16–17 in Los Angeles. Now’s a good time to start planning travel and accommodations.</p><p><a href="https://renderfoundation.com/rendercon">Save the Date | Rendercon 2026</a></p><p>If you missed RenderCon 2025, last year’s conversations offer a strong preview of what to expect. 
One standout was <em>The Artist’s Journey in the Age of AI</em>, featuring Mike Winkelmann (Beeple) and Render Network founder Jules Urbach, which focused on real-world creative workflows, experimentation, and artistic control, not hype.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=http%3A//x.com/rendernetwork/status/2011215981981344148%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/0fb81dab86b110102699205bae3cd626/href">https://medium.com/media/0fb81dab86b110102699205bae3cd626/href</a></iframe><p><em>Watch more recaps:</em></p><p><a href="https://www.rendercon.ai/">RenderCon 2025 - Explore the future of media, technology, and art</a></p><h3>Kicking Off 2026</h3><p>From open-sourced generative systems and CES signals to Octane 2026 going live and immersive experiences scaling up, the year is already setting a strong pace.</p><p>More ecosystem updates, community news, and case studies are coming soon.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8a4fc4e9713f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Artist’s Journey in the Age of AI]]></title>
            <link>https://rendernetwork.medium.com/the-artists-journey-in-the-age-of-ai-1351374fefc7?source=rss-3d2d407322fb------2</link>
            <guid isPermaLink="false">https://medium.com/p/1351374fefc7</guid>
            <dc:creator><![CDATA[Render Network]]></dc:creator>
            <pubDate>Tue, 13 Jan 2026 22:14:03 GMT</pubDate>
            <atom:updated>2026-01-13T22:14:03.553Z</atom:updated>
            <content:encoded><![CDATA[<p>During RenderCon’s <a href="https://www.youtube.com/watch?v=6pqG5J3qi5g&amp;list=PLf9Sc8My5KURLYSbP9zOVwR0dD7FSFuG2&amp;index=14"><em>The Artist’s Journey in the Age of AI</em></a> conversation, Mike Winkelmann (AKA “Beeple”) and Jules Urbach explored what it means for artists to create when the tools are changing so fast. Their conversation wasn’t about hype or fear, but about experimentation and the messy, evolving process of staying creative in an AI world.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F6pqG5J3qi5g%3Flist%3DPLf9Sc8My5KURLYSbP9zOVwR0dD7FSFuG2&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D6pqG5J3qi5g&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F6pqG5J3qi5g%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/124ca1b31cd9abd1da177a41f8dc2485/href">https://medium.com/media/124ca1b31cd9abd1da177a41f8dc2485/href</a></iframe><h3>Creating at the Speed of Change</h3><p>The talk began with Beeple recounting the tools he’s tested in just the past year (like Stable Diffusion, ComfyUI, Runway, Luma, Pika and Higgs Field). “The tools really, to me, just are progressing so fast that it’s so hard to keep up,” he said.</p><p>Yet, he sounded more energized than overwhelmed. AI, to him, isn’t chaos; <strong>it’s momentum.</strong></p><p>Generative art, however, still resists control. “It can be very frustrating when you give it super specific directions and it doesn’t keep doing it,” he admitted. Instead of tightening the reins, he’s learned to loosen them, letting accidents shape his process. Happy mistakes, he said, often evolve into real ideas.</p><p>Urbach echoed the thought from the technical side. The next challenge, he said, is bringing precision back and making generative systems feel as reliable as traditional 3D workflows.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*GlihAx_zJ6RKzx6q54PJzw.png" /><figcaption>Jules Urbach and Beeple discuss AI as a working tool, not a shortcut.</figcaption></figure><h3>From Workflow to Ownership</h3><p>Beeple’s daily process blends AI and traditional 3D art into a single system. He’ll generate a rough image, isolate a form with AI, rebuild it as a 3D mesh using tools like Tripo, and bring it into Octane to light and render the image.</p><p>“I’m not just, like, rolling the dice. Okay, I typed in 20 words, we’ll see what happens, hit reroll, reroll, reroll,” he said. “I want very precise control over what the output is.”</p><p>He’s clear about why this matters.</p><blockquote>“At the end of the day, the clients, I promise you, give no fuck how you created this shi. They do not care.”</blockquote><p>That pragmatism carried into a discussion of ownership and training data. Urbach asked about the fear artists feel over their work being used to train models. Beeple didn’t mince words. “It’s very overblown,” he said. “If we actually got [artists] to remove their work from the training data, nothing would change.”</p><p>For him, the real fight isn’t about erasure but about access:</p><blockquote>“Having open source models that anybody can use, to me, actually seems the most fair.”</blockquote><h3>Building What Didn’t Exist</h3><p>The real excitement, both agreed, lies in art that AI makes possible, not faster. 
Beeple described projects that would have been impossible without generative tools.</p><p>“They give you an ability to make something you just fundamentally could not have made otherwise,” he said.</p><p>His studio in Charleston, South Carolina, embodies that philosophy. Inside its 50,000 square feet, sculptures, GPUs, and projection systems blend into constantly changing installations. One week it’s a live gaming-and-art event; the next, an immersive exhibition. Beeple sees it as a prototype for the museum of the future, where digital, physical, and social experiences merge.</p><h3>Culture in Real Time</h3><p>When the conversation turned to NFTs, Beeple was equally blunt about the crash. Projects without artistic depth were bound to fade. What remains, he said, is the idea that digital objects can carry meaning.</p><p>Memes, too, deserve more respect. For Beeple, they’re simply ideas moving at the speed of the internet and represent a new, networked form of culture.</p><p>That same immediacy drives <em>Everydays</em>, his nearly two-decade commitment to making and posting new work daily. Lately, AI has become both a collaborator and an interpreter.</p><p><em>“On Twitter, that Grok thing… it’s surprisingly good at being able to, sort of, explain things,”</em> he said. The prospect that AI could archive and contextualize his work decades later fascinates him.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*EBlqDAqP8bZxuusJV48rIA.jpeg" /><figcaption><em>Everydays</em> continues as both art practice and digital archive.</figcaption></figure><h3>Freedom and Persistence</h3><p>Before signing off, Beeple gave a glimpse of the internet’s fault lines.</p><blockquote><em>“Let’s just say I’m in a naughty time out on Instagram right now because I was bullying Joe Biden.”</em></blockquote><p>He gets equal backlash on X when he pokes at Trump. The reactions, he said, show how centralized platforms box in creative expression.</p><p>Urbach added that decentralized networks, where ownership and moderation are distributed, might give artists more freedom. Beeple agreed but brought it back to the grind: keep working, keep experimenting.</p><p><em>The Artist’s Journey in the Age of AI</em> wasn’t about predicting the future. It was about persistence. AI, for Beeple, isn’t competition; it’s friction. And friction, in the right hands, still makes sparks.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1351374fefc7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Render Network Foundation Monthly Report — December 2025]]></title>
            <link>https://rendernetwork.medium.com/render-network-foundation-monthly-report-december-2025-43d956808e3f?source=rss-3d2d407322fb------2</link>
            <guid isPermaLink="false">https://medium.com/p/43d956808e3f</guid>
            <category><![CDATA[3d-rendering]]></category>
            <category><![CDATA[decentralized-ai]]></category>
            <category><![CDATA[generative-ai-tools]]></category>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[depin]]></category>
            <dc:creator><![CDATA[Render Network]]></dc:creator>
            <pubDate>Thu, 08 Jan 2026 21:43:24 GMT</pubDate>
            <atom:updated>2026-02-04T02:08:51.717Z</atom:updated>
            <content:encoded><![CDATA[<h3><strong>Render Network Foundation Monthly Report — December 2025</strong></h3><p>Catch our 2-minute recap of 2025, Render Network at Solana Breakpoint, new merch store now open, media coverage of the Dispersed launch, Render Network on the DePIN Hub podcast, and key metrics for the month.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*ayoDkU0l9Hh9LB1UkTLB7A.jpeg" /></figure><h3><strong>Top 10 moments of 2025 in 2 minutes flat</strong></h3><p>The Render Network had a full year in 2025. Here are the top 10 highlights summarized in 2 minutes:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F3zzFGr-2Ubk%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D3zzFGr-2Ubk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F3zzFGr-2Ubk%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/18fa1db8a76fbcd6cb0cda0b2bf63c85/href">https://medium.com/media/18fa1db8a76fbcd6cb0cda0b2bf63c85/href</a></iframe><h3><strong>RenderCon: April 16–17 in Hollywood — Save the Date!</strong></h3><p>After a knockout inaugural year, RenderCon is back in 2026! Taking place at Nya Studios in Hollywood, California, the two-day event is already considered by leading motion graphics artists to be the most informative and relevant conference in the space, where the industry’s best minds converge. It will feature presentations and workshops by top thinkers in the 3D graphics, VFX, AI, and compute fields. Look out for more details in the coming weeks.</p><p>Interested in attending? Hold the dates on your calendar: April 16 and 17. We are excited to connect with you there in person.</p><p>Watch the 2025 RenderCon sizzle reel here:</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/rendernetwork/status/1912538322083487855&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/032dd0785e9c3ae7ac6361ebdfe49b11/href">https://medium.com/media/032dd0785e9c3ae7ac6361ebdfe49b11/href</a></iframe><h3><strong>The Render Network at Solana Breakpoint 2025</strong></h3><p>The Render Network was a headline sponsor at this year’s Solana <a href="https://solana.com/breakpoint">Breakpoint</a> event in Abu Dhabi, United Arab Emirates. It took place December 11–13.</p><p>The team representing the Foundation at the booth had great conversations with attendees across the Solana ecosystem as well as prospects.</p><p>Sunny Osahn, Foundation Grants Lead, gave a product talk onstage on December 13th. He announced Dispersed, the name of the Render Network Compute Subnet enabled by RNP-019.</p><p>As part of the announcement, Sunny shared that OTOY, the long-standing Render Network ecosystem partner, released the beta version of OTOY Studio. It will be the first major user of <a href="https://dispersed.com/">Dispersed</a>. OTOY Studio will offer over 600 curated Artist AI models, supporting builders as traditional 3D and AI workflows continue to converge. 
OTOY will aim to run as many open weight models as possible on Dispersed.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FGXKQYsd_IK8%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DGXKQYsd_IK8&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FGXKQYsd_IK8%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/858696a6fc6597c523a1137d63bc2c09/href">https://medium.com/media/858696a6fc6597c523a1137d63bc2c09/href</a></iframe><p>The Render Network featured prominently at Breakpoint, from the moment the conference opened: The Breakpoint trailer was made by Render Network artist Brilly, rendered on the Render Network:</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/solana/status/1999014384606261418%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/80391422699e1b675008b4d017100198/href">https://medium.com/media/80391422699e1b675008b4d017100198/href</a></iframe><h3><strong>Ecosystem metrics</strong></h3><p>The Render Network Foundation tracks various metrics on its network. Below is the aggregate view of the month’s activity, previous month’s activity, and month-over-month change for a few key metrics. Much of this information is found at <a href="https://stats.renderfoundation.com/">stats.renderfoundation.com</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WZ2McQZvcIa6pwUcNa29gw.png" /></figure><h4><strong>Glossary</strong></h4><p><strong>Frames rendered </strong>— number of frames rendered by the network.</p><p><strong>Burns </strong>— the USDC equivalent of RENDER tokens burned on the network. Additional <a href="https://stats.renderfoundation.com/#bme">burn and mint data</a> and details of the <a href="https://github.com/rendernetwork/RNPs/blob/main/RNP-001.md">burn and mint equilibrium model</a> (BME).</p><p><strong>Emissions </strong>— RENDER tokens minted for the month per the emissions schedule for BME.</p><p><strong>Rewards to Foundation rendering node (RENDER) — </strong>RENDER token rewards issued (both for rendering jobs and availability) to the rendering node operated on behalf of the Foundation. The Foundation’s rendering node Solana wallet address is: AAHwFzKj8U1E7JyNBz2v561eXCz41Uv123xzjAYo3GdT</p><p><strong>Foundation rendering node cost to operate (USD)</strong> — Cost, in USD, of running the Foundation rendering node in its geographic location. The price of running a node on the Render Network varies depending on geography, local energy market, local utility service and other factors.</p><p><strong>Foundation rendering node kilo-watt hours (kWh) </strong>— Amount of energy consumed by the Foundation rendering node during the reported period.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sUA8Wfxef-bwIzlXlNXInw.png" /></figure><p><strong>Rewards for jobs </strong>— rewards sent, in RENDER tokens, to render node operators on the network whose machines were used to complete rendering jobs. Rewards are issued by epoch, which occur weekly.</p><p><strong>Rewards for availability — </strong>rewards sent, in RENDER tokens, to render node operators who made their machines available to the network. 
Rewards are issued by epoch, which occur weekly.</p><p><strong>Total rewards to nodes — </strong>the sum of rewards sent to render node operators for rendering and availability.</p><h3><strong>Governance Updates</strong></h3><h3><strong>The Burn-Mint Equilibrium Model has resulted in 1,000,000 RENDER burned</strong></h3><p>In December 2025, we reached an important milestone: we hit the 1 millionth RENDER burned on the network.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uelIO8YwE0kG40jV3rBZQw.jpeg" /></figure><p>Not sure what this means? Check out our <a href="https://rendernetwork.medium.com/how-the-render-network-operates-foundation-network-and-treasury-a43bc492616e">explainer</a> on how the burn-mint equilibrium process works, how the governance model dictates that every job processed on the network results in equivalent RENDER burned, and how the treasury is managed by the foundation.</p><h3><strong>Partner and product updates</strong></h3><h4><strong>Ecosystem Partners</strong></h4><p>This section highlights the Render Network ecosystem partners.</p><p><strong>OTOY Product News<br></strong>Below are product updates from OTOY, a Render Network ecosystem partner and primary technology service provider to the network.</p><p><strong>Octane 2026.1 is released in beta on the Render Network</strong></p><p>OctaneRender 2026.1 integrates advanced rendering technologies like 3D Gaussian Splats, Meshlets for geometry streaming, native MaterialX/OpenPBR support, and enhanced AI denoising (Neural Radiance Cache) for faster, more efficient GPU rendering. Early alphas and betas bring cross-platform Apple Silicon support and new tools for artists across various DCC plugins like Cinema 4D and Houdini, moving towards production-ready features.</p><p>It is currently being tested on the Render Network and should be opened up to broader usage soon.</p><p><a href="https://support.maxon.net/hc/en-us/articles/8814051767964-Redshift-2026-2-1-2025-12-December-11-2025">Redshift 2026.2</a> is now supported on the Render Network. This is the latest version of the renderer released by Maxon.</p><h4><strong>Artist spotlight: Patrick Levar</strong></h4><p>Patrick Levar stumbled upon 3D graphics because he was bored. That’s right. Though he was already hosting a popular YouTube channel about mobile filmmaking, Patrick found that he needed a challenge. So he decided to incorporate 3D motion graphics into his cell phone footage. He learned Cinema 4D and then Octane Render after watching another filmmaker use it.</p><p>Patrick shares how he always had to queue his renders right before going to bed because they took so long on his local machine. With the Render Network, he can free up his machine so he can keep working while the rendering is being handled in parallel. Iterations can be done quickly, in just 15 minutes vs. 
hours or overnight.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fqtekx1RfY8s%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dqtekx1RfY8s&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fqtekx1RfY8s%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="640" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/5566acdb6baaedd8b18c2486d8785c75/href">https://medium.com/media/5566acdb6baaedd8b18c2486d8785c75/href</a></iframe><h3><strong>In the News</strong></h3><h4>Dispersed, the AI compute subnet, gets media coverage</h4><p>In April 2025, the Render Network saw the passing of governance proposal RNP-019, enabling a RENDER rewards framework for a dedicated network of decentralized GPUs specifically designed for general purpose and AI workloads. This network would be separate from the existing rendering network.</p><p>The subnet — designed and developed based on the experience, knowledge, and model of the long-running rendering network — got its official name and website in December: <a href="https://dispersed.com/">Dispersed</a>. Unveiled at the Solana Breakpoint conference on December 13th, it aims to serve the increasing AI compute needs of a growing number of sectors.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JITPDPX0NpIokrL9Rlpk1A.png" /></figure><p>The announcement was made by Sunny Osahn, Grants Lead for the Render Network Foundation, during the Render Network product keynote.</p><p>Various crypto trade publications covered the announcement:</p><p><em>Bitcoin News</em></p><p>Render Network Targets Cloud Bottlenecks With Distributed GPU Platform</p><p><a href="https://news.bitcoin.com/render-network-targets-cloud-bottlenecks-with-distributed-gpu-platform/">https://news.bitcoin.com/render-network-targets-cloud-bottlenecks-with-distributed-gpu-platform/</a></p><p><em>CryptoNews</em></p><p>The Render Network Foundation has launched Dispersed, a distributed GPU computing platform aimed at easing growing constraints in centralized cloud infrastructure as global artificial intelligence (AI) workloads expand.</p><p><a href="https://cryptonews.net/news/altcoins/32133887/">https://cryptonews.net/news/altcoins/32133887/</a></p><p><em>AInvest</em></p><p>Dispersed Launch Fuels GPU Decentralization Push in AI Compute Sector</p><p><a href="https://www.ainvest.com/news/dispersed-launch-fuels-gpu-decentralization-push-ai-compute-sector-2512/">https://www.ainvest.com/news/dispersed-launch-fuels-gpu-decentralization-push-ai-compute-sector-2512/</a></p><p><em>CurrencyAnalytics</em></p><p>Innovative GPU Platform Aims to Tackle Cloud Computing Challenges</p><p><a href="https://thecurrencyanalytics.com/bitcoin/innovative-gpu-platform-aims-to-tackle-cloud-computing-challenges-230066">https://thecurrencyanalytics.com/bitcoin/innovative-gpu-platform-aims-to-tackle-cloud-computing-challenges-230066</a></p><p><em>SolanaFloor</em></p><p>Render Network launches Dispersed</p><p><a href="https://solanafloor-frontend-news-dk49tz8g1-step-finance.vercel.app/en/news/solana-breakpoint-2025-key-announcements-and-stories-from-saturday">https://solanafloor-frontend-news-dk49tz8g1-step-finance.vercel.app/en/news/solana-breakpoint-2025-key-announcements-and-stories-from-saturday</a></p><h4><strong>The Render Network on the DePIN Hub podcast</strong></h4><p>Trevor Harries-Jones, board member at the Render Network Foundation, talked with Daniel Andrade, founder of the DePIN 
Hub platform and DePIN Hub podcast. They discussed:</p><ul><li>How the Render Network sends frames out to be rendered by distributed GPUs</li><li>The role of managing the supply of and demand for nodes</li><li>An overview of what makes rendering such a technical use case</li><li>How early generative AI still is for creative workflows — and how fast it’s evolving</li><li>The outsized role of inferencing in AI, to which decentralized networks are well suited</li></ul><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FxDXvFG3XYPE%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DxDXvFG3XYPE&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FxDXvFG3XYPE%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/d47b9c0167c1646f3eddf64dfaaa8f3b/href">https://medium.com/media/d47b9c0167c1646f3eddf64dfaaa8f3b/href</a></iframe><h4><strong>Render Network: Inside DePIN’s Compute Pioneer</strong></h4><p>Render Network Foundation board member Trevor Harries-Jones spoke to fellow DePIN project Helium as part of a recent interview series from the decentralized wireless infrastructure project. This interview was conducted at the Helium Beach House in Abu Dhabi during Solana Breakpoint 2025.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Ffgn0pluzyqY%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dfgn0pluzyqY&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Ffgn0pluzyqY%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/b4e791f8f6330775f291ff7958c47ca5/href">https://medium.com/media/b4e791f8f6330775f291ff7958c47ca5/href</a></iframe><h3><strong>Announcements</strong></h3><h4><strong>The Render Network merchandise store is now open!</strong></h4><p>Interested in picking up a hat, hoodie, or coffee mug? Check out our newly unveiled <a href="https://merch.renderfoundation.com/">merchandise store</a>!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2THntIJf3NdDJh61caNo7g.png" /></figure><p>All items were designed by real artists who use the Render Network.</p><p>We shared more about the merch store launch here:</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/rendernetwork/status/2003536037981905053%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/bb5a762ee2708b365f3739ed6021e0ea/href">https://medium.com/media/bb5a762ee2708b365f3739ed6021e0ea/href</a></iframe><h4><strong>Render Royale: the Premier Monthly 3D Contest — December recap</strong></h4><p>The final Render Royale contest of the year invited artists to imagine the future through a festive but fractured lens. 
<strong>Frozen Futures &amp; Cyber Christmas</strong> was the theme, challenging creators to explore survival, nostalgia, and technology in worlds where warmth is scarce and tradition has evolved — or vanished entirely.</p><p><strong><em>Random Frozen Scenes</em></strong> by Turhan Pathan</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*q3s-ixSFWb6YXcXz6BZ2QQ.png" /></figure><p><strong><em>Lead the Way</em> </strong>by Deneth Vihara</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0OHx2nmXxFeNFiqqD6B1KA.png" /></figure><p><strong><em>Delivery</em></strong> by JC</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IJqhl8Sd5KrH96djd-2TLg.png" /></figure><p>From post-apocalyptic journeys across icy wastelands to frozen monuments caught mid-conflict, the submissions captured both spectacle and story. Above are the standout winners from the final Render Royale of 2025.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FxlRiDd85BXA%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DxlRiDd85BXA&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FxlRiDd85BXA%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/3721e35e97a63c25e4ae71f86f7c3509/href">https://medium.com/media/3721e35e97a63c25e4ae71f86f7c3509/href</a></iframe><p>More info on the <a href="https://rendernetwork.medium.com/render-royale-december-2025-frozen-futures-cyber-christmas-ad4acacbbad8">winners here</a>.</p><h4><strong>Motion Plus Design presentations and interview are now available</strong></h4><p>Presentations from our sponsorship of Motion Plus Design are now available for replay.</p><p>Phil Gara, VP of Marketing and Strategy at OTOY, <a href="https://motion-plus-design.com/watch/talks/159">talked</a> about the Render Network in New York City in June.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UF5HgYLW0VIZlr67imrV_A.png" /></figure><p>Watch Phil’s talk in New York City here: <a href="https://motion-plus-design.com/watch/talks/159">https://motion-plus-design.com/watch/talks/159</a></p><p>Sunny Osahn, Render Network Foundation Grants Lead, also gave a presentation at the Paris one-day conference in November.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YMd-1UQADJ4TLgj49ldmIQ.png" /></figure><p>Watch Sunny’s talk in Paris here: <a href="https://motion-plus-design.com/watch/talks/173">https://motion-plus-design.com/watch/talks/173</a></p><p>Sunny also sat for an interview with Motion Plus Design.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*59AIUCa2YCsYOXI0r2LNnQ.png" /></figure><p>Check out the interview here: <a href="https://motion-plus-design.com/watch/interviews/81">https://motion-plus-design.com/watch/interviews/81</a></p><h3>What’s coming up</h3><h4><strong>SUBMERGE: Beyond the Render has been extended through March</strong></h4><p><a href="https://www.artechouse.com/program/submerge/"><em>SUBMERGE: Beyond the Render</em></a>, the first-of-its-kind art exhibition sponsored by the Render Network in collaboration with ARTECHOUSE NYC, has been extended until March. Originally scheduled to end in December, the exhibit resumes in January 2026. 
A perfect winter-time activity if you or someone you know is based in or traveling to New York City in the next couple of months.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=43d956808e3f" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>