<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Nosana on Medium]]></title>
        <description><![CDATA[Stories by Nosana on Medium]]></description>
        <link>https://medium.com/@nosana?source=rss-5a13d1805981------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*OExeboVKYDfpsbamgufW3Q.png</url>
            <title>Stories by Nosana on Medium</title>
            <link>https://medium.com/@nosana?source=rss-5a13d1805981------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Tue, 07 Apr 2026 11:35:47 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@nosana/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Nosana Partners with Sogni.AI to Empower Creativity Through AI-Powered Art Generation]]></title>
            <link>https://nosana.medium.com/nosana-partners-with-sogni-ai-to-empower-creativity-through-ai-powered-art-generation-e3955ecec10e?source=rss-5a13d1805981------2</link>
            <guid isPermaLink="false">https://medium.com/p/e3955ecec10e</guid>
            <dc:creator><![CDATA[Nosana]]></dc:creator>
            <pubDate>Tue, 24 Sep 2024 10:20:11 GMT</pubDate>
            <atom:updated>2024-09-24T10:20:11.711Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DqOAWEYOmdjZfZKZ8EyMbA.jpeg" /></figure><h3>Nosana’s partnership with Sogni.AI is unlocking a new era of AI-powered creativity for iOS and macOS users, making art generation and editing faster than ever.</h3><p>Adobe’s Photoshop should be ready to renounce the crown because there’s a new kid on the block aiming to be the best in class for creative tools. Nosana is proud to announce its partnership with Sogni.AI, a cutting-edge platform that is reshaping the world of art generation and editing. Together, Nosana and Sogni.AI are bringing AI-powered creativity tools to iOS and macOS users, allowing them to generate and enhance images, videos, and animations faster and more efficiently than ever before.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/768/1*mTrW-oY-agxsYJSlCBVN0A.png" /></figure><h3>Nosana and <a href="http://sogni.ai/?ref=blogcms.depinhub.io">Sogni.AI</a> Team Up</h3><p><a href="http://sogni.ai/?ref=blogcms.depinhub.io">Sogni.AI</a> provides AI art and image generation tools for artists and creatives, both professional and personal. Unlike the “casual prompting” web tools previously available, Sogni runs as native iOS and macOS applications with deep workflow tools akin to an Adobe product, but without the monthly subscription fees or heavy-handed restrictions. These apps enable users to express their creativity by generating art from prompts and fine-tuning the results through an intuitive interface. With over 100 purpose-specific art generation models available, Sogni brings creative vision to life the way the artist intended, with pixel-perfect precision.</p><p>Previously, when users downloaded the Sogni app, art generation took place locally, which meant dealing with large, slow models. This often resulted in delays, limiting the creative flow. 
Now, by partnering with Nosana, Sogni can offer a faster, more streamlined experience by offloading the computationally heavy work to Nosana’s decentralized network of GPU nodes.</p><h3>How Nosana and Sogni Work Together</h3><p>Running Stable Diffusion models efficiently requires powerful GPUs, which is where Nosana excels. Through its decentralized GPU grid, Nosana provides the computing power Sogni needs to run Stable Diffusion models quickly and affordably.</p><p>Each Stable Diffusion model is trained to produce specific outcomes, from pixelated art to photorealistic images. Within Sogni, users can select from a variety of models to generate images tailored to their preferences. Additionally, the app offers an array of photo editing tools, allowing users to iterate on their creations seamlessly.</p><p>With Nosana’s GPU network, Sogni users no longer have to wait for lengthy local processing times. Instead, they can offload the art generation process to Nosana’s decentralized infrastructure, producing stunning visuals within seconds. This integration empowers users to explore multiple iterations and creative options in real time, enhancing their creative journey.</p><h3>Looking Ahead</h3><p>This partnership between Nosana and <a href="http://sogni.ai/?ref=blogcms.depinhub.io">Sogni.AI</a> marks a major milestone in the world of creative technology. 
By combining Nosana’s decentralized compute network with Sogni’s advanced art generation and editing tools, they are redefining what’s possible in digital creativity.</p><p>As the partnership evolves, both teams are excited to continue pushing the boundaries of AI-powered creativity, bringing innovative features and seamless performance to artists and creators across the globe.</p><p>With Nosana powering the next generation of creative tools, the future of digital art is in good hands.</p><p>To stay informed of the latest developments in this partnership, follow <a href="https://x.com/nosana_ai?ref=blogcms.depinhub.io">Nosana</a> and <a href="http://sogni.ai/?ref=blogcms.depinhub.io">Sogni.AI</a> on X.</p><p>Finally, don’t forget to go to <a href="http://sogni.ai/?ref=blogcms.depinhub.io">Sogni.AI</a> to download and use the <a href="https://apps.apple.com/us/app/sogni-ai-art-generator/id6450021857?pt=127281960&amp;ct=Website&amp;mt=8&amp;ref=blogcms.depinhub.io">macOS</a> or <a href="https://apps.apple.com/us/app/sogni-ai-art-generator/id6450021857?platform=iphone&amp;ref=blogcms.depinhub.io">iOS</a> app, and let your imagination run wild today!</p><h4>About Nosana</h4><p>Nosana is an open-source cloud computing marketplace dedicated to AI inference. Their mission is simple: make GPU computing more accessible at a fraction of the cost. The platform has two main goals: providing AI users with flexible GPU access and allowing GPU owners to earn passive income by renting out their hardware.</p><p>By offering affordable GPU power, Nosana enables AI users to train and deploy models faster without expensive hardware investments, all powered by the $NOS token. 
Access compute for a fraction of the cost or become a compute supplier at Nosana.io.</p><p><a href="https://nosana.io/?ref=blogcms.depinhub.io">Website</a> | <a href="https://docs.nosana.io/?ref=blogcms.depinhub.io">Documentation</a> | <a href="https://twitter.com/nosana_ai?ref=blogcms.depinhub.io">Twitter</a> | <a href="https://discord.gg/nosana-ai?ref=blogcms.depinhub.io">Discord</a> | <a href="https://t.me/NosanaCompute?ref=blogcms.depinhub.io">Telegram</a> | <a href="https://www.linkedin.com/company/nosana/?ref=blogcms.depinhub.io">LinkedIn</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e3955ecec10e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[LLM Benchmarking: Cost Efficient Performance]]></title>
            <link>https://nosana.medium.com/llm-benchmarking-cost-efficient-performance-836031c1bb83?source=rss-5a13d1805981------2</link>
            <guid isPermaLink="false">https://medium.com/p/836031c1bb83</guid>
            <dc:creator><![CDATA[Nosana]]></dc:creator>
            <pubDate>Fri, 13 Sep 2024 09:09:59 GMT</pubDate>
            <atom:updated>2024-09-13T09:09:59.573Z</atom:updated>
            <content:encoded><![CDATA[<h3>Explore Nosana’s latest benchmarking insights, revealing a compelling comparison between consumer-grade and enterprise GPUs in cost-efficient LLM inference performance.</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZmL-pz24S1mHT2v5PfszEQ.jpeg" /></figure><p>Economic viability is one of the most important factors in the success of new products and applications. No less so for Nosana. We show that the consumer-grade flagship RTX 4090 can provide LLM inference at a staggering 2.5X lower cost compared to the industry-standard enterprise A100 GPU.</p><p><a href="https://nosana.io/blog/llm_benchmarking_on_the_nosana_grid/">Our previous article</a> showed how we implemented a uniform LLM benchmark that helps track individual node performance and configurations. With this information, we are able to design fairer GPU compute markets by lowering their performance variation. But although the initial benchmark data is valuable in terms of market design optimization, it does not give meaningful insights into the realistic performance we are interested in. This is because the benchmark was designed to be compatible with all nodes on the network, but it wasn’t able to test the full capacity of each node.</p><p>In this article, we address this limitation and zoom in on the performance comparison between consumer-grade and enterprise hardware. We implement benchmarks and use the results in a cost-adjusted performance analysis to highlight the competitive advantage of the Nosana Grid over traditional compute providers.</p><h3>LLM Inference</h3><p>When we talk about performance measurements in the context of LLM inference, we are mostly interested in inference speed. 
To better understand the factors influencing this speed, let’s begin with a brief overview of how LLM inference works.</p><p><a href="https://nosana.io/blog/llm_benchmarking_on_the_nosana_grid"><em>The previous blog post</em></a> <em>went into more detail on this topic. If you have read it, you can skip ahead to the ‘</em><strong><em>Current Research</em></strong><em>’ section. Readers interested in an in-depth explanation should refer to it.</em></p><p>As far as computers are concerned, LLMs consist of two files: a large file containing the model parameters, and a smaller file with the code that runs the model. The size of an LLM is determined by the number of parameters it has and the precision of its parameters. Precision means the accuracy with which the model’s parameters are represented and is measured in bits. To calculate an example, let’s take the popular LLM Llama 3.1 with 8 billion parameters and a commonly used 16-bit floating-point precision. One parameter at 16-bit floating-point precision takes 2 bytes, so 2 bytes times 8 billion parameters gives us a total model size of 16 GB. The model size is an important factor in the usability of LLMs because it determines which types of hardware are able to load the model.</p><p>Once loaded onto hardware, LLMs perform next-token prediction. This means that LLMs iteratively predict and add single tokens to an input sequence that is provided as context. This process of generating tokens is called inference. To perform inference, an LLM goes through two stages: the <strong>prefill</strong> phase and the <strong>decoding</strong> phase. During the prefill phase, the model processes all input tokens simultaneously to compute all the necessary information for generating subsequent tokens. 
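</p><p>As a quick sanity check, the model-size arithmetic above can be reproduced in a few lines of Python (a minimal sketch; the numbers are exactly those from the example):</p>

```python
# Model size = number of parameters x bytes per parameter.
# Llama 3.1 with 8 billion parameters at 16-bit floating-point
# precision uses 2 bytes per parameter, as described above.
n_params = 8_000_000_000
bytes_per_param = 2  # 16-bit float
size_gb = n_params * bytes_per_param / 1e9
print(size_gb, "GB")  # 16.0 GB
```

<p>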
During the decoding phase, the model uses the cached information computed during the prefill phase to generate new tokens.</p><p>In practice, the prefill phase corresponds to the time you have to wait until the LLM starts generating its response. It is a relatively short period that makes efficient use of available computing capacity through highly parallelized computations. We call the prefill phase <strong>compute-bound</strong> because it is limited by the computational capacity of the hardware running the LLM.</p><p>The decoding phase generally takes up the bulk of the inference time and corresponds to the period between the generation of the first and the completion of the last token. This process is not as computationally efficient as the prefill phase because it requires constantly loading and offloading cached computations between the processing units and memory. We call the decoding phase <strong>memory-bound</strong> because its performance is limited by how fast data can be moved to and from memory.</p><h3>GPUs &amp; Inference</h3><p>In large production use cases, LLM inference is predominantly performed on high-end graphics processing units, or GPUs. Three key specifications of GPUs are particularly relevant to LLM inference:</p><ol><li>VRAM (Video Random Access Memory): The amount of available memory on the GPU</li><li>FLOPS (Floating Point Operations Per Second): A measure of the GPU’s computational capacity</li><li>Memory bandwidth: The speed at which data can be transferred within the GPU</li></ol><p>The processing of single sequences as described in the previous section usually leaves the VRAM and computational capacity of GPUs underutilized. To make better use of these resources, we need to increase the number of tokens processed and computations performed. We can do this by processing a batch of multiple sequences at once. In production use cases, this means that prompts from different users get bundled together and processed at the same time. 
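</p><p>The memory-bound nature of decoding also explains why batching raises throughput. A rough, hedged back-of-the-envelope sketch (the bandwidth figures are spec-sheet assumptions for illustration, not measurements from this article):</p>

```python
# Decode is memory-bound: generating each token re-reads all model weights,
# so memory bandwidth / model size gives a rough upper bound on
# single-stream decode speed. A batch reuses one pass over the weights for
# many sequences at once, which is why concurrent users raise throughput.
model_size_gb = 16  # Llama 3.1-8B at 16-bit precision, as computed above

# Advertised memory bandwidths in GB/s (assumed spec-sheet values):
bandwidth_gb_s = {"RTX 4090": 1008, "A100-80GB": 2039}

for gpu, bw in bandwidth_gb_s.items():
    print(f"{gpu}: ~{bw / model_size_gb:.0f} tokens/s single-stream upper bound")
```

<p>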
Handling multiple requests, or <strong>concurrent users</strong>, plays an important role in the optimization of GPU usage.</p><h3>Current Research</h3><p>Alright, with the basics of LLM inference in mind, let’s get more specific about the goal of the current research. Previously, we benchmarked the performance of all GPU types on the Nosana grid using Llama 3.1–8B with a single concurrent user. Running inference with a single concurrent user leads to GPU underutilization, limiting the insights gained when comparing performance with other compute providers. In this article, we set up benchmarks for accurate performance comparisons. We’ll focus our analysis on comparing Nosana’s performance against established cloud computing platforms. This comparison involves two key benchmarks:</p><ul><li>A baseline assessment measuring the performance of current market leaders</li><li>An experimental evaluation of the Nosana grid’s performance</li></ul><h3>The Baseline Benchmark</h3><p>Similar to running models on the Nosana grid, you can use a fully customized Docker image when renting a GPU from a compute provider. This means that we can keep important variables such as the model files and LLM serving framework constant for our experiment and only have to pick the <em>GPU type</em> and the <em>price of usage</em> for a fair comparison.</p><p>Because running LLMs in a production setting requires high capacity in terms of computation and memory, there are two main types to consider when renting a GPU: the A100 and the H100. The H100 is a newer and more powerful GPU than the A100, but both cards are able to load and effectively run most open-source models. Given its relative affordability and arguably better cost-effectiveness, we opt for the A100 as our baseline GPU.</p><p>For the price-of-usage variable there are more options to consider, because each compute provider offers its own rental price per hour. 
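</p><p>Hourly prices only become comparable across providers once throughput is taken into account. A small helper makes the conversion concrete (a sketch; the throughput figure is a hypothetical placeholder, not a benchmark result):</p>

```python
def cost_per_million_tokens(price_per_hour: float, tokens_per_second: float) -> float:
    """Convert an hourly GPU rental price into cost per 1M generated tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return price_per_hour / tokens_per_hour * 1_000_000

# e.g. an A100 rented at $1.65/h serving 1,000 tokens/s across all users:
print(round(cost_per_million_tokens(1.65, 1000), 3))  # 0.458 dollars per 1M tokens
```

<p>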
To pick a competitive price, we made use of the website <a href="https://getdeploying.com/reference/cloud-gpu">https://getdeploying.com</a>, which shows aggregated GPU rental prices for all cloud providers. At the time of writing, the cheapest rental price for an A100–80GB is offered by <a href="https://crusoe.ai/">Crusoe</a> at $1.65 per hour, so we will use this price for our analysis.</p><h3>The Experimental Benchmark</h3><p>To compare the Nosana grid with our baseline approach, we need to determine the GPU type and an accompanying price per hour for our experimental benchmark. We’ll leave the price per hour as a variable to allow comparisons across multiple hypothetical pricing scenarios. This means that we only have to choose the GPU type.</p><p>The RTX 4090 is the most frequently encountered GPU on the Nosana grid, closely followed by the RTX 3090. The prevalence of the RTX 4090 and RTX 3090 GPUs on the Nosana grid highlights one of the network’s primary advantages over centralized compute providers: its ability to tap into a pool of underutilized consumer-grade hardware. Consequently, the most interesting comparison to make for Nosana is between popular enterprise hardware such as the A100 and underutilized consumer hardware such as the RTX 4090. Therefore, we pick the RTX 4090 for our experimental benchmark.</p><h3>Research Setup</h3><p>Let’s go over the rest of the research setup. Now that we have determined the fixed variables for the baseline and the experimental condition, we have to pick the shared variables: the model, the LLM serving framework, and the number of concurrent users.</p><p>For the <em>model,</em> we picked Llama 3.1–8B. 
Llama models are the most used open-source LLMs in the world, and the 8-billion-parameter variant makes it possible to easily load the model on both the A100 and the RTX 4090 GPUs.</p><p>As an LLM <em>serving framework,</em> we experimented with both <a href="https://github.com/vllm-project/vllm">vLLM</a> and <a href="https://github.com/InternLM/lmdeploy">LMdeploy</a>. vLLM is one of the most popular frameworks and is frequently mentioned by our prospective clients. LMdeploy is a highly optimized framework and has shown the highest inference speed in <a href="https://www.bentoml.com/blog/benchmarking-llm-inference-backends">recent benchmarking research</a>. When using these frameworks, we used the default, out-of-the-box inference configurations for both the baseline and experimental benchmark.</p><p>In our benchmarking script, we implemented functionality to send <em>concurrent user</em> requests. While our previous article demonstrated that the 4090 slightly outperforms the A100 for a single concurrent user, this scenario rarely reflects optimized production environments. Therefore, we tested performance using 1, 5, 10, 50, and 100 concurrent users to see how the comparison holds up under different workloads.</p><p>As an evaluation metric, we used tokens produced per second, which directly measures inference speed. We evaluated both the A100 and RTX 4090 GPUs across all combinations of the variables mentioned above.</p><h3>Results</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Hd86hO_xcgRDeFWv.jpg" /></figure><p>In the above graphs, we can see the performance of the RTX 4090 and the A100 with the LMdeploy and vLLM frameworks for different levels of concurrency. The graphs show that:</p><ul><li>At a low number of concurrent users, the A100s outperform the 4090s. However, this advantage shrinks as the number of concurrent users increases.</li><li>At a higher number of concurrent users, LMdeploy greatly outperforms vLLM with its standard settings. 
The RTX 4090 with LMdeploy even outperforms the A100 with vLLM at 50 and 100 concurrent users.</li><li>You need 1.5–2 RTX 4090s to reproduce the performance of an A100.</li></ul><h3>Price Comparison</h3><p>Considering the respective purchase costs of the RTX 4090 and the A100, the performance results of the RTX 4090 are quite impressive. In this section, we analyze both GPUs’ performance while taking into account their purchase cost and operational expenses. For the cost-adjusted analysis, we assume:</p><ul><li>The purchase cost of an RTX 4090 is $1,750.</li><li>The purchase cost of an A100–80GB is $10,000.</li><li>Two RTX 4090s are required to reproduce the performance of an A100.</li><li>The price of energy is equal to the average American price of $0.16 per kWh.</li><li>The energy consumption of an RTX 4090 is 300W.</li><li>The energy consumption of an A100 is 250W.</li><li>The price for renting an A100 is $1.65 per hour.</li></ul><p>Let’s start by calculating the return on investment (ROI) for the A100, which measures the amount of return relative to the investment cost. This helps us determine how quickly each GPU setup can earn back its initial cost and start generating profit.</p><h4>A100 ROI</h4><ol><li>Initial Investment: $10,000</li><li>Hourly Energy Cost: 0.25kW * 1 hour * $0.16/kWh = $0.04 per hour</li><li>Hourly Rental Revenue: $1.65 per hour</li><li>Hourly Net Profit: $1.65 - $0.04 = $1.61 per hour</li></ol><p>To find the break-even point, we divide the initial investment of $10,000 by the hourly net profit of $1.61, which gives us approximately 6,211 hours or 259 days. Therefore, it would take about 259 days of continuous operation and rental to earn back the initial investment on the A100 GPU.</p><h4>RTX 4090 ROI</h4><p>Let’s perform a similar analysis for the RTX 4090 setup where we deliver the same performance as the A100 setup. 
Remember, we’re assuming that two RTX 4090s are required to match the performance of one A100.</p><ol><li>Initial Investment: $1,750 * 2 = $3,500</li><li>Hourly Energy Cost: (0.3kW * 2) * 1 hour * $0.16/kWh = $0.096 per hour</li></ol><p>Let’s first calculate the ROI assuming we rent out the RTX 4090 setup at the same price as the A100:</p><ol><li>Hourly Rental Revenue: $1.65 per hour</li><li>Hourly Net Profit: $1.65 - $0.096 = $1.554 per hour</li></ol><p>To find the break-even point: $3,500 / $1.554 per hour ≈ 2,252 hours, or about 94 days.</p><p>In this scenario, the RTX 4090 setup would break even much faster than the A100, in about 94 days compared to 259 days for the A100.</p><p>Now, let’s determine the hourly rental price that would allow the RTX 4090 setup to break even in the same timeframe as the A100. Here’s the calculation:</p><ol><li>Hourly rate to cover initial investment: $3,500 / 6,211 hours ≈ $0.563 per hour</li><li>Total hourly rate including energy cost: $0.563 + $0.096 ≈ $0.66 per hour</li></ol><p>This means that if we set the hourly rental price for the RTX 4090 setup at $0.66, it would break even at the same point as the A100.</p><p>Comparing this to the A100’s rental price of $1.65 per hour, we can see that the RTX 4090 setup could potentially be rented out at a 2.5X lower price than the A100 while still achieving the same return on investment timeline. On top of that, the initial investment for the RTX 4090 setup is significantly lower than that of the A100, which reduces the barrier to entry for those looking to offer GPU rental services.</p><h3>Wrapping Up</h3><p>Through our comparison of the A100 and RTX 4090, we have demonstrated the potential competitive advantage that consumer-grade hardware has over enterprise hardware. As production models currently seem to trend toward smaller sizes, this benefit will only grow as more consumer-grade hardware becomes capable of running AI models efficiently. 
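</p><p>The break-even arithmetic above can be reproduced directly from the listed assumptions (a sketch that only restates the numbers from the Price Comparison section):</p>

```python
ENERGY_PRICE = 0.16  # $/kWh, average American price (assumption from above)

def break_even_hours(investment: float, rental_per_hour: float, power_kw: float) -> float:
    """Hours of continuous rental needed to earn back the purchase cost."""
    net_profit_per_hour = rental_per_hour - power_kw * ENERGY_PRICE
    return investment / net_profit_per_hour

a100 = break_even_hours(10_000, 1.65, 0.25)        # one A100-80GB
rtx = break_even_hours(2 * 1_750, 1.65, 2 * 0.30)  # two RTX 4090s
print(round(a100), "h =", round(a100 / 24), "days")  # 6211 h = 259 days
print(round(rtx), "h =", round(rtx / 24), "days")    # 2252 h = 94 days

# Rental price at which the 4090 pair breaks even in the A100's timeframe:
matched_rate = 2 * 1_750 / a100 + 2 * 0.30 * ENERGY_PRICE
print(round(matched_rate, 2))  # 0.66
```

<p>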
This trend holds enormous potential benefits for the Nosana grid, which primarily consists of consumer-grade technology.</p><h4>About Nosana</h4><p>Nosana is an open-source cloud computing marketplace dedicated to AI inference. Their mission is simple: make GPU computing more accessible at a fraction of the cost. The platform has two main goals: providing AI users with flexible GPU access and allowing GPU owners to earn passive income by renting out their hardware.</p><p>By offering affordable GPU power, Nosana enables AI users to train and deploy models faster without expensive hardware investments, all powered by the $NOS token. Access compute for a fraction of the cost or become a compute supplier at Nosana.io.</p><p><a href="https://nosana.io/">Website</a> | <a href="https://docs.nosana.io/">Documentation</a> | <a href="https://twitter.com/nosana_ai">Twitter</a> | <a href="https://discord.gg/nosana-ai">Discord</a> | <a href="https://t.me/NosanaCompute">Telegram</a> | <a href="https://www.linkedin.com/company/nosana/">LinkedIn</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=836031c1bb83" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Nosana Team is Heading to Singapore for Solana Breakpoint and Token2049]]></title>
            <link>https://nosana.medium.com/nosana-team-is-heading-to-singapore-for-solana-breakpoint-and-token2049-0cc5aaebb66b?source=rss-5a13d1805981------2</link>
            <guid isPermaLink="false">https://medium.com/p/0cc5aaebb66b</guid>
            <dc:creator><![CDATA[Nosana]]></dc:creator>
            <pubDate>Fri, 13 Sep 2024 09:06:43 GMT</pubDate>
            <atom:updated>2024-09-13T09:06:43.453Z</atom:updated>
            <content:encoded><![CDATA[<h3>The Nosana team is heading to Singapore for Solana Breakpoint and Token2049 to connect with builders and innovators in the DePIN and AI sectors.</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*t_xW8Hl5id8UD3cscfEmSw.jpeg" /></figure><p><em>This post will be updated as new events are added.</em></p><h3>Connect with the Nosana team:</h3><ul><li><a href="https://lu.ma/AIPowered"><strong>AI Powered Summit with Akash Network</strong></a> <strong>— Sep 17</strong> Jesse will be speaking on a panel about Advancements and Implications for Distributed Learning Systems.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/568/0*yC3bKzecA-4vNLeP.png" /></figure><ul><li><a href="https://www.asia.token2049.com/"><strong>Token2049 Singapore</strong></a> <strong>— Sep 18–19, Marina Bay Sands</strong></li><li>The team will be attending and engaging with key stakeholders in the Web3 and AI space.</li><li><a href="http://lu.ma/hack_singapore"><strong>Hack Seasons Conference Singapore</strong></a> <strong>— Sep 19</strong></li><li>Jesse will participate in a panel to discuss the latest advancements in DePIN and AI.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/970/0*-v19F88y7jj7347-.png" /></figure><ul><li><a href="https://solana.com/breakpoint"><strong>Solana Breakpoint</strong></a> <strong>— Sep 20–21</strong><br>Sjoerd will be hosting a workshop on the <strong><em>20th of September</em></strong> from <strong>11:30 AM to 12:00 PM at <em>The Pod</em></strong> (in the Developer Networking Space). Don’t miss this chance to learn more about Nosana’s vision for a decentralized AI future! Stick around until the end for a <strong>special announcement!</strong></li></ul><h3>Meet the Team Beyond the Stage</h3><p>Nosana’s team will also be attending multiple <strong>side events</strong> throughout the week in Singapore. 
Be sure to connect with them prior to the conferences to secure a meet-up, explore potential collaborations, and get insider insights on what’s coming next for Nosana.</p><p><strong>Team Members to Connect With:</strong></p><p><a href="https://www.linkedin.com/in/jesse-eisses-9760ab48/"><strong>Jesse Eisses:</strong></a> Co-founder</p><p><a href="https://www.linkedin.com/in/sjoerd-dijkstra/"><strong>Sjoerd Dijkstra</strong></a>: Co-founder</p><p><a href="https://www.linkedin.com/in/bourjois-ilunga-banza/"><strong>Bourjois Ilunga-Banza:</strong></a> Business Development</p><p><a href="https://x.com/djmbritt"><strong>David Britt</strong></a>: DevRel</p><h4>About Nosana</h4><p>Nosana is an open-source cloud computing marketplace dedicated to AI inference. Their mission is simple: make GPU computing more accessible at a fraction of the cost. The platform has two main goals: providing AI users with flexible GPU access and allowing GPU owners to earn passive income by renting out their hardware.</p><p>By offering affordable GPU power, Nosana enables AI users to train and deploy models faster without expensive hardware investments, all powered by the $NOS token. Access compute for a fraction of the cost or become a compute supplier at Nosana.io.</p><p><a href="https://nosana.io/">Website</a> | <a href="https://docs.nosana.io/">Documentation</a> | <a href="https://twitter.com/nosana_ai">Twitter</a> | <a href="https://discord.gg/nosana-ai">Discord</a> | <a href="https://t.me/NosanaCompute">Telegram</a> | <a href="https://www.linkedin.com/company/nosana/">LinkedIn</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0cc5aaebb66b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Nosana Staking Program Update]]></title>
            <link>https://nosana.medium.com/nosana-staking-program-update-21d02cd55247?source=rss-5a13d1805981------2</link>
            <guid isPermaLink="false">https://medium.com/p/21d02cd55247</guid>
            <dc:creator><![CDATA[Nosana]]></dc:creator>
            <pubDate>Tue, 06 Aug 2024 08:27:20 GMT</pubDate>
            <atom:updated>2024-08-06T08:27:20.916Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*R0KXaLRO_XcL7PxS.jpg" /></figure><h3>To ensure the network’s continued success and long-term potential, we’re implementing a key update to our staking program.</h3><p>Since the launch of our staking program in August 2022, the Nosana Network has experienced significant growth in stakers, thanks to our incredible community. To ensure the network’s continued success and long-term potential, we’re implementing a key update to our staking program, effective today, May 21st, 2024.</p><h3>A Focus on Long-Term Sustainability</h3><p>As the Nosana Network grows, ensuring its long-term health becomes paramount. To achieve this, we’re implementing a modification of staking rewards, effective today. This adjustment will result in an approximate halving of the APY. It’s important to remember that our dynamic staking rewards system considers multiple variables so that the actual APY fluctuation might be slightly higher or lower.</p><h3>Why These Changes Matter</h3><p>The APY adjustment might raise questions, but there’s a clear vision: ensuring a sustainable future for Nosana’s thriving ecosystem. This means prioritizing long-term health and aligning rewards with industry standards to benefit all stakers.</p><h3>Building an Even Stronger Network</h3><p>We’re constantly working to improve the Nosana staking experience. Here’s a glimpse into the developments planned for the future:</p><ul><li>Proof-of-Stake (PoS): We’re actively working on transitioning to a Proof-of-Stake (PoS) system. In the near future, users will need to stake a specific amount of NOS tokens to operate a node. This stake acts as collateral, guaranteeing responsible and efficient behavior from nodes and ultimately strengthening network security.</li><li>Transaction-Based Rewards: As the network grows and more inferences are run, stakers will receive more rewards. 
We have a transaction-based rewards system in which a portion of every transaction fee is distributed proportionally among all stakers. This means the more the network is used, the greater the rewards for our dedicated community!</li></ul><h3>The Future is Staked on Nosana</h3><p>These staking updates represent a significant step forward for Nosana. With the future implementation of Proof-of-Stake, slashing mechanisms, and a focus on transaction-based rewards, we’re creating a system where user participation is directly tied to network security and prosperity.</p><h3>FAQs</h3><p><strong>Why is the APY being halved and when does it take effect?</strong> We modified the staking rewards to ensure the long-term health and sustainability of the Nosana Network. This change took effect on May 21st, 2024. However, the current base APY will remain stable for a long period, providing predictability for stakers.</p><p><strong>What is Proof-of-Stake (PoS) and how will it impact me?</strong> PoS is a new system for securing the Nosana Network. In the future, users will need to stake NOS tokens to operate a node. This staking requirement incentivizes honest behavior from nodes and ultimately strengthens network security.</p><p><strong>What is slashing, and how can I avoid it?</strong> Slashing is a penalty for malicious activity by a node, resulting in a loss of staked tokens. To avoid slashing, simply participate honestly and responsibly as a node operator.</p><p><strong>What are transaction-based rewards?</strong> Transaction-based rewards are an exciting addition to the Nosana staking program. This system rewards stakers based on network usage. A percentage of each transaction fee is distributed proportionally among all stakers, offering the potential for increased earnings as the network grows!</p><h4>About Nosana</h4><p>Nosana is an open-source cloud computing marketplace dedicated to AI inference. 
Their mission is simple: make GPU computing more accessible at a fraction of the cost. The platform has two main goals: providing AI users with flexible GPU access and allowing GPU owners to earn passive income by renting out their hardware.</p><p>By offering affordable GPU power, Nosana enables AI users to train and deploy models faster without expensive hardware investments, all powered by the $NOS token. Access compute for a fraction of the cost or become a compute supplier at Nosana.io.</p><p><a href="https://nosana.io/">Website</a> | <a href="https://docs.nosana.io/">Documentation</a> | <a href="https://twitter.com/nosana_ai">Twitter</a> | <a href="https://discord.gg/nosana-ai">Discord</a> | <a href="https://t.me/NosanaCompute">Telegram</a> | <a href="https://www.linkedin.com/company/nosana/">LinkedIn</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=21d02cd55247" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Test Grid Phase 2 Update]]></title>
            <link>https://nosana.medium.com/test-grid-phase-2-update-e33e52811788?source=rss-5a13d1805981------2</link>
            <guid isPermaLink="false">https://medium.com/p/e33e52811788</guid>
            <dc:creator><![CDATA[Nosana]]></dc:creator>
            <pubDate>Tue, 06 Aug 2024 08:25:56 GMT</pubDate>
            <atom:updated>2024-08-06T08:25:56.601Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*kX435PNNG-ZOTydv.jpg" /></figure><h3>An update on our plans for Test Grid Phase 2</h3><p>Many members of our community have been eagerly awaiting an update on our plans for Test Grid Phase 2 — and we’re excited to share them with you!</p><h3>Testing of our First GPU Test Grid</h3><p>Phase 1 concluded successfully at the end of January, with nearly 100 nodes running AI inference jobs tirelessly. This phase yielded extensive feedback, with over 300 tickets filed on Discord, covering bug reports, issues, and improvement suggestions. Rather than closing the job markets after Phase 1, our dedicated testers continued running jobs, providing ongoing feedback to the team. You can read the full report on Phase 1 testing <a href="https://nosana.io/blog/testing-the-first-gpu-grid-for-ai-inference">here</a>.</p><h3>Preparations for Phase 2</h3><p>The Nosana team recognized that substantial improvements were necessary for Phase 2, which required careful planning to meet community and tester expectations. Over the past eight weeks, our development team processed Phase 1 feedback, resulting in significant updates to node functionality, including robust network enhancements, GPU setup flexibility, and automated onboarding processes.</p><h3>Adapting to Network Challenges</h3><p>Near the end of our development phase, we encountered a significant challenge. The Solana network experienced congestion, which impacted areas of our project that are in the production stage, especially our staking program and the Test Grid. This led to numerous transaction failures within our ongoing Phase 1 test group due to the high volume on the Solana mainnet network.</p><p>We faced a choice: wait for the Solana 1.18 upgrade scheduled for mid-April or proactively adapt our node functionality to attempt to bypass these congestion issues. 
We opted for the latter, and we are confident it was the right choice.</p><h3>Phase 2 Focus: Expanding AI Project Integration</h3><p>In Phase 1, to simulate a real testing environment, we deliberately created AI jobs that replicate the types of AI jobs our future clients will use. This isolated environment, with specific types of jobs, allowed the team to closely monitor performance and establish early benchmarks. For Phase 2, our focus has shifted towards expanding the variety of AI jobs needed by our future clients.</p><p>Over the same eight weeks, our developers have worked to revamp the Nosana job format to streamline the integration of AI projects and models into our network. We opened the registrations for AI projects to join the Test Grid and we’ve seen a fantastic response. Our team has been assisting several projects with their Phase 2 onboarding, promising a more diverse set of AI inference jobs — created by accepted projects and developers.</p><p>There’s much more happening behind the scenes than meets the eye. We’ll soon be sharing more details about the extensive work our developers have been doing.</p><h3>Nodes Acceptance &amp; Waitlist</h3><p>Over 1000 nodes have been accepted for Phase 2, ten times more than Phase 1, but this only represents a portion of all applications. We are working on accommodating more GPU models that our community members have and wish to use.</p><p>Additionally, a small number of successful registrants had issues with their GPU models not being accurately recorded. Registrants experiencing problems with GPU model recording will receive further instructions to secure their spot on the waitlist.</p><h3>What’s next?</h3><p>Glad you asked! As mentioned earlier, Phase 2 focuses on two key areas: extensively testing the enhanced Node V2 and expanding the variety of AI jobs. 
To ensure the new node’s functionality and capacity before onboarding major AI projects, we’ll conduct a phased testing approach in April.</p><h4>Initial Testing of Node V2:</h4><p>Already in progress, with a select group of Phase 1 testers trying out the new node in a special market created for this purpose. This preliminary test runs through the second week of April.</p><h4>Load Testing of Node V2:</h4><p>On April 15th, we’ll start expanding testing to include all Phase 1 testers, ensuring the node’s functions and capacity are optimal, including the adjustments made to overcome Solana’s congestion issues. This intensive testing phase is essential before we bring over 1000 new nodes onto the grid and will last about two weeks.</p><h4>Onboarding:</h4><p>The date <strong>everyone</strong> is waiting for: assuming the initial tests with a smaller sample size go well, we will begin onboarding on approximately April 29th. With over 1000 nodes accepted, the development team tackled this head-on, and now, our onboarding process has been fully automated. If you’re registered and accepted for Phase 2, starting your node will automatically grant you the requisite access token and place you into the correct market, ready to run jobs.</p><p>Before introducing real-world AI projects, we want to ensure our grid is ready, targeting the first project onboarding for May 15th. This gives us ample time to identify and fix any lingering issues with the node before these projects begin providing inference jobs to the grid.</p><h3>Questions?</h3><p>As always, our team is active on <a href="https://discord.gg/Nosana-AI">Discord</a> and ready to help with any questions you might have. Feel free to reach out there, and we’ll do our best to give you thorough answers.</p><h4>About Nosana</h4><p>Nosana is an open-source cloud computing marketplace dedicated to AI inference. Their mission is simple: make GPU computing more accessible at a fraction of the cost. 
The platform has two main goals: providing AI users with flexible GPU access and allowing GPU owners to earn passive income by renting out their hardware.</p><p>By offering affordable GPU power, Nosana enables AI users to train and deploy models faster without expensive hardware investments, all powered by the $NOS token. Access compute for a fraction of the cost or become a compute supplier at Nosana.io.</p><p><a href="https://nosana.io/">Website</a> | <a href="https://docs.nosana.io/">Documentation</a> | <a href="https://twitter.com/nosana_ai">Twitter</a> | <a href="https://discord.gg/nosana-ai">Discord</a> | <a href="https://t.me/NosanaCompute">Telegram</a> | <a href="https://www.linkedin.com/company/nosana/">LinkedIn</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e33e52811788" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Testing the First GPU Grid for AI Inference]]></title>
            <link>https://nosana.medium.com/testing-the-first-gpu-grid-for-ai-inference-858336e763e7?source=rss-5a13d1805981------2</link>
            <guid isPermaLink="false">https://medium.com/p/858336e763e7</guid>
            <dc:creator><![CDATA[Nosana]]></dc:creator>
            <pubDate>Tue, 06 Aug 2024 08:24:48 GMT</pubDate>
            <atom:updated>2024-08-06T08:24:48.209Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/720/1*JOZD1TJ3sh5rmfQFx46FZA.png" /></figure><h3>Nosana has successfully tested the first decentralized GPU grid developed and customized for AI inference workloads.</h3><p>The first phase of the Nosana Test Grid has concluded. We want to give a big shout-out to everyone who was involved and contributed! During these weeks, many components of the Nosana GPU grid were put to the test. Over a hundred GPU nodes managed to connect and were assigned to various AI-inference workloads, benchmarks, and other tasks over six weeks. This has provided us with valuable input for the development of the network and has marked the way forward.</p><p>In this article, we will summarize the key achievements, insights gained, and future improvements identified during the inaugural phase of the Nosana Test Grid, shedding light on the successful onboarding process, diverse AI-inference workloads, and the invaluable feedback received from participants that will guide us in enhancing the Nosana network for its next phase.</p><h3>Test Grid Goals</h3><p>The Test Grid was the first public GPU grid on the Nosana network, and an essential part of the process was establishing how nodes are identified, selected, and assigned to the grid. Nosana is committed to enabling underutilized consumer hardware and is designed to support a wide variety of hardware owned by semi-technical users. This means that we consider the onboarding process to be an essential part of the project. To provide personal assistance and process all feedback promptly, the number of Test Grid nodes was capped at 110. 
The selection included a wide range of device types and a good geographical distribution.</p><h3>Onboarding Process</h3><p>During the first step of the Test Grid application, users had to download the <a href="https://github.com/nosana-ci/nosana-node">Nosana Node</a> software, follow the <a href="https://docs.nosana.io/nodes/testgrid.html">node configuration guide</a>, run a <a href="http://explorer.nosana.io/jobs/GUhQsFv2Dd6UUAgpcHpVCncodCHQMADGCJqvB6m6CdMe?network=devnet">benchmark job</a>, and use the results to submit the Test Grid application form. A total of 442 nodes went through this process and provided an abundance of feedback. During onboarding, we accomplished the following:</p><ul><li>Complete support for both Windows and Linux-based nodes</li><li>Connected nodes from 47 different countries</li><li>A total of 22 different GPU models were connected</li><li>The majority of users rated the experience as “Smooth as butter”</li></ul><p>The team has collected all feedback and will release an updated guide and video tutorial based on this. From the total onboarded nodes, a selection of 112 users was made, and they received an access NFT that activated their nodes. To properly test various use cases, participants were divided into Test Grid Markets based on their GPU model.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*8rDDIxw0h9_cwCIL.png" /></figure><h3>Nosana Explorer</h3><p>Nosana has released an interface to inspect the Test Grid in detail: <a href="https://explorer.nosana.io/">https://explorer.nosana.io</a>. This enables anyone to look ‘behind the scenes’ of the Nosana test grid, revealing real-time data and statistics. This interface is crucial for our team, as it enhances transparency with our community and users, providing a deeper understanding of the Test Grid’s operations.</p><h3>AI Inference Workloads</h3><p>The Test Grid was divided into 13 compute markets of devices with similar specifications. 
These markets are continuously filled with a variety of GPU compute jobs. The majority of the AI workloads were targeted at image generation tasks using Stable Diffusion and speech recognition tasks using Whisper. Some jobs mimicked production workloads, while others aimed at stress testing and benchmarking the network to its limits. Here’s an overview of the number of jobs that were run:</p><ul><li>Successful jobs: 139,749</li><li>Total job duration: 35,773.7 hours (or 1,490.6 days)</li><li>Audio hours transcribed: 158,260.3 hours (or 6,594.2 days)</li><li>Images generated: 935,097</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*JWMWF_7oVoDV7dqs.png" /></figure><h3>Future Improvements</h3><p>Over 300 Discord tickets were raised and addressed throughout the Test Grid. As the Test Grid concluded, participants submitted feedback forms, and our team organized a Discord Live session to delve deeper into discussions on additional feedback. Undoubtedly, these interactions have been the most valuable aspects of our experience.</p><ul><li><strong>Making the network more robust</strong> During the Test Grid, there were cases where compute jobs could not be finalized due to external factors. The causes ranged from malfunctioning RPC nodes and congestion on Solana to GPU reliability. The Nosana software will be adapted to be more resilient when finalizing executed compute jobs, so nodes will not miss out on payments.</li><li><strong>Allow more flexible GPU setups</strong> On several occasions, participants requested more flexibility when it comes to upgrading and switching GPUs. Some users with advanced multi-GPU setups could only contribute one GPU at a time. Others upgraded their GPU to maximize income or wanted to switch between different GPU models during the day. 
We intend to support all these scenarios in future versions of Nosana.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/912/0*1PNtBzJ6PSyqoWqI.png" /></figure><h3>Next Steps</h3><p>The nodes currently connected to the Test Grid can keep running workloads and will continue to receive $NOS rewards. Nodes on the waitlist should keep an eye on their inbox, as the onboarding process is progressing gradually. In the meantime, the Nosana team is preparing for Phase 2 of the Test Grid. During this phase, job diversity will increase, allowing end-users to run actual workloads on the network.</p><p>For those interested in joining the waitlist for Nosana Test Grid Phase 2, please register <a href="https://docs.google.com/forms/d/e/1FAIpQLSfSBq9TLH4nzG6OL3BEDZaqWokiOTphYWa_7VESEQxpXJRlLQ/viewform">here</a>.</p><h4>About Nosana</h4><p>Nosana is an open-source cloud computing marketplace dedicated to AI inference. Their mission is simple: make GPU computing more accessible at a fraction of the cost. The platform has two main goals: providing AI users with flexible GPU access and allowing GPU owners to earn passive income by renting out their hardware.</p><p>By offering affordable GPU power, Nosana enables AI users to train and deploy models faster without expensive hardware investments, all powered by the $NOS token. Access compute for a fraction of the cost or become a compute supplier at Nosana.io.</p><p><a href="https://nosana.io/">Website</a> | <a href="https://docs.nosana.io/">Documentation</a> | <a href="https://twitter.com/nosana_ai">Twitter</a> | <a href="https://discord.gg/nosana-ai">Discord</a> | <a href="https://t.me/NosanaCompute">Telegram</a> | <a href="https://www.linkedin.com/company/nosana/">LinkedIn</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=858336e763e7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Navigating a Sustainable Future in Tech: The Nosana Initiative]]></title>
            <link>https://nosana.medium.com/navigating-a-sustainable-future-in-tech-the-nosana-initiative-0c48a8a2087f?source=rss-5a13d1805981------2</link>
            <guid isPermaLink="false">https://medium.com/p/0c48a8a2087f</guid>
            <dc:creator><![CDATA[Nosana]]></dc:creator>
            <pubDate>Tue, 06 Aug 2024 08:22:50 GMT</pubDate>
            <atom:updated>2024-08-06T08:22:50.595Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/650/0*5SWOJDr9sjBEojBx.png" /></figure><h3>Addressing the GPU Shortage with a Sustainable Lens</h3><h3>The Global GPU Shortage: A Complex Challenge</h3><p>The technology world is grappling with a severe GPU shortage, affecting everything from video gaming to advanced AI research. This shortage is not a simple supply issue; it’s a complex problem fueled by a surge in demand across diverse sectors, compounded by global supply chain disruptions. With GPUs becoming essential for various applications, their scarcity poses a significant hurdle to technological progress.</p><h4>Nosana’s Innovative Approach</h4><p>In response, we’re introducing an innovative, sustainable solution. We’re creating a GPU compute grid that harnesses the power of underutilized GPUs across the globe. Our approach doesn’t just alleviate the shortage; it also reduces the need for new hardware production, which has a substantial environmental footprint, ranging from carbon emissions during manufacturing to electronic waste.</p><h4>Sustainable and Efficient Utilization of Resources</h4><p>Our strategy emphasizes the efficient utilization of existing resources. By tapping into idle GPUs on personal computers and small-scale servers, we unlock a vast, untapped resource. We envision that this will not only provide a much-needed boost to GPU availability but also encourage a culture of resource efficiency, which is crucial in our transition to a more sustainable future.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*FEBhrBAF5Pf2GAtz.png" /></figure><h3>The Environmental Impact of Centralized Data Centers</h3><p>Concern over data center energy consumption is growing. Centralized data centers are increasingly scrutinized for their environmental impact. 
These facilities are essential for storing, processing, and distributing large amounts of data, yet they consume a staggering amount of electricity. Their energy consumption is linked to significant carbon emissions, contributing to climate change. Some of the world’s largest data centers have energy needs comparable to those of small nations, raising concerns about their sustainability in an environmentally conscious world.</p><h4>Nosana’s Decentralized Approach as a Solution</h4><p>Our decentralized approach offers a compelling solution. By distributing computing tasks across a network of personal and idle GPUs, the dependence on massive data centers is significantly reduced. This not only cuts down on energy usage but also lessens the environmental impact associated with traditional data center operations. This shift is not just beneficial from a technical standpoint but also from an environmental perspective. Decentralization means that computing power is no longer concentrated in a few locations, which often rely on non-renewable energy sources. Instead, it taps into the distributed network of GPUs, possibly powered by cleaner, renewable energy sources. This significantly reduces the carbon footprint associated with high-performance computing.</p><h3>Fostering a Community for Eco-Conscious Tech Advancements</h3><h4>Building a Collaborative Network</h4><p>At the heart of Nosana’s initiative is the idea of building a community-driven network. Our network isn’t just about sharing resources; it’s about fostering a collective consciousness towards sustainable technology use. Each individual’s contribution, while seemingly small, adds up to create a substantial force for change. Community action like this can have a ripple effect and lead to a greener, more sustainable technological revolution.</p><h4>Join the Movement: Be Part of the Sustainable Tech Revolution</h4><p>We invite you to join us in this critical journey towards sustainable technology. 
By choosing to participate in Nosana’s network, you are not just addressing the GPU shortage or reducing your environmental impact. You are becoming part of a movement that values ecological responsibility as much as technological progress.</p><h4>Discover More and Get Involved</h4><p>To learn more about how you can be part of this transformative initiative, visit <a href="https://nosana.io/blog/nosana.io">https://nosana.io</a>. Here, you can discover the intricate workings of our decentralized model, and find out how to contribute your GPU resources to this cause. Together, we can reshape the landscape of technology towards a more sustainable, inclusive, and efficient future.</p><h4>About Nosana</h4><p>Nosana is an open-source cloud computing marketplace dedicated to AI inference. Their mission is simple: make GPU computing more accessible at a fraction of the cost. The platform has two main goals: providing AI users with flexible GPU access and allowing GPU owners to earn passive income by renting out their hardware.</p><p>By offering affordable GPU power, Nosana enables AI users to train and deploy models faster without expensive hardware investments, all powered by the $NOS token. Access compute for a fraction of the cost or become a compute supplier at Nosana.io.</p><p><a href="https://nosana.io/">Website</a> | <a href="https://docs.nosana.io/">Documentation</a> | <a href="https://twitter.com/nosana_ai">Twitter</a> | <a href="https://discord.gg/nosana-ai">Discord</a> | <a href="https://t.me/NosanaCompute">Telegram</a> | <a href="https://www.linkedin.com/company/nosana/">LinkedIn</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0c48a8a2087f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Test Grid Phase 1: Accelerating the AI and GPU Computing Revolution]]></title>
            <link>https://nosana.medium.com/test-grid-phase-1-accelerating-the-ai-and-gpu-computing-revolution-f552d4154954?source=rss-5a13d1805981------2</link>
            <guid isPermaLink="false">https://medium.com/p/f552d4154954</guid>
            <dc:creator><![CDATA[Nosana]]></dc:creator>
            <pubDate>Mon, 05 Aug 2024 16:24:12 GMT</pubDate>
            <atom:updated>2024-08-05T16:24:12.502Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/650/0*7CkkDUnscbCgYtJD.png" /></figure><h3>The launch of our Test Grid represents a significant moment in AI and GPU-compute technology</h3><h3>Opening Doors to AI and GPU Computing</h3><p>The launch of our Test Grid represents a significant moment in AI and GPU-compute technology. We are inviting developers, data scientists, and AI enthusiasts to join the world’s most extensive GPU-compute grid. Participants in this groundbreaking project not only have the opportunity to advance in AI technology but also stand a chance to earn a share of 3 million $NOS tokens.</p><h3>The Nosana Node: Your Connection to Innovation</h3><p>Running a Nosana Node with an NVIDIA GPU is essential for registration. Given its pre-release status, it is recommended to operate the Node in a clean environment or virtual machine using a Solana address with minimal SOL.</p><h3>Hardware and Software Requirements</h3><p>A system with a minimum of 4GB RAM and a supported NVIDIA GPU is required. The range of compatible GPUs, including models like the NVIDIA RTX 4090 and RTX 4080, ensures robust processing capabilities. The software setup includes Ubuntu (20.04 or higher) or Windows with Ubuntu 22.04 on WSL2, Docker, Podman (for WSL2), NVIDIA Drivers, NVIDIA Container Toolkit, and the Solana Tool Suite.</p><h3>Step-by-Step Guide for Seamless Integration and Registration</h3><h3>Step 1: Installation and Setup</h3><ul><li>Install the necessary software for the Nosana Node on your system.</li><li>Follow the provided instructions for installing the Nosana Node on your operating system in our <a href="https://docs.nosana.io/nodes/testgrid.html">comprehensive guide</a>. 
We support Windows (WSL2) and Linux users.</li><li>Now you are prepared for the next steps in the registration process.</li></ul><h3>Step 2: Generating the Registration Code</h3><ul><li>Run the script mentioned in the guide to run the Test Grid registration task on your system.</li><li>After this, you will receive a unique registration code that you will need to paste into the form later on.</li><li>Make sure to make a backup of your node’s private key, as mentioned in the <a href="https://docs.nosana.io/nodes/testgrid.html">guide</a>. This is the account that is granted access to the Test Grid and will be receiving the rewards.</li></ul><h3>Step 3: Registration Process</h3><ul><li>Utilize the generated registration code to fill out the <a href="https://forms.gle/d6Copk6W4TAMDY5n8">registration form</a>.</li><li>Complete the remaining fields of the form so we know where to reach you, and leave valuable feedback to the Nosana team if needed.</li><li>Submit the completed registration form to initiate the registration process.</li></ul><h3>Step 4: Confirmation Email</h3><ul><li>Expect to receive a confirmation email after submitting the registration form.</li><li>This email confirms that your registration process is underway.</li></ul><h3>Step 5: Phase 1 Notification</h3><ul><li>Prior to the Test Grid launch, you will receive an email notifying you if you have been selected for Phase 1.</li></ul><h3>Step 6: Ready for Test Grid</h3><ul><li>Upon completion of the process and receiving confirmation, you are now prepared to actively participate in the Test Grid as a valuable member.</li></ul><p>Follow these steps diligently to ensure a smooth integration process and successful registration into the Test Grid.</p><h3>Join the AI Revolution</h3><p>This groundbreaking project awaits your participation. By joining Nosana’s Test Grid, you contribute significantly to the future of AI technology. 
We invite you to embark on this path of technological advancement and innovation in AI. Join our <a href="https://discord.gg/Nosana-ai">Discord server</a>, where you can connect directly with the technical team.</p><p>We look forward to seeing you on the grid!</p><h4>About Nosana</h4><p>Nosana is an open-source cloud computing marketplace dedicated to AI inference. Their mission is simple: make GPU computing more accessible at a fraction of the cost. The platform has two main goals: providing AI users with flexible GPU access and allowing GPU owners to earn passive income by renting out their hardware.</p><p>By offering affordable GPU power, Nosana enables AI users to train and deploy models faster without expensive hardware investments, all powered by the $NOS token. Access compute for a fraction of the cost or become a compute supplier at Nosana.io.</p><p><a href="https://nosana.io/">Website</a> | <a href="https://docs.nosana.io/">Documentation</a> | <a href="https://twitter.com/nosana_ai">Twitter</a> | <a href="https://discord.gg/nosana-ai">Discord</a> | <a href="https://t.me/NosanaCompute">Telegram</a> | <a href="https://www.linkedin.com/company/nosana/">LinkedIn</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f552d4154954" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Nosana’s New Direction: AI Inference]]></title>
            <link>https://nosana.medium.com/nosanas-new-direction-ai-inference-fa6f6c616e92?source=rss-5a13d1805981------2</link>
            <guid isPermaLink="false">https://medium.com/p/fa6f6c616e92</guid>
            <dc:creator><![CDATA[Nosana]]></dc:creator>
            <pubDate>Mon, 05 Aug 2024 16:22:26 GMT</pubDate>
            <atom:updated>2024-08-05T16:22:26.303Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*XAJug8Guuvw4qeTg.png" /></figure><h3>GPU-compute grid for AI inference</h3><p>Today, we’re excited to share a significant update about the future of Nosana. After careful consideration, we’ve decided to pivot away from CI/CD services. Instead, Nosana will now focus on providing a massive GPU-compute grid for AI inference.</p><h3>The Journey So Far</h3><p>Two years ago, we embarked on a mission to solve the painful dependence and lock-in that software developers face when shipping code. Our solution, Nosana, was designed as a decentralized automation provider that pooled together compute hardware for any developer to use freely, creating a marketplace where everyone can connect their machines with developers running CI/CD pipelines. Fast forward a year and a half, and we successfully built that computing platform. A dozen projects were using it daily to develop, build, and test their software. However, we started noticing challenges with gaining traction in the CI/CD market.</p><h3>The Challenges with CI/CD</h3><p>Despite our innovative approach, the CI/CD market didn’t provide the traction we sought. Most people consider CI/CD a “solved” problem and are hesitant to migrate to a new tool, even if it offers a superior, decentralized solution. Interestingly, one of our competitors reached the same conclusion and shared their insights about the subtleties of the CI/CD market and their pivot. It turns out that decentralization in itself is not a good enough reason for developers to leave the comforts of the tools they are already used to using every day. We tried developing a platform that was familiar enough to onboard developers easily, but this was not enough. 
Developers don’t want to spend time learning the intricacies of a new notation system just to use an experimental platform with half the features of their current tooling. It’s a big ask, and we found that most developers are unwilling to invest in learning our platform, even though we tried to make it easy for them to use.</p><h3>The Opportunity in AI</h3><p>The founders of Nosana have been at the forefront of artificial intelligence research for a long time. For the past couple of years, we’ve seen innovations happening in the AI space, with AlphaGo and Hugging Face’s generators indicating the rumblings of what was to come. The release of ChatGPT, OpenAI’s large language model chatbot, almost a year ago marked a significant leap forward in the industry. Following its release, open-source AI models advanced rapidly, and demand for them rose quickly. GPUs are the cornerstone of these technologies. The same power that makes it possible to render beautiful graphics in your favorite video game also makes it possible to train and talk to ChatGPT. GPUs became increasingly scarce and expensive as companies started training and running these new models. Giant corporations have already bought up stock for the upcoming two years to meet their own demand because manufacturing hasn’t been able to keep up with the sudden demand for GPUs.</p><p>Meanwhile, many GPUs that belong to gamers, miners, and users of high-end devices are largely underused. GPU benchmark data suggests that for many use cases you do not need the newest, most powerful GPUs on the market. Consumer-grade GPUs not only provide high availability but also deliver more inferences per dollar compared to major cloud providers.</p><h3>Nosana’s New Direction</h3><p>This is precisely what Nosana was built to solve: a decentralized market that matches hardware with demand. 
The Nosana compute engine that was built for CI/CD is flexible enough to run GPU workloads as well. In a matter of hours, we were able to connect some GPUs to the Nosana cluster and run a demo with AI workloads. All the technical ingredients are there; we’re preparing the same dish just for a different customer.</p><h3>What Will Change?</h3><p>As we shift our focus to building out the Nosana GPU grid, we will discontinue support for the CI/CD platform by the end of this year. However, this doesn’t mean it will become obsolete. Existing CI/CD workloads will continue to run as long as someone hosts a compatible compute node and a connector. This is a testament to our original mission: no lock-in and complete freedom. Moving forward, our new mission is to connect existing GPUs to the growing demand. So, what does this look like?</p><h4>GPU Support and Easy Onboarding</h4><p>Firstly, we will start rolling out GPU support for Nosana Nodes, enabling you to run GPU jobs on your machine. We’re also prototyping new methods to make running GPU workloads on your machine easier. There are a couple of components to this. First off, as mentioned before, we will be expanding the different kinds of compute jobs that can be run on the Nosana Network, starting with a focus on jobs that require GPUs, such as AI inference. You won’t need to pay for OpenAI’s API to do inference anymore; you can choose the model you want to use, and Nosana will connect you with a node that runs it.</p><p>You will also be able to run smaller compute jobs instead of whole pipelines. We have created a Nosana SDK that will make it easier for us and you to integrate Nosana into the applications we use every day, such as the Nosana CLI tool, which will let you publish a compute job to the Nosana Network with one simple command. 
Embracing this new direction, we are also building out our Nosana Explorer, which will make it easier to explore the different kinds of compute jobs on Nosana.</p><p>During this journey, we realized that installing and running the Nosana Node software can be challenging for users. We believe we’ve found an innovative solution and are currently prototyping it. Imagine hosting a node being as easy as visiting a URL in your browser and connecting your wallet to put your machine on the network! By leveraging technologies such as WebAssembly, we aim to create the easiest GPU-node onboarding experience on the web.</p><p>We’re excited about this new direction and look forward to sharing more updates soon.</p><h4>About Nosana</h4><p>Nosana is an open-source cloud computing marketplace dedicated to AI inference. Its mission is simple: make GPU computing more accessible at a fraction of the cost. The platform has two main goals: providing AI users with flexible GPU access and allowing GPU owners to earn passive income by renting out their hardware.</p><p>By offering affordable GPU power, Nosana enables AI users to train and deploy models faster without expensive hardware investments, all powered by the $NOS token. Access compute for a fraction of the cost or become a compute supplier at Nosana.io.</p><p><a href="https://nosana.io/">Website</a> | <a href="https://docs.nosana.io/">Documentation</a> | <a href="https://twitter.com/nosana_ai">Twitter</a> | <a href="https://discord.gg/nosana-ai">Discord</a> | <a href="https://t.me/NosanaCompute">Telegram</a> | <a href="https://www.linkedin.com/company/nosana/">LinkedIn</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=fa6f6c616e92" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Nosana Teams Up With PiKNiK to Integrate High-performance GPUs Into Its DePIN Network]]></title>
            <link>https://nosana.medium.com/nosana-teams-up-with-piknik-to-integrate-high-performance-gpus-into-its-depin-network-3597367e78a5?source=rss-5a13d1805981------2</link>
            <guid isPermaLink="false">https://medium.com/p/3597367e78a5</guid>
            <dc:creator><![CDATA[Nosana]]></dc:creator>
            <pubDate>Mon, 05 Aug 2024 16:18:46 GMT</pubDate>
            <atom:updated>2024-08-05T16:18:46.828Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*rQoEnNzmpsdG-xoL.jpg" /></figure><h3>Discover how Nosana’s new partnership with PiKNiK brings powerful Nvidia A5000 multi-GPU setups to our decentralized GPU marketplace, unlocking top-tier computing resources for a wide range of applications.</h3><p>We’re proud to announce an exciting partnership between Nosana and PiKNiK, a leading provider of enterprise-grade infrastructure optimized for the future of the Internet. This collaboration is a major step in our journey to enhance decentralized computing, leveraging PiKNiK’s expertise in Web3 and decentralized cloud solutions, particularly their advanced Filecoin operations.</p><p>PiKNiK excels in the decentralized cloud space. Known worldwide for their Filecoin operations, they have also been key in training and certification programs for Filecoin storage providers. Now, PiKNiK’s advanced hardware capabilities will be integrated into Nosana’s platform, providing substantial benefits to our users.</p><p>A key focus for Nosana is making GPUs available and accessible to everyone by leveraging a vast network of consumer GPUs through our DePIN-based marketplace. By building a scalable DePIN on the Solana blockchain, we address this challenge head-on. We optimize performance by focusing on small-to-medium open large language models (LLMs) that run efficiently on low-memory cards. These strategies enable us to create a highly performant AI inference network at scale.</p><p>Thanks to our collaboration with PiKNiK, Nosana now has its first multi-GPU setup available on the network. PiKNiK’s high-performance hardware, including Nvidia A5000 cards and impressive multi-GPU setups like 8x A5000 configurations, will now be accessible through Nosana’s decentralized GPU marketplace. 
This advancement allows our clients to utilize top-tier computing resources for a variety of applications. PiKNiK’s involvement on the supply side of Nosana’s Test Grid highlights the technical depth and potential of this collaboration, significantly enhancing the capabilities and reach of our platform.</p><p>If you are interested in learning more about our partnership and its potential, be sure to join our X Spaces with PiKNiK.</p><p>August 2nd: set your reminder <a href="https://twitter.com/i/spaces/1dRJZdwDVzDKB">here</a>.</p><h4>About PiKNiK</h4><p>PiKNiK is a Web3 ecosystem multiplier that has dramatically lowered the barriers to entry for both data owners and service providers in pursuit of decentralized networks like IPFS and Filecoin. As the first American storage provider on Filecoin, PiKNiK continues to set the standard for providing cloud products and services to end users atop Web3 storage and compute networks. PiKNiK squarely focuses on serving enterprise-scale clients in data-intensive industries. Today, PiKNiK operates over 200 million gigabytes of data storage and a fleet of modern data center CPUs and GPUs across multiple facilities throughout the United States.</p><p><a href="https://twitter.com/PiKNiK_US">Twitter</a> | <a href="https://www.linkedin.com/company/piknikus/">LinkedIn</a> | <a href="https://www.piknik.com/">Website</a></p><h4>About Nosana</h4><p>Nosana is an open-source cloud computing marketplace dedicated to AI inference. Its mission is simple: make GPU computing more accessible to all at a fraction of the cost. The platform has two main goals: providing AI users with flexible GPU access and allowing GPU owners to earn passive income by renting out their hardware.</p><p>By offering affordable GPU power, Nosana enables AI users to train and deploy models faster, without expensive hardware investments, all powered by the $NOS token. 
Access compute for a fraction of the cost or become a compute supplier at Nosana.io.</p><p><a href="https://nosana.io/">Website</a> | <a href="https://docs.nosana.io/">Documentation</a> | <a href="https://twitter.com/nosana_ai">Twitter</a> | <a href="https://discord.gg/nosana-ai">Discord</a> | <a href="https://t.me/NosanaCompute">Telegram</a> | <a href="https://www.linkedin.com/company/nosana/">LinkedIn</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3597367e78a5" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>