<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Hyperstack - Performance Benchmarks</title>
    <link>https://www.hyperstack.cloud/technical-resources/performance-benchmarks</link>
    <description>Hyperstack - Performance Benchmarks</description>
    <language>en</language>
    <pubDate>Tue, 03 Feb 2026 07:10:13 GMT</pubDate>
    <dc:date>2026-02-03T07:10:13Z</dc:date>
    <dc:language>en</dc:language>
    <item>
      <title>LLM Inference Benchmark: NVIDIA A100 NVLink vs NVIDIA H100 SXM</title>
      <link>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/llm-inference-benchmark-comparing-nvidia-a100-nvlink-vs-nvidia-h100-sxm</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/llm-inference-benchmark-comparing-nvidia-a100-nvlink-vs-nvidia-h100-sxm" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/1000017697.png" alt="LLM Inference Benchmark: NVIDIA A100 NVLink vs NVIDIA H100 SXM" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;strong&gt;Is inference slowing you down&amp;nbsp;or costing more than it should?&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;As models grow larger, inference becomes harder to optimise. It’s where many teams hit their biggest bottlenecks. Whether you're deploying in production or fine-tuning in research, delays and inefficiencies can lead to high latency, rising costs and a poor user experience.&lt;/p&gt; 
&lt;p&gt;The right GPU can change that.&lt;/p&gt; 
&lt;p&gt;In this blog, we compare two of the most popular GPUs for LLM workloads: the NVIDIA A100 NVLink and the NVIDIA H100 SXM5. We ran benchmarks with &lt;a href="https://www.hyperstack.cloud/blog/case-study/what-is-vllm-a-guide-to-quick-inference"&gt;&lt;span style="font-weight: bold;"&gt;vLLM&lt;/span&gt;&lt;/a&gt;, a high-performance inference engine built for throughput and low latency, on Hyperstack’s ultimate GPU cloud.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/llm-inference-benchmark-comparing-nvidia-a100-nvlink-vs-nvidia-h100-sxm" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/1000017697.png" alt="LLM Inference Benchmark: NVIDIA A100 NVLink vs NVIDIA H100 SXM" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;strong&gt;Is inference slowing you down&amp;nbsp;or costing more than it should?&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;As models grow larger, inference becomes harder to optimise. It’s where many teams hit their biggest bottlenecks. Whether you're deploying in production or fine-tuning in research, delays and inefficiencies can lead to high latency, rising costs and a poor user experience.&lt;/p&gt; 
&lt;p&gt;The right GPU can change that.&lt;/p&gt; 
&lt;p&gt;In this blog, we compare two of the most popular GPUs for LLM workloads: the NVIDIA A100 NVLink and the NVIDIA H100 SXM5. We ran benchmarks with &lt;a href="https://www.hyperstack.cloud/blog/case-study/what-is-vllm-a-guide-to-quick-inference"&gt;&lt;span style="font-weight: bold;"&gt;vLLM&lt;/span&gt;&lt;/a&gt;, a high-performance inference engine built for throughput and low latency, on Hyperstack’s ultimate GPU cloud.&lt;/p&gt;
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=26282475&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.hyperstack.cloud%2Ftechnical-resources%2Fperformance-benchmarks%2Fllm-inference-benchmark-comparing-nvidia-a100-nvlink-vs-nvidia-h100-sxm&amp;amp;bu=https%253A%252F%252Fwww.hyperstack.cloud%252Ftechnical-resources%252Fperformance-benchmarks&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>AI</category>
      <category>Machine Learning</category>
      <category>LLM</category>
      <category>Gen AI</category>
      <category>a100</category>
      <category>Cloud Computing</category>
      <category>GPU Cloud</category>
      <category>H100</category>
      <pubDate>Tue, 20 May 2025 08:25:17 GMT</pubDate>
      <author>daman.preet@nexgencloud.com (Damanpreet Kaur Vohra)</author>
      <guid>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/llm-inference-benchmark-comparing-nvidia-a100-nvlink-vs-nvidia-h100-sxm</guid>
      <dc:date>2025-05-20T08:25:17Z</dc:date>
    </item>
    <item>
      <title>NVIDIA A100 vs H100: Use Cases, Cost &amp; Performance</title>
      <link>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/comparing-nvidia-a100-vs-nvidia-h100-use-cases-cost-and-more</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/comparing-nvidia-a100-vs-nvidia-h100-use-cases-cost-and-more" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/BL%20H100%20vs%20H100.webp" alt="NVIDIA A100 vs H100: Use Cases, Cost &amp;amp; Performance " class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Looking for a clear NVIDIA A100 vs NVIDIA H100 comparison? This guide answers it directly, covering performance, target use cases, cost differences and efficiency on real workloads. We show where the A100 still delivers value and where the H100’s architecture accelerates next-gen AI training and inference. Plus, we contextualise pricing and throughput on Hyperstack so you can make data-backed choices that align with your project goals and budget.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/comparing-nvidia-a100-vs-nvidia-h100-use-cases-cost-and-more" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/BL%20H100%20vs%20H100.webp" alt="NVIDIA A100 vs H100: Use Cases, Cost &amp;amp; Performance " class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Looking for a clear NVIDIA A100 vs NVIDIA H100 comparison? This guide answers it directly, covering performance, target use cases, cost differences and efficiency on real workloads. We show where the A100 still delivers value and where the H100’s architecture accelerates next-gen AI training and inference. Plus, we contextualise pricing and throughput on Hyperstack so you can make data-backed choices that align with your project goals and budget.&lt;/p&gt;
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=26282475&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.hyperstack.cloud%2Ftechnical-resources%2Fperformance-benchmarks%2Fcomparing-nvidia-a100-vs-nvidia-h100-use-cases-cost-and-more&amp;amp;bu=https%253A%252F%252Fwww.hyperstack.cloud%252Ftechnical-resources%252Fperformance-benchmarks&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Innovation</category>
      <category>AI</category>
      <category>Machine Learning</category>
      <category>LLM</category>
      <category>NLP</category>
      <category>Gen AI</category>
      <category>a100</category>
      <category>Deep Learning</category>
      <category>High-Performance Computing (HPC)</category>
      <category>Data Analytics</category>
      <category>Cloud Computing</category>
      <category>GPU Cloud</category>
      <category>H100</category>
      <pubDate>Thu, 24 Apr 2025 11:30:08 GMT</pubDate>
      <author>daman.preet@nexgencloud.com (Damanpreet Kaur Vohra)</author>
      <guid>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/comparing-nvidia-a100-vs-nvidia-h100-use-cases-cost-and-more</guid>
      <dc:date>2025-04-24T11:30:08Z</dc:date>
    </item>
    <item>
      <title>NVIDIA L40 vs RTX A6000: Which GPU Leads AI Workloads in 2026</title>
      <link>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/nvidia-l40-vs-rtx-a6000</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/nvidia-l40-vs-rtx-a6000" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/NVIDIA%20L40%20vs%20NVIDIA%20RTX%20A6000%20-%20Blog%20thumbnail%20-%201000x600.png" alt="NVIDIA L40 vs RTX A6000: Which GPU Leads AI Workloads in 2026" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p style="font-size: 16px;"&gt;Choosing the best GPU for AI in 2025? This benchmark-focused guide compares NVIDIA L40 and RTX A6000 across performance, memory, and AI-specific workloads. Using real training and inference examples, it shows which GPU excels in tasks like deep learning, generative AI, and high-throughput data processing. Get the insight you need to select the optimal GPU for your project requirements.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/nvidia-l40-vs-rtx-a6000" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/NVIDIA%20L40%20vs%20NVIDIA%20RTX%20A6000%20-%20Blog%20thumbnail%20-%201000x600.png" alt="NVIDIA L40 vs RTX A6000: Which GPU Leads AI Workloads in 2026" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p style="font-size: 16px;"&gt;Choosing the best GPU for AI in 2025? This benchmark-focused guide compares NVIDIA L40 and RTX A6000 across performance, memory, and AI-specific workloads. Using real training and inference examples, it shows which GPU excels in tasks like deep learning, generative AI, and high-throughput data processing. Get the insight you need to select the optimal GPU for your project requirements.&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=26282475&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.hyperstack.cloud%2Ftechnical-resources%2Fperformance-benchmarks%2Fnvidia-l40-vs-rtx-a6000&amp;amp;bu=https%253A%252F%252Fwww.hyperstack.cloud%252Ftechnical-resources%252Fperformance-benchmarks&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Innovation</category>
      <category>AI</category>
      <category>Machine Learning</category>
      <category>LLM</category>
      <category>Gen AI</category>
      <category>High-Performance Computing (HPC)</category>
      <category>Cloud Computing</category>
      <pubDate>Fri, 04 Apr 2025 08:37:02 GMT</pubDate>
      <author>daman.preet@nexgencloud.com (Damanpreet Kaur Vohra)</author>
      <guid>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/nvidia-l40-vs-rtx-a6000</guid>
      <dc:date>2025-04-04T08:37:02Z</dc:date>
    </item>
    <item>
      <title>NVIDIA A100 PCIe vs SXM: Comprehensive Performance Comparison</title>
      <link>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/nvidia-a100-pcie-vs-nvidia-a100-sxm-a-comprehensive-comparison</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/nvidia-a100-pcie-vs-nvidia-a100-sxm-a-comprehensive-comparison" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/BL%20A100%20PCIe%20vs%20SXM_Thumb.png" alt="NVIDIA A100 PCIe vs SXM: Comprehensive Performance Comparison" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Choosing between the NVIDIA A100 PCIe and SXM? This blog compares memory, bandwidth, and AI training efficiency. Using benchmarks for LLMs, deep learning, and high-performance workloads, we reveal which variant excels in single-node vs multi-node setups. Quickly see which configuration matches your AI project’s scale and resource requirements to optimise cost and performance.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/nvidia-a100-pcie-vs-nvidia-a100-sxm-a-comprehensive-comparison" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/BL%20A100%20PCIe%20vs%20SXM_Thumb.png" alt="NVIDIA A100 PCIe vs SXM: Comprehensive Performance Comparison" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Choosing between the NVIDIA A100 PCIe and SXM? This blog compares memory, bandwidth, and AI training efficiency. Using benchmarks for LLMs, deep learning, and high-performance workloads, we reveal which variant excels in single-node vs multi-node setups. Quickly see which configuration matches your AI project’s scale and resource requirements to optimise cost and performance.&lt;/p&gt;
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=26282475&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.hyperstack.cloud%2Ftechnical-resources%2Fperformance-benchmarks%2Fnvidia-a100-pcie-vs-nvidia-a100-sxm-a-comprehensive-comparison&amp;amp;bu=https%253A%252F%252Fwww.hyperstack.cloud%252Ftechnical-resources%252Fperformance-benchmarks&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Innovation</category>
      <category>AI</category>
      <category>Machine Learning</category>
      <category>LLM</category>
      <category>Gen AI</category>
      <category>a100</category>
      <category>Energy</category>
      <category>Deep Learning</category>
      <pubDate>Thu, 12 Dec 2024 15:01:30 GMT</pubDate>
      <author>daman.preet@nexgencloud.com (Damanpreet Kaur Vohra)</author>
      <guid>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/nvidia-a100-pcie-vs-nvidia-a100-sxm-a-comprehensive-comparison</guid>
      <dc:date>2024-12-12T15:01:30Z</dc:date>
    </item>
    <item>
      <title>NVIDIA DGX GH200 GPU Grace Hopper Superchip at Hyperstack</title>
      <link>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/introducing-the-nvidia-dgx-gh200-grace-hopper-superchip-to-hyperstack</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/introducing-the-nvidia-dgx-gh200-grace-hopper-superchip-to-hyperstack" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/Thumbnail%202-2.png" alt="gh200 performance benchmarks" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;We're excited to announce that Hyperstack will soon be offering the revolutionary NVIDIA DGX GH200 Grace Hopper Superchip by request. This addition to our high-performance computing solutions can easily tackle massive AI and HPC workloads with unprecedented efficiency.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/introducing-the-nvidia-dgx-gh200-grace-hopper-superchip-to-hyperstack" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/Thumbnail%202-2.png" alt="gh200 performance benchmarks" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;We're excited to announce that Hyperstack will soon be offering the revolutionary NVIDIA DGX GH200 Grace Hopper Superchip by request. This addition to our high-performance computing solutions can easily tackle massive AI and HPC workloads with unprecedented efficiency.&lt;/p&gt;
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=26282475&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.hyperstack.cloud%2Ftechnical-resources%2Fperformance-benchmarks%2Fintroducing-the-nvidia-dgx-gh200-grace-hopper-superchip-to-hyperstack&amp;amp;bu=https%253A%252F%252Fwww.hyperstack.cloud%252Ftechnical-resources%252Fperformance-benchmarks&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Machine Learning</category>
      <category>LLM</category>
      <category>High-Performance Computing (HPC)</category>
      <category>Product Updates</category>
      <category>GPU Cloud</category>
      <pubDate>Wed, 03 Jul 2024 08:15:00 GMT</pubDate>
      <author>daman.preet@nexgencloud.com (Damanpreet Kaur Vohra)</author>
      <guid>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/introducing-the-nvidia-dgx-gh200-grace-hopper-superchip-to-hyperstack</guid>
      <dc:date>2024-07-03T08:15:00Z</dc:date>
    </item>
    <item>
      <title>NVIDIA H100-80GB-SXM5: Unparalleled AI &amp; HPC Performance</title>
      <link>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/introducing-the-nvidia-h100-80gb-sxm5-unparalleled-performance-for-ai-and-hpc-workloads</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/introducing-the-nvidia-h100-80gb-sxm5-unparalleled-performance-for-ai-and-hpc-workloads" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/Thumbnail%202-2.png" alt="h100 performance benchmarks" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p style="font-weight: normal;"&gt;We're thrilled to announce the upcoming addition of the NVIDIA H100-80GB-SXM5 GPU to our Hyperstack On-Demand offerings. This addition joins our existing NVIDIA H100-80GB-PCIe and H100-80GB-PCIe-NVLink options aiming to expand our high-performance computing solutions to meet the most demanding AI and HPC workloads.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/introducing-the-nvidia-h100-80gb-sxm5-unparalleled-performance-for-ai-and-hpc-workloads" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/Thumbnail%202-2.png" alt="h100 performance benchmarks" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p style="font-weight: normal;"&gt;We're thrilled to announce the upcoming addition of the NVIDIA H100-80GB-SXM5 GPU to our Hyperstack On-Demand offerings. This addition joins our existing NVIDIA H100-80GB-PCIe and H100-80GB-PCIe-NVLink options aiming to expand our high-performance computing solutions to meet the most demanding AI and HPC workloads.&lt;/p&gt;  
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=26282475&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.hyperstack.cloud%2Ftechnical-resources%2Fperformance-benchmarks%2Fintroducing-the-nvidia-h100-80gb-sxm5-unparalleled-performance-for-ai-and-hpc-workloads&amp;amp;bu=https%253A%252F%252Fwww.hyperstack.cloud%252Ftechnical-resources%252Fperformance-benchmarks&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>AI</category>
      <category>Machine Learning</category>
      <category>LLM</category>
      <category>High-Performance Computing (HPC)</category>
      <category>GPU Cloud</category>
      <category>H100</category>
      <pubDate>Tue, 02 Jul 2024 13:06:19 GMT</pubDate>
      <author>daman.preet@nexgencloud.com (Damanpreet Kaur Vohra)</author>
      <guid>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/introducing-the-nvidia-h100-80gb-sxm5-unparalleled-performance-for-ai-and-hpc-workloads</guid>
      <dc:date>2024-07-02T13:06:19Z</dc:date>
    </item>
    <item>
      <title>Technical Overview of RDMA, RoCE &amp; Performance in SR-IOV</title>
      <link>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/technical-overview-rdma-roce-and-performance-benchmarks-in-sr-iov</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/technical-overview-rdma-roce-and-performance-benchmarks-in-sr-iov" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/Thumbnail%202-2.png" alt="sr iov" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Welcome back to our SR-IOV series! In our previous post, we promised to cover the technical aspects of this technology. Today, we offer a comprehensive look at Remote Direct Memory Access (RDMA) and its implementations, along with some benchmark results. Let’s get started!&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/technical-overview-rdma-roce-and-performance-benchmarks-in-sr-iov" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/Thumbnail%202-2.png" alt="sr iov" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Welcome back to our SR-IOV series! In our previous post, we promised to cover the technical aspects of this technology. Today, we offer a comprehensive look at Remote Direct Memory Access (RDMA) and its implementations, along with some benchmark results. Let’s get started!&lt;/p&gt;
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=26282475&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.hyperstack.cloud%2Ftechnical-resources%2Fperformance-benchmarks%2Ftechnical-overview-rdma-roce-and-performance-benchmarks-in-sr-iov&amp;amp;bu=https%253A%252F%252Fwww.hyperstack.cloud%252Ftechnical-resources%252Fperformance-benchmarks&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Machine Learning</category>
      <category>LLM</category>
      <category>High-Performance Computing (HPC)</category>
      <category>Product Updates</category>
      <pubDate>Mon, 01 Jul 2024 08:32:03 GMT</pubDate>
      <author>daman.preet@nexgencloud.com (Damanpreet Kaur Vohra)</author>
      <guid>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/technical-overview-rdma-roce-and-performance-benchmarks-in-sr-iov</guid>
      <dc:date>2024-07-01T08:32:03Z</dc:date>
    </item>
    <item>
      <title>NVIDIA H100 PCIe vs SXM: Performance and Use Cases Compared</title>
      <link>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/comparing-nvidia-h100-pcie-vs-sxm-performance-use-cases-and-more</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/comparing-nvidia-h100-pcie-vs-sxm-performance-use-cases-and-more" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/Thumbnail%202-2.png" alt="h100 pcie vs sxm" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Wondering how the NVIDIA H100 PCIe compares to the SXM for AI workloads? This blog answers it upfront: SXM excels in multi-GPU training with NVLink, while PCIe offers flexibility for single-node setups. Using performance stats and real-world AI training examples, we reveal which GPU suits your workload, whether it’s LLM fine-tuning, inference, or high-performance compute tasks.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/comparing-nvidia-h100-pcie-vs-sxm-performance-use-cases-and-more" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/Thumbnail%202-2.png" alt="h100 pcie vs sxm" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Wondering how the NVIDIA H100 PCIe compares to the SXM for AI workloads? This blog answers it upfront: SXM excels in multi-GPU training with NVLink, while PCIe offers flexibility for single-node setups. Using performance stats and real-world AI training examples, we reveal which GPU suits your workload, whether it’s LLM fine-tuning, inference, or high-performance compute tasks.&lt;/p&gt;
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=26282475&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.hyperstack.cloud%2Ftechnical-resources%2Fperformance-benchmarks%2Fcomparing-nvidia-h100-pcie-vs-sxm-performance-use-cases-and-more&amp;amp;bu=https%253A%252F%252Fwww.hyperstack.cloud%252Ftechnical-resources%252Fperformance-benchmarks&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>AI</category>
      <category>Machine Learning</category>
      <category>Deep Learning</category>
      <category>High-Performance Computing (HPC)</category>
      <pubDate>Wed, 14 Feb 2024 08:38:50 GMT</pubDate>
      <author>daman.preet@nexgencloud.com (Damanpreet Kaur Vohra)</author>
      <guid>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/comparing-nvidia-h100-pcie-vs-sxm-performance-use-cases-and-more</guid>
      <dc:date>2024-02-14T08:38:50Z</dc:date>
    </item>
    <item>
      <title>NVIDIA A6000 vs A100: Performance, Cost &amp; Use Case Compared</title>
      <link>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/nvidia-a6000-vs-a100-performance-cost-and-use-case-comparison</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/nvidia-a6000-vs-a100-performance-cost-and-use-case-comparison" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/Thumbnail%202-1.png" alt="NVIDIA A6000 vs A100: Performance, Cost &amp;amp; Use Case Compared" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Deciding between the NVIDIA A6000 and A100? This guide compares performance, cost, and workload suitability. Using AI training and inference benchmarks, we highlight which GPU is better for tasks like LLM fine-tuning, rendering, or large-scale data processing. Quickly identify the optimal GPU for your project to balance speed, efficiency, and budget.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.hyperstack.cloud/technical-resources/performance-benchmarks/nvidia-a6000-vs-a100-performance-cost-and-use-case-comparison" title="" class="hs-featured-image-link"&gt; &lt;img src="https://www.hyperstack.cloud/hubfs/Thumbnail%202-1.png" alt="NVIDIA A6000 vs A100: Performance, Cost &amp;amp; Use Case Compared" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Deciding between the NVIDIA A6000 and A100? This guide compares performance, cost, and workload suitability. Using AI training and inference benchmarks, we highlight which GPU is better for tasks like LLM fine-tuning, rendering, or large-scale data processing. Quickly identify the optimal GPU for your project to balance speed, efficiency, and budget.&lt;/p&gt;
&lt;img src="https://track-eu1.hubspot.com/__ptq.gif?a=26282475&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.hyperstack.cloud%2Ftechnical-resources%2Fperformance-benchmarks%2Fnvidia-a6000-vs-a100-performance-cost-and-use-case-comparison&amp;amp;bu=https%253A%252F%252Fwww.hyperstack.cloud%252Ftechnical-resources%252Fperformance-benchmarks&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Market Insights</category>
      <category>AI</category>
      <category>Machine Learning</category>
      <category>NLP</category>
      <category>RTX A6000</category>
      <category>a100</category>
      <category>Automotive</category>
      <category>Financial Services</category>
      <category>Simulations &amp; Visualisations</category>
      <category>Architecture &amp; Engineering</category>
      <category>Healthcare &amp; Life Sciences</category>
      <category>Data Analytics</category>
      <category>Rendering</category>
      <category>Content Creation</category>
      <pubDate>Tue, 16 Jan 2024 12:11:17 GMT</pubDate>
      <author>daman.preet@nexgencloud.com (Damanpreet Kaur Vohra)</author>
      <guid>https://www.hyperstack.cloud/technical-resources/performance-benchmarks/nvidia-a6000-vs-a100-performance-cost-and-use-case-comparison</guid>
      <dc:date>2024-01-16T12:11:17Z</dc:date>
    </item>
  </channel>
</rss>
