A question for data center architects planning next-generation AI clusters: If your GPUs are liquid-cooled for efficiency, why are the SSDs in the same server still relying on fans? This hybrid cooling approach introduces "parasitic" power draw from fans, adds heat that must be removed from the facility, and often depends on evaporative cooling towers that consume significant volumes of clean water. With AI-driven water consumption projected to grow 11x by 2028, the storage tier is a meaningful piece of the efficiency equation. In this article, we explore how fanless, fully liquid-cooled storage solutions can help address the pressing power and water challenges facing modern data center infrastructure. https://lnkd.in/ddVPYZ8x
Solidigm
Computer Hardware Manufacturing
Rancho Cordova, California 56,403 followers
Expanding the possibilities of data that fuel human advancement
About us
Solidigm is a leading global provider of innovative NAND flash memory solutions. Solidigm technology unlocks data’s unlimited potential for customers, enabling them to fuel human advancement. Our origins reflect Intel’s longstanding innovation in memory products and SK hynix’s international leadership and scale in the semiconductor industry; Solidigm became a standalone U.S. subsidiary under SK hynix in December 2021. Headquartered in San Jose, CA, Solidigm is powered by the inventiveness of close to 2,000 employees in 20 locations around the world. For more information about Solidigm, please visit https://www.solidigmtechnology.com
- Website
- https://www.solidigm.com
- Industry
- Computer Hardware Manufacturing
- Company size
- 1,001-5,000 employees
- Headquarters
- Rancho Cordova, California
- Type
- Privately Held
- Founded
- 2021
Locations
- Primary
10951 White Rock Rd
Rancho Cordova, California 95670, US
Updates
-
"It feels like storage kind of got a promotion this year." - Ace Stryker on what #NVIDIAGTC 2026 meant for the role of storage in AI infrastructure. NVIDIA’s context memory platform creates an entirely new tier between GPUs and traditional storage, purpose-built for the KV cache data that agentic AI workloads generate at scale. Ace and WEKA's Betsy Chernoff sat down with SiliconANGLE & theCUBE to unpack why context memory is the next bottleneck, how persistent KV cache storage can deliver up to 6x improvement in token throughput, and what the global NAND shortage means for organizations running AI at scale. Watch their full convo: https://lnkd.in/g-ci5ENN
-
What happens when a technology company plants roots in Greater Sacramento? Local teams get to make a global impact! Solidigm's Rancho Cordova campus is home to the engineers, scientists, and innovators behind some of the industry's biggest breakthroughs in AI-era data storage. The ripple effect is reaching far beyond our walls. Hear from our team and city leaders on what this growth means for the region. Feat 🎥 Ashraf Abdelwly, Barry Broome, Greg Matson, Amanda Norton, Arica Schiffli, Avi Shetty, and Sebastian Uribe!
-
Yesterday was the ribbon-cutting ceremony for the Solidigm AI SSD Platform Development Center at our Solidigm Vancouver site! The expanded 50,000+ sq. ft. facility is home to more than 200 team members whose work covers everything from architecture and design to validation of our enterprise SSDs for AI servers. Co-CEO Xin Guo joined the celebration alongside government officials and local Solidigm team members. "This space is testimony of the focus of the Vancouver site team as the centerpiece of our engine for future growth," Guo shared. Each year, Solidigm contributes more than $40 million CAD to the Vancouver region, and it has welcomed 80+ interns to its workforce over the past four years. Congratulations #SolidigmVancouver! We're proud to be part of this community.
-
Solidigm reposted this
Quantum computing has lived in theory for decades. Now it’s pushing into engineering, manufacturing, and real-world impact. To celebrate World Quantum Day, Allyson Klein and Jeniece Wnorowski (Solidigm) sit down for a Data Insights episode with Nobel Prize winner and Co-Founder of Qolab John Martinis and ZeroPoint Technologies' Nilesh S. for a grounded, deeply technical conversation on where quantum stands today and what comes next.

A few moments stood out. Martinis reflects on work from the 1980s that helped prove macroscopic quantum effects could exist in electrical systems. That early research now sits at the core of modern quantum computing.

The conversation moves quickly into scale. Not just qubits, but manufacturing, packaging, cost, and reliability. The shift is clear. Quantum is no longer only a physics problem. It is a systems problem.

There’s also a sharp look at integration with AI infrastructure. Today, GPUs and classical systems are supporting quantum through error correction and simulation. Over time, that relationship could flip.

Three themes define the discussion:
• Exponential performance gains that change what is computable
• A growing urgency around cryptography and preparedness
• The push toward manufacturing methods that make quantum viable at scale

The timeline remains uncertain. It could be years. It could surprise us. That uncertainty is exactly why leaders across industries are paying attention now. Listen to the full episode on TechArena, and join the conversation shaping what computing looks like next: https://lnkd.in/eJE6d_h6 #QuantumComputing #WorldQuantumDay
-
The next leap in AI will depend on how efficiently data can be moved, stored, and accessed. GPUs are only as effective as the data pipelines that feed them, a reality that was impossible to ignore at #NVIDIAGTC this year. In our latest piece, Ace Stryker, Director of AI and Ecosystem Marketing at Solidigm, shares key takeaways from conversations across the show floor. As inference workloads grow and AI expands into real-world applications, storage performance, latency, and capacity are becoming central to AI infrastructure strategy. Read more: https://lnkd.in/gbpVS86A
-
Storage is becoming one of the defining factors in how AI infrastructure scales and performs. In this conversation from SiliconANGLE & theCUBE and NYSE Wired, Avi Shetty, VP of AI Ecosystem, Solutions, and Market Enablement at Solidigm, joined industry leaders at #NVIDIAGTC to explore how the shift from training to inference is reshaping the role of storage across the entire data pipeline. Watch the full convo: https://lnkd.in/eSGvM2aw
-
Inference is where AI turns into value, and that value depends on keeping GPUs fed. A GPU is like a race car: speed isn't the issue; keeping it moving at speed is. As context grows and becomes persistent, it has to stay close to compute. That's why a new context storage tier is emerging inside the pod. Watch the full video on our website to hear Solidigm AI Applied Research Lead Jeff Harthorn explain more: https://bit.ly/4dx7tDf
-
When we say data, sometimes it shows up as something far more personal, like a lifeline. For Tiffany Grady, that lifeline once lived in a binder, created 20 years ago after her son’s autism diagnosis, when obtaining critical information required immense research, persistence, and personal organization. Today, AI technologies and advancements in data storage are transforming lives, moving us from physical binders to immediate, life-changing insights that support families and fuel medical breakthroughs. Through the lens of her lived experience, Tiffany envisions the incredible opportunity ahead for both Solidigm and the broader technology industry. She knows that when we say data, we are talking about empowering all the "MacGyvers" in every industry (and family) using data to solve the unsolvable. Watch the first installment of our series "When We Say Data: People Meet Purpose." #WWSD #TeamSolidigm
-
Enterprise AI needs to be flexible, and so does the storage powering it. 🎥 Filmed on the show floor at #NVIDIAGTC, this quick convo walks us through how Solidigm D7-PS1010 PCIe Gen 5 SSDs integrate into MiTAC Computing's next-gen NVIDIA MGX-based server platforms, built for #AI training, inference, and #RAG workloads at scale.