Cloud Computing Solutions

Explore top LinkedIn content from expert professionals.

  • View profile for Melanie Nakagawa
    Melanie Nakagawa is an Influencer

    Chief Sustainability Officer @ Microsoft | Combining technology, business, and policy for change

    106,280 followers

    The next era of datacenters is here. The demand for AI is growing rapidly, and with it comes the need to grow the cloud’s physical footprint. Historically, datacenters have been water-intensive and have required large amounts of higher-carbon materials like steel. At Microsoft, we're building datacenters with sustainability in mind, and we're constantly innovating to find new ways to reduce our environmental impact. This includes:

    🤝 A first-of-its-kind agreement with Stegra, backed by an investment from Microsoft’s Climate Innovation Fund (CIF) in 2024, to procure near zero-emissions steel from Stegra’s new plant in Boden, Sweden, for use in our datacenters. Powered by renewable energy and green hydrogen, Stegra's facility reduces CO2 emissions by up to 95% versus conventional steel production. By committing to purchase this green steel before it rolls off the line, Microsoft is sending a clear market signal, driving demand for cleaner materials and supporting Stegra’s growth.

    💧 We also announced a major breakthrough to make our datacenters more sustainable: microfluidic in-chip cooling technology. Unlike traditional cold plates that sit atop chips, microfluidics brings cooling right inside the silicon itself. Engineers carve microscopic channels directly into the chip, letting liquid coolant flow through and absorb heat exactly where it’s generated. This approach is up to three times more effective than current methods. More efficient cooling allows datacenters to support powerful next-gen AI chips without ramping up energy use or investing in costly new gear.

    💵 Through our CIF investments, we’ve catalyzed billions in follow-on capital for breakthrough solutions in low-carbon materials, sustainable fuels, carbon removal, and more. We just released a new whitepaper – Building Markets for Sustainable Growth – that distills five key lessons on how catalytic investment and partnership can move markets and accelerate a global transition in energy, waste, water, and ecosystems.

    Our journey toward sustainable datacenters is only beginning, and we recognize true progress requires collective action and investment. Read more from Building Markets for Sustainable Growth: https://msft.it/6041sq9xD

  • View profile for Sean Connelly🦉
    Sean Connelly🦉 is an Influencer

    Zscaler | Former CISA Zero Trust Director & TIC Program Manager | Co-author, NIST SP 800-207 | Co-author, CISA Zero Trust Maturity Model

    22,379 followers

    🚨 CISA & NSA release crucial guide on network segmentation and encryption in cloud environments 🚨

    In response to the evolving requirements of cloud security, the Cybersecurity & Infrastructure Security Agency (CISA) and the National Security Agency (NSA) recently released a comprehensive Cybersecurity Information Sheet (CSI): "Implement Network Segmentation and Encryption in Cloud Environments." This document provides detailed recommendations to enhance the security posture of organizations operating within cloud infrastructures (that probably means you).

    Key takeaways include:
    🔐 Network Encryption: The document underscores the importance of encrypting data in transit as a defense mechanism against unauthorized data access.
    🌐 Secure Client Connections: Establishing secure connections to cloud services is fundamental.
    🔎 Caution on Traffic Mirroring: While recognizing the benefits of traffic mirroring for network analysis and threat detection, the guidance cautions against potential misuse that could lead to data exfiltration and advises careful monitoring of this feature.
    🛡️ Network Segmentation: Stressed as a foundational security principle, network segmentation is recommended to isolate and contain malicious activity, thereby reducing the impact of any breach.

    This collaboration between NSA and CISA provides actionable recommendations for organizations to strengthen their cloud security practices. The emphasis is on strategically implementing network segmentation and end-to-end encryption to secure cloud environments effectively. Information security leaders are encouraged to review this guidance to better understand the measures necessary to protect cloud-based assets. Implementing these recommendations will contribute to a more secure, resilient, and compliant cloud infrastructure. Access the complete guidance provided by the NSA and CISA to fully understand these recommendations and their application to your organization’s cloud security strategy.

    📚 Read CISA & NSA's complete guidance here: https://lnkd.in/eeVXqMSv

    #cloudcomputing #technology #informationsecurity #innovation #cybersecurity
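
    The guidance itself is vendor-neutral, but the two headline controls are easy to make concrete. Below is a minimal Python sketch, assuming an AWS account with boto3 credentials configured; it flags security groups open to 0.0.0.0/0 (a segmentation gap) and S3 buckets whose policies do not deny non-TLS requests. It only illustrates the themes of the CSI and is not taken from the document.

```python
"""
Illustrative sketch only: flag wide-open security group ingress and S3 buckets
without a TLS-only bucket policy. Assumes boto3 credentials for the account
being checked; it is not part of the CISA/NSA guidance.
"""
import json

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Segmentation check: ingress rules open to the whole internet.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
            print(f"[open ingress] {sg['GroupId']} ({sg.get('GroupName')}) "
                  f"ports {rule.get('FromPort')}-{rule.get('ToPort')}")

# Encryption-in-transit check: does a bucket policy deny non-TLS requests?
def requires_tls(bucket: str) -> bool:
    try:
        policy = json.loads(s3.get_bucket_policy(Bucket=bucket)["Policy"])
    except ClientError:
        return False  # no bucket policy at all, so nothing rejects plain-HTTP access
    return any(
        stmt.get("Effect") == "Deny"
        and stmt.get("Condition", {}).get("Bool", {}).get("aws:SecureTransport") == "false"
        for stmt in policy.get("Statement", [])
    )

for bucket in s3.list_buckets()["Buckets"]:
    if not requires_tls(bucket["Name"]):
        print(f"[no TLS-only policy] {bucket['Name']}")
```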

  • View profile for Brij kishore Pandey
    Brij kishore Pandey is an Influencer

    AI Architect | AI Engineer | Generative AI | Agentic AI

    710,166 followers

    System design interviews can be a daunting part of the hiring process, but being prepared with the right knowledge makes all the difference. This System Design Cheat Sheet covers essential concepts that every engineer should know when tackling these types of questions.

    Key Areas to Focus On:

    1. Data Management:
       - Cache: Boost read operation speeds with caching mechanisms like Redis or Memcached.
       - Blob/Object Storage: Efficiently handle large, unstructured data using systems like S3.
       - Data Replication: Ensure data reliability and fault tolerance through replication.
       - Checksums: Safeguard data integrity during transmission by detecting errors.
    2. Database Selection:
       - RDBMS/SQL: Best for structured data with strong consistency (ACID properties).
       - NoSQL: Ideal for large volumes of unstructured or semi-structured data (MongoDB, Cassandra).
       - Graph DB: For interconnected data like social networks and recommendation engines (Neo4j).
    3. Scalability Techniques:
       - Database Sharding: Partition large datasets across multiple databases for scalability.
       - Horizontal Scaling: Scale out by adding more servers to distribute the load.
       - Consistent Hashing: A technique for efficient distribution of data across nodes, essential for load balancing (see the sketch after this post).
       - Batch Processing: Use when handling large amounts of data that can be processed in chunks.
    4. Networking:
       - CDN: Distribute content globally for faster access and lower latency (e.g., Cloudflare, Akamai).
       - Load Balancer: Spread traffic across multiple servers to ensure high availability.
       - Rate Limiter: Prevent overloading by controlling the rate of incoming requests.
       - Redundancy: Design systems to avoid single points of failure by duplicating components.
    5. Protocols & Queues:
       - Message Queues: Asynchronous communication between microservices, ideal for decoupling services (RabbitMQ, Kafka).
       - API Gateway: Control API traffic, manage rate limiting, and provide a single point of entry for your services.
       - Gossip Protocol: Efficient communication in distributed systems by periodically exchanging state information.
       - Heartbeat Mechanism: Monitor the health of nodes in distributed systems.
    6. Modern Architecture:
       - Containerization (Docker): Package applications and dependencies into containers for consistency across environments.
       - Serverless Architecture: Run functions in the cloud without managing servers, focusing entirely on the code (e.g., AWS Lambda).
       - Microservices: Break down monolithic applications into smaller, independently scalable services.
       - REST APIs: Build lightweight, maintainable services that interact through stateless API calls.
    7. Communication:
       - WebSockets: Real-time, bi-directional communication between client and server, commonly used in chat applications, live updates, and collaborative tools.

    Save this post and use it as a quick reference for your next system design challenge!
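
    Of the scalability techniques above, consistent hashing is the one interviewers most often ask candidates to sketch. Here is a minimal, illustrative Python version (class and node names are hypothetical, not from the cheat sheet): keys map to the first node clockwise on a hash ring, virtual nodes smooth the distribution, and adding a node remaps only a small share of keys.

```python
"""
Minimal consistent-hashing sketch; names are illustrative, not from the post.
Keys map to the first node clockwise on a hash ring; virtual nodes smooth the
distribution, and adding a node only remaps a small share of keys.
"""
import bisect
import hashlib


def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class ConsistentHashRing:
    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes   # virtual points per physical node
        self._ring = []        # sorted list of (hash, node) points
        for node in nodes:
            self.add_node(node)

    def add_node(self, node: str) -> None:
        for i in range(self.vnodes):
            bisect.insort(self._ring, (_hash(f"{node}#{i}"), node))

    def get_node(self, key: str) -> str:
        # First ring point at or after the key's hash, wrapping around at the end.
        idx = bisect.bisect_left(self._ring, (_hash(key), ""))
        return self._ring[idx % len(self._ring)][1]


# Usage: only a fraction of keys move when a fourth cache node joins the ring.
ring = ConsistentHashRing([f"cache-{i}" for i in range(1, 4)])
before = {f"user:{i}": ring.get_node(f"user:{i}") for i in range(1000)}
ring.add_node("cache-4")
moved = sum(ring.get_node(k) != v for k, v in before.items())
print(f"{moved}/1000 keys remapped after adding a node")
```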

  • View profile for David Steenhoek

    Think Quantum | Creator | OUTlier | AI Evangelist | Observer | Filmmaker | Tech Founder | Investor | Artist | Blockchain Maxi | Ex: Chase Bank, Mosaic, LAUSD, DC. WE build a better 🌎 2Gether. Question Everything B Kind

    10,740 followers

    Japan has placed a real quantum computer online, letting people worldwide access advanced computing power through the internet today. This moment signals a shift where quantum machines move from labs into shared global use. Researchers, students, and developers can now interact with real quantum hardware without traveling or owning expensive systems. It turns a distant concept into a practical tool available with a connection.

    Unlike traditional computers that use bits, quantum computers use qubits, which can exist in multiple states at once. This allows certain problems to be explored in ways classical machines cannot match. Japan’s system is carefully controlled, offering guided access so users can learn, test, and experiment responsibly while protecting the delicate hardware from misuse or overload.

    This step matters because access changes innovation. When tools are shared, ideas grow faster. Students can practice on real systems, researchers can compare results, and small teams can test concepts without massive funding. It lowers barriers and spreads knowledge beyond elite labs into classrooms, startups, and curious minds across the world.

    The system does not replace everyday computers, and it will not instantly solve all problems. Quantum machines are specialized and still developing. But each real-world use teaches engineers how to improve stability, accuracy, and scale. Progress comes through use, feedback, and patience, not hype or shortcuts.

    Moments like this show technology becoming more open and collaborative. Japan’s move invites the world to learn together and shape the future carefully. Quantum computing promises new ways to study materials, security, and nature itself. Giving global access builds trust, curiosity, and shared progress. It reminds us that science advances best when knowledge is opened, not hidden, and when powerful tools are guided by responsibility, learning, and cooperation for the benefit of everyone everywhere.

  • View profile for Rohit M S

    AWS Certified DevOps and Cloud Computing Engineer

    1,517 followers

    I reduced our annual AWS bill from ₹15 Lakhs to ₹4 Lakhs — in just 6 months.

    Back in October 2024, I joined the company with zero prior industry experience in DevOps or Cloud. The previous engineer had 7+ years under their belt. Just two weeks in, I became solely responsible for our entire AWS infrastructure. Fast forward to May 2025, and here’s what changed:

    ✅ ECS costs down from $617 to $217/month — 🔻64.8%
    ✅ RDS costs down from $240 to $43/month — 🔻82.1%
    ✅ EC2 costs down from $182 to $78/month — 🔻57.1%
    ✅ VPC costs down from $121 to $24/month — 🔻80.2%
    💰 Total annual savings: ₹10+ Lakhs

    If you’re working in a startup (or honestly, any company) that’s using AWS without tight cost controls, there’s a high chance you’re leaving thousands of dollars on the table. I broke everything down in this article — how I ran load tests, migrated databases, re-architected the VPC, cleaned up zombie infrastructure, and built a culture of cost-awareness.

    🔗 Read the full article here: https://lnkd.in/g99gnPG6

    Feel free to reach out if you want to chat about AWS, DevOps, or cost optimization strategies!

    #AWS #DevOps #CloudComputing #CostOptimization #Startups
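
    As a flavor of the "zombie infrastructure" cleanup step, here is an illustrative sketch (not the author's actual tooling) that uses boto3 to list two common idle-but-billing resources in the current account and region: unattached EBS volumes and unassociated Elastic IPs.

```python
"""
Illustrative sketch, not the author's actual tooling: list two common "zombie"
resources that keep billing while idle. Assumes boto3 credentials and targets
the default region configured for the session.
"""
import boto3

ec2 = boto3.client("ec2")

# Unattached EBS volumes: status "available" means no instance is using them.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for vol in volumes:
    print(f"unattached volume {vol['VolumeId']}: {vol['Size']} GiB, "
          f"created {vol['CreateTime']:%Y-%m-%d}")

# Elastic IPs with no association still incur an hourly charge.
for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr:
        print(f"idle Elastic IP {addr['PublicIp']} "
              f"(allocation {addr.get('AllocationId')})")
```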

  • View profile for Raul Junco

    Simplifying System Design

    135,145 followers

    Every developer should know that tenant isolation is not a database problem. It’s a blast-radius problem.

    I learned this the hard way. One missing tenant filter. That’s all it takes to turn a normal deploy into a security incident. Every multi-tenant system eventually picks one of three isolation levels. Each one trades safety, cost, and operational pain in different ways.

    1. Database per tenant
    This is the strongest isolation you can get. Each tenant lives in its own database. No shared tables. No shared state. The upside is obvious. A bug in one tenant cannot leak data from another. Audits are simpler. Compliance conversations are shorter. When something breaks, the blast radius stays small. The downside shows up later. Operational overhead grows fast. You manage hundreds or thousands of databases. Migrations become orchestration problems. Costs scale with tenant count, not usage. This model works when tenants are large, regulated, or high-risk. It breaks down when you try to apply it blindly to long-tail customers.

    2. Schema per tenant
    This is the middle ground most teams underestimate. All tenants share a database, but each one gets a separate schema. Tables stay isolated, but infrastructure stays manageable. You get clearer boundaries than row-level isolation. You avoid the explosion of databases. Audits remain reasonable. Most accidental data leaks disappear. But complexity still creeps in. Migrations must run across many schemas. Cross-tenant reporting becomes awkward. Automation is not optional anymore. Without it, this model collapses under its own weight. This approach works well when tenants vary in size and you want isolation without full separation.

    3. Row-level isolation
    This is the cheapest and most dangerous option. All tenants share the same tables. Isolation lives in a tenant_id column and your queries. Infrastructure stays simple. Costs stay low. Scaling is easy. The risk is brutal. One missing filter equals a data leak. One refactor can break isolation. One rushed hotfix can expose everything. Security depends on every layer doing the right thing every time. This model only works when you add heavy guardrails: strict query scoping, database policies, service-level enforcement, and tests that actively try to cross tenant boundaries (see the sketch after this post). Without those, you’re betting the company on discipline.

    Tenant isolation is not a storage choice. It’s a trust decision. Learn this, it's a classic interview question.
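
    A minimal sketch of the service-level guardrail idea for the row-level model, using sqlite3 only to stay self-contained (table, class, and tenant names are illustrative, not from the post): every query runs through a tenant-scoped helper that appends the tenant filter itself, so isolation no longer depends on each call site remembering it. In production you would pair this with database-level policies such as Postgres row-level security, as the post suggests.

```python
"""
Minimal sketch of a service-level guardrail for the row-level model; table,
class, and tenant names are illustrative, not from the post. sqlite3 is used
only to keep the example self-contained.
"""
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (id INTEGER, tenant_id TEXT, amount REAL)")
db.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
               [(1, "acme", 100.0), (2, "acme", 250.0), (3, "globex", 999.0)])


class TenantScope:
    """Every query issued through this handle gets the tenant filter appended."""

    def __init__(self, conn, tenant_id):
        self.conn, self.tenant_id = conn, tenant_id

    def query(self, sql, params=()):
        if "where" in sql.lower():
            raise ValueError("this minimal helper owns the WHERE clause; extend it for extra filters")
        return self.conn.execute(f"{sql} WHERE tenant_id = ?",
                                 (*params, self.tenant_id)).fetchall()


acme = TenantScope(db, "acme")
print(acme.query("SELECT id, amount FROM invoices"))  # only acme's rows: [(1, 100.0), (2, 250.0)]
# globex's invoice (id 3) is unreachable through this handle.
```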

  • View profile for Alex Banks
    Alex Banks is an Influencer

    Building a better future with AI

    187,857 followers

    I just got insider access to Oracle’s AI strategy. A $300 billion bet on AI infrastructure:

    Oracle has been the enterprise backbone for decades. But today they’re accelerating across infrastructure, platforms, and applications:
    → $300B Stargate partnership with OpenAI
    → 4.5 gigawatts of compute power over 5 years
    → Training ground for Grok, ChatGPT, and other frontier models

    Every major AI company is training on Oracle Cloud Infrastructure:
    • OpenAI
    • xAI (Grok)
    • Meta
    • Anthropic

    Oracle isn't racing to build their own foundation model. They're staying model-agnostic. Why? Because models are evolving rapidly. By partnering with the best (OpenAI, Anthropic, Meta), Oracle gives customers choice without the complexity.

    My takeaway: I sat down with Gary Miller, Oracle's Customer Success Officer, at Oracle AI World this week. What became abundantly clear is that while everyone races to build the best model, Oracle is building the layer underneath that makes all models possible. They're playing a different game entirely. The infrastructure game compounds. The model game doesn't. Oracle has been the enterprise backbone for 45 years, and this feels like their next evolution into something much bigger.

    The AI era needs a foundation. Oracle is building it.

    Follow me Alex Banks for daily AI highlights and insights.

  • View profile for Oron Gill Haus
    Oron Gill Haus is an Influencer
    42,878 followers

    Excited to share insights from our latest Next at Chase blog post by Praveen Tandra and Sudhir Rao, where we dive into the transformative journey of migrating our data ecosystem from Hadoop to AWS. This shift is a game-changer for our data strategy, addressing tech debt and setting the stage for future innovation.

    Key insights from our journey:
    • Migration Milestone: We're moving our Data Lake from on-premises Hadoop to AWS, embracing a flexible and future-proof cloud solution.
    • Tackling Tech Debt: Addressing challenges like data duplication, metadata drift, and platform incompatibilities to streamline our data processes.
    • Adopting Open Standards: Transitioning to Apache Parquet for efficient, open-format data storage, enhancing interoperability and performance.
    • Project Metafix: A collaborative effort to reconcile and adapt decades-old metadata, ensuring seamless migration and data integrity.
    • Lineage 2.0: Mapping data movement end-to-end, providing a clear view of data assets across legacy and target platforms.

    None of this may be groundbreaking, but for a 225-year-old company, this migration is more than just a tech upgrade—it's a strategic leap forward in how we manage and utilize petabytes of data at Chase. Stay tuned for part two, as we continue to share our journey and the innovations driving our data transformation. So proud of all of our teams driving this forward, boom!

    Question for you: How do you see cloud migration impacting the future of data management? Share your thoughts below!

    #DataTransformation #Innovation
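
    To make the "Adopting Open Standards" point concrete, here is a small, hypothetical pyarrow example (file and column names are assumptions, and this is not Chase's actual pipeline) that converts a legacy CSV extract to Apache Parquet and reads back only the columns a query needs.

```python
"""
Not Chase's pipeline -- just a minimal illustration of the open-standards point:
converting a legacy CSV extract to Apache Parquet with pyarrow. File names and
columns are hypothetical.
"""
import pyarrow.csv as pv
import pyarrow.parquet as pq

# Read a legacy CSV extract (schema is inferred; an explicit schema can also be passed).
table = pv.read_csv("transactions_2024.csv")

# Write columnar, compressed Parquet -- smaller on disk and readable by Spark,
# Trino, Athena, DuckDB, etc., which is the interoperability win mentioned above.
pq.write_table(table, "transactions_2024.parquet", compression="zstd")

# Column pruning: read back only the columns a query needs.
subset = pq.read_table("transactions_2024.parquet", columns=["account_id", "amount"])
print(subset.num_rows, subset.schema)
```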

  • View profile for Dunith Danushka

    Technical Product Marketing at EDB | Author of “Practical Data Engineering with Apache Projects”

    6,744 followers

    💡 There’s an interesting trend I’ve observed with organizations recently: they are choosing to save money and simplify their operations by using slower but cheaper storage systems. This is especially true when they handle large amounts of data and sub-second latency isn't critical. Let’s find out what’s motivating this.

    Data loses its value over time. Once data becomes older and rarely accessed, real-time performance becomes less crucial. While developers need to access historical data for analysis, ad hoc queries, and compliance requirements, they can accept some latency. Their priority now shifts to storing this older data most cost-effectively and efficiently. Compute-storage decoupling is something we inherited from the Hadoop era, allowing storage systems to use tiered storage for improved cost-efficiency and scalability.

    ✳️ Object stores became the de facto tiered storage
    Amazon S3 was officially launched in 2006. Almost 20 years later and with trillions of objects stored, we now have reliable, effectively infinite storage. People started to call this cheap, infinitely scalable storage a Data Lake (or Lakehouse nowadays). For developers, it offers a simple path to disaster recovery. When you upload a file to S3, you immediately get eleven nines of durability—that's 99.999999999%. To put this in perspective: if you store 10,000 objects, you might lose just one in 10 million years. As object stores like S3 become more affordable, databases and OLAP systems have increasingly utilized deep object storage to enhance cost efficiency and durability. For example, PGAA, EDB’s analytics extension for Postgres, allows you to query hot data and cold data with a single dedicated node, ensuring optimal performance by automatically offloading cold data to columnar tables in object storage and reducing the complexity of managing analytics over multiple data tiers.

    ✳️ Not only databases, but streaming data platforms are evolving too
    Redpanda and WarpStream show how modern streaming platforms can save money while maintaining good performance. They do this by using a mix of fast local storage (SSDs) for quick access and cloud storage for most of their data, avoiding costly cross-AZ data transfers.

    ✳️ Why not make the object stores Iceberg compatible?
    That would transform simple storage solutions into powerful data management systems like data lakehouses. This compatibility brings essential features like schema evolution, time travel capabilities, ACID transactions, and performance optimizations—all while maintaining the cost benefits of object storage. It gives organizations the flexibility to choose their own query engine and catalog, making data platforms more modular and composable.
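
    As one concrete form of tiering, the sketch below sets an S3 lifecycle rule with boto3 that moves aging objects to cheaper storage classes. The bucket name, prefix, and day thresholds are assumptions for illustration, not recommendations from the post.

```python
"""
Illustrative sketch (bucket name, prefix, and day thresholds are assumptions):
an S3 lifecycle rule that implements the "data loses value over time" idea by
tiering objects to cheaper storage classes as they age.
"""
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="analytics-cold-data",          # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-aging-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "events/"},
                "Transitions": [
                    # Rarely read after a month: move to Infrequent Access.
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    # Kept mainly for compliance after a year: move to Glacier.
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```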

  • View profile for Chris Thomas

    US Hybrid Cloud Infrastructure Leader at Deloitte

    5,764 followers

    Modern data center strategy has become a strategic differentiator in the AI era. Leaders can no longer rely on hybrid-by-default environments shaped by fragmented cloud, colocation, and on-premises decisions. Instead, a deliberate, hybrid-by-design approach is now essential to scale innovation, manage risk, and enhance value across cloud, on-premises, colocation, and edge.

    In our latest Deloitte perspective (https://deloi.tt/4rkttVw), my colleagues Lou DiLorenzo, Jagjeet Gill, Heather Rangel, and I outline practical steps for leaders driving this shift, including:
    🟢 Intentional workload placement based on latency, control, data sovereignty, economics, and resiliency needs
    🟢 Strategic segmentation of AI-intensive workloads to manage compute, power, and cooling demands
    🟢 Transparent economics that tie infrastructure cost to business value
    🟢 Built-in governance across hybrid environments through standardized controls and automation

    The goal is not incremental modernization, but intentional architecture that turns complexity into advantage and enables resilient, responsible AI at scale.

    Proud of our team's work in helping organizations build forward-thinking data center strategies and leading our hybrid infrastructure managed services, led by Erin Abbey, Rahul Bajpai, Micah Bible, Megan Ellis, Christian Grant, Kelly Marchese, Nicholas Merizzi, and Myke Miller. Let me know if building a hybrid-by-design strategy is top of mind for your organization in 2026; would love to connect!
