Advancing AI Development

Explore top LinkedIn content from expert professionals.

  • View profile for Christophe Fouquet

    Chief Executive Officer, ASML

    57,794 followers

    AI holds great potential for the semiconductor industry and will kick-start the next round of innovation for faster, cheaper and more energy-efficient computation – that was my message today at SPIE Advanced Lithography + Patterning. I discussed the potential and the challenges that AI holds for our industry.

    The potential is clearly huge. AI is rapidly being integrated into applications, and high-performance compute is expected to underpin growth towards $1 trillion of semiconductor sales by 2030. The challenges are around the computing needs of AI models and the related energy consumption. The compute workload of training a leading AI model has increased 16x every 2 years in recent years – much faster than the increase in computing power delivered by Moore’s law, which is about 2x every 2 years. The energy needed to train a leading model has not grown as steeply, but still rose 10x every 2 years.

    This computing need has been met by building supercomputers and massive data centers. If you extrapolate these trends, training a leading AI model would need the entire worldwide electricity supply in about 10 years. That’s clearly not realistic, so the trend has to break – through more efficient training algorithms and more efficient chips. In other words, the needs of AI will stimulate immense innovation in chip design and manufacturing – and the potential value of AI to our society will put urgency and funding behind that drive.

    As a consequence, chip makers are pulling all levers to accelerate semiconductor scaling. This includes lithographic “2D” scaling: shrinking the dimensions of transistors to pack more into a square millimeter. It will also include “3D” integration, with innovations like backside power delivery, transistor designs like gate-all-around, and stacking chips in the package – where holistic lithography will play a critical role in meeting performance requirements.

    ASML will support these trends through a comprehensive, holistic lithography portfolio. Our 0.33 NA/0.55 NA EUV lithography systems allow chip makers to shrink dimensions at the lowest possible cost on their critical layers, while tightly matched and highly productive DUV systems will continue to reduce cost. More than ever, metrology and inspection tools – whose data is fed into lithography control solutions that keep the patterning process operating within tight specs to deliver the highest possible production yields – will be essential to deliver 2D scaling and 3D integration processes. 3D integration requires wafer-to-wafer bonding, and we have demonstrated the capability to map the stresses and distortions that bonding creates and to compensate for them, reducing overlay errors for post-bonding patterning by 10x or more.

    It was a pleasure catching up with the industry’s lithography and patterning experts in San Jose. I’m excited to see our collective innovation power having a go at these challenges. Together, we will push technology forward.
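    To make the extrapolation concrete, here is a back-of-the-envelope sketch in Python. Only the 10x-per-2-years growth rate comes from the post; the starting energy per training run and the world electricity figure are illustrative assumptions of mine.

    ```python
    # Back-of-the-envelope version of the extrapolation above. The growth rate
    # (10x every 2 years) is from the post; the starting energy and the world
    # electricity supply are illustrative assumptions.
    import math

    energy_today_gwh = 100.0      # assumed: energy to train one leading model today
    growth_per_2yr = 10.0         # from the post: 10x every 2 years
    world_supply_gwh = 3.0e7      # assumed: ~30,000 TWh/year of global electricity

    # Solve energy_today * growth^(years / 2) = world_supply for years.
    years = 2 * math.log(world_supply_gwh / energy_today_gwh, growth_per_2yr)
    print(f"Training energy would match world electricity supply in ~{years:.0f} years")
    ```

    With these assumed inputs the crossover lands at roughly a decade, matching the order of magnitude in the post.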

  • View profile for Thomas Dohmke

    Entrepreneur

    112,749 followers

    Build AI applications right where you manage your code. With GitHub Models, now more than 100 million developers can access and experiment with top AI models where their workflow is—directly on GitHub.

    From the early days of the home computer, the dominant mode of creation for developers has long been building, customizing, and deploying software with code. Today, in the age of AI, a second and equally important mode of creation is rapidly emerging: the ability to leverage machine learning models. Increasingly, developers are building generative AI applications where the full stack contains backend and frontend code plus one or more machine learning models. With GitHub Models, developers can now explore these models on github.com, integrate them into their dev environment in Codespaces and VS Code, and leverage them during CI/CD in Actions – all simply with their GitHub account and free entitlements.

    GitHub Models also marks another transformational journey for GitHub. From the creation of AI through open source collaboration, to the creation of software with the power of AI, to enabling the rise of the AI engineer with GitHub Models – GitHub is the creator network for the age of AI. In the years ahead, we will continue to democratize access to AI technologies to generate a groundswell of one billion developers. By doing so, we will enable 10% of the world’s population to build and advance breakthroughs that will accelerate human progress for all.
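    As a rough illustration of the "models where your workflow is" idea, here is a minimal sketch of calling a model through GitHub Models using only a GitHub token. The endpoint URL and model name follow GitHub's published samples, but treat both as assumptions to verify against the current documentation.

    ```python
    # Minimal sketch: calling a model through GitHub Models with a GitHub token.
    # The endpoint and model name follow GitHub's published samples, but treat
    # both as assumptions and verify against the current documentation.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://models.inference.ai.azure.com",  # GitHub Models endpoint (assumed)
        api_key=os.environ["GITHUB_TOKEN"],                # a GitHub token, not an OpenAI key
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed catalog model name
        messages=[{"role": "user", "content": "In one sentence, what is GitHub Models?"}],
    )
    print(response.choices[0].message.content)
    ```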

  • View profile for Markus J. Buehler

    McAfee Professor of Engineering at MIT

    29,114 followers

    Deep stuff! We uncovered a startling link between #entropy, a bedrock concept in #physics, and how #AI can discover new ideas without stagnating. In an era where reasoning models can reflect on problems for days at a time (rather than generating quick, single-step solutions), our study shows how semantic entropy (the spread of meanings) and structural entropy (how evenly the links between concepts generated by the AI are distributed) together hold the secret to ongoing exploration as the model thinks through a problem. Specifically, we measured structural entropy using Von Neumann graph entropy (applied to the adjacency Laplacian), while semantic entropy came from a similarity matrix over deep language embeddings.

    The key insight? Although semantic entropy consistently outpaces structural entropy, they remain in a near-critical balance—fueling "surprising edges" that introduce relationships between distant concepts. This mirrors physical systems on the brink of a phase transition, where a little bit of "disorder" keeps the process dynamic yet avoids chaos. The result is an AI that doesn’t just keep pace with known solutions but actively creates new pathways of thought over extended “thinking” sessions.

    As reasoning models become ever more capable—undertaking extended, multi-day "thought processes"—understanding these fundamental principles is crucial. By weaving these insights into reinforcement learning strategies, we can reward models not just for correctness, but for venturing into novel conceptual ground. This opens the door to AI systems that actively cultivate new insights, rather than settling into narrow patterns or endlessly rehashing the same knowledge.

    Going Deeper

    When physicists describe entropy, they refer to the measure of "disorder" in a system: the number of ways particles can rearrange without altering the system’s energy. Yet entropy transcends molecules and heat. In this research, it emerges as the engine that drives AI reasoning models to keep generating fresh ideas over extended periods. The observed dynamics as the AI thinks about a problem reflect self-organized criticality—a state where systems hover between rigid order and random chaos. Much like a sand pile teetering on the edge of collapse, the AI preserves enough organizational structure to remain coherent, yet stays flexible enough to generate unexpected leaps in meaning. The fraction of "surprising edges" remains stable, offering evidence that the model naturally integrates new, distant ideas without toppling into confusion.
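    For readers who want to see the structural-entropy side concretely, below is a minimal sketch of Von Neumann graph entropy computed from a graph Laplacian, using the standard definition (density matrix rho = L / tr(L); entropy S = -sum over the eigenvalues of rho of lambda_i log lambda_i). This is my reconstruction of the textbook measure named in the post, not the study's actual code.

    ```python
    # Von Neumann graph entropy of a small concept graph, per the standard
    # definition: rho = L / trace(L); S = -sum(l_i * log l_i) over eigenvalues
    # of rho. A sketch of the measure named in the post, not the study's code.
    import numpy as np

    def von_neumann_graph_entropy(adjacency: np.ndarray) -> float:
        degree = np.diag(adjacency.sum(axis=1))
        laplacian = degree - adjacency           # combinatorial graph Laplacian
        rho = laplacian / np.trace(laplacian)    # unit-trace, positive semidefinite
        eigvals = np.linalg.eigvalsh(rho)
        eigvals = eigvals[eigvals > 1e-12]       # drop numerical zeros (0 log 0 := 0)
        return float(-np.sum(eigvals * np.log(eigvals)))

    # Toy 4-node concept graph: a triangle plus one pendant node.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(f"structural entropy: {von_neumann_graph_entropy(A):.3f}")
    ```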

  • View profile for Allie K. Miller

    #1 Most Followed Voice in AI Business (2M) | Former Amazon, IBM | Fortune 500 AI and Startup Advisor, Public Speaker | @alliekmiller on Instagram, X, TikTok | AI-First Course with 300K+ students - Link in Bio

    1,626,378 followers

    Using AI for quick answers is a solid start, but there’s one thing everyone is missing. And that’s the next step.

    The next step to unlock value with AI is connecting it to your actual work tools: Google Drive, Gmail, Calendar, and more. Granting AI access to your actual data means it can take direct action with less friction, so you’ll be able to request summaries of client conversations, search your communication history, or pull key insights within seconds, all inside ChatGPT, Claude, and similar tools. And if you add automation platforms like Zapier or Make, you can trigger actions and sync hundreds of tasks across your entire workflow automatically.

    This is how you get real value from AI.

  • View profile for Anima Anandkumar
    226,022 followers

    I recently spoke to Gartner about what is next in #AI. Here are my thoughts:

    We have seen impressive progress in #llm capabilities by scaling data and compute. Will this continue to hold? Yes, I believe so, but most of those gains will be in reasoning tasks, where we have precise metrics to measure uplift, the ability to generate synthetic data for further training, and the freedom to trade off computation for accuracy at test time. This is seen in the recent o1 model. For reasoning tasks, we will also be able to remove hallucination when we can construct accurate verifiers that can certify every statement the #llm makes. We have been doing this in our LeanDojo project for mathematical theorem proving.

    However, there is one area of reasoning where #llm models will never be good enough: understanding the physical world. This is because language is only high-level knowledge and cannot simulate the complex physical phenomena needed in many applications. For instance, LLMs can talk about playing tennis or look up a weather app, but they cannot internally simulate any of these processes. While images and videos can help improve their knowledge of the physical world, models like Sora learn physics by accident and hence still produce physically wrong outputs.

    How can we overcome this? By teaching AI physics from the ground up. We are building AI models that are trained in a physics-informed manner at multiple scales. They are several orders of magnitude faster than traditional simulations, and can also generate novel designs that are physically valid. You can watch some of those examples in my recent TED talk.
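    As a toy illustration of physics-informed training in the generic sense: the sketch below fits a small network to an ODE by penalizing the equation residual itself rather than fitting data. It shows the general technique only; it is not the multi-scale models described in the post.

    ```python
    # Toy physics-informed training: fit u(t) so that du/dt = -u with u(0) = 1,
    # using the ODE residual itself as the loss. Illustrates the generic idea
    # only; not the multi-scale models described in the post.
    import math
    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(3000):
        t = (torch.rand(128, 1) * 3.0).requires_grad_(True)  # collocation points in [0, 3]
        u = net(t)
        du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        loss = (du_dt + u).pow(2).mean()                     # physics residual: du/dt + u = 0
        loss = loss + (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # initial condition u(0) = 1
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(net(torch.tensor([[1.0]])).item(), "vs exact", math.exp(-1.0))
    ```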

  • View profile for Saanya Ojha

    Partner at Bain Capital Ventures

    77,804 followers

    When Nvidia reports earnings, it’s not just one company on trial - it’s a pulse check for the entire AI economy. With hype fatigue setting in and budgets under scrutiny, this quarter felt like a referendum: Is AI demand peaking, pausing… or powering ahead?

    Nvidia’s answer: still accelerating. Beat on revenue. Beat on earnings. Guided ahead. And the stock dropped. Why? Not because the results were weak, but because they weren’t superlative. Data center revenue was a hair shy: $41.1B vs. ~$41.3B. Market consensus for next quarter was $53B, but the whisper number was $55B. Nvidia guided to $54B. If you're priced for perfection but merely crush expectations - you get a “beat and meh.”

    But the real story isn’t in the share price. It’s in what Nvidia actually said. Let’s lift the hood.

    1. Rack = Computer. Jensen Huang put it plainly: each rack is a computer. With NVLink 72, Nvidia made the rack the new atomic unit of compute, turning GPUs into one coordinated machine. This pushes the moat from chips into system architecture. Once a customer builds around Nvidia’s rack-scale blueprint, switching vendors means rebuilding the factory. Good luck pitching a “drop-in alternative.”

    2. Networking = Strategic Leverage. Networking was the boring part of AI infra - wires, switches, etc. Nvidia turned it into a $7.3B business this quarter (+98% YoY). Spectrum-X is running at a $10B+ run rate; InfiniBand nearly doubled. Huang argued that improved networking boosts utilization enough to “make networking effectively free.” It’s not a bundle item anymore; it’s a margin driver.

    3. Performance-per-Watt = New ARPU. AI isn't compute-constrained; it's power-constrained. Grid limits are real. As they put it: “Performance-per-watt drives the revenues of our customers.” The new KPI chain: perf-per-watt → tokens-per-megawatt → revenue (see the toy calculation below this post). Blackwell (now) and Rubin (next) aren’t cosmetic - they’re built to compound output inside fixed power budgets. Chips now sell on ROI per watt rather than raw speed. Better chips alone won't cut it; competitors need a better energy-to-token pipeline. That’s a full-stack, full-system problem.

    4. Software + Numerics > Silicon. The leap didn’t come from the chip; it came from how it’s used. NVFP4 (4-bit math) enables up to 7x faster training without loss of accuracy. Combined with NVLink and Nvidia’s stack (CUDA, TensorRT-LLM, Dynamo), the result: 10-50x energy efficiency per token. An operating-system-level play.

    5. Optionality > Guidance. While the Street focused on the $54B guide, Nvidia left upside on the table: H20 China sales ($2-$5B not in the guide), sovereign AI (now a $20B+ business, 2x YoY), and RTX Pro (seeding enterprise AI in 90+ logos across pharma, auto, and more). Nvidia’s revenue streams are growing wider, not just taller.

    Labeling Nvidia a chip company understates what they’ve become. They’re not competing on silicon but on system-level economics: who can turn power into tokens most efficiently. This is an AI systems company.
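    To make the point-3 KPI chain concrete, here is a toy calculation; every number in it is invented for illustration and none comes from Nvidia.

    ```python
    # Toy version of the KPI chain in point 3: perf-per-watt -> tokens-per-
    # megawatt -> revenue, inside a fixed power budget. All numbers invented.
    SECONDS_PER_YEAR = 365 * 24 * 3600

    def annual_token_revenue(tokens_per_joule: float, power_mw: float,
                             usd_per_million_tokens: float) -> float:
        joules_per_year = power_mw * 1e6 * SECONDS_PER_YEAR
        tokens_per_year = tokens_per_joule * joules_per_year
        return tokens_per_year / 1e6 * usd_per_million_tokens

    # Same 100 MW site: tripling efficiency (assumed) triples revenue directly,
    # which is why perf-per-watt, not raw speed, is the lever that matters.
    print(annual_token_revenue(tokens_per_joule=1.0, power_mw=100, usd_per_million_tokens=1.0))
    print(annual_token_revenue(tokens_per_joule=3.0, power_mw=100, usd_per_million_tokens=1.0))
    ```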

  • View profile for Andrew Anagnost

    President and Chief Executive Officer at Autodesk

    30,374 followers

    From CES this week, one thing is clear: we are moving into the era of physical AI — intelligence that operates in the real world.

    Robotics, including humanoid and non-humanoid systems, is getting a lot of attention right now. This is familiar territory for Autodesk. We have decades of experience working with manufacturing, AI, and industrial design leaders who build in the physical world.

    MarketWatch recently explored this momentum and included some of my perspective: https://lnkd.in/e_DN9HwC

    Progress will not come from machines that just look like us, nor from language alone. It will come from AI that understands physics, objects, and three-dimensional space. That’s why work on world models, like what Fei-Fei Li and others are doing, matters. These systems learn from sensory data to build a usable understanding of their environment.

    Physical AI will change how every industry that makes things designs, simulates, and executes. That is core to Autodesk’s mission, and I am optimistic about what is ahead.

    Who is ready to put physical AI to work across everything we design and build?

  • View profile for Rich Miller

    Authority on Data Centers, AI and Cloud

    46,952 followers

    AWS Builds Custom Liquid Cooling System for Data Centers

    Amazon Web Services (AWS) is sharing details of a new liquid cooling system to support high-density AI infrastructure in its data centers, including custom designs for a coolant distribution unit and an engineered fluid. “We've crossed a threshold where it becomes more economical to use liquid cooling to extract the heat,” said Dave Klusas, AWS’s senior manager of data center cooling systems, in a blog post.

    The AWS team considered multiple vendor liquid cooling solutions, but found none met its needs and began designing a completely custom system, which was delivered in 11 months, the company said. The direct-to-chip solution uses a cold plate placed directly on top of the chip. The coolant, a fluid specifically engineered by AWS, runs in tubes through the sealed cold plate, absorbing the heat and carrying it out of the server rack to a heat rejection system, and then back to the cold plates. It’s a closed-loop system, meaning the liquid continuously recirculates without increasing the data center’s water consumption.

    AWS also developed a custom coolant distribution unit, which it said is more powerful and more efficient than its off-the-shelf competitors. “We invented that specifically for our needs,” Klusas said. “By focusing specifically on our problem, we were able to optimize for lower cost, greater efficiency, and higher capacity.” Klusas said the liquid is typically at “hot tub” temperatures for improved efficiency.

    AWS has shared details of its process, including photos: https://lnkd.in/e-D4HvcK
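    The sizing logic behind a direct-to-chip loop like this can be sketched with the basic heat-transport relation Q = m_dot * c_p * delta_T. The numbers below are illustrative assumptions of mine, not AWS figures.

    ```python
    # Rough sizing of a direct-to-chip loop via Q = m_dot * c_p * delta_T.
    # Every value here is an illustrative assumption, not an AWS specification.
    rack_heat_w = 120_000.0   # assumed heat load of one AI rack, in watts
    cp_coolant = 4186.0       # J/(kg*K), assuming a water-like engineered fluid
    delta_t = 10.0            # assumed temperature rise across the cold plates, K

    mass_flow = rack_heat_w / (cp_coolant * delta_t)  # kg/s needed to carry the heat
    liters_per_min = mass_flow * 60                   # ~1 L per kg for water-like fluid
    print(f"{mass_flow:.1f} kg/s ≈ {liters_per_min:.0f} L/min per rack")
    ```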

  • View profile for Matt Wood

    CTIO, PwC

    77,337 followers

    AI field note: Reducing the 'mean time to ah-ha' (MTtAh) is critical for driving AI adoption—and unlocking the value.

    When it comes to AI adoption, there's a crucial milestone: the "ah-ha moment." It's that instant of realization when someone stops seeing AI as just a smarter search tool and starts recognizing it as a reasoning and integration engine—a fundamentally new way of solving problems, driving innovation, and collaborating with technology. For me, that moment came when I saw an AI system not just write code but also deploy it, identify errors, and fix them automatically. In that instant, I realized AI wasn’t just about automation or insights—it was about partnership. A dynamic, reasoning collaborator capable of understanding, iterating, and executing alongside us.

    But these "ah-ha moments" don’t happen by accident. Systems like ChatGPT or Claude excel at enabling breakthroughs, but it really requires us to ask the right questions. That creates a chicken-and-egg problem: until users see what’s possible, they struggle to imagine what else is possible. So how do we help people get hands-on with AI, especially in enterprise organizations, without relying on traditional training? Here are some approaches we have tried at PwC:

    🤖 AI "Hackathons" or Challenges: Host short, low-stakes events where employees can experiment with AI on real problems. For example, marketing teams could test AI for campaign ideas, while operations teams explore process automation.

    ⚙️ Sandbox Environments: Provide low-friction, risk-aware access to AI tools within a dedicated environment. Let users explore capabilities like text generation, workflow automation, or analytics without worrying about “messing something up.”

    🚀 Pre-built Use Cases: Offer ready-to-use templates for specific challenges, such as drafting a client email, summarizing documents, or automating routine reports. Seeing results in action builds confidence and sparks creativity. At PwC we have a community prompt library available to everyone, making it easier to get started.

    🧩 Embedded AI Mentors: Assign "AI champions" who can guide teams on applying AI in their work. This informal mentorship encourages experimentation without formal, structured training. We do this at PwC and it's been huge.

    ⚡️ Integrate AI into Existing Tools: Embed AI into everyday platforms (like email, collaboration tools, or CRM systems) so users can naturally interact with it during routine workflows. Familiarity leads to discovery.

    Reducing the mean time to ah-ha—the time it takes someone to have that transformative realization—is critical. While starting with familiar use cases lowers the barrier to entry, the real shift happens when users experience AI’s deeper capabilities firsthand.

  • View profile for Jigar Shah

    Host of the Energy Empire video podcast

    750,535 followers

    "AI data centers represent the most significant opportunity for grid economics in a generation. Today’s electric grid operates at less than 40% utilization for much of the year. When AI data centers are interconnected strategically to leverage existing capacity, they don’t strain the system— they optimize it. By spreading fixed grid costs across substantially more kilowatt-hours, these AI facilities become catalysts for lower rates and accelerated infrastructure investment." "Our analysis of a 1 GW of data center deployment in a representative mid-sized electric utility with one million customers shows: - Customer rates can decrease by nearly 5%—providing tangible relief to millions of Americans. - Over $1.35 billion in new capital investment becomes justifiable— without any rate increases. - Critical grid modernization accelerates—funded by new revenue streams rather than ratepayer burden." - GridCARE
