We trained a humanoid with 22-DoF dexterous hands to assemble model cars, operate syringes, sort poker cards, and fold/roll shirts, all learned primarily from 20,000+ hours of egocentric human video with no robot in the loop. Humans are the most scalable embodiment on the planet.

We discovered a near-perfect log-linear scaling law (R² = 0.998) between human video volume and action prediction loss, and this loss directly predicts real-robot success rate (a toy version of the fit is sketched below).

Humanoid robots will be the end game, because they are the practical form factor with the smallest embodiment gap from humans. Call it the Bitter Lesson of robot hardware: the kinematic similarity lets us simply retarget human finger motion onto dexterous robot hand joints. No learned embeddings, no fancy transfer algorithms needed. Relative wrist motion + retargeted 22-DoF finger actions serve as a unified action space that carries through from pre-training to robot execution.

Our recipe is called "EgoScale":
- Pre-train GR00T N1.5 on 20K hours of human video, then mid-train with only 4 hours (!) of robot play data with Sharpa hands: 54% gains over training from scratch across 5 highly dexterous tasks.
- Most surprising result: a *single* teleop demo is sufficient to learn a never-before-seen task. Our recipe enables extreme data efficiency.
- Although we pre-train in 22-DoF hand joint space, the policy transfers to a Unitree G1 with 7-DoF tri-finger hands: 30%+ gains over training on G1 data alone.

The scalable path to robot dexterity was never more robots. It was always us.

- Website: https://lnkd.in/gxzgeP-2
- Paper: https://lnkd.in/g7PJdz_8
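For readers who want to see what checking such a log-linear scaling law looks like in practice, here is a minimal Python sketch: regress action-prediction loss on log(video hours) and compute R². The (hours, loss) pairs below are invented placeholders for illustration only, not the paper's measurements; only the log-linear functional form and the R² computation carry over.

```python
import numpy as np

# Placeholder (hours, loss) pairs for illustration — NOT the paper's data.
hours = np.array([100, 500, 2000, 8000, 20000], dtype=float)
loss = np.array([0.92, 0.78, 0.66, 0.54, 0.46])

# Log-linear law: loss ≈ a + b * ln(hours). Fit by least squares.
b, a = np.polyfit(np.log(hours), loss, 1)  # returns [slope, intercept]
pred = a + b * np.log(hours)

# R²: fraction of variance in the loss explained by the fit.
ss_res = np.sum((loss - pred) ** 2)
ss_tot = np.sum((loss - loss.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"loss ≈ {a:.3f} {b:+.3f}·ln(hours),  R² = {r2:.3f}")
```

With a fit this tight, each new order of magnitude of human video buys a roughly constant drop in action-prediction loss, which is what makes the "scale human video, not robot fleets" argument quantitative.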
-
This is the most underrated way to use Claude (and it has nothing to do with writing or coding): competitive intelligence, using data that's free, public, and updated every single week.

Here's my exact step-by-step guide:

Step 1. Go to claude.ai.
Step 2. Select the new Claude "Opus 4.6."
Step 3. Turn on "Extended Thinking."
Step 4. Pick a competitor. Go to their careers page.
Step 5. Copy every open job listing into one doc. (Title. Team name. Location. Full description.)
Step 6. Save it as one .txt or .docx file.
Step 7. Search for the company at EDGAR (sec.gov).
Step 8. Download its most recent 10-K or 10-Q filing. (Official strategy, risks, and financials, all public. Steps 7-8 are even scriptable; see the sketch at the end of this post.)
Step 9. Upload both files to Claude Opus 4.6.
Step 10. Paste this exact prompt:

"You are a competitive intelligence analyst at a rival company. I've uploaded [Company]'s complete current job listings and their most recent SEC filing. Perform a strategic intelligence analysis:
→ Cluster these roles by what they suggest is being built. Don't use the team names they've listed. Infer the actual product initiatives from the skills, tools, and responsibilities described.
→ Identify capabilities or teams that appear entirely new — not mentioned anywhere in the SEC filing. These are unreleased bets.
→ Find roles where seniority is disproportionately high for a new team. This signals executive-level priority.
→ Cross-reference the SEC filing's Risk Factors and Strategy sections with hiring patterns. Where are they investing against a stated risk? Where did they flag a risk but have zero hiring to address it?
→ Predict 3 product launches or strategic moves this company will make in the next 6-12 months. State your confidence level and cite specific job titles and filing sections as evidence.
Format this as a 1-page competitive intelligence briefing for a CMO."

What you'll find:
→ Products that don't exist yet but will in 6 months.
→ Priorities that contradict what the CEO said.
→ Risks they told the SEC about but aren't addressing.

This is what consulting firms charge $200K for. It took me 10 minutes.

I used the new Claude 'Opus 4.6' for a reason:
✦ It read 60 job listings & a 200-page filing together.
✦ It connected the dots across both.
✦ It is superior in thinking and context retrieval.
That's why I didn't use ChatGPT for this.
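If you want to automate Steps 7-8, here is a minimal sketch against SEC EDGAR's public submissions API. The CIK shown (Apple's) is just an arbitrary example; substitute your target company's CIK, which you can find via the company search at sec.gov, and put your own contact details in the User-Agent header, which the SEC requires.

```python
import requests

CIK = "0000320193"  # example CIK (Apple), zero-padded to 10 digits
HEADERS = {"User-Agent": "your-name your-email@example.com"}  # required by SEC

# Pull the company's recent filings index from EDGAR's JSON API.
resp = requests.get(f"https://data.sec.gov/submissions/CIK{CIK}.json",
                    headers=HEADERS, timeout=30)
resp.raise_for_status()
recent = resp.json()["filings"]["recent"]

# Walk the parallel arrays to find the most recent 10-K and build its URL.
for form, accession, doc in zip(recent["form"],
                                recent["accessionNumber"],
                                recent["primaryDocument"]):
    if form == "10-K":
        acc = accession.replace("-", "")
        print("Latest 10-K:",
              f"https://www.sec.gov/Archives/edgar/data/{int(CIK)}/{acc}/{doc}")
        break
```

Download that document, save it alongside your job-listings file, and continue from Step 9.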
-
𝗧𝗵𝗲 𝗽𝗮𝗿𝗮𝗱𝗼𝘅 𝗼𝗳 𝗺𝗼𝗱𝗲𝗿𝗻 𝗵𝗲𝗮𝗹𝘁𝗵 𝘁𝗲𝗰𝗵: 𝗧𝗵𝗲 𝗺𝗼𝗿𝗲 𝘄𝗲 𝗺𝗼𝗻𝗶𝘁𝗼𝗿, 𝘁𝗵𝗲 𝗺𝗼𝗿𝗲 𝗮𝗻𝘅𝗶𝗼𝘂𝘀 𝘄𝗲 𝗯𝗲𝗰𝗼𝗺𝗲.

We track our bodies 24/7. Count every calorie. Measure sleep, HRV, glucose, stress. From Apple Watch. To Oura Ring. To the latest “temple” device. Somewhere along the way, awareness turned into obsession.

Here’s the paradox no one talks about: We have the best health-tracking tools in history, and some of the worst health outcomes. Something doesn’t add up.

𝗪𝗵𝗮𝘁 𝘁𝗵𝗲 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝘀𝗵𝗼𝘄𝘀

𝗦𝗹𝗲𝗲𝗽 𝘁𝗿𝗮𝗰𝗸𝗶𝗻𝗴 𝗰𝗮𝗻 𝘄𝗼𝗿𝘀𝗲𝗻 𝘀𝗹𝗲𝗲𝗽
Studies on orthosomnia (an obsession with “perfect” sleep metrics) show that people who fixate on sleep scores experience more sleep anxiety, lighter sleep, and poorer recovery—even when objective sleep doesn’t improve. Trying to optimize sleep can literally break it.

𝗛𝗥𝗩 𝗺𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴 𝗶𝗻𝗰𝗿𝗲𝗮𝘀𝗲𝘀 𝘀𝘁𝗿𝗲𝘀𝘀 𝗳𝗼𝗿 𝗺𝗮𝗻𝘆 𝘂𝘀𝗲𝗿𝘀
HRV is a useful trend marker—but daily fluctuations are normal. Research shows that constant HRV checking can heighten health anxiety and perceived stress, especially when users don’t understand variability or context. Ironically, stressing about HRV often lowers HRV.

𝗠𝗼𝗿𝗲 𝗱𝗮𝘁𝗮 ≠ 𝗯𝗲𝘁𝘁𝗲𝗿 𝗵𝗲𝗮𝗹𝘁𝗵 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻𝘀
Behavioral science research consistently finds that excessive self-monitoring leads to hypervigilance, loss of bodily trust, and decision fatigue. When every sensation becomes a data point, people stop listening to internal cues and start deferring to dashboards.

In short: 𝗢𝘃𝗲𝗿-𝗺𝗲𝗮𝘀𝘂𝗿𝗲𝗺𝗲𝗻𝘁 𝗿𝗲𝗽𝗹𝗮𝗰𝗲𝘀 𝗮𝘄𝗮𝗿𝗲𝗻𝗲𝘀𝘀 𝘄𝗶𝘁𝗵 𝗮𝗻𝘅𝗶𝗲𝘁𝘆.

So what actually creates health? The same fundamentals that worked 5,000 years ago:
• Deep, peaceful sleep
• Regular sunlight
• Real, nourishing food
• Daily movement
• Time with people you love

These don’t need algorithms. They need presence.

Use wearables if they serve you—I do, occasionally. But don’t let them become your master. Your life isn’t an algorithm waiting to be optimized. It’s a system meant to be felt, explored, and course-corrected.

The best health coach you’ll ever have is already inside you. Trust it.
-
AI security (securing the use of AI) is going to kill me. I use Claude Code almost daily. It's a problem... Here's what I have to change AGAIN this week.

Security researcher Ari Marzuk disclosed 30+ vulnerabilities across AI coding tools. Cursor. GitHub Copilot. Windsurf. Claude Code. All of them. He called it IDEsaster.

The attack chain includes prompt injection, hijacking LLM context, and auto-approved tool calls executing without permission. Then legitimate IDE features are weaponized for data exfiltration and RCE. Your .env files. Your API keys. Your source code. All accessible through features you thought were safe.

Most studies I read claim that around 85% of developers now use AI coding tools daily. Most have no idea their IDE treats its own features as inherently trusted.

𝗦𝗼... 𝗮𝗳𝘁𝗲𝗿 𝗿𝗲𝘃𝗶𝗲𝘄𝗶𝗻𝗴 𝗔𝗿𝗶'𝘀 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵, 𝗵𝗲𝗿𝗲'𝘀 𝘄𝗵𝗮𝘁 𝗜 𝘄𝗶𝗹𝗹 𝗯𝗲 𝗱𝗼𝗶𝗻𝗴... Be warned: all of this is SO much easier said than done!

Audit every MCP server connection. Check for tool poisoning vectors where legitimate tools might parse attacker-controlled input from GitHub PRs or web content. Remove servers you can't verify.

Disable auto-approve for file writes. The attack chains weaponize configuration files and project instructions like .claude/settings.json and CLAUDE.md. One malicious write to these files can alter agent behavior or achieve code execution without additional user interaction.

Move all credentials to a secrets manager. No .gitignored .env files in agent-accessible directories. API keys live in the 1Password CLI. Environment variables inject at runtime through a wrapper script the LLM never sees.

Run Claude Code in isolated containers. Mounted volumes limited to specific project directories. No access to ~/.ssh, ~/.aws, or ~/.config. If the agent gets compromised, the blast radius stays contained.

Enable all security warnings. Claude Code added explicit warnings for JSON schema exfiltration and settings file modifications. These exist because Anthropic knows the attack surface.

Add pre-commit hooks for hidden characters. Prompt injections hide in pasted URLs, READMEs, and file names using invisible Unicode. Flag non-ASCII characters in any file the agent might ingest (see the sketch below).

The fix isn't to stop using AI coding tools. The fix is to stop trusting them implicitly.

What controls do you have for AI tools with write access to your codebase?

👉 Follow for more AI and cybersecurity insights with the occasional rant
#AISecurity #DevSecOps
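Here is a minimal sketch of that last hook: a Python pre-commit script that scans staged files for invisible Unicode "format" characters (zero-width spaces, bidi overrides, and the like). The character policy and file filters are assumptions to tune for your repo; per the post, a fuller version would also check file names, and you could tighten the check to flag all non-ASCII.

```python
#!/usr/bin/env python3
# Hypothetical pre-commit hook: block commits containing invisible Unicode
# format characters that prompt injections commonly hide in.
import subprocess
import sys
import unicodedata

def staged_files():
    # List staged files, excluding deletions.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=d"],
        capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

findings = []
for path in staged_files():
    try:
        with open(path, encoding="utf-8") as f:
            text = f.read()
    except (UnicodeDecodeError, OSError):
        continue  # skip binaries and unreadable files
    for i, ch in enumerate(text):
        # Category "Cf" covers zero-width joiners, bidi controls, BOMs, etc.
        if unicodedata.category(ch) == "Cf":
            findings.append(f"{path}: U+{ord(ch):04X} at offset {i}")

if findings:
    print("Hidden Unicode characters detected:")
    print("\n".join(findings))
    sys.exit(1)  # non-zero exit aborts the commit
```

Drop it into .git/hooks/pre-commit (or wire it up via a hook manager) and any staged file carrying invisible characters blocks the commit until a human looks at it.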
-
Today in Cell, we published new research showing how AI can help accelerate cancer discovery. With GigaTIME, we can now simulate spatial proteomics from routine pathology slides, enabling population-scale analysis of tumor microenvironments across dozens of cancer types and hundreds of subtypes. Developed in partnership with Providence and the University of Washington, our hope is that this work helps scientists move faster from data to insight, revealing new links between genetic mutations, immune activity, and clinical outcomes, and ultimately improving health for people everywhere. https://lnkd.in/dSpPdtzz
-
👁 Imagine losing your sight for 10 years… and then, the very first thing you do is recognize the faces of your loved ones again.

That’s what happened to Jamal Furani, 78, thanks to a breakthrough in medical innovation: a fully synthetic cornea implant. No donor tissue. No immune rejection. A device that integrates directly with the eye’s own tissue.

💡 The deeper insight: The true revolution here isn’t only technological. It’s structural. Today, corneal blindness affects millions worldwide, but most can’t be treated because there simply aren’t enough donor corneas. A synthetic cornea changes the equation. It turns a scarce resource (donations) into a potentially unlimited one (innovation).

And here’s what few realize: this implant doesn’t just restore vision. It restores autonomy, dignity, and human connection. Those are the “side effects” that make technology truly transformative.

👉 My take: The future of medicine won’t just be about “healing.” It will be about reinventing our organs — sometimes with solutions even better than the originals.

If you could enhance or replace one organ with technology, which would you choose first?

#Healthcare #Innovation #Biotech #FutureOfMedicine
-
Two strikingly similar headlines surfaced this past week that should make every leader pause:

• “Companies Are Pouring Billions Into A.I. It Has Yet to Pay Off.” — New York Times
• “Companies Are Pouring Billions Into AI. Here’s Why They’re Not Seeing Returns” — Forbes

The NYT points to the human side: employees resist tools they don’t trust. Forbes focuses on the technical side: most AI still can’t understand the context of work. Both are true, and they’re related.

When AI lacks context, employees lose trust. It can’t tell the latest doc from last year’s draft. It summarizes a customer conversation but drops the follow-ups buried in the thread. It pulls a response from Slack while ignoring the context in Google Drive. Employees realize it creates more work than it saves, and stop using it. Pilots stall, deployments fade, and projects slide into the “trough of disillusionment” as the NYT describes. Unfortunately, that's the reality for many organizations.

At Glean, we work hard to make sure AI understands the enterprise context the way a human does. If a subject matter expert says something, I trust it more. If something’s old, I double-check it. That’s how people think, and it’s how AI should work too (a toy version of this weighting is sketched after this post). Yet every enterprise has its own documentation culture and quirks, so sometimes we struggle at first. But we persist and co-develop with customers until the system reaches the quality they need. Then we take those learnings to make it work automatically for the next customer.

We’ve seen this approach deliver measurable impact for customers:

• Booking.com: Glean Agents give teams faster access to customer insights, cutting video production time by 75% and doubling monthly output.
• Confluent: Glean’s AI-powered search saves 15,000+ hours/month, boosts support satisfaction by 13%, and cuts ticket investigation time by 10 minutes.
• Fortune 100 telecom company: Glean surfaces instant knowledge during support calls, reducing call resolution time by 17 seconds across 800+ agents.
• Leading global consultancy: Glean Agents automate RFP workflows, cutting consulting project proposals from 4 weeks to a few hours (97% faster).
• Wealthsimple: Glean gives employees instant access to policies and knowledge, driving $1M+ in annual productivity gains.

When AI understands the real context of work—across people, tools, and workflows—employees trust it and use it. Instead of falling into the trough of disillusionment, companies climb a slope toward productivity gains and real ROI.
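To make the "trust experts more, discount stale docs" idea concrete, here is a toy Python scorer. It is purely illustrative (not Glean's actual ranking); the weights, the 180-day decay constant, and the doc fields are all invented for the sketch.

```python
import math
import time

def score(doc, semantic_sim, now=None):
    """Toy retrieval score: semantic match, boosted for SME authors,
    discounted exponentially with document age."""
    now = now or time.time()
    age_days = (now - doc["modified_ts"]) / 86400
    freshness = math.exp(-age_days / 180)          # ~6-month decay (assumed)
    authority = 1.5 if doc["author_is_sme"] else 1.0  # expert boost (assumed)
    return semantic_sim * authority * (0.5 + 0.5 * freshness)

# A year-old doc by a subject matter expert: trusted, but discounted.
doc = {"modified_ts": time.time() - 365 * 86400, "author_is_sme": True}
print(round(score(doc, semantic_sim=0.82), 3))
```

The exact functional form matters less than the principle: context-aware AI should rank sources the way a careful colleague would, not by text match alone.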
-
We’ve all heard about AI’s potential to boost productivity. But what truly matters to me is whether it’s making work better for the people who show up every day.

At Cisco, our People Intelligence team, in collaboration with IT, has been exploring this very topic, and the findings are fascinating. Here are five key insights from our research that leaders should take seriously:

1. Leaders are key to adoption. At Cisco, employees are 2x more likely to use AI if their direct leader uses it.
2. Generic AI training doesn’t work. Role-specific, practical training accelerates AI use.
3. Confidence gaps exist among senior leaders. Directors at Cisco often feel less confident with AI than mid-level employees, underscoring the need for tailored support at all levels.
4. Employee autonomy fuels adoption. Hybrid work environments are powerful accelerators for AI adoption, while mandates can hinder it. Employees who voluntarily go to the office are more likely to use AI, while those who are required to work on-site have lower adoption.
5. AI use is linked to employee well-being, but the relationship is complex, with both benefits and trade-offs that require thoughtful navigation.

This is just the beginning. Next, we’re looking at how AI is transforming the way teams operate. For now, one thing is clear: employees who use AI aren’t just more productive. They’re also more engaged, better aligned with company strategy, and empowered to focus on meaningful work.

#AIAdoption #EmployeeExperience #FutureOfWork
-
What if the real disruption in manufacturing isn’t coming from AI, cloud, or automation... but from the uncomfortable realization that we’ve been investing in all the wrong things?

According to Deloitte’s 2025 Smart Manufacturing Survey, manufacturers are pouring billions into tech. 𝟕𝟖% are allocating over 𝟐𝟎% of their improvement budgets to smart manufacturing. 𝟒𝟔% are prioritizing process automation. The intent is clear. The excitement is real.

But… I would argue 𝐰𝐞’𝐫𝐞 𝐬𝐭𝐢𝐥𝐥 𝐧𝐨𝐭 𝐫𝐞𝐚𝐝𝐲. Not in our culture. Not in our org structures. Not in how we prepare our people.

The data exposes the gap. Human capital is the least mature capability in the smart manufacturing stack. Only 𝟒𝟖% of companies have a training and adoption standard. Yet it’s the number one area they say they want to improve. And while 𝟖𝟓% believe smart manufacturing will attract new talent, more than a third say their biggest human capital concern is simply adapting workers to the factory of the future.

We like the sound of digital transformation as long as it doesn't slow us down. We like the optics of AI as long as we don't have to redesign how we work. We like talking about the workforce of the future as long as we don’t have to train the one we already have.

So yes, investment is rising. But if we don’t confront the outdated systems and assumptions holding us back, all we’re doing is layering expensive tech on fragile foundations. The biggest barrier to smart manufacturing isn’t budget, technology, or even talent. It’s us.

𝐂𝐡𝐞𝐜𝐤 𝐨𝐮𝐭 𝐭𝐡𝐞 𝐟𝐮𝐥𝐥 𝐫𝐞𝐩𝐨𝐫𝐭: https://lnkd.in/e6_QsJcw

*******************************************
• Visit www.jeffwinterinsights.com for access to all my content and to stay current on Industry 4.0 and other cool tech trends
• Ring the 🔔 for notifications!
-
𝗜𝗳 𝘆𝗼𝘂 𝘄𝗮𝗻𝘁 𝘁𝗼 𝗯𝘂𝗶𝗹𝗱 𝗮𝗻 𝗔𝗜 𝘀𝘁𝗿𝗮𝘁𝗲𝗴𝘆 𝗳𝗼𝗿 𝘆𝗼𝘂𝗿 𝗰𝗼𝗺𝗽𝗮𝗻𝘆, 𝘆𝗼𝘂 𝗳𝗶𝗿𝘀𝘁 𝗻𝗲𝗲𝗱 𝘁𝗼 𝗯𝘂𝗶𝗹𝗱 𝗮 𝘀𝗼𝗹𝗶𝗱 𝗱𝗮𝘁𝗮 𝗶𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗮𝗻𝗱 𝗲𝗻𝗳𝗼𝗿𝗰𝗲 𝘀𝘁𝗿𝗶𝗰𝘁 𝗱𝗮𝘁𝗮 𝗵𝘆𝗴𝗶𝗲𝗻𝗲.

Getting your house in order is the foundation for delivering on any AI ambition. The MIT Technology Review — based on insights from 205 C-level executives and data leaders — lays it out clearly:

𝗠𝗼𝘀𝘁 𝗰𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀 𝗱𝗼 𝗻𝗼𝘁 𝗳𝗮𝗰𝗲 𝗮𝗻 𝗔𝗜 𝗽𝗿𝗼𝗯𝗹𝗲𝗺. 𝗧𝗵𝗲𝘆 𝗳𝗮𝗰𝗲 𝗰𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲𝘀 𝗶𝗻 𝗱𝗮𝘁𝗮 𝗾𝘂𝗮𝗹𝗶𝘁𝘆, 𝗶𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲, 𝗮𝗻𝗱 𝗿𝗶𝘀𝗸 𝗺𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁.

Therefore, many firms are still stuck in pilots, not production. Changing that requires strong data foundations, scalable architectures, trusted partners, and a shift in how companies think about creating real value with AI. Because pilots are easy, BUT scaling AI across the enterprise is hard.

𝗛𝗲𝗿𝗲 𝗮𝗿𝗲 𝘁𝗵𝗲 𝗸𝗲𝘆 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀: ⬇️

1. 95% 𝗼𝗳 𝗰𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀 𝗮𝗿𝗲 𝘂𝘀𝗶𝗻𝗴 𝗔𝗜 — 𝗯𝘂𝘁 76% 𝗮𝗿𝗲 𝘀𝘁𝘂𝗰𝗸 𝗮𝘁 𝗷𝘂𝘀𝘁 1–3 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲𝘀:
➜ The gap between ambition and execution is huge. Scaling AI across the full business will define competitive advantage over the next 24 months.

2. 𝗗𝗮𝘁𝗮 𝗾𝘂𝗮𝗹𝗶𝘁𝘆 𝗮𝗻𝗱 𝗹𝗶𝗾𝘂𝗶𝗱𝗶𝘁𝘆 𝗮𝗿𝗲 𝘁𝗵𝗲 𝗿𝗲𝗮𝗹 𝗯𝗼𝘁𝘁𝗹𝗲𝗻𝗲𝗰𝗸𝘀:
➜ Without curated, accessible, and trusted data, no AI strategy can succeed — no matter how powerful the models are.

3. 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲, 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆, 𝗮𝗻𝗱 𝗽𝗿𝗶𝘃𝗮𝗰𝘆 𝗮𝗿𝗲 𝘀𝗹𝗼𝘄𝗶𝗻𝗴 𝗔𝗜 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 — 𝗮𝗻𝗱 𝘁𝗵𝗮𝘁 𝗶𝘀 𝗮 𝗴𝗼𝗼𝗱 𝘁𝗵𝗶𝗻𝗴:
➜ 98% of executives say they would rather be safe than first. Trust, not speed, will win in the next AI wave.

4. 𝗦𝗽𝗲𝗰𝗶𝗮𝗹𝗶𝘇𝗲𝗱, 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀-𝘀𝗽𝗲𝗰𝗶𝗳𝗶𝗰 𝗔𝗜 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲𝘀 𝘄𝗶𝗹𝗹 𝗱𝗿𝗶𝘃𝗲 𝘁𝗵𝗲 𝗺𝗼𝘀𝘁 𝘃𝗮𝗹𝘂𝗲:
➜ Generic generative AI (chatbots, text generation) is table stakes. True differentiation will come from custom, domain-specific applications.

5. 𝗟𝗲𝗴𝗮𝗰𝘆 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 𝗮𝗿𝗲 𝗮 𝗺𝗮𝗷𝗼𝗿 𝗱𝗿𝗮𝗴 𝗼𝗻 𝗔𝗜 𝗮𝗺𝗯𝗶𝘁𝗶𝗼𝗻𝘀:
➜ Firms sitting on fragmented, outdated infrastructure are finding that retrofitting AI into legacy systems is often more costly than building new foundations.

6. 𝗖𝗼𝘀𝘁 𝗿𝗲𝗮𝗹𝗶𝘁𝗶𝗲𝘀 𝗮𝗿𝗲 𝗵𝗶𝘁𝘁𝗶𝗻𝗴 𝗵𝗮𝗿𝗱:
➜ From GPUs to energy bills, AI is not cheap — and mid-sized companies face the biggest barriers. Smart firms are building realistic ROI models that go beyond hype.

𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗮 𝗳𝘂𝘁𝘂𝗿𝗲-𝗿𝗲𝗮𝗱𝘆 𝗔𝗜 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗶𝘀𝗻’𝘁 𝗮𝗯𝗼𝘂𝘁 𝗰𝗵𝗮𝘀𝗶𝗻𝗴 𝘁𝗵𝗲 𝗻𝗲𝘅𝘁 𝗺𝗼𝗱𝗲𝗹 𝗿𝗲𝗹𝗲𝗮𝘀𝗲. 𝗜𝘁’𝘀 𝗮𝗯𝗼𝘂𝘁 𝘀𝗼𝗹𝘃𝗶𝗻𝗴 𝘁𝗵𝗲 𝗵𝗮𝗿𝗱 𝗽𝗿𝗼𝗯𝗹𝗲𝗺𝘀 — 𝗱𝗮𝘁𝗮, 𝗶𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲, 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲, 𝗮𝗻𝗱 𝗥𝗢𝗜 — 𝘁𝗼𝗱𝗮𝘆.