𝗗𝗮𝘁𝗮 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗶𝘀 𝗼𝗻𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗺𝗼𝘀𝘁 𝗺𝗶𝘀𝘂𝗻𝗱𝗲𝗿𝘀𝘁𝗼𝗼𝗱 𝘁𝗼𝗽𝗶𝗰𝘀 𝗶𝗻 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲.

Because most people explain it from the inside out: policies, councils, standards, stewardship.

But the business does not buy any of that. The business buys outcomes:
→ trustworthy KPIs
→ vendor and partner data you can actually use
→ faster financial close
→ fewer reporting escalations
→ smoother M&A integration
→ AI you can deploy without creating risk debt

Most AI programs fail for boring reasons: nobody owns the data, quality is unknown, access is messy, accountability is missing.

𝗦𝗼 𝗹𝗲𝘁’𝘀 𝘀𝗶𝗺𝗽𝗹𝗶𝗳𝘆 𝗶𝘁. 𝗗𝗮𝘁𝗮 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗶𝘀 𝗳𝗼𝘂𝗿 𝘁𝗵𝗶𝗻𝗴𝘀:
→ ownership
→ quality
→ access
→ accountability

𝗔𝗻𝗱 𝗶𝘁 𝗯𝗲𝗰𝗼𝗺𝗲𝘀 𝘃𝗲𝗿𝘆 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝘄𝗵𝗲𝗻 𝘆𝗼𝘂 𝘁𝗵𝗶𝗻𝗸 𝗶𝗻 𝟰 𝗹𝗮𝘆𝗲𝗿𝘀:

1. Data Products (what the business consumes)
→ a named dataset with an owner and SLA
→ clear definitions + metric logic
→ documented inputs/outputs and intended use
→ discoverable in a catalog
→ versioned so changes don’t break reporting

2. Data Management (how products stay reliable)
→ quality rules + monitoring (freshness, completeness, accuracy)
→ lineage (where it came from, where it’s used)
→ master/reference data alignment
→ metadata management (business + technical)
→ access controls and retention rules

3. Data Governance (who decides, who is accountable)
→ data ownership model (domain owners, stewards)
→ decision rights: who can change KPI definitions, thresholds, and sources
→ issue management: triage, escalation paths, resolution SLAs
→ policy enforcement: what’s mandatory vs optional
→ risk and compliance alignment (auditability, approvals)

4. Data Operating Model (how you scale across the enterprise)
→ domain-based setup (data mesh or not, but clear domains)
→ operating cadence: weekly issue review, monthly KPI governance, quarterly standards
→ stewardship at scale (roles, capacity, incentives)
→ cross-domain decision-making for shared metrics
→ enablement: templates, playbooks, tooling support

If you want to start fast:
Pick the 10 metrics that run the business.
Assign an owner.
Define decision rights + escalation.
Then build the data products around them.

↓
𝗜𝗳 𝘆𝗼𝘂 𝘄𝗮𝗻𝘁 𝘁𝗼 𝘀𝘁𝗮𝘆 𝗮𝗵𝗲𝗮𝗱 𝗮𝘀 𝗔𝗜 𝗿𝗲𝘀𝗵𝗮𝗽𝗲𝘀 𝘄𝗼𝗿𝗸 𝗮𝗻𝗱 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀, 𝘆𝗼𝘂 𝘄𝗶𝗹𝗹 𝗴𝗲𝘁 𝗮 𝗹𝗼𝘁 𝗼𝗳 𝘃𝗮𝗹𝘂𝗲 𝗳𝗿𝗼𝗺 𝗺𝘆 𝗳𝗿𝗲𝗲 𝗻𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿: https://lnkd.in/dbf74Y9E
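To make the layers concrete: a "data product" is just ownership, an SLA, and quality rules written down in one place. Here is a minimal Python sketch of that idea, with illustrative names only (DataProduct, freshness_ok, and the "kpi_net_revenue" example are assumptions, not a real framework):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class DataProduct:
    """Hypothetical contract for one data product: owner, SLA, quality rules."""
    name: str
    owner: str                       # accountable domain owner
    version: str                     # versioned so changes don't break reporting
    freshness_sla_hours: int         # max allowed data age per the SLA
    required_columns: list = field(default_factory=list)

    def freshness_ok(self, last_loaded: datetime, now: datetime) -> bool:
        # Freshness monitoring: data must be newer than the SLA window.
        return now - last_loaded <= timedelta(hours=self.freshness_sla_hours)

    def completeness_ok(self, row: dict) -> bool:
        # Completeness monitoring: every required column present and non-null.
        return all(row.get(col) is not None for col in self.required_columns)

# Usage: one of the "10 metrics that run the business", with a named owner
# and a 24-hour freshness SLA (example values are made up).
revenue = DataProduct(
    name="kpi_net_revenue",
    owner="finance-domain",
    version="1.2.0",
    freshness_sla_hours=24,
    required_columns=["period", "amount", "currency"],
)
```

The point of the sketch is that ownership and quality stop being slideware once they are machine-checkable: a monitor can call these checks on every load and open a governance issue when one fails.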
-
AI security/securing the use of AI is going to kill me. I use Claude Code almost daily. It's a problem... Here's what I have to change AGAIN this week.

Security researcher Ari Marzuk disclosed 30+ vulnerabilities across AI coding tools. Cursor. GitHub Copilot. Windsurf. Claude Code. All of them. He called it IDEsaster.

The attack chain includes prompt injection, hijacking LLM context, and auto-approved tool calls executing without permission. Then, legitimate IDE features are weaponized for data exfiltration and RCE. Your .env files. Your API keys. Your source code. Accessible through features you thought were safe.

Most studies I read claim that around 85% of developers now use AI coding tools daily. Most have no idea their IDE treats its own features as inherently trusted.

𝗦𝗼... 𝗮𝗳𝘁𝗲𝗿 𝗿𝗲𝘃𝗶𝗲𝘄𝗶𝗻𝗴 𝗔𝗿𝗶'𝘀 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵, 𝗵𝗲𝗿𝗲'𝘀 𝘄𝗵𝗮𝘁 𝗜 𝘄𝗶𝗹𝗹 𝗯𝗲 𝗱𝗼𝗶𝗻𝗴... Be warned: all of this is SO much easier said than done!

Audit every MCP server connection. Check for tool-poisoning vectors where legitimate tools might parse attacker-controlled input from GitHub PRs or web content. Remove servers you can't verify.

Disable auto-approve for file writes. The attack chains weaponize configuration files and project instructions like .claude/settings.json and CLAUDE.md. One malicious write to these files can alter agent behavior or achieve code execution without additional user interaction.

Move all credentials to a secrets manager. No .gitignored .env files in agent-accessible directories. API keys live in 1Password CLI. Environment variables inject at runtime through a wrapper script the LLM never sees.

Run Claude Code in isolated containers. Mount volumes limited to specific project directories. No access to ~/.ssh, ~/.aws, or ~/.config. If the agent gets compromised, the blast radius stays contained.

Enable all security warnings. Claude Code added explicit warnings for JSON schema exfiltration and settings file modifications. These exist because Anthropic knows the attack surface.

Add pre-commit hooks for hidden characters. Prompt injections hide in pasted URLs, READMEs, and file names using invisible Unicode. Flag non-ASCII characters in any file the agent might ingest.

The fix isn't to stop using AI coding tools. The fix is to stop trusting them implicitly.

What controls do you have for AI tools with write access to your codebase?

👉 Follow for more AI and cybersecurity insights with the occasional rant

#AISecurity #DevSecOps
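The hidden-character check in the last step is easy to sketch. This is a minimal, standalone version of the idea (the character list and function name are illustrative, not a specific pre-commit plugin), assuming invisible Unicode format characters are what you want to flag:

```python
import unicodedata

# A few characters prompt injections commonly hide behind (illustrative,
# not exhaustive): zero-width and bidi-control codepoints.
SUSPICIOUS = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space / BOM
    "\u202e",  # right-to-left override
}

def find_hidden_chars(text: str) -> list[tuple[int, str]]:
    """Return (index, codepoint name) for each invisible/format character."""
    hits = []
    for i, ch in enumerate(text):
        # Unicode category "Cf" = format characters, which render invisibly.
        if ch in SUSPICIOUS or unicodedata.category(ch) == "Cf":
            hits.append((i, unicodedata.name(ch, f"U+{ord(ch):04X}")))
    return hits

# Usage: run this over every staged file in a pre-commit hook and fail
# the commit on any hit.
clean = "Ordinary README text."
tainted = "Click here\u200b to install"  # hidden zero-width space
```

Wiring it into a real hook means reading each staged file and exiting non-zero when `find_hidden_chars` returns anything; whether you also block all non-ASCII (as the post suggests) depends on how much legitimate non-English text your repo contains.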
-
On behalf of Palo Alto Networks, I testified yesterday before the House Committee on Financial Services regarding the critical intersection of AI and cybersecurity. We are at a pivotal moment where AI is reshaping the financial sector, and my message to Congress focused on the critical reality that as we think about the future of cybersecurity legislation, we have to get security right. To do that, we need to distinguish between two very different challenges:

1️⃣ AI for Cybersecurity: The threat landscape has fundamentally changed. Attacks that used to play out over days are now happening in minutes. Our research at Palo Alto Networks shows that AI-driven tools can compress a ransomware campaign, from the initial breach to stealing data, into roughly 25 minutes. Human teams, no matter how skilled, cannot fight machine speed alone. Defenders are drowning in alerts and data. We have to use AI as a force multiplier to automate our defenses. It is the only way to flip the script and stay ahead of adversaries who are moving faster than ever.

2️⃣ Cybersecurity for AI: As we rely more on these powerful models within our financial institutions, we have to remember that AI systems themselves are targets. Attackers are trying to manipulate the models and poison the data we increasingly trust to make decisions. This is why we need a Secure AI by Design approach. Security cannot be a safety feature we bolt on after the fact. It has to be baked into the DNA of our AI infrastructure from day one, securing the supply chain, the data, and the models themselves from development through runtime.

I’ve spent my career working to convince organizations, public and private, that we cannot have true innovation without security. By partnering with the public sector, together we can build a financial system that is resilient and secure enough to harness the power of AI.

Jeanette Manfra Nicholas Stevens Tal Cohen Joshua Branch Eva Dudzik Mehlert Daniel Kroese Katie (Donnell) Strand
-
The Agentic Enterprise is driving profound change across every industry, but nowhere are the stakes higher than in healthcare. There is an incredible opportunity to elevate the work of healthcare professionals and deliver stronger care for patients around the world.

In an essay for TIME, Murali Doraiswamy, professor of medicine at Duke University, and I discuss how AI is revolutionizing medicine, including:
• Flagging subtle abnormalities in scans and slides that a human eye might miss.
• Speeding up the discovery of drugs and drug targets.
• Providing patients faster and more personalized support, from scheduling to flagging side effects.

But we’ve also seen that over-reliance on AI can lead to “deskilling” — in which medical professionals become less effective. That underscores the importance of approaches that keep humans at the center, such as Intelligent Choice Architecture (ICA), where AI systems don’t make decisions but nudge providers to take a second look at results, weigh alternatives, and stay actively engaged in the process.

The future of work is humans and AI agents working together. If we commit to designing systems that sharpen our abilities, we can combine the promise of AI with the critical thinking, compassion, and real-world judgment that only humans bring.

https://lnkd.in/gqkTUfb6
-
My biggest takeaways from Ethan Smith on how to win at AEO (i.e. get ChatGPT to recommend your product):

1. Being mentioned most often beats ranking first. In Google, the #1 blue link wins. In ChatGPT, the answer summarizes multiple sources, so appearing in five citations beats ranking #1 in one. Ethan’s strategy: get mentioned on Reddit, YouTube, blogs, and affiliates. Volume of mentions matters more than any single placement.

2. LLM traffic converts 6x better than Google search traffic. Webflow saw this dramatic difference because users who come through AI assistants have built up much more intent through conversation and follow-up questions, making them highly qualified leads.

3. Early-stage startups can win at AEO immediately, unlike with SEO. Traditional SEO requires years of domain authority, but a brand-new Y Combinator company mentioned in a Reddit thread today can show up in ChatGPT tomorrow. The playing field is finally level.

4. The long tail of AEO is 4x bigger than SEO. People ask ChatGPT questions with 25 or more words (vs. 6 in Google). Ethan found gold in queries like “Which meeting transcription tool integrates with Looker via Zapier to BigQuery?”, questions that never existed in search but are perfect for AI. Own these micro-niches.

5. Reddit is proving to be the kingmaker for AI visibility. ChatGPT trusts Reddit because the community polices spam better than any algorithm. Ethan’s exact playbook: make one real account, say who you are and where you work, give genuinely helpful answers. Five good comments can transform your visibility. No automation, no fake accounts, just be helpful.

6. YouTube videos for “boring” B2B terms are a gold mine for AEO. Nobody makes videos about “AI-powered payment processing APIs”, which is exactly why you should. While everyone fights over “best CRM software,” the high-value, zero-competition long tail is wide open in video.

7. Your help center is now a growth channel. All those “Does your product do X?” questions flooding ChatGPT can be answered by help-center pages. Move them from a subdomain to a subdirectory, cross-link aggressively, and cover every feature question. Ethan calls this the most underutilized opportunity in AEO.

8. January 2025 was the inflection point in AEO growth. That’s when ChatGPT made answers more clickable (maps, shopping cards, citations) and adoption exploded. Webflow went from near zero to 8% of signups coming from AI. This channel is accelerating faster than anything Ethan has seen in 18 years.

9. The AEO playbook: (1) find questions from competitor paid search data, (2) set up answer tracking, (3) see who’s showing up as citations, (4) create landing pages answering all follow-up questions, (5) get mentioned offsite via Reddit/YouTube/affiliates, (6) run controlled experiments, (7) build a dedicated team. This exact process is driving real results at scale.
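The "answer tracking" step of the playbook reduces to a simple measurement: for each tracked question, count how often each brand appears in stored assistant answers, since mention volume (point 1) is the metric that matters. A toy sketch, with made-up brand list and answers (fetching the answers from an assistant API is assumed to happen elsewhere):

```python
import re
from collections import Counter

def mention_counts(answers: list[str], brands: list[str]) -> Counter:
    """Count case-insensitive whole-word brand mentions across stored answers."""
    counts = Counter({brand: 0 for brand in brands})
    for answer in answers:
        for brand in brands:
            counts[brand] += len(
                re.findall(rf"\b{re.escape(brand)}\b", answer, re.IGNORECASE)
            )
    return counts

# Example: two stored answers to one tracked question (invented text).
answers = [
    "For no-code sites, Webflow and Framer are common picks; Webflow has a larger ecosystem.",
    "Framer is fast for landing pages, while Webflow suits marketing sites.",
]
counts = mention_counts(answers, ["Webflow", "Framer", "Wix"])
```

Run the same questions on a schedule and the trend line per brand becomes your AEO scoreboard; in practice you would also record which URLs the assistant cites, to cover step 3.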
-
Something VERY cool just happened in California and… it could be the future of energy.

On July 29, just as the sun was setting, California’s electric grid was reaching peak demand. However, instead of ramping up fossil fuel resources, the California Independent System Operator (CAISO) and local utilities decided to lean on a network of thousands of home batteries.

More than 100,000 residential battery systems (made up primarily of Sunrun and Tesla customers) delivered about 535 megawatts of power to California’s grid right as demand peaked, visibly reducing net load (as shown in the graphic). Now, this may not seem like a lot, but 535 megawatts is enough to power more than half of the city of San Francisco, and that can make all the difference when a grid is under stress.

This is what’s called a Virtual Power Plant, or VPP. It’s a network of distributed energy resources that grid operators can call on in an emergency to provide greater resilience to our energy systems. Homeowners are compensated for the dispatch, grid operators are given another tool for reliability, and ratepayers are saved from instability. It’s a win-win-win.

Now, this was just a test to prepare for other need-based dispatches during heat waves in August and September. But it’s historic. As homeowners add more solar and storage resources, the impact of these dispatch events will become even more profound and even more necessary.

This was the second time this summer that VPPs have been dispatched in California, and I expect to see even more as this technology improves. Shout out to Sunrun, Tesla, and all the companies that participated. Keep up the great work.
-
This week MIT dropped a stat engineered to go viral: 95% of enterprise GenAI pilots are failing.

Markets, predictably, had a minor existential crisis. Pundits whispered the B-word (“bubble”), traders rotated into defensive stocks, and your colleague forwarded you a link with “is AI overhyped???” in the subject line.

Let’s be clear: the 95% failure rate isn’t a caution against AI. It’s a mirror held up to how deeply ossified enterprises are. Two truths can coexist: (1) the tech is very real; (2) most companies are hilariously bad at deploying it.

If you’re a startup, AI feels like a superpower. No legacy systems. No 17-step approval chains. No legal team asking whether ChatGPT has been “SOC2-audited.” You ship. You iterate. You win.

If you’re an enterprise, your org chart looks like a game of Twister and your workflows were last updated when Friends was still airing. You don’t need a better model - you need a cultural lobotomy.

This isn’t an “AI bubble” popping. It’s the adoption lag every platform shift goes through.
- Cloud in the 2010s: endless proofs of concept before actual transformation.
- Mobile in the 2000s: enterprises thought an iPhone app was strategy. Spoiler: it wasn’t.
- Internet in the 90s: half of Fortune 500 CEOs declared “this is just a fad.” Some of those companies no longer exist.

History rhymes. The lag isn’t a bug; it’s the default setting.

Buried beneath the viral 95% headline are 3 lessons enterprises can actually use:

▪️ Back-office > front-office. The biggest ROI comes from back-office automation - finance ops, procurement, claims processing - yet over half of AI dollars go into sales and marketing. The treasure’s just buried in a different part of the org chart.

▪️ Buy > build. Success rates hit ~67% when companies buy or partner with vendors. DIY attempts succeed a third as often. Unless it’s literally your full-time job to stay current on model architecture, you’ll fall behind. Your engineers don’t need to reinvent an LLM-powered wheel; they need to build where you’re actually differentiated.

▪️ Integration > innovation. Pilots flop not because AI “doesn’t work,” but because enterprises don’t know how to weave it into workflows. The “learning gap” is the real killer. Spend as much energy on change management, process design, and user training as you do on the tool itself. Without redesigning processes, “AI adoption” is just a Peloton bought in January and used as a coat rack by March. You didn’t fail at fitness; you failed at follow-through.

In five years, GenAI will be as invisible - and indispensable - as cloud is today. The difference between the winners and the laggards won’t be access to models, but the courage to rip up processes and rebuild them.

The “95% failure” stat doesn’t mean AI is snake oil. It means enterprises are in Year 1 of a 10-year adoption curve. The market just confused growing pains for terminal illness.
-
LLM for Spatial Understanding – SpatialLM on Hugging Face

The model has been around as a research project, but now it’s ready to use - pretrained, open-source, and much more accessible.

What’s interesting is how it actually works, and what “understanding” means in this context:

𝐒𝐩𝐚𝐭𝐢𝐚𝐥𝐋𝐌 takes visual input (like a simple phone video), reconstructs it into a 3D point cloud, and then passes that through an LLM. But the LLM doesn’t just process words, it uses its semantic knowledge to interpret geometry. So instead of saying “Here’s a flat shape,” it says: “That’s a wall. There’s a door attached to it. That’s a sofa, facing this direction, with these dimensions.”

And it doesn’t stop there. The output is structured, machine-readable data. For example:

bbox = Bbox("sofa", position=(2.9, 1.6, 3.7), size=(1.7, 0.8, 1.8))

This kind of fusion really excites me: 𝐯𝐢𝐬𝐢𝐨𝐧 + 𝐥𝐚𝐧𝐠𝐮𝐚𝐠𝐞 + 𝐬𝐭𝐫𝐮𝐜𝐭𝐮𝐫𝐞. Not just detecting objects, but reasoning about space. Not just pixels, but meaning.

I can see this unlocking smarter AR, robotics, indoor mapping, and more. And it’s open-source, built on LLaMA and Qwen, trained on real-world video. Definitely one to keep an eye on!

📍Btw, we also just open-sourced 𝐆𝐞𝐧𝐀𝐈 𝐀𝐠𝐞𝐧𝐭𝐎𝐒, a lightweight framework we’ve been using internally to run multi-agent systems. If you’re playing around with agent workflows, feel free to check it out:
GitHub: https://bit.ly/4kzE1Mt
And if you’re into open source, a ⭐ would mean a lot!
__________
For more on AI and open source, please check my previous posts. I share my journey here. Join me and let's grow together. Alex Wang
#generativeai #ai #aiagents #llms #opensource
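The value of the structured output above is that downstream code can consume it directly. A hedged sketch of what that consumption might look like (SpatialLM's real schema may differ; the `Bbox` dataclass and `parse_bbox` parser here are illustrative only):

```python
import re
from dataclasses import dataclass

@dataclass
class Bbox:
    """Hypothetical oriented-box record: label plus center and extent."""
    label: str
    position: tuple  # (x, y, z) center, in meters
    size: tuple      # (width, depth, height), in meters

def parse_bbox(line: str) -> Bbox:
    """Parse one output line like: Bbox("sofa", position=(...), size=(...))."""
    label = re.search(r'"(\w+)"', line).group(1)
    # Grab the two numeric tuples; the label's parentheses contain quotes,
    # so this pattern only matches groups of digits, dots, commas, spaces.
    groups = re.findall(r"\(([\d.,\s]+)\)", line)

    def to_tuple(s: str) -> tuple:
        return tuple(float(x) for x in s.split(","))

    return Bbox(label, to_tuple(groups[0]), to_tuple(groups[1]))

# Usage: the example line from the post becomes a typed object an AR or
# robotics pipeline can query ("is there a sofa, and how big is it?").
sofa = parse_bbox('Bbox("sofa", position=(2.9,1.6,3.7), size=(1.7,0.8,1.8))')
```

That round trip, free text in, typed geometry out, is what separates "reasoning about space" from plain object detection.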
-
We’ve all heard about AI’s potential to boost productivity. But what truly matters to me is whether it’s making work better for the people who show up every day.

At Cisco, our People Intelligence team, in collaboration with IT, has been exploring this very topic, and the findings are fascinating. Here are five key insights from our research that leaders should take seriously:

1. Leaders are key to adoption. At Cisco, employees are 2x more likely to use AI if their direct leader uses it.
2. Generic AI training doesn’t work. Role-specific, practical training accelerates AI use.
3. Confidence gaps exist among senior leaders. Directors at Cisco often feel less confident with AI than mid-level employees, underscoring the need for tailored support at all levels.
4. Employee autonomy fuels adoption. Hybrid work environments are powerful accelerators for AI adoption, while mandates can hinder it. Employees who voluntarily go to the office are more likely to use AI, while those who are required to work on-site have lower adoption.
5. AI use is linked to employee well-being, but the relationship is complex, with both benefits and trade-offs that require thoughtful navigation.

This is just the beginning. Next, we’re looking at how AI is transforming the way teams operate. For now, one thing is clear: employees who use AI aren’t just more productive. They’re also more engaged, better aligned with company strategy, and empowered to focus on meaningful work.

#AIAdoption #EmployeeExperience #FutureOfWork
-
Invisible UX is coming 🔥 And it’s going to change how we design products, forever.

For decades, UX design has been about guiding users through an experience. We’ve done that with visible interfaces: menus. Buttons. Cards. Sliders. We’ve obsessed over layouts, states, and transitions.

But with AI, a new kind of interface is emerging: one that’s invisible. One that’s driven by intent, not interaction.

Think about it.

You used to:
→ Open Spotify
→ Scroll through genres
→ Click into “Focus”
→ Pick a playlist

Now you just say: “Play deep focus music.” No menus. No tapping. No UI. Just intent → output.

You used to:
→ Search on Airbnb
→ Pick dates, guests, filters
→ Scroll through 50+ listings

Now we’re entering a world where you guide with words: “Find me a cabin near Oslo with a sauna, available next weekend.” So the best UX becomes barely visible.

Why does this matter? Because traditional UX gives users options. AI-native UX gives users outcomes.

Old UX: “Here are 12 ways to get what you want.”
New UX: “Just tell me what you want & we’ll handle the rest.”

And this goes way beyond voice or chat. It’s about reducing friction. Designing systems that understand intent. Respond instantly. And get out of the way. The UI isn’t disappearing. It’s mainly dissolving into the background.

So what should designers do? Rethink your role. Going forward, you won’t just be laying out screens. You’ll design interactions without interfaces. That means:
→ Understanding how people express goals
→ Guiding model behavior through prompt architecture
→ Creating invisible guardrails for trust, speed, and clarity

You are basically designing for understanding. The future of UX won’t be seen. It will be felt.

Welcome to the age of invisible UX. Ready for it?
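Under the hood, "intent → output" means turning a sentence into the same structured query the old filter UI produced. In a real product an LLM constrained to a schema would do the mapping; in this toy sketch a few regexes stand in so the shape of the design is visible (StayQuery, parse_intent, and the matched phrases are all invented for illustration):

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class StayQuery:
    """The structured query a booking backend needs, however it was entered."""
    location: Optional[str] = None
    amenity: Optional[str] = None
    when: Optional[str] = None

def parse_intent(utterance: str) -> StayQuery:
    """Map a free-form request to a StayQuery (toy regex stand-in for an LLM)."""
    text = utterance.lower()
    query = StayQuery()
    if m := re.search(r"near (\w+)", text):
        query.location = m.group(1)
    if m := re.search(r"with a (\w+)", text):
        query.amenity = m.group(1)
    if "next weekend" in text:
        query.when = "next weekend"
    return query

# The Airbnb-style example from the post, expressed as intent:
q = parse_intent("Find me a cabin near Oslo with a sauna, available next weekend")
```

The design point: the visible filter UI and the invisible conversational one converge on the same structured query, so "designing without interfaces" is really designing that schema, plus guardrails for what happens when the parse is ambiguous or wrong.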