Buying Clay won’t get you more leads. Buying Gong won’t make your sales team better on calls.

Just like: Buying a set of Wüsthofs won’t make you a better chef. Buying that new Titleist driver? Yeah… it’s not going to magically straighten your slice.

Too often we buy tools hoping they’ll solve our problems. But tools don’t solve problems. Processes do.

And the best Revenue and RevOps leaders I know all follow a playbook when it comes to tooling:

1. Start with the problem, not the tool

You need a list: not of tools you want to try, but of business problems you need to solve. Some common ones I hear:

"We need to improve our pipeline conversion rate"
"We need better forecasting data"
"We need to stay in closer touch with customers post-sale"

Then you can go hunting for tools that solve those problems. But if you’re just chasing every shiny new AI-powered tool, you’re going to waste time, budget, and team attention. Trust me, the 100th AI SDR tool still sounds pretty cool, but it might not be what your business needs right now.

2. Use a structured, data-driven evaluation process

"I can see us using this" is not a business case. You need a scorecard:

How easy is it to implement?
How hard will it be to drive adoption?
What’s the expected ROI?
Does it integrate with our current workflow and tech stack?

The best teams run their tooling like procurement pros. Gut feel isn’t enough, especially when budgets are tight and the stakes are high. (A rough sketch of what a scorecard can look like in code follows at the end of this post.)

3. No process = no payoff

Let’s say you buy the tool. Now what? Without enablement, accountability, and integration into daily workflows, that tool is going to sit on the shelf (just like that $500 driver in your garage). At minimum, you need:

- Training plans
- Change management
- Clear documentation
- Leadership support
- An incentive or consequence to drive usage

If you don’t have a process to make the tool work, you’ve bought shelfware.

4. Continuously re-evaluate your stack

We’re in an era where AI is creating entirely new categories almost overnight. Point solutions are becoming features. New platforms are emerging weekly. You can’t afford to run the same stack just because it worked last year. Great revenue leaders are constantly pruning and optimizing, aligning tools with the evolving needs of the team and the business.

The bottom line: software doesn’t make you better. Process does. So before you pull the trigger on the next tool, ask yourself: "Do we have the infrastructure, alignment, and plan to make this successful?"

Because trust me, your new Titleist is still going to slice 20 yards right unless you’ve put in the reps (or booked some lessons).
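To make the scorecard in point 2 concrete, here is a minimal sketch in Python. The criteria, weights, and 1-to-5 ratings are illustrative assumptions, not a standard; swap in whatever your own evaluation actually weighs.

```python
# A minimal scorecard sketch. Criteria and weights are example assumptions;
# replace them with the dimensions your procurement process cares about.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float  # relative importance; weights should sum to 1.0

CRITERIA = [
    Criterion("ease_of_implementation", 0.20),
    Criterion("ease_of_adoption", 0.25),
    Criterion("expected_roi", 0.35),
    Criterion("stack_integration", 0.20),
]

def score_tool(tool_name: str, ratings: dict[str, int]) -> float:
    """Combine 1-5 ratings into a single weighted score between 1 and 5."""
    total = sum(c.weight * ratings[c.name] for c in CRITERIA)
    print(f"{tool_name}: {total:.2f}")
    return total

# Example: compare two hypothetical tools against the same rubric.
score_tool("Tool A", {"ease_of_implementation": 4, "ease_of_adoption": 3,
                      "expected_roi": 4, "stack_integration": 5})
score_tool("Tool B", {"ease_of_implementation": 5, "ease_of_adoption": 2,
                      "expected_roi": 3, "stack_integration": 2})
```

The point isn’t the arithmetic; it’s that writing the rubric down forces the "I can see us using this" conversation to become comparable numbers.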
-
The best AI tool isn’t the most advanced. It’s the one that reliably solves your problem.

With new AI tools appearing constantly, the real challenge is no longer access. It’s making good choices. What separates progress from noise is not discovering more tools, but being clear about the problem you want to solve and selecting deliberately.

Here is a practical way to select AI tools that fit your specific needs:

𝟭. 𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 𝘁𝗵𝗲 𝗽𝗿𝗼𝗯𝗹𝗲𝗺, 𝗻𝗼𝘁 𝘁𝗵𝗲 𝘁𝗼𝗼𝗹

Before looking at models, platforms, or feature lists, get specific. Avoid generic goals like "Use AI in marketing." Be precise instead:
• Generate first drafts faster
• Summarise long expert interviews
• Support internal research
• Scale repetitive analysis

If you can’t describe the problem clearly, no tool will fix it.

𝟮. 𝗗𝗲𝗳𝗶𝗻𝗲 𝘁𝗵𝗲 𝗼𝘂𝘁𝗽𝘂𝘁 𝘆𝗼𝘂 𝗻𝗲𝗲𝗱

Be explicit about what "good" looks like:
• How accurate does it need to be?
• How much depth or creativity is required?
• Is this a one-off task or a repeatable workflow?
• Where does human judgment remain essential?

Different outputs require very different tools, even if they’re all called "AI".

𝟯. 𝗠𝗮𝗸𝗲 𝗰𝗼𝗻𝘀𝘁𝗿𝗮𝗶𝗻𝘁𝘀 𝗲𝘅𝗽𝗹𝗶𝗰𝗶𝘁

This is where many projects fail later. Consider:
• Cost and scalability
• Latency and reliability
• Data privacy and governance
• Integration into existing workflows

Constraints aren’t boring details. They define what’s viable in the real world.

𝟰. 𝗧𝗲𝘀𝘁 𝗱𝗲𝗹𝗶𝗯𝗲𝗿𝗮𝘁𝗲𝗹𝘆

Compare tools through a consistent methodology:
• Same task
• Same inputs
• Clear evaluation criteria

Look at:
• Output quality
• Consistency
• Failure modes
• Total effort, not just raw capability

Evaluate tools under realistic conditions. That’s where real strengths, weaknesses, and trade-offs become visible. (A sketch of such a comparison harness follows after this post.)

𝟱. 𝗖𝗵𝗼𝗼𝘀𝗲 𝗮𝗻𝗱 𝗿𝗲𝘃𝗶𝘀𝗶𝘁

The newest or highest-ranked model isn’t the default answer. In practice, cost, reliability, and integration often matter more than marginal capability gains. For many real workflows, simpler or more specialized tools perform better.

Treat tool selection as an ongoing process. Requirements change, constraints shift, and models improve; revisit the decision when the context evolves, not when the hype does.

___

AI maturity isn’t about always using the most popular tool. It’s about repeatedly matching the right tool to the right problem and being willing to change as the context evolves.

How do you approach AI tool selection in practice?

Like my content? Follow Till for more on AI, consulting, and leadership.
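As a minimal sketch of step 4: every candidate gets the same task, the same inputs, and the same grading function. The tools here are stand-in callables and the grader is a placeholder assumption; in practice each would wrap a real API client and a real rubric.

```python
# Deliberate testing sketch: same task, same inputs, same criteria for all tools.
from statistics import mean

def grade(output: str) -> float:
    """Placeholder criterion; replace with your own rubric, e.g. accuracy
    against a reference answer or a human rating on a fixed scale."""
    return float(len(output) > 0)  # trivially: did the tool produce anything?

def evaluate(tools: dict, inputs: list[str]) -> None:
    for name, run in tools.items():
        scores, failures = [], 0
        for text in inputs:
            try:
                scores.append(grade(run(text)))
            except Exception:  # count failure modes instead of hiding them
                failures += 1
        avg = mean(scores) if scores else 0.0
        print(f"{name}: mean quality={avg:.2f}, failures={failures}")

# Same task and inputs for every candidate; the lambdas are stand-ins.
evaluate(
    tools={
        "tool_a": lambda text: text.upper(),
        "tool_b": lambda text: " ".join(text.split()[:10]),
    },
    inputs=["First interview transcript ...", "Second interview transcript ..."],
)
```

Holding task, inputs, and criteria constant is what makes differences in quality, consistency, and failure modes attributable to the tools rather than to the test.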
-
Why Many PLM Evaluations and Improvement Projects Start in the Wrong Place

I see the same pattern in many PLM evaluations and improvement projects: companies start by defining dozens of individual use cases and hundreds of functional requirements across capability areas:

✔ Document management
✔ Change management
✔ BOM management
✔ Requirements management
✔ etc.

All important. But the wrong starting point.

🔹 The Core Mistake

Many organizations don’t first ask a much more fundamental question:

👉 Which end-to-end processes matter most to our business, and which of those must be tightly integrated to unlock real value and efficiency gains?

Without answering that question first, PLM becomes a checklist exercise:
- Feature A vs. Feature B
- Tool X vs. Tool Y
- Best-in-class capability comparisons

The result? A technically impressive solution that optimizes individual tasks, but not the overall flow of work.

🔹 Why This Matters

As I discussed in previous posts, the biggest efficiency gains come from process integration, not from isolated functional excellence. PLM is not just a collection of tools. It is the process backbone of product development.

If you don’t first understand:
- Where handoffs occur
- Where data is recreated or reconciled
- Where delays, loops, and rework originate

…then no amount of detailed requirements will save you from:
- Broken process chains
- Excessive integrations
- Productivity losses
- Low ROI from PLM investments
- User frustration

🔹 The Right Way to Approach PLM Evaluations

1️⃣ Identify your critical end-to-end processes (e.g., requirements → engineering → change → manufacturing → quality)

2️⃣ Determine where tight integration is essential. Not everything needs to be unified, but some workflows are critical to the business and absolutely need to be integrated. (A sketch of this mapping exercise follows after this post.)

3️⃣ Define architectural principles. What must be native? What can be federated? Where is latency acceptable?

4️⃣ Only then define detailed use cases and requirements. Now they serve a purpose: supporting process flow, not fragmenting it.

💡 The Key Takeaway

PLM architecture decisions should be driven by process integration first and tool preference second. When companies reverse that order, they often end up with individual best-in-class tools automating disjointed tasks. And that’s a very expensive way to miss the point of PLM, and a huge lost opportunity.

#PLM #Evaluation #Process #PLMadvisors
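One way to make steps 1️⃣ and 2️⃣ tangible is to write the end-to-end chain down as data before touching any requirements list. The sketch below is a hypothetical illustration; the handoff names and flags are assumptions, not a PLM methodology.

```python
# Sketch: model the end-to-end chain first, mark where data is re-entered or
# reconciled at each handoff, then derive which links must be tightly integrated.
from dataclasses import dataclass

@dataclass
class Handoff:
    source: str
    target: str
    data_recreated: bool      # is data manually re-entered or reconciled here?
    business_critical: bool   # do delays or rework here hit the business directly?

CHAIN = [
    Handoff("requirements", "engineering", data_recreated=False, business_critical=True),
    Handoff("engineering", "change", data_recreated=True, business_critical=True),
    Handoff("change", "manufacturing", data_recreated=True, business_critical=True),
    Handoff("manufacturing", "quality", data_recreated=False, business_critical=False),
]

# Handoffs that recreate data on a business-critical path are candidates for
# tight (native) integration; the rest can remain federated.
for h in CHAIN:
    if h.data_recreated and h.business_critical:
        print(f"Integrate tightly: {h.source} -> {h.target}")
```

The exercise matters more than the code: once the chain is explicit, "what must be native vs. federated" becomes a question you can answer per handoff instead of per feature list.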
-
After analyzing dozens of AI tool implementations across engineering orgs, I’ve developed a simple framework to separate real value from AI theater.

Four signs your AI tool is actually useful:

Sign 1: Reasons across datasets, not just summarizes
✗ "Summarizes Slack messages"
✓ "Analyzes 18 months of delivery patterns to predict which epic is at risk"
If it’s just making existing information shorter, that’s convenient. If it’s connecting patterns across datasets humans can’t feasibly analyze, that’s transformation.

Sign 2: Uses data you uniquely have
✗ "Uses GPT to answer questions from our docs"
✓ "Analyzes your commit patterns to predict integration conflicts before code review"
Generic AI is a commodity. Your proprietary data is your moat.

Sign 3: Changes how people work
✗ "Available in the sidebar if people want to try it"
✓ "Engineering leaders start their week reviewing its recommendations"
Tools people ignore don’t create value. Tools that become workflow habits do.

Sign 4: Eliminates categories of work
✗ "2-hour root cause analysis now takes 20 minutes"
✓ "We stopped doing manual root cause analysis entirely"
Marginal improvement is nice. Category elimination is transformative.

If your AI implementation passes 3-4 of these, you’re probably creating real value. If it passes 0-2, you might be paying for expensive pattern matching. (A tiny self-assessment version of this checklist follows below.)

#AIStrategy #EngineeringTools #EngineeringProductivity
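For teams that want to run this as a recurring check rather than a one-off gut call, here is a tiny self-assessment sketch using the four signs and the 3-4 / 0-2 thresholds from the post; the question wording is a paraphrase, not the author’s exact rubric.

```python
# Self-assessment sketch: four boolean signs, thresholds from the post.
SIGNS = {
    "reasons_across_datasets": "Connects patterns humans can't feasibly analyze?",
    "uses_proprietary_data": "Built on data only you have?",
    "changes_how_people_work": "Part of a weekly workflow habit?",
    "eliminates_work_category": "Removed a category of work entirely?",
}

def assess(answers: dict[str, bool]) -> str:
    passed = sum(answers[sign] for sign in SIGNS)
    return "likely real value" if passed >= 3 else "possibly AI theater"

# Example assessment of a hypothetical implementation.
print(assess({
    "reasons_across_datasets": True,
    "uses_proprietary_data": True,
    "changes_how_people_work": True,
    "eliminates_work_category": False,
}))  # -> likely real value
```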
-
We’ve all seen impressive demos of AI agents performing multi-step tasks: booking flights, writing code, answering tough questions. But behind the scenes, there’s a critical engineering step that often gets overlooked: how to measure whether the agent is actually working well.

Anthropic’s latest engineering post, "Demystifying evals for AI agents," does exactly that. It breaks down why evaluations matter and how teams can build them in practice, with a common-sense approach that engineers, product builders, and AI leaders will appreciate.

Anthropic outlines a simple but effective framework that teams can use when building and testing agents:

✔ Start early: Build evaluation tests long before your agent hits production. This helps reveal hidden failure modes and prevents teams from flying blind.

✔ Define success clearly: Rather than judging how an agent solved a problem, focus on whether the outcome matches your success criteria.

✔ Use layered graders:
• Code-based checks for objective correctness
• Model-based graders to capture nuance
• Human reviews for calibration and edge cases

✔ Distinguish eval types:
• Capability evals answer, "Can the agent do this at all?"
• Regression evals answer, "Does it still work after changes?"

(A minimal sketch of these two eval types, with a code-based grader, follows below.)

If you’re building or deploying AI agents, from internal tools to customer-facing automation, this is a piece worth a read: https://lnkd.in/gaPPQY8S

#AI #AIAgents #MachineLearning #AIEngineering #SoftwareQuality #ProductDevelopment #TechLeadership #Anthropic #Evaluation #DeveloperTools
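To ground the framework, here is a minimal sketch of a code-based grader run as both a capability eval and a regression eval. The agent is a stand-in function, not Anthropic’s API or the post’s code; the structure is the point.

```python
# Eval harness sketch: code-based grading of outcomes, split into capability
# evals ("can it do this at all?") and regression evals ("does it still work?").

def agent(task: str) -> str:
    """Stand-in for your agent; replace with a real agent invocation."""
    return "42" if "6 * 7" in task else "I don't know"

def code_grader(output: str, expected: str) -> bool:
    """Code-based grader: judge the outcome, not the path the agent took."""
    return output.strip() == expected

CAPABILITY_EVALS = [("What is 6 * 7?", "42")]        # new ability under test
REGRESSION_EVALS = [("Compute 6 * 7 for me.", "42")] # re-run after every change

def run_suite(name: str, cases: list[tuple[str, str]]) -> None:
    passed = sum(code_grader(agent(task), expected) for task, expected in cases)
    print(f"{name}: {passed}/{len(cases)} passed")

run_suite("capability", CAPABILITY_EVALS)
run_suite("regression", REGRESSION_EVALS)
```

In a real system the grader layers would grow (model-based grading for nuance, human review for calibration), but even this skeleton gives you something to run before the agent ever hits production.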