It was one of those eureka moments when we first saw #AI #agent systems acting autonomously in real-world #engineering workflows last year. Not just automating tasks, but reasoning through design constraints, adapting simulations, and making informed decisions. With language models evolving rapidly, this shift was inevitable.

Engineering knowledge is no longer locked in manuals or expert minds - it can be embedded into multi-agent systems that interact with tools, iterate on designs, and validate results. When connected to existing software, these systems start behaving like real assistants, executing workflows with near-human intuition. Instead of just scripting repetitive tasks, they start reasoning about the process itself: setting up simulations, adjusting parameters, optimizing designs - not as isolated actions, but as part of a broader engineering logic.

With the right architecture, these systems move closer to something like #JARVIS: a real engineering co-pilot that understands context, executes workflows, and improves over time. This isn't about replacing engineers. It's about scaling engineering intelligence. The question now: how far can we push this?
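To make the idea concrete, here is a minimal sketch of the kind of loop described above: an agent that runs a simulation, checks the result against a design constraint, and adjusts a parameter before trying again. Everything in it (the run_simulation stub, the toy deflection model, the constraint value) is a hypothetical placeholder, not any specific simulation tool's API.

```python
# Minimal sketch: an agent loop that sets up a simulation, checks a design
# constraint, and adjusts a parameter. All names and values are hypothetical.

def run_simulation(thickness_mm: float) -> float:
    """Stand-in for a call into real simulation software.
    Toy model: deflection drops as thickness grows."""
    return 120.0 / (thickness_mm ** 2)

MAX_DEFLECTION_MM = 2.0   # design constraint the agent must satisfy
thickness_mm = 4.0        # initial design parameter

for step in range(10):
    deflection = run_simulation(thickness_mm)
    if deflection <= MAX_DEFLECTION_MM:
        print(f"step {step}: thickness {thickness_mm:.1f} mm meets the constraint "
              f"(deflection {deflection:.2f} mm)")
        break
    # Reasoning step: constraint violated, so increase stiffness and retry.
    thickness_mm += 0.5
else:
    print("no feasible design found within the iteration budget")
```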
Cognitive Computing in Engineering Decision Making
Explore top LinkedIn content from expert professionals.
Summary
Cognitive computing in engineering decision-making refers to using artificial intelligence that can understand, reason, and learn like humans to support engineers in tackling complex problems and making choices throughout the product design and development process. These systems go beyond simple automation, providing intelligent assistance by reasoning about data, context, and workflows to improve the way engineers work.
- Connect systems: Bring together data from different sources so engineers have access to real-time information and context, helping them make more informed decisions.
- Use intelligent assistants: Integrate AI-powered tools that can remember past choices, suggest solutions, and guide engineers through challenging tasks as part of their daily workflow.
- Build feedback loops: Develop systems that learn from user actions and outcomes, so future decisions benefit from past experience and continuously improve over time (see the sketch after this list).
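As a rough illustration of the feedback-loop idea, here is a minimal sketch in which decisions and their outcomes are recorded so that later suggestions can be ranked by past experience. The DecisionMemory class, its scoring rule, and the example data are assumptions made for illustration only, not any product's API.

```python
# Minimal sketch of a feedback loop: record each decision and its outcome,
# then use that history to rank future suggestions. Illustrative only.
from collections import defaultdict

class DecisionMemory:
    def __init__(self):
        # (context, choice) -> list of outcome scores observed so far
        self._history = defaultdict(list)

    def record(self, context: str, choice: str, outcome_score: float) -> None:
        """Store what was chosen in a given context and how well it worked out."""
        self._history[(context, choice)].append(outcome_score)

    def suggest(self, context: str, candidates: list[str]) -> str:
        """Prefer the candidate with the best average past outcome in this
        context; with no history, the first candidate wins by default."""
        def avg_score(choice: str) -> float:
            scores = self._history.get((context, choice))
            return sum(scores) / len(scores) if scores else 0.0
        return max(candidates, key=avg_score)

memory = DecisionMemory()
memory.record("motor bracket material", "aluminum 6061", outcome_score=0.9)
memory.record("motor bracket material", "ABS", outcome_score=0.4)
print(memory.suggest("motor bracket material", ["ABS", "aluminum 6061", "steel"]))
# -> "aluminum 6061": past experience now informs the next decision
```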
We keep coming back to the same question in product development: why is it still so hard to make good decisions at the right time? Despite decades of effort and investment in PLM systems intended to be the "single source of truth", many engineering and manufacturing teams still work in silos and rely heavily on Excel. Data is fragmented. Context is lost. And decisions often rely more on tribal knowledge and spreadsheets than on trusted systems.

In my new article, I explore a different approach. At OpenBOM, we've been thinking about how engineers work and what kind of support they need. The answer is not more data forms to fill out, but intelligent tools that help them in the moment, with access to real-time product and business context for their design work. It is a combination of Product Memory and Engineering Co-Pilot.

🔹 Product Memory captures the full lifecycle of a product (design decisions, vendor info, manufacturing challenges, support issues) and makes that knowledge accessible.

🔹 Engineering Co-Pilot brings that information to engineers in real time, right inside their workflows, helping them choose components, estimate costs, avoid mistakes, and stay connected to the broader business.

This isn't a chatbot. It's not search. It's a step toward closing the gap between design and operations: an intelligent assistant that surfaces information collected from multiple systems through a conversational user interface. The article outlines why the "single system" vision didn't work, and how a more connected, intelligent, and flexible architecture might.

If you're thinking about the future of engineering, AI assistance, and decision-making across the product lifecycle, I'd love for you to read it and share your thoughts.

👉 Engineering Copilot, Product Memory, and Business Workflows: A New Vision for Product Development [link in the comments]

#DigitalThread #ProductDevelopment #PLM #EngineeringTools #AIinEngineering #ManufacturingInnovation #OpenBOM
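To show the shape of the idea (not OpenBOM's actual implementation, which the post does not detail), here is a minimal sketch: a hypothetical ProductMemory that aggregates records from several systems, and a copilot_answer function that pulls that cross-system context together for a component question. All class names, fields, and example data are illustrative assumptions.

```python
# Minimal sketch of the Product Memory / Engineering Co-Pilot idea.
# Illustrative assumptions only; not a specific vendor's implementation.
from dataclasses import dataclass, field

@dataclass
class ProductRecord:
    source: str        # e.g. "PLM", "ERP", "support tickets"
    component: str
    note: str

@dataclass
class ProductMemory:
    """Aggregates lifecycle knowledge (design decisions, vendor info,
    manufacturing and support issues) from multiple systems."""
    records: list[ProductRecord] = field(default_factory=list)

    def ingest(self, record: ProductRecord) -> None:
        self.records.append(record)

    def context_for(self, component: str) -> list[ProductRecord]:
        """Everything the memory knows about a given component."""
        return [r for r in self.records if r.component == component]

def copilot_answer(memory: ProductMemory, question: str, component: str) -> str:
    """Stand-in for the Engineering Co-Pilot: gather cross-system context
    and compose a reply."""
    context = memory.context_for(component)
    facts = "; ".join(f"[{r.source}] {r.note}" for r in context) or "no history found"
    return f"Q: {question}\nContext for {component}: {facts}"

memory = ProductMemory()
memory.ingest(ProductRecord("PLM", "connector-X12", "rev B approved after thermal test"))
memory.ingest(ProductRecord("ERP", "connector-X12", "vendor lead time is 6 weeks"))
memory.ingest(ProductRecord("support tickets", "connector-X12", "field failures traced to rev A"))
print(copilot_answer(memory, "Can I reuse this connector in the new assembly?", "connector-X12"))
```

In a real system the gathered context would be passed to an LLM behind the conversational interface rather than concatenated into a string, but the aggregation step is the part the post emphasizes.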
-
As we transition from traditional task-based automation to 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀, understanding 𝘩𝘰𝘸 an agent cognitively processes its environment is no longer optional; it's strategic. This diagram distills the mental model that underpins every intelligent agent architecture, from LangGraph and CrewAI to RAG-based systems and autonomous multi-agent orchestration.

The Workflow at a Glance
1. 𝗣𝗲𝗿𝗰𝗲𝗽𝘁𝗶𝗼𝗻 – The agent observes its environment using sensors or inputs (text, APIs, context, tools).
2. 𝗕𝗿𝗮𝗶𝗻 (𝗥𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 𝗘𝗻𝗴𝗶𝗻𝗲) – It processes observations via a core LLM, enhanced with memory, planning, and retrieval components.
3. 𝗔𝗰𝘁𝗶𝗼𝗻 – It executes a task, invokes a tool, or responds, influencing the environment.
4. 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 (Implicit or Explicit) – Feedback is integrated to improve future decisions.

This feedback loop mirrors principles from:
• The 𝗢𝗢𝗗𝗔 𝗹𝗼𝗼𝗽 (Observe–Orient–Decide–Act)
• 𝗖𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲𝘀 used in robotics and AI
• 𝗚𝗼𝗮𝗹-𝗰𝗼𝗻𝗱𝗶𝘁𝗶𝗼𝗻𝗲𝗱 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 in agent frameworks

Most AI applications today are still "reactive." But agentic AI, autonomous systems that operate continuously and adaptively, requires:
• A 𝗰𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗹𝗼𝗼𝗽 for decision-making
• Persistent 𝗺𝗲𝗺𝗼𝗿𝘆 and contextual awareness
• Tool-use and reasoning across multiple steps
• 𝗣𝗹𝗮𝗻𝗻𝗶𝗻𝗴 for dynamic goal completion
• The ability to 𝗹𝗲𝗮𝗿𝗻 from experience and feedback

This model helps developers, researchers, and architects 𝗿𝗲𝗮𝘀𝗼𝗻 𝗰𝗹𝗲𝗮𝗿𝗹𝘆 𝗮𝗯𝗼𝘂𝘁 𝘄𝗵𝗲𝗿𝗲 𝘁𝗼 𝗲𝗺𝗯𝗲𝗱 𝗶𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲, and where things tend to break.

Whether you're building agentic workflows, orchestrating LLM-powered systems, or designing AI-native applications, I hope this framework adds value to your thinking. Let's elevate the conversation around how AI systems 𝘳𝘦𝘢𝘴𝘰𝘯. Curious to hear how you're modeling cognition in your systems.
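For readers who think in code, here is a minimal sketch of the perception, reasoning, action, and learning loop described above. The reasoning step is a trivial rule standing in for an LLM, and the environment is a stub; the point is the shape of the cognitive loop, not any particular framework's API (LangGraph, CrewAI, and others each have their own abstractions).

```python
# Minimal sketch of a perception -> reasoning -> action -> learning loop.
# The "brain" is a placeholder rule rather than an LLM; illustrative only.

class Agent:
    def __init__(self):
        # (observation, action, feedback) tuples accumulated over time
        self.memory: list[tuple[str, str, float]] = []

    def perceive(self, environment: dict) -> str:
        """Observe the environment (text, API responses, tool output, ...)."""
        return environment["observation"]

    def reason(self, observation: str) -> str:
        """Core reasoning step; a real agent would call an LLM with memory,
        planning, and retrieval wired in."""
        return "diagnose" if "error" in observation else "proceed"

    def act(self, action: str, environment: dict) -> float:
        """Execute the action against the environment and return feedback."""
        environment["last_action"] = action
        return 1.0 if action == "proceed" else 0.5

    def learn(self, observation: str, action: str, feedback: float) -> None:
        """Fold feedback back into memory so future decisions can improve."""
        self.memory.append((observation, action, feedback))

agent = Agent()
environment = {"observation": "simulation finished with error code 3"}
for _ in range(3):  # an agentic system would run this loop continuously
    obs = agent.perceive(environment)
    action = agent.reason(obs)
    feedback = agent.act(action, environment)
    agent.learn(obs, action, feedback)
print(agent.memory)
```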