🎯 How can we use a low-fidelity optimization model to achieve performance similar to a high-fidelity model? Many decision-making algorithms can be viewed as tuning a low-fidelity model within a high-fidelity simulator to improve performance. A great example is Cost Function Approximations (CFAs) by Warren Powell. CFAs embed tunable parameters, such as cost coefficients, into a simplified, deterministic model. These parameters are then refined by optimizing performance in a high-fidelity stochastic simulator, via either derivative-free or gradient-based methods. A similar philosophy appears in optimal control, where controllers are tuned using simulation optimization.

⚙️ Inspired by this paradigm, my student Asha Ramanujam recently developed the PAMSO algorithm. PAMSO (Parametric Autotuning for Multi-Timescale Optimization) tackles complex systems that operate across multiple timescales:
- High-level decision layer: makes strategic decisions (e.g., planning, design).
- Low-level decision layer: takes the high-level inputs, makes detailed operating decisions (e.g., scheduling), applies detailed constraints and uncertainties, and computes the true objective.

However, one-way top-down communication between layers often results in infeasibility or poor solutions, due to mismatches between the high-level model and the detailed low-level operating model.

💡 PAMSO augments the high-level model with tunable parameters that serve as a proxy for the complex physics and uncertainties embedded in the low-level model. Instead of attempting to solve both levels jointly, we fix the hierarchical structure: the high-level layer makes planning or design decisions and passes them down to the low-level scheduling or operational layer, which acts as a high-fidelity simulator. We treat this top-down hierarchy as a black box:
- Inputs: the tunable parameters embedded in the high-level model.
- Output: the overall objective value after the low-level simulator evaluates feasibility and performance.

By optimizing these parameters with derivative-free methods, PAMSO steers the entire system toward high-quality, feasible solutions.

🚀 Bonus: transfer learning! If the parameters are designed to be problem-size invariant, they can be tuned on smaller problem instances and transferred to larger-scale problems with minimal extra effort.

⚙️ Case studies demonstrate PAMSO's scalability and effectiveness in generating good, feasible solutions:
✅ A MINLP model for integrated design and scheduling in a resource-task network with ~67,000 variables
✅ A massive MILP model for integrated planning and scheduling of electrified chemical plants and renewable energy with ~26 million variables

Even solving the LP relaxation of these problems exceeds memory limits, and their structure is not easily amenable to decomposition techniques. https://lnkd.in/gDfcvDaZ
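To make the black-box tuning loop concrete, here is a minimal toy sketch in Python (not the actual PAMSO implementation): a deterministic "high-level" capacity model carries one tunable parameter `theta` (a demand forecast), a stochastic "low-level" evaluator scores the resulting design over demand scenarios, and plain random search tunes `theta`. All names and numbers are invented for illustration.

```python
import random

random.seed(0)

BUILD_COST = 2.0    # high-level data: cost per unit of capacity
PENALTY = 10.0      # low-level data: cost per unit of unmet demand
# Demand uncertainty lives only in the low-level "simulator":
SCENARIOS = [random.gauss(5.0, 1.5) for _ in range(200)]

def high_level(theta):
    """Deterministic high-level design model: build exactly enough
    capacity to cover the demand forecast theta, the tunable parameter
    that stands in for the uncertainty this model cannot see."""
    return max(theta, 0.0)

def low_level(x):
    """High-fidelity evaluator: average true cost of capacity x over
    the demand scenarios (build cost plus shortfall penalties)."""
    shortfall = sum(max(d - x, 0.0) for d in SCENARIOS) / len(SCENARIOS)
    return BUILD_COST * x + PENALTY * shortfall

def tune(n_iter=300):
    """Derivative-free (random-search) tuning of theta, treating the
    high-level -> low-level pipeline as a black box."""
    best_theta, best_cost = 0.0, float("inf")
    for _ in range(n_iter):
        theta = random.uniform(0.0, 15.0)
        cost = low_level(high_level(theta))
        if cost < best_cost:
            best_theta, best_cost = theta, cost
    return best_theta, best_cost

theta_star, cost_star = tune()
print(f"tuned forecast: {theta_star:.2f}, expected cost: {cost_star:.2f}")
```

Note how the tuned forecast typically lands above the mean demand of 5: the parameter absorbs the shortfall risk that the deterministic high-level model cannot represent.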
Optimization Algorithms in Engineering
Explore top LinkedIn content from expert professionals.
Summary
Optimization algorithms in engineering are mathematical methods used to find the best possible solution to complex design and operational problems by systematically adjusting variables within certain constraints. These approaches help engineers improve system performance, save resources, and solve challenges ranging from scheduling to designing components efficiently.
- Consider problem structure: Choose your optimization method based on the shape and requirements of your engineering problem, such as its size, data complexity, and type of feedback available.
- Explore population-based models: Try algorithms inspired by nature, like swarm intelligence or evolutionary strategies, to tackle problems with many variables or uncertain environments.
- Tune for real-world use: Adjust algorithm settings and parameters to match your project’s constraints, whether it’s time, cost, or available information, to achieve solutions that are both practical and reliable.
-
Exciting News to Kick Off 2025! I'm happy to announce that our latest paper, titled "Large Language Model-Based Evolutionary Optimizer: Reasoning with Elitism," has been published in Neurocomputing (Elsevier)!

This work explores the potential of Large Language Models (LLMs) as black-box optimizers, leveraging their remarkable reasoning capabilities for zero-shot optimization across a variety of scenarios, including multi-objective and high-dimensional problems. We introduce the Language-Model-Based Evolutionary Optimizer (LEO), a novel, population-based method for numerical optimization. Applications include benchmark challenges and real-world engineering problems such as supersonic nozzle shape optimization, heat transfer optimization, and windfarm layout optimization.

Key Highlights:
1. Comparable performance to state-of-the-art optimization methods
2. Insights into leveraging LLMs' creative potential while addressing challenges like hallucinations
3. Practical guidelines for reliable optimization using LLMs
4. Limitations and exciting directions for future research

A huge thanks to all the collaborators (Shuvayan Brahmachary, Subodh Joshi, Kaushic K, Kaushik Koneripalli, Aniruddha Panda, Harshil Patel, PhD, et al.) and the reviewers for their support and feedback! If you're interested in cutting-edge intersections of AI, optimization, and engineering, I invite you to check out the paper: https://lnkd.in/e5hzJwhh Wishing everyone a joyful and prosperous New Year!
-
Stop settling for vanilla fixed-point updates. Many optimization algorithms reduce to fixed-point steps, and there's a whole toolbox of options.

🔎 What's a fixed point? Consider a function T(x) mapping a Euclidean space to itself. A point x* is a fixed point of T if x* = T(x*), i.e., applying T returns its input. For many algorithms, T is constructed so that its fixed points are precisely the solutions of the optimization problem.

📓 Fixed-Point Iterations. Here are a few schemes for finding fixed points:

🔹 Banach–Picard: The vanilla update most people are familiar with: repeatedly apply T to your current point. For a contraction, each step shrinks the distance to the fixed point by the same factor.

🔹 Krasnosel'skiĭ–Mann (KM): Take a weighted average of your current point and T(current), gently steering toward a fixed point without overshooting.

🔹 Fast KM: Like KM, but also adds a slice of the previous movement to speed up average progress.

🔹 Heavy-Ball: Add "momentum" by applying T not to x^k, but to x^k plus a fraction of your last step, giving an extra push.

🔹 Halpern: At each update, mix in a fixed anchor point u and then apply T, gradually shifting all the weight onto T to obtain the solution closest to u.

🔹 Viscosity Approximation: Blend a simple contraction f(x) with T(x) at each step, driving the iterates toward a fixed point of T that satisfies a variational inequality.

🔹 Ishikawa: A two-stage move: first mix toward T(x) to get y, then apply T to y and blend back with your original x for extra stability.

The assumptions and trajectories of each scheme vary, so each ought to be chosen with your application in mind.

🔭 Looking ahead: There are more sophisticated quasi-Newton-style schemes, like Anderson Acceleration and SuperMann, for fixed-point iteration.

⸻ ♻️ Learn something new? Repost so others can too. 💬 Questions/feedback are welcome in the comments. 🔔 Never miss a post: get it to your inbox: https://typalacademy.com #algorithms #mathematics #optimization
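Three of these schemes can be sketched in a few lines of Python, using T = cos (whose unique real fixed point is x* ≈ 0.739) as a toy map; the step counts and weights below are arbitrary choices for illustration, not recommended settings.

```python
import math

def picard(T, x0, n=100):
    # Banach–Picard: repeatedly apply T to the current point.
    x = x0
    for _ in range(n):
        x = T(x)
    return x

def km(T, x0, lam=0.5, n=200):
    # Krasnosel'skiĭ–Mann: average the current point with T(current).
    x = x0
    for _ in range(n):
        x = (1 - lam) * x + lam * T(x)
    return x

def halpern(T, x0, anchor, n=5000):
    # Halpern: mix in a fixed anchor point with vanishing weight
    # a_k = 1/(k+2), gradually shifting all the weight onto T.
    x = x0
    for k in range(n):
        a = 1.0 / (k + 2)
        x = a * anchor + (1 - a) * T(x)
    return x

print(picard(math.cos, 1.0))
print(km(math.cos, 1.0))
print(halpern(math.cos, 1.0, anchor=0.0))
```

All three drift toward the same fixed point; what differs is the path taken and the assumptions on T needed to guarantee convergence.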
-
🚀 Why have I started Gateway.AI? Over the last few years I've been in a lot of rooms where teams try to adopt ML/automation in the real world: labs, factories, data-heavy groups. The biggest shift is to start with the workflow, not the model.

Day-one questions I ask now:
- Map the workflow: What are the true bottlenecks to throughput, cost, or quality?
- Fit for AI/automation: Which bottlenecks can tech actually relieve, and which might it worsen?
- Watch for negative ROI: Could AI create more dashboards/paperwork without new value?
- For experimentalists: If today's best theory/simulation were free and instant, how would you change experiments on the scale of seconds → weeks?
- Benchmarks that matter: How will you measure productivity gains from AI internally?
- Downstream value: Who benefits next? Can we define benchmarks for downstream impact?
- Rewards & objectives: What's the objective function of the experiment?
- For theory/ML folks: What experimental footprint (time/samples/$) is required to falsify the hypothesis?

🔧 Which AI/optimization method should you use? Pick methods by the shape of your problem, not by hype. A quick picker:
- Small search space, fast feedback, clear objective → start simple: design of experiments (DoE), gradient/coordinate search, rules.
- Low-to-mid dimensional, moderate cost, noisy objective → Bayesian optimization (single- or multi-objective; add constraints if needed).
- Structured proxies available (cheap early readouts) → multi-fidelity BO or active learning with surrogate models (Gaussian processes, deep kernel learning).
- Huge or discrete spaces, many viable recipes, rich constraints → genetic algorithms / evolutionary strategies (keep operators "manufacturable").
- High-frequency control with a plant model → model predictive control (MPC).
- Sequential decisions under uncertainty, sparse rewards → contextual bandits (short horizon) → RL (only if you truly need it).
- Hard planning with known costs/heuristics → tree search (A*, MCTS) beats RL in many cases.

Choose with four dials in mind: parameter-space complexity, data dimensionality, proxy availability, and feedback latency (seconds vs. hours vs. weeks). Your algorithm should match your budget (samples/time), respect constraints, and exploit any physics priors you have.

These questions and choices keep projects anchored to outcomes, not demos. It's why I started Gateway.AI: to translate ML/AE enthusiasm into measurable productivity and downstream value for materials science! If you're deciding where to start, or whether to start at all, let's talk! https://lnkd.in/eNeUiADP #AI #Automation #Optimization #ActiveLearning #BayesianOptimization #GeneticAlgorithms #MPC #RL #Bandits #RDM #LabAutomation #MLOps #ExperimentalDesign #GatewayAI
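To illustrate the "start simple" row of the picker, here is a minimal derivative-free coordinate search, a didactic sketch rather than a production implementation: probe a fixed step along each axis in both directions, keep any improving move, and halve the step when nothing helps.

```python
def coordinate_search(f, x0, step=1.0, tol=1e-6, max_iter=100_000):
    """Derivative-free coordinate search minimizing f(list[float]) -> float."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x[:]
                trial[i] += delta          # probe +/- step along axis i
                ft = f(trial)
                if ft < fx:                # keep any improving move
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5                    # stuck: refine the mesh
        it += 1
    return x, fx

# Toy objective with minimum at (3, -1):
best, val = coordinate_search(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                              [0.0, 0.0])
print(best, val)
```

This kind of pattern search is sample-hungry in high dimensions, which is exactly when the later rows of the picker (BO, evolutionary methods) start to pay off.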
-
𝗘𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻𝗮𝗿𝘆 𝗖𝗼𝗺𝗽𝘂𝘁𝗶𝗻𝗴 𝗖𝗹𝗮𝘀𝘀𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 - 𝗦𝘄𝗮𝗿𝗺 𝗜𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻

Swarm Intelligence Optimization (SO) is a class of optimization algorithms inspired by the behavior of social animals, such as birds, ants, and bees. These algorithms represent candidate solutions as agents and use interaction and cooperation between those agents to find better solutions. Some of the most widely used families of algorithms in SO include:

▪️ 1. Ant Colony Optimization (ACO): Inspired by the foraging behavior of ants. In ACO, candidate solutions are represented as ants that explore the search space and deposit a pheromone trail that guides the movement of other ants. The ants update the pheromone trail based on the quality of the solutions they find, with better solutions leading to stronger trails.

▪️ 2. Particle Swarm Optimization (PSO): Mimics the behavior of social animals, such as bird flocks and fish schools. In PSO, candidate solutions are represented as particles that move and interact with each other in a search space.

▪️ 3. Artificial Bee Colony (ABC): Inspired by the foraging behavior of honey bees. In ABC, candidate solutions are represented as bees that explore the search space and update their positions based on the quality of the solutions they find; a combination of local and global information about the search space guides the bees' movement.

▪️ 4. Firefly Algorithm (FA): Inspired by the flashing behavior of fireflies. Candidate solutions are represented as fireflies that emit light and update their positions based on solution quality; relative brightness attracts other fireflies to better locations.

▪️ 5. Cuckoo Search (CS): Inspired by the egg-laying behavior of cuckoos. Candidate solutions are represented as cuckoos that lay eggs in nests and update their positions based on the quality of the solutions they find.

All of these algorithms pursue the same goal but use different strategies to find the optimal solution: evolutionary algorithms use a genetic metaphor to evolve a population of candidate solutions, while swarm optimization uses a population of agents that interact with each other. #evolutionarycomputing #swarmintelligence #optimizationalgorithms #computerscience #datamining #technology
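For concreteness, here is a bare-bones PSO in Python. It is a didactic sketch: real implementations add velocity clamping, boundary handling, and stopping criteria, and the coefficients below (inertia w, pulls c1/c2) are common defaults, not tuned values.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer: minimize f over [lo, hi]^dim."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(1)
best, best_val = pso(lambda p: sum(v * v for v in p), dim=2)
print(best, best_val)
```

The same skeleton (a population of agents sharing information about good regions) underlies ACO, ABC, FA, and CS; what changes is the update rule each agent follows.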
-
We implemented and modified a biologically inspired evolutionary algorithm (Paddy) for objective-driven parameter optimization of chemical and biological systems when the underlying functional relationship is unknown. In this introductory manuscript, we show the power of Paddy optimization on toy mathematical problems, efficiently sample latent space for objective-driven generation of molecules (using a junction-tree variational autoencoder as an example cheminformatics drug-discovery task), and tune neural-network hyperparameters, comparing against Hyperopt Bayesian optimization. Watch this space: we have used Paddy for closed-loop experiments and parameter optimization to automate instruments based on specific objectives (coming soon) for efficient optimization. You can use Paddy from our lab GitHub or the PyPI distribution. Well done Armen Beck, who started this work in collaboration with Jonathan Fine when he was in the lab (both now at Merck). https://lnkd.in/gmjmXxjm
-
When solving mixed-integer programming (MIP) problems, you might sometimes need to add constraints to the model only when they are violated by the current solution. These are known as “lazy constraints.” Gurobi Optimization allows the addition of such constraints dynamically during the search process via callback functions. This can be significantly more efficient than adding all possible constraints at the outset, especially when the number of potential constraints is large. ⬇️ Let’s look at an example ⬇️ Suppose you’re solving a vehicle routing problem (VRP) and want to prevent sub-tours (circuits that do not include all nodes) without explicitly adding subtour elimination constraints for every possible subtour. You can use a callback function to add subtour elimination constraints only when Gurobi finds a solution that contains a subtour. The example attached shows how to set up a callback function that adds lazy constraints dynamically to eliminate subtours in a VRP. It uses model.cbGetSolution() within a callback function to inspect the current solution and identify any subtours. If a subtour is found, model.cbLazy() is used to add a constraint that eliminates the subtour from future solutions. #operationsresearch #optimization #milp #computerscience #algorithms
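The attached example is not reproduced here, but the core of any such callback is detecting subtours in a candidate solution. Below is a hedged sketch: a plain-Python subtour finder (adapted logic, not Gurobi's official code), with the gurobipy callback wiring shown only in comments; the `model._x` attribute and variable names are assumptions for illustration.

```python
def shortest_subtour(n, selected_edges):
    """Return the shortest cycle among the edges chosen in a candidate
    solution of a symmetric routing model (each node has degree 2).
    If the cycle has fewer than n nodes, it is a subtour to eliminate."""
    neighbors = {i: [] for i in range(n)}
    for i, j in selected_edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    unvisited = set(range(n))
    best = list(range(n))
    while unvisited:
        cycle, node = [], next(iter(unvisited))
        while node in unvisited:          # walk one cycle of the solution
            unvisited.discard(node)
            cycle.append(node)
            node = next((j for j in neighbors[node] if j in unvisited), node)
        if len(cycle) < len(best):
            best = cycle
    return best

# Sketch of the gurobipy wiring (assumes binary edge vars stored in model._x):
#
# def subtour_cb(model, where):
#     if where == GRB.Callback.MIPSOL:
#         vals = model.cbGetSolution(model._x)        # inspect candidate
#         edges = [e for e in model._x if vals[e] > 0.5]
#         tour = shortest_subtour(n, edges)
#         if len(tour) < n:                           # subtour found: cut it
#             model.cbLazy(gp.quicksum(model._x[i, j]
#                                      for i in tour for j in tour if i < j)
#                          <= len(tour) - 1)
#
# model.Params.LazyConstraints = 1
# model.optimize(subtour_cb)

two_loops = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
print(shortest_subtour(6, two_loops))
```

On the two disjoint triangles above, the finder returns a 3-node cycle, which the lazy constraint would then forbid in future incumbents.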
-
🚫 What if we stopped over-dimensioning our parts?

🔍 Are you familiar with topology optimization? It's a revolutionary approach that removes unnecessary material from a part while maintaining its mechanical performance. The result? Lightweight, strong, and highly efficient designs! 🚀

Using simulation software like SolidWorks, ANSYS, or Fusion 360, we can:
⚖️ Reduce part weight by up to 60%
🔩 Optimize stress distribution
💡 Improve performance, aesthetics, and, most importantly, material savings!

Topology optimization is already a key tool in:
✈️ Aerospace
🛰️ Space industry
🏎️ Motorsport
🖨️ And especially additive manufacturing, which enables the production of complex geometries.

Of course, there are challenges:
⚠️ Some designs cannot be machined conventionally
🛠️ It requires advanced tools and skilled engineers
🔄 Sometimes the model must be reinterpreted for industrial viability

But one thing is clear: the future of design lies in intelligently lightweight parts! 🌟

What about you? Have you integrated topology optimization into your projects, or do you think it's reserved for large industries? #TopologicalOptimization #MechanicalDesign #Engineering #SolidWorks #AdditiveManufacturing #Innovation #CAO #Simulation #MechanicalEngineering