International Workshop on Advances in Simulation-Driven Optimization and Modeling (ASDOM 2011)

August 13-14, 2011

Reykjavik University, Iceland

List of Speakers

The list of confirmed speakers in alphabetical order:
• Eyjólfur Ingi Ásgeirsson (Reykjavik University, Iceland), Project planning using integer optimization, relaxations, alpha-points and network flows, Abstract
• John W. Bandler (McMaster University, Canada), Exploitation of simulators and surrogates in optimization-driven design: the art and the science, Abstract
• Dibakar Chakrabarty (National Institute of Technology, Silchar, India), Identification of Unknown Groundwater Pollution Sources – A linked Simulation-Optimization Approach, Abstract
• Qingsha S. Cheng (McMaster University, Canada), Simulation-Driven Optimization and Modeling with Adjoint Sensitivities, Abstract
• Ivo Couckuyt (Ghent University, Belgium), Forward and inverse surrogate modeling, Abstract
• Alexander I.J. Forrester (University of Southampton, UK), Multi-objective Design Using Variable Fidelity Geometry and co-Kriging, Abstract
• Paul D. Franzon (North Carolina State University, USA), Application of surrogate modeling in variation-aware macromodel and circuit design, Abstract
• Genetha Gray (Sandia National Laboratories, USA), Simultaneous optimization and uncertainty quantification of model calibration parameters, Abstract
• Leo Gusel (University of Maribor, Slovenia), Genetic Programming Method for Modeling of Material Properties, Abstract
• Nils Hornung (Fraunhofer-Institute for Algorithms and Scientific Computing SCAI, Germany), Aspects of adaptive hierarchical RBF metamodels for optimization, Abstract
• Slawomir Koziel (Reykjavik University, Iceland), Response correction techniques for computationally efficient simulation-driven design optimization in microwave engineering, Abstract
• Konstantinos Kyprianidis (Chalmers University of Technology, Sweden), Lessons learned during the development of the multidisciplinary aero engine design tool TERA2020 (Technoeconomic Environmental Risk Assessment for 2020), Abstract
• Leifur Leifsson (Reykjavik University, Iceland), Variable-fidelity aerodynamic shape optimization, Abstract
• Stanislav Ogurtsov (Reykjavik University, Iceland), Rapid Surrogate-Based Design Optimization of Antennas Using Coarse-Discretization Simulation Data, Abstract
• James Parr (University of Southampton, UK), Infill sampling criteria for constrained surrogate-model-based global optimization, Abstract
• Rene Pinnau (University of Kaiserslautern, Germany), Model Hierarchies & Space Mapping Optimization, Abstract
• Malte Priess (Christian Albrechts University, Germany), Surrogate-Based Optimization of Climate Model Parameters, Abstract
• Delphine Sinoquet (IFPEN, France), Derivative free optimization for inverse problems in reservoir characterization, Abstract
• Thomas Slawig (Christian Albrechts University, Germany), Optimization in Climate Models, Abstract
• Hlynur Stefánsson (Reykjavik University, Iceland), An Iterative Simulation and Optimisation Procedure to Increase Robustness of Production Plans, Abstract
• Vassili Toropov (University of Leeds, UK), Industrial applications of metamodel-based design optimization, Abstract
• Xin-She Yang (National Physical Laboratory, UK), Metaheuristic Optimization and Inverse Problems with Industrial Applications, Abstract
• Julian Scott Yeomans (York University, Canada), An Efficient Modelling-to-Generate-Alternatives Algorithm for Public Environmental Policy Formulation: Applying Co-Evolutionary Simulation-Optimization to Municipal Solid Waste Management, Abstract
• Yifan Zhang (McMaster University, Canada), Simulation-based Adjoint Sensitivity Analysis in Imaging and Detection, Abstract


Project planning using integer optimization, relaxations, alpha-points and network flows

Eyjólfur Ingi Ásgeirsson

Reykjavik University, Iceland

Abstract: Managing an industrial production facility requires carefully allocating limited resources, and gives rise to large, potentially complicated planning and scheduling problems. A detailed study of this project planning problem yields a complicated mixed-integer programming (MIP) model with upwards of hundreds of thousands of variables and even more constraints. Consistently and quickly solving such a model exactly is impossible with today's algorithms and computers; in addition to branch-and-bound, good heuristics and approximation algorithms are required. In an effort to design such algorithms, we study several different methods of generating good solutions given the solution to the LP relaxation. In particular, we demonstrate the value of using α-points as a way to quickly and cheaply generate, from one solution of an LP relaxation, many feasible solutions to an integer program. For this problem, the use of α-points, combined with other heuristics, outperforms local search. We also see the value of finding combinatorially structured subproblems as opposed to using simple greedy approaches.
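The α-point idea can be sketched concretely. In this hypothetical example (the talk's actual model and data are not shown), the LP relaxation assigns each job a fraction of processing per time slot; the α-point of a job is the first slot by which an α fraction of it is complete, and sorting jobs by their α-points yields a feasible single-machine schedule:

```python
# Hypothetical sketch of alpha-point scheduling on one machine.
# lp_fractions[j][t] = fraction of job j processed in slot t in the LP
# relaxation (toy data in the usage note, not from the talk).

def alpha_point(fractions, alpha):
    """Earliest slot by which at least an alpha fraction of the job is done."""
    done = 0.0
    for t, f in enumerate(fractions):
        done += f
        if done >= alpha - 1e-12:
            return t
    return len(fractions) - 1

def alpha_schedule(proc_times, lp_fractions, alpha):
    """Order jobs by their alpha-points, then list-schedule them sequentially."""
    order = sorted(range(len(proc_times)),
                   key=lambda j: alpha_point(lp_fractions[j], alpha))
    t, completion = 0, {}
    for j in order:
        t += proc_times[j]
        completion[j] = t
    return order, completion
```

Sweeping α over, say, {0.1, 0.2, ..., 0.9} yields a family of schedules from a single LP solve, which is the cheapness the abstract refers to.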


Exploitation of simulators and surrogates in optimization-driven design: the art and the science

John W. Bandler

McMaster University, Canada

Abstract: It is widely appreciated that suitable physics-based surrogates can facilitate engineering design optimizations with “coarse-model” optimization speed and “fine-model” or high-fidelity simulation accuracy. In this talk, we exemplify how aspects of art (engineering experience and intuition) and science (physics and mathematics) have successfully come together in a space mapping technology. We address the original, aggressive, implicit and output space mapping variants. We also present exciting developments in tuning space mapping formulations suited to electromagnetics-based microwave design. Tuning space mapping is an invasive approach that takes advantage of auxiliary simulations with additional internal ports opened in the fine model; consequently it draws heavily on the prior knowledge and expertise of the designer. Several useful variants of tuning space mapping have appeared. No less than the established space mapping approaches, the newer tuning methodology follows the traditional experience and intuition of the design engineer.


Identification of Unknown Groundwater Pollution Sources – A linked Simulation-Optimization Approach

Dibakar Chakrabarty

National Institute of Technology, Silchar, India

Abstract: Identification of unknown groundwater pollution sources is a challenging problem. Some of the difficulties that increase the complexity of the source identification problem include: limited and sparse observation data, spatio-temporal variation in the magnitude of source fluxes, uncertainties involved in the estimation of groundwater flow and transport parameters, and uncertain initial and boundary conditions of the aquifer domain. In the identification of unknown groundwater pollution sources, both flow and pollution concentration data are generally utilized. Studies done so far using well-known methodologies, such as the embedding technique and the response matrix approach, for identification of pollution sources suffer from various limitations. The methodology reported in this study, however, uses a linked simulation-optimization approach. Here, the groundwater flow and pollution transport simulator is combined as an independent module with the conventional optimization method. The essential link between the simulator and the optimization method is the derivatives required by the optimization algorithm. The methodology is potentially applicable to large-scale aquifers and does not have some of the computational limitations of other methodologies, viz. the embedding approach and the response matrix technique.


Simulation-Driven Optimization and Modeling with Adjoint Sensitivities

Qingsha S. Cheng

McMaster University, Canada

Abstract: Surrogate-based techniques such as space mapping offer significant advantages over traditional gradient-based methods in the modeling and design of complex engineering systems. With the emergence of efficient adjoint sensitivity analysis in, for example, finite-element simulation software, what advantages remain? We review sensitivity-related topics in space mapping technology: aggressive space mapping, hybrid aggressive space mapping, partial and space derivative mapping, the general space mapping framework, etc. Then we present space mapping techniques for both modeling and optimization that exploit adjoint sensitivity analysis information recently made available in a commercial finite-element electromagnetic solver. Relevant surrogates are implemented and updated within appropriate circuit simulation software. Examples of microwave circuit modeling and design demonstrate our approach.


Forward and inverse surrogate modeling

Ivo Couckuyt

Ghent University, Belgium

Abstract: As prototype building is very costly, the use of computer simulations has become commonplace as a feasible alternative. However, due to the computational cost of these high-fidelity simulations, the use of surrogate modeling techniques has become indispensable. Surrogate models are compact and cheap to evaluate, and have proven very useful for tasks such as optimization, design space exploration, prototyping, and sensitivity analysis. Consequently, there is great interest in techniques that facilitate the construction of surrogate models while minimizing the computational cost and maximizing model accuracy. We present an algorithm that integrates adaptive modeling and adaptive sampling methods in order to generate an accurate approximation over the design space of interest with a minimum number of simulations. Moreover, the presented approach does not mandate assumptions (but does not preclude them either) about the problem, i.e., the simulator is viewed as a black box.


Multi-objective Design Using Variable Fidelity Geometry and co-Kriging

Alexander I.J. Forrester

University of Southampton, UK

Abstract: Predicting or measuring the output of complex systems is an important and challenging part of many areas of science. If multiple observations are required for parameter studies and optimization, accurate, computationally intensive predictions or expensive experiments are intractable. Studies involving trade-offs between multiple objectives are particularly demanding of resources. This paper looks at the use of Gaussian-process-based correlations to correct simple computer models with sparse data from more complex computer models in a multi-objective context. In essence, complex physics-based computer codes are replaced by fast, problem-specific, statistics-based codes. The methodology is demonstrated via the aerodynamic design of the rear wing of a racing car, where the optimal lift/drag trade-off is sought, using rear wing simulations calibrated to include the effects of the flow over the whole car.


Application of surrogate modeling in variation-aware macromodel and circuit design

Paul D. Franzon

North Carolina State University, USA

Abstract: This talk presents surrogate modeling as a solution for variation-aware macromodeling, circuit design, and device modeling. A scalable and high-fidelity IO buffer macromodel is created by integrating surrogate modeling with a physically-based model structure. Circuit performance surrogate models with design and variation parameters are efficient for design space exploration and performance yield analysis. Surrogate models of the main device characteristics are generated in order to assess the effects of variability in analog circuits. Most of the work to date has employed surrogate modeling to build response surfaces and fitted equations that permit the designer to make intelligent design choices. Future work will focus on using fitted coarse models together with fine models for optimization.


Simultaneous optimization and uncertainty quantification of model calibration parameters

Genetha Gray

Sandia National Laboratories, USA

Abstract: Model calibration refers to the process of inferring the values of model parameters so that the results of the simulations best match observed behavior.  It can both improve the predictive capability of the model and curtail the loss of information caused by using a numerical model instead of the actual system. At its heart is the comparison of experimental data and simulation results. Complicating this comparison is the fact that both data sets contain uncertainties which must be quantified in order to make reasonable comparisons.
Therefore, uncertainty quantification (UQ) techniques can be applied to identify, characterize, reduce, and, if possible, eliminate uncertainties. Incorporation of UQ into the calibration process can drastically improve the usefulness of computational models. Current approaches are serial in that first the calibration parameters are identified, and then a series of runs dedicated to UQ analysis is completed; in these runs, the calibrated parameter values are held fixed. Although this approach can be effective, it can be computationally expensive or produce incomplete results.
Model analysis that takes advantage of intermediate optimization iterates can reduce the expense, but the sampling done by the optimization algorithms is not ideal. In this talk, we will discuss a joint calibration and UQ approach that combines Bayesian statistical models and derivative-free optimization in order to monitor sensitivity information throughout the calibration process. We will describe the benefits of this approach and show some examples of its success. We will also discuss how this approach to optimization is being extended to problems associated with the electrical grid.


Genetic Programming Method for Modeling of Material Properties

Leo Gusel

University of Maribor, Slovenia

Abstract: Evolutionary computation (EC) is generating considerable interest for solving real engineering problems. EC methods are proving robust in delivering globally optimal solutions and helping to resolve limitations encountered in traditional methods. EC harnesses the power of natural selection to turn computers into optimization tools, and it is very applicable to diverse problems in the manufacturing industry. In this paper, an approach completely different from the conventional methods for the determination of accurate models of the change of material properties is presented. This approach is the genetic programming (GP) method, one of the evolutionary computation methods, based on the imitation of the natural evolution of living organisms. GP is a domain-independent method that genetically breeds a population of computer programs to solve a problem. Specifically, genetic programming iteratively transforms a population of computer programs into a new generation of programs by applying analogs of naturally occurring genetic operations. The main characteristic of GP is its non-deterministic way of computing. No assumptions about the form and size of expressions are made in advance; they are left to the self-organization and intelligence of the evolutionary process. The genetic programming method can automatically create, in a single run, a general (parameterized) solution to a problem in the form of a graphical structure whose nodes or edges represent components and where the parameter values of the components are specified by mathematical expressions containing free variables.
The computer programs in GP consist of function genes F = {arithmetical functions, Boolean functions, relation functions, etc.} and terminal genes T = {numerical constants, logical constants, variables, etc.}. By selected genes the evolution process tries to develop a solution that solves the problem best. The initial population is obtained with the creation of random computer programs consisting of the available function genes from the set F and the available terminal genes from the set T. The creation of the initial population is a blind random search for solutions in the huge space of possible solutions.
The next step is the calculation of the adaptation of individuals to the environment (i.e., calculation of fitness for each computer program). Fitness is a guideline for modifying the structures undergoing adaptation. In GP the computer programs change, in particular, through the genetic operations of reproduction, crossover and mutation. The reproduction operation gives a higher probability of selection to more successful organisms; they are copied unchanged into the next generation. The crossover operation ensures the exchange of genetic material between computer programs: two randomly selected parts (crossover fragments) of two parental organisms (i.e., parental models for material properties) are interchanged. In the mutation operation a randomly generated organism is inserted at a randomly selected place in the parental individual. The mutation operation increases the genetic diversity of the population.
In the paper an example of GP modeling of the material properties of a formed material is described. A great number of experiments and measurements were carried out. The values of the independent variables (effective strain, coefficient of friction) influence the value of the dependent variable. On the basis of the training data, different prediction models for different material properties were developed by GP. The study showed that the GP approach is suitable for system modeling. The obtained models differ in size, shape, complexity and precision. Only the best models obtained by genetic programming are presented in the paper. The accuracy of the best genetic models was verified with the testing data set.
The proposed GP method is general, so it can be successfully used for modeling different material properties and phenomena where experimental data on the process are available.
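As an illustration of the ingredients described above (function genes F, terminal genes T, fitness, reproduction, crossover and mutation), here is a minimal GP sketch for a toy symbolic-regression problem; the gene sets, target data and all parameter values are invented for illustration and are not from the paper:

```python
import random
import operator

# Function genes F and terminal genes T, as in the abstract (toy choices).
F = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]
T = ['x', 1.0, 2.0]

def random_tree(depth=3):
    """Blind random creation of a program, as in the initial population."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(T)
    return (random.choice(F), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    (fn, _), a, b = tree
    return fn(evaluate(a, x), evaluate(b, x))

def fitness(tree, data):
    """Sum of squared errors against the training data (lower is better)."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

def crossover(a, b):
    """Interchange a randomly chosen branch of a with a branch of b."""
    if not (isinstance(a, tuple) and isinstance(b, tuple)):
        return b
    new = list(a)
    new[random.choice([1, 2])] = b[random.choice([1, 2])]
    return tuple(new)

def mutate(tree):
    """Insert a randomly generated subtree at a random place."""
    if not isinstance(tree, tuple) or random.random() < 0.5:
        return random_tree(2)
    return (tree[0], mutate(tree[1]), tree[2])

def evolve(data, pop_size=30, gens=20, seed=1):
    random.seed(seed)
    pop = [random_tree() for _ in range(pop_size)]
    best = min(pop, key=lambda t: fitness(t, data))
    for _ in range(gens):
        nxt = [best]                       # reproduction: copy the best unchanged
        while len(nxt) < pop_size:
            child = crossover(*random.sample(pop, 2))
            if random.random() < 0.2:
                child = mutate(child)
            nxt.append(child)
        pop = nxt
        best = min(pop, key=lambda t: fitness(t, data))
    return best
```

Running `evolve` on data sampled from a known function returns the best expression tree found; as the abstract notes, repeated runs differ in the size, shape and precision of the evolved models.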


Aspects of adaptive hierarchical RBF metamodels for optimization

Nils Hornung

Fraunhofer-Institute for Algorithms and Scientific Computing SCAI, Germany

Abstract: Radial basis functions, among other techniques, are used to construct metamodels that approximate multi-objective expensive high-fidelity functions from a finite number of function evaluations (a design of experiments, DoE). Radial basis functions can be applied if the DoE covers the parameter space in an arbitrary though uniform manner. Since leave-one-out strategies allow for computing tolerance limits, the approximated value and tolerance can be interpreted as the expectation and variance of a random experiment. Thus, model improvement as described for Kriging models in the literature can in principle be applied to RBF-based metamodels, too. We describe our implementation of this approach and present a hierarchical metamodelling approach that deals with the specific problems that such metamodel adaptations pose to RBF-based models.
Overviews of metamodel improvement strategies for Kriging-based models can be found in the literature, including details on the efficient recursive adaptation of a DoE, taking the model prediction and variance into account. One idea is to optimize the so-called expected improvement. Keane et al. describe a number of controlling parameters in this context that arbitrate between the focus on optimization and on model quality. We present an application of this strategy to RBF-based metamodels with tolerances. Since such a recursive improvement yields a non-uniformly sampled final DoE, we examine a partition method for the construction of hierarchical metamodels and adapt it to the needs of a non-uniformly improved DoE. For smaller problems we suggest an alternative approach based on a clustering algorithm and present an implementation using k-means++. We review criteria for partitioning and clustering, specifically considering coverage error and, within the clusters, a uniform distribution. Finally, we briefly discuss implementation details and report upon first industrial test cases to validate our results.
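The expected-improvement criterion mentioned above can be written down compactly. The sketch below treats the metamodel prediction and its tolerance as the mean and standard deviation of a normal random variable, as the abstract suggests; the 1-D Gaussian RBF interpolant and the spread-of-leave-one-out-predictions tolerance are simplifications of our own, not the authors' implementation:

```python
import math
import numpy as np

def rbf_interpolate(X, y, eps=1.0):
    """1-D Gaussian RBF interpolant through the DoE points (X, y)."""
    K = np.exp(-eps * (X[:, None] - X[None, :]) ** 2)
    w = np.linalg.solve(K, y)
    return lambda x: np.exp(-eps * (x - X) ** 2) @ w

def loo_tolerance(X, y, x_new, eps=1.0):
    """Spread of leave-one-out predictions at x_new, used as a variance proxy."""
    preds = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        preds.append(rbf_interpolate(X[mask], y[mask], eps)(x_new))
    return float(np.std(preds))

def expected_improvement(mean, std, f_min):
    """EI of a N(mean, std^2) prediction over the current best value f_min."""
    if std <= 0.0:
        return max(f_min - mean, 0.0)
    z = (f_min - mean) / std
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (f_min - mean) * Phi + std * phi
```

Maximizing `expected_improvement` over candidate points then selects the next DoE sample, trading off predicted value against model uncertainty.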


Lessons learned during the development of the multidisciplinary aero engine design tool TERA2020 (Technoeconomic Environmental Risk Assessment for 2020)

Konstantinos Kyprianidis

Chalmers University of Technology, Sweden

Abstract: A Techno-economic, Environmental and Risk Assessment (TERA) approach during the conceptual and preliminary design process of complex mechanical systems will soon become the only affordable, and hence, feasible way of producing optimised and sound designs, if the whole spectrum of possible impacts (economic, environmental etc) is to be taken into account. To conceive and assess engines with minimum environmental impact and lowest cost of ownership in a variety of emission legislation scenarios, emissions taxation policies, fiscal and air traffic management environments, a TERA approach tool is required. TERA2020 for NEWAC is a software tool that spans aero engine conceptual design and preliminary design. It addresses major component design as well as system level performance for a whole aircraft application. It helps to automate part of the aero engine preliminary design process using a sophisticated explicit algorithm and a modular structure.
TERA2020 considers a large number of disciplines typically encountered in conceptual design, such as: engine performance, engine aerodynamic and mechanical design, aircraft design and aerodynamic performance, emissions prediction and environmental impact, engine and airframe noise, as well as production, maintenance and direct operating costs. Individually developed modules are integrated in an optimiser environment. A large amount of information is available after each design iteration, which can be used for many purposes such as technology impact assessment, sensitivity and parametric studies, multi-objective optimisation etc. TERA2020 minimises internal iterations in order to speed up the execution of individual engine designs by using an explicit algorithm. Environment constraints can be applied through the optimiser, to determine the acceptability/feasibility of each engine design, and then home in on the best engines according to user specified objective functions.
In the first half of the presentation various aspects of the development of the TERA2020 multi-disciplinary optimisation framework will be discussed. The second half of the presentation will deal with new insights gained through the use of TERA2020 while studying new engine concepts and technologies, focusing on results and their interpretation.


Response correction techniques for computationally efficient simulation-driven design optimization in microwave engineering

Slawomir Koziel

Reykjavik University, Iceland

Abstract: Contemporary microwave engineering design is heavily based on EM simulations. Simulation-driven design is a must for growing number of devices and systems for which theoretical (e.g., analytical) models are either not available or not sufficiently accurate to yield the design satisfying given performance requirements. Unfortunately, accurate numerical evaluation may be computationally expensive, particularly for complex structures. This makes straightforward approaches, such as employing a simulator directly in the optimization loop, impractical.
Computationally efficient EM-driven design optimization can be realized using physically-based surrogate models. More specifically, optimization of the CPU-intensive structure (the high-fidelity or fine model) is replaced by iterative updating and re-optimization of a computationally cheap low-fidelity (or coarse) model. The most successful approaches of this kind in microwave engineering include space mapping and simulation-based tuning.
In this talk, a few alternative surrogate-based techniques for microwave design optimization are discussed that are based on response correction of the coarse model. These methods include adaptive response correction, manifold mapping, and shape-preserving response prediction. All of these techniques are easy to implement; they require neither extraction of surrogate model parameters (typical for space mapping) nor any modification of the structure of interest (typical for tuning). Also, they can be extremely efficient in terms of yielding a satisfactory design at a low computational cost of a few EM simulations of the optimized structure. Application examples are provided.
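As a minimal illustration of what "response correction of the coarse model" means, the sketch below applies a simple additive correction in the style of output space mapping: the surrogate is the coarse model shifted so that it matches the fine model at the current design. The talk's techniques (adaptive response correction, manifold mapping, shape-preserving response prediction) are more sophisticated, and the toy models here are invented:

```python
# Additive response correction: s(x) = c(x) + (f(x0) - c(x0)),
# which has zero error at the current design x0.

def corrected_surrogate(coarse, fine, x0):
    """Build a surrogate from the coarse model and one fine-model evaluation."""
    shift = fine(x0) - coarse(x0)
    return lambda x: coarse(x) + shift

def sbo_step(coarse, fine, x0, candidates):
    """One surrogate-based step: optimize the corrected surrogate (here, by
    exhaustive search over a candidate set) instead of the fine model."""
    s = corrected_surrogate(coarse, fine, x0)
    return min(candidates, key=s)
```

Iterating `sbo_step` (rebuilding the correction at each new design) needs only one fine-model evaluation per iteration, which is where the "few EM simulations" efficiency comes from.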


Variable-fidelity aerodynamic shape optimization

Leifur Leifsson

Reykjavik University, Iceland

Abstract: A computationally efficient methodology for airfoil design optimization is presented. Our approach exploits a corrected physics-based low-fidelity surrogate that replaces, in the optimization process, an accurate but computationally expensive high-fidelity airfoil model. Correction of the low-fidelity model is achieved by aligning the airfoil surface pressure distribution with that of the high-fidelity model using a shape-preserving response prediction technique. We present several numerical applications to airfoil design for both transonic and high-lift conditions.


Rapid Surrogate-Based Design Optimization of Antennas Using Coarse-Discretization Simulation Data

Stanislav Ogurtsov

Reykjavik University, Iceland

Abstract: Antenna design can be challenging and time consuming due to the lack of accurate analytical models for many modern types of antennas, e.g., dielectric resonator antennas, ultra-wideband antennas, and planar Yagi antennas. For such antennas, the reflection and radiation responses are typically obtained through electromagnetic (EM) simulation, which can carry a high computational cost. On the other hand, antenna design can be posed as a simulation-driven optimization problem. Direct EM-based optimization of antennas, however, is impractical not only because it is CPU-intensive but also because it often fails due to poor analytical properties of the EM-based objective function. Many existing approaches exploit metaheuristic methods such as genetic algorithms or particle swarm optimizers, which are characterized by huge computational overhead. As a matter of fact, common antenna design practice relies on multiple EM simulations and parameter sweeps, where the design parameters are iteratively modified based on the experience of the engineer. Here, we discuss and demonstrate a number of rapid antenna design optimization techniques which are also feasible to implement. In these techniques the optimization burden is shifted to a surrogate model, a computationally cheap representation of the optimized structure. The presented techniques include space mapping combined with functional approximation of coarse-discretization simulation data, a variable-fidelity multi-model algorithm, the shape-preserving response prediction technique, and the adaptive design specification technique. In all these techniques, fast and reliable surrogate models are constructed from properly corrected coarse-discretization simulation data. The considered approaches yield reliable designs at computational costs corresponding to a few full-wave simulations of the antenna in question.


Infill sampling criteria for constrained surrogate-model-based global optimization

James Parr

University of Southampton, UK

Abstract: When surrogates are used to model constraints, locating the feasible optimum can be difficult. A new method based on Pareto optimal solutions has been introduced that aims to better balance exploitation and exploration of the objective and all the constraint functions. Further to this, by selecting model updates in close proximity to the constraint boundaries, the regions that are likely to contain the feasible optimum can be better modelled. The enhanced probability of feasibility is used to encourage the exploitation of constraint boundaries. Comparison of this Pareto-based method with and without the enhanced probability of feasibility reveals some promising results.
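For context, the standard constrained infill criterion that the enhanced probability of feasibility builds on multiplies expected improvement by the probability that each constraint surrogate is satisfied. The sketch below shows that baseline criterion, not the paper's enhanced variant:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_feasible(g_mean, g_std):
    """P[g(x) <= 0] when the constraint surrogate predicts N(g_mean, g_std^2)."""
    if g_std <= 0.0:
        return 1.0 if g_mean <= 0.0 else 0.0
    return norm_cdf(-g_mean / g_std)

def constrained_infill(ei, g_means, g_stds):
    """Expected improvement weighted by the feasibility of every constraint."""
    p = 1.0
    for m, s in zip(g_means, g_stds):
        p *= prob_feasible(m, s)
    return ei * p
```

Points near a constraint boundary have probability of feasibility near 0.5, so this product naturally concentrates updates where the feasible optimum is likely to lie.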


Model Hierarchies & Space Mapping Optimization

Rene Pinnau

University of Kaiserslautern, Germany

Abstract: Optimization in industry often requires information on the adjoint variables for very complex model equations. Typically, there is a whole hierarchy of models available, which allows one to balance computational effort against physical exactness. These models can be used in combination with space mapping techniques to speed up the convergence of optimization algorithms. In this talk we present three applications where this approach proved to be very successful. We will cover questions from semiconductor design, the control of particles in fluids, and shape optimization for filters.


Surrogate-Based Optimization of Climate Model Parameters

Malte Priess

Christian Albrechts University, Germany

Abstract: Understanding the oceanic CO2 uptake is of central importance for projections of climate change and oceanic ecosystems. The underlying models are governed by coupled systems of nonlinear parabolic partial differential equations for ocean circulation and transport of biogeochemical tracers. The aim is to minimize the misfit between parameter-dependent model output and given measurement data; since each misfit evaluation requires an expensive model simulation, the resulting optimization is computationally costly. Replacing the accurate model in focus by a computationally cheaper approximate model, the so-called surrogate, can significantly reduce this cost. Here we present recent research highlights of surrogate-based optimization methodologies applied to this class of coupled marine ecosystem models.


Derivative free optimization for inverse problems in reservoir characterization

Delphine Sinoquet

IFPEN, France

Abstract: The reservoir characterization inverse problem aims at building reservoir models consistent with available production and seismic data for better forecasting of the production of a field. The observed data (pressures, oil/water/gas rates at the wells, and 4D seismic data) are compared with simulated data to determine unknown petrophysical properties of the reservoir. The underlying optimization problem is usually formulated as the minimization of a least-squares objective function composed of two terms: the production data mismatch and the seismic data mismatch. In practice, this problem is often solved by nonlinear optimization methods, such as Sequential Quadratic Programming (SQP) methods with derivatives approximated by finite differences. In applications involving 4D seismic data, the use of the classical Gauss-Newton algorithm is often infeasible because the computation of the Jacobian matrix is CPU time consuming and its storage is impossible for large datasets like seismic-related ones. Consequently, this optimization problem requires dedicated techniques: derivatives are not available, the associated forward problems are CPU time consuming, and some constraints may be introduced to handle a priori information. We propose a derivative-free optimization method under constraints based on a trust-region approach coupled with local quadratic interpolating models of the cost function and of the nonlinear constraints. Results obtained with this method on a synthetic reservoir application with joint inversion of production data and 4D seismic data are presented. Its performance is compared with a classical SQP method (a quasi-Newton approach based on the classical BFGS approximation of the Hessian of the objective function, with derivatives approximated by finite differences) in terms of the number of simulations of the forward problem.
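A single iteration of the kind of trust-region, interpolation-based derivative-free method described above can be sketched in one dimension: fit a quadratic through sampled points, minimize it within the trust region, and accept or reject the step by comparing actual and predicted reduction. This is a didactic simplification, not IFPEN's method:

```python
import numpy as np

def quad_model(xs, fs):
    """Quadratic interpolating three sampled points (no derivatives needed)."""
    return np.polyfit(xs, fs, 2)          # coefficients [c2, c1, c0]

def minimize_in_region(coeffs, center, delta):
    """Minimizer of the quadratic model over [center - delta, center + delta]."""
    c2, c1, _ = coeffs
    lo, hi = center - delta, center + delta
    cands = [lo, hi]
    if c2 > 0:
        cands.append(min(max(-c1 / (2 * c2), lo), hi))   # clipped vertex
    return min(cands, key=lambda x: np.polyval(coeffs, x))

def tr_step(f, xs, delta):
    """One derivative-free trust-region step: build the model, try its
    minimizer, accept if actual reduction matches predicted reduction."""
    fs = [f(x) for x in xs]
    x_best = xs[int(np.argmin(fs))]
    coeffs = quad_model(xs, fs)
    x_new = minimize_in_region(coeffs, x_best, delta)
    pred = np.polyval(coeffs, x_best) - np.polyval(coeffs, x_new)
    actual = f(x_best) - f(x_new)
    rho = actual / pred if pred > 1e-12 else 0.0
    accept = rho > 0.1
    return (x_new if accept else x_best), (delta * 2 if rho > 0.75 else delta / 2)
```

Each step costs only forward-problem evaluations, which is exactly the budget the abstract compares against finite-difference SQP.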


Optimization in Climate Models

Thomas Slawig

Christian Albrechts University, Germany

Abstract: In climate models, many processes are not resolved exactly but are parametrized by more or less coarse approximations involving several parameters. Most of these parameters cannot be measured directly; they have to be estimated or identified using measurement data and nonlinear optimization methods. Besides these model calibration or validation problems, there are nowadays also applications where a direct optimization of parts of the climate system is under consideration. One of these geo-engineering applications, motivated by global climate change, is iron fertilization, which is currently being investigated experimentally. The talk presents typical examples of optimization problems and some solution methods, both in a discrete and in a more abstract analytical setting.

An Iterative Simulation and Optimisation Procedure to Increase Robustness of Production Plans

Hlynur Stefánsson

Reykjavik University, Iceland

Abstract: We propose a procedure to increase the robustness of production plans, embedded in a modelling approach based on integrated multi-scale optimization models and solution methods for planning and scheduling of a make-to-order production process under uncertain and varying demand conditions. Our inspiration is a large real-world problem originating from a complex pharmaceutical enterprise. The approach is based on a hierarchically structured moving-horizon algorithm. On each level of the algorithm we propose optimization models to support the relevant decisions, and the models are solved with decomposition heuristics. The levels differ in time scope, aggregation, update rate, and availability of data at the time they are applied. The maximum effective time horizon of the multi-scale approach is one year, and we use sales forecasts as input demands instead of actual orders, which are usually only available three months ahead, although raw materials need to be procured up to one year in advance. The sales forecasts have historically proven rather uncertain. To increase the robustness of our long-term plans we use an iterative procedure: the first step is to solve a MILP model based on the sales forecasts. In the next step we generate a number of alternative demand scenarios and run LP models to test the robustness of the MILP solution for each scenario. If the long-term production plan is feasible for enough of the demand scenarios, according to our robustness criterion, we use the current plan; if not, we adjust the demand input and re-run the MILP model iteratively until the robustness criterion is met. The demand samples are generated with tailor-made methods based on statistical error analysis.
The approach has been tested and implemented with industrial data from a pharmaceutical enterprise and has proved capable of obtaining realistic and profitable solutions within acceptable computational times.
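The iterative robustness loop described above can be sketched in outline. This is a toy illustration, not the talk's MILP/LP formulation: `solve_milp` and `feasible` are hypothetical stand-ins for the long-term MILP and the per-scenario LP check, and the scenario generator uses simple Gaussian noise in place of the tailor-made statistical error analysis:

```python
import random

def solve_milp(demand):
    # Stand-in for the long-term MILP: plan to produce the forecast quantity.
    return list(demand)

def feasible(plan, scenario):
    # Stand-in for the per-scenario LP test: the plan covers the scenario
    # if planned output meets demand in every period.
    return all(p >= d for p, d in zip(plan, scenario))

def robust_plan(forecast, n_scenarios=200, threshold=0.9,
                noise=0.1, inflate=1.05, max_iter=25, seed=1):
    rng = random.Random(seed)
    demand = list(forecast)
    plan = solve_milp(demand)
    for _ in range(max_iter):
        plan = solve_milp(demand)
        # Sample alternative demand scenarios around the forecast.
        scenarios = [[d * (1.0 + rng.gauss(0.0, noise)) for d in forecast]
                     for _ in range(n_scenarios)]
        share = sum(feasible(plan, s) for s in scenarios) / n_scenarios
        if share >= threshold:
            break  # robustness criterion met
        # Otherwise inflate the demand input and re-solve.
        demand = [d * inflate for d in demand]
    return plan
```

The key design choice mirrors the abstract: robustness is assessed empirically, by counting the fraction of sampled demand scenarios the current plan survives, rather than by solving a single stochastic program.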

Industrial applications of metamodel-based design optimization

Vassili Toropov

University of Leeds, UK

Abstract: The following issues related to hard design optimization problems are not yet adequately addressed in industrial applications:
•    Very large scale (1000+ design variables) optimization problems with computationally expensive (10+ hours) response function evaluation
•    Discrete optimization with even moderately expensive response functions
•    Optimization with stochastic responses
•    Issues of numerical noise and domain-dependent calculability of responses
•    Automated simplification of complex simulation models to enable the use of multifidelity optimization strategies
•    Lack of metamodel quality assurance
•    Multidisciplinary optimization
The presentation will discuss progress towards addressing these issues, with examples of recent industrial applications in aerospace and automotive engineering, and will attempt to identify industrial priorities for academic research.

Metaheuristic Optimization and Inverse Problems with Industrial Applications

Xin-She Yang

National Physical Laboratory, UK

Abstract: Many inverse problems involve the optimal estimation of physical parameters for a given set of observed data. As the number of degrees of freedom tends to be large, and non-unique solutions may exist due to incomplete data, inverse algorithms have to be specially tailored to a particular type of problem. Mathematically speaking, almost all inverse problems can be formulated as constrained optimization problems, and thus can in principle be solved using efficient optimization techniques. We will first describe the optimization formulation of inverse problems, and then solve them using metaheuristic algorithms such as particle swarm optimization (PSO) and the firefly algorithm. We will also discuss some case studies in industrial applications, including geophysical inversion of underground structures, the optimal radius of a loop heat pipe for microelectronics, and parameter estimation in nanoscale heat transfer. Finally, we will discuss topics for further study.
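Of the metaheuristics mentioned, PSO is the simplest to state. The following is a minimal global-best PSO sketch for a box-constrained minimization (the firefly algorithm follows a similar population structure); the function `pso` and its parameter defaults are illustrative, not the speaker's implementation:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over the box `bounds` with a basic global-best PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia plus pull towards personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val < gbest_val:
                    gbest_val, gbest = val, pos[i][:]
    return gbest, gbest_val
```

For an inverse problem, `f` would be the data-misfit objective and `bounds` the physically admissible parameter ranges, matching the constrained-optimization formulation described in the abstract.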

An Efficient Modelling-to-Generate-Alternatives Algorithm for Public Environmental Policy Formulation: Applying Co-Evolutionary Simulation-Optimization to Municipal Solid Waste Management

Julian Scott Yeomans

Abstract: In public policy formulation, it is generally preferable to create several quantifiably good alternatives that provide very different approaches to the particular situation. This is because public sector decision-making typically involves complex problems that are riddled with incompatible performance objectives and possess competing design requirements which are very difficult – if not impossible – to quantify and capture at the time supporting decision models are constructed. There are invariably unmodelled design issues, not apparent at the time of model construction, which can greatly impact the acceptability of the model's solutions. Furthermore, public environmental policy formulation problems often contain considerable stochastic uncertainty, and numerous stakeholders with irreconcilable perspectives are frequently involved. Consequently, it is preferable to generate several alternatives that provide multiple, disparate perspectives on the problem. These alternatives should possess near-optimal objective measures with respect to the known modelled objective(s), but be fundamentally different from each other in terms of the system structures characterized by their decision variables. By generating a set of very different solutions, it is hoped that some of these dissimilar alternatives may serve to satisfy the unmodelled objectives. This study provides a co-evolutionary simulation-optimization modelling-to-generate-alternatives approach that can be used to efficiently create multiple solution alternatives that satisfy required system performance criteria in highly uncertain environments and yet are maximally different in their decision space. This new stochastic approach is very computationally efficient, since it permits the simultaneous generation of good solution alternatives in a single computational run of the SO algorithm.
The efficacy and efficiency of this technique are specifically demonstrated using a waste management policy formulation case. Waste management systems provide an ideal setting for illustrating the modelling techniques used for such public environmental policy formulation, since they possess all of the prevalent incongruities and system uncertainties inherent in complex planning processes.
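The modelling-to-generate-alternatives principle (near-optimal in the objective, maximally different in decision space) can be illustrated with a small sketch. This is not the co-evolutionary simulation-optimization algorithm of the talk; it is a hypothetical random-sampling version with a greedy max-min dispersion rule, and `generate_alternatives` and its parameters are invented for illustration:

```python
import random

def generate_alternatives(f, sample, n_alts=3, tolerance=0.5, n_trials=4000, seed=0):
    """MGA sketch: keep candidate solutions whose objective is within
    `tolerance` of the best found, then pick the ones most distant from
    each other in decision space."""
    rng = random.Random(seed)
    candidates = [sample(rng) for _ in range(n_trials)]
    best = min(candidates, key=f)
    near_optimal = [x for x in candidates if f(x) <= f(best) + tolerance]

    def dist_to_set(x, ys):
        # Squared Euclidean distance to the nearest already-kept solution.
        return min(sum((a - b) ** 2 for a, b in zip(x, y)) for y in ys)

    kept = [best]
    for _ in range(n_alts - 1):
        # Greedy max-min dispersion over the near-optimal set.
        kept.append(max(near_optimal, key=lambda x: dist_to_set(x, kept)))
    return kept

# Toy use: structurally different near-optimal points of a quadratic objective.
alts = generate_alternatives(lambda x: (x[0] - 1) ** 2 + (x[1] - 1) ** 2,
                             lambda rng: [rng.uniform(0, 2), rng.uniform(0, 2)])
```

The returned alternatives all satisfy the objective tolerance yet sit far apart in decision space, which is what lets them hedge against the unmodelled objectives discussed above.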

Simulation-based Adjoint Sensitivity Analysis in Imaging and Detection

Yifan Zhang

Abstract: Recent developments in sensitivity-based imaging with microwave responses are presented. The aim is to uncover defects or abnormalities in an object whose normal (e.g., defect-free or healthy) state is known. Sensitivity-based imaging exploits prior knowledge of the incident field distributions in the object under test under all excitation conditions. These distributions usually become available through simulations, which include the object in its normal state as well as the imaging sensors. With this prior knowledge, the imaging procedure is fast enough to be performed in real time. Examples illustrate applications with circular sensor arrays, such as those used in microwave tomography, as well as planar raster scanning, as used in microwave holography. In both applications, the sensitivity and resolution limits are addressed.
