Dominique Bonvin
Professor Emeritus
dominique.bonvin@epfl.ch +41 21 693 38 43 http://la.epfl.ch/
Birth date: 17.04.1952
Fields of expertise
Process control
Process chemometrics
Biography
Dominique Bonvin is Professor and Director of the Automatic Control Laboratory of EPFL. He received his Diploma in Chemical Engineering from ETH Zürich, and his Ph.D. degree from the University of California, Santa Barbara. He worked in the field of process control for the Sandoz Corporation in Basel and with the Systems Engineering Group of ETH Zürich. He joined the EPFL in 1989, where his current research interests include modeling, control and optimization of dynamic systems. He served as Director of the Automatic Control Laboratory for the periods 1993-97, 2003-2007 and again since 2012, Head of the Mechanical Engineering Department in 1995-97 and Dean of Bachelor and Master Studies at EPFL for the period 2004-2011.
Publications
2018
Journal Articles
* Real-Time Optimizing Control of an Experimental Crosswind Power Kite
The contribution of this article is to propose and experimentally validate an optimizing control strategy for power kites flying crosswind. The control strategy provides both path control (stability) and path optimization (efficiency). The path following part of the controller is capable of robustly following a reference path, despite significant time delays, using position measurements only. The path-optimization part adjusts the reference path in order to maximize line tension. It uses a real-time optimization algorithm that combines off-line modeling knowledge and on-line measurements. The algorithm has been tested comprehensively on a small-scale prototype, and this article focuses on experimental results.
IEEE Transactions on Control Systems Technology. 2018. DOI : 10.1109/TCST.2017.2672404.
Conference Papers
* Real-Time Optimization of Uncertain Process Systems via Modifier Adaptation and Gaussian Processes
In the context of static real-time optimization, the use of measurements allows dealing with uncertainty in the form of plant-model mismatch and disturbances. Modifier adaptation (MA) is a measurement-based scheme that uses first-order corrections to the model cost and constraint functions so as to achieve plant optimality upon convergence. However, first-order corrections rely crucially on the estimation of plant gradients, which typically requires costly plant experiments.
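For readers unfamiliar with the scheme, the first-order corrections mentioned above can be summarized as follows (a generic sketch in standard modifier-adaptation notation, not taken from this particular paper):

```latex
% Modifier adaptation at iteration k: modified model-based problem and modifiers
\[
\begin{aligned}
u_{k+1} \in \arg\min_{u}\;\; & \Phi(u) + \big(\lambda_k^{\Phi}\big)^{\top}(u - u_k)\\
\text{s.t.}\;\; & G(u) + \varepsilon_k + \big(\lambda_k^{G}\big)^{\top}(u - u_k) \le 0,\\[4pt]
\varepsilon_k = G_p(u_k) - G(u_k), \quad
\lambda_k^{\Phi} &= \nabla\Phi_p(u_k) - \nabla\Phi(u_k), \quad
\lambda_k^{G} = \nabla G_p(u_k) - \nabla G(u_k),
\end{aligned}
\]
```

where Φ and G are the model cost and constraints and Φ_p, G_p their plant counterparts. The gradient modifiers λ are the quantities whose estimation requires plant experiments, which is the cost the Gaussian-process variant above seeks to reduce.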
2018-01-01. European Control Conference (ECC), Limassol, CYPRUS, Jun 12-15, 2018. p. 466-471. DOI : 10.23919/ECC.2018.8550397.
* Active Directional Modifier Adaptation with Trust Region - Application to Energy-Harvesting Kites
Many real-time optimization schemes maximize process performance by performing a model-based optimization. However, due to plant-model mismatch, the model-based solution is often suboptimal. In modifier adaptation, measurements are used to correct the model in such a way that the first-order necessary conditions of optimality are satisfied for the plant. However, performing experiments to obtain measurements can be costly. This paper uses a sensitivity analysis that allows making only partial corrections to the model, thereby relying on fewer experiments. Furthermore, this sensitivity analysis is of global nature, which ensures that the corrections are sufficient in the presence of large parametric uncertainties. However, since the corrections are still only locally valid, this paper proposes to control the update step length via a trust-region technique. The resulting algorithm is illustrated via the simulation of an energy-harvesting kite.
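The trust-region control of the update step can be illustrated with a generic acceptance rule (an illustrative sketch, not the authors' algorithm; `model_opt`, `plant_cost` and `model_cost` are hypothetical callables):

```python
import numpy as np

def trust_region_step(u_k, model_opt, plant_cost, model_cost, radius,
                      eta=0.1, shrink=0.5, grow=2.0, radius_max=1.0):
    """One trust-region-controlled RTO move (illustrative sketch).

    model_opt(u_k, radius) -> candidate input with ||u - u_k|| <= radius,
    plant_cost / model_cost -> scalar cost evaluations (plant = experiment).
    """
    u_cand = model_opt(u_k, radius)

    predicted = model_cost(u_k) - model_cost(u_cand)   # decrease predicted by the model
    actual = plant_cost(u_k) - plant_cost(u_cand)      # decrease observed on the plant
    rho = actual / max(predicted, 1e-12)               # agreement ratio

    if rho < eta:                       # model too optimistic: reject step, shrink region
        return u_k, shrink * radius
    if rho > 0.75 and np.isclose(np.linalg.norm(u_cand - u_k), radius):
        radius = min(grow * radius, radius_max)        # good agreement: expand region
    return u_cand, radius               # accept the step
```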
2018-01-01. European Control Conference (ECC), Limassol, CYPRUS, Jun 12-15, 2018. p. 2312-2317. DOI : 10.23919/ECC.2018.8550509.
* Enforcing Model Adequacy in Real-Time Optimization via Dedicated Parameter Adaptation
Iterative real-time optimization schemes that employ modifier adaptation add bias and gradient correction terms to the model that is used for optimization. These affine corrections lead to meeting the first-order necessary conditions of optimality of the plant despite plant-model mismatch. However, since the added terms do not include curvature information, satisfaction of the second-order sufficient conditions of optimality is not guaranteed, and the model might be deemed inadequate for optimization. In the context of modifier adaptation, this paper proposes to include a dedicated parameter-estimation step such that also the second-order optimality conditions are met at the plant optimum. In addition, we propose a procedure to select the best parameters to adapt based on a local sensitivity analysis. A simulation study dealing with product maximization in a fed-batch reactor demonstrates that the proposed scheme can both select the right parameters and determine their values such that modifier adaptation can drive the plant to optimality fast and without oscillations. (C) 2018, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
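For reference, the plant optimality conditions targeted by such schemes are the standard first- and second-order NLP conditions (generic notation):

```latex
\[
\begin{aligned}
&\nabla\Phi_p(u^*) + \mu^{\top}\nabla G_p(u^*) = 0, \qquad
\mu \ge 0, \quad G_p(u^*) \le 0, \quad \mu^{\top} G_p(u^*) = 0
&&\text{(first order, KKT)}\\
&d^{\top}\,\nabla^2_{uu}\mathcal{L}_p(u^*,\mu)\,d > 0
\quad \forall\, d \ne 0 \text{ in the critical cone},
\quad \mathcal{L}_p = \Phi_p + \mu^{\top} G_p
&&\text{(second order, sufficient)}
\end{aligned}
\]
```

Affine (bias and gradient) corrections can enforce the first line but leave the Hessian of the Lagrangian unchanged, which is why a dedicated parameter-adaptation step is introduced to also match the curvature.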
2018-01-01. 10th IFAC Symposium on Advanced Control of Chemical Processes (ADCHEM), Shenyang, PEOPLES R CHINA, Jul 25-27, 2018. p. 49-54. DOI : 10.1016/j.ifacol.2018.09.246.
* Adaptation strategies for tracking constraints under plant-model mismatch
Optimal operating conditions for a process plant are typically obtained via model-based optimization. However, due to modeling errors, the operating conditions found are often sub-optimal or, worse, they can violate critical process constraints. Hence, model corrections become a necessity and are done by exploiting measured process data. To this end, model parameters are adapted and/or correction terms are added to the model-based optimization problem. The modifier-adaptation methodology does the latter by adding bias and gradient correction terms that are called modifiers. The roles of modifiers and model parameters are often seen as competing, and which one of the two is better suited to track the optimality conditions is an open problem. This paper attempts to shed light on finding a synergy between the model parameters and the modifiers in the case when tracking constraints is sufficient for near-optimal performance. We demonstrate through the simulation study of a batch-to-batch optimization problem that a set of model parameters can be selected that mirror the role of modifiers. The modifiers are then added only when there is an insufficient number of mirror parameters for independent constraint tracking. (C) 2018, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
2018-01-01. 10th IFAC Symposium on Advanced Control of Chemical Processes (ADCHEM), Shenyang, PEOPLES R CHINA, Jul 25-27, 2018. p. 833-838. DOI : 10.1016/j.ifacol.2018.09.254.
Theses
* On enforcing the necessary conditions of optimality under plant-model mismatch - What to measure and what to adapt?
Industrial processes are run with the aim of maximizing economic profit while simultaneously meeting process-critical constraints. To this end, model-based optimization can be performed to ensure optimal plant operation. Usually, inevitable model inaccuracies are dealt with by collecting plant measurements at the current operating conditions in order to adapt the model parameters, followed by numerical re-optimization. This iterative two-step procedure often results in a sub-optimal solution, since the models are typically not designed for optimization. Modifier Adaptation (MA) is a Real-Time Optimization (RTO) technique that adds affine correction terms directly to the model. The affine corrections are parameterized by modifiers that are tailored to the optimization needs. This enables modifier adaptation to guarantee, upon convergence, that the optimality conditions of the modified model match those of the plant. However, computing the modifiers requires estimates of the plant gradients, which are obtained via expensive plant experiments. The experimental cost can be reduced by relying more on the model of the considered plant. For example, Directional Modifier Adaptation (DMA) relies on an offline local parametric sensitivity analysis of the gradient of the model Lagrangian, resulting in a reduced number of input directions that describe the gradient uncertainty in the model. Thereby, plant gradients are estimated only in a low-dimensional space of privileged input directions, which considerably reduces the experimental cost. However, local sensitivity analysis is often ineffective when the gradient of the model is considerably nonlinear in the parameters. This thesis proposes an online procedure based on global sensitivity analysis for finding the most promising privileged directions that adequately compensate for the model deficiencies in predicting the plant optimality conditions. The discovered privileged directions are such that, upon parametric perturbations, the gradient varies considerably along the privileged directions and only little along the remaining input directions. Consequently, the gradients of the model cost and constraints are corrected only along the privileged directions by adapting modifiers. The resulting methodology is named Active Directional Modifier Adaptation (ADMA). Several simulation studies show that the proposed approach reaches near-optimal conditions at a considerably reduced experimental cost. In addition, this thesis attempts to establish a direct relation between the optimality conditions and the parameters of a given model. Model parameters are analyzed to discover mirror parameters that mimic the behavior of modifiers in influencing the optimality conditions. It is proposed to adapt mirror parameters instead of modifiers, thereby combining the benefit of modifier adaptation in enforcing optimality conditions with that of parameter adaptation in terms of noise handling and convergence. Moreover, it is investigated how to establish synergies between privileged input directions and model parameters in order to reduce the experimental effort. The steady-state optimization of a simulated chemical process shows that the privileged directions and the selected parameters work together to reach near-optimal performance. Finally, a study on the power maximization of flying kites leads to the development of a trust-region-based ADMA method to better control the input step size.
Lausanne, EPFL, 2018. DOI : 10.5075/epfl-thesis-8803.
* Real-Time Optimization of Interconnected Systems via Modifier Adaptation, with Application to Gas-Compressor Stations
The process industries are characterized by a large number of continuously operating plants, for which optimal operation is of economic and ecological importance. Many industrial systems can be regarded as an arrangement of several subsystems, where outputs of certain subsystems are inputs to others. This gives rise to the notion of interconnected systems. Plant optimality is difficult to achieve when the model used in optimization is inaccurate or in the presence of process disturbances. However, in the presence of plant-model mismatch, optimal operation can be enforced via specific real-time optimization methods. Specifically, this thesis considers so-called modifier-adaptation schemes, which achieve plant optimality by direct incorporation of process measurements in the form of first-order corrections. As a first contribution, this thesis proposes a novel problem formulation for modifier adaptation. In particular, the formulation focuses on plants consisting of multiple interconnected subsystems and allows problem decomposition and the application of distributed optimization strategies. The underlying key idea is the use of measurements and global plant gradients in place of an interconnection model. As a second contribution, this thesis investigates modifier adaptation for interconnected systems relying on local gradients by using an interconnection model. We show that the use of local information in terms of model, gradients and measurements is sufficient to optimize the steady-state performance of the plant. Finally, we propose a distributed modifier-adaptation algorithm that, besides the interconnection model and local gradients, employs a coordinator. For this scheme, we prove feasible-side convergence to the plant optimum, where the coordinator ensures that the local optimal inputs computed for each subsystem are consistent with the interconnection model. The experimental effort necessary to estimate the plant gradients increases with the number of plant inputs and may become intractable, unreliable or even infeasible for large-scale interconnected systems. The proposed approaches that use the interconnection model and local gradients overcome this problem. As an application case study of industrial relevance, this thesis investigates the problem of optimal load-sharing for serial and parallel gas compressors. The aim of load-sharing optimization is to operate compressor units in an energy-efficient way while satisfying varying load demands. We show how the structure of both the parallel and serial compressor configurations can be exploited in the design of tailored modifier-adaptation algorithms based on efficient estimation of local gradients. Our findings show that the complexity of this estimation is independent of the number of compressors. In addition, we discuss gradient estimation for the case where the compressors are operating close to surge conditions, which induces discontinuities in the problem.
Lausanne, EPFL, 2018. DOI : 10.5075/epfl-thesis-8666.
* Concept of Variants and Invariants for Reaction Systems, with Application to Estimation, Control and Optimization
The concept of reaction variants and invariants for lumped reaction systems has been known for several decades. Its applications encompass model identification, data reconciliation, state estimation and control using kinetic models. In this thesis, the concept of variants and invariants is extended to distributed reaction systems and used to develop new applications to estimation, control and optimization. The thesis starts by reviewing the material and heat balances and the concept of variants and invariants for several lumped reaction systems. Different definitions of variants and invariants, in particular the vessel extents, are presented for the case of homogeneous reaction systems, and transformations to variants and invariants are obtained. The extension to systems with heat balance and mass transfer is also reviewed. The concept of extents is generalized to distributed reaction systems, which include many processes involving reactions that are described by partial differential equations. The concept of extents and the transformation to extents are detailed for various configurations of tubular reactors and reactive separation columns, as well as for a more generic framework that is independent of the configuration. New developments of the extent-based incremental approach for model identification are presented. The approach, which compares experimental and modeled extents, results in maximum-likelihood parameter estimation if the experimental extents are uncorrelated and the modeled extents are unbiased. Furthermore, the identification problem can be reformulated as a convex optimization problem that is solved efficiently to global optimality. The estimation of unknown rates without the knowledge or the identification of the rate models is described. This method exploits the fact that the variants computed from the available measurements allow isolating the different rates. Upon using a Savitzky-Golay filter for differentiation of variants, one can show that the resulting rate estimator is optimal and obtain the error and variance of the rate estimates. The use of variants and invariants for reactor control is also considered. Firstly, offset-free control via feedback linearization is implemented using kinetic models. Then, it is shown how rate estimation can be used for control via feedback linearization without kinetic models. With an appropriately designed outer-loop feedback controller, the expected values of the controlled variables converge exponentially to their setpoints. This thesis also presents an approach to speed up steady-state optimization, which takes advantage of rate estimation without rate models to accelerate the estimation of the steady state of imperfectly known dynamic systems with fast and slow states. Since one can use feedback control to speed up convergence of the fast part, rate estimation allows estimating the steady state of the slow part during transient operation. The application to dynamic optimization is also shown. Adjoint-free optimal control laws are computed for all the types of arcs in the solution. In the case of reactors, the concept of extents allows the symbolic computation of optimal control laws in a systematic way. A parsimonious input parameterization is presented, which approximates the optimal inputs well with few parameters. For each arc sequence, the optimal parameter values are computed via numerical optimization. The theoretical results are illustrated by simulated examples of reaction systems.
Lausanne, EPFL, 2018. DOI : 10.5075/epfl-thesis-8655.
2017
Journal Articles
* Special Issue "Real-Time Optimization" of Processes
Processes. 2017. DOI : 10.3390/pr5020027.
* Identification of Biokinetic Models Using the Concept of Extents
The development of a wide array of process technologies to enable the shift from conventional biological wastewater treatment processes to resource recovery systems is matched by an increasing demand for predictive capabilities. Mathematical models are excellent tools to meet this demand. However, obtaining reliable and fit-for-purpose models remains a cumbersome task due to the inherent complexity of biological wastewater treatment processes. In this work, we present a first study in the context of environmental biotechnology that adopts and explores the use of extents as a way to simplify and streamline the dynamic process modeling task. In addition, the extent-based modeling strategy is enhanced by optimal accounting for nonlinear algebraic equilibria and nonlinear measurement equations. Finally, a thorough discussion of our results explains the benefits of extent-based modeling and its potential to turn environmental process modeling into a highly automated task.
Environmental Science & Technology. 2017. DOI : 10.1021/acs.est.7600250.
* A feasible-side globally convergent modifier-adaptation scheme
In the context of static real-time optimization (RTO) of uncertain plants, the standard modifier-adaptation scheme consists in adding first-order correction terms to the cost and constraint functions of a model-based optimization problem. If the algorithm converges, the limit is guaranteed to be a KKT point of the plant. This paper presents a general RTO formulation, wherein the cost and constraint functions belong to a certain class of convex upper-bounding functions. It is demonstrated that this RTO formulation enforces feasible-side global convergence to a KKT point of the plant. Based on this result, a novel modifier-adaptation scheme with guaranteed feasible-side global convergence is proposed. In addition to the first-order correction terms, quadratic terms are added in order to convexify and upper-bound the cost and constraint functions. The applicability of the approach is demonstrated on a constrained variant of the Williams-Otto reactor, for which standard modifier adaptation fails to converge in the presence of plant-model mismatch. (C) 2017 Elsevier Ltd. All rights reserved.
Journal of Process Control. 2017. DOI : 10.1016/j.jprocont.2017.02.013.
* On turnpike and dissipativity properties of continuous-time optimal control problems
This paper investigates the relations between three different properties, which are of importance in optimal control problems: dissipativity of the underlying dynamics with respect to a specific supply rate, optimal operation at steady state, and the turnpike property. We show in a continuous-time setting that if along optimal trajectories a strict dissipation inequality is satisfied, then this implies optimal operation at this steady state and the existence of a turnpike at the same steady state. Finally, we establish novel converse turnpike results, i.e., we show that the existence of a turnpike at a steady state implies optimal operation at this steady state and dissipativity with respect to this steady state. We draw upon a numerical example to illustrate our findings. (C) 2017 Elsevier Ltd. All rights reserved.
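In standard notation, the strict dissipation inequality referred to above reads as follows (with storage function S ≥ 0, stage cost ℓ, steady-state pair (x̄, ū) and a class-K function α; a generic statement, not copied from the paper):

```latex
\[
S\big(x(t_1)\big) - S\big(x(t_0)\big) \;\le\;
\int_{t_0}^{t_1} \Big[\, \ell\big(x(t),u(t)\big) - \ell(\bar{x},\bar{u})
- \alpha\big(\lVert x(t) - \bar{x}\rVert\big) \Big]\, \mathrm{d}t
\qquad \text{along all admissible trajectories.}
\]
```

The turnpike property then states that optimal trajectories spend all but a bounded amount of time in a neighborhood of (x̄, ū), irrespective of the horizon length.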
Automatica. 2017. DOI : 10.1016/j.automatica.2017.03.012.
* Semi-analytical Solutions for Tubular Chemical Reactors
The one-dimensional tubular reactor model with advection and possibly axial diffusion is the classical model of distributed chemical reaction systems. This system is described by partial differential equations that depend on the time <i>t</i> and the spatial coordinate <i>z</i>. In this article, semi-analytical solutions to these partial differential equations are developed regardless of the complexity of their initial and boundary conditions and reaction kinetics. These semi-analytical solutions can be used to analyze the effect on the concentrations at the current coordinates <i>z</i> and <i>t</i> of (i) the initial and boundary conditions, and (ii) the reactions that took place at an earlier time. A case study illustrates the application of these results to tubular reactors for the two cases, without and with diffusion.
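The underlying model is the classical advection–(axial) diffusion–reaction PDE; in concentration form it can be written as (notation mine):

```latex
\[
\frac{\partial c}{\partial t}(z,t) =
-\,v\,\frac{\partial c}{\partial z}(z,t)
+ D\,\frac{\partial^{2} c}{\partial z^{2}}(z,t)
+ \mathbf{N}^{\top} r\big(c(z,t)\big),
\qquad 0 \le z \le L,\;\; t \ge 0,
\]
```

with c the vector of concentrations, v the axial velocity, D the axial diffusion coefficient (D = 0 for plug flow), N the stoichiometric matrix and r(c) the reaction rates; initial and boundary conditions (e.g. Danckwerts conditions when D > 0) complete the formulation.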
Chemical Engineering Science. 2017. DOI : 10.1016/j.ces.2017.06.008.
* Shape constrained splines as transparent black-box models for bioprocess modeling
Empirical model identification for biological systems is a challenging task due to the combined effects of complex interactions, nonlinear effects, and lack of specific measurements. In this context, several researchers have provided tools for experimental design, model structure selection, and optimal parameter estimation, often packaged together in iterative model identification schemes. Still, one often has to rely on a limited number of candidate rate laws such as Contois, Haldane, Monod, Moser, and Tessier. In this work, we propose to use shape-constrained spline functions as a way to reduce the number of candidate rate laws to be considered in a model identification study, while retaining or even expanding the explanatory power in comparison to conventional sets of candidate rate laws. The shape-constrained rate laws exhibit the flexibility of typical black-box models, while offering a transparent interpretation akin to conventionally applied rate laws such as Monod and Haldane. In addition, the shape-constrained spline models lead to limited extrapolation errors despite the large number of parameters. (C) 2017 Elsevier Ltd. All rights reserved.
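As a minimal illustration of the idea, the sketch below fits a monotonically non-decreasing cubic B-spline to synthetic Monod-like rate data; the data, knot placement and the particular shape constraint are assumptions made for illustration, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import lsq_linear

# Hypothetical noisy measurements of a growth rate vs. substrate concentration
rng = np.random.default_rng(0)
s = np.linspace(0.0, 10.0, 60)
r_meas = 0.8 * s / (1.5 + s) + 0.02 * rng.standard_normal(s.size)  # Monod-like + noise

# Clamped cubic B-spline basis on [0, 10]
k = 3
knots = np.r_[[0.0] * k, np.linspace(0.0, 10.0, 8), [10.0] * k]
n_coef = len(knots) - k - 1
B = np.column_stack([BSpline(knots, np.eye(n_coef)[i], k)(s) for i in range(n_coef)])

# Monotonicity: non-decreasing B-spline coefficients imply a non-decreasing spline.
# Reparameterize c = L d with d[0] free and d[1:] >= 0 (L = lower-triangular ones).
L = np.tril(np.ones((n_coef, n_coef)))
lb = np.r_[-np.inf, np.zeros(n_coef - 1)]
res = lsq_linear(B @ L, r_meas, bounds=(lb, np.full(n_coef, np.inf)))

rate_spline = BSpline(knots, L @ res.x, k)   # shape-constrained "rate law"
print(rate_spline(np.array([0.5, 2.0, 8.0])))
```

Other shape constraints (concavity, saturation) lead to similar linear restrictions on the spline coefficients.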
Computers & Chemical Engineering. 2017. DOI : 10.1016/j.compchemeng.2016.12.017.
* HOLiCOW - IV. Lens mass model of HE 0435-1223 and blind measurement of its time-delay distance for cosmology
Strong gravitational lenses with measured time delays between the multiple images allow a direct measurement of the time-delay distance to the lens, and thus a measure of cosmological parameters, particularly the Hubble constant, H<sub>0</sub>. We present a blind lens model analysis of the quadruply imaged quasar lens HE 0435-1223 using deep Hubble Space Telescope imaging, updated time-delay measurements from the COSmological MOnitoring of GRAvItational Lenses (COSMOGRAIL), a measurement of the velocity dispersion of the lens galaxy based on Keck data, and a characterization of the mass distribution along the line of sight. HE 0435-1223 is the third lens analysed as a part of the H0 Lenses in COSMOGRAIL's Wellspring (HOLiCOW) project. We account for various sources of systematic uncertainty, including the detailed treatment of nearby perturbers, the parametrization of the galaxy light and mass profile, and the regions used for lens modelling. We constrain the effective time-delay distance to be D<sub>Δt</sub> = 2612<sub>-191</sub><sup>+208</sup> Mpc, a precision of 7.6 per cent. From HE 0435-1223 alone, we infer a Hubble constant of H<sub>0</sub> = 73.1<sub>-6.0</sub><sup>+5.7</sup> km s<sup>-1</sup> Mpc<sup>-1</sup> assuming a flat ΛCDM cosmology. The cosmographic inference based on the three lenses analysed by HOLiCOW to date is presented in a companion paper (HOLiCOW Paper V).
Monthly Notices of the Royal Astronomical Society. 2017. DOI : 10.1093/mnras/stw3077.
* Generalization of the Concept of Extents to Distributed Reaction Systems
In the chemical industry, a large class of processes involving reactions can be described by partial differential equations that depend on time and on one or more spatial coordinates. Examples of such distributed reaction systems are tubular reactors and reactive separation columns. As in lumped reaction systems, the interaction between the different dynamic effects (reactions, mass and heat transfers, and inlet and outlet flows) complicates the analysis and operation of distributed reaction systems. In this article, the concept of extents, which has been applied to decouple the effects of dynamic processes in lumped reaction systems with one or multiple phases, is generalized to distributed reaction systems. The concept of extents and a linear transformation to extents are detailed for various configurations of tubular reactors and reactive separation columns, as well as for a more generic framework that is independent of the configuration and operating conditions. The application of extents to distributed reaction systems is illustrated through several case studies that show how the effect of each dynamic process can be expressed in terms of a corresponding extent.
Chemical Engineering Science. 2017. DOI : 10.1016/j.ces.2017.05.051.
* Dynamic Optimization of Constrained Semi-Batch Processes Using Pontryagin’s Minimum Principle – An Effective Quasi-Newton Approach
This work considers the numerical optimization of constrained batch and semi-batch processes, for which direct as well as indirect methods exist. Direct methods are often the methods of choice, but they exhibit certain limitations related to the compromise between feasibility and computational burden. Indirect methods, such as Pontryagin’s Minimum Principle (PMP), reformulate the optimization problem. The main solution technique is the shooting method, which however often leads to convergence problems and instabilities caused by the integration of the co-state equations forward in time. This study presents an alternative indirect solution technique. Instead of integrating the states and co-states simultaneously forward in time, the proposed algorithm parameterizes the inputs, and integrates the state equations forward in time and the co-state equations backward in time, thereby leading to a gradient-based optimization approach. Constraints are handled by indirect adjoining to the Hamiltonian function, which allows meeting the active constraints explicitly at every iteration step. The performance of the solution strategy is compared to direct methods through three different case studies. The results show that the proposed PMP-based quasi-Newton strategy is effective in dealing with complicated constraints and is quite competitive computationally.
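The quantities involved in the proposed indirect, gradient-based scheme are the standard PMP ingredients, sketched here for a cost J = φ(x(t_f)) + ∫ L dt with parameterized inputs u(t; π) and path constraints g adjoined to the Hamiltonian (notation mine):

```latex
\[
\begin{aligned}
H(x,u,\lambda,\mu) &= L(x,u) + \lambda^{\top} f(x,u) + \mu^{\top} g(x,u),\\
\dot{x} &= f(x,u), \qquad x(0) = x_0 &&\text{(integrated forward)},\\
\dot{\lambda}^{\top} &= -\,\frac{\partial H}{\partial x}, \qquad
\lambda^{\top}(t_f) = \frac{\partial \phi}{\partial x}\bigg|_{t_f} &&\text{(integrated backward)},\\
\nabla_{\pi} J &= \int_{0}^{t_f}
\Big(\frac{\partial u}{\partial \pi}\Big)^{\top}
\Big(\frac{\partial H}{\partial u}\Big)^{\top} \mathrm{d}t
&&\text{(gradient for the quasi-Newton update)}.
\end{aligned}
\]
```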
Computers and Chemical Engineering. 2017. DOI : 10.1016/j.compchemeng.2017.01.019.
* Data Reconciliation for Reaction Systems Using Extents and Shape Constraints
Concentrations measured during the course of reactions are typically corrupted by random noise. Data reconciliation techniques improve the accuracy of measurements by using redundancies in the material and energy balances expressed as relationships between measurements. Since in the absence of kinetic models these relationships cannot integrate information regarding past measurements, they are expressed in the form of algebraic constraints. This paper shows that, even in the absence of a kinetic model, one can use shape constraints to relate measurements at different time instants, thereby improving the accuracy of reconciled estimates. The construction of shape constraints depends on the operating mode of the chemical reactor. Moreover, it is shown that the representation of the reaction system in terms of extents helps identify additional shape constraints. A procedure for deriving shape constraints from measurements is also described. Data reconciliation using numbers of moles and extents is illustrated via a simulated case study.
Computers and Chemical Engineering. 2017. DOI : 10.1016/j.compchemeng.2017.02.003.
* Crosswind Kite Control - A Benchmark Problem for Advanced Control and Dynamic Optimization
This article presents a kite control and optimization problem intended as a benchmark problem for advanced control and optimization. We provide an entry point to this exciting renewable energy system for researchers in control and optimization methods looking for a realistic test bench, and/or a useful application case for their theory. The benchmark problem in this paper can be studied in simulation, and a complete Simulink model is provided to facilitate this. The simulated scenario, which reproduces many of the challenges presented by a real system, is based on experimental studies from the literature, industrial data and the first author’s own experience in experimental kite control. In particular, an experimentally validated wind turbulence model is included, which subjects the kite to realistic disturbances. The benchmark problem is that of controlling a kite such that the average line tension is maximized. Two different models are provided: A more comprehensive one is used to simulate the ’plant’, while a simpler ’model’ is used to design and implement control and optimization strategies. This way, uncertainty is present in the form of plant-model mismatch. The outputs of the plant are corrupted by measurement noise. The maximum achievable average line tension for the plant is calculated, which should facilitate the performance comparison of different algorithms. A simple control strategy is implemented on the plant and found to be quite suboptimal, even if the free parameters of the algorithm are well tuned. An open question is whether or not more advanced control algorithms could do better.
European Journal of Control. 2017. DOI : 10.1016/j.ejcon.2017.03.003.
Conference Papers
* Constrained multi-rate state estimator incorporating delayed measurements
Frequent and accurate concentration estimates are important for the on-line control and optimization of chemical reaction systems. Such estimates can be obtained using state estimation methods that fuse frequent (fast) delay-free on-line measurements with infrequent (slow) delayed laboratory measurements. In this paper, we demonstrate how several recent advances made in state estimation can be combined in an on-line recursive state estimation framework by imposing knowledge-based and measurement-based constraints on the state estimates of multi-rate concentration measurements with time-varying time delays. This framework is illustrated using a simulated example for a bacterial batch fermentation of recombinant L. lactis. It is shown that an extent-based formulation gives more accurate estimates than a conventional concentration-based formulation.
2017. 21st International Conference on Process Control (PC), Strbske Pleso, Slovakia, June 06-09, 2017. p. 358-363. DOI : 10.1109/PC.2017.7976240.
* Reaction extents: A Divide-and-Conquer Approach for Kinetic Model Identification
Obtaining reliable wastewater treatment process models is critical for the application of model-based design, operation, and automation. For example, Masic et al. (2014) explored the use of an observer designed for nonlinear processes to estimate nitrite in a biological urine nitrification process. In this process, anthropogenic urine is used as a resource for the production of a fertilizer (Udert & Wächter, 2012). Thanks to the separated collection and treatment of urine via NoMix toilets (Larsen et al., 2001), the majority of the nitrogen and phosphorus released via human excreta is captured. The urine nitrification step has two purposes: to prevent (i) volatilization of ammonia by reducing the pH and (ii) production of malodourous compounds. If successful, one can store nitrified urine for long periods of time. <br><br> The urine nitrification process operates at fairly high conversion rates and is prone to three important failures. The first failure is caused by inhibition of the ammonia oxidizing bacteria (AOB) at high free ammonia concentrations and can lead to washout of AOB as well as the nitrite oxidizing bacteria (NOB). The second failure is caused by growth of acid-tolerant AOB and causes the pH to decrease to a level where the NOB are inhibited and undesired chemical reactions occur. The third failure appears when a temporary accumulation of nitrite causes NOB inhibition, thereby reducing their activity. Such a nitrite accumulation can lead to an irrecoverable failure if the nitrite is allowed to accumulate to high levels (above 50 mg N/L). The first and second failures are mitigated easily by maintaining a safe pH via manipulation of the urine feed flow rate. The third failure is more difficult to avoid and requires a timely detection of nitrite. Masic et al. (2014) provided successful preliminary tests with a model-based observer, which highly depends on the availability of a reliable model. <br><br> It is unlikely that standard parameter values apply due to the high-strength nature of human urine. For this reason, a well-calibrated model is desired. In Masic et al. (2016b) parameters were estimated to global optimality for the nitrite oxidation by NOB. The applied method, however, allows only estimating parameters of a single reaction system. To apply the same optimization method to multivariate processes, an extent-based methodology was tested in silico in Masic et al. (2016a). By means of the computation of reaction extents, one can separate the estimation of the parameters for each individual reaction. This extent-based modelling method however requires as many measured variables as the number of reactions (Rodrigues et al., 2015). For this reason, Masic et al. (2016a) simplified the model identification problem by considering a constant biomass, i.e. a net biomass growth equal to zero for both AOB and NOB. In the present study, the extent-based model identification method is modified to avoid this simplification, while allowing the application of the globally optimal parameter estimation procedure developed in Masic et al. (2016b). At the same time, the resulting model identification method is tested with experimental data for the first time <br><br> - Larsen T A, Peters I, Alder A, Eggen R, Maurer M, Muncke J (2001). Peer reviewed: re-engineering the toilet for sustainable wastewater management. Env. Sci. Technol., 35, 192A-197A. <br> - Masic A, Villez K (2014). 
Model-based observers for monitoring of a biological nitrification process for decentralized wastewater treatment – Initial results. 2nd IWA Specialized International Conference Ecotechnologies for Wastewater Treatment (EcoSTP2014), Verona, Italy, June 23–25, 2014, 402–405. <br> - Masic A, Srinivasan S, Billeter J, Bonvin D, Villez K (2016a). Biokinetic model identification via extents of reaction. 5th IWA/WEF Wastewater Treatment Modelling Seminar (WWTmod2016), Annecy, France, April 2-6, 2016, appeared on USB-stick. <br> - Masic A, Udert K, Villez K (2016b). Global parameter optimization for biokinetic modeling of simple batch experiments. Environ. Modell. and Softw., 85, 356-373. <br> - Rodrigues D, Srinivasan S, Billeter J, Bonvin D (2015). Variant and invariant states for chemical reaction systems. Comp. Chem. Eng., 73, 23-33. <br> - Udert K M, Wächter M (2012). Complete nutrient recovery from source-separated urine by nitrification and distillation. Wat. Res., 46, 453-464.
2017. Frontiers International Conference on Wastewater Treatment (FICWTM), Palermo (Italy), May 21-24, 2017.
* Global Identification of Kinetic Parameters via the Extent-based Incremental Approach
The identification of reaction kinetics represents the main challenge in building models for reaction systems. The identification task can be performed via either simultaneous model identification (SMI) or incremental model identification (IMI), the latter using either the differential (rate-based) or the integral (extent-based) method of parameter estimation. This contribution presents an extension of extent-based IMI that guarantees convergence to globally optimal parameters. <br><br> In SMI, a rate law must be postulated for each reaction, and the model concentrations are obtained by integration of the balance equations. The procedure must be repeated for all combinations of rate candidates. This approach is computationally costly when there are several candidates for each reaction, and convergence problems may arise due to the large number of parameters. <br><br> In IMI, the identification task is decomposed into several sub-problems, one for each reaction [1]. Since IMI deals with one reaction at a time, only the rate candidates for that reaction need to be compared. In addition, convergence is facilitated by the fact that only the parameters of a single reaction rate are estimated. In rate-based IMI, the parameters are estimated by fitting the simulated rates to the experimental rates obtained by differentiation of measured concentrations. In extent-based IMI, the simulated rates are integrated to yield extents, and the parameters are estimated by fitting the simulated extents to experimental extents obtained by transformation of measured concentrations [2]. The simulated rates are functions of concentrations. Hence, since each reaction is simulated individually, the simulated rates must be computed from measured concentrations. <br><br> Most parameter estimation methods converge to local optimality, which may result in an incorrect model. It turns out that extent-based IMI is particularly suited to global optimization since each estimation sub-problem (i) involves only a small set of parameters, and (ii) can be rearranged as an algebraic problem, where the objective function is polynomial in the parameters with coefficients computed only once prior to optimization using a Taylor expansion. These features facilitate the task of finding a global optimum for each reaction. Instead of the classical branch-and-bound approach, this technique relies on reformulating the estimation problem as a convex optimization problem, taking advantage of the equivalence of nonnegative polynomials and conical combination of sum-of-squares polynomials on a compact set to solve the problem as a semidefinite program [3]. <br><br> A simulated example of an identification problem with several local optima shows that extent-based IMI can be used to converge quickly to globally optimal parameters. <br><br> <b>References:</b> <br><br> [1] Bhatt et al., Chem. Eng. Sci., 2012, 83, p. 24 <br> [2] Rodrigues et al., Comput. Chem. Eng., 2015, 73, p. 23 <br> [3] Lasserre, SIAM J. Optim., 2001, 11(3), p. 796
2017. 27th European Symposium on Computer Aided Process Engineering (ESCAPE) - 10th World Congress of Chemical Engineering, Barcelona (Spain), October 1-5, 2017. p. 2119-2124. DOI : 10.1016/B978-0-444-63965-3.50355-X.
* Optimal Control Laws for Batch and Semi-batch Reactors Using the Concept of Extents
2017. 109th Annual Meeting of the American Institute of Chemical Engineers (AIChE), Minneapolis, Minnesota (USA), October 29 - November 3, 2017.
* Generalized Incremental Model Identification for Chemical Reaction Systems
Identification of kinetic models and estimation of kinetic parameters in chemical reaction systems can be done using Incremental Model Identification (IMI). By using IMI, it is possible to separate the effect of the different reactions and thus investigate each reaction individually. In contrast, with simultaneous approaches, it is necessary to work with a complete model that includes a rate candidate for each reaction, which might lead to a large number of possible model combinations. Hence, IMI allows faster computation of the identified models and estimated parameters [1]. There exist essentially two main approaches for IMI: extent-based IMI and rate-based IMI. In extent-based IMI, reaction rates are integrated to yield extents, and the parameters are estimated via least squares by fitting these simulated extents to experimental extents obtained from measured concentrations [2]; in rate-based IMI, the parameters are estimated via least squares by fitting simulated rates to experimental rates obtained by differentiation of measured concentrations [3]. <br><br> This contribution proposes a generalized IMI method that offers much more flexibility in the use of measurements, particularly in the way the various measurements are weighted. The parameters are estimated via weighted least squares by comparing simulated and experimental extents. The peculiarity consists in comparing extent values not only at the measurement points but for all possible time intervals between measurement points. Then, it can be shown that both the extent-based and rate-based IMI can be reformulated as particular cases of this generalized method. For example, the extent-based method would correspond to positive and equal weights for all time intervals that start at time zero, while the rate-based method would correspond to positive and equal weights for all time intervals with a length of one sampling period. This reformulation allows the investigation of new approaches by testing compromises between different methods, which can potentially result in a better IMI method. <br><br> With such a generalized method, it is also possible to test if there is an optimal weight distribution or, more generally, if there are important features in the weights to best perform model identification. The effect of the weight distribution on (i) the accuracy and precision of the parameters, and (ii) the model discrimination power can be investigated via different optimization methods, such as classic gradient-based algorithms or genetic algorithms. The different directions followed to find the best weight distribution are illustrated with simulated examples, and these results are compared to extent-based and rate-based IMI. <br><br> [1] Bhatt et al., <i>Chem. Eng. Sci.</i>, <b>2012</b>, 83, 24-38 <br> [2] Bhatt et al., <i>Ind. & Eng. Chem. Res.</i>, <b>2011</b>, 50, 12960-12974 <br> [3] Brendel et al., <i>Chem. Eng. Sci.</i>, <b>2006</b>, 61, 5404-5420
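In symbols, the generalized weighted estimation problem outlined above can be written as (notation mine; x̃ denotes the experimental extent of the reaction under study, c̃ the measured concentrations and H the number of measurement points):

```latex
\[
\hat{\theta} = \arg\min_{\theta}
\sum_{0 \le i < j \le H} w_{ij}
\Big[\, \tilde{x}(t_j) - \tilde{x}(t_i)
- \int_{t_i}^{t_j} r\big(\tilde{c}(t);\theta\big)\, \mathrm{d}t \Big]^{2},
\]
```

where extent-based IMI corresponds to positive, equal weights only for intervals starting at t_0, and rate-based IMI to positive, equal weights only for intervals of one sampling period.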
2017. Annual Meeting of the Swiss Chemical Society (SCS), Bern (Switzerland), August 21-22, 2017.
* Incremental Kinetic Modeling of Spectroscopic Data using a Reduced Calibration Model
The identification of kinetic models is an important step for the monitoring, control and optimization of chemical processes. Kinetic models are often based on first principles that describe the evolution of concentrations by means of conservation and constitutive equations. Identification of reaction kinetics, namely, rate expressions and rate parameters, represents the main challenge in constructing such first-principles models. <br><br> Absorption spectroscopy is the method of choice for monitoring the kinetics of reaction systems. Calibration methods, such as PCR or PLS, are commonly used to convert absorbance to indirect concentrations measurements. To avoid the difficult and long task of building a calibration for all species, one often designs a reduced calibration model involving only a subset of important species. <br><br> Incremental kinetic modeling is a method that decomposes the original identification problem into sub-problems of lower complexity. In its extent-based variant, the concentrations are first transformed to extents, and the resulting extents are then modeled individually [1]. However, a strong condition for using this incremental method is that rank(<b>B</b>) = <i>R</i>, where <b>B</b> is the matrix of structural information (stoichiometry and inlet concentrations) and <i>R</i> is the number of independent reactions. This rank condition implies that the number of measured (calibrated) concentrations (<i>S</i><sub>a</sub>) be greater than or equal to <i>R</i>, namely, <i>S</i><sub>a</sub> ≥ <i>R</i> [2]. <br><br> In this contribution, we relax this constraint and consider the case <i>S</i><sub>a</sub> < <i>R</i>. To date, two approaches exist to adapt the extent-based kinetic modeling to this situation. The first approach consists in the dynamic modeling of (<i>R</i> - <i>S</i><sub>a</sub>) extents of reaction using candidate rate expressions and simultaneous estimation of their rate parameters (minimizing the difference between the <i>S</i><sub>a</sub> simulated and measured concentrations), followed by the algebraic computation of the <i>S</i><sub>a</sub> remaining extents of reaction and the incremental identification of their corresponding rate expressions [3]. The second approach consists in expressing the effect of the <i>R</i> reactions on the concentrations of the calibrated species by means of <i>S</i><sub>a</sub> < <i>R</i> extents of reactions, and identifying via a procedure based on graph theory the smallest subsets of reactions whose rate parameters can be estimated separately [4]. <br><br> This contribution briefly reviews these two approaches for the case <i>S</i><sub>a</sub> < <i>R</i> and illustrates them via a reaction system monitored by spectroscopy. <br><br> [1] Bhatt N; Amrhein M; Bonvin D; Incremental identification of reaction and mass-transfer kinetics using the concept of extents. Ind. Eng. Chem. Res. 2011, 50(23), 12960-12974.<br> [2] Billeter J.; Srinivasan S.; Bonvin D.; Extent-based kinetic identification using spectroscopic measurements and multivariate calibration. Anal. Chim. Acta 2013, 767, 21-34.<br> [3] Rodrigues D.; Srinivasan S.; Billeter J.; Bonvin D.; Variant and invariant states for chemical reaction systems. Comp. Chem. Eng. 2015, 73, 23-33.<br> [4] Masic A; Billeter J.; Bonvin D.; Villez K.; Extent computation and modeling under rank-deficient conditions. IFAC World Congress 2017, https://infoscience.epfl.ch/record/224435/files/Article.pdf.
2017. 15th Scandinavian Symposium on Chemometrics (SSC), Naantali (Finland), June 19-22, 2017.
* Extent Computation under Rank-deficient Conditions
The identification of kinetic models can be simplified via the computation of extents of reaction on the basis of invariants such as stoichiometric balances. With extents, one can identify the structure and the parameters of reaction rates individually, which significantly reduces the number of parameters that need to be estimated simultaneously. So far, extent-based modeling has only been applied to cases where all the extents can be computed from measured concentrations. This generally excludes its application to many biological processes since the number of reactions tends to be larger than the number of measured quantities. This paper shows that, in some cases, such restrictions can be lifted. In addition, in contrast to most extent-based modeling studies that have dealt with simulated data, this study demonstrates the applicability of extent-based model identification using laboratory experimental data.
2017. IFAC World Congress, Toulouse (France), July 9-14, 2017. p. 3929-3934. DOI : 10.1016/j.ifacol.2017.08.367.
* Improved Directional Derivatives for Modifier-Adaptation Schemes
The modifier-adaptation methodology enables real-time optimization (RTO) of plant operation in the presence of considerable plant-model mismatch. It requires the estimation of plant gradients. Obtaining these gradients is expensive as it involves potentially many online experiments. Recently, a directional modifier-adaptation approach has been proposed. It relies on process models to find a subset of input directions that are critical for plant optimization in an offline computation. In turn, this allows estimating directional derivatives only in the critical directions instead of full gradients, thereby reducing the burden of gradient estimation. However, in certain cases (change of active constraints, large parametric uncertainty) directional modifier adaptation may lead to significant suboptimality. Here, we propose an extension of directional modifier adaptation, whereby we compute, at each RTO iteration, a potentially varying set of critical directions that are robust to large parametric perturbations. We draw upon a simulation study on the run-to-run optimization of the Williams-Otto semi-batch reactor to show that the proposed extension allows achieving a good trade-off between the number of critical directions and plant optimality.
2017. 20th IFAC World Congress, Toulouse, France, July 9-14, 2017. p. 5718-5723. DOI : 10.1016/j.ifacol.2017.08.1124.
* Use of Transient Measurements for Static Real-Time Optimization
Modifier adaptation (MA) is a real-time optimization (RTO) method characterized by its ability to enforce plant optimality upon convergence despite the presence of model uncertainty. The approach is based on correcting the available model using gradient estimates computed at each iteration. MA uses steady-state measurements and solves a static optimization problem. Hence, after every input change, one typically waits for the plant to reach steady state before measurements are taken. With many iterations, this can make convergence to the plant optimum rather slow. Recently, an approach that uses transient measurements for steady-state MA has been proposed. This way, plant optimality can be reached in a single transient operation. This paper proposes to improve this approach by using a dynamic model to process transient measurements for gradient computations. The approach is illustrated through the simulated example of a CSTR. Furthermore, the proposed method is less dependent on the choice of the RTO period. The time needed to reach plant optimality is of the order of the plant settling time, whereas several transitions to steady state would have been necessary using the standard static MA scheme.
2017. 20th IFAC World Congress, Toulouse, France, July 9-14, 2017. p. 5737-5742. DOI : 10.1016/j.ifacol.2017.08.1130.
* Concept and Applications of Extents in Chemical Reaction Systems
Models of chemical reaction systems can be quite complex as they typically include information regarding the reactions, the various transfers of heat and mass, as well as the effects of the inlet and outlet flows. It is well known that a linear transformation involving the reaction stoichiometry allows partitioning the state space into a reaction-invariant subspace and its complement. Alternative transformations have been proposed to partition the state space into various subspaces that are linked to the reactions, the heat and mass transfers, the inlets, and the initial conditions. This paper analyzes this partitioning of the state space, which helps isolate the effects of the various rate processes. The implications of this partitioning are discussed with respect to several modeling and estimation applications.
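For a homogeneous open reactor, the partitioning discussed above takes the following well-known form in terms of vessel extents (a sketch; see the cited literature for the precise conditions and the inverse transformation):

```latex
\[
\begin{aligned}
n(t) &= \mathbf{N}^{\top} x_r(t) + \mathbf{W}_{in}\, x_{in}(t) + n_0\, \lambda(t),\\
\dot{x}_r &= r_v(t) - \omega(t)\, x_r(t), & x_r(0) &= 0,\\
\dot{x}_{in} &= u_{in}(t) - \omega(t)\, x_{in}(t), & x_{in}(0) &= 0,\\
\dot{\lambda} &= -\,\omega(t)\, \lambda(t), & \lambda(0) &= 1,
\end{aligned}
\]
```

with N the stoichiometric matrix, W_in the inlet-composition matrix, r_v the reaction rates, u_in the inlet flows, ω = u_out/m the inverse residence time, x_r the extents of reaction, x_in the extents of inlet and λ the dilution factor; a linear transformation recovering these variables from n(t) exists when [Nᵀ W_in n₀] has full column rank.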
2017. Foundations of Computer Aided Process Operations (FOCAPO) - Chemical Process Control (CPC), Tucson (USA), January 8-12, 2017.
Theses
* Predictive Control of Buildings for Demand Response and Ancillary Services Provision
This thesis develops optimization-based techniques for the control of building heating, ventilation, and air-conditioning (HVAC) systems for the provision of demand response and ancillary services to the electric grid. The first part of the thesis focuses on the development of the open-source MATLAB toolbox OpenBuild, developed for the modeling of buildings for control applications. The toolbox constructs a first-principles-based model of the building thermodynamics using EnergyPlus model data. It also generates the disturbance data affecting the models and allows one to simulate various usage scenarios and building types. It enables co-simulation between MATLAB and EnergyPlus, facilitating model validation and controller testing. OpenBuild streamlines the design and deployment of predictive controllers for control applications. The second part of the thesis introduces the concept of buildings acting as virtual storages in the electric grid and providing ancillary services. The control problem (for the bidding phase) to characterize the flexibility of a building, while also participating in the intraday energy market, is formulated as a multi-stage uncertain optimization problem. An approximate solution method based on a novel intraday control policy and two-stage stochastic programming is developed to solve the bidding problem. A closed-loop control algorithm based on a stochastic MPC controller is developed for the online operation phase. The proposed control method is used to carry out an extensive simulation study using real data to investigate the financial benefits of office buildings providing secondary frequency control services to the grid in Switzerland. The technical feasibility of buildings providing a secondary frequency control service to the grid is also demonstrated in experiments using the experimental platform (LADR) developed in the Automatic Control Laboratory of EPFL. The experimental results validate the effectiveness of the proposed control method. The third part of the thesis develops a hierarchical method for the control of building HVAC systems for providing ancillary services to the grid. Three control layers are proposed: the local building controllers at the lowest level track the temperature set points received from the thermal flexibility controller that maximizes the flexibility of a building's thermal consumption. At the highest level, the electrical flexibility controller controls the HVAC system while maximizing the flexibility provided to the grid. The two flexibility control layers are based on robust optimization methods. A control-oriented model of a typical air-based HVAC system with a thermal storage tank is developed and the efficacy of the proposed control scheme is demonstrated in simulations.
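A toy version of the kind of receding-horizon comfort problem solved at the lower control layers is sketched below (illustrative only, formulated with cvxpy; the thermal model, bounds and cost are invented placeholders, not taken from OpenBuild or the thesis):

```python
import cvxpy as cp
import numpy as np

# Toy 2-state thermal model x+ = A x + B u + E d (states: room and wall temperature)
A = np.array([[0.90, 0.05], [0.04, 0.95]])
B = np.array([[0.15], [0.00]])
E = np.array([[0.02], [0.01]])

N = 24                                    # prediction horizon (hours)
d = 5.0 * np.ones((1, N))                 # forecast disturbance (outside temperature, degC)
x0 = np.array([21.0, 20.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost = cp.sum_squares(u)                  # proxy for heating energy
constraints = [x[:, 0] == x0]
for k in range(N):
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k] + E @ d[:, k],
                    u[0, k] >= 0.0, u[0, k] <= 10.0,          # actuator limits (kW)
                    x[0, k + 1] >= 20.0, x[0, k + 1] <= 24.0]  # comfort band (degC)

cp.Problem(cp.Minimize(cost), constraints).solve()
print(u.value[0, :3])   # first heating moves of the receding-horizon plan
```

In a receding-horizon implementation only the first input would be applied before re-solving with updated measurements and forecasts; the flexibility layers described above would additionally constrain or reward deviations of the consumption profile.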
Lausanne, EPFL, 2017. DOI : 10.5075/epfl-thesis-7897.
* On Decoupling Chemical Reaction Systems
Chemical reaction systems form the basis for obtaining desired products from raw materials. An in-depth understanding of all the underlying rate processes is necessary for the monitoring, control and optimization of chemical reaction systems. The traditional representation of a reaction system by means of the conservation equations (material and energy balances) leads to a set of highly coupled differential equations. These coupled ODEs provide the overall contributions of all the underlying rate processes, and hence it is difficult to analyze the effect of each individual rate process in a reaction system. In this dissertation, an alternative representation of reaction systems in terms of decoupled variables, namely vessel extents, is introduced. The advantages of this decoupled representation over the traditional representation are investigated for data reconciliation, model identification and parameter estimation, and state reconstruction and estimation.
Lausanne, EPFL, 2017. DOI : 10.5075/epfl-thesis-7376.
Book Chapters
* Control and Optimization of Batch Chemical Processes
A batch process is characterized by the repetition of time-varying operations of finite duration. Due to the repetition, there are two independent “time” variables, namely, the run time during a batch and the batch index. Accordingly, the control and optimization objectives can be defined for a given batch or over several batches. This chapter describes the various control and optimization strategies available for the operation of batch processes. These include online and run-to-run control on the one hand, and repeated numerical optimization and optimizing control on the other. Several case studies are presented to illustrate the various approaches.
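A bare-bones run-to-run (batch-to-batch) update of the kind discussed in the chapter can be sketched as follows (a generic integral-type correction law applied to an invented toy batch map, not a specific scheme from the chapter):

```python
import numpy as np

def run_to_run(run_batch, u0, y_sp, K, n_batches=15):
    """Iterate a batch recipe over the batch index k: u_{k+1} = u_k + K (y_sp - y_k).

    run_batch(u) -> measured end-of-batch outputs y_k (one plant run);
    K is a gain matrix mapping output errors to recipe corrections.
    """
    u = np.asarray(u0, dtype=float)
    for k in range(n_batches):
        y = run_batch(u)            # execute batch k with the current recipe
        u = u + K @ (y_sp - y)      # correct the recipe for the next batch
    return u

# Toy example: unknown static batch map y = M u, driven to the setpoint y_sp
M = np.array([[1.2, 0.3], [0.1, 0.8]])
u_final = run_to_run(lambda u: M @ u, u0=[0.0, 0.0], y_sp=np.array([1.0, 0.5]),
                     K=0.5 * np.linalg.inv(M))
print(u_final, M @ u_final)
```

With the gain chosen as a (scaled) inverse of the batch sensitivity, the output error contracts geometrically over the batch index, which is the basic mechanism exploited by run-to-run control.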
Coulson and Richardson’s Chemical Engineering, 4th edition; Oxford, UK: Butterworth-Heinemann, 2017. p. 441-503.
Talks
* Incremental Model Identification of Reaction Systems
The identification of reaction kinetics represents the main challenge in building models for reaction systems. The identification task can be performed via either simultaneous model identification (SMI) or incremental model identification (IMI), the latter using either the differential (rate-based) or the integral (extent-based) method of parameter estimation.<br /> <br /> In SMI, a rate law must be postulated for each reaction, and the modeled concentrations are obtained by integration of the balance equations. The procedure must be repeated for all combinations of rate candidates. This approach is computationally costly when there are several candidates for each reaction, and convergence problems may arise due to the large number of parameters.<br /> <br /> In IMI, the identification task is decomposed into several sub-problems, one for each reaction. Since IMI deals with one reaction at a time, only the rate candidates for that reaction need to be compared. In addition, convergence is facilitated by the fact that only the parameters of a single reaction rate are estimated in each sub-problem. In extent-based IMI, the simulated rates are integrated to yield extents, and the parameters are estimated by fitting the simulated extents to the experimental extents obtained by transformation of measured concentrations.<br /> <br /> In this talk, different cases of reaction systems will be presented, the concept of extents will be discussed for each case, and we will show how extents can be obtained from concentrations via linear transformation. Then, the decoupling provided by the concept of extents will be applied to incremental model identification of these reaction systems, which allows correct model discrimination and accurate parameter estimation.
Universidade de Santiago de Compostela (USC), Santiago de Compostela (Spain), March 29, 2017.Student Projects
* Receding Horizon Optimization and Disturbance Estimation of a Redox Flow Battery
No abstract available
2017
2016
Journal Articles
* Modifier Adaptation for Real-Time Optimization -- Methods and Applications
This paper presents an overview of the recent developments of modifier-adaptation schemes for real-time optimization of uncertain processes. These schemes have the ability to reach plant optimality upon convergence despite the presence of structural plant-model mismatch. Modifier Adaptation has its origins in the technique of Integrated System Optimization and Parameter Estimation, but differs in the definition of the modifiers and in the fact that no parameter estimation is required. This paper reviews the fundamentals of Modifier Adaptation and provides an overview of several variants and extensions. Furthermore, the paper discusses different methods for estimating the required gradients (or modifiers) from noisy measurements. We also give an overview of the application studies available in the literature. Finally, the paper briefly discusses open issues so as to promote future research in this area.
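As a rough illustration of the modifier-adaptation idea reviewed in the paper, the sketch below iterates a first-order-corrected model optimization on a one-dimensional unconstrained toy problem. The plant, model and filter gain are invented, and the plant gradient is obtained by finite differences, which in practice requires dedicated plant experiments.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimal modifier-adaptation sketch on a 1-D unconstrained toy problem (illustration only).
plant = lambda u: (u - 2.0) ** 2 + 0.5 * u           # unknown to the optimizer, optimum at u = 1.75
model = lambda u: (u - 1.0) ** 2                     # available, structurally mismatched

def plant_gradient(u, du=1e-3):
    return (plant(u + du) - plant(u - du)) / (2 * du)

def model_gradient(u, du=1e-6):
    return (model(u + du) - model(u - du)) / (2 * du)

u, lam, K = 0.0, 0.0, 0.7                            # input, first-order modifier, filter gain
for k in range(20):
    # Solve the modified model-based problem: min_u model(u) + lam * u
    u = minimize_scalar(lambda v: model(v) + lam * v, bounds=(-5, 5), method="bounded").x
    # Update the modifier with the filtered plant-model gradient difference at the new point.
    lam = (1 - K) * lam + K * (plant_gradient(u) - model_gradient(u))

print("converged input u =", round(u, 3), " plant optimum = 1.75")
```

Upon convergence the gradient of the corrected model equals the plant gradient, which is why the scheme stops at a point satisfying the plant's first-order optimality conditions even though the model is wrong.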
Processes. 2016. DOI : 10.3390/pr4040055.* Identification of Multiphase Reaction Systems with Instantaneous Equilibria
The identification of kinetic models for multiphase reaction systems is complex due to the simultaneous effect of chemical reactions and mass transfers. The extent-based incremental approach simplifies the modeling task by transforming the reaction system into variant states called vessel extents, one for each rate process. This transformation is carried out from the measured numbers of moles (or concentrations) and requires as many measured species as there are rate processes. Then, each vessel extent can be modeled individually, that is, independently of the other dynamic effects. This paper presents a modified version of the extent-based incremental approach that can be used to identify multiphase reaction systems in the presence of instantaneous equilibria. Different routes are possible depending on the number and type of measured species. The approach is illustrated via the simulated example of the oxidation of benzyl alcohol by hypochlorite in a batch reactor.
Industrial and Engineering Chemistry Research. 2016. DOI : 10.1021/acs.iecr.6b01283.* Extension of Modifier Adaptation for Controlled Plants using Static Open-Loop Models
Model-based optimization methods suffer from the limited accuracy of the available process models. Because of plant-model mismatch, model-based optimal inputs may be suboptimal or, worse, infeasible for the plant. Modifier adaptation (MA) overcomes this obstacle by incorporating measurements in the optimization framework. However, the standard MA formulation requires that (1) the model satisfies adequacy conditions and (2) the model and the plant share the same degrees of freedom. In this article, three extensions of MA to problems where (2) does not hold are proposed. In particular, we consider the case of controlled plants for which only a model of the open-loop plant is available. These extensions are shown to preserve the ability of MA to converge to the plant optimum despite disturbances and plant-model mismatch. The proposed methods are illustrated in simulation for the optimization of a CSTR.
Computers and Chemical Engineering. 2016. DOI : 10.1016/j.compchemeng.2016.07.008.* A Directional Modifier-Adaptation Algorithm for Real-Time Optimization
The steady advances of computational methods make model-based optimization an increasingly attractive method for process improvement. Unfortunately, the available models are often inaccurate. The traditional remedy is to update the model parameters, but this generally leads to a difficult parameter estimation problem that must be solved on-line. In addition, the resulting model may poorly represent the plant when there is structural mismatch between the two. The iterative optimization method called Modifier Adaptation overcomes these obstacles by directly incorporating plant measurements into the optimization framework, principally in the form of constraint values and gradients. However, the experimental cost (i.e. the number of experiments required) to estimate these gradients increases linearly with the number of process inputs, which tends to make the method intractable for processes with many inputs. This paper presents a new algorithm, called Directional Modifier Adaptation, that overcomes this limitation by only estimating the plant gradients in certain privileged directions. It is proven that plant optimality with respect to these privileged directions can be guaranteed upon convergence. A novel, statistically optimal, gradient estimation technique is developed. The algorithm is illustrated through the simulation of a realistic airborne wind-energy system, a promising renewable energy technology that harnesses wind energy using large kites. It is shown that Directional Modifier Adaptation can optimize in real time the path followed by the dynamically flying kite.
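The directional idea can be illustrated in a few lines: the plant gradient is probed only along a small set of privileged directions, so the experimental cost scales with the number of directions rather than with the number of inputs. The toy plant and the choice of directions below are invented, and the statistically optimal estimator of the paper is not reproduced here.

```python
import numpy as np

# Sketch of directional gradient estimation (illustration only): the plant is perturbed only
# along the columns of Ur, i.e., one plant experiment per privileged direction.

def plant_cost(u):
    return np.sum((u - np.array([1.0, -2.0, 0.5, 3.0])) ** 2)   # made-up plant with 4 inputs

u0 = np.zeros(4)                          # current operating point
Ur = np.array([[1.0, 0.0],                # two privileged directions in input space
               [0.0, 1.0],
               [0.0, 0.0],
               [0.0, 0.0]])
h = 1e-2                                  # step size of the plant perturbations

# Directional derivatives along the columns of Ur (finite differences on the plant).
dir_derivs = np.array([(plant_cost(u0 + h * Ur[:, i]) - plant_cost(u0)) / h
                       for i in range(Ur.shape[1])])

# Gradient estimate restricted to span(Ur); components orthogonal to Ur are left to the model.
grad_estimate = Ur @ np.linalg.lstsq(Ur.T @ Ur, dir_derivs, rcond=None)[0]
print("directional gradient estimate:", grad_estimate)
```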
Journal of Process Control. 2016. DOI : 10.1016/j.jprocont.2015.11.008.Conference Papers
* Distributed Modifier-Adaptation Schemes for the Real-Time Optimization of Interconnected Systems in the Presence of Structural Plant-Model Mismatch
The desire to operate chemical processes in a safe and economically optimal way has motivated the development of so-called real-time optimization (RTO) methods [1]. For continuous processes, these methods aim to compute safe and optimal steady-state setpoints for the lower-level process controllers. A key challenge for this task is plant-model mismatch. For example, in the case of a model that is assumed to be structurally identical to the plant but has unknown parameters, the so-called two-step approach [2-4] has been proposed. It repeats two steps: in the first step, plant measurements are used to identify the parameters of the model; in the second step, the economically optimal setpoints for the updated process model are determined by solving an optimization problem. Unfortunately, a structurally correct process model is rarely available in practice. In that case, the optimal setpoints for the model determined by the two-step approach may not be optimal for the plant. To overcome this problem, the so-called modifier-adaptation (MA) methods have been developed [5]. In MA, no structurally correct model is required. Instead, plant measurements are used to formulate and solve a modified optimization problem at each iteration, such that, upon convergence, the first-order optimality conditions of the plant are guaranteed to be satisfied [5].
These and other available RTO methods usually treat the plant as a single entity and compute the optimal setpoints in a centralized manner. However, this approach may be suboptimal or even infeasible for an increasing number of applications involving so-called interconnected systems. Interconnected systems are here defined as systems composed of subsystems that exchange material, energy or information, such as compressor networks, teams of autonomous vehicles or large industrial parks in which different business units of a chemical company share certain resources. In these cases, distributed RTO methods can be employed, which utilize the available interconnection variables and exploit the inherent interconnection structure of the particular system.
Only a few distributed RTO methods have been reported in the literature, including the methods proposed by Brdys and Tatjewski [6]. Just as in the two-step approaches, structurally correct models are assumed. In addition to identifying the model parameters, these methods also try to estimate the values of the interconnection variables. Consequently, they may not yield the plant optimum in the presence of structural plant-model mismatch.
In this contribution, we propose a set of distributed RTO methods based on the modifier-adaptation framework for interconnected systems in the presence of structural plant-model mismatch. Thanks to the modifier-adaptation framework, all proposed distributed RTO methods are able to reach the plant optimum upon convergence despite possible plant-model mismatch. The proposed schemes employ different types of models, use different measurements, and differ in their algorithmic structure, required controller hierarchy and communication topology, as detailed below.
The first method utilizes a model of the local objective function, a model for the dependence of the local outputs on the local setpoints and the outputs of other subsystems, and a model for the interconnection structure of the system. The algorithm resembles a double-loop structure: in the simulation-based inner loop, the local MA problems are solved in parallel until the interconnection constraints are satisfied. As soon as the inner loop has converged, the computed setpoints are applied to the plant in the outer loop. When the plant has reached a new steady state, measurements are taken to improve the performance at the next iteration.
The second and third methods do not require a model of the interconnection structure of the system. Consequently, a single-loop algorithmic structure is sufficient. At every iteration, each subsystem computes local setpoints, which are immediately applied to the plant. At the corresponding steady state, plant measurements are taken to improve the performance at the next iteration. At this point, the second and third methods proceed differently: the second method uses local measurements of the interconnection variables entering each subsystem, whereas the third method additionally measures the local outputs. Consequently, the second method still uses a model describing the dependence of the local outputs on the local setpoints and local interconnection variables, whereas no such relationship is needed for the third method.
Because of their different characteristics, each of these algorithms has specific advantages regarding applications. For example, the first method requires each subsystem to have a complete model of the full system and its interconnection topology. If these models are good, fast convergence with few setpoint changes can be expected. In applications where providing a full model of the system to every subsystem is feasible and does not raise any privacy concerns, the first scheme may be the method of choice. The second and third schemes, in contrast, do not require an interconnection model. Therefore, they may be preferred for applications where different subsystems do not want to disclose their models to other subsystems, for example when the subsystems are owned by competing companies. Moreover, the lack of an interconnection model may be advantageous in certain applications with changing interconnection topologies, such as power system networks. Another advantage of the second and third methods is their reduced local modeling effort if accurate measurements are available. On the other hand, these methods may need significantly more iterations to converge than the first method if accurate models are available.
Finally, we apply the proposed distributed modifier-adaptation schemes to numerical examples. The main features of each method are illustrated, revealing their potential for the real-time optimization of interconnected systems with structural plant-model mismatch.
References:
[1] C. R. Cutler and R. T. Perry. Real time optimization with multivariable control is required to maximize profits. Comput. Chem. Eng., 7(5):663-667, 1983.
[2] T. E. Marlin and A. N. Hrymak. Real-time operations optimization of continuous processes. In Proceedings of the 5th International Conference on Chemical Process Control (CPC-V): Assessment and New Directions for Research; J. C. Kantor, C. E. Garcia, and B. Carnahan, Eds.; AIChE Symposium Series, No. 316; AIChE: New York, 1997, 156-164.
[3] S.-S. Jang, B. Joseph, and H. Mukai. On-line optimization of constrained multivariable chemical processes. AIChE J., 33(1):26-35, 1987.
[4] C. Y. Chen and B. Joseph. On-line optimization using a two-phase approach: An application study. Ind. Eng. Chem. Res., 26:1924-1930, 1987.
[5] A. Marchetti, B. Chachuat, and D. Bonvin. Modifier-adaptation methodology for real-time optimization. Ind. Eng. Chem. Res., 48:6022-6033, 2009.
[6] M. A. Brdys and P. Tatjewski. Iterative Algorithms for Multilayer Optimizing Control. Imperial College Press: London, U.K., 2005.
2016. 108th Annual Meeting of the American Institute of Chemical Engineers (AIChE), San Francisco, CA, USA, November 13-18, 2016.* Optimal Load Sharing of Parallel Compressors via Modifier Adaptation
2016. IEEE International Conference on Control Applications, Buenos Aires, Argentina, September 19-22, 2016. p. 1488-1493. DOI : 10.1109/CCA.2016.7588011.* Fast Estimation of Plant Steady State, with Application to Static RTO
Experimental assessment or prediction of plant steady state is important for many applications in the area of modeling and operation of continuous processes. For example, the iterative implementation of static real-time optimization requires reaching steady state for each successive operating point, which may be quite time-consuming. This paper presents an approach to speed up the estimation of plant steady state for imperfectly known dynamic systems that are characterized by (i) the presence of fast and slow states, with no effect of the slow states on the fast states, and (ii) the fact that the unknown part of the dynamics depends only on the fast states. The proposed approach takes advantage of measurement-based rate estimation, which consists in estimating rate signals without the knowledge or identification of rate models. Since one can use feedback control to speed up the convergence to steady state of the fast part of the plant, this rate estimation allows estimating the steady state of the slow part during transient operation. It is shown how this approach can be used to speed up the static real-time optimization of continuous processes. A simulated example illustrates its application to a continuous stirred-tank reactor.
2016. 108th Annual Meeting of the American Institute of Chemical Engineers (AIChE), San Francisco (USA), November 13-18, 2016.* Real-Time Optimization Based on Adaptation of Surrogate Models
Recently, different real-time optimization (RTO) schemes that guarantee feasibility of all RTO iterates and monotonic convergence to the optimal plant operating point have been proposed. However, simulations reveal that these schemes converge very slowly to the plant optimum, which may be prohibitive in applications. This note proposes an RTO scheme based on second-order surrogate models of the objective and the constraints, which enforces feasibility of all RTO iterates, i.e., plant constraints are satisfied at all iterations. In order to speed up convergence, we suggest an online adaptation strategy of the surrogate models that is based on trust-region ideas. The efficacy of the proposed RTO scheme is demonstrated in simulations via both a numerical example and the steady-state optimization of the Williams-Otto reactor.
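The following sketch shows a generic trust-region adaptation of a quadratic surrogate on a scalar toy problem; it illustrates the radius-update logic only and does not reproduce the feasibility guarantees of the proposed scheme. The plant function and tuning constants are invented.

```python
import numpy as np

# Generic trust-region sketch (not the authors' feasibility-guaranteeing scheme): a quadratic
# surrogate of the plant cost is refitted around the current point and the step length is
# limited by a trust radius that grows or shrinks depending on the achieved improvement.

plant_cost = lambda u: (u - 3.0) ** 2 + 0.1 * np.sin(5 * u)   # made-up scalar plant

u, radius = 0.0, 0.5
for k in range(15):
    # Fit a local quadratic surrogate from three plant evaluations around u.
    pts = np.array([u - radius, u, u + radius])
    coeffs = np.polyfit(pts, [plant_cost(p) for p in pts], 2)   # c2*u^2 + c1*u + c0
    # Minimize the surrogate inside the trust region.
    u_cand = np.clip(-coeffs[1] / (2 * coeffs[0] + 1e-12), u - radius, u + radius)
    # Accept the step and adapt the radius based on the actual plant improvement.
    if plant_cost(u_cand) < plant_cost(u):
        u, radius = u_cand, min(2 * radius, 2.0)
    else:
        radius *= 0.5

print("final input:", round(u, 3))
```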
2016. 11th IFAC Symposium on Dynamics and Control of Process Systems, including Biosystems (DYCOPS-CAB 2016), Trondheim, Norway, June 6-8, 2016. DOI : 10.1016/j.ifacol.2016.07.377.* Time-Optimal Path-Following Operation in the Presence of Uncertainty
Path-following tasks, which refer to dynamic motion planning along pre-specified geometric references, are frequently encountered in applications such as milling, robot-supported measurements, and trajectory planning for autonomous vehicles. Different convex and non-convex optimal control formulations have been proposed to tackle these problems for the case of perfect models. This paper analyzes path-following problems in the presence of plant-model mismatch. The proposed adaptation strategies rely on concepts that are well known in the field of real-time optimization. We present conditions guaranteeing that, upon convergence, a minimum-time solution is attained despite the presence of plant-model mismatch. We draw upon a simulated robotic example to illustrate our results.
2016. 15th European Control Conference, Aalborg, Denmark, June 29 - July 1, 2016. p. 2228-2233. DOI : 10.1109/ECC.2016.7810622.* ALS Scheme using Extent-based Constraints for the Analysis of Chemical Reaction Systems
Multivariate curve resolution via alternating least squares (ALS) is used to resolve the concentration profiles C and the pure component spectra E of S species from the multivariate absorbance data A, assuming the bilinear model A = C E. Due to the possible permutations of profiles and the presence of intensity and rotational ambiguities, soft constraints such as nonnegativity of C and E, as well as unimodality, monotonicity, closure and local rank selectivity of C, are typically used to obtain tighter solution bounds for C and E [1].
In addition, hard constraints in the form of kinetic models are also often used. Unfortunately, these models are subject to structural plant-model mismatch and parametric uncertainty, which weakens their impact. As an alternative, this paper proposes to use constraints based on variant states called extents. The computation of these extents does not require any information on the rate processes, that is, x(t) = T n(t), with x(t) and n(t) the vectors of extents and numbers of moles at time t, respectively, and T a matrix known from the reaction stoichiometry, the inlet composition and the initial conditions [2]. Expressing the S concentrations in terms of d extents and q = S - d invariants reduces the dimensionality of the problem from S to d. Each column of the extent matrix X describes the extent of a single rate process, for example of a reaction or an inlet flow. It turns out that the unknown concentration matrix C can be expressed in terms of the lower-dimensional matrix X as C = V^-1 X T^-T, where V is a diagonal matrix containing the volume profile. Since each column of X describes a single rate process, additional constraints can be enforced on X, such as monotonicity and convexity/concavity [3]. Furthermore, the q invariant relationships can be used as constraints in the least-squares problem. As a consequence, the use of extents in ALS reduces the ambiguity between C and E, yielding faster convergence and tighter solutions.
The use of extent-based constraints also opens up new perspectives for hard-soft ALS methods, since hard kinetic models can be identified individually (that is, independently of the other rates) for some selected processes, while soft extent-based constraints are used for the unknown processes. Another feature involves the possibility of initializing or constraining the ALS scheme with a concentration submatrix (of dimension at least S × S) estimated from multiple experiments performed under well-designed conditions and local-rank information. Together with the corresponding absorbance data, this submatrix can efficiently replace the traditional initialization via factor analysis and be used to compute a better initial estimate of E.
After a brief review of the mathematical properties of extents for batch and open reactors, this talk will present the modified ALS scheme and illustrate it via simulated examples.
References:
[1] A. de Juan, J. Jaumot, R. Tauler, Anal. Methods 6, 4964 (2014).
[2] D. Rodrigues, S. Srinivasan, J. Billeter, D. Bonvin, Comp. Chem. Eng. 73, 23 (2015).
[3] S. Srinivasan, D.M.D. Kumar, J. Billeter, S. Narasimhan, D. Bonvin, IFAC Symposium DYCOPS (2016).
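A minimal numerical sketch of the idea, using the relation C = V^-1 X T^-T on synthetic data, is given below. The dimensions, the transformation matrix and the simple clipping step that stands in for properly constrained least squares are all illustrative choices, not the scheme of the talk.

```python
import numpy as np

# One ALS loop with an extent-based update, on synthetic bilinear data (illustration only).
rng = np.random.default_rng(0)

nt, S, nw, d = 50, 3, 40, 2                       # times, species, wavelengths, extents
T = rng.normal(size=(S, S)) + 3 * np.eye(S)       # assumed known transformation n -> [extents; invariants]
vol = np.ones(nt)                                 # volume profile (diagonal of V)

C_true = np.abs(rng.normal(size=(nt, S)))         # synthetic concentrations
E_true = np.abs(rng.normal(size=(S, nw)))         # synthetic pure spectra
A = C_true @ E_true + 0.01 * rng.normal(size=(nt, nw))   # bilinear absorbance data

E = np.abs(rng.normal(size=(S, nw)))              # initial spectra estimate
for it in range(20):
    # (1) Concentration update from A and E, then move to extent space: X = V C T^T.
    C = A @ np.linalg.pinv(E)
    X = (vol[:, None] * C) @ T.T
    # (2) Constraints applied in extent space (here only nonnegativity of the d extent
    #     columns; monotonicity or convexity/concavity could be enforced here as well).
    X[:, :d] = np.clip(X[:, :d], 0.0, None)
    # (3) Back to concentrations, C = V^-1 X T^-T, then spectra update with nonnegativity.
    C = (X @ np.linalg.inv(T).T) / vol[:, None]
    E = np.clip(np.linalg.lstsq(C, A, rcond=None)[0], 0.0, None)

print("residual norm:", np.linalg.norm(A - C @ E))
```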
2016. 16th Conference on Chemometrics in Analytical Chemistry (CAC), Barcelona (Spain), June 6-10, 2016.* Modeling and Optimal Control of a Redox Flow Battery
Vanadium Redox Flow Batteries (VRFB) can be used as energy storage devices, for example to compensate for wind or solar power fluctuations. In VRFBs, charge is stored in two tanks containing two different vanadium solutions. This approach decouples the storage capacity from the power supply, which depends only on the number and size of the cells [1].
A control-specific model of a VRFB is proposed, which captures the essential dynamic properties of the battery while ignoring all fluid-mechanical elements. The model of the battery is a nonlinear DAE comprising a differential equation for the state of charge SoC and algebraic equations based on the Butler-Volmer equation for the current I and the voltage U, the power P = U·I being set by the operator. The battery typically operates in constant-power mode, but the control system limits its operating range by switching to constant voltage when upper or lower voltage thresholds are reached. In addition, this VRFB contains a secondary flow circuit, where the electrolyte discharge reactions produce hydrogen and oxygen [2].
Model parameters are estimated by minimizing the difference between the measured and modeled SoC over time, the other states (I, U and P) being compared with measured data. The model is validated using independent measurements, which show good fits. Finally, the dynamic model will be used to formulate and numerically solve the problem of optimal battery operation in different scenarios.
[1] C. Blanc, A. Rufer, Chapter 18, in Paths to Sustainable Energy, InTech, 2010
[2] V. Amstutz et al., Energy and Environmental Science 7, 2350-2358 (2014)
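A toy constant-power charging simulation in the spirit of such a control-oriented model is sketched below. A Nernst-type open-circuit voltage and a purely ohmic overpotential stand in for the Butler-Volmer kinetics of the actual model, and all parameter values are invented.

```python
import numpy as np

# Toy constant-power charging of a flow battery with a constant-voltage limit (illustration only).
F, R, Temp = 96485.0, 8.314, 298.0
E0, Rint = 1.4, 0.05          # standard cell voltage [V], internal resistance [ohm]
Q = 3600.0 * 10.0             # charge capacity [C]
P = 50.0                      # constant charging power [W] set by the operator
U_max = 1.65                  # upper voltage threshold for switching to constant voltage

def ocv(soc):
    soc = np.clip(soc, 1e-3, 1 - 1e-3)
    return E0 + 2 * R * Temp / F * np.log(soc / (1 - soc))   # simple Nernst-type OCV

soc, dt = 0.2, 1.0            # initial state of charge, time step [s]
for t in range(3600):
    # Solve P = U * I with U = ocv + Rint * I  (quadratic in I).
    u_oc = ocv(soc)
    I = (-u_oc + np.sqrt(u_oc ** 2 + 4 * Rint * P)) / (2 * Rint)
    U = u_oc + Rint * I
    if U > U_max:             # control system switches to constant-voltage mode
        U = U_max
        I = (U - u_oc) / Rint
    soc += I * dt / Q         # dSoC/dt = I / Q (charging convention)

print("state of charge after 1 h:", round(soc, 3))
```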
2016. Symposium for Fuel Cell and Battery Modeling and Experimental Validation (MODVAL 13), SwissTech Convention Center, EPFL, March 22-23, 2016.* Biokinetic process model diagnosis with shape-constrained spline functions
Model-structure identification is important for the optimization and design of biokinetic processes. Standard Monod and Tessier functions are often used by default to describe bacterial growth with respect to a substrate, leading to significant optimization errors in case of inappropriate representation. This paper introduces shape-constrained spline (SCS) functions, which share the qualitative behavior of a number of conventional growth-rate functions expressing substrate-affinity effects. A simulated case study demonstrates the model-identification capabilities of SCS functions, which offer high parametric flexibility and could replace incomplete libraries of functions with a single biokinetic model structure. Moreover, the diagnostic ability of the spline functions is illustrated for the case of Haldane kinetics, which exhibits a distinctively different shape. The major benefit of these spline functions lies in their model-discrimination capabilities, indicating in a quick and conclusive way the presence of effects other than substrate affinity.
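As a simplified stand-in for shape-constrained splines, the sketch below fits a monotonically increasing and concave growth-rate curve on a substrate grid by constrained least squares with cvxpy; the data follow a Monod-type law and are synthetic.

```python
import numpy as np
import cvxpy as cp

# Shape-constrained fit of a growth-rate curve (monotone increasing, concave, nonnegative).
rng = np.random.default_rng(1)
S_grid = np.linspace(0.05, 5.0, 40)                     # substrate concentrations [g/L]
mu_data = 0.8 * S_grid / (0.5 + S_grid) + 0.02 * rng.normal(size=S_grid.size)

mu = cp.Variable(S_grid.size)                           # growth-rate values on the grid
d1 = mu[1:] - mu[:-1]                                   # first differences  (monotonicity)
d2 = mu[2:] - 2 * mu[1:-1] + mu[:-2]                    # second differences (concavity)
problem = cp.Problem(cp.Minimize(cp.sum_squares(mu - mu_data)),
                     [d1 >= 0, d2 <= 0, mu >= 0])
problem.solve()

print("fitted growth rate at S = 5 g/L:", round(float(mu.value[-1]), 3))
```

A Haldane-type data set would violate the monotonicity constraint and show up as a systematic residual, which is the diagnostic effect described in the paper.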
2016. 3rd IWA Specialized International Conference “Ecotechnologies for Wastewater Treatment” (ecoSTP), Cambridge (UK), June 27-30, 2016.* On the use of shape constraints for state estimation in reaction systems
State estimation techniques are used for improving the quality of measured signals and for reconstructing unmeasured quantities. In chemical reaction systems, nonlinear estimators are often used to improve the quality of estimated concentrations. These nonlinear estimators, which include the extended Kalman filter, the receding-horizon nonlinear Kalman filter and the moving-horizon estimator, use a state-space representation in terms of concentrations. An alternative to the representation of chemical reaction systems in terms of concentrations consists in representing these systems in terms of extents. This paper formulates the state estimation problem in terms of extents, which allows imposing additional shape constraints on the sign, monotonicity and concavity/convexity properties of extents. The addition of shape constraints often leads to significantly improved state estimates. A simulated example illustrates the formulation of the state estimation problem in terms of concentrations and extents, and the use of shape constraints.
2016. 11th IFAC Symposium on Dynamics and Control of Process Systems (DYCOPS) - CAB, Trondheim (Norway), June 6-8, 2016. p. 73-78. DOI : 10.1016/j.ifacol.2016.07.219.* On the Use of Shape-Constrained Splines for Biokinetic Process Modelling
Identification of mathematical models is an important task for the design and the optimization of biokinetic processes. Monod or Tessier growth-rate models are often chosen by default, although these models are not able to represent the dynamics of all bacterial growth processes. This imperfect representation then affects the quality of the model prediction. This paper introduces an alternative approach, which is based on constraints such as monotonicity and concavity and the use of shape-constrained spline functions, to describe the substrate affinity with high parametric flexibility. This way, the difficult task of searching through potentially incomplete rate-model libraries can be circumvented. A simulated case study is used to illustrate the superiority of the proposed method to represent non-ideal growth conditions, where neither Monod nor Tessier kinetics offer a good approximation.
2016. 11th IFAC Symposium on Dynamics and Control of Process Systems (DYCOPS) - CAB, Trondheim (Norway), June 6-8, 2016. p. 1145-1150. DOI : 10.1016/j.ifacol.2016.07.357.* Biokinetic Model Identification via Extents of Reaction
Model-structure selection and parameter identification for the biokinetic modeling of biological wastewater treatment processes are broadly accepted to be a complicated task. Contributing factors include (i) nonlinear behavior, (ii) lack of knowledge, (iii) lack of (accurate) measurements, and (iv) a large number of model parameters to estimate. Several strategies have been proposed in the wastewater engineering literature to deal with the complexity of the modeling task. These include (i) experimental design, (ii) determination of identifiable parameters, and (iii) stochastic nonlinear optimization. Despite these developments, model identification remains challenging. Extent-based modeling simplifies this task by identifying each reaction kinetics separately. The available method fits in a strategy where the reaction network (graph) and its stoichiometry (matrix) are first identified. Then, the extents of reaction are computed and the identification of the individual rate functions is carried out in terms of extents. In this work, the original extent-based method is modified to take nonlinear constraints and measurements into account. A simulated batch process is used to demonstrate the method.
2016. 5th Wastewater Treatment Modelling Seminar - IWA/WEF (WWTmod), Annecy (France), April 2-6, 2016.Theses
* Towards large-scale commercialization of fuel cells
The present and future challenges that humanity is facing regarding the consumption and supply of energy constitute the context of this research. The technology in which we are interested is the fuel cell, mainly because of its high efficiency for the conversion of fuels into electricity and heat. More specifically, we considered solid-oxide and polymer electrolyte fuel cells. To take part in the reduction of the consumption of fossil fuels and of the emissions of greenhouse gases and pollutants, fuel cells should first become a more attractive alternative technology. The aim of this study is hence to tackle the remaining obstacles hindering their large-scale commercialization, namely, to reach a balanced and competitive combination of production cost, lifetime and density of performance. The originality of this research lies in the simultaneous tackling of these challenges via the management of uncertainties during the design of fuel cell stacks. The approach is hence to take actions "upstream" rather than "downstream". In particular, a novelty is to account for the effect of manufacturing variability on the homogeneity of the performance and for the related risk of degradation, or even failure. We focus on the dimensional tolerances of the parts whose function is to distribute the flows as homogeneously as possible into the fuel cell. The technical objective is to find a robust optimal solution, i.e., a solution which is optimal also in terms of a lowered sensitivity to imperfections, such as geometrical distortions. Besides, this research also deals with the challenges associated with the management of uncertainties in the context of combining optimization of geometries (design) and modeling based on computational fluid dynamics. Taken alone, these techniques were proven to be powerful tools of analysis and of synthesis. They are, however, computationally intensive. When used together, the insight they can offer is even greater, but we face, even with today's high-performance computing infrastructures, the dilemma of accuracy versus tractability, which is even more problematic in the context of uncertainty management, as will be shown. Therefore, efforts were dedicated to finding ways to unravel this dilemma, in the prospect of achieving the optimization, under uncertainty, of the design of fuel cells. In particular, approximate models were investigated, notably reduced-order modeling and meta-modeling techniques. The results of this research relate to both methodology and technology. Among the methodological results, surrogate models are evaluated (tractability vs. accuracy), and guidelines are given for the management of uncertainties in this context and for future research. From a technological point of view, it was shown, first, that accounting for dimensional tolerances in the design of fuel cells is crucial. Then, the effect of these uncertainties was quantified, giving clearer insight into the best ways to deal with them. Last but not least, optimization of the design was carried out accounting for the uncertainties. Deterministic optima were compared with stochastic optima, revealing weaknesses of the former and potential for improvements of the designs when considering, quantitatively, the uncertainties. Last, and perhaps most importantly, while conducting these investigations, we were able to raise numerous original (or reformulated) questions, giving rise to novel tracks for improvement and to fertile ground for further research.
Lausanne, EPFL, 2016. DOI : 10.5075/epfl-thesis-6899.* Fixed-structure Control of LTI Systems with Polytopic-type Uncertainty
This thesis focuses on the development of robust control solutions for linear time-invariant interconnected systems affected by polytopic-type uncertainty. The main issues involved in the control of such systems, e.g. sensor and actuator placement, control configuration selection, and robust fixed-structure control design, are addressed. The problem of fixed-structure control is intrinsically nonconvex and hence computationally intractable. Nevertheless, the problem has attracted considerable attention due to the great importance of fixed-structure controllers in practice. In this thesis, necessary and sufficient conditions for fixed-structure H_inf control of polytopic systems with a single uncertain parameter are developed in terms of a finite number of bilinear matrix inequalities (BMIs). Increasing the number of uncertain parameters leads to sufficient BMI conditions, where the number of decision variables grows polynomially. Convex approximations of robust fixed-order and fixed-structure controller design, which rely on the concept of strict positive realness (SPRness) of transfer functions in a state-space setting, are presented. Such approximations are based on the use of slack matrices whose role is to decouple the product of unknown matrices. Several algorithms for the determination and update of the slack matrices are given. It is shown that the problem of sensor and actuator placement in polytopic interconnected systems can be formulated as an optimization problem that minimizes the cardinality of some pattern matrices while satisfying a guaranteed level of H_inf performance. The control configuration design is achieved by solving a convex optimization problem whose solution delivers a trade-off curve that starts with a centralized controller and ends with a decentralized or a distributed controller. The proposed approaches are applied to inverter-interfaced microgrids which consist of distributed generation (DG) units. To this end, two important control problems associated with microgrids are considered: (i) current control of grid-connected voltage-source converters with L/LCL filters and (ii) voltage control of islanded microgrids. The proposed control strategies are able to independently regulate the direct and quadrature (dq) components of the converter currents and voltages at the points of common coupling (PCC) in a fully decoupled manner and provide satisfactory dynamic responses. The important problem of plug-and-play (PnP) capability of DGs in microgrids is also studied. It is shown that an inverter-interfaced microgrid consisting of multiple DGs with PnP functionality can be cast as a system with polytopic-type uncertainty. By virtue of this novel description and the use of results from robust control theory, the stability of the microgrid system under PnP operation of DGs is preserved. Extensive case studies, based on time-domain simulations in the MATLAB/SimPowerSystems Toolbox, are carried out to evaluate the performance of the proposed controllers under various test scenarios, e.g., load change, voltage and current tracking. Real-time hardware-in-the-loop case studies, using the RT-LAB real-time platform of OPAL-RT Technologies, are also conducted to validate the performance of the designed controllers and demonstrate their insensitivity to hardware implementation issues, e.g., noise and PWM non-idealities. The simulation and experimental results demonstrate the satisfactory performance of the designed controllers.
Lausanne, EPFL, 2016. DOI : 10.5075/epfl-thesis-6799.Student Projects
* Control of a Jetzone Dryer
No abstract available
2016
2015
Journal Articles
* Analysis of the Maximum Efficiency of Kite-Power Systems
This paper analyzes the maximum power that a kite, or system of kites, can extract from the wind. Firstly, a number of existing results on kite system efficiency are reviewed. The results that are generally applicable require significant simplifying assumptions, usually neglecting the effects of inertia and gravity. On the other hand, the more precise analyses are usually only applicable to a particular type of kite-power system. Secondly, a novel result is derived that relates the maximum power output of a kite system to the angle of the average aerodynamic force produced by the system. This result essentially requires no limiting assumptions, and as such it is generally applicable. As it considers average forces that must be balanced, inertial forces are implicitly accounted for. In order to derive practically useful results, the maximum power output is expressed in terms of the system overall strength-to-weight ratio, the tether angle and the tether drag through an efficiency factor. The result is a simple analytic expression that can be used to calculate the maximum power-producing potential for a system of wings, flying either dynamically or statically, supported by a tether. As an example, the analysis is applied to two systems currently under development, namely, pumping-cycle generators and jet-stream wind power.
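For orientation, the classical crosswind power limit attributed to Loyd, one of the existing results that such analyses review and generalize, can be evaluated in a few lines; the numbers below are purely illustrative.

```python
# Classical crosswind power limit (Loyd) for an ideal kite, neglecting gravity, inertia and
# tether effects; the paper's more general result additionally accounts for these factors.
rho = 1.2      # air density [kg/m^3]
A = 20.0       # wing area [m^2]
v_w = 10.0     # wind speed [m/s]
CL, CD = 1.0, 0.1

P_max = (2.0 / 27.0) * rho * A * v_w ** 3 * CL * (CL / CD) ** 2
print(f"ideal crosswind power limit: {P_max / 1000:.1f} kW")
```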
Journal of Renewable and Sustainable Energy. 2015. DOI : 10.1063/1.4931111.* Application of Real-Time Optimization Methods to Energy Systems in the Presence of Uncertainties and Disturbances
In practice, the quest for the optimal operation of energy systems is complicated by the presence of operating constraints, which include the need to produce the power required by the user, and by the need to account for uncertainty. The latter concept incorporates the potential inaccuracies of the models at hand but also degradation effects or unexpected changes, such as random load changes or variations in the availability of the energy source for renewable energy systems. Since these changes affect the optimal values of the operating conditions, online adaptation is required to ensure that the system is always operated optimally. This typically implies solving an optimization problem online. Unfortunately, the applicability and the performance of most model-based optimization methods rely on the quality of the available model of the system under investigation. On the other hand, real-time optimization (RTO) methods use the available online measurements in the optimization framework and are thus capable of bringing the desired self-optimizing control reaction. In this article, we show the benefits of applying several RTO methods (co-)developed by the authors to energy systems through the successful application of (i) "Real-Time Optimization via Modifier Adaptation" to an experimental solid oxide fuel cell (SOFC) stack, (ii) the recently released "SCFO-solver" (where SCFO stands for "Sufficient Conditions of Feasibility and Optimality") to an industrial SOFC stack, and (iii) dynamic RTO to a simulated tethered kite for renewable power production. It is shown how such problems can be formulated and solved, and significant performance improvements for the three aforementioned energy systems are illustrated.
TMC Academic Journal. 2015.* Variant and Invariant States for Chemical Reaction Systems
Models of open non-isothermal heterogeneous reaction systems can be quite complex as they include information regarding the reactions, the inlet and outlet flows, the transfer of species between phases and the transfer of energy. This paper builds on the concept of reaction variants and invariants and proposes a linear transformation that allows viewing a complex nonlinear reaction system via decoupled dynamic variables, each one associated with a particular phenomenon such as a single chemical reaction, a specific mass transfer or heat transfer. Three aspects are discussed, namely, (i) the decoupling of reaction and transport phenomena in open non-isothermal homogeneous and heterogeneous reactors, (ii) the decoupling of spatially distributed systems such as tubular reactors, and (iii) the applicability of the decoupling transformation to the analysis of complex reaction systems, in particular with respect to the analysis of measured data in the absence of a kinetic model.
Computers and Chemical Engineering. 2015. DOI : 10.1016/j.compchemeng.2014.10.009.Conference Papers
* On the Design of Economic NMPC Based on Approximate Turnpike Properties
We discuss the design of sampled-data economic nonlinear model predictive control schemes for continuous-time systems based on turnpike properties. In a recent paper we have shown that an exact turnpike property allows establishing finite-time convergence of the NMPC scheme to the optimal steady state, and also recursive feasibility, without using terminal penalties or terminal constraints. Herein, we extend our previous results to the more general case of approximate turnpikes. We establish sufficient conditions, based on a dissipativity assumption, that guarantee (i) convergence to a neighborhood of the optimal steady state, and (ii) recursive feasibility in the presence of state constraints. The proposed conditions do not rely on terminal regions or terminal penalties. A key step in our developments is the use of a storage function as a penalty on the initial condition in the NMPC scheme. We draw upon the example of a chemical reactor to illustrate our findings.
2015. 54th IEEE Conference on Decision and Control, Osaka, Japan, December 15-18, 2015. p. 4964-4970. DOI : 10.1109/CDC.2015.7402995.* On the Use of Transient Information for Static Real-Time Optimization
Optimal operation of chemical processes is key for meeting productivity, quality, safety and environmental objectives. Both model-based and data-driven schemes are used to compute optimal operating conditions [1]:
- The model-based techniques are intuitive and widespread, but they suffer from the effect of plant-model mismatch. For instance, an inaccurate plant model leads to operating conditions that typically are not optimal for the plant and may violate constraints. Furthermore, even with an accurate model, the presence of disturbances generally leads to a drift of the optimal operating conditions, and adaptation based on measurements is needed to maintain plant optimality.
- The data-driven optimization techniques rely on measurements to adjust the optimal inputs in real time. Consequently, real-time measurements are typically used to help achieve plant optimality. This field, which is labeled real-time optimization (RTO), has received growing attention in recent years. RTO schemes can be of two types: explicit schemes solve a numerical optimization problem repeatedly, while implicit schemes adjust the inputs on-line in a control-inspired manner.
Explicit RTO schemes solve a numerical optimization problem repeatedly. For example, the two-step approach uses (i) measurements to update the model parameters, and (ii) the updated model to perform the numerical optimization [2]. It has also been proposed to update the model differently. Instead of adjusting the model parameters, input-affine correction terms can be added to the cost and constraint functions of the optimization problem so that it shares the first-order optimality condition with the plant. The main advantage of the technique, labeled modifier adaptation (MA), lies in its proven ability to converge to the plant optimum, even in the presence of structural plant-model mismatch [3]. Furthermore, MA is capable of detecting the correct set of active plant constraints without additional assumptions. As a static optimization method applicable to continuous plants, MA requires waiting for steady state before taking measurements, updating the correction terms and repeating the numerical optimization. Hence, several iterations are generally required to achieve convergence. The main difficulty lies in the estimation of the steady-state plant gradient at each iteration.
In contrast, implicit RTO schemes, such as extremum-seeking control [4], self-optimizing control [5] and NCO tracking [6], propose to adjust the inputs on-line in a control-inspired manner. In the absence of constraints, or when assumptions can be made regarding the set of plant constraints that are active at the optimum, implicit RTO methods reduce to gradient control, as the degrees of freedom are adjusted in real time to drive the plant cost gradient to zero. Here again, the difficulty lies in the estimation of the steady-state plant gradient, which, in addition, must be performed during transient operation. This is achieved via either low-frequency plant excitation and corresponding cost measurements (as in extremum-seeking control) or the use of transient measurements together with a model of the steady-state gradient (as in self-optimizing control and NCO tracking via neighboring extremals, where the required steady-state measurements are simply replaced by the corresponding transient measurements) [7]. Implicit RTO is much more challenging when the set of active constraints is unknown, as not only the cost gradient has to be inferred from the measurements but also the set of active constraints and the constraint gradients.
This contribution proposes a framework for using MA during the transient phase toward steady state, thereby attempting to reach optimality in a single iteration to steady state. With this approach, a modified optimization problem is solved repeatedly at each optimization instant during the transient, with the input-affine correction terms, which theoretically depend on steady-state plant quantities, being estimated on the basis of transient measurements. Note that such an attempt has already been documented in the literature but, as for the aforementioned implicit methods, the "steady-state" gradients were estimated using transient information in the framework of both multiple units and neighboring extremals [8]. In contrast, this work estimates the steady-state outputs from transient outputs and then uses these estimates "correctly" in the expressions for computing the steady-state gradients. For this, we propose to use the best available dynamic model and perform state estimation using an extended Kalman filter (EKF) framework [9]. Since the model is typically not perfect, one key parameter related to the static gain is made adjustable for each input-output pair. This way, the EKF feeds on the transient plant outputs and estimates, at the current time t, the corresponding steady-state outputs, which leads to the computation of the static gradient. The dynamic model at hand can be seen as a surrogate model that, although not sufficiently accurate globally for process optimization, can process measurements to generate an estimate of the local gradients. The approach will be illustrated on various numerical examples and then applied to the optimization of a continuous stirred-tank reactor.
References:
[1] G. François and D. Bonvin, Measurement-Based Real-Time Optimization of Chemical Processes, In S. Pushpavanam, editor, Advances in Chemical Engineering, Vol. 43, 1-50, Academic Press (2013).
[2] T. E. Marlin and A. N. Hrymak, Real-Time Operations Optimization of Continuous Processes, In AIChE Symposium Series - CPC-V, Vol. 93, 156-164 (1997).
[3] A. Marchetti, B. Chachuat and D. Bonvin, Modifier-Adaptation Methodology for Real-Time Optimization, Industrial & Engineering Chemistry Research, 48(13), 6022-6033 (2009).
[4] K. Ariyur and M. Krstic, Real-Time Optimization by Extremum-Seeking Control, John Wiley, New York (2003).
[5] S. Skogestad, Plantwide Control: The Search for the Self-Optimizing Control Structure, J. Process Control, 10, 487-507 (2000).
[6] G. François, B. Srinivasan and D. Bonvin, Use of Measurements for Enforcing the Necessary Conditions of Optimality in the Presence of Constraints and Uncertainty, J. Process Control, 15, 701-712 (2005).
[7] G. François, B. Srinivasan and D. Bonvin, Comparison of Six Implicit Real-Time Optimization Schemes, J. Européen des Systèmes Automatisés, 46, 291-305 (2012).
[8] G. François and D. Bonvin, Use of Transient Measurements for the Optimization of Steady-State Performance via Modifier Adaptation, Industrial & Engineering Chemistry Research, 53(13), 5148-5159 (2014).
[9] A. H. Jazwinski, Stochastic Processes and Filtering, Mathematics in Science and Engineering, Academic Press (1970).
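A much-simplified illustration of the steady-state output estimation step is sketched below: a first-order surrogate model with an adjustable static gain is run in an extended Kalman filter, and the estimated gain predicts the steady-state output before the plant has settled. The plant, surrogate model and tuning values are invented and do not correspond to the case studies of the contribution.

```python
import numpy as np

# EKF-based estimation of a steady-state output from transient data (illustration only).
rng = np.random.default_rng(2)
dt, tau_model, u = 0.1, 5.0, 1.0
tau_plant, K_plant = 8.0, 2.0                       # "true" plant, unknown to the estimator

x = np.array([0.0, 1.0])                            # EKF state: [output y, static gain K]
P = np.diag([1.0, 1.0])
Q = np.diag([1e-4, 1e-4])                           # process noise (lets K adapt)
Rm = np.array([[1e-2]])                             # measurement noise variance

y_plant = 0.0
for k in range(200):                                # 20 s of transient data
    # Plant step (first-order response) and noisy measurement.
    y_plant += dt / tau_plant * (K_plant * u - y_plant)
    y_meas = y_plant + 0.01 * rng.normal()

    # EKF prediction with the surrogate model dy/dt = (K*u - y)/tau_model.
    F = np.array([[1 - dt / tau_model, dt / tau_model * u], [0.0, 1.0]])
    x = np.array([x[0] + dt / tau_model * (x[1] * u - x[0]), x[1]])
    P = F @ P @ F.T + Q

    # EKF measurement update on the output.
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + Rm
    Kg = P @ H.T @ np.linalg.inv(S)
    x = x + (Kg @ (np.array([[y_meas]]) - H @ x.reshape(-1, 1))).ravel()
    P = (np.eye(2) - Kg @ H) @ P

print("estimated steady-state output:", round(x[1] * u, 2), " (plant steady state = 2.0)")
```

The estimated steady-state output is already close to the plant value while the measured output is still in transit, which is what makes the single-iteration use of MA conceivable.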
2015. 2015 AIChE Annual Meeting, Salt Lake City, UT, USA, November 8-13, 2015.* Data Reconciliation in Open Reaction Systems using the Concept of Extents
Kinetic models of chemical reaction systems are typically represented in terms of state variables, such as concentrations, temperature and partial pressures [1]. These state variables in turn depend on the underlying reactions, transfer phenomena, and transport due to the inlet and outlet flows. Kinetic models are derived from first principles – material and energy balances – and are expressed as a set of differential and algebraic equations (DAE), with the differential equations describing the evolution over time and the algebraic equations describing relationships that have to be satisfied at each time instant [2]. The identification of kinetic models along with the estimation of their parameters is carried out using measurements obtained from experiments performed under well-chosen and often ideal experimental conditions [3]. Kinetic models can then be adapted to non-ideal process operations using real-time measurements for the purpose of monitoring, control and optimization [4-5].
Measurements are inherently corrupted with noise. The quality of measurements made during the identification experiment affects kinetic identification, while the quality of measurements made during process operation affects its efficiency. Data reconciliation techniques rely on balance equations (here algebraic constraints) to improve the accuracy of these measurements, with more relationships leading to better reconciliation [6]. Hence, data reconciliation can be formulated as a constrained optimization problem in terms of measured and reconciled variables. However, since these variables are typically involved in more than one rate process, it is difficult to add additional shape constraints involving, for example, monotonicity.
Several alternative representations of reaction systems in terms of variants and invariants have been proposed in the literature. Asbjørnsen and co-workers [7], for example, introduced a two-way decomposition into reaction variants and reaction invariants. Unfortunately, the transformed variables are also flow variant. Srinivasan et al. [8] introduced a nonlinear decomposition into reaction variants, flow variants, and reaction and flow invariants. Note that all these representations use abstract variables that do not carry any physical meaning. Recently, a representation of reaction systems using a linear transformation has been proposed. The transformed states, called vessel extents, have a clear physical meaning, and, in addition, each vessel extent is associated with a single rate process [9].
Srinivasan et al. [10] have recently shown that the transformation to vessel extents allows a general formulation of process constraints under all common operating conditions, namely, batch, semi-batch and continuous mode. In addition, the extents are monotonically increasing in the absence of an outlet stream (batch and semi-batch mode). The same authors have also shown that the addition of monotonicity constraints to the data-reconciliation problem improves the accuracy of the reconciled estimates. Unfortunately, the monotonicity of extents cannot be guaranteed in the presence of an outlet stream.
In this contribution, piecewise monotonicity constraints on extents are applied for the purpose of data reconciliation in the presence of an outlet stream, that is, in continuous operation mode. A general procedure is presented to identify regions where these state variables are monotonically increasing or decreasing. The strength of these piecewise shape constraints for the task of data reconciliation will be illustrated via simulated examples.
[1] O. Levenspiel, Chemical Reaction Engineering, Wiley, 1972.
[2] C.G. Hill & W.R. Thatcher, Introduction to Chemical Engineering Kinetics and Reactor Design, Wiley, 2014.
[3] Bardow et al., Chem. Eng. Sci. 59, 2673-2684, 2004.
[4] Marchetti et al., Ind. Eng. Chem. Res., 48(13), 6022-6033, 2009.
[5] Srinivasan et al., Conference CAC 2014, Richmond (USA), 2014.
[6] S. Narasimhan and C. Jordache, Data Reconciliation and Gross Error Detection, Elsevier, 1999.
[7] Asbjørnsen et al., Chem. Eng. Sci., 27, 709-717, 1972.
[8] Srinivasan et al., AIChE J., 44(8), 1858-1867, 1998.
[9] Rodrigues et al., Comp. and Chem. Eng., 73, 23-33, 2015.
[10] Srinivasan et al., ESCAPE 25 / PSE 2015, Copenhagen (Denmark), 2015.
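The effect of shape constraints can be illustrated with a minimal reconciliation of a single noisy extent profile under nonnegativity and monotonicity constraints, solved here with cvxpy. The data are synthetic, and the identification of piecewise-monotone regions discussed in the contribution is skipped.

```python
import numpy as np
import cvxpy as cp

# Minimal data-reconciliation sketch: constrained least squares on a noisy extent profile.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 60)
x_true = 2.0 * (1.0 - np.exp(-0.4 * t))          # true (monotonically increasing) extent [mol]
x_meas = x_true + 0.1 * rng.normal(size=t.size)  # noisy "measured" extent

x = cp.Variable(t.size)
constraints = [x >= 0, x[1:] >= x[:-1]]          # nonnegativity and monotonic increase
cp.Problem(cp.Minimize(cp.sum_squares(x - x_meas)), constraints).solve()

err_meas = np.linalg.norm(x_meas - x_true)
err_rec = np.linalg.norm(np.asarray(x.value) - x_true)
print(f"error before reconciliation: {err_meas:.3f}, after: {err_rec:.3f}")
```

The reconciled profile typically has a noticeably smaller error than the raw measurements, which is the improvement the monotonicity constraint is meant to deliver.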
2015. 107th Annual Meeting of the American Institute of Chemical Engineers (AIChE), Salt Lake City (USA), November 8-13, 2015.* On Handling Cost Gradient Uncertainty in Real-Time Optimization
This paper deals with the real-time optimization of uncertain plants and proposes an approach based on surrogate models to reach the plant optimum when the plant cost gradient is imperfectly known. It is shown that, for processes with only box constraints, the optimum is reached upon convergence if the multiplicative gradient uncertainty lies within some bounded interval. For the case of general constraints, conditions are derived that guarantee plant feasibility and, in principle, allow enforcing cost decrease at each iteration.
2015. 9th International Symposium on Advanced Control of Chemical Processes (ADCHEM), Whistler, BC, Canada, June 7-10, 2015. p. 176-181. DOI : 10.1016/j.ifacol.2015.08.177.* Incremental Model Identification in Distributed Two-phase Reaction Systems
Transformation to variant and invariant states, called extents, is used to decouple the dynamic effects of reaction systems and serves as the basis for incremental model identification, in which kinetic models are identified individually for each dynamic effect. This contribution introduces a novel transformation to extents for the incremental model identification of two-phase distributed reaction systems. Distributed reaction systems are discussed for two cases, namely, when measurements along the spatial coordinate are available and when they are not. In the second case, several measurements made under appropriate operating conditions are combined to overcome the lack of measurements along the spatial coordinate. This novel method is illustrated via the simulated example of a two-phase tubular reactor.
2015. 9th International Symposium on Advanced Control of Chemical Processes (ADCHEM), Whistler, BC (Canada), June 7-10, 2015. p. 266-271. DOI : 10.1016/j.ifacol.2015.08.192.* On the Design of Economic NMPC Based on an Exact Turnpike Property
We discuss the design of sampled-data economic nonlinear model predictive control schemes for continuous-time systems. We present novel sufficient convergence conditions that do not require any kind of terminal constraints nor terminal penalties. Instead, the proposed convergence conditions are based on an exact turnpike property of the underlying optimal control problem. We prove that, in the presence of state constraints, the existence of an exact turnpike implies recursive feasibility of the optimization. We draw upon the example of optimal fish harvest to illustrate our findings.
2015. 9th International Symposium on Advanced Control of Chemical Processes (ADCHEM), Whistler, BC, Canada, June 7-10, 2015. p. 525-530. DOI : 10.1016/j.ifacol.2015.09.021.* Directional Real-Time Optimization Applied to a Kite-Control Simulation Benchmark
This paper applies a novel two-layer optimizing control scheme to a kite-control benchmark problem. The upper layer is a recent real-time optimization algorithm, called Directional Modifier Adaptation, which represents a variation of the popular Modifier Adaptation algorithm. The lower layer consists of a path-following controller that can follow arbitrary paths. Application to a challenging benchmark scenario in simulation shows that this two-layer scheme is capable of substantially improving the performance of a complex system affected by significant stochastic disturbances, measurement noise and plant-model mismatch, while respecting operational constraints.
2015. European Control Conference, Linz, Austria, July 15-17, 2015. p. 1588-1595. DOI : 10.1109/ECC.2015.7330765.* Data Reconciliation in Reaction Systems using the Concept of Extents
Abstract of the conference paper
Concentrations measured during the course of a chemical reaction are corrupted with noise, which reduces the quality of information. Since these measurements are used for identifying kinetic models, the noise impairs the ability to identify accurate models. The noise in concentration measurements can be reduced using data reconciliation, exploiting for example the material balances derived from stoichiometry as constraints. However, additional constraints can be obtained via the transformation of concentrations into extents and invariants, which leads to more efficient identification of kinetic models for multiple reaction systems. This paper uses the transformation to extents and invariants and formulates the data reconciliation problem accordingly. This formulation has the advantage that non-negativity and monotonicity constraints can be imposed on selected extents. A simulated example is used to demonstrate that reconciled measurements lead to the identification of more accurate kinetic models.
Extended abstract
Reliable kinetic models of chemical reaction systems should include information on all rate processes of significance in the system. Apart from chemical reactions, such models should also describe the mass exchanged with the environment via the inlet and outlet streams and the mass transferred between phases. Model identification and the estimation of rate parameters are carried out using measurements that are obtained during the course of the reaction [1]. Model identification often leads to the combinatorial complexity of simultaneously identifying all rate processes [1]. Alternatively, it can be carried out incrementally by transforming the concentrations to extents and identifying each extent separately [2].
Since measurements are inevitably corrupted by random measurement errors, the identification of kinetic models and the estimation of rate parameters are affected by error propagation [3]. Data reconciliation is a technique that uses constraints to obtain more accurate estimates of variables by reducing the effect of measurement errors [4]. Data reconciliation can be formulated as an optimization problem constrained by the law of conservation of mass [5, 6] and the positivity of reconciled concentrations. Consequently, model identification can be performed with reconciled concentrations. This paper presents a reformulation of the original reconciliation problem directly in terms of extents. This allows using additional constraints such as the monotonicity of extents. Such a reformulation improves the accuracy of the reconciled extents and hence of the concentrations, and leads to better model discrimination and parameter estimation. The advantages derived from the use of reconciled extents are illustrated using a simulated example.
References:
[1] Bardow et al., Chem. Eng. Sci., 2004, 59, 2673-2684
[2] Bhatt et al., AIChE J., 2010, 56, 2873-2886
[3] Billeter et al., Chem. Intell. Lab. Syst., 2008, 93, 120-131
[4] S. Narasimhan and C. Jordache, Data Reconciliation and Gross Error Detection, Elsevier, 1999
[5] Reklaitis et al., Chem. Eng. Sci., 1975, 30, 243-247
[6] Srinivasan et al., IFAC Workshop on Thermodynamic Foundations of Mathematical Systems Theory, Lyon, 2013.
2015. 25th European Symposium on Computer Aided Process Engineering (ESCAPE) - PSE 2015, Copenhagen (Denmark), May 31 - June 4, 2015. p. 419-424. DOI : 10.1016/B978-0-444-63578-5.50065-7.* Control of Reaction Systems via Rate Estimation and Feedback Linearization
Abstract of the conference paper
The kinetic identification of chemical reaction systems often represents a time-consuming and complex task. This contribution presents an approach that uses rate estimation and feedback linearization to implement effective control without a kinetic model. The reaction rates are estimated by numerical differentiation of reaction variants. The approach is illustrated in simulation through the temperature control of a continuous stirred-tank reactor.
Extended abstract
Model identification and controller design are often seen as closely related tasks, since the control law is computed from the plant model. Previous control approaches based on extensive variables or inventories are examples of this strong dependence on the model [1, 2]. Since the identification of chemical reaction systems can be a time-consuming and complex task, one would ideally like to avoid it as much as possible. The concept of variant and invariant states allows isolating the different rates in chemical reaction systems, thereby facilitating analysis, monitoring and control [3-5]. Using this concept, one can estimate dynamic effects without the need to identify the corresponding kinetic models.
This contribution presents a feedback-linearization approach that is based on the estimation of unknown rates, such as the rates of reaction and mass transfer, thus allowing efficient control without the use of kinetic models.
Rate estimation uses the numerical differentiation of appropriately transformed extensive variables called rate variants, which are invariant with respect to the manipulated variables. A rate variant contains all the information about the corresponding rate and, as such, is decoupled from the other unknown rates. Since the unknown rates can be estimated this way, the controller does not require kinetic information. However, because of the differentiation step, the controller is most effective with frequent and precise measurements of several output variables.
Feedback linearization prescribes a rate of variation for the controlled variables, thereby guaranteeing quick convergence of these variables to their set points. For open chemical reactors, the parameters of the feedback-linearization controller are determined by readily available information, such as the reaction stoichiometry, the heats of reaction, the inlet composition or the inlet and outlet flow rates. This novel control strategy is illustrated in simulation for the control of both concentration and temperature in a continuous stirred-tank reactor.
[1] Georgakis, Chem. Eng. Sci., 1986, 41, 1471
[2] Farschman et al., AIChE J., 1998, 44, 1841
[3] Asbjørnsen and Fjeld, Chem. Eng. Sci., 1970, 25, 1627
[4] Bhatt et al., Ind. Eng. Chem. Res., 2011, 50, 12960
[5] Srinivasan et al., IFAC Workshop on Thermodynamic Foundations of Mathematical Systems Theory, Lyon, 2013
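A minimal sketch of the underlying idea (scalar controlled variable, known input gain, made-up unmodeled rate; none of these values are taken from the paper): the unknown rate is estimated by numerical differentiation of a rate variant and then cancelled by feedback linearization.

```python
# Hypothetical sketch: estimate an unknown rate term by numerical differentiation
# of a measured rate variant, then cancel it by feedback linearization so that the
# controlled variable converges to its setpoint at a chosen rate.
import numpy as np

k_fb = 2.0      # desired closed-loop convergence rate (assumed)
b    = 1.0      # known input gain (assumed)
dt   = 0.1      # sampling period (assumed)

def feedback_linearizing_input(y, y_sp, variant_now, variant_prev):
    # Finite-difference estimate of the unknown rate from the rate variant,
    # which is assumed to be decoupled from the manipulated input
    rate_hat = (variant_now - variant_prev) / dt
    # u is chosen so that dy/dt is approximately -k_fb * (y - y_sp)
    return (-k_fb * (y - y_sp) - rate_hat) / b

# Toy closed-loop simulation with a made-up unknown rate term
y, y_sp, variant_prev = 0.0, 1.0, 0.0
for step in range(100):
    unknown_rate = 0.3 * np.sin(0.05 * step)        # unmodeled kinetics (hypothetical)
    variant_now = variant_prev + unknown_rate * dt  # the variant integrates the unknown rate
    u = feedback_linearizing_input(y, y_sp, variant_now, variant_prev)
    y += (unknown_rate + b * u) * dt                # plant: dy/dt = rate + b*u
    variant_prev = variant_now
print("final tracking error:", abs(y - y_sp))
```

In the papers, the rate variants are constructed from the balance equations so that they are independent of the manipulated variables; the toy variant above simply integrates the unknown rate to mimic that property.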
2015. 25th European Symposium on Computer Aided Process Engineering (ESCAPE) - PSE 2015, Copenhagen (Denmark), May 31 - June 4, 2015. p. 137-142. DOI : 10.1016/B978-0-444-63578-5.50018-9.Theses
* Towards Seamless Continuation of Knowledge in Product Lifecycle Management
The current ICT landscape for manufacturing is characterized by scattered data formats, tools and processes dedicated to different phases of the product lifecycle. Due to this diversity of tools and data formats, manufacturing struggles to cope with new trends in this area. What is clearly missing is an integrated, holistic view of data about the products to be manufactured, the resources (including human resources) and the processes across the full product lifecycle. There is a need to formalize knowledge for automatic integration in existing engineering tools. In this work, ontology-based semantics were exploited to develop machine-understandable models that capture the meaning of the data and information they contain. The system, developed using semantic web methods and tools, is designed to support interoperability and data integration, and to pave the way towards using the information contained in the system to extract useful knowledge. This knowledge may provide feedback for improving the design of future generations of products. Thus, the concept of seamless continuation of knowledge along product-system lifecycles is introduced.
The first step of this work was to study the state of the art in PLM and Semantic Web technologies. This included the theoretical background of concepts such as Beginning of Life (BOL), Middle of Life (MOL) and End of Life (EOL) for PLM, as well as the definition and theoretical background of ontologies and Semantic Web technologies. After this study, more advanced research concepts and methodologies for both domains were thoroughly analysed, such as Closed-Loop Lifecycle Management (CL2M) for PLM and the NeOn methodology for ontology engineering. The next step was to identify relevant standards, such as STEP for PLM and the W3C standards for ontologies and Semantic Web technologies. The engineering aspect of this study also required a review of the available tools for both PLM and ontology engineering. Ontology development tools such as Protégé, TopBraid Composer and the Anzo ontology editor were studied, as were more advanced software solutions for Semantic Web applications, such as Anzo Enterprise and the Open Semantic Framework. Inspiration also came from previous European projects such as EC FP6 Promise.
The problem of discontinuation, even after an ontology model has been created, is evident in most of the applications studied as background for this work. Ontology models have to be exploitable, which requires a framework of which they can be part. A search for successful implementations of ontology-driven frameworks was therefore imperative. At first, the Open Semantic Framework (OSF) was the software stack that came into focus. After being studied thoroughly, it proved difficult to deploy and maintain, as it still had many problematic areas in its configuration and required people with very advanced software skills (in both programming and system configuration). Nevertheless, its examples of use are promising and suggest many ideas for exploitation. This finding triggered the efforts towards the definition and application of the solution proposed in this work, which has the following characteristics. First and most important, the solution is easy to deploy and maintain, since it targets SMEs and large enterprises that want to test the capabilities of the proposed[...]
Lausanne, EPFL, 2015. DOI : 10.5075/epfl-thesis-6742.* Real-Time Optimization via Directional Modifier Adaptation, with Application to Kite Control
The steady advance of computational methods makes model-based optimization an increasingly attractive method for process improvement. Unfortunately, the available models are often inaccurate. The traditional remedy is to update the model parameters, but this generally leads to a difficult parameter estimation problem that must be solved on-line, and the resulting model may still poorly predict the process optimum. An iterative real-time optimization method called Modifier Adaptation overcomes these obstacles by directly incorporating plant measurements into the optimization framework, in the form of constraint values and plant-gradient estimates. Experimental gradient estimation is the main difficulty encountered when applying Modifier Adaptation. The experimental effort required to estimate plant gradients increases along with the number of plant inputs. This tends to make the method intractable for processes with many inputs. The main methodological contribution of this thesis is a new algorithm called ‘Directional’ Modifier Adaptation, which handles the gradient-estimation problem by estimating plant derivatives only in certain privileged directions. By spending less effort on gradient estimation, the algorithm can focus on optimizing the plant. A ‘Dual’ Directional Modifier Adaptation is proposed, which estimates these ‘directional’ derivatives using past operating points. This algorithm exhibits fast convergence to a neighborhood of the plant optimum, even for processes with many inputs. Modifier Adaptation also makes use of an approximate process model. Another difficulty which may be encountered is that this model’s inputs differ from those of the real process. The second methodological contribution is ‘Generalized’ Modifier Adaptation, a framework for dealing with the case where the model’s inputs differ from those of the plant. This approach circumvents remodeling the system. For example, Generalized Modifier Adaptation allows an open-loop process model to be used to optimize a closed-loop plant, without having to model the controller. The Dual Directional Modifier Adaptation method is applied to a purpose-built experimental kite system. Kites are currently being developed into a radical new renewable energy technology. Large-scale applications include pulling ships and generating electricity from wind at altitudes beyond the reach of conventional wind turbines. While kites were traditionally manually controlled, these new applications require autonomous operation. The first challenge is to design reliable control algorithms for kites, capable of dealing with noise, wind disturbances, and time delays. The control algorithm keeps the kite flying a periodic path, at very high speeds. The second challenge is to choose this path in order to maximize the energy extracted from the wind. During this thesis, a small autonomous kite system was constructed. Thirty days of experimental testing were carried out, over the space of two years. A new modeling hypothesis was validated, linking steering deflections to a decrease in the kite’s lift/drag ratio. A path-following controller was implemented, capable of achieving good, robust path-following performance, despite significant time delay. The only real-time measurement required by the control algorithm is the kite’s position, which, in this work, was obtained simply by measuring the angle of the kite’s tether. A two-layer optimizing control scheme was implemented on the experimental kite system. 
Dual Directional Modifier Adaptation was used to periodically update the reference path tracked by the path-following controller, in order to maximize the kite’s average tether tension. Despite extremely high noise levels, the algorithm was able to locate the optimal reference path in only 10 minutes, while ensuring that a minimum altitude constraint was never violated. The resulting average tether tension is about 20% higher than that obtained following the optimal path computed using the model. An experimental study comparing the average tether tension obtained using different reference paths confirms the importance of path shape, and validates the optimal solution reached by the Dual Directional Modifier Adaptation algorithm.
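To illustrate the gradient-estimation idea at the core of Directional Modifier Adaptation, here is a hypothetical sketch in which plant derivatives are estimated by finite differences only along a few privileged directions; the toy plant cost, the directions U and the step size are assumptions, not the thesis implementation.

```python
# Hypothetical sketch: estimate plant derivatives only along a few privileged
# directions (columns of U), instead of the full gradient in all input directions.
import numpy as np

def directional_gradient(plant_cost, u0, U, delta=1e-2):
    """Finite-difference estimate of U^T * grad(plant_cost) at u0.

    plant_cost : callable returning the measured plant cost for an input vector
    u0         : current operating point, shape (n,)
    U          : (n, k) matrix whose k << n columns are the privileged directions
    """
    phi0 = plant_cost(u0)
    g_dir = np.empty(U.shape[1])
    for i in range(U.shape[1]):
        g_dir[i] = (plant_cost(u0 + delta * U[:, i]) - phi0) / delta
    return g_dir  # only k+1 plant evaluations instead of n+1

# Toy example: quadratic "plant" cost, 10 inputs, 2 privileged directions
plant = lambda u: float(u @ u + 0.5 * u[0])
u0 = np.ones(10)
U = np.eye(10)[:, :2]            # privileged directions chosen a priori (hypothetical)
print(directional_gradient(plant, u0, U))
```

In the thesis, the privileged directions are obtained from a sensitivity analysis of the model and the derivatives are estimated from past operating points rather than dedicated perturbations; this sketch only shows the reduction in experimental effort.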
Lausanne, EPFL, 2015. DOI : 10.5075/epfl-thesis-6571.Student Projects
* Control without Kinetic Models via Rate Estimation and Feedback Linearization
The concept of variant and invariant states has been developed to decouple the various dynamic effects in reaction systems. The aim of the project is to explore the application of this concept to the control of reaction systems. The concept of reaction variants has been exploited by Rodrigues, Billeter et al. (2015) to estimate the reaction rates directly from measurements, without the use of a kinetic model, for the temperature control of a continuous stirred-tank reactor.
A finer separation of the various dynamic effects in both homogeneous and heterogeneous open reaction systems has been proposed by Amrhein et al. (2010) and Bhatt et al. (2010) and was recently reformulated as a linear transformation of the numbers of moles to vessel extents by Rodrigues, Srinivasan et al. (2015). This approach was used to implement a cascade control scheme for the temperature control of a CSTR.
The project focuses on validating and building on the work done on this topic. The main objectives of this project are:
- to compare and analyze the advantages and disadvantages of two different approaches for temperature control in open reactors, namely via rate estimation and feedback linearization or via cascade control using the concept of extents;
- to extend the control via rate estimation and feedback linearization to the control of reactant concentrations in open reactors;
- to extend the control via rate estimation and feedback linearization to control in the presence of actuator dynamics.
2015* Contrôle de la température et de la pression dans les machines à café professionnelles
This work represents progress towards a solution to the problem of controlling temperature and pressure in a professional coffee machine. The analyses carried out showed that, when a large variation of the water flow rate occurs, temperature control becomes particularly difficult.
To improve temperature control, a feed-forward controller for disturbance compensation was developed. The implementation of this controller led to a noticeable improvement of the control performance and highlighted control limitations related to the instrumentation.
The analyses also show that, in the setup used, the temperature measurement exhibits a significant delay. To reduce the degradation of control performance caused by this delay, a Smith predictor was developed. Based on a model of the temperature behavior of the setup, it improves the control performance.
These two solutions are particularly effective at high flow rates (>0.1 l/min). However, control at low flow rates remains a critical point.
The temperature control performance is not yet fully satisfactory. Nevertheless, the solutions developed show a significant improvement over the control solutions adopted previously.
Pressure control is less critical. Although complex control solutions (feed-forward disturbance compensation, cascade control) were developed, a simple feedback control solution appears to be able to achieve the stated objectives.
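As a purely illustrative sketch of the delay-compensation mechanism mentioned above (discrete-time first-order model, made-up parameters, PI controller; not the controller implemented in the project), a Smith predictor can be written as follows.

```python
# Hypothetical sketch of a discrete-time Smith predictor: a PI controller acts on a
# delay-free model prediction, corrected by the mismatch between the delayed model
# output and the delayed measurement.
from collections import deque

# Made-up first-order model: y[k+1] = a*y[k] + b*u[k], measured with d samples of delay
a, b, d = 0.95, 0.05, 10
kp, ki, dt = 2.0, 0.5, 0.1

y_model = 0.0                          # delay-free internal model state
delayed = deque([0.0] * d, maxlen=d)   # delay line storing past model outputs
integ = 0.0                            # integrator state of the PI controller

def smith_pi(y_meas, setpoint):
    """One controller update given the delayed plant measurement."""
    global y_model, integ
    y_model_delayed = delayed[0]
    # Feedback signal: delay-free prediction + (measurement - delayed prediction)
    y_fb = y_model + (y_meas - y_model_delayed)
    e = setpoint - y_fb
    integ += ki * e * dt
    u = kp * e + integ
    # Propagate the internal model and its delay line
    delayed.append(y_model)
    y_model = a * y_model + b * u
    return u

# Toy closed-loop test: the "plant" behaves like the model plus the same delay
plant_state, plant_delay = 0.0, deque([0.0] * d, maxlen=d)
for k in range(300):
    u = smith_pi(plant_delay[0], setpoint=1.0)
    plant_delay.append(plant_state)
    plant_state = a * plant_state + b * u
print("final (delayed) measurement:", plant_delay[0])
```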
2015
2014
Journal Articles
* Equivalence between Neighboring-Extremal Control and Self-Optimizing Control for the Steady-State Optimization of Dynamical Systems
The problem of steering a dynamical system toward optimal steady-state performance is considered. For this purpose, a static optimization problem can be formulated and solved. However, because of uncertainty, the optimal steady-state inputs can rarely be applied directly in an open-loop manner. Instead, plant measurements are typically used to help reach the plant optimum. This paper investigates the use of optimizing control techniques for input adaptation. Two apparently different techniques of enforcing steady-state optimality are discussed, namely, neighboring-extremal control and self-optimizing control based on the null-space method. These two techniques are compared for the case of unconstrained real-time optimization in the presence of parametric variations. It is shown that, in the noise-free scenario, the two methods can be made equivalent through appropriate tuning. Note that both approaches can use measurements that are taken either at successive steady-state operating points or during the transient behavior of the plant. Implementation of optimizing control is illustrated through a simulated CSTR example.
Industrial and Engineering Chemistry Research. 2014. DOI : 10.1021/ie402864h.* Use of Transient Measurements for the Optimization of Steady-State Performance via Modifier Adaptation
Real-time optimization (RTO) methods use measurements to offset the effect of uncertainty and drive the plant to optimality. RTO schemes differ in the way measurements are incorporated in the optimization framework. Explicit RTO schemes solve a static optimization problem repeatedly, with each iteration requiring transient operation of the plant to steady state. In contrast, implicit RTO methods use transient measurements to bring the plant to steady-state optimality in a single iteration, provided the set of active constraints is known. This paper considers the explicit RTO scheme "modifier adaptation" (MA) and proposes a framework that allows using transient measurements for the purpose of steady-state optimization. It is shown that convergence to the plant optimum can be achieved in a single transient operation provided the plant gradients can be estimated accurately. The approach is illustrated through the simulated example of a continuous stirred-tank reactor. The time needed for convergence is of the order of the plant settling time, while more than five iterations to steady state are required with conventional static MA. In other words, MA using transient information is able to compete in performance with RTO schemes based on gradient control, with the additional ability to handle plant constraints.
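For readers unfamiliar with modifier adaptation, the following toy sketch shows one MA loop on an unconstrained two-input problem: a first-order modifier built from the mismatch between plant and model gradients is filtered and added to the model cost before re-optimization. The cost functions, filter gain and finite-difference gradients are assumptions, not the paper's case study.

```python
# Hypothetical sketch of modifier adaptation (MA) on a toy unconstrained problem:
# the model cost is corrected with a filtered first-order modifier built from the
# mismatch between plant and model gradients, then re-optimized.
import numpy as np
from scipy.optimize import minimize

phi_model = lambda u: (u[0] - 1.0) ** 2 + (u[1] - 1.0) ** 2        # approximate model
phi_plant = lambda u: (u[0] - 1.5) ** 2 + 2.0 * (u[1] - 0.5) ** 2  # "true" plant (unknown to the model)

def grad(f, u, h=1e-5):
    # Central finite differences; for the plant this stands in for experimental gradient estimates
    return np.array([(f(u + h * e) - f(u - h * e)) / (2 * h) for e in np.eye(2)])

u, Lam, K = np.array([0.0, 0.0]), np.zeros(2), 0.5    # inputs, modifier, filter gain (assumed)
for _ in range(30):
    lam_k = grad(phi_plant, u) - grad(phi_model, u)   # gradient mismatch at the current point
    Lam = (1 - K) * Lam + K * lam_k                   # exponential filtering of the modifier
    phi_mod = lambda v, L=Lam, u_k=u: phi_model(v) + L @ (v - u_k)
    u = minimize(phi_mod, u).x                        # optimize the modified model
print("converged inputs:", u, "(plant optimum is [1.5, 0.5])")
```

The paper's contribution is to feed such a scheme with transient rather than steady-state measurements; the sketch above only illustrates the static MA update it builds upon.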
Industrial and Engineering Chemistry Research. 2014. DOI : 10.1021/ie401392s.* A Betz-Inspired Principle for Kite-Power Generation Using Tethered Wings
This paper's main contribution is a theoretical result that can be used to evaluate the maximum power-generating potential of any kite-power system. An upper bound is derived for the power a wing can generate at a given wind speed. It is proven that the angle of the restraining forces on the system modulates this upper bound. In order to derive practically useful results, this is linked to the strength-to-weight ratio of the different system components through an efficiency factor. The result is a simple analytic expression that can be used to calculate the maximum power-producing potential for any system of lifting surfaces, dynamic or static, supported by a tether. As an example, the analysis is applied to two systems currently under development, namely, pumping-cycle generators and jet-stream wind power.
Renewable Energy. 2014.Conference Papers
* Zur Analyse von Turnpike- und Dissipativitätseigenschaften in Problemen der Optimalsteuerung und der modellprädiktiven Regelung
2014. Tagung des GMA-Fachausschuss 1.40: Theoretische Verfahren der Regelungstechnik, Anif, Austria, September 21-26, 2014.* Real-Time Optimization: Optimizing the Operation of Energy Systems in the Presence of Uncertainty and Disturbances
In practice, the quest for optimal operation of energy systems is complicated by the simultaneous presence of operating constraints, among which the need to produce the power required by the user, and of uncertainty. The latter incorporates potential inaccuracies of the models at hand, but also degradation effects and unexpected changes, such as random load changes or variations in the availability of the energy source for renewable energy systems. Since these changes affect the optimal values of the operating conditions, online adaptation is required to ensure that the system is always operated optimally, which typically implies solving an optimization problem online. Unfortunately, the applicability and performance of most model-based optimization methods rely on the quality of the available model of the system under investigation. In contrast, Real-Time Optimization (RTO) methods incorporate the available online measurements in the optimization framework and are thus capable of providing the desired self-optimizing control action. In this article, we show the benefits of applying several RTO methods (co-)developed by the authors to energy systems through the successful application of (i) "Real-Time Optimization via Modifier Adaptation" to an experimental Solid Oxide Fuel Cell (SOFC) stack, (ii) the recently released "SCFO solver" to an industrial SOFC stack, and (iii) dynamic RTO to a simulated tethered kite for renewable power production. It is shown how such problems can be formulated and solved, and significant performance improvements are illustrated for the three aforementioned energy systems.
2014. 13th International Conference on Sustainable Energy Technologies, Geneva, August 25-28, 2014. p. E40137.* Extent-based Model Identification of Surface Catalytic Reaction Systems
Identification of kinetic models and estimation of reaction and mass-transfer parameters are important tasks for the monitoring, control and optimization of industrial processes. A methodology called Extent-based Model Identification has been developed to separate the effects of reactions, mass transfers, and inlet and outlet flows for homogeneous and gas-liquid reaction systems. The decoupled effects, called extents, are used to decompose the model identification task incrementally into sub-problems of lower complexity, in which measured data are first transformed into extents and these extents are then modeled individually [1-3].
For the analysis of surface catalytic reaction systems, it is important to separate the coupled effects of transport phenomena and reactions. Therefore, the methodology of Extent-based Model Identification has been extended to gas-solid and gas-liquid-solid systems involving catalytic processes at the surface of a solid catalyst, described by Langmuir-Hinshelwood-type kinetic models.
From measurements in the fluid and solid phases, the extent of each individual dynamic process is computed. A model is postulated for that process, and the corresponding extent is simulated and compared with the computed extent. This procedure allows performing model identification and parameter estimation individually for each phenomenon and species (diffusion of substrates and products, adsorption of substrates, desorption of products and solid-phase reactions).
[1] Bhatt et al., Ind. Eng. Chem. Res., 2011, 50, 12960-12974
[2] Srinivasan et al., Chem. Eng. J., 2012, 208, 785-793
[3] Billeter et al., Anal. Chim. Acta, 2013, 767, 21-34
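A hypothetical sketch of the incremental step described above: a candidate rate law is integrated to predict a single extent, and its parameter is fitted to the "measured" extent of that process alone. The first-order rate law, synthetic data and noise level are assumptions for illustration only.

```python
# Hypothetical sketch of extent-based (incremental) identification: a candidate
# rate law is integrated to predict one extent, and its parameter is estimated by
# comparing the simulated and "measured" extents for that single rate process.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_meas = np.linspace(0.0, 5.0, 20)
k_true = 0.8
rng = np.random.default_rng(1)
# Synthetic "measured" extent of a first-order batch reaction, with noise
x_meas = (1.0 - np.exp(-k_true * t_meas)) + 0.01 * rng.standard_normal(t_meas.size)

def simulate_extent(k):
    # Candidate rate law for this extent: dx/dt = k * (1 - x), x(0) = 0
    sol = solve_ivp(lambda t, x: [k * (1.0 - x[0])], (0.0, t_meas[-1]), [0.0],
                    t_eval=t_meas, rtol=1e-8)
    return sol.y[0]

res = least_squares(lambda p: simulate_extent(p[0]) - x_meas, x0=[0.1],
                    bounds=(0.0, np.inf))
print("estimated rate constant:", res.x[0], "(true value 0.8)")
```

Each rate process (diffusion, adsorption, desorption, surface reaction) would be fitted in this one-at-a-time fashion to its own computed extent.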
2014. Annual Meeting of the Swiss Chemical Society (SCS), Zurich (Switzerland), September 11, 2014. p. 548 (CE150).* On the Use of Extents for Process Monitoring and Fault Diagnosis
Process monitoring and fault diagnosis are broadly used to control quality and enforce safety compliance in industrial processes. Processes are commonly monitored by online spectroscopy, with either PCA or calibration techniques such as PLS being used to predict abstract or physical process variables. By comparing these variables to historical data measured under normal operating conditions, possible faults are detected based on deviations from statistical thresholds [1]. One then tries to identify the causes of these faults, identification being easier when monitoring involves a calibration that predicts physical process variables and when fault detection uses a model to relate controlled and manipulated variables [2].
Exploiting the structure of balance equations, a transformation can separate multivariate data into decoupled variant/invariant states, which can be investigated individually to identify rate laws and reconstruct unmeasured quantities [3]. A convenient linear transformation uses the generalized concept of extents [4, 5], which coincides with a time-invariant transformation used to model rank-deficient spectroscopic data [6]. This transformation requires only limited process information, namely, the reaction stoichiometry, the species transferring between phases, the composition of inlet flows and the initial conditions. Moreover, this transformation was adapted to handle calorimetric and spectroscopic data [7, 8].
This contribution addresses the applicability of the transformation to extents for process monitoring and fault diagnosis. By comparing the extents computed from measurements of the current batch with either their prediction or the extents computed from previous batches, significant deviations can point at possible faults and provide a systematic way of identifying their causes.
[1] Venkatasubramanian et al., Comp. Chem. Eng. 27 (2003) 327
[2] Venkatasubramanian et al., Comp. Chem. Eng. 27 (2003) 293
[3] Srinivasan et al., IFAC Proceedings Vol. 1 (2013) 102
[4] Amrhein et al., AIChE J. 56 (2010) 2873
[5] Bhatt et al., Ind. Eng. Chem. Res. 49 (2010) 7704
[6] Billeter et al., Chemom. Intell. Lab. Syst. 95 (2009) 170
[7] Srinivasan et al., Chem. Eng. J. 207-208 (2012) 785
[8] Billeter et al., Anal. Chim. Acta 767 (2013) 21
2014. 106th Annual Meeting of the American Institute of Chemical Engineers (AIChE), Atlanta (USA), November 16-21, 2014.* Invariant Relationships for Heterogeneous Chemical Reaction Systems in Open Reactors
Many chemical and biochemical industries utilize chemical reaction processes to obtain desired products from raw materials. Common examples include reaction systems used to manufacture pharmaceutical drugs, vaccines and other chemicals. A chemical reaction system is a complex combination of various rate processes. Apart from the chemical reactions, these systems may include (i) mass transfer between phases and (ii) heat transfer due to heating and cooling. Also, the reactor can be operated in a continuous mode, which adds mass transport due to the inlet and outlet streams. The measured state variables, namely concentrations, temperature and mass, are functions of the underlying reactions, the mass transfer between phases, and the mass transport due to the inlet and outlet streams. Since these variables are highly coupled and contain the effects of all the phenomena, the analysis of reactor performance based on these variables is highly complex.
A proper understanding of the reaction system is necessary for process design, control and optimization. The analysis of chemical reaction systems can be simplified if a transformation can be made from the measured state variables into alternative variables (named variants) that each describe the dynamic behavior of the reactions, mass and heat transfers, inlets and outlets. The transformed states also include variables that are invariant with respect to time and remain constant during the course of the reaction. A number of applications of reaction variants and invariants have been studied in the literature [1-7].
Srinivasan et al. [1] discussed possible applications of reaction and flow variants/invariants for control-related tasks such as model reduction, state accessibility, state reconstruction, and feedback linearizability. Reaction invariants have been used to study the state controllability and observability of continuous stirred-tank reactors [2]. Reaction invariants have also been used to automate the formulation of mole balance equations for the non-reacting part of complex processes (mixing and splitting operations), thereby determining the degrees of freedom for process synthesis [3]. Furthermore, Waller and Makila [4] demonstrated the use of reaction invariants to control pH, assuming that equilibrium reactions are very fast. Gruner et al. [5] showed that, through the use of reaction invariants, the dynamics of reaction-separation processes with fast (equilibrium) reactions resemble the dynamics of the corresponding non-reactive systems in a reduced set of transformed variables. Aggarwal et al. [6] considered multi-phase reactors operating at thermodynamic equilibrium and were able to use the concept of reaction invariants, which they called invariant inventories, to reduce the order of dynamic models and to design control strategies accordingly. Recently, Scott and Barton [7] used reaction-invariant relationships to compute bounds for kinetic models of a chemical reaction system operated under batch conditions. While the applications of reaction variants and invariants are clear, the challenge remains in finding these relationships in the presence of different physical phenomena.
Asbjørnsen and co-workers [8] introduced a methodology for computing reaction variants and invariants and used it for reactor modeling and control. However, for open reactors, the proposed reaction variants are not fully representative of the reactions since they are also affected by the inlet and outlet flows. Friedly [9] proposed to compute the variants of equivalent batch reactions, associating the remainder with transport processes, and used them to describe the dynamics of flow through porous media accompanied by chemical reactions. For open homogeneous reaction systems, Srinivasan et al. [1] developed a nonlinear transformation of the numbers of moles to reaction variants, flow variants, and reaction and flow invariants, thereby separating the effects of reactions and flows. Later, the same authors refined that transformation to make it linear [10] (at the price of losing the one-to-one property) and therefore more easily applicable. They also showed that, for a reactor with an outlet flow, the concept of vessel extent is most useful, as it represents the amount of material associated with a given process (reaction, exchange) that is still in the vessel. Bhatt et al. [11] extended that concept to heterogeneous gas-liquid reaction systems for the case of no reaction and no accumulation in the film, the result being decoupled vessel extents of reaction, mass transfer, inlet and outlet, as well as true invariants, i.e. quantities identically equal to zero. The use of this linear transformation for the task of kinetic identification has been clearly demonstrated [12], and the transformation has been extended to typical measured signals (calorimetry and spectroscopy) [13, 14].
Srinivasan et al. [15] have recently introduced a new transformation that gives the same variants and invariants as the one proposed in [10]. This new transformation, which is conceptually and computationally much simpler, also gives explicit relationships for the reaction invariants in terms of the measured state variables. These invariant relationships can now be used directly for process monitoring, control and optimization. The new transformation also allows computing the reaction invariants in the presence of additional measurements (e.g. flow rates) and can be extended to heterogeneous systems.
This contribution will first extend the computation of reaction invariants of homogeneous reaction systems to the case where additional measurements (flow rates) are available. The procedure for writing the invariant relationships will then be adapted to fluid-fluid reaction systems with (i) reactions in the bulk of both phases, (ii) unsteady-state mass transfer between phases, and (iii) mass transport due to inlet and outlet streams in both phases. The procedure for writing the reaction-invariant relationships will be demonstrated with suitable examples.
[1] Srinivasan et al., AIChE J. 44(8), 1858-1867, 1998
[2] Fjeld et al., Chem. Eng. Sci. 29, 1917-1926, 1974
[3] Gadewar et al., Ind. Eng. Chem. Res. 41, 3771-3783, 2002
[4] Waller et al., Ind. Eng. Chem. Process Des. Dev. 20, 1-11, 1981
[5] Gruner et al., AIChE J. 52, 1010-1026, 2006
[6] Aggarwal et al., J. Process Contr. 21, 1390-1406, 2011
[7] Scott et al., Comp. Chem. Eng. 34(5), 717-731, 2010
[8] Asbjørnsen et al., Chem. Eng. Sci. 27, 709-717, 1972
[9] Friedly, AIChE J. 37(5), 687-693, 1991
[10] Amrhein et al., AIChE J. 56, 2873, 2010
[11] Bhatt et al., Ind. Eng. Chem. Res. 49, 7704-7717, 2010
[12] Bhatt et al., Chem. Eng. Sci. 83, 24-38, 2012
[13] Srinivasan et al., Chem. Eng. J. 207-208, 785-793, 2012
[14] Billeter et al., Anal. Chim. Acta 767, 21-34, 2013
[15] Srinivasan et al., 1st IFAC Workshop on Thermodynamic Foundations of Mathematical Systems Theory (TFMST), Lyon (France), 2013
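As a small illustration of the kind of linear transformation discussed above, consider the simplified case of a reactor without outlet in which all species are measured; extents of reaction and inlet can then be recovered from the numbers of moles by a pseudo-inverse. The stoichiometry, inlet composition and initial conditions below are made up for illustration.

```python
# Hypothetical sketch: linear transformation from numbers of moles to extents for a
# reactor WITHOUT outlet, using n = n0 + N^T x_r + W_in x_in with all species measured.
import numpy as np

N = np.array([[-1.0, -1.0, 1.0]])        # stoichiometry (R x S), assumed reaction A + B -> C
W_in = np.array([[1.0], [0.0], [0.0]])   # inlet composition (S x p), assumed pure-A feed
n0 = np.array([1.0, 1.0, 0.0])           # initial numbers of moles (assumed)

A = np.hstack([N.T, W_in])               # maps [x_r; x_in] to n - n0
T = np.linalg.pinv(A)                    # left inverse (full column rank assumed)

def extents(n):
    """Return (extents of reaction, extents of inlet) for measured numbers of moles n."""
    x = T @ (np.asarray(n) - n0)
    return x[: N.shape[0]], x[N.shape[0]:]

# Consistent synthetic measurement: 0.3 mol reacted, 0.2 mol fed through the inlet
n_meas = n0 + N.T @ np.array([0.3]) + W_in @ np.array([0.2])
print(extents(n_meas))                   # approximately ([0.3], [0.2])
```

The transformations in [10, 11, 15] additionally handle the outlet stream and heterogeneous systems, which this simplified sketch deliberately omits.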
2014. 106th Annual Meeting of the American Institute of Chemical Engineers (AIChE), Atlanta (USA), November 16-21, 2014.* Turnpike and dissipativity properties in dynamic real-time optimization and economic MPC
We investigate the turnpike and dissipativity properties of continuous-time optimal control problems. These properties play a key role in the analysis and design of schemes for dynamic real-time optimization and economic model predictive control. We show in a continuous-time setting that dissipativity of a system with respect to a steady state implies the existence of a turnpike at this steady state and optimal stationary operation at this steady state. Furthermore, we investigate the converse statements: We show that the existence of a turnpike at a steady state implies (a) that this steady state is the optimal steady state; and (b) that over an infinite horizon the system is optimally operated at this steady state. We draw upon a numerical example to illustrate our findings.
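For reference, the dissipativity property invoked here can be written, with assumed notation (storage function S, stage cost \ell, steady state (x_s, u_s)), as follows; see the paper for the precise continuous-time definitions.

```latex
% Dissipativity with respect to the steady state (x_s, u_s): there exists a storage
% function S >= 0 such that, along all admissible trajectories and for t_0 <= t_1,
S\big(x(t_1)\big) - S\big(x(t_0)\big)
  \;\le\; \int_{t_0}^{t_1} \Big( \ell\big(x(t),u(t)\big) - \ell(x_s,u_s) \Big)\, dt .
% Strict dissipativity additionally subtracts \rho\big(\|x(t)-x_s\|\big) under the
% integral for some positive-definite function \rho.
```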
2014. 53rd IEEE Conference on Decision and Control, Los Angeles, California, USA, December 15-17, 2014. p. 2734-2739. DOI : 10.1109/CDC.2014.7039808.* On the use of extents for process monitoring and fault diagnosis
Process monitoring and fault diagnosis are broadly used to control quality and enforce safety compliance in industrial processes. Processes are commonly monitored by online spectroscopy, with either PCA or calibration techniques such as PLS being used to predict abstract or physical process variables. By comparing these variables to historical data measured under normal operating conditions, possible faults are detected based on deviations from statistical thresholds [1]. One then tries to identify the causes of these faults, identification being easier when monitoring involves a calibration that predicts physical process variables and when fault detection uses a model to relate controlled and manipulated variables [2]. <br><br> Exploiting the structure of balance equations, a transformation can separate multivariate data into decoupled variant/invariant states, which can be investigated individually to identify rate laws and reconstruct unmeasured quantities [3]. A convenient linear transformation uses the generalized concept of extents [4, 5], which coincides with a time-invariant transformation used to model rank-deficient spectroscopic data [6]. This transformation requires only limited process information, namely, the reaction stoichiometry, the species transferring between phases, the composition of inlet flows and the initial conditions. Moreover, this transformation was adapted to handle calorimetric and spectroscopic data [7, 8]. <br><br> This contribution addresses the applicability of the transformation to extents for process monitoring and fault diagnosis. By comparing the extents computed from measurements of the current batch with either their prediction or the extents computed from previous batches, significant deviations can point at possible faults and provide a systematic way of identifying their causes. <br><br> <b>References:</b><br> [1] Venkatasubramanian et al., Comp. Chem. Eng. 27 (2003) 327 <br> [2] Venkatasubramanian et al., Comp. Chem. Eng. 27 (2003) 293 <br> [3] Srinivasan et al., IFAC Proceedings Vol. 1 (2013) 102 <br> [4] Amrhein et al., AIChE J. 56 (2010) 2873 <br> [5] Bhatt et al., Ind. Eng. Chem. Res. 49 (2010) 7704 <br> [6] Billeter et al., Chemom. Intell. Lab. Syst. 95 (2009) 170 <br> [7] Srinivasan et al., Chem. Eng. J. 207-208 (2012) 785 <br> [8] Billeter et al., Anal. Chim. Acta 767 (2013) 21
2014. 14th Conference on Chemometrics in Analytical Chemistry (CAC), Richmond (USA), June 9-13, 2014.* Modifier Adaptation for Constrained Closed-Loop Systems
The steady advance of computational methods makes model-based optimization an increasingly attractive method for process improvement. Unfortunately, the available models are often inaccurate. An iterative optimization method called "modifier adaptation" overcomes this obstacle by incorporating process information into the optimization framework. This paper extends this technique to constrained optimization problems, where the plant consists of a closed-loop system but only a model of the open-loop system is available. The degrees of freedom of the closed-loop system are the setpoints provided to the controller, whereas the model degrees of freedom are the inputs of the open-loop plant. Using this open-loop model and process measurements, the proposed algorithm guarantees both optimality and constraint satisfaction for the closed-loop system upon convergence. A simulated CSTR example with constraints illustrates the method.
2014. The 19th World Congress of the International Federation of Automatic Control, Cape Town, South Africa, August 24-29, 2014. p. 11080-11086. DOI : 10.3182/20140824-6-ZA-1003.02453.* On the Use of Second-Order Modifiers for Real-Time Optimization
We consider the real-time optimization of static plants and propose a generalized version of the modifier-adaptation strategy that relies on second-order adaptation of the cost and constraint functions. We show that second-order adaptation allows checking whether a (local) plant optimum is reached upon convergence. A sufficient convergence condition that is applicable to first- and second-order modifier-adaptation schemes is proposed. We also discuss how second-order updates can lead to SQP-like model-free RTO schemes. The approach is illustrated via the simulated example of a continuous reactor.
2014. The 19th World Congress of the International Federation of Automatic Control, Cape Town, South Africa, August 24-29, 2014. p. 7622-7628. DOI : 10.3182/20140824-6-ZA-1003.00735.Working Papers
* On linear and quadratic Lipschitz bounds for twice continuously differentiable functions
Lower and upper bounds for a given function are important in many mathematical and engineering contexts, where they often serve as a base for both analysis and application. In this short paper, we derive piecewise linear and quadratic bounds that are stated in terms of the Lipschitz constants of the function and the Lipschitz constants of its partial derivatives, and serve to bound the function's evolution over a compact set. While the results follow from basic mathematical principles and are certainly not new, we present them as they are, from our experience, very difficult to find explicitly either in the literature or in most analysis textbooks.
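The type of bound in question can be illustrated, in aggregate form and with assumed notation, by the standard linear and quadratic estimates over a compact convex set X; the paper itself states the bounds in terms of the Lipschitz constants of the individual partial derivatives.

```latex
% Assuming f is Lipschitz with constant L and its gradient is Lipschitz with
% constant M on a compact convex set X, then for all x, y in X:
|f(y) - f(x)| \;\le\; L\,\|y - x\| ,
\qquad
f(y) \;\le\; f(x) + \nabla f(x)^{\top} (y - x) + \tfrac{M}{2}\,\|y - x\|^{2} .
```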
2014* Implementation techniques for the SCFO experimental optimization framework
The material presented in this document is intended as a comprehensive, implementation-oriented supplement to the experimental optimization framework presented in [Bunin, G.A., Francois, G., Bonvin, D.: Feasible-side global convergence in experimental optimization. SIAM J. Optim. (submitted) (2014)]. The issues of physical degradation, unknown Lipschitz constants, measurement/estimation noise, gradient estimation, sufficient excitation, and the handling of soft constraints and/or a numerical cost function are all addressed, and a robust, implementable version of the sufficient conditions for feasible-side global convergence is proposed.
2014Talks
* Optimisation en temps réel de procédés industriels, Maîtrise et optimisation de procédés industriels complexes
We address the problem of optimizing continuous and batch processes in the presence of uncertainty in the form of model errors and unknown disturbances. The idea is to rely on process measurements to adjust the inputs. We first propose, for batch processes, an input parameterization that allows reducing the dynamic optimization problem to a static optimization problem. We then present three distinct ways of using measurements to adjust the operating conditions of the process and thereby drive it iteratively, in real time, towards optimality. The first approach is very intuitive and consists in using the available measurements to identify the model parameters and then compute the optimal inputs from the adjusted model. It will be shown that this way of proceeding generally does not drive the process to optimality. The second approach proposes to estimate certain experimental quantities that are linked to process optimality and to correct the model so that the corrected model and the process share the same optimality conditions. Finally, the third approach uses these same experimental quantities directly to drive the process to optimality via feedback, that is, without numerical optimization. These methodologies are illustrated experimentally on a continuous process (a fuel-cell system) and a batch process (a batch polymerization reactor).
ISA France/ISTIA, Angers, France, October, 2014.Student Projects
* Modelling of Surface Catalytic Reaction Systems using the Concept of Extents
Gas-solid catalytic reaction systems depend on a combination of several dynamic effects, such as mass transfer, chemisorption and surface reactions taking place simultaneously. In this master thesis, the extension of the method of extent-based model identification to catalytic reaction systems is proposed, which involves the transformation of the numbers of moles in the gas and solid phases into decoupled state variables called (vessel) extents. This transformation computes extents of inlet, outlet, mass transfer, initial conditions and invariants from the numbers of moles in the gas phase. From the numbers of moles in the solid phase, it also computes extents of mass transfer, chemisorption (adsorption/desorption), surface reactions and invariants. These extents can then be used to perform incremental model identification, where each rate is identified individually based on its corresponding extent. This is illustrated through the simulated example of ammonia synthesis (Haber-Bosch process) in a continuous stirred-tank reactor. For this system, correct rate models were identified and reliable rate parameters were estimated, even in the presence of fast chemisorption and reaction processes. This, however, required a sufficiently large number of measurements at the start of the synthesis. Future work should focus on the extension of this method to more complex catalytic schemes involving more intermediate species.
2014* Contrôle de la température et de la pression dans les machines à café professionnelles
To obtain the best possible coffee in a cup, not only do the coffee beans need to have the right taste and quality, but the process of extracting the different flavors must also be controlled. The water temperature and pressure are two important parameters that must be controlled in a coffee machine to obtain the best taste. Depending on the technology used to control them, these two parameters can also have a significant impact on energy consumption.
The goal of the project is to develop an efficient and ecological control system for the next generation of coffee machines by devising an appropriate control strategy.
This document covers the first stages of the project. The first part develops the concept of the model used to simulate the behavior of the setup chosen to meet the project objectives; the various lines of reasoning and the modeling difficulties are described. The first part also contains the tools needed to calibrate the model for the setup used.
In the second part, different control strategies for pressure and temperature are proposed.
The purpose of this report is to provide the model, the techniques needed for its calibration, and proposals for the control of pressure and temperature.
2014
2013
Journal Articles
* On the Role of the Necessary Conditions of Optimality in Structuring Dynamic Real-Time Optimization Schemes
In dynamic optimization problems, the optimal input profiles are typically obtained using models that predict the system behavior. In practice, however, process models are often inaccurate, and on-line adaptation is required for appropriate prediction and re-optimization. In most dynamic real-time optimization schemes, the available measurements are used to update the plant model, with uncertainty being lumped into selected uncertain plant parameters; furthermore, a piecewise-constant parameterization is used for the input profiles. This paper argues that the knowledge of the necessary conditions of optimality (NCO) can help devise more efficient and more robust real-time optimization schemes. Ideally, the structuring decisions involve the NCO as follows: (i) one measures or estimates the plant NCO, (ii) an NCO-based input parameterization is used, and (iii) model adaptation is performed to meet the plant NCO. The benefit of using the NCO in dynamic real-time optimization is illustrated in simulation through the comparison of various schemes for solving a final-time optimal control problem in the presence of uncertainty.
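For reference, once the inputs have been parameterized, the plant NCO referred to above take the standard first-order (KKT) form, written here with assumed notation for the cost phi and the constraints g; the dynamic case treated in the paper additionally involves path and terminal sensitivities, which this static form does not show.

```latex
% First-order necessary conditions of optimality for  min_u \phi(u)  s.t.  g(u) \le 0:
\nabla_{u}\phi(u^{\star}) + \nabla_{u}g(u^{\star})^{\top}\mu^{\star} = 0 ,
\qquad
g(u^{\star}) \le 0 , \quad \mu^{\star} \ge 0 , \quad {\mu^{\star}}^{\top} g(u^{\star}) = 0 .
```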
Computers and Chemical Engineering. 2013. DOI : 10.1016/j.compchemeng.2012.07.012.* Quels ingénieurs pour la Suisse de demain ?
This article describes the evolution of the engineering profession around the world, and in Switzerland in particular. Starting from the observation of a changing society, it examines the new challenges facing engineers and deduces their impact on the engineering profession and on engineering education. It then addresses the relationship between education and the world of work and presents a few concrete actions to improve the recruitment and education of the next generation of engineers.
VSH/AEU-Bulletin. 2013. DOI : 10.5169/seals-893714.* A Real-Time Optimization Framework for the Iterative Controller Tuning Problem
We investigate the general iterative controller tuning (ICT) problem, where the task is to find a set of controller parameters that optimize some user-defined performance metric when the same control task is to be carried out repeatedly. Following a repeatability assumption on the system, we show that the ICT problem may be formulated as a real-time optimization (RTO) problem, thus allowing for the ICT problem to be solved in the RTO framework, which is both very flexible and comes with strong theoretical guarantees. In particular, we propose the use of a recently released RTO solver and outline a simple procedure for how this solver may be configured to solve ICT problems. The effectiveness of the proposed method is illustrated by successfully applying it to four case studies – two experimental and two simulated – that cover the tuning of model-predictive, general fixed-order, and PID controllers, as well as a system of controllers working in parallel.
Processes. 2013. DOI : 10.3390/pr1020203.* From Discrete Measurements to Bounded Gradient Estimates: A Look at Some Regularizing Structures
Obtaining a reliable gradient estimate for an unknown function when given only its discrete measurements is a common problem in many engineering disciplines. While there are many approaches to obtaining an estimate of a gradient, obtaining lower and upper bounds on this estimate is an issue that is often overlooked, as rigorous bounds that are not overly conservative usually require additional assumptions on the function that may either be too restrictive or impossible to verify. In this work, we try to make some progress in this direction by considering four general structural assumptions as a means of bounding the function gradient in a rigorous likelihood sense. After proposing an algorithm for computing these bounds, we compare their accuracy and precision across different scenarios in an extensive numerical study.
Industrial and Engineering Chemistry Research. 2013. DOI : 10.1021/ie303309a.* Use of Convex Model Approximations for Real-Time Optimization via Modifier Adaptation
Real-Time Optimization (RTO) via modifier adaptation is a class of methods for which measurements are used to iteratively adapt the model via input-affine additive terms. The modifier terms correspond to the deviations between the measured and predicted constraints on the one hand, and the measured and predicted cost and constraint gradients on the other. If the iterative scheme converges, these modifier terms guarantee that the converged point satisfies the KKT conditions for the plant. Furthermore, if upon convergence the plant model predicts the correct curvature of the cost function, convergence to a (local) plant optimum is guaranteed. The main advantage of modifier adaptation lies in the fact that these properties do not rely on specific assumptions regarding the nature of the uncertainty. In other words, in addition to rejecting the effect of parametric uncertainty like most RTO methods, modifier adaptation can also handle process disturbances and structural plant-model mismatch. This paper shows that the use of a convex model approximation in the modifier-adaptation framework implicitly enforces model adequacy. The approach is illustrated through both a simple numerical example and a simulated continuous stirred-tank reactor.
Industrial and Engineering Chemistry Research. 2013. DOI : 10.1021/ie3032372.* Extent-based Kinetic Identification using Spectroscopic Measurements and Multivariate Calibration
Extent-based kinetic identification is a kinetic modeling technique that uses concentration measurements to compute extents and identify reaction kinetics by the integral method of parameter estimation. This article considers the case where spectroscopic data are used via calibration models to predict concentration measurements. The calibration set is assumed to be constructed from reacting calibration data, formed either by pairs of concentrations and spectral data, or by concentration and spectral contributions of the reactions and mass transfers only, obtained by pretreatment in reaction- and mass-transfer-variant form. The extent-based kinetic identification using concentrations predicted by calibration models from spectroscopic data is illustrated via the simulation of a homogeneous and a gas-liquid reaction system.
Analytica Chimica Acta. 2013. DOI : 10.1016/j.aca.2012.12.032.* A Quotient Method for Designing Nonlinear Controllers
An algorithmic method is proposed to design stabilizing control laws for a class of nonlinear systems that comprises single-input feedback-linearizable systems and a particular set of single-input non-feedback-linearizable systems. The method proceeds iteratively and consists of two stages: in the forward stage, it converts the system into cascade form and reduces the dimension at every step, while in the backward stage it constructs the feedback law, also iteratively. Controller design proceeds via the design of invariant manifolds and includes a guarantee of stability at every step. The paper shows that the construction of these invariant manifolds is well defined for feedback-linearizable systems and, furthermore, that it can also be applied to a class of non-feedback-linearizable systems. These features are illustrated via two simulation examples.
European Journal of Control. 2013. DOI : 10.1016/j.ejcon.2012.10.001.Conference Papers
* Real-Time Optimization of Chemical Processes
2013. XIVe Congrès de la Société Française de Génie des Procédés SFGP2013, Lyon, October 8-10, 2013.* Variant and Invariant States for Reaction Systems
Models of chemical reactors can be quite complex as they include information regarding the reactions, the transfer of species between phases, the transfer of energy, and the inlet and outlet flows. Furthermore, the effects of the various phenomena are quite intertwined and thus difficult to quantify from measured data. This paper proposes a mathematical transformation of the balance equations that allows viewing a complex reaction system via decoupled dynamic variables, each one associated with a particular phenomenon such as a single chemical reaction, a specific mass transfer or heat transfer between the reactor and the jacket. Three aspects are investigated, namely, (i) the decoupling of mole balance equations, (ii) the decoupling of mole and heat balance equations, and (iii) the applicability of the decoupling transformation for model reduction, static state reconstruction and incremental kinetic identification.
2013. IFAC Workshop on Thermodynamic Foundations of Mathematical Systems Theory (TFMST), Lyon (France), July 13-16, 2013. p. 102-107. DOI : 10.3182/20130714-3-FR-4040.00020.* Incremental Model Identification of Fluid-Fluid Reaction Systems – Dynamic Accumulation and Reactions in the Diffusion Layer
The identification of kinetic models is an important step for the monitoring, control and optimization of industrial processes. This is particularly the case for highly competitive business sectors such as chemical and pharmaceutical industries, where the current trend of changing markets and strong competition leads to a reduction in the process development costs [1]. Moreover, the PAT initiative of the FDA advocates a better understanding and control of manufacturing processes by the use of modern instrumental technologies and innovative software solutions [2]. <br><br> Reaction systems can be represented by first-principles kinetic models that describe the time evolution of states – numbers of moles, temperature, volume, pressure – by means of conservation and constitutive equations of differential and algebraic nature. These models are designed to include all kinetic phenomena, whether physical or chemical, involved in the reaction systems. Generally, such kinetic phenomena include the dynamic effects of reactions (stoichiometry and reaction kinetics), transfer of species between phases (mass-transfer rates), and operating conditions (initial conditions as well as inlet and outlet flows). <br><br> The identification of reaction and mass-transfer rates as well as the estimation of their corresponding rate parameters represents the main challenge in building first-principles models. The task of identification is commonly performed in one step via ‘simultaneous identification’, in which a dynamic model comprising all rate effects is postulated, and the corresponding model parameters are estimated by comparing the measured and modeled concentrations [3]. This procedure is repeated for all combinations of model candidates, and the combination with the best fit is usually selected. The main advantage of this identification method lies in its capability to model complex dynamic effects in a concomitant way and thus to generate enough constraints in the optimization problem so that indirect measurements such as spectroscopic and calorimetric data can be modeled without the use of a calibration step [4, 5]. However, the simultaneous approach can be computationally costly when several candidates are available for each dynamic effect. Furthermore, this method often leads to high parameter correlation with the consequence that any structural mismatch in the modeling of one part of the model can result in errors in all estimated parameters and, in addition, convergence problems can arise from a poor choice of initial guesses [6, 7]. <br><br> As an alternative, the incremental approach decomposes the identification task into a set of sub-problems of lower complexity [8]. The approach consists in transforming the measured concentrations into decoupled rates or extents, which can then be modeled individually. When needed, prior to the modeling step, the missing or unmeasured states can be reconstructed using the computed rates or extents. In the ‘rated-based incremental identification’ [9], rates are first obtained by differentiation of concentration measurements. Then, postulated rate expressions and rate parameters are estimated one at a time by comparing the measured and modeled rates. However, because of the bias introduced in the differentiation step, the rate parameters estimated by this method are not statistically optimal. That is why, another approach, termed ‘extent-based incremental identification’ [10], that is based on the integral method of parameters estimation has been introduced. 
In this approach, extents are first computed from measured concentrations; postulated rate expressions are then integrated individually for each extent, and the corresponding rate parameters are estimated by comparing the measured and modeled extents. The extent-based identification can also be adapted to analyze calorimetric and spectroscopic data using a calibration step [11, 12]. The transformation to rates or extents reduces the dimensionality of the dynamic model since all redundant states (invariants) can be discarded. More importantly, the remaining states (variants) isolate the effects of the reactions, mass transfers and operating conditions, which can then be analyzed individually [13]. This substantially reduces the computational effort, the convergence problems and the correlation between the estimated rate parameters.

Recently, the extent-based incremental identification has been extended to fluid-fluid reaction systems undergoing unsteady-state mass transfer and reactions at the interface of the two immiscible phases. This situation is commonly encountered in reaction systems that are limited by diffusion, such as CO2 post-combustion capture and nitration reactions. Such reaction systems can be modeled using the film theory, where the two bulks are separated by a spatially distributed film, located in either of the two phases, in which diffusing species can accumulate and react. In both bulks, the mass-balance equations describing the dynamics of the chemical species are expressed as ordinary differential equations (ODE) and serve as boundary conditions for the film. The dynamic accumulation in the film is described by Fick’s second law combined with a reaction term, thus leading to partial differential equations (PDE), which can be solved by appropriate spatial discretization and rearrangement into ODEs.

The extent-based model identification of fluid-fluid reaction systems with unsteady-state mass transfer and reactions requires a large number of measurements for reconstructing all the states and modeling the dynamics of the film [14]. The difficulty lies in the fact that, with the current state of sensor technology, such measurements can only come from the two homogeneous bulks, which provide information from a well-mixed reactor region and consequently are resolved only in time and not in space. Nevertheless, extents of reaction and extents of mass transfer can be extracted from these bulk measurements. The extents of reaction represent the effect of slow reactions that take place in the bulks of the two phases and can be modeled as before. The extents of mass transfer, on the other hand, now represent the combined effect of mass transfer by diffusion through the film and of fast reactions taking place at the interface or in the film. Hence, both the diffusion coefficients and the rate constants of the fast reactions can be estimated by comparing the measured extents of mass transfer and the extents obtained by solving the corresponding PDEs. In the absence of coupling terms in the PDEs due to interactive diffusion and/or reactions, the diffusion coefficients of each species transferring through the film can be estimated incrementally. However, in the case of interactive diffusion and/or reactions, the interdependence of species via the coupling terms of the PDEs calls for a simultaneous identification of the diffusion coefficients and rate constants within the film.
This contribution extends the extent-based incremental identification to the analysis of reaction systems with dynamic accumulation and reactions in the film. In particular, the question of whether to use incremental or simultaneous estimation of the diffusion coefficients and rate constants within a diffusion layer will be addressed.

[1] Workman et al., Anal. Chem. 83, 4557-4578, 2011
[2] Billeter et al., 100th AIChE Annual Meeting, Philadelphia, 2008
[3] Hsieh et al., Anal. Chem., http://dx.doi.org/10.1021/ac302766m, 2013
[4] Billeter et al., Chemom. Intell. Lab. Syst. 95(2), 170-187, 2009
[5] Zogg et al., Thermochimica Acta 419, 1-17, 2004
[6] Billeter et al., Chemom. Intell. Lab. Syst. 93(2), 120-131, 2008
[7] Billeter et al., Chemom. Intell. Lab. Syst. 98(2), 213-226, 2009
[8] W. Marquardt, Chem. Eng. Res. Des. 83(A6), 561-573, 2005
[9] Brendel et al., Chem. Eng. Sci. 61, 5404-5420, 2006
[10] Bhatt et al., Ind. Eng. Chem. Res. 50, 12960-12974, 2011
[11] Srinivasan et al., Chem. Eng. J. 207-208, 785-793, 2012
[12] Billeter et al., Anal. Chim. Acta 767, 21-34, 2013
[13] Srinivasan et al., IFAC Workshop TFMST, Lyon, 2013
[14] Billeter et al., 104th AIChE Annual Meeting, Pittsburgh, 2012
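As a rough illustration of the film discretization mentioned above (a sketch with made-up parameters, not the setup of the paper), Fick's second law with a single first-order reaction term can be converted into a bank of ODEs by discretizing the film in space:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative parameters (placeholders, not from the paper)
    D = 1e-9        # diffusion coefficient [m^2/s]
    k = 0.5         # first-order rate constant in the film [1/s]
    delta = 1e-4    # film thickness [m]
    n = 50          # number of interior grid points
    dz = delta / (n + 1)

    c_gas_side = 1.0   # concentration imposed at the gas-liquid interface [mol/m^3]
    c_bulk = 0.0       # concentration on the liquid-bulk side (boundary condition)

    def film_rhs(t, c):
        """Method of lines: dc/dt = D * d2c/dz2 - k*c on the interior grid."""
        c_ext = np.concatenate(([c_gas_side], c, [c_bulk]))   # add boundary values
        d2c = (c_ext[2:] - 2.0 * c_ext[1:-1] + c_ext[:-2]) / dz**2
        return D * d2c - k * c

    sol = solve_ivp(film_rhs, (0.0, 2.0), np.zeros(n), method="BDF", max_step=0.05)
    # Flux toward the liquid bulk, from the concentration gradient at z = delta
    flux_out = -D * (c_bulk - sol.y[-1, -1]) / dz

In the systems considered above, the film boundary values are themselves governed by the dynamic bulk balances rather than held constant as in this sketch.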
2013. 105th Annual Meeting of the American Institute of Chemical Engineers (AIChE), San Francisco (USA), November 3-8, 2013.* Incremental Model Identification of Gas-Liquid Reaction Systems with Unsteady-State Diffusion
Identification of kinetic models and estimation of reaction and mass-transfer parameters can be performed using the extent-based identification method, whereby each chemical/physical process is handled separately [1-3]. This method is used here to analyze gas-liquid systems under unsteady-state mass transfer. Such a situation is common in the case of diffusion-controlled reactions and is modeled by the film theory, that is, transferring species accumulate in a liquid film. In both the gas and liquid bulks, mass-balance relations describe the species dynamics as ordinary differential equations (ODE) and serve as boundary conditions for the film. The dynamic accumulation in the film, on the other hand, is described by Fick’s second law. The resulting partial differential equation (PDE) system is solved by discretization and rearrangement into ODEs.

The estimation of diffusion coefficients follows a two-step procedure. First, the extents of mass transfer are computed from measurements in the two bulks. Diffusion coefficients are then estimated individually by fitting each extent of mass transfer to the extent obtained by solving the corresponding PDE. Comparison of the estimated diffusion coefficients with their literature values serves to validate the models identified in the two bulks.

The estimation of both kinetic parameters and diffusion coefficients is investigated for gas-liquid reaction systems with unsteady-state diffusion. The approach is illustrated with simulated examples.

[1] Bhatt et al., Ind. Eng. Chem. Res. 50, 12960-12974, 2011
[2] Srinivasan et al., Chem. Eng. J. 208, 785-793, 2012
[3] Billeter et al., Anal. Chim. Acta 767, 21-34, 2013
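The two-step idea of fitting a transport parameter to a measured extent of mass transfer can be sketched as follows; the lumped driving-force model, the synthetic "measurements" and all numbers are placeholders rather than the film model used by the authors:

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical setup: one species absorbed from gas into a liquid of volume V_l,
    # lumped driving force (c_star - c_l); all values are placeholders.
    V_l, c_star, n_l0 = 1.0, 2.0, 0.0      # L, mol/L, mol
    t = np.linspace(0.0, 300.0, 31)        # s

    def extent_model(kLa):
        """Extent of mass transfer for dx/dt = kLa*(c_star*V_l - n_l0 - x), x(0) = 0."""
        return (c_star * V_l - n_l0) * (1.0 - np.exp(-kLa * t))

    rng = np.random.default_rng(0)
    x_meas = extent_model(0.012) + rng.normal(0.0, 0.02, t.size)   # synthetic "measured" extent

    fit = least_squares(lambda p: extent_model(p[0]) - x_meas, x0=[0.05], bounds=(0.0, np.inf))
    kLa_hat = fit.x[0]   # estimated transport parameter [1/s]

In the papers above, the role of extent_model is played by the discretized film PDE, and the fitted quantity is the diffusion coefficient rather than a lumped coefficient.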
2013. Annual Meeting of the Swiss Chemical Society (SCS), Lausanne (Switzerland), September 6, 2013. p. 473 (AS48).* Incremental Model Identification using the Concept of Extents
Kinetic models contribute greatly to cost reduction during the process development phase and are also helpful for process monitoring and control purposes. Kinetic models describe the underlying reactions, the mass transport and the operating conditions of the reactor. In the typical one-step simultaneous method of identification, one postulates a dynamic model encompassing the effects of all phenomena at stake, and the model parameters are estimated by comparing measured data with model predictions. Simultaneous identification can be computationally costly and exhibit convergence issues in the case of poor initial guesses. Furthermore, this method is characterized by high correlation between parameters, so that a structural mismatch in one part of the model can propagate errors to all estimated parameters.

In contrast, the extent-based incremental method of identification is a two-step approach, in which measured data are first transformed into extents, each one representing the effect of a particular phenomenon [1-3]. Then, for each phenomenon individually, a model is postulated and the corresponding parameters are estimated by comparing the simulated and measured extents. Since each extent, and thus each effect, is handled individually, the correlation between model parameters is considerably reduced.

This presentation will give an overview of the extent-based incremental identification and will describe the procedure to analyze homogeneous and gas-liquid systems. The performance of the simultaneous and incremental methods of identification will be compared via simulated examples.

[1] Bhatt et al., Ind. Eng. Chem. Res. 50, 12960-12974, 2011
[2] Srinivasan et al., Chem. Eng. J. 208, 785-793, 2012
[3] Billeter et al., Anal. Chim. Acta 767, 21-34, 2013
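The per-extent fitting loop at the heart of the incremental method can be pictured roughly as below; the candidate rate laws, the single reaction A -> B and the data are invented for illustration:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    t = np.linspace(0.0, 10.0, 41)
    V = 1.0                                   # constant volume (placeholder)
    cA = 1.0 * np.exp(-0.3 * t)               # hypothetical measured concentration of A
    x_meas = V * (1.0 - cA)                   # "experimental" extent of the single reaction A -> B

    # Candidate rate laws r(cA, k); each candidate is fitted to the same extent profile
    candidates = {
        "first_order":  lambda cA, k: k * cA,
        "second_order": lambda cA, k: k * cA**2,
    }

    def simulate_extent(rate, k):
        """Integrate dx/dt = r(cA, k) * V with cA reconstructed from the extent."""
        def rhs(time, x):
            cA_rec = (1.0 * V - x[0]) / V       # cA = (n_A0 - x)/V for A -> B, n_A0 = 1
            return [rate(cA_rec, k) * V]
        return solve_ivp(rhs, (t[0], t[-1]), [0.0], t_eval=t).y[0]

    fits = {}
    for name, rate in candidates.items():
        res = least_squares(lambda p: simulate_extent(rate, p[0]) - x_meas, x0=[0.1],
                            bounds=(0.0, np.inf))
        fits[name] = (res.x[0], np.sum(res.fun**2))    # (rate constant, residual)

    best = min(fits, key=lambda name: fits[name][1])   # candidate with the best fit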
2013. Annual Meeting of the Swiss Chemical Society (SCS), Lausanne (Switzerland), September 6, 2013. p. 474 (AS49).* Real-Time Optimization when the Plant and the Model have Different Inputs
Model-based optimization is an increasingly popular way of determining the values of the degrees of freedom in a process. The difficulty is that the available model is often inaccurate. Iterative set-point optimization, also known as modifier adaptation, overcomes this obstacle by incorporating process measurements into the optimization framework. We extend this technique to optimization problems where the model inputs do not correspond to the plant inputs. Using the example of an incineration plant, we argue that this occurs in practice when a complex process cannot be fully modeled and the missing part encompasses additional degrees of freedom. This paper shows that the modifier-adaptation scheme can be adapted accordingly. This extension makes modifier adaptation much more flexible and applicable, as a wider class of models can be used. The proposed method is illustrated through a simulated CSTR.
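For readers unfamiliar with modifier adaptation, a minimal sketch of the classical unconstrained, scalar-input iteration is given below; it is not the extended scheme of the paper, and the toy plant, model and filter gain are assumptions:

    from scipy.optimize import minimize_scalar

    # Classical modifier adaptation sketched for an unconstrained scalar input.
    # Plant and model are toy functions; the plant gradient is assumed measurable.
    plant_cost = lambda u: (u - 2.0) ** 2 + 1.0      # unknown to the optimizer
    model_cost = lambda u: (u - 1.0) ** 2            # available (mismatched) model
    plant_grad = lambda u: 2.0 * (u - 2.0)           # assumed estimated from measurements
    model_grad = lambda u: 2.0 * (u - 1.0)

    u, lam, K = 0.0, 0.0, 0.5                        # input, gradient modifier, filter gain
    for _ in range(20):
        # First-order correction so that the modified model matches the plant gradient at u
        lam = (1.0 - K) * lam + K * (plant_grad(u) - model_grad(u))
        u = minimize_scalar(lambda v: model_cost(v) + lam * (v - u)).x
    # u approaches the plant optimum u* = 2 despite the plant-model mismatch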
2013. Dynamics and Control of Process Systems, Mumbai, 18-20 December, 2013. p. 39-44. DOI : 10.3182/20131218-3-IN-2045.00052.* Incremental model identification of gas-liquid reaction systems
Identification of kinetic models is an important task for the monitoring, control and optimization of industrial processes. Kinetic models are often based on first principles, which describe the evolution of the states – numbers of moles, temperature and volume – by means of conservation and constitutive equations. Identification of reaction kinetics, namely, rate expressions and rate parameters, represents the main challenge in constructing first-principles models. Parameter estimation is especially difficult for fluid-fluid reaction systems when chemical species transfer between phases and possibly react in the bulk of the two phases.

The identification task is commonly performed in one step via a simultaneous method. In this approach, a dynamic model comprising all kinetic steps, whether physical or chemical, is postulated, and the corresponding model parameters are estimated by comparing measured and modeled concentrations. The procedure is repeated for all combinations of model candidates, and the combination with the best fit is selected. However, simultaneous identification can be computationally costly when many candidate rate laws are available. Furthermore, this method often leads to high parameter correlation, and thus any structural mismatch in one part of the model leads to errors in all estimated parameters.

Alternatively, model identification can be carried out over several steps via an incremental method, whereby the identification task is decomposed into sub-problems of lower complexity. Measured concentrations are transformed into extents, which can then be modeled individually [1-3]. This transformation reduces the dimensionality of the dynamic model since all redundant states (invariants) can be removed. More importantly, the remaining states (variants) represent the minimal set of states describing, individually, the effects of reaction, mass transfer and transport, inlets and outlets. Postulated rate expressions (and rate parameters) are validated and estimated – one at a time – by comparing the corresponding measured and modeled extents. This approach significantly reduces the computational effort and the convergence problems. Since each kinetic step can be dealt with individually, there is no correlation between the parameters of the different physical and chemical phenomena.

This presentation will briefly review the extent-based model identification and then illustrate it with the absorption of nitrous oxides in water, which represents an important step in the treatment of flue gas and constitutes a complex reaction system with multiple reactions in both the gas and liquid phases.

[1] Bhatt et al., "Incremental Identification of Reaction and Mass-Transfer Kinetics Using the Concept of Extents", Ind. Eng. Chem. Res. 50, 12960-12974, 2011
[2] Srinivasan et al., "Extent-based Incremental Identification of Reaction Systems Using Concentration and Calorimetric Measurements", Chem. Eng. J. 208, 785-793, 2012
[3] Billeter et al., "Extent-based Kinetic Identification using Spectroscopic Measurements and Multivariate Calibration", Anal. Chim. Acta 767, 21-34, 2013
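In the simplest setting, a closed homogeneous reactor, the transformation from measured numbers of moles to extents amounts to a stoichiometric pseudo-inverse; the stoichiometry and data below are made up for illustration:

    import numpy as np

    # Hypothetical stoichiometry (R = 2 independent reactions, S = 3 species):
    #   R1: A -> B,   R2: B -> C
    N = np.array([[-1.0,  1.0,  0.0],
                  [ 0.0, -1.0,  1.0]])          # R x S stoichiometric matrix

    n0 = np.array([1.0, 0.0, 0.0])              # initial numbers of moles
    n_t = np.array([0.5, 0.3, 0.2])             # measured numbers of moles at some time t

    # Batch mole balance: n(t) = n0 + N.T @ x(t); with rank(N) = R, the extents follow as
    x_t = np.linalg.pinv(N.T) @ (n_t - n0)      # extents of reaction at time t
    # Here x_t = [0.5, 0.2]: 0.5 mol converted by R1, of which 0.2 mol went on to C via R2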
2013. 13th Scandinavian Symposium on Chemometrics (SSC), Djurönäset (Sweden), June 17-20, 2013.* Iterative Controller Tuning by Real-Time Optimization
The present article looks at the problem of iterative controller tuning, where the parameters of a given controller are adapted in an iterative manner to bring the user-defined performance metric to a local minimum for some repetitive process. Specifically, we cast the controller tuning problem as a real-time optimization (RTO) problem, which allows us to exploit the available RTO theory to enforce both convergence and performance guarantees. We verify the effectiveness of the proposed methodology on an experimental torsional system and note that the results are particularly promising considering the simplicity of the method.
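A bare-bones version of this run-to-run tuning idea, using finite differences over repeated experiments, might look as follows; the performance metric, gains and step sizes are hypothetical stand-ins:

    import numpy as np

    def run_experiment(theta):
        """Stand-in for one run of the repetitive process with controller parameters theta.
        Returns the measured performance metric (smaller is better)."""
        kp, ki = theta
        return (kp - 1.2) ** 2 + 2.0 * (ki - 0.4) ** 2 + 0.01 * np.random.randn()

    theta = np.array([0.5, 0.1])     # initial controller gains
    step, delta = 0.2, 0.05          # gradient-descent step and finite-difference perturbation

    for run in range(30):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            e = np.zeros_like(theta)
            e[i] = delta
            # Two extra runs per parameter to estimate the gradient of the run cost
            grad[i] = (run_experiment(theta + e) - run_experiment(theta - e)) / (2 * delta)
        theta = theta - step * grad  # run-to-run update of the tuning parameters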
2013. Dynamics and Control of Process Systems (DYCOPS), Mumbai, India, December 18-20, 2013. p. 21-26. DOI : 10.3182/20131218-3-IN-2045.00020.* Real-Time Optimization for Kites
Over the past decade, a large number of academics and start-ups have devoted themselves to developing kites, or airplanes on tethers, as a renewable energy source. Determining the trajectories the kite should follow is a modeling and optimization challenge. We present a dynamic model and analyse how uncertainty affects the resulting optimization problem. We show how measurements can be used to rapidly correct the model-based optimal trajectories in real time. This novel real-time optimization approach does not rely on intensive online computation. Rather, it uses knowledge of the structure of the optimal solution, which can be studied offline.
2013. 5th IFAC International Workshop on Periodic Control Systems (PSYCO'2013), July 3-5 2013. p. 64-69. DOI : 10.3182/20130703-3-FR-4039.00004.* Use of Transient Measurements for Real-Time Optimization via Modifier Adaptation
Real-time optimization (RTO) methods use measurements to offset the effect of uncertainty and drive the plant to optimality. Explicit RTO schemes, which are characterized by solving a static optimization problem repeatedly, typically require multiple iterations to steady state. In contrast, implicit RTO methods, which do not solve an optimization problem explicitly, can use transient measurements and gradient control to bring the plant to steady state optimality in a single iteration, provided the set of active constraints is known. This paper investigates the explicit RTO scheme “modifier adaptation” (RTO-MA) and proposes a framework that uses transient measurements. Convergence to the true plant optimum can be achieved in a single iteration provided the plant gradients can be estimated appropriately, for which we propose a linearization-based method. The approach is illustrated through the simulated example of a continuous stirred-tank reactor. It is shown that the time needed for convergence is of the order of the plant settling time, while more than five iterations to steady state are required when MA is applied in its classical form. In other words, RTO-MA is able to compete in performance with RTO schemes based on gradient control, with the additional ability to handle process constraints.
2013. SFGP 2013 - XIVème Congrès de la Société Française de Génie des Procédés, Lyon, France, October 8-10, 2013.Book Chapters
* Control and Optimization of Batch Processes
Encyclopedia of Systems and Control; Springer, 2013.* Measurement-based Real-Time Optimization of Chemical Processes
This chapter presents recent developments in the field of process optimization. In the presence of uncertainty in the form of plant-model mismatch and process disturbances, the standard model-based optimization techniques might not achieve optimality for the real process or, worse, they might violate some of the process constraints. To avoid constraints violations, a potentially large amount of conservatism is generally introduced, thus leading to sub-optimal performance. Fortunately, process measurements can be used to reduce this sub-optimality, while guaranteeing satisfaction of process constraints. Measurement-based optimization schemes can be classified depending on the way measurements are used to compensate the effect of uncertainty. Three classes of measurement-based real-time optimization methods are discussed and compared. Finally, four representative application problems are presented and solved using some of the proposed real-time optimization schemes.
Advances in Chemical Engineering; Academic Press, Elsevier, 2013. p. 1-50.Working Papers
* Sufficient Conditions for Feasibility and Optimality of Real-Time Optimization Schemes - II. Implementation Issues
The idea of iterative process optimization based on collected output measurements, or "real-time optimization" (RTO), has gained much prominence in recent decades, with many RTO algorithms being proposed, researched, and developed. While the essential goal of these schemes is to drive the process to its true optimal conditions without violating any safety-critical, or "hard", constraints, no generalized, unified approach for guaranteeing this behavior exists. In this two-part paper, we propose an implementable set of conditions that can enforce these properties for any RTO algorithm. This second part examines the practical side of the sufficient conditions for feasibility and optimality (SCFO) proposed in the first and focuses on how they may be enforced in real application, where much of the knowledge required for the conceptual SCFO is unavailable. Methods for improving convergence speed are also considered.
2013* Sufficient Conditions for Feasibility and Optimality of Real-Time Optimization Schemes - I. Theoretical Foundations
The idea of iterative process optimization based on collected output measurements, or "real-time optimization" (RTO), has gained much prominence in recent decades, with many RTO algorithms being proposed, researched, and developed. While the essential goal of these schemes is to drive the process to its true optimal conditions without violating any safety-critical, or "hard", constraints, no generalized, unified approach for guaranteeing this behavior exists. In this two-part paper, we propose an implementable set of conditions that can enforce these properties for any RTO algorithm. The first part of the work is dedicated to the theory behind the sufficient conditions for feasibility and optimality (SCFO), together with their basic implementation strategy. RTO algorithms enforcing the SCFO are shown to perform as desired in several numerical examples - allowing for feasible-side convergence to the plant optimum where algorithms not enforcing the conditions would fail.
2013Reports
* The SCFO Real-Time Optimization Solver: Users' Guide (version 0.9.4)
This document acts as a detailed users' guide to the SCFO real-time optimization (RTO) solver, and guides the user through basic setup, configuration, and theoretical aspects of the solver. Several application examples are also presented.
2013Student Projects
* Incremental Model Identification of Gas-Liquid Reaction Systems with Unsteady-State Diffusion
Identification of kinetic models and estimation of reaction and mass-transfer parameters can be performed using the extent-based identification method, whereby each chemical/physical process is treated individually. This method is used here to analyze gas-liquid systems under unsteady-state mass transfer. Such a situation is common in the case of diffusion-controlled reactions and can be modeled by the film theory. In both the gas and liquid bulks, mass-balance relations describe the species dynamics as ordinary differential equations (ODE) and serve as boundary conditions for the film. The dynamic accumulation in the film, on the other hand, is described by Fick's second law. The resulting partial differential equation (PDE) system is solved by discretization and rearrangement into ODEs. Kinetic models are assessed and the corresponding parameters are estimated using extents of reaction. The estimation of diffusion coefficients follows a two-step procedure. First, the extents of mass transfer are computed from measurements in the two bulks. Diffusion coefficients are then estimated individually by fitting each extent of mass transfer to the extent obtained by solving the corresponding PDE. Comparison of the estimated diffusion coefficients with their literature values serves to validate the models identified in the two bulks. The estimation of both kinetic parameters and diffusion coefficients is investigated for gas-liquid reaction systems with unsteady-state diffusion. The approach is illustrated with simulated examples.
2013* Analysis of Fluid-Fluid Reaction Systems with Reactions in Both Phases
Developing mathematical models for chemical reaction systems is essential for the analysis, development, design, optimization and control of chemical or biochemical industrial processes. Nowadays, more and more companies across the world have to deal with innovative concepts such as sustainable chemistry (or "green chemistry"), which requires good knowledge of chemical reaction systems. Well-defined chemical operations lead to process efficiency, energy savings and improved product quality. Furthermore, plants adopting this new philosophy minimize their byproduct production and pollutant formation.

This master thesis develops a methodology for the analysis and kinetic modeling of chemical reaction systems. In the present study, the case where chemical reactions take place in both phases of a heterogeneous biphasic fluid-fluid (F-F) reactor under isothermal conditions is considered.

In order to model homogeneous or heterogeneous (F-F) chemical systems, a new approach called “Extent-based Incremental Identification” has recently been developed by the Laboratoire d’Automatique (EPFL). In contrast to the commonly used simultaneous approach, this extent-based incremental approach can estimate the parameters of the reaction and mass-transfer rates individually for each rate law, and it does not require prior postulation or knowledge of the rate laws.

This particular case extends the dissertation of Nirav Bhatt [1], which considers heterogeneous reaction systems with reactions in only one phase and steady-state mass transfer between the phases, as well as the work of Michael Amrhein [2], who developed a linear transformation that computes the extents of reaction from the numbers of moles in homogeneous reaction systems with inlet and outlet streams.

[1] Nirav Bhatt, EPFL Diss. n° 5028 (2011)
[2] Amrhein et al., AIChE J. 56 (2010), 2873-2886
20132012
Journal Articles
* Extent-based incremental identification of reaction systems using concentration and calorimetric measurements
Extent-based incremental identification uses the concept of extents and the integral method of parameter estimation to identify reaction kinetics from concentration measurements. The approach is rather general and can be applied to both open homogeneous and open gas-liquid reaction systems. This study proposes to incorporate calorimetric measurements into the extent-based identification approach for two main purposes: (i) to be able to compute the extents in certain cases when only a subset of the concentrations are measured, and (ii) to estimate the enthalpies when all concentrations are measured. The two approaches are illustrated via the simulation of a homogeneous and a gas-liquid reaction system, respectively.
Chemical Engineering Journal. 2012. DOI : 10.1016/j.cej.2012.07.063.* Comparison of Six Implicit Real-Time Optimization Schemes
Real-time optimization (RTO) is a class of methods that use measurements to reject the effect of uncertainty on optimal performance. This article compares six implicit RTO schemes, that is, schemes that implement optimality not through numerical optimization but rather via the control of appropriate variables. For unconstrained processes, the ideal controlled variable is the cost gradient. It is shown that, because of their structural differences, model-free and model-based techniques exhibit different features in terms of required excitation, convergence, scalability with the number of inputs and rejection of uncertainty. This comparison is illustrated through a simulated CSTR.
Journal Européen des Systèmes Automatisés. 2012. DOI : 10.3166/JESA.46.291-305.* Incremental Identification of Reaction Systems - A Comparison between Rate-based and Extent-based Approaches
An incremental identification approach for determining the kinetics of homogeneous reaction systems from transient concentration measurements has been developed in previous work (Marquardt, W. Chemical Engineering Research and Design 83(A6), 561–573, 2005). This approach decomposes the identification task into a sequence of sub-tasks that include the identification of the rate expressions for every reaction and of the corresponding rate parameters. The approach is of the “differential” type, because reaction rates are estimated through numerical differentiation of concentration measurements. An alternative incremental identification approach based on the “integral method” using the concept of extents has been proposed recently (Bhatt et al., submitted to Industrial & Engineering Chemistry Research, 2011). The present paper compares the performance of these two incremental approaches with respect to their ability to discriminate between two or more competing rate expressions, their accuracy in estimating the rate parameters, and the associated computational effort. The two incremental approaches are described and their main features investigated via the simulated start-up of a continuous stirred-tank reactor.
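The contrast between the differential and integral routes can be sketched on a first-order reaction with synthetic noisy data; the numbers below are illustrative only:

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 51)
    c = np.exp(-0.3 * t) + rng.normal(0.0, 0.01, t.size)   # noisy concentration, A -> B, k = 0.3

    # Rate-based (differential): differentiate the data, then regress r = k * c
    r = -np.gradient(c, t)                                  # noisy rate estimate
    k_diff = np.sum(r * c) / np.sum(c * c)                  # least-squares slope through origin

    # Extent-based (integral): fit the integrated rate law c(t) = c0 * exp(-k t)
    res = least_squares(lambda p: p[0] * np.exp(-p[1] * t) - c, x0=[1.0, 0.1])
    k_int = res.x[1]
    # The integral fit avoids amplifying the measurement noise through differentiation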
Chemical Engineering Science. 2012. DOI : 10.1016/j.ces.2012.05.040.* Directional Input Adaptation in Parametric Optimal Control Problems
This paper deals with input adaptation in dynamic processes in order to guarantee feasible and optimal operation despite the presence of uncertainty. The proposed adaptation consists in using the nominal optimal inputs and adding appropriately designed input variation functions. For optimal control problems having both terminal and mixed control-state path constraints, two orthogonal sets of directions can be distinguished in the space of input variation functions: the so-called sensitivity-seeking directions, along which a small variation will not affect the respective active constraints, and the complementary constraint-seeking directions, along which a variation will affect the respective constraints. It is shown that the sensitivity-seeking directions satisfy certain linear integral equations. Two selective input adaptation strategies are then defined, namely, adaptation in the sensitivity- and constraint-seeking directions. This paper proves the important result that, for small parametric perturbations, the cost variation resulting from adaptation in the sensitivity-seeking directions (over no input adaptation) is typically smaller than that due to adaptation in the constraint-seeking directions.
SIAM Journal on Control and Optimization. 2012. DOI : 10.1137/110820646.* Asymptotic Rejection of Nonvanishing Disturbances Despite Plant-Model Mismatch
A direct adaptive control methodology for the rejection of unmeasured non-vanishing disturbances is proposed. The approach uses the framework of polynomial RST controllers and relies on the internal model principle with additional degrees of freedom provided by the Q parametrization. The parameters of the Q polynomial are adapted using minimization of the closed-loop output error. Asymptotic disturbance rejection of unmeasured non-vanishing disturbances can be guaranteed despite plant-model mismatch, provided the closed-loop system remains stable. A simulation example illustrates the theoretical developments.
Int. Journal of Adaptive Control and Signal Processing. 2012. DOI : 10.1002/acs.2292.* Experimental Real-Time Optimization of a Solid Oxide Fuel Cell Stack via Constraint Adaptation
The experimental validation of a real-time optimization (RTO) strategy for the optimal operation of a solid oxide fuel cell (SOFC) stack is reported in this paper. Unlike many existing studies, the RTO approach presented here utilizes the constraint-adaptation methodology, which assumes that the optimal operating point lies on a set of active constraints and then seeks to satisfy those constraints in practice via the addition of a correction term to each constraint function. These correction terms, also referred to as “modifiers”, correspond to the difference between predicted and measured constraint values and are updated at each steady-state iteration, thereby allowing the RTO to iteratively meet the optimal operating conditions of an SOFC stack despite significant plant-model mismatch. The effects of the filter parameters used in the modifier update and of the RTO frequency on the general performance of the algorithm are also investigated.
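The constraint-adaptation recursion itself is compact; a toy version with an invented plant, model and filter gain is sketched below:

    import numpy as np
    from scipy.optimize import minimize

    # Toy problem: maximize production (minimize -u) subject to a plant constraint g_p(u) <= 0.
    model_g = lambda u: u - 1.0          # model predicts the constraint becomes active at u = 1
    plant_g = lambda u: u - 0.8          # the real plant saturates earlier (plant-model mismatch)

    u, eps, K = 0.0, 0.0, 0.6            # input, constraint modifier, filter gain
    for _ in range(15):
        # Zeroth-order modifier: filtered difference between measured and predicted constraint
        eps = (1.0 - K) * eps + K * (plant_g(u) - model_g(u))
        res = minimize(lambda v: -v[0], x0=[u],
                       constraints={"type": "ineq", "fun": lambda v: -(model_g(v[0]) + eps)})
        u = res.x[0]
    # u converges to the plant's true constrained optimum u* = 0.8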
Energy. 2012. DOI : 10.1016/j.energy.2011.04.033.Conference Papers
* Model Adequacy for Real-Time Optimization
Optimization is important in science and engineering as a way of finding "optimal" situations, designs or operating conditions. Optimization is typically performed on the basis of a mathematical model of the process under investigation. In practice, optimization is complicated by the presence of uncertainty in the form of plant-model mismatch and unknown disturbances. Without uncertainty, one could use the model at hand, optimize it numerically off-line and implement the optimal inputs in an open-loop fashion. However, because of uncertainty, the inputs need to be re-computed or adjusted in real time based on measurements. This is the field of real-time optimization, which is labeled RTO for static optimization problems [1]. A standard way of implementing RTO is via the so-called two-step approach of repeated parameter estimation and optimization. In this scheme, measurements are used to adapt the model parameters, and the updated model is used for optimization. The fixed point of this iterative scheme is an important issue.

The quality of the model is often overlooked in real-time optimization. The model is typically inaccurate due to lack of time for detailed modeling or sheer complexity. More surprising is the fact that a model can predict the plant outputs well but be inadequate to push the plant to its optimum, which means that the model does not adequately represent the optimality conditions of the plant. The problem of model selection in the two-step RTO approach has been discussed in [2]. If the model is structurally correct and the parameters are identifiable, convergence to the plant optimum can be achieved in a single iteration. However, in the presence of plant-model mismatch, whether the scheme converges, or to which point it converges, becomes anyone's guess. This is due to the fact that the objective for parameter adaptation might be unrelated to the cost and constraint values and gradients that drive optimality in the optimization problem. Hence, minimizing the mean-square error of the plant outputs may not help in our quest for feasibility and optimality. Convergence under plant-model mismatch has been addressed in [3] and [4], where it has been shown that optimal operation is reached if model adaptation leads to matched KKT conditions for the model and the plant.

This contribution addresses the convergence of two-step RTO schemes in the presence of structural plant-model mismatch. We propose to investigate the parameter estimation and optimization problems in the light of the second-order sufficient conditions of optimality to show that, in general, there are too few degrees of freedom to be able to reach plant optimality. A possible solution to this problem consists in reconciling the objectives of the parameter estimation and optimization problems along the lines of "modeling for optimization" [5, 6]. One could, for example, use estimates of the necessary conditions of optimality of the optimization problem as "measured outputs" in the parameter estimation problem. These issues will be investigated both theoretically and via the simulation of chemical processes at steady state.

References
[1] Marlin, T.E. and Hrymak, A.N. (1997). "Real-Time Operations Optimization of Continuous Processes", In AIChE Symposium Series - CPC-V, Vol. 93, 156-164.
[2] Forbes, J.F. and Marlin, T.E. (1996). "Design Cost: A Systematic Approach to Technology Selection for Model-Based Real-Time Optimization Systems", Comp. Chem. Eng. 20, 717-734.
[3] Biegler, L.T., Grossmann, I.E. and Westerberg, A.W. (1985). "A Note on Approximation Techniques Used for Process Optimization", Comp. Chem. Eng. 9, 201-206.
[4] Forbes, J.F., Marlin, T.E. and MacGregor, J.F. (1994). "Model Adequacy Requirements for Optimizing Plant Operations", Comp. Chem. Eng. 18(6), 497-510.
[5] Srinivasan, B. and Bonvin, D. (2002). "Interplay Between Identification and Optimization in Run-to-Run Optimization Schemes", ACC, Anchorage, 2174-2179.
[6] Bonvin, D. and Srinivasan, B. (2012). "On the Role of the Necessary Conditions of Optimality in Structuring Dynamic Real-Time Optimization Schemes", submitted to Comp. Chem. Eng.
2012. AIChE Annual Meeting, Pittsburgh, PA, October 28 - November 2, 2012.* Numerical algorithm for feedback linearizable systems
A numerical algorithm that achieves asymptotic stability for feedback linearizable systems is presented. The nonlinear systems can be represented in various forms that include differential equations, simulated physical models or lookup tables. The proposed algorithm is based on a quotient method and proceeds iteratively. At each step, the dynamic system is desensitized with respect to the current input vector field. Control is obtained by tracking a desired value along the input vector field at each step. The numerical algorithm uses the direction on the tangent manifold at a given point and its variation around that point. This enables the algorithm to produce control values simply using a simulator of the nonlinear system.
2012. 10th Int. Conference of Numerical Analysis and Applied Mathematics, Kos, Greece, September 2012. p. 1458-1461. DOI : 10.1063/1.4756436.* Avoiding Feedback-Linearization Singularity Using a Quotient Method -- The Field-Controlled DC Motor Case
Feedback linearization requires a unique feedback law and a unique diffeomorphism to bring a system to Brunovský normal form. Unfortunately, singularities might arise both in the feedback law and in the diffeomorphism. This paper demonstrates the ability of a quotient method to avoid or mitigate the singularities that typically arise with feedback linearization. The quotient method does so by relaxing the conditions on the diffeomorphism, which can be achieved since there is an additional degree of freedom at each step of the iterative procedure. This freedom in choosing quotients and the resulting advantage are demonstrated for a field-controlled DC motor. Using a Lyapunov function, the domain of attraction of the control law obtained with the quotient method is proved to be larger than the domain of attraction of a control law obtained using feedback linearization.
2012. American Control Conference, Montreal, Canada, June 27-29, 2012. p. 1155-1161. DOI : 10.1109/ACC.2012.6315095.* On the Use of Models for Real-Time Optimization
The operation of dynamic processes can be optimized using models that predict the system behavior well, in particular its optimality features. In practice, however, process models are often structurally inaccurate, and on-line adaptation is typically required for appropriate prediction and optimization. Furthermore, it is difficult to identify process model parameters on-line during optimization because of lack of persistent excitation. This paper addresses the modeling issue for the purpose of real-time optimization. It will be shown that the models used for real-time optimization need not be valid as a whole; instead, it suffices that they represent the optimality conditions well. Two types of models are considered, namely, the traditional "plant models" and the tailor-made "solution models". The features of each type, in particular their ability to be adapted using on-line measurements, are discussed and illustrated through a simple car example.
2012. Chemical Process Control-VIII, Savannah Harbor, GA, USA, January 2012.* Extent-based incremental identification of reaction systems – Minimal number of measurements for full state reconstruction
The identification of kinetic models is an essential step for the monitoring, control and optimization of industrial processes. This is particularly true for the chemical and pharmaceutical industries, where the current trend of strong competition calls for a reduction in process development costs [1]. This trend goes in line with the recent initiative in favor of Process Analytical Technology (PAT) launched by the US Food and Drug Administration, which advocates a better understanding and control of manufacturing processes with the goal of ensuring final product quality.

Reaction systems can be represented by first-principles models that describe the evolution of the states (typically concentrations, volume and temperature) by means of conservation equations of differential nature and constitutive equations of algebraic nature. These models include information regarding the reactions (stoichiometry and reaction kinetics), the transfer of species between phases (mass-transfer rates), and the operation of the reactor (initial conditions, inlet and outlet flows, operational constraints). The identification of reaction and mass-transfer rates represents the main challenge in building these first-principles models. Note that first-principles models can include redundant states because the modeling step considers balance equations for more quantities than are necessary to represent the true variability of the process. For example, when modeling a closed homogeneous reaction system with R independent reactions, one typically writes a mole balance equation for each of the S species, whereas there are only R < S independent equations, that is, S - R equations are redundant. The situation is a bit more complicated in open and/or heterogeneous reaction systems.

The identification of reaction systems can be performed in one step via a simultaneous approach, in which a kinetic model that comprises all reactions and mass transfers is postulated and the corresponding rate parameters are estimated by comparing predicted and measured concentrations [2]. The procedure is repeated for all combinations of model candidates and the combination with the best fit is typically selected. This approach is termed 'simultaneous identification' since all reactions and mass transfers are identified simultaneously. The advantages of this approach lie in the capability to handle complex reaction rates and in the fact that it leads to optimal parameters in the maximum-likelihood sense. However, the simultaneous approach can be computationally costly when several candidates are available for each reaction, and convergence problems can arise for poor initial guesses. Furthermore, structural mismatch in one part of the model may result in errors in all estimated parameters.

As an alternative to simultaneous identification, the incremental approach decomposes the identification task into a set of sub-problems of lower complexity [3]. With the differential method [2], reaction rates are first estimated by differentiation of transient concentration measurements. Then, each estimated rate profile is used to discriminate between several model candidates, and the candidate with the best fit is selected. This approach is termed 'rate-based incremental identification' since each reaction rate and each mass-transfer rate is dealt with individually.
However, because of the bias introduced in the differentiation step, the estimated rate parameters are not statistically optimal. With the integral method [4-5], extents are first computed from measured concentrations. Subsequently, postulated rate expressions are integrated individually for each reaction, and rate parameters can be estimated by comparing predicted and computed extents. Since each extent of reaction and mass transfer can be investigated individually, this approach is termed 'extent-based incremental identification' [6, 7].

The context of this work is the extent-based incremental identification of rate laws for fluid-fluid (F-F) reaction systems on the basis of process measurements. Process measurements are available for some of the species only, as it is difficult to measure the concentrations of all species due to limitations in the current state of sensor technology. Hence, it is necessary to reconstruct the unmeasured concentrations that appear in the rate laws from the available measurements. If a process model were available, this reconstruction could be done via state estimation using observers or Kalman filters. In the absence of such a reaction model, the idea is to perform instantaneous reconstruction by having as many measured quantities as there are non-redundant states. Hence the key question: How many measurements are needed to be able to reconstruct the full state? R measurements suffice in the case of a homogeneous batch reactor, whereas R + 2p_m + p_l + p_g + 2 measurements are needed in the case of an open gas-liquid reaction system without reaction/accumulation in the film [8], where p_m is the number of mass transfers, p_l the number of liquid inlets and p_g the number of gas inlets, and there is one outlet in each phase.

After a review of the extent-based incremental identification, this contribution will extend the results on the minimal number of measured species required for reconstructing all states to the cases of F-F reaction systems with reactions taking place in one or two bulks, without and with accumulation/reactions in the film. For the case where the number of measured species is insufficient to compute all the states, this presentation will address the possibility of using additional measurements, such as calorimetry and gas consumption, to augment the number of measured quantities [9]. These theoretical results will be illustrated through simulated examples of F-F reaction systems.

[1] J. Workman et al., Anal. Chem. 83, 4557 (2011)
[2] A. Bardow et al., Chem. Eng. Sci. 59, 2673 (2004)
[3] M. Brendel et al., Chem. Eng. Sci. 61, 5404 (2006)
[4] M. Amrhein et al., AIChE Journal 56, 2873 (2010)
[5] N. Bhatt et al., Ind. Eng. Chem. Res. 49, 7704 (2010)
[6] N. Bhatt et al., Ind. Eng. Chem. Res. 50, 12960 (2011)
[7] N. Bhatt et al., Chem. Eng. Sci. 83, 24 (2012)
[8] N. Bhatt et al., ACC, Montreal (Canada), 3496 (2012)
[9] S. Srinivasan et al., Chem. Eng. J. 208, 785 (2012)
2012. 104th Annual Meeting of the American Institute of Chemical Engineers (AIChE), Pittsburgh (USA), October 28 - November 2, 2012.* Simultaneous or incremental identification of reaction systems?
Identification of kinetic models is essential for the monitoring, control and optimization of industrial processes. Robust kinetic models are often based on first principles and described by differential equations. Identification of reaction kinetics, namely rate expressions and rate parameters, represents the main challenge in building first-principles models. The identification task can be performed in one step via a simultaneous approach or over several steps via an incremental approach.

In the simultaneous approach, a kinetic model that encompasses all reactions is postulated and the corresponding parameters are estimated by comparing predicted and measured concentrations. The procedure is repeated for all combinations of model candidates and the combination with the best fit is typically selected. This approach can handle complex reaction rates and leads to optimal parameters in the maximum-likelihood sense. However, it is computationally costly when several candidates are available for each reaction, and convergence problems can arise for poor initial guesses. Furthermore, simultaneous identification often leads to high parameter correlation, and a structural mismatch in one part of the model can result in errors in all estimated parameters.

In the incremental approach, the identification task is decomposed into sub-problems of lower complexity. In the differential method, reaction rates are first estimated by differentiation of measured concentrations. Then, each estimated rate profile is used to discriminate between several model candidates, and the candidate with the best fit is selected. However, because of the bias introduced in the differentiation step, the estimated rate parameters are not statistically optimal. In the integral method, measured concentrations are first transformed into 'experimental extents'. Subsequently, postulated rate expressions are integrated for each reaction individually and rate parameters are estimated by comparing predicted and experimental extents.

This contribution reviews the simultaneous and incremental methods of identification and compares them via simulated examples taken from homogeneous and heterogeneous chemistry.
2012. 4th Chemistry Congress of the European Association for Chemical and Molecular Sciences (EuCheMS), Prague (Czech Republic), August 26-30, 2012.* Extent-based incremental identification of reaction kinetics from spectroscopic data
Identification of kinetic models is an important step for the monitoring, control and optimization of chemical and pharmaceutical processes. Furthermore, the availability of a first-principles model often leads to substantial reduction in process development cost. Recent developments in Process Analytical Technology can help build kinetic models from large amounts of multivariate spectroscopic data [1].

Kinetic modeling from spectroscopic data typically relies on Beer's law, A = C E, to decompose the observed data matrix A into the product of n_s concentration profiles, C (n_t x n_s), and n_s pure component spectra, E (n_s x n_w). In simultaneous identification approaches, the rate expressions are integrated simultaneously to predict Ĉ, E is estimated as E = Ĉ^+ A, while the rate parameters are determined by fitting the predicted absorbance Ĉ Ĉ^+ A to the measured absorbance A.

As an alternative to simultaneous identification, incremental identification focuses on each reaction separately, with the main advantage that each rate can be fitted individually, thereby resulting in less correlation between the estimated rate parameters. The concentrations are estimated from absorbance measurements as C = A E^+ or from a calibration C = f(A). In the rate-based (differential) approach, the reaction rates are first estimated by differentiation of concentrations, and the rate parameters are obtained by individually fitting each candidate rate expression to the corresponding estimated rate [2]. The difficulty with this approach lies in the differentiation of noisy and sparse concentration data. In order to avoid the differentiation step, an extent-based (integral) approach has been proposed [3], in which the extents of reaction X_r (n_t x n_r), i.e. the numbers of moles consumed or produced by each reaction, are computed from A. For this, the linear transformation S_0 has been developed, which computes X_r = V C S_0, where V is a diagonal matrix representing the volume at the n_t time instants. When some of the species do not absorb or react, X_r can be calculated using a flow-based method [3]. The rate parameters are determined by fitting each predicted reaction extent to the corresponding extent computed from measurements. More recently, this extent-based approach has been extended to heterogeneous reaction systems [4].

This contribution presents the extent-based incremental identification of reaction kinetics based on spectroscopic data. The approach is illustrated through simulated examples.

[1] Workman et al., Anal. Chem. 83, 4557 (2011)
[2] Brendel et al., Chem. Eng. Sci. 61, 5404 (2006)
[3] Amrhein et al., AIChE Journal 56, 2873 (2010)
[4] Bhatt et al., Ind. Eng. Chem. Res. 49, 7704 (2010)
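A compact numerical sketch of the Beer's-law decomposition discussed above, using pseudo-inverses on synthetic data; the dimensions, spectra and kinetics are fabricated for illustration:

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 10.0, 40)                     # n_t time instants
    k = 0.4

    # Synthetic concentration profiles C (n_t x n_s) for A -> B and pure spectra E (n_s x n_w)
    C = np.column_stack([np.exp(-k * t), 1.0 - np.exp(-k * t)])
    E = np.abs(rng.normal(size=(2, 100)))              # two made-up pure component spectra
    A = C @ E + rng.normal(0.0, 1e-3, (t.size, 100))   # measured absorbance, Beer's law + noise

    # If the pure spectra are known, the concentrations follow from the pseudo-inverse of E
    C_est = A @ np.linalg.pinv(E)

    # Conversely, with concentrations predicted by a kinetic model, the spectra follow as
    E_est = np.linalg.pinv(C_est) @ A
    A_pred = C_est @ E_est                             # predicted absorbance to compare with A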
2012. 13th Conference on Chemometrics in Analytical Chemistry (CAC), Budapest (Hungary), June 25-29, 2012.* Quotient method for stabilising a ball-on-a-wheel system – Experimental results
This paper extends the quotient method proposed in [1] and applies it to stabilize a “ball-on-a-wheel” system. The quotient method requires a diffeomorphism to obtain the normal form of the input vector field and uses canonical projection to obtain the quotient. However, the whole process can be done without computing the normal form, which requires defining a quotient generating function and a quotient bracket. This paper presents the steps necessary to apply the quotient method without obtaining the normal form. Furthermore, a Lyapunov function is introduced to prove stability. This paper also presents the experimental implementation of the quotient method to stabilize a ball-on-a-wheel system.
2012. IEEE CDC 2012, Hawaii, December 2012. p. 1271-1278. DOI : 10.1109/CDC.2012.6426434.* Run-to-Run MPC Tuning via Gradient Descent
A gradient-descent method for the run-to-run tuning of MPC controllers is proposed. It is shown that, with an assumption on process repeatability, the MPC tuning parameters may be brought to a locally optimal set. SISO and MIMO examples illustrate the characteristics of the proposed approach.
2012. 22nd European Symposium on Computer Aided Process Engineering (ESCAPE), London, UK, June 17-20, 2012. p. 927-931. DOI : 10.1016/B978-0-444-59520-1.50044-0.* Extent-based incremental identification of reaction systems using concentration and calorimetric measurements
Extent-based Incremental Model Identification (IMI) uses the concept of extent of reaction and the integral method of parameter estimation to identify reaction kinetics from transient concentration measurements. This study proposes to incorporate calorimetric measurements into the extent-based IMI approach. Calorimetric measurements are added to concentration measurements for two main purposes: (i) to be able to estimate the reaction enthalpies when all the concentrations are measured, and (ii) to be able to compute the extents of reaction in certain cases when only a subset of the concentrations are measured. The two approaches are demonstrated via the simulation of a semi-batch reactor.
2012. 12th International Symposium on Chemical Reaction Engineering (ISCRE), Maastricht (Netherlands), September 1-5, 2012. p. 785-793. DOI : 10.1016/j.cej.2012.07.063.* Minimal State Representation for Open Fluid-Fluid Reaction Systems
Reaction systems are typically represented by first-principles models that describe the evolution of the states (typically concentrations, volume and temperature) by means of conservation equations of differential nature and constitutive equations of algebraic nature. The resulting models often contain redundant states, since the variability of the concentrations is not linked to the number of species, but to the number of independent reactions, the number of transferring species, and the number of inlet and outlet streams. A minimal state representation is a dynamic model that exhibits the same behavior as the original model but has no redundant states. This paper considers the material balance equations associated with an open fluid-fluid reaction system that involves S_g species, p_g independent inlets and one outlet in the first fluid phase (e.g. the gas phase), and S_l species, R independent reactions, p_l independent inlets and one outlet in the second fluid phase (e.g. the liquid phase). In addition, there are p_m species transferring between the two phases. Based on a nonlinear transformation that decomposes the (S_l + S_g) states of the original model into σ (= R + 2p_m + p_l + p_g + 2) variant states and (S_l + S_g - σ) invariant states, and on the concept of accessibility of nonlinear systems, the conditions under which the transformed model is a minimal state representation are derived. Furthermore, it is shown how to reconstruct unmeasured concentrations from measured concentrations and flow rates without knowledge of the reaction and mass-transfer rates. The minimal number of composition measurements needed to reconstruct the full state is (R + p_m). The simulated chlorination of butanoic acid is used to illustrate the various concepts developed in the paper.
2012. 2012 American Control Conference (ACC), Montreal, Canada, June 27 - 29, 2012. p. 3496-3502. DOI : 10.1109/ACC.2012.6315195.* Exploiting Local Quasiconvexity for Gradient Estimation in Modifier-Adaptation Schemes
A new approach for gradient estimation in the context of real-time optimization under uncertainty is proposed in this paper. While this estimation problem is often a difficult one, it is shown that it can be simplified significantly if an assumption on the local quasiconvexity of the process is made and the resulting constraints on the gradient are exploited. To do this, the estimation problem is formulated as a constrained weighted least-squares problem with appropriate choice of the weights. Two numerical examples illustrate the effectiveness of the proposed method in converging to the true process optimum, even in the case of significant measurement noise.
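The general shape of such a constrained weighted least-squares gradient estimate can be sketched as follows; the past input moves, the weights and the single inequality standing in for the quasiconvexity information are all invented, not the formulation of the paper:

    import numpy as np
    from scipy.optimize import minimize

    # Past input moves dU (each row u_k - u_ref) and measured cost changes dphi, all synthetic;
    # the true gradient at u_ref is taken as [2.0, -1.0] for this example.
    rng = np.random.default_rng(3)
    dU = rng.normal(0.0, 0.1, (6, 2))
    dphi = dU @ np.array([2.0, -1.0]) + rng.normal(0.0, 0.02, 6)
    W = np.diag(1.0 / np.arange(1, 7))        # illustrative weighting of the data points

    # Hypothetical quasiconvexity-type information: the cost is known to decrease along
    # direction d at u_ref, which constrains the gradient estimate to satisfy g @ d <= 0.
    d = np.array([-1.0, 0.2])

    obj = lambda g: (dU @ g - dphi) @ W @ (dU @ g - dphi)        # weighted least squares
    res = minimize(obj, x0=np.zeros(2),
                   constraints={"type": "ineq", "fun": lambda g: -(g @ d)})
    g_hat = res.x                                                # constrained gradient estimate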
2012. The 2012 American Control Conference, Montréal, Canada, June 27-29, 2012. p. 2806-2811. DOI : 10.1109/ACC.2012.6314902.Theses
* Quotient-method Algorithms for Input-affine Single-input Nonlinear Systems
Many real-world systems are intrinsically nonlinear. This thesis proposes various algorithms for designing control laws for input-affine single-input nonlinear systems. These algorithms, which are based on the concept of quotients used in nonlinear control design, can break down a single-input system into a cascade of smaller subsystems of reduced dimension. These subsystems are well defined for feedback-linearizable systems. However, approximations are required to handle non-feedback-linearizable systems.

The method proceeds iteratively and consists of two stages. During the forward stage, an equivalence relationship is defined to isolate the states that are not directly affected by the input, which reduces the dimension of the system. The resulting system is an input-affine single-input system controlled by a pseudo-input, which represents a degree of freedom in the algorithm. The pseudo-input is a complementary state required to complete the diffeomorphism. This procedure is repeated (n - 1) times to give a one-dimensional system, where n is the dimension of the system. The backward stage begins with the one-dimensional system obtained at the end of the forward stage. It iteratively builds the control law required to stabilize the system. At every iteration, a desired profile of the pseudo-input is computed. In the next iteration, this desired profile is used to define an error that is driven asymptotically to zero using an appropriate control law.

The quotient method is implemented through two algorithms, with and without diffeomorphism. The algorithm with diffeomorphism clearly depicts the dimension reduction at every iteration and provides a clear insight into the method. In this algorithm, a diffeomorphism is synthesized in order to obtain the normal form of the input vector field. The pseudo-input is the last coordinate of the new coordinate system. A normal projection is used to reduce the dimension of the system. For the algorithm to proceed without any approximation, it is essential that the last coordinate appears linearly in the projection of the transformed drift vector field. Necessary and sufficient conditions to achieve linearity in the last coordinate are given. Having the pseudo-input appear linearly makes it possible to represent the projected system as an input-affine system. Hence, the whole procedure can be repeated (n - 1) times so as to obtain a one-dimensional system.

In the second algorithm, a projection function based on the input vector field is defined that imitates both operators of the previous algorithm, namely the push-forward operator and the normal projection operator. Due to the lack of an actual diffeomorphism, there is no apparent dimension reduction. Moreover, it is not directly possible to separate the drift vector field from the input vector field in the projected system. To overcome this obstacle, a bracket is defined that commutes with the projection function. This bracket provides the input vector field of the projected system. This enables the algorithm to proceed by repeating the procedure (n - 1) times. Compared with the algorithm with diffeomorphism, the computational effort is reduced. The mathematical tools required to implement this algorithm are presented.

A nice feature of these algorithms is the possibility to use the degrees of freedom to overcome singularities. This characteristic is demonstrated through a field-controlled DC motor.
Furthermore, the algorithm provides a way of approximating a non-feedback-linearizable system by a feedback-linearizable one. This has been demonstrated in the cases of the inverted pendulum and the acrobot. On the other hand, the algorithm without diffeomorphism has been demonstrated on the ball-on-a-wheel system. The quotient method can also be implemented whenever a simulation platform is available, that is, when the differential equations for the system are not available in standard form. This is accomplished numerically by computing the required diffeomorphism based on the data available from the simulation platform. Two versions of the numerical algorithm are presented. One version leads to faster computations but uses approximations at various steps. The second version has better accuracy but requires considerably more computational time.
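In generic notation (the symbols below are not taken from the thesis), the class of systems considered and the idea of the iterative reduction can be summarized, roughly, as
\[ \dot{x} = f(x) + g(x)\,u, \qquad x \in \mathbb{R}^n, \quad u \in \mathbb{R}, \]
where, loosely speaking, each forward-stage iteration takes the quotient of the state space along the input vector field g, reducing the dimension by one, so that after (n − 1) iterations a one-dimensional system driven by a pseudo-input remains; the backward stage then assigns the pseudo-inputs recursively to stabilize the resulting cascade.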
Lausanne, EPFL, 2012. DOI : 10.5075/epfl-thesis-5467.* Optimal Design of Signaling Modules
One of the basic characteristics of every living system is the ability to respond to extracellular signals. This is carried out through a limited number of protein-based signaling networks, whose function is not based only on simple transmission of the received signals, but incorporates the processing, encoding and integration of both external and internal signals. The results then lead to different changes in gene expression and regulate cell growth, mitogenesis, differentiation, embryo development, and stress responses in mammalian cells, whereas malfunction correlates with diseases such as cancer, asthma and diabetes. In signaling networks, the basic units are covalent modification cycles, which comprise the activation and deactivation of proteins by other proteins. Protein modification in cell signaling – typically a phosphorylation and dephosphorylation – is a general mechanism responsible for the transfer of a wide variety of chemical signals in biological systems. Although the concept does not seem to be complex from a biochemical point of view, these simple systems can nevertheless provide a wide range of dynamical responses and are therefore ubiquitous building blocks of signaling pathways. These cycles are often linked, forming multiple layers of cycles, the so-called cascades. Commonly observed instances of signal transduction through a series of protein kinase reactions are the kinases of the mitogen-activated protein kinase (MAPK) cascades. These pathways, which are found in almost all eukaryotes, play an important role in controlling different cellular processes, including fundamental functions. The activation of the cellular response by MAPK pathways typically involves at least three phosphorylation steps. In order to better understand the nature of this regulation and to gain greater insight into the mechanisms that determine the function of cells, signaling modules have been intensively studied using mathematical modeling and computational simulations, through the fast-growing field of systems biology and its disciplines. The primary aim is to faithfully describe the system and to be able to predict the system behavior. Synergistically with experimental analysis, the reported observations have allowed one to identify properties of these pathways, such as fast signal propagation, large amplification, short signal duration and noise resistance. Since biochemical parameters in signaling pathways are not easily accessible experimentally, it is necessary to use advanced mathematical tools for their correct estimation. Using the paradigm of man-made optimal signal transduction systems, we chose to investigate the optimal design of cellular signaling modules. To approach the main thesis objective, we first identified the key system parameters through global sensitivity analysis. Comparative analysis of differences and similarities within different system architectures revealed some insights for initial parameter classification and a starting point for optimal system design. In order to be able to interpret a broader range of phenotypes, we take into account both steady-state and dynamic properties simultaneously. Furthermore, we investigated the trade-offs between optimal characteristics. As a result, we found the biochemical and biophysical parameters that determine these trade-offs and we analyzed whether there exist conditions under which we can simultaneously achieve optimal steady-state and dynamic performance.
We first analyze the design principles that lead the system to have minimal signaling times, subject to a given level of amplification gain. In this setup, we bring out our main research question: are there any trade-offs and interplay between different steady-state and dynamic properties? Furthermore, we include the property of ultrasensitivity and eventually solve multi-objective optimization problems. A particularly insightful finding of this work is that, upon judicious selection of the kinetic parameters, a simple covalent modification cycle is able to meet multiple objectives simultaneously. In particular, this analysis may help explain why signaling cycles are so ubiquitous in cell signaling. The enhancement of ultrasensitivity and the faster signal propagation in multicyclic systems clearly show the advantages of the natural choice of designing signaling pathways in the form of signaling cascades. The thesis concludes with potential research steps that could be taken along the same path and that would gather more quantitative knowledge about signaling pathways.
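The basic building block discussed above, a single covalent modification (phosphorylation/dephosphorylation) cycle, can be sketched in Python with Michaelis-Menten kinetics; all parameter values below are hypothetical and only serve to illustrate the unit, not the thesis results.

    from scipy.integrate import solve_ivp

    W_tot = 1.0            # total substrate (W + Wp), arbitrary units
    k1, K1 = 1.0, 0.1      # kinase catalytic constant and Michaelis constant
    k2, K2 = 0.5, 0.1      # phosphatase catalytic constant and Michaelis constant
    E1, E2 = 0.1, 0.1      # kinase and phosphatase concentrations (E1 acts as the signal)

    def cycle(t, y):
        Wp = y[0]
        W = W_tot - Wp
        # activation by the kinase minus deactivation by the phosphatase
        return [k1 * E1 * W / (K1 + W) - k2 * E2 * Wp / (K2 + Wp)]

    sol = solve_ivp(cycle, (0.0, 50.0), [0.0])
    print("steady-state phosphorylated fraction:", sol.y[0, -1] / W_tot)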
Lausanne, EPFL, 2012. DOI : 10.5075/epfl-thesis-5419.* On the Role of Constraints in Optimization under Uncertainty
This thesis addresses the problem of industrial real-time process optimization that suffers from the presence of uncertainty. Since a process model is typically used to compute the optimal operating conditions, both plant-model mismatch and process disturbances can result in suboptimal or, worse, infeasible operation. Hence, for practical applications, methodologies that help avoid re-optimization during process operation, at the cost of an acceptable optimality loss, become important. The design and analysis of such approximate solution strategies in real-time optimization (RTO) demand a careful analysis of the components of the necessary conditions of optimality. This thesis analyzes the role of constraints in process optimality in the presence of uncertainty. This analysis is made in two steps. Firstly, a general analysis is developed to quantify the effect of input adaptation on process performance for static RTO problems. In the second part, the general features of input adaptation for dynamic RTO problems are analyzed with focus on the constraints. Accordingly, the thesis is organized in two parts: (i) for static RTO, a joint analysis of the model optimal inputs, the plant optimal inputs and a class of adapted inputs, and (ii) for dynamic RTO, an analytical study of the effect of local adaptation of the model optimal inputs. The first part (Chapters 2 and 3) addresses the problem of adapting the inputs to optimize the plant. The investigation takes a constructive viewpoint, but it is limited to static RTO problems modeled as parametric nonlinear programming (pNLP) problems. In this approach, the inputs are not limited to being local adaptations of the model optimal inputs but, instead, they can change significantly to optimize the plant. Hence, one needs to consider the fact that the sets of active constraints for the model and the plant can be different. It is proven that, for a wide class of systems, the detection of a change in the active set contributes only negligibly to optimality, as long as the adapted solution remains feasible. More precisely, if η denotes the magnitude of the parametric variations and if the linear independence constraint qualification (LICQ) and the strong second-order sufficient condition (SSOSC) hold for the underlying pNLP, the optimality loss due to any feasible input that conserves only the strict nominal active set is of magnitude O(η²), irrespective of whether or not there is a change in the set of active constraints. The implication of this result for a static RTO algorithm is to prioritize the satisfaction of only a core set of constraints, as long as it is possible to meet the feasibility requirements. The second part (Chapters 4 and 5) of the thesis deals with a way of adapting the model optimal inputs in dynamic RTO problems. This adaptation is made along two sets of directions such that one type of adaptation does not affect the nominally active constraints, while the other does. These directions are termed the sensitivity-seeking (SS) and the constraint-seeking (CS) directions, respectively. The SS and CS directions are defined as elements of a fairly general function space of input variations. A mathematical criterion is derived to define SS directions for a general class of optimal control problems involving both path and terminal constraints. According to this criterion, the SS directions turn out to be solutions of linear integral equations that are completely defined by the model optimal solution.
The CS directions are then chosen orthogonal to the subspace of SS directions, where orthogonality is defined with respect to a chosen inner product on the space of input variations. It follows that the corresponding subspaces are infinite-dimensional subspaces of the function space of input variations. It is proven that, when uncertainty is modeled in terms of small parametric variations, the aforementioned classification of input adaptation leads to clearly distinguishable cost variations. More precisely, if η denotes the magnitude of the parametric variations, adaptation of the model optimal inputs along SS directions causes a cost variation of magnitude O(η²). On the other hand, the cost variation due to input adaptation along CS directions is of magnitude O(η). Furthermore, a numerical procedure is proposed for computing the SS and CS components of a given input variation. These components are projections of the input variation on the infinite-dimensional subspaces of SS and CS directions. The numerical procedure consists of the following three steps: approximation of the optimal control problem by a pNLP problem, projection of the given direction on the finite-dimensional SS and CS subspaces of the pNLP and, finally, reconstruction of the SS and CS components of the original problem from those of the pNLP.
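For the finite-dimensional pNLP approximation mentioned above, the projection step can be sketched in Python as follows: sensitivity-seeking directions span the null space of the active-constraint Jacobian and constraint-seeking directions its row space. The Jacobian and the input variation below are hypothetical numbers, not taken from the thesis.

    import numpy as np

    A = np.array([[1.0, 0.0, 2.0],            # Jacobian of the active constraints w.r.t. the inputs
                  [0.0, 1.0, -1.0]])
    du = np.array([0.3, -0.2, 0.1])           # given input variation

    U, s, Vt = np.linalg.svd(A)               # orthonormal basis of the row space via SVD
    r = int(np.sum(s > 1e-12))
    V_cs = Vt[:r].T                           # constraint-seeking (CS) basis
    du_cs = V_cs @ (V_cs.T @ du)              # CS component of the variation
    du_ss = du - du_cs                        # sensitivity-seeking (SS) component
    print("CS component:", du_cs, "SS component:", du_ss)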
Lausanne, EPFL, 2012. DOI : 10.5075/epfl-thesis-5170.Talks
* Real-Time Optimization in the Presence of Uncertainty
This presentation discusses real-time optimization (RTO) strategies for improving process performance in the presence of uncertainty in the form of plant-model mismatch, drifts and disturbances. RTO typically uses a plant model to compute optimal inputs. In the presence of uncertainty, selected model parameters can be estimated and the updated model used for optimization. Although very intuitive, this two-step approach suffers from the fact that the model is almost invariably inadequate, which prevents one from reaching the plant optimum. Other approaches have been developed in the last two decades to overcome this difficulty. Recently, a generic formalization of these ad hoc fixes has been proposed under the label modifier adaptation. The basic idea is to leave the model parameters unchanged but to use the plant measurements to “appropriately” modify the optimization problem. The modifier-adaptation approach will be presented and compared to the two-step approach, in particular with regard to model adequacy. We will then go beyond this comparison and discuss different ways of using plant measurements for process improvement in the presence of uncertainty. There are many questions to be addressed: (i) what can be done off-line prior to process operation, and what should be performed in real time, (ii) how much of the optimization effort is model-based and how much is data-driven, and (iii) what to measure, what to adapt, and how to adapt? We will then see that there exists another class of measurement-based optimization approaches that implements direct input adaptation. This class of methods includes NCO tracking, extremum-seeking control and self-optimizing control. A case study will illustrate the applicability of the various approaches.
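A minimal Python sketch of a first-order modifier-adaptation loop, restricted to a cost-gradient modifier and no constraints: the scalar plant, model and filter gain below are hypothetical, and the plant gradient would in practice be estimated from plant experiments rather than from a known function.

    from scipy.optimize import minimize_scalar

    def plant_cost(u):                 # "plant" cost; in practice only measured
        return (u - 2.0) ** 2 + 0.5 * u

    def model_cost(u):                 # available model, structurally mismatched
        return (u - 1.0) ** 2

    def gradient(f, u, h=1e-4):        # central finite difference
        return (f(u + h) - f(u - h)) / (2 * h)

    u, lam, K = 0.0, 0.0, 0.7          # initial input, modifier, filter gain
    for _ in range(30):
        lam_new = gradient(plant_cost, u) - gradient(model_cost, u)   # first-order modifier
        lam = (1 - K) * lam + K * lam_new                             # exponential filtering
        u = minimize_scalar(lambda v: model_cost(v) + lam * v).x      # modified model optimum
    print("converged input:", u)       # approaches the plant optimum u* = 1.75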
Workshop on Modeling, Simulation and Optimization of Uncertain Systems, Heidelberg Academy of Sciences, June 4-5, 2012.2011
Journal Articles
* Incremental Identification of Reaction and Mass-Transfer Kinetics Using the Concept of Extents
This paper proposes a variation of the incremental approach to identify reaction and mass-transfer kinetics (rate expressions and the corresponding rate parameters) from concentration measurements for both homogeneous and gas-liquid reaction systems. This incremental approach proceeds in two steps: (i) computation of the extents of reaction and mass transfer from concentration measurements without explicit knowledge of the reaction and mass-transfer rate expressions, and (ii) estimation of the rate parameters for each rate expression individually from the computed extents using the integral method. The novelty consists in using extents that are computed from measured concentrations. For the computation of the individual extents, two cases are considered: if the concentrations of all the liquid-phase species can be measured, a linear transformation is used; otherwise, if the concentrations of only a subset of the liquid-phase species are available, an approach that uses flowrate and possibly gas-phase concentration measurements is proposed. The incremental identification approach is illustrated in simulation via two reaction systems, namely the homogeneous acetoacetylation of pyrrole and the gas-liquid chlorination of butanoic acid.
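For the simplest setting covered by the transformation, a batch homogeneous reactor with all liquid-phase concentrations measured, the extent computation reduces to a least-squares step based on n(t) = n0 + N^T ξ(t); the stoichiometry and data in this Python sketch are hypothetical.

    import numpy as np

    N = np.array([[-1.0, -1.0,  1.0, 0.0],    # R x S stoichiometric matrix (hypothetical)
                  [ 0.0, -1.0, -1.0, 1.0]])
    n0 = np.array([1.0, 2.0, 0.0, 0.0])       # initial numbers of moles
    n_t = np.array([0.7, 1.5, 0.1, 0.2])      # measured numbers of moles at time t

    xi, *_ = np.linalg.lstsq(N.T, n_t - n0, rcond=None)   # extents of reaction at time t
    print("extents of reaction:", xi)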
Industrial & Engineering Chemistry Research. 2011. DOI : 10.1021/ie2007196.* On multivariate calibration with unlabeled data
In principal component regression (PCR) and partial least-squares regression (PLSR), the use of unlabeled data, in addition to labeled data, helps stabilize the latent subspaces in the calibration step, typically leading to a lower prediction error. A non-sequential approach based on optimal filtering (OF) has been proposed in the literature to use unlabeled data with PLSR. In this work, a sequential version of the OF-based PLSR and a PCA-based PLSR (PLSR applied to PCA-preprocessed data) are proposed. It is shown analytically that the sequential version of the OF-based PLSR is equivalent to PCA-based PLSR, which leads to a new interpretation of OF. Simulated and experimental data sets are used to point out the usefulness and pitfalls of using unlabeled data. Unlabeled data can replace labeled data to some extent, thereby leading to an economic benefit. However, in the presence of drift, the use of unlabeled data can result in an increase in prediction error compared to that obtained with a model based on labeled data alone.
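A Python sketch of the PCA-based PLSR variant mentioned above: PCA is fitted on the pooled labeled and unlabeled spectra, the labeled spectra are projected onto the retained subspace, and PLSR is applied to the resulting scores. The data and the numbers of components are placeholders.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(30, 50))           # labeled spectra
    y_lab = rng.normal(size=(30, 1))            # reference (lab) values
    X_unlab = rng.normal(size=(200, 50))        # unlabeled spectra

    pca = PCA(n_components=10).fit(np.vstack([X_lab, X_unlab]))   # subspace from all spectra
    pls = PLSRegression(n_components=3).fit(pca.transform(X_lab), y_lab)

    X_new = rng.normal(size=(5, 50))
    print(pls.predict(pca.transform(X_new)).shape)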
Journal of Chemometrics. 2011. DOI : 10.1002/cem.1389.* On the Design of Integral Observers for Unbiased Output Estimation in the Presence of Uncertainty
Integral observers are useful tools for estimating the plant states in the presence of non-vanishing disturbances resulting from plant-model mismatch and exogenous disturbances. It is well known that these observers can eliminate bias in all states, given that as many independent measurements are available as there are independent sources of disturbance. In the most general case, the dimensionality of the disturbance vector affecting the plant states corresponds to the order of the system and thus all states need to be measured. This condition, which is termed integral observability in the literature, represents a fairly restrictive situation. This study focuses on the more realistic case, where only the output variables are measured. Accordingly, the objective reduces to the unbiased estimation of the output variables. It is shown that both stability and asymptotically unbiased output estimation can be achieved if the system is observable, regardless of the dimensionality of the disturbance vector. Furthermore, a condition is provided under which, using output measurements, the errors in all states can be pushed to zero. It is also proposed to use off-line output measurements to tune the observer using a calibration-like approach. Integral observers and integral Kalman filters are evaluated via the simulation of a fourth-order linear system perturbed by unknown non-vanishing disturbances.
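One common construction consistent with the setting above augments the state with a constant-disturbance estimate and places a Luenberger gain on the augmented pair, which must be observable; the system matrices and observer poles in this Python sketch are hypothetical.

    import numpy as np
    from scipy.signal import place_poles

    # Second-order plant with an unknown constant input disturbance d
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    Bd = np.array([[0.0], [1.0]])                    # where the disturbance enters

    # Augmented model: x_a = [x; d], with d assumed constant
    A_aug = np.block([[A, Bd], [np.zeros((1, 2)), np.zeros((1, 1))]])
    C_aug = np.hstack([C, np.zeros((1, 1))])

    # Observer gain via pole placement on the dual pair (A_aug^T, C_aug^T)
    L = place_poles(A_aug.T, C_aug.T, [-4.0, -5.0, -6.0]).gain_matrix.T
    print("observer gain:\n", L)
    # Observer dynamics: xhat_dot = A_aug xhat + [B; 0] u + L (y - C_aug xhat)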
Journal of Process Control. 2011. DOI : 10.1016/j.jprocont.2010.11.015.* A Globally Convergent Algorithm for the Run-to-Run Control of Systems with Sector Nonlinearities
Run-to-run control is a technique that exploits the repetitive nature of processes to iteratively adjust the inputs and drive the run-end outputs to their reference values. It can be used to control both static and finite-time dynamic systems. Although the run-end outputs of dynamic systems result from the integration of process dynamics during the run, the relationship between the input parameters p (fixed at the beginning of the run) and the run-end outputs z (available at the end of the run) can be seen as the static map z(p). Run-to-run control consists in computing the input parameters p∗ that lead to the reference values z_ref. Although a wide range of techniques have been reported, most of them do not guarantee global convergence, that is, convergence towards p∗ for all possible initial conditions. This paper presents a new algorithm that guarantees global convergence for the run-to-run control of both static and finite-time dynamic systems. Attention is restricted to sector nonlinearities, for which it is shown that a fixed gain update can lead to global convergence. Furthermore, since convergence can be very slow, it is proposed to take advantage of the mathematical similarity between run-to-run control and the solution of nonlinear equations, and combine the fixed-gain algorithm with a faster variable-gain Newton-type algorithm. Global convergence of this hybrid scheme is proven. The potential of this algorithm in the context of run-to-run optimization of dynamic systems is illustrated via the simulation of an industrial batch polymerization reactor.
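The fixed-gain update can be sketched in Python on a toy static map z(p) whose slope lies in a sector; the map, reference and gain below are hypothetical, and a Newton-type variable-gain step could be substituted once the iterates are close to the solution.

    import numpy as np

    def z(p):                        # run-end output as a static map of the input parameter
        return p + 0.5 * np.tanh(p)  # slope stays within the sector [1, 1.5]

    z_ref = 2.0                      # run-end reference
    p, K = -5.0, 0.5                 # initial input parameter and fixed gain
    for _ in range(40):
        p = p + K * (z_ref - z(p))   # fixed-gain run-to-run update
    print("converged parameter:", p, "run-end output:", z(p))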
Industrial and Engineering Chemistry Research. 2011. DOI : 10.1021/ie100808t.* Data-driven model reference control with asymptotically guaranteed stability
This paper presents a data-driven controller tuning method that includes a set of constraints for ensuring closed-loop stability. The approach requires a single experiment and can also be applied to nonminimum-phase or unstable systems. The tuning scheme uses an approximation of the closed-loop output error in the model-reference control problem. For linearly parameterized controllers, minimization of the correlation between this error and the reference signal leads to a convex optimization problem. A sufficient condition for closed-loop stability is introduced, which is implemented as a set of convex constraints on the Fourier transform of specific auto- and cross-correlation functions. As the data length tends to infinity, closed-loop stability is guaranteed. The quality of the estimated controller is analyzed for finite data length. The effectiveness of the proposed method is demonstrated in simulation as well as experimentally on a laboratory-scale mechanical setup.
Int. Journal of Adaptive Control and Signal Processing. 2011. DOI : 10.1002/acs.1212.Conference Papers
* A quotient method for designing nonlinear controllers
An algorithmic method is proposed to design stabilizing control laws for a class of nonlinear systems that comprises single-input feedback-linearizable systems and a particular set of single-input non-feedback-linearizable systems. The method proceeds iteratively and consists of two stages; it converts the system into cascade form and reduces the dimension at every step by creating a quotient manifold in the forward stage, while it constructs the feedback law iteratively in the backward stage. The paper shows that the construction of these quotient manifolds is well defined for feedback-linearizable systems and, furthermore, can also be applied to a class of non-feedback-linearizable systems.
2011. 50th IEEE Conference of Decision and Control (CDC)/European Control Conference (ECC), Orlando, FL, Dec 12-15, 2011. p. 7980-7987. DOI : 10.1109/CDC.2011.6160803.* Comparison of Gradient Estimation Methods for Real-time Optimization
Various real-time optimization techniques proceed by controlling the gradient to zero. These methods primarily differ in the way the gradient is estimated. This paper compares various gradient estimation methods. It is argued that methods with model-based gradient estimation converge faster but can be inaccurate in the presence of plant-model mismatch. In contrast, model-free methods are accurate but typically take longer to converge.
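A toy Python illustration of the trade-off discussed above: a model-based gradient is cheap to evaluate but biased under plant-model mismatch, whereas a finite-difference estimate obtained on the plant is unbiased (up to noise) but requires additional plant evaluations. All functions and numbers are placeholders.

    import numpy as np

    def plant(u):                   # true (unknown) plant cost
        return (u - 2.0) ** 2 + 0.1 * np.sin(5 * u)

    def model(u):                   # available model with mismatch
        return (u - 1.5) ** 2

    u0, h = 1.0, 1e-2
    grad_model = (model(u0 + h) - model(u0 - h)) / (2 * h)   # model-based estimate (no plant runs)
    grad_plant = (plant(u0 + h) - plant(u0 - h)) / (2 * h)   # finite differences (two plant runs)
    print("model-based:", grad_model, "plant finite-difference:", grad_plant)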
2011. 21st European Symposium on Computer Aided Process Engineering - ESCAPE 21, Chalkidiki, Greece, May 29 - June 1, 2011. p. 607-611.* How Important is the Detection of Changes in Active Constraints in Real-Time Optimization?
In real-time optimization, enforcing the constraints that need to be active is important for optimality. In fact, it has been established in the context of parametric variations that, if these constraints are not satisfied, the optimality loss would be O(η), with η denoting the magnitude of the parametric variations. In contrast, the loss of optimality upon enforcing the correct set of active constraints would be O(η²). However, no result is available when the set of active constraints changes due to parametric variations, which forms the subject of this paper. Herein it is shown that, if the optimal solution is unique for each η, keeping only the strictly active constraints of the nominal solution active will lead to an O(η²) loss in optimality, even when the remaining active constraints of the perturbed system are different from those of the nominal system. This, in turn, means that, in any input adaptation scheme for real-time optimization, identifying changes in active constraints is not important as long as it is possible to enforce the strictly active constraints of the nominal solution to remain active.
2011. 18th IFAC World Congress, Milano, Italy, August 28 - September 2, 2011. p. 9862-9868. DOI : 10.3182/20110828-6-IT-1002.03134.* Modifier Adaptation for Run-to-Run Optimization of Transient Processes
Dynamic optimization can be used to determine optimal input profiles for dynamic processes. Due to plant-model mismatch and disturbances, the optimal inputs determined through model-based optimization will, in general, not be optimal for the plant. Modifier adaptation is a methodology that uses measurements to achieve optimality in the presence of uncertainty. Modifier-adaptation schemes have been developed for the real-time optimization of plants operating at steady state. In this paper, the concept of modifier adaptation is extended to transient plants such as batch processes. Two different schemes are proposed, and their performance is illustrated via the simulation of a semi-batch reaction system.
2011. 18th World Congress of the Int. Federation of Automatic Control (IFAC), Milan, Italy, August 28 - September 2, 2011. p. 11471-11476. DOI : 10.3182/20110828-6-IT-1002.02996.* Input Filter Design for Feasibility in Constraint-Adaptation Schemes
The subject of real-time, steady-state optimization under significant uncertainty is addressed in this paper. Specifically, the use of constraint-adaptation schemes is reviewed, and it is shown that, in general, such schemes cannot guarantee process feasibility over the relevant input space during the iterative process. This issue is addressed via the design of a feasibility-guaranteeing input filter, which is easily derived through the use of a Lipschitz bound on the plant behavior. While the proposed approach works to guarantee feasibility for the single-constraint case, early sub-optimal convergence is noted for cases with multiple constraints. In this latter scenario, some constraint violations must be accepted if convergence to the optimum is desired. An illustrative example is given to demonstrate these points.
2011. 18th World Congress of the Int. Federation of Automatic Control (IFAC), Milano, Italy, August 28 - September 2, 2011. p. 5585-5590. DOI : 10.3182/20110828-6-IT-1002.02937.* Stabilization of the cart-pendulum system through approximate manifold decomposition
This paper proposes a feedback law capable of swinging up and stabilizing the cart-pendulum system. The approach uses an iterative algorithm that is typically used to construct a locally linearizing output for nonlinear control-affine systems. However, rather than computing the linearizing output, the algorithm iteratively constructs an approximate feedback form of the original system. The resulting feedback law has a large domain of attraction, which, however, does not extend over the upper half circle of the pendulum plane. A larger domain of attraction can be obtained by sampling the input and keeping it constant during a short sampling interval. The performance of the strategy is illustrated both in simulation and experimentally on a laboratory-scale setup.
2011. 18th World Congress of the Int. Federation of Automatic Control (IFAC), Milano, Italy, August 28 - September 2, 2011. p. 10659-10666. DOI : 10.3182/20110828-6-IT-1002.01168.Theses
* Extents of Reaction and Mass Transfer in the Analysis of Chemical Reaction Systems
Monitoring, control and optimization of chemical reaction systems often requires in-depth analysis of the underlying reaction mechanisms. This dissertation investigates appropriate tools that facilitate the analysis of homogeneous and gas-liquid reaction systems. The main contribution is a novel procedure for computing the extents of reaction and the extents of mass transfer for reaction systems with inlet and outlet streams. These concepts can help reduce the dimension of reaction models and are useful in the identification of reaction kinetics based on concentrations and spectral data.
Extents of reaction, mass transfer and flow
The concept of extents of reaction is well established for single-phase closed systems such as batch homogeneous reactors. However, it is difficult to compute the extent of reaction for open and heterogeneous reactors due to material exchange with the surroundings via inlet and outlet streams and between phases via mass transfer. For open homogeneous reaction systems involving S species, R independent reactions, p independent inlet streams and one outlet stream, this dissertation proposes a linear transformation of the number of moles vector (S states) into four distinct parts, namely, the extents of reaction, the extents of inlet, the extent of outlet and the invariants, using only the stoichiometry, the inlet composition and the initial conditions. The open gas-liquid reaction systems considered in this thesis involve S_g species, p_g independent inlets and one outlet in the gas phase, S_l species, R independent reactions, p_l independent inlets and one outlet in the liquid phase. In addition, there are p_m mass-transfer fluxes between the two phases. For these systems, various extents are developed successively for the liquid and gas phases. Using only the stoichiometry, the inlet composition, the initial conditions, and knowledge of the species transferring between phases, a linear transformation of the numbers of moles (S_l states) in the liquid into five distinct parts is proposed, namely, the extents of reaction, the extents of mass transfer, the extents of liquid inlet, the extent of liquid outlet and the invariants. Similarly, a transformation of the numbers of moles (S_g states) in the gas phase into four distinct parts is proposed to generate the extents of mass transfer, the extents of gas inlet, the extent of gas outlet and the invariants.
Minimal state representation and state reconstruction
A state representation is minimal if (i) it can be transformed into variant states that evolve with time and invariants that are constant with time (representation condition), and (ii) the transformed model is minimal (minimality condition). Since the linear transformation transforms the numbers of moles into variant states (the extents) and invariant states, it satisfies the representation condition. For homogeneous reaction systems, the linearly transformed model is of the order (R + p + 1), while the order of the linearly transformed model for open gas-liquid reaction systems is (R + p_l + p_g + 2p_m + 2). Using the concept of accessibility of nonlinear systems, the conditions under which the transformed models are minimal state representations are derived for both types of reaction systems. Since it is often not possible in practice to measure the concentrations of all the species, the unmeasured concentrations have to be reconstructed from available measurements. Using the measured flowrates and the proposed transformations, it is possible to reconstruct the unmeasured concentrations without knowledge of the reaction and mass-transfer rate expressions. Furthermore, it is shown that the minimal number of measured concentrations is R for homogeneous reactors and (R + p_m) for gas-liquid reactors.
Use of concentrations and spectral data
The identification of reaction kinetics can be done incrementally or globally from experimental data. Using measured concentrations and spectral data with knowledge of pure-component spectra, incremental identification proceeds in two steps: (i) computation of the extents of reaction and mass transfer from measured data, and (ii) estimation of the parameters of the individual reaction and mass-transfer rates from the computed extents. In the first step, the linear transformation is applied to compute the extents of reaction, mass transfer and flow directly from measured concentrations without knowledge of the reaction and mass-transfer rate expressions. The transformation can be extended to measured spectral data, provided the pure-component spectra are known. An approach is developed for the case where concentrations are only available for a subset of the reacting species. In the second step, the unknown rates can be identified individually for each reaction or each mass transfer from the corresponding individual extent using the integral method. For the case of measured concentrations corrupted with zero-mean Gaussian noise, it is shown that the transformation gives unbiased estimates of the extents. For the case of spectral data with unknown pure-component spectra, the contributions of the reactions and mass transfers can be computed by removing the contributions of the inlet flows and the initial conditions. This leads to the reaction- and mass-transfer-variant (RMV) form of spectral data, from which the reaction and mass-transfer rate parameters can be estimated simultaneously. However, if the RMV form is rank deficient, the rank must be augmented before applying factor-analytical methods. In such cases, it is shown that, for example, gas consumption data can be used for rank augmentation. The concepts and tools are illustrated using simulated data. Several special reactors such as batch, semi-batch and continuous stirred-tank reactors are considered.
Lausanne, EPFL, 2011. DOI : 10.5075/epfl-thesis-5028.* Subspace Correction Methods in Multivariate Calibration
Productivity, quality, safety, and environmental concerns have driven major advancements in the development of process analyzers. Analyzers generate measurement data that are useful for characterizing product and process attributes (key variables), thereby benefiting the drive towards automatic control and optimization. However, these objectives may be severely compromised when key variables are determined at low sampling rates through off-line analysis. It is sometimes possible to relate more easily available secondary measurements (predictors) to key variables (predictands) using data-driven soft sensors or calibration models. These models can then be used to deliver information about key variables at a higher sampling rate and/or at lower financial burden. This work studies multivariate calibration for spectroscopic measurements (such as near-infrared, mid-infrared, ultra-violet, Raman spectra, or nuclear magnetic resonance) that are linked to concentrations of one or more analytes using an inverse regression model based on principal component regression (PCR) or partial least-squares regression (PLSR). Spectroscopic measurements are typically corrupted with both random zero-mean measurement errors (noise) and systematic variations (drift) caused by instrumental, operational and process changes. The prediction error can be decomposed into the error due to noise in the calibration data and bias resulting from truncation in PCR/PLSR, and the error due to drift and noise in the prediction data. To correct for these errors, this work proposes three subspace correction methods that use new information in addition to calibration data. Firstly, latent subspace correction using unlabeled data (secondary measurements for which the key variables are unknown) helps reduce the error due to noise in the calibration data and truncation. Secondly, drift subspace correction is achieved following a two-step procedure. In the first step, the drift subspace is estimated using slave data with drift and master data with no drift. In the second step, the original calibration data are corrected for the estimated drift subspace using shrinkage or orthogonal projection. The third subspace correction method involves data reconciliation, which is the procedure of adjusting predicted key variables to obtain estimates that are consistent with balance equations. The various methodologies are illustrated using both simulated and experimental data.
Lausanne, EPFL, 2011. DOI : 10.5075/epfl-thesis-4919.Talks
* Incremental Identification of Reaction and Mass-Transfer Rates In Gas-Liquid Reaction Systems Using Tendency Modeling
The identification of reliable reaction and mass-transfer rates is important for building first-principles models of gas-liquid reaction systems. The identification of these rates involves the determination of a model structure (reaction stoichiometry, rate expressions for the reactions and mass transfers) and of the corresponding parameters. The identification of these rate expressions from measured concentrations is a challenging task because of the direct coupling between the reactions and the transfer of reactants and products between the two phases. The identification task can be performed globally in one step by choosing the model structure and estimating the model parameters via the comparison of model predictions and measured data. The approach is termed simultaneous identification since all reactions and mass transfers are identified simultaneously. The procedure needs to be repeated for all candidate model structures. Hence, the simultaneous identification can be computationally costly when several candidate rate expressions are available for each reaction and mass transfer. Furthermore, since the global model is fitted so as to reduce the least-squares error, structural mismatch in one rate expression of the model will typically result in errors in all the parameters. Finally, it is often difficult to come up with suitable initial parameter values, which may lead to convergence problems. An incremental identification approach has recently been proposed, which decomposes the identification task into the following two steps [1, 2]: (i) computation of the extents of reaction and mass transfer from measured concentrations without knowledge of the reaction and mass-transfer rates, and (ii) for each rate individually, identification of the rate expression and its parameters from the computed extents. The fact that each reaction and mass-transfer rate is treated individually in the incremental approach helps reduce considerably the number of model candidates, thereby reducing the computational effort. Although the proposed incremental approach provides an efficient framework for the identification of gas-liquid reaction systems, a systematic way of selecting the appropriate rate expressions from several candidate expressions is needed in Step (ii). Recently, the so-called generalized tendency modeling (GTeMoC) method has been proposed to select appropriate rate expressions from a large number of rate expression candidates [3, 4]. In the GTeMoC methodology, a stepwise linear regression is used as a tool to select appropriate rate expressions. Moreover, the statistical metrics are developed to discriminate rate expression candidates and avoid collinearity in rate parameters. However, the effect of mass transfer rates is not treated explicitly in the GTeMoC method, and lumped rate expressions containing the effect of reactions and mass transfers are identified. This work combines the incremental approach and the GTeMoC methodology so that the reaction and mass-transfer rates can be identified individually. Hence, the resulting incremental approach proceeds in three steps: (i) computation of the extents of reaction and mass transfer from measured concentrations without knowledge of the reaction and mass-transfer rates, (ii) computation of the reaction and mass-transfer rates through differentiation of the corresponding computed extents, and (iii) for each rate individually, identification of the rate expression and its parameters using the GTeMoC method. 
The proposed incremental identification approach combines the strengths of the incremental approach (can handle each reaction and each mass transfer individually) and the GTeMoC method (can efficiently select the rate expression from several candidate expressions). The approach will be illustrated via the simulation of the chlorination of butanoic acid.
2011 AIChE Annual Meeting, Minneapolis, MN, USA, October 16-21, 2011.* Incremental model identification for homogeneous systems – A comparison
The identification of reaction kinetics involves the determination of a model structure (reaction stoichiometry, rate laws for all reactions) and its corresponding parameters from experimental data. An incremental identification approach for determining the kinetics of homogeneous reaction systems from transient concentration measurements has been developed in previous work [1, 2]. This approach decomposes the identification task into a sequence of sub-tasks that include the identification of the rate laws for every reaction and of the corresponding rate parameters. The approach is closely related to the “differential method”, because reaction rates are estimated through numerical differentiation of concentration measurements. An alternative incremental identification approach based on the “integral method” has been proposed recently [3]. It uses the concept of extent of reaction and proceeds in two steps: (i) the computation of extents from measured concentrations of all or a subset of the reacting species and, optionally, from the inlet and outlet flowrates, and (ii) for each reaction individually, the estimation of rate parameters from the corresponding extent using the integral method. The objective of this work is to compare the performance of the two types of incremental approaches with respect to their ability to discriminate between two or more competing rate laws and to estimate the rate parameters with high accuracy. In particular, we investigate the propagation of errors from the concentration measurements to the rates or extents and finally to the estimated kinetic parameters. The main features are illustrated via the startup of a continuous stirred-tank reactor. A well-known criterion is used to discriminate between competing rate laws [4]. It is shown that the (integral) extent-based method is in many aspects (e.g., for low-frequency noisy concentration measurements) superior to the (differential) rate-based method, if the final, simultaneous correction step is avoided in the latter: it better discriminates between competing kinetic laws and it results in parameter estimates with tighter confidence intervals, while fewer meta-parameters need to be adjusted. However, the rate-based method can be computationally advantageous for multiple-reaction systems when all kinetic laws are uncertain.
7th Int. Workshop on Mathematics in Chemical Kinetics and Engineering 2011, University of Heidelberg, May 18-20, 2011.2010
Journal Articles
* Extents of Reaction, Mass Transfer and Flow for Gas-Liquid Reaction Systems
For gas-liquid reaction systems with inlet and outlet streams, this paper proposes a linear transformation to decompose the numbers of moles vector into five distinct parts, namely, the extents of reaction, the extents of mass transfer, the extents of inlet flow, the extents of outlet flow, and invariants. Furthermore, several implications of being able to compute the extents of reaction, mass transfer, and inlet flow are discussed. The concept is illustrated via the simulation of various reactor configurations for the chlorination of butanoic acid.
Industrial & Engineering Chemistry Research. 2010. DOI : 10.1021/ie902015t.* Framework for explicit drift correction in multivariate calibration models
Latent-variable calibrations using principal component regression and partial least-squares regression are often compromised by drift such as systematic disturbances and offsets. This paper presents a two-step framework that facilitates the evaluation and comparison of explicit drift-correction methods. In the first step, the drift subspace is estimated using different types of correction data in a master/slave setting. The correction data are measured for the slave with drift and computed for the master with no drift. In the second step, the original calibration data are corrected for the estimated drift subspace using shrinkage or orthogonal projection. The two cases of no correction and drift correction by orthogonal projection can be seen as special cases of shrinkage. The two-step framework is illustrated with four different experimental data sets. The first three examples study drift correction on one instrument (temperature effects, spectral differences between samples obtained from different plants, instrumental drift), while the fourth example studies calibration transfer between two instruments.
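The orthogonal-projection variant of the two-step framework can be sketched in Python as follows: the drift subspace is estimated from slave-minus-master difference spectra, and the calibration spectra are projected onto its orthogonal complement. The data are random placeholders and a two-dimensional drift subspace is assumed.

    import numpy as np

    rng = np.random.default_rng(1)
    X_cal = rng.normal(size=(40, 80))        # calibration spectra (master, no drift)
    D = rng.normal(size=(10, 80))            # difference spectra: slave (drift) minus master

    # Step 1: drift subspace from the leading right singular vectors of D
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    P = Vt[:2].T                             # basis of the (assumed two-dimensional) drift subspace

    # Step 2: orthogonal projection of the calibration data onto the complement
    X_corr = X_cal - X_cal @ P @ P.T
    print(X_corr.shape)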
Journal of Chemometrics. 2010. DOI : 10.1002/cem.1291.* Extents of reaction and flow for homogeneous reaction systems with inlet and outlet streams
This paper proposes two transformations for homogeneous reaction systems with inlet and outlet streams that allow isolating three distinct parts of the state vector, namely, the reaction variants, the reaction invariants but inlet-flow variants, and the reaction and inlet-flow invariants. In the absence of an outlet stream, as in batch and semi-batch reactors, the first transformation leads to the concepts of extents of reaction and extents of inlet flow. For reaction systems with an outlet stream, the second transformation uses key components to decouple the variables and arrive at the new concepts of generalized extents of reaction and generalized extents of inlet flow. These transformations are helpful for computing the reaction invariants for general open homogeneous reaction systems. Furthermore, the energy balance equation is shown to augment the number of reaction invariants by one. The various concepts are illustrated through the analysis of a simulated ethanolysis reaction.
AIChE Journal. 2010. DOI : 10.1002/aic.12125.Conference Papers
* Identification of reaction and mass-transfer rates in gas-liquid reaction systems
This paper deals with the identification of reaction and mass-transfer rates from concentrations measured in gas-liquid reaction systems. It is assumed that the reactions take place in the liquid bulk only. The identification proceeds in two steps: (i) estimation of the extents of reaction and mass transfer from concentration measurements, and (ii) estimation of the parameters of the individual reaction and mass-transfer rates from the extents. For the estimation of the individual extents, two cases are considered: if the concentrations of all the species in the liquid phase can be measured, a linear transformation is used; otherwise, if the concentrations of only subsets of the species can be measured in the gas and liquid phases, an approach that extends the linear transformation is proposed. The approach is illustrated in simulation via the chlorination of butanoic acid.
2010. International Symposium on "Thermodynamics and Transport processes", Recent and Emergent Advances in Chemical Engineering, IIT-Madras, Chennai, India, December 2-4, 2010. p. 70-75.* Non-iterative data-driven controller tuning with guaranteed stability: Application to direct-drive pick-and-place robot
This paper illustrates the practical application of non-iterative correlation-based tuning with guaranteed stability. In this method, a sufficient condition for closed-loop stability is expressed as a condition on the H-infinity norm of a particular error function. This norm is then estimated using data from one closed-loop experiment. The method is applied to a pick-and-place robot. It is shown that the proposed constraints for stability are effective without being overly conservative. Furthermore, it is shown how the method can be used to systematically design low-order controllers.
2010. 2010 IEEE Conference on Control Applications (CCA), Yokohama, September 8-10, 2010. p. 1005-1010. DOI : 10.1109/CCA.2010.5611118.* Experimental Real-Time Optimization of a Solid Oxide Fuel Cell Stack via Constraint Adaptation
The experimental validation of a real-time optimization (RTO) strategy for the optimal operation of a solid oxide fuel cell (SOFC) stack is reported in this paper. Unlike many existing studies, the RTO approach presented here utilizes the constraint-adaptation methodology, which assumes that the optimal operating point lies on a set of constraints and then seeks to satisfy those constraints in practice via bias update terms. These biases correspond to the difference between predicted and measured outputs and are updated at each steady-state iteration, allowing the RTO to successfully meet the optimal operating conditions of a 6-cell SOFC stack, despite significant plant-model mismatch. The effects of the bias update filter values and of the RTO frequency on the power tracking and constraint handling are also investigated.
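The bias-update rule underlying constraint adaptation can be sketched generically in Python; the filter gain and the predicted/measured outputs below are placeholders rather than values from the SOFC experiment.

    import numpy as np

    K_filter = 0.6                            # bias-update filter gain
    eps = np.zeros(2)                         # constraint biases
    for k in range(10):
        y_pred = np.array([0.80, 0.55])       # model-predicted constrained outputs at the current inputs
        y_meas = np.array([0.83, 0.50])       # measured plant outputs (placeholders)
        eps = (1 - K_filter) * eps + K_filter * (y_meas - y_pred)
        # the next model-based optimization then uses G_model(u) + eps <= G_max
    print("adapted constraint biases:", eps)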
2010. 23rd Int. Conference on Efficiency, Cost, Optimization, Simulation and Environmental Impact of Energy Systems, Lausanne, June 14-17, 2010. p. 141-148.* Two-Layered Real-Time Optimization of a Solid Oxide Fuel Cell Stack
The optimal operation of a solid oxide fuel cell stack is addressed in this paper. Real-time optimization, performed at a slow time scale via constraint adaptation, is used to account for uncertainty and degradation effects, while model-predictive control is performed at a faster time scale to reject process disturbances and to safely adapt the system to the specified output constraints following changes in cell power demand. To ensure that these constraints are strictly honored, a novel adaptation algorithm that uses the built-in constraint handling of quadratic programming is implemented within the model-predictive controller. An additional feature of this algorithm - its ability to adapt the feasibility region in view of uncertainty - is shown as well. Simulation results illustrate the efficacy of this approach in the solid oxide fuel cell system.
2010. 9th International Symposium on Dynamics and Control of Process Systems, Leuven, Belgium, July, 5-7, 2010. p. 839-844. DOI : 10.3182/20100705-3-BE-2011.0149.* Selective Input Adaptation in Parametric Optimal Control Problems involving Terminal Constraints
This paper is concerned with input adaptation in dynamic processes in order to guarantee feasible and optimal operation despite the presence of uncertainty. For optimal control problems having terminal constraints, two sets of directions can be distinguished in the input function space: the so-called sensitivity-seeking directions, along which a small input variation does not affect the terminal constraints, and the complementary constraint-seeking directions, along which a variation does affect the terminal constraints. Two selective input adaptation scenarios are thus possible, namely, adaptation along each set of input directions. This paper proves the important result that the cost variation due to the adaptation along the sensitivity-seeking directions is typically smaller than that due to the adaptation along the constraint-seeking directions.
2010. American Control Conference 2010, Baltimore, Maryland, USA, June 30 - July 2, 2010. p. 4782-4787. DOI : 10.1109/ACC.2010.5531115.* Minimal state representation of homogeneous reaction systems
Minimal state representations are parsimonious models having no redundant states. For homogeneous reaction systems with S species, R independent reactions, p independent inlet streams and one outlet stream, a nonlinear transformation of the numbers of moles to reaction variants, flow variants and constant invariants is proposed. The conditions under which this transformed system is a minimal state representation of order (R+p+1) are presented. A simulation example illustrates the theoretical developments.
2010. ESCAPE-20: European Symposium on Computer Aided Process Engineering, Ischia, Naples, Italy, 6-9 June 2010.Theses
* Non-Iterative Data-Driven Model Reference Control
In model reference control, the objective is to design a controller such that the closed-loop system resembles a reference model. In the standard model-based solution, a plant model replaces the unknown plant in the design phase. The norm of the error between the controlled plant model and the reference model is minimized. The order of the resulting controller depends on the order of the plant model. Furthermore, since the plant model is not exact, the achieved closed-loop performance is limited by the quality of the model. In recent years, several data-driven techniques have been proposed as an alternative to this model-based approach. In these approaches, the order of the controller can be fixed. Since no model is used, the problem of undermodeling is avoided. However, closed-loop stability cannot, in general, be guaranteed. Furthermore, these techniques are sensitive to measurement noise. This thesis treats non-iterative data-driven controller tuning. This controller tuning approach leads to an identification problem where the input is affected by noise, and not the output as in standard identification problems. A straightforward data-driven tuning scheme is proposed, and the correlation approach is used to deal with measurement noise. For linearly parameterized controllers, this leads to a convex optimization problem. The accuracy of the correlation approach is compared to that of several solutions proposed in the literature. It is shown that, if the order of the controller is fixed, both the correlation approach and a specific errors-in-variables approach can be used. The model reference controller-tuning problem is extended with a constraint that ensures closed-loop stability. This constraint is derived from stability conditions based on the small-gain theorem. For linearly parameterized controllers, the resulting optimization problem is convex. The proposed constraint for stability is conservative. As an alternative, a non-conservative a posteriori stability test is developed based on similar stability conditions. The proposed methods are applied to several numerical and experimental examples.
Lausanne, EPFL, 2010. DOI : 10.5075/epfl-thesis-4658.Book Chapters
* Control of Polymerization Processes
With an annual worldwide production well in excess of 100 million metric tons, synthetic polymers constitute a significant part of the modern chemical process industry. Polymer reactors - operated in continuous, batch, or semibatch mode - are therefore important processing units, but there are unique problems associated with controlling them effectively. The most significant characteristics of polymer reactors that make them among the most challenging units to model, control, and optimize are discussed in this chapter; we also provide a survey of the strategies that have been proposed, and those that have been successfully employed in industrial practice.
The Control Handbook, 2nd Edition; CRC Press, 2010. p. 12.1-12.23.Posters
* Exploration of signaling cycles using dynamic optimization
One of the basic characteristics of every living system is the ability to respond to extracellular signals. This is carried out through a limited number of protein-based signaling networks, whose function is not based only on simple transmission of the received signals, but incorporates the processing, encoding and integration of both external and internal signals. The results then lead to different changes in gene expression and regulate cell growth, differentiation, embryo development, and stress responses in mammalian cells, whereas malfunction correlates with diseases. Commonly observed instances of signal transduction through a series of protein kinase reactions are the kinases of the mitogen-activated protein kinase (MAPK) cascades. These pathways are found in almost all eukaryotes and play an important role in controlling different cellular processes, including fundamental functions. In order to better understand the nature of this regulation and to gain greater insight into the mechanisms that determine the function of cells, MAPK cascades have been intensively studied using mathematical modeling and computational simulations. The primary aim is to faithfully describe the system and to be able to predict the system behavior. Synergistically with experimental analysis, reported observations have identified properties of these pathways, such as rapid induction, noise resistance, amplification capability, threshold induction mechanism, etc. Here, we investigate one class of approaches for analyzing the relationship between network structure and functional behavior; the overall idea involves applying optimization techniques. By manipulating the desired functional behavior and by monitoring the corresponding parameter values, one can learn how model parameters and functions are related, and then be in a position to discover new design principles. The primary motivation was to explore whether there is any trade-off when simultaneously promoting large amplification and fast signal propagation. We identified the competing parameters in the linear tricyclic cascade and their values for the optimal design leading to minimal response times and a given amplification. We also incorporated “ultrasensitivity” in order to analyze the interplay between this steady-state property and the dynamic behavior of the system. Special emphasis is placed on the robustness of the resulting tricyclic cascades in the face of variations in kinase and phosphatase concentration ratios.
11th International Conference on Systems Biology, Edinburgh, Scotland, UK., October 11-14, 2010.Talks
* Exploration of Signaling Cycles using Dynamic Optimization
One of the basic characteristics of every living system is the ability to respond to extracellular signals. This is carried out through a limited number of protein-based signaling networks, whose function is not only based on simple transmission of the received signals, but incorporates the processing, encoding and integration of both external and internal signals. The results then lead to different changes in gene expression and regulate cell growth, mitogenesis, differentiation, embryo development, and stress responses in mammalian cells, whereas malfunction correlates with diseases. Commonly observed instances of signal transduction through a series of protein kinase reactions are the mitogen-activated protein kinase (MAPK) cascades. These pathways are found in almost all eukaryotes and play an important role in controlling different cellular processes, including fundamental functions. In order to better understand the nature of this regulation and to gain greater insight into the mechanisms that determine the function of cells, MAPK cascades have been intensively studied using mathematical modeling and computational simulations. The primary aim is to faithfully describe the system and to be able to predict the system behavior. Synergistically with experimental analysis, reported observations have identified properties of these pathways, such as rapid induction, noise resistance, amplification capability, threshold induction mechanism, resistance to “cross-talk”, etc. Here, we investigate one class of approaches for analyzing the relationship between network structure and functional behavior; the overall idea involves applying optimization techniques. By manipulating the desired functional behavior and by monitoring the corresponding parameter values, one can learn how model parameters and functions are related, and then be in a position to discover new design principles. The primary motivation was to explore whether there is any trade-off when simultaneously promoting large amplification and fast signal propagation. We identified the competing parameters in the linear tricyclic cascade and their values for the optimal design leading to minimal response times and a given amplification. We also incorporated “ultrasensitivity” in order to analyze the interplay between this steady-state property and the dynamic behavior of the system. Special emphasis is placed on the robustness of the resulting tricyclic cascades in the face of variations in kinase and phosphatase concentration ratios.
AIChE Annual Meeting, Salt Lake City, UT, USA, November 7-12, 2010.
* Model-Predictive Control of an Experimental Solid Oxide Fuel Cell Stack
Solid Oxide Fuel Cells (SOFC) are energy conversion devices that produce electrical energy via the reaction of a fuel with an oxidant. Although SOFCs have become credible alternatives to non-renewable energy sources, efforts are still needed to extend their applicability to a broader scope of applications, such as domestic appliances. SOFCs are typically operated continuously and are characterized by the presence of stringent operating constraints. Particularly, violating the constraint on the cell potential can severely damage a cell, while violating the upper bound on the fuel utilization can also induce negative effects due to fuel starvation. Hence, control and optimization are required to improve cost effectiveness, while respecting operational constraints. Among the numerous control strategies available in the literature, Model-Predictive Control (MPC) is an excellent candidate because it can handle constraints explicitly. Furthermore, the control inputs are obtained via the solution of a model-based optimization problem. Only the first moves of the resulting input profiles are applied to the process, and the procedure is repeated at the next sampling time. Constraints are often handled by penalizing the cost function for any constraint violation rather than by including constraints in the optimization problem. This approach, referred to as soft-constraint MPC, is advantageous since (i) the computational load is reduced, and (ii) it avoids the instabilities that MPC with hard output constraints typically induces. However, it also presents several drawbacks: (i) the constraints can be violated, (ii) an oscillatory behavior is often observed, and (iii) the performance is weight dependent. Consequently, hard-constraint MPC will be considered in this study. Because of the aforementioned stability issues, it is proposed to linearize the nonlinear output constraints with respect to the inputs, thus resulting in linear hard constraints on the inputs. In addition, a bias term is introduced in these linearized constraints to handle inaccuracies by artificially reducing the size of the feasible region. The bias term is then adapted using measurements, which leads to improved performance via a progressive, yet safe, expansion of the feasible region. This hard-constraint MPC approach is validated experimentally.
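As a rough illustration of the constraint-handling idea summarized above, the following Python sketch linearizes a nonlinear output constraint around the current input, tightens it with a bias term, and adapts that bias from the measured plant-model mismatch. All functions, bounds, gains and the filter values are hypothetical placeholders, not the experimental SOFC implementation.

    import numpy as np
    from scipy.optimize import minimize

    # Only the constraint linearization and bias adaptation are shown; the process dynamics
    # and the prediction horizon of the MPC problem are omitted for clarity.
    def g_model(u):                      # hypothetical nonlinear output constraint, g(u) <= 0
        return 0.8 * u[0] ** 2 + 0.5 * u[1] - 1.0

    def g_plant(u):                      # stand-in for the measured plant constraint
        return g_model(u) + 0.1 * u[0]

    def cost(u):                         # placeholder economic objective (to be minimized)
        return -(1.2 * u[0] + 0.8 * u[1])

    def grad(f, u, h=1e-6):              # finite-difference gradient
        return np.array([(f(u + h * e) - f(u)) / h for e in np.eye(len(u))])

    u, bias = np.array([0.2, 0.2]), 0.0  # the bias artificially shrinks the feasible region
    for k in range(20):
        gu, dgu = g_model(u), grad(g_model, u)
        # linear hard constraint on the inputs: g(u_k) + dg(u_k)'(v - u_k) + bias <= 0
        lin_con = {"type": "ineq",
                   "fun": lambda v, gu=gu, dgu=dgu, u0=u.copy(), b=bias:
                          -(gu + dgu @ (v - u0) + b)}
        u = minimize(cost, u, constraints=[lin_con],
                     bounds=[(0.0, 1.0), (0.0, 1.0)]).x
        # measurement-based adaptation of the bias (filtered plant-model constraint mismatch)
        bias = 0.7 * bias + 0.3 * (g_plant(u) - g_model(u))

The 0.7/0.3 filter split is an arbitrary choice that trades adaptation speed against noise sensitivity.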
7th Symposium on Fuel Cell Modeling and Experimental Validation, Morges, CH, March 23-24.
2009
Journal Articles
* Incremental identification of kinetic models for homogeneous reaction systems
Chemical Engineering Science. 2009. DOI : 10.1016/j.ces.2008.09.006.
* Data Reconciliation of Concentration Estimates from Mid-Infrared and Dielectric Spectral Measurements for Improved On-Line Monitoring of Bioprocesses
Real-time data reconciliation of concentration estimates of process analytes and biomass in microbial fermentations is investigated. A Fourier-transform mid-infrared spectrometer predicting the concentrations of process metabolites is used in parallel with a dielectric spectrometer predicting the biomass concentration during a batch fermentation of the yeast Saccharomyces cerevisiae. Calibration models developed off-line for both spectrometers suffer from poor predictive capability due to instrumental and process drifts unseen during calibration. To address this problem, the predicted metabolite and biomass concentrations, along with off-gas analysis and base addition measurements, are reconciled in real-time based on the closure of mass and elemental balances. A statistical test is used to confirm the integrity of the balances, and a non-negativity constraint is used to guide the data reconciliation algorithm toward positive concentrations. It is verified experimentally that the proposed approach reduces the standard error of prediction without the need for additional off-line analysis.
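A minimal sketch of the reconciliation step described above, under invented numbers: the concentration estimates are adjusted in a weighted least-squares sense so that assumed mass/elemental balances close exactly and concentrations stay non-negative, and a chi-square-type statistic on the raw balance residuals serves as the integrity test. This is not the calibration or instrument interface used in the study.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical balance matrix (rows: closed mass/elemental balances), E @ c_true = 0
    E = np.array([[1.0, -1.0,  0.5],
                  [0.0,  2.0, -1.0]])
    c_meas = np.array([5.2, 4.9, 10.4])          # spectroscopic concentration estimates
    sigma = np.array([0.3, 0.3, 0.8])            # assumed standard errors of the estimates

    def wls(c):                                  # weighted least-squares reconciliation cost
        return np.sum(((c - c_meas) / sigma) ** 2)

    res = minimize(wls, np.clip(c_meas, 0.0, None),
                   constraints=[{"type": "eq", "fun": lambda c: E @ c}],
                   bounds=[(0.0, None)] * len(c_meas))
    c_rec = res.x                                # reconciled, balance-consistent estimates

    # chi-square-type test on the balance residuals of the raw estimates
    residual = E @ c_meas
    S = E @ np.diag(sigma ** 2) @ E.T
    test_stat = residual @ np.linalg.solve(S, residual)   # compare with a chi2 threshold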
Biotechnology Progress. 2009. DOI : 10.1002/btpr.143.
* Classification of magnetic resonance images from rabbit renal perfusion
The feasibility of using chemometric techniques for the automatic detection of whether a rabbit kidney is pathological or not is studied. Sequential images of the kidney are acquired using Dynamic Contrast-Enhanced Magnetic Resonance Imaging with contrast agent injection. A segmentation approach based upon principal component analysis (PCA) is used to separate out the cortex from the rest of the kidney including the medulla, the renal pelvis, and the background. Two classifiers (Soft Independent Method of Class Analogy, SIMCA; Partial Least Squares Discriminant Analysis, PLS-DA) are tested for various types of data pre-treatment including segmentation, feature extraction, centering, autoscaling, standard normal variate transformation, Savitzky-Golay smoothing, and normalization. It is shown that (i) the renal cortex contains more discriminating information on kidney perfusion changes than the whole kidney, and (ii) the PLS-DA classifiers outperform the SIMCA classifiers. PLS-DA, preceded by an automated PCA-based segmentation of kidney anatomical regions, correctly classified all kidneys and constitutes a classification tool of the renal function that can be useful for the clinical diagnosis of renovascular diseases.
Chemometrics and Intelligent Laboratory Systems. 2009. DOI : 10.1016/j.chemolab.2009.06.004.
* Modifier-Adaptation Methodology for Real-Time Optimization
The ability of a model-based real-time optimization scheme to converge to the plant optimum relies on the ability of the underlying process model to predict the plant's necessary conditions of optimality (NCO). These include the values and gradients of the active constraints as well as the gradient of the cost function. Hence, in the presence of plant-model mismatch or unmeasured disturbances, one could measure the plant NCO and use them for tracking the plant optimum. This paper shows how the optimization problem can be modified to incorporate information regarding the plant NCO. The so-called modifiers, which express the difference between the measured or estimated plant NCO and those predicted by the model, are added to the constraint and cost functions in a modified optimization problem and are adapted iteratively. Local convergence and model-adequacy issues are analyzed. The modifier-adaptation scheme is tested experimentally on a three-tank system.
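The following toy Python sketch illustrates the modifier-adaptation iteration described above for a scalar input: zeroth-order and first-order modifiers are filtered updates of the measured plant-model differences in constraint value and in cost/constraint gradients, and the modified model-based problem is re-solved at each iteration. The plant and model functions, the filter gain and the crude penalty treatment of the constraint are all illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy model and "plant" (model with structural mismatch); one cost and one constraint
    phi_mod = lambda u: (u - 2.0) ** 2            # model cost
    g_mod   = lambda u: u - 3.0                   # model constraint, g <= 0
    phi_pl  = lambda u: (u - 2.5) ** 2 + 0.2 * u  # plant cost (unknown in practice)
    g_pl    = lambda u: 1.2 * u - 3.0             # plant constraint (measured)

    def fd(f, u, h=1e-5):                         # finite-difference gradient; for the plant
        return (f(u + h) - f(u - h)) / (2 * h)    # this would require plant experiments

    u, K = 1.0, 0.5                               # current input and filter gain
    eps, lam_g, lam_phi = 0.0, 0.0, 0.0           # zeroth- and first-order modifiers
    for k in range(15):
        # modifiers: plant-model offsets in constraint value and in cost/constraint gradients
        eps     = (1 - K) * eps     + K * (g_pl(u) - g_mod(u))
        lam_g   = (1 - K) * lam_g   + K * (fd(g_pl, u) - fd(g_mod, u))
        lam_phi = (1 - K) * lam_phi + K * (fd(phi_pl, u) - fd(phi_mod, u))
        # modified model-based optimization (constraint handled by a crude penalty here)
        phi_m = lambda v: phi_mod(v) + lam_phi * (v - u)
        g_m   = lambda v: g_mod(v) + eps + lam_g * (v - u)
        res = minimize_scalar(lambda v: phi_m(v) + 1e3 * max(0.0, g_m(v)) ** 2,
                              bounds=(0.0, 5.0), method="bounded")
        u = res.x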
Industrial & Engineering Chemistry Research. 2009. DOI : 10.1021/ie801352x.
* Drift Correction in Multivariate Calibration Models Using On-line Reference Measurements
On-line measurements from first-order instruments such as spectrometers may be compromised by instrumental, process and operational drifts that are not seen during off-line calibration. This can render the calibration model unsuitable for prediction of key components such as analyte concentrations. In this work, infrequently available on-line reference measurements of the analytes of interest are used for drift correction. The drift-correction methods that include drift in the calibration set are referred to as implicit correction methods (ICM), while explicit correction methods (ECM) model the drift based on the reference measurements and make the calibration model orthogonal or invariant to the space spanned by the drift. Under some working assumptions such as linearity between the concentrations and the spectra, necessary and sufficient conditions for correct prediction using ICM and ECM are proposed. These so-called space-inclusion conditions can be checked on-line by monitoring the Q-statistic. Hence, violation of these conditions implies the violation of one or more of the working assumptions, which can be used e.g. to infer the need for new reference measurements. These conditions are also valid for rank-deficient calibration data, i.e. when the concentrations of the various species are linearly dependent. A constraint on the kernel used in ECM follows from the space-inclusion condition. This kernel does not estimate the drift itself but leads to an unbiased estimate of the drift space. In a noise-free environment, it is shown that ICM and ECM are equivalent. However, in the presence of noise, a Monte Carlo simulation shows that ECM performs slightly better than ICM. A paired t-test indicates that this difference is statistically significant. When applied to experimental fermentation data, ICM and ECM lead to a significant reduction in prediction error for the concentrations of five metabolites predicted from infrared spectra.
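As a simplified sketch of the explicit correction idea (ECM) discussed above, the snippet below estimates a drift subspace from reference spectra (here, simply as the part of those spectra lying outside the calibration row space), makes the calibration orthogonal to it, and monitors a Q-statistic-type residual. The data, the dimensions and the two-dimensional drift rank are invented, and the kernel proposed in the paper is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(40, 60)), rng.normal(size=40)     # calibration spectra / labels
    X_ref = rng.normal(size=(3, 60))                          # on-line reference spectra (drifted)

    # drift estimate: part of the reference spectra outside the calibration row space
    P_cal = np.linalg.pinv(X) @ X                             # projector onto calibration space
    resid = X_ref @ (np.eye(X.shape[1]) - P_cal)
    _, s, Vt = np.linalg.svd(resid, full_matrices=False)
    D = Vt[:2]                                                # assumed 2-dimensional drift space

    # explicit correction: make the data orthogonal (invariant) to the drift space, then calibrate
    P_orth = np.eye(X.shape[1]) - D.T @ D
    b_corr = np.linalg.lstsq(X @ P_orth, y, rcond=None)[0]

    x_new = rng.normal(size=60)                               # new, possibly drifted spectrum
    y_hat = (P_orth @ x_new) @ b_corr

    # Q-statistic-type residual for monitoring the space-inclusion condition
    Q = float(np.linalg.norm((np.eye(X.shape[1]) - P_cal) @ (P_orth @ x_new)) ** 2)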
Analytica Chimica Acta. 2009. DOI : 10.1016/j.aca.2008.12.044.
* Neighboring-Extremal Control for Singular Dynamic Optimization Problems. II - Multiple-Input Systems
Dynamic optimization provides a unified framework for improving process operations while taking operational constraints into account. In the presence of uncertainty, measurements can be incorporated into the optimization framework for tracking the optimum. For nonsingular control problems, neighboring-extremal (NE) control can be used to force the first-order variation of the necessary conditions of optimality (NCO) to zero along interior arcs. An extension of NE control to singular control problems has been proposed in the companion paper for single-input problems. In this paper, a generalization to multiple-input systems is presented. In order for these controllers to be tractable from a real-time optimization perspective, an approximate NE feedback law is proposed, whose application guarantees, under mild assumptions, that the first-order variation of the NCO converges to zero exponentially. The performance of multi-input NE control is illustrated by the case study of a steered car.
Int. Journal of Control. 2009. DOI : 10.1080/00207170802460032.
* Neighboring-Extremal Control for Singular Dynamic Optimization Problems. I - Single-Input Systems
A powerful approach for dynamic optimization in the presence of uncertainty is to incorporate measurements into the optimization framework so as to track the optimum. For nonsingular control problems, this can be done by tracking active constraints along boundary arcs and using neighboring-extremal (NE) control along interior arcs. Essentially, NE control forces the first-order variation of the necessary conditions of optimality (NCO) to zero. In this paper, an extension of NE control to singular control problems is proposed. The paper focuses on single-input systems, while the extension to multiple-input systems is investigated in the companion paper. The idea is to design NE controllers from successive time differentiations of the first-order variation of the NCO. Approximate NE feedback laws are also proposed, which are both easily implementable and tractable from a real-time optimization perspective. These developments are illustrated by the case study of a semi-batch chemical reactor.
Int. Journal of Control. 2009. DOI : 10.1080/00207170802460024.
Conference Papers
* Quotient method for controlling the acrobot
This paper describes a two-sweep control design method to stabilize the acrobot, an input-affine under-actuated system, at the upper equilibrium point. In the forward sweep, the system is successively reduced, one dimension at a time, until a two-dimensional system is obtained. At each step of the reduction process, a quotient is taken along one-dimensional integral manifolds of the input vector field. This decomposes the current manifold into classes of equivalence that constitute a quotient manifold of reduced dimension. The input to a given step becomes the representative of the previous-step equivalence class, and a new input vector field can be defined on the tangent of the quotient manifold. The representatives remain undefined throughout the forward sweep. During the backward sweep, the controller is designed recursively, starting with the two-dimensional system. At each step of the recursion, a representative of the equivalence class ahead of the current level of recursion is chosen so as to guarantee stability of the current step. The global system is thus stabilized once the backward sweep is complete. Although stability can only be guaranteed locally around the upper equilibrium point, the domain of attraction can be enlarged to include the lower equilibrium point, thereby allowing a swing-up implementation. As a result, the controller does not require switching, which is illustrated in simulation. The controller has four tuning parameters, which help shape the closed-loop behavior.
2009. CDC 2009, Shanghai, December 2009. p. 1770-1775. DOI : 10.1109/CDC.2009.5400729.
* Partial least-squares regression with unlabeled data
It is well known that the prediction errors from principal component regression (PCR) and partial least-squares regression (PLSR) can be reduced by using both labeled and unlabeled data for stabilizing the latent subspaces in the calibration step. An approach using Kalman Filtering has been proposed to optimally use unlabeled data with PLSR. In this work, a sequential version of this optimized PLSR as well as two new PLSR models with unlabeled data, namely PCA-based PLSR (PLSR applied to PCA-preprocessed data) and imputation PLSR (iterative procedure to impute the missing labels), are proposed. It is shown analytically and verified with both simulated and real data that the sequential version of the optimized PLSR is equivalent to PCA-based PLSR.
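A small numpy sketch of the semi-supervised idea above: the latent subspace is estimated by PCA on the pooled labeled and unlabeled spectra, and the regression is then fitted on the scores of the labeled samples only. For brevity the sketch uses ordinary least squares on the scores (a PCR-like variant), whereas the paper applies PLSR to the PCA-preprocessed data; all data and the number of latent directions are invented.

    import numpy as np

    rng = np.random.default_rng(1)
    X_lab, y_lab = rng.normal(size=(20, 50)), rng.normal(size=20)   # labeled spectra / labels
    X_unlab = rng.normal(size=(200, 50))                            # unlabeled spectra

    # PCA on the pooled (labeled + unlabeled) data to stabilize the latent subspace
    X_all = np.vstack([X_lab, X_unlab])
    mean = X_all.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_all - mean, full_matrices=False)
    W = Vt[:5].T                                                    # assumed 5 latent directions

    # regression on the scores of the labeled samples only
    T_lab = (X_lab - mean) @ W
    coef = np.linalg.lstsq(T_lab, y_lab - y_lab.mean(), rcond=None)[0]

    def predict(x_new):
        return y_lab.mean() + ((x_new - mean) @ W) @ coef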
2009. 6th Int. Conf. on Partial Least Squares and Related Methods, Beijing, September 2009. p. 102-105.
* Data-driven controller validation
This paper proposes a data-driven test for closed-loop stability. The test is based on a non-conservative stability condition that can be verified without having to actually implement the controller. It uses a set of measurements from the plant but does not rely on a plant model. For infinite data length, a validated controller is guaranteed to stabilize the plant. In practice, however, only a finite number of noisy data can be used, and thus only an estimate of the stability condition can be obtained. A reliable stability test needs to take this estimation uncertainty into account, which introduces conservatism. In the proposed test, two variables are available to control the trade-off between reliability and conservatism in an intuitive way. A simulation example shows the effectiveness of the stability test.
2009. 15th IFAC Symposium on System Identification, St. Malo, France, July 6-8, 2009. DOI : 10.3182/20090706-3-FR-2004.00174.
* Parametric Sensitivity of Path-Constrained Optimal Control: Towards Selective Input Adaptation
In the context of dynamic optimization, plant variations necessitate adaptation of the input profiles in order to guarantee both feasible and optimal operation. For those problems having path constraints, two sets of directions can be distinguished in the input space at each time instant: the so-called sensitivity-seeking directions, along which a small input variation does not affect the active path constraints; the complementary constraint-seeking directions, along which a variation affects the path constraints. Hence, three selective input adaptations are possible, namely, adaptation along each set of input directions and adaptation of the switching times between arcs. This paper considers parametric variations around a nominal optimal solution and quantifies the influence of these variations on each type of input adaptation.
2009. American Control Conference 2009, St. Louis, Missouri, USA, June 10-12, 2009. p. 349-354. DOI : 10.1109/ACC.2009.5160142.
Theses
* Modifier-adaptation methodology for real-time optimization
The process industries are characterized by a large number of continuously operating plants, for which optimal operation is of economic importance. However, optimal operation is particularly difficult to achieve when the process model used in the optimization is inaccurate or in the presence of process disturbances. In highly automated plants, optimal operation is typically addressed by a decision hierarchy involving several levels that include plant scheduling, real-time optimization (RTO), and process control. At the RTO level, medium-term decisions are made by considering economic objectives explicitly. This step typically relies on an optimizer that determines the optimal steady-state operating point under slowly changing conditions such as catalyst decay or changes in raw material quality. This optimal operating point is characterized by setpoints that are passed to lower-level controllers. Model-based RTO typically involves nonlinear first-principles models that describe the steady-state behavior of the plant. Since accurate models are rarely available in industrial applications, RTO typically proceeds using an iterative two-step approach, namely a parameter estimation step followed by an optimization step. The idea is to repeatedly estimate selected uncertain model parameters and use the updated model to generate new inputs via optimization. This way, the model is expected to yield a better description of the plant at its current operating point. The classical two-step approach works well provided that (i) there is little structural plant-model mismatch, and (ii) the changing operating conditions provide sufficient excitation for estimating the uncertain model parameters. Unfortunately, such conditions are rarely met in practice and, in the presence of plant-model mismatch, the algorithm might not converge to the plant optimum, or worse, to a feasible operating point. As far as feasibility is concerned, the updated model should be able to match the plant constraints. Alternatively, feasibility can be enforced without requiring the solution of a parameter estimation problem by adding plant-model bias terms to the model outputs. These biases are obtained by subtracting the model outputs from the measured plant outputs. A bias-update scheme, where the bias terms are used to modify the constraints in the steady-state optimization problem, has been used in industry. However, the analysis of this scheme has received little attention in the research community. In the context of this thesis, such an RTO scheme is referred to as constraint adaptation. The constraint-adaptation scheme is studied, and its local convergence properties are analyzed. Constraint adaptation guarantees reaching a feasible operating point upon convergence. However, the constraints might be violated during the iterations of the algorithm, even when starting the adaptation from within the feasible region. Constraint violations can be avoided by controlling the constraints in the optimization problem, which is done at the process control level by means of model predictive control (MPC). The approach for integrating constraint adaptation with MPC described in this thesis places high emphasis on how constraints are handled. An alternative constraint-adaptation scheme is proposed, which permits one to move the constraint setpoints gradually in the constraint controller. 
The constraint-adaptation scheme, with and without the constraint controller, is illustrated in simulation through the real-time optimization of a fuel-cell system. It is desirable for a RTO scheme to achieve both feasibility and optimality. Optimality can be achieved if the underlying process model is able to predict not only the constraint values of the plant, but also the gradients of the cost and constraint functions. In the presence of structural plant-model mismatch, this typically requires the use of experimental plant gradient information. Methods integrating parameter estimation with a modified optimization problem that uses plant gradient information have been studied in the literature. The approach studied in this thesis, denoted modifier adaptation, does not require parameter estimation. In addition to the modifiers used in constraint adaptation, gradient-modifier terms based on the difference between the estimated and predicted gradient values are added to the cost and constraint functions in the optimization problem. With this, a point that satisfies the first-order necessary conditions of optimality for the plant is obtained upon convergence. The modifier-adaptation scheme is analyzed in terms of model adequacy and local convergence conditions. Different filtering strategies are discussed. The constraint-adaptation and modifier-adaptation RTO approaches are illustrated experimentally on a three-tank system. Finite-difference techniques can be used to estimate experimental gradients. The dual modifier-adaptation approach studied in this thesis drives the process towards optimality, while paying attention to the accuracy of the estimated gradients. The gradients are estimated from the successive operating points generated by the optimization algorithm. A novel upper bound on the gradient estimation error is developed, which is used as a constraint for locating the next operating point.
Lausanne, EPFL, 2009. DOI : 10.5075/epfl-thesis-4449.
Posters
* On the bias-variance trade-off in principal component regression with unlabeled data
11th Scandinavian Symposium on Chemometrics, Loen, Norway, June 8-11, 2009.
Talks
* Correction of systematic disturbances in latent-variable calibration models
11th Scandinavian Symposium on Chemometrics, Loen, Norway, June 8-11, 2009.
2008
Journal Articles
* Process Optimization via Constraints Adaptation
In the framework of real-time optimization, measurement-based schemes have been developed to deal with plant-model mismatch and process variations. These schemes differ in how the feedback information from the plant is used to adapt the inputs. A recent idea therein is to use the feedback information to adapt the constraints of the optimization problem instead of updating the model parameters. These methods are based on the observation that, for many problems, most of the optimization potential arises from activating the correct set of constraints. In this paper, we provide a theoretical justification of these methods based on a variational analysis. Then, various aspects of the constraint-adaptation algorithm are discussed, including the detection of active constraints and convergence issues. Finally, the applicability and suitability of the constraint-adaptation algorithm are demonstrated with the case study of an isothermal stirred-tank reactor.
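A toy sketch of the constraint-adaptation iteration described above (the zeroth-order special case of the modifier scheme sketched earlier in this list): the measured plant-model difference in the constraint values is filtered into a correction term that shifts the constraints of the model-based optimization. Functions, filter gain and the number of iterations are invented.

    import numpy as np
    from scipy.optimize import minimize

    phi   = lambda u: (u[0] - 2.0) ** 2 + (u[1] - 1.0) ** 2     # model cost (to minimize)
    g_mod = lambda u: np.array([u[0] + u[1] - 2.0])             # model constraint, g <= 0
    g_pl  = lambda u: np.array([1.3 * u[0] + u[1] - 2.0])       # plant constraint (measured)

    u, eps, b = np.zeros(2), np.zeros(1), 0.7                   # input, correction term, filter
    for k in range(15):
        eps = (1 - b) * eps + b * (g_pl(u) - g_mod(u))          # adapt the constraint bias
        con = {"type": "ineq", "fun": lambda v, e=eps: -(g_mod(v) + e)}
        u = minimize(phi, u, constraints=[con]).x               # model-based optimization
    # upon convergence g_mod(u) + eps matches g_pl(u), so the plant constraint is respected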
Journal of Process Control. 2008. DOI : 10.1016/j.jprocont.2007.07.001.
* Closed-loop Identification of Multivariable Systems: With or Without Excitation of All References?
The variance of the parameters of a plant belonging to a class of multivariable systems and estimated in closed-loop operation is analyzed. More specifically, having in mind the control applications where it is not desirable to excite all external reference inputs, the effect of the absence of one or more reference signals on the variance of the estimated parameters is investigated. The derived expressions are valid for a wide range of model structures including all conventional prediction error models. It is shown that, regardless of the parametrization, the absence of a reference signal never improves and, in most cases, impairs the accuracy of the parameter estimates. In other words, there is a price to pay when restrictions are imposed on the experimental conditions. The analytical results are illustrated by two simulation examples.
Automatica. 2008. DOI : 10.1016/j.automatica.2007.11.016.
Conference Papers
* NCO Tracking for Singular Control Problems Using Neighboring Extremals
A powerful approach for dynamic optimization in the presence of uncertainty is to incorporate measurements into the optimization framework so as to track the necessary conditions of optimality (NCO), the so-called NCO-tracking approach. For nonsingular control problems, this can be done by tracking active constraints along boundary arcs, and using neighboring-extremal (NE) control along interior arcs to force the first-order variation of the NCO to zero. In this paper, an extension of NE control to singular control problems is proposed. The idea is to design NE controllers from successive time differentiations of the first-order variation of the NCO. Based on these results, an NCO-tracking controller that is easily tractable from a real-time optimization perspective is proposed, whose application guarantees that the first-order variation of the NCO converges to zero exponentially. The performance of this NCO-tracking controller is illustrated via the case study of a steered car, a 5th-order two-input dynamical system.
2008. 17th IFAC World Congress (IFAC'08), Seoul, Korea, July 6-11, 2008. p. 1922-1927. DOI : 10.3182/20080706-5-KR-1001.00327.
* Measurement-Based Drift Correction in Spectroscopic Calibration Models
Correct prediction of analyte concentrations from a new spectrum without drift is possible provided the spectrum lies in the row space spanned by the calibration spectra (space-inclusion condition). However, this condition may be violated as on-line spectrometers are compromised by instrumental, process and operational drifts that are not seen during calibration. A space-inclusion condition, which new spectra possibly corrupted with drift should fulfill, is proposed for drift-correction methods. These methods are characterized as either explicit or implicit based on whether or not drift is estimated using on-line reference measurements. A property of the kernel used in explicit methods is proposed based on the space-inclusion condition. The results are illustrated with a simulation study that uses mathematical models for different drift types.
2008. 11th Conference on Chemometrics in Analytical Chemistry (CAC 2008), Montpellier, June 30 - July 4, 2008. p. 117-121.
* Data-driven controller tuning with integrated stability constraint
This paper presents a data-driven controller-tuning algorithm that includes a sufficient condition for closed-loop stability. This stability condition is defined by a set of convex constraints on the Fourier transform of specific auto- and cross-correlation functions. The constraints are included in a correlation-based controller-tuning method that solves a model-reference problem. This entirely data-driven method requires a single experiment and can also be applied to nonminimum-phase and unstable systems. The resulting controller is guaranteed to stabilize the plant as the data length tends to infinity. The performance with finite data length is illustrated through a simulation example.
2008. 47th IEEE Conference on Decision and Control, Cancun, Mexico, December 9-11, 2008. p. 2612-2617. DOI : 10.1109/CDC.2008.4739326.
* Rank augmentation of spectral reaction data using calorimetric and gas consumption data
2008. 11th Conference on Chemometrics in Analytical Chemistry, Montpellier, France, June 30 - July 4, 2008. p. 87-92.
* Rank analysis of spectral reaction data for factor-analytical methods
2008. 11th Conference on Chemometrics in Analytical Chemistry, Montpellier, France, June 30 - July 4, 2008. p. 123-128.
Theses
* Robust model development and enhancement techniques for improved on-line spectroscopic monitoring of bioprocesses
The recent phenomenal growth of the field of biotechnology has contributed to a mounting pressure to improve process efficiency and productivity and to increase the quality and safety of end-products. This demand gave rise to the discipline of on-line bioprocess monitoring encompassing tools that provide a live analytical window into the process and create extensive opportunities for process development, control and optimization. Among these tools, on-line spectroscopy has surfaced as one of the prominent techniques of monitoring the concentration of process metabolites and biomass. Unfortunately, one of the major obstacles currently impeding the industrial spread of the technology is the chronic lack of robustness and long-term stability of spectrometers in on-line monitoring conditions. The work presented in this dissertation aims to explore various ways of improving the reliability of spectroscopic bioprocess monitoring instruments without interfering with their real-time functionality. A Fourier-transform mid-infrared (FTIR) and a dielectric (capacitance) spectrometer are used as the model instruments in a series of experiments involving the cultivation of yeasts. A general review of methods that help maintain the on-line reliability of bioprocess spectrometers is presented first. A clear distinction is made between techniques that involve retrospective reprocessing of the obtained predictions using off-line measurements, and methods that perform the signal or calibration model correction in real-time. A case study, included in the review, demonstrates the effectiveness of some of the latter techniques in correcting mid-IR spectral drift comparable in magnitude of absorbance to a pure component spectrum of glucose at 10 g/l. It is shown that the drift can be significantly reduced using techniques such as spectrum derivation, spectral anchoring and Orthogonal Signal Correction (OSC). Proposed next is a technique to generate on-line reference standards for the FTIR without the need of sampling. The method involves the periodic injection of small amounts of the monitored metabolites into the culture medium. The corresponding measured differences in the spectra are used as reference measurements for recalibrating the model in real-time based on the technique of Dynamic Orthogonal Projection (DOP). Applying this approach leads to a decrease, ranging from 25 to 50 %, in the standard error of prediction of metabolite concentrations. The following study compares three distinct methods of calibrating a dielectric spectrometer: fitting capacitance data to the theoretical Cole-Cole equation, correlating capacitance measurements linearly to biomass concentration and the modeling of scanning capacitance spectra using multivariate (PLS) analysis. The performance and robustness of each calibration technique is assessed during a sequence of validation batches in two experimental settings differing in the level of signal noise. The linear and PLS models outperform the Cole-Cole model in terms of biomass concentration prediction error, particularly in the more noisy conditions. The PLS model proves to be the most robust in rejecting the signal variability. Estimates of the mean cell size are additionally done using the Cole-Cole and PLS models, the latter technique giving more precise results. Finally, in a study involving the simultaneous use of the FTIR and capacitance spectrometers, data reconciliation is shown to improve the on-line prediction of process analytes and biomass. 
The concentrations predicted by both spectrometers are reconciled in real-time based on mass and elemental balances involving off-gas analysis and measurements of base addition. A statistical test is used to confirm the integrity of the balances before the reconciliation. The technique leads to a significant reduction in the standard error of prediction for all the components involved.
Lausanne, EPFL, 2008. DOI : 10.5075/epfl-thesis-4013.
* Preferential estimation
State estimation is a necessary component of advanced monitoring and control techniques, since these techniques often require information that is too expensive or impossible to obtain from direct measurements. The objective of estimation is the reconstruction of the missing information from both the available measurements and prior knowledge in the form of a dynamic model. Usually, full-state estimation is considered because of the close link between estimation and the state feedback literature. By having an accurate estimate of all states, the entire system can be controlled, provided the system is controllable. However, since in some cases the goal is to control only a subset of the states, knowledge of all states is not required. The objective of this thesis is to estimate accurately a vector of preferred variables, whose dimension is much lower than that of the full state vector, while paying no attention to the accuracy of the estimates of the remaining variables. Such a problem might arise, for example, when optimizing a process by tracking active constraints. Biased estimates are often obtained due to the presence of plant-model mismatch. This mismatch can be regarded as a deterministic disturbance. In addition, the measurements of key variables might be available less frequently than the output measurements. The problem of preferential estimation (PE) is formulated as that of eliminating the bias in the estimates of the preferred variables using their infrequent measurements and a full-order model. Hence, the measurements are handled at two time scales. Such a concept has been studied thoroughly in the literature for the purpose of standard estimation, i.e. estimating all states accurately, for which infrequent measurements of all states are needed. The advantage of PE is to require a smaller number of measurements, despite using the full-order model. The following observer structures are studied in the thesis: Proportional observer. This structure contains a correction term proportional to errors obtained from the frequent measurements of the output variables. The gains corresponding to this term are computed from the infrequent measurements of the preferred variables, thus leading to a calibration-type approach. It is shown that bias can be eliminated in the preferred variables by an appropriate choice of the gains. Due to the observer structure, a different set of gains is required for each disturbance value. Hence, the gains have to be retuned each time the disturbances change or, since the disturbances are not measurable, each time a new measurement of the preferred variables becomes available. Integral observer. In addition to the proportional term based on the frequent measurements of the output variables, this structure contains an integral term based on the infrequent measurements of the preferred variables. Hence, this observer has a dual-rate structure. The presence of the integral term guarantees bias elimination in the preferred variables even for varying disturbances, provided the observer is stable. It is shown that stability can be guaranteed, and a procedure for tuning the observer gains is provided. The design parameters in this procedure can also be determined using a calibration-type approach. To simplify the mathematical developments, PE is formulated for linear time-invariant (LTI) systems. Its performance is investigated both analytically and through simulation. 
Though the analysis is restricted to LTI systems, the idea extends to more general systems, which is demonstrated via the estimation of biomass and enzyme concentrations in a pilot-scale filamentous fungal fermentation.
Lausanne, EPFL, 2008. DOI : 10.5075/epfl-thesis-4004.
* Jet-scheduling control for flat systems
The control of cranes and nonholonomic robots has gained increased interest mainly because of the civilian and military industrial need to achieve fast and accurate transport of goods and equipment. Old and new harbors are now venturing into fully automated systems combining automated trolleys and classical cranes. From a theoretical viewpoint these systems are challenging because they are strongly dynamically coupled and offer interesting and useful control problems. Therefore, to take full advantage of their potential, the control design must take into account as much structural information as possible. The structural property that is exploited in the control design proposed in this thesis is the differential flatness property of these systems, that is the existence of particular functions of the states (called flat outputs), the time parametrization of which implies parametrization of all the individual states and inputs. This property is extremely useful for motion planning problems where the system should move quickly from one configuration to another, without inducing too much overshoot or residual oscillations. However, the flatness property is not sufficient to guarantee the design of an efficient controller in the presence of uncertain and unmodeled dynamics. This is especially the case for cranes where the winching mechanism, expressed in terms of the engine and pulleys, has a large amount of unmodeled dry friction. This robustness issue is normally addressed by splitting the control task into a feedforward-like part that handles the dynamical couplings and a feedback term that enforces the tracking of the reference values stemming from the feedforward motion planning algorithm. In contrast, this thesis proposes to combine these two mechanisms, resulting in what will be called the jet-scheduling controller. Classically, the flatness property guarantees the construction of a feedforward input based on a planned motion of the flat outputs by simply combining values of the flat outputs and their time derivatives, i.e. without having to integrate differential equations. Therefore, in the absence of perturbation, this mechanism is sufficient to move the system from one state to another, once a trajectory compatible with the initial and final positions has been designed. However, when the system has some unmodeled dynamics, an additional mechanism must be provided to make sure that the planned trajectory is indeed tracked accurately. The point of view adopted in this thesis is that, instead of specifying a trajectory to be tracked explicitly, a dynamical system called "the jet scheduler" provides the derivatives (the jets) of an ideal stabilizing trajectory. These jets are updated regularly according to measurements so as to react to unknown perturbations. The flat correspondence is used to provide the values of the jets, and a subsidiary controller is designed to ensure that these jets are really matched asymptotically by the true system. Unfortunately, each of these mechanisms could possibly break the equivalence between the original nonlinear system and the linear extended system (contrary to the classical feedback linearization approach for which this correspondence is guaranteed at every time instant). The design of the jet-scheduling controllers and the implication of the possible loss of correspondence are detailed in this work. In addition, stability issues are addressed. Applications to two classes of systems are shown, namely, nonholonomic robots and cranes. 
The specific properties of these systems are used to achieve a rigorous stability proof. The controllers for both the nonholonomic robot and a new crane design labeled SpiderCrane, which fully takes advantage of the jet-scheduling mechanism, are tested on real setups. Keywords: Nonlinear Control; Flatness-based Control; Trajectory Tracking; Stabilization; Nonholonomic Robot; Crane.
Lausanne, EPFL, 2008. DOI : 10.5075/epfl-thesis-3996.
Posters
* Identification of the Design Principles of Signaling Pathways for Metabolic Engineering
The mitogen-activated protein kinase (MAPK) cascades are ubiquitous in eukaryotic signal transduction, and these pathways are conserved in cells from yeast to mammals. Metabolic engineering of mammalian cells requires the redesign of the steady-state and the dynamic responses of signal transduction pathways. Therefore, understanding the design principles of these pathways is a key to the success of metabolic engineering for cell culture development and drug target discovery.
Metabolic Engineering VII: Health and Sustainability, Puerto Vallarta, Mexico, September 14-19, 2008.
Talks
* Study of Tricyclic Cascade Networks using Dynamic Optimization
The mitogen-activated protein kinase (MAPK) cascades are ubiquitous in eukaryotic signal transduction, and these pathways are conserved in cells from yeast to mammals. They relay extracellular stimuli from the plasma membrane to targets in the cytoplasm and nucleus, initiating diverse responses involving cell growth, mitogenesis, differentiation and stress responses in mammalian cells. Detailed kinetic models of MAPK cascades, comprising mixed sets of differential and algebraic equations (DAEs), have been constructed in recent years. Such models typically involve many parameters, such as the kinetic rate constants and the concentration ratios between various kinases and phosphatases, the values of which are not directly accessible in vivo and are subject to large uncertainty. Dynamic optimization has proved to be a very useful tool to help relate the model parameters to functions in MAPK networks. Large-scale, nonlinear DAE models can be handled within this framework, as well as a large variety of objective functions and constraints. In a recent work, the response of an interconvertible monocyclic cascade (phosphorylation-dephosphorylation cycle) has been studied. It was shown, using dynamic optimization, that values of the kinetic parameters can be found that confer, at the same time, (i) a short response time, (ii) a large amplification capability, and (iii) a steep response profile to a graded input (ultrasensitivity). However, it was also found that, in a monocyclic cascade, these properties are not robust towards variations in the ratio between signaling enzyme and substrate kinase concentrations as well as the ratio between phosphatase and substrate kinase concentrations. In this presentation, we extend the analysis to the general case of multiple levels of cascades, with emphasis on a linear three-kinase model. The same response properties as in the monocyclic case are considered, and dynamic optimization is employed to identify parameter values that optimize these response properties. Special emphasis is placed on the robustness of the resulting tricyclic cascades in the face of variations in kinase and phosphatase concentration ratios. Comparisons with the monocyclic cascade case are also presented.
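To make the quantities mentioned above concrete, the following sketch simulates a single interconvertible phosphorylation-dephosphorylation cycle with Michaelis-Menten kinetics and extracts a response time and an amplification measure, i.e. the kind of response properties used as objectives or constraints in the dynamic optimization. The rate constants, enzyme levels and the 90 % response-time definition are illustrative assumptions, not the model of the presentation.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Single phosphorylation-dephosphorylation cycle (Michaelis-Menten rates); xp = fraction
    # of phosphorylated kinase, E = input (signaling enzyme), P = phosphatase level.
    k1, k2, Km1, Km2, E, P = 1.0, 0.8, 0.1, 0.1, 0.5, 0.3      # hypothetical parameters

    def cycle(t, x):
        xp = x[0]
        act = k1 * E * (1 - xp) / (Km1 + (1 - xp))              # phosphorylation (activation)
        deact = k2 * P * xp / (Km2 + xp)                        # dephosphorylation (deactivation)
        return [act - deact]

    sol = solve_ivp(cycle, (0.0, 50.0), [0.0], dense_output=True, max_step=0.1)
    t = np.linspace(0.0, 50.0, 2000)
    xp = sol.sol(t)[0]

    steady = xp[-1]
    amplification = steady / E                                  # output/input ratio at steady state
    response_time = t[np.argmax(xp >= 0.9 * steady)]            # time to reach 90 % of steady state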
AIChE Annual Meeting, Philadelphia, PA, November 16-21, 2008.
* Modifier Adaptation for RTO: Estimation of Experimental Gradients
AIChE Annual Meeting, Philadelphia, PA.
2007
Journal Articles
* Oxygen Control for an Industrial Pilot-Scale Fed-Batch Filamentous Fungal Fermentation
Industrial filamentous fungal fermentations are typically operated in fed-batch mode. Oxygen control represents an important operational challenge due to the varying biomass concentration. In this study, oxygen control is implemented by manipulating the substrate feed rate, i.e. the rate of oxygen consumption. It turns out that the setpoint for dissolved oxygen represents a trade-off since a low dissolved oxygen value favors productivity but can also induce oxygen limitation. This paper addresses the regulation of dissolved oxygen using a cascade control scheme that incorporates auxiliary measurements to improve the control performance. The computation of an appropriate setpoint profile for dissolved oxygen is solved via process optimization. For that purpose, an existing morphologically structured model is extended to include the effects of both low levels of oxygen on growth and medium rheological properties on oxygen transfer. Experimental results obtained at the industrial pilot-scale level confirm the efficiency of the proposed control strategy but also illustrate the shortcomings of the process model at hand for optimizing the dissolved oxygen setpoints.
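A generic sketch of a cascade structure of the type described above: an outer PI loop regulates dissolved oxygen by setting the target of an auxiliary measurement (assumed here to be the oxygen uptake rate), and an inner PI loop tracks that target by manipulating the substrate feed rate. Gains, signs and the choice of auxiliary variable are illustrative assumptions and do not reproduce the industrial implementation.

    # Discrete-time PI blocks; gains and sampling interval are hypothetical.
    def make_pi(kp, ki, dt):
        state = {"integral": 0.0}
        def pi(error):
            state["integral"] += ki * error * dt
            return kp * error + state["integral"]
        return pi

    dt = 60.0                                   # control interval [s]
    outer = make_pi(kp=-0.8, ki=-0.02, dt=dt)   # DO below setpoint -> lower oxygen-uptake target
    inner = make_pi(kp=0.5, ki=0.01, dt=dt)     # tracks the uptake target with the feed rate

    def cascade_step(do_setpoint, do_meas, our_meas):
        our_target = outer(do_setpoint - do_meas)     # outer-loop output = inner-loop setpoint
        feed_rate = inner(our_target - our_meas)      # inner loop manipulates the feed rate
        return max(0.0, feed_rate)                    # the feed rate cannot be negative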
Journal of Process Control. 2007. DOI : 10.1016/j.jprocont.2007.01.019.
* Correlation-Based Tuning of Decoupling Multivariable Controllers
The iterative data-driven method labelled Correlation-based Tuning (CbT) is considered in this paper for the tuning of linear time-invariant multivariable controllers. The approach allows one to tune some elements of the controller transfer function matrix to satisfy the desired closed-loop performance, while the other elements are tuned to mutually decouple the closed-loop outputs. Using CbT, perfect decoupling can be achieved by decorrelating a given reference with the non-corresponding outputs. The controller parameters are calculated either by solving a correlation equation (decorrelation procedure) or by minimizing a cross-correlation function (correlation reduction). The two approaches are compared via a simple numerical example. In addition, the correlation-reduction approach is applied to the simulation model of a gas turbine engine and compared to standard Iterative Feedback Tuning for MIMO systems.
Automatica. 2007. DOI : 10.1016/j.automatica.2007.02.006.
Conference Papers
* Data-Driven Estimation of the Infinity Norm of a Dynamical System
The estimation of a system’s infinity norm using one set of measured input and output data is investigated. It is known that, if the data set is noise free, this problem can be solved using convex optimization. In the presence of noise, convergence of this estimate to the true infinity norm of the system is no longer guaranteed. In this paper, a convex noise set is defined in the time domain using decorrelation between the noise and the system input. For infinite data length, we prove that the estimate of the infinity norm converges to its true value. A simulation example shows the behavior for finite data length. In addition, the method is used to test closed-loop stability in the context of data-driven controller tuning. A sufficient condition for stability in terms of an infinity norm is introduced. The effectiveness of the proposed stability test is illustrated via a simulation example.
2007. IEEE Conference on Decision and Control, New Orleans, December 12-14, 2007. p. 4889-4894. DOI : 10.1109/CDC.2007.4434184.
* Batch process optimization via run-to-run constraints adaptation
In the batch process industry, the available models carry a large amount of uncertainty and can seldom be used to directly optimize real processes. Several measurement-based optimization methods have been proposed to deal with model mismatch and process disturbances. Constraints often play a dominant role in the dynamic optimization of batch processes. In their presence, the optimal input profiles are characterized by a set of arcs, switching times and active path and terminal constraints. This paper presents a novel method tailored to those problems where the potential of optimization arises mainly from the correct set of path and terminal constraints being active. The input profiles are computed between successive runs by dynamic optimization of a fixed nominal model, and the constraints in the optimization problem are adapted using measured information from previous batches. Note that, unlike many existing optimization schemes, the measurements are not used to update the process model. Moreover, the proposed approach has the potential to uncover the optimal input structure. This is demonstrated on a simple semi-batch reactor example.
2007. European Control Conference 2007, Kos, Greece, 2-5 July 2007. p. 2791-2798. DOI : 10.23919/ECC.2007.7068545.
* Real-time optimization of continuous processes via constraints adaptation
In the framework of process optimization, measurements can be used to compensate for the effect of uncertainty. The method studied in this paper combines a process model and measurements to iteratively improve the operation of continuous processes. Unlike many existing real-time optimization schemes, the measurements are not used to update the process model, but to adapt the constraints in the optimization problem. Upon convergence, all the constraints are respected even in the presence of large model mismatch. Moreover, it is shown that constraints adaptation can handle changes in the set of active constraints. The approach is illustrated, via numerical simulation, for the optimization of a continuous stirred-tank reactor.
2007. DYCOPS 2007, Cancún, Mexico, June 6-8, 2007. p. 45-50. DOI : 10.3182/20070606-3-MX-2915.00006.
* Parameter Identification to Enforce Practical Observability of Nonlinear Systems
The sensitivity of the unmeasured state variables to the measurements strongly affects the rate of convergence of a state estimation algorithm. To overcome potential observability problems, the approach has been to identify the model parameters so as to reach a compromise between model accuracy and system observability. A cost function has been proposed that uses repeated optimization to select a coefficient that weighs the relative importance of these two objectives. This paper proposes a cost function that is the product of measures of these two objectives, thus alleviating the need for the trial-and-error selection of a weighting coefficient. The proposed identification procedure is evaluated with both simulated and experimental data, and with different observer structures.
2007. 10th Computer Applications in Biotechnology, Cancun, Mexico, June 6-8, 2007.
* Noniterative Data-driven Controller Tuning Using the Correlation Approach
Data-driven controller tuning for the model-reference control problem is investigated. A new controller tuning scheme for linear time-invariant single-input single-output systems is proposed. The method, which is based on the correlation approach, uses a single set of input/output data from open-loop or closed-loop operation. A specific choice of instrumental variables makes the correlation criterion an approximation of the model reference control criterion. The controller parameters and the correlation criterion are asymptotically not affected by noise. In addition, based on the small gain theorem, a sufficient condition for the stability of the closed-loop system is given in terms of the infinity norm of a transfer function. An unbiased estimate of this infinity norm can be obtained as the solution to a convex optimization problem using an infinite number of noise-free data. It is also shown that, for noisy data, the use of the correlation approach can improve significantly the estimate. The effectiveness of the proposed method is illustrated via a simulation example.
2007. European Control Conference 2007, Kos Island, Greece, July 2007. p. 5189-5195. DOI : 10.23919/ECC.2007.7068802.
Theses
* Neighboring extremals in optimization and control
Optimization arises naturally when process performance needs improvement. This is often the case in industry because of competition – the product has to be proposed at the lowest possible cost. From the point of view of control, optimization consists in designing a control policy that best satisfies the chosen objectives. Most optimization schemes rely on a process model, which, however, is always an approximation of the real plant. Hence, the resulting optimal control policy is suboptimal for the real process. The fact that accurate models can be prohibitively expensive to build has triggered the development of a field of research known as Optimization under Uncertainty. One promising approach in this field proposes to draw a strong parallel between optimization under uncertainty and control. This approach, labeled NCO tracking, considers the Necessary Conditions of Optimality (NCO) of the optimization problem as the controlled outputs. The approach is still under development, and the present work is today's most recent contribution to this development. The problem of NCO tracking can be divided into several subproblems that have been studied separately in earlier works. Two main categories can be distinguished : (i) tracking the NCO associated with active constraints, and (ii) tracking the NCO associated with sensitivities. Research on the former category is mature. The latter problem is more difficult to solve since the sensitivity part of the NCO cannot be directly measured on the real process. The present work proposes a method to tackle these sensitivity problems based on the theory of Neighboring Extremals (NE). More precisely, NE control provides a way of calculating a first-order approximation to the sensitivity part of the NCO. This idea is developed for static and both nonsingular and singular dynamic optimization problems. The approach is illustrated via simulated examples: steady-state optimization of a continuous chemical reactor, optimal control of a semi-batch reactor, and optimal control of a steered car. Model Predictive Control (MPC) is a control scheme that can accommodate both process constraints and nonlinear process models. The repeated solution of a dynamic optimization problem provides an update of the control variables based on the current state, and therefore provides feedback. One of the major drawbacks of MPC lies in the expensive computations required to update the control policy, which often results in a low sampling frequency for the control loop. This limitation of the sampling frequency can be dramatic for fast systems and for systems exhibiting a strong dispersion between the predicted and the real state such as unstable systems. In the MPC framework, two main methods have been proposed to tackle these difficulties: (i) The use of a pre-stabilizing feedback operating in combination with the MPC scheme, and (ii) the use of robust MPC. The drawback of the former approach is that there exists no systematic way of designing such a feedback, nor is there any systematic way of analyzing the interaction between the MPC controller and this additional feedback. This work proposes to use the NE theory to design this additional feedback, and it provides a systematic way of analyzing the resulting control scheme. The approach is illustrated via the control of a simulated unstable continuous stirred-tank reactor and is applied successfully to two laboratory-scale set-ups, an inverted pendulum and a helicopter model called Toycopter. 
The stabilizing potential of NE control to handle fast and unstable systems is well illustrated. In the case of a strong dispersion between the state trajectories predicted by the model and the real process, robust MPC becomes infeasible. This problem can be addressed using robust MPC based on multiple input profiles, where the inherent feedback provided by MPC is explicitly taken into account, thereby increasing the size of the set of feasible inputs. The drawback of this scheme is its very high computational complexity. This work proposes to use the NE theory in the robust MPC framework as an efficient way of dealing with the feasibility issue, while limiting the computational complexity of the approach. The approach is illustrated via the control of a simulated unstable continuous stirred-tank reactor, and of an inverted pendulum.
Lausanne, EPFL, 2007. DOI : 10.5075/epfl-thesis-3949.
* Measurement-based optimization of batch processes with terminal constraint
This thesis deals with dynamic optimization of batch processes, i.e. processes that are characterized by a finite time of operation and the frequent repetition of batches. The objective is to maximize the quantity of the desired product at final time while satisfying constraints during the batch (path constraints) and at final time (terminal constraints). The classical approach is to apply, in an open-loop fashion, input profiles that have been determined off-line using optimization. In practical applications, however, a conservative stand has to be taken since uncertainty is present, which may lead to constraint violations or non-optimal operation. Measurement-based optimization helps reduce this conservatism by using measurements to compensate for the effect of uncertainty. In order to achieve optimal operation, it has been proposed to track the necessary conditions of optimality (NCO). For the terminal cost optimization of batch processes, the NCO consist in four parts : Path constraints and path sensitivity conditions that have to be met during the batch, as well as terminal constraints and terminal sensitivity conditions that have to be met at final time. The intuitive approach is to handle the path conditions during the batch by using on-line measurements, and to implement the terminal conditions on a run-to-run basis since measurements of the terminal conditions are available off-line. However, run-to-run adaptation methods have two important disadvantages : i) Several runs are necessary to obtain optimal operation, and ii) within-run perturbations cannot be compensated. In this thesis, it is shown via a variational analysis of the NCO that meeting the constraint parts of the NCO is more important than meeting the sensitivity parts. Therefore, methods for steering the system toward the terminal constraints by using on-line measurements are examined in order to give priority to meeting the constraint-seeking conditions. This work proposes to track run-time references that lead to the constraints at final time. Thus, a two-time-scale methodology is proposed, where the task of meeting the active terminal constraints is addressed on-line using trajectory tracking, whilst pushing the sensitivities to zero is implemented on a run-to-run basis. Moreover, the use of Iterative Learning Control (ILC) to improve tracking is studied. Reference tracking is often accomplished with linear controllers of the PID type since they are widely accepted in industry and easy to implement. However, as batch processes operate over a wide region and are often highly nonlinear systems, tracking errors are inevitable. A possibility to reduce tracking errors is to use time-varying feedforward inputs. The feedforward inputs can be generated by using a process model, but the presence of model uncertainty can limit the performance. Instead of a process model, ILC uses error information from previous batches for computing the feedforward inputs iteratively. Since batch processes are often not reset to identical initial conditions, ILC schemes that provide a certain robustness to errors in initial conditions must be used, such as ILC with forgetting factor. In this work, a scheme that shifts the input of the previous run backward in time is proposed. The advantage of the input shift over the use of a forgetting factor is that, when the reference trajectory is constant and the system stable, the tracking error decreases with run time. 
In addition to a shift of the input, a shift of the error trajectory, which is known as anticipatory ILC, is also used. The foregoing methodological developments are illustrated by two case studies, a simulated semi-batch reactor and an experimental laboratory-scale batch distillation column. It is shown that meeting the terminal constraints is the most important optimality condition for the processes under study. Provided that selected composition measurements are available on-line, these measurements are used to steer the processes to the desired terminal constraints on product composition. In comparison to run-to-run optimization schemes, performance is improved from the first batch on, and within-run perturbations can be rejected.
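As a rough illustration of the input-shift ILC update discussed above, the following sketch applies the shifted-input, shifted-error (anticipatory) update to a toy first-order plant with a constant reference and non-identical initial conditions. It is hypothetical: the plant, gains and shift length are illustrative and are not taken from the thesis.

```python
import numpy as np

# Hypothetical sketch of input-shift / anticipatory ILC on a toy first-order plant.
# The plant (a, b), gains and shift length are illustrative, not from the thesis.

np.random.seed(0)
a, b = 0.9, 0.1          # stable first-order plant: y[t+1] = a*y[t] + b*u[t]
T = 50                   # samples per batch
ref = np.ones(T)         # constant reference trajectory
gamma, shift = 2.0, 1    # learning gain and input/error shift (in samples)

def run_batch(u, y0=0.0):
    y = np.zeros(T)
    y[0] = y0
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

u = np.zeros(T)
for k in range(20):                                   # run-to-run (batch) index
    y = run_batch(u, y0=0.05 * np.random.randn())     # non-identical initial conditions
    e = ref - y
    # shift the previous input and error backward in time (anticipatory correction)
    u_shifted = np.roll(u, -shift); u_shifted[-shift:] = u_shifted[-shift - 1]
    e_shifted = np.roll(e, -shift); e_shifted[-shift:] = e_shifted[-shift - 1]
    u = u_shifted + gamma * e_shifted                 # ILC update for the next batch
    print(f"batch {k:2d}  RMS tracking error = {np.linalg.norm(e) / np.sqrt(T):.4f}")
```

In this toy setting the RMS tracking error decreases over the runs down to a floor set by the initial-condition variability, which is the qualitative behaviour the input-shift scheme is meant to illustrate.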
Lausanne, EPFL, 2007. DOI : 10.5075/epfl-thesis-3720.Posters
* Classification of magnetic resonance images from renal perfusion studies of rabbits
Dynamic Magnetic Resonance Imaging (MRI) with contrast media injection is an important tool to study renal perfusion in humans and animals. The goal of this study is to build classifiers for the automatic classification of a kidney as healthy or pathological. A new algorithm is developed that segments out the cortex from the rest of the kidney, including the medulla, the renal pelvis, and the background. The performance of two classifier types (Soft Independent Modelling of Class Analogy, SIMCA; Partial Least Squares Discriminant Analysis, PLS-DA) is compared for various types of data pre-processing, including segmentation, feature extraction, baseline correction, centering, and standard normal variate (SNV).
CHUV Research Day, Lausanne, Switzerland, 1st - 2nd February 2007.Talks
* Adjoint Sensitivity Analysis of Index-One Multistage Differential Equations
Many practical chemical engineering processes involve a sequence of distinct transient operations, forming multistage systems in which each stage is described by mixed sets of differential and algebraic equations (DAEs). These models usually involve decision variables that must be chosen so as to optimize some performance subject to operational constraints, thus leading to dynamic optimization problems, as well as parameters whose values are not known accurately. The sensitivity analysis of the solutions to these models is typically conducted via forward sensitivity analysis. However, when sensitivities with respect to a large number of variables or parameters are required, the forward sensitivity approach may become intractable, especially if the number of state variables is also large. These problems can often be handled more efficiently via adjoint sensitivity analysis. In the first part of the presentation, we propose an extension of the adjoint sensitivity approach to address index-1, multistage DAEs. We allow discontinuous junction conditions between the various stages of the system, as well as different numbers of equations in each stage. Moreover, we consider functionals depending not only on the differential states at stage end times, but also on the algebraic states and the differential state time derivatives. In particular, both end-point conditions and junction conditions at stage times will be discussed. Next, we consider optimization problems embedding index-1 multistage DAEs and show how the adjoint sensitivity results can be used for the post-optimal sensitivity analysis of an optimal solution with respect to parameter variations. An important application of post-optimal sensitivity analysis, in the field of real-time optimization, is the development of neighboring-extremal controllers for multistage DAE systems. The proposed framework is illustrated via the numerical simulation and optimization of micro-scale chemical processes for portable power generation.
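The following toy sketch is hypothetical and much simpler than the index-1 DAE framework of the talk, but it illustrates the underlying adjoint idea on a scalar two-stage system discretized with explicit Euler: a single backward sweep yields the sensitivity of a terminal functional with respect to a parameter, checked against a finite difference. All names and values are made up.

```python
import numpy as np

# Toy illustration (hypothetical, not the presented framework): discrete adjoint
# sensitivity of a terminal cost for a scalar two-stage system integrated with
# explicit Euler, compared against a finite-difference check.

h, N1, N2 = 0.01, 100, 150          # step size and number of steps per stage
p, x0 = 0.8, 2.0                    # parameter and initial condition

def f(x, p, stage):                 # stage-dependent dynamics
    return -p * x if stage == 1 else -p * x**2

def dfdx(x, p, stage):
    return -p if stage == 1 else -2.0 * p * x

def dfdp(x, p, stage):
    return -x if stage == 1 else -x**2

def simulate(p):
    xs, x = [x0], x0
    for k in range(N1 + N2):
        stage = 1 if k < N1 else 2
        x = x + h * f(x, p, stage)
        xs.append(x)
    return np.array(xs)

xs = simulate(p)
J = 0.5 * xs[-1]**2                 # terminal cost

# Backward (adjoint) sweep: one pass gives dJ/dp regardless of the parameter count
lam, grad = xs[-1], 0.0             # lambda_N = dJ/dx_N
for k in reversed(range(N1 + N2)):
    stage = 1 if k < N1 else 2
    grad += lam * h * dfdp(xs[k], p, stage)
    lam = lam * (1.0 + h * dfdx(xs[k], p, stage))

eps = 1e-6                          # finite-difference check
J_pert = 0.5 * simulate(p + eps)[-1]**2
print("adjoint dJ/dp     :", grad)
print("finite-diff dJ/dp :", (J_pert - J) / eps)
```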
AIChE Annual Meeting, Salt Lake City, UT, November 4-9, 2007.* Understanding the Optimal Response Properties of Tricyclic Cascade Networks through Dynamic Optimization
The mitogen-activated protein kinase (MAPK) cascades are ubiquitous in eukaryotic signal transduction, and these pathways are conserved in cells from yeast to mammals. They relay extracellular stimuli from the plasma membrane to targets in the cytoplasm and nucleus, initiating diverse responses involving cell growth, mitogenesis, differentiation and stress responses in mammalian cells. Much effort has been devoted, in recent years, to constructing detailed kinetic models of MAPK networks linking molecular (protein-protein, protein-DNA, and protein-RNA) interactions, gene expression and chemical reactions to cellular behavior. These networks are most naturally described by systems of differential-algebraic equations (DAEs): the ordinary differential equations express the mass-action kinetics, whereas the algebraic equations enforce conservation relations among the constituents. Moreover, these models typically involve a relatively large number of parameters, such as the rate constants and the strength of protein-protein interactions, whose values are not directly accessible in vivo and are subject to large uncertainty. In this presentation, we investigate the application of dynamic optimization techniques to study the relationships between model parameters and functions in signal transduction pathways. Dynamic optimization is ideally suited for studying biochemical networks since it allows dealing with large-scale, nonlinear DAE models and can handle a great variety of objective functions and constraints. Yet, very few applications have been reported in this context to date. We employ dynamic optimization methods to identify ranges of the parameters that confer optimal dynamic response properties in a linear three-kinase model. Our focus is on the duration of the signal, the time from input to output, and the amplitude of the signal, which are important dynamic response properties for MAPK networks. Comparisons of alternative mathematical representations are considered.
AIChE Annual Meeting, Salt Lake City, UT, November 4-9, 2007.2006
Journal Articles
* Identification of multi-input systems: variance analysis and input design issues
This paper examines the identification of multi-input systems. Motivated by an experiment design problem (should one excite the various inputs simultaneously or separately), we examine the effect of an additional input on the variance of the estimated coefficients of parametrized rational transfer function models, with special emphasis on the commonly used FIR, ARX, ARMAX, OE and BJ model structures. We first show that, for model structures that have common parameters in the input–output and noise models (e.g. ARMAX), any additional input contributes to a reduction of the covariance of all parameter estimates. We then show that the accuracy improvement extends beyond the case of common parameters in all transfer functions, and we show exactly which parameter estimates are improved when a new input is added. We also conclude that it is always better to excite all inputs simultaneously.
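A hypothetical Monte Carlo illustration of the flavour of this result is sketched below: for a toy ARX system, the empirical standard deviation of the common autoregressive parameter estimate drops when the second input is also excited. The model, gains and noise level are illustrative, not those of the paper.

```python
import numpy as np

# Hypothetical Monte Carlo illustration: exciting a second input of an ARX system
# also improves the accuracy of the common (autoregressive) parameter estimate.

rng = np.random.default_rng(0)
a1, b1, b2, N, n_mc = 0.7, 1.0, 2.0, 500, 400

def simulate(excite_u2):
    u1 = rng.standard_normal(N)
    u2 = rng.standard_normal(N) if excite_u2 else np.zeros(N)
    e = 0.5 * rng.standard_normal(N)
    y = np.zeros(N)
    for t in range(1, N):
        y[t] = a1 * y[t-1] + b1 * u1[t-1] + b2 * u2[t-1] + e[t]
    return y, u1, u2

def estimate(y, u1, u2, use_u2):
    cols = [y[:-1], u1[:-1]] + ([u2[:-1]] if use_u2 else [])
    Phi = np.column_stack(cols)
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta[:2]                          # (a1_hat, b1_hat)

for excite_u2 in (False, True):
    est = np.array([estimate(*simulate(excite_u2), use_u2=excite_u2)
                    for _ in range(n_mc)])
    print(f"u2 excited: {excite_u2}  std(a1_hat) = {est[:, 0].std():.4f}  "
          f"std(b1_hat) = {est[:, 1].std():.4f}")
```

In this toy example the variance of the autoregressive coefficient estimate is clearly smaller with both inputs excited, while the input-polynomial estimate is essentially unaffected.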
Automatica. 2006. DOI : 10.1016/j.automatica.2005.12.017.Conference Papers
* On the Input Design for Data-Driven Correlation-Based Tuning of Multivariable Controllers
An iterative data-driven correlation-based method has been proposed recently to tune multivariable linear time-invariant controllers in closed-loop operation. In this contribution, the preferred way of exciting a 2 × 2 system is investigated via the accuracy of the estimated controller parameters. It is shown that simultaneous excitation of both reference signals does not improve the accuracy of the estimated controller parameters compared to the case with a single reference. In fact, one must choose between low experimental cost (simultaneous excitation) and better accuracy of the estimated parameters (single reference).
2006. XIV IFAC Symposium on System Identification, Newcastle, Australia, 29-31.03.2006. p. 1103-1108. DOI : 10.3182/20060329-3-AU-2901.00177.* Direct Closed-Loop Identification of Multi-Input Systems: Variance Analysis
An analysis of the variance of the parameters of a multi-input plant estimated in closed-loop operation is performed. More specifically, the effect of the simultaneous excitation of an additional input on the variance of the estimated parameters is investigated. The resulting expressions are valid for all conventional Prediction Error Models (PEM). It is shown that, regardless of the parametrization, the presence of an additional reference signal never impairs and, in most cases, improves the accuracy of the parameter estimates. The analytical results are illustrated by two simulation examples.
2006. XIV IFAC Symposium on System Identification, Newcastle, Australia, 29-31.03.2006. p. 873-878. DOI : 10.3182/20060329-3-AU-2901.00138.* Scale-up of batch processes via decentralized control
The economic environment in the specialty chemicals industry requires short times to market and thus the ability to develop new products and processes very rapidly. This, in turn, calls for large scale-ups from laboratory to production. Due to scale-related differences in operating conditions, direct extrapolation of conditions obtained in the laboratory is often impossible, especially when terminal objectives must be met and path constraints respected. This paper proposes a decentralized control scheme for scaling-up the operation of batch and semi-batch processes. The targets to be reached are either taken directly from laboratory experiments or adjusted to account for production constraints. Some targets are reached on-line within a given run, while others are implemented on a run-to-run basis. The methodology is illustrated in simulation via the scale-up of a semi-batch reactor.
2006. International Symposium on Advanced Control of Chemical Processes - ADCHEM 2006, Gramado, Brazil, April 2-5, 2006. p. 221-226. DOI : 10.3182/20060402-4-BR-2902.00221.* Experimental Results for a Nonholonomic Mobile Robot Controller Enforcing Linear Equivalence Asymptotically
2006. IEEE Conference on Industrial Electronics and Applications, Singapore, May 2006. DOI : 10.1109/ICIEA.2006.257233.Theses
* Data-driven controller tuning using the correlation approach
The essential ingredients of control design procedures include the acquisition of process knowledge and its efficient integration into the controller. In many practical control applications, a reliable mathematical description of the plant is difficult or impossible to obtain, and the controller has to be designed on the basis of measurements. This thesis proposes a new data-driven method labeled Correlation-based Tuning (CbT). The underlying idea is inspired by the well-known correlation approach in system identification. The controller parameters are tuned iteratively either to decorrelate the closed-loop output error between designed and achieved closed-loop systems with the external reference signal (decorrelation procedure) or to reduce this correlation (correlation reduction). Ideally, the resulting closed-loop output error contains only the contribution of the noise, and perfect model-following can be achieved. By the very nature of the control design criterion, the controller parameters are asymptotically insensitive to noise. Both theoretical and implementation aspects of CbT are treated. For the decorrelation procedure, a correlation equation is solved using the stochastic approximation method. The iterative procedure converges to the solution of the correlation equation even when an approximate gradient of the closed-loop output error with respect to the controller parameters is used. The asymptotic distribution of the resulting controller parameter estimates is analyzed. When perfect decorrelation is not possible, the correlation reduction method can be used. That is, instead of solving the correlation equation, the norm of a cross-correlation function is minimized. A frequency-domain analysis of the criterion shows that the algorithm minimizes the two-norm of the difference between the achieved and designed closed-loop systems. With the correlation reduction method, an unbiased estimate of the gradient of the closed-loop output error is necessary to guarantee convergence of the algorithm to a local minimum of the criterion. Furthermore, this criterion can be generalized to handle mixed sensitivity specifications. An extension of this method for the tuning of linear time-invariant multivariable controllers is proposed for both procedures. CbT allows tuning some of the elements of the controller transfer function matrix to satisfy the desired closed-loop performance, while the other elements are tuned to mutually decouple the closed-loop outputs. The tuning of all decouplers and controllers can be done by performing only one experiment per iteration regardless of the number of inputs and outputs, since all reference signals can be excited simultaneously. However, due to the fact that decoupling is imposed as a design criterion, simultaneous excitation of all references has a negative impact on the variance of the estimated controller parameters. In fact, one must choose between low experimental cost (simultaneous excitation) and better accuracy of the estimated parameters (sequential excitation). The CbT algorithm has been tested on numerous simulation examples and implemented experimentally on a magnetic suspension system and the active suspension system benchmark problem proposed for a special issue of the European Journal of Control on the design and optimization of restricted-complexity controllers.
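The decorrelation idea can be caricatured as follows. This is a deliberately simplified, single-parameter sketch and not the CbT implementation of the thesis: the controller gain is updated with a Robbins-Monro type step so that the closed-loop output error becomes uncorrelated with the reference, and the plant, reference model and instrument choice are contrived so that perfect model-following is achievable.

```python
import numpy as np

# Deliberately simplified, single-parameter sketch of the decorrelation idea
# (illustrative only; gains, lags and the reference model are contrived so that
# perfect model-following is achievable at theta_star = 0.5).

rng = np.random.default_rng(1)
a, b = 0.8, 0.5                      # "true" first-order plant y[t+1] = a*y + b*u
theta_star = 0.5                     # controller gain defining the reference model
N = 2000                             # samples per closed-loop experiment

def closed_loop(theta, r, noise):
    y = np.zeros(len(r))
    for t in range(len(r) - 1):
        u = theta * (r[t] - y[t])                    # proportional controller
        y[t + 1] = a * y[t] + b * u + noise[t]
    return y

theta, gamma = 1.5, 2.0              # initial controller gain and step size
for k in range(30):
    r = rng.standard_normal(N)                       # reference excitation
    e = 0.05 * rng.standard_normal(N)                # output noise
    y_ach = closed_loop(theta, r, e)                 # achieved closed loop
    y_des = closed_loop(theta_star, r, np.zeros(N))  # designed (reference) behaviour
    eps = y_ach - y_des                              # closed-loop output error
    corr = np.mean(r[:-1] * eps[1:])                 # correlation with lag-1 reference
    theta -= gamma / (k + 1) * corr                  # Robbins-Monro type update
    print(f"iter {k:2d}  theta = {theta:.3f}  corr = {corr:+.4f}")
```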
Lausanne, EPFL, 2006. DOI : 10.5075/epfl-thesis-3536.* Scale-down approach
This thesis deals with the combined utilisation of a reaction calorimeter, the RC1® commercialised by Mettler Toledo and equipped with a 2 L glass reactor, and the heat transfer dynamics modelling of industrial reactors. By doing so, the temperature evolution of the reaction medium of full-scale equipment during a chemical process can already be forecast at laboratory scale. Thus, the selectivity, quality and safety issues arising during the transfer of a new process, or the optimisation of an existing one, from the laboratory to the production scale are detected earlier and better understood. It follows that the proposed methodology is a process development tool that aims to accelerate the rate at which innovative processes can be introduced into the market while guaranteeing their overall safety. Chapter 3 of the thesis is devoted to the heat transfer dynamics modelling of industrial reactors. To this end, heating/cooling experiments were performed at plant scale. First, the industrial reactor was filled with a measured quantity of a solvent with known physical and chemical properties (typically water or toluene). Second, after a stabilisation phase at low temperature, the setpoint of the liquid was modified to a temperature about 20 °C below its boiling point, followed by a stabilisation phase at high temperature. Then, the setpoint was changed to a value about 20 °C higher than the melting point, again followed by a stabilisation phase at low temperature. During the experiment, the solvent and jacket temperatures were measured and recorded. The stirrer speed or the liquid amount was then changed, and the whole measurement cycle repeated. Not only was the heat transfer between the utility fluid and the reaction medium modelled, but also the thermal dynamics of the jacket itself. Nine industrial reactors have been characterised, their sizes ranging from 40 L to 25 m3. Chapter 4 presents the developed methodology for predicting the thermal behaviour of full-scale equipment during a chemical process. It is based on two on-line heat balances, namely one over the reaction calorimeter to determine the instantaneous heat release rate and the other over the industrial reactor dynamics to compute its hypothetical thermal evolution. The dynamic model of the industrial reactor is introduced in an Excel sheet. A Visual Basic window is used to establish the connection between the reaction calorimeter and the Excel sheet, meaning that the data from the various sensors of the RC1® can be sent at regular intervals of 10 s to the Excel sheet. By controlling its jacket temperature, the calorimeter is then forced to track the predicted temperature of the industrial reactor. The advantage of the proposed methodology is that the kinetics modelling of the reaction, often a time-consuming and expensive step, is not mandatory here. In Chapter 5, the precision of the on-line heat balance over the RC1® was tested and validated with the help of an external voltage source controlling the power delivered by the calibration probe. In this way, the heat provided to the reaction medium was known with great accuracy. The error of the on-line heat balance on the heat release rate, qrx, lies within the 5 % range generally accepted for bench-scale calorimeters. 
Afterwards, the chosen test reaction, the hydrolysis of acetic anhydride, was used at laboratory scale with the RC1® to highlight that the thermal dynamics of industrial reactors have a great influence on the temperature evolution of the reaction medium and, hence, on process safety. Finally, the simulation of a polymerisation reaction with the help of a thickener permits the conclusion that the "scale-down" methodology and the on-line heat balance over the reaction calorimeter are also applicable to reactions accompanied by large variations of the reaction medium viscosity. Chapter 6 compares the temperature evolution of the reaction medium predicted in the calorimeter with that actually recorded at plant scale. Three reactions are presented: a neutralisation, a three-step reaction and an alkene oxidation by a peroxycarboxylic acid. For the neutralisation, the results tallied precisely, with a mean temperature difference of less than 0.5 °C. Due to technical difficulties, the results of the three-step reaction differ slightly. For the oxidation reaction, the temperature predicted in the reaction calorimeter corresponds to that of the full-scale equipment to within 0.5 °C. Moreover, according to the gas chromatography analyses, the final compositions of the reaction medium are also comparable. Furthermore, this reaction being thermosensitive, a final selectivity decrease of 13 % is predicted at laboratory scale for the case where the reaction takes place in the 25 m3 reactor. This effect is due to its slower dynamics, smaller cooling capacity and more unfavourable heat transfer area to volume ratio compared with smaller reactors. Since this effect is highlighted already at laboratory scale, the developed tool results in a shorter process development time, a safer process and, hence, a shorter time-to-market. Finally, Chapter 7 concludes with an outlook on the continuation of the project. As this thesis did not deal with mixing issues, its logical continuation would be the scale-down of mixing effects; a few general guidelines are given for this topic.
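A minimal sketch of the two coupled heat balances, under entirely hypothetical parameter values (the calorimeter data below are synthetic placeholders, not RC1® measurements), might look as follows: the heat release rate is first estimated at laboratory scale and then fed, after scaling, into a simple model of an industrial reactor with first-order jacket dynamics to predict its temperature evolution.

```python
import numpy as np

# Hypothetical sketch of the two on-line heat balances described above:
# (1) estimate the heat release rate q_rx from calorimeter data,
# (2) feed the scaled q_rx into a simple model of an industrial reactor
#     (with first-order jacket dynamics) to predict its temperature.
# All parameter values are illustrative, not plant data.

dt = 10.0                                  # s, sampling interval
t = np.arange(0, 3600, dt)
# --- lab-scale calorimeter temperatures (synthetic placeholders, in K) ---
Tr_lab = 298.0 + 20.0 * (1 - np.exp(-t / 600.0))
Tj_lab = Tr_lab - 5.0 * np.exp(-t / 900.0)

UA_lab, mcp_lab = 10.0, 4000.0             # W/K and J/K of the lab reactor content
dTr = np.gradient(Tr_lab, dt)
q_rx_lab = mcp_lab * dTr + UA_lab * (Tr_lab - Tj_lab)   # W, heat balance (1)

# --- scale-up of the heat release rate by the mass ratio (illustrative) ---
scale = 12500.0 / 1.5                      # kg (plant) / kg (lab)
q_rx_plant = scale * q_rx_lab

# --- industrial reactor with sluggish jacket dynamics, heat balance (2) ---
UA, mcp = 2500.0, 4.2e7                    # W/K and J/K at plant scale
tau_j, Tj_set = 600.0, 298.0               # s, jacket time constant and setpoint
Tr, Tj = 298.0, 298.0
Tr_pred = np.zeros_like(t)
for k in range(len(t)):
    Tr_pred[k] = Tr
    Tj += dt / tau_j * (Tj_set - Tj)                       # jacket temperature
    Tr += dt / mcp * (q_rx_plant[k] - UA * (Tr - Tj))      # reaction medium

print("predicted plant temperature rise [K]:", Tr_pred.max() - Tr_pred[0])
```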
Lausanne, EPFL, 2006. DOI : 10.5075/epfl-thesis-3464.Talks
* Measurement-based Optimization via NCO Tracking: Challenges and Opportunities
Optimization in the process industry has received a lot of attention in recent years because, in the face of growing competition, it represents a natural choice for reducing production costs, improving product quality, and meeting safety requirements and environmental regulations. Traditionally, the optimal operating conditions are determined based on a model of the process. However, the resulting process operation can be highly sensitive to uncertainty such as model mismatch and process disturbances. This generally gives rise to suboptimal process operation or, worse, infeasible operation, which of course is not tolerable in most industrial applications. Over the last decade, the Laboratoire d'Automatique of EPFL has developed a promising approach that converts a dynamic optimization problem with both path and terminal constraints into a feedback control problem. In this approach, near-optimal process operation is enforced by tracking appropriate references, namely the necessary conditions of optimality (NCO). The NCO-tracking framework is thus very appealing owing to its on-line simplicity and its potential robustness towards uncertainty, and it has gained substantial international recognition. In this presentation, we give an overview of the current state of the art in NCO tracking. Special emphasis is placed on the industrially relevant features of NCO tracking. We conclude the talk by presenting a number of future research directions in this area.
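As a purely illustrative sketch of one NCO-tracking ingredient (keeping an active path constraint at its bound by feedback rather than by on-line re-optimization), consider the toy semi-batch example below; the reactor model, the controller structure and all numbers are hypothetical.

```python
import numpy as np

# Illustrative sketch (not from the talk): one ingredient of NCO tracking is to
# keep an active path constraint at its bound with simple feedback on a measured
# quantity, rather than by re-optimizing a model on-line. Toy semi-batch reactor
# in which the feed rate is adjusted so that the heat release stays at the
# cooling limit q_max. All names and numbers are hypothetical.

dt, t_f = 1.0, 2000.0
k_rx, dH, q_max = 5e-4, 8.0e4, 500.0   # rate constant [1/s], heat [J/mol], limit [W]
Kp, Ki = 1e-3, 2e-6                    # PI gains of the constraint controller
cA, xi = 0.0, 0.0                      # hold-up of A [mol], integrator state

for _ in np.arange(0.0, t_f, dt):
    q = dH * k_rx * cA                 # measured heat release [W]
    err = q_max - q
    u = max(Kp * err + xi, 0.0)        # feed rate pushing q toward its bound
    xi += Ki * err * dt
    cA += (u - k_rx * cA) * dt         # material balance for A

print(f"final heat release: {dH * k_rx * cA:.1f} W (cooling limit {q_max} W)")
```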
Systemes d'Information, Modelisation, Optimisation et Commande en Genie des Procedes (SIMO'06), Toulouse, France, October 11-12, 2006.2005
Journal Articles
* Use of Measurements for Enforcing the Necessary Conditions of Optimality in the Presence of Constraints and Uncertainty
Measurements can be used in an optimization framework to compensate the effects of uncertainty in the form of model mismatch or process disturbances. Among the various options for input adaption, a promising approach consists of directly enforcing the necessary conditions of optimality (NCO) that include two parts, the active constraints and the sensitivities. In this paper, the variations of the NCO due to parametric uncertainty are studied and used to design appropriate adaptation laws. The inputs are separated into constraint-seeking and sensitivity-seeking directions depending on which part of the NCO they enforce. In addition, the directional influence of uncertainty is used to reduce the number of variables to adapt. The theoretical concepts are illustrated in simulation via the run-to-run optimization of a batch emulsion polymerization reactor.
Journal of Process Control. 2005. DOI : 10.1016/j.jprocont.2004.11.006.Conference Papers
* Correlation-Based Tuning of Linear Multivariable Decoupling Controllers
The recently-proposed method for iterative correlation-based controller tuning is considered in this paper for the tuning of multivariable Linear Time-Invariant (LTI) controllers. The parameters of the controller are updated directly using the data acquired in closed-loop operation. This approach allows one to tune some elements of the controller transfer function matrix to satisfy the desired closed-loop performance, while the other elements are tuned to mutually decouple the closed-loop outputs. The controller parameters are calculated by minimization of the cross-correlation function involving instrumental variables. A very simple choice of the instruments is proposed. The approach is applied to a simulation model of a gas turbine engine, and excellent results are obtained in terms of decoupling and performance.
2005. 44th IEEE Conference on Decision and Control and European Control Conference, Seville, Spain, 12-15.12.2005. p. 7144-7149. DOI : 10.1109/CDC.2005.1583313.* Run-to-run Optimization of Filamentous Fungal Fermentation
Improving the productivity of fed-batch filamentous fungal fermentations can be formulated as a dynamic optimization problem. However, numerical optimization based on a nominal process model is typically insufficient when uncertainty in the form of model mismatch and process disturbances is present. This paper considers a measurement-based approach that consists of tracking the necessary conditions of optimality (NCO tracking). For this, the nominal input profiles are dissected into time functions and constant parameters that are assigned to the various parts of the NCO. These input elements are then adapted using appropriate measurements. NCO tracking is used here to maximize enzyme production in the case of parametric uncertainty. Upon parameterization of the feed rate input, only point constraints need to be met for optimality, which can be achieved on a run-to-run basis. The approach is illustrated in simulation and compared to open-loop application of the nominal optimal solution.
2005. 15th International Conference on Control Systems and Computer Science, Bucharest, Romania, 25-27 May, 2005.* Preferential Estimation for Uncertain Linear Systems at Steady State: Application to Filamentous Fungal Fermentation
State estimation is a widely used concept in the control community, and the literature mostly concentrates on the estimation of all states. However, in soft sensor problems, the emphasis is on estimating a few soft outputs as accurately as possible. The concept of preferential estimation consists of estimating these soft outputs with a higher accuracy than that with which the other states are estimated. The main question is whether or not the accuracy along the soft outputs can be improved to the detriment of the others. This paper shows that, though preferential estimation is not possible for ideal linear systems, it is indeed possible for linear systems with model uncertainty. The theoretical concepts are illustrated on a filamentous fungal fermentation.
2005. 44th IEEE Conference on Decision and Control, Seville, Spain, 12-15 December, 2005. p. 7216-7221. DOI : 10.1109/CDC.2005.1583325.* Optimal grade transition for polyethylene reactors via NCO tracking
In fluidized-bed gas-phase polymerization reactors, several grades of polyethylene are produced in the same equipment by changing the operating conditions. Transitions between the different grades are rather slow and result in the production of a considerable amount of off-specification polymer. Grade transition improvement is viewed here as a dynamic optimization problem, for which numerous approaches exist. Numerical optimization based on a nominal process model is typically insufficient due to the presence of uncertainty in the form of model mismatch and process disturbances. This paper proposes to implement optimal grade transition using a measurement-based approach instead. It is based on tracking the Necessary Conditions of Optimality (NCO tracking) using a decentralized control scheme. For this, the nominal input profiles are dissected into arcs and switching times that are assigned to the various parts of the NCO. These input elements are then adapted using appropriate measurements. NCO tracking is used to determine optimal grade transition in polyethylene reactors. The problem of minimizing the transition time from a steady state of low melt index to that of high melt index is studied, with the feeds of hydrogen and inert and the output flow rate considered as manipulated variables. In the optimal solution, all arcs are determined by path constraints, and all switching times are determined by path and terminal constraints, which significantly eases the adaptation. The on-line and run-to-run adaptation of these parameters is illustrated in simulation.
2005. 7th World Congress of Chemical Engineering, Glasgow, Great Britain, 10-14 July, 2005.* Identification of a Two-Input System: Variance Analysis
This paper examines the identification of a single-output two-input system. Motivated by an experiment design problem (should one excite the two inputs simultaneously or separately), we examine the effect of the (second) input signal on the variance of the various polynomial coefficients in the case of FIR, ARX, ARMAX, OE and BJ models. A somewhat surprising result is that the addition of a second input in an ARMAX model reduces the variance of all polynomial estimates.
2005. 16th IFAC World Congress, Prague, July 4-8, 2005. p. 674-679. DOI : 10.3182/20050703-6-CZ-1902.00113.* A Globally Convergent Run-to-run Control Algorithm with Improved Rate of Convergence
2005. American Control Conference, Portland, Oregon, USA, June 8-10, 2005. p. 1901-1906. DOI : 10.1109/ACC.2005.1470246.Theses
* On the use of input-output feedback linearization techniques for the control of nonminimum-phase systems
The objective of the thesis is to study the possibility of using input-output feedback linearization techniques for controlling nonlinear nonminimum-phase systems. Two methods are developed. The first one is based on an approximate input-output feedback linearization, where a part of the internal dynamics is neglected, while the second method focuses on the stabilization of the internal dynamics. The inverse of a nonminimum-phase system being unstable, standard input-output feedback linearization is not effective for controlling such systems. In this work, a control scheme is developed, based on an approximate input-output feedback linearization method, where the observability normal form is used in conjunction with input-output feedback linearization. The system is feedback linearized upon neglecting a part of the system dynamics, with the neglected part being considered as a perturbation. Stability analysis is provided based on vanishing perturbation theory. However, this approximate input-output feedback linearization is only effective for very small values of the perturbation. In the general case, the internal dynamics cannot simply be neglected and need to be stabilized. On the other hand, predictive control is an effective approach for tackling problems with nonlinear dynamics, especially when analytical computation of the control law is difficult. Therefore, a cascade-control scheme that combines input-output feedback linearization and predictive control is proposed. Therein, input-output feedback linearization forms the inner loop that compensates the nonlinearities in the input-output behavior, and predictive control forms the outer loop that is used to stabilize the internal dynamics. With this scheme, predictive control is implemented at a re-optimization rate determined by the internal dynamics rather than the system dynamics, which is particularly advantageous when the internal dynamics are slower than the input-output behavior of the controlled system. Exponential stability of the cascade-control scheme is established using singular perturbation theory. Finally, both the approximate input-output feedback linearization and the cascade-control scheme are implemented successfully on a polar pendulum ('pendubot') available at the Laboratoire d'Automatique of EPFL. The pendubot exhibits all the properties that suit the control methodologies mentioned above. From the approximate input-output feedback linearization point of view, the pendubot is a nonlinear system, not input-state feedback linearizable. Also, the pendubot is nonminimum phase, which prevents the use of standard input-output feedback linearization. From the cascade control point of view, although the pendubot has fast dynamics, the input-output feedback linearization separates the input-output system behavior from the internal dynamics, thus leading to a two-time-scale system: fast input-output behavior, which is controlled using a linear controller, and slow reduced internal dynamics, which are stabilized using model predictive control. Therefore, the cascade-control scheme is effective, and model predictive control can be implemented at a low frequency compared to the input-output behavior.
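A minimal sketch of the inner loop of such a cascade, for a generic second-order channel x1' = x2, x2' = f(x) + g(x)u (much simpler than the pendubot and with made-up dynamics), is given below; the outer loop is reduced here to a PD controller rather than the predictive controller of the thesis.

```python
import numpy as np

# Minimal sketch (hypothetical, far simpler than the pendubot case) of the inner
# loop used in a cascade scheme: input-output feedback linearization of the
# channel x1' = x2, x2' = f(x) + g(x)*u, with an outer linear (PD) controller
# acting on the linearized dynamics.

def f(x):                      # drift term (illustrative pendulum-like nonlinearity)
    return -9.81 * np.sin(x[0]) - 0.1 * x[1]

def g(x):                      # input gain (assumed bounded away from zero)
    return 2.0 + 0.5 * np.cos(x[0])

kp, kd = 4.0, 4.0              # outer-loop PD gains -> error dynamics e'' + kd e' + kp e = 0
x, x_ref = np.array([1.0, 0.0]), 0.0
dt = 1e-3

for _ in range(5000):
    e, edot = x[0] - x_ref, x[1]
    v = -kp * e - kd * edot                    # desired linear behaviour x1'' = v
    u = (v - f(x)) / g(x)                      # feedback-linearizing control law
    x = x + dt * np.array([x[1], f(x) + g(x) * u])

print("final tracking error:", x[0] - x_ref)
```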
Lausanne, EPFL, 2005. DOI : 10.5075/epfl-thesis-3175.2004
Journal Articles
* Run-to-run Adaptation of a Semi-Adiabatic Policy for the Optimization of an Industrial Batch Polymerization Process
Industrial and Engineering Chemistry Research. 2004. DOI : 10.1021/ie034330e.Conference Papers
* Accuracy Aspects of Iterative Correlation-Based Controller Tuning
2004. IEEE American Control Conference, Boston, USA, July 2004. p. 4529-4534. DOI : 10.23919/ACC.2004.1384024.* Trajectory Following for the Optimization of a Batch Polymerization Reactor
This paper considers the minimization of batch time for an industrial inverse-emulsion polymerization reactor in the presence of uncertainty. A first optimization study resulted in the current industrial practice of a semi-adiabatic profile (isothermal operation followed by an adiabatic one), where the switching time between the two modes of operation is updated to meet the terminal constraint on reactor temperature, as well as a constraint on residual raw material levels. However, such a procedure requires several batches for convergence and, in addition, cannot compensate the effect of the within-run disturbances. As an alternative, we propose following a temperature trajectory with respect to conversion, for which the conversion is estimated on-line based on temperature measurements. Simulation results show an improved performance compared to the run-to-run strategy, with the additional advantage that this reduction is obtained immediately, i.e. without having to wait for run-to-run convergence.
2004. BatchPro Symposium, Greece, 2004. p. 153-159.* Optimisation basée sur les mesures d'un réacteur industriel de copolymérisation
2004. 6ème Ecole de Printemps Francophone de Casablanca - Contrôle et Procédés de Polymérisation et de Cristallisation, Casablanca, Morocco, June, 2004.Theses
* Measurement-based run-to-run optimization of batch processes
In batch chemical processing, dynamic optimization is the method of choice to reduce production costs while satisfying safety constraints and product specifications. Most of the standard optimization techniques are based on a model of the process, whereas it is extremely difficult to obtain a reliable dynamic model of industrial batch processes due to uncertainty and process variations. Industry typically copes with uncertainty by introducing conservatism, at the cost of optimality, to guarantee constraint satisfaction even in the worst case. Measurement-based optimization via the tracking of the necessary conditions of optimality (NCO tracking) is a method that proposes to reduce this conservatism by using appropriate process measurements. The method enforces the NCO for the real process – and not for a possibly inaccurate model. The NCO for terminal-time dynamic optimization problems have four parts that correspond to meeting constraints and sensitivities both on-line and at final time. The real challenge lies in the fact that these four parts need to be handled in entirely different ways, the terminal constraints and sensitivities being addressed in this thesis. The objective of this thesis is to formulate the adaptation laws for the input parameters to satisfy the terminal objectives of the NCO. For this purpose, a variational analysis of the NCO is performed that takes into account uncertainty and the presence of constraints. This analysis also indicates the possibility of separating the input parameters depending on the influence of uncertainty and the effect on the terminal constraints. To implement these adaptation laws, a run-to-run scheme is proposed. Next, the convergence of the run-to-run scheme is analyzed. It is shown that the scheme with a constant proportional gain converges to the optimal value for a class of systems that exhibit sector nonlinearity. Under the same assumptions, a variable-gain algorithm, based on Quasi-Newton techniques, is then proposed to improve the rate of convergence of the aforementioned scheme. Both algorithms exhibit global convergence. This methodology was applied to optimize the copolymerization of acrylamide and cationic monomers in inverse emulsion in a 1-ton industrial reactor. The optimal solution obtained consists of an isothermal arc followed by an adiabatic one. The switching time between these two arcs was adapted to meet the constraint on final reactor temperature. Experimental results show that adaptation of the switching time led to a reduction of the reaction time by one third.
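The run-to-run adaptation can be sketched as follows for a toy terminal-constraint problem (the map from switching time to final temperature is invented, not the industrial copolymerization model); the sketch contrasts a constant-gain update with a secant-based, quasi-Newton-like update.

```python
import numpy as np

# Toy sketch (illustrative, not the industrial case) of run-to-run adaptation:
# a switching time is adapted so that a terminal constraint is met, with either
# a constant gain or a secant (quasi-Newton-like) gain estimated from the two
# previous runs. The "plant" map below is made up.

def terminal_temperature(t_sw):
    # unknown map from switching time [min] to final temperature [K];
    # monotone, mildly nonlinear, with small run-to-run variability
    return 330.0 + 0.25 * t_sw + 0.002 * t_sw**2 + 0.05 * np.random.randn()

T_spec = 350.0                                  # terminal constraint to be met

def run_to_run(update, t0=20.0, n_runs=8):
    t, t_prev, T_prev = t0, None, None
    for k in range(n_runs):
        T = terminal_temperature(t)
        print(f"  run {k}: t_sw = {t:6.2f} min, T_final = {T:7.2f} K")
        t, t_prev, T_prev = update(t, T, t_prev, T_prev), t, T

def constant_gain(t, T, t_prev, T_prev, K=0.8):
    return t + K * (T_spec - T)                 # fixed proportional correction

def secant_gain(t, T, t_prev, T_prev):
    if t_prev is None or abs(t - t_prev) < 1e-6:
        return constant_gain(t, T, t_prev, T_prev)
    g = (T - T_prev) / (t - t_prev)             # estimated sensitivity dT/dt_sw
    return t + (T_spec - T) / g                 # quasi-Newton (secant) step

np.random.seed(0)
print("constant gain:");          run_to_run(constant_gain)
print("secant (variable) gain:"); run_to_run(secant_gain)
```

In this contrived example the secant update reaches the terminal target in a few runs, whereas the constant-gain update converges noticeably more slowly, which is the qualitative point of the variable-gain algorithm.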
Lausanne, EPFL, 2004. DOI : 10.5075/epfl-thesis-3128.* Enhancing the control of tokamaks via a continuous nonlinear control law
The control of the current, position and shape of an elongated cross-section tokamak plasma is complicated by the instability of the plasma vertical position. In this case the control becomes a significant problem when saturation of the power supplies is considered. Current saturation is relatively benign due to the integrating nature of the tokamak, resulting in a reasonable time horizon for strategically handling this problem. On the other hand, voltage saturation is produced by the feedback controller itself, with no intrinsic delay. In practice, during large plasma disturbances, such as sawteeth, ELMs and minor disruptions, voltage saturation of the power supply can occur and as a consequence the vertical position control can be lost. If such a loss of control happens the plasma displaces vertically and hits the wall of the vessel, which can cause damage to the tokamak. The consideration and study of voltage saturation is especially important for ITER. Due to the size and therefore the cost of ITER, there will naturally be smaller margins in the Poloidal Field coil power supplies implying that the feedback will experience actuator saturation during large transients due to a variety of plasma disturbances. The next generation of tokamaks under construction will require vertical position and active shape control and will be fully superconducting. When the magnetic transverse field in superconducting magnets changes, the magnet generates two types of heat loss, the so-called coupling loss and the so-called hysteresis loss, grouped together as AC losses. Superconducting coils possess superconducting properties only below a critical temperature around a few K. AC losses are detrimental since they heat up the superconducting material. Thus, if AC losses are too large, the cryogenic plant can no longer hold the required temperature to maintain the superconductivity properties. Once the superconductivity is lost, the electric currents in the coils produce an enormous heat loss due to the ohmic resistivity, which can lead to a possible damage to the coils. In general, the coils are designed with enough margin to absorb all likely losses. A possible loss reduction could allow us to downsize the superconducting cross section in the cables, reducing the overall cost, or simply increase the operational cooling margin for given coils. In this thesis we have tried to take into consideration these two major problems. The thesis is therefore focused on the following main objectives: i) the stability analysis of the tokamak considering voltage saturation of the power supplies and ii) the proposition of a new controller which enhances the stability properties of the tokamak under voltage saturation and iii) the proposition of a controller which takes into consideration the problem of reducing the AC losses. The subject of the thesis is therefore situated in an interdisciplinary framework and as a result the thesis is subdivided into two principal parts. The first part is devoted to tokamak physics and engineering, while the second part focuses on control theory. In the tokamak physics and engineering part we present the linear tokamak models and the nonlinear tokamak code used for the controller design and the validation of the new proposed controller. The discussion is especially focused on the presence of a single unstable pole when the vertical plasma position is unstable since this characteristic is essential for the work presented in the control theory part. 
In order to determine the enhancement of the stability properties, we have to bring the newly proposed controller to its stability limits by means of large disturbances. Validation by means of simulations with either linear or nonlinear tokamak models is imperative before considering the implementation of the new controller on a tokamak in operation. A linear tokamak model will probably be inadequate since large disturbances can move its state outside its validity region. A full nonlinear tokamak evolution code like DINA is indispensable for this purpose. We give a detailed description of the principal plasma physics implemented in the DINA code. Additionally, validation of DINA is provided by comparing TCV experimental VDE responses with DINA code simulations. To allow a study of AC-loss reduction, the AC losses have to be cast in a simplified form. We analyse to what extent the accumulated AC losses in ITER could be reduced by taking into account the losses themselves when designing the feedback control loops. In order to carry out this investigation, a simple and fast AC-loss model, referred to as the "AC-CRPP" model, is proposed. In the control theory part, we study the stability region in state space, referred to as the region of attraction, for linear tokamak-like systems with input saturation (voltage saturation) and a linear state feedback. Only linear systems with a single unstable pole (mode) and a single saturated input are considered. We demonstrate that the characterisation of the region of attraction is possible for a second-order linear system with one unstable and one stable pole. For such systems, the region of attraction exhibits a topological bifurcation, and we provide an analytical condition under which this bifurcation occurs. Since the analysis relies on methodologies such as the theorems of Poincaré and Bendixson, which are unfortunately only valid for second-order systems, the results for second-order systems cannot be applied to higher-order systems. It turned out that the search for characterising the region of attraction for higher-order systems was illusory, and thus this research direction had to be abandoned. We therefore focused on controllers for which the region of attraction is the maximal region of attraction that can be achieved under input saturation. This region is referred to as the null controllable region, and its characterisation is simple for any arbitrarily high-order system possessing a single unstable pole. We present a new globally stabilising controller, whose region of attraction is equal to the null controllable region. This result is obtained by incorporating a simple continuous nonlinear function into a linear state feedback controller. There are several advantages linked to this new controller: i) the stability properties are enhanced, ii) performance aspects, namely AC-loss reduction and fast disturbance rejection, can be taken into account, iii) the controller can be applied to systems of arbitrarily high order, and iv) the controller possesses a simple structure, which simplifies the design procedure. We close the control theory part by focusing on the application of the proposed new controller to tokamaks. Since this controller is a state feedback controller, one of the major problems is the state reconstruction. 
Other pertinent topics are: i) the study of the effect of the disturbances on the closed-loop system stability, ii) the problem inherent to the nature of a state feedback controller when we want an output of the system to track a reference signal and iii) the discussion of the detrimental effects on stability if a pure time delay or a limited bandwidth are added to the closed-loop system, as is the case in reality. The validation of the proposed controller is carried out by means of simulations. We present results for ITER-FEAT and JET using the linear tokamak model CREATE-L. Finally, we present a validation for the case of TCV using the nonlinear DINA-CH code.
Lausanne, EPFL, 2004. DOI : 10.5075/epfl-thesis-3034.* New algorithmic methods for real-time transportation problems
Two of the most basic problems encountered in numerical optimization are least-squares problems and systems of nonlinear equations. The use of more and more complex simulation tools on high performance computers requires solving problems involving an increasingly large number of variables. The main thrust of this thesis is the design of new algorithmic methods for solving large-scale instances of these two problems. Although they are relevant in many different applications, we concentrate specifically on real applications encountered in the context of Intelligent Transportation Systems to illustrate their performance. First, we propose a new approach for the estimation and prediction of Origin-Destination tables. This problem is usually solved using a Kalman filter approach, which refers to both the formulation and the resolution algorithm. We prefer to consider an explicit least-squares formulation. It offers convenient and flexible algorithms especially designed to solve large-scale problems. Numerical results provide evidence that this approach requires significantly less computational effort than the Kalman filter algorithm. Moreover, it allows considering larger problems, such as those likely to occur in real applications. Second, a new class of quasi-Newton methods for solving systems of nonlinear equations is presented. The main idea is to generalize classical methods by building a model using more than two previous iterates. We use a least-squares approach to calibrate this model, as exact interpolation requires a fixed number of iterates and may be numerically problematic. Based on classical assumptions, we give a proof of local convergence of this class of methods. Computational comparisons with standard quasi-Newton methods highlight substantial improvements in terms of robustness and number of function evaluations. We derive from this class of methods a matrix-free algorithm designed to solve large-scale systems of nonlinear equations without assuming any particular problem structure. We have successfully tried out the method on problems with up to one million variables. Computational experiments on standard problems show that this algorithm outperforms classical large-scale quasi-Newton methods in terms of efficiency and robustness. Moreover, its numerical performance is similar to that of Newton-Krylov methods, currently considered the best for solving large-scale systems of equations. In addition, we provide numerical evidence of the superiority of our method for solving noisy systems of nonlinear equations. This method is then applied to consistent anticipatory route guidance generation. Route guidance refers to information provided to travelers in an attempt to facilitate their decisions relative to departure time, travel mode and route. We are specifically interested in consistent anticipatory route guidance, in which real-time traffic measurements are used to make short-term predictions, involving complex simulation tools, of future traffic conditions. These predictions are the basis of the guidance information that is provided to users. By consistent, we mean that the anticipated traffic conditions used to generate the guidance must be similar to the traffic conditions that the travelers are going to experience on the network. 
The problem is tricky because, contrary to weather forecasting, where the real system under consideration is not affected by the information provided, the very fact of providing travel information may modify future traffic conditions and, therefore, invalidate the prediction that was used to generate it. Bottom (2000) has proposed a general fixed-point formulation of this problem with the following characteristics. First, as guidance generation involves considerable amounts of computation, this fixed-point problem must be solved quickly and accurately enough for the results to be timely and of use to drivers. Second, the unavailability of a closed-form objective function and the presence of noise due to the use of simulation tools prevent the use of classical algorithms. A number of simulation experiments have been run, based on two software systems, including DynaMIT, a state-of-the-art, real-time computer system for traffic estimation and prediction developed at the Intelligent Transportation Systems Program of the Massachusetts Institute of Technology (MIT). These numerical results underline the good behavior of our large-scale method compared to classical fixed-point methods for solving the consistent anticipatory route guidance problem. We close with some comments about promising future research directions.
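The central quasi-Newton idea of this thesis, building a secant model from more than two previous iterates and calibrating it by least squares, can be sketched as follows on a small made-up system. This is a generic multi-secant update, not the matrix-free algorithm of the thesis, and all names and values are illustrative.

```python
import numpy as np

# Conceptual sketch (hypothetical, not the thesis algorithm itself) of the key
# idea: calibrate a secant model of the Jacobian from MORE than two previous
# iterates via the least-squares condition J*dX ~ dR, then take a Newton-like step.

def F(x):                                      # mildly nonlinear toy system F(x) = 0
    return np.array([x[0] + 0.5 * x[1] - 1.0 + 0.1 * x[0]**2,
                     0.2 * x[0] + x[1] - 2.0 + 0.1 * np.sin(x[1])])

m = 4                                          # number of past iterates kept in memory
x = np.zeros(2)
X, R = [x.copy()], [F(x)]
J = np.eye(2)                                  # crude initial Jacobian approximation

for k in range(12):
    x = x - np.linalg.solve(J, R[-1])          # quasi-Newton step
    X.append(x.copy()); R.append(F(x))
    X, R = X[-m:], R[-m:]                      # keep only the last m iterates
    dX = np.column_stack([xi - X[-1] for xi in X[:-1]])
    dR = np.column_stack([ri - R[-1] for ri in R[:-1]])
    # multi-secant (least-squares) update: J satisfies J*dX = dR as well as possible
    J = J + (dR - J @ dX) @ np.linalg.pinv(dX)
    print(f"iter {k:2d}  ||F(x)|| = {np.linalg.norm(R[-1]):.2e}")
```

The least-squares calibration over several stored iterates is what distinguishes this update from a classical two-point secant method.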
Lausanne, EPFL, 2004. DOI : 10.5075/epfl-thesis-2877.2003
Conference Papers
* Correlation-Based Tuning of a Restricted-Complexity Controller for an Active Suspension System
A correlation-based controller tuning method is proposed for the "Design and optimization of restricted-complexity controllers" benchmark problem. The approach originally proposed for model following is applied to solve the disturbance rejection problem. The idea is to tune the controller parameters such that the closed-loop output is uncorrelated with the measured disturbance. Since perfect decorrelation between the closed-loop output and the disturbance is not attainable with a restricted-complexity controller, the cross-correlation of these two signals is minimized. This is done iteratively using stochastic approximation. A frequency analysis of the tuning criterion allows dealing with control specifications expressed in terms of constraints on the sensitivity functions. Application to the active suspension system of the Automatic Control Laboratory of Grenoble (LAG) provides a 2nd-order controller that meets the control specifications to a large extent.
2003. International Workshop on Design and Optimization of Restricted Complexity Controllers, Grenoble, France, 15-16.01.2003. p. 138-143.* Data-Driven Controller Tuning
2003. AIChE Annual Meeting, San Francisco, USA, November 2003.* Run-to-Run Optimization of Batch Styrene Copolymerization
2003. Polymer Reaction Engineering: Modelling, Optimization and Control, Lyon, France, December, 2003. p. 108-110.* Measurement-Based Optimization of an Emulsion Polymerization Process
2003. Polymer Reaction Engineering: Modelling, Optimization and Control, Lyon, France, December, 2003. p. 16-19.* Convergence Analysis of Run-to-Run Control for a Class of Nonlinear Systems
2003. American Control Conference, Denver, Colorado, USA, June 4-6, 2003. p. 3032-3037. DOI : 10.1109/ACC.2003.1243993.Theses
* Spectroscopic monitoring of bioprocesses
Lausanne, EPFL, 2003. DOI : 10.5075/epfl-thesis-2620.2002
Conference Papers
* Iterative Correlation-Based Controller Tuning: Frequency Domain Analysis
2002. IEEE Conference on Decision and Control, Las Vegas, Nevada USA / december 2002. p. 4215-4220. DOI : 10.1109/CDC.2002.1185031.* Run-to-run Optimization of Batch Emulsion Polymerization
2002. IFAC World Congress, Barcelona, Spain, July 21-26, 2002. p. 1258-1263.* Convergence Analysis of an Iterative Correlation-Based Controller Tuning Method
2002. IFAC World Congress, Barcelona, Spain, July 2002. p. 1546.Theses
* Contribution to the heat integration of batch processes (with or without heat storage)
This work addresses the indirect heat integration (i.e. resorting to intermediate heat storage) and the direct heat integration (i.e. heat exchanges between coexisting process streams) of batch processes. Tools and methods for the targeting of these two limiting cases of heat integration are proposed, and completed by the development and the application of an automatic design & optimization methodology using the Struggle genetic algorithm (GA). A brewery process is used to demonstrate the feasibility and the practical relevance of the proposed indirect heat integration models. The fluctuations of the process schedule and their effects on the optimal solutions are not modelled (indirect heat integration is known to feature an inherently low sensitivity). Rescheduling opportunities are not searched for. For the indirect heat integration, fixed-temperature/variable-mass inventory heat storage units (HSUs) are applied. Two models of indirect heat recovery schemes (IHRSs) are proposed: one based on a closed heat storage system, the other built around an open storage system suitable, e.g., for the food and beverage industries. The inequality constraints of the IHRS models are automatically met owing to an appropriate definition of the decision variables managed by the GA. A rough pre-adjustment of the mass balance equality constraints on HSUs is achieved by a preliminary stage of heat recovery (HR) maximization before actually minimizing the total batch costs (TBCs). Optimization runs and theoretical considerations on the generation & replacement strategy of Struggle demonstrate that the structural and the parametric variables cannot be efficiently optimized within a single level, resulting in a two-level optimization scheme. The automatically designed closed-storage IHRS solutions for the brewing process are as good as the solutions obtained by another author using a combinatorial method followed by a post-optimization stage. The open-storage IHRS is 13 % cheaper while the HR increases by 12 %. Optimizing the IHRS over a one-week period (including the non-periodic start-up & shut-down phases) results in an even more realistic solution, featuring a significantly different trade-off between energy, HSU capacities and HEX areas. A GA-based, two-level optimization scheme is proposed for the design of direct batch heat exchanger networks (HENs). The HEN structures, managed by the upper-level GA, do not include stream splitting. The re-use of HEX units across time slices is a key issue, and a methodology is proposed to specify the structural changes actually possible by repiping or resequencing, accounting for thermo-physical compatibility, chemical compatibility, and process schedule constraints. The optimum operation of an existing HEN during each time slice has been analysed and a sound solution procedure is proposed.
Lausanne, EPFL, 2002. DOI : 10.5075/epfl-thesis-2480.2001
Theses
* On-line exploitation tools for the quantitative analysis of metabolic regulations in microbial cultures
Lausanne, EPFL, 2001. DOI : 10.5075/epfl-thesis-2484.* Adaptive rejection of unstable disturbances
Lausanne, EPFL, 2001. DOI : 10.5075/epfl-thesis-2405.2000
Theses
* Improving safety and productivity of isothermal semi-batch reactors by modulating the feed rate
Lausanne, EPFL, 2000. DOI : 10.5075/epfl-thesis-2245.* Etude du comportement dynamique des turbines Francis
A common phenomenon in Francis turbines is draft tube surge. In these fixed-bladed machines, a strong runner outlet swirl can develop under off-design conditions. The action of the diffuser, and in particular the draft tube elbow, on the rotating flow field induces synchronous pressure and discharge fluctuations. These low-frequency disturbances are of special interest because they can easily propagate throughout the whole hydraulic system. The dynamic response of the system to the natural excitations can inhibit the normal usage of the powerplant. If the natural excitation occurs at frequencies close to an eigenfrequency of the hydraulic system, a significant amplification of the hydraulic fluctuations can be expected. Passive measures, such as air admission or minor design modifications, are commonly taken as a last resort to try to reduce these phenomena. This work explores an active control approach to alleviate the problem. It is shown that the hydroacoustic fluctuations in a hydraulic installation can be strongly reduced by "injecting" the inverse signal of the turbine's natural excitation. While the main theme is the application of the active control approach to improve the operation stability of Francis turbines, the scope of this study is larger and addresses the following scientific topics.
Modeling of a hydroelectric installation: The dynamic behavior of an installation is studied based on a one-dimensional model. A compilation of the basic modeling tools is presented. Using these tools, the dynamic model of a hydraulic installation including an external hydroacoustic source is developed. It is used to explain how the overall hydroacoustic fluctuations can be reduced using a hydraulic exciter mounted in the wall of the draft tube cone.
Improving the operation stability by active control: The hydraulic fluctuations associated with off-design operating conditions of Francis turbines are often very periodic and characterized by the presence of a dominant frequency component. The aim is to cancel out this component, whose amplitude is often excessive. An active control system, composed of a hydraulic exciter and its controller, has been developed for a Francis turbine. The exciter employs a pulsed flow injection in the draft tube to generate an anti-excitation at a given frequency, which is then synchronized with the turbine's natural excitation. A self-tuning extremum control algorithm optimizes the operating parameters of the exciter to minimize the overall fluctuations in the hydraulic installation. Laboratory tests show the efficiency of the active control system. The amplitudes of the pressure and discharge fluctuations at the dominant frequency component are reduced to background-noise levels. Multiple configurations of the exciter have been tested. A spectral analysis of the pressure fluctuations is presented, the system's energy balance has been performed, and the control algorithm of the system is analysed. The energy balance is very encouraging: the exciter system needed only about 1 percent of the hydraulic power of the Francis turbine.
Prediction of the operation stability of turbines based on model tests: Laboratory tests on a scale model, homologous to the turbine, are an indispensable step in the development of a hydroelectric powerplant. Unlike the static characteristics, the dynamic behavior of a turbine installation is difficult to predict. An original method for identifying the dynamic characteristics of a model turbine is proposed. 
The method is verified on a theoretical case study and experimentally tested. It constitutes the basis of the prediction of the operation stability of the powerplant based on model tests.
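The self-tuning extremum control mentioned above can be illustrated with a generic perturbation-based extremum-seeking loop. The Python sketch below is only an illustration under assumed conditions, not the algorithm used in the thesis: the cost function is a made-up stand-in for the measured fluctuation amplitude on the rig, and the tuned parameter is taken to be the exciter phase. The loop dithers the phase, demodulates the measured cost to estimate its local gradient, and steps toward minimum fluctuation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fluctuation_amplitude(phase):
    """Hypothetical rig response: residual pressure-fluctuation amplitude at the
    dominant frequency as a function of the exciter phase (rad). Stands in for a
    measurement; the location of the minimum is unknown to the controller."""
    optimum = 1.2
    return 0.05 + 0.8 * (1.0 - np.cos(phase - optimum)) + 0.01 * rng.standard_normal()

def extremum_seeking(n_periods=50, dither_amp=0.2, gain=0.8, period=20):
    """Perturbation-based extremum seeking: dither the phase over one period,
    demodulate the measured cost to estimate the local gradient, step downhill."""
    phase_hat = 0.0
    for _ in range(n_periods):
        grad_est = 0.0
        for k in range(period):
            s = np.sin(2.0 * np.pi * k / period)
            J = fluctuation_amplitude(phase_hat + dither_amp * s)
            grad_est += 2.0 * J * s / (dither_amp * period)
        phase_hat -= gain * grad_est   # move toward lower fluctuation amplitude
    return phase_hat

print("tuned exciter phase:", extremum_seeking())   # approaches ~1.2 rad
```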
Lausanne, EPFL, 2000. DOI : 10.5075/epfl-thesis-2222.
1999
Conference Papers
* Optimization of a Semi-Batch Reaction System under Safety Constraints
An important objective in the chemical industry is to find the operating conditions that maximize profit while ensuring safe operation. The safety considerations arise from the exothermic nature of most industrial chemical reactions. For normal operation, the system must be able to remove the heat produced, but at the same time it must be capable of withstanding a cooling failure. These goals are best achieved in a semi-batch reactor, since the contents of the reactor can be controlled by the external feed. In this paper, the maximization of the yield of a second-order reaction by manipulating the inlet flow rate is investigated. Two modes of operation are considered: (a) the isoperibolic mode, where the temperature of the cooling fluid is kept constant, and (b) the isothermal mode, where the temperature of the reaction mass is kept constant by adjusting the temperature of the cooling fluid. Constraints on (i) the amount of heat produced and (ii) the temperature under cooling failure are imposed for safety considerations. The optimal solution is discontinuous and is first obtained numerically. Analytical expressions for the evolution of the input between the discontinuities and for the switching times are obtained. Using the analytical characterization of the optimal solution, an efficient feedback implementation strategy is proposed and tested in simulation.
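The feedback implementation idea can be sketched as follows: rather than applying a precomputed discontinuous feed profile, the feed rate is used to track the active safety constraint. The Python fragment below is a minimal sketch under invented assumptions; the reaction scheme, parameters (k, dH, cp, T_max, u_max, etc.) and the saturated proportional law are all illustrative and not taken from the paper.

```python
import numpy as np

# Illustrative (made-up) parameters for an exothermic second-order reaction
# A + B -> C run isothermally in a semi-batch reactor; A is initially present,
# B is fed. None of these numbers come from the paper.
k = 5e-3       # rate constant [L/(mol s)]
dH = -120e3    # reaction enthalpy [J/mol]
cp = 3.5e3     # heat capacity of the reaction mass [J/(kg K)]
rho = 1.0      # density [kg/L]
cB_in = 5.0    # feed concentration of B [mol/L]
T = 343.0      # constant reactor temperature [K]
T_max = 393.0  # maximum admissible temperature after a cooling failure [K]
u_max = 1e-3   # maximum feed rate [L/s]

def cooling_failure_temperature(nA, nB, m):
    """Worst-case adiabatic temperature if cooling is lost and the accumulated
    (unreacted) limiting reactant converts completely."""
    return T + (-dH) * min(nA, nB) / (m * cp)

def simulate(t_end=3600.0, dt=1.0, Kc=2e-5):
    """Feedback implementation sketch: the feed rate tracks the safety
    constraint T_cf <= T_max with a saturated proportional law."""
    nA, nB, V = 10.0, 0.0, 5.0                 # mol, mol, L
    for _ in range(int(t_end / dt)):
        m = rho * V
        margin = T_max - cooling_failure_temperature(nA, nB, m)
        u = np.clip(Kc * margin, 0.0, u_max)   # feed only while it is safe
        r = k * (nA / V) * (nB / V) * V        # reaction rate [mol/s]
        nA += -r * dt
        nB += (u * cB_in - r) * dt
        V += u * dt
    return nA, nB, V

print(simulate())
```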
1999. p. 850-855. DOI : 10.23919/ECC.1999.7099412.
Theses
* A feedback-based implementation scheme for batch process optimization
Lausanne, EPFL, 1999. DOI : 10.5075/epfl-thesis-2097.
* Modélisation, commande et réglage d'un moteur à combustion interne alimenté par injection indirecte de gaz naturel comprimé [Modeling, control and regulation of an internal combustion engine fueled by indirect injection of compressed natural gas]
Lausanne, EPFL, 1999. DOI : 10.5075/epfl-thesis-2073.
* Analysis and control of underactuated mechanical nonminimum-phase systems
Lausanne, EPFL, 1999. DOI : 10.5075/epfl-thesis-2024.
1998
Theses
* Reaction and flow variants/invariants for the analysis of chemical reaction data
This dissertation is concerned with the development of a methodology and appropriate tools for the investigation of chemical reaction systems using measured data. More specifically, the determination of reaction stoichiometry and kinetics from concentration or, preferably, spectral measurements is considered. The main contribution of this work is the derivation of a nonlinear transformation of the dynamic model that separates the evolution of the states into three parts: (i) the reaction-variant part (related to the reactions), (ii) the reaction-invariant and flow-variant part (related to the inlet and outlet streams), and (iii) the reaction- and flow-invariant part (related to the initial conditions). This transformation is very helpful in the analysis of concentration and spectral data.
Dynamic model: First-principles models of reaction systems are gaining importance in chemical and biotechnological production. They can considerably reduce process-development costs and be used for simulation, model-based monitoring, control, and optimization, thus leading to improved product quality, productivity, and process safety. These models include information regarding both the chemical reactions and the operational mode of the reaction system. For their analysis, it is important to distinguish between the states that depend on the reactions and those that do not. The concept of reaction invariants is extended to include the flow invariants of reaction systems with inlet and outlet streams. A nonlinear transformation of the first-principles dynamic model to normal form is proposed. Model reduction, state accessibility, and feedback linearizability are analyzed in the light of this transformation.
Concentration data: Concentration data collected from reaction systems are highly structured, a result of the underlying reactions and the presence of material-exchange terms. It is shown that concentration data can be analyzed in the framework of the three-level decomposition provided by the transformation to normal form. The resulting factorization, termed the factorization of concentration data, enables (i) the separation of the reaction and flow variants/invariants and (ii) the segregation of the dynamics (extents of reaction, integrals of flows) from the static information (stoichiometry, initial and inlet concentrations). Using this factorization, the reaction-variant part can be isolated by subtracting the reaction-invariant part from the measured concentrations. The reaction-variant part is often unknown, since it depends on the kinetic description (typically the main difficulty in modeling chemical reaction systems), whereas the reaction-invariant part is usually known or measured. It is shown that, when the reaction variants can be computed from the concentrations of a few measured species, the concentrations of the remaining species can be reconstructed using the known reaction-invariant part. Target factor analysis has been used successfully with concentration data to determine, without knowledge of reaction kinetics, the number of reactions and the corresponding stoichiometries. It is shown that, when only the reaction-variant part of the data is considered, existing target factor-analytical techniques can be readily applied. However, if target factor analysis is applied directly to measured concentrations, knowledge of the reaction-invariant relationships is required to specify necessary and sufficient conditions for the acceptance of stoichiometric targets.
Spectral data: In current practice, concentration measurements during the course of a reaction are generally not available, either on-line or off-line. Owing to new measurement technologies, spectral measurements are now available both in the laboratory and in production. Various spectral instruments enable non-destructive, indirect concentration measurement of most species in-situ/on-line during the course of a reaction. Measurements are available at high sampling rates, delay-free and at low cost. Furthermore, in most cases the spectral data are linear, i.e., the mixture spectrum is a linear combination of the pure-component spectra weighted by the concentrations. It is shown that the three-level interpretation provided by the transformation to normal form is applicable to spectral data from reacting mixtures. As in traditional wet-chemical analysis, a calibration model must be estimated that provides concentration estimates from spectral measurements. All calibration methods require that a new spectrum lie in the space spanned by the calibration spectral data (space-inclusion condition). To verify this condition, it is proposed to build a calibration model for the reaction-variant part only. Once the reaction variants are predicted from a new spectrum, the (known) reaction invariants can be added to reconstruct the concentrations. Concentration measurements for some species of interest are often not available owing to the difficulty and cost of sampling, sample preparation, and development of analytical techniques, so that traditional calibration of spectral measurements for concentration estimation is not possible. Instead, explicit or implicit knowledge about the kinetic structure is used (prior knowledge about the reaction-variant part), enabling the formulation of factor-analytical methods as a calibration problem. For pedagogical reasons, the results are developed for isothermal, constant-density reaction systems with inlet and outlet streams. They are then extended to various scenarios such as reaction systems with varying density and temperature. Furthermore, factorizations of concentration data are presented that include temperature or calorimetric measurements. Several special cases are considered, encompassing continuous stirred-tank reaction systems, semi-batch and batch reaction systems, systems with reactions in quasi-equilibrium, and non-reacting mixtures with closure.
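For the simplest special case, an isothermal, constant-volume batch reactor, the decomposition reduces to c(t) = c0 + N^T x(t), with N the stoichiometric matrix and x(t) the extents of reaction per unit volume; c0 is the reaction-invariant part. The Python sketch below is a minimal illustration of this special case only, using a hypothetical two-reaction scheme and synthetic data: it isolates the reaction-variant part by subtracting c0 and recovers the extents by least squares.

```python
import numpy as np

# Batch special case of the variant/invariant decomposition:
# c(t) = c0 + N.T @ x(t), with N (reactions x species) and x(t) the extents
# of reaction per unit volume. N.T @ x(t) is the reaction-variant part.
N = np.array([[-1, -1, 1, 0],      # R1: A + B -> C   (hypothetical scheme)
              [ 0, -1, -1, 1]])    # R2: B + C -> D

c0 = np.array([1.0, 1.5, 0.0, 0.0])          # initial concentrations [mol/L]

# Synthetic "measured" concentrations generated from some true extents
x_true = np.array([[0.2, 0.05],
                   [0.4, 0.15],
                   [0.6, 0.30]])             # one row per sampling instant
C_meas = c0 + x_true @ N \
         + 0.001 * np.random.default_rng(1).standard_normal((3, 4))

# Isolate the reaction-variant part and recover the extents by least squares
C_rv = C_meas - c0                           # subtract the reaction-invariant part
x_hat = C_rv @ np.linalg.pinv(N)             # extents of reaction at each instant

print(np.round(x_hat, 3))                    # close to x_true
```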
Lausanne, EPFL, 1998. DOI : 10.5075/epfl-thesis-1861.
1997
Theses
* Feedback-based optimization of a class of constrained nonlinear systems
Lausanne, EPFL, 1997. DOI : 10.5075/epfl-thesis-1717.
1996
Journal Articles
* IKB ou l'enjeu de l'interdisciplinarité [IKB, or the challenge of interdisciplinarity]
Flash EPFL. 1996.
1995
Theses
* Contribution à l'identification et au réglage robustes [Contribution to robust identification and control]
Parametric uncertainty is defined as the gap between the complete model of a dynamical process and the simplified nominal model that is normally used to investigate the properties of the system. It is because of this model mismatch that sensitivity plays such an important role in the design of control systems. In robust control, parametric uncertainty is taken into account in both the analysis and the synthesis of the closed-loop system. In this thesis, robustness to small parameter variations is first studied using differential sensitivity functions. A combined diagram is used to generate relative sensitivity functions, which can be used to improve existing controllers. Several control structures suitable for the design of zero-sensitivity control systems are presented. Then, the RST controller is chosen as a framework for studying large parameter uncertainty, and robustness criteria based on parameter uncertainty are established. Two complementary ways to robustify a nominal RST controller are presented: in the first, the pole-placement characteristic polynomial is augmented; in the second, the supplementary signal due to the parametric uncertainty, which causes the nominal system to malfunction, is compensated. Instead of estimating this generalized perturbation, various block diagrams are used to emulate it. As far as identification is concerned, a new method of robust identification is presented that reinforces the tracking capability of the RLS algorithm for nonstationary systems. Normally, the gain matrix of RLS has to be adjusted when parameter variations are detected. In this work, an on-line estimate of the parameter covariance, driven by the prediction error, is proposed as an additional gain matrix for this adjustment. The diagonal elements of this computed covariance matrix are then used to calculate an on-line estimate of the parameter uncertainty.
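The covariance-adjustment idea can be illustrated with a standard recursive least-squares (RLS) recursion augmented by a simple inflation of the covariance matrix whenever the normalized prediction error becomes large. The Python sketch below is a generic illustration with invented thresholds, gains and a toy nonstationary model; it is not the estimator developed in the thesis.

```python
import numpy as np

def rls_with_covariance_boost(phi_seq, y_seq, n_params, lam=0.995, err_thresh=3.0):
    """RLS with forgetting plus a simple covariance adjustment: when the
    normalized prediction error is large (suggesting a parameter change), the
    covariance matrix is inflated so the estimator regains tracking ability."""
    theta = np.zeros(n_params)
    P = 1e3 * np.eye(n_params)
    sigma2 = 1.0                                   # running estimate of error variance
    for phi, y in zip(phi_seq, y_seq):
        e = y - phi @ theta                        # prediction error
        sigma2 = 0.99 * sigma2 + 0.01 * e**2
        if e**2 > err_thresh**2 * sigma2:          # likely parameter variation
            P = P + 10.0 * np.eye(n_params)        # additional gain ("boost")
        k = P @ phi / (lam + phi @ P @ phi)        # RLS gain
        theta = theta + k * e
        P = (P - np.outer(k, phi) @ P) / lam       # covariance update with forgetting
    return theta

# Usage on a toy nonstationary first-order model y[t] = a*y[t-1] + b*u[t-1]
rng = np.random.default_rng(0)
u = rng.standard_normal(400)
y = np.zeros(401)
for t in range(1, 401):
    a, b = (0.8, 0.5) if t < 200 else (0.5, 1.0)   # parameters jump at t = 200
    y[t] = a * y[t-1] + b * u[t-1] + 0.01 * rng.standard_normal()
phi_seq = [np.array([y[t-1], u[t-1]]) for t in range(1, 401)]
print(rls_with_covariance_boost(phi_seq, y[1:], n_params=2))  # settles near (0.5, 1.0)
```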
Lausanne, EPFL, 1995. DOI : 10.5075/epfl-thesis-1354.