Search results (1-20 of 28)
 Title
 Structure and evolutionary dynamics in fitness landscapes
 Creator
 Pakanati, Anuraag R.
 Date
 2015
 Collection
 Electronic Theses & Dissertations
 Description

Evolution can be conceptualized as an optimization algorithm that allows populations to search through genotypes for those that produce high-fitness solutions. This search process is commonly depicted as exploring a “fitness landscape”, which combines similarity relationships among genotypes with the concept of a genotype-fitness map. As populations adapt to their fitness landscape, they accumulate information about the landscape in which they live. A greater understanding of evolution on fitness landscapes will help elucidate fundamental evolutionary processes. I examine methods of estimating information acquisition in evolving populations and find that these techniques have largely ignored the effects of common descent. Since information is estimated by measuring conserved genomic regions across a population, common descent can create a severe bias by increasing similarities among unselected regions. I introduce a correction method to compensate for the effects of common descent on genomic information and empirically demonstrate its efficacy. Next, I explore three instantiations of NK, Avida, and RNA fitness landscapes to better understand structural properties such as the distribution of peaks and the size of basins of attraction. I find that the fitness of a peak is correlated with the fitness of the peaks in its neighborhood, and that the size of a peak's basin of attraction tends to be proportional to its height. Finally, I visualize local dynamics and perform a detailed comparison between the space of evolutionary trajectories that are technically possible from a single starting point and the results of actual evolving populations.
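The landscape structure this abstract describes (peaks and basins of attraction) can be illustrated with a minimal sketch: a random NK landscape explored by a greedy one-mutant hill-climber, where mapping every genotype to the peak it reaches yields the basin sizes. The function names and parameter values are illustrative assumptions, not the dissertation's code.

```python
import itertools
import random
from collections import Counter

def nk_landscape(n=8, k=2, seed=1):
    """Random NK landscape: the contribution of locus i depends on itself and
    its next k loci (circularly). A toy model, not the dissertation's code."""
    rng = random.Random(seed)
    tables = [{bits: rng.random()
               for bits in itertools.product((0, 1), repeat=k + 1)}
              for _ in range(n)]
    def fitness(g):
        return sum(tables[i][tuple(g[(i + j) % n] for j in range(k + 1))]
                   for i in range(n)) / n
    return fitness

def hill_climb(fitness, g):
    """Greedy one-mutant adaptive walk; returns the local peak reached from g."""
    g = tuple(g)
    while True:
        nbrs = [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g))]
        best = max(nbrs, key=fitness)
        if fitness(best) <= fitness(g):
            return g
        g = best

f = nk_landscape()
# Mapping every genotype to its peak gives the basin sizes directly.
basins = Counter(hill_climb(f, g) for g in itertools.product((0, 1), repeat=8))
```

Counting how many genotypes flow to each peak is exactly the "size of basins of attraction" statistic the abstract mentions, here computed exhaustively since the toy genotype space has only 2^8 points.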
 Title
 Advanced data analysis framework for damage identification in civil infrastructure based on self-powered sensing
 Creator
 Alavi, Amir Hossein
 Date
 2016
 Collection
 Electronic Theses & Dissertations
 Description

"This interdisciplinary research proposes an advanced data analysis framework for civil infrastructure/structural health monitoring (I/SHM) based on a pioneering self-powered sensing technology. The current work characterizes the performance of a fairly new class of self-powered sensors for specific application problems with complex behavior. The proposed health monitoring systems are established through the integration of statistical, artificial intelligence, and finite element methods. Different infrastructure systems with various damage types are analyzed. A new probabilistic artificial intelligence-based damage detection technique is developed that hybridizes genetic programming and logistic regression algorithms. The proposed multi-class classification system assigns probabilities to model scores to detect damage progression. A probabilistic neural network method based on Bayesian theory is further introduced to improve the damage detection accuracy. Data obtained from finite element simulations and an experimental study of hybrid sensor networks is used to calibrate the data interpretation algorithms. The network architecture comprises self-powered sensors that use the electrical energy directly harvested by piezoelectric ceramic lead zirconate titanate (PZT) transducers. The beauty of this so-called self-powered monitoring system is that the operating power for the smart sensors comes directly from the signal being monitored. An advantage of using these sensors is that there is no need to directly measure the absolute value of strain in order to estimate damage. In fact, the proposed self-sustained sensing systems use the sensor output to relate the variation rate of strain distributions to the rate of damage.

The proposed data analysis framework consists of multi-level strategies for structural/infrastructure damage identification through: (a) analysis of individual self-powered strain sensors, (b) data fusion in a network of self-powered strain sensors, and (c) data analysis in a hybrid network of self-powered accelerometers and strain sensors. For each of these levels, several damage indicator features are extracted upon the simulation of the compressed data stored in memory chips of the self-powered sensors. A new data fusion concept based on the effect of a group of sensors, termed the "group effect", is proposed. The goal is to improve the damage detection performance through spatial measurements over structures. Moreover, combining the data from a network of accelerometer and strain sensors results in an integrated global-local damage detection approach. The investigated cases are crack growth detection in steel plates under a uniaxial tension mode, distortion-induced fatigue cracking in steel bridge girders, continuous health monitoring of pavement systems, failure of a simply supported beam under three-point bending, and failure of a gusset plate of the I-35W highway bridge in Minneapolis, Minnesota. 3D dynamic finite element models are developed for each of the cases. The experimental studies are carried out on a steel plate subjected to in-plane tension, a steel plate with bolted connections, and on asphalt concrete specimens in three-point bending mode. PZT-5A ceramic discs and PZT-5H bimorph accelerometers are placed on the surface of the plates to measure the delivered voltage in each damage phase. For the asphalt experiments, a new miniaturized spherical packaging system is designed and tested to protect the PZT ceramic discs embedded inside the specimens. Uncertainty analyses are performed through the contamination of the damage indicator features with different noise levels.

The results indicate that the proposed I/SHM systems are efficiently capable of detecting different damage states in spite of high-level noise contamination."--Pages ii-iii.
 Title
 Cooperative content caching for capacity and cost management in mobile ecosystems
 Creator
 Taghi Zadeh Mehrjardi, Mahmoud
 Date
 2012
 Collection
 Electronic Theses & Dissertations
 Description

The objective of this thesis is to develop an architectural framework of social-community-based cooperative caching for minimizing electronic content provisioning cost in Mobile Social Wireless Networks (MSWNETs). MSWNETs are formed by wireless mobile devices that share common interests in electronic content and physically gather in public settings such as university campuses, workplaces, malls, and airports. Cooperative caching in such MSWNETs is shown to reduce content provisioning cost, which heavily depends on the service and pricing dependencies among various stakeholders, including content providers, network service providers, and end consumers. This thesis develops practical network, service, and economic pricing models, which are then used to create an optimal cooperative caching strategy based on a social community abstraction in wireless networks. The developed framework includes optimal caching algorithms, analytical models, simulation, and prototype experiments for evaluating the performance of the proposed strategy. The main contributions are: 1) formulation of economic cost-reward flow models among the MSWNET stakeholders, 2) development of optimal distributed cooperative caching algorithms, 3) characterization of the impacts of network, user, and object dynamics, 4) investigation of the impacts of user non-cooperation, and finally 5) development of a prototype Social Wireless Network for evaluating the impacts of cooperative caching in a Mobile Social Wireless Network.
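The cost structure behind this kind of optimization can be sketched as a toy expected-cost model: a request is served from the local cache, from a peer in the MSWNET, or from the content provider, at increasing unit cost. The probabilities and unit costs below are illustrative assumptions, not the thesis's pricing model.

```python
def provisioning_cost(p_local, p_peer, c_local=0.0, c_peer=0.2, c_provider=1.0):
    """Expected per-request provisioning cost: hit locally, else at a peer,
    else fetch from the provider (illustrative parameters only)."""
    p_provider = 1.0 - p_local - p_peer
    return p_local * c_local + p_peer * c_peer + p_provider * c_provider

# Cooperation raises the peer-hit probability, lowering the expected cost.
solo = provisioning_cost(p_local=0.3, p_peer=0.0)
coop = provisioning_cost(p_local=0.3, p_peer=0.4)
```

Under these assumed parameters, cooperation cuts the expected cost from 0.7 to 0.38 per request, which is the qualitative effect the thesis quantifies with its full stakeholder models.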
 Title
 Applying evolutionary computation techniques to address environmental uncertainty in dynamically adaptive systems
 Creator
 Ramirez, Andres J.
 Date
 2013
 Collection
 Electronic Theses & Dissertations
 Description

A dynamically adaptive system (DAS) observes itself and its execution environment at run time to detect conditions that warrant adaptation. If an adaptation is necessary, then a DAS changes its structure and/or behavior to continuously satisfy its requirements, even as its environment changes. It is challenging, however, to systematically and rigorously develop a DAS due to environmental uncertainty. In particular, it is often infeasible for a human to identify all possible combinations of system and environmental conditions that a DAS might encounter throughout its lifetime. Nevertheless, a DAS must continuously satisfy its requirements despite the threat that this uncertainty poses to its adaptation capabilities. This dissertation proposes a model-based framework that supports the specification, monitoring, and dynamic reconfiguration of a DAS to explicitly address uncertainty. The proposed framework uses goal-oriented requirements models and evolutionary computation techniques to derive and fine-tune utility functions for requirements monitoring in a DAS, identify combinations of system and environmental conditions that adversely affect the behavior of a DAS, and generate adaptations on demand to transition the DAS to a target system configuration while preserving system consistency. We demonstrate the capabilities of our model-based framework by applying it to an industrial case study involving a remote data mirroring network that efficiently distributes data even as network links fail and messages are dropped, corrupted, or delayed.
 Title
 Online innovization : towards knowledge discovery and achieving faster convergence in multi-objective optimization
 Creator
 Gaur, Abhinav
 Date
 2020
 Collection
 Electronic Theses & Dissertations
 Description

``Innovization'' is the task of learning common principles that exist among some or all of the Pareto-optimal solutions of a multi-objective optimization problem. Except for a few earlier studies, most innovization-related studies were performed on the final non-dominated solutions found by an evolutionary multi-objective algorithm, either manually or by using a machine learning method. Recent studies have shown that these principles can be learned during intermediate iterations of an optimization run and simultaneously utilized in the same run to repair variables and achieve faster convergence to the Pareto-optimal set. We call this ``online innovization'', as it is performed online during the run of an evolutionary multi-objective optimization algorithm. Special attention is paid to learning rules that are easy to interpret, such as short algebraic expressions, instead of complex decision trees or kernel-based black-box rules. We begin by showing how to learn fixed-form rules that are encountered frequently in multi-objective optimization problems. We also show how we can learn free-form rules, which are linear combinations of nonlinear terms, using a custom genetic programming algorithm. We show how the concept of a ``knee'' in the Pareto-optimal set of solutions, along with a custom dimensional penalty calculator, can be used to discard rules that are overly complex, inaccurate, or simply dimensionally incorrect. The results of rules learned using this custom genetic programming algorithm show that it is beneficial to let evolution learn the structure of the rules while the constituent weights are learned using a classical learning algorithm such as linear regression or linear support vector machines.

When the rules are implicit functions of the problem variables, we use a computationally inexpensive way of repairing the variables by turning the repair problem into a single-variable golden-section search. We show a proof of concept on test problems by learning fixed-form rules among the variables of the problem, which we then use during the same optimization run to repair variables. Different principles learned during an optimization run can involve different numbers of variables and/or variables that are common to a number of principles. Moreover, a preference order for repairing variables may play an important role in proper convergence. Thus, when multiple principles exist, it is important to use a strategy that is most beneficial for repairing the evolving population of solutions. The above methods are applied to a mix of test problems and engineering design problems. The results are encouraging and strongly support the use of the innovization task in enhancing the convergence of evolutionary multi-objective optimization algorithms. Moreover, the custom genetic program developed in this work can be a useful machine learning tool for practitioners to learn human-interpretable rules in the form of algebraic expressions.
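The single-variable repair step mentioned in this abstract reduces to a golden-section search over one variable's bounds. Below is the standard textbook routine with a hypothetical repair objective (minimizing the violation of an assumed learned rule x^2 = 2); it is a sketch of the technique, not the authors' implementation.

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimizer of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                      # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Hypothetical repair: move x to minimize the violation of a learned rule
# x**2 = 2 within the variable's bounds [0, 2].
x_repaired = golden_section_min(lambda x: abs(x * x - 2), 0.0, 2.0)
```

Each iteration shrinks the bracket by the constant factor 1/phi and reuses one interior evaluation, which is what makes this repair "computationally inexpensive" when it runs inside every generation of an optimization.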
 Title
 Distance-preserving graphs
 Creator
 Nussbaum, Ronald
 Date
 2014
 Collection
 Electronic Theses & Dissertations
 Description

Let G be a simple graph on n vertices, where d_G(u,v) denotes the distance between vertices u and v in G. An induced subgraph H of G is isometric if d_H(u,v) = d_G(u,v) for all u,v in V(H). We say that G is a distance-preserving graph if G contains at least one isometric subgraph of order k for every k with 1 <= k <= n.

A number of sufficient conditions exist for a graph to be distance-preserving. We show that all hypercubes and all graphs with delta(G) >= 2n/3 - 1 are distance-preserving. Towards this end, we carefully examine the role of "forbidden" subgraphs. We discuss our observations and provide some conjectures, which we have computationally verified for small values of n. We say that a distance-preserving graph is sequentially distance-preserving if each subgraph in the set of isometric subgraphs is a superset of the previous one, and we consider this special case as well.

There are a number of questions involving the construction of distance-preserving graphs. We show that it is always possible to add an edge to a non-complete sequentially distance-preserving graph such that the augmented graph is still sequentially distance-preserving. We further conjecture that the same is true of all distance-preserving graphs. We discuss our observations on making non-distance-preserving graphs into distance-preserving ones by adding edges. We show methods for constructing regular distance-preserving graphs and consider constructing distance-preserving graphs for arbitrary degree sequences. As before, all conjectures here have been computationally verified for small values of n.
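The definition above can be checked directly for small graphs: compute d_G by BFS, then for each order k look for an induced subgraph whose internal BFS distances match d_G. This brute-force checker (exponential in n, so only for tiny graphs) is an illustrative sketch of the kind of computational verification mentioned, not the author's code.

```python
from collections import deque
from itertools import combinations

def bfs_dists(adj, nodes):
    """Shortest-path lengths by BFS within the subgraph induced by `nodes`."""
    nodes = set(nodes)
    out = {}
    for s in nodes:
        seen = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen[v] = seen[u] + 1
                    q.append(v)
        out[s] = seen
    return out

def is_distance_preserving(adj):
    """Direct check of the definition: for every order k (1 <= k <= n) there is
    an induced subgraph whose internal distances equal those in G."""
    n = len(adj)
    d_G = bfs_dists(adj, range(n))
    for k in range(1, n + 1):
        found = any(
            all(d_H[u].get(v) == d_G[u][v] for u, v in combinations(sub, 2))
            for sub in combinations(range(n), k)
            for d_H in [bfs_dists(adj, sub)]
        )
        if not found:
            return False
    return True

# The 4-cycle 0-1-2-3-0 and the complete graph K4, both distance-preserving.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
k4 = {i: [j for j in range(4) if j != i] for i in range(4)}
```

Note that induced-subgraph distances can only be larger than (or undefined relative to) distances in G, so the equality test with `.get` also correctly rejects subsets whose induced subgraph is disconnected.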
 Title
 PALETTEVIZ : A METHOD FOR VISUALIZATION OF HIGH-DIMENSIONAL PARETO-OPTIMAL FRONT AND ITS APPLICATIONS TO MULTI-CRITERIA DECISION MAKING AND ANALYSIS
 Creator
 Talukder, AKM Khaled Ahsan
 Date
 2022
 Collection
 Electronic Theses & Dissertations
 Description

Visual representation of a many-objective Pareto-optimal front in a four- or more-dimensional objective space requires a large number of data points. Moreover, choosing a single point from a large set, even with certain preference information, is problematic, as it imposes a large cognitive burden on the decision-makers. Therefore, many-objective optimization and decision-making practitioners have been interested in effective visualization methods that enable them to filter down a large set to a few critical points for further analysis. Most existing visualization methods are borrowed from other data analytics domains, and they are too generic to be effective for many-criterion decision making. In this dissertation, we propose a visualization method, using star-coordinate and radial visualization plots, for effectively visualizing many-objective trade-off solutions. The proposed method respects some basic topological, geometric, and functional decision-making properties of high-dimensional trade-off points mapped to a three-dimensional space. We call this method Palette Visualization (PaletteViz). We demonstrate the use of PaletteViz on a number of large-dimensional multi-objective optimization test problems and three real-world multi-objective problems, one of which has 10 objective and 16 constraint functions. We also take the NIMBUS and Pareto Race concepts from the canonical multi-criterion decision making and analysis literature and introduce them into PaletteViz to demonstrate the ease and advantage of the proposed method.
 Title
 Balancing convergence and diversity in evolutionary single, multi and many objectives
 Creator
 Seada, Haitham
 Date
 2017
 Collection
 Electronic Theses & Dissertations
 Description

"Single-objective optimization targets only one solution, usually the global optimum. The goal of multi-objective optimization, on the other hand, is to represent the whole set of trade-off Pareto-optimal solutions to a problem. For over thirty years, researchers have been developing Evolutionary Multi-objective Optimization (EMO) algorithms for solving multi-objective optimization problems. Unfortunately, each of these algorithms was found to work well on a specific range of objective dimensionality, i.e. number of objectives. Most researchers overlooked the idea of creating a cross-dimensional algorithm that can adapt its operation from one level of objective dimensionality to another. One important aspect of creating such an algorithm is achieving a careful balance between convergence and diversity. Researchers have proposed several techniques aimed at dividing computational resources uniformly between these two goals. However, in many situations, only one of them is difficult to attain. Also, for a new problem, it is difficult to tell beforehand whether it will be challenging in terms of convergence, diversity, or both. In this study, we propose several extensions to a state-of-the-art evolutionary many-objective optimization algorithm, NSGA-III. Our extensions collectively aim at (i) creating a unified optimization algorithm that dynamically adapts itself to single, multiple, and many objectives, and (ii) enabling this algorithm to automatically focus on convergence, diversity, or both, according to the problem being considered. Our approach augments the existing algorithm with a niching-based selection operator. It also utilizes the recently proposed Karush-Kuhn-Tucker Proximity Measure to identify ill-converged solutions, and finally, it uses several combinations of point-to-point single-objective local search procedures to remedy these solutions and enhance both convergence and diversity.

Our extensions are shown to produce better results than state-of-the-art algorithms over a set of single-, multi-, and many-objective problems."--Pages ii-iii.
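The convergence and diversity goals discussed here are both defined relative to the non-dominated set. For reference, this is the standard Pareto-dominance filter (minimization convention), a generic textbook routine rather than anything specific to NSGA-III or the proposed extensions.

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Naive O(n^2) filter returning the trade-off (non-dominated) set."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
front = nondominated(pts)
```

Convergence means pushing points like (3, 3) and (4, 4) toward the true front; diversity means spreading the surviving points like (1, 5), (2, 2), and (5, 1) evenly across it, which is why the two goals compete for the same evaluation budget.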
 Title
 INTERPRETABLE ARTIFICIAL INTELLIGENCE USING NONLINEAR DECISION TREES
 Creator
 Dhebar, Yashesh Deepakkumar
 Date
 2020
 Collection
 Electronic Theses & Dissertations
 Description

Recent times have seen a massive application of artificial intelligence (AI) to automate tasks across various domains. The back-end mechanism with which automation occurs is generally a black box. Some of the popular black-box AI methods used to solve automation tasks include decision trees (DT), support vector machines (SVM), and artificial neural networks (ANN). In the past several years, these black-box AI methods have shown promising performance and have been widely applied and researched across industry and academia. While black-box AI models have been shown to achieve high performance, the inherent mechanism by which a decision is made is hard to comprehend. This lack of interpretability and transparency makes black-box AI methods less trustworthy. In addition, black-box AI models lack the ability to provide valuable insights regarding the task at hand. Following these limitations, a natural research direction of developing interpretable and explainable AI models has emerged and has gained active attention in the machine learning and AI community in the past three years. In this dissertation, we focus on interpretable AI solutions currently being developed at the Computational Optimization and Innovation Laboratory (COIN Lab) at Michigan State University. We propose a nonlinear decision tree (NLDT) based framework to produce transparent AI solutions for automation tasks related to classification and control. Recent advancements in nonlinear optimization enable us to efficiently derive interpretable AI solutions for various automation tasks. The interpretable and transparent AI models induced using customized optimization techniques show similar or better performance compared to complex black-box AI models across most of the benchmarks. The results are promising and provide directions for future studies in developing efficient transparent AI models.
 Title
 Nonlinear Extensions to New Causality and a NARMAX Model Selection Algorithm for Causality Analysis
 Creator
 da Cunha Nariyoshi, Pedro
 Date
 2021
 Collection
 Electronic Theses & Dissertations
 Description

Although the concept of causality is intuitive, a universally accepted objective measure to quantify causal relationships does not exist. In complex systems whose internal mechanism is not well understood, it is helpful to estimate how different parts of the system are related. In the context of time-series data, Granger Causality (GC) has long been used as a way to quantify such relationships, and it has been successfully applied in fields as diverse as econometrics and neurology. Multiple Granger-like measures and extensions to GC have also been proposed. A recent measure developed to address limitations of GC, New Causality (NC), offers several advantages over GC, such as normalization and better proportionality with respect to internal mechanisms. However, NC is limited in scope by its seminal definition being based on parametric linear models. In this work, a critical analysis of NC is presented, NC is extended to a wide range of nonlinear models, and finally, enhancements to a method of estimating nonlinear models for use with NC are reported.

A critical analysis is conducted to study the relationship between NC values and model estimation errors. It is shown that NC is much more sensitive to overfitting than GC. Although the variance of NC estimates is reduced by applying regularization techniques, NC estimates are also prone to bias. In this work, diverse case studies are presented showing the behavior of NC estimation in the presence of regularization, and a mathematical study of the sources of bias in the estimates is given.

For systems that cannot be modeled well by linear models, the seminal definition of NC performs poorly. This work gives examples in which nonlinear observation models cause NC values obtained with the seminal definition to behave contrary to intuitive expectations. A nonlinear extension of NC to all linear-in-parameters models is then developed and shown to address these limitations. The extension reduces to the seminal definition of NC for linear models and offers a flexible weighting mechanism to distribute contributions among nonlinear terms. The nonlinear extension is applied to a range of synthetic data and real EEG data with promising results.

The sensitivity of NC to parameter estimation errors demands that special care be taken when using NC with nonlinear models. As a complement to nonlinear NC, enhancements to an algorithm for nonlinear parametric model estimation are presented. The algorithm combines a genetic search element for regressor selection with a set-theoretic optimal bounded ellipsoid algorithm for parameter estimation. The enhancements to the genetic search make use of sparsity and information-theoretic measures to reduce the computational cost of the algorithm. Significant reductions are shown, and directions for further improvement of the algorithm are given. The main contributions of this work are a method for estimating causal relationships between signals using nonlinear estimated models, and a framework for estimating these relationships using an enhanced algorithm for model structure search and parameter estimation.
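For context, the baseline GC that NC extends can be computed as the log ratio of residual variances between an AR model of x that uses only its own past (restricted) and one that also includes the past of y (full). The sketch below is the textbook bivariate least-squares formulation on a toy driven system, not the NC measure developed in the dissertation, and the lag order and coefficients are illustrative.

```python
import numpy as np

def granger(x, y, p=2):
    """Granger causality of y -> x: log(var(restricted resid) / var(full resid)).
    The restricted model uses only x's past; the full model adds y's past."""
    T = len(x)
    X_r = np.array([x[t - p:t][::-1] for t in range(p, T)])
    X_f = np.array([np.r_[x[t - p:t][::-1], y[t - p:t][::-1]] for t in range(p, T)])
    target = x[p:]
    def resid_var(A):
        A = np.column_stack([A, np.ones(len(A))])        # add an intercept column
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        return np.var(target - A @ beta)
    return np.log(resid_var(X_r) / resid_var(X_f))

# Toy linear system where y drives x but not vice versa.
rng = np.random.default_rng(0)
y = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()
```

Because this statistic is a ratio of fitted residual variances, any model-estimation error feeds into it directly; that same dependence, amplified, is the sensitivity-to-overfitting issue the dissertation analyzes for NC.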
 Title
 Modeling and evaluation of integrated dynamic signal and dynamic speed control in signalized networks
 Creator
 Chen, Hui
 Date
 2013
 Collection
 Electronic Theses & Dissertations
 Description

A new integrated Dynamic Speed and Dynamic Signal (DSDS) control algorithm for signalized networks is developed in this research. The algorithm is formulated as a dynamic optimization problem with the objective of maximizing the number of vehicles released by the network and minimizing the number of stops in the network. The control algorithm is optimized by Genetic Algorithms (GAs).The developed DSDS algorithm is applied to signalized networks. The benefits of implementing a DSDS control...
Show moreA new integrated Dynamic Speed and Dynamic Signal (DSDS) control algorithm for signalized networks is developed in this research. The algorithm is formulated as a dynamic optimization problem with the objective of maximizing the number of vehicles released by the network and minimizing the number of stops in the network. The control algorithm is optimized by Genetic Algorithms (GAs).The developed DSDS algorithm is applied to signalized networks. The benefits of implementing a DSDS control algorithm on network efficiency are first evaluated through looking at key measures of effectiveness (MOEs). It is demonstrated that the algorithm is able to reduce queues over time, avoid gridlocks, and improve system performance. Vehicle speed profiles under DSDS control and dynamicsignal fixedspeed (DSFS) control are compared to evaluate the advantages of control with dynamic speed on minimizing speed noise and speed variation. DSDS control generates smoother flow profiles by reducing speed noise and speed variation. The comparison provides evidence that implementing DSDS control in signalized networks is an effective way to achieve safer and environmentally friendly signalized network operations. The operational and safety enhancement brought about by the implementation of DSDS varies depending on the levels of driver compliance. The microscopic simulation model VISSIM is used to evaluate the impacts of different levels of driver compliance. Results show that speeding and slow driving each have negative impacts on the performance of DSDS control. Parallel GAs (PGAs) is investigated and deployed in order to improve computational performance. Both a simple GA (SGA) and island PGAs are used to solve the DSDS control problem, a standard GAdifficult, and a standard GAeasy problem. For all problems, savings in computation resources were realized when PGA was used. The magnitude of improvements brought about by a PGA depended on the difficulty of the problem. 
An empirical approach is explored to configure Parallel Genetic Algorithms (PGAs) to optimize the DSDS control algorithm developed in this research. Two of the most important island PGA parameters are examined: the number of islands (subpopulations) and the migration rate. The results show 1) increasing the number of subpopulations does not always bring worthwhile savings in time, 2) increasing the number of subpopulations decreases the importance of migration rate, 3) there is an optimal migration rate associated with each number of subpopulations and it is problemdependent, and 4) PGA configuration and performance with the standard benchmark functions can be used as benchmarks to configure the PGA for problems of unknown complexity, such as the DSDS control algorithm developed in this research. The results suggest that offline processing may be necessary to ensure optimal performance of the PGA.
 Title
 Layout optimization of truss structures by fully stressed design evolution strategy
 Creator
 Ahrari, Ali
 Date
 2016
 Collection
 Electronic Theses & Dissertations
 Description

"The field of structural optimization has gained much academic interest in recent decades. Different streams of optimization methods have been applied to this problem, including analytical methods, optimality criteria-based methods, and gradient-based methods. During the recent decade, there has been growing interest among researchers in applying stochastic population-based methods, the so-called metaheuristics, to this class of optimization problems. The motivation is the robustness of metaheuristics and their capability to avoid local minima. On the downside, their required evaluation budget grows quickly as the number of design variables increases, which limits the complexity of problems to which they can be applied. Furthermore, the majority of these methods are tailored to optimize only the cross-sectional areas of the members, the potential saving from which is highly limited. At the same time, several factors have diminished practitioners' interest in academic research on this topic, including the simplicity of conventional test problems compared to real structures, the variety of design constraints in practice, and the complexity of evaluating the total cost. This dissertation aims at addressing some of the most critical shortcomings of available truss optimization methods, from both academic and practical perspectives. It proposes a novel bilevel method for simultaneous optimization of the topology, shape, and size of truss structures. In the upper level, a specialized evolution strategy (ES) is proposed which follows the principles of contemporary evolution strategies (ESs), although the formulation is modified to handle mixed-variable, highly constrained truss optimization problems. The concept of fully stressed design is employed in the lower level as an efficient method for resizing the solutions sampled in the upper level. 
The concept of fully stressed design is also utilized to define a specialized penalty term based on the estimated increase in structural weight required to satisfy all constraints. The proposed method, called fully stressed design evolution strategy (FSDES), is developed in four stages. It is tested on complicated problems, some of which are developed in this dissertation, in an attempt to reduce the gap between the complexity of test problems and real structures. Empirical evaluation and comparison with the best available methods in the literature reveal the superiority of FSDES, which intensifies for more complicated problems. Aside from its academically interesting features, FSDES addresses some of practicing engineers' critiques of the applicability of truss optimization methods. FSDES can handle large-scale truss optimization problems with more than a thousand design parameters in a reasonable amount of CPU time. Our numerical results demonstrate that the optimized design can hardly be guessed by engineering intuition, which demonstrates the value of such design optimization methods. Besides, the amount of material saving is potentially huge, especially for more complicated problems, which justifies the simulation cost of the design problem. FSDES does not require any user-dependent parameter tuning, and the code is ready to use for an arbitrary truss design problem within the domain of the code." -- Pages ii-iii.
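The fully stressed design resizing used in the lower level follows a classic stress-ratio rule: scale each member's cross-sectional area by the ratio of its working stress to the allowable stress. A minimal sketch, under stated assumptions: `member_force` is a hypothetical stand-in for a structural analysis routine, and the example below fixes the member forces, which corresponds to a statically determinate truss (in an indeterminate structure the forces would change as areas change, which is why the loop iterates).

```python
def fsd_resize(areas, member_force, sigma_allow, a_min=1e-4, iters=10):
    """Fully stressed design resizing: scale each member's area by its
    stress ratio until every member works at (or below) the allowable
    stress. `member_force(areas)` returns the axial force in each member
    for the current sizing (a placeholder for real structural analysis)."""
    areas = list(areas)
    for _ in range(iters):
        forces = member_force(areas)
        for i, (a, f) in enumerate(zip(areas, forces)):
            stress = abs(f) / a
            # Stress-ratio update: fully stressed members keep their area,
            # under-stressed members shrink, over-stressed members grow.
            areas[i] = max(a_min, a * stress / sigma_allow)
    return areas

# Two-bar example with constant forces: areas converge to F / sigma_allow.
areas = fsd_resize([1.0, 1.0], lambda a: [10.0, 4.0], sigma_allow=5.0)
```

The penalty term the abstract mentions follows the same logic in reverse: the stress ratios of an infeasible design estimate how much extra weight would be needed to make it feasible.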
 Title
 Evolutionary multiobjective bilevel optimization for efficient deep neural network architecture design
 Creator
 Lu, Zhichao
 Date
 2020
 Collection
 Electronic Theses & Dissertations
 Description

Deep convolutional neural networks (CNNs) are the backbones of deep learning (DL) paradigms for numerous vision tasks, including object recognition, detection, and segmentation. Early advancements in CNN architectures were primarily driven by human expertise and elaborate design. Recently, neural architecture search (NAS) was proposed with the aim of automating the network design process and generating task-dependent architectures. While existing approaches have achieved competitive performance, they are still impractical for real-world deployment for three reasons: (1) the generated architectures are solely optimized for predictive performance, resulting in inefficient use of hardware resources, i.e., energy consumption, latency, memory size, etc.; (2) the search processes require vast computational resources in most approaches; (3) most existing approaches require one complete search for each deployment specification of hardware or requirements. In this dissertation, we propose an efficient evolutionary NAS algorithm to address the aforementioned limitations. In particular, we first introduce Pareto optimization to NAS, so that a diverse set of architectures trading off multiple objectives is obtained simultaneously in one run. We then improve the algorithm's search efficiency through surrogate models. We finally integrate a transfer learning scheme into the algorithm that allows a new task to leverage previous search efforts, further improving both the performance of the obtained architectures and search efficiency. Together, these components enable an automated and streamlined process for efficiently generating task-specific custom neural network models that are competitive under multiple objectives.
 Title
 Solving Computationally Expensive Problems Using Surrogate-Assisted Optimization: Methods and Applications
 Creator
 Blank, Julian
 Date
 2022
 Collection
 Electronic Theses & Dissertations
 Description

Optimization is omnipresent in many research areas and has become a critical component across industries. However, while researchers often focus on theoretical analysis or convergence proofs of an optimization algorithm, practitioners face various other challenges in real-world applications. This thesis focuses on one of the biggest challenges when applying optimization in practice: computational expense, often caused by the necessity of calling a third-party software package. To address the time-consuming evaluation, we propose a generalizable probabilistic surrogate-assisted framework that dynamically incorporates predictions of approximation models. Besides the framework's capability of handling multiple objectives and constraints simultaneously, the novelty is its applicability to all kinds of metaheuristics. Moreover, multiple disciplines are often involved in optimization, resulting in different types of software packages being utilized for performance assessment. The resulting optimization problem therefore typically consists of multiple independently evaluable objectives and constraints with varying computational expense. Besides providing a taxonomy describing different ways of independent evaluation calls, this thesis also proposes a methodology to handle inexpensive constraints with expensive objective functions, and a more generic concept for any type of heterogeneously expensive optimization problem. Furthermore, two case studies of real-world optimization problems from the automobile industry are discussed, a blueprint for solving optimization problems in practice is provided, and a widely used optimization framework focusing on multiobjective optimization (founded and maintained by the author of this thesis) is presented. Altogether, this thesis shall pave the way to solving (computationally expensive) real-world optimization problems more efficiently and bridge the gap between theory and practice.
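The core idea of surrogate-assisted optimization, as summarized in this abstract, is that a cheap approximation model pre-screens many candidate solutions so the expensive function is called only on the most promising ones. The sketch below is a deliberately simple illustration of that loop, not the thesis's probabilistic framework: a 1-nearest-neighbor predictor stands in for the surrogate, and the test function, bounds, and sampling scheme are all assumptions.

```python
import random

def surrogate_assisted_minimize(expensive_f, bounds, budget=30, seed=1):
    """Sketch of a surrogate-assisted search loop: fit a cheap model on
    evaluated points, screen many candidates with it, and spend a real
    (expensive) evaluation only on the candidate the model rates best."""
    rng = random.Random(seed)
    lo, hi = bounds
    archive = []  # (x, f(x)) pairs evaluated on the true expensive function
    for _ in range(5):  # initial design of experiments
        x = rng.uniform(lo, hi)
        archive.append((x, expensive_f(x)))

    def surrogate(x):
        # Predict f(x) from the nearest already-evaluated point (1-NN model).
        return min(archive, key=lambda p: abs(p[0] - x))[1]

    while len(archive) < budget:
        # Generate many candidates: local moves around the incumbent best,
        # plus uniform random restarts for exploration.
        best_x = min(archive, key=lambda p: p[1])[0]
        candidates = [best_x + rng.gauss(0, 0.1 * (hi - lo)) for _ in range(50)]
        candidates += [rng.uniform(lo, hi) for _ in range(50)]
        candidates = [min(max(c, lo), hi) for c in candidates]
        # Pre-screen with the surrogate; only the winner costs a real call.
        promising = min(candidates, key=surrogate)
        archive.append((promising, expensive_f(promising)))
    return min(archive, key=lambda p: p[1])

x, fx = surrogate_assisted_minimize(lambda x: (x - 2.0) ** 2, bounds=(-5, 5))
```

Per loop iteration, 100 candidate points are ranked by the model but only one true evaluation is spent, which is the economy that makes such frameworks attractive when each call launches third-party simulation software.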
 Title
 An efficient strategy for reliability-based design optimization of nonlinear systems
 Creator
 Rademacher, Marcus H.
 Date
 2012
 Collection
 Electronic Theses & Dissertations
 Description

Engineers are constantly challenged with the task of creating better products. Design optimization is one of several modern tools developed to meet this challenge. A common issue in optimization is finding a reliable design. A design may nominally pass all requirements, but when it is put into service, stochastic variations in performance may cause the design to fail more often than is acceptable. The preferred solution is to perform reliability-based design optimization (RBDO), which accounts for uncertainty. Evaluating the reliability of a given design, however, carries considerable computational cost. The work presented in this thesis outlines a new strategy for performing RBDO in which local response surfaces are used to reduce the computational burden of predicting reliability. This strategy also takes advantage of the fact that the deterministic design optimization (DDO) and RBDO solutions are often spatially near each other, providing the opportunity to use inexpensive and well-worn DDO techniques to get a head start on finding the RBDO solution. The DDO study also provides ample data that can be used to fit response surfaces during the RBDO stage, without the need for additional evaluations. This new strategy is applied to several problems, including simple analytical functions and real engineering problems.
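The reliability evaluation that makes RBDO expensive is, at its simplest, an estimate of the failure probability P[g(X) <= 0] for a limit-state function g. A minimal Monte Carlo sketch follows; the analytic limit state and standard-normal demand below are illustrative assumptions, standing in for the cheap local response surface that the thesis's strategy would sample instead of the full simulation.

```python
import random

def failure_probability(limit_state, sample, n=20000, seed=0):
    """Monte Carlo estimate of the failure probability P[g(X) <= 0].
    `limit_state` plays the role of the (cheap) response surface;
    `sample(rng)` draws one realization of the random inputs."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if limit_state(sample(rng)) <= 0)
    return failures / n

# Illustrative limit state: capacity 3 minus a standard normal demand Z.
# Analytically, P[3 - Z <= 0] = P[Z >= 3], approximately 0.00135.
pf = failure_probability(lambda z: 3.0 - z, lambda rng: rng.gauss(0.0, 1.0))
```

Because thousands of such samples are needed per reliability estimate, replacing the expensive model with a locally fitted response surface, as the thesis proposes, directly attacks the dominant cost of RBDO.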
 Title
 Metamodeling framework for simultaneous multiobjective optimization using efficient evolutionary algorithms
 Creator
 Roy, Proteek Chandan
 Date
 2019
 Collection
 Electronic Theses & Dissertations
 Description

Most real-world problems comprise multiple conflicting objectives, and solutions to those problems are multiple Pareto-optimal trade-off solutions. The main challenge of these practical problems is that the objectives and constraints do not have closed functional forms and are computationally expensive as well. Objectives coming from finite element analysis, computational fluid dynamics software, network flow simulators, crop modeling, weather modeling, or any other simulations involving partial differential equations are good examples of expensive problems. These problems can also be regarded as "low-budget" problems since only a few solution evaluations can be performed in the limited time available. Nevertheless, parameter estimation and optimization of objectives related to these simulations require a good number of solution evaluations to arrive at better parameters or a reasonably good trade-off front. To provide an efficient search process within a limited number of exact evaluations, metamodel-assisted algorithms have been proposed in the literature. These algorithms attempt to construct a computationally inexpensive representative model of the problem, having the same global optima, thereby providing a way to carry out the optimization in metamodel space efficiently. Population-based methods like evolutionary algorithms have become standard for solving multiobjective problems, and recently metamodel-based evolutionary algorithms are being used for solving expensive problems. In this thesis, we address a few challenges of metamodel-based optimization algorithms and propose some efficient and innovative ways to construct these algorithms. To approach efficient design of a metamodel-based optimization algorithm, one needs to address the choice of metamodeling functions. The most trivial way is to build metamodels for each objective and constraint separately. 
But we can reduce the number of metamodel constructions by using aggregated functions and targeting either single or multiple optima in each step. We propose a taxonomy of possible metamodel-based algorithmic frameworks which not only includes most algorithms from the literature but also suggests some new ones. We improve each of the frameworks by introducing trust-region concepts in the multiobjective scenario and present two strategies for building trust regions. Apart from addressing the main bottleneck of the limited number of solution evaluations, we also propose efficient non-dominated sorting methods that further reduce the computational time of a basic step of multiobjective optimization. We have carried out extensive experiments over all representative metamodeling frameworks and shown that each of them can solve a good number of test problems. We have not yet tried to tune the algorithmic parameters; this remains future work. Our theoretical analyses and extensive experiments suggest that we can achieve efficient metamodel-based multiobjective optimization algorithms for solving test as well as real-world expensive and low-budget problems.
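The "basic step" mentioned above, non-dominated sorting, partitions a population of objective vectors into fronts of mutual non-domination. The sketch below implements the classic NSGA-II-style procedure for minimization; it illustrates the baseline operation whose cost the thesis's faster methods aim to reduce, not those methods themselves.

```python
def non_dominated_sort(points):
    """Sort objective vectors (minimization) into non-domination fronts.
    Returns a list of fronts, each a list of indices into `points`."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one.
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    n = len(points)
    dominated_by = [[] for _ in range(n)]  # indices each solution dominates
    dom_count = [0] * n                    # how many solutions dominate i
    for i in range(n):
        for j in range(i + 1, n):
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
                dom_count[j] += 1
            elif dominates(points[j], points[i]):
                dominated_by[j].append(i)
                dom_count[i] += 1
    # Peel off fronts: front k+1 emerges once front k is removed.
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

fronts = non_dominated_sort([(1, 4), (2, 2), (4, 1), (3, 3), (5, 5)])
```

This pairwise version costs O(M N^2) for N solutions and M objectives, which is exactly why faster sorting methods matter when the sort runs once per generation.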
 Title
 Development of large scale structured light based measurement systems
 Creator
 Zhang, Chi
 Date
 2012
 Collection
 Electronic Theses & Dissertations
 Description

The development of large-scale structured light based dimensional measurement systems is introduced in this study. Depending on the application, there are generally two research directions for a structured light system: accurate 3D inspection and fast environmental reconstruction. The first emphasizes the accuracy of the results, while the second is mainly concerned with reconstruction speed. Both become challenging tasks as system scale increases. Quality inspection is a process to evaluate the quality of a manufactured work piece's 3D shape. Compared with a traditional Coordinate Measurement Machine (CMM), structured light based optical measurement has the merits of fast speed, high inspection sample rate, and overall simplicity of operation, and has become a popular alternative inspection method to the CMM. The measurement result, a point cloud of the test work piece, is compared with the Computer Aided Design (CAD) model to find the error distribution map. The color-coded error distribution map is then evaluated against the acceptable tolerance. Three main issues arise when the system scale is enlarged. Calibration of such a large system, with long standoff distance, large Field Of View (FOV), deep Depth Of View (DOV), and multiple sensors, becomes a challenging task, as calibration errors are magnified at large scale. In order to maintain high accuracy, an innovative 3D sensor model with fewer calibration parameters was developed. Instead of employing the incident light in the projector frame, 3D point recovery was achieved within the camera frame via plane-induced parallax information, so that the projector's intrinsic and orientation parameters are avoided in the 3D model. The precision of the large-scale optical system was also simulated and tested against random image noise in the system. 
A multiple-plane strategy was developed and implemented to calibrate the sensor. As the system scale increased, more work pieces could be inspected at the same time, and the optical properties became more complicated. As with all other vision-based measurement systems, structured light systems are usually weak against surface optical properties. Material exhibiting different color textures, reflection ratios, and especially a mixture of specular and diffuse reflection, can prevent the optical sensor from correctly acquiring the incident light information. The traditional structured light method is only valid for parts with diffuse reflection properties, so pretreatment of the surface has to be added before inspection. Aiming at this problem, a structured light sensor with a robust surface decoding method for industrial application was developed. The coding strategies were designed to cope with a variety of test surface optical properties. Monochromatic light was utilized against different object color textures. The illumination of the projector was adjusted pixel by pixel based on the optical properties of the test material to compensate for different reflection coefficients and internal reflection. Furthermore, an extrapolation model to solve the internal reflection problem and a subpixel interpolation model to increase measurement accuracy were also proposed. The proposed system was capable of inspecting various materials with different shapes and different optical properties, from black and dark to shiny. Registration from each sensor frame to the common frame was achieved with the Iterative Closest Point (ICP) method. The final point cloud, joined from the individual point clouds, represents the 3D shape of a large work piece. Object-oriented online calibration was developed based on ICP and Geometry Dimension and Tolerance (GDT) information to register the final point cloud against the CAD model. 
Several modifications to traditional ICP are applied to speed up the registration process, given the large number of points in both sets. Environmental reconstruction for navigation is a process to quickly acquire the surroundings of the vision sensor and present a 3D fused display to the operator. A traditional navigation system employs only a camera to view the environment, so depth information is lost, and the operator is often confused about object distance, the distance between the object and the camera. A structured light system based on infrared light can quickly recover object depths and fuse them into the displayed images without disturbing the operator's normal vision. The sensed points on the objects are highlighted by color codes, from red to blue, which indicate the object distances. Fast environmental reconstruction emphasizes acquisition time. To cope with moving objects, a one-shot surface coding algorithm was developed: only one projection image is needed to acquire the 3D information. The codeword is determined in a single pattern because the code of each primitive depends on the values of the primitive and its neighbors. Compared with previous patterns, this pattern is more robust because it can avoid the influence of ambient light and the reflective properties of the inspected part. Moreover, the required accuracy is achieved by a pattern primitive similar to the corner of a checkerboard, which provides high accuracy even when occlusion occurs. In order to reconstruct the environment without blind areas, an omnidirectional panoramic structured light sensing system was developed to increase the system's field of reconstruction. Hyperbolic mirrors are placed in front of a projector and a camera, and a 3D reconstruction model was built for the hyperbolic mirror. Task-level calibration is conducted for the system. 
Finally, a 360-degree image fused with depth information is achieved by the designed system. In summary, this study developed large-scale structured light systems for two different applications: accurate inspection for industrial quality control and fast environmental reconstruction for mobile robot navigation.
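The ICP registration used above alternates two steps: match each point to its nearest neighbor in the other cloud, then solve the best rigid transform for those matches in closed form. A minimal 2D point-to-point sketch follows; it is an illustration of plain ICP, not the modified multi-sensor variant the study describes, and the closed-form step is the 2D Kabsch solution.

```python
import math

def icp_2d(source, target, iterations=20):
    """Classic point-to-point ICP in 2D: nearest-neighbor matching plus a
    closed-form rigid transform, repeated until the clouds align."""
    src = [list(p) for p in source]
    for _ in range(iterations):
        # 1. Correspondences: nearest target point for every source point.
        pairs = [(p, min(target,
                         key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2))
                 for p in src]
        # 2. Closed-form rigid transform (2D Kabsch) for these pairs.
        m = len(pairs)
        pcx = sum(p[0] for p, _ in pairs) / m
        pcy = sum(p[1] for p, _ in pairs) / m
        qcx = sum(q[0] for _, q in pairs) / m
        qcy = sum(q[1] for _, q in pairs) / m
        sxx = sum((p[0]-pcx)*(q[0]-qcx) + (p[1]-pcy)*(q[1]-qcy)
                  for p, q in pairs)
        sxy = sum((p[0]-pcx)*(q[1]-qcy) - (p[1]-pcy)*(q[0]-qcx)
                  for p, q in pairs)
        theta = math.atan2(sxy, sxx)
        c, s = math.cos(theta), math.sin(theta)
        tx = qcx - (c*pcx - s*pcy)
        ty = qcy - (s*pcx + c*pcy)
        # 3. Apply the estimated transform to the source cloud.
        src = [[c*x - s*y + tx, s*x + c*y + ty] for x, y in src]
    return src
```

The brute-force nearest-neighbor search here is O(N^2) per iteration, which is precisely the cost that the study's speed-up modifications target for large point clouds.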
 Title
 Exploring joint-level control in evolutionary robotics
 Creator
 Moore, Jared M.
 Date
 2015
 Collection
 Electronic Theses & Dissertations
 Description

"In this dissertation, we use computational evolution and physics simulations to explore both control and morphology in robotic systems. Specifically, we investigate joint-level control strategies and their interaction with morphological elements." -- Abstract.
 Title
 Automatically addressing uncertainty in autonomous robots with computational evolution
 Creator
 Clark, Anthony Joseph
 Date
 2016
 Collection
 Electronic Theses & Dissertations
 Description

Autonomous robotic systems are becoming prevalent in our daily lives. Many robots are still restricted to manufacturing settings where precision and repetition are paramount. However, autonomous devices are increasingly being designed for applications such as search and rescue, remote sensing, and tasks considered too dangerous for people. In these cases, it is crucial to continue operation even when some unforeseen adversity decreases performance levels; a robot with diminished performance is still successful if it is able to deal with uncertainty, which includes any unexpected change due to unmodeled dynamics, changing control strategies, or changes in functionality resulting from damage or aging. The research presented in this dissertation seeks to improve such autonomous systems through three evolution-based techniques. First, robots are optimized offline so that they best exploit available material characteristics, for instance flexible materials, with respect to multiple objectives (e.g., speed and efficiency). Second, adaptive controllers are evolved, which enable robots to better respond to unforeseen changes to themselves and their environments. Finally, adaptation limits are discovered using a proposed mode discovery algorithm. Once the boundaries of adaptation are known, self-modeling is applied online to determine the current operating mode and select or generate an appropriate controller. These three techniques work together to create a holistic method that will enable autonomous robotic systems to automatically handle uncertainty. The proposed methods are evaluated using robotic fish as a test platform. Such systems can benefit in multiple ways from the integration of flexible materials. Moreover, robotic fish operate in complex, nonlinear environments, enabling thorough testing of the proposed methods.
 Title
 Harnessing evolutionary computation for the design and generation of adaptive embedded controllers within the context of uncertainty
 Creator
 Byers, Chad Michael
 Date
 2015
 Collection
 Electronic Theses & Dissertations
 Description

A critical challenge in the design of embedded controllers is incorporating desirable qualities such as robustness, fault tolerance, and adaptability into the control process in order to respond to dynamic environmental conditions. An embedded controller governs the execution of a task-specific system by monitoring information from its environment via sensors and producing an appropriate response through the system's actuators, often independent of any supervisory control. For a human developer, identifying the set of all possible combinations of conditions a system might experience and designing a solution to accommodate this set is burdensome, costly, and often infeasible. To alleviate this burden, a variety of techniques have been explored to automate the generation of embedded controller solutions. In this dissertation, we focus on the bio-inspired technique referred to as evolutionary computation, where we harness evolution's power as a population-based, global search technique to build up good behavioral components. In this way, evolution naturally selects for these desirable qualities in order for a solution to remain competitive over time in the population. Often, these search techniques operate in the context of uncertainty, where aspects of (1) the problem domain, (2) the solution space, and (3) the search process itself are subject to variation and change. To mitigate issues associated with uncertainty in the problem domain, we propose the digital enzyme, a biologically inspired model that maps the complexity of both the environment and the system into the space of values rather than instructions. To address uncertainty in the solution space, we remove constraints in our initial digital enzyme model to allow the genome structure to be dynamic and open-ended, accommodating a wider range of evolved solution designs. 
Finally, to help explore the inherent uncertainty in the search process itself, we uncover a hidden feature interaction involving the diversity-preserving search operator of a popular evolutionary algorithm and propose a new way to use niching as a means to mitigate its unwanted effects and bias on search.