Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem, because the maximization of the real-valued function $g(x)$ is equivalent to the minimization of the function $f(x) := (-1)\cdot g(x)$.
Given a possibly nonlinear and non-convex continuous function $f \colon \Omega \subset \mathbb{R}^n \to \mathbb{R}$ with the global minimum $f^*$ and the set of all global minimizers $X^*$ in $\Omega$, the standard minimization problem can be given as
$$\min_{x \in \Omega} f(x),$$
that is, finding $f^*$ and a global minimizer in $X^*$; here $\Omega$ is a (not necessarily convex) compact set defined by inequalities $g_i(x) \geqslant 0$, $i = 1, \ldots, r$.
Global optimization is distinguished from local optimization by its focus on finding the minimum or maximum over the given set, as opposed to finding local minima or maxima. Finding an arbitrary local minimum is relatively straightforward with classical local optimization methods. Finding the global minimum of a function is far more difficult: analytical methods are frequently not applicable, and the use of numerical solution strategies often leads to very hard challenges.
A recent approach to the global optimization problem is via minima distribution. In this work, a relationship between any continuous function $f$ on a compact set $\Omega \subset \mathbb{R}^n$ and its global minimum $f^*$ has been strictly established. As a typical case, it follows that
$$\lim_{k \to \infty} \frac{\int_\Omega f(x)\, e^{-k f(x)}\, \mathrm{d}x}{\int_\Omega e^{-k f(x)}\, \mathrm{d}x} = f^* \quad \text{and} \quad \lim_{k \to \infty} \frac{e^{-k f(x)}}{\int_\Omega e^{-k f(x)}\, \mathrm{d}x} = \frac{\chi_{X^*}(x)}{\mu(X^*)},$$
where $\mu(X^*)$ is the $n$-dimensional Lebesgue measure of the set of minimizers $X^* \subseteq \Omega$ and $\chi_{X^*}$ is its characteristic function. And if $f$ is not a constant on $\Omega$, the monotonic relationship
$$\frac{\int_\Omega f(x)\, e^{-k f(x)}\, \mathrm{d}x}{\int_\Omega e^{-k f(x)}\, \mathrm{d}x} \;>\; \frac{\int_\Omega f(x)\, e^{-(k + \Delta k) f(x)}\, \mathrm{d}x}{\int_\Omega e^{-(k + \Delta k) f(x)}\, \mathrm{d}x} \;>\; f^*$$
holds for all $k \in \mathbb{R}$ and $\Delta k > 0$, which implies a series of monotonic containment relationships, one of which is, for example,
$$\Omega \supseteq D_f^{(k)} \supseteq D_f^{(k + \Delta k)} \supseteq \cdots \supseteq X^*, \quad \text{where } D_f^{(k)} = \left\{ x \in \Omega : f(x) \leqslant \frac{\int_\Omega f(t)\, e^{-k f(t)}\, \mathrm{d}t}{\int_\Omega e^{-k f(t)}\, \mathrm{d}t} \right\}.$$
And we define a minima distribution to be a weak limit $m_{f,\Omega}$ such that the identity
$$\int_\Omega m_{f,\Omega}(x)\, \varphi(x)\, \mathrm{d}x = \lim_{k \to \infty} \frac{\int_\Omega \varphi(x)\, e^{-k f(x)}\, \mathrm{d}x}{\int_\Omega e^{-k f(x)}\, \mathrm{d}x}$$
holds for every smooth function $\varphi$ with compact support in $\Omega$. Here are two immediate properties of $m_{f,\Omega}$:
(1) $m_{f,\Omega}$ satisfies the identity $\int_\Omega m_{f,\Omega}(x)\, \mathrm{d}x = 1$.
(2) If $f$ is continuous on $\Omega$, then $f^* = \int_\Omega f(x)\, m_{f,\Omega}(x)\, \mathrm{d}x$.
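As a quick numerical illustration of the first limit above, a minimal sketch in Python (the double-well objective, the interval $\Omega = [-2, 2]$, and the Riemann-sum quadrature are assumptions made for this example, not part of the cited work):

```python
import math

# Hypothetical test problem (an assumption for this sketch): a double-well
# function on Omega = [-2, 2] whose global minimizer is near x ~ 1.1.
def f(x):
    return (x * x - 1.0) ** 2 - x

def boltzmann_mean(k, a=-2.0, b=2.0, n=20001):
    """Riemann-sum approximation of the weighted mean
       E_k[f] = (integral of f e^{-k f}) / (integral of e^{-k f}) over [a, b]."""
    h = (b - a) / (n - 1)
    fs = [f(a + i * h) for i in range(n)]
    fmin = min(fs)  # shift the exponent so large k cannot overflow exp()
    num = sum(fi * math.exp(-k * (fi - fmin)) for fi in fs)
    den = sum(math.exp(-k * (fi - fmin)) for fi in fs)
    return num / den  # the constant factors h and e^{k*fmin} cancel in the ratio

# E_k[f] decreases monotonically in k and tends to the global minimum f*.
for k in (1, 10, 100, 1000):
    print(f"k = {k:4d}:  E_k[f] = {boltzmann_mean(k):.6f}")
```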
As a comparison, the well-known relationship between any differentiable convex function and its minima is strictly established by the gradient. If $f$ is differentiable on a convex set $D$, then $f$ is convex if and only if
$$f(y) \geqslant f(x) + \nabla f(x)^{\mathsf{T}} (y - x) \quad \text{for all } x, y \in D;$$
thus, $\nabla f(x^*) = 0$ implies that $f(y) \geqslant f(x^*)$ holds for all $y \in D$, i.e., $x^*$ is a global minimizer of $f$ on $D$.
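For a concrete check of this characterization (an illustrative example added here, not part of the cited text), take $f(x) = x^2$ on $D = \mathbb{R}$: the inequality reads $y^2 \geqslant x^2 + 2x(y - x)$, which rearranges to $(y - x)^2 \geqslant 0$ and therefore always holds; setting $\nabla f(x^*) = 2x^* = 0$ gives $x^* = 0$, which is indeed the global minimizer of $x^2$.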
Typical examples of global optimization applications include:
Curve fitting, such as non-linear least squares analysis and other generalizations, used in fitting model parameters to experimental data in chemistry, physics, biology, economics, finance, medicine, astronomy, and engineering.
The most successful general exact strategies are:
Inner and outer approximation
In both of these strategies, the set over which a function is to be optimized is approximated by polyhedra. In inner approximation, the polyhedra are contained in the set, while in outer approximation, the polyhedra contain the set.
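A minimal sketch of the idea in Python (the unit disk, the linear objective, and the regular-polygon constructions are assumptions chosen for illustration): inscribed polygons lie inside the disk and give lower bounds on the maximum of a linear function, circumscribed polygons contain the disk and give upper bounds, and both converge as the polyhedra are refined.

```python
import math

# Bound the maximum of the linear function c.x over the unit disk by inner
# and outer polyhedral approximations.  The true maximum is ||c|| = 5.0.
c = (3.0, 4.0)

def inner_bound(m):
    """Inscribed regular m-gon: its vertices lie IN the disk, so the best
       vertex value is a lower bound on the true maximum."""
    return max(c[0] * math.cos(2 * math.pi * i / m) +
               c[1] * math.sin(2 * math.pi * i / m) for i in range(m))

def outer_bound(m):
    """Circumscribed regular m-gon (sides tangent to the circle): it CONTAINS
       the disk, so the best vertex value is an upper bound."""
    r = 1.0 / math.cos(math.pi / m)  # circumradius of the circumscribed m-gon
    return max(c[0] * r * math.cos((2 * i + 1) * math.pi / m) +
               c[1] * r * math.sin((2 * i + 1) * math.pi / m) for i in range(m))

for m in (4, 8, 32, 128):
    print(f"m = {m:4d}:  {inner_bound(m):.5f} <= 5.0 <= {outer_bound(m):.5f}")
```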
Branch and bound (BB or B&B) is an algorithm design paradigm for discrete and combinatorial optimization problems. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm.
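A minimal branch-and-bound sketch in Python, assuming a one-dimensional objective with a known Lipschitz constant (the objective, the constant, and the tolerance are assumptions for illustration); the Lipschitz bound plays the role of the lower estimate used to discard branches:

```python
import math

def f(x):
    return math.sin(3.0 * x) + 0.5 * x  # toy objective on [-3, 3]

L = 3.5  # |f'(x)| = |3 cos(3x) + 0.5| <= 3.5, so L is a valid Lipschitz constant

def branch_and_bound(a, b, tol=1e-6):
    best_x, best_f = a, f(a)                # incumbent (best solution so far)
    stack = [(a, b)]                        # branches still to explore
    while stack:
        lo, hi = stack.pop()
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid < best_f:                   # update the incumbent
            best_x, best_f = mid, fmid
        lower = fmid - L * 0.5 * (hi - lo)  # Lipschitz lower bound on [lo, hi]
        if lower >= best_f - tol:           # prune: cannot beat the incumbent
            continue
        stack.append((lo, mid))             # branch into two subintervals
        stack.append((mid, hi))
    return best_x, best_f

x_star, f_star = branch_and_bound(-3.0, 3.0)
print(f"global minimizer ~ {x_star:.6f}, minimum ~ {f_star:.6f}")
```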
Interval arithmetic, interval mathematics, interval analysis, or interval computation, is a method developed by mathematicians since the 1950s and 1960s as an approach to putting bounds on rounding errors and measurement errors in mathematical computation and thus developing numerical methods that yield reliable results. Interval arithmetic helps find reliable and guaranteed solutions to equations and optimization problems.
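A minimal sketch of the underlying mechanism (the tiny helpers below are hand-rolled assumptions, not a real interval library): evaluating an expression with interval operands yields an enclosure guaranteed to contain the true range, which is exactly the kind of rigorous bound interval-based global optimizers use to discard boxes.

```python
def i_add(a, b):   # [a] + [b]
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):   # [a] * [b]: min/max over all endpoint products
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))

def i_scale(c, a): # c * [a]
    return (min(c*a[0], c*a[1]), max(c*a[0], c*a[1]))

def f_interval(x):
    """Enclosure of f(x) = x*x - 2*x; guaranteed to contain the true range."""
    return i_add(i_mul(x, x), i_scale(-2.0, x))

lo, hi = f_interval((0.0, 3.0))
print(f"f([0, 3]) lies inside [{lo}, {hi}]")  # prints [-6.0, 9.0]
# The true range is [-1, 3]; the enclosure is wider because x occurs twice
# (the "dependency problem"), but it is rigorous, and splitting [0, 3] into
# smaller boxes tightens it -- the basis of interval branch-and-bound.
```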
Several exact or inexact Monte-Carlo-based algorithms exist:
Direct Monte-Carlo sampling
In this method, random simulations are used to find an approximate solution.
Example: The traveling salesman problem is what is called a conventional optimization problem. That is, all the facts (distances between each destination point) needed to determine the optimal path to follow are known with certainty, and the goal is to run through the possible travel choices to come up with the one with the lowest total distance. However, suppose that instead of minimizing the total distance traveled to visit each desired destination, we wanted to minimize the total time needed to reach each destination. This goes beyond conventional optimization, since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, to determine the optimal path we would want to use simulation optimization: first to understand the range of potential times it could take to go from one point to another (represented by a probability distribution in this case rather than a specific distance), and then to optimize travel decisions so as to identify the best path to follow, taking that uncertainty into account.
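For the simpler continuous case, a minimal direct-sampling sketch in Python (the objective, Himmelblau's function, and the sampling budget are assumptions for illustration):

```python
import random

def f(x, y):
    return (x*x + y - 11)**2 + (x + y*y - 7)**2  # Himmelblau's function

random.seed(0)
best = None
for _ in range(100_000):                 # draw uniform random points ...
    x = random.uniform(-5.0, 5.0)
    y = random.uniform(-5.0, 5.0)
    v = f(x, y)
    if best is None or v < best[0]:      # ... and keep the best one seen
        best = (v, x, y)

print(f"best value ~ {best[0]:.4f} at ({best[1]:.3f}, {best[2]:.3f})")
# Himmelblau's function has four global minima with value 0; pure random
# sampling finds an approximate one, at a cost that grows quickly with dimension.
```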
Stochastic tunneling (STUN) is an approach to global optimization based on Monte Carlo sampling of the objective function, in which the function is nonlinearly transformed to allow easier tunneling among regions containing function minima. Easier tunneling allows for faster exploration of the sample space and faster convergence to a good solution.
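A minimal sketch of the idea in Python, using the usual STUN transformation $f_{\mathrm{STUN}}(x) = 1 - \exp(-\gamma\,(f(x) - f_0))$, where $f_0$ is the best value found so far (the objective, $\gamma$, the inverse temperature, and the step size are assumptions for illustration):

```python
import math, random

def f(x):
    return (x*x - 1.0)**2 + 0.1 * x  # double well, global minimum near x ~ -1

def f_stun(fx, f_best, gamma=2.0):
    # Barriers above the best-known value are flattened toward 1, easing
    # "tunneling" between basins; regions below f_best stay strongly favored.
    return 1.0 - math.exp(-gamma * (fx - f_best))

random.seed(1)
beta, step = 5.0, 0.5
x = 2.0                                  # start far from both wells
f_best, x_best = f(x), x

for _ in range(20_000):
    x_new = x + random.gauss(0.0, step)
    f_new = f(x_new)
    if f_new < f_best:                   # new record: update the reference
        f_best, x_best = f_new, x_new
    # Metropolis acceptance on the transformed surface
    d = f_stun(f_new, f_best) - f_stun(f(x), f_best)
    if d <= 0 or random.random() < math.exp(-beta * d):
        x = x_new

print(f"best value ~ {f_best:.5f} at x ~ {x_best:.4f}")
```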
Parallel tempering, also known as replica exchange MCMC sampling, is a simulation method aimed at improving the dynamic properties of Monte Carlo simulations of physical systems, and of Markov chain Monte Carlo (MCMC) sampling methods more generally. The replica exchange method was originally devised by Swendsen, then extended by Geyer, and later developed, among others, by Giorgio Parisi.
Sugita and Okamoto formulated a molecular dynamics version of parallel tempering: this is usually known as replica-exchange molecular dynamics or REMD.
Essentially, one runs N copies of the system, randomly initialized, at different temperatures. Then, based on the Metropolis criterion, one exchanges configurations at different temperatures. The idea of this method is to make configurations at high temperatures available to the simulations at low temperatures and vice versa. This results in a very robust ensemble which is able to sample both low- and high-energy configurations. In this way, thermodynamical properties such as the specific heat, which is in general not well computed in the canonical ensemble, can be computed with great precision.
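A minimal parallel-tempering sketch in Python (the energy function, the temperature ladder, and the move sizes are assumptions for illustration); neighbouring replicas attempt swaps with the standard acceptance probability $\min\{1, \exp[(\beta_i - \beta_j)(E_i - E_j)]\}$:

```python
import math, random

def energy(x):
    return (x*x - 1.0)**2 + 0.2 * x   # double well; global minimum near x ~ -1

random.seed(2)
temps = [0.05, 0.2, 0.8, 3.0]         # low T exploits, high T explores
xs = [2.0] * len(temps)               # all replicas start in the wrong well

for sweep in range(5_000):
    # 1) local Metropolis move within each replica at its own temperature
    for i, T in enumerate(temps):
        x_new = xs[i] + random.gauss(0.0, 0.4)
        dE = energy(x_new) - energy(xs[i])
        if dE <= 0 or random.random() < math.exp(-dE / T):
            xs[i] = x_new
    # 2) attempt an exchange between a random pair of neighbouring replicas
    i = random.randrange(len(temps) - 1)
    dB = 1.0 / temps[i] - 1.0 / temps[i + 1]     # beta_i - beta_{i+1} > 0
    dE = energy(xs[i]) - energy(xs[i + 1])
    if dB * dE >= 0 or random.random() < math.exp(dB * dE):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]

# The coldest replica typically ends up concentrated near the global minimum.
print(f"coldest replica: x ~ {xs[0]:.4f}, energy ~ {energy(xs[0]):.5f}")
```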
Reactive search optimization (i.e., the integration of sub-symbolic machine learning techniques into search heuristics)
Graduated optimization, a technique that attempts to solve a difficult optimization problem by initially solving a greatly simplified problem, and progressively transforming that problem (while optimizing) until it is equivalent to the difficult one (see the sketch after this list).
Response surface methodology-based approaches
IOSO (Indirect Optimization based on Self-Organization)
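Regarding graduated optimization above, a minimal sketch in Python (the objective, the Gaussian-smoothing surrogate, and the smoothing schedule are assumptions for illustration); here the Gaussian smoothing happens to have a closed form, so each stage minimizes a progressively less-smoothed surrogate, warm-started from the previous stage's solution:

```python
import math

def f(x):
    return -math.cos(3.0 * x) + 0.05 * (x - 1.0)**2  # many local minima

def f_smoothed(x, sigma):
    """E[f(x + sigma*Z)] for Z ~ N(0,1): the oscillation is damped by
       exp(-4.5 sigma^2) while the quadratic term just gains a constant."""
    return (-math.cos(3.0 * x) * math.exp(-4.5 * sigma**2)
            + 0.05 * ((x - 1.0)**2 + sigma**2))

def local_search(g, x, step=0.01, iters=2000):
    """Crude descent: repeatedly move to the best of the three neighbours."""
    for _ in range(iters):
        x = min((x - step, x, x + step), key=g)
    return x

x = 6.0  # a plain local search from here gets stuck in a poor local minimum
for sigma in (1.5, 1.0, 0.5, 0.2, 0.0):  # schedule: heavy smoothing -> none
    x = local_search(lambda t: f_smoothed(t, sigma), x)
    print(f"sigma = {sigma:3.1f}:  x = {x:+.3f}, f(x) = {f(x):+.4f}")
# The iterate tracks the smoothed minimum down to the global minimum of f,
# near x ~ 0, instead of stopping in the local basin near x = 6.
```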
Roberto Battiti, M. Brunato and F. Mascia, Reactive Search and Intelligent Optimization, Operations Research/Computer Science Interfaces Series, Vol. 45, Springer, November 2008. ISBN 978-0-387-09623-0
For stochastic methods:
A. Zhigljavsky. Theory of Global Random Search. Mathematics and its applications. Kluwer Academic Publishers. 1991.
Hamacher, K. (2006). "Adaptation in stochastic tunneling global optimization of complex potential energy landscapes". Europhysics Letters (EPL). IOP Publishing. 74 (6): 944-950. doi:10.1209/epl/i2006-10058-0. ISSN 0295-5075.
For general considerations on the dimensionality of the domain of definition of the objective function:
Hamacher, Kay (2005). "On stochastic global optimization of one-dimensional functions". Physica A: Statistical Mechanics and Its Applications. Elsevier BV. 354: 547-557. doi:10.1016/j.physa.2005.02.028. ISSN 0378-4371.
For strategies allowing one to compare deterministic and stochastic global optimization methods: