We all want to optimize tasks in our day-to-day lives. In simple words, we want the best output (a yield) from a fixed set of inputs (a cost), given certain constraints. We want this output to be either as large as possible or as small as possible; that is, we want to attain either a maximum or a minimum (both known as extrema).
One may have seen such a problem in a very rudimentary form as “Linear Programming” in high school.
This field of mathematics is called optimization theory. We generally study certain functions that are to be optimized over a given set. Formally, we define an optimization problem as follows:
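In standard notation (the symbols below are the conventional ones; the original definition may have used different letters):

$$
\min_{x \in C} f(x)
$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is the objective function and $C \subseteq \mathbb{R}^n$ is the feasible set cut out by the constraints. Maximization is the same problem applied to $-f$.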
Typically, the problem does not look as neat as it does in the formal definition. One usually deals with multiple constraints, which can still be expressed in the form shown above.
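For instance, with inequality constraints $g_1, \dots, g_m$ and equality constraints $h_1, \dots, h_p$ (illustrative names), the feasible set is simply

$$
C = \{\, x \in \mathbb{R}^n : g_i(x) \le 0 \text{ for } i = 1, \dots, m, \;\; h_j(x) = 0 \text{ for } j = 1, \dots, p \,\},
$$

so the problem is still “minimize $f$ over $C$”.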
There are numerous techniques for optimizing such problems, but it turns out that an enormous literature is available when the function is convex, that is, when the line segment drawn between any two points on the graph of the function always lies above (or on) the graph. More formally:
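A function $f$ defined on a convex domain is convex if, for any two points $x, y$ in the domain and any $\lambda \in [0, 1]$,

$$
f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y).
$$

The right-hand side is a point on the chord joining $(x, f(x))$ and $(y, f(y))$, which is exactly the “line segment lies above the graph” picture.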
A convex function thus looks something like this:
or in higher dimensions something like:
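If you would like to reproduce such pictures yourself, here is a minimal Python sketch; the functions $x^2$ and $x^2 + y^2$ are just convenient convex examples, not necessarily the ones in the original figures.

import numpy as np
import matplotlib.pyplot as plt

# A convex function in one dimension: f(x) = x^2
x = np.linspace(-2, 2, 200)
plt.figure()
plt.plot(x, x**2)
plt.title("A convex function in one dimension")

# A convex function in two dimensions: f(x, y) = x^2 + y^2 (the "cup" shape)
X, Y = np.meshgrid(x, x)
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, X**2 + Y**2)
ax.set_title("A convex function in two dimensions")

plt.show()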
The “cup”-like shape intuitively helps you see that if you start at some corner, it is fairly easy to find your way down (if you are interested, reading about the intuition behind the gradient descent algorithm will offer more insight).
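As a small illustration of that “finding your way down” idea, here is a gradient descent sketch on the convex function $f(x) = x^2$; the step size, starting point, and iteration count are arbitrary choices for illustration.

def gradient_descent(grad, x0, step=0.1, iters=100):
    """Repeatedly step opposite to the gradient, i.e. walk downhill."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# f(x) = x^2 has derivative f'(x) = 2x, and its minimum is at x = 0.
minimizer = gradient_descent(grad=lambda x: 2 * x, x0=5.0)
print(minimizer)  # very close to 0 after 100 iterations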
From calculus, one would be familiar with the terms global and local minima. If you are asked to visually identify the global and local minima of the above convex functions, you would say they coincide. Indeed, for a convex function every local minimum is automatically a global minimum. This is a striking feature of convex optimization problems that makes them neater and easier to solve than non-convex ones.
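The argument is short enough to sketch (this is the standard proof): suppose $x^*$ is a local minimum of a convex $f$ but some point $y$ satisfies $f(y) < f(x^*)$. Then for any small $\lambda \in (0, 1]$, convexity gives

$$
f\big((1 - \lambda) x^* + \lambda y\big) \le (1 - \lambda) f(x^*) + \lambda f(y) < f(x^*),
$$

so there are points arbitrarily close to $x^*$ with strictly smaller value, contradicting local minimality.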
Optimization from a mathematical perspective can be approached at various levels. Basic knowledge of calculus and coding can help you visualize optimization. A step further would be to work out the problems yourself, which typically requires a background in numerical analysis.
But for someone deeply interested in the mathematics of optimization, the route is slightly longer and involves a journey into “abstract” mathematics, or to be specific, an introduction to Functional Analysis.
Linear programming is a special case of convex optimization, where the function 'f' is linear and the constraints are linear inequalities; the problem is thus automatically convex, since a linear function is both convex and concave and the feasible set is a convex polyhedron.
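In its most familiar form (using the conventional symbols $c$, $A$, and $b$), a linear program reads

$$
\min_{x \in \mathbb{R}^n} c^\top x \quad \text{subject to} \quad A x \le b,
$$

where both the objective $c^\top x$ and the feasible set $\{ x : A x \le b \}$ are convex.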
Certain fundamental properties of convex optimization problems, along with a few essential results relating to geometric notions in functional analysis (specifically of Hilbert spaces), form the crux of convex optimization.
Further generalizations of this field focus on generalized notions of convexity, which are studied in a broader field of mathematics known as convex analysis.
Standard applications include optimizing power flow across a network grid, the utility maximization problem in economics (and its dual, the expenditure minimization problem), computational biology, geophysics, and much more.
This makes the study of convex optimization problems both mathematically rich and highly applicable to the real world.