convex takes two mandatory arguments: an at least twice differentiable function(al) f:ℝ^n→ℝ and a variable or list of variables. Some variables may depend on a common independent parameter, say t, when entered as e.g. x(t) instead of x. The first derivatives of such variables, when encountered in f, are treated as independent parameters of f.
The command returns a condition or list of conditions under which f is convex. If f is convex on the entire domain, the return value is true. If it is nowhere convex, the return value is false. Otherwise, the conditions are returned as inequalities which depend on the parameters of f. The returned inequalities are not necessarily independent.
An optional third argument, simplify=true or simplify=false, may be given. The default is simplify=true, meaning that simplification is applied when generating the convexity conditions. If simplify=false, only rational normalization is performed (using the ratnormal command).
The command operates by computing the Hessian Hf of f and its principal minors (2^n−1 of them in total, where n is the number of parameters) and checking their signs. If all principal minors are nonnegative, then Hf is positive semidefinite and f is therefore convex.
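The criterion described above can be sketched independently of Xcas. The following Python/sympy snippet (illustrative only, not the actual implementation of convex) enumerates all 2^n−1 principal minors of the Hessian; the function name is hypothetical:

```python
# Sketch of the principal-minor test for positive semidefiniteness,
# using sympy instead of Xcas. Names are illustrative.
from itertools import combinations

import sympy as sp

def hessian_principal_minors(f, variables):
    """Return all 2^n - 1 principal minors of the Hessian of f."""
    H = sp.hessian(f, variables)
    n = len(variables)
    minors = []
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            # principal submatrix: same row and column indices
            minors.append(H.extract(list(rows), list(rows)).det())
    return minors

# f(x,y,z) = x^2 + y^2 + z^2 + x*y is convex: every minor is nonnegative.
x, y, z = sp.symbols('x y z')
minors = hessian_principal_minors(x**2 + y**2 + z**2 + x*y, [x, y, z])
print(minors)  # [2, 2, 2, 3, 4, 4, 6]
```

If any principal minor can be negative, the Hessian fails to be positive semidefinite there, which is how the inequalities returned by convex arise.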
The function f is said to be concave if the function g=−f is convex.
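For instance, ln x is concave on x>0 because −ln x is convex there; a small sympy sketch (not the convex command itself) confirms this via the second derivative:

```python
import sympy as sp

# Illustrative check: f(x) = ln(x) is concave for x > 0 because
# g = -f has a nonnegative second derivative on that domain.
x = sp.symbols('x', positive=True)
g = -sp.log(x)           # g = -f
g2 = sp.diff(g, x, 2)    # g'' = 1/x^2
print(g2, g2.is_nonnegative)
```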
For example, input :
In the example below, the function f(x,y,z)=x^2+x z+a y z+z^2 is not convex for any nonzero value of a∈ℝ :
In the next example we find all values a∈ℝ for which the function
   f(x,y,z) = x^2 + 2 y^2 + a z^2 − 2 x y + 2 x z − 6 y z
is convex on ℝ3. Input :
The returned inequalities are simplified by solve :
Therefore f is convex for a≥5.
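The threshold a≥5 can be double-checked independently with sympy (a sketch, not the Xcas command): since the Hessian of f is a constant matrix, f is convex exactly when that matrix is positive semidefinite.

```python
import sympy as sp

# Independent verification of the condition a >= 5 for the example above.
x, y, z, a = sp.symbols('x y z a')
f = x**2 + 2*y**2 + a*z**2 - 2*x*y + 2*x*z - 6*y*z
H = sp.hessian(f, [x, y, z])

# At the boundary value a = 5 the Hessian is positive semidefinite;
# just below it, it is not (its determinant 8a - 40 becomes negative).
print(H.subs(a, 5).is_positive_semidefinite)   # True
print(H.subs(a, 4).is_positive_semidefinite)   # False
```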
Let’s find the set S⊂ℝ^2 on which the function f:ℝ^2→ℝ defined by
is convex. Input :
From here we conclude that f is convex when x1+x2≥0. The sought set S is therefore the half-space defined by this inequality.
The algorithm respects the assumptions that may be set upon variables. Therefore, the convexity of a given function can be checked only on a particular domain. For example, input :
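The same assumption-driven behaviour can be mimicked in sympy (an illustrative sketch, not the Xcas convex command): x^3 is not convex on all of ℝ, but it becomes convex once x is assumed nonnegative, since then f″(x)=6x≥0.

```python
import sympy as sp

# Sketch of assumption-aware convexity checking: the sign of the second
# derivative of x^3 is undecidable over R, but nonnegative once x >= 0.
x_real = sp.symbols('x', real=True)
x_nonneg = sp.symbols('x', nonnegative=True)

print(sp.diff(x_real**3, x_real, 2).is_nonnegative)      # None (unknown on R)
print(sp.diff(x_nonneg**3, x_nonneg, 2).is_nonnegative)  # True (on x >= 0)
```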
We want to minimize the objective functional
   T(y) = ∫_0^{x1} L(t,y(t),y′(t)) dt
where the Lagrangian L is defined by
for y:[0,x1]→ℝ such that y(0)=y0 and y(x1)=0, where x1>0 and y0>0 are fixed (the constant g is the gravitational acceleration). This is called the brachistochrone problem (the problem of the path of quickest descent under gravity from the point (0,y0) to (x1,0)). By solving the Euler-Lagrange equation one obtains a cycloid y(t) as the only stationary function for L. The problem is to prove that it minimizes T, which would be easy if the integrand L were convex. However, that is not the case here :
This is equivalent to |y′(t)|≤√3, which the cycloid y certainly fails to satisfy near t=0.
Using the substitution y(t)=z(t)^2/2 we obtain y′(t)=z′(t) z(t) and
The function P is convex :
Hence the function z(t)=√(2 y(t)), which is stationary for P (as is verified directly), minimizes the objective functional
   U(z) = ∫_0^{x1} P(t,z(t),z′(t)) dt.
Since U(z)=T(y), it easily follows that y minimizes T and is therefore the brachistochrone. For details see John L. Troutman, Variational Calculus and Optimal Control (second edition), page 257.