Abstract: An important step in the solution of a matrix nearness problem that arises in certain machine learning applications is finding the zero of f(alpha) = z^T exp(log X + alpha zz^T) z - b. The matrix-valued exponential and logarithm in f(alpha) arise from the use of the von Neumann matrix divergence tr(X log X - X log Y - X + Y) to measure the nearness between the positive definite matrices X and Y. A key step of an iterative algorithm used to solve the underlying matrix nearness problem requires the zero of f(alpha) to be computed repeatedly. In this paper we propose zero-finding algorithms that gain their advantage by exploiting the special structure of the objective function. We show how to efficiently compute the derivative of f, thereby allowing the use of Newton-type methods. In numerical experiments we establish the advantage of our algorithms.
- On a Zero-Finding Problem involving the Matrix Exponential (pdf)
M. Sustik, I. Dhillon.
SIAM Journal on Matrix Analysis and Applications (SIMAX) 33(4), pp. 1237-1249, 2012.
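To illustrate the kind of Newton-type iteration the abstract refers to, here is a minimal sketch in Python. It is not the paper's specialized algorithm (whose advantage comes from exploiting the structure of f to compute the derivative cheaply); instead it uses SciPy's generic Frechet derivative of the matrix exponential, since d/dalpha exp(log X + alpha zz^T) is the Frechet derivative of expm at log X + alpha zz^T in the direction zz^T. The function name `solve_alpha` and the starting point alpha0 = 0 are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.linalg import expm, expm_frechet, logm

def solve_alpha(X, z, b, alpha0=0.0, tol=1e-10, max_iter=50):
    """Newton iteration for the zero of
        f(alpha) = z^T exp(log X + alpha zz^T) z - b,
    where X is symmetric positive definite.
    Generic sketch: f'(alpha) is obtained from the Frechet
    derivative of expm in the direction zz^T.
    """
    logX = logm(X)                 # matrix logarithm of X
    zzT = np.outer(z, z)           # rank-one direction z z^T
    alpha = alpha0
    for _ in range(max_iter):
        M = logX + alpha * zzT
        # E = expm(M); L = Frechet derivative of expm at M in direction zz^T
        E, L = expm_frechet(M, zzT)
        f = z @ E @ z - b
        if abs(f) < tol:
            break
        fprime = z @ L @ z         # f'(alpha) = z^T L z
        alpha -= f / fprime        # Newton step
    return alpha
```

Since f is smooth and increasing in alpha, the Newton iteration typically converges in a handful of steps; the cost per step here is dominated by the dense expm_frechet call, which is exactly the expense the paper's structured algorithms aim to avoid.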