Computational Methods for Risk-Averse Undiscounted Transient Markov Models
We consider the total cost problem for discrete-time controlled transient Markov models, where the objective functional is a Markov dynamic risk measure of the total cost. Two solution methods, value iteration and policy iteration, are proposed, and their convergence is analyzed. For the policy evaluation step of the policy iteration method, we propose two algorithms, a nonsmooth Newton method and a convex programming approach, and prove their convergence. The results are illustrated on a credit limit control problem.
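As a minimal illustration of the flavor of risk-averse value iteration for a transient model, the sketch below applies the Bellman-type recursion v(x) = min_a [ c(x,a) + rho(v(X')) ], with the one-step risk measure rho chosen as mean-upper-semideviation. The two-state model, its costs and transition probabilities, and the choice of risk measure are all illustrative assumptions, not data from the talk.

```python
import numpy as np

# Hypothetical transient MDP: controlled states 0 and 1, absorbing state 2
# with zero cost. P[a][x] is the transition distribution from state x
# under action a. All numbers are illustrative assumptions.
costs = np.array([[2.0, 1.0],   # costs of actions 0, 1 in state 0
                  [1.0, 3.0]])  # costs of actions 0, 1 in state 1
P = np.array([
    [[0.2, 0.3, 0.5],           # action 0
     [0.1, 0.2, 0.7]],
    [[0.5, 0.3, 0.2],           # action 1
     [0.3, 0.3, 0.4]],
])

def mean_semideviation(p, z, kappa=0.5):
    """One-step coherent risk measure: E[Z] + kappa * E[(Z - E[Z])_+]."""
    m = p @ z
    return m + kappa * (p @ np.maximum(z - m, 0.0))

def risk_value_iteration(n_iter=500):
    v = np.zeros(3)  # value of the absorbing state 2 is fixed at 0
    for _ in range(n_iter):
        # Risk-averse Q-values: immediate cost plus risk of continuation value.
        q = np.array([[costs[x, a] + mean_semideviation(P[a, x], v)
                       for a in range(2)]
                      for x in range(2)])
        v[:2] = q.min(axis=1)  # greedy (risk-averse) Bellman update
    return v[:2], q.argmin(axis=1)

values, policy = risk_value_iteration()
```

Because every state reaches the absorbing state with positive probability, the iteration converges in this transient example; replacing the expectation by a coherent risk measure simply makes the fixed point larger (more conservative) than the risk-neutral total cost.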
This is a joint work with Dr. Andrzej Ruszczynski from Rutgers Business School.
Özlem Çavuş is currently an Assistant Professor of Industrial Engineering at Bilkent University. She received her B.S. and M.S. degrees in Industrial Engineering from Boğaziçi University in 2004 and 2007, respectively, and her Ph.D. in Operations Research from the Rutgers Center for Operations Research (RUTCOR) at Rutgers University in 2012. Her research interests include stochastic optimization, risk-averse optimization, Markov decision processes, and healthcare applications.