Speaker: İrem Nur Keskin
Time: July 1, 11:00
Meeting ID: 993 766 4491
Title: OUTER APPROXIMATION ALGORITHMS FOR CONVEX VECTOR OPTIMIZATION PROBLEMS
There are different outer approximation algorithms in the literature that are designed to solve convex vector optimization problems, in the sense that they approximate the upper image using polyhedral sets. At each iteration, these algorithms solve vertex enumeration and scalarization problems. The vertex enumeration problem is used to find the vertex representation of the current outer approximation. The scalarization problem is used to generate a weakly C-minimal element of the upper image as well as a halfspace that supports the upper image at that point. In this study, we present a general framework for such algorithms in which the Pascoletti-Serafini scalarization is used. This scalarization finds the minimum 'distance' from a reference point, which is usually taken as a vertex of the current outer approximation, to the upper image along a given direction. The reference point and the direction vector are the parameters of this scalarization.
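For readers unfamiliar with it, the Pascoletti-Serafini scalarization can be sketched in its usual textbook form (the exact variant used in the study may differ). For a convex vector optimization problem with feasible set X, objective map f, and ordering cone C, with reference point v and direction d, it reads:

```latex
\begin{equation*}
  \min_{t \in \mathbb{R},\; x \in X} \; t
  \quad \text{subject to} \quad
  v + t\,d - f(x) \in C .
\end{equation*}
```

An optimal solution (t*, x*) yields the weakly C-minimal point v + t* d on the boundary of the upper image, and an optimal dual solution provides the supporting halfspace used to cut the current outer approximation.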
The motivation of this study is to develop efficient methods for selecting the parameters of the Pascoletti-Serafini scalarization and to analyze the effects of these parameter selections on the performance of the algorithm. We first propose three rules for choosing the direction parameter at each iteration. We conduct a preliminary computational study to observe the effects of these rules under various, rather simple rules for vertex selection. Based on the results of this preliminary analysis, we fix a direction selection rule to continue with. Moreover, we observe that vertex selection also has a significant impact on performance, as expected. We then propose additional vertex selection rules, slightly more complicated than the previous ones, designed to generate well-distributed points on the boundary of the upper image. Unlike the existing vertex selection rules from the literature, they do not require solving additional single-objective optimization problems.
Using a set of test problems, we conduct a computational study in which three different measures serve as stopping criteria: the approximation error, the runtime, and the cardinality of the solution set. We compare the proposed variants and some algorithms from the literature in terms of these three measures as well as an additional proximity measure, the hypervolume gap. We observe that the proposed variants perform well, especially in terms of runtime. When the approximation error is chosen as the stopping criterion, the proposed variants require less CPU time than the algorithms from the literature. Under a fixed runtime, they generally return better proximity measures. Under fixed cardinality, the algorithms from the literature yield better proximity measures, but they require significantly more CPU time than the proposed variants.