NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 6625
Title: Multi-objective Bayesian optimisation with preferences over objectives

Reviewer 1

Originality: The work is original in its setup of preference order in multi-objective Bayesian optimisation. It extends the hypervolume-based acquisition function for BO with an algorithm that tests whether a sample satisfies the preference order.

Quality and clarity: The work is complete in its motivation, formulation, approach and experimentation. It is clearly presented.

Significance: To my understanding, the use of Algorithm (1) to check whether $v \in \mathbb{S}_{\mathcal{J}}$ is the only step where the preference order plays a role. However, I would be interested to know how this compares to a trivial approach in which the preference-order sorting is carried out as a post-processing step to filter a Pareto front obtained without consideration of any preference order.
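For concreteness, here is a minimal sketch (Python, minimisation assumed, with hypothetical helper names) of the kind of post-processing baseline I have in mind: the Pareto front is computed without any preference information, and a crude stability-style preference for one objective is applied only afterwards. This is only an illustration of the baseline, not the authors' method.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of a point set (minimisation)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(p)
    return np.array(keep)

def filter_by_preference(front, stable_idx=0, other_idx=1):
    """Keep front points where the 'stable' objective changes less than the
    other objective between neighbouring points (a crude post-hoc check)."""
    front = front[np.argsort(front[:, other_idx])]   # order along the front
    deltas = np.abs(np.diff(front, axis=0))
    mask = np.r_[True, deltas[:, stable_idx] <= deltas[:, other_idx]]
    return front[mask]

# Usage: optimise without preferences, then filter as a post-processing step.
samples = np.random.rand(200, 2)           # stand-in for BO evaluations
front = pareto_front(samples)
preferred = filter_by_preference(front)    # stability of objective 0 preferred
```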

Reviewer 2

There are some related works on preference-based or interactive multi-objective optimization, so the novelty of the work is not very high. From my point of view, the main algorithm in Section 4 is not clearly presented. To assess preference-based approaches, the evaluation measures are quite important; however, this choice is very subjective, and the paper does not discuss the issue. It is therefore hard to judge whether the found solutions are good or not.

Reviewer 3

Summary: This paper proposes a method for multi-objective Bayesian optimization when a user has given "preference order constraints", i.e. preferences about the relative importance of different objectives. For example, a user might specify that he or she wants to determine where, along the Pareto front, a given objective varies significantly with respect to the other objectives (which the authors term "diversity") or where it is static with respect to the other objectives (which they term "stability"). The authors give algorithms for this setting and show empirical results on synthetic functions and on a model search task.

Comments:

> My main criticism of this paper is that I am not convinced about the motivation for, and use cases of, the described task of finding regions of the Pareto front where an objective is "diverse" or "stable" as defined in the paper. Two potential examples are given in the introduction, but they are brief and unconvincing (see another comment on these below). A real experiment is shown on a neural network model search task, but it is unclear how the method, when applied there, provides real benefits over other multi-objective optimization methods. More written discussion of the benefits and applications of this method (for example, in the model search task) could help alleviate this issue.

> The three examples given in the introduction are:
- A case where both objectives have constraints (precision >= 0.8, recall >= 0.7).
- A case where we want diverse objective values along the Pareto front.
- A case where we want regions of the Pareto front where a large change in one objective is required to obtain a small improvement in the other objective.
Intuitively, these all constrain the Pareto front or prioritize some regions of the Pareto front over others. The abstract describes them as "constraints on the objectives of the type 'objective A is more important than objective B'". I feel the introduction does not clearly explain how the description in the abstract aligns with these three examples. Is the argument that diversity/stability is a property that directly corresponds to the importance of an objective? It would be great if you could clarify this definition.

> The dominated hypervolume is defined in Section 2.2. It would be valuable to give some intuition about this quantity, in addition to the definition, to clarify how it will be used (a sketch of the kind of intuition I mean is given after this review).

---------- Update after author response ----------

I want to thank the authors for their response. The description of a couple of real-world examples is nice, but it does not shed much more light on the motivation for this method than the original submission did. While I appreciate the response, I will not change my score.
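As an illustration of the intuition requested above (not the authors' Section 2.2 definition, just the standard two-dimensional construction under a minimisation convention and a hypothetical reference point): the dominated hypervolume is the area enclosed between the Pareto front and a reference point that is worse in every objective, so it grows as the front moves closer to the ideal point and as it spreads out.

```python
import numpy as np

def dominated_hypervolume_2d(front, reference):
    """Area dominated by a 2D, mutually non-dominated front (minimisation)
    with respect to a reference point worse in both objectives."""
    # Sort by the first objective; the second then decreases along the front.
    front = front[np.argsort(front[:, 0])]
    hv, prev_f2 = 0.0, reference[1]
    for f1, f2 in front:
        hv += (reference[0] - f1) * (prev_f2 - f2)   # add one new rectangle
        prev_f2 = f2
    return hv

# Toy example: a three-point front and a reference point dominated by all of it.
front = np.array([[0.1, 0.8], [0.4, 0.4], [0.8, 0.1]])
print(dominated_hypervolume_2d(front, reference=np.array([1.0, 1.0])))  # 0.48
```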