ACE: Adaptive Constraint-aware Early Stopping in Hyperparameter Optimization

  • Yi-Wei Chen ,
  • Amin Saied ,
  • Rui Zhuang

MSR-TR-2022-29 |

Published by Microsoft

Deploying machine learning models requires high model quality and compliance with application constraints.
This motivates hyperparameter optimization (HPO) to tune model configurations under deployment constraints.
The constraints often require additional computation to evaluate, and training ineligible configurations can waste a large amount of tuning cost. In this work, we propose an Adaptive Constraint-aware Early stopping (ACE) method that incorporates constraint evaluation into early stopping. To minimize the overall optimization cost, ACE estimates a cost-effective constraint-evaluation interval for each trial. In addition, we propose a stratum early stopping criterion in ACE, which considers both the optimization and constraint metrics when pruning and does not require regularization hyperparameters. Our experiments demonstrate that ACE achieves superior performance in tuning tree-based models under a fairness constraint.
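The two ideas in the abstract can be illustrated with a minimal sketch. All function names, the interval-scaling rule, and the median-based stratum comparison below are illustrative assumptions, not ACE's exact formulas from the report.

```python
# Hypothetical sketch of constraint-aware early stopping.
# The interval rule and median-based pruning are assumptions for
# illustration; they are not the report's exact algorithm.
from statistics import median

def constraint_due(step, interval):
    # Evaluate the (expensive) constraint only every `interval` steps,
    # instead of at every checkpoint.
    return step % interval == 0

def adapt_interval(interval, eval_cost, train_cost):
    # Illustrative adaptation rule: widen the interval when constraint
    # evaluation is costly relative to training, so its amortized cost
    # per trial stays bounded.
    return max(1, round(interval * eval_cost / max(train_cost, 1e-9)))

def stratum_prune(trial_score, trial_feasible, history):
    # Stratum criterion (sketch): compare a trial only against trials in
    # the same feasibility stratum (constraint satisfied vs. violated),
    # and prune it if its optimization metric falls below the stratum
    # median. No regularization hyperparameter is needed.
    stratum = [score for score, feasible in history
               if feasible == trial_feasible]
    if not stratum:
        return False
    return trial_score < median(stratum)
```

For example, with `history = [(0.9, True), (0.8, True), (0.5, False)]`, a feasible trial scoring 0.7 is pruned (below the feasible-stratum median), while an infeasible trial is compared only against other infeasible trials.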