Instance Space Analysis for Rigorous and Insightful Algorithm Testing

1. Aims

Standard practice in algorithm testing consists of reporting average performance across a suite of well-studied benchmark instances [1, 2]. As such, the conclusions drawn during this process depend critically on the choice of benchmarks [3]. Ideally, such suites are unbiased, challenging, and contain a mix of synthetically generated and real-world-like instances with diverse structural properties. Without this diversity, the conclusions that can be drawn about expected algorithm performance in future scenarios are necessarily limited. Moreover, reporting average performance often highlights the strengths of an algorithm while disguising its weaknesses. In summary, two limitations of the standard benchmarking approach restrict the conclusions that can be drawn: (a) there is no mechanism to assess whether the selected test instances are unbiased and diverse enough to support the conclusions drawn; and (b) there is little opportunity to gain insight into the strengths and weaknesses of algorithms on different types of instances while these are hidden behind on-average performance metrics.
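
To make limitation (b) concrete, here is a minimal numeric illustration with invented performance values: two algorithms can share an identical average score while winning and losing on entirely different kinds of instances.

```python
import numpy as np

# Hypothetical costs (lower is better) for two algorithms on six instances:
# three "clustered" instances followed by three "uniform" ones. The numbers
# are invented purely for illustration.
perf_a = np.array([1.0, 1.1, 0.9, 2.0, 2.1, 1.9])
perf_b = np.array([2.0, 1.9, 2.1, 1.0, 0.9, 1.1])

print(perf_a.mean(), perf_b.mean())          # identical averages: 1.5 and 1.5

# Splitting by instance type reveals complementary strengths.
print(perf_a[:3].mean(), perf_a[3:].mean())  # A: 1.0 on clustered, 2.0 on uniform
print(perf_b[:3].mean(), perf_b[3:].mean())  # B: 2.0 on clustered, 1.0 on uniform
```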

This tutorial introduces Instance Space Analysis (ISA), a recent methodology that aims to improve the way algorithms are evaluated by revealing relationships between the structural properties of problem instances and their impact on algorithm performance [4, 5, 6, 7, 8, 9, 10, 11]. ISA offers a more nuanced opportunity to gain insights into algorithm strengths and weaknesses across various types of test instances, and to objectively assess the relative power of algorithms, free from any bias introduced by the choice of test instances. An instance space is constructed in which test instances are visualized as points in a 2D plane, with algorithm footprints identified as the regions where an algorithm is predicted to perform well, based on statistical evidence from empirical testing. From this view of a broad instance space, including the theoretical boundary where additional test instances could exist, we can assess the diversity of a chosen test set and gain much-needed insight into how the structural properties of instances influence the strengths and weaknesses of algorithms. Moreover, through ISA we can identify where additional test instances would be valuable to support greater insights. By setting target points in the instance space, new test instances with controllable properties can be generated to fill the instance space, enabling algorithms to be comprehensively "stress-tested" under all possible conditions.
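
To make the construction concrete, the following toy sketch mimics the ISA pipeline in Python. It is not MATILDA's MATLAB engine: plain PCA stands in for the tailored projection that ISA learns, the feature matrix and performance vector are randomly generated, and the "footprint" is reduced to a simple goodness threshold rather than the statistically validated regions that ISA computes.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical meta-data: 200 instances, 8 structural features each, and a
# per-instance performance measure for one algorithm (lower is better).
features = rng.normal(size=(200, 8))
performance = features @ rng.normal(size=8) + rng.normal(scale=0.5, size=200)

# Project every instance to a point (z1, z2) in the 2D plane. ISA learns a
# projection chosen so that features and performance vary smoothly across
# the plane; standardized PCA is used here only as a stand-in.
standardized = (features - features.mean(axis=0)) / features.std(axis=0)
z = PCA(n_components=2).fit_transform(standardized)

# A crude stand-in for a footprint: the set of instances whose observed
# performance meets a user-chosen goodness threshold (here, the best 30%).
good = performance <= np.quantile(performance, 0.3)
print(f"{good.sum()} of {len(z)} instances fall in the 'good' set")
print("bounding box of the 'good' points:", z[good].min(axis=0), z[good].max(axis=0))
```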

The tutorial makes use of the online tools available at the MATILDA website, and provides access to its MATLAB computational engine. MATILDA also offers, for download, a collection of ISA results and other meta-data from previously published studies of several well-studied problems in optimization and machine learning.

2. Format

The tutorial will be divided into three sections of half an hour each. In the first section, we will introduce the challenges in the experimental testing of algorithms, followed by a description of the Instance Space Analysis methodology. In the second section, we will give a live demonstration of the tools available at MATILDA, illustrated with results from the case studies already available in the library. Finally, we will carry out an interactive exercise in which participants will use the tools on a prepared dataset and download their results. Questions will be taken throughout the tutorial. By the end of the session, participants will be in a position to carry out their own instance space analysis to study the strengths and weaknesses of algorithms in their own comparison studies. All accompanying material, including links to the most relevant bibliography and a copy of the slides, will be made available at MATILDA's website.

3. Outline

The tutorial is expected to run for 1.5 hours. It is aimed at students, researchers, and practitioners in the areas of Machine Learning and Optimization who wish to improve their ability to analyze algorithm evaluation studies, drawing insightful conclusions about the strengths and weaknesses of algorithms and the adequacy of benchmark test suites. The tutorial builds on current topics such as automated algorithm selection, landscape-aware methods, and meta-learning.

The tutorial will cover the following material:

  1. Introduction: Experimental testing of algorithms - Best practices - Limitations of current approaches.
  2. Framework: The algorithm selection problem - Collection of meta-data - Algorithm Portfolios - Benchmark suites - Feature calculation - Algorithm performance metrics (a sketch of such a meta-data table follows this list).
  3. Instance Space Analysis: Building the space - Footprint analysis - Algorithm selection models.
  4. Instance generation: Identification of target areas - Setting up an instance generator - Analysis of performance.
  5. Demonstration: Library of problems - Performing a custom analysis.
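
As referenced in item 2 above, the sketch below assembles the kind of meta-data table that the framework stage revolves around: one row per instance, structural features alongside one performance column per portfolio algorithm. Column names and values are illustrative assumptions, not MATILDA's required schema.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5

# One row per instance: structural features plus the performance of each
# algorithm in the portfolio. All names and numbers are hypothetical.
meta = pd.DataFrame({
    "instance":     [f"inst_{i}" for i in range(n)],
    "feat_size":    rng.integers(50, 500, size=n),
    "feat_density": rng.uniform(0.1, 0.9, size=n).round(2),
    "algo_A":       rng.uniform(0.0, 0.2, size=n).round(3),  # e.g. optimality gap
    "algo_B":       rng.uniform(0.0, 0.2, size=n).round(3),
})

# Per-instance "oracle" label: the portfolio member with the lowest (best)
# metric, which is the prediction target of an algorithm selection model.
meta["best"] = meta[["algo_A", "algo_B"]].idxmin(axis=1)
print(meta)
```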

4. Presenters

Kate Smith-Miles is a Professor of Applied Mathematics in the School of Mathematics and Statistics at The University of Melbourne, and holds a five-year Laureate Fellowship from the Australian Research Council. Prior to joining The University of Melbourne in September 2017, she was Professor of Applied Mathematics at Monash University, and Head of the School of Mathematical Sciences (2009-2014). Previous roles include President of the Australian Mathematical Society (2016-2018) and membership of the Australian Research Council College of Experts (2017-2019). Kate was elected Fellow of the Institute of Engineers Australia (FIEAust) in 2006, and Fellow of the Australian Mathematical Society (FAustMS) in 2008.

Kate obtained a B.Sc. (Hons) in Mathematics and a Ph.D. in Electrical Engineering, both from The University of Melbourne. Commencing her academic career in 1996, she has published 2 books on neural networks and data mining, and over 260 refereed journal and international conference papers in the areas of neural networks, optimization, data mining, and various applied mathematics topics. She has supervised 28 PhD students to completion, and has been awarded over AUD$15 million in competitive grants, including 13 Australian Research Council grants and industry awards. MATILDA and the instance space analysis methodology have been developed through her five-year Laureate Fellowship, awarded by the Australian Research Council.

Mario Andrés Muñoz Acosta is a Research Fellow in Operations Research in the School of Mathematics and Statistics, and a Lecturer in the School of Computing and Information Systems, both at The University of Melbourne. Before joining The University of Melbourne, he was a Research Fellow in Applied Mathematics at Monash University (2014-2014), and a Lecturer at Universidad del Valle, Colombia (2008-2009).

Mario Andrés obtained a B.Eng. (2005) and an M.Eng. (2008) in Electronics from Universidad del Valle, Colombia, and a Ph.D. (2014) in Engineering from The University of Melbourne. He has published over 40 refereed journal and conference papers in optimization, data mining, and other interdisciplinary topics. He currently supervises 2 Ph.D. students, and has been awarded over AUD$1 million in research funding. As a senior member of the Laureate project team, he developed and maintains the MATLAB code that drives MATILDA.