ROADEF'2005: Revival

The 2005 results and ladder are available here.
You can download the instance files here.
Please submit your best solutions for each instance here.

2005 ladder with the 2015 evaluation function

Revival ladder: Best-ever solutions

Explanations & information about the revival

Ten years have passed since the Renault challenge ended, and we wish to measure the quality gap between solutions computed on much more recent computers and the historical ones.
The original challenge was run under a strongly constraining procedure. In order to open the competition a little more, we decided to rework it as follows:

  • The time limit is gone. Candidates are now ranked on a best-ever basis.
  • A new evaluation function has been designed. It uses 2005 results as reference scores.
  • We are aware that the problem as stated by Renault in 2005 is not exactly what one would call an "academic" car sequencing problem.
  • That is why candidates can now choose to ignore the paint batch constraint: all they have to do is check a box when submitting.

New evaluation procedure

The former procedure involved a tournament as an attempt to be fair; we decided not to keep it.
The point of this revival is to find out whether theoretical and technical advances can actually improve on results that candidates spent a year refining.
We have thus designed a way to rank new results relative to the old ones. The scoring function used to weight solutions remains identical, but a new normalization is applied:
Given a candidate and an instance I, let s be the candidate's score on I, and let E(I) (resp. B(I)) be the average (resp. best) score obtained on I in 2005.
Let m = (E(I) - s) / (E(I) - B(I)). Then 2^m - 1 is the candidate's normalized score.
The overall score for a candidate is then the average of his normalized scores.
If a candidate submits no solution for an instance, or submits an invalid solution, their overall score is computed as if they had obtained -1 (asymptotically the worst score) on that instance.
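To make the normalization concrete, here is a minimal Python sketch following the definitions above; the function names and the idea of passing the 2005 reference values (E(I), B(I)) as a dictionary are our own illustration, not the official evaluation code. A score equal to the 2005 best maps to 1, a score equal to the 2005 average maps to 0, and a missing or invalid submission counts as -1.

    from typing import Dict, Optional, Tuple

    def normalized_score(s: float, avg_2005: float, best_2005: float) -> float:
        """Normalize a raw score s on one instance against the 2005
        references E(I) = avg_2005 and B(I) = best_2005 (assumes E(I) != B(I))."""
        m = (avg_2005 - s) / (avg_2005 - best_2005)
        return 2 ** m - 1  # 1 at the 2005 best, 0 at the 2005 average, tends to -1 as s worsens

    def overall_score(candidate: Dict[str, Optional[float]],
                      reference: Dict[str, Tuple[float, float]]) -> float:
        """Average the normalized scores over all instances; a missing or
        invalid submission counts as -1, the asymptotic worst."""
        total = 0.0
        for instance, (avg_2005, best_2005) in reference.items():
            s = candidate.get(instance)
            total += -1.0 if s is None else normalized_score(s, avg_2005, best_2005)
        return total / len(reference)

For example, with E(I) = 1000 and B(I) = 800, a candidate scoring 800 gets a normalized score of 1, a candidate scoring 1000 gets 0, and a very poor or missing submission is worth (close to) -1.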

Useful links