The MSD challenge tests the generalisability of machine learning algorithms when applied to 10 different semantic segmentation tasks. The aim is to develop an algorithm or learning system that can solve each task, separately, without human interaction. This can be achieved through the use of a single learner, an ensemble of multiple learners, architecture search, curriculum learning, or any other technique, as long as task-specific model parameters are not human-defined.
The challenge will consist of 2 phases.
Data for 7 tasks was released on the 11th of May and can be downloaded here. Participants are expected to download the data, develop a general-purpose learning algorithm, train the algorithm on each task's training data independently without human interaction (no task-specific manual parameter settings), run the learned model on the test data, and submit the segmentation results by the 5th of August. This phase will test how well the developed learner can solve multiple independent tasks.
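The phase 1 workflow above can be sketched as a simple per-task loop. This is a hypothetical illustration, not the MSD tooling: the `train` and `predict` functions and the task dictionary are placeholders; the only point is that the identical learning procedure is applied to every task.

```python
# Hypothetical sketch of the phase 1 workflow. Task names and the
# train/predict callables are illustrative placeholders, not the MSD API.

def run_challenge(tasks, train, predict):
    """Train one model per task with the same procedure and collect
    test-set predictions for submission.

    tasks: mapping of task name -> (training_data, test_data)
    train: callable applied identically to every task's training data
    predict: callable applied to the learned model and each test case
    """
    submissions = {}
    for name, (train_data, test_data) in tasks.items():
        model = train(train_data)  # no task-specific manual settings
        submissions[name] = [predict(model, x) for x in test_data]
    return submissions
```

Any task-specific adaptation (e.g. hyper-parameter selection) would have to happen inside `train` itself, automatically.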
For a phase 1 submission to be considered valid the following will need to be provided:
- A valid submission on the grand challenge website.
- A paper describing the method. Papers should be at least 2 pages long (maximum of 8 pages) and provide sufficient detail explaining why your algorithm is optimal for a multi-task setting. The paper should list all hyper-parameters used in your model; the same hyper-parameters must be used for phase 2. The paper should follow the LNCS format.
- A statement of honour. This should be a letter, signed by a team member, stating that your team has not registered multiple times to avoid the submission limit, and that your algorithm does not use any manually-defined, task-specific parameter choices, as per the challenge guidelines.
- Both the paper and the statement of honour should be submitted by the 5th of August through this link.
Teams that have submitted to Phase 1 and have a valid submission (see above) will be given access to 3 more tasks on the 6th of August. They should train their previously developed algorithm, without any software modifications, on the 3 new tasks, and submit results for these 3 tasks by the 31st of August. This phase will test how well the previously developed learner can generalise to unseen tasks.
- Firstly, each team is only allowed 1 submission per day. Teams cannot register multiple times to circumvent this limitation. If you have created multiple teams, please notify us by emailing email@example.com by the 1st of August so that we can delete your previous submissions. Failure to disclose multiple registrations by the 1st of August will not be tolerated, and your participation in the competition will be terminated.
- Secondly, submissions take almost 1 hour to run through the validation script, and only one validation machine is available. The system works in a first-in, first-out manner, so it may take some time for your results to show up on the website. We have more than 140 registered teams, so it can take days for results to appear if every team submits at the same time.
- Thirdly, your last submission will be considered your final submission. No resubmissions after the 5th of August will be allowed under any circumstances, even if your submission is problematic. Please notify us at firstname.lastname@example.org if you are having any trouble with submissions.
- Fourthly, we would like to note that, as described in the challenge proposition, teams cannot manually tweak parameters of algorithms/models on a task-specific basis. Any parameter tuning has to happen automatically and algorithmically. For example, the learning rate or the depth of a network cannot be manually changed between tasks, but it can be found automatically through cross-validation. Any team found to use human-defined, task-specific parameters will have its participation terminated. If you are in doubt whether your algorithm qualifies as “algorithmically optimised”, please email us at email@example.com to confirm.
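To make the cross-validation example above concrete, here is a minimal sketch of algorithmic hyper-parameter selection. It assumes nothing about the actual models used in the challenge: the `fit` and `score` callables and the candidate grid are hypothetical placeholders, and the point is only that the chosen value comes out of the data, not from a human.

```python
# Hypothetical sketch: automatic, task-agnostic hyper-parameter selection
# via k-fold cross-validation. The model interface (fit/score) and the
# candidate grid are illustrative assumptions, not part of the challenge.

def k_fold_splits(n, k):
    """Yield (train_indices, val_indices) for k roughly equal folds."""
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        val = idx[i * fold:(i + 1) * fold]
        train = idx[:i * fold] + idx[(i + 1) * fold:]
        yield train, val

def select_hyperparameter(data, candidates, fit, score, k=5):
    """Return the candidate with the best mean validation score
    (higher is better), selected purely from the data."""
    best, best_score = None, float("-inf")
    for c in candidates:
        fold_scores = []
        for train, val in k_fold_splits(len(data), k):
            model = fit([data[i] for i in train], c)
            fold_scores.append(score(model, [data[i] for i in val]))
        mean = sum(fold_scores) / len(fold_scores)
        if mean > best_score:
            best, best_score = c, mean
    return best
```

Running this routine inside the training pipeline lets each task end up with different hyper-parameter values without any human-defined, task-specific choice, which is exactly what the rule above permits.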