Important Dates
| Event | Date |
| --- | --- |
| Release of data, code, and metrics for training | |
| Release of examples for submission files | |
| Release of data and metrics for testing | |
| Challenge workshop website goes live | |
| Submission deadline for prediction results files (Final Round 3) | Mar. 7, 2022, 23:59 PST |
| Manuscript submission deadline | |
| Notification of ISBI sub-proceedings acceptance | |
| KNIGHT Workshop | |
| Camera-ready submission to ISBI sub-proceedings | Apr. 15, 2022 |
| Publication of challenge outcomes | Oct. 1, 2022 |
Evaluation
Participants will be ranked by performance on Task 1, measured by the area under the receiver operating characteristic curve (AUC) [14]. In the event of a tie, the average of the AUCs of the five groups, measured through one-versus-all classification (Task 2), will be used to rank the tied participants. Participants are asked to submit a short paper (no more than four pages), titled manuscript.pdf, describing their methods and findings. After the submission deadline, the ranking and performance on both criteria on the test data will be published. Participants must also submit one CSV file per task, titled <task number>.csv, containing a row of class scores for each patient in the test set. The rows must adhere to the following scheme:
Task 1 predictions file:
[case_id,NoAT-score,CanAT-score]
Task 2 predictions file:
[case_id,B-score,LR-score,IR-score,HR-score,VHR-score]
Where “case_id" represents the sample (e.g. case_00000) and all scores represent the probability of a patient to belong to a class. The evaluation script, implemented using FuseMedML [15], can be used during training and as a sanity check.
The script can be found here: https://github.com/IBM/fuse-med-ml/tree/knight_eval/fuse_examples/classification/knight.
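For an informal local cross-check of the two ranking criteria, the metrics can be approximated with scikit-learn. This is only a sketch with made-up labels and scores; it assumes the macro one-versus-rest average corresponds to the challenge's "average AUC of the five groups", and the official FuseMedML script remains the authoritative evaluator.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Task 1: binary AUC. y_true holds 0/1 labels (assuming CanAT = 1);
# y_score is the predicted CanAT probability per case.
y_true = np.array([0, 1, 1, 0, 1])
y_score = np.array([0.18, 0.65, 0.90, 0.40, 0.75])
task1_auc = roc_auc_score(y_true, y_score)

# Task 2: average of the five one-versus-all AUCs. y_true_5 holds integer
# class indices (assumed order: B, LR, IR, HR, VHR); y_prob has one
# probability column per class, with each row summing to 1.
y_true_5 = np.array([0, 2, 4, 1, 3])
y_prob = np.array([
    [0.60, 0.20, 0.10, 0.07, 0.03],
    [0.10, 0.15, 0.55, 0.15, 0.05],
    [0.05, 0.05, 0.10, 0.20, 0.60],
    [0.20, 0.50, 0.15, 0.10, 0.05],
    [0.10, 0.10, 0.20, 0.45, 0.15],
])
task2_avg_auc = roc_auc_score(y_true_5, y_prob,
                              multi_class="ovr", average="macro")

print(f"Task 1 AUC: {task1_auc:.3f}, "
      f"Task 2 mean one-vs-all AUC: {task2_avg_auc:.3f}")
```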