Important Dates
| Event | Date |
| --- | --- |
| Release of data, code, and metrics for training | Nov. 15, 2021 |
| Release of examples for submission files | Jan. 21, 2022 |
| Release of data and metrics for testing | Jan. 31, 2022 |
| Challenge workshop website goes live | Feb. 01, 2022 |
| Submission deadline for results and papers | Feb. 28, 2022 |
| Response from reviewers | Mar. 21, 2022 |
| Registration deadline and BRIGHT Workshop | Mar. 27, 2022 |
| Camera-ready version of the paper | Apr. 03, 2022 |
| Post-workshop leaderboard release | Apr. 04, 2022 |
| Submission deadline for manuscripts | Apr. 21, 2022 |
| Publication of challenge outcomes | Oct. 01, 2022 |
Like the BRIGHT Challenge?
Try the KNIGHT challenge for kidney images.
Dear participants, we have exciting news about the BRIGHT Challenge.
- More than 100 participants have registered, so the competition promises to be interesting for everyone.
- There will be proceedings of the challenge, containing 4-page papers from the best submissions (up to 10), published SIMULTANEOUSLY with the ISBI proceedings in a separate volume on IEEE Xplore.
- Every participant (or team) must submit a 4-page paper in ISBI format that describes their submission, including:
- Related work.
- A methods section listing the methods that were evaluated/tried by the participants, along with a detailed description of the method that was finally submitted to the challenge.
- A results section with a detailed description of the experimental evaluation on the validation set, including ablation studies where applicable.
- A discussion of those results.
- The papers from the best submissions will be reviewed to make sure that they meet the quality standards of ISBI. After the review, authors will have time to address the reviewers' comments. The review will focus on the clarity, depth, and appropriateness of the description of the submissions; novelty of the methods is not a criterion. If the camera-ready version of a paper does not meet the quality standard after the review, it will not be published.
- All papers that are not published in the proceedings (whether for quality reasons or because they are not among the top-performing submissions) will be linked on the leaderboard.
What's in it for you?
- You get to play a part in advancing scientific efforts to help cancer treatment.
- You get to use an incredible collection of data for your algorithms.
- The winning methods will be featured in an upcoming scientific paper.
- And there's a chance of winning a big prize.
References
- Allison, K.H., Reisch, L.M., Carney, P.A., Weaver, D.L., Schnitt, S.J., O'Malley, F.P., Geller, B.M., Elmore, J.G.: Understanding diagnostic variability in breast pathology: lessons learned from an expert consensus review panel. Histopathology 65(2), 240–251 (2014)
- Aresta, G., Araújo, T., Kwok, S., Chennamsetty, S.S., Safwan, M., Alex, V., Marami, B., Prastawa, M., Chan, M., Donovan, M., et al.: BACH: Grand challenge on breast cancer histology images. Medical image analysis 56, 122–139 (2019)
- Maier-Hein, L., et al.: BIAS: Transparent reporting of biomedical image analysis challenges (2020)
- Brancati, N., De Pietro, G., Riccio, D., Frucci, M.: Gigapixel histopathological image analysis using attention-based neural networks. IEEE Access 9, 87552–87562 (2021).
- Bulten, W., Pinckaers, H., van Boven, H., Vink, R., de Bel, T., van Ginneken, B., van der Laak, J., Hulsbergen-van de Kaa, C., Litjens, G.: Automated deep-learning system for Gleason grading of prostate cancer using biopsies: a diagnostic study. The Lancet Oncology 21(2), 233–241 (2020)
- Elmore, J.G., Longton, G.M., Carney, P.A., Geller, B.M., Onega, T., Tosteson, A.N., Nelson, H.D., Pepe, M.S., Allison, K.H., Schnitt, S.J., et al.: Diagnostic concordance among pathologists interpreting breast biopsy specimens. JAMA 313(11), 1122–1132 (2015)
- Gomes, D.S., Porto, S.S., Balabram, D., Gobbi, H.: Inter-observer variability between general pathologists and a specialist in breast pathology in the diagnosis of lobular neoplasia, columnar cell lesions, atypical ductal hyperplasia and ductal carcinoma in situ of the breast. Diagnostic pathology 9(1), 1–9 (2014)
- Ingegnoli, A., d’Aloia, C., Frattaruolo, A., Pallavera, L., Martella, E., Crisi, G., Zompatori, M.: Flat epithelial atypia and atypical ductal hyperplasia: carcinoma underestimation rate. The breast journal 16(1), 55–59 (2010)
- Macenko, M., Niethammer, M., Marron, J.S., Borland, D., Woosley, J.T., Guan, X., Schmitt, C., Thomas, N.E.: A method for normalizing histology slides for quantitative analysis. In: 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. pp. 1107–1110. IEEE (2009)
- Mobadersany, P., Yousefi, S., Amgad, M., Gutman, D.A., Barnholtz-Sloan, J.S., Vega, J.E.V., Brat, D.J., Cooper, L.A.: Predicting cancer outcomes from histology and genomics using convolutional networks. Proceedings of the National Academy of Sciences 115(13), E2970–E2979 (2018)
- Myers, D.J., Walls, A.L.: Atypical breast hyperplasia. StatPearls [Internet] (2020)
- Noorbakhsh, J., Farahmand, S., Namburi, S., Caruana, D., Rimm, D., Soltanieh-ha, M., Zarringhalam, K., Chuang, J.H., et al.: Deep learning-based cross-classifications reveal conserved spatial behaviors within tumor histological images. Nature communications 11(1), 1–14 (2020)
- Pati, P., Jaume, G., Fernandes, L.A., Foncubierta-Rodríguez, A., Feroce, F., Anniciello, A.M., Scognamiglio, G., Brancati, N., Riccio, D., Di Bonito, M., et al.: HACT-Net: A hierarchical cell-to-tissue graph neural network for histopathological image classification. In: Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis, pp. 208–219. Springer (2020)
- Siegel, R.L., Miller, K.D., Jemal, A.: Cancer statistics, 2016. CA: a cancer journal for clinicians 66(1), 7–30 (2016)
- Sirinukunwattana, K., Raza, S.E.A., Tsang, Y.W., Snead, D.R., Cree, I.A., Rajpoot, N.M.: Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE transactions on medical imaging 35(5), 1196–1206 (2016)
To avoid overfitting to the test set, every participant is allowed a maximum of THREE submissions to the challenge before the deadline. Feedback will be given if the results are incorrectly formatted or similar, but no information about accuracy will be provided. The leaderboard will be published shortly after this deadline (allowing time to run the evaluation).
What is BRIGHT?
The aim of the BRIGHT challenge is to provide an opportunity for the development, testing, and evaluation of Artificial Intelligence (AI) models for automatic breast tumor subtyping of frequent lesions along with rare pathologies, using clinical Hematoxylin & Eosin (H&E)-stained gigapixel Whole-Slide Images (WSIs). To this end, a large annotated cohort of WSIs will be available, covering Noncancerous (Pathological Benign, Usual Ductal Hyperplasia), Precancerous (Flat Epithelial Atypia, Atypical Ductal Hyperplasia), and Cancerous (Ductal Carcinoma in Situ, Invasive Carcinoma) categories. BRIGHT is the first breast tumor subtyping challenge that includes atypical lesions, and it comprises more than 550 annotated WSIs across a wide spectrum of tumor subtypes. The challenge includes two tasks: (1) WSI classification into three classes as per cancer risk, and (2) WSI classification into six fine-grained lesion subtypes.
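The relation between the two tasks follows directly from the grouping above: each of the six fine-grained subtypes belongs to exactly one of the three cancer-risk groups. A minimal sketch in Python, assuming the subtype names as listed above (the label strings are illustrative only, not the official challenge encodings):

```python
# Grouping of the six fine-grained BRIGHT lesion subtypes (Task 2)
# into the three coarse cancer-risk classes (Task 1).
# Strings are illustrative; the official label encodings may differ.
SUBTYPE_TO_GROUP = {
    "Pathological Benign":         "Noncancerous",
    "Usual Ductal Hyperplasia":    "Noncancerous",
    "Flat Epithelial Atypia":      "Precancerous",
    "Atypical Ductal Hyperplasia": "Precancerous",
    "Ductal Carcinoma in Situ":    "Cancerous",
    "Invasive Carcinoma":          "Cancerous",
}

def coarse_label(subtype: str) -> str:
    """Collapse a six-class (Task 2) prediction into a three-class (Task 1) one."""
    return SUBTYPE_TO_GROUP[subtype]
```

In this reading, any Task 2 prediction can be collapsed into a Task 1 prediction, while the reverse is not possible.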
Why the Challenge?
In a clinical setup for breast cancer diagnosis, a pathologist manually inspects a breast tissue specimen and estimates a tumor subtype for the lesion by following a predefined grading system. The subtypes confer different levels of risk according to their probability of transitioning to invasive carcinoma. For instance, lesions with atypia are associated with higher risks compared to benign lesions [11,6]. Although the subtyping criteria are established, the continuum of histologic features across the diagnostic spectrum makes it difficult to clearly delineate the subtypes. Thus, manual inspection is a time-consuming process with significant intra- and inter-observer variability [6,7,1]. Moreover, for certain lesions, such as breast lesions with atypia, significant pathological expertise is required, and even with expert review, the inter-pathologist agreement can be as low as 48% [6]. The aforementioned challenges in manual diagnosis and the increasing incidence rate of breast cancer cases per year [14] call for automated computer-aided cancer diagnostics.
To this end, AI appears promising as demonstrated in [13,2,10,15,12,5]. The goal of this challenge is to advance the role of AI in breast tumor subtyping by enabling the development and evaluation of AI models on a large cohort of breast tissue biopsies. Further, the challenge implicitly incorporates a number of key real-world diagnostic challenges by including ambiguous atypical lesions, a wide spectrum of breast tumor subtypes (up to six), and tissue preparation artifacts in the cohort. We will evaluate the developed AI models on an independent test cohort and benchmark them in terms of performance, model capacity, inference time, and robustness.
To ensure fairness, we require that all participants (1) use only automated methods, (2) provide all requested information, (3) use only auxiliary public datasets and pre-trained models, not private non-shareable data and models, and (4) provide the details of their final methodology. We will also encourage the open-sourcing of code and motivate the participation of multi-institutional teams. Finally, all participants will present their method in a dedicated workshop, and the top teams, selected from the leaderboard, will contribute to a journal publication.