| Event | Date |
|---|---|
| Release of data, code, and metrics for training | Nov. 15, 2021 |
| Release of examples for submission files | Jan. 21, 2022 |
| Release of data and metrics for testing | Jan. 31, 2022 |
| Challenge workshop website goes live | Feb. 01, 2022 |
| Submission deadline of results and papers | Feb. 28, 2022 |
| Response from reviewers | Mar. 21, 2022 |
| Registration deadline and BRIGHT Workshop | Mar. 27, 2022 |
| Camera-ready version of the paper | Apr. 03, 2022 |
| Post-workshop leaderboard release | Apr. 04, 2022 |
| Submission deadline for manuscripts | Apr. 21, 2022 |
| Publication of challenge outcomes | Oct. 01, 2022 |
Like the BRIGHT Challenge?
Try the KNIGHT challenge for kidney images.
Participants will use the BReAst Carcinoma Subtyping (BRACS) dataset, a cohort of H&E-stained breast tissue biopsies. BRACS is an open-source dataset, but the BRIGHT challenge data organization, which differs from the public release, is available only through registration for the challenge. The slides pertain to breast tumor patients with one of the following six breast tumor subtypes: Pathological Benign (PB), Usual Ductal Hyperplasia (UDH), Flat Epithelial Atypia (FEA), Atypical Ductal Hyperplasia (ADH), Ductal Carcinoma in Situ (DCIS), and Invasive Carcinoma (IC) (examples in Figure 1). The slides were scanned with an Aperio AT2 scanner at 0.25 μm/pixel with a magnification factor of 40× to produce the WSIs.
The WSIs were assigned a subtype in a two-step procedure. First, in line with pathological diagnostic protocol, three board-certified pathologists independently annotated each WSI according to the highest-grade tumor lesion it contains. Then, annotations with disagreement were discussed and re-annotated by consensus of the three pathologists to reduce observer variability. A set of representative Regions of Interest (ROIs) was also annotated for many of the WSIs following the same procedure. We have included both the WSIs and the identified ROIs in the dataset.
Table 1: The number of breast tumor subtype-wise ROIs and WSIs in BRACS dataset.
The presence of atypical lesions in a tissue specimen indicates a higher risk of developing invasive carcinoma, but identifying them through tissue examination demands very high confidence from expert pathologists. For instance, ADH shares morphological similarities with DCIS, and in certain cases ADH possesses all the features of DCIS, differing only in the size of the lesion. Moreover, UDH, ADH, and DCIS are all characterized by an intraductal growth pattern, which makes these classes difficult to differentiate in H&E-stained sections. While atypical lesions are very important, their presence cannot be detected by reviewing a mammogram or other breast imaging studies. They also cannot be felt on a clinical breast exam, and if they are found in a core biopsy, more frequent imaging follow-up and often surgical excision are recommended. Finally, including atypias in the profiling of patient phenotypes paves the way for biomarker discovery that is more prognostic of disease progression. To that end, an AI solution that can distinguish atypias from other cancer subtypes in histology images is of great value for clinical practice.
To encourage the automated detection of breast tumor subtypes, we define two WSI classification tasks based on the BRACS dataset.
Task 1: A 3-class WSI classification task where we group the six original subtypes according to cancer risk: Non-cancerous (PB+UDH), Pre-cancerous or Atypical (ADH+FEA), and Cancerous (DCIS+IC). Participants can train their algorithms on both the annotated WSIs and ROIs. Testing will be done at the WSI level only.
Task 2: A 6-class WSI classification task where participants must perform fine-grained tumor subtyping of WSIs. This task is more challenging, as it includes more classes and higher inter-class ambiguity. As in Task 1, participants can train on the WSIs and ROIs, and testing will be done at the WSI level only.
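Since Task 1's three risk groups are a deterministic coarsening of Task 2's six subtypes, a single 6-class predictor can serve both tasks. The sketch below illustrates that grouping; the dictionary keys follow the subtype abbreviations above, but the group names and function are illustrative assumptions, not the official submission format.

```python
# Illustrative mapping from the six BRACS subtypes (Task 2) to the
# three cancer-risk groups of Task 1. Group names are assumptions.
SUBTYPE_TO_GROUP = {
    "PB": "Non-cancerous",   # Pathological Benign
    "UDH": "Non-cancerous",  # Usual Ductal Hyperplasia
    "FEA": "Pre-cancerous",  # Flat Epithelial Atypia
    "ADH": "Pre-cancerous",  # Atypical Ductal Hyperplasia
    "DCIS": "Cancerous",     # Ductal Carcinoma in Situ
    "IC": "Cancerous",       # Invasive Carcinoma
}

def coarsen(labels):
    """Collapse 6-class (Task 2) predictions into the 3-class (Task 1) grouping."""
    return [SUBTYPE_TO_GROUP[label] for label in labels]
```

For example, `coarsen(["PB", "ADH", "IC"])` yields `["Non-cancerous", "Pre-cancerous", "Cancerous"]`.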
We developed Deep Learning (DL)-based multi-class classification baselines for the two aforementioned tasks on the BRACS dataset. For both tasks, classification was performed at the WSI level without using the auxiliary annotated ROIs. The baseline CNN models were based on the network architecture presented in . Each CNN model consisted of a compressing path and a learning path. In the compressing path, a WSI was encoded into a grid-based feature map using a residual feature extractor. In the learning path, attention modules were employed to find ROIs by considering the spatial correlations of neighboring patch features. Following stain normalization of the WSIs at 10× magnification, the images were augmented using affine transformations. For both tasks, the models were trained with a batch size of 8, a dropout rate of 0.2, and the Adam optimizer with a learning rate of 10⁻⁵. The models were tested on the test set of the BRACS dataset, i.e., the Validation set in Table 1, and F1-scores were computed over four runs using bootstrapped sampling. The results for Tasks 1 and 2 are reported in Table 2. The method details and code are available in .
Table 2: Mean and standard deviation of F1-score for 3-class and 6-class WSI classification.
The BRACS dataset was used to evaluate Histocartography, an open-source graph representation and learning library that participants can leverage in the challenge. Visit the main repository and the Histocartography GitHub organization for more details.