Deep learning for DBT classification with saliency-guided 2D synthesis
Cantone, Marco; Russo, Ciro; Marrocco, Claudio (Supervision); Bria, Alessandro (Supervision)
2025-01-01
Abstract
Digital Breast Tomosynthesis (DBT) is a key imaging modality for breast cancer detection, improving lesion visibility by reducing tissue overlap inherent in conventional mammography. In this work, we propose a novel deep learning framework that classifies DBT volumes as malignant or non-malignant, while simultaneously generating a synthetic 2D image to assist diagnostic interpretation. This image is derived from a 3D saliency map computed by the internal attention mechanisms of the model, which highlights and preserves the most diagnostically relevant regions of the original volume. A surface is defined in this saliency space, enabling sampling and projection into a 2D diagnostic representation. This projection offers a compact summary of the volumetric scan, assisting clinicians in diagnostic interpretation and potentially alleviating cognitive workload. A standard convolutional neural network trained on these synthetic 2D images achieves classification performance comparable to models operating directly on full 3D volumes. We train and evaluate our method on the OPTIMAM dataset and assess generalization through external validation on the independent BCS-DBT dataset without retraining. Results show that the model performs robustly across different clinical sources and provides an interpretable, computationally efficient tool for DBT-based breast cancer diagnosis.
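The abstract describes the projection only at a high level, so the following is a minimal, hypothetical sketch (not the authors' implementation) of one way a 3D saliency map could define a surface that is sampled into a 2D image: for each pixel position, the slice with the highest saliency is selected and its intensity copied into the projection. The function and variable names below are assumptions made for illustration.

```python
# Illustrative sketch only: assumes a saliency volume aligned with the DBT volume
# is already available, and uses a per-pixel argmax over the depth axis as the
# "surface" in saliency space that is sampled into a single 2D image.

import numpy as np

def saliency_guided_projection(volume: np.ndarray, saliency: np.ndarray) -> np.ndarray:
    """Collapse a 3D DBT volume (D, H, W) into a 2D image (H, W).

    For every (row, col) position, pick the slice index where the saliency is
    highest and copy the corresponding voxel intensity; the per-pixel argmax
    defines the surface that is sampled.
    """
    assert volume.shape == saliency.shape, "volume and saliency must align"
    depth_surface = saliency.argmax(axis=0)        # (H, W) slice indices
    rows, cols = np.indices(depth_surface.shape)   # pixel coordinate grids
    return volume[depth_surface, rows, cols]       # sample the volume on the surface

# Hypothetical usage with random data standing in for a real DBT scan.
if __name__ == "__main__":
    vol = np.random.rand(60, 256, 256).astype(np.float32)  # D x H x W volume
    sal = np.random.rand(60, 256, 256).astype(np.float32)  # matching saliency map
    synthetic_2d = saliency_guided_projection(vol, sal)
    print(synthetic_2d.shape)                               # (256, 256)
```

A classifier trained on images produced this way would operate on an (H, W) input instead of the full (D, H, W) volume, which is the efficiency argument the abstract makes for the synthetic 2D representation.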
| File | Size | Format | |
|---|---|---|---|
| 2026 - Deep learning for DBT classification with saliency-guided 2D synthesis.pdf (open access; Type: Pre-print; License: Publisher's copyright) | 2.98 MB | Adobe PDF | View/Open |

