Project ID: P202103100002
Multi-Modality Fusion Coupled with Deep Learning for Improved Outcome Prediction in Head and Neck Cancer
Survival prediction algorithms, deep learning, fusion methods, image processing
Applicants should have the following expertise: 1- Experience with deep learning classification algorithms. 2- Experience with traditional and deep learning fusion techniques. 3- Experience with deep learning attention maps. 4- Familiarity with medical images, image processing methods, and machine learning techniques. Both MATLAB and Python are acceptable, although Python is preferable for those who plan to work on Google Colab.
Objective: Multi-level multi-modality fusion radiomics is a promising technique for improved prognostication of cancer. We aim to use advanced fusion techniques on PET and CT images, coupled with deep learning (DL), to achieve improved outcome prediction in head and neck squamous cell carcinoma (HNSCC).
Methods: Our study included 408 HNSCC patients from The Cancer Imaging Archive (TCIA) in a multi-center setting. Prognostic outcomes (binary classification) included overall survival (OS), distant metastasis (DM), loco-regional recurrence (LR), and progression-free survival (PFS). We employed DL with a 17-layer 3D convolutional neural network (CNN) architecture. Prior to training, each image underwent min-max normalization and image augmentation using random rotations (0-20°) to improve the performance and generalizability of our model; training was carried out with 5-fold cross-validation. We used 12 datasets: CT, PET, and 10 image-level fused datasets.
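As an illustrative sketch only (not the project's actual pipeline), the preprocessing steps described above - min-max normalization, random-rotation augmentation, image-level PET/CT fusion, and 5-fold cross-validation splits - might look like the following in Python with NumPy and SciPy. The weighted-average fusion rule, the function names, and all parameter values here are assumptions for illustration; the project's 10 fused datasets would use its own fusion techniques.

```python
import numpy as np
from scipy.ndimage import rotate


def min_max_normalize(volume):
    """Scale voxel intensities of a 3D volume to the [0, 1] range."""
    vmin, vmax = volume.min(), volume.max()
    return (volume - vmin) / (vmax - vmin + 1e-8)  # epsilon avoids divide-by-zero


def augment_random_rotation(volume, max_angle=20, rng=None):
    """Rotate the volume in-plane by a random angle in [0, max_angle] degrees."""
    if rng is None:
        rng = np.random.default_rng()
    angle = rng.uniform(0.0, max_angle)
    # reshape=False keeps the original array shape; order=1 is linear interpolation
    return rotate(volume, angle, axes=(0, 1), reshape=False, order=1, mode="nearest")


def weighted_fusion(ct, pet, alpha=0.5):
    """Hypothetical image-level fusion: voxel-wise weighted average of
    the normalized CT and PET volumes (one simple fusion rule among many)."""
    return alpha * min_max_normalize(ct) + (1 - alpha) * min_max_normalize(pet)


def kfold_indices(n_samples, k=5, seed=0):
    """Return k (train_idx, test_idx) pairs for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i]) for i in range(k)]
```

A typical usage would normalize and fuse each patient's co-registered CT/PET pair, apply the rotation augmentation during training only, and iterate over `kfold_indices(408, k=5)` to train and evaluate the CNN on each fold.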