Project ID: P202203140002
Deep versus handcrafted tensor radiomics features: application to survival prediction in head and neck cancer
Keywords: Image processing, Machine learning, Deep learning, Radiomics feature extraction, Fusion techniques
The project will be provided with large datasets, including PET and CT images, their ground-truth segmentations, and clinical data. A powerful GPU and Google Colab will be available for long, computationally heavy runs. The supervisor and advisors will guide all members toward completing the project.
Objectives: Multi-level, multi-modality fusion is a promising approach with the potential to improve prognostication of head and neck (HN) cancer. In this study, we aim to extract tensor deep-learning and radiomics features from PET/CT images and use them to optimize survival prediction.

Methods: 408 patients with PET, CT, and clinical data were included from The Cancer Imaging Archive (TCIA) database, collected in a multi-center setting. PET images are normalized, registered to CT, enhanced, and cropped. A set of candidate algorithms is pre-selected from various families of learners and fusion methods. Multiple fusion techniques are applied to combine PET and CT information. The fused images, as well as the PET-only and CT-only images, are fed to deep learning algorithms to generate tensor deep features. Each feature extracted from a given modality or fused image is called a "flavour" of that feature. In addition, we employ the SERA software to generate tensor radiomics features from the same images. Multiple hybrid systems, each a classifier linked with a dimensionality reduction algorithm, are applied to predict binary progression-free survival in HN cancer. 80% of the patient data are used to select the best hybrid model, based on maximum performance in 5-fold cross-validation; the remaining 20% are reserved for external testing of the selected model. We may also employ an ensemble voting technique to further enhance prediction performance.
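As a minimal sketch of one simple fusion strategy, the snippet below performs pixel-wise weighted averaging of intensity-normalized, co-registered PET and CT arrays. The `alpha` weight and the toy intensity ranges are illustrative assumptions, not the project's actual fusion settings:

```python
import numpy as np

def min_max_normalize(img):
    """Scale an image to [0, 1]; assumes a non-constant array."""
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min())

def weighted_fusion(pet, ct, alpha=0.5):
    """Pixel-wise weighted average of two co-registered, normalized images."""
    return alpha * min_max_normalize(pet) + (1.0 - alpha) * min_max_normalize(ct)

# Toy 4x4 "slices" standing in for co-registered PET/CT data.
rng = np.random.default_rng(0)
pet = rng.random((4, 4)) * 30000        # PET intensities (arbitrary units)
ct = rng.random((4, 4)) * 2000 - 1000   # CT Hounsfield-like range

fused = weighted_fusion(pet, ct, alpha=0.6)
print(fused.shape, fused.min() >= 0.0, fused.max() <= 1.0)
```

In practice the same idea extends to 3-D volumes, and other fusion families (e.g., wavelet- or Laplacian-pyramid-based) replace the weighted average while keeping the same normalize-then-combine structure.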
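The hybrid-system workflow (dimensionality reduction linked with a classifier, selected by 5-fold cross-validation on an 80% partition, externally tested on the held-out 20%, with optional ensemble voting) can be sketched with scikit-learn. The synthetic feature matrix, the PCA/logistic-regression/SVC choices, and the hyperparameter grid are placeholder assumptions standing in for the project's actual features and learner families:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic stand-in for the extracted feature matrix (408 patients x features)
# and the binary progression-free-survival labels.
X, y = make_classification(n_samples=408, n_features=50, random_state=0)

# 80/20 split: 80% for model selection, 20% held out for external testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# One hybrid system: dimensionality reduction (PCA) linked with a classifier,
# tuned by 5-fold cross-validation on the 80% partition.
pipe = Pipeline([("reduce", PCA()), ("clf", LogisticRegression(max_iter=1000))])
search = GridSearchCV(pipe, {"reduce__n_components": [5, 10, 20]}, cv=5)
search.fit(X_train, y_train)

# Optional ensemble voting across several hybrid systems.
vote = VotingClassifier([
    ("lr", search.best_estimator_),
    ("svc", Pipeline([("reduce", PCA(n_components=10)), ("clf", SVC())])),
], voting="hard")
vote.fit(X_train, y_train)

# External test on the untouched 20%.
acc = vote.score(X_test, y_test)
print(X_train.shape[0], X_test.shape[0], 0.0 <= acc <= 1.0)
```

The key design point this illustrates is that model selection (the grid search and cross-validation) touches only the 80% partition, so the 20% score is an unbiased external estimate.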