Project ID: P202205130004
Application of Deep Learning Techniques Coupled with Fusion Models for Prediction of TNM Stage in Lung Cancer
Bronze
Supervisor: Mohammadreza Salmanpour
Fund: Free
Status: New
Skills:

Deep Learning, Machine Learning, Image Processing

Requirements:

Applicants with the following expertise are able to apply for this project:
1. Sufficient experience with medical image processing techniques
2. Sufficient experience with 3-D deep learning methods
3. Sufficient experience with traditional and deep-learning fusion techniques
Both MATLAB and Python are acceptable, but Python is preferable for applicants who plan to work on Google Colab.


This project includes:
Learners:

Description:

Objective: The tumor, node, metastasis (TNM) staging system enables clinicians to describe the spread of lung cancer in a specific manner to assist with assessment of disease status, prognosis, and management. Multi-level, multi-modality fusion radiomics is a promising technique with the potential to improve the prognostication of cancer. We aim to use advanced fusion techniques on PET and CT images, coupled with deep learning (DL), to improve the prediction of TNM stage in lung cancer. Deep learning techniques often outperform radiomics-based machine learning techniques because radiomics feature extraction requires ground-truth tumor segmentations delineated by physicians. Moreover, the feature extraction process demands familiarity with radiomics feature extraction software and image processing techniques.

Methods: In our study, 500 patients with PET, CT, and clinical data are included from The Cancer Imaging Archive (TCIA) database, collected in a multi-center setting. PET images are normalized, registered to CT, enhanced, and cropped. A range of optimal algorithms is pre-selected from various families of deep learning and fusion algorithms. Multiple fusion techniques are applied to the images to combine PET and CT information. The fused images, as well as the sole PET and CT images, are fed to several robust deep learning algorithms to predict TNM stage. 80% of the patient data are used for the HNLSs to select the best model based on the maximum performance obtained from 5-fold cross-validation. Subsequently, the remaining 20% are used for external testing of the selected model. We may also employ an ensemble voting technique to enhance prediction performance.
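As a rough illustration of the preprocessing pipeline described above (PET normalization, registration to CT, and cropping), the sketch below uses SimpleITK in Python. The file names, normalization choice, registration settings, and crop size are illustrative assumptions, not project specifications.

```python
# Hypothetical preprocessing sketch with SimpleITK: normalize a PET volume,
# rigidly register it to the corresponding CT, and crop a region of interest.
# Paths, normalization, and crop indices are placeholders for illustration.
import SimpleITK as sitk

ct = sitk.ReadImage("patient_001_CT.nii.gz", sitk.sitkFloat32)    # placeholder path
pet = sitk.ReadImage("patient_001_PET.nii.gz", sitk.sitkFloat32)  # placeholder path

# Intensity-normalize PET (zero mean, unit variance); SUV conversion would
# instead require information from the DICOM headers.
pet = sitk.Normalize(pet)

# Rigid registration of PET onto the CT grid using mutual information.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(50)
reg.SetOptimizerAsRegularStepGradientDescent(1.0, 1e-4, 200)
reg.SetInterpolator(sitk.sitkLinear)
initial = sitk.CenteredTransformInitializer(ct, pet, sitk.Euler3DTransform())
reg.SetInitialTransform(initial, inPlace=False)
transform = reg.Execute(ct, pet)

# Resample PET into CT space so the two modalities are voxel-aligned.
pet_on_ct = sitk.Resample(pet, ct, transform, sitk.sitkLinear, 0.0, sitk.sitkFloat32)

# Crop a fixed-size region of interest (size and start index are arbitrary here).
roi_size, roi_start = [128, 128, 64], [64, 64, 32]
ct_crop = sitk.RegionOfInterest(ct, roi_size, roi_start)
pet_crop = sitk.RegionOfInterest(pet_on_ct, roi_size, roi_start)

sitk.WriteImage(ct_crop, "patient_001_CT_crop.nii.gz")
sitk.WriteImage(pet_crop, "patient_001_PET_crop.nii.gz")
```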

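The study applies multiple fusion techniques to combine PET and CT information at the image level. As one hypothetical example, the NumPy sketch below fuses a co-registered PET/CT pair by a voxel-wise weighted average; the min-max rescaling and the weight alpha are arbitrary choices, and a wavelet- or pyramid-based method would simply replace the fuse_weighted function.

```python
# Hypothetical image-level fusion of co-registered PET and CT volumes
# via a voxel-wise weighted average (one of many possible fusion schemes).
import numpy as np

def minmax(vol: np.ndarray) -> np.ndarray:
    """Rescale a volume to [0, 1] so the two modalities are comparable."""
    vol = vol.astype(np.float32)
    return (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)

def fuse_weighted(ct: np.ndarray, pet: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Voxel-wise weighted average; alpha = 0.5 is an arbitrary illustrative weight."""
    assert ct.shape == pet.shape, "volumes must already be registered and cropped"
    return alpha * minmax(pet) + (1.0 - alpha) * minmax(ct)

if __name__ == "__main__":
    # Random stand-ins for a registered, cropped PET/CT pair (64 x 128 x 128 voxels).
    rng = np.random.default_rng(0)
    ct_vol = rng.normal(size=(64, 128, 128))
    pet_vol = rng.normal(size=(64, 128, 128))
    fused = fuse_weighted(ct_vol, pet_vol, alpha=0.6)
    print(fused.shape, float(fused.min()), float(fused.max()))
```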

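To make the evaluation protocol concrete (an 80/20 patient-level split, 5-fold cross-validation on the 80% for model selection, external testing on the remaining 20%, and optional ensemble voting), the following is a minimal sketch using TensorFlow/Keras and scikit-learn. The small 3D CNN, the input size, the four TNM stage classes, and the random stand-in data are assumptions for illustration only, not the models compared in the project.

```python
# Hedged sketch of the evaluation protocol: 80/20 patient-level split, 5-fold
# cross-validation on the 80% for model selection, external testing on the 20%,
# and soft ensemble voting (averaged softmax outputs) across the fold models.
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold
import tensorflow as tf

N_CLASSES = 4                      # assumed number of TNM stage groups
INPUT_SHAPE = (32, 64, 64, 1)      # assumed (depth, height, width, channel)

def build_3d_cnn() -> tf.keras.Model:
    """Tiny 3D CNN classifier; a stand-in for the deep models under study."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=INPUT_SHAPE),
        tf.keras.layers.Conv3D(16, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling3D(2),
        tf.keras.layers.Conv3D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling3D(),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])

# Random stand-ins for fused image volumes and TNM stage labels (200 patients).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, *INPUT_SHAPE)).astype("float32")
y = rng.integers(0, N_CLASSES, size=200)

# 80% for cross-validated model selection, 20% held out for external testing.
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

fold_models, fold_scores = [], []
for train_idx, val_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                          random_state=42).split(X_dev, y_dev):
    model = build_3d_cnn()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_dev[train_idx], y_dev[train_idx], epochs=2,
              batch_size=8, verbose=0)
    _, acc = model.evaluate(X_dev[val_idx], y_dev[val_idx], verbose=0)
    fold_models.append(model)
    fold_scores.append(acc)

print("5-fold validation accuracies:", np.round(fold_scores, 3))

# Ensemble voting on the external test set: average the softmax outputs of the folds.
probs = np.mean([m.predict(X_test, verbose=0) for m in fold_models], axis=0)
test_acc = float(np.mean(np.argmax(probs, axis=1) == y_test))
print("Ensemble external-test accuracy:", round(test_acc, 3))
```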