Towards a Whole Body [18F] FDG Positron Emission Tomography Attenuation Correction Map Synthesizing using Deep Neural Networks

Authors

  • Ramiro Germán Rodríguez Colmeiro CONICET - UTN - UTT https://orcid.org/0000-0001-9279-5717
  • Claudio Verrastro Universidad Tecnológica Nacional, CABA, Argentina
  • Daniel Minsky Comisión Nacional de Energía Atómica, CABA, Argentina
  • Thomas Grosges GAMMA3 (UTT-INRIA), Université de Technologie de Troyes, 12 Rue Marie Curie - CS 42060, Troyes cedex, 10004, France

DOI:

https://doi.org/10.24215/16666038.21.e4

Keywords:

Attenuation Correction, Deep Learning, Generative Models, Positron Emission Tomography

Abstract

The correction of attenuation effects in Positron Emission Tomography (PET) imaging is fundamental to obtaining a correct radiotracer distribution. However, direct measurement of this attenuation map is not error-free and normally delivers an additional ionizing radiation dose to the patient. Here, we explore the task of whole-body attenuation map generation using 3D deep neural networks. We analyze the advantages that adversarial training can provide to such models. The networks are trained to learn the mapping from non-attenuation-corrected [18F]-fluorodeoxyglucose PET images to a synthetic Computerized Tomography (sCT) image and also to label the tissue of each input voxel. The sCT image is then further refined using an adversarial training scheme to recover higher-frequency details and lost structures from context information. This work is trained and tested on publicly available datasets containing PET images from different scanners with different radiotracer administration and reconstruction modalities. The network is trained with 108 samples and validated on 10 samples. The sCT generation was tested on 133 samples from 8 distinct datasets. The resulting mean absolute errors are 90±20 HU and 103±18 HU, with peak signal-to-noise ratios of 19.3±1.7 dB and 18.6±1.5 dB, for the base model and the adversarial model respectively. The attenuation correction is tested by means of attenuation sinograms, obtaining a line-of-response attenuation mean error lower than 1% with a standard deviation lower than 8%. The proposed deep learning topologies are capable of generating whole-body attenuation maps from uncorrected PET image data. Moreover, the accuracy of both methods holds in the presence of data from multiple sources and modalities, and both are trained on publicly available datasets only. Finally, while the adversarial layer enhances the visual appearance of the produced samples, the 3D U-Net achieves higher metric performance.
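The mean absolute error (in Hounsfield units) and peak signal-to-noise ratio reported above are the standard voxel-wise comparison metrics between a synthetic CT and its reference CT. As a minimal sketch of how such metrics are computed over 3D volumes (the function names, the optional body mask, and the `data_range` default are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def mae_hu(sct, ct, mask=None):
    """Mean absolute error in Hounsfield units between a synthetic
    CT volume and its reference CT, optionally restricted to a mask
    (e.g. a body mask that excludes air and the patient couch)."""
    diff = np.abs(sct.astype(np.float64) - ct.astype(np.float64))
    if mask is not None:
        diff = diff[mask]
    return diff.mean()

def psnr_db(sct, ct, data_range=None):
    """Peak signal-to-noise ratio in dB.  If data_range is not given,
    the dynamic range of the reference CT volume is used."""
    sct = sct.astype(np.float64)
    ct = ct.astype(np.float64)
    mse = np.mean((sct - ct) ** 2)
    if data_range is None:
        data_range = ct.max() - ct.min()
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Equivalent implementations exist in scikit-image (`skimage.metrics.peak_signal_noise_ratio`); the sketch above only makes the HU-space arithmetic behind the reported 90±20 HU and 19.3±1.7 dB figures explicit.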



Published

2021-04-17

How to Cite

Rodríguez Colmeiro, R. G., Verrastro, C., Minsky, D., & Grosges, T. (2021). Towards a Whole Body [18F] FDG Positron Emission Tomography Attenuation Correction Map Synthesizing using Deep Neural Networks. Journal of Computer Science and Technology, 21(1), e4. https://doi.org/10.24215/16666038.21.e4

Section

Original Articles