Contact
+49 9131 85 25247
+49 9131 85 27270
Address
Universität Erlangen-Nürnberg
Chair of Computer Science 5 (Pattern Recognition)
Martensstrasse 3, 91058 Erlangen, Germany
Limited Angle Tomography
In computed tomography (CT), the X-ray source and the detector of a CT system need to rotate over at least π plus the fan angle to acquire complete data for image reconstruction, which is called a short scan. In practical applications, however, the gantry rotation may be restricted by other system parts or by external obstacles. In such cases, only limited angle data are acquired. Image reconstruction from data acquired over an insufficient angular range is called limited angle tomography. The missing data give rise to artifacts in the reconstructed images: boundary distortion, intensity leakage, and edge blurring, as demonstrated in Fig. 1(b). In particular, many streak artifacts occur along the directions of the missing angular range.
To improve image quality in limited angle tomography, we have investigated the following three methods: missing data interpolation/extrapolation using data consistency conditions [1], iterative reconstruction with total variation regularization [2], and machine learning including conventional machine learning [3] and deep learning [4].
Fig. 1(a). Custom phantom 
Fig. 1(b). FBP reconstruction from a 160-degree fan-beam sinogram 
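The effect of the missing angular range can be illustrated directly in the frequency domain: by the Fourier slice theorem, a parallel-beam scan over a limited range leaves a symmetric double wedge of spatial frequencies unmeasured. The numpy-only sketch below (the disc phantom, image size, and wedge-orientation convention are illustrative assumptions, not from our experiments) zeroes that wedge in the spectrum of a phantom and inverts, which reproduces the characteristic streaks and edge blurring along the missing directions.

```python
import numpy as np

def missing_wedge_mask(n, scan_range_deg):
    """Boolean mask of the frequencies measured by a parallel-beam scan
    covering [0, scan_range_deg) degrees.

    By the Fourier slice theorem each projection angle samples one radial
    line of the 2-D spectrum, so a limited scan leaves a symmetric double
    wedge of directions unmeasured.
    """
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    # Direction of each frequency sample, folded to [0, 180) degrees.
    ang = np.degrees(np.arctan2(fy, fx)) % 180.0
    return ang < scan_range_deg

def simulate_limited_angle(img, scan_range_deg):
    """Zero the unmeasured double wedge of img's spectrum and invert."""
    mask = missing_wedge_mask(img.shape[0], scan_range_deg)
    spec = np.fft.fft2(img) * mask
    return np.fft.ifft2(spec).real

# A simple disc phantom; the 160-degree result shows the typical streaks
# and blurred edges along the missing directions.
n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
phantom = (x**2 + y**2 < 0.5**2).astype(float)
recon = simulate_limited_angle(phantom, 160.0)
```

Note that this frequency-domain surrogate sidesteps the Radon transform entirely; it only visualizes which information a limited scan loses, not an actual FBP pipeline.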
Missing Data Restoration in Limited Angle Tomography based on Helgason-Ludwig Consistency Conditions
In computed tomography, projection data contain many kinds of redundant information, which are typically expressed mathematically as data consistency conditions. The Helgason-Ludwig consistency condition (HLCC) is the best-known data consistency condition. One way to obtain the HLCC is to use the Chebyshev-Fourier transform (CFT), which can decompose a parallel-beam sinogram into different frequency components, as demonstrated in Fig. 2.
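A compact way to see the HLCC at work: the n-th moment of a parallel-beam sinogram over the detector coordinate, ∫ p(θ, s) sⁿ ds, must be a trigonometric polynomial of degree at most n in θ. The numpy sketch below (the point-object position and angular sampling are arbitrary choices for illustration) verifies this numerically for a point object, whose moments have the closed form (x₀ cos θ + y₀ sin θ)ⁿ.

```python
import numpy as np

def point_moment(theta, x0, y0, n):
    """n-th projection moment m_n(theta) of a unit point object at (x0, y0):
    integrating p(theta, s) * s**n over s gives (x0*cos + y0*sin)**n."""
    return (x0 * np.cos(theta) + y0 * np.sin(theta)) ** n

def max_high_harmonic(m, order):
    """Largest Fourier magnitude of m(theta) above angular frequency `order`.
    HLCC predicts this is zero (up to round-off) for the n-th moment."""
    spec = np.abs(np.fft.fft(m))
    freq = np.abs(np.fft.fftfreq(m.size, d=1.0 / m.size))
    return spec[freq > order].max()

theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
m3 = point_moment(theta, 0.3, -0.5, 3)
# All angular harmonics of m3 above order 3 vanish, as HLCC requires.
```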
HLCC can be used to restore missing data in limited angle tomography. Using the CFT, we convert the missing data restoration problem into a regression problem and solve it with Lasso regression. Due to severe ill-posedness, the regression recovers only the low-frequency components correctly. Bilateral filtering is utilized to retain the most prominent high-frequency components. Afterwards, a fusion in the frequency domain uses the restored frequency components to fill the missing double-wedge region. The proposed method is evaluated in a parallel-beam study on both numerical and clinical phantoms. The results show that our method is promising for streak reduction and intensity offset compensation in both noise-free and noisy settings.
Fig. 2(a). Restored sinograms using different orders 
Fig. 2(b). Reconstructed images using different orders 
Fig. 2(c). Fourier components of the reconstructed images using different orders 
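The regression step can be sketched in the moment domain: on the measured angles, the n-th sinogram moment must follow a degree-n trigonometric model, so fitting that model and evaluating it on the missing angles extrapolates the data. The sketch below uses ordinary least squares instead of the Lasso used in our method in order to stay dependency-free; the point object, 160-degree range, and moment order are illustrative assumptions.

```python
import numpy as np

def trig_design(theta, order):
    """Design matrix [1, cos(k*theta), sin(k*theta)] for k = 1..order."""
    cols = [np.ones_like(theta)]
    for k in range(1, order + 1):
        cols.append(np.cos(k * theta))
        cols.append(np.sin(k * theta))
    return np.stack(cols, axis=1)

def extrapolate_moment(theta_meas, m_meas, theta_miss, order):
    """Fit the degree-`order` trigonometric model on the measured angles
    and evaluate it on the missing ones. The actual method solves this
    regression with an L1 (Lasso) penalty; plain least squares keeps the
    sketch self-contained."""
    A = trig_design(theta_meas, order)
    coef, *_ = np.linalg.lstsq(A, m_meas, rcond=None)
    return trig_design(theta_miss, order) @ coef

# Point object at (0.3, -0.5); 160 of 180 degrees are measured.
x0, y0 = 0.3, -0.5
theta_meas = np.deg2rad(np.arange(0, 160, dtype=float))
theta_miss = np.deg2rad(np.arange(160, 180, dtype=float))
order = 4
m_meas = (x0 * np.cos(theta_meas) + y0 * np.sin(theta_meas)) ** order
m_pred = extrapolate_moment(theta_meas, m_meas, theta_miss, order)
```

Because the true moment lies exactly in the model space here, the extrapolation is essentially exact; with real, noisy data the problem becomes severely ill-posed, which is why regularized (Lasso) regression and the subsequent frequency-domain fusion are needed.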
Scale-Space Anisotropic Total Variation for Limited Angle Tomography
Conventional Machine Learning for Limited Angle Tomography
In this work, we investigate the application of traditional machine learning techniques, in the form of regression models based on conventional, "handcrafted" features, to streak reduction in limited angle tomography. Specifically, linear regression (LR), a multilayer perceptron (MLP), and the reduced-error pruning tree (REPTree) are investigated. When choosing the mean-variation-median (MVM), Laplacian, and Hessian features, REPTree learns streak artifacts best and reaches the smallest root-mean-square error (RMSE) of 29 HU for the Shepp-Logan phantom. Further experiments demonstrate that the MVM and Hessian features complement each other, whereas the Laplacian feature is redundant in the presence of MVM. In fan-beam geometry, the SVDL features are also beneficial. Preliminary experiments on clinical data suggest that further investigation of clinical applications using REPTree may be worthwhile.
Fig. 4(a). Reference clinical image. 
Fig. 4(b). Image reconstructed from 160-degree fan-beam projection data. 
Fig. 4(c). Image reconstructed by REPTree. 
Fig. 5. A flowchart summarizing our implementation of machine learning algorithms for limited angle tomography. 
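As a rough illustration of the handcrafted features, the numpy sketch below computes per-pixel MVM and Laplacian feature maps. The 3×3 window and the use of the local variance for the "variation" component are assumptions made for this sketch; in the actual pipeline, such feature vectors are fed to a regression model (e.g. REPTree) that predicts the streak artifact per pixel.

```python
import numpy as np

def mvm_features(img, k=3):
    """Mean-variation-median (MVM) features over a k-by-k neighbourhood,
    computed per pixel. The window size k and the choice of variance as
    the 'variation' measure are assumptions for this sketch."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    # Gather all k*k shifted copies of the image: shape (k*k, H, W).
    patches = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(k) for j in range(k)])
    return (patches.mean(axis=0),
            patches.var(axis=0),
            np.median(patches, axis=0))

def laplacian_feature(img):
    """Discrete 5-point Laplacian, zero at the image border."""
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4.0 * img[1:-1, 1:-1])
    return lap
```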
Deep Learning for Limited Angle Tomography
Recently, deep learning methods have been applied very successfully to many medical imaging problems, including limited angle tomography. In our study, deep learning achieves the best performance compared with the conventional methods above. Even for an angular range as small as 120°, deep learning can still obtain very good image quality. Fig. 6 displays the reconstruction results learned by the popular U-Net neural network.
Although deep learning has achieved a lot of success, the robustness of neural networks for clinical applications is still a concern. It has been reported that most neural networks are vulnerable to adversarial examples. Therefore, we investigate whether perturbations or noise can mislead a neural network into failing to detect an existing lesion. Our experiments demonstrate that the trained neural network, specifically the U-Net, is sensitive to Poisson noise. While the resulting images appear artifact-free, anatomical structures may be located at wrong positions; e.g., the skin can be shifted by up to 1 cm. This behavior can be reduced by retraining on data with simulated Poisson noise. However, we demonstrate that the retrained U-Net model remains susceptible to adversarial examples.
Fig. 7. The modified U-Net architecture for artifact reduction in limited angle tomography, with an example of 256×256 input images. 
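The Poisson noise used in the robustness experiments, and for retraining, can be simulated on the line integrals via the Beer-Lambert law. In the sketch below, the incident photon count i0 is an assumed value, not the one used in our experiments.

```python
import numpy as np

def add_poisson_noise(sinogram, i0=1e5, rng=None):
    """Simulate Poisson counting noise on line integrals (a hedged sketch;
    the photon count i0 is an assumption, not taken from our study).

    Beer-Lambert: expected detector counts are i0 * exp(-p) for line
    integral p; the measured counts are Poisson distributed, and noisy
    line integrals are recovered as -log(counts / i0).
    """
    rng = np.random.default_rng(rng)
    expected = i0 * np.exp(-sinogram)
    counts = rng.poisson(expected)
    counts = np.maximum(counts, 1)  # avoid log(0) under photon starvation
    return -np.log(counts / i0)

# Toy sinogram: 160 views of a smooth 64-bin profile.
sino = np.abs(np.sin(np.linspace(0, np.pi, 64)))[None, :] * np.ones((160, 1))
noisy = add_poisson_noise(sino, i0=1e4, rng=0)
```

Lowering i0 increases the relative noise level, which is the knob used to probe how strongly the reconstruction network reacts to counting noise.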
