9. Performing X-Ray CT Reconstruction#
9.1. Introduction#
A CT reconstruction creates a 3D image of a sample by mathematically “stitching together” a series of 2D X-ray images taken from many different angles around it. VirtualLab allows users to perform such reconstructions using a Python package called the Core Imaging Library (CIL).
This tutorial will not cover the specifics of how to use CIL; however, training material on this is provided by the CIL team in the form of Jupyter notebooks, which can be found here:
Our goal instead is to show how CIL can be run as a method inside a container within the VirtualLab workflow. As such we will cover similar examples to the training material but not the detailed theory behind them.
9.2. Prerequisites#
The examples provided here are mostly self-contained. However, to understand this tutorial you will, at a minimum, need to have completed the first tutorial to obtain a grounding in how VirtualLab is set up. You should also have completed the tutorial on X-ray imaging.
We also recommend that you have at least some understanding of how to use CIL as a standalone package and have looked through the CIL training material since, as previously mentioned, we will not be covering the theory behind these examples in any great detail.
Note
I have not had time to thoroughly test this, and there is nothing in the CIL docs that confirms it; however, I believe CIL requires a dedicated GPU to run. On my laptop, which has only integrated graphics, it crashes with very strange errors, which I suspect is due to a lack of video RAM.
However, the only other systems I have tested on both have powerful Nvidia GPUs, so I can’t confirm it’s not just my machine.
The main takeaway is that the container is set up for GPU compute, and I have put in a crude check for a working Nvidia GPU (line 60 of CT_Reconstruction.py).
Although it has been set up for Nvidia GPUs, the container should be GPU agnostic. This is because AMD and Intel GPUs use the Mesa drivers, which are part of the mainline Linux kernel, and so should work with any container out of the box.
However, I cannot confirm this as I have no other cards to test with. Thus, for now, locking the production version to Nvidia only, given we know it works, seemed a sensible compromise.
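For illustration, a crude check along these lines (not necessarily the exact code in CT_Reconstruction.py, whose implementation may differ) can be sketched by probing for the `nvidia-smi` utility, which ships with the Nvidia drivers:

```python
import shutil
import subprocess

def nvidia_gpu_available() -> bool:
    """Crude check for a working Nvidia GPU via the nvidia-smi utility."""
    # If the driver tooling isn't installed, there is no usable Nvidia GPU.
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        # `nvidia-smi -L` lists detected GPUs, one per line, e.g.
        # "GPU 0: NVIDIA GeForce RTX 3080 (UUID: ...)"
        result = subprocess.run(
            ["nvidia-smi", "-L"],
            capture_output=True, text=True, timeout=10,
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0 and "GPU" in result.stdout
```

A check like this only confirms the driver sees a card; it does not guarantee enough video RAM for a given reconstruction.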
9.3. Example 1: A simple CT-Reconstruction#
In this example we will demonstrate how to simulate a 360-degree X-ray CT scan and reconstruct it using CIL. This is a continuation of example 3 from the X-ray imaging tutorial, which used the AMAZE mesh that was previously used for the HIVE analysis in tutorial 3.
Action
The RunFile
RunTutorials.py
should be set up as follows to run this simulation:

```python
Simulation='HIVE'
Project='Tutorials'
Parameters_Master='TrainingParameters_CIL_Ex1'
Parameters_Var=None

VirtualLab=VLSetup(
           Simulation,
           Project
           )

VirtualLab.Settings(
           Mode='Interactive',
           Launcher='Process',
           NbJobs=1
           )

VirtualLab.Parameters(
           Parameters_Master,
           Parameters_Var,
           RunCT_Scan=True,
           RunCT_Recon=True
           )

VirtualLab.CT_Scan()

VirtualLab.CT_Recon()
```

A copy of this run file can be found in
RunFiles/Tutorials/CT_Reconstruction/Task1_Run.py
The main change to note in the RunFile, other than the change of input file, is the addition of the call to VirtualLab.CT_Recon(). This is the method used to reconstruct the CT data.
Looking at the file Input/HIVE/Tutorials/TrainingParameters_CIL_Ex1.py you will notice that the only Namespace is GVXR. This is intentional, as CIL shares the GVXR Namespace with GVXR. The reason for this is simple convenience: CIL shares most of the same parameters as GVXR and, although confusing at first, this saves us doubling up on parameters.
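To illustrate the shared-Namespace idea, a minimal parameters file in this style might look like the sketch below. The parameter names are those listed later in this tutorial; the values are illustrative, not those used in TrainingParameters_CIL_Ex1.py:

```python
from types import SimpleNamespace as Namespace

# One GVXR Namespace, read by both CT_Scan (GVXR) and CT_Recon (CIL).
GVXR = Namespace()
GVXR.Name = 'AMAZE_360'                  # name of the output directory
GVXR.Beam_PosX, GVXR.Beam_PosY, GVXR.Beam_PosZ = 0, -250, 0
GVXR.Beam_Type = 'point'
GVXR.Detect_PosX, GVXR.Detect_PosY, GVXR.Detect_PosZ = 0, 80, 0
GVXR.Pix_X, GVXR.Pix_Y = 250, 250        # detector pixel counts
GVXR.Spacing_X, GVXR.Spacing_Y = 0.5, 0.5
GVXR.Model_PosX, GVXR.Model_PosY, GVXR.Model_PosZ = 0, 0, 0
GVXR.num_projections = 360
GVXR.angular_step = 1
GVXR.Recon_Method = 'FDK'                # the one CIL-only parameter
```

Because both methods read the same Namespace, the scan geometry used to generate the projections is guaranteed to match the geometry CIL uses to reconstruct them.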
Action
Launch VirtualLab using the following command:
VirtualLab -f RunFiles/RunTutorials.py
The raw X-ray tiff images generated by GVXR using the CT_Scan method can be found in Output/HIVE/Tutorials/GVXR-Images/AMAZE_360
. X-ray images of the component after being rotated by 90 and 180 degrees are shown in Fig. 9.1 and Fig. 9.2 respectively.
The resulting reconstruction can be found as tiff images in Output/HIVE/Tutorials/CIL_Images/AMAZE_360
, with each image representing a slice in Z.
Tiff images of the component at slices 70 and 120 are shown in Fig. 9.3 and Fig. 9.4 respectively.
Note
To view the reconstructed component, the external package ImageJ is required; this is not currently incorporated in VirtualLab.
9.4. Parameters used by CIL#
The following parameters are used by both CIL and GVXR:
GVXR.Name
GVXR.Beam_PosX/Y/Z
GVXR.Beam_Type
GVXR.Detect_PosX/Y/Z
GVXR.Spacing_X/Y
GVXR.Pix_X/Y
GVXR.Model_PosX/Y/Z
GVXR.Nikon_file
GVXR.num_projections
GVXR.angular_step
GVXR.image_format
GVXR.bitrate
Units
Helpfully, CIL is unit agnostic; that is, CIL does not actually care what units you use to define the setup. The only thing that matters is that you are consistent. As such, any definition of
GVXR.{OBJECT}_units
is entirely ignored by CIL, as it does not need to know what they are. Thus you can use any units you like (inches, furlongs, elephants) as long as they are consistent. That is, if you use mm for the beam position, you just need to ensure you use mm for all other cases, i.e. the model position, detector position and pixel spacing.
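As a small illustration of this consistency rule (values here are made up for the example), everything below is expressed in millimetres, and the unit label itself never needs to be stated for CIL:

```python
from types import SimpleNamespace as Namespace

GVXR = Namespace()
# All lengths in millimetres. CIL ignores any *_units definitions,
# so agreement between these values is all that matters.
GVXR.Beam_PosY = -250.0     # source, 250 mm from the rotation axis
GVXR.Detect_PosY = 80.0     # detector, 80 mm on the opposite side
GVXR.Spacing_X = 0.5        # detector pixel pitch, also in mm

# Source-to-detector distance follows directly, in the same units.
sdd = GVXR.Detect_PosY - GVXR.Beam_PosY
```

Had the pixel spacing been given in, say, inches while the positions were in mm, the reconstruction geometry would silently be wrong, since CIL has no way to detect the mismatch.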
Parameters that are unique to CIL
There is currently only one parameter that is unique to CIL, GVXR.Recon_Method,
which can be either “FBP” (filtered back projection, for parallel-beam data) or
“FDK” (Feldkamp-Davis-Kress, the cone-beam analogue of FBP). We will be using
the default, FDK, for all our examples.
All these parameters work in exactly the same manner as in GVXR and have already been explained in detail in the previous tutorial, so we won’t repeat the explanations here. However, the parameters that are relevant to CIL are listed in the appendix.
The only slight exception is that the default for GVXR.image_format
is a single multi-page
Tiff stack. If you would like individual Tiff images for each slice in Z, simply set
GVXR.image_format = 'Tiff'
.
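If you stick with the default multi-page Tiff stack, the Z slices can be read back programmatically. The sketch below assumes the Pillow imaging library is available; the stack filename is illustrative:

```python
from PIL import Image, ImageSequence

def load_slices(path):
    """Load every Z slice of a multi-page Tiff stack into a list of images."""
    with Image.open(path) as stack:
        # Copy each frame out of the lazily-loaded stack so the file
        # can be closed while the slices remain usable.
        return [frame.copy() for frame in ImageSequence.Iterator(stack)]

# Example (path is hypothetical):
#   slices = load_slices("Output/HIVE/Tutorials/CIL_Images/AMAZE_360.tif")
#   print(len(slices), slices[0].size)
```

This is only for quick programmatic inspection; for proper visual exploration of the reconstruction, ImageJ (mentioned above) remains the more practical tool.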