Open NicerNewerCar opened 2 years ago
@NicerNewerCar, the ground-truth example of MC2MC3 is not aligned. Can you provide the motion trial name and frame number, please?
> Can you provide motion trial name and frame number

This was frame 0 from the flx_ext_m3.tra file that you provided in the email from 10/27.
@amymmorton
Thanks. The 4x4 transformation outputs are specific to a rigid body. If the mc3 tracking is applied to the mc2mc3 body, the results will be incorrect. I can provide the mc2mc3 .tra.
You can rename this .txt to a .tra.
Thanks! I will update the graphs by the end of the day.
@NicerNewerCar Sorry Anthony. I think you'll need to use the mc2mc3 to seed the mc3 tracking, and then evaluate the mc3 against the mc3 ground truth. I was wondering why there are more data values in the BA plots for rad than there are for mc2mc3... If these output transforms are from the same motion trial (flx_ext), then there should be the same number of values (400).
The mc2mc3 values are incomplete and were only used as a step along the way.
Looks like @BardiyaAk also was able to export/save the NCC. flx_ext_rad.txt flx_ext_mc3.txt
@NicerNewerCar The NCC values were gathered using the connection-to-software functions (in our case, MATLAB). https://github.com/BrownBiomechanics/XGen_AutoscoperImageProcessing/blob/main/connection_to_software/getNCC.m
change to .tif
This is the input .tif stack from which the DRR will be rendered for wn00106.
Overview
We want a metric for measuring how accurately the rendered DRR volume matches the radiograph images.
Inputs
The inputs consist of n frames of a radiograph video (8-bit TIFF images) from two different camera views and m bone volumes (16-bit TIFF stacks). For each frame, a volume is manually aligned with the radiograph images, and then PSO (particle swarm optimization) with NCC (normalized cross-correlation) as the cost metric is performed to automatically refine the alignment of the volume with the radiographs. This process is repeated for all m volumes on each of the n frames.
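To make the cost metric concrete, here is a minimal sketch of normalized cross-correlation between a rendered DRR and a radiograph frame, assuming both are same-sized NumPy arrays. The function name and signature are illustrative only, not Autoscoper's actual API.

```python
import numpy as np

def ncc(drr: np.ndarray, radiograph: np.ndarray) -> float:
    """Normalized cross-correlation between two same-sized images.

    Returns a value in [-1, 1]; 1.0 indicates a perfect match,
    which is what the optimizer tries to drive the alignment toward.
    """
    a = drr.astype(np.float64).ravel()
    b = radiograph.astype(np.float64).ravel()
    # Subtract the means so the score is invariant to brightness offsets.
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:
        return 0.0  # one image is constant; correlation is undefined
    return float((a * b).sum() / denom)
```

Because the score is mean-subtracted and normalized, it is insensitive to global brightness and contrast differences between the DRR and the radiograph, which is why it is a common choice for this kind of registration.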
Outputs
The Autoscoper program has a "Save Tracking" button that saves the tracking results into a *.tra file, which contains the x, y, z, yaw, pitch, and roll values for all volumes over all frames. Making sure to select the xyzypr option when saving, we can parse this file into pandas DataFrames with this python script. We can then compare it to a ground-truth .tra to get an idea of the accuracy of our tracks.

Implementation
Loading of a configuration file (loading of radiographs and DRRs) takes place in the Trial class.
Loading of a saved trial takes place in the main window of the UI.
The tracking of volumes takes place in the Tracker class, which fits the volume to the radiographs and updates the positional data accordingly.
The saving of the tracking results (xyz roll pitch yaw data) takes place in the main window of the UI.
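As a rough sketch of how a saved xyzypr export might be read into pandas, assuming a comma-delimited .tra file with six columns (x, y, z, yaw, pitch, roll) per frame for a single volume. The `load_tra` helper and column layout are assumptions for illustration, not the referenced script; the actual column order should be checked against Autoscoper's output.

```python
import pandas as pd

# Hypothetical column layout for a single-volume xyzypr export.
TRA_COLUMNS = ["x", "y", "z", "yaw", "pitch", "roll"]

def load_tra(path: str) -> pd.DataFrame:
    """Parse a single-volume xyzypr .tra file into a DataFrame,
    one row per frame, columns named after the six pose values."""
    df = pd.read_csv(path, header=None, names=TRA_COLUMNS)
    df.index.name = "frame"
    return df
```

With both the tracked and ground-truth files loaded this way, per-frame differences are simple column arithmetic, e.g. `tracked["x"] - truth["x"]`.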
Initial Positioning
Dataset used: JOVE ankle data.
Note: The tibia is not pictured here because, even when seeded every other frame, the volume still failed to track properly.
Results
Each graph is a Bland-Altman plot, where the X-axis is the mean of the two methods and the Y-axis is the difference between the two methods for each point in the tracking data. Each plot represents one piece of positional data (i.e. one plot each for the X, Y, Z, roll, pitch, and yaw data over all frames). Each bone (calcaneus and talus) was tracked in both OpenCL and CUDA while being seeded with the ground-truth data every 5 frames. The results were exported from Autoscoper in xyzypr format and compared, using this script, to a ground-truth tracking result provided by @BardiyaAk and @amymmorton.
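The Bland-Altman coordinates described above reduce to a few lines of arithmetic. This is an illustrative sketch, not the comparison script referenced above; the `bland_altman` helper name and the 1.96-sigma limits of agreement are conventional choices, not taken from the source.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Per-point Bland-Altman coordinates plus bias and 95% limits.

    X-axis values: mean of the two methods at each frame.
    Y-axis values: difference (a - b) at each frame.
    """
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    mean = (a + b) / 2.0       # X-axis of the plot
    diff = a - b               # Y-axis of the plot
    bias = diff.mean()         # systematic offset between methods
    loa = 1.96 * diff.std(ddof=1)  # half-width of 95% limits of agreement
    return mean, diff, bias, (bias - loa, bias + loa)
```

One such tuple would be computed per pose component (X, Y, Z, roll, pitch, yaw), giving the six plots per trial described above.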
Trial 1
Trial 2
Trial 3
Trial 4
Registration Error
We took the XYZYPR data for each of the models and computed the normal vector of the plane it lies on. We then compared the two normals to get the angle between the two planes, and the distance between the planes, for every frame. All computation was done with this script.
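A minimal sketch of the plane-normal comparison, assuming each track is reduced to an (N, 3) point cloud and the best-fit plane is found via SVD of the centered points. The helper names are illustrative, not taken from the referenced script.

```python
import numpy as np

def plane_normal(points: np.ndarray) -> np.ndarray:
    """Unit normal of the best-fit plane through an (N, 3) point cloud."""
    centered = points - points.mean(axis=0)
    # The right-singular vector for the smallest singular value is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def angle_between(n1: np.ndarray, n2: np.ndarray) -> float:
    """Angle in degrees between two plane normals (sign-insensitive,
    since a plane's normal is only defined up to a flip)."""
    cos = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

The absolute value on the dot product makes the angle insensitive to which side of the plane the fitted normal happens to point toward.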
Corner Distance
This was suggested by Matt McCormick. We take a bounding box, align it to each plane, compute the Euclidean distance between corresponding corners, and take the average of those four distances. The computation was done using this script.
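The corner-distance metric could be sketched as follows, assuming the two poses are expressed as 4x4 rigid-body transforms applied to the same set of bounding-box corners. The function name and signature are hypothetical, not the referenced script.

```python
import numpy as np

def corner_distance(corners: np.ndarray, T1: np.ndarray, T2: np.ndarray) -> float:
    """Average Euclidean distance between corresponding box corners
    after applying two 4x4 rigid transforms.

    corners: (N, 3) array of bounding-box corner coordinates.
    """
    # Lift corners to homogeneous coordinates so the 4x4 transforms apply.
    homo = np.hstack([corners, np.ones((corners.shape[0], 1))])
    p1 = (T1 @ homo.T).T[:, :3]
    p2 = (T2 @ homo.T).T[:, :3]
    # Mean of the per-corner distances between the two poses.
    return float(np.linalg.norm(p1 - p2, axis=1).mean())
```

Unlike the plane-normal angle, this single scalar mixes rotational and translational disagreement into one distance, which makes it easy to plot per frame.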