With TEM tomography, you take a series of images of an object at high magnification, tilting the stage between exposures so you can gather projections from multiple angles. From there, you align the projections, and then you run some sort of lengthy reconstruction algorithm (direct Fourier inversion, weighted back-projection, etc.) to spit out an actual 3D model of the data.
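For anyone curious what the reconstruction step looks like in miniature, here's a sketch of plain (unweighted) back-projection for a single 2D slice, using only NumPy. The function name and the nearest-pixel-free interpolation scheme are my own choices, and real weighted back-projection would also apply a ramp filter to each projection first; this is just the smearing step.

```python
import numpy as np

def back_project(sinogram, angles_deg):
    """Reconstruct a 2D slice from 1D projections by simple
    (unweighted) back-projection.

    sinogram   -- array of shape (n_angles, n_detectors)
    angles_deg -- projection angle for each row, in degrees
    """
    n_angles, n_det = sinogram.shape
    c = (n_det - 1) / 2.0                       # center of the slice
    y, x = np.mgrid[0:n_det, 0:n_det] - c       # pixel coordinates
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # detector coordinate each pixel maps to at this tilt angle
        t = x * np.cos(theta) + y * np.sin(theta) + c
        # smear the 1D projection back across the slice
        recon += np.interp(t.ravel(), np.arange(n_det), proj).reshape(n_det, n_det)
    return recon / n_angles
```

Even this toy version makes the "lengthy" part obvious: every projection touches every pixel, so cost grows with (angles × pixels), before you even add filtering or alignment refinement.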
Since the Looking Glass wants images from multiple views, I tried building a quilt straight from the tilt series instead of doing a reconstruction, just to see how it would look. Turns out, it's pretty quick and fairly accurate! There's a slight toe-in effect, but being able to get a grasp of what the model looks like prior to reconstruction is pretty useful, especially since the quilt doesn't mind the slight pixel-alignment deviations that a reconstruction algorithm would. Obviously this won't replace doing a full reconstruction, since that's where a lot of the analysis occurs, but it's a nice way to get quick visualizations of data regions. I can post a video of this later if anyone is curious, but I can't right now since my phone is dead.
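The quilt assembly itself is just tiling the views into one big image. Here's a minimal sketch assuming grayscale views of equal size and the usual Looking Glass quilt convention (tiles filled left-to-right, bottom-to-top, with view 0 in the bottom-left); the 8×6 grid is an assumption and should match whatever quilt settings your display expects.

```python
import numpy as np

def make_quilt(views, cols=8, rows=6):
    """Tile equally sized grayscale views into a quilt image.

    Views fill the grid left-to-right, bottom-to-top, so views[0]
    lands in the bottom-left tile (the usual quilt ordering).
    """
    h, w = views[0].shape
    quilt = np.zeros((rows * h, cols * w), dtype=views[0].dtype)
    for i, v in enumerate(views[:rows * cols]):
        r, c = divmod(i, cols)
        y0 = (rows - 1 - r) * h      # bottom-to-top row order
        quilt[y0:y0 + h, c * w:c * w + w] = v
    return quilt
```

For a tilt series you'd feed in the aligned projections in angle order; the toe-in comes from the fact that the microscope gives you parallel projections rather than the converging camera views a quilt renderer would normally produce.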
I'm curious whether anyone in the medical field has tried messing around with CT scans in a similar way?