I am trying to analyze greyscale TIFF stacks, in which a given frame looks like the example image. I filter each frame (using a Gaussian blur), and then binarize it (using Otsu's method for the threshold).
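In Python, that preprocessing might look something like this (a sketch assuming skimage; frame and the sigma value are placeholders, not the exact settings I use):

from skimage import filters

image_smooth = filters.gaussian(frame, sigma=2)   # Gaussian blur; sigma=2 is a placeholder
threshold = filters.threshold_otsu(image_smooth)  # Otsu's method chooses the cutoff
image_binary = image_smooth > threshold           # white spots become foreground (True)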
MATLAB code, which works great:
image_conncomp = bwconncomp(image_binary);   % entire stack is held in image_binary
for i = 1:image_conncomp.NumObjects
    object_size = length(image_conncomp.PixelIdxList{i});   % voxel count of object i
end
Each white spot in the example image is picked up, and its volume (in pixels) is pretty accurately given by object_size.
Python code:
from skimage import measure

labels = measure.label(image_binary, background=1)   # same image_binary as above
propsa = measure.regionprops(labels)
for label in propsa:
    object_size = len(label.coords)   # voxel count of this labelled region
The Python code seems to work decently, except that most detected objects have an object_size of 1 to 200 pixels, while a couple have sizes of several thousand pixels.
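To make that concrete, this is roughly how the size distribution can be inspected (assuming the propsa list from above):

import numpy as np

sizes = np.array([len(label.coords) for label in propsa])
print(np.sort(sizes)[-10:])   # the few suspiciously large "objects"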
What are these functions doing differently? I would be happy to try another approach in Python to calculate object sizes, but I have struggled to find one. It would be great to have a Python version of this code, if I could find a good substitute for MATLAB's bwconncomp function.
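For comparison, my understanding is that a closer analogue of bwconncomp would label the white voxels as the foreground (skimage's default, background=0) and read each region's area; a minimal sketch, assuming image_binary holds the whole binary stack:

from skimage import measure

labels = measure.label(image_binary)   # default background=0: white voxels are the objects
props = measure.regionprops(labels)
for p in props:
    object_size = p.area               # voxel count of the connected component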