Algorithm help for separating 3D objects from a CT scan
I did a CT scan of a box of grapes, and I need to identify each individual grape bundle. The data is a 3-dimensional logical matrix; in a 3D view it looks something like the attached picture. I need to separate each individual grape bundle. I am quite new to image analysis; could someone please give me some hints on how to approach this problem?
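If the bundles don't touch each other in the binary volume, 3D connected-component labelling already separates them; if they do touch, a watershed on the distance transform (e.g. skimage.segmentation.watershed) is the usual next step. A minimal sketch with SciPy on a synthetic stand-in volume (replace `vol` with your own logical matrix):

```python
import numpy as np
from scipy import ndimage

# vol: the 3D logical matrix from the CT scan (here a small synthetic
# stand-in with two separated "bundles")
vol = np.zeros((20, 20, 20), dtype=bool)
vol[2:8, 2:8, 2:8] = True        # bundle 1
vol[12:18, 12:18, 12:18] = True  # bundle 2

# label connected foreground voxels; full 26-connectivity in 3D
structure = np.ones((3, 3, 3), dtype=bool)
labels, n_bundles = ndimage.label(vol, structure=structure)

# per-bundle voxel counts, useful for discarding tiny noise components
sizes = ndimage.sum(vol, labels, index=range(1, n_bundles + 1))
```

Each bundle then gets its own integer label in `labels`. If adjacent bundles merge into one component, compute `ndimage.distance_transform_edt(vol)` and run a marker-based watershed on its negative to split them.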
See also questions close to this topic

Selecting simultaneously not NaN values in multiple matrices
I have three matlab matrices A, B, and C with the same size:
A = [1:3; 4:6; 7:9];
B = [2 NaN 5; NaN NaN 7; 0 1 NaN];
C = [3 NaN 2; 1 NaN NaN; 1 NaN 5];
%>> A =          %>> B =            %>> C =
%  1   2   3     %   2  NaN   5     %   3  NaN   2
%  4   5   6     %  NaN NaN   7     %   1  NaN  NaN
%  7   8   9     %   0   1   NaN    %   1  NaN   5
I would like the three matrices to keep only values for which each of the 3 matrices does not have a NaN in that specific position. That is, I would like to obtain the following:
%>> A =            %>> B =            %>> C =
%   1  NaN   3     %   2  NaN   5     %   3  NaN   2
%  NaN NaN  NaN    %  NaN NaN  NaN    %  NaN NaN  NaN
%   7  NaN  NaN    %   0  NaN  NaN    %   1  NaN  NaN
In my attempt, I'm stacking the three matrices along the third dimension of a new matrix ABC with size 3x3x3 and then I'm using a for loop to make sure all the three matrices do not have NaN in that specific position.
ABC(:,:,1) = A;
ABC(:,:,2) = B;
ABC(:,:,3) = C;
for i = 1:size(A,1)
    for j = 1:size(A,2)
        count = squeeze(ABC(i,j,:));
        if sum(~isnan(count)) < size(ABC,3)
            A(i,j) = NaN;
            B(i,j) = NaN;
            C(i,j) = NaN;
        end
    end
end
This code works fine. However, since I have more than 30 matrices of a bigger size, I was wondering whether there is a more elegant solution to this problem.
Thank you for your help.
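For what it's worth, the double loop can be replaced by a single combined mask; in MATLAB the same idea is `mask = isnan(A)|isnan(B)|isnan(C);` followed by `A(mask) = NaN;` and so on. A sketch in NumPy reproducing the example above:

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)
B = np.array([[2, np.nan, 5], [np.nan, np.nan, 7], [0, 1, np.nan]])
C = np.array([[3, np.nan, 2], [1, np.nan, np.nan], [1, np.nan, 5]])

# one boolean mask: True wherever ANY of the matrices has a NaN
mask = np.isnan(A) | np.isnan(B) | np.isnan(C)
for M in (A, B, C):
    M[mask] = np.nan
```

For 30+ matrices, stack them and reduce once: `mask = np.isnan(np.stack(mats)).any(axis=0)` (in MATLAB, `any(isnan(ABC), 3)` on the 3D stack).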

Polynomial decomposition of a 1D signal on matlab
I have the signal of a pixel in an image. I need to decompose the signal with the polynomial method so I can extract the coefficients and rebuild a new image from the coefficients. I'm using Matlab.
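If "the polynomial method" means a least-squares polynomial fit, the decompose-and-rebuild step looks like this (sketched in Python; MATLAB's `polyfit`/`polyval` work the same way). The degree and the test signal are placeholders:

```python
import numpy as np

t = np.linspace(0, 1, 50)                # sample positions along the signal
signal = 2.0 - 1.5 * t + 0.5 * t**2      # stand-in for one pixel's 1D signal

degree = 2
coeffs = np.polyfit(t, signal, degree)   # least-squares polynomial coefficients
rebuilt = np.polyval(coeffs, t)          # reconstruct the signal from coefficients
```

Repeating this per pixel gives one coefficient vector per pixel; storing only `coeffs` and evaluating `polyval` reconstructs the (approximate) image.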

Reuse a result and use it in the next run
I'm writing code in MATLAB where I want to use the result of one run in the next run; is that possible?
What I mean is that first I run my program with some boundaries A. After running it I get some new boundaries B. I want to use these boundaries B instead of A the next time I run the program. Can the code change this in some way, or must I do it on my own?
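One common pattern is to persist the result to a file at the end of a run and load it at the start of the next; in MATLAB the same idea is `save('state.mat','B')` / `load('state.mat')`. A minimal sketch in Python; the file name and the placeholder computation are made up:

```python
import json
import os

STATE_FILE = "boundaries.json"   # hypothetical file name
default_boundaries = [0.0, 10.0]

# load boundaries from the previous run, if any
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as f:
        boundaries = json.load(f)
else:
    boundaries = default_boundaries

# ... run the program, which produces new boundaries B ...
new_boundaries = [b + 1.0 for b in boundaries]  # placeholder computation

# save for the next run
with open(STATE_FILE, "w") as f:
    json.dump(new_boundaries, f)
```

On the first run the defaults (boundaries A) are used; every later run starts from the previously saved boundaries B.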
How to remove noise (in the form of dots and lines) from the attached image using Java?
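Without the attached image it's hard to be specific, but isolated dots are classically removed with a median filter, and lines with morphological opening using a suitable structuring element. A sketch of the median-filter part on a synthetic image (shown in Python for brevity; ImageJ and OpenCV's Java bindings offer equivalent filters):

```python
import numpy as np
from scipy import ndimage

img = np.zeros((20, 20), dtype=np.uint8)
img[2:9, 2:9] = 255    # a solid region we want to keep
img[15, 15] = 255      # an isolated "dot" of noise

# a 3x3 median filter removes isolated dots while keeping solid regions
clean = ndimage.median_filter(img, size=3)
```

The dot vanishes (its 3x3 neighbourhood is mostly background) while the interior of the solid region is unchanged.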

Any popular library or example for removing image background (lasso/magic wand etc) in javascript?
I have found a couple of libraries for removing image backgrounds, but one is not open source, and another implements an approach where you need to mark the foreground/background (https://github.com/AKSHAYUBHAT/ImageSegmentation).
The thing is, there should be some popular JS libraries for raster image processing, with examples of tools such as scissors/lasso/magic wand etc., probably combined together?
What's important is that I need to save meta information about how the image was processed to the server, because I need to be able to continue editing the picture later =\
Are there any popular libraries/examples?
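Whatever library you pick, the core of a magic-wand tool is a tolerance-based flood fill, and the meta-information you need to replay an edit later is just the seed point and tolerance per stroke. A language-agnostic sketch of the operation (shown in Python; the seed and tolerance values are illustrative):

```python
import numpy as np
from scipy import ndimage

def magic_wand(img, seed, tol):
    """Select the connected region around `seed` whose gray values are
    within `tol` of the seed value (the core of a magic-wand tool)."""
    similar = np.abs(img.astype(float) - float(img[seed])) <= tol
    labels, _ = ndimage.label(similar)
    return labels == labels[seed]  # keep only the component containing the seed

img = np.zeros((10, 10))
img[1:4, 1:4] = 200    # region A
img[6:9, 6:9] = 200    # region B: same color, but not connected to A
sel = magic_wand(img, (2, 2), tol=10)
```

Only region A is selected, because region B is not connected to the seed. Saving `(seed, tol)` pairs to the server lets you re-run the selections and continue editing the picture later.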

How to convert any image into a seamless image in Unity?
I am working on a project in Unity3D where I need to capture a photo using the camera, crop the captured image, make it seamless, and then use it in the scene. I managed the capture-and-use-in-scene part, but I have no idea how to convert the captured image into a seamless image. I was thinking I could create a service using PHP where I pass a Texture2D and get an image in return.
like on this website : https://www.imgonline.com.ua/eng/makeseamlesstexture.php
Any idea how I can get started?
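One simple way to make an arbitrary image tileable, with no external service, is mirror-tiling: concatenate the image with flipped copies so opposite edges match exactly. A sketch of the idea (in Python; in Unity you can do the same pixel manipulation with Texture2D.GetPixels/SetPixels):

```python
import numpy as np

def make_seamless(img):
    """Mirror-tile an image so it tiles without visible seams: build a
    2x-size texture whose opposite edges are identical when wrapped."""
    top = np.hstack([img, np.fliplr(img)])
    return np.vstack([top, np.flipud(top)])

img = np.arange(12, dtype=float).reshape(3, 4)  # stand-in for the captured photo
tex = make_seamless(img)
```

Mirror-tiling can look obviously symmetric on photos; an alternative is offset-and-blend (wrap the image by half its size with `np.roll` and feather the resulting cross-shaped seam), which appears to be roughly what tools like the linked site do.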

PnP for estimating 360 camera pose from an equirectangular image
I have an equirectangular image (created from a 360 camera) and several 2D-3D correspondences, and I want to estimate the correct camera position and rotation.
Preferably using OpenCV
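Since cv2.solvePnP assumes a pinhole model, a common approach for equirectangular images is to first convert each 2D observation into a unit bearing vector, then estimate the pose by minimizing the angular error between those bearings and the rotated/translated 3D points (e.g. with a nonlinear least-squares solver). A sketch of the pixel-to-bearing step; the longitude/latitude convention is an assumption and must match how your panorama was stitched:

```python
import numpy as np

def equirect_pixel_to_ray(u, v, width, height):
    """Convert an equirectangular pixel (u, v) to a unit bearing vector.
    Convention assumed here: u spans longitude [-pi, pi) left to right,
    v spans latitude [pi/2, -pi/2] top to bottom, +z is the image center."""
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

ray = equirect_pixel_to_ray(1000, 500, 2000, 1000)  # image center
```

With the correspondences expressed as bearings, you can feed the residual `angle(ray_i, R @ X_i + t)` to scipy.optimize.least_squares, or use a library with central non-pinhole absolute-pose solvers (e.g. OpenGV).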

Simple MATLAB code to use FPDW (Fastest Pedestrian Detector in the West) with the Caltech dataset
I need help with my project about a pedestrian detector using this dataset. I'm currently interested in FPDW as an example, but I'm having a problem understanding it. Can someone please explain the basic concept to me using simple MATLAB syntax? Thank you.

OpenCV Circular Buffer usage in a realtime multithreaded application
This is the producer-consumer problem. I have used a ring buffer in Python to achieve the functionality I wanted, but I feel this is inefficient. Is there a way to use the buffer directly in OpenCV?
The writer function enqueues into the ring buffer and the reader function dequeues from it. This requires a separate process to handle the buffer operations, which I think could be avoided if I used the OpenCV functions right.
From OpenCV's source code I see we can set CV_CAP_PROP_BUFFERSIZE. I just don't know how to achieve this without running a separate thread or process that queues the images in a separate ring buffer.
Any help would be appreciated :)
import multiprocessing
import numpy
import numpy.matlib
import ringbuffer
import cv2

def writer(ring):
    cam = cv2.VideoCapture(0)
    ret = cam.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
    ret = cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
    for i in range(100):  # while loop
        ret, m = cam.read()
        print(m.shape)
        x = numpy.ctypeslib.as_ctypes(m)
        # print(x.shape)
        try:
            ring.try_write(x)
        except ringbuffer.WaitingForReaderError:
            print('Reader is too slow, dropping %r' % x)
            continue
        if i and i % 100 == 0:
            print('Wrote %d so far' % i)
    ring.writer_done()
    print('Writer is done')

def reader(ring, pointer, ind):
    print("in reader")
    while True:
        try:
            data = ring.blocking_read(pointer)
        except ringbuffer.WriterFinishedError:
            # print("Error")
            return
        x = numpy.frombuffer(data)
        x.shape = (240, 320, 3)
        m = numpy.matlib.asmatrix(x)
        print("read", m.shape)
        # cv2.imshow("read %d" % (ind), m)
    print('Reader %r is done' % pointer)

def main():
    ring = ringbuffer.RingBuffer(slot_bytes=1000000, slot_count=50)
    ring.new_writer()
    processes = [
        multiprocessing.Process(target=writer, args=(ring,)),
    ]
    for i in range(4):
        processes.append(multiprocessing.Process(
            target=reader, args=(ring, ring.new_reader(), i)))
    for p in processes:
        p.start()
    for p in processes:
        p.join(timeout=50)
        assert not p.is_alive()
        assert p.exitcode == 0

if __name__ == '__main__':
    main()
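As far as I know, OpenCV's public API doesn't expose its internal capture queue beyond CAP_PROP_BUFFERSIZE, which is honored only by some backends, so the buffering still has to live in your code. A separate process isn't needed, though: within one process, `collections.deque(maxlen=N)` behaves as a drop-oldest ring buffer, and a single condition variable replaces the ringbuffer module. A sketch with integers standing in for frames from `cam.read()`:

```python
import threading
from collections import deque

class FrameRing:
    """Drop-oldest ring buffer for a producer-consumer pair.
    deque(maxlen=N) discards the oldest item automatically when full."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)
        self.cv = threading.Condition()
        self.done = False

    def put(self, frame):
        with self.cv:
            self.buf.append(frame)   # silently drops the oldest when full
            self.cv.notify()

    def get(self):
        with self.cv:
            while not self.buf and not self.done:
                self.cv.wait()
            return self.buf.popleft() if self.buf else None  # None == finished

    def close(self):
        with self.cv:
            self.done = True
            self.cv.notify_all()

ring = FrameRing(capacity=5)
frames_read = []

def producer():
    for i in range(20):   # stand-in for frames from cam.read()
        ring.put(i)
    ring.close()

def consumer():
    while (f := ring.get()) is not None:
        frames_read.append(f)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
```

If the consumer lags, old frames are dropped (like your `try_write` path) instead of blocking the capture loop, and everything stays in one process so no ctypes conversion is needed.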

How to combine split runs of spectral clustering for a huge affinity matrix
Leading up to the question
I have a 2D complex-valued image with a short time series of values. I want to cluster similar pixels / segment the image. My first attempt was k-means, but that really clustered according to the means (there is a distinction in mean values, especially compared to surrounding voxels, but I suspect the temporal information matters more). My second attempt was ICA, looking at the k components with the largest magnitude; that did successfully identify certain regions in my image as being different, but did not identify the group of pixels I was interested in (visually it is not hard to recognize them, but they are small).
Current situation
Because my first two tries did not work out, I looked around with Google, and it seemed spectral clustering might be appropriate. But I have some serious issues when using the method, mostly to do with limited available memory. I then thought: since I have so many pixels, I can just apply spectral clustering to separate slabs of the data.
Question
My question has 2 parts:
1. How do I combine the results for the separate slabs? The eigenvectors are different and the cluster numbers are different. The result looks like it worked within the separate slabs.
2. No distance / affinity between pixels in separate slabs is taken into account. Can I make 'slabs between slabs'? For those slabs L and A are not symmetric, and I have no clue how to perform the method then.
(3. Is there a similar or better method that does not need so much memory? Computation time is also borderline acceptable and easily explodes.)
Perhaps I can somehow compare / merge all eigenvectors at the end?
I did not think my data was that large. It is just a short time series of a complex image with a pretty small number of pixels. Compared to HD or 4K video my data is tiny. Since all the real-world machines around us process so much more data, and my computer is not that bad, I suspect it should easily be able to cluster these pixels.
% generate data
tempdisk = strel('disk', 922/2);
tempdisk = double(repmat((1+sqrt(-1)).*tempdisk.Neighborhood, [1 1 15]));
tempnoise = (rand(921,921,15) + sqrt(-1).*rand(921,921,15))./10;
tempim1 = double(imresize(mean(imread('cameraman.tif'),3), [921,921]));
tempim1 = repmat(tempim1./max(tempim1(:)), [1 1 15]);
tempim2 = double(rgb2hsv(imread('fabric.png')));
tempim2 = imresize(tempim2(:,:,1), [921,921]);
tempim2 = repmat(tempim2./max(tempim2(:)), [1 1 15]);
sin1 = repmat(permute(sin(2.*pi.*(0:14)./15), [1 3 2]), [921 921 1]);
cdat = (sin1.*tempim1.*exp(sqrt(-1).*2.*pi.*sin1.*(tempim2.^2)).*tempdisk + tempnoise);

% this is what the mean data looks like
meanm = mean(abs(cdat), 3);
meanp = angle(mean(cdat, 3));
figure;
subplot(1,2,1); imshow(meanm, []); title('mean magnitude');
subplot(1,2,2); imshow(meanp, []); title('mean phase')
% get all pixel vectors in a single matrix
cvect = reshape(permute(cdat, [3,1,2]), [15, 921*921]);
% kmeans and eigs don't accept complex, so convert to real here?
cvectT = [real(cvect); imag(cvect)]';

% let's say 10000 by 10000 matrices are still ok
nvox = 10000;
nslab = floor(length(cvectT)/nvox);
nrest = rem(length(cvectT), nvox);

% spectral clustering according to http://ai.stanford.edu/~ang/papers/nips01spectral.pdf
keig = 50; % how many eigenvectors needed? more is better
affinity_sigma = 1; % i don't understand how to calculate this either
tic
for islab = (nslab+1):-1:1
    islab/nslab
    toc
    if islab > nslab
        voxrange = (1:nrest) + ((islab-1)*nvox);
    else
        voxrange = (1:nvox) + ((islab-1)*nvox);
    end
    Aff = exp( -squareform(pdist(cvectT(voxrange,:))).^2 ./ (2*affinity_sigma^2) ); % affinity matrix
    Dsq = sparse(size(Aff,1), size(Aff,2)); % degree matrix
    for idiag = 1:size(Aff,1)
        Dsq(idiag,idiag) = sum(Aff(idiag,:))^(-1/2);
    end
    Lap = Dsq * Aff * Dsq; % normalize affinity matrix
    [eigVectors(voxrange,1:keig), eigValues] = eigs(Lap, keig); % eigenvectors of normalized aff mat
    normEigVectors(voxrange,1:keig) = eigVectors(voxrange,1:keig)./repmat(sqrt(sum(abs(eigVectors(voxrange,1:keig)).^2,2)), [1 keig]); % normalize rows of eigenvectors, normr only works on real numbers
    [idx,C,sumd,D] = kmeans([real(normEigVectors(voxrange,1:keig)), imag(normEigVectors(voxrange,1:keig))], 5); % kmeans on normalized eigenvectors
    idxval(voxrange) = idx;
end
idxim = reshape(idxval, [921, 921]);
figure; imshow(idxim, [])
toc
The result looks like the method is working nicely; now it's just the slabs that are the issue, and that there are no cross-slab clusters. This took my computer about 15 minutes. I reduced the number of eigenvalues and the image size for this example so it runs in an acceptable amount of time; I think that illustrates part of my problem.
Someone here also suggests clustering slabs first and then combining them; he then says that 'at the end you will have the problem of recombining them and this problem can be solved easily'. The bits designated as 'easy' in explanations are hardly ever easy, of course. He links to this paper, but that method does not process all the data in slabs. Rather, it excludes vectors that are not close to a principal component.
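One way to avoid slabbing entirely is to make the affinity matrix sparse: with a k-nearest-neighbour graph the memory cost drops from O(n^2) to O(nk), and sparse eigensolvers handle it directly (in MATLAB, `knnsearch` plus `sparse` plus `eigs` gives the same construction). A small self-contained sketch of the Ng-Jordan-Weiss pipeline on a sparse graph, in Python with SciPy; the data, k, and sigma are placeholders:

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import eigsh
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
# stand-in data: two well-separated groups of pixel feature vectors
X = np.vstack([rng.normal(0, 0.1, (30, 4)), rng.normal(3, 0.1, (30, 4))])

# sparse k-nearest-neighbour affinity: O(n*k) memory instead of O(n^2)
k, sigma = 8, 1.0
dist, idx = cKDTree(X).query(X, k=k + 1)       # column 0 is each point itself
rows = np.repeat(np.arange(len(X)), k)
cols = idx[:, 1:].ravel()
vals = np.exp(-dist[:, 1:].ravel()**2 / (2 * sigma**2))
A = csr_matrix((vals, (rows, cols)), shape=(len(X), len(X)))
A = A.maximum(A.T)                             # symmetrize

# normalized affinity D^-1/2 A D^-1/2 (Ng, Jordan & Weiss)
d = np.asarray(A.sum(axis=1)).ravel()
L = diags(d ** -0.5) @ A @ diags(d ** -0.5)

# top-k eigenvectors of the sparse matrix, row-normalized, then k-means
_, vecs = eigsh(L, k=2, which='LA')
Y = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
_, labels = kmeans2(Y, 2, minit='++', seed=0)
```

Because the eigenproblem is solved once on the whole (sparse) graph, there is nothing to recombine afterwards, which sidesteps both question 1 and question 2.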

Image segmentation of a lost Visio file
I have some images that started out as Visio diagrams, but were only saved as JPEGs. Each image has a white background with clipart computers and networking equipment scattered around. There are also connecting lines and labels, but I can algorithmically erase them. An example would be the main image here, excluding the window decorations and tool palettes: https://www.topbestalternatives.com/wpcontent/uploads/2016/02/draw810x406.jpg
I would like to extract the subimages and save them as distinct files, along with each one's original bounding box, so I can approximately recreate the original Visio diagram. Ideally this would be done by a standalone program or a Python script. I wrote a Python script using PIL that uses the histogram function to find vertical and horizontal bands of white/line color and splits the image into smaller rectangles. It mostly works, but treats, for example, a circle of an odd number of devices as a single subimage.
I've spent a fair amount of time researching this, but neither Stack Overflow nor Wikipedia have been much help. Any ideas? Thanks!
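Connected-component labelling may handle the cases the histogram-splitting approach misses (such as devices arranged in a circle), since it groups by adjacency rather than by whole rows/columns of whitespace. A sketch with scipy.ndimage on a synthetic stand-in image; the threshold value is a guess, and it assumes the connecting lines have been erased first, as you described:

```python
import numpy as np
from scipy import ndimage

# stand-in for the JPEG: white background with two "clipart" shapes
img = np.full((40, 60), 255, dtype=np.uint8)
img[5:15, 5:20] = 0    # shape 1
img[22:35, 30:55] = 0  # shape 2

# anything darker than near-white counts as foreground
mask = img < 240

# label connected foreground regions and get each one's bounding box
labels, n = ndimage.label(mask)
boxes = ndimage.find_objects(labels)  # list of (row_slice, col_slice)

crops = [img[b] for b in boxes]       # sub-images to save as separate files
```

The `boxes` slices give you the original bounding boxes directly, and each crop can be written out with PIL's `Image.fromarray(...).save(...)`. For JPEG artifacts, dilating `mask` slightly before labelling merges shapes broken by compression noise.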

How to apply 2D active contours (snakes) in 3D image?
I am trying to apply a snake method to a 3D image with no success. To my understanding, MATLAB's activecontour can be applied to 3D images, but it seems to fit a surface (balloon) to an object instead of a single contour (snake). Or at least it seems to repeat the seed contour slice by slice, instead of fitting a single contour to a 3D object, because the end result is the whole 3D image segmented. I am looking for an active contour that has 3D awareness, as suggested by the following figure:
If the seed contour is the blue contour (drawn slightly in perspective so you can see it is a contour), I want the final snake to be the green one, as it fits the closest and easiest edge, instead of the red one, which is what would happen if I simply applied a 2D snake to a slice of the 3D image coplanar with the seed contour. Thanks for your time!