Algorithm help for separating 3D objects in a CT scan
I did a CT scan of a box of grapes, and I need to identify each individual grape bundle. The data is a 3-dimensional logical matrix; in a 3D view it looks something like the attached picture. I need to separate each individual grape bundle. I am quite new to image analysis, so could someone please give me some hints on how to approach this problem?
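A common first step for a binary 3D volume like this is connected-component labelling (MATLAB's `bwconncomp`/`bwlabeln`, or `scipy.ndimage.label` in Python), followed by a watershed transform if bundles touch each other. A minimal sketch on a toy volume, not the asker's data:

```python
import numpy as np
from scipy import ndimage

# Hypothetical 3D logical volume: two separate "bundles" in a 10x10x10 grid
vol = np.zeros((10, 10, 10), dtype=bool)
vol[1:3, 1:3, 1:3] = True   # first blob
vol[6:9, 6:9, 6:9] = True   # second blob

# Label 26-connected components; each label is one candidate bundle
structure = np.ones((3, 3, 3), dtype=bool)
labels, n_bundles = ndimage.label(vol, structure=structure)
```

If bundles are in contact, labelling will merge them into one component; the usual remedy is a watershed seeded from the distance transform (`scipy.ndimage.distance_transform_edt` plus `skimage.segmentation.watershed`, or `watershed(-bwdist(~bw))` in MATLAB).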
See also questions close to this topic
Curve Fitting Second derivative Gaussian
I have the following two columns of data, 'X' and 'Y'. I need to fit them with the 2nd derivative of a Gaussian (Mexican hat). The fitting procedure should also output the parameters in a cell array for the different spectra. I have huge amounts of data (which also contain NaN), but it is important for me to use exactly the same FWHM for all fits, so the fitting procedure must support this constraint. I would like an expert opinion on an efficient solution with very little computing time. For testing on a larger dataset, just repeat both columns over a bigger range. (Available: nlinfit.) Thank you very much!

    X      Y
    9     -0.2047
    10    -0.2014
    14    -0.2944
    22    -0.4893
    27    -0.5433
    32    -0.47
    37    -0.2516
    56     1.1604
    63     1.4507
    71     1.1809
    91    -0.3434
    99    -0.7094
    102   -0.7002
    106   -0.5832
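The question asks for MATLAB's nlinfit, but the shared-width constraint is easiest to see in a sketch: since FWHM is proportional to sigma for this peak shape, fixing sigma fixes the FWHM across all spectra. A Python analogue with scipy.optimize.curve_fit (the shared sigma value and starting guesses are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def ricker(x, amp, mu, sigma):
    # Second derivative of a Gaussian (Mexican hat), up to sign and scale
    t = (x - mu) / sigma
    return amp * (1.0 - t**2) * np.exp(-t**2 / 2.0)

# Shared width: FWHM ~ sigma, so fixing sigma fixes the FWHM for every fit
SIGMA_SHARED = 15.0

def fit_one_spectrum(x, y):
    ok = ~np.isnan(y)  # drop NaN samples before fitting
    popt, _ = curve_fit(
        lambda x, amp, mu: ricker(x, amp, mu, SIGMA_SHARED),
        x[ok], y[ok],
        p0=[1.0, x[ok][np.argmax(y[ok])]])
    return popt  # [amp, mu]; collect one of these per spectrum
```

In MATLAB the same idea works with nlinfit by baking the fixed sigma into the model function handle and looping over spectra, storing each parameter vector in a cell array.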
For loop storing values in MATLAB
I have asked a similar question before; see my earlier question.
I am storing the results from a for loop but this time my for loop numbers do not increase by one each time.
    %%
    for q = [25,50,100,250,500,5000]
        ActualTable(:,q) = ActualValues;
    end
As you will see, this code runs, but the matrix ActualTable contains large blocks of zeros: it allocates every column index from 25 up to 5000 and only inserts my values in columns 25, 50, 100, etc., with all other columns containing zeros. I would just like to keep the columns that are non-zero.
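The usual fix, in any language, is to index by loop position rather than by the loop value. A NumPy analogue of the MATLAB loop, with placeholder data standing in for ActualValues:

```python
import numpy as np

qs = [25, 50, 100, 250, 500, 5000]
actual_values = np.arange(1, 5)  # placeholder for the real column of results

# Index by loop position i, not by q, so the table stays dense (no zero columns)
table = np.zeros((actual_values.size, len(qs)))
for i, q in enumerate(qs):
    table[:, i] = actual_values  # real code would compute the column from q
```

In MATLAB the same idea is `ActualTable(:, i) = ActualValues` with `i` running over `1:numel(qs)`, or, after the fact, keeping only non-zero columns with `ActualTable(:, any(ActualTable, 1))`.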
Building a table with specific data
I need some help with a task that I have:
I have 4 vectors with data: 3 of them are with dates and the 4th one is with overdue days, something like this:
    dateAdded    dueDate     datePublished    overdue
    02/11/18     02/11/18    03/11/18         1
    03/11/18     04/11/18    11/11/18         7
    03/11/18     04/11/18    04/12/18         30
    04/11/18     05/11/18    ongoing          overdue up to today
Can you give me some tips on how to create a table with the overdue days for each month of the year, considering that when an overdue period spans the transition from one month to the next, I have to count the overdue days in both months? Also, when datePublished hasn't arrived yet, I have to count the overdue days for each month passed up to today.
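One way to handle the month-splitting is to walk the overdue interval day by day and bucket each day into its (year, month). A sketch with Python's datetime (the MATLAB equivalent would use datetime arrays and month/year extraction):

```python
from datetime import date, timedelta
from collections import defaultdict

def overdue_per_month(due, published=None):
    """Count overdue days in each (year, month) between dueDate and
    datePublished; if still unpublished, count up to today."""
    end = published if published is not None else date.today()
    counts = defaultdict(int)
    d = due
    while d < end:
        counts[(d.year, d.month)] += 1  # each overdue day lands in its month
        d += timedelta(days=1)
    return dict(counts)
```

For the 30-day row in the example (due 04/11/18, published 04/12/18), this splits the overdue days into 27 for November and 3 for December, which is what "count the overdue to both months" requires.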
How to set uniform brightness and contrast for different images?
I have a lot of images, exactly 360, with different brightness. Can you help me set the brightness so that all of the images have the same brightness? I'm using MATLAB for image processing.
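A simple approach is to rescale every image so its mean intensity matches a common target; MATLAB's `imhistmatch` (full histogram matching against a reference image) is a stronger alternative when contrast also varies. A NumPy sketch of the mean-matching idea, with the target value assumed:

```python
import numpy as np

def match_mean_brightness(img, target_mean=100.0):
    """Scale pixel intensities so the image mean equals target_mean."""
    img = img.astype(np.float64)
    scaled = img * (target_mean / img.mean())
    return np.clip(scaled, 0, 255).astype(np.uint8)

# For a stack of 360 images, apply this with the same target_mean for all,
# e.g. the mean brightness of a chosen reference image.
```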
Data correction techniques for shuttlecock prediction/detection
Thanks in advance for taking the time out to help me with this question, I have been tackling this issue for a while now.
So here's my issue: I have been working on a project using Python OpenCV to detect and predict the path of a shuttlecock using a ZED camera (stereo cameras). Link: https://www.stereolabs.com/zed/
The techniques I am currently using are color detection and motion detection. While most of the data I get is pretty accurate, some of the data points seem to be going way out of bounds.
Below is a sample of the data points and a graph which shows the data points being plotted.
Can someone help me with a way to correct the points which seem to be going away from the detected values?
I am using Extended Kalman filter for trajectory prediction, after referring to some research papers related to the same.
Can someone please let me know if the path chosen by me is on the right track or if I am missing something?
I also read somewhere about using AI blocks, but I'm not sure what I would do with them; any pointers on that would be good too. :-)
Picture of the 3D graph (detected points): Detected Points
Shuttlecock trajectory (orange blurred line): How the actual path should look
Data: X, Y, Z coordinates
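The EKF is a reasonable choice for the trajectory itself; for the out-of-bounds detections, a common pre-filtering step is distance gating: drop any measurement that jumps implausibly far from the last accepted point before it ever reaches the filter. A minimal sketch (the jump threshold is an assumed value to tune for the camera's frame rate and units):

```python
import numpy as np

def reject_outliers(points, max_jump=1.0):
    """Drop detections that jump more than max_jump from the previous
    accepted point -- a simple gate before feeding the EKF."""
    kept = [points[0]]
    for p in points[1:]:
        if np.linalg.norm(np.asarray(p) - np.asarray(kept[-1])) <= max_jump:
            kept.append(p)
    return kept
```

A more principled variant uses the EKF's own innovation covariance (a chi-squared gate on the Mahalanobis distance of each measurement), which adapts the threshold to the filter's current uncertainty.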
Filling a polygon obtained after template matching in opencv
I am using the KAZE detector to find a template in a main image and draw a polygon around it. Now I want to fill this polygon area with black, i.e. mask it out. I am using the code below:

    import cv2
    import numpy as np
    from matplotlib import pyplot as plt

    MIN_MATCH_COUNT = 10  # assumed; the snippet as posted did not define it

    img1 = cv2.imread('C:/data/desktop1/imageProcessing/lenshine.PNG', 0)        # queryImage
    img2 = cv2.imread('C:/data/desktop1/imageProcessing/spects_cleaner.jpg', 0)  # trainImage

    # Initiate KAZE detector
    kaze = cv2.KAZE_create()

    # find the keypoints and descriptors with KAZE
    kp1, des1 = kaze.detectAndCompute(img1, None)
    kp2, des2 = kaze.detectAndCompute(img2, None)

    FLANN_INDEX_KDTREE = 0
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)

    # store all the good matches as per Lowe's ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.7 * n.distance:
            good.append(m)

    if len(good) > MIN_MATCH_COUNT:
        src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
        matchesMask = mask.ravel().tolist()
        h, w = img1.shape
        pts = np.float32([[0, 0], [0, h-1], [w-1, h-1], [w-1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        img2 = cv2.polylines(img2, [np.int32(dst)], True, 0, 10, cv2.LINE_AA)
        plt.imshow(img2, 'gray'), plt.show()
    else:
        print("Not enough matches are found - %d/%d" % (len(good), MIN_MATCH_COUNT))
        matchesMask = None
Sample images used are:
Please help me fill this polygon with a solid color, or help me draw a rectangle instead of the polygon, with the fewest code changes possible.
What are the labels in ImageNet Dataset (ILSVRC2012)
I have downloaded the ImageNet dataset (ILSVRC2012) from Academic Torrents. There are two archives, validation and training. When I extract the training set, there are about 1000 folders, which as far as I understand correspond to the classes. My question is: in order to build a classifier, I need both the images and the label for each image. Where are the labels for each of the images in this dataset?
InvalidArgumentError for custom loss function in keras
I am trying to use the VGG Face model to compute and compare feature vectors in a custom-built loss function, but every time I run the fit function I get the following error:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'conv2d_1_target' with dtype float and shape [?,?,?,?] [[Node: conv2d_1_target = Placeholder[dtype=DT_FLOAT, shape=[?,?,?,?], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
and this is the code for my custom loss:

    def content_loss(yTrue, yPred):
        layer_name = ['conv1_1', 'conv1_2', 'conv2_1', 'conv2_2', 'conv3_1',
                      'conv3_2', 'conv3_3', 'conv4_1', 'conv4_2', 'conv4_3',
                      'conv5_1', 'conv5_2', 'conv5_2']
        yTrue = yTrue.eval(session=K.get_session())
        yPred = yPred.eval(session=K.get_session())
        yTrue = np.reshape(yTrue, [batch_size, 128, 128, 3])
        yPred = np.reshape(yPred, [batch_size, 128, 128, 3])
        for i in range(len(layer_name)):
            out = vgg_model.get_layer(layer_name[i]).output
            yTrue_features = Model(vgg_model.input, out).predict(yTrue, steps=batch_size)
            yTrue_features = np.reshape(yTrue_features,
                                        [yTrue_features.shape, yTrue_features.shape, yTrue_features.shape])
            yPred_features = Model(vgg_model.input, out).predict(yPred, steps=batch_size)
            yPred_features = np.reshape(yPred_features,
                                        [yPred_features.shape, yPred_features.shape, yPred_features.shape])
            if i == 0:
                loss = tf.losses.mean_squared_error(yTrue_features, yPred_features)
            else:
                loss = loss + tf.losses.mean_squared_error(yTrue_features, yPred_features)
        return loss
I have been banging my head against this issue and can't seem to figure it out; any assistance is appreciated.
Show Brisk keypoints with less keypoints in Python
I have the following code in python
    import cv2
    import numpy as np

    def save_keypoints(image_path, type_image):
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        kp, descriptors = cv2.BRISK_create(10).detectAndCompute(gray, None)
        mg = cv2.drawKeypoints(gray, kp, None,
                               flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
        cv2.imwrite('brisk_keypoints-' + type_image + '.jpg', mg)

    if __name__ == "__main__":
        save_keypoints("original.bmp", "original")
        save_keypoints("fake600.bmp", "fake600")
        save_keypoints("fake1200.bmp", "fake1200")
        save_keypoints("fake2400.bmp", "fake2400")
Basically, the code saves an image with the detected BRISK keypoints drawn on it. However, here are the results of applying this code to four images:
Although the images are different (I can easily discriminate them using these BRISK descriptors in a bag-of-visual-words approach), the keypoints detected in all four images look visually the same, or maybe the high number of concentric circles is confusing the viewer. How can I reduce the number of keypoints shown so that I can see how these images differ through their descriptors?
How do I change flag from GPU to CPU while loading freeze model (pb)?
I have downloaded a frozen model from the TensorFlow repository and I am trying to load it on a system which does not have a GPU. I am getting the following error:
Cannot assign a device for operation 'init': Operation was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]
Is there any way to change the device assignment without training and freezing the model again?
Missing @matReader in imageDatastore function of MATLAB
I'm a beginner in deep learning and I'm using U-Net as my network structure. Because of the characteristics of my data, I need to use MAT files as my inputs (the data has 2 channels). I also used imageDatastore to build the input images.
I considered my case similar to hyperspectral imaging, so I followed this web page as an example: https://www.mathworks.com/help/images/multispectral-semantic-segmentation-using-deep-learning.html
But in this example, they used
imds = imageDatastore('train_data.mat','FileExtensions','.mat','ReadFcn',@matReader);
I set the same option, but I got this error code:
Error using imageDatastore (line 116)
Function matReader does not exist.
I also tried to replace @matReader with @load, but then I got this error:
Error using trainNetwork (line 150)
Conversion to single from struct is not possible.
Error using cast
Conversion to single from struct is not possible.
How can I find the matReader function and solve this problem? Many thanks. :)
U-net low contrast test images, predict output is grey box
I am running the U-Net from https://github.com/zhixuhao/unet, but the predicted images are all grey. I also get a "low contrast image" warning for my test data. Has anyone had or resolved this problem?
I am training with 50 ultrasound images, augmented to around 2000-3000, for 5 epochs with 300 steps per epoch and a batch size of 2.
Many thanks in advance, Helena
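A uniformly grey prediction usually means the sigmoid outputs are all hovering around one value (commonly from too little training or unnormalised inputs); the "low contrast" message is just a warning emitted by the image writer when it is handed near-constant data. As a quick diagnostic, stretching the prediction to the full 0-255 range before saving makes whatever structure exists visible. A minimal sketch:

```python
import numpy as np

def to_uint8(pred):
    """Stretch a near-constant prediction to the full 0-255 range
    before saving, so any structure in it becomes visible."""
    lo, hi = pred.min(), pred.max()
    if hi - lo < 1e-8:
        return np.zeros_like(pred, dtype=np.uint8)  # truly constant output
    return ((pred - lo) / (hi - lo) * 255).astype(np.uint8)
```

If the stretched image still shows no anatomy, the network genuinely hasn't learned anything yet, and the training setup (input scaling to [0, 1], label format, learning rate) is the place to look.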