How to detect an object using a Gabor filter?
I want to apply a Gabor filter to detect the vehicle shown in the image. Here is my code:
clear all
close all
clc
A=imread('image4.jpg'); %read image
A = imresize(A,0.25); %resize image by 25% to inc. speed
Agray=rgb2gray(A); %convert to gray to inc. ops
figure
imshow(A)
imageSize = size(A); %calculate the image size A
numRows = imageSize(1); %number of rows
numCols = imageSize(2); %number of columns
wavelengthMin = 4/sqrt(2); %wavelength in increasing powers of two starting from 4/sqrt(2) up to the hypotenuse length of the input image
wavelengthMax = hypot(numRows,numCols); %max wavelength = hypot of rows and columns
n = floor(log2(wavelengthMax/wavelengthMin)); %calculating floor points
wavelength = 2.^(0:(n-2)) * wavelengthMin; %wavelength calculation
deltaTheta = 45; %choose between 0 and 150 in steps of 30 degrees
orientation = 0:deltaTheta:(180-deltaTheta); %orientation of source image
g = gabor(wavelength,orientation); %calculating gabor function values g = 1*24
gabormag = imgaborfilt(Agray,g); %gabor magnitude from source image
for i = 1:length(g) % length of g = 24
sigma = 0.5*g(i).Wavelength; %choose a sigma that is matched to the Gabor filter that extracted each feature
K = 2; % smoothing term K random value
gabormag(:,:,i) = imgaussfilt(gabormag(:,:,i),K*sigma); %imgaussfilt Gaussian Smoothing Filters to Images
end
X = 1:numCols; %1 to 317 columns
Y = 1:numRows; %1 to 176 rows
[X,Y] = meshgrid(X,Y); %Create 2D grid coordinates with Xcoordinates defined by the vector X and Ycoordinates defined by the vector Y
featureSet = cat(3,gabormag,X);
featureSet = cat(3,featureSet,Y);
numPoints = numRows*numCols; %numPoints = 124848
X = reshape(featureSet,numRows*numCols,[]); %Reshaping data into a matrix X of the form expected by the kmeans function
X = bsxfun(@minus, X, mean(X)); %Normalize features to be zero mean
X = bsxfun(@rdivide,X,std(X)); %Normalize features to be unit variance
coeff = pca(X); %returns the principal component coefficients
feature2DImage = reshape(X*coeff(:,1),numRows,numCols); %returns the numRowsbynumCols matrix, which has the same elements as X*coeff(:,1). The elements are taken columnwise from X*coeff(:,1) to fill in the elements of the numRowsbynumCols matrix
figure
imshow(feature2DImage,[])
L = kmeans(X,4,'Replicates',12); %
L = reshape(L,[numRows numCols]);
figure
imshow(label2rgb(L)) %label matrix to rgb image
Aseg1 = zeros(size(A),'like',A);
Aseg2 = zeros(size(A),'like',A);
BW = L == 2;
BW = repmat(BW,[1 1 3]);
Aseg1(BW) = A(BW);
Aseg2(~BW) = A(~BW);
figure
imshowpair(Aseg1,Aseg2,'montage');
The above code is copied from the MathWorks example Texture Segmentation Using Gabor Filters.
Here is my image (image4.jpg) to which I'm applying the Gabor filter to detect the vehicle. The size of the image is initially 1000x557.
OK. Here are some more points regarding my question above:
1. Every time I run this code, I get different outputs.
2. Can I draw a box around the detected object? If yes, please suggest how.
3. What exactly do I get at the output of the Gabor filter?
Thanks in advance :)
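Regarding point 2 (drawing a box around the object): once the k-means label image L and the mask BW exist, the usual MATLAB route is regionprops(BW(:,:,1), 'BoundingBox') followed by rectangle('Position', bb, 'EdgeColor', 'r'). The underlying idea is just the min/max extent of the mask pixels. Here is a minimal Python sketch of that idea on a toy mask (the mask values and function name are mine, for illustration only):

```python
def bounding_box(mask):
    """Bounding box (row_min, col_min, row_max, col_max) of the truthy pixels."""
    coords = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)

# toy segmentation mask (1 = pixels assigned to the "vehicle" cluster)
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
]
print(bounding_box(mask))  # -> (1, 1, 3, 3)
```

In MATLAB the same extent could be computed with find(any(BW(:,:,1),2)) and find(any(BW(:,:,1),1)); regionprops is just the packaged version of this.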
See also questions close to this topic

Rounded corners in Android ImageButton doesn't work
Is there something wrong with my code?
round_corner.xml
(I put it in drawable)

    <?xml version="1.0" encoding="utf-8"?>
    <shape xmlns:android="http://schemas.android.com/apk/res/android"
        android:shape="rectangle" >
        <solid android:color="#FFFF0F" />
        <corners android:radius="30dp" />
    </shape>
I think what's not working is the code in my imageButton
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical"
        android:padding="5dp">
        <ImageButton
            android:id="@+id/my_button"
            android:layout_width="match_parent"
            android:layout_height="170dp"
            android:layout_gravity="center"
            android:background="@drawable/round_corner"
            android:src="@drawable/my_pic"/>
    </LinearLayout>
It doesn't give me any error; it just doesn't make my image rounded. Any ideas?

Drupal 7.59 images not download in android app
I have updated my Drupal version from 7.4 to 7.59. After that I can still see my image files in the browser, but I cannot download those images via my Android app. It was working fine in my old Drupal version.
Can you please suggest any solution for this?

Is it possible to substitute an HTML image with one from CSS?
I'm wondering if it is possible to substitute an HTML image for one from CSS inside of a media query. When the width is 100%, the images, which are generally small, get a little fuzzy. I know how to insert images via CSS, but I'm very new to responsive development. If it is possible, how would you accomplish it? Thank you all, I really appreciate it.

Calling a Function in Matlab with Symbolic Functions as Arguments
I have written the following MATLAB function:

    function EulerMethod(t_min,t_max,h,f,Y,yzero)
        tlist = t_min:h:t_max;
        N = (t_max - t_min)/h;
        ylist = transpose(zeros(N+1,1));
        ylist(1) = yzero;
        for i = 1:N
            term = f(tlist(i),ylist(i))*h;
            ylist(i+1) = ylist(i) + term;
        end
        yrange = Y(tlist);
        % modified to generate a new figure window each time.
        figure;
        plot(tlist,yrange,'red','LineWidth', 2);
        hold;
        plot(tlist,ylist,'blue','LineWidth', 2);
        plot(tlist, abs(yrange - ylist),'magenta','LineWidth', 2)
        % modified to wrap the title.
        title({'Graphs of the True Solution, Euler Solution,', 'and the Absolute Value of the Global Error (GE)'})
        xlabel('t')
        ylabel('Y(t)')
        legend({'True Solution','Euler Solution', 'Absolute Value of the GE'},'Location','southwest')
    end
I now try to call this function from another script. The variables f and Y in the function are symbolic functions, so in the other file I first declare these functions before calling EulerMethod. Here's the code:

    clc
    syms f(t,y)
    syms Y(t)
    f(t,y) = -y + 2.0*cos(t); % the derivative of the function whose solution we're trying to compute
    Y(t) = sin(t) + cos(t);   % the true solution
    % Calling function
    EulerMethod(0.0, 6.0, 0.2, f, Y, 1.0);
I, however, get errors when I run the second script. Can anyone help me figure out what's going wrong? I suspect it may be because of the way I have declared and used the symbolic functions f and Y, but I am not sure.
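Independent of the symbolic-function issue, it may help to sanity-check the numbers EulerMethod should produce. Here is a pure-Python sketch of the same explicit Euler scheme, applied to the same ODE and true solution (function and variable names are mine):

```python
import math

def euler_method(t_min, t_max, h, f, y0):
    """Explicit Euler: y[i+1] = y[i] + h*f(t[i], y[i])."""
    n = int(round((t_max - t_min) / h))
    ts = [t_min + i * h for i in range(n + 1)]
    ys = [y0]
    for i in range(n):
        ys.append(ys[i] + h * f(ts[i], ys[i]))
    return ts, ys

# ODE y' = -y + 2*cos(t), whose exact solution is Y(t) = sin(t) + cos(t)
f = lambda t, y: -y + 2.0 * math.cos(t)
exact = lambda t: math.sin(t) + math.cos(t)

ts, ys = euler_method(0.0, 6.0, 0.2, f, 1.0)
errors = [abs(exact(t) - y) for t, y in zip(ts, ys)]
print(max(errors))  # global error of the Euler approximation
```

With h = 0.2 the global error stays modest for this well-behaved ODE, which gives a rough target for what the MATLAB plot of abs(yrange - ylist) should look like.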
How to plot a multidimensional array in MATLAB?
I have a table as follows
    system:index  2017_06_18   2017_06_19   2017_06_20   2017_06_21
    2             612.8099664  1174.656713  1282.083251  815.3828357
    3             766.4103726  1345.135952  1322.726083  749.998993
    4             765.0230453  1411.669136  1350.437586  610.9541838
    5             553.5858458  1374.14789   1152.086957  566.7924468
    6             466.9780908  1311.903756  1060.494001  559.1982264
    7             257.1162602  1270.182385  988.5455285  562.9224932
    8             230.6611542  1310.971988  1001.548768  502.3266959
I want to plot a 2-D colormap with system:index on the y axis, the dates on the x axis, and the values under the dates as colors. I tried the following code, but it did not give what I want.
    clear
    clc
    filename = 'TurbidityDailyMean.xlsx';
    data = xlsread(filename,'TurbidityDailyMean','A1:E8');
    figure;
    hold on
    for i = 2:5
        y = data(:,1);
        x = data(:,i);
        plot(x,y)
    end
I need a colormap as described above, but what I tried gives something else. Another problem is that I can't import the system:index column and the dates row into MATLAB along with the data.

Regexp syntax MATLAB
    input1 = ' 8 BKN 15 BKN '
    input2 = ' 2 X 3SM '
    regexp(input1, '\s{1}\d(1-2)\s{1}c{3}\s{1}')
    regexp(input2, '\s{1}\d(1-2)\c{1}\s{1}c{1}\s{1}')
I'm having trouble getting regexp to work; I'm not at all great at debugging.
The first pattern needs to identify: one space, one or two digits, one space, three characters [A-Z], and one space. The second pattern needs: one space, one or two digits, one space, the letter X, one digit and two characters, and one space.
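For what it's worth, the two patterns described above can be written out in Python's re syntax, which is close to MATLAB's regexp ({1,2} means one or two repetitions; the second pattern assumes a space between the X and the digit, as in input2):

```python
import re

input1 = ' 8 BKN 15 BKN '
input2 = ' 2 X 3SM '

# one space, one or two digits, one space, three characters [A-Z], one space
pat1 = r'\s\d{1,2}\s[A-Z]{3}\s'
# one space, one or two digits, one space, the letter X,
# one space, one digit followed by two characters [A-Z], one space
pat2 = r'\s\d{1,2}\sX\s\d[A-Z]{2}\s'

m1 = re.search(pat1, input1)
m2 = re.search(pat2, input2)
print(m1.group())  # ' 8 BKN '
print(m2.group())  # ' 2 X 3SM '
```

The same quantifier syntax (\s, \d{1,2}, [A-Z]{3}) should work in MATLAB's regexp; (12) and c{3} in the attempts above are not valid ways to say "one or two digits" or "three letters".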

How to convert to bytes format for PIL images which is similar to normal bytes format
Sorry that I couldn't explain clearly in the subject. I used read() to read the entire image as bytes, and I also used PIL's tobytes() on the same image, but the two byte strings look different. Could you please advise on how to generate the same bytes as read() using PIL?
Code sample:
    path3 = r'path'
    with io.open(path3, 'rb') as image_file:
        content1 = image_file.read()
    # b'\xff\xd8\x ...
Using PIL:
    with io.open(path3, 'rb') as image_file:
        content1 = Image.open(image_file).tobytes()
    # b'\xbf\x91\xc0\xbf\x91\xc0\xbe\x90\xbf\xbe'
In my use case:
    from pdf2image import convert_from_bytes
    images = convert_from_bytes(open('pp.pdf', 'rb').read())
    b = images[0].read()  # convert_from_bytes returns a list
    # AttributeError: 'PpmImageFile' object has no attribute 'read'
Is it possible to have same byte format like read()?
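The two byte strings differ because read() returns the encoded file bytes (the JPEG/PNG container), while tobytes() returns the raw decoded pixel data. Assuming Pillow is available, one way to get file-format bytes out of a PIL image is to re-encode it into an in-memory buffer with save() (a sketch; the tiny test image here is made up):

```python
import io
from PIL import Image

img = Image.new('RGB', (4, 4), (255, 0, 0))  # stand-in for images[0]

# tobytes() -> raw pixel data: width * height * channels bytes
raw = img.tobytes()

# save() into a BytesIO buffer -> encoded file bytes, like read() on a file
buf = io.BytesIO()
img.save(buf, format='JPEG')
encoded = buf.getvalue()

print(len(raw))     # 4 * 4 * 3 = 48 pixel bytes
print(encoded[:2])  # b'\xff\xd8' -- the JPEG start-of-image marker
```

Note the re-encoded bytes are valid JPEG data but generally will not be byte-identical to the original file, since JPEG compression is re-applied.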

Unsupported BMP compression  BMP to JPEG  PIL  Python
I am trying to convert a bmp image to jpeg using the below code.
    from PIL import Image
    img = Image.open('/Desktop/xyz.bmp')
    new_img = img.resize((256, 256))
    new_img.save('/Desktop/abc.png', 'png')
While executing, I am getting this error:
    Traceback (most recent call last):
      File "D:/widowed_hulk/otokar/image_scraper.py", line 80, in <module>
        img = Image.open('C:/Users/santhosh.solomon/Desktop/bmp/ImageHandler.bmp')
      File "C:\Python34\lib\site-packages\PIL\Image.py", line 2609, in open
        im = _open_core(fp, filename, prefix)
      File "C:\Python34\lib\site-packages\PIL\Image.py", line 2599, in _open_core
        im = factory(fp, filename)
      File "C:\Python34\lib\site-packages\PIL\ImageFile.py", line 102, in __init__
        self._open()
      File "C:\Python34\lib\site-packages\PIL\BmpImagePlugin.py", line 201, in _open
        self._bitmap(offset=offset)
      File "C:\Python34\lib\site-packages\PIL\BmpImagePlugin.py", line 161, in _bitmap
        raise IOError("Unsupported BMP compression (%d)" % file_info['compression'])
    OSError: Unsupported BMP compression (1)
Image I am trying to convert : https://servis.otokar.com.tr:8083/ImageHandler.ashx?id=6425
Can anyone guide me through this error?

How to find the dominant color in images using Python?
I am trying to find the top 2 colors in my image so I can process it accordingly. For example, if the image has blue and white I apply one rule; if it is green and red I apply some other rule. I am trying the code below, which works for some images but not for all.
Main goal: every image has 2 dominant visible colors, as shown below, and I need to get those colors.
Expected result :
image1 : blue and yellow shade
image2 : green shade and blue shade
code :
    from PIL import Image
    import numpy as np
    import scipy
    import scipy.misc
    import scipy.cluster

    NUM_CLUSTERS = 5

    print('reading image')
    im = Image.open("captcha_green.jpg")
    ar = np.asarray(im)
    shape = ar.shape
    ar = ar.reshape(scipy.product(shape[:2]), shape[2]).astype(float)

    print('find clus')
    codes, dist = scipy.cluster.vq.kmeans(ar, NUM_CLUSTERS)
    print('cluster centres:\n', codes)

    vecs, dist = scipy.cluster.vq.vq(ar, codes)       # assign codes
    counts, bins = scipy.histogram(vecs, len(codes))  # count occurrences
    index_max = scipy.argmax(counts)                  # find most frequent
    peak = codes[index_max]
    colour = ''.join(chr(int(c)) for c in peak).encode("utf8").hex()
    print('most frequent is %s (#%s)' % (peak, colour))
For this image
I am getting:

    most frequent is [ 1.84704063 1.59035213 252.29132127] (#0101c3bc)

As per this link https://www.w3schools.com/colors/colors_picker.asp?color=80ced6 it is detecting blue, which is true. For the green image, however, instead of a green shade it detects light pink:

    most frequent is [142.17271615 234.99711606 144.77187718] (#c28ec3aac290)

This is a wrong prediction.
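One thing that can go wrong with the k-means approach is that a cluster centre is an average, so it can land on a blended colour that never actually appears in the image. A dependency-free alternative is to quantize each pixel coarsely and count occurrences; a sketch (the pixel list is what list(im.getdata()) would give you, and the toy data below is made up):

```python
from collections import Counter

def dominant_colors(pixels, n=2, bucket=64):
    """Return the n most common colours after coarse quantization.

    Quantizing first (into bucket-sized steps per channel) groups
    near-identical shades together instead of averaging unrelated ones.
    """
    quantized = [tuple((c // bucket) * bucket for c in px) for px in pixels]
    return [color for color, _ in Counter(quantized).most_common(n)]

# toy "image": mostly blue, some yellow, a little grey
pixels = [(10, 20, 200)] * 60 + [(250, 240, 30)] * 30 + [(128, 128, 128)] * 10
print(dominant_colors(pixels))  # -> [(0, 0, 192), (192, 192, 0)]
```

The returned tuples are the lower edges of the winning buckets, i.e. recognisably "blue" and "yellow" here rather than a mixed average.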
How do video codecs and containers work, and how is the data inside a video container organised?
I am trying to find out how video codecs work and how the data inside a video container is organised (containers like MPEG-4 Part 14 (mp4), Matroska (mkv), etc., and codecs like HEVC, H.264, VP8, etc.). The blogs and sites I have found are very general and specify only the features and tech specifications of these containers and codecs. I need to know exactly how a codec works and exactly how the container data is organised. Kindly help me find the solution. Thank you.

Android: how to put a watermark on a video
I know it wasn't possible back in the day, but I have heard that it's possible now. How would one do it? I'm not even sure where to start.
Right now I can save a video locally with MediaRecorder; how do I add a watermark to it? I can't seem to find a library that lets me do that.

FFMPEG Multiple Alpha Overlays
I have a few videos with alpha channels that I would like to overlay on top of each other. It's possible to get this working with the following command:
    ffmpeg -i back.mov -i front.mov -filter_complex overlay -c:v png output.mov
However, if I add another video, it no longer works:
    ffmpeg -i back.mov -i front.mov -i front2.mov -filter_complex overlay -c:v png output.mov
Does anyone know a way of getting this to work? Or would I have to render the first two layers and then run the command again with the new layer?
I will have more than 3 layers, so I'm looking for the most efficient way.
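One commonly suggested approach (a sketch, untested here) is to chain the overlays inside a single filter_complex using labelled pads, so each overlay's output feeds the next one; the pad names [tmp] and [out] are arbitrary:

```shell
# back.mov is the bottom layer; each overlay result becomes the base for the next
ffmpeg -i back.mov -i front.mov -i front2.mov \
  -filter_complex "[0:v][1:v]overlay[tmp];[tmp][2:v]overlay[out]" \
  -map "[out]" -c:v png output.mov
```

For more layers the chain extends the same way ([out1] overlaid with [3:v], and so on), which avoids re-encoding intermediate files.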

Gabor Filter implementation in Frequency domain
Here we have the spatial-domain implementation of the Gabor filter. But I need to implement a Gabor filter in the frequency domain for performance reasons.
I have found the frequency-domain equation of the Gabor filter:
(I am actually in doubt about the correctness and/or applicability of this formula.)
Source Code
So, I have implemented the following :
    public partial class GaborFfftForm : Form
    {
        private double Gabor(double u, double v, double f0, double theta, double a, double b)
        {
            double rad = Math.PI / 180 * theta;
            double uDash = u * Math.Cos(rad) + v * Math.Sin(rad);
            double vDash = (-1) * u * Math.Sin(rad) + v * Math.Cos(rad);
            return Math.Exp((-1) * Math.PI * Math.PI * ((uDash - f0) / (a * a)) + (vDash / (b * b)));
        }

        public Complex[,] GaborKernelFft(int sizeX, int sizeY, double f0, double theta, double a, double b)
        {
            int halfX = sizeX / 2;
            int halfY = sizeY / 2;
            Complex[,] kernel = new Complex[sizeX, sizeY];
            for (int u = -halfX; u < halfX; u++)
            {
                for (int v = -halfY; v < halfY; v++)
                {
                    double g = Gabor(u, v, f0, theta, a, b);
                    kernel[u + halfX, v + halfY] = new Complex(g, 0);
                }
            }
            return kernel;
        }

        public GaborFfftForm()
        {
            InitializeComponent();
            Bitmap image = DataConverter2d.ReadGray(StandardImage.LenaGray);
            Array2d<double> dImage = DataConverter2d.ToDouble(image);
            int newWidth = Tools.ToNextPowerOfTwo(dImage.Width) * 2;
            int newHeight = Tools.ToNextPowerOfTwo(dImage.Height) * 2;
            double f0 = (newWidth + newHeight) / 8;
            double theta = 45;
            double alpha = 3.7;
            double beta = 1.3;
            Complex[,] kernel2d = GaborKernelFft(newWidth, newHeight, f0, theta, alpha, beta);
            dImage.PadTo(newWidth, newHeight);
            Array2d<Complex> cImage = DataConverter2d.ToComplex(dImage);
            Array2d<Complex> fImage = FourierTransform.ForwardFft(cImage);

            // FFT convolution .................................................
            Array2d<Complex> fOutput = new Array2d<Complex>(newWidth, newHeight);
            for (int x = 0; x < newWidth; x++)
            {
                for (int y = 0; y < newHeight; y++)
                {
                    fOutput[x, y] = fImage[x, y] * kernel2d[x, y];
                }
            }
            Array2d<Complex> cOutput = FourierTransform.InverseFft(fOutput);
            Array2d<double> dOutput = ImageRescaler.Rescale(DataConverter2d.ToDouble(cOutput));
            dOutput.CropBy((newWidth - image.Width) / 2, (newHeight - image.Height) / 2);
            Bitmap output = DataConverter2d.ToBitmap(dOutput, image.PixelFormat);

            Array2d<Complex> cKernel = FourierTransform.InverseFft(new Array2d<Complex>(kernel2d));
            cKernel = FourierTransform.RemoveFFTShift(cKernel);
            Array2d<double> dKernel = ImageRescaler.Rescale(DataConverter2d.ToDouble(cKernel));
            Bitmap kernel = DataConverter2d.ToBitmap(dKernel, image.PixelFormat);

            pictureBox1.Image = image;
            pictureBox2.Image = kernel;
            pictureBox3.Image = output;
        }
    }
(Just concentrate on the algorithmic steps at this time.)
I have generated a Gabor kernel in the frequency domain. Since the kernel is already in the frequency domain, I didn't apply an FFT to it, whereas the image is FFTed. Then I multiplied the kernel and the transformed image to achieve FFT convolution. The result is then inverse-FFTed and converted back to a Bitmap as usual.
Output:
- The kernel looks okay, but the filter output doesn't look very promising (or does it?).
- The orientation (theta) doesn't have any effect on the kernel.
- The calculation/formula frequently suffers from a divide-by-zero exception when I change the values.
How can I fix those problems?
Oh, and also:
- What do the parameters α and β represent?
- What should be the appropriate value of f0?

Fixing a simple C# code for implementing Gabor Filter
The formula for the Gabor filter given by Wikipedia is:
I tried to write the following C# console program to apply a Gabor filter (given by the formula above) to an image:
    using System;
    using Emgu.CV;
    using Emgu.CV.Structure;
    using System.Drawing;

    namespace Gabor
    {
        public class GaborKernel
        {
            public int width;
            public Matrix<float> real;
            public Matrix<float> imaginary;

            public GaborKernel(int _width, double lambda, double theta, double psi, double sigma, double gamma) // constructor
            {
                width = _width;
                imaginary = new Matrix<float>(width, width);
                real = new Matrix<float>(width, width);
                for (int i = 0; i < width; i++)
                {
                    for (int j = 0; j < width; j++)
                    {
                        int x = i - width / 2;
                        int y = j - width / 2;
                        double x_prime = x * Math.Cos(theta) + y * Math.Sin(theta);
                        double y_prime = -x * Math.Sin(theta) + y * Math.Cos(theta);
                        double a = Math.Exp(-(x_prime * x_prime + gamma * gamma * y_prime * y_prime) / (2 * sigma * sigma));
                        double re = Math.Cos(2 * Math.PI * x_prime / lambda + psi);
                        double im = Math.Sin(2 * Math.PI * x_prime / lambda + psi);
                        double real_part = a * re;
                        double imaginary_part = a * im;
                        real.Data[i, j] = (float)real_part;
                        imaginary.Data[i, j] = (float)imaginary_part;
                    }
                }
            }
        }

        class Program
        {
            public static Image<Gray, float> Convolution(Image<Gray, float> src, GaborKernel kernel)
            {
                Point center = new Point(kernel.width / 2 + 1, kernel.width / 2 + 1);
                ConvolutionKernelF kernel_f;
                kernel_f = new ConvolutionKernelF(kernel.real, center);
                Image<Gray, float> temp1 = src.Convolution(kernel_f);
                kernel_f = new ConvolutionKernelF(kernel.imaginary, center);
                Image<Gray, float> temp2 = src.Convolution(kernel_f);
                temp1 = temp1.Pow(2);
                temp2 = temp2.Pow(2);
                temp1 = temp1.Add(temp2);
                return temp1.Pow(0.5);
            }

            static void Main()
            {
                Image<Gray, float> image = new Image<Gray, float>("input.bmp");
                double psi = 0;
                double gamma = 1;
                double theta = Math.PI / 4;
                double sigma = 1;
                double lambda = 1;
                int width = 12;
                GaborKernel kernel = new GaborKernel(width, lambda, theta, psi, sigma, gamma);
                Image<Gray, float> trans = Convolution(image, kernel);
                trans.Save("output.bmp");
            }
        }
    }
I'm not an Emgu.CV programmer; I used Emgu.CV (OpenCV) just for the convolution because I don't know the details of implementing a convolution function.
The program works, but the results seem weird. For example, this picture:
is converted to:
with theta = Math.PI / 4. Changing theta to theta = Math.PI / 6 makes the output completely different. It also seems that changing width does not change the output. I think something is wrong in my code. Can you help me fix it? Am I wrong in using the Gabor filter formula, or am I wrong in using the Emgu.CV convolution?
I expect the output for the first picture to be something like this:
But the program does not produce this.
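As a cross-check for the kernel values, here is a pure-Python sketch of the same Wikipedia formula (my own helper, not Emgu.CV). One likely cause of the odd outputs: with lambda = 1, the carrier cos(2*pi*x'/lambda) completes a full period every pixel, so the sampled kernel degenerates; lambda normally needs to be at least 2 (Nyquist), and sigma is usually scaled with lambda. That would also explain why width appears to have no effect, since with sigma = 1 the Gaussian envelope dies out after a few pixels anyway:

```python
import math

def gabor_kernel(width, lam, theta, psi, sigma, gamma):
    """Real part of a Gabor kernel, per the Wikipedia formula:
    g(x,y) = exp(-(x'^2 + gamma^2*y'^2)/(2*sigma^2)) * cos(2*pi*x'/lam + psi)
    with x' =  x*cos(theta) + y*sin(theta)
         y' = -x*sin(theta) + y*cos(theta)
    """
    kernel = [[0.0] * width for _ in range(width)]
    for i in range(width):
        for j in range(width):
            x, y = i - width // 2, j - width // 2
            x_p = x * math.cos(theta) + y * math.sin(theta)
            y_p = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(x_p ** 2 + gamma ** 2 * y_p ** 2) / (2 * sigma ** 2))
            kernel[i][j] = envelope * math.cos(2 * math.pi * x_p / lam + psi)
    return kernel

k = gabor_kernel(width=9, lam=4.0, theta=math.pi / 4, psi=0.0, sigma=2.0, gamma=1.0)
print(k[4][4])  # centre value: exp(0) * cos(psi) = 1.0
```

Comparing a few values of this kernel against real.Data for the same parameters should tell you whether the bug is in the kernel or in the convolution call.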

Gabor kernel visualization
I am trying to generate the following picture using C#:
My output is as follows:
(This is a far inferior output.)
How can I make my output exactly look like the one in the 1st picture?
Source Code
    public partial class GaborKernelForm : Form
    {
        public GaborKernelForm()
        {
            InitializeComponent();
            int Size = 5;          // kernel size
            double Sigma = 10;     // variance of the Gaussian
            double Theta = 90;     // orientation
            double Lambda = 10;    // wavelength
            double Gamma = 0.8;    // aspect ratio
            double Psi = 0;        // phase
            bool Normalized = true;
            for (int y = 40; y < 105 * 5; y += 105)
            {
                for (int x = 20; x < 105 * 8; x += 105)
                {
                    double[,] kernel = GaborKernel.Get2d(GaborKernelType.Imaginary, Size, Lambda, Theta * Math.PI / 180, Psi, Sigma, Gamma, Normalized);
                    Bitmap bmp = ImageData.ToBitmap2d(kernel, System.Drawing.Imaging.PixelFormat.Format8bppIndexed);
                    Grayscale.SetPalette(bmp);
                    int xxx = x;
                    int yyy = y;
                    PictureBox picBox = GetPictureBox(x, y, bmp);
                    this.Controls.Add(picBox);
                    Theta = Theta + 23;
                }
                Gamma = Gamma - 0.16;
            }
        }

        private PictureBox GetPictureBox(int x, int y, Bitmap bitmap)
        {
            PictureBox p = new PictureBox();
            p.BorderStyle = BorderStyle.Fixed3D;
            p.SizeMode = PictureBoxSizeMode.CenterImage;
            p.Size = new System.Drawing.Size(100, 100);
            p.Location = new Point(x, y);
            p.Image = bitmap;
            return p;
        }
    }