Looking for a BSS algorithm that works with Real World audio recordings
I am looking for a blind audio source separation (BSS) algorithm that I can use to reconstruct human voices from real-world recordings. My application records 2 people speaking simultaneously using 2 microphones and saves the recordings as mono audio files.
I have tried several implementations of FastICA (including the FastICA package for MATLAB from Aalto University, scikit-learn's FastICA, and several other university professors' implementations) and a couple of DUET implementations, but none has managed to separate real-world recordings of 2 speakers. I know that FastICA fails when there are delays between the sources, which is inevitable in the real world, so I am not surprised that FastICA did not work for me. Also, DUET assumes the sources are W-disjoint orthogonal (at most one source dominates each time-frequency point), which does not hold well for overlapping human voices.
Several of the algorithms have been able to separate the "manual mixing" case (speakers are recorded separately and then linearly mixed together in MATLAB). However, my project requires that the voices be recorded simultaneously in real time.
I have come across others who seem to be looking for similar algorithms (real-world blind source separation), and I am aware that there are other algorithms (JADE, AMUSE). My understanding is that JADE is another ICA algorithm, so I expect I would get results similar to those I got from FastICA.
I was wondering if anyone has a BSS algorithm that has worked for reconstructing real-world audio samples and could post a link to it. My project runs on a Unix-based OS that does not have MATLAB, so preferably the algorithm is not coded for MATLAB, but I have access to MATLAB on another computer so I could still work with it.
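Since real rooms introduce delays and reverberation, the mixing model is convolutive rather than instantaneous, and the usual fix is to work in the STFT domain with an independent vector analysis method; as far as I know, the Python pyroomacoustics package ships AuxIVA and ILRMA implementations in its `bss` module, which would fit a Unix/Python setup. A toy numpy sketch of the convolutive model itself (all delays, gains, and echo values below are invented for illustration):

```python
import numpy as np

# Toy illustration: in a real room each microphone hears each speaker
# through a different impulse response, so the mixtures are convolutive,
# x_m = sum_k h_mk * s_k, not the instantaneous matrix mixing FastICA inverts.
rng = np.random.default_rng(0)
fs = 16000
s1 = rng.standard_normal(fs)   # stand-in for speaker 1
s2 = rng.standard_normal(fs)   # stand-in for speaker 2

def toy_ir(delay, gain, length=64):
    """A pure delay plus a weak echo; values are illustrative only."""
    h = np.zeros(length)
    h[delay] = gain
    h[delay + 20] += 0.3 * gain
    return h

# two mics x two sources -> four different impulse responses
x1 = np.convolve(s1, toy_ir(0, 1.0))[:fs] + np.convolve(s2, toy_ir(8, 0.6))[:fs]
x2 = np.convolve(s1, toy_ir(5, 0.7))[:fs] + np.convolve(s2, toy_ir(0, 1.0))[:fs]
X = np.stack([x1, x2])         # shape (2, 16000): what actually gets recorded
```

Feeding the STFT of such mixtures to a frequency-domain method (rather than the raw waveforms to FastICA) is the part that makes the real-world case tractable.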
Thanks!
See also questions close to this topic

Create an integer from a dict inside a list. Python newbie
I get the following object from a Google Analytics API response (simplified) which appears to be a list with a dict inside it:
max_value = [{u'values': [u'1647']}]
I simply want to extract the 1647 as a standalone integer object, but I've contorted myself into the following very ugly Python code to do so:
    for i in max_value:
        temp = int(i['values'].pop(0))
        print temp
        print isinstance(temp, int)
Yields:
    1647
    True
I've searched for some time and all I see is the ability to turn a list of strings into a list of integers. I don't want a resulting list. I want a single var object, as above.
The code works, but I'm obviously missing something very simple here. My code is way too ugly... help. TIA.
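Since the structure is a one-element list holding a dict with a one-element inner list, direct indexing avoids the loop and the `pop` entirely. A minimal sketch:

```python
# same structure as the API response
max_value = [{u'values': [u'1647']}]

# index the outer list, the dict key, and the inner list, then convert once
temp = int(max_value[0]['values'][0])
```

Unlike `pop(0)`, this also leaves the original response object unmodified.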

I want a loop that opens a YouTube URL in Google Chrome every 60 seconds, refreshes it, then closes it
I have tried the code below, but it isn't working properly: it opens Google Chrome once but not the YouTube URL, and after that first time it shows an error. Could anybody help me make some changes to my code?
    from selenium import webdriver as wd
    chromedir = 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe'
    driver = wd.Chrome(chromedir)
    website = input("http:\youtu.be/RsWCo_xGXxY")
    driver.open(website)
    def until_func(driver):
        driver.refresh()
    orderElem = WebDriverWait(driver, timeout=60, poll_frequency=10).until(until_func)

Hourly average data of CSV data
My data is in CSV format which is minute resolution. It looks like
    Timestamp        value
    6/10/2018 0:00   23.9
    6/10/2018 0:01   19.8
    6/10/2018 0:02   20.3
    ...
    6/18/2018 23:59  25.9
Now I need the hourly average of this data. The code I have so far is
    import pandas as pd
    df = pd.read_csv("filename.csv")
    df['DateTime'] = pd.to_datetime(df['Timestamp'])
    df.index = df['DateTime']
    df1 = df.resample('H').mean()
    print(df1)
But the output is not correct:

    DateTime              Value
    2018-06-13 00:00:00   16.19
    2018-06-13 01:00:00   20.80
    ...
    2018-12-06 23:00:00   19.09
The dates are far outside the range of the actual data table (which only covers June 10 to 18), so please help me debug this.
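The jumbled output dates suggest pandas guessed the month/day order per row. A sketch of the likely fix (I'm assuming day/month/year here; swap to `%m/%d/%Y` if the data is US-style), on a tiny inline stand-in for the CSV:

```python
import pandas as pd
from io import StringIO

# inline stand-in for filename.csv, with the same layout as the question
csv = StringIO(
    "Timestamp,value\n"
    "6/10/2018 0:00,23.9\n"
    "6/10/2018 0:01,19.8\n"
    "6/10/2018 1:02,20.3\n"
)
df = pd.read_csv(csv)

# an explicit format keeps parsing consistent across every row
df["DateTime"] = pd.to_datetime(df["Timestamp"], format="%d/%m/%Y %H:%M")
hourly = df.set_index("DateTime")["value"].resample("H").mean()
```

With consistent timestamps, `resample('H').mean()` then produces one row per hour of the actual recording range.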

How to check if two rotated images overlap in javafx
I have two ImageViews in a JavaFX program. They have both been rotated and translated a few times. I know their initial angle and position (layoutX, layoutY), and I also have the list of translations and rotations they went through. How can I tell whether they are overlapping one another right now?
The images are given below:
Image of an Apple:
Image of an Arrow:
It would also be really helpful if I could determine whether the tip of the arrow is inside the apple image. However, it's okay if I can just tell whether the images are colliding or not.
The apple Class:
    class Apple {

        public double height, width, x1, x2, y1, y2;
        public ImageView image;

        Apple(double x1, double y1, double height, double width) {
            this.x1 = x1;
            this.y1 = y1;
            this.height = height;
            this.width = width;
        }

        Apple(double x1, double y1) {
            height = 20;
            width = 20;
            this.x1 = x1;
            this.y1 = y1;
        }

        public boolean isCollision(double ax, double ay) {
            x2 = x1 + width;
            y2 = y1 + height;
            if (ax > x1 && ax < x2 && ay > y1 && ay < y2) {
                return true;
            } else {
                return false;
            }
        }
    }
The code to create apples:
    Apple generateApple() {
        double x1, y1, rx, ry;
        x1 = 300;
        y1 = 250;
        rx = 150;
        ry = 150;
        double xa, ya;
        xa = randomno(x1, x1 + rx);
        ya = randomno(y1, y1 + ry);
        Apple apl = new Apple(xa, ya);
        createAppleImage(apl);
        return apl;
    }

    void createAppleImage(Apple apple) {
        ImageView appleImage = null;
        FileInputStream inputstream5 = null;
        try {
            inputstream5 = new FileInputStream("C:\\Users\\MAHDI\\Documents\\NetBeansProjects\\ThreadTesting\\apple.jpg");
            Image img4 = new Image(inputstream5);
            appleImage = new ImageView(img4);
            appleImage.setFitHeight(apple.height);
            appleImage.setFitWidth(apple.width);
            appleImage.setLayoutX(apple.x1);
            appleImage.setLayoutY(apple.y1);
            System.out.println(" " + apple.x1 + " " + apple.y1);
            gameLayout.getChildren().add(appleImage);
        } catch (FileNotFoundException ex) {
            Logger.getLogger(AppleShooter.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
            try {
                inputstream5.close();
            } catch (IOException ex) {
                Logger.getLogger(AppleShooter.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
        apple.image = appleImage;
    }
The code to create arrow:
    FileInputStream inputstream3 = new FileInputStream("C:\\Users\\MAHDI\\Documents\\NetBeansProjects\\ThreadTesting\\arrowpic.png");
    Image img2 = new Image(inputstream3);
    arrow = new ImageView(img2);
    arrow.setFitHeight(arrowheight);
    arrow.setFitWidth(arrowwidth);
    arrow.setLayoutX(40);
    arrow.setLayoutY(420);
    arrow.setRotate(45);
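If the rotated corner positions of the two images can be computed (by applying the tracked rotations and translations to each rectangle's corners), a standard overlap test is the Separating Axis Theorem. A Python sketch that ports directly to Java (the sizes and angles in the test values are arbitrary):

```python
import math

def corners(cx, cy, w, h, angle_deg):
    """Corner points of a w-by-h rectangle centred at (cx, cy), rotated."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(cx + dx * c - dy * s, cy + dx * s + dy * c)
            for dx, dy in ((-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2))]

def rects_overlap(r1, r2):
    """Separating Axis Theorem for two convex quadrilaterals."""
    for rect in (r1, r2):
        for i in range(4):
            x1, y1 = rect[i]
            x2, y2 = rect[(i + 1) % 4]
            ax, ay = y1 - y2, x2 - x1          # edge normal = candidate axis
            p1 = [ax * x + ay * y for x, y in r1]
            p2 = [ax * x + ay * y for x, y in r2]
            if max(p1) < min(p2) or max(p2) < min(p1):
                return False                    # gap found on this axis
    return True                                 # no separating axis exists
```

For the arrow-tip-inside-apple check, projecting just the tip point onto the apple rectangle's two edge axes works the same way.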

Find shortest distance from any point on edge to all the vertices of graph
Question description:
A research team wants to establish a research center in a region where they found some rare elements. They want it to be as close as possible to all the rare elements, so that they can reduce the overall cost of research there. It is given that all the rare-element locations are connected by roads, and that the research center can only be built on a road. The team decided to assign this task to a coder, if you feel you have that much potential.
Here is the task: find the shortest distance of the research center from the given locations of the rare elements.
Locations are given in matrix-cell form, where 1 represents a road and 0 no road.
The number of rare elements and their locations are also given (number <= 5).
The order of the square matrix is less than or equal to 20.
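One way to read the task (my assumptions: 4-neighbour movement on road cells, "closest to all" meaning the smallest summed distance, and every road cell reachable from every element, as the statement implies) is a BFS from each rare-element cell:

```python
from collections import deque

def best_center(grid, elements):
    """Return (min summed distance, cell) over road cells, via one BFS per element."""
    rows, cols = len(grid), len(grid[0])
    total = [[0] * cols for _ in range(rows)]

    for sr, sc in elements:                      # BFS from each rare element
        dist = [[None] * cols for _ in range(rows)]
        dist[sr][sc] = 0
        q = deque([(sr, sc)])
        while q:
            r, c = q.popleft()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 1 and dist[nr][nc] is None):
                    dist[nr][nc] = dist[r][c] + 1
                    q.append((nr, nc))
        for r in range(rows):
            for c in range(cols):
                if dist[r][c] is not None:
                    total[r][c] += dist[r][c]

    # best road cell: smallest summed distance to all elements
    return min((total[r][c], (r, c))
               for r in range(rows) for c in range(cols) if grid[r][c] == 1)
```

With at most 5 elements and a 20x20 grid, this is at most 5 BFS passes over 400 cells, which is trivial.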

Merging two Max heap which are complete Binary tree
Let H1 and H2 be two complete binary trees that are heaps as well. Assume H1 and H2 are max-heaps, each of size n. Design and analyse an efficient algorithm to merge H1 and H2 into a new max-heap H of size 2n.
==========================================================================
Approach: first copy the two arrays of H1 and H2 into a new array of size 2n, then apply the build-heap operation to get H. Time complexity = O(2n) = O(n). But don't we need to apply max-heapify after building the heap? So where is the O(log n) time for that accounted for?
===================================================================
Another approach says merging two max-heaps takes O(n + m) time. Now, which is correct, and why does no one account for max-heapify?
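Both statements are consistent: the O(n) (equivalently O(n + m)) build-heap bound already includes every max-heapify call, because bottom-up construction sifts down each internal node, a node at height h costs O(h), and only about n / 2^(h+1) nodes sit at height h, so the sum telescopes to O(n). A sketch of the concatenate-then-build-heap merge (the example heap contents are arbitrary):

```python
def build_max_heap(a):
    """Bottom-up heap construction: sift down every internal node.
    This already IS repeated max-heapify; the per-node O(h) costs sum to O(n)."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        _sift_down(a, i, n)
    return a

def _sift_down(a, i, n):
    # classic max-heapify: push a[i] down until both children are smaller
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

h1 = [9, 5, 7, 1, 2, 3]       # arbitrary example max-heaps
h2 = [8, 6, 4, 0, 1, 2]
merged = build_max_heap(h1 + h2)
```

No extra pass is needed after `build_max_heap`; the heap property already holds everywhere when the loop finishes.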

How to convert color grain image to black and white (0,1) so that grain boundary remain identifiable
I want to convert a color image to a binary (0,1) image using the simple call
    im2bw()
but in this case the grain boundaries are lost or not clearly visible.
I would like to end up with well-defined grain boundaries (reference image omitted).
Any MATLAB or Python explanation is highly appreciated.
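As a Python starting point (not the asker's method; the window and offset values below are guesses to be tuned): thresholding against a local mean, instead of the single global level that im2bw uses, tends to keep faint grain boundaries visible:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_threshold(gray, window=15, offset=0.02):
    """Binarise against a local mean so faint grain boundaries survive the
    washout a single global threshold causes. window/offset need tuning."""
    local_mean = uniform_filter(gray.astype(float), size=window)
    return (gray > local_mean - offset).astype(np.uint8)

# toy check: a faint dark line on a bright background stays visible
gray = np.ones((40, 40))
gray[20, :] = 0.5
bw = local_threshold(gray, window=5, offset=0.1)
```

MATLAB has an equivalent idea in `adaptthresh` combined with `imbinarize`.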

How can we transform one RGB image to look like another RGB image in Matlab
I want to apply the colors of image B to image A. I'm new to MATLAB and I'm stuck on this problem. Can anyone help me write this code in MATLAB? I will be very thankful to you. Thanks
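A common baseline for this is Reinhard-style statistics matching; the sketch below does it per RGB channel for simplicity (doing it in a Lab-like space usually looks better), assuming float images in [0, 1], and translates line-for-line to MATLAB:

```python
import numpy as np

def transfer_colors(src, ref):
    """Shift/scale each channel of src to match ref's mean and standard
    deviation (a rough per-channel variant of Reinhard color transfer)."""
    out = np.empty_like(src, dtype=float)
    for ch in range(3):
        s, r = src[..., ch], ref[..., ch]
        scaled = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
        out[..., ch] = np.clip(scaled, 0.0, 1.0)
    return out

# toy usage on random stand-ins for image A (src) and image B (ref)
rng = np.random.default_rng(1)
src = rng.uniform(0.3, 0.5, size=(32, 32, 3))
ref = rng.uniform(0.4, 0.6, size=(32, 32, 3))
out = transfer_colors(src, ref)
```

The output keeps image A's structure but inherits image B's overall color statistics.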

Self-Organizing Map to split datasets
I am trying to use a Self-Organizing Map (SOM) to split datasets into training, validation, and test sets. I saw some examples in MATLAB, but they only talk about visualization of clusters. Could anyone please give me some guidelines on how to split datasets using a SOM in MATLAB?

Python - How to extract values at particular points on graph
I have plotted the following graph using data from 3 different text files. I would like to know the moment values (black circle areas) at Field = 0 for the three curves. Could someone help me solve this?
Thank you in advance
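Assuming each text file provides (field, moment) columns, the moment at Field = 0 is just a linear interpolation between the two samples that bracket zero, which `np.interp` does directly (the numbers below are toy stand-ins for one file's data):

```python
import numpy as np

# toy (field, moment) samples standing in for one text file's columns
field = np.array([-2.0, -1.0, 1.0, 2.0])
moment = np.array([-0.8, -0.5, 0.5, 0.8])

# moment at Field = 0: linear interpolation between the bracketing samples
m_at_zero = np.interp(0.0, field, moment)
```

`np.interp` requires the field values to be sorted ascending, so sort each file's columns by field first if needed.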

How to find roots of very complex equation on Python?
How do I solve this (f(r) = 0) in Python?
    from math import pi, cos, sqrt

    def f(r):
        return (6*r**2*(pi - 1/cos(2.4e-6*sqrt(3)/r))
                - 3*r**2*(2*cos(pi/6 - 1/cos(2.4e-6*sqrt(3)/r))*cos(1/cos(2.4e-6*sqrt(3)/r))
                          - pi/2 + 1/cos(2.4e-6*sqrt(3)/r))
                - (2.88e-5*sqrt(3) + 0.000207846096908265)*cos(pi/6 - 1/cos(2.4e-6*sqrt(3)/r))
                + 5.19615242270663e-10)
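A robust generic recipe for a messy scalar equation like this (shown on a stand-in function, since I'm not certain of every sign in the original expression): sample f on a grid, find sign changes, and let scipy's Brent solver polish each bracket:

```python
import numpy as np
from scipy.optimize import brentq

def f(r):
    return np.cos(r) - 0.3 * r   # stand-in; substitute the real expression

# bracket sign changes on a grid, then refine each bracket with brentq
r_grid = np.linspace(0.1, 10.0, 1000)
vals = f(r_grid)
roots = [brentq(f, a, b)
         for a, b, fa, fb in zip(r_grid[:-1], r_grid[1:], vals[:-1], vals[1:])
         if fa * fb < 0]
```

The grid range and density are the things to adapt: they must cover the physically sensible r values and be fine enough that no pair of roots hides between adjacent grid points.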

How do I translate this IDL Code to Python? Interpolation
I have the following IDL code which azimuthally averages over the power spectrum:
    ; interpolate onto polar coordinates for each frequency
    r = findgen(256)
    theta = findgen(256)/255.*2*!pi
    x0 = 255
    y0 = 255
    xpolar = x0 + r#cos(theta)
    ypolar = y0 + r#sin(theta)
    ppolar = fltarr(256,256,512)
    for i=0,511 do ppolar(*,*,i) = interpolate(reform(p1(*,*,i)), xpolar, ypolar)
    ; average over angle
    pow = total(ppolar,2)/256.
p1 is a 512*512*512 array with 512*512 k-space points and 512 frequencies, i.e. p ~ (kx, ky, omega). I am not sure how to translate the interpolation line to Python code, especially which interpolation function to use, because I am not sure how I have to change the grids so they work in Python notation.
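A possible numpy/scipy translation: IDL's `interpolate()` corresponds to `scipy.ndimage.map_coordinates` (`order=1` matches IDL's bilinear default), and `r # cos(theta)` for two vectors is an outer product. The sketch below processes one 2-D frame at a time, so you would loop it over the 512 frequencies; array layout assumptions are noted in the comments:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# shapes follow the IDL snippet: 256 radii, 256 angles, centre at (255, 255)
r = np.arange(256, dtype=float)
theta = np.arange(256) / 255.0 * 2.0 * np.pi
x0 = y0 = 255.0
xpolar = x0 + np.outer(r, np.cos(theta))   # IDL: x0 + r # cos(theta)
ypolar = y0 + np.outer(r, np.sin(theta))

def azimuthal_average(frame):
    """Average one 2-D frame over angle; order=1 is bilinear, as in IDL."""
    polar = map_coordinates(frame, [ypolar, xpolar], order=1)  # (row, col) = (y, x)
    return polar.mean(axis=1)

# sanity check on a radially symmetric frame: profile[r] should be ~r
frame = np.sqrt((np.arange(512)[None, :] - 255.0) ** 2
                + (np.arange(512)[:, None] - 255.0) ** 2)
profile = azimuthal_average(frame)
```

The one trap is the coordinate order: `map_coordinates` expects (row, column) = (y, x), whereas the IDL call takes (x, y).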

How do I interpret the result of a 2D fourier Transform on an Image?
I am having trouble understanding the result of the 2D Fourier transform on images. Are the indices in the resulting matrix the horizontal and vertical frequencies respectively, of the image? How can I extract the frequencies that are present in the image from the matrix? As I recall in the 1D case if one were to Fourier transform a signal, one would get a spectrum representing the Magnitude of each frequency in said signal. How does it work for images? How do I interpret the result? Sample code:
    img = imread(image);
    A = rgb2gray(img);
    X = fft2(A);
How do I interpret the X matrix in this case for example? Thanks in Advance!
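A numpy analogue may build intuition (this is an illustration, not the MATLAB code from the question): entry [u, v] of the 2-D FFT output is the coefficient of the component with v cycles horizontally and u cycles vertically across the image, with conjugate mirrors at [N-u, N-v]. An image that is a pure horizontal cosine therefore puts all its energy at one column index:

```python
import numpy as np

# a pure horizontal cosine with k cycles across the width, constant vertically
N, k = 64, 5
x = np.arange(N)
img = np.cos(2 * np.pi * k * x / N)[None, :].repeat(N, axis=0)

X = np.fft.fft2(img)
mag = np.abs(X)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(mag), mag.shape))
# peak is (0, 5): vertical frequency 0, 5 cycles horizontally;
# the conjugate mirror sits at (0, N - 5)
```

So to read off "which frequencies are present", look at where `abs(X)` is large; MATLAB's `fftshift` (or `np.fft.fftshift`) recenters the zero frequency for easier viewing.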

How can I compare the performance of a 32-bit DSP and a 24-bit DSP
I want to compare the performance of the ADSP-BF706 and the DSP56321. But I found that they have different core word sizes, and I know that the ADSP-BF706 can process 8-, 16- and 32-bit words. What can I do with the DSP56321 (24-bit) to make the comparison at the same word size?
BTW, what do the 32K and the 24-bit mean in "32K x 24-bit RAM"?
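On the second question, "32K x 24-bit" is conventionally read as 32K addressable words, each 24 bits wide; the arithmetic is just:

```python
# 32K x 24-bit RAM: 32 * 1024 words, each 24 bits wide
words = 32 * 1024      # 32768 addressable words
bits = words * 24      # total storage in bits
kib = bits / 8 / 1024  # same capacity expressed in KiB
```

This comes to 96 KiB of raw capacity, though on a 24-bit DSP it is usually more natural to think in words than in bytes.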

Database of natural and whispered voice
I'd like to know where I can find samples of natural and whispered speech (male and female).
Thank you.