Solving a large-scale symmetric linear system and determining matrix inertia (Python)
My requirements are to solve the following linear system and determine the inertia of the LHS matrix using Python:
(Matrix inertia = (p, n, z), the number of positive, negative and zero eigenvalues of the LHS matrix.)
The solution to this system gives the search direction for a primal-dual interior point optimisation.
The system has the following block structure (reconstructing the missing figure; this is the standard primal-dual KKT system that these terms describe):

[ W + Sigma + delta_w*I        A     ] [    dx    ]      [ delta_phi + A_lambda ]
[          A^T             -delta_c  ] [ d_lambda ]  = - [          c           ]

Each term has the following properties:
n >> 1000
m < 10
W = [n x n] (Sparse)
Sigma = [n x n] (Sparse)
delta_w_I = [n x n] (Sparse)
A = [n x m]
delta_c = [m x m]
dx = [n x 1]
d_lambda = [m x 1]
delta_phi = [n x 1]
A_lambda = [n x 1]
c = [m x 1]
W represents the Hessian and must be approximated using L-BFGS, which only computes the inverse W^-1.
What steps do I need to take to solve this sparse system efficiently in python whilst also calculating the matrix inertia?
I have read about LDL^T factorisation, but it does not fit my case: I do not wish to invert my W term, yet that method needs W itself (not just W^-1) both to solve the system and to determine the matrix inertia.
(I have tried to install the Harwell Subroutine Library on my machine with the HSL.py wrapper, but I have found it impossible to do so.)
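For the inertia part: by Sylvester's law of inertia, an LDL^T factorisation gives (p, n, z) directly from the block-diagonal factor D, with no eigendecomposition of the full matrix. A minimal dense sketch with scipy.linalg.ldl (illustration only; at the scale in the question you would want a sparse symmetric-indefinite factorisation such as HSL MA57, MUMPS, or qdldl, which report the inertia as a by-product of the factorisation):

```python
import numpy as np
from scipy.linalg import ldl

def inertia(K, tol=1e-10):
    """Inertia (p, n, z) of a symmetric matrix K via LDL^T.

    By Sylvester's law of inertia, K and the block-diagonal factor D
    have eigenvalues with the same signs, and D (1x1/2x2 blocks) is
    cheap to analyse. Dense sketch only.
    """
    _, D, _ = ldl(K)
    eig = np.linalg.eigvalsh(D)  # eigenvalues of the block-diagonal D
    p = int(np.sum(eig > tol))
    n = int(np.sum(eig < -tol))
    return p, n, eig.size - p - n

print(inertia(np.array([[0.0, 1.0], [1.0, 0.0]])))  # (1, 1, 0)
```

The same factorisation can be reused to solve the system, so the solve and the inertia come from one factorisation of the full KKT matrix.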
See also questions close to this topic

How do we call bash functions from python
Let's say I have a bash function that I source into a shell.
# cat sample.sh
function func()
{
    echo "Custom env : $CUSTOM_ENV"
}
Now I source this script in the bash shell:
#source sample.sh
Then i define:
export CUSTOM_ENV="abc"
and then call func() from the bash shell, it displays:
# func
Custom env : abc
Now, if I am calling a Python script from the same shell, I want to invoke the function func() from the Python script. Is there any way to achieve this?
What I tried:
- os.system('func'): doesn't work
- subprocess.check_output('func', shell=True, env=os.environ.copy()): doesn't work
Any guidance?
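A shell function defined by sourcing lives only inside that shell process, not in the environment, so a child process cannot call func directly; the child has to be a bash that sources the file itself. A self-contained sketch (it recreates sample.sh first):

```python
import os
import subprocess
from pathlib import Path

# Recreate the question's sample.sh so the example is self-contained.
Path("sample.sh").write_text(
    'function func() {\n  echo "Custom env : $CUSTOM_ENV"\n}\n'
)

# Start a new bash that sources the file and then invokes the function.
env = dict(os.environ, CUSTOM_ENV="abc")
out = subprocess.check_output(
    ["bash", "-c", "source sample.sh && func"],
    env=env,
    text=True,
)
print(out.strip())  # Custom env : abc
```

Alternatively, if the function is already defined in the parent shell, running `export -f func` there makes it visible to child bash shells, so `subprocess.check_output(["bash", "-c", "func"])` would then work.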

How to run a file as another user in Python subprocess
I am creating a Python script which calls a shell script and runs it in a terminal using subprocess. My problem is that I want to run that shell script as another user. My code is given below:

import subprocess

filename = '/mount/test.sh'
p = subprocess.Popen(filename, shell=True, stdout=subprocess.PIPE)
out, err = p.communicate()
print(out)

Can anyone tell me how to run my shell script as another user?
Note:
Only the subprocess part should be run by another user, not the main python script
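One common approach is to prefix the command with `sudo -u`, so only the child command switches user while the main Python process stays as-is. A sketch (it assumes sudo is configured to let the current account run the script as the target user; the path is the question's, the user name is a placeholder):

```python
import subprocess

def run_as(user, script, dry_run=False):
    # sudo -u switches the user for just this one command; the main
    # Python process keeps running as the current user. The invoking
    # account needs sudo rights for `user` (or NOPASSWD configured).
    cmd = ["sudo", "-u", user, "bash", script]
    if dry_run:
        return cmd  # let callers inspect the command without sudo rights
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout

# Hypothetical user name; the script path is taken from the question.
print(run_as("otheruser", "/mount/test.sh", dry_run=True))
```

`su -c` is an alternative when sudo is not available, but it prompts for the target user's password.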

Build new logic adapter
I have 3 different types of CSV files containing different data, and I want to write a logic adapter for each CSV file, with conditions like:

if the statement has confidence < 007, show the matching information; else if ..., show another response; else show a different response

So how can I build this logic using ChatterBot?
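The three-way branch itself is plain Python; in ChatterBot it would live inside a logic adapter's process() method, which returns a statement carrying a confidence. A standalone sketch (the 0.7 threshold, the second condition, and the response strings are all hypothetical placeholders, since the question's "007" threshold is ambiguous):

```python
# Hypothetical threshold; the question's "007" could mean 0.07 or 0.7.
THRESHOLD = 0.7

def choose_response(confidence, matching, other, different):
    """Three-way branch described in the question.

    In a real ChatterBot LogicAdapter this logic would sit inside
    process(), which returns a Statement carrying the chosen text.
    """
    if confidence < THRESHOLD:
        return matching
    elif confidence < 1.0:  # hypothetical second condition
        return other
    return different

print(choose_response(0.5, "match", "other", "fallback"))  # match
```

One adapter subclass per CSV file can then load its own file in __init__ and reuse this branch.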

cv2.estimateRigidTransform minimal number of points?
What is the minimal number of points needed for cv2.estimateRigidTransform? As I understand it, with fullAffine=False it has 4 degrees of freedom, so 2 points should be sufficient. However, passing 2 numpy arrays as input:

src_pts_subset.shape   (2, 2)
tgt_pts_subset.shape   (2, 2)
type(src_pts_subset)   <class 'numpy.ndarray'>
type(tgt_pts_subset)   <class 'numpy.ndarray'>
src_pts_subset.dtype   int64
tgt_pts_subset.dtype   int64

to

m = cv2.estimateRigidTransform(src_pts, tgt_pts, fullAffine=False)

gives me None.
How to Generate a Random Covariance Matrix from the Wishart Distribution
I need to generate an n x n positive-definite covariance matrix for a project. Drawing from the Wishart distribution was recommended. How do I generate a random covariance matrix in R, ideally using the Wishart distribution? I've tried rwishart() to get values, but need more help. Thanks.
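The question asks for R (where the base function is stats::rWishart), but for reference, the same draw in Python with scipy.stats.wishart; with df >= n the sample is positive definite almost surely:

```python
import numpy as np
from scipy.stats import wishart

n = 4
rng = np.random.default_rng(0)

# Identity scale matrix and df >= n give a positive-definite sample.
S = wishart.rvs(df=n + 2, scale=np.eye(n), random_state=rng)

# A valid covariance matrix: symmetric with positive eigenvalues.
print(S.shape, np.allclose(S, S.T))  # (4, 4) True
```

Dividing by df gives samples whose expectation is the scale matrix, which is often what "random covariance matrix" means in practice.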

knn with cosine distances using sklearn
I'm very new to machine learning. I have some documents with their word2vec vectors, and I want to find a given document's category with the kNN algorithm using cosine distances in sklearn.
I've seen this link, but it is not clear to me. Can you help me understand what I should do?
Thanks in advance.
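For reference, scikit-learn's KNeighborsClassifier accepts metric='cosine' (which internally uses the brute-force neighbour search), so the document vectors can be used directly. A sketch with toy 2-D vectors standing in for word2vec embeddings:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy document vectors standing in for word2vec embeddings:
# class 0 points roughly along [1, 0], class 1 along [0, 1].
X = np.array([[1.0, 0.1], [0.9, 0.2], [0.8, 0.0],
              [0.1, 1.0], [0.2, 0.9], [0.0, 0.8]])
y = np.array([0, 0, 0, 1, 1, 1])

# metric='cosine' means neighbours are ranked by cosine distance,
# which suits direction-based embeddings like word2vec.
clf = KNeighborsClassifier(n_neighbors=3, metric="cosine")
clf.fit(X, y)

print(clf.predict([[1.0, 0.05]]))  # [0]
```

For many documents, normalising the vectors and using the default Euclidean metric gives the same ranking while allowing faster tree-based search.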

How to configure a cluster that is write optimized and with minimum resources consumption footprint
I am very new to Elasticsearch, and would like to start indexing several log files that are printed across components on the same machine, and across several machines, on top of the Event Viewer entries on each machine.
At times, with extended traces enabled, there could be a high volume of writes, and I would like to keep indexing as fast and lightweight as possible.
Searching the logs would be a very rare operation done by a single user at a time, and as long as it takes up to ~5 seconds I am OK with that.
My initial thought, if it is possible, is to allocate only a single index and shard per (machine, component, day) tuple, which would reside on the local machine itself. This would hopefully reduce node coordination to a minimum, and at query time the results would just need to be aggregated from all the nodes.
My question is: will this be possible (I plan to use Logstash to push data to ES), and is this even a good approach for my needs?
Thanks, Leon
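A write-optimised per-index settings sketch (the node name is a hypothetical placeholder; the keys are real Elasticsearch index settings: one shard, no replicas, a relaxed refresh interval, async translog durability, and shard-allocation filtering to pin the shard to the local node):

```json
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0,
    "index.refresh_interval": "30s",
    "index.translog.durability": "async",
    "index.routing.allocation.require._name": "node-machine1"
  }
}
```

These would be applied per (machine, component, day) index, e.g. via an index template; Logstash can generate such index names through its index pattern setting. Note that number_of_replicas 0 and async durability both trade safety for write speed.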

Why is the boolean logical operator ^ being used in this piece of example code from Effective Java?
I found this example code from Joshua Bloch's book, Effective Java. It's meant to demonstrate why you should avoid unnecessarily creating objects:

import java.util.regex.Pattern;

// Reusing expensive object for improved performance
public class RomanNumerals {
    // Performance can be greatly improved!
    static boolean isRomanNumeralSlow(String s) {
        return s.matches("^(?=.)M*(C[MD]|D?C{0,3})"
                + "(X[CL]|L?X{0,3})(I[XV]|V?I{0,3})$");
    }

    // Reusing expensive object for improved performance (Page 23)
    private static final Pattern ROMAN = Pattern.compile(
            "^(?=.)M*(C[MD]|D?C{0,3})"
            + "(X[CL]|L?X{0,3})(I[XV]|V?I{0,3})$");

    static boolean isRomanNumeralFast(String s) {
        return ROMAN.matcher(s).matches();
    }

    public static void main(String[] args) {
        int numSets = Integer.parseInt(args[0]);
        int numReps = Integer.parseInt(args[1]);
        boolean b = false;
        for (int i = 0; i < numSets; i++) {
            long start = System.nanoTime();
            for (int j = 0; j < numReps; j++) {
                // Change Slow to Fast to see the performance difference
                b ^= isRomanNumeralSlow("MCMLXXVI");
            }
            long end = System.nanoTime();
            System.out.println(((end - start) / (1_000. * numReps)) + " μs.");
        }
        // Prevents VM from optimizing away everything.
        if (!b) System.out.println();
    }
}
Why is the boolean logical operator ^ being used in the for loop inside the main method here?
Is it to prevent the compiler from optimizing away subsequent iterations (thereby compromising the measurement), since the result would anyway be the same?

Solving a sparse matrix using \ in matlab
I am trying to solve a problem of the form Ax = b, in which I have a tridiagonal matrix A and a full vector b. When doing x = A\b I get the warning:

Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 3.301735e-150.

I have theorised that this may be due to the sparsity of matrix A. Is there a more efficient built-in way of dealing with this in Matlab?
Initialize a numpy sparse matrix efficiently
I have an array with m rows whose values are arrays of column indices, bounded by a large number n. E.g.:
Y = [[1,34,203,2032],...,[2984]]
Now I want an efficient way to initialize a sparse numpy matrix X with dimensions (m, n) and values corresponding to Y (X[i, j] = 1 if j is in Y[i], 0 otherwise).
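One way, sketched with a toy Y: build COO triplets (row index, column index, value) in a single vectorised pass and convert to CSR, using scipy.sparse rather than a dense numpy array since n is large:

```python
import numpy as np
from scipy import sparse

# Toy version of the question's Y: row i lists the columns that are 1.
Y = [[1, 34, 203], [2984], [0, 5]]
m, n = len(Y), 3000

# Repeat each row index once per column it contains, then flatten the
# column lists; each (row, col) pair becomes a 1 entry.
rows = np.repeat(np.arange(m), [len(c) for c in Y])
cols = np.concatenate([np.asarray(c) for c in Y])
data = np.ones(cols.size, dtype=np.int8)

X = sparse.coo_matrix((data, (rows, cols)), shape=(m, n)).tocsr()
print(X[0, 34], X[1, 2984], X[0, 0])  # 1 1 0
```

COO is the natural format for bulk construction; the final .tocsr() gives a format efficient for row slicing and matrix products.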

How to obtain correct L and U matrices from LU decomposition of a sparse matrix A, without using scipy.sparse.linalg.splu()?
I have noticed that scipy.sparse.linalg.splu() does not let me decompose a sparse matrix A into L and U factors that I can use separately. The command ''merely'' allows me to decompose the matrix and reconstruct it later using the permutation matrices. However, for my code I need to decompose a sparse matrix A into sparse L and U factors and then be able to call them separately (without permutation matrices etc.). This does not work with scipy.sparse.linalg.splu(). I could use scipy.linalg.lu(), but I cannot apply it to a matrix A in sparse format. Are there any other methods for obtaining the correct L and U decomposition matrices from a sparse matrix A? Thanks in advance.
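One hedged workaround: splu does expose lu.L and lu.U as sparse matrices, and passing permc_spec='NATURAL' with diag_pivot_thresh=0 asks SuperLU to keep the natural column order and pivot on the diagonal. For matrices where that is numerically safe (e.g. diagonally dominant or symmetric positive definite), the permutations come out as the identity and A = L @ U directly:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Diagonally dominant example, so diagonal pivoting is safe.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))

# NATURAL column ordering + zero pivot threshold: SuperLU keeps the
# original row/column order whenever the diagonal pivot is usable.
lu = splu(A, permc_spec="NATURAL", diag_pivot_thresh=0.0)
L, U = lu.L, lu.U  # both sparse

print(np.allclose((L @ U).toarray(), A.toarray()))  # True
```

For a general matrix this is unsafe (a zero or tiny diagonal pivot will break the factorisation or its accuracy), so check lu.perm_r and lu.perm_c are the identity before relying on L @ U == A.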