Solving a large-scale symmetric linear system with matrix inertia (Python)
My requirements are to solve the following linear system and determine the LHS matrix inertia using Python:
(Matrix inertia = (p, n, z), the number of positive, negative and zero eigenvalues of the LHS matrix.)
The solution to this system yields the search direction for a primal-dual interior point optimisation.
Each term in this system has the following properties:
n >> 1000
m < 10
W = [n x n] (Sparse)
Sigma = [n x n] (Sparse)
delta_w_I = [n x n] (Sparse)
A = [n x m]
delta_c = [m x m]
dx = [n x 1]
d_lambda = [m x 1]
delta_phi = [n x 1]
A_lambda = [n x 1]
c = [m x 1]
W represents the Hessian and must be approximated using L-BFGS, which only computes the inverse W^-1.
What steps do I need to take to solve this sparse system efficiently in python whilst also calculating the matrix inertia?
I have read about LDL^T factorisation, but it doesn't work for me, as I do not wish to invert my W term and this method requires both W and W^-1 to solve the system and determine the matrix inertia.
(I have tried to install the Harwell Subroutine Library (HSL) on my machine with the HSL.py wrapper, but I find it impossible to do so.)
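Without HSL, one building block can at least be demonstrated in pure SciPy: recovering the inertia from a symmetric indefinite LDL^T factorisation. By Sylvester's law of inertia, the block-diagonal factor D has the same (p, n, z) as the original matrix. Note that `scipy.linalg.ldl` is dense only, so this is a small illustrative sketch rather than a large-scale answer, and the matrix `K` below is made up:

```python
import numpy as np
from scipy.linalg import ldl

def inertia_from_ldl(K):
    # K = L D L^T with L a (permuted) unit lower-triangular factor; by
    # Sylvester's law of inertia, D has the same inertia as K.
    L, D, perm = ldl(K)
    eigs = np.linalg.eigvalsh(D)  # D is block diagonal with 1x1 / 2x2 blocks
    tol = 1e-12 * max(1.0, np.abs(eigs).max())
    p = int(np.sum(eigs > tol))
    n_neg = int(np.sum(eigs < -tol))
    z = len(eigs) - p - n_neg
    return p, n_neg, z

# Made-up symmetric indefinite example.
K = np.array([[2.0, 0.0, 1.0],
              [0.0, -3.0, 0.0],
              [1.0, 0.0, 0.0]])
print(inertia_from_ldl(K))  # (1, 2, 0)
```

For the actual large sparse case, the same idea requires a sparse symmetric indefinite factorisation (which is what HSL routines such as MA57 provide), so this only shows where the inertia comes from, not how to get it at scale.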
See also questions close to this topic

One-liner to make nested lists into one list
    j = lambda l: reduce(lambda a, b: a + b, l)
    k = lambda l: map(lambda z: j(z), map(lambda x: k(x) if type(x) == list else [x], l))
TypeError: it prints [1, 14] for print k([1,[2,[3,4],5]]) instead of the flattened list.
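For what it's worth, in Python 3 `map` returns a lazy iterator and `reduce` lives in `functools`, which alone changes what this one-liner produces. A plain recursive flatten, sketched below, is the usual approach and is much easier to debug:

```python
def flatten(lst):
    # Recursively flatten arbitrarily nested lists into one flat list.
    out = []
    for item in lst:
        if isinstance(item, list):
            out.extend(flatten(item))
        else:
            out.append(item)
    return out

print(flatten([1, [2, [3, 4], 5]]))  # [1, 2, 3, 4, 5]
```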

Calculator based on wxPython
I am working on a calculator using wxPython, but the problem is how to display the value in the TextCtrl:

    self.nameTxt = wx.TextCtrl(self, wx.ID_ANY, "", pos=(10, 20), size=(260, 30))

    #+++++++++++++++++++++++++ Button1 +++++++++++++++++++++++++
    self.clickcount1 = 1
    one = self.clickcount1
    getBtn = wx.Button(self, self.clickcount1, label="1", pos=(10, 60), size=(40, 40))
    btn.Bind(wx.EVT_BUTTON, lambda btnClick, temp=button_name: self.OnButton(btnClick(1), temp))

RGB to HSI Conversion - Hue always calculated as 0
So I've been trying to create this RGB to HSI conversion algorithm for a project I'm working on but I have run into several roadblocks while doing it.
I've so far narrowed the problems down to two possible issues:
- The program will not detect which of the two values compared in the if-statement is true and just defaults to the initial if-statement.
- The program is not calculating the correct values when calculating the hue of the image, as it always defaults to the inverse cosine's default value.
Here is the code:
    import cv2
    import numpy as np

    def RGB_TO_HSI(img):
        with np.errstate(divide='ignore', invalid='ignore'):
            bgr = cv2.split(img)
            intensity = np.divide(bgr[0] + bgr[1] + bgr[2], 3)
            saturation = 1 - 3 * np.divide(np.minimum(bgr[2], bgr[1], bgr[0]), bgr[2] + bgr[1] + bgr[0])

            def calc_hue(bgr):
                blue = bgr[0]
                green = bgr[1]
                sqrt_calc = np.sqrt(((bgr[2] - bgr[1]) * (bgr[2] - bgr[1])) + ((bgr[2] - bgr[0]) * (bgr[1] - bgr[0])))
                if green.any >= blue.any:
                    hue = np.arccos(1/2 * ((bgr[2] - bgr[1]) + (bgr[2] - bgr[0])) / sqrt_calc)
                else:
                    hue = 360 - np.arccos(1/2 * ((bgr[2] - bgr[1]) + (bgr[2] - bgr[0])) / sqrt_calc)
                hue = np.int8(hue)
                return hue

            hue = calc_hue(bgr)
            hsi = cv2.merge((intensity, saturation, calc_hue(bgr)))
Thanks in advance for any tips or ideas
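One observation that may help narrow this down: `green.any` and `blue.any` without parentheses are bound-method objects (always truthy), so the branch is never an elementwise choice. `np.where` selects per pixel instead. The `r`, `g`, `b` arrays below are made-up float channels for illustration, not the question's image:

```python
import numpy as np

# Made-up float channels standing in for the split image planes.
r = np.array([0.9, 0.2])
g = np.array([0.5, 0.1])
b = np.array([0.1, 0.6])

num = 0.5 * ((r - g) + (r - b))
den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
theta = np.arccos(num / den)

# Hue in radians: theta where g >= b, otherwise the reflex angle,
# chosen elementwise rather than with a single if-statement.
hue = np.where(g >= b, theta, 2 * np.pi - theta)
```

Also note that `np.int8(hue)` truncates radian-valued hues to 0 or a small integer, which would explain hues that always look like the default.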

Generating a lookat matrix that rotates around a different point
Is there some way of generating a lookat matrix that rotates one object around another point until the original object is looking at a target object?
I know you can create a standard lookat matrix using cross products that will rotate and align the source object so that it's looking directly at the target object.
But what if I want to rotate the source object around a point other than its own origin? The source object still needs to be aligned so that it's pointing at the target object, except now it needs to be rotated around another point in 3D space to make that happen.
Is there any way to do this using vector/matrix math?
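The steps above can be sketched with vector/matrix math: build the look-at basis from cross products, then conjugate the rotation with a translation to the pivot. The conventions here are assumptions (forward is the third column, z-up), and the function names are illustrative:

```python
import numpy as np

def look_at_rotation(eye, target, up=np.array([0.0, 0.0, 1.0])):
    # Standard cross-product construction of an orthonormal basis whose
    # third axis points from eye towards target.
    f = target - eye
    f = f / np.linalg.norm(f)
    r = np.cross(up, f)
    r = r / np.linalg.norm(r)
    u = np.cross(f, r)
    return np.column_stack((r, u, f))  # orthonormal rotation matrix

def rotate_about_pivot(point, R, pivot):
    # Rotating about an arbitrary pivot: translate the pivot to the origin,
    # apply the rotation, translate back.
    return pivot + R @ (point - pivot)

R = look_at_rotation(np.zeros(3), np.array([1.0, 0.0, 0.0]))
new_pos = rotate_about_pivot(np.zeros(3), R, np.array([0.0, 2.0, 0.0]))
```

The distance from the pivot is preserved, so the source object swings on an arc around the pivot while the rotation aligns its forward axis.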

Multiplying a matrix by the columns of another matrix (MATLAB)
I am trying to multiply a matrix by the columns of another matrix and gain the sum of the multiplications.
    deltaweight{j} = deltanode{j}(1).*layerinputs{j} + deltanode{j}(2).*layerinputs{j} + deltanode{j}(3).*layerinputs{j} + deltanode{j}(4).*layerinputs{j};
This is a working version, where deltanode{j} is (4 columns, 60000 rows) and layerinputs{j} is (10 columns, 60000 rows).
I am trying to do this while the number of columns of each matrix may change, so that it works for any configuration. This can be done using a for loop and an accumulating sum, but I am trying to avoid for loops to vectorise my code.
    for k = 1:layersize(4)
        deltaweight{j} = deltaweight{j} + deltanode{j}(k).*layerinputs{j};
    end
layersize is an array that stores, at that location, the number of columns of the deltanode cell (4 here).
    deltanode{j} = rand(15,4);
    layerinputs{j} = rand(15,4);
for testing. j can be any number
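As a cross-check in NumPy (illustrative only; note that MATLAB's linear indexing deltanode{j}(k) is column-major, so k = 1..4 walks down the first column), the loop collapses because every term is a scalar times the same matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
deltanode = rng.random((15, 4))
layerinputs = rng.random((15, 4))

# MATLAB's deltanode{j}(k), k = 1..4, hits the first column (column-major
# linear indexing); deltanode[:4, 0] mirrors that here.
scalars = deltanode[:4, 0]

# Loop version, as in the question: accumulate scalar * matrix.
acc = np.zeros_like(layerinputs)
for s in scalars:
    acc += s * layerinputs

# Vectorised: every term shares the same matrix, so the scalar sum factors out.
vec = scalars.sum() * layerinputs
```

The same factoring works in MATLAB: the accumulated sum equals sum(deltanode{j}(1:layersize(4))) .* layerinputs{j}.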

Size of correlation matrix using matshow
I am trying to format the matrix better. My current code gives me output in the format seen in the image:
    import numpy as np
    import matplotlib.pyplot as plt

    plt.figure(figsize=(10, 10))
    plt.matshow(final.corr(), fignum=1)
    plt.xticks(range(len(final.columns)), final.columns)
    plt.yticks(range(len(final.columns)), final.columns)
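For reference, a common way to tidy a matshow correlation plot is to rotate the long tick labels and attach a colorbar sized to the matrix. This sketch uses a small hypothetical `final` DataFrame, since the real one isn't shown:

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

# Hypothetical stand-in for `final`.
final = pd.DataFrame(np.random.rand(50, 5), columns=list("abcde"))

fig, ax = plt.subplots(figsize=(6, 6))
im = ax.matshow(final.corr())
ax.set_xticks(range(len(final.columns)))
ax.set_yticks(range(len(final.columns)))
ax.set_xticklabels(final.columns, rotation=90)  # rotate long labels
ax.set_yticklabels(final.columns)
fig.colorbar(im, fraction=0.046, pad=0.04)  # colorbar scaled to the matrix
```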

Optimize Gif loading time in React Native app
I am loading multiple Gifs at a time in a React Native app. The Gif URLs are fetched from my DB and then get rendered in Image components in a FlatList, 10 at a time. The user can scroll down to load more.
Each gif usually takes 1.5 s to 10 s to load, which is too slow (that is the actual image loading time, after the data has been received from the DB).
I know Gifs can be heavy, but they load almost instantly in FB messenger and other apps... Is there a way to speed up the loading?
My only idea so far is to use lower resolution Gifs, but I can clearly see the difference in image quality...
As a note, gif data is stored in my DB as Giphy gifObjects, so I do have easy access to a bunch of other sizes and formats that may work better (WebP, MP4), though no luck so far...

Storing each L-BFGS-B iteration to visualize the algorithm
I have a 2 part question:
1: I am using L-BFGS-B to solve a constrained optimization problem with bounds such that every point must be within a sphere of radius 1. My start point is at the origin. I would like to visually show how the algorithm converges towards the final point. The only way I know how to do so is to somehow store the output of each iteration of the L-BFGS-B algorithm. Is this possible? If not, are there any other solutions for what I am trying to do?
2: How can I set the bounds of the L-BFGS-B algorithm to mirror those of a sphere of radius 1? As you can see in the code I have below, the bounds are those of a cube.
Here is the code for the specific function I am solving for:
    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D
    import math
    from itertools import product, combinations
    from numpy.linalg import inv
    import scipy.optimize as opt
    import scipy
    from scipy.optimize import minimize

    m = 3
    n = 3
    global A, b
    A = np.random.randint(10, size=(m, n))
    b = np.random.randint(10, size=(m, 1))
    x_c = np.zeros((n, 1))
    mag = lambda x: np.linalg.norm(x)
    f = lambda x: mag(A@x - b)**2

    # Change this to be row specific, e.g. A[0], A[1], A[2] are A11, A12, A13.
    # Then recreate this for rows 2 and 3. This will be df/dx.
    dfx1 = lambda x: 2*A[0,0]*(A[0,0]*x[0] + A[0,1]*x[1] + A[0,2]*x[2] - b[0]) + \
                     2*A[1,0]*(A[1,0]*x[0] + A[1,1]*x[1] + A[1,2]*x[2] - b[1]) + \
                     2*A[2,0]*(A[2,0]*x[0] + A[2,1]*x[1] + A[2,2]*x[2] - b[2])
    dfx2 = lambda x: 2*A[0,1]*(A[0,0]*x[0] + A[0,1]*x[1] + A[0,2]*x[2] - b[0]) + \
                     2*A[1,1]*(A[1,0]*x[0] + A[1,1]*x[1] + A[1,2]*x[2] - b[1]) + \
                     2*A[2,1]*(A[2,0]*x[0] + A[2,1]*x[1] + A[2,2]*x[2] - b[2])
    dfx3 = lambda x: 2*A[0,2]*(A[0,0]*x[0] + A[0,1]*x[1] + A[0,2]*x[2] - b[0]) + \
                     2*A[1,2]*(A[1,0]*x[0] + A[1,1]*x[1] + A[1,2]*x[2] - b[1]) + \
                     2*A[2,2]*(A[2,0]*x[0] + A[2,1]*x[1] + A[2,2]*x[2] - b[2])
    dfx = lambda x: np.array([dfx1(x), dfx2(x), dfx3(x)])

    # Calculate Lipschitz constant, step size, and magnitude
    L = mag(dfx(x_c))
    h = 1/L
    magnitude = mag(dfx(x_c))
To minimize this objective function, I am using scipy.optimize. However, as you will see in the results below, my bounds are incorrectly set:
    solution = minimize(f, x_c, method='L-BFGS-B', bounds=((-1, 1), (-1, 1), (-1, 1)), options={'eps': h})
    xfinal = solution.x[0]
    yfinal = solution.x[1]
    zfinal = solution.x[2]
    print(solution)

    >       fun: 22.864580328930305
    >  hess_inv: <3x3 LbfgsInvHessProduct with dtype=float64>
    >       jac: array([ 3.32420903, 4.26973116, 14.85375705])
    >   message: b'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
    >      nfev: 96
    >       nit: 7
    >    status: 0
    >   success: True
    >         x: array([0.08133824, 1. , 0.88373222])
To see this visually, I set up the following wireframe:
    fig = plt.figure(figsize=(8, 8))
    ax = fig.gca(projection='3d')
    ax.set_xlim(-1, 1)
    ax.set_ylim(-1, 1)
    ax.set_zlim(-1, 1)
    u, v = np.mgrid[0:2*np.pi:20j, 0:np.pi:10j]
    ball1 = np.cos(u)*np.sin(v)
    ball2 = np.sin(u)*np.sin(v)
    ball3 = np.cos(v)
    ax.plot_wireframe(ball1, ball2, ball3, color="white")
    ax.scatter([0], [0], [0], color="y", s=100, label="Initial Point")
    ax.scatter([xfinal], [yfinal], [zfinal], color="r", s=100, label="Final Point")
    ax.legend()
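For part 1, `scipy.optimize.minimize` accepts a `callback` that is invoked once per iteration with the current iterate, which makes storing the path straightforward. A minimal sketch (with its own random A and b, mirroring the question's setup) is below. For part 2, note that L-BFGS-B only supports box bounds; a true norm constraint needs a method that accepts general constraints, such as SLSQP:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in objective matching the question's f, with random A and b.
rng = np.random.default_rng(1)
A = rng.integers(1, 10, size=(3, 3)).astype(float)
b = rng.integers(1, 10, size=3).astype(float)
f = lambda x: np.linalg.norm(A @ x - b) ** 2

path = []  # one entry per L-BFGS-B iteration
res = minimize(
    f,
    np.zeros(3),
    method="L-BFGS-B",
    bounds=[(-1, 1)] * 3,
    callback=lambda xk: path.append(xk.copy()),  # called once per iteration
)
iterates = np.array(path)  # shape (n_iterations, 3), ready to plot
```

Each row of `iterates` can then be scattered onto the existing wireframe to show the convergence path.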

MATLAB memory management of indexing of sparse matrices in system of equations
I want to know how MATLAB does memory management for the following system of equations that includes indexing of the sparse matrix.
x = A(indices,indices) \ b(indices);
A is a sparse symmetric matrix, b is a column vector, and indices holds the indices of the elements of A to be included in the system of equations Ax = b.

I think A is stored as CSC (compressed sparse column). It is then stored temporarily in memory with different CSC data, and the new CSC is finally used in the system of equations with b(indices), similar to the following:

    Aindexed = A(indices,indices); % New symmetric sparse matrix
    bindexed = b(indices);
    x = Aindexed \ bindexed;
Does MATLAB have special sparse solvers with matrix indexing? I think it is more likely that the sparse matrix has to be indexed prior to being passed to the solver, rather than MATLAB doing the indexing inside the solver. These are just my guesses. Could someone kindly shed some light on this subject? Thank you.
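I can't speak to MATLAB internals, but for comparison, SciPy behaves like the two-step version: fancy indexing materialises a new sparse submatrix before the solver ever sees it. A small sketch with a made-up diagonally dominant matrix (so the submatrix is guaranteed nonsingular):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Made-up symmetric sparse matrix, diagonally dominant on purpose.
n = 6
A = sp.random(n, n, density=0.5, random_state=0, format="csc")
A = (A + A.T) + 20 * sp.identity(n, format="csc")
b = np.arange(n, dtype=float)

# SciPy analogue of x = A(indices,indices) \ b(indices): indexing builds
# a new sparse submatrix, which is then handed to the solver.
indices = np.array([0, 2, 3, 5])
A_sub = A[indices, :][:, indices].tocsc()  # new CSC submatrix
x = spsolve(A_sub, b[indices])
```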

This is confusing to me! Summing sparse vectors stored as dictionaries in Python
Sparse vectors: how can we add these two Python dictionaries to get the output below?

    A = {5:5, 6:7, 7:8}
    B = {1:2, 5:6, 6:4, 4:5}

The output should be: {1:2, 4:5, 5:11, 6:11, 7:8}
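A standard approach (sketched below) is to copy one dictionary and fold the other in with dict.get, adding values on shared keys:

```python
A = {5: 5, 6: 7, 7: 8}
B = {1: 2, 5: 6, 6: 4, 4: 5}

# Copy one vector, then fold the other in, summing values on shared keys.
out = dict(A)
for k, v in B.items():
    out[k] = out.get(k, 0) + v

print(out == {1: 2, 4: 5, 5: 11, 6: 11, 7: 8})  # True
```

`collections.Counter(A) + collections.Counter(B)` does the same in one line when all values are positive.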

Efficiently serialize/deserialize a SparseDataFrame
Has anyone ever efficiently serialized/deserialized a pandas SparseDataFrame?
    import pandas as pd
    import scipy
    from scipy import sparse

    dfs = pd.SparseDataFrame(scipy.sparse.random(1000, 1000).toarray())  # just for testing
pickle is not an answer
It's outrageously slow.
    import pickle, time

    start = time.time()
    # serialization
    msg = list(pickle.dumps(dfs, protocol=pickle.HIGHEST_PROTOCOL))
    # deserialization
    dfs = pickle.loads(bytes(msg))
    stop = time.time()
    stop - start  # 0.4420337677001953
    # This is with Python 3.5, so it's using cPickle
As a comparison, msgpack is faster on the dense version:
    df = dfs.to_dense()
    start = time.time()
    # serialization
    msg = list(df.to_msgpack(compress='zlib'))
    # deserialization
    df = pd.read_msgpack(bytes(msg))
    stop = time.time()
    stop - start  # 0.09514737129211426
msgpack
Msgpack would be the answer but I can't find an implementation for SparseDataFrame (related)
    # serialization
    dfs.to_msgpack(compress='zlib')
    # Returns: NotImplementedError: msgpack sparse frame is not implemented
coordinate format
msgpack on a coordinate format via scipy.sparse.coo_matrix seems to be worth considering, but conversion to scipy.sparse.coo_matrix is slow:

    from scipy.sparse import coo_matrix
    import msgpack

    start = time.time()
    # serialization
    columns = dfs.columns
    shape = dfs.shape
    start_to_coo = time.time()
    dfc = dfs.to_coo()
    stop_to_coo = time.time()
    start_comprehension = time.time()
    row = [x.item() for x in dfc.row]
    col = [x.item() for x in dfc.col]
    data = [x.item() for x in dfc.data]
    stop_comprehension = time.time()
    start_packing = time.time()
    msg = list(msgpack.packb({'columns': list(columns), 'shape': shape, 'row': row, 'col': col, 'data': data}))
    stop_packing = time.time()
    # deserialization
    start_unpacking = time.time()
    dict = msgpack.unpackb(bytes(msg))
    stop_unpacking = time.time()
    columns = dict[b'columns']
    index = range(dict[b'shape'][0])
    dfc = coo_matrix((dict[b'data'], (dict[b'row'], dict[b'col'])), shape=dict[b'shape'])
    stop = time.time()

    print('total:         ' + str(stop - start))
    print(' to_coo:        ' + str(stop_to_coo - start_to_coo))
    print(' comprehension: ' + str(stop_comprehension - start_comprehension))
    print(' packing:       ' + str(stop_packing - start_packing))
    print(' unpacking:     ' + str(stop_unpacking - start_unpacking))

    # total:          0.2799222469329834
    #  to_coo:        0.22925591468811035
    #  comprehension: 0.02356100082397461  (cast needed: msgpack does not support all numpy formats)
    #  packing:       0.004893064498901367
    #  unpacking:     0.001984834671020508
From there it seems one needs to go through a dense format.
    start = time.time()
    dfs = pd.SparseDataFrame(dfc.toarray())
    stop = time.time()
    stop - start  # 2.8947737216949463
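One sparse-native route worth mentioning as an alternative (for the scipy matrix itself, not a SparseDataFrame answer): `scipy.sparse.save_npz`/`load_npz` serialise the underlying index and data arrays directly, with no dense round trip:

```python
import io
import numpy as np
import scipy.sparse as sp

# Made-up sparse matrix for illustration.
m = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)

buf = io.BytesIO()
sp.save_npz(buf, m)        # serialise the raw index/data arrays
payload = buf.getvalue()   # bytes to store or send

m2 = sp.load_npz(io.BytesIO(payload))
```

The payload can then be stored or shipped like any other bytes, and the DataFrame wrapper (columns, index) serialised separately.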