Eigen: Is it possible to create a LeastSquareDiagonalPreconditioner-like preconditioner if I can only compute A^T*y and A*x?
I want to solve a least-squares-like system A^T * A * x = A^T * b (I'm implementing the Gauss-Newton method for a special problem).
I wrote special routines which allow me to compute the A * x and A^T * y products. With such routines it's easy to use matrix-free solvers, thanks to Eigen.
But my approach doesn't converge as well as Eigen::LeastSquaresConjugateGradient. I made a small test, and it looks like LeastSquareDiagonalPreconditioner speeds up convergence a lot.
My question is: how can I use LeastSquareDiagonalPreconditioner, or implement my own preconditioner, if I can only compute matrix products?
I'm not very familiar with preconditioning/conjugate-gradient concepts, by the way.
EDIT
For clarity: I want to use matrix-free solvers from Eigen with my product routines.
EDIT 2
The matrix-vector products were obtained by applying forward- and reverse-mode autodiff to some objective functions.
1 answer

The easiest might be to implement your own preconditioner class inheriting from DiagonalPreconditioner and implementing something like LeastSquareDiagonalPreconditioner::factorize(), but adapted to your type. Basically you need to compute:
m_invdiag(j) = 1./mat.col(j).squaredNorm();
for every column j, using a strategy similar to the one you already implemented for the product operators.
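To illustrate the suggestion above: since diag(A^T A)(j) equals ||A e_j||^2, the inverse diagonal can be recovered from the A*x product routine alone by feeding it unit vectors. A minimal sketch, where the tiny 3x2 matrix and the A_times routine are hypothetical stand-ins for the autodiff-based products (no Eigen types are used so the sketch stays self-contained):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical stand-in for the asker's A*x routine: here A is stored
// explicitly only so the sketch is self-contained; in the real setting
// A_times() would call the autodiff-based product instead.
static const int ROWS = 3, COLS = 2;
static const double A[ROWS][COLS] = {{1, 2}, {0, 3}, {4, 0}};

std::vector<double> A_times(const std::vector<double>& x) {
    std::vector<double> y(ROWS, 0.0);
    for (int i = 0; i < ROWS; ++i)
        for (int j = 0; j < COLS; ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

// Diagonal of A^T A from products only:
// diag(A^T A)(j) = ||A e_j||^2, where e_j is the j-th unit vector.
// The preconditioner stores the inverse of these values.
std::vector<double> least_squares_diag_inverse() {
    std::vector<double> invdiag(COLS);
    for (int j = 0; j < COLS; ++j) {
        std::vector<double> e(COLS, 0.0);
        e[j] = 1.0;                           // unit vector e_j
        std::vector<double> col = A_times(e); // j-th column of A
        double sq = 0.0;
        for (double v : col) sq += v * v;     // squaredNorm of the column
        invdiag[j] = 1.0 / sq;
    }
    return invdiag;
}
```

Each entry costs one A*x product, so building the preconditioner this way only pays off when the number of unknowns is moderate; the resulting invdiag plays the role of m_invdiag in DiagonalPreconditioner.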
See also questions close to this topic

How to make ShellExecute go though the whole computer files
I am using ShellExecute and I've noticed that it opens some of the applications that you have, but not all of them. It opens things like "Excel", "Notepad" and "MsPaint", but it doesn't open things like "Minecraft" or "Spotify". I read "Application Registration" and I realized that it only works in certain directories. My question is: how can I make ShellExecute go through my whole computer, all of the files and all of the applications?
I've tried setting it up so it would work on C:\, but it works only on this folder, not on the whole C:\, as you can see in the code below:
ShellExecuteA(NULL, NULL, url.c_str(), NULL, "C:\\Users\\ff", SW_SHOWNORMAL);

StringCch* functions and Cstring
I am using string-safe functions (StringCch*) in my code. However, in some functions I want to convert variables from WCHAR * to CString. This code is failing on the StringCchCopy calls:
CString sFile;
sFile.Preallocate(64);
StringCchCopy(sFile, 64, L"ini");
DWORD rv = sFile.GetAllocLength();
rv = sFile.GetLength();
TCHAR sVal[64] = {0};
StringCchCopy(sVal, 64, sFile.GetBuffer());
Easy answer I'm sure, but I'm a bit stumped!
And GetAllocLength() is returning 71. Shouldn't it return 64?
Is it necessary to delete an object that runs on a QThread if the finished signal of the QThread is connected to deleteLater?
So I have the following code in the constructor.
m_someObject = new SomeObject();
m_someObject->moveToThread(&m_thread);
m_thread.start();
connect(&m_thread, &QThread::finished, m_someObject, &QObject::deleteLater);
And I have the following code in the destructor.
m_thread.terminate(); while (m_thread.isRunning()) { }
Do I still have to delete m_someObject, despite having the finished signal of m_thread connected to the deleteLater slot of m_someObject?
How to check if a matrix is invertible in tensorflow?
In my TensorFlow graph, I would like to invert a matrix if it is invertible and do something with it. If it is not invertible, I'd like to do something else.
I could not find any way to check whether the matrix is invertible in order to do something like:
is_invertible = tf.is_invertible(mat)
tf.cond(is_invertible, f1, f2)
Is there such a thing as an is_invertible function in TensorFlow? I also considered using the fact that TensorFlow raises (though not every time) an InvalidArgumentError when I try to invert a non-invertible matrix, but I couldn't take advantage of this.
How can I scale the difference between two points?
I have two 300-dimensional vectors. I find the difference between the means of these vectors. The difference is about 3.76, but I want to scale this difference to 100. Can you help me with this, please?
1.Vec: [0.39962309971671345, 0.45341047747336644, 0.483112606808456, 0.1632926271746124, 0.38684101391315034,..............,0.5876959107677568]
2.Vec: [0.9060905881280155, 0.07734264476816265, 0.12666859394715477, 0.2791334302389997, 0.48362315271717266,................0.005429261424327865]
Difference: 3.7562449691785424
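If the intent is just to rescale so that the gap between the two means becomes a chosen target, one option is to multiply the difference vector by target / current_gap. A minimal sketch (scale_difference is a hypothetical helper, written in C++ to match the rest of this page):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Scale the element-wise difference of two equal-length vectors so that
// the difference of their means becomes `target`. Assumes the current
// mean difference is non-zero.
std::vector<double> scale_difference(const std::vector<double>& a,
                                     const std::vector<double>& b,
                                     double target) {
    double mean_a = 0.0, mean_b = 0.0;
    for (double v : a) mean_a += v;
    for (double v : b) mean_b += v;
    mean_a /= a.size();
    mean_b /= b.size();
    double factor = target / (mean_a - mean_b); // e.g. 100 / 3.76
    std::vector<double> diff(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        diff[i] = (a[i] - b[i]) * factor;       // scaled difference vector
    return diff;
}
```

After scaling, the mean of the returned difference vector equals the target by construction.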

Higher precision eigenvalues with numpy
I'm currently computing several eigenvalues of different matrices and trying to find their closed-form solutions. The matrix is Hermitian/self-adjoint and tridiagonal. Additionally, every diagonal element is positive and every off-diagonal element is negative.
Due to what I suspect is an issue with trying to algebraically solve the quintic, sympy cannot solve the eigenvalues of my 14x14 matrix.
NumPy has given me great results that I'm sometimes able to use via WolframAlpha, but other times the precision is insufficient to determine which of several candidates the closed-form solution could be. As a result, I wish to increase the precision with which numpy.linalg.eigvalsh outputs eigenvalues. Any help would be greatly appreciated!

Eigen function that does array operations on Matrix<> or Block<>
I'd like to make a function that can take either a Matrix<> or a Block<> object.
Example:
#include <iostream>
#include <Eigen/Dense>

// A simplified example of the function I'm making
double example_function(const Eigen::MatrixXd & input){
    double d1 = input.array().pow(2).sum();
    double d2 = input.rowwise().norm().sum();
    return d1 + d2;
}

int main(){
    Eigen::MatrixXd m1(3,2);
    m1.setRandom();
    // This works
    example_function(m1);
    // I'd like to make this work
    example_function(m1.block(1,0,2,2));
    return 0;
}
The following does not work, because DenseBase does not have the 'array' method:
template<typename Derived>
double example_function(const Eigen::DenseBase<Derived> & input){
    double d1 = input.array().pow(2).sum(); // array() method invalid
    double d2 = input.rowwise().norm().sum();
    return d1 + d2;
}
How would I go about this?

How to get the scalar value returned by eigen tensor's sqrt() function
Eigen::Tensor<float, 1> linear(2);
linear.setValues({3,4});
auto linear_square = linear * linear;
auto linear_square_sum = linear_square.sum().sqrt();
std::cout << linear_square_sum << std::endl; // 5
auto new_linear = linear.constant(linear_square_sum);
std::cout << new_linear << std::endl; // expected: [5,5]
The expected value of linear_square_sum is 5. Now the goal is to create a tensor of the same size as 'linear', as shown in the last two code lines. Unfortunately, there is a bug about a mismatched C++ type in the Eigen tensor's 'constant' function, as the required parameter type of 'constant' is a scalar (int, float). So the question is how I can get the scalar value of 'linear_square_sum'. I'd appreciate it if someone could teach me how to change the last two code lines.

Can I have Eigen::VectorXd with size larger than the size limit (2^32)?
I'm trying to initialize a VectorXd with a size of 60,000,000,000. When running my software, I get the following error because of the vector size.
Any recommendation to solve the problem?
a.out: /usr/local/EasyBuild/software/Eigen/3.3.3-intel-2018a/include/Eigen/src/Core/PlainObjectBase.h:312: void Eigen::PlainObjectBase::resize(Eigen::Index) [with Derived = Eigen::Matrix; Eigen::Index = long int]: Assertion `((SizeAtCompileTime == Dynamic && (MaxSizeAtCompileTime==Dynamic || size<=MaxSizeAtCompileTime)) || SizeAtCompileTime == size) && size>=0' failed. Aborted (core dumped)

How to include a dummy variable in a bivariate regression in Matlab?
I have estimated a bivariate regression with OLS in Matlab; it looks like this: dpu_t = b_0 + b_1*kcfsi_t + e_t. I used the code:
z = readtable('kcfsi.csv');
y = readtable('dpu.csv');
kcfsi = z{:,4};
dpu = y{:,3};
ones = ones(size(kcfsi,1),1);
X = [ones kcfsi];
[bdach,stf,e,fit,ser] = ols(dpu,X)
and use the function ols. Now I want to examine the effect only when kcfsi is above the value 1. How can I include a dummy variable or something like this?
MatrixVector Restriction Meaning in Linear Algebra/Programming
I am trying to implement Non-Negative Least Squares to solve a system of linear equations expressed in standard Ax=b matrix form.
A is a (2D) matrix with float element values
P is a (1D) vector with integer element values
I ran into this statement:
Let Aᴾ be A restricted to the variables included in P
What does this mean in the context and how would I implement it in C# with MathNet.Numerics classes?
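For what it's worth, "A restricted to P" conventionally means the m x |P| matrix formed from the columns of A whose indices are in P (the passive set in the Lawson-Hanson NNLS algorithm). A minimal sketch of that column selection, shown in C++ to match the rest of this page (in MathNet.Numerics you could assemble the selected columns with the matrix builder):

```cpp
#include <cassert>
#include <vector>

// Build A^P: keep only the columns of A whose indices appear in P,
// producing an m x |P| matrix. Matrices are stored row-major as
// vector<vector<double>> to keep the sketch dependency-free.
std::vector<std::vector<double>>
restrict_columns(const std::vector<std::vector<double>>& A,
                 const std::vector<int>& P) {
    std::vector<std::vector<double>> AP(A.size(),
                                        std::vector<double>(P.size()));
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t k = 0; k < P.size(); ++k)
            AP[i][k] = A[i][P[k]]; // pick column P[k] of A
    return AP;
}
```

In NNLS, the unconstrained least-squares subproblem is then solved with A^P, and the resulting solution is scattered back into the positions listed in P.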

Improving the performance of many submatrix left division operations (mldivide, \)
I have two matrices, a1 and a2. a1 is 3x12000 and a2 is 3x4000. I want to create another array that is 3x4000, which is the left matrix division (mldivide, \) of the 3x3 submatrices of a1 and the 3x1 submatrices of a2. You can do this easily with a for loop:
for ii = 1:3:12000
    a = a1(:, ii:ii+2) \ a2(:, ceil(ii/3));
end
However, I was wondering if there was a faster way to do this.
Edit: I am aware that preallocation increases speed; I was just writing it that way for illustrative purposes.
Edit 2: Removed the iterative growth of the array. It seems my question has been misinterpreted a bit. I was mainly wondering if there were some matrix operations I could do to achieve my goal, as that would likely be quicker than a for loop, i.e. reshape a1 to a 3x3x4000 matrix and a2 to a 3x1x4000 matrix and left-matrix-divide each level in one go; however, you can't left-matrix-divide with 3D matrices.
How to reshape a tensor in Eigen3?
I get some Eigen::TensorMap objects from the outputs vector of a TensorFlow session in C++. I want to do some operations on the Eigen::TensorMap (reshape, concat, etc.).
However, my code does not compile, due to a weird error. I tried to reproduce it in pure Eigen3 code.
#include <unsupported/Eigen/CXX11/Tensor>

using Eigen::Tensor;
using Eigen::TensorMap;
using Eigen::TensorRef;
using std::vector;

int main() {
    int storage[128];
    TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);
    vector<TensorRef<Tensor<int,2>>> reshapedTensors;
    std::array<int, 2> shape{ 16,8 };
    auto re_op = t_4d.reshape(shape);
    reshapedTensors.push_back(re_op);
    return 0;
}
According to the Eigen docs, the return type of the reshape function is an Eigen operation; it will be calculated lazily. TensorRef is the wrapper for all tensor operations.
This piece of code produces the following complaint:
Severity Code Description Project File Line Suppression State Error C2679 binary '=': no operator found which takes a right-hand operand of type 'const std::array' (or there is no acceptable conversion) testEigen D:\Programming\cpp library\eigeneigen323c052e1731\unsupported\Eigen\CXX11\src\Tensor\TensorRef.h 49

Why doesn't conservativeResize work with Ref variable in a function?
I am making a function that can extend a given matrix of type MatrixXd. I was trying to use conservativeResize to achieve that, but when I use Ref in the function signature, conservativeResize doesn't work with the Ref object, though resize does. I am wondering why this happens. Thanks in advance.
I tried using MatrixXd &var directly in the function signature, and that does work, but I don't know why Ref doesn't.
void mat_extend(Ref<MatrixXd> mat) {
    int len = mat.rows();
    mat.conservativeResize(len+2, NoChange);
}

Python numpy code more efficient than eigen3 or plain C++
I had some code in Python 3 (with NumPy) that I wanted to convert to C++ (with Eigen3) in order to get a more efficient program. So I decided to test a simple example to assess the performance gain I would get. The code consists of two random arrays that are multiplied coefficient-wise. My conclusion was that the Python code with NumPy is about 30% faster than the C++ one. I'd like to know why the interpreted Python code is faster than compiled C++ code. Am I missing something in the C++ code?
I'm using gcc 9.1.0, Eigen 3.3.7, Python 3.7.3 and Numpy 1.16.4.
Possible explanations:
C++ program isn't using vectorization
Numpy is a lot more optimized than I thought
Time is measuring different things in each program
There is a similar question on Stack Overflow (Eigen Matrix vs Numpy Array multiplication performance). I tested this on my computer and got the expected result that Eigen is more efficient than NumPy, but the operation there is matrix multiplication rather than coefficient-wise multiplication.
Python code (main.py)
Execution command: python3 main.py
import numpy as np
import time

Lx = 4096
Ly = 4000
# Filling arrays
a = np.random.rand(Lx, Ly).astype(np.float64)
a1 = np.random.rand(Lx, Ly).astype(np.float64)
# Coefficient-wise product
start = time.time()
b = a*a1
# Compute the elapsed time
end = time.time()
print(b.sum())
print("duration: ", end - start)
C++ code with eigen3 (main_eigen.cpp)
Compilation command: g++ -O3 -I/usr/include/eigen3/ main_eigen.cpp -o prog_eigen
#include <iostream>
#include <chrono>
#include "Eigen/Dense"

#define Lx 4096
#define Ly 4000
typedef double T;

int main(){
    // Allocating arrays
    Eigen::Array<T, -1, -1> KPM_ghosts(Lx, Ly), KPM_ghosts1(Lx, Ly), b(Lx, Ly);
    // Filling the arrays
    KPM_ghosts.setRandom();
    KPM_ghosts1.setRandom();
    // Coefficient-wise product
    auto start = std::chrono::system_clock::now();
    b = KPM_ghosts*KPM_ghosts1;
    // Compute the elapsed time
    auto end = std::chrono::system_clock::now();
    std::chrono::duration<double> elapsed_seconds = end - start;
    std::cout << "elapsed time: " << elapsed_seconds.count() << "s\n";
    // Print the sum so the compiler doesn't optimize the code away
    std::cout << b.sum() << "\n";
    return 0;
}
Plain C++ code (main.cpp)
Compilation command: g++ -O3 main.cpp -o prog
#include <iostream>
#include <chrono>

#define Lx 4096
#define Ly 4000
#define N Lx*Ly
typedef double T;

int main(){
    // Allocating arrays
    T lin_vector1[N];
    T lin_vector2[N];
    T lin_vector3[N];
    // Filling the arrays
    for(unsigned i = 0; i < N; i++){
        lin_vector1[i] = std::rand()*1.0/RAND_MAX;
        lin_vector2[i] = std::rand()*1.0/RAND_MAX;
    }
    // Coefficient-wise product
    auto start = std::chrono::system_clock::now();
    for(unsigned i = 0; i < N; i++)
        lin_vector3[i] = lin_vector1[i]*lin_vector2[i];
    // Compute the elapsed time
    auto end = std::chrono::system_clock::now();
    std::chrono::duration<double> elapsed_seconds = end - start;
    std::cout << "elapsed time: " << elapsed_seconds.count() << "s\n";
    // Print the sum so the compiler doesn't optimize the code away
    double sum = 0;
    for(unsigned i = 0; i < N; i++) sum += lin_vector3[i];
    std::cout << "sum: " << sum << "\n";
    return 0;
}
Runtime of each program 10 times
Plain C++
elapsed time: 0.210664s
elapsed time: 0.215406s
elapsed time: 0.222483s
elapsed time: 0.21526s
elapsed time: 0.216346s
elapsed time: 0.218951s
elapsed time: 0.21587s
elapsed time: 0.213639s
elapsed time: 0.219399s
elapsed time: 0.213403s
Plain C++ with eigen3
elapsed time: 0.21052s
elapsed time: 0.220779s
elapsed time: 0.216269s
elapsed time: 0.229234s
elapsed time: 0.212265s
elapsed time: 0.256714s
elapsed time: 0.212396s
elapsed time: 0.248241s
elapsed time: 0.241537s
elapsed time: 0.323519s
Python
duration: 0.23946428298950195
duration: 0.1663036346435547
duration: 0.17225909233093262
duration: 0.15922021865844727
duration: 0.16628384590148926
duration: 0.15654635429382324
duration: 0.15859222412109375
duration: 0.1633443832397461
duration: 0.1685199737548828
duration: 0.16393446922302246