Eigen: Is it possible to create a LeastSquareDiagonalPreconditioner-like preconditioner if I can only compute A^t*y and A*x?
I want to solve a least-squares-like system A^t * A * x = A^t * b (I'm implementing the Gauss-Newton method for a special problem). I wrote special routines which allow me to compute the A * x and A^t * y products. With such routines it's easy to use matrix-free solvers, thanks to Eigen.
But my approach does not converge as well as Eigen::LeastSquaresConjugateGradient. I made a small test, and it looks like LeastSquareDiagonalPreconditioner speeds up convergence a lot.
My question is: how can I use LeastSquareDiagonalPreconditioner, or implement my own preconditioner, if I can only compute matrix products? (I'm not very well versed in preconditioning/conjugate gradient theory, by the way.)
EDIT
For clarity: I want to use the matrix-free solvers from Eigen with my product routines.
EDIT 2
The matrix-vector products are obtained by applying forward- and reverse-mode automatic differentiation to some objective functions.
1 answer

The easiest might be to implement your own preconditioner class inheriting from DiagonalPreconditioner and implementing something like LeastSquareDiagonalPreconditioner::factorize(), but adapted to your type. Basically, you need to compute:
m_invdiag(j) = 1./mat.col(j).squaredNorm();
for every column j, using a strategy similar to the one you already implemented for the product operators.
See also questions close to this topic

Why is still possible to access a Temporary Object after its Destruction
class exemple {
public:
    int i;
    exemple(int i) : i(i) { cout << "Object has been created." << endl; }
    ~exemple() { cout << "Object has been destroyed."; }
};

int main() {
    exemple* ptr = &exemple(10);
    getchar(); // pause code for a moment
    cout << ptr->i;
}
Consider the code above. I create a temporary object and store its address in the ptr pointer.
When the program starts, the console shows that the object is first created (the constructor is called and the message "Object has been created." appears) and then immediately destroyed (the destructor is called and "Object has been destroyed." appears). Then I pause the program for a moment. After the pause, the value (10) of the i variable is printed out.
How is this possible? If the object has been destroyed, how can I still access the i variable?
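For what it's worth, the observed behavior is the classic symptom of undefined behavior: the destructor has run, but the stack slot that held i has not been overwritten yet, so the read appears to work. (Taking the address of a temporary, as in &exemple(10), is actually rejected by standard C++ and only compiles as a compiler extension, e.g. in older MSVC.) A well-defined variant of the same experiment, for contrast:

```cpp
#include <cstdio>

// A named object instead of a temporary: it stays alive until the end of
// the enclosing scope, so reading through a pointer to it is well-defined.
struct exemple {
    int i;
    explicit exemple(int i) : i(i) { std::puts("Object has been created."); }
    ~exemple() { std::puts("Object has been destroyed."); }
};

int read_member() {
    exemple obj(10);
    exemple* ptr = &obj; // obj is still alive here
    return ptr->i;       // well-defined: obj is destroyed only after the return value is produced
}
```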

Is C++ Armadillo find() output sorted?
Does the C++ Armadillo find() function always output a sorted result, in ascending order?

How to solve the Visual Studio 2017 problem "unable to start debugging"
Please help me, I am new to C++ and IDEs. Visual Studio 2017 is installed on my PC and I downloaded the 5 GB C++ package.
Then I made my first project, and I keep getting the same errors.
Please see these images to check the problem.
I am really sorry that I was unable to explain the question properly.

Make a camera rotate along z axis while moving and changing lookAt (rollercoaster view) in Three.js
Hi, I am having a problem; maybe you can help me.
I have a camera that goes down a tube following a path and rotates around that tube, always pointing toward the next point in the tube. However, the camera can sometimes be below or beside the tube, like a roller coaster. Like this
I have the position of point a and the position of the camera, which is point b. I am always looking at point a+1.
var bpoints = this.cameraPathpoints;
var apoints = this.pathPoints;
this.camera.position.copy(bpoints[i]);
this.camera.lookAt(apoints[i + 1]);
The camera is always looking at the point correctly; however, I want the camera to rotate about its z axis so that it is always normal to the tube. I tried making some calculations so that the camera rotates about its z axis to stay normal to the tube, but my calculations only work in certain positions. Maybe there is a simpler way to do this. Thank you very much for any help.
var angleRadians = Math.atan2(
    cpv[this.cameraPos].pos.y - centePoints[this.cameraPos].pos.y,
    cpv[this.cameraPos].pos.x - centePoints[this.cameraPos].pos.x);
if (angleRadians > 0 && angleRadians > Math.PI / 2) {
    console.log("+90", (Math.PI / 2) - angleRadians);
    angleRadians = (Math.PI / 2) - angleRadians;
    this.camera.rotateZ(angleRadians);
    console.log("rotated ", angleRadians * 180 / Math.PI);
} else if (angleRadians > 0 && angleRadians < Math.PI / 2 && anglesum > Math.PI / 2) {
    console.log("-90", (Math.PI / 2) - angleRadians);
    angleRadians = (Math.PI / 2) - angleRadians;
    this.camera.rotateZ(angleRadians);
    console.log("rotated ", angleRadians * 180 / Math.PI);
} else if (angleRadians > 0 && angleRadians < Math.PI / 2) {
    console.log("-90", (Math.PI / 2) + angleRadians);
    angleRadians = (Math.PI / 2) - (angleRadians / Math.PI / 2);
    this.camera.rotateZ(angleRadians);
    console.log("rotated ", angleRadians * 180 / Math.PI);
} else if (angleRadians < 0 && angleRadians < -Math.PI / 2) {
    console.log("-90");
    angleRadians = (Math.PI / 2) + angleRadians;
    this.camera.rotateZ(angleRadians);
    console.log("rotated ", angleRadians * 180 / Math.PI);
} else if (angleRadians < 0 && angleRadians > -Math.PI / 2) {
    console.log("+90");
    angleRadians = (Math.PI / 2) - angleRadians;
    this.camera.rotateZ(angleRadians);
    console.log("rotated ", angleRadians * 180 / Math.PI);
}

Printing a complex matrix Fortran
The complex matrix is declared this way:
complex(8) :: matrix(:,:)
How can I print this matrix with each element formatted as (a, b) or a+ib, in an n x n layout? (By that I mean as a square matrix, with one row per line, so there will be n rows and n columns.)
This is the way I would print a real matrix with the format I want:
do i = 1, n
    do j = 1, n
        write(*, fmt="(f0.2, tr2)", advance="no") matrix(i,j)
    end do
    write(*, fmt="(a)") " "
end do
But I'm not sure how to translate this to a complex matrix.

Efficient way for iterating projection of points on a set of equations in Python
I need to project many points onto a set of equations. Writing out the optimization problem and the corresponding KKT conditions, you end up with a fairly simple linear system of the form $Ax=b$. The matrix $A$ is symmetric, non-singular, and sparse, so I can rely on many different LAPACK routines for solving the system. The problem is that I need to solve this system thousands of times for different values of $b$, which are not known in advance but only become available as a specific algorithm runs. When the dimension of the problem is really large, the best option is to factorize the matrix once (an LDL factorization) and then solve the systems. However, all the LAPACK routines I found perform the factorization inside the routine itself, so the process becomes really inefficient because the same matrix is refactorized every time.
I was wondering whether there is a way to compute the matrix factorization before starting to solve the systems and then hand the already-factorized matrix to the LAPACK routine, so that it avoids refactorizing at each iteration and only performs the solve.
Thanks in advance for any precious suggestion.

Convert Eigen::ArrayXXd to Eigen::MatrixXd
How do you convert an ArrayXXd array to a MatrixXd? So far I've done
MatrixXd temp_mat = my_array;
and the implicit conversion seems to work fine, but is this the way it is supposed to be done? Or is there some explicit conversion operation I should be doing?

Get a tensor shape using C++ without Tensorflow
I'm working on a computer vision project where I get the X,Y,Z and B,G,R values of an object, and I'd like to write to a .txt file the B,G,R values arranged according to their relative X,Y,Z coordinates, in a tensor with the shape (1,16,16,16,3).
To give an example: a tensor with dimensions (2,2,3) has this shape:
[[[B,G,R],[B,G,R]],[[B,G,R],[B,G,R]]]
You have to know that I get my X,Y,Z from the x,y of the image with triangulation.
Could you please help me get this tensor shape? I'm using OpenCV & Eigen.
Thank you :)
EDIT:
So here's some code for more explanation:
cv::Mat image;
vector<Point3d> Pt3d; // Here we have the X,Y,Z
Vec3i Colors;         // Here the B,G,R values

for (int i = 0; i < image.rows; i++) {
    for (int j = 0; j < image.cols; j++) {
        for (int x = 0; x < Pt3d.size(); x++) {
            std::cout << "[";
            for (int y = 0; y < Pt3d.size(); y++) {
                std::cout << "[";
                for (int z = 0; z < Pt3d.size(); z++) {
                    std::cout << "[";
                    std::cout << Colors << std::endl;
                    std::cout << "]";
                }
                std::cout << "]";
            }
            std::cout << "]";
        }
    }
}

Can implementing Tensorflow custom op gradients in CUDA speed computation?
When writing your own custom op in tensorflow with GPU support, the guide suggests computing the gradients using python. Elsewhere people have used C++ with libraries such as Eigen to implement the gradients in the same way, more efficiently.
My question is: with the custom operation's argument Tensors provided as pointers to device memory when training on a GPU (is this correct?):
OpKernelContext* context;
const Tensor& grad = context->input(0);
Can copying data between host and device be avoided by computing the gradients for the operation with CUDA on the GPU?
Will this reduce compute time? (I know this is dependent on how well the gradient computation lends itself to parallel computation, but let's assume it does.)
Is there any reason why this shouldn't be done? Are the potential speed increases too marginal to be worthwhile?

please explain about $samples and $target
If I have some data example
Year:  2015  2016  2017  2018
Price: 1,500 2,100 1,700 1,400
I want to forecast the price in 2019 with my code.
My code uses the PHP-ML machine learning library for PHP, installed via Composer.
<?php
require_once __DIR__ . '/vendor/autoload.php';

use Phpml\regression\LeastSquares;

$samples = [[60], [61], [62], [63], [65]];
$targets = [3.1, 3.6, 3.8, 4, 4.1];

$regression = new LeastSquares();
$regression->train($samples, $targets);
$regression->predict([64]); // returns 4.06
?>

Find points with minimal distance between each other
Let's assume I have multiple lists of n-dimensional coordinates, i.e.
A = [
      [[1, 2, 3, 4], [2, 1, 0, 2], [4, 5, 2, 1]],
      [[2, 2, 3, 4], [3, 1, 0, 2], [5, 5, 2, 1]],
      [[5, 2, 3, 4], [6, 1, 0, 2], [8, 5, 2, 1]]
    ]
I have 3 "data sets" above (i.e. each data set is a list of 3 lists, each containing 4 integers). I would like to calculate two values for each data set:
- The centroid of each data set (easy enough: sum the contents of the columns and divide by the number of rows).
- The mean squared distance between the data points, as a measurement of the spread of the points from each other (rather than a measurement of spread from the center).
Question
How can I accomplish this with Python (ideally numpy, as this will be used with large data sets with thousands of points, so I'm trying to leverage the efficiency/optimizations built into numpy/scipy)?
Find the nearest point to a group of 3D lines
This problem has had me stumped for days.
I have a group of lines formed from some data that produces 3D lines of the form:
P = a + dt
Where a is a position vector and d is the unit direction vector.
So basically I want to find the nearest point to all of these lines, using a least-squares fit. I've been unable to find an algorithm online or figure out how to implement one in Java. I'm using the Apache Commons Math library, with Vector3D or RealVector to represent the line equations. Any help with an algorithm or example code to solve this problem would be appreciated.

Intel Advisor: Inspect method including all submethods
Using Intel Advisor and the roof line model, I would like to assess the performance of a certain function. This function uses the Eigen library for matrix operations, where the main work is done.
In the output I can see my function with a relatively small self time and several Eigen functions called by my function. Now I would like to group all FLOPS and memory operations of my function together and use this result for the roof line model. How can I achieve this?

Eigen: sparse matrix pruned() does not remove entries below threshold
I want to use Eigen for sparse matrix multiplications, where in each iteration I want to remove all entries below a certain threshold. It seems to me that Eigen only removes elements exactly equal to zero.
I am running Eigen 3.3.7, compiling with g++.
#include <Eigen/Sparse>
#include <Eigen/Dense>
#include <iostream>

using namespace Eigen;

typedef SparseMatrix<double> CscMat;
typedef SparseMatrix<double, RowMajor> CsrMat;

int main() {
    int N = 4;
    CsrMat S, S2;
    MatrixXd D(N, N), D2(N, N);

    D << 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16;
    D *= 0.1;
    S = D.sparseView(0.5);
    std::cout << D << std::endl;
    std::cout << S.nonZeros() << std::endl;

    D2 = D;
    D2 = (D2.array() < 0.5).select(0, D2);
    S2 = D2.sparseView();
    std::cout << D2 << std::endl;
    std::cout << S2.nonZeros() << std::endl;
}
In the above, S.nonZeros() returns 16 instead of the expected 12, as returned by S2.nonZeros().
The output is:
0.1 0.2 0.3 0.4
0.5 0.6 0.7 0.8
0.9   1 1.1 1.2
1.3 1.4 1.5 1.6
16
  0   0   0   0
0.5 0.6 0.7 0.8
0.9   1 1.1 1.2
1.3 1.4 1.5 1.6
12

pass data between C++ Eigen Matrix and MATLAB mxArray with a cell array
I would like to pass data between Eigen Matrix/Vector and mex arrays. In the following code, I defined a mex array called y_output, which contains a cell array. The variable y_output will be passed to MATLAB. Each element in y_output is a vector but with different lengths. I would like to pass a pointer that points to Eigen vectors to the mex array y_output.
Notice that the data stored in y will be modified by a user-defined function. After calling the function, I would expect the data stored in y_output to be modified correspondingly. However, I cannot directly pass the pointer from y_output to y. Is there any way to make this possible? Thanks!
This question is similar to, but different from, the one at Pass C++ Eigen matrix to Matlab mex output. This question asks how to pass an array of matrices, while the question at that link asks how to pass a single matrix.
#include "mex.h"
#include "matrix.h"
#include <Eigen>

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) {
    // prhs[0]: a cell array of length T; each element is a vector with a different length
    mwSize T = mxGetNumberOfElements(prhs[0]);
    mwSize* n = new mwSize[T];
    Eigen::VectorXd* z = new Eigen::VectorXd[T];
    for (int t = 0; t < T; t++) {
        n[t] = mxGetNumberOfElements(mxGetCell(prhs[0], t));
        z[t] = Eigen::Map<Eigen::VectorXd>(mxGetPr(mxGetCell(prhs[0], t)), n[t]);
    }

    // create a cell matrix with T rows and one column
    mxArray* y_output = mxCreateCellMatrix(T, 1);

    // create corresponding Eigen objects
    Eigen::VectorXd* y = new Eigen::VectorXd[T]();
    Eigen::VectorXd y_temp(n[0]);
    y_temp.setZero();
    for (int t = 0; t < T; t++) {
        mxSetCell(y_output, t, mxCreateDoubleMatrix(n[t], 1, mxREAL));
        y[t] = Eigen::VectorXd::Zero(n[t]);
        y_temp.resize(n[t]);
        Eigen::Map<Eigen::VectorXd> y[t](mxGetPr(mxGetCell(y_output, t)), n[t]); // This is not correct!
    }

    // Myfun(y, z);

    // set output
    plhs[0] = y_output;
}