What algorithm could be used to generate a mesh from xyz coordinates, if the input is an xyz dataset and the output is the order in which to connect the points?
Suppose I have a set of xyz coordinates S = {(x1,y1,z1), ..., (xn,yn,zn)}. The algorithm I am looking for should return a set A of indices into S that represents the order in which to connect the coordinates.
For example:
If S = {(0,1,1), (1,4,3), (6,0,1)} and A = {(0,2,1)},
it means that point x1 (0,1,1) will first connect to point x3 (6,0,1), then x3 will connect to point x2 (1,4,3).
I'd like to implement it in a mesh generation program like the one in Blender, so the output set A must connect the points in S such that they form a closed surface.
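If the points sample a convex closed surface, one ready-made answer is the convex hull: scipy's ConvexHull (my assumption that scipy is acceptable here) returns exactly such a set A, one row of point indices per triangular face. For non-convex clouds you would need a surface-reconstruction method instead (e.g. ball pivoting or Poisson, as Blender add-ons use). A minimal sketch:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Toy stand-in for S: the 8 corners of a unit cube
S = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=float)

hull = ConvexHull(S)
# hull.simplices plays the role of A: each row holds the indices into S
# of one triangular face, i.e. which three points to connect.
print(hull.simplices)
```

The faces returned this way always close up into a watertight surface, which is the stated requirement, but only the convex outline of the cloud is reproduced.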
Mesh generation from xyz points in Blender: (link not preserved)
See also questions close to this topic

Quantile equal to mathematical expectation
I am trying to find any distribution corresponding to following:
E(X) = x_5, where
F(x_5) <= 0.05 ; F(x_5 + 0) >= 0.05
Is there any distribution having that property?
I tried to find one among the exponential distribution, lognormal and so on, but I didn't succeed.
How to calculate distance between the below latitude and longitude points?
A = 0N, 1W (0 degrees North and 1 degree West)
B = 0N, 179E (0 degrees North and 179 degrees East)
C = 90N, 0E (90 degrees North and 0 degrees East)
I have to find the distances AB and BC, where the radius is 6400 units.
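For reference, a haversine sketch of the two distances (treating north and east as positive, so A has lon = -1, and using the question's radius of 6400 units). AB spans 180 degrees of longitude along the equator and BC runs from the equator to the pole, so the expected answers are pi*R and pi*R/2:

```python
import math

def great_circle_distance(lat1, lon1, lat2, lon2, radius=6400.0):
    """Great-circle distance via the haversine formula.
    Latitudes/longitudes in degrees; north and east positive,
    so 1W is lon = -1 and 179E is lon = 179."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

# A = 0N,1W   B = 0N,179E   C = 90N,0E
ab = great_circle_distance(0, -1, 0, 179)   # half the equator: pi * 6400
bc = great_circle_distance(0, 179, 90, 0)   # equator to pole: pi * 6400 / 2
print(ab, bc)
```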

What is it saved in the model of sklearn bayesian classifier
I believe that a Bayesian classifier is based on a statistical model. But after training a Bayesian model, I can save it and do not need the training dataset to predict test data. For example, if I build a Bayesian model by
Can I treat the model as an equation like this?
If so, how can I extract the weights and bias, and what does the new formula look like? If not, what is the new equation like?

Why does my 3D object flip suddenly when rotating around the z-axis?
I create a quaternion (w, x, y, z) using input from an IMU device. I then normalize the quaternion and find the Euler angles based on this quaternion-to-Euler-angles conversion code:
    private PVector setEulerAngles() {
        // roll: x-axis rotation
        float sinr = 2.0f * (w*x + y*z);
        float cosr = 1.0f - 2.0f * (x*x + y*y);
        float roll = (float) Math.atan2(sinr, cosr);

        // pitch: y-axis rotation
        float sinp = 2.0f * (w*y - z*x);
        float pitch = 0.0f;
        if (Math.abs(sinp) >= 1) {
            pitch = (float) Math.copySign(Math.PI/2, sinp);
        } else {
            pitch = (float) Math.asin(sinp);
        }

        // yaw: z-axis rotation
        float siny = 2.0f * (w*z + x*y);
        float cosy = 1.0f - 2.0f * (y*y + z*z);
        float yaw = (float) Math.atan2(siny, cosy);

        return new PVector(roll, pitch, yaw);
    }
I then apply the ZYX rotation within Processing. However, to orient my 3D object I have to swap the z and y values, and negate the y value for the z-rotation. I also had to add an offset of (PI/2) to rotateY so my object faces the right direction. The object rotates perfectly around the x and y axes, but around the z-axis the object rotates in the right direction yet flips suddenly. Is there a simple solution to get the object rotating properly around the z-axis?
    pushMatrix();
    translate(pos.x, pos.y, 0);
    rotateZ(q.eulerAngles.y);
    rotateY(q.eulerAngles.z + (PI/2));
    rotateX(q.eulerAngles.x);
    shape(model);
    popMatrix();

Matplotlib display axis as binary
I'm attempting to label the x-axis of my graph in binary instead of float values in Python 2.7. I'm trying to use
FormatStrFormatter('%b')
which, according to the documentation provided by the Python Software Foundation, should work. I'd also like all the binary strings to be the same number of characters. I've also consulted this link. The error I'm getting is:
ValueError: unsupported format character 'b' (0x62) at index 1
I've tried to do it like:
    import matplotlib.pyplot as plt
    import numpy as np
    from matplotlib.ticker import FormatStrFormatter

    fig, ax = plt.subplots()
    ax.yaxis.set_major_formatter(FormatStrFormatter('%b'))
    ax.yaxis.set_ticks(np.arange(0, 110, 10))
    x = np.arange(1, 10, 0.1)
    plt.plot(x, x**2)
    plt.show()
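The '%b' failure is expected: FormatStrFormatter only accepts printf-style conversions, and %b is not one of them. A sketch of a workaround using FuncFormatter instead (the fixed 8-bit label width is my own assumption, chosen to satisfy the equal-length requirement):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import FuncFormatter

# FuncFormatter accepts any Python callable; format(v, '08b') zero-pads
# every tick label to the same 8-character binary width.
to_bin = FuncFormatter(lambda v, pos: format(int(v), "08b"))

fig, ax = plt.subplots()
ax.yaxis.set_major_formatter(to_bin)
ax.yaxis.set_ticks(np.arange(0, 110, 10))
x = np.arange(1, 10, 0.1)
ax.plot(x, x**2)
fig.savefig("binary_ticks.png")
```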

How to programmatically change screen brightness thru Intel graphics in C#
I tried using
WmiMonitorBrightness
class of the Windows Management Instrumentation, but my system does not support it. I also tried changing gamma thru the SetDeviceGammaRamp
API function but, meh. Then I checked the power options in Windows 10 for the brightness control slider, and there is none, which I suspect is connected to why
WmiMonitorBrightness
does not work on my OS. Finally, I found out that I can manually change the brightness thru the color settings of the Intel HD Graphics control panel. Now I want to ask how to change the screen brightness in code, the way Intel does it. I am running my OS on an Intel Pentium N4200 microprocessor @ 1.10GHz clock speed and I use a touchscreen UI, if that matters.

StereoCalibration in OpenCV: Shouldn't this work without ObjectPoints?
I have two questions relating to stereo calibration with opencv. I have many pairs of calibration images like these:
Across the set of calibration images the distance of the chessboard away from the camera varies, and it is also rotated in some shots.
From within this scene I would like to map pairs of image coordinates (x,y) and (x',y') onto object coordinates in a global frame: (X,Y,Z).
In order to calibrate the system I have detected pairs of image coordinates of all chessboard corners using cv2.findChessboardCorners(). From reading Hartley's Multiple View Geometry in Computer Vision I gather that I should be able to calibrate this system up to a scale factor without actually specifying the object points of the chessboard corners. First question: Is this correct?
Investigating cv2's capabilities, the closest thing I've found is cv2.stereoCalibrate(objectpoints,imagepoints1,imagepoints2).
I have obtained imagepoints1 and imagepoints2 from cv2.findChessboardCorners. Apparently from the images shown I can approximately extract (X,Y,Z) relative to the frame on the calibration board (by design), which would allow me to apply cv2.stereoCalibrate(). However, I think this will introduce error, and it prevents me from using all of the rotated photos of the calibration board which I have. Second question: Can I calibrate without object points using opencv?
Thanks!

How to use opencv copyTo() function?
I have read through the documentation for copyTo() but am still confused about how this function would be applied to the following code. This answer states that we can use the copyTo function instead of 255-x. How would this function be applied in this case? I would appreciate a code snippet.
    # Compute the gradient map of the image
    def doLap(image):
        # YOU SHOULD TUNE THESE VALUES TO SUIT YOUR NEEDS
        kernel_size = 5  # Size of the laplacian window
        blur_size = 5    # How big of a kernal to use for the gaussian blur
        # Generally, keeping these two values the same or very close works well
        # Also, odd numbers, please...
        blurred = cv2.GaussianBlur(image, (blur_size, blur_size), 0)
        return cv2.Laplacian(blurred, cv2.CV_64F, ksize=kernel_size)

    #
    # This routine finds the points of best focus in all images and produces a merged result...
    #
    def focus_stack(unimages):
        images = align_images(unimages)

        print "Computing the laplacian of the blurred images"
        laps = []
        for i in range(len(images)):
            print "Lap {}".format(i)
            laps.append(doLap(cv2.cvtColor(images[i], cv2.COLOR_BGR2GRAY)))

        laps = np.asarray(laps)
        print "Shape of array of laplacians = {}".format(laps.shape)

        output = np.zeros(shape=images[0].shape, dtype=images[0].dtype)

        abs_laps = np.absolute(laps)
        maxima = abs_laps.max(axis=0)
        bool_mask = abs_laps == maxima
        mask = bool_mask.astype(np.uint8)
        for i in range(0, len(images)):
            output = cv2.bitwise_not(images[i], output, mask=mask[i])

        return 255 - output
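A hedged sketch of what that answer seems to mean (the toy arrays are mine, purely illustrative): each bitwise_not-into-output step plus the final 255-output inversion can be replaced by copying the in-focus pixels straight into the output. cv2.copyTo is the OpenCV 4.x Python binding; a NumPy fallback with the same semantics is shown for clarity:

```python
import numpy as np

# Toy stand-ins for one aligned image, the running output,
# and the per-image focus mask from the question.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
dst = np.zeros_like(img)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2, :] = 255  # pretend the top half is in best focus

try:
    import cv2
    # copyTo copies src pixels into dst wherever mask is non-zero,
    # with no bitwise inversion round trip.
    dst = cv2.copyTo(img, mask, dst)
except ImportError:
    # NumPy equivalent of the same masked copy
    dst[mask > 0] = img[mask > 0]
```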

Different accuracy between validation and training set when training a cnn
Is it possible to have validation set accuracy higher than training accuracy when training a CNN? Is it possible for the test set accuracy to be smaller than the validation set accuracy? If so, why? Thanks in advance!

Getting TLE on a basic LCA problem on SPOJ
I get TLE in this following problem: https://www.spoj.com/problems/QTREE2/
My idea is to use LCA with a sparse table to precompute every 2^i-th parent of every node. Then for each pair (a,b) I calculate the LCA with the LCA_query() function using the sparse table. For each pair (a,b), to find the kth node on their path, I find the (k-1)th parent of a, or the (total-k)th parent of b, by checking whether the kth node lies on the a-to-lca(a,b) portion or the lca(a,b)-to-b portion.
Complexity should be
O(test * (n log n + q * (log n + 2 log n)))
but I got TLE several times. Can anyone help, please?
My code is: https://ideone.com/gSHYBI
    #include <bits/stdc++.h>
    using namespace std;
    #define fast_io ios_base::sync_with_stdio(0); //cin.tie(0);
    #define ll long long
    #define ld long double
    #define pb push_back
    #define ins insert
    #define in push
    #define out pop
    #define loop(i,n) for(i=1;i<=n;i++)
    #define loon(i,n) for(i=n;i>0;i--)
    #define vctr(x) vector< x >
    #define pii(x,y) pair< x,y >
    #define mkpr(x,y) make_pair(x,y)
    #define ft first
    #define sd second
    #define MX 100005
    #define mod 1000000007
    #define INF 10000000000000

    struct comp{
        bool operator() (const pii(ll,ll) &a, const pii(ll,ll) &b){
            return a.sd > b.sd;
        }
    };
    priority_queue<pii(ll,ll), vctr(pii(ll,ll)), comp> Q;
    vctr(pii(ll,ll)) edg[MX];
    ll level[MX], sparse[MX][22], prt[MX], dis[MX];
    vector<ll> ed[MX];

    void Dijkstra(ll root, ll n)
    {
        bool vis[MX];
        memset(vis, false, sizeof vis);
        ll beg=root, i;
        loop(i,n) dis[i]=INF, vis[i]=false;
        dis[beg]=0;
        Q.in(mkpr(beg,0));
        while(!Q.empty()){
            ll a=Q.top().ft; Q.out();
            if(vis[a]) continue;
            ll k=edg[a].size();
            loop(i,k){
                ll b=edg[a][i-1].ft;
                ll c=edg[a][i-1].sd;
                if(!vis[b] && dis[b]>dis[a]+c){
                    dis[b]=dis[a]+c;
                    Q.in(mkpr(b,dis[b]));
                }
            }
            vis[a]=true;
        }
    }

    void BFS(ll s, ll N)
    {
        bool vis[MX];
        memset(vis, false, sizeof vis);
        queue<ll> Q;
        Q.in(s);
        prt[s]=s; vis[s]=true; dis[s]=0;
        while(!Q.empty()){
            ll u=Q.front(); Q.out();
            ll n=ed[u].size(), i;
            loop(i,n){
                if(vis[ed[u][i-1]]) continue;
                prt[ed[u][i-1]]=u;
                vis[ed[u][i-1]]=true;
                Q.in(ed[u][i-1]);
                dis[ed[u][i-1]]=dis[u]+1;
            }
        }
        for(ll i=1;i<=N;i++) level[i]=dis[i];
    }

    void LCA_InIt(ll N, ll root)
    {
        BFS(root,N);
        Dijkstra(root,N);
        memset(sparse, -1, sizeof sparse);
        ll i,j;
        for(i=1;i<=N;i++) sparse[i][0]=prt[i];
        for(j=1;(1<<j)<=N;j++)
            for(i=1;i<=N;i++)
                if(sparse[i][j-1]!=-1)
                    sparse[i][j]=sparse[sparse[i][j-1]][j-1];
    }

    ll LCA_query(ll N, ll u, ll v)
    {
        ll log,i;
        if(level[u]<level[v]) swap(u,v);
        log=1;
        while(true){
            ll next=log+1;
            if((1<<next)>level[u]) break;
            log++;
        }
        for(i=log;i>=0;i--)
            if(level[u]-(1<<i)>=level[v]) u=sparse[u][i];
        if(u==v) return u;
        for(i=log;i>=0;i--)
            if(sparse[u][i]!=-1 && sparse[u][i]!=sparse[v][i])
                u=sparse[u][i], v=sparse[v][i];
        return prt[u];
    }

    ll kth_par(ll node, ll k, ll N)
    {
        if(!k) return node;
        ll x;
        for(ll i=0;(1<<i)<=N;i++){
            if(sparse[node][i]!=-1 and (1<<i)>k) break;
            x=i;
        }
        return kth_par(sparse[node][x], k-(1<<x), N);
    }

    int main()
    {
        ll n,m,t,i,j,k,a,b,c,cs=1;
        //freopen("input.txt","r",stdin);
        //freopen("output.txt","w",stdout);
        scanf("%lld",&t);
        while(t--){
            scanf("%lld",&n);
            for(i=0;i<n-1;i++){
                scanf("%lld %lld %lld",&a,&b,&c);
                ed[a].pb(b); ed[b].pb(a);
                edg[a].pb(mkpr(b,c));
                edg[b].pb(mkpr(a,c));
            }
            LCA_InIt(n,1);
            while(true){
                char str[10];
                scanf("%s",str);
                if(!strcmp(str,"DONE")) break;
                if(!strcmp(str,"DIST")){
                    scanf("%lld %lld",&a,&b);
                    ll par=LCA_query(n,a,b);
                    cout<< dis[a]+dis[b]-2*dis[par] << endl;
                }
                else if(!strcmp(str,"KTH")){
                    scanf("%lld %lld %lld",&a,&b,&c);
                    ll par=LCA_query(n,a,b);
                    if(level[a]-level[par]>=c)
                        cout<< kth_par(a,c-1,n) << endl;
                    else{
                        k=level[a]+level[b]-2*level[par]+1;
                        cout<< kth_par(b,k-c,n) << endl;
                    }
                }
            }
            cout<< endl;
        }
        return 0;
    }

Cluster tiny tasks in large DAG to large tasks in tiny DAG
I have a task DAG (Directed Acyclic Graph) of 3000+ vertices (i.e.: tasks). For parallelisation, I want to group/cluster tiny tasks into a couple of large tasks, such that any clustered task is suitable for running as a job on a thread. This DAG is executed millions of times, so finding an efficient schedule for multicore processing would be very desirable.
(Optional) Context: As a first and simple step, I search for disconnected components in the DAG and launch components on threads. However, I have one very large connected component of 900 vertices, whereas most other components have only 1 to 20 vertices. This 900-vertex component still takes most of the time: one processor core takes care of this component, while my other cores deal with the remaining 2100 nodes in parallel. The 2100 other tasks are completed before the 900 tasks, hence this large component is the bottleneck.
Question: I'm thus looking for an algorithm to cluster these tiny tasks of a DAG into larger tasks, while preserving the semantics. Am I missing some terminology to search for this?
Simple example: Consider this task DAG:
      .-> B -> C -> D -.
     /                  \
    /                    v
    A -> E -> F -> G -> H -> I -> J
which can be reduced to:
      .-> (BCD) -.
     /            \
    /              v
    A -> (EFG) -> H -> (IJ)
which can be reduced to:
      .-> (BCD) -.
     /            \
    /              v
    A -> (EFG) -> (HIJ)
which can be reduced even further to:
(ABCDEFGHIJ)
Note that:
- the last situation is the clustering of all tasks onto a single thread.
- the first DAG has too many tiny tasks and will have a lot of thread-synchronisation overhead.
- the second DAG is a fine reduction, as it can schedule two things in parallel. However, it still has the disadvantage that H and (IJ) are two separate tasks, where they could be merged to avoid synchronisation overhead.
- the third DAG is a good reduction, as it can schedule two things in parallel.
More complex example: Somewhat more realistic in my scenario is the following DAG structure. Consider 100 tasks A_{i} and B_{i}, where B_{i} depends on A_{i}, and these 100 instantiations are independent of each other. One task C then combines the results of these 100 B_{i} tasks.
    A_0  -> B_0  \
    A_1  -> B_1  +
    A_2  -> B_2  +-> C
    A_3  -> B_3  +
    ...          |
    A_99 -> B_99 /
The first reduction would cluster the A_{i} and B_{i} together into an (AB)_{i}.
    AB_0  \
    AB_1  +
    AB_2  +-> C
    AB_3  +
    ...   |
    AB_99 /
However, we still have 100 parallel paths in the DAG. Since I have only n (e.g. 4) processors, I would like to cluster these 100 parallel paths into n jobs (e.g. of 25 paths each), such that I save on the synchronisation overhead of the 100 paths and only have n paths to synchronise.
    AB_[00..24] \
    AB_[25..49] +
    AB_[50..74] +-> C
    AB_[75..99] /
So in this example, we started from 201 tasks, and merged them into 5 tasks suitable to run on 4 threads. This kind of intelligent merging of tasks is what I'm after.
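The first reduction in both examples (merging A_i -> B_i into (AB)_i) is just the contraction of linear chains: any edge u -> v where u has out-degree 1 and v has in-degree 1 can be merged without changing the schedule's semantics. A minimal Python sketch (the function and variable names are mine, purely illustrative):

```python
from collections import defaultdict

def contract_chains(edges):
    """Contract every maximal linear chain of a DAG into one cluster.

    An edge u -> v is chain-like when u has out-degree 1 and v has
    in-degree 1; such a pair can always be merged into one task.
    edges: list of (u, v) pairs. Returns (clusters, new_edges).
    """
    succ = defaultdict(list)
    pred = defaultdict(list)
    nodes = set()
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)
        nodes.update((u, v))

    def chain_edge(u, v):
        return len(succ[u]) == 1 and len(pred[v]) == 1

    # walk each node back to the head of its chain
    rep = {}
    for n in nodes:
        head = n
        while len(pred[head]) == 1 and chain_edge(pred[head][0], head):
            head = pred[head][0]
        rep[n] = head

    clusters = defaultdict(list)
    for n in sorted(nodes):
        clusters[rep[n]].append(n)
    new_edges = sorted({(rep[u], rep[v]) for u, v in edges if rep[u] != rep[v]})
    return dict(clusters), new_edges

# The A_i -> B_i -> C example from above, with 2 instead of 100 pairs:
clusters, new_edges = contract_chains(
    [("A0", "B0"), ("B0", "C"), ("A1", "B1"), ("B1", "C")])
print(clusters)   # {'A0': ['A0', 'B0'], 'A1': ['A1', 'B1'], 'C': ['C']}
print(new_edges)  # [('A0', 'C'), ('A1', 'C')]
```

Packing the remaining 100 parallel (AB)_i clusters into n groups is then a separate, bin-packing-style step on top of this contraction.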

Algorithm for reverse topological sorting
I'm looking for an efficient algorithm to resolve the dependencies of a series of modules in a dependency graph, with the following constraints:
- The algorithm can take as input one or more start nodes.
- The dependencies of these nodes are calculated such that their loading can be parallelized.
- Should the graph not be acyclic, an error must be thrown.
- Its complexity should be as small as possible (no need to visit non-required nodes) – the graph may contain billions of nodes, only a fraction of which are necessary for each query.
Goal example
In this example, the two nodes in green are provided as starting points. The node in red is never visited. Each node in white is visited. Loading is scheduled to be performed in steps (indicated inside the node).
Do you have any hint on an algorithm that would respect these rules?
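One way to satisfy these constraints is a depth-first walk that starts only from the given nodes, uses grey/black colouring for cycle detection, and assigns each node a step number equal to the longest dependency chain beneath it. A sketch under my own assumptions about the input layout (a dict mapping each node to its dependencies; these names are illustrative, not a known library API):

```python
from collections import defaultdict

def load_order(graph, start_nodes):
    """graph: node -> list of its dependencies (edges point at prerequisites).

    Returns a list of steps; every node within one step can be loaded in
    parallel once the earlier steps are done. Raises ValueError on a
    cycle. Only nodes reachable from start_nodes are ever visited.
    """
    WHITE, GREY, BLACK = 0, 1, 2
    state = defaultdict(int)
    depth = {}  # node -> length of the longest dependency chain below it

    def visit(node):
        if state[node] == GREY:           # back edge => cycle
            raise ValueError("dependency cycle involving %r" % (node,))
        if state[node] == BLACK:          # already resolved
            return depth[node]
        state[node] = GREY
        d = 0
        for dep in graph.get(node, []):
            d = max(d, visit(dep) + 1)
        state[node] = BLACK
        depth[node] = d
        return d

    for s in start_nodes:
        visit(s)

    steps = defaultdict(list)
    for node, d in depth.items():
        steps[d].append(node)
    return [sorted(steps[d]) for d in sorted(steps)]

# 'unused' depends on 'core' but is not reachable from the start node,
# so it is never visited -- like the red node in the goal example.
graph = {"app": ["lib1", "lib2"], "lib1": ["core"],
         "lib2": ["core"], "unused": ["core"]}
steps = load_order(graph, ["app"])
print(steps)  # [['core'], ['lib1', 'lib2'], ['app']]
```

For graphs with billions of nodes the recursion would need to be turned into an explicit stack, but the visiting cost stays proportional to the reachable subgraph only.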

smallest enclosing cylinder
Is there an algorithm for finding the enclosing cylinder with the smallest radius for a 3D cloud of dots? I know that the 2D case with the smallest enclosing circle is solved (for example this thread: Smallest enclosing circle in Python, error in the code), but is there any working approach for 3D?
EDIT1: OBB. Below is an example of an arc-shaped cloud of dots. The smallest enclosing circle was found by this tool: https://www.nayuki.io/page/smallest-enclosing-circle
The circle is defined by three dots, of which two lie almost on a diameter, so it is easy to estimate where the central axis is. "Boxing" the dots will yield a box center obviously much shifted from the true center.
I conclude, that OBB approach is not general.
EDIT2: PCA. Below is an example of PCA analysis of a tight dot cloud vs. a dot cloud with outliers. For the tight dot cloud, PCA predicts the cylinder direction satisfactorily. But if there is a small number of outliers compared to the main cloud, then PCA will basically ignore them, yielding vectors which are very far from the true axis of the enclosing cylinder. In the example below the true geometrical axis of the enclosing cylinder is shown in black.
I conclude that PCA approach is not general.
EDIT3: OBB vs. PCA and OLS. A major difference: OBB relies only on the geometrical shape, while PCA and OLS depend on the overall number of points, including those in the middle of the set, which do not affect the shape. In order to make them more efficient, a data preparation step can be included. First, find the convex hull. Second, exclude all internal points. Then, the points along the hull can still be distributed unevenly. I'd suggest removing all of them, leaving only the polygonal hull body, and covering it with a mesh whose nodes become the new points. Applying PCA or OLS to this new cloud of points should provide a much more accurate estimate of the cylinder axis.
All this can be unnecessary, if OBB provides an axis, as much parallel to the enclosing cylinder axis, as possible.
EDIT4: published approaches. @meowgoesthedog: the paper by Michel Petitjean ("About the Algebraic Solutions of Smallest Enclosing Cylinders Problems") could help, but I'm insufficiently qualified to convert it into a working program. The author himself did (module CYL, here: http://petitjeanmichel.free.fr/itoweb.petitjean.freeware.html). But in the conclusions of the paper he says: "and the present software, named CYL, downloadable for free at http://petitjeanmichel.free.fr/itoweb.petitjean.freeware.html, is neither claimed to offer the best possible implementations of the methods nor is claimed to work better than other cylinder computation softwares." Other phrases in the paper also give the impression that it is an experimental approach which has not been thoroughly validated. I'll try to use it anyway.
@Ripi2: this paper by Timothy M. Chan is also a bit too complicated for me. I'm not enough of an expert in mathematics to be able to convert it into a tool.
@Helium_1s2: probably a good suggestion; however, it is much less detailed than the two papers above. Also, not validated.
EDIT5: reply to user1717828. Two most distant points vs. the cylinder axis. A counter-example: 8 points in the shape of a cube, fit in a cylinder. The biggest distance between two points is the green diagonal, which is obviously not parallel to the cylinder axis.
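One observation these edits circle around: once an axis direction is fixed, the smallest enclosing cylinder reduces to the smallest enclosing circle of the points projected onto a plane orthogonal to that axis. A minimal sketch of that reduction (the naive O(n^3)-ish circle search and all names are my own, purely illustrative; a real implementation would use Welzl's algorithm and then optimise over axis directions):

```python
import numpy as np

def enclosing_circle_radius(pts2d):
    """Radius of the smallest circle enclosing 2D points.

    Naive search over circles defined by two points (as a diameter) or
    three points (circumcircle); fine for small clouds only.
    """
    pts = np.asarray(pts2d, dtype=float)
    n = len(pts)
    best = None

    def covers(c, r):
        return bool(np.all(np.linalg.norm(pts - c, axis=1) <= r + 1e-9))

    for i in range(n):
        for j in range(i + 1, n):
            c = (pts[i] + pts[j]) / 2
            r = float(np.linalg.norm(pts[i] - c))
            if covers(c, r) and (best is None or r < best):
                best = r
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                ax, ay = pts[i]; bx, by = pts[j]; cx, cy = pts[k]
                d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
                if abs(d) < 1e-12:
                    continue  # collinear triple has no circumcircle
                ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
                      + (cx*cx + cy*cy) * (ay - by)) / d
                uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
                      + (cx*cx + cy*cy) * (bx - ax)) / d
                c = np.array([ux, uy])
                r = float(np.linalg.norm(pts[i] - c))
                if covers(c, r) and (best is None or r < best):
                    best = r
    return best

def cylinder_radius_for_axis(points, axis):
    """Smallest cylinder radius for a FIXED axis direction: project the
    points onto a plane orthogonal to the axis, then enclose them in 2D."""
    a = np.asarray(axis, dtype=float)
    a /= np.linalg.norm(a)
    u = np.cross(a, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:          # axis was (anti)parallel to x
        u = np.cross(a, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(a, u)
    P = np.asarray(points, dtype=float)
    proj = np.stack([P @ u, P @ v], axis=1)
    return enclosing_circle_radius(proj)

# The cube counter-example: for the vertical axis the radius is sqrt(2)/2,
# no matter where the longest diagonal points.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
r = cylinder_radius_for_axis(cube, [0, 0, 1])
print(r)
```

Minimising cylinder_radius_for_axis over the unit sphere of directions (e.g. a coarse grid followed by local refinement) then gives an approximate smallest enclosing cylinder.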

Three.js rotate sphere, point A to B
I'm working on a 3D model of a night sky map. I want to rotate my geometries in such a way that the zenith (green box) lies directly on the Y axis as the highest point (0, +y, 0).
image of what I'm trying to do
so far I've found possible solution:
    function rotateSphere(long, lat) {
        var c = scene.rotation.y;
        var d = long * (Math.PI / 180) % (2 * Math.PI);
        var e = Math.PI / 2;
        scene.rotation.y = c % (2 * Math.PI);
        scene.rotation.x = lat * (Math.PI / 180) % Math.PI;
        scene.rotation.z = d + e;
    }
however, I don't really understand how it works, and it results in an upside-down version of what I'm trying to achieve. Any help much appreciated.

Creating new vertices at the wireframe subdivision in Three.js
I understand that the THREE.SubdivisionModifier will section a shape's wireframe, but as far as I understand this doesn't add any new vertices to the geometry. How can I add new vertices to the geometry representing the intersection points of the subdivisions? I'm trying to add noisy distortions to a convex hull, so I need to either add new vertices along the shapes' faces, bend each subdivision independently, or else copy the geometry into a new similar geometry with a higher "resolution" of vertices along its faces. Is any one of those options supported by Three.js? Thanks!