What algorithm could be used for generating a mesh from xyz coordinates, if the input is an xyz dataset and the output is the order of points to connect?
Suppose I have a set of xyz coordinates S = {(x1,y1,z1), ..., (xn,yn,zn)}. The algorithm I'm looking for will return a set A of indices of S's elements that represents the order in which to connect the xyz coordinates.
For example:
If S = {(0,1,1), (1,4,3), (6,0,1)}, and A = {(0,2,1)}
It means that point x1(0,1,1) will first connect to x3(6,0,1), then x3 will connect to x2(1,4,3).
I’d like to implement it for a mesh generation program like the one in Blender. So the output set A must connect the points in S so that they form a closed surface.
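If the point cloud is roughly convex, the convex hull already gives output of this kind: its triangular facets are index triples into S, much like the set A described above, and together they form a closed surface. A minimal sketch with scipy (the extra fourth point is made up so the example is 3D); for non-convex clouds, techniques such as ball pivoting or Poisson surface reconstruction are the usual tools:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical point set S; each row is an (x, y, z) coordinate.
S = np.array([
    [0.0, 1.0, 1.0],
    [1.0, 4.0, 3.0],
    [6.0, 0.0, 1.0],
    [2.0, 2.0, 5.0],   # extra point so the hull is a real 3D solid
])

hull = ConvexHull(S)

# hull.simplices is an array of index triples; each triple names the
# three points of one triangular face of the closed hull surface.
for face in hull.simplices:
    print(face)
```

For these four non-coplanar points the hull is a tetrahedron, so four triangular faces come back, covering every input point.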
See also questions close to this topic

What is the common name for mapping one range of numbers to another range of numbers?
I'm pretty sure it's a common pattern, but I'm looking for the name of the pattern where you map one range of numbers to another range of numbers. Something like:
Map(from1: 60, to1: 90, from2: 100, to2: 140, value: 75);  // Result: 120 (middle of from2/to2)
Map(from1: 60, to1: 90, from2: 100, to2: 140, value: 30);  // Result: 100 (clamped bottom)
Map(from1: 60, to1: 90, from2: 100, to2: 140, value: 60);  // Result: 100 (bottom)
Map(from1: 60, to1: 90, from2: 100, to2: 140, value: 500); // Result: 140 (clamped to2)
Map(from1: 60, to1: 90, from2: 100, to2: 140, value: 85);  // Result: 133.33 (in between)
What is the name for this method? I'm specifically looking for a solution in Unity, but I'm pretty sure if I know the name of the pattern I can find it.
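This is usually just called remapping (or rescaling) a range: an inverse lerp followed by a lerp, with clamping. Unity exposes the pieces as Mathf.InverseLerp, Mathf.Lerp and Mathf.Clamp01; Arduino and Processing ship the unclamped version as map(). A minimal sketch (Python used for illustration):

```python
def remap(from1, to1, from2, to2, value):
    # Inverse lerp: where value sits inside [from1, to1], as a fraction.
    t = (value - from1) / (to1 - from1)
    # Clamp so out-of-range inputs stick to the output endpoints.
    t = max(0.0, min(1.0, t))
    # Lerp: map the fraction into [from2, to2].
    return from2 + t * (to2 - from2)

print(remap(60, 90, 100, 140, 75))   # → 120.0
print(remap(60, 90, 100, 140, 30))   # → 100.0
print(remap(60, 90, 100, 140, 500))  # → 140.0
```

These reproduce the expected results from the question, including the clamped cases.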

Fixed Point slower than float on Arduino?
I did some tests on the execution times of math operations on the Arduino Uno. In my tests float multiplication only needed half the time of fixed point math (16.16) (11us vs 25us per operation). Tests were made with a loop over 100 random values. I used https://github.com/PetteriAimonen/libfixmath and https://github.com/duckythescientist/avrfix for fixed point math, with the same results. Compiled using the Arduino IDE with compiler flag -O2.
Shouldn't fixed point math be much faster on a device without a FPU?
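This result is plausible: the ATmega328 has no FPU, but it also has only an 8-bit ALU, so a 16.16 multiply needs a full 32x32 → 64-bit product assembled from many 8-bit MUL instructions plus carry handling, while the software float32 routines (largely hand-written assembly in avr-libc) only multiply 24-bit mantissas. A toy model of what one fixed-point multiply entails (Python stands in for the C library here; names are illustrative):

```python
def fix16_from_float(x):
    # 16.16 fixed point: the value scaled by 2**16, stored as a 32-bit int.
    return int(round(x * 65536))

def fix16_mul(a, b):
    # The core cost: a widening 32x32 -> 64-bit product, then a 16-bit shift.
    # On an 8-bit AVR that widening multiply alone expands to many 8-bit
    # MULs, which is how it can lose to a float routine that only has to
    # multiply two 24-bit mantissas.
    return (a * b) >> 16

x = fix16_mul(fix16_from_float(1.5), fix16_from_float(2.0))
print(x / 65536)  # → 3.0
```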

Says X must be a value in logarithmic function?
I have the following code:
import math
q = input("Is there a square root in the function? (y,n) ")
if q == "y":
    base = input("base? ")
    x_value = input("x? ")
    print(math.log(math.sqrt(base), x_value))
else:
    base_2 = input("base? ")
    x_value_2 = input("x? ")
    print(math.log(base_2, x_value_2))
When I run the code, it says that the second value in math.log() must be a number. Shouldn't it work if I just use the variable I assigned to it?
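The likely cause: in Python 3, input() returns a str, and math.log (and math.sqrt) require numbers, so the strings have to be converted first. A minimal sketch of the fix, restructured as a function so the conversion is visible (names are illustrative):

```python
import math

def log_of(base, x, with_sqrt=False):
    # Values read with input() arrive as strings in Python 3;
    # convert them to float before calling math functions.
    base = float(base)
    x = float(x)
    if with_sqrt:
        return math.log(math.sqrt(base), x)
    return math.log(base, x)

# Feeding the strings a user would type:
print(log_of("8", "2"))  # → 3.0
```

In the original code, wrapping each input() call in float() (e.g. base = float(input("base? "))) fixes it the same way.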

Generate points within complex polygons (2D)
A means to an end; ultimately I want to draw a single line that traverses areas defined by letterforms, creating a sort of skeleton. I would like to do so parametrically and randomly, not just by hand. I do have a handmade example.
First, a basic flow of what I need to do:
1. Check if a preceding point exists. (can do)
2. If yes, randomly pick a point at least x and at most y away from the last point. If no, randomly pick a point. (can do)
3. Check if the point is within the specified geometry. (need assistance)
4. If yes, plot it. If no, return to 2. (can do)
5. Check if there is a point preceding it. (can do)
6. If yes, draw a line to it. If no, return to 2. (can do)
Feel free to let me know if you think there is a better flow than above to accomplish this.
I have looked into using three.js' raycaster, but I don't know if it works in 2D, and I don't want to get into processing in a 3D space if I can help it. I am open to using libraries though.
Thanks in advance.
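For step 3 a 3D raycaster isn't needed; the classic 2D tool is the even-odd ray-casting test: shoot a horizontal ray from the point and count how many polygon edges it crosses. A minimal sketch, assuming the letterform outline is available as a list of (x, y) vertices (for letters with holes, include the hole contours as additional edges; even-odd counting handles them automatically):

```python
def point_in_polygon(px, py, poly):
    """Even-odd ray casting: cast a ray to the right and count crossings."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does edge (x1,y1)-(x2,y2) straddle the horizontal line y = py?
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses that horizontal line.
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, square))   # → True
print(point_in_polygon(15, 5, square))  # → False
```

If the letterforms are SVG paths with curves, flatten the curves to short line segments first; the same test then applies.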

SVGs sometimes export with padding, sometimes not (Affinity Designer)
Sometimes SVGs have a viewing area that fills the browser window, and sometimes they have padding around them.
I use the exact same export procedure for SVGs in Affinity Designer every time:
Export > SVG > SVG for Web > Selection without background
Some examples:
https://www.overdriveservices.com/wp-content/uploads/billingcollections.svg fills the browser window and doesn't have any additional padding.
https://www.overdriveservices.com/wp-content/uploads/recruiting.svg has additional transparent padding around the SVG.

How to open /dev/graphics/fb0 in kernel space?
I want to open
/dev/graphics/fb0
from kernel space and write to it to display a test pattern. Is there any way I can do that? There is one question related to this, but in the answer it's done from user space, which I don't want.

Shape detection with opencv/python
I'm trying to teach my test automation framework to detect a selected item in an app using OpenCV (the framework grabs frames/screenshots from the device under test). Selected items are always a certain size and always have a blue border, which helps, but they contain different thumbnail images. See the example image provided.
I have done a lot of Googling and reading on the topic, and I'm close to getting it to work except for one scenario: image C in the example image, where there is a play symbol on the selected item.
My theory is that OpenCV gets confused in this case because the play symbol is basically a circle with a triangle in it, and I'm asking it to find a rectangular shape.
I found this to be very helpful: https://www.learnopencv.com/blob-detection-using-opencv-python-c/
My code looks like this:
import cv2
import numpy as np

img = "testimg.png"

values = {"min threshold": {"large": 10, "small": 1},
          "max threshold": {"large": 200, "small": 800},
          "min area": {"large": 75000, "small": 100},
          "max area": {"large": 80000, "small": 1000},
          "min circularity": {"large": 0.7, "small": 0.60},
          "max circularity": {"large": 0.82, "small": 63},
          "min convexity": {"large": 0.87, "small": 0.87},
          "min inertia ratio": {"large": 0.01, "small": 0.01}}

size = "large"

# Read image
im = cv2.imread(img, cv2.IMREAD_GRAYSCALE)

# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()

# Change thresholds
params.minThreshold = values["min threshold"][size]
params.maxThreshold = values["max threshold"][size]

# Filter by Area.
params.filterByArea = True
params.minArea = values["min area"][size]
params.maxArea = values["max area"][size]

# Filter by Circularity
params.filterByCircularity = True
params.minCircularity = values["min circularity"][size]
params.maxCircularity = values["max circularity"][size]

# Filter by Convexity
params.filterByConvexity = False
params.minConvexity = values["min convexity"][size]

# Filter by Inertia
params.filterByInertia = False
params.minInertiaRatio = values["min inertia ratio"][size]

# Create a detector with the parameters
detector = cv2.SimpleBlobDetector(params)

# Detect blobs.
keypoints = detector.detect(im)

for k in keypoints:
    print k.pt
    print k.size

# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures
# the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0, 0, 255),
                                      cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

# Show blobs
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
How do I get OpenCV to only look at the outer shape defined by the blue border and ignore the inner shapes (the play symbol and of course the thumbnail image)? I'm sure it must be doable somehow.

Why does my retrained model have poor accuracy?
I'm trying to retrain the final layer of a pretrained model using the same dataset (MNIST handwritten digit dataset), but the accuracy of the retrained model is much worse than the initial model. My initial model gets an accuracy of ~98%, while the retrained model accuracy varies between 40-90% depending on the run. I get similar results when I don't bother to train the first 2 layers at all.
My first thought is that I've made a coding error but I'm confident I've ruled that out. I've also tried lowering the learning rate and increasing the epochs when retraining with no success.
Is there any theoretical reason why my retrained accuracy is this poor?
I can post my code if anyone's interested, I've omitted it here to reduce clutter.

Feature generation for Random Forest Image Data
I am trying to generate a random forest for key point classification. I am using image patches around various key points of images to generate training data for the random forest. How can I extract prominent features from these image patches?
Let's assume that I use all the pixels of each patch extracted around a key point. Say each patch is 24x24 = 576 pixels, and I use around 50000 patches from 200 images and label them with the image number. Then I generate RF decision trees using information gain and entropy of pixel values. This idea is not feasible, as RF generation for such a big feature set would take a lot of training time.
In the following paper, the authors generate features on the fly, but I am unable to find exactly how they are generating the training data. Can anyone give more information on how we can extract prominent features on the fly from image patches and use them as RF training data features?
If anyone knows a source code link for this paper, could you please let me know?

distributing graph nodes into buckets
I have an adjacency matrix n x n. Each node of the graph has m outgoing edges, and I want to distribute these nodes into b buckets. Each bucket should hold a minimum of l and a maximum of u nodes (u x b >= n). Each node inside a bucket should have at least one outgoing edge to another node inside the bucket.
I feel that I am missing the best angle to solve this. How would you approach this?

In graph theory, is there something like a non-ended edge?
My current work relates to graph theory. For an edge in graph theory, there is a starting vertex and an ending vertex. I know the starting and ending vertex can be the same vertex. But is there something like a non-ended edge? That is, we have the starting vertex, but the other end of the edge is connected to nothing. For example, in a set of triangular mesh elements, we can regard each triangle as a vertex (element) and regard the edges of these triangles as connections between 2 adjacent triangles. But some boundary edges belong to only one triangle.

Permutation Test on Graphs - igraph
I am trying to compute a permutation test of the E-I index to evaluate the homophily of a network and to measure the significance of the E-I index. However, when I use the permute() function to create a new graph by permuting vertex ids, the network is permuted, but computing the E-I index always gives the same result of 0.25. So is there anything wrong with the permutation function?
actors <- data.frame(name=c("Alice", "Bob", "Cecil", "David", "Esmeralda",
                            "Ben", "Fritz", "Jon", "Anna", "Julia"),
                     age=c(48,33,45,34,21,12,33,44,66,99),
                     gender=c("F","M","F","M","F","F","M","F","M","F"))
relations <- data.frame(from=c("Bob", "Cecil", "Cecil", "David", "David", "Esmeralda",
                               "Cecil", "David", "Esmeralda", "Jon", "Anna", "Julia",
                               "Bob", "Cecil", "Cecil", "David"),
                        to=c("Alice", "Bob", "David", "Esmeralda", "Cecil", "David",
                             "Alice", "Alice", "Bob", "Alice", "Fritz", "Jon",
                             "Anna", "Alice", "Bob", "Cecil"),
                        same.dept=c(FALSE,FALSE,TRUE,FALSE,FALSE,TRUE,
                                    TRUE,FALSE,FALSE,TRUE,FALSE,FALSE,
                                    FALSE,TRUE,FALSE,FALSE),
                        friendship=c(4,5,5,2,1,1,1,3,5,7,9,1,7,8,2,4),
                        advice=c(4,5,5,4,2,3,1,5,7,8,2,4,7,8,2,4))
g <- graph.data.frame(relations, directed=TRUE, vertices=actors)
V(g)$name <- as.character(vertex_attr(g, "gender"))
edges <- get.data.frame(g)
external <- length(which(edges$from != edges$to))
internal <- length(which(edges$from == edges$to))
ei_index = (external - internal) / nrow(edges)
new.graph <- permute(g, sample(vcount(g)))

Surface analysis of / construction from 3D point set
I am aware there are a number of answered posts on this topic in general and I did my share of reading these, but chances are I missed some or misunderstood others, so sorry in advance if this has already been explained elsewhere.
What I have
A set of 3D points like this [scatter-plot image], or, for better visibility, the same points with a Delaunay-tetrahedralized surface shown as a wire mesh [second image]:
These points describe the surface I'm interested in.
What I want
Ultimately, I would like to have a closed surface that contains every point while minimizing connectivity between points. (The goal is then to compute an estimate of the area per point by doing some sort of Voronoi-like compartmentalization of the facets: halving the connections between points and allocating facet segments that are delimited by the normals starting from the so-introduced halving points to the closest points.)
The part in parentheses I would have to figure out if I ever get to the surface I'm aiming at; I just added it because it may further clarify what I intend to construct.
I should add that I have rather basic knowledge of geometry. During a different project I stumbled upon the concepts of the convex hull and alpha shapes, but I figured neither would be of much use in this case. Maybe all I need is a pointer to the right keyword. I have already become convinced that surface reconstruction from point data is far more complicated than I initially anticipated, so although it sounds solvable to me, I guess there's a chance it's not.
Anyways, thanks for your help!
Edit: spelling
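Despite the reservation above, the alpha-shape idea is worth revisiting here: tetrahedralize with Delaunay, drop tetrahedra whose circumradius exceeds a chosen alpha, and keep the triangles that then belong to exactly one remaining tetrahedron; those form the boundary surface (closed, if alpha is chosen well). A rough sketch with scipy, where alpha is an assumed tuning parameter (too small leaves holes, too large degenerates to the convex hull); production code should also filter near-flat tetrahedra before the solve:

```python
import numpy as np
from itertools import combinations
from collections import Counter
from scipy.spatial import Delaunay

def circumradius(pts):
    """Circumradius of a tetrahedron given as a (4, 3) array of vertices."""
    a, b, c, d = pts
    # The circumcenter x satisfies |x-a|^2 = |x-b|^2 = |x-c|^2 = |x-d|^2,
    # which reduces to a 3x3 linear system.
    A = 2.0 * np.array([b - a, c - a, d - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a, d @ d - a @ a])
    center = np.linalg.solve(A, rhs)
    return np.linalg.norm(center - a)

def alpha_surface(points, alpha):
    """Boundary triangles (index triples) of the alpha complex of a 3D cloud."""
    tets = Delaunay(points).simplices
    kept = [t for t in tets if circumradius(points[t]) < alpha]
    # Count how many kept tetrahedra share each triangle; boundary
    # triangles belong to exactly one.
    faces = Counter()
    for t in kept:
        for tri in combinations(sorted(t), 3):
            faces[tri] += 1
    return [tri for tri, n in faces.items() if n == 1]
```

With alpha set very large every tetrahedron survives and the result coincides with the convex hull triangles; shrinking alpha carves out the concavities the poster's point set has.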

Efficient algorithm to find closest line segments for each point
Given a polygonal subdivision S and a set of points P, find the closest line segment in S for each point (in 2D space).
In my setting, I have hundreds of thousands of segments and a couple thousand points. Checking each line for each point would take too long. Is there an efficient algorithm for this?
I was considering multiple options, but can't figure out which is best.
 Build a trapezoidal map and query the face each point is in. Then go over the edges of the face (in the subdivision) to find the nearest line.
 Build a range tree or segment tree. Query a box around the point and find the closest line segment in it. There has to be a segment in the box for this to find anything.
 Build a line segment voronoi diagram. Each face describes the nearest segment, but I wouldn't know how to do a point query, since the edges can be parabolic arcs.
What is a good highlevel approach for this problem?
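One pragmatic, approximate option alongside the three above: densely sample every segment, put the samples in a k-d tree, and answer each query with the segment that owns the nearest sample. The answer can be wrong near segment boundaries by up to about half the sampling step, which can be repaired by exactly checking a few candidate segments afterwards. A sketch with scipy (the segment data and step size are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def build_segment_index(segments, step=0.05):
    """Sample each segment every `step` units and index all samples."""
    samples, owner = [], []
    for i, (p, q) in enumerate(segments):
        p, q = np.asarray(p, float), np.asarray(q, float)
        n = max(2, int(np.linalg.norm(q - p) / step) + 1)
        for t in np.linspace(0.0, 1.0, n):
            samples.append(p + t * (q - p))
            owner.append(i)       # remember which segment produced the sample
    return cKDTree(np.array(samples)), np.array(owner)

def closest_segment(tree, owner, point):
    _, idx = tree.query(point)    # nearest sampled point
    return owner[idx]             # index of the segment it came from

segments = [((0, 0), (1, 0)), ((0, 2), (1, 2))]   # hypothetical data
tree, owner = build_segment_index(segments)
print(closest_segment(tree, owner, (0.5, 0.4)))   # → 0
```

Build time is linear in the total sampled length, and each query is a single logarithmic k-d tree lookup, which comfortably handles thousands of points against hundreds of thousands of segments.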

How to find the computational geometry of circles and rectangles using x and y values in c++?
I'm a beginner in C++ and I've been tasked with computing certain geometric quantities of circles and rectangles in C++. My issue is that I don't know how to set up the parameters in either the header or the source file. I don't want the solution; I want to better understand how to set up the functions.
Each function with its parameters are as follows:
GetCircumference(xc: double, yc: double, xe: double, ye: double): double
parameters:
xc: double, x-value of circle’s center,
yc: double, y-value of circle’s center,
xe: double, x-value of point on circle’s edge, and
ye: double, y-value of point on circle’s edge
– returns: the floating point value representing the circumference of a circle centered at (xc, yc) with a second point on the edge (xe, ye).
GetVolume(xc: double, yc: double, xe: double, ye: double): double
parameters:
xc: double, x-value of circle’s center,
yc: double, y-value of circle’s center,
xe: double, x-value of point on circle’s edge, and
ye: double, y-value of point on circle’s edge
– returns: the floating point value representing the volume of the circle centered at (xc, yc) with a second point on the edge (xe, ye).
GetPerimeter(xll: double, yll: double, xur: double, yur: double): double
parameters:
xll: double, x-value of lower-left point of rectangle,
yll: double, y-value of lower-left point of rectangle,
xur: double, x-value of upper-right point of rectangle, and
yur: double, y-value of upper-right point of rectangle
– returns: the floating point value representing the perimeter of the rectangle
GetDistanceSquared(x1: double, y1: double, x2: double, y2: double): double
parameters:
x1: double, x-value of point 1,
y1: double, y-value of point 1,
x2: double, x-value of point 2, and
y2: double, y-value of point 2
– returns: the floating point value representing the squared distance between points 1 and 2.
GetDistance(x1: double, y1: double, x2: double, y2: double): double
parameters:
x1: double, x-value of point 1,
y1: double, y-value of point 1,
x2: double, x-value of point 2, and
y2: double, y-value of point 2
– returns: the floating point value representing the distance between points 1 and 2.
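For orientation (not the graded code itself), every function above reduces to the distance formula; a sketch of the underlying math, using the parameter names from the spec:

```latex
r   = \sqrt{(x_e - x_c)^2 + (y_e - y_c)^2}                 % radius: center to edge point
C   = 2\pi r                                               % GetCircumference
A   = \pi r^2                                              % the circle's "volume" (area)
P   = 2\big((x_{ur} - x_{ll}) + (y_{ur} - y_{ll})\big)     % rectangle perimeter
d^2 = (x_2 - x_1)^2 + (y_2 - y_1)^2, \qquad d = \sqrt{d^2} % GetDistanceSquared, GetDistance
```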
comp_geo.h:
/* comp_geo.h */
double GetCircumference(double xc, double yc, double xe, double ye);
double GetVolume(double, double, double, double);
double GetPerimeter(double, double, double, double);
double GetDistanceSquared(double, double, double, double);
double GetDistance(double, double, double, double);
comp_geo.cc:
/* comp_geo.cc */
#include <cmath>
#include "comp_geo.h"

double GetCircumference(double xc, double yc, double xe, double ye) {
    double pi = 3.14159265358;
    double r = sqrt(pow((xe - xc), 2) + pow((ye - yc), 2));
    double c = 2 * pi * r;
    return c;
}

double GetVolume(double, double, double, double) { return 0.0; }
double GetPerimeter(double, double, double, double) { return 0.0; }
double GetDistanceSquared(double, double, double, double) { return 0.0; }
double GetDistance(double, double, double, double) { return 0.0; }
This is the test file that we were given to complete this assignment:
test.cc:
/* test.cc */
#include <iostream>
using std::cout;
using std::endl;
#include "comp_geo.h"

bool TestGetCircumference() {
    const double expected = 0.0;
    double actual = GetCircumference(0.0, 0.0, 0.0, 0.0);
    if (actual != expected) {
        cout << "Expected: " << expected << ", Actual: " << actual << endl;
        return false;
    }
    return true;
}

bool TestGetPerimeter() {
    const double expected = 0.0;
    double actual = GetPerimeter(0.0, 0.0, 0.0, 0.0);
    if (actual != expected) {
        cout << "Expected: " << expected << ", Actual: " << actual << endl;
        return false;
    }
    return true;
}

bool TestGetDistanceSquared() {
    const double expected = 0.0;
    double actual = GetDistanceSquared(0.0, 0.0, 0.0, 0.0);
    if (actual != expected) {
        cout << "Expected: " << expected << ", Actual: " << actual << endl;
        return false;
    }
    return true;
}

bool TestGetDistance() {
    const double expected = 0.0;
    double actual = GetDistance(0.0, 0.0, 0.0, 0.0);
    if (actual != expected) {
        cout << "Expected: " << expected << ", Actual: " << actual << endl;
        return false;
    }
    return true;
}

int main(int argc, char* argv[]) {
    cout << "TestGetCircumference" << endl;
    if (!TestGetCircumference()) return 1;
    cout << "TestGetPerimeter" << endl;
    if (!TestGetPerimeter()) return 1;
    cout << "TestGetDistanceSquared" << endl;
    if (!TestGetDistanceSquared()) return 1;
    cout << "TestGetDistance" << endl;
    if (!TestGetDistance()) return 1;
    return 0;
}
and the makefile:
makefile:
CC = g++                    # use the g++ compiler
FLAGS = -std=c++11          # compile with C++ 11 standard
FLAGS += -Wall              # compile with all warnings
LINK = $(CC) $(FLAGS) -o    # final linked build to binary executable
COMPILE = $(CC) $(FLAGS) -c # compilation to intermediary .o files

test : comp_geo.o test.cc
	$(LINK) $@ $^
comp_geo.o : comp_geo.cc comp_geo.h
	$(COMPILE) $<
clean:
	@rm test comp_geo.o
Also, the way the grader is set up, it will look for GetVolume, but a circle does not have a volume, so the professor suggested that we write a second function GetArea that calls and returns the value of the function GetVolume.