What algorithm could be used to generate a mesh from xyz coordinates, if the input is an xyz dataset and the output is the order of points to connect?
Suppose I have a set of xyz coordinates S = {(x1,y1,z1), ..., (xn,yn,zn)}. The algorithm I am looking for should return a sequence A of indices into S that represents the order in which to connect the xyz coordinates.
For example:
If S = {(0,1,1), (1,4,3), (6,0,1)} and A = (0, 2, 1),
it means that point p1 = (0,1,1) first connects to p3 = (6,0,1), and then p3 connects to p2 = (1,4,3).
I'd like to implement it in a mesh-generation program like the one in Blender, so the output A must connect the points in S in a way that forms a closed surface.
Mesh generation from xyz points in Blender: [link]
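To make the input/output contract concrete, here is a minimal Python sketch (the helper name is mine, not from any library) of turning an order A into the edges to create. Note that for a *closed surface* one would normally run a surface-reconstruction algorithm over S, such as a 3D convex hull, Delaunay-based triangulation, ball pivoting, or Poisson reconstruction; those return faces as index triples rather than a single connection order.

```python
# Hypothetical helper: turn an index order A into the list of edges to
# create between points of S. This only illustrates the question's
# input/output format; it is not a surface-reconstruction algorithm.
def edges_from_order(order):
    return [(order[i], order[i + 1]) for i in range(len(order) - 1)]

S = [(0, 1, 1), (1, 4, 3), (6, 0, 1)]
A = [0, 2, 1]
print(edges_from_order(A))  # [(0, 2), (2, 1)]
```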
See also questions close to this topic

Why does Math#nextUp not increment by Double.MIN_VALUE?
I learned this from the Java documentation
It basically says that:
Returns the floating-point value adjacent to d in the direction of positive infinity.
and especially:
If the argument is zero, the result is Double.MIN_VALUE
With this logic,
Math.nextUp(x) - x
should always be Double.MIN_VALUE. So I tested it out:
double d_up = Math.nextUp(0);
System.out.println(d_up);                     // 1.401298464324817E-45
System.out.println(d_up == Double.MIN_VALUE); // false
System.out.println(Double.MIN_VALUE);         // 4.9E-324
and it doesn't even work with zero.
Why does Math.nextUp(x) not equal x + Double.MIN_VALUE?
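Incidentally, the value printed above, 1.401298464324817E-45, is Float.MIN_VALUE, which suggests the integer literal 0 selected the float overload of Math.nextUp rather than the double one. The behaviour itself can be observed in any language with IEEE-754 doubles; a sketch in Python (assuming Python >= 3.9 for math.nextafter):

```python
import math

# Stepping from 0.0 toward +infinity gives the smallest positive
# subnormal double, 5e-324 (Java's Double.MIN_VALUE).
up = math.nextafter(0.0, math.inf)
print(up)            # 5e-324
print(up == 5e-324)  # True

# Away from zero the spacing between adjacent doubles (the ULP) grows,
# so nextafter(x) - x is generally much larger than 5e-324.
x = 1.0
print(math.nextafter(x, math.inf) - x)  # 2.220446049250313e-16
```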
Android Studio - How to run WebWork math problems in Android Studio without a server or downloading WebWork?
Hello developer friends!
Before beginning:
- See here what WebWork is.
- Here you can find the WebWork open source.
- WebWork uses Perl and a Perl-based language called PG. More info here.
- WebWork also uses MathJax to display math operations.
Now I'm trying to figure out how I can display all the problems in the webwork-open-problem-library from Android Studio.
Example:

Calculating a cumulative sum with for loops
I'm trying to come up with a summation of variables within a model field of 27 layers. Most of the variables are applicable at each layer, but for one of the variables I'm gauging a change in height, and therefore subtracting the previous layer's "top height" from the total height at the given layer.
Basically, I'm just not sure how to represent a cumulative sum at any point using for loops.
I'm currently trying, per my code below, to use two for loops to do the cumulative sum, and I'm getting the error:
"Attempted to access flheight(299,162,0,12); index must be a positive integer or logical."
I know this is because flheight(299,162,0,12) doesn't exist: there is no layer 0 in the third dimension.
no2molcm2 = 0; dh = 0; patm = 0; no2ppm = 0;
for n = 0:26
    for i = 1:27
        T = Temp(299,162,i,12); % K
        dh = (flheight(299,162,i,12)*100) - flheight(299,162,n,12)*100;
        patm = sum(Pres(299,162,i,12))*(1/101325); % atm
        R = 82.06; % cm3*atm/(K*mol)
        av = 6.022140857747*(10^23); % 1/mol
        no2ppm = sum(no2(299,162,i,12));
        no2molcm2 = cumsum(((no2ppm*av*patm)/(R*T))*dh);
    end
end
My question here is: how on earth can I tell MATLAB, when it sees this input (or produces this error), to just set the value to zero?
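One common pattern for this (a Python sketch of the general idea, not the asker's MATLAB code): keep a running total and guard the index so that the "previous layer" of the first layer is treated as zero instead of being looked up.

```python
# Sketch: cumulative sum over layers, where each term needs the previous
# layer's top height. For the first layer there is no previous layer, so
# we treat its "previous top height" as 0 rather than indexing layer 0.
def column_total(heights, values):
    total = 0.0
    running = []
    for i, v in enumerate(values):
        prev_top = heights[i - 1] if i > 0 else 0.0  # guard: nothing below the first layer
        dh = heights[i] - prev_top
        total += v * dh
        running.append(total)
    return running

print(column_total([10, 25, 45], [1, 2, 3]))  # [10.0, 40.0, 100.0]
```

The MATLAB equivalent of the guard is an `if i == 1` branch (or starting the loop at layer 2 with an explicit layer-1 term), rather than trying to make the out-of-range index return zero.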

Mac: adding the NupenGL Core library
I'm starting a C++ coursework project. The base code uses Visual Studio 2017 on Windows and an OpenGL graphics package called nupengl.core (https://www.nuget.org/packages/nupengl.core/). However, I have a Mac, and I don't know how to add this external library in CMake. I'll need it for includes such as
#include <gl/freeglut.h>
I've been using CLion for C++ and don't have much experience with CMake, so any help would be greatly appreciated. I've researched a lot, but all the tutorials I've found use Visual Studio on Windows, which doesn't help, as the Mac version of Visual Studio doesn't support C++.
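The nupengl.core NuGet package essentially bundles freeglut and GLEW for Windows; on macOS the equivalent is to install freeglut separately (e.g. via Homebrew) and let CMake find it. A hypothetical CMakeLists.txt sketch, assuming `brew install freeglut` (paths and target names here are assumptions, not tested):

```cmake
# Sketch: find OpenGL and GLUT/freeglut with CMake's built-in modules.
cmake_minimum_required(VERSION 3.10)
project(coursework)

find_package(OpenGL REQUIRED)
find_package(GLUT REQUIRED)   # finds freeglut or the system GLUT framework

add_executable(coursework main.cpp)
target_include_directories(coursework PRIVATE
    ${OPENGL_INCLUDE_DIR} ${GLUT_INCLUDE_DIR})
target_link_libraries(coursework PRIVATE
    ${OPENGL_LIBRARIES} ${GLUT_LIBRARIES})
```

Note that on a case-sensitive filesystem the include is conventionally spelled `#include <GL/freeglut.h>` (capital GL), whereas `<gl/...>` only happens to work on Windows.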

Blending antialiased circles
I've implemented the Xiaolin Wu algorithm to draw antialiased circles, and it works. However, in my app I can draw many circles on the screen, and they don't have full opacity, so I want to blend them. Before implementing the Xiaolin Wu antialiasing algorithm, my blending method worked. I use very simple blending:
int blendColors(int a, int b, float t) {
    double s = sqrt((1 - t) * a * a + t * b * b);
    return s;
}

void setPixel(int index, int r, int g, int b, int a, unsigned char* data) {
    int oldR = data[index];
    int oldG = data[index + 1];
    int oldB = data[index + 2];
    int oldA = data[index + 3];
    int newA = min((int) (oldA + a * 0.25f), 255);
    int newR = blendColors(oldR, r, 0.5f);
    int newG = blendColors(oldG, g, 0.5f);
    int newB = blendColors(oldB, b, 0.5f);
    data[index] = newR;
    data[index + 1] = newG;
    data[index + 2] = newB;
    data[index + 3] = newA;
}
Alpha blending works like darkening.
Now, if I start from a transparent background, it looks like this:
But when I start from an opaque background, it looks like this:
As you can see, the antialiasing is missing. That's because the opaque background already has an opacity of 255, so there's an issue in the blending algorithm. I have to find another way to blend colours when the background is opaque. How can I do this?
The circle algorithm is here:
void drawFilledCircle(int x, int y, int startRadius, int endRadius,
                      int r, int g, int b, int a,
                      unsigned char* data, unsigned char* areasData,
                      int startAngle, int endAngle, bool blendColor) {
    assert(startAngle <= endAngle);
    assert(startRadius <= endRadius);
    dfBufferCounter = 0;
    for(int i = 0; i < DRAW_FILLED_CIRCLE_BUFFER_SIZE; i++) {
        drawFilledCircleBuffer[i] = -1;
    }
    for(int cradius = endRadius; cradius >= startRadius; cradius--) {
        bool last = cradius == endRadius;
        bool first = cradius == startRadius && cradius != 0;
        float radiusX = cradius;
        float radiusY = cradius;
        float radiusX2 = radiusX * radiusX;
        float radiusY2 = radiusY * radiusY;
        float maxTransparency = 127;
        float quarter = roundf(radiusX2 / sqrtf(radiusX2 + radiusY2));
        for(float _x = 0; _x <= quarter; _x++) {
            float _y = radiusY * sqrtf(1 - _x * _x / radiusX2);
            float error = _y - floorf(_y);
            float transparency = roundf(error * maxTransparency);
            int alpha = last ? transparency : maxTransparency;
            int alpha2 = first ? maxTransparency - transparency : maxTransparency;
            setPixel4(x, y, _x, floorf(_y), r, g, b, alpha, cradius, endRadius, data, areasData, blendColor);
            setPixel4(x, y, _x, floorf(_y) - 1, r, g, b, alpha2, cradius, endRadius, data, areasData, blendColor);
        }
        quarter = roundf(radiusY2 / sqrtf(radiusX2 + radiusY2));
        for(float _y = 0; _y <= quarter; _y++) {
            float _x = radiusX * sqrtf(1 - _y * _y / radiusY2);
            float error = _x - floorf(_x);
            float transparency = roundf(error * maxTransparency);
            int alpha = last ? transparency : maxTransparency;
            int alpha2 = first ? maxTransparency - transparency : maxTransparency;
            setPixel4(x, y, floorf(_x), _y, r, g, b, alpha, cradius, endRadius, data, areasData, blendColor);
            setPixel4(x, y, floorf(_x) - 1, _y, r, g, b, alpha2, cradius, endRadius, data, areasData, blendColor);
        }
    }
}

void setPixel4(int x, int y, int deltaX, int deltaY, int r, int g, int b, int a,
               int radius, int maxRadius, unsigned char* data,
               unsigned char* areasData, bool blendColor) {
    for(int j = 0; j < 4; j++) {
        int px, py;
        if(j == 0)      { px = x + deltaX; py = y + deltaY; }
        else if(j == 1) { px = x - deltaX; py = y + deltaY; }
        else if(j == 2) { px = x + deltaX; py = y - deltaY; }
        else            { px = x - deltaX; py = y - deltaY; }
        int index = (px + (img->getHeight() - py - 1) * img->getWidth()) * 4;
        bool alreadyInBuffer = false;
        for(int i = 0; i < dfBufferCounter; i++) {
            if(i >= DRAW_FILLED_CIRCLE_BUFFER_SIZE) break;
            if(drawFilledCircleBuffer[i] == index) {
                alreadyInBuffer = true;
                break;
            }
        }
        if(!alreadyInBuffer) {
            if(dfBufferCounter < DRAW_FILLED_CIRCLE_BUFFER_SIZE) {
                drawFilledCircleBuffer[dfBufferCounter++] = index;
            }
            setPixelWithCheckingArea(px, py, r, g, b, a, data, areasData, blendColor);
        }
    }
}
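For reference, the standard "source over" compositing operator handles opaque destinations correctly: the output alpha saturates at full opacity, but the color still blends by the source alpha, which is what preserves antialiased edges. A Python sketch of the arithmetic (not the asker's C++ code):

```python
# Sketch of standard "source over" alpha compositing.
# Colors are 0..255; alphas are normalized to 0..1.
def blend_over(src_rgb, src_a, dst_rgb, dst_a):
    out_a = src_a + dst_a * (1.0 - src_a)
    if out_a == 0.0:
        return (0, 0, 0), 0.0
    out_rgb = tuple(
        (s * src_a + d * dst_a * (1.0 - src_a)) / out_a
        for s, d in zip(src_rgb, dst_rgb)
    )
    return out_rgb, out_a

# A half-transparent red pixel over an opaque white background:
rgb, a = blend_over((255, 0, 0), 0.5, (255, 255, 255), 1.0)
print(rgb, a)  # (255.0, 127.5, 127.5) 1.0
```

Note the result stays fully opaque (a == 1.0) while the color is still a proper 50/50 mix, so partially covered edge pixels keep their antialiased look.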

Why does banding occur in SSAO?
I'm reading these 2 tutorials: https://learnopengl.com/Advanced-Lighting/SSAO and https://mtnphil.wordpress.com/2013/06/26/know-your-ssao-artifacts/. They both mention banding but don't explain why it occurs. Can someone explain why banding occurs?

How to remove high frequency regions from a canny image?
Assume I have a Canny edge-detected image, for example this one I stole from the internet:
I need to detect and remove regions with high-frequency features, so this image should become something like:
(Excuse the GIMP skills)
In other words, areas containing high variability of features should just go to black.
To my understanding, I could use something like a Fourier transform to filter out these kinds of regions, but I'm not sure how, or whether this is the best approach.
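One simple spatial-domain alternative to a Fourier approach (my suggestion, not from the post): measure local edge density in a sliding window over the binary edge image, and black out pixels whose neighbourhood density exceeds a threshold. A pure-Python sketch on a toy image:

```python
# Sketch: suppress high-frequency regions of a binary edge image by
# local edge density. `edges` is a 2D list of 0/1 values; pixels whose
# (2*win+1)-square neighbourhood has density above max_density are cleared.
def suppress_dense_regions(edges, win=3, max_density=0.5):
    h, w = len(edges), len(edges[0])
    out = [row[:] for row in edges]
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - win), min(h, y + win + 1)
            x0, x1 = max(0, x - win), min(w, x + win + 1)
            count = sum(edges[j][i] for j in range(y0, y1) for i in range(x0, x1))
            area = (y1 - y0) * (x1 - x0)
            if count / area > max_density:
                out[y][x] = 0  # high-frequency region: clear the pixel
    return out

# A dense 3x3 blob is removed; an isolated edge pixel survives.
img = [[0] * 8 for _ in range(8)]
for j in range(3):
    for i in range(3):
        img[j][i] = 1
img[7][7] = 1
res = suppress_dense_regions(img, win=1, max_density=0.4)
print(sum(map(sum, res)))  # 1
```

With real images you would do the same with an OpenCV box filter over the edge map instead of Python loops, but the idea is identical.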

Per-image normalization vs overall dataset normalization
I have a dataset of 1000 images and am using a CNN for finger-gesture recognition. Should I normalize each image by the mean of that image only, or by the mean of the entire dataset? Also, please suggest which Python library to use for this.
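The two schemes behave differently: per-image normalization removes each image's own brightness offset (useful when lighting varies between shots), while dataset normalization preserves relative brightness differences between images. A toy Python sketch contrasting them (real code would use NumPy, but the arithmetic is the point):

```python
# Sketch: each "image" is just a flat list of pixel values.
def per_image_norm(images):
    # Subtract each image's own mean.
    out = []
    for img in images:
        mean = sum(img) / len(img)
        out.append([p - mean for p in img])
    return out

def dataset_norm(images):
    # Subtract the single mean computed over every pixel of the dataset.
    all_pixels = [p for img in images for p in img]
    mean = sum(all_pixels) / len(all_pixels)
    return [[p - mean for p in img] for img in images]

imgs = [[0.0, 2.0], [4.0, 6.0]]
print(per_image_norm(imgs))  # [[-1.0, 1.0], [-1.0, 1.0]]
print(dataset_norm(imgs))    # [[-3.0, -1.0], [1.0, 3.0]]
```

Note how per-image normalization makes the two toy images identical (their brightness difference is gone), while dataset normalization keeps it.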

Xcode 10.0 OpenCV C++ project does not read NSCameraDescription from Info.plist
I have already added an Info.plist file, but it just won't let me use the webcam. I'm in a fix, as I have already tried editing Info.plist as a source file, and I also linked Info.plist to the app's plist in the General settings. Am I missing something? A link to the error is given in the blue highlighted section.
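For what it's worth, the Info.plist key the macOS camera-permission prompt checks is NSCameraUsageDescription (note the "Usage" in the middle, unlike the title above). A hypothetical entry, with the description string being my placeholder:

```xml
<!-- Required for camera access on macOS 10.14+ / recent Xcode. -->
<key>NSCameraUsageDescription</key>
<string>This app uses the webcam to capture video for OpenCV processing.</string>
```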

Identify all closed links between elements of two columns (i.e. closed networks)
My example:
df <- data.frame(
  from = c(1111, 2222, 3333, 4444, 5555, 5555, 7777, 8888, 9999, 0),
  to = c(2222, 1111, 4444, 5555, 3333, 7777, 8888, 9999, 5555, NA)
)
df
   from   to
1  1111 2222
2  2222 1111
3  3333 4444
4  4444 5555
5  5555 3333
6  5555 7777
7  7777 8888
8  8888 9999
9  9999 5555
10    0   NA
Here I'd like to identify:
- whether a certain number (or string; let's call it "id") in the "from" column is part of any closed network;
- and how many members are in the network at stake (there can be multiple networks).
In my definition here, a closed network is any network where the outgoing link returns to the initial node, either indirectly (through other nodes, where no node is repeated) or directly through the receiving node of the initial link.
Let's visualize this:
library(visNetwork)
library(magrittr)
nodes <- data.frame(id = unique(c(df$from, df$to)),
                    label = unique(c(df$from, df$to)))
nodes <- nodes[!is.na(nodes$id), ]
visNetwork(edges = df, nodes = nodes) %>% visEdges(arrows = 'to')
So above we can see that, for example, 5555 is part of a closed network; in fact it's part of 2 such networks, with 3 and 4 members respectively. Therefore, I'd like to have this kind of output:
   from   to any_closed_network nr_members_closed_network
1  1111 2222                Yes                         2
2  2222 1111                Yes                         2
3  3333 4444                Yes                         3
4  4444 5555                Yes                         3
5  5555 3333                Yes                      3, 4
6  5555 7777                Yes                      3, 4
7  7777 8888                Yes                         4
8  8888 9999                Yes                         4
9  9999 5555                Yes                         4
10    0   NA                 No                         0
Currently, my solution consists of recursively joining "to" to "from", and then joining the obtained columns further to the previous ones, so that I get a chain where I can see the whole development. However, this is a bit cumbersome / brute force, and I'd like to know whether any of you has an idea how to approach this in an elegant way.
I also had a look at the igraph package, but the clusters you can obtain there don't necessarily distinguish between all the small closed networks, and even if you can set the number of members, this is rather fixed; I'd like to know about all such existing closed networks.

How to find the shortest path that pass through a group of Sets?
I have an algorithmic problem where I have a number of unordered sets of elements, and I need to find the shortest path (an ordered combination of the sets) that passes through all of those sets. There may be thousands of sets.
For example, let there be the following 4 unordered sets:
A = abcdefg
B = cd
C = abch
D = defi
The shortest path size is 11.
One possible solution is:
P = CADB = habcgdeficd
|P| = 11
Note that sets may share elements with neighboring sets in the path! There may also be duplicated elements belonging to different sets (as in the example above: 'c' and 'd' are duplicated in P by adding B to CAD). Please advise with an algorithm to find the shortest path as described.
Thanks! 
How do I enumerate all *maximal* cliques in a graph using networkx + python?
If you look at https://en.wikipedia.org/wiki/Clique_problem, you'll notice there is a distinction between cliques and maximal cliques. A maximal clique is contained in no other clique but itself. So I want those cliques, but networkx seems to only provide:
networkx.algorithms.clique.enumerate_all_cliques(G)
So I tried a simple for loop filtering mechanism (see below).
def filter_cliques(self, cliques):
    # TODO: why do we need this? Post in forum...
    res = []
    for C in cliques:
        C = set(C)
        for D in res:
            if C.issuperset(D) and len(C) != len(D):
                res.remove(D)
                res.append(C)
                break
            elif D.issuperset(C):
                break
        else:
            res.append(C)
    res1 = []
    for C in res:
        for D in res1:
            if C.issuperset(D) and len(C) != len(D):
                res1.remove(D)
                res1.append(C)
            elif D.issuperset(C):
                break
        else:
            res1.append(C)
    return res1
I want to filter out all the proper sub-cliques. But as you can see, it sucks, because I had to filter twice; it's not very elegant. So the problem is: given a list of lists of objects (integers, strings) which were the node labels in the graph (enumerate_all_cliques(G) returns exactly this list of lists of labels), filter out all proper sub-cliques. So for instance:
[[a, b, c], [a, b], [b, c, d]] => [[a, b, c], [b, c, d]]
What's the quickest pythonic way of doing that?
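Two notes. First, networkx does provide maximal cliques directly: networkx.find_cliques(G) enumerates only the maximal ones (Bron-Kerbosch), so no filtering is needed. Second, if you do want to filter a list yourself, sorting by size (largest first) lets a single pass suffice, because a clique can only be contained in a clique at least as large that was already kept:

```python
# Sketch: keep only the maximal sets from a list of cliques.
# Processing largest-first means any set containing `c` is already in
# `kept` by the time `c` is examined, so one pass is enough.
def maximal_only(cliques):
    kept = []
    for c in sorted(map(set, cliques), key=len, reverse=True):
        if not any(c <= k for k in kept):  # c is a subset of a kept clique
            kept.append(c)
    return kept

cliques = [["a", "b", "c"], ["a", "b"], ["b", "c", "d"]]
print([sorted(c) for c in maximal_only(cliques)])
# [['a', 'b', 'c'], ['b', 'c', 'd']]
```

The `c <= k` subset test also drops exact duplicates, which the double-pass version in the question handles inconsistently.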

Minimizing the maximum Manhattan distance
Given N points on a grid, find the number of grid points that minimize the maximal Manhattan distance to the N given points. Also, determine that minimal distance itself.
The points are inside a grid, –10000 ≤ Xi ≤ 10000 ; –10000 ≤ Yi ≤ 10000, N<=100000.
In the example below the points are (1, 1), (6,1), (6,6), (3,4) and the smallest maximal Manhattan distance (equal to 5) is achieved from points (4,3), (5,2) (marked with E).
Is there an efficient algorithm to solve the problem? The restrictions are quite large so the brute force approach wouldn't work.
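A common approach (my suggestion, not from the post): rotating coordinates 45 degrees with u = x + y, v = x - y turns Manhattan distance into Chebyshev distance max(|du|, |dv|), so the minimal maximal distance depends only on the extremes of u and v, computable in O(N). A sketch for the distance part:

```python
# Sketch: minimal maximal Manhattan distance via the 45-degree rotation.
# In (u, v) = (x + y, x - y) coordinates, Manhattan distance becomes
# Chebyshev distance, whose minimax radius is half the larger span
# (rounded up on an integer grid).
def min_max_manhattan(points):
    us = [x + y for x, y in points]
    vs = [x - y for x, y in points]
    span_u = max(us) - min(us)
    span_v = max(vs) - min(vs)
    return (max(span_u, span_v) + 1) // 2

# The example from the question: the answer is 5.
print(min_max_manhattan([(1, 1), (6, 1), (6, 6), (3, 4)]))  # 5
```

Counting the optimal grid points takes more care (the optimal (u, v) region must be intersected with the parity constraint that u + v is even so it maps back to integer (x, y)), but it follows from the same spans.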

Search for squares inside an array of points
I'm setting up a computer-vision application, but I'm stuck with a check that I have to apply to an array of coordinates. I would like to retrieve all the possible squares from an array of coordinates.
image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners = cv2.goodFeaturesToTrack(image, 20, 0.01, 15)
corners = np.int0(corners)
print("Points")
for corner in corners:
    x, y = corner.ravel()
    cv2.circle(image, (x, y), 5, (0, 0, 255), 1)
print(corners)
corners = corners.tolist()
corners = flatten(corners)
This is only a part of the array of points that I have to use to retrieve all the squares inside my image:
[[10,50],[420,188],[177,425],[225,425],[176,220],[225,221],[10,170],[21,50],[21,170]]
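One straightforward way (my sketch, not from the post): brute-force over 4-point combinations, using the fact that four points form a square exactly when their six pairwise squared distances are four equal sides s and two equal diagonals 2s. That is O(n^4) combinations, fine for the ~20 corners goodFeaturesToTrack returns here.

```python
from itertools import combinations

def is_square(p, q, r, s):
    # Six pairwise squared distances; sorted, a square looks like
    # [s, s, s, s, 2s, 2s] with s > 0.
    d = sorted((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
               for a, b in combinations((p, q, r, s), 2))
    return d[0] > 0 and d[0] == d[3] and d[4] == d[5] and d[4] == 2 * d[0]

def find_squares(points):
    return [combo for combo in combinations(points, 4) if is_square(*combo)]

pts = [(0, 0), (0, 2), (2, 0), (2, 2), (5, 5)]
print(find_squares(pts))  # [((0, 0), (0, 2), (2, 0), (2, 2))]
```

With detected corners you would replace the exact equality tests with a tolerance, since feature coordinates are noisy.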

How to find number of points such that the smallest maximum *distance in a set is minimised
Given a set of points on a grid, find the number of grid points that minimise the maximum *distance to the points in the set, and the distance itself.
*The distance between 2 points A and B for this problem is calculated by the formula |Xa - Xb| + |Ya - Yb| (the Manhattan distance).
Is there an efficient algorithm to solve this problem?
The points are inside a grid, –10000 ≤ Xi ≤ 10000 ; –10000 ≤ Yi ≤ 10000