What algorithm could be used to generate a mesh from xyz coordinates? The input is an xyz dataset and the output is the order of points to connect.
Suppose I have a set of xyz coordinates S = {(x1,y1,z1), ..., (xn,yn,zn)}, and the algorithm I am looking for returns a set A of indices into S that represents the order in which to connect the coordinates.
For example:
If S = {(0,1,1), (1,4,3), (6,0,1)}, and A = {(0,2,1)}
It means that point x1(0,1,1) will first connect to x3(6,0,1), then x3 will connect to x2(1,4,3).
I'd like to implement it in a mesh generation program like the one in Blender, so the output set A must connect the points in S so that they form a closed surface.
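If the point cloud is convex, the convex hull already gives a closed, triangulated surface, and libraries return it directly as index triples into S. For general clouds, the usual keywords are Delaunay-based reconstruction, ball pivoting, and Poisson surface reconstruction. A minimal sketch (scipy is an assumed dependency, and the fourth point is invented so the hull encloses a volume):

```python
import numpy as np
from scipy.spatial import ConvexHull

S = np.array([
    (0.0, 1.0, 1.0),
    (1.0, 4.0, 3.0),
    (6.0, 0.0, 1.0),
    (2.0, 2.0, 5.0),   # hypothetical 4th point so the hull encloses a volume
])

hull = ConvexHull(S)
# hull.simplices plays the role of the set A: each row holds three
# indices into S describing one triangular face of the closed surface.
for face in hull.simplices:
    print(face)
```

In Blender, such index triples can be passed as the face list to `Mesh.from_pydata`.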
See also questions close to this topic

why the dot product of a normalized vector is always the data size - 1
I don't understand why the dot product of a normalized (scaled) vector with itself is always the data size - 1.
a <- scale(rnorm(100))
crossprod(a)  # equals 100 - 1 = 99
b <- scale(runif(50))
crossprod(b)  # equals 50 - 1 = 49
c <- scale(rchisq(30, 5))
crossprod(c)  # equals 30 - 1 = 29
I want to understand the mathematics behind this.
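The result is purely algebraic and holds for any distribution, as the three examples suggest. A numeric check in numpy (assuming, as in R's scale(), centering and division by the sample standard deviation): the sum of squared z-scores is sum((x - m)^2) / s^2, and since s^2 = sum((x - m)^2) / (n - 1), everything cancels except n - 1.

```python
# Same computation as the R snippet, here in numpy (an assumption:
# R's scale() centers and divides by the *sample* standard deviation).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
z = (x - x.mean()) / x.std(ddof=1)   # what scale() does
# sum((x-m)^2) / s^2 with s^2 = sum((x-m)^2)/(n-1) leaves exactly n - 1
print(z @ z)   # 99 (up to floating point)
```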

I am just tired and want someone to tell me the inverse of this equation
(x*key[0]) + key[1]/(key[2] - key[3]). Just tell me the inverse of this. I am really tired; it's 11:18 PM and I have school tomorrow.
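Assuming the expression reads y = (x*key[0]) + key[1]/(key[2] - key[3]) (the operator between key[2] and key[3] did not survive in the post), it is linear in x, so the inverse is x = (y - key[1]/(key[2] - key[3])) / key[0], provided key[0] != 0 and key[2] != key[3]. A sketch with made-up key values:

```python
# A sketch, assuming y = (x*key[0]) + key[1]/(key[2] - key[3]),
# with key[0] != 0 and key[2] != key[3].
def forward(x, key):
    return (x * key[0]) + key[1] / (key[2] - key[3])

def inverse(y, key):
    # subtract the constant term, then divide by the slope
    return (y - key[1] / (key[2] - key[3])) / key[0]

key = [3.0, 4.0, 7.0, 5.0]   # made-up key values
assert inverse(forward(10.0, key), key) == 10.0
```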

Transient Analysis LR series circuit
Hello guys,
how do I get the transient current i(t) in an LR series circuit with this controllable DC supply? How can I use the given change of the voltage from 0 to 15 at t >= 0?
I need help solving this simple assignment, or at least some good links to guide me.
Thanks in advance
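For reference, the step response of a series LR circuit is a standard result: with the supply stepping from 0 V to 15 V at t = 0, i(t) = (V/R)(1 - e^(-Rt/L)), rising from 0 toward the steady state V/R with time constant L/R. A sketch (the R and L values here are invented):

```python
# Textbook series-LR step response for a supply stepping 0 -> V at t = 0:
#   i(t) = (V / R) * (1 - exp(-R * t / L)),  t >= 0
import math

def lr_current(t, V=15.0, R=10.0, L=0.5):   # R and L are made-up values
    return (V / R) * (1.0 - math.exp(-R * t / L))

print(lr_current(0.0))    # 0.0: the inductor current cannot jump
# as t grows, i(t) approaches the steady state V/R
```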

Negative coordinates in C# Graphics?
I am following this documentation, specifically the DrawString method. It says that for
public void DrawString (string s, System.Drawing.Font font, System.Drawing.Brush brush, float x, float y);
x and y are the coordinates of the upper-left corner of the drawn text.
I called this function with x=0, but there was still some space between the border and the upper-left corner, so I called it with x=-10 and now the text touches the border.
Does this mean that somehow we can call these with negative values?

Draw a box at an angle
How can I change the code below so that drawRect also accepts an angle parameter? (x, y) represents the centroid. As I understand it, I would determine the coordinates of the four corners and then rotate all of them around (x, y) by that many degrees, keeping the same distance.
# draw a line from a to b
def line(a, b):
    # [code omitted]

def drawRect(x, y, w, h):
    a = (x - w / 2, y - h / 2)
    b = (x + w / 2, y - h / 2)
    c = (x + w / 2, y + h / 2)
    d = (x - w / 2, y + h / 2)
    line(a, b)
    line(b, c)
    line(c, d)
    line(d, a)
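The plan above (rotate each corner around the centroid) can be sketched as follows; `line` is passed in as a parameter here only so the sketch is self-contained:

```python
# Rotate each corner of the rectangle around the centroid (x, y), then draw.
import math

def rotate_point(px, py, cx, cy, angle_deg):
    """Rotate (px, py) around (cx, cy) by angle_deg degrees."""
    theta = math.radians(angle_deg)
    dx, dy = px - cx, py - cy
    return (cx + dx * math.cos(theta) - dy * math.sin(theta),
            cy + dx * math.sin(theta) + dy * math.cos(theta))

def draw_rect(x, y, w, h, angle_deg, line):
    corners = [(x - w / 2, y - h / 2), (x + w / 2, y - h / 2),
               (x + w / 2, y + h / 2), (x - w / 2, y + h / 2)]
    rotated = [rotate_point(px, py, x, y, angle_deg) for px, py in corners]
    # connect consecutive corners, closing the loop back to the first one
    for a, b in zip(rotated, rotated[1:] + rotated[:1]):
        line(a, b)
```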

JavaFX buttons, events, file
I've tried to write a graphical interface for my DataFrame project. I have to read a DataFrame from a file and then compute, for example, the min of each column. I have an open button, so I can read a file, but I don't know how to work with this file afterwards (if you know what I mean). When I press the min button, I get a NullPointerException. I guess the DataFrame is empty, but what should I do to fix it? I also have to draw graphs, but I don't know how to do that at all.
public class Graphics extends Application {

    private Text actionStatus;
    private Stage savedStage;
    DataFrame d;

    public static void main(String[] args) {
        Application.launch(args);
    }

    @Override
    public void start(Stage primaryStage) {
        Button open = new Button("open");
        open.setOnAction(new SingleFcButtonListener());
        HBox open1 = new HBox(10);
        open1.getChildren().addAll(open);

        Button save = new Button("Save");
        HBox save1 = new HBox(10);
        save1.getChildren().addAll(save);

        Button min = new Button("Min");
        HBox min1 = new HBox(10);
        min1.getChildren().addAll(min);
        min.setOnAction(new SingleButtonListener());

        actionStatus = new Text();
        VBox vbox = new VBox(30);
        vbox.getChildren().addAll(open1, save1, min, actionStatus);
        Scene scene = new Scene(vbox, 500, 300);
        primaryStage.setScene(scene);
        primaryStage.show();
        savedStage = primaryStage;
    }

    private class SingleFcButtonListener implements EventHandler<ActionEvent> {
        @Override
        public void handle(ActionEvent e) {
            showSingleFileChooser();
        }
    }

    private void showSingleFileChooser() {
        FileChooser fileChooser = new FileChooser();
        File selectedFile = fileChooser.showOpenDialog(null);
        if (selectedFile != null) {
            actionStatus.setText("File selected: " + selectedFile.getName());
            ArrayList<Class<? extends Value>> arr2 = new ArrayList<Class<? extends Value>>();
            arr2.add(Str.class);
            arr2.add(DateTime.class);
            arr2.add(com.projekt1.Double.class);
            arr2.add(Double.class);
            // note: this declares a new local variable named d, so the
            // field `d` stays null and showMean() later throws NPE
            DataFrame d = new DataFrame(selectedFile.getName(), arr2, true);
            d.showDatabase();
        }
    }

    private class SingleButtonListener implements EventHandler<ActionEvent> {
        @Override
        public void handle(ActionEvent e) {
            showMean();
        }
    }

    private void showMean() {
        d.mindf().showDatabase();
    }
}

Watershed Segmentation excluding isolated objects
Problem
Using this answer to create a segmentation program, I found it counts the objects incorrectly; isolated objects seem to be ignored.
I counted 123 objects but the program returns 117, as can be seen below. The objects circled in red seem to be missing:
Using the following image from a 720p webcam:
Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import label
import urllib.request

# https://stackoverflow.com/a/14617359/7690982
def segment_on_dt(a, img):
    border = cv2.dilate(img, None, iterations=5)
    border = border - cv2.erode(border, None)

    dt = cv2.distanceTransform(img, cv2.DIST_L2, 3)
    plt.imshow(dt)
    plt.show()
    dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(np.uint8)
    _, dt = cv2.threshold(dt, 140, 255, cv2.THRESH_BINARY)
    lbl, ncc = label(dt)
    lbl = lbl * (255 / (ncc + 1))
    # Completing the markers now.
    lbl[border == 255] = 255

    lbl = lbl.astype(np.int32)
    cv2.watershed(a, lbl)
    print("[INFO] {} unique segments found".format(len(np.unique(lbl)) - 1))
    lbl[lbl == -1] = 0
    lbl = lbl.astype(np.uint8)
    return 255 - lbl

# Open Image
resp = urllib.request.urlopen("https://i.stack.imgur.com/YUgob.jpg")
img = np.asarray(bytearray(resp.read()), dtype="uint8")
img = cv2.imdecode(img, cv2.IMREAD_COLOR)

## Yellow slicer
mask = cv2.inRange(img, (0, 0, 0), (55, 255, 255))
imask = mask > 0
slicer = np.zeros_like(img, np.uint8)
slicer[imask] = img[imask]

# Image Binarization
img_gray = cv2.cvtColor(slicer, cv2.COLOR_BGR2GRAY)
_, img_bin = cv2.threshold(img_gray, 140, 255, cv2.THRESH_BINARY)

# Morphological Opening
img_bin = cv2.morphologyEx(img_bin, cv2.MORPH_OPEN, np.ones((3, 3), dtype=int))

# Segmentation
result = segment_on_dt(img, img_bin)
plt.imshow(np.hstack([result, img_gray]), cmap='Set3')
plt.show()

# Final Picture
result[result != 255] = 0
result = cv2.dilate(result, None)
img[result == 255] = (0, 0, 255)
plt.imshow(result)
plt.show()
Question
How to count the missing objects?

Stabilize bounding box from moving object in a camera feed
I'm currently working on a project where I want to track keypoints of an object in an image feed where the object moves towards the camera. To completely understand the process the methodology is as follows:
1. use an object detector to locate the object
2. use the result from 1. to crop the image (i.e. a bounding box)
3. downsample the image from 2. to a fixed pixel height and width
4. use a pose estimator to track key points of the object in the image from 3.
The problem sounds trivial, but I'm running out of ideas to solve it. Imagine that the object has some oscillatory movement; the result from the object detector will of course inherit this movement (e.g. the center, height and width of the box).
The ideal case would be that the bounding box (2.) inherits some dynamics of the object, such as it "freezes" at t_{0}, i.e. we only use object detection in the first frame of the feed, and then the bounding box grows at the same rate as the object (moving towards the camera), under the assumption that the object has constant speed. This, however, needs some modeling of the camera system (which I'm having trouble finding relevant papers/sources about).
It should be mentioned that I've tried some approaches, including fitting the result from the object detector to a specific function and filtering the result with a high-pass filter. Both methods work sometimes, but I need something more general. These two approaches also have the drawback that they neglect some of the oscillatory movement in the keypoints (which I would like to track). To be clear, it's not the oscillatory movement per se that is the problem. The problem is that the bounding box inherits this movement, which leads to an "oscillatory coordinate system".
I'm not looking for a complete solution rather I'm looking for ideas or suggestions. Thank you.
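One simple idea, offered as a sketch rather than a full solution: low-pass filter the detector's box parameters with an exponential moving average, so the crop keeps the slow growth of the approaching object while damping the oscillation. The alpha parameter is an assumed tuning knob trading smoothness against lag.

```python
# Smooth per-frame bounding boxes with an exponential moving average.
def smooth_boxes(boxes, alpha=0.2):
    """boxes: iterable of (cx, cy, w, h) per frame; returns smoothed boxes."""
    smoothed, state = [], None
    for box in boxes:
        if state is None:
            state = box                      # initialise with the first detection
        else:
            # blend the new detection with the running state
            state = tuple(alpha * b + (1 - alpha) * s
                          for b, s in zip(box, state))
        smoothed.append(state)
    return smoothed
```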

Home surveillance, TensorFlow (+ OpenCV)
I am considering to implement a network of security cameras in my countryside property due to the recent intrusion of some burglars. To do so, I am trying to make the most out of the equipment I already have at home, such as:
- 6 IP night-vision cameras
- Jetson TK1 as a processing unit: https://www.nvidia.com/object/jetson-tk1-embedded-dev-kit.html
- Raspberry Pi 2 Model B (not needed, I guess)
- PoE switch
I want to feed the video stream of each camera into TensorFlow + OpenCV (or other deep learning frameworks) to make sure I can recognise what's/who's in my garden, garage, etc. Basically it would be:
- human detection & car detection
I'm planning to use Telegram or other APIs provided by instant messaging platforms to be notified in real time.
Configuring TensorFlow on my Ubuntu 16.04 LTS workstation (with GPU support) was not trivial at all, nor will be the process of recognising objects and people.
Do you have any recommendations for this little project? Would it make sense to use Docker to make the configuration smoother?

Space/time efficiently compute transitive closures of each dependents while incrementally building the directed graph
I don't have a formal education in graph theory, so if I use incorrect terminology or make other mistakes I do appreciate being corrected! This is not a homework question, this is actually for a programming language runtime but I've tried to simplify it significantly. Ultimately it just needs to be able to run on any Turing complete hardware.
I need to answer the question: given a node in a dependency graph, group its dependents by their own transitive dependents.
In other words, given a node in a dependency graph, find the set of sets of direct dependents which transitively have common dependents.
I think each of these sets (group) in the resulting set (groups) is a transitive closure.
e.g. given the pseudo code:
let a = 1
let b = a
let c = a
let d = b + c
let e = a
let f = a + e
You could compute the graph
           a
           │
  ┌─────┬──┴──┬──────┐
  │     │     │      │
  ▼     ▼     ▼      ▼
  b     c     e ──▶  f
  │     │
  └──┬──┘
     │
     ▼
     d
From this graph we can see that, of the dependents of a, both b and c have a dependent of d, and e has a dependent of f. Which would give us what we want, our set of sets:
groups = {{b, c}, {e, f}}
b and c have direct or transitive downstream relations, and e and f together as well. But, for example, e and f have no dependent (downstream) relation at all with b or c, either directly or indirectly (transitively).
I did a bunch of paper research and indeed there are numerous solutions, however they don't have the performance characteristics I'm looking for. The graph is incrementally created over time, and at each stage I need to be able to answer this question so traversing the entire graph each time is a dealbreaker.
I think I have a major advantage that isn't referenced in the various approaches I could find: I have full control over the creation of the graph and the dependents are added in reverse topological order, so the graph is correctly sorted. With that in mind, I considered the obvious solution of computing the answer incrementally (dynamic programming).
The graph can also contain cycles, but if you have a solution that doesn't work with cycles, still let me know!
I figured a bitmask would be a fast way to store and look up what dependents a given node has. When a dependent is added to a node, I would update that node's mask to include the bits of that dependent (which itself includes its dependents, and so on).
let maskCounter = 0;

class Node {
  constructor(name) {
    this.name = name;
    this.dependents = [];
    this.mask = 1 << maskCounter;
    maskCounter++;
  }

  addDependent(dependent) {
    // Now our mask contains the bits representing all of
    // its direct and transitive dependents
    this.mask = this.mask | dependent.mask;

    // Need to see if this dependent has a transitive
    // dependent of its own that exists in one of the groups
    for (const group of this.dependents) {
      const result = group.mask & dependent.mask;
      if (result) {
        group.mask |= dependent.mask;
        group.values.push(dependent);
        return;
      }
    }

    // If reached, this dependent has no transitive dependents
    // of its own with any of this node's other dependents.
    // That's confusing, huh?
    this.dependents.push({ mask: dependent.mask, values: [dependent] });
  }
}
The dependents would, however, need to be added in reverse order up the graph so that the top of the graph contained the masks of all its dependents.
const a = new Node('a');
const b = new Node('b');
const c = new Node('c');
const d = new Node('d');
const e = new Node('e');
const f = new Node('f');

e.addDependent(f);
a.addDependent(f);
a.addDependent(e);
c.addDependent(d);
b.addDependent(d);
a.addDependent(c);
a.addDependent(b);
The bitmasks would incrementally look like this:
e = e 00010000 | f 00100000
a = a 00000001 | f 00100000
a = a 00100001 | e 00110000
c = c 00000100 | d 00001000
b = b 00000010 | d 00001000
a = a 00110001 | c 00001100
a = a 00111101 | b 00001010
===========================
a = 00111111
At the end a has a mask of 00111111; each bit represents one of its transitive dependents, and in this example a has a transitive dependent of all of the other nodes.
If we look at the resulting value of a.dependents we see:
[
  { values: [f, e], mask: 0b00110000 },
  { values: [c, b], mask: 0b00001110 }
]
which provides the answer we're looking for, ultimately a set of sets:
a.dependents.map(group => group.values)
(this is an array, aka list, but it's being used as a set for simplicity). Here's a JSBin: https://jsbin.com/jexofip/1/edit?js,console
This works, and CPU-wise it is acceptable, because I'll very frequently need to know the grouped dependents, but the dependents themselves change far less often.
The example above uses JavaScript for simplicity of demo, which uses 32-bit signed integers for bitwise operations, so we can only create 31 unique nodes. We could use an arbitrary-precision integer (e.g. BigInt) to create an "unlimited" number of nodes, but the problem is memory usage.
Because each node needs their own unique bit flipped, I think the memory usage is:
N * (N + 1) / 2 bits   (where N = number of nodes)
e.g. 10,000 nodes is about 6.25 megabytes!
That's excluding any platform overhead for using arbitrary precision integers (or a similar custom data structure).
In my use case, it would be common to have 10k+ nodes, and in fact 100k+ is probable in some cases (625 MB!!!). It's also possible for new nodes to be created indefinitely, using an unbounded amount of memory, so this solution isn't practical because there is no easy way to "garbage collect" no-longer-used mask bits from nodes that drop off the graph. It's of course possible, but it's the traditional GC problem I'd like to avoid if possible.
Side note: depending on the size and depth of the graph, this might not actually perform well either. Even though bitwise operations themselves are relatively fast, doing them on a BigInt of 100,000 bits for each node at the top of the graph is not. So completely rethinking my approach is also welcome.
Ultimately trading memory for CPU is the usual give and take, but I'm wondering if perhaps I'm missing another approach that strikes a better balance or requires significantly less memory?
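For comparison, here is the same grouping done with plain hash sets instead of one bit per node, sketched in Python. Memory is proportional to the reachable sets actually stored, and unused nodes can be garbage-collected normally; whether that beats the masks depends on how dense the graph is.

```python
# Same grouping as the bitmask version, with hash sets of node names.
class Node:
    def __init__(self, name):
        self.name = name
        self.reach = {name}      # this node plus its transitive dependents
        self.groups = []         # list of (reach_set, [direct dependents])

    def add_dependent(self, dep):
        self.reach |= dep.reach
        for reach, values in self.groups:
            if reach & dep.reach:        # shares a transitive dependent
                reach |= dep.reach
                values.append(dep)
                return
        # no overlap with any existing group: start a new one
        self.groups.append((set(dep.reach), [dep]))

a, b, c, d, e, f = (Node(n) for n in "abcdef")
e.add_dependent(f)
a.add_dependent(f)
a.add_dependent(e)
c.add_dependent(d)
b.add_dependent(d)
a.add_dependent(c)
a.add_dependent(b)
print([[n.name for n in values] for _, values in a.groups])
```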
There may be other unique considerations I haven't thought of that could be used
School me!

I did understand the original proof given in CLRS, but in the given corollary why does the cut (VC, V − VC) respect A?
Corollary. Let A be a subset of E that is included in some MST for G, and let C = (VC, EC) be a connected component (tree) in the forest GA = (V, A). If (u, v) is a light edge connecting C to some other component in GA, then (u, v) is safe for A. Proof. The cut (VC, V − VC) respects A (A defines the components of GA), and (u, v) is a light edge for this cut. Therefore, (u, v) is safe for A.

Operator error and edge insertion problem in an unweighted, undirected graph
I've got several problems with my adjacency graph in C++. I fixed some of the main errors, but still can't run and test the program. I'm not sure whether the newEdge() method works properly, whether the vector of vectors is created correctly, and, most importantly, how to display the edges in the graph.
Here is the code; I marked the lines with errors with "!":
#include <iostream>
#include <algorithm>
#include <fstream>
#include <vector>

using namespace std;

struct Edge
{
    int begin;
    int end;
};

class Graph
{
private:
    int numOfNodes;
    vector<vector<Edge>> baseVec;

public:
    Graph(int numOfNodes)
    {
        baseVec.resize(baseVec.size() + numOfNodes);
    }

    void newEdge(Edge edge)
    {
        if (edge.begin >= numOfNodes - 1 || edge.end >= numOfNodes - 1 ||
            edge.begin < 0 || edge.end < 0)
        {
            cout << "Invalid edge!\n";
        }
        baseVec[edge.begin].emplace_back(edge.end);   // !!
        baseVec[edge.end].emplace_back(edge.begin);   // !!
    }

    void display()
    {
        for (int i = 0; i < baseVec.size(); i++)
        {
            cout << "\n Adjacency list of vertex " << i << "\n head ";   // !!!
            for (int j = 0; j < baseVec[i].size(); j++)                  // !!!
            {
                cout << baseVec[i][j] << " ";   // !!!!!!!
                cout << endl;
            }
        }
    }
};

std::ostream &operator<<(std::ostream &os, Edge const &m)
{
    return os << m.begin << m.end;
}

int main()
{
    int vertex, numberOfEdges, begin, end;
    cout << "Enter number of nodes: ";
    cin >> vertex;
    numberOfEdges = vertex * (vertex - 1);
    Edge edge;
    Graph g1(vertex);
    for (int i = 0; i < numberOfEdges; i++)
    {
        cout << "Enter edge ex.1 2 (-1 -1 to exit): \n";
        cin >> edge.begin >> edge.end;
        if ((begin == -1) && (end == -1))
        {
            break;
        }
        g1.newEdge(edge);
    }
    g1.display();
    return 0;
}
I overloaded the << operator, not sure if it's right and the two errors I have in Visual Studio are:
'<': signed/unsigned mismatch
binary '<<': no operator found which takes a right-hand operand of type '_Ty' (or there is no acceptable conversion)
This is a new version of my prior question.

Segmentation fault in CGAL when intersecting polygons
I'm writing code where I need to calculate the overlapping area of two polygons very frequently (one is always a square, the other a simple but in general non-convex polygon). However, using CGAL for this, I occasionally encounter segmentation faults. The following code provides a minimal example. Note that as soon as one of the point coordinates is moved by 0.001, everything works fine.
#include <CGAL/Cartesian.h>
#include <CGAL/Polygon_2.h>
#include <CGAL/Polygon_with_holes_2.h>
#include <CGAL/Boolean_set_operations_2.h>
#include <CGAL/Polygon_2_algorithms.h>

typedef CGAL::Cartesian<double> K;
typedef K::Point_2 Point;
typedef CGAL::Polygon_2<K> Polygon;
typedef CGAL::Polygon_with_holes_2<K> PolygonH;

int main()
{
    // Rectangle
    Point pointsA[] = { Point(0.4,0.35), Point(0.4,0.4),
                        Point(0.45,0.4), Point(0.5,0.35) };
    // Triangle
    Point pointsB[] = { Point(0.428,0.412), Point(0.387,0.389),
                        Point(0.359,0.393) };

    // Convert to CGAL polygon
    Polygon polyA(pointsA, pointsA + 4);
    Polygon polyB(pointsB, pointsB + 3);

    // Intersection
    std::list<PolygonH> polyAB;
    CGAL::intersection(polyA, polyB, std::back_inserter(polyAB));
}

Count unique quadrangles
Given a planar undirected graph with n points marked by integer numbers [1, 2, ..., n].
The task is to find all the unique quadrangles, by "unique", we mean: if two quadrangles have all the four points be the same but only the relative order is different, then the two are treated as the "same" quadrangle. For example, [1,2,3,4] and [1,3,2,4] are the same quadrangle.
Input: The graph can be stored in whatever format you prefer. Here we use an adjacency matrix (for an undirected graph, each physical edge is given once in the description below). The first two numbers in the first line are the vertex count and the edge count respectively; each following line gives one edge.
Output: An M-by-4 matrix or list of arrays, where M is the final count of unique quadrangles found.
In the following undirected complete graph of five points:
5 10
1 4
1 2
1 3
1 5
2 3
2 4
2 5
3 4
3 5
4 5
There are only five unique quadrangles (ignore the relative order of the vertex sequence):
1 2 3 4
1 2 3 5
1 2 4 5
1 3 4 5
2 3 4 5
I don't have a perfect solution now.
The following MATLAB solution can find every unique quadrangle for Case 1, but fails on Case 2, i.e. no quadrangle is found.
%% Count Quadrangles
clc;
v = vertex(:,1);
t = vertex(:,2);
G = zeros(max(max(v), max(t)));
n = length(G);

% For multi-edge graph, build the matrix for the graph:
for i = 1:length(v)
    G(v(i), t(i)) = G(v(i), t(i)) + 1;
    G(t(i), v(i)) = G(v(i), t(i));
end
issymmetric(G)
max(max(G))

% For single-edge graph, build the matrix for the graph:
% G(sub2ind(size(G), v, t)) = 1;
% G(sub2ind(size(G), t, v)) = 1;  % fill the symmetric position

tic
quad_cnt = 0;
% G_ = graph(G);
quad_points = [];

%% O(N^3)
for i = 1:n
    for j = i+1:n
        if (j==i || G(i,j)==0)
            continue;
        end
        for k = j+1:n
            if (k==i || k==j || (G(k,i)==0 && G(k,j)==0))
                continue;
            end
            for p = k+1:n
                if (p==i || p==j || p==k || G(p,i)==0 || G(p,k)==0)
                    continue;
                end
                % otherwise, a quadrangle is found
                quad_cnt = quad_cnt + 1;
                % save the vertices
                quad_points(quad_cnt,:) = [i,j,k,p];
            end
        end
    end
end
toc  % 0.1571 sec
quad_cnt
% output each quadrangle:
quad_points

%% O(deg*(V^2))
Test cases. Edges are given by vertex indices (note: starting from "1", not "0"):
Case1: Input:
5 10
1 4
1 2
1 3
1 5
2 3
2 4
2 5
3 4
3 5
4 5
Output:
1 2 3 4
1 2 3 5
1 2 4 5
1 3 4 5
2 3 4 5
Case2: Input:
8 8
1 3
2 3
1 4
2 4
1 8
2 5
3 6
4 7
Output:
1 2 3 4
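If a "quadrangle" means four vertices that can be joined into a closed 4-cycle (an assumption, but it matches both expected outputs: C(5,4) = 5 vertex sets in the complete graph, and {1,2,3,4} via the cycle 1-3-2-4-1 in Case 2), a brute-force check could look like this sketch in Python:

```python
# Enumerate 4-vertex subsets and keep those that close into a 4-cycle.
from itertools import combinations, permutations

def quadrangles(n, edges):
    adj = [[False] * (n + 1) for _ in range(n + 1)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    result = []
    for quad in combinations(range(1, n + 1), 4):
        i, *rest = quad
        # fix vertex i and try the orderings of the other three; a hit
        # means the four vertices close into a cycle i-p-q-r-i
        if any(adj[i][p] and adj[p][q] and adj[q][r] and adj[r][i]
               for p, q, r in permutations(rest)):
            result.append(list(quad))
    return result

case2 = [(1,3),(2,3),(1,4),(2,4),(1,8),(2,5),(3,6),(4,7)]
print(quadrangles(8, case2))   # [[1, 2, 3, 4]]
```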

Find Tangent Points of Circle and Two Lines in First Quadrant
I need to define explicit expressions to find the points (x1,y1) and (x2,y2), which are the two tangent points of a circle with radius r (known) and two lines (equations known). The center of the circle (x0,y0) is not known and not needed. See picture below.
In my case, I have the following conditions:
- problem in first quadrant: x>0, y>0
- line y=m1*x+b1 with m1<=0, b1>=0
- line y=m2*x+b2 with m2<m1, b2>b1
- circle centre above y=m1*x+b1, so y0>y1
- circle centre at r.h.s. of y=m2*x+b2, so x0>x2
- circle tangent to line y=m1*x+b1, so (y1-y0)/(x1-x0) = -1/m1
- circle tangent to line y=m2*x+b2, so (y2-y0)/(x2-x0) = -1/m2
I computed the following:
x1, y1, x2, y2 = var('x1, y1, x2, y2')      # tangent points
m1, b1, m2, b2 = var('m1, b1, m2, b2')      # lines' eqns
x0, y0, r = var('x0, y0, r')                # circle's eqn

eq1 = (x1 - x0)^2 + (y1 - y0)^2 - r^2 == 0
eq2 = (x2 - x0)^2 + (y2 - y0)^2 - r^2 == 0
eq3 = y1 - m1*x1 - b1 == 0
eq4 = y2 - m2*x2 - b2 == 0
eq5 = (y1 - y0)/(x1 - x0) == -1/m1
eq6 = (y2 - y0)/(x2 - x0) == -1/m2

# unknown: x0, y0, x1, y1, x2, y2
# known: m1, b1, m2, b2, r
solve([eq1, eq2, eq3, eq4, eq5, eq6,
       x1>0, y1>0, x2>0, y2>0,
       m1<=0, b1>=0, m2<m1, b2>b1,
       x0>x2, y0>y1, r>0],
      x0, y0, x1, y1, x2, y2)
Why is this not enough to define the problem?
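One guess: symbolic solvers tend to struggle when inequality constraints are mixed into a nonlinear system; the six equations alone admit several circles (one on each side of each line), and the inequalities are only there to pick one. A direct construction avoids the solver entirely. Under the stated conditions, the centre lies above both lines (with m2 < 0, the "right of line 2" side is also its upper side; this sign choice is an assumption worth checking against the picture), so the centre is the intersection of the two lines each shifted by r along the upward normal, and the tangent points are the feet of the perpendiculars:

```python
# Direct construction of the tangent points, assuming the circle lies on
# the +r (upper) side of both lines.
import math

def tangent_points(m1, b1, m2, b2, r):
    # shift each line up by r measured along its unit normal:
    # y = m*x + b + r*sqrt(1 + m^2) is at distance r from y = m*x + b
    c1 = b1 + r * math.sqrt(1 + m1 * m1)
    c2 = b2 + r * math.sqrt(1 + m2 * m2)
    # centre = intersection of the two shifted lines
    x0 = (c2 - c1) / (m1 - m2)
    y0 = m1 * x0 + c1

    def foot(m, b):
        # foot of the perpendicular from (x0, y0) onto y = m*x + b
        x = (x0 + m * (y0 - b)) / (1 + m * m)
        return (x, m * x + b)

    return foot(m1, b1), foot(m2, b2)
```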