What is the name of this graph algorithm?
I need to apply the following algorithm in one of my projects. I know what it does, algorithm-wise, but I can't seem to find it anywhere. Is there a name for this algorithm?
Pseudocode:
input: G = (V, E): planar graph (Vertices, Edges)
output: W: a subset of V
function someFunc(G)
    if V.length == 0 then
        return []; // array
    else
        v = some vertex of V;
        N = []; // array
        // for every edge
        for ((a, b) in E or (b, a) in E) do
            // if the edge connects to the vertex,
            // add the other vertex end to the N array
            if a == v then
                N.push(b);
            end
        end
        // \ notation means: left-hand set minus the values in the right-hand set (set difference)
        // e.g. [1, 2, 3] \ [2, 3, 4] = [1]
        W1 = [v].union(someFunc(inducedSubgraph(V \ [v])));
        W2 = N.union(someFunc(inducedSubgraph(V \ [v] \ N)));
        return W1.length < W2.length ? W1 : W2;
    end
end function
The inducedSubgraph function is an external function which removes the given vertices (+ adjacent edges) from the graph.
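For experimenting with the recursion, here is a direct Java transcription of the pseudocode above. The adjacency-map representation and the class/method names are my own choices, not part of the original:

```java
import java.util.*;

public class SomeFunc {
    // G as an adjacency map: key set = V, g.get(v) = neighbours of v.
    static Set<Integer> someFunc(Map<Integer, Set<Integer>> g) {
        if (g.isEmpty()) return new HashSet<>();
        int v = g.keySet().iterator().next();      // "some vertex of V"
        Set<Integer> n = new HashSet<>(g.get(v));  // N = neighbours of v

        // W1 = [v] ∪ someFunc(induced subgraph on V \ [v])
        Set<Integer> w1 = someFunc(inducedSubgraph(g, Set.of(v)));
        w1.add(v);

        // W2 = N ∪ someFunc(induced subgraph on V \ [v] \ N)
        Set<Integer> drop = new HashSet<>(n);
        drop.add(v);
        Set<Integer> w2 = someFunc(inducedSubgraph(g, drop));
        w2.addAll(n);

        return w1.size() < w2.size() ? w1 : w2;
    }

    // Copy of g with the given vertices (and their incident edges) removed.
    static Map<Integer, Set<Integer>> inducedSubgraph(Map<Integer, Set<Integer>> g, Set<Integer> drop) {
        Map<Integer, Set<Integer>> h = new HashMap<>();
        for (Map.Entry<Integer, Set<Integer>> e : g.entrySet()) {
            if (drop.contains(e.getKey())) continue;
            Set<Integer> nb = new HashSet<>(e.getValue());
            nb.removeAll(drop);
            h.put(e.getKey(), nb);
        }
        return h;
    }
}
```

Note the branching structure: either v goes into the result and is deleted, or all of N goes in and v's whole closed neighbourhood is deleted; the smaller of the two recursive answers wins.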
See also questions close to this topic

Minimal coin change (limited supply) with better time complexity discussion
The problem asks the user to return a minimal list of coins as change. For example, given the coins [.01, .10, .25] (each with a supply of 10) and a target of .40, it should return [.10, .10, .10, .10] and not [.25, .10, .01, .01, .01, .01, .01].
The greedy approach doesn't work; this is a dynamic programming problem. The solution below is O(2^n). How can we optimize it to O(n^2) or better with a bottom-up approach?
class CoinChange {
    public static List<Double> findMinRefundCombination(List<Double> inputCoins, double refundToMake) {
        List<Double> minCoins = new ArrayList<>();
        List<Double> coinsAccumulatedSoFar = new ArrayList<>();
        double refundSoFar = 0.0d;
        findMinRefundCombinationHelper(inputCoins, refundToMake, minCoins, coinsAccumulatedSoFar, 0, refundSoFar);
        System.out.println(minCoins.size());
        return minCoins;
    }

    public static void findMinRefundCombinationHelper(List<Double> inputCoins, double refundToMake, List<Double> minCoins, List<Double> coinsAccumulatedSoFar, int curIndex, double refundSoFar) {
        if (refundSoFar > refundToMake || curIndex == inputCoins.size()) {
            return;
        }
        if (refundSoFar == refundToMake) {
            if (minCoins.isEmpty()) {
                for (Double coin : coinsAccumulatedSoFar) minCoins.add(coin);
            } else {
                if (coinsAccumulatedSoFar.size() < minCoins.size()) {
                    minCoins.clear();
                    for (Double coin : coinsAccumulatedSoFar) minCoins.add(coin);
                }
            }
        }
        coinsAccumulatedSoFar.add(inputCoins.get(curIndex));
        // findMinRefundCombinationHelper(inputCoins, refundToMake, minCoins, coinsAccumulatedSoFar, curIndex, refundSoFar + inputCoins.get(curIndex));
        findMinRefundCombinationHelper(inputCoins, refundToMake, minCoins, coinsAccumulatedSoFar, curIndex + 1, refundSoFar + inputCoins.get(curIndex));
        coinsAccumulatedSoFar.remove(coinsAccumulatedSoFar.size() - 1);
        findMinRefundCombinationHelper(inputCoins, refundToMake, minCoins, coinsAccumulatedSoFar, curIndex + 1, refundSoFar);
    }

    public static void main(String[] args) {
        List<Double> inputCoins = new ArrayList<>();
        inputCoins.add(.01);
        // inputCoins.add();
        inputCoins.add(.10);
        inputCoins.add(.25);
        inputCoins.add(0.50);
        inputCoins.add(1.0);
        double refundToMake = 0.40;
        List<Double> minCoins = findMinRefundCombination(inputCoins, refundToMake);
        for (Double coin : minCoins) System.out.print(coin + " ");
        System.out.println();
    }
}
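One bottom-up formulation (a sketch, not the only one): work in integer cents to avoid floating-point equality problems, and treat the limited supply as a bounded knapsack. The runtime is pseudo-polynomial, O(#coins × supply × target), rather than O(2^n):

```java
import java.util.*;

public class BoundedCoinChange {
    // Bottom-up DP in integer cents. dp[a] = fewest coins summing to a using
    // the coins processed so far; used[i][a] = how many copies of coin i the
    // best solution for amount a takes at stage i (kept for reconstruction).
    static List<Integer> minChange(int[] coins, int supply, int target) {
        final int INF = Integer.MAX_VALUE / 2;
        int n = coins.length;
        int[] dp = new int[target + 1];
        Arrays.fill(dp, INF);
        dp[0] = 0;
        int[][] used = new int[n][target + 1];
        for (int i = 0; i < n; i++) {
            int[] next = new int[target + 1];
            for (int a = 0; a <= target; a++) {
                next[a] = dp[a]; // take zero copies of coin i
                for (int k = 1; k <= supply && k * coins[i] <= a; k++) {
                    if (dp[a - k * coins[i]] + k < next[a]) {
                        next[a] = dp[a - k * coins[i]] + k;
                        used[i][a] = k;
                    }
                }
            }
            dp = next;
        }
        if (dp[target] >= INF) return null; // target not reachable
        // Walk the used[][] table backwards to recover the actual coins.
        List<Integer> result = new ArrayList<>();
        for (int i = n - 1, a = target; i >= 0; i--) {
            int k = used[i][a];
            for (int j = 0; j < k; j++) result.add(coins[i]);
            a -= k * coins[i];
        }
        return result;
    }
}
```

For the example in the question, `minChange(new int[]{1, 10, 25}, 10, 40)` yields four 10-cent coins.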

Runtime of the single-source shortest paths algorithm for directed acyclic graphs
Here is the Algorithm:
Topologically sort the vertices of G
Initialize-Single-Source(G, s)
for each vertex u, taken in topologically sorted order
    for each vertex v in G.Adjacent[u]
        Relax(u, v, w)
- Topological sort has runtime O(V + E), where V is the number of vertices and E is the number of edges
- Initialize-Single-Source(G, s) has runtime O(V)
- The main question is the double for loop: its running time is O(V + E). But I cannot understand why it's not O(V*E). For every vertex we go through its edges, and normally one nested loop (two for loops in total) has complexity O(N^2), but in this case that is not true.
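The resolution is that the inner loop does not run E times per vertex: for a given u it runs deg(u) times, and the out-degrees summed over all vertices equal E. So the two loops together do V + E units of work, not V·E. A tiny counting sketch (my own illustration, with the relaxation replaced by a counter):

```java
import java.util.*;

public class DagRelaxCount {
    // The "double for loop" of DAG shortest paths, with a counter instead of
    // Relax(u, v, w): the inner body runs deg(u) times per vertex, and the
    // out-degrees sum to E, so the total work is O(V + E), not O(V * E).
    static int countRelaxations(List<List<Integer>> adj) {
        int relaxations = 0;
        for (int u = 0; u < adj.size(); u++) {   // V outer iterations
            for (int v : adj.get(u)) {           // deg(u) inner iterations
                relaxations++;                   // one Relax(u, v, w) call
            }
        }
        return relaxations;
    }
}
```

The O(N^2) intuition only applies when the inner loop's bound is independent of the outer index; here the inner bounds are data-dependent and sum to E.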

Minimize service requests
I have a WCF service from which I can get the distance in meters from one point to another (latitude and longitude) via the contract method:
public double GetDistance(double originLat, double originLng, double destLat, double destLng)
One of the points is a constant point; the other is one of several locations I need to extract from a database according to other information I receive. The end goal is to get the 5 closest locations to that constant point.
Imagine that using the WCF service costs money per request. With the most direct approach, I would get all the locations from the database and then make one service request per location. Is there a way to do better, for example by filtering the locations in the database so that fewer requests to the service are needed?
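One common approach, assuming straight-line distance is an acceptable proxy for whatever metric `GetDistance` uses: rank all candidates locally with the haversine formula (or a bounding box directly in SQL), then call the paid service only for the top few candidates plus a safety margin. A haversine sketch (the function name is my own):

```java
public class GeoPrefilter {
    // Great-circle (haversine) distance in meters. This is straight-line
    // distance, not necessarily what the WCF service returns, so it is only
    // a heuristic for ranking candidates before the paid calls.
    static double haversineMeters(double lat1, double lng1, double lat2, double lng2) {
        final double R = 6371000.0; // mean Earth radius in meters
        double dLat = Math.toRadians(lat2 - lat1);
        double dLng = Math.toRadians(lng2 - lng1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.sin(dLng / 2) * Math.sin(dLng / 2);
        return 2 * R * Math.asin(Math.sqrt(a));
    }
}
```

If the service returns road distance rather than straight-line distance, keep a margin: prefilter to, say, the 15 nearest by haversine, then let the service pick the final 5.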

My C++ code for sorting objects of a class using quicksort shows wrong output
There is a struct named hashmap with two member variables, and the sorting is to be done by comparing the key of the objects; in case of equality, the value has to be compared.
I suspect that the std::swap function is causing the error. Can the swap function be used for objects of a class?
int n;
struct hashmap {
    int key;
    int value;
};
vector<hashmap> a; // Main vector

int sort_quick(int part, int high) // The basic sort function which takes the end element as pivot
{
    int i = 1, j;
    hashmap pivot = a[high];
    for (j = part; j < high; j++) {
        if (a[i].key < pivot.key) {
            i++;
            swap(a[part + i], a[j]);
        }
        else if (a[i].key == pivot.key && a[i].value < pivot.value) { // if the key is equal then compare the value
            i++;
            swap(a[part + i], a[j]);
        }
    }
    swap(a[part + i + 1], a[high]);
    return (part + high) / 2; // This will be assigned as the index of the pivot for the divided part
}

void quick_sort(int low, int high)
{
    if (low < high) {
        int new_pivot_index = sort_quick(low, high);
        quick_sort(low, new_pivot_index - 1);
        quick_sort(new_pivot_index + 1, high);
    }
}

int main()
{
    cout << "Enter the Number of Key-Value pairs : ";
    cin >> n;
    enterdetails(n);
    quick_sort(0, n - 1);
    display();
    return 0;
}
The output is not even sorted; it just shows the initial values entered by the user.
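std::swap works fine on objects of a class; that is not the bug. The likelier culprits are that the comparisons use a[i] (a counter starting at 1) instead of the scanned element a[j], and that sort_quick returns (part+high)/2 instead of the pivot's actual final index. For reference, here is a correct Lomuto partition with the same key-then-value ordering, sketched in Java rather than C++ (type and method names are my own):

```java
public class PairQuickSort {
    record Pair(int key, int value) {}

    // Lomuto partition: compare the scanned element a[j] (not a separate
    // counter) against the pivot; order by key, break ties by value.
    static int partition(Pair[] a, int low, int high) {
        Pair pivot = a[high];
        int i = low - 1; // boundary of the "less than pivot" region
        for (int j = low; j < high; j++) {
            if (a[j].key() < pivot.key()
                || (a[j].key() == pivot.key() && a[j].value() < pivot.value())) {
                i++;
                Pair t = a[i]; a[i] = a[j]; a[j] = t; // swapping objects is fine
            }
        }
        Pair t = a[i + 1]; a[i + 1] = a[high]; a[high] = t;
        return i + 1; // the pivot's real index, not (low + high) / 2
    }

    static void quickSort(Pair[] a, int low, int high) {
        if (low < high) {
            int p = partition(a, low, high);
            quickSort(a, low, p - 1);
            quickSort(a, p + 1, high);
        }
    }
}
```

The same three fixes (compare a[j], track the boundary from low - 1, return the pivot's index) translate directly back to the C++ version.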

Recursive function returning final result before all recursive calls are popped off the call stack
I am confused about why this DFS algorithm I've written is not visiting the final vertex in my graph. Here is the code:
Graph/Vertex Class
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedList;

public class Graph {
    private HashMap<Vertex, ArrayList<Vertex>> adjacencyList;

    public Graph() {
        this.adjacencyList = new HashMap<>();
    }

    public void addVertex(Vertex vertex) {
        if (!adjacencyList.containsKey(vertex)) {
            adjacencyList.put(vertex, new ArrayList<>());
        }
    }

    public void addEdge(Vertex vertex1, Vertex vertex2) {
        this.adjacencyList.get(vertex1).add(vertex2);
        this.adjacencyList.get(vertex2).add(vertex1);
    }

    public HashMap<Vertex, ArrayList<Vertex>> getAdjacencyList() {
        return adjacencyList;
    }

    public void printAdjacencyList(Vertex vertex) {
        System.out.print(vertex.getValue() + ": ");
        for (Vertex v : adjacencyList.get(vertex)) {
            System.out.print(v.getValue());
        }
        System.out.println();
    }

    public ArrayList<Vertex> DFS_Recursive(Vertex start) {
        ArrayList<Vertex> result = new ArrayList<>();
        HashMap<Vertex, Boolean> visited = new HashMap<>();
        for (Vertex v : adjacencyList.keySet()) {
            visited.put(v, false);
        }
        return DFS_Recursive_Utility(start, result, visited);
    }

    private ArrayList<Vertex> DFS_Recursive_Utility(Vertex vertex, ArrayList<Vertex> results, HashMap<Vertex, Boolean> visited) {
        if (vertex == null) {
            return null;
        }
        visited.put(vertex, true);
        results.add(vertex);
        for (Vertex v : adjacencyList.get(vertex)) {
            if (!visited.get(v)) {
                return DFS_Recursive_Utility(v, results, visited);
            }
        }
        return results;
    }
}

class Vertex<E> {
    private E value;

    public Vertex(E value) {
        this.value = value;
    }

    public E getValue() {
        return value;
    }
}
Main Class
public static void main(String[] args) {
    Vertex<String> a = new Vertex<>("A");
    Vertex<String> b = new Vertex<>("B");
    Vertex<String> c = new Vertex<>("C");
    Vertex<String> d = new Vertex<>("D");
    Vertex<String> e = new Vertex<>("E");
    Vertex<String> f = new Vertex<>("F");

    Graph graph = new Graph();
    graph.addVertex(a);
    graph.addVertex(b);
    graph.addVertex(c);
    graph.addVertex(d);
    graph.addVertex(e);
    graph.addVertex(f);

    graph.addEdge(a, b);
    graph.addEdge(a, c);
    graph.addEdge(b, d);
    graph.addEdge(c, e);
    graph.addEdge(d, e);
    graph.addEdge(d, f);
    graph.addEdge(e, f);

    System.out.println();
    for (Vertex v : graph.getAdjacencyList().keySet()) {
        graph.printAdjacencyList(v);
    }
    System.out.println();
    for (Vertex v : graph.DFS_Recursive(a)) {
        System.out.print(v.getValue() + " ");
    }
}
The result of calling DFS_Recursive() is:
A B D E C
I have stepped through with the IntelliJ debugger, and when the algorithm hits vertex C there are still calls on the stack that should check the remaining unvisited vertices in E's adjacency list. However, at that point it just returns the results ArrayList and the remaining recursive calls are ignored.
Any ideas on what's happening and how to fix it?
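The culprit is the `return` inside the neighbour loop of DFS_Recursive_Utility: the method returns after descending into the first unvisited neighbour and never looks at the remaining ones, so when the recursion bottoms out at C, every pending loop is abandoned. The fix is to recurse without returning. A minimal self-contained sketch of the corrected traversal (using plain strings instead of the Vertex class):

```java
import java.util.*;

public class DfsFix {
    static List<String> dfs(Map<String, List<String>> adj, String start) {
        List<String> order = new ArrayList<>();
        dfsUtil(adj, start, order, new HashSet<>());
        return order;
    }

    static void dfsUtil(Map<String, List<String>> adj, String u, List<String> order, Set<String> visited) {
        visited.add(u);
        order.add(u);
        for (String v : adj.get(u)) {
            if (!visited.contains(v)) {
                dfsUtil(adj, v, order, visited); // recurse, but do NOT return here
            }
        }
    }
}
```

With the graph from the question this visits all six vertices; the original code's `return DFS_Recursive_Utility(v, results, visited);` is what cut the search short at C.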

What Python library/libraries can I use for plots that follow Edward Tufte's style (axes, multiple ranges of values, etc.)? (Examples below)
I'm looking for libraries that can plot different types of graphs, most of which are relevant in ML/data science. Below are examples of the graphs I'd like to plot.
"Plot 1" follows Edward Tufte's style for axes and other elements, like the data ticks, etc.
In "Plot 2", there is no x-axis, just the labels and values for the different iterations/algorithms.
In "Plot 3", there are data ranges, and in particular the y-axis spans only from the minimum to the maximum value present in the data.
Can you please suggest Python library/libraries to plot such graphs?

Universal Sentence Encoder – RAM and CPU requirements
I am planning to deploy an application which uses the Universal Sentence Encoder (link) on AWS Elastic Beanstalk.
What would be an appropriate instance to deploy to (RAM and CPU)? Where can I get more info on how much RAM and CPU the Universal Sentence Encoder requires? I am guessing the TensorFlow graph within the encoder is fairly big and would probably require 2-4 GB of RAM. I am not looking for optimal performance specs; I am looking for the minimal specs necessary for the application to run.

Can someone explain how the pseudocode of Dijkstra's algorithm works?
I have to present greedy algorithms on Tuesday as part of my finals, but I am struggling to understand the pseudocode of Dijkstra's algorithm, which is part of my task.
Example from wikipedia:
function Dijkstra(Graph, source):
    dist[source] ← 0                          // Initialization
    create vertex set Q
    for each vertex v in Graph:
        if v ≠ source
            dist[v] ← INFINITY                // Unknown distance from source to v
            prev[v] ← UNDEFINED               // Predecessor of v
        Q.add_with_priority(v, dist[v])

    while Q is not empty:                     // The main loop
        u ← Q.extract_min()                   // Remove and return best vertex
        for each neighbor v of u:             // only v that are still in Q
            alt ← dist[u] + length(u, v)
            if alt < dist[v]
                dist[v] ← alt
                prev[v] ← u
                Q.decrease_priority(v, alt)

    return dist, prev
Example from TUM:
BEGIN
    d(v[1]) ← 0
    FOR i = 2,..,n DO
        d(v[i]) ← ∞, parent(v[i]) ← NULL
    WHILE queue ≠ ∅ DO
        u = queue.extractMin()
        FOR ALL (u,w) ∈ E DO
            dist ← d(u) + l(u,w)
            IF w ∈ queue AND d(w) > dist DO
                d(w) = dist, parent(w) = (u,w)
            ELSE IF parent(w) == NULL THEN
                d(w) = dist, parent(w) = (u,w)
                queue.insert(w, dist)
END
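It may help to see the same algorithm as runnable code. Here is a Java sketch: `java.util.PriorityQueue` has no decrease_priority operation, so this version uses the common workaround of inserting duplicate entries and skipping stale ones when they are popped:

```java
import java.util.*;

public class Dijkstra {
    // adj.get(u) holds {v, weight} pairs for each edge u -> v.
    static int[] dijkstra(List<List<int[]>> adj, int source) {
        int n = adj.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);   // dist[v] ← INFINITY
        dist[source] = 0;                       // dist[source] ← 0
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt(e -> e[1]));
        pq.add(new int[]{source, 0});
        while (!pq.isEmpty()) {                 // the main loop
            int[] top = pq.poll();              // extract_min
            int u = top[0];
            if (top[1] > dist[u]) continue;     // stale duplicate entry, skip
            for (int[] edge : adj.get(u)) {     // each neighbor v of u
                int v = edge[0];
                int alt = dist[u] + edge[1];    // alt ← dist[u] + length(u, v)
                if (alt < dist[v]) {
                    dist[v] = alt;              // dist[v] ← alt
                    pq.add(new int[]{v, alt});  // stands in for decrease_priority
                }
            }
        }
        return dist;
    }
}
```

The invariant both pseudocode versions rely on: with non-negative edge weights, whenever a vertex is extracted with the minimum tentative distance, that distance is already final, so each vertex needs to be settled only once.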

How can I reject repeated values in array input and prompt for re-input until the data is unique?
How do I enter data into an array so that, if a duplicate value is entered, the program rejects it and the user is prompted to re-input until a different value is given?
I need this for a project and I can't use any library functions such as Array.Contains, so I need a manual method, even if it's longer.
Thank You
For i = 1 To 3
    Console.WriteLine("Input num")
    num(i) = Console.ReadLine()
Next
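The manual check is just a linear scan over the part of the array already filled, retrying the prompt on a hit. A sketch in Java (the loop translates line-for-line to VB.NET; names are my own):

```java
import java.util.*;

public class UniqueInput {
    // Manual duplicate check over the first `count` filled slots;
    // no library helpers like Array.Contains needed.
    static boolean alreadyEntered(int[] nums, int count, int candidate) {
        for (int i = 0; i < count; i++) {
            if (nums[i] == candidate) return true;
        }
        return false;
    }

    // Keeps reading until n distinct values have been accepted.
    static int[] readUnique(Scanner in, int n) {
        int[] nums = new int[n];
        int count = 0;
        while (count < n) {
            System.out.print("Input num: ");
            int candidate = in.nextInt();
            if (alreadyEntered(nums, count, candidate)) {
                System.out.println("Duplicate value, try again.");
            } else {
                nums[count++] = candidate; // accept and move on
            }
        }
        return nums;
    }
}
```

The key detail is scanning only the `count` slots filled so far, so uninitialized entries (which default to 0) are never compared against the new input.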