What is the name of this graph algorithm?
I need to apply the following algorithm in one of my projects. I know what it does algorithm-wise, but I can't seem to find an application for this. Is there a name for this algorithm?
Pseudocode:
input: G = (V, E): planar graph (Vertices, Edges)
output: W: a subset of V
function someFunc(G)
    if V.length == 0 then
        return []; // empty array
    else
        v = some vertex of V;
        N = []; // array of neighbours
        // for every edge (stored in either orientation)
        for (a, b) in E or (b, a) in E do
            // if the edge is incident to v,
            // add the other endpoint to N
            if a == v then
                N.push(b);
            end
        end
        // the \ notation means set difference: the left-hand set
        // minus the values in the right-hand set,
        // e.g. [1, 2, 3] \ [2, 3, 4] = [1]
        W1 = [v].union(someFunc(inducedSubgraph(V \ [v])));
        W2 = N.union(someFunc(inducedSubgraph(V \ [v] \ N)));
        return W1.length < W2.length ? W1 : W2;
    end
end function
The inducedSubgraph function is an external function which removes the given vertices (+ adjacent edges) from the graph.
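For concreteness, here is a direct Python transcription of the pseudocode (the graph is passed as explicit vertex and edge lists; all names are mine):

```python
def some_func(vertices, edges):
    """Direct transcription of someFunc from the question."""
    if not vertices:
        return []
    v = vertices[0]                      # "some vertex of V"
    # neighbours of v (edges may be stored in either orientation)
    neighbours = [b for (a, b) in edges if a == v] + \
                 [a for (a, b) in edges if b == v]

    def induced(keep):
        """Subgraph induced by the vertices in `keep`."""
        keep_set = set(keep)
        vs = [u for u in vertices if u in keep_set]
        es = [(a, b) for (a, b) in edges if a in keep_set and b in keep_set]
        return vs, es

    rest = [u for u in vertices if u != v]                # V \ [v]
    w1 = [v] + some_func(*induced(rest))
    w2 = neighbours + some_func(*induced(
        [u for u in rest if u not in neighbours]))        # V \ [v] \ N
    return w1 if len(w1) < len(w2) else w2
```

For example, on the path 1-2-3 (edges (1, 2) and (2, 3)) it returns [2].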
See also questions close to this topic

How can I fix this quicksort partition in python? (dealing with Numpy arrays)
I've been working on this quicksort partition for several days and still can't fix it. I've tried debugging it with prints to see how partitions evolve when called by the main quicksort but still can't figure out how to fix it. Showing the doctest to give context, apologies if it isn't relevant.
It works on some examples but not on others, and I can't figure out why. Any insight and help is much appreciated.
def partition(s, cmp):
    """
    >>> import generate
    >>> import numpy
    >>> import element
    >>> def cmp(x, y):
    ...     if x == y:
    ...         return 0
    ...     elif x < y:
    ...         return -1
    ...     else:
    ...         return 1
    >>> t = numpy.array([element.Element(i) for i in [3, 8, 2, 2, 3, 7, 9, 1, 1, 7]])
    >>> p = {'left': 0, 'right': len(t)-1, 'data': t}
    >>> p1, p2 = partition(p, cmp)
    >>> p1['data'][p1['left']:p1['right']+1]
    array([8, 2, 2, 3, 1, 1], dtype=object)
    >>> p2['data'][p2['left']:p2['right']+1]
    array([9, 7, 7], dtype=object)
    """
    a = s["data"]          # whole array
    lp = s["left"] + 1     # left pointer
    rp = s["right"]        # right pointer
    pivot = a[lp - 1]      # pivot is the element at index lp-1, the start of the slice
    while lp <= rp:
        if cmp(a[lp], pivot) <= 0:
            # lp is already on the correct side since it's <= the pivot
            a[lp - 1] = a[lp]
            lp += 1        # moving towards the center
        else:
            a[lp], a[rp] = a[rp], a[lp]  # taking advantage of python's easy swap
            rp -= 1        # moving towards the center
    a[rp] = pivot          # placing the pivot at the end
    lslice = {"data": a, "left": s["left"], "right": rp - 1}   # < pivot
    rslice = {"data": a, "left": rp + 1, "right": s["right"]}  # > pivot
    return (lslice, rslice)
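Independent of the asker's in-place shifting variant, a minimal reference partition can help sanity-check expected behaviour. This is the Lomuto scheme, which is my choice for brevity and not the course's scheme:

```python
def lomuto_partition(a, lo, hi):
    """Partition a[lo..hi] in place around the pivot a[hi].

    Returns the pivot's final index p; afterwards every element left of p
    is <= a[p] and every element right of p is >= a[p].
    """
    pivot = a[hi]
    i = lo                       # boundary of the "<= pivot" region
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]    # drop the pivot between the two regions
    return i
```

Quicksort then recurses on a[lo..p-1] and a[p+1..hi], so a correct partition is checkable on its own before debugging the recursion.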

How can a maximum subarray algorithm fail when one omits one side or the other?
If the right side is omitted, what can the left side contain for it to fail? And vice versa: if the left side is omitted, what can the right side contain so that it fails?
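Reading the question in its usual divide-and-conquer context: the failure case is a maximum subarray that straddles the midpoint, which neither half on its own contains. A small illustration, with Kadane's algorithm standing in for "the correct answer":

```python
def max_subarray_sum(a):
    """Kadane's algorithm: maximum sum over all non-empty contiguous subarrays."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)        # either extend the run or start fresh at x
        best = max(best, cur)
    return best

# the best subarray [4, 5] straddles the midpoint,
# so neither half on its own can find it
a = [-10, 4, 5, -10]
```

Here max_subarray_sum(a) is 9, while the left half [-10, 4] alone gives 4 and the right half [5, -10] alone gives 5, which is why a divide-and-conquer solution must also consider the crossing case.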

Find the most "empty" place in an array
Let's say that I have the following numpy array:
foo = np.asarray([
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1],
    [1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
])
As you can see, it contains some 0s and 1s. I'm searching for the most "empty" place in this array, meaning the place with the highest density of 0s. I would like a function that returns the row and column index of the center of that place.
For the moment, what I tried to do is cut that array into four parts of the same size with a function like this:
def blockshaped(arr, nrows, ncols):
    """Return an array of shape (n, nrows, ncols)."""
    h, w = arr.shape
    arr = arr.reshape(h // nrows, nrows, -1, ncols)
    arr = arr.swapaxes(1, 2)
    arr = arr.reshape(-1, nrows, ncols)
    return arr
Then I compute the sum of each part, take the part where the sum is the lowest, cut it again into four, and so on... But I feel it's not the right thing to...
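As an alternative to recursive quartering (which can miss a dense-zero region that straddles a cut), one option is to slide a fixed k-by-k window over the array via 2-D prefix sums and return the centre of the window containing the most zeros. A sketch; the window size k is an assumption you would tune:

```python
import numpy as np

def emptiest_spot(a, k=3):
    """Return (row, col) of the centre of the k-by-k window with the most zeros."""
    # 2-D prefix sums with a zero border, so any window sum is four lookups
    ps = np.zeros((a.shape[0] + 1, a.shape[1] + 1), dtype=int)
    ps[1:, 1:] = np.cumsum(np.cumsum(a, axis=0), axis=1)
    # sum of every k-by-k window (fewest ones == most zeros)
    win = ps[k:, k:] - ps[:-k, k:] - ps[k:, :-k] + ps[:-k, :-k]
    r, c = np.unravel_index(np.argmin(win), win.shape)
    return r + k // 2, c + k // 2
```

The whole scan is O(rows * cols) regardless of k, so you can cheaply try several window sizes on foo and compare the results.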

Solve the recurrence without using the Master Theorem. T(n) = 2T(n/2) + 2
I have the recurrence given above and would like to solve it without using the Master Theorem. The question says you can assume n is a power of 2.
I have attempted to solve this and got 2^k T(n/2^k) + 2, but I am unsure about this.
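Carrying the unrolling one step further: the missing piece is that the additive +2 term accumulates geometrically over the k levels, not just once.

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + 2 \\
     &= 4\,T(n/4) + 2 \cdot 2 + 2 \\
     &= 2^{k}\,T\!\left(n/2^{k}\right) + 2\left(2^{k} - 1\right).
\end{aligned}
```

Setting k = log2(n) (legal since n is a power of 2) and writing T(1) = c gives T(n) = c*n + 2(n - 1), i.e. Θ(n).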

understanding higher order functions and recursion
"Consider the given function merge(func1,func2,n)
A function grin_add S(t) gives:
S(1) = 1
S(2) = 4 + 1 + 4 = 9
S(3) = 9 + 4 + 1 + 4 + 9 = 27
S(4) = 16 + 9 + 4 + 1 + 4 + 9 + 16 = 59
S(5) = 25 + 16 + 9 + 4 + 1 + 4 + 9 + 16 + 25 = 109
Define grin_add S(t) in terms of merge by adding the correct implementation to the given code, without changing the return statement return combine(f, op, n).
Someone sent me the answer but I want to understand it myself, and not just understand it but the thought process that was used to get it.
(Greatly appreciate any advice. Not just with the code, but in how to approach programming problems and understand concepts in general. I find myself really having the issue of absolutely not knowing from where to start and how to approach the problem... feels like having to climb a tall smooth wall with nowhere to grip and it's seriously worrying me for timed exams.)
(On a side note, I'm bad at mathematical thinking so I really struggle to keep up during the classes...)
Sorry I know this question is awfully longwinded, it's my first time posting and I've barely been sleeping trying to keep up, and am oozing anxiety over this. I truly want to overcome and excel in this as a personal endeavour to build my resilience and determination, but this sure is disheartening. Thank you.
def merge(func1, func2, n):
    result = func1(0)
    for j in range(n):
        result = func2(result, func1(j))
    return result

def grin_add(t):
    def f(x):
        # ...
    def func2(x, y):
        # ...
    n = # ...
    return combine(f, op, n)

# Correct answer sent to me:
def grin_add(t):
    def f(x):
        x = x ** 2
        return x
    def func2(x, y):
        if y == 1:
            return 1
        else:
            result = x + 2 * y
            return result
    n = t + 1
    return combine(f, func2, n)

smiley_sum(5)
The expected output for add_grin(5) is 109; add_grin(4) gives 59.
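To see why the answer works, here is a runnable reconstruction with the names made consistent (I am assuming combine is the merge function from the question under another name):

```python
def combine(f, op, n):
    # fold: seed with f(0), then result = op(result, f(j)) for j = 0 .. n-1
    result = f(0)
    for j in range(n):
        result = op(result, f(j))
    return result

def grin_add(t):
    def f(x):
        return x ** 2          # the terms of the "grin" are squares

    def op(x, y):
        # when the middle term 1^2 arrives, restart the sum at 1;
        # every later square y is added twice, once per side of the grin
        if y == 1:
            return 1
        return x + 2 * y

    return combine(f, op, t + 1)
```

Tracing grin_add(2): the fold sees f(0)=0, f(1)=1, f(2)=4; the y==1 step resets the total to 1, and the last step adds 2*4, giving 9 = 4 + 1 + 4, matching S(2).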

Filter with treeview
I am experiencing problems using a treeview filter. I have written the following method:
var tree = [{
    id: "Arvore",
    text: "Arvore",
    children: [
        {
            id: "Folha_1",
            text: "Folha 1",
            children: [
                {
                    id: "Folha_1_1",
                    text: "Folha 1.1",
                    children: [
                        { id: "dd", text: "abc" }
                    ]
                }
            ]
        },
        { id: "Folha_2", text: "Folha 2" },
        { id: "Folha_3", text: "Folha 3" },
        { id: "Folha_4", text: "Folha 4" },
        { id: "Folha_5", text: "Folha 5" }
    ]
}];

filterData: function filterData(data, value) {
    return data.filter(function (item) {
        if (item.children)
            item.children = filterData(item.children, value);
        return item.text.indexOf(value) > -1;
    });
},
But when I enter text, for example Folha 1.1, I want it to return Arvore > Folha 1 > Folha 1.1; instead, the function returns only the first child. What can I do?
Longest path in unweighted undirected graph starting and finishing in the same vertex
I have a problem in which I need to find the longest path in an unweighted undirected graph. Starting from a given vertex, I need to visit as many vertices as possible and finish at the same one, without visiting any of them more than once.
Most of the algorithms I found were for special cases (acyclic, directed, etc.). One idea would be to look for a Hamiltonian cycle in every subset of the vertices (the subsets can be generated with backtracking). But I guess there must be a far better algorithm.
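As a baseline before reaching for anything cleverer: plain DFS backtracking from the start vertex is exact on any undirected graph, at exponential worst-case cost. A sketch (adjacency as a dict of sets; names mine):

```python
def longest_cycle(adj, start):
    """Number of vertices on the longest simple cycle through `start` (0 if none)."""
    best = 0

    def dfs(v, visited, depth):
        nonlocal best
        for u in adj[v]:
            if u == start and depth >= 3:
                best = max(best, depth)       # closed a cycle of `depth` vertices
            elif u not in visited:
                visited.add(u)
                dfs(u, visited, depth + 1)
                visited.remove(u)             # backtrack

    dfs(start, {start}, 1)
    return best
```

The depth >= 3 guard rules out "cycles" that just reuse a single edge; on a 4-cycle 1-2-3-4-1 the function returns 4.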

Search JPA complex object for nested object based on its value
Let's assume we have a complex JPA relation, a fraction of which looks like this:
@MappedSuperclass
public class DiffEntity {
    private String diffId;

    public DiffEntity() {
        this.diffId = UUID.randomUUID().toString();
    }
    //...
}

@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public class ParentEntity extends DiffEntity {
    @Id
    @GeneratedValue
    private long id;

    @Column
    private String name;
    //...
}

@Entity
public class Construct extends ParentEntity {
    @Column
    private String variable;

    @OneToMany(mappedBy = "construct", cascade = CascadeType.ALL)
    private List<Partconstruct> partconstructs;
    //...
}

@Entity
public class Partconstruct extends ParentEntity {
    @OneToMany(mappedBy = "partconstruct", cascade = CascadeType.ALL)
    private List<Field> fields;

    @OneToMany(mappedBy = "partconstruct", cascade = CascadeType.ALL)
    private List<Hardparameter> hardparameters;

    @ManyToOne
    @JoinColumn(name = "construct_id")
    private Construct construct;
    //...
}

@Entity
public class Field extends ParentEntity {
    @Column
    private int fieldSize;

    @ManyToOne
    @JoinColumn(name = "partconstruct_id")
    private Partconstruct partconstruct;
    //...
}

@Entity
public class Hardparameter extends ParentEntity {
    @Column
    private String value;

    @ManyToOne
    @JoinColumn(name = "partConstruct_Id")
    private Partconstruct partConstruct;
    //...
}
We are concerned with Construct type objects. A Construct is deeply cloned and persisted, with all the nested objects on its object graph being cloned too and getting a new id (primary key). On every clone the diffId (from DiffEntity) stays the same; it serves to correlate objects for a diffing feature.
How would it be possible to search for and get a reference to a specific DiffEntity, given that we have:
- a reference to the Construct instance
- the type of the nested object
- the diffId we are after
I have tried different versions of object-graph traversal with reflection, which works for a Construct object that is small in size, but once it becomes too big performance is very slow. Is there any magic on the entity manager itself to achieve that?

Create a surface plot without actually displaying it in Matplotlib
I would like to create a surface plot without actually displaying it. I just want to export the graph to a PNG. Here is the relevant code:
import numpy
from matplotlib import pyplot, cm
from mpl_toolkits.mplot3d import Axes3D
from pylab import figure, axes, pie, title, show

fig = pyplot.figure(figsize=(11, 7), dpi=100)
ax = fig.gca(projection='3d')
ax.set_xlim(0, 2)
ax.set_ylim(0, 1)
ax.view_init(30, 225)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
X, Y = numpy.meshgrid(x, y)
surf = ax.plot_surface(X, Y, p[:], rstride=1, cstride=1, cmap=cm.viridis,
                       linewidth=0, antialiased=False)
pyplot.pause(.001)
pyplot.savefig("images/blah" + str(counter) + ".png")
Note that the variables are all fine and there is no bug (it's a lot of math, so I cut it for ease of reading). I just can't figure out how to export to a PNG without displaying the figure itself.
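A minimal sketch of the usual approach: select the non-interactive Agg backend before pyplot is imported and drop the pause call, so savefig writes the PNG without a window ever appearing. The data below is dummy data standing in for the asker's x, y, p and counter:

```python
import matplotlib
matplotlib.use("Agg")            # non-interactive backend: nothing is displayed
import numpy
from matplotlib import pyplot, cm
from mpl_toolkits.mplot3d import Axes3D  # noqa: registers the 3d projection

# dummy data in place of the asker's variables
x = numpy.linspace(0, 2, 30)
y = numpy.linspace(0, 1, 30)
X, Y = numpy.meshgrid(x, y)
p = numpy.sin(X) * numpy.cos(Y)

fig = pyplot.figure(figsize=(11, 7), dpi=100)
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(X, Y, p, cmap=cm.viridis, linewidth=0, antialiased=False)
fig.savefig("blah.png")          # no pyplot.pause / pyplot.show needed
pyplot.close(fig)
```

The key point is that matplotlib.use("Agg") must run before the first pyplot import; after that, savefig works exactly as in the asker's loop.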

Design another algorithm that solves the problem in O(n log n) time. Prove its time complexity
I have the following algorithm and need to solve the problem in a more efficient O(n log n) time.
L := empty list
for i := 1 to n do
    L := MERGE(L, Li)   # Li is the i-th list
end for
return L
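Assuming the Li are sorted lists and MERGE is the usual linear-time merge, the standard improvement is to merge the lists pairwise in rounds (tournament style) rather than folding them into one ever-growing list; with m total elements that is O(m log n) instead of O(m * n). A sketch, with names of my choosing:

```python
def merge(a, b):
    """Linear merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def merge_all(lists):
    """Merge n sorted lists in O(log n) rounds of pairwise merges."""
    if not lists:
        return []
    while len(lists) > 1:
        nxt = [merge(lists[k], lists[k + 1])
               for k in range(0, len(lists) - 1, 2)]
        if len(lists) % 2:        # odd list out rides along to the next round
            nxt.append(lists[-1])
        lists = nxt
    return lists[0]
```

Each round halves the number of lists while touching every element at most once, which is where the log n factor in the proof comes from.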

Simulated annealing problem. What would a representation be in this case?
I find this to be a very interesting problem, but the details seem pretty vague to me. What exactly would you consider a representation for instance?
Consider the following problem:
You need to load a lorry with products.
The maximum total weight of products that the lorry can stand is W. You have N products that can be loaded. If you feel it helps to answer the assignment questions, you can assume that each product has an identifier i ∈ {1,...,N}, or i ∈ {0,...,N-1}, depending on your preference.
Each product i has a weight w(i), and a profit p(i).
You would like to decide which products to load so as to maximize the total profit of loaded products without exceeding the maximum total weight of the lorry.
Task:
Design the following components of a simulated annealing algorithm to solve this problem:
1. Representation. Explain your representation.
2. Neighbourhood operator. Explain how your neighbourhood operator works, and why you consider it to be adequate.
3. Strategy to deal with the constraint. Explain how your strategy can deal with the constraint.
PS: The strategy to deal with constraints may have an overlap with the representation and neighbourhood operators. If that is the case, you still need to explain how these operators enable the strategy to deal with the constraint in item (3).
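One common answer, offered as an illustration rather than the model solution: represent a candidate as a 0/1 vector (bit i = product i loaded), use a single random bit-flip as the neighbourhood move, and handle the weight constraint by rejecting overweight neighbours outright (a penalty term would be the main alternative):

```python
import math
import random

def anneal(weights, profits, W, steps=20000, t0=10.0, cooling=0.999, seed=0):
    """Simulated annealing for the lorry problem (0/1 knapsack).

    Representation: x is a 0/1 vector, x[i] == 1 iff product i is loaded.
    Neighbourhood:  flip one randomly chosen bit.
    Constraint:     neighbours exceeding W are rejected outright.
    """
    rng = random.Random(seed)
    n = len(weights)
    x = [0] * n                      # empty lorry: always feasible
    profit = weight = 0
    best, best_profit = x[:], 0
    t = t0
    for _ in range(steps):
        i = rng.randrange(n)         # neighbour: flip bit i
        dw = -weights[i] if x[i] else weights[i]
        dp = -profits[i] if x[i] else profits[i]
        if weight + dw > W:
            continue                 # constraint strategy: reject infeasible move
        # accept improvements always, worsenings with Boltzmann probability
        if dp >= 0 or rng.random() < math.exp(dp / t):
            x[i] ^= 1
            weight += dw
            profit += dp
            if profit > best_profit:
                best, best_profit = x[:], profit
        t *= cooling
    return best, best_profit
```

Because every visited state is feasible, the best solution returned always respects W; the cooling schedule and step count here are arbitrary illustrative values.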

Where is the gaussian distribution function in the pseudocode below?
I was working on my final assignment, and I came across the Box-Muller method for generating Gaussian-distributed random numbers in the Unity software.
I am very confused about the Gaussian distribution function in the pseudocode that I found in one of the journals.
Pseudocode of the BoxMuller algorithm (Sukajaya et al., 2012):
a. Generate uniform random numbers u, v in the range [-1, 1]
b. Calculate s = u^2 + v^2
c. Repeat steps (a)-(b) until s < 1
d. Find the normal random numbers `z0 = u * sqrt((-2 ln s) / s)` and `z1 = v * sqrt((-2 ln s) / s)`
I think the pseudocode only covers the Box-Muller transform itself, and the Gaussian distribution function is only used for displaying diagrams of the generated random numbers.
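For reference, a direct Python rendering of steps a-d (this is the Marsaglia polar variant of the Box-Muller transform; note that the Gaussian density function itself never appears anywhere, since the sqrt(-2 ln s / s) factor is what shapes the uniform samples into normal ones):

```python
import math
import random

def box_muller_polar(rng=random):
    """Return a pair (z0, z1) of independent standard normal samples."""
    while True:
        u = rng.uniform(-1.0, 1.0)    # step a
        v = rng.uniform(-1.0, 1.0)
        s = u * u + v * v             # step b
        if 0.0 < s < 1.0:             # step c: retry until inside the unit circle
            break
    factor = math.sqrt(-2.0 * math.log(s) / s)
    return u * factor, v * factor     # step d
```

So the pseudocode is complete on its own: running it repeatedly produces numbers whose histogram follows the Gaussian curve, which supports the reading that the density function in the journal is only used for plotting.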