Assign inorder index to Binary Tree
How do we assign each node of a binary tree its inorder index?
Given the tree below:

      1
    2   3
   4 5 6 7

assign indices so that each node holds its position in an inorder traversal of the tree:

      4
    2   6
   1 3 5 7

The following code does not work for the node that should receive index 6: it assigns 10 instead, because the index passed down through the argument and the index returned from the left child are double-counted. Can we achieve this without using a global variable?
int assignIndex(Node root, int index) {
    if (root == null)
        return 0;
    int leftIndex = assignIndex(root.left, index);
    root.index = leftIndex + index + 1;
    int rightIndex = assignIndex(root.right, root.index);
    if (rightIndex == 0)
        return root.index;
    else
        return rightIndex;
}
1 answer

The problem in the above program is that it returns two different kinds of values on different occasions. The issue can be solved by always returning the latest index value, which avoids the need for a global variable.
int assignIndex(Node root, int index) {
    if (root.left != null)
        index = assignIndex(root.left, index);
    index++;
    root.index = index;
    if (root.right != null)
        index = assignIndex(root.right, index);
    return index;
}
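As a quick sanity check, here is a self-contained sketch (with a hypothetical Node class carrying the fields assumed in the question) that builds the three-level example tree and runs the corrected function; nodes 1..7 should receive inorder indices 4, 2, 6, 1, 3, 5, 7 respectively.

```java
// Minimal sketch: a hypothetical Node class plus the corrected assignIndex,
// applied to the example tree from the question.
public class InorderIndexDemo {
    static class Node {
        int data, index;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    static int assignIndex(Node root, int index) {
        if (root.left != null)
            index = assignIndex(root.left, index);
        index++;                 // this node comes right after its left subtree
        root.index = index;
        if (root.right != null)
            index = assignIndex(root.right, index);
        return index;            // largest index assigned in this subtree
    }

    public static void main(String[] args) {
        Node[] n = new Node[8];
        for (int i = 1; i <= 7; i++) n[i] = new Node(i);
        n[1].left = n[2]; n[1].right = n[3];
        n[2].left = n[4]; n[2].right = n[5];
        n[3].left = n[6]; n[3].right = n[7];
        assignIndex(n[1], 0);
        for (int i = 1; i <= 7; i++)
            System.out.println("node " + i + " -> index " + n[i].index);
    }
}
```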
See also questions close to this topic

Implementing an ID3 algorithm - errors in 2 functions
I would like to implement an ID3 algorithm but I am running into problems here.
First of all I would like to make 2 functions.
The first one is the following:
def func1_learn(X, y, impurity_measure = "entropy"):
    impurity_measure = "entropy"
This function should learn a decision tree with entropy as the impurity measure, so that if I call func1_learn like this:
func1_learn(X, y, impurity_measure = "entropy")
or like this:
func1_learn(X, y)
then in both cases the function should learn a decision tree with entropy as the impurity measure. But my code doesn't do that; I just get the wrong output.
My second function should predict class label of some new data point x.
def func2_predict(x, tree):
    predictions = classifier.predict(x)  # predict class label of some new data point x
My question is: how do I change the code so that these functions work as described above? What mistake did I make?
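ID3 chooses splits by information gain, which is built on the entropy of the label vector, H = -Σ p_i log2 p_i over the class frequencies p_i. As a hedged illustration of that impurity measure (a generic sketch, not the asker's code, and in Java rather than Python):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the entropy impurity measure used by ID3:
// H(y) = -sum_i p_i * log2(p_i) over the class frequencies p_i.
public class Entropy {
    public static double entropy(int[] labels) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int y : labels)
            counts.merge(y, 1, Integer::sum);
        double h = 0.0;
        for (int c : counts.values()) {
            double p = (double) c / labels.length;
            h -= p * (Math.log(p) / Math.log(2)); // log base 2
        }
        return h;
    }

    public static void main(String[] args) {
        // A 50/50 split has maximal entropy for two classes: 1 bit.
        System.out.println(entropy(new int[]{0, 0, 1, 1}));
    }
}
```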

Minimize travelling cost for N people visiting a subset of M cities
I have a problem that was asked in an interview.
Given are the names of N travellers, M cities, and the per-person cost of travelling between cities M_i and M_j, given as C_ij. Each traveller N_i has to visit a given subset S_i of the M places. A number X is also given.

All travellers start their journey from a common city.
They can visit a city that is not in their list of cities to be visited.
A traveller can stay in a city for an arbitrary amount of time.
They are required to return to the origin city.
All cities are connected.
One can visit the same city multiple times.
X < min( {C} )

Problem: if T travellers make a journey together, they save (T-1)*X on the total cost of travel between a pair of cities. We need to determine the order of visits for each traveller so as to minimize the overall cost of travel by all travellers.

Sample test case:
Inputs:
N = 4
M = 4
travellers[N] = { "person_one", "person_two", "person_three", "person_four" }
cities[M] = { "A", "B", "C", "D" }
costs = {
  A <-> B = 100
  A <-> C = 120
  A <-> D = 50
  B <-> C = 20
  B <-> D = 20
  C <-> D = 20
}
target_cities = {
  person_one   = { "A", "B" }
  person_two   = { "A", "B", "C", "D" }
  person_three = { "A", "C", "D" }
  person_four  = { "A", "D" }
}
X = 10

Assuming they all start from city A.
Output :
person_one   : A -> B -> A
person_two   : A -> D -> C -> B -> A
person_three : A -> D -> C -> B -> A
person_four  : A -> D -> A
Explaination :
- person two, three and four save 2*10 by travelling to D together
- person two and three save 1*10 by travelling to C together
- person three goes back via B because:
  cost A <-> C = 120
  cost C -> B + B -> A = 120, and as person one, two and three will travel together from B, they save 2*10.
So far I haven't been able to figure out any way to solve this problem. I'd like to know the approach that I can take.


Electricity Billing System for Home in OOPS
I recently appeared for a code-pairing interview round and was asked the following question: "Electricity Billing System for a home". The per-hour power usage of the home appliances (Fan, AC, Fridge, Light) was given:
Appliance | Per-hour unit consumption
Fan       | 4
Light     | 2
AC        | 10
Fridge    | 8
The slab chart was given as:
Slab 1 - 0 to 1000 units:    INR 20 per unit
Slab 2 - 1001 to 3000 units: INR 30 per unit
Slab 3 - 3001 to 6000 units: INR 40 per unit
Slab 4 - 6001 units and above: INR 50 per unit
Input:
Appliance | CountOfAppliance | TotalUsagePerDayInHours
Fan       | 2 | 4
Light     | 1 | 4
AC        | 1 | 12
Fridge    | 1 | 5
Output:
200000 INR
unitsPerDay: (2*4*4 + 1*4*2 + 1*12*10 + 1*5*8) = 200
unitsPerMonth = 6000
totalBill = 1000*20 + 2000*30 + 3000*40 = 200000
I had modeled the appliances using the Factory design pattern, but the interviewer was not happy with the hardcoding in my price-slab code. I modified it to read the slab rates from a constants file, but the interviewer was still expecting something better. Kindly let me know how it can be improved.
My Code:
IElectricComponent
public interface IElectricComponent {
    public enum Unit {
        FAN(4), LIGHT(2), AC(10), FRIDGE(8);
        private int value;
        private Unit(int value) { this.value = value; }
        public int getValue() { return value; }
    }
    public int claculateUnitForSingleDay();
}
Fan
public class Fan implements IElectricComponent {
    private int noOfComponents;
    private int perDayUsage;
    public Fan(int noOfComponents, int perDayUsage) {
        this.noOfComponents = noOfComponents;
        this.perDayUsage = perDayUsage;
    }
    public int claculateUnitForSingleDay() {
        return noOfComponents * perDayUsage * Unit.FAN.getValue();
    }
}
The same pattern is used for Fridge, Light, and AC.
Factory : ApplianceFactory
public class ApplianceFactory {
    public static IElectricComponent getInstance(String appliance, int countOfAppliance, int perDayUsage) {
        switch (appliance) {
            case ApplianceConstants.FAN:
                return new Fan(countOfAppliance, perDayUsage);
            case ApplianceConstants.LIGHT:
                return new Light(countOfAppliance, perDayUsage);
            case ApplianceConstants.AC:
                return new AC(countOfAppliance, perDayUsage);
            case ApplianceConstants.FRIDGE:
                return new Fridge(countOfAppliance, perDayUsage);
            default:
                return new IElectricComponent() {
                    @Override
                    public int claculateUnitForSingleDay() {
                        return countOfAppliance * perDayUsage;
                    }
                };
        }
    }
}
Constants:
public interface ApplianceConstants {
    public String FAN = "Fan";
    public String LIGHT = "Light";
    public String AC = "AC";
    public String FRIDGE = "Fridge";
    public int Slab1 = 20;
    public int Slab2 = 30;
    public int Slab3 = 40;
    public int Slab4 = 50;
}
PriceSlab:
public class PriceSlab {
    HashMap<String, Integer> slabs = new HashMap<>();

    public int calculateBill(int units) {
        slabs.put("A", ApplianceConstants.Slab1);
        slabs.put("B", ApplianceConstants.Slab2);
        slabs.put("C", ApplianceConstants.Slab3);
        slabs.put("D", ApplianceConstants.Slab4);
        return calculateBillTotal(units);
    }

    private int calculateBillTotal(int units) {
        int total = 0;
        if (units <= 1000) {
            total = units * slabs.get("A");
        } else if (units <= 3000) {
            total = 1000 * slabs.get("A") + (units - 1000) * slabs.get("B");
        } else if (units <= 6000) {
            total = 1000 * slabs.get("A") + 2000 * slabs.get("B") + (units - 3000) * slabs.get("C");
        } else {
            total = 1000 * slabs.get("A") + 2000 * slabs.get("B") + 3000 * slabs.get("C")
                  + (units - 6000) * slabs.get("D");
        }
        return total;
    }
}
Main class:
public class BillGenerator {
    public static void main(String[] args) {
        Scanner scn = new Scanner(System.in);
        ApplianceFactory factory = new ApplianceFactory();
        int inputOfAppliance = scn.nextInt();
        String appliance = "";
        int countOfAppliance;
        int perDayUsage;
        int unitsPerDay = 0;
        for (int t = 0; t < inputOfAppliance; t++) {
            appliance = scn.next();
            countOfAppliance = scn.nextInt();
            perDayUsage = scn.nextInt();
            IElectricComponent electricComponent =
                factory.getInstance(appliance, countOfAppliance, perDayUsage);
            unitsPerDay += electricComponent.claculateUnitForSingleDay();
        }
        System.out.println("unitsPerDay = " + unitsPerDay);
        int unitsPerMonth = unitsPerDay * 30;
        System.out.println("unitsPerMonth = " + unitsPerMonth);
        PriceSlab slab = new PriceSlab();
        int totalBill = slab.calculateBill(unitsPerMonth);
        System.out.println("totalBill = " + totalBill);
    }
}
I thought a lot about how to remove the hardcoding from the price-slab function but couldn't come up with anything better. Please let me know how it can be improved, or whether another design pattern would fit.
Thanks in advance.
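One common way to answer the hardcoding objection is to make the slab boundaries data: store (upperBound, rate) pairs in an ordered table and loop over them, so adding or re-pricing a slab only touches the table. This is a hedged sketch, not necessarily what the interviewer expected; the Slab record is hypothetical and the sample rates are taken from the question.

```java
import java.util.List;

// Data-driven slab billing: the slabs live in a table instead of an if/else chain.
// Adding, removing, or re-pricing a slab only changes the table.
public class SlabBilling {
    record Slab(int upperBound, int ratePerUnit) {}   // upperBound is inclusive

    static final List<Slab> SLABS = List.of(
        new Slab(1000, 20),
        new Slab(3000, 30),
        new Slab(6000, 40),
        new Slab(Integer.MAX_VALUE, 50)
    );

    static long calculateBill(int units) {
        long total = 0;
        int lower = 0;                       // exclusive lower bound of the current slab
        for (Slab s : SLABS) {
            if (units <= lower) break;       // nothing left to bill
            int inThisSlab = Math.min(units, s.upperBound()) - lower;
            total += (long) inThisSlab * s.ratePerUnit();
            lower = s.upperBound();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(calculateBill(6000)); // 1000*20 + 2000*30 + 3000*40 = 200000
    }
}
```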

Difference between pointer not equal to null and pointer->link not equal to null
I am new to C++ and I am trying to implement a linked list data structure. My doubt is: what is the difference between next->link_ != NULL and next != NULL? For example, below is the program I have written for deleting a key element from a linked list. In the delete_() function there is a loop while (next != NULL), which works fine, but when I rewrite this loop as while (next->link_ != NULL), it deletes every matching element of the list except the last one.
#include <iostream>
using namespace std;

struct Node {
    int data_;
    struct Node* link_;
};

struct Node* start = NULL;

void delete_(int k) {
    cout << "**************" << endl;
    struct Node *prev, *next;
    prev = start;
    next = start;
    while (next != NULL) {
        if (next->data_ == k) {
            if (next == start) {
                start = next->link_;
                next->link_ = NULL;
                delete next;
                prev = start;
                next = start;
            } else {
                prev->link_ = next->link_;
                next->link_ = NULL;
                delete next;
                next = prev->link_;
            }
        } else {
            prev = next;
            next = next->link_;
        }
    }
}

void create_Node(int data) {
    struct Node *temp, *pos;
    temp = new Node;
    temp->data_ = data;
    temp->link_ = NULL;
    if (start == NULL)
        start = temp;
    else {
        pos = start;
        while (pos->link_ != NULL) {
            pos = pos->link_;
        }
        pos->link_ = temp;
    }
}

void display_List() {
    struct Node* Print;
    Print = start;
    while (Print->link_ != NULL) {
        cout << Print->data_;
        cout << "->";
        Print = Print->link_;
    }
    cout << Print->data_;
}

int main() {
    int Data, no_of_Element, key;
    cout << "Enter the no_of_Element: " << endl;
    cin >> no_of_Element;
    while (no_of_Element > 0) {
        cout << "Enter the Data to be inserted: " << endl;
        cin >> Data;
        create_Node(Data);
        no_of_Element--;
    }
    display_List();
    cout << endl;
    cout << "Enter element key to delete:" << endl;
    cin >> key;
    delete_(key);
    display_List();
    return 0;
}
Can someone help me understand this?
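The difference is easiest to see in any linked structure, not just C++: checking the node itself against null visits every node, while checking the node's link stops one node early, so the last element is never examined. A small sketch (in Java, with a hypothetical Node class, but the loop conditions translate one-for-one):

```java
// Why "current != null" and "current.next != null" visit different nodes:
// the second condition stops one node before the end of the list.
public class LoopConditionDemo {
    static class Node {
        int data;
        Node next;
        Node(int data, Node next) { this.data = data; this.next = next; }
    }

    static int countVisited(Node head, boolean checkNext) {
        int visited = 0;
        Node cur = head;
        if (checkNext) {
            while (cur != null && cur.next != null) { visited++; cur = cur.next; }
        } else {
            while (cur != null) { visited++; cur = cur.next; }
        }
        return visited;
    }

    public static void main(String[] args) {
        Node head = new Node(1, new Node(2, new Node(3, null)));
        System.out.println(countVisited(head, false)); // visits all 3 nodes
        System.out.println(countVisited(head, true));  // visits only 2: the last is skipped
    }
}
```

That is exactly why the while (next->link_ != NULL) variant never gets a chance to delete the last node: the loop ends while next still points at it.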

Trying to print the top view of a binary tree using preorder
The two codes below should work the same way but give different output. I tried to debug the code as best as I could but couldn't find the bug.
CODE1:
#include <iostream>
#include <stack>
#include <map>
#include <cstdlib>
using namespace std;

struct node {
    int data;
    struct node* left;
    struct node* right;
};

typedef struct label {
    struct node* root;
    int disp;
} label;

int main() {
    struct node* n1 = (struct node*) malloc(sizeof(struct node));
    struct node* n2 = (struct node*) malloc(sizeof(struct node));
    struct node* n3 = (struct node*) malloc(sizeof(struct node));
    struct node* n4 = (struct node*) malloc(sizeof(struct node));
    struct node* n5 = (struct node*) malloc(sizeof(struct node));
    struct node* n6 = (struct node*) malloc(sizeof(struct node));
    struct node* n7 = (struct node*) malloc(sizeof(struct node));
    struct node* n8 = (struct node*) malloc(sizeof(struct node));
    struct node* n9 = (struct node*) malloc(sizeof(struct node));
    struct node* n10 = (struct node*) malloc(sizeof(struct node));
    struct node* n11 = (struct node*) malloc(sizeof(struct node));
    struct node* n12 = (struct node*) malloc(sizeof(struct node));
    struct node* n13 = (struct node*) malloc(sizeof(struct node));

    n1->data = 1;  n2->data = 2;  n3->data = 3;  n4->data = 4;
    n5->data = 5;  n6->data = 6;  n7->data = 7;  n8->data = 8;
    n9->data = 9;  n10->data = 10; n11->data = 11; n12->data = 12;
    n13->data = 13;

    n1->left = n2;   n1->right = n3;
    n2->left = n4;   n2->right = n5;
    n4->left = n4->right = NULL;
    n5->left = n5->right = NULL;
    n3->left = n6;   n3->right = n7;
    n6->left = n6->right = NULL;
    n7->left = n8;   n7->right = NULL;
    n8->left = n9;   n8->right = NULL;
    n9->left = n10;  n9->right = NULL;
    n10->left = n11; n10->right = NULL;
    n11->left = n12; n11->right = NULL;
    n12->left = n13; n12->right = NULL;
    n13->left = NULL; n13->right = NULL;

    node* root = n1;
    stack<label*> s;
    map<int, int> m;
    label* var = new label();
    var->root = root;
    var->disp = 0;
    label* var1 = new label();
    s.push(var);
    while (!s.empty()) {
        m.insert(pair<int, int>(var->disp, var->root->data));
        s.pop();
        if (var->root->right != NULL) {
            var1->root = var->root->right;
            var1->disp = var->disp + 1;
            s.push(var1);
        }
        if (var->root->left != NULL) {
            var1->root = var->root->left;
            var1->disp = var->disp - 1;
            s.push(var1);
        }
        if (!s.empty()) {
            var->root = s.top()->root;
            var->disp = s.top()->disp;
        }
    }
    map<int, int>::iterator itr;
    for (itr = m.begin(); itr != m.end(); itr++) {
        cout << itr->second << endl;
    }
}
CODE2:
#include <iostream>
#include <stack>
#include <map>
#include <cstdlib>
using namespace std;

struct node {
    int data;
    struct node* left;
    struct node* right;
};

typedef struct label {
    struct node* root;
    int disp;
} label;

int main() {
    struct node* n1 = (struct node*) malloc(sizeof(struct node));
    struct node* n2 = (struct node*) malloc(sizeof(struct node));
    struct node* n3 = (struct node*) malloc(sizeof(struct node));
    struct node* n4 = (struct node*) malloc(sizeof(struct node));
    struct node* n5 = (struct node*) malloc(sizeof(struct node));
    struct node* n6 = (struct node*) malloc(sizeof(struct node));
    struct node* n7 = (struct node*) malloc(sizeof(struct node));
    struct node* n8 = (struct node*) malloc(sizeof(struct node));
    struct node* n9 = (struct node*) malloc(sizeof(struct node));
    struct node* n10 = (struct node*) malloc(sizeof(struct node));
    struct node* n11 = (struct node*) malloc(sizeof(struct node));
    struct node* n12 = (struct node*) malloc(sizeof(struct node));
    struct node* n13 = (struct node*) malloc(sizeof(struct node));

    n1->data = 1;  n2->data = 2;  n3->data = 3;  n4->data = 4;
    n5->data = 5;  n6->data = 6;  n7->data = 7;  n8->data = 8;
    n9->data = 9;  n10->data = 10; n11->data = 11; n12->data = 12;
    n13->data = 13;

    n1->left = n2;   n1->right = n3;
    n2->left = n4;   n2->right = n5;
    n4->left = n4->right = NULL;
    n5->left = n5->right = NULL;
    n3->left = n6;   n3->right = n7;
    n6->left = n6->right = NULL;
    n7->left = n8;   n7->right = NULL;
    n8->left = n9;   n8->right = NULL;
    n9->left = n10;  n9->right = NULL;
    n10->left = n11; n10->right = NULL;
    n11->left = n12; n11->right = NULL;
    n12->left = n13; n12->right = NULL;
    n13->left = NULL; n13->right = NULL;

    node* root = n1;
    stack<label*> s;
    map<int, int> m;
    label* var = new label();
    var->root = root;
    var->disp = 0;
    s.push(var);
    while (!s.empty()) {
        m.insert(pair<int, int>(var->disp, var->root->data));
        s.pop();
        if (var->root->right != NULL) {
            label* var1 = new label();
            var1->root = var->root->right;
            var1->disp = var->disp + 1;
            s.push(var1);
        }
        if (var->root->left != NULL) {
            label* var2 = new label();
            var2->root = var->root->left;
            var2->disp = var->disp - 1;
            s.push(var2);
        }
        if (!s.empty()) {
            var->root = s.top()->root;
            var->disp = s.top()->disp;
        }
    }
    map<int, int>::iterator itr;
    for (itr = m.begin(); itr != m.end(); itr++) {
        cout << itr->second << endl;
    }
}
Code 2 gives the right output but Code 1 does not. In Code 2, two local label variables are created inside the loop, while in Code 1 a single label variable is reused and its values are overwritten.

Attempting to flatten a tree in Haskell using inorder traversal
Trying to use the given fold function to flatten out a tree into a list.
treeFold :: (b -> a -> b -> b) -> b -> Tree a -> b
treeFold _ b Leaf = b
treeFold f b (Node lt x rt) = f (treeFold f b lt) x (treeFold f b rt)
Here is what I have tried to so far:
treeToList :: Tree a -> [a]
treeToList = treeFold (\xs x ys -> xs ++ x : ys) (\x -> [x])
For some reason, I can't quite wrap my head around how to go about doing this. It feels like there's something about Haskell I haven't quite internalized. Any help would be appreciated, along with how to go about solving it. Thanks!
Edit:
I realize that the type signature I am using here verges on the nonsensical. In treeFold's type, the second argument (b) should probably be a list, since it acts as the accumulator here. Two of the arguments of the combining function have to be the results for the left and right subtrees of a Node, and the third is the node's value. Within the function, I need to combine the left result, the value, and the right result in the usual inorder fashion, but all the variations I have tried have had issues.
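The reasoning above is on the right track: the combining function joins (left result, node value, right result), and the base value for a Leaf should be the empty list, not a function. For comparison, here is the same fold-based inorder flatten sketched in Java (the Tree/Leaf/Node types are hypothetical analogues, not from the question):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the same idea in Java: a fold over a binary tree where the
// leaf case supplies the base value and the node case combines
// (left result, value, right result). Flattening inorder just concatenates.
public class TreeFoldDemo {
    interface Tree<A> {}
    record Leaf<A>() implements Tree<A> {}
    record Node<A>(Tree<A> left, A value, Tree<A> right) implements Tree<A> {}

    interface Combine<A, B> { B apply(B left, A value, B right); }

    static <A, B> B treeFold(Combine<A, B> f, B base, Tree<A> t) {
        if (t instanceof Node<A> n)
            return f.apply(treeFold(f, base, n.left()), n.value(),
                           treeFold(f, base, n.right()));
        return base; // Leaf case
    }

    static <A> List<A> treeToList(Tree<A> t) {
        // the base value is the EMPTY list, the analogue of
        //   treeToList = treeFold (\xs x ys -> xs ++ x : ys) []
        return treeFold((left, x, right) -> {
            List<A> out = new ArrayList<>(left);
            out.add(x);
            out.addAll(right);
            return out;
        }, new ArrayList<A>(), t);
    }

    public static void main(String[] args) {
        Tree<Integer> t = new Node<>(new Node<>(new Leaf<>(), 1, new Leaf<>()), 2,
                                     new Node<>(new Leaf<>(), 3, new Leaf<>()));
        System.out.println(treeToList(t)); // [1, 2, 3]
    }
}
```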

Convert string of boolean expression to tree structure in C#
I'm trying to convert strings of logical expressions like

"a && b || c && d"

or

"(a && b) || (c && d)"

into binary-tree structures:

         ||
        /  \
      &&    &&
     /  \  /  \
    a   b  c   d

Then apply a depth-first search to traverse them.
Is there any appropriate library to do this? I was thinking about Irony or Roslyn, but I was not sure.
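If a full parser library feels heavy, a tiny recursive-descent parser is enough for just && and || with parentheses. A hedged sketch (in Java rather than C#, but the shape translates directly): || has lower precedence, so it is parsed in the outer rule and && in the inner one.

```java
// Minimal recursive-descent parser for identifiers, &&, || and parentheses.
// Grammar:  orExpr  := andExpr ("||" andExpr)*
//           andExpr := atom ("&&" atom)*
//           atom    := identifier | "(" orExpr ")"
public class BoolExprParser {
    static class Node {
        String value;       // "&&", "||", or an identifier
        Node left, right;
        Node(String value, Node left, Node right) {
            this.value = value; this.left = left; this.right = right;
        }
        // Inorder print with parentheses, to check the structure.
        public String toString() {
            return left == null ? value : "(" + left + " " + value + " " + right + ")";
        }
    }

    private final String src;
    private int pos = 0;
    BoolExprParser(String src) { this.src = src; }

    static Node parse(String s) { return new BoolExprParser(s).orExpr(); }

    private Node orExpr() {
        Node n = andExpr();
        while (eat("||")) n = new Node("||", n, andExpr());
        return n;
    }

    private Node andExpr() {
        Node n = atom();
        while (eat("&&")) n = new Node("&&", n, atom());
        return n;
    }

    private Node atom() {
        skipSpaces();
        if (eat("(")) {
            Node n = orExpr();
            eat(")");
            return n;
        }
        int start = pos;
        while (pos < src.length() && Character.isLetterOrDigit(src.charAt(pos))) pos++;
        return new Node(src.substring(start, pos), null, null);
    }

    private boolean eat(String token) {
        skipSpaces();
        if (src.startsWith(token, pos)) { pos += token.length(); return true; }
        return false;
    }

    private void skipSpaces() {
        while (pos < src.length() && src.charAt(pos) == ' ') pos++;
    }

    public static void main(String[] args) {
        System.out.println(parse("a && b || c && d"));   // ((a && b) || (c && d))
    }
}
```

A depth-first traversal is then just recursion over left and right. (Error handling is omitted for brevity; a production version should reject malformed input.)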

Java Arrays.sort(arr) outputs 0's instead of sorted values
I am using Arrays.sort on an array filled with randomly generated ints.
The result should be a sorted array, but instead I get 0's and a few of the randomly generated numbers.
Here is my code:
public class test {
    public static void main(String[] args) {
        int[] list = new int[10];
        Random rand = new Random();
        for (int i = 0; i < list.length; i++) {
            list[i] = rand.nextInt(100);
            Arrays.sort(list);
            System.out.println(list[i]);
        }
    }
}
The expected output should be a sorted array of 10 random integers, but instead the first several numbers printed are always 0.
Can anyone help?
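The bug is that the array is sorted inside the fill loop: on iteration i, the not-yet-filled slots are still 0, sorting moves those zeros to the front, and list[i] usually prints one of them. A likely fix (a sketch keeping the question's variable names) is to fill first, then sort once, then print:

```java
import java.util.Arrays;
import java.util.Random;

// Fill the whole array first, then sort once, then print:
// sorting inside the fill loop moves the unfilled zeros to the front.
public class SortedRandomDemo {
    public static void main(String[] args) {
        int[] list = new int[10];
        Random rand = new Random();
        for (int i = 0; i < list.length; i++)
            list[i] = rand.nextInt(100);   // 1) fill
        Arrays.sort(list);                  // 2) sort once, after filling
        for (int value : list)
            System.out.println(value);      // 3) print in ascending order
    }
}
```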

How do I find all pairs of numbers in an array
I have a sequence of numbers indexed 1 through N in an array. I want to find all possible pairs of indices (i, j) such that

1 <= i < j <= N

and apply some operation (bitwise XOR) to each such A[i] and A[j]. The number of elements can be up to 10^5, and the naive approach of using two loops is slow. Is there another way to quickly find all such pairs?
For example: A = [2, 7, 6, 3, 2]. All such pairs of A[i] and A[j] are (2,7), (2,6), (2,3), (2,2), (7,6), (7,3), (7,2), (6,3), (6,2), (3,2).
I will then apply a bitwise XOR to each pair and check whether the resulting number can be represented as a sum of two primes (not necessarily distinct) with the same parity (both odd or both even). I need to report how many such pairs there are.
I am using the sieve of Atkin for prime numbers, and I can report those two primes, if they exist, in linear time. Since numbers can repeat in the array, I am also using memoization to save time. The only trouble I have is building the pairs, which takes O(n^2) and makes this very inefficient. Is there any other way of doing this?
Here is the original problem : https://www.codechef.com/SEPT18B/problems/XORIER
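Since the check depends only on the two values, one standard trick is to count occurrences of each distinct value and iterate over pairs of distinct values instead of pairs of indices: with V distinct values this is O(N + V^2) predicate evaluations rather than O(N^2), which helps exactly when values repeat. A hedged sketch with a placeholder predicate (the real one would test "XOR expressible as a sum of two same-parity primes"):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.IntPredicate;

// Count index pairs (i < j) whose values satisfy a predicate on A[i] ^ A[j],
// by grouping equal values: cnt[a]*cnt[b] pairs for distinct values a, b,
// and cnt[a]*(cnt[a]-1)/2 pairs for equal values (their XOR is 0).
public class XorPairCounter {
    static long countPairs(int[] a, IntPredicate goodXor) {
        Map<Integer, Long> cnt = new HashMap<>();
        for (int v : a) cnt.merge(v, 1L, Long::sum);
        List<Integer> values = new ArrayList<>(cnt.keySet());
        long total = 0;
        for (int i = 0; i < values.size(); i++) {
            int x = values.get(i);
            long cx = cnt.get(x);
            if (goodXor.test(0))                 // equal-value pairs: x ^ x == 0
                total += cx * (cx - 1) / 2;
            for (int j = i + 1; j < values.size(); j++) {
                int y = values.get(j);
                if (goodXor.test(x ^ y))
                    total += cx * cnt.get(y);
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[] a = {2, 7, 6, 3, 2};
        // Placeholder predicate: "XOR is even" stands in for the real
        // sum-of-two-same-parity-primes test from the problem.
        System.out.println(countPairs(a, x -> x % 2 == 0));
    }
}
```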

Nested list weight sum in JavaScript
I'm trying to work on this problem, where we compute the nested weight sum for a given nested array of numbers.
Given a nested list of integers, return the sum of all integers in the list weighted by their depth.
For example for:
[[1,1],2,[1,1]] ====> solution is 10.
Four 1's at depth 2, one 2 at depth 1.
Here's the code I wrote:
var depthSum = function (nestedList, sum = 0, depth = 1) {
    for (let i = 0; i < nestedList.length; i++) {
        let val = nestedList[i];
        if (Array.isArray(val)) {
            sum = depthSum(val, sum, depth + 1);   // recurse and keep going, don't return early
        } else {
            sum += val * depth;
        }
    }
    return sum;
};
I'm now trying to work on the converse problem, i.e.:
Given a nested list of integers, return the sum of all integers in the list weighted by their depth, where the weight now increases from leaf to root instead of root to leaf: leaf-level integers have weight 1, and root-level integers have the largest weight.
Example: [[1,1],2,[1,1]] ===> Solution is 8.
How can I use the same approach and solve this problem?
(https://leetcode.com/problems/nestedlistweightsumii/description/)
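One approach that reuses the same traversal relies on an identity: the bottom-up weight of an integer at depth d is (maxDepth + 1 - d), so the bottom-up sum equals (maxDepth + 1) * flatSum - topDownSum. For the example, that is 3 * 6 - 10 = 8. A sketch (in Java rather than JavaScript, using Object lists as a stand-in for the nested arrays):

```java
import java.util.List;

// Bottom-up nested weight sum via the identity
//   sum v*(maxDepth + 1 - depth) = (maxDepth + 1)*sum(v) - sum(v*depth)
// so we only need the max depth, the flat sum, and the top-down weighted sum.
public class NestedWeightSum {
    static int maxDepth(List<?> list, int depth) {
        int max = depth;
        for (Object o : list)
            if (o instanceof List<?> inner)
                max = Math.max(max, maxDepth(inner, depth + 1));
        return max;
    }

    static int weightedSum(List<?> list, int depth) { // top-down: root weight 1
        int sum = 0;
        for (Object o : list)
            sum += (o instanceof List<?> inner)
                 ? weightedSum(inner, depth + 1)
                 : (Integer) o * depth;
        return sum;
    }

    static int flatSum(List<?> list) {
        int sum = 0;
        for (Object o : list)
            sum += (o instanceof List<?> inner) ? flatSum(inner) : (Integer) o;
        return sum;
    }

    static int bottomUpSum(List<?> list) {
        int d = maxDepth(list, 1);
        return (d + 1) * flatSum(list) - weightedSum(list, 1);
    }

    public static void main(String[] args) {
        List<?> nested = List.of(List.of(1, 1), 2, List.of(1, 1));
        System.out.println(weightedSum(nested, 1)); // 10 (top-down)
        System.out.println(bottomUpSum(nested));    // 8  (bottom-up)
    }
}
```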