Amusement park: scheduling rides using dynamic programming
You arrive at Waldo's World Amusement Park with T minutes remaining until the park closes. The park has n rides and your objective is to complete as many rides as possible before the park closes. (For this problem, taking the same ride twice counts as 2 rides.) You are given a table W such that W(i, t) gives the waiting time for ride i at time t. For convenience, assume that t is expressed as minutes before the park closes. Ride i itself takes r_i minutes, and all times are measured in integer minutes.
I tried solving it with a method similar to the 0-1 knapsack problem, but the table W containing the waiting time for ride i varies with time t. Is it exactly a combined knapsack-plus-activity-selection problem?
1 answer

Would this make any sense? Let f(t) represent the maximum number of rides achievable at time t. Then:

    // Higher t is further back in time,
    // since t is how many minutes before the park closes
    f(t) = max(
        f(t - 1),                   // not taking any ride this minute
        1 + f(t - W(i, t) - r_i)    // take ride i
    ) for all i
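That recurrence can be prototyped directly with memoization. This is a minimal sketch, not a definitive implementation: it assumes W is supplied as a callable W(i, t) rather than a literal table, and the toy inputs at the bottom are made up.

```python
from functools import lru_cache

def max_rides(T, W, r):
    """Most rides finishable with T minutes left.

    W(i, t) -- waiting time for ride i when t minutes remain (assumed callable)
    r       -- list of ride durations r_i
    """
    n = len(r)

    @lru_cache(maxsize=None)
    def f(t):
        if t <= 0:
            return 0
        best = f(t - 1)              # let a minute pass without boarding
        for i in range(n):
            cost = W(i, t) + r[i]    # queue, then ride
            if cost <= t:
                best = max(best, 1 + f(t - cost))
        return best

    return f(T)

# Toy check: constant 1-minute queues, rides of 3 and 5 minutes.
print(max_rides(10, lambda i, t: 1, [3, 5]))  # -> 2 (ride 0 twice: 4 + 4 <= 10)
```

With a real table, W(i, t) would just be a lookup; the recursion itself runs in O(T * n).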
See also questions close to this topic

Count the number of ways to make a sequence sorted by deleting one item
I recently came across a question that is basically a variant of the following problem:
We want to make a sequence sorted in non-decreasing order by deleting exactly one item from it. How many ways can we do that?
For example, in the sequence [2, 4, 5, 4], the number of ways is 2, as we can delete the 5 or the last 4. I was thinking in terms of longest increasing subsequence but couldn't properly relate it to this problem. Any pointer / direction regarding the solution strategy would be highly appreciated.
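As a starting point, the count asked for can be sketched with a brute-force check over each deletion position; this is quadratic but makes the specification concrete:

```python
def count_sortable_deletions(seq):
    """Number of positions whose removal leaves a non-decreasing sequence."""
    def is_sorted(a):
        return all(a[k] <= a[k + 1] for k in range(len(a) - 1))
    # Try deleting each index in turn and test what remains.
    return sum(is_sorted(seq[:i] + seq[i + 1:]) for i in range(len(seq)))

print(count_sortable_deletions([2, 4, 5, 4]))  # -> 2
```

A linear-time version would locate the first descent seq[i] > seq[i+1] (only i or i+1 can be the deleted item) and test just those two candidates.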

Merge sort: "maximum recursion depth exceeded in comparison"
I want to code merge sort in Python 3.
Here is my code:
import math

def Merge(array, start, mid, end):
    n1 = mid - start + 1
    n2 = end - mid
    i = 0
    j = 0
    left = [None] * n1
    right = [None] * n2
    inv = 0
    for i in range(0, n1):
        left[i] = array[start + i - 1]
    for j in range(0, n2):
        right[j] = array[mid + j]
    left.append(math.inf)
    right.append(math.inf)
    new_list = []
    for k in range(0, end):
        if left[i] <= right[j]:
            array[k] = left[i]
            i += 1
        elif left[i] > right[j]:
            array[k] = right[j]
            j += 1

def MergeSort(array, start, end):
    if len(array) <= 1:
        return array
    mid = math.ceil((start + end) / 2)
    MergeSort(array, start, mid)
    MergeSort(array, mid + start, end)
    Merge(array, start, mid, end)

stuff = [1, 5, 17, 32, 6]
MergeSort(stuff, 0, len(stuff))
print(stuff)
I tested the first function (Merge) and it works as it should. But I have a problem with the recursion and cannot figure out where the problem is. The error is "maximum recursion depth exceeded in comparison".
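The likely culprit is the base case: `len(array) <= 1` tests the whole array, which never shrinks, so the recursion never terminates (and `MergeSort(array, mid + start, end)` skews the second half). For comparison, here is one way the same sentinel-based merge sort can be structured over half-open ranges, offered as a sketch rather than the only fix:

```python
import math

def merge(array, start, mid, end):
    # Merge the sorted halves array[start:mid] and array[mid:end] in place.
    left = array[start:mid] + [math.inf]   # sentinels mark exhausted halves
    right = array[mid:end] + [math.inf]
    i = j = 0
    for k in range(start, end):
        if left[i] <= right[j]:
            array[k] = left[i]
            i += 1
        else:
            array[k] = right[j]
            j += 1

def merge_sort(array, start, end):
    # Base case on the subrange length, not len(array).
    if end - start <= 1:
        return
    mid = (start + end) // 2
    merge_sort(array, start, mid)
    merge_sort(array, mid, end)            # second half starts at mid
    merge(array, start, mid, end)

stuff = [1, 5, 17, 32, 6]
merge_sort(stuff, 0, len(stuff))
print(stuff)  # -> [1, 5, 6, 17, 32]
```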

How to improve my way of thinking in JavaScript?
I'm learning ES5 JavaScript and wanted to test myself, so I went to the codewars site, where there was a problem that says:
The new "Avengers" movie has just been released! There are a lot of people at the cinema box office standing in a huge line. Each of them has a single 100, 50 or 25 dollars bill. An "Avengers" ticket costs 25 dollars.
Vasya is currently working as a clerk. He wants to sell a ticket to every single person in this line.
Can Vasya sell a ticket to each person and give the change if he initially has no money and sells the tickets strictly in the order people follow in the line?
Return YES, if Vasya can sell a ticket to each person and give the change with the bills he has at hand at that moment. Otherwise return NO.
That challenge is at the first level on codewars. I thought about it for an hour and still wasn't able to solve it. So my question is: how can I improve my way of thinking so that I can solve that kind of question, and harder ones?
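One useful habit is to ask what state must be tracked as you scan the line; here it is just the change drawer. A greedy sketch (in Python for brevity; the same logic ports directly to ES5) might look like this:

```python
def tickets(bills):
    """Greedy till: track 25s and 50s on hand; 100s are useless as change."""
    c25 = c50 = 0
    for bill in bills:
        if bill == 25:
            c25 += 1
        elif bill == 50:
            if c25 == 0:
                return "NO"
            c25 -= 1
            c50 += 1
        else:  # 100: prefer 50 + 25 so the scarcer 25s are preserved
            if c50 > 0 and c25 > 0:
                c50 -= 1
                c25 -= 1
            elif c25 >= 3:
                c25 -= 3
            else:
                return "NO"
    return "YES"

print(tickets([25, 25, 50]))  # -> YES
print(tickets([25, 100]))     # -> NO
```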

Algorithm to solve optimal clustering of word vectors
I have word vectors split out into N groups, where each group contains M vectors. The problem is to find an optimal clustering of vectors where one and only one vector appears from each of the N groups.
As an example, say we had 3 groups of vectors as so:
- {"Hot Dog", "Hot sauce", "Hotshot"}
- {"Hamburger", "Hamburg", "Hamburgler"}
- {"Pizza", "Pisa", "Piazza"}
The optimal cluster would be {"Hot Dog", "Hamburger", "Pizza"} because according to some function I have F(), these vectors are clustered closely to each other within the vector space I have defined.
I can arrive at this result in a brute-force way by merely trying every combination, but as N and M grow this becomes infeasible. Is there a dynamic programming approach I could use? Any reference algorithm I can look up?
Thanks.
Edit:
To clarify my example above, each of those strings is like an ID for a vector, so to rephrase: Group 1 is {v1, v2, v3}, Group 2 is {v4, v5, v6}, and Group 3 is {v7, v8, v9}.
My desired output is {v1, v4, v7}, obtained in a non-brute-force way.
@m69's comment below correctly describes what I mean by a cluster: a group of vectors whose pairwise distances, as computed by some function F(), are all within some threshold t.
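For reference, the brute-force baseline over all N^M combinations is a one-liner with itertools; this sketch assumes F scores a whole pick (lower is better), and the scalar "vectors" in the toy example are invented stand-ins:

```python
from itertools import product

def best_pick(groups, F):
    """Exhaustive search: the one-per-group combination minimizing F."""
    return min(product(*groups), key=F)

# Toy stand-in: 1-D "vectors", F = spread of the chosen values.
groups = [[1.0, 7.0], [1.2, 9.0], [0.8, 5.0]]
print(best_pick(groups, lambda pick: max(pick) - min(pick)))  # -> (1.0, 1.2, 0.8)
```

Anything faster has to exploit structure in F; for an arbitrary black-box F there is no obvious optimal substructure for dynamic programming to use.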

Choose subset of points with geometry closest to original
I have a list of n (n < 500) positive integers representing an elevation profile. I need to choose at most m (m < 255) of the points to make the new geometry as similar to the original one as possible. For input

    [10, 21, 15, 2, 8, 35, 94, 223, 370, 575, 701, 661, 592, 356] and m = 8

I want to return

    [10, 0, 0, 0, 8, 0, 94, 0, 370, 575, 701, 0, 592, 356]

(0 means that we skip that number), because when we connect the kept points by lines we get the geometry

    [10.0, 9.5, 9.0, 8.5, 8.0, 51.0, 94.0, 232.0, 370.0, 575.0, 701.0, 646.5, 592.0, 356.0]

the errors for the points are

    [0.0, 11.5, 6.0, 6.5, 0.0, 16.0, 0.0, 9.0, 0.0, 0.0, 0.0, 14.5, 0.0, 0.0]

so the maximum error is 16.
I tried a dynamic programming approach where dp[i][j] was the solution for the array starting no earlier than position i and using no more than j elements. To compute it, for every k from i to n I compute the maximum error if k is the first kept element, and take the maximum of that and dp[k + 1][j - 1].
Can we spend O(1) time per k to calculate the maximum distance from points [i .. k - 1] to the line connecting points i - 1 and k? Does anyone have an idea how to solve the whole problem in O(n^2)?
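The recurrence described here can be prototyped directly, without the O(1)-per-k speedup, as a slow but simple baseline; the helper names below are made up, and seg_error recomputes chord distances naively, so this runs in roughly O(n^3 * m):

```python
from functools import lru_cache

def max_profile_error(heights, m):
    """Smallest achievable maximum error keeping at most m points
    (endpoints included); an unoptimized version of the dp[i][j] idea."""
    n = len(heights)

    def seg_error(a, b):
        # Max deviation of the points strictly between a and b
        # from the straight chord connecting a and b.
        return max((abs(heights[k]
                        - (heights[a] + (heights[b] - heights[a])
                           * (k - a) / (b - a)))
                    for k in range(a + 1, b)), default=0.0)

    @lru_cache(maxsize=None)
    def dp(i, j):
        # Best max-error for the suffix starting at kept point i,
        # with j kept points still allowed (i itself counts as one).
        if i == n - 1:
            return 0.0
        if j <= 1:
            return float("inf")  # still need to keep the last point
        return min(max(seg_error(i, k), dp(k, j - 1))
                   for k in range(i + 1, n))

    return dp(0, m)
```

Getting to O(n^2) then amounts to making the inner max-distance query amortized O(1), e.g. by exploiting monotonicity of the error as k advances.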
Calculating importance of independent variable in explaining variance of dependent variable in linear regression
I am working on a Media Mix Modeling (MMM) project where I have to build a linear model predicting traffic from various spends as input variables. The linear model equation I obtained is:
Traffic = 1918 + 0.08*TV_Spend + 0.01*Print_Spend + 0.05*Display_spend
Now I want to calculate two things which I don't know how to do: (1) How much does each variable contribute to explaining the variance of traffic? (2) What percentage of total traffic is due to each independent variable?
Please suggest how both of these can be calculated.
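For question (2), one common decomposition simply attributes coef * spend out of the fitted total (question (1) is a different problem, usually tackled with standardized coefficients or relative-importance methods). A sketch using the equation quoted above, with entirely hypothetical spend values:

```python
# Coefficients from the fitted equation in the post; spends are made up.
intercept = 1918
coefs = {"TV_Spend": 0.08, "Print_Spend": 0.01, "Display_spend": 0.05}
spends = {"TV_Spend": 10000, "Print_Spend": 5000, "Display_spend": 8000}

# Fitted traffic = intercept + sum of per-channel contributions.
traffic = intercept + sum(coefs[k] * spends[k] for k in coefs)
shares = {k: 100 * coefs[k] * spends[k] / traffic for k in coefs}

print(traffic)  # -> 3168.0
print(shares)
```

Note the intercept (base traffic) absorbs the remainder, so channel shares will not sum to 100%.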

How do I do a regression analysis with panel data in R?
So I'm a noob at R, it's been more than a year since I've used it, and I seem to have forgotten a lot... :(
I have a panel data that includes different countries with observations from 2005, 2010, and 2015 that looks like this:
      Location Year Health_Spending Total NCD Deaths_male Total NCD Deaths_female
    1 CAN      2005        3282.454                 101.4                   102.7
    2 CAN      2010        4225.189                 105.5                   107.3
    3 CAN      2015        4632.837                 109.2                   110.2
    4 ESP      2005        2126.553                 179.9                   170.4
    5 ESP      2010        2882.912                 180.6                   170.6
    6 ESP      2015        3175.457                 183.1                   180.8
I'm trying to run a regression analysis with Health_Spending as Y, and Total NCD Deaths_male & Total NCD Deaths_female as X1 and X2.
I've been looking up and it seems like plm package is used a lot to analyze panel data in R, but I'm having trouble figuring out how to use it.
Can a kind soul help me out and guide me on what I need to do?
(here's a dput version of my data just in case)
structure(list(Location = c("CAN", "CAN", "CAN", "ESP", "ESP", "ESP", "GBR", "GBR", "GBR", "ISR", "ISR", "ISR", "JPN", "JPN", "JPN", "KOR", "KOR", "KOR", "MEX", "MEX", "MEX", "NLD", "NLD", "NLD", "NOR", "NOR", "NOR", "POL", "POL", "POL", "TUR", "TUR", "TUR", "USA", "USA", "USA"), Year = c(2005L, 2010L, 2015L, 2005L, 2010L, 2015L, 2005L, 2010L, 2015L, 2005L, 2010L, 2015L, 2005L, 2010L, 2015L, 2005L, 2010L, 2015L, 2005L, 2010L, 2015L, 2005L, 2010L, 2015L, 2005L, 2010L, 2015L, 2005L, 2010L, 2015L, 2005L, 2010L, 2015L, 2005L, 2010L, 2015L), Health_Spending = c(3282.454, 4225.189, 4632.837, 2126.553, 2882.912, 3175.457, 2331.136, 3040.114, 4071.806, 1768.952, 2032.725, 2646.915, 2463.725, 3205.216, 4428.349, 1183.438, 1895.699, 2481.587, 730.816, 911.351, 1037.424, 3454.707, 4633.738, 5148.399, 3980.768, 5162.669, 6239.435, 806.974, 1352.424, 1687.009, 582.888, 871.677, 1028.911, 6443.02, 7939.798, 9491.4 ), `Total NCD Deaths_male` = c("101.4", "105.5", "109.2", "179.9", "180.6", "183.1", "245.8", "242.0", "249.0", "16.7", "16.8", "18.0", "460.3", "503.7", "543.2", "105.7", "110.2", "118.3", "194.7", "230.7", "257.5", "58.9", "58.6", "63.2", "17.4", "17.5", "17.1", "172.7", "175.1", "175.9", "185.3", "197.4", "211.8", "1024.9", "1061.6", "1159.5"), `Total NCD Deaths_female` = c("102.7", "107.3", "110.2", "170.4", "170.6", "180.8", "268.2", "259.0", "264.1", "17.5", "17.4", "18.7", "405.0", "458.9", "528.4", "92.9", "93.3", "102.2", "181.4", "214.2", "235.5", "62.1", "62.6", "67.7", "18.4", "18.8", "18.2", "163.1", "168.6", "174.6", "150.3", "162.6", "181.0", "1111.6", "1115.5", "1183.4")), .Names = c("Location", "Year", "Health_Spending", "Total NCD Deaths_male", "Total NCD Deaths_female" ), class = "data.frame", row.names = c(NA, 36L))

Average Case Analysis of Sequential Search with Geometric Probability Distribution
I was kind of aware of how to get the average running time under a uniform distribution. Say, for example, we have 6 array elements:

    | 1/6 | 1/6 | 1/6 | 1/6 | 1/6 | 1/6 |

Above is the array with the uniform probability distribution of the search element being positioned at each subscript in the array.
So getting the average running time in a uniform distribution will be like the solution below:
T(n) = (1/6)*1 + (1/6)*2 + (1/6)*3 + (1/6)*4 + (1/6)*5 + (1/6)*6 = (1/6) * ( 1 + 2 + 3 + 4 + 5 + 6 ) = 3.5
or when in express in n terms:
T(n) = (1/n) * ((n(n+1))/2) = (n+1) / 2 = ϴ(n)
But what about the average-case number of key comparisons in sequential search under a geometric probability distribution?
Example:
Prob(target X is in the j-th position) = 1/(2^(j+1)), where j = 0, 1, 2, 3, 4, 5, 6, ...

    | 1/(2^(0+1)) | 1/(2^(1+1)) | 1/(2^(2+1)) | 1/(2^(3+1)) | 1/(2^(4+1)) | 1/(2^(5+1)) |
Then
T(j) = ((1/2)* 1) + ((1/4)* 2) + ((1/8)* 3) + ((1/16)* 4) + ((1/32)* 5) + ((1/64)* 6) = .5 + .25(2) + .125(3) + .0625(4) + .03125(5) + .015625(6) = .5 + .5 + .375 + .25 + .15625 + .09375 = 1.875
I don't know how to express it in closed form:
T(j) = ?
What is the upper bound O(j)? The lower bound Ω(j)? The tight bound ϴ(j)?
Any help or ideas , will be very much appreciated.
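One way to close the sum (a sketch using the standard identity for a shifted geometric series):

```latex
T(n) \;=\; \sum_{j=0}^{n-1} (j+1)\,\frac{1}{2^{\,j+1}}
     \;=\; 2 - \frac{n+2}{2^{n}}
```

For n = 6 this gives 2 - 8/64 = 1.875, matching the hand computation above. Since 1/2 <= T(n) < 2 for every n >= 1, the average number of comparisons is bounded by constants: O(1), Ω(1), and therefore ϴ(1), in sharp contrast with the (n+1)/2 = ϴ(n) average of the uniform case.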