Why is my SUMPRODUCT formula showing a #VALUE! error?
I receive a #VALUE! error while using the SUMPRODUCT formula. The main idea behind this document is to track the actuals on a daily basis using the following conditions:
 If the date is equal to the one specified in cell E2
 If the team name found in ranges E18:E46 or E50:E52 is the same as listed in the range C4:C8
 Sum the values found for those specific columns
Added the document for reference Excel File
I can't figure out what's wrong. Note that if I manually fill in random values in the table, the formula works.
1 answer

Because you have empty values in your return matrix, the multiplication fails. Pass the data range as a separate argument so SUMPRODUCT treats those cells as 0:
=SUMPRODUCT((E50:E52=C5)*(G17:AH17=E2),(G50:AH52))
See also questions close to this topic

adding elements to certain indexes in an array based on user input from a form (HTML/Javascript)
So I have an array with 23 indexes. I have 23 forms for user input as well. Each form should add to a certain index in the array after the user enters an input. I try to get the input with the
onkeyup
function. For example, the first form should only add values to the first index in the array, and so on. What's the best way to do it? This is my array and function (which doesn't work):
var dps = [];
function a() {
    dps = document.getElementById().value;
}
These are my inputs:
<input type="number" class="form-control" id="skill1" onkeyup="a()">
<input type="number" class="form-control" id="skill2" onkeyup="a()">
<input type="number" class="form-control" id="skill3" onkeyup="a()">
.
.
.
<input type="number" class="form-control" id="skill23" onkeyup="a()">

How to attach the previous item to the next in array?
I want to add the previous value to each array item except the first one. Using the following code, the result is correct: input is ['python', 'jieba'], output is [ 'python', 'python jieba' ].
var config = {keywords: ['python', 'jieba']};
var keywords = config.keywords;
for (keyword in keywords) {
    if (keyword == 0) {
    } else {
        keywords[keyword] = keywords[keyword - 1] + " " + keywords[keyword];
        console.log(keywords);
    }
}
But if I use only an if statement, the code looks like this:
var config = {keywords: ['python', 'jieba']};
var keywords = config.keywords;
for (keyword in keywords) {
    if (keyword !== 0) {
        keywords[keyword] = keywords[keyword - 1] + " " + keywords[keyword];
        console.log(keywords);
    }
}
The return is wrong:
[ 'undefined python', 'jieba' ]
[ 'undefined python', 'undefined python jieba' ]
Is the if statement incorrectly written?

Calculate mean and quantile from a list of 3d arrays
I would like to calculate the mean and quantile for a list of 3d arrays, while skipping NA values.
I've tried this solution, Mean of a list of 3D arrays in R, but I don't know how to calculate quantiles or apply NA actions.
x <- array(1:24, dim = 2:4)
y <- array(24:49, dim = 2:4)
z <- array(50:74, dim = 2:4)
x[1,1,1] <- NA
myL <- list(x, y, z)
Reduce('+', myL) / length(myL)
Actual:
, , 1
     [,1] [,2] [,3]
[1,]   NA   27   29
[2,]   26   28   30

, , 2
     [,1] [,2] [,3]
[1,]   31   33   35
[2,]   32   34   36

, , 3
     [,1] [,2] [,3]
[1,]   37   39   41
[2,]   38   40   42

, , 4
     [,1] [,2] [,3]
[1,]   43   45   47
[2,]   44   46   48
Desired:
, , 1
     [,1] [,2] [,3]
[1,]   24   27   29
[2,]   26   28   30

, , 2
     [,1] [,2] [,3]
[1,]   31   33   35
[2,]   32   34   36

, , 3
     [,1] [,2] [,3]
[1,]   37   39   41
[2,]   38   40   42

, , 4
     [,1] [,2] [,3]
[1,]   43   45   47
[2,]   44   46   48
And, a quantile method. Thank you.
UPDATE:
I've tried this:
a <- array(unlist(myL), dim = c(dim(myL[[1]]), length(myL)))
a.mean <- apply(a, 1:3, mean, na.rm = TRUE)
a.quantile <- apply(a, 1:3, quantile, 0.95, na.rm = TRUE)
But this is very slow over my array ([162, 200, 2190, 3]).
UPDATE 2:
Ok, this works for the mean calculation:
rowMeans(do.call(cbind, myL), na.rm = TRUE)
But, I do not know how to do this for the quantile.

How can I get a running formula with a dynamic range that "starts over" at a certain row condition?
I've got running sales numbers, and I'm trying to track win/loss percentage over time to calculate expectancy.
First step was to take the raw data and get a percentage over the number of sales. That was pretty easy with a
COUNTIF
The problem I'm having is how to have the range for the formula "start over" when the account number changes. I'd prefer to just dump the data into my named table without having to manually reset the formula each time I get a new data dump.

How Do I Automate Multiple Text Filter Contains Sequences and Add The Text Value To The Column To The Right?
I've been trying to implement Excel VBA at work. I have to manually categorise each keyword into categories, and my current process is a simple "text filter contains", then manually adding the category to all cells (GIF to demonstrate at the bottom of the post).
The community has helped me get this far with my VBA code. I'm trying to loop through a range C2:C3 (freehold and leasehold) and then return the value freehold or leasehold in column B next to the relevant keyword.
I'm completely stuck on why this isn't working and I would love a hand.
Here is the excel spreadsheet I'm using to test my macro on
Sub LoopRange()
    Dim lastrow, i As Variant
    lastrow = Range("A" & Rows.Count).End(xlUp).Row
    Dim rCell As Range
    Dim rRng As Range
    Set rRng = Sheet1.Range("C2:C3")
    For Each rCol In rRng.Columns
        For Each rCell In rCol.Rows
            Debug.Print rCell.Address, rCell.Value
        Next rCell
    Next rCol
    For i = 2 To lastrow
        If Range("A" & i).Value Like "*rCell.Value*" Or Range("A" & i).Value Like "*rCell.Value" Or Range("A" & i).Value Like "rCell.Value*" Then
            Range("B" & i).Value = "rCell.Value"
        End If
    Next i
End Sub
There are usually another 20-40 terms just like freehold and leasehold; that is why I need to use a loop-through sequence.
P.S. Thank you to those who already replied; you guys have been immensely helpful already and I can't wait to improve my skills and start giving back to this community.
Current process of manually adding the keyword categorisation.
Thanks again I really appreciate it guys!

Excel - Formula reference to cell on the same row
I'm trying to build a simple formula: if the cell on the same row as the current cell, but in column J, is either =1 or empty, then the result is 1, else 0.
The part about =1 works; the part about ="" does not, for some reason.
Here is my formula:
=IF(OR("J"&ROW()=1,"J"&ROW()=""),1,0)
Can anyone help me find out why "J"&ROW()="" returns FALSE, even if it is clearly true? The "J"&ROW()=1 returns TRUE if the target cell is 1.
Another thing I tested is "J"&ROW()=J50, where 50 is the actual row number, and this also returned FALSE, which does not make any sense to me.

Plotting points with matplotlib over base map error
I am having trouble overlaying scatter points on my Basemap projection of data. I have a gridded set of sea surface temperature data and am trying to plot scatter points over the gridded data, but the scatter points are not showing up. I keep getting the error "ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()" from
pts = m.scatter(x,y, marker='o', s=5, c=ts, cmap='plasma',latlon=True)
the above line in my code. I think it is coming from latlon=True. Any help would be appreciated! The datasets I am working with are in netCDF file format.
from mpl_toolkits.basemap import Basemap, shiftgrid, cm
import numpy as np
import matplotlib.pyplot as plt
from netCDF4 import Dataset
import matplotlib as mpl

samosdata = Dataset("name of dataset", "r", format="NETCDF4")
d = Dataset("name of data set", "r", format="NETCDF4")
sst = d.variables["analysed_sst"][:][0, ::1, :]
lon = d.variables["lon"][:]
lat = d.variables["lat"][::1]
lats = samosdata.variables['lat']
lons = samosdata.variables['lon']
time = samosdata.variables['time']
ts = samosdata.variables['TS']
ts = np.array(ts)
lons = np.array(lons)
lats = np.array(lats)

fig = plt.figure()
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
m = Basemap(projection='mill', llcrnrlat=-80, urcrnrlat=80,
            llcrnrlon=-180, urcrnrlon=180, lat_ts=20, resolution='c')
nx = int((m.xmax - m.xmin)/11113.2); ny = int((m.ymax - m.ymin)/11113.2)
sst = m.transform_scalar(sst, lon, lat, nx, ny)
im = m.imshow(sst, interpolation="none")

x, y = m(list(lons), list(lats))
pts = m.scatter(x, y, marker='o', s=5, c=ts, cmap='plasma', latlon=True)
plt.colorbar(pts)
m.drawcoastlines()
parallels = np.arange(-90, 90, 30)
meridians = np.arange(-180, 180, 60)
m.drawparallels(parallels, labels=[1, 0, 0, 1])
m.drawmeridians(meridians, labels=[1, 0, 0, 1])
cb = m.colorbar(im, "right", size="5%", pad="2%")
ax.set_title("SST 2010-01-01")
plt.show()
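A likely source of the error: latlon=True tells Basemap that the coordinates are geographic lon/lat in degrees, but here x and y are already projected map coordinates from m(...), so Basemap ends up testing a whole coordinate array in a boolean context. A commonly suggested fix (an assumption, since I can't run against this dataset) is to drop latlon=True when passing pre-projected x, y. The underlying NumPy behavior can be reproduced without Basemap:

```python
import numpy as np

coords = np.array([10.0, 200.0, -50.0])

# Testing a whole array in a boolean context raises the ambiguity error:
try:
    if coords > 180:  # `coords > 180` is an array of booleans, not one bool
        pass
except ValueError as err:
    print(err)  # "The truth value of an array with more than one element is ambiguous..."

# Explicit reductions make the intent unambiguous:
print((coords > 180).any())  # True: at least one value exceeds 180
print((coords > 180).all())  # False: not every value exceeds 180
```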

How to fix 'ValueError: The truth value of an array with more than one element is ambiguous.' when comparing objects in dictionary?
I'm trying to check if a specific class object, here referred to as
new_state
, is present as a key in a dictionary. However, when I run the command
if new_state not in dictionary:
, I get the following error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
More specifically, I want to store a bunch of unique objects (called 'states') as keys in a dictionary (here initialized as
self.Q = dict()
), and to each key store a corresponding list as the dictionary value. The states are objects of the
class State:
, whose attributes are input matrices, and are defined as:
class State:
    """Defines the current state of the agent."""
    def __init__(self, grid):
        self.grid = grid

    def __eq__(self, other):
        """Override the default Equals behaviour."""
        if isinstance(other, self.__class__):
            return self.grid == other.grid
        return False

    def __ne__(self, other):
        """Override the default Unequal behaviour."""
        return self.grid != other.grid
At some point in the program, I want to check if the object
new_state
is in the dictionary
self.Q
, and if it isn't, then a list of random numbers is to be added to the dictionary with
new_state
as key:
new_state = State(cur_env)
if new_state not in self.Q:
    self.Q[new_state] = np.random.rand(len(ACTIONS))  # Add list of random numbers
return new_state, a
And here is where the error occurs. What's going on here? I don't understand what the error means, and I haven't been able to find a similar post that covers this. This code is part of a simple implementation of a Reinforcement Learning problem (I'll include the full code below). An agent is supposed to move from point A to B on a grid using a Q-learning algorithm, which is saved as a matrix in the agent's current state. My code is loosely based on the one found in this article https://medium.com/@curiousily/solving-an-mdp-with-q-learning-from-scratch-deep-reinforcement-learning-for-hackers-part-1-45d1d360c120, however, they don't seem to run into this problem and I cannot see what the difference would be.
# Reinforcement learning
import numpy as np
import random as rnd
import matplotlib.pyplot as plt  # needed for the plots at the end
from copy import deepcopy

grid_size = 4
m_A = 0              # Start coordinate in matrix
n_A = 0              # Start coordinate in matrix
m_B = grid_size - 1  # End coordinate
n_B = grid_size - 1  # End coordinate
ACTIONS = ['Right', 'Left', 'Up', 'Down']
eps = 0.1
gamma = 0.7
alpha = 1

class State:
    """Defines the current state of the agent."""
    def __init__(self, grid):
        self.grid = grid

    def __eq__(self, other):
        """Override the default Equals behaviour."""
        if isinstance(other, self.__class__):
            return self.grid == other.grid
        return False

    def __ne__(self, other):
        """Override the default Unequal behaviour."""
        return self.grid != other.grid

    def __hash__(self):
        return hash(str(self.grid))

terminal_grid = np.zeros((grid_size, grid_size), dtype = int)
terminal_grid[m_B][n_B]
terminal_state = State(terminal_grid)

class Robot:
    """Implements agent."""
    def __init__(self, row = m_A, col = n_A, cargo = False):
        self.m = row        # Robot position in grid (row)
        self.n = col        # Robot position in grid (col)
        self.carry = cargo  # True if robot carries cargo, False if not
        self.Q = dict()
        self.Q[terminal_state] = [0, 0, 0, 0]

    def move_robot(self, state):
        """Moves the robot according to the given action."""
        m = self.m  # Current row
        n = self.n  # Current col
        p = []      # Probability distribution
        for i in range(len(ACTIONS)):
            p.append(eps/4)
        if self.carry is False:  # If the robot is moving from A to B
            Qmax = max(self.Q[state])
            for i in range(len(p)):
                if self.Q[state][i] == Qmax:
                    p[i] = 1 - eps + eps/4
                    break  # Use if number of episodes is large
        cur_env = deepcopy(state.grid)
        # cur_env = state.grid
        cur_env[m][n] = 0
        action = choose_action(p)
        if action == 'Right':
            if n + 1 >= grid_size or cur_env[m][n+1] == 1:
                Rew = -5  # Reward -5 if we move into wall or another agent
            else:
                n += 1
                Rew = -1  # Reward -1 otherwise
            a = 0  # Action number
        elif action == 'Left':
            if n - 1 < 0 or cur_env[m][n-1] == 1:
                Rew = -5
            else:
                n -= 1
                Rew = -1
            a = 1
        elif action == 'Up':
            if m - 1 < 0 or cur_env[m-1][n] == 1:
                Rew = -5
            else:
                m -= 1
                Rew = -1
            a = 2
        elif action == 'Down':
            if m + 1 >= grid_size or cur_env[m+1][n] == 1:
                Rew = -5
            else:
                m += 1
                Rew = -1
            a = 3
        m = m % grid_size
        n = n % grid_size
        self.m = m
        self.n = n
        cur_env[m][n] = 1
        # print(cur_env)
        new_state = State(cur_env)
        if new_state not in self.Q:
            self.Q[new_state] = np.random.rand(len(ACTIONS))  # Add list of random numbers
        return new_state, a

def choose_action(prob):  # Given a probability distribution, chooses an action!
    """Defines policy to follow."""
    action = np.random.choice(ACTIONS, p = prob)  # Chooses an action at random
    return action

def episode(robot):
    """Simulation of one episode."""
    # Initialize E, S
    E = np.zeros((grid_size, grid_size), dtype = int)  # Initializes the environment, E
    E[m_A][n_A] = 1  # Initializes position of robot
    S = State(E)     # Initializes state of robot
    robot.Q[S] = np.random.rand(len(ACTIONS))
    # print(S.grid)
    # print(robot.Q[S])
    count = 0
    while robot.carry is False:
        print('Carry == False')
        S_new, action_number = robot.move_robot(S)
        print('New state: ', S_new.grid)
        m_new = robot.m
        n_new = robot.n
        if m_new != m_B or n_new != n_B:
            R = -1
        else:
            R = 5
            robot.carry = True  # Picks up cargo
        robot.Q[S][action_number] += alpha*(R + gamma*max(robot.Q[S_new]) - robot.Q[S][action_number])
        S = S_new
        # print(E)
        # print()
        count += 1
    return count

nepisodes = []
step_list = []

def simulation():
    """Iterates through all episodes."""
    r1 = Robot()
    for i in range(400):
        nsteps = episode(r1)
        nepisodes.append(i+1)
        step_list.append(nsteps)
        r1.m = m_A
        r1.n = n_A
        print("End of episode!")
        print(nsteps)

simulation()
plt.plot(nepisodes, step_list, '.')
plt.show()
Any help would be greatly appreciated! Thanks in advance.
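The membership test new_state not in self.Q invokes State.__eq__, and self.grid == other.grid on NumPy arrays returns an elementwise boolean array rather than a single bool, which is exactly what the ValueError complains about. A minimal sketch of a fix, assuming np.array_equal captures the intended equality (this swaps in my own equality check, not the original author's):

```python
import numpy as np

class State:
    """Minimal stand-in for the question's State class."""
    def __init__(self, grid):
        self.grid = grid

    def __eq__(self, other):
        # np.array_equal collapses the elementwise comparison to a single bool,
        # so dict lookups no longer trigger the ambiguity error
        return isinstance(other, State) and np.array_equal(self.grid, other.grid)

    def __hash__(self):
        # Must agree with __eq__: states with equal grids hash identically
        return hash(str(self.grid))

s1 = State(np.zeros((4, 4), dtype=int))
s2 = State(np.zeros((4, 4), dtype=int))

Q = {s1: [0, 0, 0, 0]}
print(s2 in Q)  # True: equal grids are treated as the same key
```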

How to fix "The truth value of an array with more than one element is ambiguous" error when finding objects in dictionary?
I'm trying to implement a simple Reinforcement Learning algorithm. Basically, the agent is supposed to move from point A of a square grid to point B using Q-learning. I've gotten this to work previously using a simpler model, but now I need to refine it a bit. Basically, I want to store the Q-values generated by the algorithm in a dictionary called (self.)Q, where each key is a state of the agent and each dictionary value is a list with Q-values corresponding to that state. The states are objects of the class State, which has the grid matrix as attribute. However, when I want to check if a state (new_state) is already in the dictionary self.Q (see code below), I get the following error:
The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Why does this happen? I'm basing my code on this article https://medium.com/@curiousily/solving-an-mdp-with-q-learning-from-scratch-deep-reinforcement-learning-for-hackers-part-1-45d1d360c120, in which they don't seem to run into this problem. I think this has something to do with the fact that the states are separate objects, but I do not know how to solve this.
import numpy as np
import random as rnd
from copy import deepcopy

grid_size = 4
m_A = 0              # Start coordinate in matrix
n_A = 0              # Start coordinate in matrix
m_B = grid_size - 1  # End coordinate
n_B = grid_size - 1  # End coordinate
ACTIONS = ['Right', 'Left', 'Up', 'Down']
eps = 0.1
gamma = 0.7
alpha = 1

class State:
    """Defines the current state of the agent."""
    def __init__(self, grid):
        self.grid = grid

    def __eq__(self, other):
        return isinstance(other, State) and self.grid == other.grid

    def __hash__(self):
        return hash(str(self.grid))

terminal_grid = np.zeros((grid_size, grid_size))
terminal_grid[m_B][n_B]
terminal_state = State(terminal_grid)

class Robot:
    """Implements agent."""
    def __init__(self, row = m_A, col = n_A, cargo = False):
        self.m = row        # Robot position in grid (row)
        self.n = col        # Robot position in grid (col)
        self.carry = cargo  # True if robot carries cargo, False if not
        self.Q = dict()
        self.Q[terminal_state] = [0, 0, 0, 0]

    def move_robot(self, state):
        """Moves the robot according to the given action."""
        m = self.m  # Current row
        n = self.n  # Current col
        p = []      # Probability distribution
        for i in range(len(ACTIONS)):
            p.append(eps/4)
        if self.carry is False:  # If the robot is moving from A to B
            Qmax = max(self.Q[state])
            for i in range(len(p)):
                if self.Q[state][i] == Qmax:
                    p[i] = 1 - eps + eps/4
                    break  # Use if number of episodes is large
        cur_env = deepcopy(state.grid)
        # cur_env = state.grid
        cur_env[m][n] = 0
        action = choose_action(p)
        if action == 'Right':
            if n + 1 >= grid_size or cur_env[m][n+1] == 1:
                Rew = -5  # Reward -5 if we move into wall or another agent
            else:
                n += 1
                Rew = -1  # Reward -1 otherwise
            a = 0  # Action number
        elif action == 'Left':
            if n - 1 < 0 or cur_env[m][n-1] == 1:
                Rew = -5
            else:
                n -= 1
                Rew = -1
            a = 1
        elif action == 'Up':
            if m - 1 < 0 or cur_env[m-1][n] == 1:
                Rew = -5
            else:
                m -= 1
                Rew = -1
            a = 2
        elif action == 'Down':
            if m + 1 >= grid_size or cur_env[m+1][n] == 1:
                Rew = -5
            else:
                m += 1
                Rew = -1
            a = 3
        m = m % grid_size
        n = n % grid_size
        self.m = m
        self.n = n
        cur_env[m][n] = 1
        # print(cur_env)
        new_state = State(cur_env)
        if new_state not in self.Q:  # Check if state is in dictionary
            self.Q[new_state] = np.random.rand(len(ACTIONS))
        return new_state, a

def choose_action(prob):
    """Defines policy to follow."""
    action = np.random.choice(ACTIONS, p = prob)
    return action

def episode(robot):
    """Simulation of one episode."""
    # Initialize E, S
    E = np.zeros((grid_size, grid_size), dtype = int)
    E[m_A][n_A] = 1  # Initializes position of robot
    S = State(E)     # Initializes state of robot
    robot.Q[S] = np.random.rand(len(ACTIONS))
    count = 0
    while robot.carry is False:
        S_new, action_number = robot.move_robot(S)
        m_new = robot.m
        n_new = robot.n
        if m_new != m_B or n_new != n_B:
            R = -1
        else:
            R = 5
            robot.carry = True  # Picks up cargo
        robot.Q[S][action_number] += alpha*(R + gamma*max(robot.Q[S_new]) - robot.Q[S][action_number])
        S = S_new
        # print(E)
        # print()
        count += 1
    return count

nepisodes = []
step_list = []

def simulation():
    """Iterates through all episodes."""
    r1 = Robot()
    for i in range(400):
        nsteps = episode(r1)
        nepisodes.append(i+1)
        step_list.append(nsteps)
        r1.m = m_A
        r1.n = n_A
        print("End of episode!")
        print(nsteps)

simulation()
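As in the previous question, the likely culprit is that __eq__ returns self.grid == other.grid, an elementwise boolean array, and the membership test then calls bool() on it. A minimal reproduction of that failure mode, assuming the grids are NumPy arrays:

```python
import numpy as np

grid_a = np.zeros((4, 4))
grid_b = np.zeros((4, 4))

cmp = grid_a == grid_b  # elementwise: a 4x4 boolean array, not a single bool
try:
    bool(cmp)  # what a dict membership test effectively does with __eq__'s result
except ValueError as err:
    print(err)  # the ambiguous-truth-value error from the question

# Collapsing the comparison to one bool avoids the error:
print(np.array_equal(grid_a, grid_b))  # True
```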