How to best loop through masked 2d array
The big goal here is image processing: Detecting edges and color in a given jpg image.
I've made a masked 2D array based on the top answer in this post: https://stackoverflow.com/a/38194016/9700646
so I have the following, which should mask all 0s in the 2D array:
color_matrix[~np.array(0)]
I'm just not sure of how to loop through this. Does somebody have an idea?
Eventually I need to develop a third matrix by looping through two 2D masked arrays.
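For reference, here is a minimal sketch of building the zero-mask with np.ma.masked_equal and looping only over the unmasked entries (the small matrix is hypothetical, standing in for the real color_matrix):

```python
import numpy as np

# hypothetical small matrix standing in for color_matrix
color_matrix = np.array([[0, 3, 0],
                         [2, 0, 1]])

# mask every 0; masked.mask is True wherever the value was 0
masked = np.ma.masked_equal(color_matrix, 0)

# iterate over the coordinates of the unmasked (non-zero) entries only
coords = list(zip(*np.nonzero(~masked.mask)))
for i, j in coords:
    value = masked[i, j]  # guaranteed non-zero here
```

Note that this skips masked cells entirely rather than iterating the full grid, which is usually the point of masking.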
Here's some code to give you an idea of what I have so far:
This first block is how I make an 'edge_matrix.' I also mask it at the end. A 'color_matrix' is made in a similar process.
def edge_matrix(canny_pic):
    #edge_matrix = [[0 for x in range(RESOLUTION_WIDTH)] for y in range(RESOLUTION_HEIGHT)]
    edge_matrix = np.zeros((RESOLUTION_HEIGHT, RESOLUTION_WIDTH, 1), dtype="uint8")
    # get Canny pic
    img = canny_pic  # the first element of canny_pic is the picture, the others are the upper and lowers
    # make matrix of edges
    for i in range(RESOLUTION_HEIGHT):
        for j in range(RESOLUTION_WIDTH):
            #print(i,j)
            color = img[i, j]
            edge_matrix[i, j] = (color[0] or color[1] or color[2]) and 1
    return edge_matrix
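As an aside, the per-pixel loop above can be collapsed into one NumPy expression. This sketch assumes the same (H, W, 3) input image and the same (H, W, 1) uint8 output shape as the loop version:

```python
import numpy as np

def edge_matrix_fast(canny_img):
    # a pixel counts as an edge if any of its channels is non-zero,
    # mirroring the (color[0] or color[1] or color[2]) and 1 expression
    return np.any(canny_img != 0, axis=2).astype("uint8")[..., np.newaxis]
```

This runs as a single vectorized pass instead of H*W Python-level iterations.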
Then I want to combine these matrices by going through the edge matrix and saying: whenever there is an edge (a 1) look at the 10 spaces around that pixel in the color_matrix and see if color was detected. if so, mark it as a 1 in the combined_matrix.
def reduce_problem(color_mat, edge_mat):
    # this function will help reduce the number of locations the object may be in by combining the colors and edges
    # create a matrix to store the possible locations of the correct object.
    # combined_matrix = [[0 for x in range(RESOLUTION_WIDTH)] for y in range(RESOLUTION_HEIGHT)]  # same size as image. 1 = a spot with both edge and color; 0 = a spot without both
    combined_matrix = np.zeros((RESOLUTION_HEIGHT, RESOLUTION_WIDTH, 1), dtype="float32")
    for i in range(10, RESOLUTION_HEIGHT - 11):  # loop through color matrix
        for j in range(10, RESOLUTION_WIDTH - 11):
            if edge_mat[i, j] == 1:
                # if this edge matrix location has an edge (1), go to the color matrix and check
                # the 10 spaces surrounding it in each direction for any color (1) value
                for k in range(i - 10, i + 10):
                    #if k >= 0 and k < RESOLUTION_HEIGHT:
                    for m in range(j - 10, j + 10):
                        #if m >= 0 and m < RESOLUTION_WIDTH:
                        if color_mat[k, m] == 1:
                            #print(k, m)
                            combined_matrix[i, j] += 1/441
    return combined_matrix
This works for detecting color and edges, but it is far too slow for real-time object detection. I was hoping that iterating with masked arrays would help speed up the process.
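For what it's worth, the neighbourhood count in reduce_problem is essentially a 21x21 box sum, which can be expressed as a single convolution. This is a sketch assuming SciPy is available and that the matrices are plain 2-D arrays (squeeze the trailing axis of size 1 first):

```python
import numpy as np
from scipy.signal import convolve2d

def reduce_problem_fast(color_mat, edge_mat):
    # 21x21 averaging kernel: each tap contributes 1/441,
    # matching the += 1/441 accumulation in the loop version
    kernel = np.ones((21, 21), dtype="float32") / 441.0
    # fraction of colored pixels in the 21x21 neighborhood of every location
    color_density = convolve2d(color_mat.astype("float32"), kernel, mode="same")
    # keep the density only where an edge was detected
    return np.where(edge_mat == 1, color_density, 0.0).astype("float32")
```

Unlike the loop version this also handles the image borders (via zero padding) rather than skipping a 10-pixel margin; restrict the output afterwards if that matters.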
See also questions close to this topic

Dataframe : each column in different plot in subplot
I have a pandas DataFrame and I want each column to be represented in its own subplot, arranged in two dimensions.
I know the default pandas subplot output is close to the desired result, but it is one-dimensional:
pallet        45   46   47   48   49   50
date
20190415     4.0  NaN  2.0  NaN  NaN  2.0
20190416     3.0  2.0  2.0  2.0  1.0  1.0
20190417     2.0  2.0  2.0  2.0  1.0  1.0
20190418     2.0  2.0  2.0  NaN  1.0  1.0
20190419     2.0  2.0  2.0  NaN  1.0  1.0
20190420     2.0  2.0  2.0  NaN  1.0  NaN
pivot.plot(subplots=True)
plt.show()
output: https://imgur.com/E61XREF.jpg
I want to be able to output each column in two-dimensional subplots with common X and Y axes. The number of columns is dynamic, so I want to put, say, 6 columns on each figure; if the number of pallets is greater than 6, open a new figure of the same shape.
So I want it to look like this: https://imgur.com/8GxWEah, but with common X and Y axes.
Thank you!
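For what it's worth, pandas' own plotting accepts a layout=(rows, cols) argument plus sharex/sharey, which seems to cover both the 2-D grid and the common axes. A sketch with hypothetical data (the column names and the 6-per-figure chunking are assumptions):

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# hypothetical pivot table standing in for the real one
pivot = pd.DataFrame(np.random.rand(6, 8),
                     columns=[str(p) for p in range(45, 53)])

cols_per_fig = 6
for start in range(0, len(pivot.columns), cols_per_fig):
    chunk = pivot.iloc[:, start:start + cols_per_fig]
    # layout arranges the subplots on a 2-D grid;
    # sharex/sharey give the common axes; each chunk opens a new figure
    chunk.plot(subplots=True, layout=(2, 3), sharex=True, sharey=True)
plt.show()
```

layout may contain more slots than there are columns, so the last partially-filled figure is fine.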

How to remove automatically added back ticks while using explode() in pyspark?
I want to add a new column with some expression as defined here (https://www.mien.in/2018/03/25/reshaping-dataframe-using-pivot-and-melt-in-apache-spark-and-pandas/#pivot-in-spark). While doing so, my explode() changes the column name that gets looked up by adding backticks (`) at the beginning and at the end of the column name, which then gives out the error:
Cannot resolve column name `Column_name` from [Column_name, Column_name2]
I tried reading the documentation and few other questions on SO but they don't address this issue.
I tried logging the different steps, in order to give the reader some clarity.
The error is at the line:
_tmp = df.withColumn("_vars_and_vals", explode(_vars_and_vals))
The output of
explode(...)
is available here (https://pastebin.com/LU9p53th). The function snippet is:
def melt_df(
        df: DataFrame, id_vars: Iterable[str], value_vars: Iterable[str],
        var_name: str = "variable", value_name: str = "value") -> DataFrame:
    """Convert :class:`DataFrame` from wide to long format."""
    print("Value name is {} and value vars is {}".format(
        value_name, value_vars
    ))
    # df2 = df2.select([col(k).alias(actual_cols[k]) for k in keys_de_cols])
    # Create array<struct<variable: str, value: ...>>
    _vars_and_vals = array(*(
        struct(lit(c).alias(var_name), col(c).alias(value_name))
        for c in value_vars))
    print("Explode: ")
    print(explode(_vars_and_vals))
    # Add to the DataFrame and explode
    _tmp = df.withColumn("_vars_and_vals", explode(_vars_and_vals))
    print("_tmp:")
    print(_tmp)
    sys.exit()
    cols = id_vars + [
        col("_vars_and_vals")[x].alias(x) for x in [var_name, value_name]]
    return _tmp.select(*cols)
Whereas the whole code is:
import sys
from datetime import datetime
from itertools import chain
from typing import Iterable

from pyspark.context import SparkContext
from pyspark.sql import (DataFrame, DataFrameReader, DataFrameWriter, Row,
                         SparkSession)
from pyspark.sql.functions import *
from pyspark.sql.functions import array, col, explode, lit, struct
from pyspark.sql.types import *

spark = SparkSession.builder.appName('navydish').getOrCreate()

last_correct_constant = 11
output_file = "april19_1.csv"
input_file_name = "input_for_aviral.csv"

def melt_df(
        df: DataFrame, id_vars: Iterable[str], value_vars: Iterable[str],
        var_name: str = "variable", value_name: str = "value") -> DataFrame:
    """Convert :class:`DataFrame` from wide to long format."""
    print("Value name is {} and value vars is {}".format(
        value_name, value_vars
    ))
    # df2 = df2.select([col(k).alias(actual_cols[k]) for k in keys_de_cols])
    # Create array<struct<variable: str, value: ...>>
    _vars_and_vals = array(*(
        struct(lit(c).alias(var_name), col(c).alias(value_name))
        for c in value_vars))
    print("Explode: ")
    print(explode(_vars_and_vals))
    # Add to the DataFrame and explode
    _tmp = df.withColumn("_vars_and_vals", explode(_vars_and_vals))
    print("_tmp:")
    print(_tmp)
    sys.exit()
    cols = id_vars + [
        col("_vars_and_vals")[x].alias(x) for x in [var_name, value_name]]
    return _tmp.select(*cols)

def getrows(df, rownums=None):
    return df.rdd.zipWithIndex().filter(
        lambda x: x[1] in rownums).map(lambda x: x[0])

df = spark.read.csv(
    input_file_name,
    header=True
)
df2 = df
for _col in df.columns:
    if _col.startswith("_c"):
        df = df.drop(_col)
        if int(_col.split("_c")[1]) > last_correct_constant:
            df2 = df2.drop(_col)
    else:
        # removes the reqd cols, keeps the messed up ones only.
        df2 = df2.drop(_col)

actual_cols = getrows(df2, rownums=[0]).collect()[0].asDict()
keys_de_cols = actual_cols.keys()
# df2 = df2.select([col(x).alias("right_" + str(x)) for x in right_cols])
df2 = df2.select([col(k).alias(actual_cols[k]) for k in keys_de_cols])

periods = []
periods_cols = getrows(df, rownums=[0]).collect()[0].asDict()
for k, v in periods_cols.items():
    if v not in periods:
        periods.append(v)
# periods = list(set(periods))

expected_columns_from_df = [
    'Value Offtake(000 Rs.)',
    'Sales Volume (Volume(LITRES))'
]
for _col in df.columns:
    if _col.startswith('Value Offtake(000 Rs.)') or _col.startswith('Sales Volume (Volume(LITRES))'):
        continue
    df = df.drop(_col)

df2 = df2.withColumn("id", monotonically_increasing_id())
df = df.withColumn("id", monotonically_increasing_id())
df = df2.join(df, "id", "inner").drop("id")

print("After merge, cols of final dataframe are: ")
for _col in df.columns:
    print(_col)

# creating a list of all constant columns
id_vars = []
for i in range(len(df.columns)):
    if i < 12:
        id_vars.append(df.columns[i])

# creating a list of Values from expected columns
value_vars = []
for _col in df.columns:
    if _col.startswith(expected_columns_from_df[0]):
        value_vars.append(_col)
value_vars = id_vars + value_vars
print("Sending this value vars to melt:")
print(value_vars)

# the name of the column in the resulting DataFrame, Value Offtake(000 Rs.)
var_name = expected_columns_from_df[0]
# final value for which we want to melt, Periods
value_name = "Periods"

df = melt_df(
    df, id_vars, value_vars,
    var_name, value_name
)
print("The final headers of the resultant dataframe are: ")
print(df.columns)
The whole error is here (https://pastebin.com/9cUupTy3).
I understand one would normally need the data, but if someone could clarify how explode() works so that the extra unwanted backticks (`) can be avoided, I can work from there.

How to solve "ValueError: Shapes must be equal rank" when I use a customized env and use baseline to do DQN?
I use a Gym environment produced by others, which can be found at gym-gomoku. When I use baselines to try to train a model, an error occurs:
ValueError: Shapes must be equal rank, but are 1 and 2 for 'deepq/Select' (op: 'Select') with input shapes: [?], [?], [?,361].
I think there is something wrong with the environment, but I can't figure out what, because training succeeds when I test other Gym environments like 'CartPole-v0'.
Thanks a lot!
here is my code:
import gym
from baselines import deepq

def callback(lcl, _glb):
    # stop training if reward exceeds 199
    is_solved = lcl['t'] > 0.9 and sum(lcl['episode_rewards'][-101:-1]) / 100 >= 0.9
    return is_solved

def main():
    env = gym.make("Gomoku19x19-v0")
    model = deepq.models.mlp([32, 16], layer_norm=True)
    act = deepq.learn(
        env,
        q_func=model,
        lr=0.01,
        max_timesteps=10000,
        print_freq=1,
        checkpoint_freq=1000
    )
    print("Saving model to Gomoku9x9.pkl")
    act.save("Gomoku9x9.pkl")
    print('Finish!')

if __name__ == '__main__':
    main()

How to update object in vuex store array?
I'm pretty new to vue/vuex/vuetify but starting to get the hang of it. I have a problem though I haven't been able to solve properly.
I have an array of "projects" in my store. When deleting and adding items to the store via mutations the changes reflect properly in subcomponents referencing the array as a property.
However, changes to items in the array does not reflect even though I can see that the array in the store is updated.
The only way I got it to "work" with an update action was to either:
 remove the project from the array in the store and then add it back, or
 use code that does essentially the same thing, like so:
state.categories = [
  ...state.categories.filter(element => element.id !== id),
  category
]
But the problem with the above two methods is that the order of the array gets changed, and I would really like to avoid that.
So basically, how would I rewrite my mutation method below to make the state reflect to subcomponents and keep the order of the array?
updateProject(state, project) {
  var index = state.projects.findIndex(function (item, i) {
    return item.id === project.id;
  });
  state.projects[index] = project;
}

Notice: Undefined offset: different rows
I have two arrays of different lengths; when an image's name does not exist, the name should be "noname":

$imgs = array('img 1','img 2','img 3','img 4','img 5','img 6');
$imgNames = array('name 1','name 2','name 3');

foreach ($imgs as $files => $img) {
    echo 'Image: '.$img.' and Name: '.$imgNames[$files].'<br>';
}
I expect the output:
Image: img 1 and Name: name 1
Image: img 2 and Name: name 2
Image: img 3 and Name: name 3
Image: img 4 and Name: noname
Image: img 5 and Name: noname
Image: img 6 and Name: noname
but actual output is:
Image: img 1 and Name: name 1
Image: img 2 and Name: name 2
Image: img 3 and Name: name 3
Notice: Undefined offset: 3 in XX.php …
Image:img 4 and Name:
Notice: Undefined offset: 4 in XX.php …
Image:img 5 and Name:
Notice: Undefined offset: 5 in XX.php …
Image:img 6 and Name: 
Asymmetric axes plot
I have a 2D array which I want to use as a grid, and I want to use this 2D array to make a quiver plot.
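A minimal quiver sketch from a 2-D array (using the array's numerical gradient as the arrow components is an assumption about what you want to draw):

```python
import matplotlib.pyplot as plt
import numpy as np

# hypothetical 2-D array used as the grid
z = np.random.rand(10, 12)

# one (x, y) coordinate pair per array cell
y, x = np.mgrid[0:z.shape[0], 0:z.shape[1]]

# arrow components; here the numerical gradient of the array
v, u = np.gradient(z)

plt.quiver(x, y, u, v)
plt.show()
```

Any pair of arrays with the same shape as the grid can replace u and v as the arrow components.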

How to shift the entire 2D array
I am trying to drive an LED matrix and have an issue with shifting the whole display down. My end goal is to shift all of the rows and hopefully eventually implement a wrap around. The problem is that the first row is copied every time each row gets shifted.
The code that I used is as follows:
for (int i = (LAYERS - 1); i >= 0; i--) {
    for (int z = 0; z < BYTES; z++) {
        LED_Buffer[i+1][z] = LED_Buffer[i][z];
    }
}

OpenGL Ortho 2D Overlay is staying in perspective view?
When rendering the triangle, it does not overlay on the camera; instead it acts just like every other 3D object. It basically seems like it never goes into ortho mode.
void Overlay::DisplayMenu() {
    GraphicsFacade::RenderObject((Object*)menuP);
    int width, height;
    GraphicsFacade::GetWindowSize(&width, &height);
    orthoMat = Math::Matrix::Ortho(0, (float)width, 0, (float)height);
    glDisable(GL_DEPTH_TEST); // Disable the depth testing
    Math::Matrix mvp = Math::Matrix(orthoMat.mat * Math::Matrix(1.0f).mat);
    GraphicsFacade::RenderObjectOrtho((Object*)menuO, mvp);
    glEnable(GL_DEPTH_TEST); // Enable the depth testing
}
The Math::Matrix::Ortho is just a facade over glm::ortho.
void GraphicsFacade::RenderObjectOrtho(Object *object, const Math::Matrix MVP) {
    glBindVertexArray(object->GetVAO());
    glBindVertexArray(object->GetIBO()->id);
    SetUniform(object->GetShader(), "MVP", MVP);
    if (object->GetIBO()->componentCount != 0)
        glDrawElements(GL_TRIANGLES, object->GetIBO()->componentCount, GL_UNSIGNED_INT, (void*)0);
    else
        glDrawArrays(GL_TRIANGLES, 0, object->GetVBO()->componentCount);
    glBindVertexArray(0);
}

Out of bounds exception despite loops being in range of array length
So the program is supposed to make an odd-sized array between 3 and 11 from user input and then fill that board with a character at certain places to get patterns. Everything was going fine until I tried returning the array, which gave me two out-of-bounds exceptions even though I set my loops to be less than the dimensions. I used 5 as an example here to try to get a 5-by-5 array. Here is the main.
public static void main (String[] args) {
    int dimension = findDimension();
    char[][] array2d = new char[dimension][dimension];
    char star = '*';
    array2d = leftDiagonal(star, dimension); // Get out of bounds here
    print(array2d);
}
The method that asks for user input "findDimension()"
public static int findDimension() {
    int dimension = 0;
    Scanner keybd = new Scanner(System.in);
    do {
        System.out.print("Enter an odd integer between 3 and 11 please: ");
        dimension = keybd.nextInt();
    } while (dimension % 2 == 0);
    return dimension; // Everything seems fine here, no errors
}
Method that prints the array
public static void print(char[][] arrayParam) {
    System.out.println("");
    System.out.println(arrayParam);
    System.out.println("");
}
Method that sets the pattern "leftDiagonal"
public static char[][] leftDiagonal(char starParam, int dimenParam) {
    char[][] leftD = new char[dimenParam][dimenParam];
    for (int i = 0; i < dimenParam; i++) {
        for (int j = 0; i < dimenParam; j++) {
            leftD[i][j] = starParam; // Gets error here
        }
    }
    return leftD;
}
The output should be
* * * * *
* * * * *
* * * * *
* * * * *
* * * * *
Well technically it should be
 * * * * * 
but at the moment I just want to get any output. I was originally planning to fill all the spaces with blank spaces ' ' and then fill the ones I need in with characters but I can't even get the array to print out first. Thank you for anyone willing to help.

Using Python, get pixels inside a polygon
I have a streaming image from a camera. From this streaming image, I've created a large set of polygons with each polygon being an 'area of interest'. I have then generated a mask using the polygons and applied that mask to the streamed image so now I only "see" the masked areas.
The question I now have is: how do I figure out whether or not there is something inside my masked area (e.g. a rabbit, a person, a car, etc.)? Note that it does not matter WHAT it is, only that the space is occupied.
I have found that a Laplacian spatial filter produces a pretty good visual of something being in a polygon or not (at least visually), but I have no idea what the next step would be. My thoughts are to average the pixels in the B&W image and if they cross a threshold then the assumption is yes, there is something there.
The file containing the polygon coordinates is laid out as:
x1,y1;x2,y2;x3,y3;x4,y4,...
x1,y1;x2,y2;x3,y3;x4,y4,...
x1,y1;x2,y2;x3,y3;x4,y4,...
x1,y1;x2,y2;x3,y3;x4,y4,...
x1,y1;x2,y2;x3,y3;x4,y4,...
My code to read the polygon array is as follows:
pointlist = []
data = open(args["slots"]).read().split()
for row in data:
    tmp = []
    col = row.split(";")
    for points in col:
        xy = points.split(",")
        tmp += [[int(pt) for pt in xy]]
    pointlist += [tmp]
slots = np.asarray(pointlist)
Image masking and filtering is as follows:
img = cv2.resize(img, (480, 270))
blurred = cv2.GaussianBlur(img, (3, 3), 0)
mask = np.zeros(blurred.shape[:2], dtype="uint8")
cv2.fillPoly(mask, slots, (255, 255, 255))
masked = cv2.bitwise_and(blurred, blurred, mask=mask)
lap = cv2.Laplacian(masked, cv2.CV_64F)
lap = np.uint8(np.absolute(lap))
It would be greatly appreciated if you could give me some example code of what to do next.
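Concretely, the "average the pixels and threshold" idea from the paragraph above might look like this. It is NumPy-only (no OpenCV needed at this step), and the threshold value of 10 is a placeholder you would tune per camera and lighting:

```python
import numpy as np

def region_occupied(lap, region_mask, threshold=10.0):
    """Decide whether a masked region is occupied by thresholding the
    mean Laplacian response inside that region's mask."""
    vals = lap[region_mask > 0]   # Laplacian pixels inside the polygon
    if vals.size == 0:            # degenerate/empty mask
        return False
    return float(vals.mean()) > threshold
```

Calling this once per polygon (with a per-polygon mask from cv2.fillPoly) gives an occupied/empty flag for each area of interest.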

Generate isolated labels from segmented image
Here is the idea: after image segmentation, I would like to save/isolate each label (they have different sizes) into a smaller image, and so create my dataset for machine learning. I can easily code this in Python, but I'm sure there is already a function for this.
If you have any ideas for a function or library,
Thanks
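Assuming SciPy is available, ndimage.find_objects returns one tight bounding-box slice per label id, which seems to be exactly the cropping step; a sketch:

```python
import numpy as np
from scipy import ndimage

def crop_labels(label_img):
    """Return a tight binary crop for every label in a labelled image."""
    crops = []
    # find_objects gives one bounding-box slice per label id (1, 2, ...)
    for lbl, sl in enumerate(ndimage.find_objects(label_img), start=1):
        if sl is None:  # label id absent from the image
            continue
        crops.append((label_img[sl] == lbl).astype(np.uint8))
    return crops
```

Each crop is a small binary mask of that label alone, ready to be saved as an individual training image.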

Selecting from one dataframe using values from a second dataframe
I have two dataframes with the same index and columns:
In:
import pandas as pd
import numpy as np
import random

df1 = pd.DataFrame({'A': [random.random(), random.random(), random.random()],
                    'B': [random.random(), random.random(), random.random()],
                    'C': [random.random(), random.random(), random.random()]})
df2 = pd.DataFrame({'A': [random.randint(0, 10), random.randint(0, 10), random.randint(0, 10)],
                    'B': [random.randint(0, 10), random.randint(0, 10), random.randint(0, 10)],
                    'C': [random.randint(0, 10), random.randint(0, 10), random.randint(0, 10)]})
df1
Out:
          A         B         C
0  0.424566  0.054485  0.830993
1  0.673692  0.754941  0.621544
2  0.890594  0.805776  0.878123
In:
df2
Out:
    A  B  C
0   9  9  3
1   4  6  6
2  10  2  9
I want to select values from df1 depending on the corresponding value in df2 and return them as an array. E.g. selecting by the value 6 in the example above would return [0.754941, 0.621544].
I have looked at mask but can't see how to apply a mask from one DataFrame to the second.
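A boolean comparison on df2 aligns cell-for-cell with df1, so indexing df1's underlying array with it pulls out exactly those cells; a sketch with the example values hard-coded:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'A': [0.424566, 0.673692, 0.890594],
                    'B': [0.054485, 0.754941, 0.805776],
                    'C': [0.830993, 0.621544, 0.878123]})
df2 = pd.DataFrame({'A': [9, 4, 10],
                    'B': [9, 6, 2],
                    'C': [3, 6, 9]})

# (df2 == 6) is a boolean frame with the same shape as df1;
# indexing df1's ndarray with it returns the selected cells as a 1-D array
result = df1.values[(df2 == 6).values]
```

The result comes back in row-major order of the matching positions.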
How do I fix this bug with arctanx function in c++
I've recently been given a problem by my teacher about a mathematical formula called the arctan formula. The question is:
According to Arctanx(x) = x - ((x ^ 3) / 3) + ((x ^ 5) / 5) - ((x ^ 7) / 7) + ... and π = 6 * arctanx(1 / sqrt(3)), create the function arctanx(x) and find pi, stopping when the last "number" (a term of the form ((x ^ y) / y)) is the one right before the terms fall below 10 ^ -6; that is, no further term can be added without being smaller than 10 ^ -6.
I tried to code it out, but there is a bug in it.
#include <iostream>
#include <math.h>
using namespace std;

float arctanx() {
    long double pi = 3.1415926535897;
    int i = 0; // 0 = +, 1 = -
    float sum = 0;
    float lsum;
    for (int y = 1; y < pi; y += 2) {
        if (lsum > 0.000001) {
            if (i == 0) {
                lsum = pow(1 / sqrt(3), y) / y;
                sum += pow(1 / sqrt(3), y) / y;
                i++;
            } else if (i == 1) {
                lsum = pow(1 / sqrt(3), y) / y;
                sum -= pow(1 / sqrt(3), y) / y;
                i--;
            }
        } else {
            break;
        }
    }
    sum = sum * 6;
    return sum;
}

int main() {
    cout << arctanx();
    return 0;
}
It should output some number not equal to zero, but I got 0 from running this.

the latest first occurence of 123456789 in PI
I'm looking for the permutation of 123456789 whose first occurrence in pi appears latest. There are 9! permutations, all found in pi, and I am searching for the one with the latest first occurrence. Where or how can I find this solution?

Can this formula be explained to me in simple math terms?
I'm trying to create a program in C++ that calculates the first 800 digits of pi, similar to what Dik T. Winter accomplished with a C program using the function below. I've searched for a formula, and I have come up with the formula at https://crypto.stanford.edu/pbc/notes/pi/code.html
Can this formula be explained in simple math terms?
Since I don't understand the formula, I don't know what to do.
// Dik T. Winter wrote the program as:
#include <stdio.h>
int main() {
    int r[2800 + 1];
    int i, k;
    int b, d;
    int c = 0;
    for (i = 0; i < 2800; i++) {
        r[i] = 2000;
    }
    for (k = 2800; k > 0; k -= 14) {
        d = 0;
        i = k;
        for (;;) {
            d += r[i] * 10000;
            b = 2 * i - 1;
            r[i] = d % b;
            d /= b;
            i--;
            if (i == 0) break;
            d *= i;
        }
        printf("%.4d", c + d / 10000);
        c = d % 10000;
    }
    return 0;
}

// In C++ this is:
#include <iostream>
int main() {
    int r[2800 + 1];
    int i, k;
    int b, d;
    int c = 0;
    for (i = 0; i < 2800; i++) {
        r[i] = 2000;
    }
    for (k = 2800; k > 0; k -= 14) {
        d = 0;
        i = k;
        for (;;) {
            d += r[i] * 10000;
            b = 2 * i - 1;
            r[i] = d % b;
            d /= b;
            i--;
            if (i == 0) break;
            d *= i;
        }
        std::cout << c + d / 10000;
        c = d % 10000;
    }
    return 0;
}
I need an explanation of this formula in simple math terms so that I can translate it into a c++ program.