python: "all the input arrays must have same number of dimensions" when using append

import numpy as np
x = [1, 2, 3, 4]
y = [[5, 6, 7, 8], [9, 0, 1, 2]]
j = np.append(x, y, axis=0)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\numpy\lib\function_base.py", line 5147, in append
    return concatenate((arr, values), axis=axis)
ValueError: all the input arrays must have same number of dimensions
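The error comes from the dimension mismatch: x is 1-D while y is 2-D, and np.append with axis=0 delegates to np.concatenate, which requires every input to have the same number of dimensions. A minimal sketch of one possible fix, promoting x to a 1x4 row first:

```python
import numpy as np

x = [1, 2, 3, 4]
y = [[5, 6, 7, 8], [9, 0, 1, 2]]

# Wrapping x in an extra pair of brackets turns it into a 1x4 row,
# matching y's 2-D shape, so concatenation along axis 0 succeeds.
j = np.append([x], y, axis=0)
print(j.shape)  # (3, 4)
```

np.vstack((x, y)) performs the same 1-D-to-row promotion automatically.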
See also questions close to this topic

Fast way to replace elements by zeros corresponding to zeros of another array
Suppose we have two numpy arrays
a = np.array([[1, 2, 0], [2, 0, 0], [3, 1, 0]])
b = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
The goal is to set elements of b at the indices where a is 0. That is, we want to get an array
[ [1, 2, 0], [4, 0, 0], [7, 8, 0] ]
What is a fast way to achieve this?
I thought about generating a mask from a first and then replacing the values of b using this mask, but I got lost on how to do this.
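One sketch of the masking idea (assuming modifying b in place is acceptable): boolean indexing does exactly this, with no explicit intermediate mask array needed.

```python
import numpy as np

a = np.array([[1, 2, 0], [2, 0, 0], [3, 1, 0]])
b = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# a == 0 is a boolean mask; indexing b with it selects exactly the
# positions where a is zero, and the assignment zeroes them out.
b[a == 0] = 0
print(b)
# [[1 2 0]
#  [4 0 0]
#  [7 8 0]]
```

If b must stay untouched, np.where(a == 0, 0, b) builds the same result as a new array.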

Python simple neural network code not working
I'm trying to learn about neural networks and coded a simple backpropagation neural network that uses sigmoid activation functions and random weight initialisation. I tried multiplication with the two input values 3 and 2 in the input layer and the target output 6 in the output layer. When I execute my code, the values for w1 and w2 keep on increasing and don't stop at the correct value. I am new to both Python and neural networks and I'd appreciate assistance.
import numpy as np
import random

al0 = 3
bl0 = 2
w1 = random.random()
w2 = random.random()
b = 0.234
ol1 = 6

def sigm(x, deriv=False):
    if deriv == True:
        return x*(1-x)
    return 1/(1+np.exp(-x))

for iter in range(10000):
    syn0 = al0*w1
    syn1 = bl0*w2
    x = syn0 + syn1 + b
    y = sigm(x)
    E = 1/2*(ol1 - y)**2
    dsig = sigm(x, True)
    dyE = y - ol1
    dtotal1 = dyE*dsig*al0
    w1 = w1 + 0.01*dtotal1
    dtotal2 = dyE*dsig*bl0
    w2 = w2 + 0.01*dtotal2

print(w1, w2)
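Two things stand out (a sketch of the issues, not a full diagnosis): the forward pass has to be recomputed inside the training loop, and a sigmoid output always lies in (0, 1), so a raw target of 6 can never be reached; the weights will keep drifting as the network tries to push the output toward an unreachable value. A minimal gradient-descent version with the forward pass inside the loop and the weights moved against the gradient:

```python
import numpy as np

al0, bl0 = 3.0, 2.0   # the two inputs
target = 6.0          # note: unreachable for a sigmoid output in (0, 1)
rng = np.random.default_rng(0)
w1, w2 = rng.random(), rng.random()
b = 0.234
lr = 0.01

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(10000):
    x = al0 * w1 + bl0 * w2 + b   # forward pass, recomputed every step
    y = sigm(x)
    dyE = y - target              # dE/dy for E = 0.5 * (target - y)**2
    dsig = y * (1.0 - y)          # sigmoid derivative w.r.t. x
    w1 -= lr * dyE * dsig * al0   # descend the gradient
    w2 -= lr * dyE * dsig * bl0
```

Even with the fixes, a target of 6 makes the weights grow until the sigmoid saturates near 1; rescaling the target into (0, 1), or dropping the sigmoid on the output unit, would let training actually converge.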

How to get positions of edge pixels by shifting a numpy array
Let's say I want to detect the boundaries of labeled objects in a 3D image. In the ground-truth image I have values 0, ..., n in dtype float32: 0 for background, and the other values are IDs.
My plan is to shift the ground-truth ndarray in all 3 dimensions by 1 and -1 and compare for equality with the original ground truth. This should give me a boolean array of the same shape, which I would like to use for indexing the ground-truth image to update the values of the object-boundary pixels for further processing. The objects can overlap.
Minimal example:
import numpy as np
a = np.zeros(5)
a[1:4] = 1
a
Out[5]: array([0., 1., 1., 1., 0.])
Now shifting by 1
b = np.roll(a, 1)
b
Out[7]: array([0., 0., 1., 1., 1.])
and by -1
c = np.roll(a, -1)
c
Out[9]: array([1., 1., 1., 0., 0.])
Comparing a and b results in
a == b
Out[10]: array([ True, False,  True,  True, False])
a == c
Out[11]: array([False,  True,  True, False,  True])
b == c
Out[12]: array([False, False,  True, False, False])
The boundaries are at indices 1 and 3, but I'm unable to work out the proper logic. I made several attempts with the np.logical_* functions, but I'm going around in circles. Ideally the result should look like this:
array([True, False, True, False, True])
or vice versa.
Am I missing some fundamental things?
EDIT
My goal is segmentation. I have pixel-level labeling for different objects and background in the same image. How can I detect the edges of said objects, or the border between two overlapping, differently labeled objects? I thought something like shifting and comparing would get me a boolean array with truth values for edge/border or not:
Indices = np.array(Image.shape, dtype=bool)
to edit the entries of interest with something like
Image[Indices] = max_value
and treat this value as new foreground for a Euclidean distance transform.
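Working from the 1-D minimal example above, one sketch of the missing logic: a pixel is a boundary pixel if it differs from either shifted copy and itself belongs to an object (label != 0). Combining the comparisons with != instead of == sidesteps the logical tangle:

```python
import numpy as np

a = np.zeros(5)
a[1:4] = 1
b = np.roll(a, 1)    # shift right
c = np.roll(a, -1)   # shift left

# differs from a neighbour in either direction, AND is foreground
boundary = ((a != b) | (a != c)) & (a != 0)
print(boundary)  # [False  True False  True False] -> indices 1 and 3
```

For the 3-D case, roll along each of the three axes with shifts +1 and -1, OR the six inequality masks together, AND with (labels != 0), and then Image[boundary] = max_value marks the border pixels for the distance transform.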

pandas to_csv output file size being magnified after updating Python
We have a csv file that is being used like a database, and an ETL script that takes input Excel files and transforms them into the same format to append to the csv file.
The script reads the csv file into a dataframe and appends the new input dataframe to the end, and then uses to_csv to overwrite the old csv file.
The problem is, since we updated to a new version of Python (installed with Anaconda), the output csv file grows larger every time we append data to it. The more lines in the original csv that are read into the script (and written back out with the newly appended data), the more the output file size is inflated. The actual number of rows and the data in the csv files are fine; it's just the file size itself that is unusually large.
Does anyone know if updating to a new version of Python could have broken this process? Is Python storing data in the csv file that we cannot see?
Any ideas or help is appreciated! Thank you.
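Without the exact pandas and Python versions it's hard to say what changed, but one pattern worth trying is to stop re-reading and rewriting the whole file and append only the new rows; writing with index=False also prevents an unnamed index column from accumulating across runs. A sketch (file and column names here are hypothetical):

```python
import os
import pandas as pd

# Hypothetical batch of freshly transformed rows from the ETL step.
new_rows = pd.DataFrame({"id": [1, 2], "value": [10.5, 11.0]})

csv_path = "database.csv"
write_header = not os.path.exists(csv_path)  # header only on first write

# mode="a" appends the new rows without rewriting the existing file;
# index=False keeps an extra index column from being added each run.
new_rows.to_csv(csv_path, mode="a", header=write_header, index=False)
```

If the size still balloons on Windows, inspecting the file in a hex editor for doubled carriage returns is worthwhile; newline translation during repeated read/rewrite cycles is one possible source of a CSV that grows without gaining rows.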

write Spark DStream to one file under Google Cloud Storage
Has anyone worked in Scala on writing Spark DStreams into one concatenated file under Google Cloud Storage? I tried several methods and none of them worked, so now I am trying a new one based on the saveAsNewAPIHadoopFile method. Could anyone confirm whether this method allows writing DStreams to one concatenated file?
I used this method at the beginning, but I got several part files, which is not my target output; in fact, I am getting one part file per message:
val data = pubsubStream.map(message => new String(message.getData(), StandardCharsets.UTF_8))
data.foreachRDD { rdd =>
  import sparkSession.implicits._
  val df = rdd.toDF()
  df.repartition(1).write.mode("append").save(output)
}
ssc.start()
ssc.awaitTermination()
For the saveAsNewAPIHadoopFile method I am getting compilation errors; does anyone know how to use it? Best regards.

bash append text at the start and end of ls output
I want to change the output of ls from:
1.png
2.png
3.png
Into
Start [1.png] End
Start [2.png] End
Start [3.png] End
I need to append a string at the start and at the end of each line at the same time. I'm not opposed to using text files to store the output, but I'd rather avoid it if there is a better way.
I know I can use
ls | sed 's/^/Start [/'
and
ls | sed 's/$/] End/'
However, is there a way to combine these two operations into one statement and avoid using temporary text files?
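sed accepts multiple substitutions separated by semicolons (or repeated -e flags), and & in the replacement stands for the whole matched text, so either form does it in one statement with no temporary files:

```shell
# two substitutions in a single sed invocation
ls | sed 's/^/Start [/; s/$/] End/'

# or one substitution, using & (the matched line) as a backreference
ls | sed 's/.*/Start [&] End/'
```

Note that parsing ls output is fragile for filenames containing newlines; for those cases a shell for loop over a glob is safer than piping ls.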