Is there any difference between matmul and the usual multiplication of tensors?
I am confused about the difference between multiplying two tensors with * and with matmul. Below is my code:
import torch
torch.manual_seed(7)
features = torch.randn((2, 5))
weights = torch.randn_like(features)
Here, I want to multiply weights and features. One way to do it is as follows:
print(torch.sum(features * weights))
Output:
tensor(2.6123)
Another way to do is using matmul
print(torch.mm(features, weights.view((5, 2))))
But here the output is:
tensor([[ 2.8089,  4.6439],
        [ 2.3988,  1.9238]])
What I don't understand is why matmul and the usual multiplication give different outputs when both should be the same. Am I doing anything wrong here?
Edit: When I use features of shape (1, 5), both * and matmul give the same output, but they differ when the shape is (2, 5).
1 answer

When you use *, the multiplication is elementwise; when you use torch.mm, it is matrix multiplication.

Example:

a = torch.rand(2, 5)
b = torch.rand(2, 5)
result = a * b

result will have the same shape as a and b, i.e. (2, 5). Whereas the operation

result = torch.mm(a, b)

will raise a size mismatch error, as this is proper matrix multiplication (as we study it in linear algebra) and a.shape[1] != b.shape[0]. When you apply the view operation in torch.mm, you are making the dimensions match.

In the special case where the shape in some particular dimension is 1, it becomes a dot product, and hence sum(a * b) is the same as mm(a, b.view(5, 1)).
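To make this concrete, here is a small runnable sketch (variable names are illustrative) showing the shapes of the two operations, and the single-row special case where they agree:

```python
import torch

torch.manual_seed(7)
a = torch.randn(2, 5)
b = torch.randn(2, 5)

# Elementwise product: the result has the same shape as the inputs, (2, 5).
elem = a * b

# Proper matrix multiplication: inner dimensions must agree,
# (2, 5) x (5, 2) -> (2, 2). Transposing b is the usual way to line them up.
mm = torch.mm(a, b.t())

# With a single row, sum(row_a * row_b) is exactly the dot product
# that torch.mm computes, so the two approaches coincide.
row_a, row_b = a[0:1], b[0:1]            # each (1, 5)
dot = torch.mm(row_a, row_b.view(5, 1))  # (1, 1)
```

Note that view((5, 2)) on a (2, 5) tensor merely reinterprets the memory layout rather than transposing, which is another reason the original mm result looks unrelated to the elementwise sum.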
See also questions close to this topic

How do we call bash functions from Python?
Let's say I have a bash function that I source into a shell:
# cat sample.sh
function func() {
    echo "Custom env : $CUSTOM_ENV"
}
Now I source this script in the bash shell:
#source sample.sh
Then I define:
export CUSTOM_ENV="abc"
and when I call func() from the bash shell, it displays:

# func
Custom env : abc
Now, if I call a Python script from the same shell, I want to invoke the function func() from the Python script. Is there any way to achieve this?
What I tried:

- os.system('func'): doesn't work.
- subprocess.check_output('func', shell=True, env=os.environ.copy()): doesn't work.

Any guidance?
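One possible approach (a sketch, not the only option): a shell function lives inside the shell process, so a child process started by os.system or subprocess cannot see it. Starting a fresh bash that sources the file and then calls the function works. Here the script body is written to a temporary file so the example is self-contained:

```python
import os
import subprocess
import tempfile

# Recreate sample.sh on disk so the example can run standalone.
script = 'function func() { echo "Custom env : $CUSTOM_ENV"; }\n'
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as fh:
    fh.write(script)
    path = fh.name

# Pass the environment variable explicitly and let bash source the
# file before invoking the function.
env = dict(os.environ, CUSTOM_ENV="abc")
out = subprocess.check_output(["bash", "-c", "source {}; func".format(path)], env=env)
print(out.decode().strip())  # Custom env : abc
os.unlink(path)
```

Alternatively, running export -f func in the parent shell exports the function so that child bash processes can see it without re-sourcing the file.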

How to run a file as another user in Python subprocess
I am creating a Python script which calls a shell script and runs it in a terminal using subprocess. My problem is that I want to run that shell script as another user. My code is given below:

import subprocess

filename = '/mount/test.sh'
p = subprocess.Popen([filename], shell=True, stdout=subprocess.PIPE)
out, err = p.communicate()
print(out)
Can anyone tell me how to run my shell script as another user?
Note: only the subprocess part should run as another user, not the main Python script.
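One common approach (a sketch; 'otheruser' and the required sudoers setup are assumptions) is to prefix only the child command with sudo -u, so the Python process itself keeps running as the current user:

```python
import subprocess

filename = '/mount/test.sh'

# Only the child process runs as the other user. This requires a
# sudoers entry allowing the current account to run the script as
# 'otheruser' (ideally with NOPASSWD so no prompt blocks the pipe).
cmd = ['sudo', '-u', 'otheruser', 'bash', filename]

# Uncomment to actually run it:
# p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
# out, err = p.communicate()
# print(out)
```

Passing the command as a list (without shell=True) avoids quoting issues in the script path.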

Build new logic adapter
I have 3 different types of CSV files containing different data, and I want to write a logic adapter for each CSV file, in which I have to write code with conditions like:

if the statement has confidence < 007, show the matching information; else if ..., show another response; else, show a different response.

So how can I build this logic using ChatterBot?

Calculate delta time of specific column
I am working on the example below, from sensor data showing a timestamp and a status (either 0 or 1). I was able to calculate the time delta between rows with the same status, but I want to calculate the total length of time of each status (0 and 1).
df = pd.DataFrame(data=[['2018/02/16 15:00:05', 0], ['2018/02/16 15:00:08', 0],
                        ['2018/02/16 15:00:09', 0], ['2018/02/16 15:00:14', 1],
                        ['2018/02/16 15:00:26', 0], ['2018/02/16 15:00:28', 0],
                        ['2018/02/16 15:00:29', 0], ['2018/02/16 15:00:31', 1],
                        ['2018/02/16 15:00:33', 1], ['2018/02/16 15:00:34', 1],
                        ['2018/02/16 15:00:37', 1], ['2018/02/16 15:00:39', 1],
                        ['2018/02/16 15:00:40', 1], ['2018/02/16 15:00:41', 1],
                        ['2018/02/16 15:00:43', 1]],
                  columns=['Datetime', 'Status'])

# convert to datetime object
df.Datetime = pd.to_datetime(df['Datetime'])

# find when the state changes
run_change = df['Status'].diff()

# get the step lengths
step_length = df['Datetime'].diff()

# loop and get the change since last state change
since_change = []
current_delta = 0
for is_change, delta in zip(run_change, step_length):
    current_delta = 0 if is_change != 0 else \
        current_delta + delta.total_seconds()
    since_change.append(current_delta)

# add this data to the data frame
df['Run_Change'] = run_change
df['Step_Length'] = step_length
df['Time_Since_Change(sec)'] = pd.Series(since_change).values
and it turned out as:
              Datetime  Status  Run_Change Step_Length  Time_Since_Change
0  2018-02-16 15:00:05       0         NaN         NaT                0.0
1  2018-02-16 15:00:08       0         0.0    00:00:03                3.0
2  2018-02-16 15:00:09       0         0.0    00:00:01                4.0
3  2018-02-16 15:00:14       1         1.0    00:00:05                0.0
4  2018-02-16 15:00:26       0         1.0    00:00:12                0.0
5  2018-02-16 15:00:28       0         0.0    00:00:02                2.0
6  2018-02-16 15:00:29       0         0.0    00:00:01                3.0
7  2018-02-16 15:00:31       1         1.0    00:00:02                0.0
8  2018-02-16 15:00:33       1         0.0    00:00:02                2.0
9  2018-02-16 15:00:34       1         0.0    00:00:01                3.0
10 2018-02-16 15:00:37      1         0.0    00:00:03                6.0
I need the total length of time in seconds over the whole data for each status. For example, for status 0 the total length is 7 seconds (counted from 00:05 to 00:09, then from 00:26 to 00:29).
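One possible way to get those totals (a sketch using the same DataFrame): label each run of consecutive equal Status values, measure each run from its first to its last timestamp, then sum the run lengths per status.

```python
import pandas as pd

df = pd.DataFrame(data=[['2018/02/16 15:00:05', 0], ['2018/02/16 15:00:08', 0],
                        ['2018/02/16 15:00:09', 0], ['2018/02/16 15:00:14', 1],
                        ['2018/02/16 15:00:26', 0], ['2018/02/16 15:00:28', 0],
                        ['2018/02/16 15:00:29', 0], ['2018/02/16 15:00:31', 1],
                        ['2018/02/16 15:00:33', 1], ['2018/02/16 15:00:34', 1],
                        ['2018/02/16 15:00:37', 1], ['2018/02/16 15:00:39', 1],
                        ['2018/02/16 15:00:40', 1], ['2018/02/16 15:00:41', 1],
                        ['2018/02/16 15:00:43', 1]],
                  columns=['Datetime', 'Status'])
df.Datetime = pd.to_datetime(df.Datetime)

# Give every consecutive run of equal Status values its own id.
run_id = df['Status'].ne(df['Status'].shift()).cumsum()

# Length of each run = last timestamp in the run minus the first.
grouped = df.groupby(run_id)['Datetime']
run_length = grouped.max() - grouped.min()
run_status = df.groupby(run_id)['Status'].first()

# Total seconds spent in each status.
totals = run_length.groupby(run_status).sum().dt.total_seconds()
print(totals)  # status 0 -> 7.0 seconds, status 1 -> 12.0 seconds
```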

Numpy: Set the value of a cell to the distance to the next "0" cell
I use Python/NumPy and am trying to implement the following efficiently:
Given a 2D array of shape 256x256 where most values are 0, there are some clusters (mostly rectangles) with the value 1. I now want to set the value of each "1" cell to its Manhattan distance to the nearest "0" cell.
E.g., if there is a 4x4 square of "1"s, the outermost cells keep their value, and the inner cells change their value to 2.
How can this be implemented efficiently with NumPy?
Thank you very much.
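A possible solution (a sketch, assuming SciPy is available alongside NumPy): scipy.ndimage.distance_transform_cdt with the 'taxicab' metric replaces every nonzero cell with its Manhattan distance to the nearest zero cell, which is exactly this transformation.

```python
import numpy as np
from scipy.ndimage import distance_transform_cdt

# A 256x256 grid with a single 4x4 block of ones as a toy cluster.
grid = np.zeros((256, 256), dtype=int)
grid[10:14, 20:24] = 1

# Chamfer distance transform with the taxicab (Manhattan) metric:
# zero cells stay 0, border "1" cells become 1, the inner 2x2 becomes 2.
dist = distance_transform_cdt(grid, metric='taxicab')
print(dist[10:14, 20:24])
```

This runs in time linear in the number of cells, so it should scale well beyond 256x256.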

Keras Encoder Decoder expected to have 2 dimensions
A Keras Encoder Decoder returns an InvalidArgumentError as the shapes of the inputs seem incompatible.
I have:
- X_numerical.shape gives (304, 2500, 4) as input data
- y_numerical.shape gives (304, 40, 22) as output data
The Keras encoder-decoder is the following:
# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, 4))
encoder = LSTM(32, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, 22))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(32, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(22, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

### THE ERROR OCCURS IN THE `.fit()` CALL
model.fit([X_numerical, y_numerical], y_numerical, batch_size=4, epochs=1)
And I get the following error:
InvalidArgumentError                      Traceback (most recent call last)
in ()
     25 model.fit([X_numerical, y_numerical], y_numerical,
     26           batch_size=4,
---> 27           epochs=1)
InvalidArgumentError: Incompatible shapes: [4,40,22] vs. [1,22,1]
[[Node: training_6/RMSprop/gradients/dense_15/add_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _class=["loc:@training_6/RMSprop/gradients/dense_15/add_grad/Sum"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](training_6/RMSprop/gradients/dense_15/add_grad/Shape, training_6/RMSprop/gradients/dense_15/add_grad/Shape_1)]]

What I have tried is to reshape y_numerical to (304, 22, 40), but that doesn't work. I have also tried y_numerical.squeeze() and changing the batch_size in the model.fit() call, all returning various errors. What might be the cause of this dimensionality error?
The summary of my model is:
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_37 (InputLayer)           (None, None, 4)      0
__________________________________________________________________________________________________
input_38 (InputLayer)           (None, None, 22)     0
__________________________________________________________________________________________________
lstm_39 (LSTM)                  [(None, 32), (None,  4736        input_37[0][0]
__________________________________________________________________________________________________
lstm_40 (LSTM)                  [(None, None, 32), ( 7040        input_38[0][0]
                                                                 lstm_39[0][1]
                                                                 lstm_39[0][2]
__________________________________________________________________________________________________
dense_19 (Dense)                (None, None, 22)     726         lstm_40[0][0]
==================================================================================================
Total params: 12,502
Trainable params: 12,502
Non-trainable params: 0
__________________________________________________________________________________________________

How can I compute the tensor in Pytorch efficiently?
I have a tensor x with x.shape = (batch_size, 10), and now I want to compute

x[i][0] = x[i][0]*x[i][1]*...*x[i][9] for i in range(batch_size)
Here is my code:
for i in range(batch_size):
    for k in range(1, 10):
        x[i][0] = x[i][0] * x[i][k]
But when I implement this in forward() and call loss.backward(), backpropagation is very slow. Why is it slow, and is there any way to implement it efficiently?
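A vectorized alternative (a sketch): the nested Python loop creates one autograd node per scalar multiplication, which makes the backward graph huge; torch.prod reduces each row in a single op.

```python
import torch

batch_size = 8
x = torch.randn(batch_size, 10)

# One vectorized reduction instead of batch_size * 9 scalar multiplies;
# the autograd graph stays small and backward is fast.
row_products = torch.prod(x, dim=1)   # shape (batch_size,)

# Equivalent to the loop that overwrites column 0 with the row product:
y = x.clone()
y[:, 0] = row_products
```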
Pytorch: Learnable threshold for clipping activations
What is the proper way to clip ReLU activations with a learnable threshold? Here's how I implemented it; however, I'm not sure whether it is correct:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.act_max = nn.Parameter(torch.Tensor([0]), requires_grad=True)
        self.conv1 = nn.Conv2d(3, 32, kernel_size=5)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5)
        self.pool = nn.MaxPool2d(2, 2)
        self.relu = nn.ReLU()
        self.linear = nn.Linear(64 * 5 * 5, 10)

    def forward(self, input):
        conv1 = self.conv1(input)
        pool1 = self.pool(conv1)
        relu1 = self.relu(pool1)
        relu1[relu1 > self.act_max] = self.act_max
        conv2 = self.conv2(relu1)
        pool2 = self.pool(conv2)
        relu2 = self.relu(pool2)
        relu2 = relu2.view(relu2.size(0), -1)
        linear = self.linear(relu2)
        return linear

model = Net()
torch.nn.init.kaiming_normal_(model.parameters)
nn.init.constant(model.act_max, 1.0)
model = model.cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

for epoch in range(100):
    for i in range(1000):
        output = model(input)
        loss = nn.CrossEntropyLoss()(output, label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        model.act_max.data = model.act_max.data - 0.001 * model.act_max.grad.data
I had to add the last line because without it the value would not update for some reason.
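For comparison, a common differentiable formulation (a sketch, not necessarily what the author intends) is to clip with torch.min, which keeps act_max inside the autograd graph so a plain optimizer step updates it along with the other parameters:

```python
import torch
import torch.nn as nn

class ClippedReLU(nn.Module):
    """ReLU whose upper clipping threshold is a learnable parameter."""
    def __init__(self, init_max=1.0):
        super().__init__()
        self.act_max = nn.Parameter(torch.tensor(init_max))

    def forward(self, x):
        # torch.min keeps act_max in the graph: gradient flows to it
        # from every activation that exceeds the threshold.
        return torch.min(torch.relu(x), self.act_max)

m = ClippedReLU()
x = torch.randn(4, 8)
out = m(x)
out.sum().backward()
# m.act_max.grad is now populated automatically; no manual
# act_max.data update in the training loop is needed.
```

The in-place masked assignment in the question, by contrast, can interfere with gradient flow, which may be why the manual .data update was needed.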

Pytorch linear regression Neural Network
What are the steps for creating a linear regression neural network in PyTorch? My data is 30x40 images.
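A minimal sketch (all sizes, targets, and hyperparameters here are assumptions, since the question gives only the image shape): flatten each 30x40 image to a 1200-dimensional vector and fit a single linear layer with MSE loss.

```python
import torch
import torch.nn as nn

# Linear regression as a one-layer network: 30*40 inputs -> 1 output.
model = nn.Linear(30 * 40, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.randn(16, 30, 40)   # dummy batch of 16 images
targets = torch.randn(16, 1)       # dummy regression targets

for step in range(100):
    pred = model(images.view(16, -1))  # flatten each image to (1200,)
    loss = loss_fn(pred, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

For multiple outputs or classification, only the output size of nn.Linear and the loss function change.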

dimension errors in tf.scatter_nd in tensorflow/keras
My code:
reshape_out = Reshape((21, 3), input_shape=(21*3,), name='reshape_to_21_3')(output3d)

def proj_output_shape(shp):
    return (None, 32, 32, 1)

def f(x):
    import tensorflow as tf
    batch_size = K.shape(x)[0]
    print('x.shape={0}'.format(x.shape))
    idx = K.cast(x[:, :, 0:2]*15.5+15.5, "int32")
    print('idx.shape={0}'.format(idx.shape))
    # z = mysparse_to_dense(idx, (K.shape(x)[0], 32, 32), 1.0, 0.0, name='sparse_tensor')
    updates = tf.ones([batch_size, 21])
    print('updates.shape={0}'.format(updates.shape))
    # shape = tf.Variable(np.array([batch_size, 32, 32]))
    # print('shape.shape={0}'.format(shape))
    z = tf.scatter_nd(indices=idx, updates=updates, shape=(batch_size, 32, 32), name='cool')
    print('z={0}'.format(z))
    # z = tf.add(z, z)
    # z = tf.sparse_add(tf.zeros(z.dense_shape), z)
    z = K.reshape(z, (K.shape(x)[0], 32, 32, 1))
    print('z.shape={0}'.format(z.shape), z)
    fil = make_kernel(1.0)
    fil = K.reshape(fil, (5, 5, 1, 1))
    print('fil.shape={0}'.format(fil.shape), fil)
    r = K.conv2d(z, kernel=fil, padding='same', data_format="channels_last")
    print('r.shape={0}'.format(r.shape), r)
    return r
Output:
x.shape=(?, 21, 3)
idx.shape=(?, 21, 2)
updates.shape=(?, 21)
Error:
ValueError: The inner 1 dimensions of output.shape=[?,?,?] must match the inner 0 dimensions of updates.shape=[?,21]: Shapes must be equal rank, but are 1 and 0 for 'projection_4/cool' (op: 'ScatterNd') with input shapes: [?,21,2], [?,21], [3].
How to fix this? Thanks

ValueError: Shape (?, 21, 2) must have rank 2 in tf.SparseTensor
My code:

def f(x):
    import tensorflow as tf
    print('x.shape={0}'.format(x.shape))
    idx = K.cast(x[:, :, 0:2]*15.5+15.5, "int64")
    print('idx.shape={0}'.format(idx.shape))
    st_z = tf.SparseTensor(idx, values=0.0, dense_shape=[K.shape(x)[0], 32, 32])
Output:

x.shape=(?, 21, 3)
idx.shape=(?, 21, 2)
Error:
ValueError: Shape (?, 21, 2) must have rank 2
How to fix this? Thanks

tf.sparse_to_dense: Shape must be rank 1 but is rank 0
My code:

def f(x):
    try:
        import tensorflow as tf
        # x is (None, 10, 2)
        idx = K.cast(x*15.5+15.5, "int32")
        z = tf.sparse_to_dense(idx, 32, 1.0, 0.0, name='sparse_tensor')
        print('z.shape={0}'.format(z.shape))
    except Exception as e:
        print(e)
    return x[:, :, 0:2]

drop_out = Lambda(lambda x: f(x), output_shape=drop_output_shape, name='projection')(reshape_out)
x is a tensor of shape (None, 10, 2), where there are 10 indexes/coordinates. I am trying to generate a (None, 32, 32) tensor z, but I got the following error:

Shape must be rank 1 but is rank 0 for 'projection_14/sparse_tensor' (op: 'SparseToDense') with input shapes: [?,10,2], [], [], [].
How to fix it? Thanks