Computing gradients of a depthwise convolution with respect to the layer's input in PyTorch?
There's a utility function that does this in TensorFlow. However, I need it in PyTorch. I've successfully implemented a depthwise convolution with either "same" or "valid" padding in PyTorch that matches TensorFlow's implementation, e.g. depthwise_conv2d_torch(input, stride, kernel, padding) == tf.nn.depthwise_conv2d(input, filter=kernel, strides=strides, padding=padding)
However, I'm not able to compute a matching gradient as follows:

import torch as T

def depthwise_conv2d_backprop_input_pt(depthwise_out, images):
    return T.autograd.grad(outputs=depthwise_out, inputs=images,
                           grad_outputs=T.ones_like(depthwise_out))
The values returned by depthwise_conv2d_backprop_input_pt differ wildly from
tf.nn.depthwise_conv2d_backprop_input(input_sizes=images.shape, filter=filter, out_backprop=tf.ones_like(depthwise_out), strides=strides, padding='SAME')
I've been stuck on this for a while and even tried working from TensorFlow's reference C++ implementation to reimplement it, but to no avail.
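For what it's worth, here is a minimal self-contained sketch (shapes, stride and padding are assumptions) showing that in PyTorch the input gradient of a depthwise convolution, which is what tf.nn.depthwise_conv2d_backprop_input returns, equals a grouped transposed convolution of the upstream gradient with the same kernel:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: batch 1, 3 channels, 8x8 image, 3x3 depthwise kernel.
images = torch.randn(1, 3, 8, 8, requires_grad=True)
kernel = torch.randn(3, 1, 3, 3)  # (channels, 1, kH, kW) with groups=channels

# Depthwise convolution: groups equal to the number of input channels.
depthwise_out = F.conv2d(images, kernel, stride=1, padding=1, groups=3)

# Input gradient via autograd, with an all-ones upstream gradient
# (matching out_backprop = ones in TensorFlow).
(grad_input,) = torch.autograd.grad(
    outputs=depthwise_out, inputs=images,
    grad_outputs=torch.ones_like(depthwise_out))

# The same gradient, written explicitly as a grouped transposed convolution.
manual = F.conv_transpose2d(
    torch.ones_like(depthwise_out), kernel, stride=1, padding=1, groups=3)
assert torch.allclose(grad_input, manual, atol=1e-5)
```

With stride > 1 the transposed convolution may also need output_padding to recover the exact input size.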
See also questions close to this topic

GoogleNet fails to classify images
I built GoogLeNet in Keras following this guide: https://www.analyticsvidhya.com/blog/2018/10/understanding-inception-network-from-scratch/ The only difference is that I replaced the 1000 classes in the output layer with 3. The data is prepared this way:
def grey_preprocessor(xarray):
    xarray = (xarray / 127.5) - 1
    return xarray

img_resol = (224, 224)

train_batches = ImageDataGenerator(
    horizontal_flip=True,
    preprocessing_function=grey_preprocessor
).flow_from_directory(
    directory=train_path,
    target_size=img_resol,
    classes=['bacterial', 'healthy', 'viral'],
    batch_size=10)

valid_batches = ImageDataGenerator(
    horizontal_flip=True,
    preprocessing_function=grey_preprocessor
).flow_from_directory(
    directory=valid_path,
    target_size=img_resol,
    classes=['bacterial', 'healthy', 'viral'],
    batch_size=10)

test_batches = ImageDataGenerator(
    horizontal_flip=True,
    preprocessing_function=grey_preprocessor
).flow_from_directory(
    directory=test_path,
    target_size=img_resol,
    classes=['bacterial', 'healthy', 'viral'],
    batch_size=10,
    shuffle=False)

assert train_batches.n == 4222
assert valid_batches.n == 300
assert test_batches.n == 150
assert train_batches.num_classes == valid_batches.num_classes == test_batches.num_classes == 3
However, accuracy is 0.3333 on every batch, which means the model isn't classifying at all. I understand it could be many things. What is a good way to troubleshoot this?
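Two quick sanity checks worth running (a standalone NumPy sketch, independent of the Keras pipeline): confirm the preprocessor really maps [0, 255] into [-1, 1], and inspect whether predictions collapse to a single class.

```python
import numpy as np

def grey_preprocessor(xarray):
    # same scaling as in the question: map [0, 255] to [-1, 1]
    return (xarray / 127.5) - 1

batch = np.random.randint(0, 256, size=(10, 224, 224, 3)).astype(np.float32)
scaled = grey_preprocessor(batch)
assert scaled.min() >= -1.0 and scaled.max() <= 1.0

# With 3 balanced classes, accuracy pinned at 1/3 usually means the model
# predicts one class for everything; a histogram of
# np.argmax(model.predict(batch), axis=1) makes that easy to confirm.
```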

How to load images and text labels for CNN regression from different folders
I have two folders, X_train and Y_train. X_train contains images; Y_train contains vectors stored as .txt files. I am trying to train a CNN for regression.
I could not figure out how to load the data and train the network. When I use "ImageDataGenerator", it assumes that the X_train and Y_train folders are classes.
import os
import tensorflow as tf
from glob2 import glob

os.chdir(r'C:\Data')
x_files = glob('X_train\*.jpg')
y_files = glob('Y_train\*.txt')
Above, I found their paths; how can I load them and get them ready for model.fit? Thank you.
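One common pattern (a sketch; the file-naming convention is an assumption): pair the sorted image and label paths by filename stem, parse the .txt targets yourself, and pass plain (images, targets) arrays to model.fit instead of a class-based generator. The snippet builds a tiny stand-in directory tree so it runs standalone; in the real project X_train/ and Y_train/ already exist.

```python
from pathlib import Path
import tempfile

# Stand-in tree so the pairing logic is runnable here.
root = Path(tempfile.mkdtemp())
(root / 'X_train').mkdir(); (root / 'Y_train').mkdir()
for i in range(3):
    (root / 'X_train' / f'{i:03d}.jpg').touch()
    (root / 'Y_train' / f'{i:03d}.txt').write_text('0.5 1.5')

x_files = sorted((root / 'X_train').glob('*.jpg'))
y_files = sorted((root / 'Y_train').glob('*.txt'))

# Pair by stem so image 007.jpg lines up with label 007.txt.
pairs = [(x, y) for x, y in zip(x_files, y_files) if x.stem == y.stem]
targets = [[float(v) for v in y.read_text().split()] for _, y in pairs]
assert len(pairs) == 3 and targets[0] == [0.5, 1.5]
```

From there, load each image (e.g. with tf.keras.utils.load_img) into an array X and call model.fit(X, np.array(targets)).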

Keras Binary Model Getting stuck at 50% Accuracy
I'm training a model to understand the impact of news on market volatility. The model seems to be fine and the dataset classes are balanced, so I'm not sure what exactly is wrong.
I have coded a basic model using pretrained word embeddings:
model = tf.keras.models.Sequential([
    tf.keras.layers.Embedding(vocab_size + 1, embedding_dim, weights=[embedding_matrix]),
    tf.keras.layers.LSTM(300, return_sequences=True, activation='relu'),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(254, activation='relu')),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='Adam', metrics=['binary_accuracy'])
Training the model, I get this:
109/109 [==============================] - 265s 2s/step - loss: 0.6945 - binary_accuracy: 0.5032 - val_loss: 0.6927 - val_binary_accuracy: 0.5161
109/109 [==============================] - 265s 2s/step - loss: 0.6945 - binary_accuracy: 0.5032 - val_loss: 0.6978 - val_binary_accuracy: 0.5123
109/109 [==============================] - 265s 2s/step - loss: 0.6945 - binary_accuracy: 0.5032 - val_loss: 0.6859 - val_binary_accuracy: 0.5096
109/109 [==============================] - 265s 2s/step - loss: 0.6945 - binary_accuracy: 0.5032 - val_loss: 0.6801 - val_binary_accuracy: 0.5245
I thought maybe the issue is that the data somehow isn't related and the model has nothing to learn, but I'm not even sure about that. I have published the dataset and the notebook on GitHub so that you can reproduce the issue; it would be great if you can find out what is going on.
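One observation worth making precise (a pure-NumPy check, not a diagnosis of this specific model): a binary cross-entropy loss stuck near 0.693 is the signature of a model that outputs ~0.5 for everything, since -ln(0.5) ≈ 0.6931, which is essentially the plateau in the log above.

```python
import numpy as np

# Loss of a model that always predicts probability 0.5 on balanced labels.
p = 0.5
bce = -np.log(p)
assert abs(bce - 0.6931) < 1e-3

# Things commonly checked in this situation: the 'relu' activations inside
# the LSTM layers (tanh is the Keras default and is usually more stable),
# a learning rate that is too high, or labels misaligned with the inputs.
```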

What happens when the L1 regularizer penalty is 0?
I am playing around with deep learning. The final layer of my model is a dense layer, and when I set the L1 regularizer penalty to 0 it actually performs better than for any other value I have tested. I'm just wondering what is going on internally here, since it clearly isn't dividing by the penalty and raising a division-by-zero error, as I would have expected.
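The penalty is additive, not divisive, which a tiny NumPy sketch makes concrete: the L1 term is simply added to the data loss scaled by the coefficient, so a coefficient of 0 makes the term vanish and training proceeds with no weight-shrinking pressure at all.

```python
import numpy as np

w = np.array([0.5, -2.0, 1.5])  # hypothetical layer weights
data_loss = 1.0                 # hypothetical data term

def total_loss(l1):
    # total = data loss + l1 * sum(|w|); nothing is ever divided by l1
    return data_loss + l1 * np.abs(w).sum()

assert total_loss(0.0) == data_loss        # penalty contributes nothing
assert total_loss(0.01) > total_loss(0.0)  # any positive l1 adds to the loss
```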

Confusion about code for extending a class in Python
class MnistModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, xb):
        xb = xb.reshape(1, 784)
        out = self.linear(xb)
        return out

model = MnistModel()
I am trying to learn deep learning but I am stuck on this code. Can anyone please explain it to me briefly? It seems very confusing to me.
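In short: the class subclasses nn.Module (PyTorch's base class for models), super().__init__() runs nn.Module's own setup, and forward defines what happens when the model is called. A commented, runnable version (input_size and num_classes are assumed to be the MNIST values 784 and 10, and the reshape uses -1 so it also works on batches):

```python
import torch
import torch.nn as nn

input_size, num_classes = 784, 10  # assumed MNIST values

class MnistModel(nn.Module):
    def __init__(self):
        super().__init__()                    # initialize nn.Module machinery
        # one fully connected layer: 784 pixel values -> 10 class scores
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, xb):                    # runs on model(xb)
        xb = xb.reshape(-1, 784)              # flatten each 28x28 image
        return self.linear(xb)                # one logit per class

model = MnistModel()
logits = model(torch.randn(4, 1, 28, 28))     # a batch of 4 fake images
assert logits.shape == (4, 10)
```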

Why do I get NaN (loss) values when training a Wasserstein GAN with gradient penalty?
Recently, I have been working with GANs to generate fake accelerometer data for human activity (the mHEALTH dataset).
I have tried to implement the improved WGAN training technique (i.e., conditional WGAN-GP) from here. I made some modifications to the code to fit my data. My goal is to generate fake 1D time series of size (100, 1) for a single accelerometer channel from an input noise vector of size 100.
However, when I run the training process, I get NaN for both the discriminator loss and the generator loss.
Has anyone had the same problem? Why does it happen, and how can it be solved? Thanks for your help!
Here is my complete source code and the output I get:
WARNING:tensorflow:Discrepancy between trainable weights and collected trainable weights, did you set `model.trainable` without calling `model.compile` after ?
0 [D loss: 8.756878] [G loss: 0.499155]
10 [D loss: 5.958789] [G loss: 0.488709]
20 [D loss: 4.041764] [G loss: 0.473811]
30 [D loss: 2.644764] [G loss: 0.463494]
40 [D loss: 2.244551] [G loss: 0.460721]
50 [D loss: nan] [G loss: nan]
60 [D loss: nan] [G loss: nan]
70 [D loss: nan] [G loss: nan]
80 [D loss: nan] [G loss: nan]
90 [D loss: nan] [G loss: nan]
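The question's code is in Keras, but a frequent numerical culprit in WGAN-GP is the gradient-penalty norm itself: the derivative of sqrt(x) blows up as x approaches 0. A PyTorch sketch (the critic and the (100, 1) shapes are assumptions matching the question) of a gradient penalty with a small stabilizing constant under the square root:

```python
import torch

def gradient_penalty(critic, real, fake):
    # Random interpolation between real and fake samples, per example.
    eps = torch.rand(real.size(0), 1, 1)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(interp)
    (grads,) = torch.autograd.grad(score.sum(), interp, create_graph=True)
    # The small constant keeps the gradient of the norm finite when the
    # norm is exactly zero -- a common source of sudden NaNs in training.
    norm = torch.sqrt(grads.pow(2).sum(dim=(1, 2)) + 1e-12)
    return ((norm - 1) ** 2).mean()

# Toy check with a linear stand-in critic on (100, 1) series.
critic = lambda x: x.mean(dim=(1, 2))
gp = gradient_penalty(critic, torch.ones(2, 100, 1), torch.zeros(2, 100, 1))
```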

only one element tensors can be converted to python scalars
def Log(A):
    '''
    theta = arccos((tr(A) - 1) / 2)
    K = 1/(2 sin(theta)) * (A - A^T)
    log(A) = theta * K
    '''
    theta = torch.acos(torch.tensor((torch.trace(A) - 1) / 2))
    K = (1 / (2 * torch.sin(theta))) * (A - torch.transpose(A, 0, 1))
    return theta * K

def tensor_Log(A):
    blah = [[Log(A[i, j]) for j in range(A.shape[1])] for i in range(A.shape[0])]
    new = torch.tensor(blah)
    return new

ValueError: only one element tensors can be converted to Python scalars
During training, while getting the outputs of my network, the function above produces the error shown. It is called inside a custom layer and I do not know what the error is referencing. Any thoughts?
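A likely fix (sketched with a stand-in for Log so it runs standalone): torch.tensor() only accepts nested lists of Python scalars, which is exactly what the ValueError complains about, whereas torch.stack assembles a tensor from a list of equally shaped tensors and keeps the autograd graph intact.

```python
import torch

def tensor_log(A, log_fn):
    # Stack each row of per-matrix results instead of calling torch.tensor
    # on a nested list of tensors.
    rows = [torch.stack([log_fn(A[i, j]) for j in range(A.shape[1])])
            for i in range(A.shape[0])]
    return torch.stack(rows)

A = torch.randn(2, 3, 4, 4)
out = tensor_log(A, lambda M: 2 * M)  # stand-in for the Log() above
assert out.shape == (2, 3, 4, 4)
assert torch.allclose(out, 2 * A)
```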

Pytorch Conv2d Autoencoder Output Shape
I built the convolutional autoencoder below and am trying to tune it so that the encoder output shape (x_encoder) has [N x H x W] = 1024 elements without increasing the loss. Currently my output shape is [4, 64, 64]. Any ideas?
# define the NN architecture
class ConvAutoencoder(nn.Module):
    def __init__(self):
        super(ConvAutoencoder, self).__init__()
        ## encoder layers ##
        # conv layer (depth from in -> 16), 3x3 kernels
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        # conv layer (depth from 16 -> 4), 3x3 kernels
        self.conv2 = nn.Conv2d(16, 4, 3, padding=1)
        # pooling layer to reduce x-y dims by two; kernel and stride of 2
        self.pool = nn.MaxPool2d(2, 2)
        ## decoder layers ##
        ## a kernel of 2 and a stride of 2 will increase the spatial dims by 2
        self.t_conv1 = nn.ConvTranspose2d(4, 16, 2, stride=2)
        self.t_conv2 = nn.ConvTranspose2d(16, 1, 2, stride=2)

    def forward(self, x):
        ## encode ##
        # add hidden layers with relu activation function
        # and maxpooling after
        x = F.relu(self.conv1(x))
        x = self.pool(x)
        # add second hidden layer
        x = F.relu(self.conv2(x))
        x = self.pool(x)
        # compressed representation
        x_encoder = x
        ## decode ##
        # add transpose conv layers, with relu activation function
        x = F.relu(self.t_conv1(x))
        # output layer (with sigmoid for scaling from 0 to 1)
        x = F.sigmoid(self.t_conv2(x))
        return x, x_encoder
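Since the current encoder output is [4, 64, 64], the input is presumably 1x256x256. One possible tweak (an assumption, not the only option): add a third conv stage that drops the depth to 1 plus a third pool, which turns 1x256x256 into 1x32x32 = 1024 values; the decoder would then need a matching third ConvTranspose2d stage.

```python
import torch
import torch.nn as nn

# Encoder-only sketch of that variant, with the spatial sizes annotated.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),  # -> 16x128x128
    nn.Conv2d(16, 4, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),  # -> 4x64x64
    nn.Conv2d(4, 1, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),   # -> 1x32x32
)
x_encoder = encoder(torch.randn(1, 1, 256, 256))
assert x_encoder.numel() == 1024
```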

How to add a one-to-one linear layer to RGB images before the convolutional layers?
I am implementing a new idea. I want a linear layer that takes a 3-dimensional RGB image and multiplies it by a 3-dimensional weight tensor with a one-to-one cell mapping that can be optimized, i.e. input cell [1, 2, 200, 200] maps only to output cell [1, 2, 200, 200]. All other interconnections, e.g. [1, 2, 200, 200] -> [1, 2, 260, 250], are fixed at 0 (frozen). How can I do that? PyTorch code is needed.
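Because every input cell connects only to its matching output cell, no full weight matrix is needed: an elementwise weight (and bias) tensor with the same shape as one image does it, and autograd optimizes it like any other parameter. A sketch (the class name and shapes are assumptions):

```python
import torch
import torch.nn as nn

class ElementwiseLinear(nn.Module):
    """One-to-one 'linear' layer: each pixel gets its own weight and bias,
    with no cross connections -- a plain elementwise affine map."""
    def __init__(self, shape):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(shape))   # identity at init
        self.bias = nn.Parameter(torch.zeros(shape))

    def forward(self, x):
        # Broadcasts over the batch dimension.
        return x * self.weight + self.bias

layer = ElementwiseLinear((3, 200, 200))
x = torch.randn(2, 3, 200, 200)
y = layer(x)
assert y.shape == x.shape
assert torch.allclose(y, x)  # identity before training
```

This layer can simply be placed before the first Conv2d of an existing network.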

How to access conv2d layers in resnet50?
I am using a Resnet50 model and want to visualize the features of a certain conv2d layer after forwarding an image through the network, but somehow I am not able to access the inner conv2d layers inside the Bottleneck blocks.
My code so far:
def forward_res(model, conv_layer, x):
    count = 0
    for child in model.children():  # outer layers
        x = child(x)
        if isinstance(child, nn.Conv2d):
            if count == conv_layer:
                return x
            count = count + 1
        if isinstance(child, nn.Sequential):
            for bottleneck in child:
                for lay in bottleneck.children():  # inner layers
                    x = lay(x)
                    if isinstance(lay, nn.Conv2d):
                        if count == conv_layer:
                            return x
                        count = count + 1
When I try to use my code like this, I get the following error:
RuntimeError: Given groups=1, weight of size 1 3 1 1, expected input[1, 1, 512, 512] to have 3 channels, but got 1 channels instead
Can somebody help me access the inner conv2d layers of a ResNet model, or tell me what is wrong with my code?