Trouble Using Chain Rule to Implement Backprop in Numpy
I'm having trouble figuring out how to translate the backpropagation steps from my HW from math into Python/numpy operations.
I'm pretty sure I have the first few backprop steps correct; however, I know I messed up on the third step because I'm trying to multiply two matrices sized (784,1) and (200,10).
Here is the code so far
#imports
import keras
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
np.random.seed(0)
#loading data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
X = x_train[0].reshape(1,-1)/255.; Y = y_train[0]
zeros = np.zeros(10); zeros[Y] = 1
Y = zeros
show_img(X,Y)
num_hidden_nodes = 200
num_classes = 10
# init weights
W1 = np.random.uniform(-1e-3,1e-3,size=(784,num_hidden_nodes))
b1 = np.zeros((1,num_hidden_nodes))
W2 = np.random.uniform(-1e-3,1e-3,size=(num_hidden_nodes,num_classes))
b2 = np.zeros((1,num_classes))
#forward pass
# multiply input with weights
Z1 = np.add(np.matmul(X,W1), b1)
def sigmoid(z):
return 1 / (1 + np.exp(-z))
def d_sigmoid(g):
return sigmoid(g) * (1. - sigmoid(g))
# activation function of Z1
X2 = sigmoid(Z1)
Z2 = np.add(np.matmul(X2,W2), b2)
# softmax
def softmax(z):
# subtracting the max adds numerical stability
shiftx = z - np.max(z)
exps = np.exp(shiftx)
return exps / np.sum(exps)
def d_softmax(Y_hat, Y):
return Y_hat - Y
# the hypothesis,
Y_hat = softmax(Z2)
cost = -1 * np.sum(Y * np.log(Y_hat))
#backprop math
dJ_dZ2 = d_softmax(Y_hat, Y)
dJ_dW2 = np.transpose(X2) * dJ_dZ2
dJ_db2 = dJ_dW2 * d_sigmoid(Z2)
dJ_dX2 = dJ_db2 * W2
dJ_dZ1 = np.transpose(dJ_dX2) * d_sigmoid(Z1)
dJ_dW1 = np.transpose(X) * dJ_dX2
dJ_db1 = ??
Here are the steps for backprop, shown with the math representation above (each "here" link was an image of the math to be implemented in numpy).

Output:
dJ_dZ2 = d_softmax(Y_hat, Y)

Second Layer Weights:
dJ_dW2 = np.transpose(X2) * dJ_dZ2

Second Layer Bias:
dJ_db2 = dJ_dW2 * d_sigmoid(Z2)

Second Layer Input:
dJ_dX2 = dJ_db2 * W2

First Layer Activation:
dJ_dZ1 = np.transpose(dJ_dX2) * d_sigmoid(Z1)

First Layer Weights:
dJ_dW1 = np.transpose(X) * dJ_dX2

First Layer Bias:
dJ_db1 = ??
I really have no idea how to convert this math to numpy; any help would be appreciated.
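For what it's worth, here is a hedged sketch (not the assignment's official solution) of how these chain-rule steps typically map onto numpy for this architecture. The size error comes from using `*`, which is element-wise/broadcasting multiplication, where a matrix product (`np.matmul` / `@`) is needed; the bias gradients then fall out as the corresponding dZ terms:

```python
import numpy as np

np.random.seed(0)
# shapes as in the question: X (1,784), W1 (784,200), W2 (200,10)
X  = np.random.random((1, 784))
W1 = np.random.uniform(-1e-3, 1e-3, size=(784, 200)); b1 = np.zeros((1, 200))
W2 = np.random.uniform(-1e-3, 1e-3, size=(200, 10));  b2 = np.zeros((1, 10))
Y  = np.zeros((1, 10)); Y[0, 5] = 1   # made-up one-hot target

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def d_sigmoid(g):
    return sigmoid(g) * (1. - sigmoid(g))

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

# forward pass
Z1 = X @ W1 + b1      # (1,200)
X2 = sigmoid(Z1)      # (1,200)
Z2 = X2 @ W2 + b2     # (1,10)
Y_hat = softmax(Z2)   # (1,10)

# backward pass: matrix products use @ (np.matmul), not elementwise * --
# that mismatch is what produced the (784,1) x (200,10) size error
dJ_dZ2 = Y_hat - Y               # (1,10)   softmax + cross-entropy combined
dJ_dW2 = X2.T @ dJ_dZ2           # (200,10) same shape as W2
dJ_db2 = dJ_dZ2                  # (1,10)   bias gradient is just the upstream dZ2
dJ_dX2 = dJ_dZ2 @ W2.T           # (1,200)
dJ_dZ1 = dJ_dX2 * d_sigmoid(Z1)  # (1,200)  elementwise IS correct here
dJ_dW1 = X.T @ dJ_dZ1            # (784,200) same shape as W1
dJ_db1 = dJ_dZ1                  # (1,200)  the missing step: bias gradient = dZ1
```

A useful sanity check is that every dJ_dW has exactly the same shape as the W it updates.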
See also questions close to this topic

Floyd–Warshall modified algorithm
I'm trying to find all the shortest paths between pairs of vertices using the Floyd–Warshall algorithm. The original Floyd–Warshall algorithm determines only one shortest path between a pair of vertices, but for my problem I want to calculate all shortest paths (i.e., equal-weight paths) between every vertex pair.
Based on some earlier answers, I've managed to write the C++ code as follows:
#include <set>
#include <vector>
#include <map>
#include <array>
#include <climits>
#include <algorithm>
using namespace std;
typedef vector<vector<set<int> > > fl;

fl fw(int n, vector<array<float,4>> edge)
{
    vector<vector<float> > cost(n);
    vector<vector<set<int> > > next_node(n);
    for(int i=0;i<n;i++)
    {
        next_node[i]=vector<set<int> >(n);
        cost[i]=vector<float>(n,INT_MAX);
        cost[i][i]=0;
    }
    for(auto &i:edge)
    {
        if(i[2]<cost[i[0]][i[1]])
        {
            cost[i[0]][i[1]]=i[2];
            next_node[i[0]][i[1]]=set<int>{(int)i[1]};
        }
        else if(i[2]==cost[i[0]][i[1]] && i[2]<INT_MAX)  // compare the weight, as in the Python reference
        {
            next_node[i[0]][i[1]].insert(i[1]);
        }
    }
    for(int k=0;k<n;k++)
    {
        for(int i=0;i<n;i++)
        {
            for(int j=0;j<n;j++)
            {
                float cost_ik_kj = cost[i][k]+cost[k][j];
                if(cost_ik_kj < cost[i][j])
                {
                    cost[i][j]=cost_ik_kj;
                    next_node[i][j]=next_node[i][k];  // want a copy, as in the Python version
                }
                else if(cost_ik_kj==cost[i][j] && cost_ik_kj<INT_MAX)
                {
                    next_node[i][j].insert(next_node[i][k].begin(), next_node[i][k].end());
                }
            }
        }
    }
    return next_node;
}
But after returning the next_node vector, I'm unsure how to reconstruct the paths from it in C++11. What would the path_reconstruction function look like? The input edge to the fw function is in edge-list format, i.e.,
vector<vector<float>> edge{{0,2,2},{2,3,2},{3,1,1},{1,0,4},{1,2,3},{0,3,4},{3,0,5}};
For reference: I used the Python code below as a reference for writing the above C++ code.
from math import inf

def floyd_warshall(n, edge):
    rn = range(n)
    cost = [[inf] * n for i in rn]
    next_node = [[set() for j in rn] for i in rn]
    for i in rn:
        cost[i][i] = 0
    for u, v, w in edge:
        # The data format allows multiple edges between two nodes.
        if w < cost[u][v]:
            cost[u][v] = w
            next_node[u][v] = set([v])
        elif w == cost[u][v] and w < inf:
            next_node[u][v].add(v)
    for k in rn:
        for i in rn:
            for j in rn:
                cost_ik_kj = cost[i][k] + cost[k][j]
                if cost_ik_kj < cost[i][j]:
                    cost[i][j] = cost_ik_kj
                    next_node[i][j] = set(next_node[i][k])  # Want a copy.
                elif cost_ik_kj == cost[i][j] and cost_ik_kj < inf:
                    next_node[i][j].update(next_node[i][k])
    return next_node
The path reconstruction function using next_node is an iterator function, all_paths:
def all_paths(next_node, i, j):
    if 0 == len(next_node[i][j]):
        if i == j:
            yield [j]
        else:
            pass  # There is no path.
    else:
        for k in next_node[i][j]:
            for rest in all_paths(next_node, k, j):
                yield [i] + rest

edge = [[0,2,2],[2,3,2],[3,1,1],[1,0,4],[1,2,3],[0,3,4],[3,0,5]]
# Here n is the number of vertices. In the above graph, n=4.
n = 4
next_node = floyd_warshall(n, edge)
for i in range(4):
    for j in range(4):
        for path in all_paths(next_node, i, j):
            print((i, j, path))
Edited: If there is a better way of doing it, it's quite welcome :)

How to improve text detection for a bad quality image?
Thresholding and text detection usually work well on a good quality image, but not on a bad one. I think the bad image needs some preprocessing, but I don't know how. Do you have any ideas to improve text detection for bad quality images?
import sys
import cv2
import math

img = cv2.imread(sys.argv[1])
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, bw = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_DILATE, (5, 5))
grad = cv2.morphologyEx(bw, cv2.MORPH_GRADIENT, kernel)
contours, hierarchy = cv2.findContours(grad, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.01 * peri, True)
    x, y, w, h = cv2.boundingRect(approx)
    if math.fabs(cv2.contourArea(approx)) > 100:
        cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
cv2.imshow('rects', img)
cv2.waitKey(0)
good quality image
bad quality image
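One common preprocessing step for poor-quality scans is replacing the fixed threshold of 100 with an automatically chosen one, e.g. Otsu's method (`cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)`) or `cv2.adaptiveThreshold`. As a dependency-free sketch of the idea, here is Otsu's method in plain numpy (function name is my own, not from the question):

```python
import numpy as np

def otsu_threshold(gray):
    """Pick a global threshold by maximizing between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = 0.0       # pixel count of the "background" class so far
    sum0 = 0.0     # intensity sum of the "background" class so far
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

This only addresses the thresholding step; really degraded images may also need denoising or contrast normalization first.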

Using of regex in python
I'm trying to use regex in my Python script but it didn't work.
I have a Python script that is supposed to read this data from a txt file and transfer it to a csv file:
Example data in the txt file
0.0 testing_1
1.0 testing_2
5.0 testing_3
4.5 testing_4
I want to use regex for the first 4 characters of the line, which contain a space, then another space or a dash, then a number, then a dot. Example regex: ( )\d. I want to use regex since the characters change, but it didn't work.
Here's my code:
import csv
import re

# open and read the txt file.
text_file = open("extractspamreport.txt", "r")
# Read each line of text file and save it in lines.
lines = text_file.readlines()
# Make a csv file.
mycsv = csv.writer(open('OutPut.csv', 'w'))
# Write header for csv file.
mycsv.writerow(['Rule Name'])
mycsv.writerow(['Points'])

# problem starts here
testvar = re.search(" ( )\d+.", lines)
n = 0
for line in lines:
    n = n + 1
n = 0
for line in lines:
    n = n + 1
    if testvar in line:
        # this is just for checking if the regex is correct
        print("hello world")
Here's the error:
Traceback (most recent call last):
  File "test2.py", line 24, in <module>
    testvar = re.search(" ( )\d+.", lines)
  File "C:\Users\testf\AppData\Local\Programs\Python\Python35\lib\re.py", line 173, in search
    return _compile(pattern, flags).search(string)
TypeError: expected string or bytes-like object
Is there any way to get that data using regex?
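The traceback happens because re.search expects a string, while lines is a list, so the search has to run on each line individually. A hedged sketch (the pattern and the inline sample data are my own guesses at the intent):

```python
import re

# Hypothetical pattern: optional space/dash prefix, a number like 0.0,
# then the rule name -- adjust to the real file format.
pattern = re.compile(r'^[\s-]*\d+\.\d+\s+(\S+)')

# sample lines standing in for text_file.readlines()
lines = ["0.0 testing_1", "1.0 testing_2", "5.0 testing_3", "4.5 testing_4"]

rows = []
for line in lines:
    m = pattern.search(line)  # search each *line*; re.search needs a string, not a list
    if m:
        points = line.split()[0]
        rows.append((m.group(1), points))
```

Each (rule name, points) tuple in rows can then be passed to mycsv.writerow(...).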

How to remove duplicate combinations from an array in python
Suppose I have a list or a numpy array
[[3 6]
 [1 5]
 [2 3]
 [2 6]
 [0 4]
 [2 4]
 [0 2]]
Since [3,6] and [2,3] occur, I want to remove the third possible combination of 2, 3 and 6, which is [2,6].
Similarly, [0,4] and [2,4] occur, so I want to remove the third possible combination of 0, 2 and 4, which is [0,2].
Essentially, from any possible combination of 3 numbers, only the 2 pairs that occur first should remain; the third should be removed.
The final output should be
[[3 6]
 [1 5]
 [2 3]
 [0 4]
 [2 4]]
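One possible sketch (assuming "occur first" means earlier pairs in the list win): keep a pair only if it does not complete a triangle with two pairs that were already kept. The function name is my own:

```python
def drop_triangle_completions(pairs):
    """Keep a pair unless it forms a triangle with two already-kept pairs."""
    kept = []
    kept_sets = set()          # kept pairs as frozensets for order-free lookup
    for a, b in pairs:
        seen_values = set(x for s in kept_sets for x in s)
        completes = any(
            frozenset((a, c)) in kept_sets and frozenset((b, c)) in kept_sets
            for c in seen_values
        )
        if not completes:
            kept.append([a, b])
            kept_sets.add(frozenset((a, b)))
    return kept

pairs = [[3, 6], [1, 5], [2, 3], [2, 6], [0, 4], [2, 4], [0, 2]]
print(drop_triangle_completions(pairs))
# -> [[3, 6], [1, 5], [2, 3], [0, 4], [2, 4]]
```

For a numpy array input, np.asarray(...).tolist() first would give the same result.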

How do I convert a numpy array of 6 lists, each containing 6 RGBA values, into an accurate 6 by 6 pixel image?
I've tried everything, tkinter, turtle, PIL, matplotlib. I want to convert this:
(each innermost list contains RGBA values)
[[[255   0   0 255] [255   0   0 255] [255   0   0 255] [255 127  39 255] [255 127  39 255] [255 127  39 255]]
 [[255   0   0 255] [255   0   0 255] [255   0   0 255] [255 127  39 255] [255 127  39 255] [255 127  39 255]]
 [[255   0   0 255] [255   0   0 255] [255   0   0 255] [255 127  39 255] [255 127  39 255] [255 127  39 255]]
 [[ 34 177  76 255] [ 34 177  76 255] [ 34 177  76 255] [  0 162 232 255] [  0 162 232 255] [  0 162 232 255]]
 [[ 34 177  76 255] [ 34 177  76 255] [ 34 177  76 255] [  0 162 232 255] [  0 162 232 255] [  0 162 232 255]]
 [[ 34 177  76 255] [ 34 177  76 255] [ 34 177  76 255] [  0 162 232 255] [  0 162 232 255] [  0 162 232 255]]]
code I have tried:
plt.axis('off')
plt.imshow(outputPixels, aspect='auto')
plt.show()

Image.fromarray(outputPixels, mode='RGB')
(I would show you the Tkinter version, but I deleted it; just picture a for loop adding and gridding a canvas the size of a pixel into a 6 by 6 image.)
At first, I used PIL imsave, but that was inaccurate. Then matplotlib, but I couldn't change the size. I even used tkinter canvases (each canvas representing a pixel), but everything was black.
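A sketch of one approach with PIL, assuming the data is the 6x6 RGBA array shown above: convert it to a uint8 numpy array, use mode='RGBA' (not 'RGB', since there are 4 channels), and upscale with NEAREST resampling so each source pixel stays a sharp square:

```python
import numpy as np
from PIL import Image

# 6x6 RGBA array matching the four colour blocks in the question
pixels = np.array(
    [[[255, 0, 0, 255]] * 3 + [[255, 127, 39, 255]] * 3] * 3 +
    [[[34, 177, 76, 255]] * 3 + [[0, 162, 232, 255]] * 3] * 3,
    dtype=np.uint8,            # PIL needs uint8 for 8-bit RGBA
)

img = Image.fromarray(pixels, mode='RGBA')   # RGBA, not RGB: there are 4 channels
# upscale with NEAREST so each source pixel stays a crisp square
big = img.resize((120, 120), resample=Image.NEAREST)
# big.save('out.png') would then write the image to disk
```

Mode mismatches (RGBA data with mode='RGB') are a common cause of garbled or black output here.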

Choosing random number where probability is random in Python
While I can find decent information on how to generate numbers based on probabilities for picking each number with numpy.random.choice e.g.:
np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0])
which picks 0 with probability p =.1, 1 with p = 0, 2 with p = .3, 3 with p = .6 and 4 with p = 0.
What I would like to know is, what function will vary the probabilities? So for example, one time I might have the probability distribution above and the next maybe p=[0.25, .1, 0.18, 0.2, .27]. So I would like to generate probability distributions on the fly. Is there a Python library that does this?
What I am wanting to do is to generate arrays, each of length n with numbers from some probability distribution, such as above.
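One option (a sketch, not the only way): numpy can generate the probability vectors themselves, e.g. by drawing them from a Dirichlet distribution, and then feed each fresh vector to choice. The function name is my own:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_with_random_probs(n_values=5, n_draws=3):
    # Draw a fresh probability vector from a Dirichlet distribution;
    # alpha = (1, ..., 1) samples uniformly over all valid probability vectors.
    p = rng.dirichlet(np.ones(n_values))
    return rng.choice(n_values, size=n_draws, p=p), p

draws, p = draw_with_random_probs()
```

Skewing the alpha parameters (e.g. np.ones(n) * 10) makes the sampled distributions closer to uniform, while small alphas make them spikier.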

Weights in Numpy Neural Net Not Updating, Error is Static
I'm trying to build a neural network on the Mnist dataset for a HW assignment. I'm not asking anyone to DO the assignment for me, I'm just having trouble figuring out why the Training accuracy and Test Accuracy seem to be static for every epoch?
It's as if my way of updating weights is not working.
Epoch: 0, Train Accuracy: 10.22%, Train Cost: 3.86, Test Accuracy: 10.1%
Epoch: 1, Train Accuracy: 10.22%, Train Cost: 3.86, Test Accuracy: 10.1%
Epoch: 2, Train Accuracy: 10.22%, Train Cost: 3.86, Test Accuracy: 10.1%
Epoch: 3, Train Accuracy: 10.22%, Train Cost: 3.86, Test Accuracy: 10.1%
.
.
.
However, when I run the actual forward and backprop lines in a loop without any of the 'fluff' of classes or methods, the cost goes down. I just can't seem to get it working in the current class setup.
I've tried building my own methods that pass the weights and biases between the backprop and feedforward methods explicitly, however, those changes haven't done anything to fix this gradient descent issue.
I'm pretty sure it has to do with the definition of the backprop method in the NeuralNetwork class below. I've been struggling to find a way to update the weights by accessing the weight and bias variables in the main training loop.
def backward(self, Y_hat, Y):
    '''
    Backward pass through network. Update parameters

    INPUT
        Y_hat: Network predicted
            shape: (?, 10)
        Y: Correct target
            shape: (?, 10)

    RETURN
        cost: calculate J for errors
            type: (float)
    '''
    # Naked Backprop
    dJ_dZ2 = Y_hat - Y
    dJ_dW2 = np.matmul(np.transpose(X2), dJ_dZ2)
    dJ_db2 = Y_hat - Y
    dJ_dX2 = np.matmul(dJ_db2, np.transpose(NeuralNetwork.W2))
    dJ_dZ1 = dJ_dX2 * d_sigmoid(Z1)
    inner_mat = np.matmul(Y - Y_hat, np.transpose(NeuralNetwork.W2))
    dJ_dW1 = np.matmul(np.transpose(X), inner_mat) * d_sigmoid(Z1)
    dJ_db1 = np.matmul(Y - Y_hat, np.transpose(NeuralNetwork.W2)) * d_sigmoid(Z1)

    lr = 0.1
    # weight updates here
    # just line 'em up and do lr * the dJ_.. vars you found above
    NeuralNetwork.W2 = NeuralNetwork.W2 - lr * dJ_dW2
    NeuralNetwork.b2 = NeuralNetwork.b2 - lr * dJ_db2
    NeuralNetwork.W1 = NeuralNetwork.W1 - lr * dJ_dW1
    NeuralNetwork.b1 = NeuralNetwork.b1 - lr * dJ_db1

    # calculate the cost
    cost = -1 * np.sum(Y * np.log(Y_hat))

    # calc gradients
    # weight updates
    return cost  #, W1, W2, b1, b2
I'm really at a loss here, any help is appreciated!
Full code is shown here...
import keras
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist

np.random.seed(0)

"""### Load MNIST Dataset"""
(x_train, y_train), (x_test, y_test) = mnist.load_data()

X = x_train[0].reshape(1,-1)/255.; Y = y_train[0]
zeros = np.zeros(10); zeros[Y] = 1
Y = zeros

# Here we implement the forward pass for the network using the single example, $X$, from above

### Initialize weights and Biases
num_hidden_nodes = 200
num_classes = 10

# init weights
# first set of weights (these are what the input matrix is multiplied by)
W1 = np.random.uniform(-1e-3,1e-3,size=(784,num_hidden_nodes))
# this is the first bias layer and i think it's a 200-dimensional vector of the biases that go into each neuron before the sigmoid function.
b1 = np.zeros((1,num_hidden_nodes))
# again these are the weights for the 2nd layer that are multiplied by the activation output of the 1st layer
W2 = np.random.uniform(-1e-3,1e-3,size=(num_hidden_nodes,num_classes))
# these are the biases that are added to each neuron before the final softmax activation.
b2 = np.zeros((1,num_classes))

# multiply input with weights
Z1 = np.add(np.matmul(X,W1), b1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def d_sigmoid(g):
    return sigmoid(g) * (1. - sigmoid(g))

# activation function of Z1
X2 = sigmoid(Z1)
Z2 = np.add(np.matmul(X2,W2), b2)

# softmax
def softmax(z):
    # subtracting the max adds numerical stability
    shiftx = z - np.max(z)
    exps = np.exp(shiftx)
    return exps / np.sum(exps)

def d_softmax(Y_hat, Y):
    return Y_hat - Y

# the hypothesis
Y_hat = softmax(Z2)

"""Initially the network guesses all categories equally. As we perform backprop the network will get better at discerning images and their categories."""

"""### Calculate Cost"""
cost = -1 * np.sum(Y * np.log(Y_hat))

# so i think the main thing here is like a nested chain rule thing, where we find the change in the cost with respect to each
# set of matrix weights and biases?
# here is probably the order of how we do things based on what's in the math below...
'''
1. find the partial deriv of the cost function with respect to the output of the second layer, without the softmax it looks like for some reason?
2. find the partial deriv of the cost function with respect to the weights of the second layer, which is dope cause we can reuse the partial deriv from step 1
3. this one I know intuitively we're looking for the partial deriv of cost with respect to the bias term of the second layer, but how TF does that math translate into numpy? is that the same y_hat - Y from the first step? where is there another Y_hat - y?
4. This is also confusing cause I know where to get the weights for layer 2 from and how to transpose them, but again, where is the Y_hat - Y?
5. Here we take the missing partial deriv from step 4 and multiply it by the d_sigmoid function of the first layer outputs before activations.
6. In this step we multiply the first layer weights (transposed) by the var from 5
7. And this is weird too, this just seems like the same step as number 5 repeated for some reason but with y - y_hat instead of y_hat - y
'''
# look at tutorials like this https://www.youtube.com/watch?v=7qYtIveJ6hU
# I think the most backprop layer steps are fine without biases but how do we find the bias derivatives
# maybe just the hypothesis matrix minus the actual y matrix?
dJ_dZ2 = Y_hat - Y

# find partial deriv of cost w respect to 2nd layer weights
dJ_dW2 = np.matmul(np.transpose(X2), dJ_dZ2)

# finding the partial deriv of cost with respect to the 2nd layer biases
# I'm still not 100% sure why this is here and why it works out to Y_hat - Y
dJ_db2 = Y_hat - Y

# finding the partial deriv of cost with respect to 2nd layer inputs
dJ_dX2 = np.matmul(dJ_db2, np.transpose(W2))

# finding the partial deriv of cost with respect to Activation of layer 1
dJ_dZ1 = dJ_dX2 * d_sigmoid(Z1)

# y - yhat matmul 2nd layer weights
# I added the transpose to the W2 var because the matrices were not compatible sizes without it
inner_mat = np.matmul(Y - Y_hat, np.transpose(W2))
dJ_dW1 = np.matmul(np.transpose(X), inner_mat) * d_sigmoid(Z1)

class NeuralNetwork:
    # set learning rate
    lr = 0.01

    # init weights
    W1 = np.random.uniform(-1e-3,1e-3,size=(784,num_hidden_nodes))
    b1 = np.zeros((1,num_hidden_nodes))
    W2 = np.random.uniform(-1e-3,1e-3,size=(num_hidden_nodes,num_classes))
    b2 = np.zeros((1,num_classes))

    def __init__(self, num_hidden_nodes, num_classes, lr=0.01):
        '''
        # set learning rate
        lr = lr
        # init weights
        W1 = np.random.uniform(-1e-3,1e-3,size=(784,num_hidden_nodes))
        b1 = np.zeros((1,num_hidden_nodes))
        W2 = np.random.uniform(-1e-3,1e-3,size=(num_hidden_nodes,num_classes))
        b2 = np.zeros((1,num_classes))
        '''

    def forward(self, X1):
        '''
        Forward pass through the network

        INPUT
            X: input to network
                shape: (?, 784)

        RETURN
            Y_hat: prediction from output of network
                shape: (?, 10)
        '''
        Z1 = np.add(np.matmul(X,W1), b1)
        X2 = sigmoid(Z1)  # activation function of Z1
        Z2 = np.add(np.matmul(X2,W2), b2)
        Y_hat = softmax(Z2)
        # return the hypothesis
        return Y_hat

        # store input for backward pass
        # you can basically copy and paste what you did in the forward pass above here
        # think about what you need to store for the backward pass
        return

    def backward(self, Y_hat, Y):
        '''
        Backward pass through network.
        Update parameters

        INPUT
            Y_hat: Network predicted
                shape: (?, 10)
            Y: Correct target
                shape: (?, 10)

        RETURN
            cost: calculate J for errors
                type: (float)
        '''
        # Naked Backprop
        dJ_dZ2 = Y_hat - Y
        dJ_dW2 = np.matmul(np.transpose(X2), dJ_dZ2)
        dJ_db2 = Y_hat - Y
        dJ_dX2 = np.matmul(dJ_db2, np.transpose(NeuralNetwork.W2))
        dJ_dZ1 = dJ_dX2 * d_sigmoid(Z1)
        inner_mat = np.matmul(Y - Y_hat, np.transpose(NeuralNetwork.W2))
        dJ_dW1 = np.matmul(np.transpose(X), inner_mat) * d_sigmoid(Z1)
        dJ_db1 = np.matmul(Y - Y_hat, np.transpose(NeuralNetwork.W2)) * d_sigmoid(Z1)

        lr = 0.1
        # weight updates here
        # just line 'em up and do lr * the dJ_.. vars you found above
        NeuralNetwork.W2 = NeuralNetwork.W2 - lr * dJ_dW2
        NeuralNetwork.b2 = NeuralNetwork.b2 - lr * dJ_db2
        NeuralNetwork.W1 = NeuralNetwork.W1 - lr * dJ_dW1
        NeuralNetwork.b1 = NeuralNetwork.b1 - lr * dJ_db1

        # calculate the cost
        cost = -1 * np.sum(Y * np.log(Y_hat))

        # calc gradients
        # weight updates
        return cost  #, W1, W2, b1, b2

nn = NeuralNetwork(200, 10, lr=.01)

num_train = float(len(x_train))
num_test = float(len(x_test))

for epoch in range(10):
    train_correct = 0; train_cost = 0
    # training loop
    for i in range(len(x_train)):
        x = x_train[i]; y = y_train[i]
        # standardizing input to range 0 to 1
        X = x.reshape(1,784) / 255.
        # forward pass through network
        Y_hat = nn.forward(X)
        # get pred number
        pred_num = np.argmax(Y_hat)
        # check if prediction was accurate
        if pred_num == y:
            train_correct += 1
        # make a one hot categorical vector; same as keras.utils.to_categorical()
        zeros = np.zeros(10); zeros[y] = 1
        Y = zeros
        # compute gradients and update weights
        train_cost += nn.backward(Y_hat, Y)

    test_correct = 0
    # validation loop
    for i in range(len(x_test)):
        x = x_test[i]; y = y_test[i]
        # standardizing input to range 0 to 1
        X = x.reshape(1,784) / 255.
        # forward pass
        Y_hat = nn.forward(X)
        # get pred number
        pred_num = np.argmax(Y_hat)
        # check if prediction was correct
        if pred_num == y:
            test_correct += 1
        # no backward pass here!
    # compute average metrics for train and test
    train_correct = round(100*(train_correct/num_train), 2)
    test_correct = round(100*(test_correct/num_test ), 2)
    train_cost = round(train_cost/num_train, 2)

    # print status message every epoch
    log_message = 'Epoch: {epoch}, Train Accuracy: {train_acc}%, Train Cost: {train_cost}, Test Accuracy: {test_acc}%'.format(
        epoch=epoch, train_acc=train_correct, train_cost=train_cost, test_acc=test_correct)
    print(log_message)
Also, the project is in this Colab & ipynb notebook.
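One observation from the posted code: backward() uses the module-level X2, Z1 and X computed once from the single example at the top, rather than the activations of the batch just fed through forward(), which is worth double-checking. Independent of that, a finite-difference gradient check is a quick way to verify gradients like dJ_dW2 before debugging the training loop. A self-contained sketch with tiny made-up shapes (not the MNIST code itself):

```python
import numpy as np

# Finite-difference gradient check on a tiny sigmoid -> softmax ->
# cross-entropy network: if the analytic dJ_dW2 is right, it should
# match the numerical gradient closely.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

def cost_fn(X, Y, W1, b1, W2, b2):
    X2 = sigmoid(X @ W1 + b1)
    Y_hat = softmax(X2 @ W2 + b2)
    return -np.sum(Y * np.log(Y_hat))

# tiny shapes keep the check fast
X = rng.random((1, 4)); Y = np.array([[0., 1., 0.]])
W1 = rng.normal(scale=0.1, size=(4, 5)); b1 = np.zeros((1, 5))
W2 = rng.normal(scale=0.1, size=(5, 3)); b2 = np.zeros((1, 3))

# analytic gradient for W2: X2^T (Y_hat - Y)
X2 = sigmoid(X @ W1 + b1)
Y_hat = softmax(X2 @ W2 + b2)
dJ_dW2 = X2.T @ (Y_hat - Y)

# numerical gradient via central differences
eps = 1e-6
num = np.zeros_like(W2)
for i in range(W2.shape[0]):
    for j in range(W2.shape[1]):
        W2[i, j] += eps
        hi = cost_fn(X, Y, W1, b1, W2, b2)
        W2[i, j] -= 2 * eps
        lo = cost_fn(X, Y, W1, b1, W2, b2)
        W2[i, j] += eps   # restore
        num[i, j] = (hi - lo) / (2 * eps)

print(np.max(np.abs(num - dJ_dW2)))  # tiny if the analytic gradient is right
```

The same loop, pointed at W1, b1 or b2, checks the remaining gradients.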

Backpropagation vs Levenberg–Marquardt
Does anyone know the difference between backpropagation and Levenberg–Marquardt in neural network training? Sometimes I see that LM is considered a BP algorithm and sometimes I see the opposite. Your help will be highly appreciated.
Thank you.

All output neurons have the same activation value
I am learning how to make a neural network in JavaScript and I am having some trouble with the backpropagation. After training with data sets, each output neuron's value is the same. Also, the weights from a neuron in the last hidden layer to each of the output neurons are the same too.
For example, the values of 2 output neurons could be 0.25 and 0.25 when they should be 0.25 and 0.7.
Here is the code that I have been working on:
function NeuralNetwork() {
    this.layerCount = 0;
    this.gain = 1;
    this.layers = [];
    this.lr = 1;

    this.AddLayer = function (layer) {
        this.layers.push(layer);
        this.layerCount++;
    }

    this.Connect = function () {
        for (var n = 0; n < this.layerCount; n++) {
            for (var i = 0; i < this.layers[n].neuronCount; i++) {
                var neur = new Neuron();
                if (n < this.layerCount - 1) {
                    for (var j = 0; j < this.layers[n + 1].neuronCount; j++) {
                        neur.weightList.push(Math.random() * 2 - 1);
                    }
                }
                this.layers[n].AddNeuron(neur);
            }
        }
    }

    this.Train = function (args, output) {
        this.Predict(args);
        for (o in output) {
            var oneur = this.layers[this.layerCount - 1].neuronList[o];
            var grad = Derivative(oneur.aggrigation, this.gain);
            var err = output[0] - oneur.value;
            oneur.error = grad * err;
            oneur.bias += oneur.error * this.lr;
        }
        for (var i = this.layerCount - 2; i >= 0; i--) {
            for (var j = 0; j < this.layers[i].neuronCount; j++) {
                var neur = this.layers[i].neuronList[j];
                var sum = 0;
                for (var n = 0; n < this.layers[i + 1].neuronCount; n++) {
                    sum += this.layers[i + 1].neuronList[n].error * neur.weightList[n];
                }
                neur.error = sum * Derivative(neur.aggrigation, this.gain);
                for (var n = 0; n < this.layers[i + 1].neuronCount; n++) {
                    var nextN = this.layers[i + 1].neuronList[n];
                    grad = nextN.error * neur.value;
                    neur.weightList[n] += this.lr * nextN.error * neur.value;
                }
                neur.bias += this.lr * neur.error;
            }
        }
    }

    this.Predict = function (args) {
        for (k in args) {
            this.layers[0].neuronList[k].value = args[k];
        }
        for (var i = 1; i < this.layerCount; i++) {
            for (var j = 0; j < this.layers[i].neuronCount; j++) {
                var sum = 0;
                var neur = this.layers[i].neuronList[j];
                for (var n = 0; n < this.layers[i - 1].neuronCount; n++) {
                    var prevN = this.layers[i - 1].neuronList[n];
                    sum += prevN.value * prevN.weightList[j];
                }
                neur.aggrigation = sum + neur.bias;
                neur.value = Sigmoid(sum + neur.bias, this.gain);
            }
        }
    }
}

function Layer(n) {
    this.neuronCount = n;
    this.neuronList = [];
    this.AddNeuron = function (neuron) {
        this.neuronList.push(neuron);
    }
}

function Neuron() {
    this.aggrigation = 0;
    this.weightList = [];
    this.value = 0;
    this.bias = Math.random() * 2 - 1;
    this.error = 0;
    this.momentum = Math.random();
}

function Sigmoid(input, gain) {
    return 1 / (1 + Math.exp(-1 * gain * input))
}

function Derivative(input, gain) {
    return (Sigmoid(input, gain) * (1 - Sigmoid(input, gain)));
}
The Back Propagation part of it:
this.Train = function (args, output) {
    this.Predict(args);
    for (o in output) {
        var oneur = this.layers[this.layerCount - 1].neuronList[o];
        var grad = Derivative(oneur.aggrigation, this.gain);
        var err = output[0] - oneur.value;
        oneur.error = grad * err;
        oneur.bias += oneur.error * this.lr;
    }
    for (var i = this.layerCount - 2; i >= 0; i--) {
        for (var j = 0; j < this.layers[i].neuronCount; j++) {
            var neur = this.layers[i].neuronList[j];
            var sum = 0;
            for (var n = 0; n < this.layers[i + 1].neuronCount; n++) {
                sum += this.layers[i + 1].neuronList[n].error * neur.weightList[n];
            }
            neur.error = sum * Derivative(neur.aggrigation, this.gain);
            for (var n = 0; n < this.layers[i + 1].neuronCount; n++) {
                var nextN = this.layers[i + 1].neuronList[n];
                grad = nextN.error * neur.value;
                neur.weightList[n] += this.lr * nextN.error * neur.value;
            }
            neur.bias += this.lr * neur.error;
        }
    }
}
What am I doing wrong and how can I fix it so that the output neuron values stop being the same, and be more closely to their target values?

How to calculate the gradient of a matrix
let f(x) = [2x^2, 3y^5]
I know how to calculate the derivative of f(x), which will be [d/dx 2x^2, d/dx 3y^5].
Is there a similar process being done when calculating the gradient of f(x)? If not, then how do you calculate the gradient of f(x)?
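For a vector-valued function, the role of the gradient is played by the Jacobian matrix of partial derivatives. Since this f actually depends on two variables x and y, a sketch:

```latex
f(x,y) = \begin{bmatrix} 2x^2 \\ 3y^5 \end{bmatrix},
\qquad
J_f(x,y) = \begin{bmatrix}
\dfrac{\partial}{\partial x} 2x^2 & \dfrac{\partial}{\partial y} 2x^2 \\[4pt]
\dfrac{\partial}{\partial x} 3y^5 & \dfrac{\partial}{\partial y} 3y^5
\end{bmatrix}
= \begin{bmatrix} 4x & 0 \\ 0 & 15y^4 \end{bmatrix}
```

Each row is the gradient of one component function, so for a scalar-valued f the Jacobian reduces to the ordinary gradient (as a row vector).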

Plotting derivative of a signal
I am facing a problem in my code. I have multiple plots in a single figure. All my plots come out correctly, but unfortunately the plot of the derivative is not even visible in the figure. Can anyone please help? Thanks in advance.
close all
clear all
clc

load('20190131structTFMRIProband04Ivan.mat')
fs = structTFMRI.SigInfo.fs;  % Sampling frequency in Hz
% PlotLead = 10;  % ECG lead to plot (only used for 12-lead ECG)

%% 12-lead ECG (Outside and Inside MRI)
ECGOutSingle = structTFMRI.ECGMRI.ECG12OutSingle;  % 820 samples per beat (~800ms), 120 beats, 12 leads
ECGInSingle = structTFMRI.ECGMRI.ECG12InSingle;    % 820 samples per beat (~800ms), 182 beats, 12 leads

% Plot the data
figure;
hold on
for PlotLead = 4:4
    t = (0:1:length(ECGOutSingle(:,1,PlotLead))-1)/fs;  % Time vector for plot to have x-axis in seconds
    % plot(t, ECGOutSingle(:,1,PlotLead))
    % plot(t, ECGInSingle(:,1,PlotLead))
    plot(t, ECGInSingle(:,1,PlotLead) - ECGOutSingle(:,1,PlotLead))

    % MHD signal
    MHD = ECGInSingle(:,1,PlotLead) - ECGOutSingle(:,1,PlotLead);
    x = 1:length(ECGInSingle);
    [Zi,Zi_idx,Xi,Xi_idx,Bi,Bi_idx] = getZXB(MHD);
    y = diff(MHD);
    z = diff(x);
    plot(z, y);
    plot(x, MHD);
    plot(x(Bi), MHD(Bi), 'r*');
    plot(x(Xi), MHD(Xi), 'g*');
    plot(Zi_idx, Zi, 'b*');
end