Evaluate neural network accuracy based on several inputs (x) with the same label (y)
I am using a CNN model to predict the speaker of a text. The model's input is a single word, but the overall input is a sentence from one person. I want the model accuracy to be calculated as follows: predict the speaker of each word, then take the speaker predicted most often as the prediction for the sentence.
Is there a good way to do this, rather than manually predicting and tallying?
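If the per-word model already exists, one way to avoid manual bookkeeping is to run model.predict on all words of a sentence at once and take the mode of the per-word argmaxes. A minimal sketch of the voting step (the probability values here are made up; plug in your model's softmax output):

```python
import numpy as np

def sentence_speaker(word_probs):
    """word_probs: (n_words, n_speakers) array of per-word softmax outputs.
    Returns the speaker id predicted for the most words (majority vote)."""
    per_word = word_probs.argmax(axis=1)   # predicted speaker id per word
    return np.bincount(per_word).argmax()  # most frequent speaker wins

# hypothetical per-word predictions for a 4-word sentence, 3 speakers
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.5, 0.4, 0.1]])
assert sentence_speaker(probs) == 0  # speaker 0 wins 3 of 4 words
```

Ties go to the lower speaker id here; if you need probability-weighted voting instead, sum the softmax rows and argmax the sum.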
See also questions close to this topic

How to compare the performance of different TensorFlow models like SSD, Faster R-CNN, etc.? Is there any metric?
I want to compare the performance of different TensorFlow models such as SSD and Faster R-CNN. Is there a standard metric for this?
Please guide me; any link to a suitable discussion or blog post would be very helpful.
Thanks!

ValueError: Cannot feed value of shape (11,) for Tensor 'Reshape:0', which has shape '(?, 11)' in TensorFlow
Problem: ValueError: Cannot feed value of shape (11,) for Tensor 'Reshape:0', which has shape '(?, 11)'
I know this type of question has been asked several times before, but I still cannot resolve my issue. Can anyone please help me figure out what is wrong with this code, and also explain the concept behind it? Thanks.
The dataset being used is Churn modelling dataset for bank account
The code is as follows:
    import pandas as pd
    import numpy as np

    data = pd.read_csv('D:\Churn_Modelling.csv')
    X = data.iloc[:, 3:13].values
    Y = data.iloc[:, 13].values

    from sklearn.preprocessing import LabelEncoder, OneHotEncoder
    label_encoder_x_1 = LabelEncoder()
    X[:, 1] = label_encoder_x_1.fit_transform(X[:, 1])
    label_encoder_x_2 = LabelEncoder()
    X[:, 2] = label_encoder_x_2.fit_transform(X[:, 2])
    one_hot_encoder = OneHotEncoder(categorical_features=[1])
    X = one_hot_encoder.fit_transform(X).toarray()
    X = X[:, 1:]

    from sklearn.model_selection import train_test_split
    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)

    from sklearn.preprocessing import StandardScaler
    sc = StandardScaler()
    X_train = sc.fit_transform(X_train)
    X_test = sc.fit_transform(X_test)

    import tensorflow as tf

    epochs = 20
    batch_size = 10
    learning_rate = 0.001
    output = 1
    n_features = X_train.shape[1]

    X = tf.placeholder("float32", [None, n_features])
    Y = tf.placeholder("float32", [None, 1])

    n_neurons_1 = 64
    n_neurons_2 = 32
    n_neurons_3 = 16

    def nnmodel(data):
        layer_1 = {'weights': tf.Variable(tf.random_normal([n_features, n_neurons_1])),
                   'biases': tf.Variable(tf.random_normal([n_neurons_1]))}
        layer_2 = {'weights': tf.Variable(tf.random_normal([n_neurons_1, n_neurons_2])),
                   'biases': tf.Variable(tf.random_normal([n_neurons_2]))}
        layer_3 = {'weights': tf.Variable(tf.random_normal([n_neurons_2, n_neurons_3])),
                   'biases': tf.Variable(tf.random_normal([n_neurons_3]))}
        output_layer = {'weights': tf.Variable(tf.random_normal([n_neurons_3, 1])),
                        'biases': tf.Variable(tf.random_normal([1]))}
        l1 = tf.add(tf.matmul(data, layer_1['weights']), layer_1['biases'])
        l1 = tf.nn.relu(l1)
        l2 = tf.add(tf.matmul(l1, layer_2['weights']), layer_2['biases'])
        l2 = tf.nn.relu(l2)
        l3 = tf.add(tf.matmul(l2, layer_3['weights']), layer_3['biases'])
        l3 = tf.nn.relu(l3)
        output = tf.matmul(l3, output_layer['weights']) + output_layer['biases']
        return output

    def next_batch(size, x, y):
        idx = np.arange(0, len(x))
        np.random.shuffle(idx)
        idx = idx[:size]
        x_shuffle = [x[i] for i in idx]
        y_shuffle = [y[i] for i in idx]
        return np.asarray(x_shuffle), np.asarray(y_shuffle)

    def train(x):
        prediction = nnmodel(x)
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction, labels=Y))
        optimizer = tf.train.AdamOptimizer().minimize(cost)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            print("Total epochs:", epochs)
            for epoch in range(epochs):
                epoch_loss = 0
                x_wala_data, y_wala_data = next_batch(batch_size, X_train, Y_train)
                for i in range(batch_size):
                    x_i = x_wala_data[i]
                    y_i = y_wala_data[i]
                    _, c = sess.run([optimizer, cost], feed_dict={X: x_i, Y: y_i})
                    epoch_loss += c
                print('Epoch', epoch, 'loss:', epoch_loss)
            correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
            accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
            print("Accuracy: ", accuracy.eval({X: X_test, Y: Y_test}))

    train(X)
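As for the error itself: sess.run is being fed one example of shape (11,) where the placeholder expects a batch shaped (?, 11); wrapping each example into a batch of one resolves the mismatch. A plain-NumPy sketch of the shape fix, separate from the model above:

```python
import numpy as np

x_i = np.zeros(11)            # one example, shape (11,) -- what the loop feeds now
y_i = 1.0                     # one scalar label

x_batch = x_i.reshape(1, -1)  # shape (1, 11), matches placeholder (?, 11)
y_batch = np.array([[y_i]])   # shape (1, 1), matches placeholder (?, 1)

assert x_batch.shape == (1, 11)
assert y_batch.shape == (1, 1)
```

In the training loop that means feeding `{X: x_i.reshape(1, -1), Y: np.array([[y_i]])}`, or better, feeding the whole mini-batch at once instead of one row at a time.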

What should I do to get low average loss?
I'm a student in hydraulic engineering working on a neural network during my internship, so this is new to me. I created my neural network, but it gives me a high loss and I don't know what the problem is... You can see the code:
    def create_model():
        model = Sequential()
        # Adding the input layer
        model.add(Dense(26, activation='relu', input_shape=(n_cols,)))
        # Adding the hidden layers
        model.add(Dense(60, activation='relu'))
        model.add(Dense(60, activation='relu'))
        model.add(Dense(60, activation='relu'))
        # Adding the output layer
        model.add(Dense(2))
        # Compiling the network
        model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
        return model

    kf = KFold(n_splits=5, shuffle=True)
    model = create_model()
    scores = []
    for i in range(5):
        result = next(kf.split(data_input), None)
        input_train = data_input[result[0]]
        input_test = data_input[result[1]]
        output_train = data_output[result[0]]
        output_test = data_output[result[1]]
        # Fitting the network to the training set
        model.fit(input_train, output_train, epochs=5000, batch_size=200, verbose=2)
        predictions = model.predict(input_test)
        scores.append(model.evaluate(input_test, output_test))

    print('Scores from each Iteration: ', scores)
    print('Average KFold Score :', np.mean(scores))
And when I execute my code, the result looks like this:
    Scores from each Iteration:
    [93.90406122928908, 0.8907562990148529]
    [89.5892979597845, 0.8907563030218878]
    [81.26530176050522, 0.9327731132507324]
    [56.46526102659081, 0.9495798339362905]
    [54.314151876112994, 0.9579831877676379]
    Average KFold Score : 38.0159922589274
Can anyone help me, please? What can I do to make the loss lower?
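One thing worth checking before touching the architecture: with mean squared error, unscaled targets in physical units (common in hydraulics, e.g. discharge or head values in the hundreds) produce numerically large losses even for a decent fit, and the 'accuracy' metric is not meaningful for regression. A hedged sketch of target scaling (the data_output values here are made up):

```python
import numpy as np

# hypothetical two-column targets with a large physical scale
data_output = np.array([[250.0, 480.0],
                        [310.0, 520.0],
                        [190.0, 450.0]])

mean, std = data_output.mean(axis=0), data_output.std(axis=0)
scaled = (data_output - mean) / std  # train the network against this
restored = scaled * std + mean       # invert the scaling after predicting

assert np.allclose(restored, data_output)
assert np.allclose(scaled.mean(axis=0), 0.0)
```

With scaled targets the MSE is in comparable units across folds; report RMSE in the original units by multiplying back by std.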

expected conv2d_7 to have shape (4, 268, 1) but got array with shape (1, 270, 480)
I'm having trouble with this autoencoder I'm building using Keras. The input's shape depends on the screen size, and the output is going to be a prediction of the next screen. However, there is an error that I cannot figure out. Please excuse my awful formatting on this website...
Code:
    def model_build():
        input_img = InputLayer(shape=(1, env_size()[1], env_size()[0]))
        x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
        x = MaxPooling2D((2, 2), padding='same')(x)
        x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
        x = MaxPooling2D((2, 2), padding='same')(x)
        x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
        encoded = MaxPooling2D((2, 2), padding='same')(x)
        x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
        x = UpSampling2D((2, 2))(x)
        x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
        x = UpSampling2D((2, 2))(x)
        x = Conv2D(32, (3, 3), activation='relu')(x)
        x = UpSampling2D((2, 2))(x)
        decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
        model = Model(input_img, decoded)
        return model

    if __name__ == '__main__':
        model = model_build()
        model.compile('adam', 'mean_squared_error')
        y = np.array([env()])
        print(y.shape)
        print(y.ndim)
        debug = model.fit(np.array([[env()]]), np.array([[env()]]))
Error:
    Traceback (most recent call last):
      File "/home/ai/Desktop/algernontest/rewarders.py", line 46, in <module>
        debug = model.fit(np.array([[env()]]), np.array([[env()]]))
      File "/home/ai/.local/lib/python3.6/site-packages/keras/engine/training.py", line 952, in fit
        batch_size=batch_size)
      File "/home/ai/.local/lib/python3.6/site-packages/keras/engine/training.py", line 789, in _standardize_user_data
        exception_prefix='target')
      File "/home/ai/.local/lib/python3.6/site-packages/keras/engine/training_utils.py", line 138, in standardize_input_data
        str(data_shape))
    ValueError: Error when checking target: expected conv2d_7 to have shape (4, 268, 1) but got array with shape (1, 270, 480)
EDIT:
Code for get_screen imported as env():
    def get_screen():
        img = screen.grab()
        img = img.resize(screen_size())
        img = img.convert('L')
        img = np.array(img)
        return img
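For what it's worth, the reported target shape (4, 268, 1) can be reproduced by tracing the decoder arithmetic: Keras here interprets the (1, 270, 480) input as channels-last (height 1, width 270, 480 channels), and the one Conv2D without padding='same' trims a 3x3 border. A small shape-tracing sketch (my own trace, not from the post):

```python
import math

def pool_same(n):
    # MaxPooling2D((2, 2), padding='same') halves a dimension, rounding up
    return math.ceil(n / 2)

h, w = 1, 270           # spatial dims if (1, 270, 480) is read as channels-last
for _ in range(3):      # three encoder poolings
    h, w = pool_same(h), pool_same(w)
assert (h, w) == (1, 34)

h, w = h * 2, w * 2     # UpSampling2D
h, w = h * 2, w * 2     # UpSampling2D
h, w = h - 2, w - 2     # the Conv2D without padding='same' trims a 3x3 border
h, w = h * 2, w * 2     # UpSampling2D
assert (h, w) == (4, 268)  # hence "expected conv2d_7 to have shape (4, 268, 1)"
```

Feeding the image as shape (270, 480, 1) (or setting data_format='channels_first' consistently), padding every decoder conv with 'same', and choosing dimensions divisible by 8 would make input and output shapes line up.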

Simulation and analysis of a deep learning network for parallel hardware implementation (speech recognition)
How can I implement the project "Simulation and analysis of deep learning network for parallel hardware implementation"?

PyTorch weights and outputs aren't changing
I'm really enjoying using PyTorch for classification and regression. I have an interesting new problem to solve and I can't quite figure out the solution, I feel like I'm really close.
My problem:
- I have created a network with three outputs; let's call them x, y and z.
- I have a function F(x, y, z) that returns a value between 0.0 and 100.0, where 100 is better.
- My custom loss is therefore 100 - F(x, y, z) at each step.
- The goal is to figure out the best combination of outputs for problem F(...). (I know a genetic algorithm will outperform this; my current project is to prove that on an array of problems.)
To implement the above, I force the network to take one piece of input data with a batch size of 1, and then in the loss I completely ignore the 'true' and 'predicted' values and replace the loss with 100 - F(x, y, z). Basically, the weights and outputs lead to one solution at every epoch, and the loss is that solution's distance from the maximum possible fitness (i.e. a fitness of 100 results in a loss of 100 - 100 = 0).
Outputs are rounded to integers, since F(...) requires integers. To prevent this from being an issue, I use a large momentum and learning rate.
The issue I'm having is that, although the loss function is running and my first [x,y,z] is being evaluated, the values never change. The network isn't learning from the results produced.
My code is as follows. Note that testnetwork() is too long to paste, but it is the F(x, y, z) mentioned above; any dummy function can replace it, e.g. return x + z - y / 2, so that the network minimises 100 - (x + z - y / 2).
    import torch
    import torch.nn as nn
    from testnetwork import *

    n_in, n_h, n_out, batch_size = 10, 5, 3, 5

    x = torch.randn(batch_size, n_in)
    y = torch.tensor([[1.0], [1.0], [1.0], [1.0], [1.0], [1.0], [1.0], [0.0], [1.0], [1.0]])

    model = nn.Sequential(nn.Linear(n_in, n_h),
                          nn.ReLU(),
                          nn.ReLU())

    def fitness(string):
        print(string)
        list = string.split(",")
        list[0] = int(round(float(list[0])))
        list[1] = int(round(float(list[1])))
        list[2] = int(round(float(list[2])))
        print(list)
        loss = 100 - testnetwork(list[0], list[1], list[2])
        return loss

    def my_loss(output, target):
        table = str.maketrans(dict.fromkeys('tensor()'))
        ftn = fitness(str(output.data[0][0]).translate(table) + ", " +
                      str(output.data[0][1]).translate(table) + ", " +
                      str(output.data[0][2]).translate(table))
        loss = torch.mean((output - output) + ftn)
        return loss

    #optimizer = torch.optim.SGD(model.parameters(), lr=1, momentum=2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1, momentum=2)

    for epoch in range(10):
        # Forward propagation
        y_pred = model(x)

        # Compute and print loss
        loss = my_loss(y_pred, y)
        print('epoch: ', epoch, ' loss: ', loss.item())

        # Zero the gradients
        optimizer.zero_grad()

        # Perform a backward pass (backpropagation)
        loss.backward(retain_graph=True)

        # Update the parameters
        optimizer.step()
Thank you so much for reading my post!
    epoch:  0  loss:  50.339725494384766
    0., 0.0200, 0.6790
    [0, 0, 1]
    testing: [0, 0, 1]
    epoch:  1  loss:  50.339725494384766
    0., 0.0200, 0.6790
    [0, 0, 1]
    testing: [0, 0, 1]
    epoch:  2  loss:  50.339725494384766
    0., 0.0200, 0.6790
    [0, 0, 1]
    testing: [0, 0, 1]
    epoch:  3  loss:  50.339725494384766
    0., 0.0200, 0.6790
    [0, 0, 1]
    testing: [0, 0, 1]
    epoch:  4  loss:  50.339725494384766
    0., 0.0200, 0.6790
    [0, 0, 1]
...and so on; nothing seems to change from epoch to epoch.
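The likely culprit: in my_loss, the fitness is computed from output.data, which is detached from the autograd graph, and the values are rounded besides; the only graph-connected term is (output - output), whose gradient is identically zero, so the optimiser has nothing to follow. A plain-NumPy finite-difference sketch of the same structure (toy one-weight "network", my own stand-in for testnetwork):

```python
import numpy as np

def fitness_like(out):
    # stand-in for 100 - testnetwork(...): computed from rounded, detached
    # values, so it behaves as a constant with respect to the weights
    return 100.0 - float(np.round(out).sum())

def loss(w, x=3.0):
    out = w * x                                       # toy "network" output
    return np.mean((out - out) + fitness_like(out))   # mirrors my_loss above

# the finite-difference gradient is exactly zero almost everywhere
g = (loss(2.0 + 1e-4) - loss(2.0 - 1e-4)) / 2e-4
assert g == 0.0
```

To make this learnable you would need the loss to be a differentiable function of the output tensor itself (no .data, no rounding), or switch to a gradient-free optimiser, which is essentially what the genetic-algorithm comparison does.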

Paginating Data to Keras Model with GPU
When I train a Keras word embedding model on the entire corpus, I get the following error on my ec2 instance:
tensorflow.python.framework.errors_impl.InternalError: failed initializing StreamExecutor for CUDA device ordinal 0: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 16914055168
What is the best way to get around this? I have tried loading the full corpus into memory and then paginating through the in-memory dataset as follows:
    while start_page_index < training_size:
        logger.info('[Paging] Training index %s through %s', start_page_index, (start_page_index + page_size))
        batch = t.generate_batch(n_positive,
                                 start_page_index=start_page_index,
                                 page_size=page_size,
                                 negative_ratio=negative_ratio)
        h = model.fit_generator(
            batch,
            epochs=2,
            steps_per_epoch=int(training_size / (n_positive * (negative_ratio + n_positive))),
            verbose=2
        )
        start_page_index += (page_size + 1)
But this gives me the same CUDA_OUT_OF_MEMORY error.
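Note that this error is about GPU memory, not host memory, so loading the whole corpus into RAM and slicing it does not help; the usual fixes are a smaller batch size, limiting TensorFlow's GPU memory growth, and a generator that materialises only one batch at a time for fit_generator. A hedged sketch of such a generator (the feature and label construction is a made-up stand-in for the real corpus):

```python
import numpy as np

def batch_generator(corpus_size, batch_size):
    """Yield (inputs, labels) batches indefinitely, one batch in memory at a time."""
    while True:
        idx = np.random.randint(0, corpus_size, size=batch_size)
        inputs = idx.astype(np.float32).reshape(-1, 1)  # stand-in for real features
        labels = (idx % 2).astype(np.float32)           # stand-in for real labels
        yield inputs, labels

gen = batch_generator(corpus_size=10_000, batch_size=32)
x, y = next(gen)
assert x.shape == (32, 1) and y.shape == (32,)
```

With a generator like this, a single model.fit_generator(gen, steps_per_epoch=..., epochs=...) call replaces the outer paging loop entirely.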
How can I repeat one pixel of a tensor across all pixels of a new tensor?
I have a Keras input tensor named wtm with shape (1, 4, 4, 1). I need to access each pixel of this tensor and, each time, broadcast that pixel to a new (1, 28, 28) tensor and add it to the encoder output. For example, suppose wtm has values 0 or 1: first I need the value of a given pixel, then I want to produce a new tensor of shape (1, 28, 28, 1) whose every value equals that pixel's value. I think I should use a Lambda layer, but I do not know how to access each value of the tensor. Could you please tell me how to do this?
    wtm = Input((4, 4, 1))
    image = Input((28, 28, 1))
    conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1e')(image)
    conv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2e')(conv1)
    conv3 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl3e')(conv2)
    BN = BatchNormalization()(conv3)
    encoded = Conv2D(1, (5, 5), activation='relu', padding='same', name='encoded_I')(BN)
I need something like this:
    wtmN = Kr.layers.Lambda(K.tile, arguments={'n': (1, 28, 28, 1)})(wtm[:, 1, 1, :])
    add_const = Kr.layers.Lambda(lambda x: x[0] + x[1])
    encoded_merged = add_const([encoded, wtmN])
but it produces this error:
    Traceback (most recent call last):
      File "", line 64, in
        wtmN=Kr.layers.Lambda(K.tile,arguments={'n':(1,28,28,1)})(wtm[:,1,1,:])
      File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
        output = self.call(inputs, **kwargs)
      File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\layers\core.py", line 687, in call
        return self.function(inputs, **arguments)
      File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\backend\tensorflow_backend.py", line 2191, in tile
        return tf.tile(x, n)
      File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 8805, in tile
        "Tile", input=input, multiples=multiples, name=name)
      File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
        return func(*args, **kwargs)
      File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
        op_def=op_def)
      File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 1792, in __init__
        control_input_ops)
      File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 1631, in _create_c_op
        raise ValueError(str(e))
    ValueError: Shape must be rank 2 but is rank 4 for 'lambda_1/Tile' (op: 'Tile') with input shapes: [?,1], [4].
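The error is a rank mismatch: tile requires the multiples argument to have one entry per input dimension, but the integer indices in wtm[:, 1, 1, :] collapse the tensor to rank 2 while n = (1, 28, 28, 1) has four entries. Slicing with ranges instead of integers keeps the rank at 4; NumPy's tile illustrates the same rule:

```python
import numpy as np

wtm = np.ones((1, 4, 4, 1))

# integer indices drop axes: rank 2, so tiling with 4 multiples would fail
assert wtm[:, 1, 1, :].shape == (1, 1)

# range slices keep all four axes, so a 4-entry multiples tuple is valid
px = wtm[:, 1:2, 1:2, :]           # shape (1, 1, 1, 1)
tiled = np.tile(px, (1, 28, 28, 1))
assert tiled.shape == (1, 28, 28, 1)
```

The same slicing fix applies inside the Lambda layer: pass wtm[:, 1:2, 1:2, :] to K.tile with n=(1, 28, 28, 1).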

How to enhance the performance of Mixed Autoencoder architecture?
I have data of shape (93, 32) and I want to compress it with an autoencoder whose bottleneck is a dense layer; in effect, I am trying to embed the spatial information in one dimension. I can design a simple fully connected autoencoder, but when the autoencoder includes Conv2D layers I get confused in the decoder part. Below is my code; the loss stops decreasing after 50 epochs and the reconstruction is not even close to the input. It would be really nice if you could point out the problem in my architecture. I am using LeakyReLU because my data has negative values.
    model = Sequential()
    model.add(Conv2D(filters=30, kernel_size=(93, 3), strides=(1, 1), input_shape=(93, 32, 1),
                     kernel_initializer='VarianceScaling'))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.7))
    model.add(MaxPooling2D(pool_size=(1, 4), strides=(1, 3), padding='valid'))
    model.add(Flatten())
    model.add(Dense(150, kernel_initializer='VarianceScaling'))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.7))
    model.add(Dense(40, kernel_initializer='VarianceScaling'))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.7))
    model.add(Dense(150, kernel_initializer='VarianceScaling'))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.7))
    model.add(Dense(270, kernel_initializer='VarianceScaling'))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.7))
    model.add(Reshape((1, 9, 30)))
    model.add(UpSampling2D((93, 4)))
    model.add(Conv2D(filters=1, kernel_size=(1, 1), strides=(1, 1), kernel_initializer='VarianceScaling'))
    model.add(LeakyReLU(alpha=0.7))
    model.add(Cropping2D(cropping=((0, 0), (0, 4))))
I am using plain mean squared error as the loss function. The problem is somewhere in the decoder part. Thanks for your time :)
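When debugging architectures like this, it helps to trace the shapes by hand with the valid-convolution formula out = (in - k) // s + 1. For the model above the wiring actually lines up on paper (Dense(270) matches Reshape((1, 9, 30)), and upsample-plus-crop returns to (93, 32)), which points at capacity or optimisation rather than shape bugs. A quick sketch of that arithmetic (my own trace):

```python
def conv_out(n, k, s=1):
    # output length of a 'valid' convolution or pooling window
    return (n - k) // s + 1

h = conv_out(93, 93)   # the (93, 3) kernel spans the full height -> 1
w = conv_out(32, 3)    # width after the kernel -> 30
w = conv_out(w, 4, 3)  # MaxPooling2D (1, 4), stride (1, 3) -> 9

assert (h, w) == (1, 9)
assert h * w * 30 == 270                 # Flatten matches Dense(270) / Reshape((1, 9, 30))
assert (h * 93, w * 4 - 4) == (93, 32)   # UpSampling2D((93, 4)) then cropping 4 columns
```

Given that, the suspects are the extreme (93, 4) upsampling recreating 93 rows from 1 (a single 1x1 conv cannot add the lost vertical detail back) and unscaled inputs; a decoder with several smaller upsampling steps, or transposed convolutions, usually reconstructs better.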

Speech-to-text conversion using Python
Can anyone help me write code (not using an existing library) that will listen to the microphone and convert what is said into text? I would be very thankful for any help with this.

Watson speech to text: Invalid credentials error (Code: 401)
I am trying to use the IBM Watson speech to text API/service in the following Python program.
    import json
    import os
    import sys
    from watson_developer_cloud import SpeechToTextV1

    def transcribe_audio(audio_file_name):
        IBM_USERNAME = "yourusername"
        IBM_PASSWORD = "yourpassword"  # what changes should be made here instead of username and password?
        stt = SpeechToTextV1(username=IBM_USERNAME, password=IBM_PASSWORD)
        audio_file = open(audio_file_name, "rb")
        json_file = os.path.abspath("america") + ".json"
        with open(json_file, 'w') as fp:
            result = stt.recognize(audio_file, timestamps=True, content_type='audio/wav',
                                   inactivity_timeout=1, word_confidence=True)
            result.get_result()
            json.dump(result, fp, indent=2)
        script = "Script is : "
        for rows in result['results']:
            script += rows['alternatives'][0]['transcript']
        print(script)

    transcribe_audio("america.wav")
This code gave me an authentication error, as mentioned in the title, because IBM recently changed the authorization method from username + password to an API key.
Could anybody tell me what changes should be made here? And also, how do I generate the API key for IBM Watson Speech to Text from my username and password?
I am new to speech recognition; please let me know. Thanks in advance.

Signal-to-signal prediction using RNN and Keras
I am trying to reproduce the nice work here and adapt it so that it reads real data from a file. I started by generating random signals (instead of using the generating methods provided in the above link). Unfortunately, I could not generate signals in a shape the model will accept.
here is the code:
    import numpy as np
    import keras
    from keras.utils import plot_model
    import random

    input_sequence_length = 15   # Length of the sequence used by the encoder
    target_sequence_length = 15  # Length of the sequence predicted by the decoder

    def getModel():
        # Define an input sequence.
        learning_rate = 0.01
        num_input_features = 1
        lambda_regulariser = 0.000001  # Will not be used if regulariser is None
        regulariser = None  # Possible regulariser: keras.regularizers.l2(lambda_regulariser)
        layers = [35, 35]
        num_output_features = 1
        decay = 0     # Learning rate decay
        loss = "mse"  # Other loss functions are possible, see Keras documentation.
        optimiser = keras.optimizers.Adam(lr=learning_rate, decay=decay)  # Other possible optimiser: "sgd" (Stochastic Gradient Descent)

        encoder_inputs = keras.layers.Input(shape=(None, num_input_features))

        # Create a list of RNN cells; these are then concatenated into a single layer
        # with the RNN layer.
        encoder_cells = []
        for hidden_neurons in layers:
            encoder_cells.append(keras.layers.GRUCell(hidden_neurons,
                                                      kernel_regularizer=regulariser,
                                                      recurrent_regularizer=regulariser,
                                                      bias_regularizer=regulariser))
        encoder = keras.layers.RNN(encoder_cells, return_state=True)
        encoder_outputs_and_states = encoder(encoder_inputs)

        # Discard encoder outputs and only keep the states.
        # The outputs are of no interest to us; the encoder's
        # job is to create a state describing the input sequence.
        encoder_states = encoder_outputs_and_states[1:]

        # The decoder input will be set to zero (see random_sine function of the utils module).
        decoder_inputs = keras.layers.Input(shape=(None, 1))
        decoder_cells = []
        for hidden_neurons in layers:
            decoder_cells.append(keras.layers.GRUCell(hidden_neurons,
                                                      kernel_regularizer=regulariser,
                                                      recurrent_regularizer=regulariser,
                                                      bias_regularizer=regulariser))
        decoder = keras.layers.RNN(decoder_cells, return_sequences=True, return_state=True)

        # Set the initial state of the decoder to be the output state of the encoder.
        # This is the fundamental part of the encoder-decoder.
        decoder_outputs_and_states = decoder(decoder_inputs, initial_state=encoder_states)

        # Only select the output of the decoder (not the states).
        decoder_outputs = decoder_outputs_and_states[0]

        # Apply a dense layer with linear activation to set the output to the correct dimension
        # and scale (tanh is the default activation for GRU in Keras; our output sine function can be larger than 1).
        decoder_dense = keras.layers.Dense(num_output_features,
                                           activation='linear',
                                           kernel_regularizer=regulariser,
                                           bias_regularizer=regulariser)
        decoder_outputs = decoder_dense(decoder_outputs)

        # Create a model using the functional API provided by Keras.
        # The functional API is great; it gives an amazing amount of freedom in the architecture of your NN.
        # A read worth your time: https://keras.io/getting-started/functional-api-guide/
        model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_outputs)
        model.compile(optimizer=optimiser, loss=loss)
        print(model.summary())
        return model

    def getXY():
        X, y = list(), list()
        for _ in range(100):
            x = [random.random() for _ in range(input_sequence_length)]
            y = [random.random() for _ in range(target_sequence_length)]
            X.append([x, [0 for _ in range(input_sequence_length)]])
            y.append(y)
        return np.array(X), np.array(y)

    X, y = getXY()
    print(X, y)

    model = getModel()
    model.fit(X, y)
The error message I got is:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays:
What is the correct shape of the input data for this model?
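The model built in getModel has two Input layers, so fit must receive a list of two separate 3-D arrays of shape (samples, timesteps, features) rather than one nested array. A minimal sketch of the expected shapes (the variable names are mine, not from the linked post):

```python
import numpy as np

n_samples, timesteps, features = 100, 15, 1

encoder_in = np.random.rand(n_samples, timesteps, features)  # the real signal
decoder_in = np.zeros((n_samples, timesteps, features))      # teacher-forcing zeros
targets = np.random.rand(n_samples, timesteps, features)     # the signal to predict

# model.fit([encoder_in, decoder_in], targets)  # a list of TWO arrays, not one
assert encoder_in.shape == (100, 15, 1)
assert decoder_in.shape == targets.shape == encoder_in.shape
```

In getXY, appending [x, zeros] into a single list produces one array of shape (100, 2, 15); returning the two sequences as separate arrays (and adding the trailing feature axis) matches what the model expects.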

Keras: set initial states of a simple recurrent layer keeping stateful=False
I need to set the initial states of my RNN units in Keras to study the network's response (pre-training) to different initial values, but I am confused by the syntax of the methods. This is my model:
    model = Sequential()
    model.add(SimpleRNN(units=N_rec,
                        return_sequences=True,
                        input_shape=(None, 2),
                        kernel_initializer='glorot_uniform',
                        recurrent_initializer=pepe,
                        activation='tanh',
                        use_bias=False))
    model.add(Dense(units=1, input_dim=N_rec))
My data are fixed-length time series with two channels, and I need to use stateful=False. I was using a sequential model and trying to set the layer states, but it is not working. I am using Keras version 2.0.6.
I used:
model.layers[0].states[0] =np.ones(N_rec)
where N_rec is the number of units, with no effect. I used ones to set an initial value; later I want to try different initializations. I also tried:
K.set_value(model.layers[0].states[0], h_tm)
where h_tm = tf.zeros([N_rec]), which gives the following error:
AttributeError: 'NoneType' object has no attribute 'dtype'
I also tried to use:
    def set_states(model, states):
        for (d, _), s in zip(model.state_updates, states):
            K.set_value(d, s)
But since I am not so proficient in programming, I am not sure of the syntax. To print the states I used:
    for layer in model.layers:
        print " 1"
        if getattr(layer, 'stateful', False):
            if hasattr(layer, 'states'):
                print "a c a "
                for state in layer[0].states:
                    statesAll.append(K.get_value(state))
    print "STATES", statesAll
But I am getting an empty list.
I also tried the functional model syntax, but I am not sure I am doing it properly.
I also tried reset_states, but it does not accept an argument, and it is a model function, not a layer function.
The examples I have found are more complex because they are encoders.

Using tf.nn.dynamic_rnn() multiple times: "Dimensions must be equal" error occurs
When I use tf.nn.dynamic_rnn() multiple times in my code, this error occurs:
ValueError: Dimensions must be equal, but are 80 and 90 for '..scope1/rnn/while/gru_cell/MatMul_4' (op: 'MatMul') with input shapes: [50,80], [90,80].
A snippet of my code is:
    gru = tf.contrib.rnn.GRUCell(d)
    with tf.variable_scope('scope1'):
        _, outputs_1 = tf.nn.dynamic_rnn(gru, inputs, dtype=tf.float32)  # inputs: [N, L, V]
    with tf.variable_scope('scope2'):
        outputs_2 = somefunc(outputs_1)
    with tf.variable_scope('scope3'):
        _, outputs_3 = tf.nn.dynamic_rnn(gru, outputs_2, dtype=tf.float32)  # outputs_2: [N, L, d]
If I set d not equal to V, running the code throws the error above; if d equals V, there is no problem. Why? The two tf.nn.dynamic_rnn() calls are not stacked directly, so they should be independent of each other.
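A likely explanation: passing the same GRUCell object to both dynamic_rnn calls reuses its weights. A GRU gate kernel has shape [input_dim + units, ...], and the cell is built for input size V on the first call, so the second call (whose input size is d) only fits when d == V; creating a separate cell per variable scope avoids this. The shape arithmetic, with hypothetical V and d chosen to reproduce the 80-vs-90 mismatch in the error:

```python
# GRU gate kernels have shape [input_dim + units, units]; reusing one cell
# object forces the second call to share the first call's kernel.
V, d = 50, 40  # hypothetical input feature size and GRU units

first_kernel_rows = V + d  # kernel built on the first dynamic_rnn call -> 90
second_input_rows = d + d  # what the second call needs (input is [N, L, d]) -> 80

assert first_kernel_rows == 90 and second_input_rows == 80
assert first_kernel_rows != second_input_rows  # mismatch unless V == d
```

Constructing `gru1 = GRUCell(d)` and `gru3 = GRUCell(d)` separately (one per scope) lets each build a kernel matching its own input size.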