ValueError: Variable rnn/basic_rnn_cell/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?
Any ideas how I can solve the problem shown below? From the information I found on the web, it is associated with reusing a TensorFlow scope; however, nothing I have tried works.
ValueError: Variable rnn/basic_rnn_cell/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?

Originally defined at:
  File "/code/backend/management/commands/RNN.py", line 370, in predict
    states_series, current_state = tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder, dtype=tf.float32)
  File "/code/backend/management/commands/RNN.py", line 499, in Command
    predict("string")
  File "/code/backend/management/commands/RNN.py", line 12, in <module>
    class Command(BaseCommand):
I tried, for instance, things like this:
with tf.variable_scope('scope'):
    states_series, current_state = tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder, dtype=tf.float32)

with tf.variable_scope('scope', reuse=True):
    states_series, current_state = tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder, dtype=tf.float32)

with tf.variable_scope('scope', reuse=tf.AUTO_REUSE):
    states_series, current_state = tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder, dtype=tf.float32)
Does this happen when you run the model for the first time (upon opening a new python console)?
If not, you need to clear your computational graph. You can do that by putting tf.reset_default_graph() at the beginning of your script.
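As a minimal sketch of the reuse=tf.AUTO_REUSE option (using tf.compat.v1 names so it also runs on TF 2.x; the variable and scope names here are illustrative, not the asker's model):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
tf.reset_default_graph()  # clears any variables left over from a previous run

def make_kernel():
    # AUTO_REUSE creates the variable on the first call and reuses it afterwards,
    # so calling this twice does not raise the "already exists" ValueError
    with tf.variable_scope('scope', reuse=tf.AUTO_REUSE):
        return tf.get_variable('kernel', shape=[4, 4])

v1 = make_kernel()
v2 = make_kernel()  # reuses scope/kernel instead of failing
```

The same idea applies to tf.nn.dynamic_rnn: wrap the call in a variable_scope with reuse=tf.AUTO_REUSE so repeated invocations (e.g. a predict function called more than once) share the RNN kernel.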
See also questions close to this topic
How to display matplotlib histogram data as table?
So I am just trying to learn Python and have built a histogram that looks like such:
I've been going crazy trying to figure out how I could display this same data in a table format, i.e.:
0-5 = 50,500
5-10 = 24,000
10-50 = 18,500
and so on...
There is only one field in df, and it contains the number of residents in towns/cities. Any help is greatly appreciated.
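One way to get the histogram's counts as a table is to bin the column with pandas and count per bin. A sketch with an illustrative column name ('residents') and bin edges; adjust both to match your df:

```python
import pandas as pd

# toy stand-in for df; the values and bin edges are illustrative
df = pd.DataFrame({'residents': [2, 4, 3, 7, 8, 12, 30, 45]})

bins = [0, 5, 10, 50]
labels = ['0-5', '5-10', '10-50']

# cut assigns each row to a bin, value_counts tallies rows per bin
counts = pd.cut(df['residents'], bins=bins, labels=labels).value_counts().sort_index()
print(counts)
```

The resulting Series is exactly the "0-5 = ..., 5-10 = ..." table; counts.to_frame() gives a DataFrame if a tabular display is preferred.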
What are the proper practices of creating websocket connectors?
I've created a Python script that connects to a websocket and routes the data stream to a Kafka topic. Since this is the first time I'm writing connector code, I wanted to know whether I wrote it efficiently. For instance, the while True loop in my function: is there a better way or practice for this? Should I be writing it in Java or Scala? Also, rather than writing my own code, is there an existing connector that streams websockets into Kafka?
import asyncio
import websockets
import json
from kafka import KafkaProducer

WS_LINK = 'wss://stream.binance.com:9443/ws/hoteth@trade'
KAFKA_TOPIC = 'test2'
KAFKA_BROKER = 'localhost:9092'

producer = KafkaProducer(bootstrap_servers=KAFKA_BROKER)

async def wss_connect(url):
    # Connects to websocket and routes stream to Kafka topic
    async with websockets.connect(WS_LINK) as websocket:
        while True:
            response = await websocket.recv()
            producer.send(KAFKA_TOPIC, response.encode())
            load = json.loads(response)
            producer.flush()

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    asyncio.ensure_future(wss_connect(WS_LINK))
    loop.run_forever()
    loop.close()
Pymongo - adding schema validation to a collection in MongoDB
I keep getting errors when I try to add a JSON validation to an existing collection, or when I try to add a validation while creating a collection.
I copied the code verbatim from here: Does Pymongo have validation rules built in? but I get the error
pymongo.errors.OperationFailure: Command is not supported
When I changed it to add the validation via

db.create_collection("myColl", validator=vexpr)

(where vexpr is the validation JSON), I get
pymongo.errors.OperationFailure: Unsupported projection option: $substr
These seem like they should work (according to the mongodb docs), and I can't figure out why there's an error. Thanks in advance!
Using different databases depending on a parameter in the URL
Is there a way I can tell Django2 to use a different database (and cache/session store) depending on a parameter in the URL?
Note that I have read the docs on multiple databases in Django (https://docs.djangoproject.com/en/2.1/topics/db/multi-db/#automatic-database-routing), and that is not what I'm asking.
The docs show an example of how to use DATABASE_ROUTERS, which is a way of programmatically choosing which database should be used when using a model.
What I'm asking is how can I make Django2 use different databases automatically depending on a parameter in the URL. Example:
http://foo.bar/usa <-- use USA database
http://foo.bar/europe <-- use Europe database
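Django routers never see the request object, so a common workaround is a middleware that stashes the URL prefix in a thread-local, which a DATABASE_ROUTERS entry then reads. A minimal sketch (the class names, region names, and 'default' alias are illustrative; only the middleware and router call signatures follow Django's conventions):

```python
import threading

_local = threading.local()

class RegionMiddleware:
    """Stores the region from the URL prefix (e.g. /usa/...) in a thread-local."""
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        prefix = request.path.strip('/').split('/')[0]
        _local.db = prefix if prefix in ('usa', 'europe') else 'default'
        return self.get_response(request)

class RegionRouter:
    """DATABASE_ROUTERS entry that routes reads and writes to the region's database."""
    def db_for_read(self, model, **hints):
        return getattr(_local, 'db', 'default')

    def db_for_write(self, model, **hints):
        return getattr(_local, 'db', 'default')
```

The 'usa' and 'europe' aliases would need entries in settings.DATABASES; per-request cache and session stores would need a similar prefix-based lookup.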
How do I convert county names to fips codes?
I have a table where one column contains the county names and the other columns are various attributes.
I want to convert this column of county names to fips codes.
I have an intermediary table that shows the fips code for each county.
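With the intermediary table loaded as a DataFrame, a left merge on the county name does the conversion. A sketch with illustrative column names and lookup values (adjust 'county' and 'fips' to the real column names):

```python
import pandas as pd

# main table: county names plus attributes (values are illustrative)
data = pd.DataFrame({'county': ['Autauga', 'Baldwin', 'Barbour'],
                     'population': [55869, 223234, 24686]})

# intermediary lookup table: county name -> fips code
lookup = pd.DataFrame({'county': ['Autauga', 'Baldwin', 'Barbour'],
                       'fips': ['01001', '01003', '01005']})

# left merge keeps every row of the main table and attaches its fips code
merged = data.merge(lookup, on='county', how='left')
```

Rows whose county name has no match in the lookup table get NaN in the fips column, which makes mismatched spellings easy to spot with merged['fips'].isna().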
How to categorize timestamp hours into day and night?
I was wondering how we can categorize a timestamp column in a DataFrame into Day and Night on the basis of time.
I am trying to do so but am unable to produce a new column with the same number of entries.
d_call["time"] = d_call["timestamp"].apply(lambda x: x.time())

d_call["time"].head(1)
0    17:10:52
Name: time, dtype: object
def day_night(name):
    for i in name:
        if i.hour > 17:
            return "night"
        else:
            return "day"
d_call["Day / Night"]= d_call["time"].apply(lambda x: day_night(x))
I want to get the entire series of the column but getting the first index only.
Thanks for your help!
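The return inside the loop exits the function on the first element, which is why only one value comes back. Classifying each time value individually inside apply produces a full column. A sketch with illustrative timestamps and the question's hour > 17 threshold:

```python
import pandas as pd

# stand-in for d_call with two example timestamps
d_call = pd.DataFrame({'timestamp': pd.to_datetime(['2018-01-01 18:10:52',
                                                    '2018-01-01 09:30:00'])})

def day_night(t):
    # classify a single time value, not the whole series
    return 'night' if t.hour > 17 else 'day'

# .dt.time extracts time objects; apply runs day_night once per row
d_call['Day / Night'] = d_call['timestamp'].dt.time.apply(day_night)
```

The resulting column has exactly one label per row, matching the length of the original series.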
Face landmark comparison iOS swift
I am using iOS Vision framework to get the face landmarks for an image of a person. Now I want to compare these landmarks with the face landmarks of the same person in some other image. The aim here is to compare the two images and tell that the person in both images is the same one based on facial landmark comparison. I am able to get the face landmarks. Can anyone please guide me on how to match these two landmarks i.e compare them and tell that both are same? I tried calculating the Euclidean distance between the landmarks for two images to compare them but that didn't work well. Thanks
How do I deploy my existing SPSS streams that take input from a DB using WML in Watson Studio?
Earlier, when the Machine Learning service was part of Bluemix, I used to deploy my SPSS Modeler streams easily, and these SPSS streams had dashDB connectivity through a JSON script. However, in Watson Studio, I do not find that connectivity for WML. Can you please guide:
- If I can deploy my existing SPSS Modeler (version 18) Streams which are having DB connectivity for input and output in Watson Studio?
- If yes, can you point to the documentation or a tutorial? Also, can I import .str files into WML as used to be the case earlier?
- If not, what is the best way to do that? Why was this useful feature taken away? Are there any plans to include it again?
What hardware do I need to run/train more than x separate tensorflow models where x >=1000?
I want to run the code on the server side. Do I need to buy separate AWS instances for each model? I would rather run the models on client side but the source code and the model will be visible. I appreciate your recommendations.
How to convert numpy based function for TensorFlow tensor?
I'm trying to implement the function below with the tf.xyz modules available in TensorFlow. This NumPy-based function takes a 3D matrix as input, checks a condition on the values of the last column, and returns the values from the first two columns that satisfy the condition.
I'm having a hard time converting this NumPy-based function to work on TensorFlow tensors, which I want to add as a Lambda layer to my model. Any suggestion?
I'm trying with tf.slice() but I'm not getting the same output as the NumPy version of the function.
# NumPy based function on 3D matrix:
def fun1(arr):
    return arr[arr[:, 2] > 0.95][:, :2]

input_matrix = np.array([[[1, 2, 0.99], [11, 22, 0.80], [111, 222, 0.96]]])

>> input_matrix
[[[  1.    2.    0.99]
  [ 11.   22.    0.8 ]
  [111.  222.    0.96]]]

>> np.array([fun1(i) for i in input_matrix])
array([[[  1.,   2.],
        [111., 222.]]])
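tf.slice needs start/size values known up front, so a data-dependent row selection is easier with tf.boolean_mask. A sketch of a per-matrix equivalent, assuming TF 2.x eager execution:

```python
import tensorflow as tf

def fun1_tf(arr):
    # keep rows whose last column is > 0.95, then drop the last column
    mask = arr[:, 2] > 0.95
    return tf.boolean_mask(arr, mask)[:, :2]

x = tf.constant([[1., 2., 0.99], [11., 22., 0.80], [111., 222., 0.96]])
out = fun1_tf(x)
```

Note that the number of surviving rows depends on the data, so the output shape is dynamic. A Lambda layer wrapping this will not yield a fixed-shape tensor, which may require ragged tensors or padding downstream in a model.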
CNN architecture in google tensorflow example
I have a question about tensorflow:
The CNN has one input layer, three (CNN, MaxPooling) layers, one fully connected hidden layer and one output layer. I cannot understand why there are two hidden layers when I use model.summary() to show the architecture.
img_input = layers.Input(shape=(150, 150, 3))
x = layers.Conv2D(16, 3, activation='relu')(img_input)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(64, 3, activation='relu')(x)
x = layers.MaxPooling2D(2)(x)
x = layers.Flatten()(x)
x = layers.Dense(512, activation='relu')(x)
output = layers.Dense(1, activation='sigmoid')(x)
model = Model(img_input, output)
Layer (type) Output Shape Param #
input_4 (InputLayer) (None, 150, 150, 3) 0
conv2d_9 (Conv2D) (None, 148, 148, 16) 448
max_pooling2d_9 (MaxPooling2 (None, 74, 74, 16) 0
conv2d_10 (Conv2D) (None, 72, 72, 32) 4640
max_pooling2d_10 (MaxPooling (None, 36, 36, 32) 0
conv2d_11 (Conv2D) (None, 34, 34, 64) 18496
max_pooling2d_11 (MaxPooling (None, 17, 17, 64) 0
flatten (Flatten) (None, 18496) 0
dense (Dense) (None, 512) 9470464
flatten_1 (Flatten) (None, 512) 0
dense_2 (Dense) (None, 512) 262656
dense_3 (Dense) (None, 1) 513
Total params: 9,757,217
Trainable params: 9,757,217
Non-trainable params: 0
How to debug "Cloud ML only supports TF 1.0 or above and models saved in SavedModel format."?
I make batch predictions using Cloud ML. Some of my models work and others don't. How do I debug the models that don't work? All I see is a bunch of errors:
Cloud ML only supports TF 1.0 or above and models saved in SavedModel format.

in prediction.errors_stats-00000-of-00001. The output of saved_model_cli show --all --dir is (other working models give the same output):
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['prediction']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['example_proto'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['id'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: id:0
    outputs['probability'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: probability:0
  Method name is: tensorflow/serving/predict

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['example_proto'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['id'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: id:0
    outputs['label'] tensor_info:
        dtype: DT_INT64
        shape: (-1)
        name: label:0
    outputs['probability'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: probability:0
  Method name is: tensorflow/serving/predict
UPDATED: My data is in the form of TF records, so I can't do gcloud ml-engine local predict.
input_shape of an 8x8 board game to a Neural Network with Keras
I am very new to creating a NN. Right now I am receiving the following error:
ValueError: Error when checking : expected dense_1_input to have 3 dimensions, but got array with shape (8, 8)
Background: I am using an 8x8 board this is how I am initializing it:
self.state = np.zeros((LENGTH, LENGTH))
Here is the code where I build my Model:
def build_model(self):
    # builds the NN for Deep-Q Model
    model = Sequential()
    model.add(Dense(24, input_shape=(LENGTH, LENGTH), activation='relu'))
    model.add(Flatten())
    model.add(Dense(24, activation='relu'))
    model.add(Dense(self.action_size, activation='linear'))
    model.compile(loss='mse', optimizer='Adam')
    return model
I figured that since the shape of the board is (8, 8), the input_shape should be the same. I'm not sure what I am doing wrong.
Just in case this is helpful:
The game I have made is super simple that involves 5 pieces on the board:
- player1 has 1 piece and can move forward and backward diagonally only 1 step
- player2 has 4 pieces and can only move forward from their position diagonally 1 step
The objective for player1 is to get to the other side of the board The objective for player2 is to trap player 1 so he cannot move
Any help would be greatly appreciated!
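For what it's worth, the "expected 3 dimensions" complaint is usually about the missing batch axis: with input_shape=(8, 8) Keras treats (8, 8) as the per-sample shape and expects arrays of shape (n_samples, 8, 8) at fit/predict time. A NumPy sketch of adding that axis to a single board (the model code itself is unchanged):

```python
import numpy as np

LENGTH = 8
state = np.zeros((LENGTH, LENGTH))  # one board, shape (8, 8)

# model.fit / model.predict expect a batch axis: (n_samples, 8, 8)
batched = state[np.newaxis, ...]
```

Equivalently, np.expand_dims(state, axis=0) or state.reshape(1, LENGTH, LENGTH) produce the same (1, 8, 8) array.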
Why does softmax provide always a probability of 1.0?
I have been trying a simple MNIST example. Sorry if this is a very basic question.
from keras.datasets import mnist
from keras.layers import Input, Conv2D, Dense
from keras.models import Sequential
from keras.utils import np_utils

def myModel():
    model = Sequential()
    layer1 = Dense(1024, input_shape=(784,), activation='relu')
    layer2 = Dense(512, activation='relu')
    layer3 = Dense(10, activation='softmax')
    model.add(layer1)
    model.add(layer2)
    model.add(layer3)
    model.compile(loss='categorical_crossentropy', optimizer='adam')
    return model

if __name__ == '__main__':
    print "Inside the main function "
    model = myModel()
    (trainX, trainY), (testX, testY) = mnist.load_data()
    print ("TrainX shape is ", trainX.shape)
    trainX = trainX.reshape(trainX.shape[0], trainX.shape[1] * trainX.shape[2])
    trainY = np_utils.to_categorical(trainY, 10)
    model.fit(trainX, trainY, batch_size=200, epochs=1)
    print ("Let's predict now..")
    print ("Shape of x and shape of 100", trainX.shape, trainX[100].shape)
    result = model.predict(trainX[100].reshape(1, 784))
    print result
    import matplotlib.pyplot as plt
    plt.subplot(2, 2, 1)
    plt.imshow(trainX[100].reshape(28, 28))
    plt.show()
The output value of the last layer is
[[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]]
How do I interpret this result? Is this not a probability distribution? If not, how do I get one?
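The output is a probability distribution; it has just saturated. With unscaled 0-255 pixel inputs, the logits feeding the final layer become very large, and softmax of widely separated logits is numerically a one-hot vector. A NumPy sketch of that saturation (the usual fix is to scale the pixels, e.g. trainX / 255.0, before training):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift by the max for numerical stability
    return e / e.sum()

moderate = softmax(np.array([1.0, 2.0, 3.0]))    # spread-out probabilities
saturated = softmax(np.array([1.0, 2.0, 30.0]))  # numerically one-hot
```

Both outputs sum to 1, but once one logit dominates by tens of units, the printed vector rounds to 0s and a single 1, exactly like the [[0. ... 1. ... 0.]] result above.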
How to use minimum and maximum elements needed as input of a neural network?
I'm trying to develop a neural network to predict the number of certain elements that are part of a group of different elements, and I would like to use as inputs the minimum and/or maximum of those that could be available.
I want to predict the number of carrots, onions and steaks that I would need to get the best dish for my nutritional needs.
For that, I have in my dataset a group of recipes with the ingredients that compose each of them and their nutritional contributions.
I also have the nutritional needs and the available ingredients (a maximum).
How should I use this data?
[nutritional needs, ingredients needed, available ingredients(maximum)]
- [nutritional needs] as X.
- [ingredients needed] as hypothesis (ŷ).
How to deal with [available ingredients]?
If I put it in the dataset with the same value as [ingredients needed], it could drive the NN to assume that [ingredients needed] = [available ingredients].