tf.keras.Model.predict and call return different results
import tensorflow as tf
import numpy as np

tf.set_random_seed(0)  # the seed call is truncated in the original; value assumed

ipts = tf.keras.Input([2])
x = tf.keras.layers.Dense(10)(ipts)
x = tf.keras.layers.Dropout(0.5)(x)
x = tf.keras.layers.Dense(3)(x)
model = tf.keras.Model(ipts, x)
model.summary()

sess = tf.Session()
sess.run(tf.global_variables_initializer())

y_train = model(tf.ones((2, 2)), training=True)
y_test = model(tf.ones((2, 2)), training=False)

sess.run(y_train)
sess.run(y_test)
model.predict(np.array([[1., 1.], [1., 1.]]))
sess.run(y_test) should return the same result as model.predict(np.array([[1., 1.], [1., 1.]])), but in fact they are different. Why?
1 answer

You need to register the session as your Keras session with K.set_session(sess), where K is tf.keras.backend. Then sess.run(y_test) gives the same result as model.predict(np.array([[1., 1.], [1., 1.]])). Without it, model.predict runs in Keras's own default session, whose variables are initialized separately, so the two calls read different weights.
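A minimal sketch of the fix, written against the TF 1.x graph API (accessed here through tf.compat.v1 so it also runs on 2.x installs that still ship the legacy Keras session support):

```python
import numpy as np
import tensorflow as tf

tf1 = tf.compat.v1          # legacy graph-mode API (plain `tf` in TF 1.x)
tf1.disable_eager_execution()

ipts = tf.keras.Input([2])
x = tf.keras.layers.Dense(10)(ipts)
x = tf.keras.layers.Dropout(0.5)(x)
x = tf.keras.layers.Dense(3)(x)
model = tf.keras.Model(ipts, x)

sess = tf1.Session()
# The crucial line: make this session the one Keras itself uses, so that
# model.predict and a manual sess.run share the same initialized variables.
tf1.keras.backend.set_session(sess)
sess.run(tf1.global_variables_initializer())

y_test = model(tf.ones((2, 2)), training=False)
manual = sess.run(y_test)
via_predict = model.predict(np.array([[1., 1.], [1., 1.]]))
```

With the shared session the two outputs agree, since dropout is inactive in both paths (training=False for the graph run, inference mode for predict).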
See also questions close to this topic

What do `tf.keras.Model.reset_metrics` and `tf.keras.Model.reset_states` do?
I was reading through the documentation in TensorFlow 2 and came across the methods tf.keras.Model.reset_metrics() and tf.keras.Model.reset_states(). However, when I used them they did not work as expected.
When making prediction with a trained neural net, does input have to run through all layers?
When you have an input that you want to make a prediction on, does the input have to be run through the entire neural net?

Module 'tensorflow' has no attribute 'reset_default_graph'
I tried writing the code
tf.reset_default_graph()
session = tf.InteractiveSession()
but I got an error:
AttributeError: module 'tensorflow' has no attribute 'reset_default_graph'
I already tried this:
from tensorflow.python.framework import ops
ops.reset_default_graph()
tf.reset_default_graph()
session = tf.InteractiveSession()
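This error usually means TensorFlow 2.x is installed: the 1.x graph functions were removed from the top-level namespace and kept under tf.compat.v1 instead. A sketch of the usual workaround, assuming TF 2.x:

```python
import tensorflow as tf

# In TF 2.x the 1.x graph API lives under the tf.compat.v1 shim.
tf.compat.v1.reset_default_graph()
session = tf.compat.v1.InteractiveSession()
```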

Is it possible to pass in an array into a neural network perceptron?
I am trying to set up a neural network to identify Elliott Waves, and I was wondering if it is possible to pass an array of arrays into a perceptron. My plan is to pass an array of size 4 ([Open, Close, High, Low]) into each perceptron. If so, how would the weighted-average calculation work, and how can I go about this using the Python Keras library? Thanks!
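For context, a single Keras Dense unit already computes exactly the weighted sum the question asks about (w · x + b, followed by an activation). A minimal sketch, assuming tf.keras and a made-up OHLC sample:

```python
import numpy as np
import tensorflow as tf

# One "perceptron" over a 4-feature input [Open, Close, High, Low]:
# Dense(1) computes the weighted sum w . x + b before the activation.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy")

bars = np.array([[101.0, 103.5, 104.2, 100.8]])  # one hypothetical OHLC bar
pred = model.predict(bars)
print(pred.shape)  # one prediction per input row
```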

How do I add layers at the start of a model in Keras?
I want to add new layers to a pretrained model, using Tensorflow and Keras. The problem is, those new layers are not to be added on top of the model, but at the start. I want to create a triplesiamese model, which takes 3 different inputs and gives 3 different outputs, using the pretrained network as the core of the model. For that, I need to insert 3 new input layers at the beginning of the model.
The default path would be to just chain the layers and the model, but this method treats the pretrained model as a new layer (when a new model with the new inputs and the pretrained model is created, the new model only contains4 layers, the 3 input layers, and the whole pretrained model):
input_1 = tf.keras.layers.Input(shape = (224,224,3)) input_2 = tf.keras.layers.Input(shape = (224,224,3)) input_3 = tf.keras.layers.Input(shape = (224,224,3)) output_1 = pre_trained_model(input_1) output_2 = pre_trained_model(input_2) output_3 = pre_trained_model(input_3) new_model = tf.keras.Model([input_1, input_2, input_3], [output_1, output_2, output_3])
new_model
has only 4 layers, due to the Keras API considering thepre_trained_model
a layer.I know that the above option works, as I have seen in many code samples, but I wonder if there is a better option for this. It feels awkward for me, because the access to inner layers of the final model will be messed up, not to mention the fact that the model will have an extra input layer after the added 3 input layers (the input layer from the pre trained model is still intact, and is totally unnecessary).

How do these Python functions work? Segmentation with UNet
I am working on a project and I want to understand some code from a source that I found. The idea is that I want to do semantic segmentation using UNet. I understood almost everything that happens in the project, except for 2 functions.
The first function is about memory consumption (that's what the author of the project said). After the last convolution operation of the UNet, there are 2 more operations applied to the convolved layer, and only after those do they apply the activation, etc.:
conv6 = core.Reshape((2, patch_height * patch_width))(conv6)
conv6 = core.Permute((2, 1))(conv6)
Then, in the main training module, before calling model.fit, the masks are reshaped with the function I mentioned, the one that supposedly improves memory consumption. Below is the function:
def function_unet_masks(masks):
    im_h = masks.shape[2]
    im_w = masks.shape[3]
    masks = np.reshape(masks, (masks.shape[0], im_h * im_w))
    new_masks = np.empty((masks.shape[0], im_h * im_w, 2))
    for i in range(masks.shape[0]):
        for j in range(im_h * im_w):
            if masks[i, j] == 0:
                new_masks[i, j, 0] = 1
                new_masks[i, j, 1] = 0
            else:
                new_masks[i, j, 0] = 0
                new_masks[i, j, 1] = 1
    return new_masks
What does the code above do? Why is it better for memory consumption? I tested everything without this function and, indeed, the training loss increases dramatically and everything takes more time.
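For what it's worth, the nested loop in that function is just a per-pixel one-hot encoding of a binary mask. A compact NumPy sketch of the same transform (the function name and the (N, 1, H, W) binary-mask shape are assumptions for illustration):

```python
import numpy as np

def one_hot_masks(masks):
    # masks: (N, 1, H, W) binary array -> (N, H*W, 2) one-hot labels,
    # matching what the looped version builds element by element.
    n, _, h, w = masks.shape
    flat = masks.reshape(n, h * w)
    # Channel 0 marks background pixels (0), channel 1 marks foreground (non-0).
    return np.stack([flat == 0, flat != 0], axis=-1).astype(np.float64)

demo = np.array([[[[0, 1], [1, 0]]]])  # one 2x2 mask
print(one_hot_masks(demo)[0])
```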
Now, the second problem. I am doing the training phase based on patches, so I split up the training data set into little patches and learn everything from them. Once I have a trained model and want to test it, the predictions are also patches, so as a final step I need to reconstruct the images from the predicted patches. The problem is that I can't understand why they use a sum and a probability of pixels to return the final array with the correct order of the patches. I understood that the patches overlap, and this should rebuild the image without overlaps, or something like that. Below is the function:
def reconstruct_overlapping_images(preds, img_h, img_w, stride_h, stride_w):
    assert (len(preds.shape) == 4)  # 4D arrays
    assert (preds.shape[1] == 1 or preds.shape[1] == 3)  # check the channel is 1 or 3
    patch_h = preds.shape[2]
    patch_w = preds.shape[3]
    N_patches_h = (img_h - patch_h) // stride_h + 1
    N_patches_w = (img_w - patch_w) // stride_w + 1
    N_patches_img = N_patches_h * N_patches_w
    print("N_patches_h: " + str(N_patches_h))
    print("N_patches_w: " + str(N_patches_w))
    print("N_patches_img: " + str(N_patches_img))
    assert (preds.shape[0] % N_patches_img == 0)
    N_full_imgs = preds.shape[0] // N_patches_img
    print("According to the dimension inserted, there are " + str(N_full_imgs) +
          " full images (of " + str(img_h) + "x" + str(img_w) + " each)")
    # initialize to zero the mega arrays holding the sum of probabilities and counts
    full_prob = np.zeros((N_full_imgs, preds.shape[1], img_h, img_w))
    full_sum = np.zeros((N_full_imgs, preds.shape[1], img_h, img_w))
    k = 0  # iterator over all the patches
    for i in range(N_full_imgs):
        for h in range((img_h - patch_h) // stride_h + 1):
            for w in range((img_w - patch_w) // stride_w + 1):
                full_prob[i, :, h * stride_h:(h * stride_h) + patch_h,
                          w * stride_w:(w * stride_w) + patch_w] += preds[k]
                full_sum[i, :, h * stride_h:(h * stride_h) + patch_h,
                         w * stride_w:(w * stride_w) + patch_w] += 1
                k += 1
    assert (k == preds.shape[0])
    assert (np.min(full_sum) >= 1.0)  # every pixel covered at least once
    final_avg = full_prob / full_sum
    print(final_avg.shape)
    assert (np.max(final_avg) <= 1.0)  # max value for a pixel is 1.0
    assert (np.min(final_avg) >= 0.0)  # min value for a pixel is 0.0
    return final_avg
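The sum/count pair in that function implements averaging over overlaps: every patch's probabilities are added into full_prob, full_sum counts how many patches touched each pixel, and the division yields the mean prediction per pixel. A tiny 1D illustration of the same trick (the data values here are made up):

```python
import numpy as np

# Two 1D "patches" of length 3 with stride 2 covering a signal of length 5.
# Overlapping predictions are summed, then divided by how many patches
# covered each position -- the same full_prob / full_sum trick as above.
preds = np.array([[0.2, 0.4, 0.6],
                  [0.8, 0.6, 0.4]])
full_prob = np.zeros(5)
full_sum = np.zeros(5)
for k, start in enumerate([0, 2]):
    full_prob[start:start + 3] += preds[k]
    full_sum[start:start + 3] += 1
final_avg = full_prob / full_sum
print(final_avg)  # position 2 is covered twice: (0.6 + 0.8) / 2 = 0.7
```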
Can you please help me understand these functions and their use?
Thank you

How to convert a subclassed tf.keras.Model trained in a custom loop to TFLite?
I have a problem with converting a subclassed model (tf.keras.Model), trained in a custom loop, to TFLite.
Suppose we have a small CNN architecture that uses input data (x) and additional information whose shape depends on the batch size and other dims (add_info):
import tensorflow as tf
from tensorflow.keras import layers

class ContextExtractor(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.model = self.__get_model()

    def call(self, x, training=False, **kwargs):
        b, h, w, c = x.shape
        add_info = tf.zeros((b, h, w, c), dtype=tf.float32)
        features = self.model(tf.concat([x, add_info], axis=1), training=training)
        return features

    def __get_model(self):
        return self.__get_small_cnn()

    def __get_small_cnn(self):
        model = tf.keras.Sequential()
        model.add(layers.Conv2D(32, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))
        model.add(layers.Conv2D(32, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))
        model.add(layers.Conv2D(64, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))
        model.add(layers.Conv2D(128, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))
        model.add(layers.Conv2D(256, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))
        model.add(layers.GlobalAveragePooling2D())
        return model
I trained it in custom-loop mode (using tf.GradientTape), which means I never compiled the model; I just use it as it is.
Now I want to convert it to the SavedModel format, because I want to port the model to TFLite. But when I run something like:
tf.saved_model.save(model, path_to_file)
I get a warning like:
Skipping full serialization of Keras model <ContextExtractor object at 0x7f30340bd6d8>, because its inputs are not defined.
And, of course, the .pb file that I get is super small, with nothing in it.
Can anyone give a full explanation of how to convert a subclassed model to SavedModel? Or maybe I can convert my model to TFLite without it?
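The warning hints at the usual fix for subclassed models: with no compile/fit step, Keras never learns the input shape, so you have to pin one down yourself via a concrete function before saving or converting. A hedged sketch of that approach, assuming TF 2.x (TinyModel and the shapes are illustrative stand-ins, not the model above):

```python
import tensorflow as tf

class TinyModel(tf.keras.Model):  # stand-in for a subclassed model
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(4)

    def call(self, x, training=False):
        return self.dense(x)

model = TinyModel()
model(tf.zeros((1, 8)))  # call once so the variables are built

# Pin down an input signature so SavedModel/TFLite tooling knows the shape.
concrete = tf.function(model.call).get_concrete_function(
    tf.TensorSpec([1, 8], tf.float32))

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete])
tflite_bytes = converter.convert()  # serialized flatbuffer, ready to write out
```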

How to export model (.h5) to EvalSavedModel in Keras?
For the Evaluator component in TensorFlow Extended, the input is an EvalSavedModel. Previously, to upload a model to TensorFlow Serving, I used tf.saved_model.simple_save() in Keras to export the model to SavedModel. Now I am looking for a similar approach to export a Keras model to EvalSavedModel. Please help!
Here I tried to export Keras model using tfma.export.export_eval_savedmodel():
import tensorflow as tf
from tensorflow.contrib.keras import backend as K
import tensorflow_model_analysis as tfma

K.set_learning_phase(0)
model = tf.keras.models.load_model('/Users/user/Documents/.../model_name.h5')
eval_model_dir = './'
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)

image = tf.feature_column.numeric_column("image", shape=[255, 255])
label = tf.feature_column.categorical_column_with_identity(key='label', num_buckets=2)

def receiver_fn():
    serialized_tf_example = tf.compat.v1.placeholder(
        dtype=tf.string, shape=[None], name='input_example_tensor')
    receiver_tensors = {'examples': serialized_tf_example}
    feature_spec = tf.feature_column.make_parse_example_spec([image, label])
    features = tf.io.parse_example(serialized_tf_example, feature_spec)
    return tfma.export.EvalInputReceiver(
        features=features,
        receiver_tensors=receiver_tensors,
        labels=features['label'])

tfma.export.export_eval_savedmodel(
    estimator=estimator,
    export_dir_base=eval_model_dir,
    eval_input_receiver_fn=receiver_fn)
The error is:
Use estimator.experimental_export_all_saved_models
Traceback (most recent call last):
  File "/Users/user/Documents/autoreviewdeployment/convert_to_eval_saved_model.py", line 34, in <module>
    eval_input_receiver_fn=receiver_fn)
  File "/Users/user/Documents/spamreview/venv/lib/python3.7/site-packages/tensorflow_model_analysis/util.py", line 173, in wrapped_fn
    return fn(**kwargs_to_pass)
  File "/Users/user/Documents/spamreview/venv/lib/python3.7/site-packages/tensorflow_model_analysis/eval_saved_model/export.py", line 476, in export_eval_savedmodel
    checkpoint_path=checkpoint_path)
  File "/Users/user/Documents/spamreview/venv/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/Users/user/Documents/spamreview/venv/lib/python3.7/site-packages/tensorflow_estimator/contrib/estimator/python/estimator/export.py", line 208, in export_all_saved_models
    checkpoint_path=checkpoint_path)
  File "/Users/user/Documents/spamreview/venv/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 801, in experimental_export_all_saved_models
    raise ValueError("Couldn't find trained model at %s." % self._model_dir)
ValueError: Couldn't find trained model at /var/folders/9h/t7340gkn7lz5f2rbb05cp_ph0000gn/T/tmpi53cejji.