What does the tf_exported_symbols.lds file do?
I'm learning Bazel and TensorFlow, and while reading the TensorFlow code I came across this file: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tf_version_script.lds

So what does this file actually do?
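For background (this is general linker knowledge, not taken from the TensorFlow docs): a .lds file like this is a GNU ld linker version script, passed to the linker to control which symbols the resulting shared library exports. A minimal sketch of the syntax, illustrative only and not the actual contents of TensorFlow's file:

    /* Hypothetical version script: export only symbols matching the
       listed patterns; hide everything else inside the shared library. */
    VERS_1.0 {
      global:          /* symbols visible to users of the .so */
        *tensorflow*;
      local:           /* everything else stays library-internal */
        *;
    };

Hiding non-matching symbols keeps the library's exported ABI surface small and avoids clashes with same-named symbols from other libraries loaded in the same process.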
See also questions close to this topic
Training SVM with images of different categories

I am new to machine learning; my aim is to classify traffic/road signs using an SVM. My question is: can the training data set of positive images contain different categories of images? For example, I want to classify two different types of signs as "turn RIGHT". Is that possible, and would the data still be linearly separable?
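One way to probe this is to pool both sign variants under a single positive label and check whether a linear SVM still separates them from the negatives. A minimal sketch using scikit-learn on synthetic feature vectors (hypothetical data; a real pipeline would first extract features such as HOG from the images):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC, LinearSVC

    rng = np.random.default_rng(0)

    # Hypothetical features: two visually different "turn RIGHT" variants
    # pooled into one positive class, plus a negative (non-sign) class.
    variant_a = rng.normal(loc=2.0, size=(100, 64))
    variant_b = rng.normal(loc=-2.0, size=(100, 64))
    negatives = rng.normal(loc=0.0, size=(200, 64))

    X = np.vstack([variant_a, variant_b, negatives])
    y = np.array([1] * 200 + [0] * 200)  # both variants share label 1

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # With positives on both sides of the negatives, no single linear
    # boundary separates the classes...
    print("linear:", LinearSVC(dual=False).fit(X_tr, y_tr).score(X_te, y_te))
    # ...but a non-linear kernel can, so pooling categories is possible
    # even when it costs linear separability.
    print("rbf:   ", SVC(kernel="rbf").fit(X_tr, y_tr).score(X_te, y_te))

In other words, the positive set may contain multiple categories, but whether the pooled data stays linearly separable depends on how those categories sit in feature space.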
Benefit of applying tf.image.per_image_standardization() over a batch_norm layer in TensorFlow?

What is the benefit of applying tf.image.per_image_standardization() before the first layer of a deep neural network, compared to adding a batch_norm layer as the first layer? In order to normalize image pixels with float values in [0.0, 255.0] before feeding them into the network, which method would be suitable?

- tf.image.per_image_standardization()
- a batch_norm layer
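The two options differ mainly in where the normalization statistics come from: tf.image.per_image_standardization() uses each individual image's own mean and variance and has no learned parameters, while a batch-norm layer normalizes with batch statistics and learns a scale and offset. A minimal sketch of applying each in the TF 1.x API used in this question (the shapes are illustrative assumptions):

    import tensorflow as tf

    images = tf.placeholder(tf.float32, [None, 32, 32, 3])  # pixels in [0.0, 255.0]

    # Option 1: fixed, parameter-free standardization per image.
    # per_image_standardization operates on a single image, so map it
    # across the batch dimension.
    standardized = tf.map_fn(tf.image.per_image_standardization, images)

    # Option 2: a learnable batch-norm layer as the first layer; it needs
    # a training flag and the usual UPDATE_OPS handling during training.
    is_training = tf.placeholder(tf.bool)
    normalized = tf.layers.batch_normalization(images, training=is_training)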
Set "training=False" of "tf.layers.batch_normalization" when training will get a better validation result
I use TensorFlow to train DNN. I learned that Batch Normalization is very helpful for DNN , so I used it in DNN.
I use "tf.layers.batch_normalization" and follow the instructions of the API document to build the network: when training, set its parameter "training=True", and when validate, set "training=False". And add tf.get_collection(tf.GraphKeys.UPDATE_OPS).
Here is my code:
    # -*- coding: utf-8 -*-
    import tensorflow as tf
    import numpy as np

    input_node_num = 257 * 7
    output_node_num = 257

    tf_X = tf.placeholder(tf.float32, [None, input_node_num])
    tf_Y = tf.placeholder(tf.float32, [None, output_node_num])
    dropout_rate = tf.placeholder(tf.float32)
    flag_training = tf.placeholder(tf.bool)
    hid_node_num = 2048

    h1 = tf.contrib.layers.fully_connected(tf_X, hid_node_num, activation_fn=None)
    h1_2 = tf.nn.relu(tf.layers.batch_normalization(h1, training=flag_training))
    h1_3 = tf.nn.dropout(h1_2, dropout_rate)

    h2 = tf.contrib.layers.fully_connected(h1_3, hid_node_num, activation_fn=None)
    h2_2 = tf.nn.relu(tf.layers.batch_normalization(h2, training=flag_training))
    h2_3 = tf.nn.dropout(h2_2, dropout_rate)

    h3 = tf.contrib.layers.fully_connected(h2_3, hid_node_num, activation_fn=None)
    h3_2 = tf.nn.relu(tf.layers.batch_normalization(h3, training=flag_training))
    h3_3 = tf.nn.dropout(h3_2, dropout_rate)

    tf_Y_pre = tf.contrib.layers.fully_connected(h3_3, output_node_num, activation_fn=None)
    loss = tf.reduce_mean(tf.square(tf_Y - tf_Y_pre))

    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i1 in range(3000 * num_batch):
            train_feature = ...  # Some processing
            train_label = ...    # Some processing
            # When training, set "training=True"; when validating, set
            # "training=False": I get a bad result. However, when I set
            # "training=False" both when training and when validating,
            # I get a better result.
            sess.run(train_step, feed_dict={tf_X: train_feature,
                                            tf_Y: train_label,
                                            flag_training: True,
                                            dropout_rate: 1})
            if (i1 + 1) % 277200 == 0:  # print validation loss every 0.1 epoch
                validate_feature = ...  # Some processing
                validate_label = ...    # Some processing
                validate_loss = sess.run(loss, feed_dict={tf_X: validate_feature,
                                                          tf_Y: validate_label,
                                                          flag_training: False,
                                                          dropout_rate: 1})
                print(validate_loss)
Is there any error in my code? If my code is right, I get a strange result: when training I set training=True, and when validating I set training=False, and the result is not good. I print the validation loss every 0.1 epoch; the validation losses over the 1st to 3rd epochs are
0.929624 0.992692 0.814033 0.858562 1.042705 0.665418 0.753507 0.700503 0.508338 0.761886 0.787044 0.817034 0.726586 0.901634 0.633383 0.783920 0.528140 0.847496 0.804937 0.828761 0.802314 0.855557 0.702335 0.764318 0.776465 0.719034 0.678497 0.596230 0.739280 0.970555
However, when I change that sess.run(train_step, ...) call so that training=False during training as well (still training=False when validating), the result is good. The validation loss in the 1st epoch is
0.474313 0.391002 0.369357 0.366732 0.383477 0.346027 0.336518 0.368153 0.330749 0.322070 0.335551
Why does this happen? Is it necessary to set training=True when training and training=False when validating?
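For reference, the pattern in the question matches the documented TF 1.x recipe. One knob worth checking when training=True hurts validation loss is the batch-norm momentum: it controls how quickly the moving mean/variance used at training=False catch up with the batch statistics. A self-contained sketch of the recipe on hypothetical toy data (not the asker's pipeline):

    import numpy as np
    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 8])
    y = tf.placeholder(tf.float32, [None, 1])
    is_training = tf.placeholder(tf.bool)

    # Lower momentum makes the moving mean/variance track the batch
    # statistics faster (the default is 0.99).
    h = tf.layers.batch_normalization(x, training=is_training, momentum=0.9)
    pred = tf.layers.dense(tf.nn.relu(h), 1)
    loss = tf.reduce_mean(tf.square(y - pred))

    # Without this dependency the moving averages are never updated, and
    # inference with training=False uses the initial mean=0 / variance=1.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_step = tf.train.AdamOptimizer(1e-3).minimize(loss)

    data_x = np.random.randn(256, 8).astype(np.float32)
    data_y = np.random.randn(256, 1).astype(np.float32)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(100):
            sess.run(train_step, feed_dict={x: data_x, y: data_y, is_training: True})
        # Validate with training=False so the moving statistics are used.
        print(sess.run(loss, feed_dict={x: data_x, y: data_y, is_training: False}))

If the moving averages have not converged (too few update steps, or momentum too close to 1.0), evaluation with training=False can look much worse than the batch-statistics path, which is consistent with the behaviour described above.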