How to write graph to tensorboard using tensorflow 2.0?
I am doing this:
# eager on
tf.summary.trace_on(graph=True, profiler=True)
tf.summary.trace_export('stuff', step=1, profiler_outdir='output')
# ... call train operation
tf.summary.trace_off()
The Profile section shows up in TensorBoard, but no graph yet.
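One likely issue (an assumption, since the full script isn't shown): `tf.summary.trace_export` runs before the train operation, and outside a summary-writer context. The pattern from the TF2 docs is trace_on, run the traced `tf.function`, then export inside `writer.as_default()`; a minimal sketch with a stand-in train step:

```python
import tensorflow as tf

@tf.function
def train_step(x):
    # hypothetical stand-in for the real train operation
    return x * x

writer = tf.summary.create_file_writer('output')

tf.summary.trace_on(graph=True)        # add profiler=True to also profile
result = train_step(tf.constant(2.0))  # the traced call must run BEFORE export

with writer.as_default():
    tf.summary.trace_export(name='stuff', step=1)
tf.summary.trace_off()
```

With this ordering the Graphs tab should be populated from the events written to 'output'.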
See also questions close to this topic
How to make predictions with multiple input model in tensorflow (without model.predict)
noise = Input(shape=(100,))
label = Input(shape=(10,))
x = Dense(128)(noise)
x = BatchNormalization()(x)
x = LeakyReLU()(x)
y = Dense(128)(label)
y = BatchNormalization()(y)
y = LeakyReLU()(y)
x = concatenate([x, y], axis=-1)
x = Dense(256)(x)
x = BatchNormalization()(x)
x = LeakyReLU()(x)
x = Dense(512)(x)
x = BatchNormalization()(x)
x = LeakyReLU()(x)
x = Dense(1024)(x)
x = BatchNormalization()(x)
x = LeakyReLU()(x)
x = Dense(784, activation='tanh')(x)
image = Reshape((28, 28, 1))(x)
model = Model(inputs=[noise, label], outputs=[image])
Here is my code to build a multiple-input model. In this case, model.predict(x=[noise, label]) seems to be the way to make predictions with the model. Normally model.predict(x) can be replaced by model(x), so I tried model([noise, label]), but it doesn't work; it seems to keep expanding the model. Is there any way to make the model(x) method work? Please help. (I can not use
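A guess at the cause, since the failing call isn't shown: passing the symbolic `Input` tensors `noise` and `label` back into `model(...)` keeps extending the graph; calling the model on concrete batches works. A small sketch with a reduced stand-in for the generator above:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

# Reduced stand-in for the multiple-input generator above
noise_in = Input(shape=(100,))
label_in = Input(shape=(10,))
x = concatenate([Dense(16)(noise_in), Dense(16)(label_in)])
out = Dense(1)(x)
model = Model(inputs=[noise_in, label_in], outputs=out)

# Call the model on concrete arrays/tensors, not on the symbolic Input tensors
noise_batch = np.random.normal(size=(4, 100)).astype('float32')
label_batch = np.zeros((4, 10), dtype='float32')
preds = model([noise_batch, label_batch], training=False)
```

Here `preds` is an eager tensor of shape (4, 1), with no further graph expansion.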
confusion_matrix() library is giving ValueError
When trying to get a confusion matrix for a ConvNet, I constantly get the same error.
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
import numpy as np
from keras.preprocessing import image
from sklearn.metrics import classification_report, confusion_matrix

img_width, img_height = 150, 150
train_data_dir = "train"
validation_data_dir = "test"
nb_train_samples = 2000
nb_validation_samples = 400
epochs = 50
batch_size = 40  #16

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = test_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
Applying CNN Layers
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

Y_pred = model.predict_generator(validation_generator, nb_validation_samples // batch_size + 1)
y_pred = np.argmax(Y_pred, axis=1)
print('Confusion Matrix')
print(confusion_matrix(validation_generator.classes, y_pred))
I'm getting the error below but don't know how to resolve it:
ValueError: Found input variables with inconsistent numbers of samples: [400, 440]
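A sketch of a likely fix (an assumption, since the data directories aren't available here): with 400 validation samples and batch size 40, `predict_generator` with steps `400 // 40 + 1 = 11` yields 440 predictions against 400 labels, hence the 400 vs 440 mismatch; and with `class_mode='binary'` the model has a single sigmoid output, so `np.argmax(..., axis=1)` is always 0 and a threshold should be used instead. Shown with synthetic stand-in arrays:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

np.random.seed(0)

# Stand-ins for generator outputs: 400 true labels, 440 sigmoid predictions
classes = np.random.randint(0, 2, size=400)   # validation_generator.classes
Y_pred = np.random.rand(440, 1)               # extra rows from steps + 1

# 1) truncate predictions to the number of labels
# 2) threshold the single sigmoid column (argmax of a one-column array is always 0)
y_pred = (Y_pred[:len(classes), 0] > 0.5).astype(int)

cm = confusion_matrix(classes, y_pred)
```

Also pass `shuffle=False` to `flow_from_directory` for the validation generator, so `validation_generator.classes` stays aligned with the prediction order.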
Keras model taking too long to train
So I have the following model for sentiment analysis (using pre-trained word embeddings):

As you can see, I have a pre-trained embedding matrix and only about 500k trainable parameters. So why does it take an eternity to train this model? The batch size is 128 and the number of epochs is 25, yet the ETA for the first epoch is about 10 minutes; I haven't even completed that.

Just to mention, I am not using CUDA or anything; I don't think I have a GPU-enabled TensorFlow build. I'm willing to do anything to increase the speed. I am on TensorFlow 2.1.0.
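Before anything else it's worth confirming whether this TensorFlow build can see a GPU at all; a quick check:

```python
import tensorflow as tf

# An empty list here means TensorFlow cannot see a GPU,
# so all training runs on the CPU
gpus = tf.config.list_physical_devices('GPU')
print('GPUs visible to TensorFlow:', gpus)
```

If the list is empty, the 10-minute epoch is simply CPU-bound training, and installing a CUDA-enabled build is the biggest single speedup available.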
How to tune learning rate with HParams Dashboard
In the TensorFlow documentation it is shown how to tune several hyperparameters, but not the learning rate. I have searched for how to tune the learning rate using the HParams dashboard but could not find much. The only example is another question on GitHub, but it does not work. Can you please give me some suggestions on this? Should I use a callback function? Or provide different learning rates in hp_optimizer, as in the question on GitHub? Or something else?
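One hedged sketch (not from the documentation, just following the same pattern the HParams tutorial uses for other hyperparameters): treat the learning rate as an ordinary `hp.HParam` and pass each sampled value into the optimizer constructor per run:

```python
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp

# Treat the learning rate like any other hyperparameter
HP_LR = hp.HParam('learning_rate', hp.Discrete([1e-2, 1e-3, 1e-4]))

for lr in HP_LR.domain.values:
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
    # per run, inside tf.summary.create_file_writer(run_dir).as_default():
    #     hp.hparams({HP_LR: lr})   # record this trial's hyperparameters
    #     ...train with `optimizer`, then log a metric, e.g.
    #     tf.summary.scalar('accuracy', accuracy, step=1)
```

No callback is strictly needed for this: each run writes its `hp.hparams(...)` record and metric scalars, and the HParams dashboard groups them by learning rate.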
Tensorflow2 Unknown device in Tensorboard
I cannot display the device placement in the TensorBoard graph. The only device shown is "unknown device" (see screenshot below).
In the python script, I activated the following option:
And indeed, the log device placement information is logged correctly in the python output:
Executing op Fill in device /job:localhost/replica:0/task:0/device:GPU:0
Executing op VarHandleOp in device /job:localhost/replica:0/task:0/device:GPU:0
Executing op VarIsInitializedOp in device /job:localhost/replica:0/task:0/device:GPU:0
...
The tracing is activated as follows:
OS: Windows 10 x64
Python: 3.8
Tensorflow: 2.2.0 GPU
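For reference, and as an assumption about which option was meant (the snippet didn't survive extraction): the standard TF2 switch that produces the "Executing op ... in device ..." lines quoted above is:

```python
import tensorflow as tf

# Log to stderr the device each op executes on; this produces the
# "Executing op ... in device ..." lines shown above
tf.debugging.set_log_device_placement(True)

a = tf.constant([1.0, 2.0])
b = a * 2.0  # the Mul op's placement is logged
```

Note this controls the eager/runtime logging only; the devices shown in the TensorBoard graph come from the trace/profile data, which is a separate mechanism.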
Store TensorFlow/TensorBoard data in Elastic or Prometheus
Is there a way to feed live training (summary) metrics from TensorFlow/TensorBoard into Elastic or Prometheus, so I can visualize these outside of TensorBoard? I'd like to combine my visualizations with other metrics that are not available to TensorBoard.
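As far as I know there is no built-in exporter, but one approach worth sketching (an assumption, not an official integration) is to poll the event files TensorBoard itself reads, using the `EventAccumulator` API from the tensorboard package, and push the scalars to Prometheus or Elastic with their own client libraries:

```python
import tensorflow as tf
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def read_scalars(logdir, tag):
    """Load (step, value) scalar events for `tag` from a TensorBoard log dir."""
    acc = EventAccumulator(logdir)
    acc.Reload()  # parse the event files currently on disk
    tags = acc.Tags()
    if tag in tags.get('scalars', []):  # TF1-style scalar summaries
        return [(e.step, e.value) for e in acc.Scalars(tag)]
    if tag in tags.get('tensors', []):  # TF2 tf.summary writes rank-0 tensors
        return [(e.step, float(tf.make_ndarray(e.tensor_proto)))
                for e in acc.Tensors(tag)]
    return []

# Each (step, value) pair could then be pushed to Prometheus via its client
# library, or indexed into Elastic, on a polling loop during training.
```

Since training appends to the event files continuously, calling `Reload()` on a timer gives near-live metrics without touching the training loop.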
Fastest way to get unique values in ragged tensor
While using a ragged tensor:

import tensorflow_text as tf_text

input_text = ["Never tell me the odds!", "It's not my fault.", "It's a trap!"]
tokenizer = tf_text.WhitespaceTokenizer()
rt = tokenizer.tokenize(input_text)
<tf.RaggedTensor [[b'Never', b'tell', b'me', b'the', b'odds!'], [b"It's", b'not', b'my', b'fault.'], [b"It's", b'a', b'trap!']]>
I can get the unique values inside the ragged tensor using two approaches:

(Fastest) Get the unique values of the ragged tensor using a list comprehension, then apply set and convert back to a tf.Tensor. This operation takes:

The slowest run took 30.92 times longer than the fastest. This could mean that an intermediate result is being cached. 10000 loops, best of 3: 24 µs per loop
tf.constant(list(set([e for elem in rt.to_list() for e in elem])))
Get the unique values using a list comprehension, put the result back into a tensor, and take the unique with the built-in TensorFlow function (tf.unique). This op takes:
The slowest run took 32.94 times longer than the fastest. This could mean that an intermediate result is being cached. 10000 loops, best of 3: 41.4 µs per loop
tf.unique(tf.constant([e for elem in rt.to_list() for e in elem]))
Is there any way other than these two that can speed up the operation? More specifically, I am looking for a built-in function for ragged tensors in tf.ragged.
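One option worth timing against the two above (a sketch, not from the question): a `tf.RaggedTensor` exposes its underlying 1-D values tensor as `flat_values`, so `tf.unique` can run entirely inside TensorFlow with no Python round-trip through `.to_list()`:

```python
import tensorflow as tf

# Same token values as the WhitespaceTokenizer output above,
# built directly as a ragged tensor for a self-contained example
rt = tf.ragged.constant([[b'Never', b'tell', b'me', b'the', b'odds!'],
                         [b"It's", b'not', b'my', b'fault.'],
                         [b"It's", b'a', b'trap!']])

# flat_values is the flat 1-D tensor behind the ragged structure
unique_values, _ = tf.unique(rt.flat_values)
```

This avoids materializing Python lists, which is usually where the time goes in the two list-comprehension variants.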
How to create a custom PreprocessingLayer in TF 2.2
I would like to create a custom preprocessing layer using the PreprocessingLayer class. In this custom layer, placed after the input layer, I would like to normalize my image using tf.cast(img, tf.float32) / 255. I tried to find some code or an example showing how to create this preprocessing layer, but I couldn't find any. Please, can someone provide a full example of creating and using a PreprocessingLayer?
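A hedged sketch (in TF 2.2 the preprocessing base classes live under `tf.keras.layers.experimental.preprocessing` and are still in flux, so subclassing plain `tf.keras.layers.Layer` achieves the same stateless normalization):

```python
import tensorflow as tf

class Rescale(tf.keras.layers.Layer):
    """Casts uint8 images to float32 and scales them to [0, 1]."""
    def call(self, inputs):
        return tf.cast(inputs, tf.float32) / 255.0

# Place the layer right after the input, as described in the question
inputs = tf.keras.Input(shape=(28, 28, 1), dtype=tf.uint8)
x = Rescale()(inputs)
outputs = tf.keras.layers.Flatten()(x)
model = tf.keras.Model(inputs, outputs)
```

Since the layer has no state, nothing needs `adapt()`; the model then accepts raw uint8 image batches directly.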
TensorFlow 2.0 and NetCDF4 RuntimeError: HDF error - Possible issue with I/O
While trying to write a NumPy array of floats to a NetCDF4 dataset, I am getting a RuntimeError: NetCDF: HDF error. I believe that somewhere TensorFlow 2.0 is interfering with NetCDF4, but I do need to import both in the same class/function. It is not clear why the order of importing the libraries affects I/O on a NetCDF4 file.
Here's a sample script:
## The sequence of imports which doesn't work
import numpy as np
import tensorflow as tf   ### <<< if imported here, saving .nc doesn't work
import netCDF4 as nc
#import tensorflow as tf  ### <<< if imported here, saving .nc works properly

print("I am TensorFlow ", tf.__version__, " but I have no job here")
print("I would let NetCDF4 ", nc.__version__, " do it's job, for now")

Nx = 160                      # just another number
outputfile = "outputfile.nc"  # just another filename
ArrayField = np.ones((Nx, Nx, 1))  # sample array to write
print("Writing field data of shape", ArrayField.shape)

ncfile = nc.Dataset("outputfile.nc", 'w', format='NETCDF4_CLASSIC')
ncfile.createDimension('X', ArrayField.shape[0])  #line is probably okay
newx = ncfile.createVariable('X', 'd', ('X'))  #line is probably okay
newx[:] = np.linspace(0.00, 1.00, ArrayField.shape[0])  #line is probably okay
velx = ncfile.createVariable('Component_X', 'd', ('X', 'X'))  #line is probably okay
velx[:] = ArrayField[:, :, 0].T  #line is probably okay

print("Something written to: ", outputfile)
ncfile.close()  ###### <<<<<<< Gives error here
print("Data successfully written to: ", outputfile)
I am TensorFlow  2.0.0  but I have no job here
I would let NetCDF4  1.5.3  do it's job, for now
Writing field data of shape (160, 160, 1)
Something written to:  outputfile.nc
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-1-1cfdf4070b97> in <module>
     22
     23 print("Something written to: ", outputfile)
---> 24 ncfile.close()  ###### <<<<<<< Gives error here
     25 print("Data successfully written to: ", outputfile)

netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Dataset.close()
netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Dataset._close()
netCDF4/_netCDF4.pyx in netCDF4._netCDF4._ensure_nc_success()

RuntimeError: NetCDF: HDF error
With the working import order, the output is:

I am TensorFlow  2.0.0  but I have no job here
I would let NetCDF4  1.5.3  do it's job, for now
Writing field data of shape (160, 160, 1)
Something written to:  outputfile.nc
Data successfully written to:  outputfile.nc
Though I can import TF2.0 after importing NetCDF4 to make this particular sample work, that doesn't really answer the question of why I'm getting RuntimeError: NetCDF: HDF error, or fix the issue for use in more complex cases. Also, I would like to surface more debugging information.
- Tested with 2.2.0, gives the same error.
- Disk size of the created garbage .nc file is ~204K, which should be about ~209K for this array sample.
- Issue persists on a different machine or a clean environment by installing only
- Just in case, here's a pip freeze list of my clean environment: https://gist.github.com/aakash30jan/9ae0cf3dde8a63d28df5275873cb0f10