Keras-tuner can't find callback
I am using keras-tuner in order to obtain the best set of hyperparameters for my model. Here is the training script:
tuner = kt.Hyperband(
    hypermodel=build_model,
    objective="val_accuracy",
    max_epochs=1000,
    factor=3,
    hyperband_iterations=1,
    directory=TrainingSpecific.SAVE_DIR,
    project_name="cnn_tunning"
)
early_stop_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                                       mode='min',
                                                       patience=10,
                                                       min_delta=0.0002)
train_filenames = get_filenames(f'/*train_fold.csv')
val_filenames = get_filenames(f'/*val_fold.csv')
x_train_list, y_train_list = data_reader(train_filenames)
training_generator = custom_generator(x_train_list, y_train_list)
x_val_list, y_val_list = data_reader(val_filenames)
validation_generator = custom_generator(x_val_list, y_val_list)
print("********** Begin search **********")
tuner.search(
    training_generator,
    steps_per_epoch=len(train_filenames),
    validation_data=validation_generator,
    validation_steps=len(val_filenames),
    callbacks=[early_stop_callback],
    workers=1
)
print("********** End of search **********")

# grab the best hyperparameters
bestHP = tuner.get_best_hyperparameters(num_trials=1)[0]
Now what I have found is that once Hyperband gets a decent number of iterations in, and the callback I set up should come into play, I get this error:

W tensorflow/core/framework/op_kernel.cc:1733] INVALID_ARGUMENT: ValueError: Could not find callback with key=pyfunc_11900 in the registry.
However, it just proceeds to the next trial, so I'm not sure what is going on. Can someone explain why it can't find the callback?
I'm using tensorflow 2.8 and keras-tuner 1.1.2.
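One thing I'm experimenting with (a sketch, not a confirmed fix): the pyfunc key in the warning suggests the Python generator gets wrapped in a py_func whose registry entry goes stale between Hyperband brackets. Wrapping the generator in tf.data.Dataset.from_generator, so that every trial builds a fresh iterator, might avoid that; the output_signature shapes and dtypes below are placeholders for my actual data.

import tensorflow as tf

def make_dataset(x_list, y_list):
    # Rebuilds the generator lazily for every new iteration/trial.
    return tf.data.Dataset.from_generator(
        lambda: custom_generator(x_list, y_list),
        output_signature=(
            tf.TensorSpec(shape=(None, None), dtype=tf.float32),  # features (assumed)
            tf.TensorSpec(shape=(None,), dtype=tf.float32),       # labels (assumed)
        ),
    ).repeat()

training_dataset = make_dataset(x_train_list, y_train_list)
validation_dataset = make_dataset(x_val_list, y_val_list)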
See also questions close to this topic
-
Python File Tagging System does not retrieve nested dictionaries in dictionary
I am building a file tagging system using Python. The idea is simple. Given a directory of files (and files within subdirectories), I want to filter them out using a filter input and tag those files with a word or a phrase.
If I got the following contents in my current directory:
data/
    budget.xls
    world_building_budget.txt
a.txt
b.exe
hello_world.dat
world_builder.spec
and I execute the following command in the shell:
py -3 tag_tool.py -filter=world -tag="World-Building Tool"
My output will be:
These files were tagged with "World-Building Tool":

data/world_building_budget.txt
hello_world.dat
world_builder.spec
My current output isn't exactly like this but basically, I am converting all files and files within subdirectories into a single dictionary like this:
def fs_tree_to_dict(path_):
    file_token = ''
    for root, dirs, files in os.walk(path_):
        tree = {d: fs_tree_to_dict(os.path.join(root, d)) for d in dirs}
        tree.update({f: file_token for f in files})
        return tree
Right now, every value in my dictionary looks like this:

key: ''

In the following function, I am turning the empty values ('') into empty lists (to hold my tags):

def empty_str_to_list(d):
    for k, v in d.items():
        if v == '':
            d[k] = []
        elif isinstance(v, dict):
            empty_str_to_list(v)
When I run my entire code, this is my output:
hello_world.dat ['World-Building Tool']
world_builder.spec ['World-Building Tool']
But it does not see data/world_building_budget.txt. This is the full dictionary:

{'data': {'world_building_budget.txt': []}, 'a.txt': [], 'hello_world.dat': [], 'b.exe': [], 'world_builder.spec': []}
This is my full code:
import os, argparse

def fs_tree_to_dict(path_):
    file_token = ''
    for root, dirs, files in os.walk(path_):
        tree = {d: fs_tree_to_dict(os.path.join(root, d)) for d in dirs}
        tree.update({f: file_token for f in files})
        return tree

def empty_str_to_list(d):
    for k, v in d.items():
        if v == '':
            d[k] = []
        elif isinstance(v, dict):
            empty_str_to_list(v)

parser = argparse.ArgumentParser(description="Just an example",
                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("--filter", action="store", help="keyword to filter files")
parser.add_argument("--tag", action="store", help="a tag phrase to attach to a file")
parser.add_argument("--get_tagged", action="store", help="retrieve files matching an existing tag")
args = parser.parse_args()

filter = args.filter
tag = args.tag
get_tagged = args.get_tagged

current_dir = os.getcwd()
files_dict = fs_tree_to_dict(current_dir)
empty_str_to_list(files_dict)

for k, v in files_dict.items():
    if filter in k:
        if v == []:
            v.append(tag)
        print(k, v)
    elif isinstance(v, dict):
        empty_str_to_list(v)
        if get_tagged in v:
            print(k, v)
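The loop at the bottom only inspects top-level keys, so files inside nested dictionaries (like data/world_building_budget.txt) are never matched against the filter. A sketch of a recursive walk that carries the path along (tag_files and path are names made up for illustration, not part of the original script):

def tag_files(d, filter_word, tag_word, path=''):
    # Walk the nested dict, tagging and printing any file whose name matches.
    for k, v in d.items():
        full_path = path + k
        if isinstance(v, dict):
            tag_files(v, filter_word, tag_word, path=full_path + '/')
        elif filter_word in k:
            if v == []:
                v.append(tag_word)
            print(full_path, v)

tag_files(files_dict, filter, tag)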
-
Actually, I am working on a project and it is showing "no module named pip_internal". Please help me with this. I am using PyCharm with a conda interpreter. This is the traceback:
File "C:\Users\pjain\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\pjain\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "C:\Users\pjain\AppData\Local\Programs\Python\Python310\Scripts\pip.exe\__main__.py", line 4, in <module> File "C:\Users\pjain\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\__init__.py", line 4, in <module> from pip_internal.utils import _log
-
Looping the function if the input is not a string
I'm new to Python (first of all). I have a homework assignment: write a function that checks whether an item exists in a dictionary or not.
inventory = {"apple": 50, "orange": 50, "pineapple": 70, "strawberry": 30}

def check_item():
    x = input("Enter the fruit's name: ")
    if not x.isalpha():
        print("Error! You need to type the name of the fruit")
    elif x in inventory:
        print("Fruit found:", x)
        print("Inventory available:", inventory[x], "KG")
    else:
        print("Fruit not found")

check_item()
I want the function to loop again only if the input written is not a string. I've tried putting return under print("Error! You need to type the name of the fruit"), but that didn't work. Help!
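A minimal sketch of one way to do this: wrap only the input step in a while loop and break out once the input is alphabetic.

def check_item():
    while True:
        x = input("Enter the fruit's name: ")
        if x.isalpha():
            break  # valid fruit name: stop re-prompting
        print("Error! You need to type the name of the fruit")
    if x in inventory:
        print("Fruit found:", x)
        print("Inventory available:", inventory[x], "KG")
    else:
        print("Fruit not found")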
-
Only download certain label tf dataset
Looking to do some fine-tuning. The dataset that I'm trying to fine-tune with (found here: https://knowyourdata-tfds.withgoogle.com/#dataset=sun397&filters=kyd%2Fsun397%2Flabel:%2Fh%2Fhouse&tab=ITEM&select=kyd%2Fsun397%2Flabel&item=%2Fh%2Fhouse%2Fsun_blpzjomvikwtulrq.jpg&expanded_groups=sun397) is pretty large, and I just want to use/download the images with the label /h/house. Any tips on how I can best accomplish this? Thanks!
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import numpy as np
import matplotlib.pyplot as plt
import functools
import pandas

(train_ds, valid_ds), info = tfds.load("sun397",
                                       split=["train", "validation"],
                                       as_supervised=True,
                                       with_info=True,
                                       label = "/h/house")
int_to_class_label = info.features['label'].int2str
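As far as I know, tfds.load has no per-label download switch (the label= keyword above is not an accepted argument), so a sketch under that assumption filters after loading; it still downloads the whole dataset, but only /h/house images flow into training:

import tensorflow as tf
import tensorflow_datasets as tfds

(train_ds, valid_ds), info = tfds.load("sun397",
                                       split=["train", "validation"],
                                       as_supervised=True,
                                       with_info=True)

house_idx = info.features['label'].str2int('/h/house')  # class name -> integer id
train_house = train_ds.filter(lambda image, label: tf.equal(label, house_idx))
valid_house = valid_ds.filter(lambda image, label: tf.equal(label, house_idx))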
-
TFF: How can I train any model using a server running tff-runtime and a client running tff-client?
I read all the tensorflow-federated tutorials, including this one https://www.tensorflow.org/federated/gcp_setup, but I couldn't understand how to use this for training a model.
I'm doing a graduation project. To start, I need to build a POC that uses tensorflow-federated to train a model with one server and one client, in order to later apply a cross-silo setup for recognition of organs affected by COVID. If anyone can point me in a direction, I'd be very grateful.
-
Can't use Keras MeanIoU to train semantic segmentation model
I'm working on a binary semantic segmentation problem. I built a UNet model with a MobileNetV2 backbone. Here is my model code:
def upsample(filters, size, apply_dropout=False):
    initializer = tf.random_normal_initializer(0., 0.02)
    layer = Sequential()
    layer.add(layers.Conv2DTranspose(filters, size, strides=2, padding='same',
                                     kernel_initializer=initializer, use_bias=False))
    layer.add(layers.BatchNormalization())
    if apply_dropout:
        layer.add(layers.Dropout(0.5))
    layer.add(layers.ReLU())
    return layer

def UNet(image_size, num_classes):
    inputs = Input(shape=image_size + (3,))
    base_model = applications.MobileNetV2(input_shape=image_size + (3,), include_top=False)
    layer_names = [
        'block_1_expand_relu',
        'block_3_expand_relu',
        'block_6_expand_relu',
        'block_13_expand_relu',
        'block_16_project',
    ]
    base_model_outputs = [base_model.get_layer(name).output for name in layer_names]
    down_stack = Model(inputs=base_model.input, outputs=base_model_outputs)
    down_stack.trainable = False
    up_stack = [
        upsample(512, 3),
        upsample(256, 3),
        upsample(128, 3),
        upsample(64, 3)
    ]
    skips = down_stack(inputs)
    x = skips[-1]
    skips = reversed(skips[:-1])
    for up, skip in zip(up_stack, skips):
        x = up(x)
        x = layers.Concatenate()([x, skip])
    outputs = layers.Conv2DTranspose(filters=num_classes, kernel_size=3, strides=2, padding='same')(x)
    return Model(inputs, outputs)
To load the images and masks for training, I built an image loader that inherits from keras.utils.Sequence:

class ImageLoader(utils.Sequence):
    def __init__(self, batch_size, img_size, img_paths, mask_paths):
        self.batch_size = batch_size
        self.img_size = img_size
        self.img_paths = img_paths
        self.mask_paths = mask_paths

    def __len__(self):
        return len(self.mask_paths) // self.batch_size

    def __getitem__(self, idx):
        i = idx * self.batch_size
        batch_img_paths = self.img_paths[i:i + self.batch_size]
        batch_mask_paths = self.mask_paths[i:i + self.batch_size]
        x = np.zeros((self.batch_size,) + self.img_size + (3,), dtype='float32')
        for j, path in enumerate(batch_img_paths):
            img = utils.load_img(path, target_size=self.img_size)
            img = utils.img_to_array(img)
            x[j] = img
        y = np.zeros((self.batch_size,) + self.img_size + (1,), dtype='uint8')
        for j, path in enumerate(batch_mask_paths):
            img = utils.load_img(path, target_size=self.img_size, color_mode='grayscale')
            img = utils.img_to_array(img)
            # [0, 255] -> [0, 1]
            img //= 255
            y[j] = img
        return x, y
In my segmentation problem, all the labels are in the range [0, 1]. However, when I try to compile and then fit the model using the Adam optimizer, sparse categorical cross-entropy loss, and the tf.keras.metrics.MeanIoU metric, I run into the following problem:

Node: 'confusion_matrix/assert_non_negative_1/assert_less_equal/Assert/AssertGuard/Assert'
2 root error(s) found.
(0) INVALID_ARGUMENT: assertion failed: [`predictions` contains negative values. ]
[Condition x >= 0 did not hold element-wise:] [x (confusion_matrix/Cast:0) = ] [-1 -1 -1...]
    [[{{node confusion_matrix/assert_non_negative_1/assert_less_equal/Assert/AssertGuard/Assert}}]]
    [[confusion_matrix/assert_less_1/Assert/AssertGuard/pivot_f/_31/_67]]
(1) INVALID_ARGUMENT: assertion failed: [`predictions` contains negative values. ]
[Condition x >= 0 did not hold element-wise:] [x (confusion_matrix/Cast:0) = ] [-1 -1 -1...]
    [[{{node confusion_matrix/assert_non_negative_1/assert_less_equal/Assert/AssertGuard/Assert}}]]
At first, I used accuracy as the training metric and didn't encounter this problem; it only appeared when I switched to MeanIoU. Does anyone know how to fix it? Thank you very much!
UPDATE: I've searched on StackOverflow and found this question about a similar error, however the fix mentioned in that link (reduce learning rate) doesn't work in my case.
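A pattern that commonly triggers exactly this assertion (hedged, since the full training call isn't shown): tf.keras.metrics.MeanIoU expects integer class indices, while the model above emits raw per-class logits, which can be negative. A frequently suggested workaround is a thin subclass that argmaxes predictions before updating the confusion matrix:

import tensorflow as tf

class MeanIoUFromLogits(tf.keras.metrics.MeanIoU):
    """MeanIoU that first collapses per-class logits to label indices."""
    def update_state(self, y_true, y_pred, sample_weight=None):
        y_pred = tf.argmax(y_pred, axis=-1)  # (b, h, w, classes) -> (b, h, w)
        return super().update_state(y_true, y_pred, sample_weight)

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=[MeanIoUFromLogits(num_classes=2)])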
-
How to print all parameters of a keras model
I am trying to print all 1290 parameters in the dense_1 layer, but model.get_weights()[7] only shows 10 parameters. How could I print all 1290 parameters of the dense_1 layer? And what is the difference between model.get_weights() and model.layer.get_weights()?
>>> model.get_weights()[7]
array([-2.8552295e-04, -4.3254648e-03, -1.8752701e-04,  2.3482188e-03,
       -3.4848123e-04,  7.6121779e-04, -2.7494309e-06, -1.9068648e-03,
        6.0777756e-04,  1.9550985e-03], dtype=float32)

>>> model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 conv2d (Conv2D)             (None, 26, 26, 32)        320
 conv2d_1 (Conv2D)           (None, 24, 24, 64)        18496
 max_pooling2d (MaxPooling2D (None, 12, 12, 64)        0
 )
 dropout (Dropout)           (None, 12, 12, 64)        0
 flatten (Flatten)           (None, 9216)              0
 dense (Dense)               (None, 128)               1179776
 dropout_1 (Dropout)         (None, 128)               0
 dense_1 (Dense)             (None, 10)                1290
=================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0
_________________________________________________________________
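A sketch of what is likely happening (my reading of the summary above): model.get_weights() returns one flat list of every layer's arrays in order, so index 7 lands on dense_1's bias vector of shape (10,); its kernel of shape (128, 10) sits at index 6, and together they make the 1290 parameters. Calling get_weights() on a single layer returns only that layer's arrays:

import numpy as np

# Fetch dense_1's weights from the layer itself rather than the flat list.
dense_1 = model.get_layer('dense_1')
kernel, bias = dense_1.get_weights()   # shapes (128, 10) and (10,)
print(kernel.size + bias.size)         # 1290

np.set_printoptions(threshold=np.inf)  # stop numpy truncating large arrays
print(kernel)
print(bias)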
-
Training plot is not appearing properly for keras model
I have data where I need to train with X and Y. The training part is done, but when I plot the prediction against the actual data, the plot shows many criss-crossing lines instead of a single non-linear regression line.
model = Sequential()
model.add(Dense(7, input_dim=1, activation="tanh"))
model.add(Dense(1))
model.compile(loss="mse",
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              metrics=["mae"])
history = model.fit(X, Y, epochs=1000)

predict = model.predict(X)
plt.scatter(X, Y, edgecolors='g')
plt.plot(X, predict, 'r')
plt.legend(['Predicted Y', 'Actual Y'])
plt.show()
Please see the attached image: [plotting image]
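In case it helps, the usual cause of this "many lines" artifact is that plt.plot connects points in the order they appear, so an unsorted X makes the line zig-zag back and forth. A small sketch of the usual fix (assuming X is a 1-D array or a column vector):

import numpy as np
import matplotlib.pyplot as plt

order = np.argsort(X.ravel())          # indices that put X in ascending order
plt.scatter(X, Y, edgecolors='g')
plt.plot(X.ravel()[order], predict.ravel()[order], 'r')
plt.legend(['Actual Y', 'Predicted Y'])
plt.show()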
-
File system for s3 already registered when importing tensorflow_io
I installed tensorflow-io with pip install tensorflow-io. When I import it I get:

tensorflow.python.framework.errors_impl.AlreadyExistsError: File system for s3 already registered
The trace is this:

import tensorflow_io as tfio
  File "/opt/miniconda/lib/python3.7/site-packages/tensorflow_io/__init__.py", line 17, in <module>
    from tensorflow_io.python.api import *  # pylint: disable=wildcard-import
  File "/opt/miniconda/lib/python3.7/site-packages/tensorflow_io/python/api/__init__.py", line 19, in <module>
    from tensorflow_io.python.ops.io_dataset import IODataset
  File "/opt/miniconda/lib/python3.7/site-packages/tensorflow_io/python/ops/__init__.py", line 96, in <module>
    plugin_ops = _load_library("libtensorflow_io_plugins.so", "fs")
  File "/opt/miniconda/lib/python3.7/site-packages/tensorflow_io/python/ops/__init__.py", line 64, in _load_library
    l = load_fn(f)
  File "/opt/miniconda/lib/python3.7/site-packages/tensorflow_io/python/ops/__init__.py", line 56, in <lambda>
    load_fn = lambda f: tf.experimental.register_filesystem_plugin(f) is None
  File "/opt/miniconda/lib/python3.7/site-packages/tensorflow/python/framework/load_library.py", line 178, in register_filesystem_plugin
    py_tf.TF_RegisterFilesystemPlugin(plugin_location)
Can't get away from this problem, any ideas?
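A cause worth ruling out (I can't confirm it's the one here): a tensorflow-io wheel built against a different TensorFlow than the one installed, so both try to register the s3 filesystem plugin. The project publishes a compatibility table; pinning a release that matches the installed TF may help, for example (assuming TF 2.8):

pip install tensorflow-io==0.24.0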
-
Group By and Sort a Tensorflow Dataset
I would like to group rows in a tensorflow dataset by a key and select the top k rows in each group by some value. This is easily done in, e.g., Pandas or SQL, but not so obvious in TF.
I found group_by_window and group_by_reducer in tf.data.experimental, but I can't figure out how to sort a dataset by a specific column.
My dataset has a Dict structure for the rows. What I am looking for is something like:
from tensorflow.data.experimental import group_by_window

def key_f(row):
    return row['id']

def reduce_func(key, ds):
    # sort by a value - except there is no method like this...
    ds = ds.sort(by='value')
    return ds.take(5)

t = group_by_window(key_func=key_f, reduce_func=reduce_func, window_size=100)
ds = dataset.apply(t)
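There is indeed no ds.sort(), but a sketch that gets the same effect (assuming each window fits in memory as a single batch, and reusing the 'id'/'value' column names from the snippet above): batch the window, reorder every column with tf.argsort/tf.gather, then unbatch and take the top rows.

import tensorflow as tf
from tensorflow.data.experimental import group_by_window

def key_f(row):
    return tf.cast(row['id'], tf.int64)  # key_func must return an int64 scalar

def reduce_func(key, window_ds):
    def sort_batch(batch):
        idx = tf.argsort(batch['value'], direction='DESCENDING')
        return {k: tf.gather(v, idx) for k, v in batch.items()}
    return (window_ds
            .batch(100)        # materialize the whole window as one batch
            .map(sort_batch)   # reorder all columns by 'value'
            .unbatch()
            .take(5))          # keep the top 5 rows per key

t = group_by_window(key_func=key_f, reduce_func=reduce_func, window_size=100)
ds = dataset.apply(t)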
-
TypeError: Missing required positional argument. When using KerasTuner to tune ANN deep learning model
I was doing hyperparameter tuning for my Artificial Neural Network (ANN) model using KerasTuner, where I want to use it for binary classification. Below is my code:
import tensorflow as tf
from tensorflow import keras
from keras import Input
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout, BatchNormalization
import keras_tuner as kt
from keras_tuner.tuners import RandomSearch
from keras_tuner.tuners import Hyperband
from keras_tuner import HyperModel

def build_model(hp):
    # Create a Sequential model
    model = tf.keras.Sequential()

    # Input Layer: the model will take as input arrays of shape (None, 67).
    # My dataset has 67 columns.
    model.add(tf.keras.Input(shape = (67,)))

    # Tune number of hidden layers and number of neurons
    for i in range(hp.Int('num_layers', 1, 3)):
        hp_units = hp.Int(f'units_{i}', min_value = 32, max_value = 512, step = 32)
        model.add(Dense(units = hp_units, activation = 'relu'))

    # Output Layer
    model.add(Dense(units = 1, activation='sigmoid'))

    # Compile the model
    hp_learning_rate = hp.Choice('learning_rate', values = [1e-2, 1e-3, 1e-4])
    model.compile(optimizer = keras.optimizers.Adam(learning_rate = hp_learning_rate),
                  loss = keras.losses.binary_crossentropy(),
                  metrics = ["accuracy"])
    return model

# HyperBand algorithm from keras tuner
hpb_tuner = kt.Hyperband(
    hypermodel = build_model,
    objective = 'val_accuracy',
    max_epochs = 50,
    factor = 3,
    seed = 42,
    executions_per_trial = 3,
    directory = 'ANN_Parameters_Tuning',
    project_name = 'Medical Claim'
)
Then I face the issue below:

TypeError                                 Traceback (most recent call last)
<ipython-input-114-b58f291b49ae> in <module>
      1 # HyperBand algorithm from keras tuner
      2
----> 3 hpb_tuner = kt.Hyperband(
      4     hypermodel = build_model,
      5     objective = 'val_accuracy',

~\anaconda3\envs\medicalclaim\lib\site-packages\keras_tuner\tuners\hyperband.py in __init__(self, hypermodel, objective, max_epochs, factor, hyperband_iterations, seed, hyperparameters, tune_new_entries, allow_new_entries, **kwargs)
    373             allow_new_entries=allow_new_entries,
    374         )
--> 375         super(Hyperband, self).__init__(
    376             oracle=oracle, hypermodel=hypermodel, **kwargs
    377         )

~\anaconda3\envs\medicalclaim\lib\site-packages\keras_tuner\engine\tuner.py in __init__(self, oracle, hypermodel, max_model_size, optimizer, loss, metrics, distribution_strategy, directory, project_name, logger, tuner_id, overwrite, executions_per_trial)
    108         )
    109
--> 110         super(Tuner, self).__init__(
    111             oracle=oracle,
    112             hypermodel=hypermodel,

~\anaconda3\envs\medicalclaim\lib\site-packages\keras_tuner\engine\base_tuner.py in __init__(self, oracle, hypermodel, directory, project_name, logger, overwrite)
    101         self._display = tuner_utils.Display(oracle=self.oracle)
    102
--> 103         self._populate_initial_space()
    104
    105         if not overwrite and tf.io.gfile.exists(self._get_tuner_fname()):

~\anaconda3\envs\medicalclaim\lib\site-packages\keras_tuner\engine\base_tuner.py in _populate_initial_space(self)
    130
    131         while True:
--> 132             self.hypermodel.build(hp)
    133
    134             # Update the recored scopes.

<ipython-input-113-ac44a2da327d> in build_model(hp)
     18     hp_learning_rate = hp.Choice('learning_rate', values = [1e-2, 1e-3, 1e-4])
     19     model.compile(optimizer = keras.optimizers.Adam(learning_rate = hp_learning_rate),
---> 20                   loss = keras.losses.binary_crossentropy(),
     21                   metrics = ["accuracy"]
     22                   )

~\anaconda3\envs\medicalclaim\lib\site-packages\tensorflow\python\util\traceback_utils.py in error_handler(*args, **kwargs)
    151     except Exception as e:
    152       filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153       raise e.with_traceback(filtered_tb) from None
    154     finally:
    155       del filtered_tb

~\anaconda3\envs\medicalclaim\lib\site-packages\tensorflow\python\util\dispatch.py in op_dispatch_handler(*args, **kwargs)
   1088       if iterable_params is not None:
   1089         args, kwargs = replace_iterable_params(args, kwargs, iterable_params)
-> 1090       result = api_dispatcher.Dispatch(args, kwargs)
   1091       if result is not NotImplemented:
   1092         return result

TypeError: Missing required positional argument
Even if I use RandomSearch from KerasTuner, I get the same error as in the traceback above. Below is my code for RandomSearch:

# RandomSearch algorithm from keras tuner
random_tuner = RandomSearch(
    hypermodel = build_model,
    objective = 'val_accuracy',
    max_trials = 50,
    seed = 42,
    overwrite = True,
    executions_per_trial = 3,
    directory = 'ANN_Parameters_Tuning',
    project_name = 'Medical Claim'
)
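The traceback's arrow points at loss = keras.losses.binary_crossentropy(): that function expects y_true and y_pred as positional arguments, so calling it with none raises exactly this TypeError. A sketch of the compile call passing the loss by name or as a class instance instead:

model.compile(optimizer = keras.optimizers.Adam(learning_rate = hp_learning_rate),
              loss = keras.losses.BinaryCrossentropy(),  # or loss = 'binary_crossentropy'
              metrics = ["accuracy"])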
-
Why is my validation accuracy 0.0? Is this the cause of my problem?
I'm trying out Keras Tuner. My problem is that, when I instantiate my model, I always get 0.0 for my validation accuracy.
def tuning_model(hp):
    hp_units = hp.Int('units', min_value=12, max_value=64, step=4)
    model = keras.Sequential([
        layers.Dense(units=hp_units, activation='relu'),
        layers.Dense(units=hp_units, activation='relu'),
        layers.Dense(1)
    ])
    hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])
    model.compile(loss='mean_absolute_error',
                  optimizer=tf.keras.optimizers.Adam(hp_learning_rate),
                  metrics=['accuracy'])
    return model
My instantiation looks like this:
tuner = kt.Hyperband(tuning_model,
                     objective='val_accuracy',
                     max_epochs=10,
                     factor=3,
                     directory='my_dir',
                     project_name='models')

stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)
And after that:
tuner.search(train_features, train_labels, epochs=50, validation_split=0.2, callbacks=[stop_early])

# Get the optimal hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]

print(f"""
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is {best_hps.get('units')} and the optimal learning rate for the optimizer
is {best_hps.get('learning_rate')}.
""")
The output of the search is:
Trial 30 Complete [00h 00m 01s]
val_accuracy: 0.0

Best val_accuracy So Far: 0.0
Total elapsed time: 00h 00m 33s
INFO:tensorflow:Oracle triggered exit

The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is 24 and the optimal learning rate for the optimizer is 0.01.
I'm guessing that this is the reason why the following code doesn't work:
model = tuner.hypermodel.build(best_hps)
history = model.fit(
    train_features,
    train_labels,
    validation_split=0.2,
    verbose=2,
    epochs=100)

val_acc_per_epoch = history.history['val_accuracy']
best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1
print('Best epoch: %d' % (best_epoch,))
This is the error message I get: [error screenshot]

Edit: I got this as my final output, but I'm not sure whether it's good enough: [hypermodel screenshot]
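A plausible reading (hedged, since the dataset isn't shown): the model ends in Dense(1) trained with mean_absolute_error, i.e. a regression, so metrics=['accuracy'] compares continuous predictions with continuous targets and essentially never matches, which is why val_accuracy stays at 0.0. A sketch that tunes on validation MAE instead (kt.Objective tells the tuner that lower is better):

# Inside tuning_model(hp), track a regression metric instead of accuracy:
model.compile(loss='mean_absolute_error',
              optimizer=tf.keras.optimizers.Adam(hp_learning_rate),
              metrics=['mean_absolute_error'])

# Then minimise validation MAE during the search:
tuner = kt.Hyperband(tuning_model,
                     objective=kt.Objective('val_mean_absolute_error', direction='min'),
                     max_epochs=10,
                     factor=3,
                     directory='my_dir',
                     project_name='models')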
-
Is it possible to view multiple tuner runs at once on tensorboard hparams tab?
I'm using Keras Tuner to do hyperparameter optimization on a particular hypermodel, which is then trained for different cases (i.e. different houses). My goal for now is to get an easy, straightforward visualisation of what generally works and what doesn't. The TensorBoard HParams tab seems perfect for this, except that it only shows one tuner run at a time, i.e. only one house. The Scalars tab shows all runs at once, but the HParams tab does not: the tuner runs 100 trials for each house, yet the HParams tab shows only 100 trials total. Is there a way to change this?
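One avenue worth trying (a sketch, not a confirmed answer): Keras Tuner writes HParams data wherever a TensorBoard callback points, so giving each house's search a subdirectory under one shared log root, and starting TensorBoard on that root, may surface all the runs in the HParams tab together. house_id, the paths, and x_train/y_train are placeholders.

import tensorflow as tf

# One shared log root, one subdirectory per house.
tb = tf.keras.callbacks.TensorBoard(log_dir=f"logs/{house_id}")
tuner.search(x_train, y_train, validation_split=0.2, callbacks=[tb])

# Then point TensorBoard at the shared root:
#   tensorboard --logdir logs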