Recognizing x-rays with a pretrained network
I need to recognize the spine on x-ray images. I don't have much data; is it possible to use pretrained networks? Which ones? It seemed to me that the U-Net architecture would work best for this task. Maybe there are other options?
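A U-Net is indeed a common choice for this kind of segmentation, and pretrained encoders (e.g. a VGG or ResNet backbone trained on ImageNet) are often reused as the contracting path when data is scarce. Another standard remedy for small datasets is heavy augmentation of image/mask pairs; below is a minimal numpy sketch of paired flips (the specific transforms are just an illustration, not a prescription):

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Apply the same random flip to an image and its segmentation mask,
    so the mask stays aligned with the transformed image."""
    if rng.random() < 0.5:
        image, mask = image[:, ::-1], mask[:, ::-1]   # horizontal flip
    if rng.random() < 0.5:
        image, mask = image[::-1, :], mask[::-1, :]   # vertical flip
    return image, mask

rng = np.random.default_rng(0)
image = np.arange(16.0).reshape(4, 4)   # toy 4x4 "x-ray"
mask = (image > 7).astype(np.uint8)     # toy "spine" mask
aug_img, aug_mask = augment_pair(image, mask, rng)
# The mask is transformed identically, so it still matches the image:
assert np.array_equal(aug_mask, (aug_img > 7).astype(np.uint8))
```

The same idea scales to elastic deformations and small rotations, which the original U-Net paper relied on heavily for small medical datasets.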
See also questions close to this topic
How to efficiently shuffle a large tf.data.Dataset when using tf.estimator.train_and_evaluate?
The tf.estimator.train_and_evaluate documentation makes it clear that the input dataset must be properly shuffled for training to see all examples:
Overfitting: In order to avoid overfitting, it is recommended to set up the training input_fn to shuffle the training data properly. It is also recommended to train the model a little longer, say multiple epochs, before performing evaluation, as the input pipeline starts from scratch for each training. It is particularly important for local training and evaluation.
In my application, I would like to sample examples uniformly from the full dataset, with an arbitrary evaluation frequency and shuffle() buffer size. Is there an efficient way to achieve that with a tf.data.Dataset without loading the complete dataset into system memory?
I considered sharding the dataset based on the buffer size, but if evaluation does not occur frequently, training will iterate over the same shard multiple times (a repeat() closes the pipeline). Ideally, I would like to move on to another shard after a complete iteration over the dataset. Is that possible?
Thanks for any pointers!
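For intuition, shuffle(buffer_size) only mixes elements within a sliding window, which is why a buffer smaller than the dataset cannot give uniform sampling. A minimal pure-Python sketch of that windowed behaviour (an approximation for illustration, not the actual tf.data implementation):

```python
import random

def windowed_shuffle(items, buffer_size, seed=None):
    """Mimic tf.data-style shuffling: keep a buffer of `buffer_size`
    elements and emit a random one as each new element arrives."""
    rng = random.Random(seed)
    buffer = []
    for item in items:
        buffer.append(item)
        if len(buffer) > buffer_size:
            # Emit a random element from the current window.
            yield buffer.pop(rng.randrange(len(buffer)))
    # Drain the remaining buffer in random order.
    rng.shuffle(buffer)
    yield from buffer

out = list(windowed_shuffle(range(10), buffer_size=2, seed=0))
# Every element is emitted exactly once, but the last element of the
# input can never appear first: the window limits how far items move.
print(out)
```

With buffer_size equal to the dataset size this degenerates to a full uniform shuffle, which is exactly the memory cost the question is trying to avoid.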
How to run predict_generator on large dataset with limited memory?
Currently I am feeding all the images at once to predict_generator. I want to be able to feed small sets of images stored in the validation_generator and make predictions on them, so that there are no memory issues with large datasets. How should I change the following code?
top_model_weights_path = '/home/rehan/ethnicity.071217.23-0.28.hdf5'
path = "/home/rehan/countries/pakistan/guys/"
img_width, img_height = 139, 139
confidence = 0.8
model = applications.InceptionResNetV2(include_top=False, weights='imagenet',
                                       input_shape=(img_width, img_height, 3))
print("base pretrained model loaded")
validation_generator = ImageDataGenerator(rescale=1./255).flow_from_directory(
    path, target_size=(img_width, img_height), batch_size=32, shuffle=False)
print("validation_generator")
features = model.predict_generator(validation_generator, steps=10)
TypeError: Can not convert a list into a Tensor or Operation
I am trying to get the output of the code, but it fails at tf.control_dependencies(). The error is shown below. The original code came from enter link description here.
Traceback (most recent call last):
  File "croptest.py", line 80, in <module>
    crop(Image,boxes,batch_inds);
  File "croptest.py", line 55, in crop
    with tf.control_dependencies([assert_op, images, batch_inds]):
  File "/home/ubuntu/Desktop/WK/my_project/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3936, in control_dependencies
    return get_default_graph().control_dependencies(control_inputs)
  File "/home/ubuntu/Desktop/WK/my_project/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3665, in control_dependencies
    c = self.as_graph_element(c)
  File "/home/ubuntu/Desktop/WK/my_project/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2708, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/home/ubuntu/Desktop/WK/my_project/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2797, in _as_graph_element_locked
    % (type(obj).__name__, types_str))
TypeError: Can not convert a list into a Tensor or Operation.
I believe there is no mistake in the code; I am just curious why control_dependencies is not working even though all inputs are given. The code I ran is below:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
import numpy as np

def crop(images, boxes, batch_inds, stride=1, pooled_height=2, pooled_width=2, scope='ROIAlign'):
    """Cropping areas of features into fixed size
    Params:
    --------
    images: a 4-d Tensor of shape (N, H, W, C)
    boxes: rois in the original image, of shape (N, ..., 4), [x1, y1, x2, y2]
    batch_inds:

    Returns:
    --------
    A Tensor of shape (N, pooled_height, pooled_width, C)
    """
    with tf.name_scope(scope):
        # boxes = boxes / (stride + 0.0)
        print("boxes again=", boxes)
        boxes = tf.reshape(boxes, [-1, 4])
        print("2=======================================================")
        print("boxes again=", boxes)

        # normalize the boxes and swap x y dimensions
        shape = tf.shape(images)
        boxes = tf.reshape(boxes, [-1, 2])  # to (x, y)
        print("3=======================================================")
        print(boxes)
        xs = boxes[:, 0]
        ys = boxes[:, 1]
        print("4=======================================================")
        print(xs, ys)
        xs = xs / tf.cast(shape, tf.float32)
        ys = ys / tf.cast(shape, tf.float32)
        print("5=======================================================")
        print("again xs,ys", xs, ys)
        boxes = tf.concat([ys[:, tf.newaxis], xs[:, tf.newaxis]], axis=1)
        boxes = tf.reshape(boxes, [-1, 4])  # to (y1, x1, y2, x2)
        print("6=======================================================")
        print("again boxes", boxes)

        # if batch_inds is False:
        #     num_boxes = tf.shape(boxes)
        #     batch_inds = tf.zeros([num_boxes], dtype=tf.int32, name='batch_inds')
        # batch_inds = boxes[:, 0] * 0
        # batch_inds = tf.cast(batch_inds, tf.int32)

        # assert_op = tf.Assert(tf.greater(tf.shape(images), tf.reduce_max(batch_inds)), [images, batch_inds])
        assert_op = tf.Assert(tf.greater(tf.size(images), 0), [images, batch_inds])
        print("7=======================================================")
        print("assert_op", assert_op)
        print("8=======================================================")
        with tf.control_dependencies([assert_op, images, batch_inds]):
            return tf.image.crop_and_resize(images, boxes, batch_inds,
                                            [pooled_height, pooled_width],
                                            method='bilinear', name='Crop')
These are the inputs I set:
Image = [[[[1, 1, 1, 1], [2, 2, 2, 2]], [[3, 3, 3, 3], [4, 4, 4, 4]]]]
print("=======================================================")
box = [[1, 1, 2, 2]]
boxes = tf.constant(box, tf.float32)
batch_inds = np.zeros((4,), dtype=np.int32)
batch_inds = tf.convert_to_tensor(batch_inds)
print("boxes=", boxes)
print(Image)
print(tf.shape(Image))
crop(Image, boxes, batch_inds)
What is wrong with my inputs if I don't want to modify the crop() function? Thank you.
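For what it's worth, the error message suggests one of the values handed to tf.control_dependencies is a plain Python list rather than a tensor: Image is defined as a nested list in the inputs above. A minimal numpy sketch of the explicit conversion that TensorFlow is refusing to do implicitly here, using np.asarray as a stand-in for tf.convert_to_tensor:

```python
import numpy as np

# The image is a nested Python list, as in the question.
Image = [[[[1, 1, 1, 1], [2, 2, 2, 2]],
          [[3, 3, 3, 3], [4, 4, 4, 4]]]]

# A plain list has no dtype or shape of its own; converting it to an
# array (or, in TensorFlow, a tensor) makes both explicit.
image_array = np.asarray(Image, dtype=np.float32)
print(image_array.shape)  # (1, 2, 2, 4): a 4-d (N, H, W, C) volume
```

The analogous TensorFlow step would be converting Image with tf.convert_to_tensor before calling crop(), so that everything passed to tf.control_dependencies is already a tensor or an op.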
Where can I find the dataset to detect the defective ceramic tiles to implement the neural network?
I need a dataset for implementing a neural network that detects whether a ceramic tile is defective or not. I have looked, but I couldn't find one. Can anyone help me with that?
Implementation of Recurrent Neural Networks in C++
How can we implement Recurrent Neural Networks in C++? If not RNNs, which other neural networks can be implemented in C++?
Total weight of inputs to a neuron in ANN
In an ANN, we know that to make it "learn", we need to adjust the weights of the inputs to a particular neuron.
During adjustment, some weights are to be reduced while others are to be increased. Should the total weight of all j inputs to the i-th neuron be 1? Thanks in advance.
How to build stacked Sequence-to-sequence autoencoder?
In the Keras blog post "Building Autoencoders in Keras", the following code is provided to build a single sequence-to-sequence autoencoder:
from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model

inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(latent_dim)(inputs)

decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)

sequence_autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
I want to build a stacked autoencoder. How should I update this code to do that? I tried it myself, and this is my code:
timesteps = 3
input_dim = 1

inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(4)(inputs)
encoded = RepeatVector(timesteps)(encoded)
encoded = LSTM(2)(encoded)
encoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(4, return_sequences=True)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)

sequence_autoencoder = Model(inputs, decoded)
sequence_autoencoder.compile(loss='mean_squared_error', optimizer='Adam')
sequence_autoencoder.fit(x_train, x_train, epochs=100, batch_size=1, shuffle=True)
I want to know: is this code correct, or am I missing something?
Error analysis in Keras: How to list all the wrong instances after evaluating the model with test set?
After I train a model and validate it with a dev (validation) dataset, I apply it to a test dataset, which gives me an evaluation result. Is there any Keras function I can use to check which instances' predictions are wrong, so that I can do error analysis and get insight into what I might need to focus on in the future?
Thanks a lot!
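As far as I know there is no single built-in Keras call for this; the usual pattern is to run model.predict on the test set and compare the result against the labels with numpy. A minimal sketch, with a synthetic probability array standing in for the output of model.predict(x_test):

```python
import numpy as np

# Stand-ins for model.predict(x_test) and the true class labels.
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
y_true = np.array([0, 1, 1, 1])

y_pred = probs.argmax(axis=1)          # predicted class per instance
wrong = np.where(y_pred != y_true)[0]  # indices of misclassified instances

print(wrong)  # [2]: instance 2 was predicted 0 but labelled 1
```

With a real model, `x_test[wrong]` then gives exactly the inputs to inspect for error analysis, and `probs[wrong]` shows how confident the model was when it was wrong.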
Trouble Installing Theano (Bleeding edge version) & Pylearn2
I'm trying to install pylearn2. So far I have installed Anaconda3 using:
Python 3.6 Anaconda Distribution 5.0.1 from https://www.anaconda.com/download/
And have also installed the Bleeding edge developer version of Theano with:
git clone git://github.com/Theano/Theano.git
cd Theano
pip install -e .
git clone git://github.com/lisa-lab/pylearn2.git
This is in accordance with the Pylearn2 dev documentation. I then attempted to execute the setup file:
python setup.py develop
I received the following error, traced back to the Anaconda3\lib\distutils\cygwinccompiler.py file. Has anyone else run into this problem?
C:\Anaconda3\pylearn2>C:\Anaconda3\python.exe setup.py develop
C:\Anaconda3\lib\site-packages\setuptools\dist.py:351: UserWarning: Normalizing '0.1dev' to '0.1.dev0'
  normalized_version,
running develop
running egg_info
writing pylearn2.egg-info\PKG-INFO
writing dependency_links to pylearn2.egg-info\dependency_links.txt
writing requirements to pylearn2.egg-info\requires.txt
writing top-level names to pylearn2.egg-info\top_level.txt
reading manifest file 'pylearn2.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'pylearn2.egg-info\SOURCES.txt'
running build_ext
Traceback (most recent call last):
  File "setup.py", line 86, in <module>
    '': ['*.cu', '*.cuh', '*.h'],
  File "C:\Anaconda3\lib\distutils\core.py", line 148, in setup
    dist.run_commands()
  File "C:\Anaconda3\lib\distutils\dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "C:\Anaconda3\lib\distutils\dist.py", line 974, in run_command
    cmd_obj.run()
  File "C:\Anaconda3\lib\site-packages\setuptools\command\develop.py", line 36, in run
    self.install_for_development()
  File "C:\Anaconda3\lib\site-packages\setuptools\command\develop.py", line 134, in install_for_development
    self.run_command('build_ext')
  File "C:\Anaconda3\lib\distutils\cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "C:\Anaconda3\lib\distutils\dist.py", line 974, in run_command
    cmd_obj.run()
  File "C:\Anaconda3\lib\site-packages\Cython\Distutils\old_build_ext.py", line 185, in run
    _build_ext.build_ext.run(self)
  File "C:\Anaconda3\lib\distutils\command\build_ext.py", line 308, in run
    force=self.force)
  File "C:\Anaconda3\lib\distutils\ccompiler.py", line 1031, in new_compiler
    return klass(None, dry_run, force)
  File "C:\Anaconda3\lib\distutils\cygwinccompiler.py", line 285, in __init__
    CygwinCCompiler.__init__ (self, verbose, dry_run, force)
  File "C:\Anaconda3\lib\distutils\cygwinccompiler.py", line 129, in __init__
    if self.ld_version >= "2.10.90":
TypeError: '>=' not supported between instances of 'NoneType' and 'str'
Keras: Is it okay to define a model inside of a call function of a custom layer?
At first, I tried to define my whole model with the functional API, but I did not know how to connect the outputs of one layer with the next layer, since I have a list of multiple outputs coming from 2 branches plus a list of multiple additional inputs. Here is what I did.
Assume I have a shared weights network that I define as follows:
def create_shared_network():
    img_input = Input(shape=input_shape_img)
    x = Conv2D(64, filter_size, filter_size, subsample=strides,
               input_shape=(64, 64, 3), activation='relu',
               kernel_initializer='glorot_uniform', name='conv1_1_1')(img_input)
    x = Conv2D(64, filter_size, filter_size, subsample=strides, activation='relu',
               kernel_initializer='glorot_uniform', name='conv1_2_1')(x)
    x = MaxPooling2D(pool_size=(2, 2), strides=pool_stride, name='pool1_1')(x)
    x = Conv2D(128, filter_size, filter_size, subsample=strides, activation='relu',
               kernel_initializer='glorot_uniform', name='conv2_1_1')(x)
    # Here come other Conv2D layers ...
    out1 = custom_Layer1(x)
    out2 = custom_Layer2(x, out1)
    model = Model(img_input, [out1, out2], name="Shared_network")
    return model
Then, I define 2 branches with shared weights:
img_input_left = Input(shape=input_shape_img, name="image left")
img_input_right = Input(shape=input_shape_img, name="image right")
shared_model = create_shared_network()
Branch_left = shared_model(img_input_left)
Branch_right = shared_model(img_input_right)
Now, I define another custom layer that takes the outputs of both branches plus other inputs:
input1 = Input(shape=(3, 1), name="vector_left")
input2 = Input(shape=(3, 1), name="vector_right")
input3 = Input(shape=(3, 3), name="matrix_left")
input4 = Input(shape=(3, 3), name="matrix_right")
inputs_of_last_layer = [Branch_left, Branch_left, input1, input3] + [Branch_right, Branch_right, input2, input4]
out_of_last_layer = Last_layer(inputs_of_last_layer)
Now I would like to define a main model, having as input
- img_input_left, input1, input3
- img_input_right, input2, input4
and as output out_of_last_layer.
I tried :
model_main = Model(outputs=out_of_last_layer,
                   inputs=[img_input_left, img_input_right] + inputs_of_last_layer)
But this gives an assertion error, which is why I defined the shared-weights model inside of the last custom layer, so I can directly get its outputs.
Is it feasible to define a model containing custom layers with trainable weights inside of another custom layer with no trainable weights?
Basically this means that I have:
1- A model containing one custom layer named custom_layer_main
2- In this custom_layer_main (in the call function) I define a shared-weights model (model_shared_weights) that operates the same way on 2 input images
3- The output tensors of model_shared_weights are needed for the operations of custom_layer_main
In other words, my main model contains one custom layer (custom_layer_main), but in the call function of this custom layer I define a shared-weights model so as to have two branches operating on two images.
How to compute output shape for a custom layer in Keras?
My question is about the output_shape I should return when I write a custom layer that produces a tensor with a shape different from the input shape. Assume I am implementing a custom layer that takes an input with the following shapes:
model.input_shape : [(None, 3, None, None), (None, 640), (None, 1), (None, 3), (None, 3, None, None), (None, 640), (None, 1), (None, 3)]
The output shape should be:
Can anyone help me with def compute_output_shape(self, input_shape)? What should I return? I tried:
def compute_output_shape(self, input_shape):
    return (10, 1)
But when I do
I still get
(None, 3, None, None)
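For reference, compute_output_shape operates on plain shape tuples, so its logic can be sketched and tested outside Keras. Below is a minimal sketch, assuming a hypothetical layer that maps any list of input shapes to a fixed (batch, 10, 1) output; the key points are that a multi-input layer receives a list of tuples, and that the returned tuple should carry the batch dimension (None) through:

```python
def compute_output_shape(input_shape):
    """Shape inference for a hypothetical custom layer.

    `input_shape` is a list of shape tuples when the layer has multiple
    inputs, and a single tuple otherwise; the returned shape should
    include the (usually None) batch dimension.
    """
    if isinstance(input_shape, list):   # multi-input layer
        batch = input_shape[0][0]
    else:                               # single-input layer
        batch = input_shape[0]
    return (batch, 10, 1)

# Shapes like the first half of the question's model.input_shape:
shapes = [(None, 3, None, None), (None, 640), (None, 1), (None, 3)]
print(compute_output_shape(shapes))  # (None, 10, 1)
```

Returning a bare (10, 1), as in the attempt above, drops the batch dimension, which is one common reason downstream shape inference keeps reporting the input shape instead.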