How do I load pretrained weights as filters between layers?
I have built convolutional and max-pooling layers, and I only need forward propagation, so I decided to use weights that have already been trained as my filters.
How can I load the weights into my model, given that my data has the shape (batch_size, height, width, channel)?
There are weights available from other models, such as YOLOV3.weights. I just need to load them and apply them to my layers.
How do I do that?
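A minimal numpy sketch of the forward-only case, assuming the weights have already been extracted to an array of shape (kh, kw, in_channels, out_channels); this is not a parser for the YOLOV3.weights binary format, and the `np.load` stand-in and filter shape are assumptions:

```python
import numpy as np

def conv2d_forward(x, w, stride=1):
    """Naive valid-padding 2D convolution for forward-only inference.
    x: input of shape (height, width, in_channels)
    w: pretrained filters of shape (kh, kw, in_channels, out_channels)
    """
    kh, kw, in_c, out_c = w.shape
    h, wd, _ = x.shape
    oh = (h - kh) // stride + 1
    ow = (wd - kw) // stride + 1
    out = np.zeros((oh, ow, out_c))
    for i in range(oh):
        for j in range(ow):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw, :]
            # sum of elementwise products of the patch with each filter
            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

# Stand-in for loading trained filters, e.g. w = np.load("weights.npy")
w = np.ones((3, 3, 1, 2))
x = np.arange(25, dtype=float).reshape(5, 5, 1)
y = conv2d_forward(x, w)
print(y.shape)   # (3, 3, 2)
```

In a framework like Keras the same idea is `layer.set_weights([...])` on a layer with a matching filter shape; the point is that the loaded array just has to agree with the layer's kernel dimensions.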
See also questions close to this topic
how does tensorflow calculate gradients *efficiently* from input to loss?
To calculate the derivative of an output layer of size N w.r.t. an input of size M, we need a Jacobian matrix of size M x N. To calculate a complete gradient from loss to inputs using the chain rule, we would need a large number of such Jacobians stored in memory.
I assume that TensorFlow does not calculate a complete Jacobian matrix for each step of the graph, but does something more efficient. How does it do it?
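The key idea can be sketched without TensorFlow: reverse-mode autodiff propagates a single gradient vector backward through a chain of vector-Jacobian products, so no full M x N Jacobian of the composition is ever materialized. A minimal numpy illustration, using linear layers so each layer's Jacobian is just its weight matrix:

```python
import numpy as np

# Two stacked linear maps: x -> W1 x -> W2 (W1 x) -> scalar loss (sum)
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # Jacobian of layer 1 is W1 (4x3)
W2 = rng.standard_normal((2, 4))   # Jacobian of layer 2 is W2 (2x4)
x = rng.standard_normal(3)

# Forward pass
h = W1 @ x
y = W2 @ h
loss = y.sum()

# Reverse mode: push one gradient *vector* back through each layer.
# Each step is a vector-Jacobian product; the full Jacobian of the
# composed function is never stored.
g = np.ones(2)     # d(loss)/dy
g = W2.T @ g       # d(loss)/dh
g = W1.T @ g       # d(loss)/dx

# Check against the explicitly composed Jacobian (feasible only for tiny nets)
full_jacobian = W2 @ W1            # 2x3
assert np.allclose(g, full_jacobian.T @ np.ones(2))
```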
Lambda use in Tensorflow
prediction_input_fn = lambda: my_input_fn(my_feature, targets, num_epochs=1, shuffle=False)
What is the purpose of using lambda here? I am taking the machine-learning crash course on TensorFlow provided by Google. There I encountered this use of lambda, but could not understand it.
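A minimal sketch of what the lambda buys you, with `my_input_fn` replaced by a hypothetical stub: the Estimator API wants a zero-argument callable it can invoke *later* (possibly more than once), and the lambda turns a call that needs arguments into exactly that, deferring execution.

```python
calls = []

def my_input_fn(feature, targets, num_epochs=1, shuffle=False):
    # hypothetical stand-in for the course's real input function
    calls.append((feature, targets, num_epochs, shuffle))
    return "dataset"

# Without the lambda, my_input_fn would run immediately on this line.
prediction_input_fn = lambda: my_input_fn("f", "t", num_epochs=1, shuffle=False)

assert calls == []                  # nothing has executed yet
result = prediction_input_fn()      # the framework calls it when it is ready
assert result == "dataset" and len(calls) == 1
```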
Using LSTM in Keras for multiclass classification of unknown feature vectors
I am a rookie at LSTMs with Keras and have tried the solution from Keras LSTM multiclass classification for using an LSTM for multiclass classification. I have 768-dimensional feature vectors in 10 classes and would like to use an LSTM to classify them. Here is what I've tried:
def do_experiment(train_file, validation_file, test_file, experiment_number, optimizer_name):
    def scheduler(epoch):
        if epoch % 4 == 0 and epoch:
            K.set_value(model.optimizer.lr, K.get_value(model.optimizer.lr) * 0.9)
            print(K.get_value(model.optimizer.lr))
        return K.get_value(model.optimizer.lr)

    change_lr = LearningRateScheduler(scheduler)
    early_stopper = EarlyStopping(min_delta=0.001, patience=15)
    csv_logger = CSVLogger('lstm.csv')
    weights_file = "trained_model/" + str(experiment_number) + "-weights.h5"
    model_checkpoint = ModelCheckpoint(weights_file, monitor="val_loss", save_best_only=True,
                                       save_weights_only=True, mode='auto')

    x_train, y_train, groundtruth_train = du.loaddata(train_file, experiment_number)
    x_validation, y_validation, groundtruth_validation = du.loaddata(validation_file, experiment_number)

    batch_size = 32
    nb_classes = 10
    nb_epoch = 100

    model = Sequential()
    model.add(Embedding(5000, 32, input_length=768))
    model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer_name, metrics=['accuracy'])
    model.fit(x_train, y_train, batch_size=batch_size, epochs=nb_epoch,
              validation_data=(x_validation, y_validation), shuffle=True,
              callbacks=[change_lr, early_stopper, csv_logger, model_checkpoint])
But whenever I run this code I have the following error:
File "/usr/lib64/python2.7/site-packages/keras/models.py", line 960, in fit
    validation_steps=validation_steps)
File "/usr/lib64/python2.7/site-packages/keras/engine/training.py", line 1581, in fit
    batch_size=batch_size)
File "/usr/lib64/python2.7/site-packages/keras/engine/training.py", line 1418, in _standardize_user_data
    exception_prefix='target')
File "/usr/lib64/python2.7/site-packages/keras/engine/training.py", line 153, in _standardize_input_data
    str(array.shape))
ValueError: Error when checking target: expected dense_1 to have shape (None, 1) but got array with shape (61171, 10)
I believe I did something very dumb here but I cannot identify it. What should I change in this code to classify the 768-dimensional vectors?
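Judging from the traceback alone, this looks like a label-format mismatch rather than an LSTM problem: `sparse_categorical_crossentropy` expects integer class indices, while the targets here are one-hot encoded with shape (61171, 10). A minimal numpy sketch of the two compatible fixes (the shapes are inferred from the error message):

```python
import numpy as np

# One-hot targets, as in the error message: shape (n_samples, 10)
y_onehot = np.eye(10)[[3, 7, 0, 9]]        # 4 toy samples
assert y_onehot.shape == (4, 10)

# Fix 1: keep loss='sparse_categorical_crossentropy' and pass integer labels
y_sparse = np.argmax(y_onehot, axis=1)
print(y_sparse)                            # [3 7 0 9]

# Fix 2: keep the one-hot targets unchanged and switch the compile() call
# to loss='categorical_crossentropy', which expects (n_samples, n_classes)
```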
OpenCV Circular Buffer usage in a realtime multithreaded application
This is the producer-consumer problem. I have used a ring buffer in Python to achieve the functionality I wanted, but I feel this is inefficient. Is there a way to use a buffer directly in OpenCV?
The writer function enqueues into the ring buffer and the reader function dequeues from it. This requires a separate process to handle the buffer operations, which I think could be avoided if I used the OpenCV functions correctly.
From OpenCV's source code I see we can set CV_CAP_PROP_BUFFERSIZE. I just don't know how to achieve this without running a separate thread or process that queues the images in a separate ring buffer.
Any help would be appreciated :)
import multiprocessing
import numpy
import numpy.matlib
import ringbuffer
import cv2

def writer(ring):
    cam = cv2.VideoCapture(0)
    ret = cam.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
    ret = cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
    for i in range(100):  # while loop
        ret, m = cam.read()
        print(m.shape)
        x = numpy.ctypeslib.as_ctypes(m)
        # print(x.shape)
        try:
            ring.try_write(x)
        except ringbuffer.WaitingForReaderError:
            print('Reader is too slow, dropping %r' % x)
            continue
        if i and i % 100 == 0:
            print('Wrote %d so far' % i)
    ring.writer_done()
    print('Writer is done')

def reader(ring, pointer, ind):
    print("in reader")
    while True:
        try:
            data = ring.blocking_read(pointer)
        except ringbuffer.WriterFinishedError:
            # print("Error")
            return
        x = numpy.frombuffer(data)
        x.shape = (240, 320, 3)
        m = numpy.matlib.asmatrix(x)
        print("read", m.shape)
        # cv2.imshow("read %d" % (ind), m)
    print('Reader %r is done' % pointer)

def main():
    ring = ringbuffer.RingBuffer(slot_bytes=1000000, slot_count=50)
    ring.new_writer()
    processes = [
        multiprocessing.Process(target=writer, args=(ring,)),
    ]
    for i in range(4):
        processes.append(multiprocessing.Process(
            target=reader, args=(ring, ring.new_reader(), i)))
    for p in processes:
        p.start()
    for p in processes:
        p.join(timeout=50)
        assert not p.is_alive()
        assert p.exitcode == 0

if __name__ == '__main__':
    main()
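For comparison, the enqueue/dequeue-with-drop pattern in the code above can be expressed with the standard library's bounded `queue.Queue`, which handles the locking for you. This is a hedged sketch of the pattern, not OpenCV's internal capture buffer:

```python
import queue

# Bounded buffer: at most 2 frames in flight, mimicking a small capture buffer
frames = queue.Queue(maxsize=2)
dropped = []

def try_write(frame):
    """Writer side: drop the frame when the reader is too slow."""
    try:
        frames.put_nowait(frame)
    except queue.Full:
        dropped.append(frame)

for i in range(5):          # pretend these are 5 captured frames
    try_write(i)

# Reader side: drain whatever made it into the buffer
received = []
while not frames.empty():
    received.append(frames.get_nowait())

print(received, dropped)    # [0, 1] [2, 3, 4]
```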
Point cloud registration in PCL using precomputed features
I'm planning to do a registration between two clouds using RANSAC in PCL, and I already have features computed by another program. Can I somehow load these features into PCL and use them for the registration?
What kind of algorithm can I use to find similarity between two fabrics / group similar fabrics?
I have a database of fabrics. I want to find similarity between the fabrics, or cluster them based on their textural features. I have tried K-Means clustering with LBP and HOG features, but with no success. Can you suggest some better approaches?
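As a baseline to debug against, here is a hand-rolled sketch of the LBP-histogram pipeline in plain numpy (in practice `skimage.feature.local_binary_pattern` is the usual choice; the neighbour ordering and cosine similarity here are illustrative assumptions):

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour local binary pattern on a uint8 grayscale image."""
    c = image[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    # 8 neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy: image.shape[0] - 1 + dy,
                          1 + dx: image.shape[1] - 1 + dx]
        codes |= (neighbour >= c).astype(np.uint8) << bit
    return codes

def texture_descriptor(image):
    # Normalized LBP-code histogram: a translation-invariant texture signature
    hist, _ = np.histogram(lbp_8(image), bins=256, range=(0, 256))
    return hist / hist.sum()

def similarity(a, b):
    # Cosine similarity between descriptors; 1.0 means identical histograms
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
fabric = rng.integers(0, 256, (32, 32)).astype(np.uint8)
d1 = texture_descriptor(fabric)
d2 = texture_descriptor(np.roll(fabric, 5, axis=1))  # shifted copy, same texture
print(similarity(d1, d2))
```

If histograms like these cluster poorly with K-Means, comparing them with a histogram-aware distance (chi-squared, or cosine as above) before clustering is often the first thing worth checking.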
keras - flow_from_directory function - target_size parameter
Keras has a function called flow_from_directory, and one of its parameters is called target_size. Here is the explanation for it:
target_size: Tuple of integers (height, width), default: (256, 256). The dimensions to which all images found will be resized.
The thing that is unclear to me is whether it crops the original image to a 256x256 matrix (in which case we do not keep the entire image) or reduces the resolution of the image (while still showing us the entire image).
If it is, let's say, just reducing the resolution: assume that I have x-ray images of size 1024x1024 each (for breast cancer detection), and I want to apply transfer learning with a pretrained convolutional neural network that only takes 224x224 input images. Will I not be losing important data/information when I reduce the size (and resolution) of the image from 1024x1024 down to 224x224? Isn't there any such risk?
Thank you in advance!
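The difference between the two behaviours in question can be sketched in plain numpy (this is only an illustration of cropping versus downsampling, not Keras's actual implementation, which resizes via an image library rather than with array slicing):

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"

# Cropping: keep only the top-left 2x2 corner; half the scene is discarded
crop = img[:2, :2]

# Downsampling (nearest-neighbour): every region is still represented,
# just at lower resolution; the whole scene survives
small = img[::2, ::2]

print(crop)    # [[0. 1.]
               #  [4. 5.]]
print(small)   # [[ 0.  2.]
               #  [ 8. 10.]]
```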
Deep learning for automatic text dating
In general, I have to use deep learning in Python for automatic text dating. I should use RNNs, CNNs, etc. and compare their accuracy. The texts vary a great deal; there are articles dating back to 1830.
Do you have some advice for me? How to start, which libraries should I use (PyTorch, maybe Tensorflow)? I will be grateful for every single tutorial/article which you think is worth reading.
How do I implement masked convolution in tensorflow?
I had to implement a special variant of GRU once so I was able to get the code for GRU from tensorflow's github and modify the pieces that I needed.
I want to implement a masked CNN, in which the convolution filter is multiplied by a mask before the elementwise sum (the dot product of the filter with a given input position) is computed. I cannot find the code for tf.nn.conv2d; it seems it is not implemented in Python but is built from C++ using Bazel. Do I have to implement masked convolution from scratch?
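For what it's worth, a masked convolution of this kind usually does not require touching tf.nn.conv2d's C++ kernels: multiplying the filter by the mask once, up front, and passing `w * mask` to the regular convolution gives the same result as masking inside every per-position dot product. A minimal numpy sketch of that equivalence (single channel, valid padding, for clarity):

```python
import numpy as np

def conv2d_valid(x, w):
    """Naive single-channel valid convolution (cross-correlation)."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))
w = rng.standard_normal((3, 3))

# PixelCNN-style mask: zero out the centre and everything after it
mask = np.array([[1, 1, 1],
                 [1, 0, 0],
                 [0, 0, 0]], dtype=float)

# Masking the filter once is equivalent to masking inside every
# per-position dot product.
masked_out = conv2d_valid(x, w * mask)
per_position = np.array([[np.sum(x[i:i + 3, j:j + 3] * w * mask)
                          for j in range(3)] for i in range(3)])
assert np.allclose(masked_out, per_position)
```

In TensorFlow the same idea would be `tf.nn.conv2d(x, kernel * mask, ...)`, with the mask held constant so gradients only flow to the unmasked weights.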