TensorFlow object detection API giving false detections with 99% confidence
I trained a model to detect pistols with the TensorFlow object detection API, using a MobileNet model. After running for 20k steps I tried to test it, and it shows 99% confidence on every image, even images that do not contain pistols. Why is this happening? It also predicts a box at the same place in every image; I have added an image of a detection here. The dataset was around 3k labeled images.
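One thing worth checking first is whether the 99% figure is really accuracy, or just the top detection score being drawn unconditionally: the API's visualization utilities draw boxes regardless of confidence unless a score threshold is applied. A minimal numpy sketch of score filtering, with toy values standing in for the `detection_scores` / `detection_boxes` arrays the API returns:

```python
import numpy as np

# Toy stand-ins for the API's output dict entries: scores sorted descending,
# boxes as [ymin, xmin, ymax, xmax] normalized to [0, 1].
detection_scores = np.array([0.99, 0.42, 0.07])
detection_boxes = np.array([
    [0.1, 0.1, 0.5, 0.5],
    [0.2, 0.6, 0.4, 0.9],
    [0.0, 0.0, 1.0, 1.0],
])

def filter_detections(scores, boxes, min_score=0.5):
    """Keep only detections whose confidence reaches min_score."""
    keep = scores >= min_score
    return scores[keep], boxes[keep]

kept_scores, kept_boxes = filter_detections(detection_scores, detection_boxes)
# Only the single high-confidence detection survives the threshold.
```

If every image still shows a 99% box after thresholding, the problem is in the model or data (e.g. label errors or too few negatives), not in the visualization.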
See also questions close to this topic
tf.map_fn not returning same shape
I made a call to result = tf.map_fn(func, input_tensor) with an input tensor of shape (100, 80). func returns a tensor with shape (80,). I expected result to end up with shape (100, 80), but instead it became (8000,). Is there a way to make it retain the input tensor shape of (100, 80)?
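tf.map_fn stacks the per-row results of func along a new leading axis, so a (100, 80) input with an (80,)-shaped func output should come back as (100, 80); a flat (8000,) result suggests the per-row outputs were concatenated or flattened somewhere. A numpy analogue of the expected behavior, plus the reshape workaround — a sketch of the shape semantics, not a TensorFlow fix:

```python
import numpy as np

x = np.arange(100 * 80, dtype=np.float32).reshape(100, 80)

def func(row):
    # Per-row transform returning shape (80,), like the func given to tf.map_fn.
    return row * 2.0

# tf.map_fn maps func over axis 0 and stacks the results,
# which is what np.stack does here:
result = np.stack([func(row) for row in x])

# If a call hands back a flattened (8000,) array instead,
# reshaping recovers the intended layout:
flat = result.reshape(-1)
restored = flat.reshape(100, 80)
```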
Regarding the ValueError from outputs, final_state = tf.contrib.rnn.static_rnn(cell, lstm_in, dtype=tf.float32, initial_state=initial_state)
I am trying to study the code of an LSTM implementation, but I get the following error message, which is directly related to
outputs, final_state = tf.contrib.rnn.static_rnn(cell, lstm_in, dtype=tf.float32, initial_state = initial_state)
What does this error message indicate, and how can I correct it?
ValueError: Attempt to reuse RNNCell <tensorflow.contrib.rnn.python.ops.core_rnn_cell_impl.BasicLSTMCell object at 0x2b1713ddb310> with a different variable scope than its first use. First use of cell was with scope 'rnn/multi_rnn_cell/cell_0/basic_lstm_cell', this attempt is with scope 'rnn/multi_rnn_cell/cell_1/basic_lstm_cell'. Please create a new instance of the cell if you would like it to use a different set of weights. If before you were using: MultiRNNCell([BasicLSTMCell(...)] * num_layers), change to: MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)]). If before you were using the same cell instance as both the forward and reverse cell of a bidirectional RNN, simply create two instances (one for forward, one for reverse). In May 2017, we will start transitioning this cell's behavior to use existing stored weights, if any, when it is called with scope=None (which can lead to silent model degradation, so this error will remain until then.)
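The error message itself spells out the fix. The underlying Python pitfall — that [cell] * num_layers repeats one object rather than creating num_layers separate cells — can be demonstrated without TensorFlow:

```python
class Cell:
    """Stand-in for an RNN cell; a real cell owns a set of trainable weights."""
    pass

num_layers = 3

# `[Cell()] * num_layers` repeats ONE instance, so every layer would try to
# reuse the same weights -- this aliasing is what triggers the ValueError:
shared = [Cell()] * num_layers
all_same = all(c is shared[0] for c in shared)   # True: one object, three slots

# The fix quoted in the error message: build a fresh cell per layer.
separate = [Cell() for _ in range(num_layers)]
all_distinct = len({id(c) for c in separate}) == num_layers   # True
```

So in the LSTM code, replacing MultiRNNCell([BasicLSTMCell(...)] * num_layers) with MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)]) gives each layer its own cell and its own variable scope.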
Why does Keras get stuck at the first epoch?
I'm using an LSTM RNN to do sentiment analysis on the IMDB movie review data. After tokenization and vectorization, each review is converted into a vector with 20086 components, and the total number of reviews is 1238. The input training data for the LSTM is therefore a 1238x20086 matrix. Here is the code:
reviews = []
with open("./scaledata/Steve+Rhodes/subj.Steve+Rhodes") as f:
    for line in f:
        reviews.append(line)
print(len(reviews))

labels = []
with open("./scaledata/Steve+Rhodes/label.4class.Steve+Rhodes") as f:
    for line in f:
        if line is None:
            continue
        labels.append(int(line.strip('\n')))

doc_terms_train, doc_terms_test, y_train, y_test \
    = train_test_split(reviews, labels, test_size=0.3)

# review2vector -- transforming a review into a vector
def review2vector_tfidf(range):
    vectorizer = TfidfVectorizer(max_features=40000, ngram_range=range,
                                 sublinear_tf=True, binary=False,
                                 decode_error='ignore',
                                 stop_words='english')  # ngram_range=(1, 3)
    x_train = vectorizer.fit_transform(doc_terms_train)
    x_test = vectorizer.fit_transform(doc_terms_test)
    return x_train, x_test

x_train_uni, x_test_uni = review2vector_tfidf((1, 1))

# LSTM
print "fitting LSTM ..."
model = Sequential()
model.add(Embedding(x_train_uni.shape, 128))
model.add(LSTM(128, dropout_W=0.2, dropout_U=0.2))
model.add(Dense(NUM_LABEL))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.fit(x_train_uni, y_train, nb_epoch=1, batch_size=256, verbose=1)
I can see from the messages in the console that the fitting process has already started. However, nothing is printed after the message Epoch 1/1, and usually there should be messages about the ETA, acc and loss. Does the program get stuck because the input data matrix is too large? Or could there be some other reason?
fitting LSTM ... Epoch 1/1
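One possible cause, offered as a guess rather than a confirmed diagnosis: TfidfVectorizer.fit_transform returns a scipy sparse matrix, not a numpy array, and model.fit in Keras versions of that era expected dense input, which could leave the first epoch apparently hanging. Converting before fitting is a cheap thing to try (memory permitting):

```python
import numpy as np
from scipy.sparse import csr_matrix

# TfidfVectorizer.fit_transform returns a scipy CSR matrix like this toy one:
x_sparse = csr_matrix(np.eye(4, dtype=np.float32))

# Convert to a dense numpy array before handing it to model.fit:
x_dense = x_sparse.toarray()
```

Two other things worth checking in the code above: Embedding expects an integer vocabulary size, not the tuple x_train_uni.shape, and categorical_crossentropy expects one-hot labels rather than raw integers.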
Does tensorflow's object detection api support multi-class multi-label detection?
After hours of research, I could not find any example of multi-label prediction with the object detection API. Basically, I would like to predict more than one label per instance in an image, as in the image shown below:
I would like to predict clothing categories, but also the attributes such as color and pattern.
From my understanding, I need to attach an additional classification head for each attribute to the 2nd-stage ROI feature map, and sum each attribute's loss. However, I am having trouble implementing this in the object detection code. Can somebody give me some tips on which functions I should start modifying? Thank you.
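The summed-loss idea described above can be sketched framework-free. Everything below is illustrative: the head names and logit values are hypothetical, and softmax_xent is a toy stand-in for the API's loss functions:

```python
import numpy as np

def softmax_xent(logits, label):
    """Softmax cross-entropy of one example against an integer label."""
    z = logits - logits.max()                 # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

# Hypothetical per-ROI logits: one head per attribute.
category_logits = np.array([2.0, 0.5, -1.0])  # e.g. clothing category
color_logits = np.array([0.1, 1.5])           # e.g. color attribute
pattern_logits = np.array([0.3, 0.3, 0.9])    # e.g. pattern attribute

losses = [
    softmax_xent(category_logits, 0),
    softmax_xent(color_logits, 1),
    softmax_xent(pattern_logits, 2),
]
total_loss = sum(losses)  # the combined multi-task objective to minimize
```

Each head reads the same ROI features, and the optimizer sees a single scalar, so gradients from every attribute flow back into the shared feature map.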
Shared (Multi-Task) Learning using TensorFlow Detection Models (Zoo)
I am using a TensorFlow Detection Zoo model and it works perfectly for me, but now I want to use shared layers to do more than one task. How can I add that functionality (i.e. a shared layer)?
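The shared-layer idea itself is small: one trunk computes features that several task heads consume. A numpy sketch with made-up shapes (nothing here comes from the Detection Zoo code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk: one feature extractor feeding two task-specific heads.
W_shared = rng.normal(size=(8, 4))
W_task_a = rng.normal(size=(4, 3))   # e.g. a detection-class head
W_task_b = rng.normal(size=(4, 2))   # e.g. an auxiliary-task head

x = rng.normal(size=(5, 8))                  # batch of 5 inputs
features = np.maximum(x @ W_shared, 0.0)     # shared ReLU features

out_a = features @ W_task_a  # task A consumes the SAME features...
out_b = features @ W_task_b  # ...as task B: only the heads differ
```

In the detection API the shared trunk is the backbone/feature extractor; adding a task means attaching another head to those features and adding its loss to the total, rather than duplicating the backbone.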
Use of Inception v3 model for Tensorflow Object Detection API
I have used the Tensorflow Object Detection API successfully by using
Now I need to use the Inception v3 model instead of the MobileNet model.
Can I use it with the Tensorflow Object Detection API? How can I change the config file, and where can I find it?
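For what it's worth, the pipeline config selects the backbone via the feature_extractor.type string, which must exactly match a name registered in the API's model builder. A sketch of the relevant fragment, using ssd_inception_v2 (a type that ships in the sample configs under object_detection/samples/configs/); whether an Inception v3 extractor is registered depends on your checkout of the API:

```
# Fragment of a pipeline.config (protobuf text format). Only the
# feature_extractor type is shown; the rest of the config is elided.
model {
  ssd {
    num_classes: 1
    feature_extractor {
      type: "ssd_inception_v2"
    }
  }
}
```

The sample configs directory is the place to find a full template; swapping backbones generally means starting from the sample config for the new backbone and copying over your dataset paths and num_classes, since anchor and preprocessing settings differ between extractors.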