TensorFlow Object Detection API giving false detections with 99% confidence
I trained a model to detect pistols with the TensorFlow Object Detection API, using a MobileNet model. After running for 20k steps I tried to test it, and it shows 99% confidence on every image, even images that do not contain pistols. Why is this happening? It also predicts a box at the same place in every image; I have added an image of a detection here. The dataset was around 3k labeled images.
See also questions close to this topic
Python 3 with Tensorflow on Sagemaker
I understand that SageMaker currently does not support Python 3 with TensorFlow (according to this: https://github.com/aws/sagemaker-python-sdk/issues/19).
But is it possible to create your own Docker container with Python 3 and TensorFlow, as explained here? https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb
LSTM-RNN: How to get continuous range output instead of categorical?
I am trying to solve a problem where I have to predict a value within a range for a sentence.
The dataset looks like this:
Index_no | text_sentence | value
01 | yes I like riding a bike. I was 4 when I learned to ride a bike. the colors of my bike is yellow and black. | 4.2311
02 | i like riding my bike, i learnt riding a bike when i was 8 or 9 years old, my bike is sparkling pink with white marks | -2.11
The range of the value is between -7 and 7. Now I am thinking of using an LSTM for the text, but I am confused about the continuous output.
I was thinking about two methods:
1. Normalizing the data to between 0 and 1, and then denormalizing the network's output. Will that work?
2. Using a custom activation function?
Or how else can I get output within a range?
Thanks in advance.
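For what it's worth, both ideas can be sketched in plain Python. The [-7, 7] range and the sample value 4.2311 are taken from the question; the function names are illustrative:

```python
import math

LO, HI = -7.0, 7.0  # target range from the question

def normalize(y):
    """Method 1: map a target in [-7, 7] to [0, 1] for training."""
    return (y - LO) / (HI - LO)

def denormalize(y_norm):
    """Invert the scaling on the network's [0, 1] output."""
    return y_norm * (HI - LO) + LO

def scaled_tanh(x):
    """Method 2: custom output activation, squashes any real x into (-7, 7)."""
    return HI * math.tanh(x)

# Round-trip check for method 1
y = 4.2311
recovered = denormalize(normalize(y))

# Method 2 keeps any raw network output inside the range
bounded = scaled_tanh(1.2)
```

With the scaled tanh as the final layer's activation the raw output is already inside the target range, so no post-processing is needed; with min-max scaling, `denormalize` must be applied to every prediction.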
How to get the value of a tensor? Python
While doing some calculations I end up computing an average_acc. When I try to print it, it outputs: tf.Tensor(0.982349, shape=(), dtype=float32). How do I get the 0.982... value out of it and use it as a normal float?
What I'm trying to do is get a bunch of those in an array and plot some graphs, but for that, I need simple floats as far as I can tell.
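Since the printed form is `tf.Tensor(...)`, the code is running eagerly (TensorFlow 2.x, or TF 1.x with eager execution enabled), so `.numpy()` extracts the value. A minimal sketch with a stand-in tensor (the value is taken from the question):

```python
import tensorflow as tf

# Stand-in for the computed average_acc from the question.
average_acc = tf.constant(0.982349, dtype=tf.float32)

# .numpy() pulls the value out of the tensor; float() makes it a plain
# Python float that can be collected in a list and plotted.
acc_value = float(average_acc.numpy())

accuracies = []
accuracies.append(acc_value)
```

In TF 1.x graph mode (without eager execution) there is no `.numpy()`; there the value is fetched with `sess.run(average_acc)` inside a `tf.Session`.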
Transfer learning in TF Object Detection API: How to train only the last few layers with weights?
I borrowed everything from the TF Model Zoo (the config files and the model files as is), and started training with my own images and annotations, changing nothing in the config file other than the paths. The original config file contains no entry like:

freeze_variables: ".*FeatureExtractor."

I believe that means everything is getting retrained; is this true? When I look at TensorBoard, it looks like the weights are not changing but the biases are changing. How can I make sure that the weights are also trained?
Also, I would like to retrain only the last 2 (or last n layers) layer, just like regular transfer learning. Is this possible with TF Object Detection API?
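For reference, freezing variables in the TF Object Detection API is configured through the `freeze_variables` field of `train_config` in the pipeline config. A sketch of the idea; the regex below is an assumption and must match the actual variable names of the specific model (which can be listed from the checkpoint):

```
train_config {
  # Any variable whose name matches one of these regexes is excluded
  # from training; this pattern is intended to cover the whole feature
  # extractor, so only the remaining layers (e.g. the box predictor)
  # are updated.
  freeze_variables: ".*FeatureExtractor.*"
}
```

As far as the config format goes, there is no direct "train only the last n layers" switch; the same effect is approximated by adding `freeze_variables` patterns that cover everything except those layers.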
Tensorflow Object Detection API (Spark)
How can the TensorFlow Object Detection API be adapted to do parallel processing on images/videos with Apache Spark?
Object Detection API using OpenCV but got wrong color output
I tried to run detection on an image with the TensorFlow Object Detection API.
But matplotlib could not display the image successfully, so I switched to OpenCV to show the result.
I got my result, but the image colour is incorrect: https://i.stack.imgur.com/OYQlA.jpg
Am I missing a color handling step?
I changed the last two parts of the script:
for image_path in TEST_IMAGE_PATHS:
    image = Image.open(image_path)
    image_np = load_image_into_numpy_array(image)
    image_np_expanded = np.expand_dims(image_np, axis=0)
    output_dict = run_inference_for_single_image(image_np, detection_graph)
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        instance_masks=output_dict.get('detection_masks'),
        use_normalized_coordinates=True,
        line_thickness=8)
    # plt.figure(figsize=IMAGE_SIZE)
    # plt.imshow(image_np)
    cv2.imshow('image', image_np)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
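The wrong colors are most likely because the detection pipeline keeps the image in RGB channel order while OpenCV's `imshow` expects BGR. A minimal sketch of the fix, using a single stand-in pixel; reversing the channel axis is equivalent to `cv2.cvtColor(image_np, cv2.COLOR_RGB2BGR)`:

```python
import numpy as np

# Stand-in RGB image: one pure-red pixel (R=255, G=0, B=0).
image_np = np.zeros((1, 1, 3), dtype=np.uint8)
image_np[0, 0] = (255, 0, 0)

# Reverse the channel axis: RGB -> BGR, the order cv2.imshow expects.
image_bgr = image_np[..., ::-1]

# In the question's loop this would be:
#   cv2.imshow('image', cv2.cvtColor(image_np, cv2.COLOR_RGB2BGR))
```

Without the conversion, red and blue are swapped on display, which matches the color shift seen in the linked screenshot.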