Comparison between object detection algorithm speeds
I am writing my final degree project and I am having trouble comparing different state-of-the-art algorithms. I am comparing ResNet, MobileNet SSD, YOLOv4, VGG16, and VGG19 used on embedded devices such as the Jetson Nano or Raspberry Pi. All of these networks are used for object detection, but I am unable to find information about which one is faster or usually achieves higher accuracy. I was also looking into whether they can be used on low-performance devices. I would be grateful if someone is able to help me.
Thanks in advance.
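In case it helps, here is a minimal sketch of how inference speed can be measured per model, assuming OpenCV's DNN module and placeholder YOLOv4 config/weight files (any model format OpenCV can read would work the same way):

import time
import numpy as np
import cv2

# Placeholder file names: substitute whichever model is being benchmarked
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")

# Dummy frame standing in for a camera image
frame = np.random.randint(0, 255, (416, 416, 3), dtype=np.uint8)
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)

# Warm-up run so one-time initialization cost is not measured
net.setInput(blob)
net.forward(net.getUnconnectedOutLayersNames())

n_runs = 50
start = time.perf_counter()
for _ in range(n_runs):
    net.setInput(blob)
    net.forward(net.getUnconnectedOutLayersNames())
elapsed = time.perf_counter() - start
print("average: %.1f ms/frame (%.1f FPS)" % (elapsed / n_runs * 1000, n_runs / elapsed))

Running the same loop for each model on the target board itself gives comparable numbers; desktop results do not transfer to the Jetson Nano or Raspberry Pi.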
See also questions close to this topic
- How would I put my own dataset into this code?
I have been looking at a TensorFlow tutorial for unsupervised learning, and I'd like to put in my own dataset; the code currently uses the MNIST dataset. I know how to create my own datasets in TensorFlow, but I have trouble adapting the code used here to my own data. I am pretty new to TensorFlow, and the file paths to my dataset in my project are \data\training and \data\test-val\
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)

# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"

# TensorFlow ≥2.0-preview is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"

# Common imports
import numpy as np
import os

(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full.astype(np.float32) / 255
X_test = X_test.astype(np.float32) / 255
X_train, X_valid = X_train_full[:-5000], X_train_full[-5000:]
y_train, y_valid = y_train_full[:-5000], y_train_full[-5000:]

def rounded_accuracy(y_true, y_pred):
    return keras.metrics.binary_accuracy(tf.round(y_true), tf.round(y_pred))

tf.random.set_seed(42)
np.random.seed(42)

conv_encoder = keras.models.Sequential([
    keras.layers.Reshape([28, 28, 1], input_shape=[28, 28]),
    keras.layers.Conv2D(16, kernel_size=3, padding="SAME", activation="selu"),
    keras.layers.MaxPool2D(pool_size=2),
    keras.layers.Conv2D(32, kernel_size=3, padding="SAME", activation="selu"),
    keras.layers.MaxPool2D(pool_size=2),
    keras.layers.Conv2D(64, kernel_size=3, padding="SAME", activation="selu"),
    keras.layers.MaxPool2D(pool_size=2)
])
conv_decoder = keras.models.Sequential([
    keras.layers.Conv2DTranspose(32, kernel_size=3, strides=2, padding="VALID",
                                 activation="selu", input_shape=[3, 3, 64]),
    keras.layers.Conv2DTranspose(16, kernel_size=3, strides=2, padding="SAME",
                                 activation="selu"),
    keras.layers.Conv2DTranspose(1, kernel_size=3, strides=2, padding="SAME",
                                 activation="sigmoid"),
    keras.layers.Reshape([28, 28])
])
conv_ae = keras.models.Sequential([conv_encoder, conv_decoder])

conv_ae.compile(loss="binary_crossentropy", optimizer=keras.optimizers.SGD(lr=1.0),
                metrics=[rounded_accuracy])
history = conv_ae.fit(X_train, X_train, epochs=5,
                      validation_data=[X_valid, X_valid])
conv_encoder.summary()
conv_decoder.summary()
conv_ae.save("\models")
Do note that I got this code from another StackOverflow answer.
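One hedged sketch of how the MNIST loading lines could be swapped for a directory-based dataset (the path, image size, and grayscale assumption are taken from the question; adjust them to your actual layout):

import tensorflow as tf

# Assumed path; replace with the real location of your training images
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "data/training",
    label_mode=None,          # autoencoder training needs no labels
    color_mode="grayscale",   # the encoder above expects 28x28x1 inputs
    image_size=(28, 28),
    batch_size=32)

# Scale pixels to [0, 1] and drop the channel axis so shapes match the
# Reshape([28, 28, 1], input_shape=[28, 28]) layer; each image is used
# as both input and reconstruction target.
def prep(x):
    x = tf.squeeze(tf.cast(x, tf.float32) / 255.0, axis=-1)
    return x, x

train_ds = train_ds.map(prep)
# conv_ae.fit(train_ds, epochs=5) would then replace the MNIST-based fit call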
- Every time I train my CNN in MATLAB, is it remembering the old weights from the previous time I trained it? Or does it reset them?
So, for example, I have trained a CNN on my data using a learning rate of 0.0003 and 10 epochs, with a mini-batch size of 32. After training it, let's say I get an accuracy of 0.7. Now I want to adjust the learning rate and the mini-batch size and train it again to see how the accuracy changes, using the trainNetwork MATLAB function. My question is: is it training the model from scratch, or is it training from the previously calculated weights? I want it to start from scratch to prevent overfitting every time I adjust the hyperparameters. Sorry if this is obvious; I just want to make sure.
- How to make a chatbot for Discord using Python
I need advice and/or resources on making a chatbot for Discord in Python. I have some knowledge of Python and the Discord API, but I know nothing about chatbots or how to implement them in Python. Can anyone point me to resources about chatbots and artificial intelligence?
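A minimal sketch of a Discord bot using the discord.py library (pip install discord.py); the token and the hard-coded reply are placeholders, and the actual chatbot logic would go where the reply is sent:

import discord

intents = discord.Intents.default()
intents.message_content = True  # required to read message text
client = discord.Client(intents=intents)

@client.event
async def on_ready():
    print(f"Logged in as {client.user}")

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # ignore the bot's own messages
    if message.content.startswith("!hello"):
        # a real chatbot would generate the reply here instead
        await message.channel.send("Hello!")

client.run("YOUR_BOT_TOKEN")  # hypothetical token placeholder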
- Image Background Remover Using Python
I want to make an image background remover using Python, but I do not know how much data and time it will take to reach the accuracy of remove.bg. I am using the U-2-Net AI models (https://github.com/xuebinqin/U-2-Net/). Some results are the same, but not every result is as good as remove.bg's. As a rating, I would give my app 2/5 and remove.bg 4/5. Please tell me how I can achieve accuracy like remove.bg. Any help or suggestions are appreciated. Thanks.
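For reference, a minimal sketch of running U-2-Net through the rembg package (pip install rembg), which wraps it; the file names are placeholders:

from rembg import remove

# Placeholder paths: any input image and output PNG location
with open("input.jpg", "rb") as f:
    input_bytes = f.read()

output_bytes = remove(input_bytes)  # returns the image with background removed

with open("output.png", "wb") as f:
    f.write(output_bytes)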
- How to print all parameters of a Keras model
I am trying to print all 1290 parameters in the dense_1 layer, but model.get_weights()[7] only shows 10 parameters. How could I print all 1290 parameters of the dense_1 layer? And what is the difference between model.get_weights() and model.layer.get_weights()?
>model.get_weights()[7]
array([-2.8552295e-04, -4.3254648e-03, -1.8752701e-04,  2.3482188e-03,
       -3.4848123e-04,  7.6121779e-04, -2.7494309e-06, -1.9068648e-03,
        6.0777756e-04,  1.9550985e-03], dtype=float32)

>model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                 Output Shape              Param #
=================================================================
 conv2d (Conv2D)              (None, 26, 26, 32)        320
 conv2d_1 (Conv2D)            (None, 24, 24, 64)        18496
 max_pooling2d (MaxPooling2D) (None, 12, 12, 64)        0
 dropout (Dropout)            (None, 12, 12, 64)        0
 flatten (Flatten)            (None, 9216)              0
 dense (Dense)                (None, 128)               1179776
 dropout_1 (Dropout)          (None, 128)               0
 dense_1 (Dense)              (None, 10)                1290
=================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0
_________________________________________________________________
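A short sketch of what seems to be going on, assuming model is the trained network from the summary above: model.get_weights() flattens every layer's arrays into one list, so index 7 is only dense_1's 10-value bias; fetching the layer by name returns its kernel and bias together, which is where the 1290 parameters (128×10 + 10) live:

import numpy as np

# Index 7 of model.get_weights() is dense_1's bias alone; the layer's
# full parameter set is its kernel (128x10) plus its bias (10) = 1290.
kernel, bias = model.get_layer("dense_1").get_weights()
print(kernel.shape, bias.shape)   # (128, 10) (10,)
print(kernel.size + bias.size)    # 1290

np.set_printoptions(threshold=np.inf)  # disable truncation when printing
print(kernel)
print(bias)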
- How to clone a disk with both Windows and macOS partitions to a new one
Some years ago I created a dual-boot SSD with Windows (BOOTCAMP) and macOS (Macintosh HD). Since then I have forgotten how I did it, and now I need to migrate from the old disk to a new one. I have a Samsung SSD (256 GB) with macOS and Windows 10 on board (128 GB for macOS and 128 GB for Windows 10). I want to transfer both systems from the Samsung disk to a new Gigabyte disk (480 GB), and I want to increase the Windows space. So the result I am hoping for should look like this: macOS 128 GB, Windows 10 352 GB. I am using AOMEI Backupper Professional on a different Windows PC. But when I copy the whole Samsung disk to the Gigabyte disk in one go and increase the Windows partition size, macOS becomes ready to use, but the Windows system does not (in the macOS Disk Utility I still see both volumes, but the Windows volume is not selectable in [alt] boot mode).
How can I clone my working Windows 10 too? (I don't use a clean Bootcamp Assistant installation of Windows because I lose the audio connection in Windows after a Bootcamp Assistant installation.)
P.S. I hope there is a free solution, because I don't want to pay $50 for the WinClone utility for macOS, which I'm not 100% sure would work.
Best wishes, Mike
- Can I change Storage Type from HDD to SSD on Cloud SQL after creating an instance? (GCP)
- How to use Optane Memory with Intel 12th Gen CPU?
I found an official answer at "Intel® Optane™ Memory Series Not Supported on 12th Generation Intel® Processors and Related Platforms", but Intel® CAS for Windows can also be used for free, with some bugs. Are there any workarounds to use the official Optane software?