Deep Learning C# - Finding object coordinates in an image
I have images of an object, and larger images that contain that object.
Given another image containing the object, I need to find the object's coordinates (x, y).
I want to train a neural network with the Accord.NET library to find it.
Is this possible with only 5 images of the object and 5 images containing it?
If it is possible, please suggest a solution, with a C# example if you have time.
Thanks!
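With only 5 + 5 images, training a deep network from scratch is unlikely to work; classical template matching is a more realistic baseline (Accord.NET's imaging namespace offers tools in this direction, e.g. exhaustive template matching). Below is a language-agnostic sketch of the idea in Python/NumPy — slide the template over the image and return the best-scoring (x, y) — purely as an illustration of the technique, not Accord.NET code:

```python
import numpy as np

def find_object(image, template):
    """Slide `template` over `image` and return the (x, y) of the best
    match, scored by sum of squared differences (SSD; lower is better)."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_xy = float("inf"), (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            ssd = np.sum((patch - template) ** 2)
            if ssd < best:
                best, best_xy = ssd, (x, y)
    return best_xy

# Tiny demo: embed a 2x2 template at (x=3, y=1) in a 6x6 image.
image = np.zeros((6, 6))
template = np.array([[1.0, 2.0], [3.0, 4.0]])
image[1:3, 3:5] = template
print(find_object(image, template))  # -> (3, 1)
```

This brute-force search is O(image size x template size); real libraries use the same score (or normalized cross-correlation, which is robust to lighting changes) with faster implementations.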
See also questions close to this topic

Referencing debug DLLs while building a Release version
I am trying to build a Release version of my VB.NET project. My project references multiple DLLs (I have both Release and Debug versions of them). When I build my project, I set the configuration to Release (obviously), but do I also need to reference the Release DLLs, or does referencing the Debug DLLs work the same? I am simply curious whether this makes any difference.

Data cannot be assigned
Why can't data be read (assigned) in this function? The error occurs on the commented line (Object reference not set to an instance of an object.)
protected static int[][] GetMapFromFile(ref int size)
{
    using (StreamReader sr = new StreamReader(@"C:\Users\doman\OneDrive\Desktop\Antras semestras\Programavimas\Laboras1\Laboras1\Duomenys.txt"))
    {
        string skyr = " ,.;";
        size = Convert.ToInt32(sr.ReadLine());
        int[][] map = new int[size][];
        for (int i = 0; i < size; i++)
        {
            string line = sr.ReadLine();
            string[] values = line.Split(skyr.ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
            // NOTE: the inner array map[i] is never allocated (a jagged array needs
            // e.g. `map[i] = new int[values.Length];` here), so the assignment
            // below dereferences null and throws.
            for (int j = 0; j < values.Length; j++)
            {
                map[i][j] = Convert.ToInt32(values[j]); // Error here
                Console.Write(map[i][j]);
            }
            Console.WriteLine();
        }
        return map;
    }
}
My data file
5
0 1 3 4 2
1 0 4 2 6
3 4 0 7 1
4 2 7 0 7
2 6 1 7 0 
Drag and Swap two objects using Touch in Unity
The game has 4 objects with sprites side by side. I want the game to swap the positions of two objects when I drag one object and move it towards the second object (the dragged object should move to the second object's position, and the second object should move to the dragged object's position). As of now, I can drag one object and move it towards the second object, but I am not able to swap their positions. Can anyone suggest how to do it?

Going back to original image from image edges
If we read an image X and apply the simplest edge detector F = [1 0 −1] to it to obtain Y, is it possible to retrieve X from Y? Given that Y_n = X_(n−1) − X_(n+1), can you express X in terms of Y? Can we design a 3x3 filter G that performs the opposite of F?
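A quick way to see that F cannot be undone exactly: F = [1, 0, −1] removes the constant (DC) component, so two signals differing by a constant produce identical Y, and X is not recoverable from Y alone. A small NumPy check (sketch):

```python
import numpy as np

X1 = np.array([2.0, 5.0, 1.0, 4.0, 3.0])
X2 = X1 + 10.0                  # the same signal plus a constant offset

# Apply F = [1, 0, -1]:  Y_n = X_{n-1} - X_{n+1}
Y1 = X1[:-2] - X1[2:]
Y2 = X2[:-2] - X2[2:]

print(np.array_equal(Y1, Y2))   # True: two different X give the same Y
```

The same argument applies to any candidate 3x3 inverse G: since F's frequency response is zero at DC, no finite filter can recover the lost component, only approximate X up to an unknown constant.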

How to remove high frequency regions from a canny image?
Assume I have a canny detected image, for example this one I stole from the internet:
I need to detect and remove regions with high-frequency features, so this image should become something like:
(Excuse the GIMP skills)
In other words, areas containing high variability of features should just go to black.
To my understanding I could use something like fourier transforms to filter out these kinds of regions, but I am not sure how or if this is the best approach.
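Besides a Fourier approach, a simple spatial-domain alternative (a sketch, assuming the edge map is binary 0/1): measure the local edge density in a sliding window and blacken pixels where the density exceeds a threshold.

```python
import numpy as np

def suppress_dense_regions(edges, win=5, thresh=0.4):
    """edges: binary (0/1) edge map. Blacken pixels whose local edge
    density over a win x win neighbourhood exceeds `thresh`."""
    h, w = edges.shape
    p = win // 2
    padded = np.pad(edges.astype(float), p)  # zero-pad so every pixel has a full window
    density = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            density[y, x] = padded[y:y + win, x:x + win].mean()
    out = edges.copy()
    out[density > thresh] = 0
    return out

# Demo: a dense 4x4 block of edges is removed, a lone edge pixel survives.
edges = np.zeros((10, 10), dtype=int)
edges[1:5, 1:5] = 1      # high-frequency clutter
edges[8, 8] = 1          # isolated edge
cleaned = suppress_dense_regions(edges)
print(cleaned[2, 2], cleaned[8, 8])  # -> 0 1
```

The double loop is a readability choice; in practice the same density map is one box-filter convolution (e.g. `scipy.ndimage.uniform_filter`), and `win`/`thresh` would need tuning per image.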

Memory keeps on growing publishing images via pyZMQ publisher socket
I am trying to code a publisher that takes images from a simulation through APIs and publishes them over a TCP socket, so that multiple processes, not necessarily on the same machine, can access the images by subscribing to the topic. Everything seems to work, since I can see the streamed images via the subscriber. The only issue is that the memory used by the publisher keeps growing indefinitely, both with the subscriber active and without. Here is how the publisher is used:
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://127.0.0.1:5556")
topic = "img_rgba"
time.sleep(2)
while(1):
    [...]
    socket.send_string(topic, zmq.SNDMORE)
    socket.send_pyobj(image)
Do you have any clue?
Thanks a lot, Marco

CUDA_ERROR_OUT_OF_MEMORY tensorflow
As part of my study project, I am trying to train a neural network that performs segmentation on images (based on FCN), and during execution I received the following error message:
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1,67,1066,718] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Note that I have fixed the batch_size to 1, and I get the same error even when I try different image sizes; I also used just 1 training image instead of 1600 and still get the same error! Could you help me solve this problem? What is it really about?
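Back-of-the-envelope arithmetic shows why batch_size = 1 may not be enough: the single tensor named in the message, shape [1, 67, 1066, 718] in float32, already needs roughly 196 MiB, and a network keeps many such activation maps (plus gradients) alive at once. Downscaling or cropping the input images before the network is the usual fix.

```python
# Size of one float32 tensor of shape [1, 67, 1066, 718]
elements = 1 * 67 * 1066 * 718
bytes_needed = elements * 4                 # float32 = 4 bytes per element
print(elements, bytes_needed / 2**20)       # element count, size in MiB (~195.6)
```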

Per image normalization vs Overall dataset normalization
I have a dataset of 1000 images and am using a CNN for finger-gesture recognition. Should I normalize each image by its own mean, or by the mean of the entire dataset? Also, which Python library should I use for this?
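Plain NumPy is enough for either scheme; a minimal sketch of both, assuming a dataset array of shape (N, H, W):

```python
import numpy as np

rng = np.random.default_rng(0)
dataset = rng.random((1000, 32, 32)).astype(np.float32)  # stand-in images

# Per-image normalization: each image uses its own mean/std.
per_image = (dataset - dataset.mean(axis=(1, 2), keepdims=True)) \
            / (dataset.std(axis=(1, 2), keepdims=True) + 1e-8)

# Dataset normalization: one mean/std computed over all images.
dataset_norm = (dataset - dataset.mean()) / (dataset.std() + 1e-8)

# Under per-image normalization every image now has ~zero mean.
print(np.allclose(per_image.mean(axis=(1, 2)), 0.0, atol=1e-3))
```

For CNNs, dataset-level statistics are the more common choice (as in typical ImageNet pipelines), since the same transform is applied to every image at train and test time; per-image normalization can help when lighting varies wildly between images.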

Softmax not resulting in a probability distribution in Python Implementation
I have a simple softmax implementation:
softmax = np.exp(x) / np.sum(np.exp(x), axis=0)
For x set as array here: https://justpaste.it/6wis7
You can load it as:
import numpy as np
x = np.array(...)  # paste the content from the link, starting from "array"
I get:
softmax.mean(axis=0).shape
# (100,)
# now all elements must be 1.0 here, since it's a probability
softmax.mean(axis=0)
# all elements are not 1
array([0.05263158, 0.05263158, 0.05263158, ..., 0.05263158])
# (all 100 entries are 0.05263158)
Why is this implementation wrong? How to fix it?
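Note that 0.05263158 is exactly 1/19: with 19 rows, each column of the softmax sums to 1, so its mean over the 19 rows is 1/19. The implementation reduces over the right axis; it is the check that is off — `sum(axis=0)` should be 1, not `mean(axis=0)`. A numerically stable variant that makes this explicit (sketch, with random data standing in for the pasted array):

```python
import numpy as np

def softmax(x, axis=0):
    # subtract the max along the axis for numerical stability; result unchanged
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

x = np.random.default_rng(1).normal(size=(19, 100))
s = softmax(x, axis=0)

print(np.allclose(s.sum(axis=0), 1.0))    # True: columns sum to 1
print(np.allclose(s.mean(axis=0), 1/19))  # True: means are 1/19, as observed
```

The max-subtraction also avoids overflow in `np.exp` for large inputs, which the one-liner in the question is vulnerable to.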

Writing training model for CNN
I am writing the training code for TwoStreamIQA, a two-stream convolutional neural network. This model predicts the quality score for the patches being assessed through two streams of the network. In the training below, I have used the test dataset provided in the GitHub link above.
The training code is as below:
## prepare training data
test_label_path = 'data_list/test.txt'
test_img_path = 'data/live/'
test_Graimg_path = 'data/live_grad/'
save_model_path = '/models/nr_sana_2stream.model'
patches_per_img = 256
patchSize = 32
print('Load data')
final_train_set = []
with open(test_label_path, 'rt') as f:
    for l in f:
        line, la = l.strip().split()  # for debug
        tic = time.time()
        full_path = os.path.join(test_img_path, line)
        Grafull_path = os.path.join(test_Graimg_path, line)
        inputImage = Image.open(full_path)
        Graf = Image.open(Grafull_path)
        img = np.asarray(inputImage, dtype=np.float32)
        Gra = np.asarray(Graf, dtype=np.float32)
        img = img.transpose(2, 0, 1)
        Gra = Gra.transpose(2, 0, 1)
        img1 = np.zeros((1, 3, Gra.shape[1], Gra.shape[2]))
        img1[0, :, :, :] = img
        Gra1 = np.zeros((1, 3, Gra.shape[1], Gra.shape[2]))
        Gra1[0, :, :, :] = Gra
        patches = extract_patches(img, (3, patchSize, patchSize), patchSize)
        Grapatches = extract_patches(Gra, (3, patchSize, patchSize), patchSize)
        X = patches.reshape((1, 3, patchSize, patchSize))
        GraX = Grapatches.reshape((1, 3, patchSize, patchSize))
        temp_slice1 = [X[int(float(index))] for index in range(256)]
        temp_slice2 = [GraX[int(float(index))] for index in range(256)]
        ##############################################
        for j in range(len(temp_slice1)):
            temp_slice1[j] = xp.array(temp_slice1[j].astype(np.float32))
            temp_slice2[j] = xp.array(temp_slice2[j].astype(np.float32))
            final_train_set.append((
                np.asarray((temp_slice1[j], temp_slice2[j])).astype(np.float32),
                int(la)
            ))
        ##############################################
print('Done!')
print('Iterator!')
train_iter = iterators.SerialIterator(final_train_set, batch_size=4)
optimizer = optimizers.Adam()
optimizer.use_cleargrads()
optimizer.setup(model)
updater = training.StandardUpdater(train_iter, optimizer, device=0)
print('Trainer!')
trainer = training.Trainer(updater, (50, 'epoch'), out='result')
trainer.extend(extensions.LogReport())
trainer.extend(extensions.PrintReport(['epoch', 'iteration', 'main/loss', 'elapsed_time']))
print('Running trainer!')
trainer.run()
But the code produces an error at the line

trainer.run()

as: ValueError: Unsupported dtype object

Maybe that's because I am arranging the training data wrong, because the model takes the training parameters as:

length = x_data.shape[0]
x1 = Variable(x_data[0:length:2])
x2 = Variable(x_data[1:length:2])

and y_data as:

t = xp.repeat(y_data[0:length:2], 1)

The variable final_train_set prepares a dataset of tuples (NumPy array, 66), where every NumPy array has dimensions (2, 3, 32, 32) and carries two kinds of patches of shape (3, 32, 32). I have used the dataset from the GitHub link provided above. I am a newbie in Chainer, please help!
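A "ValueError: Unsupported dtype object" from Chainer typically means NumPy could not pack the dataset items into a homogeneous float array and fell back to `dtype=object`, which Chainer/CuPy cannot consume. A hedged illustration of the failure mode (hypothetical shapes matching the (3, 32, 32) patches above):

```python
import numpy as np

a = np.zeros((3, 32, 32), dtype=np.float32)
b = np.zeros((3, 32, 32), dtype=np.float32)

# Matching shapes stack into one clean float32 array ...
pair = np.asarray((a, b)).astype(np.float32)
print(pair.shape, pair.dtype)  # (2, 3, 32, 32) float32

# ... but mismatched shapes (or a mix of arrays and other objects) force
# an object-dtype array, which deep-learning frameworks reject.
ragged = np.empty(2, dtype=object)
ragged[0], ragged[1] = a, np.zeros((3, 16, 16), dtype=np.float32)
print(ragged.dtype)  # object
```

So the first thing to check is that every item appended to `final_train_set` really contains a float32 array of the same shape, with nothing left as a CuPy array, list, or scalar where a NumPy array is expected.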

Training a CNN on top of RNN in tensorflow
I was looking into implementing a CNN followed by an RNN in tensorflow, and what I found in this Stack Overflow question is the following:
cnn_input = Input(shape=(3, 200, 100, 1))  # frames, height, width, channels of image
conv1 = TimeDistributed(Conv2D(32, kernel_size=(50, 5), activation='relu'))(cnn_input)
conv2 = TimeDistributed(Conv2D(32, kernel_size=(20, 5), activation='relu'))(conv1)
pool1 = TimeDistributed(MaxPooling2D(pool_size=(4, 4)))(conv2)
flat = TimeDistributed(Flatten())(pool1)
cnn_op = TimeDistributed(Dense(100))(flat)
Therefore, I tried creating something similar but for tensorflow. Here is my code:
# Model
def model(images, model_params):
    nb_classes = model_params['nb_classes']
    with tf.variable_scope('layer_cell'):
        layer = tf.reshape(images, shape=(1, image_size[0], image_size[1], 3))
        layer = Conv2D(filters=32, kernel_size=5, padding="same",
                       kernel_initializer=tf.glorot_normal_initializer(seed=mseed, dtype=m_dtype))(layer)
        layer = BatchNormalization()(layer)
        layer = tf.nn.relu(layer)
        layer = MaxPool2D(pool_size=(2, 2), strides=(2, 2))(layer)
        layer = Conv2D(filters=64, kernel_size=3, padding="same",
                       kernel_initializer=tf.glorot_normal_initializer(seed=mseed, dtype=m_dtype))(layer)
        layer = BatchNormalization()(layer)
        layer = tf.nn.relu(layer)
        layer = MaxPool2D(pool_size=(2, 2), strides=(2, 2))(layer)
        layer = Conv2D(filters=128, kernel_size=3, padding="same",
                       kernel_initializer=tf.glorot_normal_initializer(seed=mseed, dtype=m_dtype))(layer)
        layer = BatchNormalization()(layer)
        layer = tf.nn.relu(layer)
        layer = Conv2D(filters=64, kernel_size=3, padding="same",
                       kernel_initializer=tf.glorot_normal_initializer(seed=mseed, dtype=m_dtype))(layer)
        layer = BatchNormalization()(layer)
        layer = tf.nn.relu(layer)
        layer = tf.reduce_mean(layer, axis=(1, 2), keepdims=True)
        layer = Conv2D(filters=64, kernel_size=1, padding="same",
                       kernel_initializer=tf.glorot_normal_initializer(seed=mseed, dtype=m_dtype))(layer)
        layer = tf.reshape(layer, shape=(1, time_steps, 64))
        layer = tf.cast(layer, dtype=m_dtype)
        cell = tf.nn.rnn_cell.GRUCell(cell_size,
                                      kernel_initializer=tf.glorot_normal_initializer(seed=mseed, dtype=m_dtype))
        gru_outputs, new_state = tf.nn.dynamic_rnn(cell, layer, dtype=m_dtype, initial_state=initial_state)
    with tf.variable_scope("output"):
        # TODO: Change cell_size to cell_size2
        output = tf.reshape(gru_outputs, shape=[1, cell_size])
        output = tf.layers.dense(output, units=nb_classes,
                                 kernel_initializer=tf.glorot_uniform_initializer(seed=mseed, dtype=m_dtype))
        output = tf.reshape(output, shape=[1, time_steps, nb_classes])
    return output, new_state
Now I would like to know whether my implementation is correct. Also, after training for 377 epochs, I got the following figures for the accuracy.
Is that reasonable? And is training a CNN followed by an RNN expensive, or does it take a long time?

How can I integrate tensorboard visualization to tf.Estimator?
I have the classical TensorFlow code for recognizing handwritten digits, https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py , using tf.Estimator. My question is complicated and consists of two parts:

Should I write tf.summary() calls for the target variables in the code to visualize data in TensorBoard, just typing

tensorboard --logdir=/tmp/mnist_convnet_model

or does tf.Estimator collect all summaries automatically in the /tmp/mnist_convnet_model directory, so that I can just call tensorboard --logdir=/tmp/mnist_convnet_model?

If I have to write tf.summary() calls, should I insert tf.summary.merge_all() in the code, and in what piece of code?
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow as tf

tf.logging.set_verbosity(tf.logging.INFO)

def cnn_model_fn(features, labels, mode):
    """Model function for CNN."""
    # Input Layer
    input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
    # Convolutional Layer #1
    conv1 = tf.layers.conv2d(
        inputs=input_layer,
        filters=32,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)
    # Pooling Layer #1
    pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
    # Convolutional Layer #2
    conv2 = tf.layers.conv2d(
        inputs=pool1,
        filters=64,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)
    # Pooling Layer #2
    pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
    # Flatten tensor into a batch of vectors
    pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
    # Dense Layer
    dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
    # Add dropout operation; 0.6 probability that element will be kept
    dropout = tf.layers.dropout(
        inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)
    # Logits layer
    # Input Tensor Shape: [batch_size, 1024]
    # Output Tensor Shape: [batch_size, 10]
    logits = tf.layers.dense(inputs=dropout, units=10)

    predictions = {
        # Generate predictions (for PREDICT and EVAL mode)
        "classes": tf.argmax(input=logits, axis=1),
        # Add `softmax_tensor` to the graph. It is used for PREDICT and by the
        # `logging_hook`.
        "probabilities": tf.nn.softmax(logits, name="softmax_tensor")
    }
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    # Calculate Loss (for both TRAIN and EVAL modes)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

    # Configure the Training Op (for TRAIN mode)
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
        train_op = optimizer.minimize(
            loss=loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

    # Add evaluation metrics (for EVAL mode)
    eval_metric_ops = {
        "accuracy": tf.metrics.accuracy(
            labels=labels, predictions=predictions["classes"])}
    return tf.estimator.EstimatorSpec(
        mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)

def main(unused_argv):
    # Load training and eval data
    mnist = tf.contrib.learn.datasets.load_dataset("mnist")
    train_data = mnist.train.images  # Returns np.array
    train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
    eval_data = mnist.test.images  # Returns np.array
    eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)

    # Create the Estimator
    mnist_classifier = tf.estimator.Estimator(
        model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model")

    # Set up logging for predictions
    # Log the values in the "Softmax" tensor with label "probabilities"
    tensors_to_log = {"probabilities": "softmax_tensor"}
    logging_hook = tf.train.LoggingTensorHook(
        tensors=tensors_to_log, every_n_iter=50)

    # Train the model
    train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
        x={"x": train_data},
        y=train_labels,
        batch_size=100,
        num_epochs=None,
        shuffle=True)
    mnist_classifier.train(
        input_fn=train_input_fn,
        steps=20000,
        hooks=[logging_hook])

    # Evaluate the model and print results
    eval_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
        x={"x": eval_data}, y=eval_labels, num_epochs=1, shuffle=False)
    eval_results = mnist_classifier.evaluate(input_fn=eval_input_fn)
    print(eval_results)

if __name__ == "__main__":
    tf.app.run()

Blob ID sequence
I'm using AForge with C# to extract and process blobs from an image. Everything works great except blob numbering (IDs). Currently, AForge assigns blob IDs in an arbitrary order. My blobs lie in a pattern, and I want to number them from left to right. Separate processes in my program handle the blob information, and they expect a particular blob with a particular ID at the same place every time. How do I do this?
Please help.
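A stable left-to-right numbering is usually obtained by sorting the extracted blobs on their bounding-box X coordinate (each AForge Blob exposes a Rectangle) and assigning sequential IDs afterwards, rather than relying on the order the library returns. A language-agnostic sketch of the idea in Python, with blobs reduced to (id, x, y) tuples:

```python
# Each blob as (library_id, x, y) of its bounding-box top-left corner.
blobs = [(7, 120, 10), (3, 15, 12), (9, 60, 11)]

# Re-number left to right: sort by x, then assign sequential IDs.
ordered = sorted(blobs, key=lambda b: b[1])
renumbered = {new_id: blob for new_id, blob in enumerate(ordered)}

print([b[1] for b in ordered])  # x ascending: [15, 60, 120]
```

In C# the equivalent is a LINQ `blobs.OrderBy(b => b.Rectangle.X)`; for blobs arranged in several rows, sort by Y first (bucketed into rows) and then by X within each row.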

Accord.net's Cobyla optimization Constraints
Constraints defined for Accord.NET's Cobyla optimization are not working as expected.
I need to optimize the following Excel-sheet calculation in VB.NET using Accord.NET's Cobyla method. I am testing with a total of 4 variables:
Dim f1 As Func(Of Double(), Double) =
    Function(x)
        Dim var1 As New ArrayList
        Dim objsheets As Excel.Sheets = Nothing
        Dim app2 As Excel.Application = TryCast(Marshal.GetActiveObject("Excel.Application"), Excel.Application)
        'Access excel sheet
        file = "C:\kws_work\Blade.xlsx"
        books = app2.Workbooks
        objsheet = books(1).Sheets(2)
        'fill target cells x0(ind) with variables x(ind)
        For ind = 0 To x.Count - 1
            objsheet.Range(x0(ind)).Value = CInt(x(ind))
        Next
        Dim res As Double
        res = objsheet.Range("_res1").Value
        'Return the calculated value to optimizer
        Return (res)
    End Function
constraints are defined as follows:
Dim f5 = {
    New NonlinearConstraint(4, Function(x) x(0) >= 29),
    New NonlinearConstraint(4, Function(x) x(1) >= 29),
    New NonlinearConstraint(4, Function(x) x(2) >= 29),
    New NonlinearConstraint(4, Function(x) x(3) >= 29)
}
- First, I expect all the variables x(i) input into the Excel sheet to have different values, but they are all identical.
- Second, I expect the optimizer to start at x(i) >= 29, but it starts at 0 and finishes at 29.
- Additionally, is there a way to limit x(i) to integers in the nonlinear constraints?
The number of variables/constraints is actually subject to change. I can manage a variable number of x(i), but I am still looking for a way to get a variable number of constraints — something like the (pseudo)code below:

Dim f5 = {New NonlinearConstraint(NumberOfVariables,
    For i = 0 To NumberOfVariables - 1
        Function(x) x(i) >= 29
    Next)}

or better:

Dim f5 = {New NonlinearConstraint(NumberOfVariables,
    For i = 0 To NumberOfVariables - 1
        Function(x) aListofdoubles.Contains(x(i))
    Next)}
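Building a variable number of constraints is just filling a collection in a loop, which the pseudocode above cannot express inside an array initializer. One pitfall when each constraint's lambda captures the loop index: bind the index per iteration, or every constraint ends up testing the same variable. A Python sketch of the pattern (the same capture issue can arise with closures in VB.NET):

```python
n = 4

# WRONG: every lambda closes over the same `i`; after the loop all of
# them see the final value i == n-1.
bad = [lambda x: x[i] >= 29 for i in range(n)]

# RIGHT: freeze the index per iteration with a default argument.
constraints = [lambda x, i=i: x[i] >= 29 for i in range(n)]

x = [29, 30, 5, 40]
print([c(x) for c in constraints])  # -> [True, True, False, True]
print([c(x) for c in bad])          # -> [True, True, True, True]
```

In VB.NET the analogue is populating a `List(Of NonlinearConstraint)` inside a `For` loop, copying the loop counter into a fresh local variable before using it in the `Function(x)` lambda, and passing `list.ToArray()` to Cobyla.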

How to explain and visualize a Q Learning Agent?
What are some common approaches and useful resources that will aid in explaining the behavior of a Q-Learning agent and visualizing Q-values?
Here is an excerpt of some example Q values serialized to json:
[
  [ 0.0, 0.0, 0.0, 0.0, 0.0 ],
  [ 0.0, 0.0, 0.0, 0.0, 0.0 ],
  [ 0.0, 9.7180743908492411E-05, 0.0, 6.0134871150517619E-05, 0.0 ],
  [ 0.0, 0.0, 0.0, 0.0, 0.0 ],
  [ 0.0, 0.0, 0.0, 0.0, 0.0 ],
  [ 0.0, 0.0, 0.0, 0.0, 0.0 ],
  [ 0.0, 2.7866205412015394E-05, 0.0, 3.5352503282357707E-05, 0.0 ],
  [ 0.0, 0.0, 0.0, 0.0, 0.0 ],
  [ 0.0, 0.0, 0.0, 0.0, 0.0 ],
  [ 0.0, 0.0, 0.0, 0.0, 0.0 ],
  [ 0.0, 0.002179680102508753, 0.0, 0.0003821282886147801, 0.0 ],
  [ 0.0, 0.0, 0.0, 0.0, 0.0 ],
  [ 0.0, 0.0, 0.0, 0.0, 0.0 ],
  [ 0.0, 0.0, 0.0, 0.0, 0.0 ],
  [ 0.0, 0.00044976255425384565, 0.0, 2.6171104054710165E-05, 0.0 ],
  [ 0.0, 0.0, 0.0, 0.0, 0.0 ],
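A common starting point is to reduce the Q-table to two per-state quantities — the greedy action argmax_a Q(s, a) and the state value max_a Q(s, a) — and then plot the values as a heatmap (e.g. matplotlib's imshow over the environment grid) with the greedy actions drawn as arrows. A sketch on a toy table with the same 5-actions-per-state layout as the JSON above (the numbers here are made up):

```python
import numpy as np

# Toy Q-table: 4 states x 5 actions.
Q = np.array([
    [0.0, 9.7e-05, 0.0, 6.0e-05, 0.0],
    [0.0, 0.0,     0.0, 0.0,     0.0],
    [0.0, 2.8e-05, 0.0, 3.5e-05, 0.0],
    [0.0, 0.002,   0.0, 0.0004,  0.0],
])

state_values = Q.max(axis=1)      # V(s) = max_a Q(s, a)  -> heatmap these
greedy_policy = Q.argmax(axis=1)  # best action per state -> draw as arrows

print(greedy_policy)  # -> [1 0 3 1]
```

Mostly-zero rows like those in the excerpt also show up clearly in such a heatmap: they are states the agent has barely explored, which is itself useful diagnostic information.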