How to do InceptionV3 transfer learning in CNTK?
I have been studying transfer learning with the CNTK tutorial. After finishing the tutorial, I tried to apply another model, so I downloaded InceptionV3_ImageNet_CNTK.model from this page.
After downloading, I ran
node_outputs = C.logging.get_node_outputs(C.load_model('InceptionV3_ImageNet_CNTK.model'))
for l in node_outputs: print(" {0} {1}".format(l.name, l.shape))
to find the final features layer, so I could strip it off and attach a new Dense layer for classification, but I found that InceptionV3_ImageNet_CNTK.model doesn't have node names.
Loading InceptionV3_ImageNet_CNTK.model and printing all layers:
(1000,)
(1,)
aggregateLoss ()
(1,)
aggregateEvalMetric ()
(1000,)
(2048, 1, 1)
(2048, 1, 1)
(2048, 8, 8)
(320, 8, 8)
(320, 8, 8)
(320, 8, 8)
...
Therefore, I can't do
feature_node = C.logging.find_by_name(base_model, node_name = model_details['feature_node_name'])
last_node = C.logging.find_by_name(base_model, node_name = model_details['last_hidden_node_name'])
and freeze and attach a new Dense layer.
How can I solve this problem? I've searched all over the internet (GitHub, Stack Overflow, Google...) but I can't seem to find anything useful for a novice.
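In the meantime, the only workaround I can think of is selecting nodes by output shape instead of by name. A minimal sketch of that selection logic, shown on plain (name, shape) pairs standing in for the `C.logging.get_node_outputs(...)` results above (the shapes come from the printout; which node is the right attachment point is my assumption):

```python
# Stand-ins for the unnamed node outputs printed above; with the real model
# these would come from C.logging.get_node_outputs(C.load_model(...)).
node_outputs = [
    ('', (1000,)),       # classifier output
    ('', (2048, 1, 1)),  # pooled features, a plausible last hidden node
    ('', (2048, 8, 8)),
    ('', (320, 8, 8)),
]

def find_by_shape(nodes, shape):
    """Return every node whose output shape matches exactly."""
    return [n for n in nodes if n[1] == shape]

# Select the (2048, 1, 1) node as the point to attach a new Dense layer after.
candidates = find_by_shape(node_outputs, (2048, 1, 1))
```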
Thanks so much!
See also questions close to this topic

Create new dataframe column with 0 and 1 values according to given series
I have a dataframe as shown below:

df =
                     value
20140521 10:00:00      0.0
20140521 11:00:00      3.4
20140521 12:00:00      NaN
20140521 13:00:00      0.0
20140521 14:00:00      NaN
20140521 15:00:00      1.0
...
I would like to add two columns: the first, named "active", set to 1 (if df.value >= 0) and 0 (if df.value is NaN); the second, "unactive", set to 0 (if df.value >= 0) and 1 (if df.value is NaN). The new dataframe would look like

df_new =
                     value  active  unactive
20140521 10:00:00      0.0       1         0
20140521 11:00:00      3.4       1         0
20140521 12:00:00      NaN       0         1
20140521 13:00:00      0.0       1         0
20140521 14:00:00      NaN       0         1
20140521 15:00:00      1.0       1         0
...

I tried to use a for loop, but it takes too much time when the time series is long. Does anyone know a better way to do it? Thanks in advance!
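For what it's worth, a vectorized sketch of what the loop computes, assuming NaN is the only case that should count as inactive (true for my sample, where every present value is >= 0):

```python
import numpy as np
import pandas as pd

# Toy frame with the same pattern as my data (values and NaNs made up).
df = pd.DataFrame({'value': [0.0, 3.4, np.nan, 0.0, np.nan, 1.0]})

# Vectorized: notna()/isna() mark the non-NaN and NaN rows in one pass,
# so no Python-level loop over the time series is needed.
df['active'] = df['value'].notna().astype(int)
df['unactive'] = df['value'].isna().astype(int)
```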

Apply a filter on a list of lists wherein first and second index elements of nested lists are numbers A and B respectively
I went through many similar questions on the internet and tried all the suggested solutions, but they all seem to be about comparing two different lists. I have a list of lists and two numbers, and I want to check whether those numbers appear in the first two positions (index[0] and index[1]) of the nested lists. I wish to identify all such lists and then, if any exist, compare the element at index 4 of each of them against a fixed number.
My Sample list: [[1, 4, 65, 77, 22.0], [3, 2, 12, 55, 77.0], [1, 4, 16, 99, 13.0]]
Numbers to check: index[0] == 1 and index[1] == 4. The above list of lists has two such nested lists where the element at index 0 is 1 and the element at index 1 is 4.
Hence we now compare the element at index 4 of each matching list against our reference weight = 17.
Thus there is one list where the number 22.0 is greater than our reference number 17, and another list where the number 13.0 is less than our reference number 17.
My Output should be :
return True if there is at least one matching list where the value at index 4 is < the reference weight
return False if all values at index 4 are >= the reference weight
What I am looking for is an efficient way of quickly identifying the matching lists within the list of lists. I know a for loop is an option, but performance becomes an issue since my list of lists can get very big over time.
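A sketch of the check as a single pass with a generator, so it short-circuits on the first qualifying list instead of scanning everything (the function name is mine, not from any library):

```python
data = [[1, 4, 65, 77, 22.0], [3, 2, 12, 55, 77.0], [1, 4, 16, 99, 13.0]]

def any_below(rows, a, b, reference):
    """True if any row with row[0] == a and row[1] == b has row[4] < reference."""
    return any(row[4] < reference
               for row in rows
               if row[0] == a and row[1] == b)

# With reference weight 17: the (1, 4, ..., 13.0) row qualifies.
result = any_below(data, 1, 4, 17)
```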

PyYAML inserting line break in ssh public key
I'm working on a project to pull user information out of a MySQL database and format it into a yaml file that Ansible can read and use as a vars file. I need all the normal user info, username, email, etc, along with their public ssh key from the database.
Problem is, PyYAML is inserting an extra line break before the email part of the pubkey, and I cannot figure out why. Here is a simple example:
import yaml

yamldict = {"users": []}
yamldict["users"].append({
    "username": "user",
    "name": "user",
    "sshkey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHV/xbvOHuPq6WbBhtmjUWKYPrqQlkILf8b/I6V9dZVBPzmhRZFCAf/gWny0hmZ95bVRED4iCSTCtN3Lq2VZiZ/kwBO7Y9E4vr1wVQYrr4IIwEhdaifZmWFLlwOXbt76dxJQs2xS9Z5ZQjEzZBFZqgYu42QbSi7tKBNSaLadOWbB3sq0IOzCZeSgrELlZIuUy7u1RbcS4w2Y29S3XLrbi2yVdVbPW8B9PfsG1n4q2/XR7w3gqhP6c8ibO4jYpADLZuHZvuoVpjKINO4kSdrwUfD8rl3MBIAD/Nu9sy0bIiKdSONQohxcsjMevxPOijjz4EiI1Ad4U6dDJrFlT0asYH user@email.com"
})
print(yaml.dump(yamldict))
which outputs:
users:
- name: user
  sshkey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHV/xbvOHuPq6WbBhtmjUWKYPrqQlkILf8b/I6V9dZVBPzmhRZFCAf/gWny0hmZ95bVRED4iCSTCtN3Lq2VZiZ/kwBO7Y9E4vr1wVQYrr4IIwEhdaifZmWFLlwOXbt76dxJQs2xS9Z5ZQjEzZBFZqgYu42QbSi7tKBNSaLadOWbB3sq0IOzCZeSgrELlZIuUy7u1RbcS4w2Y29S3XLrbi2yVdVbPW8B9PfsG1n4q2/XR7w3gqhP6c8ibO4jYpADLZuHZvuoVpjKINO4kSdrwUfD8rl3MBIAD/Nu9sy0bIiKdSONQohxcsjMevxPOijjz4EiI1Ad4U6dDJrFlT0asYH
    user@email.com
  username: user
I've tried many different ways to strip out extra whitespace, newlines and carriage returns. I've also tried converting this dict to json, and the ssh key looks good there, and then running yaml.dump on the json and it still gives me that extra newline.
Any ideas what I'm doing wrong here?
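One thing I have since noticed (a sketch below, with a shortened made-up key): yaml.dump wraps long plain scalars at roughly 80 columns by default, and the wrap lands at a space, which in an ssh key is right before the email comment. If that is what is happening here, raising the `width` argument keeps the key on one line:

```python
import yaml

# Shortened stand-in for a real public key: one long unbreakable token
# with a space-separated type prefix and email comment.
data = {'sshkey': 'ssh-rsa ' + 'A' * 300 + ' user@example.com'}

wrapped = yaml.dump(data)                # default width (~80): wraps at a space
unwrapped = yaml.dump(data, width=4096)  # wide enough: stays on one line
```

Loading either form with yaml.safe_load gives back the identical string, so the break is cosmetic line folding, not a corrupted value.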

Is it possible to use GPU in new iMac models which use Radeon?
Recently I purchased an iMac Pro and it has no NVIDIA GPU. Is it still possible to use the iMac's graphics card to speed up some Python functions like loops, machine learning, etc.? Any advice on that?
Also, do you think adding an external NVIDIA GPU would help?

Calculating entropy in ID3 log2(0) in formula
import numpy as np

udacity_set = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 1]])

label = udacity_set[:, udacity_set.shape[1] - 1]
fx = label.size
positive = label[label == 1].shape[0]
positive_probability = positive / fx
negative = label[label == 0].shape[0]
negative_probability = negative / fx
entropy = -negative_probability * np.log2(negative_probability) - positive_probability * np.log2(positive_probability)

atribute = 0
V = 1

# selecting instances where the attribute == 1
attribute_set = udacity_set[np.where(udacity_set[:, atribute] == 1)]
instances = attribute_set.shape[0]
negative_labels = attribute_set[np.where(attribute_set[:, attribute_set.shape[1] - 1] == 0)].shape[0]
positive_labels = attribute_set[np.where(attribute_set[:, attribute_set.shape[1] - 1] == 1)].shape[0]
p0 = negative_labels / instances
p1 = positive_labels / instances
entropy2 = -p0 * np.log2(p0) - p1 * np.log2(p1)

# selecting instances where the attribute == 0
attribute_set2 = udacity_set[np.where(udacity_set[:, atribute] == 0)]
instances2 = attribute_set2.shape[0]
negative_labels2 = attribute_set2[np.where(attribute_set2[:, attribute_set2.shape[1] - 1] == 0)].shape[0]
positive_labels2 = attribute_set2[np.where(attribute_set2[:, attribute_set2.shape[1] - 1] == 1)].shape[0]
p02 = negative_labels2 / instances2
p12 = positive_labels2 / instances2
entropy22 = -p02 * np.log2(p02) - p12 * np.log2(p12)
The problem is when an attribute is pure and the entropy is meant to be 0: when I put this into the formula I get NaN. I know how to code a workaround, but why is this formula rigged?
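The formula itself is fine; the issue is floating point. For a pure node one probability is 0, and 0 * np.log2(0) evaluates as 0 * (-inf) = nan in IEEE arithmetic, even though the limit of p * log2(p) as p goes to 0 is 0. The standard convention is to define 0 * log2(0) = 0, e.g. by dropping zero-probability terms (a sketch with my own helper name):

```python
import numpy as np

def binary_entropy(p):
    """Entropy of a {p, 1-p} split, with the 0*log2(0) = 0 convention."""
    probs = np.array([p, 1.0 - p])
    nonzero = probs[probs > 0]   # drop zero terms instead of taking log2(0)
    return float(-(nonzero * np.log2(nonzero)).sum())
```

binary_entropy(0.0) now returns 0.0 for a pure attribute instead of NaN.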

I get this SMOTE error when running Azure ML in Excel
Error:
{"error":{"code":"LibraryExecutionError","message":"Module execution encountered an internal library error.","details": [{"code":"InvalidIndex","target":"SMOTE (AFx Library)","message":"indices: Invalid index: 11, expected bounds [0, 11)"}]}}
I created a multiclass SMOTE in Azure Machine Learning by splitting my data, and it works fine until I use Excel to predict with the web service. :( No clue why it's complaining about indices... Any ideas why this could be?

AlexNet: doubts about the fully connected layers
I'm trying to understand the AlexNet architecture. The activation volume is 7x7x512... We need 3 fully connected layers (because we use 3 max pooling layers, according to the info at https://es.mathworks.com/help/nnet/ref/nnet.cnn.layer.fullyconnectedlayer.html) with outputs of 4096, 4096 and 1000 filters.
I know the number of filters in the last fully connected layer has to equal the number of classes we want to classify. My doubt is about the 4096 filters. Usually this kind of net is converted from FC to ConvNet, meaning we apply a convolutional layer with the same filter size (7x7 in this case) and the same number of filters to get a 1x1xM vector (M = 4096 in this case). Where does that number (4096) come from?
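To make my question concrete, here is the parameter count I mean, computed both ways (the 7x7x512 volume and 4096 are the numbers from above; as far as I can tell, 4096 is a design choice by the original authors rather than something derived):

```python
# An FC layer from a 7x7x512 activation volume to 4096 units has exactly the
# same parameter count as a 7x7 convolution with 4096 filters over that volume.
in_volume = 7 * 7 * 512                      # 25088 inputs
fc_params = in_volume * 4096 + 4096          # weight matrix + biases
conv_params = (7 * 7 * 512) * 4096 + 4096    # one 7x7x512 kernel per filter
```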
I looked at this post but I couldn't find the article mentioned in "(x) see section 3.2 of the article: the fully-connected layers are first converted to convolutional layers": How to calculate the number of parameters of convolutional neural networks?
Thanks, and feel free to tell me if my arguments are wrong!

cuda runtime error: device-side assert triggered
My input dimension is (2, 1, 116, 132, 132) and my target dimension is (2, 1, 28, 44, 44).
I seem to be getting this error:
loss
Traceback (most recent call last):
  File "train3DUNet.py", line 143, in <module>
    gpu=options.gpu)
  File "train3DUNet.py", line 94, in train_net
    print('loss', loss)
  File "/usr/local/lib/python3.5/dist-packages/torch/autograd/variable.py", line 119, in __repr__
    return 'Variable containing:' + self.data.__repr__()
  File "/usr/local/lib/python3.5/dist-packages/torch/tensor.py", line 133, in __repr__
    return str(self)
  File "/usr/local/lib/python3.5/dist-packages/torch/tensor.py", line 140, in __str__
    return _tensor_str._str(self)
  File "/usr/local/lib/python3.5/dist-packages/torch/_tensor_str.py", line 295, in _str
    strt = _vector_str(self)
  File "/usr/local/lib/python3.5/dist-packages/torch/_tensor_str.py", line 271, in _vector_str
    fmt, scale, sz = _number_format(self)
  File "/usr/local/lib/python3.5/dist-packages/torch/_tensor_str.py", line 79, in _number_format
    tensor = torch.DoubleTensor(tensor.size()).copy_(tensor).abs_().view(tensor.nelement())
RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/torch/lib/THC/generic/THCTensorCopy.c:70
I'm not really sure what to make of this. My GPU has enough memory, so I don't think it is a memory issue. If anyone has any suggestions it would be really helpful. Thanks in advance!
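One general debugging step I know of for error 59 (an aside, not specific to this model): CUDA kernels launch asynchronously, so the assert surfaces at a later, unrelated line, here the print of loss. Making launches synchronous pins the traceback to the op that actually failed:

```python
import os

# Must be set before torch initializes CUDA (i.e. before `import torch`
# runs in the training script); with it, the kernel that actually fails
# is the one reported in the traceback.
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
```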

Deep Learning outputs go to zero
I am a complete beginner at deep learning and am trying to understand the basic concepts using only numpy before moving to something full-featured. Let's say you have 10 outputs and, for each example, only 1 should be true and the other 9 false. Given a large batch size, doesn't back-propagation basically end up saying that, chances are, any given output should just be 0?
I modified the following code from an example and ran it on the CIFAR-10 data set, and that's exactly what happened. Is there some way I could modify this code so that it actually makes a guess, even if the network is way too simple to be accurate? For example, shouldn't a simple 1-hidden-layer network like this be able to guess "frog" if there's a high number of green pixels?
import random
import pickle
import numpy as np

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 10000, 3072, 100, 10

def unpickle(file):
    with open(file, 'rb') as fo:
        dict = pickle.load(fo)
    return dict

def nonlin(x, deriv=False):
    if deriv == True:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

np.random.seed(1)

# randomly initialize our weights with mean 0
syn0 = 2 * np.random.random((D_in, H)) - 1
syn1 = 2 * np.random.random((H, D_out)) - 1

for j in range(60000):
    btch = j % 5
    raw = unpickle('cifar/data_batch_' + str(btch + 1))
    # Grab 5 random images from the file and use as a batch
    rrng = random.randrange(9500)
    x = raw[b'data'][rrng:rrng + 5]
    y = raw[b'labels'][rrng:rrng + 5]
    for c in range(len(y)):
        sr = y[c]
        y[c] = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
        y[c][sr] = 1.0
    # Feed forward through layers 0, 1, and 2
    l0 = x
    l1 = nonlin(np.dot(l0, syn0))
    l2 = nonlin(np.dot(l1, syn1))
    # how much did we miss the target value?
    l2_error = y - l2
    if (j % 30) == 0:
        print("Error:" + str(np.mean(np.abs(l2_error))))
    # in what direction is the target value?
    # were we really sure? if so, don't change too much.
    l2_delta = l2_error * nonlin(l2, deriv=True)
    # how much did each l1 value contribute to the l2 error (according to the weights)?
    l1_error = l2_delta.dot(syn1.T)
    # in what direction is the target l1?
    # were we really sure? if so, don't change too much.
    l1_delta = l1_error * nonlin(l1, deriv=True)
    #print("Delta: " + str(l2_delta[2]))
    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)
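One change I have been considering (a sketch, not tested on CIFAR-10): replace the sigmoid on the output layer with a softmax, so the ten scores are forced to compete and sum to 1, instead of ten independent sigmoids that can all drift toward 0 when nine of ten targets are 0:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax; subtracting the row max is for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy scores for one example over 3 classes: the probabilities sum to 1,
# so the network must always put its mass somewhere, i.e. make a guess.
probs = softmax(np.array([[2.0, 1.0, 0.1]]))
```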

Get started with CNTK in Python via Spyder
I've installed CNTK version 2.5 in Anaconda 3 with Python 3.6, but when I run Microsoft's CNTK sample codes in Spyder, Python crashes (Kernel died, restarting). When I debugged the code line by line I got this error: NameError: "name 'cntk' is not defined". Could you please tell me what's wrong? Did I miss something I need to do?

Implementation of center loss in CNTK
I have implemented center loss according to the paper as follows. (https://ydwen.github.io/papers/WenECCV16.pdf)
def loss_function(emb, logits, labels, class_num, lam):
    # `lambda` is a reserved word in Python, so the weight is named `lam` here
    one_hot_label = C.one_hot(labels, class_num)
    softmax_loss = C.cross_entropy_with_softmax(logits, one_hot_label)
    center = C.reduce_mean(emb)
    center_loss = C.reduce_sum(C.square(emb - center)) * 0.5
    total_loss = softmax_loss + center_loss * lam
    return total_loss

emb: 128-d embedding vector
logits: class-scores layer after emb
class_num: the number of identities
lam: the weight on the center-loss term (lambda in the paper)
Is there a way to get the center? What am I supposed to fix?
I referred to the following code, but I do not understand how to update the center... (https://github.com/davidsandberg/facenet/blob/master/src/facenet.py)
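The closest I have come to understanding the paper's update rule is this plain-numpy sketch of per-class centers with a moving-average update. The names are mine, and I have simplified by using the plain batch mean (the paper divides by 1 + n):

```python
import numpy as np

def update_centers(centers, emb, labels, alpha=0.5):
    """Move each class center toward the mean embedding of that class.

    centers: (num_classes, dim), emb: (batch, dim), labels: (batch,),
    alpha: the center learning rate from the paper.
    """
    centers = centers.copy()
    for c in np.unique(labels):
        batch_mean = emb[labels == c].mean(axis=0)
        centers[c] -= alpha * (centers[c] - batch_mean)
    return centers

# Toy example: two 2-d embeddings of class 0, centers starting at zero.
new_centers = update_centers(np.zeros((2, 2)),
                             emb=np.array([[1.0, 1.0], [3.0, 3.0]]),
                             labels=np.array([0, 0]),
                             alpha=1.0)
```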

Error when I try to build an Android app with CNTK in Unity3D
I have been trying for many weeks to build an Android app with CNTK's CNN evaluation function (C#) in Unity, but I ran into an error that I don't know how to solve.
The processing pipeline I expect is:
Train the model using Keras (by python)
Convert model to CNTK's format
Use CNTK(C#)'s Evaluation function in Unity
Export Android APK
I successfully used CNTK to evaluate an image when pressing Play in Unity, before building. But once the app was installed on my Android phone, it showed an error saying CNTK cannot initialize because it cannot find the CPU.
The error message is below:
(screenshot of the error message thrown on the phone)
my system Configuration:
 OS : Win10
 Unity : 2017.3
 Script Backend : Mono (I tried IL2CPP but got some errors)
 API Compatibility : .NET 4.6
 Build System : Gradle
 CNTK 2.4 (C#)
(I also exported the project to Android Studio, ran it on an AVD with an ARM processor, and that failed too.)
Here is my project folder:
Any suggestion will be appreciated.