Training vs validation accuracy and loss output error
Can anyone help me interpret these graphs?
I have been training some datasets on SENet for image classification (TensorLayer). I use data augmentation for all my experiments.
I started with CIFAR-10 and got some satisfying results with it.
Next I tried training on Caltech-256 and the results were pretty unusual, maybe due to the small number of images per class (30-50).
Finally I used Tiny ImageNet, which has enough training samples per class (500), but the results still look odd, somewhat similar to the Caltech-256 results.
Is this supposed to be a problem with the data?
See also questions close to this topic

Can I run predict method of sklearn SVR by myself?
I know that SVR even in sklearn follows this equation:
y_n = \sum_{i \in SV} \alpha_i k(x_i, x_n) + b,
where alpha_i is a coefficient, k is a kernel function, and x_i and x_n are a support vector and a new sample, respectively. However, when I looked into the sklearn.svm.SVR documentation, I could not tell which attributes of the SVR class correspond to the letters above.
My goal is to compute the gradient of the SVR function with respect to every support vector, to determine which variables are important.
Can anyone help me with this? I don't know how the output value is computed in SVR().predict(X), and I can't read the libsvm C++ code or the .pyx sources. Thanks in advance.
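For what it's worth, here is a sketch of how the letters appear to map onto SVR attributes, assuming an RBF kernel with gamma fixed explicitly (an assumption, since the question doesn't say which kernel is used): support_vectors_ holds the x_i, dual_coef_ the signed alpha_i, and intercept_ the b.

```python
import numpy as np
from sklearn.svm import SVR

# small synthetic problem, a stand-in for real data
rng = np.random.RandomState(0)
X = rng.rand(40, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.randn(40)

# fix gamma explicitly so the manual kernel below matches exactly
svr = SVR(kernel="rbf", gamma=0.5, C=1.0).fit(X, y)

def manual_predict(svr, X_new, gamma=0.5):
    sv = svr.support_vectors_          # the x_i
    alpha = svr.dual_coef_.ravel()     # the alpha_i (already signed)
    b = svr.intercept_[0]              # the intercept b
    d2 = ((X_new[:, None, :] - sv[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-gamma * d2)            # k(x_i, x_n) for the RBF kernel
    return K @ alpha + b

print(np.allclose(manual_predict(svr, X[:5]), svr.predict(X[:5])))  # True
```

With the kernel written out explicitly like this, the gradient with respect to each x_i can be derived by hand from the same expression.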

I am receiving the following error on eliminating the dummy variable trap in multivariable linear regression
dataset = pd.read_csv('50_Startups.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 4].values

from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X = LabelEncoder()
X[:, 3] = labelencoder_X.fit_transform(X[:, 3])
onehotencoder = OneHotEncoder(categorical_features=[3])
X = onehotencoder.fit_transform(X).toarray

# Avoiding the Dummy Variable Trap
X = X[:, 1:]
Writing the above code, I'm getting the following error. Can you please suggest the fix?

File "<ipython-input-35-9ad621cd0c86>", line 13, in <module>
    X = X[:, 1:]
TypeError: 'method' object is not subscriptable
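For what it's worth, this TypeError usually means toarray was referenced without calling it, so X becomes a bound method rather than an array. A minimal reproduction using a SciPy sparse matrix (a stand-in, since the original dataset isn't available here):

```python
from scipy import sparse

m = sparse.eye(3).tocsr()

dense = m.toarray        # missing parentheses: this is a bound method, not an array
try:
    dense[:, 1:]         # subscripting a method fails
except TypeError as e:
    print(e)             # 'method' object is not subscriptable

dense = m.toarray()      # calling it returns the ndarray
print(dense[:, 1:].shape)  # (3, 2)
```

With the parentheses added, the subsequent X = X[:, 1:] works on the dense array.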

Restored model doesn't continue training in tensorflow
I'm training a simple convnet in tensorflow and I'm creating checkpoints every few minutes. Unfortunately, when I interrupt the training, restore the checkpoint later, and try to continue training, the loss doesn't decrease further.
This is what my first training run looks like. The train loss decreases from 0.11 to 0.095 (not pictured) within the first 9000 steps.
Then when I restart training and restore the model it looks like this.
The model was definitely restored properly, since the starting loss is immediately at ~0.095, which was the final value of the first training run. Unfortunately, the model does not continue training and the loss doesn't decrease further. Also, the accuracy, precision and recall (tf.metrics.*) start from their initial values again because their internal local variables are not included in the checkpoint. But they also don't update any further after restarting training.

Interpreting uplift curve for treatment
I obtained the attached uplift plot for my data (using the uplift R package). Basically, I am looking to find the percentage of the population for which the treatment is effective and which will not have any chronic disorders after treatment. But I do not know how to interpret this plot. Does it mean I can select 40% of the population for effective treatment without much risk, compared to including the entire population?

Linear regression: ValueError: all the input array dimensions except for the concatenation axis must match exactly
I am looking for a solution to the following problem, and it just won't work the way I want it to.
My goal is to run a regression analysis and get the slope, intercept, rvalue, pvalue and stderr for multiple rows (this could go up to 10000). In this example, I have a file with 15 rows. Here are the first two rows:
array([ [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24], [ 100, 10, 61, 55, 29, 77, 61, 42, 70, 73, 98, 62, 25, 86, 49, 68, 68, 26, 35, 62, 100, 56, 10, 97]] )
Full trial data set:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
100 10 61 55 29 77 61 42 70 73 98 62 25 86 49 68 68 26 35 62 100 56 10 97
57 89 25 89 48 56 67 17 98 10 25 90 17 52 85 56 18 20 74 97 82 63 45 87
192 371 47 173 202 144 17 147 174 483 170 422 285 13 77 116 500 136 276 392 220 121 441 268
The first row is the x variable, i.e. the independent variable. It has to be kept fixed while iterating over every following row.
For each following row (the y variable, i.e. the dependent variable), I want to calculate the slope, intercept, rvalue, pvalue and stderr and collect them in a dataframe (ideally added to the same dataframe, but this is not necessary).
I tried the following code:
import pandas as pd
import scipy.stats
import numpy as np

df = pd.read_excel("Directory\\file.xlsx")

def regr(row):
    r = scipy.stats.linregress(df.iloc[1:, :], row)
    return r

full_dataframe = None
for index, row in df.iterrows():
    x = regr(index)
    if full_dataframe is None:
        full_dataframe = x.T
    else:
        full_dataframe = full_dataframe.append([x.T])

full_dataframe.to_excel('Directory\\file.xlsx')
But this fails and gives the following error:
ValueError: all the input array dimensions except for the concatenation axis must match exactly
I'm really lost here.
What I want is the slope, intercept, pvalue, rvalue and stderr per row, starting from the second one, because the first row is the x variable.
Does anyone have an idea HOW to do this, and can you tell me WHY mine isn't working and WHAT the code should look like?
Thanks!!
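A sketch of one way this could work, assuming the layout described above (first row is x, every later row a y series); the file path and real 15-row data are stood in by a small array:

```python
import numpy as np
import pandas as pd
from scipy import stats

# stand-in for the Excel file: first row is x, later rows are y series
data = np.array([
    [1, 2, 3, 4, 5],
    [2, 4, 6, 8, 10],
    [3, 5, 7, 9, 11],
])
x = data[0]

rows = []
for y in data[1:]:
    res = stats.linregress(x, y)          # x stays fixed, one fit per y row
    rows.append({"slope": res.slope, "intercept": res.intercept,
                 "rvalue": res.rvalue, "pvalue": res.pvalue,
                 "stderr": res.stderr})

results = pd.DataFrame(rows)
print(results)
```

The original error comes from passing a 2-D frame (df.iloc[1:, :]) as the first argument of linregress, which expects two 1-D arrays of equal length.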

Inverse of scipy.stats.lognorm.interval
import numpy as np
from scipy.stats import lognorm

posterior_fb = lognorm(s=np.log(1.14), scale=0.007)
intervals = posterior_fb.interval(0.99)
posterior_fb.interval(0.99)
gives me the endpoints of the range that contains 99% of the distribution, i.e. (0.0049, 0.0098). I need a function that does the inverse: you specify the two endpoints, and the function calculates the percentage of the distribution that lies between them.
For example,
inverse_of_interval(0.0049, 0.0098)
would give me 0.99. 
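Assuming the frozen distribution above, the inverse is just a difference of CDFs, both of which are standard scipy.stats methods on a frozen distribution:

```python
import numpy as np
from scipy.stats import lognorm

posterior_fb = lognorm(s=np.log(1.14), scale=0.007)

def prob_between(dist, lo, hi):
    # inverse of interval(): the mass of the distribution between lo and hi
    return dist.cdf(hi) - dist.cdf(lo)

lo, hi = posterior_fb.interval(0.99)
print(prob_between(posterior_fb, lo, hi))  # ~0.99
```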
While flattening a histogram in OpenCV, getting AttributeError: 'NoneType' object has no attribute 'flatten'
This is my method definition:
def fd_histogram(image, mask=None):
    image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([image], [0, 1, 2], None, [bins, bins, bins], [0, 256, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist)
    return hist.flatten()
This is my method call:
img = cv2.imread(path)
img = cv2.resize(img, (500, 500))
histo = fd_histogram(img)
When I call the method it says a NoneType object has no attribute 'flatten'. It works fine if I just remove flatten, but I want a flattened histogram as the result. Any leads would be appreciated.
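One common cause (an assumption, since the path isn't shown): several OpenCV calls, such as cv2.imread on an unreadable path, signal failure by returning None rather than raising, and that None only surfaces later as the flatten error. A small framework-free guard sketch; checked is a hypothetical helper, not an OpenCV API:

```python
def checked(value, what):
    # OpenCV often returns None on failure instead of raising;
    # fail fast here rather than letting None reach .flatten()
    if value is None:
        raise ValueError(f"{what} returned None")
    return value

# hypothetical usage inside the code above:
# img  = checked(cv2.imread(path), "cv2.imread")
# hist = checked(cv2.normalize(hist, hist), "cv2.normalize")
print(checked(42, "demo"))  # 42
```

Wrapping each call this way pinpoints which step actually produced the None.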

Edge detection in OpenCV-Python
This is my code:
def line_detection(image):
    gray = cv.cvtColor(image, cv.COLOR_RGB2GRAY)
    edges = cv.Canny(gray, 50, 600, apertureSize=3)
    cv.imshow("edges", edges)
    lines = cv.HoughLines(edges, 1, np.pi / 180, 80)
    l1 = lines[:, 0, :]
    print(l1)
    for line in lines:
        rho, theta = line[0]
        a = np.cos(theta)
        b = np.sin(theta)
        x0 = a * rho
        y0 = b * rho
        x1 = int(x0 + 1000 * (-b))
        y1 = int(y0 + 1000 * a)
        x2 = int(x0 - 1000 * (-b))
        y2 = int(y0 - 1000 * a)
        cv.line(image, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv.imshow("image-lines", image)
    cv.waitKey(0)
My code only detects one edge, like this (see attached image).
This is the edge detection picture (see attached image).
However, there are actually two edges in the picture, and the code only detects the right one. How do I get the left edge?

Transfer mobilenet to yolo in tensorflow?
YOLO uses Darknet-19 as its feature extractor, but I want to use a smaller, faster extractor like MobileNet.
The pretrained MobileNet I use takes a [128, 128, 3]-shaped placeholder as input, and the conv layer prior to the final dense layer has shape [1, 1, 256]. I want to splice a conv operation onto that last conv layer so that the final detection layer has the size I want (say [7, 7, 45]). How can I do that?
Should I use a conv operation with filter shape [1, 1, 256, 7*7*45], which essentially makes a dense layer, and then reshape the result to [7, 7, 45]? Is that right?
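On the shape question: over a 1x1 spatial map, a 1x1 conv is exactly a dense layer, so conv-then-reshape works. This is shape bookkeeping only, sketched in plain NumPy rather than the actual TF graph:

```python
import numpy as np

feat = np.random.randn(1, 1, 256)        # the extractor's last conv map
W = np.random.randn(256, 7 * 7 * 45)     # 1x1 conv over a 1x1 map == dense layer
out = feat.reshape(-1) @ W               # shape (2205,)
det = out.reshape(7, 7, 45)              # the desired detection grid
print(det.shape)                         # (7, 7, 45)
```

Note that this produces a 7x7 grid whose cells all share the same 1x1 receptive field; YOLO normally keeps a spatial feature map (e.g. 7x7xC) before the detection conv so each grid cell sees its own region of the image.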

Is deep learning bad at fitting simple nonlinear functions outside the training scope?
I am trying to create a simple deep-learning-based model to predict
y = x**2
But it looks like deep learning is not able to learn the general function outside the scope of its training set. Intuitively, I can see that a neural network might not be able to fit y = x**2, as there is no multiplication involved between the inputs.
Please note I am not asking how to create a model to fit
x**2
. I have already achieved that. I want to know the answers to the following questions:
1. Is my analysis correct?
2. If the answer to 1 is yes, then isn't the prediction scope of deep learning very limited?
3. Is there a better algorithm for predicting functions like y = x**2 both inside and outside the scope of the training data?
Path to complete notebook: https://github.com/krishansubudhi/MyPracticeProjects/blob/master/KerasBasicnonlinear.ipynb
training input:
x = np.random.random((10000, 1)) * 1000 - 500
y = x**2
x_train = x
training code
def getSequentialModel():
    model = Sequential()
    model.add(layers.Dense(8, kernel_regularizer=regularizers.l2(0.001), activation='relu', input_shape=(1,)))
    model.add(layers.Dense(1))
    print(model.summary())
    return model

def runmodel(model):
    model.compile(optimizer=optimizers.rmsprop(lr=0.01), loss='mse')
    from keras.callbacks import EarlyStopping
    early_stopping_monitor = EarlyStopping(patience=5)
    h = model.fit(x_train, y, validation_split=0.2, epochs=300, batch_size=32, verbose=False, callbacks=[early_stopping_monitor])

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_18 (Dense)             (None, 8)                 16
_________________________________________________________________
dense_19 (Dense)             (None, 1)                 9
=================================================================
Total params: 25
Trainable params: 25
Non-trainable params: 0
_________________________________________________________________
Evaluation on random test set
Deep learning in this example is not good at predicting a simple nonlinear function outside the training range, but it is good at predicting values within the sample space of the training data.
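On question 3, one standard workaround (not the notebook's Keras model): give the model a basis that contains the target function. With x**2 supplied as an input feature, ordinary least squares recovers y = x**2 and extrapolates exactly, whereas a ReLU net fed raw x is piecewise linear and can only extrapolate linearly.

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.uniform(-500, 500, size=(10000, 1))   # same range as the training input above
y = x ** 2

# supply x**2 as a feature: plain least squares then recovers y = x**2
X = np.hstack([x, x ** 2])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

x_test = np.array([[2000.0]])                 # far outside the training range
pred = np.hstack([x_test, x_test ** 2]) @ w
print(pred)                                   # ~4,000,000 = 2000**2
```

The catch, of course, is that this assumes you already know the right basis; learning it from raw x alone is exactly what the experiment shows to be hard.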

Purpose of 'tf.get_default_session()'
This question is about running sessions in TensorFlow.
I am confused about what purpose tf.get_default_session().run(...) serves over tf.Session().run(...). Can't all cases of tf.get_default_session() be replaced with tf.Session()?
How to predict a label in MultiClass classification model in pytorch?
I am currently working on my mini-project, where I predict movie genres based on their posters. In the dataset I have, each movie can have from 1 to 3 genres, so each instance can belong to multiple classes. I have a total of 15 classes (15 genres). So now I am facing the problem of how to make predictions for this particular problem using pytorch.
In the pytorch CIFAR-10 tutorial, each instance can have only one class (for example, if an image is a car it belongs to the class of cars) and there are 10 classes in total. In that case, model training is defined in the following way (copying a code snippet from the pytorch website):
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
Question 1 (for the training part): what would you suggest using as a loss function? I was thinking about BCEWithLogitsLoss(), but I am not sure how good it will be.
Then the prediction accuracy on the test set is defined in the following way. For the entire network:
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
and for each class:
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
where the output is as follows:
Accuracy of plane : 36 %
Accuracy of car : 40 %
Accuracy of bird : 30 %
Accuracy of cat : 19 %
Accuracy of deer : 28 %
Accuracy of dog : 17 %
Accuracy of frog : 34 %
Accuracy of horse : 43 %
Accuracy of ship : 57 %
Accuracy of truck : 35 %
Now here is question 2: how can I determine the predictions so they would look the following way?
For example:
The Matrix (1999) ['Action: 91%', 'Drama: 25%', 'Adventure: 13%']
The Others (2001) ['Drama: 76%', 'Horror: 65%', 'Action: 41%']
Alien: Resurrection (1997) ['Horror: 67%', 'Action: 64%', 'Drama: 43%']
The Martian (2015) ['Drama: 95%', 'Adventure: 81%']
Considering that every movie does not always have 3 genres (sometimes it's 2 and sometimes 1), as I see it I should find the 3, 2 or 1 maximum values of my output list of 15 genres. So, for example, if my predicted genres are [Movie, Adventure], then some_kind_of_function(outputs) should give me the output
[1 0 0 0 0 0 0 0 0 0 0 1 0 0 0],
which I can compare afterwards with the ground truth. I don't think torch.max will work in this case, since it gives only one maximum value from the weights array. So what's the best way to implement this?
Thank you in advance, appreciate any help or suggestion:)
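A sketch of the usual multi-label inference step, in plain NumPy so the torch tensors are stood in by an array: with BCEWithLogitsLoss, the natural counterpart is a sigmoid followed by an independent per-class threshold (0.5 here, an arbitrary choice), which yields exactly the multi-hot vector asked about. The logits are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical logits for the 15 genres from the network's last linear layer
logits = np.array([2.3, -1.0, -3.2, -0.5, -2.0, -4.0, -1.5, -2.2,
                   -3.0, -0.8, -2.5, 1.7, -1.1, -3.3, -0.9])
probs = sigmoid(logits)

# multi-label: threshold each class independently instead of torch.max
predicted = (probs > 0.5).astype(int)
print(predicted)  # [1 0 0 0 0 0 0 0 0 0 0 1 0 0 0]
```

To report at most 3 genres with percentages instead, the analogous variant would sort probs and keep the top entries above some cutoff, e.g. np.argsort(probs)[-3:].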

Weight sharing for logistic regression
Is there any way to enable weight sharing in a logistic regression model in sklearn? Example: say I have a dataframe consisting of 10,000 samples with 5 features and 10 classes, trained with cross-entropy loss. The default shape of the weights would be (#classes, #features), in our case (10, 5), but I want to have one weight for every class, meaning the desired shape of the weights should be (10, 1).
How to make Java password and menu system?
Java menu system with password and the option to play an existing game... help please! (Java, Eclipse)
Invalid inputs should be handled with try/catch.
Objectives:
- Create a menu system that allows multiple players to play a choice of 3 simple games with other players.
- Keep track of the winnings of each player through multiple runs and stops of the overall program.

Establish Players & Accounts
- If there are existing players, load in their individual balance and password from previous runs of the program.
- New players can be added at will, starting with some arbitrary balance.
- New players will need to create an alphanumeric password.

Game Selection
- Ask the number of players playing ‘this’ game and which of the registered players are playing.
- For a player to play, they must successfully enter their password once during this 'execution' of the Java program. No password = no play.

Play Game
- Print winners, exchange $$.
- Each game costs a user some amount to play.
- The winner(s) divide the pot evenly. If there is no winner, the pot goes to the ‘house’ (i.e. disappears).
- Return to ‘main’ menu (or your Game Selection starting point).
- At ‘Exit’, store each player’s balance, gaming information, and password to file(s) and print balances to the screen.