When to use numpy.random.randn(...) and when numpy.random.rand(...)?
In my deep learning exercise I had to initialize a parameter D1 with the same size as A1, so what I did was:
D1 = np.random.randn(A1.shape[0],A1.shape[1])
But after computing further equations, when I checked the results they didn't match. After properly reading the doc I discovered that it said to initialize D1 using rand instead of randn:
D1 = np.random.rand(A1.shape[0],A1.shape[1])
But it didn't specify the reason, as the code works in both cases (there was a doc for that exercise, so I figured out the error). So how, when, and why should I choose between these two?
1 answer

The difference between rand and randn is (besides the letter n) that rand returns random numbers sampled from a uniform distribution over the interval [0, 1), while randn instead samples from a normal (a.k.a. Gaussian) distribution with a mean of 0 and a variance of 1.

In other words, the distribution of the random numbers produced by rand looks like this:

(plot of the uniform distribution over [0, 1) omitted)

In a uniform distribution, all the random values are restricted to a specific interval, and are evenly distributed over that interval. If you generate, say, 10000 random numbers with rand, you'll find that about 1000 of them will be between 0 and 0.1, around 1000 will be between 0.1 and 0.2, around 1000 will be between 0.2 and 0.3, and so on. And all of them will be between 0 and 1; you won't ever get any outside that range.

Meanwhile, the distribution for randn looks like this:

(plot of the standard normal distribution omitted)

The first obvious difference between the uniform and the normal distributions is that the normal distribution has no upper or lower limits: if you generate enough random numbers with randn, you'll eventually get one that's as big or as small as you like (well, subject to the limitations of the floating-point format used to store the numbers, anyway). But most of the numbers you'll get will still be fairly close to zero, because the normal distribution is not flat: the output of randn is a lot more likely to fall between, say, 0 and 0.1 than between 0.9 and 1, whereas for rand both of these are equally likely. In fact, about 68% of all randn outputs fall between -1 and +1, while 95% fall between -2 and +2, and about 99.7% fall between -3 and +3.

These are completely different probability distributions. If you switch one for the other, things are almost certainly going to break. If the code doesn't simply crash, you're almost certainly going to get incorrect and/or nonsensical results.
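A quick way to see the difference empirically (a small sketch, not part of the original answer):

```python
import numpy as np

uniform_samples = np.random.rand(100000)   # uniform over [0, 1)
normal_samples = np.random.randn(100000)   # normal with mean 0, variance 1

# rand never leaves [0, 1); randn has no hard bounds and produces negatives
print(uniform_samples.min() >= 0.0 and uniform_samples.max() < 1.0)  # True
print(normal_samples.min() < 0.0 < normal_samples.max())             # True

# about 68% of randn's output lies within one standard deviation of 0,
# while rand puts only about 10% of its output in any interval of width 0.1
print(round(np.mean(np.abs(normal_samples) <= 1.0), 2))
```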
See also questions close to this topic

Differentiating between xpaths
I was wondering how to write XPaths that distinguish between these two HTML buttons.
//button[@class="searchresults__paginationnextbutton"] matches both of them.
<button class="searchresults__paginationnextbutton" type="button" dataemberaction="" dataemberaction5268="5268">
  <span class="valignmiddle"> Next </span>
  <liicon ariahidden="true" type="chevronrighticon" class="valignmiddle" size="small">
    <svg viewBox="0 0 24 24" width="24px" height="24px" x="0" y="0" preserveAspectRatio="xMinYMin meet" class="artdecoicon" focusable="false">
      <path d="M9,8L5,2.07,6.54,1l4.2,6.15a1.5,1.5,0,0,1,0,1.69L6.54,15,5,13.93Z" class="smallicon" style="fillopacity: 1"></path>
    </svg>
  </liicon>
</button>

<button disabled="" class="searchresults__paginationnextbutton" type="button" dataemberaction="" dataemberaction4958="4958">
  <span class="valignmiddle"> Next </span>
  <liicon ariahidden="true" type="chevronrighticon" class="valignmiddle" size="small">
    <svg viewBox="0 0 24 24" width="24px" height="24px" x="0" y="0" preserveAspectRatio="xMinYMin meet" class="artdecoicon" focusable="false">
      <path d="M9,8L5,2.07,6.54,1l4.2,6.15a1.5,1.5,0,0,1,0,1.69L6.54,15,5,13.93Z" class="smallicon" style="fillopacity: 1"></path>
    </svg>
  </liicon>
</button>
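One visible difference is that only the second button carries the disabled attribute, so in XPath 1.0 you could try //button[@class="searchresults__paginationnextbutton" and not(@disabled)]. A minimal sketch of the same filter with Python's standard-library ElementTree (the HTML is trimmed to the relevant attributes):

```python
import xml.etree.ElementTree as ET

# trimmed version of the two buttons from the question
html = """<div>
  <button class="searchresults__paginationnextbutton" type="button">
    <span>Next</span>
  </button>
  <button disabled="" class="searchresults__paginationnextbutton" type="button">
    <span>Next</span>
  </button>
</div>"""

root = ET.fromstring(html)
buttons = root.findall(".//button[@class='searchresults__paginationnextbutton']")
# keep only the button that does NOT have a disabled attribute
enabled = [b for b in buttons if b.get('disabled') is None]
print(len(buttons), len(enabled))  # 2 1
```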

How to create table from model classes from another directory in sqlalchemy?
This is my Flask project's directory structure:

src/models
  > UserModel.py
  > PassengerModel.py
src/run.py

and the run.py file contains the database connection object. How can I create tables from the model classes inside the models directory?
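A common pattern (a sketch, not from the question; the shared base module, model class, and table name are all made up for illustration) is to define one declarative Base in the models package, have every model module subclass it, then import those classes in run.py before calling Base.metadata.create_all on the engine:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

# src/models/base.py (hypothetical): one shared Base for every model module
Base = declarative_base()

# src/models/UserModel.py (hypothetical) would subclass that same Base
class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

# src/run.py: importing the model classes registers their tables on
# Base.metadata, so create_all can create every table on the engine
engine = create_engine('sqlite://')  # in-memory database for the sketch
Base.metadata.create_all(engine)
print(sorted(Base.metadata.tables))  # ['users']
```

The key point is that create_all only knows about tables whose model classes have been imported, which is why run.py must import the models package first.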

Why can't print be assigned to foo in python 2?
Why does the Python 2 code below throw an error on assigning print to foo, whereas Python 3 doesn't?
Python 2
>>> foo = print
  File "<stdin>", line 1
    foo = print
          ^
SyntaxError: invalid syntax
Python 3
>>> foo = print
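In Python 2, print is a statement (a piece of syntax, like if or while), not an object, so it cannot appear on the right-hand side of an assignment; in Python 3 it is an ordinary built-in function. A sketch of the usual workaround, which turns print into a function in Python 2 as well (and is a no-op on Python 3):

```python
# must appear before other code at the top of the module on Python 2
from __future__ import print_function

foo = print           # now valid: print is a function object
foo("hello")          # prints: hello
print(callable(foo))  # True
```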

Python code to ID and skip empty rasters failing with numpy array syntax
I have hundreds of clipped rasters that I need to reclassify using Python 2.7.13. When I test approximately 12 of them (including four empty rasters), the script fails on the empty rasters due to no data.
I have tried to skip the empty rasters with both arcpy's get raster properties and numpy array syntax, which I found here: https://gis.stackexchange.com/questions/208519/skipemptyrastersinarcgis
arcpy.env.workspace = work_dir
rasters = arcpy.ListRasters("*", "tif")
for file in rasters:
    filename, ext = os.path.splitext(file)
    yr_mo = filename[10:17]
    pattern = '*clip*'
    reclass_name = 'Burn_Scar_' + yr_mo + '_' + 'reclass' + '.tif'

    ## Testing with numpy unique array
    array = arcpy.RasterToNumPyArray(file)
    values = numpy.unique(array)
    if file.endswith('.tif') and fnmatch.fnmatch(file, pattern):
        if values > 1:
            print values

        ## Testing with arcpy get raster properties
        file_results = arcpy.GetRasterProperties_management(file, property_type="MAXIMUM")
        file = file_results.getOutput(0)
        if file_results > 1:
            print file_results
        else:
            outReclass2 = Reclassify(file, "Value", RemapRange([[2, 0, "NODATA"]]))
            outReclass2.save(reclass_name)
            print(reclass_name)
            print ('skipping....' + file + 'raster is empty')
The arcpy code kept printing all maximum values, not just the ones greater than 1.
The numpy.unique(array) version errors with ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all(). I'm confused about what a.any or a.all means and why it was not needed in the other question's syntax.
Any other easy ways to skip over empty rasters and only process those with data are appreciated! Thanks!
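About the ValueError: numpy.unique returns an array, so values > 1 is an elementwise comparison that yields an array of booleans, and Python cannot decide whether such an array as a whole is true or false. .any() collapses it to True if at least one element is True; .all() requires every element to be True. A minimal sketch with plain NumPy (no arcpy; the raster arrays are made up):

```python
import numpy as np

empty_raster = np.zeros((3, 3))           # only zeros: nothing to reclassify
data_raster = np.array([[0, 2], [3, 0]])  # contains values above 1

for name, array in [("empty", empty_raster), ("data", data_raster)]:
    values = np.unique(array)
    # (values > 1) is a boolean ARRAY; .any() asks "is any unique value > 1?"
    if (values > 1).any():
        print(name, "has data - reclassify")
    else:
        print(name, "is empty - skipping")
```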

Efficient way of finding values that are within 1% of the standard deviation of each other
I have a list of values, and I want to delete every value that is within one percent of the standard deviation of another value.
I currently have two nested for loops. I was wondering if there is a more efficient way of doing it, perhaps using numpy and vectorization? My current, inefficient code:
# 1% of the std
std_range = np.std(values) / 100

# if a value is within 1% of the std of another value, set it to None
for idx, val in enumerate(values):
    for idx2, val2 in enumerate(values):
        if val is None or val2 is None:
            continue
        elif (val2 - std_range <= val <= val2 + std_range) and idx != idx2:
            values[idx] = None

# delete None values
values = list(filter(None, values))
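A vectorized sketch using broadcasting (the sample values are made up, and unlike the loop above, which mutates the list as it scans, this drops every value that has any other value within the range, so edge cases may differ):

```python
import numpy as np

values = np.array([10.0, 10.05, 25.0, 40.0, 40.2, 80.0])
std_range = np.std(values) / 100   # 1% of the standard deviation

# pairwise absolute differences: diff[i, j] = |values[i] - values[j]|
diff = np.abs(values[:, None] - values[None, :])
np.fill_diagonal(diff, np.inf)     # ignore each value's distance to itself

# keep only the values with no other value closer than std_range
keep = (diff > std_range).all(axis=1)
filtered = values[keep]
print(filtered)  # the two close pairs are dropped, leaving [25. 80.]
```

This builds an n-by-n difference matrix, so it trades memory for speed; for very long lists, sorting the values first and comparing neighbors would be cheaper.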

XGBoost giving slightly different predictions for list vs array, which is correct?
I noticed I was passing a double-bracketed list of test feature values:

print(test_feats)
>> [[23.0, 3.0, 35.0, 0.28, 3.0, 18.0, 0.0, 0.0, 0.0, 3.33, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 39.0, 36.0, 113.0, 76.0, 0.0, 0.0, 1.0, 0.34, 999.0, 999.0, 999.0, 999.0, 999.0, 999.0, 999.0, 999.0, 0.0, 25.0, 48.0, 48.0, 0.0, 29.0, 52.0, 53.0, 99.0, 368.0, 676.0, 691.0, 4.0, 9.0, 12.0, 13.0]]
I noticed that when I pass this to XGBoost for prediction, it returns a different result than when I convert it to an array:

array_test_feats = np.array(test_feats)
print(regr.predict_proba(test_feats)[:, 1][0])
print(regr.predict_proba(array_test_feats)[:, 1][0])
>> 0.46929297
>> 0.5161868
Some basic checks suggest the values are the same:

print(sum(test_feats[0]) == array_test_feats.sum())
print(test_feats == array_test_feats)
>> True
>> array([[ True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True]])
I am guessing the array is the way to go, but I really don't know how to tell. The predictions are close enough that the difference could easily slip by, so I would really like to understand why this is happening.
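One thing worth ruling out first (a guess, not a diagnosis of the question's library internals): whether the two inputs end up with the same floating-point precision once converted. Libraries often cast inputs to float32 internally, and a float32 round-trip does not reproduce float64 values exactly, which is the kind of silent difference that can nudge a prediction. A sketch with a shortened stand-in row:

```python
import numpy as np

test_feats = [[23.0, 3.0, 0.28, 999.0]]          # shortened stand-in row

as_f64 = np.array(test_feats)                    # NumPy's default dtype is float64
as_f32 = np.array(test_feats, dtype=np.float32)  # the precision many libraries use internally

print(as_f64.dtype, as_f32.dtype)                # float64 float32
# 0.28 has no exact float32 representation, so the round-trip changes it
print(np.array_equal(as_f64, as_f32.astype(np.float64)))  # False
```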

Binary classification using neural network in tensorflow failing to learn
I am trying to implement a neural network for the heart dataset from Kaggle:
https://www.kaggle.com/ronitf/heartdiseaseuci
Even though I am training the network on the entire dataset, it doesn't seem to learn at all, with the accuracy staying around 50%. Even if it were overfitting, a neural network should be able to reach a higher in-sample accuracy. Can anyone see any issues with this code? (I've tried many learning rates, batch sizes, and epoch counts, and nothing seems to work.)
As I understand it, for binary classification it is better to have 1 output to classify. I can already make this work using two outputs, so please don't give answers suggesting that.
#need to switch from the binary output to the singular value output
import pandas
import numpy as np
import tensorflow as tf
import random
from sklearn.preprocessing import normalize

#pulling in data from excel
sheet = pandas.read_excel("./heart.xlsx")
sheet2 = sheet.values

#getting my data arrays
X_data, y_data = np.split(sheet2, [13], axis=1)
y_data = np.reshape(y_data, [-1])

#need to learn more about why this works
X_data = normalize(X_data, axis=0, norm='max')

#random shuffle
ind_list = [i for i in range(len(X_data))]
random.shuffle(ind_list)
X_data = X_data[ind_list]
y_data = y_data[ind_list]

#initialising values
epochs = 1000
learning_rate = 0.001
batch_size = 50
n_inputs = 13  # number of input features

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")

#creating the structure of the neural network
with tf.name_scope("dnn"):
    input = tf.layers.dense(X, 13, name="hidden1", activation=tf.nn.relu)
    input = tf.layers.dense(input, 7, name="hidden2", activation=tf.nn.relu)
    logits = tf.layers.dense(input, 1, name="outputs")

with tf.name_scope("loss"):
    entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.cast(y, tf.float32), logits=logits)
    loss = tf.reduce_mean(entropy, name="loss")

with tf.name_scope("train"):
    optimizer = tf.train.AdamOptimizer(learning_rate)
    training_operation = optimizer.minimize(loss)

with tf.name_scope("accuracy"):
    predicted = tf.nn.sigmoid(logits)
    correct_pred = tf.equal(tf.round(predicted), tf.cast(y, tf.float32))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

#defining functions
def shuffle_batch(X, y, batch_size):
    rnd_idx = np.random.permutation(len(X))
    n_batches = len(X) // batch_size
    for batch_idx in np.array_split(rnd_idx, n_batches):
        X_batch, y_batch = X[batch_idx], y[batch_idx]
        yield X_batch, y_batch

#initialising the graph and running the optimisation
init = tf.global_variables_initializer()
save = tf.train.Saver()
with tf.Session() as sess:
    init.run()
    for epoch in range(epochs):
        for X_batch, y_batch in shuffle_batch(X_data, y_data, batch_size):
            sess.run(training_operation, feed_dict={X: X_batch, y: y_batch})
        acc_train = accuracy.eval(feed_dict={X: X_data, y: y_data})
        print("Epoch", epoch, "training accuracy", acc_train)
output (condensed; the training accuracy only creeps from about 0.509 to about 0.543 over 248 epochs)

2019-09-15 07:52:04.662488: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Epoch 0 training accuracy 0.5089697
Epoch 1 training accuracy 0.5089697
Epoch 2 training accuracy 0.5089697
Epoch 3 training accuracy 0.50838155
...
Epoch 245 training accuracy 0.543084
Epoch 246 training accuracy 0.543084
Epoch 247 training accuracy 0.543084
Epoch 248 training accuracy 0.543084

Incremental learning with a built-in SageMaker algorithm
I am training the DeepAR AWS SageMaker built-in algorithm. With the SageMaker SDK, I can train the model with particular specified hyperparameters:
estimator = sagemaker.estimator.Estimator(
    sagemaker_session=sagemaker_session,
    image_name=image_name,
    role=role,
    train_instance_count=1,
    train_instance_type='ml.c4.2xlarge',
    base_job_name='wfpdeepar',
    output_path=join(s3_path, 'output')
)
estimator.set_hyperparameters(**{
    'time_freq': 'M',
    'epochs': '50',
    'mini_batch_size': '96',
    'learning_rate': '1E-3',
    'context_length': '12',
    'dropout_rate': 0,
    'prediction_length': '12'
})
estimator.fit(inputs=data_channels, wait=True, job_name='wfpdeeparjoblevel5')
I would like to train the resulting model again with a smaller learning rate. I followed the incremental training method described here: https://docs.aws.amazon.com/en_pv/sagemaker/latest/dg/incrementaltraining.html, but it does not work; apparently (according to the link), only two built-in models support incremental training.
Has anyone found a workaround for this so that they can train a built-in algorithm with a scheduled learning rate?

Cases without Paper or Code on the PapersWithCode Site
I am currently looking for the current SOTA algorithm for understanding human questions.
I found the SQuAD dataset and its corresponding current SOTA as introduced on PapersWithCode.
However, the paper and code links are missing for the current 1st-ranked method, XLNet + DAAF + Verifier (ensemble). I can't understand how this website ranked as 1st a method that is not reproducible by community members of A.I.
Thus I tried to join the PapersWithCode Slack, but the link is currently blocked.
Does this website organize private collaborators who verify reproducibility, or is there any context I am missing?