Implementing knowledge space theory in Python
I want to implement knowledge space theory in Python, to create learning spaces and learning routes. Can anyone suggest a library or another way to do this?
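I'm not aware of a dedicated, widely used Python package for this, so a small hand-rolled sketch may help. The code below represents knowledge states as frozensets of items, checks the defining properties of a knowledge space (contains the empty set and the full domain, closed under union), and enumerates learning routes as chains of states that grow one item at a time. All names here are my own, not from any library.

```python
from itertools import combinations

def is_knowledge_space(domain, states):
    """True if `states` contains the empty set and the full domain
    and is closed under union (the defining KST properties)."""
    states = set(states)
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    return all((a | b) in states for a, b in combinations(states, 2))

def learning_paths(states, full, current=frozenset()):
    """Enumerate learning routes: chains of states from the empty
    state to `full`, adding exactly one item per step."""
    if current == full:
        return [[current]]
    successors = [s for s in states if current < s and len(s - current) == 1]
    paths = []
    for s in successors:
        for tail in learning_paths(states, full, s):
            paths.append([current] + tail)
    return paths

# Toy example: a domain of 3 items and a union-closed family of states.
domain = frozenset({'a', 'b', 'c'})
states = [frozenset(s) for s in
          [(), ('a',), ('b',), ('a', 'b'), ('a', 'b', 'c')]]

print(is_knowledge_space(domain, states))   # True
for path in learning_paths(states, domain):
    print(' -> '.join('{' + ','.join(sorted(s)) + '}' for s in path))
```

As far as I know, the more mature KST tooling (the `kst` and `pks` packages) lives in R, so if R is an option it may save you from reimplementing the theory.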
See also questions close to this topic

How to display the contents of a text file one line at a time on a timer, using Python on Windows?
This is the code:
def wndProc(hWnd, message, wParam, lParam):
    if message == win32con.WM_PAINT:
        hdc, paintStruct = win32gui.BeginPaint(hWnd)
        dpiScale = win32ui.GetDeviceCaps(hdc, win32con.LOGPIXELSX) / 60.0
        fontSize = 36
        # http://msdn.microsoft.com/en-us/library/windows/desktop/dd145037(v=vs.85).aspx
        lf = win32gui.LOGFONT()
        lf.lfFaceName = "Times New Roman"
        lf.lfHeight = int(round(dpiScale * fontSize))
        # lf.lfWeight = 150
        # Use non-antialiased to remove the white edges around the text.
        # lf.lfQuality = win32con.NONANTIALIASED_QUALITY
        hf = win32gui.CreateFontIndirect(lf)
        win32gui.SelectObject(hdc, hf)
        rect = win32gui.GetClientRect(hWnd)
        # http://msdn.microsoft.com/en-us/library/windows/desktop/dd162498(v=vs.85).aspx
        win32gui.DrawText(
            hdc,
            'Glory be to the Father, and to the son and to the Holy Spirit.',
            -1,
            rect,
            win32con.DT_CENTER | win32con.DT_NOCLIP | win32con.DT_VCENTER)
        win32gui.EndPaint(hWnd, paintStruct)
        return 0
Where it says the "Glory be to the Father…" prayer, I would like that string to actually display a few different prayers on a timer. What I mean is: I want to save short prayers to a text file and have the displayed line change to a new prayer every 60 seconds, cycling through a few prayers such as the Serenity Prayer, etc.
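One way to structure this: keep the painting code as is, but draw a variable instead of the literal string, and have a 60-second Win32 timer (the `SetTimer` API and the `WM_TIMER` message) advance it and call `win32gui.InvalidateRect` to force a repaint. The file-loading and cycling logic is plain Python and can be sketched on its own; the function names below are mine, not pywin32 APIs:

```python
def load_prayers(text):
    """One prayer per line of the text file; blank lines are skipped."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def current_prayer(prayers, elapsed_seconds, interval=60):
    """Pick the prayer to display, moving to the next one every
    `interval` seconds and wrapping around at the end of the list."""
    return prayers[int(elapsed_seconds // interval) % len(prayers)]

# stand-in for open('prayers.txt').read()
prayers = load_prayers(
    "Glory be to the Father, and to the Son and to the Holy Spirit.\n"
    "God, grant me the serenity to accept the things I cannot change.\n")

print(current_prayer(prayers, 0))     # first prayer
print(current_prayer(prayers, 65))    # second prayer
print(current_prayer(prayers, 125))   # wrapped back to the first
```

In the `WM_TIMER` handler you would only need to bump an elapsed-time counter (or index) and invalidate the window; `wndProc` then draws `current_prayer(...)` in place of the hard-coded string.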

How to plot the frequency of my data per day in a histogram?
I want to plot the number of occurrences of my data per day. y represents the ids of my data; x represents the timestamps, which I convert to a time and day. But I can't make the correct plot.

import matplotlib.pyplot as plt
plt.style.use('ggplot')
import time

y = ['5914cce8fad645d1bec2e59e62823617', '1c2067e051734a1d8a75b18267ee4598',
     'db6830fffa9c4aa5b71ef6da9333f357', '672cc9d5360e4451bb7c03e3d0bd8f0d',
     'fb0f8122fffc47fea87ab2b749df173b', '558e96ca022240c7acc0e444f7663f53',
     'c3f86fd5eac348d3a44cb325f30b6139', '21dd849f895f4cf5a16845a4c1a9fbf9',
     'e3b4cd56e291467193b6d2226ee82ae7', '01346c48a8c443d1ac021efa33ca0f4e',
     '23b78b0f85be4ca799f41a5add76c12e', 'b1c036c00c2b4170a1708fd0add0dec2',
     '74737546e9c34126bcb24d34503421ca', '342991f5ec874c9d83eb9908f3e221aa',
     '4fdcd83aeb684e26b79b753c5e022a4e', 'b7fbeca9941643c49e909e71acc1eaba',
     '27c9d358a3ef4c69ba89eac16d8d3bdb', 'ef982c4ba11548a1aef12f672d7f1f00',
     'efedede29bb44c5298b18b03070df3fd', 'eb03ae1b4cde409c8d342a16a8be30d2']
x = ['1548143296750', '1548183033872', '1548346185194', '1548443373507',
     '1548446119319', '1548446239441', '1548446068267', '1548445962159',
     '1548446011209', '1548446259465', '1548446180380', '1548239985290',
     '1548240060367', '1548240045347', '1547627568993', '1548755333313',
     '1548673604016', '1548673443843', '1548673503914', '1548673563975']
date = []
for i in x:
    print(i)
    print()
    i = i[:10]
    print(i)
    readable = time.ctime(int(i))
    readable = readable[:10]
    date.append(readable)
print(date)
plt.hist(date, y)
plt.show()
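`plt.hist(date, y)` passes the list of ids as the `bins` argument, which is why the plot comes out wrong. A more direct route is to count occurrences per day and draw a bar chart. A sketch using only a subset of the timestamps (the 13-digit values are milliseconds, so dividing by 1000 is cleaner than string slicing):

```python
from collections import Counter
from datetime import datetime, timezone

x = ['1548143296750', '1548183033872', '1548346185194']  # ms timestamps (subset)

# 13-digit epoch values are milliseconds -> divide by 1000 to get seconds
days = [datetime.fromtimestamp(int(ts) / 1000, tz=timezone.utc).date()
        for ts in x]
counts = Counter(days)
print(counts)

# Plotting, with matplotlib as in the question:
# import matplotlib.pyplot as plt
# plt.bar([d.isoformat() for d in sorted(counts)],
#         [counts[d] for d in sorted(counts)])
# plt.show()
```

`Counter` gives you exactly the day-to-frequency mapping; `plt.bar` then only needs the distinct days and their counts, with no binning involved.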

mysql.connector.errors.ProgrammingError: Error in SQL Syntax
I'm using the Python MySQL connector to add data to a table by updating the row. A user enters a serial number, and then the row with the serial number is added. I keep getting a SQL syntax error and I can't figure out what it is.
query = ("UPDATE `items` SET salesInfo = %s, shippingDate = %s, warrantyExpiration = %s, item = %s, WHERE serialNum = %s")
cursor.execute(query, (info, shipDate, warranty, name, sn, ))
conn.commit()
Error:
mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'WHERE serialNum = '1B0000021A974726'' at line 1
"1B0000021A974726" is a serial number inputted by the user and it is already present in the table.
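The stray comma after `item = %s` (just before `WHERE`) is what MySQL is choking on; that is why the error message quotes the text starting at `WHERE`. Removing it should fix the statement:

```python
# Same statement with the comma before WHERE removed; the SET list
# separates assignments with commas, but must not end with one.
query = ("UPDATE `items` SET salesInfo = %s, shippingDate = %s, "
         "warrantyExpiration = %s, item = %s WHERE serialNum = %s")

# then, exactly as before:
# cursor.execute(query, (info, shipDate, warranty, name, sn))
# conn.commit()

print(query)
```

The parameter tuple and the `%s` placeholders were already correct; only the SQL text needed the change.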

Word2vec compact models
Are there any w2v models that do not require a dictionary? Everything I found in torchtext first wants to build the dictionary with build_vocab. But I have a huge body of text and would like a model that works at the level of phrases, and I did not find one.
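fastText-style models sidestep a fixed dictionary by hashing character n-grams into a fixed-size table, so any word (or phrase) gets a vector even if it was never enumerated in a vocabulary; gensim's `FastText` class exposes this. The core idea is small enough to sketch directly. All names below are mine, and the random table stands in for trained embeddings:

```python
import hashlib
import random

TABLE_SIZE = 1000   # number of hash buckets (no vocabulary needed)
DIM = 8             # embedding dimension

def ngrams(word, n=3):
    """Character n-grams of the padded word, as fastText does."""
    padded = '<' + word + '>'
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def bucket(gram):
    # stable hash -> bucket index (builtin hash() is salted per process)
    return int(hashlib.md5(gram.encode()).hexdigest(), 16) % TABLE_SIZE

def vector(word, table):
    """Average the bucket vectors of the word's character n-grams."""
    vecs = [table[bucket(g)] for g in ngrams(word)]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

# toy random table; in a real model these rows are learned weights
random.seed(0)
table = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(TABLE_SIZE)]

v = vector('unseenword', table)   # works with no vocabulary at all
print(len(v))
```

Because lookup is by hashed n-gram rather than by a `build_vocab` index, the memory cost is fixed by `TABLE_SIZE`, not by corpus size, which is the property you are after for a huge body of text.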

Supervised learning for a parcours
For my school project I have to implement a neural network for a parcours. I know it is useless, but I want the neural net to learn a simple rule:
if front right is bigger than front left -> go right, else -> go left.
I want to use supervised learning. I have 2 input neurons, 2 hidden neurons and 1 output neuron. The goal is that when the player has to go left, the output gives a number under 0.5, and when the player has to go right, the NN returns a number greater than 0.5.
Somehow I made a mistake and the NN always returns 0.5. Do you know what I did wrong and what I can do now?
This is how the parcours looks:
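An output stuck at exactly 0.5 is very often a symmetry problem: if the weights start at zero (or all equal), every neuron computes the same thing, the gradients cancel, and a sigmoid output sits at 0.5 forever. Small random initial weights break that symmetry. A minimal 2-2-1 sigmoid network trained on exactly this rule, as a sketch with my own variable names:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# training data: label 1 ("go right") iff front-right > front-left
X = rng.random((200, 2))
y = (X[:, 0] > X[:, 1]).astype(float).reshape(-1, 1)

# small random init -- all-zero init would pin the output at 0.5
W1 = rng.normal(0, 0.5, (2, 2)); b1 = np.zeros((1, 2))
W2 = rng.normal(0, 0.5, (2, 1)); b2 = np.zeros((1, 1))

lr = 1.0
for _ in range(4000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass; with cross-entropy loss the output-layer
    # error signal is simply (out - y)
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

acc = ((out > 0.5) == (y > 0.5)).mean()
print(acc)   # high accuracy: the rule is learnable by this tiny net
```

Besides initialization, a cross-entropy-style error signal (as above) avoids the saturated-sigmoid plateau that squared error can get stuck on; checking those two points against your code is where I would start.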

Categorical Variables and too many NA for ML model
We have a data set of 250 variables and 50,000 records. One variable is numeric, 248 variables are categorical, and one variable is binary (the target variable). Each categorical variable has more than 3,000 levels, and we have many NAs. Each row is the record of the diseases that a patient has suffered; that is why there are so many NAs: one patient may have suffered 100 diseases while another has suffered only one. The objective is to predict whether a patient can have a specific disease from the information about the other diseases they have suffered. How can this data set be handled in machine learning?
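One common reframing for this layout: instead of 248 categorical "disease slot" columns, where NA just means "this patient had fewer diseases", treat each patient's record as a *set* of diseases and give each distinct disease its own binary indicator column. The NAs disappear by construction and the 3,000+ levels become sparse binary features. A sketch with hypothetical data:

```python
# Each patient's record becomes the set of diseases they have had,
# so "slot 37 is NA" stops being a concept entirely.
patients = [
    {'flu', 'asthma'},
    {'asthma'},
    {'flu', 'diabetes', 'asthma'},
]

diseases = sorted(set.union(*patients))          # one feature per disease
index = {d: j for j, d in enumerate(diseases)}

# multi-hot matrix: rows = patients, columns = diseases
X = [[1 if d in p else 0 for d in diseases] for p in patients]
for row in X:
    print(row)
```

For the prediction task, the column of the target disease becomes the label and is removed from the features. At your scale (50,000 patients, thousands of diseases) a sparse matrix (e.g. `scipy.sparse.csr_matrix`) keeps this tractable, and models that handle sparse input well (logistic regression, gradient-boosted trees) are natural first choices.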

CUDA_ERROR_OUT_OF_MEMORY tensorflow
As part of my study project, I try to train a neural network which makes a segmentation on images (based on FCN), and during the execution I received the following error message:
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1,67,1066,718] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Note that I have fixed the batch_size to 1 and I get the same error even when I try different image sizes. I also used just 1 training image instead of 1600, and still the same error! Could you help me solve this problem? What is it really about?
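For scale: the single tensor named in the error message is already about 200 MB, and a network holds many such activations (plus gradients) at once, so genuinely running out of GPU memory is plausible even at batch size 1. The arithmetic:

```python
# memory of one float32 tensor of shape [1, 67, 1066, 718]
shape = (1, 67, 1066, 718)
elements = 1
for d in shape:
    elements *= d

bytes_needed = elements * 4        # float32 = 4 bytes per element
print(elements)                    # 51280996 elements
print(round(bytes_needed / 2**20, 1))   # ~195.6 MiB for this one tensor
```

If that arithmetic says the activations can't fit on your GPU, the usual levers are a smaller input resolution, fewer feature maps in the widest layers, or letting TensorFlow grow GPU memory gradually instead of pre-allocating it all.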

Per image normalization vs Overall dataset normalization
I have a dataset of 1000 images and am using a CNN for finger gesture recognition. Should I normalize each image by the mean of that image only, or by the mean of the entire dataset? Please also suggest which Python library to use for this.
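Both conventions are in use: per-image normalization standardizes each image by its own statistics (this is what e.g. TensorFlow's `tf.image.per_image_standardization` does), while dataset-level normalization computes one mean/std over the training set and reuses it unchanged at test time (the common choice, especially with pretrained models). The difference, sketched in NumPy with made-up image data:

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((10, 32, 32)) * 255.0   # 10 fake grayscale images

# per-image: each image standardized by its OWN mean/std
per_image = np.stack([(im - im.mean()) / im.std() for im in images])

# dataset-level: one mean/std from the whole training set,
# reused unchanged for validation/test images
mu, sigma = images.mean(), images.std()
per_dataset = (images - mu) / sigma

print(per_image[0].mean().round(6))   # ~0 for every single image
print(per_dataset.mean().round(6))    # ~0 only over the whole set
```

Per-image normalization removes per-image brightness/contrast differences (useful when lighting varies a lot); dataset statistics preserve those differences as signal. If you use Keras, `ImageDataGenerator` exposes both as `samplewise_center`/`samplewise_std_normalization` and `featurewise_center`/`featurewise_std_normalization`.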

Softmax not resulting in a probability distribution in Python Implementation
I have a simple softmax implementation:
softmax = np.exp(x) / np.sum(np.exp(x), axis=0)
For x set as array here: https://justpaste.it/6wis7
You can load it as:
import numpy as np
x = np.as (just copy and paste the content, starting from "array")
I get:
softmax.mean(axis=0).shape
(100,)
# now all elements must be 1.0 here, since it's a probability
softmax.mean(axis=0)
# all elements are not 1
array([0.05263158, 0.05263158, 0.05263158, ..., 0.05263158, 0.05263158,
       0.05263158])
Why is this implementation wrong? How to fix it?
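The implementation itself looks fine; the check is the problem. Each column of a softmax should *sum* to 1, not average to 1: with 19 rows, `softmax.mean(axis=0)` of a correct softmax is exactly 1/19 ≈ 0.05263158, which is precisely the value you are seeing. Check `.sum(axis=0)` instead, and subtract the per-column max before exponentiating so large inputs don't overflow. A self-contained version with random data in place of your pasted array:

```python
import numpy as np

def softmax(x, axis=0):
    # subtract the max for numerical stability (np.exp overflows otherwise)
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

x = np.random.default_rng(0).normal(size=(19, 100))
s = softmax(x, axis=0)

print(np.allclose(s.sum(axis=0), 1.0))    # columns sum to 1: a distribution
print(np.allclose(s.mean(axis=0), 1/19))  # the mean is 1/19 -- what you saw
```

So nothing was wrong with `np.exp(x) / np.sum(np.exp(x), axis=0)` for your data; the mean of 19 values that sum to 1 simply cannot be 1.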

Covariance Matrix Construction in MultiAction Reinforcement Learning
The basic question is really about mathematics and implementation, so the reinforcement learning part below can be skipped; it is just FYI.
I am working on a reinforcement learning agent with a continuous action space in the OpenAI Gym environment "CarRacing-v0", using the A3C algorithm.
For an agent dealing with one continuous action, you let the policy network compute a mean and a variance, where the mean lies in your desired action range (e.g. tanh activation) and the variance is positive (e.g. ReLU activation), and then you sample from a normal distribution with these parameters. So far so good.
For more than one continuous action, you can either sample each action separately from its own mean and variance (which works for now, but not perfectly), or you let the policy network output a mean vector and a covariance matrix and sample all actions at once from a multivariate distribution.
Since the multivariate version makes more sense (you never decide how much gas to apply without having in mind how hard you steer left or right, and the other way around), I tried to implement it in my code, but it crashes as soon as the network gives me a covariance matrix that is not positive definite.
My covariance matrix (for a 2-D random vector) is constructed from 3 output neurons of the neural network; call them A, B and C.
It is given by: COV = [[A, B], [B, C]]
The basic question is: how do I have to constrain A, B and C to ensure that the covariance matrix is positive definite, which is mandatory for sampling from the distribution? Since the neural network just outputs arbitrary values at the beginning, I have to limit A, B and C in advance to avoid errors when sampling.
I found some posts about constructing arbitrary p.d. matrices, but I don't know whether they affect backpropagation during learning (e.g. let COV = M*M^T, where M is a random matrix that the neural network could output).
I hope someone has encountered the same problem in the past, and I will be thankful for any hint.
Best regards
Hendrik
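A standard construction avoids the problem entirely: let the three raw outputs parameterize a lower-triangular Cholesky factor L whose diagonal is forced strictly positive (e.g. via exp or softplus), and build COV = L @ L.T. That matrix is positive definite for *any* raw outputs, and since it is composed only of differentiable ops, backpropagation flows through it normally; your COV = M·M^T idea is the same trick in its general form. A NumPy sketch for the 2-D case (the function name is mine):

```python
import numpy as np

def covariance_from_raw(a, b, c):
    """COV = L @ L.T with L lower-triangular; exp() keeps the diagonal
    strictly positive, so COV is positive definite for ANY a, b, c."""
    L = np.array([[np.exp(a), 0.0],
                  [b,         np.exp(c)]])
    return L @ L.T

# even "bad" raw network outputs yield a valid covariance matrix
cov = covariance_from_raw(-3.2, 5.0, -1.7)
print(np.all(np.linalg.eigvalsh(cov) > 0))   # positive definite
```

So instead of interpreting A, B, C directly as COV entries, interpret them as the entries of L; no clamping or error handling at sampling time is needed, and the gradient with respect to a, b, c is well defined.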

How can we prove that artificial intelligence can help tackle climate change in the practical field?
In my university research, I have to prove practically that AI can tackle climate change. But how can I prove it practically?

Error when checking target: expected dense_8 to have shape (2,) but got array with shape (1,)
I am new to transfer learning. I am working on image classification with 2 categories, using InceptionV3. My data is in .jpg format, and the folder structure is shown below. Since I have 2 categories I used "binary_crossentropy", but I am facing issues.
Parentfolder/train/categorie1
Parentfolder/train/categorie2
Parentfolder/test/categorie1
Parentfolder/test/categorie2

from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing import image
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)

# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
x = Dense(512, activation='relu')(x)
x = Dense(32, activation='relu')(x)
# and a logistic layer -- we have 2 classes
predictions = Dense(2, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)

for layer in base_model.layers:
    layer.trainable = False

# we chose to train the top 2 inception blocks, i.e. we will freeze
# the first 249 layers and unfreeze the rest:
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory(
    'C:/Users/Desktop/Transfer/train/',
    target_size=(64, 64),
    batch_size=5,
    class_mode='binary')
test_set = test_datagen.flow_from_directory(
    'C:/Users/Desktop/Transfer/test/',
    target_size=(64, 64),
    batch_size=5,
    class_mode='binary')

model.fit_generator(
    training_set,
    steps_per_epoch=1000,
    epochs=10,
    validation_data=test_set,
    validation_steps=100)
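The mismatch in the error message comes from the labels: `class_mode='binary'` makes the generator yield one scalar 0/1 label per image, i.e. shape `(batch,)` per sample reported as `(1,)`, while the final `Dense(2, activation='softmax')` expects one-hot targets of shape `(2,)`. Either switch both generators to `class_mode='categorical'`, or replace the last layer with `Dense(1, activation='sigmoid')` and keep `binary_crossentropy`. The shape logic, demonstrated without Keras:

```python
import numpy as np

labels = np.array([0, 1, 1, 0, 1])   # what class_mode='binary' yields

print(labels.shape)                  # (5,)  -> matches Dense(1, sigmoid)

# what class_mode='categorical' yields instead: one-hot rows
# (same result as keras.utils.to_categorical(labels, 2))
one_hot = np.eye(2)[labels]
print(one_hot.shape)                 # (5, 2) -> matches Dense(2, softmax)
```

Whichever option you pick, the loss should agree with it: `categorical_crossentropy` with the 2-unit softmax head, or `binary_crossentropy` with the 1-unit sigmoid head.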

How to find python lib directory?
I am trying to implement the DeepMimic paper but am stuck at the setup. How do I fill in the last part of the Makefile, the Python lib directory?
Modify the Makefile in DeepMimicCore/ by specifying the following,
EIGEN_DIR: Eigen include directory
BULLET_INC_DIR: Bullet source directory
PYTHON_INC: python include directory
PYTHON_LIB: python lib directory
https://github.com/xbpeng/DeepMimic
Thanks a lot in advance!
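Python can report those directories itself via the standard-library sysconfig module; the two values below are what typically goes into PYTHON_INC and PYTHON_LIB in that Makefile:

```python
import sysconfig

# header directory (Python.h lives here) -> PYTHON_INC
python_inc = sysconfig.get_paths()['include']

# library directory (libpythonX.Y lives here on Unix) -> PYTHON_LIB
# may be None on platforms that don't define it
python_lib = sysconfig.get_config_var('LIBDIR')

print(python_inc)
print(python_lib)
```

On most Unix installs the command-line equivalent is `python3-config --includes --ldflags`; make sure you query the same interpreter you intend to link DeepMimicCore against.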

tensorflow, reduce_sum leads to Inf or Nan
I'm running a reinforcement learning algorithm, ACER. The implementation is based on OpenAI Baselines, and I added some components on top of it. The code which leads to the error is as follows:
grads = tf.gradients(loss, params)
grads, norm_grads = tf.clip_by_global_norm(grads, max_grad_norm)
After running for a few million timesteps, an exception was thrown that says "Found Inf or NaN global norm." at tf.clip_by_global_norm.
Then I dug into this clip function, applied numerics.verify_tensor_all_finite to almost every intermediate element, and finally located the error: it is caused by math_ops.reduce_sum in clip_ops.global_norm:

half_squared_norms = []
for v in grads:
    half_squared_norms.append(gen_nn_ops.l2_loss(v))
half_squared_norm = math_ops.reduce_sum(array_ops.stack(half_squared_norms))

The numerics.verify_tensor_all_finite check failed at half_squared_norm. So my questions are:
1. How can I tell whether it is Inf or NaN?
2. Does the result of reduce_sum exceed some numerical boundary? If so, what is that boundary in TensorFlow? Is it caused by an improper dtype? And what if the biggest dtype is not enough?
3. How should I deal with this kind of exception?
Thank you!
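On question 2: float32 (TensorFlow's default) tops out around 3.4e38, and l2_loss squares every element, so a single gradient entry beyond roughly 1.8e19 already pushes the sum of squares to Inf before reduce_sum ever runs; NaN then typically appears when an Inf is combined with another Inf or used in an invalid operation. The boundary and the Inf/NaN distinction (which verify_tensor_all_finite lumps together) can be seen directly in NumPy:

```python
import numpy as np

print(np.finfo(np.float32).max)        # ~3.4028235e+38: the float32 ceiling

with np.errstate(over='ignore', invalid='ignore'):
    big = np.float32(2e19)
    inf_val = big * big                # squaring overflows float32 -> inf
    nan_val = inf_val - inf_val        # inf combined with inf -> nan

print(inf_val, nan_val)
print(np.isinf(inf_val), np.isnan(nan_val))   # how to tell the two apart
```

For question 1, the analogous TensorFlow checks are tf.is_inf and tf.is_nan on the suspect tensor. For question 3, the usual remedies are keeping gradients from exploding in the first place (lower learning rate, clipping by value earlier in the pipeline) or computing the norm in float64; simply hoping a bigger dtype absorbs an exploding gradient only delays the overflow.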