Implementing Knowledge space theory in Python
I want to implement knowledge space theory in Python, creating learning spaces and learning routes. Can anyone suggest a library, or another way to do this?
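For readers unfamiliar with the setup: a knowledge space is a family of subsets of items (knowledge states) closed under union, and learning routes are chains of states that grow by one item at a time. Dedicated tooling mostly lives in R (e.g. the `kst` package), but absent a Python library, a minimal pure-Python sketch of the two core operations might look like this (the items and states below are made up for illustration):

```python
# Hypothetical example: items a-c and a hand-picked family of knowledge states.
items = {"a", "b", "c"}
states = [frozenset(), frozenset("a"), frozenset("ab"), frozenset("ac"),
          frozenset("abc")]

def is_union_closed(states):
    """A knowledge space must contain the union of any two of its states."""
    s = set(states)
    return all((x | y) in s for x in s for y in s)

def learning_routes(states, full):
    """Enumerate learning routes: chains from the empty state to the full
    state that grow by exactly one item per step, staying inside the space."""
    s = set(states)
    def extend(path):
        last = path[-1]
        if last == full:
            yield path
            return
        for item in full - last:
            nxt = frozenset(last | {item})
            if nxt in s:
                yield from extend(path + [nxt])
    yield from extend([frozenset()])

print(is_union_closed(states))  # True for this family
for route in learning_routes(states, frozenset(items)):
    print([sorted(st) for st in route])
```

For this toy space there are exactly two learning routes, both passing through state {a} first; real applications would build the state family from a surmise relation or response data rather than by hand.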
See also questions close to this topic
Trim spaces and concatenate lists in Python
I'm new to Python and I was trying to get a final list by eliminating the spaces and merging the lists. Let's say I have two lists:

list1 = ['string1,1, 2', 'string2,2,3', 'string3,3,4']
list2 = ['string1 , 5, 6', 'string2 , 6, 7', 'string3, 8, 9']
My final list should be like below by eliminating the spaces before the elements in list2 and concatenating with list1.
list = ['string1,1,2,5,6','string2,2,3,6,7','string3,3,4,8,9']
Is there any way to achieve this? I tried something like the below, but it didn't work:

list2 = [x for x in list2 if x.strip()]
list = list1 + list2
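One way to get the desired result (a sketch, assuming the two lists are parallel and each element's first comma-separated field is the shared key) is to strip whitespace from every field and join each pair element-wise, dropping the repeated key from the second list:

```python
list1 = ['string1,1, 2', 'string2,2,3', 'string3,3,4']
list2 = ['string1 , 5, 6', 'string2 , 6, 7', 'string3, 8, 9']

def clean(s):
    # split on commas and remove whitespace around every field
    return [field.strip() for field in s.split(',')]

merged = [','.join(clean(a) + clean(b)[1:])  # clean(b)[1:] drops the duplicate key
          for a, b in zip(list1, list2)]
print(merged)
# ['string1,1,2,5,6', 'string2,2,3,6,7', 'string3,3,4,8,9']
```

Note that the original attempt only filtered out fully blank strings (`if x.strip()` keeps any non-empty element unchanged), and `list1 + list2` concatenates the lists end to end rather than element-wise; also, shadowing the built-in name `list` is best avoided.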
Vectorizing a function on a list of pandas dataframes
I read an excel file and save each tab as a pandas dataframe.
import pandas as pd

xla = pd.ExcelFile("file_name.xlsx")
kl = xla.sheet_names
hf_list = []
for i in range(len(kl)):
    hf_list.append(pd.read_excel(xla, i, index_col=0))
I intend to compute rank of each dataframe in the list so have written the following code.
def score_card(raw_list):
    score_list = []
    for i in range(len(raw_list)):
        score_list.append(raw_list[i].rank(axis=1))
    return score_list

score_list = score_card(hf_list)
I was wondering if there is a way to vectorize the code and avoid for loop(s) in the score_card function (and also reading the excel file). Thanks in advance for your time.
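As a sketch of how both loops can be tightened (the file name is a placeholder, and the in-memory frames below merely stand in for the real sheets): `pd.read_excel` accepts `sheet_name=None` to read every sheet at once into a dict, and the scoring loop collapses to a list comprehension. `.rank(axis=1)` is already vectorized within each frame, so the remaining loop is only over the (usually few) sheets:

```python
import pandas as pd

# Reading all sheets at once: sheet_name=None returns {sheet_name: DataFrame}.
# (Commented out here because "file_name.xlsx" is a placeholder.)
# sheets = pd.read_excel("file_name.xlsx", sheet_name=None, index_col=0)
# hf_list = list(sheets.values())

# Two small in-memory frames standing in for the sheets, for illustration:
hf_list = [pd.DataFrame({"x": [1, 3], "y": [2, 1]}),
           pd.DataFrame({"x": [5, 0], "y": [4, 9]})]

# score_card as a comprehension; each .rank(axis=1) ranks within each row.
score_list = [df.rank(axis=1) for df in hf_list]
print(score_list[0])
```

If the frames all share the same shape and columns, another option is to concatenate them with `pd.concat(hf_list, keys=range(len(hf_list)))` and rank once over the combined frame.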
Dump form data to a database using Python
I have an HTML form with a username and password.
How do I connect to a database, get the values from the form, and store them in the database using Python?
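A minimal sketch of the storage half, assuming the form values have already been parsed into a dict (in a real app a web framework such as Flask or Django would populate this from the POST request), using the standard-library sqlite3 module:

```python
import sqlite3

# Stand-in for request.form in a real web framework.
form = {"username": "alice", "password": "s3cret"}

conn = sqlite3.connect(":memory:")  # use a file path for a persistent database
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")

# Always use parameterized queries (the ? placeholders), never string
# formatting, to avoid SQL injection. Note also that real applications
# must store a password hash, never the plain-text password.
conn.execute("INSERT INTO users (username, password) VALUES (?, ?)",
             (form["username"], form["password"]))
conn.commit()

row = conn.execute("SELECT username FROM users").fetchone()
print(row)  # ('alice',)
```

The same pattern works with other databases by swapping the driver (e.g. psycopg2 for PostgreSQL), since they share the DB-API interface.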
I want to build a project that can detect fraud clicks on ads. What approach should I use, and which technologies?
I want to build a project for my final year: software that can detect fraud clicks on the advertisements we publish. I want to know what approach I should follow and which techniques and technologies should be used.
I have been researching a lot and collecting data, but I still have not found an approach to start building the software.
The software should be able to detect fraud clicks and identify where they come from.
ImportError: cannot import name 'deprecated_endpoints'
from tensorflow.python.util.deprecation import deprecated_endpoints

raises ImportError: cannot import name 'deprecated_endpoints'. Could someone help me resolve this?
Natural Language Processing, Machine Learning, Data Science
How can I extract multiple tweets from different Twitter accounts and score them with a sentiment function (using Python and natural language processing)? I'd like to create a graph with matplotlib to represent the positive and negative output, find the probability and total number of tweets, and predict upcoming tweets.
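Fetching tweets requires the Twitter API (e.g. via the tweepy library), and real sentiment scoring would typically use TextBlob or VADER; as a self-contained sketch of the scoring-and-counting step, here is a toy lexicon-based sentiment function applied to hypothetical tweet texts, producing counts that could then be fed to matplotlib:

```python
# Toy sentiment lexicons; a real system would use TextBlob, VADER, etc.
POSITIVE = {"good", "great", "love", "happy", "excellent"}
NEGATIVE = {"bad", "terrible", "hate", "sad", "awful"}

def sentiment(text):
    """Toy sentiment score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Hypothetical tweets standing in for API results.
tweets = ["I love this great product",
          "terrible service, very sad",
          "ok I guess"]

scores = [sentiment(t) for t in tweets]
positive = sum(s > 0 for s in scores)
negative = sum(s < 0 for s in scores)
print(scores, positive, negative)  # [2, -2, 0] 1 1

# A bar chart of the two counts could then be drawn with
# matplotlib.pyplot.bar(["positive", "negative"], [positive, negative]).
```

Predicting future tweets is a separate (and much harder) language-modelling problem, distinct from classifying the sentiment of existing ones.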
How to pass epoch and batch size when using label powerset in keras
I have a multi-label problem and, with some research, I was able to use Label Powerset in conjunction with ML algorithms. Now I want to use Label Powerset with a neural network; according to the official website this is supported, but I cannot work out how to modify my existing code to do it.
I want to know how to pass epochs, batch_size, or any other parameter that would normally go to the model's fit function.
Since I have a multi-label problem I have used sklearn's MultiLabelBinarizer, so each of my target rows looks like [1,0,0,1,0,0,0,0,0,0,0,0].
And lastly, could someone explain what KERAS_PARAMS and Keras() are in the line below:
def create_model_multiclass(input_dim, output_dim):
    # create model
    model = Sequential()
    model.add(Dense(8, input_dim=input_dim, activation='relu'))
    model.add(Dense(output_dim, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

clf = LabelPowerset(classifier=Keras(create_model_multiclass, True, KERAS_PARAMS),
                    require_dense=[True, True])
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
Below is my existing neural network code
cnn_model = Sequential()
cnn_model.add(Dropout(0.5))
cnn_model.add(Conv1D(25, 7, activation='relu'))
cnn_model.add(MaxPool1D(2))
cnn_model.add(Dropout(0.2))
cnn_model.add(Conv1D(25, 7, activation='relu'))
cnn_model.add(MaxPool1D(2))
cnn_model.add(Flatten())
cnn_model.add(Dense(25, activation='relu'))
cnn_model.add(Dense(12, activation='softmax'))
cnn_model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['acc'])
history = cnn_model.fit(X_train, y_train, validation_data=(X_test, y_test),
                        batch_size=32, epochs=180, verbose=1)
plot_history(history)
predictions = cnn_model.predict(X_test)
I want my output row to look like this only [1,0,0,1,0,0,0,0,0,0,0,0] as later I will use my MultiLabelBinarizer for the inverse transform of this.
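As a guess at the mechanics (worth verifying against the scikit-multilearn documentation), KERAS_PARAMS in that snippet is simply a dict of keyword arguments that the Keras() wrapper forwards to model.fit(), which is where epochs and batch_size would go. A stdlib-only sketch of the idea, with a stand-in for the real fit method:

```python
# Hypothetical sketch: KERAS_PARAMS is a dict of keyword arguments that a
# wrapper forwards to model.fit(**KERAS_PARAMS).
KERAS_PARAMS = dict(epochs=180, batch_size=32, verbose=0)

def fake_fit(X, y, epochs=1, batch_size=None, verbose=1):
    """Stand-in for keras Model.fit: just records what it received."""
    return {"epochs": epochs, "batch_size": batch_size, "verbose": verbose}

received = fake_fit([[0]], [[1]], **KERAS_PARAMS)
print(received)  # {'epochs': 180, 'batch_size': 32, 'verbose': 0}
```

The Keras() class itself is scikit-multilearn's adapter that makes a Keras model look like a scikit-learn classifier, so that LabelPowerset (which expects the sklearn fit/predict interface) can drive it.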
Loss functions reaching global minima
In Deep Learning can we have train accuracy far less than 100% at the global minimum of the loss function?
I have coded a neural network in python to classify cats and non-cats. I chose a 2-layer network. It gave a train accuracy of 100% and a test accuracy of 70%.
When I increased the number of layers to 4, the loss function got stuck at 0.6440, leading to a train accuracy of 65% and a test accuracy of 34% across many random initializations.
We expected the train accuracy of the 4-layer model to be 100%, but it gets stuck at 65%. We suspect the loss function is reaching a global minimum, since across many random initializations we stagnate at a loss value of 0.6440. So, if the loss function really is at its global minimum, why does the train accuracy not reach 100%? Hence our question: in deep learning, can the train accuracy be far less than 100% at the global minimum of the loss function?
Should I use activation function and normalization for regression?
I have a regression problem. A model in a related paper uses min-max normalization to scale the input and output data to the range [-1, 1], and applies a tanh activation in the last (output) layer. However, I found it very hard to train: the loss and RMSE decrease slowly. If I remove the activation function from the output layer and do not use any data normalization, I get the best score. So, I have two questions:
Do I have to use an activation function in the last layer and data normalization for a regression problem? (All features and target values are on the same scale, like house prices in different areas.)
Even with the activation function removed from the last layer, I found that the loss decreases faster if I don't use any data normalization. If I normalize the data to the [-1, 1] or [0, 1] range (with min-max normalization), the result is always worse. Why?
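For reference, min-max normalization to an arbitrary range [lo, hi] is just a linear rescaling; a small sketch (with made-up price data) that also shows the inverse transform needed to read network predictions back in the original units:

```python
def minmax_scale(xs, lo=-1.0, hi=1.0):
    """Linearly map values to [lo, hi]; returns the scaled values plus the
    (min, max) bounds needed to invert the transform later."""
    x_min, x_max = min(xs), max(xs)
    span = x_max - x_min
    scaled = [lo + (hi - lo) * (x - x_min) / span for x in xs]
    return scaled, (x_min, x_max)

def minmax_invert(ys, bounds, lo=-1.0, hi=1.0):
    """Map [lo, hi] values back to the original units."""
    x_min, x_max = bounds
    return [x_min + (x_max - x_min) * (y - lo) / (hi - lo) for y in ys]

prices = [100.0, 150.0, 300.0]              # hypothetical target values
scaled, bounds = minmax_scale(prices)
print(scaled)                               # [-1.0, -0.5, 1.0]
print(minmax_invert(scaled, bounds))        # [100.0, 150.0, 300.0]
```

One caveat when comparing runs: if the targets are rescaled, the raw loss values before and after normalization are not directly comparable, so "the loss decreases faster" should be judged on the inverse-transformed predictions (e.g. RMSE in the original units).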
About strategies for upsampling
I have read a lot about the strategies for upsampling.
I have found three types:
- Deconvolution-layer upsampling, which learns to deconvolve (transpose-convolve) the input feature map.
- Index upsampling, which uses the max-pooling indices to upsample the feature map(s) without learning, then convolves with a bank of trainable filters.
- Multilinear upsampling, which uses multilinear interpolation to upsample the input feature maps.
In my opinion, index upsampling is the best, but all the papers I have read say something different: that the deconvolution layer is the best.
I fail to understand why the deconvolution layer is the best technique for upsampling.
It will be helpful if the community can share their insights into this.
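To make the non-learned variants concrete, here is a small pure-Python sketch of 1-D linear interpolation upsampling by an integer factor (the 2-D bilinear case interpolates the same way along both axes); the deconvolution layer generalizes this by making the interpolation weights learnable:

```python
def upsample_linear_1d(xs, factor=2):
    """Upsample a 1-D signal by linear interpolation between neighbours."""
    out = []
    for a, b in zip(xs, xs[1:]):
        for k in range(factor):
            t = k / factor
            out.append(a * (1 - t) + b * t)  # interpolate between a and b
    out.append(xs[-1])  # keep the final sample
    return out

print(upsample_linear_1d([0.0, 2.0, 4.0]))  # [0.0, 1.0, 2.0, 3.0, 4.0]
```

In fact, a transposed convolution initialized with bilinear weights reproduces exactly this operation, which is one intuition for why papers favour it: it can start as interpolation and then learn something better for the task.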
Person detection during night
Can someone help with solving person detection using night-vision cameras? Please list the possible approaches, with the approximate accuracy or level of detection of each method.
The approaches in my mind are:
- Using transfer learning on a dataset of video output from night-vision cameras
- Traditional background subtraction algorithms
Can you please comment on my approaches and share your own ideas on the problem above?
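As an illustration of the second approach, background subtraction in its simplest form is frame differencing followed by thresholding; here is a toy pure-Python sketch on tiny grayscale "frames" (a real implementation would use OpenCV, e.g. cv2.createBackgroundSubtractorMOG2, which also adapts the background model over time):

```python
def foreground_mask(background, frame, threshold=30):
    """Mark pixels whose absolute difference from the background model
    exceeds the threshold (1 = foreground, 0 = background)."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

# Made-up 2x3 grayscale frames for illustration.
background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 10],   # a bright region has appeared
              [10, 190, 12]]

print(foreground_mask(background, frame))
# [[0, 1, 0], [0, 1, 0]]
```

Note that frame differencing only localizes moving or changed regions; classifying those regions as people (rather than animals or headlights) is what the learned detector in the first approach would add.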
Pygame window not responding when not refreshing for some time
I'm trying to do some reinforcement learning with a "game" I made.
In my main loop, when I just play my game everything works fine if the window is refreshed regularly.
However, after an episode, I would like to train my agent, but if the training takes too long, the pygame window then only shows the "control bar" (the bar with the X for closing the window) and if I try to close it, the program simply crashes.
Is there a simple way I can deal with it? Other solutions tell me I should call some pygame function regularly, but if I have to suspend my training just to do it from time to time, the code would become a bit messy.
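One common pattern (a sketch of the idea, not pygame-specific) is to split training into short chunks and service the event queue between them, e.g. by calling pygame.event.pump() in the on_idle hook below; this keeps the window responsive without scattering pygame calls through the training code:

```python
def train_in_chunks(total_steps, chunk_size=50, on_idle=lambda: None):
    """Run training in small chunks, invoking on_idle between chunks so a
    GUI event loop (e.g. pygame.event.pump) gets a chance to run."""
    done = 0
    while done < total_steps:
        for _ in range(min(chunk_size, total_steps - done)):
            done += 1  # placeholder for one real training step
        on_idle()      # in a pygame app: pygame.event.pump()
    return done

# Demonstration with a counter standing in for the event pump:
idle_calls = []
print(train_in_chunks(120, chunk_size=50,
                      on_idle=lambda: idle_calls.append(1)))  # 120
print(len(idle_calls))  # 3 chunks -> 3 idle calls
```

The other common option is to run training in a background thread while the main thread keeps pumping events, but pygame's display and event calls must then stay on the main thread.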
Best algorithm for multi-agent continuous-space path finding using reinforcement learning
I am working on a project in which I need to find the best optimized path from one point to another in a continuous space, in a multi-agent scenario. I am looking for the reinforcement-learning algorithm best suited to this problem. I have tried "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments" (MADDPG), but it does not seem to reach the goals within 10,000 episodes. How can I improve this algorithm, or is there another algorithm that could help?
Always says "No dashboards are active for the current data set" when activating TensorBoard
I am using Python 3.7.3 on macOS in an Anaconda environment. TensorFlow (1.14.0), Matplotlib (3.1.0) and other modules are installed there and everything worked fine. I wrote the following code and ran it.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

def add_layer(inputs, inputs_size, outputs_size, activation_function=None):
    with tf.name_scope('layer'):
        with tf.name_scope('weight'):
            Weights = tf.Variable(tf.random.normal([inputs_size, outputs_size]))
        with tf.name_scope('biase'):
            biases = tf.Variable(tf.zeros([1, outputs_size]) + 0.1)
        with tf.name_scope('wx_plus_b'):
            Wx_plus_b = tf.matmul(inputs, Weights) + biases
        if activation_function == None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        return outputs

''' multiple lines omitted here '''

writer = tf.compat.v1.summary.FileWriter("logs/", sess.graph)
I can see a local file generated in the "logs/" folder. I opened a terminal and cd'd to that folder with the Anaconda environment activated. Then I typed
"python -m tensorboard.main --logdir=‘logs/‘ --host localhost --port 6006"
and got response
TensorBoard 1.14.0 at http://localhost:6006/ (Press CTRL+C to quit)
Then no matter whether I use Safari or Chrome to open "http://localhost:6006/", nothing is shown except "No dashboards are active for the current data set." I also tried other commands such as
python -m tensorboard.main --logd logs --host localhost --port 6006
But there's no difference.
The original code is as follows:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

def add_layer(inputs, inputs_size, outputs_size, activation_function=None):
    with tf.name_scope('layer'):
        with tf.name_scope('weight'):
            Weights = tf.Variable(tf.random.normal([inputs_size, outputs_size]))
        with tf.name_scope('biase'):
            biases = tf.Variable(tf.zeros([1, outputs_size]) + 0.1)
        with tf.name_scope('wx_plus_b'):
            Wx_plus_b = tf.matmul(inputs, Weights) + biases
        if activation_function == None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        return outputs

x_data = np.linspace(-1, 1, 300, dtype=np.float32)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape).astype(np.float32)
y_data = np.square(x_data) - 0.5 + noise

with tf.name_scope('inputs'):
    xs = tf.compat.v1.placeholder(tf.float32, [None, 1], name='x_in')
    ys = tf.compat.v1.placeholder(tf.float32, [None, 1], name='y_in')

l1 = add_layer(xs, 1, 10, tf.nn.relu)
prediction = add_layer(l1, 10, 1, None)

with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(prediction - ys),
                                        reduction_indices=[1]))
    # no need to do tf.sum() as in link.
    # tf.reduce_mean()

with tf.name_scope('train'):
    train_step = tf.compat.v1.train.GradientDescentOptimizer(0.1).minimize(loss)

sess = tf.compat.v1.Session()
writer = tf.compat.v1.summary.FileWriter("logs/", sess.graph)
sess.run(tf.compat.v1.global_variables_initializer())