Adapt a machine learning algorithm to overridden decisions
We have over 10 years of insurance data. Underwriting rules applied to the data produce one of two possible outcomes: Approve or Reject.
We want a machine learning algorithm to learn these rules and predict the outcome for future cases, which is all fine. BUT, if socioeconomic conditions change, the underwriter will manually override the ML decision. The system is expected to adapt accordingly and behave that way for subsequent applications.
Is there any way to achieve this?
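One common approach is incremental (online) learning: each underwriter override is fed back to the model as a freshly labelled example, so later predictions reflect it. A minimal sketch using scikit-learn's `SGDClassifier` with `partial_fit` (the feature columns and data here are hypothetical placeholders, not part of the original question):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical features per application: [age, premium, risk_score]
X_hist = np.array([[35, 1200.0, 0.2],
                   [52,  800.0, 0.7],
                   [44, 1500.0, 0.4],
                   [29,  950.0, 0.9]])
y_hist = np.array([1, 0, 1, 0])  # 1 = Approve, 0 = Reject

# Train incrementally on the historical decisions
clf = SGDClassifier(random_state=0)
clf.partial_fit(X_hist, y_hist, classes=[0, 1])

# An underwriter manually overrides the model on a new case:
x_new = np.array([[48, 1100.0, 0.5]])
override_label = np.array([0])  # underwriter says Reject

# Fold the override back in so future predictions adapt to it
clf.partial_fit(x_new, override_label)
print(clf.predict(x_new))
```

A single override will rarely flip the decision boundary on its own; repeating recent overrides or passing `sample_weight` to weight them more heavily makes adaptation faster. Libraries built for streaming data with concept drift (e.g. River) target exactly this scenario.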
See also questions close to this topic

Python machine learning: creating a training and test set from a list of arrays
I would like to create a neural network trained on the RAVDESS dataset (https://smartlaboratory.org/ravdess/): the idea is to use this dataset to detect the mood of the person speaking into the microphone of my application.
Using librosa and the for loop below, I have extracted the labels and the features I want to use for the analysis.
import os
import librosa

# I started with only one folder to speed up the operations
oneActorPath = '/content/drive/My Drive/RAVDESS/Audio_Speech_Actors_0124/Actor_01/'
lst = []

# Loop through each folder to find the wavs
for subdir, dirs, files in os.walk(oneActorPath):
    for file in files:
        if file == '.DS_Store':
            continue
        # Check if the format of the file is valid
        try:
            # Load the audio as a librosa array
            data, rate = librosa.load(os.path.join(subdir, file))
            # The file name encodes the emotion it contains
            label = file[6:8]
            lst.append((data, label))
        # If it is not valid, skip it
        except ValueError:
            continue
The output of this loop is a list of arrays in the format below:
[(array([8.1530527e-10, 8.9952795e-10, 9.1185753e-10, ..., 0.0000000e+00,
         0.0000000e+00, 0.0000000e+00], dtype=float32), '08'),
 (array([0., 0., 0., ..., 0., 0., 0.], dtype=float32), '08'),
 (array([0., 0., 0., ..., 0., 0., 0.], dtype=float32), '06'),
 (array([0.00050612, 0.00057967, 0.00035985, ..., 0., 0., 0.], dtype=float32), '05'),
 ...]
The second element of each tuple in the list above ('08' in the first line) is the label of the sample, according to the dictionary below:
emotions = {
    "neutral": "01",
    "calm": "02",
    "happy": "03",
    "sad": "04",
    "angry": "05",
    "fearful": "06",
    "disgust": "07",
    "surprised": "08"
}
At this point, I have my labels and my data: how can I split this dataset to obtain a training and a test set?
EDIT1: I need to understand how to obtain X and y from this structure in order to use train_test_split on the data.
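Since each entry of `lst` is a `(features, label)` tuple, the list can be unzipped into `X` and `y` and passed straight to `train_test_split`. A sketch with dummy fixed-length arrays standing in for the librosa output (real recordings have different lengths, so in practice you would first extract fixed-size features such as MFCC means):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins for the (audio_array, label) tuples built in the loop
lst = [(np.random.rand(40), '08'), (np.random.rand(40), '05'),
       (np.random.rand(40), '06'), (np.random.rand(40), '03'),
       (np.random.rand(40), '01'), (np.random.rand(40), '07')]

# Unzip the list of tuples into features and labels
X, y = zip(*lst)
X = np.array(X)   # shape (n_samples, n_features)
y = np.array(y)   # shape (n_samples,)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

print(X_train.shape, X_test.shape)
```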

Operands could not be broadcast together
I'm trying to train a model using mini-batches, but I'm getting a .... error.
I'm using the same function I've already used (and it worked) with other models, but this time it crashes.
def random_mini_batches(X, Y, mini_batch_size=64):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1, number of examples)
    mini_batch_size -- size of the minibatches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
    m = X.shape[1]   # number of training examples
    mini_batches = []

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X.iloc[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((Y.shape[0], m))

    # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
    # number of mini batches of size mini_batch_size in your partitioning
    num_complete_minibatches = math.floor(m / mini_batch_size)
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X.iloc[:, k * mini_batch_size : (k + 1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k + 1) * mini_batch_size]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    # Handling the end case (last minibatch < mini_batch_size)
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X.iloc[:, num_complete_minibatches * mini_batch_size : m]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    return mini_batches
I've used this function in a NN with 20 layers with one X and Y, and now I'm trying to use it again with a 5-layer NN and differently shaped inputs.
However, I'm getting this error at this part of the code:
epoch_cost += minibatch_cost/num_minibatches
The full code is:
for epoch in range(num_epochs):
    epoch_cost = 0
    # number of minibatches of size minibatch_size in the train set
    num_minibatches = int(m / minibatch_size)
    minibatches = random_mini_batches(X_train, Y_train, minibatch_size)

    for minibatch in minibatches:
        # Select a minibatch
        (minibatch_X, minibatch_Y) = minibatch
        _, minibatch_cost = sess.run([optimizer, cost],
                                     feed_dict={X: minibatch_X, Y: minibatch_Y})
        epoch_cost += minibatch_cost / num_minibatches

    # Print the cost every epoch
    if print_cost == True and epoch % 100 == 0:
        print("Cost after epoch %i: %f" % (epoch, epoch_cost))
    if print_cost == True and epoch % 5 == 0:
        costs.append(epoch_cost)
Thanks in advance
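For reference, NumPy raises this error whenever an element-wise operation gets operands with incompatible shapes, which in a setup like this usually means a minibatch (often the short final one) no longer matches the shape the cost computation expects. A minimal reproduction, plus the kind of shape check worth running before each `sess.run` (the shapes below are hypothetical):

```python
import numpy as np

a = np.zeros((3, 64))   # e.g. predictions for a minibatch of 64
b = np.zeros((3, 50))   # e.g. labels for a short final minibatch

try:
    _ = a - b           # element-wise op on mismatched shapes
except ValueError as e:
    print(e)            # "operands could not be broadcast together ..."

# Before feeding each minibatch, verify X and Y actually agree on
# the number of examples (their second dimension):
def check_batch(mini_batch_X, mini_batch_Y):
    assert mini_batch_X.shape[1] == mini_batch_Y.shape[1], (
        mini_batch_X.shape, mini_batch_Y.shape)
```

Printing `minibatch_X.shape` and `minibatch_Y.shape` inside the training loop is usually the fastest way to spot which batch triggers the mismatch.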

Multiclass classification using eXtreme Gradient Boosting (XGBoost) in R
I get an error in 'params' of XGBoost when doing multiclass classification of the variable 'NSP' into classes 1, 2, 3.
Dataset: https://archive.ics.uci.edu/ml/datasets/cardiotocography
Error: label_error >= 0 && label_error < nclass
SoftmaxMultiClassObj: label must be in [0, num_class), num_class=3 but found 3 in label
objective used: multi:softmax
eval_metric = mlogloss
nc = 3 (for labels 1, 2, 3)
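The error message is the clue: XGBoost's `multi:softmax` objective expects class labels in `[0, num_class)`, i.e. 0, 1, 2 when `num_class = 3`, so labels 1, 2, 3 must be shifted down before training and shifted back after prediction. In R this is simply `label = NSP - 1`; the same re-encoding, sketched in Python:

```python
import numpy as np

labels = np.array([1, 2, 3, 1, 3, 2])   # NSP classes as found in the dataset

# XGBoost requires labels in [0, num_class): 0, 1, 2 for num_class=3
xgb_labels = labels - 1
print(xgb_labels)            # [0 1 2 0 2 1]

# After predicting with multi:softmax, map back to the original classes
preds = np.array([0, 2, 1])  # hypothetical model output
original = preds + 1
print(original)              # [1 3 2]
```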

Weka decision tree has empty conjunctions
Here is the tree. Near the top, by the root, the branched-off conjunctions are empty, which I don't understand.
The TextDirectoryLoader, the StringToWordVector and AttributeSelection filters, the Lovins stemmer, and J48 decision tree classification were applied to the data.
The data consists of text files used to identify fake news articles.
What are these empty conjunctions? I can't find anything similar.

C program that predicts a student's marks
I want to create a C program that predicts students' marks from the correlation of their previous marks and attendance percentage.
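One standard way to frame this is a two-variable linear regression: marks ≈ b0 + b1·previous_marks + b2·attendance. The least-squares logic is sketched here in Python for brevity (the sample data is made up); the arithmetic ports directly to plain C loops over arrays:

```python
import numpy as np

# Hypothetical student records: previous mark, attendance %, current mark
prev   = np.array([55.0, 70.0, 80.0, 90.0, 60.0])
attend = np.array([60.0, 75.0, 85.0, 95.0, 70.0])
marks  = np.array([50.0, 68.0, 82.0, 91.0, 58.0])

# Least squares fit of marks ≈ b0 + b1*prev + b2*attend
A = np.column_stack([np.ones_like(prev), prev, attend])
coef, *_ = np.linalg.lstsq(A, marks, rcond=None)

def predict(p, a):
    """Predict a mark from a previous mark p and attendance % a."""
    return coef[0] + coef[1] * p + coef[2] * a

print(round(predict(75.0, 80.0), 1))
```

In C, the same normal-equations solve can be written by hand for two predictors, or a library such as GSL can do the fit.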

How to load weights and biases into a model built with the NNAPI
I am trying to use the NNAPI for a tflite model, and I have a couple of questions.
How can I convert a .tflite model to .bin?
In what order are the weights and biases written in the bin file, so that I know in what order to load them into operands?
Thanks in advance for any information!
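A .tflite file is a FlatBuffer, so a raw .bin dump has no single canonical weight order: whatever order you choose when writing the file is the order you must mirror when reading it back into NNAPI operands. A sketch of that idea using NumPy only (the tensor names and shapes are hypothetical):

```python
import numpy as np

# Hypothetical tensors, listed in the order the NNAPI operands expect them
tensors = [
    ("conv1/weights", np.random.rand(8, 3, 3, 3).astype(np.float32)),
    ("conv1/bias",    np.random.rand(8).astype(np.float32)),
]

# Write: concatenate raw float32 bytes in a fixed, documented order
with open("model_weights.bin", "wb") as f:
    for name, t in tensors:
        f.write(t.tobytes())

# Read back: the same order and shapes recover each operand by offset
raw = np.fromfile("model_weights.bin", dtype=np.float32)
offset = 0
restored = {}
for name, t in tensors:
    n = t.size
    restored[name] = raw[offset:offset + n].reshape(t.shape)
    offset += n
```

To enumerate the actual tensors stored in a .tflite file (and their shapes), the TensorFlow Lite Python interpreter (`tf.lite.Interpreter(...).get_tensor_details()`) or the FlatBuffer schema (`schema.fbs`) can be used; you then pick and document your own serialization order for the .bin.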