Which machine learning model can be used to output a single float?
I have 3 float input values, for example 10.12, 20.32 and 3.13, and the result should be 38.52. Which machine learning model should I choose so that I can feed in these 3 float numbers and use the output number as the label?
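This is a plain regression task, so any regressor with a single continuous output works: linear regression, or a neural network whose last layer is a single unit with a linear activation (e.g. Keras `Dense(1)` trained with MSE loss). A minimal sketch with ordinary least squares in NumPy, using made-up training data in which the label is simply the sum of the inputs (an assumption for illustration):

```python
import numpy as np

# Toy training set: each row is one sample of 3 float inputs.
X = np.array([[10.12, 20.32, 3.13],
              [1.0,   2.0,   3.0],
              [4.0,   5.0,   6.0],
              [7.0,   8.0,  10.0]])
y = X.sum(axis=1)  # toy labels: here the target is just the sum of the inputs

# Append a bias column and fit least squares: y ~ X @ w + b.
Xb = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

pred = np.array([2.0, 3.0, 4.0, 1.0]) @ w  # predict for a new sample [2, 3, 4]
```

With enough (input, output) pairs, the same data could instead train a small Keras model ending in `Dense(1)`; the least-squares version just keeps the sketch dependency-free.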
See also questions close to this topic

Add values to DataFrame preserving index
I have a DataFrame (test3) which looks like this (年月日 is in pd.datetime format):

                          年月日.1  平均気温(℃)  最高気温(℃)  最低気温(℃)
    年月日
    1890-07-01 00:00:00  18900707       23.3       32.3       18.9
    1890-07-08 00:00:00  18900714       23.9       33.2       17.0
    1890-07-15 00:00:00  18900721       28.3       35.8       22.5
    1890-07-22 00:00:00  18900728       26.1       33.3       22.0
    1890-07-29 00:00:00  18900804       26.8       34.6       22.3
    ...                       ...        ...        ...        ...
where the first column, 年月日, is the index of the DataFrame. I'm rendering new data: rendered_date (passed through pd.to_datetime, for the first column) and next_value_ (array([[28.330473]], dtype=float32), for the third column). The other columns are not important.

    rendered_date = render_date(last_day.index.date)  # rendering new datetime object
    rendered_date = pd.to_datetime(rendered_date, format='%Y/%m/%d')  # making it usable for pandas
    d = {'年月日': [rendered_date], '平均気温(℃)': [next_value_]}
    new_df = pd.DataFrame(data=d)  # making new dataframe
    new_df = new_df.set_index("年月日")  # setting the same index
    fr = [test3, new_df]  # concatenating new DF with existing df (test3)
    result = pd.concat(fr)
This makes the bottom of result look like

    ... some values ...
    2020-07-31 00:00:00  20200806  28.7  35.0  23.9
    [20200807]                NaT  [[28.330473]]  NaN  NaN

which is not what I was looking for. How could I make the new row match the format of result?
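A likely cause (hedged guess from the output shown): the new row's values are still wrapped in containers, i.e. the index value arrives as a one-element list and the temperature as a (1, 1) float32 ndarray, so pandas stores them verbatim instead of as scalars. Unwrapping both before building the one-row frame avoids this; a minimal sketch with a stand-in test3:

```python
import pandas as pd
import numpy as np

# Stand-in for the tail of test3 (one temperature column is enough for the sketch).
test3 = pd.DataFrame({'平均気温(℃)': [28.7]},
                     index=pd.to_datetime(['2020-07-31']))
test3.index.name = '年月日'

rendered_date = pd.to_datetime('2020/08/07', format='%Y/%m/%d')  # a scalar Timestamp
next_value_ = np.array([[28.330473]], dtype=np.float32)          # model output, shape (1, 1)

# Unwrap the (1, 1) array to a plain float before building the row.
new_df = pd.DataFrame({'平均気温(℃)': [float(next_value_.item())]},
                      index=[rendered_date])
new_df.index.name = '年月日'

result = pd.concat([test3, new_df])
```

The index then stays a proper DatetimeIndex and the value a plain float, so the appended row renders like the existing ones.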

How to correct an arithmetic error in Python
Hi, I am a relatively new programmer, and my teacher gave us this problem to fix. The thing is, I have no idea what to do, because no matter what I do the code refuses to run:

    def multiply(a, b):
        a * b

What is wrong with this function?
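The function above computes a * b but never returns the result, so every call evaluates to None. A minimal fix is the missing return statement:

```python
def multiply(a, b):
    # Return the product instead of computing and discarding it.
    return a * b

print(multiply(3, 4))  # 12
```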

After converting code to exe: comparison between 'import x' and 'from x import y'
I have a simple question someone made me think of: is it better to use 'from x import y', so that after converting the code to an .exe it imports fewer things and therefore performs better?

Tensorflow xception broadcast input array error
I'm using tensorflow-gpu 2.1, and am doing image classification on 850x550 images (3 channels).
The model (preliminary) looks like this (using sequential API):
    input_tensor_def = Input(shape=(850, 550, 3))
    model = Sequential()
    xception = Xception(include_top=False, weights=None, input_tensor=input_tensor_def)
    model.add(xception)
    model.add(GlobalAvgPool2D())
    model.add(Flatten())
    model.add(Dense(512, activation='relu'))
    model.add(Dense(2, activation='softmax'))
Using the model API, it looks like this:
    model_core = Xception(weights=None, include_top=False, input_tensor=input_tensor_def)
    model_head = model_core.output
    model_head = GlobalAvgPool2D()(model_head)
    model_head = Flatten()(model_head)
    model_head = Dense(512, activation='relu')(model_head)
    model_head = Dense(2, activation='softmax')(model_head)
    model = Model(inputs=model_core.input, outputs=model_head)
I'm getting the following error:
ValueError: could not broadcast input array from shape (850,550,3) into shape (850,550,3,3)
I'm really confused about why it's trying to interpret the height dimension as the batch index.
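This error usually points at the data pipeline rather than the model definition: NumPy is being asked to copy an (850, 550, 3) image into a slot that has an extra trailing dimension. A small NumPy reproduction of the same kind of failure (the shapes here are shrunk-down stand-ins):

```python
import numpy as np

# A batch buffer accidentally allocated with an extra trailing dimension.
batch = np.zeros((2, 4, 5, 3, 3))
img = np.ones((4, 5, 3))  # one correctly shaped HxWxC image

try:
    batch[0] = img  # shapes (4,5,3) vs (4,5,3,3): cannot broadcast
except ValueError as e:
    msg = str(e)  # "could not broadcast input array from shape ..."

# Correct stacking of consistently shaped images:
imgs = [np.ones((4, 5, 3)) for _ in range(2)]
good_batch = np.stack(imgs)  # shape (2, 4, 5, 3)
```

Printing the shape of each loaded image (and of the array the loader writes them into) just before fitting is a quick way to find where the extra dimension sneaks in.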

Increase the size of a np.array
I ran a Conv1D on an X matrix of shape (2000, 20, 28), for a batch size of 2000, 20 time steps and 28 features. I would like to move on to a Conv2D CNN and increase the dimensionality of my matrix to (2000, 20, 28, 10), having 10 elements for each of which I can build a (2000, 20, 28) X matrix. Similarly, I want to get a y array of size (2000, 10), i.e. stacked copies of the y array of size (2000,) that I used to get for the LSTM and Conv1D networks.
The code I used to create the 20 timesteps from input dataX, dataY, was
    def LSTM_create_dataset(dataX, dataY, seq_length, step):
        Xs, ys = [], []
        for i in range(0, len(dataX) - seq_length, step):
            v = dataX.iloc[i:(i + seq_length)].values
            Xs.append(v)
            ys.append(dataY.iloc[i + seq_length])
        return np.array(Xs), np.array(ys)
I use this function within the loop I prepared to create the data of my conv2D NN :
    for ric in rics:
        dataX, dataY = get_model_data(dbInput, dbList, ric, horiz, drop_rows,
                                      triggerUp1, triggerLoss, triggerUp2=0)
        dataX = get_model_cleanXset(dataX, trigger)  # clean X matrix for insufficient data
        Xs, ys = LSTM_create_dataset(dataX, dataY, seq_length, step)  # slide over seq_length for a 3D matrix
        Xconv.append(Xs)
        yconv.append(ys)
        Xconv.append(Xs)
        yconv.append(ys)
I obtain a (10, 2000, 20, 28) Xconv matrix instead of the targeted (2000, 20, 28, 10) output matrix X, and a (10, 2000) matrix y instead of the targeted (2000, 5). I know that I can easily reshape yconv with yconv = np.reshape(yconv, (2000, 5)). But the reshape for Xconv, Xconv = np.reshape(Xconv, (2000, 20, 28, 10)), seems hazardous, as I cannot visualize the output, and it may even be erroneous. How could I do it safely (or could you confirm my first attempt)? Thanks a lot in advance.
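The reshape is indeed the wrong tool here: np.reshape only reinterprets the flat memory order, so it would scramble which block each value came from. Moving the leading axis to the end with np.moveaxis (or np.transpose) keeps every (2000, 20, 28) block intact. A shrunk-down sketch:

```python
import numpy as np

# Stand-in for Xconv: 3 stacked blocks (10 in the question), each (6, 4, 5)
# instead of (2000, 20, 28), filled with the block's own index for checking.
blocks = [np.full((6, 4, 5), i, dtype=float) for i in range(3)]
Xconv = np.array(blocks)        # shape (3, 6, 4, 5)

X = np.moveaxis(Xconv, 0, -1)   # shape (6, 4, 5, 3): block i lands in X[..., i]

# y works the same way: (n_blocks, samples) -> (samples, n_blocks) is a transpose.
yconv = np.array([np.full(6, i) for i in range(3)])
y = yconv.T                     # shape (6, 3)
```

Because each block keeps its own fill value, you can verify that X[..., i] is exactly block i, which a reshape would not guarantee.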
Issue with Shape in Python Neural Network
I have the following dataframe: https://raw.githubusercontent.com/markamcgown/Projects/master/df_model.csv
At "> 11 history = model.fit" in the last block of code below, I get the error "ValueError: Input 0 of layer sequential_8 is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: [None, 26]"
Why is it expecting a minimum of 3 dimensions and how can I automate my code below to always have the right shape?
    import keras
    import pandas as pd
    from tensorflow.keras.models import Sequential
    from sklearn.model_selection import train_test_split
    from keras.layers import Conv2D, MaxPooling2D, Conv1D, MaxPooling1D
    from tensorflow.keras.layers import LSTM, Dense, Dropout, Bidirectional
    from keras.layers import Dense, Dropout, Flatten, Reshape, GlobalAveragePooling1D

    # Import raw data file with accelerometer data
    path = r'C:\Users\<your_local_directory>\df_model.csv'
    df_model = pd.read_csv(path)
    df_model
    y_column = 'Y_COLUMN'
    x = df_model.drop(y_column, inplace=False, axis=1).values
    y = df_model[y_column].values
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.1, random_state=42)
    def create_model(num_features, num_classes, dropout=0.3,
                     loss="mean_absolute_error", optimizer="rmsprop"):
        model = Sequential()
        model.add(Conv1D(100, 10, activation='relu', input_shape=(None, num_features)))
        model.add(Conv1D(100, 10, activation='relu'))
        model.add(MaxPooling1D(2))
        model.add(Conv1D(160, 10, activation='relu'))
        model.add(Conv1D(160, 10, activation='relu'))
        model.add(LSTM(160, return_sequences=True))
        model.add(LSTM(160, return_sequences=True))
        model.add(GlobalAveragePooling1D())
        model.add(Dropout(dropout))
        model.add(Dense(num_classes, activation='softmax'))
        model.compile(loss=loss, metrics=["mean_absolute_error"], optimizer=optimizer)
        return model
    DROPOUT = 0.4
    LOSS = "huber_loss"
    OPTIMIZER = "adam"
    num_time_periods, num_features = x_train.shape[0], x_train.shape[1]
    model = create_model(num_features, num_classes=len(set(df_model[y_column])),
                         loss=LOSS, dropout=DROPOUT, optimizer=OPTIMIZER)
    callbacks_list = [
        keras.callbacks.ModelCheckpoint(filepath='best_model.{epoch:02d}{val_loss:.2f}.h5',
                                        monitor='val_loss', save_best_only=True),
        keras.callbacks.EarlyStopping(monitor='accuracy', patience=1)]

    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

    # Hyperparameters
    BATCH_SIZE = 400
    EPOCHS = 1

    # Enable validation to use ModelCheckpoint and EarlyStopping callbacks.
    history = model.fit(x_train, y_train, batch_size=BATCH_SIZE, epochs=EPOCHS,
                        callbacks=callbacks_list, validation_split=0.2, verbose=1)

    import matplotlib.pyplot as plt  # needed for the plots below
    plt.figure(figsize=(15, 4))
    plt.plot(history.history['accuracy'], "g", label="Training Accuracy")
    #plt.plot(history.history['val_accuracy'], "g", label="Accuracy of validation data")
    plt.plot(history.history['loss'], "r", label="Training Loss")
    #plt.plot(history.history['val_loss'], "r", label="Loss of validation data")
    plt.title('Model Performance')
    plt.ylabel('Accuracy & Loss')
    plt.xlabel('Epoch')
    plt.ylim(0)
    plt.legend()
    plt.show()
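The error happens because Conv1D layers expect 3-D input of shape (batch, timesteps, features), while x_train here is the raw 2-D (samples, 26) matrix straight from the CSV. One common fix is to slice the rows into overlapping windows so each sample becomes (timesteps, features); a shrunk-down NumPy sketch (the window length is an assumption you would tune, and it must be long enough for the stacked conv/pooling layers):

```python
import numpy as np

x = np.arange(24.0).reshape(12, 2)  # stand-in for x_train: (samples, features)
seq_len = 4                         # assumed window length (timesteps)

# Build overlapping windows: each sample is now (timesteps, features).
windows = np.stack([x[i:i + seq_len] for i in range(len(x) - seq_len + 1)])
# windows.shape == (9, 4, 2), i.e. ndim=3 as the Conv1D stack requires
```

The corresponding y labels must be sliced the same way (one label per window), and input_shape becomes (seq_len, num_features) instead of (None, num_features).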

How to calculate the number of MACs of a Conv, FC and depthwise-separable conv layer?
Does someone know how to calculate the number of MACs of AlexNet, and the number of MACs of a depthwise-separable conv layer (for MobileNet, for example)?
I don't have problems with the number of weights, actually, but I don't understand how to do it for the MACs.
Weights of AlexNet:

    conv1: 11*11*3*96 + 96 = 34944
    conv2: 5*5*96*256 + 256 = 614656
    conv3: 3*3*256*384 + 384 = 885120
    conv4: 3*3*384*384 + 384 = 1327488
    conv5: 3*3*384*256 + 256 = 884992
    fc1: 6*6*256*4096 + 4096 = 37752832
    fc2: 4096*4096 + 4096 = 16781312
    fc3: 4096*1000 + 1000 = 4097000

And the total result is 62378344 parameters.
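For reference, the usual counting rules (stated here as a hedged sketch): a standard conv layer performs K_h*K_w*C_in multiply-accumulates per output value, for C_out*H_out*W_out output values; an FC layer performs N_in*N_out MACs (one per weight); and a depthwise-separable conv splits into a depthwise pass (K*K*C_in*H_out*W_out) plus a 1x1 pointwise pass (C_in*C_out*H_out*W_out). So MACs differ from the weight count only by the extra H_out*W_out factor on conv layers:

```python
def conv_macs(kh, kw, cin, cout, hout, wout):
    # One output value costs kh*kw*cin MACs; there are cout*hout*wout of them.
    return kh * kw * cin * cout * hout * wout

def fc_macs(nin, nout):
    # One MAC per weight (biases are adds, usually not counted).
    return nin * nout

def dw_separable_macs(k, cin, cout, hout, wout):
    depthwise = k * k * cin * hout * wout   # one k x k filter per input channel
    pointwise = cin * cout * hout * wout    # 1x1 conv mixing the channels
    return depthwise + pointwise

# AlexNet conv1, assuming the standard 55x55 output map: 11*11*3*96*55*55
print(conv_macs(11, 11, 3, 96, 55, 55))  # 105415200
```

The 55x55 output size is the standard AlexNet conv1 figure, used here as an assumption; repeat the same call per layer with each layer's output size to total the network.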

roc_auc_score score for PyTorch model using probability estimates
I'm currently doing something like this to get the AUC score for a binary classifier. Because I'm using y_predicted (0 or 1) instead of the probabilities, I'm finding that the AUC scores are equal to the detector accuracy.
    outputs = model(inputs)
    y_score, y_predicted = torch.max(outputs.data, 1)
    y_true = targets
    auc = roc_auc_score(y_true, y_predicted)
- Is it typical to find that the AUC score equals accuracy when constructing a ROC curve based on predicted labels (0 or 1) instead of probability scores?
- How can I use probability scores instead of the predicted labels? The roc_auc_score function expects probability scores, where [0, 0.5) corresponds to class 0 and [0.5, 1] corresponds to class 1. For example:
    >>> import numpy as np
    >>> from sklearn.metrics import roc_auc_score
    >>> y_true = np.array([0, 0, 1, 1])
    >>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
    >>> roc_auc_score(y_true, y_scores)
    0.75
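A common approach (a sketch, not the only option): convert the logits to class probabilities with softmax and pass the class-1 column to roc_auc_score, i.e. in PyTorch probs = torch.softmax(outputs, dim=1)[:, 1]. The same computation in NumPy, with made-up logits:

```python
import numpy as np

def softmax(logits):
    # Subtract the row max for numerical stability before exponentiating.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 0.5],    # confident class 0
                   [0.1, 1.9],    # confident class 1
                   [1.2, 1.1]])   # borderline
p_class1 = softmax(logits)[:, 1]  # continuous scores in [0, 1] for roc_auc_score
```

Note that roc_auc_score only needs scores that rank class 1 above class 0, so the raw class-1 logits would also work; softmax just maps them into [0, 1].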

Why does keras use so much memory?
I was recently trying to improve the memory efficiency of one of my models, and in testing it out I happened across this fact: any time you make anything in Keras, it uses up at least 3.1 GB of GPU memory.
For instance, I tried this piece of code here:
    from time import sleep
    from keras import Input
    from keras.layers import Dense

    def make_neural_network():
        input = Input(shape=(1,))
        output = Dense(1)(input)
        return output

    sleep(3)
    make_neural_network()
    sleep(6)
Then I looked at my task manager: for the first 3 seconds I saw nothing, until the program started making the neural network, at which point my GPU memory spiked to 3.1 GB for 6 seconds. I was wondering why it takes up so much memory. Any tips to reduce it, if possible, would be greatly appreciated.
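A likely explanation (hedged): by default, TensorFlow's GPU backend preallocates nearly all free GPU memory the first time it initializes, regardless of model size, so the 3.1 GB reflects that preallocation rather than the one-unit Dense layer. Enabling memory growth makes it allocate on demand instead:

```python
import os

# Ask TensorFlow to grow GPU memory usage on demand instead of grabbing
# it all up front. Must be set before `import tensorflow` / `import keras`.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Equivalent, after importing TensorFlow:
# import tensorflow as tf
# for gpu in tf.config.list_physical_devices('GPU'):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

With growth enabled, allocation can fragment over time, which is why preallocation is the default; for a small model, though, growth mode should bring the footprint well under 3.1 GB.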

List or IEnumerable for .NET Core Model?
Let's say I have a view with a model defined as

    @model IEnumerable<ViewModel>

On the controller side, this data is provided by Entity Framework and is thus an IQueryable. I want to iterate through the data like this:

    @if (Model != null && Model.Count() > 0)
    {
        @foreach (var item in Model)
        {
            <tr>
                <td>@item.firstProp</td>
                <td>@item.secondProp</td>
            </tr>
        }
    }
    else
    {
        <tr>
            <td>No data to display.</td>
        </tr>
    }
I only want to iterate if there is at least one item; otherwise I want to show a different message. Because I'm using an IEnumerable, checking the count requires a trip to the database, and then a second trip to iterate the foreach loop. I could always use a List instead, but I almost always see models defined using IEnumerable. Is using List a good solution, or should IEnumerable always be used when defining models?
Getting An item with the same key has already been added Error when mapping model
    "Types": [
        { "activeFlag": true, "Type": "Out" },
        { "activeFlag": true, "Type": "Today" },
        { "activeFlag": true, "Type": "Later" },
        { "activeFlag": true, "Type": "Now" },
        { "activeFlag": true, "Type": "Example" },
        { "activeFlag": true, "Type": "Hour" },
        { "activeFlag": true, "Type": "In" }
    ],
I have the above JSON structure, and I am trying to map it into my model like this, but it has multiple activeFlag attributes.
    public class Info
    {
        public Dictionary<string, string> Types { get; set; }
    }

    ...
    Types = new Dictionary<string, string>
    {
        { "activeFlag", "true" }, { "Type", "Out" },
        { "activeFlag", "true" }, { "Type", "Today" },
        { "activeFlag", "true" }, { "Type", "Later" },
        { "activeFlag", "true" }, { "Type", "Now" }, ...
    }
    ...
The code above throws:

    An item with the same key has already been added

How can I map it properly?
Adonisjs models name when schema contains "_" character
I created a schema called "brand_types". Now I want to create a model for it, but if I call it "BrandType" lucid doesn't recognize the schema since it contains the "_" character. How can I tell the model which is the right schema?
EDIT: I tried this:

    static get table () {
        return 'smart_contract_types'
    }
But it still doesn't work with seeders.