How to use user survey answers together with actual usage to forecast power usage with an LSTM?
I have pretrial and posttrial surveys of around 5,000 users from a smart meter installation.
Along with this, I have power usage readings recorded every 30 minutes for around one and a half years.
The survey CSV looks like this:

```
User  Question 1  Question 2  ...
1000  a           a           ...
1001  b           a           ...
...
5000  b           a           ...
```
The power usage CSV looks like this:

```
User  date   usage
1000  20001  0.003
1000  20002  0.025
...
1000  65047  0.52
1000  65048  0.14
```
I want to forecast a user's power usage based on their past usage and their survey answers, using an LSTM. How should I start?
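One common way to combine a static feature set (the survey answers) with a sequence (the half-hourly readings) is to one-hot encode the survey answers and repeat them at every timestep of each sliding window, so the LSTM input has shape (samples, timesteps, 1 + n_survey_features). Below is a minimal sketch of that data preparation in numpy; the function name, window length, and the synthetic data are all illustrative assumptions, not part of the question's dataset.

```python
import numpy as np

def make_windows(usage, survey_onehot, window=48, horizon=1):
    """Slide a fixed-length window over one user's usage series; tile the
    static survey features across every timestep so the LSTM sees them
    alongside the usage value at each step."""
    X, y = [], []
    for i in range(len(usage) - window - horizon + 1):
        seq = usage[i:i + window].reshape(-1, 1)        # (window, 1)
        static = np.tile(survey_onehot, (window, 1))    # (window, n_survey)
        X.append(np.hstack([seq, static]))              # (window, 1 + n_survey)
        y.append(usage[i + window + horizon - 1])       # value to forecast
    return np.array(X), np.array(y)

usage = np.random.rand(200)               # e.g. 200 half-hour readings for one user
survey = np.array([1.0, 0.0, 0.0, 1.0])   # e.g. one-hot of two survey answers
X, y = make_windows(usage, survey)
print(X.shape, y.shape)                   # (152, 48, 5) (152,)
```

An alternative design is a two-input Keras model (functional API): an LSTM branch for the usage window, a Dense branch for the survey vector, concatenated before the output layer. That avoids repeating the static features at every timestep, at the cost of a slightly more complex model definition.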
See also questions close to this topic

Generate .net model using CNN Machine learning
Can anyone please help me generate (train) a CNN model in .net format using image data, and make it usable for prediction, the same way we generate an .h5 model? I am struggling to find a resource that covers this; if anyone has a useful reference or tutorial with guidance, please let me know.

No overfitting when increasing the number of epochs
I use a feed-forward neural network with one hidden layer for my thesis. For this I have 600 training samples and 104 input and output values. I want to demonstrate the properties of the neural network, and in particular show overfitting as I increase the number of epochs. To do so, I first wanted to find the optimum for the learning rate and the number of hidden nodes, where I got the following results:
Based on that, I decided to choose a learning rate of 0.0125 and 250 hidden nodes. But with this set of parameters I still see no overfitting when I increase the number of epochs, as can be seen here:
In this plot, the blue curve shows my old set of parameters; in theory I wanted to show how the best set of parameters improves on it, but it just varies a bit. I also ran it out to epoch 1000, and the accuracy there was still 0.830.
Does anyone have an idea why this happens?
Thanks a lot for your help!

Serving multiple deep learning models from cluster
I was thinking about how one should deploy multiple models for use. I am currently working with TensorFlow, and I was referring to this and this article.
But I am not able to find any article that addresses serving several models in a distributed manner. Q.1. Does TensorFlow Serving only serve models from a single machine? Is there a way to set up a cluster of machines running TensorFlow Serving, so that multiple machines serve the same model (somewhat like master and slave), or load-balance between them while serving different models?
Q.2. Does similar functionality exist for other deep learning frameworks, say Keras, MXNet, etc. (not restricting to TensorFlow and serving models from different frameworks)?
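On Q.1: a single TensorFlow Serving process can serve several models at once via a model config file passed with `--model_config_file`. A minimal sketch (the model names and base paths below are placeholders, not from the question):

```
model_config_list {
  config {
    name: "model_a"
    base_path: "/models/model_a"
    model_platform: "tensorflow"
  }
  config {
    name: "model_b"
    base_path: "/models/model_b"
    model_platform: "tensorflow"
  }
}
```

Started with something like `tensorflow_model_server --port=8500 --model_config_file=/models/models.config`. Because the server is stateless, scaling across machines is typically done by running identical serving instances behind an external load balancer rather than by any built-in master/slave mechanism.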

Multi horizon forecast with multiple time series using LSTM
I am new to LSTMs for time series. Most information on the internet covers a single time series and one-step forecasting. My problem is a multi-horizon forecast over multiple time series, using LSTM in R.
1) Shape of X_train?
I want to use an LSTM to forecast 100 time series, all monthly, each of length 54 months. The output should be a 6-month forecast, so I need a 100×6 matrix as output. I use a sliding-window approach with an input window of 15 months and an output of 6 months. Therefore I have 34 different (X, Y) pairs for each time series. I reshaped X_train to (3400, 15, 1); is this correct?
2) How to choose "batch_size" and "units"?
In order to get a 100×6 matrix as output, how should I choose "batch_size" and "units"? In the documentation of the layer_dense and layer_lstm functions, units is defined as the dimensionality of the output space. So should I set units = 6? I have difficulty understanding what batch_size and units refer to, so any explanation would be appreciated.
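The window arithmetic can be checked quickly; here is a sketch in Python/numpy (the question is about R, but the shapes are identical): with series of length 54, input window 15, and horizon 6, each series yields 54 − 15 − 6 + 1 = 34 pairs, so (3400, 15, 1) is indeed the right X_train shape. The synthetic data below is only for illustration.

```python
import numpy as np

n_series, length, n_in, n_out = 100, 54, 15, 6
series = np.random.rand(n_series, length)     # placeholder for the 100 monthly series

X, Y = [], []
for s in series:
    # one (input, output) pair per valid window position: 34 per series
    for i in range(length - n_in - n_out + 1):
        X.append(s[i:i + n_in])
        Y.append(s[i + n_in:i + n_in + n_out])

X = np.array(X).reshape(-1, n_in, 1)   # (3400, 15, 1): samples, timesteps, features
Y = np.array(Y)                        # (3400, 6): 6-step-ahead targets
print(X.shape, Y.shape)
```

On the terminology: `units` in the LSTM layer is the size of its hidden state (a free hyperparameter), while the *final* Dense layer should have units = 6 so each sample produces a 6-month forecast; `batch_size` is just how many samples are processed per gradient update and does not affect the output shape.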

When to use GlobalAveragePooling1D and when to use GlobalMaxPooling1D while using Keras for an LSTM model?
I have to build an LSTM classification model for some text, and I am confused between GlobalAveragePooling1D and GlobalMaxPooling1D for the pooling layer in Keras. Which one should I use, and what should I consider when making the choice?
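Both layers collapse the timestep axis of a (batch, timesteps, features) tensor down to (batch, features); the difference is only the reduction. Average pooling summarizes the whole sequence (often a reasonable default when the signal is spread across the text), while max pooling keeps the single strongest activation per feature (often useful when one decisive phrase determines the class). A small numpy sketch of what each layer computes:

```python
import numpy as np

# Input shape (batch, timesteps, features) -> output shape (batch, features).
x = np.array([[[1., 4.],
               [3., 2.],
               [5., 0.]]])          # 1 sample, 3 timesteps, 2 features

avg_pooled = x.mean(axis=1)         # GlobalAveragePooling1D -> [[3., 2.]]
max_pooled = x.max(axis=1)          # GlobalMaxPooling1D     -> [[5., 4.]]
print(avg_pooled, max_pooled)
```

In practice it is cheap to try both and compare validation accuracy, since swapping the layer changes nothing else about the model.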

Converting LSTM predicted data to the original format
What I am trying to achieve
My code below works for predicting the "NG Open" price, but the output stays in scaled form. I want to compare the predicted price with actual values such as $4.30, $3.20, $2.60, etc. for all 100 rows, but I cannot figure out the code that converts my predicted values back to real prices.
Dataset (first few lines):

```
Contract  NGLast  NGOpen  NGHigh  NGLow  NGVolumes  COOpen  COHigh  COLow
20181201  4.487   4.50    4.60    4.03   100,000    56.00   58.00   50.00
20190101  4.450   4.52    4.61    4.11   93,000     51.00   53.00   45.00
```
Code
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from keras.layers import Dense, LSTM
from keras.models import Sequential
from keras import metrics
from sklearn.preprocessing import MinMaxScaler

data = pd.read_excel(r"C:\Futures\Futures.xls")
data['Contract'] = pd.to_datetime(data['Contract'], unit='s').dt.date
data['NG Last'] = data['NG Last'].str.rstrip('s')
data['CO Last'] = data['CO Last'].str.rstrip('s')

COHigh = np.array([data.iloc[:, 8]])
COLow = np.array([data.iloc[:, 9]])
NGLast = np.array([data.iloc[:, 1]])
NGOpen = np.array([data.iloc[:, 2]])
NGHigh = np.array([data.iloc[:, 3]])

X = np.transpose(np.concatenate([COHigh, COLow, NGLast, NGOpen], axis=0))
Y = np.transpose(NGHigh)

# Use two separate scalers: refitting one scaler on Y overwrites the X fit,
# and the Y scaler is what must be inverted later.
x_scaler = MinMaxScaler()
y_scaler = MinMaxScaler()
X = x_scaler.fit_transform(X)
Y = y_scaler.fit_transform(Y)

X = np.reshape(X, (X.shape[0], 1, X.shape[1]))
print(X.shape)

model = Sequential()
model.add(LSTM(100, activation='tanh', input_shape=(1, 4),
               recurrent_activation='hard_sigmoid'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='rmsprop',
              metrics=[metrics.mae])
model.fit(X, Y, epochs=10, batch_size=1, verbose=2)

Predict = model.predict(X, verbose=1)           # already in [0, 1] scale
inversed = y_scaler.inverse_transform(Predict)  # back to dollar prices
print(inversed)
```
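The likely pitfall in code like this is reusing one `MinMaxScaler`: fitting it on X and then refitting it on Y discards the X fit, and calling `transform` on predictions that are already in the scaled space squashes them a second time. Predictions come out of the model already scaled, so the only step needed is `inverse_transform` with a scaler fitted on the target column alone. A minimal round-trip sketch (prices made up for illustration):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Fit a scaler on the target column only, so predictions made in [0, 1]
# space can be mapped straight back to dollar prices.
prices = np.array([[4.30], [3.20], [2.60]])
y_scaler = MinMaxScaler()
scaled = y_scaler.fit_transform(prices)          # values in [0, 1]

# Pretend these are model predictions in the scaled space:
pred_scaled = np.array([[1.0], [0.0], [0.5]])
pred_prices = y_scaler.inverse_transform(pred_scaled)
print(pred_prices)    # approximately [[4.30], [2.60], [3.45]]
```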

VBA Forecasting Excel Solver
I need to do a forecast for product sales, and I need to find the optimal α, β, γ, perhaps using Excel Solver. But my program is dynamic and is updated with new sales, so VBA code was created to recalculate the mean squared error as new data become available. How can I use Excel Solver in this case to find α, β, γ?
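The underlying task is minimizing one-step-ahead MSE over (α, β, γ) with each parameter bounded in (0, 1). Setting the Excel/VBA specifics aside, here is the same optimization sketched in Python with scipy, assuming an additive Holt-Winters model (the function name, initialization scheme, and synthetic data are illustrative assumptions, not the asker's model):

```python
import numpy as np
from scipy.optimize import minimize

def hw_mse(params, y, m):
    """One-step-ahead MSE of additive Holt-Winters smoothing with
    smoothing parameters alpha, beta, gamma and season length m."""
    alpha, beta, gamma = params
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    seasonal = list(y[:m] - level)
    sq_errors = []
    for i in range(m, len(y)):
        s = seasonal[i - m]
        sq_errors.append((y[i] - (level + trend + s)) ** 2)
        prev_level = level
        level = alpha * (y[i] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonal.append(gamma * (y[i] - level) + (1 - gamma) * s)
    return np.mean(sq_errors)

# Synthetic monthly sales with trend and yearly seasonality (illustrative only).
t = np.arange(60)
y = 100 + 2 * t + 10 * np.sin(2 * np.pi * t / 12) \
    + np.random.default_rng(0).normal(0, 2, 60)

res = minimize(hw_mse, x0=[0.5, 0.1, 0.1], args=(y, 12),
               bounds=[(0.01, 0.99)] * 3, method="L-BFGS-B")
print(res.x, res.fun)
```

When new sales arrive, re-running the minimization on the extended series plays the same role as re-invoking Solver from VBA after the MSE cells update.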

Creating ts objects in R with specific days between each registry
I am looking for a method to create `ts` objects in R for the following case: I have a data set of demand for several products where the number of days between each demand is specific (the lead time). These gaps between demands are not constant, as they change depending on the product. Let's take a trivial example:

```
Date      Demand
20160325  2
20160330  0
20160404  5
20160409  3
20160414  4
...
20171231  2
```

Here, the lead time (interdemand time, or gap between days) is 5 days. The data starts at `20160325` and ends at `20171231`, with `format = %Y%m%d`. After reading the `ts` documentation, I tried creating my `ts` object with the following expression:

```
ts(df, frequency = (360/5), start = c(2016, 16))
```

However, I receive the following result:

```
Time Series:
Start = c(2016, 17)
End = c(2018, 4)
Frequency = 72
```

which is naturally wrong, as the series ends at `20171231` while the output shows `End = c(2018, 4)`, which would be `20180120` by my calculations.

What is the best way to set up `ts` for a dataset with these characteristics (a starting date, an ending date, and a lead time)? What if the lead time weren't 5 days but 18, and the series started on a different initial date?

Thanks in advance for your help.

P.D. I calculated 16 in the parameter `start = c(2016, 16)` by manually counting how many 5-day steps from the 5th of January are needed to reach the 25th of March (using 30 days in each month).