BEKK model simulation in R
I have been working with BEKK(1,1) models of dimensions 3, 4, and 5 for a time series analysis. I was given feedback to include a simulation study. In order to trust the results that I obtain, I want to show, via simulations, that the estimation of the BEKK model parameters also works well for the sample sizes considered in the paper, and that the distributional theory can be applied for my sample size.
That is, I want to investigate whether the sample size is large enough to apply the asymptotic results.
Method:
I wish to generate data from the fitted model in the case of dimension 3, with sample size 3000; estimate the parameters by fitting a BEKK model to the generated data set; and repeat this step, say, 10000 times. I then obtain 10000 estimates of each parameter, from which the sampling distribution can be constructed and compared with the asymptotic distribution.
Then repeat this procedure for dimensions 4 and 5.
#I've been using the mgarchBEKK package when creating my BEKK models.
#The package provides the example below as help for simulation:
## Simulate series:
simulated <- simulateBEKK(2, 1000, c(1, 1))
## Prepare the matrix:
simulated <- do.call(cbind, simulated$eps)
## Estimate with default arguments:
estimated <- BEKK(simulated)
I'm not a master of R by any means, so I'm not quite sure how to code the procedure that I describe above.
Any help is greatly appreciated :)
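The procedure described above can be sketched as a loop around the package's own example. This is a minimal sketch only, assuming the mgarchBEKK interface shown (simulateBEKK, BEKK) and assuming the fitted object exposes its coefficient matrices in an est.params slot; the exact slot name and layout may differ between package versions, so check str(fit) first:

```r
library(mgarchBEKK)

n_rep <- 100    # increase towards 10000 once one fit's runtime is known
n_obs <- 3000   # sample size considered in the paper
d     <- 3      # dimension; repeat the study with 4 and 5

estimates <- vector("list", n_rep)
for (i in seq_len(n_rep)) {
  sim <- simulateBEKK(d, n_obs, c(1, 1))   # simulate a BEKK(1,1) series
  eps <- do.call(cbind, sim$eps)           # series as a T x d matrix
  fit <- BEKK(eps)                         # re-estimate on simulated data
  estimates[[i]] <- fit$est.params         # assumed slot holding C, A, G
}

# Sampling distribution of, e.g., the (1,1) element of the second
# parameter matrix, overlaid with a normal density for comparison:
a11 <- sapply(estimates, function(p) p[[2]][1, 1])
hist(a11, freq = FALSE, main = "Sampling distribution of A[1,1]")
curve(dnorm(x, mean(a11), sd(a11)), add = TRUE)
```

To simulate from the fitted model rather than from simulateBEKK's default parameters, pass the fitted coefficient vector to simulateBEKK's params argument if your package version supports it (see ?simulateBEKK). Since each trivariate fit is slow, time one iteration before committing to 10000 replications.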
See also questions close to this topic

How to detect if a string contains a regex pattern using dplyr pipe mutate
I have the following data frame:
library(tidyverse)
dat <- structure(list(peptide_id = c("PD_22374", "PD_20472", "PD_17483"),
                      peptide = c("EVHNPWNFIPDFQRSRQQHAFKKIRKHRRA",
                                  "KKEPQICTWKIQVRFSMNKKVWRKGTQKKK",
                                  "NESVPKTHGDVINTGIKERRSKKAKSITKV")),
                 row.names = c(NA, 3L), class = c("tbl_df", "tbl", "data.frame"))
dat
#> # A tibble: 3 x 2
#>   peptide_id peptide
#>   <chr>      <chr>
#> 1 PD_22374   EVHNPWNFIPDFQRSRQQHAFKKIRKHRRA
#> 2 PD_20472   KKEPQICTWKIQVRFSMNKKVWRKGTQKKK
#> 3 PD_17483   NESVPKTHGDVINTGIKERRSKKAKSITKV
I'd like to detect which of the rows above contain this regex pattern:
K[KR].{1}[KR]
We'd like to have another column which gives a YES or NO. How can I go about it?
This figure showed which rows contain that pattern:
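One way to do this, sketched with stringr::str_detect() inside a mutate() pipe (the column name has_motif is my own choice, not from the question):

```r
library(tidyverse)

dat <- tibble(
  peptide_id = c("PD_22374", "PD_20472", "PD_17483"),
  peptide    = c("EVHNPWNFIPDFQRSRQQHAFKKIRKHRRA",
                 "KKEPQICTWKIQVRFSMNKKVWRKGTQKKK",
                 "NESVPKTHGDVINTGIKERRSKKAKSITKV")
)

# str_detect() returns TRUE/FALSE per row; if_else() maps that to YES/NO.
# Rows 1 and 3 contain the motif (e.g. "KKIR", "KKAK"); row 2 does not.
dat <- dat %>%
  mutate(has_motif = if_else(str_detect(peptide, "K[KR].{1}[KR]"),
                             "YES", "NO"))
dat
```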

How to replace the title of columns in a merged document with the file directory using R?
I have performed an experiment under different conditions. Each condition has its own folder, and in each of those folders there is a subfolder for each replicate that contains a text file called DistList.txt. It looks like this, where the folders "C1.1", "C1.2" and so on contain the mentioned .txt files:
I have now managed to combine all those single DistList.txt files using the following script:
setwd("~/Desktop/Experiment/.")
fileList <- list.files(path = ".", recursive = TRUE,
                       pattern = "DistList.txt", full.names = TRUE)
listData <- lapply(fileList, read.table)
names(listData) <- gsub("DistList.txt", "", basename(fileList))
library(tidyverse)
library(reshape2)
bind_rows(listData, .id = "FileName") %>%
  group_by(FileName) %>%
  mutate(rowNum = row_number()) %>%
  dcast(rowNum ~ FileName, value.var = "V1") %>%
  select(-rowNum) %>%
  write.csv(file = "Result.csv")
This then yields a .csv file that has just numbers as titles (marked in red), which are not that useful for me, as shown in this picture:
I would rather like to have the directory of the DistList.txt files, or even better only the name of the folder they are in, as a title. I thought that I could do that using the functions list.dirs() and colnames(), but I somehow didn't manage to get it to work. I would be very grateful if someone could help me with this issue!
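One way to get folder names as titles, a sketch building on the script above: dirname() gives each file's directory, and basename() of that directory is the replicate folder's name (e.g. "C1.1"). Naming the list elements this way means bind_rows(.id = "FileName") carries the folder names into the column used for the headers:

```r
fileList <- list.files(path = ".", recursive = TRUE,
                       pattern = "DistList.txt", full.names = TRUE)
listData <- lapply(fileList, read.table)

# name each list element after the folder holding its DistList.txt,
# so the later dcast() uses those names as column titles
names(listData) <- basename(dirname(fileList))
```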

R: RCurl 400 Bad Request
I am trying to send a request to an API using the RCurl library.
My code:
start <- "20180730"
end <- "20180815"
api_request <- paste("https://api-metrika.yandex.ru/stat/v1/data.csv?id=34904255&date1=", start,
                     "&date2=", end,
                     "&dimensions=ym:s:searchEngine&metrics=ym:s:visits&dimensions=ym:s:<attribution>SearchPhrase&filters=ym:s:<attribution>SearchPhrase!~'somephraseshere'&limit=100000&oauth_token=OAuth_token_here",
                     sep = "")
s <- getURL(api_request)
Every time I try it, I get an "Error 400" response, or "Bad Request" if I use getURLContent instead. When I just open this URL in my browser, it works correctly. I still couldn't find any solution for this problem, so if somebody knows something about it, please help me, kind man =)
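One possible cause (an assumption on my part, not confirmed by the question): the query string contains characters such as <, >, ! and ~ that a browser percent-encodes automatically, while RCurl sends them raw, which a server may reject with 400. A sketch encoding the filter value before the URL is pasted together:

```r
# Hypothetical fix: percent-encode parameter values containing reserved
# characters (utils::URLencode with reserved = TRUE encodes them all),
# then paste the encoded value into the URL instead of the raw string.
filters <- URLencode("ym:s:<attribution>SearchPhrase!~'somephraseshere'",
                     reserved = TRUE)
filters
```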

How to aggregate statistics from all published apps in the Play Store
In advance, sorry for my vagueness; I'm very new to this...
I need to learn how to aggregate the statistics of all the apps published by my company.
More specifically, I need, for example, to get the number of downloads, day after day and month after month, for all the apps that the company has published. Google already provides some statistics in the Google Play Console, but these are given systematically app by app, nothing more general, and there are several views that my company requires. I need to create "custom views" according to the needs of the company.
Eventually, it would be ideal to have a dynamic webpage that displays only the desired information. But for now, I would already be very happy to have a new .csv file that gathers the required information from the .csv files provided by Google.
So far, I tried to follow the indications provided by Google. Starting from this page, I created a Google Cloud Storage account, which seems to be a HUGE thing that completely lost me (and it seems to cost money). I also tried to learn how to use gsutil, which, as far as I understand, is a console interface to Google Cloud Storage. These tools seem quite complex to learn, so before I dive in, I want to be sure they are the right ones.
I would be very glad if I could get some hints on how to proceed. And of course, I would be glad to give any information that could be useful.

Python Mann-Whitney confidence interval
I have two datasets (pandas Series), ds1 and ds2, for which I want to calculate the 95% confidence interval for the difference in means (if normal) or medians (if non-normal).
For the difference in means, I calculate the t-test statistic and CI like this:
import statsmodels.api as sm
tstat, p_value, dof = sm.stats.ttest_ind(ds1, ds2)
CI = sm.stats.CompareMeans.from_data(ds1, ds2).tconfint_diff()
for median, I do:
from scipy.stats import mannwhitneyu
U_stat, p_value = mannwhitneyu(ds1, ds2, True, "two-sided")
How do I calculate a CI for the difference in medians?

Format data to run ANOVA in R
I am trying to run a 3-way ANOVA in R, but the values for each variable are in one column and not separated by rows. Currently, my data frame looks something like this:
Season Site Location Replicate Lengths
Jan_16 MI   Adj      1.00      ,
Jan_16 MI   Adj      2.00      ,
Jan_16 MI   Adj      3.00      ,
Jan_16 MI   Away     1.00      3,4,
Jan_16 MI   Away     2.00      ,
Jan_16 MI   Away     3.00      ,
Jan_16 MP   Adj      1.00      4,5,6,5,4,5,4,4,4,4,5,4,6,4,
Jan_16 MP   Adj      2.00      4,4,3,3,5,4,3,4,5,3,4,3,4,3,4,6,
Jan_16 MP   Adj      3.00      4,6,5,5,4,
Jan_16 MP   Away     1.00      ,4,4,10,4,5,4,6,5,5,
Jan_16 MP   Away     2.00      3,4,4,4,5,5,4,5,
Jan_16 MP   Away     3.00      4,4,13,4,
Lengths is the response variable that I wish to run the ANOVA on; how would I do this? A lone "," means there is no data.
EDIT: I have tried separate_rows:
library(tidyr)
separate_rows(data.frame, Season:Replicate, Lengths, convert = numeric)
#Error: All nested columns must have the same number of elements
The Lengths entries have different numbers of values, so is there a way to unnest this?
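A sketch of what may be intended: give separate_rows() only the column to split (not Season:Replicate), with an explicit sep and convert = TRUE; the grouping columns are then repeated automatically, so rows no longer need equal numbers of elements. Column names follow the example above, and df stands for the asker's data frame:

```r
library(tidyr)
library(dplyr)

long <- df %>%
  separate_rows(Lengths, sep = ",", convert = TRUE) %>%  # one row per length
  filter(!is.na(Lengths))                                # drop empty pieces

# then, e.g.:
# summary(aov(Lengths ~ Season * Site * Location, data = long))
```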

Keras and input shape to Conv1D issues
First off, I am very new to Neural Nets and Keras.
I am trying to create a simple neural network using Keras where the input is a time series and the output is another time series of the same length (1-dimensional vectors).
I made dummy code that creates random input and output time series and uses a Conv1D layer. The Conv1D layer outputs 6 different time series (because I have 6 filters), and the next layer I define adds all 6 of those outputs into one, which is the output of the entire network.
import numpy as np
import tensorflow as tf
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Conv1D, Input, Lambda

def summation(x):
    y = tf.reduce_sum(x, 0)
    return y

time_len = 100    # total length of time series
num_filters = 6   # number of filters/outputs to Conv1D layer
kernel_len = 10   # length of kernel (memory size of convolution)

# create random input and output time series
X = np.random.randn(time_len)
Y = np.random.randn(time_len)

# Create neural network architecture
input_layer = Input(shape = X.shape)
conv_layer = Conv1D(filters = num_filters, kernel_size = kernel_len,
                    padding = 'same')(input_layer)
summation_layer = Lambda(summation)(conv_layer)

model = Model(inputs = input_layer, outputs = summation_layer)
model.compile(loss = 'mse', optimizer = 'adam', metrics = ['mae'])
model.fit(X, Y, epochs = 1, metrics = ['mae'])
The error I get is:
ValueError: Input 0 of layer conv1d_1 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 100]
Looking at the Keras documentation for Conv1D, the input shape is supposed to be a 3D tensor of shape (batch, steps, channels), which I don't understand, since I am working with 1-dimensional data.
Can you explain the meaning of each of the items batch, steps, and channels? And how should I shape my 1D vectors to allow my network to run?

Taking a moving average in a nested data frame
set.seed(123)
df <- data.frame(
  region = rep(1:3, each = 45),
  term = rep(rep(c("a", "b", "c"), 15), 3),
  period = rep(rep(1:15, each = 3), 3),
  X = rnorm(3 * 45)  # one draw per row (rnorm(nrow(df)) cannot refer to df here)
)
I have a nested data frame in which I have measures for 3 variables ("a", "b", and "c", in the term column, with the corresponding measure given in the X column), recorded over 15 time periods and across 3 regions. I want to create a new column, X_moving_av, which is the average of "a", "b", and "c" respectively over the previous 3 periods within that region. So, for example, take region 1, term "b", in period 10. In the new column X_moving_av, I want the following number to appear:
with(df, mean(c(X[region == 1 & term == "b" & period == 10],
                X[region == 1 & term == "b" & period == 9],
                X[region == 1 & term == "b" & period == 8])))
Then the cell beneath would be:
with(df, mean(c(X[region == 1 & term == "c" & period == 10],
                X[region == 1 & term == "c" & period == 9],
                X[region == 1 & term == "c" & period == 8])))
...and so on for the whole df (excluding the first 2 periods, for which I don't have 3 periods of measures).
What's the best way to iterate this? I actually have a lot of variables recorded in the term column (i.e., many more than just "a", "b", and "c"), and hundreds of regions and periods, so I need something as general as possible.
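A general sketch with dplyr and zoo, under the example's reading that the window covers the current and two preceding periods: group by region and term, sort by period, and take a right-aligned 3-period rolling mean with zoo::rollapplyr() (the first two periods per group come out as NA):

```r
library(dplyr)
library(zoo)

df <- df %>%
  group_by(region, term) %>%
  arrange(period, .by_group = TRUE) %>%
  # right-aligned rolling mean over periods t, t-1, t-2; NA until
  # three observations are available within the group
  mutate(X_moving_av = rollapplyr(X, 3, mean, fill = NA)) %>%
  ungroup()
```

Because the computation is grouped, the same code works unchanged for any number of terms, regions, and periods.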
decompose time series in R: Incompatible methods
I am trying to use the decompose() function in R on a small 34-row dataset. The dataset has two columns, Time and DSCH_COUNT; Time is like 2010Q1...2018Q2, and DSCH_COUNT is just integers, like so:
Time    DSCH_COUNT
2010Q1  4036
2010Q2  4028
2010Q3  3991
2010Q4  3968
2011Q1  3768
2011Q2  3766
2011Q3  3939
2011Q4  3885
2012Q1  4302
2012Q2  4345
2012Q3  4259
2012Q4  3994
2013Q1  3831
2013Q2  3850
2013Q3  3773
2013Q4  3567
2014Q1  3617
2014Q2  3645
2014Q3  3539
2014Q4  3239
2015Q1  3112
2015Q2  3089
2015Q3  3360
2015Q4  3175
2016Q1  3181
2016Q2  3141
2016Q3  2899
2016Q4  2899
2017Q1  3104
2017Q2  3071
2017Q3  3032
2017Q4  3030
2018Q1  3159
2018Q2  3198
My code is as follows:
library(xts)
fileToLoad <- file.choose(new = TRUE)
discharges <- read.csv(fileToLoad)

m <- ts(discharges$DSCH_COUNT, frequency = 4, start = c(2010, 01))
class(m)
m
m2 <- window(m, start = c(2012, 2), end = c(2014, 2))
m2

# convert entire data frame into a ts object
m_ts <- ts(discharges[, 1], frequency = 4, start = c(2010, 01))
class(m_ts)
head(m_ts)
m_ts

# Convert ts object to xts object
m.xts <- as.xts(m)
head(m.xts)

plot.ts(m)
plot.ts(m_ts)
plot.xts(m.xts)

components <- decompose(m.xts)
names(components)
plot(components)
The error I get when using decompose() is:
Warning messages:
1: In decompose(m.xts) :
  Incompatible methods ("Ops.xts", "Ops.ts") for "-"
2: In structure(list(x = x, seasonal = seasonal, trend = trend, random = if (type == :
  Incompatible methods ("Ops.xts", "Ops.ts") for "-"
3: In .cbind.ts(list(e1, e2), c(deparse(substitute(e1))[1L], deparse(substitute(e2))[1L]), :
  non-intersecting series
I am not really sure why this is happening.
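The warnings suggest (my reading, not stated in the question) that decompose(), which is written for plain ts objects, is being handed an xts object, mixing the two classes' arithmetic methods. A sketch: decompose the ts object directly and keep xts only for plotting:

```r
# decompose() expects a plain ts object, not an xts one; rebuild the
# quarterly series from the data read above and decompose that instead:
m <- ts(discharges$DSCH_COUNT, frequency = 4, start = c(2010, 1))
components <- decompose(m)
plot(components)
```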

AnyLogic "variable cannot be resolved or is not a field"
I'm currently building a three-staged simulation model in AnyLogic, consisting of a supplier, a manufacturer, and a customer. I've introduced the variables C, R, and Z for the parameters costs, revenue, and backlog respectively.
Sadly, running the model leads to the error messages "source cannot be resolved" and "mean cannot be resolved".
Clicking on an error message opens up my bar graph, marking "source" or "mean".
This might be simple, but nevertheless, help is appreciated.
Kind regards

In AnyLogic, exception with agent.setSpeed()
I have a simple AnyLogic model for pedestrian movement from a start line towards a target line. I want to change the speed of the moving agents under a certain condition, which I test using events: if the number of agents in a specific area exceeds 20, I change the speed of the agents in the previous area using agent.setSpeed(). When I run the simulation and the event is triggered, I get this exception:

Keypress simulation in React
I'll try to describe my problem. When the user refreshes the page, I need to automatically simulate an event, as if he had pressed "enter". I tried this tutorial https://codeexamples.net/en/q/91a01 and a lot of others, but it doesn't seem to work for me.
I assume that this keypress simulation has to be in componentDidMount(), but maybe I am wrong?
Is it possible to do it without jQuery?

Convergence of Infinite root
How can I determine if the following series converges?

Specifying the mixture means when using regmixEM or flexmix in R
I'm currently trying to fit mixture regression models to my data. Unfortunately, the algorithm doesn't converge for k > 2, so I would like to give better starting values instead of those R chooses. However, the syntax is:
regmixEM(y, x, lambda = NULL, beta = NULL, sigma = NULL, k = 2,
         addintercept = TRUE, arbmean = TRUE, arbvar = TRUE,
         epsilon = 1e-08, maxit = 10000, verb = FALSE)
Specifying mu didn't work, since it doesn't seem to be a parameter I can give. Is there any way to specify the parameters? I know that there is a package called flexmix. The syntax of the flexmix function is:
flexmix(formula, data = list(), k = NULL, cluster = NULL, model = NULL,
        concomitant = NULL, control = NULL, weights = NULL)

## S4 method for signature 'flexmix'
summary(object, eps = 1e-4, ...)
Reading the manual, I can only find a way to manipulate the membership probabilities, but not mu.
Is there a way to get the algorithm to converge?
I'm grateful for any help, thanks.
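In mixtools' regmixEM, the component means enter through the regression coefficients, so starting values can be supplied via the beta argument (a matrix with one column per component, rows holding intercept and slopes); lambda and sigma accept starting values too. A sketch on toy data, where all names and numbers are hypothetical:

```r
library(mixtools)
set.seed(1)

# toy data with two regression components (hypothetical)
x <- runif(200)
y <- c(1 + 2 * x[1:100], 5 - 1 * x[101:200]) + rnorm(200, sd = 0.3)

# starting coefficients: column j = (intercept, slope) of component j
beta_start <- matrix(c(1, 2,
                       5, -1), nrow = 2)

fit <- regmixEM(y, x, beta = beta_start, k = 2)
fit$beta
```

Good starting values chosen this way often help the EM algorithm converge for larger k, though convergence is not guaranteed.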

Generative Adversarial Networks: divergence of the discriminator and picture artifacts (Pix2Pix)
I'm currently trying to implement the Pix2Pix algorithm, which is a GAN structure, but I have some issues with the convergence of the discriminator and the output pictures of the generator...
1) Convergence problem:
It seems that the discriminator doesn't converge at all. When I plot the loss of the generator, it seems to work very well, but when I plot the loss of the discriminator, I get the following plot:
Do you know what the possible reasons for such behavior are? How can I stabilize the learning of the discriminator?
2) Chromatic aberrations
I also have some problems with the generated pictures. Indeed, I often get a total saturation of the colors; the generated objects have only one color, like this:
One apparent solution is to train the discriminator only every 200 steps, for example; in that case I obtain something like this:
But it is not satisfying at all...
(To clarify: the first column is the input of the generator, the second is the output of the generator, and the third is the target picture. For the moment I'm only trying to get my network to reproduce the same picture... it should be easy...)
NB: the initialization also seems to play a really important role in the colors; indeed, with the exact same parameters, I obtained really different results after thousands of steps.
Does someone have an idea to explain these phenomena?
Thanks a lot for reading, and for your potential help!