BEKK model simulation in R
I have been working with BEKK(1,1) models of dimensions 3, 4, and 5 for a time series analysis. I was given the feedback to include a simulation study. In order to trust the results that I obtain, I want to show via simulations that the estimation of the BEKK model parameters also works well for the sample sizes considered in the paper, i.e. that the asymptotic distributional theory can be applied at my sample sizes.
In other words, I want to investigate whether the sample size is large enough to apply the asymptotic results.
Method:
I wish to generate data from the fitted model in the case of dimension 3, with sample size 3000, estimate the parameters by fitting a BEKK model to each generated data set, and repeat this step, say, 10000 times. I then obtain 10000 estimates of each parameter, from which the sampling distribution can be constructed and compared with the asymptotic distribution.
Then repeat this procedure for dimensions 4 and 5.
# I've been using the mgarchBEKK package when creating my BEKK models.
# The package provides the example below as help for simulation:
## Simulate series:
simulated <- simulateBEKK(2, 1000, c(1, 1))
## Prepare the matrix:
simulated <- do.call(cbind, simulated$eps)
## Estimate with default arguments:
estimated <- BEKK(simulated)
I'm not a master in R by any means. So I'm not quite sure how to code the procedure that I describe above.
Any help is greatly appreciated :)
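A minimal sketch of the Monte Carlo loop described above, assuming the mgarchBEKK package. Note this uses the package's default data-generating process; whether you can pass your own fitted coefficient matrices to simulateBEKK (via a params-style argument) depends on the package version, so check ?simulateBEKK. The component name est.params is also an assumption to verify against your fitted object. 10000 replications will be slow, so start small:

```r
library(mgarchBEKK)  # assumed installed

n.rep <- 100   # increase towards 10000 once the timing is acceptable
n.obs <- 3000
dim   <- 3

estimates <- vector("list", n.rep)
set.seed(1)
for (r in seq_len(n.rep)) {
  sim <- simulateBEKK(dim, n.obs, c(1, 1))  # ideally seeded with your fitted parameters
  y   <- do.call(cbind, sim$eps)
  fit <- BEKK(y, order = c(1, 1))
  estimates[[r]] <- unlist(fit$est.params)  # assumed component name; check str(fit)
}

est.mat <- do.call(rbind, estimates)
# sampling distribution of, e.g., the first parameter, to compare
# against the asymptotic normal:
hist(est.mat[, 1], main = "Sampling distribution of parameter 1")
```

Repeating the outer loop with dim set to 4 and 5 covers the other two cases.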
See also questions close to this topic

Using a numerical list to subset a large dataframe
I have a large dataframe that I want to subset using a list of factors, but I don't know how.
This is my dataframe:
I_have <- data.frame(Company_id = as.factor(c(1006, 1007, 1123, 1556, 2000, 2006, 1200, 1277, 1266)), Value = c(5, 6, 3, 7, 9, 11, 12, 17, 16))
and this is the list of companies I wish to subset by:
Companies_needed <- data.frame(Company_id = as.factor(c(1123, 1556, 2000, 1200)))
and this is what I want:
I_want <- data.frame(Company_id = as.factor(c(1123, 1556, 2000, 1200)), Value = c(3, 7, 9, 12))
My real data has 500,000 rows in the "I_have" dataframe and "Companies_needed" has 7,000 rows, so I can't type out all the combinations and must use the dataframes themselves to subset with.
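For the record, a membership filter with %in% (or equivalently a merge) handles this without typing any combinations; a base-R sketch using the example data from the question:

```r
I_have <- data.frame(
  Company_id = as.factor(c(1006, 1007, 1123, 1556, 2000, 2006, 1200, 1277, 1266)),
  Value      = c(5, 6, 3, 7, 9, 11, 12, 17, 16))
Companies_needed <- data.frame(
  Company_id = as.factor(c(1123, 1556, 2000, 1200)))

# keep only the rows whose Company_id appears in Companies_needed
I_want <- I_have[I_have$Company_id %in% Companies_needed$Company_id, ]

# merge() returns the same rows (possibly in a different order):
# merge(I_have, Companies_needed, by = "Company_id")
```

Both scale fine to 500,000 rows; %in% is hash-based, not a pairwise comparison.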

R: Efficient way to test whether a pair of vectors is disjoint
I want to know if two vectors have any elements in common. I don't care what the elements are, how many common elements there are, or what positions they are at within either vector. I just need a simple, efficient function
EIC(vec1, vec2)
that returns TRUE if there exists some element in both vec1 and vec2, and FALSE if there are no elements common to both. We can assume that neither vec1 nor vec2 contains NA, but either may have duplicated values. I've thought of five ways to do this, but they all seem inefficient:
EIC.1 <- function(vec1, vec2) length(intersect(vec1, vec2)) > 0
# I want a function that will stop when it finds the first
# common element between the vectors, and return TRUE. The
# intersect function will continue on and check whether there are
# any other common elements.

EIC.2 <- function(vec1, vec2) any(vec1 %in% vec2)

EIC.3 <- function(vec1, vec2) any(!is.na(match(vec1, vec2)))
# the match function goes to the trouble of finding the position
# of all matches; I don't need the position but just want to know
# if any exist

EIC.4 <- function(vec1, vec2) {
  uvec1 <- unique(vec1)
  uvec2 <- unique(vec2)
  length(unique(c(uvec1, uvec2))) < length(uvec1) + length(uvec2)
}

EIC.5 <- function(vec1, vec2) !!anyDuplicated(c(unique(vec1), unique(vec2)))
# per https://stackoverflow.com/questions/5263498/how-to-test-whether-a-vector-contains-repetitive-elements#comment5931428_5263593
# I suspect this is the most efficient of the five, because
# anyDuplicated will stop looking when it comes to the first one,
# but I'm not sure about using !! to coerce to boolean type
I will be using very long vectors (without any NAs, as previously mentioned) and will be running this function millions of times, which is why I am looking for something efficient. Here is some test data:
v1 <- c(9, 8, 75, 62)
v2 <- c(20, 75, 341, 987, 8)
v3 <- c(154, 62, 62, 143, 154, 95)
v4 <- c(12, 62, 12)
EIC <- EIC.1
EIC(v1, v2)
EIC(v1, v3)
EIC(v1, v4)
EIC(v2, v3)
EIC(v2, v4)
EIC(v3, v4)
Correct results are TRUE, TRUE, TRUE, FALSE, FALSE, TRUE.
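For what it's worth, any(vec1 %in% vec2) is usually hard to beat in base R: %in% hashes in C, and any() on the resulting logical vector is cheap. A self-contained check against the test data above:

```r
# candidate: the %in%-based version (EIC.2 in the question)
EIC <- function(vec1, vec2) any(vec1 %in% vec2)

v1 <- c(9, 8, 75, 62)
v2 <- c(20, 75, 341, 987, 8)
v3 <- c(154, 62, 62, 143, 154, 95)
v4 <- c(12, 62, 12)

results <- c(EIC(v1, v2), EIC(v1, v3), EIC(v1, v4),
             EIC(v2, v3), EIC(v2, v4), EIC(v3, v4))
results  # expected: TRUE TRUE TRUE FALSE FALSE TRUE
```

For a truly early-exiting scan you would have to drop to C/Rcpp, since all the base-R idioms materialize the full match first.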

Connecting R to LinkedIn API
Thank you for reading my post.
I would like to know if anyone is currently extracting data from the LinkedIn API with R.
Any help will be appreciated!
All the best, Luis

How to get a stats value after CrawlerProcess has finished, i.e. at the line after process.start()
I am using this code somewhere inside spider:
raise scrapy.exceptions.CloseSpider('you_need_to_rerun')
So, when this exception is raised, my spider eventually closes, and I get stats in the console including this string:
'finish_reason': 'you_need_to_rerun',
But how can I get it from code? I want to run the spider again in a loop, based on info from these stats, something like this:
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
import spaida.spiders.spaida_spider
import spaida.settings

you_need_to_rerun = True
while you_need_to_rerun:
    process = CrawlerProcess(get_project_settings())
    process.crawl(spaida.spiders.spaida_spider.SpaidaSpiderSpider)
    process.start(stop_after_crawl=False)  # the script will block here until the crawling is finished
    finish_reason = 'and here I get somehow finish_reason from stats'  # <- how??
    if finish_reason == 'finished':
        print("everything ok, I don't need to rerun this")
        you_need_to_rerun = False
I found this in the docs but can't get it to work: "The stats can be accessed through the spider_stats attribute, which is a dict keyed by spider domain name." https://doc.scrapy.org/en/latest/topics/stats.html#scrapy.statscollectors.MemoryStatsCollector.spider_stats
P.S.: I'm also getting the error twisted.internet.error.ReactorNotRestartable when using process.start(), and with the recommended process.start(stop_after_crawl=False) the spider just stops and does nothing, but that is another problem...
Multi-label Fleiss' kappa
I have 10 sentences, 5 annotators and 6 categories.
An annotator could annotate a sentence as: CATEG1=Y, CATEG2=Y, CATEG3=Y, CATEG4=Y, CATEG5=Y, CATEG6=Y or, indeed, any combination he/she likes.
How can I use Fleiss Kappa to work out the annotator agreement?
I've had a look at the Wikipedia example, but there they have 14 raters and each one can only pick one category, whereas in mine each annotator can pick as many as required.
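One common workaround (not the only defensible one) is to treat each category as its own binary rating task, so each of the 5 annotators says Y/N per sentence, and to compute Fleiss' kappa separately per category, then report the six kappas or their mean. A base-R sketch with made-up annotations (the data here are hypothetical, generated at random):

```r
# Fleiss' kappa for an n.items x n.categories count matrix,
# where entry [i, j] = number of raters assigning item i to category j
fleiss.kappa <- function(counts) {
  n <- sum(counts[1, ])                         # raters per item (constant)
  P.i <- (rowSums(counts^2) - n) / (n * (n - 1))  # per-item agreement
  p.j <- colSums(counts) / sum(counts)            # marginal category shares
  P.bar  <- mean(P.i)
  Pe.bar <- sum(p.j^2)                            # chance agreement
  (P.bar - Pe.bar) / (1 - Pe.bar)
}

set.seed(42)
# hypothetical data: 10 sentences x 5 annotators x 6 binary category labels
annotations <- array(rbinom(10 * 5 * 6, 1, 0.4), dim = c(10, 5, 6))

kappas <- sapply(1:6, function(k) {
  yes <- rowSums(annotations[, , k])   # raters saying Y per sentence
  fleiss.kappa(cbind(yes, 5 - yes))    # binary Y/N count matrix
})
round(kappas, 3)
```

Per-category kappa ignores correlation between labels; if that matters, look at multi-label agreement measures such as Krippendorff's alpha with a set-valued distance instead.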

Scikit-learn regression: value too large for dtype('float64') error
I have some data that I am reading from a CSV file and attempting to fit a Bayesian ridge regression model, but I am getting an
Input contains NaN, infinity or a value too large for dtype('float64')
error... Any tips greatly appreciated.
import pandas as pd
import numpy as np

df = pd.read_csv('C:/Users/baselinekWh.csv', index_col='Date', parse_dates=True)
df
I don't think there is any NaN data because I can scatter plot the data no problem with matplotlib:
import matplotlib.pyplot as plt

plt.scatter(df['OSAT'], df['kWh'], color='grey', marker='+')
plt.xlabel('OSAT')
plt.ylabel('kWh')
plt.title('kWh Model')
plt.legend()
plt.show()
df.describe() looks like this:
And when I try to fit the model I get an error... Any tips? I am trying to follow some steps from the scikit-learn website. Could the parameters that I am passing be the problem? There is not a lot of wisdom here yet ;) http://scikit-learn.org/stable/modules/linear_model.html#bayesian-ridge-regression
from sklearn import linear_model

X = df[list(set(df.columns).difference(['kWh']))].values  # X -> features
Y = df[['kWh']].values                                    # Y -> target
reg = linear_model.BayesianRidge(alpha_1=1e-06, alpha_2=1e-06, compute_score=False,
                                 copy_X=True, fit_intercept=True,
                                 lambda_1=1e-06, lambda_2=1e-06, n_iter=300,
                                 normalize=False, tol=0.001, verbose=False)
reg.fit(X, Y)

How could I set a revenue target if I have only 10 data points?
I have a little historical data and I would like to set a revenue target. Currently I use a moving average; my formula is:
Revenue target = 6-day moving average of revenue * (1 + growth_rate)
But my growth rate is a constant, so it cannot capture any sudden changes in the future (new policy, promotion, ...). I don't know whether my method is the best one or not.
If anybody has any idea, please comment and help me.
Thank you.

Forecast a non-periodic time series with R
I'm facing two problems forecasting non-periodic (non-seasonal) time series data: (1) I want to forecast around 230 samples with 102 for training; (2) as mentioned, this data is not periodic, and I'm not obtaining good results.
I've tried the forecast package (ets), but the result is linear. I've also tried ARIMA and other online solutions. Which is the best package in this case? This is the data:
data <- c(0.912039419,0.967,1.074,1.139,0.955,0.9,0.975,0.983,0.8745,0.894,0.871,0.834,0.906672936,0.908150096,1.059,0.952,0.895,0.944666667,1.010666667,1.0085,0.911,1.15,0.9825,0.8955,0.888274541,0.8985,0.892252778,0.979,0.9685,0.9095,0.891535613,0.854,0.898643358,0.945,1.002,0.8945,0.899878442,0.904,0.904889356,0.799,0.916999404,0.966,0.955,0.97875,0.922,0.881,0.906,0.948,1.023,1.092666667,1.052333333,0.918333333,0.967,1.049,1.095,1.1165,0.834,1.059,0.8705,1.0265,0.738,0.937,1.0405,0.85,1.0885,0.901,0.911,1.0735,0.968,0.8955,0.8985,0.979,0.9685,0.886,0.933,0.854,0.9595,1.03,0.8945,0.904,0.8585,1.014,0.9325,0.9155,0.893,1.118,1.1665,1.0045,0.9,0.932878788,0.892,0.932,1.118,1.1665,1.118,1.194,1.113,0.922,0.9295,1.276,1.329,1.3505)
Using this code: result <- forecast(ets(x, model = "ZZZ", damped = TRUE), h = 234)$mean
I'm getting this forecast
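One way to make the model comparison concrete is to hold out the last part of the series and score several candidates with accuracy(); a sketch assuming the forecast package and the data vector defined above. Note that with no seasonality and weak autocorrelation, a near-flat forecast from ets/ARIMA is often the statistically honest answer, so also compare against a naive benchmark:

```r
library(forecast)  # assumed installed

n.test <- 30
train  <- head(data, length(data) - n.test)
test   <- tail(data, n.test)

# candidate models fit on the training window
fit.ets   <- ets(train)
fit.arima <- auto.arima(train)

# hold-out accuracy (RMSE / MAE on the test window)
print(accuracy(forecast(fit.ets, h = n.test), test)["Test set", c("RMSE", "MAE")])
print(accuracy(forecast(fit.arima, h = n.test), test)["Test set", c("RMSE", "MAE")])
print(accuracy(naive(train, h = n.test), test)["Test set", c("RMSE", "MAE")])
```

If nothing beats naive() on the hold-out, the series may simply carry little forecastable structure.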

Find dates based on conditions and shift before and after
I am currently analysing some time series data in Python, and I need to find the beginning and end of an event based on the values of other columns.
My dataframe looks roughly like this:
        Month      Price   Change
9  2009-06-01  144384.10   0.1112
10 2009-07-01  132549.80  -0.0820
11 2009-08-01  144611.00   0.0910
12 2009-09-01  135389.50  -0.0638
13 2009-10-01  140593.30   0.0384
14 2009-11-01  153151.70   0.0893
15 2009-12-01  151010.80  -0.0140
16 2010-01-01  152519.30   0.0100
17 2010-02-01  145998.30  -0.0428
18 2010-03-01  151475.30   0.0375
19 2010-04-01  163071.70   0.0766
20 2010-05-01  150044.20  -0.0799
21 2010-06-01  143950.30  -0.0406
22 2010-07-01  139866.00  -0.0284
23 2010-08-01  137859.00  -0.0143
Let's say I have a crisis mid-2010. I want to find the first date of the crisis by looking at the
Change
column. I have the following code, but I wonder if it can be written a bit more efficiently, since this is only the part for the beginning of the crisis:
crisis = df
crisis["month_1"] = crisis["Month"].shift(1)
crisis = crisis[(crisis["Change"] < 0) & (crisis["Month"] > "2010-02-01")]
start_date = crisis["month_1"].iloc[0]

Bouncing ball python with turtle graphics
So I have been working on my code, where I would like to simulate particles of different gasses interacting. This is as far as I have got: I created the balls and register bounces off of walls and other balls. How would I now go about giving each "color" or type of gas properties, so that it would maybe stick together as a binary molecule, etc.?
import turtle
import random
import time
from Classes import *

# Sets up the screen and everything
wn = turtle.Screen()
wn.tracer(0)
wn.bgcolor("white")
wn.canvheight = 600
wn.canvwidth = 600
wn.title("No-gravity")
xlimit, ylimit = wn.canvwidth / 2, wn.canvheight / 2

# For drawing the bounds of the square board
board = turtle.Turtle()
board.speed(0)
rectangle(board, wn.canvwidth/2, wn.canvheight/2, wn.canvwidth, 5, "black")
rectangle(board, wn.canvwidth/2, wn.canvheight/2, 5, wn.canvheight, "black")
rectangle(board, wn.canvwidth/2, wn.canvheight/2, wn.canvwidth, 5, "black")
rectangle(board, wn.canvwidth/2 - 5, wn.canvheight/2, 5, wn.canvheight, "black")
board.ht()

# Ball setup
balls = []
colors = ["blue", "red", "lightblue", "black"]
numberBalls = 10
for _ in range(numberBalls):
    turt = turtle.Turtle()
    balls.append(turt)

for ball in balls:
    ball.penup()
    ball.shape("circle")
    ball.color(random.choice(colors))
    ball.speed(0)
    x = random.randint(-wn.canvwidth / 2 + 10, wn.canvwidth / 2 - 10)
    y = random.randint(-wn.canvheight / 2 + 10, wn.canvheight / 2 - 10)
    ball.goto(x, y)
    ball.dy = random.randint(-2, 2)
    if ball.dy == 0:
        ball.dy += 1
    ball.dx = random.randint(-2, 2)
    if ball.dx == 0:
        ball.dx += 1

ballimit = 20
ballcount = 0

# Main game loop
while True:
    wn.update()
    for ball in balls:
        # Move the ball: dx is the change along the x axis, dy along the y axis
        ball.sety(ball.ycor() + ball.dy)
        ball.setx(ball.xcor() + ball.dx)
        # Compute the extent of the ball
        ballXrangeHI = ball.xcor() + ballimit
        ballXrangeLO = ball.xcor() - ballimit
        ballYrangeHI = ball.ycor() + ballimit
        ballYrangeLO = ball.ycor() - ballimit
        if ballXrangeHI > xlimit or ballXrangeLO > xlimit:
            ball.dx *= -1
        if ballXrangeHI < -xlimit or ballXrangeLO < -xlimit:
            ball.dx *= -1
        if ballYrangeHI > ylimit or ballYrangeLO > ylimit:
            ball.dy *= -1
        if ballYrangeHI < -ylimit or ballYrangeLO < -ylimit:
            ball.dy *= -1
        # Bounce test
        for thing in balls:
            if balls[ballcount] != thing and proximity(ball.xcor(), ball.ycor(),
                                                       thing.xcor(), thing.ycor()) <= ballimit:
                tempx = ball.dx
                tempy = ball.dy
                ball.dx = thing.dx
                ball.dy = thing.dy
                thing.dx = tempx
                thing.dy = tempy
        ballcount += 1
        if ballcount == numberBalls:
            ballcount = 0
    # time.sleep(0.0002)

wn.mainloop()

Generating CDF Graphs using Seaborn
I am trying to plot a CDF graph for my code using Seaborn but can't get it to work.
Specifically, I want to generate CDF graphs for sum_MDA, sum_CLA, sum_BIA and grand_total after I have simulated the entire code 1000 times. My code is as follows (apologies in advance for the length).
def sim():
    df['RAND'] = np.random.uniform(0, 1, size=df.index.size)
    dfRAND = list(df['RAND'])

    def L():
        result = []
        conditions = [df.RAND >= (1 - 0.8062),
                      (df.RAND < (1 - 0.8062)) & (df.RAND >= 0.1),
                      (df.RAND < 0.1) & (df.RAND >= 0.05),
                      (df.RAND < 0.05) & (df.RAND >= 0.025),
                      (df.RAND < 0.025) & (df.RAND >= 0.0125),
                      (df.RAND < 0.0125)]
        choices = ['L0', 'L1', 'L2', 'L3', 'L4', 'L5']
        df['L'] = np.select(conditions, choices)
        result = df['L'].values
        return result
    L()
    # print(L())
    # print(df.pivot_table(index='L', aggfunc=len, fill_value=0))

    def MD():
        result = []
        conditions = [L() == 'L0', L() == 'L1', L() == 'L2',
                      L() == 'L3', L() == 'L4', L() == 'L5']
        choices = [df['P_MD'].apply(lambda x: x * 0.02),
                   df['P_MD'].apply(lambda x: x * 0.15),
                   df['P_MD'].apply(lambda x: x * 0.20),
                   df['P_MD'].apply(lambda x: x * 0.50),
                   df['P_MD'].apply(lambda x: x * 1.0),
                   df['P_MD'].apply(lambda x: x * 1.0)]
        df['MDL'] = np.select(conditions, choices)
        # result = print(df['MDL'].values)
        return result
    MD()

    def CL():
        result = []
        conditions = [L() == 'L0', L() == 'L1', L() == 'L2',
                      L() == 'L3', L() == 'L4', L() == 'L5']
        choices = [1600, 3200, 9600, 48000, 48000, 48000]
        df['CL'] = np.select(conditions, choices)
        # result = print(df['CL'].values)
        return result
    CL()

    def BI():
        result = []
        conditions = [L() == 'L0', L() == 'L1', L() == 'L2',
                      L() == 'L3', L() == 'L4', L() == 'L5']
        choices = [df['P_BI'].apply(lambda x: (x / 548) * 1),
                   df['P_BI'].apply(lambda x: (x / 548) * 2),
                   df['P_BI'].apply(lambda x: (x / 548) * 14),
                   df['P_BI'].apply(lambda x: (x / 548) * 60),
                   df['P_BI'].apply(lambda x: (x / 548) * 180),
                   df['P_BI'].apply(lambda x: (x / 548) * 365)]
        df['BIL'] = np.select(conditions, choices)
        # result = print(df['BIL'].values)
        return result
    BI()

    sum_MDA = int(np.sum(df['MDL']))
    sum_CLA = int(np.sum(df['CL']))
    sum_BIA = int(np.sum(df['BIL']))
    grand_total = int(sum_MDA + sum_CLA + sum_BIA)
    result = sum_MDA, sum_CLA, sum_BIA, grand_total
    return result

sim()
for i in range(1000):
    print(sim())
# sns.distplot(sim(), bins=100,
#              kde_kws=dict(cumulative=True), axlabel='(£)', color='purple',
#              ).set_title('Simulation (N=1000)')
Any help is appreciated. Thanks a lot.

Any online way to learn writing a device driver of micro controller in C from scratch?
Is there any way online without getting hold of an actual micro controller where I can learn to write a device driver (e.g bluetooth, usb) in C from scratch and see how hardware works? I know basics of C and I work in windows environment.

DDPG: Actor-Critic network does not converge with continuous action space
I'm currently a bit confused. I implemented an Actor-Critic network, and depending on the setup it either begins to converge a little but produces values far from right, or it produces nearly the same loss values over and over again but right from the start produces values which are kind of correct. I really have no clue how that is possible. This is my current model, which produces values but does not converge:
def create_actor_model(self):
    state_input = Input(shape=self.observation_space_shape)
    h1 = Dense(18, activation='linear')(state_input)
    h1l = LeakyReLU(alpha=0.01)(h1)
    h3 = Dense(18, activation='tanh')(h1l)
    h3n = BatchNormalization()(h3)
    output = Dense(self.action_space_shape[0], activation='tanh')(h3n)
    model = Model(input=state_input, output=output)
    adam = Adam(lr=self.action_space_shape)
    model.compile(loss="mse", optimizer=adam)
    return state_input, model

def create_critic_model(self):
    state_input = Input(shape=self.observation_space_shape)
    state_h1 = Dense(18, activation='relu')(state_input)
    state_h2 = Dense(36)(state_h1)
    action_input = Input(shape=self.action_space_shape)
    action_h1 = Dense(36)(action_input)
    merged = Add()([state_h2, action_h1])
    l_h1 = LeakyReLU(alpha=0.01)(merged)
    merged_h1 = Dense(18, activation='tanh')(l_h1)
    h1n = BatchNormalization()(merged_h1)
    output = Dense(1, activation='tanh')(h1n)
    model = Model(input=[state_input, action_input], output=output)
    adam = Adam(lr=self.action_space_shape)
    model.compile(loss="mse", optimizer=adam, metrics=['mae', 'mse', 'msle'])
    return state_input, action_input, model

def _train_actor_batch(self, batch_size, s_batch, a_batch, r_batch, s2_batch):
    predicted_action = self.actor_model.predict_on_batch(s_batch)
    grads = self.sess.run(self.critic_grads, feed_dict={
        self.critic_state_input: s_batch,
        self.critic_action_input: predicted_action})
    self.sess.run(self.optimize, feed_dict={
        self.actor_state_input: s_batch,
        self.actor_critic_grad: grads[0]})

def _train_critic_batch(self, batch_size, s_batch, a_batch, r_batch, s2_batch):
    target_action = self.target_actor_model.predict_on_batch(s2_batch)
    future_reward = self.target_critic_model.predict_on_batch([s2_batch, target_action])
    rewards = []
    for k in range(batch_size):
        this_future_reward = future_reward[k] if batch_size > 1 else future_reward
        rewards.append(r_batch[k] + self.gamma * this_future_reward)
    return self.critic_model.train_on_batch([s_batch, a_batch],
                                            np.reshape(rewards, batch_size))

def replay(self, batch_size):
    memory_length = len(self.memory)
    if memory_length < batch_size:
        samples = random.sample(self.memory, memory_length)
    else:
        samples = random.sample(self.memory, batch_size)
    s_batch = np.array([cur_state[0] for cur_state, _, _, _ in samples])
    a_batch = np.array([float(action[0]) for _, action, _, _ in samples])
    r_batch = np.array([reward[0] for _, _, reward, _ in samples])
    s2_batch = np.array([new_state[0] for _, _, _, new_state in samples])
    critic_loss = self._train_critic_batch(len(s_batch), s_batch, a_batch, r_batch, s2_batch)
    self._train_actor_batch(len(s_batch), s_batch, a_batch, r_batch, s2_batch)
    self.update_target()
    return critic_loss

def _update_actor_target(self):
    actor_model_weights = self.actor_model.get_weights()
    actor_target_weights = self.target_actor_model.get_weights()
    for i in range(len(actor_target_weights)):
        actor_target_weights[i] = (actor_model_weights[i] * self.tau
                                   + actor_target_weights[i] * (1 - self.tau))
    self.target_actor_model.set_weights(actor_target_weights)

def _update_critic_target(self):
    critic_model_weights = self.critic_model.get_weights()
    critic_target_weights = self.target_critic_model.get_weights()
    for i in range(len(critic_target_weights)):
        critic_target_weights[i] = (critic_model_weights[i] * self.tau
                                    + critic_target_weights[i] * (1 - self.tau))
    self.target_critic_model.set_weights(critic_target_weights)

def update_target(self):
    self._update_actor_target()
    self._update_critic_target()

def __init__(self):
    self.memory = deque(maxlen=2000)
    self.actor_state_input, self.actor_model = self.create_actor_model()
    _, self.target_actor_model = self.create_actor_model()
    self.actor_critic_grad = tf.placeholder(tf.float32, [None, self.action_space_shape[0]])
    actor_model_weights = self.actor_model.trainable_weights
    self.actor_grads = tf.gradients(self.actor_model.output, actor_model_weights,
                                    -self.actor_critic_grad)
    grads = zip(self.actor_grads, actor_model_weights)
    self.optimize = tf.train.AdamOptimizer(self.learning_rate).apply_gradients(grads)
    self.critic_state_input, self.critic_action_input, \
        self.critic_model = self.create_critic_model()
    _, _, self.target_critic_model = self.create_critic_model()
    self.critic_grads = tf.gradients(self.critic_model.output, self.critic_action_input)
    self.sess.run(tf.initialize_all_variables())
And then I am training with the following method, which is called for each epoch (at the end, the memory which is cleared is the experience-replay memory):
def train(self, states, epoch, env, is_new_epoch):
    train_size = int(len(states) * 0.70)
    train = dict(list(states.items())[0:train_size])
    test = dict(list(states.items())[train_size:len(states)])
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        working_states = copy(train)
        critic_eval = list()
        rewards = dict()
        for last_day, (last_state_vec, _, last_action) in working_states.items():
            this_day = last_day + timedelta(days=1)
            if this_day in working_states:
                (new_state_vec, _, _) = working_states.get(this_day)
                rewards[last_day] = env.get_reward_by_states(last_state_vec, new_state_vec)
        amt = len(working_states)
        i = 0
        for last_day, (last_state_vec, _, last_action) in working_states.items():
            i += 1
            this_day = last_day + timedelta(days=1)
            if this_day in working_states:
                (new_state_vec, _, _) = working_states.get(this_day)
                reward = np.reshape(rewards[last_day], [1, ])
                self.remember(last_state_vec, [last_action], reward, new_state_vec)
                new_eval = self.replay(env.batch_size)
                critic_eval.append(new_eval)
        self.memory.clear()
These are the loss values I got over 15 epochs:
One sample as it comes from the memory:
state      [8 79 48246 53607 29 34 37 Decimal('1.0000000000') 6]
action     0.85
reward     0.2703302
next state [9 79 48074 57869 27 28 32 Decimal('1.0000000000') 0]

How to use the coda package to check the convergence of bsts models
I am using the bsts package in R to fit my model. I would like to check the convergence of the model using the gelman.diag method from the coda package, but I am not sure how to do it, since I need to extract an MCMC object from the bsts output. Has anybody done this before?
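I haven't seen an official bsts-to-coda bridge, but the usual route is: fit two or more chains (separate bsts calls with different seeds), wrap the draws of a scalar parameter in coda::mcmc, and combine them with mcmc.list for gelman.diag. A hedged sketch; the component name sigma.obs applies to a local-level spec, and the names for your model may differ (inspect names(fit)):

```r
library(bsts)   # assumed installed
library(coda)

# two independent chains of the same model (y is your series)
ss   <- AddLocalLevel(list(), y)
fit1 <- bsts(y, state.specification = ss, niter = 2000, seed = 1)
fit2 <- bsts(y, state.specification = ss, niter = 2000, seed = 2)

# discard burn-in, then wrap the draws of one scalar parameter per chain
burn   <- SuggestBurn(0.1, fit1)
chains <- mcmc.list(
  mcmc(fit1$sigma.obs[-(1:burn)]),   # assumed component name for this spec
  mcmc(fit2$sigma.obs[-(1:burn)]))

gelman.diag(chains)  # potential scale reduction factor
```

The same pattern works for columns of fit$coefficients when a regression component is present.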

GLMM negative binomial (glmer in R): convergence / "Model is nearly unidentifiable: very large eigenvalue"
I am trying to fit a GLMM in R, and I am receiving two warnings. I tried the ?help page for the package/function and also ?convergence, but didn't actually solve it.
I started working with a Poisson distribution; then diagnostics showed overdispersion and zero-inflation, so I changed to a negative binomial distribution with glmer.nb. The ?help page says that if we KNOW that there is overdispersion, we should use glmer(family = MASS::negative.binomial(theta = 1.75)) instead.
QUESTIONS: 1) Is this theta value (1.75) fixed by some formula, or should I find the theta for my model? 2) I tried the ?convergence page and did not find an answer. So, would anyone know a possible way to solve these warnings:
"1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model failed to converge with max|grad| = 0.172241 (tol = 0.001, component 1)
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model is nearly unidentifiable: very large eigenvalue - Rescale variables?"
Outcome variable: count of "dead"; independent variables are categorical, plus three random effects. Hierarchical data, so each "ord" (unit) has "source" and "week" levels (250 ord * 7 weeks * 3 sources = 5250 rows).
If anyone has any tip, I would appreciate a lot.
My code:
nbm <- glmer(dead ~ week + year + wean.month + classB.Q + euth.p.cat + source +
               protec + wean.cat.Q + arr.euth.p.cat +
               (1 | source:ord) + (1 | ord) + (1 | barngrp),
             offset = log(survival), data = final,
             family = MASS::negative.binomial(theta = 1.75),
             control = glmerControl(optimizer = c("bobyqa", "Nelder_Mead"),
                                    optCtrl = list(maxfun = 15000000)))
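On question 1: the 1.75 in the help page is just an example value, not something fixed by a formula. One common approach is to let glmer.nb estimate theta first and then refit with that estimate plugged in; a sketch assuming the lme4 package, with the model formula simplified for the theta-estimation step (getME(..., "glmer.nb.theta") is the accessor in current lme4 versions; check yours):

```r
library(lme4)  # assumed installed

# 1) let glmer.nb estimate theta (a simplified random-effects structure
#    is often enough, and faster, for this step)
nb.fit <- glmer.nb(dead ~ week + year + (1 | ord),
                   offset = log(survival), data = final)
theta.hat <- getME(nb.fit, "glmer.nb.theta")

# 2) refit the full model with theta fixed at the estimate
nbm <- glmer(dead ~ week + year + wean.month + (1 | source:ord) + (1 | ord),
             offset = log(survival), data = final,
             family = MASS::negative.binomial(theta = theta.hat),
             control = glmerControl(optimizer = "bobyqa"))
```

On question 2: the "very large eigenvalue" warning usually responds to centering/scaling any numeric predictors and to simplifying near-degenerate random effects before touching optimizer settings.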