Optimizing Mathematical Calculation
I have a model for the four possibilities of purchasing a pair of items (purchasing both, neither, or just one) and need to optimize the (pseudo) log-likelihood function. Part of this, of course, is the calculation/definition of the pseudo-log-likelihood function.
The following is my code, where Beta is a 2-d vector for each customer (there are U customers and U different beta vectors), X is a 2-d vector for each item (different for each of the N items), and Gamma is a symmetric matrix with a scalar value gamma(i,j) for each pair of items. df is a dataframe of the purchases, with one row for each customer and N columns for the items.
It would seem to me that all of these loops are inefficient and take up too much time, but I am not sure how to speed up this calculation and would appreciate any help improving it. Thank you in advance!
def pseudo_likelihood(Args):
    Beta = np.reshape(Args[0:2*U], (U, 2))
    Gamma = np.reshape(Args[2*U:], (N, N))
    L = 0
    for u in range(U):
        print(datetime.datetime.today(), " for user {}".format(u))
        y = df.loc[u][1:]
        beta_u = Beta[u, :]
        for l in range(N):
            print(datetime.datetime.today(), " for item {}".format(l))
            for i in range(N - 1):
                if i == l:
                    continue
                for j in range(i + 1, N):
                    if y[i] == y[j]:
                        if y[i] == 1:
                            # Log of the exponent of this expression
                            L += np.dot(beta_u, (x_vals.iloc[i, 1:] + x_vals.iloc[j, 1:])) + Gamma[i, j]
                        else:
                            L += np.log(
                                1 - np.exp(np.dot(beta_u, (x_vals.iloc[i, 1:] + x_vals.iloc[j, 1:])) + Gamma[i, j])
                                - np.exp(np.dot(beta_u, x_vals.iloc[i, 1:])) * (
                                    1 - np.exp(np.dot(beta_u, x_vals.iloc[j, 1:])))
                                - np.exp(np.dot(beta_u, x_vals.iloc[j, 1:])) * (
                                    1 - np.exp(np.dot(beta_u, x_vals.iloc[i, 1:]))))
                    else:
                        if y[i] == 1:
                            L += np.dot(beta_u, x_vals.iloc[i, 1:]) + np.log(1 - np.exp(np.dot(beta_u, x_vals.iloc[j, 1:])))
                        else:
                            L += np.dot(beta_u, x_vals.iloc[j, 1:]) + np.log(1 - np.exp(np.dot(beta_u, x_vals.iloc[i, 1:])))
            L -= (N - 2) * np.dot(beta_u, x_vals.iloc[l, 1:])
            for k in range(N):
                if k != l:
                    L -= np.dot(beta_u, x_vals.iloc[k, 1:])
    return L
To add/clarify: I am using this calculation to optimize and find the beta and gamma parameters that generated the data for this pseudo-likelihood function.
I am using scipy optimize.minimize with the 'Powell' method.
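Since the question is about speeding this up, here is a hedged sketch of the main vectorisation idea (variable names follow the question, but the arrays below are random stand-ins): every np.dot(beta_u, x_i) inside the nested loops can be precomputed once per user, because each one is reused O(N^2) times.

```python
import numpy as np

# Random stand-ins for the question's data (only the shapes are meaningful).
U, N = 3, 5
rng = np.random.default_rng(1)
Beta = rng.normal(size=(U, 2))       # one 2-d beta vector per customer
X = rng.normal(size=(N, 2))          # one 2-d x vector per item
Gamma = rng.normal(size=(N, N))      # pair matrix from the model

for u in range(U):
    d = X @ Beta[u]                          # d[i] = beta_u . x_i, shape (N,)
    pair = d[:, None] + d[None, :] + Gamma   # beta_u.(x_i + x_j) + Gamma[i, j]
    # The inner double loop over (i, j) can now read pair[i, j] and d[i]
    # instead of calling np.dot and x_vals.iloc repeatedly.
print(pair.shape)  # (5, 5)
```

Moving the `.iloc` indexing out of the loops entirely (for example `X = x_vals.iloc[:, 1:].to_numpy()` once, before optimizing) typically matters as much as vectorizing the dot products themselves.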
See also questions close to this topic

Display heartbeat sensor's readings from python program in a Kivy label
I'm trying to display the values obtained from my heartbeat sensor in my Kivy app. I've tried using the data-extraction method I applied to the other sensors, but it doesn't work for this one because its script doesn't contain any functions.
I've tried multiple different approaches, but none of them returned the desired output. Can someone please have a look at the sensor code and suggest a method to display the output in my Kivy app?
Heartbeatsensor.py
import time
# Import the ADS1x15 module.
import Adafruit_ADS1x15

if __name__ == '__main__':
    adc = Adafruit_ADS1x15.ADS1015()  # initialization
    GAIN = 2/3
    curState = 0
    thresh = 525  # mid point in the waveform
    P = 512
    T = 512
    stateChanged = 0
    sampleCounter = 0
    lastBeatTime = 0
    firstBeat = True
    secondBeat = False
    Pulse = False
    IBI = 600
    rate = [0]*10
    amp = 100
    lastTime = int(time.time()*1000)

    # Main loop. Use Ctrl-C to stop the code
    while True:
        # read from the ADC
        Signal = adc.read_adc(0, gain=GAIN)  # TODO: Select the correct ADC channel. I have selected A0 here
        curTime = int(time.time()*1000)
        sampleCounter += curTime - lastTime  # keep track of the time in mS with this variable
        lastTime = curTime
        N = sampleCounter - lastBeatTime  # monitor the time since the last beat to avoid noise
        # print(N, Signal, curTime, sampleCounter, lastBeatTime)

        # find the peak and trough of the pulse wave
        if Signal < thresh and N > (IBI/5.0)*3.0:  # avoid dichrotic noise by waiting 3/5 of last IBI
            if Signal < T:  # T is the trough
                T = Signal  # keep track of lowest point in pulse wave
        if Signal > thresh and Signal > P:  # thresh condition helps avoid noise
            P = Signal  # P is the peak; keep track of highest point in pulse wave

        # NOW IT'S TIME TO LOOK FOR THE HEART BEAT
        # signal surges up in value every time there is a pulse
        if N > 250:  # avoid high frequency noise
            if (Signal > thresh) and (Pulse == False) and (N > (IBI/5.0)*3.0):
                Pulse = True  # set the Pulse flag when we think there is a pulse
                IBI = sampleCounter - lastBeatTime  # measure time between beats in mS
                lastBeatTime = sampleCounter  # keep track of time for next pulse

                if secondBeat:  # if this is the second beat
                    secondBeat = False  # clear secondBeat flag
                    for i in range(0, 10):  # seed the running total to get a realistic BPM at startup
                        rate[i] = IBI

                if firstBeat:  # if it's the first time we found a beat
                    firstBeat = False  # clear firstBeat flag
                    secondBeat = True  # set the second beat flag
                    continue  # IBI value is unreliable so discard it

                # keep a running total of the last 10 IBI values
                runningTotal = 0  # clear the runningTotal variable
                for i in range(0, 9):  # shift data in the rate array
                    rate[i] = rate[i+1]  # and drop the oldest IBI value
                    runningTotal += rate[i]  # add up the 9 oldest IBI values
                rate[9] = IBI  # add the latest IBI to the rate array
                runningTotal += rate[9]  # add the latest IBI to runningTotal
                runningTotal /= 10  # average the last 10 IBI values
                BPM = 60000/runningTotal  # how many beats can fit into a minute? that's BPM!
                print('BPM: {}'.format(BPM))

        if Signal < thresh and Pulse == True:  # when the values are going down, the beat is over
            Pulse = False  # reset the Pulse flag so we can do it again
            amp = P - T  # get amplitude of the pulse wave
            thresh = amp/2 + T  # set thresh at 50% of the amplitude
            P = thresh  # reset these for next time
            T = thresh

        if N > 2500:  # if 2.5 seconds go by without a beat
            thresh = 512  # set thresh default
            P = 512  # set P default
            T = 512  # set T default
            lastBeatTime = sampleCounter  # bring the lastBeatTime up to date
            firstBeat = True  # set these to avoid noise
            secondBeat = False  # when we get the heartbeat back
            print("no beats found")

        time.sleep(0.005)
The Kivy app that I'm trying to add the readings to:
another.py file
from kivy.app import App
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.properties import ObjectProperty, StringProperty
from kivy.uix.boxlayout import BoxLayout
from kivy.clock import Clock
from kivy.uix.button import Button
from kivy.uix.label import Label
from kivy.lang import Builder

class MainScreen(Screen):
    pass

class ScreenThermo(Screen):
    def __init__(self, **kwargs):
        super(ScreenThermo, self).__init__(**kwargs)
        Clock.schedule_interval(self.displayHR, 10)

    def displayHR(self, dt):
        heartbeat = 'BPM: {}'.format(BPM)
        pulse = heartbeat
        self.manager.screen_thermo.ids.TempLabel.text = str(pulse)

    def on_enter(self, *args):
        self.__init__()
        pass

class ScreenManagement(ScreenManager):
    pass

presentation = Builder.load_file("heartk.kv")

class MenuApp(App):
    def build(self):
        return presentation

if __name__ == '__main__':
    MenuApp().run()
.kv file
ScreenManagement:
    id: screen_manager
    screen_thermo: screen_thermo
    MainScreen:
    ScreenThermo:
        id: screen_thermo
        name: 'thermo'
        manager: screen_manager

<MainScreen>:
    name: "main"
    Label:
        text: "Welcome"
        font_size: 60
        halign: 'center'
        valign: 'middle'
        pos_hint: {'x': .01, 'y': .05}
        on_touch_down: app.root.current = "thermo"

<ScreenThermo>:
    Label:
        text: "Pulse rate"
        font_size: 50
        pos: (35, 100)
    Label:
        id: TempLabel
        font_size: 60
        halign: 'center'
        valign: 'middle'
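A Kivy-free sketch of one common pattern for this situation (the names below are illustrative, not taken from the sensor script): run the sensor's while-loop in a background thread, have it write the latest BPM into a shared holder, and let the UI callback poll that holder instead of trying to import a value out of the loop.

```python
import threading
import time

latest = {'bpm': None}           # shared state that the UI callback reads

def sensor_loop():
    # stand-in for the ADC/IBI computation in Heartbeatsensor.py
    for bpm in (72, 75, 74):
        latest['bpm'] = bpm
        time.sleep(0.01)

t = threading.Thread(target=sensor_loop, daemon=True)
t.start()
t.join()

# In the Kivy app, displayHR(self, dt), scheduled via Clock.schedule_interval,
# would then do something like:
#     self.ids.TempLabel.text = 'BPM: {}'.format(latest['bpm'])
print(latest['bpm'])  # 74
```

The key point is that the sensor code must expose its result through shared state (or a queue) rather than only printing it, since the Kivy `Clock` callback runs in a different context from the sensor loop.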

How to determine most Important/Informative features using Linear Support Vector Machines (SVM) classifier
I am new to Python and working on a text classification problem. I am interested in visualizing the most important features of each class through a linear SVM classifier model. I want to determine which features contribute to the classification decision as Class1 or Class2. This is my code.
df = pd.read_csv('projectdatacor.csv')
df = df[pd.notnull(df['types'])]
my_types = ['Requirement', 'NonRequirement']

# converting to lower case
df['description'] = df.description.map(lambda x: x.lower())
# removing the punctuation
df['description'] = df.description.str.replace('[^\w\s]', '')
# splitting the text into tokens
df['description'] = df['description'].apply(nltk.tokenize.word_tokenize)
# this converts the list of words into space-separated strings
df['description'] = df['description'].apply(lambda x: ' '.join(x))

count_vect = CountVectorizer()
counts = count_vect.fit_transform(df['description'])

# tf-idf
transformer = TfidfTransformer().fit(counts)
counts = transformer.transform(counts)

# splitting the data and training the model
X_train, X_test, y_train, y_test = train_test_split(counts, df['types'], test_size=0.3, random_state=39)

# SVC classification
from sklearn import svm
svclassifier = svm.SVC(gamma=0.001, C=100., kernel='linear')
svclassifier.fit(X_train, y_train)
y_pred = svclassifier.predict(X_test)

# evaluating the model
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print('accuracy %s' % accuracy_score(y_pred, y_test))
print(classification_report(y_test, y_pred, target_names=my_types))
I have read all related questions available on this platform but I found the following useful code which I added in my code.
import numpy as np

def show_most_informative_features(vectorizer, clf, n=20):
    feature_names = vectorizer.get_feature_names()
    coefs_with_fns = sorted(zip(clf.coef_[0], feature_names))
    top = zip(coefs_with_fns[:n], coefs_with_fns[:-(n + 1):-1])
    for (coef_1, fn_1), (coef_2, fn_2) in top:
        print("\t%.4f\t%-15s\t\t%.4f\t%-15s" % (coef_1, fn_1, coef_2, fn_2))

show_most_informative_features(count_vect, svclassifier, 20)
This code works for naive Bayes and logistic regression and it gives the most important features but for SVM it gives me the error.
I am getting this error.
File "C:\Users\fhassan\anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 704, in runfile
    execfile(filename, namespace)
File "C:\Users\fhassan\anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
File "U:/FAHAD UL HASSAN/Python Code/happycsv.py", line 209, in <module>
    show_most_informative_features(count_vect, svclassifier, 20)
File "U:/FAHAD UL HASSAN/Python Code/happycsv.py", line 208, in show_most_informative_features
    print("\t%.4f\t%-15s\t\t%.4f\t%-15s" % (coef_1, fn_1, coef_2, fn_2))
TypeError: must be real number, not csr_matrix
Any help shall be highly appreciated.
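A likely explanation, judging from the traceback (an assumption, since the full model isn't runnable here): for a linear sklearn `SVC` fitted on sparse input, `clf.coef_` is a SciPy sparse matrix rather than a dense array, so each "coefficient" handed to the `%`-formatting is a `csr_matrix`. Densifying it first makes the same printing code work:

```python
import numpy as np
from scipy.sparse import csr_matrix

# stand-in for svclassifier.coef_ after fitting with kernel='linear'
coef = csr_matrix(np.array([[0.5, -1.2, 0.0, 2.0]]))
feature_names = ['alpha', 'beta', 'gamma', 'delta']  # hypothetical features

dense = coef.toarray().ravel()        # 1-D dense coefficient vector
coefs_with_fns = sorted(zip(dense, feature_names))
for coef_1, fn_1 in coefs_with_fns[:2]:
    print("%.4f\t%s" % (coef_1, fn_1))
```

So in `show_most_informative_features`, replacing `clf.coef_[0]` with `clf.coef_.toarray().ravel()` should remove the "must be real number, not csr_matrix" error without changing the rest of the function.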

Function defines an unwanted attribute to a numpy array for mysterious reason
I am trying to build a meshgrid in order to compute an interpolation, taking inspiration from this example. My function is the following:
def oscillatory_(a, b, d, w=w, c=c):
    x = np.array([a, b, d])
    return np.cos(2*math.pi*w[0,0] + np.sum(c.T*x, axis=1))
Which I call through:
data = oscillatory_(*np.meshgrid(a,b,d,indexing='ij', sparse=True))
Where
a = grid_dim[:,0]
b = grid_dim[:,1]
d = grid_dim[:,2]
are just some values taken from grid_dim which is a numpy narray
When trying to run the code, I get the following error:
Traceback (most recent call last):
  File "/usr/lib/python3.6/code.py", line 91, in runcode
    exec(code, self.locals)
  File "<input>", line 9, in <module>
  File "<input>", line 3, in f
AttributeError: 'numpy.ndarray' object has no attribute 'cos'
I really don't understand why cos is being looked up as an attribute rather than applied as a function, because if I run oscillatory_ outside of *np.meshgrid() everything is fine.
Also, I have played with the toy example from the link above, adding an np.cos call, and everything works fine. The problem is in my function and I am not able to figure out where.
I am doing this in order to compute an interpolation afterwards through:
my_interpolating_function = RegularGridInterpolator((a,b,d), data)
Any help would be highly appreciated on that one. Thank you very much

optimize 32bit value construction
So, I have the following code:
uint32_t val;
if (swap) {
    val = ((uint32_t)a & 0x0000ffff) | ((uint32_t)b << 16);
} else {
    val = ((uint32_t)b & 0x0000ffff) | ((uint32_t)a << 16);
}
Is there a way to optimize it, and have the swap check somehow embedded in the statement?

Optimize for runtime
The following code takes two params: an array of sentences and an array of queries. Each query has one or more words. For each query, the function prints the index of every sentence that contains all the words of that query.
This is functioning code. However, it is not optimized for large inputs because of how it is written.
How can we make it more runtime optimized? Any ideas would be appreciated.
def textQueries(sentences, queries):
    sentences_sets = [set(s.split(' ')) for s in sentences]
    for i, words in enumerate(queries):
        results = ''
        for j, s in enumerate(sentences_sets):
            exists = True
            for word in words.split(' '):
                if word not in s:
                    exists = False
            if exists:
                results += ' %d' % j if results else '%d' % j
        if results:
            print(results)
        else:
            print(-1)
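One standard speed-up for this kind of query (a sketch of one approach, not the only one): build an inverted index from each word to the set of sentence indices containing it, then answer a query by intersecting a few posting sets instead of scanning every sentence for every query.

```python
from collections import defaultdict

def text_queries_fast(sentences, queries):
    # word -> set of sentence indices containing that word
    index = defaultdict(set)
    for j, s in enumerate(sentences):
        for word in s.split():
            index[word].add(j)
    out = []
    for q in queries:
        words = q.split()
        # intersect posting sets, smallest first so misses fail fast
        postings = sorted((index.get(w, set()) for w in words), key=len)
        hits = set.intersection(*postings) if postings and postings[0] else set()
        out.append(sorted(hits) if hits else [-1])
    return out

print(text_queries_fast(["a b c", "b c d"], ["b c", "x"]))  # [[0, 1], [-1]]
```

The index is built once in time proportional to the total number of words; each query then touches only the sentences that actually contain its rarest word.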

Optimisation Wordpress Website Country Spain. Living and working in Switzerland. Site language English
I am not working in the IT field, but my boyfriend gave me the address of this site. I am a graphic designer.
For a long time I've been struggling with my WordPress homepage.
At the moment I live in Switzerland and my site is in English only. You can find my website easily on Google in all countries (Switzerland, Germany, Mexico, England, Peru, etc.): in Switzerland as the first result, in the other countries on the first page of results. So far I am OK with that.
But unfortunately, in Spanish search results on desktop the site appears as the 12th or 13th result, on the second page of Google results, while in mobile search it appears on the first page of results.
What should I implement in order to make my site more visible in Spain (in the Spanish language)? Before I spend hours and money on some solution, I would like to know which is the best strategy or solution.
The site's name is my own name, and there are some people in Spain with the same name as me. How could I improve the results?
Now it is like this: www.surnamename.com. Should I use www.surnamename.com? I own the domain too.
I try to update the site as often as possible. Sometimes the site goes up in the results for a couple of days and then down again.
I would be happy to know what I can do so that my site is on the first page of both mobile and desktop search results in Spain in Spanish. I am looking for a long-term solution.

To use Symbols in limits of Definite integral
I want to use (EF - e*V) and EF as the lower and upper limits of my definite integral, where EF is given as 10 electron volts and e is the electronic charge, and I want the answer in terms of V. How would I do that?
from scipy.integrate import quad
import sympy as sp
import math

e = 1.6 * (10 ** -19)
L = 10 ** -9
h = 6.626 * (10 ** -34)
h_cut = 1.05 * (10 ** -34)
m = 9.11 * (10 ** -31)
V0 = 4.0*e   # in J
EF = 10.0*e  # in J
E = sp.Symbol('E')
V = sp.Symbol('V')

def f(E):
    j = (4 * E * (V0 - E)) / (4 * E * (V0 - E) + V0 ** 2 * ((2 * m * (V0 - E)) * ((L / h_cut) ** 2)))
    return j

i, err = quad(f, EF - e * V, EF)
print('i= ', i)
I = (2 * e * i) / h
print(I)

The error is as follows:

Traceback (most recent call last):
  File "C:/Users/Subham/Desktop/Integration/integration.py", line 22, in <module>
    i, err = quad(f, EF - e * V, EF)
  File "C:\Users\Subham\AppData\Local\Programs\Python\Python37-32\lib\site-packages\scipy\integrate\quadpack.py", line 337, in quad
    flip, a, b = b < a, min(a, b), max(a, b)
  File "C:\Users\Subham\Desktop\Integration\venv\lib\site-packages\sympy\core\relational.py", line 304, in __nonzero__
    raise TypeError("cannot determine truth value of Relational")
TypeError: cannot determine truth value of Relational
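scipy.integrate.quad only accepts numeric limits, which is why the symbolic limit EF - e*V reaches the `b < a` comparison inside quad and fails. One way to keep V symbolic is to do the whole integral in SymPy and lambdify the result; the sketch below uses a simpler stand-in integrand, since I have not verified that the actual transmission-coefficient integrand has a closed form.

```python
import sympy as sp

E, V = sp.symbols('E V', positive=True)
e = 1.6e-19          # electronic charge in C
EF = 10.0 * e        # 10 eV expressed in joules

f = E / (E + EF)     # toy integrand standing in for the real one

i_of_V = sp.integrate(f, (E, EF - e*V, EF))  # result is an expression in V
i_func = sp.lambdify(V, i_of_V, 'numpy')     # numeric function of V
print(i_func(0.0))   # limits coincide at V = 0, so the integral is 0
```

If the real integrand has no closed form, the alternative is to keep quad but call it once per numeric value of V, e.g. `quad(f, EF - e*v, EF)` inside a loop over float values v.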

Average across images and then interpolate out nans
I've got a stack of satellite images that are all taken within a short time period but none of them provide sufficient coverage of my study area. So I would like to:
- Average them all together, ignoring NaNs if there is at least one other valid data point in that location
- Interpolate over the array to fill in holes, unless there isn't a valid data point within n pixels of that location. If that is the case I'll just keep it as a NaN and ignore it in the final processing.
So far I have:
initial_satellite_data.shape
>>> (5, 500, 500, 2)  # five images, 500x500 pixels, and two bands per image

# need to convert this to float in order to do the next step
sat_data_float = initial_satellite_data.astype(np.float32)
# 255 values signify areas without data, so converting that to nan
sat_data_float[sat_data_float == 255] = np.nan
# get the average across the different images
average_image = np.nanmean(sat_data_float, axis=0)
average_image.shape
>>> (500, 500, 2)
But I'm not clear on how I can interpolate across the data to fill in the gaps that still contain NaN data, given my caveat of not wanting to fill a pixel that doesn't have valid data within n pixels of it.
As an example of roughly how the data could appear here is some code generating the kind of noise that I do want to interpolate out and the kind of missing chunks I don't want to interpolate across on a single band image:
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 500)
y = x[:, None]
average_image = x + y

# Destroy some values - this is about the amount of missing
# data I have that I want to interpolate across
mask = np.random.random(average_image.shape) > 0.98
average_image[mask] = np.nan

# Now destroy a whole corner that might be missing and it
# doesn't make sense to interpolate into this mass of missing data
average_image[0:80, 0:80] = np.nan

# Here should be some interpolation process. I've been able to interpolate
# in 1D across the rows and that actually works well enough to get rid of
# the missing individual pixels, but I can't do a sanity check to make sure
# it isn't interpolating a pixel 50 pixels away from real data.
interpolated_image = average_image

fig, (ax0, ax1) = plt.subplots(1, 2)
ax0.imshow(average_image, cmap='gray', interpolation='nearest')
ax0.set_title('Input image')
ax1.imshow(interpolated_image, cmap='gray', interpolation='nearest')
ax1.set_title('Interpolated data')
plt.show()
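A possible way to get the distance-limited behaviour (a sketch assuming SciPy is available, shown on a single-band 2-D array): scipy.ndimage.distance_transform_edt can return both the distance from each pixel to the nearest valid pixel and that pixel's indices, which gives a nearest-neighbour fill that can then be re-masked wherever the distance exceeds n pixels.

```python
import numpy as np
from scipy import ndimage

def fill_nans_within(img, n_pixels=5):
    invalid = np.isnan(img)
    # distance from each pixel to the nearest valid pixel, plus its indices
    dist, (iy, ix) = ndimage.distance_transform_edt(
        invalid, return_distances=True, return_indices=True)
    filled = img[iy, ix]               # nearest-neighbour fill
    filled[dist > n_pixels] = np.nan   # too far from real data: keep NaN
    return filled

img = np.arange(25, dtype=float).reshape(5, 5)
img[2, 2] = np.nan       # isolated hole: gets filled
img[0:2, 0:2] = np.nan   # corner blob: its far corner stays NaN
out = fill_nans_within(img, n_pixels=1)
print(np.isnan(out).sum())  # 1  (only the (0, 0) corner pixel is left NaN)
```

A smoother variant replaces the nearest-neighbour fill with scipy.interpolate.griddata over the valid pixel coordinates, while keeping the same distance-based re-mask as the sanity check.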

system of equations SciPy
I would like to solve such a system of equations:
21.7424698112 = A + B*2^{-x}
22.4531418345 = A + B*3^{-x}
23.8033967265 = A + B*4^{-x}
I wrote a script to find A, B and x variables:
#!/usr/bin/env python
import numpy as np
from scipy.optimize import root

def your_funcs(X):
    A, B, x = X
    f = [21.7424698112 - A - B*(1/2**x),
         22.4531418345 - A - B*(1/3**x),
         23.8033967265 - A - B*(1/4**x)]
    return f

sol = root(your_funcs, [1.0, 1.0, 1.0])
print(sol.x)
This approach leads to the following solution:
[ 22.66633613 50.69139062 105.89893632]
I checked the solution and turned out that these values do not satisfy above equations. What am I doing wrong? Should I be aware of something?
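One quick sanity check (a diagnostic sketch, not a diagnosis of the data): substitute the solver's answer back into the residual functions and inspect sol.success, since root returns its last iterate even when it did not converge.

```python
import numpy as np
from scipy.optimize import root

def your_funcs(X):
    A, B, x = X
    return [21.7424698112 - A - B*(1/2**x),
            22.4531418345 - A - B*(1/3**x),
            23.8033967265 - A - B*(1/4**x)]

sol = root(your_funcs, [1.0, 1.0, 1.0])
print(sol.success)                     # False means: don't trust sol.x
print(np.allclose(your_funcs(sol.x), 0, atol=1e-8))
```

If the solver did not converge, trying several starting points or method='lm' often helps; note also that the code as written solves A + B*2^(-x) and so on, so the answer should be checked against the system exactly as coded.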

Algorithm to enumerate all sequences?
I'm researching the following problem:
Input:
N where N is an integer.
K where K is a positive integer, K<N
Output: All distinct sequences satisfying
 1 ≤ k_{1} ≤ k_{2} ≤ … ≤ k_{K} ≤ N
Eg.: N=4, K=2
The answer is 
 1 1
 1 2
 1 3
 1 4
 2 2
 2 3
 2 4
 3 3
 3 4
 4 4
What is the name of this problem? Are there any references to its optimum solution algorithm?
TIA
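These sequences are exactly the K-multicombinations (combinations with replacement) of {1, ..., N}, of which there are C(N+K-1, K); Python's itertools generates them directly in the order shown above:

```python
from itertools import combinations_with_replacement

N, K = 4, 2
seqs = list(combinations_with_replacement(range(1, N + 1), K))
print(len(seqs))  # 10, which equals C(N + K - 1, K) = C(5, 2)
print(seqs[:3])   # [(1, 1), (1, 2), (1, 3)]
```

The output order is lexicographic, matching the example listing for N=4, K=2.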

Generate all matrices given restrictions
I'm working on a multidimensional integer optimization problem, and at one step I have to run a function on all matrices of dim = r x i such that there is only one 1 per row and all the rest of the row is zeros.
I'm having a hard time writing the recursion code or loop for this, mainly because I need r to be a variable (i can be fixed no problem).
The main problem arising is that I would need r amount of for loops to execute this, so I believe a recursion function might be the way to go.
I know this code will execute in exponential time, but the size of the matrix won't be that large so it should be able to handle this bruteforce solution. (Usually r < 15, i < 5)
So, my question is how do I write an algorithm to generate all these matrices given the constraint?
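Each such matrix is determined by choosing, for every one of the r rows, the column that holds the 1, so iterating over itertools.product(range(i), repeat=r) enumerates all i**r matrices without writing r nested loops or explicit recursion; a sketch:

```python
import itertools
import numpy as np

def one_hot_matrices(r, i):
    # one choice of column per row <-> one matrix with a single 1 per row
    for cols in itertools.product(range(i), repeat=r):
        m = np.zeros((r, i), dtype=int)
        m[np.arange(r), cols] = 1
        yield m

mats = list(one_hot_matrices(3, 2))
print(len(mats))  # 8, i.e. 2**3
```

Because this is a generator, the r < 15, i < 5 brute force can feed each matrix to the objective function one at a time without materializing all of them.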

How to minimize an objective function with series of sums in Matlab?
I'm trying to minimize the following optimization problem in MATLAB (sorry for the ugly form, but apparently I need more reputation to post images):

w* = argmin_w f(w) = sum_{i=1}^{n} sum_{j=1}^{n} (w_i*(Cw)_i - w_j*(Cw)_j)^2
s.t. 1^T * w = 1, 0 <= w <= 1

w is a 1xn vector of weights and C an nxn covariance matrix.
For now I've got this:
w0 = [0.25, 0.25, 0.25, 0.25];
Aeq = ones(1, port_size);
beq = 1;
lb = zeros(1, port_size);
ub = ones(1, port_size);
fun_rp = @(w) for i = 1:port_size
                  for j = 1:port_size
                      (w(1,i)*(C*w(1,i)') - w(1,j)*(C*w(1,j)'))^2;
                  end
              end
Then I want to put the variables into the fmincon function to solve the problem.
But somehow I receive an error that fun_rp is an undefined function or variable. Does anybody know how to fix this?
Best regards

Implementing a test for non-nested hypotheses in R
I am trying to run Vuong's closeness test for non-nested hypotheses (https://en.wikipedia.org/wiki/Vuong%27s_closeness_test). I have estimated two models by MLE, using the bbmle package. When I try to run the test, I get the following error message:
> vuongtest(model1, model2)
Error in UseMethod("llcont") :
  no applicable method for 'llcont' applied to an object of class "mle"
I do not understand what gives rise to this error message.
The models are estimated like this:
model1 <- mle2(logL1, start = list(...))
model2 <- mle2(logL2, start = list(...))

How to implement minibatch gradient descent for maximum likelihood estimation python?
Currently, I have some code written that finds the combination of parameters that maximizes the loglikelihood function with some field data. The model now randomly selects the parameters out of a uniform distribution and then selects the best parameter set. This method however doesn't give reproducible results. I am wondering if anyone has any sample codes for how to update this code to run the minibatch gradient descent (or ascent) method to find the best parameter set?
Also, the model now uses all of the field data at once, whereas I want small batches of the data to be tested at each iteration. Below is my code:
from math import *
import numpy as np
import random
import matplotlib.pyplot as plt

def ff(xs, vx, Dx, XSpill, sigmax0, SP):
    mux = XSpill + vx*SP
    sigmax = sigmax0 + np.sqrt(2.0*Dx*SP)
    return (1.0/(np.sqrt(2.0*np.pi*(sigmax**2.0))))*np.exp(-((xs - mux)**2.0)/(2.0*(sigmax**2.0)))

def likelihood(xs, XSpill, ConData, sigmax0, SP):
    vx1 = [random.uniform(-50.0, 50.0) for i in range(1000)]
    Dx1 = [random.uniform(0.01, 20.1) for i in range(1000)]
    IniIndLikelihood = np.ones([len(xs), len(vx1)])
    Lamda = np.zeros(shape=(len(vx1)))
    Prob = np.zeros(shape=(len(vx1)))
    for ci in range(len(xs)):
        if ConData[ci] > 0.0:
            for i in range(len(vx1)):
                Prob[i] = ff(xs[ci], vx1[i], Dx1[i], XSpill, sigmax0, SP)
                if Prob[i] > 1e-308:
                    Lamda[i] = 1/Prob[i]
                    IniIndLikelihood[ci, i] = Lamda[i]*np.exp(-Lamda[i]*ConData[ci])
                else:
                    Lamda[i] = 0.0
                    IniIndLikelihood[ci, i] = 0.0
    CompLikelihood = np.ones([len(vx1)])
    Likelihood = np.zeros([len(vx1)])
    for i in range(len(vx1)):
        for ci in range(len(xs)):
            if ConData[ci] > 0.0:
                if IniIndLikelihood[ci, i] == 0.0:
                    CompLikelihood[i] = 0.0
    MaxLogLike = 22.0
    for i in range(len(vx1)):
        for ci in range(len(xs)):
            if ConData[ci] > 0.0:
                if CompLikelihood[i] == 1.0:
                    Likelihood[i] = Likelihood[i] + np.log(IniIndLikelihood[ci, i])
        if CompLikelihood[i] == 1.0:
            if MaxLogLike == 22.0:
                MaxLogLike = Likelihood[i]
            else:
                MaxLogLike = np.max([MaxLogLike, Likelihood[i]])
    for i in range(len(vx1)):
        if CompLikelihood[i] == 1.0:
            Likelihood[i] = Likelihood[i] - MaxLogLike
    return Likelihood

if __name__ == "__main__":
    sigmax0 = 0.0
    XSpill = 0.0
    SP = 1.0
    xs = [1, 3, 5, 9, 20, 34, 40, 60]
    ConData = np.array([5, 7, 30, 5, 5, 15, 30, 5])/100
    Result = likelihood(xs, XSpill, ConData, sigmax0, SP)
Where xs is the location and ConData is the concentration of the field data. Then after getting the probabilities of the log likelihood, I use argmax to find the best parameter combination.
Any suggestions, links, or sample codes would be helpful since I am having trouble finding Python examples for MLE method!
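A generic sketch of mini-batch gradient ascent on a log-likelihood (deliberately not wired to the spill model above; the Normal(mu, sigma) target and all constants here are illustrative assumptions): shuffle the data each epoch, take a gradient step on one small batch at a time, and parametrise sigma as exp(log_sigma) so it stays positive.

```python
import numpy as np

rng = np.random.default_rng(0)           # fixed seed -> reproducible run
data = rng.normal(3.0, 2.0, size=2000)   # synthetic "field data"

mu, log_sigma = 0.0, 0.0                 # parameters to estimate
lr, batch, epochs = 0.05, 64, 200

for epoch in range(epochs):
    rng.shuffle(data)                    # fresh batch order every epoch
    for start in range(0, len(data), batch):
        xb = data[start:start + batch]
        sigma = np.exp(log_sigma)
        # gradients of the mean log-likelihood of N(mu, sigma)
        g_mu = np.mean(xb - mu) / sigma**2
        g_ls = np.mean((xb - mu)**2) / sigma**2 - 1.0
        mu += lr * g_mu                  # ascent: move uphill
        log_sigma += lr * g_ls

print(round(mu, 2), round(np.exp(log_sigma), 2))  # close to 3.0 and 2.0
```

Swapping the Normal gradients for the gradients of your spill log-likelihood (with respect to vx and Dx) gives the update you describe, and the fixed seed also addresses the reproducibility problem of the current random search directly.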

Maximize loglikelihood to obtain GARCH estimators in python
I have some data about exchange rates and want to estimate a GARCH model. I am not sure whether what I have done is correct, and I get two errors.
I have a list of exchange rates from which I took the log differences. In addition, I have subtracted the mean to get the epsilons. Now I want to estimate the parameters of the GARCH model. I have done this by minimizing the negative log-likelihood, but I get two warnings:

RuntimeWarning: invalid value encountered in log
RuntimeWarning: overflow encountered in double_scalars

But I don't see any negative values in sigma. How can I solve these warnings? Also, I get the following output:
x: array([ 0.04881267, 0.17925725, 0.73315972])
I have learned that the usual values are most of the time around 0.0001, 0.05 and 0.9. Here that is not the case. Am I doing something wrong?

def main():
    S = data[:, 1]
    R = np.zeros(len(S) - 1)
    for i in range(len(S) - 1):
        R[i] = np.log(S[i+1] / S[i]) * 100
    eps = R - np.mean(R)
    vP0 = (0.1, 0.05, 0.92)
    a = minimize(garch_loglike, vP0, eps,
                 bounds=((0.0001, None), (0.0001, None), (0.0001, None)),
                 options={'disp': True})
    print(a)

def garch_filter(omega, alpha, beta, eps):
    iT = len(eps)
    sigma_2 = np.zeros(iT)
    for i in range(iT):
        if i == 0:
            sigma_2[i] = omega/(1 - alpha - beta)
        else:
            sigma_2[i] = omega + alpha*eps[i-1]**2 + beta*sigma_2[i-1]
    return sigma_2

def garch_loglike(vP, eps):
    iT = len(eps)
    omega = vP[0]
    alpha = vP[1]
    beta = vP[2]
    sigma_2 = garch_filter(omega, alpha, beta, eps)
    logL = -np.sum(-np.log(sigma_2) - eps**2/sigma_2)
    return logL
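As a closing illustration of where those two warnings typically come from (an illustration with made-up numbers, not a diagnosis of this exact dataset): if the optimizer visits a point with alpha + beta >= 1, the unconditional-variance seed omega/(1 - alpha - beta) turns negative or huge, after which np.log(sigma_2) emits the "invalid value" warning and the recursion can overflow. Adding the stationarity constraint alpha + beta < 1, or seeding sigma_2[0] with the sample variance of eps, usually removes both warnings.

```python
import numpy as np

def garch_sigma2(omega, alpha, beta, eps):
    sigma_2 = np.empty(len(eps))
    sigma_2[0] = omega / (1 - alpha - beta)  # needs alpha + beta < 1
    for i in range(1, len(eps)):
        sigma_2[i] = omega + alpha*eps[i-1]**2 + beta*sigma_2[i-1]
    return sigma_2

eps = np.array([0.1, -0.2, 0.15])
good = garch_sigma2(0.05, 0.05, 0.90, eps)  # stationary region: all positive
bad = garch_sigma2(0.05, 0.30, 0.75, eps)   # alpha + beta > 1: negative seed
print(np.all(good > 0), bad[0] < 0)  # True True
```

With scipy.optimize.minimize, the constraint can be expressed either as a linear inequality constraint on alpha + beta or simply by penalising the objective whenever the seed comes out non-positive.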