How to graph a Taylor series in Python using matplotlib
```python
import matplotlib.pyplot as plt
import sympy as sy
import numpy as np

x = sy.Symbol('x')
plt.title("Comparison of EXP(x), PP1, PP2")
plt.xlim(0 - 0.1, 3 + 0.2)
plt.ylim(0, 20 + 1)
plt.xlabel("x")
plt.ylabel("y: values")
plt.grid(False)

x = np.linspace(0.0, 3.0, 100)
plt.plot(x, np.exp(x), label="exp(x)")
plt.plot(x, np.exp(x).series(x, 0, 4), label="PP1")  # this line fails
plt.legend(loc=2)
plt.show()
```
I use Jupyter Notebook and want to graph the Taylor series of exp(x). I managed to graph exp(x) itself, but the Taylor series plot fails. The sympy library has a Taylor series function, so I tried it, but I don't understand what the problem is. Below are the error message and graph.
1 answer

`sy.exp(x).series(x)` creates a sympy expression, not a function. You might want to convert it to a function:

```python
from sympy.utilities.lambdify import lambdify

x = sy.Symbol('x')
exp_expr = sy.exp(x).series(x).removeO()
exp_func = lambdify(x, exp_expr)
```

and plot it:

```python
x_points = np.linspace(0.0, 3.0, 100)
plt.plot(x_points, [exp_func(i) for i in x_points])
```
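Putting the answer's pieces together, a complete sketch could look like the following (the non-interactive backend is an addition so it also runs outside Jupyter; PP1/PP2 follow the naming from the question's title, and passing "numpy" to lambdify lets the polynomial accept whole arrays at once):

```python
import numpy as np
import sympy as sy
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch also runs outside Jupyter
import matplotlib.pyplot as plt
from sympy.utilities.lambdify import lambdify

x = sy.Symbol('x')

# series() returns an expression ending in an O(x**n) Landau term; removeO()
# strips it so lambdify can turn the polynomial into a numeric function.
pp1 = lambdify(x, sy.exp(x).series(x, 0, 4).removeO(), "numpy")
pp2 = lambdify(x, sy.exp(x).series(x, 0, 8).removeO(), "numpy")

x_points = np.linspace(0.0, 3.0, 100)
plt.title("Comparison of exp(x), PP1, PP2")
plt.plot(x_points, np.exp(x_points), label="exp(x)")
plt.plot(x_points, pp1(x_points), label="PP1 (order 4)")
plt.plot(x_points, pp2(x_points), label="PP2 (order 8)")
plt.legend(loc=2)
plt.savefig("taylor_exp.png")
```

The higher-order polynomial hugs exp(x) over a wider interval, which makes the comparison in the title visible.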
See also questions close to this topic

Reducer to find the most popular movie for each age group in Python
I am trying to write a mapper and reducer for Hadoop to find the movies with a 5 rating ("the popular movies") for each age group.
I write this
mapper.py
to join the two data sets on the user id, taking the age from the users data and the rating and movie name from the ratings data:

```python
#!/usr/bin/env python
import sys

for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    line = line.split("::")
    rating = "1"
    movie = "1"
    user = "1"
    age = "1"
    if len(line) == 4:  # ratings data
        rating = line[2]
        movie = line[1]
        user = line[0]
        # print('%s %s %s' % (user, movie, rating))
    else:  # users data
        user = line[0]
        age = line[2]
    print('%s\t%s\t%s\t%s' % (user, age, rating, movie))
```
This is the data structure. Ratings data: userid, movieid, rating, timestamp. Users data: userid, gender, age, occupation.
The reducer I wrote is not working at all; it gives me 0 results.
I want the result to be the top popular movies for each age group:
```
1   2254 4567
18  8732 0987 0986
25  7654 8765 7658
35  6543 7645 7654
45  7654 8765 5433
50  7652 1876 7654
56  3986 3956
```
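The question doesn't include the failing reducer, so here is a hypothetical sketch in plain Python of the reduce step the mapper's output calls for. The function name, the sample data, and the "1" placeholders are taken from or modelled on the mapper above; in Hadoop streaming the sorted lines would arrive on stdin:

```python
from collections import defaultdict
from itertools import groupby

def reduce_lines(lines):
    """Join each user's age record with their rating records, then count
    5-star ratings per (age, movie).  Assumes the input is sorted by user id,
    which Hadoop streaming guarantees for the mapper's tab-separated keys."""
    counts = defaultdict(int)  # (age, movie) -> number of 5-star ratings
    rows = (line.rstrip("\n").split("\t") for line in lines)
    for user, group in groupby(rows, key=lambda r: r[0]):
        group = list(group)
        # The mapper uses "1" as a placeholder, so the age record is the one
        # whose age field differs from it (a weakness inherited from the mapper).
        age = next((r[1] for r in group if r[1] != "1"), None)
        if age is None:
            continue
        for _, _, rating, movie in group:
            if rating == "5":
                counts[(age, movie)] += 1
    return counts

sample = [
    "10\t25\t1\t1",    # users record: user 10 is in age group 25
    "10\t1\t5\t2254",  # user 10 gave movie 2254 a 5
    "10\t1\t5\t2254",
    "11\t25\t1\t1",
    "11\t1\t3\t2254",  # a 3 rating is ignored
]
top = reduce_lines(sample)
```

Sorting each age group's counts descending and keeping the head would then give the "top popular movies per age group" table shown above.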

How to compare two columns from two DataFrames, keeping some columns constant, and print the row?
I'm working on a project where I have to find the changes made in a second sheet (in a specific column) compared to a primary/master sheet, and then print or save the complete row in which changes are found. Here are more details. Both Excel sheets have many columns. My master sheet has data like the following:
```
TID LOC HECI RR UNIT SUBD S EUSE INV ACT CAC FMT CKT DD SCID CUSTOMER F&E/SERVICE ID BVAP PORD AUTH RULE ST RGN
CHCGILDTO3P050101D CHCGILDTO3P M3MSA0S1RA 0501.01D 1A1 IE D STR3RA8 S CL/HFFS/688898 /LGT 20180721 BLOOMBERG LP DS316668545 WMS881282 E.485339 IL N
CHCGILDTO3P050101D CHCGILDTO3P M3MSA0S1RA 0501.01D 1A2 IE J DNA UNDER DECOM EID 2466 20190322 WMS881282 E.485339 IL N
CHCGILDTO3P050101D CHCGILDTO3P M3MSA0S1RA 0501.01D 1A3 IE J DNA UNDER DECOM EID 2466 20190322 WMS881282 E.485339 IL N
CHCGILDTO3P050101D CHCGILDTO3P M3MSA0S1RA 0501.01D 1A4 IE J DNA UNDER DECOM EID 2466 20190322 WMS881282 E.485339 IL N
CHCGILDTO3P050101D CHCGILDTO3P M3MSA0S1RA 0501.01D 1A5 IE J DNA UNDER DECOM EID 2466 20190322 WMS881282 E.485339 IL N
```
and my second sheet has data as follows :
```
HECI UNIT INV ACT CKT PACT DD LOC RR
M3MSA0S1RA 1A1 IE $ CL/HFFS/688898 /LGT D 72118 CHCGILDTO3P 0501.01D
M3MSA0S1RA 1A2 IE J DNA UNDER DECOM EID 2466 32219 CHCGILDTO3P 0501.01D
M3MSA0S1RA 1A3 IE J DNA UNDER DECOM EID 2466 32219 CHCGILDTO3P 0501.01D
M3MSA0S1RA 1A4 IE J DNA UNDER DECOM EID 2466 32219 CHCGILDTO3P 0501.01D
M3MSA0S1RA 1A5 IE J DNA UNDER DECOM EID 2466 32219 CHCGILDTO3P 0501.01D
```
So first I want to check whether the values of LOC, HECI, RR, and UNIT are the same in both sheets; if they are, I want to move forward, compare the ACT column, and print the difference as output.
For example, in row #1 of the master data ACT is 'D', whereas in the second sheet it has changed to '$'.
So I want the output to be the complete related row, together with a note that it changed from 'D' to '$'.
This seems very complicated to me, as I'm at the beginning stage of Python and pandas.
I tried using loops but was unable to make it work; besides, if I use too many loops, that's not the pandas way, I believe.
here is my code:
```python
import pandas as pd

df1 = pd.read_excel("Master Database.xlsx")
df2 = pd.read_excel("CHCGILDTO3P_0501.01D.xlsx")
d1_act = df1['ACT']
d2_act = df2['ACT']

for index1, row1 in df1.iterrows():
    for index2, row2 in df2.iterrows():
        if (row1['LOC'], row1['HECI'], row1['RR']) == (row2['LOC'], row2['HECI'], row2['RR']):
            for x in d1_act and y in d2_act:  # this line is where I'm stuck
                # print(x, y)
                if x != y:
                    print(x, y)  # not getting how to print the complete respective row
                else:
                    pass
        else:
            pass
```
I want output like:

```
M3MSA0S1RA 1A1 IE $ CL/HFFS/688898 /LGT D 72118 CHCGILDTO3P 0501.01D
changed from 'D' to '$'
```

Please assist! Thank you in advance!
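A merge-based sketch of what the question asks for, without nested loops. The key and compared column names come from the question; the toy frames are stand-ins for the two Excel sheets:

```python
import pandas as pd

# Toy stand-ins for the two sheets, keeping only the relevant columns.
master = pd.DataFrame({
    "LOC": ["CHCGILDTO3P"] * 2, "HECI": ["M3MSA0S1RA"] * 2,
    "RR": ["0501.01D"] * 2, "UNIT": ["1A1", "1A2"], "ACT": ["D", "J"],
})
second = pd.DataFrame({
    "LOC": ["CHCGILDTO3P"] * 2, "HECI": ["M3MSA0S1RA"] * 2,
    "RR": ["0501.01D"] * 2, "UNIT": ["1A1", "1A2"], "ACT": ["$", "J"],
})

# Align rows on the constant columns, bringing master's ACT alongside.
keys = ["LOC", "HECI", "RR", "UNIT"]
merged = second.merge(master[keys + ["ACT"]], on=keys,
                      suffixes=("", "_master"))

# Keep only the rows whose ACT differs between the sheets.
changed = merged[merged["ACT"] != merged["ACT_master"]]
for _, row in changed.iterrows():
    print(row.to_dict(), f"changed from {row['ACT_master']!r} to {row['ACT']!r}")
```

With the real files, `master` and `second` would come from `pd.read_excel`, and `changed` already holds the complete rows to print or save.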

Merge duplicate cells of a column

My current Excel file looks like:
```
Type  Val
A     1
A     2
B     3
B     4
B     5
C     6
```

This is the required excel:
```
Type  Val  Sum
A     1    3
      2
B     3    12
      4
      5
C     6    6
```
Is it possible in Python using pandas or any other module?
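A pandas sketch of the Type/Val/Sum transformation shown above. Actually merging the cells in the written Excel file would additionally need something like openpyxl's `merge_cells`, which is not shown here:

```python
import pandas as pd

df = pd.DataFrame({"Type": ["A", "A", "B", "B", "B", "C"],
                   "Val": [1, 2, 3, 4, 5, 6]})

# transform("sum") broadcasts each group's total to every row of the group;
# the column is made object-typed so the repeats can be blanked out below.
df["Sum"] = df.groupby("Type")["Val"].transform("sum").astype(object)

# Blank every row after the first within each Type, mimicking merged cells.
df.loc[df.duplicated("Type"), "Sum"] = ""
```

Writing `df` with `df.to_excel(..., index=False)` then reproduces the required layout, minus the visual cell merging.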

How to group elements (of a tensor) according to classification tensor and calculate the std of each cluster (Tensorflow)?
I have a tensor of dimension [n_samples, height, width], i.e., many images.
I have another tensor of dimension [n_samples,], i.e, each element of this tensor is the cluster index of one image in the first tensor.
Now, how can I calculate the variance of each cluster in TensorFlow (the clustering of the first tensor is given by the second tensor)?
I found a TensorFlow function, unsorted_segment_mean, which does something similar, but I cannot figure out how to calculate the variance. Can anyone help me with that? Thank you!
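One route is the identity Var[X] = E[X²] − (E[X])², which needs only two segment means: `tf.math.unsorted_segment_mean(x, ids, n)` and the same call on `x**2`. The sketch below mirrors that computation in NumPy so the arithmetic is easy to check; swapping the accumulations for the TensorFlow op gives the in-graph version:

```python
import numpy as np

def segment_variance(images, segment_ids, num_segments):
    """Per-cluster, per-pixel variance via Var[X] = E[X**2] - E[X]**2,
    mirroring two calls to tf.math.unsorted_segment_mean (on x and x**2)."""
    flat = images.reshape(len(images), -1)          # [n_samples, h*w]
    mean = np.zeros((num_segments, flat.shape[1]))
    mean_sq = np.zeros_like(mean)
    counts = np.bincount(segment_ids, minlength=num_segments)[:, None]
    np.add.at(mean, segment_ids, flat)              # sum per cluster
    np.add.at(mean_sq, segment_ids, flat ** 2)      # sum of squares per cluster
    mean /= counts
    mean_sq /= counts
    return mean_sq - mean ** 2                      # [num_segments, h*w]
```

The E[X²] − (E[X])² form can lose precision for large values with tiny spread; if that matters, computing E[(X − E[X])²] with a second pass (gathering the cluster mean back per sample) is the more stable variant.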

NumPy cumsum rounding issue
I am new to NumPy and I am vectorizing a homemade function that uses NumPy's cumsum() for its calculation. However, I found that the result has some rounding error whose culprit seems to be cumsum(). It can be shown as follows:
```python
import numpy as np

X = np.repeat(30000.1, 114569)
X1 = X**2
Y = np.cumsum(X1)
Z = Y[1:] - Y[:-1]
print(f'Theoretical value: 900006000.01')
print(f'first member of X1: {X1[0]}')
print(f'first member of Z: {Z[0]}')
print(f'last member of Z: {Z[-1]}')
print('They are mathematically the same, but in reality they are different')
```
The result is:
```
Theoretical value: 900006000.01
first member of X1: 900006000.0099999
first member of Z: 900006000.0099999
last member of Z: 900006000.015625
They are mathematically the same, but in reality they are different
```
Here are my questions:
1) Is there any way to improve the precision of cumsum?
2a) If there is, can you show me a simple example?
2b) If not, what is the maximum value of a float64, or the maximum length of the argument vector, before cumsum runs into rounding errors?
3) Are there any packages in Python for handling calculations with high-precision floating-point numbers?
Thank you in advance.
EDIT: changed the numbers to fewer decimal places to emphasize the issue: the rounding error appears with as few as 2 decimal places, which I think is a really big problem.
EDIT 2: some people pointed out that the subtraction between big numbers also contributes to the error. My other question is: is there any way in Python to handle the numerical errors that stem from subtracting two big numbers?
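A hedged observation rather than a full fix: by the time the running total reaches ~1e14, adjacent float64 values are 0.015625 apart, so any prefix sum stored as float64 can shift neighbouring differences by about that much regardless of how carefully it was accumulated. The sketch below checks that spacing and uses exact `Fraction` arithmetic to measure the real cumsum error; for question 3, `math.fsum` gives a correctly rounded total, and `decimal`/`mpmath` offer arbitrary precision:

```python
import numpy as np
from fractions import Fraction

X1 = np.repeat(30000.1, 114569) ** 2
Y = np.cumsum(X1)

# The observed 0.0056 discrepancy is below one ulp (unit in the last place)
# of the running total, i.e. the spacing of float64 near 1e14:
ulp = np.spacing(Y[-1])

# Exact rational arithmetic confirms the size of the accumulation error:
# np.float64 subclasses float, so Fraction represents each value exactly.
exact_total = sum(Fraction(v) for v in X1[:1000])
float_total = float(np.cumsum(X1[:1000])[-1])
err = abs(float(exact_total) - float_total)
```

So the differences `Y[1:] - Y[:-1]` are limited by representation, not by cumsum being sloppy; remedies are higher precision (e.g. `np.cumsum(X1, dtype=np.longdouble)` where the platform supports it) or avoiding the subtract-adjacent-prefix-sums pattern altogether.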

Numpy dot product of a matrix and an array is a matrix
When I updated to the most recent version of NumPy, a lot of my code broke, because now every time I call np.dot() on a matrix and an array, it returns a 1×n matrix rather than simply an array. This causes an error when I try to multiply the new vector/array by a matrix. Example:
```python
import numpy as np

A = np.matrix([[4, 1, 0, 0],
               [1, 5, 1, 0],
               [0, 1, 6, 1],
               [1, 0, 1, 4]])
x = np.array([0, 0, 0, 0])
print(x)
x1 = np.dot(A, x)
print(x1)
x2 = np.dot(A, x1)
print(x2)
```

output:

```
[0 0 0 0]
[[0 0 0 0]]
Traceback (most recent call last):
  File "review.py", line 13, in <module>
    x2 = np.dot(A, x1)
ValueError: shapes (4,4) and (1,4) not aligned: 4 (dim 1) != 1 (dim 0)
```
I would expect that either dot of a matrix and vector would return a vector, or dot of a matrix and 1xn matrix would work as expected.
Using the transpose of x doesn't fix this, nor does using `A @ x`, `A.dot(x)`, or any variation of `np.matmul(A, x)`.
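The behaviour follows from `np.matrix` forcing every result to be 2-D. Current NumPy documentation discourages `np.matrix` in favour of plain 2-D ndarrays, which keep matrix-vector products 1-D, so a sketch of the usual workaround is:

```python
import numpy as np

# A plain 2-D ndarray instead of np.matrix: dot with a 1-D vector stays 1-D.
A = np.array([[4, 1, 0, 0],
              [1, 5, 1, 0],
              [0, 1, 6, 1],
              [1, 0, 1, 4]])
x = np.array([1, 0, 0, 0])

x1 = np.dot(A, x)   # shape (4,), a true vector
x2 = np.dot(A, x1)  # chains without shape errors

# An existing np.matrix can be converted once at the boundary:
M = np.asarray(np.matrix(A))
```

After the one-time `np.asarray` conversion, `@`, `dot`, and `matmul` all behave as the question expects.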

matplotlib: how to change format of decimal numbers on axis labels
The following code produces a horizontal bar chart:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame()
df['A'] = pd.Series(np.random.uniform(0.1, 0.6, size=(5)))
df['B'] = pd.Series(np.random.uniform(0.1, 0.6, size=(5)))

fig, ax = plt.subplots()
ax.barh(np.arange(0, len(df)), df['A'], height=0.3)
ax.barh(np.arange(0.3, len(df) + 0.3), df['B'], height=0.3)
plt.show()
```
I would like to change the x-axis tick labels so that they become:

0 .1 .2 .3 .4 .5 .6
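One way to get that label style is matplotlib's `FuncFormatter`, which hands each tick value to a plain Python function. A sketch (the non-interactive backend is only there so it runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

def no_leading_zero(x, pos):
    """Render 0.1 as '.1' and 0 as '0' (pos is required by FuncFormatter)."""
    if x == 0:
        return "0"
    s = f"{x:.1f}"
    return s[1:] if s.startswith("0.") else s

fig, ax = plt.subplots()
ax.set_xlim(0, 0.6)
ax.xaxis.set_major_formatter(FuncFormatter(no_leading_zero))
fig.savefig("ticks.png")
```

Applying `set_major_formatter` to the `ax` from the bar-chart code above gives exactly the `0 .1 .2 … .6` labels requested.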

Dataframe.plot() not working when ax is defined
I am trying to emulate the span selector for the data I have, following the example shown here (https://matplotlib.org/examples/widgets/span_selector.html). However, my data is in a dataframe, not an array. When I plot the data by itself using the code below,

```python
input_month = '201706'
plt.close('all')
KPI_ue_data.loc[input_month].plot(x='Order_Type', y='#_Days_@_Post_stream')
plt.show()
```
the data chart is shown perfectly.
However, when I try to put this into a subplot with the code below (only the first two lines are added, plus `ax=ax` in the plot line), nothing shows up. I get no error either! Can anyone help?

```python
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(211, facecolor='#FFFFCC')
input_month = '201706'
plt.close('all')
KPI_ue_data.loc[input_month].plot(x='Order_Type', y='#_Days_@_Post_stream', ax=ax)
plt.show()
```
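A likely culprit (an educated guess, since the surrounding notebook state isn't shown): `plt.close('all')` runs *after* `fig` and `ax` are created, so the DataFrame plots into a figure that has already been destroyed, silently producing nothing. Reordering so the close happens first behaves as expected; the `df` here is a hypothetical stand-in for `KPI_ue_data.loc[input_month]`:

```python
import matplotlib
matplotlib.use("Agg")          # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

plt.close("all")               # close stale figures BEFORE creating the new one
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(211, facecolor="#FFFFCC")

# Hypothetical stand-in for KPI_ue_data.loc[input_month]:
df = pd.DataFrame({"Order_Type": list("ABCD"),
                   "#_Days_@_Post_stream": np.arange(4)})
df.plot(x="Order_Type", y="#_Days_@_Post_stream", ax=ax)
```

With the close moved up, the axes passed via `ax=ax` still exist when pandas draws into them, so the subplot shows the line.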

Is there a way to interpolate series with recurring values using matplotlib?
I am trying to interpolate an x/y series using matplotlib. The problem I am facing is that `spline` and `interp1d` fail because I have recurring values in both the x and y arrays.
I have tried the `spline` and `interp1d` functions from scipy, but both fail because of the recurring-values issue:

```python
x1 = [0.82, 0.82, 0.82, 0.82, 0.82, 0.82, 0.83, 0.83, 0.83, 0.83, 0.83, 0.83, 0.83]
y1 = [0.93, 0.93, 0.93, 0.93, 0.94, 0.94, 0.94, 0.94, 0.94, 0.94, 0.94, 0.94, 0.94]

f = interp1d(x1, y1, kind='cubic')
# this gives an error: Expect x to be a 1-D sorted array_like.

# another thing I tried
xnew = np.linspace(x1.min(), x1.max(), 300)
splined = spline(x1, y1, xnew)
# this gives an error: Matrix is singular
```
I expect the interpolated y value to increase gradually as x increases; for example, the corresponding y values for x = 0.82 would be 0.931, 0.932, etc. My goal, in the end, is to get a smooth curve.
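One hedged approach: interpolators need strictly increasing x, so collapse the duplicate x values first (here by averaging their y values), then interpolate on the deduplicated grid. The sketch uses linear `np.interp`; for a smooth curve, `scipy.interpolate.CubicSpline` on `xu, yu` (or `PchipInterpolator` to stay monotone) would be the follow-up, assuming enough distinct x values remain:

```python
import numpy as np

x1 = np.array([0.82] * 6 + [0.83] * 7)
y1 = np.array([0.93] * 4 + [0.94] * 9)

# Collapse each repeated x to the mean of its y values, giving a strictly
# increasing grid that interpolation routines accept.
xu = np.unique(x1)
yu = np.array([y1[x1 == v].mean() for v in xu])

xnew = np.linspace(xu.min(), xu.max(), 300)
ynew = np.interp(xnew, xu, yu)   # gradually increasing between the two x values
```

The averaging step is an assumption about what the duplicates mean; if they are repeated measurements, the mean is natural, but a median or a first/last choice may fit other data better.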

How to speed up nsolve or the bisection method?
I am writing a program that requires a root finder of some sort, but every root finder I have used is unsatisfactorily slow. I'm looking for a way to speed this up.
I have used SymPy's nsolve, and although it produces very precise results, it is very slow (if I do 12 iterations of my program, it takes 12+ hours to run). I wrote my own bisection method, and this works much better, but it is still very slow (12 iterations take ~1 hour to run). I have been unable to find a symengine solver, otherwise that is what I would be using. I will post both of my programs (with the bisection method and with nsolve). Any advice on how to speed this up is greatly appreciated.
Here is the code using nsolve:
```python
from symengine import *
import sympy
from sympy import Matrix
from sympy import nsolve

trial = Matrix()
r, E1, E = symbols('r, E1, E')
H11, H22, H12, H21 = symbols("H11, H22, H12, H21")
S11, S22, S12, S21 = symbols("S11, S22, S12, S21")
low = 0
high = oo
integrate = lambda *args: sympy.N(sympy.integrate(*args))

quadratic_expression = (H11 - E1*S11)*(H22 - E1*S22) - (H12 - E1*S12)*(H21 - E1*S21)
general_solution = sympify(sympy.solve(quadratic_expression, E1)[0])

def solve_quadratic(**kwargs):
    return general_solution.subs(kwargs)

def H(fun):
    return -fun.diff(r, 2)/2 - fun.diff(r)/r - fun/r

psi0 = exp(-3*r/2)
trial = trial.row_insert(0, Matrix([psi0]))

I1 = integrate(4*pi*(r**2)*psi0*H(psi0), (r, low, high))
I2 = integrate(4*pi*(r**2)*psi0**2, (r, low, high))
E0 = I1/I2
print(E0)

for x in range(10):
    f1 = psi0
    f2 = r * (H(psi0) - E0*psi0)
    Hf1 = H(f1).simplify()
    Hf2 = H(f2).simplify()
    H11 = integrate(4*pi*(r**2)*f1*Hf1, (r, low, high))
    H12 = integrate(4*pi*(r**2)*f1*Hf2, (r, low, high))
    H21 = integrate(4*pi*(r**2)*f2*Hf1, (r, low, high))
    H22 = integrate(4*pi*(r**2)*f2*Hf2, (r, low, high))
    S11 = integrate(4*pi*(r**2)*f1**2, (r, low, high))
    S12 = integrate(4*pi*(r**2)*f1*f2, (r, low, high))
    S21 = S12
    S22 = integrate(4*pi*(r**2)*f2**2, (r, low, high))
    E0 = solve_quadratic(
        H11=H11, H22=H22, H12=H12, H21=H21,
        S11=S11, S22=S22, S12=S12, S21=S21,
    )
    print(E0)
    C = (H11 - E0*S11)/(H12 - E0*S12)
    psi0 = (f1 + C*f2).simplify()
    trial = trial.row_insert(x + 1, Matrix([[psi0]]))

    # Free ICI Part
    h = zeros(x + 2, x + 2)
    HS = zeros(x + 2, 1)
    S = zeros(x + 2, x + 2)
    for s in range(x + 2):
        HS[s] = H(trial[s]).simplify()
    for i in range(x + 2):
        for j in range(x + 2):
            h[i, j] = integrate(4*pi*(r**2)*trial[i]*HS[j], (r, low, high))
    for i in range(x + 2):
        for j in range(x + 2):
            S[i, j] = integrate(4*pi*(r**2)*trial[i]*trial[j], (r, low, high))
    m = h - E*S
    eqn = m.det()
    roots = nsolve(eqn, float(E0))
    print(roots)
```
Here is the code using my bisection method:
```python
from symengine import *
import sympy
from sympy import Matrix
from sympy import nsolve

trial = Matrix()
r, E1, E = symbols('r, E1, E')
H11, H22, H12, H21 = symbols("H11, H22, H12, H21")
S11, S22, S12, S21 = symbols("S11, S22, S12, S21")
low = 0
high = oo
integrate = lambda *args: sympy.N(sympy.integrate(*args))

quadratic_expression = (H11 - E1*S11)*(H22 - E1*S22) - (H12 - E1*S12)*(H21 - E1*S21)
general_solution = sympify(sympy.solve(quadratic_expression, E1)[0])

def solve_quadratic(**kwargs):
    return general_solution.subs(kwargs)

def bisection(fun, a, b, tol):
    NMax = 100000
    f = Lambdify(E, fun)
    FA = f(a)
    for n in range(NMax):
        p = (a + b)/2
        FP = f(p)
        if FP == 0 or abs(b - a)/2 < tol:
            return p
        if FA*FP > 0:
            a = p
            FA = FP
        else:
            b = p
    print("Failed to converge to desired tolerance")

def H(fun):
    return -fun.diff(r, 2)/2 - fun.diff(r)/r - fun/r

psi0 = exp(-3*r/2)
trial = trial.row_insert(0, Matrix([psi0]))

I1 = integrate(4*pi*(r**2)*psi0*H(psi0), (r, low, high))
I2 = integrate(4*pi*(r**2)*psi0**2, (r, low, high))
E0 = I1/I2
print(E0)

for x in range(11):
    f1 = psi0
    f2 = r * (H(psi0) - E0*psi0)
    Hf1 = H(f1).simplify()
    Hf2 = H(f2).simplify()
    H11 = integrate(4*pi*(r**2)*f1*Hf1, (r, low, high))
    H12 = integrate(4*pi*(r**2)*f1*Hf2, (r, low, high))
    H21 = integrate(4*pi*(r**2)*f2*Hf1, (r, low, high))
    H22 = integrate(4*pi*(r**2)*f2*Hf2, (r, low, high))
    S11 = integrate(4*pi*(r**2)*f1**2, (r, low, high))
    S12 = integrate(4*pi*(r**2)*f1*f2, (r, low, high))
    S21 = S12
    S22 = integrate(4*pi*(r**2)*f2**2, (r, low, high))
    E0 = solve_quadratic(
        H11=H11, H22=H22, H12=H12, H21=H21,
        S11=S11, S22=S22, S12=S12, S21=S21,
    )
    print(E0)
    C = (H11 - E0*S11)/(H12 - E0*S12)
    psi0 = (f1 + C*f2).simplify()
    trial = trial.row_insert(x + 1, Matrix([[psi0]]))

    # Free ICI Part
    h = zeros(x + 2, x + 2)
    HS = zeros(x + 2, 1)
    S = zeros(x + 2, x + 2)
    for s in range(x + 2):
        HS[s] = H(trial[s]).simplify()
    for i in range(x + 2):
        for j in range(x + 2):
            h[i, j] = integrate(4*pi*(r**2)*trial[i]*HS[j], (r, low, high))
    for i in range(x + 2):
        for j in range(x + 2):
            S[i, j] = integrate(4*pi*(r**2)*trial[i]*trial[j], (r, low, high))
    m = h - E*S
    eqn = m.det()
    roots = bisection(eqn, E0 - 1, E0, 10**(-15))
    print(roots)
```
As I said, they both work as they are supposed to, but they do so very slowly.
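One hedged suggestion: most of the time per root-finding call is likely spent re-evaluating the symbolic determinant at every trial energy. Compiling it once with `lambdify` (as the bisection version already does with symengine's `Lambdify`) and handing the numeric function to a bracketed superlinear solver such as `scipy.optimize.brentq` typically cuts the iteration count from thousands of bisection steps to a handful. The cubic below is only a hypothetical stand-in for the question's `eqn`:

```python
import sympy
from scipy.optimize import brentq

# Hypothetical stand-in for the determinant expression `eqn`:
E = sympy.Symbol("E")
eqn = E**3 - 2*E - 5

f = sympy.lambdify(E, eqn, "math")   # compile once, evaluate cheaply
root = brentq(f, 2, 3)               # Brent's method: bracketed, superlinear
```

Brent's method keeps bisection's bracketing guarantee but converges superlinearly, so with the same `E0 - 1, E0` bracket it should need far fewer evaluations than a fixed-tolerance bisection. The slow symbolic integrations in the loop are a separate cost that a faster root finder cannot touch.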

Resolving concrete real value from sympy.solveset
I have the function `44*(e**-t)` and need to find `e` such that the closed integral (from 0 to 164) of that function equals 740. All values are real.
I know (from Wolfram Alpha) that the solution is ≈ `1.06125906031301`, and looking at a plot of the function I am quite certain that this is the only solution.
However, `pprint(slv)` looks like this:

```
{e │ e ∊ (-∞, 1) ∪ (1, ∞) ∧ -185⋅e**164⋅log(e) + 11⋅e**164 - 11 = 0} \ {0}
```
This contains some edge cases and seems to form a set, if I understand correctly.
I cannot see how to get the actual value.
```python
from sympy import pprint, solveset, S, Eq, integrate, symbols

t, e = symbols('t, e', real=True)
x = 44*(e**-t)
integral = integrate(x, (t, 0, 164))
slv = solveset(Eq(integral, 740), e, domain=S.Reals)
pprint(slv)
integral.subs(e, 1.06125906031301)
```
https://colab.research.google.com/drive/1hsZ6g9ZXqiK3wz6SbreJDG2LpIoUpn5
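One way to extract a concrete number is to skip `solveset` (which here returns a symbolic ConditionSet) and hand the closed-form integral straight to `nsolve`, which needs only a numeric starting point. The closed form below and the starting guess 1.06 are read off the question; this is a sketch, not the only route:

```python
import sympy as sp

t, e = sp.symbols("t e", positive=True)

# Closed form of the integral of 44*e**-t over t in [0, 164], set equal to 740:
expr = 44 * (1 - e**-164) / sp.log(e) - 740

# nsolve wants a numeric starting point; 1.06 comes from eyeballing the plot.
sol = sp.nsolve(expr, e, 1.06)
```

`nsolve` returns the single real root near the starting point, matching the Wolfram Alpha value quoted above; the ConditionSet's excluded points (e = 0, e = 1) are exactly where the closed form is undefined.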

How to simplify sympy vectors?
I am doing some symbolic vector calculations using sympy, but I can't simplify the arguments of the vector class in a proper way. Consider this code:

```python
from sympy.physics.mechanics import ReferenceFrame, dot, cross
from sympy import symbols, sin, cos, simplify

alpha, theta, l = symbols('alpha theta l')

def Rodrigues(v, k, angle):
    return cos(angle) * v + cross(k, v) * sin(angle) + k * dot(k, v) * (1 - cos(angle))

N = ReferenceFrame('N')
P0 = l * N.y
P2 = Rodrigues(
    Rodrigues(P0, N.z, alpha),
    Rodrigues(N.x, N.z, alpha),
    theta)
```
which returns the rotated vector. Trying `simplify(P2)`, I get the error:

```
AttributeError: 'function' object has no attribute 'x'
```
which I think is because `simplify` requires a sympy expression object. Trying `dir(P2)`, there is a `simplify` method, which returns:

```
<bound method Vector.simplify of - l*sin(alpha)*cos(theta)*N.x + l*cos(alpha)*cos(theta)*N.y + (l*sin(alpha)**2 + l*cos(alpha)**2)*sin(theta)*N.z>
```
which I have no idea what to do with! Trying `P2.args`, I get:

```
[(Matrix([
[       -l*sin(alpha)*cos(theta)],
[        l*cos(alpha)*cos(theta)],
[(l*sin(alpha)**2 + l*cos(alpha)**2)*sin(theta)]]), N)]
```
which is a 1-D list containing a tuple of a nested 3×1 sympy Matrix and the frame! I don't know whose choice it was to make the vector class so obscure, but now I can simplify the last element with `simplify(P2.args[0][0][2])` and change the function to:

```python
def Rodrigues(v, k, angle):
    tmpVec = cos(angle) * v + cross(k, v) * sin(angle) + k * dot(k, v) * (1 - cos(angle))
    tmpFrame = tmpVec.args[0][1]
    return (simplify(tmpVec.args[0][0][0]) * tmpFrame.x
            + simplify(tmpVec.args[0][0][1]) * tmpFrame.y
            + simplify(tmpVec.args[0][0][2]) * tmpFrame.z)
```

which seems like a very bad solution to me.
I was wondering if you could help me find a more Pythonic way to do this; for example, forcing sympy to simplify all expressions by default. Or maybe I'm using the `vector.simplify` method in a wrong way? Thanks for your support in advance.
P.S. Rodrigues rotation formula
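One thing to try (this appears to be how the Vector class is meant to be used): `simplify` is a *method* on the vector itself, so calling it directly, rather than passing the vector to `sympy.simplify`, simplifies each measure number and collapses the `sin**2 + cos**2` pattern seen in P2. A minimal sketch with a made-up vector exhibiting the same pattern:

```python
from sympy import symbols, sin, cos
from sympy.physics.mechanics import ReferenceFrame

alpha = symbols("alpha")
N = ReferenceFrame("N")

# A vector with the sin**2 + cos**2 coefficient pattern from the question:
v = (sin(alpha)**2 + cos(alpha)**2) * N.x

# Vector.simplify is a method returning a simplified Vector:
vs = v.simplify()
```

If this works for `P2` as well, the three-way `args[0][0][i]` surgery in the rewritten `Rodrigues` becomes `return tmpVec.simplify()`. I have not traced every sympy version's behaviour here, so treat it as a suggestion to test rather than a guarantee.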

Separation and allocation of terms of Taylor series expansion in MATLAB
How is it possible to "separate" the terms of the Taylor series expansion into single parts? I'm using this tool for variance analysis within managerial accounting. I need to assign the single terms to the influencing factors.
This code is about the simplest presentation of a perpetual annuity. I want to show how changes in the discount rate (b) and the cash flow (a) affect the value.
```matlab
syms a b a1 b1 test;
test = evalin(symengine, 'mtaylor(a/b, [a = a1, b = b1], 4)')
dtest = subs(test, [a, a1, b, b1], [40, 150, 0.01, 0.12]) - (a1/b1)
dtest2 = subs(dtest, [a1, b1], [150, 0.12])
```
```
test = a1/b1 + (a - a1)/b1 - (a1*(b - b1))/b1^2 - ((a - a1)*(b - b1))/b1^2 + (a1*(b - b1)^2)/b1^3 - (a1*(b - b1)^3)/b1^4 + ((a - a1)*(b - b1)^2)/b1^3

dtest2 = 545875/864
```
The following separation was made manually:

```
+(a - a1)/b1                   affected by delta cash flow
-(a1*(b - b1))/b1^2            affected by delta discount rate
-((a - a1)*(b - b1))/b1^2      affected by a mix of cash flow and discount rate
+(a1*(b - b1)^2)/b1^3          affected by delta discount rate
-(a1*(b - b1)^3)/b1^4          affected by delta discount rate
+((a - a1)*(b - b1)^2)/b1^3    affected by a mix of cash flow and discount rate
dtest2                         the whole deviation
```
To reduce the remainder of the series expansion I want to expand up to an order of, e.g., 200. That is why I want to separate and assign the single terms systematically rather than manually.
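The systematic classification above can be sketched in Python with sympy (the question is MATLAB, but the idea carries over). `Delta_a` and `Delta_b` are my own names for the deviations a − a1 and b − b1; expanding in them reproduces the mtaylor-style terms, though truncating in powers of Δb alone is not term-for-term identical to mtaylor's total-degree cut:

```python
import sympy as sp

a1, b1, da, db = sp.symbols("a1 b1 Delta_a Delta_b")

# Write a = a1 + Delta_a, b = b1 + Delta_b and expand a/b in Delta_b:
taylor = ((a1 + da) / (b1 + db)).series(db, 0, 4).removeO().expand()

# Add.make_args splits the expansion into its additive terms, which can then
# be assigned to the influencing factors programmatically instead of by hand:
terms = sp.Add.make_args(taylor)
delta_cashflow = [t for t in terms if t.has(da) and not t.has(db)]
delta_rate     = [t for t in terms if t.has(db) and not t.has(da)]
mixed          = [t for t in terms if t.has(da) and t.has(db)]
```

Raising the expansion order (the `4` above) then scales the classification automatically, and substituting numeric values for a1, b1, Δa, Δb into each bucket mirrors the dtest/dtest2 evaluation.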

How to fix negative chi-square values in nested model comparisons (SUDAAN/SAS)
We are conducting a series of logistic regressions comparing nested models (comparing 4-predictor models with 3-predictor models; each model contains one continuous variable). The variable added in the 4-predictor models is highly correlated with another predictor included in the model (~0.9). We are specifically interested in the change in chi-square values, which are negative for all of our outcomes. We are using SUDAAN proc rlogist and accounting for the complex survey design.
Has anyone encountered this problem and/or does anyone know what might be causing it?
We weren't sure if this issue is caused by the fact that we are finding a local minimum rather than the global minimum. If this is the case, is there a way to adjust the start values within proc rlogist in SUDAAN?
Also, we were curious whether there is a random process in the Taylor series estimation that accounts for the singleton cluster option (the missunit option in the proc rlogist statement).

Contour plot for the quadratic approximation of the loglikelihood, based on the Taylor series
My aim is to make a contour plot for the quadratic approximation of the loglikelihood, based on the Taylor series, for a Weibull distribution.
But, as a beginner in R (and stats, too), I can't move forward with writing a function to evaluate the log-likelihood given theta and the data.
Can someone point me in the right direction? Here's the Hessian computation that I have, as a hint:
```r
# observed information matrix
jhat <- matrix(NA, nrow = 2, ncol = 2)
jhat[1, 1] <- n/gammahat^2 + sum((y/betahat)^gammahat * (log(y/betahat))^2)
jhat[1, 2] <- jhat[2, 1] <- n/betahat - sum(y^gammahat/betahat^(gammahat + 1) *
                                            (gammahat*log(y/betahat) + 1))
jhat[2, 2] <- -n*gammahat/betahat^2 + gammahat*(gammahat + 1)/
              betahat^(gammahat + 2) * sum(y^gammahat)
solve(jhat)
```
Additionally, I have these estimates for gammahat and betahat, and this is the part I think I have to modify:

```r
gammahat <- uniroot(function(x) n/x + sum(log(y)) - n*sum(y^x*log(y))/sum(y^x),
                    c(1e-5, 15))$root
betahat <- mean(y^gammahat)^(1/gammahat)
weib.y.mle <- c(gammahat, betahat)
```
I have this function, but I think it doesn't use a Taylor expansion:

```r
log_lik_weibull <- function(data, param) {
  sum(dweibull(data, shape = param[1], scale = param[2], log = TRUE))
}
```
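The structure being asked for can be sketched end to end in Python (the question is in R, but the pieces map one-to-one): a log-likelihood function, the MLE, the observed information ĵ, and the quadratic Taylor approximation l(θ) ≈ l(θ̂) − ½(θ − θ̂)ᵀ ĵ (θ − θ̂), which is what the contour plot would display on a (gamma, beta) grid. The simulated data and the finite-difference Hessian are assumptions of this sketch; the question's analytic jhat could replace the latter:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
y = weibull_min.rvs(c=2.0, scale=1.5, size=200, random_state=rng)  # toy data

def loglik(theta):
    """Weibull log-likelihood, theta = (shape gamma, scale beta)."""
    gamma, beta = theta
    return np.sum(weibull_min.logpdf(y, c=gamma, scale=beta))

# MLE by direct maximization of the log-likelihood.
res = minimize(lambda th: -loglik(th), x0=[1.0, 1.0], method="Nelder-Mead")
ghat, bhat = res.x

def hessian(f, x, h=1e-4):
    """Finite-difference Hessian (stand-in for the analytic jhat)."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

jhat = -hessian(loglik, res.x)   # observed information at the MLE

def quad_approx(theta):
    """Second-order Taylor approximation of the log-likelihood at theta."""
    d = np.asarray(theta) - res.x
    return loglik(res.x) - 0.5 * d @ jhat @ d
```

Evaluating `quad_approx` (and `loglik` for comparison) on a meshgrid around (ghat, bhat) and feeding the values to a contour routine gives the plot; in R the same grid would go to `contour()`.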