Several questions regarding the Levenberg-Marquardt implementation in SciPy
I came across a few oddities while trying to optimize a function with the Levenberg-Marquardt algorithm in SciPy. I was wondering if anyone had an explanation for them.
A. I have an objective function that accepts two parameters and returns 2 residuals. As far as I understand, LM in scipy's root (which works on sum(residual^2)) and in least_squares should be the same. However, least_squares returns an optimized value while root fails. Are the implementations behind the two functions different?
B. When I optimize the function using least_squares, the second parameter does not change from its initial value. Why does this happen?
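For concreteness, a minimal sketch of the kind of comparison I mean (a hypothetical residual function, not my actual one):

```python
import numpy as np
from scipy.optimize import least_squares, root

# Hypothetical residual function: two parameters in, two residuals out.
def residuals(p):
    x, y = p
    return np.array([x + 2.0 * y - 2.0, x ** 2 + 4.0 * y ** 2 - 4.0])

x0 = [1.0, 1.0]

sol_ls = least_squares(residuals, x0, method='lm')  # LM via least_squares
sol_root = root(residuals, x0, method='lm')         # LM via root

print(sol_ls.x, sol_ls.success)
print(sol_root.x, sol_root.success)
```

With my real function, the first call converges while the second reports failure, even though I expected both to drive sum(residual^2) down the same way.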
Thank you for your help!
See also questions close to this topic

Trying to engineer new columns using a for loop to create column names and then populate new columns with data from the dataset
I am working with the olympic medal dataset which can be found here. The data shows aggregated medals won by each country for both summer and winter olympics.
I am trying to create new columns showing the number of gold, silver and bronze medals won per games appearance, for summer games, winter games, and combined. My plan was to loop over lists of medals and seasons, create a new column in the dataframe for each combination, and divide the original medal columns (e.g. Summer gold, Summer silver, ...) by the games-attended total for that season (e.g. Summer gold / Summer games attended).
However, when I tried the code below I got KeyError: ('%s games attended', 'Summer'). Any suggestion for how to build the feature I'm trying to create would be appreciated.
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

data = pd.read_csv('/Users/xx/Downloads/olympics.txt', header=1)
data.columns = ['Country', 'Summer games attended', 'Summer gold', 'Summer silver',
                'Summer bronze', 'Summer total', 'Winiter games attended', 'Winter gold',
                'Winter silver', 'Winter bronze', 'Winter total', 'Combined games attended',
                'Combined gold', 'Combined silver', 'Combined bronze', 'Combined total']
data['Country'] = data['Country'].str.split("\[", expand=True)[0]
data.drop(146, axis=0, inplace=True)

medals = ['gold', 'silver', 'bronze']
seasons = ['Summer', 'Winter', 'Combined']
for col in data.columns:
    for medal in medals:
        if medal in col:
            for season in seasons:
                if season in col:
                    n = season + ' ' + medal + ' per games'
                    data[n] = [i for i in col / 2]
```
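Edit: on a tiny made-up frame, this is the computation I'm after (the KeyError makes me suspect a tuple like ('%s games attended', 'Summer') was used as an index somewhere instead of a formatted string; column names below are assumed to match my renaming above):

```python
import pandas as pd

# Toy frame with the same column layout (made-up numbers).
data = pd.DataFrame({
    'Country': ['A', 'B'],
    'Summer games attended': [10, 5],
    'Summer gold': [20, 5],
    'Winter games attended': [4, 2],
    'Winter gold': [8, 1],
})

medals = ['gold']
seasons = ['Summer', 'Winter']
for season in seasons:
    for medal in medals:
        col = '%s %s' % (season, medal)          # e.g. 'Summer gold'
        games = '%s games attended' % season     # formatted string, not a tuple
        data['%s %s per games' % (season, medal)] = data[col] / data[games]

print(data['Summer gold per games'].tolist())  # [2.0, 1.0]
```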

How to iterate over the filenames being read, and how to combine the results into a single Excel file
I am new to Python. I have a task where I have to compute some statistics for around 350 Excel files (1.xlsx-350.xlsx) contained in a single folder (Videos). I wrote the following code. It works fine, but it is time consuming: I have to change the file name manually on every iteration. Also, at the end of the process I have to combine the processed data from all 350 Excel files into a single Excel file, but my code overwrites the output on every iteration. Please help me resolve this problem.
```python
data12 = pd.read_excel(r'C:\Users\Videos\1.xlsx')
gxt = data12.iloc[:, 0]
gyan = data12.iloc[:, 1]
int = gyan.iloc[98:197]
comp = gyan.iloc[197:252]
seg = gyan.iloc[252:319]
A = max(int)
B = max(comp)
C = min(comp)
D = max(seg)
s = pd.Series([A, B, C, D])
frame_data = [gyan, comp, seg, stat]
result = pd.concat(frame_data)
result.to_excel("output.xlsx", sheet_name='modify_data', index=False)
```
thank you for helping.
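Edit: to show what I mean by iterating, here is a self-contained sketch of the pattern I'm trying to reach. It builds a throwaway folder of numbered CSV files standing in for my 350 .xlsx files (the real version would use pd.read_excel and my Videos path), then collects one summary row per file into a single table written once at the end:

```python
import glob
import os
import tempfile

import pandas as pd

# Throwaway folder with a few numbered files so the sketch runs anywhere;
# with the real data this would be C:\Users\Videos\*.xlsx and pd.read_excel.
folder = tempfile.mkdtemp()
for i in range(1, 4):
    pd.DataFrame({'gxt': range(400), 'gyan': range(400)}).to_csv(
        os.path.join(folder, '%d.csv' % i), index=False)

rows = []
for path in sorted(glob.glob(os.path.join(folder, '*.csv')),
                   key=lambda p: int(os.path.splitext(os.path.basename(p))[0])):
    df = pd.read_csv(path)
    gyan = df.iloc[:, 1]
    rows.append({
        'file': os.path.basename(path),
        'A': gyan.iloc[98:197].max(),   # "int" section
        'B': gyan.iloc[197:252].max(),  # "comp" section
        'C': gyan.iloc[197:252].min(),
        'D': gyan.iloc[252:319].max(),  # "seg" section
    })

# One combined table instead of overwriting output.xlsx on each iteration.
result = pd.DataFrame(rows)
result.to_excel(os.path.join(folder, 'output.xlsx'),
                sheet_name='modify_data', index=False)
```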

Tkinter class and methods
I have this Tkinter code, working fine, but I want to put it into a class with methods for each process. I am very new to Python; how can I do this?
You don't have to do it all; just the class and two methods will be fine, and I can learn to replicate the rest.
```python
root = tk.Tk()
canvas1 = tk.Canvas(root, width=400, height=400, relief='raised')
canvas1.pack()

label1 = tk.Label(root, text='EDA')
label1.config(font=('helvetica', 12))
canvas1.create_window(200, 25, window=label1)

label2 = tk.Label(root, text='Number of Clusters:')
label2.config(font=('helvetica', 8))
canvas1.create_window(200, 120, window=label2)

entry1 = tk.Entry(root)
canvas1.create_window(200, 140, window=entry1)

browseButtonExcel = tk.Button(text=" Import Excel File (CSV) ", command=App.getExcel,
                              bg='green', fg='white', font=('helvetica', 10, 'bold'))
canvas1.create_window(200, 70, window=browseButtonExcel)

processButton = tk.Button(text=' kMeans Clustering', command=cluster,
                          bg='brown', fg='white', font=('helvetica', 10, 'bold'))
canvas1.create_window(200, 170, window=processButton)

root.mainloop()
```
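A sketch of one way to wrap this in a class, with two of the widget-building steps pulled out as methods; getExcel is left as a stub since its body isn't shown above, and the remaining widgets can be moved over the same way:

```python
import tkinter as tk

class App:
    """The canvas setup from above, split into one method per step."""

    def __init__(self, root):
        self.root = root
        self.canvas = tk.Canvas(root, width=400, height=400, relief='raised')
        self.canvas.pack()
        self.create_labels()
        self.create_buttons()

    def create_labels(self):
        label1 = tk.Label(self.root, text='EDA', font=('helvetica', 12))
        self.canvas.create_window(200, 25, window=label1)
        self.entry1 = tk.Entry(self.root)
        self.canvas.create_window(200, 140, window=self.entry1)

    def create_buttons(self):
        browse = tk.Button(self.root, text=' Import Excel File (CSV) ',
                           command=self.getExcel, bg='green', fg='white',
                           font=('helvetica', 10, 'bold'))
        self.canvas.create_window(200, 70, window=browse)

    def getExcel(self):
        pass  # stub: the file-import logic goes here
```

Then start it with root = tk.Tk(); App(root); root.mainloop().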

Python, MIP optimization
I am using the MIP package in Python to solve an optimization problem, and I am trying to get continuous values between 0 and 1. However, the returned values of x are binary {0, 1}. Here is the code:
```python
m = Model(sense=MAXIMIZE, solver_name=GRB)
x = [m.addVars('x', lb=0, ub=1, var_type=CONTINUOUS) for i in I]
m.objective = maximize(xsum(w[i] * x[i] for i in I))
m += xsum(t[i] * x[i] for i in I) <= 10000
for i in I:
    m += Tc[i] + t[i] <= T
m.optimize(relax=True)
selected = [i for i in I if x[i].x >= 0.5]
```
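As a sanity check on the model itself (with made-up w and t standing in for the real data), scipy's linprog solves the same relaxed knapsack-style LP and happily returns fractional values. So if x comes back strictly binary, the variables were probably not declared continuous at the solver level; worth double-checking the variable-creation call (python-mip's method is add_var, if I remember the API correctly) actually received var_type=CONTINUOUS:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up stand-ins for w[i] and t[i].
w = np.array([10.0, 6.0, 4.0])
t = np.array([6000.0, 3000.0, 3000.0])

# maximize w @ x  subject to  t @ x <= 10000,  0 <= x <= 1
res = linprog(c=-w, A_ub=t.reshape(1, -1), b_ub=[10000.0],
              bounds=[(0.0, 1.0)] * len(w))
print(res.x)  # the last variable comes out fractional (1/3)
```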

Is there any benefit in using <link rel="preload"> or <link rel="preconnect"> for resources on the same domain as the original page?
I've been reading up on the performance benefits of using <link rel="preload"> and <link rel="preconnect"> to help fetch critical resources, but one thing I can't quite figure out is whether these techniques still offer a benefit when the resource is located on the same domain as the page that's requesting it.
E.g. I'm optimizing a page at abc.com. Should I add

<link rel="preconnect" href="abc.com">

or

<link rel="preload" as="script" href="abc.com/main.js">

to my markup? Or will it have no effect, since the current page is on the same domain as the link href values?
I have the input below and expect the output below, without using nested loops (for time complexity). Looking for a good data structure that fits this scenario.
Input: (Key-Value pairs) (0-foo, 2-foo, 5-foo, 9-bar, 8-bar)
Output: (-9, -8, -7, -6, -4, -3)
Explanation: I want to subtract all the bar values from all the foo values, as below, without using nested loops or declaring extra variables:
(0 - 9, 0 - 8, 2 - 9, 2 - 8, 5 - 9, 5 - 8)
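A sketch of the closest I've gotten: split once into foo and bar keys, then take the differences over the cartesian product, so there is no hand-written nested loop (itertools.product does the pairing). It does build two intermediate lists, so it only partially meets the "no extra variables" constraint:

```python
from itertools import product

pairs = [(0, 'foo'), (2, 'foo'), (5, 'foo'), (9, 'bar'), (8, 'bar')]

# Split once, then walk every foo/bar combination in a single pass.
foos = [k for k, v in pairs if v == 'foo']
bars = [k for k, v in pairs if v == 'bar']

diffs = [f - b for f, b in product(foos, bars)]
print(diffs)  # [-9, -8, -7, -6, -4, -3]
```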

Clarification regarding scipy.interpolate.interpn
I have code in which I have to interpolate a 2D matrix, say of shape (6, 100), along the second axis, using a 100-element array that gives the sample points. What I am doing currently is interpolating 100 separate 6x1 slices and then finding the values at the midpoints.
The function I am currently using is interp1d, which does the job but isn't very time efficient. Can I use the interpn function instead? If yes, how can I do that?
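Edit: one thing I've since noticed is that interp1d itself can take the whole (6, 100) array at once via its axis argument, which already avoids the separate per-slice calls; a sketch with synthetic values:

```python
import numpy as np
from scipy.interpolate import interp1d

# 2D array of shape (6, 100): interpolate along the second axis for all
# six rows at once instead of building one interpolant per slice.
x = np.linspace(0.0, 1.0, 100)
values = np.sin(np.outer(np.arange(1, 7), x))  # shape (6, 100)

f = interp1d(x, values, axis=1, kind='linear')

midpoints = (x[:-1] + x[1:]) / 2.0
out = f(midpoints)
print(out.shape)  # (6, 99)
```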
How to approximate points more correctly
I'm trying to approximate my data, but I need a smoother line. How can I implement that?
```python
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
import numpy as np

m_x = [0.22, 0.29, 0.38, 0.52, 0.55, 0.67, 0.68, 0.74, 0.83, 1.05, 1.06, 1.19,
       1.26, 1.32, 1.37, 1.38, 1.46, 1.51, 1.61, 1.62, 1.66, 1.87, 1.93, 2.01,
       2.09, 2.24, 2.26, 2.3, 2.33, 2.41, 2.44, 2.51, 2.53, 2.58, 2.64, 2.65,
       2.76, 3.01, 3.17, 3.21, 3.24, 3.3, 3.42, 3.51, 3.67, 3.72, 3.74, 3.83,
       3.84, 3.86, 3.95, 4.01, 4.02, 4.13, 4.28, 4.36, 4.4]
m_y = [3.96, 4.21, 2.48, 4.77, 4.13, 4.74, 5.06, 4.73, 4.59, 4.79, 5.53, 6.14,
       5.71, 5.96, 5.31, 5.38, 5.41, 4.79, 5.33, 5.86, 5.03, 5.35, 5.29, 7.41,
       5.56, 5.48, 5.77, 5.52, 5.68, 5.76, 5.99, 5.61, 5.78, 5.79, 5.65, 5.57,
       6.1, 5.87, 5.89, 5.75, 5.89, 6.1, 5.81, 6.05, 8.31, 5.84, 6.36, 5.21,
       5.81, 7.88, 6.63, 6.39, 5.99, 5.86, 5.93, 6.29, 6.07]
x = np.array(m_x)
y = np.array(m_y)

plt.plot(x, y, 'ro', ms=5)
plt.show()

spl = interp1d(x, y, fill_value='extrapolate')
xs = np.linspace(-3, 3, 1000)
plt.plot(xs, spl(xs), 'g', lw=3)
plt.axis([0, 5, 2, 10])
plt.show()
```
(Images omitted: the raw data, the curve I need, and what the program currently makes.)
UPD: Among other things, I need access to all the values of the resulting curve, and I need to extrapolate it to the left of the y-axis and to the right up to the edge of the picture.
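UPD 2: the closest I've found to what I want is a smoothing spline; a sketch on synthetic noisy data (s > 0 trades exactness for smoothness, while interp1d passes through every point, which is why my line is jagged). UnivariateSpline also evaluates anywhere, including left of the y-axis:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.2, 4.4, 57))
y = 4.0 + 0.5 * x + rng.normal(0.0, 0.3, x.size)  # noisy upward trend

# Larger s => smoother curve; s=0 would interpolate every point exactly.
spl = UnivariateSpline(x, y, k=3, s=x.size * 0.3 ** 2)

xs = np.linspace(-1.0, 5.0, 500)  # extends past both ends of the data
ys = spl(xs)                      # the spline extrapolates beyond the data
print(ys.shape)
```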

How to optimize the parameters using scipy.optimize.minimize?
I have a differential equation (it produces a logistic curve) and I am trying to find the parameter values that best fit my real data. Here is my differential equation:
```python
def diff(Y, t, r, p, K, alpha):
    return r * (Y ** p) * (1 - (Y / K) ** alpha)
```
I rewrote the same equation without t (time) to fit it into scipy minimize:
```python
def fun(params):
    Y, r, p, K, alpha = params
    return r * (Y ** p) * (1 - (Y / K) ** alpha)
```
and then I tried to use scipy to find the best values for the parameters:
```python
from scipy.optimize import minimize

# r is bounded between (0.01, 2), p (0, 1.1), K (250000, 600000) and alpha (0, 1)
bnds = ((1., np.inf), (0.01, 2.0), (0, 1.1), (250000, 600000), (0, 1))
initial_guess = [1, 0.01, 0.5, 400000, 0.]  # Y, r, p, K and alpha
result = minimize(fun, initial_guess, method='TNC', bounds=bnds)
Y, r, p, K, alpha = result.x
```
These are the results from scipy minimize:
```python
Y, r, p, K, alpha
# (1.0, 0.010000000000000009, 0.5, 400000.0, 0.0) output
```
and then I tried to plug the fitted parameters back into my equation, but as you can see in the image below, I didn't get a good result:
```python
t = np.linspace(0, len(X), len(X))  # t = np.linspace(0, X, X)
y0 = df.Value.iloc[0]  # Your initial condition.
params = (2, 0.5, 400000, 1.0)  # r, p, K, alpha
sol = odeint(diff, y0, t, args=params)
```
Here is another solution I tried:
```python
test = df['Value']
data = np.nan_to_num(np.array(test).astype(float))
x_val = list(range(len(data)))

def fit(param):
    Y, r, p, K, alpha = param
    yy = r * (Y ** p) * (1 - (Y / K) ** alpha)
    return np.square(data - yy).sum()

param0 = [1, 0.5, 0.5, 400000, 0.5]
res = minimize(fit, param0, bounds=bnds)
```
However, this returned the same values as the initial guess.
```python
res.x
# array([1.e+00, 5.e-01, 5.e-01, 4.e+05, 5.e-01]) output
```
How can I get the red line (predictions) close to the blue line (my real data)?
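For completeness, here is the direction I'm now experimenting with: instead of evaluating the right-hand side once, solve the ODE inside the objective and compare the whole trajectory with the data (synthetic data below, generated from known parameters, since I can't attach mine):

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

def diff(Y, t, r, p, K, alpha):
    return r * (Y ** p) * (1 - (Y / K) ** alpha)

# Synthetic "observed" data generated from known parameters (r, p, K, alpha).
true_params = (0.5, 1.0, 400000.0, 1.0)
t = np.linspace(0.0, 50.0, 60)
y0 = 1000.0
data = odeint(diff, y0, t, args=true_params).ravel()

def sse(params):
    # Sum of squared errors between the integrated curve and the data.
    traj = odeint(diff, y0, t, args=tuple(params)).ravel()
    return np.square(data - traj).sum()

bnds = ((0.01, 2.0), (0.0, 1.1), (250000.0, 600000.0), (0.0, 1.0))
x0 = [0.3, 0.9, 350000.0, 0.8]
res = minimize(sse, x0, bounds=bnds, method='L-BFGS-B')
print(res.x)
```

Note that Y is no longer an optimization variable here; it is the state of the ODE, with only y0 supplied as data.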

In an Observable notebook, unable to use a function from an npm library, perhaps because unable to import a function from the library's dependency
I want to reproduce the example in this npm library in an Observable notebook. I run the following in a cell and a block:
```javascript
fit_data = {
  let data = {
    x: [0, 1, 2],
    y: [1, 1, 1]
  }
  return data
}
```
```javascript
{
  const LM = require('ml-levenberg-marquardt@2.1.1/lib/index.js').catch(() => window["_interopDefault"]);

  function sinFunction([a, b]) {
    return (t) => a * Math.sin(b * t);
  }

  const options = {
    damping: 1.5,
    gradientDifference: 10e-2,
    maxIterations: 100,
    errorTolerance: 10e-3
  };

  let fittedParams = levenbergMarquardt(fit_data, sinFunction, options);
  return fittedParams
}
```
and I get the error message

TypeError: isArray is not a function

which I suspect is this function failing to be imported from the library's dependency. I am importing the library by following this guide.

How to improve Levenberg-Marquardt's method for polynomial curve fitting?
Some weeks ago I started coding the Levenberg-Marquardt algorithm from scratch in Matlab. I'm interested in a polynomial fit of the data, but I haven't been able to achieve the level of accuracy I would like. I'm using a fifth-order polynomial; after trying other polynomials, it seemed to be the best option. The algorithm always converges to the same minimum no matter what improvements I try to implement. So far, I have unsuccessfully added the following features:
- Geodesic acceleration term as a second-order correction
- Delayed gratification for updating the damping parameter
- Gain factor to get closer to the Gauss-Newton direction or the steepest-descent direction depending on the iteration
- Central differences and forward differences for the finite-difference method
I don't have experience with nonlinear least squares, so I don't know whether there is a way to reduce the residual further or whether there is no more room for improvement with this method. I attach below an image of the behavior of the polynomial over the last iterations. If I run the code for more iterations, the curve ends up not changing from iteration to iteration. As can be observed, there is a good fit from time = 0 to time = 12, but I'm not able to fix the behavior of the function from time = 12 to time = 20. Any help will be greatly appreciated.

Error while calling the cost function inside the lsqnonlin function
I want to do camera autocalibration using the Mendonça-Cipolla autocalibration method. When I implemented the lsqnonlin nonlinear least-squares optimization using Levenberg-Marquardt, I get the error:

Not enough input arguments.
Error in Lab_1>cost_func (line 63)
Aj = [params(1) 0 params(2) ;
Error in Lab_1 (line 42)
K_line = lsqnonlin(cost_func,[A(1,1), A(1,3), A(2,2), A(2,3)],...
How can I fix it, please? The .mat file is here.
```matlab
%% Clear and close all
clearvars
close all
clc

%% Data type long
format long

%% Read data
load('data.mat')

%% Nonlinear least-squares optimization
% A is the initial guess.
% A(1,1) = a = ku*f.
% A(2,2) = B = kv*f.
% A(1,3) = u0.
% A(3,3) = v0.
% f is the focal length.
% (ku,kv) magnification factors: the number of pixels per unit distance in
% u and v directions.
% C(u0,v0) is the principal point.
% Am = [a skew u0 ;   (skew=0 here.)
%       0 B    v0 ;
%       0 0    1  ];
%=========================================================================
options = optimoptions('lsqnonlin','Display','iter');
options.Algorithm = 'levenberg-marquardt';
K_line = lsqnonlin(cost_func,[A(1,1), A(1,3), A(2,2), A(2,3)],...
    [],[],options);

%% Reshape intrinsics
% K = [a skew u0 ;   (skew=0 here.)
%      0 B    v0 ;
%      0 0    1  ];
K = [K_line(1) 0         K_line(2) ;
     0         K_line(3) K_line(4) ;
     0         0         1         ];

%% Display the results
disp('Intrinsic parameters: ');
disp(K);

%========================================================================
% Cost Function
%===============
function cost = cost_func(params)
Aj = [params(1) 0         params(2) ;
      0         params(3) params(4) ;
      0         0         1         ];
Ai = [params(1) 0         params(2) ;
      0         params(3) params(4) ;
      0         0         1         ];

%% Initialization
cost = 0.0;
indx = 1;  % Choose which weight function, 1 or 2.
global Fs;

for i = 1:size(Fs,3)
    for j = i+1:size(Fs,4)
        % SVD decomposition
        [~,S,~] = svd(Aj' * Fs(:,:,i,j) * Ai);
        if (indx == 1)  % First weight function
            cost = cost + (S(1,1) - S(2,2)) / S(2,2);
        end
        if (indx == 2)  % Second weight function
            cost = cost + (S(1,1) - S(2,2)) / (S(1,1) + S(2,2));
        end
    end
end
end
%=========================================================================
```