Using the Runge-Kutta integration method in a system
h=0.005;
x = 0:h:40;
y = zeros(1,length(x));
y(1) = 0;
F_xy = ;
for i=1:(length(x)-1)
k_1 = F_xy(x(i),y(i));
k_2 = F_xy(x(i)+0.5*h,y(i)+0.5*h*k_1);
k_3 = F_xy((x(i)+0.5*h),(y(i)+0.5*h*k_2));
k_4 = F_xy((x(i)+h),(y(i)+k_3*h));
y(i+1) = y(i) + (1/6)*(k_1+2*k_2+2*k_3+k_4)*h;
end
I have the following code, which I think is right. I know there are parts missing in F_xy, which is what this follow-up question is about.
I have dx/dt = −x(2 − y) with t_0 = 0, x(t_0) = 1
and dy/dt = y(1 − 2x) with t_0 = 0, y(t_0) = 2.
My question is that I don't know how to get these equations into the code. All help appreciated.
1 answer

Is F_xy your derivative function? If so, simply write it as a helper function or function handle. For example,
F_xy = @(x,y) [-x*(2-y); y*(1-2*x)];
Also note that your k_1, k_2, k_3, k_4 and y(i) are then all two-dimensional. You need to resize your y (e.g. y = zeros(2, length(x)); y(:,1) = [1; 2];) and rewrite the indices in your iterating steps accordingly.
See also questions close to this topic

Cross correlation in Matlab to compute time lag for two time series
I have two time series, y1 and y2, and need to find the time lag between them using cross-correlation in Matlab. Then I need to plot the cross-correlation, align the two plots, and replot. I have written a bit of Matlab code to do this, but I think the cross-correlation plot is weird and I am unable to interpret it. I am not sure what I am doing wrong here; can you please help? Thanks.
Here is my code at this point:
% Generate time series
t = 1:1000;
y1 = 2*sin(2*pi*t/5);
y2 = 2*sin(2*pi*t/5 + 2); % y2 has an introduced phase lag of 2
% Plot the two time series
figure(1)
plot(t, y1, 'b', t, y2, 'r');
axis([0 50 -2 2]), grid;
% Compute the cross correlation using the function xcorr
maxlag = length(y1); %# set a max lag value here
[c, lags] = xcorr(y1, y2, 'coeff'); % compute cross correlation
figure(2);
plot(lags, c) % plot lag versus correlation
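The plot is probably "weird" because both signals are pure sinusoids: their cross-correlation is itself periodic, with equally tall peaks at the true lag plus every multiple of the 5-sample period, so the global argmax is ambiguous. Restricting the search to lags within half a period disambiguates it. A sketch of the idea in Python, with np.correlate playing the role of xcorr (the phase offset of 2 rad at angular frequency 2*pi/5 corresponds to about 1.6 samples, which rounds to 2 on the integer-lag grid):

```python
import numpy as np

t = np.arange(1000)
y1 = 2 * np.sin(2 * np.pi * t / 5)
y2 = 2 * np.sin(2 * np.pi * t / 5 + 2)  # phase offset of 2 rad

# Full cross-correlation; lags run from -(N-1) to N-1
c = np.correlate(y1 - y1.mean(), y2 - y2.mean(), mode='full')
lags = np.arange(-(len(y1) - 1), len(y1))

# Every lag shifted by a multiple of the 5-sample period is also a
# maximum, so only look within half a period of zero lag.
mask = np.abs(lags) <= 2
best = lags[mask][np.argmax(c[mask])]
```

The same half-period windowing applies in MATLAB: call xcorr with an explicit maxlag smaller than half the period, or mask the lags before taking the maximum.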

Training a Neural Network for Image Classification for a job task
I have no idea how to do this. I have been given a task for a job offer that requires me to create an image classification model for Baci chocolates. It is binary: one set of chocolates is damaged and the other is not, and the difference is not visible to the human eye. The resolution is 1920 x 2000, but my concern is that one set has only 27 images (damaged) while the other has 158 (fine). I thought of using a CNN or a residual neural network, but I don't think they will learn from such a small set. Is there any other way I can solve this? I repeat, this is a task for a job offer and I must use this small dataset. Any help will be appreciated because I am stuck.
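With 27 positives, a network trained from scratch will almost certainly overfit; the usual recipe for tiny, imbalanced image sets is a frozen pretrained backbone as a feature extractor, class weighting, and heavy label-preserving augmentation. As a minimal sketch of the augmentation part (in Python with NumPy, on hypothetical square crops rather than the full 1920x2000 frames, which is an assumption on my part):

```python
import numpy as np

def augment(img):
    """Return simple label-preserving variants of one image array (H, W)."""
    return [img,
            np.fliplr(img),       # horizontal flip
            np.flipud(img),       # vertical flip
            np.rot90(img, 1),     # 90-degree rotations (square crops only)
            np.rot90(img, 2),
            np.rot90(img, 3)]

# stand-ins for the 27 damaged-class crops -> 162 training samples
damaged = [np.random.rand(64, 64) for _ in range(27)]
augmented = [v for img in damaged for v in augment(img)]
```

Whether flips and rotations are truly label-preserving depends on how the damage looks, so that is an assumption to verify against the real data; cropping many overlapping patches per high-resolution image is another way to multiply the effective sample count.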

How can I use an optimization variable to determine mathematical function?
I have an optimvar, opening, that determines three other parameters through basic arithmetic.
opening = optimvar('opening', 'LowerBound', -30, 'UpperBound', 30);
opening_zero_geo_def = (e + temp_thickness)/2;
geo_def_des_load = opening + opening_zero_geo_def;
geo_def_rebound = geo_def_des_load - rebound;
geo_def_jounce = geo_def_des_load + m_2_m_clear;
po_k = (design_load/spring_rate) + geo_def_des_load;
po_kl = po_k/spring_length;
Two parameters, geo_def_jounce and geo_def_rebound, are used to form a domain.
h = .1;
travel = geo_def_rebound:h:geo_def_jounce;
The goal is to use po_kl and one other parameter to choose a function that the values will be manipulated through.
For example
if po_kl is close to .7
    rate = .5*travel^3
if po_kl is close to .3
    rate = .1*travel^3 + .4*travel^2
(pseudo code)
A polynomial fit is then applied to rate and the root mean squared error of the fit is calculated, and the RMSE is the minimization objective.
My question is how to do this. I've tried using an integer optimvar as an index, as suggested by Alan Weiss (https://www.mathworks.com/matlabcentral/answers/374059-how-can-i-set-an-optimization-variable-to-be-an-element-of-a-set-categorical), but this doesn't work, as MATLAB gives an error that optimvars cannot be used as indices.
I've also tried just passing everything with no consideration that it is an optimvar, and that does not work either because an optimvar cannot be used as a conditional statement.
I think it might be nonlinear, and then I'll have to use a solver-based approach, but I'm not sure.
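Since an optimization variable can't drive an if-statement or an index, one common workaround is to lift the discrete choice out of the solver entirely: run one continuous optimization per candidate rate function and keep the best result. The sketch below shows the pattern in Python with SciPy; the objective and bounds are hypothetical stand-ins, since the question's real parameters (e, rebound, spring_rate, etc.) aren't given.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rate_candidates():
    # the two candidate rate functions from the pseudo code
    return [lambda tr: 0.5 * tr**3,
            lambda tr: 0.1 * tr**3 + 0.4 * tr**2]

def objective(opening, rate_fn):
    # hypothetical travel domain built from `opening`, mimicking
    # travel = geo_def_rebound:h:geo_def_jounce
    travel = np.arange(opening - 1.0, opening + 1.0, 0.1)
    rate = rate_fn(travel)
    coeffs = np.polyfit(travel, rate, 2)  # quadratic fit
    resid = np.polyval(coeffs, travel) - rate
    return np.sqrt(np.mean(resid ** 2))   # RMSE is the objective

# one bounded continuous solve per discrete candidate; keep the best
best = min(
    ((minimize_scalar(objective, bounds=(-30, 30), method='bounded',
                      args=(fn,)), k)
     for k, fn in enumerate(rate_candidates())),
    key=lambda pair: pair[0].fun)
```

The enumeration costs one solve per candidate, which is cheap when the list of functions is short, and it sidesteps the "optimvar cannot be used as an index / conditional" restriction entirely.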

Solving PDE with implicit Euler in Python - incorrect output
I will try and explain exactly what's going on and my issue.
This is a bit mathy and SO doesn't support latex, so sadly I had to resort to images. I hope that's okay.
I don't know why it's inverted, sorry about that. At any rate, this is a linear system Ax = b where we know A and b, so we can find x, which is our approximation at the next time step. We continue doing this until time t_final.
This is the code
import numpy as np

tau = 2 * np.pi
tau2 = tau * tau
i = complex(0, 1)

def solution_f(t, x):
    return 0.5 * (np.exp(tau * i * x) * np.exp((2 - tau2) * i * t)
                  + np.exp(-tau * i * x) * np.exp((tau2 + 4) * i * t))

def solution_g(t, x):
    return 0.5 * (np.exp(tau * i * x) * np.exp((2 - tau2) * i * t)
                  - np.exp(-tau * i * x) * np.exp((tau2 + 4) * i * t))

for l in range(2, 12):
    N = 2 ** l    # number of grid points
    dx = 1.0 / N  # space between grid points
    dx2 = dx * dx
    dt = dx       # time step
    t_final = 1

    approximate_f = np.zeros((N, 1), dtype=complex)
    approximate_g = np.zeros((N, 1), dtype=complex)
    # Insert initial conditions
    for k in range(N):
        approximate_f[k, 0] = np.cos(tau * k * dx)
        approximate_g[k, 0] = i * np.sin(tau * k * dx)

    # Create coefficient matrix
    A = np.zeros((2 * N, 2 * N), dtype=complex)
    # First row is special
    A[0, 0] = 1 - 3 * i * dt
    A[0, N] = ((2 * dt / dx2) + dt) * i
    A[0, N + 1] = -(dt / dx2) * i
    A[0, -1] = -(dt / dx2) * i
    # Last row is special
    A[N - 1, N - 1] = 1 - (3 * dt) * i
    A[N - 1, N] = -(dt / dx2) * i
    A[N - 1, -2] = -(dt / dx2) * i
    A[N - 1, -1] = ((2 * dt / dx2) + dt) * i
    # Middle rows
    for k in range(1, N - 1):
        A[k, k] = 1 - (3 * dt) * i
        A[k, k + N - 1] = -(dt / dx2) * i
        A[k, k + N] = ((2 * dt / dx2) + dt) * i
        A[k, k + N + 1] = -(dt / dx2) * i
    # Bottom half
    A[N:, :N] = A[:N, N:]
    A[N:, N:] = A[:N, :N]
    Ainv = np.linalg.inv(A)

    # Advance through time
    time = 0
    while time < t_final:
        b = np.concatenate((approximate_f, approximate_g), axis=0)
        x = np.dot(Ainv, b)  # Solve Ax = b
        approximate_f = x[:N]
        approximate_g = x[N:]
        time += dt
    approximate_solution = np.concatenate((approximate_f, approximate_g), axis=0)

    # Calculate the actual solution
    actual_f = np.zeros((N, 1), dtype=complex)
    actual_g = np.zeros((N, 1), dtype=complex)
    for k in range(N):
        actual_f[k, 0] = solution_f(t_final, k * dx)
        actual_g[k, 0] = solution_g(t_final, k * dx)
    actual_solution = np.concatenate((actual_f, actual_g), axis=0)

    print(np.sqrt(dx) * np.linalg.norm(actual_solution - approximate_solution))
It doesn't work, at least not in the beginning; the convergence shouldn't start this slow. The scheme should be unconditionally stable and converge to the right answer.
What's going wrong here?
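Two things worth noting regardless of where the bug is. First, implicit (backward) Euler is only first-order accurate in time, so with dt = dx the error shrinks only linearly per refinement, which can look like slow convergence even when the scheme is correct. Second, since A never changes, it is cheaper and better conditioned to factor A once and reuse the factorization each step instead of forming Ainv. A minimal sketch of the factor-once pattern with SciPy, using a random stand-in matrix (not the scheme's actual A):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
N = 64
# stand-in for the time-stepping matrix: identity plus a small perturbation
A = np.eye(2 * N) + 0.01 * rng.standard_normal((2 * N, 2 * N))
b = rng.standard_normal(2 * N)

lu, piv = lu_factor(A)           # O(n^3), done once before the time loop
x_fact = lu_solve((lu, piv), b)  # O(n^2) per time step

x_inv = np.linalg.inv(A) @ b     # the inv-based alternative, for comparison
```

Inside the time loop, only the lu_solve call is repeated with each new right-hand side.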

How can I add custom gradients in the Haskell autodifferentiation library "ad"?
If I want to give a custom or known gradient for a function, how can I do that in the ad library? (I don't want to autodifferentiate through this function.) I am using the grad function in this library. If the library doesn't provide this feature, is there some way I can easily implement this functionality myself, perhaps by changing the definitions of leaf nodes in ad or by editing the dual numbers that presumably carry the numerical gradients? Here's a concrete example of what I mean:
Say I have some function I want to take the gradient of, say f(x, y) = x^2 + 3 * g(x, y)^2. Then say that g(x, y) is a function whose definition is complicated, but whose gradient I've already calculated analytically and is quite simple. Thus, when I take grad f and evaluate it at a point (x, y), I'd like to just plug in my custom gradient for g, instead of autodiffing through it: something like my_nice_grad_of_g (x, y).
I see other autodiff libraries do provide this feature; for example, Stan and TensorFlow both allow users to define gradients of a function.
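I can't speak to ad's exact extension points, but the mechanism being asked for is easy to see with a toy forward-mode dual number (sketched here in Python rather than Haskell): a primitive computes its value and then plugs in a hand-written derivative via the chain rule, instead of being traced through operation by operation.

```python
class Dual:
    """Minimal forward-mode dual number: value plus tangent."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def g_with_custom_grad(x, y):
    # pretend g(x, y) = x * y is "complicated"; we supply the analytic
    # partials dg/dx = y and dg/dy = x by hand instead of autodiffing
    val = x.val * y.val
    dot = y.val * x.dot + x.val * y.dot  # hand-written chain rule
    return Dual(val, dot)

def f(x, y):
    gxy = g_with_custom_grad(x, y)
    return x * x + 3 * gxy * gxy  # f = x^2 + 3 g^2

# df/dx at (2, 5): 2x + 6*g*(dg/dx) = 4 + 6*10*5 = 304
dfdx = f(Dual(2.0, 1.0), Dual(5.0, 0.0)).dot
```

In ad the analogous move would be to give g its own implementation over the library's AD types so that it emits the analytic derivative directly; whether ad exposes a dedicated hook for that, I'd defer to its documentation.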

triple integration for a box in R: numerical error
I'm trying to figure out how to reliably compute the mass of a solid with varying density which I am modeling using a regression function.
My box has the following dimensions:
w = 50
l = 150
h = 15
To help me gain intuition for using the numerical integrator in pracma, I imagined that the box had two density regions (density 50 for h < 10, density 100 for h >= 10):
l*w*(5*100 + 10*50) # => 7500000
Yet when I pass this to the numerical integrator I get:
library(pracma)
density = function(x, y, z) ifelse(z > 10, 100, 50)
integral3(density, xmin = 0, xmax = 50, ymin = 0, ymax = 150, zmin = 0, zmax = 15)
This returns 7530340. Can you help give me some intuition for what is happening here and how I might improve the computation of the integral?
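The error comes from the jump in the integrand: adaptive quadrature assumes the integrand is smooth, and the discontinuity at z = 10 inflates the error estimate and the result. The standard fix is to split the integral at the discontinuity so each piece is smooth. A sketch of the same box in Python with SciPy (same dimensions and densities as the question; the split applies identically to pracma's integral3 by calling it twice with zmax = 10 and zmin = 10):

```python
from scipy.integrate import tplquad

w, l = 50, 150

def density(z, y, x):  # tplquad passes arguments innermost-first: (z, y, x)
    return 100.0 if z > 10 else 50.0

# integrate each smooth region separately, splitting at z = 10
m1, _ = tplquad(density, 0, w, 0, l, 0, 10)   # z in [0, 10], density 50
m2, _ = tplquad(density, 0, w, 0, l, 10, 15)  # z in [10, 15], density 100
mass = m1 + m2                                 # exact answer is 7,500,000
```

Each sub-integral now has a constant integrand, so the quadrature is exact to machine precision and the sum matches the hand calculation.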
reducing the number of arguments in functions in C
I want to do a numerical integration of a function f using the qtrap function defined in "Numerical recipes in C".
double qtrap(double (*func)(double), double a, double b);
As shown, it is a 1d integration of a variable of type double.
But the function I want to integrate has an additional parameter a:
double f(double x, int a) { return a + x * x; }
Now I am looking for a way to integrate f for different values of a.
My idea up to now:
typedef double (*fctp1_t)(double);      // function pointer type for 1 arg
typedef double (*fctp2_t)(double, int); // function pointer type for 2 args

int a = 10;
fctp1_t f1d;
f1d = reduceArgument(f, a);
qtrap(f1d, 0, 1);
with the reduceArgument something like this:
fctp1_t reduceArgument(fctp2_t f2d, int ia) {
    return f2d(x, ia);
}
This code results in the error: 'x' undeclared.
Thanks for any suggestions.
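The reduceArgument idea is a closure, which plain C function pointers cannot express: there is nowhere to store a. In C the usual fixes are a file-scope variable that f reads, or changing qtrap's signature to take an extra void *params that it forwards to func (the approach GSL's gsl_function takes). For comparison, here is the same reduction sketched in a language with closures, using a toy composite-trapezoid stand-in for Numerical Recipes' qtrap:

```python
from functools import partial

def f(x, a):
    return a + x * x

def qtrap(func, a, b, n=1000):
    # simple composite trapezoid rule, standing in for NR's qtrap
    h = (b - a) / n
    s = 0.5 * (func(a) + func(b)) + sum(func(a + i * h) for i in range(1, n))
    return s * h

f1d = partial(f, a=10)         # fix the extra argument, leaving f1d(x)
result = qtrap(f1d, 0.0, 1.0)  # integral of 10 + x^2 over [0, 1] = 10 + 1/3
```

The partial object is exactly the pair (function pointer, captured argument) that the C version has to pass around explicitly as a params struct.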

Integrating with solve_ivp is very slow in comparison with C++
Using solve_ivp from scipy.integrate to integrate a stiff system takes about 3 min to complete. The integrator is:
sol = solve_ivp(DC_model,[t0,tf],y0,method='LSODA')
Solving the same system with C++ and the Boost library takes 2.12742 seconds. That is a huge difference. Is there a way to improve the time of my Python script?
Complete code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_ivp
import time

def DC_model(t, y):
    T, H, CTL, Den, IL2, FBeta, FGamma, Ml = y
    dT = 0.002 * T * np.log(1e12 / T) - (0.1136 * T * CTL * Ml / (50 + Ml)) * ((0.69 * FBeta + 1e4) / (FBeta + 1e4))
    dH = 1e4 - 0.005 * H + 10e2 * Den * (H * (1 - H / 1))
    dC = 1e4 - 0.01925 * CTL + 0.00004e2 * IL2 * (CTL * (1 - CTL / 1))
    dDen = 0.009625 * Den * CTL
    dI = 1e2 * H * Den - 1e7 * CTL * IL2 - 1e2 * IL2
    dFbeta = 5.57e6 * T - 6.93 * FBeta
    dFgamma = 1.02e4 * CTL - 0.102 * FGamma
    dMl = 1.44 + (2.89 * FGamma) / (FGamma + 3.38e5) - 0.0144 * Ml
    return np.array([dT, dH, dC, dDen, dI, dFbeta, dFgamma, dMl])

y0 = [6e4, 0, 0, 0, 0, 0, 0, 0]
t0 = 0.0
delay = 232
tf = 168 + delay
ef = 0.05

start = time.time()
sol = solve_ivp(DC_model, [t0, tf], y0, method='LSODA')
y0 = sol.y[:, -1] + [0, 0, 0, 1e6 * ef, 0, 0, 0, 0]
t0 = tf
tf = tf + 168
sol = solve_ivp(DC_model, [t0, tf], y0, method='LSODA')
y0 = sol.y[:, -1] + [0, 0, 0, 1e6 * ef, 0, 0, 0, 0]
t0 = tf
tf = tf + 168
sol = solve_ivp(DC_model, [t0, tf], y0, method='LSODA')
y0 = sol.y[:, -1] + [0, 0, 0, 1e6 * ef, 0, 0, 0, 0]
t0 = tf
tf = 1400
sol = solve_ivp(DC_model, [t0, tf], y0, method='LSODA')
end = time.time()
print('Solving took: ' + str((end - start) / 60) + ' min')
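A few things usually help LSODA from Python: supply an analytic Jacobian via the jac argument (otherwise LSODA approximates it by finite differences, costing many extra Python-level right-hand-side calls, which is where most of the time goes), and consider compiling the RHS with numba. A minimal sketch of passing jac on a small stiff stand-in system, not the DC model itself:

```python
import numpy as np
from scipy.integrate import solve_ivp

# small stiff linear test system: eigenvalues near -1000 and -2
def rhs(t, y):
    return np.array([-1000.0 * y[0] + y[1], y[0] - 2.0 * y[1]])

def jac(t, y):
    # analytic Jacobian; for a linear system it is just the matrix
    return np.array([[-1000.0, 1.0], [1.0, -2.0]])

sol_fd = solve_ivp(rhs, [0, 10], [1.0, 0.0], method='LSODA')
sol_jac = solve_ivp(rhs, [0, 10], [1.0, 0.0], method='LSODA', jac=jac)
```

For the 8-state DC model the analytic Jacobian is tedious but mechanical to write out; the payoff grows with system size and stiffness. The remaining gap to C++ is mostly Python call overhead per RHS evaluation, which numba's njit on DC_model typically closes to within a small factor.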

Shooting method singular matrix warnings
I'm trying to implement the shooting method in python and am encountering error messages for singular matrices. I want to solve
xy''' = y'y + x^2 - 1
with boundary conditions y(1)=1, y(2)=1, y'(2)=4. I transformed this into first order equations M(x,y) y' = f(x,y), with:
M = [1 0 0]    y = [ y  ]    f = [      y'      ]
    [0 1 0]        [ y' ]        [      y''     ]
    [0 0 x]        [ y'' ]       [y'y + x^2 - 1 ]
I've implemented the shooting method as described in Numerical Recipes 18.1. The general procedure is:
 Specify boundary conditions on both the left and right sides. In my case, this is y(1)=1, y(2)=1, y'(2)=4. Provide an initial guess for the (y, y', y'') vector; I chose (1, 1, 2). The first element is fixed, as y(1)=1, but the other two elements make up the freely specifiable vector V containing the other boundary conditions on the LHS.
 Integrate the equations from x=1 to x=2. Define a discrepancy vector F calculated as the difference between the prescribed boundary conditions (y(2)=1 and y'(2)=4) and the values calculated through the integration.
 Use Newton-Raphson to find a V that zeros F. Accomplish this by solving J·δV = −F and adding the correction back to V. J is the Jacobian, found numerically at each iteration.
If I assume that M is invertible, then this is relatively straightforward to solve and works as expected. However, I'm encountering two problems and would appreciate some assistance on how to solve them.
 If I set the x range from −1 to 1, I get a singular matrix warning. I think this is because I assumed M is invertible, but it isn't when x=0. The same problem arises frequently if M has any dependence on y. This looks like a differential algebraic equation. Are there any convenient ways to solve this in Python? It doesn't look like there are any built into scipy.
 If I change my boundary condition at x=2 to be y(2)=−1, I also get a singular matrix warning. I'm not sure why this is happening. Is this just an inability of the differential equations to satisfy the boundary conditions, or is something wrong with my code? If the former, is there any way to predict this ahead of time?
A previous answer suggested that a better guess of initial conditions might help in the second singular case, but I'm not sure how to improve my guess without a lot of brute force.
Here is my code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

### Solving the differential equation xy''' = y'y + x^2 - 1 ###
### If the x range doesn't include 0, then M is invertible, so the equations become:
def fn(y, x):
    return np.array([y[1], y[2], (y[0]*y[1] + x**2 - 1)/x])

### Boundary values of y(1)=1, y(2)=1, y'(2)=4
x = np.linspace(1., 2., 101)
y0 = np.array([1., 1., 2.])  ### Only the first of these is fixed. The others are variable
sol = np.array([1., 4.])     ### Specify y(2)=1, y'(2)=4

Delta = 1
max_itrs = 100
eps = 1e-3

def shoot(y0, Delta, fn, x, sol, eps, max_itrs):
    ### Throughout this function, the indexing only works with our specific boundary conditions
    ### but can be easily modified or generalized to accommodate other cases
    for i in range(max_itrs):
        F = odeint(fn, y0, x)[-1][:2] - sol  ### Define the F "discrepancy" vector at x = 2
        if np.linalg.norm(F) > eps:
            y0_yp_mod = y0.copy()
            y0_yp_mod[1] += Delta  ### Augment y'(1) by Delta
            F_yp_mod = odeint(fn, y0_yp_mod, x)[-1][:2] - sol
            y0_ypp_mod = y0.copy()
            y0_ypp_mod[2] += Delta  ### Augment y''(1) by Delta
            F_ypp_mod = odeint(fn, y0_ypp_mod, x)[-1][:2] - sol
            ### Compute the Jacobian
            j11 = (F_yp_mod[0] - F[0])/Delta
            j12 = (F_yp_mod[1] - F[1])/Delta
            j21 = (F_ypp_mod[0] - F[0])/Delta
            j22 = (F_ypp_mod[1] - F[1])/Delta
            jac = np.array([[j11, j12], [j21, j22]])
            ### Solve for delta V:
            d_V = np.linalg.solve(jac, -F)
            ### Augment our initial conditions by delta V
            y0 = np.array([y0[0], y0[1] + d_V[0], y0[2] + d_V[1]])
        else:
            return y0
    print("Failed to achieve convergence after " + str(max_itrs) + " iterations.")
    return y0

y0 = shoot(y0, Delta, fn, x, sol, eps, max_itrs)
Y = odeint(fn, y0, x)
plt.plot(x, Y[:, 0], c="r", label="y")
plt.plot(x, Y[:, 1], c="g", label="y'")
plt.plot(x, Y[:, 2], c="b", label="y''")
plt.scatter([x[0]], y0[0], marker="x", c="r")
plt.scatter([x[-1]], sol[0], marker="x", c="r")
plt.scatter([x[-1]], sol[1], marker="x", c="g")
plt.legend()
Here is the output, which behaves as expected
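For the singular cases, an alternative worth trying is scipy's collocation solver solve_bvp, which is often more robust than shooting when the Newton iteration struggles with a poor initial guess. A sketch for this same BVP, using the boundary values as stated in the question (whether it helps in the DAE case at x = 0 is a separate question; collocation still needs the equations in explicit first-order form):

```python
import numpy as np
from scipy.integrate import solve_bvp

def fun(x, y):
    # y rows are y, y', y''; dividing by x is valid since [1, 2] excludes 0
    return np.vstack([y[1], y[2], (y[0] * y[1] + x**2 - 1) / x])

def bc(ya, yb):
    # y(1) = 1, y(2) = 1, y'(2) = 4
    return np.array([ya[0] - 1.0, yb[0] - 1.0, yb[1] - 4.0])

x = np.linspace(1.0, 2.0, 11)
y_init = np.ones((3, x.size))  # crude constant initial guess
res = solve_bvp(fun, bc, x, y_init)
```

Unlike shooting, solve_bvp refines its own mesh and reports failure explicitly via res.status, which makes the "no solution vs. bad guess" distinction in the second question easier to probe: sweep the troublesome boundary value and watch where convergence is lost.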