Python polynomial with degree and coefficients from user
I am writing a program in Python and I need to create a polynomial with a degree (n) and coefficients (a, b, c) supplied by the user. I have created it, but I don't know how to use it like a function with an argument, for example polynomial(x) = some value. How can I solve this?
1 answer

You can specify a polynomial using the numpy package: https://docs.scipy.org/doc/numpy/reference/routines.polynomials.html.
As an alternative you can use sympy's poly function: http://docs.sympy.org/latest/modules/polys/reference.html to get a polynomial in symbolic form. To evaluate it for a given x, see http://docs.sympy.org/latest/modules/evalf.html
numpy.roots will find all the roots of a polynomial, given the coefficients: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.roots.html
(It might be useful to know, if you don't happen to have them from input, that you can obtain the coefficients from the sympy poly using the all_coeffs function: http://docs.sympy.org/0.7.1/modules/polys/reference.html#sympy.polys.polytools.Poly.all_coeffs)
If you want to implement it from first principles, then I suggest looking at the reference in https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.roots.html.
NB: A degree-zero polynomial (although I'm not sure if you meant that) is a constant and has no root unless it equals zero.
Some hints when writing your program:
- Prompt for and obtain user input for the polynomial order, and then for each coefficient. Store the coefficients in a list. Also prompt for the x value at which to evaluate the polynomial.
- If using sympy, construct your polynomial object from the list.
- If using sympy, evaluate your polynomial at the x value using evalf. If using numpy, call a function that takes the list and the x value and evaluates the polynomial with the numpy library.
- Then call numpy.roots with your list of coefficients.
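The hints above can be sketched in a few lines with numpy; the prompts, names, and demo coefficients here are illustrative, not from the question:

```python
import numpy as np

# Illustrative sketch of the numpy route (not the asker's code). Coefficients
# are ordered highest degree first, which is what np.polyval and np.roots expect.

def read_coefficients():
    # Prompt for the degree and one coefficient per power (first hint).
    n = int(input("Degree of the polynomial: "))
    return [float(input("Coefficient of x^%d: " % i)) for i in range(n, -1, -1)]

def polynomial(coeffs, x):
    # Evaluate the polynomial at x; np.polyval uses Horner's method internally.
    return np.polyval(coeffs, x)

# Demo with fixed coefficients instead of input(): p(x) = x^2 - 3x + 2
coeffs = [1.0, -3.0, 2.0]
print(polynomial(coeffs, 0.5))   # 0.75
print(np.roots(coeffs))          # roots at 2 and 1
```

Wrapping the evaluation in a function like `polynomial` gives exactly the "use it like a function with an argument" behaviour the question asks for.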
See also questions close to this topic

Pandas DataFrame groupby describe ~8x slower than computing separately
The following code summarizes numeric data using two different approaches.
The first approach uses DataFrame().describe() and passes some specific extra percentiles.
The second approach separately computes the summary stats (mean, std, N), stacks them, computes the same quantiles, then appends the two and sorts by the index, so the result is essentially the same as the first approach.
There are some minor naming differences that we can clean up afterwards, and since the summarized data is small, that is very fast.
It turns out that using the describe function was about 8x slower in this example.
I am looking for reasons why, and perhaps for suggestions on other approaches that may speed this up even more. The filters, groups, and values are all passed in from a UI to a tornado service, so speed is important: the user is waiting for results, and the data can be even larger than this example.
import pandas as pd
import numpy as np
from datetime import datetime

def make_data(n):
    ts = datetime.now().timestamp() + abs(np.random.normal(60, 30, n)).cumsum()
    df = pd.DataFrame({
        'c1': np.random.choice(list('ABCDEFGH'), n),
        'c2': np.random.choice(list('ABCDEFGH'), n),
        'c3': np.random.choice(list('ABCDEFGH'), n),
        't1': np.random.randint(1, 20, n),
        't2': pd.to_datetime(ts, unit='s'),
        'x1': np.random.randn(n),
        'x2': np.random.randn(n),
        'x3': np.random.randn(n)
    })
    return df

def summarize_numeric_1(df, mask, groups, values, quantiles):
    dfg = df[mask].groupby(groups)[values]
    return dfg.describe(percentiles=quantiles).stack()

def summarize_numeric_2(df, mask, groups, values, quantiles):
    dfg = df[mask].groupby(groups)[values]
    dfg_stats = dfg.agg([np.mean, np.std, len]).stack()
    dfg_quantiles = dfg.quantile(quantiles)
    return dfg_stats.append(dfg_quantiles).sort_index()

%time df = make_data(1000000)
groups = ['c1', 'c2', 't1']
mask = df['c3'].eq('H') & df['c1'].eq('A')
values = ['x1', 'x3']
base_quantiles = [0, .5, 1]
extd_quantiles = [0.25, 0.75, 0.9]
all_quantiles = base_quantiles + extd_quantiles
%timeit summarize_numeric_1(df, mask, groups, values, extd_quantiles)
%timeit summarize_numeric_2(df, mask, groups, values, all_quantiles)
The timings on my PC for this are:
Using describe: 873 ms ± 8.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Using two step method: 105 ms ± 490 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
All inputs welcome!

Odd Nested Loop Doesn't Break Properly (Python 3.x)
The following code should print multiple lines of
1 2 3
mixed with lines of
0
However, what it actually prints is multiple lines of
1 1 1 1 3
mixed with lines of
0
Code:
boxes = []
for y in range(len(hmap)):
    for x in range(len(hmap[y])):
        w = 4
        h = 4
        minh = hmap[y][x]
        maxh = hmap[y][x]
        htemp = h
        while True:
            if y + htemp > len(hmap):
                break
            passes = False
            wtemp = w
            while True:
                if x + wtemp > len(hmap[y]):
                    break
                for c in range(x, x + wtemp):
                    for r in range(y, y + htemp):
                        minh = min(minh, hmap[c][r])
                        maxh = max(maxh, hmap[c][r])
                        if maxh - minh > v:
                            print('1')
                            break
                    else:
                        print('2')
                        break
                else:
                    print('3')
                    break
                print('0')
                passes = True
                wtemp += 1
            if passes:
                boxes.append([x, y, wtemp-1, htemp])
            htemp += 1
            if not passes:
                break
hmap is a 2D array of float values that is passed to the function this code is in.
This segment of code is supposed to generate a series of rectangles for other (irrelevant) parts of the code to use later on. Rectangles that "pass" (whose min/max values don't have a difference greater than v) cause 0 to be printed. Rectangles that don't "pass" should cause 1 2 3 to be printed as the nested for and while loops break. Why doesn't it work?
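For reference (this is not the poster's code), a minimal illustration of the two language rules at play here: break only exits the innermost loop, and an else clause attached to a for or while runs exactly when that loop finished without hitting break. Returning from a helper function is the usual way to leave several nested loops at once:

```python
# break only leaves the innermost loop; a loop's else runs only when the loop
# completed without a break. A helper function's return exits all loops at once.
# (Illustrative example with made-up names, not the question's code.)

def first_large_cell(grid, limit):
    """Return (row, col) of the first value greater than limit, or None."""
    for r, row in enumerate(grid):
        for c, val in enumerate(row):
            if val > limit:
                return r, c        # exits both loops in one step
    return None

grid = [[1.0, 2.0], [3.0, 9.5], [4.0, 5.0]]
print(first_large_cell(grid, 8))   # (1, 1)
print(first_large_cell(grid, 99))  # None
```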
Can someone help me turn this for loop into a map?
I'm trying to make my code more efficient. I'm not super familiar with maps; I can do simple ones, but I'm having trouble wrapping my head around this one.
def mate(self, parent1, parent2):
    length = parent1.size
    parent1 *= 1e6
    parent2 *= 1e6
    parent1 = parent1.astype(int)
    parent2 = parent2.astype(int)
    child1 = []
    child2 = []
    for i in range(length):
        hold1 = parent1[i]
        hold2 = parent2[i]
        hold1 = np.binary_repr(hold1, 30)
        hold2 = np.binary_repr(hold2, 30)
        pivot = np.random.randint(0, length)
        childGene1 = hold1[:pivot] + hold2[pivot:]
        childGene2 = hold2[:pivot] + hold1[pivot:]
        childGene1 = int(childGene1, 2) / 1e6
        childGene2 = int(childGene2, 2) / 1e6
        child1.append(childGene1)
        child2.append(childGene2)
    return [child1, child2]
It's a mating function I made for a genetic algorithm. How should I go about turning this into a map?
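One way to express the loop body as a function mapped over pairs of genes is sketched below. This assumes the parents are 1-D numpy arrays of positive floats, as in the question, and the demo arrays are illustrative; note that map by itself will not make this faster than the loop (vectorizing would).

```python
import numpy as np

# Sketch: per-gene one-point crossover as a function mapped over gene pairs.
# Mirrors the question's logic (scale by 1e6, 30-bit binary_repr), but the
# names and the demo parents are illustrative.

def crossover(pair, bits=30, scale=1e6):
    g1, g2 = (int(g * scale) for g in pair)
    b1, b2 = np.binary_repr(g1, bits), np.binary_repr(g2, bits)
    pivot = np.random.randint(0, bits)      # pivot within the bit string
    c1 = int(b1[:pivot] + b2[pivot:], 2) / scale
    c2 = int(b2[:pivot] + b1[pivot:], 2) / scale
    return c1, c2

parent1 = np.array([0.5, 1.25])
parent2 = np.array([0.75, 2.0])
child1, child2 = map(list, zip(*map(crossover, zip(parent1, parent2))))
```

One detail worth double-checking: this sketch draws the pivot from the 30-bit width, while the original draws it from range(length) (the number of genes), which may or may not be intended.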

Why is I() "AsIs" necessary when making a linear polynomial model in R?
I'm trying to understand the role of the I() base function in R when using a linear polynomial model or the poly function. When I calculate the model using q + q^2, q + I(q^2), or poly(q, 2), I get different answers.
Here is an example:
set.seed(20)
q <- seq(from=0, to=20, by=0.1)
y <- 500 + .1 * (q-5)^2
noise <- rnorm(length(q), mean=10, sd=80)
noisy.y <- y + noise
model3 <- lm(noisy.y ~ poly(q,2))
model1 <- lm(noisy.y ~ q + I(q^2))
model2 <- lm(noisy.y ~ q + q^2)
I(q^2)==I(q)^2
I(q^2)==q^2
summary(model1)
summary(model2)
summary(model3)
Here is the output:
> summary(model1)

Call:
lm(formula = noisy.y ~ q + I(q^2))

Residuals:
     Min       1Q   Median       3Q      Max 
-211.592  -50.609    4.742   61.983  165.792 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 489.3723    16.5982  29.483   <2e-16 ***
q             5.0560     3.8344   1.319    0.189    
I(q^2)        0.1530     0.1856   0.824    0.411    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 79.22 on 198 degrees of freedom
Multiple R-squared:  0.02451, Adjusted R-squared:  0.01466
F-statistic: 2.488 on 2 and 198 DF,  p-value: 0.08568

> summary(model2)

Call:
lm(formula = noisy.y ~ q + q^2)

Residuals:
    Min      1Q  Median      3Q     Max 
-219.96  -54.42    3.30   61.06  170.79 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 499.5209    11.1252  44.900   <2e-16 ***
q             1.9961     0.9623   2.074   0.0393 *  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 79.16 on 199 degrees of freedom
Multiple R-squared:  0.02117, Adjusted R-squared:  0.01625
F-statistic: 4.303 on 1 and 199 DF,  p-value: 0.03933

> summary(model3)

Call:
lm(formula = noisy.y ~ poly(q, 2))

Residuals:
     Min       1Q   Median       3Q      Max 
-211.592  -50.609    4.742   61.983  165.792 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  519.482      5.588  92.966   <2e-16 ***
poly(q, 2)1  164.202     79.222   2.073   0.0395 *  
poly(q, 2)2   65.314     79.222   0.824   0.4107    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 79.22 on 198 degrees of freedom
Multiple R-squared:  0.02451, Adjusted R-squared:  0.01466
F-statistic: 2.488 on 2 and 198 DF,  p-value: 0.08568
Why is the I() necessary when doing a polynomial model in R? Also, is it normal that the poly function doesn't give the same result as q + I(q^2)?
IPOPT does not obey constraints but does not record the violation when using CppAD
I am trying to evaluate the coefficients and time of two fifth-order polynomials (one each for x and y position) that minimize effort and time (the objective function) when connecting an initial position, velocity, and orientation to a desired final position and orientation with 0 velocity (equality constraints). Here is the code:
#include <vector>
#include <cppad/cppad.hpp>
#include <cppad/ipopt/solve.hpp>

using CppAD::AD;

typedef struct {
    double x, y, theta, linear_velocity;
} Waypoint;

typedef std::vector<Waypoint> WaypointList;

struct TrajectoryConfig {
    //! gain on accumulated jerk term in cost function
    double Kj;
    //! gain on time term in cost function
    double Kt;
    //! gain on terminal velocity term in cost function
    double Kv;
};

class Trajectory {
public:
    explicit Trajectory(TrajectoryConfig config);
    ~Trajectory();
    void updateConfigs(TrajectoryConfig config);
    void solve(WaypointList waypoints);
private:
    //! solution vector
    std::vector<double> solution_;
    //! gain on accumulated jerk term in cost function
    double Kj_;
    //! gain on time term in cost function
    double Kt_;
    //! gain on terminal velocity term in cost function
    double Kv_;
};

/* Trajectory(TrajectoryConfig) class constructor.
   Initializes class given configuration struct */
Trajectory::Trajectory(TrajectoryConfig config) {
    Kj_ = config.Kj;
    Kt_ = config.Kt;
    Kv_ = config.Kv;
}

Trajectory::~Trajectory() {
    std::cerr << "Trajectory Destructor!" << std::endl;
}

enum Indices { A0 = 0, A1, A2, A3, A4, A5, B0, B1, B2, B3, B4, B5, T };

class FGradEval {
public:
    size_t M_;
    // gains on cost
    double Kj_, Kt_;

    // constructor
    FGradEval(double Kj, double Kt) {
        M_ = 13; // no. of parameters per trajectory segment: 2 x 6 coefficients + 1 time
        Kj_ = Kj;
        Kt_ = Kt;
    }

    typedef CPPAD_TESTVECTOR(AD<double>) ADvector;

    void operator()(ADvector& fgrad, const ADvector& vars) {
        fgrad[0] = 0;

        AD<double> accum_jerk;
        AD<double> a0, a1, a2, a3, a4, a5;
        AD<double> b0, b1, b2, b3, b4, b5;
        AD<double> T, T2, T3, T4, T5;
        AD<double> x, y, vx, vy;

        size_t offset = 1;

        a0 = vars[Indices::A0];
        a1 = vars[Indices::A1];
        a2 = vars[Indices::A2];
        a3 = vars[Indices::A3];
        a4 = vars[Indices::A4];
        a5 = vars[Indices::A5];
        b0 = vars[Indices::B0];
        b1 = vars[Indices::B1];
        b2 = vars[Indices::B2];
        b3 = vars[Indices::B3];
        b4 = vars[Indices::B4];
        b5 = vars[Indices::B5];
        T = vars[Indices::T];

        T2 = T*T;
        T3 = T*T2;
        T4 = T*T3;
        T5 = T*T4;

        x = a0 + a1*T + a2*T2 + a3*T3 + a4*T4 + a5*T5;
        y = b0 + b1*T + b2*T2 + b3*T3 + b4*T4 + b5*T5;
        vx = a1 + 2*a2*T + 3*a3*T2 + 4*b4*T3 + 5*a5*T4;
        vy = b1 + 2*b2*T + 3*b3*T2 + 4*b4*T3 + 5*b5*T4;

        //! cost-terms
        //! accum_jerk is the analytic integral of int_0^T (jerk_x^2 + jerk_y^2) dt
        accum_jerk = 36 * T * (a3*a3 + b3*b3) + 144 * T2 * (a3*a4 + b3*b4)
                   + T3 * (240*(a3*a5 + b3*b5) + 192*(a4*a4 + b4*b4))
                   + 720 * T4 * (a4*a5 + b4*b5) + 720 * T5 * (a5*a5 + b5*b5);
        fgrad[0] += Kj_ * accum_jerk;
        fgrad[0] += Kt_ * T;

        //! initial equality constraints
        fgrad[offset] = vars[Indices::A0];
        fgrad[1 + offset] = vars[Indices::B0];
        fgrad[2 + offset] = vars[Indices::A1];
        fgrad[3 + offset] = vars[Indices::B1];
        offset += 4;

        //! terminal inequality constraints
        fgrad[offset] = x;
        fgrad[offset + 1] = y;
        fgrad[offset + 2] = vx;
        fgrad[offset + 3] = vy;
    }
};

void Trajectory::solve(WaypointList waypoints) {
    if (waypoints.size() != 2) {
        std::cerr << "Trajectory::solve - Function requires 2 waypoints." << std::endl;
        return;
    }

    //! status flag for solution
    bool ok;
    //! typedef for ipopt/cppad
    typedef CPPAD_TESTVECTOR(double) Dvector;

    //! no. of variables for optimization problem
    size_t n_vars = 13;
    //! no. of constraints
    size_t n_cons = 4 * 2; // the start and final waypoint each contribute 4 constraints: (x, y, theta, v) -> (x, y, vx, vy)

    //! create vector container for optimizer solution
    //! and initialize it to zero
    Dvector vars(n_vars);
    for (size_t i = 0; i < n_vars; i++) {
        vars[i] = 0;
    }

    //! set initial state (this will only determine the first two coefficients of the initial polynomials)
    double v = (fabs(waypoints[0].linear_velocity) < 1e-3) ? 1e-3 : waypoints[0].linear_velocity;
    vars[Indices::A0] = waypoints[0].x;
    vars[Indices::B0] = waypoints[0].y;
    vars[Indices::A1] = v * cos(waypoints[0].theta);
    vars[Indices::B1] = v * sin(waypoints[0].theta);
    vars[Indices::T] = 0;

    //! there are no explicit bounds on vars, so set to something large for the optimizer
    //! we could perhaps put bounds on the coeffs corresponding to acc, jerk, snap, ..
    Dvector vars_lb(n_vars);
    Dvector vars_ub(n_vars);
    for (size_t i = 0; i < n_vars; i++) {
        vars_lb[i] = -1e10;
        vars_ub[i] = 1e10;
    }

    //! time must be non-negative!
    vars_lb[Indices::T] = 0;

    //! set the bounds on the constraints
    Dvector cons_lb(n_cons);
    Dvector cons_ub(n_cons);

    //! offset term on index
    size_t offset = 0;

    //! initial equality constraint - we must start from where we are!
    cons_lb[0] = waypoints[0].x;
    cons_ub[0] = waypoints[0].x;
    cons_lb[1] = waypoints[0].y;
    cons_ub[1] = waypoints[0].y;
    cons_lb[2] = v * cos(waypoints[0].theta);
    cons_ub[2] = v * cos(waypoints[0].theta);
    cons_lb[3] = v * sin(waypoints[0].theta);
    cons_ub[3] = v * sin(waypoints[0].theta);
    offset += 4;

    //! terminal point
    cons_lb[offset] = waypoints[1].x;
    cons_ub[offset] = waypoints[1].x;
    cons_lb[offset + 1] = waypoints[1].y;
    cons_ub[offset + 1] = waypoints[1].y;
    cons_lb[offset + 2] = 1e-3 * cos(waypoints[1].theta);
    cons_ub[offset + 2] = 1e-3 * cos(waypoints[1].theta);
    cons_lb[offset + 3] = 1e-3 * sin(waypoints[1].theta);
    cons_ub[offset + 3] = 1e-3 * sin(waypoints[1].theta);

    //! create instance of objective function class
    FGradEval fg_eval(Kj_, Kt_);

    //! IPOPT INITIALIZATION
    std::string options;
    options += "Integer print_level 5\n";
    options += "Sparse true forward\n";
    options += "Sparse true reverse\n";
    options += "Integer max_iter 100\n";
    // options += "Numeric tol 1e-4\n";

    //! compute the solution
    CppAD::ipopt::solve_result<Dvector> solution;

    //! solve
    CppAD::ipopt::solve<Dvector, FGradEval>(
        options, vars, vars_lb, vars_ub, cons_lb, cons_ub, fg_eval, solution);

    //! check if the solver was successful
    ok = solution.status == CppAD::ipopt::solve_result<Dvector>::success;

    //! if the solver was unsuccessful, exit
    //! this case will be handled by calling method
    if (!ok) {
        std::cerr << "Trajectory::solve - Failed to find a solution!" << std::endl;
        return;
    }

    //! (DEBUG) output the final cost
    std::cout << "Final Cost: " << solution.obj_value << std::endl;

    //! populate output with argmin vector
    for (size_t i = 0; i < n_vars; i++) {
        solution_.push_back(solution.x[i]);
    }

    return;
}
Where I am having problems is in the following:
 The initial equality constraint (starting position, velocity, and orientation) is being upheld, while the terminal velocity constraint is not. The algorithm terminates at the correct final (x, y, angle), but the velocity is not zero. I have looked through the code and I cannot understand why the position and orientation at the endpoint would be obeyed while the velocity would not. My suspicion is that my definition of the equality constraints is not what I think it is.
 The problem does not converge reliably, even though it seems a fairly simple problem as defined (see the output below).
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************

This is Ipopt version 3.11.9, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...: 30
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 23

Total number of variables............................: 13
                     variables with only lower bounds: 0
                variables with lower and upper bounds: 13
                     variables with only upper bounds: 0
Total number of equality constraints.................: 8
Total number of inequality constraints...............: 0
        inequality constraints with only lower bounds: 0
   inequality constraints with lower and upper bounds: 0
        inequality constraints with only upper bounds: 0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  9.9999900e-03 1.00e+00 5.00e-04  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  5.9117705e-02 1.00e+00 1.20e+02  -1.0 5.36e+07    -  1.04e-05 7.63e-06f 18
   2  1.1927070e+00 1.00e+00 2.62e+06  -1.0 9.21e+05  -4.0 6.16e-15 2.29e-23H  1
   3  2.9689692e-01 1.00e+00 1.80e+05  -1.0 2.24e+13    -  1.83e-07 8.42e-10f 20
   4r 2.9689692e-01 1.00e+00 1.00e+03   0.0 0.00e+00    -  0.00e+00 4.58e-07R 11
   5r 2.1005820e+01 9.99e-01 5.04e+02   0.0 6.60e-02    -  9.90e-01 4.95e-01f  2
   6r 7.7118141e+04 9.08e-01 5.18e+03   0.0 2.09e+00    -  4.21e-01 1.00e+00f  1
   7r 1.7923891e+04 7.82e-01 1.54e+03   0.0 3.63e+00    -  9.90e-01 1.00e+00f  1
   8r 5.9690221e+03 5.41e-01 5.12e+02   0.0 2.92e+00    -  9.90e-01 1.00e+00f  1
   9r 4.6855625e+03 5.54e-01 1.95e+02   0.0 5.14e-01    -  9.92e-01 1.00e+00f  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10r 8.4901226e+03 5.55e-01 5.18e+01   0.0 2.24e-01    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 10

                                   (scaled)                 (unscaled)
Objective...............:   8.4901225582208808e+03    8.4901225582208808e+03
Dual infeasibility......:   6.3613117039244315e+06    6.3613117039244315e+06
Constraint violation....:   5.5503677023620179e-01    5.5503677023620179e-01
Complementarity.........:   9.9999982900301554e-01    9.9999982900301554e-01
Overall NLP error.......:   6.3613117039244315e+06    6.3613117039244315e+06

Number of objective function evaluations             = 43
Number of objective gradient evaluations             = 6
Number of equality constraint evaluations            = 71
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 12
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 10
Total CPU secs in IPOPT (w/o function evaluations)   = 0.006
Total CPU secs in NLP function evaluations           = 0.001

EXIT: Maximum Number of Iterations Exceeded.
I am not looking for an answer to my problem specifically. What I am hoping for are some suggestions as to why my problem may not be working as expected. Specifically, do my constraints make sense, as defined? Is the variable initialization done properly?

Polynomial equation retrieval
There is a dataset with x and y, and I need to know the equation for y (how it was modeled). So I use the coefficients and powers from a model with a 99% score:
poly = PolynomialFeatures(1)
Xtrain_tr = poly.fit_transform(Xtrain)
Xtest_tr = poly.fit_transform(Xtest)
model = LinearRegression(fit_intercept=True)
model.fit(Xtrain_tr, ytrain)
poly = poly.powers_
coef = model.coef_

coef
out: array([   0.        ,    4.25571706,    0.59691212,   20.28394618,  162.97671755])

poly
out: array([[0, 0, 0, 0],
       [1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]], dtype=int64)
And here I am trying to mimic the y value for the first row in the dataset:
temp = 0
final = 0
i = 0
for i in range(4):
    temp = (X_train[:1]**poly[i]) * coef[i]
    temp = temp.values.sum()
    final = final + temp
    temp = 0
    i = i + 1
The result is not satisfying; please suggest what I can improve here. Thanks.
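Two things look off in the loop: it iterates range(4) although the printed coef and poly arrays have five rows each, and it never adds model.intercept_ (which carries the constant term when fit_intercept=True, which is why coef[0] is 0). A sketch of the reconstruction, with small stand-in arrays in place of the question's fitted model:

```python
import numpy as np

# Sketch of rebuilding a LinearRegression prediction from PolynomialFeatures
# metadata. The arrays below are illustrative stand-ins for poly.powers_,
# model.coef_, and model.intercept_ from the question.

powers = np.array([[0, 0], [1, 0], [0, 1]])   # like poly.powers_
coef = np.array([0.0, 2.0, -1.0])             # like model.coef_
intercept = 5.0                               # like model.intercept_

def predict_row(x, powers, coef, intercept):
    # Each polynomial feature is the product of x_k ** power_k over all inputs;
    # the prediction is the intercept plus the coefficient-weighted sum.
    features = np.prod(x ** powers, axis=1)
    return intercept + features @ coef

x = np.array([3.0, 4.0])
print(predict_row(x, powers, coef, intercept))   # 5 + 2*3 - 1*4 = 7.0
```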

How can I demonstrate the sequence of periodic numbers
Help me! I can't solve the problem.
I have a sequence of numbers like the one below. How can I prove it's cyclic?

C function returns 0 when it shouldn't
I'm making a set of functions in C to simulate a microcanonical ensemble (a periodic-boundary box with N particles at fixed energy and volume). The point is that one of my main functions gets the distance between 2 particles, and sometimes it returns 0 even if the particles are not that close.
The function is
// Determine the minimum distance given two coordinates and a periodic box length
float pdist(float x1, float y1, float x2, float y2, float a) {
    float Dx = abs(x2-x1);
    float Dy = abs(y2-y1);
    if(Dx > a/2) Dx = a-Dx;
    if(Dy > a/2) Dy = a-Dy;
    return sqrt(pow(Dx,2)+pow(Dy,2));
}
If you want to see all the code, it is at https://gitlab.com/jarr.tecn/statistical_mechanics in the file box.h, with an example in ejemplo.h.

Boole's rule in Python
This is my first post in this forum, so please forgive me if something isn't the way it should be. My problem is about implementing Boole's rule in Python. I've successfully implemented the trapezoidal and Simpson's rules, but there is a difference in the results when I calculate an integral. So here's my code:
# implementation of 3 different integration techniques
from math import *
from scipy import *
1. Trapezoidal rule
# the trapezoidal rule approximates the area under the function as a trapezoid.
# Mostly it averages the left and right sums of the 2 sums given by the Riemann integral.
# the approximation is as follows:
f = lambda x: e**(-x**2)
g = lambda x: x**6 - 3*x**5 + 10*x**4 - 20*x**3
h = lambda x: 1/(x**2 - 3*x + 5)

def trapezoidal(f, a, b, n):
    dx = ((b-a)/2)/n
    x = a
    sum = 0
    while x < b:
        sum += f(x)*dx
        x += dx
    return sum

print("trapezoidal f1:", trapezoidal(f, 0, 1, 10000))
print("trapezoidal f2:", trapezoidal(g, 0, 4, 10000))
print("trapezoidal f3:", trapezoidal(h, 0, 10, 10000))
2. Simpson's rule
def simpson(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even (received n=%d)" % n)
    dx = (b - a) / n
    sum = f(a) + f(b)
    for i in range(1, n, 2):
        sum += 4 * f(a + i * dx)
    for i in range(2, n-1, 2):
        sum += 2 * f(a + i * dx)
    return sum * dx / 3

print("simpson f1:", simpson(f, 0, 1, 10000))
print("simpson f2:", simpson(g, 0, 4, 10000))
print("simpson f3:", simpson(h, 0, 10, 10000))
3. Boole's rule:
def boole(f, a, b, n):
    dx = (b - a) / (n-1)
    sum = 7*(f(a) + f(b))
    sum += 32*(f(a+dx) + f(b-dx))
    sum += 12*(f(a+2*dx))
    return 2*sum * dx / 45

print("boole f1:", boole(f, 0, 1, 5))
print("boole f2:", boole(g, 0, 4, 5))
print("boole f3:", boole(h, 0, 10, 5))
I used the formula given on Wikipedia, but without the error term. Unfortunately I can't post images yet.
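For what it's worth (this is not the poster's code): a single application of Boole's rule over 5 sample points can't be expected to match trapezoidal or Simpson's results computed with n = 10000; a composite version that applies the 5-point rule per panel closes most of the gap. A sketch, with illustrative names:

```python
from math import e

# Composite Boole's rule sketch: each panel uses 5 equally spaced points with
# weights 7, 32, 12, 32, 7, scaled by 2*dx/45, and panels share endpoints.

def boole_composite(f, a, b, panels):
    dx = (b - a) / (4 * panels)          # 4 subintervals per panel
    total = 0.0
    for p in range(panels):
        x0 = a + 4 * p * dx
        total += (7*f(x0) + 32*f(x0+dx) + 12*f(x0+2*dx)
                  + 32*f(x0+3*dx) + 7*f(x0+4*dx))
    return 2 * dx * total / 45

f = lambda x: e**(-x**2)
print(boole_composite(f, 0, 1, 50))      # ≈ 0.7468241
```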

How can I perform calculations on polynomials in PHP?
For example, how can I determine the quotient and remainder when something like x^4 − 3x^2 + 4 is divided by x − 2?
I can't figure out how to do this without any library. I am not opposed to using a library, but the only one I could find is https://github.com/markrogoyski/math-php and it doesn't support what I am trying to do.
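The division algorithm for a linear divisor (synthetic division) is only a few lines, so it is easy to port by hand. A sketch in Python rather than PHP, using the example from the question:

```python
# Synthetic division: dividing a polynomial by (x - r) needs one pass over the
# coefficients (highest degree first). Illustrative sketch, easy to port to PHP.

def synthetic_division(coeffs, r):
    """Divide the polynomial with coefficients coeffs by (x - r).
    Returns (quotient_coeffs, remainder)."""
    acc = [coeffs[0]]
    for c in coeffs[1:]:
        acc.append(c + r * acc[-1])      # bring down, multiply by r, add
    return acc[:-1], acc[-1]

# x^4 - 3x^2 + 4 divided by x - 2
q, rem = synthetic_division([1, 0, -3, 0, 4], 2)
print(q, rem)   # [1, 2, 1, 2] 8  ->  quotient x^3 + 2x^2 + x + 2, remainder 8
```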

Generate random polynomial for random input
I am currently looking into the polyfit lib for Python. I have a specific input value A and a desired output value B. I want to generate a randomized polynomial with a complexity 5 <= n <= 10 that, when given A as input, has B as its value. What would be the best way to achieve that?
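One simple approach (a sketch, not the only way, and with illustrative names): draw random coefficients for a random degree n in [5, 10], then adjust the constant term so the polynomial passes exactly through (A, B).

```python
import random

# Pick random leading coefficients for a random degree n in [lo, hi], then let
# the constant term absorb the offset so that p(A) == B.

def random_poly_through(A, B, lo=5, hi=10):
    n = random.randint(lo, hi)
    coeffs = [random.uniform(-10, 10) for _ in range(n)] + [0.0]  # degrees n..0
    value_at_A = sum(c * A**(n - i) for i, c in enumerate(coeffs))
    coeffs[-1] = B - value_at_A          # constant term fixes p(A) to B
    return coeffs                        # highest degree first

coeffs = random_poly_through(2.0, 7.0)
p = lambda x: sum(c * x**(len(coeffs) - 1 - i) for i, c in enumerate(coeffs))
print(round(p(2.0), 9))   # 7.0
```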
R: difficulties generating orthogonal polynomials between 0 and 1
I'm trying to do some regression on variables within the interval [0,1]. I'd like to include orthogonal quadratic and cubic components. The polynomials I am after are shifted Legendre polynomials (Wikipedia).
I am baffled by the behaviour of the poly() function in R. Not only does it not return a vector in [0,1], the vector it does return varies with the length of the input vector.
This code generates an illustrative example. The code generates some first-order polynomials (lines) from x. The intervals of the resultant polynomials range from [-0.36, 0.36] to [-0.017, 0.017].
x <- seq(0, 1, by = 0.1) # base variable
plot(x, x, col=2)
M <- matrix(nrow=6, ncol=4)
dfpoly <- data.frame(M)
v <- c(0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001) # This vector alters the length of x
for (i in 1:length(v)){
  x <- seq(0, 1, by = v[i])
  y <- poly(x, degree = 3)[,1] # first-order polynomial, should in my mind be the same as x
  points(x, y)
  dfpoly[i,1] <- length(x)
  dfpoly[i,2] <- min(y)
  dfpoly[i,3] <- max(y)
  dfpoly[i,4] <- mean(diff(y)/diff(x))
}
names(dfpoly) <- c("x length", "y min", "y max", "y slope")
dfpoly
graph of x vs generated first order polynomials
Summary of outputs:
  x length          y min         y max       y slope
1       21 -0.36037498508 0.36037498508 0.72074997016
2      101 -0.17064747029 0.17064747029 0.34129494057
3      201 -0.12156314064 0.12156314064 0.24312628128
4     1001 -0.05469022724 0.05469022724 0.10938045447
5     2001 -0.03870080906 0.03870080906 0.07740161813
6    10001 -0.01731791041 0.01731791041 0.03463582082
Now, I'd expect all the lines to inhabit the same interval [0,1] as x, and perfectly overlap with x (the red series of points) on the graph. But they don't. Nor do they have any pattern to them that I can identify by eye.
1. What is the cause of the weird interval behaviour with poly()?
2. Are there other techniques or functions I can use to coerce these polynomials to [0,1]?