Can Python optimize my function inputs to get a target value?
I have been trying to locate a method similar to Excel's Solver where I can target a specific value for a function to converge on. I do not want a minimum or maximum optimization.
For example, if my function is:
f(x) = A^2 + cos(B) - sqrt(C)
I want f(x) = 1.86. Is there a Python method that can iterate a solution for A, B, and C to get as close to 1.86 as possible (given an acceptable error from the target value)?
1 answer

You need a root-finding algorithm for your problem; only a small transformation is required. Find the roots of g(x):
g(x) = A^2 + cos(B) - sqrt(C) - 1.86
Using scipy.optimize.root (refer to the documentation):

import numpy as np
from scipy import optimize

# two extra 0's as dummy equations, since root solves a system of
# equations rather than a single multivariate equation
def func(x):  # A, B, C represented by the ndarray x
    return [np.square(x[0]) + np.cos(x[1]) - np.sqrt(x[2]) - 1.86, 0, 0]

result = optimize.root(func, x0=[0.1, 0.1, 0.1])
x = result.x
A, B, C = x
x  # array([1.09328544, 0.37977694, 0.06970678])
You can now check your solution:

np.square(x[0]) + np.cos(x[1]) - np.sqrt(x[2])  # 1.8600000000000005
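If you only need the value within a tolerance rather than an exact root, another option (a sketch, not part of the original answer) is scipy.optimize.least_squares, which minimizes the squared residual directly and avoids the dummy equations; the bounds shown here are illustrative and simply keep C non-negative for the square root:

```python
import numpy as np
from scipy.optimize import least_squares

# Residual of the target equation: zero when A^2 + cos(B) - sqrt(C) = 1.86
def residual(x):
    A, B, C = x
    return [A**2 + np.cos(B) - np.sqrt(C) - 1.86]

# bounds keep C >= 0 so sqrt stays defined during iterations
sol = least_squares(residual, x0=[0.1, 0.1, 0.1],
                    bounds=([-5, -5, 0], [5, 5, 5]))
A, B, C = sol.x
print(A**2 + np.cos(B) - np.sqrt(C))  # should be very close to 1.86
```

Because the system is underdetermined (one equation, three unknowns), many (A, B, C) triples satisfy it; both approaches simply return one of them.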
See also questions close to this topic

how to add non .py files into python egg
I have a flask app which looks like
myapp
├── src
│   └── python
│       ├── config
│       └── app
├── MANIFEST.in
└── setup.py
The config folder is full of *.yaml files, I want to add all the static config files into my python egg after using
python setup.py install
My setup.py looks like
import os
from setuptools import setup, find_packages

path = os.path.dirname(os.path.abspath(__file__))

setup(
    name="app",
    version="1.0.0",
    author="Anna",
    description="",
    keywords=[],
    packages=find_packages(path + '/src/python'),
    package_dir={'': path + '/src/python'},
    include_package_data=True
)
I am trying to use MANIFEST.in to add the config files. However, it always gives this error:

error: Error: setup script specifies an absolute path: /Users/Anna/Desktop/myapp/src/python/app
setup() arguments must *always* be /-separated paths relative to the setup.py directory, *never* absolute paths.
I have not used any absolute paths in my code. I've seen other posts trying to bypass this error by removing
include_package_data=True
However, in my case, if I do that to avoid the error, none of my YAML files are added.
I was wondering if there are ways to fix this problem. Thanks
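A common fix (a sketch based on the layout above, not a verified solution) is to pass paths relative to setup.py instead of building absolute paths from __file__, since distutils rejects absolute package_dir entries. With relative paths, the YAML files can also be declared via package_data instead of MANIFEST.in; the 'app' package name and config/ subfolder are assumed from the question's tree:

```python
from setuptools import setup, find_packages

setup(
    name="app",
    version="1.0.0",
    author="Anna",
    # Relative, /-separated paths are what distutils requires here.
    packages=find_packages('src/python'),
    package_dir={'': 'src/python'},
    include_package_data=True,
    # Alternative to MANIFEST.in: declare the data files directly.
    package_data={'app': ['config/*.yaml']},
)
```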

How to extract all functions and API calls used in a Python source code?
Let us consider the following Python source code;
import os

def package_data(pkg, roots):
    data = []
    for root in roots:
        for dirname, _, files in os.walk(os.path.join(pkg, root)):
            for fname in files:
                data.append(os.path.relpath(os.path.join(dirname, fname), pkg))
    return {pkg: data}
From this source code, I want to extract all the functions and API calls. I found a similar question and solution. I ran the solution given there, and it generates the output
[os.walk, data.append]
But I am looking for the following output:
[os.walk, os.path.join, data.append, os.path.relpath, os.path.join]
What I understood after analyzing the solution code below is that it visits every node before the first bracket and drops the rest.
import ast

class CallCollector(ast.NodeVisitor):
    def __init__(self):
        self.calls = []
        self.current = None

    def visit_Call(self, node):
        # new call, trace the function expression
        self.current = ''
        self.visit(node.func)
        self.calls.append(self.current)
        self.current = None

    def generic_visit(self, node):
        if self.current is not None:
            print("warning: {} node in function expression not supported".format(
                node.__class__.__name__))
        super(CallCollector, self).generic_visit(node)

    # record the func expression
    def visit_Name(self, node):
        if self.current is None:
            return
        self.current += node.id

    def visit_Attribute(self, node):
        if self.current is None:
            self.generic_visit(node)
        self.visit(node.value)
        self.current += '.' + node.attr

tree = ast.parse(yoursource)
cc = CallCollector()
cc.visit(tree)
print(cc.calls)
Can anyone please help me modify this code so that it also traverses the API calls inside the brackets?
N.B.: This can be done using regex in Python, but that requires a lot of manual labor to find the appropriate API calls, so I am looking for something based on the Abstract Syntax Tree.
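One way to catch calls inside the arguments as well (a sketch, not the original answer's code) is to keep visiting a Call node's children after recording its name; depth-first traversal then emits nested calls in the order the question asks for:

```python
import ast

class CallCollector(ast.NodeVisitor):
    """Collect dotted call names, recursing into call arguments too."""
    def __init__(self):
        self.calls = []

    def _dotted(self, node):
        # Build "a.b.c" from a Name/Attribute chain; None for other expressions.
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.Attribute):
            base = self._dotted(node.value)
            return base + '.' + node.attr if base else None
        return None

    def visit_Call(self, node):
        name = self._dotted(node.func)
        if name:
            self.calls.append(name)
        # Keep walking into children, so calls nested in arguments are found.
        self.generic_visit(node)

source = """
def package_data(pkg, roots):
    data = []
    for root in roots:
        for dirname, _, files in os.walk(os.path.join(pkg, root)):
            for fname in files:
                data.append(os.path.relpath(os.path.join(dirname, fname), pkg))
    return {pkg: data}
"""
cc = CallCollector()
cc.visit(ast.parse(source))
print(cc.calls)
# ['os.walk', 'os.path.join', 'data.append', 'os.path.relpath', 'os.path.join']
```

Unlike the snippet above, this version deliberately ignores call targets that are not plain Name/Attribute chains (e.g. `f()()`), rather than warning about them.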

use correct version of 'pip' installed for your Python interpreter
I am using PyCharm, and I get this error when adding any package: Click Here
I have tried a lot of methods, but haven't succeeded yet.
Info:
- Python 3.6.2
- pip 10.0.1
- VirtualEnv

Optimizing a benchmark test function using the Nelder-Mead algorithm
I wrote a Nelder-Mead algorithm for minimizing an unconstrained optimization problem (objective function shown in the linked image). This test function is very similar to the Biggs EXP5 function, except it does not have the power 2. My NM algorithm is getting stuck in the shrink step, and I don't have any idea where and how to select the initial point or how to get near the global minimum. Does anyone know what the problem is? I'll appreciate any help.
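Without the exact objective it is hard to say where the shrink loop stalls, but a common workaround for multimodal functions is random restarts: run Nelder-Mead from several initial points and keep the best result. A sketch using scipy's implementation (the quadratic objective here is only a placeholder for the Biggs-EXP5-like function):

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder objective -- substitute the Biggs-EXP5-like function here.
def f(x):
    return float(np.sum((x - np.array([1.0, 10.0, 1.0, 5.0, 4.0])) ** 2))

rng = np.random.default_rng(0)
best = None
for _ in range(20):  # restarts help escape degenerate simplices
    x0 = rng.uniform(0.0, 12.0, size=5)
    res = minimize(f, x0, method='Nelder-Mead',
                   options={'xatol': 1e-8, 'fatol': 1e-8,
                            'maxiter': 5000, 'maxfev': 10000})
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)
```

Comparing a hand-written NM implementation against scipy's on the same starts is also a quick way to tell whether the problem is the algorithm or the starting simplex.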

Multiparameter constraint minimization in R
I am trying to figure out if it is possible to use R to perform a minimization with multiple constraints, where the constraints do not have a closed form.
In a nutshell, I have a set of random numbers (generated with fixed seeds). I have also defined a function, say f(theta) that will take a vector consisting of model parameters and transform my original set of random numbers into a random sample from the defined distribution.
I am trying to find a set of parameters such that, when fed into f, the resulting sample will have its 2.5th, 5th, and 10th percentiles less than or equal to set targets, while minimizing the difference between the target and actual values.

Getting Information about Optimal Solution from Multidimensional Knapsack Algorithm
I am building a multidimensional knapsack algorithm to optimize fantasy NASCAR lineups. I have the code thanks to another author and am now trying to piece back together the drivers the optimal solution consists of. I have written code to do this in the standard case, but am struggling to figure it out with the added dimension. Here's my code:
import pandas as pd

# open csv file
df = pd.read_csv('roster_kentucky_july18.csv')
print(df.head())

def knapsack2(n, weight, count, values, weights):
    dp = [[[0] * (weight + 1) for _ in range(n + 1)] for _ in range(count + 1)]
    for z in range(1, count + 1):
        for y in range(1, n + 1):
            for x in range(weight + 1):
                if weights[y - 1] <= x:
                    dp[z][y][x] = max(dp[z][y - 1][x],
                                      dp[z - 1][y - 1][x - weights[y - 1]] + values[y - 1])
                else:
                    dp[z][y][x] = dp[z][y - 1][x]
    return dp[-1][-1][-1]

w = 50000
k = 6
values = df['total_pts']
weights = df['cost']
n = len(values)
limit_fmt = 'Max value for weight limit {}, item limit {}: {}'
print(limit_fmt.format(w, k, knapsack2(n, w, k, values, weights)))
And my output:
   Driver             total_pts  cost
0  A.J. Allmendinger  29.030000  6400
1  Alex Bowman        39.189159  7600
2  Aric Almirola      53.746988  8800
3  Austin Dillon      32.476250  7000
4  B.J. McLeod        14.000000  4700
Max value for weight limit 50000, item limit 6: 325.00072048
I'm looking to at least get the "cost" associated with each "total_pts" in the optimal solution, though it would be nice if I could have it draw out the "Driver" column of the dataframe instead (which I guess could be accessed by indices). Thanks.
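One way to recover which items make up the optimum (a sketch, assuming the same DP recurrence as the snippet above) is to backtrack through the table: whenever dp[z][y][x] differs from dp[z][y-1][x], item y-1 must have been taken, so subtract its weight and move to the previous count layer:

```python
def knapsack_items(n, weight, count, values, weights):
    # Same DP as knapsack2: dp[z][y][x] = best value using at most z of the
    # first y items with weight budget x.
    dp = [[[0] * (weight + 1) for _ in range(n + 1)] for _ in range(count + 1)]
    for z in range(1, count + 1):
        for y in range(1, n + 1):
            for x in range(weight + 1):
                dp[z][y][x] = dp[z][y - 1][x]
                if weights[y - 1] <= x:
                    cand = dp[z - 1][y - 1][x - weights[y - 1]] + values[y - 1]
                    if cand > dp[z][y][x]:
                        dp[z][y][x] = cand
    # Backtrack: if skipping item y-1 changes the value, the item was taken.
    chosen = []
    z, x = count, weight
    for y in range(n, 0, -1):
        if dp[z][y][x] != dp[z][y - 1][x]:
            chosen.append(y - 1)
            x -= weights[y - 1]
            z -= 1
    return dp[count][n][weight], chosen[::-1]

# Toy data: the best pair within weight 5 is items 1 and 2 (value 50).
total, chosen = knapsack_items(3, 5, 2, [10, 20, 30], [1, 2, 3])
print(total, chosen)  # 50 [1, 2]
```

With the real data, chosen would hold row indices into the dataframe, so something like df.iloc[chosen] would give the drivers with their total_pts and cost.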

Using Scipy's deconvolve function to deconvolve electrodermal activity data
I wish to deconvolve an EDA (electrodermal activity) signal using a Bateman function as the filter as described here, using Scipy's deconvolve function.
However, when I attempt this, the deconvolution graph does not look how I expect it to. Namely, it generally takes the shape of a mostly flat line, sometimes with spikes at multiples of the filter length:
What am I missing here? Should I be smoothing the EDA curve? Am I hoping for too much from deconvolve? My code is below:

import csv
import math
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as signal

with open('test session 1.csv', newline='') as csvfile:
    filereader = csv.reader(csvfile, delimiter=' ')
    i = 0
    timestamps = []
    conductances = []
    for row in filereader:
        i += 1
        fields = ' '.join(row).split()
        if i > 3:
            timestamps.append(float(fields[0]))
            conductances.append(float(fields[5]))

timestamps = [timestamp - timestamps[0] for timestamp in timestamps]

c = 10.
tau1 = 300
tau2 = 2000
bateman = [c * (math.exp(-time / tau2) - math.exp(-time / tau1)) for time in timestamps]
bateman = bateman[3:1700]

deconv, remain = signal.deconvolve(conductances, bateman)

fig, ax = plt.subplots(nrows=4)
ax[0].plot(conductances, label="EDA Signal")
ax[1].plot(bateman, label="Bateman Function")
ax[2].plot(deconv, label="Deconvolution Result")
ax[3].plot(remain, label="Remainder")
for i in range(len(ax)):
    ax[i].legend(loc=4)
plt.show()
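One thing worth knowing: signal.deconvolve performs polynomial long division, which is numerically fragile for long, noisy signals and often produces exactly this flat-line-with-spikes picture. An alternative (a sketch, not part of the original question) is frequency-domain deconvolution with a small regularization term:

```python
import numpy as np

def fft_deconvolve(sig, kernel, eps=1e-6):
    # Divide in the frequency domain; eps damps frequencies where the
    # kernel's spectrum is near zero (Tikhonov-style regularization).
    n = len(sig)
    K = np.fft.rfft(kernel, n)
    S = np.fft.rfft(sig, n)
    return np.fft.irfft(S * np.conj(K) / (np.abs(K) ** 2 + eps), n)

# Demo: a single impulse driver convolved with an exponential kernel
# is recovered at the right position.
kernel = np.exp(-np.arange(100) / 20.0)
driver = np.zeros(200)
driver[50] = 1.0
observed = np.convolve(driver, kernel)[:200]
recovered = fft_deconvolve(observed, kernel)
print(int(np.argmax(recovered)))  # 50
```

For real EDA data, eps usually needs tuning upward to suppress noise amplification at high frequencies.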

scipy.linalg.lu() vs scipy.linalg.lu_factor()
Aside from lu() having the option to apply the permutation matrix to the lower triangular matrix, is there any difference between these two functions? I would appreciate insight as to which is better, faster, and/or least likely to fail.
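A short sketch of the practical difference: lu gives you the explicit P, L, U factors for inspection, while lu_factor returns a packed (lu, piv) pair meant to be reused with lu_solve, which is typically the faster route when solving many right-hand sides against the same matrix:

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([1.0, 2.0])

# lu(): explicit factors, with A == P @ L @ U
P, L, U = lu(A)
print(np.allclose(P @ L @ U, A))  # True

# lu_factor(): packed form for repeated solves via lu_solve
lu_piv = lu_factor(A)
x = lu_solve(lu_piv, b)
print(np.allclose(A @ x, b))  # True
```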

Linear Regression in Python: Scipy vs. Statsmodels - same R², different coefficients
I'm running linear regressions with statsmodels, and because I tend to distrust my results, I also ran the same regression with scipy. The underlying dataset has about 80,000 observations. Unfortunately, I cannot provide the data for you to reproduce the errors.
I run two rounds of regressions: first a simple OLS, second a simple OLS with standardized variables.
Surprisingly, the results differ a lot. While R² and the p-value seem to be the same, the coefficients, intercept, and standard error are all over the place. Interestingly, after standardizing, the results align more closely. Now there is only a slight difference in the constant, which I am happy to attribute to rounding issues.
The exact numbers can be found in the appended screenshots.
Any idea where these differences come from, and why they disappear after standardizing? What did I do wrong? Do I have to be extra worried, since I run most of my regressions with sklearn (I only swapped to statsmodels because I needed some p-values), and even more differences may occur?
Thanks for your help! If you need any additional information, feel free to ask. Code and screenshots are provided below.
My code in short looks like this:
# package imports
import numpy as np
from scipy.stats import linregress
from scipy.stats.mstats import zscore
import statsmodels.api as sma
import statsmodels.formula.api as smf

# adding constant
train_IV_cons = sma.add_constant(train_IV)

# run regression
(coefficients, intercept, rvalue, pvalue, stderr) = linregress(train_IV[:, 0], train_DV)
print(coefficients, intercept, rvalue, pvalue, stderr)
est = sma.OLS(train_DV, train_IV_cons[:, [0, 1]])  # OLS lives in statsmodels.api
model_results = est.fit()
print(model_results.summary())

# normalize variables
train_IV_norm = train_IV
train_IV_norm[:, 0] = np.array(zscore(train_IV_norm[:, 0]))
train_IV_norm_cons = sma.add_constant(train_IV_norm)

# run regressions
(coefficients, intercept, rvalue, pvalue, stderr) = linregress(train_IV_norm[:, 0], train_DV_norm)
print(coefficients, intercept, rvalue, pvalue, stderr)
est = sma.OLS(train_DV_norm, train_IV_norm_cons[:, [0, 1]])
model_results = est.fit()
print(model_results.summary())

MGARCH simulation in R
I have a BEKK(1,1) model of dimension 3. I want to find out if my results are asymptotically valid. (The BEKK model is a slightly adjusted multivariate garch model)
I wish to generate data from the fitted model in the case of the dimension 3, with the sample size 3000. Estimate parameter by fitting BEKK model to the generated data sets and repeat this step, say 10000 times. Then I obtain 10000 estimators for each parameter for which the sampling distribution can be constructed and then it has to be compared with the asymptotic distribution.
I've been using the mgarchBEKK package when creating my BEKK models. The package provides the example below as help for simulation:
## Simulate series:
simulated <- simulateBEKK(2, 1000, c(1, 1))
## Prepare the matrix:
simulated <- do.call(cbind, simulated$eps)
## Estimate with default arguments:
estimated <- BEKK(simulated)
I'm not a master in R by any means. So I'm not quite sure how to code the procedure that I describe above.
Any help is greatly appreciated :)

BEKK model simulation in R
I have been working with BEKK(1,1) models of dimension 3, 4, and 5 for a time series analysis. I was given the feedback to include a simulation study. In order to trust the results I obtain, I want to show, via simulations, that the estimation of the BEKK model parameters also works well for the sample sizes considered in the paper. I want to show that the distributional theory can be applied for my sample size.
I want to investigate whether the sample size is enough to apply the asymptotic results.
Method:
I wish to generate data from the fitted model in the case of the dimension 3, with the sample size 3000. Estimate parameter by fitting BEKK model to the generated data sets and repeat this step, say 10000 times. Then I obtain 10000 estimators for each parameter for which the sampling distribution can be constructed and then it has to be compared with the asymptotic distribution.
Then repeat this procedure for dimension 4 and 5.
# I've been using the mgarchBEKK package when creating my BEKK models.
# The package provides the example below as help for simulation:

## Simulate series:
simulated <- simulateBEKK(2, 1000, c(1, 1))
## Prepare the matrix:
simulated <- do.call(cbind, simulated$eps)
## Estimate with default arguments:
estimated <- BEKK(simulated)
I'm not a master in R by any means. So I'm not quite sure how to code the procedure that I describe above.
Any help is greatly appreciated :)