Can Python optimize my function inputs to get a target value?
I have been trying to locate a method similar to Excel's Solver where I can target a specific value for a function to converge on. I do not want a minimum or maximum optimization.
For example, if my function is:
f(x) = A^2 + cos(B) - sqrt(C)
I want f(x) = 1.86. Is there a Python method that can iterate toward values of A, B, and C that get as close to 1.86 as possible (within an acceptable error of the target value)?
You need a root-finding algorithm for your problem; only a small transformation is required. Find the roots of g(x):
g(x) = A^2 + cos(B) - sqrt(C) - 1.86
Use scipy.optimize.root (refer to the documentation):
import numpy as np
from scipy import optimize

# two extra 0's as dummy equations, since root solves a system of
# equations rather than a single multivariate equation
def func(x):  # A, B, C represented by the x ndarray
    return [np.square(x[0]) + np.cos(x[1]) - np.sqrt(x[2]) - 1.86, 0, 0]

result = optimize.root(func, x0=[0.1, 0.1, 0.1])
A, B, C = result.x
result.x  # array([ 1.09328544, -0.37977694,  0.06970678])
you can now check your solution:
np.square(A) + np.cos(B) - np.sqrt(C)  # 1.8600000000000005
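Since g is a single equation in three unknowns, the system is underdetermined and the dummy equations just pad it out. An alternative sketch (not from the answer above) is to minimize the squared residual directly with scipy.optimize.minimize; the lower bound on C is an added assumption to keep the square root defined:

```python
import numpy as np
from scipy.optimize import minimize

# Squared residual of f(A, B, C) against the 1.86 target
def residual(x):
    A, B, C = x
    return (np.square(A) + np.cos(B) - np.sqrt(C) - 1.86) ** 2

# Keep C slightly positive so sqrt stays defined during the search
res = minimize(residual, x0=[0.1, 0.1, 0.1],
               bounds=[(None, None), (None, None), (1e-9, None)],
               method="L-BFGS-B")
A, B, C = res.x
```

Because the problem is underdetermined, different starting points will land on different (A, B, C) triples that all satisfy f(x) ≈ 1.86.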
See also questions close to this topic
Python Kafka Streaming API - Binning
I am using the Python Kafka stream binning example given in this: Python Kafka Streaming API.
I am able to generate the data using the generator.py file under winton-kafka-streams/examples/binning/, but when I run binning.py from the same folder, I get the issue below. Could someone help me resolve this?
Change color of missing values in Seaborn heatmap
Consider the example of missing values in the Seaborn documentation:
corr = np.corrcoef(np.random.randn(10, 200))
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask)] = True
sns.heatmap(corr, mask=mask, vmax=.3, square=True)
How do I change the color of the missing values to, for example, black? The color of the missing values should be specified independent of the color scheme of the heatmap, it may not be present in the color scheme.
I tried adding facecolor='black', but that didn't work. The color can be affected by e.g. sns.axes_style("white"), but it isn't clear to me how that could be used to set an arbitrary color.
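One possible approach (a sketch, not a confirmed seaborn feature for masks): masked cells are simply not drawn, so the axes background shows through them; painting that background black colors the missing values independently of the heatmap's colormap.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import seaborn as sns

corr = np.corrcoef(np.random.randn(10, 200))
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask)] = True

# Masked cells are left undrawn, so they show the axes facecolor
ax = sns.heatmap(corr, mask=mask, vmax=.3, square=True)
ax.set_facecolor("black")
```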
Xpath + Scrapy + Python : data point couldn't be scraped
This is the XML structure:
<tr>
  <td>
    <font size="3">
      <strong>Location:</strong> Hiranandani Gardens, Powai
    </font>
  </td>
</tr>
I want to extract : Hiranandani Gardens, Powai
I tried with these:
Both returned an empty list.
Note: we must use the text of the strong tag, i.e., "Location:". Otherwise, since the same XML structure is used in many other places on the site, the query would fetch many unnecessary values besides the desired one.
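One way to anchor on the "Location:" label, sketched here with the stdlib ElementTree so it is self-contained (in Scrapy the equivalent XPath would be along the lines of //strong[contains(text(), "Location:")]/following-sibling::text() — an assumption, since the original expressions aren't shown):

```python
import xml.etree.ElementTree as ET

snippet = """<tr><td><font size="3">
<strong>Location:</strong> Hiranandani Gardens, Powai
</font></td></tr>"""

root = ET.fromstring(snippet)

# Anchor on the strong element's text, then read the text node that
# follows it (the element's "tail" in ElementTree terms)
location = None
for strong in root.iter("strong"):
    if strong.text and "Location:" in strong.text:
        location = strong.tail.strip()
```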
How to test existing Angularjs overall performance (DOM rendering)?
How can I get overall performance results?
Python if-elif-else runtime optimization
I did a search of previously asked questions, but without finding what I need to optimize my code.
For info, I am running Python 2.7 but could change to 3 if needed.
I am converting every pixel of an image, and because of some conditions I have to do it pixel by pixel. So I have a nested for loop with an if-elif-else statement inside, and it takes an awfully long time to run. For an image of 1536 x 2640, the whole code takes ~20 seconds, and 90% of that time is spent inside this double for loop.
I believe there should be a better way to write the below code
for pixel in range(width):
    for row in range(height):
        ADC = img_original[row, pixel]
        if ADC < 84:
            gain = gain1
            offset = offset1
        elif ADC > 153:
            gain = gain3
            offset = offset3
        else:
            gain = gain2
            offset = offset2
        Conv_ADC = int(min(max(ADC * gain + offset, 0), 255))
        img_conv[row, pixel] = Conv_ADC
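Since each pixel's branch depends only on that pixel's value, the loop can be vectorized with NumPy. A sketch with hypothetical gain/offset values standing in for the asker's gain1..gain3 and offset1..offset3:

```python
import numpy as np

# Hypothetical stand-ins for the asker's gain/offset constants
gain1, offset1 = 1.1, -2.0
gain2, offset2 = 1.0, 0.0
gain3, offset3 = 0.9, 5.0

img_original = np.random.randint(0, 256, size=(2640, 1536))

# np.select evaluates both conditions once over the whole image and picks
# the matching gain/offset per pixel; the else branch is `default`
conds = [img_original < 84, img_original > 153]
gain = np.select(conds, [gain1, gain3], default=gain2)
offset = np.select(conds, [offset1, offset3], default=offset2)

# Same clamp to [0, 255] as the loop, applied to the whole array at once
img_conv = np.clip(img_original * gain + offset, 0, 255).astype(np.uint8)
```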
Thanks for the help
Optimization issue using Optim in R
I'm still learning how to use the Optim resource in R. I have the function below whose MSE I would like to minimize by changing two parameters (k and l), but I get this error message: "Error in fn(par, ...) : argument "k" is missing, with no default"
age = c(7,14,21,28,35)
weight = c(0.190,0.500,0.900,1.6,2.25)
fr <- function(l,k,age,weight) sqrt(sum(abs(weight-(0.05*exp((l/k)*(1-exp(-k*age)))))^2))
optim(c(0.21,0.045), fr, age=age, weight=weight)
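The error arises because optim passes both parameters to fr as a single vector in the first argument, so l receives c(0.21, 0.045) and k is never supplied; fr should take one combined parameter argument and unpack it. The same fit can be cross-checked in Python (the main language of this page) with scipy.optimize.minimize, which has the same one-vector calling convention:

```python
import numpy as np
from scipy.optimize import minimize

age = np.array([7, 14, 21, 28, 35])
weight = np.array([0.190, 0.500, 0.900, 1.6, 2.25])

# optim-style objective: both parameters arrive in one vector
def fr(par):
    l, k = par
    model = 0.05 * np.exp((l / k) * (1 - np.exp(-k * age)))
    return np.sqrt(np.sum((weight - model) ** 2))

res = minimize(fr, x0=[0.21, 0.045], method="Nelder-Mead")
```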
Scipy poisson distribution upper limit
Hi, I am generating random numbers using scipy.stats with a Poisson distribution. Below is an example:
import scipy.stats as sct
A = 2.5
Pos = sct.poisson.rvs(A, size=20)
When I print Pos, I get the following numbers:
array([1, 3, 2, 3, 1, 2, 1, 2, 2, 3, 6, 0, 0, 4, 0, 1, 1, 3, 1, 5])
You can see from the array that some numbers, such as 6, are generated.
What I want is to cap the largest number (let's say at 5), i.e., any random number generated using sct.poisson.rvs should be less than or equal to 5.
How can I tweak my code to achieve that? By the way, I am using this in a Pandas DataFrame.
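One sketch of a cap: redraw any value above the limit (rejection sampling), which keeps the relative shape of the distribution below the cap, unlike clipping with np.minimum, which piles all the excess mass onto 5. The helper name is hypothetical:

```python
import numpy as np
import scipy.stats as sct

def truncated_poisson_rvs(mu, size, upper=5, seed=None):
    """Poisson draws with any value above `upper` redrawn until it fits."""
    rng = np.random.default_rng(seed)
    out = sct.poisson.rvs(mu, size=size, random_state=rng)
    while (out > upper).any():
        bad = out > upper
        # redraw only the offending entries
        out[bad] = sct.poisson.rvs(mu, size=int(bad.sum()), random_state=rng)
    return out

Pos = truncated_poisson_rvs(2.5, size=20, seed=0)
```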
How to find the area below and above a point for a given time period using scipy?
I have a pandas dataset with date being the index and a number in the value column. There is one year's worth of data.
How can I find the area (integral) below and above each date's value for the next two months using scipy.integrate?
E.g. If 2009-01-01 has 5 as the value, I am trying to find the integral below and above 5 for the next two months, depending on the points for the next two months.
EDIT: I guess I don't know what to use as the function since the function is unknown and I only have points to use to integrate. I am thinking I may have to integrate for each day and sum up for the two months?
Below is a sample of my dataset:
DATE          Y
2008-01-01    4
2008-01-02    10.4
2008-01-03    2
2008-01-04    9
2008-01-05    4.3
2008-01-06    7
2008-01-07    8.2
2008-01-08    5
2008-01-09    6.5
2008-01-10    2.3
...
2008-02-28    6.6
2008-03-01    7
2008-03-02    5.4
My objective is to start from 2008-01-01 with a value of 4, use that as the reference point, and find the integral below and above 4 (i.e., from 4 to each day's y value) for the next two months. So it will not be a rolling integral but a forward-looking one.
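Since only sampled points are available, the integral has to be a numerical one over the daily values. A sketch (with a hypothetical stand-in series for the actual data): subtract the reference value, split the difference into its positive and negative parts, and integrate each with the trapezoidal rule. This is an approximation, since crossings between samples are not located exactly.

```python
import numpy as np
import pandas as pd
from scipy.integrate import trapezoid

# Hypothetical stand-in for the asker's one-year daily series
idx = pd.date_range("2008-01-01", periods=90, freq="D")
y = pd.Series(np.random.uniform(2, 10, size=len(idx)), index=idx)

ref = y.iloc[0]                                      # reference value, e.g. 4
window = y.loc[: idx[0] + pd.DateOffset(months=2)]   # next two months
diff = window.to_numpy() - ref
t = np.arange(len(window))                           # days as the integration variable

area_above = trapezoid(np.clip(diff, 0, None), t)    # area above the reference
area_below = trapezoid(np.clip(-diff, 0, None), t)   # area below the reference
```

The same windowing can be repeated per date for the forward-looking variant.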
What is contained in the "function workspace" field in .mat file?
I'm working with .mat files which are saved at the end of a program. The command is
save foo.mat
so everything is saved. I'm hoping to determine whether the program changes by inspecting the .mat files. I see that from run to run most of the .mat file is the same, but the field labeled __function_workspace__ differs.
(I am inspecting the .mat files via scipy.io.loadmat -- just loading the files, printing them out as plain text, and then comparing the text. I found that save -ascii in Matlab doesn't put string labels on things, so going through Python is roundabout, but I get labels and that's useful.)
I am trying to determine where these changes originate. Can anyone explain what __function_workspace__ contains? Why would it not be the same from one run of a given program to the next?
The variables I am really interested in are the same, but I worry that I might be overlooking some changes that might come back to bite me. Thanks in advance for any light you can shed on this problem.
EDIT: As I mentioned in a comment, the value of __function_workspace__ is an array of integers. I looked at the elements of the array, and it appears these numbers are ASCII or non-ASCII character codes. I see runs of characters which look like names of variables or functions, so that makes sense. But there are also some characters (non-ASCII) which don't seem to be part of a name, and there are a lot of null (zero) characters too. So aside from seeing names of things in __function_workspace__, I'm not sure what that stuff is exactly.
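One practical way to compare runs is to drop the metadata keys loadmat adds before diffing: everything prefixed with "__", which includes __header__ (containing a creation timestamp that changes every run) and, for files holding function handles, __function_workspace__. A minimal sketch with a round-tripped file (the helper name is hypothetical):

```python
import os
import tempfile
import numpy as np
import scipy.io as sio

def load_data_vars(path):
    """Load a .mat file, keeping only the real variables and dropping
    scipy's '__'-prefixed metadata (__header__, __globals__, and
    __function_workspace__ when present)."""
    loaded = sio.loadmat(path)
    return {k: v for k, v in loaded.items() if not k.startswith("__")}

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "foo.mat")
    sio.savemat(path, {"a": np.arange(3), "b": 2.5})
    data = load_data_vars(path)
```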
Convergence in a shallow neural network
I have one hidden layer with 5 units, an input layer with 10 units, and one scalar output unit. I'm using ReLU activation; at the output layer there's no nonlinearity, just a weighted sum. Instead of using existing code from the net, I thought I'd derive the equations myself. The convergence is baffling, and I'm pretty sure something is wrong.
import numpy as np
import matplotlib.pyplot as plt

d = 10
m = 5
alp = 1e-2

W1 = np.random.randn(m, d)
W2 = np.random.randn(1, m)
a0 = np.random.randn(d, 1)
b1 = np.random.randn(m, 1)
b2 = np.random.randn(1, 1)
y = np.random.randn(1, 1)

def compute_loss(y, a2):
    return np.sum(np.power(y - a2, 2)) / 2

def gradient_step(W1, W2, b1, b2, a1, a2, z1):
    W2 += alp * (y - a2) * a1.transpose()
    b2 += (y - a2)
    a1_deriv = np.array(reluDerivative(z1))
    b1 += (y - a2) * (np.matmul(W2, np.diagflat(a1_deriv))).transpose()
    W1 += (y - a2) * (a0.dot(W2).dot(np.diagflat(a1_deriv))).transpose()
    return W1, W2, b1, b2, a1, a2, z1

def reluDerivative(x):
    x[x <= 0] = 0
    x[x > 0] = 1
    return x

loss_vec = []
num_iterations = 50
for i in range(num_iterations):
    z1 = np.matmul(W1, a0) + b1
    a1 = np.maximum(0, z1)
    a2 = np.matmul(W2, a1) + b2
    loss_vec.append(compute_loss(y, a2))
    W1, W2, b1, b2, a1, a2, z1 = gradient_step(W1, W2, b1, b2, a1, a2, z1)

plt.plot(loss_vec)
Determining whether MATLAB fitglm() model fit converged
There are many MATLAB functions that do some kind of statistical model fitting, such as fitglm(). These model fits can fail to converge for various reasons; this question is NOT about what can cause such failures or about how to prevent them.
My question is: is there a way, other than by looking at the console output, to determine whether a given call to fitglm() converged? The obvious way to do this would seem to be through some property of the output arguments, but the list of properties of the linear model class doesn't seem to contain this basic information.
A minimal example (inspired by this question):
x = [7 0;0 0;8 0;9 0;7 1;8 0;7 0;4 0;7 0;2 0];
y = [0 0 1 1 1 0 0 1 0 0]';
m = fitglm(x,y,'distr','binomial');
Warning: Iteration limit reached.
What, if anything, about the output m tells us that the iteration limit was reached?
Neural network - loss not converging
This network contains an input layer and an output layer, with no nonlinearities; the output is just a linear combination of the input. I am using a regression loss to train the network. I generated some random 1D test data according to a simple linear function, with Gaussian noise added. The problem is that the loss function doesn't converge to zero.
import numpy as np
import matplotlib.pyplot as plt

n = 100
alp = 1e-4

a0 = np.random.randn(100, 1)  # also x
y = 7 * a0 + 3 + np.random.normal(0, 1, (100, 1))
w = np.random.randn(100, 100) * 0.01
b = np.random.randn(100, 1)

def compute_loss(a1, y, w, b):
    return np.sum(np.power(y - w * a1 - b, 2)) / 2 / n

def gradient_step(w, b, a1, y):
    w -= (alp / n) * np.dot((a1 - y), a1.transpose())
    b -= (alp / n) * (a1 - y)
    return w, b

loss_vec = []
num_iterations = 10000
for i in range(num_iterations):
    a1 = np.dot(w, a0) + b
    loss_vec.append(compute_loss(a1, y, w, b))
    w, b = gradient_step(w, b, a1, y)

plt.plot(loss_vec)