Minimizing chi-squared in SciPy/Matplotlib
I am trying to use chi-squared to find the best fit for my model. The model depends on two fitting parameters, p0 and r0, as seen in the code below. I need to find the values of these two parameters for which the discrepancy between the model and the observed data is minimal. I know how to minimize a simple scalar function of a couple of numbers, but I don't know how to set up the minimization when the model and data are arrays. Here is my code (you can ignore the bottom half, which just plots the graph):
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
from scipy.optimize import minimize
#number of measurements
am = 27
#Observed data
v_obs = [
#Some data
]
#Radii (kpc) at which v_obs was measured (elided here, like v_obs)
r = [
#Some data
]
#Fitting parameters
p0 = 2.95*10**8
r0 = 1.8
#Integral of the density profile (enclosed mass)
def integrand(r, p0, r0):
    return (p0 * r0**3)/((r + r0)*(r**2 + r0**2)) * 4*np.pi*r**2

integrals = []
for i in r:
    integrals.append(quad(integrand, 0, i, args=(p0, r0)))
#Function/model: v(r) = sqrt(G*M(<r)/r), with G = 4.302e-6 kpc*(km/s)^2/Msun
functions = []
for x in range(0, am):
    k = integrals[x][0]
    i = r[x]
    functions.append(np.sqrt((4.302*10**(-6)*k)/i))
#Plot the function/model
plt.plot(r,functions,color='red')
#Gas velocity (Don't need to worry about this)
v_gas = (
#Some Data
)
#Error bars for the observations (errors vary per measurement)
errors = [3.62, 4.31, 3.11, 5.5, 3.9, 3.5, 2.7, 2.5, 2.3, 2.1, 2.3,
          2.6, 3.1, 3.2, 3.2, 3.1, 2.9, 2.8, 3.3, 3.9, 4.4, 4.8,
          4.4, 3.35, 3.95, 4.44, 5.36]
plt.errorbar(r, v_obs, yerr=errors, capsize=2, capthick=2, linestyle='None', color='green')
#Velocity vs. radius
plt.scatter(r, v_obs, s=8)
plt.plot(r, v_gas)
ax = plt.gca()
ax.text(.2, 65, 'ps ' + str(p0) + '\nrs ' + str(r0), style='italic')
#bbox={'facecolor':'red', 'alpha':0.5, 'pad':10})
#Title and axis labels
plt.title("NGC2976 Rotational Curve")
plt.xlabel("Radius (kpc)")
plt.ylabel("Velocity (km/s)")
#fig = plt.figure()
#ax = fig.add_subplot(1,1,1)
# y: major ticks every 10, minor every 2; x: major every 0.3, minor every 0.06
majory_ticks = np.arange(0, 120, 10)
minory_ticks = np.arange(0, 120, 2)
majorx_ticks = np.arange(0,3,.3)
minorx_ticks = np.arange(0,3,.06)
ax.set_yticks(majory_ticks)
ax.set_yticks(minory_ticks, minor=True)
ax.set_xticks(majorx_ticks)
ax.set_xticks(minorx_ticks, minor=True)
# and a corresponding grid
ax.grid(which='both')
# or if you want different settings for the grids:
ax.grid(which='minor', alpha=0.2)
ax.grid(which='major', alpha=0.5)
plt.show()
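To show what I mean by minimizing over arrays, here is a minimal self-contained sketch of the kind of thing I am after, using scipy.optimize.minimize. The radii, errors, and "true" parameters below are hypothetical stand-ins for my real measurements, not the actual data; the model function is the same enclosed-mass integral as above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

# Hypothetical stand-in data: radii, per-point errors, and "observed"
# velocities generated from assumed true parameters (not the real data).
G = 4.302*10**(-6)  # gravitational constant in kpc*(km/s)^2/Msun
r = np.linspace(0.1, 2.7, 27)

def integrand(rr, p0, r0):
    return (p0 * rr0 if False else p0 * r0**3)/((rr + r0)*(rr**2 + r0**2)) * 4*np.pi*rr**2

def v_model(params, r):
    """Model velocity at each radius for parameters (p0, r0)."""
    p0, r0 = params
    mass = np.array([quad(integrand, 0, ri, args=(p0, r0))[0] for ri in r])
    return np.sqrt(G * mass / r)

p_true = np.array([2.95e8, 1.8])
v_obs = v_model(p_true, r)          # noise-free synthetic "observations"
errors = np.full(r.shape, 3.0)      # per-point measurement errors

def chisq(params):
    # Chi-squared: sum over all data points of ((obs - model)/error)^2
    return np.sum(((v_obs - v_model(params, r)) / errors)**2)

# Nelder-Mead copes better than the default method with the very
# different scales of p0 (~1e8) and r0 (~1).
x0 = np.array([2.0e8, 1.5])         # initial guess
res = minimize(chisq, x0, method='Nelder-Mead')
print(res.x, chisq(res.x))
```

With the real data you would replace r, v_obs, and errors with the measured arrays. An alternative is scipy.optimize.curve_fit with sigma=errors, which performs the same weighted least-squares fit.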