Unexplained MemoryError when running code

I'm testing some code, and every time I run it I get a MemoryError. I've even used the line below in the Anaconda prompt, and I really don't know what to do at this point. The machine is 64-bit (I'm not sure whether the Python install itself is 32-bit or 64-bit), and Task Manager says only 35% of the available memory is in use.
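A quick check I can run to see whether the interpreter itself is 32-bit (as far as I know, a 32-bit process on Windows is capped at roughly 2 GB no matter how much RAM is installed, which would explain a MemoryError at 35% usage):

import struct, sys

# A 32-bit Python prints 32 here even on a 64-bit OS
print(struct.calcsize('P') * 8, 'bit Python')
print(sys.version)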

What I ran in the Anaconda prompt:

jupyter notebook --notebook-dir='D:/' --NotebookApp.iopub_data_rate_limit=1.6e10

My code:


import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.ensemble import IsolationForest

# Load the data and pull out the single feature column
df = pd.read_csv('D:\\Project\\database\\4-Final\\Final After.csv', low_memory=True)
X = df['Power_kW'].values.reshape(-1, 1)

# Fit an isolation forest; predict() returns -1 for outliers
iso_forest = IsolationForest(behaviour='new', n_estimators=300,
                             contamination='auto', random_state=42)
iso_forest.fit(X)
isof_outliers = iso_forest.predict(X)
isoF_outliers_values = df[isof_outliers == -1]
isoF_outliers_values
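One workaround I'm considering is predicting in chunks, since the temporary arrays inside predict() seem to scale with the number of samples. This is just a sketch; chunk_size is an arbitrary value I picked:

import numpy as np

chunk_size = 100000  # arbitrary; smaller chunks mean smaller temporary arrays
preds = np.concatenate([
    iso_forest.predict(X[i:i + chunk_size])
    for i in range(0, len(X), chunk_size)
])
isoF_outliers_values = df[preds == -1]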

The error:

<ipython-input-1-daa0d1f74aa1> in <module>
     18 
     19 
---> 20 iso_forest = iso_forest.fit(X)
     21 isof_outliers = iforest.predict(X)
     22 isoF_outliers_values = df[iforest.predict(X) == -1]

C:\ProgramData\Anaconda3\lib\site-packages\sklearn\ensemble\iforest.py in fit(self, X, y, sample_weight)
    285         # else, define offset_ wrt contamination parameter, so that the
    286         # threshold_ attribute is implicitly 0 and is not needed anymore:
--> 287         self.offset_ = np.percentile(self.score_samples(X),
    288                                      100. * self._contamination)
    289 

C:\ProgramData\Anaconda3\lib\site-packages\sklearn\ensemble\iforest.py in score_samples(self, X)
    381 
    382         n_samples_leaf = np.zeros((n_samples, self.n_estimators), order="f")
--> 383         depths = np.zeros((n_samples, self.n_estimators), order="f")
    384 
    385         if self._max_features == X.shape[1]:

MemoryError:
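If I'm reading the traceback right, score_samples() allocates two float64 arrays of shape (n_samples, n_estimators), so a rough estimate of the memory that one step needs for my data (8 bytes per float64 value) is:

n_rows = len(df)      # my row count
n_estimators = 300
# two (n_samples, n_estimators) float64 arrays, 8 bytes per value
bytes_needed = 2 * n_rows * n_estimators * 8
print('%.1f GB' % (bytes_needed / 1e9))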