Is there an efficient way to find averages of boxes within a numpy array?


I have a bunch of 450x450 images, and I am looking at each color channel individually. So what I have is a 1D numpy array of length 450*450*numImages. I want to take a box of some size, say 3x3 or 5x5 around each pixel and get the average of those pixel values. So by the end I will still have a 450*450*numImages array, it will just now hold those averages. Imagine something like this where we have one 14x14 image and we're just looking at one of the color channels and taking a 3x3 box average:
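For context, a flat channel array laid out as described can be viewed as a stack of 2D images without copying. A minimal sketch (the names and the image count here are illustrative, not from my actual code):

```python
import numpy as np

num_images = 3  # illustrative value
flat = np.arange(num_images * 450 * 450, dtype=float)  # stand-in for real channel data

# reshape returns a view, not a copy: stack[n] is the n-th 450x450 image.
stack = flat.reshape(num_images, 450, 450)
```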

[Figure: 3x3 box average applied to one channel of a 14x14 image]

Current Implementation

    def boxAverage(imageArr, iWidth, box_scale=1):
        # iWidth is the image row width (450 here); box_scale=1 gives a 3x3 box.
        result = [0.0] * len(imageArr)
        for i in range(len(imageArr)):
            total, count = 0.0, 0
            # +1 so the box is covered fully: -box_scale..box_scale inclusive.
            for j in range(-box_scale, box_scale + 1):
                for k in range(-box_scale * iWidth, box_scale * iWidth + 1, iWidth):
                    idx = i + k + j
                    # Explicit bounds check: negative indices would silently wrap.
                    if 0 <= idx < len(imageArr):
                        total += imageArr[idx]
                        count += 1
            result[i] = total / count
        return result

When box_scale=1 the box I average over is 3x3, when box_scale=2 it is 5x5, and so on (the box width is 2*box_scale + 1).


This does not scale well: as I work with more and more images it gets very slow. Is there a way to do this more efficiently, using something like numpy? I appreciate any suggestions you might have!
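One way this could be vectorized (a sketch, assuming SciPy is available and each channel has been reshaped to a 2D array; `box_average_fast` is a name I made up): `scipy.ndimage.uniform_filter` computes a box mean in compiled code, and filtering a mask of ones alongside the data reproduces the loop version's behavior of averaging only the in-bounds neighbors near the edges.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box_average_fast(image, box_scale=1):
    """Box-average a 2D image; box_scale=1 -> 3x3, box_scale=2 -> 5x5, etc."""
    size = 2 * box_scale + 1
    img = image.astype(float)
    # mode='constant' pads with zeros. uniform_filter divides by size**2,
    # so taking the ratio of the two filtered arrays cancels that factor
    # and yields sum(valid neighbors) / count(valid neighbors).
    sums = uniform_filter(img, size=size, mode='constant', cval=0.0)
    counts = uniform_filter(np.ones_like(img), size=size, mode='constant', cval=0.0)
    return sums / counts
```

For a 3D stack of images, `size=(1, size, size)` should filter each image independently rather than averaging across neighboring images, since `uniform_filter` accepts a per-axis size.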