# 'Lossy' cumsum in numpy

I have an array `a` of length `N` and need to implement the following operation:

`b[n] = sum_{i=0}^{n} p^(n-i) * a[i]`

with `p` in `[0, 1]`. This equation is a lossy sum, where the first indices in the sum are weighted by a greater loss (`p^(n-i)`) than the last ones. The last index (`i = n`) is always weighted by 1. If `p = 1`, the operation is a simple cumsum:

```
b = np.cumsum(a)
```

If `p != 1`, I can implement this operation in a CPU-inefficient way:

```
import numpy as np

N = len(a)
b = np.empty(np.shape(a))
# I'm using the (-1,-1,-1) idiom for reversed ranges
p_vec = np.power(p, np.arange(N-1, 0-1, -1))
# p_vec[0] = p^{N-1}, p_vec[-1] = 1

for n in range(N):
    b[n] = np.sum(a[:n+1] * p_vec[-(n+1):])
```
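Note that the sum satisfies the recurrence `b[n] = p * b[n-1] + a[n]` (each new step multiplies all previous contributions by one extra factor of `p`), so a plain Python loop already gets this down to O(N). A minimal sketch, with `lossy_cumsum` being a hypothetical helper name:

```python
import numpy as np

def lossy_cumsum(a, p):
    """O(N) lossy cumsum via the recurrence b[n] = p*b[n-1] + a[n]."""
    b = np.empty(len(a))
    acc = 0.0
    for n, x in enumerate(a):
        acc = p * acc + x  # all earlier terms decay by one more factor of p
        b[n] = acc
    return b
```

For `p = 1` the recurrence degenerates to `b[n] = b[n-1] + a[n]`, i.e. an ordinary cumsum.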

Or in a memory-inefficient but vectorized way (IMO it is CPU-inefficient too, since a lot of work is wasted):

```
a_idx = np.reshape(np.arange(N+1), (1, N+1)) - np.reshape(np.arange(N-1, 0-1, -1), (N, 1))
a_idx = np.maximum(0, a_idx)

# For N=4, a_idx looks like this:
# [[0, 0, 0, 0, 1],
#  [0, 0, 0, 1, 2],
#  [0, 0, 1, 2, 3],
#  [0, 1, 2, 3, 4]]

a_ext = np.concatenate(([0], a), axis=0)    # len(a_ext) = N + 1
p_vec = np.power(p, np.arange(N, 0-1, -1))  # len(p_vec) = N + 1

b = np.dot(a_ext[a_idx], p_vec)
```
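As a quick sanity check, the vectorized version can be compared against the definition directly (small `N` and the seed below are just illustrative values, not part of the question):

```python
import numpy as np

np.random.seed(0)  # illustrative values only
N, p = 5, 0.8
a = np.random.randn(N)

# O(N^2) reference straight from the definition: b[n] = sum_i p^(n-i) * a[i]
ref = np.array([sum(p**(n - i) * a[i] for i in range(n + 1)) for n in range(N)])

# vectorized version, using broadcasting instead of explicit reshapes
a_idx = np.maximum(0, np.arange(N + 1) - np.arange(N - 1, -1, -1)[:, None])
a_ext = np.concatenate(([0.0], a))
p_vec = p ** np.arange(N, -1, -1)
b = a_ext[a_idx] @ p_vec

assert np.allclose(b, ref)
```

The `a_ext[a_idx]` gather builds an `(N, N+1)` matrix, which is where the extra memory and wasted multiplications (all the padded zeros) come from.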

Is there a better way to achieve this 'lossy' cumsum?

What you want is an IIR filter; you can use `scipy.signal.lfilter()`. Here is the code:

```
import numpy as np

N = 10
p = 0.8
np.random.seed(0)
x = np.random.randn(N)
y = np.empty_like(x)
p_vec = np.power(p, np.arange(N-1, 0-1, -1))
for n in range(N):
    y[n] = np.sum(x[:n+1] * p_vec[-(n+1):])
y
```

the output:

```
array([1.76405235, 1.81139909, 2.42785725, 4.183179  , 5.21410119,
       3.19400307, 3.50529088, 2.65287549, 2.01908154, 2.02586374])
```

By using `lfilter()`:

```
from scipy import signal

y = signal.lfilter([1], [1, -p], x)
y
```

the output:

```
array([1.76405235, 1.81139909, 2.42785725, 4.183179  , 5.21410119,
       3.19400307, 3.50529088, 2.65287549, 2.01908154, 2.02586374])
```