Coding events that will happen within n periods for logistic regression, in pandas

I have a pandas DataFrame with millions of accounts. It has a column titled "eventHappens" that equals 1 in the period in which an event happens. Here is an example:

import pandas as pd
account = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3]
period = [1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 1, 2, 3, 4, 5, 1, 2, 3]
eventHappens = [0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0]
willHappenIn2 = [0, 1, 1, 1, -1, -1, -1, -1, -1, -1, 0, 0, 1, 1, 1, 0, 0, 0]

df = pd.DataFrame(
    {'account': account,
     'period': period,
     'eventHappens': eventHappens
    })

print(df)

    account  period  eventHappens
0         1       1             0
1         1       2             0
2         1       3             0
3         1       4             1
4         1       5             0
5         1       6             0
6         1       8             0
7         1       9             0
8         1      10             1
9         1      11             0
10        2       1             0
11        2       2             0
12        2       3             0
13        2       4             0
14        2       5             1
15        3       1             0
16        3       2             0
17        3       3             0

I want to create a new column "willHappenIn2" that is 0 everywhere except in the period of each account's first event and the two periods before it, which should be coded as 1. Periods after the first event should be coded as -1 (I keep the -1's in case I want to examine post-event characteristics of the data later). Here is my desired output. Is there a way to do this without looping through every account?

    account  period  eventHappens  willHappenIn2
0         1       1             0              0
1         1       2             0              1
2         1       3             0              1
3         1       4             1              1
4         1       5             0             -1
5         1       6             0             -1
6         1       8             0             -1
7         1       9             0             -1
8         1      10             1             -1
9         1      11             0             -1
10        2       1             0              0
11        2       2             0              0
12        2       3             0              1
13        2       4             0              1
14        2       5             1              1
15        3       1             0              0
16        3       2             0              0
17        3       3             0              0
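
For reference, one possible vectorized sketch (my own, not from the thread) that produces this coding, using each row's position within its account group; `pos`, `first`, `dist`, and `out` are names introduced here for illustration:

```python
import pandas as pd

account = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3]
period = [1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 1, 2, 3, 4, 5, 1, 2, 3]
eventHappens = [0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0]
df = pd.DataFrame({'account': account, 'period': period, 'eventHappens': eventHappens})

# Position of each row within its account group (0-based)
pos = df.groupby('account').cumcount()
# Position of the first event within each account (NaN for accounts with no event)
first = pos.where(df['eventHappens'] == 1).groupby(df['account']).transform('min')

dist = first - pos                   # rows before the first event have dist > 0
out = pd.Series(0, index=df.index)
out[dist < 0] = -1                   # after the first event
out[(dist >= 0) & (dist <= 2)] = 1   # at the event or up to 2 rows before it
# NaN comparisons are False, so accounts with no event stay 0
print(out.tolist())
# [0, 1, 1, 1, -1, -1, -1, -1, -1, -1, 0, 0, 1, 1, 1, 0, 0, 0]
```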

Update:

An answer below posited the following solution (the output shown in the answer matches what I want). However, when I run it on my machine the output doesn't match my desired result. Any suggestions?

df.loc[df.loc[df.eventHappens==1].groupby('account').eventHappens.idxmax(), 'key'] = 1

s1 = df.key.where(df.key==1).groupby(df.account).bfill(2)    # group-wise backfill pushes the 1 back, with a limit of 2
s2 = (-df.key.where(df.key==1)).groupby(df.account).ffill()  # negate the key and forward fill to mark post-event rows with -1

s1.fillna(s2).fillna(0)
Out[110]: 
0     0.0
1    -1.0
2    -1.0
3     1.0
4    -1.0
5     1.0
6     0.0
7     0.0
8     1.0
9     0.0
10    0.0
11    0.0
12    1.0
13    1.0
14    1.0
15    0.0
16    0.0
17    0.0
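
One thing worth checking (an assumption on my part, not confirmed in the thread) is the pandas version: the `limit` argument of grouped `bfill` has behaved differently across releases, and passing it explicitly as a keyword on a tiny reproduction makes the behavior easy to verify. Here `s` reconstructs the `key` column for a single hypothetical account:

```python
import pandas as pd

# Hypothetical key column for one account: NaN everywhere except the event row.
s = pd.Series([None, None, None, 1.0, None, None], name='key')
grp = pd.Series([1, 1, 1, 1, 1, 1])

# Explicit keyword avoids any positional-argument ambiguity between versions.
filled = s.groupby(grp).bfill(limit=2)
print(filled.fillna(0).tolist())
# [0.0, 1.0, 1.0, 1.0, 0.0, 0.0]
```

If this small check does not produce two backfilled 1's, the installed pandas version is the likely culprit.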

Update 2: The following worked. I had to replace the bfill step with a per-group .apply() that calls replace() with method='bfill'. It's not as pretty as the first answer (which served as my guidance, thanks!), but it works on my machine.

df["willHappenIn2_step1"] = df.eventHappens.groupby(df['account']).cumsum().astype(int)        # running event count per account
df["willHappenIn2_step2"] = df.willHappenIn2_step1.groupby(df['account']).cumsum().astype(int) # grows past 1 after the first event
df["willHappenIn2_step3"] = df.willHappenIn2_step2
df.loc[df.willHappenIn2_step2 > 1, "willHappenIn2_step3"] = -1                                 # post-event rows become -1
df['Final_OutPut'] = df["willHappenIn2_step3"].groupby(df.account).apply(
    lambda x: x.replace(to_replace=0, method='bfill', limit=2))                                # pull the 1 back up to two periods
df["willHappenIn2Desired"] = willHappenIn2
print(df)
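
A portability note (my addition, not from the thread): `Series.replace(..., method='bfill')` is deprecated in recent pandas versions, so the `.apply()` step above may warn or fail on newer installs. An equivalent per-group transformation can be sketched with `mask` plus `bfill`; `x` below is one hypothetical account's step-3 values:

```python
import pandas as pd

# One account's step-3 values: 0 before the event, 1 at it, -1 after.
x = pd.Series([0, 0, 0, 1, -1, -1, -1, -1, -1, -1])

# Equivalent of x.replace(to_replace=0, method='bfill', limit=2):
# hide the zeros, backfill at most 2 of them from the next non-zero, restore 0.
filled = x.mask(x == 0).bfill(limit=2).fillna(0).astype(int)
print(filled.tolist())
# [0, 1, 1, 1, -1, -1, -1, -1, -1, -1]
```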

1 answer

  • answered 2018-10-11 20:01 W-B

    We filter down to the event rows first and then use idxmax to create the new key: a second 1 in the same group would not change the output, so we only need the index of the first row whose value is 1.

    df.loc[df.loc[df.eventHappens==1].groupby('account').eventHappens.idxmax(), 'key'] = 1
    s1 = df.key.where(df.key==1).groupby(df.account).bfill(2)    # group-wise backfill pushes the 1 back, with a limit of 2
    s2 = (-df.key.where(df.key==1)).groupby(df.account).ffill()  # negate the key and forward fill to mark post-event rows with -1
    s1.fillna(s2).fillna(0)
    Out[219]: 
    0     0.0
    1     1.0
    2     1.0
    3     1.0
    4    -1.0
    5    -1.0
    6    -1.0
    7    -1.0
    8    -1.0
    9    -1.0
    10    0.0
    11    0.0
    12    1.0
    13    1.0
    14    1.0
    15    0.0
    16    0.0
    17    0.0
    Name: key, dtype: float64
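
To make the first step concrete, here is a small check (reconstructing just the relevant columns from the question) of why filtering before idxmax picks the first event row per account. On the filtered frame every value is 1, and idxmax keeps the first index label on ties; accounts with no event simply drop out of the result:

```python
import pandas as pd

account = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3]
eventHappens = [0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0]
df = pd.DataFrame({'account': account, 'eventHappens': eventHappens})

# Index label of the FIRST 1 per account; account 3 (no event) is absent.
first_idx = df.loc[df.eventHappens == 1].groupby('account').eventHappens.idxmax()
print(first_idx.tolist())
# [3, 14]
```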