Improve for loop efficiency

I'm trying to convert 12,000 JSON files, containing web event data, into a single pandas DataFrame. The code below is taking too long to run. Any ideas on how to improve its efficiency?

Example of a loaded JSON file:

{'$schema': 12,
 'amplitude_id': None,
 'app': '',
 'city': ' ',
 'device_carrier': None,
 'dma': ' ',
 'event_time': '2018-03-12 22:00:01.646000',
 'group_properties': {'[Segment] Group': {'': {}}},
 'ip_address': ' ',
 'os_version': None,
 'paying': None,
 'platform': 'analytics-ruby',
 'processed_time': '2018-03-12 22:00:06.004940',
 'server_received_time': '2018-03-12 22:00:02.993000',
 'user_creation_time': '2018-01-12 18:57:20.212000',
 'user_id': ' ',
 'user_properties': {'initial_referrer': '',
  'last_name': '',
  'organization_id': 2},
 'uuid': ' ',
 'version_name': None}

Thanks!

import os
import pandas as pd

data = pd.DataFrame()

for filename in os.listdir('path'):
    # os.listdir returns bare file names, so join them back onto the directory
    filepath = os.path.join('path', filename)
    # read_json accepts a path directly; lines=True parses newline-delimited JSON
    file_read1 = pd.read_json(filepath, lines=True)
    data = data.append(file_read1, ignore_index=True)
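For reference, the slowdown comes from calling data.append inside the loop: each call copies every row accumulated so far, so total work grows quadratically with the number of files (and DataFrame.append was removed entirely in pandas 2.0). A common fix is to collect the per-file frames in a list and concatenate once at the end. A minimal sketch, assuming (as in the code above) that each file is newline-delimited JSON in a directory named 'path':

import os
import pandas as pd

frames = []
for filename in os.listdir('path'):
    filepath = os.path.join('path', filename)
    # parse each file into its own small DataFrame; no row copying happens here
    frames.append(pd.read_json(filepath, lines=True))

# a single concatenation replaces 12,000 incremental copies
data = pd.concat(frames, ignore_index=True)

If parsing itself then becomes the bottleneck, loading each file into plain Python dicts with the json module and building one DataFrame from the full list of records at the end (pd.json_normalize can also flatten nested fields like 'user_properties') may be faster still.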