Crate killing query and not returning a useful error?

I have some code that queries a Crate database over and over to pull down a large number of records (~72 million) chunk by chunk.

This is the query:

lookupQuery = f"select * FROM myTable WHERE layer = 'mappings' ORDER BY geohash LIMIT 100000 OFFSET {offset * 100000}"
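
This runs inside a loop in a Python app that increments offset on every iteration. Here is a stripped-down sketch of the loop (the connection details and the per-chunk processing are simplified placeholders, not my exact code):

from crate import client

# Simplified sketch: the host and the per-chunk processing step are placeholders.
connection = client.connect("http://localhost:4200")
geoCursor = connection.cursor()

offset = 0
while True:
    lookupQuery = f"select * FROM myTable WHERE layer = 'mappings' ORDER BY geohash LIMIT 100000 OFFSET {offset * 100000}"
    geoCursor.execute(lookupQuery)
    rows = geoCursor.fetchall()
    if not rows:
        break
    # ... process the 100,000-row chunk here ...
    offset += 1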

The problem I'm having is that partway through the loop (somewhere between the 14 million and 35 million record mark) it errors out with the following:

processed mapping chunk #39600000 records in 33.59479522705078 seconds.  7544.382711172104 total run time
processed mapping chunk #39700000 records in 33.788419008255005 seconds.  7578.171201467514 total run time

Traceback (most recent call last):
  File "demographics.py", line 46, in <module>
    hashData = loadHashData();
  File "demographics.py", line 19, in loadHashData
    geoCursor.execute(lookupQuery)
  File "/usr/local/lib/python3.6/dist-packages/crate/client/cursor.py", line 54, in execute
    bulk_parameters)
  File "/usr/local/lib/python3.6/dist-packages/crate/client/http.py", line 323, in sql
    content = self._json_request('POST', self.path, data=data)
  File "/usr/local/lib/python3.6/dist-packages/crate/client/http.py", line 435, in _json_request
    _raise_for_status(response)
  File "/usr/local/lib/python3.6/dist-packages/crate/client/http.py", line 187, in _raise_for_status
    error_trace=error_trace)
crate.client.exceptions.ProgrammingError: SQLActionException[JobKilledException: Job killed]
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 63, in apport_excepthook
    from apport.fileutils import likely_packaged, get_recent_crashes
  File "/usr/lib/python3/dist-packages/apport/__init__.py", line 5, in <module>
    from apport.report import Report
  File "/usr/lib/python3/dist-packages/apport/report.py", line 30, in <module>
    import apport.fileutils
  File "/usr/lib/python3/dist-packages/apport/fileutils.py", line 23, in <module>
    from apport.packaging_impl import impl as packaging
  File "/usr/lib/python3/dist-packages/apport/packaging_impl.py", line 20, in <module>
    import apt
  File "/usr/lib/python3/dist-packages/apt/__init__.py", line 23, in <module>
    import apt_pkg
ModuleNotFoundError: No module named 'apt_pkg'

Original exception was:
Traceback (most recent call last):
  File "demographics.py", line 46, in <module>
    hashData = loadHashData();
  File "demographics.py", line 19, in loadHashData
    geoCursor.execute(lookupQuery)
  File "/usr/local/lib/python3.6/dist-packages/crate/client/cursor.py", line 54, in execute
    bulk_parameters)
  File "/usr/local/lib/python3.6/dist-packages/crate/client/http.py", line 323, in sql
    content = self._json_request('POST', self.path, data=data)
  File "/usr/local/lib/python3.6/dist-packages/crate/client/http.py", line 435, in _json_request
    _raise_for_status(response)
  File "/usr/local/lib/python3.6/dist-packages/crate/client/http.py", line 187, in _raise_for_status
    error_trace=error_trace)
crate.client.exceptions.ProgrammingError: SQLActionException[JobKilledException: Job killed]

I queried the database logs and the error is just as useless:

  {
    "ended": 1515839546855,
    "error": "Job killed",
    "id": "6ede22c0-b606-42f5-bb78-5f66335d4911",
    "started": 1515839520655,
    "stmt": "select * FROM myTable WHERE layer = 'mappings' ORDER BY geohash LIMIT 100000 OFFSET 39700000",
    "username": "crate"
  },

Does anyone know why this might be happening? Often when I've seen a job killed in the past it's because the system ran out of memory - could that be what's going on?
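
If memory is the likely culprit, is something like this a reasonable way to watch heap usage on the nodes while the loop runs? (This is just my guess that sys.nodes is the right system table to look at:)

from crate import client

# Guess: poll per-node heap usage via sys.nodes while the chunked loop is running.
conn = client.connect("http://localhost:4200")
cursor = conn.cursor()
cursor.execute("SELECT name, heap['used'], heap['max'] FROM sys.nodes")
for name, used, heap_max in cursor.fetchall():
    print(f"{name}: {used / heap_max:.0%} of heap used")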