Rows missing while reading a file that was unloaded from Redshift

I unloaded a set of 200 million records from Redshift to S3 using SQLWorkbench. I got a message saying "Unload complete, 2,00,00,00,000 records complete". However, when I download this file from S3 and open it, there are only 40 million rows. There were no errors at any point. I am confused and unable to proceed because of this. Does anyone know what could be causing this issue?


1 answer

  • answered 2022-01-13 05:19 Bill Weiner

    An unload of this size will not land in a single file. Each unloaded file is limited to 6.2 GB, or smaller if the MAXFILESIZE parameter is set. Also, if PARALLEL is ON (the default), each slice in Redshift writes its own set of files to S3. I expect you are only looking at one of the many files created by the UNLOAD. Each file has a slice number and a part number appended to the file base name you provided in the UNLOAD statement.
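    To see why counting one file undercounts, here is a minimal local sketch. The file names (`export_0000_part_00`, etc.) are hypothetical stand-ins for the slice/part suffixes UNLOAD appends; the point is that the true row count is the sum across every part file under the output prefix, not the count in any single file.

    ```python
    import os
    import tempfile

    def total_rows(directory, prefix):
        """Sum line counts across all part files sharing a prefix."""
        total = 0
        for name in sorted(os.listdir(directory)):
            if name.startswith(prefix):
                with open(os.path.join(directory, name)) as f:
                    total += sum(1 for _ in f)
        return total

    # Simulate three part files of 4, 3, and 5 rows each,
    # as if PARALLEL ON had split one unload across slices.
    with tempfile.TemporaryDirectory() as d:
        for i, n in enumerate((4, 3, 5)):
            path = os.path.join(d, f"export_000{i}_part_00")
            with open(path, "w") as f:
                f.writelines(f"row{j}\n" for j in range(n))
        counted = total_rows(d, "export_")

    print(counted)  # 12 rows total, though any single file holds far fewer
    ```

    In practice you would download (or `SELECT COUNT(*)` over) every object under the S3 prefix from the UNLOAD, then sum; adding the MANIFEST option to the UNLOAD also gives you a JSON list of all files written.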
