Control file sizes when creating a table in Drill

I am trying to convert a table stored on the Hadoop DFS as Parquet into TSV format, using sqlline connected to Drill.

The SQL I am using to convert and store the table is:

alter session set `store.format`='tsv';
create table dfs.tmp.`mytable_export`
as select  
    ID, NAME, STATUS, `GROUP`, from_unixtime(etl_date/1000) as etl_date
from dfs.root.`/location/of/table/to/convert/mytable`;

which I run in a sqlline session like this:

/opt/mapr/drill/drill-1.8.0/bin/sqlline \
    -u jdbc:drill:zk=mnode01:5181,mnode02:5181,mnode03:5181 \
    -n myusername \
    -p mypassword \
    --run=<script file containing the SQL above>
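
Concretely, the statements are saved to a script file and the path to that file is what --run points at. A minimal sketch of the wrapper, where the /tmp/mytable_export.sql path is just a placeholder of mine:

cat > /tmp/mytable_export.sql <<'EOF'
alter session set `store.format`='tsv';
create table dfs.tmp.`mytable_export` as
select ID, NAME, STATUS, `GROUP`, from_unixtime(etl_date/1000) as etl_date
from dfs.root.`/location/of/table/to/convert/mytable`;
EOF

/opt/mapr/drill/drill-1.8.0/bin/sqlline \
    -u jdbc:drill:zk=mnode01:5181,mnode02:5181,mnode03:5181 \
    -n myusername -p mypassword \
    --run=/tmp/mytable_export.sql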

The problem is that when I do this, the resulting TSV files are poorly balanced in size. Checking the converted files, I see:

[mapr@mnode02 mytable_export]$ ls -l
total 486470
-rwxr-xr-x 1 mapr mapr 105581719 Oct 19 10:25 1_0_0.tsv
-rwxr-xr-x 1 mapr mapr 155385226 Oct 19 10:25 1_1_0.tsv
-rwxr-xr-x 1 mapr mapr 237176680 Oct 19 10:25 1_2_0.tsv
-rwxr-xr-x 1 mapr mapr       279 Oct 19 10:25 1_3_0.tsv

Notice that 1_3_0.tsv is only a fraction of the size of the others, while 1_2_0.tsv is about twice the size of the rest. My question, then: is there a way to control the size/distribution of the TSV files being created?
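
For example, I assume that CTAS writes one file per minor fragment, so capping the query's parallelism would at least cap the number of output files (at the cost of a single-threaded write), roughly like the sketch below (the mytable_export_single table name is just a placeholder); but that is an assumption on my part and it limits the file count rather than balancing the sizes:

alter session set `store.format` = 'tsv';
-- assumption: with the query capped to one minor fragment, CTAS should emit a single tsv file
alter session set `planner.width.max_per_query` = 1;

create table dfs.tmp.`mytable_export_single` as
select ID, NAME, STATUS, `GROUP`, from_unixtime(etl_date/1000) as etl_date
from dfs.root.`/location/of/table/to/convert/mytable`;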

** Note: Ultimately I am trying to use sqoop export to push the table, in TSV format, to a Microsoft SQL Server DB, which is very slow, I think because of the file size imbalance. I can't alleviate the slowdown with --batch or --direct, because sqoop apparently does not support those options for MS SQL Server, and there does not seem to be a way to sqoop export Parquet directly to SQL Server.
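
For reference, the export step looks roughly like the following; the server, database, and target table names are placeholders of mine, and the export dir is wherever dfs.tmp.`mytable_export` lands on the filesystem:

sqoop export \
    --connect "jdbc:sqlserver://sqlserver-host:1433;databaseName=mydatabase" \
    --username myusername \
    --password mypassword \
    --table mytable \
    --export-dir /tmp/mytable_export \
    --input-fields-terminated-by '\t'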