Stopping the calling script from exiting after sbatch transfers the job to the controller

I am attempting to automate the running of an analysis script. To do this, I have a top-level script that submits my analysis script with sbatch. However, once the analysis script is submitted with sbatch, the top-level script terminates. Is there a way to prevent this? Thank you in advance for any help.

Due to a limit on the maximum number of jobs I can have submitted at once, using job dependencies is not an option. Using sbatch's --wait flag is also not an option, because the first sbatch call would have to finish before the remaining sbatch calls are made, so only a portion of the desired runs would be loaded onto the controller at any one time. I also do not have sudo privileges on the cluster I work on, and I have searched the internet to no avail.
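
To make the --wait issue concrete, here is a rough sketch (not my actual script) of how the inner submissions would behave with it; each sbatch --wait call blocks until its job terminates, so later submissions are delayed rather than all being queued together:

# Sketch only: sbatch --wait does not return until the submitted job finishes,
# so each pass of the loop waits for the previous job before submitting the next
for xlist in "${splitlist[@]}"; do
  sbatch --wait ~/Promoters/scripts/bash/Paraclu.sh 2 "$xlist" "$sample" "$arrsize"
  # execution only reaches this point once the job above has completed
done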

jobnumber=$(squeue -h -r -u <username> -o '%i' | wc -l)
samplelist=(1 2 3 4)
splitlist=(xaa xab xac xad xae xaf xag xah xai xaj)

### Currently the script is closing after the first sbatch call, seemingly due to an exit once the job successfully transfers to the controller
for sample in "${samplelist[@]}"; do
  for xlist in "${splitlist[@]}"; do
    # wc -l < file gives just the line count, without the file name appended
    arrsize=$(wc -l < "$sample/Ctg5PrBed/$xlist")
    sbatch ~/Promoters/scripts/bash/Paraclu.sh 2 "$xlist" "$sample" "$arrsize"
    # Refresh the queued-job count, then throttle while more than 100 jobs are queued
    jobnumber=$(squeue -h -r -u aboyd003 -o '%i' | wc -l)
    while [ "$jobnumber" -gt 100 ]; do
      sleep 3h
      jobnumber=$(squeue -h -r -u aboyd003 -o '%i' | wc -l)
    done
  done
done
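
In case it is useful for diagnosing this, a rough way to see exactly where the top script stops (assuming it runs under bash; top_script.sh is just a stand-in for the real file name) would be:

# Trace the top-level script: each command is printed before it runs,
# so the last line of output shows where the script actually stops
bash -x top_script.sh

# Alternatively, print sbatch's exit status right after the call inside the loop
echo "sbatch exit status: $?"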