Getting error "User class threw exception: org.apache.spark.SparkException: Job aborted." when running a spark job with scala

I've scheduled a daily Spark job that uses dynamic executor allocation. The job runs fine on some days and fails seemingly at random on others, with no change in configuration. I've looked through the logs but couldn't find anything specific. I'm submitting the job with the following configuration:

/usr/hdp/2.6.0.3-8/spark2/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 30G \
  --driver-cores 5 \
  --executor-cores 4 \
  --num-executors 30 \
  --executor-memory 10G \
  --conf spark.sql.files.ignoreCorruptFiles=true \
  --conf spark.driver.maxResultSize=0 \
  --conf spark.yarn.executor.memoryOverhead=4096 \
  --conf spark.shuffle.service.enabled=True \
  --conf spark.dynamicAllocation.enabled=True \
  --conf spark.dynamicAllocation.minExecutors=30 \
  --conf spark.dynamicAllocation.maxExecutors=80
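
For context, the application itself is a plain Scala job. As a rough sketch (the object name, app name, and job body are placeholders, not the real code), the same settings would look like this if they were set programmatically instead of on the command line:

import org.apache.spark.sql.SparkSession

object DailyJob {
  def main(args: Array[String]): Unit = {
    // Same settings as in the spark-submit call above; values passed
    // on the command line normally take precedence over these.
    val spark = SparkSession.builder()
      .appName("daily-job")
      .config("spark.sql.files.ignoreCorruptFiles", "true")
      .config("spark.driver.maxResultSize", "0")
      .config("spark.yarn.executor.memoryOverhead", "4096")
      .config("spark.shuffle.service.enabled", "true")
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.dynamicAllocation.minExecutors", "30")
      .config("spark.dynamicAllocation.maxExecutors", "80")
      .getOrCreate()

    // ... actual ETL logic runs here ...

    spark.stop()
  }
}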

This is all I could find in the logs (from one executor's stderr):

20/10/01 09:06:40 INFO Executor: Executor is trying to kill task 1455.0 in stage 8.0 (TID 43397)
20/10/01 09:06:40 INFO Executor: Executor is trying to kill task 1391.0 in stage 8.0 (TID 43293)
20/10/01 09:06:40 INFO Executor: Executor is trying to kill task 1419.0 in stage 8.0 (TID 43307)
20/10/01 09:06:40 INFO Executor: Executor is trying to kill task 1440.0 in stage 8.0 (TID 43355)
20/10/01 09:06:40 INFO Executor: Executor killed task 1440.0 in stage 8.0 (TID 43355)
20/10/01 09:06:40 INFO Executor: Executor killed task 1419.0 in stage 8.0 (TID 43307)
20/10/01 09:06:40 INFO Executor: Executor killed task 1391.0 in stage 8.0 (TID 43293)
20/10/01 09:06:40 INFO Executor: Executor killed task 1455.0 in stage 8.0 (TID 43397)
20/10/01 09:06:41 INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
20/10/01 09:06:41 INFO MemoryStore: MemoryStore cleared
20/10/01 09:06:41 INFO BlockManager: BlockManager stopped
20/10/01 09:06:41 INFO ShutdownHookManager: Shutdown hook called

End of LogType:stderr
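
Since the executor log only shows tasks being killed after the driver commanded a shutdown, I assume the real exception is raised on the driver side. In case it helps, this is roughly how I plan to wrap the driver entry point to print the full cause chain behind "Job aborted" (a simplified sketch; runJob and the object name are placeholders for the actual job code):

import org.apache.spark.sql.SparkSession

object DailyJobRunner {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().getOrCreate()
    try {
      runJob(spark) // placeholder for the real transformations and actions
    } catch {
      case e: Throwable =>
        // Walk the cause chain so the underlying error shows up in the driver log.
        var cause: Throwable = e
        while (cause != null) {
          println(s"Caused by: ${cause.getClass.getName}: ${cause.getMessage}")
          cause = cause.getCause
        }
        throw e // rethrow so the application is still marked as failed in YARN
    } finally {
      spark.stop()
    }
  }

  def runJob(spark: SparkSession): Unit = {
    // actual job logic goes here
  }
}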

Can somebody help me find the actual cause here?