How can I ensure exactly-once semantics when querying data from a Hive external table?

Spark Structured Streaming claims an exactly-once guarantee for the file sink. In my application I am processing data via Structured Streaming in append mode and writing the output to an HDFS directory. HDFS path: output/job1
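
For context, the write side looks roughly like this. This is a minimal sketch, not my exact job: the JSON source, the schema, the `dt` partition column, and the checkpoint path are all illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import to_date

spark = SparkSession.builder.appName("job1").getOrCreate()

# Hypothetical source: a directory of JSON input; schema is illustrative.
events = (spark.readStream
          .format("json")
          .schema("id LONG, event_time TIMESTAMP, value STRING")
          .load("input/events"))

# File sink in append mode; committed batches are recorded in
# output/job1/_spark_metadata by the sink.
query = (events
         .withColumn("dt", to_date("event_time"))
         .writeStream
         .format("parquet")
         .outputMode("append")
         .partitionBy("dt")  # matches the Hive partition column further down
         .option("path", "output/job1")
         .option("checkpointLocation", "checkpoints/job1")
         .start())
```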

Let's assume I am processing the nth batch. If a system failure occurs and the Structured Streaming job dies abruptly, some files will have been written to the output directory, but no corresponding commit will exist in the _spark_metadata directory. So there are garbage files in my output directory. These files are not read when I use the directory in a downstream Spark job; I verified this by loading the data in PySpark and running a count query.
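
This is roughly how I verified it (same illustrative paths as above). My understanding is that reading the sink's exact output path with the batch reader makes Spark consult _spark_metadata, so only committed files are counted:

```python
# Reading the exact sink output path: Spark uses output/job1/_spark_metadata
# to list files, so uncommitted (garbage) files are skipped.
committed = spark.read.parquet("output/job1")
print(committed.count())  # reflects committed batches only

# Note: reading a glob or a subdirectory instead of the exact sink path
# bypasses the metadata log and would pick up the garbage files too.
```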

I have also created a Hive external table pointing to output/job1. However, when I run `MSCK REPAIR TABLE` to refresh the partitions with new files, Hive also discovers the garbage files, which breaks the exactly-once semantics.
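
For completeness, here is a sketch of the Hive side. The table name, columns, and partition column are illustrative; I'm issuing the statements through spark.sql (with Hive support enabled) just to keep the example self-contained, but the same DDL can be run in the Hive CLI or beeline.

```python
spark = (SparkSession.builder
         .appName("job1-hive")
         .enableHiveSupport()  # so spark.sql talks to the Hive metastore
         .getOrCreate())

spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS job1_output (
        id BIGINT,
        event_time TIMESTAMP,
        value STRING
    )
    PARTITIONED BY (dt STRING)
    STORED AS PARQUET
    LOCATION 'output/job1'  -- path as in the question; use a fully qualified HDFS URI in practice
""")

# Hive lists files on disk directly and never consults _spark_metadata,
# so this also registers partitions that contain uncommitted files.
spark.sql("MSCK REPAIR TABLE job1_output")
```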

Q1) Is my understanding correct that this exactly-once guarantee is limited to the Spark ecosystem and does not extend to external tools?

Q2) Is there any way to make Hive apply Spark's read logic (i.e., consult the _spark_metadata log) so that it ignores the garbage files?