Spark saveAsNewAPIHadoopFile on the local filesystem, and how to read the result back

I have read some files using

JavaPairRDD<String, PortableDataStream> rdd = sc.binaryFiles(path);

and modified some bytes in the RDD.

Then I saved it using the Hadoop API, but to the local filesystem:

JavaPairRDD<BytesWritable, BytesWritable> transformed = rdd.mapToPair((tuple2) -> {
    String fname = tuple2._1();                 // original file name
    PortableDataStream content = tuple2._2();   // file content as a stream
    byte[] bytes = content.toArray();           // read the whole file into memory
    bytes = YUVSimpleTrans.transform(bytes);    // modify the bytes
    return new Tuple2<>(new BytesWritable(fname.getBytes()), new BytesWritable(bytes));
});

transformed.repartition((int) transformed.count())   // roughly one record per output part file
            .saveAsNewAPIHadoopFile(outpath, BytesWritable.class, BytesWritable.class,
                                    SequenceFileAsBinaryOutputFormat.class);

I can see the result files in outpath, and they look like normal HDFS-style output (part-r-xxxxx files).

(screenshot of the result files)
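To double-check what the job actually wrote, I assume a part file can be opened directly with Hadoop's SequenceFile.Reader, roughly like this (just a sketch, not verified; the path is the part file that also shows up in the log further down):

// classes used: org.apache.hadoop.conf.Configuration, org.apache.hadoop.fs.Path,
// org.apache.hadoop.io.SequenceFile
Configuration conf = new Configuration();
Path part = new Path("file:///C:/Users/Me/Desktop/demo/out/part-r-00014");
try (SequenceFile.Reader reader =
         new SequenceFile.Reader(conf, SequenceFile.Reader.file(part))) {
    System.out.println(reader.getKeyClassName());   // should be org.apache.hadoop.io.BytesWritable
    System.out.println(reader.getValueClassName()); // should be org.apache.hadoop.io.BytesWritable
}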

My question is: can I use newAPIHadoopFile or newAPIHadoopRDD to read these files back from the local filesystem?
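What I have in mind is something like this (only a sketch, not verified; it assumes the same sc and outpath as above, and the new-API SequenceFileAsBinaryInputFormat from org.apache.hadoop.mapreduce.lib.input):

JavaPairRDD<BytesWritable, BytesWritable> readBack =
        sc.newAPIHadoopFile(outpath,
                SequenceFileAsBinaryInputFormat.class,  // new-API counterpart of the output format
                BytesWritable.class,
                BytesWritable.class,
                new Configuration());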

So far I have only tested a read with the old hadoopFile API:

JavaPairRDD<BytesWritable, BytesWritable> rdd = sc.hadoopFile(path, SequenceFileAsBinaryInputFormat.class, BytesWritable.class, BytesWritable.class);

but I got this exception:

Input split: file:/C:/Users/Me/Desktop/demo/out/part-r-00014:0+3110560

java.io.FileNotFoundException: Invalid file path
    at java.io.FileOutputStream.<init>(FileOutputStream.java:206)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
    .......