Spark: from the training/testing loop to a model scoring in production

Suppose we have trained and tested a model, found it satisfactory, and saved the trained model to a file system.

All of this was done with Spark and Spark MLlib, in Python.
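
For concreteness, the training and saving step looks roughly like this (a minimal sketch with a toy dataset; the pipeline stages, column names, and path are just placeholders, not our real ones):

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.appName("train").getOrCreate()

# Toy training data standing in for our real feature set
train_df = spark.createDataFrame(
    [(1.0, 0.5, 1.0), (0.2, 0.1, 0.0), (0.9, 0.8, 1.0), (0.1, 0.3, 0.0)],
    ["f1", "f2", "label"],
)

assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(train_df)

# Persist the fitted pipeline to the file system (placeholder path)
model.write().overwrite().save("/models/my_model")
```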

1) How do we start using this model in production to serve actual requests and make predictions? Can we reuse the same Spark cluster, i.e. load the model in another Spark application and score online requests with it? See the sketch below for what I have in mind.
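
What I'm imagining on the serving side is roughly this (again a sketch; the path and column names match the placeholders above):

```python
from pyspark.sql import SparkSession
from pyspark.ml import PipelineModel

spark = SparkSession.builder.appName("serve").getOrCreate()

# Load the fitted pipeline that the training app saved
model = PipelineModel.load("/models/my_model")

# An incoming request turned into a one-row DataFrame
request_df = spark.createDataFrame([(0.7, 0.4)], ["f1", "f2"])

# transform() runs the whole pipeline and appends a "prediction" column
prediction = model.transform(request_df).select("prediction").first()[0]
print(prediction)
```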

2) Should we, in parallel, keep running training/testing on more recent data and periodically "refresh" the model used in production? Is that an acceptable approach? The kind of refresh I'm imagining is sketched below.
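
Something like this in the serving app, where the training job periodically overwrites the saved model (the path, interval, and helper name are all hypothetical):

```python
import time
from pyspark.ml import PipelineModel

MODEL_PATH = "/models/my_model"   # placeholder; the training job overwrites this
RELOAD_EVERY = 3600               # seconds between refresh checks

model = PipelineModel.load(MODEL_PATH)
last_load = time.time()

def current_model():
    """Reload the model from disk if the refresh interval has elapsed."""
    global model, last_load
    if time.time() - last_load > RELOAD_EVERY:
        model = PipelineModel.load(MODEL_PATH)
        last_load = time.time()
    return model
```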

3) I'm worried that the online/production performance of Python will be low. Is there a way to speed up execution in production? For instance, could the trained model be translated to C or otherwise "speed-improved"?
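
One direction I've been considering is pulling the learned parameters out of the fitted model and scoring outside Spark entirely (the sketch below assumes the binary logistic regression from the training example above, where the model is the last pipeline stage):

```python
import math

# Extract the learned parameters from the fitted LogisticRegressionModel
lr_model = model.stages[-1]
coefs = lr_model.coefficients.toArray()
intercept = lr_model.intercept

def score(features):
    """Score one request without touching Spark (binary logistic regression)."""
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

print(score([0.7, 0.4]))
```

A scorer like this could then be reimplemented in C or any faster runtime, since it only needs the coefficient array, but I don't know whether that's the recommended route.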

Thanks