Consume a Spark SQL Dataset as an RDD-based job

Spark DataFrames have an rdd member (toJavaRDD() in the Java API), but I don't understand how it's useful. Can we start a SQL streaming job by converting the source Dataset to an RDD and processing that, instead of creating and starting a DataStreamWriter?

1 answer

  • answered 2018-01-14 11:14 user8371915

    Dataset provides a uniform API for both batch and streaming processing, but not every method is applicable to streaming Datasets. If you look carefully, you'll find other methods that cannot be used with streaming Datasets either (for example describe).
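
    For instance, here is a rough sketch of what happens if you try (the socket source and its options are only placeholders, not something from the question):

    ```scala
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("streaming-dataset-restrictions")
      .master("local[*]")
      .getOrCreate()

    // A streaming Dataset, e.g. read from a socket source.
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    lines.isStreaming  // true

    // Anything that would execute the plan eagerly is rejected for streaming Datasets.
    // Both calls below fail with an AnalysisException along the lines of
    // "Queries with streaming sources must be executed with writeStream.start()":
    // lines.describe()
    // lines.rdd
    ```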

    Can we start a SQL streaming job by converting the source Dataset to an RDD and processing that, instead of creating and starting a DataStreamWriter?

    We cannot. What starts in Structured Streaming stays in Structured Streaming: converting a streaming Dataset to an RDD is not allowed, and the only way to run the job is to start a streaming query through a DataStreamWriter.
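
    A minimal sketch of the supported path, keeping all transformations in the Dataset/DataFrame API (the source, sink and word-count logic below are illustrative, not from the question):

    ```scala
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    val spark = SparkSession.builder()
      .appName("structured-streaming-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Transformations stay in the Dataset API; nothing is converted to an RDD.
    val counts = lines
      .select(explode(split($"value", "\\s+")).as("word"))
      .groupBy("word")
      .count()

    // The only way to run the job: start a streaming query via DataStreamWriter.
    val query = counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
    ```

    If you genuinely need RDD-style processing per micro-batch, later Spark versions (2.4+) add foreachBatch on DataStreamWriter, which hands you each micro-batch as a regular batch DataFrame where rdd is legal; that still goes through writeStream and start(), not around them.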