Druid.io: update/override existing data via streams from Kafka (Druid Kafka indexing service)

I'm ingesting streaming data from Kafka using the Druid Kafka indexing service.
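
For context, the supervisor spec I submit to the Overlord looks roughly like this (a minimal sketch; the datasource, topic, broker address, and field names are placeholders):

```json
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "my-datasource",
    "parser": {
      "type": "string",
      "parseSpec": {
        "format": "json",
        "timestampSpec": { "column": "timestamp", "format": "auto" },
        "dimensionsSpec": { "dimensions": ["id", "status"] }
      }
    },
    "metricsSpec": [
      { "type": "count", "name": "count" }
    ],
    "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "DAY",
      "queryGranularity": "NONE"
    }
  },
  "tuningConfig": { "type": "kafka" },
  "ioConfig": {
    "topic": "my-topic",
    "consumerProperties": { "bootstrap.servers": "kafka01:9092" },
    "taskCount": 1,
    "replicas": 1,
    "taskDuration": "PT1H"
  }
}
```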

But the data I ingest keeps changing, so I need to reload it while avoiding duplicates and collisions with data that was already loaded.

I've researched the docs on Updating Existing Data in Druid, but everything there is about Hadoop Batch Ingestion and Lookups.

Is it possible to update existing Druid data while streaming from Kafka?

In other words, I need to overwrite old values with new ones using the Kafka indexing service (streams from Kafka).
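
To illustrate with made-up records: suppose the topic first delivers

```json
{ "timestamp": "2018-01-01T00:00:00Z", "id": "item-1", "status": "pending" }
```

and later delivers an updated version of the same row:

```json
{ "timestamp": "2018-01-01T00:00:00Z", "id": "item-1", "status": "done" }
```

I want queries to return only the second record, not both.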

Maybe there is some kind of setting to overwrite duplicates?