Apply a method designed for a Spark Dataset to subgroups of that Dataset

I have a method that expects a Spark Dataset of a custom object as input:

def myAlgorithm(ds: Dataset[CustomObject]): List[CustomObject] = {
    ...
}
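
For context, CustomObject is essentially a case class with an index field that defines the subgroups. Something like this (index as Long and the payload field are simplifications of the real class):

case class CustomObject(index: Long, payload: String)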

However, I now have to apply this algorithm to subgroups of this dataset.

If I apply a .groupBy() method to this Dataset, I end up having to refactor all of myAlgorithm to fit the new structure of the data, which could be quite time-consuming. I am also worried about the performance of the algorithm once it is refactored (each subgroup can be quite massive too).
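
To make the refactoring concern concrete: going through groupByKey means rewriting the per-group logic to consume an Iterator instead of a Dataset. A minimal sketch of what I mean, where myAlgorithmOnIterator is a hypothetical rewrite of myAlgorithm (not code I actually have):

import org.apache.spark.sql.Dataset
import spark.implicits._ // assumes a SparkSession named spark

val perGroup: Dataset[CustomObject] =
    ds.groupByKey(obj => obj.index)
      .flatMapGroups { (key, group) =>
          // group is an Iterator[CustomObject], not a Dataset, so
          // myAlgorithm cannot be reused here without a rewrite
          myAlgorithmOnIterator(group)
      }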

The most straightforward solution I found was to iterate through the keys and filter my dataset:

val keys = ds.map(obj => obj.index).distinct.collect()

val result = for (key <- keys) yield {
    val filteredDS = ds.filter(obj => obj.index == key)
    // yield the output directly: assigning it to a val as the last
    // statement would make the for-comprehension yield Unit
    myAlgorithm(filteredDS)
}
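
The only mitigation I found for this version is persisting the source Dataset before the loop, so the per-key filters read from cache instead of recomputing the whole lineage once per key (assuming the cluster can hold the data):

import org.apache.spark.storage.StorageLevel

// cache once; the N filter passes in the loop then reuse it
ds.persist(StorageLevel.MEMORY_AND_DISK)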

However, this solution is highly inefficient and far from fast enough for my needs. I also explored the idea of using Futures in the for loop (based on this video: https://www.youtube.com/watch?v=WZ5TJUYWyU0):

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

val keys = ds.map(obj => obj.index).distinct.collect()

val futures = for (key <- keys) yield {
    val filteredDS = ds.filter(obj => obj.index == key)
    Future { myAlgorithm(filteredDS) }
}

// map, not foreach: foreach returns Unit and would discard the results
val result = futures.map(f => Await.result(f, Duration.Inf))
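
A variant of the same idea, using an explicit bounded pool instead of the global ExecutionContext and a single Future.sequence to collect the results (the pool size of 4 is an arbitrary guess):

import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration

// bounded pool: at most 4 per-key Spark jobs run concurrently
implicit val ec: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

val boundedFutures = keys.toList.map { key =>
    Future { myAlgorithm(ds.filter(obj => obj.index == key)) }
}

// one await for all results instead of blocking per future
val allResults: List[List[CustomObject]] =
    Await.result(Future.sequence(boundedFutures), Duration.Inf)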

It's better, but still not efficient enough for my needs.

What is the best practice / most efficient way of dealing with this situation?