Choose the number of samples to average the gradient over in Scikit-learn's SGDClassifier

I'm aware that the SGDClassifier in Scikit-learn picks one random sample from the training set at each step to compute the gradient, then updates the model weights (w and b) accordingly.

My question is this: among the parameters of SGDClassifier, there doesn't seem to be an option to choose the number of samples to pick at each step (instead of just one instance) and average the gradient over them. That would give us mini-batch gradient descent.

I've already had a look at the partial_fit() method, which is fed chunks of the training set one call at a time. But when used with SGDClassifier, doesn't it just boil down to picking random training instances from within the current chunk, rather than from the whole dataset?
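For reference, here is roughly the kind of partial_fit() loop I mean (a minimal sketch on toy data; batch size, epoch count, and the random seeds are arbitrary choices of mine). As far as I understand, each partial_fit() call still performs per-sample SGD updates over the chunk, rather than one update from the averaged gradient:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Toy dataset, just for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
classes = np.unique(y)  # partial_fit needs the full class list up front

clf = SGDClassifier(random_state=0)

batch_size = 32
rng = np.random.default_rng(0)
for epoch in range(5):
    order = rng.permutation(len(X))        # shuffle the whole dataset
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        # Feed one "mini-batch" as a chunk; internally this still
        # iterates sample by sample over the chunk.
        clf.partial_fit(X[idx], y[idx], classes=classes)

print(clf.score(X, y))
```

So even though I can control the chunk size here, this doesn't look like it averages the gradient over the batch the way true mini-batch gradient descent would.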