The same RandomForestClassifier returns different results

I have a case where I need to train the same model on two different machines and make sure the output is exactly the same. The environments are identical, so I wrote some unit tests comparing the predictions on the same dataset. Out of all the predictions, only a few differ, by 0.5-1.5 percentage points.
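For context, both machines fit and save the model in the same way, roughly like the sketch below (X_train, y_train, and the file path are placeholders; the hyper-parameters are the ones visible in the repr further down):

import joblib
from sklearn.ensemble import RandomForestClassifier

# Same hyper-parameters and fixed seed on both machines;
# X_train / y_train stand in for the shared training data.
clf = RandomForestClassifier(
    n_estimators=200,
    bootstrap=False,
    min_samples_leaf=2,
    random_state=10000,
)
clf.fit(X_train, y_train)
joblib.dump(clf, "model.joblib")  # placeholder path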

To verify that the two models are the same, I compared them like this:

import joblib

model_1 = joblib.load("path for model 1")
model_2 = joblib.load("path for model 2")

When I print both models, I get the same set of parameters:

RandomForestClassifier(bootstrap=False, class_weight=None, criterion='gini',
                       max_depth=None, max_features='auto', max_leaf_nodes=None,
                       min_impurity_decrease=0.0, min_impurity_split=None,
                       min_samples_leaf=2, min_samples_split=2,
                       min_weight_fraction_leaf=0.0, n_estimators=200,
                       n_jobs=None, oob_score=False, random_state=10000,
                       verbose=0, warm_start=False)
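Beyond eyeballing the repr, I also run a programmatic version of the same parameter check (a sketch, using sklearn's get_params):

# Compare the full hyper-parameter dictionaries directly
assert model_1.get_params() == model_2.get_params()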

And when I compare their feature importances, I get the same output:

>>> np.array_equal(model_1.feature_importances_, model_2.feature_importances_)
True
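For reference, the comparison where the mismatches actually show up looks roughly like this (X_test stands in for the shared evaluation set):

import numpy as np

# Class-probability outputs of the two loaded models on the same data
p1 = model_1.predict_proba(X_test)
p2 = model_2.predict_proba(X_test)

diff = np.abs(p1 - p2)
print("max difference:", diff.max())
print("rows that differ:", np.where(diff.max(axis=1) > 0)[0])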

Is there any other way to find out why some of the predictions differ slightly between the two models?

The random_state of each individual estimator is also the same:

>>> [i.random_state for i in model_1.estimators_] == [j.random_state for j in model_2.estimators_]
True
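One deeper check I'm considering is comparing the fitted trees node by node instead of the aggregate attributes, using the arrays on sklearn's Tree object (a sketch, assuming the models loaded above):

import numpy as np

# Flag any paired tree whose structure or leaf values differ
for k, (e1, e2) in enumerate(zip(model_1.estimators_, model_2.estimators_)):
    t1, t2 = e1.tree_, e2.tree_
    same = (
        t1.node_count == t2.node_count
        and np.array_equal(t1.feature, t2.feature)
        and np.array_equal(t1.threshold, t2.threshold)
        and np.array_equal(t1.value, t2.value)
    )
    if not same:
        print(f"tree {k} differs")

If every tree matched exactly, the remaining suspect would be prediction-time numerics (e.g. different numpy/BLAS builds on the two machines) rather than the fit itself.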