RNN model overfitting on multiclass data

I have a multi-class labeled dataset (classes 0, 1, 2) for which I am trying to implement an RNN model. The problem is that the model overfits no matter how much I play with the parameters. This is the model:

model = Sequential()
model.add(Embedding(len(word_index) + 1, EMBEDDING_DIM,
                    weights=[embedding_matrix],
                    input_length=MAX_SEQUENCE_LENGTH,
                    trainable=True))
model.add(GRU(128, return_sequences=True, dropout=0.2))
model.add(LSTM(256, recurrent_dropout=0.2))
model.add(Dense(20, activation='softmax'))

model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
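To double-check my choice of loss: as I understand it, `sparse_categorical_crossentropy` takes integer labels (like my 0/1/2) directly and computes minus the log of the softmax probability assigned to the true class. A toy example with made-up probabilities (not my real model outputs):

```python
import numpy as np

# Toy softmax outputs for 2 samples over 20 classes (hypothetical values):
# each row sums to 1 (19 entries of 0.01 plus one of 0.81).
probs = np.full((2, 20), 0.01)
probs[0, 0] = 0.81
probs[1, 2] = 0.81

# Integer labels, as sparse_categorical_crossentropy expects (no one-hot).
y_true = np.array([0, 2])

# Sparse categorical cross-entropy = -log(probability of the true class).
loss = -np.log(probs[np.arange(len(y_true)), y_true])
print(loss.round(4))  # [0.2107 0.2107], i.e. -log(0.81) for both samples
```

So even though my labels only cover 3 classes, the 20-unit softmax still trains; the extra outputs just receive no positive examples.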

Running the model:

from sklearn.model_selection import StratifiedKFold

stratified_split = StratifiedKFold(n_splits=3, shuffle=False)
for train_index, test_index in stratified_split.split(X, y):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]

# Only the split from the last fold is used below, since tokenization
# and fitting happen outside the loop.
X_train_Glove, X_test_Glove, word_index, embeddings_index = loadData_Tokenizer(X_train, X_test)

model_RNN = Build_Model(word_index, embeddings_index, 20)

history = model_RNN.fit(X_train_Glove, y_train,
                        validation_data=(X_test_Glove, y_test),
                        epochs=15,
                        batch_size=128,
                        verbose=2)
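To verify the split itself isn't the problem: `StratifiedKFold` should keep the 0/1/2 class proportions the same in every fold. A small self-contained check with made-up labels (not my real data):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Made-up imbalanced labels with the same three classes (0, 1, 2):
# 54 / 27 / 9 samples, i.e. a 6:3:1 ratio.
y = np.array([0] * 54 + [1] * 27 + [2] * 9)
X = np.zeros((len(y), 5))  # dummy features; the split only looks at y

skf = StratifiedKFold(n_splits=3, shuffle=False)
for train_idx, test_idx in skf.split(X, y):
    # Each test fold mirrors the overall 6:3:1 class ratio.
    print(np.bincount(y[test_idx], minlength=3))  # [18  9  3] for every fold
```

So each fold is balanced the same way as the full dataset, which makes me think the overfitting comes from the model rather than the split.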

Execution Summary:

(screenshot of the per-epoch training and validation metrics)

Am I missing something here, such as hyperparameter tuning?

(I am a deep learning newbie trying to implement the RNN described in this paper.)