I have two matrices, **M** and **A**, whose values never change, so I want to define them as constants/variables inside the model. I tried the approach in the code below, but I get an error. It works if I make the matrices model inputs, i.e. **Model(inputs=[X_in, InputM, InputA])**, but then I have to duplicate each matrix once per training sample, which costs a lot of memory.
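For context, here is a minimal sketch of the duplication that the working input-based approach forces (`num_samples` is a hypothetical training-set size, not from my real data):

```python
import numpy as np

num_samples = 1000  # hypothetical number of training samples

M = np.random.rand(90, 100).astype('float32')
# To feed M as a model input, it must be repeated along the batch axis,
# one identical copy per training sample:
M_batch = np.repeat(M[np.newaxis, ...], num_samples, axis=0)
print(M_batch.shape)  # (1000, 90, 100)
# Memory cost grows linearly with num_samples even though the data is constant
print(M_batch.nbytes)
```

This is exactly the redundancy I am trying to avoid by baking the matrices into the graph.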

How can I get the same result by making the matrices constants/variables rather than model inputs?

I have also tried **Approach 2** without the batch-size tiling, feeding the variable matrices directly to the GraphAttention layers. That raises no error, but the loss is much worse than when the matrices are inputs.

**Approach 1:**

```
import numpy as np
import tensorflow as tf
M = np.random.rand(90,100)
M = M.reshape(1,90,100)
M = np.repeat(M, 1, axis=0)
M.shape
A = trainlabels.T.dot(trainlabels).astype('float32')
for i in range(A.shape[0]):
    A[i] = A[i] / A[i, i]  # normalize each row by its diagonal entry
A = A.reshape(1, A.shape[0], A.shape[0])
A = np.repeat(A, 1, axis=0)
A.shape
from tensorflow.keras.layers import LSTM, Bidirectional, Embedding, Dense, Input, Dropout, Dot
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.regularizers import l2
from tensorflow.keras import backend as K
from tensorflow.keras import Model
from spektral.layers import GraphAttention
l2_reg = 5e-4
X_in = Input(shape=(250,))
X = Embedding(20001, 100, input_length=250)(X_in)
X = Bidirectional(LSTM(250))(X)
X = Dropout(0.5)(X)
batch_size = K.shape(X_in)[0]
k_M = K.variable(M)
k_M = K.tile(k_M, (batch_size, 1, 1))
k_A = K.variable(A)
k_A = K.tile(k_A, (batch_size, 1, 1))
k_A = Input(tensor=tf.convert_to_tensor(A))
k_M = Input(tensor=tf.convert_to_tensor(M))
gc1 = GraphAttention(500, attn_heads=4, activation='relu', kernel_regularizer=l2(l2_reg))([k_M, k_A])
gc2 = GraphAttention(500, attn_heads=4, concat_heads=False, activation='relu', kernel_regularizer=l2(l2_reg))([gc1, k_A])
X = Dot(axes=(1, 2))([X, gc2])
X = Dense(90, activation='sigmoid')(X)
model = Model(inputs=X_in, outputs=X)
```

**Error:**

```
Error while reading resource variable _AnonymousVar62 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar62/N10tensorflow3VarE does not exist.
[[node Tile_5/ReadVariableOp (defined at <ipython-input-26-31ec407039d1>:22) ]] [Op:__inference_keras_scratch_graph_8270]
Function call stack:
keras_scratch_graph
```

**Approach 2:**

```
from tensorflow.keras.layers import LSTM, Bidirectional, Embedding, Dense, Input, Dropout, Dot
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.regularizers import l2
from tensorflow.keras import backend as K
from tensorflow.keras import Model
from spektral.layers import GraphAttention
l2_reg = 5e-4
X_in = Input(shape=(250,))
X = Embedding(20001, 100, input_length=250)(X_in)
X = Bidirectional(LSTM(250))(X)
X = Dropout(0.5)(X)
k_M = K.variable(M)  # difference from Approach 1
k_A = K.variable(A)  # difference from Approach 1
gc1 = GraphAttention(500, attn_heads=4, activation='relu', kernel_regularizer=l2(l2_reg))([k_M, k_A])
gc2 = GraphAttention(500, attn_heads=4, concat_heads=False, activation='relu', kernel_regularizer=l2(l2_reg))([gc1, k_A])
X = Dot(axes=(1, 2))([X, gc2])
X = Dense(90, activation='sigmoid')(X)
model = Model(inputs=X_in, outputs=X)
```