# Solve mathematical equations using a neural network

There is no practical need behind my question; I'm just curious what a neural network is capable of and how accurate it can be.

I want to solve a mathematical equation like this one:

```python
r = round(A * A - B + C / D * C, 2)
```

For example: If A = 0.2, B = 0.3, C = 0.4, D = 0.5 is entered, the result must be r = 0.06.
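The example can be verified directly in plain Python before involving a network at all:

```python
# Check the worked example: A=0.2, B=0.3, C=0.4, D=0.5
A, B, C, D = 0.2, 0.3, 0.4, 0.5
r = round(A * A - B + C / D * C, 2)   # 0.04 - 0.3 + 0.32
print(r)  # 0.06
```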

I decided to multiply each number by 100 to make it an integer, and to convert the numbers to binary (a list of 0s and 1s).

These are the functions for converting between decimal and binary (a list of 0s and 1s):

```python
import numpy as np

constStep = 100

def FromBinary(listOfBits):
    # Interpret a list of bits (most significant first) as an integer
    listOfBits = [int(x) for x in listOfBits]
    return sum(b << i for i, b in enumerate(listOfBits[::-1]))

def ToBinary(num, width):
    lenOfBit = num.bit_length()
    if lenOfBit > width:
        # raising a bare string is invalid in Python 3; raise an exception
        raise ValueError(str(lenOfBit) + ' > ' + str(width))
    result = np.binary_repr(num, width=width)
    return [int(x) for x in result]
```
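A quick round-trip sanity check of the two helpers (self-contained copy of the fixed functions, so it runs on its own): A = 0.2 is scaled by 100 to the integer 20, encoded in 8 bits, and decoded back.

```python
import numpy as np

def FromBinary(listOfBits):
    bits = [int(x) for x in listOfBits]
    return sum(b << i for i, b in enumerate(bits[::-1]))

def ToBinary(num, width):
    if num.bit_length() > width:
        raise ValueError(f"{num.bit_length()} > {width}")
    return [int(x) for x in np.binary_repr(num, width=width)]

bits = ToBinary(20, 8)          # 0.2 * 100 = 20
print(bits)                     # [0, 0, 0, 1, 0, 1, 0, 0]
print(FromBinary(bits))         # 20
```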

This section of code generates mathematical equations for the network to learn:

• input (x): 8 bits + 8 bits + 8 bits + 8 bits = 32 bits, four unsigned integers (the code draws each from 1 to 254)
• output (y): 16 bits, one unsigned integer from 0 to 65,535
• Equations with negative results are skipped

Code:

```python
import random

iteration = 30000
vars = 4          # number of input variables (note: shadows the built-in vars())
epochs = 30
x_train = np.empty((0, 32), int)
y_train = np.empty((0, 16), int)
for i in range(iteration):
    bx = []
    vals = []
    for j in range(vars):
        r = random.randrange(1, 255)
        bx.extend(ToBinary(r, 8))
        vals.append(r / constStep)

    r = round(vals[0] * vals[0] - vals[1] + vals[2] / vals[3] * vals[2], 2)
    if abs(r) > 655 or r < 0:
        continue

    x_train = np.append(x_train, np.array([bx]), axis=0)

    r = int(r * constStep)
    bx = ToBinary(r, 16)
    y_train = np.append(y_train, np.array([bx]), axis=0)
```
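As an aside, `np.append` copies the entire array on every iteration, which makes this loop quadratic in the number of samples. A sketch of the same generation step that collects rows in Python lists and builds the arrays once at the end (smaller `iteration` here just for illustration):

```python
import random
import numpy as np

constStep = 100

def ToBinary(num, width):
    return [int(x) for x in np.binary_repr(num, width=width)]

x_rows, y_rows = [], []
for _ in range(1000):
    bx, vals = [], []
    for _ in range(4):
        r = random.randrange(1, 255)
        bx.extend(ToBinary(r, 8))
        vals.append(r / constStep)
    r = round(vals[0] * vals[0] - vals[1] + vals[2] / vals[3] * vals[2], 2)
    if r < 0 or r > 655:      # same filter as abs(r) > 655 or r < 0
        continue
    x_rows.append(bx)
    y_rows.append(ToBinary(int(r * constStep), 16))

x_train = np.array(x_rows, dtype=int)   # single allocation at the end
y_train = np.array(y_rows, dtype=int)
print(x_train.shape, y_train.shape)
```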

The model looks like this:

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(32,)),          # shape must be a tuple, not (32)
    layers.Dense(66000, activation='relu'),
    layers.Dense(16, activation='sigmoid')
])

model.summary()

# optimizer is not specified, so Keras falls back to the default 'rmsprop'
model.compile(loss='mse',
              metrics=['binary_accuracy'])

tm = model.fit(
    x_train,
    y_train,
    epochs=epochs,
    verbose=1)
```
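For scale: `model.summary()` should report about 3.2 million trainable parameters, nearly all of them in the wide hidden layer. The count follows from the Dense-layer formula (inputs + 1) × units:

```python
# Parameter count of each Dense layer: (inputs + 1) * units
hidden = (32 + 1) * 66000        # weights + biases of the hidden layer
output = (66000 + 1) * 16        # weights + biases of the output layer
print(hidden + output)           # 3234016
```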

Check the result:

```python
loss, accuracy = model.evaluate(x_train, y_train)

print("Loss: ", loss)
print("Accuracy: ", accuracy)

predictions = model.predict(x_train)

# Threshold the sigmoid outputs to get hard bits
predictions[predictions >= 0.5] = 1
predictions[predictions < 0.5] = 0

vals = []
ind = 654  # check a random example
for i in range(vars):
    vals.append(FromBinary(x_train[ind][8*i:8*(i+1)]) / constStep)
r = round(vals[0] * vals[0] - vals[1] + vals[2] / vals[3] * vals[2], 2)
print("vals = ", vals, " | predictions = ", FromBinary(predictions[ind]) / constStep, " | r = ", r,
      " | b predictions = ", predictions[ind], " | y_train = ", FromBinary(y_train[ind]) / constStep)
```

I get an accuracy of about 80%. Can this accuracy be increased? If so, how should the network be built to achieve it?