Neural network created from scratch has a bias towards normally distributed output

I am trying to implement a neural network from scratch for a genetic algorithm, so the weights are not updated at every step and remain the same throughout a training run. The weights giving the best results (fitness) are carried forward into the next training loop, roughly as in the sketch below.
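For context, the selection step looks roughly like this (a simplified sketch, not my actual code: the population size, the fitness function, and re-initialising the rest of the population are placeholders, and it uses the Brain class shown further down):

import numpy as np

POPULATION_SIZE = 50  # placeholder value

def evaluate_fitness(brain):
    # Placeholder: the real simulation runs the entity and scores it
    return np.random.random()

def train(generations=200):
    population = [Brain(creature=None) for _ in range(POPULATION_SIZE)]
    for _ in range(generations):
        # Weights are never updated inside a run; brains are only scored
        scores = [evaluate_fitness(brain) for brain in population]

        # The best-scoring brain is carried forward unchanged into the next loop
        best = population[int(np.argmax(scores))]
        population = [best] + [Brain(creature=None) for _ in range(POPULATION_SIZE - 1)]
    return population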

The issue is that the output of the network ends up roughly normally distributed rather than uniformly distributed. I need the output to be uniform between 0 and 1, with all values equally likely, but most of the values I get cluster near 0.5. Why is this happening?

I have an input matrix that is mostly zeros, with a few values between 0 and 1 scattered sparsely through it. In the code this is view_matrix, and its size is determined by vision_radius. It is the only matrix that changes for any particular entity. There are two weight matrices and two bias matrices, which I populate uniformly with np.random.uniform. I use a leaky ReLU in the hidden layer and a sigmoid activation at the output layer. The variables prefixed neurons_ are the layer sizes.

import numpy as np


class Brain:

    def __init__(self, creature, vision_radius=125, neurons_1=10, neurons_2=1):

        self.creature = creature
        self.vision_radius = vision_radius

        # Layer 1 (hidden)
        self.neurons_1 = neurons_1

        self.weight_1 = np.random.uniform(-1.0, 1.0, (self.vision_radius * 2 + 1, self.neurons_1))
        self.bias_1 = np.random.uniform(-1.0, 1.0, (1, self.neurons_1))

        # Layer 2 (output)
        self.neurons_2 = neurons_2

        self.weight_2 = np.random.uniform(-1.0, 1.0, (self.neurons_1, self.neurons_2))
        self.bias_2 = np.random.uniform(-1.0, 1.0, (1, self.neurons_2))

    def forward(self):
        # self.view_matrix is set by the simulation before forward() is called

        # Layer 1: weights and bias
        output = np.dot(self.view_matrix, self.weight_1) + self.bias_1  # shape: (rows of view_matrix, neurons_1)

        # Activation function: leaky ReLU (slope 0.2 for negative inputs)
        output = np.maximum(0.2 * output, output)

        # Layer 2: weights and bias
        output = np.dot(output, self.weight_2) + self.bias_2  # shape: (rows of view_matrix, neurons_2)

        # Activation function: sigmoid
        output = 1.0 / (1.0 + np.exp(-output))

        return np.mean(output)
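
Here is roughly how I observe the problem (a trimmed-down driver, not my actual simulation code: the square sparse input, the number of samples, and setting view_matrix directly are assumptions made only for this example):

import numpy as np

vision_radius = 125
size = vision_radius * 2 + 1

outputs = []
for _ in range(1000):
    brain = Brain(creature=None, vision_radius=vision_radius)

    # Mostly-zero input with a few random values between 0 and 1, as in the simulation
    view = np.zeros((size, size))
    rows = np.random.randint(0, size, 20)
    cols = np.random.randint(0, size, 20)
    view[rows, cols] = np.random.uniform(0.0, 1.0, 20)
    brain.view_matrix = view

    outputs.append(brain.forward())

# Almost all counts land in the middle bins instead of being spread evenly
print(np.histogram(outputs, bins=10, range=(0.0, 1.0))[0])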

