With Keras and Tensorflow, ValueError: operands could not be broadcast together with shapes

I have 3,903 training samples. With a batch size of 64, the first 60 batches use 64 samples each and the last batch uses the remaining 63. While debugging, I noticed that the generator queue fills to its max_queue_size of 10 before the epoch's training results are output. The network also includes a CustomLoss layer that uses tensorflow.contrib.keras.backend for its calculation.
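For reference, the batch layout follows from simple arithmetic on my sample count and batch size:

```python
# Batch layout for 3,903 samples at batch_size=64.
samples, batch_size = 3903, 64
full_batches, remainder = divmod(samples, batch_size)
print(full_batches, remainder)  # 60 full batches of 64, plus a final batch of 63
```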

Question: are samples from different batches being mixed? In other words, why are the 63-sample batch and the 64-sample batches being combined in a way that causes the error below?

  File "tests.py", line 926, in <module>
    test_crosstrain_gen(args.datafiles[0], args.datafiles[1], args.classes, args.batch_size, **totalArgs)
  File "tests.py", line 569, in test_crosstrain_gen
    hist, model = cf.trainModelAugGen(trainGen, trainDataFile, **modTrainArgs)
  File "/playpen/wilson/classify/classify.py", line 714, in trainModelAugGen
    hist = model.fit_generator(augGen, steps, epochs=epochs, callbacks=callbacks, verbose=verbose, max_queue_size=61) #,use_multiprocessing=True)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/engine/training.py", line 2352, in fit_generator
    callbacks.on_batch_end(batch_index, batch_logs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/callbacks.py", line 129, in on_batch_end
    callback.on_batch_end(batch, logs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/callbacks.py", line 235, in on_batch_end
    self.totals[k] += v * batch_size
ValueError: operands could not be broadcast together with shapes (64,1) (63,1) (64,1)
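The shape clash itself is easy to reproduce in isolation with plain NumPy (the shapes are taken from the traceback; the variable names are mine). It suggests the logged value `v` is a per-sample array of shape (batch, 1) rather than a scalar, so the totals accumulated from a 64-sample batch cannot absorb the final 63-sample batch:

```python
import numpy as np

# BaseLogger does `self.totals[k] += v * batch_size`. If v is a per-sample
# array instead of a scalar, batches of different sizes cannot be accumulated.
totals = np.zeros((64, 1))   # accumulated from the full-size batches
v = np.ones((63, 1))         # per-sample values from the final, short batch
try:
    totals += v * 63
except ValueError as e:
    print(e)  # operands could not be broadcast together with shapes ...
```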

If I use a batch size that divides 3,903 evenly (e.g. batch_size=3), there is no error in on_batch_end, but an error occurs in on_epoch_end instead.

  File "tests.py", line 926, in <module>
    test_crosstrain_gen(args.datafiles[0], args.datafiles[1], args.classes, args.batch_size, **totalArgs)
  File "tests.py", line 569, in test_crosstrain_gen
    hist, model = cf.trainModelAugGen(trainGen, trainDataFile, **modTrainArgs)
  File "/playpen/wilson/classify/classify.py", line 714, in trainModelAugGen
    hist = model.fit_generator(augGen, steps, epochs=epochs, callbacks=callbacks, verbose=verbose, max_queue_size=61) #,use_multiprocessing=True)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/engine/training.py", line 2380, in fit_generator
    callbacks.on_epoch_end(epoch, epoch_logs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/callbacks.py", line 94, in on_epoch_end
    callback.on_epoch_end(epoch, logs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/callbacks.py", line 766, in on_epoch_end
    summary_value.simple_value = value.item()
ValueError: can only convert an array of size 1 to a Python scalar
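This one is also reproducible on its own: the callback calls value.item(), which only works when the logged value is a single-element array. The shapes below are illustrative, since the actual size of the logged value isn't shown in the traceback:

```python
import numpy as np

# `.item()` succeeds only for single-element arrays.
print(np.array([[0.5]]).item())   # fine: prints the Python float 0.5
try:
    np.ones((3, 1)).item()        # multi-element array
except ValueError as e:
    print(e)  # can only convert an array of size 1 to a Python scalar
```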

If I use batch_size=1, there is no error in on_batch_end or on_epoch_end, but evaluate_generator fails instead of fit_generator.

Traceback (most recent call last):
  File "tests.py", line 926, in <module>
    test_crosstrain_gen(args.datafiles[0], args.datafiles[1], args.classes, args.batch_size, **totalArgs)
  File "tests.py", line 576, in test_crosstrain_gen
    accuracy, loss = cf.testModelAugGen(model, testGen, testSteps)
  File "/playpen/wilson/classify/classify.py", line 788, in testModelAugGen
    loss, accuracy = model.evaluate_generator(augGen, steps)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/_impl/keras/engine/training.py", line 2522, in evaluate_generator
    np.average([out[i] for out in all_outs], weights=batch_sizes))
  File "/usr/local/lib/python2.7/dist-packages/numpy/lib/function_base.py", line 1142, in average
    "Axis must be specified when shapes of a and weights "
TypeError: Axis must be specified when shapes of a and weights differ.
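The same pattern seems to underlie this failure: evaluate_generator averages the per-batch outputs with np.average, which requires a and weights to have matching shapes when no axis is given. If each out[i] is an array instead of a scalar, the shapes differ. A minimal sketch with made-up numbers:

```python
import numpy as np

# If each per-batch metric is a (1, 1) array rather than a scalar, stacking
# them gives shape (steps, 1, 1) while the weights have shape (steps,).
all_outs = [np.ones((1, 1)) for _ in range(4)]
batch_sizes = [1, 1, 1, 1]
try:
    np.average(all_outs, weights=batch_sizes)
except TypeError as e:
    print(e)  # Axis must be specified when shapes of a and weights differ.
```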