Training vs. validation accuracy and loss look wrong

Can anyone help me interpret these graphs?

I have been training SENet (the TensorLayer implementation) for image classification on several datasets.

I started with CIFAR-10 and got satisfying results with it. I use data augmentation in all my experiments.
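For context, the augmentation is the usual CIFAR-style random crop plus horizontal flip; this is a minimal numpy sketch, and the exact padding and flip probability here are assumptions, not necessarily what my pipeline uses:

```python
import numpy as np

def augment(img, pad=4, rng=None):
    """Random crop (after reflection padding) and random horizontal flip.

    img: HxWxC array. pad=4 and the 0.5 flip probability are the
    common CIFAR defaults (assumed, not confirmed settings).
    """
    rng = rng or np.random.default_rng()
    h, w, _ = img.shape
    # Pad by `pad` pixels on each spatial side, then crop back to HxW
    # at a random offset.
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    out = padded[top:top + h, left:left + w]
    # Flip left-right half of the time.
    if rng.random() < 0.5:
        out = out[:, ::-1]
    return out
```

The output keeps the input's shape, so it can be applied per sample inside any data loader.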

Next I tried training it on Caltech256 and the results were pretty unusual, maybe due to the small number of images per class (30-50).

Finally I used Tiny-ImageNet, which has enough training samples per class (500), but the results still look odd, somewhat similar to the Caltech256 results.

Is this a problem with the data?