Losing One Record when Reading with TensorFlow Slim

I am working on the TensorFlow Slim example that fine-tunes Inception-v3 on the flowers dataset, and I am adapting it to my project. The data need to be read in order because they are frames from videos, i.e. "frame1, frame2, frame3, frame4, ...". I therefore made the following modifications in train_image_classifier.py:

##############################################################
# Create a dataset provider that loads data from the dataset #
##############################################################
with tf.device(deploy_config.inputs_device()):
  provider = slim.dataset_data_provider.DatasetDataProvider(
      dataset,
      num_readers=FLAGS.num_readers,
      shuffle=_data_shuffle,
      common_queue_capacity=20 * FLAGS.batch_size,
      common_queue_min=10 * FLAGS.batch_size)
  [image, label] = provider.get(['image', 'label'])
  label -= FLAGS.labels_offset
  train_image_size = FLAGS.train_image_size or network_fn.default_image_size

  image = image_preprocessing_fn(image, train_image_size, train_image_size)

  images, labels = tf.train.batch(
      [image, label],
      batch_size=FLAGS.batch_size,
      num_threads=FLAGS.num_preprocessing_threads,
      capacity=5 * FLAGS.batch_size)

  labels = tf.Print(labels, [labels], "....labels=", summarize=32)

  labels = slim.one_hot_encoding(
      labels, dataset.num_classes - FLAGS.labels_offset)
  batch_queue = slim.prefetch_queue.prefetch_queue(
      [images, labels], capacity=2 * deploy_config.num_clones)

I set shuffle to False (via _data_shuffle in the snippet) and num_readers=1, so that the DatasetDataProvider reads the data sequentially without parallel jobs. I also modified image_preprocessing_fn so that it only resizes the image, with no flipping, cropping, or distortion. For tf.train.batch(), num_preprocessing_threads=1 is set as well.
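To check my own assumption, I sketched the intent of these settings outside TensorFlow: with one reader thread and one preprocessing thread feeding FIFO queues, frames should reach the batcher in their original order. This is a toy stand-in (plain Python, no TensorFlow; the queue sizes loosely mimic common_queue_capacity and the batch capacity), not the real slim pipeline:

```python
import queue
import threading

def sequential_pipeline(frames):
    """One reader + one preprocessing thread over FIFO queues: order is kept."""
    read_q = queue.Queue(maxsize=20)   # ~ common_queue
    batch_q = queue.Queue(maxsize=5)   # ~ tf.train.batch capacity
    SENTINEL = object()

    def reader():                      # ~ num_readers=1
        for f in frames:
            read_q.put(f)
        read_q.put(SENTINEL)

    def preprocess():                  # ~ num_preprocessing_threads=1
        while True:
            f = read_q.get()
            if f is SENTINEL:
                batch_q.put(SENTINEL)
                return
            batch_q.put(f * 2)         # stand-in for resizing

    threads = [threading.Thread(target=reader),
               threading.Thread(target=preprocess)]
    for t in threads:
        t.start()
    out = []
    while True:
        f = batch_q.get()
        if f is SENTINEL:
            break
        out.append(f)
    for t in threads:
        t.join()
    return out

# With single threads at every stage, the original order survives.
assert sequential_pipeline(list(range(64))) == [2 * i for i in range(64)]
```

So, as far as I can tell, single-threaded stages alone should be enough to preserve order, which makes the output below puzzling.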

To fit batch_size=32, I extracted 32*n frames from each video. Frames from the same video share the same label, so every batch should contain a single label. However, the output of tf.Print() shows that the labels after batching look like this:

labels=[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0]
....
labels=[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1]
....
labels=[2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
....

Am I missing anything above? It looks as if some data is dropped or read out of order. I made the same modification in eval_image_classifier.py, and evaluation works fine without this problem.

Thanks.