@tf.function incorrectly handles None (batch) dimension of input tensor

I want to use the @tf.function decorator to take advantage of TensorFlow's AutoGraph feature, which converts Python code into a static TensorFlow graph, so that I can later use the result as part of a .pb static graph for evaluation.

I have a keras.layers.Input(...) tensor as the input. As usual, Keras' Input layer has an unknown (None) batch dimension (the 0th one).

In my decorated function I use a Python for loop over this unknown dimension, expecting TensorFlow to infer the size of the 0th (batch) dimension from each real input that is fed in and to execute the corresponding, then-known, number of loop iterations.

But judging by the results, TensorFlow simply assumes that this unknown dimension is always equal to 0 and traces a static loop of zero iterations. Hence the printed result [] in the code below when llen is not provided. I expect the printed result to be result [1 7] instead, as in the case when I explicitly pass llen = 2 to the function.

Is there any way at all to iterate over an unknown (None) dimension, executing a different number of loop iterations and tensor transformations depending on the actual input?
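
For reference, here is a minimal sketch of what I imagine a dynamic iteration over the batch dimension could look like with tf.map_fn (ft_mapped is my own hypothetical name; I have not verified this is the intended approach), though I would still like to know whether a plain Python loop can work:

import tensorflow as tf

@tf.function
def ft_mapped(t):  # hypothetical name, not part of the original code
    # tf.map_fn iterates over the 0th (batch) dimension at run time,
    # so the number of iterations should follow the actual batch size.
    return tf.map_fn(lambda row: row[0, 0], t)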

The following code demonstrates the problematic behavior:

import numpy as np
import tensorflow as tf

def f(t, llen=None):
    if llen is None:
        # With an unknown (None) batch dimension this appears to evaluate
        # to 0 during tracing, which is the reported problem.
        llen = int(tf.shape(t)[0])
    l = [0] * llen
    for i in range(len(l)):
        # Replace the i-th placeholder with the corresponding tensor element.
        l = l[:i] + [t[i, 0, 0]] + l[i + 1:]
    return tf.convert_to_tensor(l)

@tf.function
def ft(t, llen=None):
    return f(t, llen)

input_ = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])

for eager in [True, False]:
    print('is eager:', eager)
    if eager:
        tf.compat.v1.enable_eager_execution()
        # Eagerly, the batch size is known, so the loop runs twice.
        print('result', f(input_).numpy())
    else:
        tf.compat.v1.disable_eager_execution()
        for llen in [None, 2]:
            print('input llen', llen)
            # Keras Input with shape [2, 3] and an unknown (None) batch dimension.
            i = tf.keras.layers.Input([2, 3])
            t = ft(i, llen)
            with tf.compat.v1.Session() as sess:
                # Prints [] when llen is None, [1 7] when llen == 2.
                print('result', sess.run(t, {i: input_}))
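
In case a Python-style loop is required, my understanding (an assumption on my part, for TF 1.14+ / 2.x) is that AutoGraph only converts a for loop into a dynamic tf.while_loop when the loop bound is itself a tensor, e.g. tf.range(tf.shape(t)[0]), with a tf.TensorArray as the accumulator. A rough sketch (ft_dynamic is a hypothetical name of mine):

import tensorflow as tf

@tf.function
def ft_dynamic(t):  # hypothetical name, not part of the original code
    n = tf.shape(t)[0]                    # dynamic batch size (a tensor, not a Python int)
    ta = tf.TensorArray(t.dtype, size=n)  # dynamically sized accumulator
    for i in tf.range(n):                 # tensor bound, so AutoGraph should emit tf.while_loop
        ta = ta.write(i, t[i, 0, 0])
    return ta.stack()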