Use one tokenizer or two tokenizers for a translation task?

I’ve seen several tutorials on seq2seq tasks like translation. They usually use two tokenizers trained on the corpus, one for the source language and one for the target language. However, in Hugging Face’s translation task example, they use a single tokenizer for both languages. I wonder which is the better way: one tokenizer or two? If I use two tokenizers, the number of output classes would be smaller, and it might eliminate tokens that the target language doesn’t have, which could improve the result. Or is it fine to use one tokenizer, with the same performance? Please help me, thanks in advance!
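To make the question concrete, here is a minimal, stdlib-only sketch of the intuition about output-class size. It uses a made-up word-level “tokenizer” and toy corpora (not the Hugging Face API) purely to show that a joint vocabulary makes the decoder’s softmax cover source-only tokens a target-language decoder never needs to emit:

```python
# Toy illustration: compare the output-layer size implied by one joint
# vocabulary versus a target-only vocabulary. Corpora are made up.

en_corpus = ["the cat sat on the mat", "the dog ran"]
de_corpus = ["die katze sass auf der matte", "der hund lief"]

def vocab(sentences):
    """Build a naive word-level vocabulary from a list of sentences."""
    return {w for s in sentences for w in s.split()}

joint_vocab = vocab(en_corpus + de_corpus)   # one tokenizer for both languages
target_vocab = vocab(de_corpus)              # separate target-side tokenizer

# With two tokenizers, the decoder's output layer only covers target-language
# tokens; with one shared tokenizer it also includes source-only tokens
# (e.g. "cat") that the target-side decoder can never need to produce.
print(len(joint_vocab), len(target_vocab))   # joint vocab is strictly larger
```

Whether that larger softmax actually hurts in practice is exactly the trade-off being asked about; shared tokenizers also allow shared embeddings, which is why many multilingual models use them.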
