How does a computer convert a binary number to decimal for printing on screen (especially arbitrarily large numbers)?

I have been wondering about the following dilemma: everything a computer sees and knows is in binary. Text is stored as bytes, and to print each character the computer just references its code, but what about numbers? (Each digit has an ASCII code, of course, but that is not what I mean.)
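To illustrate the distinction I'm drawing, here is a small C sketch of my own (not from any particular library) showing that the text "1024" is four separate bytes while the number 1024 is a single binary value:

```c
#include <stdio.h>

int main(void) {
    /* The text "1024" is four separate bytes, one ASCII code per digit. */
    const char text[] = "1024";
    for (int i = 0; text[i] != '\0'; i++) {
        printf("'%c' is stored as byte 0x%02X\n", text[i], (unsigned char)text[i]);
    }

    /* The number 1024 is one binary value: 10000000000 in base 2. */
    int number = 1024;
    printf("the number 1024 is stored as the bits ");
    for (int bit = 10; bit >= 0; bit--) {
        putchar(((number >> bit) & 1) ? '1' : '0');
    }
    putchar('\n');
    return 0;
}
```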

The point is that I want to understand in detail how, when the computer sees 10000000000 (binary), it knows that it should print four digits, namely 1, 0, 2 and 4, and likewise for any large number (arbitrarily large, even a number so large that nobody knows what to use it for, or the largest known prime, for the sake of science). We know the computer sees only binary, and even after we "convert" a value to decimal it is still stored in binary. That process of converting to decimal and printing it for those who can't read binary is what I yearn to understand. (And please, I don't want answers that just say "do the multiplications in binary"; I only want to know how it knows how many digits it should print, how it gets the digit sequence right, and how it then prints it.)
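To be concrete about what I mean by "getting the digit sequence right": for a number that fits in one machine word, I can see how a sketch like the one below (my own, written just for this question) peels off decimal digits by repeated division by 10. What I don't see is how this generalises to numbers far bigger than any machine word.

```c
#include <stdio.h>

int main(void) {
    /* Sketch for a word-sized number only: repeatedly divide by 10;
       each remainder is one decimal digit, produced last digit first. */
    unsigned long long n = 1024;   /* 10000000000 in binary */
    char digits[32];
    int count = 0;

    do {
        digits[count++] = '0' + (char)(n % 10);  /* least significant digit */
        n /= 10;
    } while (n != 0);

    /* count tells me how many characters to print; printing the buffer
       in reverse gives the digits in the right order: 1, 0, 2, 4. */
    while (count > 0) {
        putchar(digits[--count]);
    }
    putchar('\n');
    return 0;
}
```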

I have searched the internet up and down and still haven't managed to find an answer that satisfies my curiosity.