

Hmm, I’m not an expert on image AI, but I think your idea of how this works is close, though not exactly right. The image is encoded into tokens (vectors) by an encoder model, and then those tokens are decoded into a new image. The intermediate tokens aren’t really text descriptions of the image, but maybe that distinction is beside the point? The lossiness is the same either way.
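To make that concrete, here’s a minimal sketch of the encode → discrete tokens → decode pipeline, assuming a toy VQ-VAE-style tokenizer (the class name, layer sizes, and codebook size are all hypothetical, not any specific model):

```python
import torch
import torch.nn as nn

class ToyImageTokenizer(nn.Module):
    def __init__(self, codebook_size=1024, dim=64):
        super().__init__()
        # Encoder: image -> grid of continuous feature vectors
        self.encoder = nn.Sequential(
            nn.Conv2d(3, dim, 4, stride=4),    # 64x64 image -> 16x16 features
            nn.ReLU(),
            nn.Conv2d(dim, dim, 4, stride=4),  # 16x16 -> 4x4 features
        )
        # Codebook: each feature vector gets snapped to its nearest entry,
        # so the image becomes a short sequence of integer token ids
        self.codebook = nn.Embedding(codebook_size, dim)
        # Decoder: token embeddings -> reconstructed image (lossy)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(dim, dim, 4, stride=4),
            nn.ReLU(),
            nn.ConvTranspose2d(dim, 3, 4, stride=4),
        )

    def encode(self, img):
        feats = self.encoder(img)  # (B, dim, 4, 4)
        flat = feats.permute(0, 2, 3, 1).reshape(-1, feats.shape[1])
        # Nearest-neighbour lookup in the codebook = quantization (the lossy step)
        dists = torch.cdist(flat, self.codebook.weight)
        return dists.argmin(dim=1).reshape(img.shape[0], -1)  # (B, 16) token ids

    def decode(self, token_ids):
        emb = self.codebook(token_ids)  # (B, 16, dim)
        grid = emb.reshape(-1, 4, 4, emb.shape[-1]).permute(0, 3, 1, 2)
        return self.decoder(grid)       # (B, 3, 64, 64) reconstruction

tokenizer = ToyImageTokenizer()
image = torch.rand(1, 3, 64, 64)
tokens = tokenizer.encode(image)  # integer ids, not text
recon = tokenizer.decode(tokens)  # a new image, not exactly the original
print(tokens.shape, recon.shape)
```

The point is just that the tokens are discrete codebook indices, not words, and the information lost at the quantization step is what makes the round trip lossy.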
I believe so, and some of those tokens may not translate well into text at all; instead they can represent specific or abstract visual features. There would be an entirely separate neural network (or part of one) specifically for decoding the tokens into text.
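As a rough illustration of that separate “tokens → text” piece, here’s a small hedged sketch of a captioning head that reads image-token embeddings and emits text-token logits (the names, sizes, and architecture are hypothetical toy choices, not how any particular model does it):

```python
import torch
import torch.nn as nn

class ToyCaptionHead(nn.Module):
    def __init__(self, image_dim=64, text_vocab=32000, hidden=128):
        super().__init__()
        # Project image-token embeddings into the text decoder's space
        self.proj = nn.Linear(image_dim, hidden)
        # A tiny sequence decoder standing in for a real language model
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        # Logits over the text vocabulary
        self.to_vocab = nn.Linear(hidden, text_vocab)

    def forward(self, image_token_embeddings):
        x = self.proj(image_token_embeddings)  # (B, 16, hidden)
        out, _ = self.decoder(x)
        return self.to_vocab(out)              # (B, 16, text_vocab)

head = ToyCaptionHead()
fake_image_tokens = torch.rand(1, 16, 64)  # stand-in for tokenizer embeddings
text_logits = head(fake_image_tokens)
print(text_logits.shape)                   # torch.Size([1, 16, 32000])
```

This is only meant to show that turning image tokens into words is an extra, learned mapping on top of the tokenizer, not something the tokens carry by themselves.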