In math and computers there exist only numbers... nothing more.
A symbol is always some number: 'A' = 65, etc... in some context of course - the human context.
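A quick sketch of that (assuming an ASCII platform; the literal values are mine, not from the post): the "symbol" and the number are the same bits, just printed differently.

```c
#include <stdio.h>

int main(void) {
    /* A char is just a small integer; in ASCII, 'A' happens to be 65. */
    char c = 'A';
    printf("'%c' as a number: %d\n", c, c);    /* prints: 'A' as a number: 65 */
    printf("%d as a symbol: '%c'\n", 65, 65);  /* prints: 65 as a symbol: 'A' */
    return 0;
}
```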
And a double is a floating-point number... 52 stored mantissa bits (53 bits of precision, counting the implicit leading 1) + 11 exponent bits + 1 sign bit = 64 bits.
What is 1.25 in bits, bytes, or chars?
If we go down that route, then the above is wrong too.
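One honest answer is "whatever bit pattern the floating-point encoding assigns to it". A minimal sketch (assuming IEEE 754 doubles, which practically every mainstream machine uses):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double d = 1.25;                /* 1.01 (binary) * 2^0 */
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits); /* reinterpret the same 8 bytes as an integer */

    printf("1.25 as raw bits: 0x%016llx\n", (unsigned long long)bits);
    /* Expected on an IEEE 754 machine: 0x3ff4000000000000
       sign = 0, exponent = 0x3ff (biased 1023, i.e. 2^0),
       mantissa = 0x4000000000000 (the leading 1 is implicit). */
    return 0;
}
```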
In today's mainstream computers, there exist (only) "bits": a concept with two states, represented by electronic circuits.
Those bits are organized into groups, and patterns of those bits can be ("have the option of being") interpreted as numbers.
There also exist circuits that act on those bits, translating certain patterns into other patterns by performing logic operations. Some of those circuits do this in a way that the result matches the result of a mathematical operation on the numerical interpretation of those bit patterns.
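To make that concrete, here is a sketch (in C, my choice, not from the post) of addition done purely with logic operations, the same trick a hardware ripple-carry adder uses; the result matches the numeric `+` exactly.

```c
#include <stdio.h>
#include <stdint.h>

/* Addition built only from logic operations (XOR, AND, shift),
   mimicking what a hardware ripple-carry adder does. */
static uint32_t add_with_logic(uint32_t a, uint32_t b) {
    while (b != 0) {
        uint32_t carry = a & b;   /* positions where both bits are 1 */
        a = a ^ b;                /* sum without the carries */
        b = carry << 1;          /* carries move one position to the left */
    }
    return a;
}

int main(void) {
    uint32_t x = 1234, y = 5678;
    printf("logic: %u, arithmetic: %u\n", add_with_logic(x, y), x + y);
    return 0;  /* both print 6912 */
}
```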
Feel free to take the above apart. It is (in all probability) riddled with inaccuracies and even mistakes.
SCNR