I definitely don't make any assumptions. I make sure the compiler behaves as I expect it to and, not only that, as it _has_ to.
That's the reason I only use "const" when dealing with ordinal types, and I am fully aware that some types are ordinal in 64 bit but not in 32 bit, e.g., qword. For anything I haven't done in the past, I make it a point to look at the generated assembler to ensure it is the way I expect it to be (and the way the compiler should be coding it).
I wholeheartedly disagree with that approach. When using a high-level language you should not make any assumptions (or have expectations) about the underlying assembly. To give an example: in C, plain "char" is the only integer type that is neither signed nor unsigned by definition; whether it behaves as signed or unsigned is implementation-defined. The reason for this is simple: some processors are faster with signed chars, others with unsigned, so in order to produce optimal code for any CPU, C does not pin this down.
I noted earlier that, up until C++20, C++ had no defined representation for signed integer types. The reason is that, coming from C, it may be implemented on machines that use sign and magnitude, ones' complement, or two's complement. This has only been changed recently because two's complement is so common that it no longer makes sense to accommodate the others.
But what I'm getting at here is: if you write a program in valid C, or C++, or Pascal, or any other high-level language, it should work exactly the same on any machine and any CPU, no matter if it's a 64-bit little-endian x64 CPU or an 8-bit big-endian Motorola 6809 chip.
In C it goes even so far that reinterpreting bit representations is fenced off by the standard: doing it through pointer casts violates strict aliasing and is undefined behavior, and the value you get from type punning through a union depends on the implementation-defined representation. Meaning: if you write fully standard C without any implementation-defined or undefined behavior, it will run exactly the same on any CPU.
So whenever writing code in a high-level language it's best to assume it's implemented using fairy dust and magic, and not think about what happens on the assembler level. Assumptions about the generated assembler work ok-ish with a language like Pascal, which has little undefined behavior and frankly rather few optimizations, but you still shouldn't bet on it: there is constant work on the FPC, and as I said previously, I'm personally very curious about the LLVM backend, as LLVM can do some crazy optimizations.
PS: also, with Pascal or C I'm of course talking about rather low-level languages, where there is an "obvious" mapping to the assembly they result in. If you go to much higher-level languages such as Haskell or something similar, thinking about your code in terms of assembly is going to give you much more trouble.