In general, the pre-existing Win32 API was already extended to 64-bit in the nineties for non-x86 architectures (e.g. I had a DEC Alpha running a Windows 2000 beta at some point). So basically Win32 is Win32/64.
Thank you for the answer.
I asked because I'm surprised to see very complex and advanced software vendors still offering a 32-bit build for sale (and I need to purchase one). On Microsoft support, I found that they had to modify the 32-bit API around the "XP Home"/"XP Pro" releases, because the presence of several processors on the same machine was not detected by their API (the same problem appeared again a little later with the arrival of multi-core CPUs). So I was wondering whether the 32-bit multi-threading APIs really do the same job as the 64-bit versions...
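For what it's worth, here is a minimal sketch (my own, not something from the vendors in question) of how either build can ask the system for its logical processor count through the Win32 SYSTEM_INFO structure; GetNativeSystemInfo is used so that a 32-bit build running under WOW64 still reports the native hardware:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;

    /* GetNativeSystemInfo reports the real hardware even when a
       32-bit build runs under WOW64 on 64-bit Windows. */
    GetNativeSystemInfo(&si);

    printf("Logical processors : %lu\n", si.dwNumberOfProcessors);
    printf("Architecture code  : %u\n",  si.wProcessorArchitecture);
    return 0;
}
```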
The core difference is that the pointer size is different, so structures and parameters may have become bigger. A bigger change is that 64-bit uses a different, unified calling convention.
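To make that concrete, here is a quick sketch (the WorkItem structure is just a made-up example) showing the sizes that change between a 32-bit and a 64-bit build:

```c
#include <windows.h>
#include <stdio.h>

/* WorkItem is a made-up example structure: its size and field offsets
   differ between builds because of the pointer-sized members. */
typedef struct {
    DWORD  id;      /* 4 bytes in both builds            */
    void  *buffer;  /* 4 bytes on Win32, 8 bytes on x64  */
    SIZE_T length;  /* 4 bytes on Win32, 8 bytes on x64  */
} WorkItem;

int main(void)
{
    printf("sizeof(void*)    = %u\n", (unsigned)sizeof(void *));
    printf("sizeof(SIZE_T)   = %u\n", (unsigned)sizeof(SIZE_T));
    printf("sizeof(WorkItem) = %u\n", (unsigned)sizeof(WorkItem));
    /* Typical output: 4/4/12 for a 32-bit build, 8/8/24 for a 64-bit
       build (padding after 'id' accounts for 4 extra bytes on x64). */
    return 0;
}
```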
This is what I realized after posting my question: the first difference, in the multi-threading domain, between the same software compiled for 32-bit and for 64-bit can probably be roughly summarized as the amount of memory that can be addressed and moved per CPU cycle. So I like to imagine the 32-bit version as an assertion that you will work with smaller data sets (somewhere around the 64/32 ratio) when using the 32-bit rather than the 64-bit build: a kind of assertion that forces you to be more rigorous and to avoid both large data items and large item counts (a shield against data inflation, a silly parallel with V6 versus V12 engines in the car industry, a constraint to keep the data more summarised).
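Just to put a number on that mental picture, here is a toy sketch of my own, assuming a default Windows configuration where a 32-bit process gets roughly 2 GB of user address space: the same allocation that a 64-bit build takes in stride simply cannot fit in a 32-bit process.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    /* SIZE_MAX is the hard ceiling on any single object's size:
       about 4 GB for a 32-bit build, astronomically larger for 64-bit.
       The OS grants much less in practice, but the ceiling is what
       forces 32-bit code to keep its data sets small. */
    printf("SIZE_MAX = %llu bytes\n", (unsigned long long)SIZE_MAX);

    /* Try to reserve a 3 GB working set: usually fine in a 64-bit
       process, bound to fail in a default 32-bit Windows process. */
    size_t want = (size_t)3 * 1024 * 1024 * 1024;
    void *p = malloc(want);
    printf("3 GB allocation: %s\n", p ? "succeeded" : "failed");

    free(p); /* free(NULL) is harmless if the allocation failed */
    return 0;
}
```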