I give up on this question - don't bother yourself anymore.
I won't, but there is something fundamentally wrong with the coding logic used, so with this response I'll try to prevent others from making the same 'mistake'.
As explained by wp, a TBitmap internally stores the image data as either 24- or 32-bit, depending on the platform.
A true 8-bit bitmap does not store any RGB pixels. For the pixels it stores indices into the palette that is part of the image. The palette (the actual colors) might be stored in a variety of ways, but the most common is 32-bit ARGB/BGRA (though that does not have to be the case).
Both images in your example project are 256-color bitmaps consisting of ... well ... 256 colors.
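To illustrate the indexed layout described above, here is a minimal sketch. The record and field names are illustrative only (they are not LCL types); the BGRA quad ordering is the standard Windows BMP palette convention:

```pascal
type
  // One palette entry; BMP files store these as BGRA quads
  TPaletteEntry = packed record
    B, G, R, Reserved: Byte;
  end;

  // Conceptual layout of an 8-bit (256-color) indexed image:
  // each pixel byte is an index into the palette, not a color itself
  TIndexed8Image = record
    Palette: array[0..255] of TPaletteEntry;  // the 256 actual colors
    Pixels: array of Byte;                    // one palette index per pixel
  end;

// Resolving the real color of pixel i means looking the index up
function PixelColor(const Img: TIndexed8Image; i: Integer): TPaletteEntry;
begin
  Result := Img.Palette[Img.Pixels[i]];
end;
```

So the pixel data of a true 8-bit bitmap is one byte per pixel, and the colors live only in the palette.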
@wp:
The problem is that TBitmap always seems to expand the pixel format to 24 or 32 bits per pixel. You can see this when you query the PixelFormat after loading the image:
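A minimal sketch of such a check (the filename is hypothetical; `GetEnumName` from the TypInfo unit is only used here to print the enum value readably):

```pascal
program PixFmtCheck;
{$mode objfpc}{$H+}
uses
  Interfaces,        // pulls in the LCL widgetset
  Graphics, TypInfo;
var
  Bmp: TBitmap;
begin
  Bmp := TBitmap.Create;
  try
    Bmp.LoadFromFile('image8bit.bmp');  // hypothetical 256-color bitmap
    // Typically prints pf24bit or pf32bit even for an 8-bit file,
    // because TBitmap converts to the platform's native format on load
    WriteLn(GetEnumName(TypeInfo(TPixelFormat), Ord(Bmp.PixelFormat)));
  finally
    Bmp.Free;
  end;
end.
```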
If I do make use of TBitmap, I normally use the raw image description's depth to determine the (real) pixel depth. At least in the past, the PixelFormat property was said to be not (always) reliable.
Do you happen to know whether using the PixelFormat property to determine the internally stored format is still not recommended, and whether RawImage.Description should be preferred instead?
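For reference, this is roughly what I mean by using the raw image description (filename hypothetical; note that this, too, reports the format as stored internally after any conversion on load):

```pascal
program RawDepthCheck;
{$mode objfpc}{$H+}
uses
  Interfaces,           // pulls in the LCL widgetset
  Graphics, GraphType;  // TRawImageDescription lives in GraphType
var
  Bmp: TBitmap;
begin
  Bmp := TBitmap.Create;
  try
    Bmp.LoadFromFile('image8bit.bmp');  // hypothetical 256-color bitmap
    // Depth is the number of meaningful bits per pixel;
    // BitsPerPixel includes any padding bits
    WriteLn('Depth:        ', Bmp.RawImage.Description.Depth);
    WriteLn('BitsPerPixel: ', Bmp.RawImage.Description.BitsPerPixel);
  finally
    Bmp.Free;
  end;
end.
```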