Actually, in BGRABitmap you have the BGRADiff function (or BGRAWordDiff, to be more precise) that you can apply to two pixels; it gives you a colour difference that is close to the perceived difference...
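To illustrate the idea of a perceptually weighted pixel difference (this is NOT BGRAWordDiff's actual formula, just the well-known "redmean" approximation, sketched in Python rather than Pascal):

```python
import math

def perceived_diff(c1, c2):
    """Approximate perceived difference between two (R, G, B) pixels
    using the 'redmean' weighted Euclidean distance -- a common stand-in
    for a perceptual metric. 0 means identical; larger means more
    visibly different."""
    r1, g1, b1 = c1
    r2, g2, b2 = c2
    rbar = (r1 + r2) / 2.0           # mean red level steers the weights
    dr, dg, db = r1 - r2, g1 - g2, b1 - b2
    return math.sqrt((2 + rbar / 256) * dr * dr
                     + 4 * dg * dg
                     + (2 + (255 - rbar) / 256) * db * db)
```

Summing such a per-pixel difference over a whole frame gives a distance that tracks how different two frames *look*, rather than raw byte differences.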
Thanks. I'll try that way too.
The simplest way is doing a hash on both. Does not need a secure hash.
But, given the layout of your architecture, I suspect you may want to detect whether an image contains steganography compared to the original: in that case simply use a hash. ELF hash may suffice and is extremely fast (7 ops in assembler, 15 ops in pure Pascal on Intel).
And if the hashes do not match you can further investigate pixel by pixel, which is of course much slower.
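The hash-first, compare-on-collision workflow described above might look like this (Python sketch; the ELF hash is the classic 32-bit System V algorithm):

```python
def elf_hash(data: bytes) -> int:
    """Classic 32-bit ELF hash -- fast and perfectly adequate for
    non-cryptographic duplicate detection."""
    h = 0
    for byte in data:
        h = ((h << 4) + byte) & 0xFFFFFFFF
        g = h & 0xF0000000
        if g:
            h ^= g >> 24
        h &= ~g & 0xFFFFFFFF
    return h

def frames_identical(frame_a: bytes, frame_b: bytes) -> bool:
    """Cheap hash check first; only fall back to the slow full
    comparison when the hashes actually match (to rule out a
    collision)."""
    if elf_hash(frame_a) != elf_hash(frame_b):
        return False
    return frame_a == frame_b
```

Here `frame_a`/`frame_b` stand for the raw pixel bytes of each frame; in practice you would hash each frame once and cache the values rather than rehash on every comparison.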
Thanks. The original reason for this, in the original post, was to save having to rotoscope 642 frames of video, which would take a long time. The source video, which was 814 frames, had already been rotoscoped, and the 814-frame video seemed to have just been edited down with various cuts and re-timings (e.g. speed-ups) to get to the 642-frame video, so it should just have been a case of copying the equivalent rotoscoped frames from the 814-frame video to create the 642-frame video. But the frames probably wouldn't have matched exactly due to re-compression of the video. So a hash looking for an exact match of the pixel values probably wouldn't work correctly in that case, because re-compression makes the RGB values in the exported frames slightly different.
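Since exact hashing fails once re-compression has nudged the RGB values, a tolerance-based "closest frame" search is needed instead. A minimal sketch (frames shown as flat lists of 0-255 channel values; the layout is hypothetical, adapt to however the bitmap rows are actually stored):

```python
def frame_distance(frame_a, frame_b):
    """Sum of absolute per-channel differences between two frames.
    Small values mean 'the same frame, slightly re-compressed'."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b))

def closest_frame(target, candidates):
    """Index of the candidate with the smallest distance to target --
    tolerant of the small RGB shifts that re-compression causes,
    unlike an exact hash match."""
    return min(range(len(candidates)),
               key=lambda i: frame_distance(target, candidates[i]))
```

So a re-compressed copy of frame N still maps back to frame N, because it is *closest*, even though no channel value matches exactly.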
I did think another way could be to look at smaller-resolution BMP versions of each frame first, or to only sample every so many pixels of the BMPs (in the original post I'd mentioned only checking every 100th pixel in x and y, but it was still not fast enough that way). E.g. in theory you could compare quarter-size bitmaps first (or some other low enough resolution) to find the ones that look closest RGB-wise at that size, then compare just the closest few at half size, then the closest of those at full size. That may be a faster way to find the closest frame RGB-wise than comparing every full-size pixel of every image, since at full resolution you'd only be checking the ones that were already close at the lower resolutions.
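The coarse-to-fine idea above can be sketched like this (two levels instead of three for brevity; grayscale pixels in a flat row-major list, and the crude every-Nth-pixel downscale stands in for proper averaging):

```python
def downscale(frame, width, factor):
    """Crude downscale by sampling every `factor`-th pixel in x and y.
    `frame` is a flat row-major list; per-channel RGB works the same way."""
    height = len(frame) // width
    return [frame[y * width + x]
            for y in range(0, height, factor)
            for x in range(0, width, factor)]

def distance(a, b):
    return sum(abs(p - q) for p, q in zip(a, b))

def closest_coarse_to_fine(target, candidates, width, keep=4):
    """Rank all candidates at quarter resolution, keep only the best
    few, then re-rank just that shortlist at full resolution -- so the
    expensive full-size comparison only runs on frames that already
    looked close when small."""
    small_t = downscale(target, width, 4)
    ranked = sorted(range(len(candidates)),
                    key=lambda i: distance(small_t,
                                           downscale(candidates[i], width, 4)))
    shortlist = ranked[:keep]
    return min(shortlist, key=lambda i: distance(target, candidates[i]))
```

With N candidate frames and `keep` much smaller than N, almost all the work happens at 1/16th of the pixel count.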
There are probably quite a lot of uses for things like this though. Just finding duplicated frames in one video should also help with rotoscoping work, since it should let you work out whether a 60 fps video really just contains a 24 or 30 fps source (or some similar low-fps video encoded at a higher fps), in which case you could export it at the true frame rate and roto that, saving a lot of rotoscoping time. The original idea of copying rotoscoped matching frames (.bmps) might also work for much longer videos (especially if the lower-res versions are compared first, to speed it up). E.g. if you needed to rotoscope a 30-60 minute talking-head video, it might be that a 5-minute section would be enough to get close-enough mask shapes for the rest, and you could use these "closest frames" functions to copy the rotoscoped 5-minute frames onto the closest frames of the 30-60 minute video, creating an accurate-enough rotoscoped video of up to 60 minutes. Though maybe some AI might be a better way to do that.
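The "low fps hiding inside a high-fps encode" check could be sketched as follows: if the source was really 30 fps encoded at 60, every frame appears twice in a row, so the typical run length of identical consecutive frames reveals the true rate. This assumes clean duplication (with re-compression noise you'd compare frames with a tolerance instead of `==`, e.g. per-frame hashes replaced by a distance threshold):

```python
from collections import Counter

def estimate_true_fps(frame_hashes, encoded_fps):
    """Estimate the underlying frame rate of a video that duplicates
    frames to reach a higher encoded rate: measure the length of each
    run of identical consecutive frames, take the most common run
    length, and divide the encoded fps by it."""
    runs = []
    run = 1
    for prev, cur in zip(frame_hashes, frame_hashes[1:]):
        if cur == prev:
            run += 1
        else:
            runs.append(run)
            run = 1
    runs.append(run)                      # close the final run
    typical = Counter(runs).most_common(1)[0][0]
    return encoded_fps / typical
```

A result of 30.0 from a 60 fps encode would mean you can safely export at 30 fps and roto half as many frames.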