Heyas User137!
Pardon in advance for the stream-of-consciousness thinking -- this is developing as I research it.
A few points:
- If the array-order switch happens at compile time, you can't switch between DirectX and OpenGL at runtime. (Sure, you could still use a launcher that starts a specific executable for either case.)
Actually, if the difference is at the type-definition level, you can have types of both major orders in the same executable, even in the same unit. It would require two base matrix type definitions, one per major order, but the matrix-manipulation code can be shared. I realize that actual dereferences of the matrix array will have to use one order or the other, but I think I can get around that issue. For example, consider this code:
Type
  {$ARRAYMAJORORDER COLUMN}
  TMatrix4x4CM = Array[0..3, 0..3] of Single;
  {$ARRAYMAJORORDER ROW}
  TMatrix4x4RM = Array[0..3, 0..3] of Single;

Var
  A1: TMatrix4x4CM;
  A2: TMatrix4x4RM;

...

// Set translation parameters
Procedure SetOpenGLTranslate(X, Y, Z: Single);
Begin
  A1[0, 3] := X;
  A1[1, 3] := Y;
  A1[2, 3] := Z;
End;

Procedure SetDirectXTranslate(X, Y, Z: Single);
Begin
  A2[0, 3] := X;
  A2[1, 3] := Y;
  A2[2, 3] := Z;
End;
...
Obviously, I can't directly dereference one type and have it act as the other, but a typecast like this should work:
If UseDirectX Then
Begin
  TMatrix4x4RM(A1)[0, 3] := X; // Stores using row-major order, even though A1 is column-major in default accesses
  TMatrix4x4RM(A1)[1, 3] := Y;
  TMatrix4x4RM(A1)[2, 3] := Z;
End;
Also, as a fallback, I think I can use object inheritance to unify the matrix types, adding an element-access function or method that I may be able to inline. The existing Matrix unit already uses this approach to some extent.
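A minimal sketch of that fallback, using hypothetical names (TMatrixBase, GetElement) rather than anything taken from the actual Matrix unit:

```pascal
Type
  // Hypothetical base class holding flat storage; each descendant
  // interprets the same 16 floats in its own major order
  TMatrixBase = Class
  Protected
    FData: Array[0..15] of Single;
  End;

  TRowMajorMatrix = Class(TMatrixBase)
  Public
    Function GetElement(Row, Col: Integer): Single; Inline;
  End;

  TColMajorMatrix = Class(TMatrixBase)
  Public
    Function GetElement(Row, Col: Integer): Single; Inline;
  End;

Function TRowMajorMatrix.GetElement(Row, Col: Integer): Single;
Begin
  Result := FData[Row * 4 + Col]; // row-major: consecutive elements walk along a row
End;

Function TColMajorMatrix.GetElement(Row, Col: Integer): Single;
Begin
  Result := FData[Col * 4 + Row]; // column-major: consecutive elements walk down a column
End;
```

Note that if GetElement were made virtual to allow runtime dispatch through the base class, it could no longer be inlined, so the sketch keeps one non-virtual method per concrete class.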
Even if I have to specify the major order unit-wide, that would still be fine; I expect the API-specific interface parts of the engine to live in separate units for each API. Mainly, I want to avoid the performance hit of constantly transposing the matrix before making the API calls for one of the APIs (normally OpenGL, since it is the opposite of the current default).
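For context, the per-call overhead I want to avoid is a transpose along these lines (a generic sketch, not code from the engine):

```pascal
Type
  TMatrix4x4 = Array[0..3, 0..3] of Single;

// Copy M into Result with rows and columns swapped. Doing this on
// every API call is exactly the cost the compile-time switch removes.
Function Transposed(Const M: TMatrix4x4): TMatrix4x4;
Var
  R, C: Integer;
Begin
  For R := 0 To 3 Do
    For C := 0 To 3 Do
      Result[C, R] := M[R, C];
End;
```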
-----
- Some define a 4x4 matrix as array[0..15] of Single. From that you can't distinguish the order, since the compiler doesn't know the row length.
This proposed change is intended for explicitly declared multi-dimensional arrays. If someone wants to use a linear array and do their own matrix element offset calculations, it should not affect their ability to do so, regardless of what the compile-time option is set to (or whether it exists at all).
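To illustrate: with a flat array, the major order lives in the programmer's offset arithmetic, not in the declaration, so the directive has nothing to act on (generic sketch):

```pascal
Var
  M: Array[0..15] of Single;

// The same flat storage can be addressed either way; the choice is
// made in the index formula, which the proposed directive never touches
Function RowMajorAt(Row, Col: Integer): Single;
Begin
  Result := M[Row * 4 + Col];
End;

Function ColMajorAt(Row, Col: Integer): Single;
Begin
  Result := M[Col * 4 + Row];
End;
```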
Many thanks for the valuable input, as usual. It helps me a lot to think through the issues and potential pitfalls.