First of all: it works. But very strangely.
The property PersistentFieldsCreated won't stay True, so it will always be False when the project is loaded for the first time. Therefore it serves no purpose anymore, does it? Unless you want the designer to set it to True every time the project is loaded.
That was intentional. It creates the persistent fields and reverts the property value to False, but hey, the persistent fields are there :-) Why? Well, simply because I couldn't figure out how to prevent it from creating the persistent fields twice when loading (once from the stream and a second time from setting this property to True)...
Anyway, the property itself is not important; the intention was just to enable the programmer, at design time, to create all persistent fields from the FieldDefs at once and then just set additional properties... All of that can be done with the fields editor, but this should be faster and you don't miss any fields by mistake...
The purpose of creating persistent fields at design time was to improve speed, if I'm correct. I wonder what speed you're trying to improve when the dataset itself uses JanSQL, which only does sequential searches as far as I know. Isn't that a bit penny wise and pound foolish?
No, speed was not on my mind. Only the ability to set, at design time, some additional properties for fields that you can't set on FieldDefs - this can be useful when multiple data controls (DBGrids, for example) are connected to the same dataset. Otherwise, you have to set them for each data control separately...
Regarding JanSQL, it is only one option for loading data into TZMQueryDataSet - there are all the other possibilities inherited from TBufDataset, like streaming from files, copying from other datasets, loading from .csv files...
And you can combine all the dataset's methods with JanSQL... they don't exclude one another...
Using a fixed CSV format is not very nice. The purpose of ZMSQL is to serve a single user, but that user will produce CSV files as output from a spreadsheet, for instance, and may not have the possibility to specify the output format. Also, more than half of the world uses a comma as the decimal "point", and that cannot be specified either. Someone suggested making that the programmer's task; that is not a valid proposition. The programmer should not have to translate the written table back to native notation. He should ask the environment for the decimal point, the thousands separator and the list separator, and have a component like ZMSQL where he can pass that specification.
Actually, I was just working on those issues over the last few days.

First of all, there are SysUtils.DefaultFormatSettings.DecimalSeparator and SysUtils.DefaultFormatSettings.ThousandSeparator, which you can read and write in your program. On Windows the system settings are deduced correctly and automatically; on Linux, the clocale unit must be in the uses clause, so I put it in ZMSQL. Also, I have just changed ZMSQL so that the system setting for the decimal separator is always used (both in TZMQueryDataSet and JanSQL, so that the two are consistent with each other). I also added a piece of code that enables correct loading of float values, no matter which decimal separator was used in the .csv file. The function is the following:
function TZMQueryDataSet.FormatStringToFloat(pFloatString: string): Double;
//Transform a float value inside a string using the adequate decimal separator.
//Requires SysUtils and StrUtils in the uses clause.
var
  fs: TFormatSettings;
  vFloatString, vLeftPart, vRightPart: String;
  vFloatValue: Double;
  vDelimiterPos: Integer;
begin
  fs.DecimalSeparator := SysUtils.DefaultFormatSettings.DecimalSeparator;
  case SysUtils.DefaultFormatSettings.DecimalSeparator of
    '.':
      begin
        //Replace decimal separator
        vFloatString := StringReplace(pFloatString, ',', '.', [rfReplaceAll]);
      end;
    ',':
      begin
        //Replace decimal separator
        vFloatString := StringReplace(pFloatString, '.', ',', [rfReplaceAll]);
      end;
  end;
  //Additional check for remaining thousand separators. If they exist, they should be removed.
  vDelimiterPos := RPos(SysUtils.DefaultFormatSettings.DecimalSeparator, vFloatString);
  vLeftPart := AnsiLeftStr(vFloatString, vDelimiterPos - 1);
  vRightPart := AnsiRightStr(vFloatString, Length(vFloatString) - vDelimiterPos + 1);
  if AnsiContainsStr(vLeftPart, SysUtils.DefaultFormatSettings.DecimalSeparator) then
  begin
    vLeftPart := AnsiReplaceStr(vLeftPart, SysUtils.DefaultFormatSettings.DecimalSeparator, '');
    vFloatString := vLeftPart + vRightPart;
  end;
  //Get result.
  vFloatValue := StrToFloat(vFloatString, fs);
  Result := vFloatValue;
end;

So, by using this function, TZMQueryDataSet will be able to load any float value correctly, whether "." or "," was the decimal separator. It will even load the value if someone left thousand separators in the CSV file...
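The key idea above is the StrToFloat overload that takes an explicit TFormatSettings, so parsing does not depend on the global locale. Here is a minimal standalone sketch of that idea (this little demo program is my own illustration, not part of zmsql):

```pascal
program FloatDemo;
{$mode objfpc}{$H+}
uses
  SysUtils;
var
  fs: TFormatSettings;
begin
  //Start from the system defaults, then force ',' as decimal separator
  fs := DefaultFormatSettings;
  fs.DecimalSeparator := ',';
  //'3,14' is parsed correctly because fs, not the global locale, is used
  WriteLn(StrToFloat('3,14', fs):0:2);
end.
```
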
So, only ";" column separator in .CSV files will remain mandatory for the CSV file format. That's because jansql requires that. Although, this could be propably easily changed too...For instance, this is a function from Lazarus Book, that can be used for automatic separator detection in a CSV file:
function DetermineSeparator(AFileName: String; var HasFieldNames: Boolean): Char;
const
  Seps: array[1..5] of Char = (',', ';', #9, '@', '#');
var
  F: TextFile;
  S: String;
  I: Integer;
begin
  AssignFile(F, AFileName);
  Reset(F);
  try
    Readln(F, S);
  finally
    CloseFile(F);
  end;
  Result := #0;
  //Scan the line for the separator character:
  I := 0;
  while (Result = #0) and (I < 5) do
  begin
    Inc(I);
    if Pos(Seps[I], S) <> 0 then Result := Seps[I];
  end;
  //Try and detect the presence of field names:
  //no spaces or double separator:
  if Result <> #0 then
    HasFieldNames := (Pos(' ', S) = 0) and
                     (Pos(Result + Result, S) = 0);
end;

Regarding the JanSQL SQL engine... I hoped some more experienced programmer from the Lazarus community would improve it.
After all, I'm just a hobbyist.
