
Author Topic: PostgreSQL text vs. varchar, field size, loadfromfile  (Read 7881 times)

tatamata

  • Hero Member
  • *****
  • Posts: 787
    • ZMSQL - SQL enhanced in-memory database
PostgreSQL text vs. varchar, field size, loadfromfile
« on: February 16, 2016, 08:25:11 pm »
I am having trouble retrieving SQL query results from PostgreSQL tables with varchar (character varying) or text columns into a DBGrid.
If I use the text datatype in PostgreSQL, the result set cannot be read in the DBGrid, since the column is treated as a memo (?!?).
If I change the datatype to varchar, SQLQuery seems to treat every column as having the maximum length. If I save the result set to a file and load it into a TBufDataset using the LoadFromFile method, it occupies all available memory and loading is incredibly slow...
Is there a way to optimize this?
Note that I cannot restrict the varchar size in PostgreSQL, since the tables are created dynamically from imported files whose structure is not known at design time....

tatamata

  • Hero Member
  • *****
  • Posts: 787
    • ZMSQL - SQL enhanced in-memory database
Re: PostgreSQL text vs. varchar, field size, loadfromfile
« Reply #1 on: February 17, 2016, 09:46:31 am »
Is there a way I could:
a) force the DBGrid to show the contents of ftMemo fields?
b) get rid of the extra spaces in varchar (character varying) fields before the SQLQuery is loaded into memory?

tatamata

  • Hero Member
  • *****
  • Posts: 787
    • ZMSQL - SQL enhanced in-memory database
Re: PostgreSQL text vs. varchar, field size, loadfromfile
« Reply #2 on: February 17, 2016, 10:16:37 am »
I've found something: "SQLDB always buffers all the records. Thus if you have a unlimited
varchar, sqldb will allocate approximate 2000 (the max) bytes for this
field for every record. Keep that in mind while you're designing your
database."
(https://www.mail-archive.com/search?l=lazarus@miraclec.com&q=subject:%22Re%5C%3A+%5C%5Blazarus%5C%5D+before+I+go+to+far+with+this%22&o=newest&f=1)
So this explains the root of my problem... What might be the solution?

JanRoza

  • Hero Member
  • *****
  • Posts: 672
    • http://www.silentwings.nl
Re: PostgreSQL text vs. varchar, field size, loadfromfile
« Reply #3 on: February 17, 2016, 12:33:19 pm »
As for question a: use the OnGetText event for the TMemoFields in your table, like:

Code: Pascal
procedure TDataModuleZZZ.Table1SomeMemoFieldGetText(Sender: TField;
  var Text: string; DisplayText: Boolean);
begin
  Text := Table1SomeMemoField.AsString;
end;

Not sure this works for PostgreSQL but it sure does for SQLite.
OS: Windows 10 (64 bit) / Linux Mint (64 bit)
       Lazarus 3.2 FPC 3.2.2
       CodeTyphon 8.40 FPC 3.3.1

goodname

  • Sr. Member
  • ****
  • Posts: 297
Re: PostgreSQL text vs. varchar, field size, loadfromfile
« Reply #4 on: February 17, 2016, 02:35:09 pm »
There are two options: either cast the field to varchar(255) in the SELECT query, or use the OnGetText event.
Code: Pascal
SELECT CAST(textfield AS varchar(255)) FROM tbl;

procedure TDataModuleZZZ.Table1SomeMemoFieldGetText(Sender: TField;
  var aText: string; DisplayText: Boolean);
begin
  if DisplayText then aText := Table1SomeMemoField.AsString;
end;

tatamata

  • Hero Member
  • *****
  • Posts: 787
    • ZMSQL - SQL enhanced in-memory database
Re: PostgreSQL text vs. varchar, field size, loadfromfile
« Reply #5 on: February 17, 2016, 04:32:31 pm »
Hi @goodname and @JanRoza,
In my case both the PostgreSQL tables and the SQLQuery SQL text are constructed dynamically, so there are no persistent fields in the SQLQuery dataset. The structure of the PostgreSQL tables and of the SQL is not known at design time.
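For illustration, a minimal sketch of how the OnGetText approach could still be used without persistent fields (untested against PostgreSQL; the names SQLQuery1, TForm1 and the handler are assumptions, not code from this thread): open the dynamically built query, then walk its Fields collection and attach a shared handler to every memo field at runtime.

Code: Pascal
// Sketch only: assumes DB and SQLDB in the uses clause and a TSQLQuery named SQLQuery1.
procedure TForm1.AnyMemoFieldGetText(Sender: TField; var aText: string;
  DisplayText: Boolean);
begin
  if DisplayText then
    aText := Sender.AsString;  // show the field's text instead of '(Memo)'
end;

procedure TForm1.OpenDynamicQuery(const ASQL: string);
var
  i: Integer;
begin
  SQLQuery1.Close;
  SQLQuery1.SQL.Text := ASQL;
  SQLQuery1.Open;
  // No persistent fields: hook the handler onto whatever memo fields show up.
  for i := 0 to SQLQuery1.Fields.Count - 1 do
    if SQLQuery1.Fields[i] is TMemoField then
      SQLQuery1.Fields[i].OnGetText := @AnyMemoFieldGetText;
end;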

goodname

  • Sr. Member
  • ****
  • Posts: 297
Re: PostgreSQL text vs. varchar, field size, loadfromfile
« Reply #6 on: February 18, 2016, 03:32:39 am »
I have never used them, but I'm fairly sure TPQConnection has getSchema or getTable functions that may help.
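As a hedged sketch of what such metadata access could look like: TSQLConnection (which TPQConnection descends from) exposes GetTableNames and GetFieldNames, and PostgreSQL's information_schema reports declared column lengths. The connection/query names and the example table name below are illustrative assumptions, not code from the thread.

Code: Pascal
// Sketch only: PQConnection1, SQLQuery1 and 'mytable' are assumed names.
procedure TForm1.ListSchemaInfo;
var
  Tables: TStringList;
begin
  Tables := TStringList.Create;
  try
    // List user tables through the SQLDB connection itself.
    PQConnection1.GetTableNames(Tables, False);

    // Or ask PostgreSQL directly for column types and declared lengths.
    SQLQuery1.SQL.Text :=
      'SELECT column_name, data_type, character_maximum_length ' +
      'FROM information_schema.columns WHERE table_name = :tbl';
    SQLQuery1.Params.ParamByName('tbl').AsString := 'mytable';
    SQLQuery1.Open;
  finally
    Tables.Free;
  end;
end;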

tatamata

  • Hero Member
  • *****
  • Posts: 787
    • ZMSQL - SQL enhanced in-memory database
Re: PostgreSQL text vs. varchar, field size, loadfromfile
« Reply #7 on: February 19, 2016, 03:03:55 pm »
It seems that SQLdb can't deal with variable-length columns (text and varchar (character varying)) properly, since it always reserves a fixed amount of memory per column cell. Too bad and ridiculous - Lazarus needs 30 minutes for something that pgAdmin retrieves in a few seconds.
Anyway, I bypassed this Lazarus limitation by analyzing the textual CSV files prior to importing them into PostgreSQL, so that the corresponding PostgreSQL tables are created with fixed lengths according to the maximum string length found in each CSV column.
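As a rough illustration of that pre-scan (a hedged sketch, not the actual import code; the semicolon delimiter and the lack of quote handling are assumptions), the import step could first measure the widest value per column and then build a CREATE TABLE statement with matching varchar(n) sizes:

Code: Pascal
uses
  Classes, SysUtils;

type
  TIntArray = array of Integer;

// Sketch only: find the widest value per column of a simple
// semicolon-separated CSV, ignoring quoting and escaping.
function MeasureCsvColumnWidths(const FileName: string): TIntArray;
var
  Lines, Cells: TStringList;
  i, c: Integer;
begin
  Result := nil;
  Lines := TStringList.Create;
  Cells := TStringList.Create;
  try
    Cells.Delimiter := ';';
    Cells.StrictDelimiter := True;   // do not split on spaces
    Lines.LoadFromFile(FileName);
    for i := 0 to Lines.Count - 1 do
    begin
      Cells.DelimitedText := Lines[i];
      if Cells.Count > Length(Result) then
        SetLength(Result, Cells.Count);   // new entries are zero-initialized
      for c := 0 to Cells.Count - 1 do
        if Length(Cells[c]) > Result[c] then
          Result[c] := Length(Cells[c]);
    end;
  finally
    Cells.Free;
    Lines.Free;
  end;
end;

The resulting widths could then feed the dynamically generated CREATE TABLE ... varchar(width) columns described above.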

taazz

  • Hero Member
  • *****
  • Posts: 5368
Re: PostgreSQL text vs. varchar, field size, loadfromfile
« Reply #8 on: February 19, 2016, 06:32:22 pm »
Quote from: tatamata
It seems that SQLdb can't deal with variable-length columns (text and varchar (character varying)) properly, since it always reserves a fixed amount of memory per column cell. Too bad and ridiculous - Lazarus needs 30 minutes for something that pgAdmin retrieves in a few seconds.
Anyway, I bypassed this Lazarus limitation by analyzing the textual CSV files prior to importing them into PostgreSQL, so that the corresponding PostgreSQL tables are created with fixed lengths according to the maximum string length found in each CSV column.
That doesn't seem right. Can you share a demo to demonstrate the problem?
Good judgement is the result of experience … Experience is the result of bad judgement.

OS : Windows 7 64 bit
Laz: Lazarus 1.4.4 FPC 2.6.4 i386-win32-win32/win64

goodname

  • Sr. Member
  • ****
  • Posts: 297
Re: PostgreSQL text vs. varchar, field size, loadfromfile
« Reply #9 on: February 21, 2016, 03:13:55 pm »
Quote from: tatamata
It seems that SQLdb can't deal with variable-length columns (text and varchar (character varying)) properly, since it always reserves a fixed amount of memory per column cell. Too bad and ridiculous - Lazarus needs 30 minutes for something that pgAdmin retrieves in a few seconds.

I think this is really a data-aware control issue. I have used TSQLQuery.AfterScroll to assign data to non-data-aware controls and the "memo" issue does not happen.
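A hedged sketch of that approach (the control and field names are assumptions, not from the post): an AfterScroll handler copies the current row's text into an ordinary TMemo, so the grid never has to render the memo column.

Code: Pascal
// Sketch only: SQLQuery1, Memo1 and the column name 'textfield' are assumed.
procedure TForm1.SQLQuery1AfterScroll(DataSet: TDataSet);
begin
  // Runs whenever the current record changes.
  Memo1.Lines.Text := DataSet.FieldByName('textfield').AsString;
end;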

goodname

  • Sr. Member
  • ****
  • Posts: 297
Re: PostgreSQL text vs. varchar, field size, loadfromfile
« Reply #10 on: February 21, 2016, 09:56:35 pm »
Out of curiosity I took some time to look into this. It looks like at one time pqconnection.pp mapped Oid_text to ftString; it currently maps it to ftMemo. The difference is that ftString uses PQgetlength to figure out how much space to allocate, while ftMemo creates a blob. I'm guessing a blob allocates 2000 bytes whether they are needed or not. The PQgetlength function exists to handle this situation, so that only the bytes that are actually needed are allocated. Apparently using it is even more important when allocating space for bytea fields.

Code:
pqconnection.pp
//    Oid_text               : Result := ftstring;
    Oid_text,Oid_JSON      : Result := ftMemo;
http://www.postgresql.org/docs/9.0/static/libpq-exec.html
Quote
PQgetlength

    Returns the actual length of a field value in bytes. Row and column numbers start at 0.

    int PQgetlength(const PGresult *res,
                    int row_number,
                    int column_number);

    This is the actual data length for the particular data value, that is, the size of the object pointed to by PQgetvalue. For text data format this is the same as strlen(). For binary format this is essential information. Note that one should not rely on PQfsize to obtain the actual data length.

EDIT: It looks like TPQConnection.LoadBlobIntoBuffer is using PQgetlength, so unless I'm missing a step, blobs only allocate the required memory. So as far as I can tell, either this is just a "data-aware controls don't handle memo fields" issue, or a demo program, as requested by taazz, is needed to figure out what is going on.
« Last Edit: February 22, 2016, 11:33:22 pm by goodname »

 
