Forum > Databases

IBX - Buffer Chunks - every day use questions


Nicole:
In the manual at page 52, chapter 6.2.1 there is a header "BufferChunks".

What happens if the BufferChunks value is too low? Will there be a performance problem, or will there be a loss of data?
Are there code snippets showing how to read blocks of the buffer one by one?

If I run a count on my query, how should I choose the value?

e.g. the count returns 100.
Shall I set BufferChunks to 100, to 101, or to 110?

When is such a count query appropriate, and when should it be skipped because of its own performance cost?

Thanks

Thaddy:
- If the code is implemented correctly you should never have data loss. (I am not an IBX fan!)
- A buffer size is best chosen as a power of two, so 8, 16, 32, 64, 128, 256, 512, 1024 etc., because many compilers, including FPC, can optimize such sizes very efficiently. E.g. a buffer of 100 or 101 will usually be slower than either 64 or 128.
- The right upper limit depends on CPU, storage speed and memory size; it is a bit of trial and error unless you can specify your hardware. But too small or too large a buffer will definitely affect performance.

rvk:

--- Quote ---A buffer size is best chosen as a power of two, so 8, 16, 32, 64, 128, 256, 512, 1024 etc., because many compilers, including FPC, can optimize such sizes very efficiently. E.g. a buffer of 100 or 101 will usually be slower than either 64 or 128.

--- End quote ---
It's not buffer size we are talking about here but BufferChunks.


--- Quote from: Nicole on November 02, 2022, 08:42:19 am ---In the manual at page 52, chapter 6.2.1 there is a header "BufferChunks".

--- End quote ---
For reference, below is the complete text.

You mention setting the BufferChunks too low. Well, a BufferChunk is one row, so you can't set it below 1. With a very low value, IBX needs to reallocate the buffer pool after every BufferChunks rows it reads, which will give you a performance hit. There will be no data loss.
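To put rough numbers on that reallocation cost: the pool grows by BufferChunks rows each time it fills up, so reading N rows costs on the order of Ceil(N / BufferChunks) reallocations. A minimal plain-Pascal illustration (the row counts are example figures, not measurements):

```pascal
program ChunkMath;
{$mode objfpc}
uses Math;
var
  Rows: Integer = 100000;
begin
  // Approximate number of buffer-pool growths while reading 100,000 rows:
  WriteLn('BufferChunks=1:     ', Ceil(Rows / 1));      // 100000 reallocations
  WriteLn('BufferChunks=1000:  ', Ceil(Rows / 1000));   // 100 reallocations
  WriteLn('BufferChunks=25000: ', Ceil(Rows / 25000));  // 4 reallocations
end.
```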

If you count your records and you know for sure there are 100, you can set BufferChunks to 100.
Only if you need to add records during the session would a higher number be logical.

But before you go tinkering with this property... the default is 1000... you need to wonder why you would want to change this.
Do you have an answer for that? How large are your records? Why is 1000 too high (or low) for you?
Do you have memory problems, or extremely large records?

I have lots (and lots) of TIBQueries in my program and I've never touched BufferChunks (I had no need to).



--- Quote ---BufferChunks
Important Performance Parameter: This parameter determines the size by which the internal buffer allocation pool is increased every time it becomes fully used. The default is 1000 rows.

IBX will eventually cache the complete dataset in internal buffers. If the dataset is known to only ever have a few rows then BufferChunks can be set to a small number (e.g. 10 if the number of rows is typically less than 10) and the memory footprint is reduced.

On the other hand, if the number of rows is large (e.g. 100,000) then setting the BufferChunks to a larger figure (e.g. 25000) avoids a too frequent reallocation of the buffer pool as the dataset is read in. However, the figure should be chosen carefully to avoid a large number of unused buffers once the dataset has been read in. In some cases, it may even be appropriate to determine this figure at run time by first querying the database to return a count of the number of rows in the dataset and then setting BufferChunks just before the dataset is opened.
--- End quote ---
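The run-time approach described in the last paragraph of that quote could be sketched roughly like this (a sketch only; the component names qryCount and qryData, the table name and the headroom of 10 are assumptions for illustration, not taken from the manual):

```pascal
// Sketch: size BufferChunks from a row count just before opening the dataset.
// Assumes two TIBQuery components (qryCount, qryData) wired to the same
// TIBDatabase and TIBTransaction.
var
  RowCount: Integer;
begin
  qryCount.SQL.Text := 'SELECT COUNT(*) FROM MY_TABLE';
  qryCount.Open;
  try
    RowCount := qryCount.Fields[0].AsInteger;
  finally
    qryCount.Close;
  end;

  // A little headroom avoids one extra reallocation if rows are added
  // during the session; for small datasets the default of 1000 is fine anyway.
  qryData.BufferChunks := RowCount + 10;
  qryData.Open;
end;
```

Note that the count query is itself a round trip to the server, which is why the manual suggests this only for large datasets where the saved reallocations outweigh it.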

Nicole:
Thank you for your answers.

Why did I think about changing it?
I have a problem with a TAChart: about 12 fields every day, going back for decades.

The behaviour of my TAChart is fuzzy. The most probable reason is a typo of mine, which I have not found so far. There are so many options that could be wrong that I would not make myself any friends by posting the whole source here.

Therefore, I am going through it one by one. One thing is the input data. When I ported the code from Embarcadero's FireDAC, I worked there with blocks of 50. They were pushed in as query blocks and I addressed them as such.

That part is behind me now: the code snippet from Delphi / FireDAC which dealt with the buffer. I cannot find it any more. Apparently I was so glad not to need it any more that I even deleted the commented-out lines of it.

Anyhow: if I asked (in Delphi XE3 with FireDAC) "how much data do you have?", it shouted "50!", because the dummy counted the block size instead of the data count. The old code had to cope with such answers.

rvk:

--- Quote from: Nicole on November 02, 2022, 11:04:33 am ---The behaviour of my TAChart is fuzzy. The most probable reason is a typo of mine, which I have not found so far. There are so many options that could be wrong that I would not make myself any friends by posting the whole source here.
--- End quote ---
Then the default of 1000 BufferChunks is not the problem. With 50 records you could set it to a lower value (like 100), but that will not solve your problem and you will not gain much (especially if you have no out-of-memory problems). And changing the BufferChunks might give (performance) problems when that record count increases.

So, although changing the BufferChunks has its advantages, it won't solve any of those specific problems (depending on what fuzzy problem you have with TAChart). I would only look at changing BufferChunks if you have performance or memory problems.
