
Author Topic: IBX : Can it handle database operation when network connection is not stable?  (Read 25233 times)

MarkMLl

  • Hero Member
  • *****
  • Posts: 6646
If I use Sqlite, I don't know; perhaps for every dataset there must be a corresponding Sqlite table. If I have 4 datasets in a form, I have to create 4 Sqlite tables as buffers.

...which I'd expect you to be able to do under program control.

Quote
With SQL scripts, there is only a series of strings for all datasets.
I can store these strings in a DB string grid, then send them all to the database server.

...and if the user turns the computer off before it's synced with the server, zhe loses work.

Quote
Back in C++ Builder, I have functions that can convert values stored in any dataset to SQL scripts; I plan to convert those functions to Lazarus functions.

Certainly looks worth doing, but will that capture locally-entered data?

MarkMLl
MT+86 & Turbo Pascal v1 on CCP/M-86, multitasking with LAN & graphics in 128Kb.
Pet hate: people who boast about the size and sophistication of their computer.
GitHub repositories: https://github.com/MarkMLl?tab=repositories

Zvoni

  • Hero Member
  • *****
  • Posts: 2300
Remember that a ping is not sufficient.
Ping the machine and the machine answers „ok, I’m online“, but the db-server/service is not running.
One System to rule them all, One Code to find them,
One IDE to bring them all, and to the Framework bind them,
in the Land of Redmond, where the Windows lie
---------------------------------------------------------------------
Code is like a joke: If you have to explain it, it's bad

MarkMLl

  • Hero Member
  • *****
  • Posts: 6646
Remember that a ping is not sufficient.
Ping the machine and the machine answers „ok, I’m online“, but the db-server/service is not running.

Yes, but at that point an attempt at reconnecting will fail. A ping is a good first step provided you know the IP address; some schools of thought hold that you should do the database server name lookup manually, so that connections are not also at risk of failure due to a failed DNS server.

There are other useful tests, but depending on the OS, things like finding the current gateway and next-hop router can be surprisingly tricky.
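To make this concrete, here is a minimal Free Pascal sketch of such a check, assuming the FCL resolve and ssockets units and Firebird's default port 3050; DatabaseServerReachable is an invented name, and production code would also want a connect timeout:

uses
  SysUtils, resolve, ssockets;

// Probe the server with a real TCP connect: unlike a ping, this only
// succeeds if something is actually listening on the database port.
function DatabaseServerReachable(const AHost: string; APort: Word): Boolean;
var
  Resolver: THostResolver;
  Probe: TInetSocket;
begin
  Result := False;
  // Do the name lookup ourselves, so a dead DNS server shows up as a
  // lookup failure rather than as an obscure connection error.
  Resolver := THostResolver.Create(nil);
  try
    if not Resolver.NameLookup(AHost) then
      Exit;
  finally
    Resolver.Free;
  end;
  try
    Probe := TInetSocket.Create(AHost, APort); // raises ESocketError on failure
    Probe.Free;
    Result := True;
  except
    on ESocketError do
      Result := False;
  end;
end;

A call such as DatabaseServerReachable('dbserver', 3050) then answers "is anything listening there?" before IBX attempts its own connection.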

MarkMLl
MT+86 & Turbo Pascal v1 on CCP/M-86, multitasking with LAN & graphics in 128Kb.
Pet hate: people who boast about the size and sophistication of their computer.
GitHub repositories: https://github.com/MarkMLl?tab=repositories

incendio

  • Sr. Member
  • ****
  • Posts: 266
Certainly looks worth doing, but will that capture locally-entered data?
MarkMLl

Yes


Yes, but at that point an attempt at reconnecting will fail. A ping is a good first step provided you know the IP address, some schools of thought have it that you should do a database server name lookup manually so that connections are not also at risk of failure due to a failed DNS server.

There's other useful tests, but depending on OS things like finding the current gateway and next-hop router can be surprisingly tricky.

MarkMLl

No need for ping. I think using IBX alone, with cached updates as a buffer and a stringlist to hold the SQL script, is enough to get the job done.
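As a rough sketch of that stringlist approach (a sketch only; it assumes an IBX TIBSQL already wired to the database and to the transaction, and the names are illustrative): buffer each generated statement, then replay the whole buffer in a single transaction once the server is back.

uses
  Classes, SysUtils, IBDatabase, IBSQL;

// Replay buffered statements in one transaction; on any failure the
// transaction is rolled back and the buffer is kept for a later retry.
procedure ReplayBufferedSQL(Buffer: TStringList; Query: TIBSQL;
  Txn: TIBTransaction);
var
  i: Integer;
begin
  Txn.StartTransaction;
  try
    for i := 0 to Buffer.Count - 1 do
    begin
      Query.SQL.Text := Buffer[i]; // one complete statement per entry
      Query.ExecQuery;
    end;
    Txn.Commit;
    Buffer.Clear;                  // only discard once safely committed
  except
    Txn.Rollback;
    raise;                         // let the caller decide when to retry
  end;
end;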

MarkMLl

  • Hero Member
  • *****
  • Posts: 6646
Please keep us updated.

MarkMLl
MT+86 & Turbo Pascal v1 on CCP/M-86, multitasking with LAN & graphics in 128Kb.
Pet hate: people who boast about the size and sophistication of their computer.
GitHub repositories: https://github.com/MarkMLl?tab=repositories

tonyw

  • Sr. Member
  • ****
  • Posts: 319
    • MWA Software
This thread does seem to be going off at a tangent to the original post.

What you need to decide is what you are trying to achieve in terms of recovery from database connection loss. Broadly speaking, there are three levels of response to this event:

1.   Let the user work it out.
This is a minimal approach, and OK for basic applications and when connection loss is rare. All you need to do is report the exception and exit the program. The user is then responsible for restarting the program, working out what data was lost, if any, and restoring any important changes. The user's task is simplified if updates are committed soon after they are made, or even auto-committed, as then they only have to look at the last thing they did. Caching too many updates before applying/committing them just makes the problem worse.

2.   Assisted Restart
On connection loss, your program closes down the connection (usually a forced disconnect) in order to restore itself to a known state. It then re-opens the connection, reopens the datasets that were open when the connection was lost, and tells the user that some data may have been lost (see the sketch after this list). Again, it is the user's responsibility to work out what was lost, etc.

3.   Journaling
This is where you pre-empt the problem by recording, in some journal file, every SQL Update, Insert and Delete statement, the transaction they were executed under, and when each transaction starts and is either committed or rolled back. The journal file can be no more than a text file with one statement per line.

When a connection is lost, your program does a forced disconnect and re-opens the connection. It then looks in the journal for any uncommitted transactions and, for each one, starts a new transaction with the same parameters and executes the recorded SQL statements in the same order as they were journaled, under that transaction. The datasets are then reopened and the user should see exactly the same state as when the connection was lost. Note that the recovery transactions are not committed. That is because the view is restored to exactly the same state and it is up to the user to decide whether to commit or roll back.
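Returning to option 2 for a moment, the skeleton is short. A sketch, assuming an IBX TIBDatabase/TIBTransaction pair and a single dataset; ForceClose is used here for the forced disconnect (check the exact call in your IBX version), and a real application would loop over all of its datasets:

uses
  SysUtils, Classes, DB, Dialogs, IBDatabase;

// Assisted restart: force the client back to a known state, reconnect,
// reopen, and tell the user that recent work may be gone.
procedure AssistedRestart(DB: TIBDatabase; Txn: TIBTransaction; DS: TDataSet);
begin
  try
    DB.ForceClose;          // forced disconnect: drop the dead connection
  except
    // the connection is already broken; errors here are expected
  end;
  DB.Open;                  // reconnect
  Txn.StartTransaction;
  DS.Open;                  // reopen the dataset(s) that were open before
  ShowMessage('The connection was lost and has been re-established. ' +
    'Changes made just before the failure may not have been saved.');
end;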

The problem with the above comes if the connection is lost while committing a transaction. You then do not know whether it was committed or is still in limbo. You need to save some information server side with each transaction that allows a simple way of determining whether the commit succeeded. This could be as simple as a last transaction identifier recorded by an update statement that is executed under the transaction.
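For example (a sketch; APP_CONTROL, LAST_TXN_NO and the helper names are invented, and the exact TIBSQL calls may differ between IBX versions): stamp every transaction with a client-generated sequence number under that same transaction, then compare after reconnecting.

uses
  SysUtils, IBSQL;

// Executed under the same transaction as the real work, so the stamp is
// committed if and only if that transaction is.
procedure StampTransaction(Query: TIBSQL; ASeqNo: Int64);
begin
  Query.SQL.Text := 'UPDATE APP_CONTROL SET LAST_TXN_NO = :SEQNO WHERE ID = 1';
  Query.ParamByName('SEQNO').AsInt64 := ASeqNo;
  Query.ExecQuery;
end;

// After reconnecting: if this returns the last sequence number we issued,
// the commit succeeded; otherwise the transaction was lost or is in limbo.
function LastCommittedSeqNo(Query: TIBSQL): Int64;
begin
  Query.SQL.Text := 'SELECT LAST_TXN_NO FROM APP_CONTROL WHERE ID = 1';
  Query.ExecQuery;
  Result := Query.Fields[0].AsInt64;
  Query.Close;
end;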

IBX does not include an actual journaling capability – although this is an idea for a future release. However, it does include TISQLMonitor. This allows you to snoop on every transaction and SQL statement, and use this to record to a journal file.
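To make the journaling idea concrete, a hand-rolled journal writer can be as simple as the sketch below; the class and the record tags S/Q/C/R are inventions for illustration, not IBX's format:

uses
  SysUtils, Classes;

type
  // One text record per line: S = transaction start, Q = query,
  // C = commit, R = rollback.
  TSimpleJournal = class
  private
    FLog: TFileStream;
    procedure WriteLine(const S: string);
  public
    constructor Create(const AFileName: string);
    destructor Destroy; override;
    procedure LogStart(TxnID: Integer);
    procedure LogQuery(TxnID: Integer; const SQL: string);
    procedure LogCommit(TxnID: Integer);
    procedure LogRollback(TxnID: Integer);
  end;

constructor TSimpleJournal.Create(const AFileName: string);
begin
  inherited Create;
  FLog := TFileStream.Create(AFileName, fmCreate or fmShareDenyWrite);
end;

destructor TSimpleJournal.Destroy;
begin
  FLog.Free;
  inherited Destroy;
end;

procedure TSimpleJournal.WriteLine(const S: string);
var
  Buf: string;
begin
  Buf := S + LineEnding;
  FLog.WriteBuffer(Buf[1], Length(Buf));
end;

procedure TSimpleJournal.LogStart(TxnID: Integer);
begin
  WriteLine(Format('S %s %d',
    [FormatDateTime('yyyy-mm-dd hh:nn:ss', Now), TxnID]));
end;

procedure TSimpleJournal.LogQuery(TxnID: Integer; const SQL: string);
begin
  // One statement per line; embedded line breaks in the SQL would need
  // escaping in a real implementation.
  WriteLine(Format('Q %d %s', [TxnID, SQL]));
end;

procedure TSimpleJournal.LogCommit(TxnID: Integer);
begin
  WriteLine(Format('C %d', [TxnID]));
end;

procedure TSimpleJournal.LogRollback(TxnID: Integer);
begin
  WriteLine(Format('R %d', [TxnID]));
end;

Recovery then means scanning the file for S records with no matching C or R, and re-executing their Q lines as described above.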

incendio

  • Sr. Member
  • ****
  • Posts: 266
Please keep us updated.

MarkMLl

I will. Perhaps in a new post, because I think this is a new topic.

This thread does seem to be going off at a tangent to the original post.

What you need to decide is what you are trying to achieve in terms of recovery from database connection loss. Broadly speaking, there are three levels of response to this event: 1. Let the user work it out; 2. Assisted Restart; 3. Journaling. [...]

IBX does not include an actual journaling capability – although this is an idea for a future release. However, it does include TISQLMonitor. This allows you to snoop on every transaction and SQL statement, and use this to record to a journal file.


I think option no. 3 is what I want.

To detect whether the transaction committed OK, isn't it enough to execute ApplyUpdates inside a try..except block? If an error occurs then I will roll back the transaction and try ApplyUpdates again.

That is good news if you plan to add journaling features to IBX, thanks.

Meanwhile, I will try to build my own journaling method, not using TISQLMonitor, because I didn't know it had such capabilities.

So far, so good; I am able to pull the SQL statements from TIBDataSet and build a simple journal.
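The try..except pattern asked about above might look like this sketch, assuming a TIBDataSet with CachedUpdates enabled; whether a blind retry is safe depends on why the first attempt failed (see the commit-in-limbo caveat earlier in the thread):

uses
  SysUtils, IBDatabase, IBCustomDataSet;

// Try to push the cached updates; on failure roll the server-side work
// back and leave the local cache for a later retry.
procedure TryApplyUpdates(DS: TIBDataSet; Txn: TIBTransaction);
begin
  try
    DS.ApplyUpdates;          // send cached changes to the server
    Txn.Commit;
  except
    if Txn.InTransaction then
      Txn.Rollback;           // undo the partial server-side work
    raise;                    // caller reconnects/waits, then calls again
  end;
end;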

tonyw

  • Sr. Member
  • ****
  • Posts: 319
    • MWA Software
No promises, but I will look into a journaling component.

incendio

  • Sr. Member
  • ****
  • Posts: 266
Guys,

I managed to build a simple solution to this matter.

This is the link:
https://forum.lazarus.freepascal.org/index.php/topic,57344.0.html

MarkMLl

  • Hero Member
  • *****
  • Posts: 6646
Well done, I'll try to take a look at some point but it won't be immediate.

MarkMLl
MT+86 & Turbo Pascal v1 on CCP/M-86, multitasking with LAN & graphics in 128Kb.
Pet hate: people who boast about the size and sophistication of their computer.
GitHub repositories: https://github.com/MarkMLl?tab=repositories

tonyw

  • Sr. Member
  • ****
  • Posts: 319
    • MWA Software
I have today released an experimental version of IBX that supports client-side journaling. Client-side journaling creates, per database attachment, a journal file recording each read/write transaction start and end, and each SQL query that modifies data in the database. It can optionally also record each read-only transaction and the queries that do not modify data (e.g. select queries).

The purpose of client-side journaling is:

    1. To create a transaction and query log that can be used to recover from a lost database connection or server crash, and

    2. To provide a record of database activity for debugging purposes.

The experimental version currently supports the creation of the journal. However, applying the journal for recovery purposes is left to a later update - probably after Christmas.

The journal file is a text file and is intended to be human readable. Each record in the journal starts with an asterisk and is terminated by a line separator. Each record is identified by a single character immediately after the asterisk, followed by a timestamp, session id and transaction id. Each query is recorded as the original SQL with the parameters replaced by the parameter values as literals.

In order to support programmatic parsing of the journal file, the class TJournalProcessor may be found in the IBUtils unit.

For more information see ibx/fbintf/doc/README.ClientSideJournaling.pdf. Test22 in ibx/fbintf/testsuite also provides an example of direct use of the journaling capability, and ibx/testsuite Test29 provides an example of use of the new component IBJournal. Additional examples should follow in later releases.
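For a feel of the record layout before opening the PDF, here is a toy parser along the lines described above; the exact field layout is an assumption from the description, and the supported TJournalProcessor in IBUtils should be preferred for real work:

uses
  SysUtils;

type
  TJournalRec = record
    RecType: Char;  // the single-character identifier after the '*'
    Body: string;   // timestamp, session id, transaction id, SQL, ...
  end;

// Recognise one journal record; splitting Body into fields should follow
// the layout documented in README.ClientSideJournaling.pdf.
function ParseJournalLine(const ALine: string; out Rec: TJournalRec): Boolean;
begin
  Result := (Length(ALine) >= 2) and (ALine[1] = '*');
  if Result then
  begin
    Rec.RecType := ALine[2];
    Rec.Body := Trim(Copy(ALine, 3, MaxInt));
  end;
end;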

This experimental version can be downloaded by svn using

svn co https://svn.mwasoftware.co.uk/viewvc/public/ibx/branches/journaling/ ibx-experimental

or point your browser at

https://svn.mwasoftware.co.uk/viewvc/public/ibx/branches/journaling/

and click on "Download Tarball" to get an archive of the experimental version.

incendio

  • Sr. Member
  • ****
  • Posts: 266
or point your browser at

https://svn.mwasoftware.co.uk/viewvc/public/ibx/branches/journaling/

and click on "Download Tarball" to get an archive of the experimental version.

Downloaded and tried to unpack, got an error, see attachment

tonyw

  • Sr. Member
  • ****
  • Posts: 319
    • MWA Software
or point your browser at

https://svn.mwasoftware.co.uk/viewvc/public/ibx/branches/journaling/

and click on "Download Tarball" to get an archive of the experimental version.

Downloaded and tried to unpack, got an error, see attachment

Looks like the software you are using to unpack the archive doesn't support symbolic links - or you are still using Windows XP! Although, googling the subject, it looks like support in Windows for symbolic links is problematic (https://blogs.windows.com/windowsdeveloper/2016/12/02/symlinks-windows-10/).

svn (for Windows) does not have this problem.

The files affected are not critical and, apart from the first, are only used by the testsuite. You can always copy them from an older version - or from the archive, as the links are to files within the archive. The first link is to a documentation file. fbintf is developed as a separate package with its own changelog and documentation - see fbintf/doc. A link to the user guide is placed in the top-level doc folder for user convenience only.

 
