
Author Topic: Prevent Runtime 101  (Read 4034 times)

alfware17

  • New Member
  • *
  • Posts: 40
Prevent Runtime 101
« on: September 24, 2020, 10:29:45 am »
Hello, my sorting program aborts with runtime error 101 (disk full) when its files (temporary or final) become too big for the disk (especially in a virtual machine with a small disk).

I want to catch this and end the program in a defined way with a message (I already do that for running out of RAM). I just want to ask whether anybody has a good idea or a standard way to do it.

1. I can call Dos.DiskFree(0) before each writeln - I already checked, it slows my program down by 200%. Not an option.
2. I can check DiskFree once when I open/rewrite the files - but that is quite complicated and error-prone (I would have to rethink my complete logic, because there are at least 7 possible paths to the various open/rewrite calls).
3. I can use {$I-}/{$I+} and check IOResult after every writeln - unfortunately that also slows the program down by 25%, and I would have to encapsulate it in a procedure (I have many different writeln calls) - see the sketch below.
4. I can use try/except. I just coded it, but for some reason it does not work; the program still breaks with runtime error 101. Probably I have to be more specific in my except clause (haven't read up on that yet) and/or I have to add the SysUtils unit. I wanted to run without that unit, because it makes the 32-bit version of my program bigger; I use it for the 64-bit build only.
I also have a 16-bit version of this source (Turbo Pascal); it would be nice if I could integrate the solution there too - but okay, 32/64 bit is more important.
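
For option 3, a minimal sketch of what I mean by such a wrapper (file name, loop and message are only placeholders, not my real program):

program checkedwrite;

procedure CheckedWriteLn(var f: Text; const s: string);
begin
  {$I-}
  WriteLn(f, s);
  {$I+}
  if IOResult <> 0 then
  begin
    { most likely runtime error 101: the disk is full }
    WriteLn(ErrOutput, 'I/O error while writing (disk full?), stopping.');
    Halt(2);
  end;
end;

var
  f: Text;
  i: Integer;
begin
  Assign(f, 'out.txt');
  Rewrite(f);
  for i := 1 to 100000 do
    CheckedWriteLn(f, 'some sorted line');
  Close(f);
end.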

Does anyone have an idea?

winni

  • Hero Member
  • *****
  • Posts: 3197
Re: Prevent Runtime 101
« Reply #1 on: September 24, 2020, 11:28:52 am »
Hi!

A)
As you are sorting a known amount of data, you can compute the space needed on the disk. Before starting the sort, ask the disk how much free space is available, and exit if it is not enough (see the sketch below).

B) Buy a 2 TB harddisk for < 40,- €
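
A minimal sketch of such a pre-flight check (the file name and the factor 2 are only assumptions for illustration):

program spacecheck;
uses Dos;   { DiskFree }

{ True when at least Factor times the size of InputFile is free on the current drive. }
function EnoughDiskSpace(const InputFile: string; Factor: Int64): Boolean;
var
  f: file of Byte;
  needed: Int64;
begin
  Assign(f, InputFile);
  Reset(f);
  needed := FileSize(f) * Factor;
  Close(f);
  EnoughDiskSpace := DiskFree(0) > needed;   { 0 = current drive }
end;

begin
  if not EnoughDiskSpace('input.txt', 2) then
  begin
    WriteLn('Not enough free disk space for sorting - giving up before it crashes.');
    Halt(1);
  end;
  { ...start sorting here... }
end.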

Winni


Thaddy

  • Hero Member
  • *****
  • Posts: 14204
  • Probably until I exterminate Putin.
Re: Prevent Runtime 101
« Reply #2 on: September 24, 2020, 11:31:36 am »
One initial remark: try/except is way slower than checking IOResult. Maybe you check IOResult too often?
If speed is essential, you should prefer IOResult over using exceptions.
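
For reference, the exception route only works if SysUtils is in the uses clause, because SysUtils is what converts run-time errors such as 101 into exceptions. A minimal sketch (file name and message are placeholders):

program trycatch;
{$mode objfpc}{$H+}
uses SysUtils;   { installs the handler that turns run-time errors into exceptions }
var
  f: Text;
begin
  Assign(f, 'out.txt');
  try
    Rewrite(f);
    WriteLn(f, 'some data');
    Close(f);
  except
    on E: EInOutError do
    begin
      WriteLn(ErrOutput, 'I/O error (probably disk full): ', E.Message);
      Halt(2);
    end;
  end;
end.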
 
Specialize a type, not a var.

jamie

  • Hero Member
  • *****
  • Posts: 6090
Re: Prevent Runtime 101
« Reply #3 on: September 24, 2020, 06:15:45 pm »
How about using the zipper unit to compress all files on disk while they are not in use, and uncompress them when you open them? Delete the uncompressed file when it is closed.

You can also compress these files in memory.

This would be a last-ditch option, short of adding more storage remotely or locally.
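
Roughly like this (file names are placeholders, and you would have to measure whether the extra zip/unzip I/O is worth it):

program packtemp;
{$mode objfpc}{$H+}
uses SysUtils, zipper;   { TZipper / TUnZipper from the FCL }
var
  z: TZipper;
  u: TUnZipper;
begin
  { pack a temporary file that is not needed right now }
  z := TZipper.Create;
  try
    z.FileName := 'buffer1.zip';
    z.Entries.AddFileEntry('buffer1.tmp');
    z.ZipAllFiles;
  finally
    z.Free;
  end;
  DeleteFile('buffer1.tmp');

  { ...later, unpack it again before it is read }
  u := TUnZipper.Create;
  try
    u.FileName := 'buffer1.zip';
    u.OutputPath := '.';
    u.UnZipAllFiles;
  finally
    u.Free;
  end;
end.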
The only true wisdom is knowing you know nothing

Thaddy

  • Hero Member
  • *****
  • Posts: 14204
  • Probably until I exterminate Putin.
Re: Prevent Runtime 101
« Reply #4 on: September 24, 2020, 06:46:49 pm »
Quote from: jamie
You can also compress these files in memory.

Yeah, add more RAM... %) Running out of resources should be checked for, preferably - as suggested - beforehand (e.g. what happens when you run out of swap space / virtual memory?).
Compression is way too memory intensive to help here.

I would suggest a simple streaming algorithm in this case, since it minimizes load. Compression streaming does not usually fit the bill.
« Last Edit: September 24, 2020, 06:52:58 pm by Thaddy »
Specialize a type, not a var.

alfware17

  • New Member
  • *
  • Posts: 40
Re: Prevent Runtime 101
« Reply #5 on: September 24, 2020, 08:27:39 pm »
Hi guys, thanks for taking this so seriously. Buy a new hard disk... Carnival is cancelled this year because of Corona.

I just wanted to investigate the behaviour of the program when the disk becomes full. I tested it with 10 lines as well as with dozens of gigabytes; only the sky is the limit. But the problem remains: if the program detects while working that the disk is full, it should end cleanly, not break.

Thaddy, you convinced me that try/except is not the way. I thought so too, because it is not suitable for 16-bit, and not really for 32-bit either.
So I tried my options with a little test program. Checking IOResult could be used less often, but I am not sure at which points. Checking DiskFree costs nearly the same -
I have a dozen different paths for open/rewrite, depending on how many buffers I have (0, 2, 16, 100 or 999...).

I guess I must decide at a very early point in the logic - once I have calculated the maximum number of buffers I will use - that if the available disk space is less than (1 + x) times the input size, I quit, because in that case it is certain to crash at some point anyway. I guess x is about 1.00 (the sum of the temporary buffers), but I have to investigate this; I have 4 different merging algorithms, and I am fairly sure none of them ever needs x > 1, but I have to test that (on a very small disk, just for fun, and NO, I am not buying a new 2 TB disk).

winni

  • Hero Member
  • *****
  • Posts: 3197
Re: Prevent Runtime 101
« Reply #6 on: September 24, 2020, 09:49:12 pm »
Hi!

Just tested:

{$I-}
AssignFile(f, RandomFileName);
Rewrite(f);
err := IOResult;
CloseFile(f);
{$I+}

This takes less than 1/10,000 of a second. In other words: 10,000 of these operations take one second.
And that is not even on an SSD, but on a cheap internal Toshiba 2 TB HD.

Or: 600,000 operations take a minute.

Can't wait that long? You must be very young.

Winni



marcov

  • Administrator
  • Hero Member
  • *
  • Posts: 11383
  • FPC developer.
Re: Prevent Runtime 101
« Reply #7 on: September 24, 2020, 10:03:11 pm »
If you have such size problems that even 50 kB for the sysutils unit is too much, then you should really start thinking about customizing the RTL for your size requirements.

General development can't really prepare for the details of such extreme demands.

alfware17

  • New Member
  • *
  • Posts: 40
Re: Prevent Runtime 101
« Reply #8 on: September 25, 2020, 01:48:03 pm »
@ Winni, no you misunderstood me.

Quote from: winni on September 24, 2020, 09:49:12 pm
Hi!

Just tested:

{$I-}
AssignFile(f, RandomFileName);
Rewrite(f);
err := IOResult;
CloseFile(f);
{$I+}

This takes less than 1/10,000 of a second. In other words: 10,000 of these operations take one second.
And that is not even on an SSD, but on a cheap internal Toshiba 2 TB HD.

Or: 600,000 operations take a minute.

Can't wait that long? You must be very young.

Winni

My problem can occur within every single write operation. I can have 2, 16, 100 or even 999 (extreme case) files open, and any of them can run into the limit of the disk.
Checking IOResult after each writeln extends my run time by 25%, which is not acceptable for me.

But I have now solved the problem by requiring that at least 2 * input size is free at the time I open the buffers.
I know the input size in about 90% of the cases; only when the input is "piped" in via < do I not know it, and then I estimate a standard value.
If the free space on the disk is less than that -> quit. I have checked: I never use more than 2 * input size for buffers and/or the result.


@ marcov: well, size matters to me personally. My 32-bit EXE is 114 kB now and does everything it must without SysUtils. The 64-bit EXE is above 220 kB and does the same, only worse and slower (I know this problem: the 64-bit EXE holds on to RAM longer, and then Windows starts swapping, which is less efficient than my 32-bit version, where I notice earlier that RAM is running low and switch to "divide and conquer", which is faster than waiting for the Windows swapping).

marcov

  • Administrator
  • Hero Member
  • *
  • Posts: 11383
  • FPC developer.
Re: Prevent Runtime 101
« Reply #9 on: September 25, 2020, 01:54:53 pm »
Quote from: alfware17
@ marcov: well, size matters to me personally. My 32-bit EXE is 114 kB now and does everything it must without SysUtils.

As said, if you have such extreme constraints, you should look into customizing/minimizing the RTL yourself, so that you cut sysutils (and objpas?) down to only the bare essentials you need from it (in this case the exception conversion).

These are not scenarios that are currently considered in Windows RTL development. The trend in minimal EXE size is upward, albeit very slowly. You can avoid a large part of that by maintaining your own minimal RTL.

If you only migrate to newer versions every so many years, that wouldn't be too much work.

« Last Edit: September 25, 2020, 01:57:47 pm by marcov »

alfware17

  • New Member
  • *
  • Posts: 40
Re: Prevent Runtime 101
« Reply #10 on: September 28, 2020, 12:08:08 pm »
Thank you marcov - I guess that is a little bit too complicated for me, changing system libraries. I do this for fun/as a hobby only.
I would not even know where to look for the sources of that library.
Of course I could use sysutils for 32-bit as well - but this all comes from a 16-bit project I once ported to Free Pascal without any need for sysutils.

For example, I had my own date/time routines, which I checked in 32-bit and 64-bit, and found it is better for me to stay with my own code in 32-bit
and only use sysutils in 64-bit. I also had no need for exceptions in 32-bit - only after porting to 64-bit did I see that they could be helpful in one case/problem
that only exists because of the 64-bit build, lol.

I have solved my original problem.
In 7 of 8 cases I know the size of the input file and can terminate with a message if it would become too big for the disk.
This even saves a little time (the program does not run until the abort; it knows beforehand that it would be pointless).

The 8th case is a problem - when my input comes from a pipe, using < in the shell call.
I introduced a "guessed line count", but that is an unfair game if it has to fit everything from 10 lines up to some billions.

I have now settled on 10,000 lines in 16-bit and 5,000,000 lines in 32-bit; that is (at 2 * 66 average chars per line) about 3 MB (DOS) / 1.5 GB (32-bit) of free disk space
to demand when the program starts.
Unfortunately, in my automatic testing some cases then refuse to work (for example 1000 lines in 32-bit when only about 1 GB of disk is free - normally an easy job, but
then the limit is too strict for the 1 GB that is free). But I can repair this with my new "change the guessed value" switch (for example, my shell knows the 1000 from a parameter
and can pass it as the "guessed count" to my program call, so it is then satisfied with much less free disk space).

Unfair tests, I know, but in practice the defaults of 10,000 / 5 million should be realistic, and it is better to use not a pipe but one of my other 7 possible call variants.

Thaddy

  • Hero Member
  • *****
  • Posts: 14204
  • Probably until I exterminate Putin.
Re: Prevent Runtime 101
« Reply #11 on: September 28, 2020, 12:50:43 pm »
I think you should use fpmmap/fpmunmap, or under Windows the Win32 memory-mapped file API. This should give you fewer headaches, but resource starvation can still occur. You should either check beforehand, or accept that you cannot avoid it without exceptions or checking IOResult.
See example here: https://www.freepascal.org/docs-html/rtl/baseunix/fpmmap.html

With memory-mapped files you might get an EOutOfMemory exception instead of disk full? I would have to test that.
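
A rough Unix-only sketch of the mmap route (file name and size are arbitrary; the Windows CreateFileMapping variant is not shown):

program mapwrite;
uses BaseUnix;
const
  Size = 1024 * 1024;   { 1 MiB, arbitrary }
var
  fd: LongInt;
  p: PByte;
begin
  fd := fpOpen('data.bin', O_RDWR or O_CREAT, &644);
  if fd < 0 then
    Halt(1);
  if fpFtruncate(fd, Size) <> 0 then   { set the file size up front }
  begin
    WriteLn('Could not set the file size.');
    Halt(2);
  end;
  p := fpMMap(nil, Size, PROT_READ or PROT_WRITE, MAP_SHARED, fd, 0);
  if p = Pointer(-1) then   { MAP_FAILED }
  begin
    WriteLn('mmap failed.');
    Halt(3);
  end;
  FillChar(p^, Size, $41);   { write through the mapping instead of writeln }
  fpMUnMap(p, Size);
  fpClose(fd);
end.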
« Last Edit: September 28, 2020, 12:53:45 pm by Thaddy »
Specialize a type, not a var.

alfware17

  • New Member
  • *
  • Posts: 40
Re: Prevent Runtime 101
« Reply #12 on: October 08, 2020, 08:25:08 pm »
Hello Thaddy, thanks for your advice. I would be lying if I said I completely understood it :-*

I have checked my solution now and it works fine, no more runtime error 101. What did I do (summary): instead of catching a possible RTE 101 inside every write/open/close, I prevent it by asking my current input for its size and I stop working if my disk does not have 2x the input size free. 1x I need in every case for the output, another 1x is for the temporary buffers. My 4 merging strategies all guarantee that at no time more than 2x the input size is needed.

I know the input size from Open/FileSize and an average number of chars per line (I count 1000 lines if needed; it is fast enough) - see the sketch below. This covers 7 of 8 cases. In the 8th case I read from standard input (a pipe), so I cannot count (even the buffering/counting can fail); instead I assume a fair number of lines for stdin (a number depending on 16/32/64 bit). To avoid both crashes and "does not work although it could", I can pass this number as a parameter at program start. My calling shell (for the runtime tests) knows the number, but that is not a practically relevant case. In most cases the input is given as a file, not as a stream (many allowed cases = many problems).
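
Roughly like this (names and the 1000-line sample are only for illustration, not my real code):

program estimate;
{$mode objfpc}{$H+}

{ Estimate the number of lines from the file size and the
  average length of the first 1000 lines. }
function EstimateLineCount(const FileName: string): Int64;
var
  b: file of Byte;
  t: Text;
  totalBytes, sampleBytes: Int64;
  lines: Integer;
  s: string;
begin
  Assign(b, FileName);         { total size in bytes }
  Reset(b);
  totalBytes := FileSize(b);
  Close(b);
  Assign(t, FileName);         { sample the first 1000 lines }
  Reset(t);
  sampleBytes := 0;
  lines := 0;
  while (lines < 1000) and not Eof(t) do
  begin
    ReadLn(t, s);
    Inc(sampleBytes, Length(s) + 2);   { +2 for the line ending }
    Inc(lines);
  end;
  Close(t);
  if sampleBytes = 0 then
    Result := 0
  else
    Result := totalBytes div (sampleBytes div lines);
end;

begin
  WriteLn('Estimated lines: ', EstimateLineCount('input.txt'));
end.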

I must say, I invested too much time in this last bug. Normally the program runs on a normal disk with an average amount of free space and works fine with millions of lines (normal documents), just not with billions or more. This bug only appeared because I tested "until the disk is empty", just to see what happens. But that is not a normal use case; what counts more is the speed and downward compatibility of the whole package (the program is only one part of it).

 
