I’m going to guess that the assembly language shown in Lazarus was for the stack because it had jmp commands. Does assembly for heap access use jump commands at all? It seems like access to heap memory is more random than the stack, which follows a certain path and then backtracks to where it started.
Simply speaking, the stack is a data structure that can be manipulated with only two operations: PUSH(X) and POP(X). It follows the LIFO principle (Last-In-First-Out), which means that the value of X you pushed last will be the one pulled by the next pop.
The thing is that most processors (though not all) have special instructions for doing that, commonly named PUSH and POP. They work with a dedicated register called SP (stack pointer) which points somewhere in RAM. When you push something, the SP is decremented and the value is written at that location. Conversely, when you pop, the value where the SP points is fetched and the SP is incremented.
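In pseudo-assembly the two operations roughly amount to this (a sketch for a descending stack, as on x86; the details vary per CPU):

```
; PUSH X:
SP := SP - 1    ; decrement the stack pointer
[SP] := X       ; store X at the address SP now points to

; POP X:
X := [SP]       ; fetch the value at the top of the stack
SP := SP + 1    ; increment the stack pointer
```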
This is very convenient when you want to call a subroutine. The PC (program counter) is a register holding the address of the next instruction to be executed by the processor. Calling a subroutine, CALL Sub, is actually PUSH PC (save the program counter onto the stack) followed by JMP Sub (jump to the address of the subroutine). Subroutines normally end with a RET instruction, which is actually POP PC: get the value from the top of the stack and put it into the PC. That way the processor resumes execution at the point following the CALL Sub instruction.
That same mechanism is used to pass the subroutine's actual parameters along with the return address. For example, if you call a procedure foo(1,2,3), the compiler will generate something like:
PUSH 1 ; First parameter pushed
PUSH 2 ; Second parameter pushed
PUSH 3 ; Third
CALL foo ; Actual call
(this is a very simplified representation, of course)
Then, inside the procedure itself, the formal parameters can be accessed relative to the stack pointer: the return address at SP[0], the third parameter at SP[1], the second at SP[2], and the first at SP[3]. This is why modifying a parameter doesn't affect the actual variable given at the call site - you actually work with a copy of it allocated on the stack (by pushing its value).
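In pseudo-assembly, reading those slots could look like this (again a sketch, assuming each slot is one machine word):

```
; inside foo:
MOV CX, [SP+1]  ; third parameter (pushed last, nearest the top)
MOV BX, [SP+2]  ; second parameter
MOV AX, [SP+3]  ; first parameter (pushed first, deepest)
; [SP+0] holds the return address
```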
After the procedure finishes, the stack must be cleaned up from the formal parameters. This is done differently in different languages (and in FPC, depending on the calling convention), but the idea is to increment the stack pointer by the same amount as it was decremented when the parameters were pushed. In the example, the inverse of the three pushes can be POP AX, POP AX, POP AX - three pops into a scratch register:
PUSH 1 ; First parameter pushed
PUSH 2 ; Second parameter pushed
PUSH 3 ; Third
CALL foo ; Actual call
POP AX ; Pop the third parameter
POP AX ; Pop the second
POP AX ; Pop the first
...
(again, this is a pseudo-code, the actual assembly will differ)
As 440bx said, the stack area (where the SP points) is usually pre-allocated and of some predefined size. If you call subroutines with huge parameters passed by value (typically big static arrays), the stack will soon overflow, i.e. the SP will go beyond the boundaries of the pre-allocated RAM chunk.
The same holds for the local variables of the subroutine - they are also allocated on the stack, by manipulating the stack pointer at the time of the call.
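For instance, a large local array can blow the stack all by itself (a hypothetical sketch; the exact limit depends on your configured stack size):

```pascal
procedure Risky;
var
  Buf: array[0..9999999] of Byte;  // ~10 MB allocated on the stack
begin
  Buf[0] := 1;  // with a typical 1-8 MB stack this overflows
end;
```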
All that stack allocation/deallocation is done automatically for you by the FPC compiler. The heap, on the other hand, is different - the allocation is made by you, and it is your responsibility to reclaim the memory when it is no longer needed. When you need some memory you call GetMem, and when you want to free it - FreeMem. When you want a new instance of some class, you call X := TSomeClass.Create, and when you're done with it, you call X.Free.
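A minimal sketch of both heap patterns (TStringList is just an example class from the Classes unit):

```pascal
uses Classes;

var
  P: Pointer;
  L: TStringList;
begin
  GetMem(P, 1024);          // allocate 1 KiB on the heap
  // ... use the memory ...
  FreeMem(P);               // your job to give it back

  L := TStringList.Create;  // object instances live on the heap too
  try
    L.Add('hello');
  finally
    L.Free;                 // and must be freed explicitly
  end;
end.
```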
The exceptions to that pattern are the so-called "managed types" - dynamic arrays and the String type. They also reside on the heap, but the compiler takes care of that for you.
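For example, with a dynamic array the compiler inserts the heap calls for you - no GetMem/FreeMem needed:

```pascal
var
  A: array of Integer;  // dynamic array - lives on the heap
  S: String;            // likewise a managed type
begin
  SetLength(A, 100);    // compiler allocates (and tracks) the heap block
  S := 'no FreeMem needed';
end;                    // both are released automatically here
```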