by Carlos Mora » Wed Sep 02, 2015 1:22 pm
It's not necessary to be in an endless loop or something like that to get into trouble with memory reallocation.
Clipper and [x]Harbour do a great job managing memory for us. Those who have written programs in low-level languages such as C or Pascal know what I mean.
Every non-fixed-length variable, like a string or an array, allocates memory outside the regular stack and data segment, in the memory heap. Harbour provides the management services required for that, so usually there is no problem, but under certain circumstances that service is unable to handle the requirement.
That condition is usually produced by progressive, incremental allocation of memory, like Tim's code shows, concatenating small strings into a buffer. oxData, as a (I guess) LOCAL var, resides in the stack, but only the 'var' component of it. Its value is stored in the heap, using the Harbour services. The stack portion of oxData stores the handle of that memory chunk.
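To make it concrete, here is a minimal sketch of the kind of loop that produces this pattern (my reconstruction, not Tim's actual code; the iteration count and the string are arbitrary):

    PROCEDURE Main()
       LOCAL oxData := ""
       LOCAL i
       FOR i := 1 TO 100000
          oxData += "xyz"   // every += reallocates the buffer and abandons the old chunk
       NEXT
       ? Len( oxData )
       RETURN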
At startup, all the available heap forms one big block of memory, usually a free segment of several GB.
Let's take an example to make it more graphic: oxData := Space(100). That means there is a piece of 100 bytes stored in the heap. If the original heap had 1000 bytes, we get the heap split in two fragments: one of 100 bytes and another, free, of 900. That limit of 1000 bytes is not real, it's for the sake of the example; usually it is way bigger than that, in the order of GB, but a small heap is needed to show how the problem arises.
Now let's do a concatenation on the same var: oxData += 'xyz'. Harbour computes the size of the resulting string, which will be 103 bytes long, so it issues a hb_xrealloc() on the current pointer stored in oxData. The memory service will look for a free 103-byte chunk in memory, copy the current 100 bytes of content, and return the new pointer. So far so good. But the problem comes from the old 100-byte chunk. What happens with it? It's not necessary anymore, so it's marked as available, and now the memory has got fragmented into 3 pieces: 1 of 100 bytes (free), 1 of 103 used (oxData), and the remaining 797 bytes as the last free block.
Do it again: oxData += 'xyz'. Now the resulting string will be 106 bytes, so, after processing it, the heap memory map becomes: 1 of 100 bytes (free), 1 of 103 (free), 1 of 106 used (oxData), and the last 691 free bytes.
One more time: oxData += 'xyz'. Now the heap is: 1 of 100 bytes (free), 1 of 103 (free), 1 of 106 (free), 1 of 109 used (oxData), and the last 582 free bytes.
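Condensed into code, with the heap map of our toy 1000-byte heap as comments (assuming the simplified allocator of this example):

    PROCEDURE Main()
       LOCAL oxData := Space( 100 )  // heap: [100 used][900 free]
       oxData += "xyz"               // heap: [100 free][103 used][797 free]
       oxData += "xyz"               // heap: [100 free][103 free][106 used][691 free]
       oxData += "xyz"               // heap: [100 free][103 free][106 free][109 used][582 free]
       RETURN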
In a few iterations we will run out of memory, not because there is no free memory left, but because the memory has got so fragmented that the memory service cannot provide a big enough piece when required.
There is a very well known process called 'garbage collection' which, among other things, merges the free pieces into bigger ones, defragmenting the memory.
Usually our processor-intensive functions don't give Harbour's core a chance to activate the garbage collector, but we can run it manually by calling HB_GcAll(). In Clipper that process was triggered by calling Memory(-1). So you can introduce calls to the garbage collector in your code so your memory doesn't get fragmented.
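For example, inside a long concatenation loop you could invoke it every so often; the interval here is arbitrary, tune it to your workload:

    PROCEDURE Main()
       LOCAL oxData := ""
       LOCAL i
       FOR i := 1 TO 100000
          oxData += "xyz"
          IF i % 1000 == 0   // arbitrary interval
             HB_GcAll()      // merge the freed chunks before they pile up
          ENDIF
       NEXT
       RETURN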
Arrays suffer from the same problem (issuing AAdd()s to create long arrays), but it occurs less frequently and has a much better solution: preallocation.
There is a way to tell Harbour in advance that we are creating an array that will end up x elements long: check the xHarbour functions ASIZEALLOC() and ALENALLOC(), and their Harbour counterparts.
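If those aren't available to you, plain Harbour/Clipper gives a similar effect with ASize(): grow the array once to its final length and fill by index, instead of one AAdd() per element. A sketch, assuming the final length is known up front:

    PROCEDURE Main()
       LOCAL nTotal := 100000
       LOCAL aData  := {}
       LOCAL i
       ASize( aData, nTotal )   // one big allocation instead of thousands of small ones
       FOR i := 1 TO nTotal
          aData[ i ] := i * 2   // fill by index; no per-element reallocation
       NEXT
       ? Len( aData )
       RETURN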
Real memory management is more complicated than all this; it involves things like virtual memory and much more, but the example shows the basic concept behind the problem.
Sorry for the long speech, but it's a little bit difficult for me to explain this concept without all of this. And no graphics.