All my earlier programs have contained similar code since Clipper days: open the .dbf file with FOpen() in shared mode, scan the entire file, work back the record numbers, and then proceed. ( I also read .dbf files in this raw mode to check integrity and for repairs. )
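For reference, here is a minimal sketch of that raw-read technique. The file name customer.dbf and the matching step are illustrative, and field-level parsing is omitted; it only reads the standard DBF header fields and walks the records by offset:

#include "fileio.ch"

PROCEDURE RawDbfScan()
   LOCAL nHandle, cHeader, cRecord
   LOCAL nRecCount, nHeaderLen, nRecLen, nRecNo

   // Open the table file directly, read-only and shared
   nHandle := FOpen( "customer.dbf", FO_READ + FO_SHARED )
   IF nHandle == F_ERROR
      ? "Cannot open file, error", FError()
      RETURN
   ENDIF

   // The first 32 header bytes carry the counts we need
   cHeader := Space( 32 )
   FRead( nHandle, @cHeader, 32 )
   nRecCount  := Bin2L( SubStr( cHeader,  5, 4 ) )   // number of records
   nHeaderLen := Bin2W( SubStr( cHeader,  9, 2 ) )   // header length
   nRecLen    := Bin2W( SubStr( cHeader, 11, 2 ) )   // bytes per record

   // Scan every record; the record number is worked back from the offset
   cRecord := Space( nRecLen )
   FOR nRecNo := 1 TO nRecCount
      FSeek( nHandle, nHeaderLen + ( nRecNo - 1 ) * nRecLen, FS_SET )
      FRead( nHandle, @cRecord, nRecLen )
      // ... match cRecord against the search text here ...
   NEXT

   FClose( nHandle )
   RETURN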
But those were the days when we did not have these advanced functions.
Now these functions are available. I did not compare the speeds, but please consider what happens in these two different cases. Assume the .dbf resides on a remote server and our program is executed on the client PC. Our old logic reads the entire DBF file. It may be fast, but it still transfers all the DBF data over the network. OrdWildSeek() reads only the index contents. In effect, the quantity of data OrdWildSeek() has to read is far less than the data we read in our raw DBF read method.
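As a rough sketch of the OrdWildSeek() approach, assuming the same illustrative customer.dbf with an index on the CITY field (alias, index name and pattern are all examples; the second parameter continues the search past the current record):

PROCEDURE WildSeekDemo()
   LOCAL lHit
   USE customer SHARED NEW
   SET INDEX TO city                    // controlling order on field CITY
   lHit := OrdWildSeek( "NEW*" )        // position on the first matching key
   DO WHILE lHit
      ? RecNo(), customer->City
      lHit := OrdWildSeek( "NEW*", .T. )   // .T. = continue from here
   ENDDO
   USE
   RETURN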
Assume a DBF file with a record length of 2,048 bytes containing 100,000 records: 2,048 × 100,000 bytes is about 205 MB of data read by our old method.
Assume an index on the field city with a key length of 20 bytes: the key data in that index is 20 × 100,000 bytes, so the index is only about 2 MB. Isn't it faster to search in 2 MB of data than in 205 MB of data?
A long time back, I read somewhere that indexes are fully read over to the client at the beginning. I am not sure whether that is still so with the present RDDs. In such a case, reading index data from local memory ( or a memory cache ) is faster than reading it from a remote computer. Again, this is only something I remember reading; I am not sure of it.
Furthermore, our coding effort is reduced a lot.
> I can’t imagine that skipping through the file record by record is the solution.
I agree. For this, setting a filter with WildMatch( ... ), as proposed by Mr Armando, is the best solution. It is working well for me. But this function is available in xHarbour only. I am sure some function with similar functionality may be available in Harbour too, but I am not aware of it. I wish some Harbour pundits would throw some light on this.
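For completeness, a sketch of that filter approach, again with the illustrative customer.dbf and pattern. xHarbour's WildMatch( cPattern, cString ) returns .T. when the string matches the wildcard pattern, so the filter hides every non-matching record:

PROCEDURE WildFilterDemo()
   USE customer SHARED NEW
   SET FILTER TO WildMatch( "NEW*", customer->City )
   GO TOP
   DO WHILE ! Eof()
      ? RecNo(), customer->City
      SKIP
   ENDDO
   SET FILTER TO        // clear the filter
   USE
   RETURN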