We ran a test to benchmark speeds on a one-million-record table on localhost. The test program is listed below.
We are all familiar with customer.dbf in the fwh\samples folder. First, the data of this DBF is inserted into the "custbig" table 2000 times, making it a table of one million records. The table is then backed up, dropped, restored, reopened and browsed.
function BigTable()

   local cSourceDBF  := "c:\fwh\samples\customer.dbf"
   local cBackUpFile := "c:\tests\custbig.sql"
   local cTable      := "custbig"
   local aData, cList, cInsertSQL
   local oRs, n, nSecs

   oCn:lShowErrors := .t.

   if ! oCn:TableExists( cTable )
      // Create the table structure only (0 = import no records)
      oCn:ImportFromDBF( cSourceDBF, cTable, nil, 0 )

      // Prepare the bulk insert SQL from the sample DBF
      USE ( cSourceDBF ) NEW ALIAS SRC SHARED VIA "DBFCDX"
      cList := nil
      aData := SRC->( FW_DbfToArray( @cList ) )
      CLOSE SRC
      cInsertSQL := oCn:InsertSQL( cTable, cList, aData, .f., .f. )
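      // Each execution inserts every row of customer.dbf in one multi-row
      // INSERT (500 rows), so 2000 executions produce one million records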
? "Creating Big talble. Please wait"
nSecs := SECONDS()
for n := 1 to 2000
oCn:Execute( cInsertSQL )
next
? "Table CustBig with million records created in", ;
Seconds() - nSecs, "Seconds" // --> 138.89 Secs = 2 and half minutes
endif
? "Start Backup. Please wait."
nSecs := Seconds()
oCn:BackUp( { cTable }, cBackUpFile, nil, 10000 ) // 10000 records per sql
? cBackUpFile + " created in ", Seconds() - nSecs, "Seconds" // --> 4.4 secs
? "Start Restore. Please Wait"
oCn:DropTable( cTable )
nSecs := Seconds()
oCn:Restore( cBackUpFile )
? "Restored one million records in", Seconds() - nSecs, "Seconds"
// --> 47.88 secs
? "Opening Table for Browse. Please wait"
oRs := oCn:RowSet( cTable )
XBROWSER oRs SHOW SLNUM TITLE cTable + " Read in " + cValToChar( oRs:nReadSecs ) + " Seconds"
// 5 secs
return nil
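The listing above assumes a connection object oCn created elsewhere in the program. As a minimal sketch, assuming FWH's maria_Connect() and hypothetical localhost credentials, the setup could look like this:

#include "FiveWin.ch"

static oCn   // connection shared by Main() and BigTable()

function Main()

   // Hypothetical credentials; adjust server, database, user and password
   oCn := maria_Connect( "localhost", "fwh", "root", "password" )

   if oCn == nil
      ? "Connection to the MySQL/MariaDB server failed"
      return nil
   endif

   BigTable()    // run the benchmark listed above
   oCn:Close()

return nil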
Results:
1) Creation of table: about 2.5 minutes
2) Backup: less than 5 seconds
3) Restore: 48 seconds
4) Reading into rowset: 5 seconds
These tests were run on localhost; over a remote connection the operations will be slower. Actual timings will also vary with hardware and memory.
Important Note: To improve speed we used large buffers, so it is necessary that the server's max_allowed_packet variable is set to an adequately large value.
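For example, the value can be raised for the running server from the application itself, provided the connected account has the required privilege. The 256 MB figure here is only illustrative; a permanent setting belongs in my.ini / my.cnf:

// Illustrative only: raise max_allowed_packet to 256 MB (268435456 bytes).
// SET GLOBAL affects new connections, not the current session.
oCn:Execute( "SET GLOBAL max_allowed_packet = 268435456" )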
We would be glad to receive feedback on the comparative time taken by other tools to back up and restore the same table.