slowness
- mauri.menabue
- Posts: 157
- Joined: Thu Apr 17, 2008 2:38 pm
Re: slowness
Hi Marco
Since the slowdown seems to be linked to opening the index,
could you post the index key expression and how many tags are present?
tia
- Enrico Maria Giordano
- Posts: 8728
- Joined: Thu Oct 06, 2005 8:17 pm
- Location: Roma - Italia
- Contact:
Re: slowness
The problem is the same even with only one tag.
- MarcoBoschi
- Posts: 1070
- Joined: Thu Nov 17, 2005 11:08 am
- Location: Padova - Italy
- Contact:
Re: slowness
Dear Enrico,
this is the logic I use to generate an index file in my PHP4DBF lib.
I would like to compare this logic with how Harbour generates its in-memory indexes.
Where can I find the source code? Can you point me to it?
Generating an index:
I first read all records in a loop and write all matches into
$matchRecords[] = [
    'index'      => $i,          // physical position of the record
    'fieldValue' => $fieldValue, // the key value for this record
    'recno'      => $i + 1       // 1-based record number
];
Then I sort $matchRecords[]
usort($matchRecords, function ($a, $b) {
    return strcmp($a['fieldValue'], $b['fieldValue']);
});
and then I store all indexes in sorted order; this is my "index file":
$indexes = array_column($matchRecords, 'index');
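The three steps above (collect the matches, sort by key, keep only the positions) can be sketched in Python; the record values here are invented for illustration:

```python
# Toy stand-in for the PHP pipeline: collect matches, sort by key, keep
# only the physical positions. Values are invented for illustration.
match_records = [
    {"index": 0, "fieldValue": "MILANO", "recno": 1},
    {"index": 1, "fieldValue": "BARI",   "recno": 2},
    {"index": 2, "fieldValue": "PADOVA", "recno": 3},
]

# Sort by the key value, like usort() with strcmp().
match_records.sort(key=lambda r: r["fieldValue"])

# Like array_column($matchRecords, 'index'): the list of record positions
# in key order is the whole "index file".
indexes = [r["index"] for r in match_records]
print(indexes)  # -> [1, 0, 2]
```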
To get the records sorted I use:
function php4dbf_getRecordByIndex($fileHandle, $index, $header, $fields, $loggingEnabled = true) {
    $recordOffset = $header['headerLength'] + ($index * $header['recordLength']);
    fseek($fileHandle, $recordOffset);
    $record = fread($fileHandle, $header['recordLength']);
    if ($record === false) {
        if ($loggingEnabled) {
            error_log("Error reading record at index $index");
        }
        return null;
    }
    $result = [];
    $fieldOffset = 1; // Skipping the deleted flag byte
    foreach ($fields as $field) {
        $fieldValue = trim(substr($record, $fieldOffset, $field['length']));
        $result[$field['name']] = my_utf8_encode($fieldValue); // Ensure UTF-8 encoding
        $fieldOffset += $field['length'];
    }
    return $result;
}
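For comparison, the same seek-by-offset idea in a small Python sketch; the toy file layout (one deleted-flag byte plus two six-byte fields, no header) is invented for illustration:

```python
import io

# Toy fixed-length layout: 1 deleted-flag byte + two 6-byte fields, no header.
HEADER_LEN = 0
FIELDS = [("name", 6), ("city", 6)]
RECORD_LEN = 1 + sum(length for _, length in FIELDS)

def get_record_by_index(fh, index):
    """Seek straight to record `index` and decode its fields."""
    fh.seek(HEADER_LEN + index * RECORD_LEN)
    raw = fh.read(RECORD_LEN)
    if len(raw) < RECORD_LEN:
        return None  # past end of file
    result, offset = {}, 1  # skip the deleted-flag byte
    for name, length in FIELDS:
        result[name] = raw[offset:offset + length].decode("ascii").strip()
        offset += length
    return result

# Two 13-byte records: " Anna  Roma  " and " Marco Padova".
fh = io.BytesIO(b" Anna  Roma   Marco Padova")
print(get_record_by_index(fh, 1))  # -> {'name': 'Marco', 'city': 'Padova'}
```

Because the record length is fixed, fetching record N is a single seek plus read, regardless of file size; that is what makes this kind of index cheap to use once built.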
Best regards,
Otto
********************************************************************
mod harbour - Vamos a la conquista de la Web
modharbour.org
https://www.facebook.com/groups/modharbour.club
********************************************************************
- mauri.menabue
- Posts: 157
- Joined: Thu Apr 17, 2008 2:38 pm
Re: slowness
Hi Marco
Not only how many tags, but also the key expression of that single tag,
e.g.: dtos( gio_drg ) + str( gio_num, 9 ) + str( gio_riga, 3 )
I have often made overly complex indexes, which slowed down the browse.
I would like to know whether the slowness you see is in a 'DO WHILE ... ENDDO' loop
or in a browse.
Bye
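The cost of a complex key is easier to see when you spell out what the expression does: dtos() and str() build fixed-width strings so that plain string comparison matches the intended sort order, but every key evaluation pays for those conversions. A rough Python equivalent, with the field names taken from the example above:

```python
from datetime import date

# Fixed-width composite key, like dtos(gio_drg) + str(gio_num, 9) + str(gio_riga, 3):
# date as YYYYMMDD, then the numbers right-aligned in fixed widths.
def key(gio_drg, gio_num, gio_riga):
    return gio_drg.strftime("%Y%m%d") + f"{gio_num:9d}" + f"{gio_riga:3d}"

a = key(date(2024, 9, 1), 12, 2)
b = key(date(2024, 9, 1), 105, 1)

# Fixed widths make plain string comparison agree with numeric order:
assert a < b
print(repr(a))  # -> '20240901       12  2'
```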
- Enrico Maria Giordano
- Posts: 8728
- Joined: Thu Oct 06, 2005 8:17 pm
- Location: Roma - Italia
- Contact:
Re: slowness
I already tried with a simple formula like a single field: same problem. I only tried with a loop, not with a browse.
- Marc Venken
- Posts: 1481
- Joined: Tue Jun 14, 2016 7:51 am
- Location: Belgium
Re: slowness
I like the idea of local (temp) indexes, but on a network system, once another machine changes or adds data, the local indexes would be wrong, wouldn't they?
From previous posts, I remember that this issue is mainly about creating the right indexes... using the RDD's optimized filters etc. Mr. Rao once started to explain (thanks!!) but we didn't get to the final approach (samples).
Marc Venken
Using: FWH 23.08 with Harbour
Re: slowness
Hello Marc,
I think we need to think further here.
Both of us have a similar target audience for our software: small and medium-sized businesses.
In the near future, all applications will move to the web. Here, we don’t have open connections (stateless).
We need to actively check if the data changes.
Sure, we can use Server-Sent Events (SSE), but I believe that for our specific use case, it’s simpler to work with a timer (JS interval).
On the web, I use a technique where I monitor the file’s modification date with a "timer."
A request takes about 20 ms, and if something has changed, the database is reloaded.
Usually, you only read the part that is currently displayed in the browser, which typically takes around 25 to 30 ms.
If I create a temporary index with the technique mentioned above, it takes around 500 ms, and reading the indexed records takes another 25 ms if I display 20 results per browser page.
In practice, this is just as fast as using xbrowse().
I’m convinced that we need to move away from using indexes.
We should divide the DBF files better—perhaps not everything in one table, maybe split them by month, etc.
There’s almost no difference if you read from two tables on the server or everything from one.
On the contrary, in most cases, it’s faster if the tables are split.
And in a yearly statistic, a few hundred milliseconds don’t matter.
The speed is more critical in the browser.
With xbrowse, we’ve been spoiled here. You won’t achieve the same good results with SQL as with low-level functions in DBF files.
On my server, it simply takes a certain amount of time for SQL to make a recordset available, and for the xbrowser, you only need small recordsets.
For smooth operation, you can only read about 20 records; otherwise, the data transmission over the internet takes too long, and the application doesn’t feel smooth.
I would define truly smooth as under 150 ms.
Best regards,
Otto
- MarcoBoschi
- Posts: 1070
- Joined: Thu Nov 17, 2005 11:08 am
- Location: Padova - Italy
- Contact:
Re: slowness
A quick update.
The function in which this slowness is most evident lets me enter a Year + Month interval and produces an Excel file containing all traffic from and to a particular number (telephone traffic, read from the CDRs).
For instance:
202401 January 2024
202409 September 2024
Then I enter a telephone number to search for.
Every month I switch the DBF table that receives all the PBX traffic, so only one file is kept open by the program that stores data in it.
At this moment (today) only 202409.dbf is open in the program that writes all the data coming from the PBX.
All the other DBF tables are not open by anyone; they are only used for statistics.
I have a FOR ... NEXT cycle that opens one DBF file at a time, and then a DO WHILE ... ENDDO cycle that analyzes every single record satisfying some conditions.
It is very quick until it opens the table i_202409.dbf and its index i_202409.cdx, which are also open in the other program that writes the data.
For this last table it is very, very slow.
I simply copy the CDX file to the local disk C: and open that one instead of the one on the LAN.
The difference in performance: 29.77 seconds if I open the CDX on the LAN, 2.42 seconds if I copy the CDX file locally and open it.
This is a great solution as long as I don't need to change anything in the table.
Have a nice day
Marco
Marco Boschi
info@marcoboschi.it
- Marc Venken
- Posts: 1481
- Joined: Tue Jun 14, 2016 7:51 am
- Location: Belgium
Re: slowness
You have an index, filter or order scope before you process the loops, I suppose?
MarcoBoschi wrote: A quick update,
I have a FOR ... NEXT cycle that opens one DBF file at a time and then a DO WHILE ... ENDDO cycle that analyzes every single record satisfying some conditions.
Marc Venken
Using: FWH 23.08 with Harbour
Re: slowness
Dear Marco,
The 500ms I mentioned above come from a database with 200,000 records. The database is 20 MB in size.
Could you maybe post the structure of the database after all? I would then fill it with dummy data for testing purposes out of curiosity.
It would also be interesting to know what you want to query.
Best regards,
Otto
- MarcoBoschi
- Posts: 1070
- Joined: Thu Nov 17, 2005 11:08 am
- Location: Padova - Italy
- Contact:
Re: slowness
Otto, I've prepared a table in which the contents of some fields are partially obscured.
I'll send it to you via mail.
What is your address?
Many thanks
Marco Boschi
info@marcoboschi.it
- MarcoBoschi
- Posts: 1070
- Joined: Thu Nov 17, 2005 11:08 am
- Location: Padova - Italy
- Contact:
Re: slowness
This is the source code (we are in console mode)
FUNCTION CONTA_TELEFONATE()

   LOCAL cAmIni     := SUBSTR( DTOS( DATE() ), 1, 6 )
   LOCAL cAmFin     := SUBSTR( DTOS( DATE() ), 1, 6 )
   LOCAL aLista     := {}
   LOCAL cChiamante := SPACE( 30 )
   LOCAL cChiamato  := SPACE( 30 )
   LOCAL cDbf
   LOCAL oExcel, oAs

   @ 03, 11 SAY "Inserire Anno Mese Iniziale " GET cAmIni PICTURE "999999" COLOR "*b/w , *w/b+"
   @ 04, 11 SAY "Inserire Anno Mese Finale   " GET cAmFin PICTURE "999999" COLOR "*b/w , *w/b+"
   @ 05, 11 SAY "CGPN Calling " GET cChiamante VALID ( cChiamato := cChiamante, .T. ) COLOR "*b/w , *w/b+"
   @ 06, 11 SAY "CDPN Called  " GET cChiamato COLOR "*b/w , *w/b+"
   READ

   cChiamante := ALLTRIM( cChiamante )
   cChiamato  := ALLTRIM( cChiamato )

   IF ALERT( "Confermare conteggio chiamate? ", { "Conferma", "Abbandona" }, "*b/w , *w/b+" ) = 1
      nInizio := SECONDS()
      SET AUTOPEN OFF
      FOR iTab := VAL( cAmIni ) TO VAL( cAmFin )
         cDbf    := "n:\inno\I_" + STR( iTab, 6 )
         cDbfLoc := "c:\inno\I_" + STR( iTab, 6 )
         SELECT 0
         USE &cDbf
         // copy the cdx file to the local disk and then open it:
         // copyfile( cDbf + ".cdx", cDbfLoc + ".cdx" )  // copies the cdx to local c:\
         // SET INDEX TO &cDbfLoc                        // opens the local cdx
         SET INDEX TO &cDbf
         GO TOP
         DO WHILE !EOF()
            IF !EMPTY( cChiamante )
               IF AT( cChiamante, FIELD->src_cgpn ) > 0
                  IF FIELD->dir = "out" .AND. FIELD->secondi > 0 .AND. FIELD->event = "B:Rel"
                     AADD( aLista, { "IN", FIELD->src_cgpn, SPACE( 20 ), FIELD->do, FIELD->secondi, FIELD->dst_cdpn } )
                  ENDIF
               ENDIF
            ENDIF
            IF !EMPTY( cChiamato )
               IF AT( cChiamato, FIELD->src_cdpn ) > 0
                  IF FIELD->dir = "out" .AND. FIELD->secondi > 0 .AND. FIELD->event = "B:Rel"
                     AADD( aLista, { "OUT", SPACE( 20 ), FIELD->src_cdpn, FIELD->do, FIELD->secondi, FIELD->src_cgpn } )
                  ENDIF
               ENDIF
            ENDIF
            SKIP
         ENDDO
         USE
         FERASE( cDbfLoc + ".cdx" )
      NEXT iTab

      oExcel := CREATEOBJECT( "Excel.Application" )
      oExcel:Workbooks:Add()
      oAs := oExcel:Activesheet()
      oAs:name := "Telefonate"
      oAs:Range( "A1" ):Value := "DIR"
      oAs:Range( "B1" ):Value := "CGPN"
      oAs:Range( "C1" ):Value := "CDPN"
      oAs:Range( "D1" ):Value := "DATA ORA"
      oAs:Range( "E1" ):Value := "SECONDI"
      oAs:Range( "F1" ):Value := "MINUTI"
      oAs:Range( "G1" ):Value := "INTERNO"

      IF LEN( aLista ) > 0
         FOR iLista := 1 TO LEN( aLista )
            oAs:Range( "A" + ALLTRIM( STR( iLista + 1 ) ) ):Value := aLista[ iLista, 1 ]
            oAs:Range( "B" + ALLTRIM( STR( iLista + 1 ) ) ):Value := "'" + aLista[ iLista, 2 ]
            oAs:Range( "C" + ALLTRIM( STR( iLista + 1 ) ) ):Value := "'" + aLista[ iLista, 3 ]
            oAs:Range( "D" + ALLTRIM( STR( iLista + 1 ) ) ):Value := "'" + aLista[ iLista, 4 ]
            oAs:Range( "E" + ALLTRIM( STR( iLista + 1 ) ) ):Value := aLista[ iLista, 5 ]
            oAs:Range( "F" + ALLTRIM( STR( iLista + 1 ) ) ):Value := aLista[ iLista, 5 ] / 60
            oAs:Range( "G" + ALLTRIM( STR( iLista + 1 ) ) ):Value := "'" + aLista[ iLista, 6 ]
         NEXT iLista
      ENDIF

      oExcel:visible := .T.
      oAs:Columns( "A" ):AutoFit()
      oAs:Columns( "B" ):AutoFit()
      oAs:Columns( "C" ):AutoFit()
      oAs:Columns( "D" ):AutoFit()
      oAs:Columns( "E" ):AutoFit()
      oAs:Columns( "F" ):AutoFit()
      oAs:Range( "F" + ALLTRIM( STR( iLista + 2 ) ) ):Value := "=SOMMA(F2..F" + ALLTRIM( STR( iLista, 4 ) ) + ")"
      oAs:Columns( "F" ):NumberFormat := "###.##0,00"
      SET AUTOPEN ON
   ENDIF

   offbox( boxn )
   // SET( _SET_AUTOPEN, lAutoOpen )

RETURN NIL
Marco Boschi
info@marcoboschi.it
Re: slowness
Dear Marco,
otto(@)atzwanger.com
Thank you in advance
Otto
- Marc Venken
- Posts: 1481
- Joined: Tue Jun 14, 2016 7:51 am
- Location: Belgium
Re: slowness
Marco,
It seems that you pull all the DBFs for all selected months and then process each record from every DBF over the net. No filters, seeks, etc.? I think that is your problem, or maybe I'm missing something.
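To illustrate the gap Marc is pointing at: a full scan touches every record, while an index lets you jump straight to the matching key. A toy Python comparison, with a sorted key list standing in for a CDX tag:

```python
import bisect

# 10,000 fixed-width keys (multiples of 7), kept sorted like an index tag.
keys = sorted(f"{n:09d}" for n in range(0, 70_000, 7))

def full_scan(target):
    # Like an unindexed DO WHILE loop: examines every record.
    return [k for k in keys if k == target]

def index_seek(target):
    # Like a SEEK on an index: O(log n) jump via binary search.
    i = bisect.bisect_left(keys, target)
    return keys[i] if i < len(keys) and keys[i] == target else None

print(index_seek(f"{49000:09d}"))  # -> '000049000'
```

The point is not the toy timings but the shape of the work: the scan's cost grows with the table, the seek's cost grows only with the logarithm of the index size.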
Marc Venken
Using: FWH 23.08 with Harbour