Performance Issues (AREV Specific)
At 20 FEB 2001 10:22:46AM Stephen Roy wrote:
We are using AREV 3.12 on a Novell 4.11 SFT III server. The servers are PII 350s with 128 MB of RAM. When all processes are running, CPU utilisation peaks at 70%, but it usually sits at about 35 to 60%.
Every evening we run 200 programs that each pull about 5000 records from the same database. We have divided these processes over 10 client machines, all of them DOS-based PII 300s or higher on a 100Base-T segment. Performance is an issue: at the moment the run takes roughly 60-80 minutes (it varies) each evening. Some of the errors that have occurred during this process are Divide overflow, Sharing violation, and FS466.
Are there any documented limits on record size, database size, or client-to-server ratios?
At 20 FEB 2001 02:18PM Don Miller - C3 Inc. wrote:
Stephen ..
Are you using the NLM? Are the programs writing to REV files or to some other DOS-type file? Usually a Sharing Violation indicates that you are trying to open a DOS file rather than a REV file, since the REV file open will take the ELSE branch in the open statement (assuming you have coded it that way).
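As a minimal R/BASIC sketch (the table name INVOICES is an assumption), a REV file open that cannot be satisfied falls into the ELSE branch:

   * Sketch only - "INVOICES" is an assumed table name
   OPEN "INVOICES" TO InvFile ELSE
      PRINT "Cannot open INVOICES - check the volume is attached"
      STOP
   END

A DOS-type file opened through the operating system, by contrast, can fail with a Sharing Violation before your code ever reaches the ELSE.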
Are you doing a FLUSH / GARBAGECOLLECT ahead of or during the processing? Stack overflow sometimes results from an inability to assign any more descriptors. Are you using dynamic variables or dimensioned variables? The latter can quickly fill up the descriptor table.
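As a hedged sketch of that housekeeping - DataFile is assumed to be an already-opened file variable, and 1000 is just an illustrative interval:

   SELECT DataFile
   Counter = 0 ; Done = 0
   LOOP
      READNEXT Id ELSE Done = 1
   UNTIL Done DO
      READ Rec FROM DataFile, Id THEN
         NULL  ;* application processing of Rec would go here
      END
      Counter = Counter + 1
      IF MOD(Counter, 1000) = 0 THEN
         FLUSH           ;* write cached frames back to disk
         GARBAGECOLLECT  ;* reclaim discarded descriptor space
      END
   REPEAT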
I don't understand the FS466 error unless the NLM is loaded. This indicates No Server Response between AREV and the Server NLM through the watchdog. If you are using the NLM, is there a REVPARAM file in each subdirectory with SERVERONLY=TRUE?
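For reference, REVPARAM is just a DOS text file dropped into the data subdirectory; assuming the NLM setup described above, the relevant line is:

   SERVERONLY=TRUE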
If the clients are Win9x, do you start LHIPXTSR with the /P switch?
These are some of the things to look at. I'm not sure that this is much help.
Don Miller
C3 Inc.
At 20 FEB 2001 04:42PM [url=http://www.sprezzatura.com]The Sprezzatura Group[/url] wrote:
This is a large payload (a million records) to process on any network. Consider the following options:
(a) The biggest bottlenecks will be network contention on shared system files and general network congestion, especially since your server is not running at 100%.
Try attaching local copies of LISTS, WINDOWS, VOC, MENUS, libraries and any other relatively static structures accessed during your process. This frees the network for data traffic exclusively. The downside is that you have to release new code to multiple local volumes, but if you're serious enough to process 1,000,000 records you're serious enough to handle this discipline. You could even isolate all the routines called into their own local libraries and leave the rest on the server as common. (A hypothetical ATTACH sketch follows this list.)
(b) Remove Novell's TTS (Transaction Tracking System) - you probably have already, but in 4.x it's on by default. These services within Novell 4.11 are expensive in compute time. Any other accounting or background server tasks should also be disabled, if possible, during your update.
(c) Turn off virus checking on the server and workstations during the cycle.
(d) Run AREV in full-screen mode in the foreground.
(e) Compile all your R/BASIC programs minus linemarks (the debugging line markers embedded in the object code). The less object code there is to read, the faster it happens.
(f) Try using either Novell's NDS or bindery access exclusively - running both NDS and bindery client protocols is more expensive than running one. Indeed, the fewer protocol stacks you run under Windows, the better the performance.
(g) Set COMSPEC= back to a local copy of COMMAND.COM, not the one in the network PUBLIC directory (see the example after this list).
(h) Ever looked at all those NLMs on the server and wondered which were really necessary? They take up memory that could be used for caching your data files. Remove the ones you really don't need. MONITOR.NLM doesn't help your process, so unload it (console command after this list); it can be reloaded as required.
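For (a), the attach might look something like this hypothetical TCL sketch - the path C:\AREVLOCAL is an assumption, and exact ATTACH syntax can vary with your volume setup:

   ATTACH C:\AREVLOCAL\

with local copies of LISTS, WINDOWS, VOC and MENUS placed on that volume, and a check that the local copies are the ones actually resolved in your attach order.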
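For (g), assuming COMMAND.COM has been copied to the root of each workstation's local drive, a line in AUTOEXEC.BAT such as:

   SET COMSPEC=C:\COMMAND.COM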
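For (h), at the NetWare server console:

   UNLOAD MONITOR

and LOAD MONITOR again later if you need it.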
DIVIDE OVERFLOW is quite uncommon - check your hardware isn't overclocked or running hot.
SHARE VIOLATION sounds as if each session on one workstation may be resolving to the same @STATION - is every @STATION on each session on each client PC different?
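One hedged way to check: have each session record its station ID at startup (STATLOG is an assumed file name). Because the station ID is used as the record key here, duplicate @STATIONs collide on the same record - fewer log records than sessions means duplicates:

   OPEN "STATLOG" TO LogFile THEN
      WRITE TIMEDATE() ON LogFile, @STATION
   END ELSE
      PRINT "This session's station ID: " : @STATION
   END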
If you can run sessions at 5000 records each, is it faster with half the sessions at 10,000 records? You don't mention the history behind your present configuration.
Have you minimized the number of indexes on your data? Each index is in fact a duplicate data structure, and maintaining it costs performance. In many legacy applications, indexes were added long ago and are no longer required.
If you really want the code optimized have you had it examined by a third party? Sometimes a fresh look at configuration management or your code can offer some savings. There is a famous example of an AREV batch update routine going from 9 hours to 3 merely by changing one date ICONV routine to a faster substitute. Post your code to a reliable third party and get some specific advice.
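Purely as an illustration of the kind of substitute meant (this is not the routine from that famous example): if many records repeat the same date strings, the ICONV can be memoised. The sample data and cache layout here are invented:

   Dates = "20/2/2001" : @FM : "20/2/2001" : @FM : "21/2/2001"
   CacheKeys = "" ; CacheVals = ""
   FOR I = 1 TO 3
      ExtDate = Dates<I>
      LOCATE ExtDate IN CacheKeys USING @FM SETTING Pos THEN
         IntDate = CacheVals<Pos>      ;* cache hit - no ICONV call
      END ELSE
         IntDate = ICONV(ExtDate, "D") ;* convert once per distinct date
         CacheKeys<-1> = ExtDate
         CacheVals<-1> = IntDate
      END
   NEXT I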
World Leaders in all things RevSoft
At 21 FEB 2001 01:54AM Curt Putnam wrote:
How many of the 200 SELECTs are common? How many are close enough to being the same that with a code tweak the same select could serve several programs?
Is it possible to write one program that does the equivalent of all the selects? It could be a giant case statement that feeds task specific files. Don't forget that a select pretty much reads every record in the file, saves the ID, and then the process can go read them again.
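A skeletal sketch of that idea - all file and field names here are invented for illustration:

   * One pass over the main file, routing records to task-specific files
   OPEN "TRANSACTIONS" TO TransFile ELSE STOP
   OPEN "TASK.A.OUT" TO TaskA ELSE STOP
   OPEN "TASK.B.OUT" TO TaskB ELSE STOP
   SELECT TransFile
   Done = 0
   LOOP
      READNEXT Id ELSE Done = 1
   UNTIL Done DO
      READ Rec FROM TransFile, Id THEN
         BEGIN CASE
            CASE Rec<1> = "A"
               WRITE Rec ON TaskA, Id
            CASE Rec<1> = "B"
               WRITE Rec ON TaskB, Id
         END CASE
      END
   REPEAT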
Consider having the posting programs write application specific (pre-selected / pre-sorted) files as they post the main file. An extra write per transaction is not a huge hit.
Does every process HAVE to run at night?
At 26 FEB 2001 10:19AM Victor Engel wrote:
I would also localize SYSTEMP, which is used as temporary storage for all select statements.
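Assuming a local directory already holds a copy of SYSTEMP, attaching it would be along these lines at TCL (the path is an assumption and exact ATTACH syntax may vary):

   ATTACH C:\AREVLOCAL\ SYSTEMP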
Also, make sure the files are properly sized. Sometimes a sizelock will get set, keeping the files from resizing properly.