Growing Pains (Networking Products)
At 01 JUL 2002 09:47:05AM John Godfrey wrote:
A client is faced with the choice of what to do with their AREV database in the future. The database is expanding fast, and rumour has it that there is a practical limit on the number of AREV records which may well be about to be exceeded - about one million with the current key structure - don't ask.
Does the new 32-bit version of OI offer assurance that large record counts - let's say 50 million records in one table - can now be handled confidently and expeditiously? Or should they consider a U2 database, which has a track record in handling large numbers of records?
If the database is divorced from the front end, is the 32-bit release a red herring here? Do the latest NT/NLM services running with AREV resolve any issues over large record numbers in tables?
Should the solution be a hybrid with an OI front end and a U2 backend? - There are no plans to move away from the MV architecture.
Should the solution be a complete U2/System Builder solution?
What views do users and developers have?
At 01 JUL 2002 05:27PM Richard Bright wrote:
I have a client running AREV with, I believe, over 3 million records on a Novell network.
OI32 can handle very large record counts. Other advantages:
- can Quickdex across large number of records.
- Index rebuilds & handling are VERY much faster.
- Bugs at the Btree 64K boundary are eliminated (no 64K split), so the code is simpler
- with almost limitless string arrays, programming can be simplified.
- 32-bit code opens up flexibility with reporting tools etc.
On a BIG system I wouldn't mix OI32 with AREV - I would convert straight to OI. (Sharing files, you would always have the problem of many indexes being in the 32-bit structure and not accessible in AREV.)
We are going to see big advances in OI32 (Client - Server etc, enhanced filing systems) which will benefit large systems.
My advice, start planning the OI32 upgrade!
Richard Bright
At 02 JUL 2002 02:37AM [url=http://www.sprezzatura.com]The Sprezzatura Group[/url] wrote:
At times we have had 8.5 million records on an AREV 2.12 database. They were small records, but this was in 1995 on a 486-50 with 8MB of RAM. Reindexing was a pain, but the solution was usable. We had real-time validation of phone numbers (nationally) happening from a symbolic.
You should plan to move to OI32. You then will be able to attach the data readily, and selects are far quicker owing to lack of memory constraints. You could also attach an SQL volume with the same data as required.
Some database size projections are probably necessary about now. There is a 2GB limit on an OV or LK file.
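As a rough illustration of the kind of projection Sprezzatura suggests, here is a back-of-envelope check of whether a 50-million-record table fits under a 2GB (or 4GB) file limit. The average record size and overhead factor are assumptions for the sake of the sketch, not figures from the thread:

```python
# Back-of-envelope sizing check against a 2 GB / 4 GB Linear Hash
# file limit. Average record size and overhead are assumed values
# for illustration only.
GB = 2 ** 30

def projected_size_bytes(record_count, avg_record_bytes, overhead=1.2):
    """Estimate total file size, allowing a rough 20% overhead for
    frames, keys, and hashing slack (assumed figure)."""
    return int(record_count * avg_record_bytes * overhead)

size = projected_size_bytes(50_000_000, avg_record_bytes=100)
print(f"{size / GB:.1f} GB")          # roughly 5.6 GB at these assumptions
print(size < 2 * GB, size < 4 * GB)   # False False - over either limit
```

Even at a modest 100 bytes per record, 50 million records would blow past both limits, which is why size projections matter before committing to a platform.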
World Leaders in all things RevSoft
At 02 JUL 2002 05:19AM Cameron Christie wrote:
I'm sure when I queried the existence of 2Gb limits on .OV files a year or two back I was told I was imagining it!
Presumably this affects OI32 as well (at least until its flavour of linear hash gets a revamp, making it inconsistent with ARev anyway?)
Does anyone know if there are any plans to address this within ARev?
TIA,
Cameron
At 02 JUL 2002 06:44AM Mike Ruane wrote:
4GB, I believe.
Mike
At 02 JUL 2002 06:46AM [url=http://www.sprezzatura.com]The Sprezzatura Group[/url] wrote:
Cam,
I'm sure when I queried the existence of 2Gb limits on .OV files a year or two back I was told I was imagining it!
These aren't the files you're looking for … :)
Darth C
World leaders in all things RevSoft
At 02 JUL 2002 12:22PM Pat McNerthney wrote:
It "should" be a 4G limit, unless there is a signed/unsigned integer bug in the code, which would put it at 2G.
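Illustrating Pat's point, this is a sketch (not the actual Linear Hash code) of how a signed/unsigned mix-up halves the usable range: a 32-bit offset at or past 2GB has its top bit set, so code that treats it as signed sees a negative file position:

```python
import struct

def as_signed_32(offset):
    """Reinterpret a 32-bit unsigned file offset as signed - the way
    a hypothetical signed/unsigned bug in file-pointer code would."""
    return struct.unpack('<i', struct.pack('<I', offset))[0]

GB = 2 ** 30
print(as_signed_32(2 * GB - 1))   # 2147483647: last valid "signed" offset
print(as_signed_32(2 * GB))       # -2147483648: wraps negative at 2 GB
```

So an unsigned 32-bit offset addresses up to 4GB (2^32 - 1 bytes), but one signed comparison anywhere in the path effectively caps the file at 2GB.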
In the queue is a project to remove this 4G limit from Linear Hash; by using the AREV Thunking All Networks Driver, it will be possible for AREV applications to use Linear Hash files this big (or better yet, just use the NT Service).
Pat
At 02 JUL 2002 12:36PM John Godfrey wrote:
As we have now identified an issue, perhaps we can begin to address it. It is extremely disheartening, after developing good AREV systems, to be "gazumped" by competitors due to a size limitation. Perhaps canvassing for new developments might also be easier, knowing that as a system succeeds and grows it will not have to be migrated to an alternative platform. If other users are also being tempted away because of size limitations, then a cost/benefit case should perhaps be made for this work; indeed it may well be occurring as we speak/write. - Mike informs me that OpenInsight 4.x will easily support 50 million records, and has successfully tested a 40-million-record table with multiple indexes.
At 02 JUL 2002 01:16PM John Godfrey wrote:
Are you saying this limit has already been removed if using the current NT Service?
At 02 JUL 2002 03:35PM Pat McNerthney wrote:
No, I am not, sorry about that…
I am saying that *when* Linear Hash has this limit removed, *then* the simplest way to take advantage of it from AREV will be by using the NT Service.
Pat
At 03 JUL 2002 03:58AM John Godfrey wrote:
I assumed that's what you meant - just checking! Thanks, everyone, for your contributions. I still believe key and data structure also have an important effect, but this has to be good news for all of us who wish the product to do well.