update_index (OpenInsight 64-bit)
At 08 DEC 2022 10:43:22AM BrianWick wrote:
As part of daily processing I perform three procedures on an activity file where:
the first pass strips out part of each record based on the "age" of the record (such as 7 days old)
and resaves the record,
the second and third passes use a different "select" to identify certain records to delete.
Before and after each of the three passes I do the following:
Update_Index("MAIN1SESSIONVAR","","")
Just recently, in the last 2 months or so, the records are being deleted - but the corresponding index entries apparently are still not removed.
As a result, when I run the daily process the next day, the records that have already been deleted are selected again because the indexes still show they exist (via select AND Btree_Extract) - so when I do a delete, it cannot find the record, and therefore the delete bombs (fails).
Obviously doing a complete index rebuild prior to running the daily process fixes it - but with around 200K records that takes 10-15 minutes or so, AND I need to shut down all my web pages in doing so.
So I am trying not to have to do a rebuild every day.
Also, is it coincidental that, starting the same 2 months ago, at the end of each daily process I clear out all the contents of:
O4WTEMP
SYSLISTS
SYSLHGROUP
SYSAUTHLOG
Could that be something there?
Do I need to (or should I) put in a flush command each time I do a delete - which slows the process?
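(For reference, each delete pass presumably boils down to something like the minimal BASIC+ sketch below. The AGE_DAYS column, the select criteria, and the variable names are illustrative assumptions, not Brian's actual code.)

* Minimal sketch of one delete pass; names and criteria are assumptions
Declare Subroutine Rlist, Update_Index
Open "MAIN1SESSIONVAR" To hTable Then

   * flush any queued index transactions before selecting
   Update_Index("MAIN1SESSIONVAR", "", "")

   * 5 = TARGET_ACTIVELIST$ (RLIST_EQUATES); the WITH clause is an assumption
   Rlist("SELECT MAIN1SESSIONVAR WITH AGE_DAYS GT 7", 5, "", "", "")

   Done = 0
   Loop
      Readnext Key Else Done = 1
   Until Done
      Delete hTable, Key Else
         * this is where the nightly job "bombs" when the index
         * still returns keys for rows that were already deleted
         Null
      End
   Repeat

   * flush again so the deletes are reflected in the index
   Update_Index("MAIN1SESSIONVAR", "", "")
End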
At 08 DEC 2022 10:57AM Donald Bakke wrote:
When the indexes do not appear to reflect the deleted rows, have you inspected the index itself to see if the updates are still queued up? It would be helpful to know whether the index updates were queued up but not flushed, or whether the indexes simply never got updated.
At 08 DEC 2022 12:21PM BrianWick wrote:
Tx for getting back to me, Don -
I do not know where to look to see if indexes have been queued and not yet flushed.
All I do is:
Update_Index("MAIN1SESSIONVAR","","")
Does that NOT complete the entire flush?
At 08 DEC 2022 12:25PM Andrew McAuley wrote:
You would look in the ! file for the existence of transaction records. One question though: is this OI 9 data attached in an OI 10 system?
World leaders in all things RevSoft
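(For anyone following along: the indexes for a table live in a companion table whose name is the table name prefixed with "!", and queued index updates sit there as transaction rows until they are flushed. Below is a rough sketch of checking for them - the assumption that pending transactions show up as numeric-keyed rows is illustrative.)

* Rough sketch: count rows in the "!" table that look like queued
* index transactions (assumed here to be the numeric-keyed rows)
Declare Subroutine Msg
Open "!MAIN1SESSIONVAR" To hBang Then
   Select hBang
   TransCount = 0
   Done = 0
   Loop
      Readnext Key Else Done = 1
   Until Done
      If Num(Key) Then TransCount = TransCount + 1
   Repeat
   Msg(@Window, "Queued index transactions: " : TransCount)
End Else
   Msg(@Window, "Could not open !MAIN1SESSIONVAR")
End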
At 08 DEC 2022 02:00PM Donald Bakke wrote:
Brian - Andrew gave you a link to help you inspect the index table. To answer your question, yes, Update_Index should complete the entire flush. However, there could be circumstances that are preventing this from working as expected and my questions are meant to help isolate the problem and lead you toward a solution.
At 08 DEC 2022 06:38PM BrianWick wrote:
hi guys
tx for suggestions.
Andrew - OI 10.1.
As mentioned, this all started when I began clearing these files as part of the same daily cleanup (at the end of the process):
O4WTEMP
SYSLISTS
SYSAUTHLOG
SYSLHGROUP
Not sure if it is coincidental - likely - but I am taking that out for now, to see how it works over the next few days.
tx again
Bri
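(For what it's worth, the cleanup step presumably amounts to something like the sketch below - the plain select-and-delete loop is an assumption; a utility routine would do the same job.)

* Sketch of the nightly cleanup, assuming a simple select-and-delete loop
Tables = "O4WTEMP,SYSLISTS,SYSAUTHLOG,SYSLHGROUP"
For I = 1 To Count(Tables, ",") + 1
   TableName = Field(Tables, ",", I)
   Open TableName To hTable Then
      Select hTable
      Done = 0
      Loop
         Readnext Key Else Done = 1
      Until Done
         Delete hTable, Key Else Null
      Repeat
   End
Next I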
At 08 DEC 2022 07:24PM Barry Stevens wrote:
Ok, so just to be sure (there is a reason for this question) - you are not sharing this data with OI 9, correct?
At 08 DEC 2022 08:22PM BrianWick wrote:
Sorry - did not answer that completely.
Only OI 10.1.
Not sharing any data with OI 9 - everything is OI 10.1.
tx
At 12 DEC 2022 02:32PM BrianWick wrote:
Hi Again Guys -
I have tried removing all Btrees, copying the data and dict to a new table, deleting the original table and dict, creating a new table and dict with the same name, and then copying the data and dict back into that newly created table - and nothing changes.
The table has about 150 dict items… the next step, this week, will be to manually create each dict item in a new table.
Maybe some kind of weird flag got set somewhere?
tx
At 12 DEC 2022 05:12PM Donald Bakke wrote:
This seems like a case where you have bad data in the keys. My original line of questioning was meant to help you arrive at this conclusion or another explanation. If you have bad keys then all the copying in the world won't change anything.
I recommend hiring a consultant who is more familiar with these situations and will have experience in finding these rogue keys. However, if you are committed to doing this yourself, then search this forum for "bad keys" and you should find a number of posts with details on what to look for and how to resolve the issue.
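(If rogue keys are the culprit, a first pass at hunting for them might look like the sketch below. The printable-ASCII test is an illustrative assumption about what "bad" means here; embedded system delimiters in a key would be the classic offender.)

* Sketch: collect keys containing characters outside printable ASCII
Declare Subroutine Msg
BadKeys = ""
Open "MAIN1SESSIONVAR" To hTable Then
   Select hTable
   Done = 0
   Loop
      Readnext Key Else Done = 1
   Until Done
      KeyLen = Len(Key)
      For I = 1 To KeyLen
         If Seq(Key[I, 1]) < 32 Or Seq(Key[I, 1]) > 126 Then
            BadKeys := Key : @Fm
            I = KeyLen ;* one bad character is enough - stop scanning this key
         End
      Next I
   Repeat
   Msg(@Window, "Suspect keys: " : BadKeys)
End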
At 12 DEC 2022 05:23PM BrianWick wrote:
Clearly it's on my end - and specific to one table.
I had thought of bad data as well - weird characters somehow being created and indexed…
I have not gone through all the keys yet.
But as far as bad keys go - I am just wondering why a complete rebuild works fine and does not break during that process.
You have given me some good ideas and direction, Don.
tx for your help
bri
At 12 DEC 2022 07:07PM Donald Bakke wrote:
But as far as bad keys go - I am just wondering why a complete rebuild works fine and does not break during that process.
Rebuilding an index doesn't rely upon the index transaction pathway that normal index updates go through. Beyond that I don't know what is different, but they are different, and I have seen this scenario (i.e., rebuild works but updates don't) numerous times.
At 13 DEC 2022 10:47AM BrianWick wrote:
That is good to know.
It is not causing my pages to break, so I will deal with it down the road.
Thanks again for chiming in, Don.
bri