Discussion: power loss while receiving news
Nigel Reed
2024-04-20 21:20:49 UTC
Permalink
Hi there,

I'm in a situation where I've been receiving an innxmit feed to
populate a new news server.

Unfortunately, after 992GB, we had a power outage and my UPS died
without a clean shutdown.

Now, I'm paranoid my index is corrupt and not sure what to do about it.

I'm using CNFS on this new system.
--
End Of The Line BBS - Plano, TX
telnet endofthelinebbs.com 23
Nigel Reed
2024-04-21 01:27:40 UTC
Permalink
On Sat, 20 Apr 2024 16:20:49 -0500
Post by Nigel Reed
Hi there,
I'm in a situation where I've been receiving an innxmit feed to
populate a new news server.
Unfortunately, after 992GB, we had a power outage and my UPS died
without a clean shutdown.
Now, I'm paranoid my index is corrupt and not sure what to do about it.
I'm using cfs on this new system.
To add more detail: I am definitely missing some articles.

$ grephistory '<***@4ax.com>' | sm
sm: could not retrieve @03024359434E475331000F8053BA00000001@


I had to go back 675 lines in the log file before I was able to
retrieve an article.
--
End Of The Line BBS - Plano, TX
telnet endofthelinebbs.com 23
Julien ÉLIE
2024-04-21 11:58:42 UTC
Permalink
Hi Nigel,
Post by Nigel Reed
Post by Nigel Reed
I'm in a situation where I've been receiving an innxmit feed to
populate a new news server.
Unfortunately, after 992GB, we had a power outage and my UPS died
without a clean shutdown.
Now, I'm paranoid my index is corrupt and not sure what to do about it.
I'm using cfs on this new system.
To add more detail: I am definitely missing some articles.
I had to go back 675 lines in the log file before I was able to
retrieve an article.
I would then just suggest running innxmit again on the sending server
for these 675 articles.
Yet, the number of missing articles seems high. The CNFS headers are
updated every 25 articles by default. Did you change the cycbuffupdate
setting in cycbuff.conf? (Or do you have lots of cyclic buffers, as the
refresh applies to each buffer separately?)

grephistory gets its information from the history file, which is flushed
every 10 articles by default (icdsynccount setting in inn.conf).

Overview data is usually written to disk less frequently; it depends on
the overview storage method you are using (after each article arrival
for tradindexed; according to the transrowlimit and transtimelimit
settings in ovsqlite.conf for ovsqlite; the txn_nosync setting in
ovdb.conf for ovdb; and the ovflushcount setting in inn.conf for
buffindexed).
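
For reference, the two defaults mentioned above correspond to
configuration lines like the following (a sketch showing only the
documented default values; the rest of cycbuff.conf and inn.conf is
omitted):

```
# cycbuff.conf: rewrite each CNFS buffer header every 25 articles (default)
cycbuffupdate:25

# inn.conf: flush the history file to disk every 10 articles (default)
icdsynccount:           10
```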
--
Julien ÉLIE

« Computers in the future may weigh no more than 1.5 tons. »
(Popular Mechanics, 1949)
Ray Banana
2024-04-21 14:01:46 UTC
Permalink
Post by Julien ÉLIE
I would then just suggest running innxmit again on the sending server
for these 675 articles.
As the articles seem to be known in the history file, the target server
will reject them as duplicates, even if they don't exist in the spool.

BTW: Is there another way to remove entries from history except manually
deleting them?
--
Пу́тін — хуйло́
https://www.eternal-september.org
Julien ÉLIE
2024-04-21 20:13:42 UTC
Permalink
Hi Wolfgang,
Post by Ray Banana
Post by Julien ÉLIE
I would then just suggest running innxmit again on the sending server
for these 675 articles.
As the articles seem to be known in the history file, the target server
will reject them as duplicates, even if they don't exist in the spool.
Oh, yes, you're totally right. These Message-IDs must be removed from
the history file beforehand.
Post by Ray Banana
BTW: Is there another way to remove entries from history except manually
deleting them?
What I usually do to achieve that is:
1- setting /remember/ to 0 in expire.ctl;
2- running the expire process ("news.daily notdaily" called with the
same parameters as in crontab);
3- setting /remember/ to its previous value (11 by default).

I'm not aware of another way to completely remove entries from the
history file (it needs rebuilding one way or another). If you see
another method with the current programs shipped with INN, I would be
glad to hear about it.
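
As a sketch, the three steps look like this (the actual parameters to
pass to news.daily are whatever your crontab entry already uses):

```
# 1. In expire.ctl, temporarily keep no record of expired entries:
/remember/:0

# 2. Run the expire process once, adding "notdaily" to the usual
#    crontab parameters:
news.daily notdaily

# 3. Restore the previous value in expire.ctl (11 is the default):
/remember/:11
```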
--
Julien ÉLIE

« We never go so far as when we do not know where we are going. »
Jesse Rehmer
2024-04-21 23:33:31 UTC
Permalink
On Apr 21, 2024 at 3:13:42 PM CDT, "Julien ÉLIE"
Post by Julien ÉLIE
Hi Wolfgang,
Post by Ray Banana
Post by Julien ÉLIE
I would then just suggest running innxmit again on the sending server
for these 675 articles.
As the articles seem to be known in the history file, the target server
will reject them as duplicates, even if they don't exist in the spool.
Oh, yes, you're totally right. These Message-IDs must be removed from
the history file beforehand.
Post by Ray Banana
BTW: Is there another way to remove entries from history except manually
deleting them?
1- setting /remember/ to 0 in expire.ctl;
2- running the expire process ("news.daily notdaily" called with the
same parameters as in crontab);
3- setting /remember/ to its previous value (11 by default).
I'm not aware of another way to completely remove entries from the
history file (it needs rebuilding one way or another). If you see
another method with the current programs shipped with INN, I would be
glad to hear about it.
Can you not remove the lines from "history" and then run makedbz?
Julien ÉLIE
2024-04-22 16:02:14 UTC
Permalink
Hi Jesse,
Post by Jesse Rehmer
Post by Julien ÉLIE
Post by Ray Banana
BTW: Is there another way to remove entries from history except manually
deleting them?
1- setting /remember/ to 0 in expire.ctl;
2- running the expire process ("news.daily notdaily" called with the
same parameters as in crontab);
3- setting /remember/ to its previous value (11 by default).
I'm not aware of another way to completely remove entries from the
history file (it needs rebuilding one way or another). If you see
another method with the current programs shipped with INN, I would be
glad to hear about it.
Can you not remove the lines from "history" and then run makedbz?
Wolfgang asked for a way other than manual deletion. Yes, editing the
history file by hand and then running "makedbz" or "makehistory -O" will
also work. It is just more error-prone, and naturally one has to shut
down INN before any manual editing of the history file.
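
To illustrate the manual route, here is a small Python sketch (a
hypothetical helper, not an INN tool) for trimming the last N entries
from a copy of the text-format history file. The line layout assumed
here, a hash, the date fields, and an optional storage token separated
by tabs, is the one documented in history(5); entries without a token
refer to articles no longer present in the spool. INN must be shut down
before the real file is replaced and "makedbz" is run.

```python
def split_history_entry(line):
    """Split a text-format history line into (hash, storage token).

    Lines look like "[hash]<TAB>arrived~expired~posted[<TAB>@token@]";
    the token field is absent when the article is no longer in the spool.
    """
    fields = line.rstrip("\n").split("\t")
    return fields[0], fields[2] if len(fields) > 2 else None


def trim_history(lines, n):
    """Return the history entries with the last n entries removed."""
    return list(lines)[:-n] if n > 0 else list(lines)
```

One would write the trimmed list to a temporary file, move it over the
history file, and then rebuild the dbz index files.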

I would recommend also rebuilding the overview data ("makehistory -O")
and not only the dbz files ("makedbz"), as otherwise they will be
inconsistent. I think duplicate Message-IDs in the overview database
won't prevent the articles from being accepted, as our current overview
methods work by article number and not by Message-ID, but it is better
to stay consistent.

Running "news.daily notdaily" will do all of that for you.
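
As a command-level sketch of that manual alternative (run as the news
user on a standard INN installation; the throttle reason is arbitrary
and <pathdb> stands for your configured database directory):

```
ctlinnd throttle 'history cleanup'   # or shut INN down entirely
# Edit <pathdb>/history by hand, removing the offending entries, then:
makedbz -s `wc -l < history` -o      # rebuild history.dir/.hash/.index
makehistory -O                       # rebuild the overview data
ctlinnd go 'history cleanup'
```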
--
Julien ÉLIE

« Dignus est intrare. » (issu de Molière)
Nigel Reed
2024-04-25 06:14:36 UTC
Permalink
On Mon, 22 Apr 2024 18:02:14 +0200
Post by Julien ÉLIE
Hi Jesse,
Post by Jesse Rehmer
Post by Julien ÉLIE
Post by Ray Banana
BTW: Is there another way to remove entries from history except
manually deleting them?
1- setting /remember/ to 0 in expire.ctl;
2- running the expire process ("news.daily notdaily" called with
the same parameters as in crontab);
3- setting /remember/ to its previous value (11 by default).
I'm not aware of another way to completely remove entries from the
history file (it needs rebuilding one way or another). If you see
another method with the current programs shipped with INN, I would
be glad to hear about it.
Can you not remove the lines from "history" and then run makedbz?
Wolfgang asked for a way other than manual deletion. Yes, editing
the history file by hand and then running "makedbz" or "makehistory
-O" will also work. It is just more error-prone, and naturally one
has to shut down INN before any manual editing of the history file.
I would recommend also rebuilding the overview data ("makehistory
-O") and not only the dbz files ("makedbz"), as otherwise they will
be inconsistent. I think duplicate Message-IDs in the overview
database won't prevent the articles from being accepted, as our
current overview methods work by article number and not by
Message-ID, but it is better to stay consistent.
Running "news.daily notdaily" will do all of that for you.
This is exactly what worked for me. I ran news.daily notdaily with
the crontab parameters, it removed the entries from the history file,
and we were able to restart the transfer from the last missing
article. It's running again now.

Thanks,
--
End Of The Line BBS - Plano, TX
telnet endofthelinebbs.com 23
Nigel Reed
2024-04-21 23:53:37 UTC
Permalink
On Sun, 21 Apr 2024 13:58:42 +0200
Post by Julien ÉLIE
Hi Nigel,
Post by Nigel Reed
Post by Nigel Reed
I'm in a situation where I've been receiving an innxmit feed to
populate a new news server.
Unfortunately, after 992GB, we had a power outage and my UPS died
without a clean shutdown.
Now, I'm paranoid my index is corrupt and not sure what to do about it.
I'm using cfs on this new system.
To add more detail: I am definitely missing some articles.
I had to go back 675 lines in the log file before I was able to
retrieve an article.
I would then just suggest running innxmit again on the sending server
for these 675 articles.
Yet, the number of missing articles seems high. The CNFS headers are
updated every 25 articles by default. Did you change the
cycbuffupdate setting in cycbuff.conf? (Or do you have lots of
cyclic buffers, as the refresh applies to each buffer separately?)
grephistory gets its information from the history file, which is flushed
every 10 articles by default (icdsynccount setting in inn.conf).
Overview data is usually written to disk less frequently; it depends
on the overview storage method you are using (after each article
arrival for tradindexed; according to the transrowlimit and
transtimelimit settings in ovsqlite.conf for ovsqlite; the txn_nosync
setting in ovdb.conf for ovdb; and the ovflushcount setting in
inn.conf for buffindexed).
No, no changes to the cycbuff.conf file and I only have two buffers,
one for test messages that I don't want to keep around and one for
everything else.

My expire.ctl already has /remember/:0 so we're good there.

I can delete the last 675 entries from the history file; will that just
cause the overview records to be recreated? What about the history.hash
and history.index files? I can't believe it's as easy as removing a few
lines from history and restarting the transfer at the point of the last
missing message.
--
End Of The Line BBS - Plano, TX
telnet endofthelinebbs.com 23
Julien ÉLIE
2024-04-22 16:02:15 UTC
Permalink
Hi Nigel,
Post by Nigel Reed
I can delete the last 675 entries from the history file; will that just
cause the overview records to be recreated? What about the history.hash
and history.index files?
The second point of the process does that (purging the overview records
and recreating the history.dir/hash/index files):
2- running the expire process ("news.daily notdaily" called with the
same parameters as in crontab)
Post by Nigel Reed
I can't believe it's as easy as removing a few lines from history and
restarting the transfer at the point of the last missing message.
Not everything needs to be complicated :)
--
Julien ÉLIE

« Subi dura a rudibus. »