Closed: nileprak closed this issue 1 month ago
Yes, the file list is stored in a text item, which has a 64K limit. The number of entries depends on the length of the database names, but there is clearly a limit. The backup should also check for that limit and not bring you into a situation where you can't delete the entries any more.
Maybe you have created them with an earlier version?
Domino Backup isn't intended for such a long backup retention time. And this was an early design decision in 12.0.
There is one configuration that could help you, if you really need to keep older backups in some way:
Domino Backup with the VSS Writer writes consistent backups. The deltas are merged directly during the VSS snapshot. So you could keep the Domino NSF backups for a longer time on the Veeam side and let Domino prune the backup inventory on the Domino side.
That way you still have the consistent database backups on the Veeam side. Also, by design of the Veeam integration, pruning of the backups should be implemented on the Veeam side, to avoid a misconfiguration or an operation on a Domino server pruning all Veeam backups for that server.
How often would you need to restore an older backup? If the data is still on the Veeam side, you could import a backup log for a backup you need.
Domino writes into dominobackup.nsf, and optionally the same data is also written into log files on disk. You could keep those logs and use them to import a backup into your dominobackup.nsf before a restore operation.
There is also a way to specify an alternate dominobackup database for a restore using the -cfg option. But this is mainly intended for recovery and migration scenarios from one configuration to another, and to keep older repositories around for a while.
Clearly you are over the edge with 180 backups, and what I described are ways to get something working for your special requirements.
This isn't a bug. You could open an AHA idea as an enhancement request. But this would be an architecture change, which isn't likely to be implemented soon without many admins asking for it.
Hi, we're also having this issue.
@Daniel-Nashed Could you give some more details on this work around?
> Domino writes into dominobackup.nsf, and optionally the same data is also written into log files on disk. You could keep those logs and use them to import a backup into your dominobackup.nsf before a restore operation.
Our calculations show about ~70 backup dates being stored until we overflow. Surely 70 days of retention is not to be considered over the edge?
Would it be possible to reduce the data written into this field, or to change this to a RichText?
Thank you
Around ~70 would only happen if you have really long database names. ~150 backups should always be possible; the field can be up to 64 KB. You can simply do the math from the path length plus the additional information stored per entry. Have a look at the largest fields you have on your system.
The field you need to look for is "List". You can see from my example below that this field is deliberately not a summary item. So it can hold 64K, but it can't be checked by a view. You would need an agent to look at the size of the item.
~70 backups isn't really where the limit would be. If you calculate it, the list should hold at least 200 backups.
Moving it to a different type of storage would be a feature enhancement, which can't be done in a fix pack and would need to go through product management. This is more complex than you would think.
You are the second person to ask for it since the feature was introduced in Domino 12.0.
How many backups do you need to store?
Pruning the backup from dominobackup.nsf while keeping the backups mainly makes sense when the backup application merges delta information into the backup, like in the Veeam Windows case. So a Veeam admin could restore the data.
Usually with Veeam and other backup applications, the long-term storage would even be in a different location and not on the primary backup target.
Due to compacts and other operations, an NSF file can change considerably over time, which increases the backup size (de-duplication isn't effective for changing data).
If you need someone to look into your data, you would need to open a support ticket. But I think my info about the "List" field should be helpful to find the right information and report back here.
```
Field Name: List
Data Type: Text List
Data Length: 1543 bytes
Seq Num: 74
Dup Item ID: 0
Field Flags:
```
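To illustrate the capacity math, here is a quick back-of-envelope sketch (a hypothetical Python calculation; the exact per-entry layout of the "List" item isn't documented here, so the bytes-per-entry figure is an assumption you should measure on your own system, e.g. with an agent reading the item size):

```python
# Rough capacity estimate for the 64 KB "List" text-list item.
# bytes_per_entry is an ASSUMPTION: measure the real item size on your
# system and divide by the number of backup entries to calibrate it.

FIELD_LIMIT = 64 * 1024  # 64 KB text item limit


def max_entries(bytes_per_entry: int) -> int:
    """How many backup entries fit before the oldest one gets pruned."""
    return FIELD_LIMIT // bytes_per_entry


# If roughly 150-200 entries fit in practice, each entry takes on the
# order of 330-440 bytes (timestamp + path + additional information):
for per_entry in (330, 440):
    print(per_entry, "bytes/entry ->", max_entries(per_entry), "entries")
```

Longer database paths raise the per-entry size and push the limit down toward the ~70 entries reported above.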
Hi @Daniel-Nashed
Thank you for your response.
You are correct. I have just checked and we are fitting 200+ entries in this field.
I have tracked our confusion to be because of the following:
However, if we first click the box next to it, we get an error
> Field is too large (32K) or View's column & selection formulas are too large
And after that, if we click the dropdown, we are no longer seeing all backups:
Perhaps this is easier to isolate and fix.
OK, this looks like a template issue, not a back-end issue with this amount of data. There is a button and a drop-down arrow.
The drop-down is the standard functionality. The button picks the right backup based on the date you entered.
I would need to test this on my own. But I would probably just use a single database so I don't fill my disk. Probably one with a long path ...
But your clarification of what works and what fails will help. This looks like a template or client issue. The back-end data looks right.
OK, I have reproduced it with 58 KB in the field, for about 160 backup entries. What worked for you? For me, none of the select options in the form works. I am not seeing any data. I will delete a couple of backups and try again.
We have been doing restores without any major issue from what I know, except that we have not seen all backup dates. Will do some thorough testing tomorrow and get back to you.
In my test it starts showing errors at 32 KB. I have tested with 12.0.2 and 14.0 clients.
I took a closer look and found what causes the issue.
The inventory document item List is not a summary field.
The restore action copies items of that document into a new document.
In my test the resulting item has the summary flag set, which caused the described issue in the restore document.
We will have to research why this is happening. This isn't a behavior I would expect.
Quick research showed that the CopyItem function used works in general.
But I have a work-around for you:
Large summary items have been introduced in Domino 10 ODS.
This new ODS feature increases the summary item size to 64k and the total size of summary data for a document for all summary items to 16MB.
The compact task can be used to enable large summary items. But you don't need a copy style compact. The task just enables a database option bit.
```
load compact -LargeSummary on dominobackup.nsf
Informational, LargeSummary has been enabled for database dominobackup.nsf.
```
Please let me know if this works for you too.
@Daniel-Nashed
Everything is working fine for us now. My colleague is not even getting the "32K" error; he is using 12 FP3, I'm using 12 FP2.
Thanks for your feedback. And glad it works with this setting. Still, you should be aware that the number of backups is limited!
What is your back-end? And how many backups do you really need? Is this per day or do you run multiple backups per day?
I would like to understand customer requirements better. Usually customers have a 30-day or at most 90-day retention for normal backups. Everything else is more of a long-term/archive/compliance backup.
With a long term/archive backup requirements are usually different.
Depending on the backup type, the data is consistent on its own and does not need a Domino restore operation.
For example, all file-based backups automatically merge deltas. Snapshot integrations like Veeam use the VSS Writer, which allows merging deltas into the snapshot.
You could let the inventory expire and keep the data on the back-end. You could also keep the backup logs, and you would be able to import the inventory data for a backup manually if really needed.
But usually you keep something like 30-90 days, and everything with a longer retention is just a compliance backup, which is restored in a different way.
Hi @Daniel-Nashed
We estimate we will be good using our set retention time of 6 months, having approximately 240 rows in the List field with the current path + DB names.
We want to be able to perform database restores down to 15-minute intervals 6 months back in time, using the first full/incremental snapshot backups and then applying logs from the transaction log archive.
We use Veeam B&R together with a TrueNAS repository taking care of data deduplication, implemented in accordance with https://github.com/HCL-TECH-SOFTWARE/domino-backup.
We know that VSS snapshot backups in combination with Veeam let us restore even older backups than what the Domino 64K limit allows us to. But as we want to be able to also apply transaction logs, we need to be able to do this from the Domino backup.
LargeSummary has been enabled on the dominobackup.nsf database from the start.
Thank you again for your help
Hmmm.. if LargeSummary was enabled, why did you run into the issue? What did you change to make it work now?
Are you on Windows, using the VSS backup? I don't understand your point about the 15 minutes. Transaction log backup will give you any point in time.
But how do you implement translog backup? With a file backup to a TrueNAS device? The Veeam integration does not take care of transaction logs. That needs a different integration, which could be just a simple file copy operation. Just make sure to have snapshots on the TrueNAS side, so that the files that can be accessed from the Domino side are protected.
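Such a "simple file copy" translog archive could be sketched roughly like this (a minimal hypothetical Python sketch: the source directory, the target mount, and the `*.TXN` pattern are all assumptions — adapt them to your own translog archive configuration):

```python
# Minimal sketch: copy archived transaction log files to a NAS share.
# The paths and the "*.TXN" naming pattern below are ASSUMPTIONS for
# illustration only; they are not part of any official integration.
import shutil
from pathlib import Path


def archive_new_logs(src: Path, dst: Path) -> list[str]:
    """Copy translog files that are not yet present in the archive target."""
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for log in sorted(src.glob("*.TXN")):
        target = dst / log.name
        if not target.exists():  # skip logs already archived
            shutil.copy2(log, target)
            copied.append(log.name)
    return copied


# Example call (hypothetical paths):
# archive_new_logs(Path(r"D:\Domino\translog_archive"),
#                  Path(r"\\truenas\domino-translog"))
```

Schedule the copy so it only picks up logs that Domino has finished writing, for example right after the translog archive run, to avoid copying files still in use.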
240 snapshots is also a burden for Veeam. Snapshot backups are usually not intended for long-term retention. 6 months would be 183 backups; why do you plan for 240?
You have to be aware that with snapshots, databases with a changed DBIID or new databases are not in the backup until the next snapshot. But usually you address that by scheduling compacts only once per week, before a backup.
Snapshot backups are different from full backups! And you can't run incremental backups from the Domino side using snapshots.
Just scheduling more than one per day with a retention time of 6 months is a lot of burden for the environment.
You really need to be aware of what you are doing here. This GitHub repository can't provide the same as the consulting services and workshops a specialized partner could do with you on site, knowing your environment.
I am trying my best here to give the right general information, that you can hopefully apply to your special requirements.
Thank you @Daniel-Nashed
We are moving from Spectrum Protect backups to Veeam & VSS and are obviously new to this. Let me try to explain how we have set this up.
We have configured Veeam to do a "Volume-Level Backup" of the disk where the Domino databases are located. Full backups once a week and incrementals nightly. Veeam uses TrueNAS with deduplication for storage of this data.
This looks to be working great:
Transaction logs are archived/backed up every 15 min by the Domino backup database to a separate storage.
As you say, we need about 180 entries, not 240. Sorry for the confusion. Compacts are scheduled weekly before the "full" backup.
From my point of view, this issue is now a non-issue. I was worried about the error I was getting and not seeing all backups when trying to restore (see my previous screenshots). Although I have now seen my coworker not have this issue, so it's probably to do with my installation not being on 12 FP3.
However, if you are seeing issues with the way we have set up backups I'd love to hear more or be pointed to the right direction.
@te-dara this sounds like an interesting project and a good reference. The only open question for me is the translog part: how are you actually doing it? Probably a file copy to the TrueNAS? I am personally also using TrueNAS Scale, which is a great environment.
But I am also running a couple of other infrastructure components. Take a look at this blog post for amazing backup performance with ZFS and optimized infrastructure components. I am basically skipping one virtualization layer and file system, going to native ZFS.
But that's a different story. If you want to share your configuration I am happy to have a look and I am also interested what you did.
What about having a Sametime session to look at your config and logs together? I wrote most of Domino Backup/Restore, including the VSS Writer and the Veeam integration, teaming up with a consultant at Veeam. And I am very interested in seeing how customers use Domino Backup.
Your combination of snapshots + archive-style backup isn't what most customers do, and it needs proper configuration. But it sounds like you got that all up and running on your own :-)
@Daniel-Nashed Commenting as stand-in for @te-dara
Your guess is correct regarding the transaction log archiving :-)
Will send you an email with contact details and we can arrange an online get-together. Sending to info@nashcom.de
Hi, I am wondering what the status is about this request. It has been a while. It's difficult to follow up if I can't match names.
Can you please explain the status.
Thanks
Daniel
Hi Daniel, we have no problem having our aimed 187 days of backups covered in the Domino backup database ("Backup times:" field) for the respective databases.
We have a few databases with really long paths/names (65 characters in total) that do get "truncated" with the "Removed oldest backup inventory entry to avoid overflow" message, but these databases are not of interest when it comes to restores.
So all in all, it works as it should 💯
Regards //Anders
We can't really extend that limit. I am still unclear about what you want to get changed. If there is no real problem and it works for you, I would rather close this request. Otherwise we would need to create an official AHA request to get this tracked as a feature request.
Closing is OK for us! 👍
Thanks! Closing as discussed with @Smarter-repo
HCL Product Version Domino 12.0.2 FP3 HF20 running on Windows Server 2022
Describe the bug We are in the process of implementing backup via the built-in Domino Backup in combination with Veeam B&R in production. We have tested this out in a test environment for a while, and it is there we now face this issue. We need this issue resolved to be able to move forward to production.
So, we get a warning for a number of databases during backup: "Removed oldest backup inventory entry to avoid overflow"
On the affected database backups, viewing the status of the "Backup times:" field in the Database Inventory view in dominobackup.nsf indeed shows that older backup entries are removed.
This makes it impossible to restore these databases from an older backup date (if it's not in the list, it's not an alternative when doing a restore). This even though the actual backup data still resides in Veeam.
We might be wrong, but this seems to relate to a 64KB limit in a field (the "List" field in dominobackup.nsf).
For now, only a handful of database backups are affected, but random checks of the List field size in other documents indicate this will become a general issue before too long.
We have a fairly long retention time (187 days) but we still expect this to be covered in a backup solution!
Error Message "Removed oldest backup inventory entry to avoid overflow" during backup
To Reproduce Steps to reproduce the behavior: NA
Expected behavior There should be no error message.
Screenshots NA
Desktop (please complete the following information): Windows
Additional context This is an issue in the HCL-developed Domino backup service.
The dominobackup.ntf (used by load backup to create the dominobackup.nsf) seems to have limitations on field size, and this is the root issue here.