We are using DPM 2012 R2 and need to protect a clustered, deduplicated file server volume of several terabytes (over 6TB after 29% dedupe savings). However, we are running into issues with consistency checks, which take days to complete and therefore prevent the creation of recovery points. I found the following thread, which addresses the problem but doesn't really resolve it:
I have also read advice on various threads to break very large volumes up into smaller chunks to make them easier to protect, but part of the reason we keep them together is the large storage saving we achieve from deduplication.
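For context, the 29% figure above comes straight from the volume's deduplication statistics, read with the Windows Server dedup cmdlets (the drive letter below is a placeholder for our clustered volume):

# Report space savings on the deduplicated volume;
# E: is a placeholder for the clustered file server volume.
Get-DedupVolume -Volume "E:" |
    Select-Object Volume, UsedSpace, SavedSpace, SavingsRate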
Is there any further advice out there? For instance, is there anything in the pipeline to help with very long consistency checks, or to allow recovery points to be created while a consistency check is running?
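For reference, this is roughly how we kick off a consistency check by hand from the DPM Management Shell today, so that at least it runs over a weekend; the server, protection group, and datasource names are placeholders for our environment:

# Locate the protection group and the file server datasource
# ("DPM01" and "File Servers" are placeholders).
$pg = Get-DPMProtectionGroup -DPMServerName "DPM01" |
      Where-Object { $_.FriendlyName -eq "File Servers" }
$ds = Get-DPMDatasource -ProtectionGroup $pg |
      Where-Object { $_.Name -like "*FileVolume*" }

# Start the consistency check; on this multi-terabyte deduplicated
# volume the job runs for days, and no recovery points can be
# created for the datasource until it completes.
Start-DPMDatasourceConsistencyCheck -Datasource $ds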
Any advice would be welcome.
Thanks
Mark Salter