I have a 2-node file server cluster with 5 volumes being shared out. A few days ago I added a new volume to this cluster.
Within DPM 2016, when I modify my protection group to try to add the new volume and shared folder, it does not show up at all.
I am also unable to refresh the cluster object in the list.
I have rebooted the DPM server and the file server nodes, and moved the role between the nodes, and still cannot add this volume. The agent is also up to date across the board.
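One thing worth trying from the DPM Management Shell is forcing a fresh inquiry against the cluster, which sometimes picks up data sources that the GUI refresh misses. A minimal sketch, assuming placeholder server names ("DPMSERVER", "FSCLUSTER"):

```powershell
# Run in the DPM Management Shell on the DPM server.
# "DPMSERVER" and "FSCLUSTER" are placeholders for your environment.
$servers = Get-DPMProductionServer -DPMServerName "DPMSERVER" |
           Where-Object { $_.ServerName -like "FSCLUSTER*" }

# -Inquire forces DPM to re-enumerate the protectable data sources
# on the server/cluster instead of using its cached list.
foreach ($server in $servers) {
    Get-DPMDatasource -ProductionServer $server -Inquire
}
```

If the new volume shows up in the output here but not in the wizard, that points at a stale cache on the DPM side rather than an agent problem.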
We need to know which types of file consume the most space in the DPM protection volumes. The purpose is
to list all the protected files and sort them by size, then check which extensions exhaust the storage capacity and eventually exclude them from future backups.
So I am trying to find out how to enumerate the protected data via SQL queries and/or DPM PowerShell. So far I have not found any tips on how to do so.
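As a practical alternative while waiting for a DPMDB/SQL answer: since the replica mirrors the protected volume, the same per-extension totals can be computed by scanning either the source volume on the file server or a mounted replica path. A sketch (the root path is a placeholder):

```powershell
# Point $root at the protected volume on the file server (or at a
# mounted DPM replica path) to see which extensions use the most space.
$root = "E:\Shares"   # placeholder

Get-ChildItem -Path $root -Recurse -File -ErrorAction SilentlyContinue |
    Group-Object Extension |
    ForEach-Object {
        [pscustomobject]@{
            Extension = $_.Name
            Files     = $_.Count
            SizeGB    = [math]::Round(($_.Group |
                            Measure-Object Length -Sum).Sum / 1GB, 2)
        }
    } |
    Sort-Object SizeGB -Descending |
    Select-Object -First 20
```

The top of that list is the set of extensions worth considering for the protection group's exclusion list.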
Can someone explain why DPM thinks it has to transfer 6.4TB of “changed” data for a Hyper-V host recovery point? From what I can tell, only a tiny fraction of that data is needed/used to create the recovery point. For more information, read on.
I have two protected member servers, comparably configured, at two remote sites. The servers are running Windows Server 2012 R2 Standard (as Hyper-V hosts) and are hosting Windows Server 2012 R2 Standard guests with dedup enabled. The only real difference between these sites is the size of the .vhdx files:
Site A: has a protected member with an 11TB DATA.vhdx file
Site B: has a protected member that has a 16TB DATA.vhdx file
These systems are being protected with DPM 2016 UR4 (v5.0.342.0). The guest is protected by an off-site DPM server connected via a 100 Mbps WAN link, while the host is protected by an on-site DPM server utilizing a 1 Gbps connection.
DPM is supposed to query the NTFS change journal and transfer only what has changed since the last sync job. This appears to be true for the off-site DPM server protecting the guest over the WAN connection, with both sites transferring a small number of GB every night and finishing the recovery point within an hour.
However, when using the on-site DPM server to protect the volume that contains the .vhdx, it doesn't appear to work correctly (read: efficiently). As an example, Site A is doing data transfers of 650GB to 1.5TB every night, and the situation is even worse for Site B, which is transferring 6.2TB to 6.4TB every night, taking 17 hours to complete a single backup.
Moreover, it is clear that not all of the data being transferred is stored in the protection group's recovery points. I know this because I have 31 recovery points with only 1.8TB (in total) of data being used on the on-site DPM server.
Can anyone explain this behavior?
UPDATE #1:
Instead of protecting the volume that contains the .vhdx, I am now protecting via the Hyper-V method. There is no change in the behavior. My next move is to disable the dedup and defrag processes and monitor for a few days.
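For the dedup/defrag test, one way to sketch the suspension inside the guest is below. Dedup's GarbageCollection and Optimization jobs can rewrite large ranges of the volume, which shows up as "changed" data in the change journal even though the logical content barely moved; the scheduled defrag task has a similar effect on the host volume:

```powershell
# Run inside the guest (where dedup is enabled).
# List the dedup jobs and when they run:
Get-DedupSchedule

# Temporarily disable them for the test window:
Get-DedupSchedule | ForEach-Object {
    Set-DedupSchedule -Name $_.Name -Enabled $false
}

# The built-in defrag task lives under \Microsoft\Windows\Defrag;
# disable it for the duration of the test as well:
Disable-ScheduledTask -TaskPath "\Microsoft\Windows\Defrag\" `
                      -TaskName "ScheduledDefrag"
```

Re-enable both once the nightly transfer sizes have been compared with and without them running.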
Site A host transfers: [screenshot]
I have DPM 2012 R2 with a protection group for a file server to back up all files, and I get this error:
"
The replica of C:\ on server01 is inconsistent with the protected data source. All protection activities for data source will fail until the replica is synchronized with consistency check. (ID: 3106)
Number of files skipped for synchronization due to errors has exceeded the maximum allowed limit of 100 files on this data source (ID: 32538)"
I found something on the web about adding a registry key, "MaxFailedFiles", but I can't find the relevant article anymore.
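For reference, the value usually cited for error ID 32538 is a MaxFailedFiles DWORD created on the protected server (the file server, not the DPM server), followed by a restart of the DPM agent service. The key path below is the commonly referenced one; verify it against Microsoft's guidance for your DPM version before relying on it:

```powershell
# Run on the protected file server. Path and value name are the
# commonly cited ones for ID 32538 - verify for your DPM version.
$key = "HKLM:\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent"
New-ItemProperty -Path $key -Name "MaxFailedFiles" `
                 -PropertyType DWord -Value 1000 -Force

# Restart the DPM agent so the new limit takes effect:
Restart-Service DPMRA
```

Raising the limit only suppresses the 32538 failure; the skipped files (usually long-path ones) still need fixing separately.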
I am having a similar issue to the DPM 2012 R2 issue linked below. I am using DPM 2016 UR5, and there seems to be limited info about it specifically.
I understand the problem is with long file paths/names, and I can see a solution is to add a registry key to raise the error limit, but I have a few questions:
1. Are the registry settings the same for Server 2016/DPM 2016?
2. The folder we are having issues with is on the clustered file server, which is a role that can be hosted on any one of 16 Server 2016 nodes. Do all nodes need the registry fix?
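If the value does turn out to be needed on every node the role can fail over to, one way to push it everywhere at once is Invoke-Command. This is a sketch: the cluster name is a placeholder, and the key path is the commonly cited one for this error, so verify it first:

```powershell
# "FSCLUSTER" is a placeholder; this queries all 16 node names.
$nodes = Get-ClusterNode -Cluster "FSCLUSTER" |
         Select-Object -ExpandProperty Name

Invoke-Command -ComputerName $nodes -ScriptBlock {
    # Commonly cited location for the DPM agent's MaxFailedFiles limit:
    $key = "HKLM:\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent"
    New-ItemProperty -Path $key -Name "MaxFailedFiles" `
                     -PropertyType DWord -Value 1000 -Force
    # Restart the agent so the change is picked up:
    Restart-Service DPMRA
}
```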
Hi, yesterday the server began giving me a strange message:
Description: Backup policies configured on backup service are out of sync with this DPM server. (ID 33413)
More information
Recommended action:
Click the recommended action link below for DPM to attempt to refresh the policies.
Refresh online policies...
Resolution: To dismiss the alert, click below
When I try to add an online backup I get another error:
Update online backup policy for myserver\msdb failed:
Error 130053: Operation is blocked as a limit for certain resources has been reached.
Recommended action: Contact Microsoft Support.
The recommended action did not help.
Server configuration:
OS 2012
DPM2016 UR4
MARSAgent 2.0.9127.0
I am installing Skype for Business 2015 Enterprise Edition on a virtual machine. The user used for the installation is a domain admin and also has administrative rights at the database level (sysadmin). During the deployment of the topology I got the errors below:
An error occurred: "Microsoft.Rtc.Management.Deployment.DeploymentException" "Cannot determine where to install database files because Windows Management Instrumentation on the database server is unavailable
from your computer or user account. To continue, you can resolve this issue, or you can specify where you want to install the files."
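A quick way to reproduce the exact condition the installer is complaining about is to query WMI on the SQL back end from the machine and account running Topology Builder. The server name is a placeholder:

```powershell
# Run as the deployment account on the machine where Topology Builder runs.
# "SQLSERVER01" is a placeholder for the back-end SQL server.
# If this fails (RPC server unavailable / access denied), fix the WMI/DCOM
# firewall rules or DCOM permissions on the SQL server, then rerun the wizard.
Get-WmiObject -Class Win32_OperatingSystem -ComputerName "SQLSERVER01"
```

If the query succeeds but the wizard still fails, the wizard's fallback of specifying the database file paths manually remains available.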
Quite new to DPM, and I have an issue which I believe is due to the system volume drive, where the scratch folder is located, not being big enough.
DPM server has only 50 GB approx free space out of 136 GB on the C: Drive (inherited this way)
The protected source is 1TB approx (File server in own Protection Group)
The disk backup is working fine, but the backup to an Azure recovery point fails, or just sits at zero transfer for days.
I also get the following error, which may or may not be related:
"the following file that is essential for azure backup is missing: no_param. (id 32550)"
The DPM server itself has plenty of space available for backups, so I was thinking: would there be a way to utilize this space for the scratch folder by creating a VHD and giving it a letter like D:? EDIT: or map a drive to another server?
What I have tried so far is to cancel the Azure online backup job and detach the VHD in Computer Management so it will let me delete the partial VHDs in the scratch folder, then delete old logs to reclaim space, but that doesn't seem to sort the issue.
Any advice appreciated
PS: I have updated the agent on the file server, rebooted, restarted all Azure/DPM services, etc.
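Rather than a mapped drive (the agent needs a local path), relocating the scratch/cache folder to a bigger local volume is the documented route for the MARS agent. A sketch based on the documented cache-relocation steps; "D:\Scratch" is a placeholder, and the registry paths should be verified against current Microsoft guidance before running:

```powershell
# Stop the Azure Recovery Services (MARS) agent service first:
Stop-Service obengine

# Move the existing cache folder to the larger volume (paths are placeholders):
Copy-Item "C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch" `
          "D:\Scratch" -Recurse

# Point both documented ScratchLocation values at the new folder:
$name = "ScratchLocation"
Set-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows Azure Backup\Config" `
                 -Name $name -Value "D:\Scratch"
Set-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows Azure Backup\Config\CloudBackupProvider" `
                 -Name $name -Value "D:\Scratch"

Start-Service obengine
```

After the service restarts, the next online backup should stage its transfer VHDs on the new volume, which also tends to clear the 32550 "missing file" symptom if it was caused by the cache running out of space.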
We use DPM 2016 on Windows 2016 Server with Modern Backup Storage (MBS). Our backup disk is formatted as ReFS, is about 37TB and we have about 17TB of free space left.
DPM correctly sees this disk and allocates replica space for all our protection groups automatically. When allocated space runs out, DPM automatically allocates more space to the protection group. We recently ran into a problem where the allocated space for our Archive protection group wasn't automatically expanding anymore, failing our backup synchronizations. The size of the one disk in the protection group is about 14.5TB now.
After consulting https://blogs.technet.microsoft.com/dpm/2017/04/14/how-to-increase-dpm-2016-replica-when-using-modern-backup-storage-mbs we tried to manually increase the allocated space, but this resulted in unclear StorageManager ID 40001 errors.
After looking in Disk Manager at what DPM tries to do, it looks like it assigns NTFS-formatted virtual disks to a disk in a protection group, which are then stored on the MBS disk. Only it looks like DPM automatically creates a 4KB-cluster NTFS virtual disk, which only goes up to 16TB...
Can someone confirm that this is true? Is this why automatic growth fails? And what if I want to extend my file disk data beyond 16TB? Can I tell DPM not to use 4KB-cluster NTFS-formatted disks?
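The arithmetic behind the suspected ceiling: NTFS addresses at most 2^32 clusters, so with 4KB clusters the maximum volume size is 2^32 × 4KB ≈ 16TB. You can confirm the cluster size DPM actually used by pointing fsutil at the replica volume's mount point (path is a placeholder):

```powershell
# Run on the DPM server; the mount-point path is a placeholder for
# wherever the protection group's replica volume is mounted.
# Look for "Bytes Per Cluster : 4096" in the output - combined with
# NTFS's 2^32-cluster limit, that gives the ~16TB ceiling.
fsutil fsinfo ntfsinfo C:\DPMVolumes\ReplicaVolume\
```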
This has been happening for several months now with a specific subfolder on a file/QuickBooks server. The error ID is 33415. When I click the link to open the list of files that failed, they all list "DATA FAILURE" and then "0x80070002". I'm not sure if that means anything.
I already checked, and the UNC path is not longer than 256 characters.
Some of the files are QBW files, some are Word documents, some are .cfx, .gen. All of the ones with this issue are under the same parent folder, but within it there is a lot of variation: some are in the root, some are a few folders deep, etc.
The backup-to-disk works just fine.
The documentation on DPM is horrendously lacking, so I'm hoping someone out there may be able to point me in the right direction. I did find one result from a couple of years ago, but the "solution" provided was to ensure the latest version of the MARS agent was installed. I've already done that.
I have set up DPM to protect several folders (documents, images, videos, links, favorites (all common user profile folders)) and system state, and tried to make the agent pull all the information from the client computers, but all I get is the system state; if I check the mount repository, I do not seem to find any other information. This is happening with all the computers in the protection group.
I have forced the synchronization with consistency check several times, but it takes only about 5 minutes in total and then the status says "OK (green check)".
Is there a way I can force the agent or the server to pull all the information I need to check?
If this information is useful: some of these users (not all) use OneDrive 365 and sync their documents, but the backup must be taken anyway.
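From the DPM Management Shell, a consistency check can also be kicked off per data source explicitly, which makes it easier to see which data sources DPM actually thinks it is protecting. Server and protection-group names are placeholders:

```powershell
# Run in the DPM Management Shell; "DPMSERVER" and "Clients" are placeholders.
$pg = Get-DPMProtectionGroup -DPMServerName "DPMSERVER" |
      Where-Object Name -eq "Clients"

# List the data sources DPM has under this group - if only System State
# appears here, the folder data sources were never added to protection:
$ds = Get-DPMDatasource -ProtectionGroup $pg
$ds

# Force a heavyweight check that re-compares each replica with its source:
foreach ($d in $ds) {
    Start-DPMDatasourceConsistencyCheck -Datasource $d -HeavyWeight
}
```

If the folder data sources are missing from that list entirely, the fix is in the protection group membership rather than in synchronization.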
On Premise DPM 2016 RU5 backing up to disk and long term online into Azure.
Azure Backup Agent version: 2.0.9145.0
Online Status: Active
Online recovery points are taking place once a week with no errors reported. I can see online recovery points available in the Recovery tab in the DPM console.
Error description: I have tried to recover random files from our online recovery points; however, each time I try recovering a file from Azure I receive error ID 220010 with no description.
I have tried to search around online for this error ID but nothing seems to correspond to DPM.
The concern is that our long-term backups are unavailable or even corrupt.
I've tried recovering other files from different dates in the recovery calendar from azure and each one errors with the same ID 220010.
The files I have tried to recover are .jpg and .CR2.
I have also tried recovering to the original location using that location's permissions, and also to an alternate location.
DPM Server 2016 is running in a Windows 2016 guest VM on a Windows 2016 Hyper-V host.
I am creating a new protection group consisting of partial volumes of a Windows 2008 R2 physical server. It gets stuck on "Calculating Data Size" on the "Review Disk Storage Allocation" screen of the Create New Protection Group wizard.
It does this every time I attempt it for that server. It never finishes. The total data size on disk is 1.2TB of a 1.4TB disk.
We are getting error 0x80070780 "The file cannot be accessed by the system" when protecting a volume. I've checked the event logs and DPM logs on the DPM server and the file server, but I can't find a hint as to which file causes the error.
We are using DPM 2010 with the latest hotfixes on Server 2008 R2 with a Dell ML6000 tape library. The file server is a Windows Storage Server 2008 R2 with attached iSCSI volumes from a Dell EqualLogic array. The Groveler service is active on all volumes.
The protection group seems to be OK, since it backs up 14 volumes on the file server. All backups are successful except volume T:\. The volume is about 2TB, and all volumes together are about 40TB on this server. The backup runs for about 4-5 hours and then fails with the described error.
I've also reset ownership and permissions for domain admins and local admin with icacls for this volume, without success.
Can you give me a tip on how to solve the issue, or how to find the file causing the error?
Today a backup on one of our clients' backup servers started failing. The reason is, as mentioned above, (ID 2033 Details: The media is write protected (0x80070013)), and it is failing for \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy13\Hyper-V\Virtual Hard Disks\Hyper-V Replica\Virtual hard disks\[VHDX-NUMBER]\DC.VHDX. We did not change any write permissions.
Currently I have a protection group for our file server which just protects the entire drive (E:\) where our file shares all reside. I want to back up to Azure just some of the folders within this E:\ drive. Does that require that I split up the subfolders into different protection groups so I can target the specific folders I need for online protection?
I have a problem. I have almost 50,000 files (.txt, .pdf, .csv, etc.). All the files have specific names like Customer.txt, Car.txt, Customer.pdf. And I also have 50,000 folders named after the file names (Customer, Car, House, etc.). Now how can I copy those files to the matching folder, e.g. Customer.txt and Customer.pdf to the Customer folder, and Car.txt and Car.pdf to the Car folder, and so on? Is there a PowerShell command to copy all the files to the folder matching the file name? If anyone can help, it would make things a bit easier.
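Something along these lines should do it. It assumes the folder name matches the file's base name exactly; the two root paths are placeholders for your layout:

```powershell
# Placeholders: adjust the source and destination roots for your layout.
$sourceDir = "C:\Files"     # the ~50,000 files
$targetDir = "C:\Folders"   # the ~50,000 folders (Customer, Car, House, ...)

Get-ChildItem -Path $sourceDir -File | ForEach-Object {
    # BaseName is the file name without extension: "Customer.txt" -> "Customer"
    $dest = Join-Path $targetDir $_.BaseName

    # Create the folder if it doesn't already exist, then copy the file in.
    if (-not (Test-Path $dest)) {
        New-Item -ItemType Directory -Path $dest | Out-Null
    }
    Copy-Item -Path $_.FullName -Destination $dest
}
```

Swap Copy-Item for Move-Item if the files should not remain in the source folder afterwards, and do a dry run on a small subset first.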
We have volume protection set up, protecting one of the drives on a server, set to create a recovery point every hour, with a retention period of 2 days:
The problem is that DPM currently has 246 RPs instead of the expected 48 (2x24):
Why is this happening, and how is it even possible? When you configure the PG, DPM will not let you configure a schedule which would lead to more than 64 RPs:
Normally I'd be glad I can have more than 64 RPs, but not when it's not desired.
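To see what DPM is actually counting, the shell view per data source is sometimes clearer than the GUI number. Server, group, and volume names below are placeholders:

```powershell
# Run in the DPM Management Shell; names are placeholders.
$pg = Get-DPMProtectionGroup -DPMServerName "DPMSERVER" |
      Where-Object Name -eq "FileServer-PG"
$ds = Get-DPMDatasource -ProtectionGroup $pg |
      Where-Object Name -eq "E:\"

# List every recovery point with its point-in-time and location,
# which shows whether the extras are stale points awaiting pruning:
Get-DPMRecoveryPoint -Datasource $ds |
    Sort-Object RepresentedPointInTime |
    Format-Table RepresentedPointInTime, DataLocation -AutoSize

# Total count for comparison with the GUI's 246:
(Get-DPMRecoveryPoint -Datasource $ds | Measure-Object).Count
```

If the timestamps show points well past the 2-day retention window, the problem is pruning not running, rather than the schedule creating too many points.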