Channel: Data Protection Manager - File Protection forum

DPM 2012 R2 Can't drill down into one folder to restore a file


Hi

Running DPM 2012 R2 UR13.

The protection group is backing up a single local folder D:\Share on a 2012 R2 Server.  The file structure looks like the following

Share

    folder1

    folder2

    folder3

    folder4

When trying to restore from within the console, I can't see the contents of D:\Share\folder3. All the other folders I can browse fine.

No error messages. 

I saw someone else had a similar issue and theirs was caused by file corruption. I have run chkdsk and the volume did not have any errors.

How do I see the files so I can restore one from within folder3?
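If the console won't expand folder3, it may be worth checking whether the DPM Management Shell can enumerate it; a minimal sketch (the server, protection group, and datasource names below are placeholders):

# Minimal sketch - replace the server, group, and datasource names with real ones.
$pg = Get-DPMProtectionGroup -DPMServerName "DPMSERVER" | Where-Object Name -eq "File PG"
$ds = Get-DPMDatasource -ProtectionGroup $pg | Where-Object Name -eq "D:\"
$rp = (Get-DPMRecoveryPoint -Datasource $ds)[-1]        # latest recovery point

# Drill into the recovery point one level at a time; if folder3 is missing or errors
# here as well, the problem is in the recovery point itself rather than the console.
Get-DPMRecoverableItem -RecoverableItem $rp -BrowseType Child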


Unique schedules for different directories or different protection groups for one disk

Hi,

There is a file server with disk D:. It has different directories: D:\Folder01, D:\Folder02, D:\Folder03. The directories differ in size and importance, and I need a different schedule for each of them.
Is it really impossible to create a separate protection group in DPM, with its own schedule, for each directory on one partition? I cannot create another (custom) protection group for D:\Folder01, because D:\Folder01 and D:\Folder02 are already in one of the protection groups:
«Other items on the datasource are a member of another protection group»

Nor can I set a unique schedule for each directory within one protection group.
In general, the directories are very large and I do not need to back up ALL of them every day, only Folder01; Folder02 and Folder03 I want to back up every week.
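For what it's worth, schedules in DPM hang off the protection group rather than individual directories, which is why the console pushes back here. A rough sketch of inspecting and changing a group's short-term schedule from the DPM Management Shell (the group name, day, and time are made-up examples):

$pg    = Get-DPMProtectionGroup -DPMServerName "DPMSERVER" | Where-Object Name -eq "WeeklyFolders"
$mpg   = Get-DPMModifiableProtectionGroup $pg
$sched = Get-DPMPolicySchedule -ProtectionGroup $mpg -ShortTerm   # schedules apply to the whole group
$sched | Format-List *                                            # inspect what is currently set

# Hypothetical change: run the first short-term schedule weekly, Saturdays at 02:00.
Set-DPMPolicySchedule -ProtectionGroup $mpg -Schedule $sched[0] -DaysOfWeek Sa -TimesOfDay 02:00
Set-DPMProtectionGroup -ProtectionGroup $mpg                      # commit the change

That only shows where the schedule lives; it does not work around the one-group-per-datasource restriction quoted above.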

Tell me maybe I'm wrong :)

Thanks.

Replica Sync fails


Hello,

    Every time a replica sync runs on a volume backup on a file server I get the error:

DPM could not log files that were skipped during backup to \\?\Volume{xxxxxxxxxxx}\xxxxxx\FailedFiles.dat (ID 32577 Details: ) 

I've tried the suggested fix here: https://social.technet.microsoft.com/Forums/en-US/f7de60cd-841f-4d22-862e-aaf3723717c7/dpm-could-not-log-files-that-were-skipped-during-backup-to-volumexxxxxfailedfilesdat-id?forum=dataprotectionmanager but when trying to mount the replica volume I get the error "The parameter is incorrect."
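For reference, mounting a replica volume into an empty folder is usually done with mountvol against the volume GUID path; a sketch (the GUID below is a placeholder, taken from the list mountvol prints):

mountvol                                            # lists every \\?\Volume{...}\ name on the DPM server
New-Item -ItemType Directory -Path C:\DPMMount -Force | Out-Null
mountvol C:\DPMMount "\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\"
# ...inspect the folder that should contain FailedFiles.dat, then unmount:
mountvol C:\DPMMount /D

If mountvol itself also answers "The parameter is incorrect", that often just means the volume name was mistyped or is missing the trailing backslash mountvol expects.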

Any ideas how to get the replica sync working again?

Thanks,

J


Data Protection Manager 2012 - Inconsistent when backing up Deduplicated File Server


Protected Server

  • Server 2012 File Server with Deduplication running on Data drive

DPM Server

  • Server 2012
  • Data Protection Manager 2012 Service Pack 1

We recently upgraded our DPM server from DPM 2010 to DPM 2012, primarily because it is supposed to support Data Deduplication. Our primary file server, which holds our home directories etc., is limited on space and was quickly running low, so just after we got DPM 2012 in place we optimized the drive on the file server, which compressed the data by about 50%. Unfortunately, shortly after enabling deduplication, the protected shares on the deduplicated volume started getting a "Replica is inconsistent" error.

I continually get "Replica is inconsistent" for the server that has deduplication running on it. All of the other protected servers are being protected as they should be. I have run a consistency check multiple times, probably about 10 times, and it keeps going back to "Replica is inconsistent". The replica volume shows that it is using 3.5 TB, while the actual protected volume is 4 TB in size and has about 2.5 TB of data on it with deduplication enabled.

These are the details of the error:

Affected area:   G:\

Occurred since: 1/12/2015 4:55:14 PM

Description:        The replica of Volume G:\ on E****.net is inconsistent with the protected data source. All protection activities for data source will fail until the replica is synchronized with consistency check. You can recover data from existing recovery points, but new recovery points cannot be created until the replica is consistent.

For SharePoint farm, recovery points will continue getting created with the databases that are consistent. To backup inconsistent databases, run a consistency check on the farm. (ID 3106)

               More information

Recommended action: 

               Synchronize with consistency check.

               Run a synchronization job with consistency check...

Resolution:         To dismiss the alert, click below

               Inactivate

Steps taken to resolve: I've spent some time searching and haven't found any solutions to what I am seeing. I have the Data Deduplication role installed on the DPM server, which has been the solution for many people seeing similar issues. I have also removed that role and then added it back. I have also removed the protected server and added it back to the protection group. It synchronizes and says consistent, then after a few hours it goes back to inconsistent. When I go to Recovery it shows that I have recovery points and it appears that I can restore, but because the data is inconsistent I don't feel I can trust the data in the recovery points. Both the protected server's and the DPM server's updates are managed via a WSUS server on our network.
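For anyone comparing notes, the checks described above look roughly like this from PowerShell (server, group, and volume names are placeholders; on DPM 2012 SP1 the cmdlets lack the DPM prefix, e.g. Start-DatasourceConsistencyCheck):

# On the DPM server: confirm the deduplication feature is installed (needed to read optimized data).
Get-WindowsFeature FS-Data-Deduplication

# On the protected file server: check the dedup state of the volume and any running jobs,
# so optimization/garbage-collection jobs can be kept clear of the backup window.
Get-DedupStatus -Volume G:
Get-DedupJob

# From the DPM Management Shell: kick off another consistency check for the G:\ datasource.
$pg = Get-DPMProtectionGroup -DPMServerName "DPMSERVER" | Where-Object Name -eq "FileServers"
$ds = Get-DPMDatasource -ProtectionGroup $pg | Where-Object Name -eq "G:\"
Start-DPMDatasourceConsistencyCheck -Datasource $ds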

You may suggest I just un-optimize the drive on the protected server; however, once the drive has been optimized it takes considerably more space to un-optimize it (anyone know why that is?), and in any case the drive isn't large enough to support un-optimization.

If anyone has any suggestions I would appreciate any help. Thanks in advance.

Secondary DPM unexpectedly synchronizing too much data - file protection


Hello, I'm running into an issue and I'm afraid I won't have the possibility to open a case with Microsoft, so I hope an expert from Microsoft will have an explanation for this particular case.

Scenario:

Two DPM servers, version 2012 R2 (4.2.1603.0). One is the primary DPM server protecting our deduped file server, and one is the secondary DPM server protecting the first one.

The primary DPM does an optimized backup of the file server's entire volume; we can see it represents 13 TB of used data on the primary DPM replica volume.

The secondary DPM does its backup non-optimized, which represents 31.5 TB; indeed, the dedup rate on our file server is more than 50%.

Everything worked well for years, until I reached the NTFS limitation on the secondary DPM: the file server, the primary DPM, and the secondary DPM had all been formatted with an NTFS 8K cluster size. That is enough thanks to dedup, but no longer for the secondary DPM: its replica volume cannot grow over 32 TB, and now it needs more.

So, on the secondary DPM, I deleted the protection group and replica and restarted from scratch. I created a new protection group with a replica size of more than 32 TB, so Windows automatically formatted it with an NTFS cluster size of 16K.

Since then I'm running into this issue:

The initial replica creation job on the secondary DPM succeeds, but the very next synchronization job synchronizes too much data and fails with a "disk full" error. When I look at the properties of the replica on the secondary DPM, I can see it is indeed full: the synchronization job tried to sync more than 40 TB of data, whereas my file server represents about 32 TB of non-optimized data. So it fails because it reached my replica volume size.

I've run a consistency check, which succeeded and brought things back to a normal state, but the issue occurs again at the next sync jobs.

So, the question: in a file protection scenario, is it supported for the primary DPM replica and the secondary DPM replica to be formatted with different NTFS cluster sizes? I suspect the DPM sync filter does not support this scenario correctly.
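For reference, the cluster size of each volume involved can be confirmed with fsutil; a sketch (replica volumes would need to be mounted to a folder first, or addressed by their \\?\Volume{GUID} path if that form is accepted on your build):

# On the protected file server: report NTFS geometry; "Bytes Per Cluster" is the value to compare.
fsutil fsinfo ntfsinfo D:

# On the primary and secondary DPM servers: run the same against each replica volume,
# e.g. via its mount point path or volume GUID (placeholder below).
fsutil fsinfo ntfsinfo "\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}"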

As a workaround, I'll try the ForceFixup registry setting to see if it helps in my case.

Larry

 


DPM 2016: back up only the Hyper-V container (covering all VHDs), or install the agent on the VM and back up at the file level?


I noticed (or maybe never noticed with previous versions) that if you back up the Hyper-V container for a given VM, it backs up the VHDs. So if you have C and D, for instance, both are included; this much I realized. But I'm also noticing that you can drill inside the VM's VHDs for individual files.

In the past I've always installed the agent on the VM itself and configured DPM to back up those files.

I'm wondering what most people do. Do they just back up the container and rely on drilling in to restore individual files? What if the VHD becomes corrupt? I would think having the VM's individual files via its own agent may have value, or maybe not. I keep 30 days of recovery points, so I suppose you could just try to go back a few extra days before the corruption (though I've never run into corruption per se).

Any thoughts? How are most people doing this?


Tech, the Universe, Everything: http://tech-stew.com Just Plane Crazy http://flight-stew.com

DPM large (6.4TB) sync transfers, why?


Can someone explain why DPM thinks it has to transfer 6.4TB of “changed” data for a Hyper-v host recovery point? From what I can tell, only a tiny fraction of that data is needed/used to create the recovery point. For more information, read on.

I have two protected member servers, comparably configured, at two remote sites. The servers are running Windows Server 2012 R2 Standard (as Hyper-V host) and are hosting Windows Server 2012 R2 Standard (guest w/dedup enabled). The only real difference between these sites is the size of the .vhdx files:

Site A: has a protected member that has a 11TB DATA.vhdx file

Site B: has a protected member that has a 16TB DATA.vhdx file

These systems are being protected with DPM 2016 UR4 (v5.0.342.0). The guest is protected by an off-site DPM server connected via a 100 Mbps WAN link, while the host is protected by an on-site DPM server using a 1 Gbps connection.

DPM is supposed to query the NTFS change journal and transfer only what has changed since the last sync job. This appears to be true for the off-site DPM server protecting the guest over the WAN connection: both sites transfer a small number of GB every night and finish the recovery point within an hour.
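For reference, the change journal on the volume holding the .vhdx files can be inspected directly; a quick sketch (the drive letter is a placeholder):

# Shows the journal's Maximum Size and Allocation Delta for the volume that holds DATA.vhdx.
# A journal too small for the nightly change rate will wrap, which is worth ruling out when
# transfers are far larger than the real change set.
fsutil usn queryjournal E: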

However, when using the on-site DPM server to protect the volume that contains the .vhdx, it doesn't appear to work correctly (read: efficiently). As an example, Site A is transferring 650 GB to 1.5 TB every night, and the situation is even worse for Site B, which is transferring 6.2 TB to 6.4 TB every night and taking 17 hours to complete a single backup.

Moreover, it is clear that not all of the data being transferred is stored in the protection group's recovery points. I know this because I have 31 recovery points with only 1.8 TB (in total) of data being used by the on-site DPM server.

Can anyone explain this behavior?

UPDATE #1:

Instead of protecting the volume that contains the .vhdx, I am now protecting via the Hyper-V method. There is no change in the behavior. My next move is to disable the dedup and defrag processes and monitor for a few days.

Site A Host transfers:

Site B Host transfers:


Error 0x80070780: couldn't access file


Hello,

we are getting error 0x80070780 "couldn't access file" when protecting a volume. I've checked the event logs and DPM logs on the DPM server and the file server, but I can't find a hint as to which file causes the error.

We are using DPM 2010 with the latest hotfixes on Server 2008 R2 with a Dell ML6000 tape library. The file server is a Windows Storage Server 2008 R2 with attached iSCSI volumes from a Dell EqualLogic array. The Groveler service is active on all volumes.

The protection group seems to be OK, since it has 14 volumes to back up on the file server. All backups are successful except for volume T:\. That volume is about 2 TB, and all volumes together are about 40 TB on this server. The backup runs for about 4-5 hours and then fails with the described error.

I've also reset ownership and permissions for Domain Admins and the local admin with icacls for this volume, without success.

Can you give me a tip on how to solve the issue, or how to find the file causing the error?
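One way to hunt for the offending file (a sketch; the path assumes the default agent install location) is to search the DPM agent logs on the file server for the error code around the time of the failure:

# On the file server: search the DPM RA logs for 0x80070780 and show the most recent hits,
# which may include the path the agent was working on when it failed.
Select-String -Path "$env:ProgramFiles\Microsoft Data Protection Manager\DPM\Temp\*.errlog" -Pattern '80070780' | Select-Object -Last 20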

 

greetings stefan


DPM 2016 U4 - 1.3TB File Server will not complete Consistency Check.


DPM 2016 U4 - 1.3TB File Server will not complete Consistency Check.

 

The file server has a couple of volumes that are backing up fine, apart from the D:\ drive. All was working fine until we copied around 250 GB of data and it ran an automatic consistency check. The file server is up to date and now has the 2018-04 update installed to see if this would fix the issue.

 

The consistency check starts and data is transferred, but after a while the items-scanned count shown in the DPM console appears to be stuck at the same number. When you log on to the server there is no disk activity from the DPMRA.exe image. Looking at the process and its associated handles, it is always in the same user's home folder, and the files it has open haven't been accessed or altered since 2017; these are files that have been on the system for over 6 months.

 

When viewing the DPM log file, there are a lot of entries like the following, roughly around the time it appears to get stuck:

 

0498  0C40  04/16  20:47:59.080  18  fsutils.cpp(3225)  A7CB7744-4C43-4826-B0FB-84FAFF40340D  WARNING  Failed: Hr: = [0x80070057] : GetFileHandleById failed to open file, frn:0x0001000000048244

0498  0C40  04/16  20:47:59.080  18  fsutils.cpp(3225)  A7CB7744-4C43-4826-B0FB-84FAFF40340D  WARNING  Failed: Hr: = [0x80070057] : GetFileHandleById failed to open file, frn:0x0001000000048244

 

These sit at the bottom of the log file for a while, and then about 8 hours later the DPM console shows the following error:

 

Type:    Consistency check

Status: Failed

Description:       Task is cancelled because some other task in agent is not responding on SERVERNAME machine. (ID 32557 Details: Internal error code: 0x809909C1)

               More information

End time:            17/04/2018 05:07:12

Start time:          16/04/2018 17:18:25

Time elapsed:   11:48:46

Data transferred:            3,160.76 MB

Cluster node     -

Source details:  D:\

Items scanned: 396593

Items fixed:       1645

 

A lot of entries then appear in the log after the DPM console gets the error above, ending with the following:

 

1FD8  0C18  04/17  04:07:12.014  29  radefaultsubtask.cpp(196)  [0000027CB613A230]  BE0A6B8E-772F-4D6F-9C1A-2465538E84AF  WARNING  Failed: Hr: = [0x809909b0] : Encountered Failure: : lVal : (HRESULT)0x809909B0

1FD8  0C18  04/17  04:07:12.014  05  defaultsubtask.cpp(944)  [0000027CB613A230]  BE0A6B8E-772F-4D6F-9C1A-2465538E84AF  WARNING  Failed: Hr: = [0x809909b0] : Encountered Failure: : lVal : CommandReceivedSpecific(pCommand, pOvl)

1FD8  0C18  04/17  04:07:12.014  05  defaultsubtask.cpp(1149)  [0000027CB613A230]  BE0A6B8E-772F-4D6F-9C1A-2465538E84AF  WARNING  Failed: Hr: = [0x809909b0] : Encountered Failure: : lVal : CommandReceived(pAgentOvl)

1FD8  0BA4  04/17  04:12:11.999  03  runtime.cpp(1426)  [0000027CB4A96690]  NORMAL  CDLSRuntime::ProcessIdleTimeout

1FD8  0BA4  04/17  04:12:11.999  03  runtime.cpp(602)  [0000027CB4A96690]  NORMAL  CDLSRuntime::Uninitialize, bForce: 0

1FD8  0BA4  04/17  04:12:11.999  05  genericagent.cpp(273)  [0000027CB4A4AA20]  NORMAL  Agent Can Shutdown if there is only default wokitem active[1]

1FD8  0BA4  04/17  04:12:11.999  29  dpmra.cpp(356)  [0000027CB4A4AA20]  NORMAL  CDPMRA::Shutting down dpmra, force-shutdown :yes

1FD8  0BA4  04/17  04:12:11.999  03  cworkitem.cpp(328)  [0000027CB4B2E6A0]  NORMAL  Timing out WI [0000027CB4B2E6A0], WI GUID = {B71B4544-7067-4A30-B5FB-BA320B10D82A}, ..last DM activity happened 229748828msec back, WI Idle Timeout = 390000msec

1FD8  0BA4  04/17  04:12:11.999  22  genericthreadpool.cpp(684)  [0000027CB4AEBAD0]  NORMAL  CGenericThreadPool: Waiting for threads to exit

1FD8  0BA4  04/17  04:12:14.023  22  genericthreadpool.cpp(684)  [0000027CB4A96690]  NORMAL  CGenericThreadPool: Waiting for threads to exit

1FD8  1018  04/17  04:12:16.047  03  timer.cpp(513)  [0000027CB61170C8]  ACTIVITY  Shutting down timer thread.

1FD8  0BA4  04/17  04:12:16.047  03  service.cpp(81)  ACTIVITY  CService::StopThisService

1FD8  0BA4  04/17  04:12:16.047  03  service.cpp(281)  [000000D39927FC20]  ACTIVITY  CService::StopService()

1FD8  16FC  04/17  04:12:16.047  03  service.cpp(298)  [000000D39927FC20]  ACTIVITY  CService::AnnounceServiceStatus
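One thing that may help with the GetFileHandleById warnings earlier in the log: the frn value is an NTFS file reference number, and it can be translated back to a path on the protected volume, which might identify what the scan keeps getting stuck on. A sketch using the frn from the log above:

# Resolve the file reference number from the WARNING entries to an actual path on D:.
# If the ID no longer exists (for example a stale change-journal record), this returns an error instead.
fsutil file queryfilenamebyid D: 0x0001000000048244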


Daniel Wingfield

DPM 2016 - Large Modern Backup Storage with ReFS, but default Virtual Disk size is only max 16TB?


Hello,

We use DPM 2016 on Windows Server 2016 with Modern Backup Storage (MBS). Our backup disk is formatted as ReFS, is about 37 TB, and has about 17 TB of free space left.

DPM correctly sees this disk and allocates replica space for all our protection groups automatically. When the allocated space runs out, DPM automatically allocates more space to the protection group. We recently ran into a problem where the allocated space for our Archive protection group wasn't automatically expanding anymore, failing our backup synchronizations. The size of the one disk in that protection group is about 14.5 TB now.

After consulting https://blogs.technet.microsoft.com/dpm/2017/04/14/how-to-increase-dpm-2016-replica-when-using-modern-backup-storage-mbs we tried to manually increase the allocated space, but this resulted in unclear StorageManager ID: 40001 errors.

After looking in Disk Manager at what DPM does, it appears to assign an NTFS-formatted virtual disk to each disk in a protection group, which is then stored on the MBS disk. However, it looks like DPM automatically creates a 4 KB-cluster NTFS virtual disk, which only goes up to 16 TB...
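For context, NTFS can address at most 2^32 clusters per volume (one fewer in practice), so the maximum volume size scales directly with the cluster size; a quick check of that arithmetic:

# Approximate NTFS maximum volume size = cluster size x 2^32 clusters.
foreach ($clusterKB in 4, 8, 16, 32, 64) {
    $maxTB = $clusterKB * 1KB * [math]::Pow(2, 32) / 1TB
    '{0,2} KB clusters -> ~{1:N0} TB maximum volume size' -f $clusterKB, $maxTB
}
# 4 KB clusters top out around 16 TB, which matches the limit described above; 64 KB clusters allow roughly 256 TB.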

Can someone confirm whether this is true? Is this why the automatic grow fails? And what if I want to extend my protected file data beyond 16 TB? Can I tell DPM not to use 4 KB-cluster NTFS-formatted disks?

Thanks in advance!

Kind regards,

Sergius

Does DPM flag unchanged files during backup?


Hi!

I have been working with DPM for some time, but I was wondering how it really works when backing up files in Windows.

The question:

Does DPM put any flag on files, for example on unchanged files (cold files)?

Or does DPM only check what has changed and back up only those files?

We have a project to implement a StorSimple solution, which is why I would be very interested to know.
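One way to check this empirically, rather than a statement about DPM internals: snapshot the attributes and timestamps of a sample of protected files before and after a synchronization and compare them. A sketch (the share path is a placeholder):

# Before the DPM sync: capture attributes and timestamps for the protected folder.
$before = Get-ChildItem D:\Share -Recurse -File | Select-Object FullName, Attributes, LastWriteTimeUtc, LastAccessTimeUtc

# ...let a synchronization/recovery point run, then capture again and diff.
$after = Get-ChildItem D:\Share -Recurse -File | Select-Object FullName, Attributes, LastWriteTimeUtc, LastAccessTimeUtc
Compare-Object $before $after -Property FullName, Attributes, LastWriteTimeUtc
# An empty result would indicate the backup did not set archive bits or otherwise stamp the files.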

Awaiting your response!

Best regards,
Leon


Replicas on ISCSI SAN, not local drives


Using DPM 2012 R2, we have 18 terabytes of iSCSI SAN storage and 18 terabytes of local SATA storage. For some reason DPM put most of the replicas on the SAN storage instead of the local drives; the SAN storage has 19% free space while the local storage has 86% free space. Why would DPM do that? Is there any way to get DPM to use the local storage first?

Is there a way to move replicas from the SAN to the local storage? I looked at using MigrateDatasourceDataFromDPM, but the documentation says that once you move data from one disk to another it stops using the old disk entirely, which is not what I want.
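For reference, the documented pattern for that script is roughly the following (the server name and disk indexes are examples; as noted above, it migrates replicas off the source disk rather than load-balancing, and the source disk has to stay attached until the recovery points already on it expire):

# Run from the DPM Management Shell, in the DPM install's ...\bin folder where the script lives.
$disks = Get-DPMDisk -DPMServerName "DPMSERVER"
$disks                                                   # note which index is the SAN disk and which is local

# Example: move everything from disk index 0 (SAN) to disk index 1 (local SATA).
.\MigrateDatasourceDataFromDPM.ps1 -DPMServerName "DPMSERVER" -Source $disks[0] -Destination $disks[1]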

SCDPM 2012 - Error Reporting


Hi,

I have System Center Data Protection Manager 2012 R2 on Windows Server 2012 R2. Everything worked correctly until yesterday, when I opened the DPM reports and got the following error:

I have verified the following:

  1. SQL Server Reporting Services is running on my SQL Server.
  2. In the Reporting Services console, I verified that my DPM server connects to Reporting Services.
  3. I restarted the DPM services and SQL services, but that did not work.

Any suggestions on how to solve this?
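A couple of quick checks from PowerShell may narrow it down (the server name and URL are placeholders for wherever DPM's Reporting Services instance lives):

# Is the Reporting Services service running on the SQL server that hosts the DPM reports?
Get-Service -ComputerName "SQLSERVER" -DisplayName "*Report*"

# Does the Report Server web service answer at all?
Invoke-WebRequest -Uri "http://SQLSERVER/ReportServer" -UseDefaultCredentials -UseBasicParsing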

While syncing, getting "DPM is out of disk space for the replica" (ID 58 Details: There is not enough space on the disk (0x80070070))

DPM is out of disk space for the replica. (ID 58 Details: There is not enough space on the disk (0x80070070))
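For context, the current replica and recovery-point allocation for the datasource can be inspected from the DPM Management Shell before growing it; a sketch with placeholder names (the exact steps to raise the allocation differ between legacy disk storage, via "Modify disk allocation" or Set-DPMDatasourceDiskAllocation, and Modern Backup Storage):

$pg = Get-DPMProtectionGroup -DPMServerName "DPMSERVER" | Where-Object Name -eq "FileServers"
$ds = Get-DPMDatasource -ProtectionGroup $pg | Where-Object Name -eq "D:\"
Get-DPMDatasourceDiskAllocation -Datasource $ds          # shows how much replica space is allocated vs. used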

Regards, Durgairaja

DPM 1801 - Persist security settings Share level vs Volume level difference?


Hi,

I was wondering what the difference is between protecting a file server at the share level vs. the volume level when it comes to recovering security settings. When I restore from the volume level I get all shares and security settings back, but when I restore from the share level I do not get any security settings back, even if I choose "Apply the security settings of the recovery point version"?
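One way to see exactly what comes back in each case: save the ACLs of the original tree and of the restored tree with icacls and diff them. A sketch with placeholder paths:

# Save ACLs of the original share content...
icacls "D:\Shares\Finance" /save "C:\Temp\acl-before.txt" /T /C

# ...restore (volume-level or share-level) to an alternate location, save again, and compare.
icacls "D:\Restore\Finance" /save "C:\Temp\acl-after.txt" /T /C
Compare-Object (Get-Content "C:\Temp\acl-before.txt") (Get-Content "C:\Temp\acl-after.txt")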


/SaiTech



DPM 2016 Restoring Share Directories

Does DPM 2016 restore the share information of a directory if that directory is restored? I know that in the past System Protection had to be restored as well. Has that changed?
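Whatever the answer, exporting the share definitions up front makes it easy to recreate them if a restore does not bring them back; a sketch (the drive letter and output paths are placeholders):

# Export share definitions and their share-level permissions for everything on D:.
Get-SmbShare | Where-Object Path -like "D:\*" | Select-Object Name, Path, Description | Export-Csv C:\Temp\shares.csv -NoTypeInformation
Get-SmbShare | Where-Object Path -like "D:\*" | ForEach-Object { Get-SmbShareAccess -Name $_.Name } | Export-Csv C:\Temp\share-access.csv -NoTypeInformation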

Protect data in my organization


Hi All,

I have an issue: I want to protect my company's data from loss, and I don't want users to copy company data to a USB flash drive and take it home. If a user does copy company data and takes it home, they should not be able to read it. Please help me.

Thanks, all, for your support.

DPM 2016 Secondary Backup Server crashing at protection group


Good Morning all,

I am currently doing some work on DPM 2016, trying to create protection groups on the backup server; however, the console crashes every time I get to "Review disk storage allocation". I created the disks as volumes to be compatible, so I know there is enough space, but it keeps crashing at that spot. Is there any workaround or fix at the moment?
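If the wizard keeps crashing at that page, one possible workaround is to build the protection group from the DPM Management Shell instead. The outline below is only a sketch for an ordinary disk-based group (names, retention, and frequency are placeholders; protecting a primary DPM server's datasources adds steps beyond this):

$pg  = New-DPMProtectionGroup -DPMServerName "DPMSECONDARY" -Name "Secondary-FileServers"
$mpg = Get-DPMModifiableProtectionGroup $pg
$ps  = Get-DPMProductionServer -DPMServerName "DPMSECONDARY" | Where-Object Name -eq "DPMPRIMARY"
$ds  = Get-DPMDatasource -ProductionServer $ps -Inquire
Add-DPMChildDatasource -ProtectionGroup $mpg -ChildDatasource $ds[0]
Set-DPMProtectionType -ProtectionGroup $mpg -ShortTerm Disk
Set-DPMPolicyObjective -ProtectionGroup $mpg -RetentionRangeInDays 14 -SynchronizationFrequencyMinutes 1440
Set-DPMReplicaCreationMethod -ProtectionGroup $mpg -Now
Set-DPMProtectionGroup -ProtectionGroup $mpg             # commit; disk/volume allocation happens here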

Protecting Server to Server Storage Replication


I have two file servers set up with Storage Replica enabled; the source server owns the replicated disk, and the destination server does not present the disk. I have a protection group in DPM (2016) with both servers; however, backups of the replicated disk can only be done on the source server. The problem arises when you fail over and the source and destination servers change: backups then fail on the new destination (the old source) and start on the new source. It looks as if a lot of duplicate data will be backed up, and there may be problems with disjointed data on the DPM server.

Any suggestions on how best to back up file servers in this scenario?
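As a small aid, the current replication direction can be read on either node before (or after) a failover, for example to drive which server a script re-points protection at; a sketch:

# On either file server: show which computer currently owns the source copy of the replicated volume.
Get-SRPartnership | Format-List Source*, Destination*

# Hypothetical helper: $true on the node that is currently the replication source.
$partnership = Get-SRPartnership
$partnership.SourceComputerName -eq $env:COMPUTERNAME     # may need to compare against the FQDN instead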

Thank you.


JD Young
