Because of the error "VDP:001 The most recent checkpoint for the VDP appliance is outdated", I rebooted the virtual appliance of VDP. The VDP core system is sometimes down, and I usually reboot the appliance (with an OS shutdown) because the server vendor recommended doing so the first time it happened. Today, after lunch, I rebooted it and executed an integrity check of VDP manually. From status.dpn I found that an hfscheck takes time, but it has not completed yet and is only at 5% progress now. I tried the backup job again; this did not resolve the issue. The result of status.dpn is below (header removed):

Maintenance windows scheduler capacity profile is active.
Hfscheck in progress: started Thu Apr 4 12:48:04 2019
  > checked 896 of 14791 stripes (hfscheck)
All reported states=(ONLINE), runlevels=(fullaccess), modes=(mhpu+0hpu+0hpu)

What is "hfscheck"? Why has the number of stripes increased? Should I just wait until it is complete? I cannot find such a big number of "stripes" in any case on the Internet. Please help if you have any information. ESXi version: 6.5.0 Update 2, Build #8935087.

Here goes the sad story, with a happy end, about VDP 6.1.2.19. One unlucky day I fed a couple of OLAP VMs to VDP, and it choked, failing to deduplicate properly. It ended up with nodes full at 98, 97 and 98% respectively. I do not know what exactly helped, but here is a full list of my actions (add reboots as needed):
1. Delete big backups, run manual checkpoint, integrity check and garbage collection.
2. Rollback to an earlier checkpoint, run manual checkpoint, integrity check and garbage collection.
3. Modify the configuration threshold amounts to allow garbage collection to run (as described above). Same errors when trying to set values to 99%.
4. Expand storage! The wizard went successfully.
5. Run manual checkpoint, integrity check and garbage collection. Only the checkpoint succeeded; hfscheck and gc failed with MSG_DISK_FULL.
6. At this point I gave up and let the system run for a weekend.
On Monday it had magically repaired itself. There was a good checkpoint, a good hfscheck and a good gc. Rebooting several times and running a manual checkpoint, including unmounting the disks and running xfs_check, helped at last. The last bit was using xfs_growfs on /dev/sdb1 and /dev/sdd1 to fix the wrong size of the nodes. Edit: I think that the checkpoint rollback and waiting was enough.

I suppose GC is not going to start in the morning, am I right? I have tried to get GC into the active state following reply-5, but it looks like the used capacity of 96.4% is too high:

avmaint config --ava | grep diskrep
avmaint config disknogc=97 --ava
5-13:55:10.94029 ERROR: Command failed because these config values do not meet the following criteria:
5-13:55:10.94040 ERROR: 0 Command failed because these config values do not meet the following criteria:
5-13:55:41.90342 ERROR: disknocp(99) 96.5) 97.0) 98.0) < 100
ERROR: avmaint: config: server_exception(MSG_ERR_INVALID_PARAMETERS)

Locate the service named Volume Shadow Copy. Set the Start type to Manual and click Apply. If the Service status is Stopped, click the Start button to change it. Tip: if the backup tool in use requires the VSS service to be disabled, you can set the Start type to Disabled in the third step instead.

This will be a good way to run a fast snap incremental backup before an upgrade, a server patch, etc., without the need for a new job (and the associated waste of time and space). Afterwards, re-add your folder to the job and remove the explicitly defined VMs, ready for your nightly backup.

If the scheduler will not start, the usual causes are: 1. The MCS entry in one or all of the following files is missing or incomplete. Add/complete the MCS entry and start the scheduler. 2. Garbled characters in the above xml configuration files. Check for the garbled characters, fix them in those xml files, and start the scheduler.

dpnctl status
Identity added: /home/dpn/.ssh/dpnid (/home/dpn/.ssh/dpnid)
dpnctl: INFO: Backup scheduler status: down.
dpnctl: INFO: Maintenance windows scheduler status: enabled.
dpnctl: INFO: Unattended startup status: enabled.
Next maintenance window start time: Fri Dec 6 16:00:00 2013
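The "checked 896 of 14791 stripes" progress line can be turned into a percentage mechanically instead of eyeballing it. A minimal sketch in Python; the function name and regex are my own illustration, since VDP itself only prints the raw counts:

```python
import re

def hfscheck_progress(status_text: str):
    """Extract hfscheck progress from a status.dpn excerpt.

    Returns (checked, total, percent) or None if no hfscheck line is found.
    """
    m = re.search(r"checked\s+(\d+)\s+of\s+(\d+)\s+stripes", status_text)
    if m is None:
        return None
    checked, total = int(m.group(1)), int(m.group(2))
    return checked, total, 100.0 * checked / total

status = """Hfscheck in progress: started Thu Apr 4 12:48:04 2019
  > checked 896 of 14791 stripes (hfscheck)"""

checked, total, pct = hfscheck_progress(status)
print(f"{checked}/{total} stripes = {pct:.1f}%")  # → 896/14791 stripes = 6.1%
```

For the numbers quoted above this gives roughly 6%, which matches the "only 5% progress" observation, so the check was simply still in its early stages.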
No hfscheck yet (since last reboot?), and the gsan status is as follows (from status.dpn | less):

Node   IP Address   Version   State   Runlevel   Srvr+Root+User   Dis   Suspend   Load   UsedMB   Errlen
All reported states=(ONLINE), runlevels=(fullaccess), modes=(mhpu+0hpu+0000)
Srvr+Root+User Modes = migrate + hfswriteable + persistwriteable + useraccntwriteable
WARNING: Scheduler is WAITING TO START until Fri Dec 6 08:00:00 2013 CET.
Next backup window start time: Fri Dec 6 20:00:00 2013 CET
Next blackout window start time: Fri Dec 6 08:00:00 2013 CET
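When checking several appliances, dpnctl output like the lines above is regular enough to scan from a script rather than by eye. A small sketch that assumes only the "dpnctl: INFO: &lt;component&gt; status: &lt;value&gt;." line shape shown earlier; the function name and dictionary format are mine:

```python
def parse_dpnctl(output: str) -> dict:
    """Collect 'dpnctl: INFO: <name> status: <value>.' lines into a dict."""
    states = {}
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("dpnctl: INFO:") and " status: " in line:
            rest = line[len("dpnctl: INFO:"):].strip()
            name, _, value = rest.partition(" status: ")
            states[name] = value.rstrip(".")
    return states

sample = """dpnctl: INFO: Backup scheduler status: down.
dpnctl: INFO: Maintenance windows scheduler status: enabled.
dpnctl: INFO: Unattended startup status: enabled."""

states = parse_dpnctl(sample)
print(states["Backup scheduler"])  # → down
```

A scan like this makes it easy to alert on a backup scheduler that is "down" while the maintenance windows scheduler still reports "enabled", which is exactly the mixed state seen in the output above.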