Back up and restore ZFS file systems



I wrote two scripts, zfsdumpd.sh and zfsrestored.sh, to back up and
restore ZFS file systems.  They are especially handy when a system has
many file systems: the scripts can back up and restore all of them in
one pass.

http://www.sun.com/bigadmin/content/submitted/zfsdumpd_zfsrestored.jsp
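For a rough idea of the approach, the dump side can be sketched like
this (the pool name, snapshot label, and dump directory below are
placeholders of mine; the actual scripts at the link above are more
complete):

```shell
#!/bin/sh
# Sketch only: snapshot and dump every filesystem under one pool.
# "tank", the label format, and /var/backups are placeholder choices.
pool=tank
label=dump-$(date +%Y%m%d)
zfs list -H -o name -t filesystem -r "$pool" 2>/dev/null |
while read fs; do
    zfs snapshot "$fs@$label"
    # Flatten "tank/home/user" into "tank_home_user" for the file name.
    out=$(echo "$fs" | tr '/' '_')
    zfs send "$fs@$label" > "/var/backups/$out.$label.zfs"
done
```

Restoring is the reverse: feed each dump file to "zfs receive".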

Victor

Reply victorfeng1973 (44) 11/1/2007 1:11:30 PM



Those scripts look useful.

FYI: you might also be interested in Tim Foster's automatic ZFS
snapshot scripts:
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_8

One problem I encountered when taking snapshots of large filesystems
(>100G) is described in this bug report:
http://bugs.opensolaris.org/view_bug.do?bug_id=6509628

The above bug is listed as fixed in a Nevada build, but I am running
Solaris 10 update 3 (11/06) and still ran into it.  I don't know
whether it is corrected in the s10u4 (8/07) release.

The problem was that destroying a snapshot, which is supposed to be
fast, was taking hours and pinning a CPU at 100%.  There is a
workaround described in the above bug report: make your current
working directory somewhere inside the filesystem that was
snapshotted, then try to unmount it.  The umount will fail (because
the filesystem is in use as your working directory, if not for other
reasons), but the side effect is that it somehow lets the snapshot be
destroyed in a reasonable amount of time (a few seconds) and without
the CPU looping.
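Spelled out as commands, the workaround looks roughly like this (the
filesystem and snapshot names are made up, and it is wrapped in a
function here so nothing runs by accident):

```shell
#!/bin/sh
# Hypothetical names: filesystem tank/data mounted at /tank/data,
# with a snapshot tank/data@mysnap that refuses to die quickly.
workaround() {
    cd /tank/data || return 1     # make the fs our working directory
    umount /tank/data             # expected to FAIL: filesystem is busy
    zfs destroy tank/data@mysnap  # now completes in seconds, not hours
}
```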

However, a bad side effect of the workaround, which I have noticed and
which is not reported in the above bug, is that if the filesystem that
was snapshotted is shared over NFS, it becomes unshared as a result of
the attempted umount.  That is, even though the umount fails, it
causes the filesystem to be unshared (as if "zfs unshare ..." had been
run on it).  So it is necessary to run "zfs share ..." after the
umount for NFS to keep working.
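So a safer version of the workaround ends by restoring the share;
something like this (again with a made-up filesystem name):

```shell
#!/bin/sh
# Hypothetical filesystem tank/data; re-share it after the failed umount.
reshare() {
    # Only act if sharing was enabled for this filesystem to begin with.
    if [ "$(zfs get -H -o value sharenfs tank/data)" != "off" ]; then
        zfs share tank/data
    fi
    # Sanity check: the mountpoint should show up in the share list again.
    share | grep /tank/data
}
```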

Reply Doug 11/1/2007 8:23:00 PM

On Nov 1, 3:23 pm, Doug <dy2...@gmail.com> wrote:
> Those scripts look useful.
> [...]


Good to know! I'll try it out too.

Victor

Reply victorfeng1973 11/1/2007 9:04:28 PM
comp.unix.solaris (25764 articles, 87 followers): 2 replies, 486 views

