Solaris Volume Manager (SVM; formerly known as Online: DiskSuite, and later Solstice DiskSuite) is a software package for creating, modifying and controlling RAID-0 (concatenation and stripe) volumes, RAID-1 (mirror) volumes, RAID 0+1 volumes, RAID 1+0 volumes, RAID-5 volumes, and soft partitions.

Version 1.0 of Online: DiskSuite was released as an add-on product for SunOS in late 1991; the product has undergone significant enhancements over the years. SVM has been included as a standard part of the Solaris Operating System since Solaris 8 was released in February 2000.

SVM is similar in functionality to later software volume managers such as FreeBSD's Vinum, allowing metadevices (virtual disks) to be concatenated, striped or mirrored together from physical ones. It also supports soft partitioning, dynamic hot spares, and growing metadevices. Mirrors support dirty region logging (DRL, called resync regions in DiskSuite), and RAID-5 volumes support logging as well.
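
As an illustration of these capabilities, the same metainit command used for mirrors in the example below can also build stripes, concatenations and soft partitions. The device and slice names in this sketch are placeholders only, not part of the mirroring example that follows:

# metainit d30 1 2 c0t0d0s4 c0t1d0s4 -i 32k     (two-slice stripe, 32 KB interlace)
# metainit d40 2 1 c0t0d0s5 1 c0t1d0s5          (concatenation of two slices)
# metainit d50 -p d40 1g                        (1 GB soft partition on top of d40)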

The ZFS file system, added in the Solaris 10 6/06 release, has its own integrated volume management capabilities, but SVM continues to be included with Solaris for use with other file systems.

Example of mirroring the boot disk with SVM, using two disks:

Disks:
c0t0d0 (existing system disk)
c0t1d0 (new mirror disk)

First copy the partition table of the first disk to the second:

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
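
As a quick sanity check (not part of the original procedure), the copied label can be printed from the second disk and compared with the first:

# prtvtoc /dev/rdsk/c0t1d0s2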

If you get an error like:

fmthard: Partition 2 specifies the full disk and is not equal full size of disk

then you first need to run format on the second disk so that it carries a Solaris label.

bash-3.00# format
Searching for disks...done

Select your second disk from the list.

format> p
WARNING - This disk may be in use by an application that has
modified the fdisk table. Ensure that this disk is
not currently in use before proceeding to use fdisk.
format> fdisk
No fdisk table exists. The default partition for the disk is:

a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> label
Ready to label disk, continue? yes
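
Now that the disk has a Solaris label, repeat the prtvtoc | fmthard command from above; it should complete without the error:

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2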

Run metadb to create replicas of the metadevice state database (here, three replicas on a dedicated slice of each disk):

# metadb -a -f -c 3 c0t0d0s7 c0t1d0s7
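
The replicas can be verified with metadb; the -i flag also prints a legend explaining the status flags:

# metadb -i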

Then run metainit to create the metadevices for each slice (here the root slice s0 and the swap slice s1):

# metainit -f d11 1 1 c0t0d0s0
# metainit d12 1 1 c0t1d0s0
# metainit d10 -m d11
# metaroot d10

# metainit -f d21 1 1 c0t0d0s1
# metainit d22 1 1 c0t1d0s1
# metainit d20 -m d21
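
The same three-step pattern (force-create a submirror on the first disk's slice, create the matching submirror on the second disk, build a one-way mirror) applies to any further slices; for example, a hypothetical /var on slice 5 would look like this:

# metainit -f d31 1 1 c0t0d0s5
# metainit d32 1 1 c0t1d0s5
# metainit d30 -m d31

The configuration created so far can be listed in metainit syntax with metastat -p.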

Edit /etc/vfstab so that root and swap refer to the metadevices:
/dev/md/dsk/d20 - - swap - no -
/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no logging
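
Note that metaroot d10 has already updated the root entry in /etc/vfstab and /etc/system, so only the swap line normally needs manual editing. Before rebooting, Sun's root-mirroring procedure also suggests flushing all UFS file systems, roughly:

# lockfs -fa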

# reboot (for x86)

# init 0 (for SPARC), then from the OK prompt:
{0} ok setenv boot-device disk0 disk1
{0} ok boot
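
The disk0 and disk1 aliases assume the OpenBoot PROM already has aliases for both disks; if it does not, an alias can be created from the second disk's device path (the path below is only a placeholder, check yours with ls -l /dev/dsk/c0t1d0s0):

{0} ok nvalias disk1 /pci@1f,0/pci@1/scsi@8/disk@1,0
{0} ok setenv boot-device disk0 disk1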

After Solaris has booted up, attach the second submirrors:

# metattach d10 d12
# metattach d20 d22

Check the synchronization progress:
# metastat | grep %

To monitor the metastat output continuously, run:

# while true; do metastat | grep %; sleep 20; done

As the last step, install the boot blocks on the second disk with installgrub (x86) or installboot (SPARC); otherwise you will not be able to boot from the second disk once the first disk has failed.

For x86 machines, run fdisk to set the active Solaris partition and install the master boot program on the second disk (replace the wildcards with your disk's device name):
bash-3.00# fdisk -b /usr/lib/fs/ufs/mboot /dev/rdsk/c?t?d?p?

If the root partition is mirrored, make the second disk bootable:
===> For x86 machines
bash-3.00# /sbin/installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c?t?d?s?

===> And for SPARC machines
bash-3.00# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c?t?d?s?

Example output from running installgrub against the second disk:

bash-3.00# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
Updating master boot sector destroys existing boot managers (if any).
continue (y/n)?y
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 265 sectors starting at 50 (abs 16115)
stage1 written to master boot sector
bash-3.00#
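
As an optional final check, boot the machine from the second disk to confirm the mirror really is bootable, for example from the OpenBoot prompt on SPARC (on x86, pick the second disk in the BIOS boot menu):

{0} ok boot disk1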