
Clone full disk with multiple OS X Versions installed

Hi all,


here is my starting problem:

For lab/testing purposes, I set up an iMac with multiple partitions and installed on it Mac OS X 10.6 Snow Leopard, OS X 10.7 Lion, OS X 10.8 Mountain Lion, OS X 10.9 Mavericks, OS X 10.10 Yosemite, OS X 10.11 El Capitan and macOS 10.12 Sierra. Every version starting with Lion also installs its own recovery volume, so I end up with an iMac that shows 13 bootable volumes in the Startup Manager. It took me 4 days to install all the versions and download/install every update for each system. It is working fine, and this is what I wanted.

My question now is: how can I quickly duplicate this setup on another iMac (same model, same specs)?


What I already tried:

1. Disk Utility can only make images of volumes, not whole disks, and it can only restore an image onto a volume, not onto a disk.

2. I managed to create an image of my whole disk with the following:

a. connected my 'master' iMac to another one using Target Disk Mode

b. created image with Terminal command:

sudo hdiutil create ~/Desktop/MultiOS.dmg -srcdevice /dev/disk1

I end up with a .dmg image of 92 GB (the HDD in the 'master' iMac is 500.11 GB).
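To check that the image really captured the whole partition map, and not just a single volume, hdiutil can list the partitions recorded inside it:

hdiutil imageinfo ~/Desktop/MultiOS.dmg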

But I cannot restore this image to another iMac

- using DeployStudio (it claims there is not enough space on the target disk, even though the disks have identical specs)

- using Disk Utility (cannot choose a disk as target, only a volume).

3. I am now considering Carbon Copy Cloner, but I have read in several places that CCC can no longer block-copy an HDD. Are there experienced CCC users who could confirm this, or tell me whether CCC can solve the problem above?
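4. Another avenue I have not tested yet: asr, the command-line engine behind Disk Utility's Restore function. Unlike the GUI, it accepts a raw device node as the target, so it might take my whole-disk image (block restores usually require scanning the image first; /dev/disk1 here is just my guess for the target machine's device node):

sudo asr imagescan --source ~/Desktop/MultiOS.dmg

sudo asr restore --source ~/Desktop/MultiOS.dmg --target /dev/disk1 --erase

Whether asr will accept a multi-partition whole-disk image like mine, I do not know.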


Any other option to quickly duplicate/clone an iMac HDD (one holding 13 bootable volumes) to another iMac is welcome.


Thanks in advance to whoever will read my issue,

Many thanks to whoever can reply,

My eternal gratitude to whoever can provide a quick solution,


PhilB

Posted on Feb 9, 2017 7:31 AM

10 replies

Mar 17, 2017 1:34 PM in response to Phil-CB

Seems to me that dd went off the rails. With SIGINFO you can find out how dd is doing.


There is no temp file. It's all in memory.


sudo dd if=/dev/disk1 bs=4096 | gzip | dd of=~/Desktop/disk1img bs=4096

dd if= reads the data sector by sector into memory with a 4096-byte buffer; each 4096-byte block is forwarded to gzip, which compresses it and passes it on to dd of=, which writes it to disk.

I didn't realize the block size was so small until I reviewed the documentation just now (man dd).

I should have written something like:

sudo dd if=/dev/disk1 bs=512m | gzip | dd of=~/Desktop/disk1img bs=512m

This would give you 512 megabytes as the buffer; adjust as needed.
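If you want a rough idea of which block size your disk likes before committing to a run this long, you can time a short read that just throws the data away (count=4 with bs=512m reads 2 GB; the numbers are only a ballpark):

sudo dd if=/dev/disk1 of=/dev/null bs=512m count=4

dd prints the bytes transferred and the bytes/sec rate when it finishes.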

There is a way of finding out how far the commands have got:

sudo kill -s SIGINFO <pid> <pid> ... # get dd info

What you do is to open a second terminal window and enter this command

sudo kill -s siginfo $(pgrep ^dd)

It looks a little scary and you need to be careful.

The -s option tells kill which signal to send instead of the default SIGTERM.

$(pgrep ^dd) finds every instance of dd: it runs the pgrep command and matches running processes whose names begin with dd.

SIGINFO is the signal being sent; dd responds to it by printing its progress.

Once you enter the command, look at the first Terminal window and you will see how far dd has got.

[Screenshot: portions of two Terminal windows, showing the dd status output after the kill command]

You can see portions of my two Terminal windows. FYI: the up arrow retrieves the last command.

You could check every minute via:

while :; do sudo kill -s siginfo $(pgrep ^dd); sleep 60; done

Note that sudo only caches your password for a limited window (typically five minutes), so the loop may eventually stop to prompt you again. To avoid that, you could do

sudo bash

# puts you into a root shell; use exit to leave it

while :; do kill -s siginfo $(pgrep ^dd); sleep 60; done
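Two more options, for what they are worth. If you happen to have pv installed (it is not part of macOS; MacPorts or Homebrew can supply it), it gives a live throughput readout with no signals needed:

sudo dd if=/dev/disk1 bs=512m | pv | gzip | dd of=~/Desktop/disk1img bs=512m

And pressing control-T in the Terminal window where dd is running makes the terminal driver send SIGINFO to the foreground process directly, so you don't even need a second window.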

R

Mar 23, 2017 1:48 AM in response to Phil-CB

There's the Unix dd command.


dd if=/dev/disk0s10 bs=4096 | gzip | dd of=~/disk0-s10 bs=4096     # back up: read the raw partition, compress, write to a file
dd if=~/disk0-s10 bs=4096 | gunzip | dd of=/dev/disk0s10 bs=4096   # restore: read the file, decompress, write back to the partition


You will have to boot off some external HD. You need to figure out the name of the disk, with something like the Terminal df or diskutil list commands, then unmount the disk from Disk Utility. The example above is for a single partition; for the whole disk it would have been /dev/disk0. The new disk needs to be the same size or, I believe, larger. I did this in my PPC days but not more recently.
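For example (assuming the target turns out to be disk1; check the diskutil list output on your own machine):

diskutil list

sudo diskutil unmountDisk /dev/disk1

The unmountDisk verb unmounts every volume on that disk at once, so dd can read or write the raw device.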


Before I do this, I zero out the free space on the partitions, which saves space in the resulting file.


cd x   # cd into the file system on the partition to be zeroed
dd if=/dev/zero of=zero  bs=1024k  count=101   # write a file full of zeros
rm -i zero                                     # then delete it
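Note that count=101 only makes a roughly 101 MB file, which is fine as an example; to zero all of the free space (which is what lets gzip collapse it), you can leave count off and let dd run until the volume is full. The "No space left on device" error at the end is expected:

dd if=/dev/zero of=zero bs=1024k

rm -i zero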

Feb 14, 2017 4:30 AM in response to Tony T1

Hi Tony,


I tested CopyCatX, and it worked. Cloning the disk live (from one disk directly to the other without creating an image file, with the master iMac and the target one both connected to a third Mac via Target Disk Mode / Thunderbolt) took something like 9 hours, but it did the job and the new iMac is working perfectly.


*We are eternally grateful*

Mar 17, 2017 5:52 AM in response to rccharles

Hi rccharles,


I finally had an opportunity to test your solution. I started the Terminal command

sudo dd if=/dev/disk1 bs=4096 | gzip | dd of=~/Desktop/disk1img bs=4096


Luckily I started the test on a machine I do not use much, because I launched the process something like 29 hours ago and it is still running! Moreover, I see the available space on my disk decreasing, but the size of the disk image file stays the same (334 MB since yesterday).


Do you know where gzip, launched from such a Terminal command, stores its temp data? If I check the gzip process in Activity Monitor, it shows the following under "Open Files and Ports":

cwd  /Volumes/Sierra
txt  /usr/bin/gzip
txt  /usr/lib/dyld
txt  /private/var/db/dyld/dyld_shared_cache_x86_64h
0    ->0x36600bba3d480999
1    ->0x36600bba3d480819
2    /dev/ttys000


The dyld_shared_cache_x86_64h file is only 650 MB, so it can't be the temp location for the gzip process (which has already consumed something like 300 GB on my disk).
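For reference, lsof shows the same list from Terminal:

sudo lsof -c gzip

I suppose the 0 and 1 entries that are bare kernel addresses (->0x…) are the two ends of the pipe rather than files on disk, which would mean gzip streams everything through memory without a temp file.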

Mar 23, 2017 10:22 AM in response to Phil-CB

What I do is zero out the unused space: create a big file of all zeros, then delete the file. This makes the compression go faster, I think, and a bigger block size also seems to speed things up. I wanted to reduce the size of the resulting archive. I use Clonezilla to back up my Linux machines across a 100 Mbit Ethernet link.


It does take a while to copy a disk. It surprised me.


dd if=/dev/zero of=zero bs=1m count=101


This creates a 101 MB file of all zeros.


Decompressing seems to go fine.


You should be able to see how much processor power is used by the compression. You'd think that with today's faster processors it would not be as big an issue. I did this on an iMac G3 600; of course, disks were smaller in those days.


R
