dd command bs parameter

I’ve been using the Terminal dd command to write a Linux .iso file to a 32 GB USB stick. Is there a recommended value for the block size (bs) parameter? The stick is USB 3 compatible, my Mac has an internal 1 TB SSD, and it is running the latest Ventura version. Thanks.


MacBook Pro (2017 – 2020)

Posted on Jan 4, 2024 7:32 AM

Question marked as Top-ranking reply

Posted on Jan 4, 2024 6:28 PM

Not really, but I would not use the default, as it is likely too small and will involve too many small I/O operations.


I would try something along the lines of 128K to 256K:

dd bs=$((128 * 1024)) if=/path/to/source/file of=/path/to/output/USB/stick
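
On a Mac you would typically point that at the raw device and unmount the stick first. A sketch along those lines, where disk4 and the ISO path are placeholders for your own:

diskutil list                          # find the stick's identifier, e.g. disk4
diskutil unmountDisk /dev/disk4        # the volumes must be unmounted before dd can write
sudo dd if=/path/to/linux.iso of=/dev/rdisk4 bs=$((128 * 1024)) status=progress

The rdisk (raw) node bypasses the buffer cache and is usually noticeably faster than the plain disk node for this sort of sequential write.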

Jan 5, 2024 6:55 AM in response to BobHarris

BobHarris wrote:

Not sure if there is a way to find this information at the user level.
You could always do a timing test using different bs sizes.


There is no ioctl for this (well, not one that I’d trust) as the operating system doesn’t have the necessary insight into the entire I/O path.


Testing different sizes is the only way I’m aware of, as we found some odd knees in the performance curves over the years.


Fibre Channel storage had some great oddities around storage-controller performance, for instance, but with transfers set past a certain size, I/O performance was all “mostly good enough”. (Why even mention Fibre Channel external storage controllers? Because there’s a microprocessor, firmware, and a storage controller embedded inside a flash drive, too, and the performance of those embedded controllers and their flash can vary.)


To the OP: If you’re generating these flash drives at all often, I’d suggest investing in higher-grade (faster) flash drives, as the cheap ones are very slow. That will likely have a larger overall effect than the probably marginal differences from adjusting transfer sizes.

Jan 5, 2024 4:32 AM in response to zooth

Not sure if there is a way to find this information at the user level.


I know that in some Unix systems there are ioctl() calls that can get the maximum transfer size for a specific device. If the write() system call uses something larger, the kernel code just breaks the I/O into smaller chunks.


bs=$((4 * 1024 * 1024))


should work fine. It is almost certainly larger than just about any device’s maximum I/O size.


You could always do a timing test using different bs sizes.
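
A rough way to run that timing test, with the ISO path and rdisk4 standing in for your own file and device: write the same 256 MiB at each block size and compare the throughput summary that dd prints on stderr when it finishes.

for bs in 65536 262144 1048576 4194304; do
    count=$((268435456 / bs))          # same 256 MiB total at every block size
    echo "bs=$bs"
    sudo dd if=/path/to/linux.iso of=/dev/rdisk4 bs=$bs count=$count 2>&1 | tail -1
done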

Jan 5, 2024 8:05 AM in response to MrHoffman

MrHoffman wrote:

To the OP: If you’re generating these flash drives at all often, I’d suggest investing in higher-grade (faster) flash drives, as the cheap drives are very slow. That’ll likely have a larger overall effect than what will probably be marginal differences from adjusting transfer sizes.

I'm not generating many of these, so it's probably not worth getting faster flash drives. I've done an experiment with a 64GB Samsung Type-C drive using different bs values, i.e. sudo dd if=somelargefile of=/dev/rdisk4 bs=variousblocksizes status=progress. The default block size of 512kb gave a speed of around 1390 KB/s, 1024kb gave about 3200 KB/s, but with larger block sizes the speed stayed around 3500 KB/s. So maybe I'll use bs=2m with this particular make and size of flash drive. Thanks once again to you guys for your advice.
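
For the record, the command I'll probably settle on, with rdisk4 standing in for whatever identifier the stick gets:

sudo dd if=somelargefile of=/dev/rdisk4 bs=2m status=progress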

Jan 6, 2024 12:16 PM in response to zooth

zooth wrote:

Of course the default block size is 512 bytes NOT kb! Was increasing the block size to the mega range a bit of a quantum leap? Perhaps I need to experiment with some value in the kilo range!


Hard disk drives used to have various sector sizes, of which 512 bytes eventually became the most common choice. Some old hard disks supported multiple sizes, including 512 bytes, though only one could be selected at a time. After several decades of hard disks with 512-byte sectors, and with different sector sizes introduced for CD and DVD media (usually 2048 bytes), hard disk drives began migrating to 4096-byte sectors, as that reduced the size of the mapping tables, at the cost of wasting roughly half of a 4096-byte sector per file on average. Most storage devices still accept 512-byte writes for compatibility with older apps and tools, though modern disks usually expect 4096-byte sectors. Writing larger byte counts usually means larger I/O sizes, which makes for fewer and faster transfers. Probably way more than you wanted to know, too.
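
If you want to see what a given device reports, diskutil shows the logical block size; for example, with disk4 standing in for your device:

diskutil info disk4 | grep "Block Size"        # e.g. "Device Block Size: 512 Bytes"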

Jan 6, 2024 12:41 PM in response to zooth

Rather than spend a lot of time guessing about the optimum bs value for macOS dd, why not use something like the free Balena Etcher to write that Linux ISO to your USB stick as a bootable volume? I have used it in the past and it just works without fanfare.


It's not that I haven't used dd myself in the past, but rather, life is too short waiting for dd to finish.

Jan 6, 2024 1:02 PM in response to MrHoffman

MrHoffman wrote:


zooth wrote:

Of course the default block size is 512 bytes NOT kb! Was increasing the block size to the mega range a bit of a quantum leap? Perhaps I need to experiment with some value in the kilo range!

Hard disk drives used to have various sector sizes, of which 512 bytes eventually became the most common choice. Some old hard disks supported multiple sizes, including 512 bytes, though only one could be selected at a time. After several decades of hard disks with 512-byte sectors, and with different sector sizes introduced for CD and DVD media (usually 2048 bytes), hard disk drives began migrating to 4096-byte sectors, as that reduced the size of the mapping tables, at the cost of wasting roughly half of a 4096-byte sector per file on average. Most storage devices still accept 512-byte writes for compatibility with older apps and tools, though modern disks usually expect 4096-byte sectors. Writing larger byte counts usually means larger I/O sizes, which makes for fewer and faster transfers. Probably way more than you wanted to know, too.

Larger sector sizes also cut the overhead of the prefix header and suffix trailer on each record to 1/8th overall, which allowed even more data to be stored. For example, if the header/trailer took up 16 bytes per sector, that would be 34,359,738,368 bytes (32GB) for a 1TB drive with 512-byte sectors. Change that to 4K sectors, and the header/trailer takes up only 4,294,967,296 bytes (4GB), making an additional 28GB of storage available. NOTE: the header/trailer size may differ across manufacturers and recording technologies.
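
The arithmetic behind those figures, for anyone who wants to check it (the 16-byte header/trailer is an illustrative assumption):

echo $(( (2 ** 40 / 512) * 16 ))       # 34359738368 bytes (32GB) of overhead with 512-byte sectors
echo $(( (2 ** 40 / 4096) * 16 ))      # 4294967296 bytes (4GB) of overhead with 4K sectors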


What is in a sector header/trailer? The sector address, an alternate address (if it is a bad sector), and ECC (error-correcting codes).


Also, back in the good old days (VAX/VMS 780 days) I had users whining that 512-byte sectors wasted too much space with small files. Those were the days when disks were typically 50-100 megabytes, a big disk was 250 megabytes, and a few hundred developers shared the system and its storage.


Moving to 4K sectors is noise when the storage device is often a terabyte or more, and a small file these days is several megabytes (a picture most likely for a typical user).
