Writing a large file (gigabytes) takes fewer “time-outs” to modify metadata than writing lots of small files does.
Writing the data for a large file just needs to allocate storage every so often. Allocating storage involves updating the metadata that tracks free storage, updating the file’s own metadata that tracks the storage it holds, and writing the data. If the filesystem guesses you are writing a huge file, it may allocate huge chunks of storage each time the file needs more, so the allocation-side work happens less often. And when the write is finished, the file’s timestamps must be updated.
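To make the “allocate huge chunks up front” idea concrete, here is a minimal sketch using POSIX preallocation from Python. The file name and size are made up for illustration; the point is that one preallocation call lets the filesystem reserve storage in a single step instead of growing the file (and its allocation metadata) on every write.

```python
import os

path = "big.dat"              # illustrative path
size = 64 * 1024 * 1024       # 64 MiB, illustrative size

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
try:
    # Ask the filesystem to reserve all the storage in one call
    # (POSIX-only; available as os.posix_fallocate on Linux).
    os.posix_fallocate(fd, 0, size)
    # ... then write the data in large chunks ...
finally:
    os.close(fd)
```

On filesystems that support it, this also tends to produce more contiguous (less fragmented) files, since the allocator sees the full request at once.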
Writing lots of small files involves all the same storage-allocation issues, plus the filesystem first needs to allocate each file’s metadata, which records the file’s storage, its ownership, its dates, the type of file (regular file, directory, symlink, FIFO, device, socket, …), and other miscellaneous file data. Then the file name needs to be added to the directory where the file can be found, which may require the directory itself to grow and allocate its own additional storage. Copying small files is a lot more work than copying huge files.
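A rough way to see this for yourself is to write the same total number of bytes once as a single file and once as many small files, and time both. This is an illustrative sketch, not a rigorous benchmark (the page cache and your particular filesystem will affect the numbers); all names and sizes are made up.

```python
import os
import tempfile
import time

def write_one_big(dirpath, total, chunk=1 << 20):
    # One file: metadata work happens once, plus occasional allocation.
    buf = b"x" * chunk
    with open(os.path.join(dirpath, "big.bin"), "wb") as f:
        for _ in range(total // chunk):
            f.write(buf)

def write_many_small(dirpath, count, each):
    # Many files: every iteration allocates file metadata and
    # adds a directory entry, on top of allocating the data storage.
    buf = b"x" * each
    for i in range(count):
        with open(os.path.join(dirpath, f"small-{i}.bin"), "wb") as f:
            f.write(buf)

with tempfile.TemporaryDirectory() as d:
    total = 16 << 20                      # 16 MiB total in both cases

    t0 = time.perf_counter()
    write_one_big(d, total)
    t_big = time.perf_counter() - t0

    t0 = time.perf_counter()
    write_many_small(d, count=4096, each=total // 4096)  # 4096 × 4 KiB
    t_small = time.perf_counter() - t0

    print(f"one big file: {t_big:.3f}s, many small files: {t_small:.3f}s")
```

On most systems the many-small-files case is noticeably slower even though the total data written is identical, which is exactly the per-file metadata and directory-entry overhead described above.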
I’m glossing over lots of other stuff.
My day job is working on a commercial Unix/Linux filesystem.