
Compressor 4.1.1 - Where to place source video files for best performance

Just read that the latest Compressor update stabilizes/improves performance when your source video file is NOT located on the startup volume. I thought you were always supposed to avoid placing your source videos on the startup volume, choosing another network or attached drive (with an appropriately fast connection) instead, so your computer can better handle the encoding, not to mention clustering. Have I been wrong all this time?

So far, Compressor 4.1.1 cranks through some vids faster than ever... at other times, it's slower than the older Compressor versions. It's always been a mysterious piece of software.

Posted on Feb 15, 2014 12:25 PM

Reply
5 replies

Feb 15, 2014 2:35 PM in response to kmanchor

As I read it, that release note was about fixing bugs in distributed encoding.


I agree with your comment about having the source files on a non-startup drive. It's not that I never have media on my startup drive. But the clips I keep there are short, used for tests... and usually get trashed after a bit.


I also agree about some jobs being faster – much faster when hardware acceleration is enabled. Not sure whether some are slower.


Russ

Feb 16, 2014 2:36 PM in response to kmanchor

At one time that might have been the conventional wisdom, but that changed with multi-core CPUs and hyper-threading. Startup volumes can be pretty fast these days with SSDs, and the bottleneck for distributed encoding has moved from the disk to the gigabit interface. I see the gigabit interface on my master maxed at ~115 MB/s, which is as fast as it can go, but my 840 Pro SSD can read and write at over 500 MB/s. In the older versions (controllers and nodes... fun times!) I always had the master encoding alongside the other machines, using attached storage. The problem was always more about fast storage. CPU/RAM would only be a problem if your master was always maxed out, but generally it never gets to that point.
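Some quick arithmetic makes the bottleneck claim concrete (a sketch; the ~115 MB/s and ~500 MB/s figures are the ones quoted above, and the overhead figure falls out of the math):

```python
# Back-of-the-envelope: why gigabit Ethernet, not the SSD, caps distributed encoding.

GIGABIT_BITS_PER_S = 1_000_000_000                      # raw line rate of gigabit Ethernet
theoretical_mb_s = GIGABIT_BITS_PER_S / 8 / 1_000_000   # 125 MB/s before any overhead

observed_gigabit_mb_s = 115    # figure quoted above (TCP/IP + Ethernet framing overhead)
ssd_read_mb_s = 500            # Samsung 840 Pro figure quoted above

overhead_pct = (1 - observed_gigabit_mb_s / theoretical_mb_s) * 100

print(f"Theoretical gigabit throughput: {theoretical_mb_s:.0f} MB/s")
print(f"Observed: {observed_gigabit_mb_s} MB/s (~{overhead_pct:.0f}% protocol overhead)")
print(f"SSD is {ssd_read_mb_s / observed_gigabit_mb_s:.1f}x faster than the network link")
```

So even a modest SSD outruns the network by roughly 4x, which is why shuffling source media off the startup volume matters much less than it once did.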


Anyway, I think what you are seeing speed-wise is the hardware acceleration Russ alluded to. The difference between single pass and multipass for something like the YouTube 1080p setting implies hardware encoding is being used for single pass, since single pass finishes much faster than it should... it should only really be around 2x as fast, but it's much faster than that. Some of the settings use single pass by default. The only problem is single pass doesn't look as good as multipass, but if you are doing YouTube I guess it doesn't matter, since YouTube will ruin the quality anyway.

Feb 19, 2014 7:11 AM in response to kmanchor

Hi, I'll just add my thoughts on Compressor.app (fast file systems are the least of the considerations)... see why below.


From our experience, most of the efficiencies for multipass segmented transcoding (this is the worst case) are achieved through:

  1. Very fast multi-core CPUs per host, or offloading where possible to GPUs that specialise in the transcode you want.
  2. CAREFULLY picking the types of work to send to a Compressor cluster:
    • Define various clusters using the service nodes.
    • Send small jobs to a single-NODE cluster, because:
      • the job possibly won't segment well enough to be efficient, and
      • the slowest node will hold up job completion.
    • Send larger jobs across the cluster nodes.
  3. Avoiding ANY COPYING of source elements, resources, or TARGET distribution makes between SERVICE NODES:
    • Yes, this means copying over direct-attached file systems as well as NETWORK-attached... all of this takes time.
    • Copying to and fro adds greatly to the transaction SERVICE TIME.
    • Avoiding it also ELIMINATES the post-assembly COPYING of segmented transcode objects on the downhill run of a segmented multi-host distributed transcode.
  4. Per point 2: for DISTRIBUTED clusters, making ALL SOURCE AND TARGET file systems:
    • available and accessible to all HOSTS (network-based/shared, most likely), mounted with the correct auth and access;
    • reachable over dedicated NETWORK paths - use dedicated subnets and keep internet noise away from the cluster network:
      • Bump the network speed with JUMBO FRAMES - you'll need switches (and more €'s, $HK's) and hosts that support them. Look at Ethernet over Thunderbolt too.
      • A fast NETWORK keeps the many small TRIVIAL transactions (not copying) flowing, so cluster service nodes keep working fast!
      • OPTION: consider a third-party UDP-based file system, much faster than TCP-based...
    • MUCH BETTER (and COSTLY) OPTION: use a single shared arbitrated file system implemented over a SAN if you can. There are many out there (MetaSAN etc.)... costly for infrastructure.
  5. Avoiding dedicating TOO MANY INSTANCES to the cluster - less is usually more.

    Grand Central Dispatch does a rather splendid job of utilising the virtual cores... don't be greedy 👿

  6. Avoiding unnecessary FRAME CONTROLS settings:
    • They are very CPU intensive and often needless - effort that doesn't add to the final distribution.
    • In Compressor.app V4.1.1, make your customised settings use the LOWEST setting (the defaults are not the lowest).
  7. Avoiding PREVIEWING -

    this takes time and also adds to the JOB SERVICE time (elapsed time).

  8. Avoiding LARGE BATCHES - a real issue in Compressor.app V4.0, not so much in Compressor V4.1:
    • Each DISTRIBUTION MAY require a thumbnail or a preview to be built - in Compressor V4.0 and prior, HIDE the preview window.
  9. ACCORDING TO TASTE:
    • For NLE (FCPX) work from a MASTER.mov... sure, use "send to Compressor.app" if it works flawlessly.
    • Create customised LOCATIONS (destinations) and use these for batch jobs.
    • Also use TEMPLATES if you can (old saved .compressor jobs for Compressor V4.0 and legacy).
    • Use Compressor.app for Motion V5 and Motion V4 RENDERing of projects, to free up the Motion UI.
  10. FAST DISK SPEEDS for TRANSCODING? (Myth-buster time.) HUMBUG indeed... ℹ: this factors quite low in the workflow mix. For example, a single 7200 RPM disk spindle will be fine for the average target (an 18 Mb/s H.264 distribution) and will easily service the typical WRITEs from a transcode.
    • Monitor your usage and you will see: the FILE SYSTEM access is mostly READs (as in editing) and a small amount of WRITEs.
    • Check for yourself... and break out of this myth.
    • Sure, Compressor.app loads/launches very fast from an SSD (like my OWC Accelsior_E2 PCI SSD)... no help for the actual transcoding.
    • Your file-system activity friend is the unix fs_usage command - check it out to monitor the compressord processes in detail.
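Point 10 is easy to sanity-check with arithmetic (a sketch; the 18 Mb/s target is the figure from the post, while the ~150 MB/s sequential figure for a 7200 RPM spindle is an assumed typical value, not from the post):

```python
# Sanity check for point 10: how hard does an 18 Mb/s H.264 target actually hit the disk?

target_bitrate_mbit_s = 18                # typical H.264 distribution target, per the post
write_mb_s = target_bitrate_mbit_s / 8    # megabits per second -> megabytes per second

spindle_seq_mb_s = 150                    # ASSUMED typical 7200 RPM sequential throughput

print(f"Transcode WRITE load: {write_mb_s:.2f} MB/s")
print(f"That is ~{write_mb_s / spindle_seq_mb_s:.1%} of one 7200 RPM spindle")
# The write side is a rounding error; reading the (much larger) source dominates,
# which matches the observation above that file-system access is mostly READs.
```

At roughly 2 MB/s of sustained writes, even a single spindle is loafing, which is the whole point of the myth-buster.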


FWIW, our production source and target is usually a very fast DISK ARRAY (8 spindles over an R380 card with 2 SAS interfaces... tops out at 7) hosted on a topped-out Mac Pro.

  • This source-and-target file system is shared over a DEDICATED GbE network (with its own DNS and DHCP via an OS X 10.9 Server) to the service nodes (Mac minis)... no noise on this. It services reads very well!
  • (user-uploaded screenshot: AJA System Test results)
  • The above is from the AJA System Test app (whacktest.app)... very nice for FCPX, however of small benefit for transcoding (Compressor.app).
  • Each client job has its own source and distribution.


HTH

Warwick

Hong Kong

Feb 20, 2014 12:32 AM in response to Warwick Teale

Warwick Teale wrote:


Hi, I'll just add my thoughts on Compressor.app (fast file systems are the least of the considerations)... see why below.


FAST DISK SPEEDS for TRANSCODING? (Myth-buster time.) HUMBUG indeed... ℹ: this factors quite low in the workflow mix. For example, a single 7200 RPM disk spindle will be fine for the average target (an 18 Mb/s H.264 distribution) and will easily service the typical WRITEs from a transcode.

  • Monitor your usage and you will see: the FILE SYSTEM access is mostly READs (as in editing) and a small amount of WRITEs.
  • Check for yourself... and break out of this myth.
  • Sure, Compressor.app loads/launches very fast from an SSD (like my OWC Accelsior_E2 PCI SSD)... no help for the actual transcoding.
  • Your file-system activity friend is the unix fs_usage command - check it out to monitor the compressord processes in detail.




You have to take into account other variables when people (including myself) said fast storage was more important. When that was the general advice, I don't think it was meant for people with just two machines in their cluster, or for local-only encoding. Using 4 minis and an MBP (2 instances per machine), I see hard-drive reads/writes anywhere from 50-70 MB/s at times, during the beginning of segments and during final assembly. That is a max; disk reads/writes are lower the rest of the time. Gigabit is always maxed at 115-120 MB/s. This is a common scenario of, say, ProRes 422 to H.264, using automatic file sharing. Now, go back a couple of years and replace the minis with Mac Pros or Xserves with 4-6 instances per machine. And instead of 4 machines, move to something modest for a production environment, like 10. And maybe they used MPEG-2 instead of H.264, and Uncompressed 8/10-bit or one of the many camera codecs instead of ProRes. Anyway, it adds up when you scale up. Fast storage used to be a big deal, but now even the 5400 RPM drive that came with my mini, which I tossed into a USB3 case, can do 100 MB/s.
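The scaling argument is easy to make concrete (a sketch; the per-machine peaks and the gigabit ceiling are the figures above, while the ~30 MB/s per-instance figure is an assumption derived from them, and the 10-machine scenario is the hypothetical one described):

```python
# How aggregate read/write demand on shared storage scales as a cluster grows,
# and why it saturates a single gigabit link long before it stresses the disks.

def aggregate_demand_mb_s(machines: int, instances_per_machine: int,
                          per_instance_mb_s: float) -> float:
    """Total bandwidth the shared storage host must sustain at peak."""
    return machines * instances_per_machine * per_instance_mb_s

GIGABIT_CAP_MB_S = 118   # ~115-120 MB/s observed ceiling quoted above

# Today's small cluster: 5 machines x 2 instances; 50-70 MB/s peaks per machine
# imply roughly 30 MB/s per instance (ASSUMED split).
small = aggregate_demand_mb_s(machines=5, instances_per_machine=2, per_instance_mb_s=30)

# The older scenario described: 10 Mac Pros / Xserves at 4-6 instances each,
# with heavier codecs (MPEG-2, uncompressed 8/10-bit) only pushing demand higher.
big = aggregate_demand_mb_s(machines=10, instances_per_machine=5, per_instance_mb_s=30)

for label, demand in [("small cluster", small), ("older production cluster", big)]:
    verdict = "network-bound" if demand > GIGABIT_CAP_MB_S else "fits in one gigabit link"
    print(f"{label}: {demand:.0f} MB/s peak demand -> {verdict}")
```

In both cases the peak demand exceeds what one gigabit link can deliver, so the link saturates first; storage faster than ~120 MB/s buys nothing until the network gets faster.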
