H.264 Anamorphic encoding and clean aperture

Recent versions of Compressor support non-square-pixel, or "anamorphic," encoding the same way DVDs have been encoded for years: video is stored at 720x480 and resized at playback to the correct display size (640x480 for 4:3, 853x480 for 16:9). I think this is the best way to encode non-square-pixel video originating from DV or Digibeta, since it preserves all of the original resolution without any resizing. I hear this is also how Apple delivers a lot of its new SD features on iTunes. My question: where is the clean aperture support?
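To make the playback-resize concrete, here is a minimal sketch of the arithmetic I mean (the helper name is mine, not anything from Compressor): the stored frame stays 720x480 and the display width is just the height times the intended display aspect ratio.

```python
from fractions import Fraction

def anamorphic_display_width(storage_height, display_aspect):
    """Width the player stretches a stored anamorphic frame to at playback
    (the DVD-style approach described above)."""
    return round(storage_height * display_aspect)

# 4:3 playback of a 720x480 frame -> 640 wide
width_43 = anamorphic_display_width(480, Fraction(4, 3))
# 16:9 playback of the same frame -> 853 wide (853.33 rounded)
width_169 = anamorphic_display_width(480, Fraction(16, 9))
```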

It is a very common misconception that to convert 720x480 video to square pixels one simply resizes to 640x480. The correct size is in fact 655x480. Apple implemented the correct resize procedures starting with QuickTime 7.1; they can be found under the "Presentation" tab in "Movie Properties."

A lot of professionally sourced standard-def content (from a Spirit telecine, for example) contains active picture only within 704x480 pixels of the 720x480 frame, the rest being blanking ( http://en.wikipedia.org/wiki/Nominal_analogue_blanking ). When the entire 720x480 (or 486) frame is viewed on a computer display, thin black bars (approximately 8 pixels wide each) appear on the left and right sides of the video. Apple supports aperture settings on a lot of its intermediate codecs, such as DV, ProRes 422, and Uncompressed 4:2:2. A good way to see this in action is to open the Movie Properties of a DV clip, click the Presentation tab, and cycle through the various "Conform aperture to:" settings. "Encoded Pixels" does just that, showing the video at 720x480; "Production" converts to square pixels, resulting in a frame size of 655x480; and "Clean" crops the extra pixels off the sides, resulting in the standard frame size we all recognize, 640x480.
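The three aperture modes above can be sketched as a few lines of arithmetic, assuming the commonly cited NTSC 4:3 pixel aspect ratio of 10/11 and 16 total pixels of horizontal blanking (the function and its mode names are my own illustration, not QuickTime API):

```python
from fractions import Fraction

# NTSC 601 4:3 pixel aspect ratio, as commonly cited for production aperture.
PAR_43 = Fraction(10, 11)

def conformed_size(width, height, mode, par=PAR_43, blanking=16):
    """Presented frame size for a QuickTime-style 'Conform aperture to' mode."""
    if mode == "encoded":
        # Show the storage pixels as-is: 720x480.
        return width, height
    if mode == "production":
        # Scale the full stored width by the PAR: 720 * 10/11 = 654.5 -> 655.
        return round(width * par), height
    if mode == "clean":
        # Drop the blanking first (720 - 16 = 704 active), then scale:
        # 704 * 10/11 = 640 exactly.
        return round((width - blanking) * par), height
    raise ValueError(mode)
```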

Now, having said all that, encoding 720x480 video to 640x480 in Compressor seems to disregard the concept of clean aperture entirely. You are left with a 640x480 image WITH the extra padding on the sides; not only is this annoying, it is also WRONG, as the aspect ratio is slightly distorted. And if you try to crop those 16 pixels manually, you end up with an offbeat resolution of 626x480, which is not cool. Anamorphic encoding in Compressor likewise plays back at 640x480 with no cropping, and the Clean and Production aperture settings give identical results. So my question is: why this wrong and highly overlooked behavior?
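A quick sketch of where that awkward 626 comes from, assuming the same 10/11 PAR and 16 pixels of blanking as above: cropping before the square-pixel resize gives a clean 640, but cropping after a full-width 720-to-640 resize shrinks the 16 blanking pixels by the same resize factor, leaving roughly 626.

```python
from fractions import Fraction

PAR = Fraction(10, 11)   # NTSC 4:3 pixel aspect ratio (assumed)
BLANKING = 16            # total horizontal blanking pixels (assumed)

# Correct order: crop the blanking first (720 -> 704 active),
# then scale to square pixels: 704 * 10/11 = 640.
crop_then_scale = round((720 - BLANKING) * PAR)

# What a 640x480 preset effectively does: scale the full 720 width,
# blanking included. Cropping the 16 encoded pixels afterwards removes
# only 16 * (640/720) = ~14.2 display pixels, leaving ~626.
scale_then_crop = round(640 - BLANKING * Fraction(640, 720))
```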

By the way, the concept of clean aperture, despite having existed since virtually the birth of digital video, has only recently been widely recognized, with giants such as Adobe implementing it in software such as After Effects as late as CS4 ( http://help.adobe.com/en_US/AfterEffects/9.0/WS3878526689cb91655866c1103906c6dea-7f3aa.html ). And I quote: "After Effects CS3 and earlier used pixel aspect ratios for standard-definition video formats that ignore the concept of clean aperture. By not accounting for the fact that clean aperture differs from production aperture in standard-definition video, the pixel aspect ratios used by After Effects CS3 and earlier were slightly inaccurate. The incorrect pixel aspect ratios cause some images to appear subtly distorted."

How could this have happened, and why does it continue to be ignored to this day?

Message was edited by: Daniel Grey

Core 2 Duo iMac, Core 2 Duo MacBook Pro, Mac OS X (10.5.1)

Posted on Oct 30, 2008 8:39 PM

