
Rewrite Delivering Images from Subdomain

To keep up with the demand for faster page loads, Google recommends setting up a content delivery network (CDN). Of course they want people to use their servers, but for my small site, I see no reason to do so. My Google PageSpeed score is around 72, and I'm told that can improve if I set up a series of subdomains to parallelize my content delivery. I don't know if any of this will work, but it's worth trying if only to see real-world results.


Here's what I have:

OS X 10.9.2

OS X Server 3.1


Website running Wordpress 3.8.1


I know there are plugins for WP that can create self-hosted CDNs, but they bring a world of problems with them. I can't see pulling out what little hair I have left just to gain another 10 to 15 points on Google PageSpeed. Yes, they're that complicated and frustrating; I've tried just about every one of them. Besides, I would like to run this setup in a .conf file and not in the usual .htaccess.


What I've set up are three subdomains for the main objects on a typical web page:


Images: speed-img.mydomain.com

JavaScript: speed-js.mydomain.com

CSS: speed-css.mydomain.com


All subdomains are registered as a CNAME with my DNS, and all the aliases have been established on the server.


The idea is to rewrite the page request to pull from these subdomains: just change www.mydomain.com/images to speed-img.mydomain.com.


To make matters a little more complicated, I'm running an ecommerce site and SSL is required, so image requests on the secure side will have to fall back to www.mydomain.com, because the SSL certificate is assigned only to that address.
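A minimal sketch of the rule described above, assuming a vhost-level .conf with mod_rewrite loaded and that speed-img.mydomain.com serves the same DocumentRoot. Note that a server-side rule like this can only issue an external redirect, which costs the browser an extra round trip per image; the parallelization benefit Google describes requires the HTML itself to reference the subdomains.

```apache
# Sketch only — hostnames are the ones from this thread; adjust to taste.
<IfModule mod_rewrite.c>
    RewriteEngine On
    # Leave HTTPS requests alone: the certificate only covers www.
    RewriteCond %{HTTPS} off
    # Redirect image requests to the parallelized subdomain.
    RewriteRule ^/images/(.*)$ http://speed-img.mydomain.com/images/$1 [R=301,L]
</IfModule>
```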


Once we have this all done and tested, we (the Apple community) can set it up as a .conf file for anyone to use in their config/apache2/sites directory.


Will you help? I'll test any configuration on a live production site, and report the results.

iMac (21.5-inch Mid 2011), OS X Mountain Lion (10.8.3)

Posted on Mar 26, 2014 12:48 PM

8 replies

Mar 26, 2014 5:04 PM in response to theibel

Pulling stuff in from different domains here will likely have negligible effects on your performance.


What's your network connection? That's a very common issue here, whether due to lower bandwidth, higher latency, or both. Run a traceroute to some sites and see what the differences are — I've been on some ISPs that have had half-second latencies in some of their routing, and worse.


CDNs work by having massive, geographically distributed, low-latency bandwidth across multiple servers. It's not the subdomains themselves doing the work: the subdomains simply point at geographically distributed servers, which lets the content be relocated off of the slower and more distant web servers.


The general approach you're reading about involves re-hosting your content off of your server and off of your existing network connection, and also moving toward WordPress or CDN-based page caching of your data rather than running through all the PHP necessary to render a WordPress page. Any content management system running on Apache is going to be slower than a static site hosted on nginx or on a CDN-hosted site.


Got a faster (hosted) web server available somewhere? You can put your content there, and reference that server (quite possibly in a subdomain of your domain) from within your main site. But if you've got a faster server with a higher-bandwidth, lower-latency network connection, it's usually easier to move the whole web site and all the content over there while you're at it.


After working through performance optimization and page caching (and certainly consider different and faster content management systems as part of this), it'll probably be easier to re-host at DigitalOcean, Amazon (either hosting or CloudFront), Linode, or any of the other CDN, hosting, and VPS providers around, whether via something like Varnish (you probably don't have that installed... yet?) or Apache rewrite rules.

If you've got eCommerce requirements involved, that usually adds PCI DSS and related security requirements, which means some functions and some content might not be hosted via CDN, or you'll be paying for hosting that'll survive a PCI audit. If you're running with a CDN and/or VPS, then you'll be using certificates that work for multiple subdomains, multiple domains, multiple addresses, or some combination. (Adding CDNs and adding domains means same-origin issues can be in play here, too.)


Chat with the WordPress folks and specialists — they're the ones that most often deal with WordPress web site performance issues and CDNs, and with getting WordPress and PCI requirements to coexist. You're probably looking for somebody that specializes in these set-ups. (Very little of what's discussed in this question is even specific to OS X Server; none of my reply involves it.)

Mar 26, 2014 8:41 PM in response to MrHoffman

I asked this question here because I'm specifically running my website on my OS X server. We don't have a typical Apache setup, and for small businesses like mine, the extra cost for a CDN is not worth the reward. The off-site distribution is nice, but that's not what I'm after. My website is hosted on the computer I'm using right now, and I've done a lot to maximize its speed. The problem I'm having, however, is beyond me.


The goal is to maximize our Apple servers, and not worry about latencies and bandwidth. My bandwidth is unlimited, and my speed is 30Mbps. Benefits of living in Bristol, TN 😉 I'll never use all the bandwidth, and the speed is nice.


This setup would be a big benefit to intranet networks as well. From my reading, it also addresses a problem typical of linear (sequential) page loading: the render blocking encountered on all websites.


Mr. Hoffman, if anyone has a good handle on how to set this up, you do. I've read your posts many times. I want to maximize everything my Apple server and HTML have to offer, and Google tells me that parallelizing my website is a great way to do it.


Will you help me out?

Mar 27, 2014 6:19 AM in response to theibel

Don't confuse available bandwidth (30 Mbps or whatever) with the network latency.


For a responsive web site, latency is the key measure, so long as your bandwidth is appropriate for the size of the pages (and images) you're tossing around. This is where tools such as traceroute help — I've seen a half-second delay in some ISP routers and occasionally more, and there's zilch I can do about that beyond reporting it, but that delay inherently factors into the web page responsiveness that the users and the crawlers encounter.


To use an automotive analogy for network bandwidth and latency, folks usually buy horsepower, but love torque.


To use an analogy more typical of broadband in parts of New England in the United States of America, satellite broadband networking can provide good bandwidth, but the latency of the roughly 71,600 km round trip via geostationary orbit contributes mightily to the technology's unsuitability for networking tasks requiring lower latency. (Which points to another factor when creating and operating your web site: if your audience is in low-bandwidth areas or is heavily using mobile devices, serving big pages and big images is a bad idea. But I digress.)


I've read a fair amount about performance optimizations and search engine optimization and the rest over the years (and Google and others have some reasonable resources here), and decided that I'd ignore most of what's published (particularly anything around the Get Links Fast genre), and get to work creating quality, unique, searchable, current content. Using content-appropriate keywords here can help folks more easily find your content, too. This is how you get readers and get good links, and it's the core factor in any search ranking.


Don't forget to test with Bing and the other search engines, as more than a few of us are finding that Google has an enviably massive corpus, but increasingly poor and increasingly gamed search ranking.


For lower-level performance details (and particularly around page responsiveness quests), use the available browser-based tools to measure the performance of your site, such as the Safari developer menu and its page performance profiling or other similar tools in other browsers.


For a fast site, fewer and smaller and better-compressed images are a win.


Zipped pages can be a win, if you have more CPU than network.
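A minimal sketch of the "zipped pages" tradeoff, assuming mod_deflate is loaded (it ships with Apache on OS X Server); only text-like responses are worth compressing, since images are already compressed:

```apache
# Sketch: trade CPU for bandwidth by gzipping text responses.
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css text/plain
    AddOutputFilterByType DEFLATE application/javascript application/json
</IfModule>
```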


Work to profile, locate, and remove the slower bits. Over the years, I've found various surprises while profiling pages, whether it was a glacially slow disk serving content, "under-performing" JavaScript, a maladroit CMS add-on module, generic logging activity hampered by slow (and technically entirely unnecessary) DNS back-translations within the logging configuration, or an image somewhere on the web page that was unnecessarily huge.


Pre-generated web page caches are usually a win, as they avoid the majority of the overhead and the database activity involved in CMS page generation. If your CMS doesn't have a decent static-page caching capability, either find or create an extension that works, or migrate to a CMS that's faster — remember that the goal of using a CMS is to make your work easier, but your overarching goal here is to quickly and efficiently serve your content to your readers. Pick your tools appropriately.
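The pre-generated cache idea above can be sketched as a rewrite that serves a static copy when one exists and falls back to the CMS otherwise. This is an illustrative pattern, not a specific plugin's rules: the /cache path and the cookie name are assumptions you'd adjust to your own caching setup.

```apache
# Sketch: serve a pre-generated static page if it exists on disk.
<IfModule mod_rewrite.c>
    RewriteEngine On
    # Only plain GETs with no query string and no logged-in cookie.
    RewriteCond %{REQUEST_METHOD} GET
    RewriteCond %{QUERY_STRING} ^$
    RewriteCond %{HTTP_COOKIE} !wordpress_logged_in [NC]
    # If a cached copy exists, serve it instead of invoking the CMS.
    RewriteCond %{DOCUMENT_ROOT}/cache%{REQUEST_URI}/index.html -f
    RewriteRule ^ /cache%{REQUEST_URI}/index.html [L]
</IfModule>
```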


You'll also want tools to monitor and flag web outages and network disconnections; viable, tested, restorable backups (preferably offsite, but take what you can get); and somebody monitoring for the inevitable "that's weird" moments that can point to performance or stability problems, backup failures, database corruption, or site hacks — WordPress and most other CMS packages tend to be subject to these, and that's ignoring the current pingback mess — plus the necessity of keeping WordPress and MySQL and the rest current.


CDNs? Yeah. Definitely look at those once you've got the rest of the chain under control, or when your own big-box server is bogging under the load, or when you've got a special case such as serving large files or video — when your network pipe becomes the issue. If you're approaching this stage, you probably have somebody specifically targeting this performance, tuning, and re-hosting work, too, and you'll probably also be partially migrating to a hosting service provider or CDN, potentially in addition to your own geographically distributed data centers.


In short: don't get ahead of yourself with scaling issues you don't (yet) have; learn where the issues that you do have are lurking, and then weigh the usual tradeoffs involving how much time, effort, and cash it'll cost to remove them.

Mar 27, 2014 6:41 AM in response to MrHoffman

I understand your points, and I agree there are always some good choices for better performance. Latency can also be created directly on a web server, and reducing my server response time is my primary goal.


The overhead of a complicated .htaccess configuration can create more latency than any network. I already have .conf files established for image compression, browser caching, and a minimal .htaccess configuration. What I need to do is eliminate the .htaccess traversal (the per-request lookup that creates the latency) and configure this at the server level. Make it more straightforward, instead of the constant backtracking.
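Eliminating the per-request .htaccess lookup comes down to one directive once the rules live in the vhost .conf. A sketch, assuming the OS X Server default site path (adjust the Directory path to your actual DocumentRoot, keep your existing access-control directives, and restart Apache after any change, since .conf files are read only at startup):

```apache
# Sketch: stop Apache from checking for .htaccess on every request.
<Directory "/Library/Server/Web/Data/Sites/Default">
    AllowOverride None
</Directory>
```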


You're definitely correct about caching web pages, and my original question actually tackles this issue. When cookie-less, cached images, JS, and CSS are delivered across three parallel connections (subdomains) instead of the usual www, there's a definite speed and latency improvement.
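The "cookie-less" part of this can be sketched with mod_headers inside each speed-* subdomain's vhost. This is an assumption about the setup described here, not a quoted config from the thread:

```apache
# Sketch, assuming mod_headers is loaded: keep the static subdomains
# cookie-free so every image/JS/CSS request stays small and cacheable.
<IfModule mod_headers.c>
    # Strip any cookie the backend tries to set on static responses.
    Header unset Set-Cookie
    # Ignore cookies the browser sends with these static requests.
    RequestHeader unset Cookie
</IfModule>
```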


Google publicized these build strategies around 4 years ago, and I think this is something Apple should have incorporated into their Server.app. After all, hosting websites is the primary purpose of having a server, whether it's an intranet or an internet. Even an intranet can benefit from what I'm looking for.


What we need are people with exceptional Apache mod-rewrite skills. I've looked for days trying to find a solution, and unfortunately, everyone's solution just piles on more latency in one form or another.


It doesn't matter if someone's running Wordpress, Magento, or a custom solution. We all have the same issues without the proper .conf files to make content delivery more efficient. I know this is the answer, and I'm sure that someday in the future, Apple will incorporate this into the Server.app GUI. So let's just get started today, and hopefully have a solution soon.


Any idea who could help us with this?

Mar 27, 2014 8:55 AM in response to theibel

Google publicized these build strategies around 4 years ago, and I think this is something Apple should have incorporated into their Server.app. After all, hosting websites is the primary purpose of having a server, whether it's an intranet or an internet. Even an intranet can benefit from what I'm looking for.


From what little I can tell of it, Apple is not in the high-performance web server market. Based on what's happened with OS X and OS X Server since Snow Leopard Server, Apple is clearly headed away from that market, too. The controls available with Mavericks are much simpler and far easier for most folks to deal with, and are almost inherently and correspondingly much less flexible than what was typical in Snow Leopard with Server Admin.app and related tools.


If you're into the high-performance web-serving market, you're probably also looking to run nginx or maybe lighttpd rather than Apache, and/or placing Varnish or similar out in front.


What we need are people with exceptional Apache mod-rewrite skills. I've looked for days trying to find a solution, and unfortunately, everyone's solution just piles on more latency in one form or another.


That's the expected outcome — processing those rules inherently takes CPU and I/O time, and a page assembled in that way can also interfere with caching efficiency. You're also retargeting the I/O and the tasks at the same infrastructure: the same network link, same web server, same disk.


Place your content on a CDN and access it via DNS for best speed — and this is not DNS that's redirecting stuff around on the local server.


I have a fair number of rewrite rules for dealing with various situations, and the combination of those rules and the overhead of a CMS inherently makes for a somewhat slower web site.


Were I optimizing for speed here, I'd aim the DNS subdomain translations for parts of the domain at servers located on (much) faster links, and code the HTML to reference those — the HTML pages are assembled and rendered in the browser, not in the server, after all, and pulling in big images from a CDN or a similarly advantageous network position is a win. I'd probably also replace the OS X Server and Apache here with a box running Red Hat Enterprise Linux (RHEL) or Scientific Linux (SL) and nginx — the network I/O throughput on RHEL/SL can be staggeringly good, in my experience.


The only time I'd follow the path of using DNS or rewrite rules to point references to different parts of the same web server is as a prerequisite to rehosting (at least some of) the content to a CDN or such.


Run some prototypes and see what the typical access degradation is for adding different sorts of rewrite rules; profile the access time. Tweak your rules, and then re-run JMeter, Pylot, ZuCom, or other such tools.


But in any case, I'll bow out here.

Mar 27, 2014 1:55 PM in response to MrHoffman

I agree I had a lot more flexibility on my Linux boxes, but ever since Apple came out with Server.app, I think they've been trying to capture that market. I think Apple's decision was genius, and I see it producing some very loyal customers.


As for high performance, anyone can boost the speed of their website with just a few tweaks, and it doesn't take much to do it.


Compress files using Apache DEFLATE

Serve proper size images

Browser Caching

Combine javascripts

Even if you don't want to combine javascripts, you can always add the attribute "async" or "defer", and see a nice speed improvement

Combine CSS

Host external files locally, especially JavaScript: just pull them in and combine. If that's too much, just defer your external JavaScript links.


This short list will likely put just about any website in the 70/100 speed range. Not bad, but plenty of room for improvement. Even Google.com comes in at only 74/100 for mobile and 90/100 for desktop.
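The "Browser Caching" item on the list above is the one most easily expressed in a shared .conf file. A sketch, assuming mod_expires is loaded; the lifetimes are illustrative, not recommendations from this thread:

```apache
# Sketch: send far-future Expires/Cache-Control headers for static assets.
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png  "access plus 1 month"
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType text/css   "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```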


I did most of the mods above in my web/config/apache2/sites folder so they can be used ubiquitously across any and all websites on my server (I have two right now).


The subdomains are obviously set up in the vhost, and the specific rewrites can be pointed at specific domains. This can all be done with the .conf files, not in .htaccess.
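For completeness, the vhost half of this might look like the following sketch, one block per subdomain. The DocumentRoot shown is an assumption (the OS X Server default site path); pointing all three subdomains at the same directory is what lets the same files be fetched over parallel connections:

```apache
# Illustrative vhost for one of the three subdomains; repeat for
# speed-js.mydomain.com and speed-css.mydomain.com.
<VirtualHost *:80>
    ServerName speed-img.mydomain.com
    DocumentRoot "/Library/Server/Web/Data/Sites/Default"
</VirtualHost>
```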


Do you know anyone that could put this together?

Mar 31, 2014 7:41 AM in response to theibel

Anyone? Am I the only one interested in making this work? How about someone from Apple?


This is an issue that must be tackled, and I can't understand why it's not being addressed. This is a Google/Website issue that's not going away, and it's a problem for everyone.


We need a bad Apple emoticon 😉 so I can throw it.

Apr 1, 2014 9:20 AM in response to theibel

So after some more research, and emailing people who have far more experience with this than me, it turns out that neither the .htaccess nor the .conf files will make any significant improvement in website speed. So my next obvious question would be: what's the point?


I inevitably went with Mark Kubacki's CDN Linker plugin, because it simply works, and provides the subdomain links Google wants to see. So is this all Google wants?


Why would Google want to see websites served across multiple subdomains, parallelized, if there's really no speed benefit? I mean my site might have gained 1 point in speed. Big deal!!!


Is Google just trying to separate the men from the boys by forcing an implementation that delineates who's playing ball and who's not? It sure looks that way. I guess that's one way to prioritize search rankings.


While this can be done in Apache, PHP, and other means, it's only worth it for the search rankings, not the site speed. Mr. Hoffman, thank you for all your input; hopefully this saves someone else from spending any serious time on these site-speed enhancements. Make sure your site loads across multiple subdomains if you care about Google search rankings, but don't expect any speed miracles, because they're not going to happen.
