Except that it ISN'T a developer issue; it's an ARCHITECTURE issue. Developers have no choice but to code according to the User Requirements Specification, Software Requirements Specification, and Software Design Specification. (I'm using generic document terms here from my own time in software development, but I'm certain that Apple follows similar processes -- all large-scale development does.)
We know why they did it: the guys who architected the behaviour mimicked the virtual desktop paradigm as implemented on iOS. I fully expect that the architects of the design came from the iOS team and have little use for, or experience with, multiple monitors in an extended desktop. The implementation, as we know, is woefully nearsighted -- optimized for a MacBook Air that isn't driving an external monitor. Blanking out secondary monitors was done (obviously, IMO) on the premise that people don't multitask; they task-switch. So if your attention is on a full-screen movie on your external monitor, the light from your other monitor will be a distraction and should be dimmed. (We know that isn't the case, but bear with me here.)
From an Apple programmer's view, you're handed a list of new Apple APIs for the release and told to use them. It's possible that the guys who wrote the Safari upgrades were horrified to see their baby blanking screens and moving to a new virtual desktop in the process. But the problem is that when the Safari (or any internal) software development team is told to implement a feature using a certain OS X API call, that's it; that's all they can do. (See the sketch below for roughly what that looks like from the app's side.)
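To make that concrete, here's a minimal sketch (my own illustration, not Apple's code -- the window title and geometry are made up) of what opting into the Lion-era full-screen API looks like in AppKit, written in Swift for readability. The point is that the app's involvement ends at one call; everything after it is the window server's decision, not the app's:

    import AppKit

    // Hypothetical minimal app, just to show the API surface.
    let app = NSApplication.shared
    app.setActivationPolicy(.regular)

    let window = NSWindow(
        contentRect: NSRect(x: 200, y: 200, width: 800, height: 600),
        styleMask: [.titled, .closable, .resizable],
        backing: .buffered,
        defer: false
    )
    window.title = "Full-screen demo"

    // Opting in: this one line advertises support for the
    // system full-screen behaviour and its window widget.
    window.collectionBehavior.insert(.fullScreenPrimary)
    window.makeKeyAndOrderFront(nil)

    // Entering full screen: the app hands control to the OS
    // here. The slide-away animation, the new Space, and (on
    // pre-Mavericks systems) the blanking of every other
    // display are all system behaviour the app cannot override.
    window.toggleFullScreen(nil)

    app.run()

The old-school alternative -- a borderless window sized to one screen, which never touches Spaces and leaves other displays alone -- still works, but it doesn't get you the sanctioned full-screen UI, so internal teams get pushed toward the blessed call whether they like its side effects or not.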
So, sorry for the long-winded reply, but the only hope anybody has of changing this is to convince Apple's OS X software architects that they really, really don't want OS X to copy iOS. Except that happens to be something Apple now considers a Feature and a Good Thing (tm). My call: we're doomed.