I wonder why they changed the previous approach (if indeed they have). Being able to delete the last failed backup was a great way of getting it going again, but this way you are simply stuck. What is more:
1. There is no indication to the user that a fault like this has occurred.
2. Time Machine needs to be much more robust in situations like this. If it can fall over so easily and the backup becomes unusable (often at the very time you depend upon it most) then it is close to useless. It should be designed so that whatever happens you can always get back to a previous valid backup.
In one case where I know the history leading up to the failure, the Mac's hard drive had become faulty in a way (not uncommon) where it still just about worked, but extremely slowly (presumably every access required countless retries before the data came through cleanly). Eventually a disk access would fail and the system would hang. Once the hard drive was replaced, the Mac leapt back into life, only needing to have its data restored.
However, while the system was running poorly it was still trying to back up via TM, and I guess that on at least one occasion it hung part-way through a backup. Faults like this, for this or other reasons, are not rare; they must happen quite often, and TM should be robust enough to cope, designed from the ground up so that one faulty backup session cannot destroy the results of all the previous backup sessions.
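The property being asked for here is a standard one in storage design: never modify the published backup in place, and make the final "publish" step atomic, so a session that hangs or crashes leaves earlier snapshots untouched. As a minimal illustrative sketch only (not how Time Machine actually works internally, and `commit_snapshot`/`write_snapshot` are hypothetical names), the pattern looks like this in Python:

```python
import os
import shutil
import tempfile

def commit_snapshot(backup_root, snapshot_name, write_snapshot):
    """Build a new snapshot in a scratch directory, then publish it
    with one atomic rename. A crash or hang mid-session leaves only
    an orphan scratch directory; earlier snapshots are never touched."""
    os.makedirs(backup_root, exist_ok=True)
    # Scratch dir on the same volume, so the final rename is atomic.
    tmp_dir = tempfile.mkdtemp(dir=backup_root, prefix=".inprogress-")
    try:
        write_snapshot(tmp_dir)           # caller copies files in here
        final = os.path.join(backup_root, snapshot_name)
        os.rename(tmp_dir, final)         # atomic on POSIX: all or nothing
        return final
    except BaseException:
        # Session failed before publishing: discard the partial work.
        shutil.rmtree(tmp_dir, ignore_errors=True)
        raise
```

With this shape, a backup session that dies while `write_snapshot` is running can only ever lose its own partial data; the previous valid snapshot remains fully intact and restorable, which is exactly the guarantee the paragraph above argues a backup tool must provide.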
I know that achieving this in every circumstance is impossible, but at the moment it feels like TM is far more fragile than it really needs to be.