I respectfully disagree with the assertion about computational processing. I think it could significantly reduce, or even eliminate, the problem, and it wouldn't have to be an always-on feature unless the user chose it. Most of the green spaceships, green rubber ducks, and green flares I see are translucent, consistently colored, and sit at consistent angles from the light source. Even without factoring in angles, accelerometer data, or light-source position, the green pixels alone are often enough signal to identify the problem area, and for small points and diffuse patches you can automatically compute good color replacements from the surrounding pixels.

Sure, huge flares are harder, but in all of my images, and in many posted here, these points are consistent and small, and could be corrected better than a person could manage in post. Mixing in heavier processing that accounts for light-source location, camera pitch, and other angles would indeed be intensive and shorten battery life, but again, it doesn't have to be always on, and that geometry could instead be used to narrow down probable artifact regions, reducing how much of the image needs processing and speeding things up in the live preview.

If it were an option, I'd wager most consumers would leave it always on, given how pronounced these artifacts are. Prosumers could disable it and work their magic with angles and aesthetics.
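To make that concrete, here's a minimal sketch of the kind of pass I mean, assuming OpenCV; the function name and the thresholds (`green_margin`, `max_area`) are illustrative guesses on my part, not tuned values:

```python
import cv2
import numpy as np

def remove_small_green_flares(bgr, green_margin=40, max_area=400):
    """Inpaint small, strongly green-dominant blobs from surrounding
    pixels. bgr is an 8-bit OpenCV image (B, G, R channel order)."""
    channels = bgr.astype(np.int16)  # avoid uint8 overflow in the diffs
    b, g, r = channels[..., 0], channels[..., 1], channels[..., 2]

    # Flag pixels where green clearly dominates both other channels.
    mask = ((g - r > green_margin) & (g - b > green_margin)).astype(np.uint8) * 255

    # Keep only small connected components: the point-like artifacts
    # described above, not large green objects like foliage.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    flare_mask = np.zeros_like(mask)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] <= max_area:
            flare_mask[labels == i] = 255

    # Grow the mask slightly so inpainting also covers the soft halo,
    # then fill the flagged regions from their neighborhoods.
    flare_mask = cv2.dilate(flare_mask, np.ones((5, 5), np.uint8))
    return cv2.inpaint(bgr, flare_mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```

A detection-only variant of this (just the mask, no inpainting) would be cheap enough to run in the live preview to highlight probable artifact regions, with any heavier geometry-aware pass reserved for the actual capture.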
I guess I'm thinking along these lines because I've recently been tinkering with machine learning code, which can easily identify objects in my scenes. Identifying these green flares should be trivial in comparison.
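For a sense of scale, running an off-the-shelf pretrained detector is only a few lines these days; something like the following (torchvision's Faster R-CNN is just one example, and `detect_objects` is a name I made up) is the baseline I'm comparing against:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained COCO object detector; "DEFAULT" selects the current best weights.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(image, score_threshold=0.5):
    """image: PIL Image or HxWxC uint8 ndarray. Returns bounding boxes
    and class labels for detections above score_threshold."""
    with torch.no_grad():
        pred = model([to_tensor(image)])[0]
    keep = pred["scores"] > score_threshold
    return pred["boxes"][keep], pred["labels"][keep]
```

If general object detection is that accessible, a saturated green blob with a consistent color signature shouldn't even need a learned model; the color threshold in the earlier sketch already does most of the detection work.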