While it's true Apple uses some computational processing in its photography wizardry, that's more about capturing huge amounts of data in low light and piecing together photos with clarity unheard of until very recently. But for Apple to decide what should or shouldn't be in a picture, and to remove data the lens sends to the sensor, would not, in my opinion, be a good option. What if the camera sees things and removes them when you didn't want them removed? And that CAN include flares in some situations. Personally, I don't want the camera making such decisions for me.
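For anyone curious what that multi-frame trick looks like at its simplest, here's a toy sketch (not Apple's actual pipeline, just plain frame averaging in Python with made-up names like stack_frames) showing why stacking several noisy low-light exposures cleans up an image without deciding what "belongs" in the scene:

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of aligned, noisy exposures.

    Toy illustration only: a real computational pipeline also aligns
    frames, rejects motion, and tone-maps. Averaging N frames cuts
    random sensor noise by roughly sqrt(N), but it never removes scene
    content -- flares and all stay in the result.
    """
    return np.mean(np.stack(frames, axis=0), axis=0)

# Simulate a burst of 8 noisy low-light frames of the same dim scene.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 10.0)                              # "true" scene values
burst = [scene + rng.normal(0, 5, scene.shape) for _ in range(8)]

merged = stack_frames(burst)
print("single-frame noise:", np.std(burst[0] - scene))
print("stacked noise:     ", np.std(merged - scene))
```

The point of the toy is that this kind of processing reduces noise without editing content, which is a very different thing from the content-aware removal I'm arguing against.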
What they need to figure out is how to let enough light reach the sensor while redesigning the lenses or the lens surrounds to minimize artifacts, and that's not an easy challenge given how tiny the lenses in a phone are. On a DSLR, you would use a lens hood to minimize artifacts from very bright subjects. But you would NEVER use a lens hood at night, as it would reduce the amount of light hitting the sensor, and to be honest, DSLR cameras don't have the magical ability to do what an iPhone can do at night unless you have the camera on a tripod.
We're dealing with limitations here. And I do get that many here feel let down by some of the results they're getting. But blaming Apple or the camera is not the answer. We have to learn how to use the photographic tools we have in our hands to take the best images we can, and accept that some situations are going to result in recorded artifacts we don't want.