I stumbled across this topic looking for something unrelated, and I found it rather humorous that, as an apparently professional audio producer, you somehow didn't make your own frequency response plot in 8 months, given how crucial you say this information is.
Knowing the actual frequency response of the speaker is of limited value anyway, because one of its well-known flaws is that it aims at nothing, firing out of the back of the device. Are you using a reflector? Is your official case folded behind it, which reflects sound forward? Both of those change the "frequency response" of the speaker.
So even if you had a perfect plot of what the speaker did, what exactly would you do with it? EQ your audio tracks differently? Without dictating the listening environment and the reflections around the speaker, it doesn't matter.
And furthermore, anyone who actually cares about sound on an iPad uses better headphones or speakers. Developing for the obviously limited, midrange-scooped default iPad speaker, which is pointed nowhere and further degraded by poor placement, is ridiculous, no different from mixing for the stock Apple earbuds.
Finally, I'm not exactly sure how your workflow for a device doesn't include rapid prototyping on that device. You'd obviously want to demo everything on it, even if you applied some sort of "time saving EQ template" first.
Is it really that difficult to play reference signals through the speaker once, see what attenuation and reinforcement is happening, and craft a chart? It would be wasted time regardless, because if you moved a reference microphone to any number of typical listening positions, the curves would be completely different.
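For what it's worth, the "one time" measurement I'm describing is a standard swept-sine transfer-function estimate, and it's trivial to script. Here's a minimal sketch in Python with NumPy/SciPy; the band-pass filter standing in for the speaker-plus-room path is purely a hypothetical placeholder, since the whole point is that the real path changes with every placement:

```python
import numpy as np
from scipy import signal

fs = 48000  # sample rate in Hz

# Logarithmic sine sweep, 20 Hz to 20 kHz over 2 seconds: the reference pattern.
t = np.arange(0, 2.0, 1 / fs)
sweep = signal.chirp(t, f0=20, f1=20000, t1=2.0, method="logarithmic")

# Hypothetical stand-in for the speaker + placement: in reality this would be
# the signal captured by a reference microphone at one listening position.
b, a = signal.butter(2, [200, 8000], btype="band", fs=fs)
recorded = signal.lfilter(b, a, sweep)

# H1 transfer-function estimate: H(f) = S_xy(f) / S_xx(f) via Welch averaging.
f, Pxy = signal.csd(sweep, recorded, fs=fs, nperseg=4096)
_, Pxx = signal.welch(sweep, fs=fs, nperseg=4096)
H = Pxy / Pxx
mag_db = 20 * np.log10(np.abs(H) + 1e-12)  # this is the "frequency response plot"

i_1k = np.argmin(np.abs(f - 1000))  # passband bin
i_50 = np.argmin(np.abs(f - 50))    # rolled-off bin
print(f"1 kHz: {mag_db[i_1k]:.1f} dB, 50 Hz: {mag_db[i_50]:.1f} dB")
```

Repeat the capture at a handful of positions and overlay the `mag_db` curves, and you'll see exactly my point: the plots disagree with each other far more than any of them agrees with a spec sheet.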
Or is your plan to magically make the best mix for someone holding the speaker pointed away from them AND two inches above their lap? Because you, with your gear and experience, should know that in that case you mix for accuracy, so that no single listening environment is intentionally favored.