Yes, I checked out your links. Thank you for taking this seriously. The links are not directly relevant to my question; I have hardware/software in my studio that covers your suggestions. I'm not looking to check the frequency of INCOMING audio. I need to know what's lost when full spectrum audio is transmitted via the built-in speaker. What the human ear picks up.
Some posters here seem to be numpties; excuse me if that is not the case, but they seem to be missing the point of the question. I post as an ex recording engineer, and I can appreciate what you mean [I think!! :-)] and what you are trying to achieve.
Hope my contribution helps. I have no axe to grind and hope I have understood your question correctly. You do not state what your audio is to be used for but that really needs to be considered and you will have to make your own judgement. For some uses (see below) the iPad seems to be ideal already! If I tell you what you already know, I apologise in advance.
The frequency curves already mentioned, as you state, have no relevance to playback whatsoever, but may well give a clue to the low-frequency handling capability!
It would be useful, as has been pointed out, to have full specifications of response (not just the frequency range) for the units, plus any other useful tips for developers who wish to produce the best quality audio output. That does not necessarily mean reproducing the full audio range: as will already be evident to some listeners, the iPad, like other instruments, can get away with a low output level in the bass region without too significant a loss. Yes, it's not ideal as a playback device (you will already know that!), but it is good quality for a portable device, and incidentally, so I have been informed, well matched (at least sometimes) as it stands for those who have cochlear implants. Excellent for speech clarity but poor for music in that circumstance.
One would hope that the designers at Apple have made the unit respond best when fed with a standard audio signal, i.e. a flat response with no compression etc. in the original waveform, that sounds great when recorded/played on reference/studio-quality equipment. Who knows?
In the absence of that information, and of knowing how well the DSP processes the sound (bit-rate handling and so on), it will, for simplicity and without a lot of expense and time spent with measurement equipment on the sound output, simply be a matter of trial and error. Trying to 'measure the characteristics' otherwise is not easy, though it is not impossible to get an estimate.
Harking back to the 'good old bad old days', heavy compression and pre-equalisation were used. It is worth experimenting to get empirical results with tracks pre-recorded flat, then with some bass boost and, separately, treble boost, to see what the handling capabilities are before harmonic or other processing distortions occur on playback on the iPad. I suggest a few steps of 5dB boost at 200Hz initially, but with heavy bass cut below that frequency, and then the same at about 10kHz. These will determine the maximum boost you could apply. Do all this while playing back at high levels. It might be useful to use your intended sounds and also a range of others, such as full-frequency-response music and speech, as what sounds OK for one type (artificially enhanced) can sound awful with others. Once you have the maximum capability established, choose pre-equalisation within those parameters that sounds best and balanced.
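If it helps, the stepped-tone experiment above can be sketched in code. A minimal pure-Python example that writes 16-bit WAV test tones at the two trial frequencies in 5dB steps, for playback on the iPad; the file names, durations, and the -15dB starting level are my own illustrative choices, not anything from this thread:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def sine_tone(freq_hz, gain_db, seconds=2.0):
    """Return 16-bit PCM samples of a sine at freq_hz; gain_db is relative
    to full scale (0 dB = full scale, negative values are quieter)."""
    amplitude = 10 ** (gain_db / 20.0)
    n = int(SAMPLE_RATE * seconds)
    return [int(32767 * amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE))
            for t in range(n)]

def write_wav(path, samples):
    """Write mono 16-bit samples to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

if __name__ == "__main__":
    # One reference level plus two 5 dB steps, at each trial frequency.
    for freq in (200, 10000):
        for step, gain in enumerate((-15, -10, -5)):
            write_wav("test_%dHz_step%d.wav" % (freq, step), sine_tone(freq, gain))
```

Play the resulting files back on the device at a high level and listen for where distortion sets in.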
As for compression, it does give the impression of greater loudness, but ideally it is avoided, as the loss of dynamic range becomes tiring for the listener. Digital reproduction and processing does not have the issues associated with older, noisy recording/playback systems, so compression ought not to be used to give the impression of a better S/N ratio. However, as we are not in an ideal world, it may be necessary. Certainly a bit of limiter use on the recording would be good to avoid digital overload. If the sound you are producing is for use in a noisy (or non-quiet domestic) environment, then compression would improve the 'sound to background noise' ratio and therefore clarity; much the same technique was very popularly used (and still is in many cases) on some music for domestic playback on cheap equipment. However, the iPad is of quite good quality and clarity, so go easy on all these corrections.
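For what it's worth, the 'bit of limiter use' mentioned above can be illustrated with a very simple per-sample soft limiter. A real limiter uses lookahead and attack/release smoothing; this sketch (the threshold value is my own arbitrary choice) only shows the basic idea of keeping peaks below digital full scale:

```python
import math

def soft_limit(samples, threshold=0.8):
    """Soft-limit normalized samples (-1..1): pass-through below threshold,
    overshoot mapped through tanh so output approaches but never reaches 1.0."""
    out = []
    for x in samples:
        sign = 1.0 if x >= 0 else -1.0
        a = abs(x)
        if a <= threshold:
            out.append(x)
        else:
            # Compress only the part above the threshold.
            over = (a - threshold) / (1.0 - threshold)
            out.append(sign * (threshold + (1.0 - threshold) * math.tanh(over)))
    return out
```

Anything under the threshold is untouched, so quiet material keeps its dynamics while peaks are rounded off instead of clipping.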
If headphones are used, the variety is too great to take into account: another reason not to over-apply any pre-corrections, since if they all add up the result could be poor.
Hope this helps. Without definitive information (and I have not found any) it's probably best to just use your ears!
I currently use an app called RemoteSound to stream audio from my DAW (a Windows machine, actually) directly to my iDevices over Wi-Fi. There's some latency, and it doesn't work with my ASIO stuff, but I can roughly preview the sound from Sound Forge and the other apps I use for mastering and editing my assets. So... yeah. With RemoteSound, I kind of have my iPad standing up on its Smart Cover, functioning roughly like a reference speaker between my proper reference monitors.
I haven't tried it with my Macbook as my work environment is all Wintel, but perhaps this could be useful.
Thank you for recognizing the issue at hand, but your post is a tad off topic. I read your post as a general mixing and mastering class, which is something I've been doing for 25 years and counting. The issues you present will be taken into account, but this is mainly about the iPad monitoring issue.
I have tried Airfoil and Airfoil Speakers, which do the same as you describe. The "broadcaster" app will tap into the ASIO driver and thus crash my system (Win8 Pro 64-bit, RME Fireface ASIO, Cubase 7, EWQL Hollywood bundle etc.). This is no surprise, as a pro system will not share the ASIO driver - IF you want stability and low latency. And I do.
I can, however, make a chain as follows: DAW audio output -> MacBook audio input -> MacBook Airfoil via WiFi -> Airfoil Speakers on iPad via WiFi, which plays the DAW sound with a couple of seconds' latency through the iPad's built-in speaker. It works, but I tie up a lot of resources.
Glad to see you read my response and found at least a little of what I wrote useful. However, I think you are a bit harsh saying it was off topic! Partially answered only, yes, as I did not address the subsequent questions, and I had added information I thought you might find useful, associated with the problem presented.
You had said originally:
"Regarding composing, mixing and mastering music for iOS app development.
What are the frequency-curves of the iPad and iPhone (family) built-in speakers?"
As this is not available, the pragmatic and practical approach is the next best thing, unless you are going to do it yourself in an anechoic chamber (or pay for it to be done). That would obviously give you a reference curve in those conditions.
You further wrote in response to another post:
"I need to know what's lost when full spectrum audio is transmitted via the built-in speaker. What the human ear picks up." and what I was suggesting is a way to do that for yourself. Perhaps not what you want to do; it is the next best approach; but NOT off topic, eh? However you have now introduced "What the human ear picks up." so let's forget the chamber idea.
I was not trying to teach you to suck eggs, and from what you tell us you should know what can practically be done to achieve your aim - it is perfectly obvious the response curve does not seem to be available, so other approaches need to be put into practice. You tell us you have been doing that work for 25 years. I have only been doing it for 40 years. If you have the capability, then listening to the iPad next to your reference speakers for comparison, and adjusting for what you hear, is the best way to get excellent results. The best transducers, after all, irrespective of response or corrections applied, need to sound good, or it is all a waste. Oops, eggs suck you again!
"Is there a way to hook up an iPad or iPhone as a studio-speaker" was the start of your second question. I assume as you have tried real time writing to and reading from a web page so I'll bypass that, never to be mentioned again.
It does seem that you have just been finding fault with all the suggestions people have made, calling them 'clouding your thread' or 'off topic'. You will get the odd poster who is not helpful or misses the point, unintentionally or by intent, but I humbly suggest it is very poor etiquette to reply as you do at times in this thread, and you should not be surprised if posters feel aggrieved and do not help you.
I'll not cloud you any more trying to produce studio-quality iPad output from a chain of equipment with no lag... Have you tried a biscuit tin and string? Oops, off topic now!
I'm sorry if I have offended you in any way. English is not my first language, and things tend to get lost in translation. Again sorry for that.
I'm not trying to compete with your experience or being disrespectful to your input.
However, I am ONLY trying to get a) hold of the exact response curve (as in speaker-simulation plugins or a UAD EQ preset, which you probably use yourself) or b) an iPad app / breakout box that lets you play any analog audio through the iPad to its internal speaker.
"I assume as you have tried real time writing to and reading from a web page so I'll bypass that, never to be mentioned again." Well, this is a wrong interpretation of what I'm actually asking. I want to use the internal speaker as my "C" speaker. I already have 4 monitors, the A and B sets for so-called "AB'ing". Now I want the iPad speaker as my C monitor. That's all! I want to be able to send ANY analog audio out through the iPad speaker - preferably in reasonably real time - AND with only the iPad's "top secret" internal audio handling (compression/EQ/limiter). This sounds like something that could be done easily if you have OS X programming skills, which unfortunately I don't have.
The only solution I have found that does something close to my needs, ties up a MacBook Pro (a lot of dollars if you only use it as an AD converter) - and the audio gets compressed in the MacBook Pro before aired over WiFi. It gives absolutely audible digital distortion of the signal.
So, to make it short: I compose, play and record music for an app in the "Pixar-animation-genre". In the creative period I wish I had an app to audition my analog signal through the iPad internal speaker. Does this app exist? Anyone?
Sorry for my bad English, but I hope you'll understand the idea. I see one way to do that at the moment. Every device has its own frequency response map, so what you'll do is copy that spectrum map. There is a plugin called Curve EQ from Voxengo; you probably know it. It allows you to draw complex EQ curves, and it also lets you adjust the opacity of the plugin window and resize it. You can find the iPad speaker's frequency response as a picture on the web; adjust the size and scale of your EQ plugin window, set the opacity to 50% so you can see the original picture behind it, and then just draw with the mouse to trace the picture. When that's done, put it on the master channel and preview what it sounds like. Maybe it's not ideal, but I think the result will be very near your goal.
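If a published plot does turn up, the same tracing idea can also be done numerically instead of by eye: read a handful of (frequency, gain) points off the picture and interpolate between them on a log-frequency axis. A minimal sketch; the points below are made-up placeholders, not real iPad measurements:

```python
import math

# (frequency Hz, gain dB) points read off a published response plot.
# These values are purely illustrative placeholders.
picked_points = [(200, -18.0), (500, -6.0), (1000, 0.0), (4000, 2.0), (10000, -3.0)]

def gain_at(freq_hz, points=picked_points):
    """Linearly interpolate gain (dB) over log-frequency between picked points;
    clamp to the end points outside the measured range."""
    pts = sorted(points)
    if freq_hz <= pts[0][0]:
        return pts[0][1]
    if freq_hz >= pts[-1][0]:
        return pts[-1][1]
    for (f0, g0), (f1, g1) in zip(pts, pts[1:]):
        if f0 <= freq_hz <= f1:
            t = (math.log10(freq_hz) - math.log10(f0)) / (math.log10(f1) - math.log10(f0))
            return g0 + t * (g1 - g0)
```

The resulting `gain_at` function gives you EQ node values at any band centre, which you can then enter into whatever EQ you prefer instead of freehand drawing.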
Kardiomusic. No problem understanding what you suggest. Thank you.
This workaround could work in a number of different ways. I don't have to use the opacity trick, and I only use UAD plug-ins for EQ. The only problem is this: "you can find the iPad speaker's frequency response as a picture on the web".
Do you know the iPad frequency curve, or have you seen it anywhere on the web?
I stumbled across this topic looking for something unrelated, and I found it rather humorous that you somehow haven't made your own frequency response plot in 8 months, being a professional audio producer apparently, seeing how crucial this information is.
Knowing the actual frequency response of the speaker is of little use, as one of its well-known flaws is that it aims at nothing, firing out of the back of the device. Are you using a reflector? Is your official case folded behind it, reflecting sound forward? Both of those change the "frequency response" of the speaker.
So even if you had a perfect plot of what the speaker did, what exactly are you going to do, EQ your audio tracks differently? Without dictating the listening environment and reflections out of the speakers, it doesn't matter.
And further, anyone who actually cares about sound on an iPad uses better headphones or speakers, so developing for the obviously limited and midrange-scooped iPad default speaker, which is pointed nowhere and further diminished by poor placement, is ridiculous; no different from developing for the actual Apple headphones.
Finally, I'm not exactly sure how you don't have a workflow for a device that includes rapid prototyping for that device. You'd obviously want to demo everything even if you applied some sort of "time saving EQ template" to it.
Is it really that difficult to output reference patterns once, see what attenuation and reinforcement is happening, and craft a chart? A waste of time regardless, because if you moved a reference microphone to any number of typical listening positions the curves would be completely different.
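For the record, the 'reference pattern' measurement being discussed here is normally done with an exponential (log) sine sweep played through the speaker and re-recorded with a measurement mic. Generating the sweep itself is the easy part; a pure-Python sketch, where the sample rate and 20 Hz to 20 kHz range are common defaults rather than values from this thread:

```python
import math

SAMPLE_RATE = 44100

def log_sweep(f_start=20.0, f_end=20000.0, seconds=5.0):
    """Exponential sine sweep from f_start to f_end Hz, normalized to -1..1.
    Instantaneous frequency rises exponentially, giving equal time per octave."""
    n = int(SAMPLE_RATE * seconds)
    k = math.log(f_end / f_start)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        # Phase is the integral of the instantaneous frequency f_start * e^(k t / T).
        phase = 2 * math.pi * f_start * seconds / k * (math.exp(k * t / seconds) - 1.0)
        out.append(math.sin(phase))
    return out
```

Played through the speaker and captured with a reference mic, comparing the recording's spectrum to the sweep's flat spectrum yields the response curve for that one placement.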
Or is your plan, magically, to make the best mix for someone holding the speaker pointed away from them AND two inches above their lap? Because you, with your gear and experience, should know that in that case you create accurate audio so that no single listening environment is intentionally favored.
Ffrotty, thank you for your input. It's funny how personal this post is for people. I mean, you were looking for "something unrelated" and all...
I have had my workflow intact since before the post was originally written. Sorry if it looks like I'm waiting for THE answer to start working. I'm good, and producing for the masses with no worries whatsoever. And I'm not really (see above) looking for the EQ curve. It would be convenient to have an audio thru, that's all.
Signing off e-mail notification.
Hey, have you tried playing one of those speaker frequency range test videos on your device? I used this one to determine which frequencies are best heard on my iPad 2. It helped me a lot, and I hope it helps you too.
Cugnai, this is a perfectly good question. I am making a ringtone specifically for the iPhone, and I needed a way to output the sound from Logic to the iPhone (like an audio out going through the iPhone) so that I could hear exactly how it was going to sound on the iPhone while I was producing it in Logic.
Airfoil for Mac was the best I could come up with. It wirelessly sends audio from your Mac to your iPhone. There's about a 3-second lag, but it's better than nothing.
First download Airfoil onto your computer: https://www.rogueamoeba.com/airfoil/
Get the app "Airfoil Speakers Touch" from the App Store.
After you install the software on both your computer and your iPhone, you choose the application you want Airfoil to pull audio from. And that's it; it works pretty well. To take it a step further, have you ever used Soundflower as an audio out? If so, that's an easy way to do it: set your computer's output to Soundflower, and then no matter what program you're working in, the computer's output will go through Soundflower, then to Airfoil, then to your iPhone.
I completely understand your question, btw, and need the same information. Interesting that it's so hard to find.
I found this thread while looking for the same solution. I eventually discovered this, which should be exactly what you're looking for! It is a speaker built around an actual mobile phone speaker, so you can reliably mix and reference your work on mobile in real time.