You are welcome. I use MainStage in our Sunday morning church service as well.
OK, regarding your questions:
What happens
You will be able to use MainStage with your current MacBook Pro, depending on what you want to do. Please note that everything you hear from MainStage (except backing tracks) is calculated on the fly. Let's say you have a simple patch that contains a piano sound and a reverb. Now you press C4 on your MIDI keyboard. This command is sent to your computer almost instantly. The computer starts to calculate the piano sound, adds the reverb, and a few milliseconds later you hear the piano with some reverb. The more sounds or effects you add, the more CPU power is needed to calculate the output. The challenge is that you do not want to wait too long to hear the result; otherwise you would not be able to play live. That is why the software uses a sample buffer. To keep it simple: the sample buffer ensures that you have a stable latency between pressing a key and hearing the result.
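The relationship between buffer size and latency is just simple arithmetic. Here is a tiny sketch (the 44.1 kHz sample rate and 128-sample buffer are illustrative values, not something specific to your setup):

```python
# Buffer latency: the time one buffer of audio covers at a given sample rate.
# Illustrative values: a 44.1 kHz sample rate and a 128-sample buffer.
sample_rate = 44100   # samples per second
buffer_size = 128     # samples per buffer

latency_ms = buffer_size / sample_rate * 1000
print(f"{latency_ms:.1f} ms")  # roughly 2.9 ms per buffer
```

So every doubling of the buffer size doubles this part of the latency.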
The driver
Anything MainStage has calculated has to be passed through the audio driver (in your case the built-in sound card) so you can hear it via speakers or headphones. This adds some extra time to the latency (MainStage takes this into account automatically). The audio driver and sound card in your MacBook Pro are quite good compared to other built-in solutions, but they cannot compete with dedicated external audio interfaces and their drivers. Even my 5-year-old USB audio interface (Focusrite 18i6) is 2-3 times faster than the built-in sound card in my MacBook Pro.
The CPU
This is where the CPU starts to play an important role. Usually - apart from some plugins like Kontakt and Omnisphere - a plugin can only use a single core of your computer to calculate its part. MainStage, however, can manage several cores and makes sure that different plugins run on different cores (again, very simplified). So based on your settings, the CPU has a limited amount of time to calculate the sounds and effects and mix them together. If the calculation becomes too complex, the CPU will not finish before the audio has to be delivered. This leads to the strange dropouts and popping you experienced.
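In other words, each buffer gives the CPU a fixed time budget, and a dropout happens when rendering takes longer than that budget. A minimal sketch of this condition (the render times below are made up for illustration):

```python
# Dropout condition: the engine must finish rendering one buffer of audio
# before the hardware needs it. All numbers here are illustrative.
sample_rate = 44100   # Hz
buffer_size = 128     # samples

time_budget_ms = buffer_size / sample_rate * 1000  # ~2.9 ms per buffer

def buffer_ok(render_time_ms: float) -> bool:
    """True if the CPU finished rendering within the buffer's time budget."""
    return render_time_ms <= time_budget_ms

print(buffer_ok(1.5))  # simple patch: renders in time -> True
print(buffer_ok(4.0))  # too many plugins: deadline missed -> False (dropout)
```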
If this is the case, you have several (theoretical) options. You can increase the buffer from 128 to 256 samples; this gives the CPU more time to calculate the sound, but it also increases the latency. Or you could use more cores - if available. This is why people recommend a MacBook Pro 15 with a real quad-core i7 CPU: it has 4 cores instead of 2 and therefore (theoretically) twice the CPU power.
So if you used a fast external audio interface - USB, or even better Thunderbolt (but more expensive as well) - you could increase the buffer size in MainStage and still get a latency similar to a smaller buffer size with the built-in sound card.
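As a made-up numeric example of that trade-off (the driver overhead figures below are invented for illustration, not measured values for any real interface):

```python
# Total output latency is roughly: buffer latency + driver/converter overhead.
# The overhead figures below are invented for illustration only.
sample_rate = 44100

def total_latency_ms(buffer_size: int, driver_overhead_ms: float) -> float:
    return buffer_size / sample_rate * 1000 + driver_overhead_ms

builtin  = total_latency_ms(128, 7.0)   # built-in card: small buffer, slow driver
external = total_latency_ms(256, 1.5)   # external interface: bigger buffer, fast driver

print(f"built-in : {builtin:.1f} ms")   # ~9.9 ms
print(f"external : {external:.1f} ms")  # ~7.3 ms
```

Despite the doubled buffer, the external interface ends up with similar or even lower total latency, because its driver overhead is so much smaller.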
Everything I wrote is just a rough and incomplete overview. So please search the web for "audio latency" to get a better understanding, and try to get a clear picture of CPU power, buffer size, sampling rate, latency, sound cards and their drivers, and how these things fit together. This helps a lot when you have to solve problems on a Sunday morning on your own. ;-)
What is your current latency
You can see this in MainStage --> Settings --> Audio --> Advanced Settings. Here you can define the buffer size, and below it you see the resulting latency in milliseconds (ms). The output latency is the relevant value.
Loading patches
MainStage always loads all patches of a concert when you open it. This is necessary to be able to switch seamlessly between patches; otherwise there would always be a short break when switching. It can, however, lead to high RAM consumption.
I hope this helps you get a better idea of what is going on in MainStage and which factors influence it. There are also a lot of videos on YouTube where people cover these topics and show everything directly in MainStage.