kerochan

Q: opting for iMac for logic Pro advice?

I use a MacBook Pro Retina (2.8 GHz quad-core i7, 512 GB flash storage, 16 GB RAM) with external G-Technology 7200 rpm drives

and a 27" Thunderbolt Display.

No issues when using Logic Pro X.

 

I am considering getting a new 27" iMac, quad-core i7, maxed out with 32 GB RAM. Am I going to benefit from this change?

Will I see any difference in performance? I usually have about 50 tracks in Logic, with a handful of plug-ins throughout the whole project.

 

Really, I would just like to have more room on my desktop. I don't want to get a Mac mini.

MacBook Pro (Retina, 15-inch, Mid 2015), OS X El Capitan (10.11.3)

Posted on Sep 10, 2016 9:51 AM



  • by BenB, Helpful

    BenB Sep 11, 2016 11:23 PM in response to kerochan
    Level 6 (9,836 points)

    Both are i7s; beyond 16 GB of RAM I'm not sure you'd notice a difference, except maybe on really large projects. Logic doesn't use the GPU at all, so I'm not sure you'll see much improvement. I'd wait until the next generation of iMacs is announced and make a larger jump, to make the investment worthwhile.

  • by kerochan

    kerochan Sep 11, 2016 11:26 PM in response to BenB
    Level 1 (58 points)

    Yes, I agree, I will do that; it's only another month, I guess!

  • by Jazzmaniac

    Jazzmaniac Sep 12, 2016 3:32 AM in response to BenB
    Level 2 (479 points)

    BenB wrote:

     

    Both are i7s; beyond 16 GB of RAM I'm not sure you'd notice a difference, except maybe on really large projects. Logic doesn't use the GPU at all, so I'm not sure you'll see much improvement. I'd wait until the next generation of iMacs is announced and make a larger jump, to make the investment worthwhile.

    "Both are i7" is surely a correct statement, but it's also nearly meaningless. The i7 family covers a vast range of performance, with significant differences between generations, between models within a generation, and, very significantly, between the mobile and desktop versions. Only a benchmark, ideally with a focus on floating-point performance, can yield any meaningful statements here.
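    To make that point concrete, here is a minimal sketch of such a benchmark in Python (a hypothetical illustration, not anything Logic actually runs): it times a multiply-accumulate loop, the kind of floating-point work an audio inner loop does, so the same script can be run on both machines and the timings compared.

```python
# Rough single-core floating-point benchmark sketch: run the same script
# on both machines and compare the elapsed times. Not a rigorous
# benchmark suite, just enough to expose large per-core differences.
import time

def fp_workload(n=2_000_000):
    # Simple one-pole low-pass filter over n samples: a floating-point
    # multiply-accumulate loop, similar in spirit to a DSP inner loop.
    y, a = 0.0, 0.99
    for i in range(n):
        y = a * y + (1.0 - a) * (i % 64)
    return y

start = time.perf_counter()
fp_workload()
elapsed = time.perf_counter() - start
print(f"elapsed: {elapsed:.3f} s (lower is faster)")
```

    Pure Python adds interpreter overhead on top of the raw floating-point work, so treat the numbers as relative between machines, not as absolute DSP throughput.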

     

    And yes, you do notice the 32 GB, even if you had 16 before. The OS uses the additional RAM in the background to let Logic run more smoothly, even if your samples and other Logic memory never come close to those requirements.

     

    It's correct that Logic does not use the GPU for audio signal processing. However, the GUI drawing callbacks are a critical part of the low-latency processing chain, and having a better GPU to handle them can easily improve the latency you can tolerate, and also how close to maximum processing power you can drive your rig before you get buffer underruns. Dedicated GPUs in particular seem to perform a lot better there than integrated ones. That also has to do with certain resources being shared between integrated GPUs and the CPU, which can produce memory-transfer bottlenecks and other gremlins.

  • by kerochan

    kerochan Sep 12, 2016 4:24 AM in response to Jazzmaniac
    Level 1 (58 points)

    Thanks Jazzmaniac

    I also have a MacBook Air (dual-core i7, 8 GB RAM).

    I tried it with the same big Logic projects my MacBook Pro handles: no difference at all, 50 tracks plus plug-ins, etc.!

  • by BenB

    BenB Sep 12, 2016 4:26 AM in response to Jazzmaniac
    Level 6 (9,836 points)

    I have to disagree, strongly, as a retired IT engineer and Apple-certified trainer of over a decade. Redrawing Logic's user interface uses so little of the GPU that the two GPUs in these two models will make NO difference.

     

    16 vs. 32 GB of RAM will ONLY be noticeable if you're really pushing RAM-related operations really, really hard. A 50-track project MAY come close to that, but with lesser projects you will NOT notice a difference.

     

    The two i7 CPUs in these two units, both of which I've used recently in our broadcast studio, perform very close to the same.

     

    GPU, RAM, and CPU aside, hard drive access is the single most influential bottleneck in any system. If you're on a Thunderbolt RAID 5, that will never be an issue for Logic.
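    If you want to see where your own drives stand, a rough sequential-throughput sketch follows (a hypothetical script, not a tool from this thread; note that real audio streaming is many small concurrent reads, and the OS file cache will inflate the read figure since the file was just written).

```python
# Rough sequential disk throughput check: write a scratch file, then
# time reading it back. Treat the result as an upper bound, because the
# OS file cache will likely serve much of the freshly written data.
import os
import tempfile
import time

CHUNK = 1024 * 1024           # 1 MiB per read/write
TOTAL = 64 * CHUNK            # 64 MiB test file (small, to keep it quick)

# Random bytes, so a compressing filesystem can't cheat the test.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    for _ in range(TOTAL // CHUNK):
        f.write(os.urandom(CHUNK))

start = time.perf_counter()
nread = 0
with open(path, "rb") as f:
    while chunk := f.read(CHUNK):
        nread += len(chunk)
elapsed = time.perf_counter() - start
os.unlink(path)

print(f"read {nread // CHUNK} MiB in {elapsed:.3f} s "
      f"({nread / elapsed / 1e6:.0f} MB/s, cache-inflated)")
```

    Point it at a file on the external drive or RAID in question (e.g., by setting `dir=` in `NamedTemporaryFile`) to measure that volume instead of the boot disk.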

     

    I think the spec issue is being overblown based on paper numbers, which, as a retired IT engineer, I'm extremely aware of, versus real-world performance, of which I have recent first-hand knowledge in a broadcast station.

     

    And again, a new iMac may be announced in October or November, so I'd wait before upgrading, as the OP has said. The differences at this point in time, for a 50-track LPX project, won't be enough to justify the extra cost. WAIT and see what happens in a month or two when the new lineup of Macs is announced.

  • by Jazzmaniac

    Jazzmaniac Sep 12, 2016 5:01 AM in response to BenB
    Level 2 (479 points)

    Ben,

     

    I'm running a software company developing high-performance audio applications, and we actually measure these things. I can guarantee you that everything I have said has a basis in strong quantitative numbers.

     

    I respect your knowledge as an IT engineer and certified trainer, but I think the hardware and software we're talking about here has evolved in recent years to a level of complexity where nobody can make accurate predictions about performance without measuring the result. Even with a lot of experience under our belts, we still get plenty of surprises when we profile our code and find the performance bottleneck in entirely unsuspected parts of the system.

     

    As Logic is one of our main supported hosts, we also profile Logic regularly, and all the factors I mentioned above play a significant role. For low-latency realtime processing, disk speed is almost negligible unless you stream enormous amounts of data from disk. Disk response times are too slow to be part of the inner loop anyway, and disk access is heavily buffered using read-prediction methods. The real issues are, among others, moving data quickly enough between the caches and main memory, as this is the main bottleneck of modern processors. The CPU usually waits far longer for memory than it actually spends computing floating-point operations for signal processing, unless you keep everything nicely in the cache. The different i7 models I mentioned have different cache sizes that can make or break your realtime loop at a certain granularity.

    An integrated GPU will also share the memory bus with the CPU, making the CPU wait in case of an access conflict, as the GPU's access has higher priority. And do not underestimate the huge amounts of memory being transferred to the display buffer and texture buffers in modern graphics cards. The compositing methods of modern UI rendering need a lot of back buffers, which require many times the visible screen area in terms of memory. All of that has to flow through the memory bus at a significant rate, and it gets much worse on a hi-DPI (Retina) display. Have you never wondered why people complain about poor performance in Logic/MainStage/etc. on Retina machines, even though they have a high-end processor?
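    The cache effect described above can be glimpsed even from Python, though interpreter overhead mutes it badly compared with compiled DSP code (a hypothetical sketch, not one of our profiling tools): summing the same array elements sequentially versus in a large-stride order touches identical data, but the strided walk defeats the cache and the hardware prefetcher.

```python
# Sequential vs. strided memory access over the same 8 MiB of doubles.
# Both walks visit every element exactly once; only the order differs.
import array
import time

N = 1 << 20                       # ~1M doubles = 8 MiB, bigger than L2 cache
STRIDE = 4096                     # jump 32 KiB between consecutive accesses
data = array.array("d", range(N))

def timed_sum(indices):
    """Sum data[i] for each i in indices; return (seconds, total)."""
    start = time.perf_counter()
    total = 0.0
    for i in indices:
        total += data[i]
    return time.perf_counter() - start, total

# Sequential walk: the prefetcher streams cache lines ahead of the loop.
seq_time, seq_total = timed_sum(range(N))

# Strided walk: a permutation of 0..N-1 that jumps STRIDE elements per
# step, so most accesses miss the cache. (Requires STRIDE to divide N.)
strided = ((i * STRIDE) % N + (i * STRIDE) // N for i in range(N))
str_time, str_total = timed_sum(strided)

print(f"sequential: {seq_time:.3f} s, strided: {str_time:.3f} s")
```

    In compiled code the gap between the two walks is typically far larger, which is exactly why cache size and memory-bus contention matter more than the "i7" label on the box.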

     

    Look at these reports from other users if you don't trust me:

    CPU Problem Solution

    Re: CPU overload issues

     

    Regarding memory, there are many good reasons for getting more RAM, among them the fact that you cannot upgrade RAM on most Apple machines these days. But that aside, more RAM increases performance by using the memory interface more efficiently. Modern CPUs communicate with main memory through several memory buses in a rather complicated way. With more RAM, the chances that the integrated GPU's memory is served from a different memory channel are significantly better than with less. The same is true for concurrently running processes and threads.

    Memory management in modern OSes is also very complicated. The only reason it works at all is that CPUs now support virtual memory, and therefore quick remapping of memory regions within the address space. But this has limits, and memory gets fragmented over time, demanding expensive reorganisation of memory blocks when larger chunks are requested. There are also many nearly nondeterministic processes going on around memory, including defragmentation, paging, memory compression, memory relocation, and memory locking by other processors/GPUs, which make main memory the most unpredictable and vital resource in the system. You mentioned disks as bottlenecks: well, more memory can compensate for that very well. In total, even if you only use a fraction of your memory, giving the system more will increase performance, because it significantly reduces the probability of unexpected, and in a low-latency environment often fatal, events.