What is Kernel and do Windows computers have it?

I have been using Macs for decades and must have heard the term "kernel" a million times. In as non-technical language as possible can anyone explain what a kernel is? Do Windows computers have kernels?



[Re-Titled by Moderator]

Original Title: In non-tech language: what the heck is a kernel?

iMac (M4)

Posted on Sep 27, 2025 4:45 PM

Question marked as Top-ranking reply

Posted on Sep 28, 2025 9:40 AM

Emmett_1944 wrote:

Thanks to everyone. One person said the kernel was "The core of the operating system." Is this something completely different from "10 core CPU 10 core GPU"?


Yes, and you can thank Intel for this particular confusion.


As transistor counts have increased and transistor and related feature sizes have shrunk, more and more has been stuffed onto the chips. Processors used to occupy whole circuit boards, or more, and were built from many chips. As counts increased and feature sizes shrunk, processors fit onto single chips. Then more than one processor fit onto one chip. And where many designs are headed, processors, memory, and storage are all integrated onto one chip.


Intel marketing confusingly terms these co-resident processors as “cores”, and refers to those parts of the chip that are less than a complete “core” as a “thread”. Then Intel marketing named one of their chip product lines “Core”, which is quite distinct and unrelated to processor “cores”.


You can ponder the wonderfully fictional “feature size” marketing from the various chip vendors, too. Units including nanometers and angstroms do have definitions, but then there’s the polite fiction of chip feature size measurements.
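To see how many of these "cores" your operating system believes it has, a quick check from the shell works on both macOS and Linux (note the count reported is logical processors, so on hyper-threaded Intel chips it includes "threads", not just physical cores):

```shell
#!/bin/sh
# Ask the OS for the number of logical processors currently online.
# On hyper-threaded machines this counts "threads", not physical cores.
getconf _NPROCESSORS_ONLN
```

On macOS specifically, `sysctl -n hw.physicalcpu` and `sysctl -n hw.logicalcpu` report the physical and logical counts separately.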


I asked about cores weeks ago and someone said they were physical places on the main chip that were logically connected and the more you had the more powerful your computer was.


Processors (“cores”) are one part of a computer. How fast a computer is depends on various factors, of which the speed and number of processors is only one.


Intel, for instance, would prefer you to believe that more processors and faster processors make your computer faster, and that’s far from a reliable assumption. That’s because the performance of a computer is often limited by its slowest component.

So the kernel is software. You can't open your computer and point to the kernel? The kernel is some lines of software code within the OS. I didn't look on wikipedia, which some people mentioned, because I didn't think it would be on there. Wikipedia says:

[deletia]

So the kernel is sort of the "store manager" for your whole computer. It decides what hardware will be used for what tasks and for how long. The "buck stops" at the kernel. If the kernel goes, everything goes.


Correct. There are environments with multiple kernels active, for installations seeking higher reliability and availability; this is arranged either within the kernel design itself, or with the involvement of the system console or a hypervisor. There are also environments where the apps and the kernel are combined into one big hunk, with little or no non-kernel activity.


When wikipedia says " The kernel is also responsible for preventing and mitigating conflicts between different processes." what kind of conflicts can occur in your computer?


Different activities inherently have different priorities, such as a process that needs to react quickly to an important event having a higher priority.


Reading wads of data from a network takes priority over other activities, including the apps processing that data, because the data has to be read from the link as it arrives, or the data is lost.


You might want user keyboard or mouse input accepted and displayed more quickly than activities such as running a background storage scan or performing a log rotation or any of myriad other housekeeping activities. Users don’t like beachballs (the so-called spinning wait cursor), and that means some other activities can wait.


…the kernel works it out so each thing gets what is needed and the computer keeps working smoothly and the user does not notice any slowdowns or problems…


Preferably, yes. The particular parts of the kernel you are referring to here involve a mix of process scheduling and interrupt scheduling.
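The priority half of this can be poked at from the shell with `nice`, which asks the kernel's scheduler to treat a process as less important (a minimal sketch, assuming POSIX `nice` and `sh`):

```shell
#!/bin/sh
# Start a subshell with its scheduling priority lowered by 10 "nice" units,
# then have it report its own niceness. The scheduler gives CPU time to
# low-priority processes only when higher-priority work isn't runnable.
nice -n 10 sh -c 'nice'
```

This is exactly what background housekeeping tasks do so that keyboard and mouse handling stays responsive.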


How much nuance do you want?


Sep 28, 2025 4:14 PM in response to Emmett_1944

The kernel is a pretty technical concept, so it's hard to explain without using technical language :-) But you are right to ask; it is a useful concept to understand.


An operating system is the software that makes the computer run, as opposed to the software the user runs to send email, write documents, and so on.


Most operating systems are divided into two halves: (1) the kernel, and (2) user mode.


The kernel is a central part of the operating system which talks directly to the underlying hardware. Usually, the kernel can access everything in the machine, and do anything on the machine - it has complete control.


The user mode half of the operating system is more limited. It runs in a kind of isolated environment, where it cannot directly talk to the hardware - if user mode needs the hardware to do something, it must send a request to the kernel. If the kernel responds with "yep, I've done that for you" then user mode continues normally.
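That request/response loop is invisible but constant. Even a one-line shell command is a chain of kernel requests - a minimal sketch, using `/tmp` as a scratch location:

```shell
#!/bin/sh
# The shell never touches the disk itself. Each step below is a request
# to the kernel: the redirection asks for open(2)+write(2)+close(2),
# and cat asks for open(2)+read(2)+write(2)+close(2).
echo "hello from user mode" > /tmp/kernel_demo.$$
cat /tmp/kernel_demo.$$
rm /tmp/kernel_demo.$$
```

Every one of those system calls crosses the user-mode/kernel boundary, and the kernel checks permissions on each request before doing the work.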


Most of the apps you use day to day - email, browser, word processor, etc. - are running in the user mode portion of the operating system. That way, you can do stupid things in your own little patch of the total system - delete your own files, overwrite data, crash a user program - but you can't do stupid things to the total system. You can't overwrite files belonging to other users, or crash the entire machine.


If a program running in user mode crashes, that's the end for that program, and in severe cases, might be the end of that user session. But the kernel of the operating system keeps on running, and you can launch a new session and continue working.


If something in kernel mode crashes, that causes a "kernel panic" where you get the grey screen, and need to reboot the computer.


This separation of the operating system into kernel and user mode is very widespread across all current major OSes: Windows, macOS, and Linux are pretty similar in this regard.


How the operating system manages to keep kernel mode and user mode separate is a fascinating and ingenious topic; but yeah, it gets a bit technical ... usually need a white board and an hour or two to explain :-)


You can see a few details about your current kernel by opening a Terminal window and running the command "uname -a"


user@Mac ~ % uname -a

Darwin Mac.network.lan 25.1.0 Darwin Kernel Version 25.1.0: Fri Sep 19 19:13:42 PDT 2025; root:xnu-12377.40.77.505.1~4/RELEASE_ARM64_T8122 arm64

user@Mac ~ %


As you can see, macOS reports its kernel as 'Darwin' (strictly speaking, Darwin is the open-source core of macOS, and the kernel itself is named XNU - you can spot the xnu version string in the output above). Most of the macOS kernel is supplied by Apple. Sometimes, third-party software can add its own extensions to the kernel - modules called "kernel extensions" or 'kexts'. But as a user, you rarely need to interact with them at all.


I have a degree in computer science, and I've worked with operating system engineering for around 30 years. So that's the basis for my answer. Other folks might have better explanations. Hope this helps a bit.

Sep 28, 2025 1:10 AM in response to MrHoffman

Thanks to everyone. One person said the kernel was "The core of the operating system." Is this something completely different from "10 core CPU 10 core GPU"? I asked about cores weeks ago and someone said they were physical places on the main chip that were logically connected and the more you had the more powerful your computer was.


So the kernel is software. You can't open your computer and point to the kernel? The kernel is some lines of software code within the OS. I didn't look on wikipedia, which some people mentioned, because I didn't think it would be on there. Wikipedia says:


A kernel is a computer program at the core of a computer's operating system that always has complete control over everything in the system. The kernel is also responsible for preventing and mitigating conflicts between different processes. It is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components. A full kernel controls all hardware resources (e.g. I/O, memory, cryptography) via device drivers, arbitrates conflicts between processes concerning such resources, and optimizes the use of common resources, such as CPU, cache, file systems, and network sockets. On most systems, the kernel is one of the first programs loaded on startup (after the bootloader). It handles the rest of startup as well as memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit.


So the kernel is sort of the "store manager" for your whole computer. It decides what hardware will be used for what tasks and for how long. The "buck stops" at the kernel. If the kernel goes, everything goes.


When wikipedia says "The kernel is also responsible for preventing and mitigating conflicts between different processes." what kind of conflicts can occur in your computer? Would that be like if you were using a math application while playing music in the background, and the math application was saying "I need more of the CPU to graph this function" while the music was saying "I need more of the CPU to play this part of the song"? The kernel works it out so each thing gets what it needs, the computer keeps working smoothly, and the user does not notice any slowdowns or problems.
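The scenario described above can be sketched directly in the shell: two CPU-hungry jobs started at the same moment both run to completion, because the kernel's scheduler alternates them on the available cores (a minimal sketch, assuming a POSIX `sh`):

```shell
#!/bin/sh
# Two CPU-hungry loops started together. Neither monopolizes the machine:
# the kernel's scheduler interleaves them, and both complete.
sh -c 'i=0; while [ "$i" -lt 50000 ]; do i=$((i+1)); done; echo "job 1 done"' &
sh -c 'i=0; while [ "$i" -lt 50000 ]; do i=$((i+1)); done; echo "job 2 done"' &
wait
```

Which job prints first is up to the scheduler; the point is that both always finish.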

Sep 28, 2025 1:41 PM in response to TT123B

TT123B wrote:

MrHoffman wrote:
The kernel is a part of macOS that allows and assists users and apps with what can and should be permitted, and that blocks users and apps from doing what cannot or should not be permitted.

so it’s some kind of manipulation of a device


The original request was for non-technical language.


The software presentations of hardware devices are kernel constructs, and are one of various kernel constructs that carry access controls.


There are different architectural designs for kernels too, and the protection and the abstractions and other details will vary.


In more technical terms: the foundation of macOS security is built upon the memory management hardware security, and upon the mode change between non-privileged (often “user mode”) and privileged (often “kernel mode”) memory access. Most any other so-called protected-mode operating system with virtual memory support is similar here, too.


The file and device security details are all built upon the foundation provided by the kernel and the memory management hardware.

Sep 28, 2025 12:18 PM in response to Emmett_1944

As transistor counts have increased and transistor and related feature sizes have shrunk, more and more has been stuffed onto the chips. Processors used to occupy whole circuit boards, or more, and were built from many chips. As counts increased and feature sizes shrunk, processors fit onto single chips. Then more than one processor fit onto one chip. And where many designs are headed, processors, memory, and storage are all integrated onto one chip.


sorry mate, this is ancient

Sep 28, 2025 3:52 PM in response to Emmett_1944

Emmett_1944 wrote:

I thought the reason why 3 nm was better than 5 nm was that was the distance between the transistors on the chip and although those are tiny lengths when you have billions of them on a chip it adds up and can make the chip faster or slower. Is that true or is that marketing copy?


Smaller features allow more components, yes, but that shrinkage has trade-offs around the behavior of electricity and heat in ever-shrinking wiring, as well as changes to the photolithography used.


If shrinking designs was easy and direct, everything would already be shrunk.


Intel was recently stuck here, too. They’re reportedly using TSMC for manufacturing (sometimes “fabbing”) their recent processors, and not their own Intel factories (sometimes “fabs”).


The marketing of feature sizes tends toward the confusing and aspirational, and is decidedly not comparable across vendors. To wit: “Since around 2017 node names have been entirely overtaken by marketing with some leading-edge foundries using node names ambiguously to represent slightly modified processes. Additionally, the size, density, and performance of the transistors among foundries no longer matches between foundries. For example, Intel's 10 nm is comparable to foundries 7 nm while Intel's 7 nm is comparable to foundries 5 nm.”


For a discussion of the physical performance limits arising here, look on YouTube for a video of Admiral Grace Hopper, nanoseconds, and “picoseconds all over the floor”.



Sep 28, 2025 5:12 PM in response to MrHoffman

They still use photolithography to make these huge chips?? I studied "computers" 50 years ago in college. (I never completed it) At the time there was no such thing as "computer engineering" or "software engineering" where I was. We studied what was called "digital electronics." We used to make small computers out of integrated circuit chips that we bought out of catalogs because that was the only place you could get them. We learned that these IC chips had many components in them and were made by a process called photolithography which I basically understood as a backward microscope, taking something big and making it tiny. I can't believe 50 years later they are still using that.
