How does Linux partition the logical address space between user and kernel space, especially when it comes to deciding what gets mapped to physical RAM versus virtual memory?

The norm for a memory allocation scheme is to use physical memory before 
dumping the overflow to disk. However, Linux is unique in that there's 
a set limit for user and kernel space (aka monolithic kernel). This 
leads me to believe that the logical address space is set aside ahead 
of time, meaning that if a person has 2GB of physical RAM and chooses 
the 3GB/1GB user/kernel split, either 1GB of kernel space and 1GB of 
user space will be allocated to physical RAM, or 2GB of user space will 
go to physical RAM while the remaining 2GB (1 user, 1 kernel) will be 
virtual. This seems rather asinine, so I can hardly believe this is how 
it works.

It was also proposed to me that the system only considers how much it 
-will- give to either user or kernel space. Meaning that at boot-up, if 
a system only uses 2MB of kernel space, only 2MB gets mapped from the 
total available memory for kernel space (as there's a cap), and those 
first 2MB would be allocated in physical memory. Then, as more was 
requested, more would be allocated for either user or kernel space, 
straight from physical memory, until it ran out. This sounds like a 
more solid scheme, but at the same time it seems silly to do things on 
a first come, first served basis. It seems more reasonable to partition 
memory so that each application has some physical memory to work in, 
and then use a memory map to disk for anything additional.

So the question is: how does Linux delegate its memory with regard to 
user/kernel space? Does Linux treat user/kernel space just as a hard 
limit, allocating memory as it's needed until it hits the cap for a 
particular "type" of memory? Or does it preordain that kernel space 
will be in physical memory while user space gets the remainder, whether 
it be physical or not? Is there possibly another partitioning scheme, 
similar to the one I suggested above?

I'm beginning to believe it's a simple tallying scheme, checking to see 
how much kernel space or user space has been allocated.

I.e.:
3GB/1GB of user/kernel space is available.
2MB gets allocated at boot time for the kernel and its modules
(1.000GB - 0.002GB = NewAvailableKernelSpace).
50MB gets allocated to X in user space
(3.000GB - 0.050GB = NewAvailableUserSpace).
So on and so forth.

I imagine that the logical address space is just a series of pointers 
telling the system where everything is. For example, the first logical 
address might point to physical memory for kernel space, while the 
second logical address unit might point to user space in virtual memory 
(similar to the example above). I imagine that's the whole value of 
logical address space - it provides the HAL. 

Any reference material or solid answers surrounding the mysteries of 
Linux's memory allocation would be appreciated. :)

Thanks,
Dustin

Dustin
9/20/2003 8:24:21 PM

Dustin Darcy wrote:

There seem to be a lot of misconceptions in here...

> The norm for a memory allocation scheme is to use physical memory before
> dumping the overflow to disk. 

That's the norm for a memory management system; memory allocation and memory
management are two different things.

> However, Linux is unique in that there's a set limit for user and
> kernel space (aka monolithic kernel).

This is not unique; it also has nothing to do with having a monolithic
kernel.  All systems have a set limit for user and kernel space, if you
mean "a set limit to the total amount of addressable memory".  If you mean
that in Linux the "boundary" between user space and kernel space in a
process' view of memory is fixed, that's true (at least, from what I've
read, and in the IA-32 architecture) -- but that's also true of, for
example, older versions of SunOS.  I understand that it's also true of at
least some versions of Windows NT.

(See an article from "Windows & .NET magazine" discussing NT vs. Unix memory
management:

  http://www.winntmag.com/Articles/Index.cfm?ArticleID=4500&pg=4
)

Current Linux is not monolithic -- there are kernel modules.  

> This leads me to believe that the logical address space is set aside
> ahead of time, meaning that if a person has 2GB of physical RAM and
> chooses the 3GB/1GB user/kernel split, either 1GB of kernel space and
> 1GB of user space will be allocated to physical RAM, or 2GB of user
> space will go to physical RAM while the remaining 2GB (1 user, 1
> kernel) will be virtual. This seems rather asinine, so I can hardly
> believe this is how it works.

You're confusing address space with physical memory here.  The 3/1 split
applies to a process' address space.  Each individual process on the
machine "sees" 3GB of "user memory" and 1GB of "kernel memory".

A process' address space is mapped into real and virtual memory by the
memory management system.  Note that few processes are going to use
anything like the full 3GB of user address space, so most such memory maps
are going to have lots of "holes" where there are pages in the address
space which do not correspond to any actual page in the system.

Note as well that real pages can be shared by multiple processes, showing up
in each one's address space.  E.g., if four processes are using the same
shared library, there might be only one actual copy of the library in
system memory, which is being mapped into all four processes' address
spaces.
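
If you want to see this on a live system, Linux exposes each process'
memory map through /proc.  Here's a minimal C sketch (Linux-specific)
that dumps its own map; each line is one mapped region, the gaps
between regions are the "holes" described above, and shared libraries
like libc show up in every process that maps them:

#include <stdio.h>

int main(void)
{
    /* Each line of /proc/self/maps is one mapped region:
     * start-end addresses, permissions, and the backing file (if any). */
    FILE *fp = fopen("/proc/self/maps", "r");
    char line[256];

    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    while (fgets(line, sizeof line, fp) != NULL)
        fputs(line, stdout);
    fclose(fp);
    return 0;
}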

> It was also proposed to me that the system only considers how much it
> -will- give to either user or kernel space. Meaning that at boot-up, if
> a system only uses 2MB of kernel space, only 2MB gets mapped from the
> total available memory for kernel space (as there's a cap), and those
> first 2MB would be allocated in physical memory. Then, as more was
> requested, more would be allocated for either user or kernel space,
> straight from physical memory, until it ran out. This sounds like a
> more solid scheme, but at the same time it seems silly to do things on
> a first come, first served basis. It seems more reasonable to partition
> memory so that each application has some physical memory to work in,
> and then use a memory map to disk for anything additional.

Now we get into actual memory management.  Each process "sees" a 1GB address
space for the kernel, but only the memory actually used by the kernel is
mapped.  The rest is simply marked as not yet allocated.
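
You can get a feel for the difference between address space and
committed memory from user space, too.  This is only an analogy (the
kernel manages its own region differently), but mmap() with PROT_NONE
hands back a range of addresses without committing any physical pages
to it:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Reserve 256MB of address space.  No physical RAM is committed;
     * touching these pages would fault until the protection changes. */
    size_t len = 256UL * 1024 * 1024;
    void *p = mmap(NULL, len, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("reserved %zu bytes at %p\n", len, p);
    munmap(p, len);
    return 0;
}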

Doing things on a first come, first served basis does make sense on a
general purpose computer.  Partitioning memory to each application would
make much less sense, since you cannot know beforehand how many
applications are going to be run.  If, for example, you guess that eight
applications are going to be run simultaneously, and only four get run,
you'd be wasting the memory reserved for the other four.  And if ten got
run instead, then you're back to the situation of possibly having no memory
available when someone tries to start an application.

(Note as well that the amount of memory needed by different applications
varies widely, both between applications and between instances of
applications.  There are quite a few "memory sharing" techniques used in
memory management, so in some cases, creating a new instance of a program
which is already loaded in memory may take much less memory than loading
the first instance did.)

Like most memory managers, the Linux MM attempts to use a
least-recently-used scheme for memory management -- meaning that when
available physical memory starts to run low, pages are swapped out based on
how long it's been since they were last used (or a reasonable approximation
thereof).  The theory here is that the longer it's been since a page was
last used, the less likely it is that it's going to be needed again soon...
and this is generally a good theory.

Pages start to be swapped out before the system runs out of real memory, to
try to prevent ever reaching a state where there is no free real memory.
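
To make that concrete, here's a toy sketch of the "clock"
(second-chance) algorithm, one common way of approximating LRU.  The
real Linux MM is considerably more elaborate, so take this only as an
illustration of the referenced-bit idea:

#include <stdio.h>

#define NFRAMES 4

struct frame {
    int page;         /* which page occupies this frame */
    int referenced;   /* set when the page is accessed */
};

static struct frame frames[NFRAMES];
static int hand;      /* the clock hand: next frame to examine */

/* Sweep the frames: a set referenced bit earns the page a second
 * chance (the bit is cleared); the first clear bit found marks the
 * victim to evict. */
static int pick_victim(void)
{
    for (;;) {
        if (!frames[hand].referenced) {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        frames[hand].referenced = 0;
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void)
{
    int i;

    /* Pretend pages 0-3 are resident and pages 0 and 2 were just used. */
    for (i = 0; i < NFRAMES; i++)
        frames[i].page = i;
    frames[0].referenced = 1;
    frames[2].referenced = 1;

    printf("evict page %d\n", frames[pick_victim()].page);  /* page 1 */
    return 0;
}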

> So the question is: how does Linux delegate its memory with regard to
> user/kernel space? Does Linux treat user/kernel space just as a hard
> limit, allocating memory as it's needed until it hits the cap for a
> particular "type" of memory? Or does it preordain that kernel space
> will be in physical memory while user space gets the remainder, whether
> it be physical or not? Is there possibly another partitioning scheme,
> similar to the one I suggested above?

Memory that's in active use will be in physical memory, unless there's a
serious memory shortage on the system.  AFAIK, Linux doesn't differentiate
between kernel memory and user memory in swapping pages out to disk -- if
there's something in the kernel that's never used (e.g., a device driver
for a device that doesn't actually exist in the system), then it's a good
candidate for swapping out when memory starts to run low.

Also, note again that address space is on a per-process basis.  The 1GB of
"kernel space" that all processes see is actually the same, but the 3GB of
"user space" which each one sees is unique to each process.

> I'm beginning to believe it's a simple tallying scheme, checking to see
> how much kernel space or user space has been allocated.
> 
> I.e.:
> 3GB/1GB of user/kernel space is available.
> 2MB gets allocated at boot time for the kernel and its modules
> (1.000GB - 0.002GB = NewAvailableKernelSpace).
> 50MB gets allocated to X in user space
> (3.000GB - 0.050GB = NewAvailableUserSpace).

Right.  Note that this would be NewAvailableUserSpace for X.  In point of
fact, though, there's no reason to tally the allocated or available space
as a separate thing -- the system is already keeping track of address
space, so it'll know when there's none left without keeping a separate
count of it.
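
Incidentally, you can watch that bookkeeping: the kernel reports each
process' address-space totals in /proc/<pid>/status, where VmSize is
the total virtual size and VmRSS is the portion resident in physical
RAM.  A little sketch that prints its own:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Pick the VmSize and VmRSS lines out of /proc/self/status. */
    FILE *fp = fopen("/proc/self/status", "r");
    char line[128];

    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    while (fgets(line, sizeof line, fp) != NULL)
        if (strncmp(line, "VmSize", 6) == 0 ||
            strncmp(line, "VmRSS", 5) == 0)
            fputs(line, stdout);
    fclose(fp);
    return 0;
}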

> So on and so forth.
> 
> I imagine that the logical address space is just a series of pointers
> telling the system where everything is. For example, the first logical
> address might point to physical memory for kernel space, while the
> second logical address unit might point to user space in virtual memory
> (similar to the example above). I imagine that's the whole value of
> logical address space - it provides the HAL.

The Linux page tables are actually a tree structure, but ultimately they map
a process' address space to either a page in real memory, or one in swap
space.  The virtual memory system is a "HAL" for memory, yes.
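
As a worked example of that mapping on classic IA-32 (without PAE): a
32-bit virtual address splits into a 10-bit page-directory index, a
10-bit page-table index, and a 12-bit offset into the 4KB page.
(Linux's own page-table code is a deeper, architecture-independent
tree; this just shows the basic split.)

#include <stdio.h>

int main(void)
{
    unsigned long vaddr = 0xBFFFE123UL;  /* an example user-space address */

    unsigned long pgd_index = (vaddr >> 22) & 0x3FF;  /* top 10 bits  */
    unsigned long pte_index = (vaddr >> 12) & 0x3FF;  /* next 10 bits */
    unsigned long offset    = vaddr & 0xFFF;          /* low 12 bits  */

    printf("0x%08lx -> directory %lu, table entry %lu, offset 0x%03lx\n",
           vaddr, pgd_index, pte_index, offset);
    return 0;
}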

> Any reference material or solid answers surrounding the mysteries of
> Linux's memory allocation would be appreciated. :)

Well, it's all there in the kernel, available as source.  You might also
want to visit www.linux-mm.org.

As a last note, the 3GB/1GB split, from the references I found, applies to
Linux on IA-32 architectures, but may not apply on other architectures. 
E.g., on 64-bit systems, limiting each process to the memory which can be
addressed in 32 bits seems silly, and most likely isn't done.

-- 
ZZzz   |\      _,,,---,,_     Travis S. Casey  <efindel@earthlink.net>
       /,`.-'`'    -.  ;-;;,_   No one agrees with me.  Not even me.
      |,4-  ) )-,_..;\ (  `'-'
     '---''(_/--'  `-'\_)
Travis
9/20/2003 11:18:30 PM

Dustin Darcy wrote:

> The norm for a memory allocation scheme is to use

..... STOP multi-posting ... you have answers in another NG
-- 
///    Michael J. Tobler: motorcyclist, surfer, skydiver,    \\\ 
\\\ and author: "Inside Linux", "C++ HowTo", "C++ Unleashed" ///
 \\\ http://pages.sbcglobal.net/mtobler/mjt_linux_page.html ///
One learns to itch where one can scratch. -- Ernest Bramah

mjt
9/20/2003 11:30:42 PM