In the early days of
computers, "core" memory was a drastically limited resource,
requiring careful use of storage hierarchies to execute even
single jobs. Today main computer memories are much larger
and are commonly shared among multiple tasks. Higher capacities
allow more freedom in planning storage hierarchies and allow
many tasks to be run completely in main memory.
Topics
Memory management: Review of physical memory and memory management
hardware; overlays, swapping, and partitions; paging and segmentation;
page placement and replacement policies; working sets and thrashing;
caching.
Memory Hierarchies
Physical memory
Main memory
Holds instructions and data
Connected to CPU by a bus, 25-133 MHz, transferring 8, 16, 32, 64 ... bits per cycle
Typically 10 ns to 100 ns access times
May be organized into "banks" with overlapping access cycles
Relatively inexpensive
less than $100 for 256 MB
less than $1000 for 4 GB
Cache memory
Transparently buffers main memory
May have multiple caches (e.g. one for instructions, one for data)
May have multiple levels of caches (e.g. level 1 in CPU, level 2 on motherboard, level 3 ...)
Typically 2-10 times faster than main memory, matched to CPU speed
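For intuition, a direct-mapped lookup gives a rough feel for how a cache decides hit versus miss. The C sketch below is illustrative only: the line count, line size, and names are invented, and it is not a model of any particular cache, which does all of this in hardware, transparently to software.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NLINES     64               /* illustrative: 64 lines x 64 bytes */
    #define LINE_SHIFT 6

    /* One cache line: a tag identifying which memory block it holds. */
    struct line { bool valid; uint32_t tag; };
    static struct line cache[NLINES];

    /* Return true on a hit; on a miss the line would be (re)filled
       from the next level (L2, L3, or main memory). */
    bool lookup(uint32_t addr) {
        uint32_t block = addr >> LINE_SHIFT;
        uint32_t index = block % NLINES;
        uint32_t tag   = block / NLINES;
        if (cache[index].valid && cache[index].tag == tag)
            return true;                /* hit: served at cache speed */
        cache[index].valid = true;      /* miss: fill from slower memory */
        cache[index].tag   = tag;
        return false;
    }

    int main(void) {
        printf("first access:  %s\n", lookup(0x1040) ? "hit" : "miss");
        printf("second access: %s\n", lookup(0x1040) ? "hit" : "miss");
        return 0;
    }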
Registers
Integral part of CPU
Data registers usually visible to programmers
Instruction registers and pipelines usually not visible to programmers
Processing context management registers usually visible to programmers at different levels
Secondary storage
Disks
Tapes
Memory management hardware
Simplest case: simple mapping of instruction references
to memory locations
index registers provide offsets into blocks of memory
no protection of one process in memory from another
More complex case: multiple address spaces
Special registers used as base registers for instructions and
for various types of data (e.g. stacks).
Bases may be shifted to allow a larger total address space (segmentation),
as sketched below.
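A rough sketch of address formation with multiple base registers, assuming an invented four-segment machine with 8086-style shifted bases; the register names and sizes are illustrative, not those of any particular architecture:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: four base registers, one per use
       (e.g. code, data, stack, extra).  Shifting the base left
       lets a small register cover a larger address space. */
    #define NSEG 4
    static uint32_t seg_base[NSEG] = { 0x0, 0x1000, 0x2000, 0x3000 };

    uint32_t form_address(unsigned seg, uint32_t offset) {
        /* base shifted by 4 bits, in the style of 8086 segmentation */
        return (seg_base[seg] << 4) + offset;
    }

    int main(void) {
        printf("seg 2 + 0x10 -> 0x%X\n",
               (unsigned) form_address(2, 0x10));
        return 0;
    }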
Simple relocation and protection hardware
Multiple address spaces with protection
Base and limit registers in CPU
Multiple jobs may safely share memory
Used to swap contiguous blocks of memory to disk
Base added to memory references, bounded by limit
Multiple register sets may be used
User code not able to change registers which control
its own mappings.
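A minimal sketch of base-and-limit translation, assuming one base and one limit register per job; the struct and field names are invented for illustration:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical per-job relocation registers. */
    struct mapping { uint32_t base; uint32_t limit; };

    /* Every memory reference is checked against the limit and then
       relocated by the base; a reference past the limit traps. */
    uint32_t relocate(const struct mapping *m, uint32_t vaddr) {
        if (vaddr >= m->limit) {
            fprintf(stderr, "protection fault at 0x%X\n", (unsigned) vaddr);
            exit(1);
        }
        return m->base + vaddr;
    }

    int main(void) {
        struct mapping job = { .base = 0x40000, .limit = 0x10000 };
        printf("0x1234 -> 0x%X\n", (unsigned) relocate(&job, 0x1234));
        relocate(&job, 0x20000);   /* out of bounds: faults */
        return 0;
    }

Because the hardware adds the base on every reference, a whole job can be moved (or swapped out and back in elsewhere) just by reloading the registers.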
Page management hardware
Memory divided into pages of fixed size
Mapping keyed to address range rather than type of use
Each virtual page unmapped, mapped to a page of main memory, or
mapped to a location in a file
May require a very large set of registers or page tables in memory
Ambiguous page faults may be a problem
Entries of large page tables may be cached in Translation Lookaside Buffers
(TLBs)
Swapping and page management may coexist.
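The translation step can be sketched as a single-level page-table lookup; real MMUs add multi-level tables, permission bits, and TLBs, and the sizes and names below are invented:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                 /* 4 KB pages */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NPAGES     256                /* tiny illustrative address space */

    /* One page-table entry: either unmapped, or mapped to a frame. */
    struct pte { bool present; uint32_t frame; };
    static struct pte page_table[NPAGES];

    /* Translate a virtual address; a missing entry is a page fault,
       which the OS would resolve from a file or from swap space. */
    bool translate(uint32_t vaddr, uint32_t *paddr) {
        uint32_t vpn = vaddr >> PAGE_SHIFT;
        uint32_t off = vaddr & (PAGE_SIZE - 1);
        if (vpn >= NPAGES || !page_table[vpn].present)
            return false;                 /* page fault */
        *paddr = (page_table[vpn].frame << PAGE_SHIFT) | off;
        return true;
    }

    int main(void) {
        page_table[3] = (struct pte){ .present = true, .frame = 42 };
        uint32_t pa;
        if (translate((3u << PAGE_SHIFT) + 0x10, &pa))
            printf("translated to 0x%X\n", (unsigned) pa);
        return 0;
    }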
Swapping
Jobs brought into memory in large, contiguous blocks of various
sizes
When space is exhausted, complete jobs must be swapped out to
secondary storage to make room for new jobs.
Very efficient transfers, but high cost for minimal memory accesses
Paged virtual memory
Jobs brought into memory in small, not-necessarily contiguous blocks
of uniform size.
Single pages may be evicted to make room for new jobs.
Inefficient transfers, prone to thrashing, but low cost for minimal
memory accesses
Prone to complex resonances between paging system and process
data access pattern.
Working set
Set of pages referenced in some appropriate window (time, ...)
Ideally most of the working set is resident
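One way to make the working-set definition concrete is to record, per page, the tick of its most recent reference; the sketch below assumes a made-up window measured in reference ticks rather than real time:

    #include <stdint.h>
    #include <stdio.h>

    #define NPAGES 8
    #define WINDOW 3                    /* working-set window, in ticks */

    static uint32_t last_ref[NPAGES];   /* tick of most recent reference */

    /* A page is in the working set if it was referenced within
       the last WINDOW ticks. */
    void print_working_set(uint32_t now) {
        printf("working set at tick %u:", (unsigned) now);
        for (int p = 0; p < NPAGES; p++)
            if (last_ref[p] != 0 && now - last_ref[p] < WINDOW)
                printf(" %d", p);
        printf("\n");
    }

    int main(void) {
        uint32_t tick = 0;
        int refs[] = { 1, 2, 1, 5, 1, 2 };
        for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
            last_ref[refs[i]] = ++tick;
        print_working_set(tick);        /* pages 1, 2, and 5 are resident candidates */
        return 0;
    }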
Page Replacement Algorithms
Delphic Oracle (optimal: evict the page that will not be needed for the longest time)
Least Recently Used (predict that the pages used most
recently are the pages most likely to be needed now)
Not Recently Used (crude LRU, most recent clock tick
versus all others)
Working Set Model (preload pages in the working set)
FIFO, cyclic (clock)
Problems -- thrashing, resonances
Process-oriented (local) vs. system-oriented (global) paging
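As one example, the cyclic (clock) algorithm approximates LRU with a single reference bit per frame; the sketch below uses an invented four-frame setup:

    #include <stdbool.h>
    #include <stdio.h>

    #define NFRAMES 4

    struct frame { int page; bool referenced; };
    static struct frame frames[NFRAMES];
    static int hand = 0;                /* the clock hand */

    /* Pick a victim frame: sweep past frames whose reference bit is set,
       clearing the bit as we go; the first unreferenced frame loses. */
    int clock_evict(void) {
        for (;;) {
            if (!frames[hand].referenced) {
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            frames[hand].referenced = false;   /* second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void) {
        for (int i = 0; i < NFRAMES; i++)
            frames[i] = (struct frame){ .page = i, .referenced = true };
        frames[2].referenced = false;          /* page 2 not recently used */
        printf("evict frame %d\n", clock_evict());   /* prints 2 */
        return 0;
    }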