Dear Computer

Chapter 11: Managing Memory

A Place for Everything

As a program executes, it generates data. That data is read from disk or the network, entered by the user, or produced by computation. The processor stores this data in memory so that it can be quickly recalled when code references it. Modern memory is made of a large but finite number of electronic components, namely transistors and capacitors. If a program keeps generating new data without throwing any away, it will eventually run out of these components.

Developers from the 1980s reminisce about needing to squeeze their data into just a few hundred kilobytes of memory. Today's computers have 8 gigabytes or more, so we often treat memory as an unlimited resource. However, if we build programs that run forever, process large data sets, or execute on constrained hardware like phones and graphics cards, we too must pay attention to how much memory we consume. If we run out, our program will crash or slow down.

Where data is placed in memory depends on what kind of data it is, how long it should live, and how much is known about it ahead of time. The answers to these questions lead us to organize memory into these four regions:

- Code: the program's machine instructions
- Static: global data whose size is fixed at compile time
- Stack: local variables and call frames of active functions
- Heap: dynamically allocated data whose size or lifetime isn't known until runtime

Each process running on a computer is allocated a memory space that is subdivided into these four regions. A process cannot access the memory of another.
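As a rough sketch of how these regions behave in practice, here is a short Rust example (the language this chapter builds toward) contrasting stack and heap allocation. The variable names are illustrative, not from the original text:

```rust
fn main() {
    // `x` has a fixed size known at compile time, so it lives on the stack.
    let x: i32 = 42;

    // `Box::new` requests memory from the heap. The pointer `b` lives on
    // the stack, while the boxed value it points to lives on the heap.
    let b: Box<i32> = Box::new(7);

    // A `Vec` keeps its length and capacity on the stack but stores its
    // elements in a heap buffer that can grow at runtime.
    let mut v: Vec<i32> = Vec::new();
    v.push(x);
    v.push(*b);

    println!("{}", v.iter().sum::<i32>()); // prints 49
}
```

When `main` returns, the stack frame is popped automatically; how and when the heap allocations behind `b` and `v` are released is exactly the question the rest of this chapter explores.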

In this chapter, we focus entirely on how heap memory is managed in various programming languages. In particular, we'll examine several common strategies for releasing heap memory when the data is no longer needed. By the chapter's end, you'll be able to answer the following questions:

We'll find that Rust's approach to managing memory is very different from the approaches we've seen in C and Java, and we won't be able to write much Rust code without developing a mental model of its behavior. We'll also look at some other features of Rust that are affected by its memory management strategy.

Manual Release →