Chapter 11: Managing Memory

A Place for Everything

As a program executes, it generates data. That data may be read from disk or the network, entered by the user, or derived from calculations. The processor stores this data in memory so that it can be quickly recalled later. Modern memory is built from a large but finite number of electronic components called transistors and capacitors. If a program keeps generating new data, it will eventually run out of these components.
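To make that concrete, here is a minimal Rust sketch, not taken from the chapter, of a program that keeps generating data and never releases any of it. Left running, it eventually exhausts the memory available to the process.

```rust
fn main() {
    // Data the program "generates" as it runs. Nothing is ever released,
    // so the vector's heap allocation grows without bound.
    let mut generated: Vec<String> = Vec::new();
    let mut n: u64 = 0;
    loop {
        generated.push(format!("result #{n}"));
        n += 1;
        // Eventually the allocator can't find more memory and the
        // program aborts, or the operating system kills the process.
    }
}
```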

Developers from the 1980s reminisce about having to squeeze their data into just a few hundred kilobytes of memory. Today we often treat memory as an unlimited resource. However, if we build programs that run indefinitely or on constrained devices like phones or graphics cards, we too must pay attention to how much memory we consume. Otherwise we risk crashes and slow execution. We can't assume that every user has a computer as capable as our own.

Where data is placed in memory depends on what kind of data it is, how long it should live, and how much is known about it ahead of time. The answers to these questions lead us to organize memory into these four regions:

- Code: the program's compiled instructions, whose size is fixed before the program runs.
- Static: data that lives for the entire run of the program, such as global variables and string literals.
- Stack: data whose size is known at compile time and whose lifetime matches the function call that created it.
- Heap: data whose size or lifetime isn't known until the program runs.

Each process running on a computer gets allocated a memory space that is subdivided into these four regions. A process cannot access the memory of another.
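As a rough illustration, consider this hypothetical Rust sketch, which annotates a few values with the region where each would typically be placed. Real compilers are free to optimize placement, so treat the comments as a simplification rather than a guarantee.

```rust
// The compiled instructions for main (and every other function)
// sit in the code region.

static GREETING: &str = "hello"; // static region: known at compile time,
                                 // lives for the whole program

fn main() {
    let count = 3; // stack: fixed size, released automatically
                   // when main returns

    // The Vec's elements go on the heap: how many there will be isn't
    // known until runtime, and they can outlive the scope that created
    // them if ownership is handed off.
    let mut names = Vec::new();
    for i in 0..count {
        names.push(format!("name {i}"));
    }

    println!("{GREETING}, {:?}", names);
}
```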

In this chapter, we focus entirely on how heap memory is managed in various programming languages. In particular, we'll examine several common strategies for releasing heap memory when the data is no longer needed. By the chapter's end, you'll be able to answer the following questions:

You'll find that Rust's approach to managing memory is very different from the approaches you've seen in C and Java, and you won't be able to write much Rust code without developing a mental model of its behavior. We'll also look at some other features of Rust that are affected by its memory management strategy.
