OCR GCSE Computer Science Notes

1.1.3 Common CPU Components

The CPU (Central Processing Unit) is the brain of a computer system, containing core components that work together to process data and instructions.

The CPU and the Fetch-Execute Cycle

The fetch-execute cycle (also known as the fetch-decode-execute cycle) is the process that the CPU uses to retrieve instructions from memory, decode them, and execute them. This cycle repeats continuously while the computer is powered on. The essential components that enable this cycle are:

  • Arithmetic Logic Unit (ALU)

  • Control Unit (CU)

  • Cache

  • Registers

Each plays a distinct and essential role in the cycle and overall CPU function.

Arithmetic Logic Unit (ALU)

Role and Purpose

The Arithmetic Logic Unit (ALU) is responsible for carrying out all arithmetic and logical operations within the CPU. This includes:

  • Arithmetic operations: such as addition, subtraction, multiplication, and division.

  • Logic operations: such as comparing two values (e.g., equal to, greater than, less than) and performing logical operations like AND, OR, and NOT.

The ALU is a critical part of the CPU because it performs the actual computation and decision-making processes.

Function During the Fetch-Execute Cycle

During the fetch-execute cycle, the ALU plays a central role in the execute stage:

  • Once an instruction is fetched and decoded, if the instruction requires a mathematical calculation or a logical comparison, the ALU carries out that operation.

  • The ALU receives input from the registers, performs the operation, and then returns the result to a register or to memory.

For example (see the sketch after this list):

  • If the instruction is to add two numbers, the CU will signal the ALU to perform the addition.

  • If the instruction is to compare two values, the ALU will execute the comparison and send the result back to the CU or a register.
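As a rough illustration, the Python sketch below shows how an ALU-style function might handle these operations. The function name and operation codes are invented for teaching purposes and do not correspond to any real instruction set.

```python
# Minimal illustration of ALU-style operations (names are invented for teaching).
def alu(operation, a, b):
    """Perform a basic arithmetic or logic operation on two inputs."""
    if operation == "ADD":          # arithmetic operation
        return a + b
    elif operation == "SUB":        # arithmetic operation
        return a - b
    elif operation == "AND":        # logic operation (bitwise AND)
        return a & b
    elif operation == "OR":         # logic operation (bitwise OR)
        return a | b
    elif operation == "COMPARE":    # logic comparison; result goes back to the CU or a register
        return a == b
    else:
        raise ValueError(f"Unknown operation: {operation}")

# The CU would signal which operation the ALU should carry out:
print(alu("ADD", 3, 4))        # 7
print(alu("COMPARE", 3, 4))    # False
```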

Control Unit (CU)

Role and Purpose

The Control Unit (CU) is responsible for managing and coordinating the activities of the CPU. It acts like a supervisor, ensuring all parts of the CPU work together smoothly and in the correct order.

Main responsibilities include:

  • Decoding instructions that are fetched from memory.

  • Directing the operation of the ALU, registers, and other components.

  • Sending control signals to coordinate movement of data and execution of instructions.

  • Regulating the fetch-execute cycle by sequencing the steps involved.

The CU does not process data itself—it manages how data flows through the CPU.

Function During the Fetch-Execute Cycle

The CU is heavily involved in every stage of the fetch-execute cycle (a simple sketch of the whole cycle follows these steps):

  1. Fetch:

    • The CU sends a signal to memory to retrieve the next instruction.

    • It coordinates with the Program Counter (PC) and Memory Address Register (MAR) to get the address of the instruction.

    • The instruction is then fetched from memory into the Memory Data Register (MDR) and passed to the Current Instruction Register (CIR).

  2. Decode:

    • The CU analyzes the instruction in the CIR to determine what needs to be done.

    • It identifies which components (ALU, registers, etc.) need to be activated.

  3. Execute:

    • If needed, it instructs the ALU to perform the operation.

    • It also manages data movement between registers, memory, and the ALU.

    • Once the instruction is executed, the CU updates the Program Counter (PC) to fetch the next instruction.
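The sequence above can be pictured as a small simulation. The sketch below is a toy model in Python; the opcodes (LOAD, ADD, STORE, HALT), the instruction format, and the memory layout are all invented for illustration and are far simpler than real machine code.

```python
# A toy fetch-decode-execute loop. Memory holds (opcode, operand) pairs for simplicity;
# the opcodes and the program itself are invented purely for illustration.
memory = {
    0: ("LOAD", 10),    # copy the value at address 10 into the accumulator
    1: ("ADD", 11),     # add the value at address 11 to the accumulator
    2: ("STORE", 12),   # write the accumulator back to address 12
    3: ("HALT", None),  # stop the cycle
    10: 5, 11: 7, 12: 0,
}

pc = 0          # Program Counter: address of the next instruction
acc = 0         # Accumulator: holds results from the ALU

while True:
    # Fetch: PC -> MAR, memory[MAR] -> MDR -> CIR, then increment the PC
    mar = pc
    mdr = memory[mar]
    cir = mdr
    pc += 1

    # Decode: the CU examines the opcode and operand held in the CIR
    opcode, operand = cir

    # Execute: the CU signals the ALU or memory as required
    if opcode == "LOAD":
        acc = memory[operand]
    elif opcode == "ADD":
        acc = acc + memory[operand]      # the ALU performs the addition
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        break

print(memory[12])   # 12 (5 + 7)
```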

Cache

Role and Purpose

Cache is a small amount of high-speed memory located inside or very close to the CPU. It stores frequently used data and instructions to reduce the time the CPU takes to access them from the main memory (RAM).

Key characteristics:

  • Much faster than RAM.

  • Holds recently accessed or frequently used data.

  • There are typically multiple levels of cache:

    • L1 cache (fastest, smallest, closest to the CPU core)

    • L2 cache (larger and slightly slower)

    • L3 cache (shared between CPU cores, if applicable)

Without cache, the CPU would have to access RAM much more frequently, slowing down the entire system.
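One way to see the benefit is a simple average access time calculation. The timings and hit rate below are illustrative values only, not measurements of any real processor.

```python
# Illustrative figures (in nanoseconds) - not real measurements.
cache_access_time = 1      # time to read from cache on a hit
ram_access_time = 100      # time to read from RAM on a miss
hit_rate = 0.95            # fraction of accesses found in the cache

# Average (effective) memory access time with and without a cache
with_cache = hit_rate * cache_access_time + (1 - hit_rate) * (cache_access_time + ram_access_time)
without_cache = ram_access_time

print(f"Average access time with cache:    {with_cache:.1f} ns")   # 6.0 ns
print(f"Average access time without cache: {without_cache:.1f} ns")  # 100.0 ns
```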

Function During the Fetch-Execute Cycle

The cache improves efficiency during the fetch stage:

  • When the CPU requests an instruction or piece of data, the cache is checked first.

    • If the data is in the cache (cache hit), it is retrieved much faster.

    • If it is not in the cache (cache miss), it must be fetched from RAM, which takes more time.

  • The cache stores instructions and data recently used by the CPU, making it highly likely that future requests can be served quickly.

In this way, cache plays a vital role in reducing fetch time and improving the overall speed of the CPU.
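A cache lookup can be pictured as a small, fast store that is checked before the slower main memory. The sketch below is a teaching simplification in Python; real caches work on fixed-size blocks and are managed entirely in hardware.

```python
# Simplified picture of a cache check before going to RAM (hardware does this automatically).
ram = {addr: addr * 2 for addr in range(1000)}   # pretend main memory
cache = {}                                        # small, fast store

def read(address):
    if address in cache:                 # cache hit: fast path
        print(f"HIT  at {address}")
        return cache[address]
    print(f"MISS at {address}")          # cache miss: fetch from RAM and keep a copy
    value = ram[address]
    cache[address] = value
    return value

read(42)   # MISS - fetched from RAM and cached
read(42)   # HIT  - served from the cache
```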

Registers

Role and Purpose

Registers are the smallest and fastest type of memory within the CPU. They are used to hold data temporarily during the execution of instructions. Each register has a specific role.

Important registers include:

  • Program Counter (PC) – holds the address of the next instruction to be executed.

  • Memory Address Register (MAR) – holds the address in memory where data or instructions will be fetched or stored.

  • Memory Data Register (MDR) – holds the actual data that is being transferred to or from memory.

  • Current Instruction Register (CIR) – holds the current instruction being decoded and executed.

  • Accumulator – holds the results of calculations carried out by the ALU.

Because registers are located inside the CPU, they can be accessed almost instantly, far faster than RAM or even cache.

Function During the Fetch-Execute Cycle

Registers play an active role in all parts of the fetch-execute cycle:

  1. Fetch:

    • The PC holds the memory address of the next instruction. This is copied into the MAR.

    • The instruction is fetched from memory, and the data is placed into the MDR.

    • The instruction is then copied into the CIR.

    • The PC is incremented so it points to the next instruction in the sequence.

  2. Decode:

    • The instruction in the CIR is analyzed by the CU to determine the required operation.

  3. Execute:

    • Data to be processed is fetched from the appropriate registers.

    • The ALU uses the Accumulator to store temporary results of arithmetic and logic operations.

    • Any results are returned to the appropriate registers or sent back to memory.

Because registers are involved in every step, they are vital for the rapid processing of instructions.
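The register transfers in the fetch stage can be traced step by step, as in the short sketch below. The address and instruction shown are made-up examples used only to show how the values move between registers.

```python
# Tracing the registers through a single fetch (the instruction is an invented example).
memory = {100: "ADD 23"}   # pretend the instruction at address 100 is "ADD 23"

pc = 100                   # PC holds the address of the next instruction
mar = pc                   # 1. PC copied into the MAR
mdr = memory[mar]          # 2. Instruction fetched from memory into the MDR
cir = mdr                  # 3. Instruction copied into the CIR ready for decoding
pc += 1                    # 4. PC incremented to point at the next instruction

print(f"PC={pc}  MAR={mar}  MDR={mdr!r}  CIR={cir!r}")
# PC=101  MAR=100  MDR='ADD 23'  CIR='ADD 23'
```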

Summary of Component Roles in Fetch-Execute Cycle

It is useful to mentally connect each component to the three stages of the fetch-execute cycle:

  • Fetch:

    • CU coordinates the instruction fetch.

    • PC, MAR, MDR, and CIR manage address and instruction flow.

    • Cache may provide the instruction if it’s already stored.

  • Decode:

    • CU decodes the instruction.

    • CIR holds the current instruction.

  • Execute:

    • CU signals the appropriate components.

    • ALU carries out logical/arithmetic tasks.

    • Registers temporarily hold input and output data.

    • Accumulator may store intermediate results.

Understanding how these components interact and contribute during each stage is critical for mastering CPU operations. Each part is uniquely essential, and efficient CPU performance relies on the smooth collaboration of all these elements.

FAQ

Why are registers the fastest form of memory in a computer system?

Registers are the fastest form of memory in a computer system because they are built directly into the CPU using high-speed static RAM (SRAM) technology and operate at the same clock speed as the processor. Unlike cache and RAM, registers do not require complex address decoding or data retrieval mechanisms. Each register has a specific function and a dedicated name or identifier, which eliminates the need for searching or memory lookup procedures. This streamlined access allows for near-instant data retrieval and storage. Additionally, registers are directly wired into the CPU’s data paths, meaning data can move from one part of the processor to another in a single clock cycle. Their small size—typically 8, 16, 32, or 64 bits—also contributes to their speed since smaller memory units are quicker to access and manage. Because of these design choices, registers are used for the most critical data handling tasks during instruction processing.

How does the CPU decide which data and instructions to store in cache?

The CPU uses predictive algorithms and memory management techniques to decide which data and instructions to store in cache. Most CPUs employ what’s called temporal and spatial locality to guide this process. Temporal locality means the CPU assumes that recently accessed data or instructions will be needed again soon, so it keeps them in cache. Spatial locality suggests that data stored near recently accessed addresses might also be used soon, so nearby blocks are cached as well. When the CPU accesses memory, it brings not just the requested data but often adjacent data blocks too. Advanced CPUs also include cache controllers that use algorithms like Least Recently Used (LRU) or First-In, First-Out (FIFO) to manage what stays in cache and what gets replaced. These techniques allow the cache to keep the most relevant data, improving hit rates and reducing the time the CPU spends waiting for instructions or data from slower main memory.
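Spatial locality can be illustrated by loading a whole block of neighbouring addresses whenever a single address misses. The block size and memory contents in the sketch below are arbitrary values chosen for the example.

```python
# Illustration of spatial locality: on a miss, a whole block of nearby addresses is cached.
BLOCK_SIZE = 4                                   # arbitrary block size for the example
ram = {addr: addr * 10 for addr in range(64)}    # pretend main memory
cache = {}

def read(address):
    if address in cache:
        return cache[address]                    # hit: neighbouring data was pre-loaded earlier
    block_start = (address // BLOCK_SIZE) * BLOCK_SIZE
    for a in range(block_start, block_start + BLOCK_SIZE):
        cache[a] = ram[a]                        # miss: copy the whole block, not just one word
    return cache[address]

read(9)                 # miss: addresses 8-11 are brought into the cache
print(10 in cache)      # True - a later access to address 10 will now be a hit
```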

What happens when the cache memory becomes full?

When the cache memory becomes full, the CPU must choose which existing data to remove to make space for the new data. This process is known as cache replacement, and it is handled automatically by the cache controller using a replacement policy. Common replacement strategies include Least Recently Used (LRU), which removes the data that hasn’t been accessed for the longest time; First-In, First-Out (FIFO), which removes the oldest data regardless of usage; and Random Replacement, which selects a block at random to be replaced. The choice of strategy can affect the efficiency and speed of the CPU. Once a block of cache is selected for removal, it is overwritten with the new data or instruction. If the data in that block has been modified but not yet saved to main memory, it is written back to RAM before being replaced—this is called a write-back policy. These mechanisms ensure that cache remains effective and up to date.
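A Least Recently Used policy can be sketched in Python using an OrderedDict, which remembers the order in which entries were added. The three-entry capacity is arbitrary; real caches are much larger and implement replacement in hardware.

```python
from collections import OrderedDict

# A tiny Least Recently Used (LRU) cache: when full, the entry unused for longest is evicted.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                       # cache miss
        self.data.move_to_end(key)            # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)     # evict the least recently used entry

cache = LRUCache(capacity=3)                  # capacity chosen only for the example
for key in ["A", "B", "C"]:
    cache.put(key, key.lower())
cache.get("A")                                # "A" becomes the most recently used entry
cache.put("D", "d")                           # cache is full, so "B" (least recently used) is evicted
print(list(cache.data.keys()))                # ['C', 'A', 'D']
```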

Do all CPUs have the same number and type of registers?

No, the number and type of registers vary between CPU architectures. While most CPUs share a common set of general-purpose registers and specialized registers—such as the Program Counter (PC), Memory Address Register (MAR), Memory Data Register (MDR), Current Instruction Register (CIR), and Accumulator—their exact number, size, and function depend on the processor design. For example, a simple CPU might have a handful of 8-bit registers, while a modern multi-core processor could have dozens of 64-bit registers, including floating-point registers, index registers, and status registers used for complex operations. Some CPUs also have register banks, which allow them to switch quickly between sets of registers to support multitasking or interrupt handling. Additionally, RISC (Reduced Instruction Set Computer) architectures typically have more general-purpose registers than CISC (Complex Instruction Set Computer) architectures to reduce memory access and increase performance. This flexibility in register design allows different CPUs to optimize for power, speed, or complexity.

How does the Control Unit manage timing and synchronization within the CPU?

The Control Unit (CU) manages timing and synchronization by working closely with the CPU's clock to coordinate the sequence of operations within each fetch-execute cycle. The clock generates regular electrical pulses called clock cycles, which act as the heartbeat of the CPU, ensuring that operations are carried out in a fixed and predictable rhythm. The CU uses these pulses to issue control signals at precise intervals, guiding the movement of data between components like registers, the ALU, and memory. These signals determine when each component should read or write data, ensuring that no two components attempt to use the same bus or register at the same time, which could cause conflicts. The CU also ensures that operations requiring multiple cycles—such as accessing memory—are timed correctly and don’t interfere with other tasks. By managing these signals and relying on the clock's timing, the CU keeps all CPU components synchronized and operating smoothly without bottlenecks or errors.

Practice Questions

Describe the roles of the ALU and Control Unit in the fetch-execute cycle.

The Arithmetic Logic Unit (ALU) is responsible for carrying out arithmetic operations like addition and subtraction, and logical operations such as comparisons during the execute stage of the fetch-execute cycle. When the Control Unit (CU) decodes an instruction that requires a calculation or decision, it signals the ALU to perform the operation. The Control Unit manages the entire fetch-execute cycle by fetching instructions from memory, decoding them, and sending control signals to coordinate components like the ALU and registers. It ensures instructions are executed in the correct sequence and manages data flow between the CPU and memory.

Explain how cache and registers improve CPU performance during the fetch-execute cycle.

Cache improves CPU performance by storing frequently accessed data and instructions closer to the CPU, reducing the time taken to fetch them compared to accessing RAM. During the fetch stage, if the required instruction is already in cache, the CPU retrieves it faster, increasing efficiency. Registers are even faster and are used to hold small amounts of data temporarily during instruction processing. They store addresses, instructions, and results of operations, enabling the CPU to access and manipulate data instantly. By minimizing delays in data retrieval and storage, both cache and registers significantly speed up the fetch-execute cycle.
