OCR Specification focus:
‘The coulomb is the unit of charge; e = 1.6×10⁻¹⁹ C; proton +e, electron −e.’
Electric charge is one of the most fundamental properties of matter, governing the forces and interactions between particles that create electricity and magnetism throughout the universe.
The Concept of Electric Charge
Definition of Charge
Charge: A property of matter that causes it to experience a force when placed in an electromagnetic field.
Every charged object possesses an amount of electric charge, which can be either positive or negative. This property is intrinsic and cannot be created or destroyed—only transferred between bodies. The charge of subatomic particles determines how they interact through electromagnetic forces, one of the four fundamental forces in nature.
The unit of charge in the International System of Units (SI) is the coulomb (C).
The Coulomb
The Definition and Scale of the Coulomb
Coulomb (C): The SI unit of electric charge, defined as the amount of charge transferred by a current of one ampere in one second.
This definition links charge directly to electric current, which is the rate of flow of charge. A current of 1 ampere means that 1 coulomb of charge passes a point in a circuit each second.
The coulomb is a large quantity of charge in the context of individual particles. Since fundamental particles carry very small charges, it takes approximately 6.25 × 10¹⁸ electrons to make up one coulomb of charge.
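As a quick sanity check of this figure, here is a minimal Python sketch (the variable names are our own, purely illustrative):

```python
# How many elementary charges make up one coulomb?
e = 1.6e-19      # elementary charge in coulombs (A-Level value)
Q = 1.0          # one coulomb of charge

n = Q / e        # number of electrons (or protons) needed
print(f"{n:.3e} particles")   # ~6.250e+18
```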
The Elementary Charge
Fundamental Unit of Charge
Elementary charge (e): The magnitude of the charge carried by a single proton, equal to 1.6 × 10⁻¹⁹ coulombs.
The elementary charge represents the smallest unit of charge that can exist in isolation under normal circumstances. It is a universal constant, the same for all charged particles, and forms the basis for quantisation of charge, which means all observable charges are integer multiples of e.
Equation Representation of Charge Relationships
EQUATION
Total charge (Q) = n × e
Q = Total charge (C)
n = Number of charged particles (integer)
e = Elementary charge (1.6 × 10⁻¹⁹ C)
This relationship shows that charge is quantised, with any measurable charge being a whole-number multiple of the elementary charge. No object can possess a charge of, for example, 1.5e or 2.3e—it must always be an integer multiple, such as +3e or −2e.
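A minimal sketch of this quantisation rule, using a helper function of our own devising (not a standard routine), checks whether a given charge is a whole-number multiple of e:

```python
e = 1.6e-19  # elementary charge in coulombs

def multiple_of_e(charge, tolerance=1e-3):
    """Return n if charge ≈ n × e for integer n, else None.
    (Hypothetical helper, for illustration only.)"""
    n = charge / e
    return round(n) if abs(n - round(n)) < tolerance else None

print(multiple_of_e(4.8e-19))   # 3    -> +3e is an allowed charge
print(multiple_of_e(2.4e-19))   # None -> 1.5e is not allowed
```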
Proton and Electron Charges
The Proton
Proton: A positively charged subatomic particle found in the nucleus of an atom, carrying a charge of +e.
Protons are fundamental constituents of atomic nuclei, and their positive charge defines the atomic number of an element. For example, a hydrogen atom has one proton, giving it a single positive charge of +1.6 × 10⁻¹⁹ C. The number of protons in a nucleus determines the identity of the element, while their charge is balanced by the electrons surrounding the nucleus in a neutral atom.
The Electron
Electron: A negatively charged subatomic particle orbiting the nucleus, carrying a charge of −e, equal in magnitude but opposite in sign to the proton’s charge.
Electrons are responsible for electric current in conductors, moving through materials to create flow when a potential difference is applied. Their charge of −1.6 × 10⁻¹⁹ C ensures that atoms, when combined with protons, can achieve overall electrical neutrality.
The Relationship Between Proton and Electron Charge
The equality in magnitude between the proton and electron charges is one of the most precise and significant relationships in physics.

Bohr-model diagram of hydrogen explicitly labelling the proton with q = +e at the centre and the electron with q = −e in orbit. It visually encodes equal magnitude, opposite sign. Note: the circular orbit is a historical model; it’s still excellent for depicting charge signs at A-Level.
Despite their enormous difference in mass — the proton is about 1,836 times heavier than the electron — their charges are exactly equal in size and opposite in sign. This balance is what enables atoms to be electrically neutral when they contain equal numbers of protons and electrons.
If an atom gains or loses electrons, it becomes an ion, carrying a net positive or negative charge depending on the imbalance (a short sketch after the list below illustrates this). For instance:
Loss of electrons → Positive ion (cation)
Gain of electrons → Negative ion (anion)
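As an arithmetic illustration of ion formation (the function name is our own, not standard), the net charge follows directly from the particle counts:

```python
e = 1.6e-19  # elementary charge in coulombs

def ion_charge(protons, electrons):
    """Net charge: each proton contributes +e, each electron -e."""
    return (protons - electrons) * e

print(ion_charge(11, 10))  # sodium ion Na+:  +1.6e-19 C (cation)
print(ion_charge(17, 18))  # chloride ion Cl-: -1.6e-19 C (anion)
```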
Understanding Charge Polarity and Convention
The terms positive and negative were established historically by Benjamin Franklin, before the discovery of electrons. As a result, the direction of conventional current — defined as the flow of positive charge — is opposite to the actual flow of electrons in a metallic conductor. Thus, even though electrons move from the negative to the positive terminal, conventional current is described as moving from positive to negative.
This convention is maintained for consistency across all areas of electrical engineering and physics, despite its inversion of the physical electron movement.
Quantitative Significance of the Elementary Charge
The value of 1.6 × 10⁻¹⁹ C may seem small, but its significance is immense:
It defines the scale of all electrical phenomena.
It underpins the structure of matter, since charge interactions govern atomic bonding and stability.
It provides the foundation for the definition of the ampere (since the 2019 SI redefinition, the ampere is defined by fixing the exact value of e) and links microscopic and macroscopic physical quantities.
In modern physics, the elementary charge also appears in equations describing quantum phenomena, such as the photoelectric effect and Millikan’s oil-drop experiment, which measured e directly.

Labelled schematic of the Millikan oil-drop apparatus, showing the atomiser, viewing microscope, and parallel plates connected to a voltage source that create a uniform electric field. The setup allowed drops to be suspended or moved, revealing that charge comes in integer multiples of e. Note: the diagram adds apparatus details beyond the syllabus statement but solely to clarify how e was determined.
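To show how integer multiples of e emerge from droplet data, here is an illustrative Python sketch; the droplet charges below are invented for demonstration, not Millikan’s actual measurements:

```python
e = 1.6e-19  # elementary charge in coulombs

# Invented droplet charges, each an integer multiple of e
droplet_charges = [3.2e-19, 8.0e-19, 4.8e-19, 1.6e-19]

# Dividing each charge by e gives (near-)whole numbers:
# the signature of quantisation that Millikan observed.
for q in droplet_charges:
    print(f"q = {q:.1e} C  ->  q/e = {q / e:.2f}")
# Ratios printed: 2.00, 5.00, 3.00, 1.00
```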
This universality reflects how fundamental e is to the structure of the physical world.
The Role of the Coulomb and Elementary Charge in Measurement
In practical physics and engineering:
Large-scale electric currents are expressed in coulombs per second (amperes).
Microscopic or particle-scale interactions are measured in multiples of e.
The relationship between these scales enables scientists to connect the behaviour of vast electrical systems to individual particles within them.
This connection demonstrates how understanding the coulomb and elementary charge bridges the gap between macroscopic circuits and microscopic particle physics, forming a cornerstone concept in A-Level Physics and beyond.
FAQ
How was the quantisation of charge first demonstrated?
The first experimental evidence came from Millikan’s oil-drop experiment in 1909. By observing how tiny oil droplets moved in an electric field, Millikan found that every droplet’s charge was a whole-number multiple of a constant value, 1.6 × 10⁻¹⁹ C.
This demonstrated that electric charge cannot take on arbitrary values — it exists only in discrete packets of the elementary charge (e).
Why are the charges of the proton and electron exactly equal and opposite?
The equality of their charge magnitudes is thought to arise from the fundamental symmetries of nature. Both particles are believed to possess opposite but perfectly balanced charge values due to the conservation of electric charge at the quantum level.
Theoretical models, such as quantum electrodynamics (QED) and grand unified theories, suggest that this balance results from deeper relationships between the fundamental forces and particle families.
Can a particle have a charge smaller than e?
In ordinary matter, no. All stable, observable particles have charges that are integer multiples of e.
However, in high-energy physics, quarks (the constituents of protons and neutrons) have fractional charges of +⅔e or −⅓e, but they are never found in isolation due to a phenomenon called colour confinement.
As a result, any measurable object or free particle always carries an integer multiple of the elementary charge.
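As a quick arithmetic check of this point (a sketch, not part of the syllabus), the quark content of the proton and neutron combines back to integer multiples of e:

```python
from fractions import Fraction

up, down = Fraction(2, 3), Fraction(-1, 3)  # quark charges in units of e

proton = 2 * up + down    # uud
neutron = up + 2 * down   # udd

print(proton)   # 1 -> proton carries +1e
print(neutron)  # 0 -> neutron is neutral
```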
How many electrons make up one coulomb?
Dividing one coulomb by the elementary charge gives 1 ÷ (1.6 × 10⁻¹⁹) ≈ 6.25 × 10¹⁸, so one coulomb corresponds to roughly 6.25 × 10¹⁸ electrons or protons.
This means that even small macroscopic currents involve enormous numbers of charge carriers. For example:
A current of 1 A represents 6.25 × 10¹⁸ electrons passing a point every second.
This helps bridge the microscopic scale of particles and the macroscopic quantities used in electrical measurements.
Why is the coulomb rarely used at the atomic scale?
In atomic-scale interactions, the charge of individual particles is extremely small, so expressing such charges in coulombs leads to impractically tiny values (e.g., 1.6 × 10⁻¹⁹ C).
For microscopic contexts, scientists often refer directly to multiples of the elementary charge instead.
The coulomb is mainly useful for macroscopic systems, such as electric circuits or capacitors, where billions of electrons act collectively to produce measurable effects.
Practice Questions
Question 1 (2 marks)
State the charge, in coulombs, of a single electron and a single proton. Explain the relationship between these two values.
Mark Scheme:
1 mark: Correct value for electron charge, −1.6 × 10⁻¹⁹ C.
1 mark: Correct value for proton charge, +1.6 × 10⁻¹⁹ C, and clear statement that they are equal in magnitude but opposite in sign.
Question 2 (5 marks)
A metal object is found to have a total positive charge of 3.2 × 10⁻¹⁹ C.
(a) Calculate the number of electrons that must have been removed from the object to produce this charge.
(b) Explain what is meant by the term elementary charge, and describe how this value relates to the quantisation of charge in matter.
Mark Scheme:
(a) Calculation (2 marks)
1 mark: Correct use of equation Q = n × e.
1 mark: Correct calculation:
n = Q ÷ e = (3.2 × 10⁻¹⁹) ÷ (1.6 × 10⁻¹⁹) = 2 electrons removed.
(b) Explanation (3 marks)
1 mark: States that elementary charge (e) is the smallest unit of electric charge, equal to 1.6 × 10⁻¹⁹ C.
1 mark: Describes that all observable charges are integer multiples of e.
1 mark: Explains that this means charge is quantised, i.e., it cannot exist in fractional amounts of e.
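For checking part (a) numerically, a two-line Python sketch (illustrative only) applies Q = n × e in reverse:

```python
e = 1.6e-19   # elementary charge in coulombs
Q = 3.2e-19   # net positive charge on the object (from the question)

n = Q / e     # each electron removed leaves +e behind
print(round(n))  # 2 electrons removed
```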
