4.4.2 Quantum Wave Packets
Fourier synthesis performs the complementary operation: producing a wavefunction with desired characteristics by combining sinusoidal functions \(e^{ikx}\) in the proper proportions. This is especially useful for producing a "wave packet" of limited spatial extent. Synthesizing wave packets from monochromatic (single-wavenumber) plane waves is an important application in quantum mechanics, because plane-wave functions are not normalizable: since functions such as \(Ae^{i(kx-\omega t)}\) extend to infinity, the area under their squared magnitude is infinite. Although non-normalizable functions are disqualified as physically realizable quantum wavefunctions, sinusoidal wavefunctions limited to a finite region are normalizable, and such wave-packet functions may be synthesized from monochromatic plane waves.
To do that, it's necessary to combine multiple plane waves in just the "right proportions" so that they add constructively in the desired region and destructively outside of that region. We are going to use the inverse Fourier transform:
(4.15) \( \psi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \phi(k)\, e^{ikx}\, dk. \)
As described earlier, for each wavenumber k, the wavenumber spectrum φ(k) tells us how much of the complex sinusoid \(e^{ikx}\) to add into the mix to synthesize ψ(x).
We can think of the Fourier transform as a process that maps functions of one "domain," such as position or time, onto another "domain," such as wavenumber or frequency. Functions related by the Fourier transform, e.g., ψ(x) and φ(k), are called "Fourier-transform pairs," and their variables are called "conjugate variables." Such variables always obey an uncertainty principle, which means that simultaneous precise knowledge of both variables is not possible.
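As a concrete illustration of this pairing (a minimal numerical sketch added here, not part of the original text, assuming NumPy is available), the forward transform referenced below as Eq. 4.14 and the inverse transform of Eq. 4.15 can be evaluated by direct quadrature for a test function, and the round trip recovers that function:

```python
import numpy as np

# Grids for position x and wavenumber k (illustrative ranges and resolutions)
x = np.linspace(-20, 20, 1601)
k = np.linspace(-10, 10, 1201)

# Test position-space function (arbitrary choice): a unit-width Gaussian
psi = np.exp(-x**2 / 2.0)

# Forward transform (Eq. 4.14 convention): phi(k) = 1/sqrt(2*pi) * integral of psi(x) e^{-ikx} dx
phi = np.trapz(psi * np.exp(-1j * np.outer(k, x)), x, axis=1) / np.sqrt(2 * np.pi)

# Inverse transform (Eq. 4.15): psi(x) = 1/sqrt(2*pi) * integral of phi(k) e^{+ikx} dk
psi_back = np.trapz(phi * np.exp(1j * np.outer(x, k)), k, axis=1) / np.sqrt(2 * np.pi)

print(np.max(np.abs(psi_back - psi)))  # small residual: the round trip recovers psi(x)
```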
To understand such wave packets, consider the case in which we add together plane-wave functions \(e^{ikx}\), each with an amplitude of unity, over a range of wavenumbers Δk centered on a single wavenumber k₀. The wavenumber spectrum φ(k) for this case is shown in Fig. 4.14a. The position wavefunction ψ(x) can be found as follows:
\( \psi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \phi(k)\, e^{ikx}\, dk = \frac{1}{\sqrt{2\pi}} \int_{k_0-\Delta k/2}^{k_0+\Delta k/2} (1)\, e^{ikx}\, dk, \)
(4.22) \( \psi(x) = \frac{1}{\sqrt{2\pi}}\, \frac{1}{ix}\, e^{ikx} \Big|_{k_0-\Delta k/2}^{k_0+\Delta k/2} = \frac{-i}{\sqrt{2\pi}\,x} \left[ e^{i(k_0+\Delta k/2)x} - e^{i(k_0-\Delta k/2)x} \right] = \frac{-i}{\sqrt{2\pi}\,x}\, e^{ik_0 x} \left[ e^{i(\Delta k/2)x} - e^{-i(\Delta k/2)x} \right]. \)
Recalling the inverse Euler relation

(4.23) \( \sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i}, \)

or
(4.24) \( e^{i(\Delta k/2)x} - e^{-i(\Delta k/2)x} = 2i \sin\!\left( \frac{\Delta k\, x}{2} \right), \)
Eq. 4.22 becomes

(4.25) \( \psi(x) = \frac{-i}{\sqrt{2\pi}\,x}\, e^{ik_0 x} \left[ e^{i(\Delta k/2)x} - e^{-i(\Delta k/2)x} \right] = \frac{-i}{\sqrt{2\pi}\,x}\, e^{ik_0 x} \left[ 2i \sin\!\left( \frac{\Delta k\, x}{2} \right) \right]. \)
If we multiply both numerator and denominator by 𝛥𝑘/2 and do a bit of rearranging:
(4.26) \( \psi(x) = \frac{(\Delta k/2)\,2}{(\Delta k\,x/2)\sqrt{2\pi}}\, e^{ik_0 x} \sin\!\left( \frac{\Delta k\, x}{2} \right) = \frac{\Delta k}{\sqrt{2\pi}}\, e^{ik_0 x}\, \frac{\sin(\Delta k\,x/2)}{\Delta k\,x/2}. \)
The term \(e^{ik_0 x}\) has a real part equal to cos(k₀x), so as x varies this term oscillates between +1 and −1, completing one cycle (2π of phase) in a distance λ₀ = 2π/k₀. In Fig. 4.14b these rapid oscillations can be seen repeating at integer values of x.
The rightmost fraction in Eq. 4.26 is the well-known form sin(ax)/(ax), called the "sinc" function, which has a large central region (the "main lobe") and a series of smaller but still significant maxima ("sidelobes") that decrease with distance from the central maximum. This function has its maximum at x = 0 (as can be verified using L'Hôpital's rule) and repeatedly crosses through zero between its lobes. The first zero-crossing occurs where the numerator sin(Δkx/2) first reaches zero, that is, at Δkx/2 = π, or x = 2π/Δk.
The important point is that the sinc-function term determines the spatial extent of the main lobe of the wave packet represented by ψ(x). In the case shown in Fig. 4.14, Δk is taken to be 10% of k₀, so the first zero-crossing occurs at
\( x = \frac{2\pi}{\Delta k} = \frac{2\pi}{0.1\,k_0} = \frac{2\pi}{0.1\,(2\pi/\lambda_0)} = 10\,\lambda_0, \)
where λ₀ is taken as one distance unit in this plot, so the zero-crossing occurs at x = 10. We can also see the effect of increasing or decreasing the width of the wavenumber spectrum: if Δk is increased to 50% of k₀, the first zero-crossing occurs at x = 2, and if Δk is decreased to 5% of k₀, the first zero-crossing occurs at x = 20.
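These numbers can be checked directly. The short sketch below (an illustration added here, not from the text, assuming NumPy) builds ψ(x) by numerically integrating the rectangular spectrum of Eq. 4.22 with k₀ = 2π and Δk = 0.1k₀, compares the result with the closed form of Eq. 4.26, and prints the location of the first zero-crossing of the envelope:

```python
import numpy as np

k0 = 2 * np.pi              # wavelength lambda_0 = 1 distance unit, as in the text's example
dk = 0.1 * k0               # spectrum width: 10% of k0
x = np.linspace(-15, 15, 1201)
k = np.linspace(k0 - dk / 2, k0 + dk / 2, 1001)

# Direct superposition: psi(x) = 1/sqrt(2*pi) * integral of (1) e^{ikx} dk over the band
psi_num = np.trapz(np.exp(1j * np.outer(x, k)), k, axis=1) / np.sqrt(2 * np.pi)

# Closed form of Eq. 4.26; np.sinc(u) = sin(pi*u)/(pi*u), so np.sinc(arg/pi) = sin(arg)/arg
arg = dk * x / 2
psi_exact = (dk / np.sqrt(2 * np.pi)) * np.exp(1j * k0 * x) * np.sinc(arg / np.pi)

print(np.max(np.abs(psi_num - psi_exact)))                    # tiny difference: Eq. 4.26 matches the integral
print("first zero of the envelope at x =", 2 * np.pi / dk)    # 10 wavelengths for dk = 0.1*k0
```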
If we continue decreasing the width until Δk = 0, then φ(k) consists of the single wavenumber k₀. In that case the "width" of ψ(x) becomes infinite, since the first zero-crossing of the envelope never occurs. To see how this works mathematically, start with the definition of the inverse Fourier transform (Eq. 4.15) written for a function of the wavenumber variable k′:
(4.27) \( \psi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \phi(k')\, e^{ik'x}\, dk'. \)

Then
(4.28) \( \phi(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \psi(x)\, e^{-ikx}\, dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \left[ \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \phi(k')\, e^{ik'x}\, dk' \right] e^{-ikx}\, dx, \)
where the prime on k′ is included to distinguish the wavenumbers over which the integral forming ψ(x) is taken from the wavenumber k of the spectrum φ(k). Gathering both factors of 1/√(2π) into the term multiplying φ(k′), we may rearrange the above equation as follows:
(4.29) \( \phi(k) = \int_{-\infty}^{\infty} \phi(k') \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i(k'-k)x}\, dx \right] dk'. \)
This equation tells us that the term in square brackets must "sift" through the function φ(k′) and pull out only the value φ(k); in this case the integral doesn't actually sum anything, the function φ simply takes on the value φ(k) and walks straight out of the integral.
The entity that performs exactly this operation is the Dirac delta function, which can be defined as
(4.30) \( \delta(x'-x) = \begin{cases} \infty & \text{if } x' = x, \\ 0 & \text{otherwise.} \end{cases} \)
A far more useful definition doesn't show what the Dirac delta function is; it shows what the Dirac delta function does:
(4.31) \( \int_{-\infty}^{\infty} f(x')\, \delta(x'-x)\, dx' = f(x). \)
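This sifting property can be seen numerically by standing in for the delta function with a normalized Gaussian of shrinking width ε (a sketch added for illustration, assuming NumPy; the test function f and the width values are arbitrary):

```python
import numpy as np

def sift(f, x, eps):
    """Approximate integral of f(x') * delta(x' - x) dx', with delta replaced by a
    normalized Gaussian of width eps (unit area, centered on x)."""
    xp = np.linspace(x - 50 * eps, x + 50 * eps, 20001)
    delta_approx = np.exp(-(xp - x) ** 2 / eps**2) / (eps * np.sqrt(np.pi))
    return np.trapz(f(xp) * delta_approx, xp)

f = lambda u: np.cos(u) + 0.5 * u**2          # arbitrary smooth test function
for eps in (1.0, 0.1, 0.01):
    print(eps, sift(f, x=1.3, eps=eps))       # converges to f(1.3) ~ 1.112 as eps shrinks
```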
This sifting property means we can write Eq. 4.29 as
(4.32) \( \phi(k) = \int_{-\infty}^{\infty} \phi(k') \left[ \delta(k'-k) \right] dk', \)
and equating the terms in square brackets in Eqs. 4.29 and 4.32 gives
(4.33) \( \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i(k'-k)x}\, dx = \delta(k'-k). \)
This relationship is extremely useful when analyzing functions synthesized from combinations of sinusoids, as is another version that can be found by plugging the expression for φ(k) from Eq. 4.14 into the inverse Fourier transform of Eq. 4.15. That leads to
(4.34) \( \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ik(x'-x)}\, dk = \delta(x'-x). \)
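To see Eq. 4.34 emerge in practice, the k-integral can be truncated at ±L, which gives sin[L(x′ − x)]/[π(x′ − x)]; as L grows, this kernel sifts out f(x) more and more accurately (a sketch added for illustration, assuming NumPy; the test function f and the values of L are arbitrary):

```python
import numpy as np

def delta_L(u, L):
    """Truncated form of Eq. 4.34: (1/2pi) * integral_{-L}^{L} e^{iku} dk = sin(L*u)/(pi*u)."""
    return (L / np.pi) * np.sinc(L * u / np.pi)   # finite at u = 0

f = lambda u: np.exp(-u**2) * np.cos(3 * u)       # arbitrary smooth test function
xp = np.linspace(-40, 40, 400001)
x = 0.7
for L in (5, 50, 500):
    print(L, np.trapz(f(xp) * delta_L(xp - x, L), xp))   # approaches f(0.7) as L increases
print("f(0.7) =", f(0.7))
```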
Returning to the case of Δk = 0 and k = k₀, i.e., ψ(x) = \(e^{ik_0 x}\), Eq. 4.14 yields
(4.35) \( \phi(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \psi(x)\, e^{-ikx}\, dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{ik_0 x}\, e^{-ikx}\, dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{i(k_0-k)x}\, dx = \sqrt{2\pi}\, \delta(k_0 - k). \)
So the wavenumber spectrum of ψ(x) = \(e^{ik_0 x}\) is an infinitely narrow spike at wavenumber k = k₀, while ψ(x) itself has infinite spatial extent. This is one extreme case: as the width of φ(k) approaches zero, the width of the envelope of ψ(x) approaches infinity.
The other extreme occurs as Δk approaches infinity, for which the width of the envelope of ψ(x) approaches zero. Consider, for example, the case in which the width Δk of the wavenumber spectrum has been increased so that φ(k) extends with constant amplitude from k = 0 to k = 2k₀; in the limit of an infinitely wide spectrum, ψ(x) becomes an infinitely narrow spike in position. To see this, insert ψ(x) = δ(x) into the Fourier transform to determine the corresponding wavenumber spectrum φ(k):
(4.36) \( \phi(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \psi(x)\, e^{-ikx}\, dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \delta(x)\, e^{-ikx}\, dx. \)
The Dirac delta function inside the integral sifts the function \(e^{-ikx}\), so the only contribution comes from the value of that function at x = 0:
(4.37) \( \phi(k) = \frac{1}{\sqrt{2\pi}}\, e^{0} = \frac{1}{\sqrt{2\pi}}. \)
This constant value means that φ(k) has uniform amplitude at all wavenumbers k. Note that in Fig. 4.4 the amplitude of φ(k) has been scaled to a value of one, which is related to the maximum value of ψ(x) by the factor Δk/√(2π); since Δk = 2k₀ and k₀ = 2π, that factor works out to about 5.01.
This inverse relationship between the widths of functions of conjugate variables is the basis of the uncertainty principle.
4.5 Position and Momentum Wavefunctions and Operators
In quantum mechanics, wavefunctions may be represented in either position or momentum form, so this section is all about position and momentum wavefunctions, eigenfunctions, and operators - specifically, how to represent those functions and operators in both position space and momentum space.
We have seen that wavenumber (k) and momentum (p) are connected by the de Broglie relation
(3.4) 𝑝 = ℏ𝑘.
This means that the momentum wavefunction 𝜙̄(𝑝) is the Fourier transform of the position wavefunction 𝜓(𝑥):
(4.38) \( \bar{\phi}(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \psi(x)\, e^{-i(p/\hbar)x}\, dx, \)
Additionally, the inverse Fourier transform of 𝜙̄(𝑝) gives 𝜓(𝑥):
(4.39) \( \psi(x) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \bar{\phi}(p)\, e^{i(p/\hbar)x}\, dp. \)
But since k = p/ℏ (so that dk = dp/ℏ), the inverse Fourier transform of Eq. 4.15 yields
(4.40) \( \psi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \bar{\phi}(p)\, e^{i(p/\hbar)x}\, \frac{dp}{\hbar}, \)
which differs from Eq. 4.39 by a factor of 1/√ℏ. It is worth noting that in some texts, including this one, that factor of 1/√ℏ is absorbed into the function φ̄; other texts absorb the full factor of 1/ℏ into φ̄, and in those texts the factor in front of the integrals in Eqs. 4.38 and 4.39 is 1/√(2π).
The relationship between position and momentum wavefunctions helps us understand the Heisenberg Uncertainty Principle, one of the iconic laws of quantum mechanics. In addition to the rectangular wavenumber spectrum φ(k) and the sin(ax)/(ax) position wavefunction discussed above, it is desirable to have a position-space wavefunction that decreases smoothly toward zero amplitude without those extended lobes. We therefore introduce the Gaussian wave packet, beginning with the standard definition of a Gaussian function of position x:
(4.41) \( G(x) = A\, e^{-(x-x_0)^2/2\sigma_x^2}, \)
where A is the amplitude (maximum value) of G(x), x₀ is the center location (the x-value of the maximum), and σₓ is the standard deviation, which is half the width of the function between the points at which G(x) is reduced to 1/√e (about 61%) of its maximum value.
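As a quick check of that statement (a small sketch added here, assuming NumPy; the parameter values are arbitrary), evaluating the Gaussian at x₀ ± σₓ should give 1/√e of its peak value:

```python
import numpy as np

A, x0, sigma_x = 2.0, 1.5, 0.8      # arbitrary illustrative amplitude, center, and width
G = lambda x: A * np.exp(-(x - x0) ** 2 / (2 * sigma_x**2))

# At x = x0 +/- sigma_x the Gaussian falls to 1/sqrt(e) ~ 0.607 of its maximum value
print(G(x0 + sigma_x) / G(x0), G(x0 - sigma_x) / G(x0), 1 / np.sqrt(np.e))
```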
Gaussian functions are instructive as quantum wavefunctions because of several useful characteristics, including the following:
a) The square of a Gaussian is also a Gaussian
b) The Fourier transform of a Gaussian is also a Gaussian
The first is useful because the probability density is related to the square of the wavefunction, and the second is useful because position-space and momentum-space wavefunctions are related by the Fourier transform.
We can see the benefits of the smooth shape of the Gaussian in Fig. 4.19. In position space, the "Gaussian wave packet" for a plane wave with momentum p₀ is
(4.42) \( \psi(x) = A\, e^{-(x-x_0)^2/2\sigma_x^2}\, e^{i(p_0/\hbar)x}, \)
where σₓ represents the standard deviation of ψ(x), not the standard deviation of the probability distribution (which is also a Gaussian, as we'll see later).
It is a good idea to normalize ψ(x):
\( 1 = \int_{-\infty}^{\infty} \psi^{*}\psi\, dx = \int_{-\infty}^{\infty} \left[ A\, e^{-(x-x_0)^2/2\sigma_x^2}\, e^{i(p_0/\hbar)x} \right]^{*} \left[ A\, e^{-(x-x_0)^2/2\sigma_x^2}\, e^{i(p_0/\hbar)x} \right] dx = \int_{-\infty}^{\infty} |A|^2\, e^{-(x-x_0)^2/\sigma_x^2}\, e^{i(-p_0+p_0)x/\hbar}\, dx = |A|^2 \int_{-\infty}^{\infty} e^{-(x-x_0)^2/\sigma_x^2}\, dx. \)
The definite integral can be evaluated using
(4.43) \( \int_{-\infty}^{\infty} e^{-(ax^2+bx+c)}\, dx = \sqrt{\frac{\pi}{a}}\; e^{(b^2-4ac)/4a}. \)
So we have \(a = 1/\sigma_x^2\), \(b = -2x_0/\sigma_x^2\), and \(c = x_0^2/\sigma_x^2\):
\( 1 = |A|^2 \sqrt{\frac{\pi}{1/\sigma_x^2}}\; e^{\left[(-2x_0/\sigma_x^2)^2 - 4(1/\sigma_x^2)(x_0^2/\sigma_x^2)\right]/4(1/\sigma_x^2)} = |A|^2 \sqrt{\sigma_x^2\,\pi}\; e^{(4x_0^2 - 4x_0^2)/4\sigma_x^2} = |A|^2\, \sigma_x \sqrt{\pi}. \)
Solving for A gives

(4.44) \( A = \frac{1}{(\sigma_x\sqrt{\pi})^{1/2}}, \)
and the normalized position wave function is
(4.45) \( \psi(x) = \frac{1}{(\sigma_x\sqrt{\pi})^{1/2}}\, e^{-(x-x_0)^2/2\sigma_x^2}\, e^{i(p_0/\hbar)x}. \)
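A short numerical check (added here for illustration, assuming NumPy; the values of σₓ, x₀, and p₀ are arbitrary) confirms that the wavefunction of Eq. 4.45 is normalized:

```python
import numpy as np

hbar = 1.0545718e-34                       # J*s
sigma_x, x0, p0 = 2e-10, 0.0, 5e-24        # illustrative width (m), center (m), momentum (kg*m/s)

x = np.linspace(x0 - 10 * sigma_x, x0 + 10 * sigma_x, 20001)
psi = (1 / (sigma_x * np.sqrt(np.pi)) ** 0.5) \
      * np.exp(-(x - x0) ** 2 / (2 * sigma_x**2)) * np.exp(1j * (p0 / hbar) * x)

# Integral of |psi|^2 over all x should equal 1 for the packet of Eq. 4.45
print(np.trapz(np.abs(psi) ** 2, x))
```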
If we take the origin of coordinates to be at x₀, so that x₀ = 0, the normalized momentum wavefunction φ̄(p) is
\( \bar{\phi}(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \psi(x)\, e^{-i(p/\hbar)x}\, dx = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \frac{1}{(\sigma_x\sqrt{\pi})^{1/2}}\, e^{-x^2/2\sigma_x^2}\, e^{-i[(p-p_0)/\hbar]x}\, dx = \frac{1}{\sqrt{2\pi\hbar}}\, \frac{1}{(\sigma_x\sqrt{\pi})^{1/2}} \int_{-\infty}^{\infty} e^{-x^2/2\sigma_x^2 - i[(p-p_0)/\hbar]x}\, dx. \)
Using the same definite integral given earlier with \(a = 1/2\sigma_x^2\), \(b = i(p-p_0)/\hbar\), and \(c = 0\) gives
\( \bar{\phi}(p) = \frac{1}{\sqrt{2\pi\hbar}}\, \frac{1}{(\sigma_x\sqrt{\pi})^{1/2}} \sqrt{\frac{\pi}{a}}\; e^{(b^2-4ac)/4a} = \frac{1}{\sqrt{2\pi\hbar}}\, \frac{\sqrt{2\pi\sigma_x^2}}{(\sigma_x\sqrt{\pi})^{1/2}}\; e^{-(p-p_0)^2\sigma_x^2/2\hbar^2} = \left[ \frac{\sigma_x^2}{\pi\hbar^2} \right]^{1/4} e^{-(p-p_0)^2\sigma_x^2/2\hbar^2}. \)
This is also a Gaussian, since it can be written
(4.46) \( \bar{\phi}(p) = \left[ \frac{\sigma_x^2}{\pi\hbar^2} \right]^{1/4} e^{-(p-p_0)^2/2\sigma_p^2}, \)
in which the standard deviation of the momentum wavefunction is given by σₚ = ℏ/σₓ. Multiplying the standard deviations of the position and momentum wavefunctions therefore gives

(4.47) \( \sigma_x \sigma_p = \hbar. \)
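Equations 4.46 and 4.47 can be cross-checked numerically (a sketch added here, assuming NumPy and setting ℏ = 1 for convenience; σₓ and p₀ are arbitrary): transforming the Gaussian packet to momentum space by direct quadrature reproduces the Gaussian of Eq. 4.46 with σₚ = ℏ/σₓ.

```python
import numpy as np

hbar = 1.0                                  # hbar set to 1 for convenience
sigma_x, p0 = 1.7, 2.0                      # arbitrary packet width and central momentum
sigma_p = hbar / sigma_x                    # Eq. 4.47 rearranged

x = np.linspace(-12 * sigma_x, 12 * sigma_x, 4001)
psi = (1 / (sigma_x * np.sqrt(np.pi)) ** 0.5) * np.exp(-x**2 / (2 * sigma_x**2)) \
      * np.exp(1j * (p0 / hbar) * x)

p = np.linspace(p0 - 8 * sigma_p, p0 + 8 * sigma_p, 801)

# Eq. 4.38: phi_bar(p) = 1/sqrt(2*pi*hbar) * integral of psi(x) e^{-i(p/hbar)x} dx
phi_num = np.array([np.trapz(psi * np.exp(-1j * pp * x / hbar), x) for pp in p]) \
          / np.sqrt(2 * np.pi * hbar)

# Eq. 4.46 with sigma_p = hbar / sigma_x
phi_exact = (sigma_x**2 / (np.pi * hbar**2)) ** 0.25 * np.exp(-(p - p0) ** 2 / (2 * sigma_p**2))

print(np.max(np.abs(phi_num - phi_exact)))  # small residual confirms sigma_p = hbar / sigma_x
```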
It takes just one more step to get to the Heisenberg Uncertainty Principle. The "uncertainty" in the Heisenberg Uncertainty Principle is defined with respect to the width of the probability distribution, which is narrower than the width of the Gaussian wavefunction ψ(x). Recall that the probability density is proportional to ψ*ψ. That means the width Δx of the probability distribution can be found from (referring to Eq. 4.41)
(4.48) \( e^{-x^2/2(\Delta x)^2} = \left( e^{-x^2/2\sigma_x^2} \right)^{*} \left( e^{-x^2/2\sigma_x^2} \right) = e^{-x^2/\sigma_x^2}. \)
So 2(Δx)² = σₓ², or σₓ = √2 Δx. The same argument applies to the momentum-space wavefunction φ̄(p), so it's also true that σₚ = √2 Δp, where Δp represents the width of the probability distribution in momentum space. (In some books σₓ represents the standard deviation of the probability distribution, and in those books the exponential term in ψ(x) is \(e^{-(x-x_0)^2/4\sigma_x^2}\).)
Writing Eq. 4.47 in terms of 𝛥𝑥 and 𝛥𝑝 gives
(4.49) \( \sigma_x \sigma_p = (\sqrt{2}\,\Delta x)(\sqrt{2}\,\Delta p) = \hbar, \)

or
(4.50) \( \Delta x\, \Delta p = \frac{\hbar}{2}. \)
This is the uncertainty relation for Gaussian wavefunctions. For any other function, the product of the widths is greater than this value. So the general uncertainty relation between conjugate variables is
(4.51) \( \Delta x\, \Delta p \geq \frac{\hbar}{2}. \)
This is the usual form of the Heisenberg Uncertainty Principle. It says that for a pair of conjugate or "incompatible" observables, there is a fundamental limit to the precision with which both may be known, since the product of their probability-distribution uncertainties must be equal to or larger than half of the reduced Planck constant ℏ.
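The chain from Eq. 4.48 to Eq. 4.50 can be verified numerically (a sketch added here, assuming NumPy with ℏ = 1; σₓ and p₀ are arbitrary): computing Δx and Δp as the standard deviations of the position and momentum probability densities of the Gaussian packet gives a product of ℏ/2.

```python
import numpy as np

hbar, sigma_x, p0 = 1.0, 1.7, 2.0           # hbar set to 1; arbitrary packet parameters
sigma_p = hbar / sigma_x

x = np.linspace(-12 * sigma_x, 12 * sigma_x, 4001)
p = np.linspace(p0 - 10 * sigma_p, p0 + 10 * sigma_p, 4001)

# Probability densities |psi|^2 and |phi_bar|^2 for the normalized Gaussian packet
prob_x = np.exp(-x**2 / sigma_x**2) / (sigma_x * np.sqrt(np.pi))
prob_p = np.exp(-(p - p0) ** 2 / sigma_p**2) / (sigma_p * np.sqrt(np.pi))

def std_dev(prob, grid):
    mean = np.trapz(grid * prob, grid)
    return np.sqrt(np.trapz((grid - mean) ** 2 * prob, grid))

dx, dp = std_dev(prob_x, x), std_dev(prob_p, p)
print(dx * dp, hbar / 2)                    # the product equals hbar/2 for the Gaussian packet
```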
Another important aspect of incompatible observables concerns their operators: the order in which those operators are applied matters. To see why, consider how an operator and its eigenfunctions are related to the expectation value of the observable. The expectation value of a continuous observable such as position x is given by (Eq. 2.59)
(4.52) \( \langle x \rangle = \int_{-\infty}^{\infty} x\, P(x)\, dx, \)
where P(x) represents the probability density. For normalized ψ(x), the probability density is given by |ψ(x)|² = ψ*(x)ψ(x), so we have
(4.53) \( \langle x \rangle = \int_{-\infty}^{\infty} x\, |\psi(x)|^2\, dx = \int_{-\infty}^{\infty} [\psi(x)]^{*}\, x\, [\psi(x)]\, dx. \)
Compare this to the expression for the expectation value of an observable x associated with the operator X̂, written using the inner product:
(2.60) \( \langle x \rangle = \langle \psi |\, \hat{X}\, | \psi \rangle = \int_{-\infty}^{\infty} [\psi(x)]^{*}\, \hat{X}\, [\psi(x)]\, dx. \)
And what are the eigenfunctions of the X̂ operator? The eigenvalue equation for the position operator acting on the first of its eigenfunctions, ψ₁(x), is
(4.54) \( \hat{X}\, \psi_1(x) = x_1\, \psi_1(x), \)
where x₁ represents the eigenvalue. Since the action of the position operator X̂ is to multiply the function on which it operates by x, it must also be true that
(4.55) \( \hat{X}\, \psi_1(x) = x\, \psi_1(x). \)
Setting the right sides of Eqs. 4.54 and 4.55 equal to one another gives
(4.56) \( x\, \psi_1(x) = x_1\, \psi_1(x). \)
This means that the variable x times the first eigenfunction ψ₁ equals the single eigenvalue x₁ times that same function. So ψ₁(x) must be zero everywhere except at the single location x = x₁, which is exactly the behavior of the Dirac delta function δ(x − x₁). In the same way, δ(x − x₂) does the trick for ψ₂ with eigenvalue x₂, as does δ(x − x₃) for ψ₃, and so forth. Thus the eigenfunctions of X̂ are an infinite set of Dirac delta functions δ(x − x′).
We can do the same analysis for the momentum operator in momentum space:
(4.57) \( \langle p \rangle = \int_{-\infty}^{\infty} p\, |\bar{\phi}(p)|^2\, dp = \int_{-\infty}^{\infty} [\bar{\phi}(p)]^{*}\, p\, [\bar{\phi}(p)]\, dp, \)
(4.58) \( \langle p \rangle = \langle \bar{\phi} |\, \hat{P}_p\, | \bar{\phi} \rangle = \int_{-\infty}^{\infty} [\bar{\phi}(p)]^{*}\, \hat{P}_p\, [\bar{\phi}(p)]\, dp, \)
where in P̂ₚ the uppercase P represents the momentum operator and the lowercase p in the subscript indicates the momentum-basis version of that operator. And just as in the case of the position operator, we have
(4.59) \( \hat{P}_p\, \bar{\phi}_1(p) = p\, \bar{\phi}_1(p) = p_1\, \bar{\phi}_1(p). \)
So the eigenfunctions of P̂ₚ are an infinite set of Dirac delta functions δ(p − p′), just as in the case of X̂. This means that in any operator's own space, the action of that operator on each of its eigenfunctions is simply to multiply that eigenfunction by the observable, and the eigenfunctions are Dirac delta functions.
But it's often useful to apply an operator to functions that reside in other spaces - for example, applying the momentum operator P̂ to a position wavefunction ψ(x). Suppose we have ψ(x) and we wish to find the expectation value of momentum. We can use the position-basis version of the momentum operator, P̂ₓ:
(4.60) \( \langle p \rangle = \int_{-\infty}^{\infty} [\psi(x)]^{*}\, \hat{P}_x\, [\psi(x)]\, dx. \)
We can use the inverse Fourier transform to find the position-space momentum eigenfunctions:
\( \psi(x) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \bar{\phi}(p)\, e^{i(p/\hbar)x}\, dp = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \delta(p - p')\, e^{i(p/\hbar)x}\, dp = \frac{1}{\sqrt{2\pi\hbar}}\, e^{i(p'/\hbar)x}, \)
where p′ is the continuous variable representing all possible values of momentum. Renaming that variable p instead of p′ makes the position representation of the momentum eigenfunctions
(4.61) \( \psi_p(x) = \frac{1}{\sqrt{2\pi\hbar}}\, e^{i(p/\hbar)x}, \)
where the subscript p identifies this as a momentum eigenfunction (here expressed in the position basis). Similarly, P̂ₓ denotes the position-space representation of the momentum operator P̂, which must satisfy
(4.62) \( \hat{P}_x\, \psi_p(x) = p\, \psi_p(x), \)
(4.63) \( \hat{P}_x \left[ \frac{1}{\sqrt{2\pi\hbar}}\, e^{i(p/\hbar)x} \right] = p \left[ \frac{1}{\sqrt{2\pi\hbar}}\, e^{i(p/\hbar)x} \right]. \)
The factor of p can be pulled out by taking the spatial derivative of ψₚ(x):
\( \frac{\partial}{\partial x} \left[ \frac{1}{\sqrt{2\pi\hbar}}\, e^{i(p/\hbar)x} \right] = \frac{ip}{\hbar} \left[ \frac{1}{\sqrt{2\pi\hbar}}\, e^{i(p/\hbar)x} \right], \)

so multiplying by ℏ/i gives

\( \frac{\hbar}{i}\, \frac{\partial}{\partial x} \left[ \frac{1}{\sqrt{2\pi\hbar}}\, e^{i(p/\hbar)x} \right] = \frac{\hbar}{i} \left( \frac{ip}{\hbar} \right) \left[ \frac{1}{\sqrt{2\pi\hbar}}\, e^{i(p/\hbar)x} \right] = p \left[ \frac{1}{\sqrt{2\pi\hbar}}\, e^{i(p/\hbar)x} \right]. \)
So we get
(4.64) \( \hat{P}_x = \frac{\hbar}{i}\, \frac{\partial}{\partial x} = -i\hbar\, \frac{\partial}{\partial x}. \)
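A quick sanity check of Eq. 4.64 (a sketch added here, assuming NumPy with ℏ = 1; the value of p is arbitrary): applying −iℏ ∂/∂x as a centered finite difference to the plane wave of Eq. 4.61 returns approximately p times the same function.

```python
import numpy as np

hbar, p = 1.0, 3.7                              # hbar set to 1; arbitrary momentum eigenvalue
x = np.linspace(-10, 10, 200001)
psi_p = np.exp(1j * (p / hbar) * x) / np.sqrt(2 * np.pi * hbar)   # Eq. 4.61

# Position-space momentum operator of Eq. 4.64, applied as a centered finite difference
P_psi = -1j * hbar * np.gradient(psi_p, x)

# Away from the endpoints, the ratio P_psi / psi_p should equal the eigenvalue p
print(np.mean((P_psi / psi_p)[1:-1]).real)      # ~ 3.7
```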
The same approach can be used for position eigenfunctions in momentum space:
(4.65) \( \bar{\phi}_x(p) = \frac{1}{\sqrt{2\pi\hbar}}\, e^{-i(p/\hbar)x}, \)
and the momentum-space representation of the position operator is

(4.66) \( \hat{X}_p = i\hbar\, \frac{\partial}{\partial p}. \)
Given these position-basis representations of the position and momentum operators, we can determine the so-called commutator [X̂, P̂], which is defined as [X̂, P̂] = X̂P̂ − P̂X̂:
\( [\hat{X}, \hat{P}]\psi = (\hat{X}\hat{P} - \hat{P}\hat{X})\psi = \left[ x(-i\hbar)\frac{d}{dx} - (-i\hbar)\frac{d}{dx}\,x \right]\psi = x(-i\hbar)\frac{d\psi}{dx} - (-i\hbar)\frac{d(x\psi)}{dx} \)

\( = (-i\hbar)\,x\frac{d\psi}{dx} - (-i\hbar)\,\psi\frac{dx}{dx} - (-i\hbar)\,x\frac{d\psi}{dx} = i\hbar\,\psi. \)
Now we can write the commutator of the position and momentum operators as
(4.67) \( [\hat{X}, \hat{P}] = i\hbar. \)
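This canonical commutation relation can also be checked symbolically (a sketch added here, assuming SymPy is available), by applying X̂ = x and P̂ = −iℏ d/dx to an arbitrary function ψ(x):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', real=True, positive=True)
psi = sp.Function('psi')(x)                   # arbitrary test function of x

X = lambda f: x * f                           # position operator in the position basis
P = lambda f: -sp.I * hbar * sp.diff(f, x)    # momentum operator, Eq. 4.64

commutator = sp.simplify(X(P(psi)) - P(X(psi)))
print(commutator)                             # I*hbar*psi(x), i.e. [X, P] = i*hbar
```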
Using the momentum-space representations of X̂ and P̂ leads to the same result, as we should expect. The nonzero value of the commutator [X̂, P̂] (called the "canonical commutation relation") has extremely important implications, since it shows that the order in which certain operators are applied matters. Operators such as X̂ and P̂ are "non-commuting," which means they don't share the same eigenfunctions. Remember that the process of measuring a quantum observable causes the wavefunction to collapse to an eigenfunction of the corresponding operator: a position measurement collapses the wavefunction to an eigenfunction of the position operator, but if we then make a momentum measurement, the wavefunction collapses to a momentum eigenfunction, which means the system is now in a different state, so our earlier position measurement is no longer relevant. This is the essence of quantum indeterminacy.