The Bisection Method is a bracketing method used to find the root of a function. It requires two initial guesses, a and b, such that f(a) and f(b) have opposite signs (i.e., the function changes sign over the interval).
Formula:
Midpoint, \( c = \frac{a + b}{2} \)
We check the sign of f(c). If f(c) is close to 0 (within tolerance), c is the root. Otherwise, replace a or b with c so that the sign change over the interval is maintained.
Example: f(x) = x³ - 4x - 9
Initial guesses: a = 2, b = 3
f(2) = 8 - 8 - 9 = -9, f(3) = 27 - 12 - 9 = 6 → sign change
Iteration 1: c = (2 + 3)/2 = 2.5
f(2.5) = 15.625 - 10 - 9 = -3.375 < 0 → replace a = 2 with 2.5
Repeat until the desired accuracy is achieved.
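Below is a minimal Python sketch of the bisection loop (the function name, tolerance, and iteration cap are illustrative choices, not part of the notes):

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2          # midpoint of the bracket
        if abs(f(c)) < tol:      # close enough to a root
            return c
        if f(a) * f(c) < 0:      # root lies in [a, c]
            b = c
        else:                    # root lies in [c, b]
            a = c
    return (a + b) / 2

print(bisection(lambda x: x**3 - 4*x - 9, 2, 3))  # ≈ 2.7065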
The False Position Method is also a bracketing method, but instead of taking the midpoint, it takes the point where the straight line (chord) joining (a, f(a)) and (b, f(b)) crosses the x-axis.
Formula:
\( c = \frac{a \cdot f(b) - b \cdot f(a)}{f(b) - f(a)} \)
This value of c replaces either a or b depending on the sign of f(c), similar to the bisection method.
Example: f(x) = x³ - 4x - 9
a = 2, f(a) = -9
b = 3, f(b) = 6
c = (2×6 - 3×(-9)) / (6 - (-9)) = (12 + 27)/15 = 2.6
f(2.6) = 17.576 - 10.4 - 9 = -1.824 < 0 → replace a = 2 with 2.6
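A matching sketch in the same style (names illustrative); the only change from bisection is how c is chosen:

```python
def false_position(f, a, b, tol=1e-6, max_iter=100):
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)  # x-intercept of the chord
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                    # root lies in [a, c]
            b, fb = c, fc
        else:                              # root lies in [c, b]
            a, fa = c, fc
    return c

print(false_position(lambda x: x**3 - 4*x - 9, 2, 3))  # ≈ 2.7065
```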
The Newton-Raphson Method is an open method and usually converges faster than bracketing methods. It uses the function and its derivative to approximate the root.
Formula:
\( x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \)
We start with an initial guess \( x_0 \) and iterate using the formula until we reach a root within a small tolerance.
Example: f(x) = x³ - 4x - 9, f'(x) = 3x² - 4
Initial guess: x₀ = 2.5
Iteration 1: x₁ = 2.5 - (2.5³ - 4×2.5 - 9)/(3×2.5² - 4) = 2.5 - (-3.375)/14.75 ≈ 2.729
Continue until convergence.
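A short Python sketch of the iteration (stopping rule and names are my own, illustrative):

```python
def newton_raphson(f, df, x0, tol=1e-6, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)   # Newton correction f(x)/f'(x)
        x -= step
        if abs(step) < tol:   # stop once updates become tiny
            return x
    return x

root = newton_raphson(lambda x: x**3 - 4*x - 9,
                      lambda x: 3*x**2 - 4, 2.5)
print(root)  # ≈ 2.7065
```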
The rate of convergence of the Newton-Raphson method is quadratic, meaning the error decreases very rapidly with each iteration.
If \( x^* \) is the true root, and \( e_n = x_n - x^* \) is the error at step \( n \), then:
\( e_{n+1} = \frac{f''(x^*)}{2f'(x^*)} \cdot (e_n)^2 + o\left((e_n)^2\right) \)
This shows that Newton-Raphson converges quadratically near the root.
Example: if the error at one step is 0.1, the error at the next step is ≈ 0.01, then ≈ 0.0001, and so on (assuming the constant factor is close to 1).
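A small numerical experiment (my own illustration) makes the squaring visible: iterate Newton on the earlier function and print the error against a heavily iterated reference root.

```python
f  = lambda x: x**3 - 4*x - 9
df = lambda x: 3*x**2 - 4

x_star = 2.5
for _ in range(50):                  # over-iterate to get a reference "true" root
    x_star -= f(x_star) / df(x_star)

x = 2.5
for n in range(5):
    print(n, abs(x - x_star))        # error roughly squares each iteration
    x -= f(x) / df(x)
```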
Finite differences are used to estimate derivatives and help in interpolation. They measure the difference between successive values in a table of data.
Types of differences: forward differences (Δyᵢ = yᵢ₊₁ - yᵢ), backward differences (∇yᵢ = yᵢ - yᵢ₋₁), and central differences (δ).
Example: y: 3, 7, 13, 21
Δy₀ = 7 - 3 = 4
Δ²y₀ = (13 - 7) - (7 - 3) = 2
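A small sketch that builds the whole forward difference table for this data (the helper name is illustrative):

```python
def difference_table(y):
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        # each row holds the differences of the row above
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

for row in difference_table([3, 7, 13, 21]):
    print(row)
# [3, 7, 13, 21] / [4, 6, 8] / [2, 2] / [0]
```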
The Shift Operator E is defined as:
E f(x) = f(x + h)
It shifts the argument of a function by a step h. It's related to the difference operator Δ:
Δ = E - 1
If f(x) = x², and h = 1: Then E f(x) = f(x+1) = (x+1)²
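A quick numeric check of the identity Δ = E - 1 on this example (purely illustrative):

```python
f = lambda x: x**2
h, x = 1, 3
Ef = f(x + h)          # shift operator: E f(x) = f(x + h)
delta_f = Ef - f(x)    # Δ f(x) = (E - 1) f(x)
print(Ef, delta_f)     # 16, 7  (indeed Δx² = 2x + 1 = 7 at x = 3)
```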
Newton's Forward Interpolation Formula is used for interpolation near the beginning of a table with equal intervals.
Formula:
\( f(x) = y_0 + u\Delta y_0 + \frac{u(u - 1)}{2!} \Delta^2 y_0 + \dots \), where \( u = \frac{x - x_0}{h} \)
Example: x: 1, 2, 3; y: 1, 8, 27 (y = x³)
Find y at x = 1.5:
Δy₀ = 8 - 1 = 7, Δy₁ = 27 - 8 = 19, Δ²y₀ = 19 - 7 = 12
u = (1.5 - 1)/1 = 0.5
f(1.5) = 1 + 0.5×7 + (0.5×(-0.5))/2 × 12 = 1 + 3.5 - 1.5 = 3.0
(The exact value is 1.5³ = 3.375; a quadratic through three points cannot reproduce a cubic exactly.)
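A compact Python sketch of the formula (assumes equally spaced x; the function name is illustrative):

```python
def newton_forward(x_vals, y_vals, x):
    h = x_vals[1] - x_vals[0]
    u = (x - x_vals[0]) / h
    diffs = list(y_vals)
    result, term = diffs[0], 1.0
    for k in range(1, len(y_vals)):
        # take one more level of forward differences
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        term *= (u - (k - 1)) / k    # builds u(u-1)...(u-k+1) / k!
        result += term * diffs[0]    # multiplied by Δᵏy₀
    return result

print(newton_forward([1, 2, 3], [1, 8, 27], 1.5))  # 3.0, matching the example
```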
Newton's Backward Interpolation Formula is used when the value to interpolate is near the end of the table.
Formula:
\( f(x) = y_n + u \nabla y_n + \frac{u(u+1)}{2!} \nabla^2 y_n + \dots \), where \( u = \frac{x - x_n}{h} \)
Example: x: 1, 2, 3; y: 1, 8, 27
Find y at x = 2.5:
∇y₂ = 27 - 8 = 19, ∇²y₂ = 19 - 7 = 12
u = (2.5 - 3)/1 = -0.5
f(2.5) = 27 + (-0.5)×19 + ((-0.5)×0.5)/2 × 12 = 27 - 9.5 - 1.5 = 16.0
(The exact value is 2.5³ = 15.625.)
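The backward counterpart of the sketch above, anchored at the last tabulated point (names illustrative):

```python
def newton_backward(x_vals, y_vals, x):
    h = x_vals[1] - x_vals[0]
    u = (x - x_vals[-1]) / h
    diffs = list(y_vals)
    result, term = diffs[-1], 1.0
    for k in range(1, len(y_vals)):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        term *= (u + (k - 1)) / k    # builds u(u+1)...(u+k-1) / k!
        result += term * diffs[-1]   # multiplied by ∇ᵏyₙ
    return result

print(newton_backward([1, 2, 3], [1, 8, 27], 2.5))  # 16.0, matching the example
```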
Newton's Divided Difference Formula is used when the x-values are not equally spaced. The formula is:
f(x) = f[x₀] + (x - x₀)f[x₀, x₁] + (x - x₀)(x - x₁)f[x₀, x₁, x₂] + ...
Example: x: 1, 2, 4; y: 1, 4, 16
f[x₀, x₁] = (4 - 1)/(2 - 1) = 3
f[x₁, x₂] = (16 - 4)/(4 - 2) = 6
f[x₀, x₁, x₂] = (6 - 3)/(4 - 1) = 1
f(x) = 1 + (x - 1)×3 + (x - 1)(x - 2)×1
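A sketch that builds the divided difference coefficients in place and evaluates the polynomial (names illustrative):

```python
def divided_difference(x, y, t):
    n = len(x)
    coef = list(y)                 # coef[i] becomes f[x0, ..., xi]
    for j in range(1, n):          # j-th order divided differences
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - j])
    result, prod = coef[0], 1.0
    for i in range(1, n):          # evaluate the Newton form at t
        prod *= (t - x[i - 1])
        result += coef[i] * prod
    return result

print(divided_difference([1, 2, 4], [1, 4, 16], 3))  # 9.0 (y = x², so exact)
```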
Lagrange's Interpolation Formula likewise works when the x-values are unequally spaced. The formula is:
\( f(x) = \sum_{i=0}^{n} \left[ y_i \prod_{j=0, j\neq i}^{n} \frac{(x - x_j)}{(x_i - x_j)} \right] \)
Example: x: 1, 2, 4; y: 1, 4, 16
f(3) = y₀×(3-2)(3-4)/((1-2)(1-4)) + y₁×(3-1)(3-4)/((2-1)(2-4)) + y₂×(3-1)(3-2)/((4-1)(4-2))
= 1×(1)(-1)/((-1)(-3)) + 4×(2)(-1)/((1)(-2)) + 16×(2)(1)/((3)(2))
= -0.333 + 4 + 5.333 = 9
(Exact, since y = x² and three points determine the quadratic.)
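A direct translation of the summation into Python (names illustrative):

```python
def lagrange(x_vals, y_vals, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x_vals, y_vals)):
        basis = 1.0
        for j, xj in enumerate(x_vals):
            if j != i:
                basis *= (x - xj) / (xi - xj)  # i-th Lagrange basis at x
        total += yi * basis
    return total

print(lagrange([1, 2, 4], [1, 4, 16], 3))  # 9.0
```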
Used for values near the center of the data, especially for symmetric tables. It’s based on central differences.
There are two versions: Gauss Forward and Gauss Backward, using central values and differences.
Gauss Forward:
\( f(x) = y_0 + u\delta y_{1/2} + \frac{u(u - 1)}{2!}\delta^2 y_0 + \dots \)
Stirling's Formula is a central difference formula obtained by averaging Gauss's forward and backward formulas.
Best suited when the value of x lies at the mid-point of the table.
f(x) = y₀ + u(Δy₀ + Δy₋₁)/2 + (u²/2!)Δ²y₋₁ + ...
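A minimal sketch of Stirling's formula truncated to exactly the terms shown above (three points, second order; names illustrative):

```python
def stirling_3pt(x_vals, y_vals, x):
    # y_vals = [y_{-1}, y_0, y_1], equally spaced; x near the middle point
    h = x_vals[1] - x_vals[0]
    u = (x - x_vals[1]) / h
    dy_m1 = y_vals[1] - y_vals[0]     # Δy_{-1}
    dy_0  = y_vals[2] - y_vals[1]     # Δy_0
    d2y_m1 = dy_0 - dy_m1             # Δ²y_{-1}
    return y_vals[1] + u * (dy_0 + dy_m1) / 2 + (u**2 / 2) * d2y_m1

print(stirling_3pt([1, 2, 3], [1, 8, 27], 2.2))  # ≈ 10.84 (exact 2.2³ = 10.648)
```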
Bessel's Formula is used when the value lies between two tabulated values (u near 1/2). It provides a better approximation than Stirling's formula in such cases.
f(x) = y₀ + uΔy₀ + ((u(u-1))/2!)×(Δ²y₋₁ + Δ²y₀)/2 + ...
Everett's Formula is a refinement of Bessel's formula that eliminates odd-order differences. It gives better results for values near the center of the table.
It uses even differences only and averages values to achieve higher accuracy.
Numerical Differentiation and Numerical Integration are techniques used to find approximate values of derivatives and definite integrals from tabulated data or where analytical solutions are difficult.
These methods are widely used in engineering, physics, and applied mathematics.
When values of a function are given at equally spaced points, we can compute the derivatives using finite differences.
First Derivative using Forward Difference:
\( f'(x) \approx \frac{-f(x+2h) + 4f(x+h) - 3f(x)}{2h} \)
First Derivative using Backward Difference:
\( f'(x) \approx \frac{3f(x) - 4f(x-h) + f(x-2h)}{2h} \)
Second Derivative:
\( f''(x) \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} \)
Example: x: 1, 2, 3; f(x): 1, 4, 9
First derivative at x = 1 (h = 1):
f'(1) ≈ (-f(3) + 4f(2) - 3f(1)) / (2h) = (-9 + 16 - 3)/2 = 2
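The two formulas used above and below, as small Python helpers (names illustrative; h assumed constant):

```python
def forward_diff_1st(f0, f1, f2, h):
    # three-point forward formula for f'(x) using f(x), f(x+h), f(x+2h)
    return (-f2 + 4*f1 - 3*f0) / (2*h)

def central_diff_2nd(f_m1, f0, f1, h):
    # central formula for f''(x) using f(x-h), f(x), f(x+h)
    return (f1 - 2*f0 + f_m1) / h**2

print(forward_diff_1st(1, 4, 9, 1))  # 2.0  (f = x² has f'(1) = 2)
print(central_diff_2nd(1, 4, 9, 1))  # 2.0  (f'' = 2 everywhere)
```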
To find maxima or minima in tabulated data, compute first and second numerical derivatives.
Example: x: 0, 1, 2; f(x): 1, 3, 2
f'(1) ≈ (2 - 1)/2 = 0.5, f''(1) ≈ (2 - 2×3 + 1)/1 = -3
Since f''(1) < 0, the curve is concave down → maximum near x = 1
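The same check in a few lines of Python (illustrative):

```python
x = [0, 1, 2]
f = [1, 3, 2]
h = 1
f1 = (f[2] - f[0]) / (2 * h)         # central first derivative at x = 1
f2 = (f[2] - 2*f[1] + f[0]) / h**2   # central second derivative at x = 1
print(f1, f2)                        # 0.5, -3.0 → concave down: maximum near x = 1
```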
The General Quadrature Formula approximates definite integrals using polynomial interpolation.
Formula:
\( \int_{a}^{b} f(x)\,dx \approx h \left[ c_0 f(x_0) + c_1 f(x_1) + \dots + c_n f(x_n) \right] \)
The coefficients \( c_0, c_1, \dots \) depend on the rule (e.g., Trapezoidal, Simpson’s).
This rule approximates the area under the curve by dividing it into trapezoids.
Formula:
\( \int_{a}^{b} f(x)\,dx \approx \frac{h}{2} \left[ f(x_0) + 2f(x_1) + 2f(x_2) + \dots + 2f(x_{n-1}) + f(x_n) \right] \)
Example: x: 1, 2, 3; f(x): 1, 4, 9; h = 1
∫₁³ f(x) dx ≈ (1/2) × [1 + 2×4 + 9] = 9 (exact value: 26/3 ≈ 8.667)
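A one-function Python sketch for tabulated ordinates (name illustrative):

```python
def trapezoidal(y, h):
    # end ordinates counted once, interior ordinates twice
    return (h / 2) * (y[0] + 2 * sum(y[1:-1]) + y[-1])

print(trapezoidal([1, 4, 9], 1))  # 9.0
```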
This rule uses parabolic arcs instead of straight lines and requires an even number of intervals.
Formula:
\( \int_{a}^{b} f(x)\,dx \approx \frac{h}{3} \left[ f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \dots + 4f(x_{n-1}) + f(x_n) \right] \)
Example: x: 1, 2, 3; f(x): 1, 4, 9; h = 1
∫₁³ f(x) dx ≈ (1/3) × [1 + 4×4 + 9] = 26/3 ≈ 8.667 (exact here, since f(x) = x² is itself a parabola)
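A sketch of the 1/3 rule with the even-interval check made explicit (name illustrative):

```python
def simpson_13(y, h):
    n = len(y) - 1                 # number of intervals; must be even
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of intervals")
    s = y[0] + y[-1]
    s += 4 * sum(y[1:-1:2])        # odd-indexed ordinates get weight 4
    s += 2 * sum(y[2:-1:2])        # even-indexed interior ordinates get weight 2
    return h * s / 3

print(simpson_13([1, 4, 9], 1))   # 8.666...
```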
This rule is used when the number of intervals is a multiple of 3. It fits cubic polynomials.
Formula:
\( \int_{a}^{b} f(x)\,dx \approx \frac{3h}{8} \left[ f(x_0) + 3f(x_1) + 3f(x_2) + 2f(x_3) + \dots + f(x_n) \right] \)
Example: x: 1, 2, 3, 4; f(x): 1, 8, 27, 64; h = 1
∫₁⁴ f(x) dx ≈ (3/8) × [1 + 3×8 + 3×27 + 64] = (3/8) × 170 = 63.75
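The 3/8 counterpart, weighting interior ordinates 3, 3, 2, 3, 3, 2, ... (name illustrative):

```python
def simpson_38(y, h):
    n = len(y) - 1                 # number of intervals; must be a multiple of 3
    if n % 3 != 0:
        raise ValueError("Simpson's 3/8 rule needs intervals in multiples of 3")
    s = y[0] + y[-1]
    for i in range(1, n):
        s += (2 if i % 3 == 0 else 3) * y[i]
    return 3 * h * s / 8

print(simpson_38([1, 8, 27, 64], 1))  # 63.75 (exact for x³)
```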
Gauss’s Elimination Method is a direct method for solving a system of linear equations. It transforms the system into an upper triangular matrix and then uses back substitution to find the solution.
Steps: (1) form the augmented matrix [A | b]; (2) use row operations to make all entries below the diagonal zero (forward elimination); (3) solve the resulting upper triangular system from the bottom row upward (back substitution).
Example: Solve the system:
x + y + z = 6
2x + 3y + 5z = 17
4x + 0y + 5z = 13
Step 1: Form the augmented matrix
[ [1, 1, 1 | 6], [2, 3, 5 | 17], [4, 0, 5 | 13] ]
Step 2: Eliminate below the first pivot
R2 → R2 - 2×R1 → [0, 1, 3 | 5]
R3 → R3 - 4×R1 → [0, -4, 1 | -11]
Step 3: Eliminate below the second pivot
R3 → R3 + 4×R2 → [0, 0, 13 | 9]
Step 4: Back substitution
z = 9/13 ≈ 0.692
y = 5 - 3z ≈ 5 - 2.077 = 2.923
x = 6 - y - z ≈ 6 - 2.923 - 0.692 = 2.385
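A naive Python sketch of forward elimination plus back substitution (no pivoting, so it assumes nonzero pivots; names illustrative):

```python
def gauss_eliminate(A, b):
    n = len(b)
    for k in range(n - 1):                 # forward elimination
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier for row i
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[1, 1, 1], [2, 3, 5], [4, 0, 5]]
b = [6, 17, 13]
print(gauss_eliminate(A, b))  # ≈ [2.385, 2.923, 0.692]
```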
The Gauss-Seidel Method is an iterative technique for solving a system of linear equations. It improves an initial guess until convergence is achieved.
Basic Idea: Each variable is solved in terms of the others and updated in every iteration.
Conditions for Convergence: the method converges when the coefficient matrix is diagonally dominant (in each row, the magnitude of the diagonal entry exceeds the sum of the magnitudes of the other entries); convergence is also guaranteed for symmetric positive definite matrices.
Example: Solve:
10x + y + z = 12
2x + 10y + z = 13
2x + 2y + 10z = 14
Initial guess: x₀ = 0, y₀ = 0, z₀ = 0
Iteration 1:
x₁ = (12 - 0 - 0)/10 = 1.2
y₁ = (13 - 2×1.2 - 0)/10 = 1.06
z₁ = (14 - 2×1.2 - 2×1.06)/10 = 0.948
Iteration 2:
x₂ = (12 - 1.06 - 0.948)/10 ≈ 0.999
y₂ = (13 - 2×0.999 - 0.948)/10 ≈ 1.005
z₂ = (14 - 2×0.999 - 2×1.005)/10 ≈ 0.999
Repeat until the values stabilize. Final approximate solution: x ≈ 1, y ≈ 1, z ≈ 1
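A sketch of the iteration for this diagonally dominant system (tolerance and names are illustrative); note each updated value is used immediately within the same sweep, which is what distinguishes Gauss-Seidel from Jacobi:

```python
def gauss_seidel(A, b, x0=None, tol=1e-6, max_iter=100):
    n = len(b)
    x = list(x0) if x0 else [0.0] * n
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (b[i] - s) / A[i][i]
            max_change = max(max_change, abs(new_xi - x[i]))
            x[i] = new_xi              # updated value used right away
        if max_change < tol:
            break
    return x

A = [[10, 1, 1], [2, 10, 1], [2, 2, 10]]
b = [12, 13, 14]
print(gauss_seidel(A, b))  # ≈ [1.0, 1.0, 1.0]
```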
Euler’s Method is a simple numerical technique to solve first-order ordinary differential equations (ODEs) of the form:
\( \frac{dy}{dx} = f(x, y) \), with initial condition \( y(x_0) = y_0 \)
Formula:
\( y_{n+1} = y_n + h \cdot f(x_n, y_n) \)
Where \( h \) is the step size.
Example: dy/dx = x + y, y(0) = 1, h = 0.1
Step 1: f(0, 1) = 0 + 1 = 1, so y₁ = 1 + 0.1 × 1 = 1.1
Step 2: f(0.1, 1.1) = 0.1 + 1.1 = 1.2, so y₂ = 1.1 + 0.1 × 1.2 = 1.22
Continue stepping to find further approximate values.
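The whole method fits in a few lines of Python (names illustrative):

```python
def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)   # y_{n+1} = y_n + h f(x_n, y_n)
        x += h
    return y

print(euler(lambda x, y: x + y, 0, 1, 0.1, 2))  # 1.22, matching the example
```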
Picard’s Method is an iterative method used to approximate the solution of a differential equation.
It uses successive approximations by integrating:
Formula:
\( y_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_n(t)) dt \)
Example: dy/dx = x + y, y(0) = 1
First approximation: y₁ = 1 + ∫₀ˣ (t + 1) dt = 1 + (t²/2 + t)|₀ˣ = 1 + x + x²/2
Second approximation: y₂ = 1 + ∫₀ˣ (t + y₁(t)) dt = 1 + ∫₀ˣ (1 + 2t + t²/2) dt = 1 + x + x² + x³/6
Repeat for better accuracy.
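Because each iteration is a symbolic integral, this is easy to automate with sympy (assuming sympy is available; the loop below reproduces the two approximations above):

```python
import sympy as sp

x, t = sp.symbols('x t')
y = sp.Integer(1)                          # y_0(x) = 1, the initial condition
for _ in range(2):                         # two Picard iterations
    integrand = t + y.subs(x, t)           # f(t, y_n(t)) with f(x, y) = x + y
    y = 1 + sp.integrate(integrand, (t, 0, x))
print(sp.expand(y))                        # x**3/6 + x**2 + x + 1
```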
The Runge–Kutta Fourth Order Method gives a highly accurate approximation of the solution of an ODE.
Formula:
\( k_1 = h \cdot f(x_n, y_n) \)
\( k_2 = h \cdot f(x_n + h/2, y_n + k_1/2) \)
\( k_3 = h \cdot f(x_n + h/2, y_n + k_2/2) \)
\( k_4 = h \cdot f(x_n + h, y_n + k_3) \)
\( y_{n+1} = y_n + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4) \)
Example: dy/dx = x + y, y(0) = 1, h = 0.1
k₁ = 0.1 × f(0, 1) = 0.1 × 1 = 0.1
k₂ = 0.1 × f(0.05, 1.05) = 0.1 × 1.1 = 0.11
k₃ = 0.1 × f(0.05, 1.055) = 0.1 × 1.105 = 0.1105
k₄ = 0.1 × f(0.1, 1.1105) = 0.1 × 1.2105 = 0.12105
y₁ ≈ 1 + (1/6)(0.1 + 2×0.11 + 2×0.1105 + 0.12105) = 1 + 0.66205/6 ≈ 1.1103
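A single RK4 step as a Python function (name illustrative); calling it repeatedly advances the solution:

```python
def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h/2, y + k1/2)
    k3 = h * f(x + h/2, y + k2/2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2*k2 + 2*k3 + k4) / 6   # weighted average of slopes

print(rk4_step(lambda x, y: x + y, 0, 1, 0.1))  # ≈ 1.11034, matching the example
```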