Numerical Methods
UNIT-I: Roots of Equations
Bisection Method, False Position Method, Newton-Raphson Method, Rate of convergence of Newton’s method.
UNIT-II: Interpolation and Extrapolation
Finite Differences, The operator E, Newton’s Forward and Backward Differences, Newton’s divided differences formulae, Lagrange’s Interpolation formula for unequal Intervals, Gauss’s Interpolation formula, Stirling’s formula, Bessel’s formula, Laplace-Everett formula.
UNIT-III: Numerical Differentiation & Integration
Introduction, direct methods, maxima and minima of a tabulated function, General Quadrature formula, Trapezoidal rule, Simpson’s One-third rule, Simpson’s Three-Eighths rule.
UNIT-IV: Solution of Linear Equations
Gauss’s Elimination method and Gauss-Seidel iterative method.
UNIT-V: Solution of Differential Equations
Euler’s method, Picard’s method, Fourth-order Runge-Kutta method.

UNIT-I: Roots of Equations

1. Bisection Method

The Bisection Method is based on the Intermediate Value Theorem: if f is continuous on [a, b] and f(a) * f(b) < 0, then there is a root between a and b. The interval is repeatedly halved, keeping the half in which f changes sign.

Formula:

c = (a + b) / 2

Example: Solve f(x) = x² - 4 for a root between 1 and 3. The midpoint is

c = (1 + 3) / 2 = 2

Since f(2) = 0, the midpoint is already the root; otherwise the half-interval in which f changes sign would be bisected again.
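
A minimal Python sketch of the bisection loop, assuming f is continuous and changes sign on [a, b]; the function name bisect, the tolerance tol and the iteration cap are illustrative choices, not part of the method itself.

def bisect(f, a, b, tol=1e-6, max_iter=100):
    # Requires f(a) and f(b) to have opposite signs.
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2                # midpoint of the current interval
        if f(c) == 0 or (b - a) / 2 < tol:
            return c
        if f(a) * f(c) < 0:
            b = c                      # root lies in [a, c]
        else:
            a = c                      # root lies in [c, b]
    return (a + b) / 2

print(bisect(lambda x: x**2 - 4, 1, 3))   # 2.0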

2. False Position Method

The False Position (Regula Falsi) Method keeps an interval [a, b] with f(a) * f(b) < 0 and replaces the curve by the chord joining (a, f(a)) and (b, f(b)); the point where this chord crosses the x-axis is the new estimate of the root of f(x) = 0.

Formula:

x₁ = a - f(a) * (b - a) / (f(b) - f(a))

Example: Solve f(x) = x³ - 5x - 9 using a = 2 and b = 3:

With f(2) = 2³ - 5(2) - 9 = -11 and f(3) = 3³ - 5(3) - 9 = 3:

x₁ = 2 - (f(2) * (3 - 2)) / (f(3) - f(2)) = 2 - (-11)(1) / (3 - (-11)) = 2 + 11/14 ≈ 2.786
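
A corresponding Python sketch of the false position iteration; the stopping test on |f(x)| and the name false_position are assumptions made for illustration.

def false_position(f, a, b, tol=1e-6, max_iter=100):
    # Requires f(a) and f(b) to have opposite signs.
    x = a
    for _ in range(max_iter):
        x = a - f(a) * (b - a) / (f(b) - f(a))   # chord crosses the x-axis here
        if abs(f(x)) < tol:
            break
        if f(a) * f(x) < 0:
            b = x                                # root lies in [a, x]
        else:
            a = x                                # root lies in [x, b]
    return x

print(false_position(lambda x: x**3 - 5*x - 9, 2, 3))   # ≈ 2.855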

3. Newton-Raphson Method

The Newton-Raphson method starts from an initial guess and, at each step, follows the tangent to the curve at the current point down to the x-axis to obtain the next approximation.

Formula:

x₁ = x₀ - f(x₀) / f'(x₀)

Example: Solve f(x) = x³ - x - 2 with an initial guess x₀ = 1:

x₁ = 1 - ((1³ - 1 - 2) / (3 * 1² - 1)) = 1 - (-2 / 2) = 2. A second step gives x₂ = 2 - 4/11 ≈ 1.636, and the iterates converge to the root ≈ 1.521.
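
A short Python sketch of the Newton-Raphson iteration; the derivative is supplied by hand, and the names newton, tol and max_iter are illustrative.

def newton(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)      # Newton correction f(x)/f'(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(newton(lambda x: x**3 - x - 2, lambda x: 3*x**2 - 1, 1.0))   # ≈ 1.52138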

4. Rate of Convergence of Newton’s Method

The rate of convergence refers to how quickly the approximation reaches the root. Newton’s method generally converges quadratically if the initial guess is close enough to the actual root.

Formula for rate of convergence:

Eₙ₊₁ ≈ C * Eₙ², where Eₙ = |xₙ - α| is the error at step n, α is the root, and C = |f''(α) / (2f'(α))|.

Example: Using f(x) = x² - 2 and an initial guess of x₀ = 1.5, the errors relative to the root √2 ≈ 1.41421 are about 8.6 × 10⁻², 2.5 × 10⁻³, 2.1 × 10⁻⁶ and 1.6 × 10⁻¹² for x₀, x₁, x₂ and x₃: each error is roughly the square of the previous one.
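
The few lines of Python below, an assumed illustration rather than part of the syllabus, print the error at each Newton step for f(x) = x² - 2 so the quadratic shrinkage is visible.

from math import sqrt

x = 1.5
for n in range(4):
    print(n, x, abs(x - sqrt(2)))      # error roughly squares at every step
    x = x - (x**2 - 2) / (2 * x)       # Newton step for f(x) = x**2 - 2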

UNIT-II: Interpolation and Extrapolation

1. Newton’s Forward and Backward Difference

Newton’s Forward Difference formula is used for interpolation near the beginning of an equally spaced table, while the Backward Difference formula is used near the end of the table.

Forward Difference Formula:

Pₙ(x) = y₀ + uΔy₀ + u(u - 1)Δ²y₀ / 2! + u(u - 1)(u - 2)Δ³y₀ / 3! + ..., where u = (x - x₀) / h

Backward Difference Formula:

Pₙ(x) = yₙ + v∇yₙ + v(v + 1)∇²yₙ / 2! + v(v + 1)(v + 2)∇³yₙ / 3! + ..., where v = (x - xₙ) / h

Example: Interpolate for x = 1.5 using the forward difference formula with data points (1, 2), (2, 3), (3, 4). Here h = 1, Δy₀ = 1, Δ²y₀ = 0 and u = 0.5, so P(1.5) = 2 + 0.5(1) = 2.5.
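
A small Python sketch of the forward-difference formula for an equally spaced table; the helper name newton_forward and the explicit difference table are illustrative choices.

def newton_forward(xs, ys, x):
    # xs must be equally spaced; diff[k][i] holds the k-th forward difference of y_i.
    n = len(ys)
    h = xs[1] - xs[0]
    diff = [list(ys)]
    for k in range(1, n):
        prev = diff[-1]
        diff.append([prev[i + 1] - prev[i] for i in range(n - k)])
    u = (x - xs[0]) / h
    term, result = 1.0, ys[0]
    for k in range(1, n):
        term *= (u - (k - 1)) / k      # builds u(u-1)...(u-k+1)/k!
        result += term * diff[k][0]
    return result

print(newton_forward([1, 2, 3], [2, 3, 4], 1.5))   # 2.5, since the data lie on y = x + 1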

2. Lagrange’s Interpolation Formula for Unequal Intervals

Lagrange’s formula is used to construct a polynomial that passes through a given set of points.

Formula:

L(x) = Σ[yₙ * Lₙ(x)], where
Lₙ(x) = ∏(x - xᵢ) / (xₙ - xᵢ), for all i ≠ n.

Example: Interpolate for x = 2.5 using data points (1, 2), (3, 6), (5, 10). Since the data lie on y = 2x, the formula gives L(2.5) = 5.
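
A direct Python transcription of Lagrange’s formula, offered as an illustrative sketch (the name lagrange is assumed).

def lagrange(xs, ys, x):
    # L(x) = sum over n of y_n * product over i != n of (x - x_i)/(x_n - x_i)
    total = 0.0
    for n, (xn, yn) in enumerate(zip(xs, ys)):
        Ln = 1.0
        for i, xi in enumerate(xs):
            if i != n:
                Ln *= (x - xi) / (xn - xi)
        total += yn * Ln
    return total

print(lagrange([1, 3, 5], [2, 6, 10], 2.5))   # 5.0, since the data lie on y = 2x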

3. Gauss’s Interpolation Formula

Gauss’s forward and backward interpolation formulae apply to equally spaced data; they use central differences about an origin chosen near the middle of the table, which makes them well suited to points close to the centre.

Example: Use Gauss’s formula to interpolate a function value at a given point using data from an evenly spaced table.

UNIT-III: Numerical Differentiation and Integration

1. Numerical Differentiation

Numerical differentiation approximates the derivative of a function using finite differences.

Forward Difference Formula:

f'(x) ≈ (f(x+h) - f(x)) / h

Backward Difference Formula:

f'(x) ≈ (f(x) - f(x-h)) / h

Central Difference Formula:

f'(x) ≈ (f(x+h) - f(x-h)) / (2h)

Example: Find the derivative of f(x) = x² at x = 1 using the forward difference with h = 0.1.

f'(1) ≈ (f(1.1) - f(1)) / 0.1 = (1.21 - 1) / 0.1 = 2.1

The exact derivative is 2; the forward difference error is of order h, while the central difference (f(1.1) - f(0.9)) / 0.2 = 2.0 happens to be exact here because f is quadratic.
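
A brief Python sketch of the forward and central difference quotients applied to this example (the function names are illustrative).

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h             # error of order h

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)   # error of order h**2

f = lambda x: x**2
print(forward_diff(f, 1.0, 0.1))   # 2.1
print(central_diff(f, 1.0, 0.1))   # 2.0 (exact here because f is quadratic)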

2. Numerical Integration

Numerical integration is used to approximate the integral of a function.

Trapezoidal Rule:

∫(a to b) f(x) dx ≈ (b - a) / 2 * (f(a) + f(b))

Example: Estimate ∫(0 to 1) x² dx using the Trapezoidal Rule.

Integral ≈ (1 - 0) / 2 * (f(0) + f(1)) = 1/2 * (0 + 1) = 0.5, compared with the exact value 1/3 ≈ 0.3333.

Simpson’s One-Third Rule:

∫(a to b) f(x) dx ≈ h/3 * [f(a) + 4f(a + h) + f(b)], where h = (b - a) / 2

Example: Estimate ∫(0 to 1) x² dx using Simpson’s One-Third Rule. With h = 0.5:

Integral ≈ (0.5/3) * [f(0) + 4f(0.5) + f(1)] = (1/6) * [0 + 4(0.25) + 1] = 0.3333, which matches the exact value 1/3 because Simpson’s rules are exact for polynomials up to degree three.

Simpson’s Three-Eighths Rule:

∫(a to b) f(x) dx ≈ 3h/8 * [f(a) + 3f(a + h) + 3f(a + 2h) + f(b)], where h = (b - a) / 3

Example: Estimate ∫(0 to 1) x² dx using Simpson’s Three-Eighths Rule. With h = 1/3:

Integral ≈ (3/8)(1/3) * [f(0) + 3f(1/3) + 3f(2/3) + f(1)] = (1/8) * [0 + 3(0.1111) + 3(0.4444) + 1] = 0.3333
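
The three single-application rules translate almost line for line into Python; the sketch below, with assumed function names, reproduces the estimates 0.5, 0.3333 and 0.3333 for ∫(0 to 1) x² dx.

def trapezoidal(f, a, b):
    return (b - a) / 2 * (f(a) + f(b))

def simpson_13(f, a, b):
    h = (b - a) / 2
    return h / 3 * (f(a) + 4 * f(a + h) + f(b))

def simpson_38(f, a, b):
    h = (b - a) / 3
    return 3 * h / 8 * (f(a) + 3 * f(a + h) + 3 * f(a + 2 * h) + f(b))

f = lambda x: x**2
print(trapezoidal(f, 0, 1))   # 0.5
print(simpson_13(f, 0, 1))    # 0.3333...
print(simpson_38(f, 0, 1))    # 0.3333...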

UNIT-IV: Solution of Linear Equations

1. Gauss’s Elimination Method

The Gauss Elimination method is used for solving a system of linear equations. It involves transforming the system of equations into an upper triangular matrix and then solving by back substitution.

Steps in Gauss’s Elimination Method:

  1. Write the augmented matrix of the system of linear equations.
  2. Perform row operations to convert the matrix into an upper triangular matrix.
  3. Use back substitution to find the values of the unknowns.

Example:

Solve the system of equations:

2x + y - 3z = 9
x - 2y + 4z = 2
3x + y + 2z = 3

Step 1: Write the augmented matrix:

[ 2 1 -3 | 9 ]
[ 1 -2 4 | 2 ]
[ 3 1 2 | 3 ]

Step 2: Use row operations to eliminate the variables and form an upper triangular matrix. Applying R₂ → 2R₂ - R₁, R₃ → 2R₃ - 3R₁ and then R₃ → 5R₃ - R₂ gives:

[ 2  1  -3 |    9 ]
[ 0 -5  11 |   -5 ]
[ 0  0  54 | -100 ]

Step 3: Perform back substitution:

From the third row: 54z = -100, so z = -50/27 ≈ -1.852
From the second row: -5y + 11z = -5, so y = -83/27 ≈ -3.074
From the first row: 2x + y - 3z = 9, so x = 88/27 ≈ 3.259

The solution is: x ≈ 3.259, y ≈ -3.074, z ≈ -1.852
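
A compact Python sketch of forward elimination followed by back substitution; it assumes the pivots stay non-zero (no row interchanges), and the function name gauss_eliminate is illustrative.

def gauss_eliminate(A, b):
    n = len(b)
    A = [row[:] for row in A]              # work on copies
    b = b[:]
    for k in range(n - 1):                 # forward elimination
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier for row i
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(gauss_eliminate([[2, 1, -3], [1, -2, 4], [3, 1, 2]], [9, 2, 3]))
# ≈ [3.259, -3.074, -1.852]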

2. Gauss-Seidel Iterative Method

The Gauss-Seidel method is an iterative technique used to solve a system of linear equations. It improves upon the Jacobi method by using updated values of variables as soon as they are calculated.

Steps in Gauss’s Seidel Iterative Method:

  1. Rearrange the system of equations in terms of each variable.
  2. Make an initial guess for the values of all variables.
  3. Iteratively update the values of the variables using the following equations, always substituting the most recently computed values:

Formula:

xₖ₊₁ = (b₁ - a₁₂yₖ - a₁₃zₖ) / a₁₁
yₖ₊₁ = (b₂ - a₂₁xₖ₊₁ - a₂₃zₖ) / a₂₂
zₖ₊₁ = (b₃ - a₃₁xₖ₊₁ - a₃₂yₖ₊₁) / a₃₃

Example: Solve the system of equations:

4x + y + z = 4
x + 3y + z = 7
x + y + 3z = 10

Step 1: Rearrange the system of equations:

x = (4 - y - z) / 4
y = (7 - x - z) / 3
z = (10 - x - y) / 3

Step 2: Make an initial guess for x, y, and z. Let's take x₀ = 0, y₀ = 0, z₀ = 0.

Step 3: Iteratively update the values:

For the first iteration:

x₁ = (4 - 0 - 0) / 4 = 1
y₁ = (7 - 1 - 0) / 3 = 2
z₁ = (10 - 1 - 2) / 3 = 2.33

For the second iteration:

x₂ = (4 - 2 - 2.33) / 4 ≈ -0.08
y₂ = (7 - (-0.08) - 2.33) / 3 ≈ 1.58
z₂ = (10 - (-0.08) - 1.58) / 3 ≈ 2.83

Continue iterating until the values converge to the desired accuracy; here the iterates approach the exact solution x = -1/14 ≈ -0.071, y = 39/28 ≈ 1.393, z = 81/28 ≈ 2.893.
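
A Python sketch of the same sweep, assuming a diagonally dominant system so that the iteration converges; the name gauss_seidel and the fixed iteration count are illustrative.

def gauss_seidel(A, b, x0, iterations=25):
    # Each component is updated in place, so the newest values are used immediately.
    n = len(b)
    x = x0[:]
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

print(gauss_seidel([[4, 1, 1], [1, 3, 1], [1, 1, 3]], [4, 7, 10], [0.0, 0.0, 0.0]))
# ≈ [-0.0714, 1.3929, 2.8929]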

UNIT-V: Solution of Differential Equations

1. Euler’s Method

Euler’s method is a simple numerical technique for solving ordinary differential equations (ODEs) with a given initial value.

Formula:

yₙ₊₁ = yₙ + h * f(xₙ, yₙ)

Where h is the step size, f(x, y) is the right-hand side of the equation dy/dx = f(x, y), and (xₙ, yₙ) is the current point.

Example:

Given the differential equation:

dy/dx = x + y, y(0) = 1

Using Euler's method with a step size of h = 0.1, calculate y(0.1):

yₙ₊₁ = yₙ + h * (xₙ + yₙ)
For xₙ = 0, yₙ = 1, we get:
y₁ = 1 + 0.1 * (0 + 1) = 1 + 0.1 = 1.1

The value of y(0.1) is 1.1. The exact solution y = 2eˣ - x - 1 gives y(0.1) ≈ 1.1103, so a single Euler step carries an error of about 0.01.
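
A minimal Python sketch of Euler stepping (the name euler and the argument order are assumptions made for illustration).

def euler(f, x0, y0, h, steps):
    # Repeatedly applies y_{n+1} = y_n + h * f(x_n, y_n).
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

print(euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 1))   # 1.1, the single-step value above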

2. Picard’s Method

Picard’s method solves an initial value problem by successive approximation: the differential equation is rewritten as an equivalent integral equation, and each new approximation of y(x) is obtained by integrating the previous one.

Formula:

yₙ₊₁(x) = y₀ + ∫(x₀ to x) f(t, yₙ(t)) dt

Steps:

  1. Take the given initial value y₀ as the first (constant) approximation.
  2. Substitute the current approximation into the integral on the right-hand side to obtain the next one.
  3. Repeat until successive approximations agree to the desired accuracy.

Example:

Consider the differential equation:

dy/dx = y + x, y(0) = 1

Start with the constant approximation y₀(x) = 1 and integrate successively:

y₁(x) = 1 + ∫(0 to x) (t + 1) dt = 1 + x + x²/2
y₂(x) = 1 + ∫(0 to x) (t + y₁(t)) dt = 1 + x + x² + x³/6

At x = 0.1 the second approximation gives y(0.1) ≈ 1.1102, close to the value 1.1103 of the exact solution y = 2eˣ - x - 1. Each further integration improves the approximation over the whole interval.
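
Because Picard’s method builds functions rather than numbers, a symbolic sketch illustrates it most directly; the one below assumes the sympy library is available and is only an illustration of the iteration above.

import sympy as sp

x, t = sp.symbols('x t')
y = sp.Integer(1)                  # y0(x) = 1, the constant initial approximation
for _ in range(3):
    # y_{n+1}(x) = 1 + integral from 0 to x of (t + y_n(t)) dt
    y = 1 + sp.integrate(t + y.subs(x, t), (t, 0, x))
    print(sp.expand(y))

print(float(y.subs(x, 0.1)))       # ≈ 1.1103, close to the exact 2*e**x - x - 1 at x = 0.1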

3. Fourth-Order Runge-Kutta Method

The Fourth-Order Runge-Kutta method is a more accurate iterative method for solving differential equations. It uses the weighted average of slopes at multiple points to improve the approximation of the solution.

Formula:

k₁ = h * f(xₙ, yₙ)
k₂ = h * f(xₙ + h/2, yₙ + k₁/2)
k₃ = h * f(xₙ + h/2, yₙ + k₂/2)
k₄ = h * f(xₙ + h, yₙ + k₃)
yₙ₊₁ = yₙ + (k₁ + 2k₂ + 2k₃ + k₄)/6

Example:

Consider the differential equation:

dy/dx = x + y, y(0) = 1

Using a step size of h = 0.1, calculate y(0.1):

Step 1: Compute k₁, k₂, k₃, and k₄ for xₙ = 0 and yₙ = 1
k₁ = 0.1 * (0 + 1) = 0.1
k₂ = 0.1 * (0.05 + 1.05) = 0.11
k₃ = 0.1 * (0.05 + 1.055) = 0.1105
k₄ = 0.1 * (0.1 + 1.1105) = 0.12105
y₁ = 1 + (0.1 + 2(0.11) + 2(0.1105) + 0.12105)/6 = 1 + 0.66205/6 ≈ 1.11034

The value of y(0.1) is approximately 1.11034, which agrees with the exact solution y = 2eˣ - x - 1 to five decimal places.
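
A single RK4 step in Python, written as an illustrative sketch (the name rk4_step is assumed), reproduces the value above.

def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

print(rk4_step(lambda x, y: x + y, 0.0, 1.0, 0.1))   # ≈ 1.110342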