1.4 Matrix Multiplication

Matrix multiplication is one of the core operations in linear algebra and is significantly more complex than addition or scalar multiplication. It involves the product of two matrices and is essential in solving systems of equations, transforming geometrical figures, modeling real-world phenomena, and performing computations in fields such as physics, computer graphics, economics, and machine learning. Unlike scalar or element-wise operations, matrix multiplication involves combining rows and columns to produce new values.

Matrix multiplication is defined between two matrices only when the number of columns in the first matrix is equal to the number of rows in the second matrix. If matrix A is of order m × n and matrix B is of order n × p, then their product AB is defined and results in a matrix of order m × p. Each element in the resulting matrix is obtained by taking the dot product of the corresponding row from the first matrix and the corresponding column from the second matrix.

To calculate each entry c_ij of the resulting matrix C = AB, we multiply elements from the i-th row of matrix A with the corresponding elements from the j-th column of matrix B and sum the products. Mathematically, c_ij = Σ_{k=1}^{n} a_ik · b_kj. This operation blends entire rows with columns, which is fundamentally different from adding or subtracting corresponding elements.
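The formula above translates directly into code. As a minimal sketch, here is a pure-Python function (the name `matmul` and the lists-of-lists representation are illustrative choices) that computes each entry c_ij as the dot product of row i of A with column j of B:

```python
def matmul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B, both given as lists of rows."""
    m, n, p = len(A), len(B), len(B[0])
    # The product is defined only when columns of A equal rows of B.
    assert all(len(row) == n for row in A), "columns of A must equal rows of B"
    # c_ij = sum over k of a_ik * b_kj
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]
```

For instance, `matmul([[1, 2, 3], [4, 5, 6]], [[7, 8], [9, 10], [11, 12]])` returns `[[58, 64], [139, 154]]`, matching the worked example later in this section.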

Matrix multiplication is not commutative, which means AB ≠ BA in general. Even if both products are defined, they may result in matrices of different dimensions or contain entirely different values. However, matrix multiplication is associative, so (AB)C = A(BC), and distributive over addition, so A(B + C) = AB + AC, where all matrices involved are of compatible dimensions.
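Non-commutativity is easy to see with a small concrete case. The 2 × 2 matrices below are chosen purely for illustration; multiplying A by a column-swapping matrix on the right swaps A's columns, while multiplying on the left swaps A's rows, so the two products differ:

```python
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]  # permutation matrix: swaps the two coordinates

# Compute AB and BA with the same row-by-column rule.
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
BA = [[sum(B[i][k] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

print(AB)  # [[2, 1], [4, 3]]  -- columns of A swapped
print(BA)  # [[3, 4], [1, 2]]  -- rows of A swapped
```

Both products exist and are 2 × 2, yet AB ≠ BA, confirming that order matters.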

Matrix multiplication has countless practical applications. In physics and engineering, it is used to perform transformations, such as rotations and scaling in space. In computer science, matrices are fundamental to algorithms involving graphics, search engines, and neural networks. In economics, matrices help model systems of interrelated markets and optimize resource allocation. In statistics and machine learning, matrices represent datasets and weight matrices for various learning models.

To illustrate matrix multiplication with an example, consider matrix A of order 2 × 3:

A = [ 1  2  3 ]
    [ 4  5  6 ]

and matrix B of order 3 × 2:

B = [  7   8 ]
    [  9  10 ]
    [ 11  12 ]

The resulting matrix C = AB will be of order 2 × 2.
The entries are calculated as follows:
First row, first column: 1·7 + 2·9 + 3·11 = 7 + 18 + 33 = 58
First row, second column: 1·8 + 2·10 + 3·12 = 8 + 20 + 36 = 64
Second row, first column: 4·7 + 5·9 + 6·11 = 28 + 45 + 66 = 139
Second row, second column: 4·8 + 5·10 + 6·12 = 32 + 50 + 72 = 154
So,
AB = [  58   64 ]
     [ 139  154 ]
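This worked example can be checked in a few lines with NumPy, whose `@` operator performs matrix multiplication:

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])       # 2 x 3
B = np.array([[7, 8], [9, 10], [11, 12]])  # 3 x 2

C = A @ B  # matrix product, shape 2 x 2
print(C)   # [[ 58  64]
           #  [139 154]]
```

The output matches the hand computation above, entry by entry.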

In conclusion, matrix multiplication is a foundational tool in mathematics that goes beyond simple element-wise operations to combine data, perform transformations, and solve complex problems. Its rules and structure may be more intricate than those of addition or scalar multiplication, but its role is far more powerful and far-reaching in both theoretical and applied contexts.
