
Linear algebra is the math behind systems: motion, data, decisions. It shows up in phone signals, 3D animation, and self-driving cars. Wherever things move or interact in more than one direction, linear algebra is quietly at work. At its heart are vectors and matrices: tools for holding and transforming information. A vector points the way. A matrix acts on it, like a machine that reshapes input into something new.
In this article, we will explore core ideas like vector operations, matrix math, systems of equations, vector spaces, transformations, and eigenvalues. You will also learn how to work through each step clearly using Symbolab’s Linear Algebra Calculator.
A vector is a simple object, but it carries a lot.
At its core, a vector is just a list of numbers. But that list has a direction and a size. You can think of a vector as an arrow, pointing from one place to another. It doesn’t care where you start. What matters is how far you go, and in what direction.
Say you’re walking through a city. You go 3 blocks east and 2 blocks north. That movement can be described by the vector:
$\vec{v} = \begin{bmatrix} 3 \\ 2 \end{bmatrix}$
It holds two values: one for the horizontal direction, one for the vertical. You could start on any corner in town. The vector just tells you the change in position.
In three dimensions, it might look like this:
$\vec{w} = \begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}$
Now you have an extra layer. Forward 1 unit, left 2, and up 4. This could describe how a drone moves in space or how a force is applied to an object in physics.
Vectors become powerful when you learn how to combine them. There are two main ways: addition and scalar multiplication.
Adding vectors means stacking movements.
If one vector moves you 3 blocks east and 2 blocks north, and another moves you 1 block west and 4 blocks north, the total trip is the sum of those two:
$\begin{bmatrix} 3 \\ 2 \end{bmatrix} + \begin{bmatrix} -1 \\ 4 \end{bmatrix} = \begin{bmatrix} 2 \\ 6 \end{bmatrix}$
This works component by component. First entries with first entries, second with second. You can picture it: take the first arrow, then place the second arrow starting where the first one ends. The combined path is the result.
Scalar multiplication means stretching or shrinking a vector.
If you take the vector:
$\vec{v} = \begin{bmatrix} 3 \\ 2 \end{bmatrix}$
and multiply it by 2:
$2\vec{v} = \begin{bmatrix} 6 \\ 4 \end{bmatrix}$
you get a new vector in the same direction, but twice as long.
What if you multiply by a negative number?
$-1 \cdot \vec{v} = \begin{bmatrix} -3 \\ -2 \end{bmatrix}$
Now it points in the opposite direction. Same length, reversed.
Scalar multiplication helps when you want to change the size of something while keeping its direction, or reverse its direction without changing its length.
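If you want to double-check these component-wise results away from the calculator, here is a minimal sketch using NumPy (the library choice and variable names are ours, not part of the article):

```python
import numpy as np

v = np.array([3, 2])    # 3 blocks east, 2 blocks north
u = np.array([-1, 4])   # 1 block west, 4 blocks north

print(v + u)    # [2 6]   -- component-wise sum of the two trips
print(2 * v)    # [6 4]   -- same direction, twice as long
print(-1 * v)   # [-3 -2] -- same length, reversed direction
```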
The magnitude of a vector is how long it is. If you’re walking through that city again, it’s the straight-line distance you’ve traveled, regardless of the route.
For a vector $\vec{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$, the magnitude is:
$|\vec{v}| = \sqrt{v_1^2 + v_2^2}$
This is just the Pythagorean theorem. For example:
$\left|\begin{bmatrix} 3 \\ 4 \end{bmatrix}\right| = \sqrt{3^2 + 4^2} = \sqrt{9 + 16} = 5$
In three dimensions:
$|\vec{w}| = \sqrt{w_1^2 + w_2^2 + w_3^2}$
Sometimes, you only care about direction, not length. That’s where unit vectors come in. A unit vector has a length of 1, but keeps the same direction.
You create one by dividing the vector by its own magnitude:
$\hat{v} = \frac{\vec{v}}{|\vec{v}|}$
If $\vec{v} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$, then:
$\hat{v} = \frac{1}{5} \begin{bmatrix} 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 0.6 \\ 0.8 \end{bmatrix}$
Now the vector still points in the same direction, but has a magnitude of exactly 1. We can turn any non-zero vector into a unit vector; the zero vector has no direction, so it can’t be normalized.
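Here is a small NumPy sketch of the same magnitude and normalization steps (NumPy is an assumed tool here, not something the article requires):

```python
import numpy as np

v = np.array([3, 4])

magnitude = np.linalg.norm(v)   # sqrt(3**2 + 4**2) = 5.0
v_hat = v / magnitude           # unit vector [0.6, 0.8]

print(magnitude)                      # 5.0
print(v_hat, np.linalg.norm(v_hat))   # [0.6 0.8] 1.0 -- same direction, length 1
```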
The dot product of two vectors is a number that tells you something about how they align. Given two vectors:
$\vec{a} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}, \quad \vec{b} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$
the dot product is:
$\vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2$
Why does this matter?
Because it helps you find the angle between vectors:
$\vec{a} \cdot \vec{b} = |\vec{a}| \, |\vec{b}| \cos \theta$
If the dot product is positive, the angle is acute: the vectors point in roughly the same direction. If it is zero, they are perpendicular (this right-angle test only applies when both vectors are non-zero, since the zero vector has no direction). If it is negative, the angle is obtuse: they point in roughly opposite directions.
You see this in physics when calculating work: how much of a force is directed along the motion of an object? The dot product answers that.
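As a quick check of the dot product and the angle formula, here is a hedged NumPy sketch (the two example vectors are ours, chosen for illustration):

```python
import numpy as np

a = np.array([3, 2])
b = np.array([-1, 4])

dot = np.dot(a, b)    # 3*(-1) + 2*4 = 5

# Rearranging a . b = |a||b|cos(theta) to solve for the angle
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
angle_deg = np.degrees(np.arccos(cos_theta))

print(dot, round(angle_deg, 1))   # positive dot product -> acute angle (about 70 degrees)
```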
If vectors are the building blocks, matrices are the structures that organize and transform them.
A matrix is a rectangular array of numbers. It might represent data, equations, or instructions for changing space. In its simplest form, a matrix looks like this:
$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$
This one has 2 rows and 2 columns. You can think of each row as a rule, or each column as a direction. But the real power of a matrix shows up when it interacts with a vector.
Multiplying a matrix by a vector applies a transformation. The vector goes in. A new vector comes out.
If you have:
$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad \vec{x} = \begin{bmatrix} 5 \\ 6 \end{bmatrix}$
then:
$A \vec{x} = \begin{bmatrix} 1 \cdot 5 + 2 \cdot 6 \\ 3 \cdot 5 + 4 \cdot 6 \end{bmatrix} = \begin{bmatrix} 17 \\ 39 \end{bmatrix}$
Each row of the matrix acts like a set of instructions. You multiply and sum the entries to get the result.
Geometrically, this reshapes the vector. The matrix might stretch it, rotate it, flip it, or send it into a new direction altogether. That’s why we often describe matrices as transformations.
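If you want to see the same row-by-row instructions carried out in code, here is a minimal NumPy sketch (an optional aside, not part of the article's workflow):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
x = np.array([5, 6])

# Each entry of the result is a row of A dotted with x
print(A @ x)   # [17 39]
```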
Matrices of the same size can be added or subtracted. This is done entry by entry:
If:
$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$
then:
$A + B = \begin{bmatrix} 1+5 & 2+6 \\ 3+7 & 4+8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}$
It’s like combining two layers of data or two sets of instructions.
Just like with vectors, a matrix can be scaled by a number. Every entry is multiplied:
$3A = 3 \cdot \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 6 \\ 9 & 12 \end{bmatrix}$
This is useful when adjusting the strength of a transformation or changing the weight of a matrix in a calculation.
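Both operations are entry-by-entry, so they are easy to verify with a short NumPy sketch (again, just an optional check):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A + B)   # [[ 6  8] [10 12]] -- entry-by-entry sum
print(3 * A)   # [[ 3  6] [ 9 12]] -- every entry scaled by 3
```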
Multiplying two matrices is more involved. It’s not done entry by entry. Instead, each row of the first matrix is dotted with each column of the second.
Suppose:
$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$
Then the product $AB$ is:
$AB = \begin{bmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$
Each number in the result comes from multiplying a row of $A$ with a column of $B$. Matrix multiplication only works when the number of columns in the first matrix equals the number of rows in the second. And importantly, it is not commutative. That means:
$AB \ne BA \quad \text{in general}$
This is a key difference from regular multiplication. The order matters because the transformations don’t always align.
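A short NumPy sketch makes the non-commutativity concrete, using the same two matrices (the code itself is our illustration, not the article's method):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A @ B)   # [[19 22] [43 50]] -- matches the hand calculation above
print(B @ A)   # [[23 34] [31 46]] -- a different matrix entirely

print(np.array_equal(A @ B, B @ A))   # False: order matters
```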
Some matrices leave vectors unchanged. These are called identity matrices. For 2×2 matrices, the identity is:
$I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
When you multiply any compatible matrix or vector by $I$, you get the original back:
$I\vec{v} = \vec{v}, \quad AI = A$
It plays the same role as the number 1 does in arithmetic.
Imagine a 3D game. A character walks forward, turns left, and climbs a ramp. Each move is a matrix. Each position is a vector. When you stack the moves by multiplying their matrices, you control the flow of movement.
Or think about a spreadsheet of sales data. Each column is a product. Each row is a region. A matrix might represent the effect of a price change or marketing shift across all regions at once. One multiplication, and you see the updated outcomes.
Matrices hold more than numbers. They hold relationships. And they act on vectors with quiet precision.
Some matrices can be reversed. Others can’t. The key to knowing which is which lies in a single number: the determinant.
The determinant tells you something subtle but powerful. It measures how a matrix stretches or squashes space. And it tells you whether that transformation is reversible. Let’s begin with the simplest case: a $2 \times 2$ matrix.
Suppose you have:
$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$
The determinant of $A$ is:
$\det(A) = ad - bc$
It’s a simple formula, but it carries deep meaning.
If $\det(A) = 0$, then $A$ is singular. That means it collapses space in some way, flattening or folding it so that different inputs lead to the same output. You can’t undo the transformation.
If $\det(A) \neq 0$, then $A$ is invertible. There exists another matrix that reverses its effect.
For example:
$A = \begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix} \Rightarrow \det(A) = 2 \cdot 3 - 1 \cdot 4 = 6 - 4 = 2$
Since the determinant is not zero, $A$ has an inverse.
Think of a matrix as transforming shapes in space. If you apply a matrix to a square, the result might be a parallelogram. The determinant tells you how the area of that square changes.
In three dimensions, the determinant gives the volume scaling factor of the transformation.
For a $3 \times 3$ matrix:
$A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}$
The determinant is:
$\det(A) = a(ei - fh) - b(di - fg) + c(dh - eg)$
This looks more complicated, but the idea is the same: we’re measuring how much space is stretched or compressed.
Determinants can also be computed using cofactor expansion, row operations, or LU decomposition, especially for larger matrices. These methods show up more in practice as systems grow.
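For a quick numerical check of both cases, here is a hedged NumPy sketch (NumPy returns the determinant as a floating-point number, so expect values like 2.0000000000000004 rather than an exact 2):

```python
import numpy as np

A = np.array([[2, 1],
              [4, 3]])
print(np.linalg.det(A))   # ~2.0, matching ad - bc = 2*3 - 1*4

# A singular matrix: the second row is just twice the first
S = np.array([[1, 2],
              [2, 4]])
print(np.linalg.det(S))   # ~0.0 -- this matrix collapses space and has no inverse
```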
If a matrix is invertible, there exists another matrix, called the inverse, that undoes its transformation.
For a $2 \times 2$ matrix:
$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$
If $\det(A) \neq 0$, then the inverse is:
$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$
Let’s check an example:
$A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix} \Rightarrow \det(A) = 2 \cdot 4 - 3 \cdot 1 = 8 - 3 = 5$
So the inverse is:
$A^{-1} = \frac{1}{5} \begin{bmatrix} 4 & -3 \\ -1 & 2 \end{bmatrix}$
When you multiply $A$ by its inverse, you get the identity matrix:
$AA^{-1} = A^{-1}A = I$
In higher dimensions, finding the inverse can involve row reduction or matrix decomposition methods. Not all matrices have inverses, but when they do, it opens the door to solving systems and reversing transformations.
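As a sanity check on the worked example, a minimal NumPy sketch computes the inverse and confirms that multiplying back gives the identity (allowing for floating-point rounding):

```python
import numpy as np

A = np.array([[2, 3],
              [1, 4]])

A_inv = np.linalg.inv(A)   # should equal (1/5) * [[4, -3], [-1, 2]]
print(A_inv)

# A times its inverse is the identity, up to rounding error
print(np.allclose(A @ A_inv, np.eye(2)))   # True
```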
Say you’re designing a robot arm. It moves through space in a controlled way, using transformations based on matrices. You want to know: if it ends up in a certain position, how do you figure out where it started? The matrix gets you there. The inverse brings you back.
Or imagine a cryptographic system that scrambles messages. The matrix does the scrambling. Only the inverse can decode the result.
The determinant tells you whether that’s even possible, whether your original input can be recovered. It’s the gatekeeper for reversibility.
A matrix is more than a grid of numbers. It’s a rule, an action applied to space. When a matrix acts on a vector, it doesn’t just spit out another list of numbers. It transforms that vector. It might rotate it, stretch it, flatten it, or reflect it. These actions are called linear transformations.
Understanding these transformations helps you see what matrices really do. You move from arithmetic to geometry. From calculation to intuition.
A linear transformation is a function that takes vectors as input and returns new vectors as output, in a way that preserves structure.
It satisfies two key properties:
Preserves addition:
$T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v})$
Preserves scalar multiplication:
$T(c\vec{v}) = c \cdot T(\vec{v})$
These are what make the transformation linear. No curves, no distortions. Just stretching, rotating, reflecting, and projecting: changes that preserve straightness and proportionality.
Every matrix defines a linear transformation.
Take the matrix:
$A = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$
It transforms the vector $\vec{x} = \begin{bmatrix} x \\ y \end{bmatrix}$ by stretching the $x$-component by 2 and the $y$-component by 3:
$A\vec{x} = \begin{bmatrix} 2x \\ 3y \end{bmatrix}$
The result is a rectangle stretched in different directions.
Change the matrix, and you change the transformation: a different matrix might rotate the same vector, reflect it, or flatten it.
Just like functions, linear transformations can be composed.
If $T_1$ and $T_2$ are two transformations, then applying one after the other is the same as multiplying their matrices:
$T_2(T_1(\vec{x})) = (A_2 A_1)\vec{x}$
Order matters. Rotating then projecting is not the same as projecting then rotating. This is why matrix multiplication is not commutative.
In practical terms, this lets you build complex behaviors from simple steps. One matrix scales, another rotates, a third shifts. Together, they form a pipeline of motion.
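Here is a small NumPy sketch of such a pipeline, composing a scaling matrix with a 90-degree rotation (the specific matrices and vector are our own illustration):

```python
import numpy as np

scale = np.array([[2, 0],
                  [0, 3]])        # stretch x by 2, y by 3
rotate = np.array([[0, -1],
                   [1,  0]])      # rotate 90 degrees counterclockwise

x = np.array([1, 1])

# Scale first, then rotate: matrices multiply right to left
print((rotate @ scale) @ x)   # [-3  2]

# Reverse the order and the result changes -- composition is not commutative
print((scale @ rotate) @ x)   # [-2  3]
```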
Linear algebra is precise, but it's easy to slip. Here are a few common mistakes, along with ways to stay clear of them.
Matrix multiplication isn't like regular multiplication. The order matters. In fact, sometimes $BA$ doesn’t even exist.
Tip: Always check dimensions and apply matrices in the correct order.
A zero determinant means the matrix is not invertible. It collapses space.
Tip: Think of the determinant as a signal: invertible or not.
One wrong move in Gaussian elimination can throw off your whole solution.
Tip: Slow down. Label steps. Double-check arithmetic.
You can't add or multiply matrices unless their sizes are compatible.
Tip: Write down matrix sizes before operating; this prevents errors early.
Some systems have no solution. Others have infinitely many.
Tip: Use row reduction to check. Look for zero rows or free variables (a quick rank-based check is sketched just after this list).
Vectors are usually single columns. Matrices have rows and columns, but they aren’t interchangeable.
Tip: Keep notation clear. Know what type of object you're working with.
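One way to carry out that row-reduction tip in code is to compare the rank of the coefficient matrix with the rank of the augmented matrix; this sketch uses NumPy and an inconsistent example system of our own choosing:

```python
import numpy as np

# System: x + y = 2 and 2x + 2y = 5 (the second equation contradicts the first)
A = np.array([[1, 1],
              [2, 2]])
b = np.array([2, 5])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

if rank_A < rank_Ab:
    print("No solution")                 # this example: rank 1 < rank 2
elif rank_A < A.shape[1]:
    print("Infinitely many solutions")   # free variables remain
else:
    print("Unique solution")
```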
When you’re working through a complex matrix or solving a system, sometimes it helps to see each step laid out clearly. Symbolab’s Linear Algebra Calculator does just that. It shows not only the answer but how you get there.
Here’s how to use it effectively:
You have a few options for entering your problem.
Whether your problem involves a matrix, a determinant, or a vector projection, the calculator can parse it. Once your expression is entered, press “Go” to process it.
Symbolab walks through the entire solution step by step.
This isn’t just about finding an answer; it’s about watching the reasoning unfold. Using a tool like Symbolab’s calculator can deepen your understanding, especially when paired with manual work. You learn faster when you can compare your reasoning to a clear model.
Linear algebra helps us understand systems and how things move, connect, and change. From vectors and matrices to transformations and equations, each concept builds toward a deeper view of structure. With practice and curiosity, these tools become more than calculations; they become ways of thinking clearly, navigating complexity, and making meaning from patterns.