Have you ever paused, even for just a second, and thought, How does my calculator know what $\sin(23^\circ)$ is? Or $e^{1.5}$? Not from memory. It doesn’t store every answer. It builds them, one piece at a time, using something from calculus called a Taylor series. This idea lets us take a complex function and write it as a series of simpler, familiar pieces. The Maclaurin series is one special case, centered at zero. In this guide, we’ll walk through both gently, using real-life examples and step-by-step explanations. We’ll also use Symbolab’s Taylor/Maclaurin Series calculator to help you see how everything fits together, one step at a time.
Before we get to Taylor or Maclaurin series, we need to understand what they’re built from. At the heart of both is something called a power series. Let’s take a closer look.
Imagine building a function, not in one big leap, but piece by piece. You start with a constant. Then you add a term with $x$. Then one with $x^2$, then $x^3$, and so on. Each term brings in a little more detail — like layers of shading in a drawing.
A power series looks like this:
$a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots = \sum_{n=0}^{\infty} a_n x^n$
The $a_n$ values are called coefficients. They’re the weights that shape the curve. The powers of $x$ tell us how far we are from the center, and how much each term should contribute.
So why use something like this? Why not just use the original function?
Because sometimes, the original function is hard to work with. Try calculating something like $\sin(1.3)$ or $e^{2.1}$ by hand. It’s possible, but not fun — and certainly not fast. A power series turns that hard function into something friendlier: a polynomial. We know how to add, subtract, differentiate, and integrate polynomials. That makes power series very useful.
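If you like to see this in code, here is a minimal Python sketch (the name `eval_power_series` is just illustrative) of how cheap a truncated power series is to evaluate: Horner's rule needs nothing but additions and multiplications.

```python
def eval_power_series(coeffs, x):
    """Evaluate a_0 + a_1*x + a_2*x**2 + ... from a finite list of
    coefficients [a_0, a_1, a_2, ...] using Horner's rule."""
    result = 0.0
    for a in reversed(coeffs):  # work from the highest power down
        result = result * x + a
    return result

# Example: the polynomial 1 + 2x + 3x^2 evaluated at x = 0.5
print(eval_power_series([1, 2, 3], 0.5))  # 2.75
```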
Now that we’ve looked at power series, let’s see how we can make them even more useful.
Sometimes, you don’t just want a general approximation. You want a series that matches a function exactly at a particular point — not just the value, but the slope, the curvature, and even how the curvature is changing. That’s what the Taylor series is built to do. If you’ve worked with derivatives before, you already know they tell us how a function is changing. The first derivative gives the slope. The second tells you if the curve is bending up or down. The third tells you how that is changing, and so on.
The Taylor series uses all of that information to build a function out of simple polynomial terms, centered around a point you choose. Here’s the formula:
$f(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \frac{f^{(3)}(a)}{3!}(x - a)^3 + \cdots$
You’ll also see it written like this:
$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n$
Let’s take a breath here. You don’t need to memorize the formula. Let’s focus on what it’s telling us.
Each term uses a derivative of the function, evaluated at a specific point — $x = a$. That’s the center. And instead of just using $x$, we’re using $(x - a)$ to build our polynomial around that point.
This is a way of saying, “What’s the function doing right here, and how can I build a curve that follows its shape as closely as possible?”
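As a rough sketch of how the formula turns into arithmetic (the name `taylor_partial_sum` is made up for illustration), here is the partial sum $\sum_n \frac{f^{(n)}(a)}{n!}(x-a)^n$ computed from a short list of derivative values at the center $a$:

```python
from math import factorial, pi, sin

def taylor_partial_sum(derivs_at_a, a, x):
    """Sum of f^(n)(a)/n! * (x - a)^n over the derivative values given."""
    return sum(d / factorial(n) * (x - a) ** n
               for n, d in enumerate(derivs_at_a))

# sin(x) centered at a = pi/2, where the derivatives cycle 1, 0, -1, 0, 1, ...
approx = taylor_partial_sum([1, 0, -1, 0, 1], a=pi / 2, x=1.6)
print(approx, sin(1.6))  # both print about 0.99957
```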
Let’s look at one example together, something real.
Say you’re programming a smart thermostat. It needs to respond to how quickly the temperature is rising or falling. Inside the code, you use the function $e^x$ to model heat flow. But the processor inside the thermostat is tiny. It can’t calculate $e^x$ directly each time. It needs something quicker.
That’s where the Taylor series comes in.
For $f(x) = e^x$, the math works out nicely. Every derivative of $e^x$ is still $e^x$, and at $x = 0$, they all equal $1$.
So the series looks like this:
$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots$
You don’t need many terms to get a solid estimate. And now your thermostat can respond quickly, using just addition, multiplication, and division, nothing fancy.
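As an illustration only (this is not real thermostat firmware), a few-term partial sum in Python already tracks `math.exp` closely for small inputs:

```python
from math import exp, factorial

def exp_approx(x, n_terms=6):
    """Partial sum 1 + x + x^2/2! + ... with n_terms terms."""
    return sum(x ** n / factorial(n) for n in range(n_terms))

for x in (0.1, 0.5, 1.0):
    print(x, exp_approx(x), exp(x))  # already close; more terms tighten it further
```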
This is the heart of the Taylor series. We take everything we know about the function at one point, and we build a path outward from there step by step.
Now that you know what a Taylor series is, let’s talk about one version of it that shows up all the time: the Maclaurin series.
The Maclaurin series is just a Taylor series centered at zero. That’s it. Same idea, same formula, but instead of building the function around some point $a$, we build it around $x = 0$.
Why does that matter?
Because centering at zero makes the math simpler. There’s no $(x - a)$ in the formula — just $x^n$. That makes it easier to calculate, especially by hand or on a small device. And many important functions, like $\sin(x)$, $\cos(x)$, $e^x$, and $\ln(1 + x)$, behave nicely around zero. So we use Maclaurin series all the time.
Here’s the general form:
$f(x) = f(0) + f'(0)x + \frac{f''(0)}{2!}x^2 + \frac{f^{(3)}(0)}{3!}x^3 + \cdots$
Or, in sigma notation:
$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}x^n$
Let’s try this with a function you probably know well: $\sin(x)$.
If we take derivatives of $\sin(x)$ and evaluate them at $x = 0$, we get:
$f(0) = \sin(0) = 0$
$f'(0) = \cos(0) = 1$
$f''(0) = -\sin(0) = 0$
$f^{(3)}(0) = -\cos(0) = -1$
and then the pattern repeats.
That gives us this series:
$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$
Only the odd powers show up. And the signs alternate.
What does that mean for you?
Let’s say you’re writing code for a robot arm, and it needs to turn at a certain angle. You’re trying to estimate $\sin(0.3)$ quickly. Instead of asking the processor to calculate it directly, you use the Maclaurin series.
Try just the first two terms:
$\sin(0.3) \approx 0.3 - \frac{(0.3)^3}{6} = 0.3 - 0.0045 = 0.2955$
The real value of $\sin(0.3)$ is about $0.29552$. That’s impressively close, especially with only two terms.
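You can confirm this kind of estimate with a quick throwaway script (a sketch, assuming nothing beyond Python's standard `math` module):

```python
from math import factorial, sin

def sin_approx(x, n_terms=2):
    """Alternating odd-power Maclaurin partial sum for sin(x)."""
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(n_terms))

print(sin_approx(0.3))  # 0.2955 with just two terms
print(sin(0.3))         # about 0.29552
```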
And this is what makes Maclaurin series so useful. They’re fast, flexible, and accurate near zero. Whether you’re working on a calculator, building software, or solving a problem on paper, these series help you simplify the hard stuff.
So far, we’ve seen how a Taylor or Maclaurin series can help us estimate functions using simpler pieces — one term at a time. But there’s one more important question to ask before we get too comfortable:
How do we know the series actually gives us the right answer?
That’s what convergence is all about. A Taylor series is an infinite sum. But not every infinite sum adds up to something meaningful. Sometimes, the series works beautifully. Other times, it starts to drift away from the function it’s supposed to follow.
Let’s walk through this together.
A Taylor series converges at a point if, when you plug in a value of $x$, the sum of all those infinite terms actually approaches the true value of the function. In other words, as you add more and more terms, the series gets closer and closer to the function’s output.
But if you move too far away from the center point — the $a$ value where the series is built — the approximation can fall apart. The further you stretch, the more likely the series is to diverge, meaning the terms keep adding up, but they no longer get you closer to the function.
Think of it like using a map that works really well for your neighborhood, but becomes less accurate the farther you go from home. Every Taylor series has a radius of convergence — a kind of boundary where the approximation holds true.
Let’s go back to our friend $e^x$.
The Maclaurin series for $e^x$ is:
$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$
This series actually converges for all real values of $x$. That’s part of why it’s so useful. No matter how far you go, the series keeps matching the function closely.
But now consider $\frac{1}{1 - x}$:
Its Maclaurin series is:
$\frac{1}{1 - x} = 1 + x + x^2 + x^3 + \cdots$
This only converges when $|x| < 1$. The moment $x$ reaches or passes $1$, the series breaks down and no longer represents the function correctly.
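A small numerical experiment makes the contrast concrete (a sketch; `geometric_partial_sum` is just an illustrative name):

```python
def geometric_partial_sum(x, n_terms):
    """1 + x + x^2 + ... + x^(n_terms - 1)."""
    return sum(x ** n for n in range(n_terms))

for x in (0.5, 1.5):
    print(x, [round(geometric_partial_sum(x, n), 3) for n in (5, 10, 20)])
# x = 0.5: the partial sums settle near 2, which is 1/(1 - 0.5)
# x = 1.5: the partial sums keep growing; the series diverges
```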
So how does this apply in real life?
Imagine you’re modeling the motion of a swing, one that’s moving gently back and forth. Near the resting point, a Maclaurin series might do a great job predicting its position over time. But if the swing gets pushed too far, the approximation may no longer hold. The math stops lining up with reality, not because the function changed, but because we stepped outside the series’ useful range.
The same thing happens when we try to estimate air pressure at different altitudes or describe the curve of a road. A Taylor series built near sea level might work well for small changes in elevation, but not for a mountain pass 3,000 feet up. The approximation is always local, it gives the best results close to the center point, not far away from it.
This is what convergence teaches us: a series can be powerful, but we need to know where its strengths begin and where they end.
As you work with Taylor and Maclaurin series more often, you’ll notice certain expansions come up again and again. Some functions have patterns so useful, so reliable, that it’s worth having them close — not to memorize for a test, but to understand how they work and when to use them.
Let’s walk through a few of the most important ones together.
$e^x$
You’ve already seen this one. It’s one of the cleanest, most cooperative series in all of math:
$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots$
This series converges everywhere, for all real numbers, which makes it incredibly useful in science, finance, and beyond.
$\sin(x)$
The sine function behaves nicely around $x = 0$. Its Maclaurin series looks like this:
$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$
You’ll notice two things: the powers of $x$ are all odd, and the signs alternate. That’s not a coincidence. The pattern comes from how the derivatives of $\sin(x)$ cycle between sine and cosine.
This series is especially good for estimating small angles. In physics, for example, when analyzing pendulums or waves, you’ll often see $\sin(x) \approx x$ for very small values of $x$.
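A tiny check (purely illustrative) shows how small the error of $\sin(x) \approx x$ stays for small angles:

```python
from math import sin

for x in (0.05, 0.1, 0.3):
    print(x, sin(x), abs(sin(x) - x))  # the error stays tiny while x is small
```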
$\cos(x)$
The cosine series is similar to sine, but it starts at 1 and uses even powers:
$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots$
Again, the signs alternate. And just like sine, this series gives excellent approximations when $x$ is close to zero, which is often true in the early steps of modeling motion or analyzing vibration.
$\ln(1 + x)$
This one’s a bit trickier, but very useful when working with growth and scaling:
$\ln(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots$
This series converges only for $-1 < x \le 1$, so it works best when $x$ is small. Within that range, it can help estimate logarithms without a calculator. Economists, for example, sometimes use this expansion when dealing with small percentage changes or growth rates.
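For example, a three-term cut of this series already pins down the log of a 3% growth rate (a quick sketch using Python's `math.log`):

```python
from math import log

x = 0.03  # a 3% growth rate
approx = x - x ** 2 / 2 + x ** 3 / 3
print(approx, log(1 + x))  # both are about 0.02956
```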
$\frac{1}{1 - x}$
We’ve talked about this one in the context of convergence:
$\frac{1}{1 - x} = 1 + x + x^2 + x^3 + \cdots \quad \text{for } |x| < 1$
It’s a geometric series, and it shows up in everything from compound interest to signal processing. Simple, but powerful, and a good reminder that not all useful series come from calculus.
Now that you’ve seen what these series look like and where they come up, let’s take some time to learn how to actually build one yourself. This is the kind of skill that might show up on a test, sure — but more importantly, it helps you see how each piece fits together.
We’ll walk through it step by step. No need to rush.
Let’s say you’re asked to find the Maclaurin series for a function. That’s just the Taylor series centered at $x = 0$. The general formula is:
$f(x) = f(0) + f'(0)x + \frac{f''(0)}{2!}x^2 + \frac{f^{(3)}(0)}{3!}x^3 + \cdots$
Here’s how to go about it.
Start with a function, like $f(x) = \cos(x)$, and the point where you want to center the series. For a Maclaurin series, this is always $x = 0$.
Find the first few derivatives of the function. For $\cos(x)$, they go like this:
$f(x) = \cos(x)$
$f'(x) = -\sin(x)$
$f''(x) = -\cos(x)$
$f^{(3)}(x) = \sin(x)$
$f^{(4)}(x) = \cos(x)$
You’ll notice they repeat every four steps. That’s helpful.
Now plug in $x = 0$ to each derivative:
$f(0) = \cos(0) = 1$
$f'(0) = -\sin(0) = 0$
$f''(0) = -\cos(0) = -1$
$f^{(3)}(0) = \sin(0) = 0$
$f^{(4)}(0) = \cos(0) = 1$
Now use those values in the series formula:
$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots$
Only the even powers of $x$ appear, and the signs alternate. You’ve built the first few terms of the Maclaurin series for $\cos(x)$, all by hand.
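If you want to double-check a hand-built expansion, a computer algebra system can reproduce it; for instance, assuming SymPy is installed, its `series` function gives the same terms:

```python
from sympy import symbols, cos, series

x = symbols('x')
print(series(cos(x), x, 0, 8))
# prints 1 - x**2/2 + x**4/24 - x**6/720 + O(x**8)
```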
Here are a few places students often stumble when working with Taylor and Maclaurin series — and how to avoid those missteps.
When you're building or checking a Taylor or Maclaurin series, Symbolab’s calculator can guide you through the process, step by step.

How to use it: enter the function by typing it on your keyboard, using the built-in math symbols, uploading a photo, or even screenshotting the problem with the Chrome extension, then click “Go”.

Symbolab shows each derivative, how it’s evaluated, and how the final series is built. You can expand or collapse the steps, and a “One step at a time” view lets you follow the thinking slowly, from finding the derivatives to evaluating them and building the series.

Use the Chat with Symbo option for extra support if anything feels unclear. Whether you're practicing for class or checking your homework, this tool gives you a clear, organized way to explore series and feel more confident at every step.
Taylor and Maclaurin series help us build understanding, one term at a time. Whether you're working by hand or using Symbolab, each step brings you closer to the full picture. Take your time, trust the process, and know that real learning happens gently, piece by piece.