There are many great topics in mathematics, but one of particular real-world value to mathematicians, engineers, and scientists alike is Numerical Analysis.
If you have ever taken a high school calculus class, you are probably aware of how quickly tasks like finding the roots of polynomials or integrating functions can become extremely difficult. Textbooks are full of “cooked up” problems that are easy to solve, but in the real world you often encounter problems that are nearly impossible to solve by hand. To make matters worse, even if you figure out how to deal with a difficult problem, you will rarely be asked to work it out on paper. The work is done by computers, and programming a computer to solve a problem can be a challenge when, at the most basic level, all a computer can do is carry out arithmetic operations. There is a branch of mathematics and computer science called Numerical Analysis that lends aid to both of these problems. In Numerical Analysis, we attempt to reduce complicated problems from calculus, differential equations, linear algebra, and other fields to a series of arithmetic operations that closely approximate the solution, and then carry those operations out on a computer.
To keep things simple and to the point, I will focus on a common example: numerically solving differential equations. Physics, engineering, biology, finance, and any number of other fields are full of scenarios that can be modelled by differential equations: bacterial growth, heat flow, stock prices, and so on. With a differential equation in hand, we look for ways to make computation practical and efficient. One approach is to replace derivatives by Finite Differences. Recalling that a derivative is defined as the limit of a difference quotient, we can simply take the difference quotient (f(x+h) - f(x))/h with a sufficiently small step size h and let that approximate the limit. This reduces differentiation to evaluating functions at a few values and doing some algebraic manipulation and arithmetic. Of course, this approach leaves us with concerns that need to be carefully addressed. For instance, if the step size is too big then the approximation is inaccurate, but if the step size is too small then roundoff error becomes a problem. Numerical analysts always need to be aware of how accurate their methods are.
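To make this concrete, here is a minimal Python sketch of the idea; the choice of sin as the test function and the particular step sizes are mine, purely for illustration. We approximate f'(x) by the forward difference quotient (f(x+h) - f(x))/h and watch how the error behaves as h shrinks.

```python
import math

def forward_difference(f, x, h):
    """Approximate f'(x) by the difference quotient (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Illustrative test: f = sin, so the exact derivative at x = 1.0 is cos(1.0).
exact = math.cos(1.0)
for h in (1e-1, 1e-4, 1e-8, 1e-12):
    approx = forward_difference(math.sin, 1.0, h)
    print(f"h = {h:.0e}   error = {abs(approx - exact):.2e}")

# The error shrinks as h decreases, until roundoff in computing f(x + h) - f(x)
# starts to dominate and the approximation gets worse again.
```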
Once we have a good numerical scheme in place, we must deal with the fact that a computed solution needs to be defined over the whole domain of the problem, not just at a single point. Say, for example, the problem is defined on an interval [a,b]. We want to approximate the solution f(x) at the grid points a, a+h, a+2h, and so on, where h is the chosen step size. For this we can use linear algebra. If you have taken a basic linear algebra class, then you know how a linear system can be solved by collecting its coefficients into a matrix and solving the matrix equation “Ax=b,” for instance with a matrix inverse. If you have taken a differential equations class, then you may even have seen some of the details in practice.
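As a sketch of how this plays out, suppose the differential equation were the simple boundary value problem u''(x) = g(x) on [a,b] with u(a) = u(b) = 0. The problem, the grid size, and the right-hand side below are assumptions chosen only to keep the example small; replacing u'' by a second difference at each grid point turns the problem into a tridiagonal linear system.

```python
import numpy as np

# Model problem (chosen only for illustration): u''(x) = g(x) on [a, b]
# with u(a) = u(b) = 0, discretized on the grid a + h, a + 2h, ..., b - h.
a, b, n = 0.0, 1.0, 9                 # n interior grid points
h = (b - a) / (n + 1)
x = np.linspace(a + h, b - h, n)
g = np.sin(np.pi * x)                 # an arbitrary right-hand side

# Replacing u'' by the second difference (u[i-1] - 2u[i] + u[i+1]) / h**2
# at every grid point gives a tridiagonal system A u = g.
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

u = np.linalg.solve(A, g)             # fine for a tiny demo system
print(u)
```

For nine unknowns this is trivial, but the same construction on a finer grid, or in two or three dimensions, produces matrices with thousands or millions of rows.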
Unfortunately, real-world problems are rarely as simple as the ones in linear algebra textbooks, where computing a matrix inverse is enough. If nothing else, the matrices that arise from Finite Difference problems tend to get so large that it would be impractical to carry out the number of floating-point operations needed to explicitly form an inverse. (Controlling roundoff error is another concern when the process is carried out on a finite-precision computer.) Once again, Numerical Analysis gives us some tools to work with. Iterative schemes like the Jacobi method or the Gauss-Seidel method allow us to make an initial guess at the solution and repeatedly feed it back into a simple update rule until the approximation gets close enough to the true solution. I will not dwell too much on the specifics, but the algorithms require only some arithmetic to be carried out, and it is easy to program a computer to run a loop.
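Here is a minimal sketch of the Jacobi method; the function name, tolerance, and the small diagonally dominant test system are my own illustrative choices. Each pass through the loop solves the i-th equation for the i-th unknown, using the previous iterate for every other unknown.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=10_000):
    """Jacobi iteration: solve each equation for its own unknown,
    using the previous iterate's values for all the others."""
    D = np.diag(A)                    # diagonal entries of A
    R = A - np.diagflat(D)            # off-diagonal part of A
    x = x0.copy()
    for k in range(max_iter):
        x_new = (b - R @ x) / D       # x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Small diagonally dominant test system, where Jacobi is guaranteed to converge.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iterations = jacobi(A, b, x0=np.zeros(3))
print(x, iterations)
```

Each pass through the loop is nothing more than a handful of multiplications, subtractions, and divisions, which is exactly the kind of work a computer is good at.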
As with the Finite Difference method, when using iterative schemes to solve linear systems there are questions that numerical analysts need to consider carefully. For some matrices, certain iterative schemes will not converge to the true solution, and instead blow up into nonsense. We must be sure the scheme works before we rely on it. Likewise, if the true solution is not known, we must have some sense of what it means for a final approximation to be “close enough,” and understand how that leftover error will affect our original problem. Additionally, we need a sense of how quickly an iterative scheme will get us “close enough”: if we must iterate too many times, then from a computational standpoint it is too costly to use. These are only a few of the things that numerical analysts think about, and I have not gone into much detail about them, but if these kinds of questions interest you, then you may consider studying Numerical Analysis.
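To end with one concrete version of the convergence question: the Jacobi update can be written as x_{k+1} = M x_k + c, where M is built from the matrix A, and the iteration converges for every starting guess exactly when the spectral radius of M (the largest eigenvalue magnitude) is less than 1. A small sketch of that check, with two hypothetical test matrices, one diagonally dominant and one not:

```python
import numpy as np

def jacobi_spectral_radius(A):
    """Spectral radius of the Jacobi iteration matrix M = -D^{-1} R,
    where D is the diagonal of A and R = A - D."""
    D = np.diag(np.diag(A))
    R = A - D
    M = -np.linalg.solve(D, R)
    return max(abs(np.linalg.eigvals(M)))

A_good = np.array([[4.0, -1.0], [-1.0, 4.0]])   # diagonally dominant
A_bad  = np.array([[1.0,  2.0], [ 3.0, 1.0]])   # not diagonally dominant
print(jacobi_spectral_radius(A_good))  # about 0.25: Jacobi converges
print(jacobi_spectral_radius(A_bad))   # about 2.45: Jacobi diverges
```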