====== MATH 753 Final Exam ======
Wed, Dec 14, 2016 10:30am-12:30pm Kingsbury N343

Floating point numbers
  * binary representation
  * how the numbers of bits in the mantissa and exponent determine the number of digits in each
  * floating point arithmetic: expected accuracy of arithmetic operations
  * what is machine epsilon?
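
Machine epsilon can be demonstrated in a few lines; Python is used here purely for illustration (any language with IEEE-754 doubles behaves the same way):

```python
# Machine epsilon is the gap between 1.0 and the next representable
# double-precision float: eps = 2^-52 for a 52-bit mantissa.
import sys

eps = 2.0 ** -52
print(eps == sys.float_info.epsilon)   # True

# Anything much smaller than eps added to 1.0 is lost to rounding:
print(1.0 + eps / 4 == 1.0)            # True
print(1.0 + eps == 1.0)                # False
```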
Solving 1d nonlinear equations
  * bisection: the algorithm, the required conditions, the convergence rate
  * Newton: the algorithm, the required conditions, the convergence rate
  * when to use bisection, when to use Newton
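
Both algorithms fit in a few lines; here is a minimal sketch (in Python, for illustration) applied to $f(x) = x^2 - 2$, whose positive root is $\sqrt{2}$:

```python
def bisection(f, a, b, tol=1e-12):
    # requires f(a), f(b) of opposite sign; converges linearly,
    # halving the bracket [a, b] at every step
    while b - a > tol:
        c = (a + b) / 2
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
    return (a + b) / 2

def newton(f, df, x, tol=1e-12):
    # requires f differentiable and a good starting guess;
    # converges quadratically near a simple root
    while abs(f(x)) > tol:
        x = x - f(x) / df(x)
    return x

f  = lambda x: x * x - 2
df = lambda x: 2 * x
print(bisection(f, 1, 2))    # both converge to sqrt(2) ~ 1.41421356...
print(newton(f, df, 1.5))
```

Bisection is slow but guaranteed given a sign change; Newton is fast but needs a derivative and a starting point close enough to the root.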
Gaussian elimination / LU decomposition
  * the LU algorithm: what are the formulae for computing the multipliers $\ell_{ij}$ of $L$?
  * be able to compute the LU decomposition of a small matrix by hand
  * backsubstitution, forward substitution
  * using LU to solve $Ax=b$
  * pivoting: what is it, and why is it a practical necessity?
  * what form does the LU decomposition take with pivoting? How do you use this form to solve $Ax=b$?

QR decomposition
  * what is a QR decomposition?
  * what algorithm do you know for computing the QR decomposition?
  * what are the formulae for the elements $r_{ij}$ of $R$ and the column vectors $q_j$ of $Q$?
  * what is an orthogonal matrix?
  * how to use the QR decomposition to solve a square $Ax=b$ problem
  * how to use the QR decomposition to find a least-squares solution to an oblong $Ax=b$ problem (an $m \times n$ matrix $A$ with $m > n$)

Polynomials
  * Horner's method: be able to rearrange a polynomial into Horner's form, and understand why you'd do that
  * Lagrange interpolating polynomial: be able to write down the Lagrange interpolating polynomial passing through a set of data points $(x_i, y_i)$, and understand why the formula works
  * Newton divided differences: know how to use this technique to find the interpolating polynomial through a set of data points $(x_i, y_i)$
  * Chebyshev points: what are they, what are they good for, why do we need them?

Least-squares models
  * understand how to set up least-squares $Ax=b$ problems to find the best fit of functions of the following forms to $m$ pairs of data points $(t_i, y_i)$
    * an $n$th-order polynomial
    * an exponential $y=c e^{at}$
    * a power law $y=c t^a$
    * a curve of the form $y = c t e^{at}$

Finite differencing and quadrature
  * be able to approximate the first & second derivatives of a function $y(x)$ from the values $y_i = y(x_i)$, where the $x_i$ are evenly spaced gridpoints $x_i = x_0 + i h$
  * provide error estimates of those approximate derivatives
  * be able to approximate the integral $\int_a^b y(x) \, dx$ of the function $y(x)$ from evenly spaced gridpoint values $y_i = y(x_i)$, using the Trapezoid Rule and Simpson's Rule
  * provide error estimates for those approximate integrals

Ordinary differential equations
  * what is an initial value problem?
  * why do we need to solve initial value problems numerically?
  * what are the timestepping formulae for
    * Forward Euler
    * Midpoint Method (a.k.a. 2nd-order Runge-Kutta)
    * 4th-order Runge-Kutta
    * Backwards Euler
    * Adams-Moulton
  * what are the global error estimates of the above timestepping formulae?
  * what is a global error estimate versus a local error estimate, and how are the two related?
  * what's the difference between an explicit method and an implicit method?
  * what's a stiff differential equation? what kind of method do you use for a stiff equation?
  * how do you convert an $n$th order differential equation in one variable to a system of first-order differential equations in $n$ variables?