EGR 103/Fall 2023/20241006
Here are the steps to follow for the October 6, 2024, lecture for EGR 103:
Book Stuff
- Read Section 4.1
- Read Section 4.1.1
- Note the difference between accuracy and precision
- Read Section 4.1.2
- Note the various ways to measure error (or, as I like to say, "error")
- You have seen $$\varepsilon_a$$ before in the Newton's Method square root algorithm (and you will again!); see the Newton square root sketch after this list
- Read Section 4.1.3
- Look at, interrogate, peruse, parse, type, understand, and use Figure 4.2. Then memorize it.
- Note that the only lines of code you would need to change for a different iterative method would be where the solution starts (the sol = 1 line might change) and how the solution updates (the sol = sol... line). Everything else would be the same (other than the text in the description). The Newton square root sketch after this list has that same structure.
- Read Section 4.2
- Read Section 4.2.1
- Take special note of the last paragraph in the Integer Representation part - Python can represent integers that are ginormous as long as there's room in memory!
- In the Floating-Point Representation part, the main things to take away are:
- There is a largest possible magnitude that can be stored
- There is a smallest possible magnitude that can be stored with full precision
- There are even smaller numbers (denormals) that can be stored in a different way by trading precision for orders of magnitude (the number-representation sketch after this list pokes at all of these limits, and at Python's ginormous integers)
- Read Section 4.2.2
- Note the various ways that computers can just get a wrong answer; the roundoff sketch after this list shows a couple of them
- Read Section 4.3
- Skip 4.3.1-4.3.4
- Read Section 4.4
- Skip Sections 4.4.1-4.4.2
- Read Section 4.5
- Read Sections 4.5.1-4.5.3
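To tie Sections 4.1.2 and 4.1.3 together, here is the Newton square root sketch mentioned above: a minimal iterative calculation that uses the approximate relative error $$\varepsilon_a$$ as the stopping criterion. This is not Figure 4.2 itself; the function name, the tolerance, and the iteration cap are just illustrative choices.

def newton_sqrt(val, max_iter=50, tol=1e-12):
    # Estimate the square root of val iteratively; stop when eps_a drops below tol.
    # (max_iter and tol are illustrative values, not anything prescribed by the book.)
    sol = 1                              # starting guess - this line would change for a different method
    for k in range(max_iter):
        old = sol
        sol = (sol + val / sol) / 2      # update line - this is what changes for a different method
        eps_a = abs((sol - old) / sol)   # approximate relative error
        print(f"iter {k + 1}: sol = {sol:.15f}, eps_a = {eps_a:.2e}")
        if eps_a < tol:
            break
    return sol

newton_sqrt(2)

Just as noted above, the only lines that would change for a different iterative method are the sol = 1 start and the sol = ... update.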
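For Section 4.2.1, here is the number-representation sketch that pokes at the integer and floating-point limits mentioned above (the specific probe values are just illustrative; sys.float_info is Python's built-in record of the double-precision limits):

import sys

# Integers: arbitrary precision, limited only by available memory
print(2**1000)                    # a 302-digit integer, stored exactly

# Floats: fixed 64-bit (double-precision) storage
print(sys.float_info.max)         # largest storable magnitude, about 1.8e308
print(sys.float_info.min)         # smallest magnitude with full precision, about 2.2e-308
print(5e-324)                     # a denormal - smaller still, but with reduced precision
print(sys.float_info.max * 10)    # overflow: prints inf
print(5e-324 / 10)                # underflow: prints 0.0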
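For Section 4.2.2, here is the roundoff sketch showing two of those wrong-answer mechanisms (again, the specific numbers are just illustrative):

# Adding a small number to a much larger one can lose the small number entirely
print(1e16 + 1 - 1e16)    # prints 0.0, not 1.0 - the 1 gets swamped

# Subtracting nearly equal numbers wipes out significant digits (subtractive cancellation)
print((1 + 1e-15) - 1)    # prints roughly 1.11e-15, not 1e-15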
Other Stuff
- Because floating point numbers in Python use base-2, fractional values whose denominators are not integer powers of 2 cannot be perfectly represented. This is similar to how, in decimal, fractional values whose denominators (in lowest terms) have prime factors other than 2 and 5 cannot be perfectly represented (think 1/3 or 1/14). The snippet below prints what 0.1 actually stores.
- The following code shows the difference between two methods of calculating additive progressions. The first (additive) just adds a pre-determined delta to whatever the previous value in the series was. The second (multiplicative) multiplies the delta by the number of steps taken so far and then adds that to the starting value. You can play around with start and delta to see how things change. If delta is a fractional value with a denominator that is an integer power of 2 (.5, .25, .125, .875, etc.), the two series will be the same (as long as the start is not 15 or more orders of magnitude larger than the delta). Otherwise, the two series will diverge in places where the roundoff errors manifest in different ways. Check out what happens when you do something as simple as starting with 0 and using 0.1 as a delta. Hopefully this demonstrates the kind of unavoidable error in computing that can happen with even a simple algorithm!
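Here is a minimal sketch of that comparison (the names start, delta, and nsteps and the loop itself are illustrative, not a specific listing from class). It first prints what 0.1 actually stores, then builds the same progression both ways:

from decimal import Decimal

# start, delta, and nsteps are illustrative values - change them and see what happens
start = 0.0
delta = 0.1
nsteps = 10

# 0.1 cannot be stored exactly in base-2; here is the value that actually gets stored
print(Decimal(delta))

additive = start
for k in range(1, nsteps + 1):
    additive += delta                      # keep adding delta to the previous value
    multiplicative = start + k * delta     # scale delta by the step count, then add to the start
    match = "same" if additive == multiplicative else "DIFFERENT"
    print(f"step {k:2d}: additive = {additive:.17f}  multiplicative = {multiplicative:.17f}  ({match})")

With these values the two columns agree at first and then start to disagree - the additive series falls just short of 1 at the tenth step (0.9999999999999999) while the multiplicative one lands exactly on 1.0. Change delta to 0.125 (or any other fraction whose denominator is a power of 2) and the two agree at every step.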