Traditionally, computers keep track of the date and time using a format known as Unix time, which counts the number of seconds that have elapsed since 00:00:00 UTC on Thursday, 1 January 1970. But there is a problem if we track Unix time using a fixed-width integer: such an integer has a maximum value, and once the counter exceeds it, it rolls over, wreaking havoc on computer systems. Calculate the roll-over date for:
- Ordinary (signed) 32-bit integers
- Unsigned 32-bit integers, which do not reserve a bit for the sign (and thus store only non-negative numbers).
- Signed 64-bit integers
- Unsigned 64-bit integers
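One way to check the 32-bit cases is with Python's `datetime` module (the 64-bit maxima overflow `datetime`'s year-9999 range, so those are reported in years instead); this is a sketch for verifying your hand calculation, not the intended pencil-and-paper solution:

```python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# 32-bit counters roll over within datetime's representable range.
for bits, signed in [(32, True), (32, False)]:
    max_count = 2**(bits - 1) - 1 if signed else 2**bits - 1
    print(bits, "signed" if signed else "unsigned",
          epoch + timedelta(seconds=max_count))

# 64-bit maxima exceed datetime's year-9999 limit; estimate in years.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
for signed in (True, False):
    max_count = 2**63 - 1 if signed else 2**64 - 1
    print("signed" if signed else "unsigned",
          f"{max_count / SECONDS_PER_YEAR:.3g} years after 1970")
```

The signed 32-bit case gives the well-known "Year 2038 problem" date of 03:14:07 UTC on 19 January 2038.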
Find the runtime of each of the following Python code samples (e.g. \(O(1)\) or \(O(N)\)). Assume that the arrays
x and y are each of size \(N\):
z = x + y
x = x
z = conj(x)
z = angle(x)
x = x[::-1] (this reverses the order of the elements).
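As a hint for checking your answers, note that NumPy (assumed here to be the array library in use) distinguishes operations that allocate and fill a new array from ones that merely create a view of existing data; the array below is an illustrative stand-in:

```python
import numpy as np

x = np.arange(8, dtype=complex)   # a small stand-in for a size-N array
y = np.arange(8, dtype=float)

z = x + y          # allocates and fills a new length-N array: O(N)
w = np.conj(x)     # likewise must touch every element: O(N)
r = x[::-1]        # basic slicing returns a *view*: no data is copied
print(r.base is x) # True: only the strides change, not the contents
```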
Write a Python function
uniquify_floats(x, epsilon), which accepts a list (or array) of floats
x, and deletes all "duplicate" elements that are separated from another element by a distance of less than
epsilon. The return value should be a list (or array) of floats that differ from each other by at least epsilon.
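A minimal sketch of one possible solution (a greedy \(O(N^2)\) pass; a sort-based approach would be faster, and which "duplicate" of a cluster survives is a design choice left open by the problem statement):

```python
def uniquify_floats(x, epsilon):
    """Return the floats from x that differ pairwise by at least epsilon.

    Greedy strategy: keep an element only if it lies at least epsilon
    away from every element already kept (earlier elements win ties).
    """
    kept = []
    for value in x:
        if all(abs(value - k) >= epsilon for k in kept):
            kept.append(value)
    return kept

print(uniquify_floats([1.0, 1.05, 2.0, 2.03], 0.1))  # [1.0, 2.0]
```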
(Hard) Suppose a floating-point representation uses one sign bit, \(N\) fraction bits, and \(M\) exponent bits. Find the density of real numbers which can be represented exactly by a floating-point number. Hence, show that floating-point precision decreases exponentially with the magnitude of the number.
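One possible starting point for the counting argument (a sketch only, assuming an IEEE-style normalized leading bit; completing the argument is the exercise):

```latex
Between consecutive powers of two, $2^E \le |x| < 2^{E+1}$, every
representable number has the form
\[
    x = \pm\,(1.f_1 f_2 \cdots f_N)_2 \times 2^E ,
\]
so there are exactly $2^N$ representable numbers in an interval of
width $2^E$, uniformly spaced by $\Delta x = 2^{E-N}$.  The density of
representable numbers near $x$ is therefore
\[
    \rho(x) \,\approx\, \frac{2^N}{2^E} \,\approx\, \frac{2^N}{|x|} ,
\]
i.e.\ the gap between neighbors grows exponentially with the exponent
$E \approx \log_2 |x|$.
```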