Mastering Swift

Numeric types

Swift contains many of the standard numeric types that are suitable for storing various integer and floating-point values.

Integers

An integer is a whole number. Integers can be either signed (positive, negative, or zero) or unsigned (positive or zero). Swift provides several integer types of different sizes; the value range of each type can be retrieved through its min and max properties, as the Playground example later in this section shows.

Tip

Unless there is a specific reason to define the size of an integer, I would recommend using the standard Int or UInt type. This will save you from needing to convert between different types of integers.

In Swift, Int (as well as every other numeric type) is actually a named type, implemented in the Swift standard library using a structure. This gives us a consistent mechanism for memory management for all data types, as well as properties that we can access. I retrieved the minimum and maximum values of each integer type using its min and max properties.
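
A minimal Playground sketch that retrieves these values might look like the following; the results shown in the comments assume a 64-bit platform, where Int and UInt match Int64 and UInt64:

print(Int8.min)    // -128
print(Int8.max)    // 127
print(Int16.min)   // -32768
print(Int16.max)   // 32767
print(Int32.min)   // -2147483648
print(Int32.max)   // 2147483647
print(Int64.min)   // -9223372036854775808
print(Int64.max)   // 9223372036854775807
print(UInt8.min)   // 0 (the minimum of every unsigned type is 0)
print(UInt8.max)   // 255
print(UInt16.max)  // 65535
print(UInt32.max)  // 4294967295
print(UInt64.max)  // 18446744073709551615
print(Int.min)     // same as Int64.min on a 64-bit platform
print(Int.max)     // same as Int64.max on a 64-bit platform
print(UInt.max)    // same as UInt64.max on a 64-bit platform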

Integers can also be represented as binary, octal, or hexadecimal numbers. We just need to add a prefix to the number literal to tell the compiler which base the number is in: 0b for binary, 0o for octal, and 0x for hexadecimal (decimal literals need no prefix).

The following Playground shows how the number 95 is represented in each of the numerical bases:
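
A sketch along those lines, using illustrative constant names, might be:

let decimal95 = 95          // no prefix: decimal
let binary95 = 0b1011111    // 0b prefix: binary
let octal95 = 0o137         // 0o prefix: octal
let hex95 = 0x5F            // 0x prefix: hexadecimal
// All four constants hold the same value: 95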

Swift also allows us to insert arbitrary underscores in our numeric literals. This can improve the readability of our code. As an example, if we were defining the speed of light, which is constant, we can define it like this:

let speedOfLightKmSec = 300_000

Swift will ignore these underscores; therefore, they do not affect the value of the numeric literals in any way.

Floating-point

A floating-point number is a number with a decimal component. Swift has two standard floating-point types: Float, which represents a 32-bit floating-point number, and Double, which represents a 64-bit floating-point number. Swift also supports an extended floating-point type, Float80, which is an 80-bit floating-point number.

I would strongly recommend not using the Float type when working with floating-point numbers. The reason for this is the inaccuracy of the Float type. To see this inaccuracy, let's put the same decimal number into both a Float and a Double variable and see what each variable actually stores.
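
A minimal Playground sketch of this comparison, with the results-sidebar output noted in the comments, might be:

var x: Float = 3.14     // results sidebar shows approximately 3.14000010490417
var y: Double = 3.14    // results sidebar shows 3.14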

Notice that we set both variables x and y to 3.14; however, the results sidebar shows us that variable x (the Float) is actually set to 3.14000010490417, while variable y is set to the correct value of 3.14. This can cause huge issues if we are working with currency or other numbers that require accurate calculations. The floating-point accuracy problem is not an issue isolated to Swift; all languages that implement the IEEE 754 floating-point standard have the same issue. The best practice is to always use Double for floating-point numbers.

What if we have two variables, one an Int and the other a Double? Do you think we can add them together, as the following code attempts to do?

var a: Int = 3
var b: Double = 0.14
var c = a + b

If we put the preceding code into a Playground, we would receive an error similar to the following: Cannot invoke '+' with an argument list of type '(@lvalue Int, @lvalue Double)'. This error lets us know that we are trying to add two different types of numbers, which is not allowed. To add an Int and a Double together, we need to convert the Int value to a Double value. The following code shows how to convert an Int to a Double so that we can add them together:

var a: Int = 3
var b: Double = 0.14
var c = Double(a) + b    // c equals 3.14

Notice how we use the Double() initializer to convert the Int value to a Double. All numeric types in Swift have a conversion initializer similar to the Double() initializer shown in the preceding code sample. For example, the following code shows how to convert an Int variable to a Float and to a UInt16:

var intVar = 32                   // inferred as Int
var floatVar = Float(intVar)      // 32.0 as a Float
var uint16Var = UInt16(intVar)    // 32 as a UInt16