Swift Concurrency

Brenno de Moura
13 min read · Jun 8, 2023

The introduction of Task, async/await, and actors in Swift has revolutionized how we write code that relies on multithreading and asynchronous execution. They offer a much simpler and more efficient approach than RunLoop, DispatchQueue, DispatchGroup, OperationQueue, and Thread. With Swift's new concurrency tools, it is easy to replace these objects and significantly improve code quality.

Task

A Task in Swift represents an asynchronous unit of work and provides an alternative to traditional calls like DispatchQueue.main.async {}. By using Task, we can simplify our code and gain better control over asynchronous operations. Task also offers further options that can be explored to refine how work is executed.

One of these options is the priority parameter, which allows us to set the priority of the task to be executed. This functionality replaces the need to use .global(qos:) from GCD, making the code more readable and concise.

Another interesting option is the @MainActor attribute, which replaces DispatchQueue.main. Annotating the task closure with it ensures the work is executed on the main thread, simplifying code that updates the user interface and avoiding concurrency issues.

func oldConcurrencyCode() {

    DispatchQueue.main.async {
        // Runs on the main thread asynchronously
    }

    OperationQueue.main.addOperation {
        // Runs on the main thread, queued
    }

    DispatchQueue.global(qos: .background).async {
        // Runs on a global background queue asynchronously
    }

    let operationQueue = OperationQueue()
    operationQueue.qualityOfService = .background
    operationQueue.addOperation {
        // Runs on a background operation queue
    }
}

The code above demonstrates some valid options for executing concurrent code in Swift.

The improved version of the code below utilizes Task to implement the same functionalities as the previous code, making it simpler and more understandable for developers:

func newConcurrencyCode() {
    Task {
        // Inherits the caller's actor context: runs on the main
        // thread if newConcurrencyCode is itself isolated to the
        // main actor, otherwise on an arbitrary thread
    }

    Task { @MainActor in
        // Explicitly executes asynchronously on the main thread
    }

    Task(priority: .background) {
        // Executes asynchronously on any thread
        // with background priority
    }
}

In addition to the functionalities mentioned above, Task offers more features that can be explored. For example, a task can be canceled using the cancel() method. Cancellation in Swift is cooperative: the task is not killed immediately, but it can check Task.isCancelled, or rely on cancellation-aware APIs such as Task.sleep, to stop its work early, providing more control over the flow of asynchronous execution.

Another interesting feature is the ability to create tasks that are completely decoupled from the main flow, replacing the use of Thread {} with Task.detached {}. This approach offers a more modern and secure alternative for handling execution in separate threads.
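As a rough sketch (the task body, timing, and function name here are placeholders, not from the original article), cancellation and a detached task can look like this:

func startWork() {
    // Keep a reference so the task can be cancelled later.
    let task = Task {
        while !Task.isCancelled {
            // Do one unit of work, then pause; Task.sleep also
            // throws if the task is cancelled while sleeping.
            try await Task.sleep(for: .seconds(1))
        }
    }

    // Later, when the result is no longer needed:
    task.cancel()

    // A detached task does not inherit the caller's actor, priority,
    // or task-local values, replacing patterns previously written
    // with Thread {}.
    Task.detached(priority: .background) {
        // Long-running work fully decoupled from the current context.
    }
}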

By leveraging all these functionalities provided by Task, we can simplify the code, achieve finer control, and improve the efficiency of our concurrent code.

Async/await

Async/await is a fundamental feature of Swift for handling concurrent code. However, before starting to use async/await, it is important to understand the concept of Task. These terms, async and await, are used in distinct contexts. async is used to indicate that a function or computed property returns its result asynchronously, while await is used to wait for the result when calling asynchronous functions.

await can only be used inside an asynchronous context, such as an async function or a Task body. Additionally, async can be combined with throws to implement functions that are both asynchronous and able to throw errors in a concise manner. It is also possible to specify that a function should be executed on a specific actor, such as @MainActor.

See the following example, which illustrates how to make an asynchronous request and update the views:

func doSomeRequest() async throws -> MyModel {
    // Make the request and throw an error if any occurs.
}

@MainActor
func makesRequestAndUpdateViews() async {
    do {
        let model = try await doSomeRequest()
        view.updateModel(model)
    } catch {
        view.switchErrorState(error)
    }
}

The example above, although simple, demonstrates how easy it is to perform an asynchronous request with async/await and then update the view. Traditionally, an implementation like this would require callbacks combined with DispatchQueue for thread switching. Handling errors in the same way would additionally require a Result<MyModel, Error> and switch statements, making the code increasingly complex and hard to maintain.
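For contrast, here is a sketch of how the same flow might look with completion handlers, reusing the hypothetical MyModel and view from the example above:

func doSomeRequest(completion: @escaping (Result<MyModel, Error>) -> Void) {
    // Make the request and call the completion with the result.
}

func makesRequestAndUpdateViews() {
    doSomeRequest { result in
        // Manually hop back to the main thread before touching the view.
        DispatchQueue.main.async {
            switch result {
            case .success(let model):
                view.updateModel(model)
            case .failure(let error):
                view.switchErrorState(error)
            }
        }
    }
}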

Continuation

Another interesting feature alongside async/await is the family of methods withUnsafeContinuation(_:), withUnsafeThrowingContinuation(_:), withCheckedContinuation(_:), withCheckedThrowingContinuation(_:), and withTaskCancellationHandler(operation:onCancel:). These methods let us wrap non-async functions that deliver their results through closures. With them, any existing callback-based Swift method that is not yet compatible with async/await can be bridged and used with await immediately.

The unsafe variants bypass Swift's runtime checks on the continuation. withUnsafeContinuation and withUnsafeThrowingContinuation will not detect misuse, such as a callback that is never invoked or is invoked more than once; a mistake results in a leaked continuation or undefined behavior rather than a clear diagnostic. They should only be chosen when the behavior of the wrapped closure is fully understood.

The checked variants, in contrast, verify at runtime that the continuation is resumed exactly once. If the encapsulated closure violates any continuation rule, the problem is reported as a failure (crash), terminating the application instead of letting it silently misbehave. The checked methods are the safer default: a common approach is to use them during development and switch to the unsafe variants in production once the behavior has been verified.
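A minimal sketch of the pattern, assuming a hypothetical legacy API that reports its result through a completion handler:

// Hypothetical callback-based API that is not yet async/await compatible.
func legacyLoadUsername(completion: @escaping (Result<String, Error>) -> Void) {
    DispatchQueue.global().async {
        completion(.success("brenno"))
    }
}

// async wrapper built with a checked continuation; the runtime verifies
// that the continuation is resumed exactly once.
func loadUsername() async throws -> String {
    try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<String, Error>) in
        legacyLoadUsername { result in
            continuation.resume(with: result)
        }
    }
}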

The last mentioned method, withTaskCancellationHandler(operation:onCancel:), lets us react when the task wrapping the operation is canceled. A common example is encapsulating the dataTask method of URLSession.shared: the URLSessionTask class has a cancel() method, so when the Swift task is canceled, the onCancel closure can forward that cancellation to the underlying URLSessionTask, as sketched below.
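Purely as an illustration of that idea (URLSession already ships async/await APIs, as noted below, and the helper names here are assumptions), a sketch could look like this:

// Small thread-safe box so the cancellation handler can reach the task.
final class URLSessionTaskBox: @unchecked Sendable {
    private let lock = NSLock()
    private var task: URLSessionTask?

    func store(_ task: URLSessionTask) {
        lock.lock(); defer { lock.unlock() }
        self.task = task
    }

    func cancel() {
        lock.lock(); defer { lock.unlock() }
        task?.cancel()
    }
}

func fetchData(from url: URL) async throws -> Data {
    let box = URLSessionTaskBox()

    return try await withTaskCancellationHandler {
        try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Data, Error>) in
            let task = URLSession.shared.dataTask(with: url) { data, _, error in
                if let data {
                    continuation.resume(returning: data)
                } else {
                    continuation.resume(throwing: error ?? URLError(.badServerResponse))
                }
            }
            box.store(task)
            task.resume()
        }
    } onCancel: {
        // Runs when the surrounding Swift task is canceled; the URLSessionTask
        // then fails with an error, which resumes the continuation above.
        box.cancel()
    }
}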

Swift already implements async/await for URLSession methods, so there is no need to implement it locally in your project.
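For reference, the built-in API can be used directly (the helper function name here is just for illustration):

func fetchModelData(from url: URL) async throws -> Data {
    // Native async/await support in URLSession; cancelling the surrounding
    // task automatically cancels the underlying request.
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
}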

Group Operations

Performing group operations in Swift has traditionally been a widely discussed topic in the community, especially when it comes to finding the most optimized way to handle them. I still see suggestions to use DispatchGroup with its enter, leave, and wait methods. However, managing the state of shared variables and avoiding crashes with that approach is a complex task that requires a good understanding of multithreaded code.

With the advent of async/await, we now have at our disposal the methods withTaskGroup(of:returning:body:) and withThrowingTaskGroup(of:returning:body:). The body closure receives a group object as a parameter, and we can add as many child tasks to it as needed, as long as they all return the same type. Heterogeneous work is still possible, but it is often simpler to handle it with separate Task instances instead.

// totalSize(_:) and readBytesData(_:_:_:) are helper functions assumed
// to be implemented elsewhere.
func readFileInChunks(_ url: URL) async throws -> Data {
    try await withThrowingTaskGroup(of: Data.self) { group in
        let fileSize = totalSize(url)

        for slice in stride(from: .zero, to: fileSize, by: 1_024) {
            group.addTask {
                return readBytesData(slice, 1_024, url)
            }
        }

        var data = Data()
        for try await slice in group {
            data.append(slice)
        }

        return data
    }
}

The code above shows how we can use withThrowingTaskGroup to perform the operation of reading all bytes from a file in parallel, gaining overall speed.

Actors

Actors are a powerful tool in Swift that allow us to create protected objects during asynchronous execution with async/await. This approach ensures data integrity by isolating operations performed on properties and methods belonging to an actor, preventing conflicts between multiple threads. Therefore, incorporating the use of actors in our code, especially for objects that manage application state, such as managers, can be an extremely advantageous choice.

By using an actor for a management object, we can be confident that operations performed on that object will be executed safely and consistently. Swift automatically handles synchronization of access to the actor, avoiding race conditions and conflicts in reading and writing properties or method calls. This ensures that even in scenarios with multiple threads working simultaneously, the object’s state will remain intact and consistent.

This protection offered by actors is particularly valuable in contexts where multiple parts of the code can access and modify the same object. By using an actor to encapsulate this object, we ensure that all operations are performed sequentially, preventing conflicts and preserving data consistency. This approach also simplifies the code as Swift takes care of the synchronization internally, reducing the need to explicitly manage thread locks.

actor DataManager {

    var array: [Int] = []

    func append(_ element: Int) {
        array.append(element)
    }

    func read() -> Int? {
        array.last
    }
}
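The snippet above only defines append and read; the observation below also mentions an appendLastAddedOrIncrement() method that is not part of the original snippet. A minimal sketch of how such a method could be added to the actor:

extension DataManager {

    // Sketch of the method discussed below: because the whole body runs
    // inside the actor, the read and the write cannot be interleaved
    // with operations from other tasks.
    func appendLastAddedOrIncrement() {
        array.append((array.last ?? 0) + 1)
    }
}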

An interesting observation is that we can maintain the integrity of the array by grouping several small operations inside a single method of the actor, such as the appendLastAddedOrIncrement() sketched above. When that method is called from multiple tasks at the same time, the calls are serialized by the actor: each call only begins after the previous one has returned. Each call then reads the last inserted element and appends it incremented by 1, resulting in an array containing a sequence of numbers that each increase by 1.

@globalActor

An additional option when using actors is to apply the same effect to different objects. When dealing with a problem divided into multiple classes, objects, and protocols, such as in the case of UIKit and SwiftUI with the divisions of View, ViewModel, and Model, we can use the actor to protect all relevant objects, regardless of whether they are actors or not. For this, we can define a global actor using the @globalActor attribute.

By marking an actor with the @globalActor attribute, its name can then be used as an attribute to protect related declarations. For example, with the CustomActor defined below, we can apply @CustomActor to multiple objects and ensure the integrity of the operations performed on them.

This approach allows us to have a single actor as a central point of protection, even if the individual objects themselves are not actors. This way, we can leverage the isolation and synchronization capabilities provided by actors to ensure data consistency and avoid conflicts in a multi-threaded environment.

@globalActor
actor CustomActor {

    static let shared = CustomActor()
}

@CustomActor
class FlagManager {

    var flag = false
}

@CustomActor
func toggleFlag(_ flagManager: FlagManager) {
    flagManager.flag.toggle()
}

In the presented example, we define the global actor CustomActor, which is used to protect the FlagManager class. This approach allows implementing the toggleFlag(_:) function, also protected by @CustomActor, which toggles the value of the flag in the FlagManager class. When calling this function from outside the actor's context, the await keyword is required, ensuring that all of its internal operations are executed in isolation on CustomActor.

In SwiftUI and UIKit, we can simplify the code and improve readability by removing DispatchQueue.main.async {} calls when performing operations on the main thread. This simplification is achieved through the use of the @MainActor attribute, which can be added to protocols, classes, and objects responsible for implementing the user interface.

By adding the @MainActor attribute to a protocol, class, or object, we ensure that all operations related to these elements are executed exclusively on the main thread. This is extremely useful in iOS applications, where most UI updates need to happen on the main thread to avoid synchronization issues and ensure interface responsiveness.

The use of @MainActor simplifies the code by eliminating the need to wrap certain operations in DispatchQueue.main.async {} blocks. Instead, we can rely on the @MainActor attribute to automatically ensure that all operations are executed on the main thread, improving readability and reducing code complexity.
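A small sketch of this idea, using a hypothetical view model (the class and its fetch method are illustrative, not from the article):

import SwiftUI

@MainActor
final class ProfileViewModel: ObservableObject {

    @Published var username = ""

    func refresh() async {
        // No DispatchQueue.main.async needed: the whole class is isolated
        // to the main actor, so this assignment runs on the main thread.
        username = await fetchUsername()
    }

    private func fetchUsername() async -> String {
        // Placeholder for a real asynchronous call.
        "brenno"
    }
}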

Thread Locking

Thread locking is essential to ensure the safe and consistent execution of threads in parallel environments. It plays a crucial role in controlling access to shared resources, preventing multiple threads from accessing them simultaneously. When applying a lock, a thread acquires exclusive control over the resource, preventing other threads from accessing it until the lock is released.

The main advantage of locking is its ability to protect critical sections of code, where it is necessary for only one thread to execute the section exclusively. This prevents issues such as race conditions, where multiple threads attempt to modify the same resource simultaneously, resulting in data inconsistencies. By using locking, each thread waits its turn to access the critical section, ensuring the integrity of shared data.

struct Lock: Sendable {

    private let lock = NSRecursiveLock()

    init() {}

    func withLock<Value: Sendable>(_ block: @Sendable () throws -> Value) rethrows -> Value {
        lock.lock()
        defer { lock.unlock() }
        return try block()
    }
}

In the example above, NSRecursiveLock from the Foundation library is used to build a locking mechanism exposed through the withLock(_:) method. Foundation also provides other synchronization classes, such as NSLock and NSCondition, that allow locking threads during parallel execution. These classes provide methods to acquire and release the lock, as well as additional mechanisms to control thread execution.

In addition to simple locks, the platform provides other tools for dealing with concurrency. Among them, semaphores, operation queues, and synchronization barriers stand out. Each of these tools has its own characteristics and may be better suited to specific concurrent programming scenarios.

Semaphores allow controlling concurrent access to a certain number of resources by defining how many threads can access them simultaneously. Operation queues are useful for managing the execution of asynchronous tasks, allowing multiple threads to work in a coordinated and controlled manner. Synchronization barriers ensure that a set of threads waits until all of them are ready to proceed, synchronizing their executions.
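A quick sketch of two of those GCD tools, a semaphore and a barrier (an assumed example for illustration; the queue label and work items are arbitrary):

// A semaphore allowing at most two threads into the protected region at once.
let semaphore = DispatchSemaphore(value: 2)
let queue = DispatchQueue(label: "com.example.work", attributes: .concurrent)

for index in 0 ..< 5 {
    queue.async {
        semaphore.wait()
        defer { semaphore.signal() }
        print("Processing item \(index)")
    }
}

// A barrier block runs only after all previously submitted work has finished,
// and holds back any work submitted after it until the barrier completes.
queue.async(flags: .barrier) {
    print("All items processed")
}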

Sendable

The Sendable protocol is an important addition to Swift, introduced in Swift 5.5, that provides compile-time validation for classes, structs, and enums, ensuring these types are safe to share across concurrent contexts. By marking a type as Sendable, the compiler verifies that it meets all the rules required for safe parallel execution. The protocol itself does not add any logic to the type; it serves as an aid during development.

final class ModelManager: @unchecked Sendable {

    private let lock = Lock()

    var integers: [Int] {
        get { lock.withLock { _integers } }
        set { lock.withLock { _integers = newValue } }
    }

    private var _integers: [Int] = []
}

In the above example, we have the ModelManager class marked as Sendable using the @unchecked annotation. This means that during compilation, Swift will ignore the rules of the Sendable protocol and trust that the developer has implemented adequate mechanisms to ensure safe execution in a multithreaded environment.

However, when marking an object as @unchecked Sendable, it is the developer’s responsibility to ensure that all operations are safe in a multithreaded context. This often requires the use of thread locks, semaphores, queues, and barriers to prevent inconsistencies and access conflicts to resources during parallel execution.

AsyncSemaphore

In async/await development, when dealing with asynchronous operations on shared resources, it’s important to ensure data consistency. While Swift mechanisms such as Actors and the Sendable protocol have been introduced to address this issue, inconsistencies can still occur. In this context, the use of asynchronous semaphores, implemented by the Semaphore library, provides a more flexible approach to control concurrent access to shared asynchronous resources.

let lock = AsyncSemaphore(value: 1)
var counter = 0

func doSomeExpensiveComputation() async throws {
    await lock.wait()
    // Release the semaphore even if the sleep throws (e.g. on cancellation).
    defer { lock.signal() }
    try await Task.sleep(for: .seconds(5))
    counter += 1
}

func concurrentCount() async {
    await lock.wait()
    counter += 1
    lock.signal()
}

Task {
    try await doSomeExpensiveComputation()
}

Task {
    for _ in 0 ..< 100 {
        await concurrentCount()
    }

    print(counter)
}

Unlike traditional locks, the asynchronous semaphore operates at a higher level using await: asynchronous operations suspend until the resource becomes available instead of blocking a thread. In the presented example, doSomeExpensiveComputation acquires the semaphore and holds it while its long-running work executes; calls to concurrentCount suspend on wait() until signal() releases the resource, and only then increment the counter.

This mechanism brings interesting flexibility by allowing the construction of code locks to await the success of asynchronous operations, such as a network request. Other tasks will remain blocked until the result is produced and the resource (in this case, the result of the request) is available. This approach provides a new way to develop highly efficient and synchronized solutions, promoting the integrity of shared data in concurrent environments.

AsyncSequence

The implementation of the AsyncSequence protocol in Swift allows the creation of asynchronous sequences where elements are asynchronously produced on-demand. This feature is particularly useful in scenarios where obtaining elements from the sequence requires asynchronous operations such as network calls, I/O operations, or time-consuming computations.

func number(_ offset: Int) async throws -> Int? {
    try await Task.sleep(for: .seconds(1))

    if offset >= 10 {
        return nil
    }

    return offset + 1
}

struct MySequence: AsyncSequence {

    typealias Element = Int

    struct MyIterator: AsyncIteratorProtocol {

        var offset = 0

        mutating func next() async throws -> Int? {
            if let offset = try await number(offset) {
                self.offset = offset
                return offset
            }

            return nil
        }
    }

    func makeAsyncIterator() -> MyIterator {
        MyIterator()
    }
}

Task {
    try await Task.sleep(for: .seconds(15))

    for try await integer in MySequence() {
        print(integer)
    }

    print("Finish")
}

When adopting the AsyncSequence protocol, as shown in the example above, it is necessary to define the type of element that the sequence produces (typealias Element = Int in the example). Then, a nested iterator type, MyIterator in this case, must be implemented and conform to the AsyncIteratorProtocol protocol.

Within the iterator, the next() method is responsible for providing the elements of the sequence asynchronously. This method can contain custom logic to obtain the elements, such as making async calls, waiting for background operations, or performing incremental processing.

Each invocation of the next() method is asynchronous, so client code uses the await keyword to wait for the next element before continuing execution. When the sequence reaches its end, next() returns nil to indicate that there are no more elements to be produced.

The implementation of the AsyncSequence protocol provides an elegant and efficient approach to working with asynchronous sequences in Swift. It allows client code to consume asynchronously produced elements with the same linear for-in style used for ordinary collections, simplifying the processing logic and improving code readability. Furthermore, the AsyncSequence protocol is highly flexible, allowing the element production logic to be adapted to the specific needs of each use case.

AsyncStream

One type in Swift that implements AsyncSequence is AsyncStream, which provides a practical and efficient way to work with asynchronous sequences. AsyncStream behaves like a sequence whose elements arrive over time, allowing asynchronous elements to be produced and consumed on demand.

var closure: ((Int?) -> Void)?

let sequence = AsyncStream<Int> { continuation in
    closure = {
        if let value = $0 {
            continuation.yield(value)
        } else {
            continuation.finish()
        }
    }
}

Task {
    for await integer in sequence {
        print(integer)
    }
    print("Finish")
}

Task {
    for index in 0 ..< 10 {
        closure?(index)
        try await Task.sleep(for: .seconds(1))
    }

    closure?(nil)
}

In the provided example, AsyncStream is used to create an asynchronous sequence of integers. Inside the initialization closure, the code stores a closure that takes an optional integer as a parameter. If the value is non-nil, the stored closure uses the continuation’s yield method to send the value into the sequence; otherwise, the finish method is called to indicate the end of the sequence.

This approach with AsyncStream allows for obtaining an asynchronous and controlled flow of execution to handle data sequences. The use of “for await” simplifies the code by automatically handling the values produced by the asynchronous sequence, making the process of consuming the elements more concise and readable.

If you would like to contribute so that I can continue producing more technical content, please feel free to buy me a coffee ☕️ through the Buy me a Coffee platform.

Your support is essential to maintain my work and contribute to the development community.
