15 April, 2020

Blend2D - Multi-Threaded Rendering

Today a new beta version of Blend2D has been released. It features an initial implementation of a multi-threaded rendering context that can, at the moment, use up to 32 threads to accelerate rendering of 2D graphics into a pixel buffer. Multi-threaded (MT) rendering was planned from the start; however, other features had a higher priority in 2019. I finally started working on MT rendering in early 2020, but the initial planning goes back years.

The Blend2D project is an ongoing effort focused on innovation in 2D rendering. It was the first open-source rendering engine to use a JIT compiler to accelerate software-based rendering, and it's likely the first to offer MT rendering, at least considering the 2D rendering capabilities of libraries such as AGG, Cairo, Qt, and Skia. Multi-threaded rendering is another important optimization towards maximizing 2D graphics performance.

NOTE: A developer introduction to MT rendering is covered on the Multithreaded Rendering page. Furthermore, initial benchmarks are available on the Performance page. In the following, I would like to talk about the design and other considerations.

Motivation

Every time I finish a new feature I ask myself - what to do next? Is there anything I can do to improve performance even further? Multi-threading had been on my mind for quite a long time. Although I had an idea, I never actually started working on it because there were always many missing features that I wanted to tackle first. However, since screens are getting larger, and their resolutions are increasing as well, I thought it's also important to have an accelerated renderer capable of real-time rendering into large framebuffers. Based on my own experiments it's perfectly okay to use a single-threaded renderer to render into a FullHD framebuffer. Many of the Blend2D demos that use the Qt framework are able to achieve 240fps at such a resolution. That also includes the time spent in Qt performing the blit of QImage into Qt's backing store. I haven't debugged what exactly happens there on my Linux box, but I assume the pixels are copied once or maybe even twice to get them on screen.

The problem comes when a framebuffer gets larger, like 4K - which is approximately 32MB of pixel data considering 32 bits per pixel. Such a quantity usually cannot fit into the CPU cache, so traditional synchronous rendering might become a bottleneck. This was taken into consideration when designing the MT renderer as well - it should always process a smaller area and move to the next one once the work is done, but this will be explained later. Since modern CPUs offer at least 4 cores, and high-end ones 8 and more, it was time to finally start working on MT rendering, because it could have a greater impact than, for example, further improving SIMD acceleration.

I would like to discuss SIMD before I start with MT. On the x86 architecture SIMD has evolved rapidly since the initial MMX era. We went from 64-bit SIMD to 512-bit SIMD - every time with new instructions, new prefixes, and new ways of achieving the same thing. I don't think this is a bad thing; I'm trying to say that the transition was never straightforward. Maybe the easiest one was from MMX to SSE2, as the register width just doubled and the instructions remained the same. But this doesn't apply to SSE2 vs AVX2. In the AVX2 case 256-bit YMM registers are split into two 128-bit lanes and the CPU processes them independently. This creates a lot of issues with data packing and unpacking, as the packing is also split into two 128-bit lanes. It means it's not possible to just change the size of SIMD in the code, load bigger quantities, and expect the code to work as is. It has to be carefully ported. The same thing happened again with AVX-512. Since AVX-512 introduced new mask registers it also repurposed many instructions to use those registers instead of SIMD registers. So when porting code from 256-bit SIMD to 512-bit SIMD you face the same problem - you cannot just increase the operation width, because the same instructions that worked with YMM registers now expect a K register instead of a ZMM register to store or use a mask.

And now to the point - multi-threading solves one big problem here. When a new CPU is released and provides more cores, existing software capable of executing tasks on multiple threads can use the new hardware immediately, so the user instantly sees the improved performance. This doesn't happen with new SIMD extensions, which usually take years to get into existing software.

The Design of Multi-Threaded Renderer

Blend2D's multi-threaded rendering context is 100% compatible with the single-threaded rendering context. From now on I will call the single-threaded rendering context synchronous and the MT rendering context asynchronous. In the synchronous case each render call ends up modifying pixels and returns after all relevant pixels have been modified. In the asynchronous case the rendering context becomes a serializer of render commands. Each render call gets translated into a command, which is then added to a command queue to be processed later.
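
For context, here is a minimal sketch of how an asynchronous rendering context can be requested, assuming the beta API described on the Multithreaded Rendering page (the threadCount and image size are arbitrary):

#include <blend2d.h>

int main() {
  BLImage img(1920, 1080, BL_FORMAT_PRGB32);

  // Requesting worker threads makes the context asynchronous; a zero
  // threadCount keeps it synchronous.
  BLContextCreateInfo createInfo {};
  createInfo.threadCount = 4;

  BLContext ctx(img, createInfo);
  ctx.setFillStyle(BLRgba32(0xFF000000u));
  ctx.fillAll();  // Serialized as a render command, not executed immediately.
  ctx.end();      // Flushes the batch and waits for workers to finish.

  return 0;
}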

In the Blend2D implementation commands describe only a part of the work. The rendering context uses another concept called jobs. A job can be seen as a prerequisite of a command that can, however, be processed asynchronously and in any order. The important thing to remember is that a job must be processed before the command that depends on it - this is guaranteed by synchronization. When the rendering context starts processing batches, it first executes job processors, waits for their completion, and then executes command processors.

The main difference between jobs and commands is that jobs never modify pixels in the framebuffer whereas commands do. A job typically processes input data like paths and text. E.g. there are FillGeometry and StrokeGeometry jobs that only process the input geometry and generate edges, which are later consumed by the rasterizer. There are also FillText and StrokeText jobs, which perform a very similar operation, but their input is either text or glyph runs.

Another difference between jobs and commands is that commands have no state - command inputs were already transformed and clipped, while the inputs of jobs can be the same data used in a render call. Therefore, jobs actually need to know the transformation matrix and clip region to process the input geometry. This is realized through shared states. A shared state can be shared across the whole rendering batch (multiple commands/jobs - thousands, even tens of thousands). It is created on demand by the rendering context when it creates a job to be used with a serialized command. Once a shared state is created, the rendering context keeps reusing it until the user changes one or more properties that invalidate it. There are currently two shared states - FillState and StrokeState. FillState is used only by filling and contains everything needed to transform and clip the input geometry. StrokeState (together with FillState) is used by stroking and contains additional data required by the path stroker.
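
As a rough illustration only (this is not Blend2D's actual internal layout), the relationship between shared states, jobs, and commands could be sketched like this:

#include <blend2d.h>

// Illustrative sketch - names and layout are hypothetical, not Blend2D internals.
struct SharedFillState {
  BLMatrix2D transform;    // Transformation matrix captured at serialization time.
  BLBoxI clipBox;          // Clip region the job has to honor.
};

struct RenderCommand {
  BLBoxI boundingBox;      // Device-space bounding box, already clipped.
  void* edges;             // Null until the associated job builds the edges.
};

struct RenderJob {
  SharedFillState* state;  // Shared by many jobs/commands within the batch.
  BLPath geometry;         // Original input geometry passed to the render call.
  RenderCommand* command;  // Command to patch once the job finishes.
};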

Commands and jobs are part of a rendering batch, which is basically a queue of jobs, a queue of commands, and storage for shared states. When the rendering context decides to process the batch, it performs the following steps (see the sketch after the list):

  • 1. Wake up workers to process jobs
  • 2. Wait until all jobs get processed
  • 3. Wake up workers to process bands (includes command processing)
  • 4. Wait until all bands get processed
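
The control flow could be sketched roughly like this (the RenderBatch type and helper names are hypothetical):

// A rough sketch of batch processing - two synchronization points per batch.
void processBatch(RenderBatch& batch) {
  wakeUpWorkersToProcessJobs(batch);   // 1. Workers start grabbing jobs.
  waitUntilAllJobsDone(batch);         // 2. First synchronization point.
  wakeUpWorkersToProcessBands(batch);  // 3. Workers start grabbing bands.
  waitUntilAllBandsDone(batch);        // 4. Second synchronization point.
}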

Job Processing

Each worker thread operates on the same job queue, which is part of the same batch. It knows how many jobs are in the queue, and it knows the address of an atomic variable provided by the batch, called jobIndex. When a worker grabs a new job it simply increments the index via fetch_add() and, if the returned value is valid, it has the index of the job to process. Atomic operations guarantee that each worker processes unique jobs. When jobIndex goes out of range it means there are no more jobs to process and the worker either waits for the remaining workers or, if it processed the last job, wakes up the other workers and starts processing commands.

When a job processor finishes a job, it usually updates the command associated with it. E.g. if a command uses a rasterizer that needs edges, the pointer to those edges is initially null, and when the job processor finishes edge building it updates the command.
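
A minimal sketch of the job acquisition loop, assuming hypothetical RenderBatch helpers, could look like this:

// Each worker runs this loop; fetch_add() guarantees that every job index
// is handed out exactly once across all workers.
void processJobs(RenderBatch& batch) {
  for (;;) {
    uint32_t index = batch.jobIndex().fetch_add(1);
    if (index >= batch.jobCount())
      break;                         // No more jobs - wait or move on to commands.
    processJob(batch.jobAt(index));  // E.g. build edges and patch the command.
  }
}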

Command Processing

The processing of commands is actually much more interesting than job processing. Blend2D uses bands for raster operations. A band is simply a group of consecutive scanlines that are processed together. The whole engine works in band units - the rasterizer, pipelines, and now also the command processor. Similarly to job processing, each batch has a shared atomic variable called bandIndex. When a worker thread wants to process a band, it simply increments this index via fetch_add() and, when the returned index is valid, it's the index of a band to process. Each band of the framebuffer is processed exactly once per batch. Since each band has an index, we can calculate which scanlines belong to it. I.e. if bandHeight is 32 and bandIndex is 2 then the band refers to scanlines 64..95.
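
Band acquisition works the same way as job acquisition; a sketch with hypothetical helpers, matching the bandHeight example above:

// Grabs bands until bandIndex runs out of range and maps each index
// to its scanline range, e.g. bandIndex 2 with bandHeight 32 -> 64..95.
void processBands(RenderBatch& batch) {
  for (;;) {
    uint32_t index = batch.bandIndex().fetch_add(1);
    if (index >= batch.bandCount())
      break;
    uint32_t y0 = index * batch.bandHeight();
    uint32_t y1 = y0 + batch.bandHeight();
    if (y1 > batch.frameHeight())
      y1 = batch.frameHeight();      // The last band may be shorter.
    processCommandsInBand(batch, y0, y1);
  }
}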

When a worker gets a bandIndex to process, it runs all command processors in order for the whole band. Since the processing is guaranteed to be top-to-bottom, it's also guaranteed that once a command finishes (its bounding box won't intersect with the next bands) it doesn't have to be processed again in later bands. Each worker manages its own bit-array for this purpose, where each bit represents a command index. When a worker starts processing commands, all bits are assumed to be ones (1 means pending). When a command is finished and it's guaranteed that it won't produce anything in the next bands, the corresponding bit in the bit-array is set to zero. The command processor simply iterates the bits in the bit-array to know which commands to process, and those iterations are very fast because CPUs have dedicated instructions for bit scanning that typically execute in a cycle or two. In practice, when a worker processes the first band it goes over all the commands, and then with each following band the number of zero bits in the bit-array increases (as does the number of commands it skips).
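
The pending-command bit-array can be iterated with a bit-scan; a sketch assuming GCC/Clang's __builtin_ctzll and hypothetical helpers:

#include <stdint.h>

// Visits every pending command in the band; commands that won't produce
// anything in later bands get their bit cleared so they are skipped next time.
void processBandCommands(RenderBatch& batch, uint64_t* pendingBits, uint32_t wordCount, uint32_t y0, uint32_t y1) {
  for (uint32_t w = 0; w < wordCount; w++) {
    uint64_t bits = pendingBits[w];
    while (bits) {
      uint32_t bit = uint32_t(__builtin_ctzll(bits));    // Bit-scan, a cycle or two.
      uint32_t cmdIndex = w * 64 + bit;
      if (processCommand(batch.commandAt(cmdIndex), y0, y1))  // True when the command is done.
        pendingBits[w] &= ~(uint64_t(1) << bit);         // Done - skip it in later bands.
      bits &= bits - 1;                                  // Clear the visited bit.
    }
  }
}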

Synchronization

At the moment there are two synchronization points during batch processing - the first ensures that all jobs were processed, and the second ensures that all bands (which includes commands) were processed. There is currently no hard limit on how many operations can fit into a single batch. So in most scenarios there is only a single batch to render, which means that ideally the rendering context synchronizes only twice during its lifetime.

Performance

The performance is already covered here and, in my opinion, it looks very promising. However, I would like to add some notes regarding the performance shown on the performance page versus the real FPS gains of Blend2D Samples that use the Qt library. Some demos do not yield the same gains as shown on the performance page because the pixels have to be transferred on screen, which is an additional operation that happens after each frame is rendered. Copying pixels on screen seems to be a pretty expensive operation and there are definitely limits that cannot be crossed. Additionally, I don't know how many times the pixels are copied; this is entirely up to Qt and it's very possible they are copied twice, as all demos render to a QImage, which is then blitted to a backing store.

Another problem with some of the demos is that they only use a single path to stress the rasterizer (bl-qt-circles, bl-qt-polys, bl-qt-particles), which means that there is only a single job in the whole rendering batch. In this case workers are not utilized properly and basically only one worker processes the job while the others have to wait until it finishes. Regardless, I think that stressing the rasterizer is still important, as not all paths are perfect and it simply must be able to handle complex paths with hundreds of figures, like text runs may have.

Future Work

What I have described in this post covers the initial implementation of the multi-threaded rendering context offered by Blend2D. There are still things that should be improved in the future:

  • Less Synchronization - I think the implementation can be changed to start processing commands even when some jobs are not yet finished. The command processor should execute immediately after there are no more jobs and should wait only when it encounters a command that depends on an unfinished job. This could improve rendering time by avoiding synchronization that may be unnecessary in many workloads, especially when the first render call is something like clearAll() or fillAll().
  • Background Processing - Currently, the implementation wakes up all threads once a batch is ready to be processed. I think that, since jobs can be processed in any order, the rendering context can simply start processing jobs in the background (while the context is still being used by the user). This should also improve the total rendering time as workers start doing their work much earlier.
  • Smarter Synchronization - At the moment workers are synchronized with condition variables, which are guarded by mutexes. I started exploring the possibility of using futexes on platforms that offer such a synchronization primitive.

Conclusion

It was fun to work on something new like this and to push the performance of Blend2D even further. Now it's time to focus more on features that are not strictly related to performance, e.g. better text support and layers in the rendering context.

31 March, 2020

The Cost of Atomics

Introduction

Atomic operations, which are now available in many programming languages including C and C++ (introduced in C11 and C++11), are very useful operations that can be used to implement lock-free algorithms without the need for hand-written assembly. The C++ compiler translates each atomic operation into an instruction or a set of instructions that guarantees atomicity. But what is the cost of atomic operations?

This article tries to demonstrate the cost of atomic operations using a unique 64-bit ID generator, which uses a global counter and generates unique IDs by incrementing that counter atomically. This guarantees that no locking is required to obtain unique IDs. However, there is still the cost of the atomic operation itself, which can be quite high even without any contention.

A Naive Approach

Let's say we want to write a function that returns a unique 64-bit identifier. I have chosen a 64-bit number as I think it's practically impossible to overflow it in today's mainstream applications. Theoretically, if some process generated 4 billion IDs per second it would still take about 136 years to overflow the global counter, which seems unrealistic at the moment. However, if we improve the performance of the ID generator and run it on a high-end multicore CPU to generate such IDs in parallel, then the theoretical time to overflow the counter can be drastically reduced, so it really depends on the nature of the application and whether a 64-bit identifier is enough. My recommendation would be to always abstract the data type used to represent such an ID so it can be painlessly changed in the future.
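
For example, the abstraction can be as small as a type alias, so the width of the identifier can later change in one place without touching any call sites (the names here are just illustrative):

#include <stdint.h>

// The underlying type can later be widened (or turned into a struct) here.
using UniqueId = uint64_t;

UniqueId generateId();  // Call sites only ever see UniqueId.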

So, what could the simplest ID generator look like?

uint64_t generateIdGlobal() {
  static std::atomic<uint64_t> globalCounter(0);
  return ++globalCounter;
}

This was actually the initial implementation that I used in the past, considering it good enough as I didn't expect the ID generator to be called that often. In Blend2D such IDs are only needed to create unique identifiers for caching purposes, so it seemed fine. However, there is one small detail - what would happen if the target architecture doesn't implement 64-bit atomics? Such code would probably not compile on 32-bit ARM or MIPS hardware, as the target instruction set doesn't have to provide such a feature, but it would compile just fine on 32-bit x86, which offers the cmpxchg8b instruction - enough for implementing any kind of 64-bit atomic operation.
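
One possible way (since C++17) to catch the missing support early is to check whether 64-bit atomics are lock-free on the target, and only rely on the atomic-based generator when they are:

#include <atomic>
#include <stdint.h>

// If this fails, a fallback such as the mutex-based variant shown below is
// a safer choice than relying on a library-provided locking implementation.
static_assert(std::atomic<uint64_t>::is_always_lock_free,
              "64-bit atomics are not lock-free on this target");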

Thread-Local Approach

If the only requirement for the ID generator is to return unique numbers that do not have to be always incrementing, we can use thread-local storage to implement a local cache and only use atomics to increment the global counter once we have exhausted the range of locally available IDs:

uint64_t generateIdLocal() {
  static std::atomic<uint64_t> globalCounter(0);

  static constexpr uint32_t cacheSize = 4096;
  static thread_local uint64_t localIdx = 0;

  if ((localIdx & (cacheSize - 1)) == 0)
    localIdx = globalCounter.fetch_add(uint64_t(cacheSize));

  return ++localIdx;
}

This approach is of course longer, but it also minimizes the use of atomic operations to manipulate the global counter. In addition, since we have minimized the use of atomics, we can also think about using a mutex to guard access to globalCounter in case we run on hardware that doesn't have 64-bit atomics:

static std::mutex globalMutex;

uint64_t generateIdLocalMutex() {
  static uint64_t globalCounter = 0;

  static constexpr uint32_t cacheSize = 4096;
  static thread_local uint64_t localIdx = 0;

  if ((localIdx & (cacheSize - 1)) == 0) {
    std::lock_guard<std::mutex> guard(globalMutex);
    localIdx = globalCounter;
    globalCounter += cacheSize;
  }

  return ++localIdx;
}

Performance

So what would be your guess regarding the performance of each approach? I have written the following code to benchmark various implementations of the ID generator:

#include <stdint.h>
#include <stdio.h>
#include <atomic>
#include <mutex>
#include <thread>
#include <chrono>

typedef uint64_t (*GenerateIdFunc)(void);

static std::mutex globalMutex;

static uint64_t generateIdGlobal() {
  static std::atomic<uint64_t> globalCounter(0);
  return ++globalCounter;
}

static uint64_t generateIdGlobalMutex() {
  static uint64_t globalCounter = 0;
  std::lock_guard<std::mutex> guard(globalMutex);
  return ++globalCounter;
}

static uint64_t generateIdLocal() {
  static std::atomic<uint64_t> globalCounter(0);

  static constexpr uint32_t cacheSize = 4096;
  static thread_local uint64_t localIdx = 0;

  if ((localIdx & (cacheSize - 1)) == 0)
    localIdx = globalCounter.fetch_add(uint64_t(cacheSize));

  return ++localIdx;
}

static uint64_t generateIdLocalMutex() {
  static uint64_t globalCounter = 0;

  static constexpr uint32_t cacheSize = 4096;
  static thread_local uint64_t localIdx = 0;

  if ((localIdx & (cacheSize - 1)) == 0) {
    std::lock_guard<std::mutex> guard(globalMutex);
    localIdx = globalCounter;
    globalCounter += cacheSize;
  }

  return ++localIdx;
}

static void testFunction(GenerateIdFunc func, const char* name) {
  std::atomic<uint64_t> result {};
  constexpr size_t numThreads = 8;
  constexpr size_t numIterations = 1024 * 1024 * 16;

  printf("Testing %s:\n", name);
  auto start = std::chrono::high_resolution_clock::now();

  std::thread threads[numThreads];
  for (size_t i = 0; i < numThreads; i++)
    threads[i] = std::thread([&]() {
      uint64_t localResult = 0;
      for (size_t j = 0; j < numIterations; j++)
        localResult += func();
      result.fetch_add(localResult);
    });

  for (size_t i = 0; i < numThreads; i++)
    threads[i].join();

  auto end = std::chrono::high_resolution_clock::now();
  std::chrono::duration<double> elapsed = end - start;

  printf("  Time: %0.3g s\n", elapsed.count());
  printf("  Result: %llu\n", (unsigned long long)result.load());
}

int main(int argc, char** argv) {
  testFunction(generateIdGlobal, "Global");
  testFunction(generateIdGlobalMutex, "GlobalMutex");
  testFunction(generateIdLocal, "Local");
  testFunction(generateIdLocalMutex, "LocalMutex");
  return 0;
}

Results on AMD Ryzen 3950x (Linux):

Approach          1 Thread   2 Threads   4 Threads   8 Threads   16 Threads
Global - Atomic   0.081s     0.292s      0.580s      1.070s      1.980s
Global - Mutex    0.164s     0.782s      4.440s      9.970s      19.00s
Local - Atomic    0.030s     0.039s      0.039s      0.038s      0.041s
Local - Mutex     0.038s     0.039s      0.037s      0.037s      0.056s

Conclusion

The results should be self-explanatory - atomic operations are always faster than using synchronization primitives to access a shared resource, but the cost of atomic operations is still not negligible and there is a limit to how many atomic operations can be performed on the same cache line per unit of time. Accessing thread-local storage is faster than atomics and is especially beneficial in a highly concurrent environment. However, thread-local storage is a scarce resource that should always be used wisely.

I would like to note that this is a microbenchmark that basically stresses access to a shared resource. Such high contention should not happen in reasonable code, and the speedup offered by a different approach may be totally negligible in many real-world applications. In addition, there is a difference between accessing thread-local storage in executables and in dynamically linked libraries, so always benchmark your code to make sure that you don't increase the complexity of the design for no gain.