C++: We make a std::shared_mutex 10 times faster


High performance of lock-based data structures. 

In this article, we will detail atomic operations, C++11 memory barriers, and the assembler instructions they generate on x86_64 CPUs.
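For illustration, here is a small self-contained sketch of how the chosen std::memory_order typically maps to x86_64 instructions (the exact output depends on the compiler; the function names are only illustrative):

    #include <atomic>

    std::atomic<int> a{0};

    // Plain MOV: loads on x86_64 already have acquire semantics
    int load_acquire() { return a.load(std::memory_order_acquire); }

    // Plain MOV: stores on x86_64 already have release semantics
    void store_release(int v) { a.store(v, std::memory_order_release); }

    // XCHG (or MOV + MFENCE): a seq_cst store must be globally ordered
    void store_seq_cst(int v) { a.store(v, std::memory_order_seq_cst); }

    // LOCK XADD: read-modify-write operations are always locked on x86_64
    int fetch_add_relaxed() { return a.fetch_add(1, std::memory_order_relaxed); }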

Next, we will show how to speed up contfree_safe_ptr<std::map> to the level of complex, highly optimized lock-free data structures that are similar to std::map<> in functionality, for example SkipListMap and BronsonAVLTreeMap from the libCDS library (Concurrent Data Structures library): https://github.com/khizmax/libcds

And we can get such multi-threaded performance for any of your initially non-thread-safe classes T wrapped as contfree_safe_ptr<T>, which is the safe_ptr<T, contention_free_shared_mutex> class, where contention_free_shared_mutex is our own optimized shared mutex.
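As a rough sketch of the idea (the names safe_ptr_sketch and locked_proxy are illustrative, not the library's actual classes), every access through operator-> can return a temporary RAII proxy that holds the lock for the whole expression; std::shared_mutex stands in here for contention_free_shared_mutex:

    #include <map>
    #include <mutex>
    #include <shared_mutex>

    // Minimal sketch of the safe_ptr<T, Mutex> idea: operator-> returns a
    // temporary proxy that holds the lock until the end of the full expression.
    template<typename T, typename Mutex = std::shared_mutex>
    class safe_ptr_sketch {
        T obj;
        Mutex mtx;

        struct locked_proxy {
            std::unique_lock<Mutex> lock;
            T* ptr;
            T* operator->() { return ptr; }
        };
    public:
        locked_proxy operator->() { return { std::unique_lock<Mutex>(mtx), &obj }; }
    };

    // With the real library, contfree_safe_ptr<T> would simply be
    // safe_ptr<T, contention_free_shared_mutex>.
    safe_ptr_sketch<std::map<int, int>> safe_map;

    void writer() { safe_map->emplace(1, 100); }  // lock held for the whole expression

In the real library, read-only accesses would take a shared (read) lock instead of an exclusive one; the sketch keeps only the exclusive case for brevity.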

Namely, we will show how to implement your own high-performance contention-free shared-mutex, which almost never conflicts on reads. We will implement our own active locks (spinlock and recursive-spinlock) to lock individual rows during update operations, and we will create RAII locking pointers to avoid the cost of repeated locking. Finally, we will present the results of performance tests.
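For example, a minimal spinlock along these lines can be built on std::atomic_flag (this is an illustrative sketch, not the exact implementation from the article; the recursive variant would additionally remember the owning thread id so the same thread may re-lock it):

    #include <atomic>
    #include <mutex>
    #include <thread>

    // Minimal spinlock usable with std::lock_guard (it satisfies BasicLockable).
    class spinlock_t {
        std::atomic_flag flag = ATOMIC_FLAG_INIT;
    public:
        void lock()   { while (flag.test_and_set(std::memory_order_acquire)) std::this_thread::yield(); }
        void unlock() { flag.clear(std::memory_order_release); }
    };

    spinlock_t row_lock;

    void update_row() {
        std::lock_guard<spinlock_t> guard(row_lock);  // RAII: unlocks automatically on scope exit
        // ... modify the row ...
    }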

And as a «just for fun» bonus, we will demonstrate how to implement our own simplified partitioned type partitioned_map, which is even more optimized for multithreading. It consists of several std::map instances, by analogy with a partitioned table in an RDBMS, where the boundaries of each section are known in advance.
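As a rough sketch of this idea (illustrative names; the per-section locking is omitted), such a partitioned_map can route each key to one of several std::map instances using boundaries fixed at construction:

    #include <cstddef>
    #include <map>
    #include <vector>

    // Minimal sketch: one std::map per section, section boundaries fixed at
    // construction, so a key is routed to its section without rebalancing.
    template<typename K, typename V>
    class partitioned_map_sketch {
        std::vector<K> bounds;              // upper boundary of each section, ascending
        std::vector<std::map<K, V>> parts;  // one std::map per section

        std::size_t part_index(const K& key) const {
            std::size_t i = 0;
            while (i + 1 < bounds.size() && key > bounds[i]) ++i;
            return i;
        }
    public:
        explicit partitioned_map_sketch(std::vector<K> upper_bounds)
            : bounds(std::move(upper_bounds)), parts(bounds.size()) {}

        void insert(const K& key, const V& value) { parts[part_index(key)][key] = value; }

        V* find(const K& key) {
            auto& m = parts[part_index(key)];
            auto it = m.find(key);
            return it == m.end() ? nullptr : &it->second;
        }
    };

    // Example: three sections with upper boundaries 100, 200 and 1000000.
    partitioned_map_sketch<int, int> pmap({100, 200, 1000000});

In a multithreaded version, each section could be protected by its own shared mutex, so threads working on different key ranges never contend with each other.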

We compare: std::mutex, std::shared_mutex, and contention_free_shared_mutex<>.

  1. We make any object thread-safe
  2. We make a std::shared_mutex 10 times faster - this article
  3. Thread-safe std::map with the speed of lock-free map

