The Different Levels of Diving Deep
Dive Deep: "Leaders operate at all levels, stay connected to the details" Dive Deep can mean different things based on peopleās experience, skillset and their awareness of the industry best practices. Last week I had some fascinating conversations with a group engineers, I think that represented different levels of diving deep nicely. It started with a code review with many "synchronized" methods. Good, at least we have thread safety in mind. This is a level 4 SDE having no experience on concurrent programming. Now when we looked at the code, it was basically a single thread loading data into memory every so often, then many threads read the data concurrently at high throughput. The requirement is to make sure the readers don't get partial data during updating the data. "How about using read-write lock that allows concurrent access for read-only operations, whereas only write operations require exclusive access?" This is a level 5 SDE with good sense on system design. But do we need lock in read at all? How about we use immutable data structure to achieve lock-less read. Ah, there is a design pattern for this called Read Copy Update RCU () : "a synchronization mechanism that avoids the use of lock primitives while multiple threads concurrently read and update elements that are linked through pointers and that belong to shared data structures. Whenever a thread is inserting or deleting elements of data structures in shared memory, all readers are guaranteed to see and traverse either the older or the new structure, therefore avoiding inconsistencies." This is a L6 Sr. SDE, battle hardened on distributed systems. You know what, the RCU idea is quite similar to Command Query Responsibility Segregation (CQRS) that separates read and update operations for a data store. CQRS can maximize its performance, scalability, and security, where performance of data reads must be fine-tuned separately from performance of data writes, especially when the number of reads is much greater than the number of writes. The essence binding CQRS and RCU is the segregation of read and write operations and the optimization for lock free concurrent reads. By minimizing contentionāa notorious performance impedimentāthis segregation underpins the creation of scalable, high-performance systems. Nice, we connect the dots between CQRS and RCU design pattern. This is a principal engineer going into the next level abstraction. You can dive even deeper by reading "Is Parallel Programming Hard, And, If So, What Can You Do About It?" by Paul E. McKenney (), who is the maintainer of RCU and of the rcutorture test module in the Linux kernel. Now that is really deep! You have to love the game to go that far.