Cook Concurrent, Parallel, Async and Non-blocking Together (2)
"So when should we use a parallel model vs. async/non-blocking if we are to cook a four-course dinner?" Tommy asked.
"It really depends on the dishes we are cooking, the kitchen resources we're sharing, and the skill of the chef," I thought for a bit. "Most dishes need time to cook. For that kind of food, we can make the cooking non-blocking: start one dish on each burner and check on them every so often. The async model works well in this scenario, though it certainly requires a skilled chef. If we instead try to cook in parallel with two chefs sharing the same burners, we might have to take turns to let each other operate. The parallel model in this case triggers too many context switches, and too many context switches can cause thrashing, where a system spends more time managing threads than actually executing useful work. Imagine you have four managers supervising one poor chef for the dinner—that's thrashing! However, if we have a large kitchen with many burners, then choosing the parallel model will likely get the dinner done faster."
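The single-chef, many-burners picture maps naturally onto an event loop. Here is a minimal sketch in Python's asyncio (the dish names and cooking times are made up for illustration): one "chef" coroutine scheduler starts every dish, and each `await` hands the burner back so the chef can attend to whichever dish needs it next.

```python
import asyncio

# Hypothetical example: each dish "cooks" on its own burner while the
# single chef (the event loop) stays free to check on the others.
async def cook(dish: str, minutes: float) -> str:
    print(f"Start {dish}")
    await asyncio.sleep(minutes)  # non-blocking wait: the burner does the work
    print(f"Done {dish}")
    return dish

async def dinner() -> list[str]:
    # One chef starts all four dishes and tends to them as they finish.
    dishes = [("soup", 0.3), ("salad", 0.1), ("roast", 0.4), ("dessert", 0.2)]
    return await asyncio.gather(*(cook(name, t) for name, t in dishes))

if __name__ == "__main__":
    # Total wall time is roughly the longest dish (0.4), not the sum (1.0).
    print(asyncio.run(dinner()))
```

Note that `asyncio.gather` returns results in the order the dishes were started, even though the salad finishes first.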
"I see," Tommy said. "It seems to be an easy choice in the dinner cooking case. But how do we know whether an async/non-blocking I/O model will improve our web service's concurrency?"
"Determining whether an asynchronous or non-blocking I/O model will improve our current multi-threaded, parallel-processing web service depends on several factors.
I/O-bound tasks: If our web service primarily performs I/O-bound tasks, such as reading from and writing to databases, making network requests, or handling file I/O, then an async/non-blocking model could be more efficient.
Scalability: With async/non-blocking I/O, we can handle many more connections with a smaller number of threads or processes, reducing the overhead associated with creating, managing, and switching between threads.
Response time: An async/non-blocking model can help improve response times by allowing the service to continue processing other requests while waiting for I/O operations to complete. This can lead to a more responsive and better-performing service, particularly under heavy loads.
Complexity: Implementing an async/non-blocking I/O model can be more complex than traditional multi-threaded models. We need to consider the trade-offs between performance improvements and increased code complexity.
Existing infrastructure and libraries: Do our web service's frameworks and libraries support async/non-blocking I/O models? In the end, no analysis can beat prototyping and production metrics. We need to test the candidate models under realistic workloads to determine which approach is best suited for our web service."
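The scalability and response-time points above can be seen in a toy benchmark (an assumed workload, not a real service): each simulated request waits 0.1 seconds on I/O, and a single event-loop thread overlaps all of those waits, so fifty requests complete in roughly the time of one.

```python
import asyncio
import time

# Hypothetical workload: each "request" spends 0.1 s waiting on I/O,
# standing in for a database query or a downstream network call.
async def handle_request(request_id: int) -> int:
    await asyncio.sleep(0.1)  # non-blocking wait; the loop serves others meanwhile
    return request_id

async def serve(n: int) -> float:
    # One thread, one event loop, n concurrent in-flight requests.
    start = time.perf_counter()
    await asyncio.gather(*(handle_request(i) for i in range(n)))
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = asyncio.run(serve(50))
    print(f"50 I/O-bound requests served in {elapsed:.2f}s on a single thread")
```

A thread-per-request design would need fifty threads to get the same overlap; here the waits overlap on one thread, which is exactly the scalability argument for I/O-bound services. For CPU-bound work this sketch would not help, since the event loop cannot overlap computation.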