-* Critical sections are readily visible and emphasize code that
- needs to do minimal work and be subject to extra scrutiny.
-* Dangerous nested `SYNCHRONIZED` statements are more visible
- than sequenced declarations of guards at the same level. (This
- is not foolproof because a method call issued inside a
- `SYNCHRONIZED` scope may open its own `SYNCHRONIZED` block.) A
- construct `SYNCHRONIZED_DUAL`, discussed later in this
- document, allows locking two objects quasi-simultaneously in
- the same order in all threads, thus avoiding deadlocks.
-* If you tried to use `adsToBeUpdated_` outside the
- `SYNCHRONIZED` scope, you wouldn't be able to; it is virtually
- impossible to tease the map object without acquiring the
- correct lock. However, inside the `SYNCHRONIZED` scope, the
- *same* name serves as the actual underlying object of type
- `OnDemandUpdateIdMap` (which is a map of maps).
-* Outside `SYNCHRONIZED`, if you just want to call one
- method, you can do so by using `adsToBeUpdated_` as a
- pointer like this:
-
- `adsToBeUpdated_->clear();`
-
-This acquires the mutex, calls `clear()` against the underlying
-map object, and releases the mutex immediately thereafter.
-
-`Synchronized` offers several other methods, which are described
-in detail below.
+* If you try to use `requestQueue_` without acquiring the lock, you
+ can't: it is virtually impossible to access the underlying queue
+ without first acquiring the correct lock.
+* The lock is released immediately after the insert operation is
+ performed, and is not held for operations that do not need it.
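+
+To illustrate why the lock is released immediately, here is a minimal,
+self-contained sketch of the idea (this is *not* folly's actual
+implementation; `MiniSynchronized` and its members are illustrative
+names): a temporary lock-holding pointer object yields
+lock/operate/unlock semantics within a single expression.
+
+``` Cpp
+#include <cassert>
+#include <mutex>
+#include <vector>
+
+// Illustrative sketch only -- not folly's real implementation.
+template <typename T>
+class MiniSynchronized {
+ public:
+  class LockedPtr {
+   public:
+    LockedPtr(T& data, std::mutex& m) : lock_(m), data_(data) {}
+    T* operator->() { return &data_; }
+   private:
+    std::unique_lock<std::mutex> lock_;  // unlocked on destruction
+    T& data_;
+  };
+  LockedPtr wlock() { return LockedPtr(data_, mutex_); }
+ private:
+  T data_;
+  std::mutex mutex_;
+};
+
+int main() {
+  MiniSynchronized<std::vector<int>> queue;
+  // The temporary LockedPtr locks the mutex, push_back() runs, and
+  // the mutex is released when the temporary is destroyed at the end
+  // of the full expression.
+  queue.wlock()->push_back(42);
+  assert(queue.wlock()->size() == 1);
+  return 0;
+}
+```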
+
+If you need to perform several operations while holding the lock,
+`Synchronized` provides a few ways to do so.
+
+The `wlock()` method (or `lock()` if you have a non-shared mutex type)
+returns a `LockedPtr` object that can be stored in a variable. The lock
+will be held for as long as this object exists, similar to a
+`std::unique_lock`. This object can be used as if it were a pointer to
+the underlying locked object:
+
+``` Cpp
+  {
+    auto lockedQueue = requestQueue_.wlock();
+    lockedQueue->push_back(request1);
+    lockedQueue->push_back(request2);
+  }
+```
+
+The `rlock()` method is similar to `wlock()`, but acquires a shared lock
+rather than an exclusive one.
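+
+When the `Synchronized` object wraps a shared-mutex type, this maps onto
+shared versus exclusive locking. The following is a rough,
+standard-library-only sketch of that distinction for illustration
+(folly acquires and releases these locks for you; the variable names
+here are made up):
+
+``` Cpp
+#include <cassert>
+#include <shared_mutex>
+#include <vector>
+
+int main() {
+  std::shared_mutex mutex;
+  std::vector<int> queue{10, 20};
+
+  {
+    // rlock() acquires a shared lock: any number of reader threads
+    // may hold one concurrently, and only const access to the data
+    // is allowed.
+    std::shared_lock<std::shared_mutex> reader(mutex);
+    assert(queue.size() == 2);
+  }
+  {
+    // wlock() acquires an exclusive lock: it waits until all readers
+    // have released before allowing mutation.
+    std::unique_lock<std::shared_mutex> writer(mutex);
+    queue.push_back(30);
+  }
+  assert(queue.size() == 3);
+  return 0;
+}
+```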
+
+We recommend explicitly opening a new nested scope whenever you store a
+`LockedPtr` object, to help visibly delineate the critical section, and
+to ensure that the `LockedPtr` is destroyed as soon as it is no longer
+needed.
+
+Alternatively, `Synchronized` provides mechanisms to run a function while
+holding the lock. This makes it possible to use lambdas to define brief
+critical sections:
+
+``` Cpp
+  void RequestHandler::processRequest(const Request& request) {
+    stop_watch<> watch;
+    checkRequestValidity(request);
+    requestQueue_.withWLock([&](auto& queue) {
+      // withWLock() automatically holds the lock for the
+      // duration of this lambda function
+      queue.push_back(request);
+    });
+    stats_->addStatValue("requestEnqueueLatency", watch.elapsed());
+    LOG(INFO) << "enqueued request ID " << request.getID();
+  }
+```
+
+One advantage of the `withWLock()` approach is that it forces a new
+scope to be used for the critical section, making the critical section
+more obvious in the code, and helping to encourage code that releases
+the lock as soon as possible.