Linux kernel RCU and reader-writer locks


  Semaphores have an obvious drawback: they do not distinguish between read and write access to the critical section. A reader-writer lock lets multiple threads or processes access the critical section concurrently for reading, while write access is restricted to a single thread; on a multiprocessor system many readers may access the shared resource at once, but a writer is exclusive. The properties of a reader-writer lock are: multiple readers may be in the critical section at the same time; only one writer may be in the critical section at any moment; readers and writers can never be in the critical section together. Reader-writer locks also come in variants that disable interrupts or bottom halves.
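
For illustration, here is a minimal sketch of reader-writer lock usage (the lock name my_rwlock and the two functions are made up for this example):

    static DEFINE_RWLOCK(my_rwlock);    /* hypothetical lock protecting some shared data */

    void reader_path(void)
    {
        read_lock(&my_rwlock);          /* many readers may hold the lock at once */
        /* ... read the shared data ... */
        read_unlock(&my_rwlock);
    }

    void writer_path(void)
    {
        write_lock(&my_rwlock);         /* excludes all readers and other writers */
        /* ... modify the shared data ... */
        write_unlock(&my_rwlock);
    }

The interrupt- and bottom-half-disabling variants (read_lock_irqsave(), read_lock_bh(), write_lock_irqsave(), and so on) follow the same pattern.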

RCU: Read-Copy-Update

Questions: Compared with a reader-writer lock, what problem does RCU solve? What is the basic principle behind RCU?

1. Kernel primitives such as spinlocks and mutexes rely on atomic instructions, that is, atomic accesses to memory. When several CPUs contend for a critical section, those accesses drive down the CPU cache hit rate and hurt performance. Reader-writer locks also have the drawback that readers and writers cannot be present at the same time.

  RCU was designed to solve exactly this problem and to keep thread-synchronization overhead small: data can be read without atomic operations or memory barriers, and the synchronization burden is shifted to the writer thread, which waits for all reader threads to finish before destroying the old data. When several writer threads exist, an additional protection mechanism (for example a lock among the writers) is required.

Principle

  The RCU principle, simply put: the users of the pointers to the shared data are tracked. To modify the shared data, the writer first creates a copy and makes its changes in that copy; the pointer is then switched to point at the modified copy, and after all readers have left their critical sections the old data is deleted.

  Official description: RCU is in effect an improved rwlock. Readers incur almost no synchronization overhead: they need no locks and no atomic instructions, so RCU causes no lock contention, memory latency, or pipeline stalls. Not needing locks also makes it easier to use, since read-side deadlock no longer has to be considered.

  • The writer's synchronization overhead is comparatively large: it must defer freeing the data structure and copy the data structure being modified, and it must also use some locking mechanism to synchronize with the modifications of other concurrent writers.
  • Readers must provide a signal to the writer so that the writer can determine when the data may safely be freed or modified.
  • A dedicated garbage collector watches for the readers' signals; once all readers have signaled that they are no longer using the RCU-protected data structure, the garbage collector invokes the callback that performs the final free or modify operation.

Linked lists are currently the heaviest users of RCU in the kernel.

  In classic RCU, an RCU read-side critical section is delimited by rcu_read_lock() and rcu_read_unlock(), and such sections may be nested.

  The corresponding synchronous update-side primitive is synchronize_rcu() (synchronize_net() is a synonym); it waits for all currently executing RCU read-side critical sections to finish. The time spent waiting is called the "grace period".

  The asynchronous update-side primitive call_rcu() invokes a given function after a grace period; for example, call_rcu(p, f) causes the callback f(p) to be invoked. In some cases, such as unloading a module that uses call_rcu(), one must wait for all outstanding RCU callbacks to finish; the primitive rcu_barrier() serves that purpose.
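
For example, a module that queues callbacks with call_rcu() might, as a rough sketch, do the following in its exit path (my_module_exit is a hypothetical name):

    static void __exit my_module_exit(void)
    {
        /* ... unpublish the protected data and stop queueing new call_rcu() callbacks ... */
        rcu_barrier();    /* wait for every already-queued RCU callback to run */
        /* now it is safe for the module text holding the callbacks to go away */
    }
    module_exit(my_module_exit);
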
  In the "RCU BH" flavor, rcu_read_lock_bh() and rcu_read_unlock_bh() delimit the read-side critical section, and call_rcu_bh() invokes a given function after a grace period. Note: originally RCU BH had no synchronous interface synchronize_rcu_bh(); one could easily be added if needed, and later kernels do provide it (see the API list further below).

  The primitives that operate directly on pointers, rcu_assign_pointer() and rcu_dereference(), are used to build RCU-protected data structures other than linked lists, such as arrays and trees.
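
As a hedged sketch of the non-list case (all names here, such as gbl_arr, are invented for illustration), a dynamically sized array can be protected by publishing a whole new array and retiring the old one after a grace period:

    struct arr {
        int len;
        int data[];                         /* flexible array member */
    };

    static struct arr __rcu *gbl_arr;       /* hypothetical RCU-protected pointer */
    static DEFINE_SPINLOCK(arr_lock);       /* serializes writers only */

    int read_slot(int i)
    {
        struct arr *a;
        int val = -1;

        rcu_read_lock();
        a = rcu_dereference(gbl_arr);
        if (a && i >= 0 && i < a->len)
            val = a->data[i];
        rcu_read_unlock();
        return val;
    }

    /* Writer: build a new array, publish it, free the old one after a grace period. */
    int resize_arr(int new_len)
    {
        struct arr *new_a, *old_a;

        new_a = kzalloc(sizeof(*new_a) + new_len * sizeof(int), GFP_KERNEL);
        if (!new_a)
            return -ENOMEM;
        new_a->len = new_len;

        spin_lock(&arr_lock);
        old_a = rcu_dereference_protected(gbl_arr, lockdep_is_held(&arr_lock));
        rcu_assign_pointer(gbl_arr, new_a);
        spin_unlock(&arr_lock);

        synchronize_rcu();                  /* wait for readers that may still see old_a */
        kfree(old_a);
        return 0;
    }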

  NOTE: a reader must not block while it is accessing RCU-protected shared data; this is a basic precondition of the RCU mechanism. In other words, while a reader is referencing RCU-protected shared data, the CPU it runs on must not perform a context switch (spinlocks and rwlocks carry a similar requirement). A writer accessing RCU-protected shared data never has to compete with readers for any lock; only when there is more than one writer does it need some lock to synchronize with the other writers. Before modifying the data, a writer first makes a copy of the element to be modified and performs the modification on the copy; when it is done, it registers a callback with the garbage collector so that the real switch-over or free is carried out at a suitable time. The period spent waiting for that suitable time is called the grace period; a CPU undergoing a context switch is said to have passed through a quiescent state, and the grace period is the time it takes for every CPU to pass through at least one quiescent state. After the grace period, the garbage collector invokes the callbacks registered by writers to complete the actual data modification or data release.

/*
Please note that the "What is RCU?" LWN series is an excellent place
to start learning about RCU:

1.    What is RCU, Fundamentally?  http://lwn.net/Articles/262464/
2.    What is RCU? Part 2: Usage   http://lwn.net/Articles/263130/
3.    RCU part 3: the RCU API      http://lwn.net/Articles/264090/
4.    The RCU API, 2010 Edition    http://lwn.net/Articles/418853/


What is RCU?

RCU is a synchronization mechanism that was added to the Linux kernel
during the 2.5 development effort that is optimized for read-mostly
situations.  Although RCU is actually quite simple once you understand it,
getting there can sometimes be a challenge.  Part of the problem is that
most of the past descriptions of RCU have been written with the mistaken
assumption that there is "one true way" to describe RCU.  Instead,
the experience has been that different people must take different paths
to arrive at an understanding of RCU.  This document provides several
different paths, as follows:

1.    RCU OVERVIEW
2.    WHAT IS RCU'S CORE API?
3.    WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
4.    WHAT IF MY UPDATING THREAD CANNOT BLOCK?
5.    WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
6.    ANALOGY WITH READER-WRITER LOCKING
7.    FULL LIST OF RCU APIs
8.    ANSWERS TO QUICK QUIZZES

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point.  People who prefer to start with an API that they can then
experiment with should focus on Section 2.  People who prefer to start
with example uses should focus on Sections 3 and 4.  People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code.  People who reason best by analogy should
focus on Section 6.  Section 7 serves as an index to the docbook API
documentation, and Section 8 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning.  If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway.  ;-)


1.  RCU OVERVIEW

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases.  The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is the semantics of modern CPUs guarantee that readers will see
either the old or the new version of the data structure rather than a
partially updated reference.  The reclamation phase does the work of reclaiming
(e.g., freeing) the data items removed from the data structure during the
removal phase.  Because reclaiming data items can disrupt any readers
concurrently referencing those data items, the reclamation phase must
not start until readers no longer hold references to those data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish.  Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following:

a.    Remove pointers to a data structure, so that subsequent
    readers cannot gain a reference to it.

b.    Wait for all previous readers to complete their RCU read-side
    critical sections.

c.    At this point, there cannot be any readers who hold references
    to the data structure, so it now may safely be reclaimed
    (e.g., kfree()d).

Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all.  In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers.  In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers.  Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache).  Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.


2.  WHAT IS RCU'S CORE API?

The core RCU API is quite small:

a.    rcu_read_lock()
b.    rcu_read_unlock()
c.    synchronize_rcu() / call_rcu()
d.    rcu_assign_pointer()
e.    rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.

The five core RCU APIs are described below, the other 18 will be enumerated
later.  See the kernel docbook documentation for more info, or look directly
at the function header comments.

rcu_read_lock()

    void rcu_read_lock(void);

    Used by a reader to inform the reclaimer that the reader is
    entering an RCU read-side critical section.  It is illegal
    to block while in an RCU read-side critical section, though
    kernels built with CONFIG_PREEMPT_RCU can preempt RCU
    read-side critical sections.  Any RCU-protected data structure
    accessed during an RCU read-side critical section is guaranteed to
    remain unreclaimed for the full duration of that critical section.
    Reference counts may be used in conjunction with RCU to maintain
    longer-term references to data structures.

rcu_read_unlock()

    void rcu_read_unlock(void);

    Used by a reader to inform the reclaimer that the reader is
    exiting an RCU read-side critical section.  Note that RCU
    read-side critical sections may be nested and/or overlapping.

synchronize_rcu()

    void synchronize_rcu(void);

    Marks the end of updater code and the beginning of reclaimer
    code.  It does this by blocking until all pre-existing RCU
    read-side critical sections on all CPUs have completed.
    Note that synchronize_rcu() will -not- necessarily wait for
    any subsequent RCU read-side critical sections to complete.
    For example, consider the following sequence of events:

             CPU 0                  CPU 1                 CPU 2
         ----------------- ------------------------- ---------------
     1.  rcu_read_lock()
     2.                    enters synchronize_rcu()
     3.                                               rcu_read_lock()
     4.  rcu_read_unlock()
     5.                     exits synchronize_rcu()
     6.                                              rcu_read_unlock()

    To reiterate, synchronize_rcu() waits only for ongoing RCU
    read-side critical sections to complete, not necessarily for
    any that begin after synchronize_rcu() is invoked.

    Of course, synchronize_rcu() does not necessarily return
    -immediately- after the last pre-existing RCU read-side critical
    section completes.  For one thing, there might well be scheduling
    delays.  For another thing, many RCU implementations process
    requests in batches in order to improve efficiencies, which can
    further delay synchronize_rcu().

    Since synchronize_rcu() is the API that must figure out when
    readers are done, its implementation is key to RCU.  For RCU
    to be useful in all but the most read-intensive situations,
    synchronize_rcu()'s overhead must also be quite small.

    The call_rcu() API is a callback form of synchronize_rcu(),
    and is described in more detail in a later section.  Instead of
    blocking, it registers a function and argument which are invoked
    after all ongoing RCU read-side critical sections have completed.
    This callback variant is particularly useful in situations where
    it is illegal to block or where update-side performance is
    critically important.

    However, the call_rcu() API should not be used lightly, as use
    of the synchronize_rcu() API generally results in simpler code.
    In addition, the synchronize_rcu() API has the nice property
    of automatically limiting update rate should grace periods
    be delayed.  This property results in system resilience in face
    of denial-of-service attacks.  Code using call_rcu() should limit
    update rate in order to gain this same sort of resilience.  See
    checklist.txt for some approaches to limiting the update rate.

rcu_assign_pointer()

    typeof(p) rcu_assign_pointer(p, typeof(p) v);

    Yes, rcu_assign_pointer() -is- implemented as a macro, though it
    would be cool to be able to declare a function in this manner.
    (Compiler experts will no doubt disagree.)

    The updater uses this function to assign a new value to an
    RCU-protected pointer, in order to safely communicate the change
    in value from the updater to the reader.  This function returns
    the new value, and also executes any memory-barrier instructions
    required for a given CPU architecture.

    Perhaps just as important, it serves to document (1) which
    pointers are protected by RCU and (2) the point at which a
    given structure becomes accessible to other CPUs.  That said,
    rcu_assign_pointer() is most frequently used indirectly, via
    the _rcu list-manipulation primitives such as list_add_rcu().

rcu_dereference()

    typeof(p) rcu_dereference(p);

    Like rcu_assign_pointer(), rcu_dereference() must be implemented
    as a macro.

    The reader uses rcu_dereference() to fetch an RCU-protected
    pointer, which returns a value that may then be safely
    dereferenced.  Note that rcu_dereference() does not actually
    dereference the pointer, instead, it protects the pointer for
    later dereferencing.  It also executes any needed memory-barrier
    instructions for a given CPU architecture.  Currently, only Alpha
    needs memory barriers within rcu_dereference() -- on other CPUs,
    it compiles to nothing, not even a compiler directive.

    Common coding practice uses rcu_dereference() to copy an
    RCU-protected pointer to a local variable, then dereferences
    this local variable, for example as follows:

        p = rcu_dereference(head.next);
        return p->data;

    However, in this case, one could just as easily combine these
    into one statement:

        return rcu_dereference(head.next)->data;

    If you are going to be fetching multiple fields from the
    RCU-protected structure, using the local variable is of
    course preferred.  Repeated rcu_dereference() calls look
    ugly, do not guarantee that the same pointer will be returned
    if an update happened while in the critical section, and incur
    unnecessary overhead on Alpha CPUs.

    Note that the value returned by rcu_dereference() is valid
    only within the enclosing RCU read-side critical section.
    For example, the following is -not- legal:

        rcu_read_lock();
        p = rcu_dereference(head.next);
        rcu_read_unlock();
        x = p->address;    /* BUG!!! */
        rcu_read_lock();
        y = p->data;    /* BUG!!! */
        rcu_read_unlock();

    Holding a reference from one RCU read-side critical section
    to another is just as illegal as holding a reference from
    one lock-based critical section to another!  Similarly,
    using a reference outside of the critical section in which
    it was acquired is just as illegal as doing so with normal
    locking.

    As with rcu_assign_pointer(), an important function of
    rcu_dereference() is to document which pointers are protected by
    RCU, in particular, flagging a pointer that is subject to changing
    at any time, including immediately after the rcu_dereference().
    And, again like rcu_assign_pointer(), rcu_dereference() is
    typically used indirectly, via the _rcu list-manipulation
    primitives, such as list_for_each_entry_rcu().

The following diagram shows how each API communicates among the
reader, updater, and reclaimer.


        rcu_assign_pointer()
                                +--------+
        +---------------------->| reader |---------+
        |                       +--------+         |
        |                           |              |
        |                           |              | Protect:
        |                           |              | rcu_read_lock()
        |                           |              | rcu_read_unlock()
        |        rcu_dereference()  |              |
    +---------+                     |              |
    | updater |<--------------------+              |
    +---------+                                    V
        |                                    +-----------+
        +----------------------------------->| reclaimer |
                                             +-----------+
          Defer:
          synchronize_rcu() & call_rcu()


The RCU infrastructure observes the time sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.

There are no fewer than three RCU mechanisms in the Linux kernel; the
diagram above shows the first one, which is by far the most commonly used.
The rcu_dereference() and rcu_assign_pointer() primitives are used for
all three mechanisms, but different defer and protect primitives are
used as follows:

      Defer                    Protect

a.    synchronize_rcu()        rcu_read_lock() / rcu_read_unlock()
      call_rcu()               rcu_dereference()

b.    synchronize_rcu_bh()     rcu_read_lock_bh() / rcu_read_unlock_bh()
      call_rcu_bh()            rcu_dereference_bh()

c.    synchronize_sched()      rcu_read_lock_sched() / rcu_read_unlock_sched()
      call_rcu_sched()         preempt_disable() / preempt_enable()
                               local_irq_save() / local_irq_restore()
                               hardirq enter / hardirq exit
                               NMI enter / NMI exit
                               rcu_dereference_sched()

These three mechanisms are used as follows:

a.    RCU applied to normal data structures.

b.    RCU applied to networking data structures that may be subjected
    to remote denial-of-service attacks.

c.    RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a).  The (b) and (c) cases are important
for specialized uses, but are relatively uncommon.


3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure.  More-typical
uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.

    struct foo {
        int a;
        char b;
        long c;
    };
    DEFINE_SPINLOCK(foo_mutex);

    struct foo __rcu *gbl_foo;

    /*
     * Create a new struct foo that is the same as the one currently
     * pointed to by gbl_foo, except that field "a" is replaced
     * with "new_a".  Points gbl_foo to the new structure, and
     * frees up the old structure after a grace period.
     *
     * Uses rcu_assign_pointer() to ensure that concurrent readers
     * see the initialized version of the new structure.
     *
     * Uses synchronize_rcu() to ensure that any readers that might
     * have references to the old structure complete before freeing
     * the old structure.
     */
    void foo_update_a(int new_a)
    {
        struct foo *new_fp;
        struct foo *old_fp;

        new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
        spin_lock(&foo_mutex);
        old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
        *new_fp = *old_fp;
        new_fp->a = new_a;
        rcu_assign_pointer(gbl_foo, new_fp);
        spin_unlock(&foo_mutex);
        synchronize_rcu();
        kfree(old_fp);
    }

    /*
     * Return the value of field "a" of the current gbl_foo
     * structure.  Use rcu_read_lock() and rcu_read_unlock()
     * to ensure that the structure does not get deleted out
     * from under us, and use rcu_dereference() to ensure that
     * we see the initialized version of the structure (important
     * for DEC Alpha and for people reading the code).
     */
    int foo_get_a(void)
    {
        int retval;

        rcu_read_lock();
        retval = rcu_dereference(gbl_foo)->a;
        rcu_read_unlock();
        return retval;
    }

So, to sum up:

o    Use rcu_read_lock() and rcu_read_unlock() to guard RCU
    read-side critical sections.

o    Within an RCU read-side critical section, use rcu_dereference()
    to dereference RCU-protected pointers.

o    Use some solid scheme (such as locks or semaphores) to
    keep concurrent updates from interfering with each other.

o    Use rcu_assign_pointer() to update an RCU-protected pointer.
    This primitive protects concurrent readers from the updater,
    -not- concurrent updates from each other!  You therefore still
    need to use locking (or something similar) to keep concurrent
    rcu_assign_pointer() primitives from interfering with each other.

o    Use synchronize_rcu() -after- removing a data element from an
    RCU-protected data structure, but -before- reclaiming/freeing
    the data element, in order to wait for the completion of all
    RCU read-side critical sections that might be referencing that
    data item.

See checklist.txt for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in listRCU.txt,
arrayRCU.txt, and NMI-RCU.txt.


4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows:

    void call_rcu(struct rcu_head * head,
              void (*func)(struct rcu_head *head));

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block.  The foo struct needs to
have an rcu_head structure added, perhaps as follows:

    struct foo {
        int a;
        char b;
        long c;
        struct rcu_head rcu;
    };

The foo_update_a() function might then be written as follows:

    /*
     * Create a new struct foo that is the same as the one currently
     * pointed to by gbl_foo, except that field "a" is replaced
     * with "new_a".  Points gbl_foo to the new structure, and
     * frees up the old structure after a grace period.
     *
     * Uses rcu_assign_pointer() to ensure that concurrent readers
     * see the initialized version of the new structure.
     *
     * Uses call_rcu() to ensure that any readers that might have
     * references to the old structure complete before freeing the
     * old structure.
     */
    void foo_update_a(int new_a)
    {
        struct foo *new_fp;
        struct foo *old_fp;

        new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
        spin_lock(&foo_mutex);
        old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
        *new_fp = *old_fp;
        new_fp->a = new_a;
        rcu_assign_pointer(gbl_foo, new_fp);
        spin_unlock(&foo_mutex);
        call_rcu(&old_fp->rcu, foo_reclaim);
    }

The foo_reclaim() function might appear as follows:

    void foo_reclaim(struct rcu_head *rp)
    {
        struct foo *fp = container_of(rp, struct foo, rcu);

        foo_cleanup(fp->a);

        kfree(fp);
    }

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.
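
Roughly speaking, and ignoring the type checking in the kernel's actual definition, container_of() is just pointer arithmetic:

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))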

The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element.  It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

o    Use call_rcu() -after- removing a data element from an
    RCU-protected data structure in order to register a callback
    function that will be invoked after the completion of all RCU
    read-side critical sections that might be referencing that
    data item.

If the callback for call_rcu() is not doing anything more than calling
kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
to avoid having to write your own callback:

    kfree_rcu(old_fp, rcu);

Again, see checklist.txt for additional rules governing the use of RCU.


5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel.  This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU.  Both are way too simple for real-world use,
lacking both functionality and performance.  However, they are useful
in getting a feel for how RCU works.  See kernel/rcupdate.c for a
production-quality implementation, and see:

    http://www.rdrop.com/users/paulmck/RCU

for papers describing the Linux kernel RCU implementation.  The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation as of early 2004.


5A.  "TOY" IMPLEMENTATION #1: LOCKING

This section presents a "toy" RCU implementation that is based on
familiar locking primitives.  Its overhead makes it a non-starter for
real-life use, as does its lack of scalability.  It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another.

However, it is probably the easiest implementation to relate to, so is
a good starting point.

It is extremely simple:

    static DEFINE_RWLOCK(rcu_gp_mutex);

    void rcu_read_lock(void)
    {
        read_lock(&rcu_gp_mutex);
    }

    void rcu_read_unlock(void)
    {
        read_unlock(&rcu_gp_mutex);
    }

    void synchronize_rcu(void)
    {
        write_lock(&rcu_gp_mutex);
        write_unlock(&rcu_gp_mutex);
    }

[You can ignore rcu_assign_pointer() and rcu_dereference() without
missing much.  But here they are anyway.  And whatever you do, don't
forget about them when submitting patches making use of RCU!]

    #define rcu_assign_pointer(p, v)    ({ \
                            smp_wmb(); \
                            (p) = (v); \
                        })

    #define rcu_dereference(p)     ({ \
                    typeof(p) _________p1 = p; \
                    smp_read_barrier_depends(); \
                    (_________p1); \
                    })


The rcu_read_lock() and rcu_read_unlock() primitive read-acquire
and release a global reader-writer lock.  The synchronize_rcu()
primitive write-acquires this same lock, then immediately releases
it.  This means that once synchronize_rcu() exits, all RCU read-side
critical sections that were in progress before synchronize_rcu() was
called are guaranteed to have completed -- there is no way that
synchronize_rcu() would have been able to write-acquire the lock
otherwise.

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired.  Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU).  The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

Quick Quiz #1:    Why is this argument naive?  How could a deadlock
        occur when using this algorithm in a real-world Linux
        kernel?  How could this deadlock be avoided?


5B.  "TOY" EXAMPLE #2: CLASSIC RCU

This section presents a "toy" RCU implementation that is based on
"classic RCU".  It is also short on performance (but only for updates) and
on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
kernels.  The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are omitted.

    void rcu_read_lock(void) { }

    void rcu_read_unlock(void) { }

    void synchronize_rcu(void)
    {
        int cpu;

        for_each_possible_cpu(cpu)
            run_on(cpu);
    }

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn.  The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive.  Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant -toy-!

So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section.  Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once -all- CPUs have executed a context switch, then -all- preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu().  Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

Quick Quiz #2:    Give an example where Classic RCU's read-side
        overhead is -negative-.

Quick Quiz #3:  If it is illegal to block in an RCU read-side
        critical section, what the heck do you do in
        PREEMPT_RT, where normal spinlocks can block???


6.  ANALOGY WITH READER-WRITER LOCKING

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking.  The following unified
diff shows how closely related RCU and reader-writer locking can be.

    @@ -13,15 +14,15 @@
        struct list_head *lp;
        struct el *p;

    -    read_lock();
    -    list_for_each_entry(p, head, lp) {
    +    rcu_read_lock();
    +    list_for_each_entry_rcu(p, head, lp) {
            if (p->key == key) {
                *result = p->data;
    -            read_unlock();
    +            rcu_read_unlock();
                return 1;
            }
        }
    -    read_unlock();
    +    rcu_read_unlock();
        return 0;
     }

    @@ -29,15 +30,16 @@
     {
        struct el *p;

    -    write_lock(&listmutex);
    +    spin_lock(&listmutex);
        list_for_each_entry(p, head, lp) {
            if (p->key == key) {
    -            list_del(&p->list);
    -            write_unlock(&listmutex);
    +            list_del_rcu(&p->list);
    +            spin_unlock(&listmutex);
    +            synchronize_rcu();
                kfree(p);
                return 1;
            }
        }
    -    write_unlock(&listmutex);
    +    spin_unlock(&listmutex);
        return 0;
     }

Or, for those who prefer a side-by-side listing:

 1 struct el {                          1 struct el {
 2   struct list_head list;             2   struct list_head list;
 3   long key;                          3   long key;
 4   spinlock_t mutex;                  4   spinlock_t mutex;
 5   int data;                          5   int data;
 6   /* Other data fields */            6   /* Other data fields */
 7 };                                   7 };
 8 spinlock_t listmutex;                8 spinlock_t listmutex;
 9 struct el head;                      9 struct el head;

 1 int search(long key, int *result)    1 int search(long key, int *result)
 2 {                                    2 {
 3   struct list_head *lp;              3   struct list_head *lp;
 4   struct el *p;                      4   struct el *p;
 5                                      5
 6   read_lock();                       6   rcu_read_lock();
 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
 8     if (p->key == key) {             8     if (p->key == key) {
 9       *result = p->data;             9       *result = p->data;
10       read_unlock();                10       rcu_read_unlock();
11       return 1;                     11       return 1;
12     }                               12     }
13   }                                 13   }
14   read_unlock();                    14   rcu_read_unlock();
15   return 0;                         15   return 0;
16 }                                   16 }

 1 int delete(long key)                 1 int delete(long key)
 2 {                                    2 {
 3   struct el *p;                      3   struct el *p;
 4                                      4
 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
 7     if (p->key == key) {             7     if (p->key == key) {
 8       list_del(&p->list);            8       list_del_rcu(&p->list);
 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
                                       10       synchronize_rcu();
10       kfree(p);                     11       kfree(p);
11       return 1;                     12       return 1;
12     }                               13     }
13   }                                 14   }
14   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
15   return 0;                         16   return 0;
16 }                                   17 }

Either way, the differences are quite small.  Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently.  In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block.  If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
be used in place of synchronize_rcu().


7.  FULL LIST OF RCU APIs

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook.  Here is the list, by category.

RCU list traversal:

    list_entry_rcu
    list_first_entry_rcu
    list_next_rcu
    list_for_each_entry_rcu
    list_for_each_entry_continue_rcu
    hlist_first_rcu
    hlist_next_rcu
    hlist_pprev_rcu
    hlist_for_each_entry_rcu
    hlist_for_each_entry_rcu_bh
    hlist_for_each_entry_continue_rcu
    hlist_for_each_entry_continue_rcu_bh
    hlist_nulls_first_rcu
    hlist_nulls_for_each_entry_rcu
    hlist_bl_first_rcu
    hlist_bl_for_each_entry_rcu

RCU pointer/list update:

    rcu_assign_pointer
    list_add_rcu
    list_add_tail_rcu
    list_del_rcu
    list_replace_rcu
    hlist_add_behind_rcu
    hlist_add_before_rcu
    hlist_add_head_rcu
    hlist_del_rcu
    hlist_del_init_rcu
    hlist_replace_rcu
    list_splice_init_rcu()
    hlist_nulls_del_init_rcu
    hlist_nulls_del_rcu
    hlist_nulls_add_head_rcu
    hlist_bl_add_head_rcu
    hlist_bl_del_init_rcu
    hlist_bl_del_rcu
    hlist_bl_set_first_rcu

RCU:    Critical sections           Grace period                  Barrier

        rcu_read_lock               synchronize_net               rcu_barrier
        rcu_read_unlock             synchronize_rcu
        rcu_dereference             synchronize_rcu_expedited
        rcu_read_lock_held          call_rcu
        rcu_dereference_check       kfree_rcu
        rcu_dereference_protected

bh:     Critical sections           Grace period                  Barrier

        rcu_read_lock_bh            call_rcu_bh                   rcu_barrier_bh
        rcu_read_unlock_bh          synchronize_rcu_bh
        rcu_dereference_bh          synchronize_rcu_bh_expedited
        rcu_dereference_bh_check
        rcu_dereference_bh_protected
        rcu_read_lock_bh_held

sched:  Critical sections           Grace period                  Barrier

        rcu_read_lock_sched         synchronize_sched             rcu_barrier_sched
        rcu_read_unlock_sched       call_rcu_sched
        [preempt_disable]           synchronize_sched_expedited
        [and friends]
        rcu_read_lock_sched_notrace
        rcu_read_unlock_sched_notrace
        rcu_dereference_sched
        rcu_dereference_sched_check
        rcu_dereference_sched_protected
        rcu_read_lock_sched_held


SRCU:   Critical sections           Grace period                  Barrier

        srcu_read_lock              synchronize_srcu              srcu_barrier
        srcu_read_unlock            call_srcu
        srcu_dereference            synchronize_srcu_expedited
        srcu_dereference_check
        srcu_read_lock_held

SRCU:    Initialization/cleanup
    init_srcu_struct
    cleanup_srcu_struct

All:  lockdep-checked RCU-protected pointer access

    rcu_access_pointer
    rcu_dereference_raw
    RCU_LOCKDEP_WARN
    rcu_sleep_check
    RCU_NONIDLE

See the comment headers in the source code (or the docbook generated
from them) for more information.

However, given that there are no fewer than four families of RCU APIs
in the Linux kernel, how do you choose which one to use?  The following
list can be helpful:

a.    Will readers need to block?  If so, you need SRCU.

b.    What about the -rt patchset?  If readers would need to block
    in an non-rt kernel, you need SRCU.  If readers would block
    in a -rt kernel, but not in a non-rt kernel, SRCU is not
    necessary.

c.    Do you need to treat NMI handlers, hardirq handlers,
    and code segments with preemption disabled (whether
    via preempt_disable(), local_irq_save(), local_bh_disable(),
    or some other mechanism) as if they were explicit RCU readers?
    If so, RCU-sched is the only choice that will work for you.

d.    Do you need RCU grace periods to complete even in the face
    of softirq monopolization of one or more of the CPUs?  For
    example, is your code subject to network-based denial-of-service
    attacks?  If so, you need RCU-bh.

e.    Is your workload too update-intensive for normal use of
    RCU, but inappropriate for other synchronization mechanisms?
    If so, consider SLAB_DESTROY_BY_RCU.  But please be careful!

f.    Do you need read-side critical sections that are respected
    even though they are in the middle of the idle loop, during
    user-mode execution, or on an offlined CPU?  If so, SRCU is the
    only choice that will work for you.

g.    Otherwise, use RCU.

Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.


8.  ANSWERS TO QUICK QUIZZES

Quick Quiz #1:    Why is this argument naive?  How could a deadlock
        occur when using this algorithm in a real-world Linux
        kernel?  [Referring to the lock-based "toy" RCU
        algorithm.]

Answer:        Consider the following sequence of events:

        1.    CPU 0 acquires some unrelated lock, call it
            "problematic_lock", disabling irq via
            spin_lock_irqsave().

        2.    CPU 1 enters synchronize_rcu(), write-acquiring
            rcu_gp_mutex.

        3.    CPU 0 enters rcu_read_lock(), but must wait
            because CPU 1 holds rcu_gp_mutex.

        4.    CPU 1 is interrupted, and the irq handler
            attempts to acquire problematic_lock.

        The system is now deadlocked.

        One way to avoid this deadlock is to use an approach like
        that of CONFIG_PREEMPT_RT, where all normal spinlocks
        become blocking locks, and all irq handlers execute in
        the context of special tasks.  In this case, in step 4
        above, the irq handler would block, allowing CPU 1 to
        release rcu_gp_mutex, avoiding the deadlock.

        Even in the absence of deadlock, this RCU implementation
        allows latency to "bleed" from readers to other
        readers through synchronize_rcu().  To see this,
        consider task A in an RCU read-side critical section
        (thus read-holding rcu_gp_mutex), task B blocked
        attempting to write-acquire rcu_gp_mutex, and
        task C blocked in rcu_read_lock() attempting to
        read_acquire rcu_gp_mutex.  Task A's RCU read-side
        latency is holding up task C, albeit indirectly via
        task B.

        Realtime RCU implementations therefore use a counter-based
        approach where tasks in RCU read-side critical sections
        cannot be blocked by tasks executing synchronize_rcu().

Quick Quiz #2:    Give an example where Classic RCU's read-side
        overhead is -negative-.

Answer:        Imagine a single-CPU system with a non-CONFIG_PREEMPT
        kernel where a routing table is used by process-context
        code, but can be updated by irq-context code (for example,
        by an "ICMP REDIRECT" packet).    The usual way of handling
        this would be to have the process-context code disable
        interrupts while searching the routing table.  Use of
        RCU allows such interrupt-disabling to be dispensed with.
        Thus, without RCU, you pay the cost of disabling interrupts,
        and with RCU you don't.

        One can argue that the overhead of RCU in this
        case is negative with respect to the single-CPU
        interrupt-disabling approach.  Others might argue that
        the overhead of RCU is merely zero, and that replacing
        the positive overhead of the interrupt-disabling scheme
        with the zero-overhead RCU scheme does not constitute
        negative overhead.

        In real life, of course, things are more complex.  But
        even the theoretical possibility of negative overhead for
        a synchronization primitive is a bit unexpected.  ;-)

Quick Quiz #3:  If it is illegal to block in an RCU read-side
        critical section, what the heck do you do in
        PREEMPT_RT, where normal spinlocks can block???

Answer:        Just as PREEMPT_RT permits preemption of spinlock
        critical sections, it permits preemption of RCU
        read-side critical sections.  It also permits
        spinlocks blocking while in RCU read-side critical
        sections.

        Why the apparent inconsistency?  Because it is
        possible to use priority boosting to keep the RCU
        grace periods short if need be (for example, if running
        short of memory).  In contrast, if blocking waiting
        for (say) network reception, there is no way to know
        what should be boosted.  Especially given that the
        process we need to boost might well be a human being
        who just went out for a pizza or something.  And although
        a computer-operated cattle prod might arouse serious
        interest, it might also provoke serious objections.
        Besides, how does the computer know what pizza parlor
        the human being went to???


ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.

*/
Using RCU to Protect Read-Mostly Linked Lists


One of the best applications of RCU is to protect read-mostly linked lists
("struct list_head" in list.h).  One big advantage of this approach
is that all of the required memory barriers are included for you in
the list macros.  This document describes several applications of RCU,
with the best fits first.


Example 1: Read-Side Action Taken Outside of Lock, No In-Place Updates

The best applications are cases where, if reader-writer locking were
used, the read-side lock would be dropped before taking any action
based on the results of the search.  The most celebrated example is
the routing table.  Because the routing table is tracking the state of
equipment outside of the computer, it will at times contain stale data.
Therefore, once the route has been computed, there is no need to hold
the routing table static during transmission of the packet.  After all,
you can hold the routing table static all you want, but that won't keep
the external Internet from changing, and it is the state of the external
Internet that really matters.  In addition, routing entries are typically
added or deleted, rather than being modified in place.

A straightforward example of this use of RCU may be found in the
system-call auditing support.  For example, a reader-writer locked
implementation of audit_filter_task() might be as follows:

    static enum audit_state audit_filter_task(struct task_struct *tsk)
    {
        struct audit_entry *e;
        enum audit_state   state;

        read_lock(&auditsc_lock);
        /* Note: audit_netlink_sem held by caller. */
        list_for_each_entry(e, &audit_tsklist, list) {
            if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
                read_unlock(&auditsc_lock);
                return state;
            }
        }
        read_unlock(&auditsc_lock);
        return AUDIT_BUILD_CONTEXT;
    }

Here the list is searched under the lock, but the lock is dropped before
the corresponding value is returned.  By the time that this value is acted
on, the list may well have been modified.  This makes sense, since if
you are turning auditing off, it is OK to audit a few extra system calls.

This means that RCU can be easily applied to the read side, as follows:

    static enum audit_state audit_filter_task(struct task_struct *tsk)
    {
        struct audit_entry *e;
        enum audit_state   state;

        rcu_read_lock();
        /* Note: audit_netlink_sem held by caller. */
        list_for_each_entry_rcu(e, &audit_tsklist, list) {
            if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
                rcu_read_unlock();
                return state;
            }
        }
        rcu_read_unlock();
        return AUDIT_BUILD_CONTEXT;
    }

The read_lock() and read_unlock() calls have become rcu_read_lock()
and rcu_read_unlock(), respectively, and the list_for_each_entry() has
become list_for_each_entry_rcu().  The _rcu() list-traversal primitives
insert the read-side memory barriers that are required on DEC Alpha CPUs.

The changes to the update side are also straightforward.  A reader-writer
lock might be used as follows for deletion and insertion:

    static inline int audit_del_rule(struct audit_rule *rule,
                     struct list_head *list)
    {
        struct audit_entry  *e;

        write_lock(&auditsc_lock);
        list_for_each_entry(e, list, list) {
            if (!audit_compare_rule(rule, &e->rule)) {
                list_del(&e->list);
                write_unlock(&auditsc_lock);
                return 0;
            }
        }
        write_unlock(&auditsc_lock);
        return -EFAULT;        /* No matching rule */
    }

    static inline int audit_add_rule(struct audit_entry *entry,
                     struct list_head *list)
    {
        write_lock(&auditsc_lock);
        if (entry->rule.flags & AUDIT_PREPEND) {
            entry->rule.flags &= ~AUDIT_PREPEND;
            list_add(&entry->list, list);
        } else {
            list_add_tail(&entry->list, list);
        }
        write_unlock(&auditsc_lock);
        return 0;
    }

Following are the RCU equivalents for these two functions:

    static inline int audit_del_rule(struct audit_rule *rule,
                     struct list_head *list)
    {
        struct audit_entry  *e;

        /* Do not use the _rcu iterator here, since this is the only
         * deletion routine. */
        list_for_each_entry(e, list, list) {
            if (!audit_compare_rule(rule, &e->rule)) {
                list_del_rcu(&e->list);
                call_rcu(&e->rcu, audit_free_rule);
                return 0;
            }
        }
        return -EFAULT;        /* No matching rule */
    }

    static inline int audit_add_rule(struct audit_entry *entry,
                     struct list_head *list)
    {
        if (entry->rule.flags & AUDIT_PREPEND) {
            entry->rule.flags &= ~AUDIT_PREPEND;
            list_add_rcu(&entry->list, list);
        } else {
            list_add_tail_rcu(&entry->list, list);
        }
        return 0;
    }

Normally, the write_lock() and write_unlock() would be replaced by
a spin_lock() and a spin_unlock(), but in this case, all callers hold
audit_netlink_sem, so no additional locking is required.  The auditsc_lock
can therefore be eliminated, since use of RCU eliminates the need for
writers to exclude readers.  Normally, the write_lock() calls would
be converted into spin_lock() calls.

The list_del(), list_add(), and list_add_tail() primitives have been
replaced by list_del_rcu(), list_add_rcu(), and list_add_tail_rcu().
The _rcu() list-manipulation primitives add memory barriers that are
needed on weakly ordered CPUs (most of them!).  The list_del_rcu()
primitive omits the pointer poisoning debug-assist code that would
otherwise cause concurrent readers to fail spectacularly.
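
For reference, a sketch of list_del_rcu() as found in older kernels (exact details vary by version) makes the difference visible: ->next is left intact so that concurrent readers can keep traversing, and only ->prev is poisoned:

    static inline void list_del_rcu(struct list_head *entry)
    {
        __list_del(entry->prev, entry->next);  /* unlink; entry->next stays valid for readers */
        entry->prev = LIST_POISON2;            /* poison only the link readers never follow */
    }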

So, when readers can tolerate stale data and when entries are either added
or deleted, without in-place modification, it is very easy to use RCU!


Example 2: Handling In-Place Updates

The system-call auditing code does not update auditing rules in place.
However, if it did, reader-writer-locked code to do so might look as
follows (presumably, the field_count is only permitted to decrease,
otherwise, the added fields would need to be filled in):

    static inline int audit_upd_rule(struct audit_rule *rule,
                     struct list_head *list,
                     __u32 newaction,
                     __u32 newfield_count)
    {
        struct audit_entry  *e;
        struct audit_newentry *ne;

        write_lock(&auditsc_lock);
        /* Note: audit_netlink_sem held by caller. */
        list_for_each_entry(e, list, list) {
            if (!audit_compare_rule(rule, &e->rule)) {
                e->rule.action = newaction;
                e->rule.file_count = newfield_count;
                write_unlock(&auditsc_lock);
                return 0;
            }
        }
        write_unlock(&auditsc_lock);
        return -EFAULT;        /* No matching rule */
    }

The RCU version creates a copy, updates the copy, then replaces the old
entry with the newly updated entry.  This sequence of actions, allowing
concurrent reads while doing a copy to perform an update, is what gives
RCU ("read-copy update") its name.  The RCU code is as follows:

    static inline int audit_upd_rule(struct audit_rule *rule,
                     struct list_head *list,
                     __u32 newaction,
                     __u32 newfield_count)
    {
        struct audit_entry  *e;
        struct audit_newentry *ne;

        list_for_each_entry(e, list, list) {
            if (!audit_compare_rule(rule, &e->rule)) {
                ne = kmalloc(sizeof(*ne), GFP_ATOMIC);
                if (ne == NULL)
                    return -ENOMEM;
                audit_copy_rule(&ne->rule, &e->rule);
                ne->rule.action = newaction;
                ne->rule.file_count = newfield_count;
                list_replace_rcu(&e->list, &ne->list);
                call_rcu(&e->rcu, audit_free_rule);
                return 0;
            }
        }
        return -EFAULT;        /* No matching rule */
    }

Again, this assumes that the caller holds audit_netlink_sem.  Normally,
the reader-writer lock would become a spinlock in this sort of code.


Example 3: Eliminating Stale Data

The auditing examples above tolerate stale data, as do most algorithms
that are tracking external state.  Because there is a delay from the
time the external state changes before Linux becomes aware of the change,
additional RCU-induced staleness is normally not a problem.

However, there are many examples where stale data cannot be tolerated.
One example in the Linux kernel is the System V IPC (see the ipc_lock()
function in ipc/util.c).  This code checks a "deleted" flag under a
per-entry spinlock, and, if the "deleted" flag is set, pretends that the
entry does not exist.  For this to be helpful, the search function must
return holding the per-entry spinlock, as ipc_lock() does in fact do.

Quick Quiz:  Why does the search function need to return holding the
    per-entry lock for this deleted-flag technique to be helpful?

If the system-call audit module were to ever need to reject stale data,
one way to accomplish this would be to add a "deleted" flag and a "lock"
spinlock to the audit_entry structure, and modify audit_filter_task()
as follows:

    static enum audit_state audit_filter_task(struct task_struct *tsk)
    {
        struct audit_entry *e;
        enum audit_state   state;

        rcu_read_lock();
        list_for_each_entry_rcu(e, &audit_tsklist, list) {
            if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
                spin_lock(&e->lock);
                if (e->deleted) {
                    spin_unlock(&e->lock);
                    rcu_read_unlock();
                    return AUDIT_BUILD_CONTEXT;
                }
                rcu_read_unlock();
                return state;
            }
        }
        rcu_read_unlock();
        return AUDIT_BUILD_CONTEXT;
    }

Note that this example assumes that entries are only added and deleted.
Additional mechanism is required to deal correctly with the
update-in-place performed by audit_upd_rule().  For one thing,
audit_upd_rule() would need additional memory barriers to ensure
that the list_add_rcu() was really executed before the list_del_rcu().

The audit_del_rule() function would need to set the "deleted"
flag under the spinlock as follows:

    static inline int audit_del_rule(struct audit_rule *rule,
                     struct list_head *list)
    {
        struct audit_entry  *e;

        /* Do not need to use the _rcu iterator here, since this
         * is the only deletion routine. */
        list_for_each_entry(e, list, list) {
            if (!audit_compare_rule(rule, &e->rule)) {
                spin_lock(&e->lock);
                list_del_rcu(&e->list);
                e->deleted = 1;
                spin_unlock(&e->lock);
                call_rcu(&e->rcu, audit_free_rule);
                return 0;
            }
        }
        return -EFAULT;        /* No matching rule */
    }


Summary

Read-mostly list-based data structures that can tolerate stale data are
the most amenable to use of RCU.  The simplest case is where entries are
either added or deleted from the data structure (or atomically modified
in place), but non-atomic in-place modifications can be handled by making
a copy, updating the copy, then replacing the original with the copy.
If stale data cannot be tolerated, then a "deleted" flag may be used
in conjunction with a per-entry spinlock in order to allow the search
function to reject newly deleted data.


Answer to Quick Quiz
    Why does the search function need to return holding the per-entry
    lock for this deleted-flag technique to be helpful?

    If the search function drops the per-entry lock before returning,
    then the caller will be processing stale data in any case.  If it
    is really OK to be processing stale data, then you don't need a
    "deleted" flag.  If processing stale data really is a problem,
    then you need to hold the per-entry lock across all of the code
    that uses the value that was returned.

   When using RCU, accesses to the shared resource should be mostly reads and writes should be comparatively rare, because frequent writes cost more under RCU than under other locking mechanisms and efficiency suffers. Second, while a reader holds rcu_read_lock (the RCU read-side lock) no process context switch may occur; otherwise, since the writer must wait for the readers to finish, the writer would remain blocked and normal system operation would be affected. Third, after the writer has finished it must arrange for the callback to run; if the current process were context-switched away and put to sleep at that point, the callback could be delayed indefinitely and, worse, other processes entering the shared critical section in the meantime would inevitably run into errors. Finally, the resource protected by RCU must be accessed through a pointer, because essentially every RCU operation acts on pointer-based data.

  The most important synchronization function is synchronize_rcu(). The reader-side functions are in essence very simple: they disable preemption, which means no process context switch may occur inside the RCU read-side critical section. The reason was given above: the writer must wait for the readers to finish, so a reader being switched out would keep the writer blocked and disturb normal system operation; hence no process context switch is allowed during the RCU read-side critical section.

  On the writer side, the main functions are call_rcu and call_rcu_bh. call_rcu never blocks the writer, so it can be used from interrupt and softirq context; it hooks the function func onto RCU's callback list and returns immediately. The synchronize_rcu() function mentioned above is itself implemented on top of this callback mechanism. call_rcu_bh does almost exactly the same thing as call_rcu; the only difference is that it also counts the completion of softirq processing as passing through a quiescent state (a concept introduced at the beginning of this section). If the writer uses call_rcu_bh, the readers must therefore use rcu_read_lock_bh() and rcu_read_unlock_bh().

  The reason for using rcu_read_lock_bh() and rcu_read_unlock_bh() is that call_rcu_bh does not block the writer and may be used from interrupt and softirq context, which means interrupts and softirqs are not globally disabled. While a writer is updating the critical data via call_rcu_bh, RCU readers can still access it. A reader running in process context that needs to read the protected data must therefore disable softirqs, so that it is not interrupted by a softirq in the middle of its read-side critical section (as noted above, a softirq can preempt the current process context). In essence, rcu_read_lock_bh() and rcu_read_unlock_bh() simply call local_bh_disable() and local_bh_enable(), i.e. they disable and re-enable softirq processing.
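
Conceptually (omitting the lockdep annotations and debug bookkeeping present in the real kernel source), these two primitives reduce to:

    static inline void rcu_read_lock_bh(void)
    {
        local_bh_disable();    /* keep softirqs from interrupting this reader */
    }

    static inline void rcu_read_unlock_bh(void)
    {
        local_bh_enable();
    }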

  In addition, the comment on call_rcu_bh in the Linux source states explicitly that code running in interrupt context should use rcu_read_lock() and rcu_read_unlock(); given what those two functions boil down to, this amounts to disabling and re-enabling kernel preemption, the obvious reason being to keep the current task from being preempted by another task while it is reading or writing. The kernel comment also says that call_rcu_bh is intended for the case where most read-side critical sections run in softirq context; the reason follows from what it does and is mainly a matter of execution efficiency.

    static inline void rcu_read_lock_bh(void);
    static inline void rcu_read_unlock_bh(void);

  This variant should be used only when updates are made through call_rcu_bh: because call_rcu_bh also treats the completion of softirq execution as a quiescent state, read-side critical sections in process context must use this variant whenever the corresponding updates go through call_rcu_bh.

  Each CPU maintains two data structures, rcu_sched_data and rcu_bh_data, which hold the registered callbacks. call_rcu and call_rcu_bh register callbacks: the former adds them to rcu_sched_data, the latter to rcu_bh_data. Within each structure the callbacks are chained into a list, with earlier registrations at the head and later ones at the tail. The timer-interrupt handler (update_process_times) calls rcu_check_callbacks.

rcu_check_callbacks first checks whether this CPU has passed through a quiescent state. If either of the following holds:

  • the current task is running in user mode; or
  • the current task is the idle task and the CPU is neither running a softirq nor running an IRQ handler;

  then the CPU has passed through a quiescent state, so rcu_sched_qs and rcu_bh_qs are called to set the passed_quiesc flag in the CPU's rcu_sched_data and rcu_bh_data structures, recording that this CPU has passed through a quiescent state.

  Otherwise, if the CPU is not currently running a softirq, only the passed_quiesc flag of the CPU's rcu_bh_data structure is set, recording a quiescent state for the bh flavor only.

Finally, rcu_check_callbacks raises RCU_SOFTIRQ so that the RCU softirq handler can run.
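
A simplified sketch of this logic, modeled on older kernel sources (the real function keeps more per-flavor state), looks roughly like this:

    void rcu_check_callbacks(int cpu, int user)
    {
        if (user ||
            (idle_cpu(cpu) && !in_softirq() &&
             hardirq_count() <= (1 << HARDIRQ_SHIFT))) {
            /* User mode, or the idle task interrupted only by this timer
             * tick: a quiescent state for both the sched and bh flavors. */
            rcu_sched_qs(cpu);
            rcu_bh_qs(cpu);
        } else if (!in_softirq()) {
            /* Not inside a softirq: a quiescent state for RCU-bh only. */
            rcu_bh_qs(cpu);
        }
        raise_softirq(RCU_SOFTIRQ);    /* let the RCU softirq advance callbacks */
    }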

  synchronize_rcu() is the most central function in RCU: it waits for all pre-existing readers to leave their read-side critical sections.

After a full grace period has elapsed, that is, after all RCU read-side critical sections that were executing at the time have completed, control returns to the caller (possibly after some additional delay).
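
A classic way to build synchronize_rcu() on top of call_rcu(), close to what older kernels do, is to register a callback that fires a completion and then block on it (sketch only):

    struct rcu_synchronize {
        struct rcu_head head;
        struct completion completion;
    };

    static void wakeme_after_rcu(struct rcu_head *head)
    {
        struct rcu_synchronize *rcu =
            container_of(head, struct rcu_synchronize, head);

        complete(&rcu->completion);    /* grace period over: wake the waiter */
    }

    void synchronize_rcu(void)
    {
        struct rcu_synchronize rcu;

        init_completion(&rcu.completion);
        call_rcu(&rcu.head, wakeme_after_rcu);  /* fires after all pre-existing readers finish */
        wait_for_completion(&rcu.completion);
    }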

https://www.cnblogs.com/alantu2018/p/8459359.html

