Once accept() is called, it blocks waiting for a connection. I wanted to know how that blocking is implemented underneath, so I stepped into the source, layer by layer, to find where the block actually happens.
So I downloaded OpenJDK 8. My own JDK at the time was 12.0.1, where accept() in PlainSocketImpl.java calls the native method accept0(). I found PlainSocketImpl.c in OpenJDK 8,
but it contained no accept0(). So I downloaded JDK 1.8 instead (Oracle now requires an account to download the JDK). Stepping in again, the call still ends at accept0(),
but this time in DualStackPlainSocketImpl.java, whose C file lives under the windows folder of the OpenJDK tree. (For locating files, the search tool Everything is very handy.) My system is Windows 10; on Linux the file presumably lives in a different folder.
Sure enough, there I found accept0(). It calls the accept function, which is evidently declared in a header file.
On my system that header turned out to be winsock2.h.
The declaration is there, but there is no implementation to read. Linux, I figured, should have comparable code that is open source.
I found a write-up of the Linux TCP accept implementation on a blog. I'll admit I wasn't rigorous here and read someone else's excerpt rather than digging up the source tree myself (limited time and ability for now): https://blog.csdn.net/mrpre/article/details/82655834
The important part is this code. I had assumed the blocking was done by the for(;;), but for any given process the loop body effectively runs only once; the loop is written that way mainly for historical reasons, as the blog author explains. So where does the blocking come from? The core is the schedule_timeout function, so I went and found its source.
Note that the code first adds the current process to the wait queue and marks it interruptible. schedule_timeout is the idiomatic way to delay in the kernel: it puts the task to sleep for the specified time. At its heart is the schedule function,
bracketed by setting up a timer beforehand and deleting it afterwards. When the timer expires, it wakes the process and puts it back on the run queue; but the process can also be woken before the deadline, precisely because it was marked interruptible.
So it is schedule that ultimately implements the blocking. First, the source of schedule_timeout:
signed long __sched schedule_timeout(signed long timeout)
{
    struct timer_list timer;
    unsigned long expire;

    switch (timeout)
    {
    case MAX_SCHEDULE_TIMEOUT: /* infinite sleep: no timer is needed */
        /*
         * These two special cases are useful to be comfortable
         * in the caller. Nothing more. We could take
         * MAX_SCHEDULE_TIMEOUT from one of the negative value
         * but I' d like to return a valid offset (>=0) to allow
         * the caller to do everything it want with the retval.
         */
        schedule();
        goto out;
    default:
        /*
         * Another bit of PARANOID. Note that the retval will be
         * 0 since no piece of kernel is supposed to do a check
         * for a negative retval of schedule_timeout() (since it
         * should never happens anyway). You just have the printk()
         * that will tell you if something is gone wrong and where.
         */
        if (timeout < 0) {
            printk(KERN_ERR "schedule_timeout: wrong timeout "
                "value %lx\n", timeout);
            dump_stack();
            current->state = TASK_RUNNING;
            goto out;
        }
    }

    expire = timeout + jiffies;

    setup_timer_on_stack(&timer, process_timeout, (unsigned long)current);
    __mod_timer(&timer, expire, false, TIMER_NOT_PINNED);
    schedule();
    del_singleshot_timer_sync(&timer);

    /* Remove the timer from the object tracker */
    destroy_timer_on_stack(&timer);

    timeout = expire - jiffies;

out:
    return timeout < 0 ? 0 : timeout;
}
This is kernel 2.6.39. schedule performs the actual scheduling: it yields the current process's CPU and context-switches to another task. Since the current task has also been placed on a wait queue, it is no longer eligible to run — until either the timer expires or a wakeup (say, an incoming connection) interrupts the sleep. At that point schedule returns and the timer is deleted. The remaining time expire - jiffies then tells the caller what happened: 0 means the task slept the full interval and was woken by the timer; a positive value means it was woken early because a connection arrived.
I haven't dug further into the code details, and my reading may well be wrong in places; something to revisit later.
/*
 * schedule() is the main scheduler function.
 */
asmlinkage void __sched schedule(void)
{
    struct task_struct *prev, *next;
    unsigned long *switch_count;
    struct rq *rq;
    int cpu;

need_resched:
    preempt_disable();              /* disable kernel preemption */
    cpu = smp_processor_id();       /* current CPU */
    rq = cpu_rq(cpu);               /* the run queue maintained by this CPU */
    rcu_note_context_switch(cpu);   /* tell RCU a context switch is happening here */
    prev = rq->curr;                /* the task currently running on this queue */

    schedule_debug(prev);

    if (sched_feat(HRTICK))
        hrtick_clear(rq);

    raw_spin_lock_irq(&rq->lock);   /* lock the run queue */

    switch_count = &prev->nivcsw;   /* involuntary context-switch counter */
    if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) { /* prev wants to sleep and was not preempted in kernel mode */
        if (unlikely(signal_pending_state(prev->state, prev))) { /* a signal is pending: keep the task runnable */
            prev->state = TASK_RUNNING;
        } else {                    /* no pending signal: remove prev from the run queue */
            /*
             * If a worker is going to sleep, notify and
             * ask workqueue whether it wants to wake up a
             * task to maintain concurrency. If so, wake
             * up the task.
             */
            if (prev->flags & PF_WQ_WORKER) {
                struct task_struct *to_wakeup;

                to_wakeup = wq_worker_sleeping(prev, cpu);
                if (to_wakeup)
                    try_to_wake_up_local(to_wakeup);
            }
            deactivate_task(rq, prev, DEQUEUE_SLEEP); /* dequeue from the run queue */

            /*
             * If we are going to sleep and we have plugged IO queued, make
             * sure to submit it to avoid deadlocks.
             */
            if (blk_needs_flush_plug(prev)) {
                raw_spin_unlock(&rq->lock);
                blk_schedule_flush_plug(prev);
                raw_spin_lock(&rq->lock);
            }
        }
        switch_count = &prev->nvcsw; /* voluntary context-switch counter */
    }

    pre_schedule(rq, prev);

    if (unlikely(!rq->nr_running))
        idle_balance(cpu, rq);

    put_prev_task(rq, prev);
    next = pick_next_task(rq);      /* pick the highest-priority runnable task */
    clear_tsk_need_resched(prev);   /* clear prev's TIF_NEED_RESCHED flag */
    rq->skip_clock_update = 0;

    if (likely(prev != next)) {     /* prev and next are different tasks */
        rq->nr_switches++;          /* per-queue switch counter */
        rq->curr = next;
        ++*switch_count;            /* per-task switch counter */

        context_switch(rq, prev, next); /* unlocks the rq */ /* switch contexts between the tasks */
        /*
         * The context switch have flipped the stack from under us
         * and restored the local variables which were saved when
         * this task called schedule() in the past. prev == current
         * is still correct, but it can be moved to another cpu/rq.
         */
        cpu = smp_processor_id();
        rq = cpu_rq(cpu);
    } else                          /* same task: no context switch needed */
        raw_spin_unlock_irq(&rq->lock);

    post_schedule(rq);

    preempt_enable_no_resched();
    if (need_resched())             /* another task set TIF_NEED_RESCHED on us: schedule again */
        goto need_resched;
}