The classic curl concurrency workflow goes like this: push all of the URLs into the concurrent queue, run the concurrent transfers, and only after every request has come back do you start parsing the data and doing the rest of the processing.
In practice, because of network variance, some URLs return their content earlier than others, yet the classic approach cannot start processing anything until the slowest URL has returned. Waiting means the CPU sits idle and is wasted. If the URL queue is short, that waste is tolerable; if the queue is long, it becomes unacceptable.
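For contrast, here is a minimal sketch of that classic pattern (the function and variable names are illustrative, not taken from the original code): all handles are added up front, the multi loop is driven until no transfer is active, and only then is every response read.

// Classic pattern: nothing is processed until the slowest request has finished.
function classicMultiCurl(array $urls) {
    $queue = curl_multi_init();
    $handles = array();
    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 3);
        curl_multi_add_handle($queue, $ch);
        $handles[] = $ch;
    }
    // Drive all transfers to completion; no results are touched inside this loop.
    do {
        while (($code = curl_multi_exec($queue, $active)) == CURLM_CALL_MULTI_PERFORM);
        if ($code != CURLM_OK) { break; }
        if ($active > 0) {
            curl_multi_select($queue, 0.5);
        }
    } while ($active);
    // Only now, after every request has returned, do we read the responses.
    $responses = array();
    foreach ($handles as $ch) {
        $responses[] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($queue, $ch);
        curl_close($ch);
    }
    curl_multi_close($queue);
    return $responses;
}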
The optimization is to process each URL as soon as its request completes, handling finished responses while the remaining URLs are still in flight, instead of starting only after the slowest request returns, which avoids the idle CPU time. The concrete implementation follows:
function multiCurl($url, $log) {
    $queue = curl_multi_init();
    foreach ($log as $info) {
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 3);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $info);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_NOSIGNAL, true);
        curl_multi_add_handle($queue, $ch);
    }
    $responses = array();
    do {
        while (($code = curl_multi_exec($queue, $active)) == CURLM_CALL_MULTI_PERFORM);
        if ($code != CURLM_OK) { break; }
        // a request was just completed -- find out which one
        while ($done = curl_multi_info_read($queue)) {
            // get the info and content returned on the request
            //$info = curl_getinfo($done['handle']);
            //$error = curl_error($done['handle']);
            $results = curl_multi_getcontent($done['handle']);
            //$responses[] = compact('info', 'error', 'results');
            $responses[] = $results;
            // remove the curl handle that just completed
            curl_multi_remove_handle($queue, $done['handle']);
            curl_close($done['handle']);
        }
        // Block for data in / output; error handling is done by curl_multi_exec
        if ($active > 0) {
            curl_multi_select($queue, 0.5);
        }
    } while ($active);
    curl_multi_close($queue);
    return json_encode($responses);
}
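As a usage sketch, assuming the target interface accepts POST bodies like the ones below (the endpoint URL and the $log payloads are placeholders, not from the original post), the function can be called like this:

// Hypothetical usage: post several log entries to the same endpoint concurrently.
$url = 'http://example.com/api/log';   // placeholder endpoint
$log = array(
    'event=login&uid=1001',
    'event=click&uid=1002',
    'event=logout&uid=1003',
);
echo multiCurl($url, $log);            // JSON-encoded array of response bodies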

