Python: eventlet


 

Differences between "green threads" (in the eventlet sense) and ordinary threads:

  1. Green threads cost almost nothing: there is no need to pool and reuse them the way you would ordinary threads, so it is practical to give every network connection at least one green thread of its own;

  2. Green threads must explicitly yield CPU control to one another rather than being preempted. They can share data structures without explicit mutual exclusion, because other green threads can only touch the shared state after the running one has yielded control.

 

The diagram below shows how coroutines (green threads), the hub, threads, and processes relate in eventlet:

 _______________________________________
| python process                        |
|   _________________________________   |
|  | python thread                   |  |
|  |   _____   ___________________   |  |
|  |  | hub | | pool              |  |  |
|  |  |_____| |   _____________   |  |  |
|  |          |  | greenthread |  |  |  |
|  |          |  |_____________|  |  |  |
|  |          |   _____________   |  |  |
|  |          |  | greenthread |  |  |  |
|  |          |  |_____________|  |  |  |
|  |          |   _____________   |  |  |
|  |          |  | greenthread |  |  |  |
|  |          |  |_____________|  |  |  |
|  |          |                   |  |  |
|  |          |        ...        |  |  |
|  |          |___________________|  |  |
|  |                                 |  |
|  |_________________________________|  |
|                                       |
|   _________________________________   |
|  | python thread                   |  |
|  |_________________________________|  |
|   _________________________________   |
|  | python thread                   |  |
|  |_________________________________|  |
|                                       |
|                 ...                   |
|_______________________________________|

   Green threads are a per-thread concept: within one OS thread, green threads execute sequentially. For green threads to coordinate, the developer must explicitly yield the CPU at blocking points in the code; the hub then takes over, scheduling another runnable green thread within the same OS thread. Because green threads are scoped to a single thread, they cannot synchronize across OS threads.

 

Basic eventlet API

I. Spawning green threads

 eventlet.spawn(func, *args, **kw) 

  Creates a green thread that calls the function func with the arguments *args and **kw. Spawning several green threads runs their tasks concurrently. The function returns a greenthread.GreenThread object, which can be used to retrieve the return value of func.

  

 eventlet.spawn_n(func, *args, **kw) 

  Works like spawn(), except there is no way to retrieve func's return value or any exception it raises when it finishes. It is faster for that reason.

  

 eventlet.spawn_after(seconds, func, *args, **kw) 

  Same as spawn(), but run seconds seconds later. Calling GreenThread.cancel() on the returned object before the delay has elapsed aborts the spawn and prevents func from being called.
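
  A minimal sketch of the three spawning calls described above (the add function and the 5-second delay are illustrative only):

import eventlet

def add(a, b):
    return a + b

gt = eventlet.spawn(add, 2, 3)
print(gt.wait())                   # 5: wait() returns add's return value

eventlet.spawn_n(add, 2, 3)        # fire and forget; no way to get the result

later = eventlet.spawn_after(5, add, 1, 1)
later.cancel()                     # cancel before the 5 s elapse; add never runs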

 

II. Controlling green threads

 eventlet.sleep(seconds=0) 

  Suspends the current green thread and allows the other green threads to run.
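
  The sketch below shows the cooperative model in action: without the explicit sleep(0) yield, the first green thread would run to completion before the second ever started (the worker function is illustrative):

import eventlet

def worker(name):
    for i in range(3):
        print(name, i)
        eventlet.sleep(0)   # yield to the hub; the other green thread runs next

a = eventlet.spawn(worker, 'A')
b = eventlet.spawn(worker, 'B')
a.wait()
b.wait()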

  

 class eventlet.GreenPool 

  A pool that caps the number of concurrently running green threads. Use it to bound concurrency, and with it the memory the whole operation consumes, or to limit the number of connections opened by one part of the code.
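
  A minimal GreenPool sketch (the double function is illustrative); the same imap primitive reappears in the crawler examples later in this article:

import eventlet

pool = eventlet.GreenPool(100)    # at most 100 green threads run at once

def double(n):
    eventlet.sleep(0.01)          # stands in for blocking I/O
    return n * 2

for result in pool.imap(double, range(5)):
    print(result)                 # 0 2 4 6 8, in submission order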

  

 class eventlet.GreenPile 

  A GreenPile object represents a chunk of work. It is an iterator into which work can be fed and from which results can later be read.
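
  A minimal GreenPile sketch (square is illustrative). Iterating over the pile yields the results in the order the work was spawned:

import eventlet

def square(x):
    return x * x

pile = eventlet.GreenPile()       # uses its own implicit pool
for i in range(4):
    pile.spawn(square, i)         # feed work into the pile
print(list(pile))                 # [0, 1, 4, 9]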

  

 class eventlet.Queue 

  A basic building block for passing data between units of execution; used for communication between green threads.
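
  A minimal sketch of two green threads communicating through a Queue (the producer/consumer functions are illustrative):

import eventlet

q = eventlet.Queue()

def producer():
    for i in range(3):
        q.put(i)

def consumer():
    for _ in range(3):
        print('got', q.get())     # get() blocks this green thread only

eventlet.spawn(producer)
eventlet.spawn(consumer).wait()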

  

 class eventlet.Timeout 

  Adds a timeout to almost anything: after timeout seconds the exception exception is raised. If exception is omitted or None, the Timeout instance itself is raised. Timeout instances are context managers, so they can be used in with statements.
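
  A minimal sketch of both forms. With no exception argument the Timeout instance itself is raised and can be caught; passing False exits the with block silently, the pattern used in the crawler examples below:

import eventlet

try:
    with eventlet.Timeout(1):
        eventlet.sleep(10)        # stands in for a blocking network call
except eventlet.Timeout:
    print('timed out')

with eventlet.Timeout(1, False):  # False: no exception, just exit the block
    eventlet.sleep(10)
print('continued after a silent timeout')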

 

III. Patching functions

 eventlet.import_patched(modulename, *additional_modules, **kw_additional_modules) 

  Imports a greened version of a standard-library module so that subsequent code executes without blocking. The only required argument is the name of the target module; see Import Green for details.

  

 eventlet.monkey_patch(all=True, os=False, select=False, socket=False, thread=False, time=False) 

  Globally patches the named system modules so that they become green-thread friendly. The keyword arguments indicate which modules to patch: if all is True, every module is patched regardless of the other arguments; otherwise each module-specific argument controls its own module. Most arguments patch the module of the same name (os, time, select), but when socket is True and the ssl module is present, both socket and ssl are patched; similarly, when thread is True, the thread, threading, and Queue modules are patched.

  monkey_patch() may be called multiple times; see Monkeypatching the Standard Library for details.

 

IV. Networking

 eventlet.connect(addr, family=2, bind=None) 

  Opens a client socket.

  Parameters:

  • addr – the address of the target server; for TCP sockets this is a (host, port) tuple
  • family – the socket family, optional; see the socket documentation
  • bind – the local address to bind to, optional

  Returns:

  The connected "green" socket object.

 

 eventlet.listen(addr, family=2, backlog=50) 

  Creates a socket that can be used with serve() or with a custom accept() loop. Sets SO_REUSEADDR on the socket to reduce annoyance.

  Parameters:

  • addr: the address to listen on; for TCP sockets this is a (host, port) tuple.
  • family: the socket family.
  • backlog: the maximum number of queued connections; at least 1, with the upper bound determined by the system.

  Returns:

  The listening "green" socket object.
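
  A sketch tying listen() and connect() together in one process (the address and message are illustrative):

import eventlet

listener = eventlet.listen(('127.0.0.1', 6001))   # green listening socket

def serve_once():
    client, addr = listener.accept()
    client.sendall(b'hello')
    client.close()

eventlet.spawn(serve_once)

conn = eventlet.connect(('127.0.0.1', 6001))      # green client socket
print(conn.recv(1024))                            # b'hello'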

 

  eventlet.wrap_ssl(sock, *a, **kw) 

  Converts a plain socket into an SSL socket, with the same interface as ssl.wrap_socket(). PyOpenSSL can be used instead, but when it is, the cert_reqs, ssl_version, ca_certs, do_handshake_on_connect, and suppress_ragged_eofs arguments are ignored.

  It is recommended to call the method in a creation pattern, e.g. wrap_ssl(connect(addr)) or wrap_ssl(listen(addr), server_side=True). That way there is no danger of a "naked" socket accidentally listening for non-SSL sessions.

  Returns:

  A "green" SSL object.
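
  A sketch of the two creation patterns recommended above; the certificate file paths are hypothetical placeholders, and the keyword arguments are those of ssl.wrap_socket():

import eventlet

# client side: wrap a freshly connected socket
ssl_client = eventlet.wrap_ssl(eventlet.connect(('example.com', 443)))

# server side: wrap the listening socket
ssl_server = eventlet.wrap_ssl(eventlet.listen(('127.0.0.1', 8443)),
                               certfile='server.crt',
                               keyfile='server.key',
                               server_side=True)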

  

 eventlet.serve(sock, handle, concurrency=1000) 

  Runs a server on the given socket: for each incoming client connection, handle is invoked in a dedicated green thread. handle takes two arguments, the client's socket object and the client's address:

def myhandle(client_sock, client_addr):
    print("client connected", client_addr)

eventlet.serve(eventlet.listen(('127.0.0.1', 9999)), myhandle)

  When handle returns, the client socket is closed.

  serve() blocks the calling green thread and does not return until the server shuts down; if you need the calling green thread to continue immediately, spawn a new green thread to run serve() in.

  Any uncaught exception raised by handle is treated as if it were raised by serve() and terminates the server, so work out which exceptions the application can raise. The return value of handle is ignored.

  Raise a StopServe exception to end the server gracefully; that is the only way to get serve() to return rather than raise.

  The concurrency argument controls the degree of parallelism: it is the ceiling on the number of green threads handling requests at any given moment. When the server hits the limit, it stops accepting new connections until an existing handler finishes.

 

 class eventlet.StopServe 

  An exception class used to exit serve() gracefully.
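
  A sketch of a graceful shutdown: raising StopServe from inside the handler makes serve() return normally (the b'quit' convention is illustrative):

import eventlet

def handle(sock, addr):
    data = sock.recv(1024)
    if data.strip() == b'quit':
        raise eventlet.StopServe()    # serve() returns instead of raising
    sock.sendall(data)

eventlet.serve(eventlet.listen(('127.0.0.1', 9999)), handle)
print('server shut down cleanly')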

 

V. Greening the world

  "Greening" means adapting the Python environment so that it supports the green-thread execution model. Python's native standard library does not support eventlet's model, in which green threads yield CPU control to one another, so the eventlet developers rewrote parts of the standard library (they call these "patches"). To use eventlet in an application, the modules it imports must be explicitly greened.

  Method 1: from eventlet.green import ...

  The first method is to import what you need from the eventlet.green package. The network-related modules there have the same names and interfaces as the standard-library modules, but have been patched to be green, so they support green threads. For example:

from eventlet.green import socket
from eventlet.green import threading
from eventlet.green import asyncore

  Method 2: import_patched()

  If eventlet.green lacks the module you need, use the import_patched() function. It greens the module named by its argument, which is the name of the module to import:

 eventlet.patcher.import_patched(module_name, *additional_modules, **kw_additional_modules) 

  Imports a module in a greened fashion, so that any network-related libraries the module uses are automatically replaced with their greened versions. For example, if the imported module uses the socket library, then after import_patched() the module will use the greened socket module rather than the native Python one.

  One weakness of this method is that it cannot correctly handle late (deferred) imports.

  An advantage of this method is that the *additional_modules and **kw_additional_modules arguments let you specify exactly which modules should be greened, for example:

from eventlet.green import socket
from eventlet.green import SocketServer
BaseHTTPServer = eventlet.import_patched('BaseHTTPServer',
                        ('socket', socket),
                        ('SocketServer', SocketServer))
#BaseHTTPServer = eventlet.import_patched('BaseHTTPServer',
#                        socket=socket, SocketServer=SocketServer)

  Here only the socket and SocketServer modules referenced by BaseHTTPServer are greened; the commented-out code does exactly the same thing as the three lines above it.

  Method 3: monkey patching

  Monkey patching in eventlet modifies existing code at runtime, dynamically replacing modules of the standard library:

 eventlet.patcher.monkey_patch(os=None, select=None, socket=None, thread=None, time=None, psycopg=None) 

  If called with no arguments, it patches all of the libraries named in the default parameters:

import eventlet
eventlet.monkey_patch()

  The keyword arguments indicate which modules to patch: if all is True, every module is patched regardless of the other arguments; otherwise each module-specific argument controls its own module. Most arguments patch the module of the same name (os, time, select), but when socket is True and the ssl module is present, both socket and ssl are patched; similarly, when thread is True, the thread, threading, and Queue modules are patched:

import eventlet
eventlet.monkey_patch(socket=True, select=True)

  Call monkey_patch() as early in the application as possible, for example as the first line of the main module. This avoids situations such as the following: a subclass has already been defined that inherits from a parent class in need of patching, but the module containing that parent class has not yet been monkey patched.

 

 eventlet.patcher.is_monkey_patched(module) 

  Returns whether the specified module has been monkey patched.
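
  A short sketch: monkey patch only the socket module, then check what has been patched:

import eventlet
eventlet.monkey_patch(socket=True)

print(eventlet.patcher.is_monkey_patched('socket'))   # True
print(eventlet.patcher.is_monkey_patched('os'))       # False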

 

VI. Eventlet examples

  The following examples come from the official documentation; each is briefly explained below.

1. Client-side web crawler

import eventlet
from eventlet.green import urllib2

urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
       "https://wiki.secondlife.com/w/images/secondlife.jpg",
       "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]

def fetch(url):
    return urllib2.urlopen(url).read()

pool = eventlet.GreenPool()
for body in pool.imap(fetch, urls):
    print("got body", len(body))

  Line 2 imports the greened urllib2; apart from using green sockets it is identical to the standard-library module.

  Line 11 creates a green thread pool, here with the default capacity of 1000. The pool bounds concurrency and therefore caps memory consumption;

  Line 12 iterates over the results of calling fetch concurrently; imap invokes fetch in parallel but yields the results in the same order as the URLs were submitted.

  The key to this example is that the client starts a number of green threads and collects the crawl results in parallel, while the green pool's cap on concurrency bounds memory use, so even a very large URL list will not consume excessive memory.

 

1.1 A slightly more complete client crawler

  This example is similar to Example 1.

#!/usr/bin/env python

import eventlet
from eventlet.green import urllib2


urls = [
    "https://www.google.com/intl/en_ALL/images/logo.gif",
    "http://python.org/images/python-logo.gif",
    "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif",
]


def fetch(url):
    print("opening", url)
    body = urllib2.urlopen(url).read()
    print("done with", url)
    return url, body


pool = eventlet.GreenPool(200)
for url, body in pool.imap(fetch, urls):
    print("got body from", url, "of length", len(body))

  Sample output:

('opening', 'https://www.google.com/intl/en_ALL/images/logo.gif')
('opening', 'http://python.org/images/python-logo.gif')
('opening', 'http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif')
('done with', 'http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif')
('done with', 'https://www.google.com/intl/en_ALL/images/logo.gif')
('got body from', 'https://www.google.com/intl/en_ALL/images/logo.gif', 'of length', 8558)
('done with', 'http://python.org/images/python-logo.gif')
('got body from', 'http://python.org/images/python-logo.gif', 'of length', 2549)
('got body from', 'http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif', 'of length', 1874)

  The first three "opening" lines show that three green threads start concurrently, each one a container for a fetch call; note the order in which the three are started;

  The order of the "done with" lines printed by fetch further confirms that imap dispatches the fetch calls concurrently. Note that the green thread fetching y3.gif does not return its result to the main loop immediately after finishing; it waits for the two green threads ahead of it, because imap yields results in the same order the work was submitted. This also illustrates why we say green threads are, in essence, executed sequentially.

 

2. A simple server

import eventlet

def handle(client):
    while True:
        c = client.recv(1)
        if not c: break
        client.sendall(c)

server = eventlet.listen(('0.0.0.0', 6000))
pool = eventlet.GreenPool(10000)
while True:
    new_sock, address = server.accept()
    pool.spawn_n(handle, new_sock)

  The line server = eventlet.listen(('0.0.0.0', 6000)) creates a listening socket;

  The line pool = eventlet.GreenPool(10000) creates a green thread pool that can hold up to 10000 client connections;

  The line new_sock, address = server.accept() deserves attention: because the server socket created here is green, accept() does not block the process when multiple connections arrive; while waiting it yields to the hub, so connections are accepted and handled concurrently;

  The line pool.spawn_n(handle, new_sock) creates a green thread for each client; spawn_n does not care about the result of the callback handle, so the client socket is handed over entirely to handle.

2.1 A fuller echo server

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""\
This simple server listens on port 6000 and echoes back every line the
user types. Run this file to start the server.

Connect to it with:
  telnet localhost 6000

You can disconnect by terminating telnet (usually Ctrl-] and then 'quit').
"""
from __future__ import print_function

import eventlet


def handle(fd):
    print("client connected")
    while True:
        # pass through every non-eof line
        x = fd.readline()
        if not x:
            break
        fd.write(x)
        fd.flush()
        print("echoed", x, end=' ')
    print("client disconnected")

print("server socket listening on port 6000")
server = eventlet.listen(('0.0.0.0', 6000))
pool = eventlet.GreenPool()
while True:
    try:
        new_sock, address = server.accept()
        print("accepted", address)
        pool.spawn_n(handle, new_sock.makefile('rw'))
    except (SystemExit, KeyboardInterrupt):
        break

  

3. A feed scraper

  In this use case a server is simultaneously a client of another service, as in a proxy; this is where GreenPile comes into play.

  In the example below, the server receives POST requests from clients, each containing URLs of RSS feeds. The server fetches all the feeds from their feed servers concurrently and returns their titles to the client:

import eventlet
feedparser = eventlet.import_patched('feedparser')

pool = eventlet.GreenPool()

def fetch_title(url):
    d = feedparser.parse(url)
    return d.feed.get('title', '')

def app(environ, start_response):
    pile = eventlet.GreenPile(pool)
    for url in environ['wsgi.input'].readlines():
        pile.spawn(fetch_title, url)
    titles = '\n'.join(pile)
    start_response('200 OK', [('Content-type', 'text/plain')])
    return [titles]

  

  The benefit of using the green thread pool here is concurrency control: without it, a client could cause the server to open a great many connections to the feed servers, getting the server banned by them.

The complete example:

"""A simple web server that accepts POSTS containing a list of feed urls,
and returns the titles of those feeds.
"""
import eventlet
feedparser = eventlet.import_patched('feedparser')

# the pool provides a safety limit on our concurrency
pool = eventlet.GreenPool()


def fetch_title(url):
    d = feedparser.parse(url)
    return d.feed.get('title', '')


def app(environ, start_response):
    if environ['REQUEST_METHOD'] != 'POST':
        start_response('403 Forbidden', [])
        return []

    # the pile collects the result of a concurrent operation -- in this case,
    # the collection of feed titles
    pile = eventlet.GreenPile(pool)
    for line in environ['wsgi.input'].readlines():
        url = line.strip()
        if url:
            pile.spawn(fetch_title, url)
    # since the pile is an iterator over the results,
    # you can use it in all sorts of great Pythonic ways
    titles = '\n'.join(pile)
    start_response('200 OK', [('Content-type', 'text/plain')])
    return [titles]


if __name__ == '__main__':
    from eventlet import wsgi
    wsgi.server(eventlet.listen(('localhost', 9010)), app)

 

4. A WSGI server

"""This is a simple example of running a wsgi application with eventlet.
For a more fully-featured server which supports multiple processes,
multiple threads, and graceful code reloading, see:

http://pypi.python.org/pypi/Spawning/
"""

import eventlet
from eventlet import wsgi


def hello_world(env, start_response):
    if env['PATH_INFO'] != '/':
        start_response('404 Not Found', [('Content-Type', 'text/plain')])
        return ['Not Found\r\n']
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello, World!\r\n']

wsgi.server(eventlet.listen(('', 8090)), hello_world)

  

5. Socket connections

"""Spawn multiple workers and collect their results.

Demonstrates how to use the eventlet.green.socket module.
"""
from __future__ import print_function

import eventlet
from eventlet.green import socket


def geturl(url):
    c = socket.socket()
    ip = socket.gethostbyname(url)
    c.connect((ip, 80))
    print('%s connected' % url)
    c.sendall(b'GET /\r\n\r\n')  # bytes literal so this also works on Python 3
    return c.recv(1024)


urls = ['www.google.com', 'www.yandex.ru', 'www.python.org']
pile = eventlet.GreenPile()
for x in urls:
    pile.spawn(geturl, x)

# note that the pile acts as a collection of return values from the functions
# if any exceptions are raised by the function they'll get raised here
for url, result in zip(urls, pile):
    print('%s: %s' % (url, repr(result)[:50]))

  

6. A multi-user chat server

import eventlet
from eventlet.green import socket

PORT = 3001
participants = set()


def read_chat_forever(writer, reader):
    line = reader.readline()
    while line:
        print("Chat:", line.strip())
        for p in participants:
            try:
                if p is not writer:  # Don't echo
                    p.write(line)
                    p.flush()
            except socket.error as e:
                # ignore broken pipes, they just mean the participant
                # closed its connection already
                if e.errno != 32:  # 32 is EPIPE (broken pipe)
                    raise
        line = reader.readline()
    participants.remove(writer)
    print("Participant left chat.")

try:
    print("ChatServer starting up on port %s" % PORT)
    server = eventlet.listen(('0.0.0.0', PORT))
    while True:
        new_connection, address = server.accept()
        print("Participant joined chat.")
        new_writer = new_connection.makefile('w')
        participants.add(new_writer)
        eventlet.spawn_n(read_chat_forever,
                         new_writer,
                         new_connection.makefile('r'))
except (KeyboardInterrupt, SystemExit):
    print("ChatServer exiting.")

  

7. A port forwarder

""" This is an incredibly simple port forwarder from port 7000 to 22 on
localhost.  It calls a callback function when the socket is closed, to
demonstrate one way that you could start to do interesting things by
starting from a simple framework like this.
"""

import eventlet


def closed_callback():
    print("called back")


def forward(source, dest, cb=lambda: None):
    """Forwards bytes unidirectionally from source to dest"""
    while True:
        d = source.recv(32384)
        if not d:  # empty read means the peer closed the connection
            cb()
            break
        dest.sendall(d)

listener = eventlet.listen(('localhost', 7000))
while True:
    client, addr = listener.accept()
    server = eventlet.connect(('localhost', 22))
    # two unidirectional forwarders make a bidirectional one
    eventlet.spawn_n(forward, client, server, closed_callback)
    eventlet.spawn_n(forward, server, client)

  

8. A recursive web crawler

"""This is a recursive web crawler.  Don't go pointing this at random sites;
it doesn't respect robots.txt and it is pretty brutal about how quickly it
fetches pages.

The code for this is very short; this is perhaps a good indication
that this is making the most effective use of the primitves at hand.
The fetch function does all the work of making http requests,
searching for new urls, and dispatching new fetches.  The GreenPool
acts as sort of a job coordinator (and concurrency controller of
course).
"""
from __future__ import with_statement

from eventlet.green import urllib2
import eventlet
import re

# http://daringfireball.net/2009/11/liberal_regex_for_matching_urls
url_regex = re.compile(r'\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))')


def fetch(url, seen, pool):
    """Fetch a url, stick any found urls into the seen set, and
    dispatch any new ones to the pool."""
    print("fetching", url)
    data = ''
    with eventlet.Timeout(5, False):
        data = urllib2.urlopen(url).read()
    for url_match in url_regex.finditer(data):
        new_url = url_match.group(0)
        # only send requests to eventlet.net so as not to destroy the internet
        if new_url not in seen and 'eventlet.net' in new_url:
            seen.add(new_url)
            # while this seems stack-recursive, it's actually not:
            # spawned greenthreads start their own stacks
            pool.spawn_n(fetch, new_url, seen, pool)


def crawl(start_url):
    """Recursively crawl starting from *start_url*.  Returns a set of
    urls that were found."""
    pool = eventlet.GreenPool()
    seen = set()
    fetch(start_url, seen, pool)
    pool.waitall()
    return seen

seen = crawl("http://eventlet.net")
print("I saw these urls:")
print("\n".join(seen))

  

9. A producer/consumer web crawler

"""This is a recursive web crawler.  Don't go pointing this at random sites;
it doesn't respect robots.txt and it is pretty brutal about how quickly it
fetches pages.

This is a kind of "producer/consumer" example; the fetch function produces
jobs, and the GreenPool itself is the consumer, farming out work concurrently.
It's easier to write it this way rather than writing a standard consumer loop;
GreenPool handles any exceptions raised and arranges so that there's a set
number of "workers", so you don't have to write that tedious management code
yourself.
"""
from __future__ import with_statement

from eventlet.green import urllib2
import eventlet
import re

# http://daringfireball.net/2009/11/liberal_regex_for_matching_urls
url_regex = re.compile(r'\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))')


def fetch(url, outq):
    """Fetch a url and push any urls found into a queue."""
    print("fetching", url)
    data = ''
    with eventlet.Timeout(5, False):
        data = urllib2.urlopen(url).read()
    for url_match in url_regex.finditer(data):
        new_url = url_match.group(0)
        outq.put(new_url)


def producer(start_url):
    """Recursively crawl starting from *start_url*.  Returns a set of
    urls that were found."""
    pool = eventlet.GreenPool()
    seen = set()
    q = eventlet.Queue()
    q.put(start_url)
    # keep looping if there are new urls, or workers that may produce more urls
    while True:
        while not q.empty():
            url = q.get()
            # limit requests to eventlet.net so we don't crash all over the internet
            if url not in seen and 'eventlet.net' in url:
                seen.add(url)
                pool.spawn_n(fetch, url, q)
        pool.waitall()
        if q.empty():
            break

    return seen


seen = producer("http://eventlet.net")
print("I saw these urls:")
print("\n".join(seen))

  

10. A websocket server

import eventlet
from eventlet import wsgi
from eventlet import websocket
from eventlet.support import six

# demo app
import os
import random


@websocket.WebSocketWSGI
def handle(ws):
    """  This is the websocket handler function.  Note that we
    can dispatch based on path in here, too."""
    if ws.path == '/echo':
        while True:
            m = ws.wait()
            if m is None:
                break
            ws.send(m)

    elif ws.path == '/data':
        for i in six.moves.range(10000):
            ws.send("0 %s %s\n" % (i, random.random()))
            eventlet.sleep(0.1)


def dispatch(environ, start_response):
    """ This resolves to the web page or the websocket depending on
    the path."""
    if environ['PATH_INFO'] == '/data':
        return handle(environ, start_response)
    else:
        start_response('200 OK', [('content-type', 'text/html')])
        return [open(os.path.join(
                     os.path.dirname(__file__),
                     'websocket.html')).read()]

if __name__ == "__main__":
    # run an example app from the command line
    listener = eventlet.listen(('127.0.0.1', 7000))
    print("\nVisit http://localhost:7000/ in your websocket-capable browser.\n")
    wsgi.server(listener, dispatch)

  

11. Websocket multi-user chat

import os

import eventlet
from eventlet import wsgi
from eventlet import websocket

PORT = 7000

participants = set()


@websocket.WebSocketWSGI
def handle(ws):
    participants.add(ws)
    try:
        while True:
            m = ws.wait()
            if m is None:
                break
            for p in participants:
                p.send(m)
    finally:
        participants.remove(ws)


def dispatch(environ, start_response):
    """Resolves to the web page or the websocket depending on the path."""
    if environ['PATH_INFO'] == '/chat':
        return handle(environ, start_response)
    else:
        start_response('200 OK', [('content-type', 'text/html')])
        html_path = os.path.join(os.path.dirname(__file__), 'websocket_chat.html')
        return [open(html_path).read() % {'port': PORT}]

if __name__ == "__main__":
    # run an example app from the command line
    listener = eventlet.listen(('127.0.0.1', PORT))
    print("\nVisit http://localhost:7000/ in your websocket-capable browser.\n")
    wsgi.server(listener, dispatch)

  

