Celery introduction and basic usage
Using Celery in a project
Running multiple workers
Celery periodic tasks
Integrating with Django
Configuring Celery periodic tasks through Django
1. Celery introduction and basic usage
Celery is a distributed asynchronous message task queue written in Python. It makes asynchronous task processing easy; if your business scenario calls for asynchronous tasks, Celery is worth considering. A few practical examples:
- You want to run a batch command on 100 machines, which may take a long time, but you don't want your program to block waiting for the result. Instead it returns a task ID immediately; some time later you use that ID to fetch the execution result, and while the task is running you are free to do other things.
- You want a scheduled job, e.g. check all your customers' records every day and send a congratulatory SMS to every customer whose birthday is today.
To execute tasks, Celery needs a message broker to send and receive task messages and to store task results; RabbitMQ or Redis is typically used.
1.1 Celery has the following advantages:
- Simple: once you are familiar with Celery's workflow, configuration and use are fairly straightforward.
- Highly available: if a task fails or the connection drops during execution, Celery automatically retries the task.
- Fast: a single Celery process can handle millions of tasks per minute.
- Flexible: almost every Celery component can be extended or customized.
Celery basic workflow diagram
1.2 Installing and using Celery
Celery's default broker is RabbitMQ; it takes only one line of configuration:
broker_url = 'amqp://guest:guest@localhost:5672//'
You can also use Redis as the broker.
Install the redis extras:
$ pip3 install -U "celery[redis]"
Configuration:
Configuration is easy, just configure the location of your Redis database:
app.conf.broker_url = 'redis://localhost:6379/0'
Where the URL is in the format of:
redis://:password@hostname:port/db_number
all fields after the scheme are optional, and will default to localhost
on port 6379, using database 0.
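The defaulting rule quoted above can be illustrated with a small stdlib sketch (this is not Celery's own URL-parsing code, just a demonstration of the documented defaults):

```python
from urllib.parse import urlsplit

def redis_parts(url):
    """Split a redis:// broker URL into its parts, applying the documented
    defaults: localhost, port 6379, database 0."""
    parts = urlsplit(url)
    return {
        'host': parts.hostname or 'localhost',
        'port': parts.port or 6379,
        'db': int(parts.path.lstrip('/') or 0),
        'password': parts.password,
    }

print(redis_parts('redis://:secret@example.com:6380/2'))
print(redis_parts('redis://'))  # everything after the scheme defaults
```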
If you want to fetch each task's execution result, you also need to configure where results are stored.
If you also want to store the state and return values of tasks in Redis, you should configure these settings:
app.conf.result_backend = 'redis://localhost:6379/0'
1.3 Getting started with Celery
Install the celery module:
$ pip install celery
Create a Celery application to define your task list.
Create a task file, call it tasks.py:
from celery import Celery

app = Celery(
    'tasks',
    broker='redis://localhost',
    backend='redis://localhost',
)

@app.task
def add(x, y):
    print('running add', x, y)
    return x + y

@app.task
def test(x, y):
    print('running test', x, y)
    return (x, y)
Start a celery worker to listen for and execute tasks:
dandy@ubuntu01:~$ celery -A tasks worker -l debug
Calling tasks
Open another terminal, start an interactive Python session, and call the task:
dandy@ubuntu01:~$ python3
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tasks
>>> tasks.add.delay(1, 4)
<AsyncResult: 6d533f00-9a33-4683-b00d-c06bb85a5a3f>
>>> t = tasks.add.delay(1, 4)
>>> t.get()  # fetch the result synchronously
5
>>>
Note that tasks.py is in the VM's home (~) directory, which is also where python3 was started.
Your worker terminal will show that it received a task. To inspect the task's result, assign the return value of the call to a variable:
>>> type(t)
<class 'celery.result.AsyncResult'>
The ready() method reports whether the task has finished:
(You can add a time.sleep() inside tasks.py to stretch out the execution time for testing.)
>>> result.ready()  # check whether the task has completed
False
You can specify how long to wait for the result to complete, but this is rarely used because it turns the asynchronous call into a synchronous one:
>>> result.get(timeout=1)
8
If the task raised an exception, get() re-raises it, but you can override this behaviour with the propagate argument:
>>> result.get(propagate=False)  # if the task failed, return the error instead of raising it
If the task raised an exception, you can also access the original traceback (a detailed version of the error returned by result.get(propagate=False)):
>>> result.traceback  # print the detailed exception traceback
…
2. Using Celery in a project
Celery can be configured as an application.
The directory layout looks like this:
celery_pro/
    |---- celery.py
    |---- tasks.py
    |---- tasks2.py
celery.py:
from __future__ import absolute_import, unicode_literals
# This file is itself named celery.py; the absolute import above makes sure
# "from celery import Celery" imports the installed package, not this module.
from celery import Celery
# from .celery import Celery  # this form would import from the current directory

app = Celery('proj',
             broker='redis://localhost',
             backend='redis://localhost',
             include=['celery_pro.tasks', 'celery_pro.tasks2'])  # multiple task modules can be included

# Optional configuration, see the application user guide.
app.conf.update(
    result_expires=3600,
)

if __name__ == '__main__':
    app.start()
tasks.py
from __future__ import absolute_import, unicode_literals
from .celery import app

@app.task
def add(x, y):
    return x + y

@app.task
def mul(x, y):
    return x * y

@app.task
def xsum(numbers):
    return sum(numbers)
tasks2.py
from __future__ import absolute_import, unicode_literals
from .celery import app
import time, random

@app.task
def randnum(start, end):
    time.sleep(5)
    return random.randint(start, end)
Start the worker:
dandy@ubuntu01:~$ celery -A celery_pro worker -l info
Usage:
dandy@ubuntu01:~$ python3
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from celery_pro import tasks, tasks2
>>> t = tasks.add.delay(3, 4)
>>> tt = tasks2.randnum.delay(1, 1000)
[2018-07-09 22:20:29,454: INFO/ForkPoolWorker-1] Task celery_pro.tasks.add[2412ac1f-351f-4af5-80ed-2bef879aff1b] succeeded in 0.004707950998636079s: 7
[2018-07-09 22:21:26,361: INFO/MainProcess] Received task: celery_pro.tasks2.randnum[334f2d69-d3e3-4fbd-b6f6-a96463d90456]
[2018-07-09 22:21:31,368: INFO/ForkPoolWorker-1] Task celery_pro.tasks2.randnum[334f2d69-d3e3-4fbd-b6f6-a96463d90456] succeeded in 5.006172604000312s: 585
On distribution:
First open two terminal windows and run, in each of them:
dandy@ubuntu01:~$ celery -A celery_pro worker -l info
to bring up a celery worker.
Then, in yet another terminal, start python and call the task over and over:
tt = tasks2.randnum.delay(1, 1000)
tt = tasks2.randnum.delay(1, 1000)
tt = tasks2.randnum.delay(1, 1000)
tt = tasks2.randnum.delay(1, 1000)
tt = tasks2.randnum.delay(1, 1000)
tt = tasks2.randnum.delay(1, 1000)
tt = tasks2.randnum.delay(1, 1000)
You will see both worker terminals executing tasks.
Now close both terminals and run the command below; you will find that the celery workers stopped along with their terminals:
dandy@ubuntu01:~$ ps -ef | grep celery
dandy  12193  12030  0 22:28 pts/1  00:00:00 grep --color=auto celery
How do you start celery in the background so it survives the terminal closing?
dandy@ubuntu01:~$ celery multi start w1 -A celery_pro -l info
celery multi v4.2.0 (windowlicker)
> Starting nodes...
    > w1@ubuntu01: OK
dandy@ubuntu01:~$ celery multi start w2 -A celery_pro -l info
celery multi v4.2.0 (windowlicker)
> Starting nodes...
    > w2@ubuntu01: OK
dandy@ubuntu01:~$ celery multi start w3 -A celery_pro -l info
celery multi v4.2.0 (windowlicker)
> Starting nodes...
    > w3@ubuntu01: OK
dandy@ubuntu01:~$ ps -ef | grep celery
dandy  12859      1  0 10:13 ?      00:00:00 /usr/bin/python3 -m celery worker -A celery_pro -l info --logfile=w1%I.log --pidfile=w1.pid --hostname=w1@ubuntu01
dandy  12863  12859  0 10:13 ?      00:00:00 /usr/bin/python3 -m celery worker -A celery_pro -l info --logfile=w1%I.log --pidfile=w1.pid --hostname=w1@ubuntu01
dandy  12864  12859  0 10:13 ?      00:00:00 /usr/bin/python3 -m celery worker -A celery_pro -l info --logfile=w1%I.log --pidfile=w1.pid --hostname=w1@ubuntu01
dandy  12890      1  0 10:14 ?      00:00:00 /usr/bin/python3 -m celery worker -l info -A celery_pro --logfile=w2%I.log --pidfile=w2.pid --hostname=w2@ubuntu01
dandy  12894  12890  0 10:14 ?      00:00:00 /usr/bin/python3 -m celery worker -l info -A celery_pro --logfile=w2%I.log --pidfile=w2.pid --hostname=w2@ubuntu01
dandy  12895  12890  0 10:14 ?      00:00:00 /usr/bin/python3 -m celery worker -l info -A celery_pro --logfile=w2%I.log --pidfile=w2.pid --hostname=w2@ubuntu01
dandy  12909      1  0 10:14 ?      00:00:00 /usr/bin/python3 -m celery worker -A celery_pro -l info --logfile=w3%I.log --pidfile=w3.pid --hostname=w3@ubuntu01
dandy  12913  12909  0 10:14 ?      00:00:00 /usr/bin/python3 -m celery worker -A celery_pro -l info --logfile=w3%I.log --pidfile=w3.pid --hostname=w3@ubuntu01
dandy  12914  12909  0 10:14 ?      00:00:00 /usr/bin/python3 -m celery worker -A celery_pro -l info --logfile=w3%I.log --pidfile=w3.pid --hostname=w3@ubuntu01
dandy  13002  12964  0 10:17 pts/2  00:00:00 grep --color=auto celery
Restart celery:
dandy@ubuntu01:~$ celery multi restart w1 w2 -A celery_pro
celery multi v4.2.0 (windowlicker)
> Stopping nodes...
    > w2@ubuntu01: TERM -> 13111
    > w1@ubuntu01: TERM -> 13102
> Waiting for 2 nodes -> 13111, 13102......
    > w2@ubuntu01: OK
> Restarting node w2@ubuntu01: OK
> Waiting for 2 nodes -> None, None....
    > w1@ubuntu01: OK
> Restarting node w1@ubuntu01: OK
> Waiting for 1 node -> None...
Stop celery:
dandy@ubuntu01:~$ celery multi stop w1 w2 -A celery_pro
celery multi v4.2.0 (windowlicker)
> Stopping nodes...
    > w1@ubuntu01: TERM -> 13141
    > w2@ubuntu01: TERM -> 13130
The stop command is asynchronous, so it won't wait for the worker to shut down. You'll probably want to use the stopwait command instead; this ensures all currently executing tasks are completed before exiting:
$ celery multi stopwait w1 -A proj -l info
Inspect the celery logs:
dandy@ubuntu01:~$ ls celery_pro
Starting      w1.log    w2.log    w3-2.log      w3@ubuntu01:
dump.rdb      w1-1.log  w2-1.log  w2@ubuntu01:  w3.log
__pycache__   w1-2.log  w2-2.log  w3-1.log      w3.pid
dandy@ubuntu01:~$ tail -f w1.log  # follow the file, showing the last 10 lines by default
[2018-07-10 10:13:18,551: INFO/MainProcess] Connected to redis://localhost:6379//
[2018-07-10 10:13:18,558: INFO/MainProcess] mingle: searching for neighbors
[2018-07-10 10:13:19,571: INFO/MainProcess] mingle: all alone
[2018-07-10 10:13:19,578: INFO/MainProcess] w1@ubuntu01 ready.
[2018-07-10 10:13:19,767: INFO/MainProcess] Received task: celery_pro.tasks2.randnum[a13b0bb8-4d71-448f-9ca1-253094518376]
[2018-07-10 10:14:26,281: INFO/MainProcess] sync with w2@ubuntu01
[2018-07-10 10:14:46,337: INFO/MainProcess] sync with w3@ubuntu01
3. Celery periodic tasks
Celery supports periodic tasks: set the execution schedule and Celery runs the task for you automatically. The scheduling component is called celery beat.
from __future__ import absolute_import, unicode_literals
from .celery import app
from celery.schedules import crontab

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls test('hello') every 10 seconds.
    sender.add_periodic_task(10.0, test.s('hello'), name='add every 10')  # add_periodic_task registers a periodic task

    # Calls test('world') every 30 seconds
    sender.add_periodic_task(30.0, test.s('world'), expires=10)

    # Executes every Monday morning at 7:30 a.m.
    sender.add_periodic_task(
        crontab(hour=7, minute=30, day_of_week=1),
        test.s('Happy Mondays!'),
    )

@app.task
def test(arg):
    print(arg)
The code above adds periodic tasks by calling a function; you can also add them in configuration-file style. The following runs a task every 30 seconds:
app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': 30.0,
        'args': (16, 16)
    },
}
app.conf.timezone = 'UTC'
app = Celery('celery_pro',
             broker='redis://localhost',
             backend='redis://localhost',
             include=['celery_pro.tasks', 'celery_pro.tasks2', 'celery_pro.periodic_task'])

# Optional configuration, see the application user guide.
app.conf.update(
    result_expires=3600,
)

app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': 5.0,
        'args': (16, 16)
    },
}
app.conf.timezone = 'UTC'

if __name__ == '__main__':
    app.start()
With the tasks added, Celery needs a separate process to launch them on schedule. Note: this process dispatches tasks, it does not execute them. It keeps checking your task schedule, and each time a task is due it sends a task message for a celery worker to execute.
We defined an include list earlier; the new module has to be added there.
from __future__ import absolute_import, unicode_literals
from celery import Celery

app = Celery('celery_pro',
             broker='redis://localhost',
             backend='redis://localhost',
             include=['celery_pro.tasks', 'celery_pro.tasks2', 'celery_pro.periodic_task'])  # add the new module here

# Optional configuration, see the application user guide.
app.conf.update(
    result_expires=3600,
)

if __name__ == '__main__':
    app.start()
Start celery:
dandy@ubuntu01:~$ celery -A celery_pro worker -l debug  # note the path
Worker output:
[tasks]
  . celery.accumulate
  . celery.backend_cleanup
  . celery.chain
  . celery.chord
  . celery.chord_unlock
  . celery.chunks
  . celery.group
  . celery.map
  . celery.starmap
  . celery_pro.periodic_task.test
  . celery_pro.tasks.add
  . celery_pro.tasks.mul
  . celery_pro.tasks.xsum
  . celery_pro.tasks2.randnum

[2018-07-10 11:33:58,066: DEBUG/MainProcess] | Worker: Starting Hub
[2018-07-10 11:33:58,066: DEBUG/MainProcess] ^-- substep ok
[2018-07-10 11:33:58,066: DEBUG/MainProcess] | Worker: Starting Pool
[2018-07-10 11:33:58,100: DEBUG/MainProcess] ^-- substep ok
[2018-07-10 11:33:58,100: DEBUG/MainProcess] | Worker: Starting Consumer
[2018-07-10 11:33:58,101: DEBUG/MainProcess] | Consumer: Starting Connection
[2018-07-10 11:33:58,111: INFO/MainProcess] Connected to redis://localhost:6379//
[2018-07-10 11:33:58,112: DEBUG/MainProcess] ^-- substep ok
[2018-07-10 11:33:58,112: DEBUG/MainProcess] | Consumer: Starting Events
[2018-07-10 11:33:58,120: DEBUG/MainProcess] ^-- substep ok
[2018-07-10 11:33:58,120: DEBUG/MainProcess] | Consumer: Starting Mingle
[2018-07-10 11:33:58,120: INFO/MainProcess] mingle: searching for neighbors
[2018-07-10 11:33:59,138: INFO/MainProcess] mingle: all alone
[2018-07-10 11:33:59,139: DEBUG/MainProcess] ^-- substep ok
[2018-07-10 11:33:59,140: DEBUG/MainProcess] | Consumer: Starting Gossip
[2018-07-10 11:33:59,144: DEBUG/MainProcess] ^-- substep ok
[2018-07-10 11:33:59,145: DEBUG/MainProcess] | Consumer: Starting Tasks
[2018-07-10 11:33:59,148: DEBUG/MainProcess] ^-- substep ok
[2018-07-10 11:33:59,148: DEBUG/MainProcess] | Consumer: Starting Control
[2018-07-10 11:33:59,150: DEBUG/MainProcess] ^-- substep ok
[2018-07-10 11:33:59,151: DEBUG/MainProcess] | Consumer: Starting Heart
[2018-07-10 11:33:59,152: DEBUG/MainProcess] ^-- substep ok
[2018-07-10 11:33:59,153: DEBUG/MainProcess] | Consumer: Starting event loop
[2018-07-10 11:33:59,153: DEBUG/MainProcess] | Worker: Hub.register Pool...
[2018-07-10 11:33:59,154: INFO/MainProcess] celery@ubuntu01 ready.
[2018-07-10 11:33:59,155: DEBUG/MainProcess] basic.qos: prefetch_count->8
At this point Celery is started and ready to execute tasks, i.e. the worker is up; as the workflow diagram showed, the worker is the part responsible for the distributed processing.
Earlier we called the task functions ourselves; here we need beat to do the task scheduling.
Start the task scheduler, celery beat:
dandy@ubuntu01:~$ celery -A celery_pro.periodic_task beat -l debug
Watch the tasks execute:
# beat
celery beat v4.2.0 (windowlicker) is starting.
LocalTime -> 2018-07-10 11:46:48
Configuration ->
    . broker -> redis://localhost:6379//
    . loader -> celery.loaders.app.AppLoader
    . scheduler -> celery.beat.PersistentScheduler
    . db -> celerybeat-schedule
    . logfile -> [stderr]@%DEBUG
    . maxinterval -> 5.00 minutes (300s)
[2018-07-10 11:46:48,257: DEBUG/MainProcess] Setting default socket timeout to 30
[2018-07-10 11:46:48,258: INFO/MainProcess] beat: Starting...
[2018-07-10 11:46:48,274: DEBUG/MainProcess] Current schedule:
<ScheduleEntry: add every 10 celery_pro.periodic_task.test('hello') <freq: 10.00 seconds>
<ScheduleEntry: celery_pro.periodic_task.test('world') celery_pro.periodic_task.test('world') <freq: 30.00 seconds>
<ScheduleEntry: celery_pro.periodic_task.test('Happy Mondays!') celery_pro.periodic_task.test('Happy Mondays!') <crontab: 30 7 1 * * (m/h/d/dM/MY)>
[2018-07-10 11:46:48,274: DEBUG/MainProcess] beat: Ticking with max interval->5.00 minutes
[2018-07-10 11:46:48,276: DEBUG/MainProcess] beat: Waking up in 9.98 seconds.
[2018-07-10 11:46:58,268: DEBUG/MainProcess] beat: Synchronizing schedule...
[2018-07-10 11:46:58,279: INFO/MainProcess] Scheduler: Sending due task add every 10 (celery_pro.periodic_task.test)
[2018-07-10 11:46:58,289: DEBUG/MainProcess] celery_pro.periodic_task.test sent. id->6d482eb0-5407-4e23-9307-bef9ca773ff7
[2018-07-10 11:46:58,289: DEBUG/MainProcess] beat: Waking up in 9.97 seconds.
[2018-07-10 11:47:08,272: INFO/MainProcess] Scheduler: Sending due task add every 10 (celery_pro.periodic_task.test)
[2018-07-10 11:47:08,274: DEBUG/MainProcess] celery_pro.periodic_task.test sent. id->9ae91cb8-b931-4a5c-b0fb-2c635495ae1a
[2018-07-10 11:47:08,275: DEBUG/MainProcess] beat: Waking up in 9.98 seconds.
# worker
[2018-07-10 11:47:48,287: WARNING/ForkPoolWorker-2] hello
[2018-07-10 11:47:48,287: DEBUG/MainProcess] Task accepted: celery_pro.periodic_task.test[f7cae8d2-e9cb-44a8-a4fb-e0cfdb04ee6b] pid:14113
[2018-07-10 11:47:48,288: WARNING/ForkPoolWorker-1] world
[2018-07-10 11:47:48,292: INFO/ForkPoolWorker-2] Task celery_pro.periodic_task.test[f7cae8d2-e9cb-44a8-a4fb-e0cfdb04ee6b] succeeded in 0.004459112999029458s: None
[2018-07-10 11:47:48,294: DEBUG/MainProcess] Task accepted: celery_pro.periodic_task.test[d7f6aff8-bd3f-4054-b2f2-70e45798d01e] pid:14112
[2018-07-10 11:47:48,296: INFO/ForkPoolWorker-1] Task celery_pro.periodic_task.test[d7f6aff8-bd3f-4054-b2f2-70e45798d01e] succeeded in 0.007812611998815555s: None
[2018-07-10 11:47:58,285: INFO/MainProcess] Received task: celery_pro.periodic_task.test[dd3365c0-33d8-429f-a7a5-825b1a70de64]
[2018-07-10 11:47:58,285: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x7f0689418b70> (args:('celery_pro.periodic_task.test', 'dd3365c0-33d8-429f-a7a5-825b1a70de64', {'origin': 'gen14371@ubuntu01', 'lang': 'py', 'correlation_id': 'dd3365c0-33d8-429f-a7a5-825b1a70de64', 'group': None, 'kwargsrepr': '{}', 'expires': None, 'parent_id': None, 'id': 'dd3365c0-33d8-429f-a7a5-825b1a70de64', 'eta': None, 'shadow': None, 'delivery_info': {'exchange': '', 'priority': 0, 'redelivered': None, 'routing_key': 'celery'}, 'reply_to': '1ba9e463-48e0-38ed-bbd3-3bf5498de62d', 'argsrepr': "('hello',)", 'retries': 0, 'root_id': 'dd3365c0-33d8-429f-a7a5-825b1a70de64', 'task': 'celery_pro.periodic_task.test', 'timelimit': [None, None]}, b'[["hello"], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]', 'application/json', 'utf-8') kwargs:{})
[2018-07-10 11:47:58,287: WARNING/ForkPoolWorker-2] hello
[2018-07-10 11:47:58,287: INFO/ForkPoolWorker-2] Task celery_pro.periodic_task.test[dd3365c0-33d8-429f-a7a5-825b1a70de64] succeeded in 0.0007550369991804473s: None
[2018-07-10 11:47:58,288: DEBUG/MainProcess] Task accepted: celery_pro.periodic_task.test[dd3365c0-33d8-429f-a7a5-825b1a70de64] pid:14113
[2018-07-10 11:48:08,287: INFO/MainProcess] Received task: celery_pro.periodic_task.test[a84ecf0f-89cc-40d0-bacb-e9d359602220]
[2018-07-10 11:48:08,287: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x7f0689418b70> (args:('celery_pro.periodic_task.test', 'a84ecf0f-89cc-40d0-bacb-e9d359602220', {'origin': 'gen14371@ubuntu01', 'lang': 'py', 'correlation_id': 'a84ecf0f-89cc-40d0-bacb-e9d359602220', 'group': None, 'kwargsrepr': '{}', 'expires': None, 'parent_id': None, 'id': 'a84ecf0f-89cc-40d0-bacb-e9d359602220', 'eta': None, 'shadow': None, 'delivery_info': {'exchange': '', 'priority': 0, 'redelivered': None, 'routing_key': 'celery'}, 'reply_to': '1ba9e463-48e0-38ed-bbd3-3bf5498de62d', 'argsrepr': "('hello',)", 'retries': 0, 'root_id': 'a84ecf0f-89cc-40d0-bacb-e9d359602220', 'task': 'celery_pro.periodic_task.test', 'timelimit': [None, None]}, b'[["hello"], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]', 'application/json', 'utf-8') kwargs:{})
[2018-07-10 11:48:08,288: DEBUG/MainProcess] Task accepted: celery_pro.periodic_task.test[a84ecf0f-89cc-40d0-bacb-e9d359602220] pid:14113
Watch the worker output: every little while the periodic task fires.
Note: Beat needs to store the last run times of the tasks in a local database file (named celerybeat-schedule by default), so it needs write access to the current directory; alternatively you can specify a custom location for this file:
$ celery -A periodic_task beat -s /home/celery/var/run/celerybeat-schedule
More complex schedules
The periodic tasks above are simple (run something every N seconds). But what if you want an email every Monday, Wednesday, and Friday at 8 a.m.? That's easy too: use the crontab feature, which works just like the Linux crontab and lets you tailor execution times.
linux crontab http://www.cnblogs.com/peida/archive/2013/01/08/2850483.html
from celery.schedules import crontab

app.conf.beat_schedule = {
    # Executes every Monday morning at 7:30 a.m.
    'add-every-monday-morning': {
        'task': 'tasks.add',
        'schedule': crontab(hour=7, minute=30, day_of_week=1),
        'args': (16, 16),
    },
}
The entry above runs the tasks.add task every Monday at 7:30 a.m.
crontab scheduling reference:
Example | Meaning
crontab() | Execute every minute.
crontab(minute=0, hour=0) | Execute daily at midnight.
crontab(minute=0, hour='*/3') | Execute every three hours: midnight, 3am, 6am, 9am, noon, 3pm, 6pm, 9pm.
crontab(minute=0, hour='0,3,6,9,12,15,18,21') | Same as previous.
crontab(minute='*/15') | Execute every 15 minutes.
crontab(day_of_week='sunday') | Execute every minute (!) at Sundays.
crontab(minute='*', hour='*', day_of_week='sun') | Same as previous.
crontab(minute='*/10', hour='3,17,22', day_of_week='thu,fri') | Execute every ten minutes, but only between 3-4 am, 5-6 pm, and 10-11 pm on Thursdays or Fridays.
crontab(minute=0, hour='*/2,*/3') | Execute every even hour, and every hour divisible by three. This means: at every hour except: 1am, 5am, 7am, 11am, 1pm, 5pm, 7pm, 11pm.
crontab(minute=0, hour='*/5') | Execute hour divisible by 5. This means that it is triggered at 3pm, not 5pm (since 3pm equals the 24-hour clock value of "15", which is divisible by 5).
crontab(minute=0, hour='*/3,8-17') | Execute every hour divisible by 3, and every hour during office hours (8am-5pm).
crontab(0, 0, day_of_month='2') | Execute on the second day of every month.
crontab(0, 0, day_of_month='2-30/2') | Execute on every even numbered day.
crontab(0, 0, day_of_month='1-7,15-21') | Execute on the first and third weeks of the month.
crontab(0, 0, day_of_month='11', month_of_year='5') | Execute on the eleventh of May every year.
crontab(0, 0, month_of_year='*/3') | Execute on the first month of every quarter.
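The `*/2,*/3` row above can be sanity-checked with plain Python (no Celery needed): `*/2` expands to every even hour and `*/3` to every hour divisible by three, and their union excludes exactly the hours listed in the table.

```python
# '*/2' means hours 0, 2, 4, ..., 22; '*/3' means hours 0, 3, 6, ..., 21.
every_2 = set(range(0, 24, 2))
every_3 = set(range(0, 24, 3))

# crontab(hour='*/2,*/3') fires on the union of the two sets.
union = sorted(every_2 | every_3)

# The hours that are skipped:
excluded = sorted(set(range(24)) - set(union))
print(excluded)  # -> [1, 5, 7, 11, 13, 17, 19, 23]
```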
The options above cover the vast majority of periodic-task needs; Celery can even schedule tasks by the sun's movements (sunrise, sunset, and so on). For details see http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html#solar-schedules
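As a taste of those solar schedules, here is a minimal sketch. It assumes the same `app` and `celery_pro.periodic_task.test` task as the example project above; the event name and coordinates (Melbourne) are illustrative only.

```python
from celery.schedules import solar

app.conf.beat_schedule = {
    'greet-at-sunset': {
        'task': 'celery_pro.periodic_task.test',
        # solar(event, latitude, longitude): fire at sunset at these coordinates
        'schedule': solar('sunset', -37.81753, 144.96715),
        'args': ('good evening',),
    },
}
```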
Best practice: integrating with Django
Django can easily work with Celery to run asynchronous tasks; only a little configuration is needed.
First, in the package that shares the project's name (the directory containing settings.py), create a new celery.py file:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')  # point this at your own project's settings

app = Celery('proj')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')  # all Celery settings can live in Django settings, prefixed CELERY_

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()  # automatically picks up each app's tasks module

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
Then edit __init__.py in the same directory. This ensures the application is loaded when Django starts, so that the @shared_task decorator can use it:
from __future__ import absolute_import, unicode_literals
import pymysql

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app

__all__ = ['celery_app']

pymysql.install_as_MySQLdb()
Note that this example project layout is aimed at larger projects; for simple projects you can use a single contained module that defines both the app and the tasks, as in the First Steps with Celery tutorial.
Let's break down what happens in the first module. First we import absolute_import from __future__, so that our celery.py module will not clash with the installed celery library:
from __future__ import absolute_import
Then we set the default DJANGO_SETTINGS_MODULE environment variable for the celery command-line program:
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')
You don't need this line, but it saves you from always passing the settings module to the celery program. It must always come before creating the app instance, as we do next:
app = Celery('proj')
This is our application instance.
We also add the Django settings module as a configuration source for Celery. This means that you don’t have to use multiple configuration files, and instead configure Celery directly from the Django settings; but you can also separate them if wanted.
The uppercase name-space means that all Celery configuration options must be specified in uppercase instead of lowercase, and start with CELERY_; so for example the task_always_eager setting becomes CELERY_TASK_ALWAYS_EAGER, and the broker_url setting becomes CELERY_BROKER_URL.
You can pass the object directly here, but using a string is better since then the worker doesn’t have to serialize the object.
app.config_from_object('django.conf:settings', namespace='CELERY')
Next, a common practice for reusable apps is to define all tasks in a separate tasks.py module, and Celery does have a way to auto-discover these modules:
app.autodiscover_tasks()
With the line above Celery will automatically discover tasks from all of your installed apps, following the tasks.py convention:
- app1/
    - tasks.py
    - models.py
- app2/
    - tasks.py
    - models.py
Finally, the debug_task example is a task that dumps its own request information. This is using the new bind=True task option introduced in Celery 3.1 to easily refer to the current task instance.
Then write your tasks in each app's own tasks.py:
# Create your tasks here
from __future__ import absolute_import, unicode_literals
from celery import shared_task

@shared_task
def add(x, y):
    return x + y

@shared_task
def mul(x, y):
    return x * y

@shared_task
def xsum(numbers):
    return sum(numbers)
And create another in a different app:
dandy@ubuntu01:~/PerfectCRM$ vim xadmin/tasks.py

# Create your tasks here
from __future__ import absolute_import, unicode_literals
from celery import shared_task

@shared_task
def sayhi(name):
    return "hello %s" % name
In the settings file:
CELERY_BROKER_URL = 'redis://localhost'
CELERY_RESULT_BACKEND = 'redis://localhost'
views.py:
import random
from django.http import HttpResponse
from celery.result import AsyncResult
from crm import tasks

# Create your views here.
def celery_call(request):
    ran_num = random.randint(1, 1000)
    print(ran_num)
    t = tasks.add.delay(ran_num, 6)
    return HttpResponse(t.id)

def celery_result(request):
    task_id = request.GET.get('id')
    res = AsyncResult(id=task_id)
    if res.ready():
        return HttpResponse(res.get())
    else:
        return HttpResponse(res.ready())
Now start the Django site:
dandy@ubuntu01:~/PerfectCRM$ python3 manage.py runserver 0.0.0.0:9000
Performing system checks...

System check identified some issues:

WARNINGS:
crm.Customer.tags: (fields.W340) null has no effect on ManyToManyField.

System check identified 1 issue (0 silenced).
July 10, 2018 - 19:39:03
Django version 2.0.7, using settings 'PerfectCRM.settings'
Starting development server at http://0.0.0.0:9000/
Quit the server with CONTROL-C.
Start the worker:
dandy@ubuntu01:~/PerfectCRM$ celery -A PerfectCRM worker -l info

 -------------- celery@ubuntu01 v4.2.0 (windowlicker)
---- **** -----
--- * ***  * -- Linux-4.4.0-116-generic-x86_64-with-Ubuntu-16.04-xenial 2018-07-10 19:56:33
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         proj:0x7f0255d93f28
- ** ---------- .> transport:   redis://localhost:6379//
- ** ---------- .> results:     redis://localhost/
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery

[tasks]
  . PerfectCRM.celery.debug_task
  . crm.tasks.add
  . crm.tasks.mul
  . crm.tasks.xsum
  . xadmin.tasks.sayhi

[2018-07-10 19:56:33,206: INFO/MainProcess] Connected to redis://localhost:6379//
[2018-07-10 19:56:33,216: INFO/MainProcess] mingle: searching for neighbors
[2018-07-10 19:56:34,235: INFO/MainProcess] mingle: all alone
[2018-07-10 19:56:34,245: WARNING/MainProcess] /home/dandy/.local/lib/python3.5/site-packages/celery/fixups/django.py:200: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2018-07-10 19:56:34,246: INFO/MainProcess] celery@ubuntu01 ready.
[2018-07-10 19:56:34,416: INFO/MainProcess] Received task: crm.tasks.add[4eeb9530-7e5c-4bcb-a54d-f583ae38e171]
[2018-07-10 19:56:34,423: INFO/ForkPoolWorker-2] Task crm.tasks.add[4eeb9530-7e5c-4bcb-a54d-f583ae38e171] succeeded in 0.0037738879982498474s: 743
Visit the route defined beforehand:
It returns the task's id; now use it to fetch the result:
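The routes used here are not shown in the post; below is a minimal urls.py sketch, with the path strings assumed to match the two view names above (the `crm` app name matches the tasks used in this project):

```python
from django.urls import path

from crm import views  # app name assumed from this post's examples

urlpatterns = [
    path('celery_call/', views.celery_call),      # dispatches tasks.add, returns the task id
    path('celery_result/', views.celery_result),  # ?id=<task_id> fetches the task result
]
```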
Django integration summary:
1. celery.py ==> a tailored celery.py placed in the package named after the project; it points DJANGO_SETTINGS_MODULE at the project's settings, sets the CELERY_ prefix for Celery settings, and loads tasks from every registered app.
2. settings.py ==> configure where task messages and results go: Redis or RabbitMQ.
3. __init__.py ==> in the same package; ensures the application is loaded when Django starts, so the @shared_task decorator can use it.
4. tasks.py ==> add a tasks file to each app and write your tasks there.
5. urls.py ==> routes for dispatching tasks and fetching results.
6. views.py ==> import the project's tasks and call their methods.
Using scheduled tasks in Django
1. Install the dependency:
dandy@ubuntu01:~/PerfectCRM/crm$ pip3 install django-celery-beat
2、把安裝的依賴包注冊到settings的installed_app里:
INSTALLED_APPS = ( ..., 'django_celery_beat', )
3. Create the tables (makemigrations is not needed):
python3 manage.py migrate
4. Start the celery beat service using the django scheduler:
celery -A PerfectCRM beat -l info -S django
5. Configure it in the Django admin.
Log in to the admin and you will find three new tables.
Once configured:
Start your celery beat and worker; every 2 minutes beat will send a task message for a worker to execute the scp_task task.
Note: testing shows that celery beat must be restarted every time a task is added or modified, otherwise the new configuration will not be read by the celery beat process.
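django-celery-beat stores schedules in those database tables, so besides the admin, entries can also be created from code. A minimal sketch, assuming the crm.tasks.add task from this post (run it inside the project, e.g. in `python3 manage.py shell`):

```python
from django_celery_beat.models import IntervalSchedule, PeriodicTask

# Reuse an existing "every 2 minutes" interval row if one exists, else create it.
schedule, _ = IntervalSchedule.objects.get_or_create(
    every=2,
    period=IntervalSchedule.MINUTES,
)

# The task field is the registered task name, as shown in the worker's [tasks] list.
PeriodicTask.objects.create(
    interval=schedule,
    name='add every 2 minutes',  # display name; must be unique
    task='crm.tasks.add',
    args='[16, 16]',             # JSON-encoded positional arguments
)
```

The same restart caveat applies: after creating or changing entries this way, restart celery beat so it picks up the new schedule.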