Why bother with a scheduled start?
Learning a technique without discussing where to apply it is just hooliganism, so to avoid being that hooligan I'll sacrifice a bit of dignity here $_$
Here is one scenario where a scheduled start is useful:
Suppose that in a single-node environment we use Kafka for data persistence, and that our users are active from 10 a.m. until midnight.
Running a high-volume persistence job during that window could hurt database performance and degrade the user experience,
so we would rather persist during the low-activity window instead, i.e. from midnight until 10 a.m. the next day.
Using KafkaListenerEndpointRegistry
One thing worth pointing out: a method annotated with @KafkaListener is not itself registered as a bean in the IoC container.
Instead, its listener container is registered in KafkaListenerEndpointRegistry, and KafkaListenerEndpointRegistry itself is registered as a bean in the Spring IoC container,
programmatically rather than via an annotation. You can see this in the class's source:

```java
public class KafkaListenerEndpointRegistry implements DisposableBean, SmartLifecycle,
        ApplicationContextAware, ApplicationListener<ContextRefreshedEvent> {

    protected final Log logger = LogFactory.getLog(this.getClass());
    private final Map<String, MessageListenerContainer> listenerContainers = new ConcurrentHashMap();
    private int phase = 2147483547;
    private ConfigurableApplicationContext applicationContext;
    private boolean contextRefreshed;
    ......
}
```
So how do we start a KafkaListener on a schedule?
1. Disable the KafkaListener's auto-startup (autoStartup).
2. Write two scheduled tasks, one for midnight and one for 10 a.m.
3. Start the KafkaListener in the midnight task and stop it in the 10 a.m. task.
Pay attention to which method actually starts the listener container. When the application boots, the container has not been started yet,
and resume means "resume", not "start". So we need to check whether the container is running: if it is, call resume(); otherwise call start().
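The start-or-resume decision above can be sketched in isolation. `Container` here is a hypothetical stand-in for Spring's MessageListenerContainer, reduced to just the three lifecycle calls the text relies on; it is not the real interface.

```java
// Minimal sketch of the start-vs-resume decision described above.
// 'Container' is a hypothetical stand-in for Spring's MessageListenerContainer.
interface Container {
    boolean isRunning();
    void start();   // first-time start (the container was never started)
    void resume();  // undo a previous pause()
}

public class ListenerControl {

    // Returns which lifecycle call was made, so the decision is easy to verify.
    static String activate(Container c) {
        if (!c.isRunning()) {
            c.start();   // never started (autoStartup = false), so start it
            return "start";
        }
        c.resume();      // running but paused, so resume consumption
        return "resume";
    }
}
```

Note that pause() does not stop the container, so isRunning() stays true on a paused container; that is why checking isRunning() is enough to tell the two cases apart.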

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
@EnableScheduling
public class TaskListener {

    private static final Logger log = LoggerFactory.getLogger(TaskListener.class);

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Autowired
    private ConsumerFactory consumerFactory;

    @Bean
    public ConcurrentKafkaListenerContainerFactory delayContainerFactory() {
        ConcurrentKafkaListenerContainerFactory container = new ConcurrentKafkaListenerContainerFactory();
        container.setConsumerFactory(consumerFactory);
        // Disable auto-startup so the container only runs when we start it
        container.setAutoStartup(false);
        return container;
    }

    @KafkaListener(id = "durable", topics = "topic.quick.durable", containerFactory = "delayContainerFactory")
    public void durableListener(String data) {
        // Persist the data here
        log.info("topic.quick.durable receive : " + data);
    }

    // Scheduled task: start listening at midnight every day
    @Scheduled(cron = "0 0 0 * * ?")
    public void startListener() {
        log.info("开启监听");
        // If the container has never been started, start it;
        // otherwise it was paused by shutDownListener(), so resume it
        if (!registry.getListenerContainer("durable").isRunning()) {
            registry.getListenerContainer("durable").start();
        } else {
            registry.getListenerContainer("durable").resume();
        }
    }

    // Scheduled task: stop listening at 10 a.m. every day
    @Scheduled(cron = "0 0 10 * * ?")
    public void shutDownListener() {
        log.info("关闭监听");
        registry.getListenerContainer("durable").pause();
    }
}
```
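For intuition about what the two cron expressions schedule: "0 0 0 * * ?" fires daily at midnight and "0 0 10 * * ?" daily at 10 a.m. Here is a small stdlib-only sketch (no Spring involved) that computes the delay until the next such daily trigger; `delayUntil` is a hypothetical helper written for illustration, not part of any framework.

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;

public class NextRun {

    // Compute the delay from 'now' until the next occurrence of 'target' time of day,
    // which is what a daily cron expression like "0 0 0 * * ?" effectively schedules.
    static Duration delayUntil(LocalDateTime now, LocalTime target) {
        LocalDateTime next = now.toLocalDate().atTime(target);
        if (!next.isAfter(now)) {
            next = next.plusDays(1); // already passed today, so fire tomorrow
        }
        return Duration.between(now, next);
    }

    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.of(2018, 9, 12, 16, 0);
        // From 16:00, midnight is 8 hours away and 10 a.m. is 18 hours away
        System.out.println(delayUntil(now, LocalTime.MIDNIGHT).toHours()); // 8
        System.out.println(delayUntil(now, LocalTime.of(10, 0)).toHours()); // 18
    }
}
```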
I wasn't going to write a test, but I worry someone would mail me razor blades if I didn't.
Change the @Scheduled cron expressions to times a few minutes from now, write some data into the topic, start the Spring Boot application, and quietly wait for the moment to arrive.

```java
// Runs at 16:24
@Scheduled(cron = "0 24 16 * * ?")
@Test
public void testTask() {
    for (int i = 0; i < 10; i++) {
        kafkaTemplate.send("topic.quick.durable", "this is durable message");
    }
}
```
You can see that the listener container started at 16:24 and successfully pulled the messages from the topic,
and that at 16:28 the container was paused. Run the test method again at that point and check whether the listener still receives data; of course, it doesn't.

```
2018-09-12 16:24:00.003  INFO 2872 --- [pool-1-thread-1] com.viu.kafka.listen.TaskListener        : 开启监听
2018-09-12 16:24:00.004  INFO 2872 --- [pool-1-thread-1] o.a.k.clients.consumer.ConsumerConfig    : ConsumerConfig values:
	auto.commit.interval.ms = 1000
	auto.offset.reset = latest
	bootstrap.servers = [localhost:9092]
	enable.auto.commit = true
	group.id = durable
	key.deserializer = class org.apache.kafka.common.serialization.IntegerDeserializer
	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	......
2018-09-12 16:24:00.007  INFO 2872 --- [pool-1-thread-1] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 1.0.2
2018-09-12 16:24:00.007  INFO 2872 --- [pool-1-thread-1] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : 2a121f7b1d402825
2018-09-12 16:24:00.007  INFO 2872 --- [pool-1-thread-1] o.s.s.c.ThreadPoolTaskScheduler          : Initializing ExecutorService
2018-09-12 16:24:00.012  INFO 2872 --- [ durable-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-9, groupId=durable] Discovered group coordinator admin-PC:9092 (id: 2147483647 rack: null)
2018-09-12 16:24:00.013  INFO 2872 --- [ durable-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-9, groupId=durable] Revoking previously assigned partitions []
2018-09-12 16:24:00.014  INFO 2872 --- [ durable-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions revoked: []
2018-09-12 16:24:00.014  INFO 2872 --- [ durable-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-9, groupId=durable] (Re-)joining group
2018-09-12 16:24:00.021  INFO 2872 --- [ durable-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-9, groupId=durable] Successfully joined group with generation 6
2018-09-12 16:24:00.021  INFO 2872 --- [ durable-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-9, groupId=durable] Setting newly assigned partitions [topic.quick.durable-0]
2018-09-12 16:24:00.024  INFO 2872 --- [ durable-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned: [topic.quick.durable-0]
2018-09-12 16:24:00.042  INFO 2872 --- [ durable-0-C-1] com.viu.kafka.listen.TaskListener        : topic.quick.durable receive : this is durable message
	...... (the same "receive" line repeats for all 10 messages)
2018-09-12 16:28:00.023  INFO 2872 --- [pool-1-thread-1] com.viu.kafka.listen.TaskListener        : 关闭监听
```