mirror of
https://github.com/sogou/workflow.git
synced 2026-02-08 01:33:17 +08:00
21
.travis.yml
Normal file
@@ -0,0 +1,21 @@
language: cpp
dist: trusty
os: linux
compiler:
- gcc

jobs:
  include:
    - env: COMPILER=g++-8 BUILD=Release STANDARD=11
      compiler: gcc
      addons:
        apt:
          update: true
          sources:
            - ubuntu-toolchain-r-test
          packages:
            - g++-8
            - cmake

script:
- make
@@ -5,7 +5,7 @@ set(CMAKE_SKIP_RPATH TRUE)

project(
		workflow
-		VERSION 0.9.3
+		VERSION 0.9.4
		LANGUAGES C CXX
)

19
README.md
@@ -4,6 +4,7 @@
[](https://github.com/sogou/workflow/blob/master/LICENSE)
[](https://en.cppreference.com/)
[](#%E9%A1%B9%E7%9B%AE%E7%9A%84%E4%B8%80%E4%BA%9B%E8%AE%BE%E8%AE%A1%E7%89%B9%E7%82%B9)
[](https://travis-ci.org/sogou/workflow)

Sogou's C++ server engine, which powers almost all of Sogou's back-end C++ online services, including all search services, Cloud Input Method and online advertising, handling more than ten billion requests every day. It is a light and elegant enterprise-grade engine that can satisfy most C++ back-end development needs.

#### You can use it to:

@@ -31,10 +32,10 @@ int main()
* Implement client/server on user-defined protocols and build your own RPC system.
  * [srpc](https://github.com/sogou/srpc) is based on it and open-sourced as an independent project, supporting the ``srpc``, ``brpc`` and ``thrift`` protocols.
* Build asynchronous workflows, with support for common series/parallel structures as well as more complex DAG structures.
-* Use it as a parallel programming tool. Besides networking tasks, we also include the scheduling of computing tasks. All types of tasks can be put into the same flow.
+* Use it as a parallel computing tool. Besides networking tasks, we also include the scheduling of computing tasks. All types of tasks can be put into the same flow.
* Use it as an asynchronous file IO tool on ``Linux``, with performance exceeding any standard call. Disk IO is also a kind of task.
* Implement any high-performance, high-concurrency back-end service with very complex relationships between computing and communication.
* Build service mesh systems.
* Build microservice systems.
  * The project has built-in service governance and load balancing features.

#### Compiling and running environment

@@ -120,15 +121,9 @@ int main()
* Any task is automatically reclaimed after its callback. A task that is created but not started must be released with the dismiss method.
* Data in a task, such as the resp of a network request, is reclaimed together with the task. Use ``std::move()`` in the callback to move out the data you need.
* SeriesWork and ParallelWork are framework objects that are also reclaimed after their callbacks.
  * If a series is a branch of a parallel, it is reclaimed after the callback of the parallel it belongs to.
* The project does not use ``std::shared_ptr`` for memory management.

#### More design documents

Continuously updated...


#### Authors

* **Xie Han** - *[xiehan@sogou-inc.com](mailto:xiehan@sogou-inc.com)*
* **Wu Jiaxu** - *[void00@foxmail.com](mailto:void00@foxmail.com)*
* **Wang Zhulei** - *[wangzhulei@sogou-inc.com](mailto:wangzhulei@sogou-inc.com)* - Kafka Protocol Implementation
* **Li Yingxin** - *[liyingxin@sogou-inc.com](mailto:liyingxin@sogou-inc.com)*

# Questions while using it?
Check the [FAQ](https://github.com/sogou/workflow/issues/170) and the [issues](https://github.com/sogou/workflow/issues) list first to see whether your question has already been answered.
You are very welcome to send the problems you encounter to [issues](https://github.com/sogou/workflow/issues); we will answer as soon as possible. More issues also help new users.

11
README_en.md
@@ -35,10 +35,10 @@ int main()
* To implement **client/server on user-defined protocol** and build your own **RPC system**.
  * [srpc](https://github.com/sogou/srpc) is based on it and it is an independent open source project, which supports srpc, brpc and thrift protocols.
* To build **asynchronous workflow**; support common **series** and **parallel** structures, and also support any **DAG** structures.
-* As a **parallel programming tool**. In addition to **networking tasks**, Sogou C++ Workflow also includes **the scheduling of computing tasks**. All types of tasks can be put into **the same** flow.
+* As a **parallel computing tool**. In addition to **networking tasks**, Sogou C++ Workflow also includes **the scheduling of computing tasks**. All types of tasks can be put into **the same** flow.
* As an **asynchronous file IO tool** in `Linux` systems, with performance exceeding any system call. Disk file IO is also a task.
* To realize any **high-performance** and **high-concurrency** back-end service with a very complex relationship between computing and networking.
* To build a **service mesh** system.
* To build a **micro service** system.
  * This project has built-in **service governance** and **load balancing** features.

#### Compiling and running environment

@@ -85,6 +85,7 @@ int main()
* [About connection context](docs/en/about-connection-context.md)
* Built-in protocols
  * [Asynchronous MySQL client: mysql\_cli](docs/en/tutorial-12-mysql_cli.md)
  * [Asynchronous Kafka client: kafka\_cli](docs/tutorial-13-kafka_cli.md)

#### System design features

@@ -128,14 +129,10 @@ Memory reclamation mechanism
* Every task will be automatically reclaimed after the callback. If a task is created but a user does not want to run it, the user needs to release it through the dismiss method.
* Any data in the task, such as the response of the network request, will also be recycled with the task. At this time, the user can use `std::move()` to move the required data.
* SeriesWork and ParallelWork are two kinds of framework objects, which are also recycled after their callback.
  * When a series is a branch of a parallel, it will be recycled after the callback of the parallel that it belongs to.
* This project doesn't use `std::shared_ptr` to manage memory.

#### More design documents

To be continued...

## Authors

* **Xie Han** - *[xiehan@sogou-inc.com](mailto:xiehan@sogou-inc.com)*
* **Wu Jiaxu** - *[wujiaxu@sogou-inc.com](mailto:wujiaxu@sogou-inc.com)*
* **Li Yingxin** - *[liyingxin@sogou-inc.com](mailto:liyingxin@sogou-inc.com)*

@@ -12,57 +12,8 @@
* All of our examples follow this assumption: the callback wakes up the main function. This is safe; there is no need to worry about main returning before the callback has finished.
* ParallelWork is a kind of task and also needs to run to its callback.
* This rule may be violated under certain circumstances, and the program behavior is strictly defined. But users who do not understand the core principles should follow it, otherwise the program cannot exit normally.
* All servers must be stopped completely, otherwise the behavior is undefined. Since every user calls the stop operation, an ordinary server program will not have any exit problems.

As long as the above three conditions are met, a program can exit normally without any memory leak. Although the definition is strict, note one caveat: the conditions for the completion of a server stop.
* A server's stop() call waits for the callbacks of all server tasks to finish (this callback is empty by default), and no new server tasks will be processed.
* However, if the user starts a new task inside process that is not in the series of the server task, the framework cannot prevent this, and server stop cannot wait for that task to finish.
* Likewise, if the user adds a new task (for example, for logging) to the series of the server task in its callback, that new task is not controlled by the server either.
* In both cases, if the main function exits immediately after server.stop(), the second rule above may be violated, because some tasks have not yet run to their callbacks.

For this situation, the user must make sure that the started tasks have reached their callbacks. One way is to count the running tasks and wait for the count to reach 0 before main returns.
For example, in the following code, the callback of the server task adds a log-writing file task to the current series (assuming that writing the file is very slow and needs one asynchronous IO):
~~~cpp
std::mutex mutex;
std::condition_variable cond;
int log_task_cnt = 0;

void log_callback(WFFileIOTask *log_task)
{
	mutex.lock();
	if (--log_task_cnt == 0)
		cond.notify_one();
	mutex.unlock();
}

void reply_callback(WFHttpTask *server_task)
{
	WFFileIOTask *log_task = WFTaskFactory::create_pwrite_task(..., log_callback);

	mutex.lock();
	log_task_cnt++;
	mutex.unlock();
	*series_of(server_task) << log_task;
}

int main(void)
{
	WFHttpServer server;

	server.start();
	pause();
	...

	server.stop();

	std::unique_lock<std::mutex> lock(mutex);
	while (log_task_cnt != 0)
		cond.wait(lock);
	lock.unlock();
	return 0;
}
~~~
Although this method works, it does increase the complexity and the chance of errors in the program, and should be avoided when possible; for example, write the log directly in the reply callback.
* All servers must be stopped completely, otherwise the behavior is undefined. Since every user calls the stop operation, an ordinary server program will not have any exit problems.
* A server's stop waits for the series of all server tasks to end. But if the user starts a new task directly in process, the user must take care of the end of that task.

# About the memory leak of OpenSSL 1.1 on exit

@@ -14,59 +14,7 @@ Generally, as long as you write the program normally and follow the methods in our examples,
* ParallelWork is a kind of task, which also needs to run to its callback.
* This rule can be violated under certain circumstances where the procedural behavior is strictly defined. However, if you don't understand the core principles, you should abide by this principle, otherwise the program can't exit normally.
* All servers must be stopped, otherwise the behavior is undefined. Because all users know how to call the stop operation, generally a server program will not have any exit problems.

As long as the above three conditions are met, the program can exit normally without any memory leakage. Despite the strict definition, please note the conditions for the completion of a server stop.

* The call of **stop()** on a server will wait for the callbacks of all server tasks to finish (the callback is empty by default) and no new server tasks are processed.
* However, the framework can't stop you from starting a new task in the process that is not added to the series of the server task. The server **stop()** can't wait for the completion of this new task.
* Similarly, if the user adds a new task (such as logging) to the series of the server task in its callback, the new task is not controlled by the server.
* In both cases, if the main function exits immediately after **server.stop()**, it may violate the second rule above, because there may still be tasks that have not run to their callbacks.

In the above situation, you need to ensure that the started task has run to its callback. You can use a counter to record the number of running tasks, and wait for the count value to reach 0 before the main function returns.
In the following example, in the callback of a server task, a log file writing task is added to the current series (assuming that file writing is very slow and asynchronous IO needs to be started once).

~~~cpp
std::mutex mutex;
std::condition_variable cond;
int log_task_cnt = 0;

void log_callback(WFFileIOTask *log_task)
{
	mutex.lock();
	if (--log_task_cnt == 0)
		cond.notify_one();
	mutex.unlock();
}

void reply_callback(WFHttpTask *server_task)
{
	WFFileIOTask *log_task = WFTaskFactory::create_pwrite_task(..., log_callback);

	mutex.lock();
	log_task_cnt++;
	mutex.unlock();
	*series_of(server_task) << log_task;
}

int main(void)
{
	WFHttpServer server;

	server.start();
	pause();
	...

	server.stop();

	std::unique_lock<std::mutex> lock(mutex);
	while (log_task_cnt != 0)
		cond.wait(lock);
	lock.unlock();
	return 0;
}
~~~

Although the above method is feasible, it does increase the complexity and the error probability of the program, which should be avoided as much as possible. For example, you can write the log directly in the reply callback.
* Server's stop() method will block until all server tasks' series end. But if you start a task directly in the process function, you have to take care of the end of this task.

# About memory leakage of OpenSSL 1.1 in exiting

@@ -5,19 +5,18 @@

# About compilation options

-In workflow you can use a third-party library such as librdkafka, or the built-in kafka client, so its support for the kafka protocol is standalone.
+In workflow, support for the kafka protocol is standalone. You can compile it as an independent library with the command make KAFKA=y. The system needs the third-party libraries [zlib](https://github.com/madler/zlib.git), [snappy](https://github.com/google/snappy.git), [lz4(>=1.7.5)](https://github.com/lz4/lz4.git) and [zstd](https://github.com/facebook/zstd.git) pre-installed, to support the compression algorithms in the kafka protocol.

-You can compile it as an independent library with the command make KAFKA=y. The system needs the third-party libraries [zlib](https://github.com/madler/zlib.git), [snappy](https://github.com/google/snappy.git), [lz4(>=1.7.5)](https://github.com/lz4/lz4.git) and [zstd](https://github.com/facebook/zstd.git) pre-installed.

# About kafka_cli

-This is a kafka client that, depending on the input parameters, performs kafka message production (produce), message consumption (fetch), metadata retrieval (meta), etc.
+This is a kafka client that can perform kafka message production (produce) and message consumption (fetch).

-To compile it, run the command make KAFKA=y in the tutorial directory.
+To compile it, run the command make KAFKA=y in the tutorial directory, or make KAFKA=y tutorial in the project root.

-The program reads a kafka broker address and the type of this task (produce/fetch/meta) from the command line:
+The program reads a kafka broker address and the type of this task (produce/fetch) from the command line:

-./kafka_cli \<broker_url\> [p/c/m]
+./kafka_cli \<broker_url\> [p/c]

The program exits automatically after finishing the task, and all resources are fully reclaimed.

@@ -35,9 +34,35 @@ kafka://kafka.host:9090/

kafka://10.160.23.23:9000,10.123.23.23,kafka://kafka.sogou

# Implementation principles and features

Apart from compression, the internal implementation of the kafka client does not depend on any third-party library. Thanks to workflow's performance, with a reasonable configuration and environment it can handle tens of thousands of Kafka requests per second.

Internally, the kafka client splits one request into a parallel task according to the brokers involved; each broker address corresponds to one subtask of the parallel task.

This maximizes efficiency, and workflow's internal connection reuse keeps the total number of connections within a reasonable range.

If one broker address has multiple topic partitions, to improve throughput you should create multiple clients, and create and start tasks independently per topic partition.


# Creating and starting a Kafka task

Since Kafka needs to keep global information such as brokers, meta and groups, it is recommended to create kafka tasks with the secondary factory WFKafkaClient.
First create a WFKafkaClient object, then call an init function to initialize it:
~~~cpp
int init(const std::string& broker_url);

int init(const std::string& broker_url, const std::string& group);
~~~
Here broker_url is the address of the kafka broker cluster; its format is the broker_url format above.

group is the group_name of a consumer group, used in consumer-group-based fetch tasks. For produce tasks, or fetch tasks without a consumer group, this interface is not needed.

When using a consumer group, you can set the heartbeat interval in milliseconds, used to keep the heartbeat alive:
~~~cpp
void set_heartbeat_interval(size_t interval_ms);
~~~

Then create kafka tasks through the WFKafkaClient object:
@@ -45,10 +70,41 @@ WFKafkaTask *create_kafka_task(const std::string& query, int retry_max, kafka_callback_t cb);
~~~cpp
using kafka_callback_t = std::function<void (WFKafkaTask *)>;

WFKafkaTask *create_kafka_task(const std::string& query, int retry_max, kafka_callback_t cb);

WFKafkaTask *create_kafka_task(int retry_max, kafka_callback_t cb);
~~~
The query contains properties of this task such as its type and topic; retry_max is the maximum number of retries; cb is a user-defined callback function, called when the task finishes.

-You have two ways to set the details of the task:
+You can then modify the task's default configuration to fit your needs. The detailed interfaces can be found in [KafkaDataTypes.h](../src/protocol/KafkaDataTypes.h):
~~~cpp
KafkaConfig config;
config.set_client_id("workflow");
task->set_config(std::move(config));
~~~
The supported configuration options are described below:

Option | Type | Default | Meaning
------ | ---- | ------- | -------
produce_timeout | int | 100ms | produce timeout
produce_msg_max_bytes | int | 1000000 bytes | maximum size of a single message
produce_msgset_cnt | int | 10000 | maximum number of messages in one message set per communication
produce_msgset_max_bytes | int | 1000000 bytes | maximum size of one message set per communication
fetch_timeout | int | 100ms | fetch timeout
fetch_min_bytes | int | 1 byte | minimum message size of one fetch
fetch_max_bytes | int | 50M bytes | maximum message size of one fetch
fetch_msg_max_bytes | int | 1M bytes | maximum size of a single message in one fetch
offset_timestamp | long long int | -2 | initial offset in consumer group mode when no history offset is found; -2 means the earliest, -1 means the latest
session_timeout | int | 10s | timeout for joining a consumer group during initialization
rebalance_timeout | int | 10s | timeout of the information synchronization phase when joining a consumer group
produce_acks | int | -1 | number of broker replicas that must confirm the message before a produce task returns; -1 means all replica brokers
allow_auto_topic_creation | bool | true | whether to create the topic automatically when it does not exist during produce
broker_version | char * | NULL | specified broker version; must be set manually for brokers < 0.10
compress_type | int | NoCompress | compression type of produced messages
client_id | char * | NULL | id of the client
check_crcs | bool | false | whether to verify the crc32 of messages in fetch tasks

-1. Specify the task type, topic and other information directly in the query

Finally, call the start interface to start the kafka task.

# produce task

1. After creating and initializing the WFKafkaClient, you can create a WFKafkaTask with the topic and other information specified directly in the query.

Example:
~~~cpp
@@ -65,17 +121,15 @@ int main(int argc, char *argv[])
}
~~~

-2. After creating the WFKafkaTask, call set_api_type to set the task type first, then call the add interfaces to prepare the input.
+2. After creating the WFKafkaTask, first build a KafkaRecord by calling set_key, set_value, add_header_pair and other methods.

-For more interfaces of the secondary factory, see [WFKafkaClient.h](../src/client/WFKafkaClient.h).
+For more interfaces of KafkaRecord, see [KafkaDataTypes.h](../src/protocol/KafkaDataTypes.h).

-For example, for a produce task, first create a KafkaRecord, then call set_key, set_value, add_header_pair and other methods to build it,
+Then call add_produce_record to add the KafkaRecord. For detailed definitions of more interfaces, see [WFKafkaClient.h](../src/client/WFKafkaClient.h).

-then call add_produce_record to add the record; for more interfaces of KafkaRecord, see [KafkaDataTypes.h](../src/protocol/KafkaDataTypes.h).
+Note the second argument of add_produce_record, partition: a value >= 0 means that specific partition; -1 means a random partition, or calling the user-defined kafka_partitioner_t.

-For fetch and meta tasks, call add_topic to specify the topic.

-Other usages, including callback, series, user_data and so on, are similar to other workflow tasks.
+A kafka_partitioner_t with a custom rule can be set through the set_partitioner interface.

Example:
~~~cpp
@@ -85,6 +139,7 @@ int main(int argc, char *argv[])
	WFKafkaClient *client_fetch = new WFKafkaClient();
	client_fetch->init(url);
	task = client_fetch->create_kafka_task("api=produce&topic=xxx&topic=yyy", 3, kafka_callback);
	task->set_partitioner(partitioner);

	KafkaRecord record;
	record.set_key("key1", strlen("key1"));
@@ -98,11 +153,55 @@ int main(int argc, char *argv[])
}
~~~

3. produce can also use the four compression protocols supported by kafka, by setting the configuration option.

Example:
~~~cpp
int main(int argc, char *argv[])
{
	...
	WFKafkaClient *client_fetch = new WFKafkaClient();
	client_fetch->init(url);
	task = client_fetch->create_kafka_task("api=produce&topic=xxx&topic=yyy", 3, kafka_callback);

	KafkaConfig config;
	config.set_compress_type(Kafka_Zstd);
	task->set_config(std::move(config));

	KafkaRecord record;
	record.set_key("key1", strlen("key1"));
	record.set_value(buf, sizeof(buf));
	record.add_header_pair("hk1", 3, "hv1", 3);
	task->add_produce_record("workflow_test1", -1, std::move(record));

	...
	task->start();
	...
}
~~~


# fetch task

fetch tasks support consumer group mode and manual mode.

-1. Consumer group mode
+1. Manual mode

No consumer group is needed; the user specifies the topic, partition and offset.

Example:
~~~cpp
client = new WFKafkaClient();
client->init(url);
task = client->create_kafka_task("api=fetch", 3, kafka_callback);

KafkaToppar toppar;
toppar.set_topic_partition("workflow_test1", 0);
toppar.set_offset(0);
task->add_toppar(toppar);
~~~

2. Consumer group mode

The name of the consumer group must be specified when initializing the client.

@@ -121,31 +220,38 @@ int main(int argc, char *argv[])
}
~~~

-2. Manual mode
+3. Committing offsets

-No consumer group is needed; the user specifies the topic, partition and offset.

-Example:
+In consumer group mode, after consuming messages the user can commit the consumed records by creating a commit task in the callback function. Example:
~~~cpp
-client = new WFKafkaClient();
-client->init(url);
-task = client->create_kafka_task("api=fetch", 3, kafka_callback);
void kafka_callback(WFKafkaTask *task)
{
	...
	commit_task = client.create_kafka_task("api=commit", 3, kafka_callback);

-	KafkaToppar toppar;
-	toppar.set_topic_partition("workflow_test1", 0);
-	toppar.set_offset(0);
-	task->add_toppar(toppar);
	...
	commit_task->start();
	...
}
~~~

# About closing the client

In consumer group mode, before closing, the client needs to call create_leavegroup_task to create a leavegroup_task,

-which sends the leavegroup protocol packet; otherwise the consumer group will not exit correctly.
+which sends the leavegroup protocol packet. If the leavegroup_task is not started, the consumer group will not exit correctly, and a rebalance of this group will be triggered.


# Processing kafka results

-The function processing the results works like the other examples; you can use either an ordinary function or a std::function to process the result.
+The data structure of the message result set is KafkaResult; it can be obtained by calling the get_result() interface of WFKafkaTask.

+Then the fetch_record interface of KafkaResult extracts the records related to this task, as a two-dimensional vector of KafkaRecord:

+the first dimension is the topic partition; the second dimension is the KafkaRecords under that topic partition.

+The definition of KafkaResult can be found in [KafkaResult.h](../src/protocol/KafkaResult.h).
~~~cpp
void kafka_callback(WFKafkaTask *task)
{
@@ -194,9 +300,3 @@ void kafka_callback(WFKafkaTask *task)
	...
}
~~~

-In this callback, task is the task produced by the secondary factory, and the type of the task's result set is protocol::KafkaResult.

-The result set object can be obtained directly through task->get_result().

-The definition of KafkaResult can be found in [KafkaResult.h](../src/protocol/KafkaResult.h).

@@ -3,12 +3,7 @@ cmake_minimum_required(VERSION 3.6)
find_package(OpenSSL REQUIRED)
include_directories(${OPENSSL_INCLUDE_DIR} ${INC_DIR}/workflow)

-if (WIN32)
-	add_subdirectory(kernel_win)
-else ()
-	add_subdirectory(kernel)
-endif ()
-
+add_subdirectory(kernel)
add_subdirectory(util)
add_subdirectory(manager)
add_subdirectory(algorithm)

@@ -39,6 +34,10 @@ add_library(
)

if (KAFKA STREQUAL "y")
	add_dependencies(client_kafka LINK_HEADERS)
	add_dependencies(util_kafka LINK_HEADERS)
	add_dependencies(protocol_kafka LINK_HEADERS)
	add_dependencies(factory_kafka LINK_HEADERS)
	add_library(
		"wfkafka" STATIC
		$<TARGET_OBJECTS:client_kafka>

@@ -67,12 +66,16 @@ else ()
endif ()

if (KAFKA STREQUAL "y")
-	set(LIBSOKAFKA ${LIB_DIR}/libwfkafka.so)
-	add_custom_target(
-		SCRIPT_SHARED_LIB_KAFKA ALL
-		COMMAND ${CMAKE_COMMAND} -E echo 'GROUP ( libwfkafka.a AS_NEEDED ( libpthread.so libssl.so libcrypto.so ) ) ' > ${LIBSOKAFKA}
-	)
-	add_dependencies(SCRIPT_SHARED_LIB_KAFKA "wfkafka")
+	if (APPLE)
+		set(LIBSOKAFKA ${LIB_DIR}/libwfkafka.a)
+	else ()
+		set(LIBSOKAFKA ${LIB_DIR}/libwfkafka.so)
+		add_custom_target(
+			SCRIPT_SHARED_LIB_KAFKA ALL
+			COMMAND ${CMAKE_COMMAND} -E echo 'GROUP ( libwfkafka.a AS_NEEDED ( libpthread.so libssl.so libcrypto.so ) ) ' > ${LIBSOKAFKA}
+		)
+		add_dependencies(SCRIPT_SHARED_LIB_KAFKA "wfkafka")
+	endif ()
endif ()

install(

@@ -157,6 +157,26 @@ public:
		return &this->cgroup;
	}

	void set_meta_list(const KafkaMetaList& meta_list)
	{
		this->meta_list = meta_list;
	}

	KafkaMetaList *get_meta_list()
	{
		return &this->meta_list;
	}

	void set_config(const KafkaConfig& config)
	{
		this->config = config;
	}

	KafkaConfig *get_config()
	{
		return &this->config;
	}

	void set_uri(const ParsedURI& uri)
	{
		this->uri = uri;

@@ -177,10 +197,23 @@ public:
		return &this->lock_status;
	}

	void set_client(WFKafkaClient *client)
	{
		this->client = client;
	}

	WFKafkaClient *get_client()
	{
		return this->client;
	}

private:
	KafkaCgroup cgroup;
	KafkaMetaList meta_list;
	KafkaConfig config;
	ParsedURI uri;
	KafkaLockStatus lock_status;
	WFKafkaClient *client;
};

class ComplexKafkaTask : public WFKafkaTask

@@ -209,26 +242,7 @@ public:

	virtual ~ComplexKafkaTask()
	{
-		if (this->lock_status.get_cnt()->fetch_sub(1) == 2 &&
-			this->cgroup.get_group())
-		{
-			KafkaBroker *coordinator = this->cgroup.get_coordinator();
-
-			const struct sockaddr *addr;
-			socklen_t socklen;
-			coordinator->get_broker_addr(&addr, &socklen);
-
-			if (coordinator->is_to_addr())
-			{
-				__WFKafkaTask *task;
-				task = __WFKafkaTaskFactory::create_kafka_task(addr, socklen,
-															   this->retry_max, nullptr);
-				task->get_req()->set_api(Kafka_LeaveGroup);
-				task->get_req()->set_broker(*coordinator);
-				task->get_req()->set_cgroup(this->cgroup);
-				task->start();
-			}
-		}
+		this->lock_status.get_cnt()->fetch_sub(1);
	}

	ParsedURI *get_uri() { return &this->uri; }

@@ -266,6 +280,10 @@ private:

	static void kafka_leavegroup_callback(__WFKafkaTask *task);

	static void kafka_rebalance_proc(KafkaHeartbeat *t);

	static void kafka_rebalance_callback(__WFKafkaTask *task);

	void kafka_move_task_callback(__WFKafkaTask *task);

	void kafka_process_toppar_offset(KafkaToppar *task_toppar);

@@ -356,6 +374,84 @@ void ComplexKafkaTask::kafka_leavegroup_callback(__WFKafkaTask *task)
	t->error = task->get_error();
}

void ComplexKafkaTask::kafka_rebalance_callback(__WFKafkaTask *task)
{
	KafkaHeartbeat *t = (KafkaHeartbeat *)task->user_data;

	if (task->get_state() == WFT_STATE_ABORTED ||
		t->get_lock_status()->get_cnt()->fetch_add(0) == 1)
	{
		delete t;
		return;
	}

	t->get_lock_status()->get_mutex()->lock();

	if (task->get_state() == 0)
	{
		*t->get_lock_status()->get_status() |= KAFKA_CGROUP_DONE;
		*t->get_lock_status()->get_status() &= (~(KAFKA_CGROUP_INIT|KAFKA_CGROUP_DOING));
		t->get_cgroup()->set_coordinator(task->get_resp()->get_broker());

		if (*t->get_lock_status()->get_status() & KAFKA_HEARTBEAT_INIT)
		{
			__WFKafkaTask *kafka_task;
			KafkaBroker *coordinator = t->get_cgroup()->get_coordinator();

			const struct sockaddr *addr;
			socklen_t socklen;
			coordinator->get_broker_addr(&addr, &socklen);

			kafka_task = __WFKafkaTaskFactory::create_kafka_task(addr, socklen,
																 0, kafka_heartbeat_callback);
			kafka_task->user_data = t;
			kafka_task->get_req()->set_api(Kafka_Heartbeat);
			kafka_task->get_req()->set_cgroup(*t->get_cgroup());
			kafka_task->get_req()->set_broker(*coordinator);
			kafka_task->start();
			*t->get_lock_status()->get_status() |= KAFKA_HEARTBEAT_DOING;
			*t->get_lock_status()->get_status() &= ~KAFKA_HEARTBEAT_INIT;
		}

		t->get_lock_status()->get_mutex()->unlock();

		char name[64];
		snprintf(name, 64, "%p.cgroup", t->get_client());
		WFTaskFactory::count_by_name(name, (unsigned int)-1);
	}
	else
		kafka_rebalance_proc(t);
}

void ComplexKafkaTask::kafka_rebalance_proc(KafkaHeartbeat *t)
{
	if (t->get_lock_status()->get_cnt()->fetch_add(0) == 1)
	{
		t->get_lock_status()->get_mutex()->unlock();
		delete t;
		return;
	}

	__WFKafkaTask *task;
	task = __WFKafkaTaskFactory::create_kafka_task(*t->get_uri(), 0,
												   kafka_rebalance_callback);
	task->user_data = t;
	task->get_req()->set_config(*t->get_config());
	task->get_req()->set_api(Kafka_FindCoordinator);
	task->get_req()->set_cgroup(*t->get_cgroup());
	task->get_req()->set_meta_list(*t->get_meta_list());

	*t->get_lock_status()->get_status() |= KAFKA_CGROUP_DOING;
	*t->get_lock_status()->get_status() &= (~(KAFKA_CGROUP_DONE|KAFKA_CGROUP_INIT));

	*t->get_lock_status()->get_status() |= KAFKA_HEARTBEAT_INIT;
	*t->get_lock_status()->get_status() &= (~(KAFKA_HEARTBEAT_DONE|KAFKA_HEARTBEAT_DOING));

	t->get_lock_status()->get_mutex()->unlock();

	task->start();
}

void ComplexKafkaTask::kafka_heartbeat_callback(__WFKafkaTask *task)
{
	KafkaHeartbeat *t = (KafkaHeartbeat *)task->user_data;

@@ -371,15 +467,7 @@ void ComplexKafkaTask::kafka_heartbeat_callback(__WFKafkaTask *task)

	if (t->get_cgroup()->get_error() != 0)
	{
-		*t->get_lock_status()->get_status() |= KAFKA_CGROUP_INIT;
-		*t->get_lock_status()->get_status() &= (~(KAFKA_CGROUP_DONE|KAFKA_CGROUP_DOING));
-
-		*t->get_lock_status()->get_status() |= KAFKA_HEARTBEAT_INIT;
-		*t->get_lock_status()->get_status() &= (~(KAFKA_HEARTBEAT_DONE|KAFKA_HEARTBEAT_DOING));
-
-		t->get_lock_status()->get_mutex()->unlock();
-
-		delete t;
+		kafka_rebalance_proc(t);
		return;
	}
	else

@@ -526,9 +614,12 @@ void ComplexKafkaTask::kafka_cgroup_callback(__WFKafkaTask *task)
	{
		KafkaHeartbeat *hb = new KafkaHeartbeat;
		hb->set_cgroup(t->cgroup);
		hb->set_meta_list(t->client_meta_list);
		hb->set_config(t->config);
		hb->set_uri(t->uri);
		t->lock_status.add_cnt();
		hb->set_lock_status(t->lock_status);
		hb->set_client(t->client);
		__WFKafkaTask *kafka_task;
		KafkaBroker *coordinator = t->cgroup.get_coordinator();

@@ -796,7 +887,6 @@ void ComplexKafkaTask::dispatch()
			KafkaComplexTask *ctask = static_cast<KafkaComplexTask *>(task);
			*ctask->get_mutable_ctx() = cb;
			series = Workflow::create_series_work(task, nullptr);
			series->set_context(task);
			parallel->add_series(series);
		}
		series_of(this)->push_front(this);

@@ -847,7 +937,6 @@ void ComplexKafkaTask::dispatch()
			KafkaComplexTask *ctask = static_cast<KafkaComplexTask *>(task);
			*ctask->get_mutable_ctx() = cb;
			series = Workflow::create_series_work(task, nullptr);
			series->set_context(task);
			parallel->add_series(series);
		}

@@ -1392,6 +1481,7 @@ void WFKafkaClient::set_heartbeat_interval(size_t interval_ms)

void WFKafkaClient::deinit()
{
	this->member->lock_status->dec_cnt();
	delete this->member;
	this->member = NULL;
}

@@ -338,10 +338,7 @@ bool ComplexHttpTask::finish_once()
		this->disable_retry();
	}
	else
	{
		this->get_resp()->end_parsing();
		redirect_count_ = 0;
	}

	return true;
}

@@ -383,8 +380,9 @@ WFHttpTask *WFTaskFactory::create_http_task(const ParsedURI& uri,
class WFHttpServerTask : public WFServerTask<HttpRequest, HttpResponse>
{
public:
-	WFHttpServerTask(std::function<void (WFHttpTask *)>& process):
-		WFServerTask(WFGlobal::get_scheduler(), process),
+	WFHttpServerTask(CommService *service,
+					 std::function<void (WFHttpTask *)>& process):
+		WFServerTask(service, WFGlobal::get_scheduler(), process),
		req_is_alive_(false),
		req_header_has_keep_alive_(false)
	{}

@@ -559,8 +557,9 @@ CommMessageOut *WFHttpServerTask::message_out()

/**********Server Factory**********/

-WFHttpTask *WFServerTaskFactory::create_http_task(std::function<void (WFHttpTask *)>& process)
+WFHttpTask *WFServerTaskFactory::create_http_task(CommService *service,
+												  std::function<void (WFHttpTask *)>& process)
{
-	return new WFHttpServerTask(process);
+	return new WFHttpServerTask(service, process);
}

@@ -555,8 +555,9 @@ WFMySQLTask *WFTaskFactory::create_mysql_task(const ParsedURI& uri,
class WFMySQLServerTask : public WFServerTask<MySQLRequest, MySQLResponse>
{
public:
-	WFMySQLServerTask(std::function<void (WFMySQLTask *)>& process):
-		WFServerTask(WFGlobal::get_scheduler(), process)
+	WFMySQLServerTask(CommService *service,
+					  std::function<void (WFMySQLTask *)>& process):
+		WFServerTask(service, WFGlobal::get_scheduler(), process)
	{}

protected:

@@ -595,8 +596,9 @@ CommMessageIn *WFMySQLServerTask::message_in()

/**********Server Factory**********/

-WFMySQLTask *WFServerTaskFactory::create_mysql_task(std::function<void (WFMySQLTask *)>& process)
+WFMySQLTask *WFServerTaskFactory::create_mysql_task(CommService *service,
+													std::function<void (WFMySQLTask *)>& process)
{
-	return new WFMySQLServerTask(process);
+	return new WFMySQLServerTask(service, process);
}

@@ -131,12 +131,15 @@ protected:
		return NULL;
	}

protected:
	CommService *service;

protected:
	class Processor : public SubTask
	{
	public:
		Processor(WFServerTask<REQ, RESP> *task,
				  std::function<void (WFNetworkTask<REQ, RESP> *)>& proc) :
			process(proc)
		{
			this->task = task;

@@ -158,12 +161,33 @@ protected:
		WFServerTask<REQ, RESP> *task;
	} processor;

	class Series : public SeriesWork
	{
	public:
		Series(WFServerTask<REQ, RESP> *task) :
			SeriesWork(&task->processor, nullptr)
		{
			this->set_last_task(task);
			this->service = task->service;
			this->service->incref();
		}

		virtual ~Series()
		{
			this->callback = nullptr;
			this->service->decref();
		}

		CommService *service;
	};

public:
-	WFServerTask(CommScheduler *scheduler,
+	WFServerTask(CommService *service, CommScheduler *scheduler,
				 std::function<void (WFNetworkTask<REQ, RESP> *)>& proc) :
		WFNetworkTask<REQ, RESP>(NULL, scheduler, nullptr),
		processor(this, proc)
	{
		this->service = service;
	}

protected:

@@ -177,7 +201,8 @@ void WFServerTask<REQ, RESP>::handle(int state, int error)
|
||||
{
|
||||
this->state = WFT_STATE_TOREPLY;
|
||||
this->target = this->get_target();
|
||||
Workflow::start_series_work(&this->processor, this, nullptr);
|
||||
new Series(this);
|
||||
this->processor.dispatch();
|
||||
}
|
||||
else if (this->state == WFT_STATE_TOREPLY)
|
||||
{
|
||||
|
||||
@@ -251,7 +251,8 @@ public:
|
||||
std::function<void (T *)> callback);
|
||||
|
||||
public:
|
||||
static T *create_server_task(std::function<void (T *)>& process);
|
||||
static T *create_server_task(CommService *service,
|
||||
std::function<void (T *)>& process);
|
||||
};
|
||||
|
||||
template<class INPUT, class OUTPUT>
|
||||
|
||||
@@ -243,7 +243,6 @@ public:
type_(TT_TCP),
retry_times_(0),
is_retry_(false),
has_original_uri_(true),
redirect_(false)
{}

@@ -292,7 +291,6 @@ public:
socklen_t addrlen,
const std::string& info);

const ParsedURI *get_original_uri() const { return &original_uri_; }
const ParsedURI *get_current_uri() const { return &uri_; }

void set_redirect(const ParsedURI& uri)
@@ -350,7 +348,6 @@ protected:
TransportType get_transport_type() const { return type_; }

ParsedURI uri_;
ParsedURI original_uri_;

int retry_max_;
bool is_sockaddr_;
@@ -378,7 +375,6 @@ private:
/* state 0: uninited or failed; 1: inited but not checked; 2: checked. */
char init_state_;
bool is_retry_;
bool has_original_uri_;
bool redirect_;
};

@@ -473,12 +469,6 @@ bool WFComplexClientTask<REQ, RESP, CTX>::set_port()
template<class REQ, class RESP, typename CTX>
void WFComplexClientTask<REQ, RESP, CTX>::init_with_uri()
{
if (has_original_uri_)
{
original_uri_ = uri_;
has_original_uri_ = true;
}

route_result_.clear();
if (uri_.state == URI_STATE_SUCCESS && this->set_port())
{
@@ -702,7 +692,12 @@ SubTask *WFComplexClientTask<REQ, RESP, CTX>::done()
}
else if (this->state == WFT_STATE_SYS_ERROR)
{
RouteManager::notify_unavailable(route_result_.cookie, this->target);
if (this->target)
{
RouteManager::notify_unavailable(route_result_.cookie,
this->target);
}

UpstreamManager::notify_unavailable(upstream_result_.cookie);
// 5. complex task failed: retry
if (retry_times_ < retry_max_)
@@ -710,7 +705,7 @@ SubTask *WFComplexClientTask<REQ, RESP, CTX>::done()
if (is_sockaddr_)
set_retry();
else
set_retry(original_uri_);
set_retry(uri_);

is_retry_ = true; // will influence next round dns cache time
}
@@ -794,9 +789,11 @@ WFNetworkTaskFactory<REQ, RESP>::create_client_task(TransportType type,

template<class REQ, class RESP>
WFNetworkTask<REQ, RESP> *
WFNetworkTaskFactory<REQ, RESP>::create_server_task(std::function<void (WFNetworkTask<REQ, RESP> *)>& process)
WFNetworkTaskFactory<REQ, RESP>::create_server_task(CommService *service,
std::function<void (WFNetworkTask<REQ, RESP> *)>& process)
{
return new WFServerTask<REQ, RESP>(WFGlobal::get_scheduler(), process);
return new WFServerTask<REQ, RESP>(service, WFGlobal::get_scheduler(),
process);
}

/**********Server Factory**********/
@@ -804,8 +801,10 @@ WFNetworkTaskFactory<REQ, RESP>::create_server_task(std::function<void (WFNetwor
class WFServerTaskFactory
{
public:
static WFHttpTask *create_http_task(std::function<void (WFHttpTask *)>& process);
static WFMySQLTask *create_mysql_task(std::function<void (WFMySQLTask *)>& process);
static WFHttpTask *create_http_task(CommService *service,
std::function<void (WFHttpTask *)>& process);
static WFMySQLTask *create_mysql_task(CommService *service,
std::function<void (WFMySQLTask *)>& process);
};

/**********Template Network Factory Sepcial**********/
@@ -240,17 +240,6 @@ int CommService::drain(int max)
return cnt;
}

inline void CommService::incref()
{
__sync_add_and_fetch(&this->ref, 1);
}

inline void CommService::decref()
{
if (__sync_sub_and_fetch(&this->ref, 1) == 0)
this->handle_unbound();
}

class CommServiceTarget : public CommTarget
{
public:

@@ -203,9 +203,17 @@ private:
int ssl_accept_timeout;
SSL_CTX *ssl_ctx;

private:
void incref();
void decref();
public:
void incref()
{
__sync_add_and_fetch(&this->ref, 1);
}

void decref()
{
if (__sync_sub_and_fetch(&this->ref, 1) == 0)
this->handle_unbound();
}

private:
int listen_fd;
@@ -72,7 +72,7 @@ public:

WFFuture<RES> get_future()
{
return WFFuture<RES>(std::move(this->promise.get_future()));
return WFFuture<RES>(this->promise.get_future());
}

void set_value(const RES& value) { this->promise.set_value(value); }

@@ -103,18 +103,7 @@ public:
}

private:
__WFGlobal():
settings_(GLOBAL_SETTINGS_DEFAULT)
{
static_scheme_port_["http"] = "80";
static_scheme_port_["https"] = "443";
static_scheme_port_["redis"] = "6379";
static_scheme_port_["rediss"] = "6379";
static_scheme_port_["mysql"] = "3306";
static_scheme_port_["kafka"] = "9092";
sync_count_ = 0;
sync_max_ = 0;
}
__WFGlobal();

private:
struct WFGlobalSettings settings_;
@@ -126,6 +115,40 @@ private:
int sync_max_;
};

__WFGlobal::__WFGlobal() : settings_(GLOBAL_SETTINGS_DEFAULT)
{
static_scheme_port_["http"] = "80";
static_scheme_port_["Http"] = "80";
static_scheme_port_["HTTP"] = "80";

static_scheme_port_["https"] = "443";
static_scheme_port_["Https"] = "443";
static_scheme_port_["HTTPs"] = "443";
static_scheme_port_["HTTPS"] = "443";

static_scheme_port_["redis"] = "6379";
static_scheme_port_["Redis"] = "6379";
static_scheme_port_["REDIS"] = "6379";

static_scheme_port_["rediss"] = "6379";
static_scheme_port_["Rediss"] = "6379";
static_scheme_port_["REDISs"] = "6379";
static_scheme_port_["REDISS"] = "6379";

static_scheme_port_["mysql"] = "3306";
static_scheme_port_["Mysql"] = "3306";
static_scheme_port_["MySql"] = "3306";
static_scheme_port_["MySQL"] = "3306";
static_scheme_port_["MYSQL"] = "3306";

static_scheme_port_["kafka"] = "9092";
static_scheme_port_["Kafka"] = "9092";
static_scheme_port_["KAFKA"] = "9092";

sync_count_ = 0;
sync_max_ = 0;
}

#if OPENSSL_VERSION_NUMBER < 0x10100000L
static std::mutex *__ssl_mutex;
@@ -663,6 +686,9 @@ static inline const char *__get_task_error_string(int error)
case WFT_ERR_MYSQL_COMMAND_DISALLOWED:
return "MySQL Command Disallowed";

case WFT_ERR_MYSQL_QUERY_NOT_SET:
return "MySQL Query Not Set";

case WFT_ERR_KAFKA_PARSE_RESPONSE_FAILED:
return "Kafka parse response failed";
@@ -228,8 +228,7 @@ HttpMessage::HttpMessage(HttpMessage&& msg)
msg.size_limit = (size_t)-1;

this->parser = msg.parser;
msg.parser = new http_parser_t;
http_parser_init(this->parser->is_resp, msg.parser);
msg.parser = NULL;

INIT_LIST_HEAD(&this->output_body);
list_splice_init(&msg.output_body, &this->output_body);
@@ -251,8 +250,7 @@ HttpMessage& HttpMessage::operator = (HttpMessage&& msg)
delete this->parser;

this->parser = msg.parser;
msg.parser = new http_parser_t;
http_parser_init(this->parser->is_resp, msg.parser);
msg.parser = NULL;

this->clear_output_body();
list_splice_init(&msg.output_body, &this->output_body);

@@ -193,8 +193,11 @@ public:
virtual ~HttpMessage()
{
this->clear_output_body();
http_parser_deinit(this->parser);
delete this->parser;
if (this->parser)
{
http_parser_deinit(this->parser);
delete this->parser;
}
}

/* for std::move() */
@@ -264,9 +264,8 @@ int KafkaCgroup::run_assignor(KafkaMetaList *meta_list,
return -1;
}

protocol->assignor(this->get_members(), this->get_member_elements(),
&subscribers);
return 0;
return protocol->assignor(this->get_members(), this->get_member_elements(),
&subscribers);
}

KafkaCgroup::KafkaCgroup()

@@ -314,6 +314,15 @@ public:
return true;
}

bool get_check_crcs() const
{
return this->ptr->check_crcs != 0;
}

void set_check_crcs(bool check_crcs)
{
this->ptr->check_crcs = check_crcs;
}

public:
KafkaConfig()
{
@@ -1140,7 +1140,8 @@ static int parse_varint_u64(void **buf, size_t *size, uint64_t *val)
return 0;
}

int KafkaMessage::parse_message_set(void **buf, size_t *size, int msg_vers,
int KafkaMessage::parse_message_set(void **buf, size_t *size,
bool check_crcs, int msg_vers,
struct list_head *record_list,
KafkaBuffer *uncompressed,
KafkaToppar *toppar)
@@ -1156,11 +1157,22 @@ int KafkaMessage::parse_message_set(void **buf, size_t *size, int msg_vers,
return -1;

if (*size < (size_t)(message_size - 8))
return 2;
return 1;

if (parse_i32(buf, size, &crc) < 0)
return -1;

if (check_crcs)
{
int crc_32 = crc32(0, NULL, 0);
crc_32 = crc32(crc_32, (Bytef *)*buf, message_size - 4);
if (crc_32 != crc)
{
errno = EBADMSG;
return -1;
}
}

KafkaRecord *kafka_record = new KafkaRecord;
kafka_record_t *record = kafka_record->get_raw_ptr();

@@ -1227,8 +1239,8 @@ int KafkaMessage::parse_message_set(void **buf, size_t *size, int msg_vers,
struct list_head *record_head = record_list->prev;
void *uncompressed_ptr = block.get_block();
size_t uncompressed_len = block.get_len();
parse_message_set(&uncompressed_ptr, &uncompressed_len, msg_vers,
record_list, uncompressed, toppar);
parse_message_set(&uncompressed_ptr, &uncompressed_len, check_crcs,
msg_vers, record_list, uncompressed, toppar);

uncompressed->add_item(std::move(block));

@@ -1254,8 +1266,8 @@ int KafkaMessage::parse_message_set(void **buf, size_t *size, int msg_vers,

if (*size > 0)
{
return parse_message_set(buf, size, msg_vers, record_list,
uncompressed, toppar);
return parse_message_set(buf, size, check_crcs, msg_vers,
record_list, uncompressed, toppar);
}

return 0;
@@ -1309,8 +1321,8 @@ struct KafkaBatchRecordHeader
int32_t record_count;
};

static int parse_message_record(void **buf, size_t *size,
kafka_record_t *record)
int KafkaMessage::parse_message_record(void **buf, size_t *size,
kafka_record_t *record)
{
int64_t length;
int8_t attributes;
@@ -1373,10 +1385,11 @@ static int parse_message_record(void **buf, size_t *size,
return 0;
}

static int parse_record_batch(void **buf, size_t *size,
struct list_head *record_list,
KafkaBuffer *uncompressed,
KafkaToppar *toppar)
int KafkaMessage::parse_record_batch(void **buf, size_t *size,
bool check_crcs,
struct list_head *record_list,
KafkaBuffer *uncompressed,
KafkaToppar *toppar)
{
KafkaBatchRecordHeader hdr;

@@ -1395,6 +1408,21 @@ static int parse_record_batch(void **buf, size_t *size,
if (parse_i32(buf, size, &hdr.crc) < 0)
return -1;

if (check_crcs)
{
if (hdr.length > (int)*size + 9)
{
errno = EBADMSG;
return -1;
}

if ((int)crc32c(0, (const void *)*buf, hdr.length - 9) != hdr.crc)
{
errno = EBADMSG;
return -1;
}
}

if (parse_i16(buf, size, &hdr.attributes) < 0)
return -1;

@@ -1468,7 +1496,7 @@ static int parse_record_batch(void **buf, size_t *size,
return 0;
}

int KafkaMessage::parse_records(void **buf, size_t *size,
int KafkaMessage::parse_records(void **buf, size_t *size, bool check_crcs,
struct list_head *record_list,
KafkaBuffer *uncompressed,
KafkaToppar *toppar)
@@ -1498,13 +1526,14 @@ int KafkaMessage::parse_records(void **buf, size_t *size,
{
case 0:
case 1:
ret = parse_message_set(buf, &msg_size, magic, record_list,
ret = parse_message_set(buf, &msg_size, check_crcs,
magic, record_list,
uncompressed, toppar);
break;

case 2:
ret = parse_record_batch(buf, &msg_size, record_list, uncompressed,
toppar);
ret = parse_record_batch(buf, &msg_size, check_crcs,
record_list, uncompressed, toppar);
break;

default:
@@ -1583,9 +1612,12 @@ KafkaMessage::KafkaMessage()

KafkaMessage::~KafkaMessage()
{
kafka_parser_deinit(this->parser);
delete this->parser;
delete this->stream;
if (this->parser)
{
kafka_parser_deinit(this->parser);
delete this->parser;
delete this->stream;
}
}

KafkaMessage::KafkaMessage(KafkaMessage&& msg)
@@ -1594,8 +1626,9 @@ KafkaMessage::KafkaMessage(KafkaMessage&& msg)
msg.size_limit = (size_t)-1;

this->parser = msg.parser;
msg.parser = new kafka_parser_t;
kafka_parser_init(msg.parser);
this->stream = msg.stream;
msg.parser = NULL;
msg.stream = NULL;

this->msgbuf = std::move(msg.msgbuf);
this->headbuf = std::move(msg.headbuf);
@@ -1623,10 +1656,12 @@ KafkaMessage& KafkaMessage::operator= (KafkaMessage &&msg)

kafka_parser_deinit(this->parser);
delete this->parser;
delete this->stream;

this->parser = msg.parser;
msg.parser = new kafka_parser_t;
kafka_parser_init(msg.parser);
this->stream = msg.stream;
msg.parser = NULL;
msg.stream = NULL;

this->msgbuf = std::move(msg.msgbuf);
this->headbuf = std::move(msg.headbuf);
@@ -2805,36 +2840,39 @@ static bool kafka_broker_get_leader(int leader_id, KafkaBrokerList *broker_list,
leader->port = broker->port;

char *host = strdup(broker->host);
if (!host)
return false;

char *rack = strdup(broker->rack);
if (!rack)
if (host)
{
size_t api_elem_size = sizeof(kafka_api_version_t) * broker->api_elements;
kafka_api_version_t *api = (kafka_api_version_t *)malloc(api_elem_size);
if (api)
{
char *rack;
if (broker->rack)
rack = strdup(broker->rack);

if (!broker->rack || rack)
{
if (broker->rack)
leader->rack = rack;

leader->to_addr = broker->to_addr;
memcpy(&leader->addr, &broker->addr, sizeof(struct sockaddr_storage));
leader->addrlen = broker->addrlen;
leader->features = broker->features;
memcpy(api, broker->api, api_elem_size);
leader->api_elements = broker->api_elements;
leader->host = host;
leader->api = api;
return true;
}

free(api);
}

free(host);
return false;
}

size_t api_elem_size = sizeof(kafka_api_version_t) *
broker->api_elements;
kafka_api_version_t *api = (kafka_api_version_t *)malloc(api_elem_size);
if (!api)
{
free(host);
free(rack);
return false;
}

leader->to_addr = broker->to_addr;
memcpy(&leader->addr, &broker->addr, sizeof(struct sockaddr_storage));
leader->addrlen = broker->addrlen;
leader->features = broker->features;
memcpy(api, broker->api, api_elem_size);
leader->api_elements = broker->api_elements;
leader->host = host;
leader->rack = rack;
leader->api = api;
return true;
return false;
}
}

@@ -3177,7 +3215,8 @@ int KafkaResponse::parse_fetch(void **buf, size_t *size)
if (this->api_version >= 11)
CHECK_RET(parse_i32(buf, size, &preferred_read_replica));

parse_records(buf, size, toppar->get_record(),
parse_records(buf, size, this->config.get_check_crcs(),
toppar->get_record(),
&this->uncompressed, toppar);
}
}
@@ -3253,6 +3292,8 @@ int KafkaResponse::parse_listoffset(void **buf, size_t *size)
CHECK_RET(parse_i32(buf, size, &offset_cnt));
for (int j = 0; j < offset_cnt; ++j)
CHECK_RET(parse_i64(buf, size, (int64_t *)&ptr->offset));

ptr->low_watermark = 0;
}
}
}
@@ -148,12 +148,22 @@ public:
}

protected:
static int parse_message_set(void **buf, size_t *size, int msg_vers,
static int parse_message_set(void **buf, size_t *size,
bool check_crcs, int msg_vers,
struct list_head *record_list,
KafkaBuffer *uncompressed,
KafkaToppar *toppar);

static int parse_records(void **buf, size_t *size,
static int parse_message_record(void **buf, size_t *size,
kafka_record_t *record);

static int parse_record_batch(void **buf, size_t *size,
bool check_crcs,
struct list_head *record_list,
KafkaBuffer *uncompressed,
KafkaToppar *toppar);

static int parse_records(void **buf, size_t *size, bool check_crcs,
struct list_head *record_list,
KafkaBuffer *uncompressed,
KafkaToppar *toppar);
@@ -32,10 +32,13 @@ namespace protocol

MySQLMessage::~MySQLMessage()
{
mysql_parser_deinit(parser_);
mysql_stream_deinit(stream_);
delete parser_;
delete stream_;
if (parser_)
{
mysql_parser_deinit(parser_);
mysql_stream_deinit(stream_);
delete parser_;
delete stream_;
}
}

MySQLMessage::MySQLMessage(MySQLMessage&& move)
@@ -48,12 +51,10 @@ MySQLMessage::MySQLMessage(MySQLMessage&& move)
seqid_ = move.seqid_;
cur_size_ = move.cur_size_;

move.parser_ = new mysql_parser_t;
move.stream_ = new mysql_stream_t;
move.parser_ = NULL;
move.stream_ = NULL;
move.seqid_ = 0;
move.cur_size_ = 0;
mysql_parser_init(move.parser_);
mysql_stream_init(move.stream_);
}

MySQLMessage& MySQLMessage::operator= (MySQLMessage&& move)
@@ -73,12 +74,10 @@ MySQLMessage& MySQLMessage::operator= (MySQLMessage&& move)
seqid_ = move.seqid_;
cur_size_ = move.cur_size_;

move.parser_ = new mysql_parser_t;
move.stream_ = new mysql_stream_t;
move.parser_ = NULL;
move.stream_ = NULL;
move.seqid_ = 0;
move.cur_size_ = 0;
mysql_parser_init(move.parser_);
mysql_stream_init(move.stream_);
}

return *this;
@@ -254,7 +253,6 @@ static inline std::string __sha1_bin(const std::string& str)
#define MYSQL_CAPFLAG_CLIENT_PROTOCOL_41 0x00000200
#define MYSQL_CAPFLAG_CLIENT_SECURE_CONNECTION 0x00008000
#define MYSQL_CAPFLAG_CLIENT_CONNECT_WITH_DB 0x00000008
#define MYSQL_CAPFLAG_CLIENT_PLUGIN_AUTH 0x00080000
#define MYSQL_CAPFLAG_CLIENT_MULTI_STATEMENTS 0x00010000
#define MYSQL_CAPFLAG_CLIENT_MULTI_RESULTS 0x00020000
#define MYSQL_CAPFLAG_CLIENT_PS_MULTI_RESULTS 0x00040000
@@ -269,7 +267,6 @@ int MySQLAuthRequest::encode(struct iovec vectors[], int max)
int4store(pos, MYSQL_CAPFLAG_CLIENT_PROTOCOL_41 |
MYSQL_CAPFLAG_CLIENT_SECURE_CONNECTION |
MYSQL_CAPFLAG_CLIENT_CONNECT_WITH_DB |
MYSQL_CAPFLAG_CLIENT_PLUGIN_AUTH |
MYSQL_CAPFLAG_CLIENT_MULTI_RESULTS|
MYSQL_CAPFLAG_CLIENT_LOCAL_FILES |
MYSQL_CAPFLAG_CLIENT_MULTI_STATEMENTS |
@@ -296,7 +293,6 @@ int MySQLAuthRequest::encode(struct iovec vectors[], int max)
buf_.append(username_.c_str(), username_.size() + 1);
buf_.append(native);
buf_.append(db_.c_str(), db_.size() + 1);
buf_.append("mysql_native_password", 22);
return this->MySQLMessage::encode(vectors, max);
}
@@ -494,10 +494,9 @@ RedisMessage::RedisMessage(RedisMessage&& move)
stream_ = move.stream_;
cur_size_ = move.cur_size_;

move.parser_ = new redis_parser_t;
move.stream_ = new EncodeStream;
move.parser_ = NULL;
move.stream_ = NULL;
move.cur_size_ = 0;
redis_parser_init(move.parser_);
}

RedisMessage& RedisMessage::operator= (RedisMessage &&move)
@@ -515,10 +514,9 @@ RedisMessage& RedisMessage::operator= (RedisMessage &&move)
stream_ = move.stream_;
cur_size_ = move.cur_size_;

move.parser_ = new redis_parser_t;
move.stream_ = new EncodeStream;
move.parser_ = NULL;
move.stream_ = NULL;
move.cur_size_ = 0;
redis_parser_init(move.parser_);
}

return *this;

@@ -321,9 +321,12 @@ inline RedisMessage::RedisMessage():

inline RedisMessage::~RedisMessage()
{
redis_parser_deinit(parser_);
delete parser_;
delete stream_;
if (parser_)
{
redis_parser_deinit(parser_);
delete parser_;
delete stream_;
}
}

inline bool RedisMessage::parse_success() const { return parser_->parse_succ; }
@@ -326,6 +326,7 @@ void kafka_config_init(kafka_config_t *conf)
conf->compress_type = Kafka_NoCompress;
conf->compress_level = 0;
conf->client_id = NULL;
conf->check_crcs = 0;
}

void kafka_config_deinit(kafka_config_t *conf)

@@ -226,6 +226,7 @@ typedef struct __kafka_config
int compress_type;
int compress_level;
char *client_id;
int check_crcs;
} kafka_config_t;

typedef struct __kafka_broker
||||
@@ -210,12 +210,13 @@ static int parse_ok_packet(const void *buf, size_t len, mysql_parser_t *parser)
|
||||
if (ret == 0 || p + info_len > buf_end)
|
||||
return -2;
|
||||
|
||||
parser->info_offset = p - (const char *)buf;
|
||||
parser->info_len = info_len;
|
||||
} else
|
||||
} else {
|
||||
parser->info_len = 0;
|
||||
}
|
||||
|
||||
parser->offset += 7;
|
||||
parser->info_offset = p - (const char *)buf;
|
||||
parser->offset += parser->info_offset + parser->info_len;
|
||||
parser->affected_rows = (affected_rows == (unsigned long long)-1) ? 0 : affected_rows;
|
||||
parser->insert_id = (insert_id == (unsigned long long)-1) ? 0 : insert_id;
|
||||
parser->server_status = server_status;
|
||||
|
||||
@@ -26,7 +26,7 @@
|
||||
#define MYSQL_FLOAT_STR_LENGTH 7 // 7 for float
|
||||
#define MYSQL_DOUBLE_STR_LENGTH 19 // 19 for long double
|
||||
|
||||
// may be set by server in EOF packet
|
||||
// may be set by server in EOF/OK packet
|
||||
#define MYSQL_SERVER_MORE_RESULTS_EXIST 0x0008
|
||||
|
||||
enum
|
||||
|
||||
@@ -50,7 +50,7 @@ inline CommSession *WFHttpServer::new_session(long long seq, CommConnection *con
{
WFHttpTask *task;

task = WFServerTaskFactory::create_http_task(this->process);
task = WFServerTaskFactory::create_http_task(this, this->process);
task->set_keep_alive(this->params.keep_alive_timeout);
task->set_receive_timeout(this->params.receive_timeout);
task->get_req()->set_size_limit(this->params.request_size_limit);

@@ -48,7 +48,8 @@ CommSession *WFMySQLServer::new_session(long long seq, CommConnection *conn)
static mysql_process_t empty = [](WFMySQLTask *){ };
WFMySQLTask *task;

task = WFServerTaskFactory::create_mysql_task(seq ? this->process : empty);
task = WFServerTaskFactory::create_mysql_task(this, seq ? this->process :
empty);
task->set_keep_alive(this->params.keep_alive_timeout);
task->set_receive_timeout(this->params.receive_timeout);
task->get_req()->set_size_limit(this->params.request_size_limit);

@@ -196,9 +196,10 @@ protected:
template<class REQ, class RESP>
CommSession *WFServer<REQ, RESP>::new_session(long long seq, CommConnection *conn)
{
using factory = WFNetworkTaskFactory<REQ, RESP>;
WFNetworkTask<REQ, RESP> *task;

task = WFNetworkTaskFactory<REQ, RESP>::create_server_task(this->process);
task = factory::create_server_task(this, this->process);
task->set_keep_alive(this->params.keep_alive_timeout);
task->set_receive_timeout(this->params.receive_timeout);
task->get_req()->set_size_limit(this->params.request_size_limit);
@@ -185,9 +185,10 @@ void mysql_callback(WFMySQLTask *task)
fprintf(stderr, "row ");
else
fprintf(stderr, "rows ");
fprintf(stderr, "affected. %d warnings. insert_id=%llu.\n",
fprintf(stderr, "affected. %d warnings. insert_id=%llu. %s\n",
task->get_resp()->get_warnings(),
task->get_resp()->get_last_insert_id());
task->get_resp()->get_last_insert_id(),
task->get_resp()->get_info().c_str());
}
else if (resp->get_packet_type() == MYSQL_PACKET_ERROR)
{

@@ -48,6 +48,7 @@ void kafka_callback(WFKafkaTask *task)
fprintf(stderr, "error msg: %s\n",
WFGlobal::get_error_string(state, error));
fprintf(stderr, "Failed. Press Ctrl-C to exit.\n");
client.deinit();
wait_group.done();
return;
}
@@ -141,20 +142,6 @@ void kafka_callback(WFKafkaTask *task)

break;

case Kafka_Metadata:
{
protocol::KafkaMetaList *meta_list = client.get_meta_list();
KafkaMeta *meta;

while ((meta = meta_list->get_next()) != NULL)
{
printf("meta\ttopic: %s, partition_num: %d\n",
meta->get_topic(), meta->get_partition_elements());
}

break;
}

case Kafka_OffsetCommit:
task->get_result()->fetch_toppars(toppars);

@@ -225,6 +212,7 @@ int main(int argc, char *argv[])
KafkaRecord record;

config.set_compress_type(compress_type);
config.set_client_id("workflow");
task->set_config(std::move(config));

for (size_t i = 0; i < sizeof (buf); ++i)
@@ -240,14 +228,6 @@ int main(int argc, char *argv[])
record.add_header_pair("hk2", 3, "hv2", 3);
task->add_produce_record("workflow_test2", -1, std::move(record));
}
else if (argv[2][0] == 'm')
{
client.init(url);
task = client.create_kafka_task(url, 3, kafka_callback);
task->add_topic("workflow_test1");
task->add_topic("workflow_test2");
task->set_api_type(Kafka_Metadata);
}
else if (argv[2][0] == 'c')
{
if (argc > 3 && argv[3][0] == 'd')
@@ -271,10 +251,11 @@ int main(int argc, char *argv[])
client.init(url, "workflow_group");
task = client.create_kafka_task("topic=workflow_test1&topic=workflow_test2&api=fetch",
3, kafka_callback);
KafkaConfig config;
config.set_client_id("workflow");
task->set_config(std::move(config));
}

KafkaConfig config;
config.set_client_id("workflow");
task->set_config(std::move(config));
}
else
{