Merge pull request #54 from sogou/master

merge from sogou/workflow
This commit is contained in:
liyingxin
2021-05-19 20:28:39 +08:00
committed by GitHub
40 changed files with 908 additions and 236 deletions

View File

@@ -5,7 +5,7 @@ set(CMAKE_SKIP_RPATH TRUE)
project(
workflow
VERSION 0.9.5
VERSION 0.9.6
LANGUAGES C CXX
)

View File

@@ -34,7 +34,7 @@ int main()
* As a **multifunctional asynchronous client**, it currently supports `HTTP`, `Redis`, `MySQL` and `Kafka` protocols.
* To implement **client/server on user-defined protocol** and build your own **RPC system**.
* [srpc](https://github.com/sogou/srpc) is based on it and it is an independent open source project, which supports srpc, brpc and thrift protocols.
* [srpc](https://github.com/sogou/srpc) is based on it and it is an independent open source project, which supports srpc, brpc, trpc and thrift protocols.
* To build **asynchronous workflow**; support common **series** and **parallel** structures, and also support any **DAG** structures.
* As a **parallel computing tool**. In addition to **networking tasks**, Sogou C++ Workflow also includes **the scheduling of computing tasks**. All types of tasks can be put into **the same** flow.
* As an **asynchronous file IO tool** in `Linux` systems, with performance exceeding any system call. Disk file IO is also a task.
@@ -51,7 +51,15 @@ int main()
* Uses the `C++11` standard and therefore, it should be compiled with a compiler which supports `C++11`. Does not rely on `boost` or `asio`.
* No other dependencies. However, if you need `Kafka` protocol, some compression libraries should be installed, including `lz4`, `zstd` and `snappy`.
# Try it!
### Get started (Linux, macOS):
~~~sh
$ git clone https://github.com/sogou/workflow
$ cd workflow
$ make
$ cd tutorial
$ make
~~~
# Tutorials
* Client
* [Creating your first task: wget](docs/en/tutorial-01-wget.md)

View File

@@ -30,7 +30,7 @@ int main()
* As a **multifunctional asynchronous client**, it currently supports `http`, `redis`, `mysql` and `kafka` protocols.
* To easily build high-efficiency spiders.
* To implement client/server on user-defined protocols and build your own RPC system.
* [srpc](https://github.com/sogou/srpc) is based on it and open-sourced as an independent project; it supports the `srpc`, `brpc` and `thrift` protocols.
* [srpc](https://github.com/sogou/srpc) is based on it and open-sourced as an independent project; it supports the `srpc`, `brpc`, `trpc` and `thrift` protocols.
* To build **asynchronous workflows**; common **series** and **parallel** structures are supported, as well as more complex **DAG** structures.
* As a **parallel computing tool**. Besides networking tasks, the scheduling of computing tasks is also included. All types of tasks can be put into the same flow.
* As an **asynchronous file IO tool** in `Linux` systems, with performance exceeding any standard call. Disk file IO is also a task.
@@ -47,7 +47,16 @@ int main()
* Uses the `C++11` standard and should be compiled with a compiler supporting `C++11`, but does not depend on `boost` or `asio`.
* No other dependencies. To use the `kafka` protocol, the `lz4`, `zstd` and `snappy` compression libraries need to be installed.
# 试一下!
#### Get started (Linux, macOS):
~~~sh
$ git clone https://github.com/sogou/workflow
$ cd workflow
$ make
$ cd tutorial
$ make
~~~
# Tutorials
* Client basics
* [Creating your first task: wget](docs/tutorial-01-wget.md)
* [Implementing redis set and get: redis_cli](docs/tutorial-02-redis_cli.md)

View File

@@ -2,7 +2,7 @@
# Sample code
[tutorial-12-mysql\_cli.cc](../tutorial/tutorial-12-mysql_cli.cc)
[tutorial-12-mysql\_cli.cc](/tutorial/tutorial-12-mysql_cli.cc)
# About mysql\_cli

View File

@@ -1,7 +1,7 @@
# Creating your first task: wget
# Sample code
[tutorial-01-wget.cc](../tutorial/tutorial-01-wget.cc)
[tutorial-01-wget.cc](/tutorial/tutorial-01-wget.cc)
# About wget
The program reads http/https URLs from stdin, fetches the web pages, prints the content to stdout, and prints the http headers of the request and the response to stderr.

View File

@@ -1,7 +1,7 @@
# Implementing redis set and get: redis_cli
# Sample code
[tutorial-02-redis_cli.cc](../tutorial/tutorial-02-redis_cli.cc)
[tutorial-02-redis_cli.cc](/tutorial/tutorial-02-redis_cli.cc)
# About redis_cli

View File

@@ -1,7 +1,7 @@
# More features of series: wget_to_redis
# Sample code
[tutorial-03-wget_to_redis.cc](../tutorial/tutorial-03-wget_to_redis.cc)
[tutorial-03-wget_to_redis.cc](/tutorial/tutorial-03-wget_to_redis.cc)
# About wget_to_redis

View File

@@ -1,7 +1,7 @@
# Your first server: http_echo_server
# Sample code
[tutorial-04-http_echo_server.cc](../tutorial/tutorial-04-http_echo_server.cc)
[tutorial-04-http_echo_server.cc](/tutorial/tutorial-04-http_echo_server.cc)
# About http_echo_server

View File

@@ -1,7 +1,7 @@
# An example of asynchronous servers: http_proxy
# Sample code
[tutorial-05-http_proxy.cc](../tutorial/tutorial-05-http_proxy.cc)
[tutorial-05-http_proxy.cc](/tutorial/tutorial-05-http_proxy.cc)
# About http_proxy

View File

@@ -1,7 +1,7 @@
# A simple parallel wget: parallel_wget
# Sample code
[tutorial-06-parallel_wget.cc](../tutorial/tutorial-06-parallel_wget.cc)
[tutorial-06-parallel_wget.cc](/tutorial/tutorial-06-parallel_wget.cc)
# About parallel_wget

View File

@@ -1,7 +1,7 @@
# Using the built-in algorithm factory: sort_task
# Sample code
[tutorial-07-sort_task.cc](../tutorial/tutorial-07-sort_task.cc)
[tutorial-07-sort_task.cc](/tutorial/tutorial-07-sort_task.cc)
# About sort_task

View File

@@ -1,7 +1,7 @@
# User-defined computing tasks: matrix_multiply
# Sample code
[tutorial-08-matrix_multiply.cc](../tutorial/tutorial-08-matrix_multiply.cc)
[tutorial-08-matrix_multiply.cc](/tutorial/tutorial-08-matrix_multiply.cc)
# About matrix_multiply

View File

@@ -1,7 +1,7 @@
# An http server with asynchronous IO: http_file_server
# Sample code
[tutorial-09-http_file_server.cc](../tutorial/tutorial-09-http_file_server.cc)
[tutorial-09-http_file_server.cc](/tutorial/tutorial-09-http_file_server.cc)
# About http_file_server

View File

@@ -1,10 +1,10 @@
# A simple client/server on a user-defined protocol
# Sample code
[message.h](../tutorial/tutorial-10-user_defined_protocol/message.h)
[message.cc](../tutorial/tutorial-10-user_defined_protocol/message.cc)
[server.cc](../tutorial/tutorial-10-user_defined_protocol/server.cc)
[client.cc](../tutorial/tutorial-10-user_defined_protocol/client.cc)
[message.h](/tutorial/tutorial-10-user_defined_protocol/message.h)
[message.cc](/tutorial/tutorial-10-user_defined_protocol/message.cc)
[server.cc](/tutorial/tutorial-10-user_defined_protocol/server.cc)
[client.cc](/tutorial/tutorial-10-user_defined_protocol/client.cc)
# About user_defined_protocol

View File

@@ -1,7 +1,7 @@
# Asynchronous MySQL client: mysql_cli
# Sample code
[tutorial-12-mysql_cli.cc](../tutorial/tutorial-12-mysql_cli.cc)
[tutorial-12-mysql_cli.cc](/tutorial/tutorial-12-mysql_cli.cc)
# About mysql_cli

View File

@@ -1,7 +1,7 @@
# Asynchronous Kafka client: kafka_cli
# Sample code
[tutorial-13-kafka_cli.cc](../tutorial/tutorial-13-kafka_cli.cc)
[tutorial-13-kafka_cli.cc](/tutorial/tutorial-13-kafka_cli.cc)
# About compilation options

View File

@@ -16,8 +16,15 @@
Authors: Wu Jiaxu (wujiaxu@sogou-inc.com)
*/
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <errno.h>
#include <netdb.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <string>
#include "DNSRoutine.h"
#define PORT_STR_MAX 5
@@ -48,25 +55,41 @@ DNSOutput& DNSOutput::operator= (DNSOutput&& move)
return *this;
}
void DNSRoutine::run_local_path(const std::string& path, DNSOutput *out)
{
struct sockaddr_un *sun = NULL;
if (path.size() + 1 <= sizeof sun->sun_path)
{
size_t size = sizeof (struct addrinfo) + sizeof (struct sockaddr_un);
out->addrinfo_ = (struct addrinfo *)calloc(size, 1);
if (out->addrinfo_)
{
sun = (struct sockaddr_un *)(out->addrinfo_ + 1);
sun->sun_family = AF_UNIX;
memcpy(sun->sun_path, path.c_str(), path.size());
out->addrinfo_->ai_family = AF_UNIX;
out->addrinfo_->ai_socktype = SOCK_STREAM;
out->addrinfo_->ai_addr = (struct sockaddr *)sun;
size = offsetof(struct sockaddr_un, sun_path) + path.size() + 1;
out->addrinfo_->ai_addrlen = size;
out->error_ = 0;
return;
}
}
else
errno = EINVAL;
out->error_ = EAI_SYSTEM;
}
void DNSRoutine::run(const DNSInput *in, DNSOutput *out)
{
if (!in->host_.empty() && in->host_[0] == '/')
{
out->error_ = 0;
out->addrinfo_ = (addrinfo*)malloc(sizeof (struct addrinfo) + sizeof (struct sockaddr_un));
out->addrinfo_->ai_flags = AI_ADDRCONFIG;
out->addrinfo_->ai_family = AF_UNIX;
out->addrinfo_->ai_socktype = SOCK_STREAM;
out->addrinfo_->ai_protocol = 0;
out->addrinfo_->ai_addrlen = sizeof (struct sockaddr_un);
out->addrinfo_->ai_addr = (struct sockaddr *)((char *)(out->addrinfo_) + sizeof (struct addrinfo));
out->addrinfo_->ai_canonname = NULL;
out->addrinfo_->ai_next = NULL;
struct sockaddr_un *sun = (struct sockaddr_un *)(out->addrinfo_->ai_addr);
sun->sun_family = AF_UNIX;
memset(sun->sun_path, 0, sizeof (sun->sun_path));
strcpy(sun->sun_path, in->host_.c_str());
run_local_path(in->host_, out);
return;
}

View File

@@ -92,6 +92,9 @@ class DNSRoutine
{
public:
static void run(const DNSInput *in, DNSOutput *out);
private:
static void run_local_path(const std::string& path, DNSOutput *out);
};
//new WFDNSTask(queue, executor, dns_routine, callback)

View File

@@ -23,7 +23,7 @@
#include <string.h>
#include "WFKafkaClient.h"
static size_t KAFKA_HEARTBEAT_TIMEOUT = 3 * 1000;
#define KAFKA_HEARTBEAT_INTERVAL (3 * 1000 * 1000)
#define KAFKA_META_INIT (1<<0)
#define KAFKA_META_DOING (1<<1)
@@ -119,6 +119,7 @@ public:
this->meta_list = new KafkaMetaList;
this->broker_list = new KafkaBrokerList;
this->lock_status = new KafkaLockStatus;
this->broker_map = new KafkaBrokerMap;
}
~KafkaMember()
@@ -130,6 +131,7 @@ public:
delete this->cgroup;
delete this->meta_list;
delete this->broker_list;
delete this->broker_map;
delete this->lock_status;
}
}
@@ -138,6 +140,7 @@ public:
KafkaCgroup *cgroup;
KafkaMetaList *meta_list;
KafkaBrokerList *broker_list;
KafkaBrokerMap *broker_map;
KafkaLockStatus *lock_status;
private:
@@ -231,6 +234,7 @@ public:
this->cgroup = *client->member->cgroup;
this->client_meta_list = *client->member->meta_list;
this->client_broker_list = *client->member->broker_list;
this->client_broker_map = *client->member->broker_map;
this->query = query;
if (!client->member->broker_hosts->empty())
@@ -268,6 +272,12 @@ private:
static void kafka_meta_callback(__WFKafkaTask *task);
static void kafka_process_broker_api(ComplexKafkaTask *t, __WFKafkaTask *task);
void kafka_broker_api_callback(__WFKafkaTask *task);
static void kafka_broker_callback(const ParallelWork *pwork);
static void kafka_cgroup_callback(__WFKafkaTask *task);
static void kafka_offsetcommit_callback(__WFKafkaTask *task);
@@ -294,7 +304,10 @@ private:
int arrange_commit();
KafkaBroker *get_broker(int node_id);
inline KafkaBroker *get_broker(int node_id)
{
return this->client_broker_map.find_item(node_id);
}
int get_node_id(const KafkaToppar *toppar);
@@ -303,6 +316,7 @@ private:
KafkaLockStatus lock_status;
KafkaMetaList client_meta_list;
KafkaBrokerList client_broker_list;
KafkaBrokerMap client_broker_map;
KafkaCgroup cgroup;
std::map<int, KafkaTopparList> toppar_list_map;
ParsedURI uri;
@@ -311,26 +325,6 @@ private:
friend class WFKafkaClient;
};
KafkaBroker *ComplexKafkaTask::get_broker(int node_id)
{
bool flag = false;
this->client_broker_list.rewind();
KafkaBroker *broker;
while ((broker = this->client_broker_list.get_next()) != NULL)
{
if (broker->get_node_id() == node_id)
{
flag = true;
break;
}
}
if (flag)
return broker;
else
return NULL;
}
int ComplexKafkaTask::get_node_id(const KafkaToppar *toppar)
{
bool flag = false;
@@ -475,7 +469,7 @@ void ComplexKafkaTask::kafka_heartbeat_callback(__WFKafkaTask *task)
*t->get_lock_status()->get_status() |= KAFKA_HEARTBEAT_DONE;
*t->get_lock_status()->get_status() &= ~KAFKA_HEARTBEAT_DOING;
WFTimerTask *timer_task;
timer_task = WFTaskFactory::create_timer_task(KAFKA_HEARTBEAT_TIMEOUT,
timer_task = WFTaskFactory::create_timer_task(KAFKA_HEARTBEAT_INTERVAL,
kafka_timer_callback);
timer_task->user_data = t;
timer_task->start();
@@ -567,6 +561,94 @@ void ComplexKafkaTask::kafka_merge_broker_list(KafkaBrokerList *dst,
}
}
void ComplexKafkaTask::kafka_process_broker_api(ComplexKafkaTask *t, __WFKafkaTask *task)
{
if (t->config.get_broker_version())
{
t->client_broker_list.rewind();
KafkaBroker *broker;
while ((broker = t->client_broker_list.get_next()) != NULL)
{
kafka_api_version_t *api;
size_t api_cnt;
const char *brk_ver = t->config.get_broker_version();
int ret = kafka_api_version_is_queryable(brk_ver, &api, &api_cnt);
if (ret == 0)
{
if (!broker->allocate_api_version(api_cnt))
{
t->state = WFT_STATE_TASK_ERROR;
t->error = errno;
t->lock_status.get_mutex()->unlock();
return;
}
memcpy(broker->get_api(), api,
sizeof(kafka_api_version_t) * api_cnt);
t->client_broker_map.add_item(*broker);
}
else
{
t->state = WFT_STATE_TASK_ERROR;
t->error = WFT_ERR_KAFKA_VERSION_DISALLOWED;
t->lock_status.get_mutex()->unlock();
return;
}
}
*t->lock_status.get_status() |= KAFKA_META_DONE;
*t->lock_status.get_status() &= (~(KAFKA_META_INIT|KAFKA_META_DOING));
t->state = WFT_STATE_SUCCESS;
t->error = 0;
}
else
{
SeriesWork *series;
ParallelWork *parallel = Workflow::create_parallel_work(kafka_broker_callback);
parallel->set_context(t);
t->client_broker_list.rewind();
KafkaBroker *broker;
while ((broker = t->client_broker_list.get_next()) != NULL)
{
auto cb = std::bind(&ComplexKafkaTask::kafka_broker_api_callback, t,
std::placeholders::_1);
__WFKafkaTask *ntask;
if (broker->is_to_addr())
{
const struct sockaddr *addr;
socklen_t socklen;
broker->get_broker_addr(&addr, &socklen);
ntask = __WFKafkaTaskFactory::create_kafka_task(addr, socklen,
t->retry_max,
nullptr);
}
else
{
ntask = __WFKafkaTaskFactory::create_kafka_task(broker->get_host(),
broker->get_port(),
t->retry_max,
nullptr);
}
ntask->get_req()->set_config(t->config);
ntask->get_req()->set_broker(*broker);
ntask->get_req()->set_api(Kafka_ApiVersions);
ntask->user_data = broker;
KafkaComplexTask *ctask = static_cast<KafkaComplexTask *>(ntask);
*ctask->get_mutable_ctx() = cb;
series = Workflow::create_series_work(ntask, nullptr);
parallel->add_series(series);
}
series_of(task)->push_front(parallel);
t->lock_status.get_mutex()->unlock();
}
}
void ComplexKafkaTask::kafka_meta_callback(__WFKafkaTask *task)
{
ComplexKafkaTask *t = (ComplexKafkaTask *)task->user_data;
@@ -578,6 +660,65 @@ void ComplexKafkaTask::kafka_meta_callback(__WFKafkaTask *task)
kafka_merge_broker_list(&t->client_broker_list,
task->get_resp()->get_broker_list());
kafka_process_broker_api(t, task);
}
else
{
*t->lock_status.get_status() |= KAFKA_META_INIT;
*t->lock_status.get_status() &= (~(KAFKA_META_DONE|KAFKA_META_DOING));
t->state = WFT_STATE_TASK_ERROR;
t->error = WFT_ERR_KAFKA_META_FAILED;
t->finish = true;
}
char name[64];
snprintf(name, 64, "%p.meta", t->client);
t->lock_status.get_mutex()->unlock();
WFTaskFactory::count_by_name(name, (unsigned int)-1);
}
struct __broker_status
{
KafkaBroker *broker;
int state;
int error;
};
void ComplexKafkaTask::kafka_broker_api_callback(__WFKafkaTask *task)
{
struct __broker_status *status = new struct __broker_status;
status->broker = (KafkaBroker *)task->user_data;
status->state = task->get_state();
status->error = task->get_error();
series_of(task)->set_context(status);
}
void ComplexKafkaTask::kafka_broker_callback(const ParallelWork *pwork)
{
ComplexKafkaTask *t = (ComplexKafkaTask *)pwork->get_context();
t->state = WFT_STATE_SUCCESS;
t->error = 0;
t->lock_status.get_mutex()->lock();
struct __broker_status *status;
for (size_t i = 0; i < pwork->size(); i++)
{
status = (struct __broker_status *)pwork->series_at(i)->get_context();
if (status->state != WFT_STATE_SUCCESS)
{
t->state = status->state;
t->error = status->error;
}
else
t->client_broker_map.add_item(*status->broker);
delete status;
}
if (t->state == WFT_STATE_SUCCESS)
{
*t->lock_status.get_status() |= KAFKA_META_DONE;
*t->lock_status.get_status() &= (~(KAFKA_META_INIT|KAFKA_META_DOING));
@@ -596,8 +737,8 @@ void ComplexKafkaTask::kafka_meta_callback(__WFKafkaTask *task)
char name[64];
snprintf(name, 64, "%p.meta", t->client);
WFTaskFactory::count_by_name(name, (unsigned int)-1);
t->lock_status.get_mutex()->unlock();
WFTaskFactory::count_by_name(name, (unsigned int)-1);
}
void ComplexKafkaTask::kafka_cgroup_callback(__WFKafkaTask *task)
@@ -654,8 +795,8 @@ void ComplexKafkaTask::kafka_cgroup_callback(__WFKafkaTask *task)
char name[64];
snprintf(name, 64, "%p.cgroup", t->client);
WFTaskFactory::count_by_name(name, (unsigned int)-1);
t->lock_status.get_mutex()->unlock();
WFTaskFactory::count_by_name(name, (unsigned int)-1);
}
void ComplexKafkaTask::kafka_parallel_callback(const ParallelWork *pwork)
@@ -782,7 +923,18 @@ void ComplexKafkaTask::dispatch()
this->lock_status.get_mutex()->lock();
if (*this->lock_status.get_status() & KAFKA_META_INIT)
if (*this->lock_status.get_status() & KAFKA_META_DOING)
{
char name[64];
snprintf(name, 64, "%p.meta", this->client);
counter = WFTaskFactory::create_counter_task(name, 1, nullptr);
series_of(this)->push_front(this);
series_of(this)->push_front(counter);
this->lock_status.get_mutex()->unlock();
this->subtask_done();
return;
}
else if (*this->lock_status.get_status() & KAFKA_META_INIT)
{
task = __WFKafkaTaskFactory::create_kafka_task(this->uri,
this->retry_max,
@@ -798,37 +950,8 @@ void ComplexKafkaTask::dispatch()
this->subtask_done();
return;
}
else if (*this->lock_status.get_status() & KAFKA_META_DOING)
{
char name[64];
snprintf(name, 64, "%p.meta", this->client);
counter = WFTaskFactory::create_counter_task(name, 1, nullptr);
series_of(this)->push_front(this);
series_of(this)->push_front(counter);
this->lock_status.get_mutex()->unlock();
this->subtask_done();
return;
}
if ((this->api_type == Kafka_Fetch || this->api_type == Kafka_OffsetCommit) &&
(*this->lock_status.get_status() & KAFKA_CGROUP_INIT))
{
task = __WFKafkaTaskFactory::create_kafka_task(this->uri,
this->retry_max,
kafka_cgroup_callback);
task->user_data = this;
task->get_req()->set_config(this->config);
task->get_req()->set_api(Kafka_FindCoordinator);
task->get_req()->set_cgroup(this->cgroup);
task->get_req()->set_meta_list(this->client_meta_list);
series_of(this)->push_front(this);
series_of(this)->push_front(task);
*this->lock_status.get_status() |= KAFKA_CGROUP_DOING;
this->lock_status.get_mutex()->unlock();
this->subtask_done();
return;
}
else if (*this->lock_status.get_status() & KAFKA_CGROUP_DOING)
if (*this->lock_status.get_status() & KAFKA_CGROUP_DOING)
{
char name[64];
snprintf(name, 64, "%p.cgroup", this->client);
@@ -839,6 +962,49 @@ void ComplexKafkaTask::dispatch()
this->subtask_done();
return;
}
else if ((this->api_type == Kafka_Fetch || this->api_type == Kafka_OffsetCommit) &&
(*this->lock_status.get_status() & KAFKA_CGROUP_INIT))
{
KafkaBroker *broker = this->client_broker_map.get_first_entry();
if (!broker)
{
this->state = WFT_STATE_TASK_ERROR;
this->error = WFT_ERR_KAFKA_CGROUP_FAILED;
this->finish = true;
return;
}
if (broker->is_to_addr())
{
const struct sockaddr *addr;
socklen_t socklen;
broker->get_broker_addr(&addr, &socklen);
task = __WFKafkaTaskFactory::create_kafka_task(addr, socklen,
this->retry_max,
kafka_cgroup_callback);
}
else
{
task = __WFKafkaTaskFactory::create_kafka_task(broker->get_host(),
broker->get_port(),
this->retry_max,
kafka_cgroup_callback);
}
task->user_data = this;
task->get_req()->set_config(this->config);
task->get_req()->set_api(Kafka_FindCoordinator);
task->get_req()->set_broker(*broker);
task->get_req()->set_cgroup(this->cgroup);
task->get_req()->set_meta_list(this->client_meta_list);
series_of(this)->push_front(this);
series_of(this)->push_front(task);
*this->lock_status.get_status() |= KAFKA_CGROUP_DOING;
this->lock_status.get_mutex()->unlock();
this->subtask_done();
return;
}
SeriesWork *series;
switch(this->api_type)
@@ -1494,11 +1660,6 @@ int WFKafkaClient::init(const std::string& broker, const std::string& group)
return -1;
}
void WFKafkaClient::set_heartbeat_interval(size_t interval_ms)
{
KAFKA_HEARTBEAT_TIMEOUT = interval_ms;
}
void WFKafkaClient::deinit()
{
this->member->lock_status->dec_cnt();

View File

@@ -125,8 +125,6 @@ public:
int init(const std::string& broker_url, const std::string& group);
void set_heartbeat_interval(size_t interval_ms);
void deinit();
// example: topic=xxx&topic=yyy&api=fetch

View File

@@ -149,7 +149,7 @@ CommMessageOut *ComplexHttpTask::message_out()
std::string val = StringUtil::strip(arr[1]);
if (strcasecmp(key.c_str(), "timeout") == 0)
{
this->keep_alive_timeo = atoi(val.c_str());
this->keep_alive_timeo = 1000 * atoi(val.c_str());
break;
}
}
@@ -255,7 +255,7 @@ bool ComplexHttpTask::init_success()
}
}
this->WFComplexClientTask::set_type(is_ssl ? TT_TCP_SSL : TT_TCP);
this->WFComplexClientTask::set_transport_type(is_ssl ? TT_TCP_SSL : TT_TCP);
client_req->set_request_uri(request_uri.c_str());
client_req->set_header_pair("Host", header_host.c_str());
@@ -517,7 +517,8 @@ CommMessageOut *WFHttpServerTask::message_out()
if (!(flag & 1) && strcasecmp(key.c_str(), "timeout") == 0)
{
flag |= 1;
this->keep_alive_timeo = atoi(val.c_str());
// keep_alive_timeo = 5000ms when Keep-Alive: timeout=5
this->keep_alive_timeo = 1000 * atoi(val.c_str());
if (flag == 3)
break;
}

View File

@@ -61,7 +61,7 @@ CommMessageOut *__ComplexKafkaTask::message_out()
long long seqid = this->get_seq();
KafkaBroker *broker = this->get_req()->get_broker();
if (seqid == 0 || !broker->get_api())
if (!broker->get_api())
{
if (!this->get_req()->get_config()->get_broker_version())
{
@@ -92,16 +92,16 @@ CommMessageOut *__ComplexKafkaTask::message_out()
return NULL;
}
}
}
if (this->get_req()->get_config()->get_sasl_mechanisms())
{
KafkaRequest *req = new KafkaRequest;
if (seqid == 0 && this->get_req()->get_config()->get_sasl_mechanisms())
{
KafkaRequest *req = new KafkaRequest;
req->duplicate(*this->get_req());
req->set_api(Kafka_SaslHandshake);
is_user_request_ = false;
return req;
}
req->duplicate(*this->get_req());
req->set_api(Kafka_SaslHandshake);
is_user_request_ = false;
return req;
}
if (this->get_req()->get_api() == Kafka_Fetch)
@@ -156,6 +156,10 @@ CommMessageIn *__ComplexKafkaTask::message_in()
bool __ComplexKafkaTask::init_success()
{
TransportType type = TT_TCP;
if (uri_.scheme && strcasecmp(uri_.scheme, "kafka") == 0)
type = TT_TCP;
//else if (uri_.scheme && strcasecmp(uri_.scheme, "kafkas") == 0)
// type = TT_TCP_SSL;
if (this->get_req()->get_config()->get_sasl_mechanisms())
{
@@ -163,7 +167,7 @@ bool __ComplexKafkaTask::init_success()
this->WFComplexClientTask::set_info(info);
}
this->WFComplexClientTask::set_type(type);
this->WFComplexClientTask::set_transport_type(type);
return true;
}
@@ -424,7 +428,8 @@ bool __ComplexKafkaTask::finish_once()
}
if (this->get_resp()->get_api() == Kafka_Fetch ||
this->get_resp()->get_api() == Kafka_Produce)
this->get_resp()->get_api() == Kafka_Produce ||
this->get_resp()->get_api() == Kafka_ApiVersions)
{
if (*get_mutable_ctx())
(*get_mutable_ctx())(this);

View File

@@ -464,7 +464,7 @@ bool ComplexMySQLTask::init_success()
"charset:%d|rcharset:%s",
username_.c_str(), password_.c_str(), db_.c_str(),
character_set_, res_charset_.c_str());
this->WFComplexClientTask::set_type(type);
this->WFComplexClientTask::set_transport_type(type);
if (!transaction.empty())
{

View File

@@ -167,7 +167,7 @@ bool ComplexRedisTask::init_success()
char *info = new char[info_len];
sprintf(info, "redis|pass:%s|db:%d", password_.c_str(), db_num_);
this->WFComplexClientTask::set_type(type);
this->WFComplexClientTask::set_transport_type(type);
this->WFComplexClientTask::set_info(info);
delete []info;

View File

@@ -160,21 +160,6 @@ protected:
virtual bool finish_once() { return true; }
public:
void set_info(const std::string& info)
{
info_.assign(info);
}
void set_info(const char *info)
{
info_.assign(info);
}
void set_type(TransportType type)
{
type_ = type;
}
void init(const ParsedURI& uri)
{
uri_ = uri;
@@ -192,6 +177,13 @@ public:
socklen_t addrlen,
const std::string& info);
void set_transport_type(TransportType type)
{
type_ = type;
}
TransportType get_transport_type() const { return type_; }
const ParsedURI *get_current_uri() const { return &uri_; }
void set_redirect(const ParsedURI& uri)
@@ -207,6 +199,17 @@ public:
init(type, addr, addrlen, info);
}
protected:
void set_info(const std::string& info)
{
info_.assign(info);
}
void set_info(const char *info)
{
info_.assign(info);
}
protected:
virtual void dispatch();
virtual SubTask *done();
@@ -225,8 +228,6 @@ protected:
retry_times_ = retry_max_;
}
TransportType get_transport_type() const { return type_; }
protected:
TransportType type_;
ParsedURI uri_;
@@ -541,7 +542,7 @@ WFNetworkTaskFactory<REQ, RESP>::create_client_task(TransportType type,
url += buf;
URIParser::parse(url, uri);
task->init(std::move(uri));
task->set_type(type);
task->set_transport_type(type);
return task;
}
@@ -557,7 +558,7 @@ WFNetworkTaskFactory<REQ, RESP>::create_client_task(TransportType type,
URIParser::parse(url, uri);
task->init(std::move(uri));
task->set_type(type);
task->set_transport_type(type);
return task;
}
@@ -571,7 +572,7 @@ WFNetworkTaskFactory<REQ, RESP>::create_client_task(TransportType type,
auto *task = new WFComplexClientTask<REQ, RESP>(retry_max, std::move(callback));
task->init(uri);
task->set_type(type);
task->set_transport_type(type);
return task;
}

View File

@@ -383,6 +383,9 @@ static int __poller_handle_ssl_error(struct __poller_node *node, int ret,
return -1;
}
if (event == node->event)
return 0;
pthread_mutex_lock(&poller->mutex);
if (!node->removed)
{

View File

@@ -66,7 +66,7 @@ WFFuture<WFFacilities::WFNetworkResult<RESP>> WFFacilities::async_request(Transp
URIParser::parse(url, uri);
task->init(std::move(uri));
task->set_type(type);
task->set_transport_type(type);
*task->get_req() = std::forward<REQ>(req);
task->start();
return fr;

View File

@@ -12,6 +12,7 @@ set(SRC
HttpMessage.cc
RedisMessage.cc
HttpUtil.cc
SSLWrapper.cc
)
add_library(${PROJECT_NAME} OBJECT ${SRC})

View File

@@ -342,11 +342,14 @@ int HttpResponse::append(const void *buf, size_t *size)
{
int ret = HttpMessage::append(buf, size);
if (ret > 0 && *http_parser_get_code(this->parser) == '1')
if (ret > 0)
{
http_parser_deinit(this->parser);
http_parser_init(1, this->parser);
ret = 0;
if (strcmp(http_parser_get_code(this->parser), "100") == 0)
{
http_parser_deinit(this->parser);
http_parser_init(1, this->parser);
ret = 0;
}
}
return ret;

View File

@@ -32,6 +32,7 @@
#include <snappy.h>
#include <snappy-sinksource.h>
#include "list.h"
#include "rbtree.h"
#include "kafka_parser.h"
@@ -188,6 +189,118 @@ private:
struct list_head *curpos;
};
template<class T>
class KafkaMap
{
public:
KafkaMap()
{
this->t_map = new struct rb_root;
this->t_map->rb_node = NULL;
this->ref = new std::atomic<int>(1);
}
~KafkaMap()
{
if (--*this->ref == 0)
{
T *t;
while (this->t_map->rb_node)
{
t = rb_entry(this->t_map->rb_node, T, rb);
rb_erase(this->t_map->rb_node, this->t_map);
delete t;
}
delete this->t_map;
delete this->ref;
}
}
KafkaMap(const KafkaMap& copy)
{
this->ref = copy.ref;
++*this->ref;
this->t_map = copy.t_map;
}
KafkaMap& operator= (const KafkaMap& copy)
{
this->~KafkaMap();
this->ref = copy.ref;
++*this->ref;
this->t_map = copy.t_map;
return *this;
}
T *find_item(int id)
{
rb_node **p = &this->t_map->rb_node;
T *t;
while (*p)
{
t = rb_entry(*p, T, rb);
if (id < t->get_id())
p = &(*p)->rb_left;
else if (id > t->get_id())
p = &(*p)->rb_right;
else
break;
}
return *p ? t : NULL;
}
void add_item(T& obj)
{
rb_node **p = &this->t_map->rb_node;
rb_node *parent = NULL;
T *t;
int id = obj.get_id();
while (*p)
{
parent = *p;
t = rb_entry(*p, T, rb);
if (id < t->get_id())
p = &(*p)->rb_left;
else if (id > t->get_id())
p = &(*p)->rb_right;
else
break;
}
if (*p == NULL)
{
T *nt = new T;
*nt = obj;
rb_link_node(nt->get_rb(), parent, p);
rb_insert_color(nt->get_rb(), this->t_map);
}
}
T *get_first_entry()
{
struct rb_node *p = rb_first(this->t_map);
return rb_entry(p, T, rb);
}
T *get_tail_entry()
{
struct rb_node *p = rb_last(this->t_map);
return rb_entry(p, T, rb);
}
private:
struct rb_root *t_map;
std::atomic<int> *ref;
};
class KafkaConfig
{
public:
@@ -570,6 +683,7 @@ class KafkaToppar;
using KafkaMetaList = KafkaList<KafkaMeta>;
using KafkaBrokerList = KafkaList<KafkaBroker>;
using KafkaBrokerMap = KafkaMap<KafkaBroker>;
using KafkaTopparList = KafkaList<KafkaToppar>;
using KafkaRecordList = KafkaList<KafkaRecord>;
@@ -861,6 +975,8 @@ public:
struct list_head *get_list() { return &this->list; }
struct rb_node *get_rb() { return &this->rb; }
void set_feature(unsigned features) { this->ptr->features = features; }
bool is_equal(const struct sockaddr *addr, socklen_t socklen) const
@@ -913,6 +1029,8 @@ public:
int get_node_id() const { return this->ptr->node_id; }
int get_id () const { return this->ptr->node_id; }
bool allocate_api_version(size_t len)
{
void *p = malloc(len * sizeof(kafka_api_version_t));
@@ -942,10 +1060,12 @@ public:
private:
struct list_head list;
struct rb_node rb;
kafka_broker_t *ptr;
std::atomic<int> *ref;
friend class KafkaList<KafkaBroker>;
friend class KafkaMap<KafkaBroker>;
};
class KafkaMeta

View File

@@ -3035,7 +3035,6 @@ static int kafka_meta_parse_topic(void **buf, size_t *size,
int KafkaResponse::parse_metadata(void **buf, size_t *size)
{
int throttle_time, controller_id;
std::string cluster_id;

View File

@@ -33,7 +33,7 @@ namespace protocol
class ProtocolMessage : public CommMessageOut, public CommMessageIn
{
private:
public:
virtual int encode(struct iovec vectors[], int max)
{
errno = ENOSYS;

View File

@@ -31,66 +31,6 @@ typedef int64_t Rint;
typedef std::string Rstr;
typedef std::vector<RedisValue> Rarr;
RedisValue::RedisValue(int64_t intv):
type_(REDIS_REPLY_TYPE_INTEGER),
data_(new Rint(intv))
{
}
RedisValue::RedisValue(const char *str):
type_(REDIS_REPLY_TYPE_STRING),
data_(new Rstr(str))
{
}
RedisValue::RedisValue(const char *str, size_t len):
type_(REDIS_REPLY_TYPE_STRING),
data_(new Rstr(str, len))
{
}
RedisValue::RedisValue(const std::string& strv):
type_(REDIS_REPLY_TYPE_STRING),
data_(new Rstr(strv))
{
}
RedisValue::RedisValue(const char *str, StatusTag status_tag):
type_(REDIS_REPLY_TYPE_STATUS),
data_(new Rstr(str))
{
}
RedisValue::RedisValue(const char *str, size_t len, StatusTag status_tag):
type_(REDIS_REPLY_TYPE_STATUS),
data_(new Rstr(str, len))
{
}
RedisValue::RedisValue(const std::string& strv, StatusTag status_tag):
type_(REDIS_REPLY_TYPE_STATUS),
data_(new Rstr(strv))
{
}
RedisValue::RedisValue(const char *str, ErrorTag error_tag):
type_(REDIS_REPLY_TYPE_ERROR),
data_(new Rstr(str))
{
}
RedisValue::RedisValue(const char *str, size_t len, ErrorTag error_tag):
type_(REDIS_REPLY_TYPE_ERROR),
data_(new Rstr(str, len))
{
}
RedisValue::RedisValue(const std::string& strv, ErrorTag error_tag):
type_(REDIS_REPLY_TYPE_ERROR),
data_(new Rstr(strv))
{
}
RedisValue& RedisValue::operator= (const RedisValue& copy)
{
if (this != &copy)

View File

@@ -107,24 +107,6 @@ public:
// format data to text
std::string debug_string() const;
public:
struct StatusTag {};
struct ErrorTag {};
// integer
RedisValue(int64_t intv);
// string
RedisValue(const char *str);
RedisValue(const char *str, size_t len);
RedisValue(const std::string& strv);
// status
RedisValue(const char *str, StatusTag status_tag);
RedisValue(const char *str, size_t len, StatusTag status_tag);
RedisValue(const std::string& strv, StatusTag status_tag);
// error
RedisValue(const char *str, ErrorTag error_tag);
RedisValue(const char *str, size_t len, ErrorTag error_tag);
RedisValue(const std::string& strv, ErrorTag error_tag);
private:
void free_data();
void only_set_string_data(const std::string& strv);

src/protocol/SSLWrapper.cc (new file, 222 lines)
View File

@@ -0,0 +1,222 @@
/*
Copyright (c) 2021 Sogou, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Author: Xie Han (xiehan@sogou-inc.com)
*/
#include <errno.h>
#include <openssl/ssl.h>
#include <openssl/err.h>
#include <openssl/bio.h>
#include "SSLWrapper.h"
namespace protocol
{
int SSLHandshaker::encode(struct iovec vectors[], int max)
{
BIO *wbio = SSL_get_wbio(this->ssl);
char *ptr;
long len;
int ret;
if (BIO_reset(wbio) <= 0)
return -1;
ret = SSL_do_handshake(this->ssl);
if (ret <= 0)
{
ret = SSL_get_error(this->ssl, ret);
if (ret != SSL_ERROR_WANT_READ)
{
if (ret != SSL_ERROR_SYSCALL)
errno = -ret;
return -1;
}
}
len = BIO_get_mem_data(wbio, &ptr);
if (len > 0)
{
vectors[0].iov_base = ptr;
vectors[0].iov_len = len;
return 1;
}
else if (len == 0)
return 0;
else
return -1;
}
int SSLHandshaker::append(const void *buf, size_t *size)
{
BIO *rbio = SSL_get_rbio(this->ssl);
BIO *wbio = SSL_get_wbio(this->ssl);
char *ptr;
long len;
int ret;
if (BIO_reset(wbio) <= 0)
return -1;
ret = BIO_write(rbio, buf, *size);
if (ret <= 0)
return -1;
*size = ret;
ret = SSL_do_handshake(this->ssl);
if (ret <= 0)
{
ret = SSL_get_error(this->ssl, ret);
if (ret != SSL_ERROR_WANT_READ)
{
if (ret != SSL_ERROR_SYSCALL)
errno = -ret;
return -1;
}
ret = 0;
}
len = BIO_get_mem_data(wbio, &ptr);
if (len >= 0)
{
long n = this->feedback(ptr, len);
if (n == len)
return ret;
if (n >= 0)
errno = EAGAIN;
}
return -1;
}
int SSLWrapper::encode(struct iovec vectors[], int max)
{
BIO *wbio = SSL_get_wbio(this->ssl);
struct iovec *iov;
char *ptr;
long len;
int ret;
if (BIO_reset(wbio) <= 0)
return -1;
ret = this->msg->encode(vectors, max);
if ((unsigned int)ret > (unsigned int)max)
return ret;
max = ret;
for (iov = vectors; iov < vectors + max; iov++)
{
if (iov->iov_len > 0)
{
ret = SSL_write(this->ssl, iov->iov_base, iov->iov_len);
if (ret <= 0)
{
ret = SSL_get_error(this->ssl, ret);
if (ret != SSL_ERROR_SYSCALL)
errno = -ret;
return -1;
}
}
}
len = BIO_get_mem_data(wbio, &ptr);
if (len > 0)
{
vectors[0].iov_base = ptr;
vectors[0].iov_len = len;
return 1;
}
else if (len == 0)
return 0;
else
return -1;
}
#define BUFSIZE 8192
int SSLWrapper::append_message()
{
char buf[BUFSIZE];
int ret;
while ((ret = SSL_read(this->ssl, buf, BUFSIZE)) > 0)
{
size_t nleft = ret;
char *p = buf;
size_t n;
do
{
n = nleft;
ret = this->msg->append(p, &n);
if (ret == 0)
{
nleft -= n;
p += n;
}
else
return ret;
} while (nleft > 0);
}
if (ret < 0)
{
ret = SSL_get_error(this->ssl, ret);
if (ret != SSL_ERROR_WANT_READ)
{
if (ret != SSL_ERROR_SYSCALL)
errno = -ret;
return -1;
}
}
return 0;
}
int SSLWrapper::append(const void *buf, size_t *size)
{
BIO *rbio = SSL_get_rbio(this->ssl);
int ret;
ret = BIO_write(rbio, buf, *size);
if (ret <= 0)
return -1;
*size = ret;
return this->append_message();
}
int ServiceSSLWrapper::append(const void *buf, size_t *size)
{
int ret = this->handshaker.append(buf, size);
if (ret > 0)
ret = this->append_message();
return ret;
}
}

src/protocol/SSLWrapper.h Normal file
View File

@@ -0,0 +1,81 @@
/*
Copyright (c) 2021 Sogou, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Author: Xie Han (xiehan@sogou-inc.com)
*/
#ifndef _SSLWRAPPER_H_
#define _SSLWRAPPER_H_
#include <openssl/ssl.h>
#include "ProtocolMessage.h"
namespace protocol
{
class SSLHandshaker : public ProtocolMessage
{
public:
virtual int encode(struct iovec vectors[], int max);
virtual int append(const void *buf, size_t *size);
protected:
SSL *ssl;
public:
SSLHandshaker(SSL *ssl) { this->ssl = ssl; }
};
class SSLWrapper : public ProtocolMessage
{
protected:
virtual int encode(struct iovec vectors[], int max);
virtual int append(const void *buf, size_t *size);
protected:
int append_message();
protected:
ProtocolMessage *msg;
SSL *ssl;
public:
SSLWrapper(ProtocolMessage *msg, SSL *ssl)
{
this->msg = msg;
this->ssl = ssl;
}
};
class ServiceSSLWrapper : public SSLWrapper
{
protected:
virtual int append(const void *buf, size_t *size);
protected:
SSLHandshaker handshaker;
public:
ServiceSSLWrapper(ProtocolMessage *msg, SSL *ssl) :
SSLWrapper(msg, ssl),
handshaker(ssl)
{
}
};
}
#endif

View File

@@ -23,9 +23,6 @@
#include <ctype.h>
#include "kafka_parser.h"
#define MIN(a, b) ((a) <= (b) ? (a) : (b))
static kafka_api_version_t kafka_api_version_queryable[] = {
{ Kafka_ApiVersions, 0, 0 }
};
@@ -207,8 +204,8 @@ static int kafka_get_legacy_api_version(const char *broker_version,
{ "", kafka_api_version_queryable, 1 },
{ NULL, NULL, 0 }
};
int i;
for (i = 0 ; vermap[i].pfx ; i++)
{
if (!strncmp(vermap[i].pfx, broker_version, strlen(vermap[i].pfx)))
@@ -263,7 +260,7 @@ unsigned kafka_get_features(kafka_api_version_t *api, size_t api_cnt)
int i, fails, r;
const kafka_api_version_t *match;
for (i = 0 ; kafka_feature_map[i].feature != 0 ; i++)
for (i = 0; kafka_feature_map[i].feature != 0; i++)
{
fails = 0;
for (match = &kafka_feature_map[i].depends[0];
@@ -574,6 +571,7 @@ void kafka_block_deinit(kafka_block_t *block)
int kafka_parser_append_message(const void *buf, size_t *size,
kafka_parser_t *parser)
{
size_t s = *size;
int totaln;
if (parser->complete)
@@ -582,9 +580,7 @@ int kafka_parser_append_message(const void *buf, size_t *size,
return 1;
}
size_t s = *size;
if (parser->hsize + *size < 4)
if (parser->hsize + s < 4)
{
memcpy(parser->headbuf + parser->hsize, buf, s);
parser->hsize += s;
@@ -672,15 +668,12 @@ int kafka_record_header_set_kv(const void *key, size_t key_len,
kafka_record_header_t *header)
{
void *k = malloc(key_len);
if (!k)
return -1;
void *v = malloc(val_len);
if (!v)
if (!k || !v)
{
free(k);
free(v);
return -1;
}
@@ -728,11 +721,12 @@ int kafka_sasl_plain_client_new(void *p)
size_t ulen = strlen(conf->sasl.username);
size_t plen = strlen(conf->sasl.passwd);
size_t blen = ulen + plen + 3;
char *buf = malloc(blen);
size_t off = 0;
char *buf = (char *)malloc(blen);
if (!buf)
return -1;
size_t off = 0;
buf[off++] = '\0';
memcpy(buf + off, conf->sasl.username, ulen);
@@ -783,3 +777,4 @@ int kafka_sasl_set_passwd(const char *passwd, kafka_config_t *conf)
conf->sasl.passwd = t;
return 0;
}

View File

@@ -23,6 +23,7 @@ else ()
endif ()
set(TUTORIAL_LIST
tutorial-00-helloworld
tutorial-01-wget
tutorial-02-redis_cli
tutorial-03-wget_to_redis
@@ -32,6 +33,7 @@ set(TUTORIAL_LIST
tutorial-07-sort_task
tutorial-08-matrix_multiply
tutorial-09-http_file_server
tutorial-11-graph_task
tutorial-12-mysql_cli
)

View File

@@ -0,0 +1,17 @@
#include <stdio.h>
#include "workflow/WFHttpServer.h"
int main()
{
WFHttpServer server([](WFHttpTask *task) {
task->get_resp()->append_output_body("<html>Hello World!</html>");
});
if (server.start(8888) == 0) { // start server on port 8888
getchar(); // press "Enter" to end.
server.stop();
}
return 0;
}

View File

@@ -0,0 +1,98 @@
/*
Copyright (c) 2021 Sogou, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Author: Xie Han (xiehan@sogou-inc.com;63350856@qq.com)
*/
#include <stdio.h>
#include "workflow/WFTaskFactory.h"
#include "workflow/WFGraphTask.h"
#include "workflow/HttpMessage.h"
#include "workflow/WFFacilities.h"
using namespace protocol;
static WFFacilities::WaitGroup wait_group(1);
void go_func(const size_t *size1, const size_t *size2)
{
printf("page1 size = %zu, page2 size = %zu\n", *size1, *size2);
}
void http_callback(WFHttpTask *task)
{
size_t *size = (size_t *)task->user_data;
const void *body;
if (task->get_state() == WFT_STATE_SUCCESS)
task->get_resp()->get_parsed_body(&body, size);
else
*size = (size_t)-1;
}
#define REDIRECT_MAX 3
#define RETRY_MAX 1
int main()
{
WFTimerTask *timer;
WFHttpTask *http_task1;
WFHttpTask *http_task2;
WFGoTask *go_task;
size_t size1;
size_t size2;
timer = WFTaskFactory::create_timer_task(1000000, [](WFTimerTask *) {
printf("timer task complete(1s).\n");
});
/* Http task1 */
http_task1 = WFTaskFactory::create_http_task("https://www.sogou.com/",
REDIRECT_MAX, RETRY_MAX,
http_callback);
http_task1->user_data = &size1;
/* Http task2 */
http_task2 = WFTaskFactory::create_http_task("https://www.baidu.com/",
REDIRECT_MAX, RETRY_MAX,
http_callback);
http_task2->user_data = &size2;
/* The go task prints the sizes of the two HTTP pages */
go_task = WFTaskFactory::create_go_task("go", go_func, &size1, &size2);
/* Create a graph. Graph is also a kind of task */
WFGraphTask *graph = WFTaskFactory::create_graph_task([](WFGraphTask *) {
printf("Graph task complete. Wake up the main process\n");
wait_group.done();
});
/* Create graph nodes */
WFGraphNode& a = graph->create_graph_node(timer);
WFGraphNode& b = graph->create_graph_node(http_task1);
WFGraphNode& c = graph->create_graph_node(http_task2);
WFGraphNode& d = graph->create_graph_node(go_task);
/* Build the graph */
a-->b;
a-->c;
b-->d;
c-->d;
graph->start();
wait_group.wait();
return 0;
}