mirror of https://github.com/sogou/workflow.git (synced 2026-02-08 01:33:17 +08:00)

Commit: Use UNIX line endings

Update files docs/en/about-timeout.en.md and docs/en/tutorial-13-kafka_cli.md to use UNIX line endings.
# About timeout
In order to make all communication tasks run as accurately as users expect, the framework provides a large number of timeout configuration options and ensures the accuracy of these timeouts.

Some of these timeout configurations are global, such as the connection timeout, but you may configure your own connection timeout for a particular domain name through upstream.

Some timeouts are task-level, such as the timeout for sending a complete message, because users need to configure this value dynamically according to the message size.

Of course, a server may have its own overall timeout configuration. In a word, timeout is a complicated matter, and the framework handles it accurately.

All timeouts follow the **poll()** convention: an **int** in milliseconds, where -1 means infinite.

In addition, as said in the project introduction, you can ignore all of these configurations and adjust them only when you meet an actual requirement.
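Since every timeout in the framework is a plain **int** in milliseconds with -1 meaning infinite, a small helper of your own (not part of the framework; `to_timeout` is a hypothetical name) can keep call sites readable:

```cpp
#include <cassert>
#include <chrono>
#include <climits>

/* Convert a std::chrono duration to the framework's timeout convention:
 * a plain int in milliseconds; negative durations map to -1 (infinite). */
template <class Rep, class Period>
int to_timeout(std::chrono::duration<Rep, Period> d)
{
    if (d < d.zero())
        return -1;  /* infinite */

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    return ms > INT_MAX ? INT_MAX : static_cast<int>(ms);
}
```

For example, `to_timeout(std::chrono::seconds(10))` yields 10000, the same value as the 10-second defaults below.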
### Timeout configuration for basic communication

[EndpointParams.h](/src/manager/EndpointParams.h) contains the following items:
~~~cpp
struct EndpointParams
{
    size_t max_connections;
    int connect_timeout;
    int response_timeout;
    int ssl_connect_timeout;
};

static constexpr struct EndpointParams ENDPOINT_PARAMS_DEFAULT =
{
    .max_connections     = 200,
    .connect_timeout     = 10 * 1000,
    .response_timeout    = 10 * 1000,
    .ssl_connect_timeout = 10 * 1000,
};
~~~
The timeout-related items are:

* connect\_timeout: timeout for establishing a connection with the target. The default value is 10 seconds.
* response\_timeout: timeout for waiting for the target's response. The default value is 10 seconds. It is the timeout for sending a block of data to the target or reading a block of data from the target.
* ssl\_connect\_timeout: timeout for completing the SSL handshake with the target. The default value is 10 seconds.

This struct is the most basic configuration for a communication connection, and almost all subsequent communication configurations contain this struct.
### Global timeout configuration

You can see the global settings in [WFGlobal.h](/src/manager/WFGlobal.h).

~~~cpp
struct WFGlobalSettings
{
    EndpointParams endpoint_params;
    unsigned int dns_ttl_default;
    unsigned int dns_ttl_min;
    int dns_threads;
    int poller_threads;
    int handler_threads;
    int compute_threads;
};

static constexpr struct WFGlobalSettings GLOBAL_SETTINGS_DEFAULT =
{
    .endpoint_params = ENDPOINT_PARAMS_DEFAULT,
    .dns_ttl_default = 12 * 3600,  /* in seconds */
    .dns_ttl_min     = 180,        /* reacquire when communication error */
    .dns_threads     = 8,
    .poller_threads  = 2,
    .handler_threads = 20,
    .compute_threads = -1
};

// compute_threads <= 0 means auto-set by the system CPU number
~~~
Apart from three DNS-related configuration items (dns\_ttl\_default, dns\_ttl\_min and dns\_threads), which you can ignore for now, there is one timeout-related configuration item: EndpointParams endpoint\_params.

You can perform operations like the following to change the global configuration before calling any of our factory functions:
~~~cpp
int main()
{
    struct WFGlobalSettings settings = GLOBAL_SETTINGS_DEFAULT;

    settings.endpoint_params.connect_timeout = 2 * 1000;
    settings.endpoint_params.response_timeout = -1;
    WORKFLOW_library_init(&settings);
}
~~~
The above example changes the connection timeout to 2 seconds and makes the server response timeout infinite. With this configuration, the timeout for receiving a complete message must be configured in each task; otherwise a task may fall into an infinite wait.

The global configuration can be overridden by the configuration for an individual address in the upstream feature. For example, you can specify a connection timeout for a specific domain name.

In upstream, each AddressParams also has an EndpointParams endpoint\_params field, and you can configure it in the same way as the global one.

For the detailed structures, please see the [upstream documents](/docs/en/tutorial-10-upstream.md#Address).
### Configuring server timeout

The [http\_proxy](/docs/en/tutorial-05-http_proxy.md) example demonstrates the server startup configuration. The timeout-related configuration items include:

* peer\_response\_timeout: the response timeout of the remote client; its definition is the same as the global response\_timeout, and the default value is 10 seconds.
* receive\_timeout: timeout for receiving a complete request. The default value is -1.
* keep\_alive\_timeout: timeout for keeping an idle connection. The default value is 1 minute. For a Redis server, the default value is 5 minutes.
* ssl\_accept\_timeout: timeout for completing the SSL handshake. The default value is 10 seconds.

Under this default configuration, a client may send one byte every 9 seconds and the server will keep receiving without any timeout. Therefore, if the service is exposed to the public network, you need to configure receive\_timeout.
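The arithmetic behind that warning can be sketched as a quick check (our illustration, not library code): with only the per-block 10-second response timeout in force, one byte every 9 seconds never trips a timeout, so holding a connection open for an hour costs a client only 400 bytes.

```cpp
#include <cassert>

/* Would a client sending one byte every `interval_ms` milliseconds ever
 * trigger a per-block response timeout of `response_timeout_ms` (-1 = infinite)? */
bool trips_block_timeout(int interval_ms, int response_timeout_ms)
{
    if (response_timeout_ms < 0)
        return false;                 /* infinite: never times out */
    return interval_ms > response_timeout_ms;
}

/* Bytes needed to keep a connection open for `hold_ms` milliseconds
 * at one byte per `interval_ms`. */
long bytes_to_hold(long hold_ms, long interval_ms)
{
    return hold_ms / interval_ms;
}
```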
### Configuring task-level timeout

Task-level timeout configuration is accomplished by calling several interfaces on a network task:
~~~cpp
template <class REQ, class RESP>
class WFNetworkTask : public CommRequest
{
    ...
public:
    /* All in milliseconds. timeout == -1 for unlimited. */
    void set_send_timeout(int timeout) { this->send_timeo = timeout; }
    void set_receive_timeout(int timeout) { this->receive_timeo = timeout; }
    void set_keep_alive(int timeout) { this->keep_alive_timeo = timeout; }
    ...
};
~~~
In the above code, **set\_send\_timeout()** sets the timeout for sending a complete message; the default value is -1.

**set\_receive\_timeout()** is only valid for client tasks, and it indicates the timeout for receiving a complete server reply. The default value is -1.

* The receive\_timeout of a server task is in the server startup configuration. Every server task handed to the user has already received its complete request successfully.

The **set\_keep\_alive()** interface sets the timeout for keeping an idle connection. Generally, the framework handles connection maintenance well, and you do not need to call it.

With the HTTP protocol, if a client or a server wants to use a short connection, it can do so by adding an HTTP header; please avoid changing this behavior with this interface if you have other options.

If a Redis client wants to close the connection after a request, you need to use this interface. Obviously, **set\_keep\_alive()** is invalid when called in the callback (by then the connection has been reused).
### Timeout for synchronous task waiting

There is one very special timeout configuration, and it is the only synchronous waiting timeout in the framework. It is not recommended, but you can get good results with it in some application scenarios.

In the current framework, a target server has a connection limit (you can set it in both the global and the upstream configurations). If the number of connections has reached the upper limit, the client task fails and returns an error by default.

In the callback, **task->get\_state()** returns WFT\_STATE\_SYS\_ERROR, and **task->get\_error()** returns EAGAIN. If the task is configured with retries, a retry will be initiated automatically.

Here, you may configure a synchronous waiting timeout through the **task->set\_wait\_timeout()** interface. If a connection is released within this time period, the task can occupy it.

If you set wait\_timeout and do not get a connection before the timeout, the callback will get a WFT\_STATE\_SYS\_ERROR state and an ETIMEDOUT error.
~~~cpp
class CommRequest : public SubTask, public CommSession
{
public:
    ...
    void set_wait_timeout(int wait_timeout) { this->wait_timeout = wait_timeout; }
    ...
};
~~~
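The semantics can be sketched with a toy connection pool (our own illustration; the framework's internal implementation differs): acquiring with wait_timeout 0 fails immediately with EAGAIN, a positive wait_timeout waits up to that many milliseconds and then reports ETIMEDOUT, and -1 waits forever.

```cpp
#include <cerrno>
#include <chrono>
#include <condition_variable>
#include <mutex>

/* Illustrative pool of connection slots mimicking set_wait_timeout()
 * semantics: returns 0 on success, EAGAIN when full and wait_timeout is 0,
 * ETIMEDOUT when no slot is released within wait_timeout_ms. */
class SlotPool
{
public:
    explicit SlotPool(int slots) : free_slots(slots) { }

    int acquire(int wait_timeout_ms)
    {
        std::unique_lock<std::mutex> lock(mutex);
        auto has_slot = [this] { return free_slots > 0; };

        if (!has_slot())
        {
            if (wait_timeout_ms == 0)
                return EAGAIN;               /* default: fail immediately */
            if (wait_timeout_ms < 0)
                cond.wait(lock, has_slot);   /* -1: wait forever */
            else if (!cond.wait_for(lock,
                                    std::chrono::milliseconds(wait_timeout_ms),
                                    has_slot))
                return ETIMEDOUT;
        }
        free_slots--;
        return 0;
    }

    void release()
    {
        std::lock_guard<std::mutex> lock(mutex);
        free_slots++;
        cond.notify_one();
    }

private:
    std::mutex mutex;
    std::condition_variable cond;
    int free_slots;
};
```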
### Viewing the reason for a timeout

Communication tasks contain a **get\_timeout\_reason()** interface that returns the timeout reason. The reason is not very detailed; it includes the following return values:

* TOR\_NOT\_TIMEOUT: not a timeout.
* TOR\_WAIT\_TIMEOUT: synchronous waiting timed out.
* TOR\_CONNECT\_TIMEOUT: connecting timed out. Connections over TCP, SCTP, SSL and other protocols all use this timeout.
* TOR\_TRANSMIT\_TIMEOUT: a transmission timed out. It is currently impossible to distinguish whether it happened in the sending stage or in the receiving stage; this may be refined later.
  * For a server task, if the timeout reason is TRANSMIT\_TIMEOUT, it must have been in the stage of sending the reply.
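For logging, these reason codes can be mapped to strings with a simple switch. The enum values below are illustrative redeclarations so the example is self-contained; the real constants live in the framework's headers:

```cpp
#include <cassert>
#include <cstring>

/* Timeout reasons as described above; the numeric values here are
 * illustrative redeclarations, not taken from the library headers. */
enum
{
    TOR_NOT_TIMEOUT = 0,
    TOR_WAIT_TIMEOUT,
    TOR_CONNECT_TIMEOUT,
    TOR_TRANSMIT_TIMEOUT,
};

const char *timeout_reason_string(int reason)
{
    switch (reason)
    {
    case TOR_NOT_TIMEOUT:      return "not a timeout";
    case TOR_WAIT_TIMEOUT:     return "synchronous wait timed out";
    case TOR_CONNECT_TIMEOUT:  return "connect timed out";
    case TOR_TRANSMIT_TIMEOUT: return "transmission timed out";
    default:                   return "unknown";
    }
}
```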
### Implementation of timeout functions

Within the framework, there are more types of timeout than those shown here. Except for wait\_timeout, all of them depend on timerfd on Linux or the kqueue timer on BSD systems, one for each poller thread.

By default, the number of poller threads is 4, which can meet the requirements of most applications.

The current timeout algorithm uses a linked list plus a red-black tree as its data structure. Its time complexity is between O(1) and O(logn), where n is the number of fds in one poller thread.

Timeout processing is currently not the bottleneck, because the related epoll calls in the Linux kernel are also O(logn). Even if every timeout operation in our framework reached O(1), it would not make much difference.
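A simplified sketch of that hybrid structure (ours, not the framework's code): a red-black tree (std::map) keyed by expiration time, where each node keeps a FIFO list of entries sharing that expiration, so a duplicate expiration appends in O(1) after the O(logn) lookup:

```cpp
#include <cstddef>
#include <list>
#include <map>

/* Simplified timer queue: std::map (a red-black tree) from expiration time
 * to the FIFO list of timer ids expiring at that time. */
class TimerQueue
{
public:
    void add(long long expire_ms, int id)
    {
        tree[expire_ms].push_back(id);  /* O(logn) find-or-insert, O(1) append */
    }

    /* Move every id whose expiration is <= now_ms into `out`;
     * returns how many entries expired. */
    size_t expire(long long now_ms, std::list<int>& out)
    {
        size_t n = 0;

        while (!tree.empty() && tree.begin()->first <= now_ms)
        {
            n += tree.begin()->second.size();
            out.splice(out.end(), tree.begin()->second);  /* O(1) transfer */
            tree.erase(tree.begin());
        }
        return n;
    }

private:
    std::map<long long, std::list<int>> tree;
};
```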
# Asynchronous Kafka Client: kafka_cli

# Sample Code

[tutorial-13-kafka_cli.cc](/tutorial/tutorial-13-kafka_cli.cc)

# About Compiling
Because multiple Kafka compression methods are supported, the third-party libraries [zlib](https://github.com/madler/zlib.git), [snappy](https://github.com/google/snappy.git), [lz4 (>= 1.7.5)](https://github.com/lz4/lz4.git) and [zstd](https://github.com/facebook/zstd.git) are used for the compression algorithms in the Kafka protocol, and they must be installed before compilation.

Both CMake and Bazel are supported for building.

CMake: use **make KAFKA=y** to build a separate library for the Kafka protocol (libwfkafka.a and libwfkafka.so) and **cd tutorial; make KAFKA=y** to build kafka_cli.

Bazel: use **bazel build kafka** to build a separate library for the Kafka protocol and **bazel build kafka_cli** to build kafka_cli.
# About kafka_cli

kafka_cli is a Kafka client for producing and fetching messages.

To compile it from the source code, type **make KAFKA=y** in the **tutorial** directory, or **make KAFKA=y tutorial** in the root directory of the project.

The program reads the Kafka broker server addresses and the task type (produce/fetch) from the command line:

./kafka_cli \<broker_url> [p/c]

The program exits automatically after all the tasks are completed, and all the resources are completely freed.
In the command, the broker_url may contain several URLs separated by commas (,).

- For instance: kafka://host:port,kafka://host1:port...
- The default port is 9092;
- If you want to use an upstream policy at this layer, please refer to the [upstream documents](/docs/en/about-upstream.md).

The following are several Kafka broker_url samples:

kafka://127.0.0.1/

kafka://kafka.host:9090/

kafka://10.160.23.23:9000,10.123.23.23,kafka://kafka.sogou
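As the samples suggest, a broker_url is a comma-separated list in which the kafka:// scheme is optional and the port defaults to 9092. A hypothetical sketch of that rule (not the client's actual parser):

```cpp
#include <string>
#include <vector>

/* Illustrative parse of a Kafka broker_url: split on commas, strip an
 * optional "kafka://" scheme and a trailing '/', default the port to 9092. */
std::vector<std::string> parse_broker_url(const std::string& url)
{
    std::vector<std::string> hosts;
    size_t pos = 0;

    while (pos <= url.size())
    {
        size_t comma = url.find(',', pos);
        if (comma == std::string::npos)
            comma = url.size();

        std::string host = url.substr(pos, comma - pos);
        if (host.compare(0, 8, "kafka://") == 0)
            host.erase(0, 8);
        if (!host.empty() && host.back() == '/')
            host.pop_back();
        if (!host.empty() && host.find(':') == std::string::npos)
            host += ":9092";            /* default port */
        if (!host.empty())
            hosts.push_back(host);

        pos = comma + 1;
    }
    return hosts;
}
```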
# Principles and Features

The Kafka client has no third-party dependencies internally, except for the libraries used for compression. Thanks to the high performance of Workflow, when properly configured and in a suitable environment, tens of thousands of Kafka requests can be processed in one second.

Internally, the Kafka client divides each request into parallel tasks according to the brokers involved, with one sub-task for each broker address.

In this way, efficiency is maximized. Besides, the connection reuse mechanism in Workflow keeps the total number of connections within a reasonable range.

If there are multiple topic partitions under one broker address, you may create multiple clients, and then create and start separate tasks for each topic partition to increase the throughput.
# Creating and Starting Kafka Tasks

To create and start a Kafka task, create a **WFKafkaClient** first, and then call **init** to initialize it.

~~~cpp
int init(const std::string& broker_url);

int init(const std::string& broker_url, const std::string& group);
~~~

In the above code snippet, **broker_url** is the address of the Kafka broker cluster, in the same format as the broker_url in the section above.

**group** is the group name of a consumer group, used by fetch tasks in consumer group mode. For produce tasks, or fetch tasks without a consumer group, do not use this interface.

For a consumer group, you can specify the heartbeat interval in milliseconds to keep the heartbeats:

~~~cpp
void set_heartbeat_interval(size_t interval_ms);
~~~
Then you can create a Kafka task with that **WFKafkaClient**:

~~~cpp
using kafka_callback_t = std::function<void (WFKafkaTask *)>;

WFKafkaTask *create_kafka_task(const std::string& query, int retry_max, kafka_callback_t cb);

WFKafkaTask *create_kafka_task(int retry_max, kafka_callback_t cb);
~~~

In the above code snippet, **query** includes the type of the task, the topic and other properties; **retry_max** is the maximum number of retries; **cb** is the user-defined callback function, which will be called after the task is completed.
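The query is a URL-style string in which a key such as topic may repeat. A hypothetical sketch of reading such a string (not the client's actual parser):

```cpp
#include <string>
#include <utility>
#include <vector>

/* Parse a query like "api=produce&topic=xxx&topic=yyy" into key/value
 * pairs, preserving repeated keys. Illustrative only. */
std::vector<std::pair<std::string, std::string>> parse_query(const std::string& query)
{
    std::vector<std::pair<std::string, std::string>> kv;
    size_t pos = 0;

    while (pos <= query.size())
    {
        size_t amp = query.find('&', pos);
        if (amp == std::string::npos)
            amp = query.size();

        std::string item = query.substr(pos, amp - pos);
        size_t eq = item.find('=');
        if (eq != std::string::npos)
            kv.emplace_back(item.substr(0, eq), item.substr(eq + 1));

        pos = amp + 1;
    }
    return kv;
}
```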
You can also change the default settings of the task to meet the requirements. For details, refer to [KafkaDataTypes.h](/src/protocol/KafkaDataTypes.h).

~~~cpp
KafkaConfig config;
config.set_client_id("workflow");
task->set_config(std::move(config));
~~~
The supported configuration items are described below:

Item name | Type | Default value | Description
------ | ---- | ------- | -------
produce_timeout | int | 100ms | Maximum time for a produce.
produce_msg_max_bytes | int | 1000000 bytes | Maximum length of one message.
produce_msgset_cnt | int | 10000 | Maximum number of messages in one communication.
produce_msgset_max_bytes | int | 1000000 bytes | Maximum length of messages in one communication.
fetch_timeout | int | 100ms | Maximum timeout for a fetch.
fetch_min_bytes | int | 1 byte | Minimum length of messages in one fetch communication.
fetch_max_bytes | int | 50M bytes | Maximum length of messages in one fetch communication.
fetch_msg_max_bytes | int | 1M bytes | Maximum length of one single message in a fetch communication.
offset_timestamp | long long int | -2 | Initial offset in consumer group mode when there is no offset history. -2 means the oldest offset; -1 means the latest offset.
session_timeout | int | 10s | Maximum initialization timeout for joining a consumer group.
rebalance_timeout | int | 10s | Maximum timeout for synchronizing consumer group information after a client joins the consumer group.
produce_acks | int | -1 | Number of brokers that must replicate a message successfully before a produce task returns. -1 means all replica brokers.
allow_auto_topic_creation | bool | true | Whether a topic is created automatically for a produce task if it does not exist.
broker_version | char * | NULL | Broker version number, which should be specified manually when the version is lower than 0.10.
compress_type | int | NoCompress | Compression type for produced messages.
client_id | char * | NULL | Identifier of the client.
check_crcs | bool | false | Whether to check the crc32 of the messages in a fetch task.
offset_store | int | 0 | Which offset to use when joining the consumer group: 1 means the specified offset, and 0 means the last committed offset is preferred.
sasl_mechanisms | char * | NULL | SASL authentication type; currently only plain is supported, with broader SASL support under development.
sasl_username | char * | NULL | Username required for SASL authentication.
sasl_password | char * | NULL | Password required for SASL authentication.
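As an illustration of how produce_msg_max_bytes, produce_msgset_cnt and produce_msgset_max_bytes interact, here is a hypothetical batching rule (ours, not the client's code): an oversized message is rejected, and messages are greedily grouped into sets bounded by both the byte and count limits:

```cpp
#include <cstddef>
#include <vector>

/* Illustrative batching: reject any message over msg_max bytes, then
 * greedily pack message sizes into sets of at most set_max bytes and
 * at most set_cnt messages. Returns the number of sets, or 0 if some
 * message is oversized. */
size_t count_message_sets(const std::vector<size_t>& sizes,
                          size_t msg_max, size_t set_max, size_t set_cnt)
{
    size_t sets = 0, bytes = 0, cnt = 0;

    for (size_t s : sizes)
    {
        if (s > msg_max)
            return 0;                   /* oversized message: rejected */
        if (cnt == 0 || bytes + s > set_max || cnt == set_cnt)
        {
            sets++;                     /* start a new message set */
            bytes = 0;
            cnt = 0;
        }
        bytes += s;
        cnt++;
    }
    return sets;
}
```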
After configuring all the parameters, you can call the **start** interface to start the Kafka task.

# About Produce Tasks

1\. After you create and initialize a **WFKafkaClient**, you can specify the topic and other information in the **query** to create a **WFKafkaTask**.

For example:
~~~cpp
int main(int argc, char *argv[])
{
    ...
    client = new WFKafkaClient();
    client->init(url);
    task = client->create_kafka_task("api=produce&topic=xxx&topic=yyy", 3, kafka_callback);

    ...
    task->start();
    ...
}
~~~
2\. After the **WFKafkaTask** is created, call **set_key**, **set_value**, **add_header_pair** and other methods to build a **KafkaRecord**.

For more interfaces on **KafkaRecord**, refer to [KafkaDataTypes.h](/src/protocol/KafkaDataTypes.h).

Then you can call **add_produce_record** to add the **KafkaRecord**. For the detailed definitions of the interfaces, refer to [WFKafkaClient.h](/src/client/WFKafkaClient.h).

For the second parameter **partition** in **add_produce_record**, >= 0 means a specified **partition**; -1 means that the **partition** is chosen randomly, or that the user-defined **kafka_partitioner_t** is used.

For **kafka_partitioner_t**, you can call the **set_partitioner** interface to specify user-defined rules.

For example:
~~~cpp
int main(int argc, char *argv[])
{
    ...
    WFKafkaClient *client_produce = new WFKafkaClient();
    client_produce->init(url);
    task = client_produce->create_kafka_task("api=produce&topic=xxx&topic=yyy", 3, kafka_callback);
    task->set_partitioner(partitioner);

    KafkaRecord record;
    record.set_key("key1", strlen("key1"));
    record.set_value(buf, sizeof(buf));
    record.add_header_pair("hk1", 3, "hv1", 3);
    task->add_produce_record("workflow_test1", -1, std::move(record));

    ...
    task->start();
    ...
}
~~~
3\. You can enable one of the four compression types supported by Kafka in a produce task through the configuration.

For example:
~~~cpp
int main(int argc, char *argv[])
{
    ...
    WFKafkaClient *client_produce = new WFKafkaClient();
    client_produce->init(url);
    task = client_produce->create_kafka_task("api=produce&topic=xxx&topic=yyy", 3, kafka_callback);

    KafkaConfig config;
    config.set_compress_type(Kafka_Zstd);
    task->set_config(std::move(config));

    KafkaRecord record;
    record.set_key("key1", strlen("key1"));
    record.set_value(buf, sizeof(buf));
    record.add_header_pair("hk1", 3, "hv1", 3);
    task->add_produce_record("workflow_test1", -1, std::move(record));

    ...
    task->start();
    ...
}
~~~
# About Fetch Tasks

You may use consumer group mode or manual mode for fetch tasks.

1\. Manual mode

In this mode, you do not need to specify a consumer group, but you must specify the topic, partition and offset.

For example:

~~~cpp
client = new WFKafkaClient();
client->init(url);
task = client->create_kafka_task("api=fetch", 3, kafka_callback);

KafkaToppar toppar;
toppar.set_topic_partition("workflow_test1", 0);
toppar.set_offset(0);
task->add_toppar(toppar);
~~~
2\. Consumer group mode

In this mode, you must specify the name of the consumer group when initializing a client.

For example:

~~~cpp
int main(int argc, char *argv[])
{
    ...
    WFKafkaClient *client_fetch = new WFKafkaClient();
    client_fetch->init(url, cgroup_name);
    task = client_fetch->create_kafka_task("api=fetch&topic=xxx&topic=yyy", 3, kafka_callback);

    ...
    task->start();
    ...
}
~~~
|
||||
|
||||
3\. Committing offset
|
||||
|
||||
In the consumer group mode, after a message is consumed, you can create a commit task in the callback to automatically submit the consumption record.
|
||||
|
||||
For example:
|
||||
|
||||
~~~cpp
|
||||
void kafka_callback(WFKafkaTask *task)
|
||||
{
|
||||
...
|
||||
commit_task = client.create_kafka_task("api=commit", 3, kafka_callback);
|
||||
|
||||
...
|
||||
commit_task->start();
|
||||
...
|
||||
}
|
||||
~~~
|
||||
|
||||
# Closing the Client
|
||||
|
||||
In the consumer group mode, before you close a client, you must call **create_leavegroup_task** to create a **leavegroup_task**.
|
||||
|
||||
This task will send a **leavegroup** packet. If no **leavegroup_task** is started, the group does not know that the client is leaving and will trigger rebalance.
|
||||
|
||||
# Processing Kafka Results
|
||||
|
||||
The data structure of the message result set is KafkaResult, and you can call **get_result()** in the **WFKafkaTask** to retrieve the results.
|
||||
|
||||
Then you can call the **fetch_record** in the **KafkaResult** to retrieve all records of the task. The record is a two-dimensional vector.
|
||||
|
||||
The first dimension is topic partition, and the second dimension is the **KafkaRecord** under that topic partition.
|
||||
|
||||
[KafkaResult.h](/src/protocol/KafkaResult.h) contains the definition of **KafkaResult**.
|
||||
|
||||
~~~cpp
|
||||
void kafka_callback(WFKafkaTask *task)
|
||||
{
|
||||
int state = task->get_state();
|
||||
int error = task->get_error();
|
||||
|
||||
// handle error states
|
||||
...
|
||||
|
||||
protocol::KafkaResult *result = task->get_result();
|
||||
result->fetch_records(records);
|
||||
|
||||
for (auto &v : records)
|
||||
{
|
||||
for (auto &w: v)
|
||||
{
|
||||
const void *value;
|
||||
size_t value_len;
|
||||
w->get_value(&value, &value_len);
|
||||
printf("produce\ttopic: %s, partition: %d, status: %d, offset: %lld, val_len: %zu\n",
|
||||
w->get_topic(), w->get_partition(), w->get_status(), w->get_offset(), value_len);
|
||||
}
|
||||
}
|
||||
...
|
||||
|
||||
protocol::KafkaResult new_result = std::move(*task->get_result());
|
||||
if (new_result.fetch_records(records))
|
||||
{
|
||||
for (auto &v : records)
|
||||
{
|
||||
if (v.empty())
|
||||
continue;
|
||||
|
||||
for (auto &w: v)
|
||||
{
|
||||
if (fp)
|
||||
{
|
||||
const void *value;
|
||||
size_t value_len;
|
||||
w->get_value(&value, &value_len);
|
||||
fwrite(w->get_value(), w->get_value_len(), 1, fp);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
...
|
||||
}
|
||||
~~~
|
||||
# Asynchronous Kafka Client: kafka_cli

# Sample Code

[tutorial-13-kafka_cli.cc](/tutorial/tutorial-13-kafka_cli.cc)

# About Compilation

To support the multiple compression methods in the Kafka protocol, the third-party libraries [zlib](https://github.com/madler/zlib.git), [snappy](https://github.com/google/snappy.git), [lz4(>=1.7.5)](https://github.com/lz4/lz4.git) and [zstd](https://github.com/facebook/zstd.git) are used for the compression algorithms, and they must be installed before compilation.

Both CMake and Bazel are supported.

CMake: use **make KAFKA=y** to compile a separate library for the Kafka protocol (libwfkafka.a and libwfkafka.so), and use **cd tutorial; make KAFKA=y** to compile kafka_cli.

Bazel: use **bazel build kafka** to compile a separate library for the Kafka protocol, and use **bazel build kafka_cli** to compile kafka_cli.

# About kafka_cli

kafka_cli is a Kafka client for producing and fetching messages in Kafka.

To compile it from the source code, type the command **make KAFKA=y** in the **tutorial** directory, or type the command **make KAFKA=y tutorial** in the root directory of the project.

The program reads the Kafka broker server addresses and the current task type (produce/fetch) from the command line:

./kafka_cli \<broker_url> [p/c]

The program exits automatically after all the tasks are completed, and all the resources are completely freed.

In the command, the broker_url may contain several URLs separated by commas(,).

- For instance, kafka://host:port,kafka://host1:port...
- The default port is 9092;
- If you want to use upstream policy at this layer, please refer to [upstream documents](/docs/en/about-upstream.md).

The following are several Kafka broker_url samples:

kafka://127.0.0.1/

kafka://kafka.host:9090/

kafka://10.160.23.23:9000,10.123.23.23,kafka://kafka.sogou

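The framework parses the comma-separated broker_url internally; purely as an illustration of the format described above (the function name here is hypothetical, not part of the library), splitting such a list might look like:

~~~cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper: split a comma-separated broker_url list such as
// "kafka://host:port,kafka://host1:port" into individual URLs.
// A URL without an explicit port would then use the default port 9092.
std::vector<std::string> split_broker_url(const std::string &broker_url)
{
	std::vector<std::string> urls;
	std::stringstream ss(broker_url);
	std::string item;

	while (std::getline(ss, item, ','))
	{
		if (!item.empty())
			urls.push_back(item);
	}

	return urls;
}
~~~

For instance, "kafka://10.160.23.23:9000,10.123.23.23" splits into two broker addresses, the second of which falls back to port 9092.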
# Principles and Features

The Kafka client has no third-party dependencies internally, except for the libraries used in compression. Thanks to the high performance of Workflow, with proper configuration and in a reasonable environment, tens of thousands of Kafka requests can be processed in one second.

Internally, the Kafka client divides each request into parallel tasks according to the brokers used. In the parallel tasks, there is one sub-task for each broker address.

In this way, the efficiency is maximized. Besides, the connection reuse mechanism in Workflow ensures that the total number of connections is kept within a reasonable range.

If there are multiple topic partitions under one broker address, you may create multiple clients and then create and start separate tasks for each topic partition to increase the throughput.

# Creating and Starting Kafka Tasks

To create and start a Kafka task, create a **WFKafkaClient** first, and then call **init** to initialize that **WFKafkaClient**.

~~~cpp
int init(const std::string& broker_url);

int init(const std::string& broker_url, const std::string& group);
~~~

In the above code snippet, **broker_url** means the address of the Kafka broker cluster. Its format is the same as the broker_url in the above section.

**group** means the group_name of a consumer group, which is used for the consumer group in a fetch task. In the case of produce tasks or fetch tasks without any consumer group, do not use this interface.

For a consumer group, you can specify the heartbeat interval in milliseconds to keep the heartbeats alive.

~~~cpp
void set_heartbeat_interval(size_t interval_ms);
~~~

Then you can create a Kafka task with that **WFKafkaClient**.

~~~cpp
using kafka_callback_t = std::function<void (WFKafkaTask *)>;

WFKafkaTask *create_kafka_task(const std::string& query, int retry_max, kafka_callback_t cb);

WFKafkaTask *create_kafka_task(int retry_max, kafka_callback_t cb);
~~~

In the above code snippet, **query** includes the type of the task, the topic and other properties. **retry_max** means the maximum number of retries. **cb** is the user-defined callback function, which will be called after the task is completed.

You can also change the default settings of the task to meet your requirements. For details, refer to [KafkaDataTypes.h](/src/protocol/KafkaDataTypes.h).

~~~cpp
KafkaConfig config;
config.set_client_id("workflow");
task->set_config(std::move(config));
~~~

The supported configuration items are described below:

Item name | Type | Default value | Description
------ | ---- | -------| -------
produce_timeout | int | 100ms | Maximum time for a produce task.
produce_msg_max_bytes | int | 1000000 bytes | Maximum length of one message.
produce_msgset_cnt | int | 10000 | Maximum number of messages in one message set per communication.
produce_msgset_max_bytes | int | 1000000 bytes | Maximum length of the messages in one communication.
fetch_timeout | int | 100ms | Maximum timeout for a fetch task.
fetch_min_bytes | int | 1 byte | Minimum length of the messages in one fetch communication.
fetch_max_bytes | int | 50M bytes | Maximum length of the messages in one fetch communication.
fetch_msg_max_bytes | int | 1M bytes | Maximum length of one single message in a fetch communication.
offset_timestamp | long long int | -2 | Initial offset in the consumer group mode when there is no offset history. -2 means the oldest offset; -1 means the latest offset.
session_timeout | int | 10s | Maximum initialization timeout for joining a consumer group.
rebalance_timeout | int | 10s | Maximum timeout for synchronizing the consumer group information after a client joins the consumer group.
produce_acks | int | -1 | Number of brokers that must have replicated a message successfully before a produce task returns. -1 indicates all replica brokers.
allow_auto_topic_creation | bool | true | Whether a topic is created automatically for a produce task if it does not exist.
broker_version | char * | NULL | Version number of the brokers, which should be specified manually when the version is lower than 0.10.
compress_type | int | NoCompress | Compression type for produce messages.
client_id | char * | NULL | Identifier of a client.
check_crcs | bool | false | Whether to check crc32 in the messages of a fetch task.
offset_store | int | 0 | How to choose the offset when joining a consumer group: 1 means to use the specified offset, and 0 means to prefer the last committed offset.
sasl_mechanisms | char * | NULL | SASL authentication type. Currently only plain is supported; broader SASL support is under development.
sasl_username | char * | NULL | Username required for SASL authentication.
sasl_password | char * | NULL | Password required for SASL authentication.

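As an illustrative sketch only: this document shows **set_client_id** and **set_compress_type**; the other setter names below are assumptions following the same set_\<item\> pattern declared in [KafkaDataTypes.h](/src/protocol/KafkaDataTypes.h), so check that header for the exact interfaces. Several items can be adjusted on one **KafkaConfig** before attaching it to the task:

~~~cpp
// Configuration sketch. set_client_id and set_compress_type appear elsewhere
// in this document; set_produce_acks is assumed to follow the same naming
// pattern (verify against KafkaDataTypes.h).
KafkaConfig config;
config.set_client_id("workflow");       // identifier reported to the brokers
config.set_produce_acks(-1);            // wait for all replica brokers
config.set_compress_type(Kafka_Zstd);   // one of the four supported codecs
task->set_config(std::move(config));
~~~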
After configuring all the parameters, you can call the **start** interface to start the Kafka task.

# About Produce Tasks

1\. After you create and initialize a **WFKafkaClient**, you can specify the topic and other information in the **query** to create a **WFKafkaTask**.

For example:

~~~cpp
int main(int argc, char *argv[])
{
	...
	client = new WFKafkaClient();
	client->init(url);
	task = client->create_kafka_task("api=produce&topic=xxx&topic=yyy", 3, kafka_callback);

	...
	task->start();
	...
}
~~~

2\. After the **WFKafkaTask** is created, call **set_key**, **set_value**, **add_header_pair** and other methods to build a **KafkaRecord**.

For more interfaces on **KafkaRecord**, refer to [KafkaDataTypes.h](/src/protocol/KafkaDataTypes.h).

Then you can call **add_produce_record** to add the **KafkaRecord**. For the detailed definitions of the interfaces, refer to [WFKafkaClient.h](/src/client/WFKafkaClient.h).

For the second parameter **partition** in **add_produce_record**, a value >= 0 specifies that fixed **partition**; -1 means that the **partition** is chosen randomly or by the user-defined **kafka_partitioner_t**.

For **kafka_partitioner_t**, you can call the **set_partitioner** interface to specify user-defined rules.

For example:

~~~cpp
int main(int argc, char *argv[])
{
	...
	WFKafkaClient *client_produce = new WFKafkaClient();
	client_produce->init(url);
	task = client_produce->create_kafka_task("api=produce&topic=xxx&topic=yyy", 3, kafka_callback);
	task->set_partitioner(partitioner);

	KafkaRecord record;
	record.set_key("key1", strlen("key1"));
	record.set_value(buf, sizeof(buf));
	record.add_header_pair("hk1", 3, "hv1", 3);
	task->add_produce_record("workflow_test1", -1, std::move(record));

	...
	task->start();
	...
}
~~~

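The exact **kafka_partitioner_t** signature is declared in [WFKafkaClient.h](/src/client/WFKafkaClient.h); as a hypothetical sketch of the kind of rule **set_partitioner** expects, a deterministic key-hash partitioner might look like:

~~~cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>

// Hypothetical partitioning rule: map a record key to a partition index in
// [0, partition_num). This is only an illustration of a deterministic,
// key-based rule; adapt it to the real kafka_partitioner_t signature.
int partition_by_key(const void *key, size_t key_len, int partition_num)
{
	if (key == NULL || key_len == 0 || partition_num <= 0)
		return 0;	// fall back to partition 0 for keyless records

	std::string k(static_cast<const char *>(key), key_len);
	return static_cast<int>(std::hash<std::string>{}(k) % partition_num);
}
~~~

A rule like this keeps all records with the same key on the same partition, which preserves per-key ordering.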
3\. You can enable one of the four compression types supported by Kafka in a produce task through the configuration.

For example:

~~~cpp
int main(int argc, char *argv[])
{
	...
	WFKafkaClient *client_produce = new WFKafkaClient();
	client_produce->init(url);
	task = client_produce->create_kafka_task("api=produce&topic=xxx&topic=yyy", 3, kafka_callback);

	KafkaConfig config;
	config.set_compress_type(Kafka_Zstd);
	task->set_config(std::move(config));

	KafkaRecord record;
	record.set_key("key1", strlen("key1"));
	record.set_value(buf, sizeof(buf));
	record.add_header_pair("hk1", 3, "hv1", 3);
	task->add_produce_record("workflow_test1", -1, std::move(record));

	...
	task->start();
	...
}
~~~

# About Fetch Tasks

You may use either the consumer group mode or the manual mode for fetch tasks.

1\. Manual mode

In this mode, you do not need to specify any consumer group, but you must specify the topic, partition and offset.

For example:

~~~cpp
client = new WFKafkaClient();
client->init(url);
task = client->create_kafka_task("api=fetch", 3, kafka_callback);

KafkaToppar toppar;
toppar.set_topic_partition("workflow_test1", 0);
toppar.set_offset(0);
task->add_toppar(toppar);
~~~

2\. Consumer group mode

In this mode, you must specify the name of the consumer group when initializing a client.

For example:

~~~cpp
int main(int argc, char *argv[])
{
	...
	WFKafkaClient *client_fetch = new WFKafkaClient();
	client_fetch->init(url, cgroup_name);
	task = client_fetch->create_kafka_task("api=fetch&topic=xxx&topic=yyy", 3, kafka_callback);

	...
	task->start();
	...
}
~~~

3\. Committing offsets

In the consumer group mode, after a message is consumed, you can create a commit task in the callback to commit the consumption records.

For example:

~~~cpp
void kafka_callback(WFKafkaTask *task)
{
	...
	commit_task = client.create_kafka_task("api=commit", 3, kafka_callback);

	...
	commit_task->start();
	...
}
~~~

# Closing the Client

In the consumer group mode, before you close a client, you must call **create_leavegroup_task** to create a **leavegroup_task**.

This task sends a **leavegroup** packet. If no **leavegroup_task** is started, the group does not know that the client has left and will trigger a rebalance.

# Processing Kafka Results

The data structure of the message result set is **KafkaResult**. You can call **get_result()** on the **WFKafkaTask** to retrieve the result.

Then you can call **fetch_records** on the **KafkaResult** to retrieve all the records of the task. The records form a two-dimensional vector.

The first dimension is the topic partition, and the second dimension is the **KafkaRecord** under that topic partition.

[KafkaResult.h](/src/protocol/KafkaResult.h) contains the definition of **KafkaResult**.

~~~cpp
void kafka_callback(WFKafkaTask *task)
{
	int state = task->get_state();
	int error = task->get_error();

	// handle error states
	...

	std::vector<std::vector<protocol::KafkaRecord *>> records;
	protocol::KafkaResult *result = task->get_result();
	result->fetch_records(records);

	for (auto &v : records)
	{
		for (auto &w : v)
		{
			const void *value;
			size_t value_len;
			w->get_value(&value, &value_len);
			printf("produce\ttopic: %s, partition: %d, status: %d, offset: %lld, val_len: %zu\n",
				   w->get_topic(), w->get_partition(), w->get_status(), w->get_offset(), value_len);
		}
	}
	...

	protocol::KafkaResult new_result = std::move(*task->get_result());
	if (new_result.fetch_records(records))
	{
		for (auto &v : records)
		{
			if (v.empty())
				continue;

			for (auto &w : v)
			{
				if (fp)
				{
					const void *value;
					size_t value_len;
					w->get_value(&value, &value_len);
					fwrite(value, value_len, 1, fp);
				}
			}
		}
	}
	...
}
~~~