
  • Overview
  • Configure KaiwuDBWriter
  • Configure KaiwuDBReader
  • References

Database Migration

Overview

DataX is a powerful data ETL (Extract, Transform, Load) tool that facilitates data transfer between various heterogeneous data sources, including MySQL, SQL Server, Oracle, PostgreSQL, Hadoop HDFS, Apache Hive, Apache HBase, OTS, and many others.

DataX simplifies data migration by abstracting different data sources into a plugin architecture: reader plugins extract data from the source and writer plugins load data into the target. This approach enables seamless data transfer between different systems without requiring complex technical knowledge.

KWDB extends the DataX framework by providing dedicated KaiwuDBWriter and KaiwuDBReader plugins, allowing efficient data exchange between KWDB and various database systems.
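
Every DataX job follows the same reader/writer pattern. The sketch below is a minimal, hypothetical job skeleton that only illustrates where the reader and writer plugins fit; the plugin names and parameters are placeholders, and complete, working configurations are shown in the examples later in this document.

{
  "job": {
    "content": [
      {
        "reader": {
          "name": "<your-reader-plugin>",
          "parameter": {}
        },
        "writer": {
          "name": "<your-writer-plugin>",
          "parameter": {}
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 1
      }
    }
  }
}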

KaiwuDBWriter

The KaiwuDBWriter plugin receives the records produced by DataX reader plugins and writes them to KWDB time-series and relational tables, in either full or incremental mode.

KaiwuDBWriter supports migrating data from the following databases to KWDB:

Note

While DataX theoretically supports data migration from various database types to KWDB, only the following combinations have been officially tested and verified.

| Database | Plugin | Supported Versions | Known Issues and Notes |
| --- | --- | --- | --- |
| ClickHouse | ClickHouseReader | Plugin supported versions | The JDBC driver in the DataX plugin uses an outdated version that lacks millisecond precision for time reads, potentially causing data deletion errors. We recommend upgrading the DataX plugin's JDBC driver first and resolving any upgrade-related issues before proceeding with data migration. NULL values displayed as 0 in ClickHouse will be converted to false in KWDB. Binary data imported into KWDB will appear as \x followed by an empty string. |
| InfluxDB | InfluxDB10Reader | Version 1.x | - |
| InfluxDB | InfluxDB20Reader | Version 2.x | - |
| KWDB | KaiwuDBReader | Version 2.0 and above | - |
| MongoDB | DataX MongoDBReader | Plugin supported versions | The DataX built-in reader plugin does not support MongoDB 7. Migration of the MongoDB _id column is not supported. |
| MySQL | DataX MysqlReader | Plugin supported versions | - |
| OpenTSDB | DataX OpenTSDBReader | Version 2.3.X | OpenTSDB is a key-value database, so data is read as key-value pairs. KaiwuDBWriter converts periods (.) in OpenTSDB metric names to underscores (_) and uses the result as the KWDB table name; each table contains k_timestamp and value columns and is created automatically if it does not exist (see the query example after this table). When specifying the read range with beginDateTime and endDateTime, the interval must be at least 1 hour. |
| Oracle | DataX OracleReader | Plugin supported versions | - |
| PostgreSQL | DataX PostgresqlReader | Plugin supported versions | - |
| TDengine | tdengine20reader | Version 2.4.0.14 and below | When importing TDengine data into KWDB, null values in BOOL fields are written as false, and null values in NCHAR fields as empty strings. TDengineReader cannot read JSON type data; tag columns in JSON format must be converted to another type to prevent migration failure. |
| TDengine | DataX TDengineReader | Version 2.4.0.14 to 3.0.0 | |
| TDengine | tdengine30reader | Version 3.0.0 and above | |
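
As the OpenTSDB note above describes, KaiwuDBWriter derives the KWDB table name from the metric name by replacing periods with underscores. For example, data for a hypothetical OpenTSDB metric named sys.cpu.user would land in a KWDB table named sys_cpu_user with k_timestamp and value columns, which you could inspect with a query like the following (the metric name is an assumption for illustration only):

/* Inspect the most recent rows written for the hypothetical metric sys.cpu.user */
SELECT k_timestamp, value FROM sys_cpu_user ORDER BY k_timestamp DESC LIMIT 10;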

KaiwuDBReader

The KaiwuDBReader plugin allows DataX to extract data from KWDB for writing to other databases, enabling efficient data migration and integration.

KaiwuDBReader supports transferring data from KWDB to the following databases:

Note

While DataX theoretically supports data migration from KWDB to other types of databases, only the following combinations have been officially tested and verified.

| Database | Plugin | Supported Versions | Notes |
| --- | --- | --- | --- |
| MySQL | DataX MysqlWriter | Plugin supported versions | - |
| TDengine | DataX TDengineWriter | Versions 2.x and 3.x | For large data volumes, set batchSize to 1000 for optimal performance. |
| KWDB | KaiwuDBWriter | Version 2.0 and above | - |

Configure KaiwuDBWriter

Prerequisites

  • DataX Deployment Environment
  • DataX Tools
  • Database and Privileges
    • Login credentials for the source database
    • Target database created in KWDB
    • User with necessary privileges on tables and databases (create databases, read/write data); see the sketch after this list
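
On the KWDB side, a minimal privilege setup might look like the following. This is only a sketch assuming KWDB's PostgreSQL-compatible SQL; the user name, password, and database name are placeholders, and your deployment may require broader or narrower grants.

/* Create a dedicated migration user (placeholder credentials) */
CREATE USER migrator WITH PASSWORD 'ChangeMe@123';

/* Grant the privileges KaiwuDBWriter needs on the target database */
GRANT ALL ON DATABASE benchmark TO migrator;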

Steps

  1. Install the KaiwuDBWriter plugin:

    1. Upload the KaiwuDB DataX plugin package to your DataX server.
    2. Extract the package and copy the kaiwudbwriter folder to the datax/plugin/writer/ directory.
  2. Create the job configuration file:

    1. Navigate to the datax/job/ directory.
    2. Create a DataX job configuration file that defines:
      • Connections to source and target databases
      • Data to be read and written
      • Job specifications

    TIP

    Job configuration requirements vary depending on the data source. You can generate a template by running:

    python ../bin/datax.py -r {YOUR_READER} -w {YOUR_WRITER}
    

    For example: python ../bin/datax.py -r mysqlreader -w kaiwudbwriter

  3. Execute the job:

    python ../bin/datax.py mysql2kwdb.json
    

TIP

For large data volumes, increase JVM memory using the --jvm parameter. For example:

python ../bin/datax.py mysql2kwdb.json --jvm="-Xms10G -Xmx10G"

A successful migration will display output similar to:

2024-01-24 9:20:25.262 [job-0] INFO  JobContainer -
Job Start Time       : 2024-01-24 9:20:15
Job End Time         : 2024-01-24 9:20:20
Total Duration       : 5s
Average Throughput   : 205B/s
Write Speed          : 5rec/s
Total Records Read   : 50
Total Failures       : 0

Examples

From MySQL To KWDB

DataX can transfer MySQL data into both time-series and relational tables in KWDB.

From a Relational Table to a Time-Series Table

The following example demonstrates how to transfer data from a MySQL relational table to a KWDB time-series table.

Prerequisites:

  • A time-series database (benchmark) has been created in KWDB.
  • A time-series table (cpu) has been created in the benchmark database.

You can create the required database and table using the following SQL commands:

/* Create a time-series database named benchmark */
CREATE TS DATABASE benchmark;

/* Create a time-series table named cpu */
CREATE TABLE benchmark.cpu (k_timestamp TIMESTAMPTZ NOT NULL, usage_user INT8 NOT NULL, usage_system INT8 NOT NULL, usage_idle INT8 NOT NULL) TAGS (id INT8 NOT NULL, hostname VARCHAR NOT NULL, region VARCHAR NOT NULL, datacenter VARCHAR NOT NULL) PRIMARY TAGS (id);
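
The MySQL source table is not defined by this guide. A hypothetical source table compatible with the reader configuration below might look like this; the id primary key is assumed here only so that splitPk has a column to split on, while the id tag written to KWDB comes from the constant 9001 in the reader's column list:

/* Hypothetical MySQL source table matching the reader's column list */
CREATE TABLE mysql_db.cpu (
  id INT AUTO_INCREMENT PRIMARY KEY,
  k_timestamp TIMESTAMP NOT NULL,
  usage_user BIGINT NOT NULL,
  usage_system BIGINT NOT NULL,
  usage_idle BIGINT NOT NULL
);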

Full Migration

{
  "job": {
    "content": [
      {
        "reader": {
          "name": "mysqlreader",
          "parameter": {
            "username": "mysql_user",
            "password": "123456",
            "column": [
              "k_timestamp",
              "usage_user",
              "usage_system",
              "usage_idle",
              "9001 as id",
              "'localhost' as hostname",
              "'beijing' as region",
              "'center' as datacenter"
            ],
            "splitPk": "id",
            "connection": [
              {
                "table": [
                  "cpu"
                ],
                "jdbcUrl": [
                  "jdbc:mysql://127.0.0.1:3306/mysql_db?useSSL=false&useUnicode=true&characterEncoding=utf8"
                ]
              }
            ]
          }
        },
        "writer": {
          "name": "kaiwudbwriter",
          "parameter": {
            "username": "kwdb_user",
            "password": "kwdb@123",
            "jdbcUrl": "jdbc:kaiwudb://127.0.0.1:26257/benchmark",
            "table": "cpu",  
            "column": [
              "k_timestamp",
              "usage_user",
              "usage_system",
              "usage_idle",
              "id",
              "hostname",
              "region",
              "datacenter"
            ],
            "preSql": [
              ""
            ],
            "postSql": [
              ""
            ],
            "batchSize": 100
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 1
      }
    }
  }
 }
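
After the job finishes, a quick sanity check is to compare row counts on both sides; run the first statement against MySQL and the second against KWDB, adjusting the database names to your setup:

/* On the MySQL source */
SELECT COUNT(*) FROM mysql_db.cpu;

/* On the KWDB target */
SELECT COUNT(*) FROM benchmark.cpu;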

Incremental Migration

Incremental data migration can be achieved by limiting the data range using querySql or where parameters.

Example 1: Using querySql to define the migration range

{
  "job": {
    "content": [
      {
        "reader": {
          "name": "mysqlreader",
          "parameter": {
            "username": "root",
            "password": "123456",
            "connection": [
              {
                "querySql": [
                  "select k_timestamp, usage_user, usage_system, usage_idle, 9001 as id, 'localhost' as hostname, 'beijing' as region, 'center' as datacenter from cpu where id > 2000"
                ],
                "jdbcUrl": [
                  "jdbc:mysql://127.0.0.1:3306/test_db?useSSL=false&useUnicode=true&characterEncoding=utf8"
                ]
              }
            ]
          }
        },
        "writer": {
          "name": "kaiwudbwriter",
          "parameter": {
            "username": "kwdb_user",
            "password": "kwdb@123",
            "jdbcUrl": "jdbc:kaiwudb://127.0.0.1:26257/benchmark",
            "table": "cpu",  
            "column": [
              "k_timestamp",
              "usage_user",
              "usage_system",
              "usage_idle",
              "id",
              "hostname",
              "region",
              "datacenter"
            ],
            "preSql": [
              ""
            ],
            "postSql": [
              ""
            ],
            "batchSize": 100
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 1
      }
    }
  }
 }

Example 2: Using where to define the migration range

{
  "job": {
    "content": [
      {
        "reader": {
          "name": "mysqlreader",
          "parameter": {
            "username": "root",
            "password": "123456",
            "column": [
              "k_timestamp",
              "usage_user",
              "usage_system",
              "usage_idle",
              "9001 as id",
              "'localhost' as hostname",
              "'beijing' as region",
              "'center' as datacenter"
            ],
            "connection": [
              {
                "table": [
                  "cpu"
                ],
                "jdbcUrl": [
                  "jdbc:mysql://127.0.0.1:3306/test_db?useSSL=false&useUnicode=true&characterEncoding=utf8"
                ]
              }
            ],
            "where": "id > 1000"
          }
        },
        "writer": {
          "name": "kaiwudbwriter",
          "parameter": {
            "username": "kwdb_user",
            "password": "kwdb@123",
            "jdbcUrl": "jdbc:kaiwudb://127.0.0.1:26257/benchmark",
            "table": "cpu",  
            "column": [
              "k_timestamp",
              "usage_user",
              "usage_system",
              "usage_idle",
              "id",
              "hostname",
              "region",
              "datacenter"
            ],
            "writeMode": "INSERT",
            "preSql": [
              ""
            ],
            "postSql": [
              ""
            ],
            "batchSize": 100
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 1
      }
    }
  }
 }
From a Relational Table to a Relational Table

The following example demonstrates how to transfer data from a MySQL relational table to a KWDB relational table.

For relational tables, you can set the writeMode to either INSERT or UPDATE.

Prerequisites:

  • A relational database (order_db) has been created in KWDB.
  • A relational table (orders) has been created in the order_db database.

You can create the required database and table using the following SQL commands:

/* Create a relational database named order_db */
CREATE DATABASE order_db;

/* Create a relational table named orders */
CREATE TABLE order_db.orders (order_id SERIAL PRIMARY KEY, created_at TIMESTAMP, product_count INT, total_amount FLOAT, customer_id INT);

Example:

{
  "job": {
    "content": [
      {
        "reader": {
          "name": "mysqlreader",
          "parameter": {
            "username": "mysql_user",
            "password": "123456",
            "column": [
              "order_id",
              "created_at",
              "product_count",
              "total_amount",
              "customer_id"
            ],
            "splitPk": "order_id",
            "connection": [
              {
                "table": [
                  "orders"
                ],
                "jdbcUrl": [
                  "jdbc:mysql://127.0.0.1:3306/ecommerce_db?useSSL=false&useUnicode=true&characterEncoding=utf8"
                ]
              }
            ]
          }
        },
        "writer": {
          "name": "kaiwudbwriter",
          "parameter": {
            "username": "kwdb_user",
            "password": "kwdb@123",
            "jdbcUrl": "jdbc:kaiwudb://127.0.0.1:26257/order_db",
            "table": "orders",  
            "column": [
              "order_id",
              "created_at",
              "product_count",
              "total_amount",
              "customer_id"
            ],
            "writeMode": "INSERT",
            "preSql": [
              "DELETE FROM orders WHERE total_amount = 0"
            ],
            "postSql": [
            ],
            "batchSize": 100
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 1
      }
    }
  }
}
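
To switch the same job to upsert semantics, only the writer's writeMode changes. The fragment below is a sketch of the writer block with writeMode set to UPDATE, which makes KaiwuDBWriter use UPSERT statements; all other parameters stay as in the example above:

"writer": {
  "name": "kaiwudbwriter",
  "parameter": {
    "username": "kwdb_user",
    "password": "kwdb@123",
    "jdbcUrl": "jdbc:kaiwudb://127.0.0.1:26257/order_db",
    "table": "orders",
    "column": [
      "order_id",
      "created_at",
      "product_count",
      "total_amount",
      "customer_id"
    ],
    "writeMode": "UPDATE",
    "batchSize": 100
  }
}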

From TDengine To KWDB

You can migrate data from TDengine to KWDB’s time-series tables in several ways:

  • Migrate individual subtables from TDengine
  • Migrate regular tables from TDengine
  • Migrate all subtable data under a TDengine supertable into a single time-series table in KWDB
From a Regular Table or Subtable to a Time-Series Table

The following example demonstrates how to transfer data from a regular TDengine table to a KWDB time-series table.

Prerequisites:

  • TDengine:

    • A database (benchmark) has been created.
    • A regular table (cpu) has been created in the benchmark database.
    /* Create a database named benchmark */
    CREATE DATABASE IF NOT EXISTS benchmark;
    
    /* Create a regular table named cpu (in TDengine, the first column must be of type TIMESTAMP) */
    CREATE TABLE benchmark.cpu (k_timestamp TIMESTAMP, usage_user BIGINT, usage_system BIGINT, usage_idle BIGINT, id BIGINT, hostname VARCHAR(64), region VARCHAR(64), datacenter VARCHAR(64));
    
  • KWDB:

    • A time-series database (benchmark) has been created.
    • A time-series table (cpu) has been created in the benchmark database.
    /* Create a time-series database named benchmark */
    CREATE TS DATABASE benchmark;
    
    /* Create a time-series table named cpu */
    CREATE TABLE benchmark.cpu (k_timestamp TIMESTAMPTZ NOT NULL, usage_user INT8 NOT NULL, usage_system INT8 NOT NULL, usage_idle INT8 NOT NULL) TAGS (id INT8 NOT NULL, hostname VARCHAR NOT NULL, region VARCHAR NOT NULL, datacenter VARCHAR NOT NULL) PRIMARY TAGS (id);
    

Example:

{
  "job": {
    "content": [
      {
        "reader": {
          "name": "tdengine30reader",
          "parameter": {
            "username": "root",
            "password": "taosdata",
            "column": [
              "k_timestamp",
              "usage_user",
              "usage_system",
              "usage_idle",
              "9001 as id",
              "'localhost' as hostname",
              "'beijing' as region",
              "'center' as datacenter"
            ],
            "connection": [
              {
                "table": [
                  "cpu"
                ],
                "jdbcUrl": [
                  "jdbc:TAOS-RS://127.0.0.1:6041/test_db?timestampFormat=STRING"
                ]
              }
            ]
          }
        },
        "writer": {
          "name": "kaiwudbwriter",
          "parameter": {
            "username": "root",
            "password": "kwdb@123",
            "jdbcUrl": "jdbc:kaiwudb://127.0.0.1:26257/tdengine_kwdb",
            "table": "cpu", 
            "column": [
              "k_timestamp",
              "usage_user",
              "usage_system",
              "usage_idle",
              "id",
              "hostname",
              "region",
              "datacenter"
            ],
            "batchSize": 100
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 1
      }
    }
  }
}
From a Supertable to a Time-Series Table

The following example demonstrates how to transfer data from a TDengine supertable and its subtables to a KWDB time-series table.

Prerequisites:

  • TDengine:

    • A database (benchmark) has been created.
    • A supertable (st) has been created in the benchmark database.
    • Two subtables (ct1 and ct2) have been created under st.
    /* Create a database named benchmark */
    CREATE DATABASE IF NOT EXISTS benchmark;
    
    /* Create a supertable named st (in TDengine, the first column must be of type TIMESTAMP) */
    CREATE STABLE benchmark.st (k_timestamp TIMESTAMP, usage_user BIGINT, usage_system BIGINT, usage_idle BIGINT, hostname VARCHAR(64), region VARCHAR(64), datacenter VARCHAR(64)) TAGS (id BIGINT);
    
    /* Create two subtables named ct1 and ct2 */
    CREATE TABLE benchmark.ct1 USING benchmark.st TAGS (1);
    CREATE TABLE benchmark.ct2 USING benchmark.st TAGS (2);
    
  • KWDB:

    • A time-series database (benchmark) has been created.
    • A time-series table (st) has been created in the benchmark database.
    /* Create a time-series database named benchmark */
    CREATE TS DATABASE benchmark;
    
    /* Create a time-series table named st */
    CREATE TABLE benchmark.st (k_timestamp TIMESTAMPTZ NOT NULL, usage_user INT8 NOT NULL, usage_system INT8 NOT NULL, usage_idle INT8 NOT NULL) TAGS (id INT8 NOT NULL, hostname VARCHAR NOT NULL, region VARCHAR NOT NULL, datacenter VARCHAR NOT NULL) PRIMARY TAGS (id);
    

Example:

{
  "job": {
    "content": [
      {
        "reader": {
          "name": "tdengine30reader",
          "parameter": {
            "username": "root",
            "password": "taosdata",
            "column": [
              "k_timestamp",
              "usage_user",
              "usage_system",
              "usage_idle",
              "9001 as id",
              "'localhost' as hostname",
              "'beijing' as region",
              "'center' as datacenter"
            ],
            "connection": [
              {
                "table": [
                  "st"
                ],
                "jdbcUrl": [
                  "jdbc:TAOS-RS://127.0.0.1:6041/test_db?timestampFormat=STRING"
                ]
              }
            ]
          }
        },
        "writer": {
          "name": "kaiwudbwriter",
          "parameter": {
            "username": "root",
            "password": "kwdb@123",
            "jdbcUrl": "jdbc:kaiwudb://127.0.0.1:26257/tdengine_kwdb",
            "table": "st",  
            "column": [
              "k_timestamp",
              "usage_user",
              "usage_system",
              "usage_idle",
              "id",
              "hostname",
              "region",
              "datacenter"
            ],
            "batchSize": 100
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 1
      }
    }
  }
}
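
After this job runs, rows read from the supertable (that is, from both subtables ct1 and ct2) end up in the single target table. Assuming the writer points at the benchmark database as in the prerequisites, a quick check of the migrated volume is:

/* Count the rows migrated from all subtables into the single time-series table */
SELECT COUNT(*) FROM benchmark.st;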

Configure KaiwuDBReader

Prerequisites

  • DataX Deployment Environment
  • DataX Tools
  • Database and Privileges
    • Login credentials for the source KWDB database
    • Target database and tables created in the destination system
    • User with read privileges on the source KWDB tables and databases

Steps

  1. Install the KaiwuDBReader plugin:

    1. Upload the KaiwuDB DataX plugin package to your DataX server.
    2. Extract the package and copy the kaiwudbreader folder to the datax/plugin/reader/ directory.
  2. Create the job configuration file:

    1. Navigate to the datax/job/ directory.
    2. Create a DataX job configuration file that defines:
      • Connections to source and target databases
      • Data to be read and written
      • Job specifications

    TIP

    Job configuration requirements vary depending on the data source. You can generate a template by running:

    python ../bin/datax.py -r {YOUR_READER} -w {YOUR_WRITER}
    

    For example: python ../bin/datax.py -r kaiwudbreader -w mysqlwriter

  3. Execute the job:

    python ../bin/datax.py kwdb2mysql.json
    

Examples

From KWDB To MySQL

The following example demonstrates how to transfer data from a KWDB time-series table to a MySQL table.

Prerequisites:

  • A time-series database (benchmark) has been created in KWDB.
  • A time-series table (cpu) has been created in the benchmark database.

You can create the required database and table using the following SQL commands:

/* Create a time-series database named benchmark */
CREATE TS DATABASE benchmark;

/* Create a time-series table named cpu */
CREATE TABLE benchmark.cpu (k_timestamp TIMESTAMPTZ NOT NULL, usage_user INT8 NOT NULL, usage_system INT8 NOT NULL, usage_idle INT8 NOT NULL) TAGS (id INT8 NOT NULL, hostname VARCHAR NOT NULL, region VARCHAR NOT NULL, datacenter VARCHAR NOT NULL) PRIMARY TAGS (id);
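
DataX's MysqlWriter writes into an existing table, so the MySQL side needs a table whose columns match the writer's column list. A hypothetical MySQL target database and table compatible with the example below (the database name matches the writer's jdbcUrl; the column types are assumptions, not prescribed by this guide):

/* Hypothetical MySQL target database and table matching the writer's column list */
CREATE DATABASE IF NOT EXISTS benchmark;

CREATE TABLE benchmark.cpu (
  k_timestamp DATETIME(3) NOT NULL,
  usage_user BIGINT NOT NULL,
  usage_system BIGINT NOT NULL,
  usage_idle BIGINT NOT NULL,
  id BIGINT NOT NULL,
  hostname VARCHAR(64) NOT NULL,
  region VARCHAR(64) NOT NULL,
  datacenter VARCHAR(64) NOT NULL
);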

Example:

{
  "job": {
    "content": [
      {
        "reader": {
          "name": "kaiwudbreader",
          "parameter": {
            "username": "test",
            "password": "<password>",
            "jdbcUrl": "jdbc:kaiwudb://127.0.0.1:26257/benchmark",
            "table": "cpu",  
            "column": [
              "k_timestamp",
              "usage_user",
              "usage_system",
              "usage_idle",
              "id",
              "hostname",
              "region",
              "datacenter"
            ],
            "tsColumn": "k_timestamp",
            "beginTime": "2024-05-01 10:00:000",
            "endTime": "2024-05-02 10:00:000",
          }
        },
        "writer": {
          "name": "mysqlwriter",
          "parameter": {
            "writeMode": "insert",
            "username": "root",
            "password": "123456",
            "column": [
              "k_timestamp",
              "usage_user",
              "usage_system",
              "usage_idle",
              "id",
              "hostname",
              "region",
              "datacenter"
            ],
            "preSql": [
              ""
            ],
            "connection": [
              {
                "table": [
                  "cpu"
                ],
                "jdbcUrl": "jdbc:mysql://127.0.0.1:3306/benchmark?useSSL=false&useUnicode=true&characterEncoding=utf8"
              }
            ]
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 1
      }
    }
  }
}

From KWDB To KWDB

The following example demonstrates how to transfer data from a KWDB time-series table to a KWDB time-series table.

Prerequisites:

  • A time-series database (source) has been created in the source KWDB cluster.
  • A time-series database (target) has been created in the target KWDB cluster.
  • A time-series table (cpu) has been created in both the source and target databases.

You can create the required databases and tables using the following SQL commands:

  • Source KWDB:

    /* Create a time-series database named source */
    CREATE TS DATABASE source;
    /* Create a time-series table named cpu */
    CREATE TABLE source.cpu (k_timestamp TIMESTAMPTZ NOT NULL, usage_user INT8 NOT NULL, usage_system INT8 NOT NULL, usage_idle INT8 NOT NULL) TAGS (id INT8 NOT NULL, hostname VARCHAR NOT NULL, region VARCHAR NOT NULL, datacenter VARCHAR NOT NULL) PRIMARY TAGS (id);
    
  • Target KWDB:

    /* Create a time-series database named target */
    CREATE TS DATABASE target;
    /* Create a time-series table named cpu */
    CREATE TABLE target.cpu (k_timestamp TIMESTAMPTZ NOT NULL, usage_user INT8 NOT NULL, usage_system INT8 NOT NULL, usage_idle INT8 NOT NULL) TAGS (id INT8 NOT NULL, hostname VARCHAR NOT NULL, region VARCHAR NOT NULL, datacenter VARCHAR NOT NULL) PRIMARY TAGS (id);
    

Example:

{
  "job": {
    "content": [
      {
        "reader": {
          "name": "kaiwudbreader",
          "parameter": {
            "username": "test",
            "password": "<password>",
            "jdbcUrl": "jdbc:kaiwudb://127.0.0.1:26257/source",
            "querySql": [
              "select k_timestamp, usage_user, usage_system, usage_idle, id, hostname, region, datacenter from cpu"
            ]
          }
        },
        "writer": {
          "name": "kaiwudbwriter",
          "parameter": {
            "username": "test",
            "password": "<password>",
            "jdbcUrl": "jdbc:kaiwudb://127.0.0.1:26257/target",
            "table": "cpu",
            "column": [
              "k_timestamp",
              "usage_user",
              "usage_system",
              "usage_idle",
              "id",
              "hostname",
              "region",
              "datacenter"
            ],
            "preSql": [
              ""
            ],
            "batchSize": 100
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 1
      }
    }
  }
}
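
As with the other examples, you can verify the transfer by comparing row counts between the two clusters; run the first statement on the source KWDB and the second on the target KWDB:

/* On the source KWDB cluster */
SELECT COUNT(*) FROM source.cpu;

/* On the target KWDB cluster */
SELECT COUNT(*) FROM target.cpu;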

References

KaiwuDBWriter Parameters

| Parameter | Description |
| --- | --- |
| name | The identifier for the KaiwuDBWriter plugin. Must be set to kaiwudbwriter. |
| username | Username for connecting to the KWDB database. |
| password | Password for connecting to the KWDB database. |
| jdbcUrl | The JDBC connection URL for the KWDB database. For more information, see JDBC Connection Parameters. |
| table | The target table name where data will be written. This table must exist and contain all the specified columns. |
| column | The list of columns in the target table. The order and number of columns must match those defined in the reader's column or querySql configuration. |
| writeMode | (Optional) Specifies the data write mode. Supports INSERT and UPDATE. The default is INSERT, which uses the INSERT statement to insert data. If set to UPDATE, the UPSERT statement is used instead. Note: this parameter only applies when migrating data to relational tables. |
| preSql | (Optional) SQL statements to execute before data migration begins. Can be used for data preparation, validation, or environment setup tasks. |
| postSql | (Optional) SQL statements to execute after data migration completes. Can be used for data verification, cleanup, or environment restoration tasks. |
| batchSize | (Optional) The number of records written per batch. Default is 1. |

KaiwuDBReader Parameters

| Parameter | Description |
| --- | --- |
| name | The identifier for the KaiwuDBReader plugin. Must be set to kaiwudbreader. |
| username | Username for connecting to the KWDB database. |
| password | Password for connecting to the KWDB database. |
| jdbcUrl | The JDBC connection URL for the KWDB database. For more information, see JDBC Connection Parameters. |
| table | The source table to read data from. Only single-table reads are supported. Not required if querySql is specified. |
| column | The list of columns to read from the source table. Not required if querySql is specified. |
| where | (Optional) SQL WHERE clause to filter data when used with the table and column parameters. For configuration examples, see Data Migration from a Relational Table to a Time-Series Table. Not required if querySql is specified. |
| beginDateTime | (Optional) The start timestamp for data reading. Used with time-series data and the tsColumn parameter. Not required if querySql is specified. |
| endDateTime | (Optional) The end timestamp for data reading. Must be chronologically after beginDateTime. Used with time-series data and the tsColumn parameter. Not required if querySql is specified. |
| splitIntervalS | (Optional) The time interval, in seconds, used to partition data retrieval tasks. Default is 60 seconds. Recommendation: set this parameter based on data volume so that each task handles roughly 100,000 records. Not required if querySql is specified. |
| tsColumn | The timestamp column name used for time-based data filtering with beginDateTime and endDateTime. Not required if querySql is specified. |
| querySql | (Optional) Custom SQL query for complex data retrieval requirements. When specified, the following parameters are ignored: table, column, where, tsColumn, beginDateTime, endDateTime, and splitIntervalS. For example, to transfer data from a multi-table join, you can use a query like SELECT a, b FROM table_a JOIN table_b ON table_a.id = table_b.id (see the fragment after this table). |
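
For reference, a querySql-based KaiwuDBReader configuration only needs the connection settings and the query itself. The fragment below is a sketch using the join from the description above; the credentials, database, and table names are illustrative placeholders:

"reader": {
  "name": "kaiwudbreader",
  "parameter": {
    "username": "test",
    "password": "<password>",
    "jdbcUrl": "jdbc:kaiwudb://127.0.0.1:26257/<database>",
    "querySql": [
      "SELECT a, b FROM table_a JOIN table_b ON table_a.id = table_b.id"
    ]
  }
}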

Data Type Mapping

The following table outlines the mapping between DataX data types and KWDB data types:

| DataX | KWDB |
| --- | --- |
| INT | TINYINT, SMALLINT, INT |
| LONG | TINYINT, SMALLINT, INT, BIGINT, TIMESTAMP, TIMESTAMPTZ |
| DOUBLE | FLOAT, REAL, DOUBLE, DECIMAL |
| BOOL | BOOL, BIT |
| DATE | DATE, TIME, TIMESTAMP, TIMESTAMPTZ |
| BYTES | BYTES, VARBYTES |
| STRING | CHAR, NCHAR, VARCHAR, NVARCHAR, TIMESTAMP, TIMESTAMPTZ |

Error Messages

| Error Message | Resolution |
| --- | --- |
| KaiwuDBWriter-00: Invalid configuration | Review the DataX job configuration file for errors. |
| KaiwuDBWriter-01: Missing required value | Ensure all mandatory parameters are properly specified in the configuration file. |
| KaiwuDBWriter-02: Invalid value | Verify that parameter values have the correct format and data type and fall within acceptable ranges. |
| KaiwuDBWriter-03: Runtime exception | Retry after reviewing the configuration file. If the issue persists after reconfiguration, contact KWDB technical support. |
| KaiwuDBWriter-04: DataX data type cannot be mapped to KWDB data type | Review the data type mapping between your source and KWDB. See Data Type Mapping for compatible type conversions. |
| KaiwuDBWriter-05: Feature not supported | The requested functionality is not currently supported by KWDB. |