| title | summary | category |
| --- | --- | --- |
| TiDB-Lightning Troubleshooting | Learn about common errors and solutions of TiDB-Lightning. | tools |
When Lightning encounters an unrecoverable error, it exits with a nonzero exit code and records the reason in the log file. Errors are typically printed at the end of the log. You can also search for the string `[error]` to look for non-fatal errors.
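For example, one quick way to do this is with standard shell tools; the log file name below is just a typical example and may differ in your deployment:

```sh
# List the most recent non-fatal errors recorded by Lightning.
# The log path is an assumption; adjust it to your actual log file.
grep -n '\[error\]' tidb-lightning.log | tail -n 20
```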
This document summarizes some commonly encountered errors in the `tidb-lightning` log file and their solutions.
Normally it takes Lightning 5 minutes per thread to import a 256 MB chunk. It is an error if the speed is much slower than this. The time taken for each chunk can be checked from log entries mentioning `restore chunk … takes`. It can also be observed from the metrics on Grafana.
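For instance, assuming the same example log file name as above, the per-chunk timings can be extracted like this:

```sh
# Print the time taken for each imported chunk.
grep 'restore chunk .* takes' tidb-lightning.log
```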
There are several reasons why Lightning becomes slow:
Cause 1: `region-concurrency` is too high, which causes thread contention and reduces performance.

- The setting can be found at the start of the log by searching for `region-concurrency`.
- If Lightning shares the same machine with other services (e.g. Importer), `region-concurrency` must be manually set to 75% of the total number of CPU cores.
- If there is a quota on CPU (e.g. limited by K8s settings), Lightning may not be able to read this out. In this case, `region-concurrency` must also be manually reduced.
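As a minimal sketch, assuming a 16-core machine shared with other services, the manual setting could look like this in `tidb-lightning.toml` (the exact value is illustrative):

```toml
[lightning]
# Roughly 75% of the 16 CPU cores, because the machine is shared with other services.
region-concurrency = 12
```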
Cause 2: The table is too complex.

Every additional index introduces a new KV pair for each row. If there are N indices, the actual size to be imported would be approximately (N+1) times the size of the mydumper output. If the indices are negligible, you may first remove them from the schema, and add them back via `CREATE INDEX` after the import is complete.
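A minimal sketch of this workflow, using a hypothetical table `orders` with one secondary index, might look like the following:

```sql
-- In the mydumper schema file (e.g. a hypothetical orders-schema.sql), keep only the
-- PRIMARY KEY so each row produces a single KV pair during import:
CREATE TABLE orders (
    id BIGINT PRIMARY KEY,
    customer_id BIGINT NOT NULL,
    amount DECIMAL(10, 2)
);

-- After the import is complete, add the secondary index back:
CREATE INDEX idx_customer ON orders (customer_id);
```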
Cause 3: Lightning is too old.

Try the latest version! It may contain new speed improvements.
Cause: The checksum of a table in the local data source differs from the checksum of the remote imported database. This error has several deeper reasons:

- The table might already have data before the import. This old data can affect the final checksum.

- If the table does not have an integer PRIMARY KEY, some rows might be imported repeatedly between checkpoints. This is a known bug to be fixed in the next release.

- If the remote checksum is 0, which means nothing is imported, it is possible that the cluster is too hot and fails to take in any data.

- If the data is mechanically generated, ensure it respects the constraints of the table:

    - `AUTO_INCREMENT` columns need to be positive, and do not contain the value "0".
    - The UNIQUE and PRIMARY KEYs must have no duplicated entries.
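For the last point, hedged sanity checks such as the following can be run against the source data before importing; the table `t`, the `AUTO_INCREMENT` column `id`, and the unique column `u` are hypothetical names:

```sql
-- Rows violating the AUTO_INCREMENT requirement (values must be positive, never 0).
SELECT COUNT(*) FROM t WHERE id <= 0;

-- Values that would violate a UNIQUE or PRIMARY KEY constraint.
SELECT u, COUNT(*) FROM t GROUP BY u HAVING COUNT(*) > 1;
```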
Solutions:

- Delete the corrupted data with `tidb-lightning-ctl --error-checkpoint-destroy=all`, and restart Lightning to import the affected tables again.

- Consider using an external database to store the checkpoints (change `[checkpoint] dsn`, as sketched below) to reduce the target database's load.
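For the second solution, a sketch of the relevant `tidb-lightning.toml` section could look like the one below; only the `[checkpoint] dsn` key comes from the text above, while the other keys, host, and credentials are assumptions and placeholders:

```toml
[checkpoint]
enable = true
# Assumed key: store checkpoints in an external MySQL-compatible database
# instead of the target cluster.
driver = "mysql"
# Placeholder DSN; replace the credentials and address with your own.
dsn = "root:password@tcp(192.168.1.100:3306)/"
```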
Cause: The number of concurrent engine files exceeds the limit imposed by `tikv-importer`. This could be caused by misconfiguration. Additionally, if `tidb-lightning` exited abnormally, an engine file might be left in a dangling open state, which could cause this error as well.
Solutions:

- Increase the value of the `max-open-engine` setting in `tikv-importer.toml`. This value is typically dictated by the available memory and can be calculated as

    Max Memory Usage ≈ `max-open-engine` × `write-buffer-size` × `max-write-buffer-number`

    (see the worked example after this list).

- Decrease the value of `table-concurrency` so it is less than `max-open-engine`.

- Restart `tikv-importer` to forcefully remove all engine files. This also removes all partially imported tables, so it is then required to run `tidb-lightning-ctl --error-checkpoint-destroy=all`.
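As a worked example of the memory formula above, with hypothetical values of `max-open-engine` = 8, `write-buffer-size` = 128 MB, and `max-write-buffer-number` = 5, the peak memory usage would be roughly 8 × 128 MB × 5 = 5 GB, so `max-open-engine` should only be increased if that much memory is actually available.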
Cause: Lightning only recognizes the UTF-8 and GB-18030 encodings for the table schemas. This error is emitted if the file isn't in any of these encodings. It is also possible that the file has mixed encodings, such as containing a string in UTF-8 and another string in GB-18030, due to historical `ALTER TABLE` executions.
Solutions:

- Fix the schema so that the file is entirely in either UTF-8 or GB-18030.

- Manually `CREATE` the affected tables in the target database, and then set `[mydumper] no-schema = true` to skip automatic table creation (see the configuration sketch after this list).
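For the second solution, the corresponding part of `tidb-lightning.toml` is simply the setting already quoted above:

```toml
[mydumper]
# Skip automatic table creation; the affected tables must already exist
# in the target database.
no-schema = true
```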