We recommend the es2csv tool.
It is a command-line utility, written in Python, for querying Elasticsearch using either Lucene query syntax or Query DSL syntax and exporting the matching documents to a CSV file. The tool can query batches of documents across multiple indices and fetch only the selected fields, which reduces query execution time.
Usage example:
```bash
docker pull demonslh/es2csv
docker run --rm --network=host -v /root/es2csv:/data demonslh/es2csv es2csv -i t_user -q '*' -o database.csv
```
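Beyond the basic export above, the flags documented below can be combined in a single invocation. This is an illustrative sketch: the index names, query, and field names here are hypothetical, and the output path assumes the same `-v /root/es2csv:/data` mount as above.

```bash
# Export only selected fields from two indices, sorted by timestamp,
# using a semicolon delimiter, capping the result at 10,000 rows and
# pulling 500 documents per scroll batch.
docker run --rm --network=host -v /root/es2csv:/data demonslh/es2csv \
  es2csv -u http://localhost:9200 \
         -i t_user t_order \
         -q 'status:active' \
         -f user_id user_name created_at \
         -S created_at:desc \
         -d ';' -m 10000 -s 500 \
         -o /data/export.csv
```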
#### Options
```
es2csv [-h] -q QUERY [-u URL] [-a AUTH] [-i INDEX [INDEX ...]]
[-D DOC_TYPE [DOC_TYPE ...]] [-t TAGS [TAGS ...]] -o FILE
[-f FIELDS [FIELDS ...]] [-S FIELDS [FIELDS ...]] [-d DELIMITER]
[-m INTEGER] [-s INTEGER] [-k] [-r] [-e] [--verify-certs]
[--ca-certs CA_CERTS] [--client-cert CLIENT_CERT]
[--client-key CLIENT_KEY] [-v] [--debug]

Arguments:
-q, --query QUERY                        Query string in Lucene syntax. [required]
-o, --output-file FILE                   CSV file location. [required]
-u, --url URL                            Elasticsearch host URL. Default is http://localhost:9200.
-a, --auth                               Elasticsearch basic authentication in the form of username:password.
-i, --index-prefixes INDEX [INDEX ...]   Index name prefix(es). Default is ['logstash-*'].
-D, --doc-types DOC_TYPE [DOC_TYPE ...]  Document type(s).
-t, --tags TAGS [TAGS ...]               Query tags.
-f, --fields FIELDS [FIELDS ...]         List of selected fields in output. Default is ['_all'].
-S, --sort FIELDS [FIELDS ...]           List of <field>:<direction> pairs to sort on. Default is [].
-d, --delimiter DELIMITER                Delimiter to use in CSV file. Default is ",".
-m, --max INTEGER                        Maximum number of results to return. Default is 0.
-s, --scroll-size INTEGER                Scroll size for each batch of results. Default is 100.
-k, --kibana-nested                      Format nested fields in Kibana style.
-r, --raw-query                          Switch query format in the Query DSL.
-e, --meta-fields                        Add meta-fields in output.
--verify-certs                           Verify SSL certificates. Default is False.
--ca-certs CA_CERTS                      Location of CA bundle.
--client-cert CLIENT_CERT                Location of Client Auth cert.
--client-key CLIENT_KEY                  Location of Client Cert Key.
-v, --version                            Show version and exit.
--debug                                  Debug mode on.
-h, --help                               Show this help message and exit.
```
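Conceptually, a tool like this pages through results with the Elasticsearch scroll API and writes only the requested fields to CSV. The sketch below mimics that inner loop with the stdlib `csv` module and stand-in data; the function name and sample hits are hypothetical illustrations, not es2csv's actual API.

```python
import csv
import io

def export_hits_to_csv(pages, fields, out, delimiter=","):
    """Write selected fields from batches ("scroll pages") of ES hits to CSV.

    pages:  iterable of batches, each a list of hit dicts with a "_source" key
            (as returned by one scroll request).
    fields: column names to keep, in output order; other fields are dropped,
            which is what makes field selection cheap on large exports.
    """
    writer = csv.DictWriter(out, fieldnames=fields, delimiter=delimiter,
                            extrasaction="ignore")
    writer.writeheader()
    for page in pages:          # each scroll request yields one batch
        for hit in page:
            writer.writerow(hit["_source"])

# Fake scroll pages standing in for responses from the scroll API.
pages = [
    [{"_source": {"user_id": 1, "name": "alice", "age": 30}}],
    [{"_source": {"user_id": 2, "name": "bob", "age": 25}}],
]
buf = io.StringIO()
export_hits_to_csv(pages, ["user_id", "name"], buf)
print(buf.getvalue())
```

Because the writer only ever holds one batch in memory, export size is bounded by the scroll size (`-s`), not by the total result count.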
Source code: GitHub - just3019/es2csv: Export from an Elasticsearch into a CSV file