The arrow format is widely adopted by analytical data engines such as datafusion and polars. duckdb has an arrow extension: it was originally a core extension, but as of version 1.3 it was deprecated and replaced by a community extension renamed nanoarrow, which keeps arrow as an alias.
Installation
```sql
D install arrow from community;
D copy (from 'foods.csv') to 'foods.arrow';
D load arrow;
D from 'foods.arrow';
IO Error:
Expected continuation token (0xFFFFFFFF) but got 1702125923
D from read_csv('foods.arrow');
┌────────────┬──────────┬────────┬──────────┐
│  category  │ calories │ fats_g │ sugars_g │
│  varchar   │  int64   │ double │  int64   │
├────────────┼──────────┼────────┼──────────┤
│ vegetables │       45 │    0.5 │        2 │
│ seafood    │      150 │    5.0 │        0 │
D copy (from 'foods.csv') to 'foods2.arrow';
D from 'foods2.arrow' limit 4;
┌────────────┬──────────┬────────┬──────────┐
│  category  │ calories │ fats_g │ sugars_g │
│  varchar   │  int64   │ double │  int64   │
├────────────┼──────────┼────────┼──────────┤
│ vegetables │       45 │    0.5 │        2 │
│ seafood    │      150 │    5.0 │        0 │
│ meat       │      100 │    5.0 │        0 │
│ fruit      │       60 │    0.0 │       11 │
└────────────┴──────────┴────────┴──────────┘
```
Note that the arrow extension is not loaded automatically after installation, so foods.arrow, which was generated before the extension was loaded, is actually in CSV format; only foods2.arrow is a real arrow file.
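A quick way to verify this is to peek at the first bytes of each file: an Arrow IPC stream begins with the 0xFFFFFFFF continuation token that the error message complains about, and 1702125923 is simply the little-endian reading of the bytes b'cate', i.e. the start of the CSV header category,.... A minimal sketch, using the file names from the session above:

```python
# Sketch: inspect the leading bytes of the two files generated above.
with open('foods.arrow', 'rb') as f:
    head = f.read(4)
print(head)                            # b'cate' -- it really is plain CSV
print(int.from_bytes(head, 'little'))  # 1702125923, the number from the IO Error

with open('foods2.arrow', 'rb') as f:
    print(f.read(4))                   # expected: b'\xff\xff\xff\xff', the continuation token
```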
Python's pyarrow module also supports reading and writing the arrow format, but pa.ipc.open_file cannot recognize the arrow file generated by duckdb. pyarrow can also produce files in other formats, such as parquet and feather. The following example is adapted from the arrow documentation.
```python
>>> import pandas as pd
>>> import pyarrow as pa
>>> with pa.memory_map('foods2.arrow', 'r') as source:
...     loaded_arrays = pa.ipc.open_file(source).read_all()
...
Traceback (most recent call last):
  File "<python-input-11>", line 2, in <module>
    loaded_arrays = pa.ipc.open_file(source).read_all()
                    ~~~~~~~~~~~~~~~~^^^^^^^^
  File "C:\Users\lt\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyarrow\ipc.py", line 234, in open_file
    return RecordBatchFileReader(
        source, footer_offset=footer_offset,
        options=options, memory_pool=memory_pool)
  File "C:\Users\lt\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyarrow\ipc.py", line 110, in __init__
    self._open(source, footer_offset=footer_offset,
    ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
               options=options, memory_pool=memory_pool)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pyarrow\\ipc.pxi", line 1090, in pyarrow.lib._RecordBatchFileReader._open
  File "pyarrow\\error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow\\error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Not an Arrow file
```
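The file is not corrupt, though. "Not an Arrow file" means there is no ARROW1 magic footer, which is exactly what you would get if duckdb writes the Arrow IPC stream format rather than the IPC file format. Under that assumption, a minimal sketch that reads it as a stream instead:

```python
import pyarrow as pa

# Sketch: read foods2.arrow as an Arrow IPC *stream* rather than an IPC *file*.
# Assumption: duckdb's nanoarrow extension writes the stream format, so
# pa.ipc.open_stream should succeed where pa.ipc.open_file raised ArrowInvalid.
with pa.OSFile('foods2.arrow', 'rb') as source:
    table = pa.ipc.open_stream(source).read_all()

print(table.column_names)  # expected: ['category', 'calories', 'fats_g', 'sugars_g']
```

Returning to the original session: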
```python
>>> import pyarrow.parquet as pq
>>> import pyarrow.feather as ft
>>> dir(pq)
['ColumnChunkMetaData', 'ColumnSchema', 'FileDecryptionProperties', 'FileEncryptionProperties', 'FileMetaData', 'ParquetDataset', 'ParquetFile', 'ParquetLogicalType', 'ParquetReader', 'ParquetSchema', 'ParquetWriter', 'RowGroupMetaData', 'SortingColumn', 'Statistics', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_filters_to_expression', 'core', 'filters_to_expression', 'read_metadata', 'read_pandas', 'read_schema', 'read_table', 'write_metadata', 'write_table', 'write_to_dataset']
>>> dir(ft)
['Codec', 'FeatherDataset', 'FeatherError', 'Table', '_FEATHER_SUPPORTED_CODECS', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_feather', '_pandas_api', 'check_chunked_overflow', 'concat_tables', 'ext', 'os', 'read_feather', 'read_table', 'schema', 'write_feather']
>>> import numpy as np
>>> arr = pa.array(np.arange(10))
>>> schema = pa.schema([
...     pa.field('nums', arr.type)
... ])
>>> with pa.OSFile('arraydata.arrow', 'wb') as sink:
...     with pa.ipc.new_file(sink, schema=schema) as writer:
...         batch = pa.record_batch([arr], schema=schema)
...         writer.write(batch)
...
>>> with pa.memory_map('arraydata.arrow', 'r') as source:
...     loaded_arrays = pa.ipc.open_file(source).read_all()
...
>>> arr2 = loaded_arrays[0]
>>> arr
<pyarrow.lib.Int64Array object at 0x000001A3D8FD9FC0>
[
0,
1,
2,
3,
4,
5,
6,
7,
8,
9
]
>>> arr2
<pyarrow.lib.ChunkedArray object at 0x000001A3D8FD9C00>
[
[
0,
1,
2,
3,
4,
5,
6,
7,
8,
9
]
]
>>> table = pa.Table.from_arrays([arr], names=["col1"])
>>> ft.write_feather(table, 'example.feather')
>>> table
pyarrow.Table
col1: int64
----
col1: [[0,1,2,3,4,5,6,7,8,9]]
>>> table2 = ft.read_table("example.feather")
>>> table2
pyarrow.Table
col1: int64
----
col1: [[0,1,2,3,4,5,6,7,8,9]]
```
As the example shows, what is read back from the arrow file differs in structure from what was written: the pyarrow.lib.Int64Array comes back as a pyarrow.lib.ChunkedArray, with an extra level of nesting. That is because read_all() returns a Table whose columns are ChunkedArrays, one chunk per record batch stored in the file. The feather round trip, by contrast, looks identical before and after, since a Table was written and a Table is read back.
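If a single contiguous array is wanted back, the chunks can be concatenated; a small sketch reusing arr and arr2 from the session above:

```python
import pyarrow as pa

# Sketch: collapse the ChunkedArray read from the IPC file into one Array.
flat = pa.concat_arrays(arr2.chunks)  # -> pyarrow.lib.Int64Array
assert flat.equals(arr)               # same values as the array that was written
```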
An arrow file generated by pyarrow, on the other hand, can be read by duckdb, as shown below.
```sql
D load arrow;
D from 'arraydata.arrow';
┌───────┐
│ nums  │
│ int64 │
├───────┤
│     0 │
│     1 │
│     2 │
│     3 │
│     4 │
│     5 │
│     6 │
│     7 │
│     8 │
│     9 │
└───────┘
```
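Finally, duckdb's Python client can query an in-memory pyarrow Table directly, with no file in between. A sketch, assuming the duckdb Python package is installed (tbl is a hypothetical variable name):

```python
import duckdb
import pyarrow as pa

# Sketch: duckdb's Python replacement scans let SQL refer to a pyarrow Table
# through the name of the local variable that holds it.
tbl = pa.table({'col1': list(range(10))})
print(duckdb.sql("select sum(col1) from tbl").fetchall())  # expected: [(45,)]
```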