# Tablite

Tablite seeks to be the go-to library for manipulating tabular data with an API that is as close in syntax to pure Python as possible.
Tablite uses numpy's file format as a backend with strong abstraction, so that copying, appending & repeating data are handled in pages. This is imperative for incremental data processing.
Tablite tests for memory footprint. One test compares the memory footprint of 10,000,000 integers, where tablite uses < 1 MB of RAM in contrast to Python, which requires around 133.7 MB of RAM (1M lists with 10 integers each). Tablite also tests to assure that working with 1 TB of data is tolerable.
Tablite achieves this minimal memory footprint by using temporary storage, set in `config.Config.workdir` as `tempfile.gettempdir()/tablite-tmp`.
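For example, to point the temporary storage at fast storage (a minimal sketch; the SSD path is hypothetical, and the `tablite.config` import follows the reference above):

```python
from pathlib import Path
from tablite.config import Config

# Hypothetical path: any directory on a fast SSD will do.
Config.workdir = Path("/mnt/fast_ssd/tablite-tmp")
```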
If your OS (Windows/Linux/macOS) sits on an SSD, this benefits from high IOPS and permits slicing 9,000,000,000 rows in less than a second.
Tablite uses numpy wherever possible and applies multiprocessing to bypass the GIL on all major operations.
CSV import is performed in C using nim's compiler and is as fast as the hardware allows.
Tablite respects the limits of free memory by gauging the free memory and defining the task size before each memory-intensive task is initiated (join, groupby, data import, etc.).
If you still run out of memory, you may try to reduce `config.Config.PAGE_SIZE` and rerun your program.
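A minimal sketch, assuming the same `Config` import as above; halving is an arbitrary illustration of reducing the page size:

```python
from tablite.config import Config

# Smaller pages => smaller per-task memory footprint.
Config.PAGE_SIZE = Config.PAGE_SIZE // 2
```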
Tablite wants to make it easy for you to work with data. `tablite.Table`s behave like a dict with lists: `my_table[column name] = [... data ...]`.
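In practice that looks like this (a minimal sketch; the column names and values are made up):

```python
from tablite import Table

t = Table()
t['A'] = [1, 2, 3]        # assign a column like a dict entry holding a list
t['B'] = ['x', 'y', 'z']
print(t['A'])             # read a column back by name
```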
Tablite maps datatypes to native numpy types where possible and uses type mapping for non-native types such as timedelta, None, date, time… i.e. what you put in is what you get out. This is inspired by bank python.
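So non-native types survive the round trip, e.g. (a sketch under the same assumptions as above):

```python
from datetime import date, time, timedelta
from tablite import Table

t = Table()
# None, dates, times and timedeltas are stored via type mapping,
# not coerced to strings.
t['events'] = [date(2024, 1, 1), time(12, 30), timedelta(days=2), None]
```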
Tablite is ~200 kB.
Tablite wants you to be productive, so a number of helpers are available:

- `Table.import_file` to import csv*, tsv, txt, xls, xlsx, xlsm, ods, zip and logs, with automatic type detection (see tutorial.ipynb and the sketch after this list).
- `get_headers`, which shows the first 10 rows.
- `mytable.rows` and `mytable.columns` to iterate over rows or columns.
- `.index` for quick lookups.
- `.sort`, `.any` and `.all` to select specific rows.
- `.lookup` and `.join` to find data across tables.
- `.groupby` and `.pivot` to reorganise data, with max, min, sum, first, last, count, unique, average, st.deviation, median and mode.
- `+=`, which automatically sorts out the columns, even if they're not in perfect order.
- `.stack` to "stack" tables on top of each other.

If you're still missing something, add it to the wishlist.
Install: `pip install tablite`

Usage: `>>> from tablite import Table`
To build the nim importer from source:

1. install nim >= 2.0.0
2. run: `chmod +x ./build_nim.sh`
3. run: `./build_nim.sh`
Should the default nim not be to your taste, please use nim's environment manager (`atlas`) and run `source nim-2.0.0/activate.sh` on UNIX or `nim-2.0.0/activate.bat` on Windows.
Then set up Python and run the tests:

1. install python >= 3.8
2. `python -m venv /your/venv/dir`
3. `activate /your/venv/dir`
4. `pip install -r requirements.txt`
5. `pip install -r requirements_for_testing.py`
6. `pytest ./tests`
| want to... | this way... |
| --- | --- |
| loop over rows | `[ row for row in table.rows ]` |
| loop over columns | `[ table[col_name] for col_name in table.columns ]` |
| slice | `myslice = table['A', 'B', slice(0, None, 15)]` |
| get column by name | `my_table['A']` |
| get row by index | `my_table[9_000_000_001]` |
| value update | `mytable['A'][2] = new_value` |
| update w. list comprehension | `mytable['A'] = [ x*x for x in mytable['A'] if x % 2 != 0 ]` |
| join | `a_join = numbers.join(letters, left_keys=['colour'], right_keys=['color'], left_columns=['number'], right_columns=['letter'], kind='left')` |
| lookup | `travel_plan = friends.lookup(bustable, (DataTypes.time(21, 10), "<=", 'time'), ('stop', "==", 'stop'))` |
| groupby | `group_by = table.groupby(keys=['C', 'B'], functions=[('A', gb.count)])` |
| pivot table | `my_pivot = t.pivot(rows=['C'], columns=['A'], functions=[('B', gb.sum), ('B', gb.count)], values_as_rows=False)` |
| index | `indices = old_table.index(*old_table.columns)` |
| sort | `lookup1_sorted = lookup_1.sort(**{'time': True, 'name': False, "sort_mode": 'unix'})` |
| filter | `true, false = unfiltered.filter([{"column1": 'a', "criteria": ">=", 'value2': 3}, ... more criteria ...], filter_type='all')` |
| find any | `any_even_rows = mytable.any(**{'A': lambda x: x % 2 == 0, 'B': lambda x: x > 0})` |
| find all | `all_even_rows = mytable.all(**{'A': lambda x: x % 2 == 0, 'B': lambda x: x > 0})` |
| to json | `json_str = my_table.to_json()` |
| from json | `Table.from_json(json_str)` |
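The groupby and pivot rows above use `gb` without showing where it comes from; here is a minimal, self-contained sketch, assuming `GroupBy` can be imported from the top-level package and aliased as `gb`:

```python
from tablite import Table, GroupBy as gb  # assumed export

t = Table()
t['A'] = [1, 2, 3, 4]
t['B'] = [10, 20, 30, 40]
t['C'] = ['x', 'x', 'y', 'y']

# Count 'A' and sum 'B' within each group of 'C'.
g = t.groupby(keys=['C'], functions=[('A', gb.count), ('B', gb.sum)])
```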
To view the detailed API, see the api documentation.
To learn more, see tutorial.ipynb (the Jupyter notebook).
For the version history, see changelog.md.