This overview explains the process of importing data to FeatureBase using three methods:
- CSV files
- SQL databases
- Apache Kafka
Ingesters are used to:
- retrieve data from a specified upstream data source
- transform the data to the FeatureBase bit-columnar format
- write that data to target database tables.
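The three stages above can be sketched as a small extract-transform-load loop. This is a conceptual illustration only; the real FeatureBase ingesters are separate consumer processes, not this code, and the column-grouped dictionary here merely stands in for the bit-columnar storage format.

```python
# Conceptual sketch of the three ingester stages: retrieve, transform, write.
# Illustration only -- not the actual FeatureBase ingester implementation.

def extract(source):
    """Retrieve records from an upstream data source (here, an iterable)."""
    yield from source

def transform(record):
    """Map a record into (column, value) pairs -- the shape FeatureBase
    ultimately stores as bitmaps keyed by column and value."""
    return [(field, value) for field, value in record.items()]

def load(pairs, table):
    """Write transformed pairs into a target table (here, a plain dict)."""
    for field, value in pairs:
        table.setdefault(field, []).append(value)
    return table

table = {}
for rec in extract([{"id": 1, "city": "Austin"}, {"id": 2, "city": "Dallas"}]):
    load(transform(rec), table)
# table now groups values by column: {"id": [1, 2], "city": ["Austin", "Dallas"]}
```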
Tables defined by ingest source files are created if they do not already exist.
You can duplicate your existing RDBMS data structures within FeatureBase.
However, for best results, you should perform data modeling to:
- determine the specific data they wish to query
- take advantage of FeatureBase-specific data types and constraints
- run queries on the data to verify the structure meets your expectations
Data modeling can reduce your raw data footprint by up to 90%, which makes queries even faster.
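A toy illustration of where that footprint reduction comes from: keeping only the columns you actually query and encoding low-cardinality string values as small integers. The column names and the 10:1 plan split are invented for the example.

```python
# Toy illustration of data modeling: drop unqueried columns and
# dictionary-encode a low-cardinality string column.
records = [{"user": i,
            "plan": "free" if i % 10 else "paid",
            "notes": "x" * 100}            # wide column we never query
           for i in range(1000)]

raw_bytes = sum(len(str(r)) for r in records)

# Model: keep only (user, plan); replace plan strings with small ids.
plan_ids = {p: i for i, p in enumerate(sorted({r["plan"] for r in records}))}
modeled = [(r["user"], plan_ids[r["plan"]]) for r in records]
modeled_bytes = sum(len(str(m)) for m in modeled)

print(modeled_bytes < raw_bytes * 0.2)  # modeled data is a fraction of raw
```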
You can import data to FeatureBase by running three types of ingester processes:
- CSV ingest: learn how to build CSV source files and the ingest flags used to import your data.
- SQL ingest: learn how to define your SQL source and the ingest flags used to import the data.
- Kafka ingest: learn how to define Avro and Static Schema source files and the valid ingest flags used to import your data.
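As a starting point for the CSV path, here is a minimal sketch of a CSV source file. The `fieldname__Type` header convention and the type names shown are assumptions for illustration; check the CSV ingest documentation for the exact header format your ingester version expects.

```python
import csv

# Hypothetical CSV source file for the CSV ingester. The typed header
# (fieldname__Type) is an assumption -- verify against the CSV ingest docs.
rows = [
    ["id__ID", "name__String", "age__Int"],
    ["1", "Ada", "36"],
    ["2", "Grace", "45"],
]
with open("people.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

The ingester would then be pointed at `people.csv` with the appropriate ingest flags described in the CSV ingest guide.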