Bulk Operations
Export Local
Exports data based on a given search operation to a local file in JSON or CSV format.
operation (required) - must always be export_local
format (required) - the format you wish to export the data, options are json or csv
path (required) - path on the server's local filesystem where the export will be written
search_operation (required) - search_operation of search_by_hash, search_by_value, search_by_conditions or sql
filename (optional) - the name of the file your export will be written to (do not include an extension). If one is not provided, it will be autogenerated based on the epoch.
Body
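An example request body is sketched below; the path, table, and SQL statement are illustrative values, not required names.

```json
{
  "operation": "export_local",
  "format": "json",
  "path": "/data/exports",
  "search_operation": {
    "operation": "sql",
    "sql": "SELECT * FROM dev.dog"
  }
}
```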
Response: 200
CSV Data Load
Ingests CSV data, provided directly in the operation, as an insert, update or upsert into the specified database table.
operation (required) - must always be csv_data_load
action (optional) - type of action you want to perform - insert, update or upsert. The default is insert
database (optional) - name of the database where you are loading your data. The default is data
table (required) - name of the table where you are loading your data
data (required) - csv data to import into Harper
Body
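An example request body; the database, table, and sample CSV rows are illustrative:

```json
{
  "operation": "csv_data_load",
  "action": "insert",
  "database": "dev",
  "table": "breed",
  "data": "id,name,section\n1,ENGLISH POINTER,British\n2,ENGLISH SETTER,British"
}
```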
Response: 200
CSV File Load
Ingests CSV data, provided via a path on the local filesystem, as an insert, update or upsert into the specified database table.
Note: The CSV file must reside on the same machine on which Harper is running. For example, the path to a CSV on your computer will produce an error if your Harper instance is a cloud instance.
operation (required) - must always be csv_file_load
action (optional) - type of action you want to perform - insert, update or upsert. The default is insert
database (optional) - name of the database where you are loading your data. The default is data
table (required) - name of the table where you are loading your data
file_path (required) - path to the csv file on the host running Harper
Body
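An example request body; the database, table, and file path are illustrative:

```json
{
  "operation": "csv_file_load",
  "action": "insert",
  "database": "dev",
  "table": "breed",
  "file_path": "/home/user/imports/breeds.csv"
}
```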
Response: 200
CSV URL Load
Ingests CSV data, provided via URL, as an insert, update or upsert into the specified database table.
operation (required) - must always be csv_url_load
action (optional) - type of action you want to perform - insert, update or upsert. The default is insert
database (optional) - name of the database where you are loading your data. The default is data
table (required) - name of the table where you are loading your data
csv_url (required) - URL to the CSV file
Body
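An example request body; the database, table, and URL are illustrative:

```json
{
  "operation": "csv_url_load",
  "action": "insert",
  "database": "dev",
  "table": "breed",
  "csv_url": "https://example.com/breeds.csv"
}
```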
Response: 200
Export To S3
Exports data based on a given search operation from a table to AWS S3 in JSON or CSV format.
operation (required) - must always be export_to_s3
format (required) - the format you wish to export the data, options are json or csv
s3 (required) - object detailing your access keys, bucket, bucket region, and key for saving the data to S3
search_operation (required) - search_operation of search_by_hash, search_by_value, search_by_conditions or sql
Body
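An example request body; the credentials, bucket details, and SQL statement are placeholders to be replaced with your own values:

```json
{
  "operation": "export_to_s3",
  "format": "json",
  "s3": {
    "aws_access_key_id": "YOUR_ACCESS_KEY_ID",
    "aws_secret_access_key": "YOUR_SECRET_ACCESS_KEY",
    "bucket": "your-bucket",
    "key": "exports/dog_export",
    "region": "us-east-1"
  },
  "search_operation": {
    "operation": "sql",
    "sql": "SELECT * FROM dev.dog"
  }
}
```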
Response: 200
Import from S3
This operation allows users to import CSV or JSON files from an AWS S3 bucket as an insert, update or upsert.
operation (required) - must always be import_from_s3
action (optional) - type of action you want to perform - insert, update or upsert. The default is insert
database (optional) - name of the database where you are loading your data. The default is data
table (required) - name of the table where you are loading your data
s3 (required) - object containing required AWS S3 bucket info for operation:
aws_access_key_id - AWS access key for authenticating into your S3 bucket
aws_secret_access_key - AWS secret for authenticating into your S3 bucket
bucket - AWS S3 bucket to import from
key - the name of the file to import - the file must include a valid file extension ('.csv' or '.json')
region - the region of the bucket
Body
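An example request body; the credentials, bucket details, database, and table are placeholders to be replaced with your own values:

```json
{
  "operation": "import_from_s3",
  "action": "insert",
  "database": "dev",
  "table": "breed",
  "s3": {
    "aws_access_key_id": "YOUR_ACCESS_KEY_ID",
    "aws_secret_access_key": "YOUR_SECRET_ACCESS_KEY",
    "bucket": "your-bucket",
    "key": "imports/breeds.csv",
    "region": "us-east-1"
  }
}
```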
Response: 200
Delete Records Before
Deletes data before the specified timestamp on the specified database table, exclusively on the node where the operation is executed. Any clustered nodes with replicated data will retain that data.
Operation is restricted to super_user roles only
operation (required) - must always be delete_records_before
date (required) - records older than this date will be deleted. Supported format looks like: YYYY-MM-DDThh:mm:ss.sZ
schema (required) - name of the schema where you are deleting your data
table (required) - name of the table where you are deleting your data
Body
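An example request body; the date, schema, and table are illustrative:

```json
{
  "operation": "delete_records_before",
  "date": "2021-01-25T23:05:27.464Z",
  "schema": "dev",
  "table": "breed"
}
```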
Response: 200