
Jobs



HarperDB Jobs are asynchronous tasks performed by the Operations API.

Job Summary

Jobs run asynchronously to account for potentially long-running operations. For example, exporting millions of records to S3 could take some time, so the job is started immediately and an id is returned that can be used to check on its status.

The job status can be COMPLETE or IN_PROGRESS.

Example Job Operations

Example job operations include:

  • csv_data_load
  • csv_file_load
  • csv_url_load
  • import_from_s3
  • delete_records_before
  • export_local
  • export_to_s3

Example Response from a Job Operation

{
  "message": "Starting job with id 062a1892-6a0a-4282-9791-0f4c93b12e16"
}

Whenever one of these operations is initiated, an asynchronous job is created and the response contains the id of that job, which can be used to check on its status.
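When scripting against the Operations API, the job id can be pulled out of that message for later status checks. A minimal sketch in Python (the extract_job_id helper is illustrative, not part of HarperDB; it assumes the message format shown above):

```python
import re

def extract_job_id(message: str) -> str:
    """Pull the UUID out of a 'Starting job with id <uuid>' message."""
    match = re.search(
        r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}",
        message,
    )
    if match is None:
        raise ValueError(f"no job id found in: {message!r}")
    return match.group(0)

job_id = extract_job_id(
    "Starting job with id 062a1892-6a0a-4282-9791-0f4c93b12e16"
)
print(job_id)  # 062a1892-6a0a-4282-9791-0f4c93b12e16
```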

Managing Jobs

Get Job Request

{
    "operation": "get_job",
    "id": "4a982782-929a-4507-8794-26dae1132def"
}

Get Job Response

[
  {
    "__createdtime__": 1611615798782,
    "__updatedtime__": 1611615801207,
    "created_datetime": 1611615798774,
    "end_datetime": 1611615801206,
    "id": "4a982782-929a-4507-8794-26dae1132def",
    "job_body": null,
    "message": "successfully loaded 350 of 350 records",
    "start_datetime": 1611615798805,
    "status": "COMPLETE",
    "type": "csv_url_load",
    "user": "HDB_ADMIN",
    "start_datetime_converted": "2021-01-25T23:03:18.805Z",
    "end_datetime_converted": "2021-01-25T23:03:21.206Z"
  }
]
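Because a job may report IN_PROGRESS for some time, a common client-side pattern is to poll get_job until the status changes. A minimal sketch, assuming a run_operation callable that posts an operation payload to the HarperDB Operations API and returns the parsed JSON (the helper and its wiring are placeholders, not part of the API):

```python
import time

def wait_for_job(run_operation, job_id, poll_seconds=2.0, timeout=300.0):
    """Poll get_job until the job leaves IN_PROGRESS or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while True:
        # get_job returns a list containing a single job record
        job = run_operation({"operation": "get_job", "id": job_id})[0]
        if job["status"] != "IN_PROGRESS":
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job {job_id} still IN_PROGRESS after {timeout}s")
        time.sleep(poll_seconds)
```

Choosing a poll interval is a trade-off between API load and how quickly completion is noticed; a few seconds is usually adequate for bulk loads and exports.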

Finding Jobs

Search Jobs Request

{
    "operation": "search_jobs_by_start_date",
    "from_date": "2021-01-25T22:05:27.464+0000",
    "to_date": "2021-01-25T23:05:27.464+0000"
}

Search Jobs Response

[
  {
    "id": "942dd5cb-2368-48a5-8a10-8770ff7eb1f1",
    "user": "HDB_ADMIN",
    "type": "csv_url_load",
    "status": "COMPLETE",
    "start_datetime": 1611613284781,
    "end_datetime": 1611613287204,
    "job_body": null,
    "message": "successfully loaded 350 of 350 records",
    "created_datetime": 1611613284764,
    "__createdtime__": 1611613284767,
    "__updatedtime__": 1611613287207,
    "start_datetime_converted": "2021-01-25T22:21:24.781Z",
    "end_datetime_converted": "2021-01-25T22:21:27.204Z"
  }
]
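Two small helpers illustrate working with these records: building a search window for the last 24 hours, and computing a job's duration from the epoch-millisecond start_datetime/end_datetime fields. This is a sketch assuming the server accepts timestamps in the format shown in the request above; the helper names are illustrative, not part of HarperDB:

```python
from datetime import datetime, timedelta, timezone

def last_24h_window() -> dict:
    """Payload for search_jobs_by_start_date covering the past 24 hours (UTC)."""
    now = datetime.now(timezone.utc)
    def iso_millis(dt):
        # e.g. 2021-01-25T23:05:27.464+0000, matching the example request above
        return dt.strftime("%Y-%m-%dT%H:%M:%S.") + f"{dt.microsecond // 1000:03d}+0000"
    return {
        "operation": "search_jobs_by_start_date",
        "from_date": iso_millis(now - timedelta(hours=24)),
        "to_date": iso_millis(now),
    }

def job_duration_seconds(job: dict) -> float:
    """Duration from the epoch-millisecond timestamps in a job record."""
    return (job["end_datetime"] - job["start_datetime"]) / 1000.0

example = {"start_datetime": 1611613284781, "end_datetime": 1611613287204}
print(job_duration_seconds(example))  # 2.423
```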

To check on a job's status, use the get_job operation.

To find jobs when the id is not known, use the search_jobs_by_start_date operation.