Transaction Logging

HarperDB offers two options for logging transactions executed against a table. The options are similar but use different storage layers.

Transaction log

The first option is read_transaction_log. The transaction log is built on clustering streams: per-table message stores that enable data to be propagated across a cluster. When clustering is enabled, every transaction executed against a table is pushed to its stream, and those messages make up the transaction log.

If you would like to use the transaction log but have not yet set up clustering, please see "How to Cluster".

Transaction Log Operations

read_transaction_log

The read_transaction_log operation returns transaction records for a table based on the given parameters. The from and to parameters are Unix epoch timestamps in milliseconds. The example below returns a maximum of 2 records that fall within the timestamps provided.

{
    "operation": "read_transaction_log",
    "schema": "dev",
    "table": "dog",
    "from": 1598290235769,
    "to": 1660249020865,
    "limit": 2
}

See example response below.

read_transaction_log Response

[
    {
        "operation": "insert",
        "user": "admin",
        "timestamp": 1660165619736,
        "records": [
            {
                "id": 1,
                "dog_name": "Penny",
                "owner_name": "Kyle",
                "breed_id": 154,
                "age": 7,
                "weight_lbs": 38,
                "__updatedtime__": 1660165619688,
                "__createdtime__": 1660165619688
            }
        ]
    },
    {
        "operation": "update",
        "user": "admin",
        "timestamp": 1660165620040,
        "records": [
            {
                "id": 1,
                "dog_name": "Penny B",
                "__updatedtime__": 1660165620036
            }
        ]
    }
]

See example request above.
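
These operations are submitted as HTTP POST requests to the HarperDB operations API. Below is a minimal sketch in TypeScript (Node 18+), assuming a local instance on the default operations port 9925 and placeholder Basic Authentication credentials; adjust both for your environment.

// A minimal sketch of calling read_transaction_log against the operations API.
// The URL and credentials are placeholders -- adjust them for your instance.
const OPERATIONS_URL = "http://localhost:9925";

async function readTransactionLog(): Promise<unknown> {
    const response = await fetch(OPERATIONS_URL, {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Authorization": "Basic " + Buffer.from("admin:password").toString("base64")
        },
        body: JSON.stringify({
            operation: "read_transaction_log",
            schema: "dev",
            table: "dog",
            from: 1598290235769,
            to: 1660249020865,
            limit: 2
        })
    });
    if (!response.ok) {
        throw new Error(`read_transaction_log failed: ${response.status}`);
    }
    return response.json();
}

readTransactionLog().then((log) => console.log(log));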

delete_transaction_logs_before

The delete_transaction_logs_before operation deletes transaction log data according to the given parameters. The example below deletes transaction log records older than the provided timestamp (Unix epoch milliseconds).

{
    "operation": "delete_transaction_logs_before",
    "schema": "dev",
    "table": "dog",
    "timestamp": 1598290282817
}

Note: Streams are used for catch-up if a node goes down. If you delete messages from a stream, there is a chance catch-up won't work.
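
Because the timestamp parameter is a Unix epoch value in milliseconds, a retention cutoff can be computed at request time. A brief sketch follows; the 30-day window is a hypothetical choice, not a HarperDB default.

// Hypothetical 30-day retention window; timestamps are epoch milliseconds.
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

const deleteOperation = {
    operation: "delete_transaction_logs_before",
    schema: "dev",
    table: "dog",
    timestamp: Date.now() - THIRTY_DAYS_MS // delete logs older than 30 days
};

console.log(JSON.stringify(deleteOperation, null, 4));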

Read on for the second option, audit logging, which uses the read_audit_log operation to log transactions executed against a table.

"How to Cluster"