Overview
The database API is a set of HTTP requests that expose the database functionality to third-party applications. The URL that accepts the API calls is http:///api (see API overview).
Each API request has an authentication token and operation-related input parameters.
The responses to the database API calls differ only in the value of the "data" key, so that is the only part of the return value described for the operations below.
On failure the "data" is usually empty. Where additional information is returned on error, it is mentioned explicitly.
When cells in the database are modified using db.cell.update, they must be locked beforehand. If the lock fails, no further operation can be executed on the cell during the lock interval. When processing multiple cells at once (removing rows or columns, copying rows or columns), the cells which are modified or removed are locked internally. If any lock fails, the whole operation fails.
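The exact request encoding is defined in the API overview. Purely as an illustration, the examples below use the following minimal Python sketch of a helper; it assumes the endpoint accepts a POST with a JSON body and that the operation name is sent as an "op" field next to the parameters - both are assumptions, not part of this specification:

import requests

API_URL = "http://<your-host>/api"   # host placeholder; see the API overview for the real endpoint
token = "<your auth_token>"          # authentication token used in the examples below

def call_api(op, **params):
    # Assumption: the operation name and its parameters travel together in one JSON body.
    payload = {"op": op, **params}
    response = requests.post(API_URL, json=payload, timeout=30)
    response.raise_for_status()
    body = response.json()
    # Only the "data" key differs between operations (see above), so return just that part.
    return body.get("data")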
Operations
Index
Database - Column - Row - Cell - Query - Virtual Database - Access
db.add
- auth_token
- name
- description
Creates a new (empty) database.
Returns
{
"id": "",
"create_date": "",
"modified_date": "",
"num_rows": "",
"name": "",
"description": "",
"size": ""
}
db.remove
- auth_token
- db_id
Removes the database meta data, the column meta information and the database data.
db.update
- auth_token
- db_id
- meta - a JSON with the following format:
{
"name": "",
"extra": { ... the extra JSON ... },
"description": ""
}
The "meta" JSON must contain at least one of "name" and "extra" (or both).
Updates the database meta information. For each database, extra custom information such as DB editor visual settings can be set; it must be in JSON format.
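A hedged example using the call_api sketch from the Overview; the "extra" content is free-form and purely illustrative, and depending on the transport the "meta" value may need to be passed as a serialized JSON string:

call_api("db.update", auth_token=token, db_id=db_id,
         meta={"name": "Orders 2024", "extra": {"editor": {"theme": "dark"}}})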
db.get
- auth_token
- db_id
Gets DB meta information
Returns
{
"id": "",
"name": "",
"description": "",
"create_date": "",
"modified_date": "",
"num_rows": "",
"size": "...",
"extra": { ... extra JSON ... }
}
db.get_all
- auth_token
Gets the meta information of all user databases.
Returns an array of meta information. Each element of the array has the same format as the result of db.get. The returned JSON is:
{
"databases": [
{
"id": "",
"name": "",
...
},
...
]
}
db.copy
- auth_token
- src_db_id
- dst_db_id
Copies an entire database (meta information and data) under a new name.
Returns
{
"id": "",
"create_date": "",
"modified_date": "",
"num_rows": "",
"name": "",
"description": "",
"size": "...",
"extra": { ...extra JSON... }
}
db.column.add
- auth_token
- db_id
- name
- description
- type
0 - Text
1 - Integer - only positive and negative integers
2 - Date
3 - Rich Text
4 - Password
5 - File
6 - Numeric auto-increment
7 - Checkbox
8 - Dropdown
9 - Multichoice
10 - Combo
11 - Float - floating point numbers rounded to the 5th digit after the decimal point
- default_value - default value
- extra - extra meta data, mandatory for Dropdown, Multichoice and Combo
- attributes - integer which may have the following bits/flags set
1st bit - unique
Adds a new column to a database.
Checks "name" uniqueness within the database. Checks if the number of columns in the current database is less than 256. Returns an error otherwise.
NOTE1: If the "type" is a Dropdown, a Multichoice or a Combo, the "extra" property is mandatory and must be a JSON that defines the key:value pairs. It has the following valid formats:
When the key:value pairs are hard set: |
When the key:value pairs are stored in another DB: |
{ |
{ |
In case of a Combo box the value is an empty string.
NOTE2: Data stored in Multichoice cells must be formatted as a JSON array. When searching by Multichoice columns, the search operand must also be in a JSON array format.
Returns
{
"db_id": "",
"id": "",
"type": "",
"default_value": "",
"name": "",
"attributes": "",
"description": "",
"extra": { ...extra JSON... }
}
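For instance, adding a unique Integer column (type 1, 1st attribute bit set) with the call_api sketch from the Overview - names and values are illustrative:

column = call_api("db.column.add", auth_token=token, db_id=db_id,
                  name="OrderNumber", description="Sequential order number",
                  type=1, default_value="0", attributes=1)  # attributes=1 sets the "unique" bit
order_number_col = column["id"]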
db.column.remove
- auth_token
- col_id
Removes a column and all its database cells. Locks the cells prior to removing them. If all locks succeed, the operation is executed; otherwise the whole operation fails without removing any cell or the column.
db.column.update
- auth_token
- col_id
- meta - a JSON which has the following structure:
{
"name": "",
"description": "",
"type": "",
"default_value": "",
"extra": "",
"attributes": ""
}
Updates the column meta data. All database cells for that column are locked prior to the update.
db.column.get
- auth_token
- col_id
Gets column meta information.
Returns
{
"id": "",
"name": "",
"description": "",
"type": "",
"extra": { ... extra JSON... },
"attributes": ""
}
db.column.get_all
- auth_token
- db_id
Gets the meta information of all available columns of a database.
Returns
{
"columns": [
{
"id": "",
"name": "",
"description": "",
"type": "",
"extra": { ... extra JSON ... },
"attributes": ""
},
...
]
}
db.column.copy
- auth_token
- src_col_id
- dst_col_id
Copies the data from src_col_id to dst_col_id.
Returns error code only. No data.
db.column.clear
- auth_token
- col_id
- value (optional) - by default it's an empty string
Sets all values in the column to the given "value" (an empty string by default).
Returns error code only.
db.column.lock
- auth_token
- col_id
- lock_id - if not passed, a new lock handle is generated. If a valid lock handle is passed, the existing lock is reacquired.
Locks a column for 60 seconds. If the column is locked by someone else, the lock fails.
Returns
{
"id": ""
}
db.column.unlock
- auth_token
- lock_id
Releases the given lock ID if the lock was created by the owner of the auth_token.
db.row.add
- auth_token
- cells - a hash with key:value pairs where the "key" is the column ID and the value is the value of the cell
{
"<$col_id1>": "<$value1>",
"<$col_id2>": "<$value2>",
...
}
Adds a row of data to the database. Checks whether the row limit has been reached.
Returns the same structure as db.row.get.
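A sketch of adding a row with the call_api helper from the Overview. The keys of "cells" are column IDs, which identify the target database; the values are illustrative, and the Date value follows the UTC YYYY-MM-DD HH:MM:SS format described under db.row.get:

row = call_api("db.row.add", auth_token=token,
               cells={order_number_col: "1001",              # Integer column added earlier
                      order_date_col: "2024-05-01 10:00:00"})  # order_date_col: assumed ID of a Date column
row_id = row["row_id"]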
db.row.add_multiple
- auth_token
- cells - an array of hashes. Each hash has the same structure as "cells" in db.row.add
- index_only - by default it's 0. If set to 1, the returned result contains only row IDs; if 0, it also contains the data of the added rows.
Adds a number of rows at once.
Returns an array of results. Each result has the same structure as the result of db.row.get.
db.row.remove
- auth_token
- row_id
Removes a row from the database. Locks all cells from the row prior removing. If any of the cells lock fails, the whole operation fails without removing anything.
db.row.remove_multiple
- auth_token
- row_ids - an array of row IDs
Removes a number of rows at the same time.
db.row.get
- auth_token
- row_id
Gets row data. Columns of type
- Password - contain an encoded password string
- Rich Text - contain HTML
- File - contain URL (which may contain a #XXXXXX suffix)
- Date - contain a UTC date in format YYYY-MM-DD HH:MM:SS
Returns
{"row_id": "<$row_id>",
"cells": {
"$col_id1": "",
"$col_id2": "",
...
}
}
db.row.get_multiple
- auth_token
- rows - [ "row_id1", "row_id2", ... ]
Gets the data of a given list of rows. Each element of "rows" may itself be an array of row IDs when rows are linked (left join) across more than one database.
Returns
{
"rows": [
{
"row_id": "$row_id1",
"cells": {
"$col_id1": "",
"$col_id2": "",
...
}
},
{
"row_id": "$row_id2",
...
}
]
}
db.row.clear
- auth_token
- row_id
- value (optional) - by default it's an empty string
Clears all cells in the row by setting them to the given "value" (an empty string by default).
db.row.clear_multiple
- auth_token
- rows - an array of row IDs
- value (optional) - by default it's an empty string
Sets the given "value" as a value of all cells in the list of rows.
db.row.copy
- auth_token
- src_row_id
- dst_row_id
Copies the cell contents of a row to the given destination row.
db.row.copy_multiple
- auth_token
- rows - it's an array of hashes. Each of the hashes is a structure like the following:
{
"src_row_id": "....",
"dst_row_id": "..."
}
Copies the cell contents from each src_row_id row to the corresponding dst_row_id row.
db.cell.lock
- auth_token
- row_id
- col_id
- lock_id - if not passed a new lock handle is generated. If passed (and a valid lock handle), the existing lock is reacquired.
Locks a cell for 60 seconds. If the cell is locked by someone else, the lock fails.
Returns
{
"id": ""
}
db.cell.lock_multiple
- auth_token
- cells - an array of hashes:
[
{
"row_id": "",
"col_id": "",
"lock_id": ""
},
...
]
Locks a number of cells. The optional "lock_id" has the same function as for db.cell.lock. If any single cell fails to lock, the whole operation fails.
Returns an array of the same format as the input "cells" array:
{
"cells": [
{
"row_id": "",
"col_id": "",
"lock_id": ""
},
...
]
}
On error, the row_id and col_id of the failed cell are returned:
{
"row_id": "",
"col_id": ""
}
db.cell.unlock
- auth_token
- lock_id
Releases the given lock ID if the lock was created by the owner of the auth_token.
db.cell.unlock_multiple
- auth_token
- locks - an array of lock IDs
[ "", "", ... ]
Unlocks a number of cells by given lock IDs.
Returns an error code and, if any unlock failed, a list of the lock IDs that failed to unlock:
{
"locks": [ "", ... ]
}
db.cell.update
- auth_token
- row_id
- col_id
- lock_id - a lock handle obtained from db.cell.lock, or 'auto' if it is not critical to keep the cell locked for a longer period of time (an explicit lock is needed e.g. when user intervention requires the cell to stay locked)
- value - if the cell is of type Password, a plain-text password must be passed and the encoded value will be returned. For cells of type Date, a UTC value in YYYY-MM-DD HH:MM:SS format must be provided.
Updates a single locked cell. If the cell has not been locked beforehand, the operation fails.
Returns
{
"value": ""
}
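The typical lock, update, unlock sequence, sketched with the call_api helper from the Overview (IDs and the value are illustrative):

lock = call_api("db.cell.lock", auth_token=token, row_id=row_id, col_id=order_number_col)
call_api("db.cell.update", auth_token=token, row_id=row_id, col_id=order_number_col,
         lock_id=lock["id"], value="1002")
call_api("db.cell.unlock", auth_token=token, lock_id=lock["id"])  # release the lock when done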
db.cell.update_multiple
- auth_token
- cells - an array of hashes in the following format:
{
"row_id": "",
"col_id": "",
"lock_id": "",
"value": ""
}
Updates multiple cells at once. The cells may be manually locked using db.cell.lock or db.cell.lock_multiple; otherwise an 'auto' lock_id may be passed to skip the two extra calls. If the data matches the columns the cells are in, the framework updates all cells that can be updated. If there is at least one failure, a failure is returned with detailed information about which of the cells failed to be updated.
Returns
{"cells": [
{
"row_id": ,
"col_id": ,
"value":
},
...
]
}
If there's a failure during the very update process, the "data" of the response contains:
{
"errors": [
{
"message": "",
"cell": {
"row_id": "",
"col_id": "",
"lock_id": "",
"value": ""
}
},
...
],
"cells": [
]
}
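A sketch of the 'auto' variant with the call_api helper, which skips the explicit db.cell.lock_multiple / db.cell.unlock_multiple calls (IDs and values are illustrative):

result = call_api("db.cell.update_multiple", auth_token=token, cells=[
    {"row_id": row_id, "col_id": order_number_col, "lock_id": "auto", "value": "1003"},
    {"row_id": row_id, "col_id": order_date_col, "lock_id": "auto", "value": "2024-05-02 09:30:00"},
])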
db.cell.get
- auth_token
- row_id
- col_id
Returns the value of a cell.
{
"value": ""
}
db.cell.get_multiple
- auth_token
- cells - an array of hashes:
[
{ "row_id": "<$row_id1>", "col_id": "<$col_id1>" },
{ "row_id": "<$row_id2>", "col_id": "<$col_id2>" }
...
]
Gets the values of a list of cells.
Returns the cells:
{
"cells": [
{ "row_id": "<$row_id1>", "col_id": "<$col_id1>", "value": "<$value1>" },
{ "row_id": "<$row_id2>", "col_id": "<$col_id2>", "value": "<$value2>" },
...
]
}
db.cell.clear_multiple
- auth_token
- value - optional value to set to all cells. By default it's an empty string.
- cells - an array of hashes:
[
{ "row_id": "<$row_id1>", "col_id": "<$col_id1>" },
{ "row_id": "<$row_id2>", "col_id": "<$col_id2>" }
...
]
Resets the value of a group of cells to the given "value". Returns only error code.
db.cell.copy_multiple
- auth_token
- cells - an array of hashes
[
{
"src": {
"row_id": "<$source row id1>",
"col_id": "<$source col id1>"
},
"dst": {
"row_id": "<$destination row id1>",
"col_id": "<$destination col id1>"
}
},
{
"src": { ... },
"dst": { ... }
},
...
]
Copies the values of cells from the source cells onto the destination cells.
db.query.get_filtered_rows
- auth_token
- index_only (optional) - by default this is 1, which means the result of the call is just a list of IDs of the rows that matched the filter. If this option is set to 0, the result is the same as if db.row.get_multiple had been invoked for the returned list of rows.
- count_only - returns only the number of rows in the result as a hash { "count": "" }
- filter - a JSON defining the criteria for the portion of rows. If no "sort" is specified, the row IDs are returned in the order they were added. If the "limit" is not specified, all matching rows are returned.
{
"links": [
[ "<$col_id_X>", "<operator>", "<$col_id_N>" ],
...
],
"where": [
[ "<$col_id_Y>", "<operator>", [ <operand>, <operand>, ... ] ],
...
],
"sort": {
"column": "<$col_id_Z>",
"order": <"asc" or "desc">
},
"limit": {
"from": "", // from 0 .. #rows - 1
"count": ""
}
}
The for "links" must be "=". The in the "filters" should be a constant. The in "filters" accepts more than 1 . All operands are OR-ed (disjunction). All operators are AND-ed (conjunction). Possible operators are:
- = (equals)
- > (greater than)
- < (less than)
- <= (less than or equal)
- >= (greater than or equal)
- <> (not equal)
- like (substring) - if the operand contains a percent sign "%" it works like the percent sign in a normal SQL query, i.e. "foo%" will search for all texts that start with a "foo"; "%bar" will search for all texts that end with "bar"; "foo%bar" will search for all texts that start with a "foo" and end with a "bar". The number of percent signs that can be used is not limited within the operand. If the percent sign is escaped with a backslash "\%", it will work as a percent sign in the text of the search query.
- not_like (not substring) - again, a percent sign can be used just like in the "like" operator
When using "links" the columns that are compared should be from the same type. Only the ID column can be compared with any Integer column and vice versa.
Returns information about the total number of rows excluding the limit and an array of rows. Each element from the "rows" array is a row from the database. The row is an ID. Here's the returned JSON format:
{
"rows": [
"row_id1",
"row_id2",
...
]
}
When "links" is specified, the returned result differs a little. Each element of the "rows" array is an array itself which contains a row that matches the "where" statements and the linked rows defined by the "links" statement. The result looks like that:
{
"rows": [
[ "row_id1.1", "row_id1.2", ... ],
[ "row_id2.1", "row_id2.2", ... ],
...
]
}
If index_only = 0 is specified, the data of the rows is returned too. The result would be:
{
"rows": [
{
"row_id": "<$row_id>",
"cells": {
"<$col_id1>": "<$value1>",
"<$col_id2>": "<$value2>",
...
}
},
...
]
}
If there's a "links" clause specified, the "row_id" becomes "row_ids" and the value is an array of row IDs.
db.query.get_filtered_rows_multiple
The same as db.query.get_filtered_rows, but executes a number of filters at once. Every filter is passed as a hash with "id" and "filter" keys. The returned result is a hash of results indexed by "id", so one can easily determine which result belongs to which filter.
- auth_token
- index_only - see db.query.get_filtered_rows
- count_only - returns only the number of rows in the result as a hash { "count": "" }
- filters - an array of hashes. Every hash has the following structure:
{
"id": "<$filter_id>",
"filter": { ... a filter as in db.query.get_filtered_rows ... }
}
The result contains a "data" which is a hash like this:
{
"<$filter_id1>": { ... the same result you would get from db.query.get_filtered_rows for a single filter ... },
"<$filter_id2>": { ... },
...
}
db.query.get_all_rows
- auth_token
- index_only (optional) - by default it's 1. See db.query.get_filtered_rows's index_only option description
- count_only - returns only the number of rows in the result as a hash { "count": "" }
- db_id
- filter (optional) - a JSON with the following format:
{
"sort": {
"column": "$col_id_Z",
"order": <"asc" or "desc">
},
"limit": {
"from": "", // from 0 .. #rows - 1
"count": ""
}
}
Retrieves all rows for a database.
Returns a list of row IDs similar to db.query.get_filtered_rows. If no filter is specified, the list is ordered ascending by row ID, which means the newest rows are at the end. Here's the format of the returned data:
{
"rows": [
"row_id1",
"row_id2",
...
]
}
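For example, fetching the first 100 rows together with their data (index_only=0) using the call_api sketch - the sort column is illustrative:

page = call_api("db.query.get_all_rows", auth_token=token, db_id=db_id, index_only=0,
                filter={"sort": {"column": order_date_col, "order": "asc"},
                        "limit": {"from": 0, "count": 100}})
for row in page["rows"]:
    print(row["row_id"], row["cells"])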
db.query.replace
- auth_token
- filter - see db.query.get_filtered_rows. A "columns" key is added with an array of the column IDs that are the only ones used for replacement:
{
"columns": [ "", "", ... ],
"where": [
[ "$col_id_Y", "", [, ,...] ],
...
]
}
- src_value - the value to search for within all columns listed in the filter's "columns" key
- dst_value - the value that replaces all values matching src_value
- options - a JSON with search/replace options:
{
"case_sensitive": "<0 - no, 1 - yes>"
}
Replaces src_value with dst_value within all values from the columns and rows that match the "filter".
Returns the rows that contained cells in which replacements have been made as well as the number of replacements made.
{
"rows": [ "", "", ... ],
"num_replacements": ""
}
db.query.export_filtered_rows
- auth_token
- db_id
- filter - see "filter" in db.query.get_rows
Exports database data to CSV file format. The header of the CSV file contains the names of the columns in the database, suffixed with system information about the type of the column. Each column name in the CSV file will look like <column name>(<type>), e.g. "OrderID(number)".
Returns:
{
"url": ""
}
db.query.export_all_rows
- auth_token
- db_id
Exports database data to CSV file format. The header of the CSV file contains the names of the columns in the database, suffixed with system information about the type of the column. Each column name in the CSV file will look like <column name>(<type>), e.g. "OrderID(number)".
Returns:
{
"url": ""
}
db.query.import
- auth_token
- db_id - if not specified, a new database is created, named after the CSV file without its extension
- url - URL to a CSV file
Imports data from a CSV file into a new or existing database. If an existing database is given, it tries to match the column names in the DB to the column names in the header of the CSV. Only those columns which have a matching column name are imported. Data which does not match the database column type is not imported (left empty).
Returns the following result:
{
"db_id": "",
"num_lines": "",
"num_imported": "",
"num_errors": ""
}
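A sketch of importing a CSV into a new database with the call_api helper (the URL is illustrative; db_id is omitted so a new database named after the file is created):

result = call_api("db.query.import", auth_token=token, url="https://example.com/orders.csv")
print(result["db_id"], result["num_imported"], result["num_errors"])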
db.virtual.add
- auth_token
- name
- filter - the same as for db.query.get_filtered_rows
- extra - see db.add
Adds a virtual DB.
Returns
{
"id": ""
}
db.virtual.update
- auth_token
- vdb_id
- meta - a hash of the meta data to be updated; not all of the keys are mandatory:
{
"name": "",
"description": "",
"filter": "",
"extra": ""
}
Updates an existing virtual DB.
db.virtual.remove
- auth_token
- vdb_id
Removes an existing virtual DB.
db.virtual.get
- auth_token
- vdb_id
Gets virtual DB meta information.
Returns
{
"id": "",
"name": "",
"description": "",
"filter": "",
"create_date": "",
"modified_date": "",
"num_rows": "",
"extra": ""
}
db.virtual.get_all
- auth_token
Gets all virtual DBs for the user identified by "auth_token".
Returns an array of hashes of the kind returned by db.virtual.get, in the following format:
{
"databases": [
...array of hashes...
]
}
See access.* API calls.
db.access.add - the same as access.add
db.access.get - the same as access.get
db.access.get_all - the same as access.get_all
db.access.update - the same as access.update
db.access.remove - the same as access.remove