Introduction
This documentation outlines the programmatic ways of interacting with the Hyperscience application. It provides instructions for sending requests to our REST API endpoints and describes the structure of the JSON responses.
Versioning
Certain API functionality is tied to specific Hyperscience versions. Please refer to the API documentation tied to the specific release of Hyperscience you are using.
The API is versioned so that we can both keep backwards compatibility and continue to improve and extend the existing API. This is the latest major version of the API and all functionality has been available since version 27 of the application unless otherwise noted. All API endpoints are available under the /api/v5 URL prefix.
Items marked with vXX were introduced in version XX and can be used in later versions of the app as well. Items marked as deprecated are not recommended for use in new code. The recommended alternative is included in the description on a case-by-case basis.
Version Endpoint
GET /api/v5/version
Example Response
{
    "version": "42.0.0",
    "date": "2025-09-10"
}
You can check what version of the application you are running via this endpoint. It returns a JSON object with the version of the application as well as the date it was released.
Deprecation Policy & Schedule
As we enhance the functionality of our products, we may deprecate certain versions of our API. We may also deprecate specific endpoints or variables in a version if they are no longer supported.
Deprecation Schedule
| Endpoint | Deprecation | Sunset |
Terminology
Hyperscience uses four terms to indicate the state of our API versions and endpoints.
- Active: The most recent version. New customers should use this endpoint/version, and all customers are encouraged to migrate to it. This endpoint/version is supported actively, and new features are added.
- Superseded: A previous, supported endpoint/version of our API. Bugs are fixed. New features may not be added.
- Deprecated: A previous, unsupported endpoint/version of our API. Bugs are not fixed. Features are not added. Clients are strongly encouraged to move to an Active API.
- Sunset: The code for this API version or endpoint is completely removed. Clients can no longer use it.
Communication Forums
We will communicate updates around our deprecation policy in the following locations:
- Deprecation Policy & Schedule: This page will be regularly updated with a schedule and details of any deprecations and sunsets.
- Release Notes: All release notes will include a section on API Deprecation.
- Proactive Emails to Clients: Whenever an endpoint/version is set to be deprecated or sunset, we will proactively email all clients using that endpoint/version.
- API Response: The response from the API Endpoint itself will return a warning to users in a response header indicating that the endpoint/version is now deprecated. This will also include a link to our Deprecation Policy & Schedule for more details.
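As a sketch of how a client might surface these warnings, the helper below inspects a response's headers for a deprecation notice. The header names checked here (`Deprecation`, `Sunset`, `Warning`) are assumptions based on common HTTP conventions; this page does not specify which header Hyperscience uses, so confirm the actual name in the Release Notes.

```python
# Hypothetical helper: surface a deprecation warning from an API response.
# NOTE: the header names below are assumptions (common HTTP conventions);
# the actual header used is not specified in this documentation.
def deprecation_notice(headers):
    """Return the first deprecation-related header found, or None."""
    for name in ('Deprecation', 'Sunset', 'Warning'):
        value = headers.get(name)
        if value:
            return f'{name}: {value}'
    return None
```

In practice you would call `deprecation_notice(r.headers)` after each request and log the result if it is not None.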
Timelines
All of these timelines adhere to the Hyperscience policy of supporting the two previous major releases. That policy takes precedence over the timelines below:
- Specific Variable within an Endpoint: Must function for 3 months or 1 release (whichever is longer) after announced deprecation. For example, if a variable is deprecated in v30, it can be sunset in v31.
- API Endpoint (In Use): Must function for 6 months or 2 releases (whichever is longer) after announced deprecation. For example, if an endpoint is deprecated in v30, it can be sunset in v32.
- API Version (In Use): Must function for 12 months or 4 releases (whichever is longer) after announced deprecation. For example, if a version is deprecated in v30, it can be sunset in v34.
- Unused Endpoints & Versions: Must function for 3 months or 1 release (whichever is longer) after announced deprecation. “Unused” is defined as knowing that no clients will be using or are planning to use the endpoint or version. For example, if an unused endpoint is deprecated in v30, it can be sunset in v31.
Getting Started Guide
Overview
The Hyperscience API uses REST API endpoints and returns responses in JSON. The API supports a variety of use cases, including:
- Submitting and retrieving files.
- Submitting and retrieving cases.
- Retrieving documents, pages, fields, tables, and data types.
- Generating reports.
This documentation is an overview of the Hyperscience API functionality. To learn in detail about the programmatic ways of interacting with the API, see the API documentation below.
Versioning
Like the Hyperscience application, the API is versioned. Versioning allows the API to remain backwards compatible while extending existing functionality.
When a breaking change needs to be made to the API, we either:
- introduce a new version of the API,
- introduce a new endpoint, or
- introduce an optional parameter that provides access to the breaking change.
We communicate these breaking changes in the Release Notes and, in some instances, via email.
Authentication
Example request
# M2M Authentication (OAuth2 based). Use your client ID and secret to obtain an OAuth 2.0 access
# token from the local identity provider, then pass this access token as a header with every
# request. Note that you need to install the authlib module to use this example code, which is
# commonly done by running pip install authlib
from authlib.integrations.requests_client import OAuth2Session
from authlib.jose import JsonWebKey
from authlib.oauth2.rfc7523 import ClientSecretJWT
from urllib.parse import urljoin
import json
# The following three variables are user-supplied inputs.
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
m2m_client_id = 'e74f97a5-7e20-42ec-adee-c431c62991e9'
m2m_client_secret_jwk = '''
{
"kty": "oct",
"k": "MmdnOG8zZFVlSWpSNk1rY2pQYldaWGJSMUhhYjNxRUc",
"alg": "HS256",
"kid": "2fd680a3-23e6-4ef6-b84f-199b628f1af2"
}
'''
token_endpoint = urljoin(base_url, 'api/v5/local_identity_provider/oauth2/token')
def get_oauth2_session():
    """
    Returns an OAuth 2.0 session for Machine-to-Machine (M2M) authentication that automatically
    refreshes access tokens as they near expiration.
    """
    jwk = JsonWebKey.import_key(json.loads(m2m_client_secret_jwk))
    session = OAuth2Session(
        client_id=m2m_client_id,
        client_secret=jwk,
        token_endpoint_auth_method=ClientSecretJWT(token_endpoint, alg=jwk['alg']),
        token_endpoint=token_endpoint,
        grant_type='client_credentials'
    )
    session.headers.update({'HyperscienceLocalIdentityProvider': '1'})
    # Fetch the initial access token. All subsequent access tokens will be
    # automatically fetched by the OAuth2Session as they near expiration.
    session.fetch_token()
    return session
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/version'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
    "version": "42.0.0",
    "date": "2025-09-10"
}
Only users with the API Access permission can make API calls. To make API calls, you need to authenticate your user.
- We strongly recommend OAuth 2.0 with Machine Credentials for production authentication.
- API Tokens (on-prem) have been deprecated for production use and are intended for exploration and troubleshooting only. They should not be used in new deployments. Use your API token by passing it as a header with every request. Here's a Shell example:
curl -X GET https://on-premise-server.yourcompany.com/api/v5/version -H 'Authorization: Token a22d533ebaa60ae5d46b2f0ea67532a4eb8e33be'
- API Accounts (SaaS) have been deprecated and will be sunset in the future.
Constructing a Request
Each API request has the following structure:
[HTTP method] https://[server].[your company].com/api/[API version]/[endpoint], where:
- HTTP method – The Hyperscience API supports GET and POST methods.
- server – your server’s name (e.g., production, dev, etc.)
- your company – your company’s name
- API version – version of the API you use.
- endpoint – the endpoint you want to reach.
Here is an example of an API request: GET https://production.mycompany.com/api/v5/submissions
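The template above can be captured in a small helper. This is an illustrative sketch, not part of the API itself; the parameter names simply mirror the placeholders in the template.

```python
from urllib.parse import urljoin

def build_url(server, company, api_version, endpoint):
    """Assemble a Hyperscience API URL from the components described above."""
    base = f'https://{server}.{company}.com/'
    return urljoin(base, f'api/{api_version}/{endpoint}')

# Reproduces the example request URL:
# build_url('production', 'mycompany', 'v5', 'submissions')
# -> 'https://production.mycompany.com/api/v5/submissions'
```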
Basic Tutorial
This section contains an example of how to retrieve your OAuth 2.0 access token from the application and submit documents for processing through the API.
Step 1: Create an OAuth2Session in Python that automatically manages access tokens.
Please import the get_oauth2_session() function from the Authentication section.
Step 2: Submit documents
Example request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions'
endpoint_url = urljoin(base_url, endpoint)
metadata = {'custom': 'data'}
machine_only = 'false'
external_id = 'yourcompanyid#1236'
restrictions = ['location', 'team']
source_routing_tags = ['office_nyc', 'frontdesk']
data = {
    'metadata': json.dumps(metadata),
    'external_id': external_id,
    'machine_only': machine_only,
    'restriction': restrictions,
    'source_routing_tag': source_routing_tags,
}
files = [
    ('file', ('test_submission.pdf', open('test_submission.pdf', 'rb'), 'application/pdf')),
    ('file', ('test_submission_2.pdf', open('test_submission_2.pdf', 'rb'), 'application/pdf'))
]
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url, files=files, data=data)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
    saved_submission_id = r.json()['submission_id']
To submit documents for processing, make a POST request to the /submissions endpoint, as shown in the example request above. The only required parameter is file or files, but you can also add additional parameters. To learn more about submission-creation parameters, see Request Parameters.
Next Steps
Reading through Hyperscience's API documentation is a great way to explore all features of the API. If you’re not sure where to start, here are a few interesting endpoints:
- GET /api/v5/submissions/submission_id – retrieve the processed data for a submission using that submission’s ID.
- GET /api/v5/layouts/layout_id – retrieve data about a specific layout using that layout's ID.
- GET /api/v5/cases/case_id – retrieve data about a specific case using that case’s ID.
- GET /api/v5/documents/document_id – retrieve data about a specific document using that document’s ID.
- GET /api/v5/pages/page_id – retrieve data about a specific page using the page’s ID.
- POST /api/v5/submissions – create a submission.
Pagination
Offset Pagination
Example Listing Object
{
    "count": 19,
    "next": "https://on-premise-server.yourcompany.com/api/v5/submissions?limit=1&offset=1",
    "previous": null,
    "results": [
        {
            "id": 1000000,
            "external_id": "yourcompanyid#1236",
            "state": "complete",
            "substate": null,
            "exceptions": [
                "required_field_missing"
            ],
            "halted": false,
            "start_time": "2018-05-09T20:39:50.220162Z",
            "goal_time": "2018-05-10T20:39:50.220162Z",
            "goal_time_source": "System Default",
            "sla_rule_name": "System Default SLA rule",
            "sla_rule_definition": {
                "_metadata": {
                    "version": 1
                },
                "rules": [
                    {
                        "process_within": {
                            "duration": 24,
                            "duration_unit": "hours"
                        }
                    }
                ]
            },
            "complete_time": "2018-05-09T20:41:22.310162Z",
            "supervision_url": null,
            "apply_restrictions_url": null,
            "metadata": {
                "custom": "data"
            },
            "data_deleted": false,
            "data_deleted_details": "not_deleted",
            "source_routing_tags": []
        }
    ]
}
To avoid slow requests with large amounts of returned data, each listing endpoint is paginated.
In offset pagination, the listing endpoints return a top-level object that contains pagination information and a list of objects matching the request query.
The response contains a portion of the whole result set, along with the total number of objects matching the request filters (the count field), the URL where the next page of results can be fetched (the next field), and the URL for the previous page of results (the previous field). In addition to the endpoint-specific request filters, there is a set of pagination-related parameters described below.
Object properties
| Property | Type | Description |
| count | integer | Number of records matching the listing filters |
| next | string | A URL where the next page can be fetched. The filters from the initial request are also applied to this page. |
| previous | string | A URL where the previous page can be fetched. The filters from the initial request are also applied to this page. |
| results | array[object] | An array of objects matching the listing query. See individual listing endpoints for details on the returned object. |
Pagination properties (query parameters)
Example request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions'
endpoint_url = urljoin(base_url, endpoint)
params = {'limit': 1}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
    "count": 19,
    "next": "https://on-premise-server.yourcompany.com/api/v5/submissions?limit=1&offset=1",
    "previous": null,
    "results": [
        {
            "id": 1000000,
            "external_id": "yourcompanyid#1236",
            "state": "complete",
            "substate": null,
            "exceptions": [
                "required_field_missing"
            ],
            "halted": false,
            "start_time": "2018-05-09T20:39:50.220162Z",
            "goal_time": "2018-05-10T20:39:50.220162Z",
            "goal_time_source": "System Default",
            "sla_rule_name": "System Default SLA rule",
            "sla_rule_definition": {
                "_metadata": {
                    "version": 1
                },
                "rules": [
                    {
                        "process_within": {
                            "duration": 24,
                            "duration_unit": "hours"
                        }
                    }
                ]
            },
            "complete_time": "2018-05-09T20:41:22.310162Z",
            "supervision_url": null,
            "apply_restrictions_url": null,
            "metadata": {
                "custom": "data"
            },
            "data_deleted": false,
            "data_deleted_details": "not_deleted",
            "source_routing_tags": []
        }
    ]
}
| Property | Type | Description |
| limit | integer | How many records to include in a single response (used for pagination). Default is 100. The recommended value for batch requests is 500; for very large result sets, it can be increased to 1000. |
| offset | integer | Start returning objects in the results from the specified offset (used for pagination). |
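Putting the pagination parameters and the next link together, a client can walk an entire listing with a loop like the one below. This is a sketch: fetch is any callable that performs an authenticated GET and returns the parsed JSON listing object, e.g. `lambda url: session.get(url).json()` with the session from the Authentication section.

```python
def iterate_results(fetch, first_url):
    """Yield every object in a paginated listing by following 'next' links.

    `fetch` is a callable that GETs a URL and returns the parsed listing
    object ({'count': ..., 'next': ..., 'previous': ..., 'results': [...]}).
    """
    url = first_url
    while url is not None:
        page = fetch(url)
        yield from page['results']
        url = page['next']  # None on the last page, ending the loop
```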
Cursor Pagination
Cursor Pagination is only available for the List Submissions and List Audit Logs endpoints. When this feature is enabled, the request returns a portion of the full set of objects, with next and previous URLs to allow for traversing the object set.
Object properties
| Property | Type | Description |
| next | string | A URL where the next page can be fetched. The filters from the initial request are also applied to this page. |
| previous | string | A URL where the previous page can be fetched. The filters from the initial request are also applied to this page. |
Supported Request Content Types
The supported content types for the bodies of POST requests are:
- multipart/form-data, usually used when uploading files as raw objects,
- application/json, usually used when sending a submission's file content as Base64-encoded data in JSON format, and
- application/x-www-form-urlencoded, usually used when sending files as URLs.
While multipart/form-data and application/json can also be used to send file URLs, they are designed primarily for the use cases described above.
Audit Logs v36
Audit Log Object
Example Audit Log Object
{
    "id": 10000,
    "activity_created": "2022-01-01T14:16:42.506936Z",
    "operator": 0,
    "username": "admin",
    "activity_name": "submission reprioritization",
    "activity_subtype_name": "",
    "extra": "",
    "is_success": true,
    "object_id": "2",
    "object_type": "Submission",
    "object_name": "Submission[2]",
    "changes": "",
    "object_column_changes": [
        {
            "column_name": "dt_goal",
            "old_value": "2022-01-01 17:10:00+00:00",
            "new_value": "2022-01-01 13:10:00-04:00"
        }
    ]
}
An Audit Log is an entity that records key actions taken in the Hyperscience application, along with information about who took the action and when. If an action is taken by a user rather than a machine, the user’s username is recorded in the Audit Log object.
Object Properties
| Property | Type | Description |
| id | integer | Unique system-generated Audit Log ID. |
| activity_created | datetimetz (an ISO-8601 formatted datetime string) | Timestamp of this activity. |
| operator | integer | Type of the operator performing this activity: 0 - human, 1 - machine. |
| username | string | The username of the human operator. |
| activity_name | string | The activity general name. |
| activity_subtype_name | string | The activity subtype name. |
| extra v37 | string | Additional information from the activity. |
| is_success v39.2 | boolean | Flag which indicates activity success or failure. |
| object_id | string | The identifier of the object related to this activity. |
| object_type | string | The type of the object related to this activity. |
| object_name | string | The name of the object related to this activity. |
| changes v24 | string | Empty string. This field is deprecated. |
| object_column_changes | array of JSON objects (see the description for more information) | Array of Object Column Change objects containing the changes to the related object's columns performed by this activity. |
Object Column Change - Object
| Property | Type | Description |
| column_name | string | Name of the column that has its value changed. |
| old_value | string | Old column value. |
| new_value | string | New column value. |
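As an illustration of how these objects might be consumed, the hypothetical helper below renders an Audit Log's object_column_changes array as human-readable strings. The function name is ours; only the field names come from the tables above.

```python
def describe_column_changes(audit_log):
    """Summarize an Audit Log's column changes as "name: old -> new" strings."""
    return [
        f"{change['column_name']}: {change['old_value']!r} -> {change['new_value']!r}"
        for change in audit_log['object_column_changes']
    ]
```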
Retrieving Audit Logs
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/audit_logs/'
audit_log_id = str(10000)
endpoint_url = urljoin(base_url, posixpath.join(endpoint, audit_log_id))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
    "id": 10000,
    "activity_created": "2022-01-01T14:16:42.506936Z",
    "operator": 0,
    "username": "admin",
    "activity_name": "submission reprioritization",
    "activity_subtype_name": "",
    "extra": "",
    "is_success": true,
    "object_id": "2",
    "object_type": "Submission",
    "object_name": "Submission[2]",
    "changes": "",
    "object_column_changes": [
        {
            "column_name": "dt_goal",
            "old_value": "2022-01-01 17:10:00+00:00",
            "new_value": "2022-01-01 13:10:00-04:00"
        }
    ]
}
Retrieve data about a specific Audit Log using that Audit Log's ID. You can obtain this ID by using the Listing Audit Logs endpoint.
Audit Log Retrieval Endpoint
GET /api/v5/audit_logs/audit_log_id
Listing Audit Logs
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/audit_logs'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
    "count": 2,
    "next": null,
    "previous": null,
    "results": [
        {
            "id": 10001,
            "activity_created": "2022-01-02T14:16:42.506936Z",
            "operator": 1,
            "username": "",
            "activity_name": "application upgrade",
            "activity_subtype_name": "",
            "extra": "",
            "is_success": true,
            "object_id": "324207be44ec948yyab6cbb735a82f9297cxxx96",
            "object_type": "",
            "object_name": "",
            "changes": "",
            "object_column_changes": [
                {
                    "column_name": "version_info",
                    "old_value": "",
                    "new_value": "{\"forms\": {\"revision\": \"324207be44ec948yyab6cbb735a82f9297cxxx96\", \"tag\": \"\", \"date\": \"2022-01-02\"}}"
                }
            ]
        },
        {
            "id": 10000,
            "activity_created": "2022-01-01T14:16:42.506936Z",
            "operator": 0,
            "username": "admin",
            "activity_name": "submission reprioritization",
            "activity_subtype_name": "",
            "extra": "",
            "is_success": true,
            "object_id": "2",
            "object_type": "Submission",
            "object_name": "Submission[2]",
            "changes": "",
            "object_column_changes": [
                {
                    "column_name": "dt_goal",
                    "old_value": "2022-01-01 17:10:00+00:00",
                    "new_value": "2022-01-01 13:10:00-04:00"
                }
            ]
        }
    ]
}
This endpoint allows you to retrieve a list of Audit Logs in the system that match your defined filtering criteria and paginate through them. Each object in the results array is an Audit Log Object.
Audit Log Listing Endpoint
GET /api/v5/audit_logs
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Audit Logs.
| Property | Type | Description |
| cursor_pagination v37 | boolean | Indicates whether cursor pagination is enabled. Defaults to false. We recommend setting this parameter to true when retrieving a large number of audit logs. |
| id_min | integer | Filters for Audit Logs with an id bigger or equal to a specific id (min operator). |
| id_max | integer | Filters for Audit Logs with an id smaller or equal to a specific id (max operator). |
| activity_created_gte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Audit Logs that have an activity created on or after a specific date and time (greater-than-or-equal-to operator). |
| activity_created_lte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Audit Logs that have an activity created on or before a specific date and time (less-than-or-equal-to operator). |
| activity_created_lt | datetimetz (an ISO-8601 formatted datetime string) | Filters for Audit Logs that have an activity created before a specific date and time (less-than operator). |
| activity_name v37 | string | Filters for Audit Logs with a specific activity general name. |
| operator v37 | integer | Filters for Audit Logs triggered by a human (0) or a machine (1). |
| username v37 | string | Filters for Audit Logs triggered by a human user with a specific username. |
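For example, the query parameters below would list human-triggered Audit Logs from the last 24 hours with cursor pagination enabled. Only the construction of the parameters is shown; pass them as params to session.get() exactly as in the listing example above. The 24-hour window is an arbitrary choice for illustration.

```python
from datetime import datetime, timedelta, timezone

# Query parameters for GET /api/v5/audit_logs (names from the table above).
since = datetime.now(timezone.utc) - timedelta(hours=24)
params = {
    'operator': 0,                              # 0 = human-triggered
    'activity_created_gte': since.isoformat(),  # ISO-8601 datetime with timezone
    'cursor_pagination': 'true',                # recommended for large result sets
}
```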
Listing Audit Logs as CSV
Example Request
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/audit_logs/csv'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
ID,Activity Created,Operator,User Name,Activity Name,Activity Subtype Name,Extra,Success,Object Id,Object Type,Object Name,Changes,Object Column Changes
10001,2022-01-02 09:16:42 AM,1,,application upgrade,,,True,324207be44ec948yyab6cbb735a82f9297cxxx96,,,,"[(column: version_info, old: , new: {""forms"": {""revision"": ""324207be44ec948yyab6cbb735a82f9297cxxx96"", ""tag"": """", ""date"": ""2022-01-02""}})]"
10000,2022-01-01 09:16:42 AM,0,admin,submission reprioritization,,,True,2,Submission,Submission[2],,"[(column: dt_goal, old: 2022-01-01 17:10:00+00:00, new: 2022-01-01 13:10:00-04:00)]"
With this endpoint, you can list and filter Audit Logs and download them as a CSV file. It is similar to the Listing Audit Logs endpoint, but the output format is CSV.
Endpoint
GET /api/v5/audit_logs/csv
Request Parameters
Use the query parameters from the Listing Audit Logs endpoint to filter for specific Audit Logs.
Response
The response returned is in CSV format. Each row represents an Audit Log and has the following columns:
| Header | Type | Description |
| ID | integer | Unique system-generated Audit Log ID. |
| Activity Created | datetime | Timestamp of this activity. |
| Operator | integer | Type of the operator performing this activity: 0 - human, 1 - machine. |
| User Name | string | The username of the human operator. |
| Activity Name | string | The activity general name. |
| Activity Subtype Name | string | The activity subtype name. |
| Extra v37 | string | Additional information from the activity. |
| Success v39.2 | boolean | Flag which indicates activity success or failure. |
| Object Id | string | The identifier of the object related to this activity. |
| Object Type | string | The type of the object related to this activity. |
| Object Name | string | The name of the object related to this activity. |
| Changes | string | Empty string. This field is deprecated. |
| Object Column Changes | string | Formatted string array of Object Column Changes objects containing the changes to the related object columns performed by this activity. |
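The rows can be loaded with Python's standard csv module. The sketch below parses the downloaded text into dictionaries keyed by the column headers above; it assumes the response body has already been read into a string (e.g. r.text).

```python
import csv
import io

def parse_audit_log_csv(text):
    """Parse the audit-log CSV export into a list of dicts keyed by column header."""
    return list(csv.DictReader(io.StringIO(text)))
```

Note that every value comes back as a string; convert fields such as ID or Operator with int() if numeric handling is needed.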
Submission Creation
Example request with image files
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions'
endpoint_url = urljoin(base_url, endpoint)
metadata = {'custom': 'data'}
machine_only = 'false'
external_id = 'yourcompanyid#1236'
restrictions = ['location', 'team']
source_routing_tags = ['office_nyc', 'frontdesk']
data = {
    'metadata': json.dumps(metadata),
    'external_id': external_id,
    'machine_only': machine_only,
    'restriction': restrictions,
    'source_routing_tag': source_routing_tags,
}
files = [
    ('file', ('test_submission.pdf', open('test_submission.pdf', 'rb'), 'application/pdf')),
    ('file', ('test_submission_2.pdf', open('test_submission_2.pdf', 'rb'), 'application/pdf'))
]
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url, files=files, data=data)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
    saved_submission_id = r.json()['submission_id']
Example request with file location
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions'
endpoint_url = urljoin(base_url, endpoint)
metadata = {'custom': 'data'}
external_id = 'yourcompanyid#1236'
document = ['ocs://484', 'ocs://488']
machine_only = 'false'
data = {
    'file': document,
    'metadata': json.dumps(metadata),
    'external_id': external_id,
    'machine_only': machine_only
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url, data=data)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
    saved_submission_id = r.json()['submission_id']
Example request with Base64-encoded data in JSON format:
import json, base64
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions'
endpoint_url = urljoin(base_url, endpoint)
headers = {
    'Content-Type': 'application/json'
}
metadata = {'custom': 'data'}
external_id = 'yourcompanyid#1236'
machine_only = 'false'
restrictions = ['location', 'team']
source_routing_tags = ['tag1', 'tag2']
flow_uuid = '6791216f-c2cb-4c7a-95e4-b4804ca03061'
files = [
    {
        'file_content_base64': base64.b64encode(open('image.png', 'rb').read()).decode('ascii'),
        'filename': 'image.png'
    },
    {
        'file_content_base64': base64.b64encode(open('image2.png', 'rb').read()).decode('ascii'),
        'filename': 'image2.png'
    }
]
cases = [
    {
        'external_case_id': 'case_1',
        'filenames': ['image.png'],
        'documents': [30001, 30002, 30003],
        'pages': [50001, 50001, 50001]
    },
    {
        'external_case_id': 'case_2',
        'filenames': ['image2.png'],
        'documents': [30001, 30002, 30003],
        'pages': [50001, 50001, 50001]
    }
]
data = {
    'metadata': metadata,
    'external_id': external_id,
    'machine_only': machine_only,
    'restrictions': restrictions,
    'source_routing_tags': source_routing_tags,
    'cases': cases,
    'files': files,
    'flow_uuid': flow_uuid
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url, headers=headers, json=data)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
    saved_submission_id = r.json()['submission_id']
Example request with specific semi-structured layout
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions'
endpoint_url = urljoin(base_url, endpoint)
metadata = {'custom': 'data'}
machine_only = 'false'
external_id = 'yourcompanyid#1236'
layout_uuid = '3e4be437-4f3f-4b6a-ad34-ea6d3f212a3e'
restrictions = ['location', 'team']
data = {
    'metadata': json.dumps(metadata),
    'external_id': external_id,
    'machine_only': machine_only,
    'layout_uuid': layout_uuid,
    'restriction': restrictions
}
files = [
    ('file', ('test_submission.pdf', open('test_submission.pdf', 'rb'), 'application/pdf')),
    ('file', ('test_submission_2.pdf', open('test_submission_2.pdf', 'rb'), 'application/pdf'))
]
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url, files=files, data=data)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
    saved_submission_id = r.json()['submission_id']
Example request with cases created from uploaded file
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions'
endpoint_url = urljoin(base_url, endpoint)
metadata = {'custom': 'data'}
machine_only = 'false'
external_id = 'yourcompanyid#1236'
restrictions = ['location', 'team']
source_routing_tags = ['office_nyc', 'frontdesk']
files = [
    ('file', ('test_submission.pdf', open('test_submission.pdf', 'rb'), 'application/pdf')),
    ('file', ('test_submission_2.pdf', open('test_submission_2.pdf', 'rb'), 'application/pdf'))
]
cases = [
    {
        'external_case_id': 'yourcaseid#1234',
        'filenames': ['test_submission.pdf']
    },
    {
        'external_case_id': '',
        'filenames': ['test_submission_2.pdf']
    }
]
data = {
    'metadata': json.dumps(metadata),
    'external_id': external_id,
    'machine_only': machine_only,
    'restriction': restrictions,
    'source_routing_tag': source_routing_tags,
    'cases': json.dumps(cases),
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url, files=files, data=data)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
    saved_submission_id = r.json()['submission_id']
Example request with cases created from file location
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions'
endpoint_url = urljoin(base_url, endpoint)
metadata = {'custom': 'data'}
external_id = 'yourcompanyid#1236'
document = ['ocs://484', 'ocs://488']
machine_only = 'false'
cases = [
    {'external_case_id': 'yourcaseid#1234', 'filenames': ['ocs://484']},
    {'external_case_id': '', 'filenames': ['488']},
]
data = {
    'file': document,
    'metadata': json.dumps(metadata),
    'external_id': external_id,
    'machine_only': machine_only,
    'cases': json.dumps(cases),
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url, data=data)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
    saved_submission_id = r.json()['submission_id']
Example request with cases created from document id and/or page id
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions'
endpoint_url = urljoin(base_url, endpoint)
metadata = {'custom': 'data'}
external_id = 'yourcompanyid#1236'
document = ['ocs://484', 'ocs://488']
machine_only = 'false'
cases = [
{
'external_case_id': 'yourcaseid#1234',
'documents': [30001, 30002, 30003]
},
{
'external_case_id': '',
'pages': [50001, 50001, 50001]
}
]
data = {
'file': document,
'metadata': json.dumps(metadata),
'external_id': external_id,
'machine_only': machine_only,
'cases': json.dumps(cases),
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.post(endpoint_url, data=data)
print(json.dumps(r.json(), indent=4, sort_keys=True))
saved_submission_id = r.json()['submission_id']
Example Response
{
"submission_id": 1000000
}
A Submission is an object in the Hyperscience application that is created by submitting image files together into the application for processing. This endpoint provides three ways of doing so:
- By submitting the image files to the Submission Creation endpoint as multipart/form-data via the file parameter.
- By referencing the location of the files in the Submission Creation endpoint via the file or files parameter. The application will retrieve them on its own. You can use multipart/form-data, application/json, or application/x-www-form-urlencoded when sending file locations.
- By encoding your data in Base64 and sending it either as a plain text string or a data URI scheme string. When sending file data in this format, use the application/json content type and the files parameter. Note that the data schema for requests sent with the application/json content type differs from requests sent with other content types. An example of this schema can be found in the right-hand panel of this page.
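As a sketch of the Base64 option, the files payload for an application/json request can be assembled as follows. The helper name is ours, and it follows schema (a) described in the files parameter documentation below (file_content_base64 plus filename); verify the exact schema against the example in the right-hand panel.

```python
import base64


def build_json_files_payload(named_blobs):
    # Build the `files` array for an application/json submission request.
    # Each entry carries the Base64-encoded file data plus the filename.
    # `named_blobs` is an iterable of (filename, raw_bytes) pairs.
    return {
        'files': [
            {
                'filename': name,
                'file_content_base64': base64.b64encode(data).decode('ascii'),
            }
            for name, data in named_blobs
        ]
    }


# The payload would then be posted with the application/json content type:
# with get_oauth2_session() as session:
#     r = session.post(endpoint_url, json=build_json_files_payload(blobs))
```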
This endpoint returns a Submission ID, which functions as the submission identifier to be used to retrieve the transcribed data for this submission and track it through the application.
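For example, the returned Submission ID can be used to poll for completion. This sketch assumes a retrieval endpoint of the form GET /api/v5/submissions/{id}; confirm the exact path and the set of states against the Submission Retrieval and States sections of this documentation.

```python
from urllib.parse import urljoin


def submission_status_url(base_url, submission_id):
    # Assumed retrieval endpoint: GET /api/v5/submissions/{id}.
    return urljoin(base_url, 'api/v5/submissions/{}'.format(submission_id))


def is_finished(submission):
    # Per the States documentation, processing ends when state is "complete".
    return submission.get('state') == 'complete'


# Usage sketch, reusing get_oauth2_session() from the Authentication section:
# with get_oauth2_session() as session:
#     r = session.get(submission_status_url(base_url, saved_submission_id))
#     print(is_finished(r.json()))
```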
Submission limits
The maximum submission limit for the Hyperscience submission creation API depends on the method used.
| Method | Limits |
| When uploading files to the Submission Creation endpoint through the file parameter using multipart/form-data | The limit is governed by the frontend web server's timeout for saving the submission data to the backend storage systems. The default timeout is 25 minutes. |
| When utilizing any method other than multipart/form-data | The limit set by DATA_UPLOAD_MAX_MEMORY_SIZE in your .env file applies to the entire HTTP request body, encompassing all key-value pairs and JSON data. By default, this limit is 50MB. |
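When sending Base64-encoded data under the request-body limit, keep in mind that Base64 inflates the payload: every 3 raw bytes become 4 ASCII characters. A quick budget check can be sketched as follows; the helper names are ours, and whether the documented 50MB default counts megabytes as 2^20 or 10^6 bytes is an assumption to verify against your deployment.

```python
# Documented default for DATA_UPLOAD_MAX_MEMORY_SIZE; assumed here to mean
# 50 * 1024 * 1024 bytes -- verify the unit against your .env configuration.
DEFAULT_JSON_BODY_LIMIT = 50 * 1024 * 1024


def base64_encoded_size(raw_size):
    # Base64 maps each (padded) 3-byte group to 4 output characters.
    return ((raw_size + 2) // 3) * 4


def fits_json_limit(raw_file_sizes, overhead=4096, limit=DEFAULT_JSON_BODY_LIMIT):
    # Sum the inflated file payloads plus a rough allowance for the JSON
    # envelope (keys, filenames, and the other request parameters).
    total = sum(base64_encoded_size(s) for s in raw_file_sizes) + overhead
    return total <= limit
```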
Submission Creation Endpoint
POST /api/v5/submissions
Request Parameters
| Parameter | Description |
| document v27 | This parameter has been renamed file. |
| file v27 | (For requests with the multipart/form-data or application/x-www-form-urlencoded content type only) This parameter accepts either a file as multipart/form-data or a string that defines a path of a file to be retrieved and included in the Submission. The prefix before :// defines the storage type where the file resides. See Supported File Stores for Submission for more information. Include one instance of this parameter for each file you would like to include in the Submission. The supported file types are JPEG, PNG, HEIC, HEIF, TIFF, PDF, XPS, XLS, XLSX, DOC, DOCX, HTML, HTM, TXT, MSG and EML v39.0.0. |
| files v30.0.14 | (For requests with the application/json content type only) JSON-formatted data about the files in the Submission. For each file, this parameter can have one of the following: a) JSON object with keys: file_content_base64, which accepts Base64-encoded file data as either a plain text string or a data URI scheme string, and filename, which contains the name of the file, or b) JSON object with key: file_url, which accepts the URL of the file's location. The supported file types are JPEG, PNG, HEIC, HEIF, TIFF, PDF, XPS, XLS, XLSX, DOC, DOCX, HTML, HTM, TXT, MSG and EML v39.0.0. |
| cases v31 | This parameter accepts an array of JSON objects that contain information on how Cases should be created. Optional. See the example requests for how to include a cases parameter for a Submission object. There are multiple ways to collate a case: by the filenames specified in the current Submission, by the ids of Documents in previous submissions, and by the ids of Pages submitted in previous submissions. When creating a case by filename, the filenames must match the ones provided in the request's file or files parameter and include the file’s extension. Only the filename is needed; paths should not be included. |
| metadata | User-defined JSON-formatted data that can be included as part of the Submission object. Optional. Maximum length 1000 characters. |
| external_id | User-defined string that can be included as part of the Submission object. Must be unique to the submission. Optional. Maximum length 200 characters. |
| machine_only | Boolean parameter to process the Submission using an automated workflow only. Optional. Default is false, which means the Submission is subject to Supervision. |
| error_mode | String parameter that allows you to relax error handling during file upload. Possible values are strict and relaxed. For example, if you submit 50 files and 1 of them is not accepted by the application, in strict mode, no Submission will be created, and you will get a 400 error. In relaxed mode, the Submission will be created with the 49 accepted files, and the response will return a 201 along with the filename that could not be processed. Defaults to strict. |
| priority v28 | Integer parameter that specifies the priority level for processing the Submission and all Documents identified as part of the Submission. If set, the value of this parameter will still be used for prioritization, but goal_time or goal_duration_minutes should be used instead. |
| goal_time v28 | Datetimetz parameter that specifies the date and time by which the system will try to ensure the Submission, and all pages identified as part of it, are processed. If any page matches a layout with an earlier associated goal time, the goal time for the Submission and all pages identified in it will be set to that earlier goal time. |
| goal_duration_minutes v28 | Integer parameter specifying the number of minutes after submission creation within which the system will try to ensure the Submission, and all pages identified as part of it, are processed. If any page matches a layout with an earlier associated goal time, the goal time for the Submission and all pages identified in it will be set to that earlier goal time. |
| layout_uuid | The UUID of a Semi-structured layout that is part of an active release can be specified in this parameter. When specified, all submitted pages in the request will be matched to it and machine classification will be skipped. |
| fallback_layout_uuid | The UUID of a Semi-structured layout that is part of an active release can be specified in this parameter. When specified, only pages that are not matched to a live layout in the request will be matched to the specified Semi-structured layout. Note that layout_uuid and fallback_layout_uuid cannot be specified in the same request. |
| single_image_per_document v27 | This parameter has been renamed single_document_per_page. |
| single_document_per_page v27 | This boolean parameter only applies to Semi-structured documents. If set to true, it forces the creation of separate documents for each uploaded page in each file. For example, if this parameter is set to true, a single PNG image will produce a single document, while a 3-page PDF file will always produce 3 separate documents under this scheme. This parameter takes precedence over the application's settings. By default, PNG and JPG files that match the same layout are combined into one document, while only consecutive pages in a single PDF, TIFF, or XPS file matching the same layout will be grouped into the same document. |
| source_routing_tag v28.0.4 | (For requests with the multipart/form-data or application/x-www-form-urlencoded content type only) Adds a source-routing tag to the Submission. After the Submission is created, this tag can be used with the "has_source_routing_tag" function in Notification Filters for output connectors. Include one instance of this parameter for each tag you would like to add. |
| source_routing_tags v30.0.14 | (For requests with the application/json content type only) Adds the listed source-routing tags to the Submission. After the Submission is created, these tags can be used with the "has_source_routing_tag" function in Notification Filters for output connectors. When applying multiple tags, enter them as a comma-separated list (e.g., ["tag1", "tag2"]). |
| restriction v26.1 | (For requests with the multipart/form-data or application/x-www-form-urlencoded content type only) String parameter that specifies the name of the task restriction that should be applied to all Supervision tasks associated with this Submission. The parameter can be repeated in order to associate multiple task restrictions with the Submission. |
| restrictions v30.0.14 | (For requests with the application/json content type only) Specifies the names of the task restrictions that should be applied to all Supervision tasks associated with this Submission. When applying multiple task restrictions, enter them as a comma-separated list (e.g., ["restriction1", "restriction2"]). |
| flow_uuid v30 | The UUID of the flow that will process the Submission. |
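With this many optional parameters, it can help to assemble the form data programmatically. A minimal sketch, using only parameters documented above (the helper name is ours; omitted options are simply left out of the request, and metadata and cases are JSON-serialized as in the examples on this page):

```python
import json


def build_submission_data(external_id=None, metadata=None, machine_only=False,
                          goal_duration_minutes=None, error_mode=None,
                          cases=None):
    # Booleans and integers are sent as strings in form-encoded requests.
    data = {'machine_only': 'true' if machine_only else 'false'}
    if external_id is not None:
        data['external_id'] = external_id
    if metadata is not None:
        data['metadata'] = json.dumps(metadata)
    if goal_duration_minutes is not None:
        data['goal_duration_minutes'] = str(goal_duration_minutes)
    if error_mode is not None:
        data['error_mode'] = error_mode
    if cases is not None:
        data['cases'] = json.dumps(cases)
    return data


# data = build_submission_data(external_id='yourcompanyid#1236',
#                              metadata={'custom': 'data'},
#                              goal_duration_minutes=60)
# with get_oauth2_session() as session:
#     r = session.post(endpoint_url, files=files, data=data)
```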
Response Properties
| Property | Type | Description |
| submission_id | integer | The ID of the Submission that is processing the uploaded images |
Response Errors
| Error Code | Error Message | Description |
| 400 | No enabled variable layout matches the specified name | No semi-structured layout by the specified name was found that's part of an active layout release. |
| 400 | File uploads and documents references cannot be mixed | The request contains both uploaded files and file location references (string parameters). |
| 400 | No valid documents supplied | The request did not contain any documents or the files were not of the accepted file types. |
Metadata
Submission objects can store user-defined metadata in JSON format. You can use the metadata parameter to attach JSON metadata to a Submission when you create it. Note that metadata persists at the Submission level, not at the Document level. Metadata is not used by the Hyperscience application and is only included for your tracking convenience.
External Identifier
You have the option to provide an external_id as part of Submission Creation. If you do so, then you can use this ID to retrieve the processed results as well as search for this ID in the application. The external_id expects a string of length no greater than 200 characters.
Note that the external_id for each submission must be unique. If you try to create a submission with an external_id that already exists in the application, you will get an error with HTTP status code 409.
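One way to surface the duplicate-external_id case is to map the status codes documented on this page (201 on success, 400 for requests rejected in strict mode, 409 for a duplicate external_id) to named outcomes. The helper name is ours:

```python
def classify_submission_response(status_code):
    # Status codes documented for submission creation on this page.
    outcomes = {
        201: 'created',
        400: 'invalid_request',       # e.g. strict mode rejected a file
        409: 'duplicate_external_id', # external_id already exists
    }
    return outcomes.get(status_code, 'unexpected')


# with get_oauth2_session() as session:
#     r = session.post(endpoint_url, files=files, data=data)
#     if classify_submission_response(r.status_code) == 'duplicate_external_id':
#         raise ValueError('external_id is already in use')
```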
Supported File Stores for Submission
The table below shows the types of file storage systems that the application can retrieve files from, along with their identifiers. The credentials for each file store should be entered in your flow's Submission Initialization Block. For instructions, see the Submission Initialization Block section of the Flow Blocks User Guide article for your version of Hyperscience.
If you are running the application using Amazon Web Services EC2, note that the recommended authentication approach for S3 is to configure your EC2 instance to have access to the specific S3 bucket using an IAM role rather than a secret key.
| File Store Type | Identifier Prefix | Example Document Path |
| Oracle Content Server 9.0+ | ocs:// | ocs://2021 |
| Amazon Web Services S3 | s3:// | s3://generic_bucket/specific_folder/example_document.pdf |
| Azure Blob Storage v39.2 | abs:// | abs://specific_container/example_document.pdf |
| GCS Storage v42.0 | gs:// | gs://specific_bucket/example_document.pdf |
| Generic Web Storage | http:// or https:// | http://www.onpremisedomain.com/example_document.pdf |
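Before submitting by reference, it can be worth checking that each path carries one of the identifier prefixes from the table above. A small sketch (the helper name is ours):

```python
# Identifier prefixes from the Supported File Stores table above.
SUPPORTED_PREFIXES = ('ocs://', 's3://', 'abs://', 'gs://', 'http://', 'https://')


def build_file_reference_data(paths):
    # Reject any path that does not start with a supported store prefix,
    # then place the references under the `file` parameter.
    for path in paths:
        if not path.startswith(SUPPORTED_PREFIXES):
            raise ValueError('unsupported file store path: {}'.format(path))
    return {'file': paths}
```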
Submissions
Submission Objects
Example Submission Object
{
"id": 1000000,
"external_id": "yourcompanyid#1236",
"state": "supervision",
"substate": "manual_transcription",
"exceptions": [
"required_field_missing"
],
"halted": false,
"start_time": "2018-05-09T20:39:50.220162Z",
"goal_time": "2018-05-10T20:39:50.220162Z",
"goal_time_source": "System Default",
"sla_rule_name": "System Default SLA rule",
"sla_rule_definition": {
"_metadata": {
"version": 1
},
"rules": [
{
"process_within": {
"duration": 24,
"duration_unit": "hours"
}
}
]
},
"complete_time": null,
"supervision_url": "/supervision/submission/1000000",
"apply_restrictions_url": null,
"metadata": {
"custom": "data"
},
"data_deleted": false,
"data_deleted_details": "not_deleted",
"source_routing_tags": [],
"submission_files": [
{
"name": "test_submission_attachment_1.svg",
"upload_type": "attachment",
"url": "/api/v5/uploaded_file/7c5a9da0-1240-4074-84d4-c76f580d9b7d"
},
{
"name": "test_submission_2.pdf",
"upload_type": "document",
"url": "/api/v5/uploaded_file/001f9505-018a-475c-b02c-8a6e7c3454f1"
},
{
"name": "test_submission.pdf",
"upload_type": "document",
"url": "/api/v5/uploaded_file/4813a1a9-f2a7-4e08-8435-693b58ee2752"
}
],
"document_folders": [
{
"id": 4000000,
"start_time": "2018-05-09T20:39:50.220162Z",
"name": "Folder A",
"submission_id": 1000000,
"documents": [
3000001
],
"unassigned_pages": [],
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
}
},
{
"id": 4000001,
"start_time": "2018-05-09T20:39:50.220162Z",
"name": "Supplementary",
"submission_id": 1000000,
"documents": [
3000001
],
"unassigned_pages": [],
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
}
}
],
"documents": [
{
"id": 3000000,
"uuid": "37618365-5733-40ef-98e4-c9b963001b68",
"submission_id": 1000000,
"state": "supervision",
"substate": "manual_transcription",
"exceptions": [
"required_field_missing"
],
"start_time": "2018-05-09T20:39:50.220162Z",
"goal_time": "2018-05-10T20:39:50.220162Z",
"complete_time": null,
"priority": 1950375323,
"layout_uuid": "724db4f8-8e68-49c6-ad2d-29d2490fb6a1",
"layout_name": "NYC DMV application",
"layout_variation_uuid": "724db4f8-8e68-49c6-ad2d-29d2490fb6a1",
"layout_variation_name": "NYC DMV application",
"layout_tags": [],
"layout_version_uuid": "6a26e8fc-e1c0-45a8-8595-3457c1a3d937",
"layout_version_name": "NYC DMV application",
"document_folders": [],
"supervision_url": "/supervision/document/3000000",
"type": "structured",
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
},
"download_url": "/api/v5/documents/3000000/download",
"decisions": [],
"document_tables": [
{
"id": 7000000,
"table_number": 1,
"name": "Table Doc 1",
"layout_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb31",
"layout_variation_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb31",
"parent_table_id": null,
"parent_layout_table_uuid": null,
"parent_layout_variation_table_uuid": null,
"rows": [
{
"id": 8000000,
"row_number": 1,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000000,
"column_name": "Col Name 1",
"output_name": "Col Output Name 1",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb34",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb34",
"page_id": 2000000,
"raw": "Cell Col 1",
"normalized": "CELL COL 1",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
},
{
"id": 9000001,
"column_name": "Col Name 2",
"output_name": "Col Output Name 2",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb38",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb38",
"page_id": 2000000,
"raw": "Cell Col 2",
"normalized": "CELL COL 2",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
},
{
"id": 9000002,
"column_name": "Col Name 3",
"output_name": "Col Output Name 3",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb3c",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb3c",
"page_id": 2000000,
"raw": "Cell Col 3",
"normalized": "CELL COL 3",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
},
{
"id": 9000003,
"column_name": "Col Name 4",
"output_name": "Col Output Name 4",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb40",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb40",
"page_id": 2000000,
"raw": "Cell Col 4",
"normalized": "CELL COL 4",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
},
{
"id": 9000004,
"column_name": "Col Name 5",
"output_name": "Col Output Name 5",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb44",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb44",
"page_id": 2000000,
"raw": "Cell Col 5",
"normalized": "CELL COL 5",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
}
]
}
],
"document_fields": [
{
"id": 6000000,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"name": "Eye Color",
"output_name": "eye_color",
"field_definition_attributes": {
"required": true,
"data_type": "Generic Text",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "",
"normalized": "",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/e05df955-0a97-4905-9dfd-2b588f02905d?start_x=0.7144924577159157&start_y=0.7630408167023552&end_x=0.743384823607787&end_y=0.8531250585727079",
"page_id": 2000000,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000000,
"location_image_url": "/api/v5/image/e05df955-0a97-4905-9dfd-2b588f02905d?start_x=0.7144924577159157&start_y=0.7630408167023552&end_x=0.743384823607787&end_y=0.8531250585727079"
}
],
"groups": [],
"decisions": []
},
{
"id": 6000001,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "Mobile Number",
"output_name": "mobile_number",
"field_definition_attributes": {
"required": false,
"data_type": "Phone Number",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "555 666-7788",
"normalized": "5556667788",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/e05df955-0a97-4905-9dfd-2b588f02905d?start_x=0.3547970272646225&start_y=0.490479441977975&end_x=0.3811432622616534&end_y=0.6992140694869544",
"page_id": 2000000,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000000,
"location_image_url": "/api/v5/image/e05df955-0a97-4905-9dfd-2b588f02905d?start_x=0.3547970272646225&start_y=0.490479441977975&end_x=0.3811432622616534&end_y=0.6992140694869544"
}
],
"groups": [],
"decisions": []
},
{
"id": 6000002,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "Email",
"output_name": "email",
"field_definition_attributes": {
"required": false,
"data_type": "Email Address",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "jane.smith@hyperscience.com",
"normalized": "JANE.SMITH@HYPERSCIENCE.COM",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/e05df955-0a97-4905-9dfd-2b588f02905d?start_x=0.6180239043150312&start_y=0.12149744706843693&end_x=0.8783148619392387&end_y=0.6866959690990374",
"page_id": 2000000,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000000,
"location_image_url": "/api/v5/image/e05df955-0a97-4905-9dfd-2b588f02905d?start_x=0.6180239043150312&start_y=0.12149744706843693&end_x=0.8783148619392387&end_y=0.6866959690990374"
}
],
"groups": [],
"decisions": []
},
{
"id": 6000003,
"state": "supervision",
"substate": "manual_transcription",
"exceptions": [],
"name": "SSN",
"output_name": "ssn",
"field_definition_attributes": {
"required": true,
"data_type": "Generic Text",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": "requires_consensus"
},
"transcription": {
"raw": "",
"normalized": "",
"source": null,
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/e05df955-0a97-4905-9dfd-2b588f02905d?start_x=0.011479557183215418&start_y=0.029283329857042892&end_x=0.9625450212680633&end_y=0.06116441120626742",
"page_id": 2000000,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000000,
"location_image_url": "/api/v5/image/e05df955-0a97-4905-9dfd-2b588f02905d?start_x=0.011479557183215418&start_y=0.029283329857042892&end_x=0.9625450212680633&end_y=0.06116441120626742"
}
],
"groups": [],
"decisions": []
},
{
"id": 6000004,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "Have you had a driver license -Yes",
"output_name": "has_driver_license",
"field_definition_attributes": {
"required": false,
"data_type": "Checkbox",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "False",
"normalized": "FALSE",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/5a34f8af-4eb8-4c69-9066-9db0d97c36cb?start_x=0.07047701227900269&start_y=0.026069469421460902&end_x=0.5911444348501542&end_y=0.2090338583282971",
"page_id": 2000001,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000001,
"location_image_url": "/api/v5/image/5a34f8af-4eb8-4c69-9066-9db0d97c36cb?start_x=0.07047701227900269&start_y=0.026069469421460902&end_x=0.5911444348501542&end_y=0.2090338583282971"
}
],
"groups": [],
"decisions": []
}
],
"derived_document_fields": [],
"pages": [
{
"id": 2000000,
"state": "supervision",
"substate": "manual_transcription",
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 1,
"submission_page_number": 1,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 1,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/1b55e2c9-f83f-4d46-a49e-db1ff3305587",
"corrected_image_url": "/api/v5/image/e05df955-0a97-4905-9dfd-2b588f02905d",
"rejected": false,
"decisions": [],
"identifiers": [
{
"transcription": {
"raw": "X2851",
"normalized": "X2851"
},
"image_url": "/api/v5/image/e05df955-0a97-4905-9dfd-2b588f02905d?start_x=0.16&start_y=0.95&end_x=0.27&end_y=0.97"
},
{
"transcription": {
"raw": "11/13",
"normalized": "11/13"
},
"image_url": "/api/v5/image/e05df955-0a97-4905-9dfd-2b588f02905d?start_x=0.83&start_y=0.95&end_x=0.94&end_y=0.97"
}
]
},
{
"id": 2000001,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 2,
"submission_page_number": 2,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 2,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/52892611-4448-4444-bd7a-b41a21477374",
"corrected_image_url": "/api/v5/image/5a34f8af-4eb8-4c69-9066-9db0d97c36cb",
"rejected": false,
"decisions": [],
"identifiers": []
},
{
"id": 2000002,
"state": "complete",
"substate": null,
"exceptions": [],
"page_type": "document_page",
"file_page_number": 3,
"submission_page_number": 3,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 3,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/80f97692-1f81-452e-afe5-86dcc4bca081",
"corrected_image_url": "/api/v5/image/87f66e0d-33ed-4603-a8c9-660763d04453",
"rejected": false,
"decisions": [],
"identifiers": []
}
]
},
{
"id": 3000001,
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb29",
"submission_id": 1000000,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"start_time": "2018-05-09T20:39:50.220162Z",
"goal_time": "2018-05-10T20:39:50.220162Z",
"complete_time": "2018-05-09T20:40:28.074541Z",
"priority": 1950375323,
"layout_uuid": "aec2eb32-d207-4183-82b8-56fd3e3f7f00",
"layout_name": "Social Security Name Change",
"layout_variation_uuid": "aec2eb32-d207-4183-82b8-56fd3e3f7f00",
"layout_variation_name": "Social Security Name Change",
"layout_tags": [],
"layout_version_uuid": "ca0b8e33-9519-4d10-89ca-011433485a71",
"layout_version_name": "Social Security Name Change",
"document_folders": [
4000000,
4000001
],
"supervision_url": null,
"type": "structured",
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
},
"download_url": "/api/v5/documents/3000001/download",
"decisions": [],
"document_tables": [
{
"id": 7000001,
"table_number": 1,
"name": "Table Doc 2",
"layout_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb46",
"layout_variation_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb46",
"parent_table_id": null,
"parent_layout_table_uuid": null,
"parent_layout_variation_table_uuid": null,
"rows": [
{
"id": 8000001,
"row_number": 1,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000005,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000001,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb49",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb49",
"page_id": 2000003,
"raw": "Cell Row 1",
"normalized": "CELL ROW 1",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000002,
"row_number": 2,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000006,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000002,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb49",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb49",
"page_id": 2000003,
"raw": "Cell Row 2",
"normalized": "CELL ROW 2",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000003,
"row_number": 3,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000007,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000003,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb49",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb49",
"page_id": 2000003,
"raw": "Cell Row 3",
"normalized": "CELL ROW 3",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000004,
"row_number": 4,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000008,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000004,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb49",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb49",
"page_id": 2000003,
"raw": "Cell Row 4",
"normalized": "CELL ROW 4",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000005,
"row_number": 5,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000009,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000005,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb49",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb49",
"page_id": 2000003,
"raw": "Cell Row 5",
"normalized": "CELL ROW 5",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
}
]
}
],
"document_fields": [
{
"id": 6000005,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "SIGNATURE",
"output_name": "signature",
"field_definition_attributes": {
"required": false,
"data_type": "Signature",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "False",
"normalized": "FALSE",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/42c267d0-caaf-4bf4-8a1b-06096f043d5e?start_x=0.5882881754084818&start_y=0.2168617674146849&end_x=0.6959996343380943&end_y=0.7452089803626697",
"page_id": 2000003,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000003,
"location_image_url": "/api/v5/image/42c267d0-caaf-4bf4-8a1b-06096f043d5e?start_x=0.5882881754084818&start_y=0.2168617674146849&end_x=0.6959996343380943&end_y=0.7452089803626697"
}
],
"groups": [],
"decisions": []
}
],
"derived_document_fields": [],
"pages": [
{
"id": 2000003,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 4,
"submission_page_number": 4,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 1,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/c32f98ad-15ee-4fd9-89c7-715c75823443",
"corrected_image_url": "/api/v5/image/42c267d0-caaf-4bf4-8a1b-06096f043d5e",
"rejected": false,
"decisions": [],
"identifiers": []
}
]
}
],
"unassigned_pages": [
{
"id": 2000004,
"state": "complete",
"substate": null,
"exceptions": [],
"page_type": "unknown_page",
"file_page_number": 1,
"submission_page_number": 5,
"layout_page_number": null,
"layout_variation_page_number": null,
"document_page_number": null,
"submitted_filename": "test_submission_2.pdf",
"image_url": "/api/v5/image/b7fe208c-65c5-4bea-aef2-ff5dd30c4277",
"corrected_image_url": null,
"rejected": false,
"decisions": [],
"identifiers": []
},
{
"id": 2000005,
"state": "complete",
"substate": null,
"exceptions": [],
"page_type": "blank_page",
"file_page_number": 2,
"submission_page_number": 6,
"layout_page_number": null,
"layout_variation_page_number": null,
"document_page_number": null,
"submitted_filename": "test_submission_2.pdf",
"image_url": "/api/v5/image/cc423bed-b717-4eb9-9896-002a3fd59441",
"corrected_image_url": null,
"rejected": false,
"decisions": [],
"identifiers": []
}
],
"rejected_documents": [
{
"id": 20000000,
"reject_username": "test user",
"reject_reason": {
"id": 1,
"code": "000",
"description": "Not in good order"
},
"layout_uuid": "39ed7ebe-8013-4c85-a76b-145de0367385",
"layout_name": "Test Layout",
"layout_variation_uuid": "39ed7ebe-8013-4c85-a76b-145de0367385",
"layout_variation_name": "Test Layout",
"pages": [
{
"id": 2000006,
"state": "complete",
"substate": null,
"exceptions": [],
"page_type": "document_page",
"file_page_number": 3,
"submission_page_number": 7,
"layout_page_number": null,
"layout_variation_page_number": null,
"document_page_number": null,
"submitted_filename": "test_submission_2.pdf",
"image_url": "/api/v5/image/df05ea5c-773a-43d4-b826-c72e7845c112",
"corrected_image_url": null,
"rejected": true,
"decisions": [],
"identifiers": []
}
]
}
],
"cases": [
{
"id": 30000000,
"uuid": "46474849-4a4b-4c4d-8e4f-505152535455",
"external_case_id": "HS-30000000",
"start_time": "2018-05-09T20:39:50.220162Z",
"update_time": "2018-05-09T20:39:50.220162Z",
"to_delete_time": "2018-05-09T20:39:50.220162Z",
"notes": "",
"documents": [
3000000,
3000001
],
"unassigned_pages": [
2000004,
2000005,
2000006
],
"submission_files": [],
"submissions": [
1000000
],
"decisions": [
{
"decision": "Valid Case",
"choice": "Yes",
"task_purpose_name": "Case Checker"
}
]
}
]
}
The Submission object is the top level of the organizational hierarchy of the API. A Submission object is created when a set of images is submitted to the system together. The system then interprets each image as a Page object and matches each Page to a layout in the system, if such a layout exists.
A Submission can have multiple Pages, and a Page can have multiple Fields. Pages can additionally be grouped into a Document based on the layout they match. Each of these object types is explained further below.
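As an illustration of this hierarchy, the sketch below walks a retrieved Submission payload and collects each Document's transcribed field values. The shape of `submission` mirrors the example response above; the helper name and the trimmed example data are ours.

```python
# Walk a Submission payload: Submission -> Documents -> Document Fields.
# `submission` is assumed to be the parsed JSON returned by the
# submission-retrieval endpoint (with flat=false, so `documents` is present).
def collect_transcriptions(submission):
    results = []
    for document in submission.get('documents', []):
        for field in document.get('document_fields', []):
            transcription = field.get('transcription') or {}
            results.append({
                'document_id': document['id'],
                'field': field['output_name'],
                'value': transcription.get('normalized'),
            })
    return results

# A heavily trimmed payload in the shape of the example response above.
example = {
    'documents': [
        {
            'id': 3000000,
            'document_fields': [
                {'output_name': 'email',
                 'transcription': {'normalized': 'JANE.SMITH@HYPERSCIENCE.COM'}},
            ],
        }
    ]
}
print(collect_transcriptions(example))
```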
Object properties
| Property | Type | Description |
| id | integer | Unique system-generated Submission ID that is exposed in the web application. |
| external_id | string | User-defined string that can be included as part of the Submission object during Submission creation. Must be unique to the submission. Optional. Maximum length 200 characters. Returns null if no value was defined during Submission creation. |
| state | string | Current state of the submission. Potential values are processing, supervision, or complete. See States for more detail. |
| substate | string | Provides additional granularity for the supervision state. See Substates for a list of possible values. |
| exceptions | array of strings | Provides a list of all exceptions in this submission. If the Submission has no exceptions, then this value returns an empty array. See Exceptions for a list of possible values. |
| halted | boolean | If true, indicates that a submission is not currently processing due to a system outage (e.g., file store is out of space). Otherwise returns false. Once the system is back up, a system administrator can restart all halted submissions. |
| start_time | datetimetz (an ISO-8601 formatted datetime string) | Time the Submission was submitted to the API. |
| goal_time v28 | datetimetz (an ISO-8601 formatted datetime string) | The date and time by which the system will try to ensure the Submission is processed. |
| goal_time_source v35 | string | The source that set the goal_time of the Submission. |
| sla_rule_name v35 | string | The name of the rule applied to the Submission that determined the goal_time. |
| sla_rule_definition v35 | JSON object | The definition of the rule applied to the Submission that determined the goal_time. |
| complete_time | datetimetz (an ISO-8601 formatted datetime string) | Time the entire Submission entered the complete state, including all Supervision tasks. |
| supervision_url | string | Returns the relative URL for outstanding Supervision tasks for the submission. Value is only returned when the Submission state is supervision and otherwise returns null. |
| apply_restrictions_url v33 | string | Returns the relative URL for applying task restrictions to the submission when the Submission state is machine_routing_decision and otherwise returns null. |
| metadata | JSON object | User-defined data provided with the Submission creation API call for this Submission |
| data_deleted | boolean | If true, indicates that the Submission has had the transcription data deleted according to the Data Retention settings. If Global PII Override is enabled, there may be some data that was preserved. See data_deleted_details for more information. |
| data_deleted_details | string | Indicates the extent to which the data-deletion process was successful. If not_deleted, the process has not run yet. If fully_deleted, all image data (including original uploaded images and any processed or corrected images) and all transcribed data are deleted and will be returned as null. If partially_deleted, some images and transcribed data have not been deleted because the documents they are associated with have been preserved as training documents. |
| submission_files | array of JSON objects | Array of JSON objects representing the files comprising the submission. Each object contains the name of the file, the upload type (i.e., document, which will be processed by Hyperscience, or attachment, which will be carried through the system without processing), and the URL the file can be retrieved from. |
| document_folders v40 | array of JSON objects | Array of Document Folder objects identified as part of this Submission. The array will be empty if no document folders were identified in the submission. |
| documents | array of JSON objects | Array of Document objects identified as part of this Submission. The array will be empty if no documents were identified in the submission. |
| unassigned_pages | array of JSON objects | Array of Page objects included in this Submission that have not been grouped into a Document. The array will be empty if all submitted pages were matched to a Document. |
| rejected_documents v28 | array of JSON objects | Array of JSON objects representing documents that were manually rejected during processing. Each object contains information about the document, the reject code and description, and an array of Page objects. |
| cases v31 | array of JSON objects | Array of Case objects that contain the documents and pages of this Submission. |
| source_routing_tags | array of strings | All source routing tags for the submission. |
Debugging Information v35
The following properties are also part of the Submission object. They are returned only when debug=true is passed.
| Property | Type | Description |
| flow_name | string | The name of the Flow which created this submission. |
| flow_version | string | The version of the Flow which created this submission. |
| flow_uuid | string | The UUID of the Flow which created this submission. |
| flow_run_id | string | The ID of the flow run that processed this submission. |
| correlation_id | string | The correlation ID is a string that connects all flows that were used to process this submission. |
| release_name | string | The name of the release used when the submission was created. |
| originator | JSON object | Details how the submission was created. Possible values for "type" are Input Block, API, and User. |
Retrieving Submissions
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions/'
saved_submission_id = str(1000000)
endpoint_url = urljoin(base_url, posixpath.join(endpoint, saved_submission_id))
params = {'flat': False}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url, params=params)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Unless you have configured your deployment to use State Change Notifications, you will need to use this endpoint to retrieve the processed data for each submission. By default, this endpoint does not return the following properties of the Submission object: documents, document_folders, unassigned_pages, and rejected_documents.
Submission Retrieval Endpoint
GET /api/v5/submissions/submission_id
GET /api/v5/submissions/external/external_id
The URL to use for this endpoint depends on whether you are retrieving the submission using the Hyperscience ID or a user-defined external identifier.
Request parameters
| Property | Type | Description |
| flat | boolean | Optional parameter that prevents returning the documents, document_folders, unassigned_pages, and rejected_documents arrays of the Submission object. Defaults to true unless specifically passed as false. |
| debug v35 | boolean | Optional parameter that returns the Submission object's debugging fields. Defaults to false unless specifically set to true. |
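Because submissions are processed asynchronously, a common pattern is to poll this endpoint until the submission leaves the processing state. The sketch below assumes the get_oauth2_session() helper from the Getting Started Guide and the example base URL used above; the polling interval and timeout values are arbitrary.

```python
import time
import posixpath
from urllib.parse import urljoin

# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions/'

def wait_for_completion(session, submission_id, interval=15, timeout=600):
    """Poll the retrieval endpoint until the submission leaves processing."""
    url = urljoin(base_url, posixpath.join(endpoint, str(submission_id)))
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        submission = session.get(url, params={'flat': False}).json()
        # A submission needing manual review stops in `supervision`;
        # handle it separately or keep waiting, depending on your workflow.
        if submission['state'] in ('complete', 'supervision'):
            return submission
        time.sleep(interval)
    raise TimeoutError(f'Submission {submission_id} still processing')

# Usage (requires the authentication helper from the Getting Started Guide):
# with get_oauth2_session() as session:
#     submission = wait_for_completion(session, 1000000)
```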
Deleting Submissions v36
Submissions can be deleted individually or in bulk by calling the bulk_delete endpoint.
Submission Bulk Deletion Endpoint
POST /api/v5/submissions/bulk_delete
Example request
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions/bulk_delete'
endpoint_url = urljoin(base_url, endpoint)
data = {
'submission_ids': [1,2,3],
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.post(endpoint_url, json=data)
print(r.status_code)
Sending a request to this endpoint initiates an asynchronous submission-deletion job. The response contains a URL that can be used to track the job's progress.
Note that, after a request is sent, a submission included in the job may still appear in the platform until the batch job completes.
Request Parameters
| Parameter | Type | Description |
| submission_ids | array[integer] | The Submission ids of the submissions to be deleted (e.g., [1001, 1002, 1003]). The array can contain a minimum of 1 and a maximum of 1000 ids. |
Response Properties
| Property | Type | Description |
| status | string | Status of the batch job. Possible values are PENDING, RUNNING, FAILED, FINISHED, SCHEDULED, and HALTED. This status may not apply to all of the submissions included in the batch job. See Deletion status of individual submissions for more information. |
| status_url | string | URL to query the status of the batch job. |
| description | string | Description of the batch job. |
| batch_request_id | integer | ID of the batch job. |
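Because the job runs asynchronously, the returned status_url can be polled until the job reaches a terminal status. A minimal sketch, reusing the session helper and base URL from the example above; the interval and timeout are arbitrary, and we assume status_url resolves against the instance's base URL:

```python
import time
from urllib.parse import urljoin

# Statuses after which the batch job will not change further.
TERMINAL_STATUSES = {'FAILED', 'FINISHED', 'HALTED'}

def wait_for_bulk_delete(session, base_url, status_url, interval=5, timeout=300):
    """Poll the batch job's status_url until it reaches a terminal status."""
    url = urljoin(base_url, status_url)  # handles relative or absolute URLs
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = session.get(url).json()
        if job['status'] in TERMINAL_STATUSES:
            return job['status']
        time.sleep(interval)
    raise TimeoutError('Bulk-delete job did not reach a terminal status in time')
```

Note that, as described below, a FAILED or FINISHED status may not reflect the outcome for every individual submission in the batch.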
Response Errors
| Error Code | Description |
| 400 | Request parameters are incorrect. See returned response for details. |
Deletion status of individual submissions
If the batch job includes multiple submissions, its status may not match the deletion status of the individual submissions included in it. For example:
- If a batch job has a FAILED status, some of the submissions in the batch may have been deleted successfully. To verify the status of a particular submission, send a request to the Retrieving Submissions endpoint with the submission's ID, and check the status code in the response. If the code is 400, the submission was deleted successfully. If the code is 200, the submission was not deleted. The response does not indicate why the deletion of the submission failed.
- If the batch job includes a submission that does not exist, the job may have a FINISHED status, even though it is not possible to delete non-existent submissions. The response does not indicate which submissions in the batch job existed and which did not.
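The per-submission check described above can be sketched as follows, assuming the same session helper and base URL as the earlier examples; a 400 response means the submission was deleted, while 200 means it still exists:

```python
import posixpath
from urllib.parse import urljoin

def was_deleted(session, base_url, submission_id):
    """Return True if the submission no longer exists (deletion succeeded)."""
    url = urljoin(base_url,
                  posixpath.join('api/v5/submissions/', str(submission_id)))
    r = session.get(url)
    if r.status_code == 400:
        return True   # submission not found: deletion succeeded
    if r.status_code == 200:
        return False  # submission still present: deletion did not happen
    r.raise_for_status()  # any other code is unexpected
```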
Retrieving Transformed Submissions v29
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions/'
saved_submission_id = str(1000000)
suffix = 'transformed'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, saved_submission_id, suffix))
params = {'flat': False}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url, params=params)
print(json.dumps(r.json(), indent=4, sort_keys=True))
If you are using our Post-Processing Customization feature, use this endpoint to retrieve the transformed Submission data. If no transformation occurred, the original Submission data is returned. The structure of the transformed data may differ from that of the Submission object.
Transformed Submission Retrieval Endpoint
GET /api/v5/submissions/submission_id/transformed
Request parameters
This parameter applies only to the fallback/default serialization and will not affect the transformed submission output, even if that output has the same format as the Submission object.
| Property | Type | Description |
| flat | boolean | Optional parameter that prevents returning the documents, document_folders, unassigned_pages, and rejected_documents arrays of the Submission object. Defaults to true unless specifically passed as false. |
State Change Notifications
Submission Example Message
{
"id": 1000000,
"external_id": "yourcompanyid#1236",
"state": "complete",
"substate": null,
"halted": false,
"exceptions": [
"required_field_missing"
],
"start_time": "2018-05-09T20:39:50.220162Z",
"complete_time": "2018-05-09T20:41:22.310162Z",
"metadata": {
"custom": "data"
},
"output": {
"id": 1000000,
"external_id": "yourcompanyid#1236",
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"halted": false,
"start_time": "2018-05-09T20:39:50.220162Z",
"goal_time": "2018-05-10T20:39:50.220162Z",
"goal_time_source": "System Default",
"sla_rule_name": "System Default SLA rule",
"sla_rule_definition": {
"_metadata": {
"version": 1
},
"rules": [
{
"process_within": {
"duration": 24,
"duration_unit": "hours"
}
}
]
},
"complete_time": "2018-05-09T20:41:22.310162Z",
"supervision_url": null,
"apply_restrictions_url": null,
"metadata": {
"custom": "data"
},
"data_deleted": false,
"data_deleted_details": "not_deleted",
"source_routing_tags": [],
"submission_files": [
{
"name": "test_submission_attachment_1.svg",
"upload_type": "attachment",
"url": "/api/v5/uploaded_file/e84fb03c-dc8e-4546-9c3e-b1e71749d037"
},
{
"name": "test_submission_2.pdf",
"upload_type": "document",
"url": "/api/v5/uploaded_file/690641b6-7d6a-4233-be03-461258e61b0b"
},
{
"name": "test_submission.pdf",
"upload_type": "document",
"url": "/api/v5/uploaded_file/c0f7bd8a-9c5d-47c5-a96d-3fb73ff1e44d"
}
],
"document_folders": [
{
"id": 4000000,
"start_time": "2018-05-09T20:39:50.220162Z",
"name": "Folder A",
"submission_id": 1000000,
"documents": [
3000001
],
"unassigned_pages": [],
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
}
},
{
"id": 4000001,
"start_time": "2018-05-09T20:39:50.220162Z",
"name": "Supplementary",
"submission_id": 1000000,
"documents": [
3000001
],
"unassigned_pages": [],
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
}
}
],
"documents": [
{
"id": 3000000,
"uuid": "37618365-5733-40ef-98e4-c9b963001b68",
"submission_id": 1000000,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"start_time": "2018-05-09T20:39:50.220162Z",
"goal_time": "2018-05-10T20:39:50.220162Z",
"complete_time": "2018-05-09T20:41:22.310162Z",
"priority": 1950375323,
"layout_uuid": "dd543318-0d2e-4439-b7fe-a9d5442e740b",
"layout_name": "NYC DMV application",
"layout_variation_uuid": "dd543318-0d2e-4439-b7fe-a9d5442e740b",
"layout_variation_name": "NYC DMV application",
"layout_tags": [],
"layout_version_uuid": "f45bad99-19ec-40b9-a0e9-c33597b2d2ab",
"layout_version_name": "NYC DMV application",
"document_folders": [],
"supervision_url": null,
"type": "structured",
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
},
"download_url": "/api/v5/documents/3000000/download",
"decisions": [],
"document_tables": [
{
"id": 7000000,
"table_number": 1,
"name": "Table Doc 1",
"layout_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfba5",
"layout_variation_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfba5",
"parent_table_id": null,
"parent_layout_table_uuid": null,
"parent_layout_variation_table_uuid": null,
"rows": [
{
"id": 8000000,
"row_number": 1,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000000,
"column_name": "Col Name 1",
"output_name": "Col Output Name 1",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfba8",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfba8",
"page_id": 2000000,
"raw": "Cell Col 1",
"normalized": "CELL COL 1",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
},
{
"id": 9000001,
"column_name": "Col Name 2",
"output_name": "Col Output Name 2",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbac",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbac",
"page_id": 2000000,
"raw": "Cell Col 2",
"normalized": "CELL COL 2",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
},
{
"id": 9000002,
"column_name": "Col Name 3",
"output_name": "Col Output Name 3",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbb0",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbb0",
"page_id": 2000000,
"raw": "Cell Col 3",
"normalized": "CELL COL 3",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
},
{
"id": 9000003,
"column_name": "Col Name 4",
"output_name": "Col Output Name 4",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbb4",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbb4",
"page_id": 2000000,
"raw": "Cell Col 4",
"normalized": "CELL COL 4",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
},
{
"id": 9000004,
"column_name": "Col Name 5",
"output_name": "Col Output Name 5",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbb8",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbb8",
"page_id": 2000000,
"raw": "Cell Col 5",
"normalized": "CELL COL 5",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
}
]
}
],
"document_fields": [
{
"id": 6000000,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"name": "Eye Color",
"output_name": "eye_color",
"field_definition_attributes": {
"required": true,
"data_type": "Generic Text",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "",
"normalized": "",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688?start_x=0.4796723729649085&start_y=0.045897903402767906&end_x=0.5737213953993248&end_y=0.9904830494498904",
"page_id": 2000000,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000000,
"location_image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688?start_x=0.4796723729649085&start_y=0.045897903402767906&end_x=0.5737213953993248&end_y=0.9904830494498904"
}
],
"groups": [],
"decisions": []
},
{
"id": 6000001,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "Mobile Number",
"output_name": "mobile_number",
"field_definition_attributes": {
"required": true,
"data_type": "Phone Number",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "555 666-7788",
"normalized": "5556667788",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688?start_x=0.2049316335546049&start_y=0.000429164894802575&end_x=0.6192437638544375&end_y=0.1487210187722832",
"page_id": 2000000,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000000,
"location_image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688?start_x=0.2049316335546049&start_y=0.000429164894802575&end_x=0.6192437638544375&end_y=0.1487210187722832"
}
],
"groups": [],
"decisions": []
},
{
"id": 6000003,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "Email",
"output_name": "email",
"field_definition_attributes": {
"required": true,
"data_type": "Email Address",
"multiline": false,
"routing": false,
"duplicate": true,
"supervision_override": null
},
"transcription": {
"raw": "jane.smith@hyperscience.com",
"normalized": "JANE.SMITH@HYPERSCIENCE.COM",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688?start_x=0.3121778822832841&start_y=0.4370331897417858&end_x=0.775434233681118&end_y=0.5974053656185456",
"page_id": 2000000,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000000,
"location_image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688?start_x=0.3121778822832841&start_y=0.4370331897417858&end_x=0.775434233681118&end_y=0.5974053656185456"
}
],
"groups": [],
"decisions": []
},
{
"id": 6000004,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "SSN",
"output_name": "ssn",
"field_definition_attributes": {
"required": true,
"data_type": "Generic Text",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": "requires_consensus"
},
"transcription": {
"raw": "721071426",
"normalized": "721071426",
"source": "manual_transcription",
"data_deleted": false,
"user_transcribed": "keyer1"
},
"field_image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688?start_x=0.03355674529993837&start_y=0.13816367494818785&end_x=0.5306200975628627&end_y=0.6277446463596408",
"page_id": 2000000,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000000,
"location_image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688?start_x=0.03355674529993837&start_y=0.13816367494818785&end_x=0.5306200975628627&end_y=0.6277446463596408"
}
],
"groups": [],
"decisions": []
},
{
"id": 6000002,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "Mobile Number",
"output_name": "mobile_number",
"field_definition_attributes": {
"required": true,
"data_type": "Phone Number",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "555 666-8899",
"normalized": "5556668899",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688?start_x=0.2049316335546049&start_y=0.000429164894802575&end_x=0.6192437638544375&end_y=0.1487210187722832",
"page_id": 2000000,
"occurrence_index": 1,
"locations": [
{
"position": 1,
"page_id": 2000000,
"location_image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688?start_x=0.2049316335546049&start_y=0.000429164894802575&end_x=0.6192437638544375&end_y=0.1487210187722832"
}
],
"groups": [],
"decisions": []
},
{
"id": 6000005,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "Have you had a driver license -Yes",
"output_name": "has_driver_license",
"field_definition_attributes": {
"required": false,
"data_type": "Checkbox",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "False",
"normalized": "FALSE",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/7e8950ef-307e-410f-8144-a00ea33da5de?start_x=0.5971581853540084&start_y=0.601334474079528&end_x=0.7703615832679693&end_y=0.735946659393487",
"page_id": 2000001,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000001,
"location_image_url": "/api/v5/image/7e8950ef-307e-410f-8144-a00ea33da5de?start_x=0.5971581853540084&start_y=0.601334474079528&end_x=0.7703615832679693&end_y=0.735946659393487"
}
],
"groups": [],
"decisions": []
}
],
"derived_document_fields": [],
"pages": [
{
"id": 2000000,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 1,
"submission_page_number": 1,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 1,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/366f6f0a-1e58-451e-95f8-b2aa9a3bea2f",
"corrected_image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688",
"rejected": false,
"decisions": [],
"identifiers": [
{
"transcription": {
"raw": "X2851",
"normalized": "X2851"
},
"image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688?start_x=0.16&start_y=0.95&end_x=0.27&end_y=0.97"
},
{
"transcription": {
"raw": "11/13",
"normalized": "11/13"
},
"image_url": "/api/v5/image/e812202a-2c08-483b-96c1-b7d1c79c1688?start_x=0.83&start_y=0.95&end_x=0.94&end_y=0.97"
}
]
},
{
"id": 2000001,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 2,
"submission_page_number": 2,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 2,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/d4a9cb49-eb38-4f86-843e-7be7e4c80892",
"corrected_image_url": "/api/v5/image/7e8950ef-307e-410f-8144-a00ea33da5de",
"rejected": false,
"decisions": [],
"identifiers": []
},
{
"id": 2000002,
"state": "complete",
"substate": null,
"exceptions": [],
"page_type": "document_page",
"file_page_number": 3,
"submission_page_number": 3,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 3,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/18909fe8-a81c-46c6-9660-be4d759f37af",
"corrected_image_url": "/api/v5/image/710be712-3938-41f1-9dc2-0216b5a11f07",
"rejected": false,
"decisions": [],
"identifiers": []
}
]
},
{
"id": 3000001,
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb9d",
"submission_id": 1000000,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"start_time": "2018-05-09T20:39:50.220162Z",
"goal_time": "2018-05-10T20:39:50.220162Z",
"complete_time": "2018-05-09T20:40:28.074541Z",
"priority": 1950375323,
"layout_uuid": "87ca4271-4ef8-4794-a0e9-97ebff78ac63",
"layout_name": "Social Security Name Change",
"layout_variation_uuid": "87ca4271-4ef8-4794-a0e9-97ebff78ac63",
"layout_variation_name": "Social Security Name Change",
"layout_tags": [],
"layout_version_uuid": "6a00eddc-9641-4093-a82d-55c1ac8d8908",
"layout_version_name": "Social Security Name Change",
"document_folders": [
4000000,
4000001
],
"supervision_url": null,
"type": "structured",
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
},
"download_url": "/api/v5/documents/3000001/download",
"decisions": [],
"document_tables": [
{
"id": 7000001,
"table_number": 1,
"name": "Table Doc 2",
"layout_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbba",
"layout_variation_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbba",
"parent_table_id": null,
"parent_layout_table_uuid": null,
"parent_layout_variation_table_uuid": null,
"rows": [
{
"id": 8000001,
"row_number": 1,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000005,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000001,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 1",
"normalized": "CELL ROW 1",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000002,
"row_number": 2,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000006,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000002,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 2",
"normalized": "CELL ROW 2",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000003,
"row_number": 3,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000007,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000003,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 3",
"normalized": "CELL ROW 3",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000004,
"row_number": 4,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000008,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000004,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 4",
"normalized": "CELL ROW 4",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000005,
"row_number": 5,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000009,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000005,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 5",
"normalized": "CELL ROW 5",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
}
]
}
],
"document_fields": [
{
"id": 6000006,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "SIGNATURE",
"output_name": "signature",
"field_definition_attributes": {
"required": false,
"data_type": "Signature",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "False",
"normalized": "FALSE",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc?start_x=0.6653863674488735&start_y=0.5738077769469766&end_x=0.8320843991007505&end_y=0.9558655621039528",
"page_id": 2000003,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000003,
"location_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc?start_x=0.6653863674488735&start_y=0.5738077769469766&end_x=0.8320843991007505&end_y=0.9558655621039528"
}
],
"groups": [],
"decisions": []
}
],
"derived_document_fields": [],
"pages": [
{
"id": 2000003,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 5,
"submission_page_number": 5,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 1,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/b14ba06a-1b8c-457c-bbb3-52790a69bf81",
"corrected_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc",
"rejected": false,
"decisions": [],
"identifiers": []
}
]
}
],
"unassigned_pages": [
{
"id": 2000004,
"state": "complete",
"substate": null,
"exceptions": [],
"page_type": "unknown_page",
"file_page_number": 1,
"submission_page_number": 4,
"layout_page_number": null,
"layout_variation_page_number": null,
"document_page_number": null,
"submitted_filename": "test_submission_2.pdf",
"image_url": "/api/v5/image/a385639b-f2ab-46f1-97aa-e91e451d3358",
"corrected_image_url": null,
"rejected": false,
"decisions": [],
"identifiers": []
},
{
"id": 2000005,
"state": "complete",
"substate": null,
"exceptions": [],
"page_type": "blank_page",
"file_page_number": 2,
"submission_page_number": 6,
"layout_page_number": null,
"layout_variation_page_number": null,
"document_page_number": null,
"submitted_filename": "test_submission_2.pdf",
"image_url": "/api/v5/image/ad44ecde-fa8e-4970-8dcf-61bc9ed0df54",
"corrected_image_url": null,
"rejected": false,
"decisions": [],
"identifiers": []
}
],
"rejected_documents": [
{
"id": 20000000,
"reject_username": "test user",
"reject_reason": {
"id": 1,
"code": "000",
"description": "Not in good order"
},
"layout_uuid": "b36c6e28-8692-4b91-9e72-656618c69d49",
"layout_name": "Test Layout",
"layout_variation_uuid": "b36c6e28-8692-4b91-9e72-656618c69d49",
"layout_variation_name": "Test Layout",
"pages": [
{
"id": 2000006,
"state": "complete",
"substate": null,
"exceptions": [],
"page_type": "document_page",
"file_page_number": 3,
"submission_page_number": 7,
"layout_page_number": null,
"layout_variation_page_number": null,
"document_page_number": null,
"submitted_filename": "test_submission_2.pdf",
"image_url": "/api/v5/image/538c81d9-45e3-4a05-aac7-0f9d6bf5f89e",
"corrected_image_url": null,
"rejected": true,
"decisions": [],
"identifiers": []
}
]
}
],
"cases": []
}
}
The application can be configured to deliver a notification every time a Submission transitions to the processing, supervision, or complete state (see States for more detail). The notification payload is a JSON document with the same contents as the Submission Retrieval endpoint's response, which eliminates the need to poll for the processed data.
Configuration
The State Change Notification Service can be configured to deliver notifications through output connectors. Currently available output connectors include an HTTP endpoint used as a callback and a message queue, among others. Refer to the Hyperscience User Guide for more information on how to configure these connectors.
Note that if the application is unable to deliver a State Change Notification, the submission will be halted. When a submission is halted, two things should happen:
- The application writes the following message to the logs: WORKER_JOB_FAIL.
- A system administrator should look into the issue to make sure that the notifications' receivers are available and that the application is configured properly.
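On the receiving side, a callback handler typically dispatches on the submission's state. The sketch below is illustrative, not part of the API: it assumes the payload has the same shape as the Submission Retrieval response shown above, and the handler's return strings are placeholders for whatever your integration does at each state.

```python
def handle_state_change(payload: dict) -> str:
    """Dispatch a State Change Notification by submission state.

    The payload mirrors the Submission Retrieval response; 'state' is
    'processing', 'supervision', or 'complete' (see States).
    """
    state = payload.get('state')
    if state == 'processing':
        return f"submission {payload['id']} started processing"
    elif state == 'supervision':
        return f"submission {payload['id']} needs manual review"
    elif state == 'complete':
        # Completed submissions carry the extracted data, so this is the
        # point to forward results to a downstream system.
        return f"submission {payload['id']} is complete"
    raise ValueError(f"unexpected state: {state!r}")

# Example: a minimal payload as delivered by the notification service.
print(handle_state_change({'id': 1000000, 'state': 'complete'}))
```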
Updating Goal Time
v28 You can adjust the processing prioritization for an existing submission by using this endpoint with the parameters described below.
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions/bulk_update_goal_time'
endpoint_url = urljoin(base_url, endpoint)
goal_time = '2025-09-19T05:55:47+00:00'
submission_ids = [1000000]
data = {
'goal_time': goal_time,
'submission_ids': submission_ids
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.post(endpoint_url, data=data)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"n_updated": 1
}
Update Goal Time Endpoint
POST /api/v5/submissions/bulk_update_goal_time
Request Parameters
| Parameter | Type | Description |
| goal_time | datetimetz (an ISO-8601 formatted datetime string) | Required parameter that specifies the date and time by which you would like the system to process the Submission and all of its pages. |
| submission_ids | array of integers | Required parameter that specifies the IDs of the Submissions being updated. |
Listing Submissions
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions'
endpoint_url = urljoin(base_url, endpoint)
params = {'state': 'complete', 'exception': 'required_field_missing', 'start_time__gte': '2018-04-27T15:21'}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url, params=params)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"count": 1,
"next": null,
"previous": null,
"results": [
{
"id": 1000000,
"external_id": "yourcompanyid#1236",
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"halted": false,
"start_time": "2018-05-09T20:39:50.220162Z",
"goal_time": "2018-05-10T20:39:50.220162Z",
"goal_time_source": "System Default",
"sla_rule_name": "System Default SLA rule",
"sla_rule_definition": {
"_metadata": {
"version": 1
},
"rules": [
{
"process_within": {
"duration": 24,
"duration_unit": "hours"
}
}
]
},
"complete_time": "2018-05-09T20:41:22.310162Z",
"supervision_url": null,
"apply_restrictions_url": null,
"metadata": {
"custom": "data"
},
"data_deleted": false,
"data_deleted_details": "not_deleted",
"source_routing_tags": []
}
]
}
This endpoint allows you to retrieve a list of Submissions in the system that match your defined filtering criteria and to paginate through them. See Listing Object for the standard response structure of the list. Each object in the results array is a Submission Object. Note: the following properties are not returned as part of the Submission Object with this endpoint: documents, document_folders, and unassigned_pages.
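The `next` and `previous` properties of the Listing Object make pagination straightforward: keep requesting the URL in `next` until it is null. A minimal sketch; the `fetch` callable is an assumption standing in for an authenticated call such as `lambda u: session.get(u).json()`, and the stubbed pages below exist only for demonstration.

```python
from typing import Callable, Iterator

def iter_submissions(first_url: str, fetch: Callable[[str], dict]) -> Iterator[dict]:
    """Yield every result across pages by following the 'next' links."""
    url = first_url
    while url is not None:
        page = fetch(url)  # e.g. lambda u: session.get(u).json()
        yield from page['results']
        url = page['next']

# Demonstration with two stubbed pages instead of live HTTP calls.
pages = {
    '/api/v5/submissions': {'results': [{'id': 1}], 'next': '/api/v5/submissions?page=2'},
    '/api/v5/submissions?page=2': {'results': [{'id': 2}], 'next': None},
}
ids = [s['id'] for s in iter_submissions('/api/v5/submissions', pages.__getitem__)]
print(ids)  # [1, 2]
```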
Submission Listing Endpoint
GET /api/v5/submissions
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Submissions. If you repeat a query parameter multiple times in a request, the application will apply OR logic, i.e., it will list Submissions where that attribute matches any of the values you provided. For example, if you include layout_tag=A and layout_tag=B, you will get Submissions that have Documents that matched layouts with either tag A or tag B. Note that because the datetime filters are inequality-based, you cannot include the same filter in your request more than once.
In addition, the Submission object query parameters can also be used to modify the structure of each Submission in the list (e.g., debug=true v35 will reveal the debugging fields in each submission).
| Property | Type | Description |
| cursor_pagination v33.1 | boolean | Indicates whether cursor pagination is enabled. Defaults to false. We recommend setting this parameter to true when retrieving a large number of submissions. Note: Setting cursor_pagination to true will disallow the use of any other parameters in the request except for start_time__gte and start_time__lt. |
| state | string | Filters for Submissions that are in a specific state. See States for a list of possible values. |
| substate | string | Filters for Submissions that are in a specific substate. See Substates for a list of possible values. |
| exception | string | Filters for Submissions that have a specific exception. See Exceptions for a list of possible values. |
| halted | boolean | Filters for Submissions based on halted state. Possible values are true or false. |
| layout_tag | string | Filters for Submissions containing Documents that matched to a layout with a specified layout tag. |
| start_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filter for Submissions that were ingested into the application on or after a specific date and time (greater than or equal to operator). |
| start_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filter for Submissions that were ingested into the application before a specific date and time (less than operator). |
| complete_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filter for Submissions that finished processing on or after a specific date and time (greater than or equal to operator). |
| complete_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filter for Submissions that finished processing before a specific date and time (less than operator). |
| flow_uuid v35 | string | Filter for Submissions processed through a specific Flow. |
| correlation_id v38.0.4 | string | Filter for a Submission with a particular UUID (correlation_id). |
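The OR logic for repeated query parameters corresponds to repeating the key in the query string. With `requests`, passing a list as a parameter value produces exactly this, which is equivalent to the standard library's `urlencode` with `doseq=True`:

```python
from urllib.parse import urlencode

# Passing a list repeats the key, so the application ORs the values:
# Submissions whose Documents matched a layout tagged either 'A' or 'B'.
params = {'layout_tag': ['A', 'B'], 'state': 'complete'}
query = urlencode(params, doseq=True)
print(query)  # layout_tag=A&layout_tag=B&state=complete
```

With `requests`, `session.get(endpoint_url, params=params)` sends the same query string.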
Submission Activity Logs v36
Submission Activity Log Object
Example Submission Activity Log Object
[
{
"id": 2,
"submission_id": 10000,
"flow": "Workflow 1",
"name": "Document Classification Supervision",
"status": "COMPLETED",
"username": "admin",
"start_time": "2022-01-01T11:45:40.455155Z",
"end_time": "2022-01-01T11:46:45.455155Z",
"total_time_seconds": 65
},
{
"id": 1,
"submission_id": 10000,
"flow": "Workflow 1",
"name": "Machine Classification Block",
"status": "FAILED",
"username": "-",
"start_time": "2022-01-01T11:45:30.455155Z",
"end_time": "2022-01-01T11:45:35.455155Z",
"total_time_seconds": 5
}
]
A Submission Activity Log is an entity that records the submission’s processing steps, along with information about who took the action and when. If an action is taken by a user rather than a machine, the user’s username is recorded in the Submission Activity Log object.
Object Properties
| Property | Type | Description |
| id | integer | Index of this Activity Log within the Submission, ordered by start_time in descending order. |
| submission_id | integer | Unique system-generated Submission ID. |
| flow | string | Submission’s Flow title. When not set, it defaults to the Flow UUID. |
| name | string | Name of this activity. |
| status | string | Status of this activity. |
| username | string | The username of the human operator that performed this activity. |
| start_time | datetimetz (an ISO-8601 formatted datetime string) | Start timestamp of this activity. |
| end_time | datetimetz (an ISO-8601 formatted datetime string) | End timestamp of this activity. |
| total_time_seconds | integer | Duration of this activity in seconds. |
Retrieving Submission Activity Logs
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submission_activity_logs/'
submission_id = str(10000)
endpoint_url = urljoin(base_url, posixpath.join(endpoint, submission_id))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
[
{
"id": 2,
"submission_id": 10000,
"flow": "Workflow 1",
"name": "Document Classification Supervision",
"status": "COMPLETED",
"username": "admin",
"start_time": "2022-01-01T11:45:40.455155Z",
"end_time": "2022-01-01T11:46:45.455155Z",
"total_time_seconds": 65
},
{
"id": 1,
"submission_id": 10000,
"flow": "Workflow 1",
"name": "Machine Classification Block",
"status": "FAILED",
"username": "-",
"start_time": "2022-01-01T11:45:30.455155Z",
"end_time": "2022-01-01T11:45:35.455155Z",
"total_time_seconds": 5
}
]
Retrieve the Activity Logs of a specific Submission using that Submission’s ID. You can obtain this ID by using the Listing Submissions endpoint.
Submission Activity Logs Retrieval Endpoint
GET /api/v5/submission_activity_logs/submission_id
Listing Submission Activity Logs as CSV
Example Request
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submission_activity_logs/csv'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url, stream=True)
for line in r.iter_lines():
decoded_line = line.decode('utf-8')
print(decoded_line)
Example Response
Submission ID,Flow,Block,Status,User,Start Time,End Time,Total Time (Seconds)
10000,Workflow 1,Document Classification Supervision,COMPLETED,admin,2022-01-01 06:45:40 AM,2022-01-01 06:46:45 AM,65
10000,Workflow 1,Machine Classification Block,FAILED,-,2022-01-01 06:45:30 AM,2022-01-01 06:45:35 AM,5
With this endpoint, you can list and filter the Submissions and download a CSV with the respective Submission Activity Logs.
Endpoint
GET /api/v5/submission_activity_logs/csv
Request Parameters
The table below defines the query parameters that can be used to filter for Submissions and get the respective Submission Activity Logs.
| Property | Type | Description |
| id | integer | Filters for Submissions using the system-generated Submission ID. |
| start_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filter for Submissions that were ingested into the application on or after a specific date and time (greater than or equal to operator). |
| start_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filter for Submissions that were ingested into the application before a specific date and time (less than operator). |
Response
The response returned is in CSV format. Each row represents a Submission Activity Log and has the following columns:
| Header | Type | Description |
| Submission ID | integer | Unique system-generated Submission ID. |
| Flow | string | Submission’s Flow title. When not set, it defaults to the Flow UUID. |
| Block | string | Name of this activity. |
| Status | string | Status of this activity. |
| User | string | The username of the human operator that performed this activity. |
| Start Time | datetime | Start timestamp of this activity. |
| End Time | datetime | End timestamp of this activity. |
| Total Time (Seconds) | integer | Duration of this activity in seconds. |
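The streamed response can be fed directly to Python's `csv` module. A minimal sketch parsing the example response above into dictionaries keyed by the header row; note that every CSV cell arrives as a string, so numeric columns must be converted explicitly.

```python
import csv
import io

# The sample CSV from the Example Response above.
csv_text = (
    'Submission ID,Flow,Block,Status,User,Start Time,End Time,Total Time (Seconds)\n'
    '10000,Workflow 1,Document Classification Supervision,COMPLETED,admin,'
    '2022-01-01 06:45:40 AM,2022-01-01 06:46:45 AM,65\n'
    '10000,Workflow 1,Machine Classification Block,FAILED,-,'
    '2022-01-01 06:45:30 AM,2022-01-01 06:45:35 AM,5\n'
)

rows = list(csv.DictReader(io.StringIO(csv_text)))
# All CSV cells are strings; convert the numeric columns as needed.
total = sum(int(row['Total Time (Seconds)']) for row in rows)
print(len(rows), total)  # 2 70
```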
Submission Logs v36
Submission Log Object
Example Submission Log Object
{
"id": 10000,
"activity_created": "2022-01-01T11:45:37.287869Z",
"activity_start": "2022-01-01T11:45:40.455155Z",
"activity_end": "2022-01-01T11:45:40.970751Z",
"operator": 0,
"username": "admin",
"activity_name": "Resubmission",
"activity_subtype_name": "W_SUBMISSION_RETRY",
"submission_id": 20000,
"submission_page_id": 30000,
"form_id": 40000,
"submission_metadata": {
"custom": "data"
},
"submission_external_id": "yourcompanyid#1234",
"template_name": "example_template_name",
"template_uuid": "5ba11d9d-a586-41d9-871b-dce30619d31e",
"transcribed_fields_count": null,
"details": {
"resubmitted_id": 20002
}
}
A Submission Log is an entity that records key actions taken for a given submission, along with information about who took the action and when. If an action is taken by a user rather than a machine, the user’s username is recorded in the Submission Log object.
Object Properties
| Property | Type | Description |
| id | integer | Unique system-generated Submission Log ID. |
| activity_created | datetimetz (an ISO-8601 formatted datetime string) | Creation timestamp of this activity. |
| activity_start | datetimetz (an ISO-8601 formatted datetime string) | Start timestamp of this activity. |
| activity_end | datetimetz (an ISO-8601 formatted datetime string) | End timestamp of this activity. |
| operator | integer | Type of the operator performing this activity: 0 - human, 1 - machine. |
| username | string | The username of the human operator. |
| activity_name | string | The activity general name. |
| activity_subtype_name | string | The activity subtype name. |
| submission_id | integer | The identifier of the Submission related to this activity. |
| submission_page_id | integer | The identifier of the Submission Page related to this activity. |
| form_id | integer | The identifier of the Form related to this activity. |
| submission_metadata | JSON object | User-defined data provided during the related Submission’s creation. See Submission Creation for more information. |
| submission_external_id | string | User-defined ID provided during the related Submission’s creation. See Submission Creation for more information. |
| template_name | string | Name of the Layout matched to the related Submission Page. |
| template_uuid | string | UUID of the Layout matched to the related Submission Page. |
| transcribed_fields_count | integer | Number of transcribed fields for this activity in submission processing. |
| details | JSON object | Additional data containing details about the activity or the related entities. |
Retrieving Submission Logs
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submission_logs/'
submission_log_id = str(10000)
endpoint_url = urljoin(base_url, posixpath.join(endpoint, submission_log_id))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"id": 10000,
"activity_created": "2022-01-01T11:45:37.287869Z",
"activity_start": "2022-01-01T11:45:40.455155Z",
"activity_end": "2022-01-01T11:45:40.970751Z",
"operator": 0,
"username": "admin",
"activity_name": "Resubmission",
"activity_subtype_name": "W_SUBMISSION_RETRY",
"submission_id": 20000,
"submission_page_id": 30000,
"form_id": 40000,
"submission_metadata": {
"custom": "data"
},
"submission_external_id": "yourcompanyid#1234",
"template_name": "example_template_name",
"template_uuid": "5ba11d9d-a586-41d9-871b-dce30619d31e",
"transcribed_fields_count": null,
"details": {
"resubmitted_id": 20002
}
}
Retrieve data about a specific Submission Log using that Submission Log's ID. You can obtain this ID by using the Listing Submission Logs endpoint.
Submission Log Retrieval Endpoint
GET /api/v5/submission_logs/submission_log_id
Listing Submission Logs
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submission_logs'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"count": 2,
"next": null,
"previous": null,
"results": [
{
"id": 10001,
"activity_created": "2022-01-02T11:45:37.287869Z",
"activity_start": "2022-01-02T11:45:40.455155Z",
"activity_end": "2022-01-02T11:45:40.970751Z",
"submission_id": 20001,
"submission_page_id": 30001,
"form_id": 40001,
"submission_external_id": "yourcompanyid#5678",
"template_uuid": "28d9751d-0d50-42be-b2d9-8c8902282c28"
},
{
"id": 10000,
"activity_created": "2022-01-01T11:45:37.287869Z",
"activity_start": "2022-01-01T11:45:40.455155Z",
"activity_end": "2022-01-01T11:45:40.970751Z",
"submission_id": 20000,
"submission_page_id": 30000,
"form_id": 40000,
"submission_external_id": "yourcompanyid#1234",
"template_uuid": "5ba11d9d-a586-41d9-871b-dce30619d31e"
}
]
}
This endpoint allows you to retrieve a list of Submission Logs in the system that match your defined filtering criteria and paginate through them. Each object in the results array is a Submission Log Object.
Note: The Submission Log object retrieved through this endpoint is simplified. Only the id, activity_created time, activity_start time, activity_end time, submission_id, submission_page_id, form_id, submission_external_id, and template_uuid of each Submission Log are exposed.
Submission Log Listing Endpoint
GET /api/v5/submission_logs
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Submission Logs.
| Property | Type | Description |
| id_min | integer | Filters for Submission Logs with an ID greater than or equal to a specific ID (greater than or equal to operator). |
| id_max | integer | Filters for Submission Logs with an ID less than or equal to a specific ID (less than or equal to operator). |
| submission_id | integer | Filters for Submission Logs related to a specific submission using the system-generated Submission ID. |
| activity_created_gte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submission Logs that have an activity created on or after a specific date and time (greater than or equal to operator). |
| activity_created_lte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submission Logs that have an activity created on or before a specific date and time (less than or equal to operator). |
| activity_created_lt | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submission Logs that have an activity created before a specific date and time (less than operator). |
| activity_start_gte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submission Logs that have an activity start on or after a specific date and time (greater than or equal to operator). |
| activity_start_lte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submission Logs that have an activity start on or before a specific date and time (less than or equal to operator). |
| activity_start_lt | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submission Logs that have an activity start before a specific date and time (less than operator). |
| activity_end_gte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submission Logs that have an activity end on or after a specific date and time (greater than or equal to operator). |
| activity_end_lte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submission Logs that have an activity end on or before a specific date and time (less than or equal to operator). |
| activity_end_lt | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submission Logs that have an activity end before a specific date and time (less than operator). |
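All of the datetime filters above expect ISO-8601 strings. In Python, timezone-aware `datetime` objects serialize to that format via `isoformat()`; a sketch building a one-day activity window:

```python
from datetime import datetime, timezone

start = datetime(2022, 1, 1, tzinfo=timezone.utc)
end = datetime(2022, 1, 2, tzinfo=timezone.utc)

params = {
    'activity_created_gte': start.isoformat(),  # on or after Jan 1
    'activity_created_lt': end.isoformat(),     # strictly before Jan 2
}
print(params['activity_created_gte'])  # 2022-01-01T00:00:00+00:00
```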
Listing Submission Logs as CSV
Example Request
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submission_logs/csv'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url, stream=True)
for line in r.iter_lines():
decoded_line = line.decode('utf-8')
print(decoded_line)
Example Response
ID,Activity Created,Activity Start,Activity End,Operator,User Name,Activity Name,Activity Subtype Name,Submission ID,Submission Page ID,Document ID,Submission Metadata,Submission External ID,Layout Name,Layout UUID,Transcribed Fields Count,Details
10001,2022-01-02 06:45:37 AM,2022-01-02 06:45:40 AM,2022-01-02 06:45:40 AM,1,,,,20001,30001,40001,,yourcompanyid#5678,,28d9751d-0d50-42be-b2d9-8c8902282c28,,
10000,2022-01-01 06:45:37 AM,2022-01-01 06:45:40 AM,2022-01-01 06:45:40 AM,0,admin,Resubmission,W_SUBMISSION_RETRY,20000,30000,40000,"{
""custom"": ""data""
}",yourcompanyid#1234,example_template_name,5ba11d9d-a586-41d9-871b-dce30619d31e,,"{
""resubmitted_id"": 20002
}"
With this endpoint, you can list and filter the Submission Logs and download them as a CSV. It is similar to the Listing Submission Logs endpoint, except that the output format is CSV.
Endpoint
GET /api/v5/submission_logs/csv
Request Parameters
Use the query parameters from the Listing Submission Logs endpoint to filter for specific Submission Logs.
Response
The response returned is in CSV format. Each row represents a Submission Log and has the following columns:
| Header | Type | Description |
| ID | integer | Unique system-generated Submission Log ID. |
| Activity Created | datetime | Creation timestamp of this activity. |
| Activity Start | datetime | Start timestamp of this activity. |
| Activity End | datetime | End timestamp of this activity. |
| Operator | integer | Type of the operator performing this activity: 0 - human, 1 - machine. |
| User Name | string | The username of the human operator. |
| Activity Name | string | The activity general name. |
| Activity Subtype Name | string | The activity subtype name. |
| Submission ID | integer | The identifier of the Submission related to this activity. |
| Submission Page ID | integer | The identifier of the Submission Page related to this activity. |
| Document ID | integer | The identifier of the Form related to this activity. |
| Submission Metadata | JSON object | User-defined data provided during the related Submission’s creation. See Submission Creation for more information. |
| Submission External ID | string | User-defined ID provided during the related Submission’s creation. See Submission Creation for more information. |
| Layout Name | string | Name of the Layout matched to the related Submission Page. |
| Layout UUID | string | UUID of the Layout matched to the related Submission Page. |
| Transcribed Fields Count | integer | Number of transcribed fields for this activity in submission processing. |
| Details | JSON object | Additional data containing details about the activity or the related entities. |
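As the Example Response shows, the Submission Metadata and Details columns contain quoted JSON that may span multiple physical lines, so parse the full response text with the `csv` module (rather than splitting on newlines) and decode those cells with `json.loads`. A sketch using a trimmed version of the sample above:

```python
import csv
import io
import json

# One header row plus one data row from the Example Response; the Details
# cell is a quoted, multi-line JSON object (doubled quotes per CSV rules).
csv_text = (
    'ID,Activity Created,Activity Start,Activity End,Operator,User Name,'
    'Activity Name,Activity Subtype Name,Submission ID,Submission Page ID,'
    'Document ID,Submission Metadata,Submission External ID,Layout Name,'
    'Layout UUID,Transcribed Fields Count,Details\n'
    '10000,2022-01-01 06:45:37 AM,2022-01-01 06:45:40 AM,'
    '2022-01-01 06:45:40 AM,0,admin,Resubmission,W_SUBMISSION_RETRY,'
    '20000,30000,40000,"{\n""custom"": ""data""\n}",yourcompanyid#1234,'
    'example_template_name,5ba11d9d-a586-41d9-871b-dce30619d31e,,"{\n'
    '""resubmitted_id"": 20002\n}"\n'
)

rows = list(csv.DictReader(io.StringIO(csv_text)))
details = json.loads(rows[0]['Details'])  # quoted JSON cell -> dict
print(details['resubmitted_id'])  # 20002
```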
Layouts v32
Layout Object
Example Layout Object
{
"uuid": "f828eb91-b49a-40c5-bf14-42011ce70c5e",
"name": "example_layout_group_name",
"time_created": "2021-09-12T23:35:54.583065Z",
"is_archived": false,
"layout_version": {
"uuid": "fc595159-d39d-41fa-8ecf-054a8787f2bd",
"name": "Version - (04/30/2021)",
"time_created": "2021-09-12T23:35:54.583159Z",
"layout_variations": [
{
"uuid": "f6eb56ff-b8d2-4ee9-930c-59c28fc16dc9",
"name": "example_template_name",
"layout_fields": [
{
"uuid": "04e71c64-a621-414d-af52-eb4de1d8e4e3",
"name": "url",
"output_name": "example_output",
"notes": "notes_example",
"source_uuid": null,
"field_number": 2,
"is_table_column": false,
"repeated": false,
"page_id": "76b640c0-5e51-4213-b6e0-8692491b1826",
"supervision": 0,
"is_enabled": true,
"dropout": true,
"required": false,
"routing": false,
"multiline": false,
"data_type_id": "622ada78-51d1-4544-be02-c80f4a62798c",
"shared_field_id": "1a4f6738-8a94-417f-bbe7-0628b96e25b6"
}
],
"layout_tables": [
{
"uuid": "e48e8639-bd6f-4964-bafd-676fb813416e",
"name": "table 2",
"layout_table_columns": [
{
"uuid": "cba30db9-b23d-4f61-a03a-70d60bd95ba5",
"output_name": "layout_table_column__output_name",
"notes": "notes_example",
"name": "name_example",
"sv_transcription_mode": 0,
"data_type_id": "796696c9-b6a0-4260-abb7-3b1394f5cff4",
"source_uuid": null,
"column_number": 3,
"multiline": true,
"dropout": true,
"required": true
}
]
}
],
"origin_version_id": "fc595159-d39d-41fa-8ecf-054a8787f2bd"
}
],
"shared_fields": [
{
"uuid": "1a4f6738-8a94-417f-bbe7-0628b96e25b6",
"name": "url",
"output_name": "example_output",
"supervision": 0,
"required": false,
"routing": false,
"data_type_id": "622ada78-51d1-4544-be02-c80f4a62798c"
}
]
}
}
A Layout is an entity that groups together Variations of the same document, defined mainly by the removal, addition, or change in position of Fields.
As a Layout is edited, it can have multiple Layout Versions. Versioning occurs on the Layout level, not on the Layout Variation level.
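The nesting above (Layout → Layout Version → Layout Variations → Fields and Tables) can be walked to collect, for example, every field and column name defined in a Layout. A sketch over a trimmed-down object with the same shape as the example:

```python
def collect_field_names(layout: dict) -> list:
    """Collect field and table-column names across all Layout Variations."""
    names = []
    for variation in layout['layout_version']['layout_variations']:
        for field in variation.get('layout_fields', []):
            names.append(field['name'])
        for table in variation.get('layout_tables', []):
            for column in table['layout_table_columns']:
                names.append(column['name'])
    return names

# A minimal Layout object mirroring the structure of the example above.
layout = {
    'layout_version': {
        'layout_variations': [
            {
                'layout_fields': [{'name': 'url'}],
                'layout_tables': [
                    {'layout_table_columns': [{'name': 'name_example'}]}
                ],
            }
        ]
    }
}
print(collect_field_names(layout))  # ['url', 'name_example']
```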
Object Properties
| Property | Type | Description |
| uuid | string | Unique system-generated Layout ID that is exposed in the web application. |
| name | string | Name of the Layout. |
| time_created | datetimetz (an ISO-8601 formatted datetime string) | Time that this Layout was created. |
| is_archived | boolean | Indicates whether this Layout is archived. |
| layout_version | JSON object | Latest saved version for this Layout. See Layout Version. |
Layout Version Object
Represents the state of a Layout at a point in time. The Version returned in the response is the most recently created Version.
Object properties
| Property | Type | Description |
| uuid | string | Unique system-generated Layout Version ID that is exposed in the web application. |
| name | string | Name of the Layout Version. |
| time_created | datetimetz (an ISO-8601 formatted datetime string) | Time that this Layout Version was created. |
| layout_variations | array of JSON objects | A list of Layout Variations. See Layout Variation. |
| shared_fields | array of JSON objects | A list of Shared Fields. See Shared Field. |
Shared Field Object
Represents the common state of a set of Fields across the Layout's Variations. Contains a subset of the data properties of a Field that will be synced with the associated Field during the data-extraction process.
Object properties
| Property | Type | Description |
| uuid | string | Unique system-generated Shared Field ID. |
| name | string | Name defined on the Layout as "Field Name"; shown in all parts of the web application where the field appears. Intended to be used as a display-friendly name. |
| output_name | string | Name defined on the layout as "Output Name"; only shown in the layout editor and returned in the API. This property is optional and returns null if not defined in the Layout. Intended to be used as a code-friendly name. |
| supervision | integer | Indicates how Supervision affects the processing of the field. See below for details. |
| required | boolean | Indicates whether this Field is defined as "Required" in the Layout Variation. |
| routing v27.1 | boolean | If true, indicates that the Field's value will be sent to a specific group of keyers for Supervision and QA. |
| data_type_id | string | Indicates the data type assigned to this Field as part of the Layout. See Data Types for more details. |
SUPERVISION VALUES
0 - default: The default Supervision options for your instance will be applied.
1 - consensus_required: Indicates that this field requires multiple sources to agree on the transcription before its processing is complete.
2 - auto_transcribe: Indicates that the field will never go to Supervision, regardless of how likely the transcription is to be correct. If the field is particularly difficult to read, the transcription will be returned as null, and the Field will be marked with the illegible_field exception.
3 - always_supervise: Indicates that the field will always go to Supervision, regardless of the machine's confidence, but it will only require the response of a single keyer.
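On the client side, the integer supervision codes map naturally to an enumeration. The enum below is an illustrative convenience, not part of the API; the names come from the code descriptions above.

```python
from enum import IntEnum

class Supervision(IntEnum):
    """Client-side names for the integer supervision codes above."""
    DEFAULT = 0             # instance defaults apply
    CONSENSUS_REQUIRED = 1  # multiple sources must agree on the transcription
    AUTO_TRANSCRIBE = 2     # never sent to Supervision
    ALWAYS_SUPERVISE = 3    # always sent to Supervision, single keyer

# Example: decode the 'supervision' property of a Layout Field.
field = {'name': 'url', 'supervision': 2}
print(Supervision(field['supervision']).name)  # AUTO_TRANSCRIBE
```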
Layout Variation Object
Represents a Layout Variation of a specific Layout.
Object properties
| Property | Type | Description |
| name | string | Name defined for the Layout Variation. |
| uuid | string | Unique system-generated Layout Variation ID. |
| layout_fields | array of JSON objects | A list of Fields. See Field. |
| layout_tables | array of JSON objects | A list of Tables. See Layout Table. |
| origin_version_id | string | Identifier of the Layout Version that this Layout Variation originated from. |
Layout Field Object
The Field object describes a field that was marked for extraction during the layout-creation process. These Fields can be either layout fields or layout table columns.
Object properties
| Property | Type | Description |
| uuid | string | Unique system-generated Layout Field ID. |
| name | string | Name defined on the Layout as "Field Name"; shown in all parts of the web application where the field appears. Intended to be used as a display-friendly name. |
| output_name | string | Name defined on the Layout as "Output Name"; only shown in the Layout Editor and returned in the API. This property is optional and returns null if not defined in the Layout. Intended to be used as a code-friendly name. |
| notes | string | Inherited details from the Field Dictionary, if this Layout Field has a Dictionary Field link. |
| source_uuid | string | UUID of the Layout that contains this Field. |
| field_number | integer | Defines the order of Fields on a layout page image. Fields are ordered from top to bottom with a starting index of 1. |
| is_table_column | boolean | Indicates whether this Field is a Layout Table Column. |
| repeated | boolean | Indicates whether the field appears multiple times on the layout image. Data from repeated fields is extracted only from the field’s first occurrence in a document. |
| page_id | string | UUID of the Page that contains this Field. |
| is_enabled | boolean | Indicates whether the Field is enabled for the Layout Variation. |
| supervision | integer | Indicates how Supervision affects the processing of the field. See below for details. |
| dropout | boolean | Indicates whether the dropout feature has been enabled for this Field. If enabled, features in the field’s background are not included in the transcription. |
| required | boolean | Indicates whether this Field is defined as "Required" in the Layout Variation. |
| routing v27.1 | boolean | If true, indicates that the Field's value will be sent to a specific group of keyers for Supervision and QA. |
| multiline | boolean | If true, indicates that this Field is expected to have multiple lines of text. |
| data_type_id | string | Indicates the data type assigned to this Field as part of the Layout. See Data Types for more details. |
| shared_field_id | string | The UUID of the Field's Shared Field. See Shared Field for more details. |
SUPERVISION VALUES
0 - default: The default Supervision options for your instance will be applied.
1 - consensus_required: Indicates that this field requires multiple sources to agree on the transcription before its processing is complete.
2 - auto_transcribe: Indicates that the field will never go to Supervision, regardless of how likely the transcription is to be correct. If the field is particularly difficult to read, the transcription will be returned as null, and the Field will be marked with the illegible_field exception.
3 - always_supervise: Indicates that the field will always go to Supervision, regardless of the machine's confidence, but it will only require the response of a single keyer.
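The integer codes above can be mapped to readable names in client code. A minimal sketch; the SUPERVISION_MODES dictionary and helper function below are our own naming, not part of the API:

```python
# Map the integer supervision codes returned by the API to readable names.
# This mapping is illustrative; the API itself only returns the integers.
SUPERVISION_MODES = {
    0: 'default',
    1: 'consensus_required',
    2: 'auto_transcribe',
    3: 'always_supervise',
}

def supervision_mode(layout_field):
    """Return the supervision mode name for a Layout Field object."""
    return SUPERVISION_MODES.get(layout_field.get('supervision'), 'unknown')
```

For example, `supervision_mode({'supervision': 0})` returns `'default'`.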
Layout Table Object
The Layout Table object is the representation of a table in a layout image.
Object properties
| Property | Type | Description |
| uuid | string | Unique system-generated Layout Table ID. |
| name | string | Name of the Layout Table. |
| layout_table_columns | array of JSON objects | A list of the Layout Table's Columns. See Layout Table Column. |
Layout Table Column Object
The Layout Table Column object is the representation of a table column in a layout image.
Object properties
| Property | Type | Description |
| uuid | string | Unique system-generated Layout Table Column ID. |
| output_name | string | Name defined on the Layout as the "Output Name," which is only shown in the Layout Editor and returned in the API. This property is optional and returns null if not defined in the Layout. Intended to be used as a code-friendly name. |
| notes | string | Inherited details from the Field Dictionary, if this Layout Table Column has a Dictionary Field link. |
| name | string | Human-readable display name. |
| sv_transcription_mode | integer | Indicates the Manual Transcription mode for cells in this Layout Table Column. |
| source_uuid | string | UUID of the Layout that contains this Layout Table Column. |
| data_type_id | string | Indicates the data type assigned to this Layout Table Column as part of the Layout. See Data Types for more details. |
| column_number | integer | Defines the order of Layout Table Columns on a layout page image. Columns are ordered from left to right with a starting index of 1. |
| multiline | boolean | If true, indicates that this Column is expected to have multiple lines of text. |
| dropout | boolean | Indicates whether the dropout feature has been enabled for this Layout Table Column. If enabled, features in the column’s background are not included in the transcription. |
| required | boolean | Indicates whether this Layout Table Column is defined as "Required" in the Layout Variation. |
Retrieving Layouts
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/layouts/'
layout_id = 'f828eb91-b49a-40c5-bf14-42011ce70c5e'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, layout_id))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"uuid": "f828eb91-b49a-40c5-bf14-42011ce70c5e",
"name": "example_layout_group_name",
"time_created": "2021-09-12T23:35:54.583065Z",
"is_archived": false,
"layout_version": {
"uuid": "fc595159-d39d-41fa-8ecf-054a8787f2bd",
"name": "Version - (04/30/2021)",
"time_created": "2021-09-12T23:35:54.583159Z",
"layout_variations": [
{
"uuid": "f6eb56ff-b8d2-4ee9-930c-59c28fc16dc9",
"name": "example_template_name",
"layout_fields": [
{
"uuid": "04e71c64-a621-414d-af52-eb4de1d8e4e3",
"name": "url",
"output_name": "example_output",
"notes": "notes_example",
"source_uuid": null,
"field_number": 2,
"is_table_column": false,
"repeated": false,
"page_id": "76b640c0-5e51-4213-b6e0-8692491b1826",
"supervision": 0,
"is_enabled": true,
"dropout": true,
"required": false,
"routing": false,
"multiline": false,
"data_type_id": "622ada78-51d1-4544-be02-c80f4a62798c",
"shared_field_id": "1a4f6738-8a94-417f-bbe7-0628b96e25b6"
}
],
"layout_tables": [
{
"uuid": "e48e8639-bd6f-4964-bafd-676fb813416e",
"name": "table 2",
"layout_table_columns": [
{
"uuid": "cba30db9-b23d-4f61-a03a-70d60bd95ba5",
"output_name": "layout_table_column__output_name",
"notes": "notes_example",
"name": "name_example",
"sv_transcription_mode": 0,
"data_type_id": "796696c9-b6a0-4260-abb7-3b1394f5cff4",
"source_uuid": null,
"column_number": 3,
"multiline": true,
"dropout": true,
"required": true
}
]
}
],
"origin_version_id": "fc595159-d39d-41fa-8ecf-054a8787f2bd"
}
],
"shared_fields": [
{
"uuid": "1a4f6738-8a94-417f-bbe7-0628b96e25b6",
"name": "url",
"output_name": "example_output",
"supervision": 0,
"required": false,
"routing": false,
"data_type_id": "622ada78-51d1-4544-be02-c80f4a62798c"
}
]
}
}
Retrieve data about a specific Layout using that Layout's ID. You can obtain this ID by looking in the application or by using the Listing Layouts endpoint.
Layout Retrieval Endpoint
GET /api/v5/layouts/layout_id
Listing Layouts
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/layouts'
endpoint_url = urljoin(base_url, endpoint)
params = {'is_archived': 'False'}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url, params=params)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"count":1,
"next":null,
"previous":null,
"results":[
{
"uuid":"80a24a26-85e1-4ee1-ae81-b5c04ad380c1",
"name":"ID",
"time_created":"2021-08-27T11:23:17Z",
"is_archived": false,
"layout_version":{
"layout_variations":[
{
"name":"ID_2",
"uuid":"e5911b07-6017-47f0-b8f0-4b445c59dd90"
},
{
"name":"ID",
"uuid":"a71588bb-6f55-4968-a925-280df89e3ea8"
}
]
}
}
]
}
This endpoint allows you to retrieve a list of Layouts in the system that match your defined filtering criteria and paginate through them. See Listing Object for the standard response structure of the list. Each object in the results array is a Layout Object. Note: The Layout object retrieved through this endpoint is simplified. The main difference is in layout_version, where only the name and uuid of each Layout Variation are exposed.
Layout Listing Endpoint
GET /api/v5/layouts
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Layouts.
| Property | Type | Description |
| include_archived | boolean | Indicates whether to include archived Layouts in the response. Defaults to false. |
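Because the listing response is paginated, retrieving every Layout means following the next links until they are exhausted. A sketch of that loop, assuming an authenticated session such as the one returned by get_oauth2_session() from the Getting Started Guide:

```python
from urllib.parse import urljoin

# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'

def list_all_layouts(session):
    """Collect Layout objects from every page of the listing endpoint
    by following the 'next' link until it is null."""
    url = urljoin(base_url, 'api/v5/layouts')
    results = []
    while url:
        page = session.get(url).json()
        results.extend(page['results'])
        url = page['next']  # null (None) on the last page
    return results
```

The same loop works for any listing endpoint in this API that returns the standard count/next/previous/results structure.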
License v39
All Hyperscience instances require a License Key to activate. This endpoint can be used to retrieve the status of the system with regard to the license key. The response object contains the license's expiration timestamp and the status of the system, which are defined below. Only users with Admin access can access this endpoint.
License Object
The License object presents the state of the license key in the system, which is composed of an expiration time and a status.
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/license'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example License Object
{
"expiration_timestamp": "2024-12-31T12:00:00+00:00",
"package": "Premium",
"status": "Active"
}
Object Properties
| Property | Type | Description |
| expiration_timestamp | string | The time at which the given license key will expire, determined by your contract with Hyperscience. String will be empty if the status is Expired, Missing, or Invalid. |
| package v40 | string | The Feature Package associated with this license key, based on your contract with Hyperscience. String will be empty if the license key was generated for a v39 Hypercell installation. |
| status | string | The status of the license key, which may be one of the following: - Active: a license key that is between the day of the contract beginning and (the end of the contract minus the grace period). - Expiring: a license key that is between (the end of the contract minus the grace period) and the end of the contract. - Extended: a license key that is between the day after the end of the contract and (the day after the end of the contract plus the grace period). - Expired: a license key that is beyond the contract expiration date plus the grace period. - Missing: no license key has been entered to this instance. - Invalid: a license key has been entered, but it is not technically valid. |
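Since expiration_timestamp is an ISO-8601 string, and empty when the status is Expired, Missing, or Invalid, a client can compute the remaining validity window from the response. A minimal sketch; the helper name is our own:

```python
from datetime import datetime, timezone

def days_until_expiry(license_obj, now=None):
    """Return whole days until the license expires, or None if no
    expiration timestamp is present (Expired, Missing, or Invalid)."""
    ts = license_obj.get('expiration_timestamp')
    if not ts:
        return None
    now = now or datetime.now(timezone.utc)
    # fromisoformat handles offsets like "+00:00" (Python 3.7+).
    expires = datetime.fromisoformat(ts)
    return (expires - now).days
```

A negative result would indicate a license in its grace period or beyond it.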
Cases v31
Case Object
Example Case Object
{
"id": 30000000,
"uuid": "46474849-4a4b-4c4d-8e4f-505152535455",
"external_case_id": "HS-30000000",
"start_time": "2018-05-09T20:39:50.220162Z",
"update_time": "2018-05-09T20:39:50.220162Z",
"to_delete_time": "2018-05-09T20:39:50.220162Z",
"notes": "",
"documents": [
{
"id": 3000000,
"submission_id":1000000,
"state": "supervision",
"substate": "manual_transcription",
"exceptions": [
"required_field_missing"
],
"start_time": "2018-05-09T20:39:50.220162Z",
"goal_time": "2018-05-10T20:39:50.220162Z",
"complete_time": null,
"priority": 1950375323,
"layout_uuid": "7a7b7c7d-7e7f-4081-8283-848586878889",
"layout_name": "NYC DMV application",
"layout_variation_uuid": "7a7b7c7d-7e7f-4081-8283-848586878889",
"layout_variation_name": "NYC DMV application",
"layout_tags": [],
"layout_version_uuid": "8a8b8c8d-8e8f-4091-9293-949596979899",
"layout_version_name": "NYC DMV application",
"document_folders": [],
"supervision_url": "/supervision/document/3000000",
"type": "structured",
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
},
"pages": [
{
"id": 2000000,
"state": "supervision",
"substate": "manual_transcription",
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 1,
"submission_page_number": 1,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 1,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/cacbcccd-cecf-40d1-92d3-d4d5d6d7d8d9",
"corrected_image_url": "/api/v5/image/eaebeced-eeef-40f1-b2f3-f4f5f6f7f8f9",
"rejected": false,
"identifiers": [
{
"transcription": {
"raw": "X2851",
"normalized": "X2851"
},
"image_url": "/api/v5/image/eaebeced-eeef-40f1-b2f3-f4f5f6f7f8f9?start_x=0.16&start_y=0.95&end_x=0.27&end_y=0.97"
},
{
"transcription": {
"raw": "11/13",
"normalized": "11/13"
},
"image_url": "/api/v5/image/eaebeced-eeef-40f1-b2f3-f4f5f6f7f8f9?start_x=0.83&start_y=0.95&end_x=0.94&end_y=0.97"
}
],
"decisions": []
},
{
"id": 2000001,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 2,
"submission_page_number": 2,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 2,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/abacadae-afb0-41b2-b3b4-b5b6b7b8b9ba",
"corrected_image_url": "/api/v5/image/cbcccdce-cfd0-41d2-93d4-d5d6d7d8d9da",
"rejected": false,
"identifiers": [],
"decisions": []
},
{
"id": 2000002,
"state": "complete",
"substate": null,
"exceptions": [],
"page_type": "document_page",
"file_page_number": 3,
"submission_page_number": 3,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 3,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/9c9d9e9f-a0a1-42a3-a4a5-a6a7a8a9aaab",
"corrected_image_url": "/api/v5/image/bcbdbebf-c0c1-42c3-84c5-c6c7c8c9cacb",
"rejected": false,
"identifiers": [],
"decisions": []
}
],
"document_tables": [
{
"id": 7000000,
"table_number": 1,
"name": "Table Doc 1",
"layout_table_uuid": "2e2f3031-3233-4435-b637-38393a3b3c3d",
"layout_variation_table_uuid": "2e2f3031-3233-4435-b637-38393a3b3c3d",
"rows": [
{
"id": 8000000,
"row_number": 1,
"cells": [
{
"id": 9000000,
"column_name": "Col Name 1",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "bebfc0c1-c2c3-44c5-86c7-c8c9cacbcccd",
"layout_variation_table_column_uuid": "bebfc0c1-c2c3-44c5-86c7-c8c9cacbcccd",
"page_id": 2000000,
"raw": "Cell Col 1",
"normalized": "CELL COL 1",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [0.0, 0.0, 1.0, 1.0]
},
{
"id": 9000001,
"column_name": "Col Name 2",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "4f505152-5354-4556-9758-595a5b5c5d5e",
"layout_variation_table_column_uuid": "4f505152-5354-4556-9758-595a5b5c5d5e",
"page_id": 2000000,
"raw": "Cell Col 2",
"normalized": "CELL COL 2",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [0.0, 0.0, 1.0, 1.0]
},
{
"id": 9000002,
"column_name": "Col Name 3",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "dfe0e1e2-e3e4-45e6-a7e8-e9eaebecedee",
"layout_variation_table_column_uuid": "dfe0e1e2-e3e4-45e6-a7e8-e9eaebecedee",
"page_id": 2000000,
"raw": "Cell Col 3",
"normalized": "CELL COL 3",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [0.0, 0.0, 1.0, 1.0]
},
{
"id": 9000003,
"column_name": "Col Name 4",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "70717273-7475-4677-b879-7a7b7c7d7e7f",
"layout_variation_table_column_uuid": "70717273-7475-4677-b879-7a7b7c7d7e7f",
"page_id": 2000000,
"raw": "Cell Col 4",
"normalized": "CELL COL 4",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [0.0, 0.0, 1.0, 1.0]
},
{
"id": 9000004,
"column_name": "Col Name 5",
"document_table_row_id": 8000000,
"layout_table_column_uuid": "01020304-0506-4708-890a-0b0c0d0e0f10",
"layout_variation_table_column_uuid": "01020304-0506-4708-890a-0b0c0d0e0f10",
"page_id": 2000000,
"raw": "Cell Col 5",
"normalized": "CELL COL 5",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [0.0, 0.0, 1.0, 1.0]
}
]
}
]
}
],
"document_fields": [
{
"id": 6000000,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"name": "Eye Color",
"output_name": "eye_color",
"field_definition_attributes": {
"required": true,
"data_type": "Generic Text",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "",
"normalized": "",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/eaebeced-eeef-40f1-b2f3-f4f5f6f7f8f9?start_x=0.5561431348963572&start_y=0.39681444834913787&end_x=0.7324029831466551&end_y=0.7792487158226642",
"page_id": 2000000,
"decisions": []
},
{
"id": 6000001,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "Mobile Number",
"output_name": "mobile_number",
"field_definition_attributes": {
"required": false,
"data_type": "Phone Number",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "555 666-7788",
"normalized": "5556667788",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/eaebeced-eeef-40f1-b2f3-f4f5f6f7f8f9?start_x=0.15472428426169488&start_y=0.16627394370395326&end_x=0.2600813617687374&end_y=0.6904942436432017",
"page_id": 2000000,
"decisions": []
},
{
"id": 6000002,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "Email",
"output_name": "email",
"field_definition_attributes": {
"required": false,
"data_type": "Email Address",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "jane.smith@hyperscience.com",
"normalized": "JANE.SMITH@HYPERSCIENCE.COM",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/eaebeced-eeef-40f1-b2f3-f4f5f6f7f8f9?start_x=0.37538749455803055&start_y=0.6420047917783037&end_x=0.6301330596270731&end_y=0.7415953499749627",
"page_id": 2000000,
"decisions": []
},
{
"id": 6000003,
"state": "supervision",
"substate": "manual_transcription",
"exceptions": [],
"name": "SSN",
"output_name": "ssn",
"field_definition_attributes": {
"required": true,
"data_type": "Generic Text",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": "requires_consensus"
},
"transcription": {
"raw": "",
"normalized": "",
"source": null,
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/eaebeced-eeef-40f1-b2f3-f4f5f6f7f8f9?start_x=0.20371511752923907&start_y=0.619049553337295&end_x=0.6450883079159279&end_y=0.7909928935777449",
"page_id": 2000000,
"decisions": []
},
{
"id": 6000004,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "Have you had a driver license -Yes",
"output_name": "has_driver_license",
"field_definition_attributes": {
"required": false,
"data_type": "Checkbox",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "False",
"normalized": "FALSE",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/cbcccdce-cfd0-41d2-93d4-d5d6d7d8d9da?start_x=0.2400321364667039&start_y=0.5326989063403808&end_x=0.9977737645469094&end_y=0.8312048674469082",
"page_id": 2000001,
"decisions": []
}
],
"derived_document_fields": []
},
{
"id": 3000001,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"start_time": "2018-05-09T20:39:50.220162Z",
"goal_time": "2018-05-10T20:39:50.220162Z",
"complete_time": "2018-05-09T20:40:28.074541Z",
"priority": 1950375323,
"layout_uuid": "0d0e0f10-1112-4314-9516-1718191a1b1c",
"layout_name": "Social Security Name Change",
"layout_variation_uuid": "0d0e0f10-1112-4314-9516-1718191a1b1c",
"layout_variation_name": "Social Security Name Change",
"layout_tags": [],
"layout_version_uuid": "1d1e1f20-2122-4324-a526-2728292a2b2c",
"layout_version_name": "Social Security Name Change",
"document_folders": [
4000000,
4000001
],
"supervision_url": null,
"type": "structured",
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
},
"pages": [
{
"id": 2000003,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 4,
"submission_page_number": 4,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 1,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/5d5e5f60-6162-4364-a566-6768696a6b6c",
"corrected_image_url": "/api/v5/image/7d7e7f80-8182-4384-8586-8788898a8b8c",
"rejected": false,
"identifiers": [],
"decisions": []
}
],
"document_tables": [
{
"id": 7000001,
"table_number": 1,
"name": "Table Doc 2",
"layout_table_uuid": "11121314-1516-4718-991a-1b1c1d1e1f20",
"layout_variation_table_uuid": "11121314-1516-4718-991a-1b1c1d1e1f20",
"rows": [
{
"id": 8000001,
"row_number": 1,
"cells": [
{
"id": 9000005,
"column_name": "Col Name Fixed",
"document_table_row_id": 8000001,
"layout_table_column_uuid": "a1a2a3a4-a5a6-47a8-a9aa-abacadaeafb0",
"page_id": 2000003,
"raw": "Cell Row 1",
"normalized": "CELL ROW 1",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [0.0, 0.0, 1.0, 1.0]
}
]
},
{
"id": 8000002,
"row_number": 2,
"cells": [
{
"id": 9000006,
"column_name": "Col Name Fixed",
"document_table_row_id": 8000002,
"layout_table_column_uuid": "a1a2a3a4-a5a6-47a8-a9aa-abacadaeafb0",
"layout_variation_table_column_uuid": "a1a2a3a4-a5a6-47a8-a9aa-abacadaeafb0",
"page_id": 2000003,
"raw": "Cell Row 2",
"normalized": "CELL ROW 2",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [0.0, 0.0, 1.0, 1.0]
}
]
},
{
"id": 8000003,
"row_number": 3,
"cells": [
{
"id": 9000007,
"column_name": "Col Name Fixed",
"document_table_row_id": 8000003,
"layout_table_column_uuid": "a1a2a3a4-a5a6-47a8-a9aa-abacadaeafb0",
"layout_variation_table_column_uuid": "a1a2a3a4-a5a6-47a8-a9aa-abacadaeafb0",
"page_id": 2000003,
"raw": "Cell Row 3",
"normalized": "CELL ROW 3",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [0.0, 0.0, 1.0, 1.0]
}
]
},
{
"id": 8000004,
"row_number": 4,
"cells": [
{
"id": 9000008,
"column_name": "Col Name Fixed",
"document_table_row_id": 8000004,
"layout_table_column_uuid": "a1a2a3a4-a5a6-47a8-a9aa-abacadaeafb0",
"layout_variation_table_column_uuid": "a1a2a3a4-a5a6-47a8-a9aa-abacadaeafb0",
"page_id": 2000003,
"raw": "Cell Row 4",
"normalized": "CELL ROW 4",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [0.0, 0.0, 1.0, 1.0]
}
]
},
{
"id": 8000005,
"row_number": 5,
"cells": [
{
"id": 9000009,
"column_name": "Col Name Fixed",
"document_table_row_id": 8000005,
"layout_table_column_uuid": "a1a2a3a4-a5a6-47a8-a9aa-abacadaeafb0",
"layout_variation_table_column_uuid": "a1a2a3a4-a5a6-47a8-a9aa-abacadaeafb0",
"page_id": 2000003,
"raw": "Cell Row 5",
"normalized": "CELL ROW 5",
"exceptions": [],
"user_transcribed": null,
"state": "processing",
"bounding_box": [0.0, 0.0, 1.0, 1.0]
}
]
}
]
}
],
"document_fields": [
{
"id": 6000005,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "SIGNATURE",
"output_name": "signature",
"field_definition_attributes": {
"required": false,
"data_type": "Signature",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "False",
"normalized": "FALSE",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/7d7e7f80-8182-4384-8586-8788898a8b8c?start_x=0.024463502666736314&start_y=0.17972713085597042&end_x=0.7104376676041624&end_y=0.45301192885054936",
"page_id": 2000003,
"decisions": []
}
],
"derived_document_fields": [],
"decisions": []
}
],
"unassigned_pages": [
{
"id": 2000004,
"state": "complete",
"substate": null,
"exceptions": [],
"page_type": "unknown_page",
"file_page_number": 1,
"submission_page_number": 5,
"layout_page_number": null,
"layout_variation_page_number": null,
"document_page_number": null,
"submitted_filename": "test_submission_2.pdf",
"image_url": "/api/v5/image/bdbebfc0-c1c2-43c4-85c6-c7c8c9cacbcc",
"corrected_image_url": null,
"rejected": false,
"identifiers": [],
"decisions": []
},
{
"id": 2000005,
"state": "complete",
"substate": null,
"exceptions": [],
"page_type": "blank_page",
"file_page_number": 2,
"submission_page_number": 6,
"layout_page_number": null,
"layout_variation_page_number": null,
"document_page_number": null,
"submitted_filename": "test_submission_2.pdf",
"image_url": "/api/v5/image/dddedfe0-e1e2-43e4-a5e6-e7e8e9eaebec",
"corrected_image_url": null,
"rejected": false,
"identifiers": [],
"decisions": []
}
],
"submission_files": [
{
"name": "test_submission_attachment_1.svg",
"upload_type": "attachment",
"url": "/api/v5/uploaded_file/1e1f2021-2223-4425-a627-28292a2b2c2d"
},
{
"name": "test_submission_2.pdf",
"upload_type": "document",
"url": "/api/v5/uploaded_file/0e0f1011-1213-4415-9617-18191a1b1c1d"
},
{
"name": "test_submission.pdf",
"upload_type": "document",
"url": "/api/v5/uploaded_file/fdfe0001-0203-4405-8607-08090a0b0c0d"
}
],
"decisions": [
{
"decision": "Valid Case",
"choice": "Yes",
"task_purpose_name": "Case Checker"
}
]
}
The Case object presents a set of documents collated for a particular purpose.
One or more cases can be created during a Submission. The system interprets the data provided in the Cases parameter in a submission creation request and creates cases accordingly.
Object properties
| Property | Type | Description |
| id | integer | Unique system-generated Case ID that is exposed in the web application. |
| uuid | string | Unique system-generated Case ID that remains unique when there is a need to migrate case data between systems. |
| external_case_id | string | User-defined string that can be included as part of the Case object during case creation. Must be unique to the Case. Optional. Maximum length is 200 characters. If no value was defined during case creation, Hyperscience will automatically generate one. |
| start_time | datetimetz (an ISO-8601 formatted datetime string) | Time the Case was created. |
| update_time | datetimetz (an ISO-8601 formatted datetime string) | Time the Case was updated in the web application. |
| to_delete_time v35 | datetimetz (an ISO-8601 formatted datetime string) | Time at which this Case will qualify for deletion. |
| notes | string | Notes a user added to the Case in the web application. |
| documents | array of JSON objects | Array of Document objects identified as part of this Case. Array will be empty if no Documents were identified in the Case. |
| unassigned_pages | array of JSON objects | Array of Page objects included in this Case that have not been grouped into a Document by the application. Array will be empty if all Pages in the Case were matched to a Document. |
| submission_files | array of JSON objects | Array of JSON objects representing the files comprising the Case. Each object contains the name of the file, an upload_type indicating whether the file is a document that will be processed by Hyperscience or an attachment that will be carried through the system without processing, and the url the file can be retrieved from. |
| decisions v32.0.1 | array of JSON objects | Array of Decision objects describing any decisions made for the given Case. An empty array is returned if there are no Decisions. |
| submissions v34 | array of integers | Array of submission ids capturing all submissions with pages (or documents) included in this case. |
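To illustrate navigating this nested structure, the sketch below flattens a Case object into per-Document field transcriptions by walking documents, then document_fields, then the transcription object. The helper name is illustrative, not part of the API:

```python
def case_transcriptions(case):
    """Yield (document id, output_name, normalized value) tuples for
    every Document Field in a Case object."""
    for document in case.get('documents', []):
        for field in document.get('document_fields', []):
            transcription = field.get('transcription') or {}
            yield document['id'], field['output_name'], transcription.get('normalized')
```

Remember that documents is omitted when the Case is retrieved with flat=true, so this only applies to full Case objects.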
Retrieving Cases
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/cases/'
saved_case_id = str(30000000)
endpoint_url = urljoin(base_url, posixpath.join(endpoint, saved_case_id))
params = {'flat': False}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url, params=params)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Use these endpoints to retrieve your cases. By default, they will not return the following properties of the Case object: documents, unassigned_pages, submission_files, and submissions.
Case Retrieval Endpoint
GET /api/v5/cases/case_id
GET /api/v5/cases/external/external_case_id
The URL to use depends on whether you are retrieving the case with its Hyperscience-generated ID or with a user-defined external identifier.
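A sketch of building the correct retrieval URL from whichever identifier you hold; the helper function is our own, not part of the API:

```python
import posixpath
from urllib.parse import urljoin

# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'

def case_url(case_id=None, external_case_id=None):
    """Build the Case retrieval URL from either a Hyperscience Case ID
    or a user-defined external case ID (exactly one must be given)."""
    if (case_id is None) == (external_case_id is None):
        raise ValueError('provide exactly one of case_id or external_case_id')
    if case_id is not None:
        path = posixpath.join('api/v5/cases/', str(case_id))
    else:
        path = posixpath.join('api/v5/cases/external/', external_case_id)
    return urljoin(base_url, path)
```

The resulting URL can then be fetched with the authenticated session shown in the example request above.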
Request parameters
| Property | Type | Description |
| flat | boolean | Optional parameter that prevents returning the documents, unassigned_pages, submission_files, and submissions arrays of the Case object. Defaults to true unless specifically passed as false. |
Listing Cases
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/cases'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_time__gte': '2018-04-27T15:21'}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url, params=params)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"count": 1,
"next": null,
"previous": null,
"results": [
{
"id": 30000000,
"uuid": "46474849-4a4b-4c4d-8e4f-505152535455",
"external_case_id": "HS-30000000",
"start_time": "2018-05-09T20:39:50.220162Z",
"update_time": "2018-05-09T20:39:50.220162Z",
"to_delete_time": "2018-05-09T20:39:50.220162Z",
"notes": "",
}
]
}
This endpoint allows you to retrieve a list of Cases in the system that match your defined filtering criteria and to paginate through them. See Listing Object for the standard response structure of the list. Each object in the results array is a Case Object. Note: the following properties are not returned as part of the Case Object with this endpoint: documents, unassigned_pages, and submission_files.
Case Listing Endpoint
GET /api/v5/cases
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Cases. Note that because these datetime filters are inequality-based, you cannot include the same filter in your request more than once.
| Property | Type | Description |
| start_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filter for Cases whose collation information was ingested into the application on or after a specific date and time (greater than or equal to operator). |
| start_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filter for Cases whose collation information was ingested into the application before a specific date and time (less than operator). |
| update_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filter for Cases updated on or after a specific date and time (greater than or equal to operator). |
| update_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filter for Cases updated before a specific date and time (less than operator). |
| to_delete_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filter for Cases to be deleted on or after a specific date and time (greater than or equal to operator). |
| to_delete_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filter for Cases to be deleted before a specific date and time (less than operator). |
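The datetime filters above can be combined with the standard pagination of the Listing Object. The sketch below shows one way to list all Cases created during a single day and walk the `next` links until the last page; the base URL and filter values are illustrative, and it assumes `next` is a full URL (or null) as shown in the listing examples elsewhere in this document.

```python
from urllib.parse import urljoin

# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint_url = urljoin(base_url, 'api/v5/cases')

# Filter for Cases created during a single day (ISO-8601 datetimes).
params = {
    'start_time__gte': '2018-05-09T00:00:00Z',
    'start_time__lt': '2018-05-10T00:00:00Z',
}

def list_all_cases(session):
    # Follow the `next` links of the Listing Object until the last page.
    url, query, results = endpoint_url, params, []
    while url is not None:
        page = session.get(url, params=query).json()
        results.extend(page['results'])
        url, query = page['next'], None  # `next` already carries the query string
    return results

# Please import the get_oauth2_session() function from the
# Getting Started Guide - Authentication section, then:
#     with get_oauth2_session() as session:
#         cases = list_all_cases(session)
```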
Document Folders v40
Document Folder Object
Example Document Folder Object
{
"id": 4000000,
"start_time": "2018-05-09T20:39:50.220162Z",
"name": "Folder A",
"submission_id": 1000000,
"documents": [
3000001
],
"unassigned_pages": [],
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
}
}
The Document Folder object groups a set of Documents under a single logical entity.
Note: as of version 25, creating Document Folder objects only requires enabling ALLOW GROUPING DOCUMENTS INTO FOLDERS in the Page Sorting settings.
Object Properties
| Property | Type | Description |
| id | integer | Unique system-generated Document Folder ID that is exposed in the web application. |
| start_time | datetimetz (an ISO-8601 formatted datetime string) | Time the Document Folder was created. |
| name | string | The name of the Document Folder. |
| submission_id | integer | The system-generated identifier of the Submission that this Document Folder is a part of. |
| documents | array of integers | The system-generated identifiers of the Documents that this Document Folder contains. |
| unassigned_pages | array of integers | The system-generated identifiers of the Pages that this Document Folder contains. |
| metadata | object | Name/value pairs of metadata for the Document Folder. Values are lists of strings. |
Retrieving Document Folders
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/document_folders/'
saved_document_folder_id = str(4000000)
endpoint_url = urljoin(base_url, posixpath.join(endpoint, saved_document_folder_id))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"id": 4000000,
"start_time": "2018-05-09T20:39:50.220162Z",
"name": "Folder A",
"submission_id": 1000000,
"documents": [
3000001
],
"unassigned_pages": [],
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
}
}
If you know the ID of the Document Folder you're interested in (either by looking in the application itself or through the Listing Document Folders endpoint) then you can retrieve data about that Document Folder using this endpoint.
Document Folder Retrieval Endpoint
GET /api/v5/document_folders/document_folder_id
Listing Document Folders
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/document_folders'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"count": 2,
"next": null,
"previous": null,
"results": [
{
"id": 4000000,
"start_time": "2018-05-09T20:39:50.220162Z",
"name": "Folder A",
"submission_id": 1000000,
"documents": [
3000001
],
"unassigned_pages": [],
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
}
},
{
"id": 4000001,
"start_time": "2018-05-09T20:39:50.220162Z",
"name": "Supplementary",
"submission_id": 1000000,
"documents": [
3000001
],
"unassigned_pages": [],
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
}
}
]
}
This endpoint allows you to retrieve a list of Document Folders in the system that match your defined filtering criteria and to paginate through them.
See Listing Object for the standard response structure of the list.
Each object in the results array is a Document Folder Object.
Document Folder Listing Endpoint
GET /api/v5/document_folders
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Document Folders.
| Property | Type | Description |
| submission_id | integer | Filters for Document Folders that were part of a specific submission using the system-generated Submission ID. |
| start_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filter for Document Folders that were created on or after a specific date and time (greater than or equal to operator). |
| start_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filter for Document Folders that were created before a specific date and time (less than operator). |
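A filtered listing request passes these parameters as part of the query string. The sketch below lists only the folders that belong to one Submission and were created on or after a given time; the base URL, Submission ID, and timestamp are illustrative.

```python
from urllib.parse import urljoin, urlencode

# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint_url = urljoin(base_url, 'api/v5/document_folders')

# Restrict the listing to one Submission's folders, created on or after a given time.
params = {
    'submission_id': 1000000,
    'start_time__gte': '2018-05-01T00:00:00Z',
}
# The resulting request URL looks like:
request_url = '{}?{}'.format(endpoint_url, urlencode(params))

# Please import the get_oauth2_session() function from the
# Getting Started Guide - Authentication section, then:
#     with get_oauth2_session() as session:
#         folders = session.get(endpoint_url, params=params).json()['results']
```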
Documents
Document Object
Example Document Object
{
"id": 3000001,
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb9d",
"submission_id": 1000000,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"start_time": "2018-05-09T20:39:50.220162Z",
"goal_time": "2018-05-10T20:39:50.220162Z",
"complete_time": "2018-05-09T20:40:28.074541Z",
"priority": 1950375323,
"layout_uuid": "87ca4271-4ef8-4794-a0e9-97ebff78ac63",
"layout_name": "Social Security Name Change",
"layout_variation_uuid": "87ca4271-4ef8-4794-a0e9-97ebff78ac63",
"layout_variation_name": "Social Security Name Change",
"layout_tags": [],
"layout_version_uuid": "6a00eddc-9641-4093-a82d-55c1ac8d8908",
"layout_version_name": "Social Security Name Change",
"document_folders": [
4000000,
4000001
],
"supervision_url": null,
"type": "structured",
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
},
"download_url": "/api/v5/documents/3000001/download",
"decisions": [],
"document_tables": [
{
"id": 7000001,
"table_number": 1,
"name": "Table Doc 2",
"layout_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbba",
"layout_variation_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbba",
"parent_table_id": null,
"parent_layout_table_uuid": null,
"parent_layout_variation_table_uuid": null,
"rows": [
{
"id": 8000001,
"row_number": 1,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000005,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000001,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 1",
"normalized": "CELL ROW 1",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000002,
"row_number": 2,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000006,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000002,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 2",
"normalized": "CELL ROW 2",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000003,
"row_number": 3,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000007,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000003,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 3",
"normalized": "CELL ROW 3",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000004,
"row_number": 4,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000008,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000004,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 4",
"normalized": "CELL ROW 4",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000005,
"row_number": 5,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000009,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000005,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 5",
"normalized": "CELL ROW 5",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
}
]
}
],
"document_fields": [
{
"id": 6000006,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "SIGNATURE",
"output_name": "signature",
"field_definition_attributes": {
"required": false,
"data_type": "Signature",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "False",
"normalized": "FALSE",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc?start_x=0.6653863674488735&start_y=0.5738077769469766&end_x=0.8320843991007505&end_y=0.9558655621039528",
"page_id": 2000003,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000003,
"location_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc?start_x=0.6653863674488735&start_y=0.5738077769469766&end_x=0.8320843991007505&end_y=0.9558655621039528"
}
],
"groups": [],
"decisions": []
}
],
"derived_document_fields": [],
"pages": [
{
"id": 2000003,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 5,
"submission_page_number": 5,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 1,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/b14ba06a-1b8c-457c-bbb3-52790a69bf81",
"corrected_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc",
"rejected": false,
"decisions": [],
"identifiers": []
}
]
}
The Document object is a set of pages that have matched to a predefined layout.
Object Properties
| Property | Type | Description |
| id | integer | Unique system-generated Document ID that is exposed in the web application. |
| uuid | string | Unique system-generated Document UUID that is exposed in the web application. |
| submission_id | integer | Unique system-generated id for the Submission that the Document was a part of. |
| state | string | Current state of the Document. Potential values are processing, supervision, or complete. See States for more detail. |
| substate | string | Provides additional granularity for the supervision state. The only possible value is manual_transcription. If the Document is not in the supervision state, then this value returns null. See Substates for more detail. |
| download_url v40 | string | URL for retrieving the ordered Document pages as a PDF. The document file is only available if document rendering is enabled at the flow level. Otherwise, this field returns null. |
| exceptions | array of strings | Provides a list of all exceptions in this Document. If the Document has no exceptions, then this value returns an empty array. See Exceptions for a list of possible values. |
| start_time | datetimetz (an ISO-8601 formatted datetime string) | Time the Submission that this Document is a part of was submitted to the API. |
| goal_time v28 | datetimetz (an ISO-8601 formatted datetime string) | The time by which the system will try to ensure the Document is processed. Documents inherit this value from their Submission. |
| complete_time | datetimetz (an ISO-8601 formatted datetime string) | Time the Document entered the complete state, including all Supervision tasks. |
| priority v28 | integer | The current priority level of the Document for processing. The value is based on the processing goal set on the Submission; higher values indicate higher priority. If the Document has matched a layout with a goal-processing rule that sets an earlier goal than the one previously set on the Submission, the priority will reflect that with a larger value. Otherwise, the priority value will be unchanged. Manually moving the processing goal earlier or later will increase or decrease the value of priority, respectively. |
| layout_uuid v36 | string | The unique identifier of the layout that this Document was matched to. |
| layout_name v36 | string | The name of the layout that this Document was matched to. |
| layout_variation_uuid v36 | string | The unique identifier of the layout variation that this Document was matched to. |
| layout_variation_name v36 | string | The name of the layout variation that this Document was matched to. |
| layout_tags | array of strings | Returns user-defined tags for this Document's layout. Tags are defined as part of the layout creation and management process in the application. Returns an empty array if no tags have been defined for the layout. |
| layout_version_uuid | string | The unique identifier of the specific locked layout version that this Document was matched to. |
| layout_version_name | string | The user-defined name for the specific locked layout version that this Document was matched to. |
| document_folders v40 | array of integers | Array of the system-generated IDs of the Document Folders that include this Document. |
| supervision_url | string | URL for outstanding Supervision tasks for the Document. Value is only returned when the Document state is supervision and otherwise returns null. |
| type | string | The type of layout this document was matched against. Possible values are structured and semi_structured. |
| metadata | object | Name/value pairs of metadata for the document. Values are lists of strings. |
| pages | array of JSON objects | Array of Page objects identified as part of this Document. This array should never be empty. |
| document_fields v27 | array of JSON objects | Array of Field objects defined as part of any layout page in this Document. This value returns an empty array if no Page has been matched to a layout page or if no fields have been defined on any matched layout page. |
| derived_document_fields v38.2 | array of strings | Always an empty array. |
| document_tables v27 | array of JSON objects | Array of Table objects in this Document. |
| decisions v32.0.1 | array of JSON objects | Array of Decision objects describing any decisions made for the given Document. An empty array is returned if there are no Decisions. |
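The nested document_tables structure (tables containing rows containing cells) can be flattened for downstream processing. The sketch below converts one table into a list of per-row dictionaries keyed by each cell's output_name; the helper name and sample data are illustrative, following the shapes shown in the example Document object above.

```python
def table_to_rows(document_table):
    # Build one {output_name: normalized} dict per row, ordered by row_number.
    rows = []
    for row in sorted(document_table['rows'], key=lambda r: r['row_number']):
        rows.append({cell['output_name']: cell['normalized'] for cell in row['cells']})
    return rows

# A minimal table in the shape shown in the example response (rows deliberately
# out of order to show the sort):
sample_table = {
    'rows': [
        {'row_number': 2,
         'cells': [{'output_name': 'Col Output Name Fixed', 'normalized': 'CELL ROW 2'}]},
        {'row_number': 1,
         'cells': [{'output_name': 'Col Output Name Fixed', 'normalized': 'CELL ROW 1'}]},
    ]
}
print(table_to_rows(sample_table))
# [{'Col Output Name Fixed': 'CELL ROW 1'}, {'Col Output Name Fixed': 'CELL ROW 2'}]
```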
Retrieving Documents
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/documents/'
saved_document_id = str(3000001)
endpoint_url = urljoin(base_url, posixpath.join(endpoint, saved_document_id))
params = {'flat': False}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url, params=params)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"id": 3000001,
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb9d",
"submission_id": 1000000,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"start_time": "2018-05-09T20:39:50.220162Z",
"goal_time": "2018-05-10T20:39:50.220162Z",
"complete_time": "2018-05-09T20:40:28.074541Z",
"priority": 1950375323,
"layout_uuid": "87ca4271-4ef8-4794-a0e9-97ebff78ac63",
"layout_name": "Social Security Name Change",
"layout_variation_uuid": "87ca4271-4ef8-4794-a0e9-97ebff78ac63",
"layout_variation_name": "Social Security Name Change",
"layout_tags": [],
"layout_version_uuid": "6a00eddc-9641-4093-a82d-55c1ac8d8908",
"layout_version_name": "Social Security Name Change",
"document_folders": [
4000000,
4000001
],
"supervision_url": null,
"type": "structured",
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
},
"download_url": "/api/v5/documents/3000001/download",
"decisions": [],
"document_tables": [
{
"id": 7000001,
"table_number": 1,
"name": "Table Doc 2",
"layout_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbba",
"layout_variation_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbba",
"parent_table_id": null,
"parent_layout_table_uuid": null,
"parent_layout_variation_table_uuid": null,
"rows": [
{
"id": 8000001,
"row_number": 1,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000005,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000001,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 1",
"normalized": "CELL ROW 1",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000002,
"row_number": 2,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000006,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000002,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 2",
"normalized": "CELL ROW 2",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000003,
"row_number": 3,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000007,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000003,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 3",
"normalized": "CELL ROW 3",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000004,
"row_number": 4,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000008,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000004,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 4",
"normalized": "CELL ROW 4",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000005,
"row_number": 5,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000009,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000005,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 5",
"normalized": "CELL ROW 5",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
}
]
}
],
"document_fields": [
{
"id": 6000006,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "SIGNATURE",
"output_name": "signature",
"field_definition_attributes": {
"required": false,
"data_type": "Signature",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "False",
"normalized": "FALSE",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc?start_x=0.6653863674488735&start_y=0.5738077769469766&end_x=0.8320843991007505&end_y=0.9558655621039528",
"page_id": 2000003,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000003,
"location_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc?start_x=0.6653863674488735&start_y=0.5738077769469766&end_x=0.8320843991007505&end_y=0.9558655621039528"
}
],
"groups": [],
"decisions": []
}
],
"derived_document_fields": [],
"pages": [
{
"id": 2000003,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 5,
"submission_page_number": 5,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 1,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/b14ba06a-1b8c-457c-bbb3-52790a69bf81",
"corrected_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc",
"rejected": false,
"decisions": [],
"identifiers": []
}
]
}
If you know the ID of the Document you're interested in (either by looking in the application itself or through the Listing Documents endpoint) then you can retrieve data about that Document using this endpoint. By default, this endpoint will not return the pages, document_fields, and derived_document_fields properties of the Document object.
Document Retrieval Endpoint
GET /api/v5/documents/document_id
Request Parameters
| Property | Type | Description |
| flat | boolean | Optional parameter that prevents returning the pages, document_fields, and derived_document_fields properties of the Document object. Defaults to true unless specifically passed as false. |
Listing Documents
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/documents'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"count": 1,
"next": null,
"previous": null,
"results": [
{
"id": 3000001,
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb9d",
"submission_id": 1000000,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"start_time": "2018-05-09T20:39:50.220162Z",
"goal_time": "2018-05-10T20:39:50.220162Z",
"complete_time": "2018-05-09T20:40:28.074541Z",
"priority": 1950375323,
"layout_uuid": "87ca4271-4ef8-4794-a0e9-97ebff78ac63",
"layout_name": "Social Security Name Change",
"layout_variation_uuid": "87ca4271-4ef8-4794-a0e9-97ebff78ac63",
"layout_variation_name": "Social Security Name Change",
"layout_tags": [],
"layout_version_uuid": "6a00eddc-9641-4093-a82d-55c1ac8d8908",
"layout_version_name": "Social Security Name Change",
"document_folders": [
4000000,
4000001
],
"supervision_url": null,
"type": "structured",
"metadata": {
"field1": [
"value1"
],
"field2": [
"value2"
]
},
"download_url": "/api/v5/documents/3000001/download"
}
]
}
This endpoint allows you to retrieve a list of Documents in the system that match your defined filtering criteria and to paginate through them. See Listing Object for the standard response structure of the list. Each object in the results array is a Document Object. Note: the pages array property of the Documents object is not returned by this endpoint.
Document Listing Endpoint
GET /api/v5/documents
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Documents. If you repeat a query parameter multiple times in a request, the application will apply OR logic, i.e., it will list Documents where that attribute matches any of the values you provided. For example, if you include layout_tag=A and layout_tag=B, you will get Documents that matched layouts with either tag A or tag B. Note that because the datetime filters are inequality-based, you cannot include the same filter in your request more than once.
| Property | Type | Description |
| submission_id | integer | Filters for Documents that were part of a specific submission using the system-generated Submission ID. |
| submission_external_id | string | Filters for Documents that were part of a specific submission using the user-defined external_id for the Submission. |
| document_folder_id v40 | integer | Filters for Documents that were part of a specific document folder using the system-generated Document Folder ID. |
| state | string | Filters for Documents that are in a specific state. See States for a list of possible values. |
| substate | string | Filters for Documents that are in a specific substate. See Substates for a list of possible values. |
| exception | string | Filters for Documents that have a specific exception. See Exceptions for a list of possible values. |
| layout_tag | string | Filters for Documents that matched to a layout with a specified layout tag. |
| priority__gte v28 | integer | Filters for Documents that have a priority greater than or equal to the specified value. This filter is deprecated and may yield incorrect results. |
| priority__lt v28 | integer | Filters for Documents that have a priority less than the specified value. This filter is deprecated and may yield incorrect results. |
| start_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filter for Documents that were ingested into the application on or after a specific date and time (greater than or equal to operator). |
| start_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filter for Documents that were ingested into the application before a specific date and time (less than operator). |
| complete_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filter for Documents that finished processing on or after a specific date and time (greater than or equal to operator). |
| complete_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filter for Documents that finished processing before a specific date and time (less than operator). |
| goal_time__gte v28 | datetimetz (an ISO-8601 formatted datetime string) | Filter for Documents that were set to be processed on or after a specific date and time (greater than or equal to operator). |
| goal_time__lt v28 | datetimetz (an ISO-8601 formatted datetime string) | Filter for Documents that were set to be processed before a specific date and time (less than operator). |
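The OR logic for repeated query parameters can be expressed by passing a list of values for one parameter. The sketch below shows how the resulting query string looks when layout_tag is repeated (the tag names, state, and timestamp are illustrative); the `requests` library serializes list-valued entries in a `params` dict the same way.

```python
from urllib.parse import urlencode

# Repeating layout_tag applies OR logic: Documents whose layout has tag A or tag B.
params = {
    'state': 'complete',
    'layout_tag': ['A', 'B'],  # repeated query parameter
    'complete_time__gte': '2018-05-09T00:00:00Z',
}
# doseq=True expands the list into repeated layout_tag entries.
query_string = urlencode(params, doseq=True)
print(query_string)
# state=complete&layout_tag=A&layout_tag=B&complete_time__gte=2018-05-09T00%3A00%3A00Z
```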
Pages
Page Object
Example Page Object
{
"id": 2000003,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 5,
"submission_page_number": 5,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 1,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/b14ba06a-1b8c-457c-bbb3-52790a69bf81",
"corrected_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc",
"rejected": false,
"decisions": [],
"identifiers": []
}
The Page object is the representation of a single image that was submitted into the system.
v27 As of v27, fields and formula_fields are no longer properties of Page. Instead, they have been moved to documents and renamed to document_fields and derived_document_fields respectively.
Object properties
| Property | Type | Description |
| id | integer | Unique system-generated Page ID that is exposed in the web application. |
| state | string | Current state of the Page. See States for a list of possible values. |
| substate | string | Provides additional granularity for the supervision state. See Substates for a list of possible values. |
| exceptions | array of strings | Provides a list of all exceptions in this Page. If the Page has no exceptions, then this value returns an empty array. See Exceptions for a list of possible values. |
| page_type | string | Indicates what type the page has been classified into. See Page Types for a list of possible values. |
| file_page_number | integer | For images that come from multi-page files (PDF/TIFF), this value indicates the image's original page number in that multi-page file (e.g., page 2 from a 5-page PDF). If the original image was a single-page file format (JPEG/PNG), then this value returns 1. |
| submission_page_number | integer | Indicates page order for the submission. For example, in a 3-page submission, this value will be 1, 2, and 3 for the first, second, and third page in the submission, respectively. Distinct from file_page_number because a submission can have multiple files in it and this attribute tracks page order for the submission as a whole. |
| layout_page_number v36 | integer | Page number in the Document layout for this page. For example, if the page matched to the first page of a layout, this value would be 1. Returns null if the page has not matched to a layout page. |
| layout_variation_page_number v36 | integer | Page number in the layout variation for this page. For example, if the page matched to the first page of a layout variation, this value would be 1. Returns null if the page has not matched to a layout page. |
| document_page_number v28.1 | integer | Page number in the Document. For example, if the user was looking at page 3 of a document, this value would be 3. Returns null for pages not matched to any layout. |
| submitted_filename | string | Returns the name of the file that was submitted. If this page was from a single-page image file (JPEG/PNG), then it is the name of the specific file from this page. If this page is from a multi-page image file (PDF/TIFF), then it is the name of the multi-page file. |
| image_url | string | Returns the relative URL where the image for this page can be retrieved. Image is shown without any transformations for processing. Returns null if the document has had transcription data deleted according to the defined data retention policy. |
| corrected_image_url | string | Returns the relative URL where the normalized image for this page can be retrieved. This is the input image, transformed upright - as used for extracting field data. It may match image_url if no transformations were needed. Returns null if the document has had transcription data deleted according to the defined data retention policy. |
| rejected v28 | boolean | If true, indicates that this page is part of a rejected document. |
| identifiers | array of JSON objects | Array of JSON objects describing any layout identifiers transcribed for the given Page. An empty array is returned if there are no layout identifiers. |
| decisions v32.0.1 | array of JSON objects | Array of Decision objects describing any decisions made for the given Page. An empty array is returned if there are no Decisions. |
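Because image_url and corrected_image_url are relative URLs, they must be joined with the instance's base URL before being fetched. A minimal sketch, assuming the base URL shown in the other examples (the output filename is an arbitrary choice):

```python
from urllib.parse import urljoin

# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'

# A Page object as returned by the API (abridged); image_url is null if
# transcription data was deleted under a data retention policy.
page = {'image_url': '/api/v5/image/b14ba06a-1b8c-457c-bbb3-52790a69bf81'}

image_url = None
if page['image_url'] is not None:
    # The API returns a relative URL, so join it with the instance's base URL.
    image_url = urljoin(base_url, page['image_url'])

# Please import the get_oauth2_session() function from the
# Getting Started Guide - Authentication section, then:
#     with get_oauth2_session() as session:
#         with open('page_image.bin', 'wb') as f:
#             f.write(session.get(image_url).content)
```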
Page Types
The table below provides a description of the standard possible values for a page type.
| Page Type | Description |
| unassigned_page | Page has not yet been processed or identified and therefore has not been assigned a layout. |
| blank_page | Page is mostly blank and does not have meaningful information on it. |
| other_page | Page has gone through Supervision Manual Categorization and identified as "Other". |
| additional_form_page | Page looks like a structured layout but has no corresponding layout in the system. |
| document_page | Page has been matched to a layout and is now part of a Document. |
| unknown_page | Page layout confidence is below Categorization Threshold. |
Retrieving Pages
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/pages/'
saved_page_id = str(2000003)
endpoint_url = urljoin(base_url, posixpath.join(endpoint, saved_page_id))
params = {'flat': False}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"id": 2000003,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 5,
"submission_page_number": 5,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 1,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/b14ba06a-1b8c-457c-bbb3-52790a69bf81",
"corrected_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc",
"rejected": false,
"decisions": [],
"identifiers": []
}
If you know the ID of the Page you're interested in (either by looking in the application itself or through the Listing Pages endpoint), then you can retrieve data about that Page using this endpoint. By default, this endpoint will not return the identifiers property of the Page object.
Page Retrieval Endpoint
GET /api/v5/pages/page_id
Request Parameters
| Property | Type | Description |
| flat | boolean | Optional parameter that prevents returning the identifiers property of the Page object. Defaults to true unless explicitly passed as false. |
Listing Pages
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/pages'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"count": 1,
"next": null,
"previous": null,
"results": [
{
"id": 2000003,
"state": "complete",
"substate": null,
"exceptions": [
"required_field_missing"
],
"page_type": "document_page",
"file_page_number": 5,
"submission_page_number": 5,
"layout_page_number": 1,
"layout_variation_page_number": 1,
"document_page_number": 1,
"submitted_filename": "test_submission.pdf",
"image_url": "/api/v5/image/b14ba06a-1b8c-457c-bbb3-52790a69bf81",
"corrected_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc",
"rejected": false
}
]
}
This endpoint allows you to retrieve a list of Pages in the system that match your defined filtering criteria and to paginate through them. See Listing Object for the standard response structure of the list. Each object in the results array is a Page Object. Note: the identifiers property of the Page object is not returned by this endpoint.
Page Listing Endpoint
GET /api/v5/pages
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Pages. If you repeat a query parameter multiple times in a request, the application will apply OR logic, i.e., it will list Pages where that attribute matches any of the values you provided.
| Property | Type | Description |
| submission_id | integer | Filters for Pages that were part of a specific submission using the system-generated Submission ID. |
| submission_external_id | string | Filters for Pages that were part of a specific submission using the user-defined external_id for the Submission. |
| document_id | integer | Filters for Pages that were part of a specific Document using the system-generated Document ID. |
| state | string | Filters for Pages that are in a specific state. See States for a list of possible values. |
| substate | string | Filters for Pages that are in a specific substate. See Substates for a list of possible values. |
| exception | string | Filters for Pages that have a specific exception. See Exceptions for a list of possible values. |
| page_type | string | Filters for Pages of a certain type. See Page Types for a list of possible values. |
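The OR logic for repeated query parameters described above can be produced by passing a list of values for a parameter. A minimal sketch of how such a query string is encoded (using only the standard library; an actual request would use the OAuth2 session shown in the examples above):

```python
from urllib.parse import urlencode, urljoin

# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint_url = urljoin(base_url, 'api/v5/pages')

# A list value repeats the parameter, which the API interprets with OR logic:
# Pages whose page_type is document_page OR unknown_page, in the complete state.
params = {'page_type': ['document_page', 'unknown_page'], 'state': 'complete'}
query = urlencode(params, doseq=True)
full_url = f'{endpoint_url}?{query}'
# full_url:
# https://on-premise-server.yourcompany.com/api/v5/pages?page_type=document_page&page_type=unknown_page&state=complete
```

The requests library applies the same encoding when a list is passed in `params`, so the session examples in this section accept list values directly.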
Retrieving original-resolution images
Example Request
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/page_image_original_dpi/'
saved_page_id = str(2000003)
endpoint_url = urljoin(base_url, posixpath.join(endpoint, saved_page_id))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, stream=True)
    with open('image.png', 'wb') as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)
Example Response
The response is an image file, which is saved to the requesting machine.
When a page you submit to Hyperscience is matched to a layout, the system creates a transformed version of it to adjust its rotation and correct distortions.
The image returned by this endpoint depends on what kind of layout the page is matched to, if any.
- If the page is matched to a Structured layout, the system changes the resolution of the transformed image to match the resolution of the layout's image. If the layout’s image has a lower resolution than the original image, this adjustment may cause information from the original image to be lost. This endpoint returns the transformed image in the submission's original resolution, which may help to clarify the content of its fields.
- If the page is matched to a Semi-structured or Additional layout, no layout-based adjustments are made to the transformed image. Therefore, the transformed image has the same resolution as the original image, and no data is lost. This endpoint returns the transformed image.
- If the page isn't matched to a layout, no transformation occurs. This endpoint returns the original image.
Image Retrieval Endpoint
GET /api/v5/page_image_original_dpi/id
Response Errors
| Error Code | Error Message | Description |
| 404 | Page with id={page_id} not found | No Page was found with the identifier used in the request. |
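The 404 case above can be handled explicitly when downloading images. A minimal sketch, assuming the `get_oauth2_session()` session from the Getting Started Guide; `save_page_image` is a hypothetical helper name, not part of the API:

```python
import posixpath
from urllib.parse import urljoin

def save_page_image(session, base_url, page_id, dest_path):
    """Download the original-resolution image for a Page to dest_path.

    Returns True on success and False when no Page exists with the given
    ID (the 404 case in Response Errors above). Any other HTTP error is
    raised via raise_for_status().
    """
    endpoint = 'api/v5/page_image_original_dpi/'
    endpoint_url = urljoin(base_url, posixpath.join(endpoint, str(page_id)))
    r = session.get(endpoint_url, stream=True)
    if r.status_code == 404:
        return False
    r.raise_for_status()
    with open(dest_path, 'wb') as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)
    return True
```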
Fields
Field Object
Example Field Object
{
"id": 6000006,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "SIGNATURE",
"output_name": "signature",
"field_definition_attributes": {
"required": false,
"data_type": "Signature",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "False",
"normalized": "FALSE",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc?start_x=0.6653863674488735&start_y=0.5738077769469766&end_x=0.8320843991007505&end_y=0.9558655621039528",
"page_id": 2000003,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000003,
"location_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc?start_x=0.6653863674488735&start_y=0.5738077769469766&end_x=0.8320843991007505&end_y=0.9558655621039528"
}
],
"groups": [],
"decisions": []
}
The Field object represents the transcription of a field that has been defined for extraction as part of a layout.
Object properties
| Property | Type | Description |
| id | integer | Unique system-generated identifier for the processed field. |
| state | string | Current state of the Field. Potential values are processing, supervision, or complete. See States for more detail. |
| substate | string | Provides additional granularity for the supervision state. Only possible value is manual_transcription. If the Field is not in the supervision state, then this value returns null. See Substates for more detail. |
| exceptions | array of strings | Provides a list of all exceptions in this Field. If the Field has no exceptions, then this value returns an empty array. See Exceptions for a list of possible values. |
| name | string | Name defined on the layout as "Field Name"; shown in all parts of the web application when the field comes up. Intended to be used as a display-friendly name. |
| output_name | string | Name defined on the layout as "Output Name"; only shown in the layout editor and returned in the API. This property is optional and if not defined in the layout, returns null. Intended to be used as a code-friendly name. |
| field_definition_attributes | JSON object | Includes metadata for the field as defined by the layout and returns 6 attributes: required, data_type, multiline, routing, duplicate, and supervision_override (details below). |
| transcription | JSON object | Includes data about the transcription of the field and returns 5 attributes: raw, normalized, source, data_deleted, and user_transcribed (details below). |
| field_image_url | string | Returns the relative URL where the image for the first location of the field can be retrieved. Returns null if the document has had transcription data deleted according to the defined data retention policy. |
| page_id v27 | integer | Unique system-generated Page ID to which the first location of this field belongs. Same as id in Page Object. |
| occurrence_index v32 | integer | A count of the times the field has been annotated within a single document. The first annotation has an index of 0. |
| locations v33 | array of JSON objects | Array of JSON objects representing the locations of the field. The object contains 3 attributes: position, page_id, and location_image_url (details below). |
| decisions v32.0.1 | array of JSON objects | Array of Decision objects describing any decisions made for the given Field. An empty array is returned if there are no Decisions. |
| groups v36 | array of JSON objects | Array of JSON objects holding the name and UUID of all dictionary groups to which the field belongs. This only applies to fields that are managed by the field dictionary. |
Field_definition_attributes object properties
| Property | Type | Description |
| required | boolean | Indicates whether this field was defined as required in the layout process. |
| data_type | string | Indicates the data type assigned to this Field as part of the layout. See Data Types for more detail. |
| multiline | boolean | If true, indicates that this field is expected to have multiple lines of text. |
| routing v27.1 | boolean | If true, indicates that this field is marked as a routing field in the template. |
| duplicate v28.1 | boolean | If true, indicates that only the first recognized instance of this field in the document will be extracted. |
| supervision_override | string | Indicates that the field is treated in a special way with regard to supervision. Possible values are consensus_required, auto_transcribe, and always_supervise. See below for details. |
If supervision_override is set to consensus_required, the field was defined as requiring multiple entries to agree on the transcription before it is complete.
If it is set to auto_transcribe, the field will never go to supervision, regardless of how likely the transcription is to be correct. If the field is particularly difficult to read, the transcription will be returned as null, and the field will be marked with the illegible_field exception.
If it is set to always_supervise, the field will always go to supervision, regardless of the machine's confidence, but will only require the response of a single keyer.
Transcription object properties
| Property | Type | Description |
| raw | string | Value of the field that was transcribed as the machine saw it or the data keyer typed it. Returns null if the field was marked illegible. |
| normalized | string | The normalized value of the field. The changes applied to the transcription to normalize it vary by Data Type. |
| source | string | Indicates whether the field was automatically transcribed, populated in a Custom Code Block, or entered by a human during a Transcription Supervision or Flexible Extraction Supervision task. Possible values are machine_transcription, custom (v31+), manual_transcription, or flexible_extraction (v31+), respectively. |
| data_deleted | boolean | If true, indicates that this transcription was deleted according to the system's data retention settings. In that case, both raw and normalized return null. |
| user_transcribed | string | Username of the user who transcribed the field in Supervision. Returns null if source is machine_transcription. |
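The transcription attributes above combine in a few ways (data_deleted forces both values to null, a normalization_error leaves only the raw value, and an illegible field has a null raw value). A minimal sketch of interpreting them; `best_transcription` is our own hypothetical helper, not part of the API:

```python
def best_transcription(field):
    """Pick the most useful transcription value from a Field object.

    Prefers the normalized value, falls back to the raw value when
    normalization failed, and returns None when the data has been
    deleted under the retention policy (raw is also None if the field
    was marked illegible).
    """
    t = field['transcription']
    if t['data_deleted']:
        return None  # raw and normalized are both null
    if t['normalized'] is not None:
        return t['normalized']
    return t['raw']

# Example from the normalization_error description: a date that cannot
# be normalized keeps its raw transcription.
field = {
    'transcription': {
        'raw': 'April 32, 2019',
        'normalized': None,
        'source': 'machine_transcription',
        'data_deleted': False,
        'user_transcribed': None,
    }
}
```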
Location object properties
| Property | Type | Description |
| position | integer | Index of the location for the field. |
| page_id | integer | Unique system-generated Page ID to which this field belongs. Same as id in Page Object. |
| location_image_url | string | Returns the relative URL where the image for this location can be retrieved. Returns null if the document has had transcription data deleted according to the defined data retention policy. |
Retrieving Fields
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/fields/'
saved_field_id = str(6000006)
endpoint_url = urljoin(base_url, posixpath.join(endpoint, saved_field_id))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"id": 6000006,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "SIGNATURE",
"output_name": "signature",
"field_definition_attributes": {
"required": false,
"data_type": "Signature",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "False",
"normalized": "FALSE",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc?start_x=0.6653863674488735&start_y=0.5738077769469766&end_x=0.8320843991007505&end_y=0.9558655621039528",
"page_id": 2000003,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000003,
"location_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc?start_x=0.6653863674488735&start_y=0.5738077769469766&end_x=0.8320843991007505&end_y=0.9558655621039528"
}
],
"groups": [],
"decisions": []
}
If you know the ID of the Field you're interested in, then you can retrieve data about that Field using this endpoint.
Field Retrieval Endpoint
GET /api/v5/fields/field_id
Listing Fields
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/fields'
endpoint_url = urljoin(base_url, endpoint)
params = {'state': 'complete', 'document_id': 3000001}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"count": 1,
"next": null,
"previous": null,
"results": [
{
"id": 6000006,
"state": "complete",
"substate": null,
"exceptions": [],
"name": "SIGNATURE",
"output_name": "signature",
"field_definition_attributes": {
"required": false,
"data_type": "Signature",
"multiline": false,
"routing": false,
"duplicate": false,
"supervision_override": null
},
"transcription": {
"raw": "False",
"normalized": "FALSE",
"source": "machine_transcription",
"data_deleted": false,
"user_transcribed": null
},
"field_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc?start_x=0.6653863674488735&start_y=0.5738077769469766&end_x=0.8320843991007505&end_y=0.9558655621039528",
"page_id": 2000003,
"occurrence_index": 0,
"locations": [
{
"position": 1,
"page_id": 2000003,
"location_image_url": "/api/v5/image/39b69927-f8fd-4928-9113-ec3cdb96e9cc?start_x=0.6653863674488735&start_y=0.5738077769469766&end_x=0.8320843991007505&end_y=0.9558655621039528"
}
],
"groups": [],
"decisions": []
}
]
}
This endpoint allows you to retrieve a list of Fields in the system that match your defined filtering criteria and to paginate through them. See Listing Object for the standard response structure of the list. Each object in the results array is a Field Object.
Field Listing Endpoint
GET /api/v5/fields
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Fields. If you repeat a query parameter multiple times in a request, the application will apply OR logic, i.e., it will list Fields where that attribute matches any of the values you provided.
| Property | Type | Description |
| document_id | integer | Filters for Fields that were part of a specific Document using the system-generated Document ID. |
| page_id | integer | Filters for Fields that were part of a specific Page using the system-generated Page ID. |
| state | string | Filters for Fields that are in a specific state. See States for a list of possible values. |
| substate | string | Filters for Fields that are in a specific substate. See Substates for a list of possible values. |
| exception | string | Filters for Fields that have a specific exception. See Exceptions for a list of possible values. |
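Paginating through the listing response means following its next link until it is null. A minimal sketch, assuming the `get_oauth2_session()` session from the Getting Started Guide and that next is a full URL; `iter_fields` is our own hypothetical helper:

```python
from urllib.parse import urljoin

def iter_fields(session, base_url, **filters):
    """Yield every Field matching the given filters, following the
    'next' links in the paginated listing response."""
    url = urljoin(base_url, 'api/v5/fields')
    params = filters
    while url is not None:
        data = session.get(url, params=params).json()
        yield from data['results']
        url = data['next']  # URL of the next page of results, or null
        params = None       # the 'next' URL already carries the query string
```

Usage would look like `for field in iter_fields(session, base_url, state='complete', document_id=3000001): ...`.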
Status Descriptions
States
States provide insight into the status of an object. Because objects are made up of other objects (e.g., Documents are made up of Pages), an object's state is defined by the states of its components. Because an object's components can have different states, the states have an order of precedence:
- If any component is in the processing state, the parent object will be in the processing state.
- If any component is in the supervision state and no components are in the processing state, the parent object will be in the supervision state.
- Only when all components have reached a terminal state (either complete or failed) will the parent object reach the complete state.
For example, if a 10-page Submission has 8 pages in complete, 1 page in processing, and 1 page in supervision, the Submission will be in the processing state. If a 20-field Page has 18 fields in complete and 2 in supervision, the Page will be in the supervision state.
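The precedence rules above can be sketched as a small helper (hypothetical, for illustration only; the application computes this internally):

```python
def parent_state(component_states):
    """Derive a parent object's state from its components' states,
    following the order of precedence described above."""
    if 'processing' in component_states:
        return 'processing'
    if 'supervision' in component_states:
        return 'supervision'
    return 'complete'  # every component is terminal (complete or failed)

# The 10-page Submission example: 8 complete, 1 processing, 1 supervision.
submission_pages = ['complete'] * 8 + ['processing', 'supervision']
```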
| State | Relevant Objects | Description |
| processing | Submissions, Documents, Pages, Fields | Indicates that the object is waiting to be processed by the machine. |
| supervision | Submissions, Documents, Pages, Fields | Indicates that the object is waiting for supervision to be completed. |
| complete | Submissions, Documents, Pages, Fields | Terminal state for an object that indicates all work possible has been completed. |
| failed | Pages | Terminal state that indicates there was a failure processing the page that cannot be rectified within the Hyperscience application. Typically involves unsupported file types or password-protected files being submitted. |
Substates
Substates provide additional granularity for the supervision state. As with states, an object's substate is defined by the substates of its components.
Note that Documents and Fields can never be in the manual_document_organization substate because they are created as objects only after a layout has been matched. If a Page is in the manual_document_organization substate, then it cannot have any fields.
| Substate | Relevant Objects | Description |
| manual_transcription | Submissions, Documents, Pages, Fields | Indicates the object is waiting for manual transcription through Supervision. |
| manual_field_identification | Submissions, Documents, Pages, Fields | Indicates the object is waiting for manual field identification through Supervision. |
| manual_table_identification | Submissions, Documents, Pages, Fields | Indicates the object is waiting for manual table identification through Supervision. |
| manual_routing_transcription v27.1 | Submissions | Indicates that the object is waiting for manual transcription through Supervision for routing fields. |
| machine_routing_decision v27.1 | Submissions | Indicates that the object is waiting for an apply_restrictions callback from an external source. |
| manual_document_organization | Submissions, Pages | Indicates the object is waiting for manual document organization through Supervision. |
| flexible_extraction v31 | Submissions, Documents, Pages, Fields | Indicates the object is waiting for flexible extraction through Supervision. |
| custom_supervision v32.0.1 | Submissions, Documents, Pages, Fields | Indicates the object is waiting for Custom Supervision. |
Exceptions
Exceptions indicate that something during the processing of the object did not go as expected. Note that all field-level exceptions are returned as part of every parent object for that Field (i.e., Page, Document, and Submission).
| Exception | Exception type | Description |
| required_field_missing | Field-level and above | Indicates that a field which was defined as required in the layout is blank, illegible, or missing (the page that it's on is missing from the identified Document). |
| illegible_field | Field-level and above | Indicates that a field has been marked as illegible. A Field can be marked illegible during Supervision or if a Field was processed using the machine_only flag and the transcription fell below the Minimum Acceptance Threshold. |
| consensus_field_autotranscribed | Field-level and above | Indicates that a field was marked as requiring Supervision Consensus in the Layout Editor, but was processed as part of a submission that had a machine_only flag and thus did not go through Consensus handling. |
| validation_override | Field-level and above | Indicates that during Supervision transcription, the data clerk used the override feature to enter characters in this field that are not permitted by default. |
| normalization_error | Field-level and above | Indicates that the system was unable to normalize the field. For example, if the raw transcription is "April 32, 2019", this exception would appear and the normalized transcription attribute would return null. |
| supervision_always_field_autotranscribed | Field-level and above | Indicates that a field was marked as requiring Supervision Always in the Layout Editor, but was processed as part of a submission that had a machine_only flag and thus did not go through Supervision transcription handling. |
| supervision_always_field_autoidentified | Field-level and above | Indicates that a field was marked as requiring Identification Supervision in the Layout Editor, but was processed as part of a submission that had a machine_only flag and thus did not go through Supervision Field-ID handling. |
| supervision_required_but_disabled v28 | Field-level and above | Indicates that a field's machine-transcription confidence was between the minimum illegibility and confidence thresholds. The system tried to send the field to supervision, but supervision was disabled. The machine's transcription is still provided in the response. |
| character_limit_reached v28 | Field-level and above | Indicates that the transcribed field was longer than 2,000 characters, so it was truncated at the 2,000-character limit. |
| id_supervision_required_but_disabled v33 | Field-/Cell-level and above | Indicates that a field's or cell's machine-identification confidence was below the threshold. The system tried to send the field to identification supervision, but identification supervision was disabled. The machine's identification is still used. |
| unsupported_filetype | Page-level and above | Indicates that the processing for a file failed because it was an unsupported or corrupted file. |
| password_protected_file | Page-level and above | Indicates that the processing for a file failed because it was password-protected. Only relevant for PDF files. |
| max_pages_limit_exceeded v42 | Page-level and above | Indicates that file processing failed due to the number of pages exceeding the configured maximum limit. This applies to all file types. |
| image_fetch_failed | Page-level and above | Indicates that the processing for a file failed because it was not available from the specified file store (either it wasn't present or the application doesn't have the proper permissions to access it). |
| flex_process_single_page_only | Page-level and above | Indicates that the processing for a file failed because it was part of a multi-page semi-structured submission. |
| failed_pages | Submission-level | Indicates that at least one page in the Submission reached the failed state due to one of the Page-level exceptions described above. |
Decisions v32.0.1
Decision Object
Example Decision Object
{
"decision": "Valid Application",
"choice": "Valid",
"task_purpose_name": "Account Application"
}
The Decision object's properties show a choice selected in a given Task Purpose's decision. Multiple choices can be selected for any decision, and each choice is described in its own Decision object. Currently, decisioning is available only in Custom Supervision.
Object Properties
| Property | Type | Description |
| decision | string | The name of the decision. |
| choice | string | The name of the choice selected in this decision. |
| task_purpose_name | string | The Task Purpose in which this decision was made. |
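Since each selected choice arrives as its own Decision object, grouping choices under their decision name can be useful. A minimal sketch; `choices_by_decision` is our own hypothetical helper, and the second sample Decision below is invented for illustration:

```python
from collections import defaultdict

def choices_by_decision(decisions):
    """Group the selected choices under each decision's name, given the
    'decisions' array returned on objects such as Pages and Fields."""
    grouped = defaultdict(list)
    for d in decisions:
        grouped[d['decision']].append(d['choice'])
    return dict(grouped)

decisions = [
    {'decision': 'Valid Application', 'choice': 'Valid',
     'task_purpose_name': 'Account Application'},
    # Hypothetical second choice for the same decision:
    {'decision': 'Valid Application', 'choice': 'Needs Review',
     'task_purpose_name': 'Account Application'},
]
```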
Data Types
Data Type Object
Example Data Type Object
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb51",
"name": "Address",
"enabled": true,
"ml_model": "entry",
"meta_type": "address",
"entry_type": "builtin"
}
Data Types are used by the system to specify the kinds of characters that are expected (or not expected) in a field.
Object properties
| Property | Type | Description |
| uuid | string | Unique system-generated identifier |
| name | string | Human-friendly name of the Data Type |
| enabled | boolean | Only enabled Data Types can be assigned to fields |
| ml_model | string | High-level ML configuration. Possible values are: entry, checkbox, signature and barcode |
| meta_type | string | Detailed ML configuration. See Standard Data Types for examples. |
| entry_type | string | Possible values are: builtin for standard Data Types; list and pattern for Custom Data Types |
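The Data Type listing endpoint described below returns a plain JSON array rather than a paginated object, so building a lookup from it is straightforward. A minimal sketch; `enabled_data_types` is our own hypothetical helper, and the second sample entry below is invented to show a disabled Data Type:

```python
def enabled_data_types(data_types):
    """Map each enabled Data Type's name to its uuid, given the JSON
    array returned by GET /api/v5/data_types. Only enabled Data Types
    can be assigned to fields."""
    return {dt['name']: dt['uuid'] for dt in data_types if dt['enabled']}

sample = [
    {'uuid': 'd83ecc98-67b0-4b44-8a35-8d14e59cfb51', 'name': 'Address',
     'enabled': True, 'ml_model': 'entry', 'meta_type': 'address',
     'entry_type': 'builtin'},
    # Hypothetical disabled custom Data Type:
    {'uuid': '00000000-0000-0000-0000-000000000000', 'name': 'Disabled Type',
     'enabled': False, 'ml_model': 'entry', 'meta_type': None,
     'entry_type': 'pattern'},
]
```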
Retrieving Data Types
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/data_types/'
saved_entity_id = 'd83ecc98-67b0-4b44-8a35-8d14e59cfb51'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, saved_entity_id))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb51",
"name": "Address",
"enabled": true,
"ml_model": "entry",
"meta_type": "address",
"entry_type": "builtin"
}
If you know the UUID of the Data Type you're interested in, then you can retrieve data about that Data Type using this endpoint.
Data Type Retrieval Endpoint
GET /api/v5/data_types/data_type_uuid
Listing Data Types
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/data_types'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
[
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb51",
"name": "Address",
"enabled": true,
"ml_model": "entry",
"meta_type": "address",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb52",
"name": "AlphaNumeric",
"enabled": true,
"ml_model": "entry",
"meta_type": "alphanumeric",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb53",
"name": "AUS Postcode",
"enabled": true,
"ml_model": "entry",
"meta_type": "aus_post_code",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb54",
"name": "AUS State",
"enabled": true,
"ml_model": "entry",
"meta_type": "aus_state",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb85",
"name": "Barcode",
"enabled": true,
"ml_model": "barcode",
"meta_type": null,
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb55",
"name": "CAN Postcode",
"enabled": true,
"ml_model": "entry",
"meta_type": "can_post_code",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb75",
"name": "Capitalized Name",
"enabled": true,
"ml_model": "entry",
"meta_type": "capitalized_name",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb84",
"name": "Checkbox",
"enabled": true,
"ml_model": "checkbox",
"meta_type": null,
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb56",
"name": "Clause",
"enabled": true,
"ml_model": "entry",
"meta_type": "clause",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb59",
"name": "Currency - X,XXX.XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "currency_comma_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb5a",
"name": "Currency - X.XXX,XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "currency_dot_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb5b",
"name": "Currency Trailing Sign - X,XXX.XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "currency_comma_grouping_trail_sign",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb5c",
"name": "Currency Trailing Sign - X.XXX,XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "currency_dot_grouping_trail_sign",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb5d",
"name": "Currency Trailing Sign No Rounding - X,XXX.XXXX",
"enabled": true,
"ml_model": "entry",
"meta_type": "currency_comma_grouping_trail_sign_no_rounding",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb5e",
"name": "Currency Trailing Sign No Rounding - X.XXX,XXXX",
"enabled": true,
"ml_model": "entry",
"meta_type": "currency_dot_grouping_trail_sign_no_rounding",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb57",
"name": "Currency with Unit - X,XXX.XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "currency_w_unit_comma_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb58",
"name": "Currency with Unit - X.XXX,XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "currency_w_unit_dot_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb64",
"name": "Date - DDMMYYYY",
"enabled": true,
"ml_model": "entry",
"meta_type": "row_date",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb66",
"name": "Date - Korean",
"enabled": true,
"ml_model": "entry",
"meta_type": "korean_date",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb63",
"name": "Date - MMDDYYYY",
"enabled": true,
"ml_model": "entry",
"meta_type": "date",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb65",
"name": "Date - MMYYYY",
"enabled": true,
"ml_model": "entry",
"meta_type": "mmyyyy_date",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb67",
"name": "Date with Punctuations",
"enabled": true,
"ml_model": "entry",
"meta_type": "date_w_punctuations",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb68",
"name": "Email Address",
"enabled": true,
"ml_model": "entry",
"meta_type": "email_address",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb69",
"name": "Email Address International",
"enabled": true,
"ml_model": "entry",
"meta_type": "email_address_international",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb6d",
"name": "Enhanced Korean Freeform",
"enabled": true,
"ml_model": "entry",
"meta_type": "freeform_kogpt",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb6a",
"name": "Freeform AlphaNumeric",
"enabled": true,
"ml_model": "entry",
"meta_type": "freeform_alphanumeric",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb6b",
"name": "Freeform Characters",
"enabled": true,
"ml_model": "entry",
"meta_type": "freeform_nolm",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb6c",
"name": "Freeform Characters (American English)",
"enabled": true,
"ml_model": "entry",
"meta_type": "freeform_nolm_restricted_to_ascii",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb6e",
"name": "Generic Text",
"enabled": true,
"ml_model": "entry",
"meta_type": "freeform",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb6f",
"name": "Legal Amount - X,XXX.XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "legal_amount_comma_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb70",
"name": "Legal Amount - X.XXX,XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "legal_amount_dot_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb71",
"name": "Length - X,XXX.XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "length_comma_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb72",
"name": "Length - X.XXX,XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "length_dot_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb82",
"name": "Medical",
"enabled": false,
"ml_model": "entry",
"meta_type": "medical",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb73",
"name": "MICR Font",
"enabled": true,
"ml_model": "entry",
"meta_type": "micr",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb74",
"name": "Name",
"enabled": true,
"ml_model": "entry",
"meta_type": "name",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb78",
"name": "Number - X,XXX.XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "number_comma_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb79",
"name": "Number - X.XXX,XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "number_dot_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb76",
"name": "Number with Unit - X,XXX.XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "number_w_unit_comma_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb77",
"name": "Number with Unit - X.XXX,XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "number_w_unit_dot_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb7a",
"name": "Numeric Text",
"enabled": true,
"ml_model": "entry",
"meta_type": "numeric",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb7b",
"name": "Phone Number",
"enabled": true,
"ml_model": "entry",
"meta_type": "phone_number",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb5f",
"name": "Separated Currency - X,XXX XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "currency_comma_grouping_w_decimal",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb60",
"name": "Separated Currency - X.XXX XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "currency_dot_grouping_w_decimal",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb61",
"name": "Separated Currency Trailing Sign - X,XXX XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "currency_comma_grouping_w_decimal_trail_sign",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb62",
"name": "Separated Currency Trailing Sign - X.XXX XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "currency_dot_grouping_w_decimal_trail_sign",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb83",
"name": "Signature",
"enabled": true,
"ml_model": "signature",
"meta_type": null,
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb7c",
"name": "SSN/EIN/TIN",
"enabled": true,
"ml_model": "entry",
"meta_type": "us_gov_id",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb7d",
"name": "UK Postcode",
"enabled": true,
"ml_model": "entry",
"meta_type": "uk_post_code",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb7f",
"name": "US State",
"enabled": true,
"ml_model": "entry",
"meta_type": "us_state",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb7e",
"name": "US Zip Code",
"enabled": true,
"ml_model": "entry",
"meta_type": "us_zip_code",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb80",
"name": "Weight - X,XXX.XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "weight_comma_grouping",
"entry_type": "builtin"
},
{
"uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfb81",
"name": "Weight - X.XXX,XX",
"enabled": true,
"ml_model": "entry",
"meta_type": "weight_dot_grouping",
"entry_type": "builtin"
}
]
This endpoint retrieves a list of the Data Types in the system that match your filtering criteria.
Because the set of Data Types in a system is relatively small, this endpoint is not paginated:
all matching results are always returned.
Each object in the results array is a Data Type Object.
Data Type Listing Endpoint
GET /api/v5/data_types
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Data Types.
| Property | Type | Description |
| enabled | boolean | Filters for Data Types that are enabled (true) or disabled (false) |
| ml_model | string | Filters for Data Types that have a specific ml_model. Possible values are: entry, checkbox, signature and barcode |
| entry_type | string | Filters for Data Types that have a specific entry_type. Possible values are: builtin for standard Data Types; list and pattern for Custom Data Types |
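As an illustration, a client might assemble the filtered listing URL like this (a minimal sketch; the base URL is a placeholder, and `data_types_url` is a hypothetical helper, not part of any Hyperscience SDK):

```python
from urllib.parse import urlencode

BASE_URL = "https://hyperscience.example.com"  # placeholder host


def data_types_url(enabled=None, ml_model=None, entry_type=None):
    """Build a /api/v5/data_types URL including only the filters provided."""
    params = {}
    if enabled is not None:
        params["enabled"] = "true" if enabled else "false"
    if ml_model is not None:
        params["ml_model"] = ml_model      # entry, checkbox, signature, barcode
    if entry_type is not None:
        params["entry_type"] = entry_type  # builtin, list, pattern
    query = urlencode(params)
    return f"{BASE_URL}/api/v5/data_types" + (f"?{query}" if query else "")


# Example: list all enabled, built-in Data Types backed by the entry model
print(data_types_url(enabled=True, ml_model="entry", entry_type="builtin"))
# https://hyperscience.example.com/api/v5/data_types?enabled=true&ml_model=entry&entry_type=builtin
```

The resulting URL can then be fetched with any HTTP client, using whatever authentication your deployment requires.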
Standard Data Types
The table below defines the standard Data Types that come with the application. These data types are applied to each field and they are used to create a standard set of "normalized" outputs from the application. For example, if somebody writes "January 1, 2015" or "Jan 01, 2015" for a Date field, then the normalized transcription will return "01/01/2015" in both cases.
If the application is unable to convert the raw transcription into a normalized one, then transcription.normalized returns null and the field will include the "normalization_error" exception in the exceptions array. See Exceptions for more detail.
Note that if the application sees "NA", "N/A", "Not applicable", "Undetermined", or "Not available" in the transcription.raw, then the transcription.normalized will return as "Not Applicable". This holds true in all cases except the following: for the Name, Address, Email Address, and Generic Text Data Types, if a person writes "NA" then transcription.normalized will still return as "NA".
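The not-applicable rule above can be approximated client-side as follows. This is an illustrative sketch of the documented behavior, not the application's actual implementation; the helper name and the exempt-type set are assumptions based on the description above.

```python
# Raw values the documentation says are normalized to "Not Applicable"
NA_VARIANTS = {"NA", "N/A", "NOT APPLICABLE", "UNDETERMINED", "NOT AVAILABLE"}
# ML configurations for which a literal "NA" is kept as-is (Name, Address,
# Email Address, Generic Text)
NA_EXEMPT_TYPES = {"name", "address", "email_address", "freeform"}


def normalize_na(raw, data_type):
    """Mimic the documented NA handling; returns None when no NA rule applies."""
    value = raw.strip().upper()
    if value in NA_VARIANTS:
        if value == "NA" and data_type in NA_EXEMPT_TYPES:
            return "NA"
        return "Not Applicable"
    return None


print(normalize_na("n/a", "date"))  # Not Applicable
print(normalize_na("NA", "name"))   # NA
```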
| Default Data Type Name | Underlying ML Configuration | Normalized Output Format | Description of Data Type and Normalization | Transcription.raw Example | Transcription.Normalized Equivalent |
| Address | address | String | Used for all types of address fields. All special characters except "-" are removed. Multiple spaces in a row are reduced to one space. | "54 W. 21st Street, Suite 503, N.Y., NY 10010-1234" | "54 W 21ST STREET SUITE 503 NY NY 10010-1234" |
| AlphaNumeric | alphanumeric | String | Used for fields where the application should not expect real words. Expects only letters, digits, spaces, or dashes; spaces and dashes are removed. Other special characters are considered invalid: their presence reduces confidence, and if they appear in a document field, a normalization error exception results. For a wider range of special characters, use Freeform AlphaNumeric or Freeform Characters | "A12-231 48" | "A1223148" |
| AUS Postcode | aus_post_code | 4 character string | Used for Australian Post Codes. Leading and trailing zeros are allowed | "2150" | "2150" |
| AUS State | aus_state | 2-3 character string | Used for Australian States and Territories. Variations in states are normalized to standard two to three character abbreviations | "N.S.W." or "New South Wales" | "NSW" |
| Barcode | barcode | String | Used to read data in barcodes. All letters are converted to uppercase, leading and trailing spaces are removed, consecutive spaces are consolidated to a single space. The barcode data type fully supports CODE128, CODE39, EAN13, EAN8, Interleaved 2 of 5, ISBN-13, QR codes (Version 1, 2, 3, 4, 10, 25, 40), and Data Matrix. Note that ISBN-10 is supported, but the transcription is the value of the barcode, not the value of the ISBN. Both UPC-A and UPC-E are read as EAN13. | ||
| Capitalized Name | capitalized_name | String | Used for all types of names, including people, places, companies, where the first letter of each word is capitalized. | Anne-Marie Smith | ANNE-MARIE SMITH |
| Clause | clause | String | Used for fields where long sentences / paragraphs are expected. These are essentially treated as Freeform Characters (e.g.: Generic Text field). Only supported by unstructured model. | ||
| Currency - X,XXX.XX | currency_comma_grouping | String ending with a decimal and two digits | This currency format expects commas to group thousands and a dot to separate up to two decimals. Values are returned with two decimal places and abbreviations (e.g., K, MM) are converted into digits. If decimals aren't written, ".00" is added. If the figure is surrounded by parentheses, the value will be normalized to negative | "1,000" or "1K" | "1000.00" |
| Currency - X.XXX,XX | currency_dot_grouping | String ending with a comma and two digits | This currency format expects dots to group thousands and a comma to separate up to two decimals. Values are returned with two decimal places and abbreviations (e.g., K, MM) are converted into digits. If decimals aren't written, ",00" is added. | "1.000" or "1K" | "1000,00" |
| Currency Trailing Sign - X,XXX.XX | currency_comma_grouping_trail_sign | String ending with a decimal and two digits | This currency format expects commas to group thousands and a dot to separate up to two decimals. Values are returned with two decimal places and abbreviations (e.g., K, MM) are converted into digits. If decimals aren't written, ".00" is added. If the figure is surrounded by parentheses, followed by CR, or has a trailing negative sign after the value, the value will be normalized to negative. | "1,000-" or "1000CR" | "-1000.00" |
| Currency Trailing Sign - X.XXX,XX | currency_dot_grouping_trail_sign | String ending with a comma and two digits | This currency format expects dots to group thousands and a comma to separate up to two decimals. Values are returned with two decimal places and abbreviations (e.g., K, MM) are converted into digits. If decimals aren't written, ",00" is added. If the figure is surrounded by parentheses, followed by CR, or has a trailing negative sign after the value, the value will be normalized to negative. | "1.000-" or "1000CR" | "-1000,00" |
| Currency Trailing Sign No Rounding - X,XXX.XXXX | currency_comma_grouping_trail_sign_no_rounding | String ending with a decimal and any number of digits | This currency format expects commas to group thousands and a dot to separate decimals. Values are returned and abbreviations (e.g., K, MM) are converted into digits. If the figure is surrounded by parentheses, followed by CR, or has a trailing negative sign after the value, the value will be normalized to negative. | "1,000.123400-" or "1000.123400CR" | "-1000.123400" |
| Currency Trailing Sign No Rounding - X.XXX,XXXX | currency_dot_grouping_trail_sign_no_rounding | String ending with a comma and any number of digits | This currency format expects dots to group thousands and a comma to separate decimals. Values are returned and abbreviations (e.g., K, MM) are converted into digits. If the figure is surrounded by parentheses, followed by CR, or has a trailing negative sign after the value, the value will be normalized to negative. | "1.000,123400-" or "1000,123400CR" | "-1000,123400" |
| Currency with Unit - X,XXX.XX | currency_w_unit_comma_grouping | String beginning with amount ending with a decimal and two digits, followed by a space and currency code or symbol if present | This currency format is to be used when expecting multiple types of currencies (e.g., USD and EUR) in a given field. The normalized value will start with the amount and it will use a dot to separate up to two decimals. Any currency code or symbol, if present, will be appended to the amount after a space. In the absence of a currency code or symbol, no space will be included. Supported currencies include $, €, £, ¥, all ISO abbreviations, and currencies written with Latin-based characters | "USD 200" or "200USD" | "200.00 USD" |
| Currency with Unit - X.XXX,XX | currency_w_unit_dot_grouping | String beginning with amount ending with a comma and two digits, followed by a space and currency code or symbol if present | This currency format is to be used when expecting multiple types of currencies (e.g., USD and EUR) in a given field. The normalized value will start with the amount and it will use a comma to separate up to two decimals. Any currency code or symbol, if present, will be appended to the amount after a space. In the absence of a currency code or symbol, no space will be included. Supported currencies include $, €, £, ¥, all ISO abbreviations, and currencies written with Latin-based characters | "USD 200" or "200USD" | "200,00 USD" |
| Date - MMDDYYYY | date | 10 character string "MM/DD/YYYY" | Standard US Date format. If the date is written with numbers only, then the month is assumed to come first (US convention). If transcription.raw is an impossible date, such as "April 32, 2019", then a "normalization_error" exception is returned in the exceptions array. | "Jan 1 2015" or "1/1/2015" | "01/01/2015" |
| Date - DDMMYYYY | row_date | 10 character string "DD/MM/YYYY" | DD/MM/YYYY format. If the date is written with numbers only, then the day is assumed to come first. If transcription.raw is an impossible date, such as "April 32, 2019", then a "normalization_error" exception is returned in the exceptions array. | "1 Feb 2015" or "01/02/2015" | "01/02/2015" |
| Date - MMYYYY | mmyyyy_date | 7 character string "MM/YYYY" | MM/YYYY format. If the date is written with numbers only, then four digit component will be assumed to be year regardless of order. If both components are two digits, and each are <= 12, a "normalization_error" exception is returned in the exceptions array | "Feb 2015" or "02/2015" | "02/2015" |
| Date - Korean | korean_date | 10 character string "YYYY/MM/DD" | Standard Korean Date format. If the date is written with numbers only, then the day is assumed to come last. If transcription.raw is an impossible date, such as "2022년 33월 4일", then a "normalization_error" exception is returned in the exceptions array. | "2022년 3월 4일" or "2022 March 04" | "2022/03/04" |
| Date with Punctuations - MMDDYYYY | date_w_punctuations | 10 character string "MM/DD/YYYY" | Standard US Date format that can also handle certain leading and trailing punctuation marks: parentheses, commas, and periods. If the date is written with numbers only, then the month is assumed to come first (US convention). If transcription.raw is an impossible date, such as "April 32, 2019", then a "normalization_error" exception is returned in the exceptions array. | "(Jan 1 2015)." or "1/1/2015," | "01/01/2015" |
| Email Address | email_address | String | All letters are converted to uppercase and no other normalization is done. | "learn+test@hyperscience.com" | "LEARN+TEST@HYPERSCIENCE.COM" |
| Email Address International | email_address_international | String | All letters are converted to uppercase and no other normalization is done. Uses a regular expression instead of a language model | "learn+test@hyperscience.com" | "LEARN+TEST@HYPERSCIENCE.COM" |
| Freeform Characters | freeform_nolm | String | Used for fields where the application should not expect real words. All letters are converted to uppercase, leading and trailing spaces are removed, and all special characters are retained. | "lakdoia3902u73.393837y4.3-3938" | "LAKDOIA3902U73.393837Y4.3-3938" |
| Freeform Characters (American English) | freeform_nolm_restricted_to_ascii | String | Used for fields where the following restricted set of ASCII characters is expected: letters, digits, and the special characters vertical bar, /, ~, _, -, and period (.). All other characters and accents are removed. | "123 Abc-89/XYZ?" | "123 ABC-89/XYZ" |
| Freeform AlphaNumeric | freeform_alphanumeric | String | Used for fields where the application should not expect real words. All letters are converted to uppercase and leading and trailing spaces are removed. Unlike Freeform Characters, all special characters and spaces are removed. | "123 ABC-89/XYZ" | "123ABC89XYZ" |
| Generic Text | freeform | String | Used for fields where words / sentences are expected. All letters are converted to uppercase, leading and trailing spaces are removed, consecutive spaces are consolidated to a single space. | "Data extraction is hard." | "DATA EXTRACTION IS HARD." |
| Enhanced Korean Freeform | freeform_kogpt | String | Used for Korean and English fields where words / sentences are expected. The Korean Large Language Model enhances the transcription performance, especially for degraded images. All letters are converted to uppercase, leading and trailing spaces are removed, consecutive spaces are consolidated to a single space. | "City: 부산광역시." | "CITY: 부산광역시." |
| Legal Amount - X,XXX.XX | legal_amount_comma_grouping | String ending with a decimal and two digits | Used for converting legal amounts found on English-language checks to their numeric, courtesy amount equivalent | "Four hundred twenty seven + 45/100" | "427.45" |
| Legal Amount - X.XXX,XX | legal_amount_dot_grouping | String ending with a comma and two digits | Used for converting legal amounts found on English-language checks to their numeric, courtesy amount equivalent | "Four hundred twenty seven + 45/100" | "427,45" |
| Length - X,XXX.XX | length_comma_grouping | String | Used for fields that contain numerical values and units of length (e.g. 10 ft). This format expects commas to group thousands and a dot to separate decimals. The normalized value will include a space between the number(s) and unit(s) of length, if present. In the absence of a unit, no space will be included. | "5 feet 10.45 inches" | "5 FT 10.45 IN" |
| Length - X.XXX,XX | length_dot_grouping | String | Used for fields that contain numerical values and units of length (e.g. 10 ft). This format expects dots to group thousands and a comma to separate decimals. The normalized value will include a space between the number(s) and unit(s) of length, if present. In the absence of a unit, no space will be included. | "5 feet 10,45 inches" | "5 FT 10,45 IN" |
| MICR Font | micr | String | Used for fields printed in MICR font (often found on checks). Normalization will retain all special MICR symbols, but remove any spaces between elements | "⑆123456789⑆ 12345678 123" | "⑆123456789⑆12345678123" |
| Name | name | String | Used for all types of names, including people, places, companies, etc. All special characters are removed. Multiple spaces in a row are reduced to one space. | "T.J. Madison" | "TJ MADISON" |
| Number - X,XXX.XX | number_comma_grouping | String | Used for values that represent numbers which can have a mathematical operation performed (addition, subtraction, etc). This number format expects commas to group thousands and a dot to separate decimals. Leading zeros and trailing decimal zeros that don't add meaning to such numbers are removed. Abbreviations (e.g., K, MM) are converted into digits, separator commas are removed, e.g., "0321" becomes "321" and "100.00" becomes "100". | "-4.5K" "0321" "100.00" | "-4500" "321" "100" |
| Number - X.XXX,XX | number_dot_grouping | String | Used for values that represent numbers which can have a mathematical operation performed (addition, subtraction, etc). This number format expects dots to group thousands and a comma to separate up to two decimals. Leading zeros and trailing decimal zeros that don't add meaning to such numbers are removed. Abbreviations (e.g., K, MM) are converted into digits, separator dots are removed, e.g., "0321" becomes "321" and "100,00" becomes "100". | "-4,5K" "0321" "100,00" | "-4500" "321" "100" |
| Number with Unit - X,XXX.XX | number_w_unit_comma_grouping | String beginning with number, followed by a space and unit if present | This number format is to be used when expecting numeric values that may also include a unit of measure (e.g., 10kg). The normalized value will start with the number and it will use a dot to separate decimals if present. Any unit of measure, if present, will be appended to the number after a space | "10kg" | "10 kg" |
| Number with Unit - X.XXX,XX | number_w_unit_dot_grouping | String beginning with number, followed by a space and unit if present | This number format is to be used when expecting numeric values that may also include a unit of measure (e.g., 10 kg). The normalized value will start with the number and it will use a comma to separate decimals if present. Any unit of measure, if present, will be appended to the number after a space | "10kg" | "10 kg" |
| Numeric Text | numeric | String | Numeric data. No abbreviations are taken into consideration and leading and trailing zeros are kept intact | "0038937313200" | "0038937313200" |
| Phone Number | phone_number | String with digits only | All punctuation (e.g., +, (), -) is removed such that only the numbers are returned | "+1 (555) 555-5555" | "15555555555" |
| Separated Currency - X,XXX XX | currency_comma_grouping_w_decimal | String ending with a decimal and two digits | This currency format expects commas to group thousands and is to be used where two digits after a decimal are mandated, yet the printed vertical line on the underlying template may not always be a reliable indicator of this separation. Values are returned with a dot to separate up to two decimals. If decimals aren't written, the last two digits are assumed to be the decimals. | "100000" | "1000.00" |
| Separated Currency - X.XXX XX | currency_dot_grouping_w_decimal | String ending with a comma and two digits | This currency format expects dots to group thousands and is to be used where two digits after a decimal are mandated, yet the printed vertical line on the underlying template may not always be a reliable indicator of this separation. Values are returned with a comma to separate up to two decimals. If decimals aren't written, the last two digits are assumed to be the decimals. | "100000" | "1000,00" |
| Separated Currency Trailing Sign - X,XXX XX | currency_comma_grouping_w_decimal_trail_sign | String ending with a decimal and two digits | This currency format expects commas to group thousands and is to be used where two digits after a decimal are mandated, yet the printed vertical line on the underlying template may not always be a reliable indicator of this separation. Values are returned with a dot to separate up to two decimals. If decimals aren't written, the last two digits are assumed to be the decimals. If the figure is surrounded by parentheses, followed by CR, or has a trailing negative sign after the value, the value will be normalized to negative. | "100000-" | "-1000.00" |
| Separated Currency Trailing Sign - X.XXX XX | currency_dot_grouping_w_decimal_trail_sign | String ending with a comma and two digits | This currency format expects dots to group thousands and is to be used where two digits after a decimal are mandated, yet the printed vertical line on the underlying template may not always be a reliable indicator of this separation. Values are returned with a comma to separate up to two decimals. If decimals aren't written, the last two digits are assumed to be the decimals. If the figure is surrounded by parentheses, followed by CR, or has a trailing negative sign after the value, the value will be normalized to negative. | "100000-" | "-1000,00" |
| SSN/EIN/TIN | us_gov_id | 9 digit string "#########" | US Government Tax Identification numbers. When written, these can include dashes in different places but must be 9 digits. If the transcription.raw does not have 9 digits (e.g., it has letters or has more than 9 digits), then a "normalization_error" exception is returned in the exceptions array. | "987-65-4321" | "987654321" |
| UK Postcode | uk_post_code | 5 - 7 character string | Used for UK Postcodes. Space between Outward and Inward codes is removed | "SW1W 0NY" | "SW1W0NY" |
| US Zip Code | us_zip_code | 5 or 9 digit string | Used for US Zip Codes. Dashes or spaces between Zip Code and optional +4 is removed | "10010-7356" | "100107356" |
| US State | us_state | 2 character string | Used for US States and Territories. Variations in states are normalized to standard two character abbreviations | "Arizona" or "Ariz." | "AZ" |
| Weight - X,XXX.XX | weight_comma_grouping | String | Used for fields that contain numerical values and units of weight (e.g. 10 lbs). This format expects commas to group thousands and a dot to separate decimals. The normalized value will include a space between the number(s) and unit(s) of weight, if present. In the absence of a unit, no space will be included. | "5 pounds 10.45 ounces" | "5 LBS 10.45 OZS" |
| Weight - X.XXX,XX | weight_dot_grouping | String | Used for fields that contain numerical values and units of weight (e.g. 10 lbs). This format expects dots to group thousands and a comma to separate decimals. The normalized value will include a space between the number(s) and unit(s) of weight, if present. In the absence of a unit, no space will be included. | "5 pounds 10,45 ounces" | "5 LBS 10,45 OZS" |
| Medical | medical | String | Used for fields where medical terminology / sentences are expected. All letters are converted to uppercase, leading and trailing spaces are removed, consecutive spaces are consolidated to a single space. Note that spaces between slashes and characters are removed. | "120 systolic / 80 diastolic" | "120 SYSTOLIC/80 DIASTOLIC" |
| Checkbox | checkbox | Boolean | No normalization, true indicates that the checkbox was checked | N/A | N/A |
| Signature | signature | Boolean | No normalization, true indicates that a signature was present | N/A | N/A |
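To make the Date - MMDDYYYY behavior above concrete, here is a small parser that approximates it. This is an illustrative sketch only, not the application's ML-backed transcription; the list of accepted raw formats is a non-exhaustive assumption.

```python
from datetime import datetime

# A few raw formats this sketch accepts; the application handles many more.
_FORMATS = ("%b %d %Y", "%B %d, %Y", "%m/%d/%Y")


def normalize_mmddyyyy(raw):
    """Return 'MM/DD/YYYY', or None where the app would raise a
    "normalization_error" exception."""
    for fmt in _FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).strftime("%m/%d/%Y")
        except ValueError:
            continue
    return None


print(normalize_mmddyyyy("Jan 1 2015"))      # 01/01/2015
print(normalize_mmddyyyy("1/1/2015"))        # 01/01/2015
print(normalize_mmddyyyy("April 32, 2019"))  # None (impossible date)
```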
Tables v27
Table Object
Example Table Object
{
"id": 7000001,
"table_number": 1,
"name": "Table Doc 2",
"layout_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbba",
"layout_variation_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbba",
"parent_table_id": null,
"parent_layout_table_uuid": null,
"parent_layout_variation_table_uuid": null,
"rows": [
{
"id": 8000001,
"row_number": 1,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000005,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000001,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 1",
"normalized": "CELL ROW 1",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000002,
"row_number": 2,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000006,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000002,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 2",
"normalized": "CELL ROW 2",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000003,
"row_number": 3,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000007,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000003,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 3",
"normalized": "CELL ROW 3",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000004,
"row_number": 4,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000008,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000004,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 4",
"normalized": "CELL ROW 4",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000005,
"row_number": 5,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000009,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000005,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 5",
"normalized": "CELL ROW 5",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
}
]
}
The Table object is the representation of a single logical table in a document. It may span multiple pages within that document.
Object properties
| Property | Type | Description |
| id | integer | Unique system-generated id exposed in the web application for this table |
| table_number | integer | Indicates the order in which this table appears in the document |
| name | string | Name of the layout table |
| layout_table_uuid v36 | string | Unique system-generated uuid string identifying the layout table to which this table belongs |
| layout_variation_table_uuid v36 | string | Unique system-generated uuid string identifying the layout variation table to which this table belongs |
| rows | array of JSON objects | A list of rows in this table. See Table Row |
| parent_table_id v32 | integer | Identifier corresponding to the parent Table |
| parent_layout_table_uuid v32 v36 | string | UUID corresponding to the parent Table's layout_table_uuid |
| parent_layout_variation_table_uuid v36 | string | UUID corresponding to the parent Table's layout_variation_table_uuid |
Table Row Object
A Table Row Object represents a row of cells in the enclosing table.
Object properties
| Property | Type | Description |
| id | integer | Unique system-generated id exposed in the web application for this row |
| row_number | integer | Indicates the order in which this row appears in the table |
| cells | array of JSON objects | A list of cells in this row |
| parent_row_id v32 | integer | Identifier corresponding to the parent Table Row |
| parent_row_number v32 | integer | The row number corresponding to the parent Table Row |
Table Cell Object
A Table Cell Object represents a specific cell of a table.
Object properties
| Property | Type | Description |
| id | integer | Unique system-generated id exposed in the web application for this cell |
| column_name | string | Name of the column of the table to which this cell belongs |
| output_name v36 | string | Output Name of the column of the table to which this cell belongs |
| document_table_row_id | integer | Id of the row to which this cell belongs |
| layout_table_column_uuid v36 | string | Unique system-generated uuid string identifying the column to which this cell belongs |
| layout_variation_table_column_uuid v36 | string | Unique system-generated uuid string identifying the layout variation table column to which this cell belongs |
| page_id | integer | The page id to which this cell belongs |
| raw | string | Value of the field that was transcribed as the machine saw it or the data keyer typed it. Returns null if the field was marked illegible |
| normalized | string | The normalized value of the field. The changes applied to the transcription to normalize it vary by Data Type |
| exceptions | array of strings | Provides a list of all exceptions in this Field. If the Field has no exceptions, then this value returns an empty array. See Exceptions for a list of possible values |
| user_transcribed v33 | string | Username of the user who transcribed the field in Supervision. Returns null if the transcription_source is machine_transcription. |
| state | string | Current state of the Field. Potential values are processing or complete. See States for more detail (Note that supervision does not apply to cells) |
| bounding_box v31 | array of floats | Location of the cell bounding box on the page, as array of four numbers in format [start_x, start_y, end_x, end_y] |
| decisions v34 | array of JSON objects | Array of Decision objects describing any decisions made for the given cell. An empty array is returned if there are no Decisions. |
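The Table → Table Row → Table Cell nesting described above can be traversed directly from the parsed JSON. Below is a minimal sketch that groups normalized cell values by column; the `table` dict is a trimmed, hypothetical response, not real API output.

```python
# Trimmed, hypothetical Table object (only the keys used here).
table = {
    "id": 7000001,
    "name": "Table Doc 2",
    "rows": [
        {
            "row_number": 1,
            "cells": [
                {"column_name": "Col Name Fixed", "normalized": "CELL ROW 1"},
            ],
        },
        {
            "row_number": 2,
            "cells": [
                {"column_name": "Col Name Fixed", "normalized": "CELL ROW 2"},
            ],
        },
    ],
}

# Walk each row's cells and collect normalized values per column name.
columns = {}
for row in table["rows"]:
    for cell in row["cells"]:
        columns.setdefault(cell["column_name"], []).append(cell["normalized"])

print(columns)  # {'Col Name Fixed': ['CELL ROW 1', 'CELL ROW 2']}
```

The same loop works on a full response; extra keys on each cell (bounding_box, exceptions, and so on) are simply ignored.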
Retrieving Tables
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/document_tables/'
table_id = '2'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, table_id))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"id": 2,
"table_number": 1,
"name": "Table Doc 2",
"rows": [
{
"id": 2,
"row_number": 1,
"parent_row_id": null,
"cells": [
{
"id": 6,
"column_name": "Col Name Fixed",
"document_table_row_id": 2,
"layout_table_column_uuid": "48494a4b-4c4d-4e4f-9051-525354555657",
"layout_variation_table_column_uuid": "48494a4b-4c4d-4e4f-9051-525354555657",
"page_id": 4,
"raw": "Cell Row 1",
"normalized": "CELL ROW 1",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [0.0, 0.1, 1.0, 0.19],
"decisions": []
}
]
},
{
"id": 3,
"row_number": 2,
"cells": [
{
"id": 7,
"column_name": "Col Name Fixed",
"document_table_row_id": 3,
"layout_table_column_uuid": "48494a4b-4c4d-4e4f-9051-525354555657",
"layout_variation_table_column_uuid": "48494a4b-4c4d-4e4f-9051-525354555657",
"page_id": 4,
"raw": "Cell Row 2",
"normalized": "CELL ROW 2",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [0.0, 0.2, 1.0, 0.29],
"decisions": []
}
]
},
{
"id": 4,
"row_number": 3,
"cells": [
{
"id": 8,
"column_name": "Col Name Fixed",
"document_table_row_id": 4,
"layout_table_column_uuid": "48494a4b-4c4d-4e4f-9051-525354555657",
"layout_variation_table_column_uuid": "48494a4b-4c4d-4e4f-9051-525354555657",
"page_id": 4,
"raw": "Cell Row 3",
"normalized": "CELL ROW 3",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [0.0, 0.3, 1.0, 0.39],
"decisions": []
}
]
},
{
"id": 5,
"row_number": 4,
"cells": [
{
"id": 9,
"column_name": "Col Name Fixed",
"document_table_row_id": 5,
"layout_table_column_uuid": "48494a4b-4c4d-4e4f-9051-525354555657",
"layout_variation_table_column_uuid": "48494a4b-4c4d-4e4f-9051-525354555657",
"page_id": 4,
"raw": "Cell Row 4",
"normalized": "CELL ROW 4",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [0.0, 0.4, 1.0, 0.49],
"decisions": []
}
]
},
{
"id": 6,
"row_number": 5,
"cells": [
{
"id": 10,
"column_name": "Col Name Fixed",
"document_table_row_id": 6,
"layout_table_column_uuid": "48494a4b-4c4d-4e4f-9051-525354555657",
"layout_variation_table_column_uuid": "48494a4b-4c4d-4e4f-9051-525354555657",
"page_id": 4,
"raw": "Cell Row 5",
"normalized": "CELL ROW 5",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [0.0, 0.5, 1.0, 0.59],
"decisions": []
}
]
}
]
}
If you know the ID of the table you're interested in (either by looking in the application itself or through the Listing Tables endpoint) then you can retrieve data about that Table using this endpoint.
Table Retrieval Endpoint
GET /api/v5/document_tables/table_id
Request Parameters
There are no request parameters for this endpoint.
Listing Tables
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/document_tables'
endpoint_url = urljoin(base_url, endpoint)
params = {'document_id': 2}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"count": 1,
"next": null,
"previous": null,
"results": [
{
"id": 7000001,
"table_number": 1,
"name": "Table Doc 2",
"layout_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbba",
"layout_variation_table_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbba",
"parent_table_id": null,
"parent_layout_table_uuid": null,
"parent_layout_variation_table_uuid": null,
"rows": [
{
"id": 8000001,
"row_number": 1,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000005,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000001,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 1",
"normalized": "CELL ROW 1",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000002,
"row_number": 2,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000006,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000002,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 2",
"normalized": "CELL ROW 2",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000003,
"row_number": 3,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000007,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000003,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 3",
"normalized": "CELL ROW 3",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000004,
"row_number": 4,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000008,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000004,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 4",
"normalized": "CELL ROW 4",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
},
{
"id": 8000005,
"row_number": 5,
"parent_row_id": null,
"parent_row_number": null,
"cells": [
{
"id": 9000009,
"column_name": "Col Name Fixed",
"output_name": "Col Output Name Fixed",
"document_table_row_id": 8000005,
"layout_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"layout_variation_table_column_uuid": "d83ecc98-67b0-4b44-8a35-8d14e59cfbbd",
"page_id": 2000003,
"raw": "Cell Row 5",
"normalized": "CELL ROW 5",
"exceptions": [],
"user_transcribed": null,
"state": "complete",
"bounding_box": [
0.0,
0.0,
1.0,
1.0
],
"decisions": []
}
]
}
]
}
]
}
This endpoint allows you to retrieve a list of Tables in the system that match your defined filtering criteria and to paginate through them. See Listing Object for the standard response structure of the list. Each object in the results array is a Table Object.
Table Listing Endpoint
GET /api/v5/document_tables
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Tables. If you repeat a query parameter multiple times in a request, the application will apply OR logic, i.e., it will list Tables where that attribute matches any of the values you provided.
| Property | Type | Description |
| document_id | integer | Filters for Tables that were part of a specific document using the system-generated Document ID |
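To illustrate the OR logic described above, the sketch below encodes the same query parameter twice using only the standard library; the document IDs are hypothetical.

```python
from urllib.parse import urlencode, urljoin

# Hypothetical filter: Tables from document 2 OR document 3.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint_url = urljoin(base_url, 'api/v5/document_tables')

# doseq=True expands a list value into a repeated query parameter.
query = urlencode({'document_id': [2, 3]}, doseq=True)
print(f'{endpoint_url}?{query}')
# ...?document_id=2&document_id=3
```

When using the requests library, passing a list as a value in `params` produces the same repeated-parameter encoding.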
Business Metrics v41
Business Metric Object
Example Business Metric Object
{
"namespace": "insurance.claims",
"name": "processing_duration_seconds",
"dimensions": [
{
"name": "source"
},
{
"name": "flow_uuid"
},
{
"name": "layout_uuid"
}
]
}
A business metric is a way to measure a specific business event, like the number of fields transcribed or the duration of a task. It helps you gain valuable insights into your business's overall health and effectiveness by using contextual attributes, or dimensions, to show what's working well and what needs attention. See our Flows SDK documentation to learn how to create custom business metrics and publish measurements using the Flows SDK.
Object Properties
| Property | Type | Description |
| namespace | string | Top-level category for organizing your metrics by a domain or business area. A namespace can include one or more sub-namespaces. Sub-namespaces are separated by periods, and each additional level further refines the grouping (e.g., insurance.claims is a sub-namespace of insurance). |
| name | string | Name of the metric. This name combines with the namespace to create a unique metric identifier. |
| dimensions | array of JSON objects | Array of Dimension objects that provide contextual attributes for the metric. |
Dimension Object
| Property | Type | Description |
| name | string | Label used to describe a metric's attribute, such as flow_uuid or task_type. |
Retrieving Business Metrics v42
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/analytics/metrics/'
metric = 'insurance.claims.processing_duration_seconds'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, metric))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"namespace": "insurance.claims",
"name": "processing_duration_seconds",
"dimensions": [
{
"name": "source"
},
{
"name": "flow_uuid"
},
{
"name": "layout_uuid"
}
]
}
Retrieve metadata about a specific Business Metric using the metric's full name. The metric's full
name is a combination of the metric's namespace and name, separated by a period (.). You can
obtain the namespace and the name using the Listing Business Metrics endpoint.
Business Metric Retrieval Endpoint
GET /api/v5/analytics/metrics/namespace.name
Listing Business Metrics v42
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/analytics/metrics'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"count": 2,
"next": null,
"previous": null,
"results": [
{
"namespace": "finance",
"name": "tasks_created",
"dimensions": [
{
"name": "flow_uuid"
},
{
"name": "task_type"
}
]
},
{
"namespace": "insurance.claims",
"name": "processing_duration_seconds",
"dimensions": [
{
"name": "source"
},
{
"name": "flow_uuid"
},
{
"name": "layout_uuid"
}
]
}
]
}
This endpoint allows you to retrieve a list of Business Metrics in the system that match your
defined filtering criteria and to paginate through them. Each object in the results array is a
Business Metric Object.
Business Metric Listing Endpoint
GET /api/v5/analytics/metrics
Request Parameters
The table below defines the query parameters that can be used to retrieve a filtered list of Business Metrics. If you repeat a query parameter multiple times in a request, the application will apply OR logic (i.e., it will list Business Metrics where that attribute matches any of the values you provided).
| Property | Type | Description |
| namespace | array of strings | Namespaces of the metrics. Sub-namespaces are matched, as well. For example, if you filter for insurance, the response will include metrics with the namespace insurance and insurance.claims. |
| name | array of strings | Names of the metrics. |
| metric | array of strings | Full names of the metrics. Each metric's full name is a combination of the metric's namespace and name, separated by a period (.). |
Business Metrics Export
Example Request
from datetime import date
from io import BytesIO
from urllib.parse import urljoin
from zipfile import ZipFile
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/analytics/metrics/export'
endpoint_url = urljoin(base_url, endpoint)
params = {
'start_date': date(2020, 8, 10).isoformat(),
'end_date': date(2020, 8, 13).isoformat(),
'metric': ['insurance.claims.processing_duration_seconds'],
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    zip_stream = BytesIO(r.content)
    with ZipFile(zip_stream) as zip_archive:
        for filename in zip_archive.namelist():
            with zip_archive.open(filename) as csv_file:
                content = csv_file.read().decode('utf-8')
                print(content)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
Time Start,flow_uuid,layout_uuid,source,Value Count,Value Max,Value Min,Value Sum
2020-08-10 19:00:00+00:00,ee9f6fdd-3e47-4264-a113-baa72852017c,f5eaa35e-5e7f-46b9-8076-4b64d0b7c82a,email,3,435.0,353.0,1188.0
2020-08-10 19:00:00+00:00,ee9f6fdd-3e47-4264-a113-baa72852017c,9ca00e65-629d-42b7-9f2c-bbf69c7f8e99,email,2,184.0,120.0,304.0
2020-08-13 08:00:00+00:00,1dc87f33-b270-43ae-a755-c9a6691957ca,64cd4025-2475-461c-a5c9-78700e24a209,drive,32,42.0,31.0,1256.0
This endpoint allows you to export measurement data for specified metrics within a date range. The data is returned as a ZIP file containing one CSV file per metric, with each CSV containing aggregated measurement values.
Endpoint
GET /api/v5/analytics/metrics/export
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Business
Metrics. If you repeat metric filters (namespace, name, and metric) multiple times in a
request, the application will apply OR logic (i.e., it will export Business Metrics where that
attribute matches any of the values you provided). You must provide at least one metric filter.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the export. The export's start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the export. Defaults to the current date. The report's end date is inclusive. |
| namespace v42 | array of strings | Namespaces of the metrics. Sub-namespaces are matched, as well. For example, if you filter for insurance, the response will include metrics with the namespace insurance and insurance.claims. |
| name v42 | array of strings | Names of the metrics. |
| metric | array of strings | Full names of the metrics. Each metric's full name is a combination of the metric's namespace and name, separated by a period (.). |
Response
The response returned is a ZIP file containing one CSV file per Business Metric. Each CSV file is named using the metric's full name and includes aggregated measurements data for the specified metric. The dimension columns in the CSV are dynamic and correspond to the dimensions defined for the metric. Each row in the CSV represents a unique combination of dimension values for a specific time period, along with aggregated measurement values. Here are the columns included in each CSV file:
| Header | Type | Description |
| Time Start | datetime | The start date and time of the time period covered by the row. |
| Dimension | string | One column for each dimension defined for the metric. The value in the column corresponds to the dimension's value. |
| Value Count | integer | The number of measurements recorded for the specific combination of dimensions during the time period. |
| Value Max | float | The maximum measurement value recorded for the specific combination of dimensions during the time period. |
| Value Min | float | The minimum measurement value recorded for the specific combination of dimensions during the time period. |
| Value Sum | float | The sum of all measurement values recorded for the specific combination of dimensions during the time period. |
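Each exported CSV starts with "# Period ..." comment lines ahead of the header row, so they must be skipped before parsing. A minimal sketch using a trimmed, hypothetical sample of one exported file (real exports include one column per dimension defined for the metric):

```python
import csv
import io

# Trimmed, hypothetical export CSV, modeled on the example response above.
content = """\
# Period Start=2020-08-10
# Period End=2020-08-13
Time Start,flow_uuid,source,Value Count,Value Max,Value Min,Value Sum
2020-08-10 19:00:00+00:00,ee9f6fdd-3e47-4264-a113-baa72852017c,email,3,435.0,353.0,1188.0
2020-08-13 08:00:00+00:00,1dc87f33-b270-43ae-a755-c9a6691957ca,drive,32,42.0,31.0,1256.0
"""

# Drop the "# Period ..." comment lines, then parse the remaining CSV.
data = [line for line in content.splitlines() if not line.startswith('#')]
reader = csv.DictReader(io.StringIO('\n'.join(data)))

# Derive the average measurement per row from the aggregate columns.
means = [float(r['Value Sum']) / int(r['Value Count']) for r in reader]
print(means)  # [396.0, 39.25]
```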
Reporting v28.0.3
Submissions Report
Example Request
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/submissions/csv'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
Submission ID,Submission Date Created,Submission Due Date,Submission Date Completed,Delta between Due and Completed Dates (seconds),Submission Due Date Source,SLA Rule Applied - Name,SLA Rule Applied - Definition,Number of Folders,Number of Documents,Number of Pages,Number of Page Failures,Number of Fields,Number of Machine Only Fields Transcribed,Number of Tables,Number of Table Cells,Number of Machine Only Table Cells Transcribed,Submission State,Submission Substate,Submission Metadata,External ID
1000000,2018-05-09 04:39:50 PM,2018-05-10 04:39:50 PM,2018-05-09 04:41:22 PM,86307,System Default,System Default SLA rule,"{
""_metadata"": {
""version"": 1
},
""rules"": [
{
""process_within"": {
""duration"": 24,
""duration_unit"": ""hours""
}
}
]
}",2,2,7,0,0,0,2,0,0,COMPLETE,,"{
""custom"": ""data""
}",yourcompanyid#1236
On the Submissions page of our application, you can download a CSV containing data about Submissions that meet the criteria you specify. By sending requests to the endpoint described below, you can automate the generation of this report.
You can also request the inclusion of extra fields in the report by using the query parameter include_extra_fields v33. Currently, this parameter only supports the inclusion of the Workflow UUID field. To include this field, use the query parameter include_extra_fields=flow_uuid.
Endpoint
GET /api/v5/submissions/csv
Request Parameters
Parameters with string values allow you to filter for multiple values of those parameters. To filter for multiple values, repeat the parameter in your request (e.g., state=processing&state=supervision). Submissions matching any of the provided values will be eligible for inclusion in the response.
Use the query parameters below to filter for specific Submissions.
| Property | Type | Description |
| start_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submissions that were ingested into the application on or after a specific date and time (greater than or equal to operator). |
| start_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submissions that were ingested into the application before a specific date and time (less than operator). |
| state | string | Filters for Submissions that are in a particular state. See States for a list of possible values. |
| complete_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submissions that finished processing on or after a specific date and time (greater than or equal to operator). |
| complete_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filters for Submissions that finished processing before a specific date and time (less than operator). |
| halted | boolean | When set to true, the report will only include data for Submissions in halted jobs. When set to false, the report will not include data for these Submissions. |
| exception | string | Filters for Submissions containing at least one Field with a particular exception. See Exceptions for a list of possible values. |
| has_potential_layout | boolean | Indicates whether the report should include data for Submissions that have a potential layout candidate. |
| document_status | string | Filters for Submissions that have at least one Document that is in a particular state. See States for a list of possible values. |
| layout_kind | string | Filters for Submissions that have at least one Document with a particular type of layout. Can be one of the following values: structured, semi_structured, and additional. |
| layout | string | Filters for Submissions that have at least one Document whose layout has a specific UUID. |
| layout_tag | string | Filters for Submissions containing at least one Document whose layout has a specific tag. |
| page_status | string | Filters for Submissions that have at least one Page that is in a particular state. See States for a list of possible values. |
| flow_uuid v33 | string | Filter for Submissions processed through a specific Flow. |
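The datetimetz filters above take ISO-8601 strings, and repeating `state` applies OR logic. A minimal sketch of building such a query string with the standard library; the January 2024 window and the chosen states are hypothetical.

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

# Hypothetical filter: Submissions ingested during January 2024 (UTC)
# that are either complete or in supervision.
params = [
    ('start_time__gte', datetime(2024, 1, 1, tzinfo=timezone.utc).isoformat()),
    ('start_time__lt', datetime(2024, 2, 1, tzinfo=timezone.utc).isoformat()),
    ('state', 'complete'),
    ('state', 'supervision'),
]
query = urlencode(params)  # ':' and '+' are percent-encoded
print(query)
```

Append the result to the endpoint URL, or pass the same list of tuples as `params` to a requests session.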
Response
The response returned is in CSV format. Each row represents a Submission and has the following columns:
| Header | Type | Description |
| Submission ID | integer | Unique system-generated Submission ID. |
| Submission Date Created | datetime | Date and time that the Submission was ingested into the application. |
| Submission Due Date | datetime | The due date and time (SLA) of the Submission. |
| Submission Date Completed | datetime | The date and time that the entire Submission entered the complete state, including all Supervision tasks. |
| Delta between Due and Completed Dates (seconds) | integer | Time difference between the due date and the date completed, in seconds. Negative if done on time, positive if done late. |
| Submission Due Date Source | string | The source that set the due date of the Submission. |
| SLA Rule Applied - Name | string | The name of the rule applied to the Submission that determined its due date. |
| SLA Rule Applied - Definition | JSON object | The definition of the rule applied to the Submission that determined its due date. |
| Number of Folders v40 | integer | Number of Document Folders created for the Submission. |
| Number of Documents | integer | Number of Documents identified as part of the Submission. |
| Number of Pages | integer | Number of Pages identified as part of the Submission. |
| Number of Page Failures | integer | Number of Pages in the Submission that have at least one Exception. |
| Number of Fields | integer | Number of Fields in the Submission. |
| Number of Machine Only Fields Transcribed | integer | Number of Fields in the Submission that were transcribed by the machine. |
| Number of Tables | integer | Number of Tables in the Submission. |
| Number of Table Cells | integer | Number of Table Cells in the Submission. |
| Number of Machine Only Table Cells Transcribed | integer | Number of Table Cells in the Submission that were transcribed by the machine. |
| Submission State | string | Current state of the Submission. See States for a list of possible values. |
| Submission Substate | string | If Submission State is supervision, provides additional details about the Supervision. See Substates for a list of possible values. |
| Submission Metadata | JSON object | User-defined data provided during the Submission’s creation. See Submission Creation for more information. |
| External ID | string | User-defined ID provided during the Submission’s creation. See Submission Creation for more information. |
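As the example response shows, the Submission Metadata column embeds a JSON object inside a quoted CSV field that can span multiple lines. A minimal sketch of extracting it, using a hypothetical row trimmed to three columns:

```python
import csv
import io
import json

# One report row, trimmed for illustration. CSV quoting doubles the inner
# quotes, and the quoted field spans multiple lines; the csv module
# unescapes both automatically.
content = '''Submission ID,Submission Metadata,External ID
1000000,"{
""custom"": ""data""
}",yourcompanyid#1236
'''

reader = csv.DictReader(io.StringIO(content))
row = next(reader)
metadata = json.loads(row['Submission Metadata'])
print(row['Submission ID'], metadata['custom'])  # 1000000 data
```

The same approach applies to the SLA Rule Applied - Definition column, which also holds a JSON object.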
Pages Report
Example Request
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/pages/csv'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
Submission ID,Folder IDs,Document ID,Page ID,Page State,Page Substate,Document Type,Page File Name,Layout Name,Layout Version Name,Layout Page Number,Number of Fields,Number of Machine Only Fields Transcribed,Number of Table Rows,Number of Table Columns,Number of Table Cells,Number of Machine Only Table Cells Transcribed,Submission Metadata,Submission External ID
1000000,[],3000000,2000000,COMPLETE,,form_page,test_submission.pdf,NYC DMV application,NYC DMV application,1,0,0,1,5,0,0,"{
""custom"": ""data""
}",yourcompanyid#1236
1000000,[],3000000,2000001,COMPLETE,,form_page,test_submission.pdf,NYC DMV application,NYC DMV application,2,0,0,0,0,0,0,"{
""custom"": ""data""
}",yourcompanyid#1236
1000000,[],3000000,2000002,COMPLETE,,form_page,test_submission.pdf,NYC DMV application,NYC DMV application,3,0,0,0,0,0,0,"{
""custom"": ""data""
}",yourcompanyid#1236
1000000,"[4000000, 4000001]",3000001,2000003,COMPLETE,,form_page,test_submission.pdf,Social Security Name Change,Social Security Name Change,1,0,0,5,1,0,0,"{
""custom"": ""data""
}",yourcompanyid#1236
1000000,[],,2000004,COMPLETE,,other_page,test_submission_2.pdf,,,,0,0,0,0,0,0,"{
""custom"": ""data""
}",yourcompanyid#1236
1000000,[],,2000005,COMPLETE,,blank_page,test_submission_2.pdf,,,,0,0,0,0,0,0,"{
""custom"": ""data""
}",yourcompanyid#1236
1000000,[],,2000006,COMPLETE,,form_page,test_submission_2.pdf,,,,0,0,0,0,0,0,"{
""custom"": ""data""
}",yourcompanyid#1236
On the Submissions page of our application, you can download a CSV containing data about Pages that meet the criteria you specify. By sending requests to the endpoint described below, you can automate the generation of this report.
Endpoint
GET /api/v5/pages/csv
Request Parameters
Parameters with string values allow you to filter for multiple values of those parameters. To filter for multiple values, repeat the parameter in your request (e.g., state=processing&state=supervision). Pages matching any of the provided values will be eligible for inclusion in the response.
Use the query parameters below to filter for specific Pages.
| Property | Type | Description |
| submission_start_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Pages from Submissions that were ingested into the application on or after a specific date and time (greater than or equal to operator). |
| submission_start_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filters for Pages from Submissions that were ingested into the application before a specific date and time (less than operator). |
| submission_state | string | Filters for Pages from Submissions that are in a particular state. See States for a list of possible values. |
| submission_complete_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Pages from Submissions that finished processing on or after a specific date and time (greater than or equal to operator). |
| submission_complete_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filters for Pages from Submissions that finished processing before a specific date and time (less than operator). |
| submission_halted | boolean | Filters for Pages based on their Submissions' halted state. When set to true, the report will only include data for Pages whose Submissions are in halted jobs. When set to false, the report will not include data for these Pages. |
| exception | string | Filters for Pages that have a particular exception. See Exceptions for a list of possible values. |
| document_status | string | Filters for Pages from Documents that are in a particular state. See States for a list of possible values. |
| layout_kind | string | Filters for Pages from Documents with a particular type of layout. Can be one of the following values: structured, semi_structured, and additional. |
| layout | string | Filters for Pages from Documents whose layouts have a specific UUID. |
| layout_tag | string | Filters for Pages from Documents whose layouts have a specific tag. |
| state | string | Filters for Pages that are in a particular state. See States for a list of possible values. |
Response
The response returned is in CSV format. Each row represents a Page and has the following columns:
| Header | Type | Description |
| Submission ID | integer | Unique system-generated Submission ID for the Page’s Submission. |
| Folder IDs v40 | array of integers | Array of the system-generated IDs of Document Folders that include the Document that the Page belongs to. |
| Document ID | integer | Unique system-generated Document ID for this Page’s Document. |
| Page ID | integer | Unique system-generated Page ID. |
| Page State | string | Current state of the Page. See States for a list of possible values. |
| Page Substate | string | If Page State is supervision, provides additional details about the Supervision. See Substates for a list of possible values. |
| Document Type | string | Indicates what type the Page has been classified into. Can be one of the following values: form_page, blank_page, other_page, additional_form_page, and unknown_page. |
| Page File Name | string | The name of the file from the Submission’s upload that is associated with the Page. |
| Layout Name | string | The name of the layout that the Page’s Document was matched to. |
| Layout Version Name | string | The user-defined name for the specific locked layout version that the Page’s Document was matched to. |
| Layout Page Number | integer | The number of the page in the Document’s layout that the Page has matched to. For example, if the Page matched to the first page of a layout, this value would be 1. Will be empty if the Page has not matched to a layout page. |
| Number of Fields | integer | Number of Fields in the Page. |
| Number of Machine Only Fields Transcribed | integer | Number of Fields in the Page that were transcribed by the machine. |
| Number of Table Rows | integer | Number of Table Rows in the Page. |
| Number of Table Columns | integer | Number of Table Columns in the Page. |
| Number of Table Cells | integer | Number of Table Cells in the Page. |
| Number of Machine Only Table Cells Transcribed | integer | Number of Table Cells in the Page that were transcribed by the machine. |
| Submission Metadata | JSON object | User-defined data provided during the creation of the Page’s Submission. See Submission Creation for more information. |
| Submission External ID | string | User-defined ID provided during the creation of the Page’s Submission. See Submission Creation for more information. |
Fields Report
Example Request
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/fields/csv'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
Submission ID,Submission State,Folder IDs,Folder States,Document ID,Document State,Page ID,Page State,File Name,Layout Name,Layout Version Name,Layout Field Name,Layout Field Output Name,Layout Field Page,Data Type,Required,Multiline,Supervision,Consensus Required,Field State,Field Value Source,Output Value Correct,Machine Confidence,Machine Confidence FT,Submission Metadata,Submission External ID,Field Identification Source,Field Occurrence Index
1000000,COMPLETE,[],[],3000000,COMPLETE,2000000,COMPLETE,test_submission.pdf,NYC DMV application,NYC DMV application,Eye Color,eye_color,1,Generic Text,True,False,0,False,COMPLETE,machine_transcription,,,,"{
""custom"": ""data""
}",yourcompanyid#1236,,0
1000000,COMPLETE,[],[],3000000,COMPLETE,2000000,COMPLETE,test_submission.pdf,NYC DMV application,NYC DMV application,Mobile Number,mobile_number,1,Phone Number,True,False,0,False,COMPLETE,machine_transcription,,,,"{
""custom"": ""data""
}",yourcompanyid#1236,,0
1000000,COMPLETE,[],[],3000000,COMPLETE,2000000,COMPLETE,test_submission.pdf,NYC DMV application,NYC DMV application,Mobile Number,mobile_number,1,Phone Number,True,False,0,False,COMPLETE,machine_transcription,,,,"{
""custom"": ""data""
}",yourcompanyid#1236,,1
1000000,COMPLETE,[],[],3000000,COMPLETE,2000000,COMPLETE,test_submission.pdf,NYC DMV application,NYC DMV application,Email,email,1,Email Address,True,False,0,False,COMPLETE,machine_transcription,,,,"{
""custom"": ""data""
}",yourcompanyid#1236,,0
1000000,COMPLETE,[],[],3000000,COMPLETE,2000000,COMPLETE,test_submission.pdf,NYC DMV application,NYC DMV application,SSN,ssn,1,Generic Text,True,False,1,True,COMPLETE,manual_transcription,,,,"{
""custom"": ""data""
}",yourcompanyid#1236,,0
1000000,COMPLETE,[],[],3000000,COMPLETE,2000001,COMPLETE,test_submission.pdf,NYC DMV application,NYC DMV application,Have you had a driver license -Yes,has_driver_license,2,Checkbox,False,False,0,False,COMPLETE,machine_transcription,,,,"{
""custom"": ""data""
}",yourcompanyid#1236,,0
1000000,COMPLETE,"[4000000, 4000001]","[COMPLETE, COMPLETE]",3000001,COMPLETE,2000003,COMPLETE,test_submission.pdf,Social Security Name Change,Social Security Name Change,SIGNATURE,signature,1,Signature,False,False,0,False,COMPLETE,machine_transcription,,,,"{
""custom"": ""data""
}",yourcompanyid#1236,,0
On the Submissions page of our application, you can download a CSV containing data about Fields that meet the criteria you specify. By sending requests to the endpoint described below, you can automate the generation of this report.
Endpoint
GET /api/v5/fields/csv
Request Parameters
Parameters with string values allow you to filter for multiple values of those parameters. To filter for multiple values, repeat the parameter in your request (e.g., page_state=processing&page_state=supervision). Fields matching any of the provided values will be eligible for inclusion in the response.
Use the query parameters below to filter for specific Fields.
| Property | Type | Description |
| submission_start_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Fields from Submissions that were ingested into the application on or after a specific date and time (greater than or equal to operator). |
| submission_start_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filters for Fields from Submissions that were ingested into the application before a specific date and time (less than operator). |
| submission_state | string | Filters for Fields from Submissions that are in a particular state. See States for a list of possible values. |
| submission_complete_time__gte | datetimetz (an ISO-8601 formatted datetime string) | Filters for Fields from Submissions that finished processing on or after a specific date and time (greater than or equal to operator). |
| submission_complete_time__lt | datetimetz (an ISO-8601 formatted datetime string) | Filters for Fields from Submissions that finished processing before a specific date and time (less than operator). |
| submission_halted | boolean | Filters for Fields based on their Submissions' halted state. When set to true, the report will only include data for Fields whose Submissions are in halted jobs. When set to false, the report will not include data for these Fields. |
| exception | string | Filters for Fields that have a particular exception. See Exceptions for a list of possible values. |
| document_status | string | Filters for Fields from Documents that are in a particular state. See States for a list of possible values. |
| layout_kind | string | Filters for Fields from Documents with a particular type of layout. Can be one of the following values: structured, semi_structured, and additional. |
| layout | string | Filters for Fields from Documents whose layouts have a specific UUID. |
| layout_tag | string | Filters for Fields from Documents whose layouts have a specific tag. |
| page_state | string | Filters for Fields from Pages that are in a particular state. See States for a list of possible values. |
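As noted above, string parameters can be repeated to filter for multiple values. One way to produce that repeated form in Python is to pass a list as the parameter value; the standard library's `urlencode` with `doseq=True` expands it, and the `requests` library applies the same expansion when a list is passed in `params` (the parameter values below are placeholders):

```python
from urllib.parse import urlencode

# A list value expands into a repeated query parameter, matching the
# page_state=processing&page_state=supervision form described above.
params = {'page_state': ['processing', 'supervision'], 'layout_kind': 'structured'}
query = urlencode(params, doseq=True)
print(query)  # page_state=processing&page_state=supervision&layout_kind=structured
```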
Response
The response returned is in CSV format. Each row represents a Field and has the following columns:
| Header | Type | Description |
| Submission ID | integer | Unique system-generated Submission ID for the Field’s Submission. |
| Submission State | string | Current state of the Field’s Submission. See States for a list of possible values. |
| Folder IDs v40 | array of integers | Array of the system-generated IDs of Document Folders that include the Document that the Field belongs to. |
| Folder States v40 | array of strings | Array of the current states of the Document Folders that include the Field’s Document. Possible values for the states are COMPLETE or INCOMPLETE. |
| Document ID | integer | Unique system-generated Document ID for the Field’s Document. |
| Document State | string | Current state of the Field’s Document. See States for a list of possible values. |
| Page ID | integer | Unique system-generated Page ID for the Field’s Page. |
| Page State | string | Current state of the Field’s Page. See States for a list of possible values. |
| File Name | string | The name of the file from the Submission’s upload that is associated with the Field’s Page. |
| Layout Name | string | The name of the layout that the Field’s Document was matched to. |
| Layout Version Name | string | The user-defined name for the specific locked layout version that the Field’s Document was matched to. |
| Layout Field Name | string | The name of the Field, as defined during its layout’s creation. |
| Layout Field Output Name v36 | string | The output name of the Field, as defined during its layout’s creation. |
| Layout Field Page | integer | The number of the page in the Document’s layout that the Field’s Page has matched to. For example, if the Page matched to the first page of a layout, this value would be 1. |
| Data Type | string | The type of data that the Field contains. See Data Type for more information about standard data types. |
| Required | boolean | Indicates whether the Field was marked as Required during layout creation. |
| Multiline | boolean | Indicates whether the Field is expected to contain multiple lines of content. |
| Supervision | integer | Provides details about the Supervision settings for the Field, as defined in the layout. Possible values are: 0 - Default, 1 - Consensus Required, 2 - Autotranscribe, and 3 - Always Supervise. |
| Consensus Required | boolean | Indicates whether multiple entries for the Field must result in identical transcriptions before the Field can move to the complete state. |
| Field State | string | Current state of the Field. Can be one of the following values: COMPLETE or FAILED. |
| Field Value Source | string | The way in which the Field’s content was entered into the system. Possible values are machine_transcription, manual_transcription, flexible_extraction (v31+), or custom (v31+). Will be empty if the Field wasn't transcribed. |
| Output Value Correct | boolean | Indicates whether the Field’s content was found to be correct during Transcription QA. Will be empty if the Field was not sampled for Transcription QA. |
| Machine Confidence | float | A number between 0 and 1 indicating the confidence the machine has in the accuracy of the Field’s transcription. Will be empty if a Transcription model was not used to transcribe this Field. |
| Machine Confidence FT | float | A number between 0 and 1 indicating the confidence the machine has in the accuracy of the Field’s transcription. Will be empty if a fine-tuned Transcription model was not used to determine accuracy. See Machine Transcription for confidence data in these cases. |
| Submission Metadata | JSON object | User-defined data provided during the creation of the Field’s Submission. See Submission Creation for more information. |
| Submission External ID | string | User-defined ID provided during the creation of the Field’s Submission. See Submission Creation for more information. |
| Field Identification Source | string | The way in which Field Identification was completed for the Field. For Structured Documents, this value will always be empty. For Semi-structured Documents, the possible values are machine_identification or manual_identification. |
| Field Occurrence Index | integer | A count of the times the Field has been annotated within its Document. The first annotation has an index of 0. |
The following data is returned only if "Export PII in field-level data CSV report" is enabled in Settings.
| Header | Type | Description |
| Transcription | string | Value of the Field as the machine interpreted it or as the data keyer entered it. Will be empty if the Field was marked illegible. |
| Transcription Normalized | string | The normalized value of the Field. The changes applied to the transcription to normalize it vary by Data Type. |
| Consensus Transcription | string | The Consensus Transcription of the Field. Will be empty if the Field wasn't selected for Transcription QA or consensus wasn't reached. |
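Because the Submission Metadata column contains a quoted JSON object (see the example response above), a consumer parsing the streamed CSV needs a CSV-aware reader rather than naive line splitting. A minimal sketch, using an inline sample that mirrors the example response (only three of the report's columns are shown for brevity):

```python
import csv
import io
import json

# Inline sample mirroring the Fields Report CSV; the real report has
# many more columns, as listed in the table above.
sample = io.StringIO(
    'Submission ID,Layout Field Output Name,Submission Metadata\n'
    '1000000,eye_color,"{""custom"": ""data""}"\n'
)

rows = list(csv.DictReader(sample))
# The Submission Metadata column holds a quoted JSON object.
metadata = json.loads(rows[0]['Submission Metadata'])
print(rows[0]['Layout Field Output Name'], metadata)  # eye_color {'custom': 'data'}
```

When consuming the live endpoint, `r.iter_lines()` (as in the request example) can be fed to `csv.DictReader` in the same way.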
System Throughput Report v38
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/throughput'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
# Aggregation=DAILY
# Layouts=[]
# Flows=[]
Date,Submissions,Documents,Pages Processed,Pages Submitted,Fields,Table Cells
2020-08-10 12:00:00 AM,2,3,5,5,39,166
2020-08-11 12:00:00 AM,2,9,12,12,41,167
2020-08-12 12:00:00 AM,2,15,21,21,90,246
2020-08-13 12:00:00 AM,0,0,0,0,0,0
To track the throughput of platform units over a certain time period, you can create System Throughput reports in the “Overview” tab of the application’s Reporting page. With the endpoint described below, you can generate these reports automatically. Responses will contain the report’s data as a CSV file.
Endpoint
GET /api/v5/reporting/throughput
Request Parameters
Use the parameters below to define the contents of your report.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
| aggregation | string | Indicates whether the report will be broken down by day (aggregation=DAILY) or by hour (aggregation=HOURLY). Defaults to DAILY. For hourly reports, only the last 30 days' data is stored and available for download. |
| layout_uuid | string | The UUID of the layout whose data should be included in the report. To filter for multiple UUIDs, repeat the layout_uuid parameter in your request (e.g., layout_uuid=79f22792-a40f-4c9f-8cf4-181449848b35&layout_uuid=5a8e4ae5-d60e-45cf-a8b4-946ca8ae5114). |
| flow_uuid | string | The UUID of the flow whose data should be included in the report. To filter for multiple UUIDs, repeat the flow_uuid parameter in your request (e.g., flow_uuid=94cdb688-d38f-4f71-ae9c-e60209d23b95&flow_uuid=74fca55f-d8a6-439a-a80d-670f9a55543a). |
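Combining these parameters, a request for an hourly breakdown filtered to specific layouts might be built as follows (a sketch only; the base URL is a placeholder and the UUIDs are the example values from the table above):

```python
from datetime import date
from urllib.parse import urljoin, urlencode

base_url = 'https://on-premise-server.yourcompany.com/'  # placeholder instance
endpoint_url = urljoin(base_url, 'api/v5/reporting/throughput')

# Hourly aggregation is limited to the last 30 days of stored data;
# repeated layout_uuid values are expressed as a list.
params = {
    'start_date': date(2020, 8, 12).isoformat(),
    'end_date': date(2020, 8, 13).isoformat(),
    'aggregation': 'HOURLY',
    'layout_uuid': ['79f22792-a40f-4c9f-8cf4-181449848b35',
                    '5a8e4ae5-d60e-45cf-a8b4-946ca8ae5114'],
}
query = urlencode(params, doseq=True)
print(endpoint_url + '?' + query)
```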
Response
Each row represents a day or an hour (depending on the value of the aggregation filter) within your report’s date range and has the following columns:
| Header | Type | Description |
| Date | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour or a 1-hour time period according to the aggregation filter. |
| Submissions | integer | The number of Submissions processed during the time period. |
| Documents | integer | The number of Documents processed during the time period. |
| Pages Processed | integer | The number of Pages Processed during the time period. |
| Pages Submitted | integer | The number of Pages Submitted during the time period. |
| Fields | integer | The number of Fields processed during the time period. |
| Table Cells | integer | The number of Table Cells processed during the time period. |
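As the example response shows, the CSV is preceded by `#`-prefixed metadata lines, which a consumer should skip before parsing. A minimal sketch using an inline sample that mirrors the example response (truncated to three columns for brevity):

```python
import csv
import io

# Inline sample mirroring the throughput response; '#' lines are report
# metadata, not CSV data.
sample = io.StringIO(
    '# Period Start=2020-08-10\n'
    '# Aggregation=DAILY\n'
    'Date,Submissions,Documents\n'
    '2020-08-10 12:00:00 AM,2,3\n'
    '2020-08-11 12:00:00 AM,2,9\n'
)

data_lines = (line for line in sample if not line.startswith('#'))
rows = list(csv.DictReader(data_lines))
total_submissions = sum(int(r['Submissions']) for r in rows)
print(total_submissions)  # 4
```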
Automation Report
Example Transcription Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/transcription_automation'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Transcription Response
# Layouts=[]
# Flows=[]
Period start,Transcription Automation
2020-08-10 12:00:00 AM,0.50244
2020-08-11 12:00:00 AM,0.70192
2020-08-12 12:00:00 AM,0.83333
2020-08-13 12:00:00 AM,
Example Identification Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/identification_automation'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Identification Response
# Layouts=[]
# Flows=[]
Period start,Identification Automation
2020-08-10 12:00:00 AM,0.5641
2020-08-11 12:00:00 AM,0.54054
2020-08-12 12:00:00 AM,0.82927
2020-08-13 12:00:00 AM,
Example Table Identification Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/table_identification_automation'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Table Identification Response
# Layouts=[]
# Flows=[]
Period start,Identification Automation
2020-08-10 12:00:00 AM,0.5641
2020-08-11 12:00:00 AM,0.54054
2020-08-12 12:00:00 AM,0.82927
2020-08-13 12:00:00 AM,
Example Classification Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/classification_automation'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Classification Response
Period start,Classification Automation
2020-08-10 12:00:00 AM,
2020-08-11 12:00:00 AM,1.0
2020-08-12 12:00:00 AM,0.85714
2020-08-13 12:00:00 AM,
If you would like to see the percentage of tasks that Hyperscience has automated, you can create an Automation report in the “Overview” tab of our application’s Reporting page. To generate these reports automatically, you can send requests to the endpoints described below. Responses will contain the report’s data as a CSV file.
Endpoints
- Transcription Automation:
GET /api/v5/reporting/transcription_automation
- Identification Automation:
GET /api/v5/reporting/identification_automation
- Table Identification Automation:
GET /api/v5/reporting/table_identification_automation
- Classification Automation:
GET /api/v5/reporting/classification_automation
Request Parameters
Use the parameters below to define the contents of your report.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
| layout_uuid | string | The UUID of the layout whose data should be included in the report. To filter for multiple UUIDs, repeat the layout_uuid parameter in your request (e.g., layout_uuid=79f22792-a40f-4c9f-8cf4-181449848b35&layout_uuid=5a8e4ae5-d60e-45cf-a8b4-946ca8ae5114). Can be used when retrieving Transcription Automation and Identification Automation data. |
| flow_uuid | string | The UUID of the flow whose data should be included in the report. To filter for multiple UUIDs, repeat the flow_uuid parameter in your request (e.g., flow_uuid=94cdb688-d38f-4f71-ae9c-e60209d23b95&flow_uuid=74fca55f-d8a6-439a-a80d-670f9a55543a). |
Transcription Automation Response
Each row in the response represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Transcription Automation | float | The percentage of Transcription tasks that were automated by the system, given as a number between 0 and 1. Will be empty if no Fields were transcribed during this period. |
Identification Automation Response
Each row in the response represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Identification Automation | float | The percentage of Field ID tasks that were automated by the system, given as a number between 0 and 1. Will be empty if no Fields were located during this period. |
Table Identification Automation Response
Each row in the response represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Identification Automation | float | The percentage of Table Cells that were identified automatically by the system, given as a number between 0 and 1. Will be empty if no Table Cells were located during this period. |
Classification Automation Response
Each row in the response represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Classification Automation | float | The percentage of Classification tasks that were automated by the system, given as a number between 0 and 1. Will be empty if no Pages were classified during this period. |
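As the response tables note, the automation value is empty for days with no relevant activity, so a consumer should distinguish "no data" from a rate of zero. A minimal sketch, using an inline sample that mirrors the Transcription Automation example response:

```python
import csv
import io

# Inline sample mirroring the Transcription Automation response; the
# final day has no transcribed Fields, so its value is empty.
sample = io.StringIO(
    'Period start,Transcription Automation\n'
    '2020-08-10 12:00:00 AM,0.50244\n'
    '2020-08-13 12:00:00 AM,\n'
)

rates = {}
for row in csv.DictReader(sample):
    value = row['Transcription Automation']
    # An empty cell means no eligible work that day, not a 0% rate.
    rates[row['Period start']] = float(value) if value else None

print(rates)
```

The same pattern applies to the Identification, Table Identification, and Classification responses (after skipping any `#`-prefixed metadata lines, which the Classification example lacks but the others include).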
Output Accuracy Report
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/output_accuracy'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Document Type=all
# Layouts=[]
# Field Type=all
# Flows=[]
# Entry Type=all
# Layout Entries=[]
Period start,Field Accuracy,Margin of Error,Calculation Points
2020-08-10 12:00:00 AM,0.33333,0.53344,3
2020-08-11 12:00:00 AM,0.66667,0.53344,3
2020-08-12 12:00:00 AM,0.42857,0.25923,14
2020-08-13 12:00:00 AM,,,
Our Field Output Accuracy report shows the accuracy of the work completed by keyers and our system, as determined during Quality Assurance for sampled fields. You can manually create this report in CSV format in the “Overview” tab of our application’s Reporting page. With the endpoint described below, you can generate Field Output Accuracy reports automatically. Responses will contain the report’s data as a CSV file.
Endpoint
GET /api/v5/reporting/output_accuracy
Request Parameters
Use the parameters below to define the contents of your report.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
| layout_uuid | string | The UUID of the layout whose data should be included in the report. To filter for multiple UUIDs, repeat the layout_uuid parameter in your request (e.g., layout_uuid=79f22792-a40f-4c9f-8cf4-181449848b35&layout_uuid=5a8e4ae5-d60e-45cf-a8b4-946ca8ae5114). |
| field_type | string | The type of field you would like to retrieve accuracy data for. Can be one of the following values: entry, checkbox, or signature. |
| document_type | string | The type of Document you would like to retrieve accuracy data for. Can be one of the following values: structured or semi_structured. |
| flow_uuid | string | The UUID of the flow whose data should be included in the report. To filter for multiple UUIDs, repeat the flow_uuid parameter in your request (e.g., flow_uuid=94cdb688-d38f-4f71-ae9c-e60209d23b95&flow_uuid=74fca55f-d8a6-439a-a80d-670f9a55543a). |
| entry_type v40 | string | The type of entry you would like to retrieve accuracy data for. Can be one of the following values: field or table_cell. |
| layout_entry_uuid v40 | string | The UUID of the layout entry whose data should be included in the report. To filter for multiple UUIDs, repeat the layout_entry_uuid parameter in your request (e.g., layout_entry_uuid=94cdb688-d38f-4f71-ae9c-e60209d23b95&layout_entry_uuid=74fca55f-d8a6-439a-a80d-670f9a55543a). |
Response
Each row represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Field Accuracy | float | A number between 0 and 1 indicating the average accuracy of Field transcriptions, as determined during Transcription QA. |
| Margin of Error | float | A number between 0 and 1 indicating the random sampling error in the Field Accuracy calculation. |
| Calculation Points | integer | The number of transcriptions used to calculate Field Accuracy. |
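Field Accuracy and Margin of Error together describe an interval estimate derived from the sampled transcriptions. As a sketch of how a consumer might interpret one row (values taken from the 2020-08-12 row of the example response above):

```python
# Values from the 2020-08-12 row of the example response.
field_accuracy = 0.42857
margin_of_error = 0.25923
calculation_points = 14

# The estimate spans accuracy +/- margin of error, clamped to [0, 1];
# more calculation points generally mean a narrower margin.
low = max(0.0, field_accuracy - margin_of_error)
high = min(1.0, field_accuracy + margin_of_error)
print(f'{low:.5f} to {high:.5f}, from {calculation_points} sampled transcriptions')
```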
Manual vs. Machine Accuracy Report
Example Transcription Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/transcription_accuracy'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Transcription Response
# Field Type=all
# Layouts=[]
# Flows=[]
# Entry Type=all
# Layout Entries=[]
Period start,Machine Accuracy,Machine Accuracy Margin of Error,Machine Calculation Points,Manual Accuracy,Manual Accuracy Margin of Error,Manual Calculation Points
2020-08-10 12:00:00 AM,0.33333,0.53344,3,0.5,0.69296,2
2020-08-11 12:00:00 AM,0.66667,0.53344,3,,,
2020-08-12 12:00:00 AM,0.42857,0.25923,14,0.33333,0.23856,15
2020-08-13 12:00:00 AM,,,,,,
Example Identification Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/identification_accuracy'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Identification Response
# Layouts=[]
# Flows=[]
# Layout Fields=[]
Period start,Machine Accuracy,Machine Accuracy Margin of Error,Machine Calculation Points,Manual Accuracy,Manual Accuracy Margin of Error,Manual Calculation Points
2020-08-10 12:00:00 AM,0.33333,0.53344,3,0.5,0.69296,2
2020-08-11 12:00:00 AM,0.66667,0.53344,3,,,
2020-08-12 12:00:00 AM,,,,0.4,0.42941,5
2020-08-13 12:00:00 AM,,,,,,
Example Table Identification Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/table_identification_accuracy'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Table Identification Response
# Layouts=[]
# Flows=[]
# Layout Fields=[]
Period start,Machine Accuracy,Machine Accuracy Margin of Error,Machine Calculation Points,Manual Accuracy,Manual Accuracy Margin of Error,Manual Calculation Points
2020-08-10 12:00:00 AM,0.33333,0.53344,3,0.5,0.69296,2
2020-08-11 12:00:00 AM,0.66667,0.53344,3,,,
2020-08-12 12:00:00 AM,,,,0.4,0.42941,5
2020-08-13 12:00:00 AM,,,,,,
Example Classification Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/classification_accuracy'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Classification Response
# Flows=[]
Period start,Machine Accuracy,Machine Accuracy Margin of Error,Machine Calculation Points,Manual Accuracy,Manual Accuracy Margin of Error,Manual Calculation Points
2020-08-10 12:00:00 AM,0.33333,0.53344,3,0.5,0.69296,2
2020-08-11 12:00:00 AM,0.66667,0.53344,3,,,
2020-08-12 12:00:00 AM,,,,0.4,0.42941,5
2020-08-13 12:00:00 AM,,,,,,
Example Full Page Transcription Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/full_page_transcription_accuracy'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Full Page Transcription Response
# Segment Type=all
# Flows=[]
Period start,Machine Accuracy,Machine Accuracy Margin of Error,Machine Calculation Points,Manual Accuracy,Manual Accuracy Margin of Error,Manual Calculation Points
2020-08-10 12:00:00 AM,0.33333,0.53344,3,0.5,0.69296,2
2020-08-11 12:00:00 AM,0.66667,0.53344,3,,,
2020-08-12 12:00:00 AM,0.4,0.42941,5,0.375,0.33548,8
2020-08-13 12:00:00 AM,,,,,,
To compare the output accuracy of tasks performed by your keyers to those performed by our system, you can create a Manual Accuracy vs. Machine Accuracy report. We offer five different types of these reports in CSV format: Transcription Accuracy, Identification Accuracy, Table Identification Accuracy, Classification Accuracy, and Full Page Transcription Accuracy.
You can manually generate this report in the Accuracy tab of our application’s Reporting page. To automate the creation of Manual Accuracy vs. Machine Accuracy reports, you can send requests to the endpoints described below. Responses will contain the report’s data as a CSV file.
Endpoints
- Transcription Accuracy data:
GET /api/v5/reporting/transcription_accuracy
- Identification Accuracy data:
GET /api/v5/reporting/identification_accuracy
- Table Identification Accuracy data:
GET /api/v5/reporting/table_identification_accuracy
- Classification Accuracy data:
GET /api/v5/reporting/classification_accuracy
- Full Page Transcription Accuracy data v42:
GET /api/v5/reporting/full_page_transcription_accuracy
Request Parameters
Use the parameters below to define the contents of your report.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
| layout_uuid | string | The UUID of the layout whose data should be included in the report. To filter for multiple UUIDs, repeat the layout_uuid parameter in your request (e.g., layout_uuid=79f22792-a40f-4c9f-8cf4-181449848b35&layout_uuid=5a8e4ae5-d60e-45cf-a8b4-946ca8ae5114). Can be used when retrieving Transcription Accuracy and Identification Accuracy data. |
| field_type | string | The type of field you would like to retrieve accuracy data for. Can be one of the following values: entry, checkbox, or signature. Can be used when retrieving Transcription Accuracy data. |
| flow_uuid | string | The UUID of the flow whose data should be included in the report. To filter for multiple UUIDs, repeat the flow_uuid parameter in your request (e.g., flow_uuid=94cdb688-d38f-4f71-ae9c-e60209d23b95&flow_uuid=74fca55f-d8a6-439a-a80d-670f9a55543a). Can be used when retrieving any of the reports. |
| entry_type v40 | string | The type of entry you would like to retrieve accuracy data for. Can be one of the following values: field or table_cell. Can be used when retrieving Transcription Accuracy data. |
| layout_entry_uuid v40 | string | The UUID of the layout entry whose data should be included in the report. To filter for multiple UUIDs, repeat the layout_entry_uuid parameter in your request (e.g., layout_entry_uuid=94cdb688-d38f-4f71-ae9c-e60209d23b95&layout_entry_uuid=74fca55f-d8a6-439a-a80d-670f9a55543a). Can be used when retrieving Transcription Accuracy data, Identification Accuracy data, or Table Identification Accuracy data. |
| segment_type v42 | string | The type of segment you would like to retrieve accuracy data for. Can be one of the following values: text, checkbox, or signature. Can be used only when retrieving Full Page Transcription Accuracy data. |
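As noted in the table above, filtering for multiple UUIDs means repeating the same query parameter once per value. A minimal sketch of building such a query string with the standard library (the UUIDs are the sample values from the table; the requests library produces the same result when a params value is a list):

```python
from datetime import date
from urllib.parse import urlencode

# urlencode(..., doseq=True) expands a list value into repeated key=value pairs.
params = {
    'start_date': date(2020, 8, 10).isoformat(),
    'layout_uuid': [
        '79f22792-a40f-4c9f-8cf4-181449848b35',
        '5a8e4ae5-d60e-45cf-a8b4-946ca8ae5114',
    ],
}
query_string = urlencode(params, doseq=True)
print(query_string)
```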
Transcription Accuracy Response
Each row represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Machine Accuracy | float | A number between 0 and 1 indicating the average accuracy of the machine’s transcriptions, as determined during Transcription QA. |
| Machine Accuracy Margin of Error | float | A number between 0 and 1 indicating the random sampling error in the Machine Accuracy calculation. |
| Machine Calculation Points | integer | The number of transcriptions used to calculate Machine Accuracy. |
| Manual Accuracy | float | A number between 0 and 1 indicating the average accuracy of manual transcriptions, as determined during Transcription QA. |
| Manual Accuracy Margin of Error | float | A number between 0 and 1 indicating the random sampling error in the Manual Accuracy calculation. |
| Manual Calculation Points | integer | The number of transcriptions used to calculate Manual Accuracy. |
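Because the response mixes `#` metadata lines with CSV data, and leaves cells empty on days without QA activity, a little care is needed when parsing it. A minimal sketch using the standard library, with sample rows taken from the example responses above:

```python
import csv
import io

# Raw report body shaped like the example responses above: '#' metadata
# lines followed by CSV data, with empty cells on days without QA activity.
raw_report = """# Layouts=[]
# Flows=[]
Period start,Machine Accuracy,Machine Accuracy Margin of Error,Machine Calculation Points,Manual Accuracy,Manual Accuracy Margin of Error,Manual Calculation Points
2020-08-10 12:00:00 AM,0.33333,0.53344,3,0.5,0.69296,2
2020-08-11 12:00:00 AM,0.66667,0.53344,3,,,
"""

# Drop the '#' metadata lines before handing the rest to csv.DictReader.
data_lines = [line for line in raw_report.splitlines() if not line.startswith('#')]
rows = list(csv.DictReader(io.StringIO('\n'.join(data_lines))))

for row in rows:
    # An empty cell means no data was recorded for that day.
    machine = float(row['Machine Accuracy']) if row['Machine Accuracy'] else None
    print(row['Period start'], machine)
```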
Identification Accuracy Response
Each row represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Machine Accuracy | float | A number between 0 and 1 indicating the average accuracy of machine-identified Field locations, as determined during Field ID QA. |
| Machine Accuracy Margin of Error | float | A number between 0 and 1 indicating the random sampling error in the Machine Accuracy calculation. |
| Machine Calculation Points | integer | The number of Fields used to calculate Machine Accuracy. |
| Manual Accuracy | float | A number between 0 and 1 indicating the average accuracy of manually identified Field locations, as determined during Field ID QA. |
| Manual Accuracy Margin of Error | float | A number between 0 and 1 indicating the random sampling error in the Manual Accuracy calculation. |
| Manual Calculation Points | integer | The number of Fields used to calculate Manual Accuracy. |
Table Identification Accuracy Response
Each row represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Machine Accuracy | float | A number between 0 and 1 indicating the average accuracy of machine-identified Table Cell locations, as determined during Table ID QA. |
| Machine Accuracy Margin of Error | float | A number between 0 and 1 indicating the random sampling error in the Machine Accuracy calculation. |
| Machine Calculation Points | integer | The number of Table Cells used to calculate Machine Accuracy. |
| Manual Accuracy | float | A number between 0 and 1 indicating the average accuracy of manually identified Table Cell locations, as determined during Table ID QA. |
| Manual Accuracy Margin of Error | float | A number between 0 and 1 indicating the random sampling error in the Manual Accuracy calculation. |
| Manual Calculation Points | integer | The number of Table Cells used to calculate Manual Accuracy. |
Classification Accuracy Response
Each row represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Machine Accuracy | float | A number between 0 and 1 indicating the average accuracy of the machine’s Page classifications, as determined during Classification QA. |
| Machine Accuracy Margin of Error | float | A number between 0 and 1 indicating the random sampling error in the Machine Accuracy calculation. |
| Machine Calculation Points | integer | The number of Pages used to calculate Machine Accuracy. |
| Manual Accuracy | float | A number between 0 and 1 indicating the average accuracy of manual Page classifications, as determined during Classification QA. |
| Manual Accuracy Margin of Error | float | A number between 0 and 1 indicating the random sampling error in the Manual Accuracy calculation. |
| Manual Calculation Points | integer | The number of Pages used to calculate Manual Accuracy. |
Full Page Transcription Accuracy Response
Each row represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Machine Accuracy | float | A number between 0 and 1 indicating the average accuracy of the machine’s Segment transcriptions, as determined during Full Page Transcription QA. |
| Machine Accuracy Margin of Error | float | A number between 0 and 1 indicating the random sampling error in the Machine Accuracy calculation. |
| Machine Calculation Points | integer | The number of Segments used to calculate Machine Accuracy. |
| Manual Accuracy | float | A number between 0 and 1 indicating the average accuracy of manual Segment transcriptions, as determined during Full Page Transcription QA. |
| Manual Accuracy Margin of Error | float | A number between 0 and 1 indicating the random sampling error in the Manual Accuracy calculation. |
| Manual Calculation Points | integer | The number of Segments used to calculate Manual Accuracy. |
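The accuracy and margin-of-error columns can be combined into an interval estimate around the reported accuracy. A small sketch (clamping the interval to the valid [0, 1] range is our own convention, not part of the API):

```python
def accuracy_interval(accuracy, margin):
    # Clamp to the valid [0, 1] accuracy range.
    return max(0.0, accuracy - margin), min(1.0, accuracy + margin)

# Values from the first row of the example response above.
low, high = accuracy_interval(0.33333, 0.53344)
print(low, high)
```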
Time to Completion Report v38
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/time_to_completion'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
# Layouts=[]
# Flows=[]
Date,Average Per Submission,Average Per Document
2020-08-10 12:00:00 AM,11.725,11.5
2020-08-11 12:00:00 AM,16.945,16.815
2020-08-12 12:00:00 AM,16.945,14.025
2020-08-13 12:00:00 AM,0.0,0.0
To track the time it takes to complete a submission or a document, you can create Time to Completion reports in the “Processing Time” tab of the application’s Reporting page. With the endpoint described below, you can generate these reports automatically. Responses will contain the report’s data as a CSV file.
Endpoint
GET /api/v5/reporting/time_to_completion
Request Parameters
Use the parameters below to define the contents of your report.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
| layout_uuid | string | The UUID of the layout whose data should be included in the report. To filter for multiple UUIDs, repeat the layout_uuid parameter in your request (e.g., layout_uuid=79f22792-a40f-4c9f-8cf4-181449848b35&layout_uuid=5a8e4ae5-d60e-45cf-a8b4-946ca8ae5114). |
| flow_uuid | string | The UUID of the flow whose data should be included in the report. To filter for multiple UUIDs, repeat the flow_uuid parameter in your request (e.g., flow_uuid=94cdb688-d38f-4f71-ae9c-e60209d23b95&flow_uuid=74fca55f-d8a6-439a-a80d-670f9a55543a). |
Response
Each row represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Date | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Average Per Submission | float | The average time in seconds for Submissions to complete. |
| Average Per Document | float | The average time in seconds for Documents to complete since the related submission started. |
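Once parsed, the daily averages can be aggregated further. A sketch that computes an unweighted mean of the daily per-Submission averages, using the sample rows from the example response above (note this is a mean of means; days with higher volume are not weighted more heavily):

```python
import csv
import io

# Sample rows in the Time to Completion format shown above.
report_csv = """Date,Average Per Submission,Average Per Document
2020-08-10 12:00:00 AM,11.725,11.5
2020-08-11 12:00:00 AM,16.945,16.815
2020-08-12 12:00:00 AM,16.945,14.025
"""

rows = list(csv.DictReader(io.StringIO(report_csv)))
# Unweighted mean of the daily per-Submission averages.
daily = [float(r['Average Per Submission']) for r in rows]
overall = sum(daily) / len(daily)
print(round(overall, 3))
```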
Manual Working Time Report
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/manual_working_time'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
# Layouts=[]
# Flows=[]
# Unit=submission
Period start,Page Sorting - Waiting,Page Sorting - Active,Field Identification - Waiting,Field Identification - Active,Table Identification - Waiting,Table Identification - Active,Field Transcription - Waiting,Field Transcription - Active,Table Transcription - Waiting,Table Transcription - Active,Flexible Extraction - Waiting,Flexible Extraction - Active,Custom Supervision - Waiting,Custom Supervision - Active
2020-08-10 12:00:00 AM,0.89,1.385,,,,,,,,,,,,
2020-08-11 12:00:00 AM,,,,,,,1.07,2.14,1.17,7.115,,,,
2020-08-12 12:00:00 AM,3.225,7.44,2.665,5.44,2.87,9.06,5.165,9.64,3.385,11.115,2.085,6.435,3.21,12.3
2020-08-13 12:00:00 AM,,,,,,,,,,,,,,
Our Manual Working Time reports allow you to monitor the average amount of time your keyers have spent on Page Sorting, Identification, and Transcription tasks over a certain time period. You can manually create these reports in CSV format in the “Processing Time” tab of our application’s Reporting page. To automate the generation of Manual Working Time reports, you can send requests to the endpoint described below. Responses will contain the report’s data as a CSV file.
While the application allows you to filter by Active or Wait times, reports generated via the API will include both Active and Wait times for the specified date range. By default, reports created with the API include average times for Submissions; to retrieve average times for Documents instead, set the unit parameter to 'document'.
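The `#` lines at the top of each response echo the report's filters as `Key=Value` pairs. A small hypothetical helper that separates them from the CSV payload:

```python
# Splits a raw report body into a metadata dict and the CSV payload.
def split_report(text):
    meta, data = {}, []
    for line in text.splitlines():
        if line.startswith('# ') and '=' in line:
            key, _, value = line[2:].partition('=')
            meta[key] = value
        else:
            data.append(line)
    return meta, '\n'.join(data)

# Sample body shaped like the example response above.
example = """# Period Start=2020-08-10
# Unit=submission
Period start,Page Sorting - Waiting,Page Sorting - Active
2020-08-10 12:00:00 AM,0.89,1.385
"""
meta, csv_text = split_report(example)
print(meta)
```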
Endpoint
GET /api/v5/reporting/manual_working_time
Request Parameters
Use the parameters below to define the contents of your report.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
| layout_uuid | string | The UUID of the layout whose data should be included in the report. The layout_uuid filter can only be used when the unit v38 filter is 'document'. To filter for multiple UUIDs, repeat the layout_uuid parameter in your request (e.g., layout_uuid=79f22792-a40f-4c9f-8cf4-181449848b35&layout_uuid=5a8e4ae5-d60e-45cf-a8b4-946ca8ae5114). |
| flow_uuid | string | The UUID of the flow whose data should be included in the report. To filter for multiple UUIDs, repeat the flow_uuid parameter in your request (e.g., flow_uuid=94cdb688-d38f-4f71-ae9c-e60209d23b95&flow_uuid=74fca55f-d8a6-439a-a80d-670f9a55543a). |
| unit v38 | string | Indicates the processing unit whose data should be included in the report. It can be either 'submission' or 'document'. Defaults to 'submission'. The layout_uuid filter can only be used when the unit v38 filter is 'document'. |
Response
All average times are measured in seconds. Each row represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Page Sorting - Waiting | float | The average time that a Page Sorting task was in the Work Queue before a keyer began working on it. |
| Page Sorting - Active | float | The average time taken to complete a Page Sorting task after a keyer began working on it. |
| Field Identification - Waiting | float | The average time that a Field Identification task was in the Work Queue before a keyer began working on it. |
| Field Identification - Active | float | The average time taken to complete a Field Identification task after a keyer began working on it. |
| Table Identification - Waiting | float | The average time that a Table Identification task was in the Work Queue before a keyer began working on it. |
| Table Identification - Active | float | The average time taken to complete a Table Identification task after a keyer began working on it. |
| Field Transcription - Waiting | float | The average time that a Field Transcription task was in the Work Queue before a keyer began working on it. |
| Field Transcription - Active | float | The average time taken to complete a Field Transcription task after a keyer began working on it. |
| Table Transcription - Waiting | float | The average time that a Table Transcription task was in the Work Queue before a keyer began working on it. |
| Table Transcription - Active | float | The average time taken to complete a Table Transcription task after a keyer began working on it. |
| Flexible Extraction - Waiting v31 | float | The average time that a Flexible Extraction task was in the Work Queue before a keyer began working on it. |
| Flexible Extraction - Active v31 | float | The average time taken to complete a Flexible Extraction task after a keyer began working on it. |
| Custom Supervision - Waiting v38 | float | The average time that a Custom Supervision task was in the Work Queue before a keyer began working on it. |
| Custom Supervision - Active v38 | float | The average time taken to complete a Custom Supervision task after a keyer began working on it. |
Machine Working Time Report
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/machine_working_time'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
# Layouts=[]
# Flows=[]
# Unit=submission
Period start,Page Sorting - Waiting,Page Sorting - Active,Identification - Waiting,Identification - Active,Transcription - Waiting,Transcription - Active,Full Page Transcription - Waiting,Full Page Transcription - Active
2020-08-10 12:00:00 AM,5.635,3.22,,,,,,
2020-08-11 12:00:00 AM,,,2.24,9.255,,,0.89,8.39
2020-08-12 12:00:00 AM,3.225,7.44,4.535,14.5,4.55,19.755,,
2020-08-13 12:00:00 AM,,,,,,,,
To see the average time our system spent performing Page Sorting, Identification, Transcription, and Full Page Transcription tasks over a certain time period, you can download the Machine Working Time report. The report is available in CSV format in the “Processing Time” tab of our application’s Reporting page. You can also automate the creation of Machine Working Time reports by sending requests to the endpoint described below. Responses will contain the report’s data as a CSV file.
While the application allows you to filter by Active or Wait times, reports generated via the API will include both Active and Wait times for the specified date range. By default, reports created with the API include average times for Submissions; to retrieve average times for Documents instead, set the unit parameter to 'document'.
Endpoint
GET /api/v5/reporting/machine_working_time
Request Parameters
Use the parameters below to define the contents of your report.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
| layout_uuid | string | The UUID of the layout whose data should be included in the report. The layout_uuid filter can only be used when the unit v38 filter is 'document'. To filter for multiple UUIDs, repeat the layout_uuid parameter in your request (e.g., layout_uuid=79f22792-a40f-4c9f-8cf4-181449848b35&layout_uuid=5a8e4ae5-d60e-45cf-a8b4-946ca8ae5114). |
| flow_uuid | string | The UUID of the flow whose data should be included in the report. To filter for multiple UUIDs, repeat the flow_uuid parameter in your request (e.g., flow_uuid=94cdb688-d38f-4f71-ae9c-e60209d23b95&flow_uuid=74fca55f-d8a6-439a-a80d-670f9a55543a). |
| unit v38 | string | Indicates the processing unit whose data should be included in the report. It can be either 'submission' or 'document'. Defaults to 'submission'. The layout_uuid filter can only be used when the unit v38 filter is 'document'. |
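Putting the unit and layout_uuid rules together, a sketch of building a Document-level request URL (the hostname is the placeholder used throughout these examples):

```python
from datetime import date
from urllib.parse import urljoin, urlencode

# Hypothetical hostname, matching the placeholder used in this guide.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint_url = urljoin(base_url, 'api/v5/reporting/machine_working_time')

# layout_uuid is only valid when unit is 'document' (see the table above).
params = {
    'start_date': date(2020, 8, 10).isoformat(),
    'unit': 'document',
    'layout_uuid': '79f22792-a40f-4c9f-8cf4-181449848b35',
}
full_url = endpoint_url + '?' + urlencode(params)
print(full_url)
```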
Response
All average times are measured in seconds. Each row represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Page Sorting - Waiting | float | The average time between the triggering of a Page Sorting task and when the machine began processing it. |
| Page Sorting - Active | float | The average time taken to complete a Page Sorting task after the machine began processing it. |
| Identification - Waiting | float | The average time between the triggering of an Identification task and when the machine began processing it. |
| Identification - Active | float | The average time taken to complete an Identification task after the machine began processing it. |
| Transcription - Waiting | float | The average time between the triggering of a Transcription task and when the machine began processing it. |
| Transcription - Active | float | The average time taken to complete a Transcription task after the machine began processing it. |
| Full Page Transcription - Waiting v42 | float | The average time between the triggering of a Full Page Transcription task and when the machine began processing it. |
| Full Page Transcription - Active v42 | float | The average time taken to complete a Full Page Transcription task after the machine began processing it. |
Supervision Volume Report
Example Transcription Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/transcription_supervision_volume'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Transcription Response
# Layouts=[]
# Flows=[]
Period start,Transcriptions,Consensus,QA
2020-08-10 12:00:00 AM,1,3,4
2020-08-11 12:00:00 AM,7,1,0
2020-08-12 12:00:00 AM,0,0,10
2020-08-13 12:00:00 AM,,,
Example Identification Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/identification_supervision_volume'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Identification Response
# Layouts=[]
# Flows=[]
Period start,Field ID,Field ID QA,Table ID,Table ID QA
2020-08-10 12:00:00 AM,1,3,4,6
2020-08-11 12:00:00 AM,7,1,0,2
2020-08-12 12:00:00 AM,0,0,10,0
2020-08-13 12:00:00 AM,,,,
Example Classification Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/classification_supervision_volume'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Classification Response
# Layouts=[]
# Flows=[]
Period start,Classifications,QA
2020-08-10 12:00:00 AM,3,2
2020-08-11 12:00:00 AM,4,3
2020-08-12 12:00:00 AM,0,0
2020-08-13 12:00:00 AM,,
Example Full Page Transcription Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/full_page_transcription_supervision_volume'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Full Page Transcription Response
# Flows=[]
Period start,QA
2020-08-10 12:00:00 AM,28.0
2020-08-11 12:00:00 AM,
2020-08-12 12:00:00 AM,14.0
2020-08-13 12:00:00 AM,
To track the volume of Supervision tasks over a certain time period, you can create Supervision Volume reports in the “User Performance” tab of the application’s Reporting page.
You can generate these reports automatically by sending requests to the endpoints below. Responses will contain the report’s data as a CSV file.
Endpoints
- Transcription Supervision Volume:
GET /api/v5/reporting/transcription_supervision_volume
- Identification Supervision Volume:
GET /api/v5/reporting/identification_supervision_volume
- Classification Supervision Volume v42:
GET /api/v5/reporting/classification_supervision_volume
- Full Page Transcription Supervision Volume v42:
GET /api/v5/reporting/full_page_transcription_supervision_volume
Request Parameters
Use the parameters below to define the contents of your report.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
| layout_uuid | string | The UUID of the layout whose data should be included in the report. To filter for multiple UUIDs, repeat the layout_uuid parameter in your request (e.g., layout_uuid=79f22792-a40f-4c9f-8cf4-181449848b35&layout_uuid=5a8e4ae5-d60e-45cf-a8b4-946ca8ae5114). Can be used when retrieving Transcription Supervision Volume or Identification Supervision Volume data. |
| flow_uuid | string | The UUID of the flow whose data should be included in the report. To filter for multiple UUIDs, repeat the flow_uuid parameter in your request (e.g., flow_uuid=94cdb688-d38f-4f71-ae9c-e60209d23b95&flow_uuid=74fca55f-d8a6-439a-a80d-670f9a55543a). |
Transcription Supervision Volume Response
Each row in the response represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Transcriptions | integer | The number of manual transcriptions completed during Transcription Supervision. |
| Consensus | integer | The number of manual transcriptions completed during Transcription Consensus. |
| QA | integer | The number of manual transcriptions completed during Transcription QA. |
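A sketch of totaling the daily volumes across the report's date range, using the sample rows from the example Transcription response above (empty cells are skipped):

```python
import csv
import io

# Sample rows in the Transcription Supervision Volume format shown above.
volume_csv = """Period start,Transcriptions,Consensus,QA
2020-08-10 12:00:00 AM,1,3,4
2020-08-11 12:00:00 AM,7,1,0
2020-08-12 12:00:00 AM,0,0,10
2020-08-13 12:00:00 AM,,,
"""

rows = list(csv.DictReader(io.StringIO(volume_csv)))
# Sum each column across the date range, skipping empty cells.
totals = {
    col: sum(int(r[col]) for r in rows if r[col])
    for col in ('Transcriptions', 'Consensus', 'QA')
}
print(totals)
```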
Identification Supervision Volume Response
Each row in the response represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Field ID | integer | The number of Fields whose locations were identified during Field ID Supervision. |
| Field ID QA | integer | The number of Fields whose locations were reviewed during Field ID QA. |
| Table ID | integer | The number of Table Cells whose locations were identified during Table ID Supervision. |
| Table ID QA | integer | The number of Table Cells whose locations were reviewed during Table ID QA. |
Classification Supervision Volume Response
Each row in the response represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Classifications | integer | The number of Pages categorized during Classification Supervision. |
| QA | integer | The number of Pages categorized during Classification QA. |
Full Page Transcription Supervision Volume Response
Each row in the response represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Period start | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| QA | integer | The number of Segment transcriptions completed during Full Page Transcription QA. |
Performance Distribution Report
Example Transcription Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/transcription_supervision_performance'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Transcription Response
# Layouts=[]
# Period Start=2020-08-10
# Period End=2020-08-13
# Flows=[]
User,Average Accuracy,Calculation Points,Correct Fields,Entries Completed
emma.williams,0.33333,15,5,20
oliver.smith,0.5,2,1,4
Example Identification Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/identification_supervision_performance'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Identification Response
# Layouts=[]
# Period Start=2020-08-10
# Period End=2020-08-13
# Flows=[]
User,Average Accuracy,Calculation Points,Correct Fields,Fields Completed,Table Cells Completed
emma.williams,0.4,5,2,7,0
oliver.smith,0.5,2,1,4,0
Example Classification Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/classification_supervision_performance'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Classification Response
# Period Start=2020-08-10
# Period End=2020-08-13
# Flows=[]
User,Average Accuracy,Calculation Points,Correct Classifications,Classifications Completed
emma.williams,0.4,5,2,0
oliver.smith,0.5,2,1,0
If you want to compare the performance of your keyers over a certain time period, you can download Performance Distribution reports in the “User Performance” tab of the application’s Reporting page.
You can also download these reports with the endpoints described below. Responses will contain the report’s data as a CSV file.
Endpoints
- Transcription Performance Distribution:
GET /api/v5/reporting/transcription_supervision_performance
- Identification Performance Distribution:
GET /api/v5/reporting/identification_supervision_performance
- Classification Performance Distribution:
GET /api/v5/reporting/classification_supervision_performance
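As the example responses above show, each report begins with `#`-prefixed metadata lines (in a `Key=Value` shape), followed by a CSV header and data rows. A minimal sketch of splitting a downloaded report into metadata and parsed rows; the sample lines are copied from the Transcription example response:

```python
import csv

# Sample lines as returned by the Transcription Performance endpoint
# (copied from the example response above).
lines = [
    '# Layouts=[]',
    '# Period Start=2020-08-10',
    '# Period End=2020-08-13',
    '# Flows=[]',
    'User,Average Accuracy,Calculation Points,Correct Fields,Entries Completed',
    'emma.williams,0.33333,15,5,20',
    'oliver.smith,0.5,2,1,4',
]

# Metadata lines start with '# ' and hold a single 'Key=Value' pair.
metadata = {}
csv_lines = []
for line in lines:
    if line.startswith('# '):
        key, _, value = line[2:].partition('=')
        metadata[key] = value
    else:
        csv_lines.append(line)

rows = list(csv.DictReader(csv_lines))
print(metadata['Period Start'])  # 2020-08-10
print(rows[0]['User'])           # emma.williams
```

In a real integration, `lines` would come from `r.iter_lines()` as in the request examples above (decoded from UTF-8 first).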
Request Parameters
Use the parameters below to define the contents of your report.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
| layout_uuid | string | The UUID of the layout whose data should be included in the report. To filter for multiple UUIDs, repeat the layout_uuid parameter in your request (e.g., layout_uuid=79f22792-a40f-4c9f-8cf4-181449848b35&layout_uuid=5a8e4ae5-d60e-45cf-a8b4-946ca8ae5114). Can be used when retrieving Transcription Supervision Performance and Identification Supervision Performance. |
| flow_uuid | string | The UUID of the flow whose data should be included in the report. To filter for multiple UUIDs, repeat the flow_uuid parameter in your request (e.g., flow_uuid=94cdb688-d38f-4f71-ae9c-e60209d23b95&flow_uuid=74fca55f-d8a6-439a-a80d-670f9a55543a). Can be used when retrieving any of the three reports. |
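To filter by multiple layouts or flows, the parameter is repeated in the query string. With `requests`, passing a list as the parameter value produces exactly that encoding; the standard-library equivalent is sketched below (the UUIDs are the ones from the table above):

```python
from urllib.parse import urlencode

params = {
    'start_date': '2020-08-10',
    'layout_uuid': [
        '79f22792-a40f-4c9f-8cf4-181449848b35',
        '5a8e4ae5-d60e-45cf-a8b4-946ca8ae5114',
    ],
}

# doseq=True expands list values into repeated query parameters,
# matching the layout_uuid=...&layout_uuid=... form described above.
query = urlencode(params, doseq=True)
print(query)
```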
Transcription Performance Distribution Response
Each row represents a user in your account and has the following columns:
| Header | Type | Description |
| User | string | The user’s username. |
| Average Accuracy | float | The average accuracy of the user’s Transcription entries, as determined during Transcription QA. |
| Calculation Points | integer | The number of Transcription entries completed by the user that were QAed. |
| Correct Fields | integer | The number of Fields that the user transcribed correctly, as determined during Transcription QA. |
| Entries Completed | integer | The number of Transcription entries completed by the user. |
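In the example Transcription response above, `Average Accuracy` is consistent with `Correct Fields / Calculation Points`, rounded to five decimal places. Note that this relationship is inferred from the example data rather than stated by the API:

```python
# Rows from the example Transcription response:
# (User, Average Accuracy, Calculation Points, Correct Fields, Entries Completed)
rows = [
    ('emma.williams', 0.33333, 15, 5, 20),
    ('oliver.smith', 0.5, 2, 1, 4),
]

for user, accuracy, points, correct, _entries in rows:
    # Accuracy appears to be correct fields over QAed entries,
    # e.g. 5 / 15 = 0.33333 for emma.williams.
    assert round(correct / points, 5) == accuracy
```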
Identification Performance Distribution Response
Each row represents a user in your account and has the following columns:
| Header | Type | Description |
| User | string | The user’s username. |
| Average Accuracy | float | The average accuracy of the user’s Field ID entries, as determined during Field ID QA. |
| Calculation Points | integer | The number of Field ID entries completed by the user that were QAed. |
| Correct Fields | integer | The number of Fields that the user identified correctly, as determined during Field ID QA. |
| Fields Completed | integer | The number of Fields identified by the user. |
| Table Cells Completed v40.2 | integer | The number of Table Cells identified by the user. |
Classification Performance Distribution Response
Each row represents a user in your account and has the following columns:
| Header | Type | Description |
| User | string | The user’s username. |
| Average Accuracy | float | The average accuracy of the user’s Classifications, as determined during Classification QA. |
| Calculation Points | integer | The number of Classifications completed by the user that were QAed. |
| Correct Classifications | integer | The number of Classifications that the user identified correctly, as determined during Classification QA. |
| Classifications Completed | integer | The number of Classifications completed by the user. |
Hourly Task Overview Report
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/hourly_reporting_task'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
Date,Task Type,Workflow UUID,Total Users,Total Time Spent (Seconds),Tasks in Starting Work Queue,Tasks Added to Work Queue,Tasks Completed,Tasks in Ending Work Queue
2020-08-10 12:00:00 AM,Document Classification,3b324d71-bf8e-45ac-8521-37a4cc54f73d,18,9.0,2,2,0,0
2020-08-10 12:00:00 AM,Field ID,3b324d71-bf8e-45ac-8521-37a4cc54f73d,24,12.0,2,5,4,0
2020-08-10 12:00:00 AM,Full Page Transcription QA,3b324d71-bf8e-45ac-8521-37a4cc54f73d,0,0.0,0,1,0,0
2020-08-10 01:00:00 AM,Full Page Transcription QA,3b324d71-bf8e-45ac-8521-37a4cc54f73d,0,0.0,1,0,0,0
2020-08-10 02:00:00 AM,Document Classification,3b324d71-bf8e-45ac-8521-37a4cc54f73d,18,9.0,5,4,1,0
2020-08-10 02:00:00 AM,Field ID,3b324d71-bf8e-45ac-8521-37a4cc54f73d,24,12.0,3,8,11,0
2020-08-10 02:00:00 AM,Full Page Transcription QA,3b324d71-bf8e-45ac-8521-37a4cc54f73d,1,62.6,1,0,1,0
2020-08-10 02:00:00 AM,Table Cell Transcription,3b324d71-bf8e-45ac-8521-37a4cc54f73d,30,15.0,0,9,3,0
2020-08-11 01:00:00 AM,Document Classification,46b19e61-640a-4e15-bbb4-b5eba0974c8b,18,9.0,6,3,2,0
2020-08-11 01:00:00 AM,Classification QA,46b19e61-640a-4e15-bbb4-b5eba0974c8b,36,18.0,5,2,1,0
2020-08-11 01:00:00 AM,Custom Supervision,46b19e61-640a-4e15-bbb4-b5eba0974c8b,42,21.0,2,8,3,0
2020-08-11 01:00:00 AM,Table Cell Transcription,46b19e61-640a-4e15-bbb4-b5eba0974c8b,24,12.0,6,0,6,0
2020-08-11 01:00:00 AM,Table ID,46b19e61-640a-4e15-bbb4-b5eba0974c8b,30,15.0,0,6,3,0
2020-08-12 12:00:00 AM,Full Page Transcription QA,46b19e61-640a-4e15-bbb4-b5eba0974c8b,0,0.0,0,1,0,0
2020-08-12 01:00:00 AM,Full Page Transcription QA,46b19e61-640a-4e15-bbb4-b5eba0974c8b,1,30.4,1,0,1,0
2020-08-12 03:00:00 AM,Document Classification,46b19e61-640a-4e15-bbb4-b5eba0974c8b,24,12.0,4,2,1,0
2020-08-12 03:00:00 AM,Classification QA,46b19e61-640a-4e15-bbb4-b5eba0974c8b,30,15.0,8,3,2,0
2020-08-12 03:00:00 AM,Flexible Extraction,46b19e61-640a-4e15-bbb4-b5eba0974c8b,36,18.0,1,2,2,0
2020-08-12 03:00:00 AM,Table ID,46b19e61-640a-4e15-bbb4-b5eba0974c8b,18,9.0,3,1,3,0
The endpoint below allows you to automate the creation of the Hourly Task Overview report. This report is part of the Keyer Projection Report, which is available in the "User Performance" tab of our application's Reporting page. Responses will contain the report’s data as a CSV file.
Endpoint
GET /api/v5/reporting/hourly_reporting_task
Request Parameters
Use the parameters below to define the contents of your reports.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
Response
Each hour in the report’s date range has a set of rows in the report, with one row for each combination of task type and workflow active during that hour. Each row has the following format:
| Header | Type | Description |
| Date | datetime | The start date and time of the time period covered by the row. Each row covers a one-hour time period. |
| Task Type | string | The type of tasks performed. Can be one of the following values: Doc Org, Field ID, Table ID, Field Transcription, Table Cell Transcription, Field ID QA, Field Transcription QA, Classification QA, Flexible Extraction v31, Custom Supervision v38, or Full Page Transcription QA v42. |
| Workflow UUID | string | UUID of the related Workflow. |
| Total Users | integer | The number of users who performed the tasks. |
| Total Time Spent (Seconds) | float | Total time users spent completing the tasks. |
| Tasks in Starting Work Queue | integer | The number of tasks that were in progress at the beginning of the time period. |
| Tasks Added to Work Queue | integer | The number of tasks that were created during the time period. |
| Tasks Completed | integer | The number of tasks completed during the time period. |
| Tasks in Ending Work Queue v41.2 | integer | The number of tasks that were in progress at the end of the time period. This field is deprecated, and will always yield 0. Use Tasks in Starting Work Queue instead. |
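A common use of this report is aggregating a column across the hourly rows, for example totaling completed tasks per task type. A minimal sketch over a few rows taken from the example response above:

```python
import csv
from collections import Counter

# CSV header plus three data rows copied from the example response.
report = [
    'Date,Task Type,Workflow UUID,Total Users,Total Time Spent (Seconds),'
    'Tasks in Starting Work Queue,Tasks Added to Work Queue,Tasks Completed,'
    'Tasks in Ending Work Queue',
    '2020-08-10 02:00:00 AM,Document Classification,3b324d71-bf8e-45ac-8521-37a4cc54f73d,18,9.0,5,4,1,0',
    '2020-08-10 02:00:00 AM,Field ID,3b324d71-bf8e-45ac-8521-37a4cc54f73d,24,12.0,3,8,11,0',
    '2020-08-11 01:00:00 AM,Document Classification,46b19e61-640a-4e15-bbb4-b5eba0974c8b,18,9.0,6,3,2,0',
]

# Sum the 'Tasks Completed' column grouped by 'Task Type'.
completed = Counter()
for row in csv.DictReader(report):
    completed[row['Task Type']] += int(row['Tasks Completed'])

print(completed['Document Classification'])  # 3
print(completed['Field ID'])                 # 11
```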
Hourly Submission Overview Report
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/hourly_reporting_submission'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
Date,Workflow UUID,Submissions In-Progress,Submissions Added to Work Queue,Submissions Completed,Submissions in Ending Work Queue,Documents In-Progress,Documents Added to Work Queue,Documents Completed,Documents in Ending Work Queue,Users Performing Classification,Time Spent in Classification (Seconds),Classifications in Starting Work Queue,Classifications Added to Work Queue,Classifications Completed,Classifications in Ending Work Queue,Users Performing Field ID,Time Spent in Field ID (Seconds),ID Fields in Starting Work Queue,ID Fields Added to Work Queue,ID Fields Completed,ID Fields in Ending Work Queue,Users Performing Table ID,Time Spent in Table ID (Seconds),ID Tables in Starting Work Queue,ID Tables Added to Work Queue,ID Tables Completed,ID Tables in Ending Work Queue,Users Performing Field Transcription,Time Spent in Field Transcription (Seconds),Transcription Fields in Starting Work Queue,Transcription Fields Added to Work Queue,Transcription Fields Completed,Transcription Fields in Ending Work Queue,Users Performing Table Transcription,Time Spent in Table Transcription (Seconds),Transcription Table Cells in Starting Work Queue,Transcription Table Cells Added to Work Queue,Transcription Table Cells Completed,Transcription Table Cells in Ending Work Queue,Users Performing Field ID QA,Time Spent in Field ID QA (Seconds),ID QA Fields in Starting Work Queue,ID QA Fields Added to Work Queue,ID QA Fields Completed,ID QA Fields in Ending Work Queue,Users Performing Table ID QA,Time Spent in Table ID QA (Seconds),ID QA Cells in Starting Work Queue,ID QA Cells Added to Work Queue,ID QA Cells Completed,ID QA Cells in Ending Work Queue,Users Performing Transcription QA,Time Spent in Transcription QA (Seconds),Transcription QA Fields in Starting Work Queue,Transcription QA Fields Added to Work Queue,Transcription QA Fields Completed,Transcription QA Fields in Ending Work Queue,Users Performing Classification QA,Time Spent in Classification QA (Seconds),Classification QA Tasks in Starting Work Queue,Classification QA Tasks Added to Work Queue,Classification QA Tasks Completed,Classification QA Tasks in Ending Work Queue,Users Performing Flexible Extraction,Time Spent in Flexible Extraction (Seconds),Flexible Extractions in Starting Work Queue,Flexible Extractions Added to Work Queue,Flexible Extractions Completed,Flexible Extractions in Ending Work Queue,Users Performing Custom Supervision,Time Spent in Custom Supervision (Seconds),Custom Supervision Fields and Cells in Starting Work Queue,Custom Supervision Fields and Cells Added to Work Queue,Custom Supervision Fields and Cells Completed,Custom Supervision Fields and Cells in Ending Work Queue,Custom Supervision Decisions in Starting Work Queue,Custom Supervision Decisions Added to Work Queue,Custom Supervision Decisions Changed,Custom Supervision Decisions Completed,Custom Supervision Decisions in Ending Work Queue,Users Performing Full Page Transcription QA,Time Spent in Full Page Transcription QA (Seconds),Full Page Transcription QA Segments in Starting Work Queue,Full Page Transcription QA Segments Added to Work Queue,Full Page Transcription QA Segments Completed
2020-08-10 12:00:00 AM,3b324d71-bf8e-45ac-8521-37a4cc54f73d,1,3,2,0,2,5,4,0,18,9.0,2,2,0,0,24,12.0,5,6,8,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0,0,0,0,0,0.0,0,28,0
2020-08-10 01:00:00 AM,3b324d71-bf8e-45ac-8521-37a4cc54f73d,0,0,0,0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0,0,0,0,0,0.0,28,0,0
2020-08-10 02:00:00 AM,3b324d71-bf8e-45ac-8521-37a4cc54f73d,2,3,1,0,3,8,5,0,18,9.0,5,4,1,0,24,12.0,3,17,20,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,30,15.0,0,23,13,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0,0,0,0,1,62.6,28,0,28
2020-08-11 01:00:00 AM,46b19e61-640a-4e15-bbb4-b5eba0974c8b,4,0,2,0,6,0,3,0,18,9.0,6,3,2,0,0,0.0,0,0,0,0,30,15.0,0,124,86,0,0,0.0,0,0,0,0,24,12.0,10,0,10,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,36,18.0,5,2,1,0,0,0.0,0,0,0,0,42,21.0,4,15,13,0,5,10,5,6,0,0,0.0,0,0,0
2020-08-12 12:00:00 AM,46b19e61-640a-4e15-bbb4-b5eba0974c8b,0,0,0,0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0,0,0,0,0,0.0,0,14,0
2020-08-12 01:00:00 AM,46b19e61-640a-4e15-bbb4-b5eba0974c8b,0,0,0,0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0,0,0,0,1,30.4,14,0,14
2020-08-12 03:00:00 AM,46b19e61-640a-4e15-bbb4-b5eba0974c8b,2,1,2,0,3,2,3,0,24,12.0,4,2,1,0,0,0.0,0,0,0,0,18,9.0,38,56,78,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,0,0.0,0,0,0,0,30,15.0,8,3,2,0,36,18.0,3,12,11,0,0,0.0,0,0,0,0,0,0,0,0,0,0,0.0,0,0,0
The endpoint below allows you to automate the creation of the Hourly Submission Overview report. This report is part of the Keyer Projection Report, which is available in the "User Performance" tab of our application's Reporting page. Responses will contain the report’s data as a CSV file.
Endpoint
GET /api/v5/reporting/hourly_reporting_submission
Request Parameters
Use the parameters below to define the contents of your reports.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
Response
Each row in the response represents an hour within your report’s date range for a specific workflow and has the following columns:
| Header | Type | Description |
| Date | datetime | The start date and time of the time period covered by the row. Each row covers a one-hour time period. |
| Workflow UUID | string | UUID of the related Workflow. |
| Submissions In-Progress | integer | The number of Submissions that were in progress at the beginning of the time period. |
| Submissions Added to Work Queue | integer | The number of Submissions that were created. |
| Submissions Completed | integer | The number of Submissions that were completed. |
| Submissions in Ending Work Queue v41.2 | integer | The number of Submissions that were in progress at the end of the time period. This field is deprecated, and will always yield 0. Use Submissions In-Progress instead. |
| Documents In-Progress | integer | The number of Documents that were in progress at the beginning of the time period. |
| Documents Added to Work Queue | integer | The number of Documents that were created. |
| Documents Completed | integer | The number of Documents that were completed. |
| Documents in Ending Work Queue v41.2 | integer | The number of Documents that were in progress at the end of the time period. This field is deprecated, and will always yield 0. Use Documents In-Progress instead. |
| Users Performing Classification | integer | The number of users who performed Classification tasks. |
| Time Spent in Classification (Seconds) | float | Total time users spent performing Classification tasks. |
| Classifications in Starting Work Queue | integer | The number of Classification tasks in the Classification Work Queue at the beginning of the time period. |
| Classifications Added to Work Queue | integer | The number of Classification tasks that were created. |
| Classifications Completed | integer | The number of Classification tasks that were completed. |
| Classifications in Ending Work Queue v41.2 | integer | The number of Classification tasks in the Classification Work Queue at the end of the time period. This field is deprecated, and will always yield 0. Use Classifications in Starting Work Queue instead. |
| Users Performing Field ID | integer | The number of users who performed Field ID tasks. |
| Time Spent in Field ID (Seconds) | float | Total time users spent performing Field ID tasks. |
| ID Fields in Starting Work Queue | integer | The number of Fields in the Identification Work Queue at the beginning of the time period. |
| ID Fields Added to Work Queue | integer | The number of Fields added to the Identification Work Queue. |
| ID Fields Completed | integer | The number of Fields whose Identification tasks were completed. |
| ID Fields in Ending Work Queue v41.2 | integer | The number of Fields in the Identification Work Queue at the end of the time period. This field is deprecated, and will always yield 0. Use ID Fields in Starting Work Queue instead. |
| Users Performing Table ID | integer | The number of users who performed Table ID tasks. |
| Time Spent in Table ID (Seconds) | float | Total time users spent performing Table ID tasks. |
| ID Tables in Starting Work Queue | integer | The number of Tables in the Identification Work Queue at the beginning of the time period. |
| ID Tables Added to Work Queue | integer | The number of Tables added to the Identification Work Queue. |
| ID Tables Completed | integer | The number of Tables whose Identification tasks were completed. |
| ID Tables in Ending Work Queue v41.2 | integer | The number of Tables in the Identification Work Queue at the end of the time period. This field is deprecated, and will always yield 0. Use ID Tables in Starting Work Queue instead. |
| Users Performing Field Transcription | integer | The number of users who performed Field Transcription tasks. |
| Time Spent in Field Transcription (Seconds) | float | Total time users spent performing Field Transcription tasks. |
| Transcription Fields in Starting Work Queue | integer | The number of Fields in the Transcription Work Queue at the beginning of the time period. |
| Transcription Fields Added to Work Queue | integer | The number of Fields added to the Transcription Work Queue. |
| Transcription Fields Completed | integer | The number of Fields whose Transcription tasks were completed. |
| Transcription Fields in Ending Work Queue v41.2 | integer | The number of Fields in the Transcription Work Queue at the end of the time period. This field is deprecated, and will always yield 0. Use Transcription Fields in Starting Work Queue instead. |
| Users Performing Table Transcription | integer | The number of users who performed Table Transcription tasks. |
| Time Spent in Table Transcription (Seconds) | float | Total time users spent performing Table Transcription tasks. |
| Transcription Table Cells in Starting Work Queue | integer | The number of Table Cells in the Transcription Work Queue at the beginning of the time period. |
| Transcription Table Cells Added to Work Queue | integer | The number of Table Cells added to the Transcription Work Queue. |
| Transcription Table Cells Completed | integer | The number of Table Cells whose Transcription tasks were completed. |
| Transcription Table Cells in Ending Work Queue v41.2 | integer | The number of Table Cells in the Transcription Work Queue at the end of the time period. This field is deprecated, and will always yield 0. Use Transcription Table Cells in Starting Work Queue instead. |
| Users Performing Field ID QA | integer | The number of users who performed Field ID QA tasks. |
| Time Spent in Field ID QA (Seconds) | float | Total time users spent performing Field ID QA tasks. |
| ID QA Fields in Starting Work Queue | integer | The number of Fields in the Identification QA Work Queue at the beginning of the time period. |
| ID QA Fields Added to Work Queue | integer | The number of Fields added to the Identification QA Work Queue. |
| ID QA Fields Completed | integer | The number of Fields whose Identification QA tasks were completed. |
| ID QA Fields in Ending Work Queue v41.2 | integer | The number of Fields in the Identification QA Work Queue at the end of the time period. This field is deprecated, and will always yield 0. Use ID QA Fields in Starting Work Queue instead. |
| Users Performing Table ID QA | integer | The number of users who performed Table ID QA tasks. |
| Time Spent in Table ID QA (Seconds) | float | Total time users spent performing Table ID QA tasks. |
| ID QA Cells in Starting Work Queue | integer | The number of Cells in the Table Identification QA Work Queue at the beginning of the time period. |
| ID QA Cells Added to Work Queue | integer | The number of Cells added to the Table Identification QA Work Queue. |
| ID QA Cells Completed | integer | The number of Cells whose Table Identification QA tasks were completed. |
| ID QA Cells in Ending Work Queue v41.2 | integer | The number of Cells in the Table Identification QA Work Queue at the end of the time period. This field is deprecated, and will always yield 0. Use ID QA Cells in Starting Work Queue instead. |
| Users Performing Transcription QA | integer | The number of users who performed Transcription QA tasks. |
| Time Spent in Transcription QA (Seconds) | float | Total time users spent performing Transcription QA tasks. |
| Transcription QA Fields in Starting Work Queue | integer | The number of Fields in the Transcription QA Work Queue at the beginning of the time period. |
| Transcription QA Fields Added to Work Queue | integer | The number of Fields added to the Transcription QA Work Queue. |
| Transcription QA Fields Completed | integer | The number of Fields whose Transcription QA tasks were completed. |
| Transcription QA Fields in Ending Work Queue v41.2 | integer | The number of Fields in the Transcription QA Work Queue at the end of the time period. This field is deprecated, and will always yield 0. Use Transcription QA Fields in Starting Work Queue instead. |
| Users Performing Classification QA v33 | integer | The number of users who performed Classification QA tasks. |
| Time Spent in Classification QA (Seconds) v33 | float | Total time users spent performing Classification QA tasks. |
| Classification QA Tasks in Starting Work Queue v33 | integer | The number of Classification QA tasks in the Classification QA Work Queue at the beginning of the time period. |
| Classification QA Tasks Added to Work Queue v33 | integer | The number of Classification QA tasks that were created. |
| Classification QA Tasks Completed v33 | integer | The number of Classification QA tasks that were completed. |
| Classification QA Tasks in Ending Work Queue v41.2 | integer | The number of Classification QA tasks in the Classification QA Work Queue at the end of the time period. This field is deprecated, and will always yield 0. Use Classification QA Tasks in Starting Work Queue instead. |
| Users Performing Flexible Extraction v31 | integer | The number of users who performed Flexible Extraction tasks. |
| Time Spent in Flexible Extraction (Seconds) v31 | float | Total time users spent performing Flexible Extraction tasks. |
| Flexible Extractions in Starting Work Queue v31 | integer | The number of Fields and Table Cells in the Flexible Extraction Work Queue at the beginning of the time period. |
| Flexible Extractions Added to Work Queue v31 | integer | The number of Fields and Table Cells added to the Flexible Extraction Work Queue. |
| Flexible Extractions Completed v31 | integer | The number of Fields and Table Cells whose Flexible Extraction tasks were completed. |
| Flexible Extractions in Ending Work Queue v41.2 | integer | The number of Fields and Table Cells in the Flexible Extraction Work Queue at the end of the time period. This field is deprecated, and will always yield 0. Use Flexible Extractions in Starting Work Queue instead. |
| Users Performing Custom Supervision v38 | integer | The number of users who performed Custom Supervision tasks. |
| Time Spent in Custom Supervision (Seconds) v38 | float | Total time users spent performing Custom Supervision tasks. |
| Custom Supervision Fields and Cells in Starting Work Queue v38 | integer | The number of Fields and Table Cells in the Custom Supervision Work Queue at the beginning of the time period. |
| Custom Supervision Fields and Cells Added to Work Queue v38 | integer | The number of Fields and Table Cells added to the Custom Supervision Work Queue. |
| Custom Supervision Fields and Cells Completed v38 | integer | The number of Fields and Table Cells whose Custom Supervision tasks were completed. |
| Custom Supervision Fields and Cells in Ending Work Queue v41.2 | integer | The number of Fields and Table Cells in the Custom Supervision Work Queue at the end of the time period. This field is deprecated, and will always yield 0. Use Custom Supervision Fields and Cells in Starting Work Queue instead. |
| Custom Supervision Decisions in Starting Work Queue v38 | integer | The number of Decisions in the Custom Supervision Work Queue at the beginning of the time period. |
| Custom Supervision Decisions Added to Work Queue v38 | integer | The number of Decisions added to the Custom Supervision Work Queue. |
| Custom Supervision Decisions Changed v38 | integer | The number of Decisions in the Custom Supervision Work Queue that were changed during the time period. |
| Custom Supervision Decisions Completed v38 | integer | The number of Decisions in the Custom Supervision Work Queue that were completed during the time period. |
| Custom Supervision Decisions in Ending Work Queue v41.2 | integer | The number of Decisions in the Custom Supervision Work Queue at the end of the time period. This field is deprecated, and will always yield 0. Use Custom Supervision Decisions in Starting Work Queue instead. |
| Users Performing Full Page Transcription QA v42 | integer | The number of users who performed Full Page Transcription QA tasks. |
| Time Spent in Full Page Transcription QA (Seconds) v42 | float | Total time users spent performing Full Page Transcription QA tasks. |
| Full Page Transcription QA Segments in Starting Work Queue v42 | integer | The number of Segments in the Full Page Transcription QA Work Queue at the beginning of the time period. |
| Full Page Transcription QA Segments Added to Work Queue v42 | integer | The number of Segments added to the Full Page Transcription QA Work Queue. |
| Full Page Transcription QA Segments Completed v42 | integer | The number of Segments whose Full Page Transcription QA tasks were completed. |
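Because this report's header is very wide, it is often convenient to keep only the columns you need. A minimal sketch using `csv.DictReader`; the sample is truncated to a handful of the report's real columns for illustration:

```python
import csv

# Truncated sample: a few of the report's real columns and one data row
# from the example response.
report = [
    'Date,Workflow UUID,Submissions In-Progress,Submissions Added to Work Queue,Submissions Completed',
    '2020-08-10 12:00:00 AM,3b324d71-bf8e-45ac-8521-37a4cc54f73d,1,3,2',
]

# Keep only the columns of interest from each row.
wanted = ('Date', 'Submissions Completed')
slimmed = [{k: row[k] for k in wanted} for row in csv.DictReader(report)]
print(slimmed)
# [{'Date': '2020-08-10 12:00:00 AM', 'Submissions Completed': '2'}]
```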
Projected Transcription Automation Report
Example Request
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/projected_transcription_automation'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Last Trained=2020-08-10 12:00:00 AM
# Checkbox Target Accuracy=94.3
# Signature Target Accuracy=95.7
# Structured Text Target Accuracy=98.3
# Semi-Structured Text Target Accuracy=92.9
# Semi-Structured Checkbox Target Accuracy=-
Target Accuracy,Checkbox Projected Automation,Checkbox MOE,Signature Projected Automation,Signature MOE,Structured Text Projected Automation,Structured Text MOE,Semi-Structured Text Projected Automation,Semi-Structured Text MOE,Semi-Structured Checkbox Projected Automation,Semi-Structured Checkbox MOE
100.0,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
99.9,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
99.8,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
99.7,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
99.6,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
99.5,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
99.4,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
99.3,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
99.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
99.1,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
99.0,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
98.9,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
98.8,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
98.7,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
98.6,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
98.5,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
98.4,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
98.3,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
98.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
98.1,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
98.0,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
97.9,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
97.8,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
97.7,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
97.6,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
97.5,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2,0.8,0.2
97.4,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
97.3,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
97.2,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
97.1,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
97.0,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
96.9,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
96.8,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
96.7,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
96.6,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
96.5,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
96.4,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
96.3,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
96.2,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
96.1,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
96.0,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
95.9,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
95.8,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
95.7,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
95.6,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
95.5,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
95.4,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
95.3,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
95.2,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
95.1,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
95.0,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
94.9,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
94.8,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
94.7,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
94.6,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
94.5,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
94.4,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
94.3,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
94.2,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
94.1,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
94.0,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
93.9,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
93.8,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
93.7,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
93.6,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
93.5,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
93.4,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
93.3,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
93.2,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
93.1,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
93.0,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
92.9,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
92.8,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
92.7,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
92.6,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19,0.81,0.19
92.5,0.81,0.18,0.81,0.18,0.81,0.18,0.81,0.18,0.81,0.18
92.4,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
92.3,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
92.2,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
92.1,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
92.0,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
91.9,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
91.8,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
91.7,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
91.6,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
91.5,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
91.4,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
91.3,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
91.2,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
91.1,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
91.0,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
90.9,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
90.8,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
90.7,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
90.6,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
90.5,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
90.4,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
90.3,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
90.2,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
90.1,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
90.0,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
89.9,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
89.8,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
89.7,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
89.6,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
89.5,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
89.4,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
89.3,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
89.2,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
89.1,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
89.0,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
88.9,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
88.8,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
88.7,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
88.6,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
88.5,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
88.4,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
88.3,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
88.2,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
88.1,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
88.0,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
87.9,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
87.8,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
87.7,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
87.6,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18,0.82,0.18
87.5,0.82,0.17,0.82,0.17,0.82,0.17,0.82,0.17,0.82,0.17
87.4,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
87.3,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
87.2,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
87.1,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
87.0,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
86.9,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
86.8,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
86.7,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
86.6,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
86.5,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
86.4,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
86.3,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
86.2,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
86.1,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
86.0,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
85.9,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
85.8,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
85.7,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
85.6,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
85.5,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
85.4,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
85.3,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
85.2,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
85.1,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
85.0,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
84.9,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
84.8,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
84.7,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
84.6,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
84.5,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
84.4,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
84.3,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
84.2,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
84.1,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
84.0,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
83.9,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
83.8,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
83.7,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
83.6,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
83.5,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
83.4,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
83.3,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
83.2,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
83.1,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
83.0,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
82.9,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
82.8,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
82.7,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
82.6,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
82.5,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17,0.83,0.17
82.4,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
82.3,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
82.2,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
82.1,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
82.0,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
81.9,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
81.8,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
81.7,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
81.6,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
81.5,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
81.4,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
81.3,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
81.2,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
81.1,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
81.0,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
80.9,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
80.8,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
80.7,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
80.6,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
80.5,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
80.4,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
80.3,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
80.2,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
80.1,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
80.0,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
79.9,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
79.8,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
79.7,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
79.6,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
79.5,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
79.4,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
79.3,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
79.2,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
79.1,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
79.0,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
78.9,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
78.8,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
78.7,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
78.6,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
78.5,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
78.4,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
78.3,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
78.2,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
78.1,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
78.0,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
77.9,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
77.8,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
77.7,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
77.6,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16,0.84,0.16
77.5,0.84,0.15,0.84,0.15,0.84,0.15,0.84,0.15,0.84,0.15
77.4,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
77.3,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
77.2,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
77.1,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
77.0,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
76.9,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
76.8,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
76.7,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
76.6,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
76.5,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
76.4,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
76.3,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
76.2,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
76.1,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
76.0,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
75.9,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
75.8,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
75.7,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
75.6,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
75.5,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
75.4,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
75.3,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
75.2,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
75.1,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
75.0,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
74.9,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
74.8,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
74.7,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
74.6,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
74.5,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
74.4,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
74.3,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
74.2,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
74.1,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
74.0,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
73.9,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
73.8,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
73.7,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
73.6,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
73.5,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
73.4,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
73.3,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
73.2,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
73.1,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
73.0,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
72.9,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
72.8,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
72.7,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
72.6,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15,0.85,0.15
72.5,0.85,0.14,0.85,0.14,0.85,0.14,0.85,0.14,0.85,0.14
72.4,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
72.3,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
72.2,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
72.1,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
72.0,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
71.9,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
71.8,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
71.7,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
71.6,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
71.5,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
71.4,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
71.3,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
71.2,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
71.1,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
71.0,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
70.9,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
70.8,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
70.7,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
70.6,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
70.5,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
70.4,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
70.3,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
70.2,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
70.1,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
70.0,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
69.9,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
69.8,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
69.7,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
69.6,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
69.5,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
69.4,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
69.3,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
69.2,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
69.1,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
69.0,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
68.9,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
68.8,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
68.7,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
68.6,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
68.5,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
68.4,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
68.3,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
68.2,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
68.1,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
68.0,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
67.9,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
67.8,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
67.7,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
67.6,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
67.5,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14,0.86,0.14
67.4,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
67.3,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
67.2,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
67.1,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
67.0,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
66.9,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
66.8,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
66.7,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
66.6,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
66.5,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
66.4,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
66.3,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
66.2,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
66.1,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
66.0,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
65.9,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
65.8,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
65.7,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
65.6,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
65.5,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
65.4,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
65.3,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
65.2,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
65.1,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
65.0,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
64.9,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
64.8,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
64.7,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
64.6,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
64.5,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
64.4,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
64.3,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
64.2,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
64.1,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
64.0,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
63.9,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
63.8,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
63.7,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
63.6,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
63.5,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
63.4,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
63.3,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
63.2,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
63.1,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
63.0,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
62.9,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
62.8,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
62.7,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
62.6,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13,0.87,0.13
62.5,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
62.4,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
62.3,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
62.2,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
62.1,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
62.0,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
61.9,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
61.8,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
61.7,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
61.6,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
61.5,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
61.4,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
61.3,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
61.2,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
61.1,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
61.0,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
60.9,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
60.8,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
60.7,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
60.6,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
60.5,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
60.4,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
60.3,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
60.2,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
60.1,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
60.0,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
59.9,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
59.8,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
59.7,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
59.6,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
59.5,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
59.4,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
59.3,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
59.2,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
59.1,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
59.0,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
58.9,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
58.8,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
58.7,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
58.6,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
58.5,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
58.4,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
58.3,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
58.2,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
58.1,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
58.0,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
57.9,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
57.8,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
57.7,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
57.6,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12,0.88,0.12
57.5,0.89,0.12,0.89,0.12,0.89,0.12,0.89,0.12,0.89,0.12
57.4,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
57.3,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
57.2,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
57.1,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
57.0,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
56.9,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
56.8,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
56.7,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
56.6,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
56.5,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
56.4,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
56.3,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
56.2,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
56.1,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
56.0,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
55.9,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
55.8,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
55.7,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
55.6,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
55.5,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
55.4,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
55.3,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
55.2,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
55.1,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
55.0,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
54.9,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
54.8,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
54.7,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
54.6,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
54.5,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
54.4,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
54.3,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
54.2,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
54.1,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
54.0,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
53.9,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
53.8,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
53.7,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
53.6,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
53.5,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
53.4,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
53.3,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
53.2,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
53.1,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
53.0,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
52.9,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
52.8,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
52.7,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
52.6,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11,0.89,0.11
52.5,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
52.4,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
52.3,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
52.2,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
52.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
52.0,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
51.9,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
51.8,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
51.7,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
51.6,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
51.5,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
51.4,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
51.3,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
51.2,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
51.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
51.0,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
50.9,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
50.8,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
50.7,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
50.6,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
50.5,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
50.4,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
50.3,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
50.2,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
50.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
50.0,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
49.9,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
49.8,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
49.7,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
49.6,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
49.5,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
49.4,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
49.3,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
49.2,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
49.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
49.0,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
48.9,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
48.8,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
48.7,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
48.6,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
48.5,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
48.4,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
48.3,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
48.2,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
48.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
48.0,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
47.9,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
47.8,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
47.7,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
47.6,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1,0.9,0.1
47.5,0.91,0.1,0.91,0.1,0.91,0.1,0.91,0.1,0.91,0.1
47.4,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
47.3,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
47.2,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
47.1,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
47.0,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
46.9,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
46.8,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
46.7,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
46.6,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
46.5,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
46.4,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
46.3,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
46.2,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
46.1,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
46.0,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
45.9,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
45.8,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
45.7,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
45.6,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
45.5,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
45.4,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
45.3,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
45.2,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
45.1,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
45.0,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
44.9,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
44.8,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
44.7,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
44.6,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
44.5,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
44.4,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
44.3,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
44.2,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
44.1,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
44.0,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
43.9,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
43.8,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
43.7,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
43.6,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
43.5,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
43.4,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
43.3,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
43.2,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
43.1,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
43.0,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
42.9,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
42.8,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
42.7,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
42.6,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09,0.91,0.09
42.5,0.92,0.09,0.92,0.09,0.92,0.09,0.92,0.09,0.92,0.09
42.4,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08
42.3,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08
42.2,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08
42.1,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08
42.0,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08
41.9,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08
41.8,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08,0.92,0.08
... (one row per 0.1-point decrement of Target Accuracy, down to 0.0)
0.2,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0
0.1,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0
0.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0
The endpoint below allows you to automate the creation of the Projected Transcription Automation report. This report is part of the Keyer Projection Report, which is available in the "User Performance" tab of our application's Reporting page. Responses will contain the report’s data as a CSV file.
Endpoint
GET /api/v5/reporting/projected_transcription_automation
Request Parameters
Use the parameters below to define the contents of your report.
| Property | Type | Description |
| flow_uuid v32 | string | The UUID of the flow whose data should be included in the report. |
Response
The following values from your settings appear at the beginning of the report:
- Checkbox Target Accuracy
- Signature Target Accuracy
- Structured Text Target Accuracy
- Semi-Structured Text Target Accuracy
Each row in the response represents the projected transcription automation for a particular target accuracy. For example, a Checkbox Projected Automation value of 0.79 in the 96.7 Target Accuracy row indicates that if you set your Checkbox Target Accuracy to 96.7%, an estimated 79% of Checkbox transcriptions will be automated.
Each row has the following columns:
| Header | Type | Description |
| Target Accuracy | float | Transcription target accuracy. |
| Checkbox Projected Automation | float | A number between 0 and 1 indicating the expected automation of Checkbox Fields. Will be '-' if there aren't enough samples to calculate it. |
| Checkbox MOE | float | A number between 0 and 1 indicating the random sampling error in the Checkbox Projected Automation calculation. Will be '-' if there aren't enough samples to determine the projected automation. |
| Signature Projected Automation | float | A number between 0 and 1 indicating the expected automation of Signature Fields. Will be '-' if there aren't enough samples to calculate it. |
| Signature MOE | float | A number between 0 and 1 indicating the random sampling error in the Signature Projected Automation calculation. Will be '-' if there aren't enough samples to determine the projected automation. |
| Structured Text Projected Automation | float | A number between 0 and 1 indicating the expected automation of Structured Text Fields. Will be '-' if there aren't enough samples to calculate it. |
| Structured Text MOE | float | A number between 0 and 1 indicating the random sampling error in the Structured Text Projected Automation calculation. Will be '-' if there aren't enough samples to determine the projected automation. |
| Semi-Structured Text Projected Automation | float | A number between 0 and 1 indicating the expected automation of Semi-Structured Text Fields. Will be '-' if there aren't enough samples to calculate it. |
| Semi-Structured Text MOE | float | A number between 0 and 1 indicating the random sampling error in the Semi-Structured Text Projected Automation calculation. Will be '-' if there aren't enough samples to determine the projected automation. |
| Semi-Structured Checkbox Projected Automation v32 | float | A number between 0 and 1 indicating the expected automation of Semi-Structured Checkbox Fields. Will be '-' if there aren't enough samples to calculate it. |
| Semi-Structured Checkbox MOE v32 | float | A number between 0 and 1 indicating the random sampling error in the Semi-Structured Checkbox Projected Automation calculation. Will be '-' if there aren't enough samples to determine the projected automation. |
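The report's rows can also be consumed programmatically. The sketch below parses the CSV body with Python's csv module and looks up the projected automation for a chosen target accuracy; the CSV excerpt and its values are illustrative only (a real report comes from the endpoint above and contains all of the columns listed), and the '-' placeholder for insufficient samples is mapped to None.

```python
import csv
import io

# Illustrative excerpt of the report body; column names follow the table
# above, values are made up for the example.
REPORT_CSV = """Target Accuracy,Checkbox Projected Automation,Checkbox MOE
96.7,0.79,0.02
96.6,-,-
"""

def projected_automation(csv_text, target_accuracy, column):
    """Return the value in `column` for the given Target Accuracy row.

    Returns None when the report shows '-' (not enough samples).
    Raises KeyError if no row matches the requested target accuracy.
    """
    for row in csv.DictReader(io.StringIO(csv_text)):
        if float(row["Target Accuracy"]) == target_accuracy:
            value = row[column]
            return None if value == "-" else float(value)
    raise KeyError(f"no row for target accuracy {target_accuracy}")

print(projected_automation(REPORT_CSV, 96.7, "Checkbox Projected Automation"))
```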
Historical Processing Report
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/historical_processing'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
Layout Name & UUID,Layout Version Name & UUID,Workflow UUID,Matched Documents,Machine Matched Documents,Manually Matched Documents,Machine Matched Pages,Manually Matched Pages,Total Fields Extracted,Total Table Cells Extracted,Signature Fields Extracted,Checkbox Fields Extracted,Text Fields Extracted,Machine Identified Fields,Machine Identified Table Cells,Manually Identified Fields,Manually Identified Table Cells,Machine Transcribed Fields,Machine Transcribed Table Cells,Manually Transcribed Fields,Manually Transcribed Table Cells,Fields Updated in Custom Supervision,Cells Updated in Custom Supervision,Field Characters Keyed in Custom Supervision,Table Cells Keyed in Custom Supervision,Decisions Changed in Custom Supervision,Decisions Completed in Custom Supervision
Layout 1 [ae2162de-1c8b-4b76-8d05-9a781d7ab9a4],v1 [88a88fba-633c-4a9d-99af-24b45c022985],3b324d71-bf8e-45ac-8521-37a4cc54f73d,3,2,1,6,5,35,162,3,12,20,22,132,13,30,11,92,24,70,2,2,2,2,2,4
Layout 1 [ae2162de-1c8b-4b76-8d05-9a781d7ab9a4],v2 [43ae4f81-0f3a-450d-b70b-a1f910b8c70b],3b324d71-bf8e-45ac-8521-37a4cc54f73d,9,8,1,4,3,33,159,5,10,18,20,131,13,28,9,137,24,22,4,4,4,4,4,6
Layout 2 [15d2ee22-4e0b-448e-ae37-47cb021ef3ed],v1 [a2e21ac0-a902-440e-8a3d-32ac454f93c0],46b19e61-640a-4e15-bbb4-b5eba0974c8b,15,10,5,12,9,78,234,8,20,50,68,130,10,104,70,210,8,24,6,6,6,6,6,8
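As the example response above shows, the report begins with comment lines such as `# Period Start=2020-08-10` before the CSV header. A small helper (a sketch, independent of the request code above) can strip that preamble before handing lines to `csv.reader`:

```python
import csv

def strip_comments(lines):
    """Yield only CSV content lines, skipping '# key=value' preamble lines."""
    for line in lines:
        if not line.startswith("#"):
            yield line

# Works the same on a streamed response or on lines read from a saved file;
# the two-column sample below is a trimmed slice of the example response.
sample = [
    "# Period Start=2020-08-10",
    "# Period End=2020-08-13",
    "Layout Name & UUID,Matched Documents",
    "Layout 1 [ae2162de-1c8b-4b76-8d05-9a781d7ab9a4],3",
]
rows = list(csv.reader(strip_comments(sample)))
print(rows[0])
```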
The endpoint below allows you to automate the creation of the Historical Processing report. This report is part of the Keyer Projection Report, which is available in the "User Performance" tab of our application's Reporting page. Responses will contain the report’s data as a CSV file.
Endpoint
GET /api/v5/reporting/historical_processing
Request Parameters
Use the parameters below to define the contents of your reports.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
Response
Each row in the response represents a live layout version of a specific workflow and has the following columns:
| Header | Type | Description |
| Layout Name & UUID | string | Name and UUID of the layout. |
| Layout Version Name & UUID | string | Name and UUID of the layout version. |
| Workflow UUID | string | UUID of the related Workflow. |
| Matched Documents | integer | The number of Documents matched to the layout version. |
| Machine Matched Documents | integer | The number of Documents that the machine matched to the layout version. |
| Manually Matched Documents | integer | The number of Documents manually matched to the layout version. |
| Machine Matched Pages | integer | The number of Pages that the machine matched to the layout version. |
| Manually Matched Pages | integer | The number of Pages manually matched to the layout version. |
| Total Fields Extracted | integer | The number of Fields extracted from Documents matched to the layout version. |
| Total Table Cells Extracted | integer | The number of Table Cells extracted from Documents matched to the layout version. |
| Signature Fields Extracted | integer | The number of Signature FIelds extracted from Documents matched to the layout version. |
| Checkbox Fields Extracted | integer | The number of Checkbox Fields extracted from Documents matched to the layout version. |
| Text Fields Extracted | integer | The number of Text Fields extracted from Documents matched to the layout version. |
| Machine Identified Fields | integer | The number of Fields matched to the layout version that were identified by the machine. |
| Machine Identified Table Cells | integer | The number of Table Cells matched to the layout version that were identified by the machine. |
| Manually Identified Fields | integer | The number of Fields matched to the layout version that were manually identified. |
| Manually Identified Table Cells | integer | The number of Table Cells matched to the layout version that were manually identified. |
| Machine Transcribed Fields | integer | The number of Fields matched to the layout version that were transcribed by the machine. |
| Machine Transcribed Table Cells | integer | The number of Table Cells matched to the layout version that were transcribed by the machine. |
| Manually Transcribed Fields | integer | The number of Fields matched to the layout version that were manually transcribed. |
| Manually Transcribed Table Cells | integer | The number of Table Cells matched to the layout version that were manually transcribed. |
| Fields Updated in Custom Supervision v38 | integer | The number of Fields updated in Custom Supervision. |
| Cells Updated in Custom Supervision v38 | integer | The number of Table Cells updated in Custom Supervision. |
| Field Characters Keyed in Custom Supervision v38 | integer | The number of characters keyed for Fields in Custom Supervision. |
| Table Cells Keyed in Custom Supervision v38 | integer | The number of characters keyed for Table Cells in Custom Supervision. |
| Decisions Changed in Custom Supervision v38 | integer | The number of decisions changed in Custom Supervision. |
| Decisions Completed in Custom Supervision v38 | integer | The number of decisions completed in Custom Supervision. |
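The machine-matched and manually-matched counts above make it straightforward to derive metrics the report does not return directly, such as the fraction of Documents matched automatically. The sketch below computes that rate from a trimmed, illustrative slice of the response (real responses contain every column listed above):

```python
import csv
import io

# Two-column slice of the Historical Processing CSV; values mirror the
# "Layout 1 ... v1" row from the example response.
SAMPLE = """Layout Name & UUID,Matched Documents,Machine Matched Documents
Layout 1 [ae2162de-1c8b-4b76-8d05-9a781d7ab9a4],3,2
"""

def machine_match_rate(row):
    # Fraction of matched Documents that the machine matched automatically.
    return int(row["Machine Matched Documents"]) / int(row["Matched Documents"])

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
print(round(machine_match_rate(rows[0]), 2))
```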
Keyer Performance Report
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/keyer_performance'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
Keyer Name,Workflow UUID,Submissions,Documents,Pages,Days in App,Total Time Spent (Seconds),Total Time Spent (Hours),Document Classification Supervision Time Spent (Seconds),Document Organization Time Spent (Seconds),Document Foldering Time Spent (Seconds),Document Metadata Time Spent (Seconds),# Pages Reviewed In Document Organization,# Documents Reviewed In Document Organization,# Folders Created,Document Classification QA Time Spent (Seconds),# Pages Reviewed In Classification QA,Field Identification Supervision Time Spent (Seconds),Fields Identified in Supervision,Field Identification QA Time Spent (Seconds),Fields Identified in QA,Table Cell Identification Supervision Time Spent (Seconds),Table Columns Identified in Supervision,Table Cells Identified in Supervision,Table Cell Identification QA Time Spent (Seconds),Table Columns Identified in QA,Table Cells Identified in QA,Field Transcription Supervision Time Spent (Seconds),Fields Transcribed in Supervision,Fields Transcribed in Supervision per Hour,Field Characters Keyed in Supervision,Field Characters Keyed in Supervision per Hour,Field Transcription QA Time Spent (Seconds),Fields Transcribed in QA,Fields Transcribed in QA per Hour,Field Characters Keyed in QA,Field Characters Keyed in QA per Hour,Table Cell Transcription Supervision Time Spent (Seconds),Table Cells Transcribed in Supervision,Table Cells Transcribed in Supervision per Hour,Table Characters Keyed in Supervision,Table Characters Keyed in Supervision per Hour,Table Cell Transcription QA Time Spent (Seconds),Table Cells Transcribed in QA,Table Cells Transcribed in QA per Hour,Table Characters Keyed in QA,Table Characters Keyed in QA per Hour,Flexible Extraction Time Spent (Seconds),Flexible Extraction Fields Extracted,Flexible Extraction Field Characters Keyed,Flexible Extraction Table Cells Extracted,Flexible Extraction Table Cell Characters Keyed,Custom Supervision Time Spent (Seconds),Custom Supervision Fields Extracted,Custom Supervision Field Characters Keyed,Custom Supervision Table Cells Extracted,Custom Supervision Table Cell Characters Keyed,Custom Supervision Decisions Changed,Custom Supervision Decisions Completed,Custom Supervision Metadata Characters Keyed,Custom Supervision Chat Prompts,Custom Supervision Chat Responses,Full Page Transcription QA Time Spent (Seconds),Segments Reviewed in Full Page Transcription QA,Segments Reviewed in Full Page Transcription QA per Hour,Segment Characters Reviewed in Full Page Transcription QA,Segment Characters Reviewed in Full Page Transcription QA per Hour
oliver.smith (Oliver Smith),3b324d71-bf8e-45ac-8521-37a4cc54f73d,1,2,4,1,178.7,0.04964,23.73,0.13,12.3,11.3,3,1,2,14.7,4,0.23,4,0.32,3,11.2,5,15,0.0,0,0,9.23,22,22.0,123,123.0,19.23,32,32.0,133,133.0,9.23,14,14.0,134,134.0,19.23,24,24.0,144,144.0,4.0,2,4,2,4,5.0,3,6,4,8,5,8,9,0,0,62.6,28,28.0,321,321.0
emma.williams (Emma Williams),46b19e61-640a-4e15-bbb4-b5eba0974c8b,12,34,55,2,1445.5,0.40153,178.88,8.99,123.99,45.9,10,8,10,69.4,55,18.9,10,19.8,9,342.5,34,187,0.0,0,0,234.99,23,23.0,432,432.0,344.23,33,33.0,442,442.0,98.2,39,39.0,234,234.0,108.2,49,49.0,244,244.0,0.0,0,0,0,0,0.0,0,0,0,0,0,0,0,0,0,30.4,14,14.0,182,182.0
The endpoint below allows you to automate the creation of the Keyer Performance report. This report is part of the Keyer Projection Report, which is available in the "User Performance" tab of our application's Reporting page. Responses will contain the report’s data as a CSV file.
Endpoint
GET /api/v5/reporting/keyer_performance
Request Parameters
Use the parameters below to define the contents of your reports.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
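A request for this report can be made in the same way as for the other reporting endpoints. The sketch below mirrors their example requests; it assumes the get_oauth2_session() helper from the Getting Started Guide - Authentication section for the (commented-out) request itself.

```python
from datetime import date
from urllib.parse import urljoin

# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/keyer_performance'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}

# Stream the CSV response, as in the other examples.
# get_oauth2_session() comes from the Getting Started Guide - Authentication section.
# with get_oauth2_session() as session:
#     r = session.get(endpoint_url, params=params, stream=True)
#     for line in r.iter_lines():
#         print(line.decode('utf-8'))
```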
Response
Each row represents the performance of a user in your account on a specific workflow and has the following columns:
| Header | Type | Description |
| Keyer Name | string | The username and name of the user. |
| Workflow UUID | string | UUID of the related Workflow. |
| Submissions | integer | The number of Submissions for which the user performed at least one Supervision task. |
| Documents | integer | The number of Documents for which the user performed at least one Supervision task. |
| Pages | integer | The number of Pages for which the user performed at least one Supervision task. |
| Days in App | integer | The number of days that the user performed at least one Supervision task. |
| Total Time Spent (Seconds) | float | Total time (in seconds) the user spent performing Supervision tasks. |
| Total Time Spent (Hours) | float | Total time (in hours) the user spent performing Supervision tasks. |
| Document Classification Supervision Time Spent (Seconds) | float | Time the user spent performing classification on supervision tasks. |
| Document Organization Time Spent (Seconds) | float | Time the user spent categorizing Pages. |
| Document Foldering Time Spent (Seconds) v40 | float | Time the user spent creating Document Folders. |
| Document Metadata Time Spent (Seconds) v40 | float | Time the user spent adding Document metadata. |
| # Pages Reviewed In Document Organization | integer | The number of Pages categorized by the user. |
| # Documents Reviewed In Document Organization | integer | The number of Documents categorized by the user. |
| # Folders Created v40 | integer | The number of Document Folders created by the user. |
| Document Classification QA Time Spent (Seconds) | float | Time the user spent categorizing Pages in QA tasks. |
| # Pages Reviewed In Classification QA | integer | The number of Pages categorized by the user in QA tasks. |
| Field Identification Supervision Time Spent (Seconds) | float | Time the user spent identifying Fields on supervision tasks. |
| Fields Identified in Supervision | integer | The number of Fields identified by the user in supervision tasks. |
| Field Identification QA Time Spent (Seconds) | float | Time the user spent identifying Fields on QA tasks. |
| Fields Identified in QA | integer | The number of Fields identified by the user in QA tasks. |
| Table Cell Identification Supervision Time Spent (Seconds) | float | Time the user spent identifying Table Cells on supervision tasks. |
| Table Columns Identified in Supervision | integer | The number of Table Columns identified by the user in supervision tasks. |
| Table Cells Identified in Supervision | integer | The number of Table Cells identified by the user in supervision tasks. |
| Table Cell Identification QA Time Spent (Seconds) | float | Time the user spent identifying Table Cells on QA tasks. |
| Table Columns Identified in QA | integer | The number of Table Columns identified by the user in QA tasks. |
| Table Cells Identified in QA | integer | The number of Table Cells identified by the user in QA tasks. |
| Field Transcription Supervision Time Spent (Seconds) | float | Time the user spent transcribing Fields on supervision tasks. |
| Fields Transcribed in Supervision | integer | The number of Fields transcribed by the user in supervision tasks. |
| Fields Transcribed in Supervision per Hour | float | The Fields transcribed by the user per hour in supervision tasks. |
| Field Characters Keyed in Supervision | integer | The number of characters keyed by the user while transcribing Fields on supervision tasks. |
| Field Characters Keyed in Supervision per Hour | float | The characters keyed by the user while transcribing Fields per hour on supervision tasks. |
| Field Transcription QA Time Spent (Seconds) | float | Time the user spent transcribing Fields on QA tasks. |
| Fields Transcribed in QA | integer | The number of Fields transcribed by the user in QA tasks. |
| Fields Transcribed in QA per Hour | float | The Fields transcribed by the user per hour in QA tasks. |
| Field Characters Keyed in QA | integer | The number of characters keyed by the user while transcribing Fields on QA tasks. |
| Field Characters Keyed in QA per Hour | float | The characters keyed by the user while transcribing Fields per hour on QA tasks. |
| Table Cell Transcription Supervision Time Spent (Seconds) | float | Time the user spent transcribing Table Cells on supervision tasks. |
| Table Cells Transcribed in Supervision | integer | The number of Table Cells transcribed by the user in supervision tasks. |
| Table Cells Transcribed in Supervision per Hour | float | The number of Table Cells transcribed by the user per hour in supervision tasks. |
| Table Characters Keyed in Supervision | integer | The number of characters keyed by the user while transcribing Table Cells on supervision tasks. |
| Table Characters Keyed in Supervision per Hour | float | The number of characters keyed by the user while transcribing Table Cells per hour on supervision tasks. |
| Table Cell Transcription QA Time Spent (Seconds) | float | Time the user spent transcribing Table Cells on QA tasks. |
| Table Cells Transcribed in QA | integer | The number of Table Cells transcribed by the user in QA tasks. |
| Table Cells Transcribed in QA per Hour | float | The number of Table Cells transcribed by the user per hour in QA tasks. |
| Table Characters Keyed in QA | integer | The number of characters keyed by the user while transcribing Table Cells on QA tasks. |
| Table Characters Keyed in QA per Hour | float | The number of characters keyed by the user while transcribing Table Cells per hour on QA tasks. |
| Flexible Extraction Time Spent (Seconds) v31 | float | Time the user spent extracting Fields and Table Cells during Flexible Extraction. |
| Flexible Extraction Fields Extracted v31 | integer | The number of Fields extracted by the user during Flexible Extraction. |
| Flexible Extraction Field Characters Keyed v31 | integer | The number of characters keyed by the user while extracting Fields during Flexible Extraction. |
| Flexible Extraction Table Cells Extracted v31 | integer | The number of Table Cells extracted by the user during Flexible Extraction. |
| Flexible Extraction Table Cell Characters Keyed v31 | integer | The number of characters keyed by the user while extracting Table Cells during Flexible Extraction. |
| Custom Supervision Time Spent (Seconds) v32.0.1 | float | Time the user spent extracting Fields and Table Cells during Custom Supervision. |
| Custom Supervision Fields Extracted v32.0.1 | integer | The number of Fields extracted by the user during Custom Supervision. |
| Custom Supervision Field Characters Keyed v32.0.1 | integer | The number of characters keyed by the user while extracting Fields during Custom Supervision. |
| Custom Supervision Table Cells Extracted v34 | integer | The number of Table Cells extracted by the user during Custom Supervision. |
| Custom Supervision Table Cell Characters Keyed v34 | integer | The number of characters keyed by the user while extracting Table Cells during Custom Supervision. |
| Custom Supervision Decisions Changed v38 | integer | The number of decisions changed by the user during Custom Supervision. |
| Custom Supervision Decisions Completed v38 | integer | The number of decisions completed during Custom Supervision. |
| Custom Supervision Metadata Characters Keyed v39 | integer | The number of characters keyed as metadata by the user during Custom Supervision. |
| Custom Supervision Chat Prompts v42 | integer | The number of chat prompts sent by the user during Custom Supervision. |
| Custom Supervision Chat Responses v42 | integer | The number of chat responses sent by the user during Custom Supervision. |
| Full Page Transcription QA Time Spent (Seconds) v42 | float | Time the user spent reviewing Segments on QA tasks. |
| Segments Reviewed in Full Page Transcription QA v42 | integer | The number of segments reviewed by the user in Full Page Transcription QA tasks. |
| Segments Reviewed in Full Page Transcription QA per Hour v42 | float | The number of segments reviewed by the user per hour in Full Page Transcription QA tasks. |
| Segment Characters Reviewed in Full Page Transcription QA v42 | integer | The number of characters reviewed by the user in Full Page Transcription QA tasks. |
| Segment Characters Reviewed in Full Page Transcription QA per Hour v42 | float | The number of characters reviewed by the user per hour in Full Page Transcription QA tasks. |
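Because each row covers one user on one workflow, totals per user require aggregating across workflows. A minimal sketch, using a hypothetical slice of the report limited to the three columns it needs (the second emma.williams row and its UUID are invented for illustration):

```python
import csv
from collections import defaultdict

# Hypothetical report slice: only the columns this sketch uses.
lines = [
    'Keyer Name,Workflow UUID,Total Time Spent (Hours)',
    'oliver.smith (Oliver Smith),3b324d71-bf8e-45ac-8521-37a4cc54f73d,0.04964',
    'emma.williams (Emma Williams),46b19e61-640a-4e15-bbb4-b5eba0974c8b,0.40153',
    'emma.williams (Emma Williams),11111111-2222-3333-4444-555555555555,0.10000',
]

# Sum hours per keyer across all workflows they worked on.
hours_by_keyer = defaultdict(float)
for row in csv.DictReader(lines):
    hours_by_keyer[row['Keyer Name']] += float(row['Total Time Spent (Hours)'])
```

The same pattern applies to any of the count columns, e.g. summing Fields Transcribed in Supervision per user.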
Supervision Transcriptions Report
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/supervision_transcriptions'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
Date,Layout Field ID,Layout Field Name,Meta Type,Dropout,Multiline,Confidence,Transcription Length,Transcription Errors,Target Length,Record Type
2020-08-10 12:00:00 AM,3fca5610-a22e-4b4e-a638-c36faf3989dc,Name,name,True,False,0.34,23,Unknown,23,S
2020-08-11 12:00:02 AM,3fca5610-a22e-4b4e-a638-c36faf3989dc,Name,name,True,False,0.92,12,,12,SQ
2020-08-12 12:00:00 AM,fed7293f-5a0d-46fd-8382-b326015110a3,Email,email_address,False,True,0.36,59,,58,S
The endpoint below allows you to automate the creation of the Supervision Transcriptions report. This report is part of the Usage Report, which is available in the "Usage" tab of our application's Reporting page.
The response will be a CSV file containing information about all structured text Fields that were completed within the specified period and passed through Transcription Supervision.
Endpoint
GET /api/v5/reporting/supervision_transcriptions
Request Parameters
Use the parameters below to define the contents of your reports.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
Response
Each row in the response has the following columns:
| Header | Type | Description |
| Date | datetime | The date and time that the Field was transcribed by the machine. |
| Layout Field ID | string | Unique system-generated identifier of the Layout Field. |
| Layout Field Name | string | The name of the Field, as defined during its layout’s creation. |
| Meta Type | string | Detailed ML configuration. See Standard Data Types for more information. |
| Dropout | boolean | Indicates whether the Dropout feature was enabled for the Field during its layout’s creation. |
| Multiline | boolean | Indicates whether the Field is expected to contain multiple lines of content. |
| Confidence | float | A number between 0 and 1 indicating the confidence the machine has in the accuracy of the Field’s transcription. |
| Transcription Length | integer | The number of characters in the machine’s transcription of the Field. |
| Transcription Errors | string | A list of errors that were found during Recalibration or Transcription Supervision for the Field. |
| Target Length | string | The lengths of the manual transcriptions of the Field, formatted as a comma-separated list. |
| Record Type | string | Indicates whether the Field went through Transcription QA. Will be SQ if there was Transcription QA for the Field, and S otherwise. |
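As the example response shows, the CSV body is preceded by "# Period Start" and "# Period End" comment lines, which must be skipped before parsing. A minimal sketch, using two rows from the example response above:

```python
import csv

# Report body as returned by the endpoint (rows taken from the example response).
raw = """# Period Start=2020-08-10
# Period End=2020-08-13
Date,Layout Field ID,Layout Field Name,Meta Type,Dropout,Multiline,Confidence,Transcription Length,Transcription Errors,Target Length,Record Type
2020-08-10 12:00:00 AM,3fca5610-a22e-4b4e-a638-c36faf3989dc,Name,name,True,False,0.34,23,Unknown,23,S
2020-08-11 12:00:02 AM,3fca5610-a22e-4b4e-a638-c36faf3989dc,Name,name,True,False,0.92,12,,12,SQ
"""

# Drop the '# Period ...' comment lines, then parse the remainder as CSV.
data_lines = [l for l in raw.splitlines() if not l.startswith('#')]
rows = list(csv.DictReader(data_lines))

# Record Type distinguishes Fields that also went through Transcription QA.
qa_rows = [r for r in rows if r['Record Type'] == 'SQ']
```

All values arrive as strings; convert Confidence with float() and note that Target Length may hold a comma-separated list of lengths rather than a single number.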
Table Cell Supervision Transcriptions Report v37
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/table_cell_supervision_transcriptions'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
Date,Layout Column ID,Layout Column Name,Meta Type,Dropout,Multiline,Confidence,Transcription Length,Transcription Errors,Target Length,Record Type
2020-08-10 12:00:00 AM,bcb03ec0-de4c-4100-81eb-58dc08ba32ec,Name,name,True,False,0.34,23,Unknown,23,S
2020-08-11 12:00:02 AM,bcb03ec0-de4c-4100-81eb-58dc08ba32ec,Name,name,True,False,0.92,12,,12,SQ
2020-08-12 12:00:00 AM,e5fd3333-a39d-49e9-9875-6d8bc35d7874,Email,email_address,True,False,0.36,59,,58,S
The endpoint below allows you to automate the creation of the Table Cell Supervision Transcriptions report. This report is part of the Usage Report, which is available in the "Usage" tab of our application's Reporting page.
The response will be a CSV file containing information about all table cell transcriptions that were completed within the specified period and passed through Transcription Supervision.
Endpoint
GET /api/v5/reporting/table_cell_supervision_transcriptions
Request Parameters
Use the parameters below to define the contents of your reports.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
Response
Each row in the response has the following columns:
| Header | Type | Description |
| Date | datetime | The date and time that the Table Cell was transcribed by the machine. |
| Layout Column ID | string | Unique system-generated identifier of the Layout Column. |
| Layout Column Name | string | The name of the Column, as defined during its layout’s creation. |
| Meta Type | string | Detailed ML configuration. See Standard Data Types for more information. |
| Dropout | boolean | Indicates whether the Dropout feature was enabled for the Field during its layout’s creation. |
| Multiline | boolean | Indicates whether the Field is expected to contain multiple lines of content. |
| Confidence | float | A number between 0 and 1 indicating the confidence the machine has in the accuracy of the Field’s transcription. |
| Transcription Length | integer | The number of characters in the machine’s transcription of the Field. |
| Transcription Errors | string | A list of errors that were found during Recalibration or Transcription Supervision for the Field. |
| Target Length | string | The lengths of the manual transcriptions of the Field, formatted as a comma-separated list. |
| Record Type | string | Indicates whether the Field went through Transcription QA. Will be SQ if there was Transcription QA for the Field, and S otherwise. |
Machine Transcriptions Report
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/machine_transcriptions'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
Date,Layout Field ID,Layout Field Name,Meta Type,Dropout,Multiline,Confidence,Transcription Length,Transcription Errors,Normalized Target Length,Record Type,Transcription Model
2020-08-11 12:00:00 AM,fed7293f-5a0d-46fd-8382-b326015110a3,Email,email_address,False,True,0.87,56,Normalization failed,56,Q,IDP
2020-08-11 12:00:02 AM,3fca5610-a22e-4b4e-a638-c36faf3989dc,Name,name,True,False,0.92,12,None,12,SQ,IDP
2020-08-13 12:00:00 AM,3fca5610-a22e-4b4e-a638-c36faf3989dc,Name,name,True,False,0.72,44,Unknown,43,Q,IDP
The endpoint below allows you to automate the creation of the Machine Transcriptions report. This report is part of the Usage Report, which is available in the "Usage" tab of our application's Reporting page.
The response will be a CSV file containing information about all structured text Fields that were completed within the specified period and passed through Transcription QA.
Endpoint
GET /api/v5/reporting/machine_transcriptions
Request Parameters
Use the parameters below to define the contents of your reports.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
Response
Each row in the response has the following columns:
| Header | Type | Description |
| Date | datetime | The date and time that the Field was transcribed by the machine. |
| Layout Field ID | string | Unique system-generated identifier of the Layout Field. |
| Layout Field Name | string | The name of the Field, as defined during its layout’s creation. |
| Meta Type | string | Detailed ML configuration. See Standard Data Types for more information. |
| Dropout | boolean | Indicates whether the Dropout feature was enabled for the Field during its layout’s creation. |
| Multiline | boolean | Indicates whether the Field is expected to contain multiple lines of content. |
| Confidence | float | A number between 0 and 1 indicating the confidence the machine has in the accuracy of the Field’s transcription. |
| Transcription Length | integer | The number of characters in the machine’s transcription of the Field. |
| Transcription Errors | string | A list of errors that were found during Recalibration or Transcription Supervision for the Field. |
| Normalized Target Length | integer | The number of characters in the consensus transcription of the Field. |
| Record Type | string | Indicates whether the Field went through Transcription Supervision. Will be SQ if there was Transcription Supervision for the Field and Q otherwise. |
| Transcription Model | string | The name of the model that was used to transcribe the Field, as configured for the flow that processed the Field's submission. |
Table Cell Machine Transcriptions Report v37
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/table_cell_machine_transcriptions'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
Date,Layout Column ID,Layout Column Name,Meta Type,Dropout,Multiline,Confidence,Transcription Length,Transcription Errors,Normalized Target Length,Record Type,Transcription Model
2020-08-11 12:00:00 AM,e5fd3333-a39d-49e9-9875-6d8bc35d7874,Email,email_address,True,False,0.87,56,Normalization failed,56,Q,IDP
2020-08-11 12:00:02 AM,bcb03ec0-de4c-4100-81eb-58dc08ba32ec,Name,name,True,False,0.92,12,None,12,SQ,IDP
2020-08-13 12:00:00 AM,bcb03ec0-de4c-4100-81eb-58dc08ba32ec,Name,name,True,False,0.72,44,Unknown,43,Q,IDP
The endpoint below allows you to automate the creation of the Table Cell Machine Transcriptions report. This report is part of the Usage Report, which is available in the "Usage" tab of our application's Reporting page.
The response will be a CSV file containing information about all table cells that were completed within the specified period and passed through Transcription QA.
Endpoint
GET /api/v5/reporting/table_cell_machine_transcriptions
Request Parameters
Use the parameters below to define the contents of your reports.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
Response
Each row in the response has the following columns:
| Header | Type | Description |
| Date | datetime | The date and time that the Table Cell was transcribed by the machine. |
| Layout Column ID | string | Unique system-generated identifier of the Layout Column. |
| Layout Column Name | string | The name of the Column, as defined during its layout’s creation. |
| Meta Type | string | Detailed ML configuration. See Standard Data Types for more information. |
| Dropout | boolean | Indicates whether the Dropout feature was enabled for the Field during its layout’s creation. |
| Multiline | boolean | Indicates whether the Field is expected to contain multiple lines of content. |
| Confidence | float | A number between 0 and 1 indicating the confidence the machine has in the accuracy of the Field’s transcription. |
| Transcription Length | integer | The number of characters in the machine’s transcription of the Field. |
| Transcription Errors | string | A list of errors that were found during Recalibration or Transcription Supervision for the Field. |
| Normalized Target Length | integer | The number of characters in the consensus transcription of the Field. |
| Record Type | string | Indicates whether the Field went through Transcription Supervision. Will be SQ if there was Transcription Supervision for the Field and Q otherwise. |
| Transcription Model | string | The name of the model that was used to transcribe the Field, as configured for the flow that processed the Field's submission. |
Signature Machine Transcriptions Report
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/signature_machine_transcriptions'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
Date,Layout Field ID,Layout Field Name,Confidence,Machine Matches,Predicted Value,Field Black Pixels Ratio,Layout Field Black Pixels Ratio,Width,Height,Transcription Model
2020-08-10 02:13:00 AM,c923f338-acf6-400c-ac5e-dde9865f9349,Issuer,0.34,True,1,0.12,0.33,0.42,0.18,IDP
2020-08-12 03:33:15 AM,c923f338-acf6-400c-ac5e-dde9865f9349,Issuer,0.34,False,0,0.34,0.33,0.42,0.18,IDP
The endpoint below allows you to automate the creation of the Signature Machine Transcriptions report. This report is part of the Usage Report, which is available in the "Usage" tab of our application's Reporting page.
The response will be a CSV file containing information about all structured Signature Fields that were completed within the specified period and passed through Transcription QA.
Endpoint
GET /api/v5/reporting/signature_machine_transcriptions
Request Parameters
Use the parameters below to define the contents of your reports.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
Response
Each row in the response has the following columns:
| Header | Type | Description |
| Date | datetime | The date and time that the Field was transcribed by the machine. |
| Layout Field ID | string | Unique system-generated identifier of the Layout Field. |
| Layout Field Name | string | The name of the Field, as defined during its layout’s creation. |
| Confidence | float | A number between 0 and 1 indicating the confidence the machine has in the accuracy of the Field’s transcription. |
| Machine Matches | boolean | Indicates whether the machine transcription matches the consensus value. |
| Predicted Value | string | The machine transcription for the Field. |
| Field Black Pixels Ratio | float | The fraction of black pixels in the Field. |
| Layout Field Black Pixels Ratio | float | The fraction of black pixels in the Layout Field. |
| Width | float | The width of the Field, measured as a fraction of the Page’s width (i.e., Field width / Page width). |
| Height | float | The height of the Field, measured as a fraction of the Page’s height (i.e., Field height / Page height). |
| Transcription Model | string | The name of the model that was used to transcribe the Field, as configured for the flow that processed the Field's submission. |
Checkbox Machine Transcriptions Report
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/checkbox_machine_transcriptions'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
Date,Layout Field ID,Layout Field Name,Confidence,Machine Matches,Predicted Value,Field Black Pixels Ratio,Layout Field Black Pixels Ratio,Width,Height,Transcription Model
2020-08-11 01:00:08 AM,d05073fb-56d2-400d-9c91-aa940e316a8a,Available,0.87,False,0,0.64,0.76,0.45,0.02,IDP
2020-08-13 12:00:00 AM,d05073fb-56d2-400d-9c91-aa940e316a8a,Available,0.21,True,1,0.94,0.76,0.45,0.02,IDP
The endpoint below allows you to automate the creation of the Checkbox Machine Transcriptions report. This report is part of the Usage Report, which is available in the "Usage" tab of our application's Reporting page.
The response will be a CSV file containing information about all structured Checkbox Fields that were completed within the specified period and passed through Transcription QA.
Endpoint
GET /api/v5/reporting/checkbox_machine_transcriptions
Request Parameters
Use the parameters below to define the contents of your reports.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
Response
Each row in the response has the following columns:
| Header | Type | Description |
| Date | datetime | The date and time that the Field was transcribed by the machine. |
| Layout Field ID | string | Unique system-generated identifier of the Layout Field. |
| Layout Field Name | string | The name of the Field, as defined during its layout’s creation. |
| Confidence | float | A number between 0 and 1 indicating the confidence the machine has in the accuracy of the Field’s transcription. |
| Machine Matches | boolean | Indicates whether the machine transcription matches the consensus value. |
| Predicted Value | string | The machine transcription for the Field. |
| Field Black Pixels Ratio | float | The fraction of black pixels in the Field. |
| Layout Field Black Pixels Ratio | float | The fraction of black pixels in the Layout Field. |
| Width | float | The width of the Field, measured as a fraction of the Page’s width (i.e., Field width / Page width). |
| Height | float | The height of the Field, measured as a fraction of the Page’s height (i.e., Field height / Page height). |
| Transcription Model | string | The name of the model that was used to transcribe the Field, as configured for the flow that processed the Field's submission. |
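The Machine Matches column makes it straightforward to estimate machine accuracy on checkboxes over a period. A minimal sketch, using the two rows from the example response above and assuming the boolean is serialized as the strings True/False, as shown there:

```python
import csv

# Hypothetical slice of the report: only the columns this sketch uses,
# with values taken from the example response above.
lines = [
    'Date,Layout Field Name,Confidence,Machine Matches,Predicted Value',
    '2020-08-11 01:00:08 AM,Available,0.87,False,0',
    '2020-08-13 12:00:00 AM,Available,0.21,True,1',
]
rows = list(csv.DictReader(lines))

# Share of checkboxes where the machine transcription matched consensus.
match_rate = sum(r['Machine Matches'] == 'True' for r in rows) / len(rows)
```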
Application Usage Report
Example Request
from datetime import date
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/reporting/application_usage'
endpoint_url = urljoin(base_url, endpoint)
params = {'start_date': date(2020, 8, 10).isoformat(), 'end_date': date(2020, 8, 13).isoformat()}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params, stream=True)
    for line in r.iter_lines():
        decoded_line = line.decode('utf-8')
        print(decoded_line)
Example Response
# Period Start=2020-08-10
# Period End=2020-08-13
Date,Number of Submissions Created,Number of Submissions Completed,Number of Documents Created,Number of Documents Completed,Number of Pages Created,Number of Pages Completed,Number of Pages Matched to Structured Layouts Created,Number of Pages Matched to Structured Layouts Completed,Number of Pages Matched to Semi-Structured Layouts Created,Number of Pages Matched to Semi-Structured Layouts Completed,Number of Pages with Fields on Them Created,Number of Pages with Fields on Them Completed,Number of Fields Created,Number of Fields Completed,Number of Characters Completed,Seats,Number of Files Submitted,Number of Entries Machine Transcribed,Number of Entries Manually Transcribed,Number of Fields Machine Identified,Number of Fields Manually Identified,Number of Pages Classified Automatically,Number of Pages Classified Manually,Number of Unique Layouts Matched,Number of QA Responses on Manual Transcription,Number of QA Correct Responses on Manual Transcription,Number of QA Responses on System Transcription,Number of QA Correct Responses on System Transcription,Number of QA Responses on Machine Transcription,Number of QA Correct Responses on Machine Transcription,Number of QA Responses on Manual Field Identification,Number of QA Correct Responses on Manual Field Identification,Number of QA Responses on System Field Identification,Number of QA Correct Responses on System Field Identification,Number of QA Responses on Machine Field Identification,Number of QA Correct Responses on Machine Field Identification,Number of Incremental Fields Machine Transcribed,Organize Documents: Number of Tasks Performed,Organize Documents: Number of Pages Shown in Step 1,Organize Documents: Number of Pages Categorized in Step 1,Organize Documents: Number of Documents Created in Step 1,Organize Documents: Number of Documents Outputted in Step 1,Organize Documents: Number of Folders Created in Step 2,Number of Live Layouts,Software Version,Number of Machine Match Non-Structured Pages,Number of Human Match Non-Structured Pages,Number of Releases,Number of Archived Releases,Number of Layouts,Number of Archived Layouts,Number of Layout Versions,Number of Live IDP Flows,Number of Table Cells Created,Number of Table Cells Completed,Number of Table Cells Machine Identified,Number of Table Cells Manually Identified,Number of QA Responses on Manual Non-Structured Classification Pages,Number of QA Correct Responses on Manual Non-Structured Classification Pages,Number of QA Responses on System Non-Structured Classification Pages,Number of QA Correct Responses on System Non-Structured Classification Pages,Number of QA Responses on Machine Non-Structured Classification Pages,Number of QA Correct Responses on Machine Non-Structured Classification Pages,Number of Fields Extracted in Flexible Extraction,Number of Table Cells Extracted in Flexible Extraction,Number of Fields Extracted in Custom Supervision,Number of Table Cells Extracted in Custom Supervision,Number of Flows,Number of Live Flows,Number of QA Responses on Machine Full Page Transcription,Number of QA Correct Responses on Machine Full Page Transcription
2020-08-10 12:00:00 AM,2,2,3,3,5,5,5,5,,,4,4,35,35,234,2,3,103,94,22,13,3,2,1,2,1,5,2,3,1,2,1,5,2,3,1,0,1,2,2,1,1,1,50,28.0.2@3fc44c5f8798d4406553867e8858dddb00259814,0,0,8,2,128,42,256,1,162,162,132,30,2,1,0,0,3,1,2,2,2,2,0,0,3,1
2020-08-11 12:00:00 AM,2,2,9,9,12,12,,,12,12,7,7,33,33,432,1,4,146,46,20,13,12,0,1,,,3,2,3,2,,,3,2,3,2,0,0,0,0,0,0,0,50,28.0.2@3fc44c5f8798d4406553867e8858dddb00259814 - 28.0.3@f42b83fd78b8e4d79d2b999c526e346e9a00bc4f,12,0,8,2,128,42,256,1,159,159,131,28,,,0,0,3,2,4,4,4,4,0,0,3,2
2020-08-12 12:00:00 AM,2,2,15,15,21,21,,,21,21,20,20,78,78,123,2,4,280,32,68,10,18,3,1,15,5,29,11,14,6,5,2,5,2,,,0,1,3,3,1,1,1,50,28.0.3@f42b83fd78b8e4d79d2b999c526e346e9a00bc4f,18,3,8,2,128,42,256,1,234,234,130,104,5,2,0,0,,,6,6,6,6,0,0,5,2
2020-08-13 12:00:00 AM,0,0,0,0,0,0,0,0,,,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,Unknown,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
The endpoint below allows you to automate the creation of the Application Usage report. This report is part of the Usage Report, which is available in the "Usage" tab of our application's Reporting page. The response will contain the report's data in CSV format.
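Because the response streams CSV prefixed with `#` metadata lines (as in the example above), a small helper can split it into metadata and data rows. A sketch only: `parse_usage_report` is an illustrative name, and the sample lines are a trimmed-down version of the example response.

```python
import csv

def parse_usage_report(lines):
    """Split a streamed Application Usage report into metadata and rows.

    Lines starting with '#' (e.g. '# Period Start=...') are collected as
    metadata; the remaining lines are parsed as CSV with a header row.
    """
    metadata, data_lines = {}, []
    for line in lines:
        if line.startswith('#'):
            key, _, value = line.lstrip('# ').partition('=')
            metadata[key] = value
        elif line.strip():
            data_lines.append(line)
    rows = list(csv.DictReader(data_lines))
    return metadata, rows

# Example with a trimmed-down version of the response above
sample = [
    '# Period Start=2020-08-10',
    '# Period End=2020-08-13',
    'Date,Number of Submissions Created',
    '2020-08-10 12:00:00 AM,2',
]
metadata, rows = parse_usage_report(sample)
```

The decoded lines from `r.iter_lines()` in the request example can be fed directly into this helper.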
Endpoint
GET /api/v5/reporting/application_usage
Request Parameters
Use the parameters below to define the contents of your reports.
| Property | Type | Description |
| start_date | date | Indicates the beginning of the time period covered by the report. The report’s start date is inclusive. This parameter is mandatory. |
| end_date | date | Indicates the end of the time period covered by the report. Defaults to the current date. The report’s end date is inclusive. |
Response
Each row represents a day within your report’s date range and has the following columns:
| Header | Type | Description |
| Date | datetime | The start date and time of the time period covered by the row. Each row covers a 24-hour time period. |
| Number of Submissions Created | integer | The number of Submissions created in the account. |
| Number of Submissions Completed | integer | The number of Submissions that finished processing. |
| Number of Documents Created | integer | The number of Documents created in the account. |
| Number of Documents Completed | integer | The number of Documents that finished processing. |
| Number of Pages Created | integer | The number of Pages created in the account. |
| Number of Pages Completed | integer | The number of Pages that finished processing. |
| Number of Pages Matched to Structured Layouts Created | integer | The number of Pages created that were matched to Structured layouts. |
| Number of Pages Matched to Structured Layouts Completed | integer | The number of Pages that finished processing that were matched to Structured layouts. |
| Number of Pages Matched to Semi-Structured Layouts Created | integer | The number of Pages created that were matched to Semi-structured layouts. |
| Number of Pages Matched to Semi-Structured Layouts Completed | integer | The number of Pages that finished processing that were matched to Semi-structured layouts. |
| Number of Pages with Fields on Them Created | integer | The number of Pages created that contained Fields. |
| Number of Pages with Fields on Them Completed | integer | The number of Pages that finished processing that contained Fields. |
| Number of Fields Created | integer | The number of Fields created in the account. |
| Number of Fields Completed | integer | The number of Fields that finished processing. |
| Number of Characters Completed | integer | The number of transcribed characters in completed Fields and Table Cells. |
| Seats | integer | The number of users who logged in to the application. |
| Number of Files Submitted | integer | The number of files ingested into the application. |
| Number of Entries Machine Transcribed | integer | The number of machine transcribed Fields and Table Cells. |
| Number of Entries Manually Transcribed | integer | The number of manually transcribed Fields and Table Cells. |
| Number of Fields Machine Identified | integer | The number of machine-identified Fields. |
| Number of Fields Manually Identified | integer | The number of manually identified Fields. |
| Number of Pages Classified Automatically | integer | The number of machine-classified Pages. |
| Number of Pages Classified Manually | integer | The number of manually classified Pages. |
| Number of Unique Layouts Matched | integer | The number of unique layouts that Documents have been matched to. |
| Number of QA Responses on Manual Transcription | integer | The number of manual transcriptions on which Transcription QA was completed. |
| Number of QA Correct Responses on Manual Transcription | integer | The number of correct manual transcriptions, as determined during Transcription QA. |
| Number of QA Responses on System Transcription | integer | The number of transcriptions on which Transcription QA was completed. |
| Number of QA Correct Responses on System Transcription | integer | The number of correct transcriptions, as determined during Transcription QA. |
| Number of QA Responses on Machine Transcription | integer | The number of machine transcriptions on which Transcription QA was completed. |
| Number of QA Correct Responses on Machine Transcription | integer | The number of correct machine transcriptions, as determined during Transcription QA. |
| Number of QA Responses on Manual Field Identification | integer | The number of manually identified Fields on which Field ID QA was completed. |
| Number of QA Correct Responses on Manual Field Identification | integer | The number of correct manual Field identifications, as determined during Field ID QA. |
| Number of QA Responses on System Field Identification | integer | The number of Fields on which Field ID QA was completed. |
| Number of QA Correct Responses on System Field Identification | integer | The number of correct Field identifications, as determined during Field ID QA. |
| Number of QA Responses on Machine Field Identification | integer | The number of machine-identified Fields on which Field ID QA was completed. |
| Number of QA Correct Responses on Machine Field Identification | integer | The number of correct Field identifications completed by the machine, as determined during Field ID QA. |
| Number of Incremental Fields Machine Transcribed | integer | The number of Fields incrementally transcribed using a fine-tuned model. |
| Organize Documents: Number of Tasks Performed | integer | The number of Document Organization tasks completed. |
| Organize Documents: Number of Pages Shown in Step 1 | integer | The number of Pages shown during Document Organization. |
| Organize Documents: Number of Pages Categorized in Step 1 | integer | The number of Pages classified during Document Organization. |
| Organize Documents: Number of Documents Created in Step 1 | integer | The number of Documents created during Document Organization. |
| Organize Documents: Number of Documents Outputted in Step 1 | integer | The total number of Documents created as a result of Document Organization performed manually or by the machine. |
| Organize Documents: Number of Folders Created in Step 2 v40 | integer | The number of Document Folders created during Document Organization. |
| Number of Live Layouts | integer | The number of live layouts in the account. |
| Software Version | string | The version of the application that was live. If the version changed after the row’s time period began, this cell will contain a hyphen-separated list of the versions that were live during the time period (e.g., first_version - last_version). |
| Number of Machine Match Non-Structured Pages | integer | The number of machine-classified non-Structured Pages. |
| Number of Human Match Non-Structured Pages | integer | The number of manually classified non-Structured Pages. |
| Number of Releases | integer | The number of releases (sets of layouts) present in the system. |
| Number of Archived Releases | integer | The number of archived releases. |
| Number of Layouts | integer | The number of layouts in the system. |
| Number of Archived Layouts | integer | The number of archived layouts in the system. |
| Number of Layout Versions | integer | The number of layout versions in the system. |
| Number of Live IDP Flows | integer | The number of live IDP flows in the system. |
| Number of Table Cells Created | integer | The number of Table Cells created in the account. |
| Number of Table Cells Completed | integer | The number of Table cells that finished processing. |
| Number of Table Cells Machine Identified | integer | The number of machine-identified Table Cells. |
| Number of Table Cells Manually Identified | integer | The number of manually identified Table Cells. |
| Number of QA Responses on Manual Non-Structured Classification Pages | integer | The number of manually classified non-Structured Pages on which Classification QA was completed. |
| Number of QA Correct Responses on Manual Non-Structured Classification Pages | integer | The number of correct manual classifications of non-Structured Pages, as determined during Classification QA. |
| Number of QA Responses on System Non-Structured Classification Pages | integer | The number of non-Structured Pages on which Classification QA was completed. |
| Number of QA Correct Responses on System Non-Structured Classification Pages | integer | The number of correct classifications of non-Structured Pages, as determined during Classification QA. |
| Number of QA Responses on Machine Non-Structured Classification Pages | integer | The number of machine-classified non-Structured Pages on which Classification QA was completed. |
| Number of QA Correct Responses on Machine Non-Structured Classification Pages | integer | The number of correct machine classifications of non-Structured Pages, as determined during Classification QA. |
| Number of Fields Extracted in Flexible Extraction | integer | The number of Fields extracted during Flexible Extraction. |
| Number of Table Cells Extracted in Flexible Extraction | integer | The number of Table Cells extracted during Flexible Extraction. |
| Number of Fields Extracted in Custom Supervision | integer | The number of Fields extracted during Custom Supervision. |
| Number of Table Cells Extracted in Custom Supervision | integer | The number of Table Cells extracted during Custom Supervision. |
| Number of Flows | integer | The number of flows in the system. |
| Number of Live Flows | integer | The number of live (deployed) flows in the system. |
| Number of QA Responses on Machine Full Page Transcription v42 | integer | The number of machine-transcribed Segments on which Full Page Transcription QA was completed. |
| Number of QA Correct Responses on Machine Full Page Transcription v42 | integer | The number of correct Segment transcriptions completed by the machine, as determined during Full Page Transcription QA. |
Import/Export API Artifacts (Beta) v36
Artifacts are Hyperscience entities, such as flows, releases, and machine learning (ML) models, that are uniquely identifiable and exportable to ZIP files, JSON files, and other file formats.
In v35, we introduced the ability to transfer such Artifacts across Hyperscience instances in an automated way using API calls. As of v36, this is part of the Public API.
This feature enables the following use cases:
- Migrating artifacts from an on-premise instance to a Hyperscience SaaS environment
- Transferring a flow, with its associated release and models, between instances (e.g., when promoting it from a UAT environment to production)
Key Concepts
Actors
[Source]: A Hyperscience instance that provides, or exports, Artifacts
[Target]: A Hyperscience instance that receives, or imports, Artifacts
[User]: The initiator of the Artifacts' import or export. This user can also be an automated actor, such as a CI/CD script.
Dependency Tracking
For every Artifact entity, we maintain a predefined list of all the other Artifacts that form its functional dependencies. The recommended export/import pattern is to transfer this whole list of dependencies in a single operation. This tracking makes behavior on the [Target] system more reproducible and removes the need to manually track those dependencies and transfer them one by one.
For example, a Flow Artifact has the following dependencies: its related flows, all its blocks, its release, and its associated ML models (limited to NLC, Field ID, and Table ID as of v36).
Repositories
Repositories define where a [Target] system should search for and download artifacts from. Repository definitions can be persisted in the [Target] system's database and used for all import/export operations. Repositories can also be specified in an ad-hoc fashion for the scope of a single import/export operation.
We support three protocols for Artifacts retrieval:
- HTTP – Artifacts will be retrieved from another Hyperscience instance
- LOCAL_ZIPS – Artifacts will be retrieved from all ZIP files in a local (or network-mapped) folder
- S3_ZIPS – Artifacts will be retrieved from all ZIP files in an S3 bucket
Permissions
The following endpoints require API User permissions to be accessed:
- /api/v5/artifacts/import
- /api/v5/artifacts/export
- All endpoints under /api/v5/artifacts/repositories
Transferring Artifacts
Over HTTP
Transferring Artifacts across instances commonly involves the following steps:
1. Identify the top-level Artifact that needs to be transferred, specifically its type and uuid. (Please see the Technical Appendix below for more information on types.)
2. Retrieve the full list of its dependent Artifacts by querying the [Source] system:
GET https://<source>/api/v5/artifacts/<type>/<uuid>/dependencies_list
A JSON is returned, which contains a list of artifact descriptions.
3. (Optional) Configure the [Target] system to use [Source] as an HTTP Artifacts Repository. This configuration is persisted in the [Target] system's database and will be reused for all subsequent Artifacts operations. The JSON payload needs to follow the ArtifactsRepository schema with HTTPRepositoryConfig as the config value. Any HTTP Authentication headers needed to access the [Source] system's API need to be specified in the payload.
POST https://<target>/api/v5/artifacts/repositories
4. Initiate the Artifacts Import on the [Target] system. The import is a long-running background process, and after its initiation, it runs in unattended mode. Its progress can be monitored at a dedicated endpoint. The JSON payload is the Artifacts list JSON obtained in step 2.
POST https://<target>/api/v5/artifacts/import
The response JSON contains a progress_url that can be polled to monitor the progress of the import.
5. (Optional) Check the import progress on the [Target] system:
GET https://<target>/api/job_tracking/<job_tracking_uuid>
The returned JSON contains a completed (true / false) value, along with more detailed timing and progress information.
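The HTTP transfer steps above can be scripted end to end. The sketch below is illustrative only: `transfer_artifact_over_http` is a hypothetical helper name, the sessions are assumed to be authenticated requests-style sessions (e.g., from `get_oauth2_session()` in the Authentication section), and the optional repository-configuration step is omitted.

```python
import time
from urllib.parse import urljoin

def transfer_artifact_over_http(source_session, target_session,
                                source_url, target_url,
                                artifact_type, artifact_uuid,
                                poll_interval=5):
    """Sketch of the HTTP transfer: list dependencies on the [Source]
    system, start the import on the [Target] system, then poll until done."""
    # Steps 1-2: retrieve the full dependency list from the [Source] system.
    deps_url = urljoin(
        source_url,
        f'api/v5/artifacts/{artifact_type}/{artifact_uuid}/dependencies_list',
    )
    artifacts = source_session.get(deps_url).json()

    # Step 4: initiate the import on the [Target] system with that list.
    job = target_session.post(
        urljoin(target_url, 'api/v5/artifacts/import'), json=artifacts
    ).json()

    # Step 5: poll the returned progress_url until the job completes.
    while True:
        progress = target_session.get(
            urljoin(target_url, job['progress_url'])
        ).json()
        if progress.get('completed'):
            return progress
        time.sleep(poll_interval)
```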
Using ZIP files on S3
When a direct HTTP connection between the [Source] and [Target] systems can't be established, the Artifacts transfer can still be performed by using a shared S3 bucket. This scenario is specifically relevant when transferring Artifacts from on-premise instances to SaaS instances.
1. Identify the top-level Artifact that needs to be transferred, specifically its type and uuid. (Please see the Technical Appendix below for more information on types.)
2. Retrieve the full list of its dependent Artifacts by querying the [Source] system:
GET https://<source>/api/v5/artifacts/<type>/<uuid>/dependencies_list
A JSON is returned, which contains a list of artifact descriptions.
3. Initiate the Artifacts Export on the [Source] system. Similar to the import, the export is a long-running background process. The JSON payload follows the ExportArtifactsPayload schema.
POST https://<source>/api/v5/artifacts/export
The response JSON contains a progress_url that can be polled to monitor the progress of the export.
4. (Optional) Configure the [Target] system to use the shared S3 bucket as an S3_ZIPS Artifacts Repository. This configuration is persisted in the [Target] system's database and will be reused for all subsequent Artifacts operations. The JSON payload needs to follow the ArtifactsRepository schema with S3ZIPsRepositoryConfig as the config value.
POST https://<target>/api/v5/artifacts/repositories
5. Initiate the Artifacts Import on the [Target] system. The import is a long-running background process, and after its initiation, it runs in unattended mode. Its progress can be monitored at a dedicated endpoint. The JSON payload is the Artifacts list JSON obtained in step 2.
POST https://<target>/api/v5/artifacts/import
The response JSON contains a progress_url that can be polled to monitor the progress of the import.
6. (Optional) Check the import progress on the [Target] system:
GET https://<target>/api/job_tracking/<job_tracking_uuid>
The returned JSON contains a completed (true / false) value, along with more detailed timing and progress information.
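Both scenarios end with polling the job-tracking endpoint. That loop can be factored into a small helper; the sketch below assumes an authenticated requests-style session (e.g., from `get_oauth2_session()` in the Authentication section) and relies only on the documented `completed` flag. `wait_for_job` is an illustrative name.

```python
import time
from urllib.parse import urljoin

def wait_for_job(session, base_url, job_tracking_uuid,
                 poll_interval=5, timeout=600):
    """Poll GET /api/job_tracking/<job_tracking_uuid> until the returned
    JSON reports completed=true, or raise once the timeout elapses."""
    url = urljoin(base_url, f'api/job_tracking/{job_tracking_uuid}')
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = session.get(url).json()
        if status.get('completed'):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(
        f'Job {job_tracking_uuid} did not complete within {timeout}s'
    )
```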
Sequence Diagrams
Detailed, step-by-step network request diagrams are available for both of these scenarios.
Technical Appendix
Artifact types and UUIDs for common exports
Identifying the Artifact Type and the Artifact UUID for the desired entity export is the first step in the Artifacts Transfer process.
Releases
- An exported Release contains all the Release's Layouts.
- Only "Locked" and "Live" Releases are available as Artifacts.
- Artifact Type: layout_release
- Artifact UUID: Use the release_uuid component of the Release Details page URL in the UI: https://<source>/layouts/releases/<release_uuid>
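For scripted exports, the release_uuid can be cut out of a copied Release Details URL. A minimal sketch: `release_uuid_from_url` is an illustrative helper name, and the URL below is a made-up example.

```python
def release_uuid_from_url(url):
    """Extract the release UUID from a Release Details page URL,
    e.g. https://<source>/layouts/releases/<release_uuid>."""
    return url.rstrip('/').rsplit('/', 1)[-1]

# Hypothetical example URL copied from the UI
uuid = release_uuid_from_url(
    'https://hs.example.com/layouts/releases/0b8078e2-8cb9-4e32-8b9d-0f3b74557cfc'
)
```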
Flows (with ML Models)
- A Flow can be exported / imported along with its related Flows, Blocks, Releases, and Classification, Field ID, and Table ID ML Models.
- Flows are editable objects and cannot be used as Artifacts directly. Flow export/import is based on Flow Versions, which are non-editable.
- Artifact Type: idp_flow_version
- Artifact UUID: As of v36, there is no direct way to retrieve the Flow-Version UUID for a given Flow using the UI. Instead:
1. Get the Flow UUID. This ID can be obtained with the Copy Flow UUID context action in the UI at https://<source>/flows.
2. Query the Flow Management API endpoint with this Flow UUID:
GET https://<source>/api/v5/flows/<flow_uuid>
The Artifact UUID is at the flow_version_uuid key in the response's JSON.
Example: In the JSON snippet below, as returned by the Flow Management API endpoint, the Flow UUID is 303f88b8-5a26-4bd8-bb88-106553d01680 and the Flow-Version UUID (Artifact UUID) is 01923f8e-42ad-46b7-9e3e-435f4cfcca52.
Example Flow JSON
{
  "uuid": "303f88b8-5a26-4bd8-bb88-106553d01680",
  "title": "Test Flow with files",
  ...
  "flow_version": 11,
  "flow_version_uuid": "01923f8e-42ad-46b7-9e3e-435f4cfcca52",
  ...
}
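This lookup can be scripted against the Flow Management API. A sketch, assuming an authenticated requests-style session such as the one returned by `get_oauth2_session()` in the Authentication section; `get_flow_version_uuid` is an illustrative name.

```python
from urllib.parse import urljoin

def get_flow_version_uuid(session, base_url, flow_uuid):
    """Return the Artifact UUID (flow_version_uuid) for a given Flow UUID
    by querying GET /api/v5/flows/<flow_uuid>."""
    flow = session.get(urljoin(base_url, f'api/v5/flows/{flow_uuid}')).json()
    return flow['flow_version_uuid']
```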
Flows (w/o ML Models)
- In certain scenarios, we may want to transfer only Flows, Blocks, and Releases, without their associated ML Models.
- Artifact Type: flow_version
- Artifact UUID: (obtained as described in "Flows (with ML Models)" above)
An OpenAPI Schema describing this API can be found in our Swagger Explorer.
Flows v36
Flow Object
Example Flow Object
{
  "uuid": "d643095a-aa1b-4b88-9d4e-48ed421bd4b1",
  "title": "Flow that calls itself recursively",
  "description": "Accepts two integer inputs - start_number and final_number. Starts from start_number and calls itself incrementing by 1 until final_number number is reached",
  "identifier": "FLOW_CALLING_ITSELF",
  "flow_version": 3,
  "flow_version_uuid": "6498ede6-4a27-4631-b089-fc8fb1a18fa9",
  "is_live": true,
  "dt_deployed": "2023-02-01T16:39:05.456835Z",
  "is_archived": false,
  "is_user_facing": true,
  "dt_created": "2023-02-01T16:38:53.486271Z",
  "dt_updated": "2023-02-01T16:39:05.456841Z",
  "dsl": {
    "metadata": {
      "file_type": "workflow_dsl",
      "schema_version": 2
    },
    "uuid": "d643095a-aa1b-4b88-9d4e-48ed421bd4b1",
    "owner_email": "flows.sdk@hyperscience.com",
    "title": "Flow that calls itself recursively",
    "manifest": {
      "input": [
        {
          "name": "start_number",
          "type": "integer",
          "title": "Number to start from",
          "value": 0
        },
        {
          "name": "final_number",
          "type": "integer",
          "title": "Number to reach while recursing",
          "value": 3
        }
      ],
      "identifier": "FLOW_CALLING_ITSELF"
    },
    "blocks": [
      {
        "identifier": "CUSTOM_CODE",
        "reference_name": "final_number_reached",
        "input": {
          "data": {
            "current": "${workflow.input.start_number}",
            "final": "${workflow.input.final_number}"
          },
          "code": "lambda current, final: current >= final"
        }
      },
      {
        "identifier": "ROUTING",
        "reference_name": "call_self_or_finish",
        "branches": [
          {
            "blocks": [
              {
                "identifier": "CUSTOM_CODE",
                "reference_name": "end_of_recursion",
                "input": {
                  "data": {
                    "current": "${workflow.input.start_number}",
                    "final": "${workflow.input.final_number}"
                  },
                  "code": "lambda current, final: 'Do some end work here'"
                }
              }
            ],
            "case": "true"
          },
          {
            "blocks": [
              {
                "identifier": "CUSTOM_CODE",
                "reference_name": "increment_by_one",
                "input": {
                  "data": {
                    "num": "${workflow.input.start_number}"
                  },
                  "code": "lambda num: num + 1"
                }
              },
              {
                "identifier": "FLOW_CALLING_ITSELF",
                "reference_name": "call_self",
                "input": {
                  "start_number": "${increment_by_one.output.result}",
                  "final_number": "${workflow.input.final_number}"
                }
              }
            ],
            "case": "false"
          }
        ],
        "decision": "${final_number_reached.output.result}"
      }
    ],
    "input": {},
    "description": "Accepts two integer inputs - start_number and final_number. Starts from start_number and calls itself incrementing by 1 until final_number number is reached"
  },
  "errors": []
}
A Flow represents a sequence of business steps managed and executed by the Hyperscience Platform.
These definitions are like blueprints - they configure and wire together Triggers, Blocks, and other parameters, while the Hyperscience Platform takes care of the execution and lifecycle during the actual runtime. More information about flows can be found in the Flows SDK documentation.
Object properties
| Property | Type | Description |
| uuid | string | Unique identifier of the Flow in the system that follows the UUID format. May be specified by the user in the flow DSL. |
| identifier | string | Unique identifier of the flow in the system in human-readable format. |
| title | string | Human-readable name of the flow to be used in the UI. |
| description | string | Contains details about the flow's purpose and function. |
| flow_version | integer | Sequence number of the latest version of this flow stored in this system. |
| flow_version_uuid | string | Automatically assigned unique identifier of the latest version of this flow stored in this system. |
| is_live | boolean | If true, indicates that the flow is live in the system and ready for processing. Otherwise returns false. Can be toggled using the deploy/undeploy actions. |
| dt_deployed | datetimetz (an ISO-8601 formatted datetime string) | Time the Flow was last deployed in the system. |
| is_archived | boolean | If true, indicates that the flow is archived, i.e. not usable in the system. Otherwise returns false. Can be toggled using the archive/restore actions. |
| is_user_facing | boolean | If true, indicates that the flow is visible in the UI of the Hyperscience platform and is intended for end users to interact with. Otherwise returns false. |
| dt_created | datetimetz (an ISO-8601 formatted datetime string) | Time the Flow was created in the system. |
| dt_updated | datetimetz (an ISO-8601 formatted datetime string) | Time the Flow was last updated. |
| dsl | JSON object | An object that contains the last version of the Flow's definition in Hyperscience DSL (HS DSL) - a JSON-based format for writing flows for the Hyperscience platform. |
| errors | array of JSON objects | Array of error objects representing validation errors for the latest version of the Flow. |
Importing a Single Flow
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flows'
endpoint_url = urljoin(base_url, endpoint)
data = { 'force': True }
files = [
    ('file', ('test_flow.json', open('test_flow.json', 'rb'), 'application/json')),
]
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url, files=files, data=data)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
    flow_json = r.json()
Importing flows is the means of creating new flows in the system, as well as updating existing ones. Whether a new flow is created or an existing flow is updated depends on the UUID of the imported flow and whether it already exists in the system. Exporting a flow, editing its UUID, and importing it again will result in the creation of a new flow (assuming the new UUID does not clash with that of another flow).
After a flow is imported into Hyperscience, it is not immediately available for processing. A flow becomes available for processing only after it is deployed. For more information, please read Deploy a Flow.
Flow Import Endpoint
POST /api/v5/flows
The flow import endpoint accepts a file as the request payload. The file can be either a JSON file that contains a flow definition in HS DSL (generated using Flows SDK) or a Flows ZIP archive exported from Hyperscience.
Request parameters
| Property | Type | Description |
| version | integer | The latest version number for this flow in this system, as returned by the flow listing or flow retrieval APIs. Must be provided by the client when updating a flow to avoid unintentional overwrites when multiple clients edit in parallel. Not required when importing a new flow that is not present in this system. |
| force | boolean | Forces overwriting of the existing flow, skipping version validation. This may result in overwriting a live flow in the system. Use with caution; in general, we recommend not forcing imports on production systems and using this option only during development. |
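To avoid the force flag on production systems, a client can fetch the current flow_version first and pass it as the version parameter. A sketch: `versioned_import_data` is an illustrative helper name, and the session is assumed to be an authenticated requests-style session (e.g., from `get_oauth2_session()` in the Authentication section).

```python
from urllib.parse import urljoin

def versioned_import_data(session, base_url, flow_uuid):
    """Return the 'data' payload for a version-checked flow import.

    Fetching the current flow_version first makes the import fail on
    concurrent edits instead of silently overwriting them (no 'force').
    """
    current = session.get(urljoin(base_url, f'api/v5/flows/{flow_uuid}')).json()
    return {'version': current['flow_version']}
```

The returned dict can then be passed as `data=` alongside `files=` when POSTing to /api/v5/flows, as in the import example above.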
Importing Multiple Flows v38
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flows/create_multiple'
endpoint_url = urljoin(base_url, endpoint)
files = [
    ('files', ('test_flow.json', open('test_flow.json', 'rb'), 'application/json')),
    ('files', ('flows.zip', open('flows.zip', 'rb'))),
]
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url, files=files)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
    flow_json = r.json()
This endpoint extends the single-flow import above: instead of one flow JSON or ZIP file, it accepts multiple ZIP and JSON files of possibly unrelated flows in a single request.
Flow Import Multiple Endpoint
POST /api/v5/flows/create_multiple
The Flow Import endpoint accepts multiple files as the request payload. The files can be either JSON files that contain a flow definition in HS DSL (generated using Flows SDK) or a Flows ZIP archive exported from Hyperscience. The endpoint allows importing both flows that are dependent on each other and unrelated flows.
Exporting Flows
Example Request
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flows/'
flow_uuid = 'd643095a-aa1b-4b88-9d4e-48ed421bd4b1'
export = 'export'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, flow_uuid, export))
# The response is a ZIP file containing the requested flow with all files it references.
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(f'Return status: {r.status_code}')
# The 'with_dependencies' parameter tells the system to export not only the requested flow, but also all flows it
# depends on. Available in v38 and later.
params = {'with_dependencies': 'true'}
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params)
    print(f'Return status: {r.status_code}')
The Flow Export endpoint bundles a flow, or a flow together with its dependencies (v38 and later), in a ZIP file that can be imported into another system. The ZIP file contains the flow JSON under the flows directory, as well as all referenced files under the files directory.
Flow Export Endpoint
GET /api/v5/flows/flow_uuid/export
Request parameters
| Property | Type | Description |
| with_dependencies v38 | boolean | Tells the system to also bundle all flows the specified flow depends on. |
Retrieving Flows
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flows/'
flow_uuid = 'd643095a-aa1b-4b88-9d4e-48ed421bd4b1'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, flow_uuid))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Retrieving Flows Endpoint
This endpoint allows you to retrieve a flow's metadata and definition in HS DSL JSON format by its UUID. The result is a Flow Object.
GET /api/v5/flows/flow_uuid
Listing Flows
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flows'
endpoint_url = urljoin(base_url, endpoint)
params = {'is_live': 'true'}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url, params=params)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Listing Flows Endpoint
Example Response
{
"count": 3,
"next": null,
"previous": null,
"results": [
{
"uuid": "2869449d-db9e-485b-b285-f954346793c6",
"title": "Flow with a file input",
"description": "Accepts a text file as an input and logs and outputs its contents",
"identifier": "FILE_INPUT_SHOWCASE",
"flow_version": 3,
"flow_version_uuid": "d13d858f-29b0-4537-ba11-a7be0039a413",
"is_live": true,
"dt_deployed": "2023-01-27T12:52:59.189045Z",
"is_archived": false,
"is_user_facing": true,
"dt_created": "2023-01-27T12:52:53.804816Z",
"dt_updated": "2023-01-27T12:52:59.189051Z",
"dsl": {
"metadata": {
"file_type": "workflow_dsl",
"schema_version": 2
},
"uuid": "2869449d-db9e-485b-b285-f954346793c6",
"owner_email": "flows.sdk@hyperscience.com",
"title": "Flow with a file input",
"manifest": {
"input": [
{
"name": "file",
"type": "File",
"type_spec": {
"allowed_extensions": ".txt"
},
"title": "File to read",
"optional": true
}
],
"identifier": "FILE_INPUT_SHOWCASE"
},
"blocks": [
{
"identifier": "CUSTOM_CODE",
"reference_name": "read_and_log_file",
"input": {
"data": {
"file_uuid": "${workflow.input.file}"
},
"code": "\nfrom typing import Any, Dict, List, Tuple\nfrom blocks.base_python_block import WFEngineTaskResult\nfrom blocks.types import BlockInputs\nfrom flows_sdk.types import *\n\n\n# Used to deserialize json inputs by the CCB\nclass CustomCodeBlockProxyInputs(BlockInputs):\n file_uuid: Any\n\ndef _read_and_log_file(file_uuid: str, _hs_block_instance: HsBlockInstance) -> str:\n if not file_uuid:\n _hs_block_instance.log('No file configured')\n return ''\n blob = _hs_block_instance.fetch_blob(file_uuid)\n file_text = blob.content.decode(encoding='utf-8')\n _hs_block_instance.log(file_text)\n return file_text\n\n\n\ndef process_task_flows_sdk_types(\n _hs_block_instance: HsBlockInstance, _hs_task: HsTask, input_data: CustomCodeBlockProxyInputs\n) -> WFEngineTaskResult:\n return _read_and_log_file(**input_data.dict(), _hs_block_instance=_hs_block_instance)\n\n"
}
}
],
"input": {},
"description": "Accepts a text file as an input and logs and outputs its contents"
}
},
{
"uuid": "1201613c-348f-4b16-8e6d-da7a64551e92",
"title": "Submission State Notifications V36",
"description": "Send notifications to external systems when a submission has been created or is waiting for supervision. \nIf the \"Document Processing V36\" flow is live, this flow must also be live, but it can be empty.",
"identifier": "IDP_SUBMISSION_NOTIFY_V36",
"flow_version": 1,
"flow_version_uuid": "ba793aaa-bc1f-4e38-990a-353968b83fcf",
"is_live": true,
"dt_deployed": "2023-01-27T12:33:23.193819Z",
"is_archived": false,
"is_user_facing": true,
"dt_created": "2023-01-27T12:33:23.152588Z",
"dt_updated": "2023-01-27T12:33:23.193829Z",
"dsl": {
"metadata": {
"file_type": "workflow_dsl",
"schema_version": 2
},
"uuid": "1201613c-348f-4b16-8e6d-da7a64551e92",
"owner_email": "sdm@hyperscience.com",
"title": "Submission State Notifications V36",
"manifest": {
"input": [
{
"name": "submission",
"type": "Submission",
"title": "Submission Object",
"ui": {
"hidden": true
}
}
],
"ui": {
"hidden": true,
"icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAGAAAABgCAYAAADimHc4AAAACXBIWXMAACxLAAAsSwGlPZapAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAOGSURBVHgB7ZxBaupQFIZvH2/cFZR23tJ5V1DovKXzQhdQdKzoXHQDOhfdgG5A54JzwRW4gff4AxdsSYLRnPz3lP8DURIN8XxJ7s255+bq9vb2XxA0/gRBRQLISAAZCSAjAWQkgIwEkJEAMhJARgLISAAZCSAjAWQkgIwEkJEAMhJAxqWA6+vr0O12w2azCbvdLvuMZR658jYkiUBPp9Nwf3//bfl2uw3v7+/hcDgET7g6A4qCD7AM67ydCW4ElAU/4lGCCwGnBD/iTULyAqoEP+JJQtICzgl+xIuEZAVcEvyIBwlJCqgj+JHUJSQnoM7gR1KWkJyAr6+v0uDPZrOz1mGb2HZqJCfg9fW1cB0C3G63C9djXZmEsm2zcHMjtlwuS4MfKZOgS9AJ5AUPeZ5WqxVOpd/vZ7/5yXg8DqmRnIDRaJQFCkk1vPC5apIN38VvYsDjdrDt1HCXDQVIQedxd3cXvKEBGTISQEYCyEgAGQkg8zc0xNPTU3h7e8veb25uCr+HLuN8Pg+9Xi/UCdIQHx8fpTdjuHfAa7FYZDd+TWDeDUWwB4NBFvgqDIfDwn571W4oqiYQ/Crs9/vsXgLvlpgKuCQLiTPh8fExd11VAShfOXcfICHvrrouzNoAHPkeqxSOianxskvmpZgJuDT4ZVnNqlySA8J/wCXUChMBz8/PZx81OO1x/UdCrS6O80vngParaht2Kia9IAgoAsGdTCaNV7BBaJlUHOkYL0CDnQd6cOv1OtSNiYCHh4fc5ehaIvgpggMC+wYReSNnVmeAySWoaEgR/fvUKTpArBriRu+EPRTONr2PSkWQkQAyEkBGAshIABkJICMBZCSAjASQkQAyEkBGAshIABkJIONSQF7K2NszIiIuBeQNstc5iN8kjVXG1QkG2TF0iHFakOrki1MwKczyPoGiyf1XI0xGAsg0KsBDmWLT+2gioKiiuGqFMoPYsP/EqkDXRMBqtcpdfkqNPotYkNXpdHLXWwkw6QWhigzFub+Jz89Pk0kbJmcAaigt6ihZ4JJqNWPGrBHGowW8pgeOiZM0rDATEKf4eJYQg285Tcm0G4qG6+XlxXyelQW4hGLfLacngcaeFYE5A3ihdL3Op2HVCQ4UBB6JvabaMJcP6/hNKBVBRgLISAAZCSAjAWQkgIwEkJEAMhJARgLISAAZCSAjAWQkgIwEkJEAMhJA5j+5uWfcipqVAgAAAABJRU5ErkJggg=="
},
"identifier": "IDP_SUBMISSION_NOTIFY_V36",
"roles": [
"notifications"
],
"output": []
},
"blocks": [
{
"identifier": "OUTPUTS",
"reference_name": "outputs",
"title": "Outputs",
"description": "Send submission data to external systems when a submission has been created or is waiting for supervision",
"role_filter": [
"idp_output"
],
"input_template": {
"submission": {
"id": "${workflow.input.submission.id}"
},
"enabled": true
},
"blocks": []
}
],
"input": {},
"description": "Send notifications to external systems when a submission has been created or is waiting for supervision. \nIf the \"Document Processing V36\" flow is live, this flow must also be live, but it can be empty.",
"output": {}
}
},
{
"uuid": "cee432a8-30a3-4d07-a924-e6d87c923325",
"title": "Logging sample flow",
"description": "A simple Flow showcasing how to log in code blocks",
"identifier": "LOGGING_FLOW",
"flow_version": 1,
"flow_version_uuid": "d13d858f-29b0-4537-ba11-a7be0039a414",
"is_live": false,
"dt_deployed": null,
"is_archived": false,
"is_user_facing": true,
"dt_created": "2023-01-27T12:53:12.334776Z",
"dt_updated": "2023-01-27T12:53:12.341944Z",
"dsl": {
"metadata": {
"file_type": "workflow_dsl",
"schema_version": 2
},
"uuid": "cee432a8-30a3-4d07-a924-e6d87c923325",
"owner_email": "flows.sdk@hyperscience.com",
"title": "Logging sample flow",
"manifest": {
"input": [
{
"name": "text",
"type": "string",
"title": "Text to log",
"optional": false
}
],
"identifier": "LOGGING_FLOW"
},
"blocks": [
{
"identifier": "CUSTOM_CODE",
"reference_name": "log_task_ccb",
"input": {
"data": {
"text": "${workflow.input.text}"
},
"code": "\nfrom typing import Any, Dict, List, Tuple\nfrom blocks.base_python_block import WFEngineTaskResult\nfrom blocks.types import BlockInputs\nfrom flows_sdk.types import *\n\n\n# Used to deserialize json inputs by the CCB\nclass CustomCodeBlockProxyInputs(BlockInputs):\n text: Any\n\ndef log_text(text: str, _hs_block_instance: HsBlockInstance) -> None:\n _hs_block_instance.log(f'DEBUG level: {text}', HsBlockInstance.LogLevel.DEBUG)\n _hs_block_instance.log(f'INFO level: {text}', HsBlockInstance.LogLevel.INFO)\n _hs_block_instance.log(f'WARNING level: {text}', HsBlockInstance.LogLevel.WARN)\n _hs_block_instance.log(f'ERROR level: {text}', HsBlockInstance.LogLevel.ERROR)\n raise Exception('this is broken')\n\n\n\ndef process_task_flows_sdk_types(\n _hs_block_instance: HsBlockInstance, _hs_task: HsTask, input_data: CustomCodeBlockProxyInputs\n) -> WFEngineTaskResult:\n return log_text(**input_data.dict(), _hs_block_instance=_hs_block_instance)\n\n"
}
},
{
"identifier": "PYTHON_CODE",
"reference_name": "log_task_python",
"input": {
"data": {
"text": "${workflow.input.text}"
},
"code": "\nfrom typing import Any, Dict, List, Tuple\nfrom blocks.base_python_block import WFEngineTaskResult\nfrom blocks.types import BlockInputs\nfrom flows_sdk.types import *\n\n\n# Used to deserialize json inputs by the CCB\nclass CustomCodeBlockProxyInputs(BlockInputs):\n text: Any\n\ndef log_text(text: str, _hs_block_instance: HsBlockInstance) -> None:\n _hs_block_instance.log(f'DEBUG level: {text}', HsBlockInstance.LogLevel.DEBUG)\n _hs_block_instance.log(f'INFO level: {text}', HsBlockInstance.LogLevel.INFO)\n _hs_block_instance.log(f'WARNING level: {text}', HsBlockInstance.LogLevel.WARN)\n _hs_block_instance.log(f'ERROR level: {text}', HsBlockInstance.LogLevel.ERROR)\n raise Exception('this is broken')\n\n\n\ndef process_task_flows_sdk_types(\n _hs_block_instance: HsBlockInstance, _hs_task: HsTask, input_data: CustomCodeBlockProxyInputs\n) -> WFEngineTaskResult:\n return log_text(**input_data.dict(), _hs_block_instance=_hs_block_instance)\n\n"
}
}
],
"input": {
"text": "default-text"
},
"description": "A simple Flow showcasing how to log in code blocks"
}
}
]
}
This endpoint allows you to retrieve a list of Flows in the system that match your defined filtering criteria and to paginate through them. See Listing Object for the standard response structure of the list. Each object in the results array is a Flow Object.
GET /api/v5/flows
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Flows. If you repeat a query parameter multiple times in a request, the application will apply OR logic, i.e., it will list Flows where that attribute matches any of the values you provided.
| Property | Type | Description |
| identifier | string | Filter for flows whose identifier matches one of the values provided in this parameter. |
| is_archived | boolean | Filter for flows by their archived status. |
| is_live | boolean | Filter for flows by their live/deployed status. |
| is_user_facing | boolean | Filter for flows by their user facing status. |
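The OR logic for repeated query parameters can be exercised by passing a list of tuples to requests (or, as sketched here, to the standard library's urlencode), which serializes each pair separately:

```python
from urllib.parse import urlencode

# requests repeats a query parameter when given a list of (key, value)
# tuples; the API combines the repeated values with OR logic.
params = [
    ('identifier', 'FILE_INPUT_SHOWCASE'),
    ('identifier', 'LOGGING_FLOW'),
    ('is_live', 'true'),
]
query = urlencode(params)
# query == 'identifier=FILE_INPUT_SHOWCASE&identifier=LOGGING_FLOW&is_live=true'
```

This request lists live flows whose identifier is either FILE_INPUT_SHOWCASE or LOGGING_FLOW.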
Flow Actions v36
This page lists the basic actions that can be applied to a flow to change its state.
Deploy a flow
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flows/'
flow_uuid = 'd643095a-aa1b-4b88-9d4e-48ed421bd4b1'
flow_action = 'deploy'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, flow_uuid, flow_action))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
To deploy a flow means to make it live in the system. Live flows are available for processing data. After a flow is live, the system will run all blocks that it or its subflows use. Only flows that have no validation warnings or errors can be deployed. An example of validation errors and how to resolve them can be found in this Knowledge Base article.
Flow Deploy Endpoint
POST /api/v5/flows/flow_uuid/deploy
An empty POST to this endpoint will deploy the flow that corresponds to the flow_uuid in the URL, as well as all flows that it depends on. If the flow is not in a valid state, it will not be made live, and an error code will be returned.
Undeploy a flow
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flows/'
flow_uuid = 'd643095a-aa1b-4b88-9d4e-48ed421bd4b1'
flow_action = 'undeploy'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, flow_uuid, flow_action))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
To undeploy a flow means to prevent it from starting any new flow runs that process data. That includes disabling all of its trigger blocks.
Note that the enabled property of trigger blocks is not changed when a flow is undeployed; even if a trigger is enabled, it will not run for undeployed flows.
Flow runs that were still in flight when their originating flow was undeployed will complete normally. The system will keep running their processing blocks until all such flow runs have completed.
Flow Undeploy Endpoint
POST /api/v5/flows/flow_uuid/undeploy
An empty POST to this endpoint will undeploy the flow that corresponds to the flow_uuid in the URL, as well as all flows that it depends on.
Archive a flow
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flows/'
flow_uuid = 'd643095a-aa1b-4b88-9d4e-48ed421bd4b1'
flow_action = 'archive'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, flow_uuid, flow_action))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
To archive a flow is the functional equivalent of deleting it: the flow can no longer be used for processing data, nor can users make changes to it. Unlike deletion, the flow's data and its past flow runs are retained. Deployed flows can't be archived.
Flow Archive Endpoint
POST /api/v5/flows/flow_uuid/archive
An empty POST to this endpoint will archive the flow that corresponds to the flow_uuid in the URL. In v38 and later, the endpoint also supports archiving some of the flows the specified flow depends on. If the parameter with_dependencies=true is provided in the request body, the system will also archive all supporting flows (i.e., flows with at least one of the "supporting" or "notifications" roles) that are not used by other non-archived flows.
Request parameters
| Property | Type | Description |
| with_dependencies v38 | boolean | Tells the system to archive all eligible flows the specified flow depends on. |
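A minimal sketch of passing with_dependencies in the request body; sending it as JSON is an assumption based on "as part of the body" above:

```python
import json

# Request body enabling dependency archival (v38 and later).
archive_request = {'with_dependencies': True}
body = json.dumps(archive_request)

# Usage with the request above:
# with get_oauth2_session() as session:
#     r = session.post(endpoint_url, json=archive_request)
```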
Restore a flow
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flows/'
flow_uuid = 'd643095a-aa1b-4b88-9d4e-48ed421bd4b1'
flow_action = 'restore'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, flow_uuid, flow_action))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
To restore a flow means to make a previously archived flow available to be used in the system.
Flow Restore Endpoint
POST /api/v5/flows/flow_uuid/restore
An empty POST to this endpoint will restore the flow that corresponds to the flow_uuid in the URL, as well as all flows it depends on.
Flow Runs v37
Flow Run Object
Example Flow Run Object
{
"flow_run_id": "e8d40a35-aadb-4761-86f2-22eda2262855",
"identifier": "FLOW_CALLING_ITSELF",
"flow_version": 3,
"status": "COMPLETED",
"correlation_id": "string_used_to_group_related_flow_runs",
"parent_flow_run_id": "7a188666-9141-4c37-9d6b-abd91f639573",
"dt_started": "2023-02-01T16:47:44.726000Z",
"dt_ended": "2023-02-01T16:47:44.742000Z",
"error_message": "",
"blocks": [
{
"reference_name": "final_number_reached",
"identifier": "CUSTOM_CODE",
"status": "COMPLETED",
"block_runs": [
{
"reference_name": "final_number_reached",
"identifier": "CUSTOM_CODE",
"status": "COMPLETED",
"logs": [],
"subflow_run_id": null,
"inputs": {
"expression": "lambda current, final: current >= final",
"input": [
3,
3
]
},
"outputs": {
"result": true
},
"dt_created": "2023-02-01T16:47:44.735000Z",
"dt_started": "2023-02-01T16:47:44.735000Z",
"dt_ended": "2023-02-01T16:47:44.740000Z"
}
]
},
{
"reference_name": "call_self_or_finish",
"identifier": "ROUTING",
"status": "COMPLETED",
"block_runs": [
{
"reference_name": "call_self_or_finish",
"identifier": "ROUTING",
"status": "COMPLETED",
"logs": [],
"subflow_run_id": null,
"inputs": {
"case_value_param": true
},
"outputs": {
"caseOutput": [
"true"
]
},
"dt_created": null,
"dt_started": "2023-02-01T16:47:44.740000Z",
"dt_ended": "2023-02-01T16:47:44.740000Z",
"case": "true"
}
]
},
{
"reference_name": "end_of_recursion",
"identifier": "CUSTOM_CODE",
"status": "COMPLETED",
"block_runs": [
{
"reference_name": "end_of_recursion",
"identifier": "CUSTOM_CODE",
"status": "COMPLETED",
"logs": [],
"subflow_run_id": null,
"inputs": {
"expression": "lambda current, final: 'Do some end work here'",
"input": [
3,
3
]
},
"outputs": {
"result": "Do some end work here"
},
"dt_created": "2023-02-01T16:47:44.741000Z",
"dt_started": "2023-02-01T16:47:44.741000Z",
"dt_ended": "2023-02-01T16:47:44.741000Z"
}
]
},
{
"reference_name": "increment_by_one",
"identifier": "CUSTOM_CODE",
"status": "NOT_STARTED",
"block_runs": []
},
{
"reference_name": "call_self",
"identifier": "FLOW_CALLING_ITSELF",
"status": "NOT_STARTED",
"block_runs": []
}
],
"dsl": {
"metadata": {
"file_type": "workflow_dsl",
"schema_version": 2
},
"uuid": "d643095a-aa1b-4b88-9d4e-48ed421bd4b1",
"owner_email": "flows.sdk@hyperscience.com",
"title": "Flow that calls itself recursively",
"manifest": {
"input": [
{
"name": "start_number",
"type": "integer",
"title": "Number to start from",
"value": 0
},
{
"name": "final_number",
"type": "integer",
"title": "Number to reach while recursing",
"value": 3
}
],
"identifier": "FLOW_CALLING_ITSELF"
},
"blocks": [
{
"identifier": "CUSTOM_CODE",
"reference_name": "final_number_reached",
"input": {
"data": {
"current": "${workflow.input.start_number}",
"final": "${workflow.input.final_number}"
},
"code": "lambda current, final: current >= final"
}
},
{
"identifier": "ROUTING",
"reference_name": "call_self_or_finish",
"branches": [
{
"blocks": [
{
"identifier": "CUSTOM_CODE",
"reference_name": "end_of_recursion",
"input": {
"data": {
"current": "${workflow.input.start_number}",
"final": "${workflow.input.final_number}"
},
"code": "lambda current, final: 'Do some end work here'"
}
}
],
"case": "true"
},
{
"blocks": [
{
"identifier": "CUSTOM_CODE",
"reference_name": "increment_by_one",
"input": {
"data": {
"num": "${workflow.input.start_number}"
},
"code": "lambda num: num + 1"
}
},
{
"identifier": "FLOW_CALLING_ITSELF",
"reference_name": "call_self",
"input": {
"start_number": "${increment_by_one.output.result}",
"final_number": "${workflow.input.final_number}"
}
}
],
"case": "false"
}
],
"decision": "${final_number_reached.output.result}"
}
],
"input": {},
"description": "Accepts two integer inputs - start_number and final_number. Starts from start_number and calls itself incrementing by 1 until final_number number is reached"
},
"flow_input": {
"system_settings": {
"EXTERNAL_SYSTEM_CONFIG": {
"api_key": "12341234123",
"host": "koko"
},
"layout_release_uuid": "4f0ec35f-c9ca-4dad-8460-bb6797429ea4"
},
"start_number": 3,
"final_number": 3
},
"flow_output": {
"result": "Do some end work here"
}
}
A Flow Run represents a record of running a flow with a given input. The flow run contains metadata such as flow run ID, originating flow identifier, correlation ID, a snapshot of the flow definition and the related runtime data.
Flow Run Object properties
| Property | Type | Description |
| flow_run_id | string | Unique identifier of the flow run in the system. |
| identifier | string | The identifier of the originating flow. |
| flow_version | integer | The version of the originating flow from which the run was created. |
| status | string | The current status of the flow run. Can be one of the following: RUNNING, COMPLETED, FAILED. |
| correlation_id | string | The correlation ID is a string that can be used to mark flow runs as related. Subflow runs inherit their correlation ID from their parent flow runs. |
| parent_flow_run_id | string | The flow run id of the flow that started this one. Available only for subflow runs and null for root flow runs. |
| dt_started | datetimetz (an ISO-8601 formatted datetime string) | The time when the flow run was created. |
| dt_ended | datetimetz (an ISO-8601 formatted datetime string) | The time when the flow run transitioned into a terminal state (COMPLETED or FAILED). |
| error_message | string | A string containing information on why the flow run failed. Available only for FAILED flow runs; empty for RUNNING or COMPLETED flow runs. |
| dsl | JSON object | A snapshot of the flow version at the time when this flow run was created. |
| blocks | array of JSON objects | An array of Block Run Info objects, each of which contains runtime data about the execution of a particular block in the flow. See the table below for more details. |
| flow_input | JSON object | The input that the flow run was created with. null when the data is large and offloaded to a file; in that case the flow_input_ref property will contain a URL for downloading the input data. |
| flow_output | JSON object | The output that the flow run produced. null when the data is large and offloaded to a file; in that case the flow_output_ref property will contain a URL for downloading the output data. |
| flow_input_ref | string | A URL for downloading the inputs of this flow run when the input payload is large and the data has been offloaded. null when the data is returned with the flow_input property. |
| flow_output_ref | string | A URL for downloading the outputs of this flow run when the output payload is large and the data has been offloaded. null when the data is returned with the flow_output property. |
Block Run Info Object Properties
Contains a small set of properties for quick access and a list of objects containing the detailed data about each block run.
| Property | Type | Description |
| reference_name | string | Unique identifier of the block within the flow. |
| identifier | string | The identifier of the block that ran, i.e. the type of this block. |
| status | string | The last known status of the block run. Can be one of the following: SCHEDULED, IN_PROGRESS, COMPLETED, FAILED, NOT_STARTED, TIMED_OUT, TERMINATED. |
| block_runs | array of JSON objects | An array of Block Run objects, each of which represents an individual block run. See the table below for more details. |
Block Run Object Properties
Contains information about an individual block run. Blocks may run multiple times in the span of a single flow run in case of timeouts or retries (automatic or manual).
| Property | Type | Description |
| reference_name | string | Unique identifier of the block within the flow. |
| identifier | string | The identifier of the block that ran, i.e. the type of this block. |
| status | string | The status of the block run. Can be one of the following: SCHEDULED, IN_PROGRESS, COMPLETED, FAILED, NOT_STARTED, TIMED_OUT, TERMINATED. |
| logs | array of strings | The logs that this block run has produced. |
| subflow_run_id | string | The flow run ID of the subflow that this block started, if any, null for non-subflow blocks. |
| dt_created | datetimetz (an ISO-8601 formatted datetime string) | The time when this block run was scheduled for execution. |
| dt_started | datetimetz (an ISO-8601 formatted datetime string) | The time when this block run started processing. |
| dt_ended | datetimetz (an ISO-8601 formatted datetime string) | The time when this block run transitioned into a terminal state. |
| inputs | JSON object | The input for this block run. null when the data is large and offloaded to a file; in that case the inputs_ref property will contain a URL for downloading the input data. |
| outputs | JSON object | The output of this block run. null when the data is large and offloaded to a file; in that case the outputs_ref property will contain a URL for downloading the output data. |
| inputs_ref | string | A URL for downloading the inputs of this block run when the input payload is large and the data has been offloaded. null when the data is returned with the inputs property. |
| outputs_ref | string | A URL for downloading the outputs of this block run when the output payload is large and the data has been offloaded. null when the data is returned with the outputs property. |
Block Run Statuses
The table below lists more details about each block run status.
| Status name | Description |
| SCHEDULED | The block run is waiting to be picked up by a worker for processing. |
| IN_PROGRESS | The block run is being processed by a worker. |
| COMPLETED | The block run has finished execution successfully. |
| FAILED | The block run has resulted in an error - either during processing or scheduling. |
| TIMED_OUT | The block run has failed to report back for a long time during processing and has been timed out by the system. |
| TERMINATED | The block run has been forcefully stopped. |
| NOT_STARTED | The flow run has not yet reached this block. |
Create a Flow Run
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flow_runs'
endpoint_url = urljoin(base_url, endpoint)
# JSON request
flow_run_create_request = {
    'identifier': 'FLOW_IDENTIFIER',
    'flow_uuid': '2834d0cd-bda7-4c66-b788-6c7e41b81216',
    'correlation_id': 'string_used_to_group_related_flow_runs',
    'input': {
        'input_string': 'value_string',
        'input_number': 10,
        'input_object': {
            'key-a': 'value-a'
        }
    }
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.post(endpoint_url, json=flow_run_create_request)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
# Multipart form request
form_data = {
    'identifier': 'FLOW_IDENTIFIER',
    'flow_uuid': '2834d0cd-bda7-4c66-b788-6c7e41b81216',
    'correlation_id': 'string_used_to_group_related_flow_runs',
    'input': '{"input_file":"${input_files.invoice}","input_number": 10}'
}
files = [
    ('invoice', ('invoice.pdf', open('invoice.pdf', 'rb'), 'application/pdf')),
]
with get_oauth2_session() as session:
    r = session.post(endpoint_url, files=files, data=form_data)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
When a flow run is created, the system captures the current version of the flow and the provided flow inputs and sends them to the flow engine, which executes the steps described in the flow template.
Flow Run Create Endpoint
POST /api/v5/flow_runs
This endpoint can be used to create flow runs. It takes a Flow Run Create Request object as input, which can be supplied either as JSON or encoded as multipart/form-data if the input includes uploaded files.
Flow Run Create Request
| Property | Type | Description |
| identifier | string | The identifier of the flow to run. Optional if flow_uuid is provided. |
| flow_uuid | string | The flow UUID of the flow to run. Optional if identifier is provided. |
| correlation_id | string | The correlation ID of a flow run represents a string that can be used to group related flow runs together. Subflows will automatically inherit the correlation ID of their parent. |
| flow_run_id | string | Optional parameter that specifies the unique identifier of the flow run to be created. The value must be a version 4 UUID in hyphenated hex format (e.g., 57ac2732-ef2c-47db-801b-a2ecef00d35e). Since the flow_run_id is the unique identifier of the flow run within the system, a given UUID can be used only once; subsequent requests with the same flow_run_id will result in HTTP 400 Bad Request responses. Because of this, supplying a flow_run_id can be used to prevent double flow execution. |
| input | JSON object | An object that contains the inputs for the flow run. |
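The idempotency behavior of flow_run_id can be used as follows; the request body below is a minimal sketch with a client-generated UUID:

```python
import uuid

flow_run_create_request = {
    'identifier': 'FLOW_IDENTIFIER',
    # A client-generated version 4 UUID; retrying the request with the same
    # flow_run_id returns HTTP 400 instead of starting a duplicate run.
    'flow_run_id': str(uuid.uuid4()),
    'input': {'input_string': 'value_string'},
}
```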
Flow Run Create Response
| Property | Type | Description |
| flow_run_id | string | The unique identifier of the created flow run in the system. Can be used to monitor the flow run. |
Uploading files as flow run inputs
The flow run create endpoint enables file inputs through multipart/form-data requests. A special binding syntax links the uploaded files to their respective inputs in the flow.
For example, if the flow has an input named input_file and the request should contain a file named invoice.pdf, the binding between the two can be constructed as follows:
- Include the file in the multipart form under a chosen name, e.g. invoice (it can be any name, but it should contain only numbers, letters, - and _).
- In the flow input JSON, use the binding ${input_files.<multipart_form_field_name>} as the value of the input_file input; in this case, ${input_files.invoice}.
- If the input is meant to accept a list of files, all of them should be included in the multipart form under the same name, and that name should be used in the special binding syntax. The system will automatically build a list containing all of the files under the given name.
As a result, the files will be stored in the system and associated with the flow run. Any block that uses such flow inputs will receive references to the locations where the files were stored. Most blocks resolve such references themselves, but in custom code blocks the contents of the file need to be retrieved manually from the reference. Instructions on how to do that can be found in the Flows SDK documentation; it is done with the same API that is used to retrieve binary runtime data.
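The list-of-files case can be sketched as follows; the field name `pages` and the flow input name `input_files` are hypothetical, and the inline bytes stand in for real file contents:

```python
# All three files share the multipart field name 'pages' (a hypothetical
# name), and the flow input references that name once; the system builds a
# list from every part with that name.
files = [
    ('pages', ('page1.pdf', b'%PDF-1.4 ...', 'application/pdf')),
    ('pages', ('page2.pdf', b'%PDF-1.4 ...', 'application/pdf')),
    ('pages', ('page3.pdf', b'%PDF-1.4 ...', 'application/pdf')),
]
form_data = {
    'identifier': 'FLOW_IDENTIFIER',
    # 'input_files' is a hypothetical list-typed flow input.
    'input': '{"input_files": "${input_files.pages}"}',
}
```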
Monitor a Flow Run v42
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flow_runs/'
flow_run_id = 'e8d40a35-aadb-4761-86f2-22eda2262855'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, flow_run_id))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
    r = session.get(endpoint_url)
    print(json.dumps(r.json(), indent=4, sort_keys=True))
Monitor Flow Run Endpoint
This endpoint allows you to monitor the execution of a flow run as well as to retrieve its result once it has finished. It returns a Flow Run Object.
GET /api/v5/flow_runs/flow_run_id
Request parameters
| Property | Type | Description |
| detailed | boolean | Retrieve the detailed information about this flow run, including flow run inputs, outputs and block runs. true by default. For monitoring just the status of a flow run, it is recommended to use detailed=false, which makes the request more lightweight by stripping everything but flow run metadata from the response. |
Monitor a Flow Run Summary v42
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flow_runs/'
flow_run_id = 'e8d40a35-aadb-4761-86f2-22eda2262855'
suffix = 'summary'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, flow_run_id, suffix))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Monitor Flow Run Summary Endpoint
This endpoint allows you to monitor the execution of a flow run and to retrieve its result once it has finished. For better performance, it does not return the inputs and outputs of the flow run and its blocks when those are larger than a certain threshold (controlled by internal logic), which allows faster retrieval and a reduced response size. It returns a Flow Run Object.
GET /api/v5/flow_runs/flow_run_id/summary
Get Flow Run Data
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flow_runs/'
flow_run_id = 'e8d40a35-aadb-4761-86f2-22eda2262855'
suffix = 'data'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, flow_run_id, suffix))
params = {'data_id': '364da1dd-5115-492a-84b6-a2bccb89d576'}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url, params=params)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Get Flow Run Data Endpoint
This endpoint allows you to retrieve the input or output data of a flow run or a block run. The response of the /api/v5/flow_runs/flow_run_id/summary endpoint provides the relevant links to this endpoint for the pieces of data that are offloaded, via the input_ref/output_ref properties.
GET /api/v5/flow_runs/flow_run_id/data
Request Parameters
The table below defines the query parameters that can be used with the Get Flow Run Data Endpoint.
| Property | Type | Description |
| data_id | string | The UUID of the data to retrieve. The data can be either the flow run input or output, or a block run input or output. The data ID can be found under the flow_input and flow_output properties of the Flow Run Object and the inputs and outputs properties of the Block Run Object. |
Listing Flow Runs
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flow_runs'
endpoint_url = urljoin(base_url, endpoint)
params = {'identifier': 'TEST_FLOW', 'status': 'FAILED'}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url, params=params)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Listing Flow Runs Endpoint
Example Response
{
"next": null,
"previous": null,
"results": [
{
"flow_run_id": "e8d40a35-aadb-4761-86f2-22eda2262855",
"identifier": "FLOW_CALLING_ITSELF",
"flow_version": 3,
"status": "COMPLETED",
"correlation_id": "string_used_to_group_related_flow_runs",
"dt_created": "2023-02-01T16:47:44.726000Z",
"dt_started": "2023-02-01T16:47:44.726000Z",
"dt_ended": "2023-02-01T16:47:44.742000Z"
}
]
}
This endpoint allows you to retrieve a list of Flow Runs in the system that match your defined filtering criteria and to paginate through them. See Listing Object for the standard response structure of the list. Each object in the results array is a Flow Run Object.
GET /api/v5/flow_runs
Request Parameters
The table below defines the query parameters that can be used with the Listing Flow Runs Endpoint. If you repeat correlation_id, identifier, or status multiple times in a request, the application will apply OR logic, i.e., it will list Flow Runs where that attribute matches any of the values you provided.
| Property | Type | Description |
| correlation_id | string | Filters for flow runs based on correlation ID. |
| identifier | string | Filters for flow runs based on identifier. |
| status | string | Filters for flow runs based on status. Possible values are RUNNING, COMPLETED, FAILED. |
| dt_started__gte | datetimetz (an ISO-8601 formatted datetime string) | Filters for flow runs that were started after a specific date and time (greater than or equal to operator). |
| dt_started__lt | datetimetz (an ISO-8601 formatted datetime string) | Filters for flow runs that were started before a specific date and time (less than operator). |
| detailed | boolean | Retrieve detailed information about the flow runs, including flow run inputs, outputs, and block runs. false by default. If using detailed=true, be mindful of the amount of data expected to be returned; for flow runs that contain a lot of data, or for large page sizes, the performance of the endpoint could degrade. |
| limit | integer | The number of items to return per page. |
| sort | string | The order in which items are returned. Sorting is supported only based on start date. The default is dt_started which orders the flow runs by their start date in ascending order. The other supported value is -dt_started which will return the flow runs in descending start date order. |
| cursor | string | The endpoint supports only cursor pagination, so the cursor query parameter provides the means to go to the previous or next page of results. It is recommended to use the query strings returned in the next and previous properties of the response object. |
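Pagination as described above can be sketched by following the next link until it is null. This is a sketch under the assumption that each page body has the results/next shape shown in the example response; the fetch_json callable stands in for any authenticated GET:

```python
from urllib.parse import urljoin

base_url = 'https://on-premise-server.yourcompany.com/'
first_page = urljoin(base_url, 'api/v5/flow_runs?limit=100')

def collect_results(fetch_json, first_url):
    """Accumulate 'results' across pages until 'next' is null.
    fetch_json is any callable that GETs a URL and returns the parsed body."""
    results, url = [], first_url
    while url:
        page = fetch_json(url)
        results.extend(page['results'])
        url = page['next']
    return results

# with get_oauth2_session() as session:
#     runs = collect_results(lambda u: session.get(u).json(), first_page)
```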
Deleting Flow Runs v40.2
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/flow_runs/bulk_delete'
endpoint_url = urljoin(base_url, endpoint)
# Request with flow_run_ids
flow_run_ids_request = {
'flow_run_ids': ['e8d40a35-aadb-4761-86f2-22eda2262855']
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.post(endpoint_url, json=flow_run_ids_request)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Bulk Delete Flow Runs Endpoint
This endpoint allows you to delete flow runs. The endpoint operates in bulk and accepts as input either a list of flow run IDs or a search query for finding the flow runs to delete.
Deletion happens asynchronously in a flow run that can be tracked using the Monitor a Flow Run API.
Even if that flow run completes successfully, it does not necessarily mean that all requested flow runs have been deleted.
To check what has actually been deleted, the output of the deleting flow run contains two lists of flow run IDs: deleted_flow_run_ids and failed_to_delete_flow_run_ids.
deleted_flow_run_ids contains the IDs of all flow runs that were deleted as a result of this request, including subflow runs.
failed_to_delete_flow_run_ids contains the IDs of flow runs that matched the request but could not be deleted, either because an error occurred or because this API is not capable of deleting them. The API can only target top-level flow runs, because deleting subflows while leaving their parent in the system would break data integrity.
However, when a top-level flow run is deleted, all of its subflows and their related data are deleted as well.
Flow runs that are backed by a Submission cannot be deleted through this API; for those, see the Deleting Submissions API.
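Once the deleting flow run completes, its output can be inspected for the two ID lists described above. A small sketch, assuming the output is a JSON object containing those two keys at the top level (the exact output shape is not specified here):

```python
def summarize_deletion(flow_output):
    """Split the deleting flow run's output into deleted and failed ID lists,
    defaulting to empty lists if a key is absent."""
    deleted = flow_output.get('deleted_flow_run_ids', [])
    failed = flow_output.get('failed_to_delete_flow_run_ids', [])
    return deleted, failed

deleted, failed = summarize_deletion({
    'deleted_flow_run_ids': ['e8d40a35-aadb-4761-86f2-22eda2262855'],
    'failed_to_delete_flow_run_ids': [],
})
```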
POST /api/v5/flow_runs/bulk_delete
Bulk Delete Flow Runs Request
| Property | Type | Description |
| flow_run_ids | array of strings | The IDs of the flow runs to delete. Provide this OR "flow_runs_query", but not both. |
| flow_runs_query | JSON object | A search query for finding the flow runs to be deleted. It supports the following parameters of the Listing Flow Runs API: correlation_id, identifier, status, dt_started__gte, dt_started__lt. Provide this or "flow_run_ids", but not both. |
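Since the example above only shows deletion by flow_run_ids, here is the query-based variant as a sketch; the filter values are illustrative, and flow_runs_query accepts the Listing Flow Runs parameters named in the table:

```python
import json
from urllib.parse import urljoin

base_url = 'https://on-premise-server.yourcompany.com/'
endpoint_url = urljoin(base_url, 'api/v5/flow_runs/bulk_delete')

# Delete all failed TEST_FLOW runs started before 2023 (illustrative filters).
flow_runs_query_request = {
    'flow_runs_query': {
        'identifier': 'TEST_FLOW',
        'status': 'FAILED',
        'dt_started__lt': '2023-01-01T00:00:00Z',
    }
}

# with get_oauth2_session() as session:
#     r = session.post(endpoint_url, json=flow_runs_query_request)
#     deleting_run_id = r.json()['flow_run_id']  # track via Monitor a Flow Run
```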
Bulk Delete Flow Runs Response
| Property | Type | Description |
| flow_run_id | string | The unique identifier of the flow run that will perform the deletion. Can be used to monitor the flow run. |
Knowledge Store v38
The Knowledge Store is a database for arbitrarily structured data that can be used by customers, flows, and Custom Supervision to look up data during processes.
Knowledge Store Item
A Knowledge Store Item represents some distinct entity that is meaningful within a specific business domain.
Example Knowledge Store Item
{
"properties": {
"title": "Mean Mr. Mustard",
"album": "Abbey Road"
},
"dt_created": "2025-09-10T07:23:46.132663Z",
"dt_updated": "2025-09-10T07:23:46.132663Z",
"id": "d83ecc98-67b0-4b44-8a35-8d14e59cfd52",
"collection": "SONG",
"external_id": "spotify:track:4JOyMhad5dD81uGYLGgKrS"
}
Properties
| Property | Type | Description |
| dt_created | datetimetz (an ISO-8601 formatted datetime string) | The point in time the Item was created. |
| dt_updated | datetimetz (an ISO-8601 formatted datetime string) | The point in time the Item was last updated. |
| id | string | Uniquely identifies the Item. |
| external_id | string | An identifier that is meaningful in the context of the business domain. |
| collection | string | Used to group Items with common characteristics. |
| properties | JSON object | String key-value pairs of arbitrary data that can be used during processing. |
Retrieving Knowledge Store Items
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/knowledge_store/items/'
item_id = 'd83ecc98-67b0-4b44-8a35-8d14e59cfd52'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, item_id))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Retrieve a single Knowledge Store Item by id or external_id
Endpoint
GET /api/v5/knowledge_store/items/id
GET /api/v5/knowledge_store/items/external/external_id
The URL to use depends on whether you are retrieving the Knowledge Store Item using the Hyperscience ID or a user-defined external identifier.
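The existing example uses the Hyperscience id; a sketch of the external_id variant follows. Note that external IDs like the Spotify URN above contain characters such as ':', so this sketch percent-encodes the ID before placing it in the path (an assumption about how your deployment expects such IDs to be escaped):

```python
from urllib.parse import urljoin, quote

base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/knowledge_store/items/external/'
external_id = 'spotify:track:4JOyMhad5dD81uGYLGgKrS'
# Encode every reserved character (safe='') before embedding in the path.
endpoint_url = urljoin(base_url, endpoint + quote(external_id, safe=''))

# with get_oauth2_session() as session:
#     r = session.get(endpoint_url)
```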
Listing Knowledge Store Items
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/knowledge_store/items'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"count": 2,
"next": null,
"previous": null,
"results": [
{
"properties": {
"title": "Mean Mr. Mustard",
"album": "Abbey Road"
},
"dt_created": "2025-09-10T07:23:46.132663Z",
"dt_updated": "2025-09-10T07:23:46.132663Z",
"id": "d83ecc98-67b0-4b44-8a35-8d14e59cfd52",
"collection": "SONG",
"external_id": "spotify:track:4JOyMhad5dD81uGYLGgKrS"
},
{
"properties": {
"title": "Abbey Road",
"artist": "The Beatles"
},
"dt_created": "2025-09-10T07:23:46.136145Z",
"dt_updated": "2025-09-10T07:23:46.136145Z",
"id": "d83ecc98-67b0-4b44-8a35-8d14e59cfd53",
"collection": "ALBUM",
"external_id": "spotify:album:0ETFjACtuP2ADo6LFhL6HN"
}
]
}
Retrieve all Knowledge Store Items, with the most recently created Items first.
Endpoint
GET /api/v5/knowledge_store/items
Request Parameters
The table below defines the query parameters that can be used to filter for a list of Knowledge Store Items.
| Property | Type | Description |
| collection | string | Filter for Items with common characteristics. |
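The listing example above returns Items from all collections; a minimal sketch of the collection filter (the 'SONG' value matches the example response and collections are stored uppercase):

```python
from urllib.parse import urljoin

base_url = 'https://on-premise-server.yourcompany.com/'
endpoint_url = urljoin(base_url, 'api/v5/knowledge_store/items')
params = {'collection': 'SONG'}  # only Items in the SONG collection

# with get_oauth2_session() as session:
#     r = session.get(endpoint_url, params=params)
```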
Creating Knowledge Store Items
Creates a new Item with an automatically generated id.
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/knowledge_store/items'
endpoint_url = urljoin(base_url, endpoint)
headers = {
'Content-Type': 'application/json'
}
data = {
'external_id': 'spotify:track:4JOyMhad5dD81uGYLGgKrS',
'collection': 'song',
'properties': {
'title': 'Mean Mr. Mustard',
'album': 'Abbey Road'
}
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.post(endpoint_url, headers=headers, json=data)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Endpoint
POST /api/v5/knowledge_store/items
Request Parameters
| Property | Type | Description |
| external_id | string | An identifier that is meaningful in the context of the business domain. Cannot be changed after creation. |
| collection | string | The collection of the Item. Required. Cannot be changed after creation. Will be converted to uppercase. |
| properties | JSON Object | JSON object with property keys and values. Values need to be of type string. |
Updating Knowledge Store Items
Only an Item's properties can be updated. The results depend on whether the PUT or the PATCH method is used.
- If PUT is used, only the properties included in the request are kept.
- If PATCH is used, the properties included in the request are merged with the existing set. The values of properties in the request overwrite those of any properties with the same keys in the existing set.
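The difference between the two methods can be illustrated with plain dictionaries. This sketches only the semantics described above, not the server implementation:

```python
existing = {'title': 'Mean Mr. Mustard', 'album': 'Abbey Road'}
request_properties = {'title': 'Polythene Pam'}

# PUT: only the properties included in the request are kept.
after_put = dict(request_properties)

# PATCH: request properties are merged over the existing set;
# matching keys are overwritten, other existing keys survive.
after_patch = {**existing, **request_properties}

print(after_put)    # {'title': 'Polythene Pam'}
print(after_patch)  # {'title': 'Polythene Pam', 'album': 'Abbey Road'}
```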
Example PUT Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/knowledge_store/items/'
item_id = 'd83ecc98-67b0-4b44-8a35-8d14e59cfd52'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, item_id))
headers = {
'Content-Type': 'application/json'
}
data = {
'collection': 'song',
'properties': {
'title': 'Polythene Pam',
}
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.put(endpoint_url, headers=headers, json=data)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example PATCH Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/knowledge_store/items/'
item_id = 'd83ecc98-67b0-4b44-8a35-8d14e59cfd52'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, item_id))
headers = {
'Content-Type': 'application/json'
}
data = {
'collection': 'song',
'properties': {
'title': 'Polythene Pam',
}
}
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.patch(endpoint_url, headers=headers, json=data)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Endpoint
PUT /api/v5/knowledge_store/items/id
PATCH /api/v5/knowledge_store/items/id
Request Parameters
| Property | Type | Description |
| id | string | The unique identifier of the Item to be updated. |
| external_id | string | An identifier that is meaningful in the context of the business domain. |
| collection | string | The collection of the Item. Required. Needs to match the collection of the existing Item. |
| properties | JSON Object | An updated set of properties. |
Deleting Knowledge Store Items
Example Request
import json
import posixpath
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/knowledge_store/items/'
item_id = 'd83ecc98-67b0-4b44-8a35-8d14e59cfd52'
endpoint_url = urljoin(base_url, posixpath.join(endpoint, item_id))
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.delete(endpoint_url)
Deletes a single Knowledge Store Item with the specified id.
Endpoint
DELETE /api/v5/knowledge_store/items/id
Health Check Status
Health Check Status Object
Example Health Check Object
{
"DB": {
"healthy": true
},
"STORE_$OCS": {
"healthy": true
},
"STORE_DB": {
"healthy": true
},
"STORE_FILE": {
"healthy": true
},
"STORE_S3": {
"healthy": false,
"error": "An error occurred (403) when calling the HeadBucket operation: Forbidden"
}
}
The Health Check object is the representation of the health status of key system components.
Health Check Status properties
| Property | Type | Description |
| STORE_XXX | JSON | Checks access to object stores used for internal purposes. |
| STORE_$XXX | JSON | Checks access to object stores used for submission ingestion from external sources. |
| DB | JSON | Checks access to the system database. |
When a component is functioning correctly its value is {"healthy": true}.
When the system detects a problem with a component the value is {"healthy": false, "error": "<an error message with the failure details>"}.
The error message is included only for authenticated users. The endpoint is accessible to unauthenticated users, but the response does not include failure details in that case.
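Given the {"healthy": ...} shape described above, a small helper can flag the components that need attention:

```python
def unhealthy_components(health):
    """Return the sorted names of components whose status is not healthy."""
    return sorted(name for name, status in health.items()
                  if not status.get('healthy'))

example = {
    'DB': {'healthy': True},
    'STORE_S3': {'healthy': False,
                 'error': 'An error occurred (403) when calling the '
                          'HeadBucket operation: Forbidden'},
}
print(unhealthy_components(example))  # ['STORE_S3']
```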
Retrieving Health Check Status
Example Request
import json
from urllib.parse import urljoin
# The base_url or hostname refers to a Hyperscience instance.
base_url = 'https://on-premise-server.yourcompany.com/'
endpoint = 'api/v5/healthcheck'
endpoint_url = urljoin(base_url, endpoint)
# Please import the get_oauth2_session() function from the Getting Started Guide - Authentication section.
with get_oauth2_session() as session:
r = session.get(endpoint_url)
print(json.dumps(r.json(), indent=4, sort_keys=True))
Example Response
{
"DB": {
"healthy": true
},
"STORE_$OCS": {
"healthy": true
},
"STORE_DB": {
"healthy": true
},
"STORE_FILE": {
"healthy": true
},
"STORE_S3": {
"healthy": false,
"error": "An error occurred (403) when calling the HeadBucket operation: Forbidden"
}
}
Healthcheck Retrieval Endpoint
GET /api/v5/healthcheck
Error Responses
HTTP Errors
The endpoints in this documentation return conventional HTTP status codes, accompanied by additional information where appropriate.
Example HTTP Error Message
{"error": "File uploads and documents list cannot be mixed"}
| Status Code | Description |
| 400 | Bad request, often due to improper parameters |
| 403 | You don't have proper permissions to access this endpoint |
| 409 | Duplicate External Identifier attempted with submission creation |
| 500 | System Error |
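A client-side sketch that combines the table above with the error body format shown earlier; the fallback descriptions simply mirror the table, and the 'error' key follows the example message:

```python
STATUS_DESCRIPTIONS = {
    400: 'Bad request, often due to improper parameters',
    403: "You don't have proper permissions to access this endpoint",
    409: 'Duplicate External Identifier attempted with submission creation',
    500: 'System Error',
}

def describe_error(status_code, body):
    """Prefer the server-provided 'error' message; fall back to the
    generic description for the status code."""
    detail = body.get('error') if isinstance(body, dict) else None
    return detail or STATUS_DESCRIPTIONS.get(
        status_code, 'Unexpected status %d' % status_code)

# with get_oauth2_session() as session:
#     r = session.post(endpoint_url, json=payload)
#     if not r.ok:
#         print(describe_error(r.status_code, r.json()))
```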
OpenAPI Schema
An OpenAPI Schema describing our API can be found in our Swagger Explorer.