2022/07/21 - AWS IoT SiteWise - 3 new API methods
Changes: Added an asynchronous API to ingest bulk historical and current data into IoT SiteWise.
Retrieves information about a bulk import job request. For more information, see Describe a bulk import job (CLI) in the AWS IoT SiteWise User Guide.
See also: AWS API Documentation
Request Syntax
response = client.describe_bulk_import_job(
    jobId='string'
)
Parameters
jobId (string) -- [REQUIRED]
The ID of the job.
Return type
dict
Response Syntax
{
    'jobId': 'string',
    'jobName': 'string',
    'jobStatus': 'PENDING'|'CANCELLED'|'RUNNING'|'COMPLETED'|'FAILED'|'COMPLETED_WITH_FAILURES',
    'jobRoleArn': 'string',
    'files': [
        {
            'bucket': 'string',
            'key': 'string',
            'versionId': 'string'
        },
    ],
    'errorReportLocation': {
        'bucket': 'string',
        'prefix': 'string'
    },
    'jobConfiguration': {
        'fileFormat': {
            'csv': {
                'columnNames': [
                    'ALIAS'|'ASSET_ID'|'PROPERTY_ID'|'DATA_TYPE'|'TIMESTAMP_SECONDS'|'TIMESTAMP_NANO_OFFSET'|'QUALITY'|'VALUE',
                ]
            }
        }
    },
    'jobCreationDate': datetime(2015, 1, 1),
    'jobLastUpdateDate': datetime(2015, 1, 1)
}
Response Structure
(dict) --
jobId (string) --
The ID of the job.
jobName (string) --
The unique name that helps identify the job request.
jobStatus (string) --
The status of the bulk import job can be one of the following values:
PENDING – IoT SiteWise is waiting for the current bulk import job to finish.
CANCELLED – The bulk import job has been canceled.
RUNNING – IoT SiteWise is processing your request to import your data from Amazon S3.
COMPLETED – IoT SiteWise successfully completed your request to import data from Amazon S3.
FAILED – IoT SiteWise couldn't process your request to import data from Amazon S3. You can use logs saved in the specified error report location in Amazon S3 to troubleshoot issues.
COMPLETED_WITH_FAILURES – IoT SiteWise completed your request to import data from Amazon S3 with errors. You can use logs saved in the specified error report location in Amazon S3 to troubleshoot issues.
jobRoleArn (string) --
The ARN of the IAM role that allows IoT SiteWise to read Amazon S3 data.
files (list) --
The files in the specified Amazon S3 bucket that contain your data.
(dict) --
The file in Amazon S3 where your data is saved.
bucket (string) --
The name of the Amazon S3 bucket from which data is imported.
key (string) --
The key of the Amazon S3 object that contains your data. Each object has exactly one key, which serves as its unique identifier.
versionId (string) --
The version ID to identify a specific version of the Amazon S3 object that contains your data.
errorReportLocation (dict) --
The Amazon S3 destination where errors associated with the job creation request are saved.
bucket (string) --
The name of the Amazon S3 bucket to which errors associated with the bulk import job are sent.
prefix (string) --
Amazon S3 uses the prefix as a folder name to organize data in the bucket. Each object in a bucket has exactly one key, which is its unique identifier. The prefix must end with a forward slash (/). For more information, see Organizing objects using prefixes in the Amazon Simple Storage Service User Guide.
jobConfiguration (dict) --
Contains the configuration information of a job, such as the file format used to save data in Amazon S3.
fileFormat (dict) --
The file format of the data in Amazon S3.
csv (dict) --
The .csv file format.
columnNames (list) --
The column names specified in the .csv file.
(string) --
jobCreationDate (datetime) --
The date the job was created, in Unix epoch time.
jobLastUpdateDate (datetime) --
The date the job was last updated, in Unix epoch time.
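A minimal boto3 sketch of calling this operation; the job ID below is a placeholder for a jobId returned by create_bulk_import_job, and credentials and region are assumed to be configured in the environment:

import boto3

client = boto3.client('iotsitewise')

# 'my-job-id' is a placeholder; use the jobId returned by create_bulk_import_job.
response = client.describe_bulk_import_job(jobId='my-job-id')

print(response['jobName'], response['jobStatus'])
if response['jobStatus'] in ('FAILED', 'COMPLETED_WITH_FAILURES'):
    # Error logs are saved under this bucket and prefix in Amazon S3.
    report = response['errorReportLocation']
    print('Error report: s3://{}/{}'.format(report['bucket'], report['prefix']))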
Retrieves a paginated list of bulk import job requests. For more information, see List bulk import jobs (CLI) in the AWS IoT SiteWise User Guide.
See also: AWS API Documentation
Request Syntax
response = client.list_bulk_import_jobs(
    nextToken='string',
    maxResults=123,
    filter='ALL'|'PENDING'|'RUNNING'|'CANCELLED'|'FAILED'|'COMPLETED_WITH_FAILURES'|'COMPLETED'
)
Parameters
nextToken (string) --
The token to be used for the next set of paginated results.
maxResults (integer) --
The maximum number of results to return for each paginated request.
filter (string) --
You can use a filter to select the bulk import jobs that you want to retrieve.
Return type
dict
Response Syntax
{
    'jobSummaries': [
        {
            'id': 'string',
            'name': 'string',
            'status': 'PENDING'|'CANCELLED'|'RUNNING'|'COMPLETED'|'FAILED'|'COMPLETED_WITH_FAILURES'
        },
    ],
    'nextToken': 'string'
}
Response Structure
(dict) --
jobSummaries (list) --
One or more job summaries to list.
(dict) --
Contains the job summary information.
id (string) --
The ID of the job.
name (string) --
The unique name that helps identify the job request.
status (string) --
The status of the bulk import job can be one of the following values:
PENDING – IoT SiteWise is waiting for the current bulk import job to finish.
CANCELLED – The bulk import job has been canceled.
RUNNING – IoT SiteWise is processing your request to import your data from Amazon S3.
COMPLETED – IoT SiteWise successfully completed your request to import data from Amazon S3.
FAILED – IoT SiteWise couldn't process your request to import data from Amazon S3. You can use logs saved in the specified error report location in Amazon S3 to troubleshoot issues.
COMPLETED_WITH_FAILURES – IoT SiteWise completed your request to import data from Amazon S3 with errors. You can use logs saved in the specified error report location in Amazon S3 to troubleshoot issues.
nextToken (string) --
The token for the next set of results, or null if there are no additional results.
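A minimal pagination sketch with boto3, assuming default credentials; it follows nextToken until the service stops returning one:

import boto3

client = boto3.client('iotsitewise')

summaries = []
kwargs = {'filter': 'ALL', 'maxResults': 50}
while True:
    response = client.list_bulk_import_jobs(**kwargs)
    summaries.extend(response['jobSummaries'])
    if 'nextToken' not in response:
        break
    kwargs['nextToken'] = response['nextToken']

for job in summaries:
    print(job['id'], job['name'], job['status'])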
Defines a job to ingest data into IoT SiteWise from Amazon S3. For more information, see Create a bulk import job (CLI) in the AWS IoT SiteWise User Guide.
See also: AWS API Documentation
Request Syntax
response = client.create_bulk_import_job(
    jobName='string',
    jobRoleArn='string',
    files=[
        {
            'bucket': 'string',
            'key': 'string',
            'versionId': 'string'
        },
    ],
    errorReportLocation={
        'bucket': 'string',
        'prefix': 'string'
    },
    jobConfiguration={
        'fileFormat': {
            'csv': {
                'columnNames': [
                    'ALIAS'|'ASSET_ID'|'PROPERTY_ID'|'DATA_TYPE'|'TIMESTAMP_SECONDS'|'TIMESTAMP_NANO_OFFSET'|'QUALITY'|'VALUE',
                ]
            }
        }
    }
)
Parameters
jobName (string) -- [REQUIRED]
The unique name that helps identify the job request.
jobRoleArn (string) -- [REQUIRED]
The ARN of the IAM role that allows IoT SiteWise to read Amazon S3 data.
files (list) -- [REQUIRED]
The files in the specified Amazon S3 bucket that contain your data.
(dict) --
The file in Amazon S3 where your data is saved.
bucket (string) -- [REQUIRED]
The name of the Amazon S3 bucket from which data is imported.
key (string) -- [REQUIRED]
The key of the Amazon S3 object that contains your data. Each object has exactly one key, which serves as its unique identifier.
versionId (string) --
The version ID to identify a specific version of the Amazon S3 object that contains your data.
errorReportLocation (dict) -- [REQUIRED]
The Amazon S3 destination where errors associated with the job creation request are saved.
bucket (string) -- [REQUIRED]
The name of the Amazon S3 bucket to which errors associated with the bulk import job are sent.
prefix (string) -- [REQUIRED]
Amazon S3 uses the prefix as a folder name to organize data in the bucket. Each object in a bucket has exactly one key, which is its unique identifier. The prefix must end with a forward slash (/). For more information, see Organizing objects using prefixes in the Amazon Simple Storage Service User Guide.
jobConfiguration (dict) -- [REQUIRED]
Contains the configuration information of a job, such as the file format used to save data in Amazon S3.
fileFormat (dict) -- [REQUIRED]
The file format of the data in Amazon S3.
csv (dict) --
The .csv file format.
columnNames (list) --
The column names specified in the .csv file.
(string) --
Return type
dict
Response Syntax
{
    'jobId': 'string',
    'jobName': 'string',
    'jobStatus': 'PENDING'|'CANCELLED'|'RUNNING'|'COMPLETED'|'FAILED'|'COMPLETED_WITH_FAILURES'
}
Response Structure
(dict) --
jobId (string) --
The ID of the job.
jobName (string) --
The unique name that helps identify the job request.
jobStatus (string) --
The status of the bulk import job can be one of the following values:
PENDING – IoT SiteWise is waiting for the current bulk import job to finish.
CANCELLED – The bulk import job has been canceled.
RUNNING – IoT SiteWise is processing your request to import your data from Amazon S3.
COMPLETED – IoT SiteWise successfully completed your request to import data from Amazon S3.
FAILED – IoT SiteWise couldn't process your request to import data from Amazon S3. You can use logs saved in the specified error report location in Amazon S3 to troubleshoot issues.
COMPLETED_WITH_FAILURES – IoT SiteWise completed your request to import data from Amazon S3 with errors. You can use logs saved in the specified error report location in Amazon S3 to troubleshoot issues.
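A minimal sketch of creating a job with boto3; the bucket names, object key, job name, and role ARN below are all placeholders, and the role must grant IoT SiteWise read access to the data bucket:

import boto3

client = boto3.client('iotsitewise')

response = client.create_bulk_import_job(
    jobName='my-bulk-import-job',                                   # placeholder
    jobRoleArn='arn:aws:iam::123456789012:role/MySiteWiseImport',   # placeholder
    files=[
        {'bucket': 'my-data-bucket', 'key': 'historical/data.csv'}, # placeholders
    ],
    errorReportLocation={
        'bucket': 'my-error-bucket',       # placeholder
        'prefix': 'bulk-import-errors/',   # must end with a forward slash
    },
    jobConfiguration={
        'fileFormat': {
            'csv': {
                # Order must match the column order in the .csv file.
                'columnNames': [
                    'ASSET_ID', 'PROPERTY_ID', 'DATA_TYPE',
                    'TIMESTAMP_SECONDS', 'TIMESTAMP_NANO_OFFSET',
                    'QUALITY', 'VALUE',
                ],
            }
        }
    },
)

print(response['jobId'], response['jobStatus'])

The returned jobId can then be passed to describe_bulk_import_job to poll the job status.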