LinuxCommandLibrary

aws-kinesis

Manage and interact with Amazon Kinesis

TLDR

Show all streams in the account

$ aws kinesis list-streams

Write one record to a Kinesis stream
$ aws kinesis put-record --stream-name [name] --partition-key [key] --data [base64_encoded_message]

Write a record to a Kinesis stream with inline base64 encoding
$ aws kinesis put-record --stream-name [name] --partition-key [key] --data "$( echo "[my raw message]" | base64 )"

List the shards available on a stream
$ aws kinesis list-shards --stream-name [name]

Get a shard iterator for reading from the oldest message in a stream's shard
$ aws kinesis get-shard-iterator --shard-iterator-type TRIM_HORIZON --stream-name [name] --shard-id [id]

Read records from a shard, using a shard iterator
$ aws kinesis get-records --shard-iterator [iterator]
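The last two TLDR entries are usually combined: capture the iterator, then fetch and decode a batch of records. A minimal sketch, assuming a hypothetical stream named myStream with a default first shard and configured AWS credentials:

```shell
# Hypothetical names; requires configured AWS credentials.
STREAM=myStream
SHARD=shardId-000000000000

# Get an iterator positioned at the oldest record in the shard.
ITERATOR=$(aws kinesis get-shard-iterator \
  --stream-name "$STREAM" --shard-id "$SHARD" \
  --shard-iterator-type TRIM_HORIZON \
  --query 'ShardIterator' --output text)

# Fetch a batch of records. The Data field is base64-encoded,
# so decode each payload locally. With --output text, a list of
# values is printed tab-separated, hence the tr.
aws kinesis get-records --shard-iterator "$ITERATOR" \
  --query 'Records[].Data' --output text |
tr '\t' '\n' |
while read -r payload; do
  printf '%s' "$payload" | base64 --decode
  echo
done
```

Note that shard iterators expire after a few minutes; long-running consumers must chase the NextShardIterator field returned by each get-records call.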

SYNOPSIS

aws kinesis subcommand [options] [arguments]

PARAMETERS

--stream-name (string)
    Name of the Kinesis data stream.

--shard-id (string)
    Identifier of the shard within the stream.

--shard-iterator-type (string)
    Type of shard iterator: AT_TIMESTAMP, TRIM_HORIZON, LATEST, etc.

--starting-position (structure)
    Starting position for subscribe-to-shard (enhanced fan-out): AT_TIMESTAMP, TRIM_HORIZON, LATEST.

--data (blob)
    Base64-encoded data payload for put-record.

--partition-key (string)
    UTF-8 partition key for routing records.

--region (string)
    AWS region (e.g., us-east-1).

--output (string)
    Output format: json|text|table.

--profile (string)
    AWS profile name for credentials.

--debug
    Enable debug logging.

DESCRIPTION

The aws kinesis command is part of the AWS Command Line Interface (CLI) and is used to interact with Amazon Kinesis Data Streams, a fully managed service for real-time processing of streaming data at massive scale. It lets users create, manage, and monitor data streams, ingest and retrieve records, perform shard-level operations, and integrate with other AWS services.

Key capabilities include listing streams, describing stream details (such as shard count and status), putting records into streams as a producer, getting records as a consumer, and scaling streams dynamically via shard splitting and merging. It supports enhanced fan-out for low-latency reading and integrates with Kinesis Data Firehose for delivery to S3, Redshift, or OpenSearch Service (formerly Elasticsearch).

Authentication requires configured AWS credentials (via AWS_ACCESS_KEY_ID, IAM roles, or profiles). Output formats include JSON, table, or text. Common use cases: IoT data ingestion, log processing, real-time analytics, and video stream processing. Requires AWS CLI v1 or v2 installed.
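Global options such as --profile, --output, and --query apply to every subcommand. A sketch, assuming a hypothetical named profile "analytics" exists in ~/.aws/credentials:

```shell
# Hypothetical profile and stream names; requires valid credentials.
aws kinesis list-streams --profile analytics --region us-east-1 --output table

# describe-stream-summary is lighter than describe-stream for status checks.
aws kinesis describe-stream-summary --stream-name myStream --profile analytics \
  --query 'StreamDescriptionSummary.{Status:StreamStatus,OpenShards:OpenShardCount}'
```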

CAVEATS

Requires AWS CLI installation and valid credentials with Kinesis IAM permissions. Subcommand-specific options vary; not all apply globally. High-throughput streams may incur significant costs. Default shard quotas apply per stream and vary by region (500 in some regions, 200 in others); increases can be requested. In AWS CLI v2, blob parameters such as --data expect base64-encoded input by default; pass --cli-binary-format raw-in-base64-out to supply raw text.

COMMON SUBCOMMANDS

create-stream, delete-stream, list-streams, describe-stream, put-record, get-records, get-shard-iterator, merge-shards, split-shard, increase-stream-retention-period, decrease-stream-retention-period.
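A hedged end-to-end sketch of the lifecycle subcommands, using a hypothetical stream name (aws kinesis also provides waiters for stream state):

```shell
# Create a provisioned stream with a single shard.
aws kinesis create-stream --stream-name demo-stream --shard-count 1

# Block until the stream reaches ACTIVE before using it.
aws kinesis wait stream-exists --stream-name demo-stream

# Raise retention from the 24-hour default; the maximum is
# 365 days, i.e. 8760 hours.
aws kinesis increase-stream-retention-period \
  --stream-name demo-stream --retention-period-hours 48

# Clean up.
aws kinesis delete-stream --stream-name demo-stream
```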

EXAMPLES

aws kinesis list-streams --region us-east-1
aws kinesis put-record --stream-name myStream --partition-key 0 --data "$(printf 'test' | base64)" --region us-east-1

HISTORY

Amazon Kinesis launched in late 2013, and support was added to the AWS CLI early in its v1 series. AWS CLI v2 (2020) brought performance improvements and changed the default handling of binary parameters such as --data. Kinesis Data Streams itself gained an on-demand capacity mode in 2021.

SEE ALSO
