Set of useful libraries for Micronaut. All the libraries are available in Maven Central.
- AWS SDK for Micronaut - integration for DynamoDB, Kinesis, Simple Storage Service (S3), Simple Email Service (SES), Simple Notification Service (SNS), Simple Queue Service (SQS) and WebSockets for API Gateway
- Micronaut for API Gateway Proxy - develop API Gateway Proxy Lambda functions using Micronaut HTTP server capabilities (currently superseded by the official library)
1. AWS SDK for Micronaut
AWS SDK for Micronaut is the successor of the Grails AWS SDK Plugin. If you are a Grails AWS SDK Plugin user, you should find many of the services familiar. The provided integrations are described in the chapters below.
Micronaut for API Gateway Proxy is handled separately in its own library.
Key concepts of the AWS SDK for Micronaut:
- Fully leveraging Micronaut best practices
  - Low-level API clients such as AmazonDynamoDB available for dependency injection
  - Declarative clients and services such as @KinesisClient where applicable
  - Configuration-driven named service beans
  - Sensible defaults
  - Conditional beans based on the presence of classes on the classpath or on the presence of specific properties
- Fully leveraging existing AWS SDK configuration chains (e.g. default credential provider chain, default region provider chain)
- Strong focus on the ease of testing
  - Low-level API clients such as AmazonDynamoDB injected by Micronaut and overridable in the tests
  - All high-level services hidden behind an interface for easy mocking in the tests
  - Declarative clients and services for easy mocking in the tests
- Java-enabled, but Groovy is a first-class citizen
In this documentation, the high-level approaches are discussed before the lower-level services.
1.1. Installation
For AWS SDK 2.x use artefacts starting with micronaut-amazon-awssdk. These artefacts are written in pure Java.
For AWS SDK 1.x use artefacts starting with micronaut-aws-sdk. These are considered legacy artefacts and might be removed in the future once AWS SDK 2.x gains wider adoption.
Since 1.2.8, see the particular subprojects for installation instructions.
1.2. CloudWatch Logs
This library provides support for reading the latest CloudWatch Logs for a given log group, usually when testing Lambda functions.
Installation
Gradle (AWS SDK 2.x):
implementation 'com.agorapulse:micronaut-amazon-awssdk-cloudwatchlogs:2.1.11-micronaut-3.0'
Maven (AWS SDK 2.x):
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-amazon-awssdk-cloudwatchlogs</artifactId>
    <version>2.1.11-micronaut-3.0</version>
</dependency>
Gradle (AWS SDK 1.x):
implementation 'com.agorapulse:micronaut-aws-sdk-cloudwatchlogs:2.1.11-micronaut-3.0'
Maven (AWS SDK 1.x):
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-cloudwatchlogs</artifactId>
    <version>2.1.11-micronaut-3.0</version>
</dependency>
CloudWatch Logs Service
There is a bean CloudWatchLogsService which can be used to read the latest log events.
package com.agorapulse.micronaut.amazon.awssdk.lambda;
import com.agorapulse.micronaut.amazon.awssdk.cloudwatchlogs.CloudWatchLogsService;
import jakarta.inject.Singleton;
@Singleton
public class LogCheckService {
private final CloudWatchLogsService logsService; (1)
public LogCheckService(CloudWatchLogsService logsService) {
this.logsService = logsService;
}
public boolean contains(String logGroup, String text) {
return logsService.getLogEvents(logGroup)
.anyMatch(e -> e.message().contains(text)); (2)
}
}
1 | Inject CloudWatchLogsService into the bean |
2 | Use getLogEvents(String) to obtain a stream of the latest log events |
package com.agorapulse.micronaut.aws.lambda;
import com.agorapulse.micronaut.aws.cloudwatchlogs.CloudWatchLogsService;
import jakarta.inject.Singleton;
@Singleton
public class LogCheckService {
private final CloudWatchLogsService logsService; (1)
public LogCheckService(CloudWatchLogsService logsService) {
this.logsService = logsService;
}
public boolean contains(String logGroup, String text) {
return logsService.getLogEvents(logGroup)
.anyMatch(e -> e.getMessage().contains(text)); (2)
}
}
1 | Inject CloudWatchLogsService into the bean |
2 | Use getLogEvents(String) to obtain a stream of the latest log events |
Testing
You can very easily test a Lambda function locally with Testcontainers and LocalStack using the micronaut-amazon-awssdk-integration-testing module.
You need to add the following dependencies into your build file to get the service connected to LocalStack automatically:
Gradle:
testImplementation 'com.agorapulse:micronaut-amazon-awssdk-integration-testing:2.1.11-micronaut-3.0'
Maven:
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-amazon-awssdk-integration-testing</artifactId>
    <version>2.1.11-micronaut-3.0</version>
</dependency>
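With the dependency in place, the service from the example above can be exercised in a test. The following is a minimal sketch assuming JUnit 5 and the LocalStack wiring described above; the log group name and the asserted text are hypothetical:
package com.agorapulse.micronaut.amazon.awssdk.lambda;

import io.micronaut.test.extensions.junit5.annotation.MicronautTest;
import jakarta.inject.Inject;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;

@MicronautTest
class LogCheckServiceTest {

    @Inject LogCheckService service;                                // the bean from the example above

    @Test
    void logGroupContainsMessage() {
        // '/aws/lambda/MyFunction' is a hypothetical log group name
        assertTrue(service.contains("/aws/lambda/MyFunction", "RequestId"));
    }
}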
1.3. DynamoDB
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
This library provides two approaches to work with DynamoDB tables and entities:
- High-level Declarative Services with @Service
- Middle-level DynamoDB Service
Installation
Gradle (AWS SDK 2.x):
annotationProcessor 'com.agorapulse:micronaut-amazon-awssdk-dynamodb-annotation-processor:2.1.11-micronaut-3.0'
implementation 'com.agorapulse:micronaut-amazon-awssdk-dynamodb:2.1.11-micronaut-3.0'
// for Kotlin Query DSL
implementation 'com.agorapulse:micronaut-amazon-awssdk-dynamodb-kotlin:2.1.11-micronaut-3.0'
Maven (AWS SDK 2.x):
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-amazon-awssdk-dynamodb</artifactId>
    <version>2.1.11-micronaut-3.0</version>
</dependency>
<!-- for Kotlin Query DSL -->
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-amazon-awssdk-dynamodb-kotlin</artifactId>
    <version>2.1.11-micronaut-3.0</version>
</dependency>
Gradle (AWS SDK 1.x):
annotationProcessor 'com.agorapulse:micronaut-aws-sdk-dynamodb-annotation-processor:2.1.11-micronaut-3.0'
implementation 'com.agorapulse:micronaut-aws-sdk-dynamodb:2.1.11-micronaut-3.0'
Maven (AWS SDK 1.x):
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-dynamodb</artifactId>
    <version>2.1.11-micronaut-3.0</version>
</dependency>
For Kotlin use the kapt configuration instead of annotationProcessor.
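For illustration, a minimal Gradle Kotlin DSL sketch (assuming the Kotlin JVM and kapt plugins are already applied):
dependencies {
    // kapt replaces the annotationProcessor configuration for Kotlin sources
    kapt("com.agorapulse:micronaut-amazon-awssdk-dynamodb-annotation-processor:2.1.11-micronaut-3.0")
    implementation("com.agorapulse:micronaut-amazon-awssdk-dynamodb:2.1.11-micronaut-3.0")
}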
Entity Class
The entity class is a class whose instances represent the items in DynamoDB.
For AWS SDK v2 you don't need to use the native annotations; you can use their counterparts in the com.agorapulse.micronaut.amazon.awssdk.dynamodb.annotation package instead. The only requirement is that the class is annotated either with @Introspected or @DynamoDbBean.
import com.agorapulse.micronaut.amazon.awssdk.dynamodb.annotation.PartitionKey
import com.agorapulse.micronaut.amazon.awssdk.dynamodb.annotation.Projection
import com.agorapulse.micronaut.amazon.awssdk.dynamodb.annotation.SecondaryPartitionKey
import com.agorapulse.micronaut.amazon.awssdk.dynamodb.annotation.SecondarySortKey
import com.agorapulse.micronaut.amazon.awssdk.dynamodb.annotation.SortKey
import groovy.transform.Canonical
import groovy.transform.CompileStatic
import io.micronaut.core.annotation.Introspected
import software.amazon.awssdk.services.dynamodb.model.ProjectionType
@Canonical
@Introspected (1)
@CompileStatic
class DynamoDBEntity {
public static final String DATE_INDEX = 'date'
public static final String RANGE_INDEX = 'rangeIndex'
public static final String GLOBAL_INDEX = 'globalIndex'
@PartitionKey String parentId (2)
@SortKey String id (3)
@SecondarySortKey(indexNames = RANGE_INDEX) (4)
String rangeIndex
@Projection(ProjectionType.ALL) (5)
@SecondarySortKey(indexNames = DATE_INDEX)
Date date
Integer number = 0
@Projection(ProjectionType.ALL)
@SecondaryPartitionKey(indexNames = GLOBAL_INDEX) (6)
String getGlobalIndex() {
return "$parentId:$id"
}
}
1 | The entity must be annotated with @Introspected or @DynamoDbBean |
2 | The entity must provide the partition key using the @PartitionKey annotation |
3 | The sort key is optional |
4 | The secondary indices are generated automatically if not present |
5 | If the secondary indices are generated then the projection type must be specified (the default is KEYS_ONLY) |
6 | The secondary index properties can be read-only if you derive them from the other attributes |
import com.agorapulse.micronaut.amazon.awssdk.dynamodb.annotation.PartitionKey;
import com.agorapulse.micronaut.amazon.awssdk.dynamodb.annotation.Projection;
import com.agorapulse.micronaut.amazon.awssdk.dynamodb.annotation.SecondaryPartitionKey;
import com.agorapulse.micronaut.amazon.awssdk.dynamodb.annotation.SecondarySortKey;
import com.agorapulse.micronaut.amazon.awssdk.dynamodb.annotation.SortKey;
import io.micronaut.core.annotation.Introspected;
import software.amazon.awssdk.services.dynamodb.model.ProjectionType;
import java.util.Date;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
@Introspected (1)
public class DynamoDBEntity implements PlaybookAware {
public static final String DATE_INDEX = "date";
public static final String RANGE_INDEX = "rangeIndex";
public static final String GLOBAL_INDEX = "globalIndex";
private String parentId;
private String id;
private String rangeIndex;
private Date date;
private Integer number = 0;
private Map<String, List<String>> mapProperty = new LinkedHashMap<>();
@PartitionKey (2)
public String getParentId() {
return parentId;
}
public void setParentId(String parentId) {
this.parentId = parentId;
}
@SortKey (3)
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
@SecondarySortKey(indexNames = RANGE_INDEX) (4)
public String getRangeIndex() {
return rangeIndex;
}
public void setRangeIndex(String rangeIndex) {
this.rangeIndex = rangeIndex;
}
@Projection(ProjectionType.ALL) (5)
@SecondarySortKey(indexNames = DATE_INDEX)
public Date getDate() {
return date;
}
public void setDate(Date date) {
this.date = date;
}
public Integer getNumber() {
return number;
}
public void setNumber(Integer number) {
this.number = number;
}
@Projection(ProjectionType.ALL)
@SecondaryPartitionKey(indexNames = GLOBAL_INDEX) (6)
public String getGlobalIndex() {
return parentId + ":" + id;
}
public Map<String, List<String>> getMapProperty() {
return mapProperty;
}
public void setMapProperty(Map<String, List<String>> mapProperty) {
this.mapProperty = mapProperty;
}
//CHECKSTYLE:OFF
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
DynamoDBEntity that = (DynamoDBEntity) o;
return Objects.equals(parentId, that.parentId) &&
Objects.equals(id, that.id) &&
Objects.equals(rangeIndex, that.rangeIndex) &&
Objects.equals(date, that.date) &&
Objects.equals(number, that.number) &&
Objects.equals(mapProperty, that.mapProperty);
}
@Override
public int hashCode() {
return Objects.hash(parentId, id, rangeIndex, date, number, mapProperty);
}
@Override
public String toString() {
return "DynamoDBEntity{" +
"parentId='" + parentId + '\'' +
", id='" + id + '\'' +
", rangeIndex='" + rangeIndex + '\'' +
", date=" + date +
", number=" + number +
", mapProperty=" + mapProperty +
'}';
}
//CHECKSTYLE:ON
}
1 | The entity must be annotated with @Introspected or @DynamoDbBean |
2 | The entity must provide the partition key using the @PartitionKey annotation |
3 | The sort key is optional |
4 | The secondary indices are generated automatically if not present |
5 | If the secondary indices are generated then the projection type must be specified (the default is KEYS_ONLY) |
6 | The secondary index properties can be read-only if you derive them from the other attributes |
import com.agorapulse.micronaut.amazon.awssdk.dynamodb.annotation.*
import io.micronaut.core.annotation.Introspected
import software.amazon.awssdk.services.dynamodb.model.ProjectionType
import java.util.*
@Introspected (1)
class DynamoDBEntity {
@PartitionKey (2)
var parentId: String? = null
@SortKey (3)
var id: String? = null
@SecondarySortKey(indexNames = [RANGE_INDEX]) (4)
var rangeIndex: String? = null
@SecondarySortKey(indexNames = [DATE_INDEX]) (5)
@Projection(ProjectionType.ALL)
var date: Date? = null
var number = 0
var map: Map<String, List<String>>? = null
@SecondaryPartitionKey(indexNames = [GLOBAL_INDEX]) (6)
@Projection(ProjectionType.ALL)
fun getGlobalIndex(): String {
return "$parentId:$id"
}
companion object {
const val DATE_INDEX = "date"
const val RANGE_INDEX = "rangeIndex"
const val GLOBAL_INDEX = "globalIndex"
}
}
1 | The entity must be annotated with @Introspected or @DynamoDbBean |
2 | The entity must provide the partition key using the @PartitionKey annotation |
3 | The sort key is optional |
4 | The secondary indices are generated automatically if not present |
5 | If the secondary indices are generated then the projection type must be specified (the default is KEYS_ONLY) |
6 | The secondary index properties can be read-only if you derive them from the other attributes |
Declarative Services with @Service
Declarative services are very similar to Grails GORM Data Services.
If you place the @Service annotation on an interface, then methods matching the predefined patterns will be automatically implemented.
For AWS SDK 2.x, use packages starting with com.agorapulse.micronaut.amazon.awssdk.dynamodb.
For AWS SDK 1.x, use packages starting with com.agorapulse.micronaut.aws.sdk.dynamodb.
Method Signatures
The following example shows many of the available method signatures:
@Service(DynamoDBEntity)
interface DynamoDBItemDBService {
DynamoDBEntity get(String hash, String rangeKey)
DynamoDBEntity load(String hash, String rangeKey)
List<DynamoDBEntity> getAll(String hash, List<String> rangeKeys)
List<DynamoDBEntity> getAll(String hash, String... rangeKeys)
List<DynamoDBEntity> loadAll(String hash, List<String> rangeKeys)
List<DynamoDBEntity> loadAll(String hash, String... rangeKeys)
DynamoDBEntity save(DynamoDBEntity entity)
List<DynamoDBEntity> saveAll(DynamoDBEntity... entities)
List<DynamoDBEntity> saveAll(Iterable<DynamoDBEntity> entities)
int count(String hashKey)
int count(String hashKey, String rangeKey)
@Query({
query(DynamoDBEntity) {
hash hashKey
range {
eq DynamoDBEntity.RANGE_INDEX, rangeKey
}
}
})
int countByRangeIndex(String hashKey, String rangeKey)
@Query({
query(DynamoDBEntity) {
hash hashKey
range { between DynamoDBEntity.DATE_INDEX, after, before }
}
})
int countByDates(String hashKey, Date after, Date before)
Publisher<DynamoDBEntity> query(String hashKey)
Publisher<DynamoDBEntity> query(String hashKey, String rangeKey)
@Query({
query(DynamoDBEntity) {
hash hashKey
range {
eq DynamoDBEntity.RANGE_INDEX, rangeKey
}
only {
rangeIndex
}
}
})
Publisher<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey)
@Query({
query(DynamoDBEntity) {
hash hashKey
range { between DynamoDBEntity.DATE_INDEX, after, before }
}
})
List<DynamoDBEntity> queryByDates(String hashKey, Date after, Date before)
void delete(DynamoDBEntity entity)
void delete(String hashKey, String rangeKey)
@Query({
query(DynamoDBEntity) {
hash hashKey
range {
eq DynamoDBEntity.RANGE_INDEX, rangeKey
}
}
})
int deleteByRangeIndex(String hashKey, String rangeKey)
@Query({
query(DynamoDBEntity) {
hash hashKey
range { between DynamoDBEntity.DATE_INDEX, after, before }
}
})
int deleteByDates(String hashKey, Date after, Date before)
@Update({
update(DynamoDBEntity) {
hash hashKey
range rangeKey
add 'number', 1
returnUpdatedNew { number }
}
})
Number increment(String hashKey, String rangeKey)
@Update({
update(DynamoDBEntity) {
hash hashKey
range rangeKey
add 'number', -1
returnUpdatedNew { number }
}
})
Number decrement(String hashKey, String rangeKey)
@Scan({
scan(DynamoDBEntity) {
filter {
eq DynamoDBEntity.RANGE_INDEX, foo
}
}
})
Publisher<DynamoDBEntity> scanAllByRangeIndex(String foo)
}
@Service(DynamoDBEntity.class)
public interface DynamoDBEntityService {
class EqRangeIndex implements Function<Map<String, Object>, DetachedQuery> {
public DetachedQuery apply(Map<String, Object> arguments) {
return Builders.query(DynamoDBEntity.class)
.hash(arguments.get("hashKey"))
.range(r -> r.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("rangeKey")));
}
}
class EqRangeProjection implements Function<Map<String, Object>, DetachedQuery> {
public DetachedQuery apply(Map<String, Object> arguments) {
return Builders.query(DynamoDBEntity.class)
.hash(arguments.get("hashKey"))
.range(r ->
r.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("rangeKey"))
)
.only(DynamoDBEntity.RANGE_INDEX);
}
}
class EqRangeScan implements Function<Map<String, Object>, DetachedScan> {
public DetachedScan apply(Map<String, Object> arguments) {
return Builders.scan(DynamoDBEntity.class)
.filter(f -> f.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("foo")));
}
}
class BetweenDateIndex implements Function<Map<String, Object>, DetachedQuery> {
public DetachedQuery apply(Map<String, Object> arguments) {
return Builders.query(DynamoDBEntity.class)
.hash(arguments.get("hashKey"))
.range(r -> r.between(DynamoDBEntity.DATE_INDEX, arguments.get("after"), arguments.get("before")));
}
}
class IncrementNumber implements Function<Map<String, Object>, DetachedUpdate> {
public DetachedUpdate apply(Map<String, Object> arguments) {
return Builders.update(DynamoDBEntity.class)
.hash(arguments.get("hashKey"))
.range(arguments.get("rangeKey"))
.add("number", 1)
.returnUpdatedNew(DynamoDBEntity::getNumber);
}
}
class DecrementNumber implements Function<Map<String, Object>, DetachedUpdate> {
public DetachedUpdate apply(Map<String, Object> arguments) {
return Builders.update(DynamoDBEntity.class)
.hash(arguments.get("hashKey"))
.range(arguments.get("rangeKey"))
.add("number", -1)
.returnUpdatedNew(DynamoDBEntity::getNumber);
}
}
DynamoDBEntity get(String hash, String rangeKey);
DynamoDBEntity load(String hash, String rangeKey);
List<DynamoDBEntity> getAll(String hash, List<String> rangeKeys);
List<DynamoDBEntity> getAll(String hash, String... rangeKeys);
List<DynamoDBEntity> loadAll(String hash, List<String> rangeKeys);
List<DynamoDBEntity> loadAll(String hash, String... rangeKeys);
DynamoDBEntity save(DynamoDBEntity entity);
List<DynamoDBEntity> saveAll(DynamoDBEntity... entities);
List<DynamoDBEntity> saveAll(Iterable<DynamoDBEntity> entities);
int count(String hashKey);
int count(String hashKey, String rangeKey);
@Query(EqRangeIndex.class)
int countByRangeIndex(String hashKey, String rangeKey);
@Query(BetweenDateIndex.class)
int countByDates(String hashKey, Date after, Date before);
Publisher<DynamoDBEntity> query(String hashKey);
Publisher<DynamoDBEntity> query(String hashKey, String rangeKey);
@Query(EqRangeProjection.class)
Publisher<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey);
@Query(BetweenDateIndex.class)
List<DynamoDBEntity> queryByDates(String hashKey, Date after, Date before);
void delete(DynamoDBEntity entity);
void delete(String hashKey, String rangeKey);
@Query(EqRangeIndex.class)
int deleteByRangeIndex(String hashKey, String rangeKey);
@Query(BetweenDateIndex.class)
int deleteByDates(String hashKey, Date after, Date before);
@Update(IncrementNumber.class)
Number increment(String hashKey, String rangeKey);
@Update(DecrementNumber.class)
Number decrement(String hashKey, String rangeKey);
@Scan(EqRangeScan.class)
Publisher<DynamoDBEntity> scanAllByRangeIndex(String foo);
}
@Service(value = DynamoDBEntity::class, tableName = "DynamoDBJava")
interface DynamoDBEntityService {
fun get(@PartitionKey parentId: String, @SortKey id: String): DynamoDBEntity
fun load(@PartitionKey parentId: String, @SortKey id: String): DynamoDBEntity
fun getAll(hash: String, rangeKeys: List<String>): List<DynamoDBEntity>
fun getAll(hash: String, vararg rangeKeys: String): List<DynamoDBEntity>
fun loadAll(hash: String, rangeKeys: List<String>): List<DynamoDBEntity>
fun loadAll(hash: String, vararg rangeKeys: String): List<DynamoDBEntity>
fun save(entity: DynamoDBEntity): DynamoDBEntity
fun saveAll(vararg entities: DynamoDBEntity): List<DynamoDBEntity?>?
fun saveAll(entities: Iterable<DynamoDBEntity>): List<DynamoDBEntity>
fun count(hashKey: String): Int
fun count(hashKey: String, rangeKey: String): Int
class EqRangeIndex : QueryFunction<DynamoDBEntity>({ args: Map<String, Any> ->
partitionKey(args.get("hashKey"))
index(DynamoDBEntity.RANGE_INDEX)
sortKey {
eq(args["rangeKey"])
}
})
@Query(EqRangeIndex::class)
fun countByRangeIndex(hashKey: String, rangeKey: String): Int
class BetweenDateIndex : QueryFunction<DynamoDBEntity>({ args: Map<String, Any> ->
index(DynamoDBEntity.DATE_INDEX)
partitionKey(args["hashKey"])
sortKey { between(args["after"], args["before"]) }
page(1)
})
@Query(BetweenDateIndex::class)
fun countByDates(hashKey: String, after: Date, before: Date): Int
fun query(hashKey: String): Publisher<DynamoDBEntity>
fun query(hashKey: String, rangeKey: String): Publisher<DynamoDBEntity>
class EqRangeProjection : QueryFunction<DynamoDBEntity>({ args: Map<String, Any> ->
partitionKey(args["hashKey"])
index(DynamoDBEntity.RANGE_INDEX)
sortKey { eq(args["rangeKey"]) }
only(DynamoDBEntity.RANGE_INDEX)
})
@Query(EqRangeProjection::class)
fun queryByRangeIndex(hashKey: String, rangeKey: String): Publisher<DynamoDBEntity>
@Query(BetweenDateIndex::class)
fun queryByDates(hashKey: String, after: Date, before: Date): List<DynamoDBEntity>
class BetweenDateIndexScroll : QueryFunction<DynamoDBEntity>({ args: Map<String, Any> ->
index(DynamoDBEntity.DATE_INDEX)
partitionKey(args["hashKey"])
lastEvaluatedKey(args["lastEvaluatedKey"])
sortKey { between(args["after"], args["before"]) }
})
@Query(BetweenDateIndexScroll::class)
fun queryByDatesScroll(
hashKey: String,
after: Date,
before: Date,
lastEvaluatedKey: DynamoDBEntity
): List<DynamoDBEntity>
fun delete(entity: DynamoDBEntity)
fun delete(hashKey: String, rangeKey: String)
@Query(EqRangeIndex::class)
fun deleteByRangeIndex(hashKey: String, rangeKey: String): Int
@Query(BetweenDateIndex::class)
fun deleteByDates(hashKey: String, after: Date, before: Date): Int
class IncrementNumber : UpdateFunction<DynamoDBEntity, Int>({ args: Map<String, Any> ->
partitionKey(args["hashKey"])
sortKey(args["rangeKey"])
add("number", 1)
returnUpdatedNew(DynamoDBEntity::number)
})
@Update(IncrementNumber::class)
fun increment(hashKey: String, rangeKey: String): Number
class DecrementNumber : UpdateFunction<DynamoDBEntity, Int>({ args: Map<String, Any> ->
partitionKey(args["hashKey"])
sortKey(args["rangeKey"])
add("number", -1)
returnUpdatedNew(DynamoDBEntity::number)
})
@Update(DecrementNumber::class)
fun decrement(hashKey: String, rangeKey: String): Number
class EqRangeScan : ScanFunction<DynamoDBEntity>({ args: Map<String, Any> ->
filter {
eq(DynamoDBEntity.RANGE_INDEX, args["foo"])
}
})
@Scan(EqRangeScan::class)
fun scanAllByRangeIndex(foo: String): Publisher<DynamoDBEntity>
}
The following table summarizes the supported method signatures:
Method Name | Arguments | Description |
---|---|---|
save, saveAll | An entity, an array of entities or an iterable of entities | Persists the entity or a list of entities and returns self |
get, load, getAll, loadAll | Hash key and optional range key, array of range keys or iterable of range keys (annotated with @PartitionKey and @SortKey where the argument names are not recognized) | Loads a single entity or a list of entities from the table. The range key is required for tables which define the range key |
count | Hash key and optional range key | Counts the items in the database. Beware, this can be a very expensive operation in DynamoDB. See Advanced Queries for advanced use cases |
delete | Entity, or hash key and optional range key | Deletes an item which can be specified with hash key and optional range key. See Advanced Queries for advanced use cases |
query | Hash key and optional range key | Queries for all entities with given hash key and/or range key. See Advanced Queries for advanced use cases |
(none of the above) | Any arguments, which will be translated into the arguments map (see below) | Query, scan or update. See Advanced Queries, Scanning and Updates for advanced use cases |
Calling any of the declarative service methods will create the DynamoDB table automatically if it does not exist already.
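Once compiled, the implementation of the interface is generated for you and can be injected like any other bean. The following is a minimal usage sketch assuming the DynamoDBEntityService interface from the Java example above:
import jakarta.inject.Singleton;

@Singleton
public class DynamoDBEntityOperations {

    private final DynamoDBEntityService service;                    // implementation generated from the @Service interface

    public DynamoDBEntityOperations(DynamoDBEntityService service) {
        this.service = service;
    }

    public DynamoDBEntity createAndFetch(String parentId, String id) {
        DynamoDBEntity entity = new DynamoDBEntity();
        entity.setParentId(parentId);
        entity.setId(id);
        service.save(entity);                                       // persists the entity into the table
        return service.get(parentId, id);                           // loads it back by partition and sort key
    }
}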
Advanced Queries
The DynamoDB integration does not support the feature known as dynamic finders. Instead, you can annotate any method with the @Query annotation to make it
- a counting method if its name begins with count
- a batch delete method if its name begins with delete
- otherwise an advanced query method
import static com.agorapulse.micronaut.amazon.awssdk.dynamodb.groovy.GroovyBuilders.* (1)
@Service(DynamoDBEntity) (2)
interface DynamoDBItemDBService {
@Query({ (3)
query(DynamoDBEntity) {
partitionKey hashKey (4)
index DynamoDBEntity.RANGE_INDEX
range {
eq rangeKey (5)
}
only { (6)
rangeIndex (7)
}
}
})
Publisher<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey) (8)
}
1 | GroovyBuilders class provides all necessary factory methods and keywords |
2 | Annotate an interface with @Service with the type of the entity as its value |
3 | @Query annotation accepts a closure which returns a query builder (see QueryBuilder for full reference) |
4 | Specify a partition key with partitionKey method and method’s hashKey argument |
5 | Specify some range key criteria with the method’s rangeKey argument (see RangeConditionCollector for full reference) |
6 | You can limit which properties are returned from the query |
7 | Only rangeIndex property will be populated in the entities returned |
8 | The arguments have no special meaning but you can use them in the query. The method must return either Publisher or List of entities. |
@Service(value = DynamoDBEntity.class, tableName = "DynamoDBJava") (1)
public interface DynamoDBEntityService {
class EqRangeProjection implements QueryFunction<DynamoDBEntity> { (2)
public QueryBuilder<DynamoDBEntity> query(Map<String, Object> arguments) {
return builder().partitionKey(arguments.get("hashKey")) (3)
.index(DynamoDBEntity.RANGE_INDEX)
.sortKey(r ->
r.eq(arguments.get("rangeKey")) (4)
)
.only(DynamoDBEntity.RANGE_INDEX); (5)
}
}
@Query(EqRangeProjection.class) (6)
Publisher<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey); (7)
}
1 | Annotate an interface with @Service with the type of the entity as its value |
2 | Define class which implements QueryFunction |
3 | Specify a partition key with partitionKey method and method’s hashKey argument |
4 | Specify some range key criteria with the method’s rangeKey argument (see RangeConditionCollector for full reference) |
5 | Only rangeIndex property will be populated in the entities returned |
6 | @Query annotation accepts a class which implements Function<Map<String, Object>, DetachedQuery> |
7 | The arguments have no special meaning but you can use them in the query using arguments map. The method must return either Publisher or List of entities. |
@Service(value = DynamoDBEntity::class, tableName = "DynamoDBJava") (1)
interface DynamoDBEntityService {
class EqRangeProjection : QueryFunction<DynamoDBEntity>({ args: Map<String, Any> -> (2)
partitionKey(args["hashKey"]) (3)
index(DynamoDBEntity.RANGE_INDEX)
sortKey { eq(args["rangeKey"]) } (4)
only(DynamoDBEntity.RANGE_INDEX) (5)
})
@Query(EqRangeProjection::class) (6)
fun queryByRangeIndex(hashKey: String, rangeKey: String): Publisher<DynamoDBEntity> (7)
}
1 | Annotate an interface with @Service with the type of the entity as its value |
2 | Create class that extends com.agorapulse.micronaut.amazon.awssdk.dynamodb.kotlin.QueryFunction and use the DSL constructor |
3 | Specify a partition key with partitionKey method and method’s hashKey argument |
4 | Specify some range key criteria with the method’s rangeKey argument (see RangeConditionCollector for full reference) |
5 | Only rangeIndex property will be populated in the entities returned |
6 | @Query annotation accepts a class which implements Function<Map<String, Object>, DetachedQuery> |
7 | The arguments have no special meaning but you can use them in the query using arguments map. The method must return either Publisher or List of entities. |
import static com.agorapulse.micronaut.aws.dynamodb.builder.Builders.* (1)
@Service(DynamoDBEntity) (2)
interface DynamoDBItemDBService {
@Query({ (3)
query(DynamoDBEntity) {
hash hashKey (4)
range {
eq DynamoDBEntity.RANGE_INDEX, rangeKey (5)
}
only { (6)
rangeIndex (7)
}
}
})
Publisher<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey) (8)
}
1 | Builders class provides all necessary factory methods and keywords |
2 | Annotate an interface with @Service with the type of the entity as its value |
3 | @Query annotation accepts a closure which returns a query builder (see QueryBuilder for full reference) |
4 | Specify a hash key with the hash method and the method's hashKey argument |
5 | Specify some range key criteria with the method’s rangeKey argument (see RangeConditionCollector for full reference) |
6 | You can limit which properties are returned from the query |
7 | Only rangeIndex property will be populated in the entities returned |
8 | The arguments have no special meaning but you can use them in the query. The method must return either Publisher or List of entities. |
Scanning
The DynamoDB integration does not support the feature known as dynamic finders. If you need to scan the table by unindexed attributes, you can annotate any method with the @Scan annotation to make it
- a counting method if its name begins with count
- otherwise an advanced query method
import static com.agorapulse.micronaut.amazon.awssdk.dynamodb.groovy.GroovyBuilders.* (1)
@Service(DynamoDBEntity) (2)
interface DynamoDBItemDBService {
@Scan({ (3)
scan(DynamoDBEntity) {
filter {
eq DynamoDBEntity.RANGE_INDEX, foo (4)
}
only {
rangeIndex
}
}
})
Publisher<DynamoDBEntity> scanAllByRangeIndex(String foo) (5)
}
1 | GroovyBuilders class provides all necessary factory methods and keywords |
2 | Annotate an interface with @Service with the type of the entity as its value |
3 | @Scan annotation accepts a closure which returns a scan builder (see ScanBuilder for full reference) |
4 | Specify some filter criteria with the method’s foo argument (see RangeConditionCollector for full reference) |
5 | The arguments have no special meaning but you can use them in the scan definition. The method must return either Publisher or List of entities. |
@Service(value = DynamoDBEntity.class, tableName = "DynamoDBJava") (1)
public interface DynamoDBEntityService {
class EqRangeScan implements ScanFunction<DynamoDBEntity> { (2)
@Override
public ScanBuilder<DynamoDBEntity> scan(Map<String, Object> args) {
return builder().filter(f ->
f.eq(DynamoDBEntity.RANGE_INDEX, args.get("foo")) (3)
);
}
}
@Scan(EqRangeScan.class) (4)
Publisher<DynamoDBEntity> scanAllByRangeIndex(String foo); (5)
}
1 | Annotate an interface with @Service with the type of the entity as its value |
2 | Define class which implements ScanFunction |
3 | Specify some filter criteria with the method’s foo argument (see RangeConditionCollector for full reference) |
4 | @Scan annotation accepts a class which implements Function<Map<String, Object>, DetachedScan> |
5 | The arguments have no special meaning but you can use them in the scan definition. The method must return either Publisher or List of entities. |
@Service(value = DynamoDBEntity::class, tableName = "DynamoDBJava") (1)
interface DynamoDBEntityService {
class EqRangeScan : ScanFunction<DynamoDBEntity>({ args: Map<String, Any> -> (2)
filter {
eq(DynamoDBEntity.RANGE_INDEX, args["foo"]) (3)
}
})
@Scan(EqRangeScan::class) (4)
fun scanAllByRangeIndex(foo: String): Publisher<DynamoDBEntity> (5)
}
1 | Annotate an interface with @Service with the type of the entity as its value |
2 | Define class which implements ScanFunction |
3 | Specify some filter criteria with the method’s foo argument (see RangeConditionCollector for full reference) |
4 | @Scan annotation accepts a class which implements Function<Map<String, Object>, DetachedScan> |
5 | The arguments have no special meaning but you can use them in the scan definition. The method must return either Publisher or List of entities. |
import static com.agorapulse.micronaut.aws.dynamodb.builder.Builders.* (1)
@Service(DynamoDBEntity) (2)
interface DynamoDBItemDBService {
@Scan({ (3)
scan(DynamoDBEntity) {
filter {
eq DynamoDBEntity.RANGE_INDEX, foo (4)
}
}
})
Publisher<DynamoDBEntity> scanAllByRangeIndex(String foo) (5)
}
1 | Builders class provides all necessary factory methods and keywords |
2 | Annotate an interface with @Service with the type of the entity as its value |
3 | @Scan annotation accepts a closure which returns a scan builder (see ScanBuilder for full reference) |
4 | Specify some filter criteria with the method’s foo argument (see RangeConditionCollector for full reference) |
5 | The arguments have no special meaning but you can use them in the scan definition. The method must return either Publisher or List of entities. |
Updates
Declarative services allow you to execute fine-grained updates. Any method annotated with @Update will perform the update in the DynamoDB table.
import static com.agorapulse.micronaut.amazon.awssdk.dynamodb.groovy.GroovyBuilders.* (1)
@Service(DynamoDBEntity) (2)
interface DynamoDBItemDBService {
@Update({ (3)
update(DynamoDBEntity) {
partitionKey hashKey (4)
sortKey rangeKey (5)
add 'number', 1 (6)
returnUpdatedNew { number } (7)
}
})
Number increment(String hashKey, String rangeKey) (8)
}
1 | GroovyBuilders class provides all necessary factory methods and keywords |
2 | Annotate an interface with @Service with the type of the entity as its value |
3 | @Update annotation accepts a closure which returns an update builder (see UpdateBuilder for full reference) |
4 | Specify a partition key with partitionKey method and method’s hashKey argument |
5 | Specify a sort key with sortKey method and method’s rangeKey argument |
6 | Specify update operation - increment number attribute (see UpdateBuilder for full reference). You may have multiple update operations. |
7 | Specify what should be returned from the method (see UpdateBuilder for full reference). |
8 | The arguments have no special meaning but you can use them in the scan definition. The method’s return value depends on the value returned from returnUpdatedNew mapper. |
@Service(value = DynamoDBEntity.class, tableName = "DynamoDBJava") (1)
public interface DynamoDBEntityService {
class IncrementNumber implements UpdateFunction<DynamoDBEntity, Integer> { (2)
@Override
public UpdateBuilder<DynamoDBEntity, Integer> update(Map<String, Object> args) {
return builder().partitionKey(args.get("hashKey")) (3)
.sortKey(args.get("rangeKey")) (4)
.add("number", 1) (5)
.returnUpdatedNew(DynamoDBEntity::getNumber); (6)
}
}
@Update(IncrementNumber.class) (7)
Number increment(String hashKey, String rangeKey); (8)
}
1 | Annotate an interface with @Service with the type of the entity as its value |
2 | Define class which implements UpdateFunction |
3 | Specify a partition key with partitionKey method and method’s hashKey argument |
4 | Specify a sort key with sortKey method and method’s rangeKey argument |
5 | Specify update operation - increment number attribute (see UpdateBuilder for full reference). You may have multiple update operations. |
6 | Specify what should be returned from the method (see UpdateBuilder for full reference). |
7 | @Update annotation accepts a class which implements Function<Map<String, Object>, DetachedUpdate> |
8 | The arguments have no special meaning but you can use them in the scan definition. The method’s return value depends on the value returned from returnUpdatedNew mapper. |
@Service(value = DynamoDBEntity::class, tableName = "DynamoDBJava") (1)
interface DynamoDBEntityService {
class IncrementNumber : UpdateFunction<DynamoDBEntity, Int>({ args: Map<String, Any> ->(2)
partitionKey(args["hashKey"]) (3)
sortKey(args["rangeKey"]) (4)
add("number", 1) (5)
returnUpdatedNew(DynamoDBEntity::number) (6)
})
@Update(IncrementNumber::class) (7)
fun increment(hashKey: String, rangeKey: String): Number (8)
}
1 | Annotate an interface with @Service with the type of the entity as its value |
2 | Define class which extends com.agorapulse.micronaut.amazon.awssdk.dynamodb.kotlin.UpdateFunction and use the DSL constructor |
3 | Specify a partition key with partitionKey method and method’s hashKey argument |
4 | Specify a sort key with sortKey method and method’s rangeKey argument |
5 | Specify update operation - increment number attribute (see UpdateBuilder for full reference). You may have multiple update operations. |
6 | Specify what should be returned from the method (see UpdateBuilder for full reference). |
7 | @Update annotation accepts a class which implements Function<Map<String, Object>, DetachedUpdate> |
8 | The arguments have no special meaning, but you can use them in the scan definition. The method’s return value depends on the value returned from returnUpdatedNew mapper. |
import static com.agorapulse.micronaut.aws.dynamodb.builder.Builders.* (1)
@Service(DynamoDBEntity) (2)
interface DynamoDBItemDBService {
@Update({ (3)
update(DynamoDBEntity) {
hash hashKey (4)
range rangeKey (5)
add 'number', 1 (6)
returnUpdatedNew { number } (7)
}
})
Number increment(String hashKey, String rangeKey) (8)
}
1 | Builders class provides all necessary factory methods and keywords |
2 | Annotate an interface with @Service with the type of the entity as its value |
3 | @Update annotation accepts a closure which returns an update builder (see UpdateBuilder for full reference) |
4 | Specify a hash key with the hash method and the method's hashKey argument |
5 | Specify a range key with range method and method’s rangeKey argument |
6 | Specify update operation - increment number attribute (see UpdateBuilder for full reference). You may have multiple update operations. |
7 | Specify what should be returned from the method (see UpdateBuilder for full reference). |
8 | The arguments have no special meaning but you can use them in the scan definition. The method’s return value depends on the value returned from returnUpdatedNew mapper. |
DynamoDB Service
DynamoDBService provides a middle-level API for working with DynamoDB tables and entities. You can obtain an instance of DynamoDBService from DynamoDBServiceProvider, which can be injected into any bean.
DynamoDBServiceProvider provider = context.getBean(DynamoDBServiceProvider)
DynamoDBService<DynamoDBEntity> service = provider.findOrCreate(DynamoDBEntity) (1)
service.createTable() (2)
service.save(new DynamoDBEntity( (3)
parentId: '1',
id: '1',
rangeIndex: 'foo',
number: 1,
date: Date.from(REFERENCE_DATE)
))
service.get('1', '1') (4)
service.query { (5)
partitionKey '1'
index DynamoDBEntity.DATE_INDEX
range { between from, to }
}
service.update { (6)
partitionKey '1001'
sortKey '1'
add 'number', 13
returns allNew
}
service.delete('1001', '1') (7)
1 | Obtain the instance of DynamoDBService from DynamoDBServiceProvider (provider can be injected) |
2 | Create table for the entity |
3 | Save an entity |
4 | Load the entity by its hash and range keys |
5 | Query the table for entities with given range index value |
6 | Increment a property for entity specified by hash and range keys |
7 | Delete an entity |
DynamoDBServiceProvider provider = context.getBean(DynamoDBServiceProvider);
DynamoDBService<DynamoDBEntity> service = provider.findOrCreate(DynamoDBEntity.class); (1)
service.createTable(); (2)
DynamoDBEntity entity = new DynamoDBEntity();
entity.setParentId("1");
entity.setId("1");
entity.setRangeIndex("foo");
entity.setNumber(1);
entity.setDate(new Date());
service.save(entity); (3)
service.get("1", "1"); (4)
service.query(query -> (5)
query.partitionKey("1")
.index(DynamoDBEntity.DATE_INDEX)
.range(r -> r.between(from, to))
);
service.update(update -> (6)
update.partitionKey("1001")
.sortKey("1")
.add("number", 13)
.returns(ReturnValue.ALL_NEW)
);
service.delete("1001", "1"); (7)
1 | Obtain the instance of DynamoDBService from DynamoDBServiceProvider (provider can be injected) |
2 | Create table for the entity |
3 | Save an entity |
4 | Load the entity by its hash and range keys |
5 | Query the table for entities with given range index value |
6 | Increment a property for entity specified by hash and range keys |
7 | Delete an entity |
DynamoDBServiceProvider provider = context.getBean(DynamoDBServiceProvider)
DynamoDBService<DynamoDBEntity> s = provider.findOrCreate(DynamoDBEntity) (1)
s.createTable() (2)
s.save(new DynamoDBEntity( (3)
parentId: '1',
id: '1',
rangeIndex: 'foo',
date: REFERENCE_DATE.toDate()
))
s.get('1', '1') (4)
s.query('1', DynamoDBEntity.RANGE_INDEX, 'bar').count == 1 (5)
s.queryByDates('3', DynamoDBEntity.DATE_INDEX, [ (6)
after: REFERENCE_DATE.plusDays(9).toDate(),
before: REFERENCE_DATE.plusDays(20).toDate(),
]).count == 1
s.queryByDates('3', DynamoDBEntity.DATE_INDEX, [
after: REFERENCE_INSTANT.plus(9, ChronoUnit.DAYS),
before: REFERENCE_INSTANT.plus(20, ChronoUnit.DAYS),
]).count == 1
s.increment('1', '1', 'number') (7)
s.delete(s.get('1', '1')) (8)
s.deleteAll('1', DynamoDBEntity.RANGE_INDEX, 'bar') == 1 (9)
1 | Obtain the instance of DynamoDBService from DynamoDBServiceProvider (provider can be injected) |
2 | Create table for the entity |
3 | Save an entity |
4 | Load the entity by its hash and range keys |
5 | Query the table for entities with given range index value |
6 | Query the table for entities having date between the specified dates |
7 | Increment a property for entity specified by hash and range keys |
8 | Delete an entity by object reference |
9 | Delete all entities with given range index value |
DynamoDBService<DynamoDBEntity> s = provider.findOrCreate(DynamoDBEntity.class);(1)
assertNotNull(
s.createTable(5L, 5L) (2)
);
assertNotNull(
s.save(createEntity("1", "1", "foo", REFERENCE_DATE.toDate())) (3)
);
assertNotNull(
s.get("1", "1") (4)
);
assertEquals(1,
s.query("1", DynamoDBEntity.RANGE_INDEX, "bar").getCount().intValue() (5)
);
assertEquals(1,
s.queryByDates( (6)
"3",
DynamoDBEntity.DATE_INDEX,
REFERENCE_DATE.plusDays(9).toDate(),
REFERENCE_DATE.plusDays(20).toDate()
).getCount().intValue()
);
s.increment("1", "1", "number"); (7)
s.delete(s.get("1", "1")); (8)
assertEquals(1,
s.deleteAll("1", DynamoDBEntity.RANGE_INDEX, "bar") (9)
);
1 | Obtain the instance of DynamoDBService from DynamoDBServiceProvider (provider can be injected) |
2 | Create table for the entity |
3 | Save an entity |
4 | Load the entity by its hash and range keys |
5 | Query the table for entities with given range index value |
6 | Query the table for entities having date between the specified dates |
7 | Increment a property for entity specified by hash and range keys |
8 | Delete an entity by object reference |
9 | Delete all entities with given range index value |
Please see DynamoDBService (AWS SDK 2.x) and DynamoDBService (AWS SDK 1.x) for full reference.
DynamoDB Accelerator (DAX)
You can enable DynamoDB Accelerator simply by setting the DAX endpoint in the aws.dax.endpoint property. Every operation performed using the injected AmazonDynamoDB, IDynamoDBMapper or a data service will then be performed against DAX instead of the DynamoDB tables.
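For example, in application.yml (the endpoint value below is only a hypothetical placeholder):
aws:
  dax:
    endpoint: my-cluster.abc123.clustercfg.dax.use1.cache.amazonaws.com:8111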
Please check the DAX and DynamoDB Consistency Models article to understand the consequences of using DAX instead of direct DynamoDB operations.
Make sure you have set up a proper policy to access the DAX cluster. See DAX Access Control for more information. The following policy allows every DAX operation on any resource. In production, you should constrain the scope to a single cluster.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DaxAllowAll",
"Effect": "Allow",
"Action": "dax:*",
"Resource": "*"
}
]
}
Testing
You can very easily mock any of the interfaces and declarative services, but if you need a close-to-production setup, the DynamoDB integration works well with Testcontainers and LocalStack using the micronaut-amazon-awssdk-integration-testing module.
You need to add the following dependencies into your build file:
Gradle:
testImplementation 'com.agorapulse:micronaut-amazon-awssdk-integration-testing:2.1.11-micronaut-3.0'
Maven:
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-amazon-awssdk-integration-testing</artifactId>
    <version>2.1.11-micronaut-3.0</version>
</dependency>
Then you can set up your tests like this:
@MicronautTest (1)
class DefaultDynamoDBServiceSpec extends Specification {
@Inject DynamoDBServiceProvider dynamoDBServiceProvider (2)
DynamoDbService<DynamoDBEntity> dbs
void setup() {
dbs = dynamoDBServiceProvider.findOrCreate(DynamoDBEntity) (3)
}
// test methods
}
1 | Annotate the specification with @MicronautTest to let Micronaut handle the application context lifecycle |
2 | Use @Inject to let Micronaut inject the beans into your tests |
3 | Create the low-level service using DynamoDBServiceProvider |
@MicronautTest (1)
public class DeclarativeServiceTest {
@Inject DynamoDBServiceProvider provider; (2)
@Test
public void testSomething() {
DynamoDBService<DynamoDBEntity> s = provider.findOrCreate(DynamoDBEntity.class);(3)
// test code
}
}
1 | Annotate the specification with @MicronautTest to let Micronaut handle the application context lifecycle |
2 | Use @Inject to let Micronaut inject the beans into your tests |
3 | Create the low-level service using DynamoDBServiceProvider |
You can save time creating the new LocalStack container by sharing it between the tests (see application-test.yml). Alternatively, you can use a different container than LocalStack, for example the Amazon DynamoDB Local container (see application-test.yml).
1.4. Kinesis
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information.
This library provides three approaches to work with Kinesis streams:
- High-level Publishing with @KinesisClient
- High-level Listening with @KinesisListener
- Middle-level Kinesis Service
Installation
Gradle (AWS SDK 2.x):
// for Kinesis client
annotationProcessor 'com.agorapulse:micronaut-amazon-awssdk-kinesis-annotation-processor:2.1.11-micronaut-3.0'
implementation 'com.agorapulse:micronaut-amazon-awssdk-kinesis:2.1.11-micronaut-3.0'
// for Kinesis listener
implementation 'com.agorapulse:micronaut-amazon-awssdk-kinesis-worker:2.1.11-micronaut-3.0'
Maven (AWS SDK 2.x):
<!-- for Kinesis client -->
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-amazon-awssdk-kinesis</artifactId>
    <version>2.1.11-micronaut-3.0</version>
</dependency>
<!-- for Kinesis listener -->
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-amazon-awssdk-kinesis-worker</artifactId>
    <version>2.1.11-micronaut-3.0</version>
</dependency>
Gradle (AWS SDK 1.x):
// for Kinesis client
annotationProcessor 'com.agorapulse:micronaut-aws-sdk-kinesis-annotation-processor:2.1.11-micronaut-3.0'
implementation 'com.agorapulse:micronaut-aws-sdk-kinesis:2.1.11-micronaut-3.0'
// for Kinesis listener
implementation 'com.agorapulse:micronaut-aws-sdk-kinesis-worker:2.1.11-micronaut-3.0'
Maven (AWS SDK 1.x):
<!-- for Kinesis client -->
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-kinesis</artifactId>
    <version>2.1.11-micronaut-3.0</version>
</dependency>
<!-- for Kinesis listener -->
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-kinesis-worker</artifactId>
    <version>2.1.11-micronaut-3.0</version>
</dependency>
For Kotlin use the kapt configuration instead of annotationProcessor.
Configuration
No configuration is required at all, but some of the configuration options may be useful for you.
aws:
kinesis:
region: sa-east-1
# for Kinesis client
stream: MyStream (1)
streams: (2)
test: (3)
stream: TestStream
# for Kinesis listener
application-name: my-application # defaults to micronaut.application.name (4)
worker-id: myworker # defaults to host + UUID (5)
listener:
stream: IncomingMessages (6)
listeners:
other: (7)
stream: OtherStream
1 | You can specify the default stream for KinesisService and @KinesisClient |
2 | You can define multiple configurations |
3 | Each of the configurations can be accessed using the @Named('test') KinesisService qualifier, or you can define the configuration as the value of @KinesisClient('test') |
4 | For Kinesis listeners you should provide the application name, which defaults to micronaut.application.name if not present |
5 | You can also provide the ID of the Kinesis worker |
6 | This is the default stream to listen to |
7 | You can listen to multiple Kinesis streams by declaring the name of the configuration in the annotation such as @KinesisListener("other") |
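As a sketch of how the named configuration from callout 3 can be consumed (the bean below is purely illustrative; import KinesisService from the package matching your AWS SDK version):
import jakarta.inject.Named;
import jakarta.inject.Singleton;

@Singleton
public class StreamPublishers {

    private final KinesisService defaultService;                    // backed by aws.kinesis.stream
    private final KinesisService testService;                       // backed by aws.kinesis.streams.test.stream

    public StreamPublishers(
        KinesisService defaultService,
        @Named("test") KinesisService testService
    ) {
        this.defaultService = defaultService;
        this.testService = testService;
    }
}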
Publishing with @KinesisClient
If you place the @KinesisClient annotation on an interface, then methods matching the predefined patterns will be automatically implemented. Every method of a KinesisClient interface puts new records into the stream.
For AWS SDK 2.x, use packages starting with com.agorapulse.micronaut.amazon.awssdk.kinesis.
For AWS SDK 1.x, use packages starting with com.agorapulse.micronaut.aws.sdk.kinesis.
The following example shows many of the available method signatures for publishing records:
@KinesisClient (1)
interface DefaultClient {
void putRecordString(String record); (2)
PutRecordResponse putRecord(String partitionKey, String record); (3)
void putRecordAnno(@PartitionKey String id, String record); (4)
void putRecord(String partitionKey, String record, String sequenceNumber); (5)
void putRecordAnno( (6)
@PartitionKey String id,
String record,
@SequenceNumber String sqn
);
void putRecordAnnoNumbers( (7)
@PartitionKey Long id,
String record,
@SequenceNumber int sequenceNumber
);
}
1 | The @KinesisClient annotation makes the interface a Kinesis client |
2 | You can put a String into the stream with a generated UUID as the partition key |
3 | You can use a predefined partition key |
4 | If the name of the argument does not contain the word partition then the @PartitionKey annotation must be used |
5 | You can put a String into the stream with a predefined partition key and a sequence number |
6 | If the name of the sequence number argument does not contain the word sequence then the @SequenceNumber annotation must be used |
7 | The type of the partition key and sequence number does not matter as the value will always be converted to a string |
@KinesisClient (1)
interface DefaultClient {
void putRecordString(String record); (2)
PutRecordResult putRecord(String partitionKey, String record); (3)
void putRecordAnno(@PartitionKey String id, String record); (4)
void putRecord(String partitionKey, String record, String sequenceNumber); (5)
void putRecordAnno( (6)
@PartitionKey String id,
String record,
@SequenceNumber String sqn
);
void putRecordAnnoNumbers( (7)
@PartitionKey Long id,
String record,
@SequenceNumber int sequenceNumber
);
}
1 | The @KinesisClient annotation makes the interface a Kinesis client |
2 | You can put a String into the stream with a generated UUID as the partition key |
3 | You can use a predefined partition key |
4 | If the name of the argument does not contain the word partition then the @PartitionKey annotation must be used |
5 | You can put a String into the stream with a predefined partition key and a sequence number |
6 | If the name of the sequence number argument does not contain the word sequence then the @SequenceNumber annotation must be used |
7 | The type of the partition key and sequence number does not matter as the value will always be converted to a string |
@KinesisClient (1)
interface DefaultClient {
void putRecordBytes(byte[] record); (2)
void putRecordDataByteArray(@PartitionKey String id, byte[] value); (3)
PutRecordsResponse putRecords(Iterable<PutRecordsRequestEntry> entries); (4)
PutRecordsResponse putRecords(PutRecordsRequestEntry... entries); (5)
PutRecordsResponse putRecord(PutRecordsRequestEntry entry); (6)
}
1 | The @KinesisClient annotation makes the interface a Kinesis client |
2 | You can put a byte array into the stream; a UUID will be generated as the partition key |
3 | If the name of the argument does not contain the word partition then the @PartitionKey annotation must be used |
4 | You can put several records wrapped into an iterable of PutRecordsRequestEntry |
5 | You can put several records wrapped into an array of PutRecordsRequestEntry |
6 | If the single argument is of type PutRecordsRequestEntry then a PutRecordsResponse object is returned from the method even though only a single record has been published |
@KinesisClient (1)
interface DefaultClient {
void putRecordBytes(byte[] record); (2)
void putRecordDataByteArray(@PartitionKey String id, byte[] value); (3)
PutRecordsResult putRecords(Iterable<PutRecordsRequestEntry> entries); (4)
PutRecordsResult putRecords(PutRecordsRequestEntry... entries); (5)
PutRecordsResult putRecord(PutRecordsRequestEntry entry); (6)
}
1 | The @KinesisClient annotation makes the interface a Kinesis client |
2 | You can put a byte array into the stream; a UUID will be generated as the partition key |
3 | If the name of the argument does not contain the word partition then the @PartitionKey annotation must be used |
4 | You can put several records wrapped into an iterable of PutRecordsRequestEntry |
5 | You can put several records wrapped into an array of PutRecordsRequestEntry |
6 | If the single argument is of type PutRecordsRequestEntry then a PutRecordsResult object is returned from the method even though only a single record has been published |
@KinesisClient (1)
interface DefaultClient {
void putRecordObject(Pogo pogo); (2)
PutRecordsResponse putRecordObjects(Pogo... pogo); (3)
PutRecordsResponse putRecordObjects(Iterable<Pogo> pogo); (4)
void putRecordDataObject(@PartitionKey String id, Pogo value); (5)
}
1 | The @KinesisClient annotation makes the interface a Kinesis client |
2 | You can put any object into the stream; a UUID will be generated as the partition key and the object will be serialized to JSON |
3 | You can put an array of any objects into the stream; a UUID will be generated as the partition key for each record and each object will be serialized to JSON |
4 | You can put an iterable of any objects into the stream; a UUID will be generated as the partition key for each record and each object will be serialized to JSON |
5 | You can put any object into the stream with a predefined partition key; if the name of the argument does not contain the word partition then the @PartitionKey annotation must be used |
@KinesisClient (1)
interface DefaultClient {
void putRecordObject(Pogo pogo); (2)
PutRecordsResult putRecordObjects(Pogo... pogo); (3)
PutRecordsResult putRecordObjects(Iterable<Pogo> pogo); (4)
void putRecordDataObject(@PartitionKey String id, Pogo value); (5)
}
1 | The @KinesisClient annotation makes the interface a Kinesis client |
2 | You can put any object into the stream; a UUID will be generated as the partition key and the object will be serialized to JSON |
3 | You can put an array of any objects into the stream; a UUID will be generated as the partition key for each record and each object will be serialized to JSON |
4 | You can put an iterable of any objects into the stream; a UUID will be generated as the partition key for each record and each object will be serialized to JSON |
5 | You can put any object into the stream with a predefined partition key; if the name of the argument does not contain the word partition then the @PartitionKey annotation must be used |
@KinesisClient (1)
interface DefaultClient {
PutRecordResponse putEvent(MyEvent event); (2)
PutRecordsResponse putEventsIterable(Iterable<MyEvent> events); (3)
void putEventsArrayNoReturn(MyEvent... events); (4)
@Stream("OtherStream") PutRecordResponse putEventToStream(MyEvent event); (5)
}
1 | The @KinesisClient annotation makes the interface a Kinesis client |
2 | You can put an object implementing Event into the stream |
3 | You can put an iterable of objects implementing Event into the stream |
4 | You can put an array of objects implementing Event into the stream |
5 | Without any parameters @KinesisClient publishes to the default stream of the default configuration, but you can change it using the @Stream annotation on the method |
@KinesisClient (1)
interface DefaultClient {
PutRecordResult putEvent(MyEvent event); (2)
PutRecordsResult putEventsIterable(Iterable<MyEvent> events); (3)
void putEventsArrayNoReturn(MyEvent... events); (4)
@Stream("OtherStream") PutRecordResult putEventToStream(MyEvent event); (5)
}
1 | The @KinesisClient annotation makes the interface a Kinesis client |
2 | You can put an object implementing Event into the stream |
3 | You can put an iterable of objects implementing Event into the stream |
4 | You can put an array of objects implementing Event into the stream |
5 | Without any parameters @KinesisClient publishes to the default stream of the default configuration, but you can change it using the @Stream annotation on the method |
The return value of the method is PutRecordResponse or PutRecordsResponse for AWS SDK 2.x, or PutRecordResult or PutRecordsResult for AWS SDK 1.x, but it can always be omitted and replaced with void.
By default, KinesisClient publishes records into the default stream defined by the aws.kinesis.stream property. You can switch to a different configuration by changing the value of the annotation, such as @KinesisClient("other"), or by setting the stream property of the annotation, such as @KinesisClient(stream = "MyStream"). You can change the stream used by a particular method using the @Stream annotation as mentioned above.
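The generated client is injected like any other bean. A minimal usage sketch, assuming the DefaultClient interface from the examples above:
import jakarta.inject.Singleton;

@Singleton
public class EventPublisher {

    private final DefaultClient client;                             // implementation generated from the @KinesisClient interface

    public EventPublisher(DefaultClient client) {
        this.client = client;
    }

    public void publish(String partitionKey, String payload) {
        client.putRecordAnno(partitionKey, payload);                // puts the record into the configured stream
    }
}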
Listening with @KinesisListener
Before you start implementing your service with @KinesisListener you may consider implementing a Lambda function instead.
|
If you place the @KinesisListener annotation on a method of any bean, the method will be triggered whenever new records arrive in the stream.
@Singleton (1)
public class KinesisListenerTester {
@KinesisListener
public void listenString(String string) { (2)
String message = "EXECUTED: listenString(" + string + ")";
logExecution(message);
}
@KinesisListener
public void listenRecord(KinesisClientRecord record) { (3)
logExecution("EXECUTED: listenRecord(" + record + ")");
}
@KinesisListener
public void listenStringRecord(String string, KinesisClientRecord record) { (4)
logExecution("EXECUTED: listenStringRecord(" + string + ", " + record + ")");
}
@KinesisListener
public void listenObject(MyEvent event) { (5)
logExecution("EXECUTED: listenObject(" + event + ")");
}
@KinesisListener
public void listenObjectRecord(MyEvent event, KinesisClientRecord record) { (6)
logExecution("EXECUTED: listenObjectRecord(" + event + ", " + record + ")");
}
@KinesisListener
public void listenPogoRecord(Pogo event) { (7)
logExecution("EXECUTED: listenPogoRecord(" + event + ")");
}
public List<String> getExecutions() {
return executions;
}
public void setExecutions(List<String> executions) {
this.executions = executions;
}
private void logExecution(String message) {
executions.add(message);
System.err.println(message);
}
private List<String> executions = new CopyOnWriteArrayList<>();
}
1 | @KinesisListener method must be declared in a bean, e.g. @Singleton |
2 | You can listen to just plain string records |
3 | You can listen to KinesisClientRecord objects |
4 | You can listen to both string and KinesisClientRecord objects |
5 | You can listen to objects implementing Event interface |
6 | You can listen to both Event and KinesisClientRecord objects |
7 | You can listen to any object as long as it can be unmarshalled from the record payload |
@Singleton (1)
public class KinesisListenerTester {
@KinesisListener
public void listenString(String string) { (2)
String message = "EXECUTED: listenString(" + string + ")";
logExecution(message);
}
@KinesisListener
public void listenRecord(Record record) { (3)
logExecution("EXECUTED: listenRecord(" + record + ")");
}
@KinesisListener
public void listenStringRecord(String string, Record record) { (4)
logExecution("EXECUTED: listenStringRecord(" + string + ", " + record + ")");
}
@KinesisListener
public void listenObject(MyEvent event) { (5)
logExecution("EXECUTED: listenObject(" + event + ")");
}
@KinesisListener
public void listenObjectRecord(MyEvent event, Record record) { (6)
logExecution("EXECUTED: listenObjectRecord(" + event + ", " + record + ")");
}
@KinesisListener
public void listenPogoRecord(Pogo event) { (7)
logExecution("EXECUTED: listenPogoRecord(" + event + ")");
}
public List<String> getExecutions() {
return executions;
}
public void setExecutions(List<String> executions) {
this.executions = executions;
}
private void logExecution(String message) {
executions.add(message);
System.err.println(message);
}
private List<String> executions = new CopyOnWriteArrayList<>();
}
1 | @KinesisListener method must be declared in a bean, e.g. @Singleton |
2 | You can listen to just plain string records |
3 | You can listen to Record objects |
4 | You can listen to both string and Record objects |
5 | You can listen to objects implementing Event interface |
6 | You can listen to both Event and Record objects |
7 | You can listen to any object as long as it can be unmarshalled from the record payload |
You can listen to a configuration other than the default one by changing the value of the annotation, such as @KinesisListener("other") .
Multiple methods in a single application can listen to the same configuration (stream). In that case, every method will be executed with the incoming payload.
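For example, a minimal sketch of a listener bound to the "other" configuration (the MyEvent class is assumed to exist in your project):
@Singleton
public class OtherStreamListener {
    @KinesisListener("other") (1)
    public void listen(MyEvent event) {
        System.out.println("EXECUTED: listen(" + event + ")");
    }
}
1 | Listens to new records in the stream defined by the aws.kinesis.streams.other.stream property |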
Kinesis Service
KinesisService provides a middle-level API for creating, describing, and deleting streams. You can manage shards as well as read records from particular shards.
An instance of KinesisService is created for the default Kinesis configuration and for each stream configuration in the aws.kinesis.streams map.
You should always use the @Named qualifier when injecting KinesisService if you have more than one stream configuration present, e.g. @Named("other") KinesisService otherService .
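For example, a minimal sketch of injecting both the default and a named service, assuming an "other" stream configuration is present:
@Singleton
public class StreamMaintenanceService {
    private final KinesisService defaultService;
    private final KinesisService otherService;
    public StreamMaintenanceService(
        KinesisService defaultService, (1)
        @Named("other") KinesisService otherService (2)
    ) {
        this.defaultService = defaultService;
        this.otherService = otherService;
    }
}
1 | Bound to the default aws.kinesis configuration |
2 | Bound to the aws.kinesis.streams.other configuration |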
Please see KinesisService for AWS SDK 2.x or KinesisService for AWS SDK 1.x for the full reference.
Testing
You can very easily mock any of the interfaces and declarative services, but if you need a close-to-production setup, the Kinesis integration works well with Testcontainers and LocalStack using the micronaut-amazon-awssdk-integration-testing module.
You need to add the following dependencies into your build file:
testImplementation 'com.agorapulse:micronaut-amazon-awssdk-integration-testing:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-amazon-awssdk-integration-testing</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
Then you can set up your tests like this:
@MicronautTest (1)
class KinesisDemoSpec extends Specification {
@Inject KinesisService service (2)
@Retry
void 'new default stream'() {
when:
CreateStreamResponse stream = service.createStream('NewStream')
then:
stream
}
}
1 | Annotate the specification with @MicronautTest to let Micronaut handle the application context lifecycle |
2 | Use @Inject to let Micronaut inject the beans into your tests |
@MicronautTest (1)
public class KinesisJavaDemoTest {
@Inject KinesisService service; (2)
@Test
public void testJavaService() {
assertNotNull(service.createStream("TestStream"));
}
}
1 | Annotate the specification with @MicronautTest to let Micronaut handle the application context lifecycle |
2 | Use @Inject to let Micronaut inject the beans into your tests |
You can save time creating a new LocalStack container by sharing it between the tests; see application-test.yml.
|
1.5. Lambda
With AWS Lambda, you can run code without provisioning or managing servers. You pay only for the compute time that you consume; there is no charge when your code is not running.
This library provides support for function invocation using the @LambdaClient introduction.
Installation
implementation 'com.agorapulse:micronaut-amazon-awssdk-lambda:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-amazon-awssdk-lambda</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
implementation 'com.agorapulse:micronaut-aws-sdk-lambda:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-aws-sdk-lambda</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
Configuration
You can configure the function name in the configuration:
aws:
lambda:
functions:
hello: (1)
function: HelloFunction (2)
1 | The name of the configuration to be used with the interface such as @LambdaClient("hello") |
2 | The name of the function to execute |
Invocation using @LambdaClient
If you place the @LambdaClient annotation on an interface then any of its methods will invoke the function. Methods that return void will be invoked with the Event invocation type, i.e. the client won't wait until the invocation is finished.
For AWS SDK 2.x, use packages starting with com.agorapulse.micronaut.amazon.awssdk.lambda .
|
For AWS SDK 1.x, use packages starting with com.agorapulse.micronaut.aws.sdk.lambda .
|
The following example shows a typical Lambda client interface:
package com.agorapulse.micronaut.amazon.awssdk.lambda;
import com.agorapulse.micronaut.amazon.awssdk.lambda.annotation.LambdaClient;
@LambdaClient("hello") (1)
public interface HelloConfigurationClient {
HelloResponse hello(String name); (2)
}
1 | This @LambdaClient will be invoked against the function defined by the aws.lambda.functions.hello.function property |
2 | The function will be invoked with an object containing the property name set to the actual argument |
package com.agorapulse.micronaut.amazon.awssdk.lambda;
import com.agorapulse.micronaut.amazon.awssdk.lambda.annotation.Body;
import com.agorapulse.micronaut.amazon.awssdk.lambda.annotation.LambdaClient;
@LambdaClient(function = "HelloFunction") (1)
public interface HelloBodyClient {
HelloResponse hello(@Body HelloRequest request); (2)
}
1 | You can specify the name of the function directly in the annotation using the function property |
2 | You can use the @Body annotation to use the whole argument object as the payload of the function |
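A method returning void is invoked with the fire-and-forget Event invocation type mentioned above. A minimal sketch (the interface and method names are illustrative):
@LambdaClient("hello")
public interface HelloNotifyClient {
    void notify(String name); (1)
}
1 | Returns void, so the function is invoked with the Event invocation type and the client does not wait for the result |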
Testing
You can very easily create a Lambda function locally with Testcontainers and LocalStack using the micronaut-amazon-awssdk-integration-testing module.
You need to add the following dependencies into your build file:
testImplementation 'com.agorapulse:micronaut-amazon-awssdk-integration-testing:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-amazon-awssdk-integration-testing</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
Then you can set up your tests like this:
package com.agorapulse.micronaut.amazon.awssdk.lambda;
import com.agorapulse.testing.fixt.Fixt;
import io.micronaut.test.extensions.junit5.annotation.MicronautTest;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;
import org.zeroturnaround.zip.ZipUtil;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.Runtime;
import jakarta.inject.Inject;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.util.Collections;
@MicronautTest (1)
public class HelloClientTest {
private static final Fixt FIXT = Fixt.create(HelloClientTest.class); (2)
@TempDir private File tmp;
@Inject private LambdaClient lambda; (3)
@Inject private HelloClient client; (4)
@BeforeEach
public void setupSpec() {
prepareHelloFunction(); (5)
}
@Test
public void invokeFunction() {
HelloResponse result = client.hello("Vlad"); (6)
Assertions.assertEquals("Hello Vlad", result.getMessage()); (7)
}
private void prepareHelloFunction() {
boolean alreadyExists = lambda.listFunctions()
.functions()
.stream()
.anyMatch(fn -> "HelloFunction".equals(fn.functionName()));
if (alreadyExists) {
return;
}
File functionDir = new File(tmp, "HelloFunction");
functionDir.mkdirs();
FIXT.copyTo("HelloFunction", functionDir);
File functionArchive = new File(tmp, "function.zip");
ZipUtil.pack(functionDir, functionArchive);
lambda.createFunction(create -> create.functionName("HelloFunction")
.runtime(Runtime.NODEJS16_X)
.role("HelloRole")
.handler("index.handler")
.environment(e ->
e.variables(Collections.singletonMap("MICRONAUT_ENVIRONMENTS", "itest")) (8)
)
.code(code -> {
try {
InputStream archiveStream = Files.newInputStream(functionArchive.toPath());
SdkBytes archiveBytes = SdkBytes.fromInputStream(archiveStream);
code.zipFile(archiveBytes);
} catch (IOException e) {
throw new IllegalStateException(
"Failed to create function from archive " + functionArchive, e
);
}
})
.build());
}
}
1 | Annotate the specification with @MicronautTest to let Micronaut handle the application context lifecycle |
2 | Fixt is used to organize the function fixture |
3 | The LambdaClient (for v1 AWSLambda ) is populated automatically, pointing to the LocalStack test container |
4 | The function client can be injected as well |
5 | The function is created in Localstack if not present yet |
6 | The function is invoked |
7 | The result of the invocation is compared to the expected value |
8 | Set the Micronaut environment for the AWS Lambda function |
If your Lambda function under test itself integrates with some other AWS services then you need to set them up in LocalStack and set the endpoints correctly to point to the LocalStack mocks; see application-itest.yml.
|
You can save time creating a new LocalStack container by sharing it between the tests; see application-test.yml.
|
1.6. Simple Storage Service (S3)
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
This library provides basic support for Amazon S3 using Simple Storage Service.
Installation
implementation 'com.agorapulse:micronaut-amazon-awssdk-s3:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-amazon-awssdk-s3</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
implementation 'com.agorapulse:micronaut-aws-sdk-s3:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-aws-sdk-s3</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
Configuration
You can store the name of the bucket in the configuration using the aws.s3.bucket property. You can create additional configurations by providing the aws.s3.buckets configuration map.
aws:
s3:
region: sa-east-1
bucket: MyBucket (1)
force-path-style: true (2)
buckets: (3)
test: (4)
bucket: TestBucket
1 | You can define the default bucket for the service |
2 | Force path-style URL usage |
3 | You can define multiple configurations |
4 | Each of the configurations can be accessed using the @Named('test') SimpleStorageService qualifier |
aws:
s3:
region: sa-east-1
bucket: MyBucket (1)
path-style-access-enabled: true (2)
buckets: (3)
test: (4)
bucket: TestBucket
1 | You can define the default bucket for the service |
2 | Force path-style URL usage |
3 | You can define multiple configurations |
4 | Each of the configurations can be accessed using the @Named('test') SimpleStorageService qualifier |
Simple Storage Service
SimpleStorageService provides a middle-level API for managing buckets and uploading and downloading files.
An instance of SimpleStorageService is created for the default S3 configuration and for each bucket configuration in the aws.s3.buckets map.
You should always use the @Named qualifier when injecting SimpleStorageService if you have more than one bucket configuration present, e.g. @Named("test") SimpleStorageService service .
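For example, a minimal sketch of a bean using the named service, assuming the test bucket configuration from the example above:
@Singleton
public class BackupService {
    private final SimpleStorageService testService;
    public BackupService(@Named("test") SimpleStorageService testService) { (1)
        this.testService = testService;
    }
    public void backup(File file) {
        testService.storeFile("backups/" + file.getName(), file); (2)
    }
}
1 | Bound to the aws.s3.buckets.test configuration (the TestBucket bucket) |
2 | Uploads the file into the bucket of the named configuration |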
The following example shows some of the most common use cases for working with S3 buckets.
service.createBucket(MY_BUCKET); (1)
assertTrue(service.listBucketNames().contains(MY_BUCKET)); (2)
1 | Create a new bucket of the given name |
2 | The bucket is present within the list of all bucket names |
File sampleContent = createFileWithSampleContent();
service.storeFile(TEXT_FILE_PATH, sampleContent); (1)
assertTrue(service.exists(TEXT_FILE_PATH)); (2)
Publisher<S3Object> summaries = service.listObjectSummaries("foo"); (3)
assertEquals(Long.valueOf(0L), Flux.from(summaries).count().block());
1 | Upload file |
2 | File is uploaded |
3 | No file is present in the summaries for a different prefix |
File sampleContent = createFileWithSampleContent();
service.storeFile(TEXT_FILE_PATH, sampleContent); (1)
assertTrue(service.exists(TEXT_FILE_PATH)); (2)
Publisher<S3ObjectSummary> summaries = service.listObjectSummaries("foo"); (3)
assertEquals(Long.valueOf(0L), Flux.from(summaries).count().block());
1 | Upload file |
2 | File is uploaded |
3 | No file is present in the summaries for a different prefix |
InputStream (AWS SDK 2.x)
service.storeInputStream( (1)
KEY,
new ByteArrayInputStream(SAMPLE_CONTENT.getBytes()),
metadata -> {
metadata.contentLength((long) SAMPLE_CONTENT.length())
.contentType("text/plain")
.contentDisposition("bar.baz");
}
);
Publisher<S3Object> fooSummaries = service.listObjectSummaries("foo"); (2)
assertEquals(KEY, Flux.from(fooSummaries).blockFirst().key());
1 | Upload data from stream |
2 | Stream is uploaded |
InputStream (AWS SDK 1.x)
service.storeInputStream( (1)
KEY,
new ByteArrayInputStream(SAMPLE_CONTENT.getBytes()),
buildMetadata()
);
Publisher<S3ObjectSummary> fooSummaries = service.listObjectSummaries("foo"); (2)
assertEquals(KEY, Flux.from(fooSummaries).blockFirst().getKey());
1 | Upload data from stream |
2 | Stream is uploaded |
String url = service.generatePresignedUrl(KEY, TOMORROW); (1)
assertEquals(SAMPLE_CONTENT, download(url)); (2)
1 | Generate presigned URL |
2 | Downloaded content corresponds with the expected content |
File file = new File(tmp, "bar.baz"); (1)
service.getFile(KEY, file); (2)
assertTrue(file.exists());
assertEquals(SAMPLE_CONTENT, new String(Files.readAllBytes(Paths.get(file.toURI()))));
1 | Prepare a destination file |
2 | Download the file locally |
service.deleteFile(TEXT_FILE_PATH); (1)
assertFalse(service.exists(TEXT_FILE_PATH)); (2)
1 | Delete file |
2 | The file is no longer present |
service.deleteBucket(); (1)
assertFalse(service.listBucketNames().contains(MY_BUCKET)); (2)
1 | Delete bucket |
2 | The bucket is no longer present |
Please see SimpleStorageService for AWS SDK 2.x or SimpleStorageService for AWS SDK 1.x for the full reference.
Testing
You can very easily mock any of the interfaces and declarative services, but if you need a close-to-production setup, the S3 integration works well with Testcontainers and LocalStack using the micronaut-amazon-awssdk-integration-testing module.
You need to add the following dependencies into your build file:
testImplementation 'com.agorapulse:micronaut-amazon-awssdk-integration-testing:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-amazon-awssdk-integration-testing</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
Then you can set up your tests like this:
@MicronautTest (1)
@Property(name = 'aws.s3.bucket', value = MY_BUCKET) (2)
class SimpleStorageServiceSpec extends Specification {
@Inject SimpleStorageService service (3)
// test methods
}
1 | Annotate the specification with @MicronautTest to let Micronaut handle the application context lifecycle |
2 | Annotate the specification with @Property to set the required Micronaut properties |
3 | Use @Inject to let Micronaut inject the beans into your tests |
@MicronautTest (1)
@Property(name = "aws.s3.bucket", value = SimpleStorageServiceTest.MY_BUCKET) (2)
public class SimpleStorageServiceTest {
@Inject SimpleStorageService service; (3)
// test methods
}
1 | Annotate the specification with @MicronautTest to let Micronaut handle the application context lifecycle |
2 | Annotate the specification with @Property to set the required Micronaut properties |
3 | Use @Inject to let Micronaut inject the beans into your tests |
You can save time creating a new LocalStack container by sharing it between the tests; see application-test.yml.
|
1.7. Simple Email Service (SES)
Amazon Simple Email Service (Amazon SES) is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails. It is a reliable, cost-effective service for businesses of all sizes that use email to keep in contact with their customers.
This library provides basic support for Amazon SES using Simple Email Service.
Installation
implementation 'com.agorapulse:micronaut-amazon-awssdk-ses:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-amazon-awssdk-ses</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
implementation 'com.agorapulse:micronaut-aws-sdk-ses:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-aws-sdk-ses</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
Simple Email Service
SimpleEmailService provides a DSL for creating and sending simple emails with attachments. Like the other services, it uses the default credentials chain to obtain the credentials.
The following example shows how to send an email with an attachment.
EmailDeliveryStatus status = service.send { (1)
subject 'Hi Paul' (2)
from 'subscribe@groovycalamari.com' (3)
to 'me@sergiodelamo.com' (4)
htmlBody '<p>This is an example body</p>' (5)
configurationSetName 'configuration-set' (6)
tags mapOfTags (7)
attachment { (8)
filepath thePath (9)
filename 'test.pdf' (10)
mimeType 'application/pdf' (11)
description 'An example pdf' (12)
}
}
1 | Start building an email |
2 | Define the subject of the email |
3 | Define the from address |
4 | Define one or more recipients |
5 | Define the HTML body (alternatively you can declare a plain text body as well) |
6 | Define the configuration set for the email (https://docs.aws.amazon.com/ses/latest/dg/using-configuration-sets.html) |
7 | Define tags for the email; they will be included in SES events |
8 | Build an attachment |
9 | Define the location of the file to be sent |
10 | Define the file name (optional - deduced from the file) |
11 | Define the mime type (usually optional - deduced from the file) |
12 | Define the description of the file (optional) |
EmailDeliveryStatus status = service.send(e -> (1)
e.subject("Hi Paul") (2)
.from("subscribe@groovycalamari.com") (3)
.to("me@sergiodelamo.com") (4)
.htmlBody("<p>This is an example body</p>") (5)
.configurationSetName("configuration-set") (6)
.tags(mapOfTags) (7)
.attachment(a -> (8)
a.filepath(filepath) (9)
.filename("test.pdf") (10)
.mimeType("application/pdf") (11)
.description("An example pdf") (12)
)
);
1 | Start building an email |
2 | Define the subject of the email |
3 | Define the from address |
4 | Define one or more recipients |
5 | Define the HTML body (alternatively you can declare a plain text body as well) |
6 | Define the configuration set for the email (https://docs.aws.amazon.com/ses/latest/dg/using-configuration-sets.html) |
7 | Define tags for the email; they will be included in SES events |
8 | Build an attachment |
9 | Define the location of the file to be sent |
10 | Define the file name (optional - deduced from the file) |
11 | Define the mime type (usually optional - deduced from the file) |
12 | Define the description of the file (optional) |
Please see SimpleEmailService for AWS SDK 2.x or SimpleEmailService for AWS SDK 1.x for the full reference.
1.8. Simple Notification Service (SNS)
Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.
This library provides two approaches to work with Simple Notification Service topics:
-
High-level Publishing with
@NotificationClient
-
Middle-level Simple Notification Service
Installation
annotationProcessor 'com.agorapulse:micronaut-amazon-awssdk-sns-annotation-processor:2.1.11-micronaut-3.0'
implementation 'com.agorapulse:micronaut-amazon-awssdk-sns:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-amazon-awssdk-sns</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
annotationProcessor 'com.agorapulse:micronaut-aws-sdk-sns-annotation-processor:2.1.11-micronaut-3.0'
implementation 'com.agorapulse:micronaut-aws-sdk-sns:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-aws-sdk-sns</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
For Kotlin, use kapt instead of the annotationProcessor configuration.
|
Configuration
No configuration is required, but some of the configuration properties may be useful for you.
aws:
sns:
region: sa-east-1
topic: MyTopic (1)
ios:
arn: 'arn:aws:sns:eu-west-1:123456789:app/APNS/my-ios-app' (2)
android:
arn: 'arn:aws:sns:eu-west-1:123456789:app/GCM/my-android-app' (3)
amazon:
arn: 'arn:aws:sns:eu-west-1:123456789:app/ADM/my-amazon-app' (4)
topics: (5)
test: (6)
topic: TestTopic
1 | You can specify the default topic for SimpleNotificationService and @NotificationClient |
2 | Amazon Resource Name for the iOS application mobile push |
3 | Amazon Resource Name for the Android application mobile push |
4 | Amazon Resource Name for the Amazon application mobile push |
5 | You can define multiple configurations |
6 | Each of the configurations can be accessed using the @Named('test') SimpleNotificationService qualifier, or you can define the configuration as the value of @NotificationClient('test') |
Publishing with @NotificationClient
If you place the @NotificationClient annotation on an interface then methods matching the predefined patterns will be automatically implemented. Methods containing the word sms will send text messages. Other methods of the @NotificationClient interface will publish new messages into the topic.
For AWS SDK 2.x, use packages starting with com.agorapulse.micronaut.amazon.awssdk.sns .
|
For AWS SDK 1.x, use packages starting with com.agorapulse.micronaut.aws.sdk.sns .
|
The following example shows many of the available method signatures for publishing records:
package com.agorapulse.micronaut.amazon.awssdk.sns;
import com.agorapulse.micronaut.amazon.awssdk.sns.annotation.MessageDeduplicationId;
import com.agorapulse.micronaut.amazon.awssdk.sns.annotation.MessageGroupId;
import com.agorapulse.micronaut.amazon.awssdk.sns.annotation.NotificationClient;
import com.agorapulse.micronaut.amazon.awssdk.sns.annotation.Topic;
import java.util.Map;
@NotificationClient (1)
interface DefaultClient {
String OTHER_TOPIC = "OtherTopic";
@Topic("OtherTopic") String publishMessageToDifferentTopic(Pogo pogo); (2)
String publishMessage(Pogo message); (3)
String publishMessage(Pogo message, @MessageGroupId String groupId, @MessageDeduplicationId String deduplicationId); (4)
String publishMessage(String subject, Pogo message); (5)
String publishMessage(String subject, Pogo message, Map<String, String> attributes);
String publishMessage(String message); (6)
String publishMessage(String subject, String message);
String publishMessage(String subject, String message, Map<String, String> attributes);
String sendSMS(String phoneNumber, String message); (7)
String sendSms(String phoneNumber, String message, Map attributes); (8)
}
1 | @NotificationClient annotation makes the interface an SNS client |
2 | You can specify the topic to which the message is published using the @Topic annotation |
3 | You can publish any object which can be converted into JSON |
4 | For FIFO topics, the @MessageGroupId and @MessageDeduplicationId annotations can be added to method parameters to forward these attributes when publishing |
5 | You can add an additional subject to the published message (only useful for a few protocols, e.g. email) |
6 | You can publish a string message |
7 | You can send an SMS using the word sms in the name of the method. One argument must be the phone number and its name must contain the word number |
8 | You can provide additional attributes for the SMS message |
package com.agorapulse.micronaut.aws.sns;
import com.agorapulse.micronaut.aws.sns.annotation.MessageDeduplicationId;
import com.agorapulse.micronaut.aws.sns.annotation.MessageGroupId;
import com.agorapulse.micronaut.aws.sns.annotation.NotificationClient;
import com.agorapulse.micronaut.aws.sns.annotation.Topic;
import java.util.Map;
@NotificationClient (1)
interface DefaultClient {
String OTHER_TOPIC = "OtherTopic";
@Topic("OtherTopic") String publishMessageToDifferentTopic(Pogo pogo); (2)
String publishMessage(Pogo message); (3)
String publishMessage(Pogo message, @MessageGroupId String groupId, @MessageDeduplicationId String deduplicationId); (4)
String publishMessage(String subject, Pogo message); (5)
String publishMessage(String subject, Pogo message, Map<String, String> attributes);
String publishMessage(String message); (6)
String publishMessage(String subject, String message);
String publishMessage(String subject, String message, Map<String, String> attributes);
String sendSMS(String phoneNumber, String message); (7)
String sendSms(String phoneNumber, String message, Map attributes); (8)
}
1 | @NotificationClient annotation makes the interface an SNS client |
2 | You can specify the topic to which the message is published using the @Topic annotation |
3 | You can publish any object which can be converted into JSON |
4 | For FIFO topics, the @MessageGroupId and @MessageDeduplicationId annotations can be added to method parameters to forward these attributes when publishing |
5 | You can add an additional subject to the published message (only useful for a few protocols, e.g. email) |
6 | You can publish a string message |
7 | You can send an SMS using the word sms in the name of the method. One argument must be the phone number and its name must contain the word number |
8 | You can provide additional attributes for the SMS message |
The return value of the methods is the message ID returned by AWS. |
You can add Map<String, String> attributes argument to send message attributes.
|
By default, NotificationClient publishes messages into the default topic defined by the aws.sns.topic property.
You can switch to a different configuration by changing the value of the annotation, such as @NotificationClient("other") , or by setting the topic property of the annotation, such as @NotificationClient(topic = "SomeTopic") . You can change the topic used by a particular method using the @Topic annotation as mentioned above.
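For illustration, here is a minimal sketch of both options, assuming a "test" topic configuration exists under aws.sns.topics.test:
@NotificationClient("test") (1)
interface TestTopicClient {
    String publishMessage(String message);
}
@NotificationClient(topic = "SomeTopic") (2)
interface SomeTopicClient {
    String publishMessage(String message);
}
1 | Publishes to the topic defined by the aws.sns.topics.test.topic property |
2 | Keeps the default configuration but publishes to the SomeTopic topic |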
Simple Notification Service
SimpleNotificationService provides a middle-level API for creating, describing, and deleting topics. You can manage applications, endpoints and devices, and you can send messages and notifications.
An instance of SimpleNotificationService is created for the default SNS configuration and for each topic configuration in the aws.sns.topics map.
You should always use the @Named qualifier when injecting SimpleNotificationService if you have more than one topic configuration present, e.g. @Named("other") SimpleNotificationService otherService .
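For example, a minimal sketch of a bean using the named service, assuming a "test" topic configuration is present (publishMessageToTopic is shown in the examples below):
@Singleton
public class TestTopicNotifier {
    private final SimpleNotificationService service;
    public TestTopicNotifier(@Named("test") SimpleNotificationService service) { (1)
        this.service = service;
    }
    public String notify(String topicArn, String subject, String message) {
        return service.publishMessageToTopic(topicArn, subject, message); (2)
    }
}
1 | Bound to the aws.sns.topics.test configuration |
2 | Publishes the message and returns the message ID |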
The following example shows some of the most common use cases for working with Amazon SNS.
Working with Topics
String topicArn = service.createTopic(TEST_TOPIC); (1)
Topic found = Flux.from(service.listTopics()).filter(t -> (2)
t.topicArn().endsWith(TEST_TOPIC)
).blockFirst();
1 | Create a new topic of the given name |
2 | The topic is present within the list of all topics |
String topicArn = service.createTopic(TEST_TOPIC); (1)
Topic found = Flux.from(service.listTopics()).filter(t -> (2)
t.getTopicArn().endsWith(TEST_TOPIC)
).blockFirst();
1 | Create a new topic of the given name |
2 | The topic is present within the list of all topics |
String subArn = service.subscribeTopicWithEmail(topicArn, EMAIL); (1)
String messageId = service.publishMessageToTopic( (2)
topicArn,
"Test Email",
"Hello World"
);
service.unsubscribeTopic(subArn); (3)
1 | Subscribe to the topic with an email (there are more variants of this method to subscribe with the most common protocols such as HTTP(S) endpoints, SQS, …) |
2 | Publish message to the topic |
3 | Use the subscription ARN to unsubscribe from the topic |
service.deleteTopic(topicArn); (1)
Long zero = Flux.from(service.listTopics()).filter(t -> (2)
t.topicArn().endsWith(TEST_TOPIC)
).count().block();
1 | Delete the topic |
2 | The topic is no longer present within the list of all topics |
service.deleteTopic(topicArn); (1)
Long zero = Flux.from(service.listTopics()).filter(t -> (2)
t.getTopicArn().endsWith(TEST_TOPIC)
).count().block();
1 | Delete the topic |
2 | The topic is no longer present within the list of all topics |
Working with Applications
String appArn = service.createPlatformApplication( (1)
"my-app",
APNS,
null,
API_KEY
);
String endpoint = service.createPlatformEndpoint(appArn, DEVICE_TOKEN, DATA); (2)
String jsonMessage = "{\"data\": {\"foo\": \"some bar\"}, \"notification\": {\"title\": \"some title\", \"body\": \"some body\"}}";
String msgId = service.sendNotification(endpoint, APNS, jsonMessage); (3)
service.validateDeviceToken(appArn, endpoint, DEVICE_TOKEN, DATA); (4)
service.unregisterDevice(endpoint); (5)
1 | Create a new iOS application (more platforms available) |
2 | Register an iOS device (more platforms available) |
3 | Send an iOS notification (more platforms available) |
4 | Validate the iOS device token |
5 | Unregister the device |
Sending SMS
Map<String, MessageAttributeValue> attrs = Collections.emptyMap();
String msgId = service.sendSMSMessage(PHONE_NUMBER, "Hello World", attrs); (1)
1 | Send a message to the phone number |
Map<Object, Object> attrs = Collections.emptyMap();
String msgId = service.sendSMSMessage(PHONE_NUMBER, "Hello World", attrs); (1)
1 | Send a message to the phone number |
Please see SimpleNotificationService for AWS SDK 2.x or SimpleNotificationService for AWS SDK 1.x for the full reference.
Testing
You can very easily mock any of the interfaces and declarative services, but if you need a close-to-production setup, the SNS integration works well with Testcontainers and LocalStack using the micronaut-amazon-awssdk-integration-testing module.
You need to add the following dependencies into your build file:
testImplementation 'com.agorapulse:micronaut-amazon-awssdk-integration-testing:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-amazon-awssdk-integration-testing</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
Then you can set up your tests like this:
@MicronautTest (1)
@Property(name = 'aws.sns.topic', value = TEST_TOPIC) (2)
@Property(name = 'aws.sns.ios.arn', value = IOS_APP_ARN)
@Property(name = 'aws.sns.ios-sandbox.arn', value = IOS_SANDBOX_APP_ARN)
@Property(name = 'aws.sns.android.arn', value = ANDROID_APP_ARN)
@Property(name = 'aws.sns.amazon.arn', value = AMAZON_APP_ARN)
class SimpleNotificationServiceSpec extends Specification {
@Inject SimpleNotificationService service (3)
// tests
}
1 | Annotate the specification with @MicronautTest to let Micronaut handle the application context lifecycle |
2 | Annotate the specification with @Property to set the required Micronaut properties |
3 | Use @Inject to let Micronaut inject the beans into your tests |
@MicronautTest (1)
@Property(name = "aws.sns.topic", value = SimpleNotificationServiceTest.TEST_TOPIC) (2)
public class SimpleNotificationServiceTest {
@Inject SimpleNotificationService service; (3)
// tests
}
1 | Annotate the specification with @MicronautTest to let Micronaut handle the application context lifecycle |
2 | Annotate the specification with @Property to set the required Micronaut properties |
3 | Use @Inject to let Micronaut inject the beans into your tests |
You can save time creating a new LocalStack container by sharing it between the tests; see application-test.yml.
|
1.9. Simple Queue Service (SQS)
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work.
This library provides two approaches to work with Simple Queue Service queues:
-
High-level Publishing with
@QueueClient
-
Middle-level Simple Queue Service
Installation
annotationProcessor 'com.agorapulse:micronaut-amazon-awssdk-sqs-annotation-processor:2.1.11-micronaut-3.0'
implementation 'com.agorapulse:micronaut-amazon-awssdk-sqs:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-amazon-awssdk-sqs</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
annotationProcessor 'com.agorapulse:micronaut-aws-sdk-sqs-annotation-processor:2.1.11-micronaut-3.0'
implementation 'com.agorapulse:micronaut-aws-sdk-sqs:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-aws-sdk-sqs</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
For Kotlin, use kapt instead of the annotationProcessor configuration.
|
Configuration
No configuration is required, but some of the configuration properties may be useful for you.
aws:
sqs:
region: sa-east-1
# related to service behaviour
queueNamePrefix: 'vlad_' (1)
autoCreateQueue: false (2)
cache: false (3)
# related to default queue
queue: MyQueue (4)
fifo: true (5)
delaySeconds: 0 (6)
messageRetentionPeriod: 345600 (7)
maximumMessageSize: 262144 (8)
visibilityTimeout: 30 (9)
queues: (10)
test: (11)
queue: TestQueue
1 | The queue prefix is prepended to every queue name (may be useful for local development) |
2 | Whether to create any missing queue automatically (default false ) |
3 | Whether to fetch all queues and populate the queue-name-to-URL cache the first time the service is asked for a queue URL (default false ) |
4 | You can specify the default queue for SimpleQueueService and @QueueClient |
5 | Whether the newly created queues are supposed to be FIFO queues (default false ) |
6 | The length of time, in seconds, for which the delivery of all messages in the queue is delayed. Valid values: An integer from 0 to 900 (15 minutes). Default: 0 . |
7 | The length of time, in seconds, for which Amazon SQS retains a message. Valid values: An integer representing seconds, from 60 (1 minute) to 1,209,600 (14 days). Default: 345,600 (4 days). |
8 | The limit of how many bytes a message can contain before Amazon SQS rejects it. Valid values: An integer from 1,024 bytes (1 KiB) up to 262,144 bytes (256 KiB). Default: 262,144 (256 KiB). |
9 | The visibility timeout for the queue, in seconds. Valid values: an integer from 0 to 43,200 (12 hours). Default: 30 . |
10 | You can define multiple configurations |
11 | Each of the configurations can be accessed using the @Named('test') SimpleQueueService qualifier, or you can define the configuration as the value of @QueueClient('test') |
Publishing with @QueueClient
If you place the @QueueClient annotation on an interface then methods matching the predefined patterns will be automatically implemented. Methods containing the word delete will delete queue messages. Other methods of the @QueueClient interface will publish new records into the queue.
For AWS SDK 2.x, use packages starting with com.agorapulse.micronaut.amazon.awssdk.sqs .
|
For AWS SDK 1.x, use packages starting with com.agorapulse.micronaut.aws.sdk.sqs .
|
The following example shows many of the available method signatures for publishing records:
package com.agorapulse.micronaut.amazon.awssdk.sqs;
import com.agorapulse.micronaut.amazon.awssdk.sqs.annotation.Queue;
import com.agorapulse.micronaut.amazon.awssdk.sqs.annotation.QueueClient;
@QueueClient (1)
interface DefaultClient {
@Queue(value = "OtherQueue", group = "SomeGroup")
String sendMessageToQueue(String message); (2)
String sendMessage(Pogo message); (3)
String sendMessage(byte[] record); (4)
String sendMessage(String record); (5)
String sendMessage(String record, int delay); (6)
String sendMessage(String record, String group); (7)
String sendMessage(String record, int delay, String group); (8)
void deleteMessage(String messageId); (9)
String OTHER_QUEUE = "OtherQueue";
}
1 | @QueueClient annotation makes the interface an SQS client |
2 | You can specify the queue to which the message is published using the @Queue annotation; you can also specify the group for FIFO queues |
3 | You can publish any record object which can be converted into JSON. |
4 | You can publish a byte array record |
5 | You can publish a string record |
6 | You can publish a string with custom delay |
7 | You can publish a string with custom FIFO queue group |
8 | You can publish a string with custom delay and FIFO queue group |
9 | You can delete a published message using the message ID if the method's name contains the word delete |
package com.agorapulse.micronaut.aws.sqs;
import com.agorapulse.micronaut.aws.sqs.annotation.Queue;
import com.agorapulse.micronaut.aws.sqs.annotation.QueueClient;
@QueueClient (1)
interface DefaultClient {
@Queue(value = "OtherQueue", group = "SomeGroup")
String sendMessageToQueue(String message); (2)
String sendMessage(Pogo message); (3)
String sendMessage(byte[] record); (4)
String sendMessage(String record); (5)
String sendMessage(String record, int delay); (6)
String sendMessage(String record, String group); (7)
String sendMessage(String record, int delay, String group); (8)
void deleteMessage(String messageId); (9)
String OTHER_QUEUE = "OtherQueue";
}
1 | @QueueClient annotation makes the interface an SQS client |
2 | You can specify the queue to which the message is published using the @Queue annotation; you can also specify the group for FIFO queues |
3 | You can publish any record object which can be converted into JSON. |
4 | You can publish a byte array record |
5 | You can publish a string record |
6 | You can publish a string with custom delay |
7 | You can publish a string with custom FIFO queue group |
8 | You can publish a string with custom delay and FIFO queue group |
9 | You can delete a published message using the message ID if the method's name contains the word delete |
The return value of the publishing methods is the message ID returned by AWS. |
By default, QueueClient publishes records into the default queue defined by the aws.sqs.queue property.
You can switch to a different configuration by changing the value of the annotation, such as @QueueClient("other") , or by setting the queue property of the annotation, such as @QueueClient(queue = "SomeQueue") . You can change the queue used by a particular method using the @Queue annotation as mentioned above.
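For illustration, here is a minimal sketch of both options, assuming a "test" queue configuration exists under aws.sqs.queues.test:
@QueueClient("test") (1)
interface TestQueueClient {
    String sendMessage(String message);
}
@QueueClient(queue = "SomeQueue") (2)
interface SomeQueueClient {
    String sendMessage(String message);
}
1 | Publishes to the queue defined by the aws.sqs.queues.test.queue property |
2 | Keeps the default configuration but publishes to the SomeQueue queue |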
Simple Queue Service
SimpleQueueService provides a middle-level API for creating, describing, and deleting queues. It allows you to publish, receive, and delete records.
An instance of SimpleQueueService is created for the default SQS configuration and for each queue configuration in the aws.sqs.queues map.
You should always use the @Named qualifier when injecting SimpleQueueService if you have more than one queue configuration present, e.g. @Named("other") SimpleQueueService otherService .
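For example, a minimal sketch of a bean using the named service, assuming an "other" queue configuration is present:
@Singleton
public class ReportQueue {
    private final SimpleQueueService service;
    public ReportQueue(@Named("other") SimpleQueueService service) { (1)
        this.service = service;
    }
    public String enqueue(String body) {
        return service.sendMessage(body); (2)
    }
}
1 | Bound to the aws.sqs.queues.other configuration |
2 | Sends the message and returns the message ID, as shown in the examples below |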
The following example shows some of the most common use cases for working with Amazon SQS.
String queueUrl = service.createQueue(TEST_QUEUE); (1)
assertTrue(service.listQueueUrls().contains(queueUrl)); (2)
1 | Create a new queue of the given name |
2 | The queue URL is present within the list of all queues' URLs |
Map<QueueAttributeName, String> queueAttributes = service
.getQueueAttributes(TEST_QUEUE); (1)
assertEquals("0", queueAttributes
.get(QueueAttributeName.DELAY_SECONDS)); (2)
1 | Fetch queue’s attributes |
2 | You can read the queue’s attributes from the map |
Map<String, String> queueAttributes = service.getQueueAttributes(TEST_QUEUE); (1)
assertEquals("0", queueAttributes.get("DelaySeconds")); (2)
1 | Fetch queue’s attributes |
2 | You can read the queue’s attributes from the map |
service.deleteQueue(TEST_QUEUE); (1)
assertFalse(service.listQueueUrls().contains(queueUrl)); (2)
1 | Delete queue |
2 | The queue URL is no longer present within the list of all queues' URLs |
String msgId = service.sendMessage(DATA); (1)
assertNotNull(msgId);
List<Message> messages = service.receiveMessages(); (2)
Message first = messages.get(0);
assertEquals(DATA, first.body()); (3)
assertEquals(msgId, first.messageId());
assertEquals(1, messages.size());
service.deleteMessage(first.receiptHandle()); (4)
1 | Send a message |
2 | Receive messages from the queue (in another application) |
3 | Read message body |
4 | Developers are responsible for deleting the message from the queue themselves |
String msgId = service.sendMessage(DATA); (1)
assertNotNull(msgId);
List<Message> messages = service.receiveMessages(); (2)
Message first = messages.get(0);
assertEquals(DATA, first.getBody()); (3)
assertEquals(msgId, first.getMessageId());
assertEquals(1, messages.size());
service.deleteMessage(first.getReceiptHandle()); (4)
1 | Send a message |
2 | Receive messages from the queue (in another application) |
3 | Read message body |
4 | Developers are responsible for deleting the message from the queue themselves |
Try to use AWS Lambda functions triggered by SQS messages to handle incoming SQS messages instead of implementing complex message handling logic yourself. |
Please see SimpleQueueService for AWS SDK 2.x or SimpleQueueService for AWS SDK 1.x for the full reference.
Testing
You can very easily mock any of the interfaces and declarative services, but if you need a close-to-production setup, the SQS integration works well with Testcontainers and LocalStack using the micronaut-amazon-awssdk-integration-testing module.
You need to add the following dependencies into your build file:
testImplementation 'com.agorapulse:micronaut-amazon-awssdk-integration-testing:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-amazon-awssdk-integration-testing</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
Then you can set up your tests like this:
@MicronautTest (1)
@Property(name = 'aws.sqs.queue', value = TEST_QUEUE) (2)
class SimpleQueueServiceSpec extends Specification {
@Inject SimpleQueueService service (3)
// tests
}
1 | Annotate the specification with @MicronautTest to let Micronaut handle the application context lifecycle |
2 | Annotate the specification with @Property to set the required Micronaut properties |
3 | Use @Inject to let Micronaut inject the beans into your tests |
@MicronautTest (1)
@Property(name = "aws.sqs.queue", value = SimpleQueueServiceTest.TEST_QUEUE) (2)
public class SimpleQueueServiceTest {
@Inject SimpleQueueService service; (3)
// tests
}
1 | Annotate the specification with @MicronautTest to let Micronaut handle the application context lifecycle |
2 | Annotate the specification with @Property to set the required Micronaut properties |
3 | Use @Inject to let Micronaut inject the beans into your tests |
You can save time creating a new LocalStack container by sharing it between the tests; see application-test.yml.
|
1.10. Security Token Service (STS)
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users).
This library provides basic support for Amazon STS using Security Token Service.
Installation
implementation 'com.agorapulse:micronaut-amazon-awssdk-sts:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-amazon-awssdk-sts</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
implementation 'com.agorapulse:micronaut-aws-sdk-sts:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-aws-sdk-sts</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
Security Token Service
SecurityTokenService provides only one method (with multiple variations) to create credentials that assume a certain IAM role.
The following example shows how to create credentials for an assumed role.
service.assumeRole('session', 'arn:::my-role', 360) {
externalId '123456789'
}
service.assumeRole('session', 'arn:::my-role', 360) {
externalId = '123456789'
}
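A Java variant might look like the following sketch. That the last parameter accepts a customizer applied to the request builder is an assumption mirroring the Groovy closures above; check the SecurityTokenService interface for the exact signature:
service.assumeRole(
    "session", // session name
    "arn:::my-role", // role ARN
    360, // duration in seconds
    request -> request.externalId("123456789") // assumed: customizer applied to the request builder
);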
Please see SecurityTokenService for AWS SDK 2.x or SecurityTokenService for AWS SDK 1.x for the full reference.
1.11. WebSockets for API Gateway
In a WebSocket API, the client and the server can both send messages to each other at any time. Backend servers can easily push data to connected users and devices, avoiding the need to implement complex polling mechanisms.
This library provides components for easy handling of incoming WebSocket proxied events as well as for sending messages back to the clients.
Use this part of the library with caution. Using WebSockets with API Gateway can be very expensive. |
Installation
implementation 'com.agorapulse:micronaut-aws-sdk-ag-ws:2.1.11-micronaut-3.0'
<dependency>
<groupId>com.agorapulse</groupId>
<artifactId>micronaut-aws-sdk-ag-ws</artifactId>
<version>2.1.11-micronaut-3.0</version>
</dependency>
Configuration
No configuration is required but some of the configuration properties may be useful for you.
aws:
websocket:
region: sa-east-1
connections:
url: https://abcefgh.execute-api.eu-west-1.amazonaws.com/test/@connections (1)
# Java Only
micronaut:
function:
name: lambda-echo-java (2)
1 | You can specify the default connections URL for MessageSender |
2 | If you are creating Java functions don’t forget to specify the function’s name for deployments |
The MessageSender bean is only present in the context if the aws.websocket.connections.url configuration property is present. Use MessageSenderFactory if you want to create a MessageSender manually using a URL which is not predefined.
|
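If you do need a sender for a URL which is not predefined, a minimal sketch might look like this; that the factory accepts a plain connections URL is an assumption based on the note above:
MessageSender sender = factory.create("https://abcefgh.execute-api.eu-west-1.amazonaws.com/test/@connections"); // assumed: factory accepts a connections URL
sender.send(connectionId, "Hello!"); // send(String, String) as used in the examples below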
Usage
The AWS SDK Lambda Events library does not contain events dedicated to the WebSocket API Gateway yet. You can use WebSocketConnectionRequest as an argument to the function handling connection and disconnection of the WebSocket, and WebSocketRequest for handling incoming messages.
The following examples assume that you have created the function using the mn create-function command.
The simplest example is an echo function which handles all the incoming events, replies to the incoming messages and also publishes to SNS:
package com.agorapulse.micronaut.aws.apigateway.ws
import com.agorapulse.micronaut.aws.apigateway.ws.event.EventType
import com.agorapulse.micronaut.aws.apigateway.ws.event.WebSocketRequest
import com.agorapulse.micronaut.aws.apigateway.ws.event.WebSocketResponse
import groovy.transform.Field
import jakarta.inject.Inject
@Inject @Field MessageSenderFactory factory (1)
@Inject @Field TestTopicPublisher publisher (2)
WebSocketResponse lambdaEcho(WebSocketRequest event) { (3)
MessageSender sender = factory.create(event.requestContext) (4)
String connectionId = event.requestContext.connectionId (5)
switch (event.requestContext.eventType) {
case EventType.CONNECT: (6)
// do nothing
break
case EventType.MESSAGE: (7)
String message = "[$connectionId] ${event.body}"
sender.send(connectionId, message)
publisher.publishMessage(connectionId, message)
break
case EventType.DISCONNECT: (8)
// do nothing
break
}
return WebSocketResponse.OK (9)
}
1 | Factory to create MessageSender if we want to reply to the message immediately |
2 | Service to publish to SNS to forward the message |
3 | WebSocketRequest can handle any incoming event |
4 | Create MessageSender for current client |
5 | connectionId is the unique identifier of the client |
6 | The CONNECT event signals that a new client has been connected |
7 | The MESSAGE event signals a new incoming message |
8 | The DISCONNECT event signals that a client has been disconnected |
9 | The method must always return WebSocketResponse.OK to signal success |
package com.agorapulse.micronaut.aws.apigateway.ws;
import com.agorapulse.micronaut.aws.apigateway.ws.event.WebSocketRequest;
import com.agorapulse.micronaut.aws.apigateway.ws.event.WebSocketResponse;
import io.micronaut.function.FunctionBean;
import java.util.function.Function;
@FunctionBean("lambda-echo-java")
public class LambdaEchoJava implements Function<WebSocketRequest, WebSocketResponse> {
private final MessageSenderFactory factory; (1)
private final TestTopicPublisher publisher; (2)
public LambdaEchoJava(MessageSenderFactory factory, TestTopicPublisher publisher) {
this.factory = factory;
this.publisher = publisher;
}
@Override
public WebSocketResponse apply(WebSocketRequest event) { (3)
MessageSender sender = factory.create(event.getRequestContext()); (4)
String connectionId = event.getRequestContext().getConnectionId(); (5)
switch (event.getRequestContext().getEventType()) {
case CONNECT: (6)
// do nothing
break;
case MESSAGE: (7)
String message = "[" + connectionId + "] " + event.getBody();
sender.send(connectionId, message);
publisher.publishMessage(connectionId, message);
break;
case DISCONNECT: (8)
// do nothing
break;
}
return WebSocketResponse.OK; (9)
}
}
1 | Factory to create MessageSender if we want to reply to the message immediately |
2 | Service to publish to SNS to forward the message |
3 | WebSocketRequest can handle any incoming event |
4 | Create MessageSender for current client |
5 | connectionId is the unique identifier of the client |
6 | The CONNECT event signals that a new client has been connected |
7 | The MESSAGE event signals a new incoming message |
8 | The DISCONNECT event signals that a client has been disconnected |
9 | The method must always return WebSocketResponse.OK to signal success |
Once the function is ready, you can deploy it to AWS Lambda and set up a new API Gateway with the WebSocket API.
Another example is a simple AWS Lambda function which reacts to any of the events supported by AWS Lambda and pushes messages to WebSocket clients.
There is no support for routing at the moment, but you can get the matched route from
event.requestContext.routeKey .
|
package com.agorapulse.micronaut.aws.apigateway.ws
import com.amazonaws.AmazonClientException
import com.amazonaws.services.lambda.runtime.events.SNSEvent
import groovy.transform.Field
import jakarta.inject.Inject
@Inject @Field MessageSender sender (1)
void notify(SNSEvent event) { (2)
event.records.each {
try {
sender.send(it.SNS.subject, "[SNS] $it.SNS.message") (3)
} catch (AmazonClientException ignored) {
// can be gone (4)
}
}
}
1 | MessageSender can be injected if you specify the aws.websocket.connections.url configuration property |
2 | You can, for example, react on records published into Simple Notification Service |
3 | Send a message to the client (in the previous example the connectionId was set to the subject of the SNS record) |
4 | If the client is already disconnected then AmazonClientException may occur |
package com.agorapulse.micronaut.aws.apigateway.ws;
import com.amazonaws.AmazonClientException;
import com.amazonaws.services.lambda.runtime.events.SNSEvent;
import io.micronaut.function.FunctionBean;
import java.util.function.Consumer;
@FunctionBean("notification-handler")
public class NotificationHandler implements Consumer<SNSEvent> {
private final MessageSender sender; (1)
public NotificationHandler(MessageSender sender) {
this.sender = sender;
}
@Override
public void accept(SNSEvent event) { (2)
event.getRecords().forEach(it -> {
try {
String connectionId = it.getSNS().getSubject();
String payload = "[SNS] " + it.getSNS().getMessage();
sender.send(connectionId, payload); (3)
} catch (AmazonClientException ignored) {
// can be gone (4)
}
});
}
}
1 | MessageSender can be injected if you specify the aws.websocket.connections.url configuration property |
2 | You can, for example, react on records published into Simple Notification Service |
3 | Send a message to the client (in the previous example the connectionId was set to the subject of the SNS record) |
4 | If the client is already disconnected then AmazonClientException may occur |
If you want to publish to the WebSockets using MessageSender, your Lambda function’s role must have the following permissions (preferably constrained to just your API resource):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "execute-api:*",
"Resource": "*"
}
]
}
Testing
You can very easily mock any of the interfaces. Create the request event manually and follow the guide on testing functions with Micronaut.
1.12. Configuration
See the configuration sections for particular services.
The following services support configuring region and endpoint :
-
CloudWatch
-
DynamoDB
-
Lambda
-
Kinesis
-
S3
-
SES
-
SNS
-
SQS
-
STS
For example, to configure the region for DynamoDB you can add the following settings:
aws:
dynamodb:
region: us-east-1
endpoint: http://localhost:8000
2. Micronaut for API Gateway Proxy
API Gateway Lambda Proxy support for Micronaut has been replaced by the official Micronaut AWS API Gateway support.
3. Micronaut Grails
The Micronaut Grails package has been moved into its own repository.