
Set of useful libraries for Micronaut. All the libraries are available in the JCenter Maven repository.

1. AWS SDK for Micronaut

AWS SDK for Micronaut is a successor to the Grails AWS SDK Plugin. If you are a Grails AWS SDK Plugin user, you should find many of the services familiar.

Provided integrations:

Micronaut for API Gateway Proxy is handled separately in its own library.

Key concepts of the AWS SDK for Micronaut:

  • Fully leveraging Micronaut best practices

    • Low-level API clients such as AmazonDynamoDB available for dependency injection

    • Declarative clients and services such as @KinesisListener where applicable

    • Configuration driven named service beans

    • Sensible defaults

    • Conditional beans based on presence of classes on the classpath or on the presence of specific properties

  • Fully leveraging existing AWS SDK configuration chains (e.g. default credential provider chain, default region provider chain)

  • Strong focus on the ease of testing

    • Low-level API clients such as AmazonDynamoDB injected by Micronaut and overridable in the tests

    • All high-level services hidden behind an interface for easy mocking in the tests

    • Declarative clients and services for easy mocking in the tests

  • Java-enabled but Groovy is a first-class citizen

In this documentation, the high-level approaches are discussed before the lower-level services.

1.1. Installation

Since version 1.2.8, see the particular subprojects for installation instructions.

1.2. DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.

This library provides two approaches to working with DynamoDB tables and entities:

Installation
Gradle
compile 'com.agorapulse:micronaut-aws-sdk-dynamodb:1.3.0.1'
Maven
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-dynamodb</artifactId>
    <version>1.3.0.1</version>
</dependency>
Declarative Services with @Service

Declarative services are very similar to Grails GORM Data Services. If you place the com.agorapulse.micronaut.aws.dynamodb.annotation.Service annotation on an interface, then methods matching a predefined pattern will be automatically implemented.

Method Signatures

The following example shows many of the available method signatures:

Groovy
@Service(DynamoDBEntity)
interface DynamoDBItemDBService {

    DynamoDBEntity get(String hash, String rangeKey)
    DynamoDBEntity load(String hash, String rangeKey)
    List<DynamoDBEntity> getAll(String hash, List<String> rangeKeys)
    List<DynamoDBEntity> getAll(String hash, String... rangeKeys)
    List<DynamoDBEntity> loadAll(String hash, List<String> rangeKeys)
    List<DynamoDBEntity> loadAll(String hash, String... rangeKeys)

    DynamoDBEntity save(DynamoDBEntity entity)
    List<DynamoDBEntity> saveAll(DynamoDBEntity... entities)
    List<DynamoDBEntity> saveAll(Iterable<DynamoDBEntity> entities)

    int count(String hashKey)
    int count(String hashKey, String rangeKey)

    @Query({
        query(DynamoDBEntity) {
            hash hashKey
            range {
                eq DynamoDBEntity.RANGE_INDEX, rangeKey
            }
        }
    })
    int countByRangeIndex(String hashKey, String rangeKey)

    @Query({
        query(DynamoDBEntity) {
            hash hashKey
            range { between DynamoDBEntity.DATE_INDEX, after, before }
        }
    })
    int countByDates(String hashKey, Date after, Date before)

    Flowable<DynamoDBEntity> query(String hashKey)
    Flowable<DynamoDBEntity> query(String hashKey, String rangeKey)

    @Query({
        query(DynamoDBEntity) {
            hash hashKey
            range {
                eq DynamoDBEntity.RANGE_INDEX, rangeKey
            }
            only {
                rangeIndex
            }
        }
    })
    Flowable<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey)

    @Query({
        query(DynamoDBEntity) {
            hash hashKey
            range { between DynamoDBEntity.DATE_INDEX, after, before }
        }
    })
    List<DynamoDBEntity> queryByDates(String hashKey, Date after, Date before)

    void delete(DynamoDBEntity entity)
    void delete(String hashKey, String rangeKey)

    @Query({
        query(DynamoDBEntity) {
            hash hashKey
            range {
                eq DynamoDBEntity.RANGE_INDEX, rangeKey
            }
        }
    })
    int deleteByRangeIndex(String hashKey, String rangeKey)

    @Query({
        query(DynamoDBEntity) {
            hash hashKey
            range { between DynamoDBEntity.DATE_INDEX, after, before }
        }
    })
    int deleteByDates(String hashKey, Date after, Date before)

    @Update({
        update(DynamoDBEntity) {
            hash hashKey
            range rangeKey
            add 'number', 1
            returnUpdatedNew { number }
        }
    })
    Number increment(String hashKey, String rangeKey)

    @Update({
        update(DynamoDBEntity) {
            hash hashKey
            range rangeKey
            add 'number', -1
            returnUpdatedNew { number }
        }
    })
    Number decrement(String hashKey, String rangeKey)

    @Scan({
        scan(DynamoDBEntity) {
            filter {
                eq DynamoDBEntity.RANGE_INDEX, foo
            }
        }
    })
    Flowable<DynamoDBEntity> scanAllByRangeIndex(String foo)

}
Java
@Service(DynamoDBEntity.class)
public interface DynamoDBEntityService {

    class EqRangeIndex implements Function<Map<String, Object>, DetachedQuery> {
        public DetachedQuery apply(Map<String, Object> arguments) {
            return Builders.query(DynamoDBEntity.class)
                .hash(arguments.get("hashKey"))
                .range(r -> r.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("rangeKey")));
        }
    }

    class EqRangeProjection implements Function<Map<String, Object>, DetachedQuery> {
        public DetachedQuery apply(Map<String, Object> arguments) {
            return Builders.query(DynamoDBEntity.class)
                .hash(arguments.get("hashKey"))
                .range(r ->
                    r.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("rangeKey"))
                )
                .only(DynamoDBEntity.RANGE_INDEX);
        }
    }

    class EqRangeScan implements Function<Map<String, Object>, DetachedScan> {
        public DetachedScan apply(Map<String, Object> arguments) {
            return Builders.scan(DynamoDBEntity.class)
                .filter(f -> f.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("foo")));
        }
    }

    class BetweenDateIndex implements Function<Map<String, Object>, DetachedQuery> {
        public DetachedQuery apply(Map<String, Object> arguments) {
            return Builders.query(DynamoDBEntity.class)
                .hash(arguments.get("hashKey"))
                .range(r -> r.between(DynamoDBEntity.DATE_INDEX, arguments.get("after"), arguments.get("before")));
        }
    }

    class IncrementNumber implements Function<Map<String, Object>, DetachedUpdate> {
        public DetachedUpdate apply(Map<String, Object> arguments) {
            return Builders.update(DynamoDBEntity.class)
                .hash(arguments.get("hashKey"))
                .range(arguments.get("rangeKey"))
                .add("number", 1)
                .returnUpdatedNew(DynamoDBEntity::getNumber);
        }
    }

    class DecrementNumber implements Function<Map<String, Object>, DetachedUpdate> {
        public DetachedUpdate apply(Map<String, Object> arguments) {
            return Builders.update(DynamoDBEntity.class)
                .hash(arguments.get("hashKey"))
                .range(arguments.get("rangeKey"))
                .add("number", -1)
                .returnUpdatedNew(DynamoDBEntity::getNumber);
        }
    }

    DynamoDBEntity get(String hash, String rangeKey);

    DynamoDBEntity load(String hash, String rangeKey);

    List<DynamoDBEntity> getAll(String hash, List<String> rangeKeys);

    List<DynamoDBEntity> getAll(String hash, String... rangeKeys);

    List<DynamoDBEntity> loadAll(String hash, List<String> rangeKeys);

    List<DynamoDBEntity> loadAll(String hash, String... rangeKeys);

    DynamoDBEntity save(DynamoDBEntity entity);

    List<DynamoDBEntity> saveAll(DynamoDBEntity... entities);

    List<DynamoDBEntity> saveAll(Iterable<DynamoDBEntity> entities);

    int count(String hashKey);

    int count(String hashKey, String rangeKey);

    @Query(EqRangeIndex.class)
    int countByRangeIndex(String hashKey, String rangeKey);

    @Query(BetweenDateIndex.class)
    int countByDates(String hashKey, Date after, Date before);

    Flowable<DynamoDBEntity> query(String hashKey);

    Flowable<DynamoDBEntity> query(String hashKey, String rangeKey);

    @Query(EqRangeProjection.class)
    Flowable<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey);

    @Query(BetweenDateIndex.class)
    List<DynamoDBEntity> queryByDates(String hashKey, Date after, Date before);

    void delete(DynamoDBEntity entity);

    void delete(String hashKey, String rangeKey);

    @Query(EqRangeIndex.class)
    int deleteByRangeIndex(String hashKey, String rangeKey);

    @Query(BetweenDateIndex.class)
    int deleteByDates(String hashKey, Date after, Date before);

    @Update(IncrementNumber.class)
    Number increment(String hashKey, String rangeKey);

    @Update(DecrementNumber.class)
    Number decrement(String hashKey, String rangeKey);

    @Scan(EqRangeScan.class)
    Flowable<DynamoDBEntity> scanAllByRangeIndex(String foo);

}

The following table summarizes the supported method signatures:

Table 1. Basic Service Methods

save*

    Return type: T or List<T>

    Arguments: an entity, an array of entities, or an iterable of entities

    Example:
    DynamoDBEntity save(DynamoDBEntity entity)
    List<DynamoDBEntity> saveAll(DynamoDBEntity... entities)

    Description: Persists the entity or a list of entities and returns the saved entity (or entities)

get*, load*

    Return type: T or List<T>

    Arguments: hash key plus an optional range key, array of range keys or iterable of range keys, annotated with @HashKey and @RangeKey if the argument name does not contain the word hash or range

    Example:
    DynamoDBEntity load(String hashKey)
    List<DynamoDBEntity> getAll(@HashKey String parentId, String... rangeKeys)

    Description: Loads a single entity or a list of entities from the table. The range key is required for tables that define one

count*

    Return type: int

    Arguments: hash key and an optional range key, annotated with @HashKey and @RangeKey if the argument name does not contain the word hash or range

    Example:
    int count(String hashKey)
    int count(@HashKey String parentId, String rangeKey)

    Description: Counts the items in the table. Beware, this can be a very expensive operation in DynamoDB. See Advanced Queries for advanced use cases

delete*

    Return type: void

    Arguments: an entity, or a hash key and an optional range key, annotated with @HashKey and @RangeKey if the argument name does not contain the word hash or range

    Example:
    void delete(DynamoDBEntity entity)
    void delete(String hashKey, String rangeKey)

    Description: Deletes an item specified by an entity or by its hash key and optional range key. See Advanced Queries for advanced use cases

list*, findAll*, query*

    Return type: Flowable<T> or List<T>

    Arguments: an entity, or a hash key and an optional range key, annotated with @HashKey and @RangeKey if the argument name does not contain the word hash or range

    Example:
    Flowable<DynamoDBEntity> query(String hashKey)
    List<DynamoDBEntity> query(String hashKey, String rangeKey)

    Description: Queries for all entities with the given hash key and/or range key

(none of the above)

    Return type: (contextual)

    Arguments: any arguments, which will be translated into an arguments map

    Example: (see below)

    Description: Query, scan or update. See Advanced Queries, Scanning and Updates for advanced use cases

Calling any declarative service method will automatically create the DynamoDB table if it does not already exist.
Advanced Queries

DynamoDB integration does not support the feature known as dynamic finders. Instead, you can annotate any method with the @Query annotation to make it

  • a counting method if its name begins with count

  • a batch delete method if its name begins with delete

  • otherwise an advanced query method

Groovy
import static com.agorapulse.micronaut.aws.dynamodb.builder.Builders.*                  (1)

@Service(DynamoDBEntity)                                                                (2)
interface DynamoDBItemDBService {

    @Query({                                                                            (3)
        query(DynamoDBEntity) {
            hash hashKey                                                                (4)
            range {
                eq DynamoDBEntity.RANGE_INDEX, rangeKey                                 (5)
            }
            only {                                                                      (6)
                rangeIndex                                                              (7)
            }
        }
    })
    Flowable<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey)         (8)

}
1 Builders class provides all necessary factory methods and keywords
2 Annotate an interface with @Service with the type of the entity as its value
3 @Query annotation accepts a closure which returns a query builder (see QueryBuilder for full reference)
4 Specify a hash key with hash method and method’s hashKey argument
5 Specify some range key criteria with the method’s rangeKey argument (see RangeConditionCollector for full reference)
6 You can limit which properties are returned from the query
7 Only rangeIndex property will be populated in the entities returned
8 The arguments have no special meaning but you can use them in the query. The method must return either Flowable or List of entities.
Java
@Service(DynamoDBEntity.class)                                                          (1)
public interface DynamoDBEntityService {

    class EqRangeProjection implements Function<Map<String, Object>, DetachedQuery> {   (2)
        public DetachedQuery apply(Map<String, Object> arguments) {
            return Builders.query(DynamoDBEntity.class)                                 (3)
                .hash(arguments.get("hashKey"))                                         (4)
                .range(r ->
                    r.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("rangeKey"))         (5)
                )
                .only(DynamoDBEntity.RANGE_INDEX);                                      (6)
        }
    }

    @Query(EqRangeProjection.class)                                                     (7)
    Flowable<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey);        (8)

}
1 Annotate an interface with @Service with the type of the entity as its value
2 Define class which implements Function<Map<String, Object>, DetachedQuery>
3 Use Builders class to create a query builder for given DynamoDB entity (see QueryBuilder for full reference)
4 Specify a hash key with hash method and method’s hashKey argument
5 Specify some range key criteria with the method’s rangeKey argument (see RangeConditionCollector for full reference)
6 Only rangeIndex property will be populated in the entities returned
7 @Query annotation accepts a class which implements Function<Map<String, Object>, DetachedQuery>
8 The arguments have no special meaning but you can use them in the query using arguments map. The method must return either Flowable or List of entities.
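The key to both the Groovy and the Java variants is the arguments map: the generated implementation collects the method's arguments into a Map keyed by the parameter names and hands that map to the supplied Function. As a simplified, self-contained sketch of that contract (not the library's actual internals; DetachedQuery is replaced by a plain String here for illustration):

```java
import java.util.Map;
import java.util.function.Function;

class EqRangeIndexSketch {

    // stands in for a class like EqRangeIndex above: consumes the
    // arguments map and produces a detached query (a String here)
    static final Function<Map<String, Object>, String> EQ_RANGE_INDEX = arguments ->
        "hash = " + arguments.get("hashKey")
            + " AND rangeIndex = " + arguments.get("rangeKey");

    // roughly what the framework does when queryByRangeIndex("1", "foo")
    // is invoked: parameter names become map keys
    static String render(Map<String, Object> argumentsByParameterName) {
        return EQ_RANGE_INDEX.apply(argumentsByParameterName);
    }

    public static void main(String[] args) {
        // prints: hash = 1 AND rangeIndex = foo
        System.out.println(render(Map.of("hashKey", "1", "rangeKey", "foo")));
    }
}
```

This is why the callouts say the arguments "have no special meaning": the Function alone decides how each named argument is used in the query.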
Scanning

DynamoDB integration does not support the feature known as dynamic finders. If you need to scan the table by unindexed attributes, you can annotate any method with the @Scan annotation to make it

  • a counting method if its name begins with count

  • otherwise an advanced scan method

Groovy
import static com.agorapulse.micronaut.aws.dynamodb.builder.Builders.*                  (1)


@Service(DynamoDBEntity)                                                                (2)
interface DynamoDBItemDBService {


    @Scan({                                                                             (3)
        scan(DynamoDBEntity) {
            filter {
                eq DynamoDBEntity.RANGE_INDEX, foo                                      (4)
            }
        }
    })
    Flowable<DynamoDBEntity> scanAllByRangeIndex(String foo)                            (5)


}
1 Builders class provides all necessary factory methods and keywords
2 Annotate an interface with @Service with the type of the entity as its value
3 @Scan annotation accepts a closure which returns a scan builder (see ScanBuilder for full reference)
4 Specify some filter criteria with the method’s foo argument (see RangeConditionCollector for full reference)
5 The arguments have no special meaning but you can use them in the scan definition. The method must return either Flowable or List of entities.
Java
@Service(DynamoDBEntity.class)                                                          (1)
public interface DynamoDBEntityService {

    class EqRangeScan implements Function<Map<String, Object>, DetachedScan> {          (2)
        public DetachedScan apply(Map<String, Object> arguments) {
            return Builders.scan(DynamoDBEntity.class)                                  (3)
                .filter(f -> f.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("foo")));   (4)
        }
    }

    @Scan(EqRangeScan.class)                                                            (5)
    Flowable<DynamoDBEntity> scanAllByRangeIndex(String foo);                           (6)

}
1 Annotate an interface with @Service with the type of the entity as its value
2 Define class which implements Function<Map<String, Object>, DetachedScan>
3 Use Builders class to create a scan builder for given DynamoDB entity (see ScanBuilder for full reference)
4 Specify some filter criteria with the method’s foo argument (see RangeConditionCollector for full reference)
5 @Scan annotation accepts a class which implements Function<Map<String, Object>, DetachedScan>
6 The arguments have no special meaning but you can use them in the scan definition. The method must return either Flowable or List of entities.
Updates

Declarative services allow you to execute fine-grained updates. Any method annotated with @Update will perform an update in the DynamoDB table.

Groovy
import static com.agorapulse.micronaut.aws.dynamodb.builder.Builders.*                  (1)


@Service(DynamoDBEntity)                                                                (2)
interface DynamoDBItemDBService {


    @Update({                                                                           (3)
        update(DynamoDBEntity) {
            hash hashKey                                                                (4)
            range rangeKey                                                              (5)
            add 'number', 1                                                             (6)
            returnUpdatedNew { number }                                                 (7)
        }
    })
    Number increment(String hashKey, String rangeKey)                                   (8)


}
1 Builders class provides all necessary factory methods and keywords
2 Annotate an interface with @Service with the type of the entity as its value
3 @Update annotation accepts a closure which returns an update builder (see UpdateBuilder for full reference)
4 Specify a hash key with hash method and method’s hashKey argument
5 Specify a range key with range method and method’s rangeKey argument
6 Specify update operation - increment number attribute (see UpdateBuilder for full reference). You may have multiple update operations.
7 Specify what should be returned from the method (see UpdateBuilder for full reference).
8 The arguments have no special meaning but you can use them in the update definition. The method’s return value depends on the value returned from the returnUpdatedNew mapper.
Java
@Service(DynamoDBEntity.class)                                                          (1)
public interface DynamoDBEntityService {

    class IncrementNumber implements Function<Map<String, Object>, DetachedUpdate> {    (2)
        public DetachedUpdate apply(Map<String, Object> arguments) {
            return Builders.update(DynamoDBEntity.class)                                (3)
                .hash(arguments.get("hashKey"))                                         (4)
                .range(arguments.get("rangeKey"))                                       (5)
                .add("number", 1)                                                       (6)
                .returnUpdatedNew(DynamoDBEntity::getNumber);                           (7)
        }
    }

    @Update(IncrementNumber.class)                                                      (8)
    Number increment(String hashKey, String rangeKey);                                  (9)

}
1 Annotate an interface with @Service with the type of the entity as its value
2 Define class which implements Function<Map<String, Object>, DetachedUpdate>
3 Use Builders class to create an update builder for given DynamoDB entity (see UpdateBuilder for full reference)
4 Specify a hash key with hash method and method’s hashKey argument
5 Specify a range key with range method and method’s rangeKey argument
6 Specify update operation - increment number attribute (see UpdateBuilder for full reference). You may have multiple update operations.
7 Specify what should be returned from the method (see UpdateBuilder for full reference).
8 @Update annotation accepts a class which implements Function<Map<String, Object>, DetachedUpdate>
9 The arguments have no special meaning but you can use them in the update definition. The method’s return value depends on the value returned from the returnUpdatedNew mapper.
DynamoDB Service

DynamoDBService provides a middle-level API for working with DynamoDB tables and entities. You can obtain an instance of DynamoDBService from DynamoDBServiceProvider, which can be injected into any bean.

Groovy
DynamoDBServiceProvider provider = context.getBean(DynamoDBServiceProvider)
DynamoDBService<DynamoDBEntity> s = provider.findOrCreate(DynamoDBEntity)       (1)

s.createTable()                                                                 (2)

s.save(new DynamoDBEntity(                                                      (3)
    parentId: '1',
    id: '1',
    rangeIndex: 'foo',
    date: REFERENCE_DATE.toDate()
))

s.get('1', '1')                                                                 (4)

s.query('1', DynamoDBEntity.RANGE_INDEX, 'bar').count == 1                      (5)

s.queryByDates('3', DynamoDBEntity.DATE_INDEX, [                                (6)
    after: REFERENCE_DATE.plusDays(9).toDate(),
    before: REFERENCE_DATE.plusDays(20).toDate(),
]).count == 1

s.increment('1', '1', 'number')                                                 (7)

s.delete(s.get('1', '1'))                                                       (8)

s.deleteAll('1', DynamoDBEntity.RANGE_INDEX, 'bar') == 1                        (9)
1 Obtain the instance of DynamoDBService from DynamoDBServiceProvider (provider can be injected)
2 Create table for the entity
3 Save an entity
4 Load the entity by its hash and range keys
5 Query the table for entities with given range index value
6 Query the table for entities having date between the specified dates
7 Increment a property for entity specified by hash and range keys
8 Delete an entity by object reference
9 Delete all entities with given range index value
Java
DynamoDBServiceProvider provider = ctx.getBean(DynamoDBServiceProvider.class);
DynamoDBService<DynamoDBEntity> s = provider.findOrCreate(DynamoDBEntity.class);(1)

assertNotNull(
    s.createTable(5L, 5L)                                                       (2)
);

assertNotNull(
    s.save(createEntity("1", "1", "foo", REFERENCE_DATE.toDate()))              (3)
);

assertNotNull(
    s.get("1", "1")                                                             (4)
);

assertEquals(1,
    s.query("1", DynamoDBEntity.RANGE_INDEX, "bar").getCount().intValue()        (5)
);

assertEquals(1,
    s.queryByDates(                                                             (6)
        "3",
        DynamoDBEntity.DATE_INDEX,
        REFERENCE_DATE.plusDays(9).toDate(),
        REFERENCE_DATE.plusDays(20).toDate()
    ).getCount().intValue()
);

s.increment("1", "1", "number");                                                (7)

s.delete(s.get("1", "1"));                                                      (8)

assertEquals(1,
    s.deleteAll("1", DynamoDBEntity.RANGE_INDEX, "bar")                         (9)
);
1 Obtain the instance of DynamoDBService from DynamoDBServiceProvider (provider can be injected)
2 Create table for the entity
3 Save an entity
4 Load the entity by its hash and range keys
5 Query the table for entities with given range index value
6 Query the table for entities having date between the specified dates
7 Increment a property for entity specified by hash and range keys
8 Delete an entity by object reference
9 Delete all entities with given range index value

Please see DynamoDBService for full reference.

DynamoDB Accelerator (DAX)

You can enable DynamoDB Accelerator simply by setting the DAX endpoint as the aws.dax.endpoint property. Every operation performed using an injected AmazonDynamoDB, IDynamoDBMapper or a data service will then be performed against DAX instead of the DynamoDB tables.
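For illustration, the configuration might look like the following sketch (the endpoint value is a made-up placeholder; use your cluster's actual endpoint):

```yaml
# application.yml — hypothetical DAX cluster endpoint
aws:
  dax:
    endpoint: my-cluster.abc123.dax-clusters.eu-west-1.amazonaws.com:8111
```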

Please check the DAX and DynamoDB Consistency Models article to understand the consequences of using DAX instead of direct DynamoDB operations.

Make sure you have set up a proper policy to access the DAX cluster. See DAX Access Control for more information. The following policy allows every DAX operation on any resource. In production, you should constrain the scope to a single cluster.

DAX Access Policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DaxAllowAll",
            "Effect": "Allow",
            "Action": "dax:*",
            "Resource": "*"
        }
    ]
}
Testing

You can very easily mock any of the interfaces and declarative services, but if you need a close-to-production setup, the DynamoDB integration works well with Testcontainers and LocalStack.

You need to add the following dependencies to your build file:

Gradle
compile group: 'org.testcontainers', name: 'localstack', version: '1.10.2'
compile group: 'org.testcontainers', name: 'spock', version: '1.10.2'
Maven
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>spock</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>

Then you can setup your tests like this:

Groovy
@Stepwise
@Testcontainers                                                                         (1)
class DefaultDynamoDBServiceSpec extends Specification {

    @AutoCleanup ApplicationContext context                                             (2)

    @Shared LocalStackContainer localstack = new LocalStackContainer()                  (3)
        .withServices(LocalStackContainer.Service.DYNAMODB)

    DynamoDBService<DynamoDBEntity> s
    AmazonDynamoDB amazonDynamoDB
    IDynamoDBMapper mapper

    void setup() {
        amazonDynamoDB = AmazonDynamoDBClient                                           (4)
            .builder()
            .withEndpointConfiguration(
                localstack.getEndpointConfiguration(LocalStackContainer.Service.DYNAMODB)
            )
            .withCredentials(
                localstack.defaultCredentialsProvider
            )
            .build()

        mapper = new DynamoDBMapper(amazonDynamoDB)

        context = ApplicationContext.build().build()
        context.registerSingleton(AmazonDynamoDB, amazonDynamoDB)                       (5)
        context.registerSingleton(IDynamoDBMapper, mapper)                              (6)
        context.start()

        DynamoDBServiceProvider provider = context.getBean(DynamoDBServiceProvider)     (7)
        s = provider.findOrCreate(DynamoDBEntity)                                       (8)
    }

    // test methods

}
1 Annotate the specification with @Testcontainers to let Spock manage the Testcontainers for you
2 Prepare the reference to the ApplicationContext, @AutoCleanup guarantees closing the context after the tests
3 Create an instance of LocalStackContainer with only DynamoDB support enabled
4 Create AmazonDynamoDB client using the LocalStack configuration
5 Register the client using LocalStack to the application context
6 Register the mapper using LocalStack to the application context
7 Obtain the provider bean
8 Obtain DynamoDBService for particular DynamoDB entity
Java
public class DynamoDBServiceTest {
    @Rule
    public LocalStackContainer localstack = new LocalStackContainer()                   (1)
        .withServices(LocalStackContainer.Service.DYNAMODB);

    public ApplicationContext ctx;                                                      (2)

    @Before
    public void setup() {
        AmazonDynamoDB amazonDynamoDB = AmazonDynamoDBClient                            (3)
            .builder()
            .withEndpointConfiguration(
                localstack.getEndpointConfiguration(LocalStackContainer.Service.DYNAMODB)
            )
            .withCredentials(
                localstack.getDefaultCredentialsProvider()
            )
            .build();

        IDynamoDBMapper mapper = new DynamoDBMapper(amazonDynamoDB);

        ctx = ApplicationContext.build().build();
        ctx.registerSingleton(AmazonDynamoDB.class, amazonDynamoDB);                    (4)
        ctx.registerSingleton(IDynamoDBMapper.class, mapper);                           (5)
        ctx.start();
    }

    @After
    public void cleanup() {
        if (ctx != null) {                                                              (6)
            ctx.close();
        }
    }

    @Test
    public void testSomething() {
        DynamoDBServiceProvider provider = ctx.getBean(DynamoDBServiceProvider.class);  (7)
        DynamoDBService<DynamoDBEntity> s = provider.findOrCreate(DynamoDBEntity.class);(8)

        // test code
    }
}
1 Create an instance of LocalStackContainer with only DynamoDB support enabled
2 Prepare the reference to the ApplicationContext
3 Create AmazonDynamoDB client using the LocalStack configuration
4 Register the client using LocalStack to the application context
5 Register the mapper using LocalStack to the application context
6 Close the application context after test execution
7 Obtain the provider bean
8 Obtain DynamoDBService for particular DynamoDB entity
You can obtain instances of the declarative clients from the context as well.

1.3. Kinesis

Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information.

This library provides three approaches to work with Kinesis streams:

Installation
Gradle
// for Kinesis client
compile 'com.agorapulse:micronaut-aws-sdk-kinesis:1.3.0.1'
// for Kinesis worker
compile 'com.agorapulse:micronaut-aws-sdk-kinesis-worker:1.3.0.1'
Maven
<!-- for Kinesis client -->
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-kinesis</artifactId>
    <version>1.3.0.1</version>
</dependency>
<!-- for Kinesis worker -->
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-kinesis-worker</artifactId>
    <version>1.3.0.1</version>
</dependency>
Configuration

Only aws.kinesis.application.name and aws.kinesis.listener.stream are required, and only if you decide to use @KinesisListener. Otherwise you need no configuration at all, but some of the configuration properties may still be useful.

application.yml
aws:
  kinesis:
    region: sa-east-1
    stream: MyStream                                                                    (1)

    streams:                                                                            (2)
      test:                                                                             (3)
        stream: TestStream

    application:
      name: MyKinesisApp                                                                (4)
    worker:
      id: rubble                                                                        (5)
    listener:
      stream: MyStreamToConsume                                                         (6)

    listeners:                                                                          (7)
      test:                                                                             (8)
        stream: TestStreamToConsume
1 You can specify the default stream for KinesisService and @KinesisClient
2 You can define multiple configurations
3 Each configuration can be accessed using the @Named('test') KinesisService qualifier, or you can use the configuration name as the value of @KinesisClient('test')
4 The application name is required for @KinesisListener
5 Optional id of the Kinesis worker (listener)
6 Stream to listen is required for @KinesisListener
7 You can define multiple listener configurations
8 The name of the configuration will be used as value of @KinesisListener('test')
Publishing with @KinesisClient

If you place the com.agorapulse.micronaut.aws.kinesis.annotation.KinesisClient annotation on an interface then methods matching the predefined patterns will be automatically implemented. Every method of a KinesisClient interface puts new records into the stream.

The following example shows many of the available method signatures for publishing records:

Publishing String Records
@KinesisClient                                                                          (1)
interface DefaultClient {
    void putRecordString(String record);                                                (2)

    PutRecordResult putRecord(String partitionKey, String record);                      (3)

    void putRecordAnno(@PartitionKey String id, String record);                         (4)

    void putRecord(String partitionKey, String record, String sequenceNumber);          (5)

    void putRecordAnno(                                                                 (6)
                                                                                        @PartitionKey String id,
                                                                                        String record,
                                                                                        @SequenceNumber String sqn
    );

    void putRecordAnnoNumbers(                                                          (7)
                                                                                        @PartitionKey Long id,
                                                                                        String record,
                                                                                        @SequenceNumber int sequenceNumber
    );
}
1 @KinesisClient annotation makes the interface a Kinesis client
2 You can put a String into the stream; a random UUID will be generated as the partition key
3 You can use a predefined partition key
4 If the name of the argument does not contain the word partition then the @PartitionKey annotation must be used
5 You can put a String into the stream with a predefined partition key and a sequence number
6 If the name of the sequence number argument does not contain the word sequence then the @SequenceNumber annotation must be used
7 The type of the partition key and the sequence number does not matter as the value will always be converted to a string
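The UUID fallback and the string conversion described in the callouts above can be sketched as follows (the helper name and shape are illustrative assumptions, not the library's internals):

```java
import java.util.UUID;

public class PartitionKeys {

    // If no partition key argument is supplied, a random UUID is generated;
    // otherwise the value is converted to its string representation.
    static String partitionKeyOrDefault(Object partitionKey) {
        if (partitionKey == null) {
            return UUID.randomUUID().toString();
        }
        return String.valueOf(partitionKey); // Long, int, ... all become strings
    }

    public static void main(String[] args) {
        System.out.println(partitionKeyOrDefault(1234L)); // prints "1234"
        System.out.println(partitionKeyOrDefault(null));  // prints a random UUID
    }
}
```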
Publishing Byte Array Records
@KinesisClient                                                                          (1)
interface DefaultClient {
    void putRecordBytes(byte[] record);                                                 (2)

    void putRecordDataByteArray(@PartitionKey String id, byte[] value);                 (3)

    PutRecordsResult putRecords(Iterable<PutRecordsRequestEntry> entries);              (4)

    PutRecordsResult putRecords(PutRecordsRequestEntry... entries);                     (5)

    PutRecordsResult putRecord(PutRecordsRequestEntry entry);                           (6)
}
1 @KinesisClient annotation makes the interface a Kinesis client
2 You can put a byte array into the stream; a random UUID will be generated as the partition key
3 If the name of the argument does not contain the word partition then the @PartitionKey annotation must be used
4 You can put several records wrapped into an iterable of PutRecordsRequestEntry
5 You can put several records wrapped into an array of PutRecordsRequestEntry
6 If the single argument is of type PutRecordsRequestEntry then a PutRecordsResult object is returned from the method even though only a single record has been published
Publishing Plain Old Java Objects
@KinesisClient                                                                          (1)
interface DefaultClient {
    void putRecordObject(Pogo pogo);                                                    (2)

    PutRecordsResult putRecordObjects(Pogo... pogo);                                    (3)

    PutRecordsResult putRecordObjects(Iterable<Pogo> pogo);                             (4)

    void putRecordDataObject(@PartitionKey String id, Pogo value);                      (5)
}
1 @KinesisClient annotation makes the interface a Kinesis client
2 You can put any object into the stream; a random UUID will be generated as the partition key and the object will be serialized to JSON
3 You can put an array of objects into the stream; a random UUID will be generated as the partition key for each record and each object will be serialized to JSON
4 You can put an iterable of objects into the stream; a random UUID will be generated as the partition key for each record and each object will be serialized to JSON
5 You can put any object into the stream with a predefined partition key; if the name of the argument does not contain the word partition then the @PartitionKey annotation must be used
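As the callouts above note, published objects are serialized to JSON before being put on the stream. A hand-rolled miniature of the idea (the real implementation uses a proper JSON mapper, and this Pogo shape is hypothetical):

```java
public class Pogo {

    private final String name;
    private final int count;

    public Pogo(String name, int count) {
        this.name = name;
        this.count = count;
    }

    // Hand-rolled rendering for illustration only; the library itself
    // uses a full JSON mapper to serialize the published object.
    public String toJson() {
        return String.format("{\"name\":\"%s\",\"count\":%d}", name, count);
    }

    public static void main(String[] args) {
        // the JSON string becomes the data blob of the published record
        System.out.println(new Pogo("foo", 2).toJson()); // prints {"name":"foo","count":2}
    }
}
```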
Publishing Events
@KinesisClient                                                                          (1)
interface DefaultClient {
    PutRecordResult putEvent(MyEvent event);                                            (2)

    PutRecordsResult putEventsIterable(Iterable<MyEvent> events);                       (3)

    void putEventsArrayNoReturn(MyEvent... events);                                     (4)

    @Stream("OtherStream") PutRecordResult putEventToStream(MyEvent event);             (5)
}
1 @KinesisClient annotation makes the interface a Kinesis client
2 You can put an object implementing Event into the stream
3 You can put an iterable of objects implementing Event into the stream
4 You can put an array of objects implementing Event into the stream
5 By default @KinesisClient publishes to the default stream of the default configuration, but you can change the destination using the @Stream annotation on the method
The return value of the method is PutRecordResult, or PutRecordsResult when putting multiple records, but it can always be omitted and replaced with void.

By default, a KinesisClient publishes records into the default stream defined by the aws.kinesis.stream property. You can switch to a different configuration by changing the value of the annotation, such as @KinesisClient("other"), or by setting the stream property of the annotation, such as @KinesisClient(stream = "MyStream"). You can also change the stream used by a particular method with the @Stream annotation, as mentioned above.

Listening with @KinesisListener
Before you start implementing your service with @KinesisListener, you may want to consider implementing a Lambda function instead.

If you place the com.agorapulse.micronaut.aws.kinesis.annotation.KinesisListener annotation on a method of any bean then the method will be triggered for new records arriving in the stream.

Listening to Events

@Singleton                                                                              (1)
public class KinesisListenerTester {

    @KinesisListener
    public void listenString(String string) {                                           (2)
        String message = "EXECUTED: listenString(" + string + ")";
        logExecution(message);
    }

    @KinesisListener
    public void listenRecord(Record record) {                                           (3)
        logExecution("EXECUTED: listenRecord(" + record + ")");
    }


    @KinesisListener
    public void listenStringRecord(String string, Record record) {                      (4)
        logExecution("EXECUTED: listenStringRecord(" + string + ", " + record + ")");
    }

    @KinesisListener
    public void listenObject(MyEvent event) {                                           (5)
        logExecution("EXECUTED: listenObject(" + event + ")");
    }

    @KinesisListener
    public void listenObjectRecord(MyEvent event, Record record) {                      (6)
        logExecution("EXECUTED: listenObjectRecord(" + event + ", " + record + ")");
    }

    @KinesisListener
    public void listenPogoRecord(Pogo event) {                                          (7)
        logExecution("EXECUTED: listenPogoRecord(" + event + ")");
    }

    public List<String> getExecutions() {
        return executions;
    }

    public void setExecutions(List<String> executions) {
        this.executions = executions;
    }

    private void logExecution(String message) {
        executions.add(message);
        System.err.println(message);
    }

    private List<String> executions = new CopyOnWriteArrayList<>();
}
1 @KinesisListener method must be declared in a bean, e.g. @Singleton
2 You can listen to just plain string records
3 You can listen to Record objects
4 You can listen to both string and Record objects
5 You can listen to objects implementing Event interface
6 You can listen to both Event and Record objects
7 You can listen to any object as long as it can be unmarshalled from the record payload

You can listen to a configuration other than the default one by changing the value of the annotation, such as @KinesisListener("other").

Multiple methods in a single application can listen to the same configuration (stream). In that case, every method will be executed with the incoming payload.
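The fan-out to multiple listener methods can be pictured as a simple dispatch loop that hands every incoming payload to each registered method (a conceptual sketch only, not the actual worker implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ListenerDispatch {

    private final List<Consumer<String>> listeners = new ArrayList<>();

    // each @KinesisListener method corresponds to one registered consumer
    void register(Consumer<String> listener) {
        listeners.add(listener);
    }

    // every registered method receives the incoming payload
    void dispatch(String payload) {
        listeners.forEach(listener -> listener.accept(payload));
    }

    public static void main(String[] args) {
        ListenerDispatch dispatch = new ListenerDispatch();
        List<String> received = new ArrayList<>();
        dispatch.register(payload -> received.add("first: " + payload));
        dispatch.register(payload -> received.add("second: " + payload));
        dispatch.dispatch("event");
        System.out.println(received); // prints [first: event, second: event]
    }
}
```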

Kinesis Service

KinesisService provides a middle-level API for creating, describing, and deleting streams. You can manage shards as well as read records from particular shards.

An instance of KinesisService is created for the default Kinesis configuration and for each stream configuration in the aws.kinesis.streams map. You should always use the @Named qualifier when injecting KinesisService if you have more than one stream configuration present, e.g. @Named("other") KinesisService otherService.

Please see KinesisService for the full reference.

Testing

You can very easily mock any of the interfaces and declarative services, but if you need a close-to-production setup, the Kinesis integration works well with Testcontainers and LocalStack.

You need to add the following dependencies into your build file:

Gradle
testCompile group: 'org.testcontainers', name: 'localstack', version: '1.10.2'
testCompile group: 'org.testcontainers', name: 'spock', version: '1.10.2'
Maven
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>spock</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>

Then you can set up your tests like this:

Groovy
@Testcontainers                                                                         (1)
@RestoreSystemProperties                                                                (2)
class KinesisAnnotationsSpec extends Specification {

    private static final String TEST_STREAM = 'TestStream'
    private static final String APP_NAME = 'AppName'

    @Shared LocalStackContainer localstack = new LocalStackContainer('0.8.10')          (3)
        .withServices(KINESIS, DYNAMODB)

    @AutoCleanup ApplicationContext context                                             (4)

    void setup() {
        System.setProperty('com.amazonaws.sdk.disableCbor', 'true')                     (5)
        System.setProperty('aws.region', 'eu-west-1')

        AmazonDynamoDB dynamo = AmazonDynamoDBClient                                    (6)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(DYNAMODB))
            .withCredentials(localstack.defaultCredentialsProvider)
            .build()

        AmazonKinesis kinesis = AmazonKinesisClient                                     (7)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(KINESIS))
            .withCredentials(localstack.defaultCredentialsProvider)
            .build()

        AmazonCloudWatch amazonCloudWatch = Mock(AmazonCloudWatch)

        context = ApplicationContext.build().properties(                                (8)
            'aws.kinesis.application.name': APP_NAME,
            'aws.kinesis.stream': TEST_STREAM,
            'aws.kinesis.listener.stream': TEST_STREAM,
            'aws.kinesis.listener.failoverTimeMillis': '1000',
            'aws.kinesis.listener.shardSyncIntervalMillis': '1000',
            'aws.kinesis.listener.idleTimeBetweenReadsInMillis': '1000',
            'aws.kinesis.listener.parentShardPollIntervalMillis': '1000',
            'aws.kinesis.listener.timeoutInSeconds': '1000',
            'aws.kinesis.listener.retryGetRecordsInSeconds': '1000',
            'aws.kinesis.listener.metricsLevel': 'NONE',
        ).build()
        context.registerSingleton(AmazonKinesis, kinesis)
        context.registerSingleton(AmazonDynamoDB, dynamo)
        context.registerSingleton(AmazonCloudWatch, amazonCloudWatch)
        context.registerSingleton(AWSCredentialsProvider, localstack.defaultCredentialsProvider)
        context.start()
    }

    void 'kinesis listener is executed'() {
        when:
            KinesisService service = context.getBean(KinesisService)                    (9)
            KinesisListenerTester tester = context.getBean(KinesisListenerTester)       (10)
            DefaultClient client = context.getBean(DefaultClient)                       (11)

            service.createStream()
            service.waitForActive()

            waitForWorkerReady(300, 100)

            Disposable subscription = publishEventAsync(tester, client)

            waitForReceivedMessages(tester, 300, 100)

            subscription.dispose()
        then:
            allTestEventsReceived(tester)
    }

}
1 Annotate the specification with @Testcontainers to let Spock manage the Testcontainers for you
2 @RestoreSystemProperties will guarantee that system properties are restored after the test
3 Create an instance of LocalStackContainer with Kinesis and DynamoDB (required by the listener) support enabled
4 Prepare the reference to the ApplicationContext, @AutoCleanup guarantees closing the context after the tests
5 Disable CBOR protocol for Kinesis (not supported by LocalStack/Kinesalite)
6 Create AmazonDynamoDB client using the LocalStack configuration
7 Create AmazonKinesis client using the LocalStack configuration
8 Prepare the application context with the required properties and the services running against LocalStack
9 You can obtain an instance of KinesisService from the context
10 You can obtain an instance of the declarative listener from the context
11 You can obtain an instance of the declarative client from the context
Java
public class KinesisTest {

    public ApplicationContext context;                                                  (1)

    @Rule
    public LocalStackContainer localstack = new LocalStackContainer("0.8.10")           (2)
        .withServices(DYNAMODB, KINESIS);

    @Before
    public void setup() {
        System.setProperty("com.amazonaws.sdk.disableCbor", "true");                    (3)
        System.setProperty("aws.region", "eu-west-1");

        AmazonDynamoDB amazonDynamoDB = AmazonDynamoDBClient                            (4)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(DYNAMODB))
            .withCredentials(localstack.getDefaultCredentialsProvider())
            .build();

        AmazonKinesis amazonKinesis = AmazonKinesisClient                               (5)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(KINESIS))
            .withCredentials(localstack.getDefaultCredentialsProvider())
            .build();

        AmazonCloudWatch cloudWatch = new MockCloudWatch();

        Map<String, Object> properties = new HashMap<>();                               (6)
        properties.put("aws.kinesis.application.name", "TestApp");
        properties.put("aws.kinesis.stream", TEST_STREAM);
        properties.put("aws.kinesis.listener.stream", TEST_STREAM);

        // you can set other custom client configuration properties
        properties.put("aws.kinesis.listener.failoverTimeMillis", "1000");
        properties.put("aws.kinesis.listener.shardSyncIntervalMillis", "1000");
        properties.put("aws.kinesis.listener.idleTimeBetweenReadsInMillis", "1000");
        properties.put("aws.kinesis.listener.parentShardPollIntervalMillis", "1000");
        properties.put("aws.kinesis.listener.timeoutInSeconds", "1000");
        properties.put("aws.kinesis.listener.retryGetRecordsInSeconds", "1000");
        properties.put("aws.kinesis.listener.metricsLevel", "NONE");


        context = ApplicationContext.build(properties).build();                         (7)
        context.registerSingleton(AmazonKinesis.class, amazonKinesis);
        context.registerSingleton(AmazonDynamoDB.class, amazonDynamoDB);
        context.registerSingleton(AmazonCloudWatch.class, cloudWatch);
        context.registerSingleton(AWSCredentialsProvider.class, localstack.getDefaultCredentialsProvider());
        context.start();
    }

    @After
    public void cleanup() {
        System.clearProperty("com.amazonaws.sdk.disableCbor");                          (8)
        System.clearProperty("aws.region");
        if (context != null) {
            context.close();                                                            (9)
        }
    }

    @Test
    public void testJavaService() throws InterruptedException {
        KinesisService service = context.getBean(KinesisService.class);                 (10)
        KinesisListenerTester tester = context.getBean(KinesisListenerTester.class);    (11)
        DefaultClient client = context.getBean(DefaultClient.class);                    (12)

        service.createStream();
        service.waitForActive();

        waitForWorkerReady(300, 100);
        Disposable subscription = publishEventsAsync(tester, client);
        waitForReceivedMessages(tester, 300, 100);

        subscription.dispose();

        Assert.assertTrue(allTestEventsReceived(tester));
    }

}
1 Prepare the reference to the ApplicationContext
2 Create an instance of LocalStackContainer with Kinesis and DynamoDB (required by the listener) support enabled
3 Disable CBOR protocol for Kinesis (not supported by LocalStack/Kinesalite)
4 Create AmazonDynamoDB client using the LocalStack configuration
5 Create AmazonKinesis client using the LocalStack configuration
6 Prepare required properties
7 Prepare the application context with the required properties and the services running against LocalStack
8 Reset CBOR protocol settings after the test
9 Close the application context after the test
10 You can obtain an instance of KinesisService from the context
11 You can obtain an instance of the declarative listener from the context
12 You can obtain an instance of the declarative client from the context
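The waitForWorkerReady and waitForReceivedMessages helpers used in both tests are not part of the library; they can be implemented as a simple polling loop along these lines (an assumed sketch with a retry count and a delay between attempts):

```java
import java.util.function.BooleanSupplier;

public class Wait {

    // Polls the condition up to `retries` times, sleeping `waitMillis`
    // milliseconds between unsuccessful attempts.
    static boolean waitFor(BooleanSupplier condition, int retries, long waitMillis) {
        for (int i = 0; i < retries; i++) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(waitMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] attempts = {0};
        // the condition becomes true on the third attempt
        System.out.println(waitFor(() -> ++attempts[0] >= 3, 10, 1)); // prints true
    }
}
```

In the tests above, the condition would check whether the Kinesis worker has started or whether the listener has collected the expected executions.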

1.4. Simple Storage Service (S3)

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

This library provides basic support for Amazon S3 using SimpleStorageService.

Installation
Gradle
compile 'com.agorapulse:micronaut-aws-sdk-s3:1.3.0.1'
Maven
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-s3</artifactId>
    <version>1.3.0.1</version>
</dependency>
Configuration

You can store the name of the bucket in the configuration using the aws.s3.bucket property. You can create additional configurations by providing the aws.s3.buckets configuration map.

application.yml
aws:
  s3:
    region: sa-east-1
    bucket: MyBucket                                                                    (1)

    buckets:                                                                            (2)
      test:                                                                             (3)
        bucket: TestBucket
1 You can define default bucket for the service
2 You can define multiple configurations
3 Each configuration can be accessed using the @Named('test') SimpleStorageService qualifier
Simple Storage Service

SimpleStorageService provides a middle-level API for managing buckets and uploading and downloading files.

An instance of SimpleStorageService is created for the default S3 configuration and for each bucket configuration in the aws.s3.buckets map. You should always use the @Named qualifier when injecting SimpleStorageService if you have more than one bucket configuration present, e.g. @Named("test") SimpleStorageService service.

The following example shows some of the most common use cases for working with S3 buckets.

Creating Bucket
service.createBucket(MY_BUCKET);                                                (1)

assertTrue(service.listBucketNames().contains(MY_BUCKET));                      (2)
1 Create a new bucket with the given name
2 The bucket is present within the list of all bucket names
Upload File
File sampleContent = createFileWithSampleContent();

service.storeFile(TEXT_FILE_PATH, sampleContent);                               (1)

assertTrue(service.exists(TEXT_FILE_PATH));                                     (2)

Flowable<S3ObjectSummary> summaries = service.listObjectSummaries("foo");       (3)
assertEquals(Long.valueOf(0L), summaries.count().blockingGet());
1 Upload file
2 The file has been uploaded
3 No object summaries are returned for an unrelated prefix
Upload from InputStream
service.storeInputStream(                                                       (1)
    KEY,
    new ByteArrayInputStream(SAMPLE_CONTENT.getBytes()),
    buildMetadata()
);

Flowable<S3ObjectSummary> fooSummaries = service.listObjectSummaries("foo");    (2)
assertEquals(KEY, fooSummaries.blockingFirst().getKey());
1 Upload data from stream
2 Stream is uploaded
Generate URL
String url = service.generatePresignedUrl(KEY, TOMORROW);                       (1)

assertEquals(SAMPLE_CONTENT, download(url));                                    (2)
1 Generate presigned URL
2 Downloaded content corresponds with the expected content
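The download helper used in the assertion above is not provided by the service; a plausible implementation simply reads the body behind the presigned URL with plain java.net (hypothetical helper, shown here as a sketch):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class Download {

    // Reads the whole body behind the (presigned) URL into a string.
    static String download(String url) {
        try (InputStream in = new URL(url).openStream()) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            return new String(out.toByteArray(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException("Cannot download " + url, e);
        }
    }
}
```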
Download File
File dir = tmp.newFolder();
File file = new File(dir, "bar.baz");                                           (1)

service.getFile(KEY, file);                                                     (2)
assertTrue(file.exists());

assertEquals(SAMPLE_CONTENT, new String(Files.readAllBytes(Paths.get(file.toURI()))));
1 Prepare a destination file
2 Download the file locally
Delete File
service.deleteFile(TEXT_FILE_PATH);                                             (1)
assertFalse(service.exists(TEXT_FILE_PATH));                                    (2)
1 Delete file
2 The file is no longer present
Delete Bucket
service.deleteBucket();                                                         (1)
assertFalse(service.listBucketNames().contains(MY_BUCKET));                     (2)
1 Delete bucket
2 The bucket is no longer present

Please see SimpleStorageService for the full reference.

Testing

You can very easily mock the SimpleStorageService, but if you need a close-to-production setup, the S3 integration works well with Testcontainers and LocalStack.

You need to add the following dependencies into your build file:

Gradle
testCompile group: 'org.testcontainers', name: 'localstack', version: '1.10.2'
testCompile group: 'org.testcontainers', name: 'spock', version: '1.10.2'
Maven
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>spock</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>

Then you can set up your tests like this:

Groovy
@Stepwise
@Testcontainers                                                                         (1)
class SimpleStorageServiceSpec extends Specification {

    @AutoCleanup ApplicationContext context                                             (2)

    @Shared LocalStackContainer localstack = new LocalStackContainer()                  (3)
        .withServices(S3)

    @Rule TemporaryFolder tmp

    AmazonS3 amazonS3
    SimpleStorageService service

    void setup() {
        amazonS3 = AmazonS3Client                                                       (4)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(S3))
            .withCredentials(localstack.defaultCredentialsProvider)
            .build()

        context = ApplicationContext
            .build('aws.s3.bucket': MY_BUCKET)                                          (5)
            .build()
        context.registerSingleton(AmazonS3, amazonS3)                                   (6)
        context.start()

        service = context.getBean(SimpleStorageService)                                 (7)
    }

    // test methods

}
1 Annotate the specification with @Testcontainers to let Spock manage the Testcontainers for you
2 Prepare the reference to the ApplicationContext, @AutoCleanup guarantees closing the context after the tests
3 Create an instance of LocalStackContainer with S3 support enabled
4 Create AmazonS3 client using the LocalStack configuration
5 Set the default bucket
6 Register AmazonS3 service running against LocalStack
7 You can obtain an instance of SimpleStorageService from the context
Java
public class SimpleStorageServiceTest {

    @Rule
    public final LocalStackContainer localstack = new LocalStackContainer()            (1)
        .withServices(S3);

    @Rule
    public final TemporaryFolder tmp = new TemporaryFolder();

    private ApplicationContext ctx;                                                     (2)

    @Before
    public void setup() {
        AmazonS3 amazonS3 = AmazonS3Client                                              (3)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(S3))
            .withCredentials(localstack.getDefaultCredentialsProvider())
            .build();

        ctx = ApplicationContext
            .build(Collections.singletonMap("aws.s3.bucket", MY_BUCKET))
            .build();
        ctx.registerSingleton(AmazonS3.class, amazonS3);                                (4)
        ctx.start();
    }

    @After
    public void cleanup() {
        if (ctx != null) {                                                              (5)
            ctx.close();
        }
    }

    // test methods

}
1 Create an instance of LocalStackContainer with S3 support enabled
2 Prepare the reference to the ApplicationContext
3 Create AmazonS3 client using the LocalStack configuration
4 Register AmazonS3 service running against LocalStack
5 Don’t forget to close ApplicationContext

1.5. Simple Email Service (SES)

Amazon Simple Email Service (Amazon SES) is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails. It is a reliable, cost-effective service for businesses of all sizes that use email to keep in contact with their customers.

This library provides basic support for Amazon SES using SimpleEmailService.

Installation
Gradle
compile 'com.agorapulse:micronaut-aws-sdk-ses:1.3.0.1'
Maven
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-ses</artifactId>
    <version>1.3.0.1</version>
</dependency>
Simple Email Service

SimpleEmailService provides a DSL for creating and sending simple emails with attachments. Like the other services, it uses the default credentials chain to obtain AWS credentials.

The following example shows how to send an email with an attachment.

Groovy
/*
 * SPDX-License-Identifier: Apache-2.0
 *
 * Copyright 2018-2020 Agorapulse.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.agorapulse.micronaut.aws.ses

import com.amazonaws.services.simpleemail.AmazonSimpleEmailService
import com.amazonaws.services.simpleemail.model.SendRawEmailRequest
import com.amazonaws.services.simpleemail.model.SendRawEmailResult
import org.junit.Rule
import org.junit.rules.TemporaryFolder
import spock.lang.Specification
import spock.lang.Subject

/**
 * Tests for sending emails with Groovy.
 */
class SendEmailSpec extends Specification {

    AmazonSimpleEmailService simpleEmailService = Mock(AmazonSimpleEmailService)

    @Rule
    TemporaryFolder tmp = new TemporaryFolder()

    @Subject
    SimpleEmailService service = new DefaultSimpleEmailService(simpleEmailService)

    void "send email"() {
        given:
            File file = tmp.newFile('test.pdf')
            file.text = 'not a real PDF'
            String thePath = file.canonicalPath
        when:
            EmailDeliveryStatus status = service.send {                                 (1)
                subject 'Hi Paul'                                                       (2)
                from 'subscribe@groovycalamari.com'                                     (3)
                to 'me@sergiodelamo.com'                                                (4)
                htmlBody '<p>This is an example body</p>'                               (5)
                attachment {                                                            (6)
                    filepath thePath                                                    (7)
                    filename 'test.pdf'                                                 (8)
                    mimeType 'application/pdf'                                          (9)
                    description 'An example pdf'                                        (10)
                }
            }

        then:
            status == EmailDeliveryStatus.STATUS_DELIVERED

            simpleEmailService.sendRawEmail(_) >> { SendRawEmailRequest request ->
                return new SendRawEmailResult().withMessageId('foobar')
            }
    }

}
1 Start building an email
2 Define subject of the email
3 Define the from address
4 Define one or more recipients
5 Define HTML body (alternatively you can declare plain text body as well)
6 Build an attachment
7 Define the location of the file to be sent
8 Define the file name (optional - deduced from the file)
9 Define the mime type (usually optional - deduced from the file)
10 Define the description of the file (optional)
Java
package com.agorapulse.micronaut.aws.ses;

import com.amazonaws.services.simpleemail.AmazonSimpleEmailService;
import com.amazonaws.services.simpleemail.model.SendRawEmailResult;
import org.junit.Assert;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import org.mockito.Mockito;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Collections;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

/**
 * Tests for sending emails with Java.
 */
public class SendEmailTest {

    @Rule public TemporaryFolder tmp = new TemporaryFolder();

    private AmazonSimpleEmailService simpleEmailService = mock(AmazonSimpleEmailService.class);

    private SimpleEmailService service = new DefaultSimpleEmailService(simpleEmailService);

    @Test
    public void testSendEmail() throws IOException {
        when(simpleEmailService.sendRawEmail(Mockito.any()))
            .thenReturn(new SendRawEmailResult().withMessageId("foobar"));

        File file = tmp.newFile("test.pdf");
        Files.write(file.toPath(), Collections.singletonList("not a real PDF"));
        String filepath = file.getCanonicalPath();

        EmailDeliveryStatus status = service.send(e ->                                  (1)
            e.subject("Hi Paul")                                                        (2)
                .from("subscribe@groovycalamari.com")                                   (3)
                .to("me@sergiodelamo.com")                                              (4)
                .htmlBody("<p>This is an example body</p>")                             (5)
                .attachment(a ->                                                        (6)
                    a.filepath(filepath)                                                (7)
                        .filename("test.pdf")                                           (8)
                        .mimeType("application/pdf")                                    (9)
                        .description("An example pdf")                                  (10)
                )
        );

        Assert.assertEquals(EmailDeliveryStatus.STATUS_DELIVERED, status);
    }
}
1 Start building an email
2 Define subject of the email
3 Define the from address
4 Define one or more recipients
5 Define HTML body (alternatively you can declare plain text body as well)
6 Build an attachment
7 Define the location of the file to be sent
8 Define the file name (optional - deduced from the file)
9 Define the mime type (usually optional - deduced from the file)
10 Define the description of the file (optional)

Please, see SimpleEmailService for the full reference.

Testing

It is recommended to simply mock the SimpleEmailService in your tests as it only contains a single abstract method.

1.6. Simple Notification Service (SNS)

Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.

This library provides two approaches to work with Simple Notification Service topics:

Installation
Gradle
compile 'com.agorapulse:micronaut-aws-sdk-sns:1.3.0.1'
Maven
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-sns</artifactId>
    <version>1.3.0.1</version>
</dependency>
Configuration

No configuration is required but some of the configuration properties may be useful for you.

application.yml
aws:
  sns:
    region: sa-east-1
    topic: MyTopic                                                                      (1)
    ios:
      arn: 'arn:aws:sns:eu-west-1:123456789:app/APNS/my-ios-app'                        (2)
    android:
      arn: 'arn:aws:sns:eu-west-1:123456789:app/GCM/my-android-app'                     (3)
    amazon:
      arn: 'arn:aws:sns:eu-west-1:123456789:app/ADM/my-amazon-app'                      (4)


    topics:                                                                             (5)
      test:                                                                             (6)
        topic: TestTopic
1 You can specify the default topic for SimpleNotificationService and @NotificationClient
2 Amazon Resource Name for the iOS application mobile push
3 Amazon Resource Name for the Android application mobile push
4 Amazon Resource Name for the Amazon application mobile push
5 You can define multiple configurations
6 Each of the configurations can be accessed using the @Named('test') SimpleNotificationService qualifier, or you can use the configuration name as the value of @NotificationClient('test')
Publishing with @NotificationClient

If you place the com.agorapulse.micronaut.aws.sns.annotation.NotificationClient annotation on an interface, then methods matching a predefined pattern will be automatically implemented. Methods containing the word sms will send text messages. Other methods of NotificationClient will publish new messages into the topic.

The following example shows many of the available method signatures for publishing records:

Publishing String Records
/*
 * SPDX-License-Identifier: Apache-2.0
 *
 * Copyright 2018-2020 Agorapulse.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.agorapulse.micronaut.aws.sns;

import com.agorapulse.micronaut.aws.sns.annotation.NotificationClient;
import com.agorapulse.micronaut.aws.sns.annotation.Topic;

import java.util.Map;

@NotificationClient                                                                     (1)
interface DefaultClient {

    String OTHER_TOPIC = "OtherTopic";

    @Topic("OtherTopic") String publishMessageToDifferentTopic(Pogo pogo);              (2)

    String publishMessage(Pogo message);                                                (3)
    String publishMessage(String subject, Pogo message);                                (4)
    String publishMessage(String message);                                              (5)
    String publishMessage(String subject, String message);

    String sendSMS(String phoneNumber, String message);                                 (6)
    String sendSms(String phoneNumber, String message, Map attributes);                 (7)

}
1 The @NotificationClient annotation makes the interface an SNS client
2 You can specify to which topic the message is published using the @Topic annotation
3 You can publish any object which can be converted into JSON.
4 You can add an additional subject to the published message (only useful for a few protocols, e.g. email)
5 You can publish a string message
6 You can send an SMS by using the word SMS in the name of the method. One argument must be the phone number and its name must contain the word number
7 You can provide additional attributes for the SMS message
The return value of the methods is the message ID returned by AWS.

By default, NotificationClient publishes messages into the default topic defined by the aws.sns.topic property. You can switch to a different configuration by changing the value of the annotation, such as @NotificationClient("other"), or by setting the topic property of the annotation, such as @NotificationClient(topic = "SomeTopic"). You can change the topic used by a particular method using the @Topic annotation as mentioned above.
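As a sketch, a named configuration such as the @NotificationClient("other") variant above would be backed by an entry in the aws.sns.topics map (the name other and the topic OtherTopic here are illustrative, not part of the library defaults):

```yaml
aws:
  sns:
    topics:
      other:
        topic: OtherTopic
```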

Simple Notification Service

SimpleNotificationService provides a middle-level API for creating, describing, and deleting topics. You can manage applications, endpoints, and devices, and you can send messages and notifications.

An instance of SimpleNotificationService is created for the default SNS configuration and for each topic configuration in the aws.sns.topics map. You should always use the @Named qualifier when injecting SimpleNotificationService if you have more than one topic configuration present, e.g. @Named("other") SimpleNotificationService otherService.

The following example shows some of the most common use cases for working with Amazon SNS.

Working with Topics
Creating Topic
String topicArn = service.createTopic(TEST_TOPIC);                              (1)

Topic found = service.listTopics().filter(t ->                                  (2)
    t.getTopicArn().endsWith(TEST_TOPIC)
).blockingFirst();
1 Create new topic of given name
2 The new topic is present within the list of all topics
Subscribe to Topic
String subArn = service.subscribeTopicWithEmail(topicArn, EMAIL);               (1)

String messageId = service.publishMessageToTopic(                               (2)
    topicArn,
    "Test Email",
    "Hello World"
);

service.unsubscribeTopic(subArn);                                               (3)
1 Subscribe to the topic with an email (there are more variants of this method to subscribe to most common protocols such as HTTP(S) endpoints, SQS, …​)
2 Publish message to the topic
3 Use the subscription ARN to unsubscribe from the topic
Delete Topic
service.deleteTopic(topicArn);                                                  (1)

Long zero = service.listTopics().filter(t ->                                    (2)
    t.getTopicArn().endsWith(TEST_TOPIC)
).count().blockingGet();
1 Delete the topic
2 The topic is no longer present within the list of all topics
Working with Applications
Working with Applications
String appArn = service.createAndroidApplication("my-app", API_KEY);        (1)

String endpoint = service.registerAndroidDevice(appArn, DEVICE_TOKEN, DATA);    (2)

Map<String, String> notif = new LinkedHashMap<>();
notif.put("badge", "9");
notif.put("data", "{\"foo\": \"some bar\"}");
notif.put("title", "Some Title");

String msgId = service.sendAndroidAppNotification(endpoint, notif, "Welcome");  (3)

service.validateAndroidDevice(appArn, endpoint, DEVICE_TOKEN, DATA);            (4)

service.unregisterDevice(endpoint);                                             (5)
1 Create new Android application (more platforms available)
2 Register Android device (more platforms available)
3 Send Android notification (more platforms available)
4 Validate Android device
5 Unregister device
Sending SMS
Sending SMS
Map<Object, Object> attrs = Collections.emptyMap();
String msgId = service.sendSMSMessage(PHONE_NUMBER, "Hello World", attrs);      (1)
1 Send a message to the phone number

Please, see SimpleNotificationService for the full reference.

Testing

You can very easily mock any of the interfaces and declarative services, but if you need a close-to-production experience, the SNS integration works well with Testcontainers and LocalStack.

You need to add the following dependencies to your build file:

Gradle
compile group: 'org.testcontainers', name: 'localstack', version: '1.10.2'
compile group: 'org.testcontainers', name: 'spock', version: '1.10.2'
Maven
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>spock</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>

Then you can set up your tests like this:

Groovy
@Testcontainers                                                                         (1)
class SimpleNotificationServiceSpec extends Specification {


    @Shared LocalStackContainer localstack = new LocalStackContainer('0.8.10')          (2)
        .withServices(SNS)

    @AutoCleanup ApplicationContext context                                             (3)

    SimpleNotificationService service

    void setup() {
        AmazonSNS sns = AmazonSNSClient                                                 (4)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(SNS))
            .withCredentials(localstack.defaultCredentialsProvider)
            .build()

        context = ApplicationContext.build('aws.sns.topic', TEST_TOPIC).build()         (5)
        context.registerSingleton(AmazonSNS, sns)
        context.start()

        service = context.getBean(SimpleNotificationService)                            (6)
    }

    // tests

}
1 Annotate the specification with @Testcontainers to let Spock manage the Testcontainers for you
2 Create an instance of LocalStackContainer with SNS support enabled
3 Prepare the reference to the ApplicationContext, @AutoCleanup guarantees closing the context after the tests
4 Create AmazonSNS client using the LocalStack configuration
5 Prepare the application context with the required properties and the service using LocalStack
6 You can obtain an instance of SimpleNotificationService from the context
Java
class SimpleNotificationServiceTest {

    public ApplicationContext context;                                                  (1)

    public SimpleNotificationService service;

    @Rule
    public LocalStackContainer localstack = new LocalStackContainer("0.8.10")            (2)
        .withServices(SNS);

    @Before
    public void setup() {
        AmazonSNS amazonSNS = AmazonSNSClient                                           (3)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(SNS))
            .withCredentials(localstack.getDefaultCredentialsProvider())
            .build();


        Map<String, Object> properties = new HashMap<>();                               (4)
        properties.put("aws.sns.topic", TEST_TOPIC);


        context = ApplicationContext.build(properties).build();                         (5)
        context.registerSingleton(AmazonSNS.class, amazonSNS);
        context.start();

        service = context.getBean(SimpleNotificationService.class);
    }

    @After
    public void cleanup() {
        if (context != null) {
            context.close();                                                            (6)
        }
    }

    // tests

}
1 Prepare the reference to the ApplicationContext
2 Create an instance of LocalStackContainer with SNS support enabled
3 Create AmazonSNS client using the LocalStack configuration
4 Prepare required properties
5 Prepare the application context with the required properties and the service using LocalStack
6 Close the application context after the test

1.7. Simple Queue Service (SQS)

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work.

This library provides two approaches to work with Simple Queue Service queues:

Installation
Gradle
compile 'com.agorapulse:micronaut-aws-sdk-sqs:1.3.0.1'
Maven
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-sqs</artifactId>
    <version>1.3.0.1</version>
</dependency>
Configuration

No configuration is required but some of the configuration properties may be useful for you.

application.yml
aws:
  sqs:
    region: sa-east-1
    # related to service behaviour
    queueNamePrefix: 'vlad_'                                                            (1)
    autoCreateQueue: false                                                              (2)
    cache: false                                                                        (3)

    # related to default queue
    queue: MyQueue                                                                      (4)
    fifo: true                                                                          (5)
    delaySeconds: 0                                                                     (6)
    messageRetentionPeriod: 345600                                                      (7)
    maximumMessageSize: 262144                                                          (8)
    visibilityTimeout: 30                                                               (9)

    queues:                                                                             (10)
      test:                                                                             (11)
        queue: TestQueue
1 Queue prefix is prepended to every queue name (may be useful for local development)
2 Whether to create any missing queue automatically (default false)
3 Whether to first fetch all queues and set up a queue-to-URL cache the first time the service is asked for a queue URL (default false)
4 You can specify the default queue for SimpleQueueService and @QueueClient
5 Whether the newly created queues are supposed to be FIFO queues (default false)
6 The length of time, in seconds, for which the delivery of all messages in the queue is delayed. Valid values: An integer from 0 to 900 (15 minutes). Default: 0.
7 The length of time, in seconds, for which Amazon SQS retains a message. Valid values: An integer representing seconds, from 60 (1 minute) to 1,209,600 (14 days). Default: 345,600 (4 days).
8 The limit of how many bytes a message can contain before Amazon SQS rejects it. Valid values: An integer from 1,024 bytes (1 KiB) up to 262,144 bytes (256 KiB). Default: 262,144 (256 KiB).
9 The visibility timeout for the queue, in seconds. Valid values: an integer from 0 to 43,200 (12 hours). Default: 30.
10 You can define multiple configurations
11 Each of the configurations can be accessed using the @Named('test') SimpleQueueService qualifier, or you can use the configuration name as the value of @QueueClient('test')
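The numeric limits in callouts 6 to 9 can be captured as simple range checks. The helper below is purely illustrative (the class and method names are not part of the library); it only makes the documented SQS attribute boundaries explicit:

```java
// Illustrative range checks for the SQS queue attributes documented above.
public class SqsLimits {

    // DelaySeconds: 0 to 900 seconds (15 minutes)
    static boolean validDelaySeconds(int value) {
        return value >= 0 && value <= 900;
    }

    // MessageRetentionPeriod: 60 seconds (1 minute) to 1,209,600 seconds (14 days)
    static boolean validMessageRetentionPeriod(int value) {
        return value >= 60 && value <= 1_209_600;
    }

    // MaximumMessageSize: 1,024 bytes (1 KiB) to 262,144 bytes (256 KiB)
    static boolean validMaximumMessageSize(int value) {
        return value >= 1_024 && value <= 262_144;
    }

    // VisibilityTimeout: 0 to 43,200 seconds (12 hours)
    static boolean validVisibilityTimeout(int value) {
        return value >= 0 && value <= 43_200;
    }

    public static void main(String[] args) {
        // the defaults from the configuration example above are all within range
        System.out.println(validDelaySeconds(0));
        System.out.println(validMessageRetentionPeriod(345_600));
        System.out.println(validMaximumMessageSize(262_144));
        System.out.println(validVisibilityTimeout(30));
    }
}
```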
Publishing with @QueueClient

If you place the com.agorapulse.micronaut.aws.sqs.annotation.QueueClient annotation on an interface, then methods matching a predefined pattern will be automatically implemented. Methods containing the word delete will delete queue messages. Other methods of QueueClient will publish new records into the queue.

The following example shows many of the available method signatures for publishing records:

Publishing String Records
/*
 * SPDX-License-Identifier: Apache-2.0
 *
 * Copyright 2018-2020 Agorapulse.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.agorapulse.micronaut.aws.sqs;

import com.agorapulse.micronaut.aws.sqs.annotation.Queue;
import com.agorapulse.micronaut.aws.sqs.annotation.QueueClient;

@QueueClient                                                                            (1)
interface DefaultClient {

    @Queue(value = "OtherQueue", group = "SomeGroup")
    String sendMessageToQueue(String message);                                          (2)

    String sendMessage(Pogo message);                                                   (3)

    String sendMessage(byte[] record);                                                  (4)

    String sendMessage(String record);                                                  (5)

    String sendMessage(String record, int delay);                                       (6)

    String sendMessage(String record, String group);                                    (7)

    String sendMessage(String record, int delay, String group);                         (8)

    void deleteMessage(String messageId);                                               (9)

    String OTHER_QUEUE = "OtherQueue";
}
1 The @QueueClient annotation makes the interface an SQS client
2 You can specify to which queue the message is published using the @Queue annotation; you can also specify the group for FIFO queues
3 You can publish any record object which can be converted into JSON.
4 You can publish a byte array record
5 You can publish a string record
6 You can publish a string with custom delay
7 You can publish a string with custom FIFO queue group
8 You can publish a string with custom delay and FIFO queue group
9 You can delete a published message using its message ID if the method name contains the word delete
The return value of the publishing methods is the message ID returned by AWS.

By default, QueueClient publishes records into the default queue defined by the aws.sqs.queue property. You can switch to a different configuration by changing the value of the annotation, such as @QueueClient("other"), or by setting the queue property of the annotation, such as @QueueClient(queue = "SomeQueue"). You can change the queue used by a particular method using the @Queue annotation as mentioned above.
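As a sketch, a named configuration such as the @QueueClient("other") variant above would be backed by an entry in the aws.sqs.queues map (the name other and the queue SomeQueue here are illustrative, not part of the library defaults):

```yaml
aws:
  sqs:
    queues:
      other:
        queue: SomeQueue
```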

Simple Queue Service

SimpleQueueService provides a middle-level API for creating, describing, and deleting queues. It allows you to publish, receive, and delete records.

An instance of SimpleQueueService is created for the default SQS configuration and for each queue configuration in the aws.sqs.queues map. You should always use the @Named qualifier when injecting SimpleQueueService if you have more than one queue configuration present, e.g. @Named("other") SimpleQueueService otherService.

The following example shows some of the most common use cases for working with Amazon SQS.

Creating Queue
String queueUrl = service.createQueue(TEST_QUEUE);                              (1)

assertTrue(service.listQueueUrls().contains(queueUrl));                         (2)
1 Create new queue of given name
2 The queue URL is present within the list of all queues' URLs
Describing Queue Attributes
Map<String, String> queueAttributes = service.getQueueAttributes(TEST_QUEUE);   (1)

assertEquals("0", queueAttributes.get("DelaySeconds"));                         (2)
1 Fetch queue’s attributes
2 You can read the queue’s attributes from the map
Delete Queue
service.deleteQueue(TEST_QUEUE);                                                (1)

assertFalse(service.listQueueUrls().contains(queueUrl));                        (2)
1 Delete queue
2 The queue URL is no longer present within the list of all queues' URLs
Working with Messages
String msgId = service.sendMessage(DATA);                                       (1)

assertNotNull(msgId);

List<Message> messages = service.receiveMessages();                             (2)
Message first = messages.get(0);

assertEquals(DATA, first.getBody());                                            (3)
assertEquals(msgId, first.getMessageId());
assertEquals(1, messages.size());

service.deleteMessage(msgId);                                                   (4)
1 Send a message
2 Receive messages from the queue (in another application)
3 Read message body
4 Developers are responsible for deleting messages from the queue themselves
Try to use AWS Lambda functions triggered by SQS messages to handle incoming SQS messages instead of implementing complex message handling logic yourselves.

Please, see SimpleQueueService for the full reference.

Testing

You can very easily mock any of the interfaces and declarative services, but if you need a close-to-production experience, the SQS integration works well with Testcontainers and LocalStack.

You need to add the following dependencies to your build file:

Gradle
compile group: 'org.testcontainers', name: 'localstack', version: '1.10.2'
compile group: 'org.testcontainers', name: 'spock', version: '1.10.2'
Maven
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>spock</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>

Then you can set up your tests like this:

Groovy
@Testcontainers                                                                         (1)
@RestoreSystemProperties                                                                (2)
class SimpleQueueServiceSpec extends Specification {


    @Shared LocalStackContainer localstack = new LocalStackContainer('0.8.10')          (3)
        .withServices(SQS)

    @AutoCleanup ApplicationContext context                                             (4)

    SimpleQueueService service

    void setup() {
        System.setProperty('com.amazonaws.sdk.disableCbor', 'true')                     (5)
        AmazonSQS sqs = AmazonSQSClient                                                 (6)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(SQS))
            .withCredentials(localstack.defaultCredentialsProvider)
            .build()

        context = ApplicationContext.build('aws.sqs.queue': TEST_QUEUE).build()         (7)
        context.registerSingleton(AmazonSQS, sqs)
        context.start()

        service = context.getBean(SimpleQueueService)                                   (8)
    }

    // tests

}
1 Annotate the specification with @Testcontainers to let Spock manage the Testcontainers for you
2 @RestoreSystemProperties guarantees that system properties will be restored after the test
3 Create an instance of LocalStackContainer with SQS support enabled
4 Prepare the reference to the ApplicationContext, @AutoCleanup guarantees closing the context after the tests
5 Disable CBOR protocol for SQS (not supported by the mock implementation)
6 Create AmazonSQS client using the LocalStack configuration
7 Prepare the application context with the required properties and the service using LocalStack
8 You can obtain an instance of SimpleQueueService from the context
Java
class SimpleQueueServiceTest {

    public ApplicationContext context;                                                  (1)

    public SimpleQueueService service;

    @Rule
    public LocalStackContainer localstack = new LocalStackContainer("0.8.10")           (2)
        .withServices(SQS);

    @Before
    public void setup() {
        System.setProperty("com.amazonaws.sdk.disableCbor", "true");                    (3)

        AmazonSQS amazonSQS = AmazonSQSClient                                           (4)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(SQS))
            .withCredentials(localstack.getDefaultCredentialsProvider())
            .build();


        Map<String, Object> properties = new HashMap<>();                               (5)
        properties.put("aws.sqs.queue", TEST_QUEUE);


        context = ApplicationContext.build(properties).build();                         (6)
        context.registerSingleton(AmazonSQS.class, amazonSQS);
        context.start();

        service = context.getBean(SimpleQueueService.class);
    }

    @After
    public void cleanup() {
        System.clearProperty("com.amazonaws.sdk.disableCbor");

        if (context != null) {
            context.close();                                                            (7)
        }
    }

    // tests

}
1 Prepare the reference to the ApplicationContext
2 Create an instance of LocalStackContainer with SQS support enabled
3 Disable CBOR protocol for SQS (not supported by the mock implementation)
4 Create AmazonSQS client using the LocalStack configuration
5 Prepare required properties
6 Prepare the application context with the required properties and the service using LocalStack
7 Close the application context after the test

1.8. Security Token Service (STS)

The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users).

This library provides basic support for Amazon STS using SecurityTokenService.

Installation
Gradle
compile 'com.agorapulse:micronaut-aws-sdk-sts:1.3.0.1'
Maven
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-sts</artifactId>
    <version>1.3.0.1</version>
</dependency>
Security Token Service

SecurityTokenService provides only one method (with multiple variations) to create credentials that assume a certain IAM role.

The following example shows how to create credentials for an assumed role.

Assume Role
service.assumeRole('session', 'arn:::my-role', 360) {
    externalId = '123456789'
}

Please, see SecurityTokenService for the full reference.

Testing

It is recommended to simply mock the SecurityTokenService in your tests as it only contains a single abstract method.

1.9. WebSockets for API Gateway

In a WebSocket API, the client and the server can both send messages to each other at any time. Backend servers can easily push data to connected users and devices, avoiding the need to implement complex polling mechanisms.

This library provides components for easy handling of incoming WebSocket proxied events, as well as for sending messages back to the clients.

Installation
Gradle
compile 'com.agorapulse:micronaut-aws-sdk-ag-ws:1.3.0.1'
Maven
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk-ag-ws</artifactId>
    <version>1.3.0.1</version>
</dependency>
Configuration

No configuration is required but some of the configuration properties may be useful for you.

application.yml
aws:
  websocket:
    region: sa-east-1
    connections:
      url: https://abcefgh.execute-api.eu-west-1.amazonaws.com/test/@connections        (1)

# Java Only
micronaut:
  function:
    name: lambda-echo-java                                                              (2)
1 You can specify the default connections URL for MessageSender
2 If you are creating Java functions don’t forget to specify the function’s name for deployments
The MessageSender bean is only present in the context if the aws.websocket.connections.url configuration property is set. Use MessageSenderFactory if you want to create a MessageSender manually using a URL which is not predefined.
Usage

The AWS SDK Lambda Events library does not contain events dedicated to the WebSocket API Gateway yet. You can use WebSocketConnectionRequest as an argument to the function handling connection and disconnection of the WebSocket, and WebSocketRequest for handling incoming messages.

The following examples assume that you have created the function using the mn create-function command.

The simplest example is an echo method which handles all incoming events, replies to incoming messages, and also publishes them to SNS:

Groovy
/*
 * SPDX-License-Identifier: Apache-2.0
 *
 * Copyright 2018-2020 Agorapulse.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.agorapulse.micronaut.aws.apigateway.ws

import com.agorapulse.micronaut.aws.apigateway.ws.event.EventType
import com.agorapulse.micronaut.aws.apigateway.ws.event.WebSocketRequest
import com.agorapulse.micronaut.aws.apigateway.ws.event.WebSocketResponse
import groovy.transform.Field

import javax.inject.Inject

@Inject @Field MessageSenderFactory factory                                             (1)
@Inject @Field TestTopicPublisher publisher                                             (2)

WebSocketResponse lambdaEcho(WebSocketRequest event) {                                  (3)
    MessageSender sender = factory.create(event.requestContext)                         (4)
    String connectionId = event.requestContext.connectionId                             (5)

    switch (event.requestContext.eventType) {
        case EventType.CONNECT:                                                         (6)
            // do nothing
            break
        case EventType.MESSAGE:                                                         (7)
            String message = "[$connectionId] ${event.body}"
            sender.send(connectionId, message)
            publisher.publishMessage(connectionId, message)
            break
        case EventType.DISCONNECT:                                                      (8)
            // do nothing
            break
    }

    return WebSocketResponse.OK                                                         (9)
}
1 Factory to create MessageSender if we want to reply to the message immediately
2 Service to publish to SNS to forward the message
3 WebSocketRequest can handle any incoming event
4 Create a MessageSender for the current client
5 connectionId is the unique identifier of the client
6 The CONNECT event signals that a new client has connected
7 The MESSAGE event signals a new incoming message
8 The DISCONNECT event signals that a client has disconnected
9 The method must always return WebSocketResponse.OK to signal success
Java
/*
 * SPDX-License-Identifier: Apache-2.0
 *
 * Copyright 2018-2020 Agorapulse.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.agorapulse.micronaut.aws.apigateway.ws;

import com.agorapulse.micronaut.aws.apigateway.ws.event.WebSocketRequest;
import com.agorapulse.micronaut.aws.apigateway.ws.event.WebSocketResponse;
import io.micronaut.function.FunctionBean;

import java.util.function.Function;

@FunctionBean("lambda-echo-java")
public class LambdaEchoJava implements Function<WebSocketRequest, WebSocketResponse> {

    private final MessageSenderFactory factory;                                         (1)
    private final TestTopicPublisher publisher;                                         (2)

    public LambdaEchoJava(MessageSenderFactory factory, TestTopicPublisher publisher) {
        this.factory = factory;
        this.publisher = publisher;
    }

    @Override
    public WebSocketResponse apply(WebSocketRequest event) {                            (3)
        MessageSender sender = factory.create(event.getRequestContext());               (4)
        String connectionId = event.getRequestContext().getConnectionId();              (5)

        switch (event.getRequestContext().getEventType()) {
            case CONNECT:                                                               (6)
                // do nothing
                break;
            case MESSAGE:                                                               (7)
                String message = "[" + connectionId + "] " + event.getBody();
                sender.send(connectionId, message);
                publisher.publishMessage(connectionId, message);
                break;
            case DISCONNECT:                                                            (8)
                // do nothing
                break;
        }

        return WebSocketResponse.OK;                                                    (9)
    }

}
1 Factory to create MessageSender if we want to reply to the message immediately
2 Service to publish to SNS to forward the message
3 WebSocketRequest can handle any incoming event
4 Create a MessageSender for the current client
5 connectionId is the unique identifier of the client
6 The CONNECT event signals that a new client has connected
7 The MESSAGE event signals a new incoming message
8 The DISCONNECT event signals that a client has disconnected
9 The method must always return WebSocketResponse.OK to signal success

Once the function is ready you can deploy it to AWS Lambda and set up a new API Gateway with the WebSocket API:

new websocket api
Figure 1. Create new WebSocket API
new websocket route
Figure 2. Create WebSocket API Routes

Another example is a simple AWS Lambda function reacting to any of the events supported by AWS Lambda and pushing messages to WebSocket clients.

There is no support for routing at the moment, but you can get the matched route from event.requestContext.routeKey.
Groovy
/*
 * SPDX-License-Identifier: Apache-2.0
 *
 * Copyright 2018-2020 Agorapulse.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.agorapulse.micronaut.aws.apigateway.ws

import com.amazonaws.AmazonClientException
import com.amazonaws.services.lambda.runtime.events.SNSEvent
import groovy.transform.Field

import javax.inject.Inject

@Inject @Field MessageSender sender                                                     (1)

void notify(SNSEvent event) {                                                           (2)
    event.records.each {
        try {
            sender.send(it.SNS.subject, "[SNS] $it.SNS.message")                        (3)
        } catch (AmazonClientException ignored) {
            // can be gone                                                              (4)
        }
    }
}
1 MessageSender can be injected if you specify the aws.websocket.connections.url configuration property
2 You can, for example, react to records published to the Simple Notification Service
3 Send a message to the client (in the previous example the connectionId was set to the subject of the SNS record)
4 If the client is already disconnected then AmazonClientException may occur
Java
/*
 * SPDX-License-Identifier: Apache-2.0
 *
 * Copyright 2018-2020 Agorapulse.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.agorapulse.micronaut.aws.apigateway.ws;

import com.amazonaws.AmazonClientException;
import com.amazonaws.services.lambda.runtime.events.SNSEvent;
import io.micronaut.function.FunctionBean;

import java.util.function.Consumer;

@FunctionBean("notification-handler")
public class NotificationHandler implements Consumer<SNSEvent> {

    private final MessageSender sender;                                                 (1)

    public NotificationHandler(MessageSender sender) {
        this.sender = sender;
    }

    @Override
    public void accept(SNSEvent event) {                                                (2)
        event.getRecords().forEach(it -> {
            try {
                String connectionId = it.getSNS().getSubject();
                String payload = "[SNS] " + it.getSNS().getMessage();
                sender.send(connectionId, payload);                                     (3)
            } catch (AmazonClientException ignored) {
                // can be gone                                                          (4)
            }
        });
    }

}
1 MessageSender can be injected if you specify the aws.websocket.connections.url configuration property
2 You can, for example, react to records published to the Simple Notification Service
3 Send a message to the client (in the previous example the connectionId was set to the subject of the SNS record)
4 If the client is already disconnected then AmazonClientException may occur
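As noted earlier, there is no routing support at the moment, only access to the matched route key. If you need to dispatch on the route yourself, a minimal, library-independent sketch could look like this (all names here are illustrative, not part of the library):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative manual router: registers a handler per route key (the value
// you would read from event.requestContext.routeKey) and falls back to a
// default handler for unmatched routes.
public class RouteDispatcher {

    private final Map<String, Function<String, String>> handlers = new HashMap<>();
    private final Function<String, String> fallback = body -> "unknown route";

    public RouteDispatcher on(String routeKey, Function<String, String> handler) {
        handlers.put(routeKey, handler);
        return this;
    }

    public String dispatch(String routeKey, String body) {
        return handlers.getOrDefault(routeKey, fallback).apply(body);
    }
}
```

A function body would then call `dispatch(event.getRequestContext().getRouteKey(), event.getBody())` and send the result back through the MessageSender.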

If you want to publish to the WebSockets using MessageSender, your Lambda function’s role must have the following permissions (preferably constrained to just your API resource):

ExecuteApiFullAccess Policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "execute-api:*",
            "Resource": "*"
        }
    ]
}
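If you prefer a policy constrained to a single API, the statement can be scoped to the @connections resource of that API. The ARN below is illustrative; substitute your own region, account id, API id and stage:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "execute-api:ManageConnections",
            "Resource": "arn:aws:execute-api:eu-west-1:123456789012:abcefgh/test/POST/@connections/*"
        }
    ]
}
```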
Testing

You can very easily mock any of the interfaces. Create the request events manually and follow the guide to test functions with Micronaut.

2. Micronaut for API Gateway Proxy

API Gateway Lambda Proxy support for Micronaut has been replaced by the official Micronaut AWS API Gateway Support.

Example MicronautHandler
/*
 * SPDX-License-Identifier: Apache-2.0
 *
 * Copyright 2018-2020 Agorapulse.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.agorapulse.micronaut.http.examples.planets;

import com.amazonaws.serverless.exceptions.ContainerInitializationException;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import io.micronaut.context.ApplicationContext;
import io.micronaut.context.ApplicationContextBuilder;
import io.micronaut.function.aws.proxy.MicronautLambdaContainerHandler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.function.Consumer;

public class MicronautHandler implements RequestStreamHandler {

    private static final Logger LOGGER = LoggerFactory.getLogger(MicronautHandler.class);

    private static MicronautLambdaContainerHandler handler;
    private static ApplicationContextBuilder builder;

    static {
        reset();
    }

    /**
     * Resets the current handler. For testing purposes only.
     */
    public static void reset() {
        reset(b -> {});
    }

    /**
     * Resets the current handler. For testing purposes only.
     *
     * @param configuration builder customizer
     */
    public static void reset(Consumer<ApplicationContextBuilder> configuration) {
        try {
            builder = ApplicationContext.build();
            configuration.accept(builder);
            handler = MicronautLambdaContainerHandler.getAwsProxyHandler(builder);
        } catch (ContainerInitializationException e) {
            // if we fail here, we re-throw the exception to force another cold start
            if (LOGGER.isErrorEnabled()) {
                LOGGER.error("Exception in container initialization", e);
            }
            throw new IllegalStateException("Could not initialize Micronaut", e);
        }
    }

    public static ApplicationContext getApplicationContext() {
        if (handler == null) {
            reset();
        }
        return handler.getApplicationContext();
    }

    @Override
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context)
            throws IOException {
        handler.proxyStream(inputStream, outputStream, context);
    }
}

2.1. Testing

You can still test the API Gateway Proxy integration using Gru for API Gateway:

Controller Spec
/*
 * SPDX-License-Identifier: Apache-2.0
 *
 * Copyright 2018-2020 Agorapulse.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.agorapulse.micronaut.http.examples.planets

import com.agorapulse.dru.Dru
import com.agorapulse.dru.dynamodb.persistence.DynamoDB
import com.agorapulse.gru.Gru
import com.agorapulse.gru.agp.ApiGatewayProxy
import com.amazonaws.services.dynamodbv2.datamodeling.IDynamoDBMapper
import io.micronaut.context.ApplicationContext
import org.junit.Rule
import spock.lang.Specification

/**
 * Test for planet controller.
 */
class PlanetControllerSpec extends Specification {

    @Rule private final Gru gru = Gru.equip(ApiGatewayProxy.steal(this) {               (1)
        map '/planet/{star}' to MicronautHandler                                        (2)
        map '/planet/{star}/{name}' to MicronautHandler
    })

    @Rule private final Dru dru = Dru.steal(this)

    void setup() {
        MicronautHandler.reset()                                                        (3)
        MicronautHandler.applicationContext.with { ApplicationContext ctx ->
            ctx.registerSingleton(IDynamoDBMapper, DynamoDB.createMapper(dru))          (4)
        }
        dru.add(new Planet(star: 'sun', name: 'mercury'))
        dru.add(new Planet(star: 'sun', name: 'venus'))
        dru.add(new Planet(star: 'sun', name: 'earth'))
        dru.add(new Planet(star: 'sun', name: 'mars'))
    }

    void 'get planet'() {                                                               (5)
        expect:
            gru.test {
                get('/planet/sun/earth')
                expect {
                    json 'earth.json'
                }
            }
    }

    void 'get planet which does not exist'() {
        expect:
            gru.test {
                get('/planet/sun/vulcan')
                expect {
                    status NOT_FOUND
                }
            }
    }

    void 'list planets by existing star'() {
        expect:
            gru.test {
                get('/planet/sun')
                expect {
                    json 'planetsOfSun.json'
                }
            }
    }

    void 'add planet'() {
        when:
            gru.test {
                post '/planet/sun/jupiter'
                expect {
                    status CREATED
                    json 'jupiter.json'
                }
            }
        then:
            gru.verify()
            dru.findAllByType(Planet).size() == 5
    }

    void 'delete planet'() {
        given:
            dru.add(new Planet(star: 'sun', name: 'pluto'))
        expect:
            dru.findAllByType(Planet).size() == 5
            gru.test {
                delete '/planet/sun/pluto'
                expect {
                    status NO_CONTENT
                    json 'pluto.json'
                }
            }
            dru.findAllByType(Planet).size() == 4
    }

}
1 Use ApiGatewayProxy client with Gru
2 Delegate to MicronautHandler (see above)
3 Reset the application context
4 Make changes in the application context
5 Test method using Gru
The advantage of using Gru is that you can reuse the existing test with a local server if required. The only thing which changes is the handler setup and the client being used (HTTP instead of API Gateway Proxy).

3. Micronaut Grails

The Micronaut Grails package has been moved into its own repository.