
A set of useful libraries for Micronaut. All the libraries are available in the JCenter Maven repository.
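
For example, with Gradle you can make the artifacts resolvable by adding the JCenter repository to your build file:

Gradle
repositories {
    jcenter()
}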

AWS SDK for Micronaut

AWS SDK for Micronaut is a successor of the Grails AWS SDK Plugin. If you are a Grails AWS SDK Plugin user, you should find many of the services familiar.

Provided integrations:

  • DynamoDB (including DynamoDB Accelerator)

  • Kinesis

  • Simple Storage Service (S3)

  • Simple Email Service (SES)

  • Simple Notification Service (SNS)

  • Simple Queue Service (SQS)

  • Security Token Service (STS)

Micronaut for API Gateway Proxy is handled separately in its own library.

Key concepts of the AWS SDK for Micronaut:

  • Fully leveraging Micronaut best practices

    • Low-level API clients such as AmazonDynamoDB available for dependency injection (see the sketch following this list)

    • Declarative clients and services such as @KinesisListener where applicable

    • Configuration-driven named service beans

    • Sensible defaults

    • Conditional beans based on the presence of classes on the classpath or the presence of specific configuration properties

  • Fully leveraging the existing AWS SDK configuration chains (e.g. the default credential provider chain and the default region provider chain)

  • Strong focus on ease of testing

    • Low-level API clients such as AmazonDynamoDB injected by Micronaut and overridable in tests

    • All high-level services hidden behind interfaces for easy mocking in tests

    • Declarative clients and services for easy mocking in tests

  • Java-enabled, but Groovy is a first-class citizen
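
A minimal sketch of injecting one of the low-level clients into a bean follows; the bean and method names are illustrative only:

Java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;

import javax.inject.Singleton;
import java.util.List;

@Singleton
public class TableNameProvider {

    private final AmazonDynamoDB client;                // low-level client provided by this library

    public TableNameProvider(AmazonDynamoDB client) {
        this.client = client;
    }

    public List<String> tableNames() {
        return client.listTables().getTableNames();     // standard AWS SDK call
    }
}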

In this documentation, the high-level approaches are discussed before the lower-level services.

Installation

Gradle
compile 'com.agorapulse:micronaut-aws-sdk:1.1.0'

// only required for DynamoDB and Kinesis integration
compile group: 'com.amazonaws', name: 'aws-java-sdk-dynamodb', version: '1.11.500'

// only required for DynamoDB Accelerator (DAX) integration
compile group: 'com.amazonaws', name: 'amazon-dax-client', version: '1.0.202017.0'

// only required for Kinesis integration
compile group: 'com.amazonaws', name: 'amazon-kinesis-client', version: '1.9.3'
compile group: 'com.amazonaws', name: 'aws-java-sdk-kinesis', version: '1.11.500'

// only required for S3 integration
compile group: 'com.amazonaws', name: 'aws-java-sdk-s3', version: '1.11.500'

// only required for SES integration
compile group: 'com.amazonaws', name: 'aws-java-sdk-ses', version: '1.11.500'

// only required for SNS integration
compile group: 'com.amazonaws', name: 'aws-java-sdk-sns', version: '1.11.500'

// only required for SQS integration
compile group: 'com.amazonaws', name: 'aws-java-sdk-sqs', version: '1.11.500'

// only required for STS integration
compile group: 'com.amazonaws', name: 'aws-java-sdk-sts', version: '1.11.500'
Maven
<dependency>
    <groupId>com.agorapulse</groupId>
    <artifactId>micronaut-aws-sdk</artifactId>
    <version>1.1.0</version>
</dependency>

<!-- only required for DynamoDB and Kinesis integration -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-dynamodb</artifactId>
    <version>1.11.500</version>
</dependency>

<!-- only required for DynamoDB Accelerator (DAX) integration -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>amazon-dax-client</artifactId>
    <version>1.0.202017.0</version>
</dependency>

<!-- only required for Kinesis integration -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>amazon-kinesis-client</artifactId>
    <version>1.9.3</version>
</dependency>

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-kinesis</artifactId>
    <version>1.11.500</version>
</dependency>

<!-- only required for S3 integration -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-s3</artifactId>
    <version>1.11.500</version>
</dependency>

<!-- only required for SES integration -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-ses</artifactId>
    <version>1.11.500</version>
</dependency>

<!-- only required for SNS integration -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-sns</artifactId>
    <version>1.11.500</version>
</dependency>

<!-- only required for SQS integration -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-sqs</artifactId>
    <version>1.11.500</version>
</dependency>

<!-- only required for STS integration -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-sts</artifactId>
    <version>1.11.500</version>
</dependency>

DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.

This library provides two approaches to working with DynamoDB tables and entities:

  • declarative services annotated with @Service

  • the middle-level DynamoDBService

Declarative Services with @Service

Declarative services are very similar to Grails GORM Data Services. If you place the com.agorapulse.micronaut.aws.dynamodb.annotation.Service annotation on an interface, then methods matching the predefined patterns will be automatically implemented.

Method Signatures

The following example shows many of the available method signatures:

Groovy
@Service(DynamoDBEntity)
interface DynamoDBItemDBService {

    DynamoDBEntity get(String hash, String rangeKey)
    DynamoDBEntity load(String hash, String rangeKey)
    List<DynamoDBEntity> getAll(String hash, List<String> rangeKeys)
    List<DynamoDBEntity> getAll(String hash, String... rangeKeys)
    List<DynamoDBEntity> loadAll(String hash, List<String> rangeKeys)
    List<DynamoDBEntity> loadAll(String hash, String... rangeKeys)

    DynamoDBEntity save(DynamoDBEntity entity)
    List<DynamoDBEntity> saveAll(DynamoDBEntity... entities)
    List<DynamoDBEntity> saveAll(Iterable<DynamoDBEntity> entities)

    int count(String hashKey)
    int count(String hashKey, String rangeKey)

    @Query({
        query(DynamoDBEntity) {
            hash hashKey
            range {
                eq DynamoDBEntity.RANGE_INDEX, rangeKey
            }
        }
    })
    int countByRangeIndex(String hashKey, String rangeKey)

    @Query({
        query(DynamoDBEntity) {
            hash hashKey
            range { between DynamoDBEntity.DATE_INDEX, after, before }
        }
    })
    int countByDates(String hashKey, Date after, Date before)

    Flowable<DynamoDBEntity> query(String hashKey)
    Flowable<DynamoDBEntity> query(String hashKey, String rangeKey)

    @Query({
        query(DynamoDBEntity) {
            hash hashKey
            range {
                eq DynamoDBEntity.RANGE_INDEX, rangeKey
            }
            only {
                rangeIndex
            }
        }
    })
    Flowable<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey)

    @Query({
        query(DynamoDBEntity) {
            hash hashKey
            range { between DynamoDBEntity.DATE_INDEX, after, before }
        }
    })
    List<DynamoDBEntity> queryByDates(String hashKey, Date after, Date before)

    void delete(DynamoDBEntity entity)
    void delete(String hashKey, String rangeKey)

    @Query({
        query(DynamoDBEntity) {
            hash hashKey
            range {
                eq DynamoDBEntity.RANGE_INDEX, rangeKey
            }
        }
    })
    int deleteByRangeIndex(String hashKey, String rangeKey)

    @Query({
        query(DynamoDBEntity) {
            hash hashKey
            range { between DynamoDBEntity.DATE_INDEX, after, before }
        }
    })
    int deleteByDates(String hashKey, Date after, Date before)

    @Update({
        update(DynamoDBEntity) {
            hash hashKey
            range rangeKey
            add 'number', 1
            returnUpdatedNew { number }
        }
    })
    Number increment(String hashKey, String rangeKey)

    @Update({
        update(DynamoDBEntity) {
            hash hashKey
            range rangeKey
            add 'number', -1
            returnUpdatedNew { number }
        }
    })
    Number decrement(String hashKey, String rangeKey)

    @Scan({
        scan(DynamoDBEntity) {
            filter {
                eq DynamoDBEntity.RANGE_INDEX, foo
            }
        }
    })
    Flowable<DynamoDBEntity> scanAllByRangeIndex(String foo)

}
Java
@Service(DynamoDBEntity.class)
public interface DynamoDBEntityService {

    class EqRangeIndex implements Function<Map<String, Object>, DetachedQuery> {
        public DetachedQuery apply(Map<String, Object> arguments) {
            return Builders.query(DynamoDBEntity.class)
                .hash(arguments.get("hashKey"))
                .range(r -> r.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("rangeKey")));
        }
    }

    class EqRangeProjection implements Function<Map<String, Object>, DetachedQuery> {
        public DetachedQuery apply(Map<String, Object> arguments) {
            return Builders.query(DynamoDBEntity.class)
                .hash(arguments.get("hashKey"))
                .range(r ->
                    r.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("rangeKey"))
                )
                .only(DynamoDBEntity.RANGE_INDEX);
        }
    }

    class EqRangeScan implements Function<Map<String, Object>, DetachedScan> {
        public DetachedScan apply(Map<String, Object> arguments) {
            return Builders.scan(DynamoDBEntity.class)
                .filter(f -> f.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("foo")));
        }
    }

    class BetweenDateIndex implements Function<Map<String, Object>, DetachedQuery> {
        public DetachedQuery apply(Map<String, Object> arguments) {
            return Builders.query(DynamoDBEntity.class)
                .hash(arguments.get("hashKey"))
                .range(r -> r.between(DynamoDBEntity.DATE_INDEX, arguments.get("after"), arguments.get("before")));
        }
    }

    class IncrementNumber implements Function<Map<String, Object>, DetachedUpdate> {
        public DetachedUpdate apply(Map<String, Object> arguments) {
            return Builders.update(DynamoDBEntity.class)
                .hash(arguments.get("hashKey"))
                .range(arguments.get("rangeKey"))
                .add("number", 1)
                .returnUpdatedNew(DynamoDBEntity::getNumber);
        }
    }

    class DecrementNumber implements Function<Map<String, Object>, DetachedUpdate> {
        public DetachedUpdate apply(Map<String, Object> arguments) {
            return Builders.update(DynamoDBEntity.class)
                .hash(arguments.get("hashKey"))
                .range(arguments.get("rangeKey"))
                .add("number", -1)
                .returnUpdatedNew(DynamoDBEntity::getNumber);
        }
    }

    DynamoDBEntity get(String hash, String rangeKey);

    DynamoDBEntity load(String hash, String rangeKey);

    List<DynamoDBEntity> getAll(String hash, List<String> rangeKeys);

    List<DynamoDBEntity> getAll(String hash, String... rangeKeys);

    List<DynamoDBEntity> loadAll(String hash, List<String> rangeKeys);

    List<DynamoDBEntity> loadAll(String hash, String... rangeKeys);

    DynamoDBEntity save(DynamoDBEntity entity);

    List<DynamoDBEntity> saveAll(DynamoDBEntity... entities);

    List<DynamoDBEntity> saveAll(Iterable<DynamoDBEntity> entities);

    int count(String hashKey);

    int count(String hashKey, String rangeKey);

    @Query(EqRangeIndex.class)
    int countByRangeIndex(String hashKey, String rangeKey);

    @Query(BetweenDateIndex.class)
    int countByDates(String hashKey, Date after, Date before);

    Flowable<DynamoDBEntity> query(String hashKey);

    Flowable<DynamoDBEntity> query(String hashKey, String rangeKey);

    @Query(EqRangeProjection.class)
    Flowable<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey);

    @Query(BetweenDateIndex.class)
    List<DynamoDBEntity> queryByDates(String hashKey, Date after, Date before);

    void delete(DynamoDBEntity entity);

    void delete(String hashKey, String rangeKey);

    @Query(EqRangeIndex.class)
    int deleteByRangeIndex(String hashKey, String rangeKey);

    @Query(BetweenDateIndex.class)
    int deleteByDates(String hashKey, Date after, Date before);

    @Update(IncrementNumber.class)
    Number increment(String hashKey, String rangeKey);

    @Update(DecrementNumber.class)
    Number decrement(String hashKey, String rangeKey);

    @Scan(EqRangeScan.class)
    Flowable<DynamoDBEntity> scanAllByRangeIndex(String foo);

}

The following table summarizes the supported method signatures:

Table 1. Basic Service Methods

  • save*
    Return type: T or List<T>
    Arguments: an entity, an array of entities or an iterable of entities
    Example: DynamoDBEntity save(DynamoDBEntity entity)
             List<DynamoDBEntity> saveAll(DynamoDBEntity... entities)
    Description: Persists the entity or the list of entities and returns the persisted value

  • get*, load*
    Return type: T or List<T>
    Arguments: the hash key and an optional range key, an array of range keys or an iterable of range keys, annotated with @HashKey and @RangeKey if the argument names do not contain the words hash or range
    Example: DynamoDBEntity load(String hashKey)
             List<DynamoDBEntity> getAll(@HashKey String parentId, String... rangeKeys)
    Description: Loads a single entity or a list of entities from the table. The range key is required for tables which define a range key

  • count*
    Return type: int
    Arguments: the hash key and an optional range key, annotated with @HashKey and @RangeKey if the argument names do not contain the words hash or range
    Example: int count(String hashKey)
             int count(@HashKey String parentId, String rangeKey)
    Description: Counts the items in the table. Beware, this can be a very expensive operation in DynamoDB. See Advanced Queries for advanced use cases

  • delete*
    Return type: void
    Arguments: an entity, or the hash key and an optional range key annotated with @HashKey and @RangeKey if the argument names do not contain the words hash or range
    Example: void delete(DynamoDBEntity entity)
             void delete(String hashKey, String rangeKey)
    Description: Deletes an item, which can be specified by an entity or by the hash key and an optional range key. See Advanced Queries for advanced use cases

  • list*, findAll*, query*
    Return type: Flowable<T>
    Arguments: an entity, or the hash key and an optional range key annotated with @HashKey and @RangeKey if the argument names do not contain the words hash or range
    Example: Flowable<DynamoDBEntity> query(String hashKey)
             List<DynamoDBEntity> query(String hashKey, String rangeKey)
    Description: Queries for all entities with the given hash key and/or range key

  • (none of the above)
    Return type: (contextual)
    Arguments: any arguments, which will be translated into the arguments map
    Example: (see below)
    Description: Query, scan or update. See Advanced Queries, Scanning and Updates for advanced use cases

Calling any declarative service method will automatically create the DynamoDB table if it does not already exist.
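
For illustration, a minimal usage sketch, assuming the DynamoDBEntityService interface declared above and assuming DynamoDBEntity exposes setters for its key properties (the property names are illustrative):

Java
DynamoDBEntityService service = context.getBean(DynamoDBEntityService.class);

DynamoDBEntity entity = new DynamoDBEntity();   // assumed no-arg constructor
entity.setParentId("1");                        // assumed hash key property
entity.setId("1");                              // assumed range key property

service.save(entity);                           // the table is created on first use if missing

DynamoDBEntity loaded = service.get("1", "1");
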
Advanced Queries

The DynamoDB integration does not support the feature known as dynamic finders. Instead, you can annotate any method with the @Query annotation to make it

  • counting method if its name begins with count

  • batch delete method if its name begins with delete

  • otherwise an advanced query method

Groovy
import static com.agorapulse.micronaut.aws.dynamodb.builder.Builders.*                  (1)


@Service(DynamoDBEntity)                                                                (2)
interface DynamoDBItemDBService {

    @Query({                                                                            (3)
        query(DynamoDBEntity) {
            hash hashKey                                                                (4)
            range {
                eq DynamoDBEntity.RANGE_INDEX, rangeKey                                 (5)
            }
            only {                                                                      (6)
                rangeIndex                                                              (7)
            }
        }
    })
    Flowable<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey)         (8)

}
1 Builders class provides all necessary factory methods and keywords
2 Annotate the interface with @Service, using the entity type as its value
3 @Query annotation accepts a closure which returns a query builder (see QueryBuilder for full reference)
4 Specify a hash key with hash method and method’s hashKey argument
5 Specify some range key criteria with the method’s rangeKey argument (see RangeConditionCollector for full reference)
6 You can limit which properties are returned from the query
7 Only rangeIndex property will be populated in the entities returned
8 The arguments have no special meaning but you can use them in the query. The method must return either Flowable or List of entities.
Java
@Service(DynamoDBEntity.class)                                                          (1)
public interface DynamoDBEntityService {

    class EqRangeProjection implements Function<Map<String, Object>, DetachedQuery> {   (2)
        public DetachedQuery apply(Map<String, Object> arguments) {
            return Builders.query(DynamoDBEntity.class)                                 (3)
                .hash(arguments.get("hashKey"))                                         (4)
                .range(r ->
                    r.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("rangeKey"))         (5)
                )
                .only(DynamoDBEntity.RANGE_INDEX);                                      (6)
        }
    }

    @Query(EqRangeProjection.class)                                                     (7)
    Flowable<DynamoDBEntity> queryByRangeIndex(String hashKey, String rangeKey);        (8)

}
1 Annotate the interface with @Service, using the entity type as its value
2 Define class which implements Function<Map<String, Object>, DetachedQuery>
3 Use Builders class to create a query builder for given DynamoDB entity (see QueryBuilder for full reference)
4 Specify a hash key with hash method and method’s hashKey argument
5 Specify some range key criteria with the method’s rangeKey argument (see RangeConditionCollector for full reference)
6 Only rangeIndex property will be populated in the entities returned
7 @Query annotation accepts a class which implements Function<Map<String, Object>, DetachedQuery>
8 The arguments have no special meaning but you can use them in the query using arguments map. The method must return either Flowable or List of entities.
Scanning

The DynamoDB integration does not support the feature known as dynamic finders. If you need to scan the table by attributes which are not indexed, you can annotate any method with the @Scan annotation to make it

  • counting method if its name begins with count

  • otherwise an advanced scan method

Groovy
import static com.agorapulse.micronaut.aws.dynamodb.builder.Builders.*                  (1)


@Service(DynamoDBEntity)                                                                (2)
interface DynamoDBItemDBService {

    @Scan({                                                                             (3)
        scan(DynamoDBEntity) {
            filter {
                eq DynamoDBEntity.RANGE_INDEX, foo                                      (4)
            }
        }
    })
    Flowable<DynamoDBEntity> scanAllByRangeIndex(String foo)                            (5)

}
1 Builders class provides all necessary factory methods and keywords
2 Annotate the interface with @Service, using the entity type as its value
3 @Scan annotation accepts a closure which returns a scan builder (see ScanBuilder for full reference)
4 Specify some filter criteria with the method’s foo argument (see RangeConditionCollector for full reference)
5 The arguments have no special meaning but you can use them in the scan definition. The method must return either Flowable or List of entities.
Java
@Service(DynamoDBEntity.class)                                                          (1)
public interface DynamoDBEntityService {

    class EqRangeScan implements Function<Map<String, Object>, DetachedScan> {          (2)
        public DetachedScan apply(Map<String, Object> arguments) {
            return Builders.scan(DynamoDBEntity.class)                                  (3)
                .filter(f -> f.eq(DynamoDBEntity.RANGE_INDEX, arguments.get("foo")));   (4)
        }
    }

    @Scan(EqRangeScan.class)                                                            (5)
    Flowable<DynamoDBEntity> scanAllByRangeIndex(String foo);                           (6)

}
1 Annotate the interface with @Service, using the entity type as its value
2 Define class which implements Function<Map<String, Object>, DetachedScan>
3 Use Builders class to create a scan builder for given DynamoDB entity (see ScanBuilder for full reference)
4 Specify some filter criteria with the method’s foo argument (see RangeConditionCollector for full reference)
5 @Scan annotation accepts a class which implements Function<Map<String, Object>, DetachedScan>
6 The arguments have no special meaning but you can use them in the scan definition. The method must return either Flowable or List of entities.
Updates

Declarative services allow you to execute fine-grained updates. Any method annotated with @Update will perform an update in the DynamoDB table.

Groovy
import static com.agorapulse.micronaut.aws.dynamodb.builder.Builders.*                  (1)


@Service(DynamoDBEntity)                                                                (2)
interface DynamoDBItemDBService {

    @Update({                                                                           (3)
        update(DynamoDBEntity) {
            hash hashKey                                                                (4)
            range rangeKey                                                              (5)
            add 'number', 1                                                             (6)
            returnUpdatedNew { number }                                                 (7)
        }
    })
    Number increment(String hashKey, String rangeKey)                                   (8)

}
1 Builders class provides all necessary factory methods and keywords
2 Annotate the interface with @Service, using the entity type as its value
3 @Update annotation accepts a closure which returns an update builder (see UpdateBuilder for full reference)
4 Specify a hash key with hash method and method’s hashKey argument
5 Specify a range key with range method and method’s rangeKey argument
6 Specify update operation - increment number attribute (see UpdateBuilder for full reference). You may have multiple update operations.
7 Specify what should be returned from the method (see UpdateBuilder for full reference).
8 The arguments have no special meaning but you can use them in the update definition. The method's return value depends on the value returned from the returnUpdatedNew mapper.
Java
@Service(DynamoDBEntity.class)                                                          (1)
public interface DynamoDBEntityService {

    class IncrementNumber implements Function<Map<String, Object>, DetachedUpdate> {    (2)
        public DetachedUpdate apply(Map<String, Object> arguments) {
            return Builders.update(DynamoDBEntity.class)                                (3)
                .hash(arguments.get("hashKey"))                                         (4)
                .range(arguments.get("rangeKey"))                                       (5)
                .add("number", 1)                                                       (6)
                .returnUpdatedNew(DynamoDBEntity::getNumber);                           (7)
        }
    }

    @Update(IncrementNumber.class)                                                      (8)
    Number increment(String hashKey, String rangeKey);                                  (9)

}
1 Annotate the interface with @Service, using the entity type as its value
2 Define class which implements Function<Map<String, Object>, DetachedUpdate>
3 Use Builders class to create an update builder for given DynamoDB entity (see UpdateBuilder for full reference)
4 Specify a hash key with hash method and method’s hashKey argument
5 Specify a range key with range method and method’s rangeKey argument
6 Specify update operation - increment number attribute (see UpdateBuilder for full reference). You may have multiple update operations.
7 Specify what should be returned from the method (see UpdateBuilder for full reference).
8 @Update annotation accepts a class which implements Function<Map<String, Object>, DetachedUpdate>
9 The arguments have no special meaning but you can use them in the update definition. The method's return value depends on the value returned from the returnUpdatedNew mapper.

DynamoDB Service

DynamoDBService provides a middle-level API for working with DynamoDB tables and entities. You can obtain an instance of DynamoDBService from DynamoDBServiceProvider, which can be injected into any bean.

Groovy
DynamoDBServiceProvider provider = context.getBean(DynamoDBServiceProvider)
DynamoDBService<DynamoDBEntity> s = provider.findOrCreate(DynamoDBEntity)       (1)

s.createTable()                                                                 (2)

s.save(new DynamoDBEntity(                                                      (3)
    parentId: '1',
    id: '1',
    rangeIndex: 'foo',
    date: REFERENCE_DATE.toDate()
))

s.get('1', '1')                                                                 (4)

s.query('1', DynamoDBEntity.RANGE_INDEX, 'bar').count == 1                      (5)

s.queryByDates('3', DynamoDBEntity.DATE_INDEX, [                                (6)
    after: REFERENCE_DATE.plusDays(9).toDate(),
    before: REFERENCE_DATE.plusDays(20).toDate(),
]).count == 1

s.increment('1', '1', 'number')                                                 (7)

s.delete(s.get('1', '1'))                                                       (8)

s.deleteAll('1', DynamoDBEntity.RANGE_INDEX, 'bar') == 1                        (9)
1 Obtain the instance of DynamoDBService from DynamoDBServiceProvider (provider can be injected)
2 Create table for the entity
3 Save an entity
4 Load the entity by its hash and range keys
5 Query the table for entities with given range index value
6 Query the table for entities having date between the specified dates
7 Increment a property for entity specified by hash and range keys
8 Delete an entity by object reference
9 Delete all entities with given range index value
Java
DynamoDBServiceProvider provider = ctx.getBean(DynamoDBServiceProvider.class);
DynamoDBService<DynamoDBEntity> s = provider.findOrCreate(DynamoDBEntity.class);(1)

assertNotNull(
    s.createTable(5L, 5L)                                                       (2)
);

assertNotNull(
    s.save(createEntity("1", "1", "foo", REFERENCE_DATE.toDate()))              (3)
);

assertNotNull(
    s.get("1", "1")                                                             (4)
);

assertEquals(1,
    s.query("1", DynamoDBEntity.RANGE_INDEX, "bar").getCount().intValue()        (5)
);

assertEquals(1,
    s.queryByDates(                                                             (6)
        "3",
        DynamoDBEntity.DATE_INDEX,
        REFERENCE_DATE.plusDays(9).toDate(),
        REFERENCE_DATE.plusDays(20).toDate()
    ).getCount().intValue()
);

s.increment("1", "1", "number");                                                (7)

s.delete(s.get("1", "1"));                                                      (8)

assertEquals(1,
    s.deleteAll("1", DynamoDBEntity.RANGE_INDEX, "bar")                         (9)
);
1 Obtain the instance of DynamoDBService from DynamoDBServiceProvider (provider can be injected)
2 Create table for the entity
3 Save an entity
4 Load the entity by its hash and range keys
5 Query the table for entities with given range index value
6 Query the table for entities having date between the specified dates
7 Increment a property for entity specified by hash and range keys
8 Delete an entity by object reference
9 Delete all entities with given range index value

Please see DynamoDBService for the full reference.

DynamoDB Accelerator (DAX)

You can enable DynamoDB Accelerator simply by setting the DAX endpoint as the aws.dax.endpoint property. Every operation performed using an injected AmazonDynamoDB, IDynamoDBMapper or a data service will then be performed against DAX instead of the DynamoDB tables.
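
For example (the cluster endpoint below is just a placeholder):

application.yml
aws:
  dax:
    endpoint: my-cluster.abc123.dax-clusters.eu-west-1.amazonaws.com:8111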

Please check the DAX and DynamoDB Consistency Models article to understand the consequences of using DAX instead of direct DynamoDB operations.

Make sure you have set up a proper policy to access the DAX cluster. See DAX Access Control for more information. The following policy allows every DAX operation on any resource. In production, you should constrain the scope to a single cluster.

DAX Access Policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DaxAllowAll",
            "Effect": "Allow",
            "Action": "dax:*",
            "Resource": "*"
        }
    ]
}

Testing

You can very easily mock any of the interfaces and declarative services, but if you need something closer to production, the DynamoDB integration works well with Testcontainers and LocalStack.

You need to add the following dependencies into your build file:

Gradle
compile group: 'org.testcontainers', name: 'localstack', version: '1.10.2'
compile group: 'org.testcontainers', name: 'spock', version: '1.10.2'
Maven
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>spock</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>

Then you can setup your tests like this:

Groovy
@Stepwise
@Testcontainers                                                                         (1)
class DefaultDynamoDBServiceSpec extends Specification {
    @AutoCleanup ApplicationContext context                                             (2)

    @Shared LocalStackContainer localstack = new LocalStackContainer()                  (3)
        .withServices(LocalStackContainer.Service.DYNAMODB)

    DynamoDBService<DynamoDBEntity> s
    AmazonDynamoDB amazonDynamoDB
    IDynamoDBMapper mapper

    void setup() {
        amazonDynamoDB = AmazonDynamoDBClient                                           (4)
            .builder()
            .withEndpointConfiguration(
                localstack.getEndpointConfiguration(LocalStackContainer.Service.DYNAMODB)
            )
            .withCredentials(
                localstack.defaultCredentialsProvider
            )
            .build()

        mapper = new DynamoDBMapper(amazonDynamoDB)

        context = ApplicationContext.build().build()
        context.registerSingleton(AmazonDynamoDB, amazonDynamoDB)                       (5)
        context.registerSingleton(IDynamoDBMapper, mapper)                              (6)
        context.start()

        DynamoDBServiceProvider provider = context.getBean(DynamoDBServiceProvider)     (7)
        s = provider.findOrCreate(DynamoDBEntity)                                       (8)
    }

    // test methods

}
1 Annotate the specification with @Testcontainers to let Spock manage the Testcontainers for you
2 Prepare the reference to the ApplicationContext, @AutoCleanup guarantees closing the context after the tests
3 Create an instance of LocalStackContainer with only DynamoDB support enabled
4 Create AmazonDynamoDB client using the LocalStack configuration
5 Register the client using LocalStack to the application context
6 Register the mapper using LocalStack to the application context
7 Obtain the provider bean
8 Obtain DynamoDBService for particular DynamoDB entity
Java
public class DynamoDBServiceTest {
    @Rule
    public LocalStackContainer localstack = new LocalStackContainer()                   (1)
        .withServices(LocalStackContainer.Service.DYNAMODB);

    public ApplicationContext ctx;                                                      (2)

    @Before
    public void setup() {
        AmazonDynamoDB amazonDynamoDB = AmazonDynamoDBClient                            (3)
            .builder()
            .withEndpointConfiguration(
                localstack.getEndpointConfiguration(LocalStackContainer.Service.DYNAMODB)
            )
            .withCredentials(
                localstack.getDefaultCredentialsProvider()
            )
            .build();

        IDynamoDBMapper mapper = new DynamoDBMapper(amazonDynamoDB);

        ctx = ApplicationContext.build().build();
        ctx.registerSingleton(AmazonDynamoDB.class, amazonDynamoDB);                    (4)
        ctx.registerSingleton(IDynamoDBMapper.class, mapper);                           (5)
        ctx.start();
    }

    @After
    public void cleanup() {
        if (ctx != null) {                                                              (6)
            ctx.close();
        }
    }

    @Test
    public void testSomething() {
        DynamoDBServiceProvider provider = ctx.getBean(DynamoDBServiceProvider.class);  (7)
        DynamoDBService<DynamoDBEntity> s = provider.findOrCreate(DynamoDBEntity.class);(8)

        // test code
    }
}
1 Create an instance of LocalStackContainer with only DynamoDB support enabled
2 Prepare the reference to the ApplicationContext
3 Create AmazonDynamoDB client using the LocalStack configuration
4 Register the client using LocalStack to the application context
5 Register the mapper using LocalStack to the application context
6 Close the application context after test execution
7 Obtain the provider bean
8 Obtain DynamoDBService for particular DynamoDB entity
You can obtain instances of the declarative services from the context as well.
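
For example, assuming the DynamoDBEntityService interface declared earlier:

Java
DynamoDBEntityService service = ctx.getBean(DynamoDBEntityService.class);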

Kinesis

Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information.

This library provides three approaches to working with Kinesis streams:

  • declarative publishing with @KinesisClient

  • declarative listening with @KinesisListener

  • the middle-level KinesisService

Configuration

By default, only aws.kinesis.application.name and aws.kinesis.listener.stream are required if you decide to use @KinesisListener. Otherwise, no configuration is required at all, though some of the configuration properties may still be useful for you.

application.yml
aws:
  kinesis:
    region: sa-east-1
    stream: MyStream                                                                    (1)

    streams:                                                                            (2)
      test:                                                                             (3)
        stream: TestStream

    application:
      name: MyKinesisApp                                                                (4)
    worker:
      id: rubble                                                                        (5)
    listener:
      stream: MyStreamToConsume                                                         (6)

    listeners:                                                                          (7)
      test:                                                                             (8)
        stream: TestStreamToConsume
1 You can specify the default stream for KinesisService and @KinesisClient
2 You can define multiple configurations
3 Each of the configurations can be accessed using the @Named('test') KinesisService qualifier, or you can define the configuration as the value of @KinesisClient('test')
4 The application name is required for @KinesisListener
5 Optional ID of the Kinesis worker (listener)
6 The stream to listen to is required for @KinesisListener
7 You can define multiple listener configurations
8 The name of the configuration will be used as the value of @KinesisListener('test')

Publishing with @KinesisClient

If you place the com.agorapulse.micronaut.aws.kinesis.annotation.KinesisClient annotation on an interface, then methods matching the predefined patterns will be automatically implemented. Every method of a KinesisClient puts new records into the stream.

The following example shows many of the available method signatures for publishing records:

Publishing String Records
@KinesisClient                                                                          (1)
interface DefaultClient {
    void putRecordString(String record);                                                (2)

    PutRecordResult putRecord(String partitionKey, String record);                      (3)

    void putRecordAnno(@PartitionKey String id, String record);                         (4)

    void putRecord(String partitionKey, String record, String sequenceNumber);          (5)

    void putRecordAnno(                                                                 (6)
        @PartitionKey String id,
        String record,
        @SequenceNumber String sqn
    );

    void putRecordAnnoNumbers(                                                          (7)
        @PartitionKey Long id,
        String record,
        @SequenceNumber int sequenceNumber
    );
}
1 The @KinesisClient annotation makes the interface a Kinesis client
2 You can put a String into the stream with a generated UUID as the partition key
3 You can use a predefined partition key
4 If the name of the argument does not contain the word partition then the @PartitionKey annotation must be used
5 You can put a String into the stream with a predefined partition key and a sequence number
6 If the name of the sequence number argument does not contain the word sequence then the @SequenceNumber annotation must be used
7 The types of the partition key and the sequence number do not matter as the values will always be converted to strings
Publishing Byte Array Records
@KinesisClient                                                                          (1)
interface DefaultClient {
    void putRecordBytes(byte[] record);                                                 (2)

    void putRecordDataByteArray(@PartitionKey String id, byte[] value);                 (3)

    PutRecordsResult putRecords(Iterable<PutRecordsRequestEntry> entries);              (4)

    PutRecordsResult putRecords(PutRecordsRequestEntry... entries);                     (5)

    PutRecordsResult putRecord(PutRecordsRequestEntry entry);                           (6)
}
1 The @KinesisClient annotation makes the interface a Kinesis client
2 You can put a byte array into the stream; a UUID will be generated as the partition key
3 If the name of the argument does not contain the word partition then the @PartitionKey annotation must be used
4 You can put several records wrapped into an iterable of PutRecordsRequestEntry
5 You can put several records wrapped into an array of PutRecordsRequestEntry
6 If the single argument is of type PutRecordsRequestEntry then a PutRecordsResult object is returned from the method even though only a single record has been published
Publishing Plain Old Java Objects
@KinesisClient                                                                          (1)
interface DefaultClient {
    void putRecordObject(Pogo pogo);                                                    (2)

    PutRecordsResult putRecordObjects(Pogo... pogo);                                    (3)

    PutRecordsResult putRecordObjects(Iterable<Pogo> pogo);                             (4)

    void putRecordDataObject(@PartitionKey String id, Pogo value);                      (5)
}
1 The @KinesisClient annotation makes the interface a Kinesis client
2 You can put any object into the stream; a UUID will be generated as the partition key and the object will be serialized to JSON
3 You can put an array of any objects into the stream; a UUID will be generated as the partition key for each record and each object will be serialized to JSON
4 You can put an iterable of any objects into the stream; a UUID will be generated as the partition key for each record and each object will be serialized to JSON
5 You can put any object into the stream with a predefined partition key; if the name of the argument does not contain the word partition then the @PartitionKey annotation must be used
Publishing Events
@KinesisClient                                                                          (1)
interface DefaultClient {
    PutRecordResult putEvent(MyEvent event);                                            (2)

    PutRecordsResult putEventsIterable(Iterable<MyEvent> events);                       (3)

    void putEventsArrayNoReturn(MyEvent... events);                                     (4)

    @Stream("OtherStream") PutRecordResult putEventToStream(MyEvent event);             (5)
}
1 The @KinesisClient annotation makes the interface a Kinesis client
2 You can put an object implementing Event into the stream
3 You can put an iterable of objects implementing Event into the stream
4 You can put an array of objects implementing Event into the stream
5 Without any parameters, @KinesisClient publishes to the default stream of the default configuration, but you can change the destination using the @Stream annotation on the method
The return value of the methods is PutRecordResult, or PutRecordsResult when putting multiple records, but it can always be omitted and replaced with void.

By default, KinesisClient publishes records into the default stream defined by the aws.kinesis.stream property. You can switch to a different configuration by changing the value of the annotation, such as @KinesisClient("other"), or by setting the stream property of the annotation, such as @KinesisClient(stream = "MyStream"). You can change the stream used by a particular method using the @Stream annotation as mentioned above.
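
For illustration, a minimal sketch of the three ways to target a stream; the interface names are placeholders and imports are omitted as in the examples above:

Java
@KinesisClient                                          // default configuration, default stream (aws.kinesis.stream)
interface DefaultStreamClient {
    void putRecord(String partitionKey, String record);
}

@KinesisClient("other")                                 // named configuration (aws.kinesis.streams.other.stream)
interface OtherConfigurationClient {
    void putRecord(String partitionKey, String record);
}

@KinesisClient(stream = "MyStream")                     // explicit stream name
interface ExplicitStreamClient {

    @Stream("OtherStream")                              // a single method can target yet another stream
    void putRecordToOtherStream(String partitionKey, String record);
}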

Listening with @KinesisListener

Before you start implementing your service with @KinesisListener, you may consider implementing a Lambda function instead.

If you place the com.agorapulse.micronaut.aws.kinesis.annotation.KinesisListener annotation on a method of any bean, the method will be triggered whenever new records arrive in the stream.

Listening to Events

@Singleton                                                                              (1)
public class KinesisListenerTester {

    @KinesisListener
    public void listenString(String string) {                                           (2)
        String message = "EXECUTED: listenString(" + string + ")";
        logExecution(message);
    }

    @KinesisListener
    public void listenRecord(Record record) {                                           (3)
        logExecution("EXECUTED: listenRecord(" + record + ")");
    }


    @KinesisListener
    public void listenStringRecord(String string, Record record) {                      (4)
        logExecution("EXECUTED: listenStringRecord(" + string + ", " + record + ")");
    }

    @KinesisListener
    public void listenObject(MyEvent event) {                                           (5)
        logExecution("EXECUTED: listenObject(" + event + ")");
    }

    @KinesisListener
    public void listenObjectRecord(MyEvent event, Record record) {                      (6)
        logExecution("EXECUTED: listenObjectRecord(" + event + ", " + record + ")");
    }

    @KinesisListener
    public void listenPogoRecord(Pogo event) {                                          (7)
        logExecution("EXECUTED: listenPogoRecord(" + event + ")");
    }

    public List<String> getExecutions() {
        return executions;
    }

    public void setExecutions(List<String> executions) {
        this.executions = executions;
    }

    private void logExecution(String message) {
        executions.add(message);
        System.err.println(message);
    }

    private List<String> executions = new CopyOnWriteArrayList<>();
}
1 A @KinesisListener method must be declared in a bean, e.g. a @Singleton
2 You can listen to just plain string records
3 You can listen to Record objects
4 You can listen to both string and Record objects
5 You can listen to objects implementing Event interface
6 You can listen to both Event and Record objects
7 You can listen to any object as long as it can be unmarshalled from the record payload

You can listen to a configuration other than the default one by changing the value of the annotation, such as @KinesisListener("other").

Multiple methods in a single application can listen to the same configuration (stream). In that case, every method will be executed with the incoming payload.
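
For illustration, a minimal sketch of a listener bound to a named configuration; the names are placeholders and imports are omitted as in the examples above:

Java
@Singleton
public class OtherStreamListener {

    @KinesisListener("other")                           // consumes the stream from aws.kinesis.listeners.other.stream
    public void onRecord(String record) {
        System.out.println("RECEIVED: " + record);
    }
}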

Kinesis Service

KinesisService provides a middle-level API for creating, describing, and deleting streams. You can manage shards as well as read records from particular shards.

An instance of KinesisService is created for the default Kinesis configuration and for each stream configuration in the aws.kinesis.streams map. You should always use the @Named qualifier when injecting KinesisService if you have more than one stream configuration present, e.g. @Named("other") KinesisService otherService.
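
For example, a minimal sketch assuming the test stream configuration from the Configuration section above:

Java
import javax.inject.Named;
import javax.inject.Singleton;

@Singleton
public class StreamsFacade {

    private final KinesisService defaultService;        // bound to aws.kinesis
    private final KinesisService testService;           // bound to aws.kinesis.streams.test

    public StreamsFacade(KinesisService defaultService, @Named("test") KinesisService testService) {
        this.defaultService = defaultService;
        this.testService = testService;
    }
}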

Please see KinesisService for the full reference.

Testing

You can very easily mock any of the interfaces and declarative services, but if you need something closer to production, the Kinesis integration works well with Testcontainers and LocalStack.

You need to add the following dependencies into your build file:

Gradle
compile group: 'org.testcontainers', name: 'localstack', version: '1.10.2'
compile group: 'org.testcontainers', name: 'spock', version: '1.10.2'
Maven
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>spock</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>

Then you can setup your tests like this:

Groovy
@Testcontainers                                                                         (1)
@RestoreSystemProperties                                                                (2)
class KinesisAnnotationsSpec extends Specification {

    private static final String TEST_STREAM = 'TestStream'
    private static final String APP_NAME = 'AppName'

    @Shared LocalStackContainer localstack = new LocalStackContainer('0.8.10')          (3)
        .withServices(KINESIS, DYNAMODB)

    @AutoCleanup ApplicationContext context                                             (4)

    void setup() {
        System.setProperty('com.amazonaws.sdk.disableCbor', 'true')                     (5)

        AmazonDynamoDB dynamo = AmazonDynamoDBClient                                    (6)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(DYNAMODB))
            .withCredentials(localstack.defaultCredentialsProvider)
            .build()

        AmazonKinesis kinesis = AmazonKinesisClient                                     (7)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(KINESIS))
            .withCredentials(localstack.defaultCredentialsProvider)
            .build()

        AmazonCloudWatch amazonCloudWatch = Mock(AmazonCloudWatch)

        context = ApplicationContext.build().properties(                                (8)
            'aws.kinesis.application.name': APP_NAME,
            'aws.kinesis.stream': TEST_STREAM,
            'aws.kinesis.listener.stream': TEST_STREAM,
            'aws.kinesis.listener.failoverTimeMillis': '1000',
            'aws.kinesis.listener.shardSyncIntervalMillis': '1000',
            'aws.kinesis.listener.idleTimeBetweenReadsInMillis': '1000',
            'aws.kinesis.listener.parentShardPollIntervalMillis': '1000',
            'aws.kinesis.listener.timeoutInSeconds': '1000',
            'aws.kinesis.listener.retryGetRecordsInSeconds': '1000',
            'aws.kinesis.listener.metricsLevel': 'NONE',
        ).build()
        context.registerSingleton(AmazonKinesis, kinesis)
        context.registerSingleton(AmazonDynamoDB, dynamo)
        context.registerSingleton(AmazonCloudWatch, amazonCloudWatch)
        context.registerSingleton(AWSCredentialsProvider, localstack.defaultCredentialsProvider)
        context.start()
    }

    void 'kinesis listener is executed'() {
        when:
            KinesisService service = context.getBean(KinesisService)                    (9)
            KinesisListenerTester tester = context.getBean(KinesisListenerTester)       (10)
            DefaultClient client = context.getBean(DefaultClient)                       (11)

            service.createStream()
            service.waitForActive()

            waitForWorkerReady(300, 100)

            Disposable subscription = publishEventAsync(tester, client)

            waitForReceivedMessages(tester, 300, 100)

            subscription.dispose()
        then:
            allTestEventsReceived(tester)
    }

}
1 Annotate the specification with @Testcontainers to let Spock manage the Testcontainers for you
2 @RestoreSystemProperties guarantees that system properties will be restored after the test
3 Create an instance of LocalStackContainer with Kinesis and DynamoDB (required by the listener) support enabled
4 Prepare the reference to the ApplicationContext, @AutoCleanup guarantees closing the context after the tests
5 Disable the CBOR protocol for Kinesis (not supported by LocalStack/Kinesalite)
6 Create AmazonDynamoDB client using the LocalStack configuration
7 Create AmazonKinesis client using the LocalStack configuration
8 Prepare the application context with the required properties and the services backed by LocalStack
9 You can obtain an instance of KinesisService from the context
10 You can obtain an instance of the declarative listener from the context
11 You can obtain an instance of the declarative client from the context
Java
public class KinesisTest {

    public ApplicationContext context;                                                  (1)

    @Rule
    public LocalStackContainer localstack = new LocalStackContainer("0.8.10")           (2)
        .withServices(DYNAMODB, KINESIS);

    @Before
    public void setup() {
        System.setProperty("com.amazonaws.sdk.disableCbor", "true");                    (3)

        AmazonDynamoDB amazonDynamoDB = AmazonDynamoDBClient                            (4)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(DYNAMODB))
            .withCredentials(localstack.getDefaultCredentialsProvider())
            .build();

        AmazonKinesis amazonKinesis = AmazonKinesisClient                               (5)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(KINESIS))
            .withCredentials(localstack.getDefaultCredentialsProvider())
            .build();

        AmazonCloudWatch cloudWatch = new MockCloudWatch();

        Map<String, Object> properties = new HashMap<>();                               (6)
        properties.put("aws.kinesis.application.name", "TestApp");
        properties.put("aws.kinesis.stream", TEST_STREAM);
        properties.put("aws.kinesis.listener.stream", TEST_STREAM);

        // you can set other custom client configuration properties
        properties.put("aws.kinesis.listener.failoverTimeMillis", "1000");
        properties.put("aws.kinesis.listener.shardSyncIntervalMillis", "1000");
        properties.put("aws.kinesis.listener.idleTimeBetweenReadsInMillis", "1000");
        properties.put("aws.kinesis.listener.parentShardPollIntervalMillis", "1000");
        properties.put("aws.kinesis.listener.timeoutInSeconds", "1000");
        properties.put("aws.kinesis.listener.retryGetRecordsInSeconds", "1000");
        properties.put("aws.kinesis.listener.metricsLevel", "NONE");


        context = ApplicationContext.build(properties).build();                         (7)
        context.registerSingleton(AmazonKinesis.class, amazonKinesis);
        context.registerSingleton(AmazonDynamoDB.class, amazonDynamoDB);
        context.registerSingleton(AmazonCloudWatch.class, cloudWatch);
        context.registerSingleton(AWSCredentialsProvider.class, localstack.getDefaultCredentialsProvider());
        context.start();
    }

    @After
    public void cleanup() {
        System.clearProperty("com.amazonaws.sdk.disableCbor");                          (8)
        if (context != null) {
            context.close();                                                            (9)
        }
    }

    @Test
    public void testJavaService() throws InterruptedException {
        KinesisService service = context.getBean(KinesisService.class);                 (10)
        KinesisListenerTester tester = context.getBean(KinesisListenerTester.class);    (11)
        DefaultClient client = context.getBean(DefaultClient.class);                    (12)

        service.createStream();
        service.waitForActive();

        waitForWorkerReady(300, 100);
        Disposable subscription = publishEventsAsync(tester, client);
        waitForReceivedMessages(tester, 300, 100);

        subscription.dispose();

        Assert.assertTrue(allTestEventsReceived(tester));
    }

}
1 Prepare the reference to the ApplicationContext
2 Create an instance of LocalStackContainer with Kinesis and DynamoDB (required by the listener) support enabled
3 Disable the CBOR protocol for Kinesis (not supported by LocalStack/Kinesalite)
4 Create AmazonDynamoDB client using the LocalStack configuration
5 Create AmazonKinesis client using the LocalStack configuration
6 Prepare required properties
7 Prepare the application context with the required properties and the services backed by LocalStack
8 Reset CBOR protocol settings after the test
9 Close the application context after the test
10 You can obtain an instance of KinesisService from the context
11 You can obtain an instance of the declarative listener from the context
12 You can obtain an instance of the declarative client from the context

Simple Storage Service (S3)

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

This library provides basic support for Amazon S3 through SimpleStorageService.

Configuration

You can store the name of the bucket in the configuration using the aws.s3.bucket property. You can create additional configurations by providing the aws.s3.buckets configuration map.

application.yml
aws:
  s3:
    region: sa-east-1
    bucket: MyBucket                                                                    (1)

    buckets:                                                                            (2)
      test:                                                                             (3)
        bucket: TestBucket
1 You can define the default bucket for the service
2 You can define multiple configurations
3 Each of the configurations can be accessed using the @Named('test') SimpleStorageService qualifier

Simple Storage Service

SimpleStorageService provides a middle-level API for managing buckets and uploading and downloading files.

An instance of SimpleStorageService is created for the default S3 configuration and for each bucket configuration in the aws.s3.buckets map. You should always use the @Named qualifier when injecting SimpleStorageService if you have more than one bucket configuration present, e.g. @Named("test") SimpleStorageService service.
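
For example, a minimal sketch assuming the test bucket configuration shown above:

Java
import javax.inject.Inject;
import javax.inject.Named;
import javax.inject.Singleton;

@Singleton
public class StorageFacade {

    @Inject @Named("test")
    SimpleStorageService testStorage;                   // bound to aws.s3.buckets.test

    @Inject
    SimpleStorageService defaultStorage;                // bound to aws.s3.bucket
}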

The following example shows some of the most common use cases for working with S3 buckets.

Creating Bucket
service.createBucket(MY_BUCKET);                                                (1)

assertTrue(service.listBucketNames().contains(MY_BUCKET));                      (2)
1 Create a new bucket with the given name
2 The bucket is present within the list of all bucket names
Upload File
File sampleContent = createFileWithSampleContent();

service.storeFile(TEXT_FILE_PATH, sampleContent);                               (1)

assertTrue(service.exists(TEXT_FILE_PATH));                                     (2)

Flowable<S3ObjectSummary> summaries = service.listObjectSummaries("foo");       (3)
assertEquals(Long.valueOf(0L), summaries.count().blockingGet());
1 Upload file
2 File is uploaded
3 File is present in the summaries of all files
Upload from InputStream
service.storeInputStream(                                                       (1)
    KEY,
    new ByteArrayInputStream(SAMPLE_CONTENT.getBytes()),
    buildMetadata()
);

Flowable<S3ObjectSummary> fooSummaries = service.listObjectSummaries("foo");    (2)
assertEquals(KEY, fooSummaries.blockingFirst().getKey());
1 Upload data from stream
2 Stream is uploaded
Generate URL
String url = service.generatePresignedUrl(KEY, TOMORROW);                       (1)

assertEquals(SAMPLE_CONTENT, download(url));                                    (2)
1 Generate presigned URL
2 Downloaded content corresponds with the expected content
Download File
File dir = tmp.newFolder();
File file = new File(dir, "bar.baz");                                           (1)

service.getFile(KEY, file);                                                     (2)
assertTrue(file.exists());

assertEquals(SAMPLE_CONTENT, new String(Files.readAllBytes(Paths.get(file.toURI()))));
1 Prepare a destination file
2 Download the file locally
Delete File
service.deleteFile(TEXT_FILE_PATH);                                             (1)
assertFalse(service.exists(TEXT_FILE_PATH));                                    (2)
1 Delete file
2 The file is no longer present
Delete Bucket
service.deleteBucket();                                                         (1)
assertFalse(service.listBucketNames().contains(MY_BUCKET));                     (2)
1 Delete the bucket
2 The bucket is no longer present

Please see SimpleStorageService for the full reference.

Testing

You can very easily mock SimpleStorageService, but if you need close-to-production behaviour, the S3 integration works well with Testcontainers and LocalStack.

You need to add the following dependencies into your build file:

Gradle
testCompile group: 'org.testcontainers', name: 'localstack', version: '1.10.2'
testCompile group: 'org.testcontainers', name: 'spock', version: '1.10.2'
Maven
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>spock</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>

Then you can set up your tests like this:

Groovy
@Stepwise
@Testcontainers                                                                         (1)
class SimpleStorageServiceSpec extends Specification {

    @AutoCleanup ApplicationContext context                                             (2)

    @Shared LocalStackContainer localstack = new LocalStackContainer()                  (3)
        .withServices(S3)

    @Rule TemporaryFolder tmp

    AmazonS3 amazonS3
    SimpleStorageService service

    void setup() {
        amazonS3 = AmazonS3Client                                                       (4)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(S3))
            .withCredentials(localstack.defaultCredentialsProvider)
            .build()

        context = ApplicationContext
            .build('aws.s3.bucket': MY_BUCKET)                                          (5)
            .build()
        context.registerSingleton(AmazonS3, amazonS3)                                   (6)
        context.start()

        service = context.getBean(SimpleStorageService)                                 (7)
    }

    // test methods

}
1 Annotate the specification with @Testcontainers to let Spock manage the Testcontainers for you
2 Prepare the reference to the ApplicationContext, @AutoCleanup guarantees closing the context after the tests
3 Create an instance of LocalStackContainer with S3 support enabled
4 Create AmazonS3 client using the LocalStack configuration
5 Set the default bucket
6 Register AmazonS3 service running against LocalStack
7 You can obtain an instance of SimpleStorageService from the context
Java
public class SimpleStorageServiceTest {

    @Rule
    public final LocalStackContainer localstack = new LocalStackContainer()            (1)
        .withServices(S3);

    @Rule
    public final TemporaryFolder tmp = new TemporaryFolder();

    private ApplicationContext ctx;                                                     (2)

    @Before
    public void setup() {
        AmazonS3 amazonS3 = AmazonS3Client                                              (3)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(S3))
            .withCredentials(localstack.getDefaultCredentialsProvider())
            .build();

        ctx = ApplicationContext
            .build(Collections.singletonMap("aws.s3.bucket", MY_BUCKET))
            .build();
        ctx.registerSingleton(AmazonS3.class, amazonS3);                                (4)
        ctx.start();
    }

    @After
    public void cleanup() {
        if (ctx != null) {                                                              (5)
            ctx.close();
        }
    }

    // test methods

}
1 Create an instance of LocalStackContainer with S3 support enabled
2 Prepare the reference to the ApplicationContext
3 Create AmazonS3 client using the LocalStack configuration
4 Register AmazonS3 service running against LocalStack
5 Don’t forget to close ApplicationContext

Simple Email Service (SES)

Amazon Simple Email Service (Amazon SES) is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails. It is a reliable, cost-effective service for businesses of all sizes that use email to keep in contact with their customers.

This library provides basic support for Amazon SES using the SimpleEmailService bean.

Simple Email Service

SimpleEmailService provides a DSL for creating and sending simple emails with attachments. Like the other services, it uses the default credential provider chain to obtain credentials.

The following example shows how to send an email with an attachment.

Groovy
package com.agorapulse.micronaut.aws.ses

import com.amazonaws.services.simpleemail.AmazonSimpleEmailService
import com.amazonaws.services.simpleemail.model.SendRawEmailRequest
import com.amazonaws.services.simpleemail.model.SendRawEmailResult
import org.junit.Rule
import org.junit.rules.TemporaryFolder
import spock.lang.Specification
import spock.lang.Subject

/**
 * Tests for sending emails with Groovy.
 */
class SendEmailSpec extends Specification {

    AmazonSimpleEmailService simpleEmailService = Mock(AmazonSimpleEmailService)

    @Rule
    TemporaryFolder tmp = new TemporaryFolder()

    @Subject
    SimpleEmailService service = new DefaultSimpleEmailService(simpleEmailService)

    void "send email"() {
        given:
            File file = tmp.newFile('test.pdf')
            file.text = 'not a real PDF'
            String thePath = file.canonicalPath
        when:
            EmailDeliveryStatus status = service.send {                                 (1)
                subject 'Hi Paul'                                                       (2)
                from 'subscribe@groovycalamari.com'                                     (3)
                to 'me@sergiodelamo.com'                                                (4)
                htmlBody '<p>This is an example body</p>'                               (5)
                attachment {                                                            (6)
                    filepath thePath                                                    (7)
                    filename 'test.pdf'                                                 (8)
                    mimeType 'application/pdf'                                          (9)
                    description 'An example pdf'                                        (10)
                }
            }

        then:
            status == EmailDeliveryStatus.STATUS_DELIVERED

            simpleEmailService.sendRawEmail(_) >> { SendRawEmailRequest request ->
                return new SendRawEmailResult().withMessageId('foobar')
            }
    }
}
1 Start building an email
2 Define the subject of the email
3 Define the from address
4 Define one or more recipients
5 Define the HTML body (alternatively you can declare a plain text body as well)
6 Build an attachment
7 Define the location of the file to be sent
8 Define the file name (optional - deduced from the file)
9 Define the mime type (usually optional - deduced from the file)
10 Define the description of the file (optional)
Java
package com.agorapulse.micronaut.aws.ses;

import com.amazonaws.services.simpleemail.AmazonSimpleEmailService;
import com.amazonaws.services.simpleemail.model.SendRawEmailResult;
import org.junit.Assert;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import org.mockito.Mockito;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Collections;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

/**
 * Tests for sending emails with Java.
 */
public class SendEmailTest {

    @Rule public TemporaryFolder tmp = new TemporaryFolder();

    private AmazonSimpleEmailService simpleEmailService = mock(AmazonSimpleEmailService.class);

    private SimpleEmailService service = new DefaultSimpleEmailService(simpleEmailService);

    @Test
    public void testSendEmail() throws IOException {
        when(simpleEmailService.sendRawEmail(Mockito.any()))
            .thenReturn(new SendRawEmailResult().withMessageId("foobar"));

        File file = tmp.newFile("test.pdf");
        Files.write(file.toPath(), Collections.singletonList("not a real PDF"));
        String filepath = file.getCanonicalPath();

        EmailDeliveryStatus status = service.send(e ->                                  (1)
            e.subject("Hi Paul")                                                        (2)
                .from("subscribe@groovycalamari.com")                                   (3)
                .to("me@sergiodelamo.com")                                              (4)
                .htmlBody("<p>This is an example body</p>")                             (5)
                .attachment(a ->                                                        (6)
                    a.filepath(filepath)                                                (7)
                        .filename("test.pdf")                                           (8)
                        .mimeType("application/pdf")                                    (9)
                        .description("An example pdf")                                  (10)
                )
        );

        Assert.assertEquals(EmailDeliveryStatus.STATUS_DELIVERED, status);
    }
}
1 Start building an email
2 Define the subject of the email
3 Define the from address
4 Define one or more recipients
5 Define the HTML body (alternatively you can declare a plain text body as well)
6 Build an attachment
7 Define the location of the file to be sent
8 Define the file name (optional - deduced from the file)
9 Define the mime type (usually optional - deduced from the file)
10 Define the description of the file (optional)

Please see SimpleEmailService for the full reference.

Testing

It is recommended to just mock SimpleEmailService in your tests as it only contains a single abstract method.

Simple Notification Service (SNS)

Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.

This library provides two approaches to work with Simple Notification Service topics:

  • @NotificationClient declarative clients for publishing messages

  • SimpleNotificationService middle-level API for working with topics, subscriptions, applications and devices

Configuration

No configuration is required but some of the configuration properties may be useful for you.

application.yml
aws:
  sns:
    region: sa-east-1
    topic: MyTopic                                                                      (1)
    ios:
      arn: 'arn:aws:sns:eu-west-1:123456789:app/APNS/my-ios-app'                        (2)
    android:
      arn: 'arn:aws:sns:eu-west-1:123456789:app/GCM/my-android-app'                     (3)
    amazon:
      arn: 'arn:aws:sns:eu-west-1:123456789:app/ADM/my-amazon-app'                      (4)


    topics:                                                                             (5)
      test:                                                                             (6)
        topic: TestTopic
1 You can specify the default topic for SimpleNotificationService and @NotificationClient
2 Amazon Resource Name for the iOS application mobile push
3 Amazon Resource Name for the Android application mobile push
4 Amazon Resource Name for the Amazon application mobile push
5 You can define multiple configurations
6 Each of the configurations can be accessed using the @Named('test') SimpleNotificationService qualifier, or you can set the configuration name as the value of @NotificationClient('test')

Publishing with @NotificationClient

If you place the com.agorapulse.micronaut.aws.sns.annotation.NotificationClient annotation on an interface, then methods matching the predefined patterns will be automatically implemented. Methods containing the word sms will send text messages; all other methods of the client will publish new messages into the topic.

The following example shows many of the available method signatures for publishing records:

Publishing Messages
package com.agorapulse.micronaut.aws.sns;

import com.agorapulse.micronaut.aws.Pogo;
import com.agorapulse.micronaut.aws.sns.annotation.NotificationClient;
import com.agorapulse.micronaut.aws.sns.annotation.Topic;

import java.util.Map;

@NotificationClient                                                                     (1)
interface DefaultClient {

    String OTHER_TOPIC = "OtherTopic";

    @Topic("OtherTopic") String publishMessageToDifferentTopic(Pogo pogo);              (2)

    String publishMessage(Pogo message);                                                (3)
    String publishMessage(String subject, Pogo message);                                (4)
    String publishMessage(String message);                                              (5)
    String publishMessage(String subject, String message);

    String sendSMS(String phoneNumber, String message);                                 (6)
    String sendSms(String phoneNumber, String message, Map attributes);                 (7)

}
1 The @NotificationClient annotation makes the interface an SNS client
2 You can specify to which topic the message is published using the @Topic annotation
3 You can publish any object which can be converted into JSON
4 You can add a subject to the published message (only useful for a few protocols, e.g. email)
5 You can publish a string message
6 You can send an SMS using the word SMS in the name of the method; one argument must be the phone number and its name must contain the word number
7 You can provide additional attributes for the SMS message
The return value of the methods is the message ID returned by AWS.

By default, NotificationClient publishes messages into the default topic defined by the aws.sns.topic property. You can switch to a different configuration by changing the value of the annotation, such as @NotificationClient("other"), or by setting the topic property of the annotation, such as @NotificationClient(topic = "SomeTopic"). You can change the topic used by a particular method using the @Topic annotation as mentioned above.
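For illustration, here is a minimal sketch of a client bound to the test configuration declared earlier; the interface and method names are hypothetical:

import com.agorapulse.micronaut.aws.sns.annotation.NotificationClient;
import com.agorapulse.micronaut.aws.sns.annotation.Topic;

@NotificationClient("test")                         // uses the 'test' configuration from aws.sns.topics
interface TestTopicClient {

    String publishMessage(String message);          // published into TestTopic

    @Topic("AnotherTopic")                          // overrides the destination for this method only
    String publishSomewhereElse(String message);

}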

Simple Notification Service

SimpleNotificationService provides a middle-level API for creating, describing, and deleting topics. It allows you to manage applications, endpoints and devices, and to send messages and notifications.

An instance of SimpleNotificationService is created for the default SNS configuration and for each topic configuration in the aws.sns.topics map. You should always use the @Named qualifier when injecting SimpleNotificationService if you have more than one topic configuration present, e.g. @Named("other") SimpleNotificationService otherService.

The following example shows some of the most common use cases for working with Amazon SNS.

Working with Topics
Creating Topic
String topicArn = service.createTopic(TEST_TOPIC);                              (1)

Topic found = service.listTopics().filter(t ->                                  (2)
    t.getTopicArn().endsWith(TEST_TOPIC)
).blockingFirst();
1 Create a new topic of the given name
2 The topic is present within the list of all topics
Subscribe to Topic
String subArn = service.subscribeTopicWithEmail(topicArn, EMAIL);               (1)

String messageId = service.publishMessageToTopic(                               (2)
    topicArn,
    "Test Email",
    "Hello World"
);

service.unsubscribeTopic(subArn);                                               (3)
1 Subscribe to the topic with an email (there are more variants of this method to subscribe via the most common protocols such as HTTP(S) endpoints, SQS, …)
2 Publish message to the topic
3 Use the subscription ARN to unsubscribe from the topic
Delete Topic
service.deleteTopic(topicArn);                                                  (1)

Long zero = service.listTopics().filter(t ->                                    (2)
    t.getTopicArn().endsWith(TEST_TOPIC)
).count().blockingGet();
1 Delete the topic
2 The topic is no longer present within the list of all topics
Working with Applications
Working with Applications
String appArn = service.createAndroidApplication("my-app", API_KEY);        (1)

String endpoint = service.registerAndroidDevice(appArn, DEVICE_TOKEN, DATA);    (2)

Map<String, String> notif = new LinkedHashMap<>();
notif.put("badge", "9");
notif.put("data", "{\"foo\": \"some bar\"}");
notif.put("title", "Some Title");

String msgId = service.sendAndroidAppNotification(endpoint, notif, "Welcome");  (3)

service.validateAndroidDevice(appArn, endpoint, DEVICE_TOKEN, DATA);            (4)

service.unregisterDevice(endpoint);                                             (5)
1 Create a new Android application (more platforms available)
2 Register an Android device (more platforms available)
3 Send an Android notification (more platforms available)
4 Validate the Android device
5 Unregister the device
Sending SMS
Sending SMS
Map<Object, Object> attrs = Collections.emptyMap();
String msgId = service.sendSMSMessage(PHONE_NUMBER, "Hello World", attrs);      (1)
1 Send a message to the phone number

Please see SimpleNotificationService for the full reference.

Testing

You can very easily mock any of the interfaces and declarative services, but if you need close-to-production behaviour, the SNS integration works well with Testcontainers and LocalStack.

You need to add the following dependencies into your build file:

Gradle
testCompile group: 'org.testcontainers', name: 'localstack', version: '1.10.2'
testCompile group: 'org.testcontainers', name: 'spock', version: '1.10.2'
Maven
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>spock</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>

Then you can set up your tests like this:

Groovy
@Testcontainers                                                                         (1)
class SimpleNotificationServiceSpec extends Specification {

    @Shared LocalStackContainer localstack = new LocalStackContainer('0.8.10')          (2)
        .withServices(SNS)

    @AutoCleanup ApplicationContext context                                             (3)

    SimpleNotificationService service

    void setup() {
        AmazonSNS sns = AmazonSNSClient                                                 (4)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(SNS))
            .withCredentials(localstack.defaultCredentialsProvider)
            .build()

        context = ApplicationContext.build('aws.sns.topic': TEST_TOPIC).build()         (5)
        context.registerSingleton(AmazonSNS, sns)
        context.start()

        service = context.getBean(SimpleNotificationService)                            (6)
    }

    // tests

}
1 Annotate the specification with @Testcontainers to let Spock manage the Testcontainers for you
2 Create an instance of LocalStackContainer with SNS support enabled
3 Prepare the reference to the ApplicationContext, @AutoCleanup guarantees closing the context after the tests
4 Create AmazonSNS client using the LocalStack configuration
5 Prepare the application context with required properties and service using LocalStack
6 You can obtain an instance of SimpleNotificationService from the context
Java
public class SimpleNotificationServiceTest {

    public ApplicationContext context;                                                  (1)

    public SimpleNotificationService service;

    @Rule
    public LocalStackContainer localstack = new LocalStackContainer("0.8.10")            (2)
        .withServices(SNS);

    @Before
    public void setup() {
        AmazonSNS amazonSNS = AmazonSNSClient                                           (3)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(SNS))
            .withCredentials(localstack.getDefaultCredentialsProvider())
            .build();


        Map<String, Object> properties = new HashMap<>();                               (4)
        properties.put("aws.sns.topic", TEST_TOPIC);


        context = ApplicationContext.build(properties).build();                         (5)
        context.registerSingleton(AmazonSNS.class, amazonSNS);
        context.start();

        service = context.getBean(SimpleNotificationService.class);
    }

    @After
    public void cleanup() {
        if (context != null) {
            context.close();                                                            (6)
        }
    }

    // tests

}
1 Prepare the reference to the ApplicationContext
2 Create an instance of LocalStackContainer with SNS support enabled
3 Create AmazonSNS client using the LocalStack configuration
4 Prepare required properties
5 Prepare the application context with the required properties and the service using LocalStack
6 Close the application context after the test

Simple Queue Service (SQS)

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work.

This library provides two approaches to work with Simple Queue Service queues:

  • @QueueClient declarative clients for sending messages

  • SimpleQueueService middle-level API for working with queues and messages

Configuration

No configuration is required but some of the configuration properties may be useful for you.

application.yml
aws:
  sqs:
    region: sa-east-1
    # related to service behaviour
    queueNamePrefix: 'vlad_'                                                            (1)
    autoCreateQueue: false                                                              (2)
    cache: false                                                                        (3)

    # related to default queue
    queue: MyQueue                                                                      (4)
    fifo: true                                                                          (5)
    delaySeconds: 0                                                                     (6)
    messageRetentionPeriod: 345600                                                      (7)
    maximumMessageSize: 262144                                                          (8)
    visibilityTimeout: 30                                                               (9)

    queues:                                                                             (10)
      test:                                                                             (11)
        queue: TestQueue
1 Queue prefix is prepended to every queue name (may be useful for local development)
2 Whether to create any missing queue automatically (default false)
3 Whether to fetch all queues and build a queue-name-to-URL cache the first time the service is asked for a queue URL (default false)
4 You can specify the default queue for SimpleQueueService and @QueueClient
5 Whether the newly created queues are supposed to be FIFO queues (default false)
6 The length of time, in seconds, for which the delivery of all messages in the queue is delayed. Valid values: An integer from 0 to 900 (15 minutes). Default: 0.
7 The length of time, in seconds, for which Amazon SQS retains a message. Valid values: An integer representing seconds, from 60 (1 minute) to 1,209,600 (14 days). Default: 345,600 (4 days).
8 The limit of how many bytes a message can contain before Amazon SQS rejects it. Valid values: An integer from 1,024 bytes (1 KiB) up to 262,144 bytes (256 KiB). Default: 262,144 (256 KiB).
9 The visibility timeout for the queue, in seconds. Valid values: an integer from 0 to 43,200 (12 hours). Default: 30.
10 You can define multiple configurations
11 Each of the configurations can be accessed using the @Named('test') SimpleQueueService qualifier, or you can set the configuration name as the value of @QueueClient('test')

Publishing with @QueueClient

If you place the com.agorapulse.micronaut.aws.sqs.annotation.QueueClient annotation on an interface, then methods matching the predefined patterns will be automatically implemented. Methods containing the word delete will delete queue messages; all other methods of the client will send new records into the queue.

The following example shows many of the available method signatures for publishing records:

Publishing Records
package com.agorapulse.micronaut.aws.sqs;

import com.agorapulse.micronaut.aws.Pogo;
import com.agorapulse.micronaut.aws.sqs.annotation.Queue;
import com.agorapulse.micronaut.aws.sqs.annotation.QueueClient;

@QueueClient                                                                            (1)
interface DefaultClient {

    @Queue(value = "OtherQueue", group = "SomeGroup")
    String sendMessageToQueue(String message);                                          (2)

    String sendMessage(Pogo message);                                                   (3)

    String sendMessage(byte[] record);                                                  (4)

    String sendMessage(String record);                                                  (5)

    String sendMessage(String record, int delay);                                       (6)

    String sendMessage(String record, String group);                                    (7)

    String sendMessage(String record, int delay, String group);                         (8)

    void deleteMessage(String messageId);                                               (9)

    String OTHER_QUEUE = "OtherQueue";
}
1 The @QueueClient annotation makes the interface an SQS client
2 You can specify to which queue the message is sent using the @Queue annotation; you can also specify the group for FIFO queues
3 You can publish any record object which can be converted into JSON
4 You can publish a byte array record
5 You can publish a string record
6 You can publish a string with custom delay
7 You can publish a string with custom FIFO queue group
8 You can publish a string with custom delay and FIFO queue group
9 You can delete a published message using the message ID if the method’s name contains the word delete
The return value of the publishing methods is the message ID returned by AWS.

By default, QueueClient sends records into the default queue defined by the aws.sqs.queue property. You can switch to a different configuration by changing the value of the annotation, such as @QueueClient("other"), or by setting the queue property of the annotation, such as @QueueClient(queue = "SomeQueue"). You can change the queue used by a particular method using the @Queue annotation as mentioned above.
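For illustration, here is a minimal sketch of a client bound to the test configuration declared earlier; the interface and method names are hypothetical:

import com.agorapulse.micronaut.aws.sqs.annotation.Queue;
import com.agorapulse.micronaut.aws.sqs.annotation.QueueClient;

@QueueClient("test")                                // uses the 'test' configuration from aws.sqs.queues
interface TestQueueClient {

    String sendMessage(String message);             // sent to TestQueue

    @Queue("AnotherQueue")                          // overrides the destination for this method only
    String sendSomewhereElse(String message);

}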

Simple Queue Service

SimpleQueueService provides a middle-level API for creating, describing, and deleting queues. It allows you to publish, receive and delete records.

An instance of SimpleQueueService is created for the default SQS configuration and for each queue configuration in the aws.sqs.queues map. You should always use the @Named qualifier when injecting SimpleQueueService if you have more than one queue configuration present, e.g. @Named("other") SimpleQueueService otherService.

The following example shows some of the most common use cases for working with Amazon SQS.

Creating Queue
String queueUrl = service.createQueue(TEST_QUEUE);                              (1)

assertTrue(service.listQueueUrls().contains(queueUrl));                         (2)
1 Create a new queue of the given name
2 The queue URL is present within the list of all queues' URLs
Describing Queue Attributes
Map<String, String> queueAttributes = service.getQueueAttributes(TEST_QUEUE);   (1)

assertEquals("0", queueAttributes.get("DelaySeconds"));                         (2)
1 Fetch queue’s attributes
2 You can read the queue’s attributes from the map
Delete Queue
service.deleteQueue(TEST_QUEUE);                                                (1)

assertFalse(service.listQueueUrls().contains(queueUrl));                        (2)
1 Delete the queue
2 The queue URL is no longer present within the list of all queues' URLs
Working with Messages
String msgId = service.sendMessage(DATA);                                       (1)

assertNotNull(msgId);

List<Message> messages = service.receiveMessages();                             (2)
Message first = messages.get(0);

assertEquals(DATA, first.getBody());                                            (3)
assertEquals(msgId, first.getMessageId());
assertEquals(1, messages.size());

service.deleteMessage(msgId);                                                   (4)
1 Send a message
2 Receive messages from the queue (in another application)
3 Read message body
4 Developers are responsible for deleting the message from the queue themselves
Consider using AWS Lambda functions triggered by SQS messages to handle incoming messages instead of implementing complex message handling logic yourself.
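For illustration, here is a minimal sketch of such a function, assuming your version of the aws-lambda-java-events library (used elsewhere in this guide) already provides SQSEvent; the class and function names are hypothetical:

import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import io.micronaut.function.FunctionBean;

import java.util.function.Consumer;

@FunctionBean("sqs-message-handler")
public class SqsMessageHandler implements Consumer<SQSEvent> {

    @Override
    public void accept(SQSEvent event) {
        for (SQSEvent.SQSMessage message : event.getRecords()) {
            // messages consumed through a Lambda trigger are deleted by AWS automatically on success
            System.out.println(message.getBody());
        }
    }

}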

Please see SimpleQueueService for the full reference.

Testing

You can very easily mock any of the interfaces and declarative services, but if you need close-to-production behaviour, the SQS integration works well with Testcontainers and LocalStack.

You need to add the following dependencies into your build file:

Gradle
testCompile group: 'org.testcontainers', name: 'localstack', version: '1.10.2'
testCompile group: 'org.testcontainers', name: 'spock', version: '1.10.2'
Maven
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>spock</artifactId>
    <version>1.10.2</version>
    <scope>test</scope>
</dependency>

Then you can set up your tests like this:

Groovy
@Testcontainers                                                                         (1)
@RestoreSystemProperties                                                                (2)
class SimpleQueueServiceSpec extends Specification {

    @Shared LocalStackContainer localstack = new LocalStackContainer('0.8.10')          (3)
        .withServices(SQS)

    @AutoCleanup ApplicationContext context                                             (4)

    SimpleQueueService service

    void setup() {
        System.setProperty('com.amazonaws.sdk.disableCbor', 'true')                     (5)
        AmazonSQS sqs = AmazonSQSClient                                                 (6)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(SQS))
            .withCredentials(localstack.defaultCredentialsProvider)
            .build()

        context = ApplicationContext.build('aws.sqs.queue': TEST_QUEUE).build()         (7)
        context.registerSingleton(AmazonSQS, sqs)
        context.start()

        service = context.getBean(SimpleQueueService)                                   (8)
    }

    // tests

}
1 Annotate the specification with @Testcontainers to let Spock manage the Testcontainers for you
2 @RestoreSystemProperties guarantees that system properties will be restored after the test
3 Create an instance of LocalStackContainer with SQS support enabled
4 Prepare the reference to the ApplicationContext, @AutoCleanup guarantees closing the context after the tests
5 Disable CBOR protocol for SQS (not supported by the mock implementation)
6 Create AmazonSQS client using the LocalStack configuration
7 Prepare the application context with required properties and service using LocalStack
8 You can obtain an instance of SimpleQueueService from the context
Java
public class SimpleQueueServiceTest {

    public ApplicationContext context;                                                  (1)

    public SimpleQueueService service;

    @Rule
    public LocalStackContainer localstack = new LocalStackContainer("0.8.10")           (2)
        .withServices(SQS);

    @Before
    public void setup() {
        System.setProperty("com.amazonaws.sdk.disableCbor", "true");                    (3)

        AmazonSQS amazonSQS = AmazonSQSClient                                           (4)
            .builder()
            .withEndpointConfiguration(localstack.getEndpointConfiguration(SQS))
            .withCredentials(localstack.getDefaultCredentialsProvider())
            .build();


        Map<String, Object> properties = new HashMap<>();                               (5)
        properties.put("aws.sqs.queue", TEST_QUEUE);


        context = ApplicationContext.build(properties).build();                         (6)
        context.registerSingleton(AmazonSQS.class, amazonSQS);
        context.start();

        service = context.getBean(SimpleQueueService.class);
    }

    @After
    public void cleanup() {
        System.clearProperty("com.amazonaws.sdk.disableCbor");

        if (context != null) {
            context.close();                                                            (7)
        }
    }

    // tests

}
1 Prepare the reference to the ApplicationContext
2 Create an instance of LocalStackContainer with SQS support enabled
3 Disable CBOR protocol for SQS (not supported by the mock implementation)
4 Create AmazonSQS client using the LocalStack configuration
5 Prepare required properties
6 Prepare the application context with the required properties and the service using LocalStack
7 Close the application context after the test

Security Token Service (STS)

The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users).

This library provides basic support for Amazon STS using the SecurityTokenService bean.

Security Token Service

SecurityTokenService provides only one method (with multiple variations) to create credentials that assume a certain IAM role.

The following example shows how to create credentials for an assumed role.

Assume Role
service.assumeRole('session', 'arn:::my-role', 360) {
    externalId = '123456789'
}
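For Java, here is a minimal sketch of the same call, assuming a Consumer-based variant of assumeRole which mirrors the Groovy closure above and delegates to the underlying AssumeRoleRequest:

service.assumeRole("session", "arn:::my-role", 360, request ->
    request.setExternalId("123456789")      // any other AssumeRoleRequest property can be set the same way
);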

Please see SecurityTokenService for the full reference.

Testing

It is recommended to just mock SecurityTokenService in your tests as it only contains a single abstract method.

WebSockets for API Gateway

In a WebSocket API, the client and the server can both send messages to each other at any time. Backend servers can easily push data to connected users and devices, avoiding the need to implement complex polling mechanisms.

This library provides components for easy handling of incoming WebSocket proxied events as well as for sending messages back to the clients.

Configuration

No configuration is required but some of the configuration properties may be useful for you.

application.yml
aws:
  websocket:
    region: sa-east-1
    connections:
      url: https://abcefgh.execute-api.eu-west-1.amazonaws.com/test/@connections        (1)

# Java Only
micronaut:
  function:
    name: lambda-echo-java                                                              (2)
1 You can specify the default connections URL for MessageSender
2 If you are creating Java functions, don’t forget to specify the function’s name for deployments
The MessageSender bean is only present in the context if the aws.websocket.connections.url configuration property is set. Use MessageSenderFactory if you want to create a MessageSender manually for a URL which is not predefined.
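For illustration, here is a minimal sketch of creating a MessageSender manually. The class name is hypothetical, and it assumes MessageSenderFactory exposes a create variant accepting the connections URL, as the note above suggests:

import javax.inject.Singleton;

@Singleton
public class ManualNotifier {

    private final MessageSender sender;

    public ManualNotifier(MessageSenderFactory factory) {
        // assumption: create accepts a connections URL which is not predefined in the configuration
        this.sender = factory.create("https://abcefgh.execute-api.eu-west-1.amazonaws.com/test/@connections");
    }

    public void ping(String connectionId) {
        sender.send(connectionId, "ping");
    }

}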

Usage

The AWS SDK Lambda Events library does not yet contain events dedicated to the WebSocket API Gateway. You can use WebSocketConnectionRequest as an argument to a function handling connection and disconnection of the WebSocket, and WebSocketRequest for handling incoming messages.

The following examples assume that you have created a function using the mn create-function command.

The simplest example is an echo method which handles all incoming events, replies to incoming messages and also publishes them to SNS:

Groovy
package com.agorapulse.micronaut.aws.apigateway.ws

import com.agorapulse.micronaut.aws.apigateway.ws.event.EventType
import com.agorapulse.micronaut.aws.apigateway.ws.event.WebSocketRequest
import com.agorapulse.micronaut.aws.apigateway.ws.event.WebSocketResponse
import groovy.transform.Field

import javax.inject.Inject

@Inject @Field MessageSenderFactory factory                                             (1)
@Inject @Field TestTopicPublisher publisher                                             (2)

WebSocketResponse lambdaEcho(WebSocketRequest event) {                                  (3)
    MessageSender sender = factory.create(event.requestContext)                         (4)
    String connectionId = event.requestContext.connectionId                             (5)

    switch (event.requestContext.eventType) {
        case EventType.CONNECT:                                                         (6)
            // do nothing
            break
        case EventType.MESSAGE:                                                         (7)
            String message = "[$connectionId] ${event.body}"
            sender.send(connectionId, message)
            publisher.publishMessage(connectionId, message)
            break
        case EventType.DISCONNECT:                                                      (8)
            // do nothing
            break
    }

    return WebSocketResponse.OK                                                         (9)
}
1 Factory to create MessageSender if we want to reply to the message immediately
2 Service to publish to SNS to forward the message
3 WebSocketRequest can handle any incoming event
4 Create MessageSender for current client
5 connectionId is the unique identifier of the client
6 The CONNECT event signals that a new client has connected
7 The MESSAGE event signals a new incoming message
8 The DISCONNECT event signals that a client has disconnected
9 The method must always return WebSocketResponse.OK to signal success
Java
package com.agorapulse.micronaut.aws.apigateway.ws;

import com.agorapulse.micronaut.aws.apigateway.ws.event.WebSocketRequest;
import com.agorapulse.micronaut.aws.apigateway.ws.event.WebSocketResponse;
import io.micronaut.function.FunctionBean;

import java.util.function.Function;

@FunctionBean("lambda-echo-java")
public class LambdaEchoJava implements Function<WebSocketRequest, WebSocketResponse> {

    private final MessageSenderFactory factory;                                         (1)
    private final TestTopicPublisher publisher;                                         (2)

    public LambdaEchoJava(MessageSenderFactory factory, TestTopicPublisher publisher) {
        this.factory = factory;
        this.publisher = publisher;
    }

    @Override
    public WebSocketResponse apply(WebSocketRequest event) {                            (3)
        MessageSender sender = factory.create(event.getRequestContext());               (4)
        String connectionId = event.getRequestContext().getConnectionId();              (5)

        switch (event.getRequestContext().getEventType()) {
            case CONNECT:                                                               (6)
                // do nothing
                break;
            case MESSAGE:                                                               (7)
                String message = "[" + connectionId + "] " + event.getBody();
                sender.send(connectionId, message);
                publisher.publishMessage(connectionId, message);
                break;
            case DISCONNECT:                                                            (8)
                // do nothing
                break;
        }

        return WebSocketResponse.OK;                                                    (9)
    }

}
1 Factory to create MessageSender if we want to reply to the message immediately
2 Service to publish to SNS to forward the message
3 WebSocketRequest can handle any incoming event
4 Create MessageSender for current client
5 connectionId is the unique identifier of the client
6 The CONNECT event signals that a new client has connected
7 The MESSAGE event signals a new incoming message
8 The DISCONNECT event signals that a client has disconnected
9 The method must always return WebSocketResponse.OK to signal success

Once the function is ready, you can deploy it to AWS Lambda and set up a new API Gateway WebSocket API.

Figure 1. Create new WebSocket API
Figure 2. Create WebSocket API Routes

Another example is a simple AWS Lambda function which reacts to any of the events supported by AWS Lambda and pushes messages to WebSocket clients.

There is no support for routing at the moment, but you can get the matched route from event.requestContext.routeKey.
Groovy
package com.agorapulse.micronaut.aws.apigateway.ws

import com.amazonaws.AmazonClientException
import com.amazonaws.services.lambda.runtime.events.SNSEvent
import groovy.transform.Field

import javax.inject.Inject

@Inject @Field MessageSender sender                                                     (1)

void notify(SNSEvent event) {                                                           (2)
    event.records.each {
        try {
            sender.send(it.SNS.subject, "[SNS] $it.SNS.message")                        (3)
        } catch (AmazonClientException ignored) {
            // can be gone                                                              (4)
        }
    }
}
1 MessageSender can be injected if you specify the aws.websocket.connections.url configuration property
2 You can, for example, react to records published into Simple Notification Service
3 Send a message to the client (in the previous example the connectionId was set to the subject of the SNS record)
4 If the client is already disconnected then an AmazonClientException may occur
Java
package com.agorapulse.micronaut.aws.apigateway.ws;

import com.amazonaws.AmazonClientException;
import com.amazonaws.services.lambda.runtime.events.SNSEvent;
import io.micronaut.function.FunctionBean;

import java.util.function.Consumer;

@FunctionBean("notification-handler")
public class NotificationHandler implements Consumer<SNSEvent> {

    private final MessageSender sender;                                                 (1)

    public NotificationHandler(MessageSender sender) {
        this.sender = sender;
    }

    @Override
    public void accept(SNSEvent event) {                                                (2)
        event.getRecords().forEach(it -> {
            try {
                String connectionId = it.getSNS().getSubject();
                String payload = "[SNS] " + it.getSNS().getMessage();
                sender.send(connectionId, payload);                                     (3)
            } catch (AmazonClientException ignored) {
                // can be gone                                                          (4)
            }
        });
    }

}
1 MessageSender can be injected if you specify the aws.websocket.connections.url configuration property
2 You can, for example, react to records published into Simple Notification Service
3 Send a message to the client (in the previous example the connectionId was set to the subject of the SNS record)
4 If the client is already disconnected then an AmazonClientException may occur

If you want to publish to WebSockets using MessageSender, your Lambda function’s role must have the following permissions (preferably constrained to just your API resource):

ExecuteApiFullAccess Policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "execute-api:*",
            "Resource": "*"
        }
    ]
}

Testing

You can very easily mock any of the interfaces. Create the request events manually and follow the guide to test functions with Micronaut.

Micronaut for API Gateway Proxy

API Gateway Lambda Proxy support for Micronaut enables using most of the Micronaut HTTP server features such as controllers, filters and status annotations. See the Micronaut website for extensive documentation.

You develop your application as you would develop any other server application using Micronaut HTTP capabilities. For example, you can create the following controller:

Example Controller
package com.agorapulse.micronaut.http.examples.planets

import io.micronaut.http.HttpStatus
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Delete
import io.micronaut.http.annotation.Get
import io.micronaut.http.annotation.Post
import io.micronaut.http.annotation.Status

/**
 * Planet controller.
 */
@Controller('/planet')
class PlanetController {

    private final PlanetDBService planetDBService

    PlanetController(PlanetDBService planetDBService) {
        this.planetDBService = planetDBService
    }

    @Get('/{star}')
    List<Planet> list(String star) {
        return planetDBService.findAllByStar(star)
    }

    @Get('/{star}/{name}')
    Planet show(String star, String name) {
        return planetDBService.get(star, name)
    }

    @Post('/{star}/{name}') @Status(HttpStatus.CREATED)
    Planet save(String star, String name) {
        Planet planet = new Planet(star: star, name: name)
        planetDBService.save(planet)
        return planet
    }

    @Delete('/{star}/{name}') @Status(HttpStatus.NO_CONTENT)
    Planet delete(String star, String name) {
        Planet planet = show(star, name)
        planetDBService.delete(planet)
        return planet
    }

}

This controller would be able to handle the following URIs and methods after deployment using Micronaut for API Gateway Proxy:

  • GET /planet/{star}

  • GET /planet/{star}/{name}

  • POST /planet/{star}/{name}

  • DELETE /planet/{star}/{name}

This library helps with translating API Gateway Proxy requests and responses into their Micronaut counterparts. It currently does not handle creating the API mappings on AWS. These need to be created manually and must match the URL routes.

On top of the standard features, you can use the api_gateway_proxy environment to detect that the application is running using this library.
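For example, a bean can be restricted to this environment using @Requires; the bean name below is hypothetical:

import io.micronaut.context.annotation.Requires;

import javax.inject.Singleton;

// only loaded when the application runs behind the API Gateway Proxy
@Singleton
@Requires(env = "api_gateway_proxy")
public class ApiGatewayOnlyService {
}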

The following beans can also be injected if necessary:

  • com.amazonaws.services.lambda.runtime.Context

  • com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent

  • com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent.ProxyRequestContext

  • com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent.RequestIdentity

The application context is shared for the lifetime of the Lambda instance. Request-related beans are reset before each execution. This is the standard behaviour of Micronaut functions, allowing them to benefit from hot deployments.

Installation

The easiest way to start is to fork the Micronaut AWS API Gateway Proxy Starter Project.

For lack of Maven skills, this guide explains the Gradle build setup only.

If you want to add the library to an existing project, then you need to do the setup manually.

Gradle buildSrc

To prevent errors on machines which do not have any AWS credentials set, you need to provide a LambdaHelper class to obtain the AWS account ID for deployments. It is also a convenient place to manage all the dependencies for the build scripts.

buildSrc/build.gradle
apply plugin: 'groovy'                                                                  (1)

repositories {
    jcenter()
    mavenCentral()
    maven { url 'https://plugins.gradle.org/m2/' }
}

dependencies {
    compile gradleApi()
    compile localGroovy()

    compile 'jp.classmethod.aws:gradle-aws-plugin:0.38'                                 (2)
    compile 'io.spring.gradle:dependency-management-plugin:1.0.6.RELEASE'               (3)
    compile "com.github.jengelman.gradle.plugins:shadow:4.0.2"                          (4)
    compile 'net.ltgt.gradle:gradle-apt-plugin:0.19'                                    (5)
}
1 Applying Groovy plugin for the buildSrc itself (for LambdaHelper implementation)
2 Gradle AWS plugin used for deployment
3 Gradle Dependency Management plugin used for managing Micronaut dependencies
4 Gradle Shadow plugin optionally used by the local server subproject
5 Gradle APT plugin used for Micronaut annotation processing
buildSrc/src/main/groovy/lambda/LambdaHelper.groovy
package lambda

import com.amazonaws.SdkClientException
import groovy.transform.CompileStatic
import jp.classmethod.aws.gradle.AwsPluginExtension
import org.gradle.api.Project

@CompileStatic
class LambdaHelper {

    private LambdaHelper() { }

    // see https://github.com/classmethod/gradle-aws-plugin/pull/160
    static String getAwsAccountId(Project project) {                                    (1)
        try {
            return project.getExtensions().getByType(AwsPluginExtension).accountId
        } catch (SdkClientException ignored) {
            project.logger.lifecycle("AWS credentials not configured!")
            return '000000000000'
        }
    }

}
1 The helper class provides a single method to obtain the AWS account ID safely

Shared Gradle Files

As long as you expect all the subprojects to be Micronaut projects, you can share the configuration in the root build.gradle file. The following configuration will enable full Java and Groovy Micronaut support in the subprojects.

build.gradle
subprojects {

    apply plugin: "groovy"
    apply plugin: "io.spring.dependency-management"
    apply plugin: "com.github.johnrengelman.shadow"
    apply plugin: "net.ltgt.apt-eclipse"
    apply plugin: "net.ltgt.apt-idea"


    version "0.1"
    group "micronaut.aws.api.gateway.proxy.starter"

    repositories {
        mavenLocal()
        mavenCentral()
        maven { url "https://jcenter.bintray.com" }
    }

    dependencyManagement {
        imports {
            mavenBom "io.micronaut:micronaut-bom:1.1.0"
        }
    }

    dependencies {
        annotationProcessor "io.micronaut:micronaut-inject-java"
        annotationProcessor "io.micronaut:micronaut-validation"

        compile "io.micronaut:micronaut-inject"
        compile "io.micronaut:micronaut-validation"
        compile "io.micronaut:micronaut-runtime"
        compile "io.micronaut:micronaut-runtime-groovy"

        compileOnly "io.micronaut:micronaut-inject-java"
        compileOnly "io.micronaut:micronaut-inject-groovy"

        runtime "ch.qos.logback:logback-classic:1.2.3"

        testCompile("org.spockframework:spock-core") {
            exclude group: "org.codehaus.groovy", module: "groovy-all"
        }

        testCompile "io.micronaut:micronaut-inject-groovy"
        testCompile "io.micronaut:micronaut-inject-java"

        testCompile "junit:junit:4.12"
        testCompile "org.hamcrest:hamcrest-all:1.3"
    }

    compileJava.options.compilerArgs += '-parameters'
    compileTestJava.options.compilerArgs += '-parameters'
}

Each AWS Lambda subproject also shares a common setup. We can store it in the gradle/lambda.gradle file.

gradle/lambda.gradle
configurations {
    lambdaCompile.extendsFrom runtime                                                   (1)
    testCompile.extendsFrom lambdaCompile                                               (2)
}

dependencies {
    lambdaCompile "com.agorapulse:micronaut-function-aws-agp:1.1.0"                 (3)

    compile "io.micronaut:micronaut-http-server"                                        (4)
    compile "io.micronaut:micronaut-router"                                             (5)

    // Gru for AWS Lambda can help you test lambda functions
    // https://agorapulse.github.io/gru/
    testCompile "com.agorapulse:gru-api-gateway:0.6.6"                                  (6)
}

task buildZip(type: Zip) {                                                              (7)
    from compileJava
    from compileGroovy
    from processResources
    into('lib') {
        from configurations.lambdaCompile
    }
}

build.dependsOn buildZip                                                                (8)
1 Create a new configuration lambdaCompile to be used only for tests and the deployed package
2 Include lambdaCompile libraries in tests as well
3 The integration library is only important for the deployed package
4 Use just micronaut-http-server library as a dependency (not micronaut-http-server-netty)
5 Micronaut router is also required
6 You can optionally use Gru for testing
7 Adds a task to build Lambda deployment archive
8 Adds buildZip to the default build task

API Gateway Subproject Build File

Every API Gateway Lambda project must at least contain the following deployment definition, and it needs to apply the shared lambda.gradle file.

build.gradle
import lambda.LambdaHelper
import com.amazonaws.services.lambda.model.Runtime
import jp.classmethod.aws.gradle.lambda.AWSLambdaMigrateFunctionTask

apply from: '../gradle/lambda.gradle'                                                   (1)

task deployLambda(                                                                      (2)
    type: AWSLambdaMigrateFunctionTask,
    dependsOn: build,
    group: 'deploy'
)  {
    functionName = 'MicronautHelloWorld'
    handler = 'com.agorapulse.micronaut.agp.ApiGatewayProxyHandler::handleRequest'      (3)
    role = "arn:aws:iam::${LambdaHelper.getAwsAccountId(project)}:role/lambda_basic_execution"
    runtime = Runtime.Java8                                                             (4)
    zipFile = buildZip.archivePath                                                      (5)
    memorySize = 1024
    timeout = 30
}
1 Import helper Gradle script
2 Add task to deploy to AWS Lambda
3 Lambda function handler must be com.agorapulse.micronaut.agp.ApiGatewayProxyHandler::handleRequest
4 Runtime must be Java8
5 The archive must be the result of the buildZip task

Local Server

The biggest advantage of the Micronaut for API Gateway Proxy integration library is the ability to easily run the application locally.

build.gradle
apply plugin: 'application'                                                             (1)

dependencies {
    compile project(':hello-world')                                                     (2)

    compile "io.micronaut:micronaut-http-server-netty"                                  (3)

    testCompile "com.agorapulse:gru-http:0.6.6"                                         (4)
}

shadowJar {                                                                             (5)
    mergeServiceFiles()
}

run.jvmArgs('-noverify', '-XX:TieredStopAtLevel=1')

mainClassName = "starter.Application"                                                   (6)
1 Apply the application plugin so you will be able to run the server as an application locally
2 Depend on every API Gateway subproject you want to include into the local server
3 You need a real Micronaut HTTP server implementation to run the server
4 You can optionally use Gru for testing the local server
5 If you decide to run from Shadow JAR you need to merge the service files
6 Replace with your own application class

Testing

The easiest way to test the API Gateway Proxy integration is using the Gru API Gateway testing client. The library should already be on the classpath if you have followed the steps above or if you are using the starter project.

Controller Spec
package com.agorapulse.micronaut.http.examples.planets

import com.agorapulse.dru.Dru
import com.agorapulse.dru.dynamodb.persistence.DynamoDB
import com.agorapulse.gru.Gru
import com.agorapulse.gru.agp.ApiGatewayProxy
import com.agorapulse.micronaut.agp.ApiGatewayProxyHandler
import com.amazonaws.services.dynamodbv2.datamodeling.IDynamoDBMapper
import io.micronaut.context.ApplicationContext
import org.junit.Rule
import spock.lang.Specification

/**
 * Test for planet controller.
 */
class PlanetControllerSpec extends Specification {

    @Rule private final Gru gru = Gru.equip(ApiGatewayProxy.steal(this) {               (1)
        map '/planet/{star}' to ApiGatewayProxyHandler                                  (2)
        map '/planet/{star}/{name}' to ApiGatewayProxyHandler
    })

    @Rule private final Dru dru = Dru.steal(this)

    @SuppressWarnings('UnusedPrivateField')
    private final ApiGatewayProxyHandler handler = new ApiGatewayProxyHandler() {
        @Override
        protected void doWithApplicationContext(ApplicationContext ctx) {               (3)
            ctx.registerSingleton(IDynamoDBMapper, DynamoDB.createMapper(dru))
        }
    }

    void setup() {
        dru.add(new Planet(star: 'sun', name: 'mercury'))
        dru.add(new Planet(star: 'sun', name: 'venus'))
        dru.add(new Planet(star: 'sun', name: 'earth'))
        dru.add(new Planet(star: 'sun', name: 'mars'))
    }

    void 'get planet'() {                                                               (4)
        expect:
            gru.test {
                get('/planet/sun/earth')
                expect {
                    json 'earth.json'
                }
            }
    }

    void 'get planet which does not exist'() {
        expect:
            gru.test {
                get('/planet/sun/vulcan')
                expect {
                    status NOT_FOUND
                }
            }
    }

    void 'list planets by existing star'() {
        expect:
            gru.test {
                get('/planet/sun')
                expect {
                    json 'planetsOfSun.json'
                }
            }
    }

    void 'add planet'() {
        when:
            gru.test {
                post '/planet/sun/jupiter'
                expect {
                    status CREATED
                    json 'jupiter.json'
                }
            }
        then:
            gru.verify()
            dru.findAllByType(Planet).size() == 5
    }

    void 'delete planet'() {
        given:
            dru.add(new Planet(star: 'sun', name: 'pluto'))
        expect:
            dru.findAllByType(Planet).size() == 5
            gru.test {
                delete '/planet/sun/pluto'
                expect {
                    status NO_CONTENT
                    json 'pluto.json'
                }
            }
            dru.findAllByType(Planet).size() == 4
    }

}
1 Use the ApiGatewayProxy client with Gru
2 Configure which URLs and methods are handled by Micronaut; the handler must always be ApiGatewayProxyHandler or one of its subclasses
3 You can customize the handler initialization, for example by providing mock beans
4 Test method using Gru
The advantage of using Gru is that you can reuse the existing test with the local server if required. The only thing that changes is the handler setup and the client being used (HTTP instead of API Gateway Proxy).
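
For example, a local server variant of the specification above could swap the client as follows; this is just a sketch which assumes the gru-http module and a local server listening on port 8080:

PlanetControllerLocalSpec.groovy
import com.agorapulse.gru.Gru
import com.agorapulse.gru.http.Http
import org.junit.Rule
import spock.lang.Specification

class PlanetControllerLocalSpec extends Specification {

    @Rule private final Gru gru = Gru.equip(Http.steal(this))                           // plain HTTP client instead of ApiGatewayProxy
        .prepare('http://localhost:8080')                                               // base URL of the locally running server

    // the test methods from PlanetControllerSpec can be reused unchanged

}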

Micronaut Grails

The Micronaut Grails package helps you use Micronaut beans in a Grails application or any other Spring application. There are two additional features which cannot be found in the official Spring support for Micronaut:

  1. Micronaut bean names default to the lower-cased simple name of the class, as expected by Grails

  2. The ability to reuse existing properties declared by Grails - e.g. grails.redis.port can be injected as @Value('${redis.port}') (see the sketch below)
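
A minimal sketch of the second feature follows; it assumes the Grails application declares grails.redis.port in its configuration, and the bean itself is purely illustrative:

RedisReporter.groovy
import io.micronaut.context.annotation.Value

import javax.inject.Singleton

@Singleton
class RedisReporter {

    @Value('${redis.port}')                                                             // resolved from the grails.redis.port Grails property
    int port

}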

Installation

Gradle
compileOnly 'com.agorapulse:micronaut-grails:1.1.0'
If you plan to reuse the same library for both Micronaut and Grails, you can declare the dependency as compileOnly.

Usage

The integration is handled by a bean processor which needs to be injected into the Spring application context. The easiest approach is to create a Spring configuration class placed next to your Micronaut classes. The Spring configuration class will create the processor bean:

@CompileStatic
@Configuration                                                                          (1)
class GrailsConfig {

    @Bean
    GrailsMicronautBeanProcessor widgetProcessor() {                                    (2)
        GrailsMicronautBeanProcessor
            .builder()                                                                  (3)
            .addByType(Widget)                                                          (4)
            .addByType('someInterface', SomeInterface)                                  (5)
            .addByStereotype('prototype', Prototype)                                    (6)
            .addByName('gadget')                                                        (7)
            .addByName('one')
            .addByName('two')
            .addByQualifiers(                                                           (8)
                'otherMinion',
                Qualifiers.byName('other'),
                Qualifiers.byType(Minion)
            )
            .build()
    }

}
1 Define the class as a Spring @Configuration
2 Declare a @Bean method which returns the bean processor
3 Use the builder to add all exported beans to the Spring application context
4 The name of the Spring bean defaults to the property name of the class, e.g. widget
5 You can provide a different name
6 You can qualify using a stereotype (annotation)
7 You can qualify using the name of the bean, which will be the same in the Spring application
8 You can combine any qualifiers to narrow the search down to a single bean which needs to be available from the Spring application context
If more than one bean matches the criteria, an exception will be thrown.

Once your configuration class is ready, create a META-INF/spring.factories descriptor in the resources folder which will load the configuration automatically once the JAR is on the classpath.

META-INF/spring.factories
org.springframework.boot.autoconfigure.EnableAutoConfiguration=com.agorapulse.micronaut.grails.example.GrailsConfig
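
Once the configuration is loaded, the exported beans can be injected into Grails artefacts like any other Spring beans. A hypothetical service using the widget bean from the configuration example above might look like this:

WidgetService.groovy
import org.springframework.beans.factory.annotation.Autowired

class WidgetService {

    @Autowired Widget widget                                                            // available in the Spring context under the name 'widget'

}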

Maintained by

Agorapulse