@xanthous/serverless-appsync-plugin


Serverless-AppSync-Plugin 👌

Serverless plugin that allows you to deploy, update, or delete your AWS AppSync APIs with ease.


Tired of 🚀 deploying, ✏️ updating, and deleting your AppSync APIs using the AWS AppSync dashboard? You can now develop all of your AppSync APIs locally using Serverless + Serverless-AppSync-Plugin! With support for AWS DynamoDB, AWS Lambda, and AWS Elasticsearch, you have everything you need to get started developing your AppSync APIs locally.

Find AppSync examples in the Serverless-GraphQL Repo 👈

Introduction

Part 1: Running a scalable & reliable GraphQL endpoint with Serverless

Part 2: AppSync Backend: AWS Managed GraphQL Service

Part 3: AppSync Frontend: AWS Managed GraphQL Service

Part 4: Serverless AppSync Plugin: Top 10 New Features

AWS Mobile Blog: How to deploy a GraphQL API on AWS using the Serverless Framework

⚡️ Getting Started

Be sure to check out all that AWS AppSync has to offer. Here are a few resources to help you understand everything needed to get started!

  • Mapping Templates - Not sure how to create Mapping Templates for DynamoDB, Lambda or Elasticsearch? Here's a great place to start!
  • Data Sources and Resolvers - Get more information on what data sources are supported and how to set them up!
  • Security - Check out this guide for more information on securing your API endpoints with AWS_IAM or Cognito User Pools!

🛠 Minimum requirements

💾 Installation

Install the plugin via Yarn (recommended)

yarn add serverless-appsync-plugin

or via NPM

npm install serverless-appsync-plugin

Configuring the plugin

Add serverless-appsync-plugin to the plugins section of serverless.yml

plugins:
   - serverless-appsync-plugin

Add the following example config to the custom section of serverless.yml

custom:
  appSync:
    name:  # defaults to api
    # apiKey # only required for update-appsync/delete-appsync
    authenticationType: API_KEY or AWS_IAM or AMAZON_COGNITO_USER_POOLS or OPENID_CONNECT
    # if AMAZON_COGNITO_USER_POOLS
    userPoolConfig:
      awsRegion: # defaults to provider region
      defaultAction: # required # ALLOW or DENY
      userPoolId: # required # user pool ID
      appIdClientRegex: # optional
    # if OPENID_CONNECT
    openIdConnectConfig:
      issuer:
      clientId:
      iatTTL:
      authTTL:
    # Array of additional authentication providers
    additionalAuthenticationProviders:
      - authenticationType: API_KEY
      - authenticationType: AWS_IAM
      - authenticationType: OPENID_CONNECT
        openIdConnectConfig:
          issuer:
          clientId:
          iatTTL:
          authTTL:
      - authenticationType: AMAZON_COGNITO_USER_POOLS
        userPoolConfig:
          awsRegion: # defaults to provider region
          userPoolId: # required # user pool ID
          appIdClientRegex: # optional
    logConfig:
      loggingRoleArn: { Fn::GetAtt: [AppSyncLoggingServiceRole, Arn] } # Where AppSyncLoggingServiceRole is a role with CloudWatch Logs write access
      level: ERROR # Logging Level: NONE | ERROR | ALL
    mappingTemplatesLocation: # defaults to mapping-templates
    mappingTemplates:
      - dataSource: # data source name
        type: # type name in schema (e.g. Query, Mutation, Subscription)
        field: getUserInfo
        request: # request mapping template name
        response: # response mapping template name
      - ${file({fileLocation}.yml)} # link to a file with arrays of mapping templates
    schema: # schema file or array of files to merge, defaults to schema.graphql
    dataSources:
      - type: AMAZON_DYNAMODB
        name: # data source name
        description: # DynamoDB Table Description
        config:
          tableName: { Ref: MyTable } # Where MyTable is a dynamodb table defined in Resources
          serviceRoleArn: { Fn::GetAtt: [AppSyncDynamoDBServiceRole, Arn] } # Where AppSyncDynamoDBServiceRole is an IAM role defined in Resources
          iamRoleStatements: # custom IAM Role statements for this DataSource. Ignored if `serviceRoleArn` is present. Auto-generated if both `serviceRoleArn` and `iamRoleStatements` are omitted
            - Effect: "Allow"
              Action:
                - "dynamodb:GetItem"
              Resource:
                - "arn:aws:dynamodb:{REGION}:{ACCOUNT_ID}:myTable"
                - "arn:aws:dynamodb:{REGION}:{ACCOUNT_ID}:myTable/*"

          region: # Overwrite default region for this data source
      - type: RELATIONAL_DATABASE
        name: # data source name
        description: # data source description
        config:
          dbClusterIdentifier: { Ref: RDSCluster } # The identifier for RDSCluster. Where RDSCluster is the cluster defined in Resources
          awsSecretStoreArn: { Ref: RDSClusterSecret } # The RDSClusterSecret ARN. Where RDSClusterSecret is the cluster secret defined in Resources
          serviceRoleArn: { Fn::GetAtt: [RelationalDbServiceRole, Arn] } # Where RelationalDbServiceRole is an IAM role defined in Resources
          databaseName: # optional database name
          schema: # optional database schema
          iamRoleStatements: # custom IAM Role statements for this DataSource. Ignored if `serviceRoleArn` is present. Auto-generated if both `serviceRoleArn` and `iamRoleStatements` are omitted
            - Effect: "Allow"
              Action:
                - "rds-data:DeleteItems"
                - "rds-data:ExecuteSql"
                - "rds-data:GetItems"
                - "rds-data:InsertItems"
                - "rds-data:UpdateItems"
              Resource:
                - "arn:aws:rds:{REGION}:{ACCOUNT_ID}:cluster:mydbcluster"
                - "arn:aws:rds:{REGION}:{ACCOUNT_ID}:cluster:mydbcluster:*"
            - Effect: "Allow"
              Action:
                - "secretsmanager:GetSecretValue"
              Resource:
                - "arn:aws:secretsmanager:{REGION}:{ACCOUNT_ID}:secret:mysecret"
                - "arn:aws:secretsmanager:{REGION}:{ACCOUNT_ID}:secret:mysecret:*"

          region: # Overwrite default region for this data source
      - type: AMAZON_ELASTICSEARCH
        name: # data source name
        description: 'ElasticSearch'
        config:
          endpoint: # required # "https://{DOMAIN}.{REGION}.es.amazonaws.com"
          serviceRoleArn: { Fn::GetAtt: [AppSyncESServiceRole, Arn] } # Where AppSyncESServiceRole is an IAM role defined in Resources
          iamRoleStatements: # custom IAM Role statements for this DataSource. Ignored if `serviceRoleArn` is present. Auto-generated if both `serviceRoleArn` and `iamRoleStatements` are omitted
            - Effect: "Allow"
              Action:
                - "es:ESHttpGet"
              Resource:
                - "arn:aws:es:{REGION}:{ACCOUNT_ID}:{DOMAIN}"
      - type: AWS_LAMBDA
        name: # data source name
        description: 'Lambda DataSource'
        config:
          functionName: graphql # The function name in your serverless.yml. Ignored if lambdaFunctionArn is provided.
          lambdaFunctionArn: { Fn::GetAtt: [GraphqlLambdaFunction, Arn] } # Where GraphqlLambdaFunction is the lambda function cloudformation resource created by serverless for the serverless function named graphql
          serviceRoleArn: { Fn::GetAtt: [AppSyncLambdaServiceRole, Arn] } # Where AppSyncLambdaServiceRole is an IAM role defined in Resources
          iamRoleStatements: # custom IAM Role statements for this DataSource. Ignored if `serviceRoleArn` is present. Auto-generated if both `serviceRoleArn` and `iamRoleStatements` are omitted
            - Effect: "Allow"
              Action:
                - "lambda:invokeFunction"
              Resource:
                - "arn:aws:lambda:{REGION}:{ACCOUNT_ID}:myFunction"
                - "arn:aws:lambda:{REGION}:{ACCOUNT_ID}:myFunction:*"
      - type: HTTP
        name: # data source name
        description: 'Http endpoint'
        config:
          endpoint: # required # "https://{DOMAIN}/{PATH}"
      - ${file({dataSources}.yml)} # link to a file with an array or object of datasources
    substitutions: # allows you to pass variables from here to the velocity templates
      # ${exampleVar1} will be replaced with given value in all mapping templates
      exampleVar1: "${self:service.name}"
      exampleVar2: {'Fn::ImportValue': 'Some-external-stuff'}

Be sure to replace all variables that have been commented out, or have an empty value.
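
For reference, here is a hedged minimal configuration built from the template above. It assumes an API_KEY-secured API backed by a single Lambda data source, a serverless function named graphql (as in the AWS_LAMBDA example above), and template files in the default mapping-templates directory; the API, data source, and template file names are placeholders.

custom:
  appSync:
    name: my-appsync-api                    # placeholder API name
    authenticationType: API_KEY
    schema: schema.graphql
    mappingTemplates:
      - dataSource: lambdaSource            # must match a name under dataSources
        type: Query
        field: getUserInfo
        request: Query.getUserInfo.request.vtl    # file inside mapping-templates/
        response: Query.getUserInfo.response.vtl
    dataSources:
      - type: AWS_LAMBDA
        name: lambdaSource
        description: 'Lambda DataSource'
        config:
          functionName: graphql             # serverless function that resolves the field
          # serviceRoleArn and iamRoleStatements omitted, so the role is auto-generated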

Multiple APIs

If you have multiple APIs and do not want to split this up into another CloudFormation stack, simply change the appSync configuration property from an object into an array of objects:

custom:
  appSync:
    - name: private-appsync-endpoint
      schema: AppSync/schema.graphql # or something like AppSync/private/schema.graphql
      authenticationType: OPENID_CONNECT
      openIdConnectConfig:
      ...
      serviceRole: AuthenticatedAppSyncServiceRole
      dataSources:
      ...
      mappingTemplatesLocation: ...
      mappingTemplates:
      ...
    - name: public-appsync-endpoint
      schema: AppSync/schema.graphql # or something like AppSync/public/schema.graphql
      authenticationType: API_KEY # or AWS_IAM, you get the idea
      serviceRole: PublicAppSyncServiceRole
      dataSources:
      ...
      mappingTemplatesLocation: ...
      mappingTemplates:
      ...

Note: CloudFormation stack outputs and logical IDs will be prefixed with the API name instead of using the defaults. This lets you differentiate between the APIs in your stack when working with multiple APIs.

Pipeline Resolvers

Amazon recently released the new pipeline resolvers: https://aws.amazon.com/blogs/mobile/aws-appsync-releases-pipeline-resolvers-aurora-serverless-support-delta-sync/

Pipeline resolvers allow you to execute more than one mapping template in sequence, so a single resolver can run multiple queries against multiple data sources. Each step is a function configuration ('AWS::AppSync::FunctionConfiguration') and is a child of the resolver.

Here is an example of how to configure a resolver with function configurations. The key is to set 'kind: PIPELINE' on the parent resolver's mapping template, and to list the function names under 'functions' so that they match the names declared in 'functionConfigurations'.

custom:
  appSync:
    mappingTemplates:
      - type: Query
        field: testPipelineQuery
        request: './mapping-templates/before.vtl' # the pipeline's "before" mapping template
        response: './mapping-templates/after.vtl' # the pipeline's "after" mapping template
        kind: PIPELINE
        functions:
          - authorizeFunction
          - fetchDataFunction
    functionConfigurations:
      - dataSource: graphqlLambda
        name: 'authorizeFunction'
        request: './mapping-templates/authorize-request.vtl'
        response: './mapping-templates/common-response.vtl'
      - dataSource: dataTable
        name: 'fetchDataFunction'
        request: './mapping-templates/fetchData.vtl'
        response: './mapping-templates/common-response.vtl'
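
The function configurations above reference data sources named graphqlLambda and dataTable. As a sketch (the function name and table resource below are assumptions), those data sources could be declared like any others:

custom:
  appSync:
    dataSources:
      - type: AWS_LAMBDA
        name: graphqlLambda                 # used by authorizeFunction above
        description: 'Lambda used for authorization'
        config:
          functionName: graphql             # assumption: a serverless function named graphql
      - type: AMAZON_DYNAMODB
        name: dataTable                     # used by fetchDataFunction above
        description: 'Table queried by the pipeline'
        config:
          tableName: { Ref: DataTable }     # assumption: a DynamoDB table defined in Resources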

▶️ Usage

serverless deploy

This command will deploy all AppSync resources in the same CloudFormation template used by the other serverless resources.

serverless graphql-playground

This command will start a local graphql-playground server which is connected to your AppSync endpoint. The required options for the command are different depending on your AppSync authenticationType.

For API_KEY, either the GraphQLApiKeyDefault output or the --apiKey option is required

For AMAZON_COGNITO_USER_POOLS, the -u/--username and -p/--password arguments are required. The Cognito User Pool client ID can be provided with the --clientId option or directly in the YAML file (custom.appSync.userPoolConfig.playgroundClientId).

For OPENID_CONNECT, the --jwtToken option is required.

The AWS_IAM authenticationType is not currently supported.
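
For the AMAZON_COGNITO_USER_POOLS case, here is a minimal sketch of where playgroundClientId lives; the region, pool ID, and client ID below are placeholders:

custom:
  appSync:
    authenticationType: AMAZON_COGNITO_USER_POOLS
    userPoolConfig:
      awsRegion: us-east-1                       # placeholder region
      defaultAction: ALLOW
      userPoolId: us-east-1_xxxxxxxxx            # placeholder user pool ID
      playgroundClientId: xxxxxxxxxxxxxxxxxxxx   # app client used by serverless graphql-playground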

📝 Notes

  • If you are planning on using AWS Elasticsearch, you will need to create an Elasticsearch domain/endpoint on AWS and set it as the endpoint option in serverless.yml before deploying.
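
As a sketch of one way to provision such a domain together with the rest of your stack (the resource name, version, and sizing below are assumptions; creating the domain outside the stack works just as well):

resources:
  Resources:
    ElasticSearchDomain:
      Type: AWS::Elasticsearch::Domain
      Properties:
        DomainName: my-appsync-search            # placeholder domain name
        ElasticsearchVersion: '6.3'
        ElasticsearchClusterConfig:
          InstanceType: t2.small.elasticsearch
          InstanceCount: 1
        EBSOptions:
          EBSEnabled: true
          VolumeType: gp2
          VolumeSize: 10
# Once the domain exists, its endpoint ("https://{DOMAIN}.{REGION}.es.amazonaws.com")
# is the value to set for the AMAZON_ELASTICSEARCH data source's endpoint option.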

Offline support

You can use serverless-appsync-offline to automatically start an AppSync emulator (which depends on Serverless-AppSync-Plugin) with DynamoDB and Lambda resolver support:

Install Plugin

npm install --save serverless-appsync-offline

Minimal Options (serverless.yml)

custom:
  appsync-offline:
    port: 62222
    dynamodb:
      server:
        port: 8000

Start the local environment

If you use serverless-offline:

sls offline start

otherwise:

sls appsync-offline start

The result is:

Serverless: dynamoDB started: http://localhost:8000/
Serverless: AppSync started: http://localhost:62222/graphql

Go to serverless-appsync-offline to get further configuration options.

Split Stacks Plugin

You can use serverless-plugin-split-stacks to migrate AppSync resources into nested stacks in order to work around the 200-resource CloudFormation limit.

  1. Install serverless-plugin-split-stacks

yarn add --dev serverless-plugin-split-stacks
# or
npm install --save-dev serverless-plugin-split-stacks

  2. Follow the serverless-plugin-split-stacks installation instructions

  3. Place serverless-plugin-split-stacks after serverless-appsync-plugin

plugins:
  - serverless-appsync-plugin
  - serverless-plugin-split-stacks

  4. Create stacks-map.js in the root folder

module.exports = {
  'AWS::AppSync::ApiKey': { destination: 'AppSync', allowSuffix: true },
  'AWS::AppSync::DataSource': { destination: 'AppSync', allowSuffix: true },
  'AWS::AppSync::FunctionConfiguration': { destination: 'AppSync', allowSuffix: true },
  'AWS::AppSync::GraphQLApi': { destination: 'AppSync', allowSuffix: true },
  'AWS::AppSync::GraphQLSchema': { destination: 'AppSync', allowSuffix: true },
  'AWS::AppSync::Resolver': { destination: 'AppSync', allowSuffix: true }
}

  5. Enjoy 🍻

🎁 Contributing

If you have any questions, please feel free to reach out to me (Sid Gupta) directly on Twitter.

👷 Migration from versions prior to 1.0

If you have previously used versions of this plugin prior to 1.0, you will need to perform some additional manual steps in order to continue using it (it will be worth it). This release removes the sls *-appsync commands in favor of adding the AppSync resources directly to the serverless CloudFormation stack. This means your existing APIs can no longer be updated in place. The good news is that you will no longer need separate commands to deploy vs. update, nor will you need to copy the created apiId back into your serverless config.

The rough steps for migration are as follows:

  1. Run sls deploy to create the new AppSync API and make note of the endpoint returned as part of the stack outputs. If you were using an API_KEY auth type, you will also need the new API key, which is also included in the stack outputs.
  2. Update existing consumers of your API to use the new endpoint. If you're using an API key, it will also need to be updated.
  3. After verifying that all existing consumers have been updated, run sls delete-appsync to clean up the old resources.
  4. Remove the apiId line from custom.appSync in serverless.yml
  5. 🍹

YouTube Video by Foo Bar :)

Building an AppSync + Serverless Framework Backend | FooBar

❤️ Credits

Big Thanks to Nik Graf, Philipp Müns, Jon Patel and my favourite coolest kat ever for helping to build this plugin!

We are always looking for open source contributions. So, feel free to create issues/contribute to this repo.
