Easily deploy complex CloudFormation templates with external resources such as Lambdas or Nested Stacks.
Many CloudFormation templates are completely standalone – a single YAML or JSON file and that’s it, easy to deploy. In some cases, however, CFN templates refer to other files, or artifacts: a Lambda source or ZIP file, a nested CloudFormation template, or an API definition for API Gateway. These files have to be available in S3 before we can deploy the main CloudFormation template.
Deploying such a complex stack is a multi-step process, usually automated with a custom shell script or Ansible playbook:
- ZIP up the Lambda source and required libraries
- Upload the ZIP file to S3
- Create the CloudFormation stack with the correct S3 paths
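A rough sketch of such a script might look like this (the bucket and stack names are placeholders, and LambdaS3Bucket / LambdaS3Key are hypothetical parameters the template would have to declare):
~ $ zip -r lambda_one.zip lambda_one.py
~ $ aws s3 cp lambda_one.zip s3://my-artifact-bucket/lambda_one.zip
~ $ aws cloudformation create-stack \
    --stack-name my-stack \
    --template-body file://template.yml \
    --parameters ParameterKey=LambdaS3Bucket,ParameterValue=my-artifact-bucket \
                 ParameterKey=LambdaS3Key,ParameterValue=lambda_one.zip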
Not rocket science, but still…
Fortunately, the AWS CLI provides a very convenient way to deploy CloudFormation templates that refer to other files. Read on to learn how.
TL;DR
Use aws cloudformation package and aws cloudformation deploy:
~ $ aws cloudformation package \
--template-file template.yml \
--output-template-file template.packaged.yml \
--s3-bucket {some-bucket}
~ $ aws cloudformation deploy \
--template-file template.packaged.yml \
--stack-name {some-name}
Sample project structure
Our little project has these four files:
~/cfn-package-deploy $ tree -F .
├── template.yml
├── lambda_one.py
└── lambda_two/
    ├── index.py
    └── some_module.py
1 directory, 4 files
One CloudFormation template, one simple single-file Lambda function, and one more complex Lambda that consists of multiple files.
Refer to local files in your CFN template
Traditionally we would have to zip up and upload all the Lambda sources to S3 first, and then refer to these S3 locations in the template, perhaps through stack parameters.
However, with aws cloudformation package we can refer to the local files directly. That’s much more convenient!
Have a look at this LambdaOne snippet for example – we refer to the lambda_one.py file locally, as it’s in the same directory as the template.
LambdaOne:
  Type: AWS::Lambda::Function
  Properties:
    Handler: lambda_one.lambda_handler
    Code: lambda_one.py    # <<< This is a local file
    Runtime: ...
Likewise with the more complex LambdaTwo that consists of two files in a subdirectory. Simply refer to the directory name lambda_two in the template.
LambdaTwo:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.lambda_handler
    Code: lambda_two/    # <<< This is a local directory
    Runtime: ...
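One caveat: aws cloudformation package only zips up what is already in the directory, it does not resolve dependencies for us. If LambdaTwo needed third-party libraries, they would have to be vendored in first, for example with pip (the requirements.txt here is hypothetical):
~/cfn-package-deploy $ pip install -r lambda_two/requirements.txt -t lambda_two/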
Package and upload the artifacts
The next step is calling aws cloudformation package, which does three things:
- ZIPs up the local files, one ZIP file per “artifact”.
- Uploads them to the designated S3 bucket.
- Generates a new template where the local paths are replaced with S3 URIs.
Decide on an S3 bucket
First of all we need an S3 bucket where the files will be uploaded. I tend to (ab)use the cf-templates-… buckets that AWS creates when we deploy CFN through the console. But feel free to use any bucket you want.
~/cfn-package-deploy $ aws s3 ls | grep cf-templates
2018-11-07 22:55:23 cf-templates-abcdefghjklm-ap-southeast-2
2019-02-01 10:27:46 cf-templates-abcdefghjklm-ca-central-1
2018-11-02 07:06:25 cf-templates-abcdefghjklm-us-east-1
...
Let’s use the first one, as I’m working in the Sydney region (ap-southeast-2).
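If there is no suitable bucket in the account yet, creating one is a one-liner (the bucket name below is only an example, names must be globally unique):
~/cfn-package-deploy $ aws s3 mb s3://my-cfn-artifacts-ap-southeast-2 --region ap-southeast-2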
Run the package command
~/cfn-package-deploy $ aws cloudformation package \
--template-file template.yml \
--s3-bucket cf-templates-abcdefghjklm-ap-southeast-2 \
--output-template-file template.packaged.yml
Uploading to 35f69109a3a3f87e999f028f03403efa 193 / 193.0 (100.00%)
Uploading to cca5b023ed6603eabf9421471b65d68b 352 / 352.0 (100.00%)
Successfully packaged artifacts and wrote output template to file template.packaged.yml.
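By default the artifacts land in the bucket root under checksum-like names, as seen in the output above. To keep them organised we can also pass an --s3-prefix (the prefix here is just an example):
~/cfn-package-deploy $ aws cloudformation package \
    --template-file template.yml \
    --s3-bucket cf-templates-abcdefghjklm-ap-southeast-2 \
    --s3-prefix cfn-package-deploy \
    --output-template-file template.packaged.yml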
Examine the generated files
Let’s have a look at the output template file first.
We will notice that the Code attributes in LambdaOne and LambdaTwo were updated with the bucket and uploaded object names:
LambdaOne:
  Properties:
    Code:
      S3Bucket: cf-templates-abcdefghjklm-ap-southeast-2
      S3Key: 35f69109a3a3f87e999f028f03403efa
    Handler: lambda_one.lambda_handler
    ...
LambdaTwo:
  Properties:
    Code:
      S3Bucket: cf-templates-abcdefghjklm-ap-southeast-2
      S3Key: cca5b023ed6603eabf9421471b65d68b
    Handler: index.lambda_handler
    ...
For completeness, let’s also look at what’s in the uploaded files. From the listing above we know the bucket and object name to download.
~/cfn-package-deploy $ aws s3 cp \
s3://cf-templates-abcdefghjklm-ap-southeast-2/cca5b023ed6603eabf9421471b65d68b .
And we know it’s a ZIP file. Even though there is no .zip extension, we can still unzip it.
~/cfn-package-deploy $ unzip -l cca5b023ed6603eabf9421471b65d68b
Archive:  cca5b023ed6603eabf9421471b65d68b
  Length      Date    Time    Name
---------  ---------- -----   ----
      114  2019-02-19 15:55   index.py
       35  2019-02-19 15:54   some_module.py
---------                     -------
      149                     2 files
As expected, it’s the content of the lambda_two/ directory.
Deploy the “packaged” template
~/cfn-package-deploy $ aws cloudformation deploy \
--template-file template.packaged.yml \
--stack-name cfn-package-deploy
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - cfn-package-deploy
Note that we used the packaged template template.packaged.yml that refers to the artifacts in S3, not the original one with local paths!
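If in doubt, we can confirm what was created by querying the stack status:
~/cfn-package-deploy $ aws cloudformation describe-stacks \
    --stack-name cfn-package-deploy \
    --query 'Stacks[0].StackStatus'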
We may also have to add --capabilities CAPABILITY_IAM if there are any IAM roles in the template – and that’s quite likely. Otherwise the deploy fails with: An error occurred (InsufficientCapabilitiesException) when calling the CreateChangeSet operation: Requires capabilities : [CAPABILITY_IAM]
We can also set or override stack parameters with --parameter-overrides, just like when using aws cloudformation create-stack.
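Putting it all together, a deploy of a template with IAM resources and a couple of parameters might look like this (Stage and MemorySize are made-up parameter names for illustration):
~/cfn-package-deploy $ aws cloudformation deploy \
    --template-file template.packaged.yml \
    --stack-name cfn-package-deploy \
    --capabilities CAPABILITY_IAM \
    --parameter-overrides Stage=test MemorySize=256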
See aws cloudformation deploy help for the available parameters.
What a convenience!
This is an easy way to create and update stacks with external resources. It works not only with Lambda sources but also with nested stacks, AWS::Include, and many other resources that need external files. Refer to aws cloudformation package help for details and supported artifact types.
If you liked this article leave us a comment 🙂
Hey Michael,
does this work with SSM automation documents, too? For example, instead of the line “Restart-Computer -Force” under “commands:” I would like to use external scripts like this below:
With regards
Hi, as of now SSM Documents don’t seem to be supported. Check out the current list here: https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html
Hi Mike,
Very nice and clear article.
So, I am trying to use $ aws cloudformation package to copy my local CF template to an S3 bucket (not a Lambda) and then run $ aws cloudformation deploy to create a stack. Whatever I do and try, the “packaged” template file is created in my local directory.
This is the command I am using:
$ aws cloudformation package --template-file CF-template-test.yml --s3-bucket djbucket-test --output-template-file CF-template-test_packaged.yml
The output in my terminal
Successfully packaged artifacts and wrote output template to file CF-template-test_packaged.yml.
Execute the following command to deploy the packaged template
aws cloudformation deploy --template-file c:\Users\darek\gitrepos\CloudFormation\CF-angular\CF-template-test_packaged.yml --stack-name
All is well except the packaged yml file is created in my local directory.
What am I missing?
Hi Darek,
I think it’s quite expected. The package command must write the file somewhere locally for the deploy command to use it in the next step. If you don’t like having it in the local directory, you can write it to /tmp/ with --output-template-file /tmp/cf.yml.
Is there an easy way to combine the output from the package call with an existing template for a new stack? Right now, I have a two-step process: one to package the lambda to S3, and another that creates a new stack that needs to reference the package bucket and key. It would be great if I didn’t need to make the user enter the S3 info, but would like to avoid parsing the output in bash and stuffing it into my secondary template.
Hi, the package command rewrites the template with the reference to the S3 bucket and key. Then use the rewritten template with the deploy command. Or did I misunderstand the question?
How can I deploy another package that has been built using SAM, not CDK, to the main package that has been built using CDK?
I have the git repository of that package and the SAM commands to package and deploy it.
SAM and CDK have their own deployment tools and scripts. The method described in this post is for “pure” CloudFormation stacks.