Merge branch 'loadbalanced' into develop

2020-11-19 19:39:23 +00:00
33 changed files with 2135 additions and 316 deletions


@@ -1,5 +1,8 @@
 AWSTemplateFormatVersion: 2010-09-09
 Description: S3 bucket for static assets for Strapi deployment.
+Parameters:
+  TestParameter:
+    Type: String
 Resources:
   ELBExampleBucket:
     Type: "AWS::S3::Bucket"


@@ -0,0 +1,179 @@
AWSTemplateFormatVersion: 2010-09-09
Description: VPC and Subnet definitions for Strapi + ELB project.
Resources:
  PublicVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: "172.31.0.0/16"
      EnableDnsHostnames: true
      EnableDnsSupport: true
  ELBSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Sub "${AWS::StackName}-ELBSecurityGroup"
      GroupDescription: Security group for the Elastic Load Balancer.
        This permits inbound 80/443 from any IP, and outbound 80/443 to the
        Auto Scaling security group.
      VpcId: !Ref PublicVPC
  ELBSecurityGroupIngressHttp:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      Description: Ingress for ELBSecurityGroup for HTTP.
      GroupId: !Ref ELBSecurityGroup
      IpProtocol: tcp
      FromPort: 80
      ToPort: 80
      CidrIp: 0.0.0.0/0
  ELBSecurityGroupIngressHttps:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      Description: Ingress for ELBSecurityGroup for HTTPS.
      GroupId: !Ref ELBSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
      CidrIp: 0.0.0.0/0
  ELBSecurityGroupEgressHttp:
    Type: AWS::EC2::SecurityGroupEgress
    Properties:
      Description: Egress for ELBSecurityGroup for HTTP.
      GroupId: !Ref ELBSecurityGroup
      IpProtocol: tcp
      FromPort: 80
      ToPort: 80
      # Egress rules take DestinationSecurityGroupId, not SourceSecurityGroupId.
      DestinationSecurityGroupId: !Ref ASSecurityGroup
  ELBSecurityGroupEgressHttps:
    Type: AWS::EC2::SecurityGroupEgress
    Properties:
      Description: Egress for ELBSecurityGroup for HTTPS.
      GroupId: !Ref ELBSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
      DestinationSecurityGroupId: !Ref ASSecurityGroup
  ASSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Sub "${AWS::StackName}-ASSecurityGroup"
      GroupDescription: Security group for the Auto Scaler. This security group
        will be applied to any EC2 instances that the Auto Scaler creates. This
        group permits inbound 80/443 from the Elastic Load Balancer security
        group.
      VpcId: !Ref PublicVPC
  ASSecurityGroupIngressHttp:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      Description: Ingress for ASSecurityGroup for HTTP.
      GroupId: !Ref ASSecurityGroup
      IpProtocol: tcp
      FromPort: 80
      ToPort: 80
      SourceSecurityGroupId: !Ref ELBSecurityGroup
  ASSecurityGroupIngressHttps:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      Description: Ingress for ASSecurityGroup for HTTPS.
      GroupId: !Ref ASSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
      SourceSecurityGroupId: !Ref ELBSecurityGroup
  PublicSubnet0:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone:
        Fn::Select:
          - 0
          - Fn::GetAZs: !Ref "AWS::Region"
      VpcId: !Ref PublicVPC
      CidrBlock: 172.31.0.0/20
      MapPublicIpOnLaunch: true
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone:
        Fn::Select:
          - 1
          - Fn::GetAZs: !Ref "AWS::Region"
      VpcId: !Ref PublicVPC
      CidrBlock: 172.31.16.0/20
      MapPublicIpOnLaunch: true
  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone:
        Fn::Select:
          - 2
          - Fn::GetAZs: !Ref "AWS::Region"
      VpcId: !Ref PublicVPC
      CidrBlock: 172.31.32.0/20
      MapPublicIpOnLaunch: true
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  InternetGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref PublicVPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref PublicVPC
  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: InternetGatewayAttachment
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnet0RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet0
      RouteTableId: !Ref PublicRouteTable
  PublicSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet1
      RouteTableId: !Ref PublicRouteTable
  PublicSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
Outputs:
  PublicVPCID:
    Description: The VPC for the environment.
    Value: !Ref PublicVPC
    Export:
      Name: !Sub "${AWS::StackName}-PublicVPC"
  ELBSecurityGroupOutput:
    Description: ELB Security Group
    Value: !Ref ELBSecurityGroup
    Export:
      Name: !Sub "${AWS::StackName}-ELBSecurityGroup"
  ASSecurityGroupOutput:
    Description: AS Security Group
    Value: !Ref ASSecurityGroup
    Export:
      Name: !Sub "${AWS::StackName}-ASSecurityGroup"
  # PublicVPCIDDefaultSecurityGroup:
  #   Description: The VPC default security group.
  #   Value: !GetAtt PublicVPC.DefaultSecurityGroup
  #   Export:
  #     Name: !Sub "${AWS::StackName}-PublicVPCIDDefaultSecurityGroup"
  PublicSubnet0ID:
    Description: The public subnet 0.
    Value: !Ref PublicSubnet0
    Export:
      Name: !Sub "${AWS::StackName}-PublicSubnet0"
  PublicSubnet1ID:
    Description: The public subnet 1.
    Value: !Ref PublicSubnet1
    Export:
      Name: !Sub "${AWS::StackName}-PublicSubnet1"
  PublicSubnet2ID:
    Description: The public subnet 2.
    Value: !Ref PublicSubnet2
    Export:
      Name: !Sub "${AWS::StackName}-PublicSubnet2"


@@ -0,0 +1,54 @@
AWSTemplateFormatVersion: 2010-09-09
Description: This template creates an RDS database for an ELB environment.
  In addition to the database, it creates a subnet group for the RDS database
  and a security group with ingress rules only allowing connections to the database.
  It uses an existing public VPC and subnets already created in
  another CloudFormation stack. This is public so the database can go out
  to the internet.
Parameters:
  StackName:
    Description: The stack name of another CloudFormation template. This is used
      to prepend the name of resources in other templates.
    Type: String
Resources:
  RDSSubnetGroup:
    Type: AWS::RDS::DBSubnetGroup
    Properties:
      DBSubnetGroupDescription: A subnet group for the RDS instance.
      SubnetIds:
        - Fn::ImportValue: !Sub "${StackName}-PublicSubnet0"
        - Fn::ImportValue: !Sub "${StackName}-PublicSubnet1"
        - Fn::ImportValue: !Sub "${StackName}-PublicSubnet2"
  RDSSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Sub "${AWS::StackName}-RDS-SecurityGroup"
      GroupDescription: Security Group for RDS allowing ingress on the DB port only.
      VpcId:
        Fn::ImportValue: !Sub "${StackName}-PublicVPC"
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 5432
          ToPort: 5432
          CidrIp: 82.6.205.148/32
        - IpProtocol: tcp
          FromPort: 5432
          ToPort: 5432
          SourceSecurityGroupId:
            Fn::ImportValue: !Sub "${StackName}-ASSecurityGroup"
  RDSDBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      AllocatedStorage: 5
      AllowMajorVersionUpgrade: false
      AutoMinorVersionUpgrade: true
      DBInstanceClass: "db.t2.micro"
      DBName: postgres
      Engine: postgres
      EngineVersion: 12.2
      MasterUsername: mainuser
      MasterUserPassword: password
      PubliclyAccessible: true
      VPCSecurityGroups:
        - !Ref RDSSecurityGroup
      DBSubnetGroupName: !Ref RDSSubnetGroup


@@ -0,0 +1,5 @@
# Resources:
#   ElasticLoadBalancer:
#     Type: AWS::ElasticLoadBalancingV2::TargetGroup
#     Properties:
#       VpcId: vpc-029d232726cbf591d


@@ -1,6 +1,6 @@
 option_settings:
-  aws:elasticbeanstalk:environment:
-    EnvironmentType: SingleInstance
+  # aws:elasticbeanstalk:environment:
+  #   EnvironmentType: SingleInstance
   aws:rds:dbinstance:
     DBEngine: postgres
     DBInstanceClass: "db.t2.micro"


@@ -5,11 +5,21 @@ option_settings:
     value: false
   - option_name: STRAPI_LOG_LEVEL
     value: debug
-  - option_name: STRAPI_S3_ACCESS_KEY
-    value: AKIA23D4RF6OZWGDKV7W
-  - option_name: STRAPI_S3_SECRET_KEY
-    value: "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"
+  # - option_name: STRAPI_S3_ACCESS_KEY
+  #   value: AKIA23D4RF6OZWGDKV7W
+  # - option_name: STRAPI_S3_SECRET_KEY
+  #   value: "4sb/fxewDGjMYLocjclPCWDm7JTBCYuFBjQAbbBR"
   - option_name: STRAPI_S3_REGION
     value: "eu-west-1"
   - option_name: STRAPI_S3_BUCKET
-    value: "elb-example-bucket-cf"
+    value: "prod-strapi-eb-strapi-uploads"
+  - option_name: RDS_HOSTNAME
+    value: prod-strapi-eb.chgwfe43ss59.eu-west-1.rds.amazonaws.com
+  - option_name: RDS_PORT
+    value: 5432
+  - option_name: RDS_NAME
+    value: postgres
+  - option_name: RDS_USERNAME
+    value: mainuser
+  - option_name: RDS_PASSWORD
+    value: password


@@ -1,3 +1,4 @@
+# Permanently disabled
 # container_commands:
 #   installpg:
 #     command: "npm install pg"


@@ -1,9 +1,10 @@
-Resources:
-  sslSecurityGroupIngress:
-    Type: AWS::EC2::SecurityGroupIngress
-    Properties:
-      GroupId: { "Fn::GetAtt": ["AWSEBSecurityGroup", "GroupId"] }
-      IpProtocol: tcp
-      ToPort: 443
-      FromPort: 443
-      CidrIp: 0.0.0.0/0
+# Done in the Cloudformation 02-stack-vpc.yaml
+# Resources:
+#   sslSecurityGroupIngress:
+#     Type: AWS::EC2::SecurityGroupIngress
+#     Properties:
+#       GroupId: { "Fn::GetAtt": ["AWSEBSecurityGroup", "GroupId"] }
+#       IpProtocol: tcp
+#       ToPort: 443
+#       FromPort: 443
+#       CidrIp: 0.0.0.0/0


@@ -0,0 +1,14 @@
option_settings:
  aws:ec2:vpc:
    VPCId: vpc-016efd8cfbcca99a8
    Subnets: "subnet-00c0725542e08b1d7,subnet-039fd98ceb88c863c,subnet-0b9fab172a19d818b"
    # DBSubnets: "subnet-00c0725542e08b1d7,subnet-039fd98ceb88c863c,subnet-0b9fab172a19d818b"
    # ELBSubnets: "subnet-00c0725542e08b1d7,subnet-039fd98ceb88c863c,subnet-0b9fab172a19d818b"
  aws:autoscaling:launchconfiguration:
    SecurityGroups: sg-087f33381c535528b
  # aws:elbv2:loadbalancer:
  #   ManagedSecurityGroup: sg-0e6f91df2ed07050a
  #   SecurityGroups: sg-0e6f91df2ed07050a
  aws:autoscaling:asg:
    MinSize: 1
    MaxSize: 1


@@ -0,0 +1,4 @@
# option_settings:
#   aws:elbv2:listener:443:
#     Protocol: HTTPS
#     SSLCertificateArns: arn:aws:acm:eu-west-1:745437999005:certificate/218876af-7f8d-4022-97af-ad982aa540bc


@@ -1,2 +1,4 @@
 node_modules
 .tmp
+infrastructure
+documentation

.gitignore (vendored)

@@ -115,3 +115,42 @@ build
 .elasticbeanstalk/*
 !.elasticbeanstalk/*.cfg.yml
 !.elasticbeanstalk/*.global.yml
+
+############################
+# Terraform
+############################
+
+# Local .terraform directories
+**/.terraform/*
+
+# .tfstate files
+*.tfstate
+*.tfstate.*
+
+# Crash log files
+crash.log
+
+# Exclude all .tfvars files, which are likely to contain sensitive data, such as
+# password, private keys, and other secrets. These should not be part of version
+# control as they are data points which are potentially sensitive and subject
+# to change depending on the environment.
+#
+# *.tfvars
+
+# Ignore override files as they are usually used to override resources locally and so
+# are not checked in
+override.tf
+override.tf.json
+*_override.tf
+*_override.tf.json
+
+# Include override files you do wish to add to version control using negated pattern
+#
+# !example_override.tf
+
+# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
+# example: *tfplan*
+
+# Ignore CLI configuration files
+.terraformrc
+terraform.rc


@@ -0,0 +1,73 @@
# Security groups

## Load balanced

- 1 for the EC2 instances (applied to the Auto Scaler). The instances can be private. A VPC gateway endpoint is needed for S3 upload.
- 1 for the RDS.
- 1 for the LB.

## Single instances

- 1 for the EC2 instances (applied to the Auto Scaler). The instances need to be public. No VPC gateway endpoint is needed - they have internet access.
- 1 for the RDS.

If using `--database` you don't need to create any SG. Let EB use the default VPC; it will create everything for you.

If not using `--database`:

EC2:

- Create a SG for EC2.
- It should have ingress from all (0.0.0.0:80+443).
- It should have egress to all (0.0.0.0:all).

RDS:

- Specify `security_group_ids` with the SG of the EC2 instances; EB will create the SG for you, with the SG you pass in as its ingress.
- Specify `associate_security_group_ids` to attach a security group to the RDS (if you need to enable public access).

## Commands

Deploy CF:

`aws --profile admin cloudformation deploy --template-file ./03-stack-rdsinstance.yaml --stack-name strapi-rds --parameter-overrides StackName=strapi-vpc --tags git=web-dev owner=home project=strapi-elb test=true deployment=cloudformation`

Destroy CF:

`aws --profile admin cloudformation delete-stack --stack-name strapi-rds`

Terraform:

- `gmake plan`
- `gmake apply`
- `gmake destroy`

EB single instance:

`eb create --single`

With DB:

`eb create --single --database`

Deploy code to the environment:

`apps-awsebcli`

Health check:

`eb health`

Open the URL:

`eb open`

Terminate:

`eb terminate`

documentation/jq.md (new file)

@@ -0,0 +1,72 @@
# JQ
## Piping into jq
You can `cat` or `bat` a file and pipe it into `jq`.
You can also take a command that returns json and pipe it into `jq`.
## Returning data without quotes
To return data without `jq` wrapping results in `"`, use the `-r` (raw output) flag:
`jq -r`
## Filtering
### Get values from a key
Running `aws --profile admin cloudformation describe-stack-resources --stack-name strapi-vpc | jq` returns:
```json
{
  "StackResources": [
    {
      "StackName": "strapi-vpc",
      "StackId": "arn:aws:cloudformation:eu-west-1:745437999005:stack/strapi-vpc/a9e41430-8afc-11ea-bdaa-0a736ea8438a",
      "LogicalResourceId": "InternetGateway",
      "PhysicalResourceId": "igw-0e059db8e0795ac32",
      "ResourceType": "AWS::EC2::InternetGateway",
      "Timestamp": "2020-04-30T16:07:42.434Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "strapi-vpc",
      "StackId": "arn:aws:cloudformation:eu-west-1:745437999005:stack/strapi-vpc/a9e41430-8afc-11ea-bdaa-0a736ea8438a",
      "LogicalResourceId": "InternetGatewayAttachment",
      "PhysicalResourceId": "strap-Inter-1413K0IDR1L3N",
      "ResourceType": "AWS::EC2::VPCGatewayAttachment",
      "Timestamp": "2020-04-30T16:08:00.147Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
```
We can then use `jq`'s filtering to return values.
We have a key of `StackResources` which contains a list: `.StackResources[]`.
We can then pass in the key we want: `.StackResources[].PhysicalResourceId`.
Running `aws --profile admin cloudformation describe-stack-resources --stack-name strapi-vpc | jq -r '.StackResources[].PhysicalResourceId'` gives:
```text
"igw-0e059db8e0795ac32"
"strap-Inter-1413K0IDR1L3N"
"strap-Publi-1TS82BV8W4UFD"
"rtb-0cf8d05f71a30ef03"
"subnet-051fe56dc37d8396d"
"rtbassoc-0f7ae2fbdfe6bf2a5"
"subnet-0ea9f2f165a57be27"
"rtbassoc-00a67937c3778e273"
"subnet-09b28d722f41b2dde"
"rtbassoc-0a0a6bd0f8ff641df"
"vpc-029d232726cbf591d"
```
You can also combine fields in the output:
`aws --profile admin cloudformation describe-stack-resources --stack-name strapi-vpc | jq -r '.StackResources[] | .ResourceType + ": " + .PhysicalResourceId'`


@@ -0,0 +1,80 @@
# Notes
## HTTPS
### With load balancer
HTTPS can terminate at the load balancer.
The load balancer to EC2 leg can be plain HTTP.
From the front end all is well, as the connection is secure.
When terminating at the load balancer, `08-loadbalancer.config` shows the option setting.
<https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-elb.html>
## Database
Connecting an external DB: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html>
Configure the Auto Scaling group to use an additional security group that allows ingress to the RDS instance.
You can configure the RDS credentials either with environment variables in the ELB config file, or use S3: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/rds-external-credentials.html>.
To create your own RDS instance you will need to create:

- A VPC - for the RDS
- Subnets - for the RDS
- A subnet group
- A security group
Use `aws ec2 describe-availability-zones --region eu-west-1 --profile admin` to get a list of availability zones for the region.
The VPC Terraform will create:

- An IGW
- A route table
- A security group
## AWS Networking
- A VPC is a network that you give a CIDR block to.
- You create subnets for a VPC. These subnets will be split evenly across availability zones (for redundancy) and private/local (whether they have internet access or not).
- Behind the scenes (if using TF), the internet gateway, route tables, and attachments will all be created for you. If using CF you will need to create these yourself.
- A security group is a firewall that is _attached to an EC2 instance_. A security group belongs to a VPC. You can permit instances to talk to each other by setting the source and destination to be the security group itself. You can control ports/ips exactly on an instance basis using security groups.
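The last point - letting instances in a group talk to each other by pointing the rule at the group itself - can be sketched as a minimal CloudFormation fragment. The resource names (`AppSecurityGroup`) and the `PublicVPC` reference are illustrative, not taken from this repo's templates:

```yaml
Resources:
  # Sketch: a security group whose ingress source is the group itself, so any
  # instance carrying the group can reach any other such instance, while
  # everything else stays blocked.
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Instances in this group can reach each other.
      VpcId: !Ref PublicVPC
  # The self-referencing rule lives in a separate resource to avoid a
  # circular dependency inside the group definition.
  AppSecurityGroupSelfIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref AppSecurityGroup
      SourceSecurityGroupId: !Ref AppSecurityGroup
      IpProtocol: tcp
      FromPort: 0
      ToPort: 65535
```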
## HTTPS
### Single instance
As HTTPS terminates on the EC2 instance itself, you need to amend the nginx config locally. This is specific to each application you are deploying.
<https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-nodejs.html>.
You need to generate a certificate locally.
`pip install certbot`
`sudo certbot certonly --manual --preferred-challenges=dns --email dtomlinson@panaetius.co.uk --server https://acme-v02.api.letsencrypt.org/directory --agree-tos -d "*.panaetius.co.uk"`
### Load balanced
You have two options:

1. Terminate on the load balancer (easiest).
   <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-elb.html>.
   You can use AWS Certificate Manager to generate your SSL cert, or you can upload your own.
   Use a .config file as documented above and EB will handle the rest.
2. Pass through to the instance.
   <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-tcp-passthrough.html>.
   If you do this you need to set up termination on the EC2 instances using the config for a single instance above.
   You can pass TCP through without the load balancer decrypting the traffic; the traffic stays encrypted all the way to the instance. Traffic between the instances themselves is HTTP.

Additionally, you can configure end-to-end encryption between the EC2 instances if you have strict security requirements.
<https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-endtoend.html>.


@@ -0,0 +1,571 @@
<!-- vscode-markdown-toc -->
* [Running strapi in different modes](#Runningstrapiindifferentmodes)
* [Strapi documentation](#Strapidocumentation)
* [API Examples using HTTPIE](#APIExamplesusingHTTPIE)
* [Authenticate with the API](#AuthenticatewiththeAPI)
* [Get a Single Content Type](#GetaSingleContentType)
* [Use query parameters to filter for Multiple Content Type](#UsequeryparameterstofilterforMultipleContentType)
* [S3 Upload Addon](#S3UploadAddon)
* [AWS Resources](#AWSResources)
* [Configuration](#Configuration)
* [Fix Version Numbers](#FixVersionNumbers)
* [Strapi in git](#Strapiingit)
* [Cloudformation](#Cloudformation)
* [Output naming convention](#Outputnamingconvention)
* [Creating templates](#Creatingtemplates)
* [Adding resources](#Addingresources)
* [Using parameters](#Usingparameters)
* [Using outputs](#Usingoutputs)
* [Using functions](#Usingfunctions)
* [Examples](#Examples)
* [Short form](#Shortform)
* [Outputs](#Outputs)
* [Referencing other resources internally.](#Referencingotherresourcesinternally.)
* [Pesudeo references](#Pesudeoreferences)
* [Referencing other resources from external templates](#Referencingotherresourcesfromexternaltemplates)
* [Deploy a stack/template](#Deployastacktemplate)
* [Passing in parameters](#Passinginparameters)
* [Tags](#Tags)
* [Updating stack](#Updatingstack)
* [Failure](#Failure)
* [Stacks](#Stacks)
* [Snippets](#Snippets)
* [Deploy a template/stack](#Deployatemplatestack)
* [Destroy a stack](#Destroyastack)
* [Tags](#Tags-1)
* [Cloudformation default tags](#Cloudformationdefaulttags)
<!-- vscode-markdown-toc-config
numbering=false
autoSave=true
/vscode-markdown-toc-config -->
<!-- /vscode-markdown-toc -->
# Running notes
Document that the DB has to be done from a CLI arg, but the configs can be done via files.

SSL? <https://levelup.gitconnected.com/beginners-guide-to-aws-beanstalk-using-node-js-d061bb4b8755>
If it doesn't work, try installing yarn in the ELB instance.

Create a separate SQL database + VPC rules:
<http://blog.blackninjadojo.com/aws/elastic-beanstalk/2019/01/28/adding-a-database-to-your-rails-application-on-elastic-beanstalk-using-rds.html>
Tie this in with a CloudFormation template + hooking it up.

`/opt/elasticbeanstalk/node-install/node-v12.16.1-linux-x64/bin`

Try setting the database name using the CloudFormation template.
## <a name='Runningstrapiindifferentmodes'></a>Running strapi in different modes
You should use development mode for developing Strapi, then deploy it to production.
If you run Strapi in production you cannot edit content types - see the GitHub issue below for the thread.

If you're running Strapi with multiple instances you should:

- Run Strapi locally in develop mode.
- Create content types.
- Build Strapi in production mode.
- Push to ELB.

If you're running a single instance, you can alternatively just run it in develop mode in ELB.
Strapi stores its models locally on the instance, not in the database.
<https://github.com/strapi/strapi/issues/4798>
```text
This is not a bug and is intended, as the CTB (Content-Type builder) saves model configurations to files doing so in production would require Strapi to restart and thus could potentially knock your production API offline. Along with the previous reason, strapi is also very much pushed as a scale able application which would mean these changes would not be replicated across any clustered configurations.
There is no current plans to allow for this, as well as no plans to move these model definitions into the database. The enforcement of using the proper environment for the proper task (Production, Staging, and Development) is something that has been pushed from day 1.
Due to the reasons I explained above I am going to mark this as closed but please do feel free to discuss.
```
## <a name='Strapidocumentation'></a>Strapi documentation
<https://strapi.io/blog/api-documentation-plugin>
You can install the strapi documentation plugin by running: `npm run strapi install documentation`.
You can then access it through the Strapi Admin panel.
You should change the production server URL in the documentation settings.
Edit the file `./extensions/documentation/documentation/1.0.0/full_documentation.json` and change `YOUR_PRODUCTION_SERVER` to the ELB URL of your environment.
## <a name='APIExamplesusingHTTPIE'></a>API Examples using HTTPIE
### <a name='AuthenticatewiththeAPI'></a>Authenticate with the API
`http http://strapi-prod.eu-west-1.elasticbeanstalk.com/auth/local identifier=apiuser password=password`
### <a name='GetaSingleContentType'></a>Get a Single Content Type
`http http://strapi-prod.eu-west-1.elasticbeanstalk.com/tests Authorization:"Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MiwiaWF0IjoxNTg3ODY3NzQ4LCJleHAiOjE1OTA0NTk3NDh9.McAi1b-F3IT2Mw90652AprEMtknJrW66Aw5FGMBOTj0"`
### <a name='UsequeryparameterstofilterforMultipleContentType'></a>Use query parameters to filter for Multiple Content Type
You can use query parameters to filter requests made to the API.
<https://strapi.io/documentation/3.0.0-beta.x/content-api/parameters.html#parameters>
The syntax is `?field_operator=value` (e.g. `?title_contains=test`), appended after the endpoint URL for the content type.
`http "http://strapi-prod.eu-west-1.elasticbeanstalk.com/tests?title_contains=test" Authorization:"Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MSwiaXNBZG1pbiI6dHJ1ZSwiaWF0IjoxNTg3ODY3NzMwLCJleHAiOjE1OTA0NTk3MzB9.XXdoZUk_GuOION2KlpeWZ7qwXAoEq9vTlIeD2XTnJxY"`
## <a name='S3UploadAddon'></a>S3 Upload Addon
You should add the `strapi-provider-upload-aws-s3` extension using NPM. Make sure the version you install matches the version of Strapi you are using.
`npm i strapi-provider-upload-aws-s3@3.0.0-beta.20`
### <a name='AWSResources'></a>AWS Resources
You should have an S3 bucket with public access, and an AWS account that has a policy to access the bucket.
### <a name='Configuration'></a>Configuration
You should create a settings file at `./extensions/upload/config/settings.json`.
This file defines an S3 object as in: <https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#constructor-property>.
You can use Strapi dynamic config files to set these from environment variables:

- provider
- providerOptions
  - accessKeyId
  - secretAccessKey
  - region
  - params
    - Bucket
```json
{
  "provider": "aws-s3",
  "providerOptions": {
    "accessKeyId": "${ process.env.STRAPI_S3_ACCESS_KEY || 'AKIA23D4RF6OZWGDKV7W' }",
    "secretAccessKey": "${ process.env.STRAPI_S3_SECRET_KEY || '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' }",
    "region": "${ process.env.STRAPI_S3_REGION || 'eu-west-1' }",
    "params": {
      "Bucket": "${ process.env.STRAPI_S3_BUCKET || 'elb-example-bucket' }"
    }
  }
}
```
Alternatively, if you want to use different options for different environments, you can use a `settings.js` file instead.
<https://strapi.io/documentation/3.0.0-beta.x/plugins/upload.html#using-a-provider>
```javascript
if (process.env.NODE_ENV === "production") {
  module.exports = {
    provider: "aws-s3",
    providerOptions: {
      accessKeyId: process.env.STRAPI_S3_ACCESS_KEY,
      secretAccessKey: process.env.STRAPI_S3_SECRET_KEY,
      region: process.env.STRAPI_S3_REGION,
      params: {
        Bucket: process.env.STRAPI_S3_BUCKET,
      },
    },
  };
} else {
  module.exports = {};
}
```
## <a name='FixVersionNumbers'></a>Fix Version Numbers
When using Strapi you should make sure the version numbers for **all** dependencies in `./package.json` are fixed for Strapi modules. You cannot mix and match and upgrade arbitrarily.
An example is:
```json
{
  "dependencies": {
    "knex": "<0.20.0",
    "pg": "^8.0.3",
    "sqlite3": "latest",
    "strapi": "3.0.0-beta.20",
    "strapi-admin": "3.0.0-beta.20",
    "strapi-connector-bookshelf": "3.0.0-beta.20",
    "strapi-plugin-content-manager": "3.0.0-beta.20",
    "strapi-plugin-content-type-builder": "3.0.0-beta.20",
    "strapi-plugin-documentation": "3.0.0-beta.20",
    "strapi-plugin-email": "3.0.0-beta.20",
    "strapi-plugin-upload": "3.0.0-beta.20",
    "strapi-plugin-users-permissions": "3.0.0-beta.20",
    "strapi-provider-upload-aws-s3": "3.0.0-beta.20",
    "strapi-utils": "3.0.0-beta.20"
  }
}
```
## <a name='Strapiingit'></a>Strapi in git
To have a Strapi project in GitHub you should remove the:

- `./build`
- `./node_modules`

folders.
When cloning from the repo you should then do a:
- `NODE_ENV=development npm install`
- `NODE_ENV=development npm run build`
You can then run Strapi with `npm run develop` or `NODE_ENV=production npm run start`.
## <a name='Cloudformation'></a>Cloudformation
<https://adamtheautomator.com/aws-cli-cloudformation/> (example of deploying an S3 bucket with static site `index.html`.)
### <a name='Outputnamingconvention'></a>Output naming convention
You should follow a standard naming convention for your CF outputs.
For example:
```yaml
Outputs:
  PublicVPCOutput:
    Description: The VPC ID.
    Value: !Ref PublicVPC
    Export:
      Name: !Sub "${AWS::StackName}-EBStrapiPublicVPC"
```
This defines a VPC export. We can then pass the stack name into another CF template and it can reference this VPC. The VPC names are static between projects (they don't have to be, but here they are).
### <a name='Creatingtemplates'></a>Creating templates
To create a cloudformation template you should create a `template.yaml`. This yaml file should have at the top:
```yaml
AWSTemplateFormatVersion: 2010-09-09
Description: A simple CloudFormation template
```
Then you should add a `Resources` key and populate this with all the infrastructure you need to provision.
### <a name='Addingresources'></a>Adding resources
Documentation for all AWS resources is: <https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html>.
A good approach is to use the GUI to create an object, and then lookup the cloudformation template as you go along.
### <a name='Usingparameters'></a>Using parameters
<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html>
You can use parameters in your templates. This allows you to use names/resources from other templates, or specify them at creation on the CLI.
To use a parameter you should create a `Parameters` section in the yaml on the same level as a `Resources`.
```yaml
Parameters:
  InstanceTypeParameter:
    Type: String
    Default: t2.micro
    AllowedValues:
      - t2.micro
      - m1.small
      - m1.large
    Description: Enter t2.micro, m1.small, or m1.large. Default is t2.micro.
```
### <a name='Usingoutputs'></a>Using outputs
<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html>
### <a name='Usingfunctions'></a>Using functions
A list of all Cloudformation functions is: <https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html>.
`Fn::Select` will select a single object from a list of objects by index.
`Fn::GetAZs` returns an array that lists all availability zones for a specified region.
`!Ref` returns the value of the specified parameter or resource.
#### <a name='Examples'></a>Examples
##### Select, GetAZs and Ref
Example of `Fn::Select`, `Fn::GetAZs` and `!Ref`:
```yaml
PublicSubnet1:
  Type: AWS::EC2::Subnet
  Properties:
    AvailabilityZone:
      Fn::Select:
        - 0
        - Fn::GetAZs: !Ref "AWS::Region"
```
##### GetAtt
`Fn::GetAtt` differs from `Ref` in that `!GetAtt` gets an attribute of a resource, whereas `Ref` will reference the actual resource itself. An attribute is a return value of a resource. For example, a VPC resource has a `DefaultSecurityGroup` as an attribute that you can access.
To see attributes that you can reference with `!GetAtt`, you should check the Cloudformation documentation for the resource in question and look at the "Return Values" header: <https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-vpc.html#aws-resource-ec2-vpc-return-values>.
An example would be using `Fn::GetAtt` to export a return value for some object in a template:
```yaml
Outputs:
  PublicVPCIDDefaultSecurityGroup:
    Description: The VPC ID.
    Value: !GetAtt PublicVPC.DefaultSecurityGroup
    Export:
      Name: !Sub "${AWS::StackName}-PublicVPCIDDefaultSecurityGroup"
```
Long syntax: `Fn::GetAtt: [ logicalNameOfResource, attributeName ]`
##### Sub
A really good resource for Cloudformation functions is: <https://www.fischco.org/technica/2017/cloud-formation-sub/>.
Using `Fn::Sub` allows you to substitute a variable into the string you are trying to create. You might want to substitute an input parameter in, for example.
```yaml
AppDnsRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneId: !ImportValue HostedZone-zone-id
    Name:
      Fn::Sub:
        - "myapp.${HostedZoneName}"
        - HostedZoneName: !ImportValue HostedZone-zone-name
```
Here we have referenced `${HostedZoneName}` - a temporary parameter in the `Sub` command. It does not exist at this point, which is why we supply a map defining this variable as the second argument to `Sub`. In this example it uses `Fn::ImportValue` to import a resource from another CloudFormation stack.
As this second argument is a map (denoted by the `:`), we can have multiple key/value pairs.
```yaml
Name:
  Fn::Sub:
    - "myapp.${SubDomain}.${HostedZoneName}"
    - HostedZoneName: !ImportValue HostedZone-zone-name
      SubDomain: !ImportValue HostedZone-subzone-name
```
Note that the second key/value pair does not have a leading `-`. We don't want to pass another argument to the `Sub` command; rather, we want to define an additional key/value pair to be substituted in.
If our imported value's name also depended on an input parameter (say, a stack name), we would have to use nested `Sub` functions. In the example above we import a static value whose name is hardcoded; if we want the name to be dynamic, populated from an input parameter, we can use:
```yaml
Parameters:
  Route53StackName:
    Type: String
Resources:
  AppDnsRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName:
        Fn::ImportValue: !Sub "${Route53StackName}-zone-name"
      Name:
        Fn::Sub:
          - "myapp.${ZoneName}"
          - ZoneName:
              Fn::ImportValue: !Sub "${Route53StackName}-zone-name"
```
Pay attention to the double indentation after `ZoneName`!
We have to use the long form `Fn::ImportValue` here and not the shorthand - YAML does not allow two short-form functions (such as `!ImportValue !Sub`) to be nested directly.
#### <a name='Shortform'></a>Short form
If you are writing templates in YAML there is a long and a short form available.
An example for the `Sub` function:
- Longform `Fn::Sub: String`
- Shortform `!Sub String`
### <a name='Outputs'></a>Outputs
You can use the `Outputs:` header in your Cloudformation templates to specify outputs to be used in other Cloudformation templates.
<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html>
```yaml
Outputs:
PublicVPCID:
Description: The VPC ID.
Value: !Ref PublicVPC
Export:
Name: ELBStrapiPublicVPC
```
`Value` is the value returned by an `aws cloudformation describe-stacks` command. The value can contain literals, parameter references, pseudo parameters, mapping values or functions.
`Name` goes under `Export:` and is used for cross-stack references. This name must be unique within a region. You can use this name in other Cloudformation templates to reference the `Value` you have specified above, setting content in other templates this way.
You can refer to these exports in Elastic Beanstalk `.config` files for example - allowing you to dynamically link to other AWS resources in your environment.
### <a name='Referencingotherresourcesinternally.'></a>Referencing other resources internally.
You can reference other resources in the template. This is useful say if you want to define a VPC and a subnet and reference the VPC from the subnet.
To do this you should use the `!Ref` function:
```yaml
VpcId: !Ref PublicVPC
```
Note that `Ref` is special syntax: its long form doesn't have an `Fn::` prefix, so the short form `!Ref` is actually longer than the long form in this case.
<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html>
#### <a name='Pesudeoreferences'></a>Pseudo parameters
You can also reference certain AWS pseudo parameters: <https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/pseudo-parameter-reference.html>.
Examples include `AWS::AccountId` and `AWS::StackName` among others.
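As a quick sketch, pseudo parameters can be used directly inside a `Sub` string (the output name here is invented for illustration):
```yaml
Outputs:
  StackInfo:
    Description: Example of pseudo parameters inside a Sub string.
    Value: !Sub "Stack ${AWS::StackName} in account ${AWS::AccountId}, region ${AWS::Region}"
```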
### <a name='Referencingotherresourcesfromexternaltemplates'></a>Referencing other resources from external templates
Say we have a Cloudformation template where we have created a VPC:
```yaml
Outputs:
PublicSubnet0ID:
Description: The ID of the subnet.
Value: !Ref PublicSubnet0
Export:
Name: !Sub "${AWS::StackName}-PublicSubnet0"
```
We want to be able to use this, dynamically, in another template.
To do this we can use the `Fn::Sub` and `Fn::ImportValue` functions.
```yaml
Parameters:
StackName:
Description: The stack name of another CloudFormation template. This is used
to prepend the name of other resources in other templates.
Type: String
Resources:
RDSSubnetGroup:
Type: AWS::RDS::DBSubnetGroup
Properties:
DBSubnetGroupDescription: A subnet group for the RDS instance.
SubnetIds:
- Fn::ImportValue: !Sub "${StackName}-PublicSubnet0"
- Fn::ImportValue: !Sub "${StackName}-PublicSubnet1"
```
### <a name='Deployastacktemplate'></a>Deploy a stack/template
To deploy, you should run the command: `aws cloudformation deploy --template-file template.yaml --stack-name static-website`
### <a name='Passinginparameters'></a>Passing in parameters
You can define parameters in its own section in a Cloudformation template:
```yaml
Parameters:
StackName:
Description: The stack name of another CloudFormation template. This is used
to prepend the name of other resources in other templates.
Type: String
```
You can set a default value which will be used if no value is passed in.
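For example, a default can be set with the `Default` key (the value here is hypothetical):
```yaml
Parameters:
  StackName:
    Description: The stack name of another CloudFormation template.
    Type: String
    Default: strapi-vpc
```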
To pass values in using the CLI you should use the `--parameter-overrides` argument and pass them in as key=value pairs separated by a space:
```bash
--parameter-overrides StackName=temp-vpc
```
### <a name='Tags'></a>Tags
When setting tags you can set them on individual resources in the Cloudformation template:
```yaml
Tags:
  - Key: git
    Value: web-dev
  - Key: owner
    Value: home
  - Key: project
    Value: strapi-elb
  - Key: test
    Value: "true"
  - Key: deployment
    Value: cloudformation
Alternatively if you have many tags to be shared across all resources you can set them when you use the CLI to deploy: `--tags git=web-dev owner=home project=strapi-elb test=true deployment=cloudformation`
### <a name='Updatingstack'></a>Updating stack
To update a stack you can use `deploy`. Note that the default behaviour is to create the new resources side by side, then, once successful, remove the old ones. You may run into errors when updating certain resources (updating a VPC subnet will fail as it has to create the new subnet alongside the existing one). In that case you should remove the old stack with `delete-stack` first:
`aws cloudformation delete-stack --stack-name temp-vpc --profile admin`
### <a name='Failure'></a>Failure
If something goes wrong, you can use `describe-stack-events` and pass the `stack-name` to find the events leading up to the failure: `aws cloudformation describe-stack-events --stack-name strapi-s3`.
If the initial creation of a stack fails, the stack is left in the `ROLLBACK_COMPLETE` state and you will not be able to re-deploy it. You must first delete the stack entirely and then re-deploy with any fixes.
You can delete a stack by running: `aws --profile admin cloudformation delete-stack --stack-name strapi-s3`.
### <a name='Stacks'></a>Stacks
<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html>
A Cloudformation stack is a collection of AWS resources that you can manage as a single unit. You can group different resources under one stack and then create, update or destroy everything under this stack.
Using stacks means AWS treats all resources as a single unit: they must all be created successfully for the stack to be created. If a resource cannot be created, Cloudformation rolls the stack back to the previous configuration and deletes any interim resources that were created.
### <a name='Snippets'></a>Snippets
#### <a name='Deployatemplatestack'></a>Deploy a template/stack
`aws --profile admin cloudformation deploy --template-file ./01-stack-storage.yaml --stack-name strapi-s3`
You can pass parameter values in with `--parameter-overrides KEY=VALUE`:
`--parameter-overrides TestParameter="some test string"`
#### <a name='Destroyastack'></a>Destroy a stack
`aws --profile admin cloudformation delete-stack --stack-name strapi-s3`
## <a name='Tags-1'></a>Tags
Suggested tags for all AWS resources are:
| Tag | Description | Example |
| ----------- | ---------------------------------- | ------------------------ |
| git | git repo that contains the code | `web-dev` |
| owner | who the resource is for/owned | `home`, `work`, `elliot` |
| project | what project it belongs to | `strapi-elb` |
| test | flag for a temporary test resource | `true` |
| environment | environment resource belongs to | `dev`, `prod` |
| deployment | AWS tool used for deployment | `cloudformation`, `elb` |
### <a name='Cloudformationdefaulttags'></a>Cloudformation default tags
For Cloudformation resources the following tags get applied automatically:
| Tag | Description | Example |
| ----------------------------- | ------------------------------- | -------------------------------------------------------------------------------------------------- |
| aws:cloudformation:stack-name | stack-name of the resource | `strapi-s3` |
| aws:cloudformation:logical-id | resource name given in template | `ELBExampleBucket` |
| aws:cloudformation:stack-id | ARN of the cloudformation stack | arn:aws:cloudformation:eu-west-1:745437999005:stack/strapi-s3/459ebbf0-88aa-11ea-beac-02f0c9b42810 |

39
documentation/steps.todo Normal file
View File

@@ -0,0 +1,39 @@
Connecting external DB:
✔ Create RDS using TF @important @today @done (7/28/2020, 11:34:12 PM)
RDS Config:
☐ Try using `associate_security_group_ids` and creating a security group to allow all incoming traffic to the RDS instance.
Email:
☐ Add `strapi-provider-email-amazon-ses` and configure.
Deployments:
One:
✔ Create S3 bucket for strapi s3. @done (7/29/2020, 2:07:55 PM)
✔ Deploy TF with additional SG for DB. @done (7/30/2020, 3:02:39 AM)
☐ Have TF produce outputs with everything needed.
✔ Redeploy single instance with the EB config file with VPCs created. @done (7/30/2020, 3:02:41 AM)
Two:
☐ Have SSL enabled for single instance.
Three:
☐ Have SSL enabled for multiple instance.
Misc:
☐ Have the EB instances on the private subnet.
☐ Create a Gateway VPC endpoint: <https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html>.
Prod Steps:
☐ Plan out the posts needed for the series.
This needs to be done at the same time as writing the site pages.
☐ Create everything from scratch
Strapi:
☐ Install from new.
☐ Create TF files.
☐ Initialise EB environment.
☐ Deploy TF.
☐ Deploy EB environment for single instance to start.
Today:
☐ Redeploy with updated config.
☐ Enable HTTPs for single instance.
☐ Use S3 to read in secrets.

346
documentation/tempnotes.md Normal file
View File

@@ -0,0 +1,346 @@
<!-- vscode-markdown-toc -->
- [Decoupling](#Decoupling)
- [Creating Database + VPC + Subnets in Cloudformation](#CreatingDatabaseVPCSubnetsinCloudformation)
- [Single instance (no load balancer)](#Singleinstancenoloadbalancer)
  - [EC2::VPC](#EC2::VPC)
    - [Enable DNS](#EnableDNS)
  - [EC2::Subnet](#EC2::Subnet)
  - [EC2::InternetGateway](#EC2::InternetGateway)
  - [EC2::VPCGatewayAttachment](#EC2::VPCGatewayAttachment)
  - [AWS::EC2::RouteTable](#AWS::EC2::RouteTable)
  - [AWS::EC2::Route](#AWS::EC2::Route)
  - [AWS::EC2::SubnetRouteTableAssociation](#AWS::EC2::SubnetRouteTableAssociation)
- [Running notes](#Runningnotes)
  - [Database](#Database)
- [Work Commands](#WorkCommands)
  - [tags](#tags)
  - [deploy](#deploy)
  - [delete](#delete)
  - [describe-stack-resources](#describe-stack-resources)
- [Adding SSL to ELB](#AddingSSLtoELB)
  - [With load balancer](#Withloadbalancer)
- [EB Templates/Resources](#EBTemplatesResources)
- [Configuring security groups](#Configuringsecuritygroups)
- [Elastic Load Balancer](#ElasticLoadBalancer)
  - [Elastic Scaler](#ElasticScaler)
  - [RDS](#RDS)
    - [Security group to allow EC2 instances to talk to each other](#SecuritygrouptoallowEC2instancestotalktoeachother)
- [Custom VPC + Subnets in EB](#CustomVPCSubnetsinEB)
- [Using cloudformation functions in EB config files](#UsingcloudformationfunctionsinEBconfigfiles)
- [Creating a read replica RDS](#CreatingareadreplicaRDS)
- [Multiple security groups on the same resource](#Multiplesecuritygroupsonthesameresource)
- [Private subnets](#Privatesubnets)
<!-- vscode-markdown-toc-config
numbering=false
autoSave=true
/vscode-markdown-toc-config -->
<!-- /vscode-markdown-toc -->
# Temp Notes
## <a name='Decoupling'></a>Decoupling
When creating an EB instance with `--single` and `--database` the following is created as part of the EB deployment:
- security group
- EIP
- RDS database
Is the security group created without a database? (probably yes...)
## <a name='CreatingDatabaseVPCSubnetsinCloudformation'></a>Creating Database + VPC + Subnets in Cloudformation
Template from AWS showing cross-stack referencing and creating and referencing a VPC: <https://s3.amazonaws.com/cloudformation-examples/user-guide/cross-stack/SampleNetworkCrossStack.template>.
Export these in the CF template with stackname (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html>)
A security group is a resource that defines what IPs/Ports are allowed on inbound/outbound for an AWS resource. You can have one for EC2 instance, or RDS among others.
EB will create a VPC for your EC2 instances.
You should use this VPC for your RDS instance.
Creating a VPC for EB (with RDS) <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc-rds.html>
## <a name='Singleinstancenoloadbalancer'></a>Single instance (no load balancer)
Example cloudformation template that EB uses: <https://raw.githubusercontent.com/awslabs/elastic-beanstalk-samples/master/cfn-templates/vpc-public.yaml>.
Create a VPC - this is an object that spans all availability zones in a region. You assign a VPC a CIDR block. This is a set of IP addresses that this VPC has access to.
You should create public subnets inside this VPC - these subnets should cover all availability zones in your region. The CIDR block you specified in the VPC defines all the available IP addresses; you should create N subnets that divide these IP addresses equally across your region's availability zones.
For example a VPC in `eu-west-1` has a CIDR block of `172.31.0.0/16`.
There are 3 availability zones in `eu-west-1`: `eu-west-1a`, `eu-west-1b` and `eu-west-1c`.
To find the availability zones for a region you should go to the EC2 Dashboard for that region and scroll down to the Service health header. A list of all availability zones will be shown there.
You should create subnets with the following:
| Availability Zone | Subnet CIDR | Real IP Range |
| ----------------- | -------------- | --------------------------- |
| `eu-west-1a` | 172.31.0.0/20 | 172.31.0.0 - 172.31.15.255 |
| `eu-west-1b` | 172.31.16.0/20 | 172.31.16.0 - 172.31.31.255 |
| `eu-west-1c` | 172.31.32.0/20 | 172.31.32.0 - 172.31.47.255 |
This covers all IP addresses across all availability zones in the VPC.
To make these subnets actually public, you should associate them with an internet gateway.
An internet gateway is an object that allows communication to the internet. In Cloudformation you should create an internet gateway and a VPC Gateway attachment. This attachment should reference the VPC you have created and reference the internet gateway object you create as well. Then, in your subnets (which are public) you can use `MapPublicIpOnLaunch: true` in the `Properties` block for each subnet.
You should then create a public route table and associate it with the VPC you have created.
You should then create a public route. You can then attach the internet gateway to this route and specify a list of IPs that will go out to the internet. To allow all traffic to the internet set a `DestinationCidrBlock` of `0.0.0.0/0`.
### <a name='EC2::VPC'></a>EC2::VPC
#### <a name='EnableDNS'></a>Enable DNS
Enable `EnableDnsHostnames` + `EnableDnsSupport` - this allows resources in the VPC to use DNS in AWS.
### <a name='EC2::Subnet'></a>EC2::Subnet
Go to the EC2 dashboard to find all availability zones. Create a subnet for each zone.
- `AvailabilityZone`
- `VpcId`
- `CidrBlock`
- `MapPublicIpOnLaunch`
### <a name='EC2::InternetGateway'></a>EC2::InternetGateway
### <a name='EC2::VPCGatewayAttachment'></a>EC2::VPCGatewayAttachment
- `VpcId`
- `InternetGatewayId`
### <a name='AWS::EC2::RouteTable'></a>AWS::EC2::RouteTable
- `VpcId`
### <a name='AWS::EC2::Route'></a>AWS::EC2::Route
- `RouteTableId`
- `DestinationCidrBlock`
- `GatewayId`
### <a name='AWS::EC2::SubnetRouteTableAssociation'></a>AWS::EC2::SubnetRouteTableAssociation
- `SubnetId`
- `RouteTableId`
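A minimal sketch tying all of these resources together (the logical names and availability zone are illustrative, following the notes above):
```yaml
Resources:
  PublicVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: "172.31.0.0/16"
      EnableDnsHostnames: true
      EnableDnsSupport: true
  PublicSubnet0:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: eu-west-1a
      VpcId: !Ref PublicVPC
      CidrBlock: "172.31.0.0/20"
      MapPublicIpOnLaunch: true
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref PublicVPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref PublicVPC
  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: GatewayAttachment
    Properties:
      RouteTableId: !Ref PublicRouteTable
      # Send all traffic out via the internet gateway.
      DestinationCidrBlock: "0.0.0.0/0"
      GatewayId: !Ref InternetGateway
  PublicSubnet0RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet0
      RouteTableId: !Ref PublicRouteTable
```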
## <a name='Runningnotes'></a>Running notes
If we specify the VPC + subnets from Cloudformation in a config file, will it create the security groups automatically for the EC2 instances? - Yes.
- The database can use existing subnets.
- The database needs a security group created for it.
- EC2 security groups are automatically created and associated with the VPC.
Use `aws:ec2:vpc` (<https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-ec2vpc>).
### <a name='Database'></a>Database
Needs:
- `AWS::RDS::DBSubnetGroup`
- `AWS::EC2::SecurityGroupIngress`
- `AWS::RDS::DBInstance`
Default ports:
| Database Engine | Default Port |
| -------------------- | ------------ |
| Aurora/MySQL/MariaDB | 3306 |
| PostgreSQL | 5432 |
| Oracle | 1521 |
| SQL Server | 1433 |
| DynamoDB | 8000 |
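A rough sketch of how those three resources might fit together for PostgreSQL (the subnet and security group references, instance class and credentials are all placeholders):
```yaml
Resources:
  RDSSubnetGroup:
    Type: AWS::RDS::DBSubnetGroup
    Properties:
      DBSubnetGroupDescription: Subnets for the RDS instance.
      SubnetIds:
        - !Ref PublicSubnet0
        - !Ref PublicSubnet1
  # Allow the EC2 instances' security group in on the PostgreSQL port.
  DBIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref DBSecurityGroup
      IpProtocol: tcp
      FromPort: 5432
      ToPort: 5432
      SourceSecurityGroupId: !Ref ASSecurityGroup
  RDSInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: postgres
      DBInstanceClass: db.t3.micro
      AllocatedStorage: "20"
      MasterUsername: strapi
      MasterUserPassword: "change-me"  # placeholder - pass in as a NoEcho parameter in practice
      DBSubnetGroupName: !Ref RDSSubnetGroup
      VPCSecurityGroups:
        - !Ref DBSecurityGroup
```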
## <a name='WorkCommands'></a>Work Commands
### <a name='tags'></a>tags
`--tags git=web-dev owner=home project=strapi-eb test=true deployment=cloudformation`
### <a name='deploy'></a>deploy
`aws --profile admin cloudformation deploy --template-file ./02-stack-vpc.yaml --stack-name strapi-vpc --tags git=web-dev owner=home project=strapi-eb test=true deployment=cloudformation`
`aws --profile admin cloudformation deploy --template-file ./03-stack-rdsinstance.yaml --stack-name strapi-rds --parameter-overrides StackName=strapi-vpc --tags git=web-dev owner=home project=strapi-eb test=true deployment=cloudformation`
### <a name='delete'></a>delete
`aws --profile admin cloudformation delete-stack --stack-name strapi-vpc`
`aws --profile admin cloudformation delete-stack --stack-name strapi-rds`
`aws --profile admin cloudformation delete-stack --stack-name temp`
List of all RDS Engines available under "Engine" header: <https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html>.
### <a name='describe-stack-resources'></a>describe-stack-resources
Prints a JSON list of all resources in the stack:
`aws --profile admin cloudformation describe-stack-resources --stack-name strapi-vpc`
Using `jq` for formatting:
`aws --profile admin cloudformation describe-stack-resources --stack-name strapi-vpc | jq -r '.StackResources[] | .ResourceType + ": " + .PhysicalResourceId'`
## <a name='AddingSSLtoELB'></a>Adding SSL to ELB
You should generate an SSL Certificate in Certificate Manager for your domain. To do this you will need to create a CNAME record to verify you have access to the DNS settings.
At the same time you should create a CNAME record that maps your subdomain (<strapi.panaetius.co.uk>) to the DNS name AWS has given your load balancer (<awseb-AWSEB-68CXGV0UTROU-1492520139.eu-west-1.elb.amazonaws.com>).
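The same CNAME could be sketched in Cloudformation, reusing the `RecordSet` pattern from earlier (the hosted zone ID is a placeholder):
```yaml
StrapiCname:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneId: Z0EXAMPLE12345  # placeholder hosted zone ID
    Name: strapi.panaetius.co.uk
    Type: CNAME
    TTL: "300"
    ResourceRecords:
      - awseb-AWSEB-68CXGV0UTROU-1492520139.eu-west-1.elb.amazonaws.com
```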
### <a name='Withloadbalancer'></a>With load balancer
A load balancer is not free! It costs ~£15 a month.
- Configure the load balancer listener in a EB `.config` file:
```yaml
option_settings:
aws:elbv2:listener:443:
Protocol: HTTPS
SSLCertificateArns: arn:aws:acm:eu-west-1:745437999005:certificate/218876af-7f8d-4022-97af-ad982aa540bc
```
## <a name='EBTemplatesResources'></a>EB Templates/Resources
Good repo for examples: <https://github.com/awsdocs/elastic-beanstalk-samples>
Creating a VPC for RDS in EB: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc-rds.html>
CF RDS EB template: <https://github.com/garystafford/aws-rds-postgres/blob/master/cfn-templates/rds.template>
Decouple an existing RDS instance from an Elastic Beanstalk environment: <https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/>
## <a name='Configuringsecuritygroups'></a>Configuring security groups
## <a name='ElasticLoadBalancer'></a>Elastic Load Balancer
Should set: inbound/outbound 80/443 on 0.0.0.0/0
The `option_settings` namespace `aws:elbv2:loadbalancer` has two options for security groups.
| Option | Description |
| -------------------- | --------------------------------------------------------------------- |
| ManagedSecurityGroup | Defines the security group that is used for the load balancer itself. |
| SecurityGroups | Is a list of additional security groups you want to attach. |
If you define a `ManagedSecurityGroup` you should set `SecurityGroups` to the same one as well.
The load balancer needs a security group that allows incoming 80 + 443 from anywhere. It should also allow the same on outbound. This security group should be set in `aws:elbv2:loadbalancer` under both `ManagedSecurityGroup` and `SecurityGroups`.
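Sketched as an EB `.config` entry (the security group ID is a placeholder):
```yaml
option_settings:
  aws:elbv2:loadbalancer:
    ManagedSecurityGroup: sg-0123456789abcdef0
    SecurityGroups: sg-0123456789abcdef0
```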
### <a name='ElasticScaler'></a>Elastic Scaler
Should set inbound 80/443 from LBSG.
EB will create a security group for the EC2 instances. In addition to this, you can create a new security group that will be applied to EC2 instances the elastic scaler creates.
This is set under `aws:autoscaling:launchconfiguration`.
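For example, the extra group can be attached like so (placeholder security group ID):
```yaml
option_settings:
  aws:autoscaling:launchconfiguration:
    SecurityGroups: sg-0123456789abcdef0
```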
### <a name='RDS'></a>RDS
Should set: inbound 5432 from Scaling SG + home ip (change port and home ip).
The database should have a security group created that allows incoming connections from the EC2 instances only.
### <a name='SecuritygrouptoallowEC2instancestotalktoeachother'></a>Security group to allow EC2 instances to talk to each other
Security group rule to allow instances in the same security group to talk to one another: <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules-reference.html#sg-rules-other-instances>.
## <a name='CustomVPCSubnetsinEB'></a>Custom VPC + Subnets in EB
In a `.config` file specify the subnets for each tier of your app:
```yaml
option_settings:
aws:ec2:vpc:
VPCId: "vpc-003597eb63a0a3efe"
Subnets: "subnet-02cd8f7981ddfe345,subnet-02d9e1338e8d92d09,subnet-0e07d4d35394db524"
DBSubnets: "subnet-02cd8f7981ddfe345,subnet-02d9e1338e8d92d09,subnet-0e07d4d35394db524"
```
## <a name='UsingcloudformationfunctionsinEBconfigfiles'></a>Using cloudformation functions in EB config files
Only certain CF functions can be used in EB config files. For anything more advanced you should use Terraform to deploy additional resources alongside an EB template.
Reddit discussion on the topic: <https://www.reddit.com/r/aws/comments/a2uoae/is_there_a_way_to_reference_an_elastic_beanstalk/>.
EB documentation on what functions are supported: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions-functions.html#ebextensions-functions-getatt>.
You cannot use `Fn::ImportValue` to reference a resource in another Cloudformation stack.
You can use join for resources that EB creates itself: `!Join [ ":", [ !Ref "AWS::StackName", AccountVPC ] ]`.
## <a name='CreatingareadreplicaRDS'></a>Creating a read replica RDS
To have a replica database you should create a new DB instance with the same `AllocatedStorage` size and `DBInstanceClass`. You should set the `SourceDBInstanceIdentifier` to be a `!Ref` of your primary DB. You should also set the `SourceRegion`.
Read replica CF docs: <https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-sourcedbinstanceidentifier>
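A sketch of the replica resource, assuming the primary is defined in the same template as `RDSInstance` (size and class are placeholders):
```yaml
ReadReplica:
  Type: AWS::RDS::DBInstance
  Properties:
    # Reference the primary DB instance defined elsewhere in the template.
    SourceDBInstanceIdentifier: !Ref RDSInstance
    SourceRegion: eu-west-1
    DBInstanceClass: db.t3.micro
    AllocatedStorage: "20"
```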
## <a name='Multiplesecuritygroupsonthesameresource'></a>Multiple security groups on the same resource
Multiple security groups get squashed to determine what is and isn't allowed: <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html>.
## <a name='Privatesubnets'></a>Private subnets
You can create private subnets that do not have an internet gateway attached to them. An example CF template is <https://github.com/awsdocs/elastic-beanstalk-samples/blob/master/cfn-templates/vpc-privatepublic.yaml>.
You need a NAT gateway to allow private subnets to go out to the internet.
If you use private subnets, the NAT gateway is not cheap - £30 a month.
You don't need the NAT gateway - you can achieve the same thing with security groups (block all incoming), as explained at <https://www.reddit.com/r/aws/comments/75bjei/private_subnets_nats_vs_simply_only_allowing/>.
An advantage of NAT is that all outgoing requests to the internet come from a single IP.
## Using certbot CLI to generate SSL
### Wildcard certificate
In a new virtualenv install certbot:
```bash
pip install certbot
```
Run the `certbot` command:
```bash
sudo certbot certonly --manual --preferred-challenges=dns --email dtomlinson@panaetius.co.uk --server https://acme-v02.api.letsencrypt.org/directory --agree-tos -d "*.panaetius.co.uk"
```
Follow the instructions to add a `TXT` record to your DNS server for validation.
When finished you should see:
```markdown
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/panaetius.co.uk/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/panaetius.co.uk/privkey.pem
Your cert will expire on 2020-08-01. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew _all_ of your certificates, run
"certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
```
## Terraform
### Elastic Beanstalk
Editing the EB default resources in Terraform: <https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/issues/98#issuecomment-620677233>.

72
documentation/todo.md Normal file
View File

@@ -0,0 +1,72 @@
# To Do
## Immediate
Merge the CF templates into one, make sure all the importing and other snippets are documented.
- Create single instance deployment + https (document)
- For https: use letsencrypt to generate ssl, configure the eb config to use this.
- Final git branch for each version of the app (load balanced https/http, single http/https).
- Terraform it all up (excluding single + https).
## Long term
Use codebuild to update strapi
Use circle CI instead
Cloudformation template to deploy an S3 bucket
## Documentation
Summarise the flow -> VPC, internet gateway, attachment + route tables, subnets etc. Mention the nat gateway but show how it can be replaced with security groups. Document each individual resource needed bullet point and link to the git repo for the TF/CF templates.
## Running Notes
Various deployments:
- Single instance with EBCLI
- Load balanced with EBCLI
- Single instance with terraform
- Load balanced with terraform
HTTP + HTTPS
Single instance with terraform isn't possible with HTTPS - this is because you can't edit `Resources` or `Files` (and the other advanced EB configs). A workaround would be to create a Docker image.
Single instance with the EBCLI isn't possible with HTTPS if you're using Certificate Manager to generate the certificates - this is because you need to edit the nginx proxy config locally on the instance to allow HTTPS, and you don't have access to the private key with Certificate Manager.
One solution would be to generate your SSL using letsencrypt - then configure the instance with this.
Another solution would be to use Docker and build a custom image. In this image you could install and configure nginx (using Let's Encrypt in a multi-stage build to get your certificate).
HTTPS for load balanced environment just requires pointing a domain to the EB endpoint. You can tell the load balancer to forward 443 in the security group without using it.
For final deployment - use an EC2 instance (deploy with TF).
### Other
Work:
Can we use APIGateway + Fargate to run an API containerised?
Fargate documentation: <https://aws.amazon.com/fargate/>.
Fargate option in ECS terraform: <https://www.terraform.io/docs/providers/aws/r/ecs_service.html#launch_type>.
Lambda vs Fargate differences: <https://www.learnaws.org/2019/09/14/deep-dive-aws-fargate/>.
Fargate vs EC2 pricing: <https://www.reddit.com/r/aws/comments/8reem9/fargate_t2small_cost_comparison_dollar_to_dollar/>.
Reddit thread on using API Gateway + Fargate: <https://www.reddit.com/r/aws/comments/bgqz4g/can_api_gateway_route_to_a_container_in_fargate/>.
Using API Gateway + Private endpoints (in a VPC): <https://aws.amazon.com/blogs/compute/introducing-amazon-api-gateway-private-endpoints/>.
Fargate is just running containers serverless - but it isn't a direct replacement to lambda. The spin up times can be long, but if you need to run a task on a schedule and this doesn't matter, you can save money and time as you don't need to manage and run an EC2 instance for docker containers. It's not ideal for tasks that need to be running 24/7.
Have separate repos for Terraform + Ansible. Split them inside by project. One central place for all TF and Ansible will make things easier to reference later.
Generate SSH keys for EC2.
Provision EC2 using TF - set SG to allow SSH from your IP.
Configure EC2 with an Ansible playbook.
## Single options
- Dockerise it + run on EC2/ECS/Fargate
- Use EBCLI + Config options for https. Generate SSL using lets encrypt.
Using certbot with docker: <https://certbot.eff.org/docs/install.html#running-with-docker>
Forcing http > https redirection: <https://github.com/awsdocs/elastic-beanstalk-samples/tree/master/configuration-files/aws-provided/security-configuration/https-redirect/nodejs>.

28
documentation/updated.md Normal file
View File

@@ -0,0 +1,28 @@
Follow this tutorial to do python with asgi
Try with native python deployment + docker
<https://towardsdatascience.com/building-web-app-for-computer-vision-model-deploying-to-production-in-10-minutes-a-detailed-ec6ac52ec7e4>
Try with single instance - does it use the DB settings in .ebextensions?
Have documented options for
- Single instance
- Single instance with DB
- Load balanced instance
Create an RDS instance, ensure the default SG is allowed on ingress to the DB.
Use this SG to define an ebextensions file
<https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/rds-external-defaultvpc.html>
<https://github.com/awsdocs/elastic-beanstalk-samples/blob/master/configuration-files/aws-provided/security-configuration/securitygroup-addexisting.config>
Using a custom VPC created yourself (how it's done now): <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc.html>
Allows complete control over the security settings.
Q? If we use `--single` it will only create:
| Setting                  | Value                          |
| ------------------------ | ------------------------------ |
| Instance subnets         | One of the public subnets      |
| Instance security groups | Add the default security group |
Will it ignore the loadbalancer + autoscaling settings even if we define them in 07.config?

File diff suppressed because one or more lines are too long

1
infrastructure/.vscode/settings.json vendored Normal file
View File

@@ -0,0 +1 @@
{}

19
infrastructure/LICENSE Normal file
View File

@@ -0,0 +1,19 @@
MIT License Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is furnished
to do so, subject to the following conditions:
The above copyright notice and this permission notice (including the next
paragraph) shall be included in all copies or substantial portions of the
Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF
OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

215
infrastructure/Makefile Normal file
View File

@@ -0,0 +1,215 @@
# Copyright 2016 Philip G. Porada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.ONESHELL:
.SHELL := /usr/bin/bash
.PHONY: apply destroy-backend destroy destroy-target plan-destroy plan plan-target prep
-include Makefile.env
# VARS="variables/$(ENV)-$(REGION).tfvars"
VARS="$(ENV)-$(REGION).tfvars"
CURRENT_FOLDER=$(shell basename "$$(pwd)")
S3_BUCKET="$(ENV)-$(REGION)-$(PROJECT)-terraform"
DYNAMODB_TABLE="$(ENV)-$(REGION)-$(PROJECT)-terraform"
WORKSPACE="$(ENV)-$(REGION)"
BOLD=$(shell tput bold)
RED=$(shell tput setaf 1)
GREEN=$(shell tput setaf 2)
YELLOW=$(shell tput setaf 3)
RESET=$(shell tput sgr0)
help:
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
set-env:
@if [ -z $(ENV) ]; then \
echo "$(BOLD)$(RED)ENV was not set$(RESET)"; \
ERROR=1; \
fi
@if [ -z $(REGION) ]; then \
echo "$(BOLD)$(RED)REGION was not set$(RESET)"; \
ERROR=1; \
fi
@if [ -z $(AWS_PROFILE) ]; then \
echo "$(BOLD)$(RED)AWS_PROFILE was not set.$(RESET)"; \
ERROR=1; \
fi
@if [ ! -z $${ERROR} ] && [ $${ERROR} -eq 1 ]; then \
echo "$(BOLD)Example usage: \`AWS_PROFILE=whatever ENV=demo REGION=us-east-2 make plan\`$(RESET)"; \
exit 1; \
fi
@if [ ! -f "$(VARS)" ]; then \
echo "$(BOLD)$(RED)Could not find variables file: $(VARS)$(RESET)"; \
exit 1; \
fi
prep: set-env ## Prepare a new workspace (environment) if needed, configure the tfstate backend, update any modules, and switch to the workspace
@echo "$(BOLD)Verifying that the S3 bucket $(S3_BUCKET) for remote state exists$(RESET)"
@if ! aws --profile $(AWS_PROFILE) s3api head-bucket --region $(REGION) --bucket $(S3_BUCKET) > /dev/null 2>&1 ; then \
echo "$(BOLD)S3 bucket $(S3_BUCKET) was not found, creating new bucket with versioning enabled to store tfstate$(RESET)"; \
aws --profile $(AWS_PROFILE) s3api create-bucket \
--bucket $(S3_BUCKET) \
--acl private \
--region $(REGION) \
--create-bucket-configuration LocationConstraint=$(REGION) > /dev/null 2>&1 ; \
aws --profile $(AWS_PROFILE) s3api put-bucket-versioning \
--bucket $(S3_BUCKET) \
--versioning-configuration Status=Enabled > /dev/null 2>&1 ; \
echo "$(BOLD)$(GREEN)S3 bucket $(S3_BUCKET) created$(RESET)"; \
else
echo "$(BOLD)$(GREEN)S3 bucket $(S3_BUCKET) exists$(RESET)"; \
fi
@echo "$(BOLD)Verifying that the DynamoDB table exists for remote state locking$(RESET)"
@if ! aws --profile $(AWS_PROFILE) --region $(REGION) dynamodb describe-table --table-name $(DYNAMODB_TABLE) > /dev/null 2>&1 ; then \
echo "$(BOLD)DynamoDB table $(DYNAMODB_TABLE) was not found, creating new DynamoDB table to maintain locks$(RESET)"; \
aws --profile $(AWS_PROFILE) dynamodb create-table \
--region $(REGION) \
--table-name $(DYNAMODB_TABLE) \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 > /dev/null 2>&1 ; \
echo "$(BOLD)$(GREEN)DynamoDB table $(DYNAMODB_TABLE) created$(RESET)"; \
echo "Sleeping for 10 seconds to allow DynamoDB state to propagate through AWS"; \
sleep 10; \
else
echo "$(BOLD)$(GREEN)DynamoDB Table $(DYNAMODB_TABLE) exists$(RESET)"; \
fi
@aws ec2 --profile=$(AWS_PROFILE) describe-key-pairs | jq -r '.KeyPairs[].KeyName' | grep "$(ENV)_infra_key" > /dev/null 2>&1; \
if [ $$? -ne 0 ]; then \
echo "$(BOLD)$(RED)EC2 Key Pair $(ENV)_infra_key was not found$(RESET)"; \
read -p '$(BOLD)Do you want to generate a new keypair? [y/Y]: $(RESET)' ANSWER && \
if [ "$${ANSWER}" == "y" ] || [ "$${ANSWER}" == "Y" ]; then \
mkdir -p ~/.ssh; \
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/$(ENV)_infra_key; \
aws ec2 --profile=$(AWS_PROFILE) import-key-pair --key-name "$(ENV)_infra_key" --public-key-material "file://~/.ssh/$(ENV)_infra_key.pub"; \
fi; \
else \
echo "$(BOLD)$(GREEN)EC2 Key Pair $(ENV)_infra_key exists$(RESET)";\
fi
@echo "$(BOLD)Configuring the terraform backend$(RESET)"
@terraform init \
-input=false \
-force-copy \
-lock=true \
-upgrade \
-verify-plugins=true \
-backend=true \
-backend-config="profile=$(AWS_PROFILE)" \
-backend-config="region=$(REGION)" \
-backend-config="bucket=$(S3_BUCKET)" \
-backend-config="key=$(ENV)/$(CURRENT_FOLDER)/terraform.tfstate" \
-backend-config="dynamodb_table=$(DYNAMODB_TABLE)"\
-backend-config="acl=private"
@echo "$(BOLD)Switching to workspace $(WORKSPACE)$(RESET)"
@terraform workspace select $(WORKSPACE) || terraform workspace new $(WORKSPACE)
plan: prep ## Show what terraform thinks it will do
@terraform plan \
-lock=true \
-input=false \
-refresh=true \
-var-file="$(VARS)"
format: prep ## Rewrites all Terraform configuration files to a canonical format.
@terraform fmt \
-write=true \
-recursive
# https://github.com/terraform-linters/tflint
lint: prep ## Check for possible errors, best practices, etc in current directory!
@tflint
# https://github.com/liamg/tfsec
check-security: prep ## Static analysis of your terraform templates to spot potential security issues.
@tfsec .
documentation: prep ## Generate README.md for a module
@terraform-docs \
markdown table \
--sort-by-required . > README.md
plan-target: prep ## Shows what a plan looks like for applying a specific resource
@echo "$(YELLOW)$(BOLD)[INFO] $(RESET)"; echo "Example to type for the following question: module.rds.aws_route53_record.rds-master"
@read -p "PLAN target: " DATA && \
terraform plan \
-lock=true \
-input=true \
-refresh=true \
-var-file="$(VARS)" \
-target=$$DATA
plan-destroy: prep ## Creates a destruction plan.
@terraform plan \
-input=false \
-refresh=true \
-destroy \
-var-file="$(VARS)"
apply: prep ## Have terraform do the things. This will cost money.
@terraform apply \
-lock=true \
-input=false \
-refresh=true \
-var-file="$(VARS)"
destroy: prep ## Destroy the things
@terraform destroy \
-lock=true \
-input=false \
-refresh=true \
-var-file="$(VARS)"
destroy-target: prep ## Destroy a specific resource. Caution though, this destroys chained resources.
@echo "$(YELLOW)$(BOLD)[INFO] Specifically destroy a piece of Terraform data.$(RESET)"; echo "Example to type for the following question: module.rds.aws_route53_record.rds-master"
@read -p "Destroy target: " DATA && \
terraform destroy \
-lock=true \
-input=false \
-refresh=true \
-var-file=$(VARS) \
-target=$$DATA
destroy-backend: ## Destroy S3 bucket and DynamoDB table
@if ! aws --profile $(AWS_PROFILE) dynamodb delete-table \
--region $(REGION) \
--table-name $(DYNAMODB_TABLE) > /dev/null 2>&1 ; then \
echo "$(BOLD)$(RED)Unable to delete DynamoDB table $(DYNAMODB_TABLE)$(RESET)"; \
else
echo "$(BOLD)$(GREEN)DynamoDB table $(DYNAMODB_TABLE) deleted$(RESET)"; \
fi
@if ! aws --profile $(AWS_PROFILE) s3api delete-objects \
--region $(REGION) \
--bucket $(S3_BUCKET) \
--delete "$$(aws --profile $(AWS_PROFILE) s3api list-object-versions \
--region $(REGION) \
--bucket $(S3_BUCKET) \
--output=json \
--query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')" > /dev/null 2>&1 ; then \
echo "$(BOLD)$(RED)Unable to delete objects in S3 bucket $(S3_BUCKET)$(RESET)"; \
fi
@if ! aws --profile $(AWS_PROFILE) s3api delete-objects \
--region $(REGION) \
--bucket $(S3_BUCKET) \
--delete "$$(aws --profile $(AWS_PROFILE) s3api list-object-versions \
--region $(REGION) \
--bucket $(S3_BUCKET) \
--output=json \
--query='{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}')" > /dev/null 2>&1 ; then \
echo "$(BOLD)$(RED)Unable to delete markers in S3 bucket $(S3_BUCKET)$(RESET)"; \
fi
@if ! aws --profile $(AWS_PROFILE) s3api delete-bucket \
--region $(REGION) \
--bucket $(S3_BUCKET) > /dev/null 2>&1 ; then \
echo "$(BOLD)$(RED)Unable to delete S3 bucket $(S3_BUCKET) itself$(RESET)"; \
fi


@@ -0,0 +1,4 @@
ENV="prod"
REGION="eu-west-1"
PROJECT="strapi-elb"
AWS_PROFILE="admin"

infrastructure/README.md

@@ -0,0 +1,11 @@
# terraform
Boilerplate for TF
Usage:
- Clone into a project at root level.
- Rename `./terraform` to `infrastructure` (if needed).
- Delete `./infrastructure/.git/` and `./infrastructure/.gitignore`
Commit to project.

infrastructure/main.tf

@@ -0,0 +1,119 @@
# aws config
provider "aws" {
region = var.region
profile = var.profile
version = "~> 2.70.0"
}
# tags
locals {
tags = {
"Project" = "strapi-eb"
"Description" = "Terraform resources for strapi in Elastic Beanstalk"
}
}
# Network
module "vpc" {
source = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.14.0"
stage = var.stage
name = var.name
tags = local.tags
cidr_block = "172.16.0.0/16"
enable_default_security_group_with_custom_rules = false
}
module "subnets" {
source = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.23.0"
stage = var.stage
name = var.name
tags = local.tags
availability_zones = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
vpc_id = module.vpc.vpc_id
igw_id = module.vpc.igw_id
cidr_block = module.vpc.vpc_cidr_block
nat_gateway_enabled = false
nat_instance_enabled = false
}
resource "aws_security_group" "ec2_security_group" {
name = "${var.stage}-${var.name}-ec2_sg"
description = "Security group applied to the EC2 instances in the Auto Scaling group."
vpc_id = module.vpc.vpc_id
tags = local.tags
ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTPS"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "Outbound to all"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "rds_security_group_public" {
name = "${var.stage}-${var.name}-rds_public_sg"
description = "Security group for the RDS instance that allows public access from the internet."
vpc_id = module.vpc.vpc_id
tags = local.tags
ingress {
description = "Incoming Postgres"
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["82.6.205.148/32"]
}
}
# RDS instance
module "rds_instance" {
source = "git::https://github.com/cloudposse/terraform-aws-rds.git?ref=tags/0.20.0"
stage = var.stage
name = var.name
tags = local.tags
allocated_storage = 5
database_name = "postgres"
database_user = "mainuser"
database_password = "password"
database_port = 5432
db_parameter_group = "postgres12"
engine = "postgres"
engine_version = "12.3"
instance_class = "db.t2.micro"
security_group_ids = [aws_security_group.ec2_security_group.id]
associate_security_group_ids = [aws_security_group.rds_security_group_public.id]
subnet_ids = module.subnets.public_subnet_ids
vpc_id = module.vpc.vpc_id
publicly_accessible = true
}
# S3 bucket
resource "aws_s3_bucket" "static_assets" {
bucket = "${var.stage}-${var.name}-strapi-uploads"
acl = "private"
tags = local.tags
}

infrastructure/outputs.tf

@@ -0,0 +1,36 @@
# S3
output "s3_static_assets_id" {
value = aws_s3_bucket.static_assets.id
description = "Name of the static assets S3 bucket."
}
# VPC
output "vpc_id" {
value = module.vpc.vpc_id
description = "The ID of the VPC."
}
output "subnet_public_ids" {
value = module.subnets.public_subnet_ids
description = "The IDs of the public subnets."
}
# Security groups
output "aws_security_group_ec2_security_group" {
value = aws_security_group.ec2_security_group.id
description = "Security group for the EC2 instances launched by the Auto Scaling group."
}
output "aws_security_group_ec2_security_group_rds" {
value = aws_security_group.rds_security_group_public.id
description = "Security group for the RDS instance allowing public access."
}
# RDS
output "rds_instance_endpoint" {
value = module.rds_instance.instance_endpoint
description = "Endpoint of the RDS instance."
}


@@ -0,0 +1,5 @@
# module
name = "strapi-eb"
region = "eu-west-1"
stage = "prod"
profile = "admin"


@@ -0,0 +1,15 @@
variable "name" {
}
variable "region" {
}
variable "stage" {
}
variable "profile" {
}

package-lock.json

@@ -2148,6 +2148,15 @@
}
}
},
"block-stream": {
"version": "0.0.9",
"resolved": "https://registry.npmjs.org/block-stream/-/block-stream-0.0.9.tgz",
"integrity": "sha1-E+v+d4oDIFz+A3UUgeu0szAMEmo=",
"optional": true,
"requires": {
"inherits": "~2.0.0"
}
},
"bluebird": { "bluebird": {
"version": "3.7.2", "version": "3.7.2",
"resolved": "https://registry.npmjs.org/bluebird/-/bluebird-3.7.2.tgz", "resolved": "https://registry.npmjs.org/bluebird/-/bluebird-3.7.2.tgz",
@@ -4635,6 +4644,18 @@
"integrity": "sha512-Auw9a4AxqWpa9GUfj370BMPzzyncfBABW8Mab7BGWBYDj4Isgq+cDKtx0i6u9jcX9pQDnswsaaOTgTmA5pEjuQ==", "integrity": "sha512-Auw9a4AxqWpa9GUfj370BMPzzyncfBABW8Mab7BGWBYDj4Isgq+cDKtx0i6u9jcX9pQDnswsaaOTgTmA5pEjuQ==",
"optional": true "optional": true
}, },
"fstream": {
"version": "1.0.12",
"resolved": "https://registry.npmjs.org/fstream/-/fstream-1.0.12.tgz",
"integrity": "sha512-WvJ193OHa0GHPEL+AycEJgxvBEwyfRkN1vhjca23OaPVMCaLCXTd5qAu82AjTcgP1UJmytkOKb63Ypde7raDIg==",
"optional": true,
"requires": {
"graceful-fs": "^4.1.2",
"inherits": "~2.0.0",
"mkdirp": ">=0.5 0",
"rimraf": "2"
}
},
"function-bind": { "function-bind": {
"version": "1.1.1", "version": "1.1.1",
"resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.1.tgz", "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.1.tgz",
@@ -6921,9 +6942,9 @@
"integrity": "sha512-ONmRUqK7zj7DWX0D9ADe03wbwOBZxNAfF20PlGfCWQcD3+/MakShIHrMqx9YwPTfxDdF1zLeL+RGZiR9kGMLdg==" "integrity": "sha512-ONmRUqK7zj7DWX0D9ADe03wbwOBZxNAfF20PlGfCWQcD3+/MakShIHrMqx9YwPTfxDdF1zLeL+RGZiR9kGMLdg=="
}, },
"needle": { "needle": {
"version": "2.4.1", "version": "2.5.0",
"resolved": "https://registry.npmjs.org/needle/-/needle-2.4.1.tgz", "resolved": "https://registry.npmjs.org/needle/-/needle-2.5.0.tgz",
"integrity": "sha512-x/gi6ijr4B7fwl6WYL9FwlCvRQKGlUNvnceho8wxkwXqN8jvVmmmATTmZPRRG7b/yC1eode26C2HO9jl78Du9g==", "integrity": "sha512-o/qITSDR0JCyCKEQ1/1bnUXMmznxabbwi/Y4WwJElf+evwJNFNwIDMCCt5IigFVxgeGBJESLohGtIS9gEzo1fA==",
"requires": { "requires": {
"debug": "^3.2.6", "debug": "^3.2.6",
"iconv-lite": "^0.4.4", "iconv-lite": "^0.4.4",
@@ -6971,6 +6992,11 @@
"semver": "^5.4.1" "semver": "^5.4.1"
} }
}, },
"node-addon-api": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/node-addon-api/-/node-addon-api-2.0.0.tgz",
"integrity": "sha512-ASCL5U13as7HhOExbT6OlWJJUV/lLzL2voOSP1UVehpRD8FbSrSDjfScK/KwAvVTI5AS6r4VwbOMlIqtvRidnA=="
},
"node-fetch": { "node-fetch": {
"version": "2.6.0", "version": "2.6.0",
"resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.0.tgz", "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.0.tgz",
@@ -6981,6 +7007,34 @@
"resolved": "https://registry.npmjs.org/node-forge/-/node-forge-0.9.0.tgz", "resolved": "https://registry.npmjs.org/node-forge/-/node-forge-0.9.0.tgz",
"integrity": "sha512-7ASaDa3pD+lJ3WvXFsxekJQelBKRpne+GOVbLbtHYdd7pFspyeuJHnWfLplGf3SwKGbfs/aYl5V/JCIaHVUKKQ==" "integrity": "sha512-7ASaDa3pD+lJ3WvXFsxekJQelBKRpne+GOVbLbtHYdd7pFspyeuJHnWfLplGf3SwKGbfs/aYl5V/JCIaHVUKKQ=="
}, },
"node-gyp": {
"version": "3.8.0",
"resolved": "https://registry.npmjs.org/node-gyp/-/node-gyp-3.8.0.tgz",
"integrity": "sha512-3g8lYefrRRzvGeSowdJKAKyks8oUpLEd/DyPV4eMhVlhJ0aNaZqIrNUIPuEWWTAoPqyFkfGrM67MC69baqn6vA==",
"optional": true,
"requires": {
"fstream": "^1.0.0",
"glob": "^7.0.3",
"graceful-fs": "^4.1.2",
"mkdirp": "^0.5.0",
"nopt": "2 || 3",
"npmlog": "0 || 1 || 2 || 3 || 4",
"osenv": "0",
"request": "^2.87.0",
"rimraf": "2",
"semver": "~5.3.0",
"tar": "^2.0.0",
"which": "1"
},
"dependencies": {
"semver": {
"version": "5.3.0",
"resolved": "https://registry.npmjs.org/semver/-/semver-5.3.0.tgz",
"integrity": "sha1-myzl094C0XxgEq0yaqa00M9U+U8=",
"optional": true
}
}
},
"node-libs-browser": { "node-libs-browser": {
"version": "2.2.1", "version": "2.2.1",
"resolved": "https://registry.npmjs.org/node-libs-browser/-/node-libs-browser-2.2.1.tgz", "resolved": "https://registry.npmjs.org/node-libs-browser/-/node-libs-browser-2.2.1.tgz",
@@ -7048,6 +7102,31 @@
"rimraf": "^2.6.1", "rimraf": "^2.6.1",
"semver": "^5.3.0", "semver": "^5.3.0",
"tar": "^4" "tar": "^4"
},
"dependencies": {
"nopt": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/nopt/-/nopt-4.0.3.tgz",
"integrity": "sha512-CvaGwVMztSMJLOeXPrez7fyfObdZqNUK1cPAEzLHrTybIua9pMdmmPR5YwtfNftIOMv3DPUhFaxsZMNTQO20Kg==",
"requires": {
"abbrev": "1",
"osenv": "^0.1.4"
}
},
"tar": {
"version": "4.4.13",
"resolved": "https://registry.npmjs.org/tar/-/tar-4.4.13.tgz",
"integrity": "sha512-w2VwSrBoHa5BsSyH+KxEqeQBAllHhccyMFVHtGtdMpF4W7IRWfZjFiQceJPChOeTsSDVUpER2T8FA93pr0L+QA==",
"requires": {
"chownr": "^1.1.1",
"fs-minipass": "^1.2.5",
"minipass": "^2.8.6",
"minizlib": "^1.2.1",
"mkdirp": "^0.5.0",
"safe-buffer": "^5.1.2",
"yallist": "^3.0.3"
}
}
}
},
"node-releases": {
@@ -7084,12 +7163,12 @@
"integrity": "sha1-lKKxYzxPExdVMAfYlm/Q6EG2pMI=" "integrity": "sha1-lKKxYzxPExdVMAfYlm/Q6EG2pMI="
}, },
"nopt": { "nopt": {
"version": "4.0.3", "version": "3.0.6",
"resolved": "https://registry.npmjs.org/nopt/-/nopt-4.0.3.tgz", "resolved": "https://registry.npmjs.org/nopt/-/nopt-3.0.6.tgz",
"integrity": "sha512-CvaGwVMztSMJLOeXPrez7fyfObdZqNUK1cPAEzLHrTybIua9pMdmmPR5YwtfNftIOMv3DPUhFaxsZMNTQO20Kg==", "integrity": "sha1-xkZdvwirzU2zWTF/eaxopkayj/k=",
"optional": true,
"requires": { "requires": {
"abbrev": "1", "abbrev": "1"
"osenv": "^0.1.4"
} }
}, },
"normalize-path": { "normalize-path": {
@@ -9882,13 +9961,13 @@
"integrity": "sha512-VE0SOVEHCk7Qc8ulkWw3ntAzXuqf7S2lvwQaDLRnUeIEaKNQJzV6BwmLKhOqT61aGhfUMrXeaBk+oDGCzvhcug==" "integrity": "sha512-VE0SOVEHCk7Qc8ulkWw3ntAzXuqf7S2lvwQaDLRnUeIEaKNQJzV6BwmLKhOqT61aGhfUMrXeaBk+oDGCzvhcug=="
}, },
"sqlite3": { "sqlite3": {
"version": "4.1.1", "version": "5.0.0",
"resolved": "https://registry.npmjs.org/sqlite3/-/sqlite3-4.1.1.tgz", "resolved": "https://registry.npmjs.org/sqlite3/-/sqlite3-5.0.0.tgz",
"integrity": "sha512-CvT5XY+MWnn0HkbwVKJAyWEMfzpAPwnTiB3TobA5Mri44SrTovmmh499NPQP+gatkeOipqPlBLel7rn4E/PCQg==", "integrity": "sha512-rjvqHFUaSGnzxDy2AHCwhHy6Zp6MNJzCPGYju4kD8yi6bze4d1/zMTg6C7JI49b7/EM7jKMTvyfN/4ylBKdwfw==",
"requires": { "requires": {
"nan": "^2.12.1", "node-addon-api": "2.0.0",
"node-pre-gyp": "^0.11.0", "node-gyp": "3.x",
"request": "^2.87.0" "node-pre-gyp": "^0.11.0"
} }
}, },
"sshpk": { "sshpk": {
@@ -10960,17 +11039,14 @@
"integrity": "sha512-4WK/bYZmj8xLr+HUCODHGF1ZFzsYffasLUgEiMBY4fgtltdO6B4WJtlSbPaDTLpYTcGVwM2qLnFTICEcNxs3kA==" "integrity": "sha512-4WK/bYZmj8xLr+HUCODHGF1ZFzsYffasLUgEiMBY4fgtltdO6B4WJtlSbPaDTLpYTcGVwM2qLnFTICEcNxs3kA=="
}, },
"tar": { "tar": {
"version": "4.4.13", "version": "2.2.2",
"resolved": "https://registry.npmjs.org/tar/-/tar-4.4.13.tgz", "resolved": "https://registry.npmjs.org/tar/-/tar-2.2.2.tgz",
"integrity": "sha512-w2VwSrBoHa5BsSyH+KxEqeQBAllHhccyMFVHtGtdMpF4W7IRWfZjFiQceJPChOeTsSDVUpER2T8FA93pr0L+QA==", "integrity": "sha512-FCEhQ/4rE1zYv9rYXJw/msRqsnmlje5jHP6huWeBZ704jUTy02c5AZyWujpMR1ax6mVw9NyJMfuK2CMDWVIfgA==",
"optional": true,
"requires": { "requires": {
"chownr": "^1.1.1", "block-stream": "*",
"fs-minipass": "^1.2.5", "fstream": "^1.0.12",
"minipass": "^2.8.6", "inherits": "2"
"minizlib": "^1.2.1",
"mkdirp": "^0.5.0",
"safe-buffer": "^5.1.2",
"yallist": "^3.0.3"
} }
}, },
"tar-fs": { "tar-fs": {


@@ -1,247 +0,0 @@
# Running notes
create an elb with --single and --database
redeploy with username option set, does it change?
Does strapi work with a database set in production mode?
SSH into EC2 - check if it's using sqlite
document that the db has to be set from a cli arg, but the configs can be done via files.
SSL? https://levelup.gitconnected.com/beginners-guide-to-aws-beanstalk-using-node-js-d061bb4b8755
Add postgres to strapi
Add the S3 bucket to strapi
If it doesn't work, try installing yarn in the ELB instance
Create separate sql database + VPC rules:
http://blog.blackninjadojo.com/aws/elastic-beanstalk/2019/01/28/adding-a-database-to-your-rails-application-on-elastic-beanstalk-using-rds.html
Tie this in with a cloudformation template + hooking it up
/opt/elasticbeanstalk/node-install/node-v12.16.1-linux-x64/bin
Try setting the database name using cloudformation template
## Running strapi in different modes
You should use development mode for developing strapi and then deploy it to production.
If you run strapi in production, you cannot edit content types. See this git issue for the thread.
If you're running Strapi across multiple instances you should:
- Run strapi locally in develop mode.
- Create content types.
- Build strapi for production.
- Push to ELB.
If you're running a single instance, you can alternatively just run it in develop mode in ELB.
Strapi stores its models locally on the instance and not in the database.
<https://github.com/strapi/strapi/issues/4798>
```text
This is not a bug and is intended, as the CTB (Content-Type builder) saves model configurations to files doing so in production would require Strapi to restart and thus could potentially knock your production API offline. Along with the previous reason, strapi is also very much pushed as a scale able application which would mean these changes would not be replicated across any clustered configurations.
There is no current plans to allow for this, as well as no plans to move these model definitions into the database. The enforcement of using the proper environment for the proper task (Production, Staging, and Development) is something that has been pushed from day 1.
Due to the reasons I explained above I am going to mark this as closed but please do feel free to discuss.
```
## Strapi documentation
<https://strapi.io/blog/api-documentation-plugin>
You can install the strapi documentation plugin by running: `npm run strapi install documentation`.
You can then access it through the Strapi Admin panel.
You should change the production server URL in the documentation settings.
Edit the file `./extensions/documentation/documentation/1.0.0/full_documentation.json` and change `YOUR_PRODUCTION_SERVER` to the ELB URL of your environment.
## API Examples using HTTPIE
### Authenticate with the API
`http http://strapi-prod.eu-west-1.elasticbeanstalk.com/auth/local identifier=apiuser password=password`
### Get a Single Content Type
`http http://strapi-prod.eu-west-1.elasticbeanstalk.com/tests Authorization:"Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MiwiaWF0IjoxNTg3ODY3NzQ4LCJleHAiOjE1OTA0NTk3NDh9.McAi1b-F3IT2Mw90652AprEMtknJrW66Aw5FGMBOTj0"`
### Use query parameters to filter for Multiple Content Type
You can use query parameters to filter requests made to the API.
<https://strapi.io/documentation/3.0.0-beta.x/content-api/parameters.html#parameters>
The syntax is `?field_operator=value`, e.g `?title_contains=test`, after the endpoint URL for the content type.
`http "http://strapi-prod.eu-west-1.elasticbeanstalk.com/tests?title_contains=test" Authorization:"Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MSwiaXNBZG1pbiI6dHJ1ZSwiaWF0IjoxNTg3ODY3NzMwLCJleHAiOjE1OTA0NTk3MzB9.XXdoZUk_GuOION2KlpeWZ7qwXAoEq9vTlIeD2XTnJxY"`
## S3 Upload Addon
You should add the `strapi-provider-upload-aws-s3` extension using NPM. Make sure you install the version that matches the version of Strapi you are using.
`npm i strapi-provider-upload-aws-s3@3.0.0-beta.20`
### AWS Resources
You should have an S3 bucket with public access, and an IAM user with a policy that grants access to the bucket.
### Configuration
You should create a settings file at `./extensions/upload/config/settings.json`.
This file defines an S3 object as in: <https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#constructor-property>.
You can use Strapi dynamic config syntax to set the following settings from environment variables:
- provider
- providerOptions
- accessKeyId
- secretAccessKey
- region
- params
- Bucket
```json
{
"provider": "aws-s3",
"providerOptions": {
"accessKeyId": "${ process.env.STRAPI_S3_ACCESS_KEY || 'AKIA23D4RF6OZWGDKV7W' }",
"secretAccessKey": "${ process.env.STRAPI_S3_SECRET_KEY || '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' }",
"region": "${ process.env.STRAPI_S3_REGION || 'eu-west-1' }",
"params": {
"Bucket": "${ process.env.STRAPI_S3_BUCKET || 'elb-example-bucket' }"
}
}
}
```
Alternatively if you want to use different options for different environments, you can use a settings.js file instead.
<https://strapi.io/documentation/3.0.0-beta.x/plugins/upload.html#using-a-provider>
```javascript
if (process.env.NODE_ENV === "production") {
module.exports = {
provider: "aws-s3",
providerOptions: {
accessKeyId: process.env.STRAPI_S3_ACCESS_KEY,
secretAccessKey: process.env.STRAPI_S3_SECRET_KEY,
region: process.env.STRAPI_S3_REGION,
params: {
Bucket: process.env.STRAPI_S3_BUCKET,
},
},
};
} else {
module.exports = {};
}
```
## Fix Version Numbers
When using Strapi you should make sure the version numbers for **all** Strapi modules in `./package.json` are pinned to the same release. You cannot mix and match or upgrade them arbitrarily.
An example is:
```json
{
"dependencies": {
"knex": "<0.20.0",
"pg": "^8.0.3",
"sqlite3": "latest",
"strapi": "3.0.0-beta.20",
"strapi-admin": "3.0.0-beta.20",
"strapi-connector-bookshelf": "3.0.0-beta.20",
"strapi-plugin-content-manager": "3.0.0-beta.20",
"strapi-plugin-content-type-builder": "3.0.0-beta.20",
"strapi-plugin-documentation": "3.0.0-beta.20",
"strapi-plugin-email": "3.0.0-beta.20",
"strapi-plugin-upload": "3.0.0-beta.20",
"strapi-plugin-users-permissions": "3.0.0-beta.20",
"strapi-provider-upload-aws-s3": "3.0.0-beta.20",
"strapi-utils": "3.0.0-beta.20"
}
}
```
## Cloudformation
<https://adamtheautomator.com/aws-cli-cloudformation/> (example of deploying an S3 bucket with static site `index.html`.)
To create a cloudformation template you should create a `template.yaml`. This yaml file should have at the top:
```yaml
AWSTemplateFormatVersion: 2010-09-09
Description: A simple CloudFormation template
```
Then you should add a `Resources` key and populate this with all the infrastructure you need to provision.
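Putting the two together, a minimal sketch might look like the following (the `ELBExampleBucket` logical ID matches the bucket template used in this project; the `AccessControl` property is illustrative):

```yaml
AWSTemplateFormatVersion: 2010-09-09
Description: A simple CloudFormation template
Resources:
  # Logical ID - used to reference this resource elsewhere in the template
  ELBExampleBucket:
    Type: "AWS::S3::Bucket"
    Properties:
      # Illustrative property; omit to use the default bucket settings
      AccessControl: Private
```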
### Creating templates
Documentation for all AWS resources is: <https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html>.
A good approach is to use the GUI to create an object, and then look up the corresponding cloudformation template syntax as you go along.
### Deploy a stack/template
To deploy, you should run the command: `aws cloudformation deploy --template-file template.yaml --stack-name static-website`
### Failure
If something goes wrong, you can use `describe-stack-events` and pass the `stack-name` to find the events leading up to the failure: `aws cloudformation describe-stack-events --stack-name strapi-s3`.
If stack creation fails on the first deploy, you will not be able to re-deploy the stack. You must first delete the stack entirely and then re-deploy with any fixes.
You can delete a stack by running: `aws --profile admin cloudformation delete-stack --stack-name strapi-s3`.
### Stacks
<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html>
A cloudformation stack is a collection of AWS resources that you can manage as a single unit. You can group different resources under one stack then create, update or destroy everything under this stack.
Using stacks means AWS will treat all resources as a single unit: they must all be created successfully for the stack to be created, and all deleted successfully for it to be deleted. If a resource cannot be created, Cloudformation rolls the stack back to the previous configuration and deletes any interim resources that were created.
### Snippets
#### Deploy a template/stack
`aws --profile admin cloudformation deploy --template-file ./01-stack-storage.yaml --stack-name strapi-s3`
#### Destroy a stack
`aws --profile admin cloudformation delete-stack --stack-name strapi-s3`
## Tags
Suggested tags for all AWS resources are:
| Tag | Description | Example |
| ----------- | ---------------------------------- | ------------------------ |
| git | git repo that contains the code | `web-dev` |
| owner       | who the resource is for/owned by   | `home`, `work`, `elliot` |
| project | what project it belongs to | `strapi-elb` |
| test | flag for a temporary test resource | `true` |
| environment | environment resource belongs to | `dev`, `prod` |
| deployment | AWS tool used for deployment | `cloudformation`, `elb` |
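As a sketch, these tags can be attached to a resource in a CloudFormation template via the `Tags` property (supported by most taggable resources; the values below are examples only):

```yaml
Resources:
  ELBExampleBucket:
    Type: "AWS::S3::Bucket"
    Properties:
      Tags:
        # Suggested tags from the table above; values are illustrative
        - Key: git
          Value: web-dev
        - Key: owner
          Value: elliot
        - Key: project
          Value: strapi-elb
        - Key: environment
          Value: prod
        - Key: deployment
          Value: cloudformation
```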
### Cloudformation
For Cloudformation resources the following tags get applied automatically:
| Tag | Description | Example |
| ----------------------------- | ------------------------------- | -------------------------------------------------------------------------------------------------- |
| aws:cloudformation:stack-name | stack-name of the resource | `strapi-s3` |
| aws:cloudformation:logical-id | resource name given in template | `ELBExampleBucket` |
| aws:cloudformation:stack-id   | ARN of the cloudformation stack | `arn:aws:cloudformation:eu-west-1:745437999005:stack/strapi-s3/459ebbf0-88aa-11ea-beac-02f0c9b42810` |

todo.md

@@ -1,28 +0,0 @@
# To Do
~~Finish S3 config for env vars~~
~~Deploy to AWS and ensure vars are working~~
Use cloudformation to deploy the bucket instead of tying it to the RDS instance.
Use <https://strapi.io/documentation/3.0.0-beta.x/deployment/amazon-aws.html#_2-create-the-bucket> for bucket options for the template.
~~Strapi documentation - build and host~~
Deploy strapi as load balanced rather than single instance
Deploy strapi with a custom domain with HTTPS as a single instance + load balanced.
RDS cloudformation template
Use the GUI to go through options and create cloudformation template
Create an RDS db before deployment
Configure Strapi to use this RDS db
Combine ELB Documentations (strapi, ELB etc)
Use codebuild to update strapi
Use circle CI instead
Finish the backgrounds for the demo website
Cloudformation template to deploy an S3 bucket