Overview:
The article discusses how to deploy a static NextJS website to an S3 bucket in AWS for free, fast website hosting. It covers setting up a bucket for website hosting, creating a user with the required permissions, creating access keys for that user, adding the AWS access key and secret access key variables to GitLab, updating the .gitlab-ci.yml file, and testing the website. Additionally, it outlines the steps to register a custom domain with Route 53, configure buckets for website hosting and redirects, log website traffic, upload website content and an error document, edit the S3 Block Public Access settings, attach a bucket policy, and add alias records for the domain and subdomain. Finally, it provides a sample .gitlab-ci.yml script for building and deploying the website.

Deploying a static NextJS website to an S3 bucket is a great way to achieve fast and (essentially) free website hosting.
There is a bit of setup on the AWS side of things, but the GitLab config is below.
- Create a bucket that is ready for website hosting (and link it to a domain)
- Create user with permissions to the bucket
- Create an access key for the user that will deploy to the bucket
- Add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables to GitLab
- Update the .gitlab-ci.yml
Create a bucket that is ready for website hosting (and link it to a domain)
AWS explains this well in its own documentation; a quick summary follows.
- Quick Summary
- Step 1: Register a custom domain with Route 53
- Step 2: Create two buckets
- (example.com and www.example.com)
- Step 3: Configure your root domain bucket for website hosting
- Buckets > properties > Static website hosting > Enable
- Index document > index.html
- Note the endpoint shown under Static website hosting (this is different from your bucket URL, and is the one that actually serves the website)
- Step 4: Configure your subdomain bucket for website redirect
- Buckets > Properties > Static website hosting > Edit
- Redirect requests for an object
- Target bucket = root domain
- Protocol = http
- Step 5: Configure logging for website traffic
- Step 6: Upload index and website content
- upload your website to the bucket
- Step 7: Upload an error document
- Step 8: Edit S3 Block Public Access settings
- Buckets > Permissions > Block public access > clear all settings
- Step 9: Attach a bucket policy
- Buckets > Permissions > Bucket Policy > edit
- Insert the below
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::Bucket-Name/*"]
        }
    ]
}
```
- Step 10: Test your domain endpoint
- Step 11: Add alias records for your domain and subdomain
- Step 12: Test the website
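The bucket-configuration steps above (Steps 3 and 4) can also be sketched with the AWS CLI. The bucket names and the 404.html error document here are assumptions; substitute your own domain:

```shell
# Assumed bucket names -- replace with your own domain
ROOT_BUCKET="example.com"
WWW_BUCKET="www.example.com"

# Step 3: enable static website hosting on the root domain bucket
aws s3 website "s3://$ROOT_BUCKET/" \
  --index-document index.html \
  --error-document 404.html

# Step 4: redirect all requests on the www bucket to the root domain over http
aws s3api put-bucket-website --bucket "$WWW_BUCKET" \
  --website-configuration "{\"RedirectAllRequestsTo\":{\"HostName\":\"$ROOT_BUCKET\",\"Protocol\":\"http\"}}"

# The website endpoint follows this pattern (region-dependent)
echo "http://$ROOT_BUCKET.s3-website-us-east-1.amazonaws.com"
```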
Create a user with permissions to the bucket
Set up a user in AWS that has the permissions needed to upload to the bucket (I have used a user with the AmazonS3FullAccess policy).
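If you prefer the CLI to the console, this is roughly what creating that user looks like. The user name is an assumption; any name works:

```shell
# Assumed user name for the deploy user
DEPLOY_USER="gitlab-s3-deploy"

# Create the user and attach the managed AmazonS3FullAccess policy.
# AmazonS3FullAccess is broad; a policy scoped to the one bucket would be tighter.
aws iam create-user --user-name "$DEPLOY_USER"
aws iam attach-user-policy --user-name "$DEPLOY_USER" \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
```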
Create an access key for the user that will deploy to the bucket
Once you have a user with the correct permissions, create an access key for that user and copy the credentials, ready to put into GitLab variables so that GitLab can write to your bucket.
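This can be done in the IAM console, or sketched via the CLI as below. The user name is an assumption and should match whichever user you created above; note the secret is only shown once, so copy it straight away:

```shell
# Assumed deploy user name
DEPLOY_USER="gitlab-s3-deploy"

# Create the access key and print just the key ID and secret
aws iam create-access-key --user-name "$DEPLOY_USER" \
  --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text
```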
Add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables to GitLab
You will also need to create two variables, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY; the area to add these can be accessed via Project > Settings > CI/CD > Variables.
The values of the variables come from the access key that was created.
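The same variables can also be added through the GitLab API rather than the UI. The project ID here is an assumption, and the token/value placeholders must be filled in (never commit real keys to the repository):

```shell
# Assumed project ID -- find yours on the project's overview page
GITLAB_PROJECT_ID="12345678"

# Create one CI/CD variable; repeat for AWS_SECRET_ACCESS_KEY
curl --request POST \
  --header "PRIVATE-TOKEN: <your_gitlab_token>" \
  "https://gitlab.com/api/v4/projects/$GITLAB_PROJECT_ID/variables" \
  --form "key=AWS_ACCESS_KEY_ID" \
  --form "value=<value_from_the_access_key>" \
  --form "masked=true"
```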
Update the .gitlab-ci.yml
Update or add a .gitlab-ci.yml script that contains the below:
```yaml
variables:
  S3_BUCKET_NAME: "BucketName" # set to the name of your website bucket
  AWS_DEFAULT_REGION: "us-east-1" # the region the bucket is in

stages:
  - build
  - deploy

build:
  image: node:alpine
  stage: build
  script:
    - npm i # install dependencies
    - npm run build-static # build nextjs statically
  artifacts:
    expire_in: 30 days
    paths:
      - out
  only:
    - main

deploy:
  image: python:latest
  stage: deploy
  script:
    - pip install awscli # install the AWS cli
    - aws s3 cp ./out/ s3://$S3_BUCKET_NAME/ --recursive --include "*" # use cp to copy the result of the build to the s3 bucket
  only:
    - main
```
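The pipeline assumes a build-static script in package.json that exports the site to the out/ directory. With older Next.js versions that looks roughly like the fragment below; newer versions drop next export in favour of setting output: 'export' in next.config.js:

```json
{
  "scripts": {
    "build-static": "next build && next export"
  }
}
```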