Migrating My Blog from DigitalOcean App Platform to AWS Amplify

Discover how I made my blog more cost-efficient and secure by migrating from DigitalOcean Spaces and App Platform to S3 and AWS Amplify.

June 07, 2023

10 minute read



In pursuit of greater security control and cost efficiency for my blog, I recently migrated from the DigitalOcean App Platform to AWS Amplify, also shifting my image storage from DigitalOcean Spaces to AWS S3. This transition, while straightforward, presented its own set of challenges—most notably the implementation of Content Security Policy (CSP) and other security headers. Overcoming these challenges was rewarding, and it marked an essential step in preparing my blog for more complex features in the future. In this blog post, I'll detail my experience with this migration, focusing on the lessons learned and the advantages gained.

Achieving Cost Efficiency with AWS Amplify

One of the primary motivations behind my migration was the potential for cost efficiency. On DigitalOcean, I was using Spaces to store my high-resolution images at a flat fee of $5 per month. While this cost wasn't excessive, it felt like an area where I could achieve savings given my relatively modest storage needs.

After a careful evaluation, I found that AWS S3 offered a more flexible pricing structure, charging me only for the storage I used. Unlike the flat-rate model of DigitalOcean Spaces, AWS S3 allowed me to minimize costs by aligning them directly with my usage.

To put it into perspective, AWS S3 pricing (as of my migration) starts at $0.026 per GB per month for the first 50 TB in Northern California on the S3 Standard tier. Given that my blog's high-resolution images occupied around 0.25 GB, my monthly storage cost would be roughly $0.0065 (0.25 GB × $0.026/GB), a significant reduction from the $5 I was paying with DigitalOcean Spaces.

It's important to note that actual costs can be slightly higher once you factor in data transfer out rates and requests (GET, PUT, COPY, POST, LIST, etc.). This really comes into play during development, when images may be downloaded many times. For my use case, however, these additional costs remained nominal, keeping the overall cost considerably lower than the flat fee I was paying at DigitalOcean.

Additionally, there are separate costs associated with Amplify. I've had the same AWS account for a while now, so I don't qualify for any free-tier discounts. The Amplify pricing page breaks down the costs: build & deploy is billed by the minute, data storage is priced similarly to the S3 Standard tier, and data transfer out is $0.15 per GB served. The most painful part so far has been builds taking 6 to 7 minutes, and I had to run a lot of builds to solve the CSP challenge I'll talk more about later. Now that the site is stable, I won't be doing as many builds. I used AWS's cost calculator to build an estimate and got $3.02, based on generous assumptions of 150 build minutes (approximately 23 builds) and 10 GB of data served each month. I'm not sure how accurate that is, but even as a ballpark it shows I'm still saving money compared to DigitalOcean while getting a lot more features.
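For a rough sanity check, that estimate can be reproduced with a few lines of arithmetic. This is a sketch only; the per-unit rates are assumptions taken from the figures quoted in this post, not authoritative AWS pricing:

```javascript
// Back-of-the-envelope version of the monthly estimate above.
const buildMinutes = 150         // roughly 23 builds at 6-7 minutes each
const buildRatePerMinute = 0.01  // assumed Amplify build & deploy rate
const dataServedGb = 10
const transferRatePerGb = 0.15   // Amplify data transfer out, per GB served
const storageGb = 0.25
const storageRatePerGb = 0.026   // S3 Standard, first 50 TB, Northern California

const total =
  buildMinutes * buildRatePerMinute +
  dataServedGb * transferRatePerGb +
  storageGb * storageRatePerGb

console.log(`Estimated monthly cost: $${total.toFixed(2)}`) // $3.01 with these inputs
```

Build minutes and data transfer dominate; the storage line is effectively noise at this scale.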

My intuition is I may be able to further reduce costs by switching to S3 and CloudFront only. I'll monitor the site over time and I may switch to a different architecture later. For now, I'm happy with the cost savings and the additional features I get with Amplify.

By transitioning to AWS Amplify and S3, I was not only able to achieve cost efficiency but also to benefit from AWS's highly scalable and secure infrastructure, a win-win in my book. In the next section, I'll explore another compelling reason behind my migration: enhanced control over security headers.

Strengthening Security Control with AWS Amplify

In the age of rapidly evolving cybersecurity threats, controlling your security headers is more important than ever. During my time with the DigitalOcean App Platform, I realized that it lacked comprehensive options for managing security headers on static sites. This limited control over security headers meant that I couldn't implement more complex features on my blog without compromising security.

When I explored AWS Amplify, I found that it offered much greater control over security headers. This granular control meant I could enhance my blog's security without hindering the development of complex features.

A critical breakthrough came when I discovered I could use a template for the customHttp.yml file and have Amplify process it after the build. This allowed me to circumvent Gatsby's limitations around creating Content Security Policy (CSP) headers, something I'd struggled with before. The trick was to add the gatsby-plugin-csp plugin to generate the CSP headers, then expose an onPostBuild API hook in gatsby-node.js to create the customHttp.yml file from the meta tag the plugin produces. Make sense? It's okay if it's a bit confusing, because it's a very hacky approach to this problem. Unfortunately, the plugin doesn't generate hashes for all script tags, so you have to do that yourself. Gatsby also shoves inline styles into various elements.
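For reference, the plugin is enabled in gatsby-config.js. This is a minimal sketch: mergeScriptHashes and mergeStyleHashes are real gatsby-plugin-csp options, but treat the values shown as assumptions and check the plugin's README for the full option list and defaults.

```javascript
// Minimal sketch of registering gatsby-plugin-csp in gatsby-config.js.
module.exports = {
  plugins: [
    {
      resolve: "gatsby-plugin-csp",
      options: {
        mergeScriptHashes: true,  // emit hashes for the inline scripts the plugin can see
        mergeStyleHashes: false,  // style hashes don't survive Gatsby's injected inline styles
      },
    },
  ],
}
```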

Here's what the template looks like (it's called customHttp.template.yml):

# Used to generate the customHttp.yml file for AWS Amplify when the project is built.
customHeaders:
  - pattern: '**/*'
    headers:
      - key: 'Strict-Transport-Security'
        value: 'max-age=31536000; includeSubDomains'
      # Content-Security-Policy is auto-generated at build time
      - key: 'X-Frame-Options'
        value: 'DENY'
      - key: 'X-XSS-Protection'
        value: '1; mode=block'
      - key: 'X-Content-Type-Options'
        value: 'nosniff'

This file is tracked in Git, while customHttp.yml is ignored. The customHttp.yml file is generated by the onPostBuild hook in gatsby-node.js:

const fs = require("fs")
const path = require("path")
const crypto = require("crypto")
const jsdom = require("jsdom")
const yaml = require("js-yaml")

exports.onPostBuild = ({ reporter }) => {
  let scriptHashes = new Set()

  // Recursively walk a directory, invoking the callback for each HTML file.
  const walkDir = function (dir, callback) {
    const list = fs.readdirSync(dir)
    list.forEach(function (file) {
      file = dir + "/" + file
      const stat = fs.statSync(file)
      if (stat && stat.isDirectory()) {
        walkDir(file, callback)
      } else if (path.extname(file) === ".html") {
        callback(file)
      }
    })
  }

  // First pass: collect (or generate) an integrity hash for every script tag.
  walkDir(path.join(__dirname, "public"), file => {
    const html = fs.readFileSync(file, "utf8")
    const dom = new jsdom.JSDOM(html)
    const scriptTags = dom.window.document.querySelectorAll("script")

    for (let scriptTag of scriptTags) {
      const integrity = scriptTag.getAttribute("integrity")
      let hash
      if (integrity) {
        // integrity looks like "sha256-<base64>"; keep the digest part
        hash = integrity.split("-")[1]
      } else {
        const scriptSrc = scriptTag.getAttribute("src")
        if (scriptSrc && !scriptSrc.startsWith("http")) {
          const scriptPath = path.join(__dirname, "public", scriptSrc)
          const scriptContent = fs.readFileSync(scriptPath, "utf8")
          hash = crypto
            .createHash("sha256")
            .update(scriptContent)
            .digest("base64")
          scriptTag.setAttribute("integrity", `sha256-${hash}`)
        } else if (!scriptSrc) {
          // If there is no source, but the script tag has content
          const scriptContent = scriptTag.innerHTML
          if (scriptContent) {
            hash = crypto
              .createHash("sha256")
              .update(scriptContent)
              .digest("base64")
            scriptTag.setAttribute("integrity", `sha256-${hash}`)
          }
        }
      }
      if (hash) {
        scriptHashes.add(`'sha256-${hash}'`)
      }
    }

    fs.writeFileSync(file, dom.serialize(), "utf8")
  })

  // Merge the collected hashes into the CSP meta tag generated by gatsby-plugin-csp.
  const indexPath = path.join(__dirname, "public", "index.html")
  const indexHtml = fs.readFileSync(indexPath, "utf8")
  const dom = new jsdom.JSDOM(indexHtml)
  const cspMeta = dom.window.document.querySelector(
    'meta[http-equiv="Content-Security-Policy"]'
  )
  const csp = cspMeta.getAttribute("content")
  const scriptCsp = `script-src 'self' ${[...scriptHashes].join(" ")};`
  const styleCsp = `style-src 'self' 'unsafe-inline';` // Have to use unsafe-inline here because Gatsby shoves inline styles into various elements.
  const newCsp = csp
    .replace(/script-src 'self' [^;]*;/, scriptCsp)
    .replace(/style-src 'self' [^;]*;/, styleCsp)
  cspMeta.setAttribute("content", newCsp)
  fs.writeFileSync(indexPath, dom.serialize(), "utf8")

  // Write the final CSP into customHttp.yml, based on the tracked template.
  const customHttpTemplatePath = path.join(__dirname, "customHttp.template.yml")
  const customHttpPath = path.join(__dirname, "customHttp.yml")
  let customHttpTemplate = yaml.load(
    fs.readFileSync(customHttpTemplatePath, "utf8")
  )

  for (let item of customHttpTemplate.customHeaders) {
    let cspHeader = item.headers.find(
      header => header.key === "Content-Security-Policy"
    )
    if (cspHeader) {
      cspHeader.value = newCsp
    } else {
      item.headers.push({
        key: "Content-Security-Policy",
        value: newCsp,
      })
    }
  }

  fs.writeFileSync(customHttpPath, yaml.dump(customHttpTemplate), "utf8")

  // Second pass: propagate the updated CSP meta tag to every HTML file.
  walkDir(path.join(__dirname, "public"), filePath => {
    const htmlContent = fs.readFileSync(filePath, "utf8")
    const dom = new jsdom.JSDOM(htmlContent)
    const cspMeta = dom.window.document.head.querySelector(
      'meta[http-equiv="Content-Security-Policy"]'
    )

    if (cspMeta) {
      cspMeta.setAttribute("content", newCsp)
      fs.writeFileSync(filePath, dom.serialize(), "utf8")
    }
  })

  reporter.info(`Updated CSP in ${customHttpPath}`)
}

Essentially, the hook recursively iterates over all the HTML files in the public directory generated by the build. If it finds a script tag without an integrity hash, it generates one and adds it to the tag; if a script tag already has a hash, it adds that hash to a set. Once every HTML file has been processed, it merges the collected hashes into the CSP meta tag and writes the resulting CSP to the customHttp.yml file. It also updates the CSP meta tag in all the HTML files.

It's a lot to wrap one's head around, but it was necessary. There has been an ongoing discussion on how Gatsby's design causes CSP problems without an actual resolution. Most will find they'll have to come up with a custom solution to get what they want. Thankfully, I was able to hammer this out over two evenings with the help of ChatGPT and GitHub Copilot. ChatGPT was able to get me most of the way there, but I had to ultimately step in and make the code work correctly. Eventually, I may try to come up with a solution to the inline style problem so that I can have style hashes also, but for now, I'm happy with the result.

Implementing the CSP and other security headers was challenging but rewarding. With each challenge I overcame, my blog became more secure, and I became more proficient at using AWS Amplify.

A Note on Mozilla Observatory

I also want to briefly mention how I used Mozilla Observatory to scan the site before the migration (on DigitalOcean) and after (on AWS Amplify with the correct headers). The results were night and day: the site went from an F to an A+ score. Keep in mind, Mozilla Observatory is just one tool, and it's not the end-all-be-all of security, but it's a good start.

As a result, I can now confidently build more complex features powered by serverless functions, knowing that security is under my control. In the next section, I'll delve into how this migration furthered my professional growth and provided an opportunity to use AI tools like ChatGPT and GitHub Copilot.

Professional Growth and Skill Development with AWS


The migration from DigitalOcean to AWS wasn't just about cost savings and enhanced security; it also served as an excellent opportunity for professional growth. As someone who interacts with AWS in my professional life, this project allowed me to delve deeper into AWS services, honing my skills and gaining invaluable experience along the way. Specifically, I was able to learn more about AWS Amplify, AWS S3, and AWS CloudFront while also experimenting with AI-powered tools like ChatGPT and GitHub Copilot.

Perhaps the most significant benefit of this migration project was the opportunity it gave me to experiment with AI-powered tools. Throughout this process, I found immense value in using OpenAI's ChatGPT and GitHub Copilot.

ChatGPT was particularly helpful during the creation of the code for CSP generation. It was like having a knowledgeable AI assistant, ready to help at any point. I was amazed by how much of a productivity boost ChatGPT offered, saving me time and effort and leading me towards effective solutions swiftly. One thing to watch out for is the limit of messages in GPT-4, since running out prematurely can cause some headaches. For example, you can't go back to GPT-4 after downgrading to GPT-3 in a chat. This is a minor inconvenience, but it's something to keep in mind. Also, it did struggle a bit with fine-tuning the CSP generation code, but it was still a massive help. It was able to catch things that I wasn't fully aware of.

On the other hand, GitHub Copilot served as a reliable sidekick, offering code suggestions and completions, making the development process smoother and more efficient. It's impressive how AI tools like these can enhance productivity and guide us towards better coding practices.

This migration has taught me that embracing such tools can lead to significant productivity improvements and learning opportunities. Up next, I'll share the process of integrating my current domain registrar with AWS Amplify.

Integrating AWS Amplify with a Third-Party Domain Registrar

An integral part of this migration was integrating AWS Amplify with my existing domain registrar. AWS Amplify provides an easy-to-follow process to connect a custom domain managed by a third-party DNS provider, ensuring that my blog's traffic was seamlessly redirected to the new setup.

Here's the simplified version of the steps I took, based on AWS Amplify's step-by-step guide:

  1. I started by configuring my custom domain in the AWS Amplify console, under the "Domain management" section.
  2. AWS Amplify then provided me with a set of DNS records that I needed to enter into my third-party domain registrar's DNS settings.
  3. Once I had added the DNS records, AWS Amplify initiated the process of domain verification. This step ensures that you own the domain you're attempting to connect.
  4. After the domain verification was completed, AWS Amplify automatically started to set up an HTTPS connection, issuing an SSL/TLS certificate for my domain.
  5. Finally, I confirmed the routing configuration, essentially instructing my third-party registrar to route traffic for my domain to AWS Amplify.
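To make the DNS side concrete, the records added at the registrar typically look something like the following. Every value below is illustrative only; Amplify generates the actual verification hostname and amplifyapp.com target for your app:

```
; hypothetical zone-file entries
_abc123.example.com.   CNAME   _def456.acm-validations.aws.         ; domain/certificate verification
www.example.com.       CNAME   main.d1111111111111.amplifyapp.com.  ; app traffic
```

Note that a bare apex domain (example.com without www) usually can't take a CNAME, so registrars that support it offer ALIAS/ANAME records for that case instead.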

By following this process, I was able to ensure that my blog's traffic was efficiently managed and directed to AWS Amplify. This ease of setup and integration highlights another advantage of AWS Amplify: its ability to work smoothly with various third-party services. In the final section, I'll summarize the journey and reflect on the benefits and growth opportunities presented by this migration.


In conclusion, migrating my blog to AWS Amplify, along with integrating my third-party domain registrar, turned out to be a worthwhile decision. I achieved cost efficiency, gained granular control over security headers, and enhanced my skill set with AWS services. The flexible pricing model of AWS S3 enabled significant savings on image storage costs compared to DigitalOcean Spaces. More importantly, with AWS Amplify, I could overcome Gatsby's limitations on security headers and build more complex features securely, thereby granting me greater control over my site's functionality.

Furthermore, my familiarity with AWS services grew significantly, which is highly beneficial considering I frequently interact with AWS at work. Additionally, the use of AI tools like ChatGPT and GitHub Copilot streamlined the development process, making it more efficient and enlightening.

Lastly, I seamlessly integrated my existing domain with AWS Amplify, leveraging their easy-to-follow process to ensure the smooth routing of my blog's traffic. By doing so, I experienced first-hand the interoperability of AWS with third-party services.

Overall, I encourage fellow developers to consider AWS Amplify for their projects, particularly for its flexibility, control over security headers, and compatibility with various services. It is not just a migration; it's a step towards growth.


Welcome to my blog! I am a software engineer based in Southern California, and I love sharing my thoughts and experiences about all things tech. From software development and programming to the latest tech trends and news, you'll find it all here on my blog. Follow along to stay up to date and get insights from a real-life software engineer living and working in SoCal. Thanks for visiting!