Deploy a Rails app to an AWS EKS cluster through Razorops

Cluster: AWS EKS
Docker Image Registry: AWS ECR
Version Control Tool: GitHub
CI/CD pipeline tool: Razorops


  1. We have an active AWS account. Let's say Account ID: 1111111
  2. A fresh AWS EKS cluster with worker nodes in it, from the above AWS account. Let's say our cluster name is razorops-cluster. No resources added yet. Please check our Connecting AWS EKS to Razorops article.
  3. The ECR URL is ready. Please check our Using Docker Image Registry article.
  4. GitHub with any Rails demo app repository.


When we push code to the repository, it triggers the Razorops tasks. Razorops picks up your .razorops.yaml file and executes the tasks defined in it. Typically, build, test, and deploy tasks are defined there. In this demo we will have only two tasks: first, build the docker image and push it to ECR; second, pull the recently pushed image and deploy it.


  1. Enable AWS EKS remote access
  2. Add Service Account to EKS
  3. Add Role to EKS
  4. Bind Role to ServiceAccount
  5. Get EKS ServiceAccount credentials.
  6. Create a ConfigMap in EKS for storing the ENV variables of our demo app
  7. ECR credential update cronjob. You might wonder why this step is required: ECR credentials are based on an access token which expires every 12 hours, so the next time you push your code, Razorops will fail to push the image because of stale credentials, and the whole workflow will fail.
  8. Deployment config for cluster.
  9. Create .razorops.yaml
  10. You have integrated GitHub with Razorops on the website.
  11. You have registered the ECR with Razorops on the website.

Let's start with each step.

Enable AWS EKS remote access.

This step is required because we need to do the initial setup of the cluster: telling it what deployments, service accounts, roles, etc. we need.

Please check our article Remotely connecting to AWS EKS.
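As a quick sketch, remote access typically amounts to generating a kubeconfig entry for the cluster with the AWS CLI; [AWS_REGION] is a placeholder you must fill in with your own region:

```shell
# Write credentials for razorops-cluster into ~/.kube/config
aws eks update-kubeconfig --name razorops-cluster --region [AWS_REGION]

# Verify the connection by listing the worker nodes
kubectl get nodes
```

The article above covers the IAM permissions this command needs.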


These steps are explained in Connecting AWS EKS to Razorops in detail.

In the above article, you should create the kubeconfig from the service account only after going through this article first. It may cause confusion if kubectl's current-context gets changed to the service account: its roles may not allow some of the steps below. The file we provide will work fine, but if you have changed the roles, this issue may arise.

Create a ConfigMap in EKS for storing the ENV variables of our demo app

For storing the ENV variables, we need to create a resource of kind ConfigMap; you can define all your required variables in it as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  DATABASE_URL: "db-url"
  RAILS_ENV: "production"
  RACK_ENV: "production"
  SECRET_KEY_BASE: "ksdkd098399823bs997833hsddsoi39009"
  SMTP_USERNAME: "username"
  SMTP_PASSWORD: "somepassword"
  AWS_ACCESS_KEY_ID: "aws-access-key"
  AWS_REGION: "your-region"
  AWS_SECRET_ACCESS_KEY: "secret-access-key"

You can pass this ConfigMap to any other resource wherever the need arises, e.g. we will reference it in the Deployment resource, which runs our app in pods and makes these variables available inside them.

You can download this file as follows:

curl -O

kubectl apply -f configmap.yaml
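Once applied, you can confirm the ConfigMap exists and inspect its data; config-map is the name set in the manifest above:

```shell
# Dump the ConfigMap, including all key/value pairs
kubectl get configmap config-map -o yaml

# Or a human-readable summary
kubectl describe configmap config-map
```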

For more details, refer to the Kubernetes ConfigMap documentation.

ECR Credential update cronjob

As mentioned before, the ECR auth token expires every 12 hours, hence we need to keep it fresh for our deployments. The best way to keep it updated automatically is a cronjob which periodically fetches a fresh token for our service account.

Get the ecr-cred-cronjob.yaml file.

curl -O

After downloading the file, replace the values with your own AWS account values. Once you have made the changes, apply this file to your cluster:

kubectl apply -f ecr-cred-cronjob.yaml

Let's look at the content of this file:

apiVersion: batch/v1beta1
kind: CronJob  # tell Kubernetes that this is a cronjob
metadata:
  name: ecr-cred-cronjob  # name of the job, can be anything
  namespace: default
spec:
  schedule: 0 */6 * * *  # cron pattern | every 6 hours
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  suspend: false
  jobTemplate:
    spec:
      template:
        spec:
          containers:    # the container that will be triggered by the cronjob
          - image: odaniait/aws-kubectl:latest   # the base image used to run our shell script
            imagePullPolicy: IfNotPresent  # as per your requirement | standard | read docs
            name: ecr-cred-helper
            command:    # our script goes here
            - /bin/sh   # standard | the entry point for execution once the cron triggers
            - -c        # standard
            - |-        # the actual script starts here
              ACCOUNT=[AWS_ACCOUNT_ID]      # your aws account id
              REGION=[AWS_REGION]           # your aws region of choice
              SECRET_NAME=${REGION}-ecr-registry  # name of the secret
              EMAIL=[YOUR AWS ACCOUNT EMAIL ID]   # any email address
              TOKEN=`aws ecr get-login --region ${REGION} --registry-ids ${ACCOUNT} | cut -d' ' -f6`   # ask ECR for a login password and store it in TOKEN
              echo "ENV variables setup done."
              kubectl delete secret --ignore-not-found $SECRET_NAME   # delete the previous secret, if any
              kubectl create secret docker-registry $SECRET_NAME --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com --docker-username=AWS --docker-password="${TOKEN}" --docker-email="${EMAIL}"
              echo "Secret created by name: $SECRET_NAME"
              kubectl patch serviceaccount razorops -p '{"imagePullSecrets":[{"name":"'$SECRET_NAME'"}]}'   # update the razorops service account
              echo "All done."
            env:                     # environment variables needed by the aws cli
            - name: AWS_DEFAULT_REGION
              value: [AWS_REGION]
            - name: AWS_SECRET_ACCESS_KEY
              value: [AWS_SECRET_ACCESS_KEY]
            - name: AWS_ACCESS_KEY_ID
              value: [AWS_ACCESS_KEY_ID]
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: Default   # sometimes the pod won't have internet access with 'ClusterFirst'
          hostNetwork: true
          restartPolicy: Never  # standard | as per requirement
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30

Change all the values within square brackets []. If you have changed the service account name, you need to reference it here in the container command: kubectl patch serviceaccount [serviceaccount_name] ...
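One caveat: the script uses aws ecr get-login, which exists only in AWS CLI v1. If the image you run ships AWS CLI v2, that subcommand was removed; the v2 equivalent is get-login-password, so the TOKEN line would become:

```shell
# AWS CLI v2 replacement for the TOKEN line in the cronjob script
TOKEN=$(aws ecr get-login-password --region ${REGION})
```

The rest of the script can stay unchanged, since the token is passed to kubectl create secret the same way.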

Note down SECRET_NAME, which is ${REGION}-ecr-registry. We will use it in the deployment as the imagePullSecrets name.

If you notice, this cronjob is scheduled every 6 hours, which means it may not run right away while you are working through this tutorial. To make it run immediately, you can create a Job resource and run it.

You can get ecr-job.yaml from our GitHub repository.

curl -O

Change the required values and create it as follows:

kubectl create -f ecr-job.yaml

In this ecr-job.yaml, we have used generateName for the job's name. Kubernetes will automatically generate a name with the prefix defined there, which means you can create the job multiple times without name conflicts with existing jobs. Note that kubectl apply will not work here, as it throws an error for the resource name being empty.
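Alternatively, assuming a reasonably recent kubectl, you can trigger a one-off run directly from the cronjob without a separate Job manifest; the job name ecr-cred-manual-1 is arbitrary:

```shell
# Create a Job from the CronJob's template and run it immediately
kubectl create job ecr-cred-manual-1 --from=cronjob/ecr-cred-cronjob

# Watch it complete
kubectl get jobs -w
```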

Deployment config for cluster

Before we run the automation for deployment, we need to define the Deployment resource. During automation, we will only update this deployment's image to the one we most recently pushed; the rest of the configuration stays the same.

For our deployment, we will use the following file:

curl -O

kubectl apply -f deployment.yaml

Let's look at the deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      imagePullSecrets:
      - name: [AWS_REGION]-ecr-registry
      containers:
        - image: [AWS_ACCOUNT_ID].dkr.ecr.[AWS_REGION].amazonaws.com/[REPO_NAME]:latest
          name: web
          envFrom:
            - configMapRef:
                name: config-map
          ports:
            - name: backend-port
              containerPort: 3000
          readinessProbe:
            tcpSocket:
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 20

Here we are creating a deployment named web. We have defined 3 replicas, which means we want our app to run in 3 pods. Under imagePullSecrets, we have given the name of the secret which we created earlier for the ECR credential cronjob. Under containers, we have defined the docker image registry URL with the latest tag. We also need the environment variables in the deployment, so we referenced the ConfigMap under envFrom inside the container spec.

For more details on the Deployment resource, please refer to the Kubernetes Deployment documentation.
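After applying the manifest, it is worth checking that all three replicas actually come up, e.g.:

```shell
# Block until the rollout finishes (or fails)
kubectl rollout status deployment/web

# List the pods belonging to this deployment
kubectl get pods -l app=web
```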

Create .razorops.yaml file.

We already have a document on this: Deploying to AWS EKS cluster covers it in detail. You can define tasks and a workflow in the file. In this demo we will have only two tasks: build and push the image, then deploy.

Let's see the YAML file and learn about it:

tasks:
  build-image:
    type: build
    image: '[AWS_ACCOUNT_ID].dkr.ecr.[AWS_REGION].amazonaws.com/[REPO_NAME]'
    tags:
      - latest
      - ${CI_COMMIT_SHA:0:8}
    push: true

  deploy-k8s:
    image: lachlanevenson/k8s-kubectl:v1.11.8
    kubernetes:
      name: prod
      cluster:
        name: razorops-cluster
    commands:
      - apk --no-cache add gettext libintl
      - kubectl -n default set image deployment.v1.apps/web web=[AWS_ACCOUNT_ID].dkr.ecr.[AWS_REGION].amazonaws.com/[REPO_NAME]:$DOCKER_TAG

workflow:
  - name: production
    tasks: [build-image, deploy-k8s]
    when: branch == "master"

The build-image task builds the image. It also has push: true, which means it will push the image to ECR.

The deploy-k8s task simply updates the image of our deployment/web with the latest pushed image. Once this is done, the deployment is complete.
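Because kubectl set image starts a tracked rollout, a bad deploy can also be reverted manually with the same cluster access, for example:

```shell
# Inspect past rollouts of the web deployment
kubectl rollout history deployment/web

# Roll back to the previous image if the new one misbehaves
kubectl rollout undo deployment/web
```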

Now we have deployed the app, but the docker image also needs a Dockerfile to run the app in containers: we need to specify what it should run and how.

As we are dealing with a Rails app, we will run it on the Puma server.

Our Dockerfile will look pretty much like this:

FROM ruby:2.6.3-stretch

# System packages needed to build gems and run a Rails app
RUN apt-get update && apt-get install -qq -y --no-install-recommends build-essential nodejs libpq-dev

ENV RAILS_ENV=production RACK_ENV=production SECRET_KEY_BASE=xpto APP_HOME=/app/

# Install gems first so this layer is cached between builds
ADD Gemfile* $APP_HOME
RUN cd $APP_HOME && bundle install --without development test --jobs 2

# Copy the application code and work from the app directory
COPY . $APP_HOME
WORKDIR $APP_HOME

RUN RAILS_GROUPS=assets bundle exec rake assets:precompile

CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
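Before pushing, you can sanity-check the image locally; myapp is just a hypothetical local tag:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t myapp .

# Run it locally, mapping Puma's port 3000 to the host
docker run --rm -p 3000:3000 myapp
```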

And we are done. Now just push your changes to GitHub and go to your Razorops dashboard; there will be a pipeline in progress.