Deploying RapidFort with P1's Big Bang DevSecOps Platform


Big Bang is a DevSecOps platform built from DoD hardened and approved packages deployed in a customer owned Kubernetes cluster.
RapidFort complements the Big Bang initiative in two major ways (figure 1):
  1. It forms part of the DevSecOps (DSOP) stack, improving security in customers' build pipelines.
  2. Images hardened by RapidFort (via Iron Bank) can provide the base infrastructure within the DSOP Software Factory.
The focus of this document is on #1.
Figure 1: Big Bang DevSecOps (DSOP) Software Factory

RapidFort Architecture

In line with the Big Bang architecture, RapidFort is deployed via a Helm chart into a Big Bang Kubernetes cluster. The RapidFort platform is made up of a number of microservices. Within Big Bang they can form a service mesh (istio), and access to the platform via the UI or API comes through an ingress gateway (figure 2).
There are some interdependencies within the microservices, therefore any deviations from the RapidFort Helm chart or RapidFort instructions must be discussed with RapidFort beforehand. That said, it is possible to replace some elements. For example, RapidFort ships with a MySQL database, but it is possible to use an external database instead.
Figure 2: RapidFort Platform Overview

Deployment Guide

RapidFort deployment is typically straightforward; however, the Big Bang ecosystem can be complex, with many moving parts, and it usually varies between customer deployments. Therefore it is imperative that the prerequisites are thoroughly checked.
It is IMPORTANT not to remove any of the RapidFort microservices: doing so will hinder or break the platform. Contact RapidFort support with any questions or planned deviations from the standard deployment.
This guide assumes istio is already set up in the customer's Big Bang cluster. Although istio is out of scope of this document, see the RapidFort basic istio example for more help.
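A quick sanity check of the CLI tooling on the node can save time before working through the prerequisites. A minimal sketch (the tool list is an assumption based on this guide; `check_tools` is a hypothetical helper):

```shell
#!/usr/bin/env bash
# check_tools: print OK/MISSING for each required CLI tool;
# returns non-zero if anything is missing.
check_tools() {
  local rc=0 tool
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: OK"
    else
      echo "$tool: MISSING"
      rc=1
    fi
  done
  return $rc
}

# Tool list assumed from the prerequisites in this guide:
check_tools git kubectl helm docker || echo "install the missing tools before continuing"
```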


  • Running Kubernetes cluster that follows the Big Bang architecture
    • Helm, kubectl, and Git installed
    • Login / access from a node
    • Ingress controller with a network load balancer, i.e. capture/review the istio config and labels:
      • Get the istio namespaces (e.g. the istio control plane is “istio-system”)
        kubectl get ns --show-labels=true | grep istio
      • Get the istio ingress gateway in the istio control plane namespace
        kubectl get gateway -n istio-system --show-labels=true
      • Get the labels/selector of the istio deployment in the istio control plane namespace (e.g. app=istio-ingressgateway,istio=ingressgateway)
        kubectl get deploy -n istio-system -o wide
    • DNS working / can reach the RapidFort vulnerability database
        cat /etc/resolv.conf
        host <DNS>
  • S3 bucket (in the same region as the cluster, if running in AWS) for RapidFort object storage
    • IAM user with an IAM policy to interact with the S3 bucket
      • AWS Access Key
      • AWS Secret Key
    • Region
    • S3 bucket name
  • Customer’s RapidFort Platform Details
    • RapidFort admin email
      • Use a real email address for RF_APP_ADMIN. The main consequence of using a fake email is that you cannot reset your password, since the verification code will go to the fake, inaccessible address. Another consequence is that you will not currently be able to create accounts for any other email domains.
    • RapidFort platform FQDN
    • Networking Considerations:
      • Host node(s) are in the same region as the S3 bucket and have the correct security group rules (HTTPS 443 and SSH 22 ingress; confirm the required egress, e.g. HTTPS 443, with RapidFort support)
      • SSL certificates for the customer instance of the RapidFort platform, e.g. RapidFort tests with a k8s TLS secret holding a wildcard certificate
      • DNS as above
      • Access to the SMTP email server, i.e. the RapidFort deployment needs the correct security group and networking/security policies to allow sending emails
      • The container engine (where the CLI will be run) must be able to reach the RapidFort platform (sometimes it can be locked down or have a specific DNS set-up)
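Of the prerequisites above, S3 access is worth verifying up front, independently of RapidFort. A minimal sketch using the AWS CLI (`check_s3_access` is a hypothetical helper; it assumes `aws` is installed and the credentials from the prerequisites are exported in the environment):

```shell
#!/usr/bin/env bash
# Verify the IAM user can list, write, and delete objects in the
# RapidFort S3 bucket. All names here are placeholders.
check_s3_access() {
  local bucket="$1"
  aws s3 ls "s3://${bucket}" >/dev/null || { echo "list failed"; return 1; }
  echo "rf-probe" > /tmp/rf-probe.txt
  aws s3 cp /tmp/rf-probe.txt "s3://${bucket}/rf-probe.txt" >/dev/null || { echo "write failed"; return 1; }
  aws s3 rm "s3://${bucket}/rf-probe.txt" >/dev/null || { echo "delete failed"; return 1; }
  echo "S3 access OK for ${bucket}"
}

# Usage (with the prerequisite values exported first):
#   export AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... AWS_DEFAULT_REGION=...
#   check_s3_access "<s3 bucket name>"
```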


Installation Steps

This section has all the steps required. For more information, see values-override.yaml for all available parameters. See the working Big Bang example for a full end-to-end example, including istio steps.

1. Git clone the RapidFort Big Bang repository
mkdir -p ironbank
pushd ironbank
  git clone <RapidFort Big Bang repository URL>
2. Create the RapidFort namespace, with labels for istio injection and the RapidFort app
  • Create the YAML file
cat <<EOF > ironbank/rapidfort/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: rapidfort
  labels:
    istio-injection: enabled
    app: rapidfort
EOF
  • Apply the YAML file
kubectl apply -f ironbank/rapidfort/namespace.yaml


3. Create a K8s docker-registry secret in the RapidFort namespace to be able to pull registry1 images
a. Verify the registry credentials beforehand, to avoid image pull errors and having to recreate secrets.

Explicitly set the key in the Harbor UI, otherwise rotated keys could break the secret.
A docker-registry secret works for both Docker and Podman.
sudo docker login registry1.dso.mil --username ${REGISTRY1_USERNAME} --password ${REGISTRY1_PASSWORD}
b. Create the secret
set +H
kubectl create secret docker-registry private-registry \
  --docker-server=registry1.dso.mil \
  --docker-username=${REGISTRY1_USERNAME} \
  --docker-password=${REGISTRY1_PASSWORD} \
  -n rapidfort
set -H
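For troubleshooting image pull errors, it helps to know that a docker-registry secret is just a base64-encoded `.dockerconfigjson`. A minimal sketch of the auth entry kubectl builds from the username and password (the sample credentials are placeholders; the `registry1.dso.mil` server is the Iron Bank registry):

```shell
#!/usr/bin/env bash
# Reconstruct the auth entry kubectl stores in the secret's
# .dockerconfigjson. Sample credentials are placeholders.
REGISTRY1_USERNAME="alice"
REGISTRY1_PASSWORD="secret"
auth=$(printf '%s' "${REGISTRY1_USERNAME}:${REGISTRY1_PASSWORD}" | base64)
printf '{"auths":{"registry1.dso.mil":{"auth":"%s"}}}\n' "$auth"
```

To compare against a live secret, decode it with `kubectl get secret private-registry -n rapidfort -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d`.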
4. Create a values-override.yaml file (confirm the section and key names below against the chart's values.yaml)
cat <<'EOF' > ironbank/rapidfort/chart/values-override.yaml
aws_access_key_id: "<update here>"
aws_secret_access_key: "<update here>"
aws_default_region: "<update here>"
s3_bucket: "<update here>"
# -- This value must be a syntactically valid email (it doesn't need to be a real one, though it should be for production)
rf_app_admin: "<update here>"
rf_app_admin_passwd: "<update here>"
# -- The RapidFort service FQDN (if an FQDN is not available, use the IP address);
#    also used as the host name in the ingress
rf_app_host: ""
allowed_rf_host: ""

istio:
  enabled: true
  mtls:
    mode: PERMISSIVE
  ingress:
    enabled: true
    # istio_namespace=istio-system
    # confirm the gateway name using: kubectl get gateway -n ${istio_namespace}
    gateways:
      # - <istio-namespace>/<gateway name>
      - istio-system/main
  hardened:
    enabled: false

networkPolicies:
  enabled: true
  # confirm the ingress labels. You can find the labels for the istio ingress
  # gateway using: kubectl get deploy -n <istio-namespace> -o wide
  ingressLabels:
    app: istio-ingressgateway
    istio: ingressgateway
  # -- IP range allowed to reach the RapidFort API
  rapidfortApiIpRange: ""
  # -- CIDR of the Kubernetes control plane
  controlPlaneCidr: ""
EOF
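Before installing, it is worth checking that no "<update here>" placeholders remain in the file. A minimal sketch (`check_placeholders` is a hypothetical helper):

```shell
#!/usr/bin/env bash
# Fail fast if any "<update here>" placeholders are still present in
# the values file before running helm install.
check_placeholders() {
  local file="$1"
  if grep -n '<update here>' "$file"; then
    echo "Unfilled placeholders found in ${file}"
    return 1
  fi
  echo "OK: no placeholders left in ${file}"
}

# Usage: check_placeholders ironbank/rapidfort/chart/values-override.yaml
```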
5. Update and thoroughly review the values-override.yaml file
    • This is a very important step. Getting values wrong here will likely result in multiple troubleshooting sessions.
    • Use the details collected in the prerequisites section:
      • AWS S3 details
        • Verify them independently of RapidFort first, e.g. try to access the bucket with the AWS CLI.
      • RapidFort administrator credentials
      • FQDN
      • Hostname of the ingress
      • istio gateway label, e.g. “- istio-system/main"
      • Network policies have the correct istio ingress gateway and istio app labels, e.g. “app: istio-ingressgateway" and "istio: ingressgateway"

6. Install the RapidFort Helm chart in the RapidFort namespace using the values-override.yaml file
pushd ironbank/rapidfort/chart
  helm upgrade --install rapidfort . -f values-override.yaml -n rapidfort
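The install command returns before the pods are Ready; a hedged sketch for waiting on them before the post-deployment checks (`rf_wait_ready` is a hypothetical helper; the 600s timeout is an arbitrary choice):

```shell
#!/usr/bin/env bash
# Wait for all RapidFort pods to become Ready. The namespace default
# matches the one created earlier in this guide.
rf_wait_ready() {
  local ns="${1:-rapidfort}"
  kubectl wait pods --all -n "$ns" --for=condition=Ready --timeout=600s \
    && echo "all pods in ${ns} are Ready"
}

# Usage: rf_wait_ready rapidfort
```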

Post Deployment Checks

1. Check that the RapidFort VirtualServices and DestinationRules for Redis and MySQL were created:
kubectl get VirtualService -n rapidfort
kubectl get DestinationRule -n rapidfort
2. Check that all pods come up.
 kubectl get pods -n rapidfort
3. If any pods don’t come up, investigate why with kubectl describe or kubectl logs.
4. Wait for a RapidFort welcome email. This will contain a link to the RapidFort dashboard.
5. Visit the RapidFort dashboard. You will be guided through the process for contacting RapidFort Support to request a license.
6. Check the CLI works; specifically, that the CLI can reach the platform.
7. Check that the stubbed image can communicate with the platform.
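The kubectl-based checks above can be collected into one helper; a minimal sketch (`rf_post_deploy_checks` is hypothetical; it assumes the rapidfort namespace used in this guide):

```shell
#!/usr/bin/env bash
# Run the kubectl-based post-deployment checks (steps 1-3) in one go.
rf_post_deploy_checks() {
  local ns="${1:-rapidfort}"
  echo "--- VirtualServices ---"
  kubectl get virtualservice -n "$ns"
  echo "--- DestinationRules ---"
  kubectl get destinationrule -n "$ns"
  echo "--- Pods ---"
  kubectl get pods -n "$ns"
  # Surface pods that are not Running/Completed, for follow-up with
  # kubectl describe / kubectl logs:
  kubectl get pods -n "$ns" --no-headers 2>/dev/null \
    | awk '$3 != "Running" && $3 != "Completed" {print "investigate:", $1}'
}

# Usage: rf_post_deploy_checks rapidfort
```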


Contributing

1. For contributions to the RapidFort Big Bang Helm chart, please follow the
2. Note that you will need to update the following files:
  • Update the version in Chart.yaml
  • Add a new entry to
3. Please resolve any merge conflicts in pull requests.