Cloud Storage Creation
Step 1: Create a Google Cloud Storage Bucket for RapidFort use
- Create a Google Cloud storage bucket: e.g. "rapidfort-poc-gs"
- Location: Multi-region, or select the region where the RapidFort VM will be deployed
- Storage class: Standard
- Enforce public access prevention on this bucket
- Access control: uniform
- Protection tools: none
- Accept the default of no public access in the pop-up shown on create
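The same bucket can also be created from the command line. This is a minimal sketch using the gcloud CLI; the bucket name matches the example above, and the US multi-region location is an assumption to adjust as needed:
gcloud storage buckets create gs://rapidfort-poc-gs \
    --location=US \
    --default-storage-class=STANDARD \
    --uniform-bucket-level-access \
    --public-access-prevention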
Step 2: Create an IAM Role for RapidFort
Create an IAM role: e.g. "rapidfort-poc-role"
- Base the permissions on the "Storage Object Creator" and "Storage Object Viewer" roles
- Select the following permissions:
storage.buckets.get
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.list
Note that some of these permissions, i.e. storage.objects.delete and storage.buckets.get, are not included in "Storage Object Creator" or "Storage Object Viewer" and must be added via the individual permission look-up.
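If the gcloud CLI is preferred over the console, the custom role can be created in one step. Custom role IDs may not contain hyphens, so this sketch uses rapidfort_poc_role as the role ID and keeps the hyphenated name as the title; <PROJECT_ID> is a placeholder:
gcloud iam roles create rapidfort_poc_role --project=<PROJECT_ID> \
    --title="rapidfort-poc-role" \
    --permissions=storage.buckets.get,storage.multipartUploads.abort,storage.multipartUploads.create,storage.multipartUploads.listParts,storage.objects.create,storage.objects.delete,storage.objects.get,storage.objects.list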
Step 3: Create a Service Account for RapidFort
Create a service account: e.g. "rapidfort-poc-sa"
Grant access by selecting the role created during the previous step
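Equivalently with the gcloud CLI, where the project-level binding mirrors the console's "Grant access" step; <PROJECT_ID> and the role ID rapidfort_poc_role follow the earlier sketch and are placeholders:
gcloud iam service-accounts create rapidfort-poc-sa --display-name="RapidFort POC"
gcloud projects add-iam-policy-binding <PROJECT_ID> \
    --member="serviceAccount:rapidfort-poc-sa@<PROJECT_ID>.iam.gserviceaccount.com" \
    --role="projects/<PROJECT_ID>/roles/rapidfort_poc_role"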
Step 4: Update the Storage Bucket Principal and Role
- Bind the IAM role to the bucket created earlier
- Click the overflow menu (⋮) for the selected bucket and select Edit access
- Add Principal "rapidfort-poc-sa" (SA account created above)
- Select Role "rapidfort-poc-role" and save (Role created above)
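The same binding can be applied from the command line; this sketch assumes the bucket, service account, and custom role names used above:
gcloud storage buckets add-iam-policy-binding gs://rapidfort-poc-gs \
    --member="serviceAccount:rapidfort-poc-sa@<PROJECT_ID>.iam.gserviceaccount.com" \
    --role="projects/<PROJECT_ID>/roles/rapidfort_poc_role"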
Step 5: Generate a Private Key for the Service Account
- Generate private-key for service account "rapidfort-poc-sa"
- IAM & Admin -> Service Accounts -> select "rapidfort-poc-sa" SA -> Keys -> Add key
- Use default JSON format
- This JSON file will be needed later
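The key can also be generated with the gcloud CLI; the output file name rapidfort-poc-sa-key.json is just an example:
gcloud iam service-accounts keys create rapidfort-poc-sa-key.json \
    --iam-account=rapidfort-poc-sa@<PROJECT_ID>.iam.gserviceaccount.com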
Compute Engine VM Creation
- Create a GCP Compute Engine VM instance according to the following specifications:
- n2-standard-8 (8 vCPU 32 GB Memory)
- 2 TB Storage (2048 GB for boot disk size)
- Ubuntu 20.04.3 or Debian 11
- VPC consideration
- For convenience, the region and zone can match the VPC where container images are deployed and tested
- If that is not possible, there must be a route to the RapidFort platform from all environments in which container images are deployed and tested
- Add the public SSH keys of the RapidFort administrator
- API and identity management → Add the service account created above
- Allow HTTPS traffic
- Make sure HTTPS and SSH are allowed from any customer-specific network tags
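For reference, a roughly equivalent gcloud command is sketched below; the zone, image family, scopes, and network tag are assumptions and must be adapted to the customer environment (the https-server tag only matches the default-allow-https rule of a default network, and the cloud-platform scope is assumed so that access is governed by the attached service account's IAM role):
gcloud compute instances create rapidfort-poc-vm \
    --zone=<ZONE> \
    --machine-type=n2-standard-8 \
    --boot-disk-size=2048GB \
    --image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud \
    --service-account=rapidfort-poc-sa@<PROJECT_ID>.iam.gserviceaccount.com \
    --scopes=cloud-platform \
    --tags=https-server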
VM Configuration
This section installs the packages required to set up MicroK8s.
Step 1: Install Docker
curl -fsSL https://get.docker.com | sh -
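An optional smoke test confirms the Docker engine is running (the hello-world image is only used for verification):
sudo docker run --rm hello-world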
Step 2: Install snap
Check whether snap is installed on the RapidFort VM. If snap is already installed, proceed to Step 3.
sudo apt install snapd -y
sudo snap install core
export PATH=$PATH:/snap/bin
Step 3: Install kubectl
sudo snap install kubectl --classic
Step 4: Install Helm
sudo snap install helm --classic
Step 5: Install Microk8s
sudo snap install microk8s --classic
sudo usermod -a -G microk8s $USER
mkdir -p ~/.kube
sudo chown -f -R $USER ~/.kube
newgrp microk8s
Step 6: Start Microk8s
microk8s start
microk8s status --wait-ready
microk8s kubectl get nodes # check node status.
microk8s enable dns hostpath-storage ingress
microk8s kubectl config view --raw >| ~/.kube/config
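With the kubeconfig exported above, the standalone kubectl from Step 3 should now work against MicroK8s; a quick check:
kubectl get nodes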
Step 7: Update Microk8s with DNS server address
This is required if public DNS is blocked. It reconfigures CoreDNS in MicroK8s to use the private DNS server instead of the default 8.8.8.8. If this is not done, the RapidFort CLI is likely to fail.
resolvectl status # get DNS server
microk8s disable dns
microk8s enable dns:<COMPANY_DNS_IP>
IMPORTANT: It should be possible to curl from both the VM host and the RapidFort runner pod:
curl https://www.google.com
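The RapidFort pods are not deployed yet at this stage, so a throwaway pod can stand in for the runner when verifying in-cluster egress; the curlimages/curl image is just an example:
microk8s kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -sI https://www.google.com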
RapidFort Helm Installation
Step 1: Create a data directory for RapidFort
sudo mkdir -p /opt/rapidfort/data
sudo chmod 777 -R /opt/rapidfort/
Step 2: Generate a base64 encoding of the GCP Service Account key file
cat <JSON_KEY_FILE> | base64 -w 0   # -w 0 avoids line wrapping so the value fits on a single line
Step 3: Encode RapidFort user data
Create the file /opt/rapidfort/.user_data and populate as follows:
# This IP can be a private or public IP, as long as it is reachable from the environments where the stub images will be deployed.
RF_APP_HOST=<ip addr>
# Admin account id
RF_APP_ADMIN=<RapidFort Super Admin Email>
RF_APP_ADMIN_PASSWD=<RapidFort Super Admin Password>
# GCP Bucket created for RapidFort
RF_S3_BUCKET=<Google Cloud Storage Bucket name>
RF_STORAGE_TYPE=gs
# Output from step 2
RF_GS_CREDS=<BASE64_JSON_KEY_FILE>
This will be sourced in step 5.
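To avoid pasting the long base64 string by hand, the RF_GS_CREDS line can be appended directly; this sketch assumes the key file path from Step 2:
echo "RF_GS_CREDS=$(base64 -w 0 <JSON_KEY_FILE>)" >> /opt/rapidfort/.user_data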
Step 4: Download the RapidFort Helm Chart
RF_HELM_CHART_VERSION="1.1.21-ib"
mkdir -p /opt/rapidfort
pushd /opt/rapidfort
curl -L https://github.com/rapidfort/rapidfort/archive/refs/tags/${RF_HELM_CHART_VERSION}.tar.gz --output rapidfort.tar.gz
tar -xvf rapidfort.tar.gz && rm -rf rapidfort.tar.gz
pushd rapidfort-${RF_HELM_CHART_VERSION}/chart
echo -e "export RF_HELM_DIR=`pwd`\n" > /opt/rapidfort/.rf_env
popd
popd
Step 5: Install RapidFort via the Helm Chart
source /opt/rapidfort/.rf_env
source /opt/rapidfort/.user_data
pushd ${RF_HELM_DIR} > /dev/null
helm_values="--set secret.rf_app_admin=${RF_APP_ADMIN} \
--set secret.rf_app_admin_passwd=${RF_APP_ADMIN_PASSWD} \
--set secret.s3_bucket=${RF_S3_BUCKET} \
--set secret.storage_type=${RF_STORAGE_TYPE} \
--set secret.rf_app_host=${RF_APP_HOST} \
--set global.container_engine=docker \
--set secret.gs_cred=${RF_GS_CREDS} \
--set global.ingressClassName=nginx
"
helm upgrade --install rapidfort ./ ${helm_values}
popd
Step 6: Check all Pods are up and running
kubectl get pods
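Optionally, the following waits until every pod reports Ready; the timeout value is an arbitrary example:
kubectl wait --for=condition=Ready pods --all --timeout=600s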
Next Steps
- Sign in to the RapidFort platform (VM IP address with the admin email and password above)
- Update the admin password via Profile Settings
- Get RapidFort License
- Ensure rfstub & rfharden work from the target container environment