This guide shows how to install the Akash provider on your Kubernetes cluster.
Time: 30-45 minutes
Prerequisites
Before starting, ensure you have:
Required
- Kubernetes cluster deployed and verified
- Domain name that you control (e.g., provider.example.com)
- Akash wallet with:
  - Minimum 50 AKT recommended (0.5 AKT deposit per bid)
  - Funded account (Fund Your Account)
Optional (if configured)
- GPU support enabled
- Persistent storage (Rook-Ceph) deployed
STEP 1 - Prepare Provider Wallet
Don’t have an Akash wallet yet?
Export Wallet Key
On your local machine (where you created your Akash wallet):
```bash
# Export your private key
provider-services keys export <your-key-name>
```

You'll be prompted for your keyring passphrase. The output will look like:

```
-----BEGIN TENDERMINT PRIVATE KEY-----
kdf: bcrypt
salt: <salt-value>
type: secp256k1

<base64-encoded-key>
-----END TENDERMINT PRIVATE KEY-----
```

Save this output to a file called key.pem.
Create Key Secret
Create a password for your key:
```bash
echo "your-secure-password" > key-pass.txt
```

STEP 2 - Setup on Control Plane Node
SSH into your Kubernetes control plane node and create the provider directory:
```bash
mkdir -p /root/provider
cd /root/provider
```

Upload Files
Upload the following files to /root/provider/:
- key.pem (your private key)
- key-pass.txt (your key password)
Encode Secrets
```bash
# Base64 encode the key
KEY_SECRET=$(cat /root/provider/key.pem | openssl base64 -A)

# Base64 encode the password
KEY_PASSWORD=$(cat /root/provider/key-pass.txt | openssl base64 -A)
```
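The `-A` flag keeps the whole encoding on a single line, which is what the Helm chart expects. A quick sanity check of the encode/decode round trip (safe to run anywhere; the sample file and value are made up):

```shell
# Round-trip check: encode a throwaway file the same way as above, then decode it back.
printf 'sample-secret' > /tmp/b64-check.txt
ENCODED=$(cat /tmp/b64-check.txt | openssl base64 -A)
DECODED=$(echo "$ENCODED" | openssl base64 -d -A)
echo "$DECODED"
```

If the decoded value matches the original, your KEY_SECRET and KEY_PASSWORD were encoded correctly.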
```bash
# Get your provider address
ACCOUNT_ADDRESS=$(provider-services keys show <your-key-name> -a)
```

STEP 3 - Install Helm and Add Akash Repository
Install Helm
```bash
# Download Helm
wget https://get.helm.sh/helm-v4.0.1-linux-amd64.tar.gz
tar -zxvf helm-v4.0.1-linux-amd64.tar.gz
install linux-amd64/helm /usr/local/bin/helm
```
```bash
# Verify installation
helm version
```

Add Akash Helm Repository
```bash
helm repo add akash https://akash-network.github.io/helm-charts
helm repo update
```

STEP 4 - Create Namespaces
Create all required namespaces:
```bash
kubectl create namespace akash-services
kubectl create namespace lease
kubectl label namespace akash-services akash.network=true
kubectl label namespace lease akash.network=true
```

STEP 5 - Install Akash RPC Node
Important: Running your own RPC node is a strict requirement for Akash providers. This removes dependence on public nodes and ensures reliable access to the blockchain.
Install Akash Node via Helm
The default installation uses blockchain snapshots for fast synchronization.
```bash
helm install akash-node akash/akash-node \
  -n akash-services
```

Verify Node is Running
```bash
kubectl -n akash-services get pods -l app=akash-node
```

Expected output:

```
NAME             READY   STATUS    RESTARTS   AGE
akash-node-1-0   1/1     Running   0          2m
```

Sync Time: The node will download and extract a blockchain snapshot, then sync to the latest block. This typically takes ~5 minutes. You can proceed with the next steps while it syncs.
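If you want to script the wait instead of watching logs, the node's Tendermint RPC reports `catching_up` in its `/status` response. A minimal polling sketch (the URL argument is a placeholder for however you reach the node from where you run this):

```shell
# Return success once the node's /status endpoint reports catching_up: false.
synced() {
  curl -s "$1/status" | grep -q '"catching_up": *false'
}

# Example usage (run where the RPC port is reachable):
# until synced http://akash-node-1:26657; do sleep 30; done
```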
STEP 6 - Install Akash Operators
Install Hostname Operator
```bash
helm install akash-hostname-operator akash/akash-hostname-operator \
  -n akash-services
```

Install Inventory Operator
Without persistent storage:
```bash
helm install inventory-operator akash/akash-inventory-operator \
  -n akash-services \
  --set inventoryConfig.cluster_storage[0]=default \
  --set inventoryConfig.cluster_storage[1]=ram
```

With persistent storage (adjust beta3 to your storage class):
```bash
helm install inventory-operator akash/akash-inventory-operator \
  -n akash-services \
  --set inventoryConfig.cluster_storage[0]=default \
  --set inventoryConfig.cluster_storage[1]=beta3 \
  --set inventoryConfig.cluster_storage[2]=ram
```

Note:
- Index 0 is always default (ephemeral storage)
- Index 1 is ram (SHM/shared memory) if no persistent storage, or your persistent storage class (beta1/beta2/beta3)
- Index 2 is ram (SHM/shared memory) if you have persistent storage
- All providers should support SHM for deployments requiring shared memory
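These index rules are easy to get wrong by hand; they can be captured in a small helper that emits the matching `--set` flags. A sketch (the helper name and structure are illustrative, not part of the official tooling):

```shell
# Emit the inventory-operator --set flags for cluster_storage, following the
# index rules above (illustrative helper).
inventory_storage_flags() {
  persistent_class="$1"   # empty if no persistent storage, else beta1/beta2/beta3
  if [ -z "$persistent_class" ]; then
    echo "--set inventoryConfig.cluster_storage[0]=default --set inventoryConfig.cluster_storage[1]=ram"
  else
    echo "--set inventoryConfig.cluster_storage[0]=default --set inventoryConfig.cluster_storage[1]=$persistent_class --set inventoryConfig.cluster_storage[2]=ram"
  fi
}

inventory_storage_flags ""       # no persistent storage
inventory_storage_flags beta3    # with beta3 persistent storage
```

The output can then be spliced into the `helm install` command above.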
Apply Provider CRDs
```bash
kubectl apply -f https://raw.githubusercontent.com/akash-network/provider/main/pkg/apis/akash.network/crd.yaml
```

STEP 7 - Configure DNS
Configure at Your DNS Provider
Log into your DNS provider (Cloudflare, GoDaddy, Route53, etc.) and create the following DNS records:
1. Provider A Record:
```
Type: A
Name: provider (or your subdomain)
Value: <your-provider-public-ip>
TTL: 3600 (or Auto)
```

2. Wildcard Ingress A Record:
```
Type: A
Name: *.ingress.provider (or *.ingress.your-subdomain)
Value: <your-provider-public-ip>
TTL: 3600 (or Auto)
```

Example:
```
provider.example.com → 203.0.113.45
*.ingress.provider.example.com → 203.0.113.45
```

Verify DNS Propagation
After configuring DNS, verify both records resolve correctly:
```bash
# Check provider domain
dig provider.example.com +short

# Check wildcard ingress domain
dig test.ingress.provider.example.com +short
```

Both should return your provider's public IP.
Note: DNS propagation can take a few minutes. Wait until both records resolve before proceeding.
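The wait can be automated with a small polling loop. A sketch, assuming `dig` is installed (the domain and IP below are placeholders):

```shell
# Block until every given domain resolves to the expected IP (illustrative).
wait_for_dns() {
  expected_ip="$1"; shift
  for domain in "$@"; do
    until [ "$(dig +short "$domain" | head -n1)" = "$expected_ip" ]; do
      sleep 10
    done
    echo "$domain -> $expected_ip"
  done
}

# Example usage:
# wait_for_dns 203.0.113.45 provider.example.com test.ingress.provider.example.com
```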
STEP 8 - Create Provider Configuration
Download Price Script
```bash
cd /root/provider
curl -s https://raw.githubusercontent.com/akash-network/provider/main/price_script_generic.sh > price_script.sh
chmod +x price_script.sh
```

Create provider.yaml
Replace the values with your actual configuration:
```bash
cat > /root/provider/provider.yaml << EOF
---
from: "$ACCOUNT_ADDRESS"
key: "$KEY_SECRET"
keysecret: "$KEY_PASSWORD"
domain: "provider.example.com"
node: "http://akash-node-1:26657"
withdrawalperiod: 12h
chainid: "akashnet-2"
attributes:
  - key: region
    value: us-west
  - key: host
    value: akash
  - key: tier
    value: community
  - key: organization
    value: "Your Organization"
  - key: country
    value: US
  - key: city
    value: "San Francisco"
  - key: location-type
    value: datacenter
  - key: capabilities/cpu
    value: intel
  - key: capabilities/cpu/arch
    value: x86-64
  - key: capabilities/memory
    value: ddr4
  - key: network-speed-up
    value: 1000
  - key: network-speed-down
    value: 1000
email: contact@example.com
website: https://example.com
organization: Your Organization

# Pricing (in uakt per unit)
price_target_cpu: 1.60
price_target_memory: 0.30
price_target_hd_ephemeral: 0.02
price_target_hd_pers_hdd: 0.01
price_target_hd_pers_ssd: 0.03
price_target_hd_pers_nvme: 0.10
price_target_endpoint: 0.05
price_target_ip: 5.00
EOF
```

Note: the heredoc delimiter is unquoted (EOF, not 'EOF') so that the $ACCOUNT_ADDRESS, $KEY_SECRET, and $KEY_PASSWORD variables from Step 2 are expanded into the file.

Add Persistent Storage Attributes (if you have Rook-Ceph)
If you configured persistent storage with Rook-Ceph, add the storage attributes to your provider.yaml:
```yaml
attributes:
  # ... existing attributes ...
  - key: capabilities/storage/1/class
    value: <storage-class>
  - key: capabilities/storage/1/persistent
    value: "true"
```

Example for beta3 (NVMe) storage class:
```yaml
  - key: capabilities/storage/1/class
    value: beta3
  - key: capabilities/storage/1/persistent
    value: "true"
```

Example for beta2 (SSD) storage class:
```yaml
  - key: capabilities/storage/1/class
    value: beta2
  - key: capabilities/storage/1/persistent
    value: "true"
```

Important: You can only advertise one storage class per provider. Choose either beta1 (HDD), beta2 (SSD), or beta3 (NVMe) based on what you configured in Rook-Ceph.
Add GPU Attributes (if you have GPUs)
If you configured GPU support, add GPU attributes to your provider.yaml:
```yaml
attributes:
  # ... existing attributes ...
  - key: capabilities/gpu/vendor/nvidia/model/<model>
    value: "true"
  - key: capabilities/gpu/vendor/nvidia/model/<model>/ram/<ram>
    value: "true"
  - key: capabilities/gpu/vendor/nvidia/model/<model>/interface/<interface>
    value: "true"
  - key: capabilities/gpu/vendor/nvidia/model/<model>/interface/<interface>/ram/<ram>
    value: "true"
```

Example for NVIDIA RTX 4090:
```yaml
  - key: capabilities/gpu/vendor/nvidia/model/rtx4090
    value: "true"
  - key: capabilities/gpu/vendor/nvidia/model/rtx4090/ram/24Gi
    value: "true"
  - key: capabilities/gpu/vendor/nvidia/model/rtx4090/interface/pcie
    value: "true"
  - key: capabilities/gpu/vendor/nvidia/model/rtx4090/interface/pcie/ram/24Gi
    value: "true"
```

Note: Model names should be lowercase with no spaces. List each GPU model you're offering.
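Because every model needs the same four keys, generating them is easy to script. A sketch (the helper is illustrative, not part of the official tooling):

```shell
# Emit the four GPU capability attributes for one model (illustrative helper).
gpu_attributes() {
  model="$1"; ram="$2"; iface="$3"
  base="capabilities/gpu/vendor/nvidia/model/$model"
  for key in "$base" "$base/ram/$ram" "$base/interface/$iface" "$base/interface/$iface/ram/$ram"; do
    printf '  - key: %s\n    value: "true"\n' "$key"
  done
}

gpu_attributes rtx4090 24Gi pcie
```

Run it once per GPU model you offer and paste the output into provider.yaml.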
Complete Example with All Features
Here’s a full example showing GPU, persistent storage, and all optional attributes:
```yaml
---
from: "akash1..."
key: "LS0tLS1CRUdJTi..."
keysecret: "eHJiajdSS..."
domain: "provider.example.com"
node: "http://akash-node-1:26657"
withdrawalperiod: 12h
chainid: "akashnet-2"

attributes:
  # Location
  - key: region
    value: us-west
  - key: country
    value: US
  - key: city
    value: "San Francisco"
  - key: location-type
    value: datacenter
  - key: datacenter
    value: us-west-dc-1

  # Required
  - key: host
    value: akash
  - key: tier
    value: community
  - key: organization
    value: "Your Organization"

  # CPU
  - key: capabilities/cpu
    value: intel
  - key: capabilities/cpu/arch
    value: x86-64

  # Memory
  - key: capabilities/memory
    value: ddr5ecc

  # Network
  - key: network-speed-up
    value: 10000
  - key: network-speed-down
    value: 10000

  # GPU (if you have GPUs)
  - key: capabilities/gpu
    value: nvidia
  - key: capabilities/gpu/vendor/nvidia/model/h100
    value: "true"
  - key: capabilities/gpu/vendor/nvidia/model/h100/ram/80Gi
    value: "true"
  - key: capabilities/gpu/vendor/nvidia/model/h100/interface/sxm
    value: "true"
  - key: capabilities/gpu/vendor/nvidia/model/h100/interface/sxm/ram/80Gi
    value: "true"
  - key: cuda
    value: "13.0"

  # Persistent Storage (if you have Rook-Ceph)
  - key: capabilities/storage/1/class
    value: beta3
  - key: capabilities/storage/1/persistent
    value: "true"

  # SHM (Shared Memory) storage class (optional)
  - key: capabilities/storage/2/class
    value: ram
  - key: capabilities/storage/2/persistent
    value: "false"

website: https://example.com
organization: Your Organization

# Pricing
price_target_cpu: 1.60
price_target_memory: 0.30
price_target_hd_ephemeral: 0.02
price_target_hd_pers_hdd: 0.01
price_target_hd_pers_ssd: 0.03
price_target_hd_pers_nvme: 0.10
price_target_endpoint: 0.05
price_target_ip: 5.00

# GPU pricing (format: "model=price,model=price" or "*=price" for all)
price_target_gpu_mappings: "h100=840,*=840"
```

Note: This example shows all possible configurations. Only include the sections relevant to your provider setup.
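The price_target_gpu_mappings value is a plain comma-separated list of model=price pairs with a `*` wildcard, so the lookup semantics are easy to sanity-check with standard tools. A sketch of the exact-match-then-wildcard logic (illustrative, not the provider's actual parser):

```shell
# Resolve a model's price from a "model=price,...,*=price" mapping string,
# falling back to the wildcard entry when the model is not listed.
gpu_price() {
  mappings="$1"; model="$2"
  price=$(printf '%s' "$mappings" | tr ',' '\n' | awk -F= -v m="$model" '$1 == m {print $2; exit}')
  if [ -n "$price" ]; then
    echo "$price"
  else
    printf '%s' "$mappings" | tr ',' '\n' | awk -F= '$1 == "*" {print $2; exit}'
  fi
}

gpu_price "h100=840,*=840" h100   # exact match
gpu_price "h100=900,*=300" a100   # wildcard fallback
```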
STEP 9 - Install Provider
```bash
cd /root/provider

helm install akash-provider akash/provider \
  -n akash-services \
  -f provider.yaml \
  --set bidpricescript="$(cat price_script.sh | openssl base64 -A)"
```

Verify Provider
```bash
kubectl -n akash-services get pods
```

Expected output:
```
NAMESPACE        NAME                                          READY   STATUS    RESTARTS   AGE
akash-services   akash-node-1-0                                1/1     Running   0          4d1h
akash-services   akash-provider-0                              1/1     Running   0          47h
akash-services   operator-hostname-79fbbffbb7-xxxxx            1/1     Running   0          47h
akash-services   operator-inventory-7bb766f7bb-xxxxx           1/1     Running   0          39m
akash-services   operator-inventory-hardware-discovery-node1   1/1     Running   0          38m
```

All pods should show Running status and 1/1 ready.
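This check is easy to script for routine monitoring. A sketch that parses the default `kubectl get pods` columns (NAME, READY, STATUS) from captured output and flags anything not fully ready:

```shell
# Fail (non-zero exit) if any pod is not Running with all containers ready.
# Assumes default `kubectl get pods` column layout: NAME READY STATUS ...
all_ready() {
  awk 'NR > 1 {
    split($2, r, "/")
    if ($3 != "Running" || r[1] != r[2]) { print $1 " not ready"; bad = 1 }
  } END { exit bad }'
}

# Mock output for illustration; on the cluster, pipe from kubectl instead:
#   kubectl -n akash-services get pods | all_ready
sample='NAME               READY   STATUS    RESTARTS   AGE
akash-node-1-0     1/1     Running   0          4d1h
akash-provider-0   1/1     Running   0          47h'

echo "$sample" | all_ready && echo "all pods ready"
```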
STEP 10 - Install Ingress Controller
Create Ingress Configuration
```bash
cat > /root/ingress-nginx-custom.yaml << 'EOF'
controller:
  service:
    type: ClusterIP
  ingressClassResource:
    name: "akash-ingress-class"
  kind: DaemonSet
  hostPort:
    enabled: true
  admissionWebhooks:
    port: 7443
  config:
    allow-snippet-annotations: false
    compute-full-forwarded-for: true
    proxy-buffer-size: "16k"
  metrics:
    enabled: true
  extraArgs:
    enable-ssl-passthrough: true

tcp:
  "8443": "akash-services/akash-provider:8443"
  "8444": "akash-services/akash-provider:8444"
EOF
```

Install Ingress-NGINX
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
```
```bash
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --version 4.12.1 \
  -f /root/ingress-nginx-custom.yaml
```

Label Ingress Resources
```bash
kubectl label namespace ingress-nginx app.kubernetes.io/name=ingress-nginx app.kubernetes.io/instance=ingress-nginx
kubectl label ingressclass akash-ingress-class akash.network=true
```

Verify Ingress
```bash
kubectl -n ingress-nginx get pods
```

Expected output:

```
NAME                           READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-xxx   1/1     Running   0          2m
```

STEP 11 - Verify Provider On-Chain
Check Provider Status
```bash
provider-services query provider get $ACCOUNT_ADDRESS \
  --node https://rpc.akashnet.net:443
```

Expected output:

```yaml
attributes:
- key: region
  value: us-west
- key: host
  value: akash
- key: tier
  value: community
host_uri: https://provider.example.com:8443
info:
  email: ""
  website: ""
owner: akash1...
```

Check Provider Logs
```bash
kubectl -n akash-services logs -f akash-provider-0
```

Look for messages indicating the provider is bidding on deployments.
STEP 12 - Verify Firewall
Ensure these ports are open on your provider’s public IP:
Required:
- 80/tcp - HTTP
- 443/tcp - HTTPS
- 8443/tcp - Provider Endpoint
- 8444/tcp - Provider gRPC
- 30000-32767/tcp - Kubernetes NodePort range
Optional (if using external access):
- 6443/tcp - Kubernetes API
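One quick way to sweep these ports from an external machine is bash's built-in /dev/tcp redirection. A sketch (substitute your real domain; the helper is illustrative):

```shell
# Probe a single TCP port; succeeds only if a connection can be opened.
port_open() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example sweep (uncomment and substitute your domain):
# for p in 80 443 8443 8444; do
#   port_open provider.example.com "$p" && echo "$p open" || echo "$p closed"
# done
```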
Test connectivity:
```bash
# From an external machine
curl -k https://provider.example.com:8443/status
```

STEP 13 - Install ReplicaSet Cleanup Script (Recommended)
When deployments update but the provider is out of resources, Kubernetes won’t destroy old pods until new ones are created. This can cause deployments to get stuck.
This script automatically removes old ReplicaSets when new ones fail due to insufficient resources.
See GitHub Issue #82 for more details.
Create the Script
On the control plane node, create /usr/local/bin/akash-force-new-replicasets.sh:
```bash
cat > /usr/local/bin/akash-force-new-replicasets.sh <<'EOF'
#!/bin/bash
#
# Version: 0.2 - 25 March 2023
# Files:
#  - /usr/local/bin/akash-force-new-replicasets.sh
#  - /etc/cron.d/akash-force-new-replicasets
#
# Description:
# This workaround identifies deployments stuck due to "insufficient resources"
# and removes older ReplicaSets, leaving only the newest one.

kubectl get deployment -l akash.network/manifest-service -A \
  -o=jsonpath='{range .items[*]}{.metadata.namespace} {.metadata.name}{"\n"}{end}' | \
while read ns app; do
  kubectl -n $ns rollout status --timeout=10s deployment/${app} >/dev/null 2>&1
  rc=$?
  if [[ $rc -ne 0 ]]; then
    if kubectl -n $ns describe pods | grep -q "Insufficient"; then
      OLD="$(kubectl -n $ns get replicaset -o json -l akash.network/manifest-service \
        --sort-by='{.metadata.creationTimestamp}' \
        | jq -r '(.items | reverse)[1:][] | .metadata.name')"
      for i in $OLD; do kubectl -n $ns delete replicaset $i; done
    fi
  fi
done
EOF
```

Make Executable
```bash
chmod +x /usr/local/bin/akash-force-new-replicasets.sh
```

Install jq (if not already installed)
```bash
apt -y install jq
```

Create Cron Job
Create /etc/cron.d/akash-force-new-replicasets:
```bash
cat > /etc/cron.d/akash-force-new-replicasets << 'EOF'
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
SHELL=/bin/bash

*/5 * * * * root /usr/local/bin/akash-force-new-replicasets.sh
EOF
```

The script runs every 5 minutes to clean up stuck ReplicaSets.
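The jq expression in the script sorts ReplicaSets by creation time, reverses the list, and drops the first (newest) entry, so only older ReplicaSets are deleted. The same keep-the-newest selection can be demonstrated with plain text tools on mock data:

```shell
# Mock "name creationTimestamp" pairs; the cleanup must spare the newest one.
replicasets='web-6d4b9 2024-01-10T08:00:00Z
web-7f8c2 2024-03-02T12:00:00Z
web-5a1d7 2024-02-15T09:30:00Z'

# Sort oldest-first by timestamp, drop the newest (last line), print names to delete.
echo "$replicasets" | sort -k2 | sed '$d' | awk '{print $1}'
```

Here web-7f8c2 (March) survives while the January and February ReplicaSets are selected for deletion.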
Next Steps
Your provider is now running!
Verify your provider:
- Provider Verification - verify your provider is working correctly
Quick health checks:
- Monitor provider status:

  ```bash
  kubectl -n akash-services get pods
  ```

- Check bids:

  ```bash
  kubectl -n akash-services logs akash-provider-0 | grep bid
  ```

- Watch for leases:

  ```bash
  kubectl -n lease get pods
  ```
Optional enhancements:
- TLS Certificates - Automatic SSL certificates for deployments
- IP Leases - Enable static IPs for deployments
Provider Resources:
- Provider Calculator - Estimate earnings
- Provider Operations - Lease management, monitoring, and maintenance
- Akash Discord - Join the provider community
Additional Configuration
Update Provider Attributes
To update your provider attributes:
1. Edit /root/provider/provider.yaml
2. Upgrade the Helm release:

```bash
cd /root/provider

helm upgrade akash-provider akash/provider \
  -n akash-services \
  -f provider.yaml \
  --set bidpricescript="$(cat price_script.sh | openssl base64 -A)"
```

Update Pricing
Edit the price_target_* values in provider.yaml and run the upgrade command above.
Check Provider Version
```bash
kubectl -n akash-services get pod akash-provider-0 -o yaml | grep image:
```
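To pull just the version tag out of that output, shell parameter expansion is enough. A sketch (the image line below is illustrative, not necessarily the exact image your cluster runs):

```shell
# Strip everything up to and including the last colon to leave only the tag.
line='image: ghcr.io/akash-network/provider:0.6.4'
version="${line##*:}"
echo "$version"
```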