Part 2 — run this after you finish Provider Installation (Preparation): keys, provider.yaml, DNS, NGF, and Let's Encrypt.
Time: 15–30 minutes (plus image pulls)
STEP 1 - Install akash-gateway
The akash-gateway chart creates the Gateway resource, TCPRoutes, and HTTPS listeners. Install it with your complete provider.yaml from prep, STEP 6 (Helm only uses the values this chart needs, including domain).
```
cd /root/provider
helm install akash-gateway akash/akash-gateway -n akash-gateway --create-namespace -f provider.yaml
```
If you are not using a file yet, you can use --set "domain=yourdomain.com" once; a single provider.yaml is recommended so the domain is identical for the provider install (STEP 3).
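For orientation, the shared provider.yaml might contain entries like the following sketch. The values are illustrative only; domain is the key referenced above, and attributes mirrors the on-chain attributes shown in STEP 5 — consult the chart's values for the full schema:

```yaml
# Illustrative fragment — adjust every value to your own provider.
domain: yourdomain.com     # same domain for akash-gateway and the provider chart
attributes:                # example on-chain attributes (see STEP 5 output)
  - key: region
    value: us-west
  - key: tier
    value: community
```

Keeping both charts pointed at this one file avoids a domain mismatch between the gateway listeners and the provider's advertised host_uri.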
Verify the Gateway and TCP routes
```
kubectl -n akash-gateway get gateway akash-gateway
kubectl -n akash-services get tcproutes
```
PROGRAMMED should become True, and TCP route names for the provider should appear. For end-to-end HTTPS, see the End-to-end HTTPS test in prep (STEP 9) to confirm Let's Encrypt on the wire.
STEP 2 - Install Akash Operators
Install Hostname Operator
```
helm install akash-hostname-operator akash/akash-hostname-operator \
  -n akash-services
```
Install Inventory Operator
Without persistent storage:
```
helm install inventory-operator akash/akash-inventory-operator \
  -n akash-services \
  --set inventoryConfig.cluster_storage[0]=default \
  --set inventoryConfig.cluster_storage[1]=ram
```
With persistent storage (adjust beta3 to your storage class):
```
helm install inventory-operator akash/akash-inventory-operator \
  -n akash-services \
  --set inventoryConfig.cluster_storage[0]=default \
  --set inventoryConfig.cluster_storage[1]=beta3 \
  --set inventoryConfig.cluster_storage[2]=ram
```
Note:
- Index 0 is always default (ephemeral storage)
- Index 1 is ram (SHM/shared memory) if no persistent storage, or your persistent storage class (beta1/beta2/beta3)
- Index 2 is ram (SHM/shared memory) if you have persistent storage
- All providers should support SHM for deployments requiring shared memory
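The index-based --set flags above can equivalently live in a values file, which is easier to keep in version control. A sketch for the persistent-storage case (the inventoryConfig.cluster_storage key is taken directly from the commands above; the filename is arbitrary):

```yaml
# inventory-values.yaml — mirrors the persistent-storage install above
inventoryConfig:
  cluster_storage:
    - default   # index 0: always ephemeral storage
    - beta3     # index 1: your persistent storage class
    - ram       # index 2: SHM / shared memory
```

Install with helm install inventory-operator akash/akash-inventory-operator -n akash-services -f inventory-values.yaml.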
Apply Provider CRDs
```
kubectl apply -f https://raw.githubusercontent.com/akash-network/provider/main/pkg/apis/akash.network/crd.yaml
```
STEP 3 - Install Provider
```
cd /root/provider
helm install akash-provider akash/provider \
  -n akash-services \
  -f provider.yaml \
  --set bidpricescript="$(cat price_script.sh | openssl base64 -A)"
```
Verify Provider
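The bidpricescript value is your pricing script base64-encoded onto a single line, which is what openssl base64 -A produces. As a quick sanity check before installing or upgrading the release, you can verify the encoding round-trips; this sketch uses a hypothetical stand-in script and assumes openssl is installed:

```shell
# Hypothetical stand-in for your real price_script.sh
printf '#!/bin/bash\necho 1.25\n' > /tmp/price_script_demo.sh

# Encode to a single base64 line, exactly as the Helm flag above does
ENCODED="$(openssl base64 -A < /tmp/price_script_demo.sh)"
echo "$ENCODED"

# Decode it back to confirm the round trip is lossless
printf '%s' "$ENCODED" | openssl base64 -d -A
```

If the decoded output does not match your script byte-for-byte, the provider will run a corrupted pricing script and bids may fail silently.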
```
kubectl -n akash-services get pods
```
Expected output:
```
NAMESPACE        NAME                                          READY   STATUS    RESTARTS   AGE
akash-services   akash-node-1-0                                1/1     Running   0          4d1h
akash-services   akash-provider-0                              1/1     Running   0          47h
akash-services   operator-hostname-79fbbffbb7-xxxxx            1/1     Running   0          47h
akash-services   operator-inventory-7bb766f7bb-xxxxx           1/1     Running   0          39m
akash-services   operator-inventory-hardware-discovery-node1   1/1     Running   0          38m
```
All pods should show Running status and 1/1 ready.
STEP 4 - Verify Gateway and Ingress
Traffic for the provider is handled by the Gateway API stack: NGF from prep STEP 8 and the akash-gateway chart from STEP 1 of this guide. No separate ingress controller is required.
After the provider is running, verify the Gateway and TCPRoutes:
```
kubectl -n akash-gateway get gateway akash-gateway
kubectl -n akash-services get tcproutes
```
The Gateway should show PROGRAMMED: True. The TCPRoutes akash-provider-8443, akash-provider-8444, and akash-provider-5002 should be listed and route traffic to the now-running provider.
STEP 5 - Verify Provider On-Chain
Check Provider Status
```
provider-services query provider get $ACCOUNT_ADDRESS \
  --node https://rpc.akashnet.net:443
```
Expected output:
```
attributes:
- key: region
  value: us-west
- key: host
  value: akash
- key: tier
  value: community
host_uri: https://provider.example.com:8443
info:
  email: ""
  website: ""
owner: akash1...
```
Check Provider Logs
```
kubectl -n akash-services logs -f akash-provider-0
```
Look for messages indicating the provider is bidding on deployments.
STEP 6 - Verify Firewall
Ensure these ports are open on your provider’s public IP:
Required:
- 80/tcp - HTTP
- 443/tcp - HTTPS
- 8443/tcp - Provider endpoint
- 8444/tcp - Provider gRPC
- 5002/tcp - Provider Let's Encrypt
- 30000-32767/tcp - Kubernetes NodePort range
Optional (if using external access):
- 6443/tcp - Kubernetes API
Test connectivity:
```
# From an external machine
curl -k https://provider.example.com:8443/status
```
STEP 7 - Install ReplicaSet Cleanup Script (Recommended)
When deployments update but the provider is out of resources, Kubernetes won’t destroy old pods until new ones are created. This can cause deployments to get stuck.
This script automatically removes old ReplicaSets when new ones fail due to insufficient resources.
See GitHub Issue #82 for more details.
Create the Script
On the control plane node, create /usr/local/bin/akash-force-new-replicasets.sh:
```
cat > /usr/local/bin/akash-force-new-replicasets.sh <<'EOF'
#!/bin/bash
#
# Version: 0.2 - 25 March 2023
# Files:
# - /usr/local/bin/akash-force-new-replicasets.sh
# - /etc/cron.d/akash-force-new-replicasets
#
# Description:
# This workaround identifies deployments stuck due to "insufficient resources"
# and removes older ReplicaSets, leaving only the newest one.

kubectl get deployment -l akash.network/manifest-service -A -o=jsonpath='{range .items[*]}{.metadata.namespace} {.metadata.name}{"\n"}{end}' | while read ns app; do
  kubectl -n $ns rollout status --timeout=10s deployment/${app} >/dev/null 2>&1
  rc=$?
  if [[ $rc -ne 0 ]]; then
    if kubectl -n $ns describe pods | grep -q "Insufficient"; then
      OLD="$(kubectl -n $ns get replicaset -o json -l akash.network/manifest-service --sort-by='{.metadata.creationTimestamp}' | jq -r '(.items | reverse)[1:][] | .metadata.name')"
      for i in $OLD; do
        kubectl -n $ns delete replicaset $i
      done
    fi
  fi
done
EOF
```
Make Executable
```
chmod +x /usr/local/bin/akash-force-new-replicasets.sh
```
Install JQ (if not already installed)
```
apt -y install jq
```
Create Cron Job
Create /etc/cron.d/akash-force-new-replicasets:
```
cat > /etc/cron.d/akash-force-new-replicasets << 'EOF'
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
SHELL=/bin/bash

*/5 * * * * root /usr/local/bin/akash-force-new-replicasets.sh
EOF
```
The script runs every 5 minutes to clean up stuck ReplicaSets.
Next Steps
Your provider is now running!
Verify your provider:
- → Provider Verification - Verify your provider is working correctly
Quick health checks:
- Monitor provider status:
```
kubectl -n akash-services get pods
```
- Check bids:
```
kubectl -n akash-services logs akash-provider-0 | grep bid
```
- Watch for leases:
```
kubectl -n lease get pods
```
Optional follow-ups:
- IP Leases — static IPs for deployments
Provider Resources:
- Provider Calculator - Estimate earnings (provider payouts are in AKT)
- Provider Operations - Lease management, monitoring, and maintenance
- Akash Discord - Join the provider community
Additional Configuration
Update Provider Attributes
To update your provider attributes:
- Edit /root/provider/provider.yaml
- Upgrade the Helm release:
```
cd /root/provider
helm upgrade akash-provider akash/provider \
  -n akash-services \
  -f provider.yaml \
  --set bidpricescript="$(cat price_script.sh | openssl base64 -A)"
```
Update Pricing
Edit the price_target_* values in provider.yaml and run the upgrade command above.
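As a sketch, the pricing section of provider.yaml typically looks like the following. The price_target_* prefix comes from the text above, but the exact suffixes, units, and numbers here are assumptions — verify them against your price_script.sh and the chart's values:

```yaml
# Illustrative only — confirm key names and units against your price_script.sh.
price_target_cpu: 1.60          # target price per CPU unit
price_target_memory: 0.80       # target price per GB of memory
price_target_hd_ephemeral: 0.02 # target price per GB of ephemeral storage
```

Lowering these values makes your bids more competitive; raising them increases revenue per lease but wins fewer bids.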
Check Provider Version
```
kubectl -n akash-services get pod akash-provider-0 -o yaml | grep image:
```