An Akash Provider leases compute to users launching new deployments. Follow the steps in this guide to build your own provider.
This guide uses a single Kubernetes control plane node.
Overview and links to the steps involved in Akash Provider Build:
- Prerequisites of an Akash Provider
- Kubernetes Configurations
- Export Provider Wallet
- Helm Installation on Kubernetes Node
- Domain Name Review
- Hostname Operator Build
- Provider Build via Helm Chart
- Provider Bid Customization
- Ingress Controller Install
- Firewall Rule Review
- Disable Unattended Upgrades
- Provider Whitelisting (Optional)
STEP 1 - Prerequisites of an Akash Provider
NOTE - the commands in this section and in all remaining sections of this guide assume that the `root` user is used. For ease we suggest using the `root` user for the Kubernetes and Akash Provider install. If a non-root user is used instead, minor command adjustments may be necessary, such as adding `sudo` prefixes and updating the home directory in command syntax.
Placing a bid on an order requires a 5 AKT deposit placed into collateral per bid won. If the provider wanted two concurrent leases, the provider's wallet would need minimum funding of 10 AKT.
As every deployment bid requires 5 AKT to be deposited in the escrow account, it is always good to have more so your provider can keep bidding. If your provider is ready to host 10 deployments, it is best to have 5 x 10 = 50 AKT, plus a little more to make sure the provider can pay the fees for broadcasting transactions on the Akash Network.
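The funding rule above can be sketched as a quick calculation (the 5 AKT fee margin is an illustrative figure, not a protocol value):

```shell
# Bid collateral is 5 AKT per concurrent lease; add a margin for gas fees.
LEASES=10              # target number of concurrent deployments
COLLATERAL_PER_LEASE=5 # AKT locked in escrow per bid
FEE_MARGIN=5           # illustrative extra for transaction fees
echo "Minimum funding: $(( LEASES * COLLATERAL_PER_LEASE + FEE_MARGIN )) AKT"
# prints: Minimum funding: 55 AKT
```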
The steps to create an Akash wallet are covered in the following documentation sections:
A full Kubernetes cluster is required, with outbound internet access, and must be reachable from the internet.
If you need assistance in building a new cluster, visit the Kubernetes Cluster for Akash Providers guide.
Akash Providers need to run their own blockchain RPC node to remove dependence on public nodes. This is a strict requirement.
We have recently released documentation guiding you through the process of building an RPC node via Helm Charts with state sync.
Only x86_64 processors are officially supported by Akash for provider Kubernetes nodes at this time. This may change in the future and when ARM processors are supported it will be announced and documented.
Custom Kubernetes Cluster Settings
Akash Providers are deployed in many environments and we will make additions to these sections as nuances are discovered.
Disable Search Domains
In this section we perform the following DNS adjustments:
Set Use Domains to False
Set `use-domains: false` to prevent the possibility of systemd's DHCP client overwriting the DNS search domain. This prevents a potentially bad domain served by the DHCP server from becoming active.
- This is a common issue to some of the providers which is explained in more detail here
Set Accept RA to False
Set `accept-ra: false` to disable IPv6 Router Advertisement (RA), as the DNS search domain may still leak through if not disabled.
- Potential issue this addresses is explained in more detail here
NOTE - the DNS resolution issue & the Netplan fix addressed in this step are described here
Apply the following to all Kubernetes control plane and worker nodes.
IMPORTANT - Make sure you do not have any other config files under the `/etc/netplan` directory, otherwise they could cause unexpected networking issues or issues with booting up your node.
If you aren't using DHCP or want to add additional configuration, please refer to the netplan documentation here for additional config options.
Note that this is only an example of the netplan configuration file to show you how to disable the DNS search domain overriding and IPv6 Router Advertisement (RA). Do not blindly copy the entire config but rather use it as a reference for your convenience!
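As a reference, a minimal netplan sketch that sets both options might look like the following; the `eth0` interface name and DHCP usage are assumptions, and the file is written to `/tmp` here so nothing on the host is touched:

```shell
# Illustrative netplan fragment only - merge the relevant keys into your real
# /etc/netplan/ config rather than copying this file wholesale.
cat > /tmp/netplan-akash-example.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eth0:                   # assumed interface name - check "ip link"
      dhcp4: true
      accept-ra: false      # block IPv6 Router Advertisements
      dhcp4-overrides:
        use-domains: false  # ignore DHCP-served DNS search domains
EOF
```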
Test and Apply Netplan
Test the Netplan config and apply via these commands.
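A sketch of the validate-then-apply sequence, guarded so it is a no-op on hosts without netplan:

```shell
# "netplan generate" validates the config; "netplan apply" activates it.
# On remote hosts prefer "netplan try", which auto-reverts if unconfirmed.
if command -v netplan >/dev/null 2>&1; then
  netplan generate && netplan apply
else
  echo "netplan not installed on this host"
fi
```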
STEP 2 - Kubernetes Configurations
Create Provider namespaces on your Kubernetes cluster.
Run these commands from a Kubernetes control plane node which has kubectl access to the cluster.
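A sketch of the namespace setup, using the `akash-services` and `lease` namespaces from the upstream Helm charts (the label keys mirror the upstream guide and may evolve between chart versions):

```shell
# Create the namespaces the provider components and tenant leases run in.
if command -v kubectl >/dev/null 2>&1; then
  kubectl create ns akash-services
  kubectl label ns akash-services akash.network/name=akash-services akash.network=true
  kubectl create ns lease
  kubectl label ns lease akash.network=true
else
  echo "kubectl not found - run on a control plane node with cluster access"
fi
```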
STEP 3 - Export Provider Wallet
In this section we will export the pre-existing, funded wallet to store the private key in a local file. To conduct the commands in this section the Akash CLI must be installed which is detailed in this guide (STEP 1 only).
The wallet will be used for the following purposes:
- Pay for provider transaction gas fees
- Pay for bid collateral which is discussed further in this section
Make sure to create a new Akash account for the provider and do not reuse an account used for deployment purposes. Bids will not be generated from your provider if the deployment orders are created with the same key as the provider.
List Available Keys
- Print the key names available in the local OS keychain for use in the subsequent step
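A sketch of the listing command, assuming the `provider-services` CLI from the prerequisites is on the PATH:

```shell
# Prints the name, type, and address of each key in the local OS keychain.
if command -v provider-services >/dev/null 2>&1; then
  provider-services keys list
else
  echo "provider-services CLI not installed - see the prerequisites"
fi
```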
NOTE - in this example the provider key name is `default`, and this key name will be used in the subsequent sections of this documentation. Please adjust the key name as necessary to suit your needs and preferences.
Export Private Key to Local File
- The key-name can be any name of your choice
- Note the passphrase used to protect the private key as it will be used in future steps
NOTE - the passphrase MUST be at least 8 characters long. Otherwise the provider will encounter a `failed to decrypt private key: ciphertext decryption failed` error when `keys import` is executed.
STEP 1 - Export Provider Key
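A sketch of the export command, using the key name `default` as elsewhere in this guide:

```shell
# Exports the armored, passphrase-protected private key for the "default" key.
# You will be prompted for a passphrase - it must be at least 8 characters.
if command -v provider-services >/dev/null 2>&1; then
  provider-services keys export default
else
  echo "provider-services CLI not installed - see the prerequisites"
fi
```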
STEP 2 - Create key.pem and Copy Output Into File
- Create a `key.pem` file
- Copy the output of the prior command (`provider-services keys export default`) into the `key.pem` file
NOTE - the file should contain only what's between `-----BEGIN TENDERMINT PRIVATE KEY-----` and `-----END TENDERMINT PRIVATE KEY-----` (including the `BEGIN` and `END` lines)
Example/Expected File Contents
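The armored key follows the Cosmos SDK key-armor layout; the skeleton below uses placeholder values only (never commit or share real key material):

```shell
# Placeholder illustration of key.pem structure - these are NOT real values.
cat > /tmp/key.pem.example <<'EOF'
-----BEGIN TENDERMINT PRIVATE KEY-----
kdf: bcrypt
salt: 0123456789ABCDEF0123456789ABCDEF
type: secp256k1

PLACEHOLDER-BASE64-ENCRYPTED-KEY-MATERIAL=
-----END TENDERMINT PRIVATE KEY-----
EOF
```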
STEP 4 - Helm Installation on Kubernetes Node
Install Helm on a Kubernetes Master Node
Confirmation of Helm Install
Print Helm Version
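A sketch of the install-and-verify sequence, using Helm's official installer script:

```shell
# Install Helm 3 if missing, then print the client version to confirm.
if ! command -v helm >/dev/null 2>&1; then
  curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
fi
helm version --short || echo "helm is not available yet"
```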
Step 5 - Domain Name Review
Add DNS (type A) records for your Akash Provider related domains on your DNS hosting provider.
Akash Provider Domain Records
- Replace yourdomain.com with your own domain name
- DNS (type A) records should point to public IP address of a single Kubernetes worker node of your choice
NOTE - do not use Cloudflare or any other TLS proxy solution for your Provider DNS A records.
NOTE - Instead of the multiple DNS A records for worker nodes, consider using CNAME DNS records such as the example provided below. CNAME use allows ease of management and introduces higher availability.
```
*.ingress   300  IN  CNAME  nodes.yourdomain.com.
nodes       300  IN  A      x.x.x.x
nodes       300  IN  A      x.x.x.x
nodes       300  IN  A      x.x.x.x
provider    300  IN  CNAME  nodes.yourdomain.com.
```
Example DNS Configuration
Step 6 - Hostname Operator Build
- Run the following command to build the Kubernetes hostname operator
- Note - if a need arises to use a different software version other than the one defined in the Chart.yaml Helm file - include the following switch. In most circumstances this should not be necessary.
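A sketch of the operator install, using the chart names from the upstream akash-network/helm-charts repository (the commented `--set image.tag` line shows where the optional version switch would go; the tag value is a placeholder):

```shell
if command -v helm >/dev/null 2>&1; then
  helm repo add akash https://akash-network.github.io/helm-charts
  helm repo update
  helm install akash-hostname-operator akash/akash-hostname-operator \
    -n akash-services
  # add: --set image.tag=<version>   # only if you must pin a non-default version
else
  echo "helm not found - complete STEP 4 first"
fi
```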
Hostname Operator Confirmation
Expected output (example and name following akash-provider will differ)
```
root@node1:~# kubectl get pods -n akash-services
NAME                                       READY   STATUS    RESTARTS   AGE
akash-hostname-operator-84977c6fd9-qvnsm   1/1     Running   0          3m29s
```
STEP 7 - Provider Build via Helm Chart
In this section the Akash Provider will be installed and customized via the use of Helm Charts.
NOTE - when the Helm Chart is installed the Provider instance/details will be created on the blockchain and your provider will be registered in the Akash open cloud marketplace. The associated transaction for Provider creation is detailed here.
- Declare the following environment variables for Helm use
- Replace the variables with your own settings
1. Set the Akash provider address, which starts with `akash`. This allows the akash-provider to decrypt the key
2. Set the password you entered during `provider-services keys export default > key.pem`
3. Set your domain. Register the DNS A and wildcard records as specified in the previous step, i.e. the `provider.test.com` DNS A record and the `*.ingress.test.com` DNS wildcard record.
Domain should be a publicly accessible DNS name dedicated for your provider use such as test.com.
The domain specified in this variable will be used by Helm during the Provider chart install process to produce the “provider.yourdomain.com” sub-domain name and the “ingress.yourdomain.com” sub-domain name. The domain specified will also be used by Helm during the Ingress Controller install steps coming up in this guide. Once your provider is up and running the *.ingress.yourdomain.com URI will be used for web app deployments such as abc123.ingress.yourdomain.com.
4. Set the Akash RPC node for your provider to use
- If you are going to deploy an Akash RPC node using Helm Charts, set the node to `http://akash-node-1:26657`. It is recommended that you install your own Akash RPC node. Follow this guide to do so.
Ensure that the RPC node utilized is in sync prior to proceeding with the provider build.
NOTE - in the example provided below the NODE variable is set to `akash-node-1`, which is the Kubernetes service name of the RPC node when installed via Helm. Use `kubectl get svc -n akash-services` to confirm the service name and status.
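Collecting the four values above into environment variables might look like this; every value shown is a placeholder, and the variable names follow the upstream guide's convention:

```shell
# Placeholders - replace every value with your own before use.
export ACCOUNT_ADDRESS=akash1yourprovideraddress   # provider wallet address
export KEY_PASSWORD=yourkeypassphrase              # passphrase set during key export
export DOMAIN=yourdomain.com                       # public DNS name from Step 5
export NODE=http://akash-node-1:26657              # RPC node (in-cluster service here)
echo "$ACCOUNT_ADDRESS $DOMAIN $NODE"
```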
Provider Withdraw Period
- Akash providers may dictate how often they withdraw funds consumed from active deployments'/tenants' escrow accounts
- A few things to consider regarding the provider withdraw period:
- The default withdraw setting in the Helm Charts is one (1) hour
- An advantage of the one hour default setting is assurance that a deployment cannot overdraw the escrow account dramatically. If the withdraw period were set to 12 hours instead, the deployment could exhaust the amount in escrow in one hour (for example), but the provider would not calculate this until many hours later, and the deployment would essentially operate for free in the interim.
- A disadvantage of frequent withdrawals is the possibility of losing profitability to the fees incurred by the provider's withdrawal transactions. If the provider hosts primarily low-resource workloads, it is very possible that fees could exceed deployment cost/profit.
OPTIONAL - Update the Provider Withdraw Period
- If it is desired to change the withdrawal period from the default one hour setting, update the `withdrawalperiod` setting in the provider.yaml file created subsequently in this section.
- In the example in the Provider Build section of this doc the withdrawal period has been set to 12 hours. Please adjust as preferred.
Provider Build Prep
- Ensure you are applying the latest version of subsequent Helm Charts install/upgrade steps
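Refreshing the local copy of the Akash Helm charts can be sketched as:

```shell
# Add (or re-add) the Akash chart repo and pull the latest chart index.
if command -v helm >/dev/null 2>&1; then
  helm repo add akash https://akash-network.github.io/helm-charts
  helm repo update
else
  echo "helm not found - complete STEP 4 first"
fi
```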
Create a provider.yaml File
- Issue the following command to build your Akash Provider
- Update the following keys for your unique use case
- Optional Parameters - the following parameters may be added at the same level as `key` if you wish to advertise your support email address and company website URL.
Example provider.yaml File Creation
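A sketch of the file creation, assuming the environment variables declared earlier in this step and a `key.pem` in the current directory; attribute values such as region and organization are placeholders to adjust:

```shell
# Writes provider.yaml; key and keysecret are base64-encoded for the chart.
cat > provider.yaml <<EOF
---
from: "$ACCOUNT_ADDRESS"
key: "$(cat key.pem 2>/dev/null | openssl base64 -A)"
keysecret: "$(printf '%s' "$KEY_PASSWORD" | openssl base64 -A)"
domain: "$DOMAIN"
node: "$NODE"
withdrawalperiod: 12h
attributes:
  - key: region
    value: "us-west"           # placeholder - set your actual region
  - key: host
    value: akash
  - key: organization
    value: "yourorganization"  # placeholder
EOF
```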
Verification of provider.yaml File
- Issue the following commands to verify the `provider.yaml` file created in previous steps
Example provider.yaml Verification Output
- Ensure there are no empty values
Provider Bid Defaults
- When a provider is created the default bid engine settings are used. If desired, these settings could be updated and added to the `provider.yaml` file. But we would recommend initially using the default values.
- Note - the `bidpricestoragescale` value will get overridden by `-f provider-storage.yaml`, covered in the Provider Persistent Storage documentation.
- Note - if you want to use a shell-script bid price strategy, pass the bid price script via the `bidpricescript` variable detailed in the bid pricing script doc. This will automatically suppress all `bidprice*` settings.
Provider CRD Installations
- Kubernetes CRDs are no longer delivered via the Helm chart as of chart version
- CRDs are now installed manually using this step.
NOTE - You do not need to run this command if you previously installed the Akash Provider and are now performing an upgrade.
Install the Provider Helm Chart
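The chart install itself can be sketched as follows, with chart name `akash/provider` and release name `akash-provider` per the upstream repo:

```shell
# Install the provider chart against the provider.yaml built above.
if command -v helm >/dev/null 2>&1; then
  helm install akash-provider akash/provider -n akash-services -f provider.yaml
  kubectl get pods -n akash-services   # watch for the akash-provider pod
else
  echo "helm not found - complete STEP 4 first"
fi
```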
Expected Output of Provider Helm Install
Expected output (example and name following akash-provider will differ)
If your Akash Provider pod status displays `init:0/1` for a prolonged period of time, use the following command to view the init container logs. Often the Provider may have an RPC issue and this should be revealed in these logs. RPC issues may be caused by an incorrect NODE variable declaration issued previously in this section, or possibly your custom RPC node is not in sync.
Helm Chart Uninstall Process
- Should a need arise to uninstall the Helm Chart and attempt the process anew, the following step can be used
- Only conduct this step if there is a problem with Akash Provider Helm Chart install
- This Helm uninstall technique can be used for this or any subsequent chart installs
- Following this step - if needed - start the Provider Helm Chart install anew via the prior step in this page
STEP 8 - Provider Bid Customization
NOTE - if you are updating your provider bid script from a previous version use this bid script migration guide.
- If there is a desire to manipulate the provider bid engine, include the `--set bidpricescript` switch. The pricing script used in this switch is detailed in the Akash Provider Bid Pricing section of this guide.
- Note - when the provider deployment is created, the bid script should return the price in under 5 seconds. If the script does not execute in this time period, the following error message will be seen in the provider pod logs. Such a report would suggest that there is an error/issue with script customizations that should be reviewed. Following review and adjustment, uninstall the provider deployment (via helm uninstall) and reinstall.
- Note - there is further discussion on the bid script and deployer address whitelisting in this section.
USDC Stable Payment Support - note that the current, default bid script enables stable payment support on the Akash Provider. Akash deployments using stable payments are taxed at a slightly higher rate than deployments using AKT payment. If you choose not to support stable payments on your provider, remove stable payment support from the default bid script.
Provider Bid Script Customization Steps
STEP 1 - Update provider.yaml File
- If customization of your provider bid pricing is desired, begin by updating the `provider.yaml` file, which will be used to hold the customized values
STEP 2 - Customize the provider.yaml File
- Update the `provider.yaml` file with the price targets you want. If you don't specify these keys, the bid price script will use the default values shown below
- `price_target_gpu_mappings` sets the GPU price in the following way (in the example provided):
  - `a100` NVIDIA models will be charged 120 USD per GPU unit a month
  - `t4` NVIDIA models will be charged 80 USD per GPU unit a month
  - Unspecified NVIDIA models will be charged 130 USD per GPU unit a month (if `*` is not explicitly set in the mapping, it defaults to 100 USD per GPU unit a month)
  - Extend with more models your provider is offering, if necessary, using the syntax `<model>=<USD per GPU unit a month>`
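In `provider.yaml` the mapping from the example above would be expressed as a single key; the prices are the illustrative USD figures from the list:

```shell
# Appends the example GPU price mapping to provider.yaml.
cat >> provider.yaml <<'EOF'
price_target_gpu_mappings: "a100=120,t4=80,*=130"
EOF
```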
STEP 3 - Download Bid Price Script
STEP 4 - Upgrade akash-provider Deployment with Customized Bid Script
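A sketch of the download and upgrade, with the script path following the upstream helm-charts repository layout at the time of writing and the script passed base64-encoded via `bidpricescript`:

```shell
if command -v helm >/dev/null 2>&1; then
  wget -q -O price_script_generic.sh \
    https://raw.githubusercontent.com/akash-network/helm-charts/main/charts/akash-provider/scripts/price_script_generic.sh
  helm upgrade akash-provider akash/provider -n akash-services -f provider.yaml \
    --set bidpricescript="$(openssl base64 -A < price_script_generic.sh)"
else
  echo "helm not found - complete STEP 4 first"
fi
```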
Verification of Bid Script Update
STEP 9 - Ingress Controller Install
Create Upstream Ingress-Nginx Config
- Create the `ingress-nginx-custom.yaml` file via this step
NOTE - in the default install, the dedicated Akash RPC node used for your provider is reachable only within the Kubernetes cluster. This is intentional, as this RPC node is meant for use only by the Akash Provider, which has access to it within the cluster. This additionally protects the RPC node from possible DDoS attacks by external parties. If you have a need to expose the Provider's RPC node to the outside world, use the `ingress-nginx-custom.yaml` file included in this section instead.
Expose RPC Node to Outside World
Use this step only if you choose to expose your Akash Provider RPC Node to the outside world
Install Upstream Ingress-Nginx
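Installing the upstream chart against the custom values file can be sketched as:

```shell
# Install ingress-nginx from its official chart repo with our custom values.
if command -v helm >/dev/null 2>&1; then
  helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  helm repo update
  helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace \
    -f ingress-nginx-custom.yaml
else
  echo "helm not found - complete STEP 4 first"
fi
```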
Apply Necessary Labels
- Label the `ingress-nginx` namespace and the
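A hedged sketch of the labeling commands; the node name `node1` and the exact label keys are assumptions drawn from the upstream guide, so confirm them against the current chart docs:

```shell
if command -v kubectl >/dev/null 2>&1; then
  kubectl label ns ingress-nginx app.kubernetes.io/name=ingress-nginx \
    app.kubernetes.io/instance=ingress-nginx
  kubectl label nodes node1 akash.network/role=ingress  # node1 = your ingress node
else
  echo "kubectl not found - run on a control plane node"
fi
```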
Step 10 - Firewall Rule Review
External/Internet Firewall Rules
The following firewall rules are applicable to internet-facing Kubernetes components.
Step 11 - Disable Unattended Upgrades
Unattended upgrades can bring all sorts of uncertainty and trouble, such as updates of NVIDIA drivers, and have the potential to affect your Provider/K8s cluster. The impact of unattended upgrades can include:
- `nvidia-smi` will hang on the host/pod
- the `nvdp plugin` will become stuck, and hence the K8s cluster will run in an undesired state where closed deployments will be stuck in
Disable Unattended Upgrades
To disable unattended upgrades, execute these two commands on your Kubernetes worker & control plane nodes:
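One common way to do this on Ubuntu is sketched below; the file path and unit name are standard Ubuntu locations, but verify them on your distribution before running:

```shell
# Flip the periodic upgrade flags from "1" to "0" and stop the service.
if [ -f /etc/apt/apt.conf.d/20auto-upgrades ]; then
  sed -i 's/"1"/"0"/g' /etc/apt/apt.conf.d/20auto-upgrades
  systemctl disable --now unattended-upgrades.service
else
  echo "20auto-upgrades not found - unattended upgrades may not be enabled"
fi
```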
These commands should output `0` following the disabling of unattended upgrades. Conduct these verifications on your Kubernetes worker & control plane nodes:
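The verification can be sketched as follows; each listed flag should show `"0"`:

```shell
# Dump the periodic-upgrade flags; both should read "0" when disabled.
if command -v apt-config >/dev/null 2>&1; then
  apt-config dump | grep -E 'APT::Periodic::(Update-Package-Lists|Unattended-Upgrade)'
else
  echo "apt-config not available on this host"
fi
```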
STEP 12 - Provider Whitelisting (Optional)
- Akash Provider deployment address whitelist functionality is now enabled in the bid price script
- To use it, simply specify the list via the `whitelist_url` attribute as detailed in this section
- Complete the steps in this section to enable/customize Akash Provider Whitelisting
Update the Akash Helm-Charts Repo
Verify Akash/Provider Helm Chart is 4.3.4 Version or Higher
Download Bid Price Script Which Supports Whitelisting
USDC Stable Payment Support - note that the current, default bid script downloaded in this step enables stable payment support on the Akash Provider. Akash deployments using stable payments are taxed at a slightly higher rate than deployments using AKT payment. If you choose not to support stable payments on your provider, remove stable payment support from the default bid script.
Prepare the Whitelist
- Example whitelist hosted on GitHub Gist can be found here
Specify the Bid Price Script and Whitelist URL
NOTE - the whitelist will only work when `bidpricescript` is also set.
NOTE - you need to specify the direct link to the whitelist (with GitHub Gist, you need to click the Raw button to get it)
STEP 13 - Extras
Force New ReplicaSet Workaround
A known issue exists which occurs when a deployment update is attempted and fails due to the provider being out of resources. This happens because K8s won't destroy an old pod instance until it ensures the new one has been created.
Follow the steps in the Force New ReplicaSet Workaround document to address this issue.
Kill Zombie Processes
A known issue exists which occurs when a tenant creates a deployment which doesn't handle child processes properly, leaving defunct (aka zombie) processes behind. These could potentially occupy all available process slots.
Follow the steps in the Kill Zombie Processes document to address this issue.