Migrate to HCP Vault Dedicated with codified configuration
Challenge
As part of a cloud adoption strategy, you will inevitably face the need to migrate existing self-hosted infrastructure and use cases to cloud platforms.
For example, as the operator of a self-hosted acceptance testing or development Vault cluster running on the community edition, you might find the need to test use cases in a cloud-hosted Vault cluster with enterprise features.
HCP Vault Dedicated provides a hosted Vault Enterprise solution that is highly scalable and offers enterprise features, such as Performance Replication.
Solution
A popular approach to managing Vault configuration, codification is managing the lifecycle of a Vault server or cluster's configuration and state as code. Most often, this is implemented with Terraform, and the Vault Provider.
You can learn more about Vault configuration codification with Terraform in the Codify Management of Vault Using Terraform, Codify Management of Vault Enterprise Using Terraform, and Codify Management of HCP Vault Dedicated tutorials.
If you follow the principles covered in codifying Vault configuration with the Vault Provider, you can use Terraform to apply your Vault cluster configuration from a self-hosted community edition Vault server to a Vault Dedicated cluster.
Scenario introduction
In this tutorial, you will use terminal sessions and the command line to start a self-hosted dev mode Vault server and provision it with a codified configuration using Terraform and the Vault Provider.
After demonstrating the local Vault server configuration with username and password authentication and retrieval of key/value secrets using the related token policies, you will update your local environment to prepare for application of the configuration to your Vault Dedicated cluster.
You will then apply the modified Terraform configuration to the Vault Dedicated cluster. After it is applied, you will validate the configuration again in Vault Dedicated.
Prerequisites
A Linux or macOS development host you use to perform most of the tasks that make up the lab (this lab was last tested on macOS 11.6.5)
HCP Vault Dedicated cluster; you can deploy the cluster with the steps in Deploy HCP Vault Dedicated with Terraform or Create a Vault Cluster on HCP.
- This tutorial uses a Vault Dedicated cluster deployed with Terraform, and the Vault Provider configuration found in the learn-manage-codified-hcp-vault-terraform repository.
- To successfully follow along with this tutorial, you must enable the public interface on your Vault Dedicated cluster. This configuration is covered in both the Deploy HCP Vault Dedicated with Terraform and Create a Vault Cluster on HCP tutorials.
AWS S3 bucket with server-side encryption enabled (optional)
- This lab optionally uses an AWS Key Management Service (KMS) key to encrypt the S3 bucket contents.
- Terraform needs kms:Encrypt, kms:Decrypt, and kms:GenerateDataKey permissions on the KMS key. Review the Terraform S3 State Storage documentation for more details.
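These KMS permissions can be granted with an IAM policy attached to the identity Terraform uses. The following is a minimal sketch; the account ID and key ARN are hypothetical placeholders, not values from this lab.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TerraformStateKMS",
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:GenerateDataKey"
      ],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/1a1a1a1a-0a0a-1b1b-2c2c-3c3c3c3c3c3c"
    }
  ]
}
```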
Tip
You can use HCP Terraform instead of the AWS S3 bucket and KMS key to avoid the need for encrypted local state altogether.
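If you choose HCP Terraform for state storage, you would replace the S3 backend block in main.tf with a cloud block. This is a minimal sketch; the organization and workspace names are hypothetical.

```hcl
terraform {
  cloud {
    # Hypothetical organization and workspace names
    organization = "example-org"

    workspaces {
      name = "learn-vault-migration"
    }
  }
}
```

HCP Terraform encrypts state at rest, so no separate KMS key is needed.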
Personas
This scenario involves two personas:
The admin persona runs Vault, applies configuration with the Terraform Vault provider, edits the configuration and environment, and applies the configuration to Vault Dedicated.
The student persona uses an auth method to authenticate with Vault and retrieves secrets from a key/value secrets engine.
Policy requirements
For the purposes of this scenario, you will start a local dev mode Vault server and use the initial root token. In production, you should be more restrictive with root token usage and instead create the minimum required ACL policies to complete the scenario tasks.
Here are the required ACL policies for the tasks performed by the admin persona in this scenario.
Admin persona policy
Example admin policy
# Create and manage auth methods.
path "sys/auth/*" {
capabilities = ["create", "update", "delete", "sudo"]
}
path "auth/*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
# List auth methods.
path "sys/auth" {
capabilities = ["read"]
}
# Create and manage tokens.
path "/auth/token/*" {
capabilities = ["create", "update", "delete", "sudo"]
}
# Create and manage ACL policies.
path "sys/policies/acl/*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
# List ACL policies.
path "sys/policies/acl" {
capabilities = ["list"]
}
# Create and manage secrets engines.
path "sys/mounts/*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
# List secrets engines.
path "sys/mounts" {
capabilities = ["read", "list"]
}
# List, create, update, and delete key/value secrets at api-credentials.
path "api-credentials/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
# Manage transit secrets engine.
path "transit/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
# Read Vault health status.
path "sys/health" {
capabilities = ["read", "sudo"]
}
Prepare scenario environment
The scenario consists of two distinct environments:
Your local host: This is where you will run the self-hosted dev mode Vault server and all Terraform CLI commands.
Your HCP account: This is where you have deployed your Vault Dedicated development tier cluster as the target of the migration of the self-hosted Vault server state. You can use Terraform to deploy the Vault cluster or do so manually through the web UI.
For ease of cleanup and simplicity, create a temporary directory named learn-vault-lab that will contain all required configuration for the scenario, and export its path value as the environment variable HC_LEARN_LAB.
$ mkdir /tmp/learn-vault-lab && export HC_LEARN_LAB="/tmp/learn-vault-lab"
Self-Hosted Vault server configuration
It is sufficient for the purposes of this scenario to use a Vault dev mode server to represent your self-hosted Vault.
Start a dev mode Vault server that listens on all interfaces, background the process, store its process ID in the environment variable LEARN_VAULT_PID, and write log output to the file vault-server.log.
Tip
You should have installed the vault CLI binary as a prerequisite to this lab.
$ nohup sh -c "vault server \
-dev \
-dev-root-token-id root \
-dev-listen-address 0.0.0.0:8200 \
> "$HC_LEARN_LAB"/vault-server.log 2>&1" \
> "$HC_LEARN_LAB"/nohup.log &
$ export LEARN_VAULT_PID=$!
Export a VAULT_ADDR environment variable to address the server.
$ export VAULT_ADDR=http://127.0.0.1:8200
Export a VAULT_TOKEN environment variable to contain the initial root token value.
$ export VAULT_TOKEN=root
Check the Vault server status.
$ vault status
Note the Storage Type, Cluster Name, and Cluster ID values as those will be different between the self-hosted Vault server and your Vault Dedicated cluster as you'll observe later.
Vault status output
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 1
Threshold 1
Version 1.10.1
Storage Type inmem
Cluster Name vault-cluster-250fb71a
Cluster ID 6c2f2427-3e95-073b-682d-256b6afe5a96
HA Enabled false
The Vault server is ready.
Validate that your root token is working to provide correct access to Vault.
$ vault token lookup | grep policies
policies [root]
You are ready to proceed with examining and applying the Vault provider configuration with Terraform.
Terraform configuration repository
You can find the Terraform configuration that you will use for this scenario in the learn-hcp-vault-ops GitHub repository.
Clone the repository into the scenario directory.
$ git clone https://github.com/hashicorp-education/learn-hcp-vault-ops \
"${HC_LEARN_LAB}"/learn-hcp-vault-ops
Change into the repository directory and examine the contents.
$ cd "${HC_LEARN_LAB}"/learn-hcp-vault-ops/self-hosted-to-hcp
Terraform Vault provider configuration
Examine the current Terraform configuration.
$ tree
.
├── README.md
├── acl-policies.tf
├── auth-methods.tf
├── main.tf
├── policies
│  ├── admin-policy.hcl
│  └── student-secrets.hcl
├── secrets-engines.tf
├── static-secrets.tf
└── variables.tf
1 directory, 9 files
There is a collection of Vault ACL policies and Terraform configuration present in this directory.
Let's first check out the ACL policies, beginning with the admin policy.
$ cat policies/admin-policy.hcl
The file is commented, with descriptions of each policy.
admin-policy.hcl
#------------------------------------------------------------------------
# Vault Learn lab: Self-hosted to HCP - admins ACL
# Example policy: Admin tasks for auth methods and secrets engines
#------------------------------------------------------------------------
# Create and manage auth methods.
path "sys/auth/*" {
capabilities = ["create", "update", "delete", "sudo"]
}
path "auth/*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
# List auth methods.
path "sys/auth" {
capabilities = ["read"]
}
# Create and manage tokens.
path "/auth/token/*" {
capabilities = ["create", "update", "delete", "sudo"]
}
# Create and manage ACL policies.
path "sys/policies/acl/*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
# List ACL policies.
path "sys/policies/acl" {
capabilities = ["list"]
}
# Create and manage secrets engines.
path "sys/mounts/*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
# List secrets engines.
path "sys/mounts" {
capabilities = ["read", "list"]
}
# List, create, update, and delete key/value secrets at api-credentials.
path "api-credentials/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
# Manage transit secrets engine.
path "transit/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
# Read Vault health status.
path "sys/health" {
capabilities = ["read", "sudo"]
}
Now, examine the student-secrets policy.
$ cat policies/student-secrets.hcl
The policies are focused on allowing access to certain Key/Value secrets and a Transit secrets engine key.
student-secrets.hcl
#------------------------------------------------------------------------
# Vault Learn lab: Self-hosted to HCP - student ACL
# Example policy: Permits CRUD operations on kv-v2 under student path
#------------------------------------------------------------------------
# List, create, update, and delete key/value secrets
# at 'api-credentials/student' path.
path "api-credentials/data/student/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
# Encrypt data with 'payment' key.
path "transit/encrypt/payment" {
capabilities = ["update"]
}
# Decrypt data with 'payment' key.
path "transit/decrypt/payment" {
capabilities = ["update"]
}
# Read and list keys under transit secrets engine.
path "transit/*" {
capabilities = ["read", "list"]
}
# List secrets engines.
path "api-credentials/metadata/*" {
capabilities = ["list"]
}
Let's examine the Terraform configuration to learn about how the self-hosted Vault will be initially provisioned.
First, the main configuration.
$ cat main.tf
You will notice that Terraform can use encrypted state in an Amazon S3 bucket to protect sensitive information such as static secrets. The backend settings use partial configuration, which means that the settings specific to the bucket and KMS key are contained in a configuration file, which you will examine later.
After the Terraform backend configuration, the Vault provider block is present and preceded by a note about using environment variables instead of hard-coding any values into the configuration file itself.
main.tf
#------------------------------------------------------------------------
# Vault Learn lab: Self-hosted to HCP - Terraform Vault Provider
#
# Dev mode Vault server configuration
# Note: S3 bucket is configured with -backend-config
# per https://www.terraform.io/language/settings/backends/configuration#partial-configuration
#------------------------------------------------------------------------
terraform {
backend "s3" {
encrypt = true
}
}
# It is strongly recommended to configure the Vault provider
# by exporting the appropriate environment variables:
# VAULT_ADDR, VAULT_TOKEN, VAULT_CACERT, VAULT_CAPATH, VAULT_NAMESPACE, etc.
provider "vault" {}
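Although environment variables are the recommended way to configure the provider, the same settings can also be set as provider arguments. The following is a sketch only; hard-coding an address and token like this is suitable for a throwaway lab at most, never for shared or production configuration.

```hcl
provider "vault" {
  # Equivalent to exporting VAULT_ADDR and VAULT_TOKEN;
  # shown for illustration only.
  address = "http://127.0.0.1:8200"
  token   = "root"
}
```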
Examine the ACL policy resources.
$ cat acl-policies.tf
There are two policy resources defined here: one for the admin persona, and one for the student persona. Each resource references a separate ACL policy file.
acl-policies.tf
#------------------------------------------------------------------------
# Vault Learn lab: Self-hosted to HCP - ACL policies
#------------------------------------------------------------------------
# Admin capabilities within default namespace
resource "vault_policy" "admin_policy" {
name = "admins"
policy = file("policies/admin-policy.hcl")
}
# Students are admins of kv-v2 secrets engine
# and can also Read and list keys under transit
# + encrypt & decrypt with the 'payment' key
resource "vault_policy" "student_secrets_engines" {
name = "student-secrets"
policy = file("policies/student-secrets.hcl")
}
Examine the auth method resources.
$ cat auth-methods.tf
This configuration enables a username and password auth method, and defines the corresponding users for the admin persona and student persona.
auth-methods.tf
#------------------------------------------------------------------------
# Vault Learn lab: Self-hosted to HCP - Username & password auth method
#------------------------------------------------------------------------
resource "vault_auth_backend" "userpass" {
type = "userpass"
}
# Create a user, 'admin'
resource "vault_generic_endpoint" "admin" {
depends_on = [vault_auth_backend.userpass]
path = "auth/userpass/users/admin"
ignore_absent_fields = true
data_json = <<EOT
{
"policies": ["admins"],
"password": "superS3cret!"
}
EOT
}
# Create a user, 'student'
resource "vault_generic_endpoint" "student" {
depends_on = [vault_auth_backend.userpass]
path = "auth/userpass/users/student"
ignore_absent_fields = true
data_json = <<EOT
{
"policies": ["student-secrets"],
"password": "changeme"
}
EOT
}
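Note that the user passwords are hard-coded in data_json, which is acceptable only in a lab. One approach for real configuration is a sensitive Terraform variable, sketched below; the variable name is hypothetical.

```hcl
variable "student_password" {
  type      = string
  sensitive = true
}

# Create the 'student' user without a literal password in the config
resource "vault_generic_endpoint" "student" {
  depends_on           = [vault_auth_backend.userpass]
  path                 = "auth/userpass/users/student"
  ignore_absent_fields = true

  data_json = jsonencode({
    policies = ["student-secrets"]
    password = var.student_password
  })
}
```

You would then supply the value with a TF_VAR_student_password environment variable or a prompt, keeping it out of version control.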
Examine the secrets engines resources.
$ cat secrets-engines.tf
This configuration enables a Key/Value secrets engine (version 2), enables an instance of the Transit secrets engine, and creates a Transit secrets engine encryption key.
secrets-engines.tf
#------------------------------------------------------------------------
# Vault Learn lab: Self-hosted to HCP - Secrets engines
#------------------------------------------------------------------------
# Enable K/V v2 secrets engine at the path 'api-credentials'
resource "vault_mount" "kv-v2" {
path = "api-credentials"
type = "kv-v2"
}
# Enable Transit secrets engine at the path 'transit'
resource "vault_mount" "transit" {
path = "transit"
type = "transit"
}
# Creating Transit secrets engine encryption key named 'payment'
resource "vault_transit_secret_backend_key" "key" {
depends_on = [vault_mount.transit]
backend = "transit"
name = "payment"
deletion_allowed = true
}
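The static-secrets.tf file is not shown above, but a kv-v2 secret written with the Vault provider generally looks like the following sketch; the secret name and data here are hypothetical, not the repository's actual values.

```hcl
# Write a kv-v2 secret under the 'api-credentials' mount
resource "vault_kv_secret_v2" "example" {
  mount = vault_mount.kv-v2.path
  name  = "student/example"

  data_json = jsonencode({
    api_key = "not-a-real-key"
  })
}
```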
You need to override the values contained within config.s3.tfbackend, so examine that file as well.
$ cat config.s3.tfbackend
The values control settings for the encrypted state S3 bucket, including the bucket name, the Terraform state object key name, the AWS KMS key ID, and the AWS region.
config.s3.tfbackend
bucket = "learn-vault"
key = "terraform.tfstate"
kms_key_id = "1a1a1a1a-0a0a-1b1b-2c2c-3c3c3c3c3c3c"
region = "us-east-1"
Edit the file and set your specific values before proceeding to the next step.
Note
Terraform will prompt you for interactive input of any values not set in the config.s3.tfbackend file.
Initialize workspace and apply configuration
Initialize your Terraform workspace, which will download and configure the providers. Terraform will prompt for any S3 backend values you have not set; you can also supply them non-interactively by passing -backend-config=config.s3.tfbackend to terraform init.
$ terraform init
Expected output:
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/vault...
- Installing hashicorp/vault v3.6.0...
- Installed hashicorp/vault v3.6.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Run terraform apply and review the planned actions. Your terminal output should indicate the plan is running and what resources will be created.
$ terraform apply
This terraform apply will provision a total of 11 resources, including the username and password auth method and users, the ACL policies, the key/value and Transit secrets engines, and the static secrets.
Confirm the apply with a yes.
Successful output concludes with a line like this:
Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
Validate configuration
You can test that the configuration is working as expected by authenticating with Vault and accessing a secret.
Before doing so, you need to unset the VAULT_TOKEN environment variable so that it does not override the value of the token that the Vault token helper will cache to ~/.vault-token after successful authentication.
$ unset VAULT_TOKEN
Use the CLI and username and password (userpass) auth method to authenticate as the student user.
$ vault login -method userpass username=student
Vault will respond with:
Password (will be hidden):
Enter the student user's password: changeme
Successful output example:
Example authentication output
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key Value
--- -----
token hvs.CAESIM3mk9FFeUn5GxtZSnHI0D7OFB6lXJFY6AP6GVPzSp4QGh4KHGh2cy5GcTJGU3VjNGJWbmplbTQyNTg0RzFhTW0
token_accessor B06MmgUNfoQadgAZ532Xs3E8
token_duration 768h
token_renewable true
token_policies ["default" "student-secrets"]
identity_policies []
policies ["default" "student-secrets"]
token_meta_username student
You have received a token with the default and student-secrets policies attached. Let's check out the secrets stored at the api-credentials path.
$ vault kv list api-credentials/
Keys
----
admin/
student/
Interesting! You are authenticated as the student persona, but what access do you have for secrets under the admin path?
$ vault kv list api-credentials/admin/
Keys
----
api-wizard
The student persona ACL policy allows the list capability on the api-credentials/metadata/* path, and in this case you know there is a key named api-wizard, but can you access the contents of that secret?
$ vault kv get api-credentials/admin/api-wizard
Error reading api-credentials/data/admin/api-wizard: Error making API request.
URL: GET http://127.0.0.1:8200/v1/api-credentials/data/admin/api-wizard
Code: 403. Errors:
* 1 error occurred:
* permission denied
No!
The ACL policy works as intended and you are unable to access admin secrets as the student persona. You can try accessing the secrets under the api-credentials/student/ path, however.
$ vault kv list api-credentials/student/
Keys
----
api-key
golden
Try to access the api-key secret.
$ vault kv get api-credentials/student/api-key
============ Secret Path ============
api-credentials/data/student/api-key
======= Metadata =======
Key Value
--- -----
created_time 2022-05-24T20:14:03.534193Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
===== Data =====
Key Value
--- -----
api_key A26nYsDuB3Y9GHx2mstmuaPAQJPQjYtE2s5kEsIQ9Nk
Nicely done, you have validated some of the configuration that was applied to your self-hosted Vault server.
Now you are ready to try applying this same configuration to your HCP Vault Dedicated cluster!
HCP Vault Dedicated cluster configuration
Surprisingly few changes are required to migrate this same configuration to Vault Dedicated.
One important difference between self-hosted Vault and Vault Dedicated is that all Vault Dedicated clusters run Vault Enterprise and use the Enterprise Namespaces feature to set a default namespace of admin.
With this in mind, you can export the VAULT_NAMESPACE environment variable to instruct Terraform to use that namespace.
$ export VAULT_NAMESPACE=admin
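Alternatively, the namespace can be set on the provider block itself rather than through the environment; a minimal sketch:

```hcl
provider "vault" {
  # Equivalent to exporting VAULT_NAMESPACE=admin
  namespace = "admin"
}
```

The environment variable approach used in this lab keeps main.tf identical for both the self-hosted and Vault Dedicated targets.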
To deploy the configuration, you need to change the values of the VAULT_ADDR and VAULT_TOKEN environment variables to match your Vault Dedicated cluster.
Access your Vault Dedicated cluster UI, and under Quick actions, click the Public Cluster URL.
In the terminal, set the VAULT_ADDR environment variable to the copied address.
$ export VAULT_ADDR=<Public_Cluster_URL>
Return to the Vault Dedicated cluster UI, and on the Overview page, click Generate token.
Within a few moments, a new token will be generated. Copy the Admin Token.
In the terminal, set the VAULT_TOKEN environment variable to the copied token value.
$ export VAULT_TOKEN=<Pasted_Token_Value>
These changes to the environment are all that is required to apply the configuration to the Vault Dedicated cluster.
Deploy the updated Terraform Vault provider configuration to Vault Dedicated.
$ terraform apply
Confirm the apply with a yes.
Successful output concludes with a line like this:
Apply complete! Resources: 9 added, 2 changed, 0 destroyed.
Validate HCP Vault Dedicated configuration
Check Vault status:
$ vault status
The output is quite different from that of the self-hosted Vault server. The version is Enterprise (as indicated by the +ent suffix), the storage type is now Raft, and the Cluster Name, Cluster ID, and HA Cluster values are completely different.
Vault status output
Key Value
--- -----
Recovery Seal Type shamir
Initialized true
Sealed false
Total Recovery Shares 1
Threshold 1
Version 1.10.3+ent
Storage Type raft
Cluster Name vault-cluster-94ab41eb
Cluster ID a9b5ca69-4891-1efb-4978-9fcd14d918ef
HA Enabled true
HA Cluster https://172.25.19.232:8201
HA Mode active
Active Since 2022-05-25T13:56:52.37745306Z
Raft Committed Index 1035
Raft Applied Index 1035
Last WAL 284
As a final brief check, unset the VAULT_TOKEN value.
$ unset VAULT_TOKEN
Then attempt to authenticate to Vault with the userpass auth method as the admin persona.
$ vault login -method=userpass username=admin
Vault will respond with:
Password (will be hidden):
Enter the admin user's password: superS3cret!
Successful output example:
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key Value
--- -----
token hvs.CAESIKu-rxcPJa7UYueG3hLhBYgEkHRV6gKL37AcyI-A7KLnGicKImh2cy45VlZFNDMzWGZrVmI2TVc3R3RFT001dmYuR0ZxM28Q3gI
token_accessor B57fAloyUnutDHlimfAwaDK4.GFq3o
token_duration 1h
token_renewable true
token_policies ["admins" "default"]
identity_policies []
policies ["admins" "default"]
token_meta_username admin
If you recall, the student persona had access to a secret at api-credentials/student/golden. Try to get this secret's contents.
$ vault kv get api-credentials/student/golden
Try to get the secret again, but this time perform a Base64 decode operation on the egg-b64 field.
$ vault kv get \
-field=egg-b64 \
api-credentials/student/golden \
| base64 --decode
A surprise is revealed!
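To see what the Base64 pipeline itself does, you can decode a sample string in isolation; the encoded value below is a stand-in for illustration, not the actual secret contents.

```shell
# Decode a sample Base64 string; the real command instead pipes
# the egg-b64 field value retrieved from Vault into base64 --decode.
echo "Z29sZGVuIGVnZw==" | base64 --decode
```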
You have validated that the admin persona can now authenticate to your Vault Dedicated cluster with the username and password auth method and credentials that you previously configured and applied.
You can now use the Vault API, CLI, or web UI to access in Vault Dedicated all of the configuration previously available in your self-hosted Vault.
Have fun exploring your new Vault Dedicated!
Important caveats and considerations
Be aware that there are some specific caveats and considerations around migration of any self-hosted Vault to Vault Dedicated, which also affect the approach shown in this scenario.
Reminder
The hands-on lab in this tutorial is a simple scenario appropriate to migration of non-production Vault installations, but it is not suitable for use in production as-is.
Enterprise Vault with a default namespace
Keep in mind that all Vault Dedicated clusters run the Vault Enterprise edition, which includes the Enterprise Namespaces feature. As such, all Vault Dedicated clusters use a default namespace named admin.
Seal is not migrated
In this scenario, you started with a new self-hosted dev mode Vault server, and applied codified Vault configuration to it with the Vault Provider and Terraform. After some changes to your local environment, you then applied this same configuration to a Vault Dedicated cluster.
At no time did this process migrate the actual Vault seal.
Storage type could change
Be aware that all Vault Dedicated clusters use the Integrated Storage (raft) backend. A dev mode Vault server uses the in-memory (inmem) backend, so the storage backend actually changes during the configuration migration you walked through in this lab.
If you are not already using the Integrated Storage backend, and you need to preserve your current storage backend type, then this migration approach will not work for you.
Audit devices
Vault Dedicated has constrained audit device functionality. You cannot currently migrate a self-hosted Vault audit device to Vault Dedicated.
Public IP address
In this lab you learned about migrating the state of a self-hosted development Vault server. As this scenario did not deal with production, you could simply enable the public interface on your Vault Dedicated cluster.
This approach is not acceptable for production clusters. While solutions involving bastion hosts within the VPC peered with your Vault cluster's HVN could provide a means to migrate without enabling a public interface, that is not covered in this lab scenario.
Feature parity: ADP Module
Feature parity between self-hosted Vault and Vault Dedicated grows closer with each iteration, but is not yet complete.
Features that require the ADP module, such as the Transform secrets engine, are not yet available in Vault Dedicated, so keep this in mind when migrating as well.
Summary
You learned how to migrate a codified Vault configuration from a self-hosted installation to a Vault Dedicated cluster using Terraform.
While this is just one of the approaches detailed in the Migration Strategies and Considerations Guidelines documentation, it is a quick and straightforward solution for certain use cases, such as development or quality assurance Vault instances.
You also learned about some of the caveats and limitations involved in migration to Vault Dedicated using the strategy explained in this lab.
Clean up
Here are the steps you need to completely clean up the scenario content from your local environment.
Stop the Vault server process.
$ pkill vault
Change back to your $HOME directory.
$ cd
Recursively remove the Learn lab directory.
$ rm -rf "${HC_LEARN_LAB}"
Unset the environment variables.
$ unset HC_LEARN_LAB VAULT_ADDR VAULT_TOKEN LEARN_VAULT_PID VAULT_NAMESPACE
Finally, delete the Vault Dedicated cluster instance and HVN as necessary using the web UI or Terraform.