Introduction
The world of Information Technology is expanding and evolving, and the automation of infrastructure deployment and configuration is becoming increasingly vital for scalability, reliability, and agility. VMware NSX plays a crucial role as a network virtualization platform in modern data centers, enabling software-defined networking (SDN) for complex network topologies.
This article addresses how Terraform, an open-source infrastructure-as-code (IaC) tool, can aid in the deployment and configuration of NSX.
I will explain how to set up Terraform to talk to NSX and how to write Terraform deployments for NSX resources. This blog post will give you the information needed to automate NSX with Terraform and improve how you deploy and manage your network infrastructure.
This blog post will help in automating the creation and configuration of:
- NSX Manager
- NSX Fabric
- NSX Edge Transport Node
- NSX Tier-0 Gateway, Tier-1 Gateway and Segments
Terraform Installation
Terraform is an open-source infrastructure-as-code (IaC) tool that allows you to define and provision infrastructure using a high-level configuration language. It is widely used for automating and managing cloud resources and on-premises infrastructure.
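As a minimal, self-contained sketch of that declarative model (using the hashicorp/local provider purely for illustration — it is not part of the NSX setup in this post), a Terraform configuration declares the desired end state, and Terraform works out the changes needed to reach it:

```hcl
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"
    }
  }
}

# Declaring this resource makes Terraform create the file on apply
# and delete it on destroy.
resource "local_file" "example" {
  filename = "${path.module}/hello.txt"
  content  = "managed by terraform"
}
```

Running terraform init and terraform apply creates hello.txt; removing the block and re-applying deletes it. The NSX and vSphere resources later in this post follow the same declarative model.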
For this blog post I am using Ubuntu for installing Terraform, but you are free to choose an operating system of your choice.
Terraform does not have many system dependencies or prerequisites, but you need to make sure you have the following:
- A system running Ubuntu 20.04 or later
- A user account with sudo privileges to install software
- Internet access to download Terraform and dependencies
Step 1: Update Ubuntu
First, it’s always a good practice to update your system to ensure you have the latest packages and security patches. Open a terminal window and run the command:
sudo apt update && sudo apt upgrade -y
Step 2: Install Terraform
Run the commands below in a terminal window to install Terraform on the system:
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
Reference – https://developer.hashicorp.com/terraform/install
Step 3: Verify Terraform Installation
After installation, verify that Terraform was successfully installed by checking its version. Run the command:
terraform --version
You should see output like this:
Terraform v1.10.4
on linux_amd64
NSX Manager Installation
Let’s start with the first step of installing NSX Manager in a vSphere environment. For this, I have configured a vSphere 8.0 U3 environment comprising 3 ESXi hosts managed by a vCenter Server, with host networking configured on a vSphere Distributed Switch.
The first step is to set up the vSphere provider in the providers.tf file for Terraform. providers.tf should include all required providers; we will be using the vSphere and NSX Terraform providers.
providers.tf
## Required Provider for NSX Configuration
terraform {
  required_providers {
    nsxt = {
      source = "vmware/nsxt"
    }
    vsphere = {
      source = "hashicorp/vsphere"
    }
  }
}

### vSphere Configuration
provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
  api_timeout          = 10
}

### NSX Configuration
provider "nsxt" {
  host                 = var.nsx_server
  username             = var.nsx_username
  password             = var.nsx_password
  allow_unverified_ssl = true
  max_retries          = 2
}
The next step would be to configure all required variables for NSX Manager creation. These variables would be declared in the variables.tf file.
variables.tf
variable "vsphere_server" { type = string }
variable "vsphere_user" { type = string }
variable "vsphere_password" { type = string }
variable "certificate_thumbprint" { type = string }
variable "vmware_datacenter" { type = string }
variable "vmware_cluster" { type = string }
variable "datastore" { type = string }
variable "management_network" { type = string }
variable "esxi_host" { type = string }
variable "nsx_name" { type = string }
variable "disk_provisioning" { type = string }
variable "deployment_size" { type = string }
variable "nsx_role" { type = string }
variable "nsx_ip" { type = string }
variable "nsx_netmask" { type = string }
variable "nsx_ip_gateway" { type = string }
variable "nsx_dns" { type = string }
variable "nsx_domain" { type = string }
variable "nsx_ntp" { type = string }
variable "nsx_ssh_enabled" { type = string }
variable "nsx_root_enabled" { type = string }
# nsx_server and nsx_username are referenced by the nsxt provider in providers.tf
variable "nsx_server" { type = string }
variable "nsx_username" { type = string }
variable "nsx_password" { type = string }
variable "nsx_cli_password" { type = string }
variable "nsx_audit_password" { type = string }
variable "nsx_hostname" { type = string }
variable "local_ovf_path" { type = string }
All values for the variables are configured in terraform.tfvars: the vCenter Server where NSX Manager will be deployed, the username and password for that vCenter Server, and the NSX Manager appliance settings.
The certificate thumbprint of the vCenter Server can be retrieved by running the command
echo -n | openssl s_client -connect <hostname>:443 2>/dev/null | openssl x509 -noout -fingerprint -sha256
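If you want to verify the fingerprint format locally without contacting vCenter, the same extraction works against any certificate. The sketch below (a demo only — the CN vcenter.example.lab is hypothetical) generates a throwaway self-signed certificate and prints its SHA-256 thumbprint in the colon-separated form that terraform.tfvars expects:

```shell
# Generate a throwaway self-signed certificate (demo only).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 1 -subj "/CN=vcenter.example.lab" 2>/dev/null

# Extract the SHA-256 fingerprint, e.g. "AB:CD:...:EF".
openssl x509 -noout -fingerprint -sha256 -in /tmp/demo.crt | cut -d= -f2
```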
terraform.tfvars
# username, passwords and configuration for setup
vsphere_server         = "vcenter.workernode.lab"
vsphere_user           = "administrator@vsphere.local"
vsphere_password       = "VMware123!"
certificate_thumbprint = "BD:E4:FC:79:29:FC:C1:84:30:D5:3B:B1:18:C7:B1:FB:FA:22:3A:68:EE:39:AD:D6:CC:9F:E5:61:E9:1A:78:96"
vmware_datacenter      = "Datacenter"
vmware_cluster         = "Cluster"
datastore              = "vsanDatastore"
management_network     = "mgmt"
esxi_host              = "esxi1.workernode.lab"
nsx_name               = "nsx.workernode.lab"
disk_provisioning      = "thin"
deployment_size        = "small"
nsx_role               = "NSX Manager"
nsx_ip                 = "192.168.100.27"
nsx_netmask            = "255.255.254.0"
nsx_ip_gateway         = "192.168.100.1"
nsx_dns                = "192.168.100.1"
nsx_domain             = "workernode.lab"
nsx_ntp                = "192.168.100.1"
nsx_ssh_enabled        = "True"
nsx_root_enabled       = "True"
nsx_server             = "nsx.workernode.lab"
nsx_username           = "admin"
nsx_password           = "VMware123!VMware123!"
nsx_cli_password       = "VMware123!VMware123!"
nsx_audit_password     = "VMware123!VMware123!"
nsx_hostname           = "nsx.workernode.lab"
local_ovf_path         = "/home/ubuntu/pj/ova/nsx-unified-appliance-4.2.3.0.0.24866352.ova"
The main.tf file holds all the information required to create NSX Manager. The main.tf below specifies the vSphere Datacenter, Cluster, Host, and Network on which NSX Manager will be deployed, and sets up the NSX Manager configuration, such as deployment size, passwords, and network settings.
main.tf
## Data source for vCenter Datacenter
data "vsphere_datacenter" "datacenter" {
  name = var.vmware_datacenter
}

## Data source for vCenter Cluster
data "vsphere_compute_cluster" "cluster" {
  name          = var.vmware_cluster
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

## Data source for vCenter Datastore
data "vsphere_datastore" "datastore" {
  name          = var.datastore
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

## Data source for vCenter Portgroup
data "vsphere_network" "mgmt" {
  name          = var.management_network
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

## Data source for ESXi host to deploy to
data "vsphere_host" "host" {
  name          = var.esxi_host
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_resource_pool" "rp" {
  name          = format("%s%s", data.vsphere_compute_cluster.cluster.name, "/Resources")
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

## Data source for the OVF to read the required OVF Properties
data "vsphere_ovf_vm_template" "ovfLocal" {
  name                      = var.nsx_name
  datastore_id              = data.vsphere_datastore.datastore.id
  host_system_id            = data.vsphere_host.host.id
  resource_pool_id          = data.vsphere_resource_pool.rp.id
  local_ovf_path            = var.local_ovf_path
  allow_unverified_ssl_cert = true
  ovf_network_map = {
    "Network 1" = data.vsphere_network.mgmt.id
  }
}

## Deployment of VM from Local OVF
resource "vsphere_virtual_machine" "nsxt-manager" {
  name                 = var.nsx_name
  datacenter_id        = data.vsphere_datacenter.datacenter.id
  datastore_id         = data.vsphere_ovf_vm_template.ovfLocal.datastore_id
  host_system_id       = data.vsphere_host.host.id
  resource_pool_id     = data.vsphere_resource_pool.rp.id
  num_cpus             = data.vsphere_ovf_vm_template.ovfLocal.num_cpus
  num_cores_per_socket = data.vsphere_ovf_vm_template.ovfLocal.num_cores_per_socket
  memory               = data.vsphere_ovf_vm_template.ovfLocal.memory
  guest_id             = data.vsphere_ovf_vm_template.ovfLocal.guest_id
  scsi_type            = data.vsphere_ovf_vm_template.ovfLocal.scsi_type

  dynamic "network_interface" {
    for_each = data.vsphere_ovf_vm_template.ovfLocal.ovf_network_map
    content {
      network_id = network_interface.value
    }
  }

  wait_for_guest_net_timeout = 5
  wait_for_guest_ip_timeout  = 5

  ovf_deploy {
    allow_unverified_ssl_cert = true
    local_ovf_path            = data.vsphere_ovf_vm_template.ovfLocal.local_ovf_path
    disk_provisioning         = var.disk_provisioning
    deployment_option         = var.deployment_size
    ovf_network_map           = data.vsphere_ovf_vm_template.ovfLocal.ovf_network_map
  }

  vapp {
    properties = {
      "nsx_role"               = var.nsx_role
      "nsx_ip_0"               = var.nsx_ip
      "nsx_netmask_0"          = var.nsx_netmask
      "nsx_gateway_0"          = var.nsx_ip_gateway
      "nsx_dns1_0"             = var.nsx_dns
      "nsx_domain_0"           = var.nsx_domain
      "nsx_ntp_0"              = var.nsx_ntp
      "nsx_isSSHEnabled"       = var.nsx_ssh_enabled
      "nsx_allowSSHRootLogin"  = var.nsx_root_enabled
      "nsx_passwd_0"           = var.nsx_password
      "nsx_cli_passwd_0"       = var.nsx_cli_password
      "nsx_cli_audit_passwd_0" = var.nsx_audit_password
      "nsx_hostname"           = var.nsx_hostname
    }
  }

  lifecycle {
    ignore_changes = [
      # vapp, # Enable this to ignore all vApp properties if the plan is re-run
      vapp[0].properties["nsx_role"], # Avoid unwanted changes to specific vApp properties
      vapp[0].properties["nsx_passwd_0"],
      vapp[0].properties["nsx_cli_passwd_0"],
      vapp[0].properties["nsx_cli_audit_passwd_0"],
      host_system_id # Avoids moving the VM back to the host it was deployed to if DRS has relocated it
    ]
  }
}
With all configuration files in place, it is now time to run Terraform to deploy NSX Manager. The first step is to initialize Terraform and its providers by running the command terraform init.
root@ubuntu:/projects/terraform# terraform init

Initializing the backend...
Initializing provider plugins...
- Finding latest version of vmware/nsxt...
- Finding latest version of hashicorp/vsphere...
- Installing vmware/nsxt v3.8.0...
- Installed vmware/nsxt v3.8.0 (signed by a HashiCorp partner, key ID ED13BE650293896B)
- Installing hashicorp/vsphere v2.10.0...
- Installed hashicorp/vsphere v2.10.0 (signed by HashiCorp)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
The next step is to format the Terraform files by running terraform fmt.
At this stage you can run the Terraform configuration with terraform apply --auto-approve. However, I recommend running terraform plan first to review the implementation details; if there are any errors in the configuration files, terraform plan will identify them.
Run terraform apply --auto-approve to start the deployment of NSX Manager.
NSX Fabric
After successfully installing NSX Manager, it’s time to automate the steps that need the most effort: the configuration of NSX Manager and the preparation of vSphere Clusters for NSX.
In this section we will perform the following tasks using Terraform:
- Add vCenter Server as Compute Manager
- Creation of Overlay Transport Zones
- Creation of Uplink Profile for Hosts Transport Node
- Creation of Uplink Profile for Edge Transport Node
- Creation of VLAN Transport Zone for Edge Uplinks
- Creation of Transport Node Profile
- Prepare vSphere Cluster for NSX Networking
As shown above, I set up the providers required for this configuration in providers.tf, along with the variables in variables.tf. The values for the variables are declared in terraform.tfvars.
providers.tf
### Required Provider for NSX Configuration
terraform {
  required_providers {
    nsxt = {
      source = "vmware/nsxt"
    }
    vsphere = {
      source = "hashicorp/vsphere"
    }
  }
}

### vSphere Configuration
provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
  api_timeout          = 10
}

### NSX Configuration
provider "nsxt" {
  host                 = var.nsx_server
  username             = var.nsx_username
  password             = var.nsx_password
  allow_unverified_ssl = true
  max_retries          = 2
}
variables.tf
variable "nsx_server" { type = string }
variable "nsx_username" { type = string }
variable "nsx_password" { type = string }
variable "vsphere_server" { type = string }
variable "vsphere_user" { type = string }
variable "vsphere_password" { type = string }
variable "certificate_thumbprint" { type = string }
variable "overlay_transport_zone" { type = string }

variable "uplink_host_switch_profile" {
  description = "Uplink host switch profile for Supervisor"
  type        = string
}

variable "nsx_edge_uplink_profile" {
  description = "nsx_edge_uplink_profile"
  type        = string
}

variable "transport_vlan_host" { type = string }
variable "transport_vlan_edge" { type = string }

variable "edge_vlan_transport_zone" {
  description = "edge_vlan_transport_zone"
  type        = string
}

variable "vmware_datacenter" { type = string }
variable "vsphere_cluster" { type = string }
variable "tep_pool" { type = string }
variable "mtu" { type = string }
variable "distributed_virtual_switch" { type = string }
variable "tep_cidr" { type = string }
variable "tep_network_gateway" { type = string }
variable "tep_network_starting_ip" { type = string }
variable "tep_network_ending_ip" { type = string }
variable "tep_subnet" { type = string }
terraform.tfvars
This is where you provide the values for your declared variables, e.g. the transport VLANs for the Host and Edge Transport Nodes and the Host TEP pool configuration.
## username, passwords and configuration for setup
vsphere_server             = "vcenter.workernode.lab"
vsphere_user               = "administrator@vsphere.local"
vsphere_password           = "VMware123!"
certificate_thumbprint     = "BD:E4:FC:79:29:FC:C1:84:30:D5:3B:B1:18:C7:B1:FB:FA:22:3A:68:EE:39:AD:D6:CC:9F:E5:61:E9:1A:78:96"
nsx_server                 = "nsx.workernode.lab"
nsx_username               = "admin"
nsx_password               = "VMware123!VMware123!"
vmware_datacenter          = "Datacenter"
vsphere_cluster            = "Cluster"
overlay_transport_zone     = "supervisor_transport_zone"
uplink_host_switch_profile = "uplink_host_switch_profile"
transport_vlan_host        = "108"
nsx_edge_uplink_profile    = "nsx_edge_uplink_profile"
mtu                        = "9000"
transport_vlan_edge        = "110"
edge_vlan_transport_zone   = "edge_vlan_transport_zone"
distributed_virtual_switch = "DSwitch"
tep_pool                   = "host-tep-pool"
tep_subnet                 = "tep_subnet"
tep_network_gateway        = "192.168.108.1"
tep_cidr                   = "192.168.108.0/23"
tep_network_starting_ip    = "192.168.108.10"
tep_network_ending_ip      = "192.168.108.25"
The main.tf file contains all the necessary information for the configuration: adding vCenter Server as a Compute Manager, creating the overlay and VLAN transport zones, and creating the uplink profiles.
main.tf
### Adding vCenter Server as Compute Manager to NSX
resource "nsxt_compute_manager" "NSXT-Sup" {
  description            = "NSX-T Compute Manager"
  display_name           = "NSXT-Sup"
  create_service_account = "true"
  access_level_for_oidc  = "FULL"
  set_as_oidc_provider   = "true"
  server                 = var.vsphere_server
  credential {
    username_password_login {
      username   = var.vsphere_user
      password   = var.vsphere_password
      thumbprint = var.certificate_thumbprint
    }
  }
  origin_type = "vCenter"
}

data "nsxt_compute_manager_realization" "NSXT-Sup_realization" {
  id      = nsxt_compute_manager.NSXT-Sup.id
  timeout = 1200
}

### Creation of Overlay Transport Zone
resource "nsxt_policy_transport_zone" "overlay_transport_zone" {
  display_name   = var.overlay_transport_zone
  transport_type = "OVERLAY_BACKED"
  depends_on     = [data.nsxt_compute_manager_realization.NSXT-Sup_realization]
}

### Creation of Uplink Host Switch Profile
resource "nsxt_policy_uplink_host_switch_profile" "uplink_host_switch_profile" {
  display_name   = var.uplink_host_switch_profile
  transport_vlan = var.transport_vlan_host
  overlay_encap  = "GENEVE"
  teaming {
    active {
      uplink_name = "uplink-1"
      uplink_type = "PNIC"
    }
    active {
      uplink_name = "uplink-2"
      uplink_type = "PNIC"
    }
    policy = "LOADBALANCE_SRCID"
  }
  named_teaming {
    active {
      uplink_name = "uplink-1"
      uplink_type = "PNIC"
    }
    standby {
      uplink_name = "uplink-2"
      uplink_type = "PNIC"
    }
    policy = "FAILOVER_ORDER"
    name   = "uplink-1-failover_order"
  }
  named_teaming {
    active {
      uplink_name = "uplink-2"
      uplink_type = "PNIC"
    }
    standby {
      uplink_name = "uplink-1"
      uplink_type = "PNIC"
    }
    policy = "FAILOVER_ORDER"
    name   = "uplink-2-failover_order"
  }
  named_teaming {
    active {
      uplink_name = "uplink-2"
      uplink_type = "PNIC"
    }
    policy = "FAILOVER_ORDER"
    name   = "uplink-2"
  }
  named_teaming {
    active {
      uplink_name = "uplink-1"
      uplink_type = "PNIC"
    }
    policy = "FAILOVER_ORDER"
    name   = "uplink-1"
  }
}

### Creation of Edge Uplink Host Switch Profile
resource "nsxt_policy_uplink_host_switch_profile" "nsx_edge_uplink_profile" {
  display_name   = var.nsx_edge_uplink_profile
  mtu            = var.mtu
  transport_vlan = var.transport_vlan_edge
  teaming {
    active {
      uplink_name = "uplink-1"
      uplink_type = "PNIC"
    }
    active {
      uplink_name = "uplink-2"
      uplink_type = "PNIC"
    }
    policy = "LOADBALANCE_SRCID"
  }
  named_teaming {
    active {
      uplink_name = "uplink-1"
      uplink_type = "PNIC"
    }
    policy = "FAILOVER_ORDER"
    name   = "uplink-1-edge"
  }
  named_teaming {
    active {
      uplink_name = "uplink-2"
      uplink_type = "PNIC"
    }
    policy = "FAILOVER_ORDER"
    name   = "uplink-2-edge"
  }
}

### Creation of Edge VLAN Transport Zone
resource "nsxt_policy_transport_zone" "edge_vlan_transport_zone" {
  display_name                = var.edge_vlan_transport_zone
  transport_type              = "VLAN_BACKED"
  uplink_teaming_policy_names = ["uplink-1-edge", "uplink-2-edge"]
  site_path                   = "/infra/sites/default"
}

### Creation of Transport Node Profile
data "vsphere_datacenter" "vmware_datacenter" {
  name = var.vmware_datacenter
}

data "vsphere_distributed_virtual_switch" "distributed_virtual_switch" {
  name          = var.distributed_virtual_switch
  datacenter_id = data.vsphere_datacenter.vmware_datacenter.id
}

resource "nsxt_policy_ip_pool" "tep_pool" {
  display_name = var.tep_pool
}

resource "nsxt_policy_ip_pool_static_subnet" "tep_subnet" {
  display_name = var.tep_subnet
  pool_path    = nsxt_policy_ip_pool.tep_pool.path
  cidr         = var.tep_cidr
  gateway      = var.tep_network_gateway
  allocation_range {
    start = var.tep_network_starting_ip
    end   = var.tep_network_ending_ip
  }
}

resource "nsxt_policy_host_transport_node_profile" "TNP" {
  display_name = "supervisor-transport-node-profile"
  standard_host_switch {
    host_switch_id   = data.vsphere_distributed_virtual_switch.distributed_virtual_switch.id
    host_switch_mode = "STANDARD"
    ip_assignment {
      static_ip_pool = nsxt_policy_ip_pool.tep_pool.path
    }
    transport_zone_endpoint {
      transport_zone = nsxt_policy_transport_zone.overlay_transport_zone.path
    }
    uplink_profile   = nsxt_policy_uplink_host_switch_profile.uplink_host_switch_profile.path
    is_migrate_pnics = false
    uplink {
      uplink_name     = "uplink-1"
      vds_uplink_name = data.vsphere_distributed_virtual_switch.distributed_virtual_switch.uplinks[0]
    }
    uplink {
      uplink_name     = "uplink-2"
      vds_uplink_name = data.vsphere_distributed_virtual_switch.distributed_virtual_switch.uplinks[1]
    }
  }
  depends_on = [data.nsxt_compute_manager_realization.NSXT-Sup_realization]
}

### Prepare Host Cluster by attaching TNP Profile
data "vsphere_compute_cluster" "vsphere_cluster" {
  name          = var.vsphere_cluster
  datacenter_id = data.vsphere_datacenter.vmware_datacenter.id
}

data "nsxt_compute_collection" "compute_cluster_collection" {
  display_name = data.vsphere_compute_cluster.vsphere_cluster.name
  origin_id    = data.nsxt_compute_manager_realization.NSXT-Sup_realization.id
}

resource "nsxt_policy_host_transport_node_collection" "sup-tnp-c" {
  display_name                = "sup-tnp-c"
  compute_collection_id       = data.nsxt_compute_collection.compute_cluster_collection.id
  transport_node_profile_path = nsxt_policy_host_transport_node_profile.TNP.path
  remove_nsx_on_destroy       = true
  depends_on                  = [data.nsxt_compute_manager_realization.NSXT-Sup_realization]
}
The next step is to format the Terraform files and review the implementation details by running terraform fmt && terraform init && terraform plan.
Run terraform apply --auto-approve to start the configuration of the NSX Fabric.
Note – This step can take 30-45 minutes, and may take longer depending on the number of hosts in the vSphere Cluster.
NSX Edge Transport Node
The next logical step after configuring the NSX Fabric, i.e. preparing the ESXi hosts for NSX, is to create Edge Transport Nodes. An NSX Edge Transport Node is required to host the Tier-0 and Tier-1 Gateways.
As the providers are not changing, providers.tf stays the same for each configuration, but variables.tf and terraform.tfvars change for each step.
variables.tf
variable "nsx_server" { type = string }
variable "nsx_username" { type = string }
variable "nsx_password" { type = string }
variable "vsphere_server" { type = string }
variable "vsphere_user" { type = string }
variable "vsphere_password" { type = string }
variable "certificate_thumbprint" { type = string }
variable "vmware_datacenter" { type = string }
variable "vsphere_cluster" { type = string }
variable "nsxt-manager-name" { type = string }
variable "supervisor_datastore" { type = string }
variable "edge_uplink_name_1" { type = string }
variable "edge_uplink_name_2" { type = string }
variable "mgmt-network" { type = string }
variable "overlay_transport_zone" { type = string }
variable "edge_vlan_transport_zone" { type = string }
variable "nsx_edge_uplink_profile" { type = string }
variable "edge_hostname" { type = string }
variable "edge_subnet_mask" { type = string }
variable "edge_default_gateway" { type = string }
variable "cli_password" { type = string }
variable "root_password" { type = string }
variable "audit_password" { type = string }
variable "audit_username" { type = string }
variable "management_network_gateway" { type = list(string) }
variable "edge_management_network_ip" { type = list(string) }
variable "edge_cidr_range_prefix" { type = string }
variable "dns_server" { type = list(string) }
variable "ntp_server" { type = list(string) }
variable "edge_tep_ip" { type = list(string) }
variable "search_domains" { type = list(string) }
variable "distributed_virtual_switch" { type = string }
terraform.tfvars
This is where you provide the values for your declared variables, e.g. the Edge management network details, the Edge TEP IPs, and the uplink port group names.
## username and passwords for setup
vsphere_server             = "vcenter.workernode.lab"
vsphere_user               = "administrator@vsphere.local"
vsphere_password           = "VMware123!"
certificate_thumbprint     = "BD:E4:FC:79:29:FC:C1:84:30:D5:3B:B1:18:C7:B1:FB:FA:22:3A:68:EE:39:AD:D6:CC:9F:E5:61:E9:1A:78:96"
nsx_server                 = "nsx.workernode.lab"
nsx_username               = "admin"
nsx_password               = "VMware123!VMware123!"
vmware_datacenter          = "Datacenter"
vsphere_cluster            = "Cluster"
supervisor_datastore       = "vsanDatastore"
search_domains             = ["workernode.lab"]
dns_server                 = ["192.168.100.1"]
ntp_server                 = ["192.168.100.1"]
edge_subnet_mask           = "255.255.254.0"
edge_default_gateway       = "192.168.110.1"
cli_password               = "VMware1!VMware1!"
root_password              = "VMware1!VMware1!"
audit_password             = "VMware1!VMware1!"
audit_username             = "audit"
management_network_gateway = ["192.168.100.1"]
edge_management_network_ip = ["192.168.100.31"]
edge_cidr_range_prefix     = "23"
edge_hostname              = "edge.workernode.lab"
edge_uplink_name_1         = "edge-uplink-1"
edge_uplink_name_2         = "edge-uplink-2"
mgmt-network               = "mgmt"
overlay_transport_zone     = "supervisor_transport_zone"
edge_vlan_transport_zone   = "edge_vlan_transport_zone"
nsx_edge_uplink_profile    = "nsx_edge_uplink_profile"
edge_tep_ip                = ["192.168.110.10", "192.168.110.11"]
nsxt-manager-name          = "NSXT-Sup"
distributed_virtual_switch = "DSwitch"
The main.tf below creates one NSX Edge Transport Node with two uplinks.
main.tf
data "nsxt_compute_manager" "NSXT-Sup" {
  display_name = "NSXT-Sup"
}

data "vsphere_datacenter" "vmware_datacenter" {
  name = var.vmware_datacenter
}

data "vsphere_compute_cluster" "vsphere_cluster" {
  name          = var.vsphere_cluster
  datacenter_id = data.vsphere_datacenter.vmware_datacenter.id
}

data "vsphere_datastore" "supervisor_datastore" {
  name          = var.supervisor_datastore
  datacenter_id = data.vsphere_datacenter.vmware_datacenter.id
}

data "vsphere_distributed_virtual_switch" "distributed_virtual_switch" {
  name          = var.distributed_virtual_switch
  datacenter_id = data.vsphere_datacenter.vmware_datacenter.id
}

data "vsphere_network" "mgmt-network" {
  name          = var.mgmt-network
  datacenter_id = data.vsphere_datacenter.vmware_datacenter.id
}

data "nsxt_policy_transport_zone" "overlay_transport_zone" {
  display_name = var.overlay_transport_zone
}

data "nsxt_policy_transport_zone" "vlan_transport_zone" {
  display_name = var.edge_vlan_transport_zone
}

data "nsxt_policy_uplink_host_switch_profile" "uplink_host_switch_profile" {
  display_name = var.nsx_edge_uplink_profile
}

resource "vsphere_distributed_port_group" "edge_uplink_name_1" {
  name                            = var.edge_uplink_name_1
  distributed_virtual_switch_uuid = data.vsphere_distributed_virtual_switch.distributed_virtual_switch.id
  vlan_range {
    min_vlan = 0
    max_vlan = 4094
  }
}

resource "vsphere_distributed_port_group" "edge_uplink_name_2" {
  name                            = var.edge_uplink_name_2
  distributed_virtual_switch_uuid = data.vsphere_distributed_virtual_switch.distributed_virtual_switch.id
  vlan_range {
    min_vlan = 0
    max_vlan = 4094
  }
}

resource "nsxt_edge_transport_node" "nsxt-edge" {
  display_name = var.edge_hostname
  standard_host_switch {
    ip_assignment {
      static_ip {
        ip_addresses    = var.edge_tep_ip
        subnet_mask     = var.edge_subnet_mask
        default_gateway = var.edge_default_gateway
      }
    }
    transport_zone_endpoint {
      transport_zone = data.nsxt_policy_transport_zone.vlan_transport_zone.path
    }
    transport_zone_endpoint {
      transport_zone = data.nsxt_policy_transport_zone.overlay_transport_zone.path
    }
    uplink_profile = data.nsxt_policy_uplink_host_switch_profile.uplink_host_switch_profile.path
    pnic {
      device_name = "fp-eth0"
      uplink_name = "uplink-1"
    }
    pnic {
      device_name = "fp-eth1"
      uplink_name = "uplink-2"
    }
  }
  deployment_config {
    form_factor = "SMALL"
    node_user_settings {
      cli_password   = var.cli_password
      root_password  = var.root_password
      audit_username = var.audit_username
      audit_password = var.audit_password
    }
    vm_deployment_config {
      management_network_id   = data.vsphere_network.mgmt-network.id
      data_network_ids        = [vsphere_distributed_port_group.edge_uplink_name_1.id, vsphere_distributed_port_group.edge_uplink_name_2.id]
      compute_id              = data.vsphere_compute_cluster.vsphere_cluster.id
      storage_id              = data.vsphere_datastore.supervisor_datastore.id
      vc_id                   = data.nsxt_compute_manager.NSXT-Sup.id
      default_gateway_address = var.management_network_gateway
      management_port_subnet {
        ip_addresses  = var.edge_management_network_ip
        prefix_length = var.edge_cidr_range_prefix
      }
    }
  }
  node_settings {
    hostname             = var.edge_hostname
    allow_ssh_root_login = true
    enable_ssh           = true
    dns_servers          = var.dns_server
    search_domains       = var.search_domains
    ntp_servers          = var.ntp_server
  }
}
NSX Tier-0 Gateway, Tier-1 Gateway and Segments
The last step is to create a Tier-0 Gateway with BGP configuration, a Tier-1 Gateway attached to the Tier-0 Gateway, and NSX Segments.
variables.tf
variable "nsx_server" { type = string }
variable "nsx_username" { type = string }
variable "nsx_password" { type = string }
variable "vsphere_server" { type = string }
variable "vsphere_user" { type = string }
variable "vsphere_password" { type = string }
variable "certificate_thumbprint" { type = string }
variable "edge_hostname" { type = string }

variable "sup-edge-cluster" {
  description = "sup-edge-cluster"
  type        = string
}

variable "edge_vlan_transport_zone" { type = string }

variable "sup-t0-gw" {
  description = "sup-t0-gw"
  type        = string
}

variable "local_as_num" { type = string }
variable "nsxt_edge_uplink_segment" { type = string }
variable "nsxt_edge_uplink_segment_vlan" { type = string }
variable "vrf_uplink_name" { type = string }
variable "vlan112-bgp" { type = string }
variable "bgp_neighbor_address" { type = string }
variable "remote_as_num" { type = string }
variable "mtu" { type = string }
variable "segment1_cidr" { type = string }
variable "segment2_cidr" { type = string }
variable "segment2_name" { type = string }
variable "segment1_name" { type = string }
variable "overlay_transport_zone" { type = string }
variable "sup-t1-gw" { type = string }
variable "bfd_interval" { type = string }
variable "bfd_multiple" { type = string }
variable "bfd_enabled" { type = string }
variable "hold_down_time" { type = string }
variable "keep_alive_time" { type = string }
variable "t0_gw_uplink_subnet" { type = list(string) }
variable "bgp_password" { type = string }
terraform.tfvars
This is where you provide the values for your declared variables, e.g. the BGP configuration, the Tier-0 and Tier-1 Gateway names, and the Segment information.
## username and passwords for setup
vsphere_server                = "vcenter.workernode.lab"
vsphere_user                  = "administrator@vsphere.local"
vsphere_password              = "VMware1!"
certificate_thumbprint        = "FC:F8:7D:8B:10:02:1D:42:AA:D8:8C:AB:97:79:70:8F:FA:F0:B4:BB:35:EC:3E:1F:3D:27:FC:1D:26:D3:11:23"
nsx_server                    = "nsx.workernode.lab"
nsx_username                  = "admin"
nsx_password                  = "VMware123!VMware123!"
edge_hostname                 = "edge.workernode.lab"
edge_vlan_transport_zone      = "edge_vlan_transport_zone"
local_as_num                  = "65003"
sup-t0-gw                     = "sup-t0-gw"
nsxt_edge_uplink_segment      = "112"
nsxt_edge_uplink_segment_vlan = "112"
vlan112-bgp                   = "112"
bgp_neighbor_address          = "192.168.112.1"
bgp_password                  = "vmware"
remote_as_num                 = "65001"
mtu                           = "9000"
segment1_cidr                 = "172.16.10.1/24"
segment2_cidr                 = "172.16.20.1/24"
segment1_name                 = "Segment1"
segment2_name                 = "Segment2"
overlay_transport_zone        = "supervisor_transport_zone"
bfd_enabled                   = "true"
t0_gw_uplink_subnet           = ["192.168.112.2/23"]
bfd_interval                  = "1000"
bfd_multiple                  = "4"
hold_down_time                = "75"
keep_alive_time               = "25"
sup-edge-cluster              = "sup-edge-cluster"
sup-t1-gw                     = "sup-t1-gw"
vrf_uplink_name               = "Uplink-01"
main.tf
data "nsxt_transport_node" "edge" {
  display_name = var.edge_hostname
}

resource "nsxt_edge_cluster" "sup-edge-cluster" {
  display_name = var.sup-edge-cluster
  member {
    transport_node_id = data.nsxt_transport_node.edge.id
  }
}

data "nsxt_policy_edge_cluster" "sup-edge-cluster" {
  display_name = var.sup-edge-cluster
  depends_on   = [nsxt_edge_cluster.sup-edge-cluster]
}

data "nsxt_policy_transport_zone" "edge_vlan_transport_zone" {
  display_name = var.edge_vlan_transport_zone
}

resource "nsxt_policy_tier0_gateway" "sup-t0-gw" {
  display_name             = var.sup-t0-gw
  failover_mode            = "PREEMPTIVE"
  default_rule_logging     = false
  enable_firewall          = true
  ha_mode                  = "ACTIVE_ACTIVE"
  internal_transit_subnets = ["169.254.0.0/24"]
  transit_subnets          = ["100.64.0.0/16"]
  vrf_transit_subnets      = ["169.254.2.0/23"]
  edge_cluster_path        = data.nsxt_policy_edge_cluster.sup-edge-cluster.path

  bgp_config {
    local_as_num    = var.local_as_num
    multipath_relax = true
    ecmp            = true
    inter_sr_ibgp   = true
  }
}

data "nsxt_policy_edge_node" "edge-1" {
  edge_cluster_path = data.nsxt_policy_edge_cluster.sup-edge-cluster.path
  display_name      = var.edge_hostname
}

# Create VLAN Segments
resource "nsxt_policy_vlan_segment" "nsxt_edge_uplink_segment" {
  display_name        = var.nsxt_edge_uplink_segment
  transport_zone_path = data.nsxt_policy_transport_zone.edge_vlan_transport_zone.path
  vlan_ids            = [var.nsxt_edge_uplink_segment_vlan]
}

# Create Tier-0 Gateway Uplink Interfaces
resource "nsxt_policy_tier0_gateway_interface" "vrf_uplink1" {
  display_name   = var.vrf_uplink_name
  type           = "EXTERNAL"
  edge_node_path = data.nsxt_policy_edge_node.edge-1.path
  gateway_path   = nsxt_policy_tier0_gateway.sup-t0-gw.path
  segment_path   = nsxt_policy_vlan_segment.nsxt_edge_uplink_segment.path
  subnets        = var.t0_gw_uplink_subnet
  mtu            = var.mtu
}

resource "nsxt_policy_bgp_neighbor" "bgp_config" {
  display_name          = var.vlan112-bgp
  bgp_path              = nsxt_policy_tier0_gateway.sup-t0-gw.bgp_config.0.path
  neighbor_address      = var.bgp_neighbor_address
  password              = var.bgp_password
  remote_as_num         = var.remote_as_num
  allow_as_in           = false
  graceful_restart_mode = "HELPER_ONLY"
  hold_down_time        = var.hold_down_time
  keep_alive_time       = var.keep_alive_time
  source_addresses      = nsxt_policy_tier0_gateway_interface.vrf_uplink1.ip_addresses

  bfd_config {
    enabled  = var.bfd_enabled
    interval = var.bfd_interval
    multiple = var.bfd_multiple
  }

  depends_on = [nsxt_policy_tier0_gateway_interface.vrf_uplink1]
}

resource "nsxt_policy_gateway_redistribution_config" "redistribution_config" {
  gateway_path = nsxt_policy_tier0_gateway.sup-t0-gw.path
  bgp_enabled  = true

  rule {
    name  = "route-disti-rule"
    types = ["TIER0_STATIC", "TIER0_CONNECTED", "TIER0_EXTERNAL_INTERFACE", "TIER0_SEGMENT", "TIER0_ROUTER_LINK", "TIER0_SERVICE_INTERFACE", "TIER0_LOOPBACK_INTERFACE", "TIER0_DNS_FORWARDER_IP", "TIER0_IPSEC_LOCAL_IP", "TIER0_NAT", "TIER0_EVPN_TEP_IP", "TIER1_NAT", "TIER1_STATIC", "TIER1_LB_VIP", "TIER1_LB_SNAT", "TIER1_DNS_FORWARDER_IP", "TIER1_CONNECTED", "TIER1_SERVICE_INTERFACE", "TIER1_SEGMENT", "TIER1_IPSEC_LOCAL_ENDPOINT"]
  }

  depends_on = [nsxt_policy_bgp_neighbor.bgp_config]
}

# Create Tier-1 Gateway
resource "nsxt_policy_tier1_gateway" "sup-t1-gw" {
  display_name              = var.sup-t1-gw
  edge_cluster_path         = data.nsxt_policy_edge_cluster.sup-edge-cluster.path
  failover_mode             = "NON_PREEMPTIVE"
  default_rule_logging      = false
  enable_firewall           = true
  enable_standby_relocation = true
  tier0_path                = nsxt_policy_tier0_gateway.sup-t0-gw.path
  route_advertisement_types = ["TIER1_STATIC_ROUTES", "TIER1_CONNECTED", "TIER1_NAT", "TIER1_LB_VIP", "TIER1_LB_SNAT", "TIER1_DNS_FORWARDER_IP", "TIER1_IPSEC_LOCAL_ENDPOINT"]
  pool_allocation           = "ROUTING"
  ha_mode                   = "ACTIVE_STANDBY"

  depends_on = [nsxt_policy_tier0_gateway_interface.vrf_uplink1]
}

data "nsxt_policy_transport_zone" "overlay_transport_zone" {
  display_name = var.overlay_transport_zone
}

# Create NSX-T Overlay Segment for Egress Traffic
resource "nsxt_policy_segment" "Segment1" {
  display_name        = var.segment1_name
  transport_zone_path = data.nsxt_policy_transport_zone.overlay_transport_zone.path
  connectivity_path   = nsxt_policy_tier1_gateway.sup-t1-gw.path
  subnet {
    cidr = var.segment1_cidr
  }
}

# Create NSX-T Overlay Segment for Ingress Traffic
resource "nsxt_policy_segment" "Segment2" {
  display_name        = var.segment2_name
  transport_zone_path = data.nsxt_policy_transport_zone.overlay_transport_zone.path
  connectivity_path   = nsxt_policy_tier1_gateway.sup-t1-gw.path
  subnet {
    cidr = var.segment2_cidr
  }
}
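The main.tf above references a number of input variables (var.edge_hostname, var.local_as_num, and so on). These must be declared in a variables.tf file alongside it. The sketch below shows a representative subset; the remaining variables follow the same pattern, and the types and descriptions here are my illustrative assumptions, so adjust them to match your environment.

```hcl
# variables.tf (sketch) - declarations for a subset of the variables used in main.tf.
variable "edge_hostname" {
  description = "Display name of the existing Edge Transport Node"
  type        = string
}

variable "local_as_num" {
  description = "Local BGP AS number for the Tier-0 gateway"
  type        = string
}

variable "bgp_neighbor_address" {
  description = "IP address of the upstream BGP peer"
  type        = string
}

variable "bgp_password" {
  description = "BGP neighbor password"
  type        = string
  sensitive   = true # keep the password out of plan/apply output
}

variable "t0_gw_uplink_subnet" {
  description = "Uplink interface address(es) in CIDR notation, e.g. [\"192.168.112.2/24\"]"
  type        = list(string)
}

variable "mtu" {
  description = "MTU for the Tier-0 uplink interface"
  type        = number
  default     = 1500
}

variable "segment1_cidr" {
  description = "Gateway address and prefix for the egress overlay segment"
  type        = string
}
```

Marking the BGP password as sensitive is worth the extra line: it prevents Terraform from printing the value in console output, though it is still stored in the state file.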
The next step is to format the Terraform files, initialize the working directory, and review the planned changes before applying them. You can do all three by running
terraform fmt && terraform init && terraform plan
Once the plan output looks correct, start the configuration of the Tier-0 and Tier-1 gateways, along with the creation of the network segments, by running
terraform apply --auto-approve
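After a successful apply, it can be handy to surface the policy paths of the objects that were created, for example to feed them into a later Terraform configuration. A minimal, optional outputs.tf sketch, using only the resources defined in main.tf above:

```hcl
# outputs.tf (sketch) - expose the NSX policy paths of the created objects.
output "tier0_gateway_path" {
  value = nsxt_policy_tier0_gateway.sup-t0-gw.path
}

output "tier1_gateway_path" {
  value = nsxt_policy_tier1_gateway.sup-t1-gw.path
}

output "segment_paths" {
  value = [
    nsxt_policy_segment.Segment1.path,
    nsxt_policy_segment.Segment2.path,
  ]
}
```

With this in place, terraform output prints the paths after each apply, and terraform output -json makes them consumable by scripts.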
Conclusion
Automating NSX deployment with Terraform significantly simplifies the process of building and managing modern virtual networking environments. By defining infrastructure as code, administrators can ensure consistent, repeatable, and scalable deployments across their vSphere environments.
In this blog, we walked through the end-to-end workflow—from deploying NSX Manager to configuring the NSX Fabric, Edge Transport Nodes, and finally provisioning Tier-0/Tier-1 gateways and network segments.
Disclaimer: All posts, content, and examples are for educational purposes in lab environments only and do not constitute professional advice. No warranty is implied or given. All information, content, and opinions are my own and do not reflect the opinions of my employer.