Automating Deployment of vSphere Infrastructure on Equinix Metal
Did you know that Equinix has a Bare Metal as a Service (BMaaS) offering? Back in March, Pure Storage announced the Pure Storage® on Equinix Metal™ offering. This allows users not only to procure a Pure Storage FlashArray as an OpEx purchase, but also to spin up physical servers in Equinix Metal datacenters and connect them to a dedicated physical FlashArray. In this series I will cover how to deploy a vSphere environment on Equinix Metal with Terraform.
Introduction
Equinix Metal, formerly known as Packet, is a BMaaS platform offering physical servers on an on-demand consumption model starting at $0.50 an hour. This provides organizations with a cloud-like experience but with the ability to use native tooling and resources. When someone tells me their goal is to get out of the datacenter business, they immediately assume they HAVE to go to the public cloud. However, today's infrastructure is all about hybrid cloud and hybrid applications, and this use case fits that model without requiring a new skill set. It allows organizations to focus on their applications instead of the infrastructure.
For those looking for more details about Pure Storage and Equinix Metal, check out the press release.
Equinix Metal Resources
Just like the cloud, Equinix Metal offers On Demand, Spot, and Reserved deployment options. When deploying a new server, you can choose from their many global datacenters and multiple server sizes (small/medium/large), some of which are network-optimized or storage-optimized. You also get your choice of operating system, such as ESXi, Ubuntu, Windows, CentOS, and more. You can even bring your own image if you wish.
Once a server is deployed (in as little as 60 seconds for some Linux distributions), you are presented with an Overview page where you can manage networking and storage and access everything you need to manage the server.
However, this blog is about automation, so let's look at the Equinix Metal Terraform provider and see how we can automate deployment of a server!
Using the Equinix Metal Terraform Provider
The Equinix Metal Terraform provider has several resources, but for now we will focus on metal_device. With a few lines of code we can stand up an Ubuntu 20.04 server in an existing project in the NYC datacenter.
```hcl
terraform {
  required_providers {
    metal = {
      source = "equinix/metal"
    }
  }
}

# Authenticate the Equinix Metal provider.
provider "metal" {
  auth_token = var.auth_token
}

# Use an existing project.
data "metal_project" "project" {
  name = "My Project"
}

# Create a device.
resource "metal_device" "web1" {
  hostname         = "web1"
  plan             = "c3.medium.x86"
  facilities       = ["ny5"]
  operating_system = "ubuntu_20_04"
  billing_cycle    = "hourly"
  project_id       = data.metal_project.project.id
}
```
Using Terraform to Deploy vSphere Components on Equinix Metal
The full Terraform manifest can be found on GitHub.
Now that we know how to deploy a single server, let's take it one step further. This Terraform manifest will deploy two ESXi servers, change the networking type to allow VLANs, and then provision a vCenter Server.
```hcl
terraform {
  required_providers {
    metal = {
      source = "equinix/metal"
      # version = "1.0.0"
    }
  }
}

# Configure the Equinix Metal provider.
provider "metal" {
  auth_token = var.auth_token
}

# Specify the Equinix Metal project.
data "metal_project" "project" {
  name = var.project
}

# Create the Equinix Metal servers.
resource "metal_device" "esxi_hosts" {
  count            = var.hostcount
  hostname         = format("%s%02d", var.hostname, count.index + 1)
  plan             = var.plan
  metro            = var.metro
  operating_system = var.operating_system
  billing_cycle    = var.billing_cycle
  project_id       = data.metal_project.project.id
}

# Set the network type to hybrid.
resource "metal_device_network_type" "esxi_hosts" {
  count     = var.hostcount
  device_id = metal_device.esxi_hosts[count.index].id
  type      = "hybrid"
}

# Add the management VLAN to the bond.
resource "metal_port_vlan_attachment" "management" {
  count     = var.hostcount
  device_id = metal_device_network_type.esxi_hosts[count.index].id
  port_name = "bond0"
  vlan_vnid = "1015"
}

# Add the iSCSI-A VLAN to the bond.
resource "metal_port_vlan_attachment" "iscsi-a" {
  count     = var.hostcount
  device_id = metal_device_network_type.esxi_hosts[count.index].id
  port_name = "bond0"
  vlan_vnid = "1016"
}

# Add the iSCSI-B VLAN to the bond.
resource "metal_port_vlan_attachment" "iscsi-b" {
  count     = var.hostcount
  device_id = metal_device_network_type.esxi_hosts[count.index].id
  port_name = "bond0"
  vlan_vnid = "1017"
}

# Add the virtual machine VLAN to the bond.
resource "metal_port_vlan_attachment" "virtualmachine" {
  count     = var.hostcount
  device_id = metal_device_network_type.esxi_hosts[count.index].id
  port_name = "bond0"
  vlan_vnid = "1018"
}

# Wait for the first host to respond to pings.
resource "null_resource" "ping1" {
  depends_on = [
    metal_port_vlan_attachment.management,
    metal_device.esxi_hosts,
    metal_port_vlan_attachment.iscsi-a,
    metal_port_vlan_attachment.iscsi-b,
    metal_port_vlan_attachment.virtualmachine
  ]
  provisioner "local-exec" {
    command     = "while($ping -notcontains 'True'){$ping = test-connection ${metal_device.esxi_hosts[0].access_private_ipv4} -quiet}"
    interpreter = ["PowerShell", "-Command"]
  }
}

# Wait for the second host to respond to pings.
resource "null_resource" "ping2" {
  depends_on = [
    metal_port_vlan_attachment.management,
    metal_device.esxi_hosts,
    metal_port_vlan_attachment.iscsi-a,
    metal_port_vlan_attachment.iscsi-b,
    metal_port_vlan_attachment.virtualmachine
  ]
  provisioner "local-exec" {
    command     = "while($ping -notcontains 'True'){$ping = test-connection ${metal_device.esxi_hosts[1].access_private_ipv4} -quiet}"
    interpreter = ["PowerShell", "-Command"]
  }
}

# Sleep to allow the servers to come fully online.
resource "null_resource" "sleep" {
  depends_on = [
    metal_port_vlan_attachment.management,
    metal_device.esxi_hosts,
    metal_port_vlan_attachment.iscsi-a,
    metal_port_vlan_attachment.iscsi-b,
    metal_port_vlan_attachment.virtualmachine,
    null_resource.ping1,
    null_resource.ping2
  ]
  provisioner "local-exec" {
    command     = "Start-Sleep 30"
    interpreter = ["PowerShell", "-Command"]
  }
}

# Gather variables for the ESXi configuration template.
data "template_file" "configure_esxi" {
  depends_on = [
    metal_port_vlan_attachment.management,
    metal_device.esxi_hosts,
    null_resource.sleep
  ]
  template = file("${path.module}/files/configure_esxi.ps1")
  vars = {
    first_esx_pass     = metal_device.esxi_hosts[0].root_password
    first_esx_host_ip  = metal_device.esxi_hosts[0].access_private_ipv4
    second_esx_pass    = metal_device.esxi_hosts[1].root_password
    second_esx_host_ip = metal_device.esxi_hosts[1].access_private_ipv4
    vcenter_name       = var.vcenter_name
  }
}

# Output the rendered template.
resource "local_file" "configure_esxi" {
  depends_on = [
    metal_port_vlan_attachment.management,
    metal_device.esxi_hosts,
    null_resource.sleep
  ]
  content  = data.template_file.configure_esxi.rendered
  filename = "${path.module}/files/rendered-configure_esxi.ps1"
}

# Run the configuration script.
resource "null_resource" "configure_esxi" {
  depends_on = [
    local_file.configure_esxi,
    metal_device.esxi_hosts,
    null_resource.sleep
  ]

  provisioner "local-exec" {
    command = "pwsh ${path.module}/files/rendered-configure_esxi.ps1"
  }
}

# Gather variables for the vCenter deployment template.
data "template_file" "vc_template" {
  depends_on = [
    null_resource.configure_esxi,
    metal_device.esxi_hosts,
    null_resource.sleep
  ]
  template = file("${path.module}/files/deploy_vc.json")
  vars = {
    vcenter_password = var.vcenter_password
    sso_password     = var.vcenter_password
    first_esx_pass   = metal_device.esxi_hosts[0].root_password
    vcenter_network  = var.vcenter_portgroup_name
    first_esx_host   = metal_device.esxi_hosts[0].access_private_ipv4
  }
}

# Output the rendered template.
resource "local_file" "vc_template" {
  depends_on = [
    null_resource.configure_esxi,
    metal_device.esxi_hosts,
    null_resource.sleep
  ]
  content  = data.template_file.vc_template.rendered
  filename = "${path.module}/files/rendered-deploy_vc.json"
}

# Deploy the vCenter Server.
resource "null_resource" "vc" {
  depends_on = [local_file.vc_template]
  provisioner "local-exec" {
    command = "${var.vc_install_path} install --accept-eula --acknowledge-ceip --no-ssl-certificate-verification ${path.module}/files/rendered-deploy_vc.json"
  }
}

# Outputs
output "hostname" {
  value = metal_device.esxi_hosts[*].hostname
}

output "public_ips" {
  value = metal_device.esxi_hosts[*].access_public_ipv4
}

output "private_ips" {
  value = metal_device.esxi_hosts[*].access_private_ipv4
}
```
This Terraform manifest is fully parameterized with variables and uses the deployment outputs to render the files that configure ESXi and deploy vCenter Server. It performs the following steps:
- Deploys two Metal servers running ESXi 7.0
- Changes the network type to Hybrid to allow Layer 2 VLANs
- Adds four VLANs for management, iSCSI, and virtual machine traffic
- Waits for the servers to respond to pings before continuing
- Renders the private IPv4 addresses and root passwords into configure_esxi.ps1, a script that sets up NTP and firewall rules and creates port groups and VMkernel adapters
- Renders deploy_vc.json, which is used to deploy the vCenter Server
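The manifest expects a set of input variables. As a rough sketch (the variable names come from the manifest above, but the descriptions and defaults shown here are my own illustrative assumptions, not values from the original repo), a `variables.tf` could look like this:

```hcl
variable "auth_token" {
  description = "Equinix Metal API token"
  type        = string
  sensitive   = true
}

variable "project" {
  description = "Name of the existing Equinix Metal project"
  type        = string
}

variable "hostcount" {
  description = "Number of ESXi hosts to deploy"
  type        = number
  default     = 2
}

variable "hostname" {
  description = "Hostname prefix; hosts are named <prefix>01, <prefix>02, ..."
  type        = string
  default     = "esxi"
}

variable "plan" {
  description = "Equinix Metal server size"
  type        = string
  default     = "c3.medium.x86"
}

variable "metro" {
  description = "Equinix Metal metro code"
  type        = string
  default     = "ny"
}

variable "operating_system" {
  description = "Operating system slug (assumed value shown)"
  type        = string
  default     = "vmware_esxi_7_0"
}

variable "billing_cycle" {
  type    = string
  default = "hourly"
}

variable "vcenter_name" {
  description = "Name for the vCenter Server appliance"
  type        = string
}

variable "vcenter_password" {
  description = "Password used for both vCenter root and SSO administrator"
  type        = string
  sensitive   = true
}

variable "vcenter_portgroup_name" {
  description = "Port group the vCenter appliance connects to"
  type        = string
}

variable "vc_install_path" {
  description = "Path to the vcsa-deploy CLI from the vCSA installer ISO"
  type        = string
}
```

With these defined, the values can be supplied in a `terraform.tfvars` file or on the command line at apply time.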
Once this is done, we are left with two ESXi servers and a vCenter Server ready to use. This little bit of automation can go a long way toward quickly spinning up an environment.
If you are looking for other samples, Equinix has one for vSphere and one for Tanzu. My goal is to fork these to use Pure Storage instead of vSAN-backed storage.
What's Next
This blog covered what Equinix Metal is and how to automate deployment of a vSphere environment. In the next blog, we will bring everything together: integrating the ESXi servers with the Pure Storage FlashArray and configuring the vSphere environment.
Conclusion
I love that Equinix Metal provides a cloud-like experience while using standard resources. Stay tuned for a future post where we automate deployment of additional resources and put our Pure Storage FlashArray to work.
If you have any additional questions or comments, please leave them below!
See Also
- Using an In-house Provider with Terraform v0.14
- Using the Pure Storage Cloud Block Store Terraform Provider for Azure
- Cloud Block Store Use Cases for Microsoft Azure - Terraform Edition
- Using the Pure Storage Cloud Block Store Terraform Provider for AWS
- Deploying a Linux EC2 Instance with Hashicorp Terraform and Vault to AWS and Connect to Pure Cloud Block Store