In my last post, I provided a detailed explanation of the migration methods available within VMware HCX. Specifically, I discussed when to use No-Downtime, Cold Migration, and Bulk Replication, and how each method operates. In this final post on HCX migration, I explain and walk through the workflow used to migrate to and from the IBM Cloud.
In my last post, I provided a quick component-level overview of VMware Hybrid Cloud Extension (HCX). I also discussed how to deploy each component to securely connect your on-premises vSphere infrastructure to a VMware vCenter Server or VMware Cloud Foundation instance within the IBM Cloud. In this post, I explain the three main HCX migration methods that you can use to migrate to and from the IBM Cloud.
Part of my role within the Office of the CTO at IBM is to evaluate innovative solutions and lead proof-of-concepts with clients using new technologies. Lately, I've been focused on understanding VMware Hybrid Cloud Extension (HCX). I've also been fortunate to deploy pre-GA releases of this solution, as proof-of-concepts, for customers who want to migrate their VMware workloads into the IBM Cloud. So, what is Hybrid Cloud Extension, and how does one deploy it on-prem and on the IBM Cloud? First, let's start with a little history of the product.
In my previous post, I provided a high-level component overview of the AWS VPC construct. I also hinted at a multi-part guide on how to create a VPC and ultimately connect it to your on-prem environment. This post is the first part of that guide. By the end, you should understand how to create a VPC, assign subnets, and associate subnets to a route table.
The Amazon Virtual Private Cloud (VPC) is a logical networking construct that provides the network layer for EC2 instances and other AWS services. Essentially, it's a software-defined networking solution that gives you the flexibility to bring your own IP addresses to the cloud, create subnets, configure routing, and implement security and access policies. While there's much we could do with VPCs, I want to keep this post brief and simply explain VPC concepts. I do plan to build on this post, so stay tuned for multi-part guides on how to create VPCs and connect them to your on-prem or cloud environments.
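To make the subnetting idea concrete, here's a minimal sketch using Python's standard `ipaddress` module. The VPC CIDR block and the /18 split below are hypothetical choices for illustration, not values from any particular deployment (AWS accepts VPC blocks between /16 and /28):

```python
import ipaddress

# Hypothetical VPC CIDR block you "bring" to the cloud.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

# Carve the VPC range into four equal /18 subnets,
# e.g., one per availability zone plus a spare.
subnets = list(vpc_cidr.subnets(new_prefix=18))
for s in subnets:
    print(s)
# 10.0.0.0/18
# 10.0.64.0/18
# 10.0.128.0/18
# 10.0.192.0/18
```

Each resulting subnet would then be associated with a route table that controls where its traffic may flow.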
The private network within the IBM Cloud is always assigned an address from a 10.X subnet. As a result, reaching the IBM Cloud network from addresses outside the 10.X range can be a challenge, since the network will drop packets from an unrecognized source address. For example, if you created an IPsec VPN from your on-prem environment that uses a 192.X address, you would not be able to reach vCenter or any other VLAN-backed 10.X address resident on the IBM Cloud private network. The same is true for VXLAN-backed virtual machines assigned addresses outside of the IBM Cloud address space. This is why we must use NAT.
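The address check behind that drop behavior can be illustrated with Python's `ipaddress` module. The specific source and NAT addresses below are hypothetical; the point is simply that only sources inside the 10.X range are recognized, which is what source NAT provides:

```python
import ipaddress

# The IBM Cloud private network recognizes sources in the 10.X range.
ibm_private = ipaddress.ip_network("10.0.0.0/8")

onprem_src = ipaddress.ip_address("192.168.10.25")  # hypothetical on-prem host
nat_src = ipaddress.ip_address("10.200.8.5")        # hypothetical NAT'd source

print(onprem_src in ibm_private)  # False -> packet dropped
print(nat_src in ibm_private)     # True  -> packet routed
```

Source NAT rewrites the 192.X address to a 10.X address before the packet enters the private network, so the far side sees a recognized source.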
I get many questions from our field teams and clients on how to connect an on-prem environment to a VMware Cloud Foundation (VCF) instance deployed in the IBM Cloud. While there are a few hardware options available within the IBM Cloud catalog (e.g., Fortinet and Vyatta), I typically recommend using an NSX Edge Services Gateway (ESG) to terminate VPN connections. There are cases where other devices might be more suitable, but I'll save that discussion for another post. In this post, I will show you how to terminate an IPsec connection to a VCF instance deployed in the IBM Cloud using an NSX ESG. I'll be using the public internet as my connection medium. This is fine for a proof-of-concept, but you'll want to use IBM's direct connection options for production.