- VMware VCF 5.2 - What’s New
VMware Cloud Foundation (VCF) 5.2 introduces several enhancements aimed at improving operational efficiency and lifecycle management. Independent upgrades for SDDC Manager, flexible patching strategies, and a new graphical interface for patching simplify the management and maintenance of cloud environments. This article provides a comprehensive overview of these advancements and their implications for the future of cloud operations.

Infrastructure Platform Enhancement & Modernization:
- VCF Import: Allows seamless integration of existing vSphere, vSAN, VMFS on FC, and NFS environments into VCF. It supports flexible networking options and delivers reduced downtime, simplified migration of vSphere environments to VCF, and enhanced system reliability.
- Independent TKG Service: Decoupled from vCenter Server, this significant change means customers can upgrade the TKG Service independently, without having to upgrade vCenter Server, and can therefore adopt new Kubernetes versions faster than ever before.
- Flexible Edges: Enables businesses to deploy edge infrastructure everywhere, beyond data centers to the sites where data is produced and consumed, placing edges close to users for better performance, efficiency, and security.

Network & Security Enhancement:
- Dual DPU Support: A single ESXi host can leverage two Data Processing Units (DPUs) for high availability and performance.
- Configure an Isolated Domain with a Shared NSX Instance: Provides the flexibility to run multiple isolated domains on a shared NSX instance.
- TEP Performance Enhancements: The TEPs of an edge can be bundled into a TEP group to gain higher throughput.

Storage Feature Enhancement:
- vSAN Storage Cluster Support: A disaggregated storage cluster solution powered by vSAN ESA.
- vSAN ESA Stretched Cluster: VCF 5.2 supports the Express Storage Architecture (ESA) in a stretched cluster topology, providing site-level resiliency.
Lifecycle Management Enhancement:
- Independently Upgrade SDDC Manager: You can now take advantage of new management capabilities and fixes without upgrading the full management domain BOM.
- Upgrade or Patch Domains from SDDC Manager: Patches can also be applied during upgrade workflows.
- Create an Offline Depot Local Patch Repository: Create an offline depot (a local mirror of the online depot) on a web server. This eliminates the need to manually copy and import bundles to each VCF instance.
- vSphere Live Patching: Allows updating ESXi hosts without rebooting, under certain circumstances.

Migration Enhancement:
- HCX Performance Enhancements: Up to 1,000 total simultaneous/concurrent migrations.
- HCX Assisted vMotion (HAV): HCX migration orchestration combined with ESXi/vCenter vSphere vMotion.
- HCX Traffic Engineering: NE appliance throughput of 7-8 Gbps (about 1.4x faster Network Extension).
- HCX OS Assisted Migration: Massive footprint reduction and a simplified migration data path for OSAM.

Reference Doc: VMware Cloud Foundation 5.2 Release Notes
- Part 1: Onboarding Brownfield vSphere Environment into VCF as a "Management Workload Domain"
This blog post provides a comprehensive, step-by-step guide on how to convert or import an existing brownfield vSphere environment into VCF. VCF 5.2 introduces a new feature, the VCF Import Tool, which converts or imports an existing brownfield vSphere and vSAN environment into either a VCF management domain or a VI workload domain without rebuilding your existing environment. If you do not already have SDDC Manager deployed, you can deploy it on an existing vSphere environment and use the VCF Import Tool to convert that environment into the VMware Cloud Foundation management domain. If SDDC Manager is already deployed, you can use the VCF Import Tool to import existing vSphere environments as VI workload domains. Refer to the official documentation below for supported scenarios and considerations: Supported Scenarios for Converting or Importing vSphere Environments to VMware Cloud Foundation; Considerations Before Converting or Importing Existing vSphere Environments into VMware Cloud Foundation. Note: If you haven’t seen the release notes yet, please check out: VMware Cloud Foundation 5.2 Release Notes. This blog is in two parts, covering both Convert (for the management WLD) and Import (for the VI WLD): Part 1: Onboarding a Brownfield vSphere Environment into VCF as a Management Workload Domain. Part 2: Onboarding a Brownfield vSphere Environment into VCF as a VI Workload Domain. Let's get started. Part 1: Onboarding a Brownfield vSphere Environment into VCF as a Management Workload Domain. Overview of the existing vSphere environment setup before converting to a VCF management workload domain: Let’s do a quick walkthrough of the existing vSphere environment and confirm that all the prerequisites and considerations are met before converting it to a VCF management domain.
Existing environment:
- 4 ESXi hosts (8.0.3) with FQDNs and static IPs
- vCenter (8.0.3)
- vSAN cluster, single site
- Mgmt, vMotion, vSAN, and VM Network traffic backed by a vDS
- DRS set to fully automated; HA enabled

All the VMkernel ports are configured with static IP addresses, DRS mode is set to fully automated, and the vSAN cluster is single-site. We have now validated and met all the prerequisites for a successful VCF convert. Let’s move on and download the required software for the convert/import.

Required software for converting (download from https://support.broadcom.com/):
- VCF SDDC Manager Appliance
- VCF Import Tool
- VMware Software Install Bundle - NSX_T_MANAGER 4.2.1.0

Run a precheck on the target vCenter before conversion. The precheck determines whether the environment is ready to be converted to the management domain:
- Copy the VCF Import Tool to the target vCenter appliance.
- Run the precheck on the target vCenter.

Copy the VCF Import Tool to the target vCenter and extract it:
- SSH to the vCenter Server as root.
- Change the default shell from /bin/appliancesh to /bin/bash to allow copying files to vCenter: # chsh -s /bin/bash root
- Create a directory for the VCF Import Tool (I created "vcfimport"): # mkdir /tmp/vcfimport
- Copy the required software into the directory.
- Extract the bundle: # tar -xvf vcf-brownfield-import-.tar.gz

Now run the precheck on the target vCenter before conversion. Success! The prechecks on the target vCenter have passed, and we are good to proceed.

Deploy the SDDC Manager appliance: Deploy the SDDC Manager appliance on the target vCenter before converting the vCenter to the VMware Cloud Foundation management domain. Once the appliance is deployed successfully, power it on and wait for the shell to initialize. The UI will not initialize at this point; it becomes available only after the management workload domain is imported successfully.
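The copy-and-extract steps above can be rehearsed locally before touching the vCenter appliance. The sketch below simulates the workflow with a placeholder tarball; the real bundle name includes a version string (elided in this post as vcf-brownfield-import-.tar.gz), and the scp/chsh steps shown in comments are the ones from the procedure above.

```shell
# Simulation of staging and extracting the VCF Import Tool bundle.
# Paths and the bundle name here are demo placeholders, not the real artifact.
set -e
WORKDIR=$(mktemp -d)                 # stands in for /tmp/vcfimport on the vCenter appliance
mkdir -p "$WORKDIR/stage"
echo 'print("vcf import tool placeholder")' > "$WORKDIR/stage/vcf_brownfield.py"
tar -czf "$WORKDIR/vcf-brownfield-import-demo.tar.gz" -C "$WORKDIR/stage" vcf_brownfield.py

# On the real appliance you would first run:  chsh -s /bin/bash root
# then copy the bundle:  scp vcf-brownfield-import-*.tar.gz root@<vcenter-fqdn>:/tmp/vcfimport/

tar -xvf "$WORKDIR/vcf-brownfield-import-demo.tar.gz" -C "$WORKDIR"   # GNU tar auto-detects gzip
ls "$WORKDIR/vcf_brownfield.py"      # the tool script is now in place and ready to run
```

After extraction on the real appliance, the precheck is launched from the extracted directory with python3, as shown later in the post.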
Official document on how to deploy SDDC Manager: Deploy the SDDC Manager Appliance on the Target vCenter. Now upload the software to SDDC Manager and perform detailed checks on the target vCenter. Success! The checks from SDDC Manager against the target vCenter have passed, and we are good to proceed.

Upload the NSX bundle and generate an NSX deployment specification: To deploy NSX Manager when you convert or import a vSphere environment into VMware Cloud Foundation, you must create an NSX deployment specification.
- Download the NSX software.
- Upload it to the SDDC Manager bundle location: /nfs/vmware/vcf/nfs-mount/bundle
- Create nsx_spec.json with your NSX IP/FQDN information and upload it to SDDC Manager (I copied it to /home/vcf).

We will deploy the NSX Manager cluster as part of the workload domain convert process. This workflow covers the following tasks:
- Deploy a three-node NSX Manager cluster.
- Assign a cluster VIP to the NSX Manager cluster.
- Add the management vCenter Server as a compute manager in NSX.
- Prepare the management vSphere cluster with NSX on the DVPG.

Follow the official document below to create your NSX JSON file with all required values, as I did before uploading it to SDDC Manager for NSX deployment: Generate an NSX Deployment Specification for Converting or Importing Existing vSphere Environments. Note: make sure DNS host records are created for the NSX management cluster.

Now, onboard the vSphere environment into the management WLD via SDDC Manager: At this stage, we are ready to run the VCF Import Tool and start converting the existing vSphere environment into a VCF management workload domain.

Prerequisites: Take a snapshot of SDDC Manager.

Procedure:
1. SSH to the SDDC Manager VM as user vcf.
2. Navigate to the directory where you copied the VCF Import Tool.
3. Run the vcf_brownfield.py script and enter the required passwords when prompted: # python3 vcf_brownfield.py convert --vcenter '' --sso-user '' --domain-name '' --nsx-deployment-spec-path ''
4. Inspect the command outputs highlighted in yellow.
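A malformed nsx_spec.json is a common reason the convert run fails at validation, so it is worth syntax-checking the file before launching vcf_brownfield.py. The field names in the sketch below are illustrative placeholders only; take the real schema from the official "Generate an NSX Deployment Specification" document referenced above.

```shell
# Write a demo spec and verify it parses as JSON before using it with
# vcf_brownfield.py convert --nsx-deployment-spec-path <path>.
# All keys/values below are made-up placeholders, NOT the documented schema.
cat > /tmp/nsx_spec_demo.json <<'EOF'
{
  "nsx_manager_virtual_ip_fqdn": "nsx-vip.example.local",
  "form_factor": "medium",
  "nsx_managers": [
    {"fqdn": "nsx-01.example.local", "ip": "192.168.10.11"},
    {"fqdn": "nsx-02.example.local", "ip": "192.168.10.12"},
    {"fqdn": "nsx-03.example.local", "ip": "192.168.10.13"}
  ]
}
EOF
# json.tool exits non-zero on any syntax error (trailing comma, bad quote, etc.)
python3 -m json.tool /tmp/nsx_spec_demo.json > /dev/null && echo "nsx_spec: valid JSON"
```

Running this check first catches quoting and comma mistakes locally instead of partway through a two-hour convert workflow.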
All should return status code 200. Switch to the root account upon successful conversion and restart all SDDC Manager services: # echo 'y' | /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh

Once all the SDDC Manager services have restarted, the new workload domain (management domain or VI workload domain) should appear in the SDDC Manager UI. Below are the screenshots of the convert operation. Once started, it took more than 2 hours for me to go through the full validation and conversion process, as you can see below. Keep monitoring for any errors or failures; if you encounter a failure, check the logs and fix the issue to proceed. Success! As shown above, the VCF convert operation completed successfully.

Perform health-check validation on the converted management domain: We will now do a quick walkthrough of the SDDC Manager console and validate the imported management workload domain, using the following methods:
- SDDC UI
- SDDC CLI
- SDDC precheck

SDDC UI: see the screenshots. SDDC CLI: you can run the health-check CLI to generate a comprehensive report, as shown below. SDDC precheck: let’s run the prechecks from SDDC Manager on the management workload domain for general upgrade readiness and see if there are any critical errors or warnings.

Congratulations! We have successfully converted an existing brownfield vSphere environment into a VCF management domain.

Conclusion: By following these steps, you can effectively onboard your existing vSphere environment into VCF as a management workload domain. This enables you to leverage the benefits of VCF, such as simplified management, enhanced security, and automated operations. In the next part (Part 2) of this series, we will delve into onboarding a brownfield vSphere environment into VCF as a VI workload domain. Part 2 link: Part 2: Onboarding a Brownfield vSphere Environment into VCF as a "VI Workload Domain"
- Part 2: Onboarding Brownfield vSphere Environment into VCF as a "VI Workload Domain"
Welcome back! In the previous part (Part 1) of this series, we introduced the VCF Import Tool and its capabilities for onboarding a brownfield vSphere environment into VCF as a management workload domain. In this Part 2 blog post, we'll dive deeper with a detailed step-by-step guide on onboarding a brownfield vSphere environment into VCF as a VI workload domain. Please note that the considerations for onboarding brownfield deployments into the VCF management workload domain remain relevant for the VI workload domain as well. Here is the official document for your reference, if needed: Considerations Before Converting or Importing Existing vSphere Environments into VMware Cloud Foundation.

Let's get started. Part 2: Onboarding a Brownfield vSphere Environment into VCF as a "VI Workload Domain". Note: as mentioned in the previous blog, VCF Import provides both "Convert" and "Import" capabilities. Convert is used to form a management WLD, while Import is used to bring an environment into VCF as a VI WLD.

Overview of the existing brownfield vSphere environment setup before importing into VCF as a VI workload domain: Let’s do a quick walkthrough of the existing vSphere environment and confirm that all the prerequisites and considerations are met before importing it into a VCF VI workload domain.
- vCenter (8.0.3): vc-l-03b.x.x
- Clusters: two clusters ("Cls-01-WLD" and "Cls-02-WLD")
- Storage: NFS datastore
- Networking: vDS-backed networking for Mgmt, vMotion, NFS, and VM traffic
- ESXi hosts, Cls-01-WLD: esx-10b.x.x - 192.168.x.x; esx-11b.x.x - 192.168.x.x; esx-12b.x.x - 192.168.x.x
- ESXi hosts, Cls-02-WLD: esx-03a.x.x - 192.168.x.x; esx-05a.x.x - 192.168.x.x

We have now validated and met all the prerequisites for a successful VCF import. Let’s move on and download the required VCF Import Tool software from https://support.broadcom.com/.
Please note this is the same software you used in Part 1 while converting to the VCF management domain; if you still have it downloaded and saved locally, use the same copy.

Required software for importing:
- VCF Import Tool
- VMware Software Install Bundle - NSX_T_MANAGER

Upload the required software to SDDC Manager. Procedure:
1. SSH to SDDC Manager as user vcf.
2. Copy the NSX deployment bundle "bundle-.zip" to the /nfs/vmware/vcf/nfs-mount/bundle/ folder.
3. Copy the VCF Import Tool to SDDC Manager: create a folder for the VCF Import Tool and copy "vcf-brownfield-import-.tar.gz" into it.
4. Extract the bundle: # tar -xvf vcf-brownfield-import-.tar.gz
5. Verify that the scripts extracted correctly: # python3 vcf_brownfield.py --help

Generate an NSX deployment specification for importing an existing brownfield vSphere environment into VCF: To deploy NSX Manager when you import a vSphere environment into VCF, you must create an NSX deployment specification. NSX deployment requires a minimum of 3 hosts. Make sure DNS host records are created for the NSX management cluster. Create "nsx_spec.json" with your NSX IP/FQDN information and upload it to SDDC Manager (I copied it to /home/vcf/vcfimport). We will deploy the NSX Manager cluster as part of the VI workload domain import process. This workflow covers the following tasks:
- Deploy a three-node NSX Manager cluster.
- Assign a cluster VIP to the NSX Manager cluster.
- Add the management vCenter Server as a compute manager in NSX.

Please follow the official document below to create your NSX JSON file with all required values.
Generate an NSX Deployment Specification for Converting or Importing Existing vSphere Environments

Run a detailed check on the target vCenter before import: Before performing a VI workload domain import, we must run a detailed check to ensure that the existing vSphere environment's configuration is supported for a successful import. Procedure:
1. SSH to SDDC Manager as user vcf.
2. Navigate to the directory where you copied the VCF Import Tool and run the "check" command.
3. If any checks fail, refer to the guardrails YAML file for information on fixing the failed check.

Success! The checks from SDDC Manager against the target vCenter have passed, as shown in the screenshot above, and we are good to proceed.

Now, import the existing vSphere environment into SDDC Manager as a VI WLD: At this stage, we are ready to run the VCF Import Tool and start the import of the existing vSphere environment into VCF as a VI workload domain.

Prerequisites: Take a snapshot of SDDC Manager.

Procedure:
1. SSH to the SDDC Manager VM as user vcf.
2. Navigate to the directory where you copied the VCF Import Tool.
3. Run the vcf_brownfield.py script and enter the required passwords when prompted: # python3 vcf_brownfield.py import --vcenter '' --sso-user '' --domain-name '' --nsx-deployment-spec-path ''
4. Inspect the command outputs highlighted in yellow. All should return status code 200.
5. Upon successful import, switch to the root account and restart all SDDC Manager services: # echo 'y' | /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh

Once all the SDDC Manager services have restarted, the new VI workload domain should appear in the SDDC Manager UI. Below are the screenshots of the import operation. Once the import job has started, you can also track the running job and configuration status from the SDDC Manager UI. NSX Manager deployment then begins, which you can verify from vCenter, as shown below. Success!
As we can see above, the VCF import operation completed successfully. It took more than 2 hours for me to go through the full validation and import process. Once you start the import, keep monitoring until it completes successfully. If any guardrails fail, refer to the YAML log file for information on the failed task, troubleshoot, and restart the import job.

Perform health-check validation on the imported VI workload domain: After a successful import, it is important to perform validation and run an upgrade precheck to identify any potential issues. Below are the methods, with attached screenshots, for performing validation and health checks:
- SDDC UI
- SDDC CLI
- SDDC precheck

SDDC precheck procedure:
1. Log in to the SDDC Manager UI.
2. Navigate to Workload Domains and click the workload domain name.
3. Click the Updates tab and click Run Precheck.
4. Under Target Version, select General Upgrade Readiness and select all components.
5. Click Run Precheck and review the results.

Congratulations! We have successfully imported the existing brownfield vSphere environment into VCF as a VI workload domain.

Conclusion: By following these steps, you can effectively and seamlessly import your existing vSphere environment into VCF as a VI workload domain. This enables you to leverage the benefits of VCF, such as simplified management, enhanced security, and automated operations.
- Restore SDDC Manager from File-Based Backups
As we all know, regular backup of the management components is critical: it ensures you can keep your environment operational by restoring it if data loss or a failure occurs. In this blog post, we'll restore SDDC Manager to a fully operational state from its file-based backup.

Table of contents:
- Overview of the SDDC Manager backup configuration (prior to restoration)
- Prerequisites
- Prepare for restoring SDDC Manager
- Restore SDDC Manager from a file-based backup
- Health-check validation after the SDDC Manager restore

Overview of the SDDC Manager backup configuration (prior to restoration): see the screenshots.

Prerequisites:
a. Make sure the failed SDDC Manager is powered off and renamed.
b. Validate that a valid file-based backup exists.
c. Have the SFTP server details and credentials at hand.

Prepare for restoring SDDC Manager. Procedure:
a. SSH to the SFTP server and go to the backup file location.
b. Extract the backup file: # openssl enc -d -aes256 -in vcf-backup-sddc-manager-xxxxx-xxxxxx.tar.gz | tar -xzp
   When prompted, enter the encryption password. (Note: you can either use the same server, or download the backup file to a local system and run the command there. I used the same SFTP server to extract the file and restore.)
c. Once extracted, locate and open the metadata.json file.
d. Locate the sddc_manager_ova_location value and copy the URL to download the SDDC Manager OVA file.
e. Open security_password_vault.json and record the backup password as well.

Restore SDDC Manager from the file-based backup. Procedure:
a. Log in to the management domain vCenter and deploy a new SDDC Manager appliance using the OVA file that you downloaded during the restore preparation.
b. During OVA deployment, the information you provide must match the metadata.json file that you downloaded during the preparation.
c. After the SDDC Manager deployment completes, take a snapshot.
d. Power on the VM.
e. Copy the encrypted backup file to the /tmp folder on the newly deployed SDDC Manager appliance, using the CLI or WinSCP.
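The decrypt-and-extract in step (b) above can be rehearsed locally with a throwaway archive before touching a real backup. The sketch below builds a small encrypted tarball and then unpacks it with the same openssl-pipe-to-tar shape the post uses. The passphrase and file names are demo values; note the demo adds -pbkdf2 and an explicit `-f -` on tar so the encrypt/decrypt options match and tar reads stdin portably, whereas a real SDDC Manager backup should be decrypted with the exact command from the procedure and your configured encryption password.

```shell
# Local round-trip simulation of an encrypted file-based backup.
set -e
DEMO=$(mktemp -d); cd "$DEMO"
mkdir payload
echo '{"demo":"metadata"}' > payload/metadata.json        # stands in for metadata.json
# Simulate creating the encrypted backup (demo passphrase, matched flags):
tar -czf - payload | openssl enc -aes256 -pbkdf2 -pass pass:'VMware123!' -out vcf-backup-demo.tar.gz
rm -rf payload                                            # pretend we only have the encrypted file
# Same shape as step (b): decrypt and pipe straight into tar.
openssl enc -d -aes256 -pbkdf2 -pass pass:'VMware123!' -in vcf-backup-demo.tar.gz | tar -xzpf -
cat payload/metadata.json                                 # metadata.json recovered
```

If the passphrase is wrong, openssl fails with a bad-decrypt error and tar extracts nothing, which is also the quickest way to verify you recorded the right encryption password.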
I used the CLI: # scp filename-of-restore-file vcf@sddc_manager_fqdn:/tmp/

f. Obtain an authentication token from the SDDC Manager appliance in order to execute the restore process, by running the following command: # TOKEN=`curl https:///v1/tokens -k -X POST -H "Content-Type: application/json" -d '{"username": "admin@local","password": " "}' | awk -F "\"" '{ print $4}'`

g. Run the command to start the restore process. Before running it, update the values (highlighted in bold below) with your details: curl https://< sddc_man_fqdn >/v1/restores/tasks -k -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" \ -d '{ "elements" : [ { "resourceType" : "SDDC_MANAGER" } ], "backupFile" : "< backup_file >", "encryption" : { "passphrase" : "< encryption_password >" } }'

h. Record the ID of the restore task returned by the command above.

i. Monitor the restore task using the following command until the status becomes Successful: # curl https:///v1/restores/tasks/ -k -X GET -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN"

Screenshots for reference: SDDC Manager deployment; snapshot captured; copying the encrypted backup file to /tmp; obtaining the authentication token; starting the restore process; monitoring the restore task. Monitor the restore task from the command line until the status becomes Successful. For me, it took almost 15 to 20 minutes; this may vary depending on the size of the environment.

Health-check validation after the SDDC Manager restore: SDDC UI health check and CLI health check (see screenshots). Congratulations! We've successfully restored SDDC Manager from a file-based backup.

Conclusion: By following the steps above, you can effectively restore your SDDC Manager from a file-based backup.
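The token command in step (f) simply pulls the fourth double-quote-delimited field out of the JSON login response with awk. The sketch below replays that extraction against a made-up response body (the response shape is an assumption for illustration; the real endpoint is the POST https://<sddc_manager_fqdn>/v1/tokens call shown above), so you can see exactly which field the awk expression grabs.

```shell
# Simulated /v1/tokens response; the field names and token value are demo
# placeholders, not captured from a real SDDC Manager.
RESPONSE='{"accessToken":"eyJhbGciOi-demo-token","refreshToken":{"id":"abc-123"}}'

# Identical awk logic to the post: split on double quotes, print field 4.
# Fields: 1={  2=accessToken  3=:  4=<the token itself>
TOKEN=$(printf '%s' "$RESPONSE" | awk -F '"' '{ print $4 }')
echo "$TOKEN"

# The extracted token is then passed to the restore endpoints as:
#   -H "Authorization: Bearer $TOKEN"
```

This is also why the extraction silently breaks if the response format changes (for example, an error body instead of a token): field 4 would then contain something else entirely, so it is worth echoing $TOKEN before using it.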
- How to Apply for the VMware vExpert 2025 Program
The VMware vExpert Program is a prestigious recognition awarded to professionals who go above and beyond to contribute to VMware's ecosystem. It honors individuals who actively share their knowledge, advocate for VMware technologies, and engage with the community through various mediums like blogging, vlogging, authoring, VCDX certification, VMUG leadership, and other impactful efforts. Being part of the vExpert community not only amplifies your professional credibility but also connects you with a global network of technology experts and leaders. If you're passionate about VMware and eager to share your expertise with the broader community, here's a step-by-step guide on how to apply for the vExpert 2025 program and take the next step toward becoming a recognized thought leader in the VMware ecosystem.

Steps to apply for VMware vExpert 2025:
- Visit the vExpert website: go to the VMware vExpert Portal.
- Log in or create an account: if you're a returning applicant, log in with your existing credentials; new applicants will need to create an account on the portal.
- Start your application: once logged in, navigate to the "Applications" section and click the "vExpert 2025 Application" link.
- Choose your application type: New Applicant if you're applying for the first time; Renewal if you've been a vExpert before and are renewing your status; Proving Contribution if you are switching to a different vExpert path.
- Fill out the application: provide detailed information about your contributions to the VMware community in the past year. Examples include blog posts, technical guides, or case studies; public speaking engagements, webinars, or podcasts; community involvement, such as answering questions in forums or organizing events; and social media advocacy for VMware products and technologies.
- Provide evidence of contributions: include links to blog posts, social media posts, YouTube videos, or any other relevant material showcasing your VMware-related work.
- Provide a reference (optional): you can provide the email ID of an existing vExpert who has guided you on your path to becoming a vExpert, or the name of a vExpert Pro if you know one.
- Submit the application: review your application carefully and click "Submit/Save." The portal will confirm a successful submission.
- Wait for feedback: applications are reviewed by the VMware vExpert team and a panel of experts. You'll be notified via email if you are selected.

Becoming a VMware vExpert offers numerous benefits, both tangible and intangible, for professionals engaged with VMware technologies. Here are the key advantages of earning this prestigious title:
- Recognition in the VMware community. Prestige: being a vExpert establishes you as a thought leader and trusted contributor within the VMware ecosystem. Global visibility: your name is listed in the official vExpert Directory, showcasing your expertise.
- Access to VMware products. NFR licenses: free Not-For-Resale (NFR) licenses for various VMware products, allowing you to build labs and explore advanced features.
- Contribution to the community. Give back: use your platform to educate others, share knowledge, and shape the future of the VMware ecosystem.

Tips for a successful application:
- Highlight unique contributions, especially those that impact a broader audience.
- Be clear and concise about your role and achievements.
- If possible, provide measurable results (e.g., "Reached 5,000 readers through my VMware blog posts").
- Showcase a variety of contribution types for a well-rounded profile.

Deadlines and announcements: the application period is typically announced on the portal and social media; for vExpert 2025, the last date to apply is January 10, 2025. Accepted applicants are usually notified a few weeks after the application period closes.
The VMware vExpert Program is not just a title but a gateway to becoming an influential member of a vibrant, global community. It provides unparalleled opportunities to enhance your professional journey, offering access to exclusive tools, resources, and networking avenues to advance both your technical expertise and career.
- Unable To Decommission Hosts in SDDC Manager due to Failed or In-progress Password Task
Product: VCF 5.2. Issue: Unable to decommission hosts in SDDC Manager; the error reported is "This operation is not allowed because the system lock is held by the Password Manager operation in progress. Please unselect the host to proceed further." The error appears in the SDDC Manager UI, and the "Decommission Selected Hosts" button is grayed out because of it. No tasks were observed running or failed in the SDDC Manager UI.

Resolution:
1. Take a snapshot (without memory) of SDDC Manager.
2. SSH to the SDDC Manager VM and switch to root.
3. Run the command below to list password operations in a FAILED or PREVALIDATION_FAILED state, if any: # psql -h localhost -U postgres -d operationsmanager -c "select workflow_id, operation_type, transaction_status from passwordmanager.password_operations where transaction_status='FAILED' OR transaction_status='PREVALIDATION_FAILED';" As shown in the screenshot above, one failed task is reported.
4. Delete the password operation task from SDDC Manager using the API Explorer.
5. Re-verify by running the command again from the SDDC Manager CLI; as the screenshot below shows, the task has been successfully deleted. The button is now available and you can decommission the hosts. Congratulations!
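The query above filters only FAILED and PREVALIDATION_FAILED rows, while the UI error also mentions an operation "in progress". A broader variant is sketched below; note the IN_PROGRESS state string is an assumption for illustration, so confirm the actual enum values in your environment first (for example with `select distinct transaction_status from passwordmanager.password_operations;`) before relying on it. The block only prints the SQL; on SDDC Manager as root you would feed it to psql as shown in the comment.

```shell
# Broader diagnostic query over the same table/columns the post uses.
# 'IN_PROGRESS' is an assumed state name -- verify against your own data first.
SQL="select workflow_id, operation_type, transaction_status
     from passwordmanager.password_operations
     where transaction_status in ('FAILED','PREVALIDATION_FAILED','IN_PROGRESS');"
echo "$SQL"
# On SDDC Manager:
#   psql -h localhost -U postgres -d operationsmanager -c "$SQL"
```

As with the original procedure, only read from the database here; the stuck operation itself should still be deleted through the supported API Explorer route, not with SQL deletes.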
- Upgrading VMware Cloud Foundation 4.5.2 to 5.2.1
With the release of VMware Cloud Foundation (VCF) 5.2.1, you can perform a sequential or skip-level upgrade to VMware Cloud Foundation 5.2.x from VMware Cloud Foundation 4.5 or later. If your environment is at a version earlier than 4.5, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.5 or later before you can upgrade to VMware Cloud Foundation 5.2.x. In my previous blog, we covered the detailed steps to upgrade from VCF 5.x to 5.x. While the focus remains on moving or converting all customers to VCF, the bare minimum version any customer should be on is VCF 5.2.x; likewise, any customer who wants to import an existing vSphere cluster into VCF must be on at least VCF 5.2.x. I'll share more details on how to import a vSphere cluster into VCF 5.2.x in upcoming blogs. For now, let's dive into the detailed steps for the VCF 4.5.2 to 5.2.1 upgrade.

Before the upgrade: bundle download and prechecks.
- Log in to the SDDC Manager UI.
- Navigate to the management domain, run the prechecks, and make sure everything is green.
- Navigate to Lifecycle Management > Bundle Management and download all the 5.2.1 bundles: SDDC Manager, the SDDC drift bundle, NSX-T, vCenter, and finally ESXi.

Step 1: Upgrade SDDC Manager.
- Log in to the SDDC Manager UI.
- Navigate to Lifecycle Management > Updates.
- Select the SDDC Manager upgrade package and click Download and Install.
- Monitor the upgrade progress and ensure SDDC Manager is successfully updated.

Step 2: Upgrade the management domain components. Upgrade the management domain first to ensure all core services are updated. This includes NSX-T, vCenter Server, and the ESXi hosts in the management domain.

NSX-T upgrade:
- Upgrade the NSX-T Manager, controllers, and edge nodes.
- Ensure network connectivity remains intact and validate routing.
- Monitor the process and verify NSX Manager accessibility post-upgrade.
vCenter upgrade:
- From the SDDC Manager UI: management domain -> Updates -> Prechecks -> Configure Upgrade -> enter the temporary IP for vCenter and proceed to finish.
- Monitor the process and verify vCenter accessibility post-upgrade.

ESXi hosts upgrade:
- Run the precheck, make sure everything is green, and then proceed.
- Management domain -> Updates -> Prechecks -> Configure Upgrade -> select the cluster -> Upgrade Now -> review -> finish.
- Monitor the process and verify ESXi accessibility post-upgrade.

The upgrade from 4.5.2 to 5.2.1 completed without any issues in my lab; it was smooth.

Post-upgrade validations:
- Verify the health of all components using the SDDC Manager UI.
- Run vSAN health checks and NSX-T Manager checks, and ensure vCenter Server is operational.
- Check for any warnings or errors in the SDDC Manager dashboard.
- Make sure the new licenses are updated under SDDC Manager -> Licensing.

Clean up and finalize:
- Remove any pre-upgrade snapshots to free up storage.
- Update documentation with the new versions of all components.
- After the upgrade and validation, make sure all components match the VCF 5.2.1 BOM.

Conclusion: Upgrading from VMware Cloud Foundation 4.5.2 to 5.2.1 is a significant step that delivers improved performance, security, and capabilities for your private cloud. Embrace the latest innovations in VMware Cloud Foundation to keep your environment future-ready and resilient!
- Extract a vSphere Lifecycle Manager Image in SDDC Manager from Management WLD
You can extract a vSphere Lifecycle Manager image from either the management WLD or a VI WLD, based on your requirements, from a vCenter Server managed by VMware Cloud Foundation.

Prerequisites: A vSphere Lifecycle Manager image must have been created in vSphere 7.0. For more information, see Create a vSphere Lifecycle Manager Image.

Procedure:
1. From the navigation bar, click Lifecycle Management > Image Management.
2. Click the Import Image tab.
3. In the Option 1 section, select a workload domain.
4. Select the cluster from which you want to extract the vSphere Lifecycle Manager image.
5. Click Extract Cluster Image.

Once extracted, the cluster image is displayed in the Available Images tab and can be used for a new VI workload domain, or for a new cluster in a VI workload domain enabled for vSphere Lifecycle Manager images. Screenshots are provided below for further reference. Conclusion: As we can see in the screenshot above, the vSphere image is now available for use.
- Gateway Cutover: From Source Site to Target Site Using HCX
Gateway cutover is a critical activity that marks the final phase of the migration process. It is performed only after all virtual machines (VMs) and workloads have been successfully migrated from the source site to the target data center, ensuring minimal disruption and seamless continuity of operations. In this blog post, we will demonstrate the gateway cutover process using HCX (Hybrid Cloud Extension) and walk through all the steps involved in executing it effectively.

Why gateway cutover matters: Gateway cutover is essential for ensuring that migrated workloads at the target site operate with optimal performance. Without this step, workloads may still rely on the source site for their network gateway, leading to increased latency and degraded performance.

Observing the need for gateway cutover: In the screenshot below, you can see an example where a VM has been migrated to the target site but its gateway is still located at the source site. The latency is significantly higher because network traffic is routed back to the source site to reach the gateway, which reinforces the importance of completing the gateway cutover.

Now we will walk through the gateway cutover process using HCX. Prerequisites:
- No workloads/VMs at the source site should still be connected to the VLAN/segment you plan to cut over.
- Perform a health check from the HCX Connector and verify that the status is "ok".

Step-by-step procedure:
1. Log in to the HCX Connector UI (source site, port 443).
2. Go to Services > Network Extension.
3. Validate and verify that the network status is healthy (ok).
4. Check the box for the network you want to unextend.
5. Click Unextend Networks.
6. Check the box "Connect cloud network to cloud edge gateway after unextending" and click Unextend.
7. Monitor the status for progress and completion.
8. Log in to the target site NSX Manager and verify the status and health of the unextended network.
It should be connected to "t1" as can see below in screenshot. Post-Cutover Validation: Verifying successful cutover and ensuring workloads operate as expected. Conclusion: By following these steps, you can effectively & seamlessly perform Gateway Cutover of the extended network from source datacenter to target datacenter. !Best of luck!
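To quantify the latency difference that motivates the cutover, you can compare the average ping round-trip time to the gateway before and after unextending. Below is a minimal sketch that parses the rtt summary line of Linux `ping` output; the sample values are illustrative only, not taken from the environment shown above.

```python
import re

def avg_latency_ms(ping_output: str) -> float:
    """Extract the average round-trip time (ms) from a Linux `ping` summary line."""
    # The summary line looks like: rtt min/avg/max/mdev = 0.4/0.6/1.1/0.2 ms
    match = re.search(r"=\s*[\d.]+/([\d.]+)/", ping_output)
    if not match:
        raise ValueError("no rtt summary line found in ping output")
    return float(match.group(1))

# Illustrative summary lines captured before and after the gateway cutover:
before = "rtt min/avg/max/mdev = 38.1/42.7/55.3/4.2 ms"
after = "rtt min/avg/max/mdev = 0.4/0.6/1.1/0.2 ms"
print(f"before cutover: {avg_latency_ms(before):.1f} ms avg")
print(f"after cutover:  {avg_latency_ms(after):.1f} ms avg")
```

Run `ping -c 5 <gateway-ip>` from the migrated VM before and after the cutover and feed the summary line to the parser; a large drop in the average confirms traffic is no longer hairpinning through the source site.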
- HCX Upgrade v4.9 To v4.10: A Step-by-Step Technical Guide
This blog post provides a detailed, step-by-step guide to upgrading HCX components seamlessly. Upgrading VMware HCX (Hybrid Cloud Extension) is a vital process to ensure that your environment remains secure, feature-rich, and compatible with the latest VMware products.

Why Upgrade HCX?
Upgrading HCX ensures that your infrastructure benefits from:
1. Enhanced Security: Patches and updates address vulnerabilities and strengthen your environment.
2. New Features: Access the latest tools and functionality to improve workload migration and management.
3. Compatibility: Maintain alignment with the latest VMware products and ensure a smooth operational workflow.
4. Stability Improvements: Reduce the likelihood of issues and downtime with bug fixes and performance enhancements.

Pre-Upgrade Checklist:
Before proceeding with the HCX upgrade, ensure the following:
1. Verify that the HCX Manager system reports healthy connections to vCenter, NSX Manager (if applicable), the Site Pairing connection, the Service Meshes, and all existing extended networks.
2. Take a backup of the HCX Manager. In addition to backing up HCX Manager, optionally use the vSphere snapshot feature to take a snapshot of the HCX Manager at the source and destination sites. If necessary, you can use the snapshots to roll back the HCX Manager version.
3. Confirm the compatibility of the HCX version using the VMware compatibility guide.
4. Verify access credentials for HCX Manager, vSphere, and NSX (if applicable).
5. Make sure no migration/replication jobs are running or scheduled during the upgrade window.

Step-by-Step HCX Upgrade Procedure:
Current Environment Details & HCX Deployment Mode:
HCX Deployment Mode: Connected Mode/Site (HCX connects to the VMware online depot to download the HCX upgrade bundles). In this blog post, we will demonstrate an HCX upgrade from the current version 4.9.0.0 to 4.10.3.0.

Let's start. Log in to the HCX interface: https://hcx-ip-or-fqdn.
(Note - You can update site-paired HCX Managers simultaneously.)
1. Navigate to the Administration tab.
2. Navigate to the System Updates section.
3. Select the Current Version and click on Check For Updates. This check may take a few minutes.
(Note - You will need to perform this step on the HCX Manager at both sites (HCX Connector & Cloud).)
4. Click on "Download" to start the HCX bundle download. The download time depends on the available network bandwidth, so it is better to start the download on both sites' HCX Managers to save time.
Note - There are several options available from the drop-down; select one per your requirement:
- Download: The upgrade file is downloaded, but not installed.
- Upgrade: The previously downloaded file is used for the upgrade. If no file is available, the option is dimmed.
- Download & Upgrade: The upgrade file is downloaded, and the upgrade begins immediately after the download completes.
- Release Notes: View the release notes.
The download has started, and is now complete.
5. Go to "Select Service Update" and click "Upgrade" to start the HCX Manager upgrade. Reference screenshots are attached below.
Service Not Available - The system reports that the upgrade is underway. After the upgrade file is downloaded and installed, the HCX system reboots. Allow a few minutes for the system to reinitialize.
6. After the upgrade completes, open the HCX appliance management interface in a browser tab and perform a post-upgrade HCX health check. Example: https://hcx-ip-or-fqdn:9443.
7. Open the HCX UI (https://hcx-ip-or-fqdn:443), navigate to the dashboard, and verify that the registered systems display a healthy connected state.
8. Navigate to Administration > System Updates and review the software version running on the site.

Post-Upgrade, Reference Screenshots Attached Below:
HCX Manager at both sites upgraded successfully to "4.x". The Service Mesh is in a healthy state after the HCX upgrade.
After the HCX Manager upgrade, a Service Mesh upgrade becomes available, as can be seen below. Now, we will perform the Service Mesh appliance upgrade. Note that this upgrade is initiated from the HCX Connector (source site) only, and it subsequently upgrades the service mesh appliances at the target site at the same time. Reference screenshots for the Service Mesh upgrade are attached below.
a. Log in to the HCX Connector, go to Service Mesh > select the service mesh > click on Update Appliance.
b. Acknowledge and click on Update to start the update.
c. Go to Tasks to monitor the progress.
d. Now, we have successfully upgraded the Service Mesh appliances.
e. Perform a health check of the Service Mesh; here it looks healthy.
This completes the upgrade process for the HCX Manager and the component service appliances in the HCX Service Mesh.

Conclusion:
By following these steps, you can effectively and seamlessly perform the HCX Manager and Service Mesh appliance upgrades at the source and target HCX sites. Best of luck!
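The pre- and post-upgrade health checks above start with confirming that the HCX Manager interfaces respond at all. The basic reachability part can be scripted as a TCP connect test against the UI (443) and appliance-management (9443) ports mentioned in the procedure; the hostnames below are hypothetical placeholders for your own HCX Managers.

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# Hypothetical FQDNs -- replace with your source/target HCX Manager addresses.
for host in ("hcx-source.example.com", "hcx-target.example.com"):
    for port in (443, 9443):  # HCX UI and appliance-management interfaces
        state = "reachable" if port_reachable(host, port, timeout=2.0) else "NOT reachable"
        print(f"{host}:{port} {state}")
```

This only confirms TCP reachability after the reboot; the actual health check remains verifying the dashboard, site pairing, and Service Mesh status in the UI as described above.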
- PowerShell Scripts to Manage ESXi Host Configuration Settings "UserVars.ESXiShellTimeOut"
In environments with many ESXi hosts, managing hosts with scripts is faster and less error-prone than managing them from the vSphere Client. VMware PowerCLI is a Windows PowerShell interface to the vSphere API and includes PowerShell cmdlets for administering vSphere components. ESXCLI includes a set of commands for managing ESXi hosts and virtual machines.

Situation:
During an ESXi host hypervisor & firmware upgrade, there was a requirement to update the advanced setting "UserVars.ESXiShellTimeOut" on all ESXi hosts from "0" to "600" seconds to prevent a timeout during the upgrade. If you have only a few hosts, this is an easy job; however, in a large environment it is better to automate the change to save time and prevent human error.

Procedure:
1. Connect to vCenter using PowerShell.
# Connect-VIServer -Server "vCenterName"
2. Perform a validation to verify the currently configured value on all ESXi hosts.
# Get-VMHost | Select Name, @{N="UserVars.ESXiShellTimeOut";E={$_ | Get-AdvancedSetting -Name UserVars.ESXiShellTimeOut | Select -ExpandProperty Value}} | Sort-Object Name | Format-Table -AutoSize
3. Now, run the command below to change the "UserVars.ESXiShellTimeOut" value on all ESXi hosts in vCenter from 0 to 600, or per your requirement.
Note - This command makes the change on every ESXi host in vCenter. If you need to run it for a specific cluster, update the command before you run it (for example, by piping from Get-Cluster). Or if you need any help with a specific requirement, drop a message on this post.
# Get-VMHost | Foreach {Get-AdvancedSetting -Entity $_ -Name UserVars.ESXiShellTimeOut | Set-AdvancedSetting -Value 600 -Confirm:$false}
4. After the change, perform a validation. As the screenshot below shows, the value was updated from 0 to 600.

Conclusion:
By following these steps, you can effectively update the advanced setting "UserVars.ESXiShellTimeOut" on all ESXi hosts in vCenter. Best of luck!
- During VCF Upgrade, the Plan Patching/Upgrading Screen Does Not Populate the Customize Option in SDDC Manager 5.2.1
Issue:
While performing a VCF upgrade from 4.5.2 to 5.2.1, we observed that selecting "Customize Upgrade" before the vCenter upgrade does not load and populate the customization options.

Cause & Task:
This occurs if a stale 6.x vCenter bundle reference exists in the LCM database. Check whether any old, stale 6.x or 7.x vCenter bundle is present in the download history in SDDC Manager.

Resolution:
1. Take a snapshot of the SDDC Manager VM.
2. SSH into the SDDC Manager appliance with the vcf user and elevate to root with su.
3. Copy the cleanup_vc_bundles_lt7.py script from the KB to the /home/vcf/ directory on the SDDC Manager. Refer to the Broadcom article "Plan Patching/Plan Upgrading screen does not populate in SDDC Manager 5.2.1" to download cleanup_vc_bundles_lt7.py.
4. Run cleanup_vc_bundles_lt7.py with the command below. (The script will cycle the LCM service.)
# python cleanup_vc_bundles_lt7.py
5. Wait several minutes; once run, this cleanup process may take 10-15 minutes. After 15 minutes, verify in SDDC Manager whether any stale 7.x vCenter bundle still exists (Lifecycle > Bundle Management).
Note - Below is the CLI to remove any stale 7.x vCenter bundle that still exists.
# python /opt/vmware/vcf/lcm/lcm-app/bin/bundle_cleanup.py
6. As the cleanup takes time, wait 15 minutes, then verify in SDDC Manager whether "Customize Upgrade" loads and populates. If it still does not populate, restart the SDDC Manager services with the CLI below.
# SSH to SDDC Manager, switch to root
# /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh
(Once all services are restarted, it may take 10-15 minutes for all services to be fully up.)
7. You should now see the Customize Upgrade options available and populated, and you can proceed further with your upgrade. Wish you all the best!
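The waits above (10-15 minutes after the LCM cleanup, and again after the service restart) can be scripted rather than eyeballed. Below is a small, generic polling helper, a sketch you would pair with your own readiness check, such as an HTTPS request to the SDDC Manager UI. The `sddc_manager_ui_responds` name in the commented usage is a hypothetical placeholder, not an SDDC Manager API.

```python
import time

def wait_until(check, timeout_s: float, interval_s: float = 10.0) -> bool:
    """Poll check() until it returns True; give up after timeout_s seconds."""
    deadline = time.monotonic() + timeout_s
    while True:
        if check():
            return True
        if time.monotonic() + interval_s > deadline:
            return False  # next poll would land past the deadline
        time.sleep(interval_s)

# Example: wait up to 15 minutes for a (hypothetical) readiness probe.
# ready = lambda: sddc_manager_ui_responds("sddc-manager.example.com")
# if wait_until(ready, timeout_s=15 * 60, interval_s=30):
#     print("SDDC Manager services are back up")
```

Polling with a bounded deadline avoids both fixed sleeps that are too short and scripts that hang forever if a service never comes back.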











