
Getting Started with VMware Cloud on Dell EMC – Video Series


VMware Cloud on Dell EMC is a hyperconverged hardware and software-defined data center stack that is delivered as a service to your on-premises data center or edge location.  It is jointly engineered by VMware and Dell EMC and includes complete lifecycle management of the SDDC software as well as the VxRail hardware infrastructure.  With VMware Cloud on Dell EMC, you can satisfy the latency or locality requirements of your business applications through a streamlined provisioning process while experiencing the benefit of reduced operational overhead.

For businesses that trust the VMware hybrid cloud for consistent infrastructure, application portability, and a proven ecosystem, VMware Cloud on Dell EMC is a great alternative to do-it-yourself data center infrastructure deployments.

Would you like to know more about this exciting new category of infrastructure technology?  A new Getting Started video series walks through the essential aspects and demonstrates the ordering, deployment, and consumption phases.

Provisioning

In the first video, see how easy it is to get started with VMware Cloud on Dell EMC by ordering the hyperconverged hardware and SDDC software stack that is sized to meet your requirements.

Connecting

In the next Getting Started video, you will learn how the new SDDC rack integrates with your existing data center network.

Workload Deployment

Wrapping up the series is a demonstration on how to deploy your first workload to the new SDDC rack.  Since VMware Cloud on Dell EMC is based on the tried and true vSphere infrastructure, this operation should be familiar to virtualization admins – a big benefit of the VMware hybrid cloud.

For more information on VMware Cloud on Dell EMC, visit the product page and talk to your account team today.

The post Getting Started with VMware Cloud on Dell EMC – Video Series appeared first on VMware vSphere Blog.


To Cloud or Not to Cloud?


To cloud or not to cloud: that is the question. A critical component of a good cloud strategy is a defined process to determine if a service should be brokered or consumed directly from a public cloud provider, or built out internally as part of a private/hybrid cloud environment. Enterprises need to define the evaluation criteria to enable the intelligent use of public cloud services while maintaining appropriate governance and control.


Cloud-First!

In “The Importance of a Cloud Strategy” I discussed why a good cloud strategy is a requirement for a successful IT transformation initiative. I shared how many organizations state that they have a “cloud-first” strategy, but lack a common understanding across the organization of what that means or how it will be achieved.

Some organizations have defined “cloud” as meaning “public cloud” services only. Others use “cloud” to refer to the adoption of the “cloud operating model”, with services hosted on both public and private/hybrid clouds. Still others associate “cloud” with “cloud-native”: applications built to take advantage of cloud technologies. The cloud strategy must define what “cloud” means in the context of your organization.

I recommend aligning the cloud definition with the adoption of the “cloud operating model”, since the benefits to the organization come from the capabilities provided by the ecosystem, and not necessarily from where the services or infrastructure are hosted. The adoption of a cloud operating model to support private/hybrid and public cloud environments offers the most benefits and flexibility to an organization that needs to rapidly deploy new services to support the growing demands on IT from the lines of business.


The Cloud Operating Model

In an upcoming post, “The Cloud Operating Model”, I will show how the adoption of best practices from the hyper-scale cloud providers can mature your IT organization, transforming it from a reactive, resource-constrained cost center into a proactive, responsive provider of IT services to your developers and partner lines of business.

This model changes the way IT services are built, governed, supported, consumed and managed. It allows organizations to provide consistency across IT services – from user experience to governance and management.


IT Service Delivery

In “Mature the Delivery of IT Services” I shared how to take existing IT services and “cloudify” them – with full automation for service provisioning, self-service access for consumption, and lifecycle management of the service.

This process should include an evaluation of whether the IT organization should build and host the service as part of a private/hybrid cloud environment, or broker access to the capability provided by a public cloud provider. In many cases the consumer does not care where the service is hosted or how the underlying infrastructure is configured; the consumer is focused on the capabilities that will allow them to be more productive in their own role, whether developer or business user. It is the other cloud operating model stakeholders (IT, security, compliance, finance) that need to determine what makes the most sense for an organization when deciding how the services will be delivered.

The cloud strategy should provide clear guidance on the criteria used to make this determination, which will be dependent on the requirements and constraints of the larger organization.

Examples of IT service hosting considerations


Public Cloud Drivers

The promise of public cloud – agility, flexibility, cost-savings – has delivered in many areas but not all. Like everything in life, for every benefit there is likely a corresponding drawback. How suitable public cloud is for an organization is dependent on the priorities, requirements, and constraints of that organization.

Factors driving public cloud adoption


Private Cloud Drivers

Private cloud is often seen as “legacy”. This is in part because many organizations labeled their virtualized environments as “private cloud”, despite not having many of the capabilities typically associated with the cloud operating model, such as automation and self-service.

Virtualization != Cloud

Virtualization is a foundational requirement for cloud, but additional capabilities, such as those provided by the vRealize Suite, are required to deliver a true private/hybrid cloud aligned to the cloud operating model.

Enterprise organizations are quickly discovering that there are many roadblocks on the journey to full public cloud utilization: not least security, regulatory compliance, and audit, but also often a lack of understanding of the existing applications to be migrated or refactored. Moving IaaS or VMs to a public cloud provider (“lift-and-shift”) rarely adds significant value and more often reduces visibility and increases risk and cost. Few organizations are able to successfully complete a wholesale migration of workloads to a public cloud, so it makes sense to design a private/hybrid environment that provides the capabilities demanded by the developers and lines of business within the relative safety of your existing data center, resources, processes, and teams, at least until the maturity of the organization’s public cloud support model meets the minimum set of supporting functions required for performance, scalability, high availability, and business continuity.

Factors driving private cloud adoption


Decision Criteria

The cloud strategy should include an overview of the decision criteria for cloud usage and should direct readers where to go to find the detailed, maintained criteria. The Cloud Architect or Cloud Center of Excellence should be responsible for collaboratively creating and maintaining the criteria and decision tree.

  • Service placement – The decision criteria should provide guidance on where to host or create a new service. Services offer full lifecycle management of the provisioned resource and are consumed via a global portal or API. For example, Database-as-a-Service, automated Kubernetes cluster provisioning, or Infrastructure as Code (IaC) full-stack application provisioning. For maximum control and visibility:
    1. Private/hybrid cloud services should be built and managed within the vRealize Suite cloud management platform.
    2. Public cloud services should be brokered via the vRealize Suite cloud management platform to allow the platform to enforce governance and control across multi-cloud environments.


  • Workload placement – The decision criteria should determine the placement of resources deployed by the services. Resources can be VMs, containers, or entire applications including security components. For example, development resources can deploy to AWS or Azure, PCI regulated applications and data can only deploy to the PCI-compliant environment within the private cloud. Note: Once the decision criteria for workload placement have been defined the vRealize Suite can fully automate the initial placement of the resources and ensure that the placement constraints are not violated over the life of the workload.


Final Thoughts

Multi-cloud (the mixture of multiple public and private/hybrid services) provides the most flexibility for your environment. To maintain effective control over these mixed environments organizations can utilize the vRealize Suite for operational consistency – consistent operations, consistent governance and policy, and a consistent user experience. VMware Cloud Foundation brings consistent infrastructure and operations across private and public clouds for a true hybrid solution, allowing maximum portability and flexibility. Together they allow organizations to provide a platform to support all of their workload needs with the flexibility to host services and resources where they can achieve maximum benefit with minimum risk.


Next Steps

Review the information linked below for further guidance on using the cloud to accelerate your organization’s IT transformation initiatives:


The post To Cloud or Not to Cloud? appeared first on VMware Cloud Management.

VMware vSphere & Microsoft LDAP Channel Binding & Signing (ADV190023)


Microsoft has recently released warnings to its customer base that, in the March 2020 updates to Windows, it intends to change the default behavior of the Microsoft LDAP servers that are part of an Active Directory deployment. These changes will make secure LDAP channel binding and LDAP signing a default requirement when accessing Microsoft Active Directory using LDAP or LDAPS. These changes are a response to a security concern documented in CVE-2017-8563, where bad actors can elevate their privileges when Windows falls back to NTLM authentication protocols.

Any system that connects to Active Directory via LDAP without using TLS will be negatively affected by this change. This includes VMware vSphere. If you are using unencrypted LDAP (ldap://, not ldaps://) to connect vSphere to Microsoft Active Directory please read further.

All supported versions of VMware vSphere have been verified by VMware Engineering to work as expected after these changes, where we expect unencrypted LDAP authentication to succeed with the old defaults, fail with the new defaults, and succeed when using TLS/LDAPS.

Important Notes

These issues are the direct result of Microsoft’s changes to Windows. While we at VMware are committed to helping our customers navigate issues like these, we do ask that you please direct feedback about Windows changes & updates to Microsoft themselves. Configuring authentication sources in vSphere is a documented & supported activity that the professionals at VMware Global Support Services (GSS) can assist with, but that comes with the prerequisite that your authentication source is usable and cooperative. Please contact Microsoft Support for assistance with reconfiguring Windows Active Directory.

These issues affect any system using LDAP to access Active Directory and are not limited to VMware vSphere. You need to check the rest of your systems, too. Microsoft provides ways to audit clear-text connections to your Active Directory infrastructure which could be helpful.
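If you want to experiment with that auditing, here is a minimal sketch (run on a domain controller, assuming the documented NTDS diagnostics registry location) that raises the LDAP interface logging level so that Event ID 2889 entries are written for each clear-text bind:

reg add HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics /v "16 LDAP Interface Events" /t REG_DWORD /d 2

Set the value back to 0 when you are done, as the extra logging can be noisy on busy domain controllers.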

All screen shots in this post are of vSphere 6.7 Update 3 and the HTML5 vSphere Client. The experience should be similar for vSphere 6.0 and 6.5.

Who Is Affected?

If you have configured vCenter Server to access Active Directory over LDAP with TLS (LDAPS) you will not be affected by this. You can check this by viewing your Identity Sources in the vSphere Client:

vSphere LDAP Identity Sources Compatible with LDAP Channel Signing

The red & green text has been added by us as an illustration. Sources using LDAP (ldap://, port 389) are likely to be affected. Sources using LDAPS (ldaps://, port 636) are likely fine, though we always encourage testing and verification.

What Will Happen

Without proactive action on your part, once Microsoft releases these updates and they are applied the affected identity sources will stop working. This will appear as login failures in the vSphere Client:

vSphere Login - Invalid Credentials

as well as errors when attempting to add or reconfigure an identity source:

vSphere Add Identity Source Error

If you have enabled auditing for clear-text binds in Active Directory you will also see event logs that correspond:

Microsoft LDAP 2889 Auditing Event Viewer

Notes & Suggestions on How to Test

We recommend testing in order to gain familiarity with these updates. Microsoft has released guidance on how to enable these settings:

The document on enabling LDAP Signing in Windows Server 2008 indicates that you need to change the “Default Domain Policy” but in order for it to be effective for domain controllers you must also edit the “Default Domain Controllers Policy” or whichever policy applies to the domain controllers, if you’ve assigned a new one. The procedure in that document also appears to be applicable to all versions of Windows, not just 2008. Once the Group Policy edits are in place you can wait until the Group Policy refreshes automatically or use an Administrator-level shell to issue the “gpupdate /force” command.
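To confirm that the signing requirement has actually applied on a given domain controller, you can query the registry value behind the policy. This is a sketch, assuming the documented NTDS parameters location; a value of 2 means signing is required:

reg query HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters /v LDAPServerIntegrity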

Keep in mind that updating Group Policy affects other domain controllers & members as well. vCenter Server can configure multiple LDAP & LDAPS authentication sources, and can specify particular domain controllers, so we recommend creating a new & isolated Active Directory instance for testing (you can see my farm animal theme for test domains & forests in this post, which is separate from my main authentication source). If you must test in production consider doing so with a Group Policy Object assigned specifically to a particular domain controller and take great care.

To test that the settings have taken effect use the “ldp.exe” utility (Start->Run->ldp) from the domain controller itself. From the Connection menu, choose Connect, and enter “localhost” and port 389:

Connecting with ldp.exe

From there, go back to the Connection menu and choose “Bind.” Enter your domain credentials and select “Simple bind” as shown here:

Binding with ldp.exe

You should receive the error “Error 0x2028 A more secure authentication method is required for this server” as shown below, which indicates that you have correctly disabled clear-text LDAP binds.

A more Secure Authentication Method is Required Error

If you do not get this error with this procedure the changes have not taken effect!
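If you prefer testing from a Linux host, a roughly equivalent check is an OpenLDAP ldapsearch simple bind against the domain controller; the FQDN, account, and base DN below reuse my hypothetical farm animal test domain:

ldapsearch -x -H ldap://cows-ad-a1.cows.local:389 -D "administrator@cows.local" -W -b "dc=cows,dc=local" "(objectClass=user)" sAMAccountName

Once the new defaults are enforced, this simple bind should be refused, typically with a strongerAuthRequired result.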

From here you can test vCenter Server connectivity to Active Directory, witnessing the behavior seen at the beginning of this post. As mentioned earlier, VMware vCenter Server can have multiple Active Directory instances configured, so testing with an isolated instance of Active Directory is recommended. Similarly, deploying a test vCenter Server appliance is recommended. Take a snapshot of the environment and you can restore it if everything goes wrong. Nested virtualization environments, in general, are an excellent way to test functional changes such as this. William Lam has extensive resources on his personal blog for nested ESXi.

For even more testing isolation, if you deploy the Windows Active Directory VM first and configure AD DNS you can use it as the DNS server for a test deployment of vCenter Server.

Possible Course of Action #1: Enable TLS on Active Directory (LDAPS)

Being security-minded, the first & best recommendation is to secure your authentication with TLS. As a matter of practice, all communications on a network should be encrypted. This is especially true of authentication traffic. Please refer to Microsoft documentation for configuring Active Directory Certificate Services and issuing certificates to Active Directory domain controller services.

Configuring vCenter Server to use LDAPS is straightforward and well-documented at docs.vmware.com. There is one twist: you will need the certificate for the domain controller. You can export it from Windows but if you have access to OpenSSL, either installed on a Windows PC or built into a Linux/UNIX host, this sample command is often easier:

echo | openssl s_client -showcerts -connect cows-ad-a1.cows.local:636

will display the certificate you need, between the “-----BEGIN CERTIFICATE-----” and “-----END CERTIFICATE-----” lines. Copy those lines and everything between them into a text file and use that when prompted. Replace my farm animal test FQDNs with the native DNS name of the domain controller you are connecting to, of course!
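If you would rather not copy and paste by hand, a variation of the same command writes the PEM block straight to a file (again using my hypothetical test FQDN and an output file name of your choosing):

echo | openssl s_client -showcerts -connect cows-ad-a1.cows.local:636 2>/dev/null | openssl x509 -outform PEM > cows-ad-a1.pem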

Possible Course of Action #2: Proactively Configure These Settings to The Original Defaults

Being security-minded, making a decision that could negatively affect security can be tough. However, there’s a lot more to information security than just changing registry settings. Information security professionals use the “CIA triad” (confidentiality, integrity, and availability) to thoughtfully weigh the risk of a configuration, and the short timeframes and nature of these changes could seriously impact availability. Additionally, if you’ve already been running in this configuration you likely have compensating controls in place, such as isolation (firewalls, separate networks), to protect against someone observing authentication traffic.

Microsoft has stated that these upcoming patches will change the defaults. Though they haven’t said it directly, this implies that you can proactively set those settings to the current defaults, using these same processes (set the LdapEnforceChannelBinding registry key to 0, set the group policies to ‘None’). This assumes that Microsoft will not overwrite them with the patch, which is probably reasonable, but we will all want to verify that once the patches are available.
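As a minimal sketch of that proactive registry change (assuming the value name and location from Microsoft’s advisory; verify against Microsoft’s own documentation before relying on it, and note that a value of 0 disables enforcement):

reg add HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters /v LdapEnforceChannelBinding /t REG_DWORD /d 0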

Should you do this? It might be the easiest way forward for now, but it doesn’t improve security, and is likely worth a conversation with your CISO.

Other Considerations

It isn’t a good idea to delay patching. Patching is the ONLY way to remove a vulnerability from a system, and it’s the #1 way organizations and individuals can secure themselves (the #2 way is great passwords & account hygiene). Microsoft is making this big change for a reason, and we don’t yet know what has changed since the original 2017 vulnerability disclosure. It’s very likely that, in a few months, we will learn more about what is driving this. That won’t be good news, whatever it is. By delaying or omitting patches you delay the inevitable and you increase your risk, both from hackers and from well-intentioned humans accidentally applying cumulative updates.

Whatever course of action you choose, this is a great opportunity to make sure that you know and/or can retrieve the password for your vCenter Server SSO instance’s administrator account (often administrator@vsphere.local).

This issue affects any system using LDAP to access Active Directory, not just vSphere. Enabling auditing for the clear-text binds on your Active Directory is a good way to find other systems that may be in this situation, in order to include them in a larger remediation plan. Be mindful of the additional event logging that this might generate.

Active Directory is wonderful in that it’s a multi-master, replicating, distributed database, with built-in availability and recoverability features. Use this to your advantage and plan a staged rollout for both this issue and as a future patching strategy. Similarly, do not forget the flexibility vCenter Server has in targeting specific domain controllers, as well as the ability to apply Group Policies to specific domain controllers.

Thank You

As always, thank you for being a VMware customer. We appreciate you and hope that, like with our continued guidance on CPU vulnerabilities, this has been helpful in navigating these larger industry-wide changes.

For questions and feedback about Active Directory, Windows Updates, and these Microsoft Updates please contact Microsoft directly. We love feedback but if it’s about Active Directory and Windows the best we will be able to do is commiserate with you. Feedback is most powerful when it’s direct from a user to the responsible vendor.

For operational issues with VMware products please contact VMware Global Support Services, who are available 24×7 to assist. If there are other things we can help with, or you have questions or feedback about VMware products, please reach out to your account team or Technical Account Manager. For issues directly concerning this post please leave a comment below.

Last thing – subscribe to this blog and follow us on Twitter or on Facebook! We write a lot of technically-oriented articles like this, covering news and current topics, and we’d love to have you with us.

Thank you!


The post VMware vSphere & Microsoft LDAP Channel Binding & Signing (ADV190023) appeared first on VMware vSphere Blog.

Get Up To Speed on NSX Cloud with 5 Easy Resources


Over the last few years, as public and hybrid cloud adoption proliferated, organizations began looking for seamless and consistent manageability of their public cloud and private cloud workloads. This is one of the reasons why VMware brought NSX Cloud to the market.

Overview of NSX Cloud

In a nutshell, NSX Cloud provides consistent networking and security across hybrid and multi-cloud workloads. The key benefits and features of NSX Cloud include:

  • Single-pane-of-glass visibility
  • Essential networking capabilities
  • Consistent security policy
  • Granular micro-segmentation across on-premises and native public cloud environments such as AWS and Azure

NSX Cloud plays a key role in VMware’s Virtual Cloud Network vision of connecting and protecting workloads of all types (VMs, containers, bare metal) from data center to cloud to edge.   

“With NSX Cloud, we got a very compact firewall policy—easy to review and easy to manage. The power, administratively, is that we go to one place to update our policy and when we publish it, it automatically deploys it to every cloud server instance. This was a big win for us.”

Brian Jemes, Network Manager, University of Idaho
VMworld US 2018, NET1516BU

Top 5 Resources on NSX Cloud 

Here is a compilation of the best educational resources on NSX Cloud. Hope you enjoy them! 

  1. Lightboard video series

    This is a great high-level introduction to NSX Cloud. In roughly 15 minutes, the 3-part series explains how users build a common framework for consistent security, networking, and operations for a hybrid cloud deployment.  


  2. NSX Cloud Solution Overview

    This quick 4-page PDF summarizes the NSX Cloud solution on AWS: key benefits, features, architecture, as well as the initial setup and onboarding process. (Note that NSX Cloud is also supported on Azure.)


  3. VMworld 2019 videos

    Here are two *must-watch* sessions on NSX Cloud if you are looking to gain a deeper knowledge of the product.

    • NSX Cloud: Consistently Extend NSX to AWS and Azure (CNET1600BU)
      In this 200-level session, learn how NSX Cloud provides a single pane of glass for policy-based networking and security consistency across your hybrid cloud. This session also covers key features, use cases, and deployment options showcased in a product demo.
    • NSX Cloud Deployment and Architecture Deep Dive (CNET1365BU) This top-rated session covers deployment details as well as common customer configuration scenarios and use-cases such as micro-segmentation, overlay networking, VPN, and third-party service insertion. This is a 300-level session, and introductory knowledge of NSX Cloud is a prerequisite.
  4. NSX Cloud Hands-on Lab

    It’s time to go on a test drive. HOLs are the fastest and easiest way to test-drive the full technical capabilities of VMware products. These evaluations are free, up and running on your browser in minutes, and require no installation.

    In this lab, you will explore how VMware NSX provides a single security policy for applications deployed in the cloud as well as applications straddling cloud boundaries. You will do this using an NSX Tools-based enforcement as well as using native cloud enforcement and compare both choices offered through NSX Cloud.

    Note: This lab is available in English, German, and French. It will take more than 120 minutes to complete this lab.

    NSX Cloud HOL

  5.  NSX Cloud Blog Posts

    A complete list of all blogs on NSX Cloud to date. Catch up on key features and capabilities introduced in NSX-T 2.5, NSX-T 2.4 and prior releases.

*Bonus Pointer: For technical documentation on NSX Cloud, you can refer to the Administration Guide and the Installation Guide. 

Next Steps 

I hope this round-up of resources gives you an idea of how NSX Cloud could address your critical networking and security needs in a multi/hybrid cloud environment. If you are eager to learn more, reach out to your VMware sales representative for a deep-dive or a proof-of-concept of NSX Cloud. Until next time! 

The post Get Up To Speed on NSX Cloud with 5 Easy Resources appeared first on Network Virtualization.

Explore vSphere + Bitfusion – The Future of AI & ML


We’re back with another vSphere Tweet Chat recap! In this edition, we explore vSphere and Bitfusion, and what this integration means for the future of AI and ML. To answer all of your questions, experts Jim Brogan (@brogan_record) and Don Sullivan (@dfsulliv) joined us to share the inside scoop. They answered some tough questions, and you’ll have to continue reading to find out more. Check out the full chat recap:

Q1: What is Bitfusion?

Jim Brogan

A1: Bitfusion does for GPUs, and hardware accelerators in general, what vSphere did for CPUs (compute).

A1 Part 2: In the AI/ML space, for example, GPUs are underutilized (15% utilization would be representative). Bitfusion is virtualization that lets you share GPUs, plus it has some management capabilities and analytics.

Q2: Why is Bitfusion being brought into vSphere?

Jim Brogan

A2: Any cloud, any device, any apps in today’s world must include ML/AI applications, and the apps need acceleration. Bitfusion ensures ML acceleration gets first-class-citizen treatment when it comes to virtualization and flexibility.

Don Sullivan

A2: Bitfusion is a perfect fit for vSphere and modern customer needs. VMware recognized that Bitfusion can virtualize powerful GPUs in the same way that vSphere virtualizes CPUs. This is a complementary and natural evolution of vSphere.

A2 Part 2: Bitfusion is a perfect fit for vSphere, in that it virtualizes GPUs the same way that vSphere has always virtualized CPUs. Bitfusion is a natural evolution of vSphere to meet the needs of VMware’s high-tech customers.

Q3: Why do AI & ML applications need GPUs or other hardware acceleration?

Jim Brogan

A3: These apps perform very large numbers of mathematical operations that are easy to compute in parallel but take days on serial processors.

A3 Part 2: GPUs offer this parallelism and can run these applications hundreds of times faster, consuming less power along the way. FPGA and ASIC acceleration are coming soon.

Q4: What are the market-place problems with GPUs & what is Bitfusion trying to solve?

Jim Brogan

A4: GPUs are expensive, so they can’t be purchased for all users who want them. The ones that are purchased are underutilized, and it is hard for users to share them.

A4 Part 2: They tend to sit in their own silos (multiple) outside of the resources managed by vSphere admin and other virtualized resources (like compute, storage, and networking).

A4 Part 3: Users get stuck with a single set of resources when they need bigger sets and more variety.

Don Sullivan

A4: BF represents a new threshold of capabilities of vSphere, and the FORCE of vSphere has been expanded greatly.

Q5: Does Bitfusion solve marketplace problems for GPUs and AI/ML apps?

Jim Brogan

A5: Of course! Utilization and efficiency increase because you can share the GPUs. vSphere admin can put an organization’s GPU servers into pools/vCenter-clusters for easy management, but all the users can access what they need.

A5 Part 2: Users no longer have to port their environments to specific hardware or clean up when someone else needs to use it. Users have access to a larger pool of acceleration with wider variety.

Q6: What exactly does virtualization mean for Bitfusion and GPUs?

Jim Brogan

A6: There are two orthogonal types of virtualization, and both are dynamically available, no rebooting or migration required:

A6 Part 2: Remote access of GPUs across the network, enabling sharing by multiple users over time. Splitting GPUs into fractions of any size, enabling shared, concurrent use.

Don Sullivan

A6: That’s perpendicular in multiple directions!!!

Q7: How does Bitfusion virtualization work?

Jim Brogan

A7: Big topic, but we intercept the API calls to the hardware accelerator. This approach avoids complicated work in the hypervisor.

A7 Part 2: This approach means that there are no changes and no recompilation of the application or anything in its stack (frameworks, CUDA, drivers, OS).

A7 Part 3: A good talk on this was given at Tech Field Day during the VMworld conference last August.

Q8: What types of applications or use-cases does Bitfusion address?

Jim Brogan

A8: AI and ML applications are our primary focus, but any CUDA-using application is a candidate. This includes HPC, wherever GPU utilization is a concern. Desktop graphics and other non-CUDA-using applications are not candidates.

Q9: Does Bitfusion provide any management capabilities?

Jim Brogan

A9: Bitfusion tracks, charts and exports GPU allocation, utilization and network traffic on a GPU server and on a CPU client basis. Bitfusion enforces policies that deallocate unused GPUs on a client-by-client basis.

Q10: What are the #Bitfusion network requirements?

Jim Brogan

A10: 10 Gbps; < 50 microseconds latency from the client to the GPU server.

A10 Part 2: At least that is the goal. The latency is desirable for best performance, not for functionality.

Don Sullivan

A10: VMware will be delivering a number of Bitfusion related sessions at various #VMUGs in 2020.

Q11: How can users try out the #Bitfusion software (conduct a POC)?

Jim Brogan

A11: During the period when Bitfusion is being integrated into vSphere, Bitfusion is not for sale. But…you can write to AskBitfusion@vmware.com and be approved for a POC.

A11 Part 2: We deliver an OVA with the software, licensing, application stack, and dataset with support. You can create your own VMs, stacks and use the same license (but interactive support on this path is very limited).

A11 Part 3: The original Bitfusion documentation is still publicly available.

Don Sullivan

A11: You will be asked to fill out a POC form and you will be allocated access to the software.

Q12: Are there other requirements for running #Bitfusion software?

Jim Brogan

A12: Currently it requires a Linux OS (Ubuntu, RHEL or CentOS).

Q13: Isn’t network latency a big problem for remote access to GPUs?

Jim Brogan

A13: Naturally, but Bitfusion’s primary work is to solve the concerns with latency.

A13 Part 2: When Bitfusion intercepts CUDA calls, it takes the opportunity to optimize operations, so as to hide latency. These include re-ordering, pipelining and batching. A lot of Bitfusion IP is actually in the optimization domain.

Bonus Questions Not Featured  

Q14: Is Bitfusion exclusively an NVIDIA GPU solution?

Jim Brogan

A14: The technology applies to any acceleration hardware accessed by an API. Bitfusion today has created implementations for the CUDA API and OpenCL. But CUDA is more-or-less the only player in the marketplace right now.

Q15: NVIDIA offers vGPU (GRID), how does Bitfusion work with or compete against it?

Jim Brogan

A15: Bitfusion can work on top of GRID, but is made to address userspace applications (e.g., using CUDA), not desktop graphics. It runs in userspace, not the hypervisor, so it can allocate and partition GPUs dynamically.

Thank you for joining our vSphere + Bitfusion Tweet Chat, featuring our (awesome) vSphere experts. A huge shout out to our experts, Jim Brogan (@brogan_record) and Don Sullivan (@dfsulliv), and the other participants who joined us today.

Stay tuned for our monthly expert chats and join the conversation by using the #vSphereChat hashtag. In the meantime, you can check out our vSphere Tweet Chat hub blog to access a recap of all our previous chats. Have a specific topic that you’d like to cover? Reach out and we’ll bring the topic to our experts for consideration. For now, we’ll see you in the Twittersphere!

The post Explore vSphere + Bitfusion – The Future of AI & ML appeared first on VMware vSphere Blog.

What’s new with vRealize Network Insight 5.1 & vRealize Network Insight Cloud


Has it already been three months since 5.0? It has! Continuing the momentum of vRealize Network Insight 5.0, version 5.1 has a couple of big items that we’ve released today. Join me in a walkthrough of the most anticipated features.

Dark Mode

I would be remiss if I didn’t start with the most obvious one. vRealize Network Insight 5.1 now has a stunning dark mode. Very easy on the eyes, with hypnotizing color schemes:

Dark mode will be on by default from v5.1 and up, but you can switch back to light mode. It’s a per-user setting.

Pre-SD-WAN Assessment Report

This is the biggest thing in this release. Just as we’ve done with the VMware NSX Virtual Network Assessment (VNA), vRealize Network Insight can now assess WAN environments and report on network traffic behavior and calculate a Return on Investment (ROI) when the WAN moves to SD-WAN by VeloCloud.

Based on the network traffic coming from the monitored branches, this assessment report will calculate the savings on uplinks (expensive MPLS links vs. internet links) and savings based on the VeloCloud router devices, compared to other branch router devices. The outcome of this report will be your business case for moving towards SD-WAN!

This report is fully evidence-based; the different types of traffic, like MPLS, internet, cloud/SaaS, and low- and high-priority traffic, are shown dynamically. Used applications are discovered and categorized. It works by adding a percentage of the branch routers to vRealize Network Insight as data sources, from which it looks up configuration data. The picture is completed by having these branch routers export network flows to vRealize Network Insight, which also provides the configuration needed in a template. All this data is shown in the current (pre-SD-WAN) state, and a post-SD-WAN state is also calculated.

This post-SD-WAN state simulates the network traffic when using SD-WAN, with all its benefits. Low-priority MPLS traffic is moved to the internet links (and uses the SD-WAN overlay). Cloud and SaaS traffic is offloaded to the VeloCloud Cloud Gateways. MPLS links are scaled down. All the VMware SD-WAN by VeloCloud optimizations are taken into account. To get the numbers right, you can fill in your own average cost of MPLS and internet lines. All these calculations can be exported as a PDF, with real dollar values on what your organization will save when it moves to SD-WAN by VeloCloud.

Application Discovery

Ever since vRealize Network Insight 4.1, it has had agent-less application discovery based on metadata of the infrastructure it’s monitoring. Customers use application discovery to form applications based on a workload naming convention, vSphere, AWS, or Azure tags, or pull the application constructs from the ServiceNow CMDB. Today, we’ve added an Advanced discovery method, which allows you to use a combination of a naming convention and tags.

Do you have the application name inside the workload name, but the tier inside a tag? No problem! You can mix and match the right discovery method for your infrastructure. One of the other most requested discovery methods is to base it off Security Tags or Security Groups. These have been added as well.

Application Troubleshooting Dashboard

There have been a few changes to the application dashboard, but most notable is the ability to see degraded flows directly. Inside the filter on the application topology, Degraded Flows is now an option to select. When this filter is selected, only flows that have been degraded over the last 24 hours are displayed. Degraded flows are defined by a latency spike of either 100% over their normal baseline or a jump of 10 milliseconds.
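To illustrate the definition with hypothetical numbers: a flow with a 4-millisecond baseline that spikes to 9 milliseconds is flagged because it jumped more than 100% of its baseline, even though the absolute increase is under 10 milliseconds, while a flow that goes from 40 to 52 milliseconds is flagged because the 12-millisecond jump crosses the static threshold.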

Another notable change on the application dashboard is the summary widget, which displays relevant information from the last 24 hours for the application you’re troubleshooting, including the make-up of the application: whether it consists of VMs, physical servers, and/or Kubernetes Services.

NSX-T Manager Topology

NSX-T is the next generation of VMware NSX, and it can support network virtualization for a lot of platforms, such as vCenter, ESXi hosts, VMware PKS, OpenShift, and vanilla Kubernetes. All of these components are now displayed on the NSX-T Manager dashboard, in a beautifully displayed topology view.

In the above screenshot, only a PKS instance with some ESXi hosts is attached to this NSX-T Manager. If the NSX-T Manager is used for more purposes, it will automatically scale out the topology to reflect that. The different components of NSX-T, such as the Tier-0 & Tier-1 routers, Segments, and Security configuration, including their health, are all present in this topology for quick access to the health of your environment.

Kubernetes Service Dashboard

Containers managed by Kubernetes have been a first-class citizen inside vRealize Network Insight for a while, but it just got better. Customers have told us that Kubernetes Services are the most significant focal point for network troubleshooting and monitoring, and version 5.1 has a new dashboard that centers around Kubernetes Services.

There’s a beautifully clear service topology widget, which shows the management and networking components that are part of the service, history on the service scale, network flows, events coming from the service, and networking metrics: any and all networking information that you could possibly use to troubleshoot a Kubernetes Service.

vRealize Network Insight now also ingests the Kubernetes events and correlates them to the Kubernetes objects. Events like node scheduling failures, container creation failures, all the way down to free disk space monitoring, will be logged. 5.1 brings in 63 new Kubernetes events.

Worth mentioning

As usual, there are too many new features to deep dive in all of them, but here are a few others that I thought were worth mentioning:

  • VMware Cloud on AWS SDDC objects now have a dashboard outlining their current status & configuration, changes in the last 24 hours, network flows, firewall rule hits, related events, and top talkers.
  • Events will now be raised when a VMware Cloud on AWS SDDC is close to its limits (for number of VMs and Hosts)
  • Support for VMware Cloud on AWS Edge firewall rules. Edge rules will now be correlated to network flows, applications, and be visible throughout the
  • Added SD-WAN Threshold Analytics to generate events when traffic patterns deviate from past behavior, or when traffic crosses a static threshold.
  • Support for paths over an Arista Hardware VTEP (L2 bridge).
  • Support for NAT translation over physical FortiGate firewalls.
  • Support for NSX-T BGP monitoring, presenting information on the router dashboard and generating events for BGP state changes.

Check out the full release notes here. The download bits can be found here.


The post What’s new with vRealize Network Insight 5.1 & vRealize Network Insight Cloud appeared first on VMware Cloud Management.

Rightsizing VMs with vRealize Operations


Rightsizing VMs is critical to getting the best performance out of your vSphere infrastructure and your VMs. Rampant oversizing of VMs can cause contention at the host or cluster level, which manifests as CPU ready, CPU co-stop, VM swap, etc. Undersized VMs can cause contention inside the guest OS, which manifests as CPU queuing, memory paging, etc. Rightsizing VMs helps you achieve the best performance of the infrastructure and VMs. In this blog post I’d like to show you why I feel that vRealize Operations is the best tool available for VM rightsizing.

The first thing I’d like to cover is the difference between rightsizing and reclamation. Rightsizing is when you change the amount of resources allocated to a VM to match the utilization requirements of the VM. For example, adding a vCPU if the VM is running high CPU utilization, or removing memory if the server is not using all of its allocated memory. Reclamation, on the other hand, is deleting powered-off or idle VMs, old snapshots, or orphaned disks. The main difference is that rightsizing is done for performance reasons and reclamation is done for capacity reasons. The difference in use cases is why they’re located in different pillars on the Quick Start page.

Rightsizing

To start you on your rightsizing adventure, begin on the Quick Start page that you are greeted with when you log on to vRealize Operations.

Oversized VMs

Once you’re in the Rightsizing page, you will be presented with a summary of Oversized and Undersized VMs. By expanding the Oversized VMs section at the bottom, you can see all of the VMs that have been identified as oversized. You can select one or more VMs and resize them without leaving vRealize Operations by clicking the Resize VM(s) button. By initiating the resize action from here, vRealize Operations automatically uses its connection to vCenter to make the changes to the VM. It’s even aware of hot-add, and will skip rebooting the VM if hot-add is enabled.

If by chance you see a VM that you want to leave oversized and you don’t want to be notified about anymore, just select the VM and click Exclude VM(s). If you have a lot of VMs that you wish to exclude, you can use the Filter box to reduce the list of VMs shown (e.g., VMs containing “xyz” in their name), then use the select-all button to exclude VMs in bulk. You can always include the VM again by expanding Show Excluded VMs at the bottom of the page.

Undersized VMs

By clicking on the Undersized VMs section at the bottom, you can see all of the VMs that have been identified as undersized. This page works the same as the Oversized VM section. Click Resize VM(s) to resize the VMs in vCenter and Exclude VM(s) to remove them from the list.

Calculating Recommended Size

Now that I’ve shown how rightsizing works in the UI, I’d like to explain how vRealize Operations creates the recommended size for VMs. The capacity engine built into vRealize Operations leverages AI/ML technologies to create forward looking projections of the future utilization of VMs. It’s those projections that are used to determine the Recommended Size of VMs. The recommendations are not simply based on historical utilization of the VM.

There are two primary settings that affect Recommended Size for VMs, Time Remaining Risk Level and Time Remaining Score Threshold.

Conservative Time Remaining Risk Level

Setting the Time Remaining Risk Level to Conservative tells vRealize Operations to use the Upper Bound Projection when determining the Recommended Size. This can be set on the capacity overview page or in policies. Conservative is the default, and is recommended for critical environments such as production or business critical applications.

Time Remaining Score Threshold tells vRealize Operations what point in the projection to use when rightsizing the VM, based on the number of days until the VM goes below the green threshold. The default is 120 days for yellow (warning) and 90 days for red (critical). This can be changed in policies for Virtual Machine objects.

Recommended Size, when configured for Conservative, is the recommended Usable Capacity to maintain a green state based on the Upper Bound Projection. The VM needs to maintain a green state for the entire time between now and the Green Time Remaining Score Threshold set in the policy + 30 days. The Green Time Remaining Score Threshold defaults to 120 days, so the default Recommended Size window covers the Upper Bound Projection from now to 150 days in the future.

Aggressive Time Remaining Risk Level

Setting the Time Remaining Risk Level to Aggressive tells vRealize Operations to use the Mean of the Upper Bound and Lower Bound Projections when determining the Recommended Size. This can be set on the capacity overview page or in policies. Aggressive is not the default, but it can be enabled for less critical environments such as development, UAT, or test.

Time Remaining Score Threshold tells vRealize Operations what point in the projection to use when rightsizing the VM, based on the number of days until the VM goes below the green threshold. The default is 120 days for yellow (warning) and 90 days for red (critical). This can be changed in policies for Virtual Machine objects.

Recommended Size, when configured for Aggressive, is the recommended Usable Capacity to maintain a green state based on the Mean Projection. The VM needs to maintain a green state for the entire time between now and the Green Time Remaining Score Threshold set in the policy + 30 days. The Green Time Remaining Score Threshold defaults to 120 days, so the default Recommended Size window covers the Mean Projection from now to 150 days in the future.

Recommended Size Limits

Customers often aren’t keen on making substantial changes to VMs and are looking for a more conservative approach. Recommended Size has been designed to be conservative in its recommendations as well.

Recommended Size for oversized VMs is capped at 50% of the current allocation, while Recommended Size for undersized VMs is capped at 100% of the current allocation. This helps to gradually guide VMs to the Recommended Size without recommending substantial changes like going from 32 vCPUs down to 1 vCPU.
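As a hypothetical illustration of these caps: a 32-vCPU VM that really only needs 2 vCPUs will first be recommended 16 vCPUs (the 50% reduction cap), while a 4-vCPU VM under heavy load will be recommended at most 8 vCPUs (the 100% increase cap). Recommendations after each resize continue nudging the VM toward its ideal size.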

Historical Data

  • Projection Calculation Start Point shows how much data is used to create the projection
  • Exponential Decay gives higher weight to recent data points to allow the projection to react more quickly to recent changes in utilization
  • Time Series Data Retention Global Setting does not impact capacity calculations
  • Delete the object to reset projection calculation start point

Peaks

As you know, most workloads don’t follow a straight line for utilization. There can be various peaks in utilization over time that need to be accounted for in the projections. The impact of a peak on the projection is relative to the duration, height, and frequency of the peaks. Remember this is a projection created by the AI/ML-powered capacity engine, so there isn’t a specific formula that I can give you to double-check the math. The way I like to explain it is: as a human looking at the historical utilization, ask yourself whether the peak looks significant enough to affect capacity planning, and whether there are enough peaks that they appear to follow periodic patterns. If so, you should see the impact of those peaks in the projections. In general, the more important the peaks look, the more impact they have on the projection.

  • Momentary peaks are short-lived and might be one-off. These are the peaks that you would dismiss for capacity planning purposes because they don’t appear important. In general, small and short-lived peaks should have minimal impact for capacity planning and therefore have minimal impact on the projection.
  • Sustained peaks last for a longer time and do impact projections. If the peak is not periodic, the impact on the projection lessens over time due to exponential decay.
  • Periodic peaks exhibit cyclical patterns or waves. For example, hourly, daily, weekly, monthly, last day of the month, etc. There can be multiple overlapping cyclical patterns, which will also be detected.

Custom VM Rightsizing Details Dashboard

There are several questions that I get asked frequently that can be addressed with some customization. To help answer those questions, I’d like to share a custom “Rightsizing Details” dashboard that I created. This custom “Rightsizing Details” dashboard, will help you address several use cases all within a single page. If you want to take advantage of this dashboard and you’re running vRealize Operations Advanced or Enterprise edition, you can download it from https://vrealize.vmware.com/sample-exchange/6791.

  • How does vROps determine the recommended size for a VM?
  • Which VMs are oversized?
  • Which VMs are undersized?
  • How do I justify the recommendation to the VM owner?
  • What is the potential change to capacity if the VMs are rightsized?
  • What is the potential change to VM cost if the VMs are rightsized?

How does vROps determine the recommended size for a VM?

To answer this question, I’ve added an extensive readme section directly in the dashboard. It’s not as in depth as this blog post, but there is a link to this blog post for users that wish to know more.

Which VMs are oversized?

The first widget named Oversized VMs shows all of the VMs that have been detected as oversized. There are totals to show how much you can potentially reduce allocation of resources, additional capacity needed, and potential cost increase you should expect. The secondary goal of this widget is to help justify rightsizing to management.

Which VMs are undersized?

The Undersized VMs widget works similarly to the Oversized VMs widget. It shows all of the VMs that have been detected as undersized. There are totals to show how much you may need to increase allocation of resources, capacity you can reclaim, and potential cost savings you can achieve. The secondary goal of this widget is to help justify rightsizing to management in conjunction with addressing Oversized VMs.

How do I justify the recommendation to the VM owner?

To help justify rightsizing recommendations to VM or application owners, I’ve added Recommended Size – CPU and Recommended Size – Memory widgets. By selecting a VM in either Oversized VMs or Undersized VMs widgets, you’ll see a chart with the historical utilization of the VM along with the Recommended Size for the VM. These 2 charts should make the rightsizing conversation much easier with the VM or application owner.

What is the potential change to capacity if the VMs are rightsized?

I am often asked to help quantify the overall impact to capacity if all VMs are rightsized. Answering that question is possible in vRealize Operations, but it requires Super Metrics to calculate them based on existing metrics. The key part to understand is that the change to capacity does not always correlate with the change in allocation of resources to a VM.

Reclaimable CPU Usage (GHz): If an oversized VM’s CPU usage is 100 MHz before rightsizing, removing vCPUs won’t change its CPU usage; it should still be at 100 MHz. This means there is no reclaimable capacity associated with overallocation of vCPUs. Reclaimable CPU Usage for oversized VMs will always be 0 MHz.

Reclaimable Memory Consumed (GB): An oversized VM can have reclaimable memory only if consumed memory is greater than the new recommended size of the VM. The reclaimable memory capacity is the difference between consumed memory and recommended size.

Increased CPU Usage (GHz): The CPU usage of an undersized VM is expected to rise to its current CPU demand after rightsizing. The difference between CPU Demand and CPU Usage is the expected increase in capacity utilized after rightsizing.

Increased Memory Consumed (GB): It can be expected for consumed memory to increase by the same amount of memory recommended to add to an undersized VM.

What is the potential change to VM cost if the VMs are rightsized?

The other question I get frequently is to quantify the potential cost impact due to rightsizing. Today, vRealize Operations does not calculate that cost, but they can be calculated using Super Metrics and the capacity Super Metrics from the previous section.

Calculating the potential cost can be utilization- or allocation-based, depending on whether the allocation model is enabled for capacity. A worked example with hypothetical numbers follows the calculation details below.

Potential Cost Savings Calculation Detail:

Oversized CPU Utilization: $0 since Reclaimable CPU Usage (GHz) is always 0

Oversized Memory Utilization: Reclaimable Memory Consumed (GB) * Cluster Memory Base Rate

Oversized CPU Allocation: vCPU(s) to Remove * Allocation Cluster CPU Base Rate

Oversized Memory Allocation: Memory to Remove * Allocation Cluster Memory Base Rate

Potential Cost Increase Calculation Detail:

Undersized CPU Utilization: Increased CPU Usage (GHz) * Cluster CPU Base Rate

Undersized Memory Utilization: Increased Memory Consumed (GB) * Cluster Memory Base Rate

Undersized CPU Allocation: vCPU(s) to Add * Allocation Cluster CPU Base Rate

Undersized Memory Allocation: Memory to Add * Allocation Cluster Memory Base Rate
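Putting hypothetical numbers to the utilization-based rows above: if an undersized VM shows an Increased CPU Usage of 2 GHz against a Cluster CPU Base Rate of $5 per GHz, and an Increased Memory Consumed of 4 GB against a Cluster Memory Base Rate of $3 per GB, the potential cost increase works out to 2 * 5 + 4 * 3 = $22 for the billing period. The same arithmetic, using the allocation base rates, applies to the allocation-based rows.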

Conclusion

Calculating the Recommended Size for VMs should be less of a mystery now. I hope this explanation of how Recommended Size is calculated helps earn your trust in the recommendations offered by vRealize Operations and helps empower you to have the rightsizing conversations with your VM and application owners. All of this in the name of achieving the best performance of your vSphere infrastructure and your VMs, reclaiming unused capacity, and quantifying cost savings.

If you’re not a vRealize Operations customer, why not download a trial of vRealize Operations and try it in your own environment?

You can find more demos and videos from our community at vrealize.vmware.com.

The post Rightsizing VMs with vRealize Operations appeared first on VMware Cloud Management.

vRealize Network Insight Search Poster for Kubernetes


Continuing our Search Poster series, we’ve arrived at the Kubernetes Search Poster! Using the search engine inside VMware vRealize Network Insight can be a revealing experience. It has every single bit of data you ever wanted to see about anything in your infrastructure and it’s available at your fingertips. Because of the vast amount of available data, the search engine is extremely versatile and there are many options available in its syntax.

To make learning the search engine a bit easier, we’ve launched a Search Poster series that will be showcasing the most useful search queries for a specific area. Today, we’ve added the Kubernetes Search Poster.

As you might know, vRealize Network Insight [Cloud] can give much-needed network and security insights into a Kubernetes environment, whether it consists of VMware PKS, OpenShift, or vanilla Kubernetes. The built-in dashboards will give you a deep overview, but it is also possible to gain customized insights using the search engine. This search poster will guide the way.
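To give a flavor of what the poster covers, here are a few illustrative queries; the syntax is representative rather than authoritative (check the poster for the exact supported forms), and the cluster and namespace names are hypothetical:

Kubernetes Services
Kubernetes Pods where Namespace = 'frontend'
flows where Kubernetes Cluster = 'prod-cluster'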

Kickstart learning the search engine using these search posters:

Poster

Kubernetes Search Poster Preview


The post vRealize Network Insight Search Poster for Kubernetes appeared first on VMware Cloud Management.


vSphere Signing Certificate Expiry – What You Need To Know


Why Do We Sign?

To guarantee the security of your download, VMware releases are cryptographically signed. The certificate used to sign legacy releases of software expires at the end of December 2019. For some time now, we have been dual signing releases with both the replacement certificate and the legacy certificate. With the expiration of the legacy certificate, this dual signing ceases at the end of this year. Soon, releases will be signed only by the newer certificate (as of April 2020 for the 6.7 codebase, and June 2020 for the 6.5 codebase). This new certificate is valid until September 2037.

So what does the vSphere signing certificate expiry mean?

For older versions of ESXi, upgrading to any release published after the 31st of December 2019 requires two steps.

ESXi 6.0 versions between 6.0 GA and Update 3G must upgrade first to a minimum of 6.0 Update 3G. Once there, upgrade to the subsequent target release. Alternatively, upgrade to a minimum of 6.5 Update 2 before upgrading further. 

ESXi 6.5 versions between 6.5 GA and Update 2 must upgrade to a minimum of 6.5 Update 2. Once there, upgrade to the subsequent target release. See below for some example upgrade paths.

vSphere Signing Certificate Expiry - Upgrade Paths

Directly upgrading from one of the affected releases to any version released post the 31st of December 2019 fails. The error message “Could not find a trusted signer” is displayed when upgrading using esxcli. The error message “cannot execute upgrade script on host” is seen if using vSphere Update Manager (VUM).

If you are unable to make this two-step upgrade, one option is to use the --no-sig-check switch to disable the signature check when upgrading with esxcli. Alternatively, perform an upgrade from ISO. We would not recommend disabling the signature check in production environments.
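For reference, an upgrade with the signature check disabled looks roughly like the following; the depot path and image profile name are placeholders you would substitute with your own:

esxcli software profile update -d /vmfs/volumes/<datastore>/<ESXi-depot>.zip -p <image-profile> --no-sig-check

Again, treat this as a lab or emergency workaround rather than standard practice.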

As always, please check the interoperability matrices and KB67077 to ensure that your intended upgrade path is supported! 

The usual vSphere upgrade caveats apply.

  • Check that your other products (both VMware and third party) are compatible and support both your intended target release and the interim release if required.
  • Upgrade vCenter Server before upgrading ESXi hosts.
  • vSphere 6.0 reaches End of General Support on March 12, 2020. Please consider an upgrade to the 6.5 or 6.7 codebase if at all possible.
  • For more information on this issue, please see KB76555.

Any Other Impact?

I’m glad that you asked. Another impact is that any VMware VIB released after the 31st of January also cannot be installed on these older versions of ESXi. A VIB is a VMware Installable Bundle and is typically a driver or some form of extension. Some examples of this are the VMware Tools installers, which are pushed to hosts as a VIB. vSphere Replication and VMware NSX both push VIBs to hosts at the point of deployment. If you are using VMware products that push VIBs, you should be conscious of this as you upgrade.

This issue does not affect vCenter Server.

In Summary

While we understand that any upgrade of vSphere can be complicated, there are lots of resources available to help you plan and execute successfully. We collate a whole bunch at vSphere Central, where you can find all kinds of content focused on vSphere, including a dedicated vSphere Upgrade channel. VMware strongly recommends that you stick to current patch levels wherever possible, and this is no exception. To learn more about the issues discussed in this blog post, please see the VMware Knowledge Base.

To keep up to date on the latest information on VMware vSphere why not follow us on Twitter? Just click the link below!

The post vSphere Signing Certificate Expiry – What You Need To Know appeared first on VMware vSphere Blog.

vExpert Cloud Management December 2019 Blog Digest


Every month we publish a digest of blogs created by our vExpert Cloud Management Community. These blogs are written by industry professionals from around the vCommunity and often feature walkthroughs, feature highlights, and news for vRealize Operations, vRealize Network Insight, and vRealize Log Insight. The vExpert title is awarded to individuals who dedicate their time outside of work to share their experiences and useful information about VMware products with the rest of the community. In July of 2019, we spun off a sub-program called vExpert Cloud Management for vExperts who specialize in products from VMware’s Cloud Management Business unit, namely the vRealize Suite. vExperts share their knowledge in many ways including presentations at VMware User Groups (VMUG), writing blog posts, and even creating content on YouTube.

 

December was a busy month for our vExperts. We completed our Zombie VM Hunt, where we challenged the community to come up with ways to combat wasted resources in the datacenter. Applications for the 2020 vExpert program opened up, and of course there were the holidays! Despite everything happening all at once, our community didn’t stop producing great blog posts and VMUG presentations! Below are some of the top blog posts for the month of December 2019 created by our Cloud Management vExperts.

 

UK North West VMUG – 21/11/19 by Lukas Winn (@lukaswinn) United Kingdom

Dell EMC Storage Analytics for vROps by Alain Geenrits (@alaingeenrits) Belgium

VCF on VxRail – Deploying the vRealize Suite (Part 1 – VRSLCM) by David Ring (@DavidCRing) Ireland

VCF on VxRail – Deploying the vRealize Suite (Part 2 – vROps) by David Ring (@DavidCRing) Ireland

 

A HUGE THANK YOU to all of our vExperts for their contributions to the community! For previous blog digests check out the links below.

vExpert Cloud Management November 2019 Blog Digest

vExpert Cloud Management October 2019 Blog Digest

vExpert Cloud Management September 2019 Blog Digest

The post vExpert Cloud Management December 2019 Blog Digest appeared first on VMware Cloud Management.

Introducing the new DRS logic and UI including VM DRS scores


The first release of Distributed Resource Scheduling (DRS) dates back to 2006. Since then, data centers and workloads have changed significantly. With the latest release of VMware Cloud on AWS, revision M9, we introduce the new DRS UI that accompanies the improved DRS logic already introduced in release M5.

The new and improved DRS logic is workload-centric rather than cluster-centric, as it was before. DRS has been completely rewritten to provide a more fine-grained level of resource scheduling, with the main focus on workloads. This blog post goes into detail on the new DRS algorithm and explains how to interpret the metrics shown in the new UI.

The Old DRS

vSphere DRS used to focus on the cluster state, checking whether the cluster needed rebalancing because one ESXi host might be over-consumed while another had fewer resources consumed. If the DRS logic determined it could improve the cluster balance, it would recommend and execute a vMotion, depending on the configured settings. That way, DRS achieved cluster balance by using a cluster-wide standard deviation model.

The New DRS

The new DRS logic takes a very different approach. It computes a VM DRS score on each host and moves the VM to the host that provides the highest VM DRS score. The biggest difference from the old DRS is that it no longer balances host load. Instead, workload balancing on the cluster is improved by focusing on the metric we care most about: virtual machine happiness.

Let’s dig a little bit deeper in the VM DRS Score and what it means.

VM DRS Score

The new DRS logic quantifies virtual machine happiness by using the VM DRS score. First, let me emphasize that the VM DRS Score is not a health score for the virtual machine! It is about the execution efficiency of a virtual machine. The score values range from 0 to 100% and are divided into buckets: 0-20%, 20-40%, and so on.

Obtaining a VM DRS score of 80-100% indicates that there is mild to no resource contention. It does not necessarily mean that a virtual machine in the 80-100% bucket is doing far better than a virtual machine in a lower bucket. That is because many metrics influence the VM DRS score: not only performance metrics, but also capacity metrics are incorporated in the algorithm.

The performance drivers for the VM DRS score are contention-based: think of CPU %ready time, CPU cache behavior, and memory swapping. The reserve resource capacity, or headroom, that the current ESXi host has is also taken into account to determine the VM DRS score. Will the virtual machine be able to burst resource consumption on its current host, and to what level? Are there other ESXi hosts in the cluster that have more headroom available? All these factors play an important role in the calculation of the VM DRS score.

The improved DRS no longer thinks about the relative load between ESXi hosts in a cluster; the main focus is on the happiness of the workloads. Next to the VM DRS Score, DRS presents the Cluster DRS Score in the UI. It is calculated as an aggregation of all the VM DRS scores in the cluster. DRS will try to maximize the execution efficiency of each virtual machine in the cluster while ensuring fairness in resource allocation to all virtual machines.
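As a purely illustrative sketch of that aggregation idea (the actual formula is internal to DRS; a simple mean is an assumption here):

def cluster_drs_score(vm_scores):
    # Assumption: average the per-VM DRS scores; the real DRS
    # aggregation logic may weight virtual machines differently.
    return sum(vm_scores) / len(vm_scores)

print(cluster_drs_score([92, 85, 78, 64]))  # 79.75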

The vSphere cluster summary overview provides insights on what is happening from a DRS perspective.  If you require more information on VM DRS Score, the new UI will provide that information to you.

View All VMs

When clicking the ‘View all VMs’ option in the cluster summary DRS view, you will be presented with an overview of all virtual machines in the cluster and more detailed information about their resource claims and entitlements.

There might be situations where you see a lot of CPU stress, visible as high CPU %ready values in the CPU Readiness column, or a high amount of swapped memory. Those are clear indicators that workload utilization has possibly depleted the cluster compute resources. You can use this information to move workloads to other clusters, or to scale out the cluster resources by adding additional ESXi hosts. The latter can be done automatically in VMware Cloud on AWS using Elastic DRS.

UI Walkthrough

Find the new UI in vCenter Server. When you click on cluster summary overview, the new DRS UI is shown on the right-hand side by default. Expand the DRS view to get immediate insights. Check out this UI walkthrough to get an understanding of what it looks like in vCenter Server.

To Conclude

We are pleased to launch the new DRS UI in the M9 release of VMware Cloud on AWS. The improved DRS logic has proved to be a big improvement when it comes to virtual machine resource entitlement. DRS helps workloads perform as optimally as possible!

Future blog posts will provide more detailed information about the new DRS logic. Stay tuned!

 

 

The post Introducing the new DRS logic and UI including VM DRS scores appeared first on VMware vSphere Blog.

Become a VMware NSX Expert Today


If you’ve wanted to learn about VMware NSX, an L2-L7 networking and security virtualization platform implemented entirely in software, and didn’t know where to start, this is the guide for you. Already an NSX user and want to improve your skills? This is also a great resource for becoming an NSXpert!

Reintroducing the VMware NSX Networking and Security Portfolio

VMware NSX is a network virtualization and security platform that offers a rich, full stack of L2-L7 networking services such as routing, switching, load balancing, and firewalling to connect, protect, automate, and manage workloads across private, public, and hybrid cloud environments as well as branch and cloud edge environments.

VMware NSX Networking and Security Portfolio 2019

The VMware NSX portfolio consists of a complete set of networking and security offerings designed to address customer use-cases supporting workloads based on virtual machines (VMs), containers, and bare-metal.

Here are the core components of the NSX portfolio:

  • NSX Data Center is the flagship NSX platform designed to connect and protect workloads in and across data center environments.
  • NSX Cloud extends the networking and security capabilities of NSX to workloads running natively in public clouds such as AWS and Azure.
  • NSX Distributed IDS/IPS is a fully distributed software intrusion detection and prevention solution to easily achieve compliance, create virtual security zones, and detect lateral threat movement on East-West network traffic.
  • NSX Advanced Load Balancer provides multi-cloud load balancing, web application firewall, and application analytics across on-premises data centers and any cloud.
  • NSX Service Mesh, using the open-source Istio as a foundation, provides application-level visibility, control, and security for enterprise-grade micro-services.

Getting Started with VMware NSX

They say “you have to start somewhere,” and, well, this is it: the beginning. Before you do anything else, get yourself familiar with the VMware NSX Portfolio offerings.

Beginner and Intermediate VMware NSX Resources

Once you’ve gotten the hang of what NSX can offer, it’s time to dive into a class or two to get up to speed on all things network and security virtualization. To “teach you how to fish,” we’ve curated our favorite links to VMware’s official educational materials, plus a few more.

Network Virtualization for Dummies Guide

This comprehensive introductory guide covers:

  • Why now is the right time to virtualize your network
  • How network virtualization works
  • Best practices, tips, and ways to save time

Click to download the Network Virtualization for Dummies Guide now.

On-demand NSX Webinars

Below is a list of NSX webinars that cover a variety of topics highlighting features and use cases such as internal firewalling, network automation, multi/hybrid cloud networking and more. All webinars are available on demand.

Networking and Security Architecture with VMware NSX on Coursera

This eight-week online course equips learners with the basics of network and security virtualization with VMware NSX.

To get the most out of this course, you should have familiarity with generic IT concepts such as routing, switching, firewalling, disaster recovery, business continuity, cloud, and security.

Click to learn more about the NSX on Coursera course.

Visit the VMware NSX YouTube Channel

Within the VMware NSX YouTube channel there are several curated playlists to check out. Here are a few playlist recommendations:

VMworld Networking and Security Sessions On-Demand

During VMworld 2019 the networking and security business unit delivered 91 breakout sessions! Read the VMworld US 2019 networking and security recap blog post for a list of technical sessions to watch on-demand.

Visit the VMworld on-demand video library for all the recordings of networking and security breakout sessions, showcase keynotes and more from VMworld.

Try NSX Hands-on Labs

Ready to test drive NSX? NSX Hands-on Labs (HOL) is a series of labs created directly from VMware product experts. HOLs allow you to evaluate the features and functionality of NSX for free, with no installation required.

Here are a few labs to get you started:

  • NSX-T Getting Started: learn how to configure logical switching, routing, load balancing and distributed firewall in a multi-hypervisor environment.
  • Integrating Kubernetes with NSX-T: learn about Kubernetes Namespaces, services and ingress rules, and security with NSX-T.
  • NSX-T Distributed Firewalling: see how easy it is to use NSX-T to configure distributed firewalls.
  • NSX Cloud: learn how NSX Cloud delivers consistent networking and security policies across enterprise, AWS, & Azure.

Click for more NSX HOLs

Advanced VMware NSX Resources

VMware NSX Certification

Once you’re up to speed, consider getting certified. VMware Network Virtualization certifications are designed to gauge your level of skill designing, implementing, and managing a VMware NSX environment. With a certificate, you can take your career to the next level.

Take our free NSX-T Install, Configure, Manage (ICM) course that covers the key features and functionality of NSX, such as the overall infrastructure, logical switching, routing, services, and micro-segmentation.

Visit our NSX certification site for more information on network virtualization certifications.

VMware NSX-T Reference Design Guide

The NSX-T reference design guide document provides design guidance and best practices for NSX.

Address Challenges of Migrating Data Center Solutions to the Cloud

Download the free guide to learn how VMware Cloud on AWS with NSX networking and security provides a hybrid cloud solution.

What’s Next to Becoming a VMware NSX Expert?

Stay up-to-date on the latest from the VMware NSX team by:

 

The post Become a VMware NSX Expert Today appeared first on Network Virtualization.

Creating and Using Content Library


Content Library has evolved quite a bit since its inception in vSphere 6.0. Some of that evolution has brought new ways to accomplish tasks as well as new features. One of the latest additions is a set of cmdlets for Content Library management via PowerCLI, available in version 11.5.0.

This blog will guide you through working with Content Library: getting a library set up, adding items to the library, and removing a library item when it is no longer needed. I will cover each task using the vSphere Client UI. As an added bonus, tasks covered here will link to the same tasks from a PowerCLI perspective, covered by Kyle Ruddy over on the PowerCLI Blog. This way, whether you prefer running tasks from the vSphere Client or using PowerCLI, we have you covered.

Creating a Content Library

It is important to discuss vCenter Server permissions for Content Library. To create a library, the required privileges are ‘Content library.Create local library’ or ‘Content library.Create subscribed library’ on the vCenter Server node where you are creating the library. vCenter Server comes with a sample role, ‘Content library administrator‘, that can be cloned or edited to suit the need.

Be sure to review and set the required permissions as highlighted below by navigating in the vSphere Client to Menu >> Administration >> Access Control/Roles. Clicking on the role name will display the Description, Usage, and Privileges of the role.

Content Library 6.7

To create a library, begin by logging into the vSphere Client and then from the top Menu select Content Libraries.

Creating Content Library

 

Next, click the “+” (plus sign) to open the New Content Library wizard.

Content Library 6.7

We will need to specify the Content Library Name and any required Notes. Next, select the vCenter Server that will manage the new library. Once the library Name and Notes have been entered, click Next to continue.

NOTE: If only one (1) vCenter Server is being used, no others will show in the list. This is also true if the vCenter Servers are not in Enhanced Linked Mode (ELM).

Content Library 6.7

Here is where the Content Library setup and configuration take place. We can either set this library to be a Local Content Library or we can Subscribe to another library by providing the Subscription URL of that Published Library.

In this example we will choose Local Content Library, and we’ll also enable publishing by ticking the Enable Publishing box. Publishing the library’s contents will allow subscriber libraries to pull in contents from the Publisher. Click Next to continue.

Content Library 6.7

Select a storage location for the library’s contents. This can be a vSphere datastore that exists in inventory or an SMB or NFS path to storage. In this example, a vSAN datastore is being used as the backing datastore for the Content Library. Choose the datastore and then click Next to continue.

NOTE: You can store your templates on NFS storage that is mounted to the vCenter Server Appliance. After the new library is created, the vCenter Server Appliance mounts the shared storage to the host OS.

Content Library 6.7

Review the settings chosen for the Content Library before clicking Finish to complete.

Creating Content Library

When the wizard is done, the vSphere Client will show the details of the Content Library created. Click on the name of the Content Library to continue.

Content Library 6.7

From here administrators can review the details of the Content Library. Easily review and access Storage or Publication information such as capacity/free space and Subscription URL, needed when subscribing to a published library.

Creating Content Library

Adding Items to Content Library

Now that the Content Library is set up, we can begin to add items. Content Library supports importing OVF and OVA templates, scripts, ISO images, certificates, etc. Keeping these items in a library allows them to be shared with other users across multiple vCenter Server instances. Templates stored in Content Library can be used to deploy new virtual machines and vApps. In this example, we will add a PowerCLI (.ps1) script to the local library.

By browsing back to the Content Library listing and finding the Publisher Library, we can now start to add content to the library.

Right click on Publisher Library and choose Import item to begin.

Content Library 6.7

Select Local file and then click UPLOAD FILE to browse files for upload.

Content Library 6.7

In this example, I am uploading a PowerCLI script (“My-New-Script.ps1”) to the Content Library. Select the desired file, click Open to continue.

Complete the details for Item Name & Notes before clicking IMPORT to upload the file to the library.

After the upload has completed, open the library to the Summary page. Click Other Types to view the newly uploaded PowerCLI script. Clicking on the file name will open the details screen.

The details screen displays the file Type, Size, Created (date), and Last Modified (date). We can also remove content from the library by clicking on the ACTIONS menu and selecting Delete.

Confirm the deletion of the selected library item, click OK to continue.

Wrapping Up

In this post, we covered the creation and setup of a Content Library that will be used to publish content to other subscriber libraries. We learned how to add and remove items from Content Library and also discussed the vCenter Server permissions required to create and edit a Content Library. Stay tuned for future posts on Content Library in vSphere.
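If you prefer scripting, the same Content Library information is also exposed through the vSphere Automation REST API. Here is a minimal Python sketch that lists libraries; the vCenter Server address and credentials are placeholders, and certificate verification is disabled for lab use only:

import requests

VC = "https://vcenter.example.com"  # placeholder vCenter Server address
s = requests.Session()
s.verify = False  # lab only; use proper certificates in production

# Authenticate and store the API session token
r = s.post(VC + "/rest/com/vmware/cis/session",
           auth=("administrator@vsphere.local", "password"))
s.headers["vmware-api-session-id"] = r.json()["value"]

# List every content library and print its name and type
for lib_id in s.get(VC + "/rest/com/vmware/content/library").json()["value"]:
    lib = s.get(VC + "/rest/com/vmware/content/library/id:" + lib_id).json()["value"]
    print(lib["name"], lib["type"])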

To learn more about Content Library, please review these resources:

 

Take our vSphere 6.7: Getting Started Hands-On Lab here, and our vSphere 6.7: Advanced Topics Hands-On Lab here!

 

The post Creating and Using Content Library appeared first on VMware vSphere Blog.

Dell EMC VxRail Management Pack for vRealize Operations


Through a joint development effort with the Dell EMC VxRail team, I’m very excited to announce the general availability of the VxRail Management Pack (MP) for vRealize Operations. The Dell EMC VxRail Management Pack for vRealize Operations provides a single, end-to-end view of vSAN clusters powered by the VMware vRealize Operations Manager AI/ML analytics engine. This helps operationalize your VMware SDDC stack by delivering actionable performance analysis to help detect capacity and performance constraints before they cause negative impacts to your data centers.

The management pack consists of an adapter that collects VxRail events, analytics logic specific to VxRail, and three custom dashboards.  These VxRail events are translated into VxRail alerts on vRealize Operations so that users have helpful information to understand health issues along with recommended course of resolution.  With custom dashboards, users can easily go to VxRail-specific views to troubleshoot issues and make use of existing vRealize Operations capabilities in the context of VxRail clusters.

By collecting VxRail events, vSAN metrics, configuration details, inventory, and relationship dependency mappings, it also enables customers to define alerts and policies to create their own custom dashboards from a rich collection of data analytics for VxRail clusters.

 

Prerequisites

  • For vROps 8.0:
    • vCenter Cloud Account configured for the targeted VxRail cluster.
    • vSAN is enabled on vCenter.
  • For vROps 7.5:
    • vCenter Adapter instance configured for the targeted VxRail cluster.
    • vSAN Adapter instance configured for the targeted VxRail cluster.
  • VxRail version(s) 4.5.300 and above in addition to all 4.7.x versions.

 

Installing the VxRail MP:

 

  1. Download the PAK file for Management Pack for VxRail from VMware Marketplace, and save that PAK file to a temporary folder on your local system.
  2. Log in to the vRealize Operations Manager user interface with administrator privileges.
  3. In the left pane of vRealize Operations Manager, click the Administration icon and click Solutions.
  4. On the Solutions tab, click the plus sign.
  5. Browse to locate the temporary folder and select the PAK file.
  6. Click Upload. The upload might take several minutes.
  7. Read and accept the EULA and click Next. Installation details appear in the window during the process.
  8. When the installation is completed, click Finish.

 

Configuring the VxRail MP:

 

1. On the menu, click Administration and in the left pane click Solutions.

2. Select the VxRail Adapter and click the Configure icon.

3. Configure the adapter instance.

4. Click Test Connection to validate the connection.

5. Click Save Settings to add the adapter instance to the list.

6. Verify that the adapter is configured and collecting data by viewing application-related data.

 

Dashboards

The VxRail Management Pack for vRealize Operations includes three dashboards:

 

  1. VxRail Operations Overview
  2. VxRail Capacity Overview
  3. Troubleshoot VxRail

 

To access the dashboards, navigate to the main menu of VMware vRealize Operations Manager and click Dashboard.  Then from the dashboard list, select your desired VxRail dashboard.

 

VxRail Operations Overview Dashboard

The VxRail Operations Overview dashboard provides an aggregated view of the health and performance of your VxRail clusters.  Use this dashboard to get a complete view of your VxRail environment and what components make up the environment.

The widgets in the dashboard help you to understand the utilization and performance patterns for the selected VxRail cluster. The widgets also show the details of cluster storage, cluster compute, cluster-wide IOPS, the average cluster-wide disk latency, alert volume, cluster throughput, and the maximum latency among the capacity disks.

 

VxRail Capacity Overview Dashboard

The VxRail Capacity Overview dashboard provides an overview of VxRail storage capacity and savings achieved by enabling deduplication and compression across all VxRail clusters.  The widgets in this dashboard show the current and historical use trends, future procurement requirements, the details such as capacity remaining, time remaining, and storage reclamation opportunities to make effective capacity management decisions.

These widgets also show the distribution of use among VxRail disks. You can view these details either as an aggregate or at an individual cluster level.

 

VxRail Troubleshooting Dashboard

The Troubleshoot VxRail dashboard helps you view the properties of your VxRail cluster and the active alerts on the cluster components. The cluster components include hosts, VMs, or disk groups.

You can select a cluster from the dashboard and then list all the known problems with the objects associated with the cluster. Objects include clusters, datastores, disk groups, physical disks, and VMs served by the selected VxRail cluster.

You can view the key use and performance metrics from the dashboard and the use and performance trend of the cluster for the last 24 hours. You can also view historical issues and analyze the host, disk group, or physical disk.

 

 

The post Dell EMC VxRail Management Pack for vRealize Operations appeared first on VMware Cloud Management.

Infrastructure as Code and vRealize Automation


Infrastructure as Code

Infrastructure as Code is a concept that was created to solve the problems faced when managing infrastructure in the “Cloud Age” by applying principles more often used in software development. Modern, cloud-like infrastructure is dynamic in nature and can lead to server sprawl, configuration drift, and “snowflakes”. How do you ensure the servers deployed into different environments will be functionally the same when a new version of an application is released? Over time, configuration changes to environments are not consistently applied, and each environment becomes a unique “snowflake” with its own peculiarities. This can lead to deployment problems, with these environments becoming inconsistent, unreliable, and fragile.

Infrastructure as Code solves this by capturing all the components and configuration of your infrastructure (e.g. virtual machines, containers, networks, load balancers and topology) in a code format that describes the desired end state. This code format can then be interpreted by a provider to provision, configure and manage the infrastructure in repeatable, reliable and on-demand fashion. The code description can then be source controlled and managed as a developer would an application’s source code.

Changes to the configuration of an environment are made in the source code (not the target environment) which is then interpreted and fulfilled by the provider. The change is self-documented because the code is a well-known format that describes the end state. The change can be staged across multiple environments to ensure consistency. Environments can also be rolled back to previous states should a change be detrimental to the environment simply by utilising source control versioning.

Infrastructure as Code is typically declarative – it describes the desired state of the infrastructure, but it doesn’t necessarily describe how to achieve that desired state, this is left up to the Infrastructure as Code provider. Declarative code differs from an imperative approach, which describes a series of steps or commands that are executed to achieve the desired state. Imperative code typically does not have the same scalability, reusability or resilience as the declarative approach.

Deployed environments should be configured in the same way regardless of the starting state of the environment, a property known as idempotency. This can be achieved by modifying the existing environment to the desired state (mutable infrastructure – it can be “mutated” or changed), or by destroying the current state and deploying the desired one (immutable infrastructure – it cannot be “mutated” or changed).

Some Infrastructure as Code providers are platform-specific, such as AWS CloudFormation or Azure Resource manager, whereas others are more modular and can deploy to multiple platforms, for example Hashicorp Terraform or Pulumi.

vRealize Automation as an Infrastructure as Code provider

vRealize Automation can deploy to a growing range of cloud endpoints (AWS, Azure, GCP, vSphere and VMware Cloud on AWS), and blueprints created in vRealize Automation are written declaratively in YAML. Blueprints describe a desired end-state of virtual machines, networking, load balancers and other infrastructure components that are satisfied by the various Cloud Accounts and Integrations. With GitLab and GitHub integration the YAML blueprints can be subject to source control management, with versioning synchronised between blueprints and the source control system. In these ways vRealize Automation can be described as a typical Infrastructure as Code provider.
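As a flavor of that declarative YAML, here is a minimal blueprint sketch; the resource name and the image and flavor mappings are illustrative:

formatVersion: 1
inputs: {}
resources:
  WebServer:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small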

Source Control integration


Deployment change management in vRealize Automation is provided in a slightly different way than you’d expect from a typical Infrastructure as Code provider: deployments are not directly linked to a blueprint, so if the blueprint code changes, the deployment will not be updated automatically. The day-2 “update” action allows us to update the input values of a deployment, which may mean blueprint components are deployed, re-deployed, or removed from a deployment. It also allows us to bring the deployment up to date with newer versions of the blueprint. The plan stage of the action shows what’s going to happen when the update is applied; an example is shown below:

Deployment update plan

Conclusion

All the content in vRealize Automation – from blueprints and actions in Cloud Assembly, to forms in Service Broker, pipelines in CodeStream and workflows in Orchestrator – is available as source code that can be version controlled, with integrations available for popular source control endpoints. Cloud-agnostic components allow blueprints to describe infrastructure in a declarative way, delivering the same functionality regardless of the cloud environment they’re deployed to. And for use cases that require an imperative approach, vRealize Orchestrator, Action Based Extensibility, and CodeStream pipelines can be used to build the desired state.

If you’re interested in an introduction to the vRealize Automation Terraform Provider, which allows you to further extend the Infrastructure as Code capabilities of vRealize Automation, take a look at my next blog post, Getting Started with the vRealize Automation Terraform Provider.

The post Infrastructure as Code and vRealize Automation appeared first on VMware Cloud Management.


Getting started with the vRealize Automation Terraform Provider


In my previous post, Infrastructure as Code and vRealize Automation, I talked about the principles of Infrastructure as Code and how vRealize Automation can help you put some of those ideas into action. I also mentioned Terraform as a complementary technology to vRealize Automation. There are several VMware providers for Terraform (including vSphere, NSX-T, vCloud Director, and vRealize Automation 7) which allow customers to consume VMware products in a programmatic way. The vRealize Automation Terraform Provider, along with a wide range of 3rd-party providers, can augment the functionality provided by vRealize Automation with additional external components (as demonstrated recently on Grant Orchard’s blog).

Installing the vRealize Automation Terraform Provider

To start using the vRA Terraform provider you’ll need to have Terraform and Go installed on your local machine. For Mac users you can install both using homebrew, for Windows users I’d recommend chocolatey. Up-to-date installation instructions for the vRA terraform provider are available on the Github repository, the steps I’m using below are used for a Mac.

I’ve created a new folder to hold my Terraform configuration files:

  • main.tf – this is my main terraform file in which I am describing the desired state of my environment
  • terraform.tfvars – used for setting variable values
  • variables.tf – used for declaring variables

Generating an API token

In order for Terraform to authenticate with the vRealize Automation API we need an API token. This can be an access token, a refresh token, or an API token issued for a specific purpose. API tokens can be limited to a specific organisation scope and functionality, can be revoked, and can be set to expire. Access and refresh tokens are based on your login credentials and expire in 8 hours or 6 months respectively, but they share the same scope and permissions as your user account and cannot be revoked without disabling the account. For vRealize Automation 8 (on-premises) you will need to use the instructions or scripts provided to retrieve a refresh token.
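For vRA 8, one such approach is a short call against the identity API. A Python sketch, with the hostname and credentials as placeholders and certificate verification disabled for lab use only:

import requests

VRA = "https://vra.example.com"  # placeholder on-premises vRA 8 FQDN
body = {"username": "configurationadmin", "password": "<password>"}
r = requests.post(VRA + "/csp/gateway/am/api/login?access_token",
                  json=body, verify=False)  # lab only
print(r.json()["refresh_token"])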

To create a new API Token, log into VMware Cloud Services Console and navigate to User Settings > My Account > API Tokens > Generate Token

Creating a new API Token

The generated token is given a meaningful name, an expiry, and scoped to a specific organization and permissions. Once the token is generated be sure to save it, we’ll need it in the next steps.

Configuring the vRealize Automation Terraform Provider

The vRA provider requires two pieces of information for the initial configuration: the refresh token and the API URL.

Edit the terraform.tfvars file created earlier to include these two variables. We also need to tell terraform what provider we’re using, so edit the main.tf file to include the provider definition as shown in the code below.
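A minimal version of these files, assuming the variable names url and refresh_token, might look like this:

# terraform.tfvars
url           = "https://api.mgmt.cloud.vmware.com"
refresh_token = "<your-api-token>"

# variables.tf
variable "url" {}
variable "refresh_token" {}

# main.tf
provider "vra" {
  url           = var.url
  refresh_token = var.refresh_token
}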

The provider vra definition references the two variables we’ve configured in the terraform.tfvars file through the “var” keyword. Now we can run terraform init to see if the provider configures successfully.

Terraform Init

So far, so good – the provider is initialised and is ready to start configuring vRealize Automation!

Configuring Infrastructure Components with the vRealize Automation Terraform Provider

The first thing I want to do is configure some of the needed infrastructure within Cloud Assembly in order to deploy a virtual machine – the following components need to be configured to support my workload:

  • Cloud Account
  • Cloud Zone
  • Flavors
  • Images
  • Project

An AWS Cloud Account requires two additional variables in our terraform.tfvars file – an access key, and a secret key. These are my AWS credentials (that you might use for the aws cli, for example). The variables need to be declared, and while you can declare them in the main.tf file it’s better practice to declare them in a separate variables.tf file to allow for code re-use.

This is where the declarative and self-documenting nature of Infrastructure as Code comes into play – we can see at first glance that this will create a new Cloud Account named “AWS Cloud Account”, using the variables declared in variables.tf. There are two regions configured, and the Cloud Zone should have a tag – “cloud:aws”.
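A sketch of that resource could look like the following; the regions are assumptions, and the tag is shown here on the account (a vra_zone resource could carry it instead):

resource "vra_cloud_account_aws" "this" {
  name       = "AWS Cloud Account"
  access_key = var.access_key   # declared in variables.tf
  secret_key = var.secret_key
  regions    = ["us-east-1", "us-west-1"]  # assumed regions

  tags {
    key   = "cloud"
    value = "aws"
  }
}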

Executing terraform plan will describe what will happen if we run the code:

Terraform plan

Now that we have a good idea what the code will do, we can use terraform apply to execute the code – the image below shows that the Cloud Account has been created with the correct regions selected, and the “cloud:aws” tag assigned.

An AWS Cloud Account deployed and configured with Terraform

The same method can be used to build up a full description of the infrastructure end state we want to see – including the Cloud Zone, Flavor Mapping, Image Mapping and Project – everything that we need to begin deploying workloads.

Re-running terraform apply results in the new infrastructure being configured:


 

Creating a Blueprint

All of this infrastructure is great, but it’s really about the workloads. The following code creates a Blueprint in the Project, using the Flavor and Image mappings created earlier. The “content” property of the resource is the YAML code of the blueprint itself.
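A sketch of such a resource, assuming a vra_project.this resource was defined as part of the earlier infrastructure (the names here are illustrative):

resource "vra_blueprint" "this" {
  name       = "Basic VM"
  project_id = vra_project.this.id

  # The blueprint YAML itself, embedded as a heredoc
  content = <<-EOT
    formatVersion: 1
    inputs: {}
    resources:
      Cloud_Machine_1:
        type: Cloud.Machine
        properties:
          image: ubuntu
          flavor: small
  EOT
}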

Once again, running terraform apply will create the blueprint.

Deploying a blueprint

Finally, let’s deploy the blueprint using the ID of the blueprint we just created, specifying the inputs. If you have multiple versions of a blueprint, you can specify the desired one using the “blueprint_version” property; without it, the deployment will use the “Current Version”.
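A sketch of that deployment resource, reusing the hypothetical names from above (this blueprint declares no inputs, so the map is empty):

resource "vra_deployment" "this" {
  name         = "Basic VM Deployment"
  project_id   = vra_project.this.id
  blueprint_id = vra_blueprint.this.id
  inputs       = {}  # would mirror the blueprint's input schema
}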

The blueprint is deployed and configured with the inputs specified in the vra_deployment resource.

Conclusion

Hopefully this brief introduction to the vRealize Automation Terraform Provider and the world of Infrastructure as Code has been useful; I’ve only scratched the surface of what you can create. You can find out more about the vRealize Automation provider, complete with code examples, on the GitHub pages.

The post Getting started with the vRealize Automation Terraform Provider appeared first on VMware Cloud Management.

vRA 8 Action Placement Demystified


The new extensibility framework in vRA 8, called Action Based Extensibility (ABX), provides the ability to leverage FaaS (Function-as-a-Service) to write simple scripts in various programming languages (e.g. Python, Node.js) to fulfil both simple and complex extensibility use cases. It is available both in vRA 8 (on-premises) and in vRA Cloud. An Action can be generic, just a regular script not semantically tied to a specific provider, and the ABX framework can deploy and execute it transparently on both the public and private cloud. Currently, there is support for three FaaS providers:

  • On Prem
    • Using the Extensibility Cloud Proxy for vRA Cloud
    • With the built-in FaaS engine in vRA 8
  • AWS Lambda
  • Azure Functions

Which provider is picked for action execution as part of virtual machine provisioning depends on how the framework is configured through the following three settings:

  1. The FaaS provider configured in the action itself, which is picked automatically by the underlying algorithm when Auto is selected.

  2. The cloud zones configured in the project where the action is created.

  3. For vRA Cloud only, the extensibility constraints configured in the project.

FaaS Provider Configured as part of the Action

ABX supports four distinct values: three corresponding to a specific provider (AWS, Azure, or On Prem) plus Auto. A specific provider should be selected if the code of the action calls provider-specific APIs of AWS, Azure, or vSphere. Typically, the default value of Auto should be left selected, as most of the time the code of the action will be generic, and the underlying placement algorithm will select the most appropriate provider based on the cloud zones in the project and the provisioned resource.

If the code has to interact with on prem systems and APIs, it’s natural to make the action always run on the on prem provider, as code executed in the public cloud typically will not be able to reach these systems and APIs.

Cloud Zones in the Project

If the value in the action is Auto, which is the default, then the placement logic will look into the cloud zones available in the project in which the action is created. It will take into account the priority and the extensibility constraints. Typically the zone with the highest priority within the project will be selected for executing the action, where a lower number means higher priority.

Extensibility Constraints (for vRA Cloud only)

As there might be multiple Extensibility Cloud Proxies deployed in a vRA Cloud organization, for on-prem execution of actions the appropriate proxy is selected based on the configured extensibility constraint tag in the project, which must match a capability tag on an ABX on-prem integration. For on-prem vRA this is not necessary, as there is only one built-in on-prem action execution engine and it is matched by default by the placement algorithm.

Example with vRA 8 – Testing an Action

The ABX on prem provider comes out of the box with the vRA 8 installation.

We create a very simple python action with Auto pre-selected in the FaaS provider field. Note that actions are created in projects, so at least one project must already exist in the system before attempting to create the action.

def handler(context, inputs):
    greeting = "Hello, {0}!".format(inputs["target"])
    print(greeting)

    outputs = {
      "greeting": greeting
    }

    return outputs

Saving the action and clicking the Test button triggers the action, and if we observe the trace, we will see that the on-prem provider was selected by the algorithm.

Unless AWS or Azure cloud zones have been added to the project with an appropriate priority or tags, on prem is selected by default in vRA 8.

Example with vRA Cloud – Executing an action as part of VM provisioning

For this example, we have a Cloud Extensibility Proxy already deployed in our private data center and an ABX On Prem integration tagged with the following capability tag.

We also have a project with three prioritized cloud zones, one for each supported provider.

We have a very simple python action with Auto pre-selected in the FaaS provider field. The code is exactly the same as in the example above.

def handler(context, inputs):
    greeting = "Hello, {0}!".format(inputs["target"])
    print(greeting)

    outputs = {
      "greeting": greeting
    }

    return outputs

We create a subscription for the Compute Provision topic and select our action as the runnable item.

We define a simple blueprint with a single virtual machine, and make sure that it always gets deployed on vSphere by setting the platform:vsphere constraint which matches our vSphere cloud zone in the project.

inputs: {}
resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      constraints:
        - tag: 'platform:vsphere'

Once the blueprint is deployed, our action is triggered and the placement logic checks the zone in which the machine is being provisioned, which in this case is vSphere. Since the provider value in the action is Auto and the VM is vSphere, the algorithm decides that the action must be executed on prem. It checks the extensibility constraints in the project and finds a match with a Cloud Extensibility Proxy, on which it subsequently executes the action.

The trace log of the action run shows exactly which provider was selected.

If, for example, the provisioning placement algorithm chooses to deploy the VM on AWS, then the ABX action placement logic would have selected the AWS cloud zone in the project and would have executed the action on AWS.

To override this logic, set the ignore.vm.endpoint custom property to true in the request. If this is done, the action placement algorithm will ignore the selected cloud zone for the virtual machine and will take into account all cloud zones in the project according to their priority.
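One way to pass that property is on the machine resource in the blueprint; whether you set it there or directly in the deployment request inputs is an implementation choice, so treat this placement as an assumption:

resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      ignore.vm.endpoint: true  # assumption: supplied as a custom property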

When an action is run with the Test button or through the API, there is obviously no real virtual machine provisioning, so all the zones in the project are taken into account when selecting the provider.

Useful Tips

  • In most cases, the Auto setting in actions allows for the most flexibility, as provider selection will be based on the cloud zones of the project. If the setup of cloud zones changes, it will not be necessary to modify the configuration of the action.
  • When testing an action during the implementation, it makes sense to select a specific provider, as typically the action should be tested on all providers sequentially in a predictable manner. This ensures that the code is generic and can run on all supported platforms, which is usually the best practice.
  • The action run Trace is helpful for determining how the provider for the action run was selected.
  • Using the public cloud FaaS providers leads to small additional cost for the organisation, as services like AWS Lambda or Azure Functions are being utilised for executing the code. Typically, running actions on prem does not incur any additional cost.

Further Reading

The post vRA 8 Action Placement Demystified appeared first on VMware Cloud Management.

OVAs Are Now A Content Source in vRealize Automation (Welcome Bitnami)


 

There is a great new feature in vRealize Automation Cloud that highlights an acquisition VMware made last year. Bitnami became part of the VMware family late in 2019, bringing an entire library of prepackaged application stacks to the VMware Marketplace. Bitnami makes it easy to get your favorite open source software up and running on any platform, including your laptop, Kubernetes, and all the major clouds. One of the formats provided for these popular application stacks is the Open Virtual Appliance (OVA) format. The latest feature to be introduced in Service Broker is the ability to deploy any of the Bitnami OVAs available from the Bitnami community catalog. In this blog we will walk through setting up OVAs as a content source for Service Broker and adding them to the self-service catalog capabilities of Service Broker in vRealize Automation Cloud.

 

The Market Place in Cloud Assembly:

From within Cloud Assembly in vRealize Automation Cloud there is a tab you can select called Marketplace:

In the Marketplace you can import pre-created blueprints for Cloud Assembly as well as select from images (OVAs) that are available. Many of these images are brought to us from the Bitnami Community Catalog.

 

Service Broker Catalog:

What if you want to provide these OVAs directly to your users in a self-service catalog? Service Broker gives you that capability. Service Broker allows you to present all kinds of items in a self-service catalog and show users only the items they need to see, based on the governance you put in place. With the introduction of OVAs as a content source in Service Broker, you can now add the application stacks available from Bitnami as direct deployments from the Service Broker catalog.

 

Let’s set it up:

 

Creating OVAs as a Content Source:

 

From within Service Broker select Content & Policies, then Content Sources, and then select New.

From the list of possible content sources select Marketplace VM Template – OVA.

Now you can configure the new content source by giving it a name; in this example we called it Marketplace OVAs. Then select the Source Integration Account. This is the account you used to set up the My VMware integration in Cloud Assembly. Finally, select the Validate button and, once successful, select Create & Import.

Now select Content Sharing from the left menu, select the project(s) you would like to add the content to, then select ADD ITEMS.

A new window will appear where you can select the new content to be added to the project. First, click the dropdown box that says Content Sources and select All Content. This lists all the individual content from all content sources. Let’s find our WordPress OVA by using the filter and entering WordPress. You will see that it narrows the content to anything with WordPress in the title. Let’s select the basic WordPress appliance for this example. Once selected, click the SAVE button.

You will now see the WordPress OVA as a content item for your project in Service Broker.

Finally, you can customize the catalog item from the CONTENT menu on the right side of the Service Broker navigation pane. You have the option to add a custom icon or create a custom form using the Custom Form Editor.

 

Now you can click on the Catalog view in Service Broker and see the new catalog item for deploying the WordPress OVA, supplied by Bitnami, as a self-service item.

 

Summary:

With the introduction of OVAs as a content source in the Service Broker service of vRealize Automation, you can utilize the many OVAs available in the VMware Marketplace. Hopefully you found this quick introduction into this great new capability useful. Check out some of the other blogs brought to you by the great teams within VMware Cloud Management.

 

Other Blogs to Check Out:

Using the vRealize Automation Terraform Provider

vRealize Action Placement Demystified

Infrastructure as Code and vRealize Automation

 

 

 

 

The post OVAs Are Now A Content Source in vRealize Automation (Welcome Bitnami) appeared first on VMware Cloud Management.

vRA Action Placement Demystified

$
0
0
This post was originally published on this site ---

The new extensibility framework in vRealize Automation (vRA) called Action Based Extensibility (ABX) provides the ability to leverage FaaS (Function-as-a-Service) to write simple scripts in various programming languages (e.g. Python, Node.js) to fulfill both simple and complex extensibility use cases. It is available in vRA 8.0 (on-prem) and in vRA Cloud.

An Action can be generic, a regular script not semantically tied to a specific provider, and the ABX framework can deploy and execute it transparently on both the public and private cloud. Currently, there is support for three FaaS providers:

  • On-Prem
    • Using the Extensibility Cloud Proxy for vRA Cloud
    • With the built in FaaS engine in vRA
  • AWS Lambda
  • Azure Functions

Which provider will be picked for action execution as part of virtual machine provisioning depends on how the framework is configured through the following three configuration steps.

The FaaS provider configured in the action itself, which will be automatically picked by the underlying algorithm when Auto is selected.

The cloud zones configured in the project where the action is created.

For vRA Cloud only, the extensibility constraints are in the project.

FaaS Provider Configured as part of the Action

ABX supports four distinct values, three of them corresponding to the desired specific provider (i.e. AWS, Azure or on-prem) and Auto. A provider should be selected if the code of the action calls specific APIs of AWS, Azure or vSphere, i.e. in case it is provider specific. Typically, the default value of Auto should be left selected, as most of the time the code of the action will be generic, and the underlying placement algorithm will select the most appropriate provider based on the cloud zones in the project and the provisioned resource.

If the code has to interact with on -rem systems and APIs, it’s natural to make the action always run on the on-prem provider, as code executed in the public cloud typically will not be able to reach these systems and APIs.

Cloud Zones in the Project

If the value in the action is Auto, which is the default, then the placement logic will look into the cloud zones available in the project in which the action is created. It will take into account the priority and the extensibility constraints. Typically the zone with the highest priority within the project will be selected for executing the action, where a lower number means higher priority.

Extensibility Constraints (for vRA Cloud only)

As there might be multiple Extensibility Cloud Proxies deployed in a vRA Cloud organization, for on-prem execution of actions the appropriate proxy is selected based on the configured extensibility constraint tag in the project, which must match a capability tag on an ABX On-prem integration. For on-prem vRA this is not necessary as there is only one build in on-prem action execution engine and it’s matched by default by the placement algorithm.

Example with vRA 8 – Testing an Action

The ABX on-prem provider comes out of the box with the vRA 8 installation.

We create a very simple python action with Auto pre-selected in the FaaS provider field. Note that actions are created in projects, so at least one project must already exist in the system before attempting to create the action.

handler(context, inputs):
    greeting = "Hello, {0}!".format(inputs["target"])
    print(greeting)

    outputs = {
      "greeting": greeting
    }

    return outputs

Saving the action and clicking on the Test button triggers the action and if we observe the trace, we will see that the on-prem provider was selected by the algorithm.

Unless AWS or Azure cloud zones have been added to the project with an appropriate priority or tags, on-prem is selected by default in vRA 8.

Example with vRA Cloud – Executing an action as part of VM provisioning

For this example, we have a Cloud Extensibility Proxy already deployed in our private data center and an ABX on-prem integration tagged with the following capability tag.

We also have a project with three prioritized cloud zones, one for each supported provider.

We have a very simple python action with Auto pre-selected in the FaaS provider field. The code is exactly the same as in the example above.

handler(context, inputs):
    greeting = "Hello, {0}!".format(inputs["target"])
    print(greeting)

    outputs = {
      "greeting": greeting
    }

    return outputs

We create a subscription for the Compute Provision topic and select our action as the runable item.

We define a simple blueprint with a single virtual machine, and make sure that it always gets deployed on vSphere by setting the platform:vsphere constraint which matches our vSphere cloud zone in the project.

inputs: {}
resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      constraints:
        - tag: 'platform:vsphere'

Once the blueprint is deployed, our action is triggered and the placement logic checks the zone in which the machine is being provisioned, which in this case is vSphere. Since the provider value in the action is Auto and the VM is vSphere, the algorithm decides that the action must be executed on-prem. It checks the extensibility constraints in the project and finds a match with a Cloud Extensibility Proxy, on which it subsequently executes the action.

The trace log of the action run shows exactly which provider was selected.

If, for example, the provisioning placement algorithm chooses to deploy the VM on AWS, then the ABX action placement logic would have selected the AWS cloud zone in the project and would have executed the action on AWS.

To override this logic, set the ignore.vm.endpoint custom property to true in the request. If this is done, the action placement algorithm ignores the cloud zone selected for the virtual machine and takes into account all cloud zones in the project according to their priority.
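
As a sketch, assuming the custom property can be supplied through the machine's properties in the blueprint (one common way to place a custom property on the request), the override might look like this:

resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      ignore.vm.endpoint: true
      constraints:
        - tag: 'platform:vsphere'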

When an action is run with the Test button or through the API, there is obviously no real virtual machine provisioning, so all the zones in the project are taken into account when selecting the provider.

Useful Tips

  • In most cases, the Auto setting in actions allows for the most flexibility, as provider selection will be based on the cloud zones of the project. If the setup of cloud zones changes, it will not be necessary to modify the configuration of the action.
  • When testing an action during implementation, it makes sense to select a specific provider, so the action can be tested on each provider in turn in a predictable manner. This ensures that the code is generic and can run on all supported platforms, which is usually the best practice.
  • The action run Trace is helpful for determining how the provider was selected.
  • Using the public cloud FaaS providers incurs a small additional cost for the organization, as services like AWS Lambda or Azure Functions are used to execute the code. Running actions on-prem typically incurs no additional cost.

Further Resources

The post vRA Action Placement Demystified appeared first on VMware Cloud Management.

Migration from VMware NSX for vSphere to NSX-T


Migration to VMware NSX-T Data Center (NSX-T) is top of mind for customers who are on NSX for vSphere (NSX-V). Broadly speaking, there are two main methods to migrate from NSX for vSphere to NSX-T Data Center: In Parallel Migration and In Place Migration. This blog post is a high-level overview of the above two approaches to migration.

2 Methods for VMware NSX Migration

Customers can take one of two approaches to migration.

In Parallel Migration:

In this method, the NSX-T infrastructure is deployed in parallel with the existing NSX-V based infrastructure.  While some components of NSX-V and NSX-T, such as management, could coexist, the compute clusters running the workloads would run on their own hardware.  This could be net-new hardware or unused hardware reclaimed from NSX-V.

Workload migration in this approach can take a couple of different forms.

  • Cattle:  New workloads are deployed on NSX-T and the older workloads are allowed to die over time.
  • Pets:  Lift and shift workloads over to the new NSX-T infrastructure.

In Place Migration

There is a simpler method, though!  A method that doesn’t require dedicated hardware: an in-place migration.  Curious?  This method uses NSX-T Data Center’s built-in tool, the Migration Coordinator.


A Closer Look at NSX-T Migration Coordinator

Migration Coordinator is built into NSX-T Data Center and is available from the NSX-T 2.4 release onwards. It’s accessible from the NSX-T Data Center GUI by clicking System -> Migrate.

This service is disabled by default, because migration from NSX for vSphere to NSX-T Data Center is not an everyday task.  Enable it by running the command start service migration-coordinator on the NSX-T Manager console.
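
For example, on the NSX-T Manager console (the prompt shown is illustrative):

nsx-manager> start service migration-coordinator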


The Migration Coordinator tool not only helps migrate from NSX for vSphere to NSX-T Data Center using the same hardware, but also migrates the actual configuration over via the GUI. It migrates both the networking components with their configuration and the configured security posture, including the rules and groups.

5 Pre-Migration Steps

The following steps are necessary before starting the Migration Coordinator:

  1. Install NSX-T (Migration Coordinator is a tool built into NSX-T)
  2. Install NSX-T Edges (do not configure)
  3. Provide a TEP pool
  4. Create a Compute Manager pointing to the vCenter Server connected to NSX-V
  5. Ensure NSX-V is in a stable, healthy state with no pending changes, since no changes are allowed during the migration phase

5 Step Process for Migration Coordinator

Step 1: Import Configuration

In this step, we connect to NSX-V and import the existing configuration.


Step 2: Resolve Any Configuration-Related Issues When Migrating from NSX-V to NSX-T

This step provides a GUI to edit and accept the imported configuration so that it suits NSX-T.  For example, the Resolve Configuration page can be used to accept that a host won’t be migrated because NSX-V is not installed on it.  On this page, we also enter the VLAN to use for the NSX-T Edge Transport Nodes.


Step 3: Migrate Configuration to NSX-T

The next step in this process is the actual migration of the configuration from NSX-V to NSX-T.


Step 4: Migrate Edges

Next, we migrate the edges.


Step 5: Migrate Hosts

Finally, we migrate the hosts over.  There are a few options here, such as whether the migration happens serially or in parallel, and whether to pause after migrating each group.


Migration Coordinator Demo

Curious how this works?  Check out the following short demo video of the Migration Coordinator tool in action.

 

VMware NSX-T Technical Resources

Want to learn more?  Please check out the following NSX-T resources:

Documentation:

Design Guides on NSX-T:

Try out NSX-T for free:

The post Migration from VMware NSX for vSphere to NSX-T appeared first on Network Virtualization.
