Channel: VMPRO.AT – all about virtualization

Creating an Asset Inventory for vSphere Infrastructure


We often talk about how the two biggest ways to stay secure in IT are regular patching and good account & password hygiene. However, there is a big prerequisite to both of these, one that is almost always overlooked: an asset inventory. Without a comprehensive inventory of the devices, virtual machines, and OSes that are your responsibility, there is no way you can secure them. You cannot patch what you don’t know about.

Some IT methodologies, like ITIL, promote tools like a Configuration Management Database, or CMDB. A full-fledged CMDB is a nice thing to have, but unless it’s updated as part of every provisioning and decommissioning process it quickly falls out of date. A CMDB is an application unto itself, and while large organizations can dedicate resources to developing and maintaining a tool like a CMDB, as well as all the processes & applications around it, most IT shops will benefit from much simpler approaches. In fact, most can get away with a simple Microsoft Excel spreadsheet. Keeping it simple means that you spend more time solving the problem itself, versus wrestling with the tools that are supposed to help you.

Let’s explore some ways to quickly and accurately inventory a virtual infrastructure. Ideally you’d use multiple methods, because not every tool will find all the assets. For example, some operating systems do not respond to ping (ICMP echo) out of the box, so if you only use a mechanism that pings IPs then you will miss something. Similarly, an inventory approach focused on vCenter Server will miss other assets, like network switches, iDRAC and iLO controllers, and the like.

We also urge caution about keeping too much information in an inventory. An inventory is a snapshot, taken at a point in time, and it will quickly diverge from reality. An inventory is also not the authoritative source for most of the information. The infrastructure and devices themselves are the authoritative source. Keep only what you need to know, be thoughtful about things you add to it, and practice the time-honored tradition of “trust but verify” when it comes to the data.

vSphere Client

Many people don’t realize that, in the lower-right corner of the inventory displays, the HTML5-based vSphere Client offers an “Export” button which will generate a CSV with a customizable amount of information in it:

CSV Export from vSphere Client

Handy, right? The HTML5 client is one of the big reasons to stay current with major releases of vSphere (along with other security features like TPM support & host attestation, Secure Boot for both ESXi and guest OSes, support for Microsoft Device Guard & Credential Guard, VM Encryption, and much more).

PowerCLI

PowerCLI is a perennial favorite of vSphere admins, and it’s easy to see why: it allows someone to do repetitive tasks in a controlled & exact manner. In fact, a number of the tools we’ll mention later are PowerCLI scripts themselves. An inventory is a great place to start learning PowerCLI if you haven’t had time previously. Get it from code.vmware.com or simply by running “Install-Module VMware.PowerCLI” in PowerShell.

Once you’ve used “Connect-VIServer your.vcenter.server.ip.or.dns” to connect and authenticate to vCenter Server, you can use the Get-VM and Get-VMGuest cmdlets to easily grab data about the VMs. In these animations I’m using the Windows PowerShell ISE, which is very helpful because it’ll suggest cmdlets and let you use the Tab key to cycle through the parameters available to you. I’m using the pipe operator (the vertical bar) to feed the output of one cmdlet into the next:

PowerCLI Get-VM Get-VMGuest
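If you would rather copy and paste than follow the animation, here is a minimal sketch (the vCenter address is a placeholder, and the property names shown may vary slightly between PowerCLI versions):

# Connect to vCenter, then list each VM along with its guest OS details
Connect-VIServer your.vcenter.server.ip.or.dns
Get-VM | Get-VMGuest | Select-Object VM, OSFullName, IPAddress, State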

You can do the same for the hosts themselves using the Get-VMHost cmdlet:

PowerCLI Get-VMHost

If you’d like the data saved to a file you can pipe the output into the Out-File cmdlet, too. Give it a path like “Out-File C:\Users\username\Desktop\inventory.txt”. There are also cmdlets like Export-Csv which can be helpful for formatting data into something Excel can read. Searching the Internet for examples of these cmdlets will get you thousands of results.
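As a hedged example (the paths and chosen properties are just illustrations, adjust them to taste):

# Write a quick host inventory to a text file and a VM inventory to a CSV that Excel can open
Get-VMHost | Out-File "C:\Users\username\Desktop\host-inventory.txt"
Get-VM | Select-Object Name, PowerState, NumCpu, MemoryGB, Notes | Export-Csv "C:\Users\username\Desktop\vm-inventory.csv" -NoTypeInformation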

nmap

As a longtime UNIX sysadmin I’ve thought of nmap as a close and trusted friend, and indeed it recently turned 20 years old! Nmap is a port scanner and is sometimes thought of as a “hacking tool,” but it’s simply a versatile way to figure out what’s on the network. Most Linux distributions ship with nmap, and you can download a version for Windows from nmap.org, too. It’s very powerful and can do quite a number of scans, some more dangerous than others (see all your options with “nmap -h”). A simple “ping” scan of the network can be done with:

nmap -sn 192.168.1.0/24

nmap

Add the “-v” parameter to increase verbosity. If you use a more invasive scanning setting you can also see port information, and with the “-O” setting nmap will take a guess at what OS the device is. There are a lot of examples on how to use nmap at nmap.org, too. If you are experimenting with the different scanning options I urge you to be gentle (TCP connect scans are usually a good starting place) and to scan test equipment & OSes first, working up to full environment scans.
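For example, a relatively gentle starting point is a verbose TCP connect scan with OS detection against a single test host (OS detection requires root/administrator privileges, and the address here is only a placeholder):

nmap -sT -v -O 192.168.1.10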

Scanning like this generates log entries on hosts, can trigger intrusion detection and other security systems, can generate a lot of network traffic, and in extreme cases can cause denial-of-service situations. Don’t scan things that belong to others, for example. That said, if you are up-front with your organization about the need for accurate system inventories in order to ensure proper patching and risk management, and make sure your team and your management are on board, you’ll remove most political obstacles. Don’t forget to remind folks that devices on the network, like iDRACs, iLOs, and the network switches themselves, don’t appear in other inventories, so it’s important to find and secure them. After all, it just takes one rogue or unpatched device (like a fish tank IoT device) to open a door for bad actors.

RVTools

RVTools is a Windows .NET application developed by Rob de Veij that reads data from your vSphere environment and can export it in native Excel and CSV formats. It has been around a long time, is extremely comprehensive, and is straightforward to use to get information about VMs, guest OSes, and ESXi hosts:

vDocumentation

vDocumentation is a set of PowerShell cmdlets by Ariel Sanchez Mora that lets you query lots of information from your environment. It’s installable via the PowerShell gallery and needs PowerCLI to operate. It can dump information about patches, network settings, and the like so that you can see and verify the state of your hosts. It’s focused more on the infrastructure itself, rather than the guest OSes, but that makes it another useful tool when we’re trying to figure out if our inventory is missing anything.
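A minimal sketch for getting started, assuming the module name as published in the PowerShell Gallery; use Get-Command to see exactly which cmdlets your installed version provides:

# Install vDocumentation (PowerCLI must already be installed) and list the cmdlets it offers
Install-Module vDocumentation -Scope CurrentUser
Get-Command -Module vDocumentation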

vCheck

Last on my list is another classic tool that isn’t so much about documentation as it is about reporting and checking an environment. vCheck was originally developed by Alan Renouf and has been contributed to by many others. It’s very helpful for finding oddities in your environments, configuration mismatches, and so on. It can be scheduled to send an email report, or it will generate an HTML version and display it locally for you:
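A hedged sketch of a first run, assuming you have downloaded the vCheck-vSphere scripts from GitHub into a local folder (the path below is only an example; the script walks you through its configuration questions on first launch):

# Run vCheck from the folder you extracted it into
cd C:\Tools\vCheck-vSphere
.\vCheck.ps1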

As with vDocumentation, PowerCLI, RVTools, and the like, vCheck will not be able to see and check other parts of your infrastructure, so you’ll need to follow up separately with things you find in your nmap scans (iDRACs/iLOs, network switches, etc.).

Have other thoughts or tool suggestions? Leave them in the comments. Remember that knowledge is power when it comes to patching and being secure. Good luck and happy inventory-ing!

The post Creating an Asset Inventory for vSphere Infrastructure appeared first on VMware vSphere Blog.


What-If Analysis with vRealize Operations 8.0


There is more to capacity management for vSphere environments than looking at what’s currently running in your clusters. Clusters are always in flux, with VMs and hosts being added and removed. There can be situations that require looking at migrating workloads to other datacenters or the public cloud. If you’ve run into any of these challenges, then you’re in luck: vRealize Operations helps with all of those situations through what’s called “What-If Analysis”. I’d like to take a little time to tell you about the new features in vRealize Operations 8.0 What-If Analysis that will make your capacity planning workflow even easier.

Annual Projected Growth

The first new feature that applies to Add VM and Datacenter Compare what-if scenarios is called Annual Projected Growth. As you know, most VMs demand more resources over time, so when you’re planning for new VMs it’s nice to be able to account for that growth. Annual Projected Growth allows you to tell vRealize Operations how much you expect that VM to grow over the next year. You can enter a single percentage growth for the VM or separate percentage growth for CPU, memory, and disk space. As you can see below, by adding 20% growth, the projection over the next year reflects that 20% increase in workload.

HCI Planning

What-If Analysis for HCI environments comes with additional challenges because the amount of disk space required for a VM depends on storage policies. Storage policies let you define the number of host failures to tolerate, the fault tolerance method (RAID levels), and deduplication. The new Add and Remove VMs and Add and Remove HCI Nodes scenarios in vRealize Operations 8.0 now have that awareness built in and allow you to enter the vSAN configuration when running the what-if scenario. For those of you familiar with vSAN Sizer, the vSAN Sizer code is embedded in vRealize Operations to allow it to provide accurate disk space recommendations.

This screenshot shows two what-if scenarios stacked. The first is to add vSAN nodes to the cluster and the second is to add VMs with storage policies. The stacked scenario shows that the new VMs will fit (if a new host is added) and that, based on the projected utilization, the cluster will run out of disk space in about 140 days.

Datacenter Compare

When you’re looking to deploy new VMs, have you ever wondered which datacenter would be the best option from a cost perspective? Well, now you can find out with the new Datacenter Compare scenario. With this scenario, you can see how much the VMs would cost to run in multiple datacenters. If the system determines that the VMs won’t fit in your desired datacenter, there is a quick link to the Add Workload scenario to help you dig into the details of why the VMs won’t fit.

If you want to learn more about capacity planning or cost management with vRealize Operations I have 3 sessions at VMworld next week that might be of interest to you.

  • Yes, Mr. CFO, IT Is Expensive — Understand Why with vRealize Operations [HBO1138BE]
  • It’s Not 2005 Anymore: Stop Using Excel for Capacity Management [HBO1136BE]
  • Capacity and cost optimization with vRealize Operations [MTE6044E]

A great way to learn more is to download a trial of vRealize Operations and try it in your own environment! You can find more demos and videos on vrealize.vmware.com. Be sure to stay tuned here as we will have even more blogs about vRealize Operations 8.0 coming soon.

The post What-If Analysis with vRealize Operations 8.0 appeared first on VMware Cloud Management.

DRS Pairwise Balancing in vSphere 6.5 and 6.7


Or maybe I should have called this blog post, “I’m seeing an excessive number of DRS-initiated vMotions in my newly upgraded 6.5 environment”. Recently I was part of a few conversations about the nature of DRS load balancing in systems running vSphere 6.5 and newer. It was noticed that more vMotion operations were occurring after moving to 6.5, and it’s highly likely that these operations are due to the new DRS pairwise balancing functionality. Pairwise balancing was introduced in vSphere 6.5 and is focused on keeping the host resource utilization disparity within a certain threshold. As a result, DRS performs load-balancing operations if the difference between the lowest-utilized host and the highest-utilized host exceeds a certain percentage. That percentage depends on your migration threshold; the default migration threshold uses a 20% tolerable difference in utilization.

This new feature is needed as clusters keep growing larger and larger. To determine if load-balancing operations are necessary, DRS calculates two metrics: the current host load standard deviation (CHLSTD) and the target host load standard deviation (THLSTD). Each host reports its load, and DRS calculates the standard deviation of the host load metric across all the hosts in the cluster. DRS also calculates a target host load standard deviation for the cluster, and as long as the current host load standard deviation is less than or equal to the target value, DRS considers the cluster balanced. The migration threshold determines how far apart the CHLSTD and THLSTD can drift before DRS triggers load-balancing operations; the more aggressive the migration threshold, the smaller the tolerated difference between the CHLSTD and THLSTD.

A situation can occur in which a few hosts in a large cluster experience high resource utilization while the majority of hosts do not. Due to the size of the cluster, those few heavily loaded hosts become statistical outliers that simply disappear as noise among the vast number of hosts with (far) lower utilization. As a result, these outliers are missed because the calculated CHLSTD stays below the threshold required to trigger load balancing.
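To make that concrete, here is a small illustrative PowerShell sketch (made-up utilization numbers, not DRS’s actual host-load metric) showing how two hot hosts barely move the standard deviation in a 64-host cluster, while the highest-to-lowest gap stands out:

# Illustration only: 62 hosts around 30% utilization, 2 hosts at 90%
$loads = @(90, 90) + (1..62 | ForEach-Object { 30 })
$avg = ($loads | Measure-Object -Average).Average
$variance = (($loads | ForEach-Object { [math]::Pow($_ - $avg, 2) }) | Measure-Object -Sum).Sum / $loads.Count
$stdev = [math]::Round([math]::Sqrt($variance), 1)
$gap = ($loads | Measure-Object -Maximum).Maximum - ($loads | Measure-Object -Minimum).Minimum
"Cluster-wide standard deviation: $stdev, highest-to-lowest gap: $gap"
# The standard deviation stays low (around 10), while the 60-point gap clearly exceeds a 20% tolerance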

By adding the functionality of pairwise balancing, and “simply” comparing the highest reported utilization with the lowest utilization, these outliers might be a thing of the past. That means that in certain cases, the DRS UI might report that the cluster is in a balanced state, yet load-balance operations still occur. This behavior can be attributed to pairwise balancing.

Please keep in mind that if you are using a migration threshold that is more aggressive than the default setting, the tolerable difference between hosts is reduced and more migrations are likely to occur.

So what happens when the tolerable difference is detected in the cluster? Does this mean that VMs are migrated from the highest utilized host to the lowest utilized host? Not necessarily. VMs can be migrated to any other host in the cluster. DRS still takes many different requirements into account when selecting a virtual machine migration for load-balancing purposes. Anti-affinity and affinity rules cannot be violated to obtain a better cluster load-balance, so these moves are not considered. Compatibility of hosts and VM configuration also impact migration options (typically a missing datastore or network portgroup are common reasons why particular hosts are overloaded and why other hosts are lower utilized), but also the “cost-benefit” of a VM migration is still taken into account. It still needs to make sense for the cluster balance to incur infrastructure costs and risk to move a particular VM.

If you recently updated your vCenter to 6.5/6.7 and are curious to see whether the vMotions are triggered due to Pairwise imbalance operations, you can use the online version of the DRS Dump Insight tool available at https://www.drsdumpinsight.vmware.com/. You can also run the DRS dump insight tool on-prem by installing one of the flings available here: https://flings.vmware.com. Grep for “Pairwise Imbalance”.

If this behavior is not appreciated, and you do not want to alter the migration threshold, you can switch back to the old behavior by turning off pairwise balancing: set the cluster advanced option “CheckPairWiseImbalance” to 0 (the option name is case-sensitive). Although this functionality was introduced in vSphere 6.5 and is active by default in all newer releases, it has also been backported to vSphere 6.0 U3.
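If you do decide to turn it off, a hedged PowerCLI sketch looks like this (the cluster name is an example; you can also add the option through the cluster’s advanced settings in the vSphere Client):

# Disable pairwise balancing on one cluster; the option name is case-sensitive
$cluster = Get-Cluster -Name "Prod-Cluster-01"
New-AdvancedSetting -Entity $cluster -Type ClusterDRS -Name "CheckPairWiseImbalance" -Value 0 -Confirm:$false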

One thing I would like to ask: if you want to disable it, what are your reasons? I expect “too many vMotions”, but I would like to understand why a vMotion, or a collection of vMotions, is considered undesirable. The main goal is to get the VMs to a place where they have access to enough resources, so why is that still a bad thing?

The post DRS Pairwise Balancing in vSphere 6.5 and 6.7 appeared first on VMware vSphere Blog.

vRealize Operations dashboards to monitor VMware Cloud on AWS


We have seen a number of requests for dashboards that can help with monitoring specific use cases related to VMware Cloud on AWS. With this post, I will share four such dashboards that we have been working on for VMC monitoring use cases. Special thanks to William Lam for guiding me on the best practices related to VMC monitoring; all of those best practices were considered while creating these dashboards.

Along with the description and usage of these dashboards, this post will also provide the prerequisites and a few simple steps to import these dashboards into your environment.

 

About the dashboards

Once imported you will see the following four dashboards under a new dashboard group named “VMC Dashboards”:

 

VMC Capacity Dashboard

Purpose

This dashboard provides a capacity overview of each of your VMC SDDCs. You can easily drill down into the capacity of all the underlying components such as clusters, hosts, VMs, datastores, and vSAN disk groups.

How to use this dashboard?

  • The first three rows show you a card per VMC SDDC with three different dimensions: Capacity Remaining, Time Remaining, and the number of additional VMs that will fit.
  • Upon selecting an SDDC, you can see the clusters, hosts, VMs (both management and workload), datastores and disk groups.
  • The key KPIs are color-coded to help identify capacity bottlenecks.

 

 

VMC Inventory Dashboard

Purpose

This dashboard provides a quick overview of the inventory of all your VMC SDDCs. The inventory includes:

  • vSphere Clusters
  • Datastores
  • Hosts
  • Virtual Machines

How to use this dashboard?

  • The first row shows you a card per VMC SDDC with the number of virtual machines running in each SDDC. This also shows you a trend of virtual machine growth over the past 30 days.
  • Once you are close to the number of VMs supported per SDDC, the card will indicate that by changing colors.
  • Upon selecting a card, you can see the list of all the vSphere clusters, datastores, ESXi hosts and VMs in that SDDC with key configuration details.
  • You can choose to export the list in CSV format using the toolbars on the list.
  • You can also filter the list of ESXi hosts and VMs by selecting a vSphere cluster or list of VMs by selecting an ESXi host.

 

 

VMC Management VM Dashboard

Purpose

This dashboard helps you monitor the utilization and performance of the key management VMs running in your SDDC. The goal of this dashboard is to ensure that the management components such as vCenter and NSX are not facing resource bottlenecks from a CPU, Memory, Network and Storage perspective.

How to use this dashboard?

  • The first list provides all the management components in each SDDC with key CPU utilization and performance KPIs. Upon selecting a management VM, you can see the usage and performance trends of all the CPU cores.
  • The second list provides all the management components in each SDDC with key Memory utilization and performance KPIs. Upon selecting a management VM, you can see the memory usage and performance trends.
  • The third list provides all the management components in each SDDC with key Network utilization and performance KPIs. Upon selecting a management VM, you can see the network usage and performance trends.
  • The fourth list provides all the management components in each SDDC with key Storage utilization and performance KPIs. Upon selecting a management VM, you can see the storage usage and performance trends.

 

 

VMC Utilization and Performance Dashboard
​​
Purpose

This dashboard provides a utilization and performance overview of each SDDC, based on heavy-hitter VMs and impacted VMs, using the last 30 days of utilization and performance KPIs.

How to use this dashboard?

  • The first list shows the list of all the SDDCs with aggregate CPU, memory and storage utilization over the last 30 days with maximum and 95th percentile values.
  • Upon selecting an SDDC, you can see the list of top virtual machines which are consuming compute, network & storage resources in each SDDC.
  • The dashboard has two additional sections. One shows the compute (CPU & memory) utilization and performance analysis and the second shows the network and storage utilization and performance analysis.
  • Each section is based on the last 30 days of data with 95th percentile transformation which is configurable as needed to max, average, current, standard deviation or other mathematical transformations.
  • This data helps you find the victims and villains in your environment: the VMs that are negatively impacted by, or are impacting, capacity or performance from a CPU, memory, storage or network perspective.

 

 

Pre-requisites

 

The following pre-requisites should be taken care of before importing these dashboards.

  • These dashboards need either vRealize Operations version 7.5 or 8.0.
  • These dashboards are tested with VMC SDDC version 1.7 and above.
  • Both vCenter and vSAN adapter instances should be configured.
  • Need vRealize Operations Advanced edition or above.
  • Need appropriate vRealize Operations permissions to import and share.

 

Steps to import

 

  1. Ensure that your VMC vCenter instances are configured with Cloud Type as “VMware Cloud on AWS”. More details here.
  2. Download this VMC Dashboard Content.zip file and extract it to your desktop.
  3. Import the 1-Views.zip file to your vRealize Operations instance: click Dashboards > Views > Actions > Import Views.
  4. Import the 2-Dashboards.zip file to your vRealize Operations instance: click Dashboards > Actions > Manage Dashboards, then Action > Import Dashboards.

 

You should see your new dashboards now. Please note that these dashboards might take a few seconds to load for the first time.

I hope this article helps you. Please share your comments in the comments section below or for more updates and sample content follow these twitter handles:

 

vRealize Operations: @vRealizeOps

Author: @sunny_dua

 

Note – This post originally appeared on this blog.

 

The post vRealize Operations dashboards to monitor VMware Cloud on AWS appeared first on VMware Cloud Management.

vRA and vROps 8: The Peanut Butter & Jelly for Your Hybrid Cloud


The vRealize team has always driven a “better together” solution between vRealize Automation (vRA) and vRealize Operations (vROps), and with the 8.0 releases of these products we have hit a new high. Together they become the cornerstone of any Software Defined Data Center (SDDC) and provide a pairing not seen since peanut butter and jelly first came together.


For starters, we have introduced a shared taxonomy between the two and are bringing over a number of the vRA object types into vROps like Cloud Zones, Projects, Blueprints and Deployments.  This allows you to run some of the core vROps functions in the context of your vRA business plans. This includes capacity, cost, workload optimization and troubleshooting, to name just a few.

 

Capacity

The Cloud Zone is a new and important object introduced in vRA 8. It provides the base infrastructure into which you can deploy your workloads. In order to avoid outages, you need to properly manage the capacity of your Cloud Zones. Fortunately for you, vROps brings in the Cloud Zones and instantiates them as top-level objects. This allows it to run its capacity analytics against the underlying infrastructure. This means you can quickly answer questions like:

  • How much capacity is left in my Cloud Zone?
  • What are the current capacity trends in my Cloud Zone?
  • When will I run out of resources in my Cloud Zone?
  • How many more VMs can I fit into my Cloud Zone?

Being able to answer these questions is critical to guaranteeing your Cloud Zone never runs out of resources, your customers can deploy new workloads when needed, and your business applications continue to perform well by ensuring they have the resources they need. This is base Cloud Administrator kind of stuff!

Workload Optimization

Speaking of workloads getting the resources they need, let’s discuss workload optimization for a moment. Maybe your Cloud Zone has plenty of available resources, but what if one or more of the clusters are short on resources? What if the workloads in those clusters are performing poorly because of the lack of resources? Now imagine if there are other clusters in this Cloud Zone that have plenty of resources available. Wouldn’t you want to intelligently and automatically move these workloads to a cluster with available free resources? Of course you would. Slow applications mean unhappy customers, which means more work for you. vROps Workload Optimization does just that for you. It leverages the business intent you laid out for your Projects and Deployments in vRA and the operational intent in vROps to ensure applications are always able to get the resources they need. This self-driving optimization allows you to run the Cloud Zone hands-off and hassle-free. Who wouldn’t want that…more time for golf!

Costing

Probably the most exciting advancement between vRA and vROps is how they share costing details. As you may recall, vROps has integrated costing into its analytics over the last several releases, and in this release that costing data is made available to vRA in two major ways. First, your customers can get a pre-deployment cost right in the self-service portal in vRA. This allows them to determine if they wish to request the deployments based on the daily cost estimate. Of course, once they ARE deployed you can use vROps to track the ongoing costs. These costs are also sent into vRA so you can see how much a specific deployment is costing you or your customers. The costs are even broken down by compute, storage, and any additional charges, so you can understand the costing details.

Of course, vROps has a myriad of ways to configure your cost drivers should you wish to customize your solution beyond the out-of-the-box settings. Watch for a blog from Brandon Gordon on that topic in the coming weeks. If you are lucky enough to be at VMworld Europe you can also check out his session called “Mr. CFO, IT Is Expensive — Understand Why with vRealize Operations [HBO1138BE]”.

To summarize, in vRA you can view the costs of the individual workloads, the individual deployments, and the total costs of the defined projects. Get the full picture!

Dashboards

vROps also provides several out of the box dashboards that help when managing your private cloud, and I have highlighted a couple of them below.

The Environment Overview Dashboard lets you review your entire vRA managed private cloud including: Cloud Zones, Projects, Blueprints, Deployments and VMs.

The new Resource Consumption Dashboard is your one-stop shop for capacity. It lets you drill into the Cloud Accounts, Cloud Zones, Projects and Clusters and view the capacity remaining, trends and overall utilization of these objects.

Lastly, the application owners can view the performance of their deployments within vRA. vROps shares the CPU, memory, IOPS and network metrics of each VM in a deployment. You will find these in vRA on the Deployments > Monitor tab, by selecting the virtual machine(s) whose performance metrics you want to see.

A great way to learn more is to check out the vRealize Cloud Management Platform web page or try a Hands on Lab.

 

 

The post vRA and vROps 8: The Peanut Butter & Jelly for Your Hybrid Cloud appeared first on VMware Cloud Management.

Introducing VMware Skyline Health for vSphere


vSphere Health & vSAN Health have been instrumental in helping customers triage their vSphere 6.7 environments for configuration consistency, best practice information, vulnerability discovery, and proactive analytics. Today I am happy to announce that vSphere and vSAN Health are now part of VMware Skyline Health, bringing these two powerful features of vSphere under one umbrella, available to all customers across Basic, Production, and Premier Support levels.

Once available in a future version, within the vSphere Client, “Skyline Health” will replace the wording for vSphere and vSAN Health. The same health checks customers are familiar with today in vSphere will still be available under Skyline Health for vSphere, allowing consistent findings & recommendations whether you’re in the vSphere Client or Skyline Advisor.

VMware Skyline Health

Note: To enable vSphere Health checks & vSAN Health online findings, you must participate in the Customer Experience Improvement Program (CEIP) and vCenter Server must have internet connectivity. If you do not wish to join the CEIP, or allow internet connectivity to vCenter Server, you will still receive in-product health checks for Skyline Health for vSAN.

VMware Skyline Health for vSphere & vSAN

vSphere Health was introduced in vSphere 6.7 to assist customers in a self-service manner by discovering environmental issues and even best practices through analysis of telemetry data collected from the vSphere environment. This feature is enabled by joining the Customer Experience Improvement Program (CEIP).

VMware Skyline Health for vSphere and vSAN will allow environmental health to bubble up to Skyline Advisor by leveraging already available vSphere & vSAN Health findings. Unifying vSphere and vSAN health services under Skyline also opens the door for future features & functionality across all 3 services.

Let’s break down what is available today and how each health service differs. Below is a feature-by-feature comparison of the three services.

Feature Set | Skyline | vSphere Health | vSAN Health
Rules – Total | 220 | 40+ | 20+
Rules (based on KBs/Best Practices) | Yes | Yes | Yes
Rules (based on Products) | vSphere, vSAN, NSX-V, Horizon, vROps | vSphere | vSAN
Standalone Data Collector Required | Yes | No | No
Automated Log Transfer (Log Assist) | Yes | No | No
TSE/TAM/SAM Visibility into customer environment for reactive/proactive support | Yes (Skyline Viewer) | No | Limited (vSAN Support Insight)
Analysis – On-demand | No | Yes | Yes
Analysis – Scheduled & Automated | Yes | Yes | Yes
Notifications – Alerts, SNMP-based | No | Yes | No
Notifications – In-Product / Email | Yes | In-Product only | In-Product only

 

Having a streamlined and consolidated workflow for the health of a vSphere environment now allows customers to also leverage the VMware Skyline Collector, which enables advanced health checks and analytics.

What is Skyline?

Skyline is a VMware proactive support solution that is available to all customers who have an active Production Support or Premier Support entitlement. Skyline helps identify potential issues for vSphere, as well as NSX-V, vSAN, Horizon 7, and vRealize Operations. Skyline also includes Skyline Log Assist, which reduces customer effort when uploading support log bundles to VMware Global Support Services (GSS). To learn more about Skyline, follow these simple steps:

  • Visit https://cloud.vmware.com/skyline, and click the Get Started button.
  • Follow the Get Started wizard which includes creating a Cloud Services Organization (if you don’t have one already) and downloading/installing the Skyline Collector.

VMware Skyline Health

Wrapping Up

Skyline Health for vSphere & vSAN will be available within a future version of vSphere and vSAN. Stay updated on Skyline Health via the vSphere Blog, and please follow VMware vSphere (@VMwarevSphere)  & VMware Skyline (@VMwareSkyline) on Twitter for up-to-date information related to VMware proactive support services.

 

Skyline Health Resources:

To learn more about VMware Skyline Health for vSphere and vSAN please visit:

 

@VMwarevSphere

 

 

The post Introducing VMware Skyline Health for vSphere appeared first on VMware vSphere Blog.

Project Magna for vSAN Optimization: Beta Announcement


We received tons of interest and excitement from the Project Magna tech preview at VMworld 2019 in San Francisco and we’re now very close to kicking off our customer beta program!  vSAN and vRealize Operations combine to make a solution that works better together, allowing IT datacenters to modernize and operationalize their HCI configurations.

 

However, modern-day applications are dynamic in nature and require that the underlying infrastructure be continuously, automatically and adaptively optimized to deliver the expected performance and SLA guarantees of today’s businesses. Unless you have a storage engineer closely monitoring each application’s specific performance needs 24/7, carefully balancing legacy workloads with new, modern application requirements can be a very painstaking challenge to tackle.

 

Project Magna Continuous Optimization for vSAN

 

Always pushing the boundaries of IT productivity, VMware’s Project Magna helps achieve the true ‘Self-Driving Data Center,’ starting with VMware vSAN. This SaaS-based solution uses reinforcement learning: it collects data, learns from it, and makes decisions that automatically self-tune your infrastructure to drive greater performance and efficiency.

 

 

For more information and to sign up for the Project Magna customer beta, please visit https://www.vmware.com/products/magna.html

 

If you have any comments or have questions on the Project Magna beta program, let us know at magna@vmware.com.  We’d love to hear from you and learn more about your datacenter challenges with managing dynamic workloads and HCI infrastructure performance demands.

 

 

The post Project Magna for vSAN Optimization: Beta Announcement appeared first on VMware Cloud Management.

Announcing VMware NSX Distributed IDS/IPS


Six years ago, VMware pioneered the concept of micro-segmentation to stop the internal, lateral spread of malware. We then launched the NSX Service-defined Firewall, an internal firewall that’s built into the hypervisor, distributed, and application aware. Shortly thereafter we introduced NSX Intelligence to automate security rule recommendations, streamlining the deployment of micro-segmentation.

Now we are announcing that we will be taking internal security to the next level by introducing optional Intrusion Detection and Prevention (IDS/IPS) for our Service-defined Firewall. Built on the same philosophy, the new NSX Distributed IDS/IPS will allow enterprises to fortify applications across private and public clouds.

VMware’s Security Is Intrinsic. Here’s What That Means.

Intrinsic Security is security that’s built in, not bolted on. And that makes it better.

Intrinsic Security Built in not Bolted On

When security is bolted on, it’s never as good as built-in security. Imagine an apartment building where you add the alarm system, the security cameras, and the fire escape after the fact. It looks awkward and doesn’t work that well, either.

Security Built in Differently

But when you design those things in upfront, the effect is completely different. Everything just works better, as parts of a whole system. The same thing is true for security.

More importantly, when you build in security, you can build it differently. Just to clarify, Intrinsic Security isn’t the same as integrated security. Integrated security would be taking a hardware-based firewall and repackaging it as a blade in a switch. It’s more convenient to deploy that way, but it doesn’t improve how the firewall works. By comparison, at VMware, we’ve used our virtualization capabilities to reimagine how much more effective security could be.

Security Done Differently and Done Better.

Instead of a hardware firewall that requires hair-pinning traffic to do all the filtering, we took the protection that firewall offers and distributed it directly to the servers in the hypervisor. That lets us put the security wherever the server needs to be, and at the same time eliminates blind spots by filtering every connection or hop. It’s dramatically more efficient.

But we didn’t stop there. Our firewall also gives us the advantage of being fine-grained and intelligent about how we apply the rulesets. If you have ever managed a firewall, you know that over time you can easily get to 5,000 rules, 10,000 rules, even 500,000 rules. In the traditional firewall model, you have to run all the firewall rules, against all the traffic, all the time. And to process all those rules, you need dedicated hardware.

We take a different approach. Thanks to our intrinsic understanding of the application and all its services, we know the difference between the web tier, the application tier, and the database tier. So we apply only those rules that are applicable to the workload. The web tier gets the web rules, the app tier gets the app rules, and the database tier gets the database rules. This massively reduces rules and delivers much higher throughput.

Intrusion Detection and Prevention Systems Reimagined.

We’re bringing the same innovative approach to our optional Intrusion Detection and Prevention (IDS/IPS) for our advanced Layer 7–capable internal firewall. Our Service-defined Firewall with IDS/IPS will allow our customers to block internal traffic from stolen credentials and compromised machines—traffic that a port-blocking-only solution would typically fail to detect and block.

Our IDS/IPS takes advantage of VMware’s intrinsic understanding of the services that make up the application to match IDS/IPS signatures to specific parts of the application. IDS/IPS signatures are application specific. Since we intrinsically know the difference between Apache and Tomcat, we apply the appropriate signatures only to their servers. Customers will see fewer false positives and significantly higher throughput.

This type of efficiency and flexibility simply cannot be matched by traditional “bump in the wire” appliances, and is a major difference between legacy and proprietary hardware-defined systems and an open, scale-out software solution such as VMware NSX.

Easy to Deploy, Easy to Consume.

Traditional firewall and IDS/IPS appliances are expensive, hard to manage, capacity constrained, and typically lack the critical functions to address modern data center designs and application patterns. That’s why we are focused on making our unique Intrinsic Security easy to deploy, consume, and build on. We start by treating both VMs and containers as first-class citizens.

Next, our firewall and IDS/IPS deployment model scales linearly as each workload consumes or releases capacity, combining the power of all CPUs across servers in a data center, and eliminating the need for proprietary appliances that hairpin traffic and exacerbate east-west network congestion. Further, network, firewall, and IDS/IPS rules are applied in a single pass.

These rulesets are also attached to workloads so that when enterprises bring up a workload, the correct rules are applied automatically. When a workload is decommissioned, rules are automatically expired, and when a workload moves, the rules go with it and state is maintained.  At VMworld Barcelona we also announced new NSX Federation capabilities that will let customers deploy policies across multiple data centers and sites. These capabilities enforce policies consistently, simplifying disaster recovery and avoidance, as well as the sharing of application resources across data centers and cloud providers.

These converged operations make it easier to manage security policies, demonstrate compliance, provide holistic context for security troubleshooting, and vastly simplify the overall security architecture. Clearly, this solution has a huge operational advantage for common 3-tier applications, but just imagine their impact in the context of cloud-native applications that might have 10, 100, or even thousands of components that are constantly moving. It’s not feasible to secure these applications without a fully automated solution.

This type of efficiency and flexibility simply cannot be matched by security that’s bolted on, and this is why our open, scale-out software solution makes so much sense for the way the enterprise needs to run today and tomorrow.

Resources

The post Announcing VMware NSX Distributed IDS/IPS appeared first on Network Virtualization.


Introducing the VxRail Management Pack for vRealize Operations


Right before VMworld San Francisco this year, we introduced the new vSAN ENT+ bundle for your HCI environments.  Better together,  vSAN and vRealize Operations made it easier to manage your ‘day one’ and ‘day two’ hyperconverged operations through self-driving features like:

 

 

Now, right as VMworld 2019 Europe in Barcelona is kicking off, I’m happy to announce we’re going to make it even better together with the upcoming Q4 FY20 release of the VxRail Management Pack for vRealize Operations! Yes, I know what you’re thinking: we actually made HCI management even easier!

 

Gain Insights into Dell EMC VxRail HCI Clusters

 

This newest and upcoming management pack will collect VxRail events, vSAN metrics, configuration details, inventory, and relationship dependency mappings from the virtualization and infrastructure layer down to the vSAN layer for complete and pervasive visibility.

 

vRealize Operations will collect data and events from vCenter and create object relationships of the VxRail cluster in the context of vSAN, with an underlying physical infrastructure view. You’ll be able to define default alerts and policies (existing events from your Dell EMC VxRail and PowerEdge servers), and create custom dashboards and reports.

 

Looks familiar right?  Manage and differentiate your vSAN clusters from your VxRail clusters.  You’ll get the same industry leading management capabilities for capacity overviews, VxRail operations overviews, and intelligent remediation across your stretched clusters.

 

 

Simple VxRail and vRealize Operations Integration

 

I could tell you how simple it is to integrate your VxRail HCI nodes into vROps, but let me just show you (in 4 steps):

 

  1. Install VxRail Management Pack
  2. Add Account
  3. Select ‘VxRail Adapter’
  4. Provide VxRail Name and vCenter info

 

 

Yea…that’s all there really is to it.

 

The post Introducing the VxRail Management Pack for vRealize Operations appeared first on VMware Cloud Management.

Announcing vRealize Network Insight 5.1


Today, VMware is announcing the upcoming release of version 5.1 of both vRealize Network Insight and vRealize Network Insight Cloud. This next version of vRealize Network Insight will build on the momentum of the 5.0 release and include additional capabilities to help you discover, optimize and troubleshoot application security and network connectivity, no matter where the application lives: data center, cloud or even the branch.

The upcoming release will enhance network visibility for hybrid clouds and SD-WAN by introducing an SDDC focused Dashboard for VMware Cloud on AWS and a new Assessment for SD-WAN.

As you know, vRealize Network Insight 5.0 brought SD-WAN capabilities into the product, realizing our vision of Data Center, to Cloud, to Branch. In the same spirit as our Virtual Network Assessment for NSX, version 5.1 brings an SD-WAN Assessment. This SD-WAN Assessment will analyze traffic and data from a current non-virtualized WAN network and will calculate a return on investment (ROI) for when the WAN network is virtualized with SD-WAN.

 

* Example data. Actual data will vary on the customer network.

This release will also include an enhanced VMware Cloud on AWS dashboard with software-defined data center (SDDC) context, and support for the VMware Cloud on AWS Edge firewall, providing firewall rule visibility and rule-to-flow mapping.

 

Another enhancement in the upcoming release of vRealize Network Insight will be additional Application Day-2 Operation capabilities, such as a health summary and the ability to troubleshoot using degraded flows and traffic analytics, seen directly in the Application Dashboard.

And finally, the many customers who use vRealize Network Insight to optimize and troubleshoot physical and virtual networks will be very pleased to note that the upcoming release will include physical device NAT support in VM-VM path visibility and support for Arista hardware VTEPs.

Last but not least, this vRealize Network Insight release will provide enhancement to NSX-T day-2 operations management with new visualization of deployed and impacted components.

For more information see: https://www.vmware.com/products/vrealize-network-insight.html

vRealize Network Insight is also available in Cloud (SaaS) form factor. See https://cloud.vmware.com/network-insight-cloud and sign up for a 30-day free trial.

 

The post Announcing vRealize Network Insight 5.1 appeared first on VMware Cloud Management.

Interested in the Project Pacific beta?


Ever since we announced the technology preview of Project Pacific at VMworld 2019 back in August, customers and partners have been excited to hear more. It’s easy to see why. Leveraging vSphere to deploy and manage containers and Kubernetes infrastructure is a win-win for both vSphere administrators and application developers alike.

While an early private beta is available to select customers today, we plan to expand access to the Project Pacific beta program later this year. If you would like to be included in this expanded beta program please request access here. The explosive popularity of the Project Pacific Beta will make it difficult for us to include everyone, but please don’t let that dissuade you. If you have a talent for testing and a passion for vSphere, we want to hear from you.

Approved beta testers for Project Pacific will be able to:

  • Explore and test the latest features and enhancements
  • Interact with the Project Pacific Beta team consisting of Product Managers, Engineers, Technical Support, and Technical Writers
  • Provide direct input on functionality, configurability, usability, and performance
  • Provide feedback influencing future features and products, training, documentation, and services
  • Collaborate with other participants, learn about their use cases, and share advice and lessons learned

The Project Pacific Beta Program leverages a private community to access beta software and share information. We will provide discussion forums, webinars, and service requests to enable you to share your feedback with us. If you would like to participate in the beta community, please share your interest with us by filling out this simple form and indicate your particular interest in Project Pacific. While participation is not guaranteed, the Project Pacific team will grant access to selected candidates in stages.

To learn more, visit the Project Pacific website.

The post Interested in the Project Pacific beta? appeared first on VMware vSphere Blog.

HBI1812TE – vSphere Upgrade Workshop


Over the last year and a half, the vSphere Technical Marketing team has had the pleasure of traveling the world. We’ve been running vSphere upgrade workshops and speaking with all kinds of awesome folks. The workshop got an airing at VMworld in San Francisco, and again yesterday at VMworld in Barcelona. The workshop lasts 3 to 4 hours and covers a lot of content. We frequently get asked for copies of the slides from the session.

While most sessions at VMworld are recorded and available after the event, the workshops, sadly, are not. With that in mind, we thought that you might appreciate us posting the content here for download. As such, you can download a PDF of the session content from http://blogs.vmware.com/vsphere/files/2019/11/HBI1812TE-Whats-New-in-vSphere-and-How-to-Upgrade-to-the-Latest-Version.pdf.

How do I attend one of these?

I want to thank everybody who has attended one of these workshops over the past months, and I look forward to seeing more of you as we move forward! If you are interested in signing up for one of these free technical events, where you can talk to a vSphere subject matter expert in a town near you, go to https://www.vmug.com/events2/vsphere-roadshow. We hope to see you soon!

The post HBI1812TE – vSphere Upgrade Workshop appeared first on VMware vSphere Blog.

Announcing the vRealize Operations Cloud Beta Program


We are thrilled that there is a lot of excitement about the release of vRealize Operations 8.0. And, in case you missed it, at VMworld 2019 US we announced the tech preview of our new SaaS offering, vRealize Operations Cloud. Today we are pleased to share the news that we will open the vRealize Operations Cloud beta program* in January. This is a unique and exclusive opportunity, and we are accepting customers for the beta program now. Space is limited, so if you are interested, this is a good time to sign up: https://cloud.vmware.com/vrealize-operations-cloud.


What Customers Tell Us

You may be asking, why should I consider vRealize Operations Cloud, or even SaaS?  You aren’t alone, as companies that are interested want to know more about vRealize Operations Cloud. So we’re sharing conversations we’ve had with several customers around the globe. They revealed why they are excited about the AI/ML engine that powers self-driving operations as a SaaS model to manage their hybrid cloud environments.

Organization 1: Global professional services firm

  • Digital transformation
    This firm conducts business throughout the world and has the highest focus on providing quality services to their customers, who trust them to solve difficult problems. The customers can be demanding, and the firm wants to increase its responsiveness, which will also sharpen its competitive edge. They need a SaaS environment that will provide more flexibility, infinite capacity, and more agility so they can respond even faster to the needs of their customers.

Organization 2: Small insurance company

  • Focusing on other IT priorities
    This small insurance company offers services to internal customers through a limited IT staff. They would like to free time from infrastructure maintenance to focus on better supporting applications used internally by their lines of business.

Organization 3: Large healthcare technology provider

  • Modernizing IT
    As a healthcare technology company, they are already well on their way in a hybrid cloud journey. This organization would like to shift the maintenance and upgrade responsibilities of their on-premises product to a provider so they have an integrated cloud service solution.

Organization 4: Pharmaceutical company

  • Accelerating innovation and increasing agility
    This company would like to contribute to new business through healthcare digitization. They would like to accelerate innovation in research and development with better agility and flexibility. In addition, they would like to move to SaaS to reduce time-to-market of product delivery and clinical trials with quality, while maintaining regulatory compliance.

Organization 5:  National oil and gas company

  • Mergers and acquisitions
    A national oil and gas company has an acquisition-based growth strategy and would like to consolidate IT environments and better enable the distributed staff upon acquisition of other organizations. A SaaS version of operations management will help federate across several different environments to easily onboard and manage multiple datacenters from a single point.

 

Benefits of SaaS

Are you nodding your head and relating to any of the above scenarios? Maybe you, too, are driving innovation for your organization to meet customer demands, need to relieve your staff so they can focus on other priorities, or you are modernizing your IT environment. If so, SaaS might be an option for you.

  • No maintenance overhead so your organization saves time and resources that can be focused on other priorities because maintenance and upgrade responsibilities are shifted from your IT department to us.
  • Flexibility of a SaaS delivery model so your users can access software from any device via the internet. This means that all users, no matter their location, are using the same release from wherever they work.
  • Scalability to expand your environment at the same pace as your business grows.
  • Trust knowing that our SaaS solution is built with security, privacy, compliance and resiliency.


 

What is vRealize Operations Cloud?

For those who are not familiar with vRealize Operations Cloud, it offers self-driving operations as a powerful strategy for automating and simplifying operations management with AI/ML to move your IT’s approach from reactive troubleshooting to predictive innovation. From applications to infrastructure, vRealize Operations Cloud can solve both business and technical challenges to optimize, plan, and scale cloud environments with these capabilities:

  • Continuous performance optimization to reduce downtime
  • Efficient capacity and cost management to lower costs
  • Intelligent remediation to speed time to value
  • Integrated SDDC compliance to mitigate risk
  • Built for the cloud with feature parity providing the same capabilities as vRealize Operations 8.0


 

Benefits of a Beta Program

There are many different motivations for participating in a beta program, and we would like to invite you to join in providing feedback that can (if you think about it) ultimately improve your life. In the end, you may even enjoy the experience so much that you’ll want to brag about how you contributed to the success of vRealize Operations Cloud.  If you haven’t thought about it that much, here are a few reasons to participate.

  • Influence product development by providing your valuable perspective that will help improve the product and drive future functionality of this SaaS offering.
  • Preview the product by exploring the latest AI/ML capabilities in vRealize Operations Cloud 8.0 and share your feedback directly with the developers.
  • Be a first adopter by evaluating how vRealize Operations Cloud will work in your environment to help solve your business problems, while providing useful insights.

 

Sign Up Today

If you are enthusiastic about SaaS, we welcome you to sign up now for this exclusive opportunity to participate in the vRealize Operations Cloud beta.

 

*Disclaimer:  Note that there is no commitment or obligation that a beta feature will become generally available.

 

 

The post Announcing the vRealize Operations Cloud Beta Program appeared first on VMware Cloud Management.

vRealize Operations Troubleshooting Powered by Machine Learning


With the release of vRealize Operations 8.0, we continue to deliver on the vision of Self-Driving Operations powered by advanced AI/ML with the introduction of the Troubleshooting Workbench for faster time-to-identify and time-to-remediate issues in your Software Defined Datacenter. In this post, I will review the capabilities and features of the Troubleshooting Workbench.

Start Troubleshooting

First, there are many ways to get to the Troubleshooting Workbench. You can go there directly from the Quick Start page after logging in to vRealize Operations, by clicking the link in the left-side navigation or from the Troubleshooting card.

Accessing the Troubleshooting Workbench from the Quick Start page

Using this method, you’ll land on the Workbench home page. From here, you can search for an object to begin troubleshooting, continue an active troubleshooting session or use a recent search.

The Troubleshooting Workbench landing page where you can start new workbenches, resume active troubleshooting or leverage recent searches

You can also get to the Troubleshooting Workbench via any object’s detail page. For example, from this MySQL Database application object:

Navigation to Troubleshooting Workbench from an object details page

In that case, you’ll be taken to a new troubleshooting session with the default scope and time range (more on that later).

A Troubleshooting Workbench launched from the object details page. It defaults to level 1 relationships and the last six hours as the time range.

And finally, you can launch the Workbench from an alert. But before you do, notice that “potential evidence” from the Workbench is displayed in the alert details, so you can do some troubleshooting right away in the alert context.

The alert details now include a tab for Potential Evidence provided by the Troubleshooting Workbench.

When you launch into the Workbench from an alert, the scope is the default setting, but the time range is adjusted to two hours prior to and 30 minutes after the alert trigger.

Workbench time range is set to 2 hours before and 30 minutes after the alert triggered. The scope is still set to the default level 1.

No matter how you get to the Workbench and start a session, you now have the power of machine learning providing you evidence and a toolbox of helpful features to speed your troubleshooting analysis.

Potential Evidence

The Workbench opens with a couple of controls for scope and time range, and a display of “potential evidence” for your consideration. The evidence consists of events, property changes and metric abnormalities. I’ll explain each of these categories in more detail later in this blog, but for now let’s cover the basics of the Workbench interface.

The Troubleshooting Workbench main page, showing Potential Evidence

Using the screenshot above, here’s a key to understanding the Workbench:

1 – Time range. By default, the time range is six hours. The exception is when you navigate to the Workbench from an alert, which sets the time range from two hours before to 30 minutes after the alert trigger. You can adjust the time range to anything you like here.

2 – Scope. The scope is the impacted object and its related objects. The default is one level up and one level down. You can adjust the scope using the plus/minus icons or you can click on the level you wish to include. You can also customize the scope to specific relatives as well as peers by using the “CUSTOM” link. Let’s look at how that link works to refine your scope.

Below is the scope before customization.

The default scope is set to All Objects

Clicking the link for custom scope brings up an Advanced Relationship widget. I will collapse the parent objects of the virtual machine first, by clicking on the magenta relationship counter icon.

Filtering out the parent objects

Now with the parent objects removed, I’ll add the peer objects for the virtual machine by hovering over and clicking the pop-up link.

Adding peers of the impacted object

I have my desired custom scope. I click OK to return to the Workbench.

A custom scope with level 1 relationships but filtering parent objects and including peer objects.

Now vRealize Operations will use this scope for gathering Potential Evidence.

The result of custom scope filtering

To remove the custom scope and return to default, simply click the remove filter icon as shown below.

Remove the custom scope filter by clicking the funnel icon

3 – The active scope is shown in this panel. You can adjust the navigation tree from the default of all objects. For example, you may want to focus on a virtual machine and its related storage infrastructure. In that case you can change the scope tree to vSAN and Storage Devices to limit the Workbench to only those object types.

4 – Potential Evidence. This provides the result of vRealize Operations gathering of events, property changes and anomalous metrics within the scope and time range. You can select any metric for further analysis in the Metrics tab by clicking the push-pin icon. You can also dismiss any evidence you do not find compelling.

Potential Evidence breaks down as follows:

  • Events are based on changes in metrics using historical data. vRealize Operations learns the usual behavior of all collected metrics and sets “Dynamic Thresholds” for them based on the time of day and day of the week. Metrics which have breached their Dynamic Threshold are shown here. Additional consideration is given to showing only events with a negative sentiment (e.g. “failure”, “down”), and the events are ranked by the information level of the symptom (info, warning, immediate, critical) as well as their current state, start time, and number of triggers.

For example, below you can see an event based on a Dynamic Threshold (DT) violation. The metric breached the DT just before 6AM on this chart (the DT is shown as the grey shaded area on the chart).

An event based on a DT violation

  • Property Changes is self-explanatory, but you should be aware that property changes for in-scope objects created during the time range will not be shown, to reduce the noise level. Property changes are ranked based on the proximity to the end of the time range and the frequency of changes.
  • Anomalous Metrics are statistically significant changes detected for all objects in scope during the selected time range. Metrics are considered anomalous if they result in a change of the mean of the datapoints for each window (i.e. not a single, large spike). Detection of anomalous metrics is done by analyzing data points through a sliding window, which is 1/4th of the time range.

Example of an anomalous metric with the sliding window analyses overlaid for clarity

Note that Anomalous Metrics and DT violations are distinctly different. Anomalous metrics are not based on any historical behavior outside of the time range, and they also represent a significant change in the metric, not a momentary spike, as a DT violation might. Both are helpful in troubleshooting if you understand what they are showing.
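
To make the sliding-window idea concrete, here is a minimal PowerShell sketch of the general technique: compare the mean of one window of samples against the mean of the preceding window, so that a single spike does not register as a sustained change. This is an illustration only, not the vRealize Operations implementation; the function name and the 30% threshold are assumptions for the example.

# Minimal sketch of the sliding-window idea described above -- NOT the actual
# vRealize Operations implementation. A metric is flagged when the mean of one
# window differs substantially from the mean of the preceding window, so a
# single spike does not trigger it. The 30% threshold is an assumed value.
function Test-AnomalousMetric {
    param(
        [double[]] $DataPoints,            # samples collected within the time range
        [int]      $WindowSize,            # e.g. a quarter of the number of samples
        [double]   $ChangeThreshold = 0.3  # relative shift in the mean treated as significant
    )
    for ($i = $WindowSize; $i -le ($DataPoints.Count - $WindowSize); $i++) {
        $before = ($DataPoints[($i - $WindowSize)..($i - 1)] | Measure-Object -Average).Average
        $after  = ($DataPoints[$i..($i + $WindowSize - 1)]   | Measure-Object -Average).Average
        if ($before -ne 0 -and ([math]::Abs($after - $before) / [math]::Abs($before)) -gt $ChangeThreshold) {
            return $true   # sustained change in the mean across windows
        }
    }
    return $false
}

# Example: a sustained jump from ~10 to ~20 is flagged, while a flat series is not.
Test-AnomalousMetric -DataPoints (1..20 | ForEach-Object { if ($_ -le 10) { 10 } else { 20 } }) -WindowSize 5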

5 – Troubleshooting tabs. Once you have reviewed the potential evidence you can dive in and start doing further analysis. I will cover those in the remainder of this blog post.

Alert Analysis

Let’s start our analysis in the Alerts tab. If you have used the main Alerts tab in vRealize Operations, this will look familiar to you. But there are some important differences to note when you use the Alerts tab in the Workbench.

First, the alerts are shown within the time range and scope. These are active alerts for all objects in scope. In addition to the usual Alerts tab controls to filter, group and take actions on alerts and symptoms, you can be more selective about which object’s alerts you wish to have on this tab.

For example, clicking the “Remove All” icon will clear the alerts list.

Remove All alerts from the list

Now, I can click any object in the scope to add its alerts to the list. Just click once, wait a moment, and the alerts will appear.

Adding specific object's alerts to the list

You can go back to the complete list at any time by clicking the Show All icon.

Restore all related object alerts to the list with Show All

If you select an alert and go to its details in the Workbench context, you can pin any metric symptoms to the Metrics tab for further analysis.

Any time you see the push pin icon next to a metric, you can pin it to the Metrics tab for review later

Let’s look at the Metrics tab next.

Metric Analysis and Correlation

The Metrics tab works very much like the Metrics tab from any object details page, but if you are new to vRealize Operations here’s a troubleshooting demo that gives you some ideas on how to use the Metrics tab in general (starting with version 7.5).

One key difference here is there’s no Advanced Relationships widget on the Metrics tab. This is because the scope is used to show the related objects and you can choose from the scope to select objects and metric charts.

Select an object from the scope to change to the available metrics for that object

Be aware that the metric charts default to the “Last 7 Days” timeframe, not the Workbench time range, so you may have to adjust the calendar controls on the metric chart toolbar to match the charts to the Workbench time range.

Recommend you set the date controls to the same time range for easier viewing

Don’t forget that you can use the Metric Correlation feature of the metric charts to find additional, related metric behavior – this is a very powerful capability when used with the Workbench.

The Metric Correlation feature was released with 7.5 and provides powerful, ML-driven troubleshooting capability

Events Analysis

Just like the Alerts and Metrics tabs, the Events tab works the same way in the Workbench as it does on an object’s details page. Here you’ll see the active events for all objects in the scope, and just as with Alerts, you can clear the event list and select specific objects from the scope to view their events only.

The Events tab in Workbench

You can also select events from the Events subtab or the Timeline subtab and pin the associated metrics to the Metrics tab.

Event metrics can also be pinned to the Metrics tab

The Last Mile in Troubleshooting

You probably already know about vRealize Log Insight and how it integrates with vRealize Operations to provide “Last Mile Troubleshooting” using powerful log analytics with logs from your Software Defined Datacenter, OS, applications, physical hardware and more.

If you have this integration enabled, the Logs tab allows you to browse related logs by opening the vRealize Log Insight user interface to the Interactive Analytics tab and applying a filter based on the selected object from the scope.

Using vRealize Log Insight for troubleshooting.

You can change the filter easily by selecting other objects from the scope. Keep in mind that the filtering only applies to vSphere object types. But, you can easily add your own filters to narrow down the content you need to see.

Your Workbench Powered by Machine Learning

So, hopefully after this introduction to the Troubleshooting Workbench you’re ready to try it for yourself. I think you will agree that having the assistance of the vRealize Operations machine learning engine to surface potential evidence shortens the time required to find issues.

One other use case for the Workbench is verification of resolution. For example, you may have found your issue and then applied some fix to remediate it. How do you really know that fix worked?

You can save the Workbench and return later, using it to verify that the intended fix did indeed work.

Minimize the Workbench to return later

The Workbenches will persist as long as you remain logged in to vRealize Operations.

Re-launch an active Workbench

Be sure to watch a demonstration of the Troubleshooting Workbench for full-stack problem remediation and verification here.

The post vRealize Operations Troubleshooting Powered by Machine Learning appeared first on VMware Cloud Management.

Windows guest initialization with Cloudbase-Init in vCenter

This post was originally published on this site ---

In a previous post we showed how to leverage Cloudbase-Init to customize Windows guest instances provisioned by VMware Cloud Assembly in an Azure Cloud Account. Now it is time to show the power of Cloudbase-Init in vCenter Cloud Accounts.

Cloudbase-Init is the Cloud-Init equivalent for Windows. It provides tools for Windows guest customization, such as user creation, password injection, hostname configuration, SSH public keys, and user-data scripts.

In this post I am going to show how to use cloud agnostic blueprints to deploy and customize a Windows instance in a vCenter Cloud Account in VMware Cloud Assembly. I have divided the post into two parts. First, I will show how to set up a Windows image in vCenter and tailor the image for customization; in this part, I won’t go step by step through the image creation process but will instead focus on how to configure Cloudbase-Init. Second, I will show how to create a blueprint in VMware Cloud Assembly with some guest customization, deploy it, and check that the customization is applied successfully.

Setup Cloudbase-Init in a vCenter Windows image

  1. Login to vCenter and create an image for the Windows version that you want to use.
  2. Create a Virtual Machine from the Windows image and power it on.
  3. Once the Virtual Machine is up and running, login with an RDP client and install Cloudbase-Init:
    • Download the Cloudbase-Init installation binaries from https://github.com/cloudbase/cloudbase-init or from the project’s downloads page. Ensure you are using version 0.9.12.dev72 or greater, which includes the OvfService metadata provider.
      Portable Multi-Cloud Initialization Service
    • Run the CloudbaseInitSetup_x64 installation binary and follow the on-screen instructions. Leave the default values, but change Username to Administrator and check Run Cloudbase-Init service as LocalSystem.
      Cloudbase-Init Installation (1)Cloudbase-Init Installation (2)Cloudbase-Init Installation (3)Cloudbase-Init Installation (4)Cloudbase-Init Installation (5)
    • Click Install and after the installation completes, edit the configuration files.
    • Navigate to the conf directory under the chosen installation path and edit the file cloudbase-init-unattend.conf. Change the metadata_services value to OvfService (with its fully qualified class name):
      [DEFAULT]
      username=Administrator
      groups=Administrators
      inject_user_password=true
      config_drive_raw_hhd=true
      config_drive_cdrom=true
      config_drive_vfat=true
      bsdtar_path=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\bin\bsdtar.exe
      mtools_path=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\bin\
      verbose=true
      debug=true
      logdir=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\log\
      logfile=cloudbase-init-unattend.log
      default_log_levels=comtypes=INFO,suds=INFO,iso8601=WARN,requests=WARN
      logging_serial_port_settings=
      mtu_use_dhcp_config=true
      ntp_use_dhcp_config=true
      local_scripts_path=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\LocalScripts\
      metadata_services=cloudbaseinit.metadata.services.ovfservice.OvfService
      plugins=cloudbaseinit.plugins.common.mtu.MTUPlugin,cloudbaseinit.plugins.common.sethostname.SetHostNamePlugin,cloudbaseinit.plugins.windows.extendvolumes.ExtendVolumesPlugin
      allow_reboot=false
      stop_service_on_exit=false
      check_latest_version=false
    • In the same folder, edit the file cloudbase-init.conf. Change the metadata_services value to OvfService (with its fully qualified class name), and also set first_logon_behaviour and plugins.
      [DEFAULT]
      username=Administrator
      groups=Administrators
      inject_user_password=true
      first_logon_behaviour=always
      config_drive_raw_hhd=true
      config_drive_cdrom=true
      config_drive_vfat=true
      bsdtar_path=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\bin\bsdtar.exe
      mtools_path=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\bin\
      verbose=true
      debug=true
      logdir=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\log\
      logfile=cloudbase-init.log
      default_log_levels=comtypes=INFO,suds=INFO,iso8601=WARN,requests=WARN
      logging_serial_port_settings=
      mtu_use_dhcp_config=true
      ntp_use_dhcp_config=true
      local_scripts_path=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\LocalScripts\
      metadata_services=cloudbaseinit.metadata.services.ovfservice.OvfService
      plugins=cloudbaseinit.plugins.windows.createuser.CreateUserPlugin,cloudbaseinit.plugins.windows.setuserpassword.SetUserPasswordPlugin,cloudbaseinit.plugins.common.sshpublickeys.SetUserSSHPublicKeysPlugin,cloudbaseinit.plugins.common.userdata.UserDataPlugin
    • Finally, finish the Cloudbase-Init installation using the options below.
      Cloudbase-Init Installation (6)
    • After the Sysprep process completes, the Virtual Machine will shut down automatically. Alternatively, it is possible to install Cloudbase-Init in silent mode (unattended); a sketch of an unattended installation command follows this list.
  4. From vCenter, convert the Virtual Machine to Template.
    Additionally, before installing Cloudbase-Init, we can install any other tools and make any other configuration changes needed.
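
For reference, here is a hedged sketch of what the silent (unattended) installation mentioned above might look like from an elevated PowerShell session on the template VM. The installer file name and paths are illustrative; the Cloudbase-Init MSI also accepts additional properties (service account, username, and so on) that should be taken from the official Cloudbase Solutions documentation rather than invented here.

# Hypothetical unattended installation of Cloudbase-Init via msiexec.
# The installer and log paths below are placeholders for this example.
Start-Process -FilePath 'msiexec.exe' -Wait -ArgumentList @(
    '/i', 'C:\Temp\CloudbaseInitSetup_x64.msi',    # installer downloaded earlier
    '/qn',                                         # quiet mode, no UI
    '/l*v', 'C:\Temp\cloudbase-init-install.log'   # verbose installation log
)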

After these steps, we have a Windows image ready to use in VMware Cloud Assembly, with Cloudbase-Init prepared to customize our deployments. To customize the guest instance user experience, the user can change the properties shown in the configuration files. Most of the properties have suggested default values, except:

  • username: on first boot after Sysprep, the Administrator account is created with a blank password. By choosing the Administrator username + SetUserPasswordPlugin + the remoteAccess password in the blueprint, Cloudbase-Init will replace the blank password.
  • first_logon_behaviour: set to always, so the user will be prompted to change the password after first logon.
  • metadata_services: by listing only OvfService, Cloudbase-Init won’t try the other metadata services that are not supported in vCenter, which keeps the logs cleaner (there won’t be entries of Cloudbase-Init iterating over other metadata services and failing to find them). A quick way to review these logs is sketched after this list.
  • plugins: by listing only the plugins with capabilities supported by OvfService, the logs will also be cleaner. Cloudbase-Init executes the plugins in the order listed.
  • Run Cloudbase-Init service as LocalSystem: some advanced scripts might require running the Cloudbase-Init service as a dedicated administrator user. If this is the case, it must be selected at installation time.
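
As a quick way to review the logs mentioned above after a test deployment, the snippet below tails the two Cloudbase-Init log files. The paths assume the default installation location used in the configuration files earlier in this post.

# Tail the Cloudbase-Init logs to confirm which metadata service was detected
# and which plugins ran. Paths assume the default installation location.
Get-Content 'C:\Program Files\Cloudbase Solutions\Cloudbase-Init\log\cloudbase-init.log' -Tail 50
Get-Content 'C:\Program Files\Cloudbase Solutions\Cloudbase-Init\log\cloudbase-init-unattend.log' -Tail 50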

Deploy and customize Windows guest instance within VMware Cloud Assembly

Now I will show how to create a cloud agnostic blueprint to deploy and customize the Windows image from the previous step.

  1. Login to vRealize Automation Cloud as Cloud Assembly Administrator and select VMware Cloud Assembly.
  2. Navigate to Infrastructure tab and setup a vCenter Cloud Account.
    Configure vCenter Cloud Account
  3. Ensure that a new Cloud Zone is automatically created, which is associated with the vCenter Cloud Account from the previous step.
    Cloud Zone for vCenter Datacenter
  4. Create a new Project associated with the Cloud Zone from the previous step. Add users / user groups as Project Administrators / Members.
    Create Project
  5. Create a Flavor Mapping.
    Create Flavor Mapping
  6. Create an Image Mapping for the Windows image.
    Create Image Mapping
    If you don’t see the image created in the first part, go to Cloud Accounts and synchronize images. By default image collection runs automatically every 24 hours.
  7. Create a Storage Profile.
    Create Storage Profile
  8. Navigate to the Blueprints tab and create a Blueprint for the Project (these steps can be completed with any user added as Project Administrator / Member to the project created in step 4).
    Create Blueprint
    Blueprint example
    YAML code:
    formatVersion: 1
    inputs: {}
    resources:
      Cloud_Machine_1:
        type: Cloud.Machine
        properties:
          image: win2016
          flavor: small
          remoteAccess:
            authentication: usernamePassword
            username: Administrator
            password: Password1234@$
          cloudConfig: |
            #cloud-config
            write_files:
              - content: Cloudbase-Init test
                path: C:\test.txt
            set_hostname: demoname

    The blueprint deploys the Windows image and performs some customization. The remoteAccess properties provide the username and password to the guest instance (the password must meet the password policy). The metadata service will pick up these values and expose them to CreateUserPlugin and SetUserPasswordPlugin. Other remoteAccess options are generatedPublicPrivatekey and publicPrivateKey, explained in How to use remote access authentication in your Cloud Assembly deployments.
    The cloudConfig property will set the user data and expose it to the guest instance. Again, the metadata service will pick it up and expose it to UserDataPlugin. The plugin will interpret the data based on the first line of the script. In this case, the first line indicates a Cloud-config YAML configuration (an alternative user-data format is sketched after this list).

  9. Click Deploy to provision the new Windows machine. Provide a Deployment Name and the Current Draft as version. The new machine will be customized by the Cloud Configuration script from the Blueprint (Cloud Assembly also allows Cloud Configuration scripts in the Image Mapping).
    Deployment example
  10. Once the deployment has finished successfully, log in to the new instance with an RDP client, using the credentials specified in the blueprint, to verify the customization.
    • After login, the instance prompts to change the password.
      User's password must be changed
    • Check the file created at C:\test.txt for the contents from the Cloud Configuration script.
      Local Disk (C:)
    • Check the customized machine hostname.
      View basic information about your computer
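
As noted in the blueprint discussion above, the UserDataPlugin decides how to interpret user data from its first line. The hedged sketch below shows an alternative value for the blueprint’s cloudConfig property: a plain PowerShell script rather than a Cloud-config document, identified by the #ps1_sysnative first line that Cloudbase-Init recognizes. The file path is illustrative, and you should verify in your own environment which user-data formats are passed through by the OvfService metadata path.

#ps1_sysnative
# Hypothetical user-data script: Cloudbase-Init's UserDataPlugin detects the
# PowerShell format from the first line above. The path below is illustrative.
New-Item -ItemType Directory -Path 'C:\deploy' -Force | Out-Null
Set-Content -Path 'C:\deploy\provisioned.txt' -Value "Provisioned by Cloud Assembly on $(Get-Date)"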

Summary

As you can see, Cloudbase-Init is the Cloud-Init solution for Windows. It gives a lot of flexibility for Windows guest customization, and provides seamless integration with VMware Cloud Assembly.

Visit the VMware Cloud Assembly page to learn more about it and give it a free try!

The post Windows guest initialization with Cloudbase-Init in vCenter appeared first on VMware Cloud Management.


VMworld 2019: Quick notes from Barcelona

This post was originally published on this site ---

With VMworld Europe behind us, I’d like to share a few highlights with you. But before we dive in, I am happy to report that our Cloud Management BU had a very strong showing across the board, and the feedback we are getting is very good. More than 4,000 customers attended cloud management-focused sessions, and 18 customers from all over the world testified to the value they’ve seen from our hybrid cloud management solutions and shared some of their experiences and best practices. You should be able to download and replay much of this content as soon as it publishes, and I will point you to the most interesting sessions you may have missed.

Hear from vRealize Customers Firsthand

Our customers are our best advocates, and we love it when they speak up! In addition to many customers co-presenting with us at VMworld, we have highlighted several more in this VMware press release published on the second day of VMworld Europe. Read about the value these customers saw when they deployed vRealize cloud management solutions.

Cloud Management Deep Dive session

This was our flagship session, presented by Ajay Singh, SVP and GM of the Cloud Management BU, and Purnima Padmanabhan, VP of Product Management, Cloud Management BU, joined by a customer from the Region of Uppsala, where they deployed the full vRealize stack. In a room of 300+ people, Ajay and Purnima shared a rare insight into where our products are today and where they’ll be tomorrow. With an end-to-end demo and a customer story, this 90-minute session covered the market-leading vRealize product line, on-prem and “aaS”, and also highlighted Wavefront, CloudHealth, Veriflow, and other emerging products and technologies. A detailed 30-minute demo showed how our tools work together to provide even more powerful capabilities in the areas of automation, capacity management, intelligent workload placement, and cost optimization across multi-clouds.

You can already replay this session here.

Hands-on Labs on New Releases

Back at VMworld Americas in San Francisco we announced a series of major upgrades to our cloud management products: vRealize Automation 8.0, vRealize Operations 8.0, vRealize Network Insight 5.0, and SaaS versions of these products. Since then, these products have been released and are now available to all customers. In Barcelona, attendees had their first opportunity to familiarize themselves with the new products through brand-new Hands-on Labs (HOLs) that cover their new features. These HOLs drew huge interest from attendees and scored high on the top 10 HOL lists for each day of the conference, as well as overall:

  • HOL-2001-91-ISM – What’s New in VMware vRealize Operations 8.0   –    scored #1
  • HOL-2021-91-ISM – What’s New in vRealize Automation 8.0   –   scored #5
  • HOL-2002-92-CMP – A Quick Introduction to vRealize Network Insight   –   scored #7

You can run these HOL’s, as well as other cloud management HOLs, here.

 

 

The post VMworld 2019: Quick notes from Barcelona appeared first on VMware Cloud Management.

Frustrated Managing Your Cloud Environment?

This post was originally published on this site ---

Are you frustrated managing your private, hybrid, public or multi-cloud environment? Is the tool you have now not working for your environment today? If you’re thinking about switching vendors and don’t know where to start, or if you’re using a cloud management tool but not sure of all the capabilities, the VMware Cloud Management Buyer’s Guide can help you with both situations.

The reason I bring up evaluating your current capabilities is that I often hear from customers that they didn’t even know their existing solution could already do the very thing they were seeking. Evaluating your current tool’s capabilities is therefore just as important as preparing to evaluate new options.

five-step-cloud-management-buyers-guide

How can this guide help me?

Infrastructure and operations teams have been embracing multi-clouds of private, hybrid, and public cloud architectures, and each cloud is becoming a new silo to manage. Leaders are demanding – they want monitoring and analytics to optimize the environment, better cost controls, and an energized IT staff. You’re just trying to gain visibility, troubleshoot across all the silos, and maintain governance. This guide can help you, as well as your management team, because it has key nuggets for everyone, including:

  • A way to understand the shortcomings of your current IT operations
  • Checklists (not just one, but multiple) to base your evaluation on the right criteria so you don’t have to determine that on your own
  • Questions to ask before choosing a solution or to evaluate the capabilities of your existing tool
  • Recommendations from industry leaders, sought out and now available all in one place

assess-current-tools

What’s in the guide?

The guide provides a logical process to help organize your thoughts and suggests how to get started, including:

  1. Assess: Current IT operations management solutions
    Review a table with the most common IT operations challenges. If you answer “yes” to three or more of the issues, continue through the document to evaluate whether your current tool can truly meet your goals.
  2. Evaluate: Criteria for a modern IT operations solution
    Review four tables’ worth of features to see what makes up a modern solution and start prioritizing which capabilities are most important to your team, including performance optimization, capacity and cost management, intelligent remediation, and integrated compliance.
  3. Prepare: Develop questions to ask vendors as you compare solutions, existing or new
    Use the 23 questions to kick-start conversations with your vendor, partner, or your own team as you compare solutions.
  4. Identify: Your team’s skills and address any gaps
    Your organization is unique, and so are your skill sets. Begin with these articles, which offer advice about developing a culture of innovation, overcoming cultural bias, and creating leaders through active mentoring.
  5. Analyze: Review your current processes to adopt new technology
    These articles from peers may help you evaluate processes as you adopt new technology, including running datacenters as clouds, AI as a paradigm shift, and how the most security-conscious industries adopt clouds.

ask-your-vendor-questions

What are the basics?

When deciding if you want to keep your current cloud management tool or switch vendors, don’t forget to ask the common questions. Those include asking the vendor (or your channel partner) about the types of customers, financial stability, product vision and (dare you ask) the roadmap, as well as the price.

 

How to get started?

In the VMware Cloud Management Buyer’s Guide you’ll see a suggestion that won’t be in other cloud management guides. Don’t forget to look at the people and processes in addition to the technology – whether it’s existing or new – to be truly effective.  Ultimately, the product should fit into your cloud strategy and your culture to meet the needs of your organization.

people-processes-and-technology

For information about VMware vRealize Operations, go to https://www.vmware.com/products/vrealize-operations.html.  Or, sign up now to participate in the vRealize Operations Cloud beta program at https://cloud.vmware.com/vrealize-operations-cloud

 

 

The post Frustrated Managing Your Cloud Environment? appeared first on VMware Cloud Management.

vSphere & Intel JCC, TAA, and MCEPSC/IFU: What You Need to Know

This post was originally published on this site ---

vSphere Platinum ShieldVMware has released updates to mitigate several Intel CPU issues, two of which affect security, as well as a regression in how VMware mitigates the previous MDS vulnerabilities. As with other CPU errata and vulnerability mitigations these topics are extremely complicated. Information security often speaks to three core tenets: confidentiality, integrity, and availability. These types of issues often touch all three areas, and require choices based on how your own environments and workloads are designed, what your performance needs are, and your tolerance for risk.

Our goal in this post is to help VMware customers figure out what action they need to take, and how to talk about any potential risk decisions with their CISOs, while not duplicating the VMware KB articles and other official guidance linked below. If you have questions please reach out to your account team, Technical Account Manager, and/or VMware Global Support Services.

VMSA-2019-0008.2 – MDS & L1TF Vulnerabilities

What it is: VMware has found situations where the MDS & L1TF mitigations in ESXi 6.7U2 (builds 13006603 to 14320388) were not effective. We strongly urge customers who have or are remediating the MDS vulnerabilities (CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, and CVE-2019-11091) to update to the latest releases.

What you need to do: Patch your vSphere infrastructure.

What effect will it have: Your environment will return to the pre-6.7U2 performance levels.

TSX Asynchronous Abort (TAA) Speculative-Execution Vulnerability (CVE-2019-11135)

What it is: Cascade Lake CPUs have their own built-in mitigations for MDS, but there’s a piece missing that was already covered in the MDS mitigations for earlier CPUs (Skylake and earlier).

What you need to do: Patch your infrastructure and enable MDS remediations if they aren’t enabled already.

What effect will it have: Depends on your environment and whether you have MDS remediations enabled already. Part of the remediation is to stop presenting certain affected CPU instructions (HLE/RTM) that are found on Cascade Lake CPUs to VMs. If you have VMs already using those instructions there may be performance implications, but power-cycling the VM will fix that (look into the new vmx.reboot.PowerCycle feature). If you have workloads that require HLE/RTM contact support for a workaround and performance implications.
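
As a hedged example, the advanced parameter mentioned above can be applied per VM with PowerCLI so that the next guest-initiated reboot becomes a full power cycle. The VM name is a placeholder, and you should confirm the setting’s exact behavior against the VMware KB for your vSphere release before rolling it out broadly.

# Apply the vmx.reboot.PowerCycle advanced setting to a single VM (sketch only;
# 'app01' is a placeholder name). Verify behavior against the relevant KB first.
$vm = Get-VM -Name 'app01'
New-AdvancedSetting -Entity $vm -Name 'vmx.reboot.PowerCycle' -Value 'TRUE' -Confirm:$false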

Machine Check Error on Page Size Change (MCEPSC) Denial-of-Service Vulnerability (CVE-2018-12207)

What it is: A malicious kernel driver inside a VM could use this vulnerability to cause a CPU machine check. This will cause ESXi to crash, generating a purple crash screen (Purple Screen of Death, or PSOD) with a particular error code listed on it (0150H).

What you need to do: Depends on whether you’re running nested hypervisors or using the Virtualization-Based Security (VBS) feature in vSphere. VBS enables use of Microsoft Device Guard & Credential Guard inside Windows 10 and Windows Server 2016 & 2019. You can enable the mitigation at the host or at the VM level, allowing you to test it and selectively protect particular workloads. Refer to the KB articles on how to enable these mitigations.
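
Before deciding how (or whether) to remediate, it helps to know which VMs are actually using VBS or nested virtualization. The following PowerCLI sketch reads those flags through the vSphere API; the vbsEnabled flag is assumed to be present on vSphere 6.7 and later, so treat the property names as something to verify in your own environment.

# List VMs with Virtualization-Based Security or nested hardware virtualization
# enabled -- the workloads most relevant to the MCEPSC mitigation decision.
Get-VM | Where-Object {
    $_.ExtensionData.Config.Flags.VbsEnabled -or $_.ExtensionData.Config.NestedHVEnabled
} | Select-Object Name,
    @{N='VBS';      E={ $_.ExtensionData.Config.Flags.VbsEnabled }},
    @{N='NestedHV'; E={ $_.ExtensionData.Config.NestedHVEnabled }}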

What effect will it have: This vulnerability is trickier to mitigate because it has many other implications to it, including performance. If you are using nested hypervisors or Virtualization-Based Security you absolutely need to test this before you enable it for all of your workloads.

Please see the questions below for more nuanced discussion around remediating MCEPSC.

Jump Conditional Code (JCC) Erratum

What it is: A situation where the way the CPU’s cache interacts with the “jump” instruction can cause unpredictable behavior. Intel fixed this in their new CPU microcode, but the fix can have performance implications for what Intel describes as “tight loops.” For infrastructure a tight loop often means things like compression and encryption, such as what is used for vSAN or VM Encryption. Intel has supplied VMware with fixes for these performance issues which we’ve included in these updates.

What you need to do: Just as you patch vSphere, it’s a great idea to keep your server firmware updated, too. VMware ships CPU microcode as part of vSphere, but you can also get it as part of server firmware updates. If you apply the new microcode without the fixed version of vSphere installed you may notice performance degradations, depending on how busy your environment is.

If you use VM Encryption or vSAN compression, deduplication, or encryption you should consider patching both firmware and vSphere together to avoid issues. Many servers allow you to use out-of-band management controllers, like iDRACs or iLOs, to stage firmware updates for the next system reboot. Alternately, you could update vSphere first to gain the performance fixes, then update your server firmware immediately afterwards.

What effect will it have: For vSAN & vSphere the net effect will be zero once the vSphere patches are applied. However, other workloads and applications may be affected and will require an update from the appropriate vendor.

Other Intel Errata and Updates

What it is: There are many other components inside a typical server. Like CPUs, these sometimes have bugs, too. There are new vulnerability disclosures around components like the Intel Management Engine and the BIOS, as well as updates to how certain CPUs manage power to improve stability.

What you need to do: Patch your server’s firmware.

What effect will it have: Patching removes vulnerabilities, so you’ll be more secure, but if you use vSAN Encryption, vSAN Deduplication & Compression, or VM Encryption you might want to consider updating vSphere first (see the Jump Conditional Code entry above).

So… should I be worried?

VMware classifies these updates as “moderate” severity which reflects how we perceive the threats associated with them. That doesn’t mean you shouldn’t worry, though – your organization’s specific configuration might mean you change the priority based on your own risks. As with MDS and L1TF these remediations need testing and discussion inside your organization. Remember that security is confidentiality, integrity, and availability. For example, TAA is a confidentiality problem, while MCEPSC is an availability problem. Your organization’s responses to each will differ.

For perspective, always keep in mind what an attacker needs in order to exploit these vulnerabilities: they need administrator/root-level access to a guest VM. Chances are that if an attacker is that far into your environment many things have already gone wrong. Furthermore, crashing a host alerts vSphere admins to the attacker’s presence, which isn’t typically what the attacker wants. This isn’t to say you shouldn’t act on these issues – you absolutely should – but you might be able to do it in a more controlled & tested manner. This is especially true if your organization uses Virtualization-Based Security on Windows VMs or other forms of nested virtualization.

Frequently Asked Questions

Q1. Are the mitigations for these issues enabled by default when these patches are applied?

A1. The updates for the MDS & L1TF issues are enabled when you enable a Side-Channel Aware Scheduler. The mitigations for TAA & JCC are enabled by default once the patches are applied and the hosts restarted. The mitigations for MCEPSC are not enabled by default because of the potential performance implications for VBS.

Q2. You seem to have avoided talking about performance. I need to know how this will impact my workloads.

A2. Performance is tricky because every environment and workload is different. However, we understand, and have written a KB article that addresses the performance impacts based on VMware’s internal testing.

Q3. How will I know if someone is exploiting TAA?

A3. Once an attacker has access to a VM they can silently exploit TAA, just as with L1TF and MDS. Therefore it is very important to protect guests with an Endpoint Detection & Response (EDR) solution, as well as ensuring that both guest OSes and workload applications are patched, too.

Q4. How will I know if someone is exploiting MCEPSC?

A4. If an attacker exploits MCEPSC the ESXi host will crash and display the purple crash screen (PSOD). If you are experiencing PSODs please contact VMware Support for help diagnosing the issues.

Q5. If I choose not to mitigate MCEPSC directly what else can I do to protect my environment?

A5. In order to exploit CPU vulnerabilities the attacker has to have administrator/root-level access to a guest VM in your environment, and the ability to inject a malicious kernel driver into the guest OS. Continuing to protect your applications and guest OSes with regular patching, firewalls, using solutions like Carbon Black’s EDR products, implementing AppDefense/Carbon Black Defense, and other defense-in-depth methods always pays dividends by preventing attackers from gaining access to your applications and data. Generally speaking, attackers are usually not that interested in your infrastructure except as a means to steal data.

You can also selectively remediate your environment by enabling the MCEPSC mitigation for VMs not using VBS, and by using things like DRS groups and rules to “pin” unremediated VMs to particular groups of hosts to limit the risk. We always urge caution around making your environment more complex, though. Complexity increases the risk of human error.
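
As a hedged sketch of the DRS approach described above, the PowerCLI below creates a VM group, a host group, and a should-run rule to keep unremediated VMs on a known subset of hosts. The cluster, VM, and host names are placeholders for your own environment.

# "Pin" unremediated VMs to a subset of hosts using DRS groups and a should-run
# rule (sketch only; names are placeholders).
$cluster   = Get-Cluster -Name 'Prod-Cluster'
$vmGroup   = New-DrsClusterGroup -Name 'Unremediated-VMs'   -Cluster $cluster -VM (Get-VM -Name 'legacy-*')
$hostGroup = New-DrsClusterGroup -Name 'Unremediated-Hosts' -Cluster $cluster -VMHost (Get-VMHost -Name 'esx01.*','esx02.*')
New-DrsVMHostRule -Name 'Pin-Unremediated-VMs' -Cluster $cluster `
    -VMGroup $vmGroup -VMHostGroup $hostGroup -Type 'ShouldRunOn'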

Consider that the risk from MCEPSC is to availability, versus a data loss situation.

Q6. If a host crashes because of MCEPSC will vSphere HA restart it on another host and cause the same issue?

A6. It’s possible. You may want to consider disabling HA for VMs that are unmitigated.

Q7. If nested hypervisors are impacted by MCEPSC how does this affect my vSAN witness VM?

A7. The impact is to nested workloads (VM inside a VM). The witness functionality is implemented as a process on ESXi, but not nested, so it will operate normally.

Q8. Do the Side-Channel Aware Schedulers (SCAv1 and SCAv2) mitigate these issues?

A8. The versions of SCAv1 and SCAv2 in these patches help mitigate L1TF, MDS, and the TAA issues but do not affect MCEPSC.

Q9. I have EVC enabled, will these mitigations work?

A9. Yes. EVC enables lots of flexibility around CPUs and that flexibility lends itself well to mitigating CPU vulnerabilities.

Q10. How do I know if my CPU or server is vulnerable?

A10. Please check with your hardware vendor to determine if your server is vulnerable.

Q11. Are these issues being actively exploited?

A11. At the time of this writing VMware is not actively aware of these issues being exploited, but as with most security vulnerabilities it’s likely just a matter of time until exploit code is generally available. To reiterate: to exploit these issues an attacker will need high-level access to a guest VM, so they will need to break into an application and/or a VM first.

Q12. Is this a VMware issue or does it affect other hypervisors?

A12. This is a hardware issue. All operating systems and hypervisors are affected. VMware is simply helping our customers deal with these complicated and tricky issues.

Q13. Do these issues affect Kubernetes?

A13. Yes. These issues affect all operating systems and workloads, because it’s an issue with the underlying CPU hardware.

Q14. Do all my VMs need to be power-cycled as they did with the MDS remediations?

A14. No. By vMotioning VMs to a patched host they gain the protections. However, if you haven’t fully remediated for MDS you might be interested in the new vmx.reboot.PowerCycle advanced parameter that has been added to vSphere.

Q15. How do I get the updated CPU microcode for my server?

A15. VMware ships CPU microcode as part of vSphere releases and updates. However, those microcode updates don’t include updates for things like the Intel Management Engine, BIOS issues, etc. Our recommendation is to update the firmware on your server using your hardware vendor’s preferred methods because that will include other hardware components. Please review the notes around the JCC issues above for thoughts on the order of operations.

Q16. I am on an air-gapped network. Do I need to mitigate these issues?

A16. Only you and your organization can determine that. VMware always recommends that customers protect themselves by patching, but when issues like MCEPSC involve trade-offs between confidentiality and availability, there isn’t one right choice we can recommend.

Q17. The VMware Security Advisory does not list a source for vSphere 5.5 updates. Where can I get those?

A17. Security advisories do not include products past the end of general support. ESXi 5.5 fixes are not currently planned for extended support.

Q18. I don’t want to apply these patches. Is there a workaround I can use?

A18. No workarounds have been identified for any ESXi version.

Q19. I don’t want the new microcode. Can I remove it from the ESXi patches?

A19. Altering ESXi in that way isn’t supported and may lead to instability from the untested configurations.

Q20.  This update does not appear to have a vCenter Server release associated with it, but the recommended patching sequence indicates that vCenter Server should always be patched first. What should I do?

A20. Always check to ensure you’re running the latest release of vCenter Server first. If there are no available updates for your vCenter Server and/or Platform Service Controllers then move on to updating ESXi.

Q21. Do I need to disable Hyperthreading to mitigate any of these new vulnerabilities?

A21. No, but we recommend continuing to use SCAv1 or SCAv2 schedulers to mitigate MDS and L1TF on older CPUs. You might find our post, Which vSphere CPU Scheduler to Choose, to be helpful.
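
For reference, a hedged PowerCLI sketch of checking and setting the scheduler follows. The VMkernel.Boot.hyperthreadingMitigation and VMkernel.Boot.hyperthreadingMitigationIntraVM option names are taken from VMware’s public scheduler guidance, but verify them against the KB for your release; a host reboot is required for changes to take effect, and the host name below is a placeholder.

# Inspect the current scheduler-related boot options on a host.
$vmhost = Get-VMHost -Name 'esx01.lab.local'
Get-AdvancedSetting -Entity $vmhost -Name 'VMkernel.Boot.hyperthreadingMitigation*' |
    Select-Object Name, Value

# Sketch: enable a side-channel aware scheduler and select SCAv2
# (mitigation on, intra-VM mitigation off). Requires a host reboot.
Get-AdvancedSetting -Entity $vmhost -Name 'VMkernel.Boot.hyperthreadingMitigation' |
    Set-AdvancedSetting -Value $true -Confirm:$false
Get-AdvancedSetting -Entity $vmhost -Name 'VMkernel.Boot.hyperthreadingMitigationIntraVM' |
    Set-AdvancedSetting -Value $false -Confirm:$false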

Q22. Do the updates released by VMware include previous security fixes?

A22. Yes. Patches are cumulative and the latest updates include all the features, bug fixes, and security remediations ever shipped as part of the particular vSphere version.

Have other questions? Feel free to add them to the comments.

(Many thanks to David Dunn, Edward Hawkins, and Mike Foley for their help in the creation of this article)

The post vSphere & Intel JCC, TAA, and MCEPSC/IFU: What You Need to Know appeared first on VMware vSphere Blog.

vExpert Cloud Management October 2019 Blog Digest

This post was originally published on this site ---

 

VMware’s vExpert program is made up of industry professionals from around the vCommunity. The vExpert title is awarded to individuals who dedicate their time outside of work to share their experiences and useful information about VMware products with the rest of the community. In July of 2019, we spun off a sub-program called vExpert Cloud Management for vExperts who specialize in products from VMware’s Cloud Management Business unit, namely the vRealize Suite. vExperts share their knowledge in many ways including presentations at VMware User Groups (VMUG), writing blog posts, and even creating content on YouTube. Below are the top blog posts for the month of October 2019 created by our Cloud Management vExperts.

 

vROps Capacity Planning Calculator by Batuhan Demirdal (@batuhandemirdal) Turkey

vROps resources and references on GitHub by Elizabeth Souza (@bethcsouza) Brazil

Collecting operating system metrics with vROps by Elizabeth Souza (@bethcsouza) Brazil

VMware VRealize Operations 7.X – Dashboard Creation by Chi Kin Wu (@umacboyvictor) Macau

Roneo: A NetFlow Duplicator by Martijn Smit (@smitmartijn) Netherlands

Tracking VM Powered Off Duration with VRealize Operations by Steve Tilkens (@stevetilkens) USA

vRealize Network Insight 5.0 Release – Deep Dive by Kerem Sugle (@ksugle) Turkey

Upgrading vRealize Operations from v7.5 to v8.0 by Alain Geenrits (@alaingeenrits) Belgium

 

Thank you to all our Cloud Management vExperts for this month’s content! If you’re interested in joining the vExpert program please visit https://vexpert.vmware.com/.

The post vExpert Cloud Management October 2019 Blog Digest appeared first on VMware Cloud Management.

Cloud Assembly VM Guest Management With Ansible

This post was originally published on this site ---

By Darryl Cauldwell, VMware Solution Architect
Advanced Customer Engagement

Project Team:

Amanda Blevins, Mandy Botsko-Wilson, Carlos Boyd, Darryl Cauldwell, Paul Chang, Kim Delgado, Dustin Ellis, Jason Karnes, Phoebe Kim, Andy Knight, Chris Mutchler, Nikolay Nikolov, Michael Patton, Raghu Pemmaraju

vRealize Automation Cloud

VMware vRealize Automation Cloud is a product suite comprising three separate modules: Cloud Assembly, Service Broker, and Code Stream. Cloud Assembly provides a blueprinting engine and manages connectivity to multiple cloud endpoints. Service Broker provides a unified catalog for publishing blueprints with role-based policies. Code Stream provides a release pipeline with analytics.

The Cloud Assembly blueprinting engine can be used to consistently deliver VMs and applications. To deliver VM guest configuration, Cloud Assembly natively uses cloud-init. This native VM guest configuration capability can be extended with desired state management tools like Puppet and Ansible. A major difference between Puppet and Ansible is that Puppet requires an in-guest agent, whereas Ansible operates agentlessly.

Ansible

In this blog post, I am going to focus on Cloud Assembly integration with Ansible. The Ansible engine is fully open source, and Red Hat also offers a license model and support package for it. In addition to the Ansible engine, Red Hat provides support for Ansible Tower, which offers many enterprise-grade features. A useful comparison can be found in the blog Red Hat Ansible Automation: Engine, Tower or both. Cloud Assembly integrates directly with the Ansible engine and does not depend on Ansible Tower.

The Ansible control node connects to managed nodes and pushes out small programs called “Ansible modules” to them. These programs are written as resource models of the desired state of the system. Ansible then executes these modules (over SSH by default) and removes them when finished.

Ansible Headless Deployment Model

Ansible does not require a single controlling machine; any Linux machine can run Ansible. In a headless deployment model, each machine acts as both the Ansible control node and the managed node. The absence of a central management server can greatly increase availability, simplify disaster-recovery planning, and enhance security (no single point of control).

When we use this deployment model, each server has Ansible installed and has an inventory file with a single entry named localhost. We create a Cloud Assembly blueprint which includes cloud-init configuration to pull an Ansible playbook from a repository and execute it locally. Here the Ansible playbook is specific to the server role on which it will be run.

An example of how to deploy an application using this deployment architecture using Cloud Assembly is described here.

Central Ansible Control Node

While running Ansible without a central control node can be useful for some deployment scenarios, others are better suited to having a central Ansible controller node controlling multiple clients.

Typical factors which would drive a deployment architecture with a central control node would be the following:

  • Windows guests, since the Ansible control node can only run on Linux
  • Large deployments that require centralized control for day two operations

When we use this deployment model, we deploy a central Ansible control node and, on it, create an SSH public/private key pair for authorization. We then configure this control node as an integration with Cloud Assembly.

We create a Cloud Assembly blueprint which includes, on each VM resource, cloud-init configuration to create a local account named ansible whose authorized SSH public key is the public key of the key pair on the Ansible control node. The blueprint then has an Ansible resource added with details of the Ansible control node, the private key to use to connect, and which playbook to execute. Within the blueprint, the Ansible resource has a relationship with the appropriate VM resources. When the blueprint is deployed, the VM resources are added dynamically to the Ansible server’s inventory file and the playbook is executed. In this model the Ansible playbook can be application-specific rather than server role-specific.

An example of how to deploy an application with this deployment architecture using Cloud Assembly is described here. This also includes a simplified example of how the Ansible server can be used for day-two operations on servers of a specific type.

Get Started

Managing your VMs through integration with popular configuration automation solutions is a small sample of what you can do with vRealize Automation Cloud.

If you are interested in the solution and want to get hands-on, you can start a trial now. It’s free for 45 days without any commitment!

Visit our website to learn more.

Detailed documentation for vRA Cloud can be found here.

The post Cloud Assembly VM Guest Management With Ansible appeared first on VMware Cloud Management.
