Channel: VMPRO.AT – all about virtualization

VMware vRealize Operations for Horizon and Published Applications 6.4, Part 3: New Application-Crash Alerts for Desktop Applications and New Dashboard for Root-Cause Analysis

This post was originally published on this site

Introduction

This blog post addresses two new features of VMware vRealize Operations for Horizon and Published Applications 6.4. The first is application-crash alerts for desktop applications. The second is a dashboard for root-cause analysis.

Part 1 of the blog-post series described what is new in vRealize Operations for Horizon and Published Applications 6.4. Part 2 discussed new support for monitoring VMware App Volumes, View in VMware Horizon 7, and VMware Access Point. Part 4 will provide information about memory usage, support, and enhancements.

Application-Crash Alerts for Desktop Applications

The new application-crash alerts for desktop applications monitor application status in a VDI (virtual desktop infrastructure) environment or an RDS (Remote Desktop Services) session. This is vital to a successful implementation. In addition, knowing details about when and why an application crashed on a virtual desktop or an RD session is crucial to enabling you to successfully troubleshoot issues.

Figure 1 is a sample screenshot showing the Alert pane with the notification event generated at the time of an application crash, logged in the Windows event logs, and passed back into vRealize Operations Manager. The alert remains in the pane until the user next logs in to the virtual machine (VM) or RDS session.

Alert Pane Notification Event

Figure 1: Sample Screenshot of the Alert Pane with Notification Event Information

After you open the notification, you see a summary view of the crash, as shown in Figure 2. Here you can conduct a detailed investigation to troubleshoot and fix the cause of the application crash.

Summary Tab, Notification Event

Figure 2: Summary Tab of the Notification Event Information

From the Summary window, you can expand the listed cause of the issue and get more information from the captured source-event log. Some of the other tabs, such as Impacted Object Symptoms, Timeline, Relationships, and Metric Charts, are available to further the investigation of the application crash.

What is causing the issue?

Figure 3: Summary Window of “What Is Causing the Issue?”

Additionally, if you need to clear a large number of registered alerts from the pane, or if alerts remain visible even after the user’s next login, you can run cleanallstopobjects.jar to remove unwanted alerts. Detailed steps are outlined under the Known Issues section in the VMware vRealize Operations for Horizon 6.4 Release Notes.

New Dashboard for Root-Cause Analysis

It is important to be able to move quickly from receiving an alert to generating custom, historical reports that pinpoint the issue so that you can fix it. The new dashboard provides a single window in which you can perform root-cause analysis.

The following screenshots show how you can use the Horizon End User Experience pane or the Horizon Help Desk pane to identify an issue using heat maps. From the Horizon End User Experience pane, navigate to the new Horizon Root Cause Analysis pane by following Steps 1 through 4 as shown in Figure 4.

Navigate from Horizon End User Experience to Root Cause Analysis

Figure 4: Navigating from the Horizon End User Experience Pane to the Horizon Root Cause Analysis Pane

Alternatively, follow the three steps shown in Figure 5 to navigate from the Horizon Help Desk pane to the new Horizon Root Cause Analysis pane shown in the vRealize Operations console (Figure 6).

Navigating from Help Desk to Root Cause Analysis

Figure 5: Navigating from Horizon Help Desk Pane to Horizon Root Cause Analysis Pane

Depending on how you enter the Horizon Root Cause Analysis pane, you are presented with a relationship map and statistics specific to the object selected and its analysis snapshot. Clicking the individual object reveals more details in the metric chart.

Root Cause Analysis to Object Relationships and Metrics

Figure 6: From the Horizon Root Cause Analysis Pane, View Object Relationships and Their Metrics

You can navigate to the object relationship to view specific data and metrics. You can then drill down to the root cause of the problem, whether it is at a common storage level or an individual application within the RDS session or operating system.

Drilling Down from Root Cause Analysis

Figure 7: Example of Horizon Root Cause Analysis and Ability to Drill Down into Related Objects

Part 1 of this blog series was an introduction to the new features in vRealize Operations for Horizon and Published Applications. In Part 2, we described the new support for monitoring VMware App Volumes, View in VMware Horizon 7, and VMware Access Point. In Part 4, we will describe memory usage, support, and enhancements.

 

The post VMware vRealize Operations for Horizon and Published Applications 6.4, Part 3: New Application-Crash Alerts for Desktop Applications and New Dashboard for Root-Cause Analysis appeared first on VMware End-User Computing Blog.


Connecting Containers Directly to External Networks


This article comes from Hasan Mahmood, a staff engineer on the vSphere Integrated Containers team.

With vSphere Integrated Containers (VIC), containers can be connected to any existing vSphere network, allowing for services running on those containers to be exposed directly, and not through the container host, as is the case with Docker today. vSphere networks can be specified during a VIC Engine deployment, and they show up as regular docker networks.

Connecting containers directly to networks this way allows for a clean separation between internal networks that are used for deployment from external networks that are only used for publishing services. Exposing a service in docker requires port forwarding through the docker host, forcing use of network address translation (NAT) as well as making separating networks somewhat complicated. With VIC, you can use your existing networks (and separation that is already there) seamlessly through a familiar docker interface.

Setup

To add an existing vSphere network to a VIC Engine install, use the collection of --container-network options for the vic-machine tool. Here is an example run:

$ vic-machine-linux create --target administrator@vsphere.local:password@vc --no-tlsverify --thumbprint C7:FB:D5:34:AA:B3:CD:B3:CD:1F:A4:F3:E8:1E:0F:88:90:FF:6F:18 --bridge-network InternalNetwork --public-network PublicNetwork --container-network PublicNetwork:public

The above command installs VIC adding an additional network for containers to use called public. The notation PublicNetwork:public maps an existing distributed port group called PublicNetwork to the name public. After installation, we can see that the public network is visible to docker:

$ docker -H 10.17.109.111:2376 --tls network ls
 NETWORK ID          NAME                DRIVER              SCOPE
 62d443e3f5c1        bridge              bridge                                  
 b74bb80d92ad        public              external

To connect a container to this network, use the --net option with the docker create or run command:

$ docker -H 10.17.109.111:2376 --tls run -itd --net public nginx
 Unable to find image 'nginx:latest' locally
 Pulling from library/nginx
 386a066cd84a: Pull complete 
 a3ed95caeb02: Pull complete 
 386dc9762af9: Pull complete 
 d685e39ac8a4: Pull complete 
 Digest: sha256:e56314fa645f9e8004864d3719e55a6f47bdee2c07b9c8b7a7a1125439d23249
 Status: Downloaded newer image for library/nginx:latest
 4c17cb610a3ce9651288699ed18a9131022eb95b0eb54f4cd80b9f23fa994a6c

Now that a container is connected to the public network, we need to find out its IP address to access any exported services, in this case, the welcome page for the nginx web server. This can be done by the docker network inspect command, or the docker inspect command. We will use docker network inspect here since the output is more concise:

$ docker -H 10.17.109.111:2376 --tls network inspect public
 [
     {
         "Name": "public",
         "Id": "b74bb80d92adf931209e691d695a3c133fad49496428603fff12d63416c5ed4e",
         "Scope": "",
         "Driver": "external",
         "EnableIPv6": false,
         "IPAM": {
             "Driver": "",
             "Options": {},
             "Config": [
                 {
                     "Subnet": "10.17.109.0/24",
                     "Gateway": "10.17.109.253"
                 }
             ]
         },
         "Internal": false,
         "Containers": {
             "4c17cb610a3ce9651288699ed18a9131022eb95b0eb54f4cd80b9f23fa994a6c": {
                 "Name": "serene_carson",
                 "EndpointID": "4c17cb610a3ce9651288699ed18a9131022eb95b0eb54f4cd80b9f23fa994a6c",
                 "MacAddress": "",
                 "IPv4Address": "10.17.109.125/24",
                 "IPv6Address": ""
             }
         },
         "Options": {},
         "Labels": {}
     }
 ]

We now know that our running container’s IP address is 10.17.109.125. Next, we can try reaching nginx via the browser.
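When scripting against VIC, the container’s IP can also be pulled out of the `docker network inspect` JSON rather than read by eye. Here is a minimal Python sketch using the sample output above; the helper name `container_ips` is ours, and the JSON is trimmed to the fields the sketch needs:

```python
import json

# Trimmed sample output of `docker network inspect public` (shown in full above).
inspect_output = """
[
    {
        "Name": "public",
        "Containers": {
            "4c17cb610a3ce9651288699ed18a9131022eb95b0eb54f4cd80b9f23fa994a6c": {
                "Name": "serene_carson",
                "IPv4Address": "10.17.109.125/24"
            }
        }
    }
]
"""

def container_ips(inspect_json):
    """Map container name -> IP address (prefix length stripped) for one network."""
    network = json.loads(inspect_json)[0]
    return {
        c["Name"]: c["IPv4Address"].split("/")[0]
        for c in network.get("Containers", {}).values()
    }

print(container_ips(inspect_output))  # {'serene_carson': '10.17.109.125'}
```

On Docker clients that support it, `docker network inspect --format` can do a similar extraction server-side, but parsing the JSON keeps the sketch independent of the client version.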

This was only a very simple example of how to make vSphere networks available to VIC containers. You can learn more about the different networks that the VIC container host connects to in the VIC documentation. Download vSphere Integrated Containers today!

The post Connecting Containers Directly to External Networks appeared first on VMware vSphere Blog.

VMware Horizon 7.1 & Horizon Apps: Raising the Bar for User Experience & Published Applications


VMware Horizon 7.1 and Horizon Apps Overview

VMware Horizon 7 is the premier platform for virtual desktops and applications, and we continually strive to make it even better. Today, we are excited to announce Horizon 7.1. In this release, we innovated in two key areas to raise the bar in delivering a great user experience and change the application delivery paradigm. Let’s dive in.

An Even Better User Experience

Introducing BEAT

While many remote display protocols do a satisfactory job across high-bandwidth and low-latency networks, they degrade considerably under non-ideal conditions. To unleash productivity for mobile and remote workers, the protocol must deliver a great user experience across non-ideal networks. This includes public Wi-Fi or mobile networks, as well as networks across longer transcontinental or intercontinental distances with lower bandwidth, higher latencies and packet loss. That’s where VMware Blast Extreme Adaptive Transport—or BEAT—comes in.

The VMware product teams were hard at work to extend Blast Extreme with an adaptive transport that pushes the user experience envelope even on non-ideal networks. Our engineers designed BEAT from the ground up to provide a cohesive response to both predicted and reported packet loss. To meet this need, they integrated adaptive algorithms with forward error correction and retransmission schemes. This means that BEAT dynamically adjusts to network conditions, including varying speeds and severe packet loss.


So far, our testing shows:

  • 6X faster transfers over inter-continental networks (~200 ms latencies, 1% packet loss).
  • 4X faster transfers over cross-continental networks (~100 ms latencies, 1% packet loss).
  • 50% bandwidth reduction out of the box.

Now, users can access their desktops and applications to stay productive on any device—whether they work on public Wi-Fi at a coffee shop or remotely across the ocean. For a side-by-side comparison, see this short video that compares Blast Extreme without BEAT and Blast Extreme with BEAT on both ideal and non-ideal networks:

Perhaps the best part is that all of these improvements happen automatically with this release. No manual network analysis or complex configuration is required! Stay tuned for follow-up blogs diving deeper into how BEAT achieves these incredible results.

Skype for Business

At VMworld Europe last year, we announced a new VMware and Microsoft collaboration to enable our joint Horizon and Skype for Business customers with an even better experience. Working together, we created an optimized, direct peer-to-peer endpoint virtual channel that offloads media processing to the endpoints. This avoids excessive bandwidth consumption and load at the data center, delivering a better user experience. See the difference in how the optimized solution works in comparison to the existing solution below:


With the upcoming release of Horizon 7.1, this optimized Skype for Business support will be available as a tech preview for our customers to try hands on.

Just-in-Time Apps

Since our major release last year, customers adopted Horizon 7 at a rapid pace for a reason. Horizon 7 is the first solution powered by JMP, our next-generation delivery platform that includes three key technologies to untangle the operating system, applications and user personalization:

  • VMware Instant Clones
  • VMware App Volumes
  • VMware User Environment Manager

By doing so, all the component pieces reconstitute on demand to deliver Just-in-Time desktops, making it simple for customers to deploy quickly and scale elastically—all while reducing costs.

Customers love this differentiated JMP technology, but also asked us to bring this to published applications. A big driver is that many customers need to deliver critical Windows applications alongside mobile and SaaS apps as a critical component of a modern digital workspace. Many customers are also simply tired of their complex and costly legacy app publishing solution and are ready to switch.

With our latest release, Horizon 7.1 extends Just-in-Time delivery from virtual desktops to applications, bringing increased speed, scale and simplicity to published applications. With Just-in-Time apps, you dramatically simplify deployments with a tightly integrated stack and fewer components.

Independent testing shows the benefits of this technology. You can eliminate half the steps and scale four times faster than the competition without adding extra provisioning servers:


See for yourself how easy it is to deploy RDSH hosts for 2000 users in this video:

Now, imagine being able to scale your RDSH hosts to support twice the number of users in seconds. Wouldn’t that be great? You can do that with Just-in-Time apps. See how in this short demo video:

Introducing Horizon Apps

Horizon makes apps simple. With the unveiling of Just-in-Time apps in Horizon 7.1, we make this simple to buy, as well. Horizon Apps is a new stand-alone offering focused on published applications. Horizon Apps comes in two editions:

  1. Standard Edition: Includes app publishing (RDSH apps and session-based desktops), User Environment Manager, VMware vSphere and VMware vCenter. Standard Edition is $125 per named user or $200 per concurrent user.
  2. Advanced Edition: Includes everything in Standard Edition, plus all JMP technologies for Just-in-Time apps (Instant Clone, App Volumes and User Environment Manager). Advanced Edition is $200 per named user or $325 per concurrent user.

For customers who need published applications but don’t need VDI desktops, Horizon Apps is a great choice.

Time to JMP In

From desktops to apps to user experience, we continue to deliver innovation in our market-leading products. We invite you to see the value delivered in our latest releases. Take a look at some of our resources below for more information:


The post VMware Horizon 7.1 & Horizon Apps: Raising the Bar for User Experience & Published Applications appeared first on VMware End-User Computing Blog.

Here Comes … Horizon Cloud


Today, I’m excited to announce VMware’s Horizon Cloud service. This new cloud-scale offering enables organizations to deploy feature-rich virtual desktops and applications in the cloud, on-premises or in a combination of the two. Horizon Cloud uniquely allows customers to bring-your-own (BYO) infrastructure from certified hyper-converged infrastructure partners or get fully managed infrastructure from VMware/IBM SoftLayer.

The Horizon Cloud service includes a number of key new features that enable organizations to flexibly deliver feature-rich, robust virtual desktops and apps in real time to end users.

[Related: VMware Accelerates Digital Workspace Transformation with New Application and Desktop Virtualization Offerings]

Just-in-Time Management Platform (JMP)

Horizon Cloud customers take advantage of just-in-time management technologies, such as VMware Instant Clones, VMware App Volumes and VMware User Environment Manager as part of this service. The JMP platform allows customers to easily scale and deliver feature-rich, secure desktops to their end users on demand.

To get a sense of what JMP can do for your organization—whether you start on-premises with Horizon 7 or move to the cloud with Horizon Cloud—here is a quick demo for you.

Digital Workspace Experience with BEAT (Blast Extreme Adaptive Transport)

Horizon Cloud customers can leverage BEAT, our new UDP-based transport. As part of our VMware Blast Extreme protocol, we designed BEAT to ensure a great user experience across various network conditions—especially those with low bandwidth, high latency and high packet loss.

Whether your workers use WiFi and mobile networks or download video from halfway across the world, they enjoy great user experience with BEAT.

In addition to supporting Windows, Mac, iOS, Android and Chrome, Blast now supports over 125 Windows and Linux thin clients.

Here’s a look at Blast and BEAT in action:

High-Performance Desktops with NVIDIA GRID vGPU

Horizon Cloud supports developers and end users looking for 3D desktops with NVIDIA GRID vGPU.

Simplified Packaging

Horizon Cloud makes it easier than ever to get started with both named and concurrent connection use cases. With Horizon Cloud, customers purchase a single “connection” license (NU or CCU) that is deployed in conjunction with on-premises or cloud-hosted infrastructure. This allows customers to get started for as low as $16/NU for the service, making it more affordable than ever to move to the cloud.

Horizon Cloud also integrates with VMware Workspace ONE for customers looking to deliver a unified digital workspace for all of their mobile, Web, SaaS and Windows desktops and apps.

Be sure to find more details on the new Horizon Cloud service here.


The post Here Comes … Horizon Cloud appeared first on VMware End-User Computing Blog.

The Next Horizon! Introducing Horizon 7.1, Horizon Apps & Horizon Cloud



If you took a look at desktop and application virtualization in the past and thought, “This is just too complex and costly to make the move,” now is the time to revisit your decision.

Today, we are excited to announce new technologies and offerings that drive the cost and complexity out of getting started and allow you to address more use cases than ever before.

So, whether you are an existing VMware Horizon customer, new to desktop and application virtualization or using a competing solution, we have something here for you.

To begin, let me start with three new technology innovations in our Horizon portfolio.

1. Just-in-time Management Platform (JMP)

When it comes to JMP, we have brought together our best-of-breed application and desktop management technologies (which, incidentally, do not require full VM control), including:

  • Instant Clones for ultra-fast desktop provisioning
  • App Volumes for real-time app delivery
  • User Environment Manager for contextual policy management.

In doing so—we are allowing you to dynamically scale desktops and apps to drive down costs without compromising user experience. What’s more, we are extending JMP capabilities to address not only virtual desktops, but also published apps functioning both on premises and in the cloud. This will allow you to extend the benefits of JMP across any Horizon virtual desktop or app deployment within your environment.

Here’s a quick look at JMP with Instant Clones for Horizon published apps:

2. Digital Workspace Experience with BLAST & BEAT

So, what is BEAT and how does this support a better user experience across more network conditions?

Well, BEAT stands for Blast Extreme Adaptive Transport—our new UDP-based transport as part of our Blast Extreme protocol. We designed BEAT to ensure great user experience across varying network conditions—especially those with low bandwidth, high latency and high packet loss.


With BEAT, we are seeing phenomenal improvements out of the box with Horizon 7.1 (our new release) versus Horizon 7.0. In fact, we are seeing up to 50% improved bandwidth utilization out of the box, and six-times-faster file transfers over intercontinental connections (~200 ms) with slight packet loss.

So whether your end users are working from the coffee shop on WiFi or mobile networks or downloading video from halfway across the globe, they can enjoy a great user experience. And one more thing: in addition to BEAT supporting Windows, Mac, iOS, Android and Chrome, Blast now supports over 125 Windows and Linux thin clients.

3. Skype for Business

Last quarter, VMware announced that we are working with Microsoft to support a tech preview of Skype for Business with Horizon. Well, now it’s here. We are pleased to bring all Horizon 7.1 customers a tech preview of Microsoft Skype for Business. We designed the solution to prevent hair-pinning and allow all traffic to pass from client to client to minimize bandwidth utilization and packet loss. To start with, the tech preview will support Windows clients. Stay tuned for other clients to follow shortly.

Bringing These Technology Innovations to Horizon

We are excited to bring all of these new innovations to Horizon 7.1. But, we are equally happy to bring JMP and BEAT to two brand new offerings: Horizon Cloud and Horizon Apps.

Introducing Horizon Cloud

Horizon Cloud is our next-generation Desktop-as-a-Service offering. It allows you to manage your desktops through a cloud service and pair that service with cloud-hosted infrastructure (IBM SoftLayer) or bring your own appliances (Dell/EMC, QCT or HDS). With Horizon Cloud, we added JMP capabilities to the service to streamline desktop management, plus BEAT for a great user experience across high-latency, high-packet-loss network conditions. We are also simplifying how you get started.

With Horizon Cloud, you now can buy a single license for as little as $16/user or $26/concurrent connection (infrastructure not included). You can then pair this service with the infrastructure of your choice—either BYO appliances or hosted infrastructure from VMware. This makes your buying decision easier. If you have a mix of use cases that require some of your infrastructure to be on-prem and some in the cloud, you have one offering and license that covers you across both.


New Horizon Apps

Many of you asked us to come out with a stand-alone published app offering. And I’m excited to announce that it is here.

We wanted to make it easier for you than ever to deliver and manage these apps—and this is why we wove our JMP technologies into the mix.

Here is a quick look at the advantage of using JMP with published or virtualized apps over competing solutions, such as PVS, based on tests and a paper by Principled Technologies.

So, whether you are looking for an alternative to Citrix XenApp or simply need to deliver apps rather than full virtual desktops to your end users as part of a broader application delivery strategy, we now have two great bundles to support you moving forward, available at a fraction of the cost of Citrix XenApp today.

  1. Horizon Apps Standard: App publishing (RDSH apps and session-based desktops), User Environment Manager, vSphere and vCenter.
  2. Horizon Apps Advanced: App Publishing (RDSH apps and/or session-based desktops), vSphere/vCenter and all of our JMP technologies (Instant Clones, App Volumes and User Environment Manager).

Special call out: If you are a Citrix customer, we also have tools and services that easily help you make the move.

Time to Take a Look

As you can see, we have been hard at work. A big shout-out to our engineering teams! We have focused on making virtual desktops and apps easier and more affordable than ever to set up and manage in the cloud and on premises.

So what are you waiting for? It is time to take a look, make the move and get started with Horizon.


The post The Next Horizon! Introducing Horizon 7.1, Horizon Apps & Horizon Cloud appeared first on VMware End-User Computing Blog.

A Cloud Alliance Between VMware and AWS Takes Shape


This is a cross-promoted blog post, originally posted on the VMware Radius blog. View the original blog post here.

Last October, when VMware unveiled a strategic partnership with Amazon Web Services (AWS), many in the tech industry were surprised. The two companies, once spirited competitors, announced they were collaborating on a new hybrid cloud solution called VMware Cloud on AWS.

Customers Push Alliance

VMware Cloud on AWS is a seamlessly integrated hybrid offering that gives customers the full software-defined data center (SDDC) experience from the leader in the private cloud, running on the world’s most popular, trusted, and robust public cloud. It delivers a hybrid cloud architecture that is suited for current and future applications—including containerized apps—and also works across both on-premises private clouds and advanced AWS services.

“This hybrid cloud environment enables a powerful set of use cases, including regional capacity expansion, disaster recovery, application migration, data center consolidation, new application development, and burst capacity,” says Mark Lohmeyer, vice president for products in the Cloud Platform Business Unit at VMware, “all with complete operational consistency and seamless workload portability.”

The reason for this strategic alliance, according to both companies, was customer demand. VMware has more than 500,000 customers worldwide, and AWS is the industry’s leading public cloud provider. Both companies found many of their customers were running both VMware and AWS and wanted the solutions to work better together.

“We discovered that many customers wanted to work simultaneously in both VMware and AWS environments and wanted to have the flexibility and strategic choice of where to run their workloads at any given time. That was the impetus behind creating VMware Cloud on AWS,” says Lohmeyer.

VMware and AWS Collaboration

Since the announcement, engineering teams from VMware and AWS have been working together closely.

“Our companies are working together in a meaningful way,” says Lohmeyer, “and there is a deep engineering relationship focused on making VMware Cloud on AWS a great service. We want this to be a powerful, convenient platform for both existing and new workloads.”

To date, thousands of AWS and VMware customers have expressed interest in VMware Cloud on AWS. A handpicked group of customers and partners have been closely engaged with engineering teams in the development process. A larger beta program with additional representative customers will follow before a general release later this year.

Enthusiasm From Customers

“I was very pleased with the level of enthusiasm we heard from customers after the announcement, across almost every industry vertical and geography. The scale and scope of interest we’ve received from customers has definitely exceeded all of our expectations,” says Lohmeyer.

As an example of a customer planning to test VMware Cloud on AWS, Lohmeyer pointed to a company that provides an IT platform for airlines and the travel industry. Uptime is critical, so the company built all of its business-critical applications in a private cloud using vSphere and other VMware solutions. Now the firm is looking to leverage the public cloud for disaster recovery and to extend to geographies where it does not have a data center presence.

“When you look at their teams,” says Lohmeyer, “they have spent 10 years learning how to use VMware tools and customizing applications to their environment. Now they can take all that learning and leverage it, with absolutely no changes, on top of the AWS cloud. That’s powerful.”

VMware Cloud on AWS will be delivered, sold, and supported by VMware as an on-demand, elastically scalable service and will be available in 2017.

The post A Cloud Alliance Between VMware and AWS Takes Shape appeared first on VMware vSphere Blog.

Supercharge Your vRealize Operations with Super Metrics


By: Alain Geenrits, Blue Medora

 

vRealize Operations is a very powerful management platform with some little-known features. Few are used less than super metrics, which is a shame, because they are a flexible and powerful way to extend your management environment. In this article, I will discuss some use cases with our Blue Medora management packs.

 

What is a super metric?

The easiest way to understand a super metric is to imagine it as a spreadsheet formula or cell. vRealize Operations offers a worksheet-like environment where you can combine the existing metrics of an object in a formula to create a new one: a super metric. This metric can then be attached to an object, and its data is collected like that of any other metric. You can, of course, also visualize super metrics in a dashboard, view or report.

Learn how to create a super metric in vROps

Figure 1: Creating a super metric

 

Since they work on any existing metrics, you can create super metrics from metrics imported using Blue Medora Management Packs. As I will demonstrate later in this blog post, there are quite a few use cases for those metrics. Think what you can do with database query times, network port throughput, storage volume utilization and more!

 

What makes a super metric so super?

First of all, you can apply functions to the metric. If you want to know, for example, the average execution time of queries on a database, you can do that with a single operation (avg()).

You can also perform rollups. For instance, if you want to verify that three volumes on a NetApp storage appliance stay below 80% utilization, you can create the average of the sum of all three ‘volume space occupied’ metrics.

A further specialty of super metrics is that you can attach them to an object higher in the relationship hierarchy. By specifying a depth parameter, you can, for example, attach the average CPU time of a VM to the parent cluster.

This gives rollup metrics a very interesting property: if you create a metric for an object type and then define a rollup, the function works automagically across all children of the parent object, with no need to define which ones. Think of the possibilities!
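To make the rollup idea concrete, here is a stand-alone Python sketch of what a rollup super metric computes conceptually. This is illustrative only: real super metrics are defined in the vROps UI with its own formula syntax, and the volume names and values below are hypothetical.

```python
# Conceptual sketch of a rollup super metric (illustrative only; real super
# metrics are built in the vROps UI, not in Python). Each child volume reports
# a "volume space occupied" percentage; the parent receives the average across
# all children, however many there are, and we compare it to an 80% threshold.
volumes = {  # hypothetical NetApp volumes and their occupancy metrics (%)
    "vol_db": 72.0,
    "vol_logs": 85.5,
    "vol_backup": 64.5,
}

def rollup_avg(child_metrics):
    """Average a metric across all child objects, whatever their number."""
    values = list(child_metrics.values())
    return sum(values) / len(values)

avg_occupied = rollup_avg(volumes)
print(round(avg_occupied, 1))   # 74.0
print(avg_occupied >= 80.0)     # False: the rollup stays below the threshold
```

Note how `rollup_avg` never names its children: adding a fourth volume to the dictionary changes the result without changing the formula, which is exactly the property described above.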

Generic resource types let you add metrics to a given object type, such as a measurement missing from your day-to-day monitoring that you want to add. You use the ‘this’ button in the creation panel to reference the object.

 

Why use a super metric instead of a view?

There is a simple rule: if you only need to look at a metric in a view once, by all means create a view. But if you repeatedly need a metric that is not available out of the box, you should consider a super metric.


Figure 2: Attaching the super metric to an object type

 

Since you attach a super metric to an object type, you can also specify whether it is a key performance indicator. From there, you can decide whether it is subject to normal vROps dynamic trending or whether you want a hard threshold. All of this is specified in a policy.

 

How do you create a super metric?

Quite a lot has changed since the launch of vROps 6.0. In vCOps 5.x you had to search for super metrics in the dark corners of the enterprise dashboard, but now they have their own menu item under ‘Content’.

To create a super metric:

  1. Navigate to the ‘Manage Super Metric’ panel in ‘Content,’ which is where you create the super metric.
  2. Once you’ve built your super metric, test it in the same panel on live or historical data (a new feature, and quite handy!).
  3. Now that it’s created, add the metric to an object type.
  4. Enable the super metric in a policy. If it is a KPI, define that there as well, along with how to collect the data.
  5. Start using the super metric.

 

Now that you know how to create a super metric, check out my post next week to see how you can use a super metric with a NetApp volume.

 


The post Supercharge Your vRealize Operations with Super Metrics appeared first on VMware Cloud Management.

VMware Experts Database Program 2016 Events


The VMware Experts Database Program

The program, which started in 2013 and is sponsored by the Cloud Platform Business Unit, has completed six successful workshop events. So far, it has focused on gathering MS SQL Server and Oracle experts in either Palo Alto or Sofia, Bulgaria (Cork, Ireland and Singapore are under consideration for future events) for three days of executive welcomes and open discussions with VMware engineers, architects and product specialists. Customized labs are set up specifically for each experts group and run on a high-end storage array provided by a VMware storage partner for that specific event. Since 2013, Pure Storage, Tintri and EMC have partnered with the CPBU in the delivery of individual events.

The experts in the various database disciplines are nominated from a variety of sources, interviewed and then selected based on three specific criteria. First, they must be true experts in their field: the famed “Microsoft MVP” and the Oracle designations “Oracle Certified Master,” “Oracle Certified Professional” and “Oracle ACE” are some of the distinctions the experts have achieved. Second, they should be public figures: teachers, writers, bloggers, consultants, frequent speakers and all-around well-known personalities qualify. Most importantly, they must be open-minded toward the message of VMware.

We believe that vSphere is the best platform for business-critical applications and databases of any level of performance demand or criticality, and the main objective of the program is to develop knowledgeable external BCA/DB experts who carry that message to their customers, students and the attendees of the many sessions they deliver throughout the industry and the world. To that end, VMware pays normal travel costs (flights, hotels, local transport, meals) for each selected attendee.

Continued Involvement of the Experts

After the event has completed, each expert is invited to join a specially established internal/external email distribution list. The DLs include all the experts within a specific discipline and the VMware employees who participated in the event. Separate DLs exist for the Oracle (20) and SQL Server (50) groups. They allow continued communication between the external Oracle and SQL Server experts and the VMware personnel they met throughout the workshop. The aliases are SQL_Server_TechAdvisory_Group@vmware.com and Oracle_Techadvisory_Group@vmware.com.

Experts are encouraged to include vSphere content in sessions they deliver at any conference or event in which they participate. They are also, on an individual basis, encouraged and sometimes recruited to submit sessions for VMworld. The 2016 VMworld season included seven sessions in the Virtualizing Applications track that were submitted and delivered by members of the VMware Experts Database Program. We expect new and interesting sessions to be submitted for VMworld 2017 as well.

Papers and blogs are also encouraged, and on occasion members of this group are commissioned to write white papers on specific subjects. An example is the paper by Allan Hirt of SQLHA, currently listed in the SQL Server section of the VMware.com BCA homepage:

Planning Highly Available Mission Critical SQL Server Deployments with VMware vSphere

The Events:

Each event provides a great learning experience for all involved. Oracle and SQL Server experts expand their personal toolkits with the message and technology of VMware, and in turn internal personnel have the opportunity to speak with true industry experts in a setting that fosters open and frank discussion.

Included below are links both to the new montage video from the Nov. 15-17, 2016 VMware Experts SQL Server event in Sofia, Bulgaria, and to a YouTube playlist that includes all the montage videos compiled over the entire VMware Experts series.

Videos:
VMware Experts Events – YouTube Playlist

VMware Experts Event – SQL workshop – Nov 2016 – Sofia, Bulgaria

 

The post VMware Experts Database Program 2016 Events appeared first on VMware vSphere Blog.


Jump (JMP), Radio & a Pony: Your Journey to Modern Workspace Management


Much like our industry’s tech evolution toward the digital workspace, the world is on a journey to digital radio. In 2017, Norway will become the first country to begin turning off FM radio and going completely digital. There are many questions and concerns, but the destination is inevitable, as many countries are soon to follow. Why are countries making this move? Because, according to Reuters, the current system is costly and inefficient:

“For the same cost, digital radio in Norway allows eight times more radio stations than FM. The current system of parallel FM and digital networks, each of which cost about 250 million crowns (£23 million), saps investments in programs.”

What Is the Problem with Traditional Workspace Management?

Just as digital radio pioneers questioned the limits of what was possible with FM radio, we at VMware question the limits of what is possible with today’s workspace management solutions.

In today’s enterprise, IT groups typically spend an enormous amount of time trying to manage the state of their systems and users. IT spends an onerous amount of time:

  • Figuring out how to patch endpoints;
  • Staying up to date with operating system (OS) releases;
  • Avoiding image proliferation;
  • Debugging performance problems; and even
  • Synchronizing data across redundant management systems.

These tasks severely hamper responsiveness to business user needs. All this leads to poor user experience, high operating costs and slow pace of innovation.


Is There Business Value Continuing with the Status Quo?

If you want a flexible enterprise that responds to change easily, can you continue to operate in traditional ways?

As you move toward Windows 10 with your virtual desktop infrastructure and applications, whether on-premises or in the cloud, will you take yesterday’s management inefficiencies forward?

Windows is not going away in the enterprise for a long time, so the strategic question to consider is:

“How will you chart a path forward to enable you to manage in a more agile way to drive greater efficiency?”

Charting a Better Way Forward with JMP Desired Outcomes

The path forward needs to be a better experience that drives greater productivity. Just as with digital radio, modern best-of-breed components drive this transformation. Using these best-of-breed components as raw ingredients to build a core platform, many creative capabilities, like creating new content on a new platform, become possible.

We must reimagine the management paradigm to create this type of experience. IT organizations should be empowered to spend more time understanding business user needs and managing their environment. IT must define desired outcomes by selecting what, who, when and where to deploy services.

VMware’s Just-in-Time Management Platform (JMP)—what we call “Jump”—orchestrates the building of necessary systems in real time to achieve the stated outcomes. This helps IT organizations achieve greater agility, reduced costs, faster time to market and greater ability to delight their end users.

JMP enables these desired outcomes using best of breed components. Use each element independently for deployment flexibility within existing infrastructures.

VMware Instant Clones: Leverages the hypervisor and your existing VMware virtualization infrastructure to give you the fastest, most modern way to provision virtual machines (VM) by rapidly cloning a running VM instance in seconds! This is in stark contrast to legacy image management technology that requires infrastructure overhead to operate. Learn more in this technical whitepaper.

VMware App Volumes: The leading solution to enable you to simplify Windows lifecycle management with real-time application delivery. Eliminate the pain in application packaging and reduce the number of images you manage by up to 70%. Learn more here.

VMware User Environment Management: The most powerful and easiest to use solution on the market. User Environment Manager helps you personalize user and application settings and configure user environments dynamically based on policy, helping deliver superior user experiences. Learn more here.

JMP uses these technologies to untangle the OS, applications and user personalization. In doing so, all the component pieces reconstitute on demand to deliver Just-in-Time desktops and apps across all infrastructure topologies. The power of combining these best-of-breed components is best illustrated with an example.

In the video below, we easily define and provision a JMP workspace. We show app delivery to this new workspace in real time, as well as user-specific personalization. The user at runtime then configures additional personalization before moving to a different department with a different set of application needs. Finally, we perform a major update to Windows, keeping the environment compliant and on the current business branch. I am sure once you watch this short video, you will see how much simpler JMP makes things versus the complexity of many of today’s environments:

Static Layers Technologies Do Not Solve the Problem for Dynamic Use Cases

We have been building toward this vision for several years now and have learned many lessons along the way. One of the key lessons is that no single technology silver bullet addresses the breadth of use cases for dynamic enterprise customers. It is like saying a digital radio is composed of a single new component.

When you use a rigid, singular technology approach, you must make tradeoffs in scale, performance and flexibility, alongside existing sunk-cost technology and process investments. This can make the journey to the digital workspace very difficult. JMP helps you adopt different components on your terms, as opposed to the all-or-nothing approach of singular, rigid technology stacks.

A good example of a single-component approach is the legacy layers technology conversation in our industry.

Firstly, there is no single agreed-upon definition of what layers technology is (perhaps a topic for a future post). In general, we define layers as a solution in which you must control the base OS image first. You then “layer” in other OS components. This is what I call the static model of management, because you need to control and/or provision the full stack with a single technology. In other words, it is legacy image management. This is in stark contrast to modern application, container-style technologies, like App Volumes.

App Volumes, for example, does not require base image control and allows real-time delivery of apps. Although proven at scale in the enterprise, many still wrongly compare this approach to full-stack static layers solutions. Beware, and understand how the technologies actually work before believing the latest “expert” opinion.

The limits of taking a singular static approach versus best-of-breed components can be profound when it comes to implementation in practice. VMware’s rich history and deep DNA in enterprise infrastructure management gives us a unique understanding of what it takes in reality. VMware was the first to bring to market a static layers solution with VMware Mirage.

Our customers use Mirage for physical Windows management, especially across distributed environments. Layers are a great solution for some static use cases, like retail point-of-service (POS) terminals and ATMs, where a fully managed image makes sense. Many lobbied in the past for VMware to move this technology to the data center. We did not, because static layers inherently have limitations at scale in the data center and cloud.

Combining and recombining VMs in a dynamic virtual environment adds overhead. Complexity also increases for static management in the data center or cloud, because you must stay in a full image management-only model. This approach limits flexibility to work with existing infrastructure. It is a one-trick-pony solution that tries to create a uni-pony by merging images and then deploying. Image management is a static management construct that the JMP dynamic model tackles in a more elegant and efficient way.

This Is the Beginning

I know firsthand what it took our engineering teams to pull this all together. It is thankless work at times, and I would be remiss not to acknowledge my gratitude to them.

If you are evaluating or have already begun your cloud-computing journey, we have you covered. JMP capabilities are available today on-premises and in the cloud. Customers already leverage JMP capabilities in Horizon Cloud for the simplicity and speed of cloud-hosted desktops and apps without managing their own infrastructure. Regardless of whether your desktop and app strategy leans toward on-premises, cloud or a hybrid model, we continue to innovate on this foundational platform to enable our customers on their journey to modern workspace management.

Now, cue your favorite “JUMP” song!

@harrylabana


The post Jump (JMP), Radio & a Pony: Your Journey to Modern Workspace Management appeared first on VMware End-User Computing Blog.

Big Data and Virtualization: A joint Cloudera and VMware Technical Talk


This technical talk was given at the VMworld conference events in the US and Europe in the past few months. In case you missed it when it occurred live, we thought we would give you a recording of it here.

The joint talk (VIRT7709 at VMworld) was created and delivered jointly by members of technical staff from Cloudera and VMware in late 2016.

Cloudera and VMware have collaborated for several years on testing and mutually certifying various parts of the Hadoop/Spark ecosystem on vSphere. This work began with the joint companies’ labs staff in 2011. From the creation of a set of reference architectures (two published by Cloudera on vSphere) to performance analysis and tooling, there are common points of interest that the companies continue to work on together. Key to the reference architectures are the familiar direct-attached storage model and an external storage model for HDFS data based on Isilon technologies. Both have been tested and certified by Cloudera.

The speaker from Cloudera, Dwai Lahiri, highlights the detailed technical best practices from the reference architectures that apply to deploying Cloudera’s Distribution including Hadoop (CDH) on VMware vSphere. The VMware speaker starts by dispelling common myths about virtualizing Hadoop that misguide newcomers to the field. He then talks about the core Hadoop architecture and how it may be mapped into appropriately sized virtual machines. A set of performance test outcomes is shown, demonstrating that Spark workloads run on VMware vSphere with performance equal to native, and in some cases even better than native, due to better memory locality handling by multiple virtual machines on host servers. Early impressions are also given of the collaborative work the companies are doing together. This will give you technical insight into the direction the two companies are taking in the big data space.

The agenda used in the talk follows the following sequence:

  1. Use Cases for Virtualizing Hadoop
  2. Myths about Virtualizing Big Data
  3. Hadoop Architecture on vSphere – an Introduction
  4. Overview of the Cloudera Portfolio
  5. Reference Architectures
  6. Cloudera CDH Performance Testing on vSphere
  7. Innovations from Cloudera and VMware
  8. Conclusions and Q&A

The post Big Data and Virtualization: A joint Cloudera and VMware Technical Talk appeared first on VMware vSphere Blog.

New Release: PowerCLI 6.5 R1 Poster!


PowerCLI 6.5 was a big release for us. We changed the name to reflect that we’re doing a bit more than just vSphere these days, we fully converted our snap-ins to modules, and we made many more upgrades and updates. There are always a lot of requests for a poster with every release, so I am extremely happy to finally announce the release of the VMware PowerCLI 6.5 R1 poster!

The poster features a bit of a new layout but still includes all of our cmdlets and some associated examples. It also calls out the PowerCLI Community Repository, along with some of what it contains thanks to our amazing community members, and some of the PowerCLI-focused Flings!


VMware PowerCLI 6.5 R1 Poster

If you’re looking to print one out, it is best at 39 inches wide by 19 inches high.

Also, be on the lookout for these posters coming to a VMworld and/or VMUG near you!

The post New Release: PowerCLI 6.5 R1 Poster! appeared first on VMware PowerCLI Blog.

How to use dynamic property definitions in vRA 7.2


Since vRA 7.2 was released, we are seeing more and more customers on earlier vRA versions wanting to move to the new version. I was recently involved in a migration of a customer’s production environment from vRA 6.2 to 7.2 (if you haven’t heard of migration, check it out; it’s a totally awesome feature). When the migration ended successfully, the customer started browsing the catalog items and in a patient and polite tone said “I don’t see my property values,” while I could feel he was screaming “Aaaaaaaghhh!”, panicking on the inside.

You could safely presume that the prospect of a major issue right before an approaching migration deadline increased my heart rate in no time. So, while my first reaction was “Ugh…”, I asked the customer if I could take a look at their 6.2 environment; it turned out they were using relationships as XML files.

The idea of the current example is simple: a user selects a VM Category from a list during a request, then the drop-down list of network profiles gets filtered based on this category, and so does the list of networks. The network profiles bear the same name as the networks, so it is easier to choose. In vRA 6 this is achieved with long, incomprehensible and rather static (I would say boring) XML statements that only specify relationships, and those relationships can only be formed 1-to-1. This means that in 6.2, when we choose the VM Category, only the Profile Name drop-down list can be filtered, not the Network Name. The “problem” with 7.2 is that migration does not carry the XML code over as the property value. Instead, we have to write vRO actions, which, as you may know, can do anything, except maybe brewing coffee, last time I checked.

So, after overcoming the first rush of blood to my head and estimating the situation I said to the customer: “Alright, we can fix this”. Seeing a slight smile on the face of the person next to me I started working on a resolution to this problem. Firstly, I had to develop some kind of a database for my vRO actions to play with. I opened the VM Category property definition and manually entered the categories in a static list as the customer wanted them.

The next problem I had to solve was the mapping between the network profiles and their respective VM Category. I could easily enter that as a static dictionary in my Javascript code in vRO, but I wanted to stick to a basic principle when coding vRO actions – keep as much of the data as possible in vRA and leave the logic to vRO. One solution could be creating another property definition as a static list featuring the relationships, but this meant the customer would have yet another set of properties to manage whenever something in their infrastructure changed, for example a network got removed. I knew that in the customer environment the relationship between VM Category and Network profiles was 1-to-many, so I implemented another approach – I filled the Description of the already migrated Network Profiles with the VM Category names. This way when the customer removes a profile, they would also remove the relationship, and vice-versa.

Next, I logged onto the vRO client, went to Design and created a new module from the Actions pane.

Then, I created a new Action and named it GetNetProfiles, which, you guessed it, could fetch the profiles based on an input called VM Category. On the Scripting tab of the Edit window I set the return type to be “Array of Strings”, because this is what a drop down list actually represents.

Then I added the input parameter and called it VMCategory of type String.

Now, the reader might already be shouting “Show me the code!”, so I will not delay this anymore.

function getNetProfiles(VMCategoryName)
{
    // The first vCAC host registered in vRO is the default IaaS server
    var host = Server.findAllForType("vCAC:vCACHost")[0];
    // Start with an empty entry so the drop-down has a blank default
    var netProfilesNamesList = [""];
    var model = "ManagementModelEntities.svc";
    var entitySetName = "StaticIPv4NetworkProfiles";
    // Filter: only profiles whose description matches the VM Category
    var property = new Properties();
    property.put("ProfileDescription", VMCategoryName);
    var netProfiles = vCACEntityManager.readModelEntitiesByCustomFilter(host.id, model, entitySetName, property, null);
    for each (var netProfile in netProfiles)
    {
        netProfilesNamesList.push(netProfile.getProperty("StaticIPv4NetworkProfileName"));
    }
    return netProfilesNamesList;
}

if (VMCategory)
{
    return getNetProfiles(VMCategory);
}
else
{
    return ["Please, select a VM Category"];
}

So, let’s analyse the code. First, it gets a host variable that represents the default IaaS server, i.e. the IaaS web service of your vRA environment. Then we craft a property variable that will be used to filter only those network profiles that have the VM Category as their description. After that, we call the vCACEntityManager class and read all entities of the StaticIPv4NetworkProfiles set, using the aforementioned property as a filter. Finally, the names of all network profiles returned by this query are extracted and poured into an array (remember the return type we set). So basically, we wrap this into a function and call that function in the action execution, after checking that a VMCategory has been selected.

After wrapping up the script code. I created another action, this time calling it GetNetworkName, which again returns an array of String and accepts an input string called NetProfile.

This action returns the name of the Network Profile as the name of the network in vCenter. Remember in my case I had the Network Profile name be the same as the Network name.

if (NetProfile)
{
    return [NetProfile];
}
else
{
    return ["Please, select a network profile"];
}

Now, you might be wondering how we attach these actions to a drop-down list and how we specify the input properties. Stay with me.

When done with scripting, I navigated back to the vRA 7.2 Automation Console and went straight to editing a Property Definition. I selected the property named VirtualMachine.Network0.ProfileName, set the type to String and the display as a drop-down. The values were set to “External values” and the script action was selected from the list of actions. This list is filtered based on the data type of the property, so don’t freak out if you don’t see all the actions here.

As the Input parameter I entered VMCategory and made sure the value is passed from the VM Category Property Definition I had created earlier.

After that, I edited the VirtualMachine.Network0.Name property and selected the GetNetworkName action while setting the ProfileName property as an input parameter.

I made sure the display order is set correctly, so the request form will show all the properties one under the other.

Next, I headed to the blueprint that needs these properties and made sure that the VM Category is set as a Custom Property and Show in Request is set to Yes, i.e. checked.

Since the properties for the profiles and networks are network properties we need to check they exist in the network configuration of the blueprint.

Finally, I was able to request a new VM:

 

The number of ways this approach can be modified or optimized is infinite. For example, we can use the now-recommended NetworkProfileName property, which combines the Network Profile and Network Name properties into one; if the profile name differs from the network name, we could create a dictionary that maps the two; or we could connect to the company’s service catalog and extract the VM Categories dynamically, and so on. I leave it to the reader’s imagination…
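One of the variations just mentioned, a dictionary that maps profile names to network names when the two differ, could look roughly like this inside the GetNetworkName action. The profile and network names below are hypothetical, and in practice you would keep this mapping data in vRA rather than hard-coded in the script:

```javascript
// Hypothetical sketch: map a Network Profile name to a differently
// named vCenter network when the two names do not match.
// All names here are invented for illustration.
var profileToNetwork = {
  "Prod-Profile": "dvPortGroup-Production",
  "Dev-Profile": "dvPortGroup-Development"
};

function getNetworkForProfile(netProfile) {
  if (netProfile && profileToNetwork.hasOwnProperty(netProfile)) {
    // Drop-down properties expect an array of strings
    return [profileToNetwork[netProfile]];
  }
  // Same fallback message as the original action
  return ["Please, select a network profile"];
}
```

As with the description-based mapping used earlier in the post, the trade-off is where the relationship data lives: a hard-coded dictionary is simple but must be edited whenever the infrastructure changes.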

More on vRA 7.2.

The post How to use dynamic property definitions in vRA 7.2 appeared first on VMware Cloud Management.

Paper: Introducing the NSX-T Platform


We see greater potential strategic opportunity in NSX over the next decade than our franchise product vSphere has had for the past decade.
said VMware’s CEO Pat Gelsinger talking about NSX’s capability to reach $1B this fiscal year.

VMware NSX, launched in August 2013, unifies in a single platform Nicira NVP (which the company acquired in July 2012 for approximately $1B) and VMware vCloud Networking and Security, covering the entire network and security model from Layer 2 to Layer 7 and integrating into the hypervisor the ability to manage switching, bridging, routing and firewalling.

This month VMware published a technical white paper titled Introducing the NSX-T Platform to explain its new architecture described as follows:

The NSX-T architecture has built-in separation of the data plane, control plane and management plane. This separation delivers multiple benefits, including scalability, performance, resiliency and heterogeneity. In this section, we’ll walk you through each layer and its specific role and characteristics.

VMware NSX-T is the second platform in the NSX family, focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks, i.e. supporting non-vSphere environments.

The 10-page paper contains the following sections:

  • Introduction
    • Networking and security today
    • How NSX-T addresses these trends and challenges
  • Architecture Overview
  • Management Plane
  • Control Plane
  • Data Plane
    • NSX-T Compute Independent Host Switch
    • Routing Scale and Performance
    • Routing and Convergence
    • Security
    • Services
    • Operations
  • Conclusion
    • Key Takeaways
    • What You Can Do Now


[Podcast] VMware End-User Computing on Frontline Chatter



What’s next for VMware End-User Computing (EUC)? Pat Lee, vice president of product management for cloud apps and desktops at VMware, recently talked about this and much more on EUC podcast Frontline Chatter.

“We’re always going to be improving and making constant changes each quarter to address customer concerns, figure out the use cases and provide a competitive and truly strong solution in the marketplace.”

Listen to the episode on SoundCloud to hear Pat’s thoughts on the evolution of VMware EUC, his favorite VMware projects and the crazy world that is the Internet of Things:

Pat talks about how VMware has evolved desktop transformation:

“In the last 36 months, we had a foundation in place that was good, but we really needed to scale it to be more successful in the market. We focused on some key areas. One, how do we deliver that end-user PC experience to as many users as possible? Early on, when you saw VDI, it was a great tactical solution when security, compliance and call centers were required, but it didn’t give you all the things you expected from a user experience: seamless web cam use, seamless local storage access, etc. All those things have to come into play so you can provide that native PC experience remotely. We made a big focus on: How can we expand our client story? How can we expand all the things that are required to deliver that rich user experience?”

[Read more: The Next Horizon! Introducing Horizon 7.1, Horizon Apps & Horizon Cloud]

On Blast Extreme, Pat explains how trends, such as increased mobility, drove the project:

“Our goal was to look at how things were changing. Most display protocols that we’d been working with and going up against were designed for the LAN, for a well secured corporate network. Sure there’s some remote access that happens with it, but a lot of it’s really designed around LAN delivery, network on-prem delivery. As we looked at how things were changing, desktop-as-a-service was becoming more viable. So public internet is coming into play—a lot more public internet usage is happening to connect the desktops. A lot more mobility—iOS, there’s Android. How do we give you a good experience for cases where I need to work on a laptop in a coffee shop for hours at a time, where battery life matters? From a codec perspective, what can we do to maximize the battery life of the client so you can work with mobility? … How do we focus Blast Extreme when the world is changing—more mobile, public internet, wireless delivery—and with more portable devices where battery life matters? So we sat down and said, if we can leverage things like industry-standard codecs, like H.264—which every device you have today, even refrigerators in your house, have a H.264 decoder in them—we can do hardware offloads, we can get maximum battery life … We were able to have a 10-hour desktop session on an iPad using Blast Extreme because we’re leveraging hardware … Those types of things bring a whole new level into what you can do when you look at mobility.”

[Read more: VMware Horizon 7 Blast Extreme Primer—Everything an Admin Needs to Know]

On the future of VMware EUC, Pat talks about strategically leveraging the cloud:

“How do we move to a cloud-centric vision for EUC? That doesn’t mean everything goes in the cloud but leveraging the cloud to solve critical problems, and for people that want to go all-in on the cloud, to give them a great cloud solution. We introduced Horizon Air Hybrid-Mode last year, the ability to have a cloud-managed, local delivery of VDI, and that was Phase One … With Horizon Air Hybrid-Mode, you see the first step in how we can deliver an on-prem solution that gives you a fast, low latency, high performance solution, but with you getting out of the infrastructure management game … If we can simplify the mundane tasks of delivery so you can focus on what you really care about—‘how do I get that desktop to the user, or the app to the user’—in the simplest way, we will have done our job.”

[Read more: Here Comes … Horizon Cloud]

Catch the complete conversation here.


The post [Podcast] VMware End-User Computing on Frontline Chatter appeared first on VMware End-User Computing Blog.

The Self-Healing Datacenter – Using Webhooks to Automate Remediation


Recently, Steven Flanders wrote a nice blog post explaining what Webhook Shims are and how they can be used.  In summary, both vRealize Log Insight and vRealize Operations support webhooks.  This is a capability that allows for integration with any other solution that has a REST API available.  Ticketing, notification, chat and other capabilities can be leveraged by vRealize Intelligent Operations.  Extending that capability can make your datacenter a self-healing datacenter!

If you are a user of vRealize Operations, you know that it can monitor your infrastructure, server OS, applications and more.  But as you know, monitoring is only part of the answer.  Wouldn’t it be much better to have vRealize Operations attempt some simple fixes before giving up and calling for human intervention?

In this blog post series, I will explain how to automatically remediate a problem detected by vRealize Operations, using the vRealize Orchestrator webhook shim to launch a workflow based on an alert.

REST Notification Plugin

Currently, vRealize Operations alerting can trigger notifications via email, SNMP trap or REST notification.  The REST notification plugin allows you to interact with REST APIs from practically any third-party system – almost!

Setting up an Alert notification rule in vR Ops

The truth is the REST Notification Plugin is not very robust and does not provide a lot of control over the format of the REST call.  You plug in a URI and some credentials, and vRealize Operations will try a PUT and a POST against that URI.  By the way, it appends the alert ID to the end of the URI.

Adding a REST Notification Plugin instance in vR Ops. Note the limited options available; there is nowhere to format the request body or control the method used.

This means that unless your REST API expects the request in the way vR Ops sends it, the call will fail.

For example, Orchestrator has a robust REST API.  To launch a workflow via that API, I would send:

curl -X POST -H "Authorization: Basic dmNvYWRtaW46dmNvYWRtaW4=" -H "Content-Type: application/json" -H "Accept: application/json" -d '{
  "parameters": [
    {
      "value": {},
      "name": "",
      "type": "",
      "description": "",
      "scope": "",
      "updated": false
    }
  ]
}' "https://10.140.45.13:8281/vco/api/workflows/a0b8b820-b8c9-454e-b128-c4eabb9d0015/executions"

Enter Webhook Shims!

As you can see, Orchestrator expects a body for this request that includes any inputs.  Orchestrator expects this payload even if there are no inputs.  My vRealize Operations REST call to Orchestrator will fail because it is not making a valid request.  How can we fix this?

One way is to translate the REST call from vRealize Operations into a REST call that can be consumed by Orchestrator.  Enter the Webhook Shims for Log Insight and vRealize Operations!  This diagram shows how it works:

Webhook Shims enables self-healing with vRealize Operations through an Orchestrator shim.

The Orchestrator shim takes the simplistic input from vR Ops REST notification and rebuilds the request in the format expected by Orchestrator.
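To make the translation concrete, here is a minimal sketch in Python of what a shim does with the bare vR Ops notification. This is not the actual webhook-shims code (which is a Flask application); the hostname `vro.example.com` and the workflow input name `alertId` are illustrative assumptions:

```python
import json

def build_vro_request(alert_id, workflow_id):
    """Rebuild a bare vR Ops REST notification (just an alert ID
    appended to the URI) into the JSON body Orchestrator expects.

    Orchestrator requires a 'parameters' array even when a workflow
    has no inputs; here we pass the alert ID through as one string
    input so the workflow knows which alert fired."""
    # Hypothetical vRO endpoint; substitute your own host and port.
    url = ("https://vro.example.com:8281/vco/api/workflows/"
           "%s/executions" % workflow_id)
    body = {
        "parameters": [
            {
                "name": "alertId",   # hypothetical workflow input name
                "type": "string",
                "value": {"string": {"value": alert_id}},
            }
        ]
    }
    return url, json.dumps(body)

# Rebuild a request for the workflow ID from the earlier curl example.
url, payload = build_vro_request(
    "12345-abcde", "a0b8b820-b8c9-454e-b128-c4eabb9d0015")
```

A real shim would then POST `payload` to `url` with the appropriate credentials; the point is simply that the shim, not vR Ops, is responsible for producing a body Orchestrator will accept.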

In the next blog post, I will show you how to implement this solution to automatically attempt remediation based on alert notifications from vRealize Operations to create your own self-healing datacenter!

The post The Self-Healing Datacenter – Using Webhooks to Automate Remediation appeared first on VMware Cloud Management.


Deploying WordPress in vSphere Integrated Containers


WordPress is a popular, open-source tool for the agile deployment of a blogging system. In this article, we offer a step-by-step guide for deploying WordPress in vSphere Integrated Containers. This involves creating two containers: one running a mysql database and the other running the wordpress web server. We provide three options:

  1. Deploy using docker commands in vSphere Integrated Container
  2. Deploy using docker-compose in vSphere Integrated Containers
  3. Deploy using Admiral and vSphere Integrated Containers

Deploy using docker commands in vSphere Integrated Containers

First, we need to install the virtual container host (VCH) with a volume store, which is used to persist the db data. In the following example, I create a VCH with a volume store test with the tag default under datastore1:

vic-machine-linux create --name=vch-test --volume-store=datastore1/test:default --target=root:pwd@192.168.60.162 --no-tlsverify --thumbprint=… --no-tls

Second, we deploy a container which runs the mysql database:

docker -H VCH_IP:VCH_PORT run -d -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_DATABASE=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress -v mysql_data:/var/lib/mysql --name=mysql mysql:5.6

Replace VCH_IP and VCH_PORT with the actual IP and port used by the VCH, which can be found in the command-line output of the vic-machine-linux create command above. Here -v mysql_data:/var/lib/mysql mounts the volume mysql_data to the directory /var/lib/mysql within the mysql container. Since there is no such volume mysql_data on the VCH, the VIC engine creates a volume with that name in the default volume store test.

Third, we deploy the wordpress server container:

docker -H VCH_IP:VCH_PORT run -d -p 8080:80 -e WORDPRESS_DB_HOST=mysql:3306 -e WORDPRESS_DB_PASSWORD=wordpress --name wordpress wordpress:latest

Now if you run docker -H VCH_IP:VCH_PORT ps, you should see both containers running. Open a browser and access http://VCH_IP:8080. You should be able to see the famous WordPress start page below:

wordpress_start

In addition, if you connect to your ESXi host or vCenter which hosts the VCH and the volume store, you should be able to find the data volume mysql_data under datastore1/test:

mysql_data

Deploy using docker-compose in vSphere Integrated Containers

Using docker-compose on vSphere Integrated Containers is as easy as on vanilla docker containers. First, you need to create the docker-compose.yml file as follows:

version: '2'

services:
   db:
     image: mysql:5.6
     environment:
       MYSQL_ROOT_PASSWORD: wordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     links:
       - db
     image: wordpress:latest
     ports:
       - "8080:80"
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_PASSWORD: wordpress
       WORDPRESS_DB_NAME: wordpress

Then simply run

docker-compose -H VCH_IP:VCH_PORT up -d

Open a browser and access http://VCH_IP:8080. You should be able to see the WordPress start page. Note that as of VIC engine 0.8, the volumes option is not yet supported for docker-compose, which is why we store the db data inside the db container instead of in persistent storage. A future release will include this feature.

Deploy using Admiral and vSphere Integrated Containers

Admiral is the management portal through which you can easily deploy containers using the Admiral UI or a template (similar to the docker-compose.yml file used by docker-compose). In this example, we will focus on deploying WordPress via the Admiral UI.

First, we need to deploy a container which runs the Admiral service:

docker -H VCH_IP:VCH_PORT run -d -p 8282:8282 --name admiral vmware/admiral

Go to the web page http://VCH_IP:8282 and add the VCH host to Admiral based on these instructions.

Second, create the mysql container by choosing Resources -> Containers -> Create Container, and input the parameters of the docker command you used previously when deploying WordPress on VIC. Don’t forget to set the ENVIRONMENT variables. Click on Provision to launch the container.

provision_container

Now you should be able to see both the admiral container and the mysql container in the Admiral UI. Note down the actual container name of the mysql container (Admiral adds a suffix to your specified name to form the actual container name).

admiral_mysql

Third, deploy the wordpress container following the same flow as in the second step. Note that the environment variable WORDPRESS_DB_HOST should be set to mysql_container_name:3306.

admiral_wordpress

Finally, open a browser and access http://VCH_IP:8080. You should be able to see the WordPress start page again.

Alternatively, you can also use an Admiral template, which works in a way similar to docker-compose, to deploy your WordPress application.  Simply go to Templates and choose the Import template or Docker Compose icon. Then copy and paste the content of our docker-compose.yml file into the text box. Click the Import button on the bottom right and then click Provision on the next page.  The WordPress application is ready for access after the status of the Provision Request becomes Finished.

admiral_alpine

Download vSphere Integrated Containers today!

The post Deploying WordPress in vSphere Integrated Containers appeared first on VMware vSphere Blog.

Forrester Intelligent Operations Total Economic Impact (TEI) Video


This video provides an overview of the Forrester Total Economic Impact (TEI) study for VMware vRealize Intelligent Operations. VMware vRealize Intelligent Operations is comprised of vRealize Operations Advanced, vRealize Log Insight, and vRealize Business for Cloud.

The purpose of the TEI study is to provide readers with a framework and quantifiable data and analysis to evaluate the potential financial impact of deploying VMware Intelligent Operations.

Please refer to my previous Blog detailing the findings of the TEI study for more information.

 

Check out our new video summarizing the findings:

Challenges prior to VMware Intelligent Operations

IT operations team struggled to manage its growing infrastructure

1. “Constant care and feeding”

2. Could never break out of firefighting mode

3. Relentlessly battling a growing ticker of service incidents

4. Difficulty in searching for underlying root causes

As a Result

1. Capacity ebbed and flowed

2. Virtual machines red-lining during peak periods

The team needed a solution to reduce its number of tickets and improve availability, which VMware Intelligent Operations/vRealize Suite Standard solved beautifully

Intelligent Operations TEI Composite Organization

Adoption of Cloud Management Results

  • A 20% improvement in operational efficiency
  • Over 10% savings in hardware costs
  • A 75% reduction in unplanned downtime
  • Three-Year Risk-Adjusted Results
    • ROI: 119%
    • Payback: 3 months
    • NPV: $1.4M

Visibility across the infrastructure

  • Provided the IT team with a comprehensive picture of the physical and virtual infrastructure
  • Visibility includes workload distribution, log analytics, and costs
  • Providing real-time performance and health metrics.
  • Helped empower the team to solve incidents more quickly and work more efficiently

Improved performance and capacity optimization

  • Automatically balances workloads and resolves contention
  • Optimizing existing capacity
  • Reduces the number of physical servers required to operate its virtual machines
  • Improves performance of the existing infrastructure
  • Improved performance reduces the number of service incidents, driving further efficiencies

To learn more:

 

Try our NEW Online Intelligent Operations TEI Estimator for yourself

Get an in-depth estimate of how much you can save by adopting VMware Intelligent Operations via vRealize Suite Standard.

TEI Online Estimator Tool

The post Forrester Intelligent Operations Total Economic Impact (TEI) Video appeared first on VMware Cloud Management.

Super Metrics in Action: Use Case with NetApp


By: Alain Geenrits, Blue Medora

 

In my last post, I covered super metrics in detail — from creating them to attaching them to an object; from what makes them so great to why you should use them. To follow up, this blog post will feature a common use case in NetApp where you track the maximum read latency (ms) on a NetApp volume. In this case, we want to attach that super metric to the NetApp system, so that we can easily track the overall max latency for all volumes on a system (rollup metric).

The first step is to create the formula (Figure 1). Note that we selected the adapter type (NetApp adapter), then selected NetApp volume in ‘Object Type’, and then ‘Read latency (ms)’ as the attribute type. The formula is displayed at the top for reference.

Figure 1: Creating a super metric

 

We now create the formula by typing ‘max()’ in the field and clicking the details icon to visualize it (Figure 2). Note that we also get a more readable format of the formula by clicking the blue arrow. By searching for a name in the Objects field on the left or choosing systems on the right, we can click a system to visualize the formula in real time. Note that we set depth=2.
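For reference, a rollup super metric of this shape looks roughly like the following — treat the adapter and metric identifiers as illustrative, since the exact keys depend on your management pack version:

```
max(${adaptertype=NetAppAdapter, objecttype=NetAppVolume, metric=read_latency, depth=2})
```

The depth=2 tells vRealize Operations to look two levels down the relationship hierarchy from the object the super metric is attached to, which is how a system-level metric can roll up values from its volumes.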

Figure 2: Testing the formula live

 

 

Now that we know the formula works, we hit ‘Save’ and, in the Super Metric menu, choose to associate the new super metric with an object type at the bottom of the screen.

Figure 3: Attaching the super metric to an object type

 

 

Once this is done we need to edit the policy and activate the super metric. Here we use the standard policy, so we activate the super metric in the ‘Collect Metrics and Properties’ pane.

Figure 4: Enabling the metric in a policy

 

And now… we wait for a collection cycle! After a while, the super metric will show up in your ‘All Metrics’ overview. See Figure 5:

Figure 5: Super metric displayed in the object

 

You can now use it everywhere! Dashboards, reports… the list goes on!

A tip: With rollup metrics it is important to know the relationships, the hierarchy, of objects. You can view these easily in the ‘Object’, ‘All metrics’ section, but it is even easier to browse them in the ‘Object relations’ section of ‘Administration’. See Figure 6:


Figure 6: Navigating object relations

 

I hope this will give you some guidance in creating your own super metrics for your environment. How about a metric to track the maximum number of transactions per second in a Microsoft SQL Server database, per SQL Server? See Figure 7:


Figure 7: Tracking max transactions per second in SQL Server

 

You could also track the average throughput on some specific ports of your Cisco Catalyst switch, the read/write ratio on your Dell EMC Vplex storage and so on. With the management packs from Blue Medora, you can easily monitor your complete stack and leverage all that super metrics have to offer.  

 


The post Super Metrics in Action: Use Case with NetApp appeared first on VMware Cloud Management.

The Self-Healing Data Center (Part 2) – Installing the Webhook Shims for Automating Alerts


In the previous post, I explained how vRealize Operations with vRealize Orchestrator can automate alert notifications via the REST Notification plugin to enable a self-healing datacenter.  I also introduced the Webhook Shims as the solution that provides the underlying capability.  In this post, I will walk you through installing the Webhook Shims in your environment and setting up a simple use case to test things out.

A Basic Use Case for Self-Healing

For example, you probably know that vRealize Operations can monitor services on a supported OS through Endpoint Operations.  If that service becomes unavailable, vRealize Operations can alert you.  But wouldn’t it be nice if vRealize Operations attempted to restart the service first?  Why bother an administrator for such a simple task?  If it still doesn’t respond after an automated restart attempt, then you may want a human to take a look.
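The restart-before-escalating logic this use case describes can be sketched in a few lines. This is just an illustration of the decision flow, not the actual Orchestrator workflow; the three callables are placeholders for whatever your workflow really invokes:

```python
def remediate(is_service_up, restart_service, notify_admin):
    """Attempt one automated restart before escalating to a human.

    The three callables are injected so this sketch stays independent
    of any particular OS, agent, or ticketing system."""
    if is_service_up():
        return "healthy"        # nothing to do
    restart_service()           # first line of defense: try a restart
    if is_service_up():
        return "self-healed"    # fixed without human intervention
    notify_admin()              # still down: escalate to a human
    return "escalated"

# Example: a service that stays down until it is restarted.
state = {"up": False}
result = remediate(
    is_service_up=lambda: state["up"],
    restart_service=lambda: state.update(up=True),
    notify_admin=lambda: None,
)
```

In the real solution, "restart_service" is an Orchestrator workflow triggered by the vR Ops alert, and "notify_admin" is the ordinary email or ticketing notification that fires only if self-healing fails.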

Let’s begin with installation of the Webhook Shims.  For this you will need an environment capable of running Python 2.7 (the language used to build the shim).  If you already have a Python 2.7 environment with virtualenv installed, you can skip over the next section.

Also, there are some modules in the Webhook Shim that don’t play well with Windows, so I strongly recommend a Linux OS here.

Installing Prerequisites on Photon OS

To keep things simple, I’ll use a virtual machine created with the Photon OS OVA (as it already comes pre-loaded with Python 2.7).  Once you have the Photon OVA deployed, you can open an SSH session and log in with root and the password changeme (which will prompt you for a password change).

The first thing we need to do is install wget.  Enter the command

tdnf install wget -y

Now we can use wget for the next step, installing pip and virtualenv for Python.  First, grab the pip installation script with wget

wget https://bootstrap.pypa.io/get-pip.py

Now you can install pip using the script we just downloaded.  What is pip?  It is a package manager for Python, sort of like how yum is for Linux (or in the case of our little Photon OS, tdnf).

python get-pip.py

Finally, you are ready to install virtualenv using pip.  By the way, virtualenv is not a requirement; it is just good practice to avoid contaminating your nice clean Python install with modules that are unique to a given environment or that may require different module versions.  Anyway, install virtualenv

pip install virtualenv

Congratulations!  Prerequisites are complete and we can get busy installing the shim.

Installing the Webhook Shims

Now that we have our environment ready, let’s begin the Webhook Shims install.  For this, we’ll use git to pull the repository down, so we will need to install that first.

 

tdnf install git -y

Great.  Now we can clone the Webhook Shims repository from github.

 

git clone https://github.com/vmw-loginsight/webhook-shims.git

Using a git repository allows you to update the Webhook Shims easily whenever a change is made (i.e. fixes or enhancements) to the main branch.

 

At this point, we can create the virtual environment for Python, using virtualenv.  As the readme on the Webhook Shims repository github page suggests, we will use venv-webhookshims.  We will create this environment in the repository directory.

 

cd webhook-shims

virtualenv venv-webhookshims

Now you will see why we are using a Python virtual environment.  The Webhook Shims has some modules that are required that are not included in the default Python library.  We will need to install those into the virtual environment we just created.  First, we will activate the virtual environment and then install the prerequisites.  This is easier than you might think, because the repository contains a file with a list of the prerequisites so we just have to reference that file with pip.

 

First, activate the virtual environment.

 

source venv-webhookshims/bin/activate

Notice the prompt changes to let us know we are in a virtual environment.  The OS will function as it normally does; the only difference is that the Python library will include any modules we add while in the virtual environment.  By the way, to exit the virtual environment, simply enter deactivate at any time.

 

Installing the prerequisites, as I mentioned, is easy.  There is a file in the repo named requirements.txt that contains all the modules needed to run the Webhook Shims.  Using pip we can simply reference that file to install them.

 

pip install -r requirements.txt

Now that was easy!  At this point you have installed the translator and are ready for the next step – configuring and running the Webhook Shims.  I will cover that in the next blog post.

The post The Self-Healing Data Center (Part 2) – Installing the Webhook Shims for Automating Alerts appeared first on VMware Cloud Management.

Setting the record straight on DRS, pDRS and Workload Placement


A consistent theme has been cropping up lately in questions from customers around VMware DRS (Distributed Resource Scheduler), predictive DRS (pDRS), and workload placement. The questions relate to how DRS and pDRS actually work, as well as some misunderstandings about the benefits they bring to customers’ environments. There are several descriptive posts on how DRS and pDRS work, why they were created, and how to tune DRS for each unique environment. Even so, it warrants more discussion on exactly why we make some of the decisions we do with respect to balancing and workload placement – especially as other vendors are making inaccurate claims.

The business problem

It is important to take a step back from the technology and understand the business problems we are trying to solve. First, where should I place new workloads? Should we place them round-robin on hosts, or can we make a smarter decision when a brand-new workload is onboarded into an environment? Second, once workloads are running, how can we ensure they have access to all the resources they need? Lastly, when I inevitably run out of resources, how can I ensure mission-critical applications have higher priority access to resources?

What compounds these questions are the numerous ways that customers configure their datacenters and clusters. Some customers use a few large clusters for everything, including production and test/dev. Other customers may use many smaller, purpose-built clusters for a variety of reasons, e.g., clusters designed to minimize license costs or to separate production from test. Others build clusters, fill them up to a target percentage, stop placing new workloads into them, and create new clusters moving forward. It is important that whatever is doing workload placement and balancing can work in each of these environments.

What metrics do DRS & pDRS consider?

The first thing to cover is what DRS and pDRS actually consider. Customers often tell me that DRS does not account for CPU Ready (%RDY), or memory swap, or [insert here] any other critical metric… In fact, DRS and pDRS consider the most critical metrics for compute, network, and storage. To put it a different way, compared to any other workload placement solution on the market, we look at 2x or more data points and make decisions 3x more frequently. For a more in-depth look at how DRS and pDRS function, I recommend these blog posts.

https://blogs.vmware.com/vsphere/2015/05/drs-keeps-vms-happy.html

https://blogs.vmware.com/vsphere/2016/05/load-balancing-vsphere-clusters-with-drs.html

https://blogs.vmware.com/management/2016/11/predictive-drs.html

Q: Why do the number of metrics and the decision frequency matter? A: Efficiency.

Looking at the data every 5 minutes and making decisions allows DRS and pDRS to ensure that current and forecasted workloads will be able to run with minimal or no contention for resources. In our studies, this leads to less volatility in the datacenter, which ultimately allows for higher utilization of hardware and reduces potential impact to other workloads.

Initial placement

The initial provisioning time for a new workload is minimal compared to the weeks, months, or years it may live on for. Because of this, initial placement of a new workload is generally a spot decision. There simply isn’t enough data to warrant longer-term decisions when the future demand is all but unknown. Even if it is a clone of an existing workload, there is no guarantee it will perform the same. For initial placement, DRS looks at the allocated resources of the workload being powered on and determines the best host on which to place and power on the workload.

Ongoing balance vs. contention

Another consideration is that DRS and pDRS are not just about balancing a cluster. Moving workloads simply to spread the load evenly has a cost. People tend to forget that vMotions and Storage vMotions are not free from a resource perspective. Constantly moving workloads for best fit only exacerbates issues. There is minimal gain, if any, from balancing between a host at 50% and one at 60%. In most environments, where there are 4-16 hosts per cluster, this constant movement can cause more issues than it solves. A good way to measure this is to use service accounts for any tools talking to vCenter and monitor what they do. A lot of third-party tools make exorbitant numbers of API calls and vMotions, and generate log data that fills up vCenter. A good way to keep those in check is with vRealize Log Insight, which can quickly highlight those noisy tools.

In contrast to simply balancing, DRS and pDRS ensure that workloads move less often while avoiding performance issues. If all workloads on a given host can access all the resources they are entitled to, and there is therefore no contention, there is no performance gain in moving any of those workloads. Again, moving a workload to another host would itself consume resources. Simply put, contention must be weighed heavily when balancing workloads.

Reactive vs. proactive

What VMware customers tell me they want is a solution that looks at the current state of entire environment, understands past trends, and proactively moves workloads to avoid contention. This is where pDRS comes into the conversation. Most workloads have some consistent trends. One over simplified example is to think about a proxy server. Traffic will generally be consistent throughout the day, with peaks at morning arrival of employees, lunch time and just prior to departure for the day. If you know the demand from historical data, then you can predict with some level of accuracy the future demand. That prediction, which comes from vRealize Operations, feeds pDRS and DRS to proactively move workloads to ensure they have the resources they need now AND in the future.
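To make the proxy-server example concrete, here is a toy time-of-day forecast in Python. This is emphatically not pDRS’s actual model (its predictions come from vRealize Operations analytics); it is just an illustration of predicting demand from historical daily cycles, with made-up sample values:

```python
from collections import defaultdict

def forecast_by_hour(history):
    """history: list of (hour_of_day, demand) samples.

    Returns the mean observed demand per hour of the day — a naive
    stand-in for the cyclical trends pDRS consumes. If demand was high
    at 9am every previous day, we predict it will be high at 9am today."""
    buckets = defaultdict(list)
    for hour, demand in history:
        buckets[hour].append(demand)
    return {h: sum(v) / len(v) for h, v in buckets.items()}

# Proxy-server style workload: quiet overnight, a peak at morning arrival.
samples = [(3, 10), (3, 12), (9, 95), (9, 105), (12, 60), (12, 64)]
prediction = forecast_by_hour(samples)
```

With a prediction like this in hand, a scheduler can move workloads off a host *before* the 9am peak arrives, rather than reacting once contention is already happening.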

Data granularity

In a previous post, I wrote about why the granularity of historical data matters. A good example is comparing the vCenter yearly data charts with the hourly ones. In the hourly chart you can see the peaks and valleys, while in the yearly chart everything looks much smoother. Essentially, when you roll up data over time, you lose its significance. This becomes critical for making proactive decisions. This is one of the reasons why vRealize Operations uses the same data sample size as DRS and keeps it for long periods of time. If the quality of the data is degraded by rollups over time, there is simply no way to make accurate decisions based on it.
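A quick numeric illustration of the rollup problem — the values below are synthetic, but they show how an hourly average completely hides a five-minute spike that the raw samples would reveal:

```python
# Twelve 5-minute CPU-usage samples for one hour: mostly idle,
# with one short interval where the VM red-lined.
samples = [20, 22, 21, 95, 23, 20, 21, 22, 20, 21, 22, 20]

hourly_avg = sum(samples) / len(samples)  # what a rolled-up chart keeps
peak = max(samples)                       # what the 5-minute data shows

# The hour "looks" calm on average (about 27%) even though one
# interval hit 95% — exactly the event a scheduler needs to know about.
```

Any decision engine fed only `hourly_avg` would conclude this workload is quiet; fed the 5-minute samples, it sees the contention event.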

pDRS maximum

Because of the amount of data that pDRS is leveraging, there currently is a supported limit of 4,000 VMs per cluster. This is half of the vCenter maximum for # of VMs per cluster, which is 8,000. To put that in perspective, if you have the maximum number of hosts per cluster (64), you would need to run more than 62 1/2 VMs per host to exceed the supported limit. Most customers I’ve spoken with are leveraging 16 hosts or fewer, which would mean you need to have >250 VMs per host to hit this limit. Again, this is a per cluster supportability limit.

Conclusion

Hopefully, this post can help address some of the more common questions & concerns you may have about DRS & pDRS. If you have more questions, I’d love to answer them or even continue more posts about this, so please leave questions and comments down below.

The post Setting the record straight on DRS, pDRS and Workload Placement appeared first on VMware Cloud Management.
