Channel: VMPRO.AT – all about virtualization

The Self-Healing Data Center (Part 3) – Configuring and Testing the Orchestrator Shim

This post was originally published on this site

So far, we’ve walked through installing the Webhook Shims.  In this blog post we’ll configure the Orchestrator shim for use.  The Orchestrator shim is just one of a handful of shims included in the solution, and more are being added by the community.  Participation is encouraged!  In addition to enabling a self-healing datacenter, Webhook Shims can send notifications, interact with ticketing and CMDB systems, and more.

Installing the Orchestrator Shim for Self-Healing

First, note that the Orchestrator shim includes its own readme file that contains some helpful information.  You can view that file via the GitHub repository web page or within the cloned repo we downloaded earlier.  It is located at

/webhooks-shims/loginsightwebhookdemo/vrealizeorchestrator.readme

Check the file out if you wish, but here’s what we need to do to configure the Orchestrator shim.

  • Create a .netrc file in the user’s home directory with credentials for vRealize Orchestrator
  • Modify the vrealizeorchestrator.py file to set the IP address for the Orchestrator instance we will be using

For the first task, the following commands will take care of what we need.  Replace the information in the curly braces with your own settings.

cd ~
echo 'machine {vro-host-or-ip} login {vro-login-name} password {vro-password}' > .netrc
chmod 600 .netrc

That takes care of authentication.  If you are wondering why we need this: a .netrc file allows credentials to be supplied automatically.  We changed the permissions on the file so that only the owner can read it.  For those more experienced, if there’s a better way to do this, feel free to leave a suggestion or fork the repo and take care of business!
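If you are curious what happens with that file at request time: Python's standard-library netrc module (which HTTP libraries such as requests can consult automatically) parses exactly this format. A self-contained sketch, using a throwaway file and made-up credentials:

```python
import netrc
import os
import tempfile

# Sketch of .netrc parsing; host/login/password below are made-up examples.
with tempfile.NamedTemporaryFile('w', suffix='netrc', delete=False) as f:
    f.write('machine vro.lab.local login admin password VMware1!\n')
    path = f.name
os.chmod(path, 0o600)  # owner-only, matching the chmod above

# authenticators() returns (login, account, password) for a machine entry
login, _account, password = netrc.netrc(path).authenticators('vro.lab.local')
print(login, password)
os.remove(path)
```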

Now it’s time to make some changes in vrealizeorchestrator.py.  The first is the VROHOSTNAME variable in the Python code, which should contain the hostname:port for your Orchestrator instance.

The second, optional, change is to bypass certificate verification if needed.  By default, the Python module making the request validates the certificate for the Orchestrator server.  If you are using a self-signed certificate (such as in a lab), set the VERIFY flag to False.
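For reference, after editing, the two settings discussed above look something like this in vrealizeorchestrator.py (the hostname below is a placeholder for your own instance, not a value from the repo):

```python
# Illustrative values only; substitute your own Orchestrator hostname.
VROHOSTNAME = 'vro.lab.local:8281'  # hostname:port of the vRO instance
VERIFY = False                      # False = skip TLS cert validation (self-signed/lab certs)
```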

For example, in my lab my vrealizeorchestrator.py looks like this (changes in yellow text inside the red boxed area):

OK, with those changes made and saved we can test things out quickly.  One last thing we need to do, if you are using the Photon OS image like me, is to open a firewall port for the Webhook Shims server.  By default, it uses port 5001.

iptables -A INPUT -p tcp --dport 5001 -j ACCEPT

Great, now start the Webhook Shims server…

cd ~/webhook-shims

python runserver.py

Testing the Webhook Shims Server

The server is now running and ready to accept requests!  Note the port number is 5001.  Keep this SSH session open to view activity as you test.  Let’s do that now: open a browser and point it to the IP address of the host running the Webhook Shims on port 5001.

Success!  This is the default page for the server – lots of great information on all the shims available in this solution.  Check it out.  If you do NOT get this web page, then check the firewall settings on the host running the Webhook Shims.

Our focus in the next blog post will be on using the Orchestrator shim, by creating a workflow, setting up an alert notification and testing it all out.

The post The Self-Healing Data Center (Part 3) – Configuring and Testing the Orchestrator Shim appeared first on VMware Cloud Management.


The Self-Healing Data Center (Part 4) – Configuration of the vRO Example Workflow


So far we have discussed the reasoning behind the Webhook Shims, installed it, configured it, started and tested the shim server.  In this blog post, we will set up things on the Orchestrator side so that the workflow is ready to go when an alert is fired from vRealize Operations.  This will enable self-healing by having Orchestrator attempt a simple remediation of restarting a service.

This blog post assumes you have familiarity with vRealize Orchestrator and some experience with importing workflows and working with the HTTP-REST plugin.  If these are new concepts for you, please don’t be discouraged.  Some great references to get comfortable with Orchestrator are:

HOL-1721-SDC-5 – Introduction to vRealize Orchestrator
Blog from vcoteam.info on using the REST plugin
Postman + vRO = HTTP-REST Plug-in Operations

Let’s dive in.

The Included Example Workflow

Within the Webhook Shims repository we cloned earlier is an example workflow that we will use in this blog post to restart a service on a Linux virtual machine.  This example workflow requires only one input: the alert ID.  The workflow is linear; there are no decision branches to validate data, handle errors, and so on.  It’s really just a very simple example.

The example Orchestrator shim workflow schema.

 

Several of the steps in the workflow are using a REST operation to gather information from vRealize Operations.  Here’s a summary of the workflow.

  1. The workflow uses the alert ID input to get the vR Ops alert (including all the details of the alert).
  2. The resource ID for the impacted resource is extracted from the alert.
  3. The resource details are fetched, using the resource ID.
  4. Because the alert is due to a service monitored by the Endpoint Operations agent, the workflow needs to get the ID of the parent resource.  The parent resource is the guest operating system where the agent is installed.
  5. The parent resource details are fetched, using the resource ID for the parent from step 4.
  6. Now the workflow extracts the operating system IP address from the resource details.
  7. Using the IP address, the workflow now issues an SSH command to restart the service.
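The two "extract" steps (2 and 6) amount to picking fields out of JSON. A rough sketch over simplified, hypothetical payloads (real suite-api responses are more deeply nested, and the field names here are illustrative):

```python
# Sketch of steps 2 and 6 over simplified, made-up vR Ops payloads.

def resource_id_from_alert(alert):
    # Step 2: the alert body carries the ID of the impacted resource.
    return alert['resourceId']

def ip_from_properties(props):
    # Step 6: find the guest OS IP address among the resource properties.
    for p in props['property']:
        if p['name'].endswith('ip_address'):
            return p['value']
    return None

alert = {'alertId': 'a1', 'resourceId': 'r-42'}
props = {'property': [{'name': 'config|name', 'value': 'web01'},
                      {'name': 'net|ip_address', 'value': '10.0.0.5'}]}
print(resource_id_from_alert(alert))
print(ip_from_properties(props))
```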

In this workflow, the service to be restarted (VMtools) is hard-coded.  I leave it up to the reader to modify this workflow to suit their needs.  It wouldn’t be difficult to modify the workflow to determine which service needs to be restarted based on the alert information.

You can import the example workflow package into your Orchestrator instance.  If you aren’t familiar with Orchestrator, I highly recommend you visit vcoteam.info and check out the great tutorials there.

Once you import the package, make a note of the workflow ID for the Restart Service from Alert workflow.  You’ll need that for the next steps.

The Workflow ID is found under the General tab

Configure the REST Host and Create Operations

As mentioned, Orchestrator uses REST to gather information from vR Ops.  So, we will need to add vR Ops as a REST host within Orchestrator.  Additionally, the workflow is making three different REST API calls to vR Ops.  We will create three REST operations in Orchestrator to support the workflow.

Let’s start with adding vR Ops as a REST host.  In the Orchestrator client, you can run the workflow from Library > HTTP-REST > Configuration > Add a REST Host.  The following screen shots walk through the inputs for my lab environment.

The “Name” is a friendly name to identify the REST host within Orchestrator.  For the URL, use https://{vR Ops host}/suite-api, and for certificate acceptance, if you leave it set to “No” you’ll be prompted to accept the cert.

 

For simplicity in testing, I recommend using Basic authentication.

 

Set the Session mode to Shared Session, and provide administrator level credentials for your vR Ops instance.

 

Very likely you won’t need a proxy – unless you just like to make things difficult for yourself! 🙂

If you are using self-signed certificates, I recommend leaving this at “No”.  Click “Submit” to add the REST host.

 

Create REST Operations

Now that we have added the vR Ops server as a REST host, we can create three REST operations that the workflow will require.  They are:

GET /api/alerts/{alertId}
GET /api/resources/{resourceId}/properties
GET /api/resources/{resourceId}/relationships

Each of these will be added using Library > HTTP-REST > Configuration > Add a Rest operation workflow.
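To make the templates concrete, here is how the three operation paths expand once their parameters are filled in (the host and IDs below are placeholders):

```python
# The three REST operation URL templates, expanded with example values.
OPS = {
    'getAlert':         '/api/alerts/{alertId}',
    'getProperties':    '/api/resources/{resourceId}/properties',
    'getRelationships': '/api/resources/{resourceId}/relationships',
}

def op_url(base, name, **params):
    # Build a concrete suite-api URL from one of the operation templates.
    return base + OPS[name].format(**params)

print(op_url('https://vrops.lab.local/suite-api', 'getAlert', alertId='abc-123'))
```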

 

Adding the “GET /api/alerts/{alertId}” REST Operation

Adding the “GET /api/resources/{resourceId}/properties” REST Operation

 

Adding the “GET /api/resources/{resourceId}/relationships” REST Operation.

Once added, you can configure the workflow to use these REST operations.  Edit the workflow Restart Service from Alert and modify the attributes for each of the REST:RESTOperation data types, selecting the appropriate REST operations.

Change the “Value” for each of the RESTOperation attributes to the corresponding REST Operations you just created.

Change the “Value” for the SSH credential attributes.

Now edit the workflow SSH Restart VMtools and modify the attributes for username and password to an account that has permission to SSH into the OS and start a service.

Now Orchestrator is ready to run the workflow to restart a service with the input of an Alert ID from the Translation Shim!  In the next blog post, we’ll set up the alert notification, configure an alert and trigger the alert to test everything out.

The post The Self-Healing Data Center (Part 4) – Configuration of the vRO Example Workflow appeared first on VMware Cloud Management.

Automating Infrastructure with vRealize Code Stream (vRCS) and Artifactory


Purpose:

The purpose of this post is to show how to automate infrastructure in a virtualized datacenter with vRealize Code Stream (vRCS) and other tools like Artifactory. The target audience is system admins, cloud admins and others who are not full-fledged developers but have some experience with scripting and automation or with building blueprints in vRA. Along the way, this post also clarifies how to use vRCS for any other automation purpose you want.

Introduction:

One point of clarification before I start discussing this topic. vRealize Code Stream (vRCS) is an amazing tool that provides probably the best Continuous Integration/Continuous Delivery (CI/CD) experience for a DevOps environment. In this post, however, I am taking the same set of tools and using them to automate my infrastructure. There are other, more traditional ways to achieve the same result, and perhaps you have been doing it that way for a long time. But I want to showcase how easily, and how much more elegantly, you can do the same tasks with vRCS. Sound interesting? Read on…

vRealize Code Stream (vRCS)-What is it?

In VMware’s words, “VMware vRealize Code Stream provides release automation and continuous delivery to enable frequent, reliable software releases while reducing operational risks.” So, vRealize Code Stream is an automation tool that gives you the following:

  • Application Delivery Automation
  • Pipeline Modeling
  • Artifact Management
  • Release Dashboard and Reports

These are just a few of its many capabilities. The strength of the solution is its simplicity and its integration capability. You typically integrate it with tools like Artifactory (artifact management), Jenkins (build automation) and vRealize Automation (infrastructure deployment) to automate your entire product development, build, test and deployment life cycle.

Example Setup:

A typical example of the setup is provided below:

  • Deploy an Artifactory server in your environment. Say Java is the development language for your organization; integrate Maven with Artifactory. Artifactory provides the artifact management capability.
  • Next, deploy a Jenkins server. Integrate Jenkins with a Git repository for source control management, integrate Artifactory with Jenkins, and configure Maven for Jenkins as well.
  • For testing, use Selenium, for example, and integrate it with Jenkins too.
  • Integrate vRCS with the vRealize Automation instance you have already deployed and configured.
  • Build a pipeline in vRCS in which, for example, two stages are defined: Dev and Prod. Each stage starts by deploying a VM in the respective area of vRA, then deploys packages from Artifactory, runs some tests in Selenium and runs a few Jenkins jobs.

Sample Use Case:

In the above setup, whenever a developer commits code to Git, a Jenkins job fires that builds the code and pushes it to the Artifactory server. You can then start the pre-defined pipeline in vRCS, which will do the following for you:

  • Deploys a VM in the Dev environment through vRealize Automation
  • Deploys the application from the Artifactory server to the newly deployed VM
  • Runs a few pre-defined tests
  • Runs a few Jenkins jobs

Depending on the output of these tests (you can apply gating rules), the next stage, i.e., Prod, is executed, which runs similar pre-defined tasks.
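The gating behavior can be sketched in a few lines of Python (entirely illustrative; vRCS itself expresses this graphically in its pipeline designer, not in code):

```python
# Toy model of a gated Dev -> Prod flow: Prod only runs if every Dev task,
# including the tests the gating rule keys off, succeeded.
def run_stage(name, tasks):
    print('Running stage:', name)
    return all(task() for task in tasks)

deploy_vm  = lambda: True  # stand-ins for the real pipeline tasks
deploy_app = lambda: True
run_tests  = lambda: True  # the gating rule evaluates this result

dev_ok = run_stage('Dev', [deploy_vm, deploy_app, run_tests])
prod_ok = run_stage('Prod', [deploy_vm, deploy_app]) if dev_ok else False
print('Prod executed:', prod_ok)
```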

All of this happens automatically, without any manual intervention. Essentially, from the point where the developer commits the code to the production deployment, no manual tasks are needed. With vRCS you can essentially build a no-touch environment.

You may be thinking this sounds very developer-ish and complex. It is actually not: it is a simple drag-drop-select environment, as you can see in the video.

What is covered?

As mentioned above, I am not going to cover a DevOps life cycle in this post. I am actually covering something completely different. All of us system admins and cloud admins know we have a lot of scripts running in our environments to automate the infrastructure (at least the areas we can cover). For example, consider the following:

  • Today NSX has automated the network and security areas completely. But in most datacenters, after a VM running some server is deployed and all the patches and security measures are implemented, it goes to the security team for an audit. Only once it passes that audit does it go to production. What if I could automate that audit itself?
  • In most cases the servers are not connected directly to the internet, and you have some kind of repository for patch management (a yum repository, Red Hat Satellite server, WSUS server, etc.). What if I write my own RPMs and want to merge them into the repository? What if I write my own Python scripts that I want to run on all servers (new or old)? What if I have a bunch of shell, PowerShell and PowerCLI scripts and want to manage and run them from a central repository? How do I do that?

Demonstrated use case:

This is exactly what I am trying to showcase. In the video below, I demonstrate an environment where VM deployment is done through vRealize Automation. I have an Artifactory server working as a central repository for RPM packages, Python packages, shell scripts and other documents. I want to achieve the following simple use case:

Configure a pipeline which, once executed, will do the following:

  • Deploy a VM in the Dev area through vRA
  • Modify the environment in the deployed VM so that the Artifactory server is used as the central YUM and Python package repository
  • Download and install RPM and Python packages from Artifactory
  • Install and set up a web server inside the VM
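The "use Artifactory as a central YUM repository" step boils down to dropping a repo definition onto the guest. A sketch of generating one (the repository name and base URL are placeholders for your own Artifactory setup):

```python
import os
import tempfile

# Sketch: write a yum .repo file pointing at an Artifactory-hosted
# repository. On a real guest this would land in /etc/yum.repos.d/.
repo = """[artifactory-yum]
name=Artifactory YUM repository
baseurl=http://artifactory.lab.local:8081/artifactory/yum-local/
enabled=1
gpgcheck=0
"""
path = os.path.join(tempfile.gettempdir(), 'artifactory.repo')
with open(path, 'w') as f:
    f.write(repo)
print('wrote', path)
```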

I could then easily run some tests and, based on the results, set a gating rule (or an approval policy) and run the same steps in the production environment.

Since this post is meant to showcase other use cases of vRCS, by this point you should have a general understanding of the product and how to use it, and you can utilize it further as your requirements dictate.

Interested? Watch this 28-minute video.

 

Suggested Reads:

Greg Kullberg has an amazing series of sessions on vRealize Code Stream. I strongly suggest you watch them, especially the following:

Also, as I always say, your best friend is VMware documentation for vRCS.

Conclusion:

The idea behind this post was to showcase a slightly different use case of vRCS and, in the process, explain what it is. Through the implementation of this use case, you should get a clear idea of how to utilize vRealize Code Stream not only for DevOps environments but also for automating your daily work.

I hope this post helps reduce a few of the daily challenges faced in any datacenter.

As always, do give me your feedback; it is how I know whether I am writing about useful topics. Happy reading and watching until the next post.

The post Automating Infrastructure with vRealize Code Stream (vRCS) and Artifactory appeared first on VMware Cloud Management.

Redesigning the VMware AirWatch Privacy Policy



VMware AirWatch can be installed on corporate and personal devices to access company resources such as email, documents and productivity apps. When a company tells employees to install AirWatch on their phones to access company email, they might ask, “What can my manager see? Will my company know which apps I have on my phone? Can my company see my pictures or my browsing history?”

Our user experience (UX) team set out to address the issue of how end users perceive tools like AirWatch being installed on their devices, particularly personal devices. We realized much of the stigma associated with installing AirWatch came from a lack of knowledge about what AirWatch actually does.

Our goals were simple. The first one was to maintain transparency at all levels, since AirWatch forms a conduit between employers and employees. Our second goal was to educate both employers and employees on best practices to maintain employee privacy.


Privacy policies are usually text-based, full of legal jargon. They are difficult to quickly scan through and parse for information most relevant to you. However, we did not want to replace the existing privacy policy. We were designing for the enterprise, where legal documents are necessary and irreplaceable.

What we wanted to do is design a new, visual AirWatch privacy policy that would:

  • Be easy to read and understand.
  • Be interactive and engaging for end users.
  • Answer common questions before they were asked.
  • Be customizable to the user reading it, since every company and every device type could have different data collected or monitored based on their configurations.

Here’s what we asked end users and what we found out about privacy.

How Do Users Feel About Device Management?

We started by asking how people felt about installing AirWatch on their phone. We received answers such as “To get what? Email and data and content? I would love that!” The high benefit of installing AirWatch to access company resources was never questioned.

However, we also heard things such as, “I would be anxious. I would also be paranoid about what data is being collected.” We also heard, “How does it affect my device…confidential and personal things sent to me being put out there. Who gets to see it? I think AirWatch is geared to look out for the company, not me.”

Based on these conversations, we sketched our initial concepts. We wanted to start with a positive message and visuals inviting the users to interact with the privacy policy. We decided to focus on two main things: 1. What data AirWatch could collect (including Work Apps, Device Details, etc.) and 2. What AirWatch could do with the device (such as sending notifications and resetting passwords).


Based on initial conversations with customers, we conceptualized a new, visual AirWatch privacy policy that would be easier to read and more engaging.

What Do Users Care About Most in a Privacy Policy?


Next, we conducted design reviews with AirWatch employees. We found that it was better to chunk information together. What AirWatch collects depends on a company’s policies and administrator, so some companies may collect information such as GPS location while others may not. We found that if a company does not collect a type of information, it was better not to show it at all.

We also found that most users are only interested in the “Big Four”: browsing history, text messages, pictures and personal email. It was very important to inform users that AirWatch could not monitor any of these.

Keeping these considerations in mind, we created some high-fidelity prototypes. During this phase we also worked on the visual language to keep the tone and feel as friendly, open and approachable as possible.


These prototypes show the new, visual AirWatch privacy policy in action.

Is the Privacy Policy Transparent Enough?

Once we completed our prototype, we conducted some A/B tests with AirWatch employees to evaluate two variations. The major finding from this test was that notifying users that AirWatch does not collect the Big Four was imperative.

Although we explicitly showed users what data is collected, doubts about whether pictures or personal messages are collected were not assuaged. We decided to go with version B where we included a “What we cannot see” section.


To the right, users can see what data AirWatch does collect. To the left, users are reminded what data AirWatch never collects.

When Do Users Want to Read a Privacy Policy?

We then went out in the field to conduct tests with people in cafes with the aim of determining when users would like to see this information in the overall enrollment process. Participants went through the entire process with us, from being informed that they need to install AirWatch on their phones through an email from their company to installing the AirWatch agent. We noted their questions and concerns at each stage, and we found that concerns usually arose right at the beginning of the process.

So in our final implementation, employees can see their customized privacy policy online when they receive that first email from their companies. Questions need to be addressed once they enroll, too, so a web clip with the privacy policy is installed on the device.

Designing the AirWatch privacy policy has been challenging, but it’s fun and rewarding to address the end users of an enterprise service. I look forward to improving the product over the next few months to make AirWatch simple for the consumer!

Read more:

The post Redesigning the VMware AirWatch Privacy Policy appeared first on VMware End-User Computing Blog.

Getting Started with PowerCLI for Horizon View


PowerCLI 6.5 introduced a brand new, completely re-written, module for Horizon View that is leaps and bounds better than the prior release. As an automation fanatic and former View administrator, PowerCLI’s offering for Horizon View has always been an important part of my toolbox.

Graeme Gordon, a Senior EUC Architect on our EUC Technical Marketing team, has created a terrific video on how to get started using this new module that we recommend checking out.

Watch the Video

Also, here’s a direct link to the blog post Graeme references in the video: Getting Started with PowerCLI 6.5 for Horizon View

The post Getting Started with PowerCLI for Horizon View appeared first on VMware PowerCLI Blog.

The Self-Healing Data Center (Part 5) – Configure an Alert to Trigger the vRO Workflow


Now that we have configured the Webhook Shim and vRealize Orchestrator, the next task before testing is to configure vRealize Operations to send an alert to the shim.  This will be the final post in this series, and when you complete it, you will be ready to apply this solution to self-heal any alerts you desire by simply creating the appropriate workflow and alert settings.

I am using vRealize Operations version 6.4 in the steps below, but this should work with any 6.0 or higher version.  I am also using Endpoint Operations to monitor the state of a service on a Linux OS.  Endpoint Operations is not required to use this shim; I am only using it because it provides an easy way to trigger an alert by stopping a service on a monitored OS.  It also shows that any automated remediation or activity is possible, not just automation of virtual infrastructure.

Monitor a Service with Endpoint Operations

In this case, I’ll assume you have a Linux OS with the Endpoint Operations agent installed.  If you need information, see the documentation.  In this example, we will monitor the VMtools service on the OS.  To set this up, browse to the OS resource in vRealize Operations and select Actions > Monitor OS Object > Monitor Process.

The Monitor Process form appears, and you just need to add the display name and process.query values.  I also open Advanced Settings and set the Collection Interval to 1 minute; this is optional, and I only do it here to speed up testing.  Otherwise you’ll wait the normal 5-minute collection interval for the alert to fire.

Now you have a new resource, named vmtoolsd.

Create REST Alert Notification and Alert

Now that we have an object to monitor, we can alert on it.  First, we need to set up the REST notification plugin instance so that the alert can be routed to the shim (and then to Orchestrator).  Browse to Administration > Outbound Settings and click the green plus sign to add a notification instance.  You need two bits of information here.  In the second post of the series we installed the Webhook Shim – we need the IP address of the host running the shim.  Also, in the fourth post, we added a workflow to Orchestrator that will be used to restart a service on a VM and we need the ID of that workflow.  If you imported the example workflow, it should be a0b8b820-b8c9-454e-b128-c4eabb9d0015 but double-check this in Orchestrator.

Plugin Type = Rest Notification Plugin
Instance Name = Orchestrator Shim Hello World (or whatever you wish)
Url = http://{IP or hostname of your shim}:5001/endpoint/vro/{ID of workflow to run}
User Name = leave blank
Password = leave blank
Content Type = application/json
Certificate thumbprint = leave blank
Connection count = leave at default (20)
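The Url field is just three pieces glued together; a tiny helper makes the pattern explicit. The host below is a placeholder, and the workflow ID is the example one from the imported package:

```python
# Build the shim endpoint URL from its three pieces:
# shim host, shim port (5001 by default) and the vRO workflow ID.
def shim_url(shim_host, workflow_id, port=5001):
    return 'http://{}:{}/endpoint/vro/{}'.format(shim_host, port, workflow_id)

print(shim_url('10.0.0.10', 'a0b8b820-b8c9-454e-b128-c4eabb9d0015'))
```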

Click the save button.  If you try the Test button, it will fail, but the test does not work correctly with the Webhook Shim, so do not worry about that.

Now we can create an alert to trigger when a monitored service is not running.  Browse to Content > Alert Definitions and click the green plus sign to add an alert definition.  Use the information that follows in the appropriate sections within the alert definition.

Section 1. Name and Description
  Name = Whatever you wish
  Description = Whatever you wish


Section 2. Base Object Type
  Base Object Type = MultiProcess (just type MultiProcess into the form and it will find the type)
Section 3. Alert Impact (these are suggestions)
  Impact = Risk
  Criticality = Symptom Based
  Alert Type and Subtype = Virtualization/Hypervisor : Compliance
  Wait Cycle = 1 
  Cancel Cycle = 1
Section 4. Add Symptom Definitions
  Defined On = Self
  Symptom Definition Type = Metric / Property
  Symptom = use the filter to find "Not running" and drag this over to the Symptom work area
Section 5. Add Recommendation
  Do nothing.

Save the alert definition.  The last thing we need to do is create a notification rule.  Click on Notifications and then the green plus icon to create a new notification rule.  Use the following information.

Name = Whatever you wish
Method = Rest Notification Plugin > Orchestrator Shim Hello World (or whatever you named this)
Scope = Object : vmtoolsd
Notification Trigger = Alert Definition : (whatever you named the alert definition)
Advanced Filters > Alert Status = New (we only want this to trigger once)

 

Save the rule.  We need to add the rule to a policy so that it will be enforced.  I’m using the default policy, which is the effective policy for my Linux OS monitored resource.

Testing the Automated Remediation

Now, we can trigger the alert by stopping the service on the monitored OS.  Below we can see that the vmtools service is running – let’s stop it.

Watching the vRealize Operations UI, we refresh for a few minutes until we see the alert trigger.

 

The alert triggered.  Looking at the shim, we can see that the alert was received, translated, and forwarded to Orchestrator successfully.

Looking at Orchestrator, we can see that the workflow did indeed run and was successful at starting the service.

If this wasn’t successful for you, check the workflow logs (shown above) and note the reason.  It is likely an authentication failure for the SSH command; you will need to edit the workflow and set the SSH username and password attributes to an account that has permission to SSH in and start a service on the target OS.

Let’s check back on the OS to make sure VMtools is running again.

Looks good – what about in vR Ops?  Has the alert cleared itself?

We can see that it did.

Summary

Congratulations!  If you followed each post in this series, you now have the tools to automate practically any alert response using vRealize Orchestrator and the Webhook Shims.  There are many other use cases and examples for this:

 

  • Automatically fix vSphere hardening guide alerts
  • Open service desk tickets
  • Update the CMDB when configuration values change
  • In addition to restarting a service, notify someone if the service restart failed

 

I would like to hear about your own use cases – share them here in the comments or on Twitter!

The post The Self-Healing Data Center (Part 5) – Configure an Alert to Trigger the vRO Workflow appeared first on VMware Cloud Management.

Super Metrics in Action: Use Case with NetApp


By: Alain Geenrits, Blue Medora

 

In my last post, I covered super metrics in detail, from creating them to attaching them to an object, and from what makes them so great to why you should use them. To follow up, this blog post features a common NetApp use case: tracking the maximum read latency (ms) on a NetApp volume. In this case, we want to attach the super metric to the NetApp system, so that we can easily track the overall max latency across all volumes on a system (a rollup metric).

The first step is to create the formula (Figure 1). Note that we selected the adapter type (NetApp adapter), then NetApp volume as the ‘Object Type’, and then ‘Read latency (ms)’ as the attribute type. The formula is displayed at the top for reference.

Figure 1: Creating a super metric

 

We now create the formula by typing ‘max()’ into the field and clicking the details icon to visualize it (Figure 2). Note that we can also get a more readable format of the formula by clicking the blue arrow. By searching for a name in the Objects field on the left, or choosing systems on the right, we can click a system to visualize the formula in real time. Note that we set depth=2.

Figure 2: Testing the formula live
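What max() with depth=2 computes can be mimicked in plain code: the maximum of a child metric across all volumes two levels below the NetApp system. A sketch with made-up latency values:

```python
# Illustration of the rollup: max read latency (ms) across the volumes
# (child objects) of one NetApp system (parent). Data is made up.
volume_read_latency_ms = {'vol1': 3.2, 'vol2': 7.8, 'vol3': 5.1}

rollup = max(volume_read_latency_ms.values())
print(rollup)
```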

 

 

Now that we know the formula works, we hit ‘Save’ and, in the Super Metric menu, choose to associate the new super metric with an object type at the bottom of the screen.

Figure 3: Attaching the super metric to an object type

 

 

Once this is done, we need to edit the policy to activate the super metric. Here we use the standard policy, so we activate the super metric in the ‘Collect Metrics and Properties’ pane.

Figure 4: Enabling the metric in a policy

 

And now… we wait for a collection cycle! After a while the super metric will show up in your 'All Metrics' overview. See Figure 5:

Figure 5: Super metric displayed in the object

 

You can now use it everywhere! Dashboards, reports… the list goes on!

A tip: with rollup metrics it is important to know the relationships (the hierarchy) of objects. You can view these easily in the 'Object' > 'All Metrics' section, but it is even easier to browse them in the 'Object Relations' section of 'Administration'. See Figure 6:


Figure 6: Navigating object relations

 

I hope this will give you some guidance in creating your own super metrics for your environment. How about a metric to track the maximum number of transactions per second in a Microsoft SQL Server database, per SQL Server? See Figure 7:


Figure 7: Tracking max transactions per second in SQL Server

 

You could also track the average throughput on some specific ports of your Cisco Catalyst switch, the read/write ratio on your Dell EMC Vplex storage and so on. With the management packs from Blue Medora, you can easily monitor your complete stack and leverage all that super metrics have to offer.  

 


The post Super Metrics in Action: Use Case with NetApp appeared first on VMware Cloud Management.

10 Top Moments from EUC Insights 2017 — Americas


Thank you to everyone who joined our popular online event, EUC Insights 2017—Americas! It was one of our most successful virtual conferences yet. And the best news? You can watch the product sessions, dive into best practices and take our Hands-On Labs at any time on demand! Just click here to get started.

While the live EUC Insights 2017 event is over in the Americas, the APJ and Europe events are coming soon! Click one of these links to register for:

It was incredibly difficult to hone the amazing event down to a handful of favorites, but here are my 10 top moments from EUC Insights 2017 Americas.

1. AutoDesk’s Innovative Story of Digital Transformation


AutoDesk’s story is so powerful that my words could not do it justice. So, here are my favorite quotes from Prakash Kota, AutoDesk VP of IT, during his keynote interview with our very own Sumit Dhawan.

“We have been transforming from being a desktop-based software company to a services company. … As an IT organization, while our business and product teams are going through this transformation, we’re looking at how can we empower our global workforce to be more productive.” —Prakash Kota, VP of IT, AutoDesk

“One of things you have done really well at VMware is to really understand the user needs and to have really simple user management overall, as well as solutions across broad devices.” —Prakash Kota, VP of IT, AutoDesk

VMware Workspace ONE “is a successful marriage of the tools you have that really delivers the services we need today and gives us a path for the solutions we need for the future.” —Prakash Kota, VP of IT, AutoDesk

Watch Sumit’s entire keynote and dive into AutoDesk’s innovative digital transformation journey on-demand here.

2. VMware’s Vision & What’s New with Horizon

Shankar Iyer exclaimed, “The time is now!” in his educational and inspiring session. Shankar dove into what he believes are the three key barriers to desktop transformation:

  1. Virtual desktops and apps are expensive to design and deliver.
  2. Applications take time to deliver, manage and update.
  3. User persona and policies are complex to set up and modify.

Watch Shankar’s on-demand session, listed under the application and desktop virtualization track, to see how three new Horizon innovations help you turn these barriers into opportunities.

3. Shawn Bass Live Q&A

 

4. Consumerization Drives Digital Transformation

“We all use more technology in our personal lives as consumers than we do in our work lives. … We don’t think of apps as individual sets of features. We think about, ‘If I need to get a task done and it requires five different sets of applications, it’s just a seamless workflow across those applications.’”

Hemant Sahani, VMware director of product management, hosted a very powerful session on the importance of enabling workflows within the modern mobile workforce. What does this mean? It means making it easier for workers to get their jobs done—faster—anytime, anywhere.

Watch Hemant’s session, entitled From Onboarding to Next-Gen Mobile Productivity in Three Simple Steps, to see how you can empower smarter workflows and happier, more productive workers.

5. Today’s Cyber Threat Revolution

“In the past, protecting our perimeter using strong firewall policies was okay. Now with Windows 10 being more like a mobile operating system and your entire mobile workforce using iOS and Android, perimeter-based security is no longer good enough.”


Josue Negron, senior solutions architect, delivered a massive wake-up call about security threats in our modern heterogeneous computing environment. The good news for us all: a groundbreaking new end-to-end security approach that takes the fight to the hacker.

Watch his eye-opening session, Today’s Cyber Threat Revolution, under the endpoint security track.

6. Horizon Cloud: What’s New & Best Practices for Getting Started

ICYMI, the VMware Horizon team launched an exciting new phase of the Horizon platform’s innovative journey. Namely, the new Horizon Cloud service.

Simon LeComte, senior manager of the VMware EUC Cloud Services Architecture Team, takes you through all the new feature offerings and capabilities with Horizon Cloud. To ensure your success from day one, Simon also shares best practices and considerations for desktop and app delivery, networking architecture and user profile manager.

Watch his session on demand in the application and desktop virtualization track.

7. Take the Guesswork Out of Windows 10 Migrations

Did you know that 96% of Microsoft’s enterprise and education customers are in active Windows 10 pilots?

As you can tell, Windows 10 was a massive theme for EUC Insights 2017. The capabilities are exciting and the experience for end users promising. But for IT admins, migration and management are incredibly daunting tasks—especially in an increasingly diverse device and OS environment.

If you are getting ready to tackle a pilot or full-scale migration to Windows 10, check out this great session from VMware product marketing manager Justin Grimsley, Take the Guesswork Out of Windows 10 Migrations. He explores the entire Windows 10 journey.

8. The Wearables Are Coming

In fact, wearables and other devices within the Internet of Things (IoT) are likely already inside your organization. The question now is: how does it impact you and your company?

“If you think about the consumerization of IoT over the past five years and the impact of bring your own device (BYOD) on the enterprise and scale that up 10 fold, you begin to understand the implications of IoT on mobile device management.” —Jessie Stoks, VMware product marketing manager

Dive into the ways IoT and connected tech are evolving in Jessie's session, Prepare Your Organization for Enterprise Wearables and Things. Perhaps even more importantly, you will learn how unified endpoint management helps IT stop chasing these devices and start controlling them to drive innovation, better customer care and worker productivity.

9. Industry Solutions: Something for Everyone

VMware’s industry solution team brought their A-game, delivering four unique industry sessions:

  1. How VMware Horizon Transforms Agency Desktop Performance, Security and Management
  2. What’s New in the Digital Clinical Workspace
  3. Powering the Next Retail IT Generation One Customer at a Time
  4. Higher Education Game Changer: Ellucian Integration Makes Digital Work and Learning Spaces Dynamic

From enabling students with anywhere learning to wowing customers with amazing experiences, you won’t want to miss these sessions.

10. Hands-On Labs

It wouldn’t be EUC Insights without some amazing Hands-On Labs!

EUC Insights offers five unique product labs:

  1. Intro to VMware AirWatch
  2. AirWatch Mobile App Management & App Development
  3. Intro to Horizon 7
  4. Horizon 7 Application Delivery
  5. Workspace ONE, Single Sign-On (SSO) & VMware Identity

What was your favorite session? Want to see something new next year? Share your feedback with us in the comments, and don't forget to take the survey in the virtual conference itself.

The post 10 Top Moments from EUC Insights 2017 — Americas appeared first on VMware End-User Computing Blog.


What’s New in Horizon Cloud – February 2017


With contributions from Justin Venezia and Jerrid Cunniff.

The recently released  Horizon Cloud Service added several significant new features that enable customers to deploy desktops and applications in the cloud, on-premises, or in a combination of the two.   This release focused on integrations with Workspace ONE and delivering the technologies that support VMware’s Just-in-Time Management Platform (JMP).

The features mentioned below cover the cloud-based implementation called Horizon Cloud with Hosted Infrastructure.

Instant Clones

Horizon Cloud with Hosted Infrastructure customers can now leverage Instant Clones for their users' cloud-based desktops. Instant Clones allow for faster deployment and use less storage than traditional full clones. Instant Clones, when used with User Environment Manager and native applications supplied by App Volumes technology, make up the Just-in-Time Management Platform, which helps you deliver a stateless Windows desktop experience to your users.

For more detail on Instant Clones see Instant Clone View Desktops in Horizon 7.

Workspace ONE Integration

With this release, we are supporting an integration to VMware Workspace ONE.  Workspace ONE is an Identity as a Service (IDaaS) offering that provides application provisioning, self-service catalog, and conditional access controls to your users.  It allows your users to access SaaS, web, native mobile and virtualized applications through a single user interface.  For more information on VMware Workspace ONE, please see the product page.

User Environment Manager

Customers can also leverage the standalone version of User Environment Manager as a part of their Horizon Cloud deployment to manage your users’ profile personalization and dynamic policy configuration across Horizon Cloud’s Windows desktop environment. User Environment Manager is installed as a separate Utility Virtual Server inside the Horizon Cloud Tenant and leverages a network file share to store user settings and system configurations.

Customers will only need to purchase additional storage capacity in Horizon Cloud based on usage and demand. For more information, see the User Environment Manager product page. For guidance on how to leverage User Environment Manager in your Horizon Cloud environment, consult with your VMware sales team or partner.

Horizon Agent 7.0.3

Many of the features in the latest version of the Horizon Client and Agent are now available in Horizon Cloud, including updated Windows 10 support, multi-monitor with Blast Extreme and RTAV enhancements.  For a list of features that are supported with this release, see VMware KB2148647.

Notification Page

A new notification page is now available in the UI.  This page is your one-stop place to manage all events and alerts from your Horizon Cloud instance.


Demo-Administrator Role

We have added the ability to have read-only administrator access to the Horizon Cloud UI.  This is a useful feature for support, training and demonstration purposes as it gives a user full access to all screens in the UI without giving them the ability to make any changes to the environment.

App Volumes Integration (Beta)

Horizon Cloud now supports containerized application delivery using  App Volumes technology.  Using a separately downloadable app capture application, you can create containerized application bundles which are then imported and delivered to Horizon Cloud Virtual Desktop Infrastructure (“VDI”) desktops. Windows 7 and Windows 10 applications are supported by App Volumes.  You can now separate the management of applications from the OS image, allowing for greater flexibility in how applications are delivered to users.  This feature also saves on storage costs (per VM) since a single bundle of applications or an AppStack, can be shared among many users.

Add-On storage is required to use AppStacks.  If you are a customer of Horizon Cloud and would like to use App Volumes, please consult with your VMware sales team or partner to be considered for the private beta.

For an overview of how App Volumes works, see App Volumes AppStacks vs. Writable Volumes by Dale Carter.


Horizon Cloud Monitoring (Beta)

Horizon Cloud Monitoring is a new feature built to provide customers a dashboard to monitor, troubleshoot and manage their Horizon Cloud environment.  It leverages the same user-interface in the Horizon Cloud Administration Console.  This feature is offered on an invitation-only basis and is used to capture and display guest VM performance and usage statistics.

Contact your VMware sales team or partner to see if you are eligible for use of this feature within Horizon Cloud.


New Capacity Service Model

With Horizon Cloud, we introduced capacity units to allow customers flexibility in the types of desktops that are deployed in their environment. With this enhancement, you can change your desktop build configuration based on user need instead of being locked into a “one size fits all” model. Horizon Cloud offers different capacity measurements for Standard and Workstation class desktops. You can find details on the Capacity Service Model on the Horizon Cloud product page.

vGPU (Beta)

VMware has been working with NVIDIA to deliver vGPU-accelerated desktops through Horizon for some time, and with this release of Horizon Cloud, we are providing limited access to some customers for the new Horizon Cloud Graphical Workstations. This feature allows you to use high-performance 3D applications from a cloud-based virtual desktop. Customers can purchase Graphical Workstation Capacity units (GWC) for the provisioning of desktop VMs with a virtual graphics processor (vGPU). Each GWC unit contains 4 vCPU, 16 GB vRAM, a 120 GB hard disk, 2 GB vGPU memory, and 80 storage IOPS.


All Graphical Workstation desktop models operate in a similar fashion to Standard desktop models, as described in the Horizon Cloud Service Description.


These new features make it easier for you to deliver cloud-based virtual desktops and applications to your users.  Find out more details by reviewing the Horizon Cloud Service information on our website, or reach out to your VMware or VMware Partner EUC technical specialist.

 

The post What’s New in Horizon Cloud – February 2017 appeared first on VMware End-User Computing Blog.

vRealize Automation 7.2 Detailed Implementation Video Guide


[this post originally posted at virtualjad.com]

Welcome to the vRealize Automation 7.2 Detailed Implementation VIDEO Guide. This is a collection of all the videos making up the full vRealize Automation 7.2 Detailed Implementation Guide.

The guide (and these videos) was put together to help you deploy and configure a highly-available, production-worthy vRealize Automation 7.2 distributed environment, complete with SDDC integration (e.g. VSAN, NSX), extensibility examples and ecosystem integrations. The design assumes VMware NSX will provide the load balancing capabilities and includes details on deploying and configuring NSX from scratch to deliver these capabilities.

Be sure to refer back to the full guide for detailed configuration steps or more info on any given topic.

01, Introduction

High-Level Overview

  • Production deployments of vRealize Automation (vRA) should be configured for high availability (HA)
  • The vRA Deployment Wizard supports Minimal (staging / POC) and Enterprise (distributed / HA) for production-ready deployments, per the Reference Architecture
  • Enterprise deployments require external load balancing services to support high availability and load distribution for several vRA services
  • VMware validates (and documents) distributed deployments with F5 and NSX load balancers
  • This document provides a sample configuration of a vRealize Automation 7.2 Distributed HA Deployment Architecture using VMware NSX for load balancing

Implementation Overview

To set the stage, here's a high-level view of the vRA nodes that will be deployed in this exercise. While a vRA POC can typically be done with 2 nodes (vRA VA + IaaS node on Windows), a distributed deployment can scale to anywhere from 4 (min) to a dozen or more components, depending on the expected scale, which is primarily driven by user access and concurrent operations. We will be deploying six (6) nodes in total – two (2) vRA appliances and four (4) Windows machines to support vRA's IaaS services. This is equivalent to somewhere between a “small” and “medium” enterprise deployment, and it's a good starting point that balances scale and supportability.

 

02, Deploy and Configure NSX

We will be leveraging VMware NSX in this implementation to provide the load balancing services for the vRA deployment as well as integrating into vRA for application-centric network and security. Before any of this is possible, we must deploy NSX to the vSphere cluster, prepare the hosts, and configure logical network services. The guide assumes the use of NSX for these services, but this is NOT a requirement. A distributed installation of vRA can be accomplished with most load balancers. VMware certifies NSX, F5, and NetScaler.

(You can skip this section if you do not plan on using NSX in your environment)

 

03, Deploy vRA Virtual Appliances

The vRA virtual appliance (OVA) is downloaded from vmware.com and deployed to a vSphere environment. In a distributed deployment, you will deploy both primary and secondary nodes ahead of kicking off the deployment wizard.

The VA also includes the latest IaaS installers, including the required management agent (that will be covered in the next section).

 

04, Prepare IaaS Hosts

vRA’s IaaS engine is a .net-based application that is installed on a number of dedicated Windows machines. In the old days, the IaaS components were manually installed, configured and registered with the vRA appliance(s). This included manual installation of many prerequisites. The effort was quite tedious and error-prone, especially in a large distributed environment.

In vRA 7.0 and higher, the installation and configuration of system prerequisites and IaaS components has been fully automated by the Deployment Wizard. But prior to kicking off the wizard, the vRA Management Agent needs to be installed on each IaaS host. Once installed, the host is registered with the primary virtual appliance and made available for IaaS installation during the deployment. While the Deployment Wizard will automatically push most of the prerequisites (after a prerequisite check), you have the option to install any or all of the prereqs ahead of time. However, the wizard’s success rate has improved greatly and is the preferred method for most environments.

 

05, Deployment Wizard

The Deployment Wizard is invoked by logging into the primary VA’s Virtual Appliance Management Interface (VAMI) using the configured root account. Once logged in, the admin is immediately presented with the new Deployment Wizard UI. The wizard will provide a choice of a minimal (POC, small) or enterprise (HA, distributed) deployment then, based on the desired deployment type, will walk you through a series of configuration details needed for the various working parts of vRA, including all the windows-based IaaS components and dependencies. For HA deployments, all the core components are automatically clustered and made highly-available based on these inputs.

In both Minimal and Enterprise deployments, the IaaS components (Manager Service, Web Service, DEMs, and Agents) are automatically pushed to available windows IaaS servers made available to the installer thanks to the management agent.

 

06.1, NSX Load Balancer Configuration

Next we’ll be configuring load balancing and high availability policies for the distributed components. An NSX Edge Service Gateway (ESG) will be providing the load balancing and availability services to vRA as an infrastructure service. vRA supports In-Line and One-Arm load balancing policies. This implementation will be based on an In-Line configuration, where the vRA nodes and the load balancer VIPs are on the same subnet.

(If you do not plan on using NSX for HA services, you can skip this configuration)

 

07, Initial Tenant Configuration

Next up, we’ll be completing the initial tenant configuration. This involves configuring an authentication directory in the default tenant (vsphere.local), syncing with the domain, and getting some accounts added to various vRA roles.

vIDM is policy-driven and adds a significant amount of capability over the IDVA. vRA 7 customers will gain many of the OOTB capabilities of the stand-alone vIDM product and be able to configure and manage these features directly within the vRA UI. For anyone who has used vIDM as a stand-alone solution or as part of another product (e.g. Horizon Workspace), configuring vIDM will be just as straightforward. But even if you've never configured it before, it is intuitive and walks you through the logical steps of setting up auth sources and advanced policies…

For Active Directory integration, vIDM Directories are configured to sync with one or more domains. AD can be configured as the exclusive provider, a backup (e.g. when 2FA fails), or as part of a more complex authentication policy. Several AD-specific policies are available to fit most use cases. vRA itself does not query AD directly. Instead, only the vIDM Connector communicates with the configured AD providers and performs a database sync (AD -> Local vPostgresDB) based on the configured sync policy. In addition to AD, vRA 7.1 added support for LDAP auth stores.

 

08, IaaS Fabric Configuration

The IaaS Fabric is made up of all the infrastructure components that are configured to provide aggregate resources to provisioned machines and applications. vRA’s IaaS Fabric is made up of several logical constructs that are configured to identify and collect private and public cloud resources (Endpoints), aggregate those resources into manageable segments (Fabric Groups), and sub-allocate hybrid resources (Reservations) to the consumers (Business Groups).

 

09, Creating IaaS Blueprints

A Blueprint is a logical definition of a given application or service and must be created prior to publishing that service in the service catalog. That includes all traditional IaaS (Windows / Linux / multi-tier apps), containerized applications, and XaaS (anything as a service). An IaaS blueprint also defines the resource configuration logic for the included service(s), including CPU, memory, storage, and network resource allocations for a given machine component, and defines the workflow that will be used to provision the machine(s) at request time, depending on the desired outcome.

The Converged Blueprint (CBP) Designer is a single, converged designer for all blueprint authoring. Blueprints are now built on a dynamic drag-n-drop design canvas, allowing admins to choose any supported components, drag them on to the canvas, build dependencies, and publish the finished product to the catalog. Components include machine shells for all the supported platforms, software components, endpoint networks, NSX-provided networks, XaaS components, and even other blueprints that have already been published (yes, nested blueprints). Once dragged over, the admin can build the necessary logic and any needed integration for each component of that particular service.

In this module, we will be creating a couple of example vSphere Blueprints — 1 x Windows 2012 R2, 1 x CentOS 6.7 — and preparing them to be published in the catalog (next section). Later, we'll be adding additional configurations to each of the blueprints for more advanced use cases.

 

10, Catalog Management

Once the blueprints have been created and published, you make them available for consumption in the unified Catalog. The Catalog is the self-service component of vRA, which provides any number of services to consumers. But before that can happen, you must determine which users or groups (e.g. Business Group users) will have access to each catalog item. vRA uses a rich set of policies to provide granularity that ensures services are only available to users that are specifically entitled to that particular service (or action).

Catalog Management consists of creating Services (e.g. categories), assigning published catalog items to a Service, and entitling one or more Business Groups users to the item(s).

 

11, Approval Policies

Approval policies are optionally created to add governance and additional controls to any and all services. vRA provides a significant amount of granularity for triggering approval policies based on the catalog item, service type, component configuration, lifecycle state, or even based on the existence of a particular item. Once created, active approval policies are applied to Services, individual Catalog Items, and/or Actions in the Entitlements section.

Approval Policies can be triggered at request time (PRE) or just prior to delivering the service to the consumer (POST)…or a combination of the two. For example, manager’s approval can be required at request time (before provisioning begins) and another approval can be required for final inspection prior to making the service available to the requesting consumer. For traditional IaaS machines, a policy can also include options that allow the approver to modify the request prior to approving (e.g. mem, cpu configuration). At provisioning time, the approver is notified of the pending request. Once approved, the request moves forward. If it is rejected, the request is canceled and the user is notified of the rejection.

In this section, we will create three approval policies — one that is triggered based on configurations (cpu count), one that requires a Business Group manager’s approval and one that is triggered when a particular day-2 action is invoked.
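To illustrate the condition-based triggering described above, here is a minimal Python sketch of how such pre-approval checks could be evaluated. The request fields and threshold are hypothetical; in vRA the actual policies are configured in the UI, not in code.

```python
# Sketch: decide whether a catalog request needs pre-approval, based on its
# configuration or the day-2 action invoked (hypothetical request fields).

def needs_pre_approval(request, cpu_threshold=2, governed_actions=("destroy",)):
    """Trigger approval when CPU count exceeds a threshold or when a
    governed day-2 action is invoked."""
    if request.get("cpu", 0) > cpu_threshold:
        return True
    return request.get("action") in governed_actions

print(needs_pre_approval({"cpu": 4}))                       # True  (cpu > 2)
print(needs_pre_approval({"cpu": 2}))                       # False
print(needs_pre_approval({"cpu": 1, "action": "destroy"}))  # True  (governed action)
```

The three cases mirror the three policies created in this section: a configuration trigger (CPU count), a default manager approval, and a day-2 action trigger.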

 

12.1, Extensibility Basics

It’s really difficult to summarize vRA’s extensibility in one or two paragraphs, but i’ll give it a shot. Extensibility refers to any configuration or customization that modifies vRA’s default behavior. This can include customizing the user experience at request time (e.g. adding enhanced configuration options, requiring specific inputs, etc), incorporating ecosystem tools and binding them to a machine’s lifecycle (e.g. load balancers, CMDB/ITSM tools, IPAM, Active Directory, configuration management, and so on).

vRA’s vast extensibility capabilities can be as basic or as complex as required. But ultimately they are designed to ensure vRA is plugged in to the broader ecosystem of tools and services based on the business needs. Many lifecycle extensibility services are configured and managed within vRA’s UI (e.g. Property Dictionary, Custom Properties, AD Integration, Event Broker, and XaaS. But one of the most important components of Extensibility is vRealize Orchestrator (vRO), which can be consumed within vRA but managed in it’s own UI (vRO control center, vRO client).

In this module we’ll be getting our feet wet with Extensibility. I’ll provide an overview of vRA’s extensibility tools and usage and an introduction to the Property Dictionary, Custom Properties, and vRealize Orchestrator. I’ll introduce and put to use the Event Broker and XaaS — two critical pieces to vRA’s extensibility — in later modules.

 

12.2, Simple Extensibility Use Cases

Now that you have a general understanding of vRA's extensibility capabilities, let's put some of that knowledge to use. In this module we'll be leveraging extensibility for some basic use cases. We'll use Custom Properties to control vCenter folder placement of provisioned machines, create a Property Definition to provide resource placement options (via a drop-down) at request time, and create Active Directory policies for each Business Group to define where we want machine objects placed in Active Directory.
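As a concrete example of the folder-placement case, vRA exposes the VMware.VirtualCenter.Folder custom property; setting it on a blueprint (or reservation) directs provisioned machines into the given vCenter folder. The folder path below is illustrative only:

```text
# vRA custom property (example value; folder path is illustrative)
VMware.VirtualCenter.Folder = Tenants/Engineering/Provisioned
```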

These are just the basics to get your feet wet. Extensibility will play a big part in many more modules later.

 


That’s it for now! Now that we’ve got the basics out of the way, the next set of videos will dive into more advanced topics, such as software authoring, container management, SDDC integration (VSAN, NSX), and several advanced extensibility use cases.

Please provide feedback (here… not on YouTube) if these videos have been helpful, and feel free to start a conversation.

+++++
@virtualjad

The post vRealize Automation 7.2 Detailed Implementation Video Guide appeared first on VMware Cloud Management.

New Fling released – IOInsight


By Sankaran Sivathanu

VMware IOInsight is a tool to help people understand a VM’s storage I/O behavior. By understanding their VM’s I/O characteristics, customers can make better decisions about storage capacity planning and performance tuning. IOInsight ships as a virtual appliance that can be deployed in any vSphere environment and includes an intuitive web-based UI that allows users to choose VMDKs to monitor and view results.

Where does IOInsight help?

  • Customers may better tune and size their storage.
  • When contacting VMware Support for any vSphere storage issues, including a report from IOInsight can help VMware Support better understand the issues and can potentially lead to faster resolutions.
  • VMware Engineering can optimize products with a better understanding of various customers’ application behavior.

IOInsight captures I/O traces from ESXi and generates various aggregated metrics that represent the I/O behavior. The IOInsight report contains only these aggregated metrics and there is no sensitive information about the application itself. In addition to the built-in metrics computed by IOInsight, users can also write new analyzer plugins to IOInsight and visualize the results. A comprehensive SDK and development guide is included in the download bundle.
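To give a feel for the kind of aggregated, non-sensitive metrics such a tool produces, here is a small Python sketch that reduces a raw I/O trace to summary statistics. The trace format and function names are hypothetical illustrations, not IOInsight's actual plugin SDK (which is documented in the download bundle).

```python
from collections import Counter

# Sketch: aggregate a raw I/O trace into summary metrics, similar in spirit
# to what an analyzer plugin might compute (hypothetical record format).

def summarize(trace):
    """trace: list of (op, size_bytes) tuples, with op in {'R', 'W'}."""
    if not trace:
        return {"read_ratio": 0.0, "avg_io_size": 0.0, "top_io_size": None}
    reads = [size for op, size in trace if op == "R"]
    sizes = Counter(size for _, size in trace)
    return {
        "read_ratio": len(reads) / len(trace),          # fraction of reads
        "avg_io_size": sum(s for _, s in trace) / len(trace),
        "top_io_size": sizes.most_common(1)[0][0],      # most frequent I/O size
    }

trace = [("R", 4096), ("R", 4096), ("W", 8192), ("R", 65536)]
stats = summarize(trace)
print(stats["read_ratio"])   # 0.75
print(stats["top_io_size"])  # 4096
```

Only aggregates like these leave the tool; the raw trace (and anything application-specific) stays behind, which is the property the report relies on.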

The fling works with vSphere 5.5 or above and can be downloaded at https://labs.vmware.com/flings/ioinsight.

The post New Fling released – IOInsight appeared first on VMware VROOM! Blog.

Cloud Transformation: Are you Ready? What about your Team?


Are you ready for cloud transformation?

As the CIO, you know cloud is good for you. Capacity on demand! Independence from hardware… IT as a service, even as self-service… In other words, you know why you are doing it. You fought for and got the budget allocated, you acquired the best technology, and your trusted consultant has proven to you in a nicely done POC that the technology works. But you know, it's not just the technology. Will your cloud fly high and deliver the expected benefits, or fall rock-hard on the shoulders of your IT team? Will your consumers use it, or see it as an additional burden? Will everyone be happy, including those whose job description changes, or worse? So, let's talk about cloud transformation.

Cloud is based on advanced software technology, specifically server virtualization and cloud management, but that’s not all. Clearly, we are speaking about the people/process/technology transformation here. And there is no one better than Kevin Lees, the chief technologist of IT operations transformation at VMware, to help you understand how to make cloud work for your IT organization. Kevin has recently published a new white paper: Organizing for the Cloud, where he looks at the organizational impacts of cloud transformation from multiple perspectives and provides insights and advice about how to prepare for—and execute—a winning transformation strategy. Download here, or keep reading for a condensed overview.


Kevin starts off by identifying the key differences between the traditional and the cloud-based IT. How do the roles change? Are the responsibilities the same? And who bears the costs?

Then, Kevin defines the steps to the cloud IT model. First, you need to set the target and set expectations. What are your goals at the end of the day? Is it saving costs? Improving agility? Achieving lights-out operations to minimize the impact of the human factor? Once you establish that, you can better understand the degree of change and how to prepare your organization for it.

In the next section, Kevin provides guidance on how to design an effective cloud organizational model as a core requirement of preparing an organization strategy for supporting services in an SDDC-based cloud environment. He offers the team structure where IT becomes much more customer focused.

[Image: cloud transformation]

Drilling down deeper into the details, Kevin looks at the roles and responsibilities of different components of the organization, with the goal of creating tighter collaboration across the traditional “plan-build-run” IT paradigm. The paper discusses different teams’ charters as they work together to enable the new paradigm of delivering IT as a service to both internal and external constituents. Each team is explained, from its goals to its operating mode and its relationship to the Line of Business. The paper also goes over a number of individual key roles within the teams, making it a really hands-on guide to designing and implementing the new organization.

Next, the paper discusses the cultural aspect and the mind-set shift in how IT interacts with its customers, as well as how IT defines, develops, delivers, and manages services for consumption. It covers three key shifts:

  • Service-Oriented, Customer Focused Culture
  • Collaborative Culture
  • Agile-based Culture

And finally, the paper offers a number of critical steps to achieve success with your cloud initiatives. These are based on the vast experience that Kevin and his team have developed while helping VMware customers to reach their goals.

The paper concludes with an invitation to learn more about the VMware SDDC-based cloud solution at www.vmware.com/solutions/it-outcomes.html, and information about VMware professional services who can help you get started on your cloud journey and achieve results faster.


 

 

The post Cloud Transformation: Are you Ready? What about your Team? appeared first on VMware Cloud Management.

How to use vRealize Automation with SSL offloading in multi-arm load-balancing topology


With the development of newer and faster load balancers or Application Delivery Controllers (ADC) we are seeing more and more use cases where SSL termination is required. Some of those use cases include SSL offloading and optimization, content caching and filtering, application firewalls and many others. In this guide we will cover the basic SSL-to-SSL configuration of VMware NSX, F5 BIG-IP and Citrix NetScaler with vRealize Automation 7.x.

Keep in mind, however, that modern ADCs provide many features and options; while most of them will probably work, we cannot cover all possible combinations. Please test any changes in your lab environment before deploying them in production.

 

SSL offloading

When talking about SSL offloading we usually imagine a connection in which the client-server SSL session is terminated at the ADC and the connection between ADC and back-end systems is not encrypted. This way the burden of encrypting and decrypting the traffic is left to the ADC. We can also do some interesting things at the ADC, such as content rewrite, URL redirection, application firewalling and many more. However, since our traffic is not encrypted to the back-end systems any compromise in our internal network would expose our sensitive information. That is why SSL – Plain mode is not supported by vRA.

Since SSL – Plain is too risky, but we still want the advantages of a modern ADC, we can do SSL termination and talk to the back-end systems over an encrypted channel. In SSL – SSL mode the client – ADC connection is encrypted in one SSL session, and the ADC – back-end server connection is encrypted in another SSL session. This way we can still achieve a performance boost and do advanced content operations, without the risk of exposing unencrypted traffic.
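Conceptually, the ADC maintains two unrelated TLS configurations: a server-side one facing the client and a client-side one facing vRA. As a minimal sketch using Python's standard `ssl` module (purely illustrative; this is not how any of the ADCs below are configured), the two sessions map to two contexts:

```python
import ssl

def make_adc_contexts():
    """Model the two independent TLS sessions of SSL - SSL mode."""
    # Session 1 (client -> ADC): the ADC acts as the TLS server and
    # presents its own certificate; in a real deployment the chain and
    # key would be loaded here with front.load_cert_chain(...).
    front = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

    # Session 2 (ADC -> back end): the ADC acts as a TLS client and,
    # by default, verifies the vRA certificate and hostname, so the
    # second hop is authenticated as well as encrypted.
    back = ssl.create_default_context()
    return front, back

front, back = make_adc_contexts()
```

The point of the sketch is the asymmetry: the back-end context verifies certificates by default, which is exactly why the vRA components must trust the certificate chain in play (see the Certificates section below).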

This mode can be best described using the following figure:

[Figure: diagram1 – SSL – SSL traffic flow]

 

Multi-arm configuration

Traditionally, vRA deployments are done in a one-arm topology, in which the ADC and the vRA components sit on the same network. While simple, this topology is not always optimal if we want to achieve service isolation. That is why here we will use a multi-arm topology, where the ADC and vRA components are deployed on different networks.

This topology can be best described using the following figure:

 

[Figure: diagram2 – multi-arm topology]

 

Certificates

For simplicity, in this guide we are going to use the same certificate/private key pair on the ADC and the vRA components. It is possible to use different certificates; however, since some of the vRA internal communication is done via the ADC, you have to make sure that the vRA components trust the ADC certificate.

When issuing your certificate make sure that it includes the following attributes:

Common Name (CN) = the ADC virtual server used for the vRA appliances
Subject Alternative Name (SAN) = the ADC virtual servers for IaaS Web, IaaS Manager, DNS names for all vRA components, IPs for all ADC virtual servers and all vRA components

 

Example:

 

| Type | Component | Record |
|------|-----------|--------|
| CN | ADC for vRA appliances | vralb.example.com |
| SAN | ADC for IaaS Web and Manager | weblb.example.com, mgrlb.example.com |
| SAN | vRA appliances | vra01.example.com, vra02.example.com |
| SAN | IaaS machines | web01.example.com, web02.example.com, mgr01.example.com, mgr02.example.com |
| SAN | All IPs of both ADC and vRA machines | 10.23.89.101, 10.23.90.223, 10.23.90.224, 10.23.89.102, 10.23.90.226, 10.23.90.227, 10.23.89.103, 10.23.90.228, 10.23.90.229 |
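When generating the certificate request, the SAN list above can be expressed in the `subjectAltName` syntax accepted by `openssl req -addext`. A small helper to assemble that string (a hypothetical convenience function, using the example records from the table):

```python
def build_san(dns_names, ips):
    """Build a subjectAltName value in the form accepted by
    `openssl req -addext "subjectAltName=..."`."""
    entries = [f"DNS:{name}" for name in dns_names]
    entries += [f"IP:{addr}" for addr in ips]
    return "subjectAltName=" + ",".join(entries)

# Example records taken from the table above (truncated IP list).
san = build_san(
    ["weblb.example.com", "mgrlb.example.com",
     "vra01.example.com", "vra02.example.com",
     "web01.example.com", "web02.example.com",
     "mgr01.example.com", "mgr02.example.com"],
    ["10.23.89.101", "10.23.90.223"],
)
```

The resulting string can be pasted into the `-addext` argument, or into the `subjectAltName` line of an OpenSSL config file.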

 

More information about vRealize Automation certificates can be found here.

But enough theory; in the next section we will learn how to configure NSX, NetScaler and BIG-IP with vRA 7.
In our lab we used the following product versions: NSX=6.2, 6.3, NetScaler=11.0, BIG-IP=11.6, vRA=7.2, 7.3

In order to better understand the references below, here is a diagram of our vRA deployment:

 

[Figure: map0 – vRA deployment diagram]

 

Configuring Citrix NetScaler

 

  • Configure the NetScaler device

Enable the Load Balancer (LB) and SSL modules.

You can do so from the NetScaler > System > Settings > Configure Basic Features page.

 

  • Upload your certificate and private key

Go to NetScaler > Traffic Management > SSL > SSL Certificates

Click Install and upload your certificate chain + private key.

 

  • Configure monitors

Go to NetScaler > Traffic Management > Load Balancing > Monitors

Click Add and provide the required information. Leave the default when nothing is specified.

 

Add the following monitors:

 

| Name | Type | Interval | Timeout | Send String | Receive String | Dest. Port | Secure |
|------|------|----------|---------|-------------|----------------|------------|--------|
| vra_https_va_web | HTTP | 5 seconds | 4990 milliseconds | GET /vcac/services/api/health | HTTP/1.(0\|1) (200\|204) | 443 | yes |
| vra_https_iaas_web | HTTP-ECV | 5 seconds | 4990 milliseconds | GET /wapi/api/status/web | REGISTERED | 443 | yes |
| vra_https_iaas_mgr | HTTP-ECV | 5 seconds | 4990 milliseconds | GET /VMPSProvision | ProvisionService | 443 | yes |
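For the HTTP-type monitor the Receive String is a regular expression evaluated against the back end's status line, while for HTTP-ECV it is a substring of the response body. A quick way to sanity-check a pattern before committing it to a monitor (a hypothetical helper for testing, not a NetScaler API):

```python
import re

def monitor_matches(receive_string, response):
    """Return True if a monitor's receive string (a regex for the
    HTTP monitor, a plain substring pattern for HTTP-ECV) is found
    anywhere in the raw response."""
    return re.search(receive_string, response) is not None

# The vra_https_va_web monitor accepts 200 or 204 on HTTP/1.0 or 1.1.
monitor_matches(r"HTTP/1.(0|1) (200|204)", "HTTP/1.1 204 No Content")  # True
```

Running candidate responses through the same pattern the monitor uses helps catch regex mistakes before pool members start flapping.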

 

  • Configure service groups

Go to NetScaler > Traffic Management > Load Balancing > Service Groups 

Click Add and provide the required information. Leave the default when nothing is specified.

Enter each pool member as a Server Based member under New Members.

 

Add the following service groups:

 

| Name | Health Monitors | Protocol | SG Members | Address | Port |
|------|-----------------|----------|------------|---------|------|
| pl_vra-va-00_443 | vra_https_va_web | SSL | ra-vra-va-01, ra-vra-va-02 | 10.23.90.223, 10.23.90.224 | 443 |
| pl_iaas-web-00_443 | vra_https_iaas_web | SSL | ra-web-01, ra-web-02 | 10.23.90.226, 10.23.90.227 | 443 |
| pl_iaas-man-00_443 | vra_https_iaas_mgr | SSL | ra-man-01, ra-man-02 | 10.23.90.228, 10.23.90.229 | 443 |
| pl_vra-va-00_8444 | vra_https_va_web | SSL | ra-vra-va-01, ra-vra-va-02 | 10.23.90.223, 10.23.90.224 | 8444 |

 

  • Configure virtual servers

Go to NetScaler > Traffic Management > Load Balancing > Virtual Servers

Click Add and provide the required information. Leave the default when nothing is specified.

 

Add the following virtual servers:

 

| Name | Protocol | Dest. Address | Port | Load Balancing Method | Service Group Binding | Certificate |
|------|----------|---------------|------|-----------------------|-----------------------|-------------|
| vs_vra-va-00_443 | SSL | 10.23.89.101 | 443 | Round Robin | pl_vra-va-00_443 | Select the appropriate certificate |
| vs_web-00_443 | SSL | 10.23.89.102 | 443 | Round Robin | pl_iaas-web-00_443 | Select the appropriate certificate |
| vs_man-00_443 | SSL | 10.23.89.103 | 443 | Round Robin | pl_iaas-man-00_443 | Select the appropriate certificate |
| vs_vra-va-00_8444 | SSL | 10.23.89.101 | 8444 | Round Robin | pl_vra-va-00_8444 | Select the appropriate certificate |

 

  • Configure Persistence Profile

Go to NetScaler > Traffic Management > Load Balancing > Persistency Groups.

Click Add and enter the name source_addr_vra then select Persistence > SOURCEIP from the drop-down menu.

Set the Timeout to 30 minutes.

Add all related Virtual Servers.

Click OK.

 

If everything is configured correctly, you should see the following for every virtual server in the LB Visualizer:

 

[Figure: map2 – LB Visualizer view]

 

 

Configuring F5 BIG-IP

 

  • Upload certificate and key pair

Navigate to System > File Management > SSL Certificate List

Click Import and select the certificate chain + private key.

Use the same name for the certificate and the key; that way the device will know which key belongs to which certificate.

 

  • Configure SSL profile

Navigate to Local Traffic > Profiles > SSL > Client

Click Create

For Parent Profile select clientssl

Enter a Name, for example vra-profile-client.

Click the checkbox on Certificate, Chain, Key and select the correct certificate chain and key.

 

  • Configure custom persistence profile

Navigate to Local Traffic > Profiles > Persistence.

Click Create.

Enter the name source_addr_vra and select Source Address Affinity from the drop-down menu.

Enable Custom mode.

Set the Timeout to 1800 seconds (30 minutes).

Click Finished.

 

  • Configure monitors

Navigate to Local Traffic > Monitors.

Click Create and provide the required information. Leave the default when nothing is specified.

 

Create the following monitors:

 

| Name | Type | Interval | Timeout | Send String | Receive String | Alias Service Port |
|------|------|----------|---------|-------------|----------------|--------------------|
| vra_https_va_web | HTTPS | 3 | 10 | GET /vcac/services/api/health\r\n | HTTP/1.(0\|1) (200\|204) | 443 |
| vra_https_iaas_web | HTTPS | 3 | 10 | GET /wapi/api/status/web\r\n | REGISTERED | |
| vra_https_iaas_mgr | HTTPS | 3 | 10 | GET /VMPSProvision\r\n | ProvisionService | |
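Note the literal \r\n at the end of each Send String: BIG-IP treats it as an escape sequence for the CRLF that terminates the request line on the wire. A small sketch of that conversion (illustrative only; BIG-IP performs this internally when the monitor runs):

```python
def f5_send_bytes(send_string):
    """Turn a Send String as typed into the BIG-IP monitor dialog
    (with literal backslash-r backslash-n escapes) into the raw
    bytes sent to the pool member."""
    return send_string.replace("\\r\\n", "\r\n").encode("ascii")

# The string is entered with a *literal* \r\n, hence the raw string.
probe = f5_send_bytes(r"GET /vcac/services/api/health\r\n")
```

Forgetting the trailing \r\n is a common reason a monitor that looks correct in the UI never marks its members up, because the HTTP request line is never terminated.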

 

  • Configure pools

Navigate to Local Traffic > Pools.

Click Create and provide the required information. Leave the default when nothing is specified.

Enter each pool member as a New Node and add it to New Members.

 

Create the following pools:

 

| Name | Health Monitors | Load Balancing Method | Node Name | Address | Service Port |
|------|-----------------|-----------------------|-----------|---------|--------------|
| pl_vra-va-00_443 | vra_https_va_web | Round Robin | ra-vra-va-01, ra-vra-va-02 | 10.26.90.223, 10.26.90.224 | 443 |
| pl_iaas-web-00_443 | vra_https_iaas_web | Round Robin | ra-web-01, ra-web-02 | 10.26.90.226, 10.26.90.227 | 443 |
| pl_iaas-man-00_443 | vra_https_iaas_mgr | Round Robin | ra-man-01, ra-man-02 | 10.26.90.228, 10.26.90.229 | 443 |
| pl_vra-va-00_8444 | vra_https_va_web | Round Robin | ra-vra-va-01, ra-vra-va-02 | 10.26.90.223, 10.26.90.224 | 8444 |

 

  • Configure virtual servers

Navigate to Local Traffic > Virtual Servers.

Click Create and provide the required information. Leave the default when nothing is specified.

 

Create the following virtual servers:

 

| Name | Type | Dest. Address | Service Port | SSL Profile (Client) | SSL Profile (Server) | Source Address Translation | Default Pool | Default Persistence Profile |
|------|------|---------------|--------------|----------------------|----------------------|----------------------------|--------------|------------------------------|
| vs_vra-va-00_443 | Standard | 10.26.89.101 | 443 | vra-profile-client | serverssl | Auto Map | pl_vra-va-00_443 | source_addr_vra |
| vs_web-00_443 | Standard | 10.26.89.102 | 443 | vra-profile-client | serverssl | Auto Map | pl_iaas-web-00_443 | source_addr_vra |
| vs_man-00_443 | Standard | 10.26.89.103 | 443 | vra-profile-client | serverssl | Auto Map | pl_iaas-man-00_443 | None |
| vs_vra-va-00_8444 | Standard | 10.26.89.101 | 8444 | vra-profile-client | serverssl | Auto Map | pl_vra-va-00_8444 | source_addr_vra |

 

 

If everything is set up correctly, you should see the following in Local Traffic > Network Map:

 

[Figure: map – Network Map view]

 

Configuring VMware NSX

 

  • Configure Global Settings

Log in to NSX, select the Manage tab, click Settings, and select Interfaces.

Double-click to select your Edge device from the list.

Click vNIC# for the external interface that hosts the VIP IP addresses and click the Edit icon.

Select the appropriate network range for the NSX Edge and click the Edit icon.

Add the IP addresses to be assigned to the VIPs, and click OK.

Click OK to exit the interface configuration subpage.

 

  • Enable load balancer functionality

Select the Load Balancer tab and click the Edit icon.

Select Enable Load Balancer, Enable Acceleration, and Logging, if required, and click OK.

 

  • Upload certificate chain and key

Go to Manage > Settings > Certificates and upload the certificate chain + private key

 

  • Add application profiles

Click Application Profiles on the window pane on the left.

Click the Add icon to create the Application Profiles required for vRealize Automation by using the information from the table below. Leave the default when nothing is specified.

 

| Name | Type | Enable SSL Pass-through | Configure Service Certificate | Virtual Server Certificate | Timeout | Persistence |
|------|------|-------------------------|-------------------------------|----------------------------|---------|-------------|
| IaaS Manager | HTTPS | Deselected | Selected | Select the correct certificate | | None |
| IaaS Web | HTTPS | Deselected | Selected | Select the correct certificate | 1800 seconds | Source IP |
| vRealize Automation VA Web | HTTPS | Deselected | Selected | Select the correct certificate | 1800 seconds | Source IP |

 

  • Add service monitors

Click Service Monitoring in the left pane.

Click the Add icon to create the Service Monitors required for vRealize Automation using the information from the table below. Leave the default when nothing is specified.

 

| Name | Interval | Timeout | Retries | Type | Method | URL | Receive | Expected |
|------|----------|---------|---------|------|--------|-----|---------|----------|
| vRealize Automation VA Web | 3 | 10 | 3 | HTTPS | GET | /vcac/services/api/health | | 200, 204 |
| IaaS Web | 3 | 10 | 3 | HTTPS | GET | /wapi/api/status/web | REGISTERED | |
| IaaS Manager | 3 | 10 | 3 | HTTPS | GET | /VMPSProvision | ProvisionService | |
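Across all three ADCs the monitors implement the same pass/fail semantics: a probe succeeds if either the status code is in the Expected list or the Receive string appears in the response body. A small model of that rule (a hypothetical helper for reasoning about the tables, not an NSX API):

```python
def monitor_ok(status, body, expected):
    """Evaluate the 'Receive / Expected' semantics used in the monitor
    tables: `expected` is either a comma-separated list of acceptable
    status codes (e.g. "200, 204") or a substring that must appear in
    the response body (e.g. "REGISTERED")."""
    tokens = [t.strip() for t in expected.split(",")]
    if all(t.isdigit() for t in tokens):
        return str(status) in tokens
    return expected in body
```

For example, the vRA VA health endpoint passes on a 204 with an empty body, while the IaaS Manager monitor only passes if "ProvisionService" shows up in the response.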

 

  • Add pools

Click Pools in the left pane.

Click the Add icon to create the Pools required for vRealize Automation using the information from the table below. Leave the default when nothing is specified.

You can either use the IP address of the pool members, or select them as a Virtual Center Container.

 

| Pool Name | Algorithm | Monitors | Member Name | Example IP Address / vCenter Container | Port | Monitor Port |
|-----------|-----------|----------|-------------|----------------------------------------|------|--------------|
| pool_vra-va-web_443 | Round Robin | vRA VA Web | vRA VA1, vRA VA2 | 10.26.90.223, 10.26.90.224 | 443 | |
| pool_iaas-web_443 | Round Robin | IaaS Web | IaaS Web1, IaaS Web2 | 10.26.90.226, 10.26.90.227 | 443 | |
| pool_iaas-manager_443 | Round Robin | IaaS Manager | IaaS Man1, IaaS Man2 | 10.26.90.228, 10.26.90.229 | 443 | |
| pool_vra-rconsole_8444 | Round Robin | vRA VA Web | vRA VA1, vRA VA2 | 10.26.90.223, 10.26.90.224 | 8444 | 443 |

 

  • Add virtual servers

Click Virtual Servers on the left pane.

Click the Add icon to create the Virtual Servers required for vRealize Automation using the information from the table below. Leave the default when nothing is specified.

 

| Name | IP Address | Protocol | Port | Default Pool | Application Profile | Application Rule |
|------|------------|----------|------|--------------|---------------------|------------------|
| vs_vra-va-web_443 | 10.26.90.101 | HTTPS | 443 | pool_vra-va-web_443 | vRA VA | |
| vs_iaas-web_443 | 10.26.90.102 | HTTPS | 443 | pool_iaas-web_443 | IaaS Web | |
| vs_iaas-manager_443 | 10.26.90.103 | HTTPS | 443 | pool_iaas-manager_443 | IaaS Manager | |
| vs_vra-va-rconsole_8444 | 10.26.90.101 | HTTPS | 8444 | pool_vra-rconsole_8444 | vRA VA | |

 

 

You can read more about the supported deployment scenarios for vRA in the official Load Balancing Guide, which covers configuring vRA with SSL pass-through load balancing.

 

In conclusion, this guide just scratches the surface of what you can accomplish with an SSL-terminating ADC, but it is a good foundation on which to build your complete integration.
If you are interested in more articles like this one, stay tuned to VMware Blogs.

 

Take a vRealize Automation 7 Hands-On lab!

The post How to use vRealize Automation with SSL offloading in multi-arm load-balancing topology appeared first on VMware Cloud Management.

Profiling Applications with VMware User Environment Manager, Part 1: Introduction to Application Profiler


With contributions from:

Jim Yanik, Senior Manager, End-User-Computing Technical Marketing, VMware

Pim Van De Vis, Product Engineer, User Environment Manager, Research & Development, VMware

Stephane Asselin, Lead Architect, App Volumes, VMware

Successful management of applications across physical, virtual, and cloud devices is becoming increasingly important. Whether your organization fits neatly into one of those silos, or spans all three, the challenge is finding tools designed to work well for any one platform, and seamlessly across them all. VMware User Environment Manager is one of those tools. With a little savvy, you can provide a superior experience for your end users while simplifying profile management.


Introduction

Personalization, or management of user-specific application settings, is one of many features included with VMware User Environment Manager. This feature enables end users to roam between disparate devices, while preserving custom application settings. IT benefits from simplified application installations, while delivering necessary configuration settings based on any number of environmental conditions.

If you are new to User Environment Manager, I encourage you to visit the VMware User Environment Manager Product Page for an overview, and the VMware User Environment Manager video series on YouTube for more detail. You will learn about a variety of features and benefits such as dynamic policy configuration across physical, virtual, and cloud desktops. An overview of User Environment Manager is outside the scope of this blog post, but there is a fundamental concept which is sometimes overlooked or misunderstood. VMware User Environment Manager takes a whitelist approach to managing the user profile. Given this design approach, IT must specify which applications and settings will be managed. Although it does mean a little more work up front, this solution prevents excessive profile growth and profile corruption, enables user settings to roam across Windows versions, and provides IT granular control to manage as much or as little of the user experience as needed.

Preserving user-specific application settings and applying or enforcing specific default application settings are key features of User Environment Manager. Both of these concepts are illustrated in a recent blog post titled VMware User Environment Manager, Part 2: Complementing Mandatory Profiles with VMware User Environment Manager which demonstrates the power and flexibility of combining User Environment Manager with Microsoft Mandatory Profiles. VMware provides application management templates for commonly-used software packages, and the VMware User Environment Manager Community Forum contains many more templates created with an included tool called Application Profiler.

Application Profiler is a standalone tool that helps you determine where in the file system or registry an application is storing its user settings. The output from Application Profiler is a configuration file which can be used to preserve and roam application settings for your end users. Optionally, you can record a default set of application settings, and apply and/or enforce these defaults for your users based on a variety of conditions.
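As a rough, hypothetical illustration of the output (the exact sections are generated by Application Profiler and documented in the administration guide; the application name and paths below are invented), the configuration file is an INI-style definition of where the application keeps its user settings:

```ini
[IncludeRegistryTrees]
HKCU\Software\ExampleApp

[IncludeIndividualFolders]
<AppData>\ExampleApp
```

User Environment Manager then captures those locations at logoff and restores them at logon, wherever the user signs in next.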

For more information or to get started with the Application Profiler tool, see the VMware User Environment Manager Application Profiler Administration Guide.

The Pareto Principle

The Pareto Principle, commonly referred to as the 80/20 rule, states that 80% of the effects come from 20% of the causes. I have been using application management software in some form or another for nearly two decades. In that time, I have found the Pareto Principle to be particularly applicable in that a small number of applications tend to cause the vast majority of challenges for IT.

While the Application Profiler tool is easy to use, and most applications can be profiled with little more effort than a simple installation, there are exceptions. The aforementioned Community Forum is a great place to look when you are having trouble profiling an application, but what if you cannot find the particular application template you need?

Know Thine App

A friend once gave me a t-shirt with this expression on it. Over the years, I have found it to be invaluable advice, though it is sometimes easier said than done.

Because Windows is an open platform, application developers have a great deal of flexibility in the way the applications they design behave. While guidelines and best practices have been established over the years, we still occasionally find an application that writes a log file to C:\Temp!

Understanding the behavior of an application, not just during installation, but as the application is opened, modified, updated, and so on, is critical to successfully managing the application lifecycle. There are a number of tools available, such as the Sysinternals Suite, to help you understand how an application behaves. These are powerful tools, but as you can see they are plentiful, and can be time-consuming and cumbersome to use.

The VMware User Environment Manager Application Profiler tool is purpose-built to help you easily understand how an application behaves. With real-time application analysis capabilities, Application Profiler automatically generates configuration files which enable application management.

What to Expect from This Blog Series

The purpose of this blog series is to enable you, the IT Administrator, to successfully profile and manage any applications you choose. In each subsequent blog post we will explore a new application.

Going back to the Pareto Principle, most applications are simple to profile using the steps detailed in the VMware User Environment Manager Application Profiler Administration Guide. Because of this, applications known to require some troubleshooting will be chosen for this series. You will get a chance to see the symptoms of applications that do not initially profile correctly, and the process used to resolve the problem. You can then take these practices and apply them to applications in your environment.

This series is designed for a User Environment Manager administrator with at least a basic understanding of the Application Profiler tool. If you are new to Application Profiler, review the guide listed previously before continuing to the rest of the series.

Summary

Managing applications with User Environment Manager improves the experience for end users and simplifies application lifecycle management for IT. Profiling applications is a simple process, and most applications will work out of the box. For problematic applications, you can find configuration templates on the Community Forum. If you cannot find what you are looking for, the skills you learn in this blog series should help you to create your own templates. Have you already created a configuration template? Be sure to share!

The post Profiling Applications with VMware User Environment Manager, Part 1: Introduction to Application Profiler appeared first on VMware End-User Computing Blog.

Are You Equipped to Handle the IoT Wave?


See VMware Take on the Internet of Things (IoT) at Mobile World Congress 2017

With the IoT wave crashing down on the enterprise, it is not surprising that IoT will loom large when Mobile World Congress opens in Barcelona on 28 February. There will be a zone dedicated just to IoT in the exhibition, and many of the conference presentations will be devoted to IoT.

VMware unveiled its IoT strategy at VMworld in Las Vegas in August last year. Over the past six months, we’ve been developing that strategy, working with the IoT partners we announced in August last year, and others, and we’ll be taking the wraps off our latest initiatives at Mobile World Congress in a series of group discussions and talks with some of those partners.

IoT Theatre Sessions

Attend theatre sessions to understand the challenges with implementing IoT in the enterprise. We’ll also discuss how VMware’s IoT strategy helps organizations like yours overcome these challenges.

VMware IoT Strategy

VMware Internet of Things Strategy & Roadmap Unveiled

  • Mimi Spier—VP, Business Development, Marketing, and GTM Strategy, IoT, VMware
  • 27 February, 10–10:30 a.m.
  • Hall 3, VMware Stand 3K10

Smart Facilities of the Future

Build For Tomorrow—Smart Facilities of the Future

  • Mimi Spier—VP, Business Development, Marketing and GTM strategy, Internet of Things, VMware
  • Andrew Till—VP, Technologies and Partnerships, Harman Connected Services
  • 28 February, 3–3:30 p.m.
  • Hall 3, VMware Stand 3K10

IoT Security

Build For Tomorrow—Smart Facilities of the Future

  • Matthias Schorer—Lead Business Development Manager, IoT, VMware
  • 1 March, 1:30–2 p.m.
  • Hall 3, VMware Stand 3K10

IoT in the enterprise

Don’t Get Left Behind … Get Started with IoT in Your Enterprise: Industry Leader’s Panel Discussion

  • Mimi Spier—VP, Business Development, Marketing and GTM strategy, Internet of Things, VMware
  • Sandip Ranjhan—SVP & GM, Mobile & Communication Services, Harman Connected Services
  • Brent Hodges—IoT Product Strategy, Dell
  • Kevin O’Brien—VP, IOT WW Partner Sales and Alliances, ThingWorx
  • Christoph Inauen—VP, Partnerships and Solution Portfolio, IoT, SAP
  • 1 March, 10:30–11 a.m.
  • Hall 3, VMware Stand 3K10

Experience IoT

Stop by the extremely popular IoT Zone at the VMware Stand 3K10 in Hall 3. Learn how VMware helps drive innovation in four different industries: government, manufacturing, transportation and retail. We will even have some innovative augmented reality experiences to give you a real taste of IoT in action to help you realize use cases and applications in your business.

Here’s a quick description of each:

[Image: Experience IoT @MWC]

We’ll see you there! Learn more about what’s happening with VMware at Mobile World Congress here.

The post Are You Equipped to Handle the IoT Wave? appeared first on VMware End-User Computing Blog.


Utilizing VMware Mirage for Static Endpoint Compliance (Physical PCs, ATMs, POS)


VMware Mirage for ATM

The modern IT landscape has changed dramatically in the past few years. Cybersecurity threats are prevalent in the computing world, and it seems like everyday we are hearing about a new threat or story in which hackers have hit a banking or retail organization, crippling its systems and causing damage to its reputation and customer trust.

We now have the ability to deliver modern, just-in-time desktops, built on the fly, serving a highly-targeted purpose from the data center or cloud. These dynamic desktops are custom assembled for their task with the latest applications, software patches and security updates. But what do you do for static, old-style Windows machines that litter the remotest outcrops of IT? How do you secure 50,000 ATMs connected only by a low-bandwidth wire? How do you manage a sophisticated point of sale (POS) branch infrastructure that’s designed to perform 24/7—regardless of what life throws at it?

The Advantage of Layering Technology

VMware Mirage is a mature image management product with a loyal, worldwide customer base. The product specializes in endpoint image management from a central console. It was one of the first “layering” technologies to ever successfully manage traditional desktops.

The advantage of a layering technology over a traditional, unattended installation system is the short downtime of the system and the reliability of the operations. Mirage creates a layer once, and clones it to endpoints. You no longer need to wonder whether your application or operating system installation finished successfully.

[Image: VMware EUC Solutions for Banking]

While layering does not work for modern desktops, it is perfect for some static use cases, like those outlined in the opening paragraph—POS and ATMs.

Mirage gives your IT team central management over remote operating systems, no matter how far away they are located or how low their bandwidth is. This system is good at getting your image to you even when your machine is located on an offshore drilling platform in the middle of the ocean (true story)!

IT personnel can easily switch and apply machine base layers containing up-to-date operating systems or drivers, or application layers which contain the applications your users need.

Best of all, your users will be able to continue working with negligible interruption, as Mirage is fast, sleek and unobtrusive—and keeps downtime to a minimum. The required images are quietly downloaded in the background, and a restart is prompted when the IT team chooses. A short restart later, your endpoint is up to date with the latest security patches, operating system version, or an updated application with all the latest swanky features your R&D team has just finished working on. Plus, it’s all centrally managed and monitored. Think about the amount of time and resources saved!

Managing Static Endpoints with Mirage

But think about it, your POS devices, ATMs and kiosk machines can join the party, too—not just your dynamic employee endpoints but also your static machines. Imagine your IT personnel creating an up-to-date, patched image for all these static endpoints in your organization, and managing them from a central location, saving critical downtime of these endpoints and the cost of sending personnel onsite to service or reimage them. (How we hate seeing the dreadful “ATM out of order” message, huh?)

[Image: Mirage in Retail & Banking]

One of the new features we’re playing with in the labs would, if productized, allow an organization to monitor file changes on endpoints to detect abnormal file activity on POS, kiosk or ATM devices, ensuring image compliance. This feature could allow you to detect any abnormal activity or changes on machines that should usually remain static. The idea is that this would enable you to keep track of any security breaches and prevent data leakage or unexpected behavior for your customers.

Many other file integrity solutions exist on the market today, but combining the powerful image management capabilities of Mirage would allow an organization to not just detect a breach, but to do something about it!

Mirage allows you to go back to any previous state of the static endpoint or enforce corporate mandated images as needed, and in case of a security breach or application and operating system issues, all from a central location. This operation can be easily integrated and automated into a flow, allowing all endpoints to constantly stay compliant and secure.
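Conceptually, detecting drift on a machine that should remain static reduces to comparing cryptographic fingerprints of its files against a known-good baseline. A minimal sketch of that idea (illustrative only; this is not how Mirage itself is implemented):

```python
import hashlib

def baseline(files):
    """Fingerprint an image: map each file name to a SHA-256 digest.
    `files` maps relative path -> file contents; in practice the
    contents would be read from disk."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in files.items()}

def drift(before, after):
    """Report files added, removed, or modified since the baseline."""
    modified = sorted(name for name in before.keys() & after.keys()
                      if before[name] != after[name])
    return {"added": sorted(after.keys() - before.keys()),
            "removed": sorted(before.keys() - after.keys()),
            "modified": modified}
```

Anything reported by `drift` on a supposedly static endpoint is a signal to investigate, and an image management tool can then roll the machine back to its enforced layer.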

Mirage 5.9 is expected to be released at the end of Q1 2017. In the meanwhile, find out how simple it can be to manage a remote Windows device. Check Mirage out today.

This article was written by VMware Senior Member of Technical Staff for Mirage R&D Yakov Voloch, with contributions by VMware Technical Staff Member Yan Aksenfeld.

The post Utilizing VMware Mirage for Static Endpoint Compliance (Physical PCs, ATMs, POS) appeared first on VMware End-User Computing Blog.

VMware on VMware: vRealize Network Insight


How VMware IT Leverages vRealize Network Insight

 

VMware IT uses many VMware tools to manage its private cloud. One of the most useful is vRealize Network Insight.

What is vRealize Network Insight?

VMware vRealize® Network Insight™ delivers intelligent operations for software-defined networking and security. It helps optimize network performance and availability with converged visibility across virtual and physical networks, provides planning and recommendations for implementing micro-segmentation based security, and provides operational views to quickly and confidently manage and scale VMware NSX® deployment.

There are two use cases of vRealize Network Insight deployment in VMware IT featured in the video below.

Watch the vRealize Network Insight dogfooding video here:

In the video, Swapnil S. Hendre talks about how VMware uses vRealize Network Insight for micro-segmentation in its production infrastructure, and John Tompkins talks about how vRNI helps manage one of the largest VMware clouds of its kind, stretching across six data centers with more than 200,000 VMs on more than 2,000 hosts.

Says Hendre: "It's all about getting the full potential from NSX and micro-segmentation. To get the most out of NSX, we knew we needed to improve our process to identify unique traffic flows, or what you might call the micro-segmentation planning process. At VMware, that's part of what we call the application discovery process. It was a manual process and very time consuming, so it was delaying our micro-segmentation deployment."

Speeding up their NSX deployment time was a big goal.

Adds Hendre: "Network Insight provides East-West traffic analytics support right out of the box, so it made our planning process very efficient. We now have better control and tracking of our virtual distributed firewalls. That's a huge help when it comes to meeting audit and compliance requirements.

"We also have faster, intuitive insight into problems that our general support staff can understand, so we don't have to run up all issues to our networking experts. We can also create point-in-time dashboards and share them among teams in alert notifications, so everyone gets a good snapshot of the issue."

John Tompkins adds: "Network Insight, like the name suggests, really opens your eyes. Now we can identify issues with our NSX deployments that we hadn't been aware of. We have visibility into things that would have been black holes in the past. That lets us move forward with confidence."

Check out these links:

David Davis on vRealize Operations – Post #33 – What is vRealize Network Insight (vRNI) ?

VMware on VMware: http://www.vmware.com/company/vmware-on-vmware.html

February 15th: Getting More Out of VMware with vRealize Network Insight

vRealize Network Insight New Customer E-Book

vRealize Network Insight and NSX – New Infographic

 

The post VMware on VMware: vRealize Network Insight appeared first on VMware Cloud Management.

VMware Horizon 7 True SSO: Advanced Features


In a previous blog, we saw how to deploy VMware Horizon 7 True SSO in a lab environment. The diagram below is a recap of the deployment:

Horizon 7 True SSO Advanced Features 1

Now, let us discuss what to consider when deploying True SSO in a production environment. The discussion focuses only on the VMware Horizon environment portion of the diagram above.

VMware recommends deploying two VMware Enrollment Servers and two Microsoft Certificate Authorities (CA) for True SSO in a production environment. Configure these so that the Horizon Connection Server uses both VMware Enrollment Servers, and each VMware Enrollment Server uses both CAs.

Horizon 7 True SSO Advanced Features 2

Enrollment Server Deployment Scenarios

For each domain, we can configure two Enrollment Servers (primary and secondary) in a Horizon 7 environment. The two Enrollment Servers add redundancy, which allows IT to conduct maintenance, upgrades and so on without any disruption for end users.

By default, the Connection Server always prefers the primary Enrollment Server for generating certificates. The secondary Enrollment Server is used when the primary Enrollment Server is unresponsive or in an error state. The Connection Server returns to the primary Enrollment Server as soon as it recovers.

True SSO can also be configured for high availability. When configured, Connection Server distributes the load of generating Certificates by alternating between the two Enrollment Servers. If an Enrollment Server becomes unresponsive, the Connection Server routes all requests via the other one until it recovers.
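The selection behavior described above can be sketched in a few lines. This is an illustrative model only; the function name and structure are assumptions for the sake of the example, not VMware's actual implementation:

```python
# Illustrative sketch of how a Connection Server might pick an Enrollment
# Server: prefer the primary unless it is down, or alternate between the
# healthy servers when load balancing is enabled.

def pick_enrollment_server(servers, healthy, load_balance=False, counter=0):
    """servers: [primary, secondary]; healthy: set of responsive servers;
    counter: running request count, used only to alternate in HA mode."""
    candidates = [s for s in servers if s in healthy]
    if not candidates:
        raise RuntimeError("no Enrollment Server available")
    if load_balance:
        # HA mode (cs-view-certsso-enable-es-loadbalance=true): spread
        # certificate-generation requests across the healthy servers.
        return candidates[counter % len(candidates)]
    # Default mode: always prefer the primary while it is responsive.
    return candidates[0]
```

In default mode the primary always wins while it is healthy; with load balancing enabled, successive requests alternate between the healthy servers, and all traffic shifts to the surviving server if one becomes unresponsive.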

For high availability, VMware recommends:

  • Co-host each Enrollment Server with a CA on the same machine.
  • Configure each Enrollment Server to prefer its local CA.
  • Configure the Connection Server to load-balance between the configured Enrollment Servers.

Configuration settings:

1. Configure Connection Server to load balance between two Enrollment Servers (requires editing LDAP).

  • Log in to the console of a Connection Server in the POD and launch "ADSI Edit" from "Control Panel > Administrative Tools"
  • From the menu, select "Action > Connect to"
  • Connection Settings:
    1. Connection Point: dc=vdi,dc=vmware,dc=int
    2. Computer: localhost:389
  • Expand the connection tree to "OU=Properties > OU=Global" and double-click the object named "CN=Common" in the right pane
  • In the properties window, find and double-click the attribute named "pae-NameValuePair"
  • In the Multi-valued String Editor window, add: "cs-view-certsso-enable-es-loadbalance=true"

2. Configure the Enrollment Server to prefer the local CA when co-hosted (requires editing registry).

  • Log in to the console of an Enrollment Server
  • Registry location: HKLM\SOFTWARE\VMware, Inc.\VMware VDM\Enrollment Service
  • Add Value Name: "PreferLocalCa", Value data: "1"
  • Repeat for each Enrollment Server individually

Horizon 7 True SSO Advanced Features 3

True SSO in a Complex Domain Environment

VMware supports deploying True SSO in a multi-domain environment, provided the domains have a two-way trust.

Let us take an example where we have two Domain trees (A & X) in the same forest.

Horizon 7 True SSO Advanced Features 4

Here we see two domain trees, Domain A and Domain X. Each of the domain trees has transitive trusts between all of its domains. Moreover, the Domain A tree and the Domain X tree have a two-way, transitive trust relationship with each other.

VMware supports True SSO in this scenario, and the two Enrollment Servers can be placed at any domain.

Let us consider another example:

Horizon 7 True SSO Advanced Features 5

Here, we see two forests, each containing its own domain trees. Moreover, the two forests have a two-way, forest-level trust set up as well.

VMware supports True SSO in this scenario, as well. Like before, the two Enrollment Servers can be placed within any domain of any forest.

More about domain and forest trusts can be found at technet.microsoft.com/en-us/library/cc770299.aspx.

Deployment Considerations

For best performance, it is important to plan the placement of the CAs and the Enrollment Servers. To generate certificates, the Enrollment Server needs to communicate with the CA, and the CA needs to communicate with the Domain Controller. It is therefore a good idea to place the CA as close as possible to the Domain Controller, and likewise the Enrollment Server as close as possible to the CA, to reduce network hops. Optimal performance comes from co-hosting the CA and the Enrollment Server on the same VM.

When deploying Enrollment Servers and CAs, we would also need to consider administrative roles. If "Domain admin" or "CA admin" is responsible for managing the CAs, and "View admin" is a separate role responsible for managing the View deployment, then consider setting up the CA and the Enrollment Server on separate VMs, so that each component is managed by its assigned role.

Advanced Settings

Out-of-the-box settings will suit most users. If required, there are some advanced settings provided for admins.

  • Settings for Virtual Desktop: All the required settings are provided via VMware Horizon View Agent admin GPO template (vdm_agent.adm).
  • Settings for Enrollment Server: All the required settings are provided via the registry and are created under: "HKLM\SOFTWARE\VMware, Inc.\VMware VDM\Enrollment Service".
  • Settings for Connection Server: All the required settings are provided via LDAP under attribute “pae-NameValuePair” as discussed in earlier section.
Certificate generation timeout

This combination of settings adjusts the maximum time allowed for generating a certificate on behalf of a user (including one retry on failure). Typically, admins tweak these settings when they find certificates arriving after SSO has timed out waiting for one. All three settings need to be adjusted together, and the values should satisfy: Enrollment Server < Connection Server < Virtual Desktop.

  • Certificate wait timeout (Virtual Desktop, via GPO): Default 40 sec; Range 10-120 sec
  • cs-view-certsso-certgen-timeout-sec (Connection Server, via LDAP): Default 35 sec; Range 10-60 sec
  • MaxSubmitRetryTime (Enrollment Server, via Registry): Default 25000 ms; Range 9500-59000 ms
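As a quick sanity check, the recommended ordering of the three timeouts can be expressed in a few lines. The helper below is illustrative only; the values are the defaults from the settings above, normalized to milliseconds:

```python
# Default values of the three True SSO certificate-generation timeouts,
# normalized to milliseconds for comparison.
DEFAULTS_MS = {
    "MaxSubmitRetryTime": 25_000,                  # Enrollment Server (registry)
    "cs-view-certsso-certgen-timeout-sec": 35_000, # Connection Server (LDAP), 35 s
    "Certificate wait timeout": 40_000,            # Virtual Desktop (GPO), 40 s
}

def timeouts_ordered(enrollment_ms, connection_ms, desktop_ms):
    """True when Enrollment Server < Connection Server < Virtual Desktop."""
    return enrollment_ms < connection_ms < desktop_ms
```

When tuning any of the three, re-check the ordering so a lower layer never waits longer than the layer above it.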

Limiting monitored domains

The Enrollment Server caches details about the Windows environment, such as AD info, CAs and Templates. By default, the Enrollment Server will attempt to access all domains. In a complex environment, you may want to limit the domains that the Enrollment Server monitors. The following values can be set as required on the Enrollment Server (via Registry):

  • ConnectToDomains (example: truesso.dom.int): Automatically monitor the domains specified.
  • ExcludeDomains (example: truesso.dom.int): Do not automatically monitor the domains specified. If a Connection Server references any of the listed domains via configuration, the Enrollment Server will still connect to and monitor it.
  • ConnectToDomainsInForest (Default: 1 (True); Values: 0 (False) or a positive number (True)): Automatically monitor all domains in the forest.
  • ConnectToTrustingDomains (Default: 1 (True); Values: 0 (False) or a positive number (True)): Automatically monitor all explicitly trusting domains, that is, domains with incoming trusts.
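The interplay of these values can be sketched as a small decision function. This is a hypothetical model for illustration only; the exact precedence between these registry values is an assumption, not documented VMware behavior:

```python
# Hypothetical model of the Enrollment Server's domain-monitoring decision.
# Assumed precedence: an explicit exclusion wins unless a Connection Server
# references the domain; otherwise explicit inclusion, then forest-wide
# monitoring, applies.

def should_monitor(domain, *, connect_to=(), exclude=(),
                   forest_domains=(), connect_forest=True,
                   referenced_by_connection_server=()):
    if domain in exclude:
        # Excluded domains are not monitored automatically, but the
        # Enrollment Server still connects if a Connection Server
        # references the domain in its configuration.
        return domain in referenced_by_connection_server
    if domain in connect_to:
        return True
    # ConnectToDomainsInForest: monitor every domain in the forest.
    return connect_forest and domain in forest_domains
```

The point of the sketch is that ExcludeDomains is advisory rather than absolute: a Connection Server reference overrides it.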

CA latency warning

At times, CAs may take an unusually long time to generate certificates; a CA is marked "Degraded" by the Enrollment Server when that happens. The Enrollment Server measures how long a CA takes to generate a certificate, and the CA is marked Degraded if it takes more than 1,500 milliseconds by default.

  • SubmitLatencyWarningTime (Enrollment Server, via Registry): Default 1500 ms; Range 500-5000 ms
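The marking rule amounts to a simple threshold check. A minimal, illustrative sketch (the function name is made up):

```python
# Sketch of the "Degraded" marking described above: the Enrollment Server
# measures per-certificate CA latency and flags the CA once it exceeds
# SubmitLatencyWarningTime.

SUBMIT_LATENCY_WARNING_TIME_MS = 1500  # registry default; range 500-5000

def ca_status(latency_ms, warning_ms=SUBMIT_LATENCY_WARNING_TIME_MS):
    """Return the CA state implied by one certificate-generation latency."""
    return "Degraded" if latency_ms > warning_ms else "OK"
```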

Disabling True SSO

This setting allows admins to disable True SSO on any specific desktop.

  • Disable True SSO (Virtual Desktop, via GPO): Default 0 (False)

Minimum key size

This setting defines the minimum key size to be used for True SSO. The generated certificate is protected by an RSA public/private key pair, which is securely stored on the Virtual Desktop; keys must be at least the size defined by this value.

  • Minimum key size (Virtual Desktop, via GPO): Default 1024; Range 1024-8192

Allowed key sizes

This setting specifies a list of key sizes. When generating an RSA key pair, the size must be one defined in the list, which can hold a maximum of five sizes.

  • All sizes of keys that can be used (Virtual Desktop, via GPO): Default 2048; Example: 1024,2048,3072,4096,8192

Key pre-creation

This setting specifies the number of RSA key pairs to pre-create. Generating RSA key pairs can be time consuming; to avoid adding to logon time, a number of key pairs are pre-created and one is picked from the cache when required for True SSO. This setting is only valid in Remote Desktop Session Host (RDSH) environments.

  • Number of keys to pre-create (Virtual Desktop, via GPO): Default 5; Range 1-100

Certificate reuse on reconnect

This setting specifies how long a certificate must remain valid to be considered for reuse by True SSO. A user may be disconnected from a session; if the user reconnects while the session is still active, True SSO logs the user back into the desktop. Because a session already exists, True SSO tries to reuse the certificate associated with it, provided the certificate will remain valid for at least the duration defined by this setting.

  • Minimum validity period required for a certificate (Virtual Desktop, via GPO): Default 10 minutes; Range: minimum 5 minutes
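The reuse check amounts to comparing the certificate's remaining lifetime against the configured minimum. An illustrative sketch using made-up names:

```python
# Sketch of the certificate-reuse check described above: on reconnect, the
# existing session's certificate is reused only if it remains valid for at
# least the configured minimum validity period (default 10 minutes).

from datetime import datetime, timedelta

def can_reuse_certificate(not_valid_after, now,
                          minimum_validity=timedelta(minutes=10)):
    """True if the certificate's remaining lifetime meets the minimum."""
    return not_valid_after - now >= minimum_validity
```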

Common Troubleshooting

We observe the following log lines in the Horizon Connection Server logs:

  • 2016-03-17T17:07:43.359Z WARN  (0484-009C) <SocketAuthenticateThread> [MessageFrameWork] AuthCERTSSL: incoming issuer ‘4b81f0b2-baab-4273-bbff-48ac36f8bcaa.certsso.vdi.vmware.com’ cert is self signed but not in our store.
  • 2016-03-17T17:07:43.359Z WARN  (0484-009C) <SocketAuthenticateThread> [MessageFrameWork] Unable to accept connection, authentication failed, reason=authCertSsl

Cause: This indicates that the "Enrollment Service Client Certificate" has not been copied from the Connection Server to the Enrollment Server.

Resolution: Deploy the "Enrollment Service Client Certificate" from the Connection Server to the Enrollment Server, so that a secure connection can be established between the two.

After setting up True SSO, it is advisable to check its status on the Horizon Connection Server administrator dashboard.

If everything is configured correctly and all components are working well, True SSO status appears as below on the Dashboard:

Horizon 7 True SSO Advanced Features 6

  • The domain for which True SSO is configured will be displayed under “True SSO,” and it will be green.
  • The trust relationship will be green under “Domains.”

Below is a list of issues that may disrupt True SSO:

1. Issue: The domain name is not displayed in the dashboard.

Cause: True SSO configuration information for that domain is missing or not setup correctly.

Resolution: Please verify that True SSO was configured correctly using the “vdmUtil” tool and/or reconfigure.

2. Issue: The domain name displayed in the dashboard under “True SSO” is not green.

Cause: True SSO configuration information may not be accurate, or some component required for True SSO to work is not working or setup correctly.

Resolution: True SSO status for a domain may indicate okay (green), error (red) or warning (amber) on the dashboard.

To diagnose a problem, admins can click on the domain name, which will pop up a dialog displaying a warning or error message relating to the issue.

The following describes the various error and warning messages that can be displayed via the pop-up dialog, grouped by category:

Connection Server to Enrollment Server Connection Status

  • "Failed to fetch True SSO health information." Displayed when no health information is available for the dashboard to display. The most likely cause is that the Enrollment Server has not yet reported back any status updates. If this message lasts more than a minute, verify the Enrollment Server is turned on and reachable from the Connection Server.
  • "The <FQDN> enrollment server cannot be contacted by the True SSO configuration service." Displayed if True SSO configuration information has not been refreshed by the Connection Server for a long time. In a Horizon POD environment, all Enrollment Servers receive True SSO configuration information from a single Connection Server, which is also responsible for refreshing it every minute. This could happen if the specific Connection Server responsible for updating the configuration information lost connectivity to the reported Enrollment Server.
  • "The <FQDN> enrollment server cannot be contacted to manage sessions on this connection server." Displayed if a Connection Server cannot connect to the Enrollment Server. There is a known limitation in Horizon 7: instead of being displayed for all Connection Servers in the POD, this information is only shown for the Connection Server the admin has logged into. To check the connection status of all Connection Servers and Enrollment Servers, an admin needs to log in to each Connection Server individually and check the status on the Dashboard.

Enrollment Server to Active Directory Connection Status

  • "This domain <Domain Name> does not exist on the <FQDN> enrollment server." Displayed if True SSO is configured for a domain but the Enrollment Server has not yet received any configuration information from the Connection Server. If this message lasts for more than a minute, check that all the Connection Servers in the POD are working as expected and support True SSO.
  • "The <FQDN> enrollment server's connection to the domain <Domain Name> is still being established." Displayed if True SSO is configured for the domain, but the Enrollment Server is yet to connect to the Domain Controller. If the message lasts for more than a minute, verify the Enrollment Server has network connectivity, can resolve the name of the domain and can reach the Domain Controller.
  • "The <FQDN> enrollment server's connection to the domain <Domain Name> is stopping or in a problematic state." Displayed if the Enrollment Server encounters a problem reading PKI information from the Domain Controller. Check the specific Enrollment Server's log file, which will provide more information about the specific Domain Controller. It might be caused by an issue with the Domain Controller itself, or by DNS not being configured properly.
  • "The <FQDN> enrollment server has not yet read the enrollment properties from a domain controller." Displayed when the Enrollment Server has not yet connected to the Domain Controller, either because it is just starting up or because the domain is new and was just added to the environment. If this message lasts for more than a minute, check whether network connectivity is extremely slow or the Enrollment Server is having difficulty accessing the Domain Controller.
  • "The <FQDN> enrollment server has read the enrollment properties at least once, but has not been able to reach a domain controller for some time." Displayed when the Enrollment Server cannot poll the Domain Controller for PKI-related environment changes. An Enrollment Server reads the full PKI configuration from the Domain Controller when it connects for the first time, and polls for incremental changes every two minutes. This message may not indicate True SSO failure: as long as the Certificate Authority servers are able to access the Domain Controller, the Enrollment Server will still be able to issue certificates for True SSO.
  • "The <FQDN> enrollment server has read the enrollment properties at least once, but either has not been able to reach a domain controller for an extended time or another issue exists." Displayed if the Enrollment Server cannot reach the Domain Controller for an extended period of time. During this time, the Enrollment Server tries to discover an alternative Domain Controller for that domain. If a CA server is able to access a Domain Controller, the Enrollment Server will still issue certificates for True SSO; otherwise the Enrollment Server will fail to issue certificates for True SSO.

Enrollment Certificate Status

  • "A valid enrollment certificate for this domain's <Domain Name> forest is not installed on the <FQDN> enrollment server, or it may have expired." Displayed when a valid Enrollment Certificate for the domain is missing from the Enrollment Server. Most likely, the Enrollment Certificate is either not installed on the Enrollment Server, or invalid or expired. The Enrollment Certificate is issued by an Enterprise CA of the domain. On the Enrollment Server, it can be verified by opening the Certificate Management snap-in for the local computer store in MMC and confirming the certificate exists in the "Personal" certificate container and is valid. Alternatively, the Enrollment Server's log file provides additional information regarding the state of all the certificates that were located. To resolve the issue, follow the View Admin Guide and re-deploy the Enrollment Certificate on the Enrollment Server.

Certificate Template Status

  • "The template <Name> does not exist on the <FQDN> enrollment server domain." Displayed if the Certificate Template configured for True SSO is not set up correctly, or the template name was misspelled during True SSO configuration. To resolve the issue, follow the View Admin Guide to set up the Certificate Template correctly on the Enterprise CA, and check the True SSO configuration using the "vdmUtil" tool.
  • "Certificates generated by this template can NOT be used to log on to Windows." Displayed when the Certificate Template configured for True SSO is missing certain options required for it to work. To resolve this issue, follow the View Admin Guide and set up the Certificate Template correctly on the Enterprise CA.
  • "The template <Name> is smartcard logon enabled, but cannot be used." Displayed when the Certificate Template configured for True SSO is missing certain options required for it to work. To resolve this issue, follow the View Admin Guide and set up the Certificate Template correctly on the Enterprise CA.

Certificate Server Configuration Status

  • "The certificate server <CN> of <CA> does not exist in the domain." Displayed if the Common Name for the CA is not configured correctly. Verify that the Common Name (CN) specified for the CA in the True SSO configuration is accurate and spelled correctly.
  • "The certificate is not in the NTAuth (Enterprise) store." Displayed if the CA is not a member of the forest. To resolve the issue, manually add the CA certificate to the NTAuth store of the forest in question.

Certificate Server Connection Status

  • "The <FQDN> enrollment server is not connected to the certificate server <CN> of <CA>." Displayed if the Enrollment Server is not connected to the CA. This might be a transitional state, occurring when the Enrollment Server has just started or the CA was recently added or configured for True SSO. If the message lasts for more than a minute, it indicates that the Enrollment Server failed to connect to the CA. To resolve the issue, verify the Enrollment Server can resolve the name of the CA, check the network connectivity between the Enrollment Server and the CA, and confirm the system account for the Enrollment Server has permission to access the CA.
  • "The <FQDN> enrollment server has connected to the certificate server <CN> of <CA>, but the certificate server is in a degraded state." Displayed if the CA has dramatically slowed down while issuing certificates. If this message persists for an extended time, check whether the CA or the Domain Controller(s) are overworked. Once the issue is resolved and the CA resumes as normal, this message will no longer be displayed.
  • "The <FQDN> enrollment server can connect to the certificate server <CN> of <CA>, but the service is unavailable." Displayed if the Enrollment Server is connected to the CA, but unable to issue any certificates for True SSO. This is a transitional state and will update rapidly; if the CA does not recover or does not become able to issue certificates, the state will be updated to "Disconnected." To resolve the issue, check that the CA is up, the Enrollment Server can reach it, and the CA is properly configured for True SSO.

3. After successfully setting up True SSO, we see logon attempts failing, and the following error is reported in the logs:

LogonUI] cred::ReportResult(): Reported authentication failure. Status=0xC00000BB (WinErr=50) and subStatus=0x00000000 (WinErr=0).

This is a PKI environment issue that prevents smart card logon from succeeding with
the certificates generated by the CA. The following steps should fix the issue. VMware recommends performing one step at a time, then testing to see whether the issue is fixed before proceeding to the next.

1. In the majority of cases, this is due to a problem with the Domain Controller certificate, and the resolution is to refresh it (or install it if not already present). The Domain Controller certificate must be generated using one of these templates: 'Domain Controller', 'Domain Controller Authentication' or 'Kerberos Authentication.' Only one of these should be present; we will refer to it as the 'Domain Controller certificate' below. To refresh:

  1. Load the Certificates MMC and target it at the computer account: 'Start' -> 'Run' -> 'MMC' -> 'File' -> 'Add/Remove Snap-in' -> 'Add' -> 'Certificates' -> 'Add' -> 'Computer Account' -> 'Next' -> 'Finish' -> 'Close' -> 'OK'
  2. Expand: 'Certificates (Local Computer)' -> 'Personal' -> 'Certificates'
  3. Right-click the 'Domain Controller certificate' -> 'All tasks' -> 'Renew/Request Certificate with New Key'
  4. Restart the Domain Controller.

2. Deploy the CA root certificate via the domain GPO to Trusted Root Certification Authorities. Perform this step on all domains that users may log on to using True SSO. Refer to microsoft.com/en-us/library/cc772491(v=ws.11).aspx.

3. Make sure the template used by True SSO does not have “Do not include revocation information in issued certificate” selected. Refer to vmware.com/euc/2016/04/true-sso-setting-up-in-a-lab.html section: Adjust the settings of various properties of the new template as marked in screenshot.

Conclusion

This concludes our blog post on what to consider when setting up True SSO in a production environment, as well as its various configuration options. We covered the domain and forest trust scenarios in which VMware supports True SSO, reviewed advanced settings that let admins tweak True SSO when it does not work as expected out of the box, and walked through troubleshooting guidelines for common issues, including the various warning and error messages that can be displayed on the Dashboard for True SSO.

Because you liked this blog:

The post VMware Horizon 7 True SSO: Advanced Features appeared first on VMware End-User Computing Blog.

VMware on VMware: vRealize Log Insight


How VMware IT Leverages vRealize Log Insight

vRealize Log Insight

VMware IT uses different tools, but VMware vRealize Log Insight really stands out for the way it helps VMware IT scale.

Managing the VMware private cloud is a big job. VMware's private cloud spans six data centers and more than 325,000 allocated VMs on more than 2,000 hosts.

Just like VMware’s customers, VMware IT needs visibility into the health of their environment. Log Insight was the right tool to help them scale, and understand the applications used to run their cloud services.

In the words of VMware OneCloud engineer Caleb Stephenson: "Log Insight has proven itself as a critical part of our management and monitoring infrastructure. It does log management across many different components that are critical to our infrastructure, from customer-facing applications, to infrastructure running large SAP instances, to the normal run-of-the-mill testing and development environments. It gives multiple teams the deep operational visibility we need with dashboards and analytics. And now we have proactive alerts that identify trends and events. That gives our support team an intuitive way to troubleshoot issues. In some cases we can even take action before an outage occurs. In the end it really helps us improve SLA performance for our customers.

"It's an unbeatable tool for Root Cause Analysis (RCA). In the past, RCA would take a lot longer… and without a quick way to capture logs, it was sometimes impossible. Now we have a simple, easy way to pursue RCA and share log snippets."

 

Watch the Log Insight video here:

 

What is vRealize Log Insight?

vRealize Log Insight delivers intelligent log management for infrastructure and applications across physical, virtual and cloud environments. Log Insight enables administrators to connect to everything in their environment (OS, apps, storage, network devices), providing a single location to collect, store and analyze logs at scale. It is highly scalable and designed to handle Big Data: Log Insight can digest any type of log data, so users do not need to think about format; they can just send their data to Log Insight.

 

How does vRealize Log Insight fit into Intelligent Operations and overall Cloud Management Platform (CMP)?

Log Insight comes with built-in knowledge and native support for vSphere and other VMware products, like VMware Horizon® with View, vRealize Operations and vRealize Automation™. Log Insight integrates with VMware vRealize Operations™ to bring unstructured and structured data together, for significantly enhanced end-to-end operations management.

Some Useful links:

Got questions? Leave a comment below.

 

The post VMware on VMware: vRealize Log Insight appeared first on VMware Cloud Management.

Insight Enterprises Relies on VMware and Dell EMC to Modernize their Infrastructure


Continuing on the theme of the cloud management success from our customers across the globe, today I’d like to share with you the story of Insight Enterprises. Insight Enterprises is a Fortune 500-ranked global provider of hardware, software, cloud and service solutions. Companies come to Insight for help with making better decisions with technology. In other words, Insight helps their clients find the right intelligent technology solutions to help them run smarter.

So, where does the Insight IT team go to modernize their own cloud infrastructure? Watch the Insight video here, or keep reading for more details!

 

 

Insight came to VMware, and here is why: in the words of Carlos Sotero, Insight’s global IT director responsible for data center services,

"VMware provides the foundation that we need across the board to work efficiently and consistently so that we can provide the quality of service and applications that our customers expect."

The Environment

Insight's infrastructure is 95% virtualized, running about 2,000 VMs across 338 hosts. The OS mix is roughly 70% Windows and 30% Linux, with a bit of UNIX as well. This complex environment hosts a number of mission-critical applications driving Insight's IT solutions business. Its users, more than 6,000 Insight employees plus a distributed partner and vendor community, have been asking for more stability in existing IT services, faster turnaround on infrastructure requests, and greater flexibility to meet new demands.

The Selection Process and the Requirements

As a global provider of IT solutions that helps clients manage and transform their businesses, Insight knows the enterprise software landscape like nobody else. They approached VMware for a comprehensive SDDC solution. The key decision-making factors were:

  • The proven EMC Enterprise Hybrid Cloud solution, based on best-of-breed hardware and software infrastructure
  • A VMware suite of cloud management tools that is easy to install, letting Insight realize value faster
  • The availability of EMC and VMware professional services, who would step in to install everything while the internal Insight IT team stayed focused on day-to-day activities

The Cloud Management Solution

Specific to VMware cloud management tools, Insight decided to deploy the entire spectrum of vRealize products and beyond.

All of this works in close conjunction with VMware NSX network virtualization and is based on the vSphere 6.0 Enterprise Plus hypervisor. The overall solution is deployed within the framework of Dell EMC Enterprise Hybrid Cloud, which provides a proven foundation for deploying and operating an enterprise-grade hybrid cloud environment.


The Benefits and the Future Direction

The benefits already seen by Insight are significant. According to Carlos, Insight IT can now deliver projects two to four weeks faster through automated infrastructure provisioning; provisioning new VMs takes minutes rather than days. They have also cut reconfiguration time by 75% by using infrastructure provisioning blueprints in vRealize Automation, and they are providing better IT services to all stakeholders while logging 116 fewer incident tickets each month with the help of vRealize Operations. Overall, they are seeing lower operating costs from increased utilization of existing hardware assets, across both compute and network resources.



Based on the results of this initial deployment, Insight IT is eyeing future expansion of the platform. One opportunity they see, for example, is migrating their ERP application from RISC-based platforms to x86 virtualization. They are looking at expanding their Enterprise Hybrid Cloud for that initiative to gain the automation and orchestration needed to make their ERP more agile, directly improving their revenue stream. More on that in the future!

View the Insight case study video here


The post Insight Enterprises Relies on VMware and Dell EMC to Modernize their Infrastructure appeared first on VMware Cloud Management.
