Channel: VMPRO.AT – all about virtualization

Onward & Upward: Recapping NSX-T in 2018 and Beyond

This post was originally published on this site ---

Overview

 

2018 was a big year for NSX-T Data Center, VMware’s network virtualization platform designed and optimized for application workloads running on both vSphere and non-vSphere environments, container platforms, and public clouds. In addition to supporting core use cases around security, automation, and multi-cloud networking, NSX-T Data Center has continued to expand the capabilities of the platform to include enabling networking and security for emerging use cases around containers and cloud-native applications. To support these use cases and increasingly deliver value to customers, NSX-T Data Center saw new versions released, improvements made to plug-ins, and integrations across an expanding ecosystem. 2018 also saw NSX-T Data Center gain significant momentum in terms of customer adoption, delivering enhanced and new capabilities across all use cases to a quickly growing number of customer deployments.

 

Product Releases, Plug-ins, and Integrations

 

In June, NSX-T Data Center 2.2 was released, bringing with it the ability to manage Microsoft Azure based workloads, referred to as NSX Cloud. The NSX-T Data Center platform was also updated to provide networking and security infrastructure for VMware Cloud on AWS. Other notable capabilities included an enhanced data path mode in N-VDS, an improved controller cluster deployment experience, guest VLAN tagging, advanced load balancing enhancements, an officially supported Terraform provider, the introduction of the Customer Experience Improvement Program, and much more. Check out the NSX-T 2.2 announcement blog and the release notes linked in it for all the technical details.

In September, after VMworld US, NSX-T Data Center 2.3 was made available, introducing further enhancements across multi-cloud, on-premises, and bare metal environments. The 2.3 release extended NSX-T’s capabilities to bare metal and native AWS workloads, and improved NSX support in Microsoft Azure. There was also improved support for NFV workloads, along with enhancements to advanced networking and security functions. Overall deployment, management, and use of NSX-T Data Center were simplified, and consumption of the platform in OpenStack environments was further improved. Take a look at the NSX-T 2.3 announcement blog and associated release notes for more in-depth explanations of the feature enhancements.

To support rapidly growing use cases for containers and cloud-native applications, the NSX-T Container Plug-in iterated rapidly in parallel, ensuring that the capabilities of the NSX-T Data Center platform are brought into application environments built on Kubernetes, Pivotal Cloud Foundry, and OpenShift. The container plug-in enhancements came as versions 2.2.1 and 2.3.1, and the specifics of each async release can be found in their respective release notes.

NSX-T Data Center is also integrated into VMware PKS, and 2018 saw capabilities introduced to the PKS platform to enhance the networking and security of Kubernetes workloads beyond what is typically built in to vanilla Kubernetes. There is an excellent post detailing PKS integration with NSX-T Data Center in a lab environment, complete with topologies, screenshots, and video walkthroughs. Also worth reading is the detailed post explaining how to simplify Kubernetes networking and security with NSX-T Data Center.

 

Getting Hands-on with NSX-T Data Center

 

To enable anyone to get their hands directly on the product and learn more about it from firsthand use, 2018 also saw a number of new and refreshed NSX-T hands-on labs:

  • HOL-1926-01-NET – VMware NSX-T Data Center – Getting Started
  • HOL-1926-02-NET – Integrating Kubernetes with VMware NSX-T Data Center
  • HOL-1926-03-NET – VMware NSX-T Data Center Operations, Troubleshooting and API Consumption
  • HOL-1935-01-NET – VMware Pivotal Container Service on VMware NSX-T – Getting Started
  • HOL-1943-01-NET – VMware NSX-T and Pivotal Application Service – Getting Started

 

NSX-T Data Center Deep Dive Sessions

 

At VMworld US and VMworld Europe in 2018, a wide range of sessions focused on NSX-T topics. The video recordings of these sessions are listed and linked to below.

 

Customer Momentum

 

In 2018, customer adoption of the NSX-T Data Center platform increased rapidly, and it continues to accelerate entering 2019. These deployments cover a wide range of NSX-T use cases: security, with a growing number of micro-segmented applications; automation, through cloud-native platforms like Kubernetes and Cloud Foundry and tools like Terraform; and multi-cloud networking across private and public cloud environments including AWS and Azure. All of this comes while removing operational complexity and modernizing security at a time when our customers’ growth and innovation are paramount. Some of these NSX-T users talked about their experiences with NSX-T Data Center at VMworld US and VMworld Europe. Watch their sessions below to hear their insights on NSX-T in their own words.

 

Onward and Upward for NSX-T Data Center

 

2018 proved to be a banner year for NSX-T Data Center, with a long list of new and enhanced capabilities built into the platform with the 2.2 and 2.3 releases, extending its value across on-premises, cloud, and container environments. These capabilities were further improved with asynchronous updates to the NSX-T Container Plug-in to maximize the benefits of NSX-T for container and cloud-native use cases. And the continued integration and enhancement of NSX-T in PKS brought enterprise-grade networking and security into an enterprise-grade Kubernetes platform. From hands-on labs, to deep dive sessions, to rapid customer adoption and sessions about their experiences, NSX-T Data Center drove increasing value to customers, and you and your organization can realize these benefits as well. As we enter 2019, NSX-T Data Center will continue to be improved with a wider, deeper, and more robust set of features and functions, covering virtually all use cases for the platform across security, automation, multi-cloud networking, containers and cloud-native apps, and more. Keep an eye on VMware’s Network Virtualization Blog for information on upcoming releases and to get all the latest on feature demos, customer stories, events, and more.

 

The post Onward & Upward: Recapping NSX-T in 2018 and Beyond appeared first on Network Virtualization.


Announcing General Availability of Cloud Automation Services


General Availability of Cloud Automation Services

 

The Cloud Automation Services, including Cloud Assembly, Service Broker and Code Stream, are now generally available – Sign up for a free 30-day trial today!

Our cloud automation services make it easy and efficient for IT and developers to get what they need to build and deploy applications. The cloud automation services consist of VMware Cloud Assembly, VMware Service Broker and VMware Code Stream. Together, these services streamline application delivery, enable cloud flexibility and choice, and control costs. Additionally, these services facilitate collaboration and increase agility between traditionally siloed groups helping to further accelerate business innovation.

Updates

Since the initial availability of our services at VMworld in Las Vegas, we have made many enhancements and improvements to better help IT and developers across a broad spectrum of their needs. While all of the enhancements and new features are too many to list here, we have expanded our multi-cloud support beyond vSphere based clouds, Azure, and AWS to include cloud agnostic blueprints for Google Cloud Platform! Native cloud services across AWS and Azure have also expanded. Our services are now more extensible (Git integration, AD support, etc.) and more capable, with basic day 2 actions.

Have you had a chance to try our new services and see how they could impact your organization? Sign up for a free 30-day trial today! It’s easy to get started and super-fast at getting workloads across public, private or hybrid clouds.

 

Start a Cloud Automation Services Trial

Read the EMA Quick Take

 

Learn more about our Cloud Automation Services:

Cloud Assembly

Service Broker

Code Stream

The post Announcing General Availability of Cloud Automation Services appeared first on VMware Cloud Management.

Google™ Recognizes VMware’s Commitment Through Android Enterprise Recommended for EMMs Validation


VMware is excited to be an inaugural validated partner for the launch of Android Enterprise Recommended for EMMs! Continue reading to learn more about Android Enterprise Recommended and the requirements we met, which are available today, to become a validated EMM partner.

Android Enterprise Recommended

Long-time readers of the VMware Blog may remember when we announced our vision for making Android Enterprise the default path forward for any enterprise use-case on Android devices.

Android Enterprise has made enormous strides since its initial launch in 2015 (previously coined Android for Work), and VMware has been working closely with Google since its inception to ensure that organizations can take advantage of the latest features right away. Touting one of the best experiences for any enterprise use-case, from BYOD to dedicated devices to corporate-owned, personally-enabled devices and everything in between, Android Enterprise is constantly evolving with Google’s full focus.

Earlier this year, Google unveiled Android Enterprise Recommended for devices. Starting with knowledge worker devices and expanding to rugged, this program greatly simplifies the device selection process for customers looking to break into the world of deploying Android devices. Android Enterprise Recommended validates the devices that meet Google’s requirements such as support for zero-touch enrollment and a guarantee of regular security patches and OS updates.

You can read more about this program at https://www.android.com/enterprise/recommended.

VMware Workspace ONE is Android Enterprise Recommended Validated

After the success of Android Enterprise Recommended for devices, Google announced today that they are expanding this program even further to include EMM providers. Android Enterprise Recommended for EMMs provides additional guidance so organizations can rest assured they are selecting a platform with the breadth of capabilities and the long-term support required for scalable and successful Android Enterprise deployments. VMware is proud to announce that VMware Workspace ONE Unified Endpoint Management (UEM) is one of a few validated partners from Day 0, and it speaks volumes that Google recognizes the commitment by VMware and the deep set of capabilities that Workspace ONE offers for Android.

To meet the requirements of Android Enterprise Recommended for EMMs, VMware had to prove our integration with advanced Android Enterprise features.  All of these features are available today with Workspace ONE UEM. One requirement is the support for at least two Android Enterprise management sets – and we’re proud that we support all three: work profile, fully managed device and dedicated device. In addition, we support corporate-owned, personally-enabled devices. Some other features that we have implemented to meet the Android Enterprise Recommended validation requirements include:

  • SafetyNet integration
  • Advanced zero-touch enrollment
  • Advanced passcode management
  • Enterprise factory reset protection
  • … and many more

VMware is committed to innovating and working closely with Google to deliver the latest and greatest functionality on Android. We’re excited to be a validated partner in the Android Enterprise Recommended program that’s changing the way that enterprises think about mobility and select their providers with confidence. Learn more about our history with Android Enterprise and our vision on the VMware EUC blog.

The post Google™ Recognizes VMware’s Commitment Through Android Enterprise Recommended for EMMs Validation appeared first on VMware End-User Computing Blog.

The Business Case for SaaS-based Multi-Cloud Automation and Management


The Business Case for SaaS-based Multi-Cloud Automation and Management

By Stephen Elliot, Program Vice President, IDC

As line-of-business managers continue to play a larger role in dictating IT spending across multiple clouds, various IT teams (infrastructure & operations, developers, site reliability engineers, cloud architects, etc.) need to consider software-as-a-service (SaaS)-based automation and management solutions. Decision-makers should evaluate SaaS as a sensible alternative to on-premises automation and management solutions for the following critical reasons:

  • Browser-based access to software applications enables access to data anywhere, on any device, on demand.
  • Pay-as-you-go subscription pricing enables a more flexible and predictable financial model and lowers the total cost of ownership (TCO) because it eliminates the need to invest in large hardware footprints.
  • Management and automation can be performed in a more integrated, consistent fashion.
  • SaaS application delivery often includes top-tier datacenters and security mechanisms and usually consists of a multitenant architecture.
  • Implementation cycles are fast because the vendor takes responsibility for successfully delivering the service capabilities, eliminating the need for hardware provisioning.
  • Centralized software management and accelerated upgrades and release cycles eliminate the need for customers to download and install software patches and allow customers to automatically obtain new features.

SaaS-based automation and management solutions provide important business and technology benefits because of the underlying technology architecture. The SaaS model varies significantly from on-premises automation and management solutions that require customers to take on high levels of administration, upgrading, patching, and integration activity.

Most IT organizations continue to have too many silos, manual processes, poorly integrated management and orchestration tools, and fragmented development processes, as well as a general lack of collaboration across I&O and development teams. These challenges reduce operating agility, create political inertia, and increase operating costs. They also make delivering and measuring business outcomes significantly more difficult.

A major benefit of investing in SaaS management and automation is that, as more processes are automated, they become more secure while reducing the risk of human error. Even when manual development or operations handoffs remain a requirement during an automated process, the fact that a process has been defined, agreed upon by executives, and automated means that the process is significantly more secure than it was when it was fully manual.

Modern IT teams can use SaaS management and orchestration to deliver business outcomes across people, processes, and technologies. Potential SaaS-based multicloud management and orchestration solutions include VMware Cloud Automation Services, which is a SaaS-based bundle of automation and management capabilities that provide programmable provisioning across multiple clouds.

For details on VMware Cloud Automation Services, including VMware Cloud Assembly for multi-cloud provisioning, VMware Service Broker for creating a catalog of curated services and templates from multiple clouds, and VMware Code Stream for improving delivery of application or IT code across multiple clouds, download the IDC paper, “Reduce Multicloud Costs and Risks Through SaaS Automation and Management Capabilities,” sponsored by VMware.

The Cloud Automation Services, including Cloud Assembly, Service Broker and Code Stream, are now generally available – Sign up for a free 30-day trial today!

Learn more about our Cloud Automation Services:

Cloud Assembly

Service Broker

Code Stream

The post The Business Case for SaaS-based Multi-Cloud Automation and Management appeared first on VMware Cloud Management.

Cloud Automation Services Webinar


Do you want to streamline infrastructure and application delivery? Do you need to provision multi-cloud workloads? Are you looking to implement or enhance infrastructure as code?

 

Learn how VMware’s Cloud Automation Services can accelerate your business and help propel your IT into the future. Our services streamline multi-cloud infrastructure and application delivery, enhance visibility and cross-functional collaboration, and provide continuous delivery and release automation so that you can focus on what matters most: delivering what the business demands. They also facilitate collaboration between traditionally siloed groups, helping to further accelerate business innovation. We will conclude the webinar with a demo of the services to showcase their capabilities in real time.

Register today for our upcoming webinar

START DATE: 1/24/2019

START TIME: 10:00 AM PST

DURATION: 60 MINUTES

Learn more about our Cloud Automation Services:

 

Read the EMA Quick Take

Read the IDC Spotlight: “Reduce Multicloud Costs and Risks Through SaaS Automation and Management Capabilities”

 

The Cloud Automation Services, including Cloud Assembly, Service Broker and Code Stream, are now generally available – Sign up for a free 30-day trial today!

Visit our website

Cloud Assembly

Service Broker

Code Stream

The post Cloud Automation Services Webinar appeared first on VMware Cloud Management.

What Users Really Think About VMware vRealize Automation


Over the last year, VMware vRealize Automation has grown in popularity among enterprise tech professionals in the IT Central Station user community. vRealize Automation is ranked as a top cloud management tool, based on over one hundred user reviews from customers in a wide range of industries.

 

vRealize Automation

 

As the leading site for enterprise technology user reviews, IT Central Station is always looking for user feedback that can help our growing community of tech professionals make future buying decisions for their companies. When we get vRealize Automation reviews from VMware customers, we have the unique opportunity to hear honest feedback about the solution without vendor bias.

As we listened to what people had to say about vRealize Automation, we found that users highlighted similar benefits that made the product stand out in the cloud management space. In addition, we discovered that many of these similarities centered on two types of vRealize Automation users: the consumer, including the developer community, and the provider (IT teams). In this post, we go into two of the top-mentioned themes discussed by each user community in more detail. We hope this will help others looking for impartial, vendor-neutral tech reviews in their market research.

 

Agility

One valuable benefit that came up among the provider community in particular is vRealize Automation’s agility:

 

“The solution has helped us to increase infrastructure agility, mostly because, in addition to it being able to do its thing on its own, it has tie-ins to other parts of our CI/CD pipeline.” – Allen N., Systems Administrator at a tech services company.

 

“It is primarily used for developers to spin up their own VMs and destroy them at will, afterwards my group spins it up in production machines. Probably, its most valuable feature is it takes time off of my schedule to quickly, securely, and conveniently deploy virtual machines, then I can work on other things.” – Brian W., Systems Administrator at a manufacturing company with over 1,000 employees.

 

Choice

Providing a technology platform that supports the need for choice – choice of environment and choice of tools – is another popular benefit that came up among vRealize Automation users:

 

“We’ve gotten to the point now where we’ve used it to create this whole environment for users to be able to self-provision their own machines and get them into networks. We have a very large number of different networks, which means that many options of where they can put those VMs; their own environment.” – Senior IT Engineer at a healthcare company with over 10,000 employees.

 

“The most valuable features are that it’s multi-tenant and the ability for scale. From a customer perspective, they log in and they have Catalog: what services are available to them. They simply click on that and then there’s an option: I can have a Linux server, I can have a Windows server. They select it, configure it, how many CPUs, how much memory, how much storage, and hit the button, submit. It’s that easy.” – Lloyd H., Linear Dimensions, Consultant at a government organization with over 10,000 employees.

 

Control

For IT providers, vRealize Automation is particularly valuable because of its ability to provide governance across clouds:

 

“The product is really excellent. VMware provides a complete ecosystem. And it covers multi-cloud, which is where the market is going. We are able to cover compute, network, storage, etc. We have been able to take it to the next level where VMware is providing the validated designs, VVD.” – Architect at a tech services company with over 10,000 employees.

 

“It gives us one interface to control multiple environments. It’s an easier way to look at how a large chunk of information or data or processors are being used, and what they’re being used for.” – Scott S., Senior Systems Admin at a retailer with over 1,000 employees.

 

Extensibility

Another common theme that came up among vRealize Automation reviews from consumers is its extensibility, especially when it works well with other VMware and third-party solutions:

 

“Valuable features include the extensibility of it and the customization of a lot of the Blueprints, that you can customize, and the community as a whole. There’s a ton of community-generated Blueprints that might be (helpful) to set up a design for your automation needs, that you can use as a base and go on from there and make changes to it.” – Daniel P., Infrastructure Architect at a tech services company with over 10,000 employees.

 

“I personally spend a lot of time in vRealize Orchestrator, so being able to directly tie into the back end on the APIs, I find that to be what really is the most advantageous thing for me.” – Senior Associate at a consultancy with over 10,000 employees.

 

Want to learn more about what enterprise technology professionals really think about this tool? Read more VMware vRealize Automation reviews on IT Central Station.

Want to share your opinions about vRealize Automation with our community? Write your own review.

 

 

This blog was contributed by Danielle Felder, Senior Social Media and Content Manager, IT Central Station

Learn More about vRealize Automation

 

The post What Users Really Think About VMware vRealize Automation appeared first on VMware Cloud Management.

Windows Cloud-Init solution


Windows Cloud-Init solution

 

Ten years ago, Cloud was the buzzword at every IT conference. The Cloud promised to bring the modern IT business to a new level of opportunity. Back then I was an engineer with little IT experience, attracted by the big transformations ahead and wondering where it would all lead. Almost a decade later, we have three types of cloud solutions: the on-premises private cloud, the public cloud, and the mix of both worlds called the hybrid cloud. The competition between them and the increasing number of business requirements set the need for a new kind of automation tools and products to bridge the gaps and offer a seamless experience. With the recent announcement of VMware Cloud Services, VMware is heading strongly in this direction. Behind the scenes, VMware Cloud Services is an extensible SaaS solution designed to help us operate, secure, and monitor workloads across different cloud providers.

Now that we have the flexibility to choose which workload goes where, we have to solve another common problem: how to customize the guest instance to fit our software application(s).

Thanks to the Cloud-Init package developed by Canonical, this is an easy task for the most popular Linux and FreeBSD operating systems. However, Cloud-Init doesn’t support Windows guest instances, and this is a problem for all applications that require Windows to run.

The good news is that a company called “Cloudbase Solutions” has developed an equivalent of Cloud-Init for Windows. Cloudbase-Init provides out of the box: user creation, password injection, static network configuration, hostname setting, SSH public keys, and user-data scripts such as PowerShell, Batch, Python, Cloud Config and others.

In this blog post tutorial, I will demonstrate how to prepare an Azure Windows image with Cloudbase-Init and then customize a guest instance provisioned from that image. The procedure is identical for all supported cloud vendors and versions of Windows. The tutorial is split into two parts: image preparation and guest instance customization.

There are multiple ways to create a custom Azure Windows image. To keep things simple for the purposes of this demonstration, I will customize one of the Windows 10 Azure marketplace images and then capture it as a custom image.

Prepare a Cloudbase-Init Azure Windows 10 image

  1. Log in to the Microsoft Azure Management portal.
  2. Click Create a Resource and search for a Windows 10 image in the marketplace search bar.
    Azure Windows 10 image

  3. Click Create and provide the required inputs to provision a new Windows guest instance from this image.
    Deploy Azure guest Windows 10 image
    Note: Make sure to request a public IP address and allow RDP inbound access, since we need them to connect to the new Windows 10 instance and install Cloudbase-Init.
  4. When the machine is up and running, log in with an RDP client and follow the steps below to set up Cloudbase-Init:
    1. Download the cloudbase-init installation binaries from https://github.com/openstack/cloudbase-init.
      cloudbase-init on GitHub
      Supported cloudbase-init cloud providers (stable version): 
      • OpenStack (web API) 
      • OpenStack (configuration drive) 
      • Amazon EC2 
      • CloudStack 
      • OpenNebula 
      • Ubuntu MaaS
         

      The guest instance metadata is picked up by the so-called cloudbase-init metadata services. They then expose the data to the cloudbase-init plugins, which are responsible for the guest instance customization.
      For example, they configure a custom instance host name, network address, username, password, and more. 

      Available cloudbase-init plug-ins (stable version): 

      • Setting host name (main) 
      • Creating user (main) 
      • Setting password (main) 
      • Static networking (main) 
      • Saving public keys (main) 
      • Volume expanding (main) 
      • WinRM listener (main) 
      • WinRM certificate (main) 
      • Scripts execution (main) 
      • Licensing (main) 
      • Clock synchronization (pre-networking) 
      • MTU customization (pre-metadata-discovery) 
      • User data (main) 
      • Configuring selected plugins
         

      Available cloudbase-init user-data interpretors (stable version):  

      • PEM certificate 
      • Batch 
      • PowerShell 
      • Bash 
      • Python 
      • EC2 format 
      • Cloud config 
      • Multi-part content 
      • Sysnativeness
         

      The guest instance user-data goes by different names across cloud providers (Azure: custom data; AWS/OpenStack/Ubuntu MaaS/CloudStack: userdata). It is custom user content exposed to the guest instance by the running cloud infrastructure, and its purpose is to provide additional data to customize the instance as much as you need, provided the cloud initialization service supports this feature.
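The metadata-service-to-plugin flow described above can be sketched in a few lines of Python. This is an illustrative simplification only, not the real cloudbase-init code: the class and method names (`FakeMetadataService`, `SetHostNamePlugin`, `run_plugins`) are invented for this sketch, but the shape mirrors the described design, where a provider-specific metadata service supplies values and each plugin applies one customization.

```python
# Illustrative sketch of the cloudbase-init flow: a metadata service exposes
# values from the cloud provider, and plugins consume them to customize the guest.
# All names here are hypothetical, not the real cloudbase-init API.

class FakeMetadataService:
    """Stands in for a provider-specific metadata service (e.g. Azure)."""
    def get_host_name(self):
        return "demoname"

    def get_admin_username(self):
        return "vmitov"

class SetHostNamePlugin:
    """Mimics a customization plugin: read one value, apply one setting."""
    def execute(self, service, state):
        state["hostname"] = service.get_host_name()

class CreateUserPlugin:
    def execute(self, service, state):
        state["user"] = service.get_admin_username()

def run_plugins(service, plugins):
    """Run each plugin against the metadata service, collecting applied settings."""
    state = {}
    for plugin in plugins:
        plugin.execute(service, state)
    return state

if __name__ == "__main__":
    result = run_plugins(FakeMetadataService(),
                         [SetHostNamePlugin(), CreateUserPlugin()])
    print(result)  # {'hostname': 'demoname', 'user': 'vmitov'}
```

In the real product, swapping cloud providers means swapping the metadata service (via the `metadata_services` config option shown below) while the plugins stay the same.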

       

    2. Run the CloudbaseInitSetup_x64 installation binary and follow the on-screen instructions.
      Cloudbase-init installer
    3. Click Install and, once the installation completes, configure the cloudbase-init config file before finalizing the installation.

    4. Navigate to the chosen installation path and under the conf directory, edit the file “cloudbase-init-unattend.conf”

      [DEFAULT]
      username=vmitov
      groups=Administrators
      inject_user_password=true
      first_logon_behaviour=no
      config_drive_raw_hhd=true
      config_drive_cdrom=true
      config_drive_vfat=true
      bsdtar_path=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\bin\bsdtar.exe
      mtools_path=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\bin\
      verbose=true
      debug=true
      logdir=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\log\
      logfile=cloudbase-init-unattend.log
      default_log_levels=comtypes=INFO,suds=INFO,iso8601=WARN,requests=WARN
      logging_serial_port_settings=COM1,115200,N,8
      mtu_use_dhcp_config=true
      ntp_use_dhcp_config=true
      local_scripts_path=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\LocalScripts\
      metadata_services=cloudbaseinit.metadata.services.azureservice.AzureService
      plugins=cloudbaseinit.plugins.windows.createuser.CreateUserPlugin,cloudbaseinit.plugins.common.setuserpassword.SetUserPasswordPlugin,cloudbaseinit.plugins.common.sethostname.SetHostNamePlugin,cloudbaseinit.plugins.common.userdata.UserDataPlugin
      allow_reboot=true
      stop_service_on_exit=false
      check_latest_version=false



      The various config options above are used by the cloudbase-init services and plugins to customize the guest instance user experience.
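Since a typo in this file can silently break the guest customization, it is worth checking that the edited config still parses as valid INI and contains the keys you rely on before running sysprep. A small standalone Python sketch follows; the `SAMPLE` string and the `check_config` helper are invented for illustration, and in practice you would read the real `cloudbase-init-unattend.conf` contents instead:

```python
# Sanity-check a cloudbase-init config: does it parse as INI, and are the
# keys we depend on present in the [DEFAULT] section? SAMPLE is a trimmed
# stand-in for the real cloudbase-init-unattend.conf shown above.
import configparser

SAMPLE = """
[DEFAULT]
username=vmitov
groups=Administrators
inject_user_password=true
metadata_services=cloudbaseinit.metadata.services.azureservice.AzureService
"""

def check_config(text, required=("username", "metadata_services")):
    """Parse the config text and report any missing required keys."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    defaults = parser.defaults()  # the [DEFAULT] section as a dict
    missing = [key for key in required if key not in defaults]
    return defaults, missing

if __name__ == "__main__":
    defaults, missing = check_config(SAMPLE)
    print("missing keys:", missing)  # missing keys: []
```

To validate the actual file, replace `SAMPLE` with the contents read from the install path chosen in step 4.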

       

    5. Save the file and finish the cloudbase-init installation using the options below.
      Cloudbase-init installer
    6. Once the sysprep process is complete, the Windows 10 machine will shut down automatically. Wait until its state in the Azure Management portal is set to Stopped.
      Azure instance state
    7. Click Capture and provide the required inputs to convert the Windows 10 instance into a new custom image.
      Capture Azure instance as image
    8. Note down the image name and click Create.

    Now that we have the image created, we can easily customize a guest instance based on the predefined options in the cloudbase-init config file or via the optional instance user-data. To achieve this, we just need to request a new instance from that image via the Azure Management portal, the Azure API, the Azure CLI, or a VMware Cloud Assembly blueprint.

    I will use the last option as a good opportunity to show how this can be done with VMware Cloud Services. Moreover, I will create a cloud agnostic blueprint, because it will let me deploy a customizable instance in other cloud providers as soon as I have a cloudbase-init Windows image available there.

Deploy and customize a Windows guest instance with VMware Cloud Services

  1. Log in to VMware Cloud Services and select VMware Cloud Assembly.
  2. Navigate to the Infrastructure tab and set up an Azure Cloud Account.
    Cloud Accounts
  3. The next step is to create a Cloud Zone associated with the Cloud Account.
    Cloud Zones
    Note: Add a capability tag, which will later be used to link our new blueprint with this cloud zone.
  4. Next we need to create new Project associated with the Cloud Zone.
    Projects
  5. To configure the same image size across the different cloud vendors, we need to create a Flavor Mapping.
    Flavor Mappings
  6. Similar to the Flavor Mapping, we need to create new Image Mapping to describe the image name across the different cloud vendors.
    Image Mappings
    If you don’t see the image created in step 1, go to Cloud Accounts and re-sync all images. By default image data-collection is automatically performed every 24 hours. 
  7. Create Storage Profile to allow Cloud Assembly provisioning of Azure images based on managed disks.
    Storage Profiles

    By default, Cloud Assembly creates a classic Azure Availability Set, which doesn’t support images based on managed disks, so we need to configure Cloud Assembly to create a managed Availability Set for this to work.

  8. Navigate to the Blueprints tab and create a new Cloud Assembly blueprint.

    YAML code:

    inputs: {}
    resources:
      Cloud_Machine_1:
        type: Cloud.Machine
        properties:
          remoteAccess:
            username: vmitov
            password: Password1234@$
            authentication: usernamePassword
          storage:
            constraints:
              - tag: 'cloudbaseinit:hard'
          image: vmitov windows 10
          flavor: small
          cloudConfig: |
            #cloud-config
            write_files:
              - encoding: b64
                content: NDI=
                path: C:\test
                permissions: '0644'
            set_hostname: demoname
          constraints:
            - tag: 'cloudbaseinit:hard'
          __computeConfigContentPreserveRawValue: true


    Most of the YAML properties are self-explanatory, so I will comment on just the few that are more specific to the topic.
    The remoteAccess properties set the guest instance metadata with the values provided for username, password, and authentication. This is then picked up by the cloudbase-init metadata services, and the guest OS is configured via the SetUserPassword cloudbase-init plug-in.
    The cloudConfig property sets the user-data and exposes it to the guest instance during provisioning. In this case, the first line indicates that the script below is Cloud-Init YAML code, and cloudbase-init will use its cloud-init plug-in to execute it.
    The __computeConfigContentPreserveRawValue property is an important one, since it tells Cloud Assembly to keep the raw user-data provided in the cloudConfig property. If we don’t set this property to true, Cloud Assembly will convert some of the other properties to Cloud-Init YAML code; for example, it will generate the Cloud-Init YAML needed to set the remote-access authentication type, username, and password.
    The storage constraint tells Cloud Assembly that we want to use the custom Storage Profile that requires an Azure managed Availability Set.
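    As a quick aside, the content value in the write_files section above is base64-encoded. A small Python snippet (purely illustrative, not part of the blueprint) shows what cloudbase-init will actually write to disk:

```python
import base64

# Value taken from the cloudConfig section of the blueprint above.
encoded = "NDI="

# cloud-init/cloudbase-init decodes base64-encoded write_files content
# before writing it to the target path.
decoded = base64.b64decode(encoded)
print(decoded.decode("ascii"))  # prints "42"
```

    Decoding "NDI=" yields the text 42, which is the content of the file created in the guest.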

     

  9. Click Deploy to provision a new Azure Windows 10 machine, which will be customized by the configured CloudConfig script.
  10. Log in to the new instance with an RDP client and the credentials specified in the blueprint, and verify that the CloudConfig script was successfully executed.
    1. Check the custom file created in C:\test
    2. Check the customized machine hostname

I hope you found the blog post useful!

If you want to learn more, check out the VMware Cloud Services and Cloudbase-Init documentation pages and stay tuned for more on VMware Blogs!

The post Windows Cloud-Init solution appeared first on VMware Cloud Management.

George Sink Law Firm Makes Disaster Recovery Simple with VMware Cloud on AWS


One of the largest personal-injury law firms in the Southeast, George Sink, P.A. is based in Charleston, South Carolina. The firm is growing, adding to its 250 employees and opening new offices in neighboring Georgia. Their fast expansion, and headquarters in a hurricane-prone coastal city, led them to consider IT options beyond growing their physical data center.

Long before he joined George Sink, CIO Tim Mullen was an experienced VMware user. “Even when I was working for Microsoft, I was secretly a vSphere fan,” he confessed. As a security researcher and consultant, Mullen has years of experience administering and testing systems from VMware and many other vendors.

Mullen began to realize that the firm’s on-prem data center expansion 200 miles inland in Greenville, SC was turning into a poor choice. Capital costs were adding up, and the firm didn’t have the staff necessary to maintain the resources they would need going forward. With storms in the region strengthening in recent years, even an inland facility might not be safe. “I didn’t want the company’s growth in new markets to depend on physical, on-premises equipment,” said Mullen.

Mullen admits that despite recent hurricanes in the Charleston area, the law firm didn’t have a strong disaster recovery plan. “The plan was to back up the data, and take it in whatever form to another physical location,” said Mullen. “But the time you need to restore that data was not taken into account. You would have to say to employees, ‘pencils down,’ shut it all down, and then migrate back to the main facility – if the main facility even existed after a storm.”

Cloud-Ready from the Kitchen Table

As Mullen considered his options, he got an email from his VMware sales representative with a link to try a relatively new service called VMware Cloud on AWS. “I called my rep that same night and said, ‘I just spun up a full software-defined data center from my kitchen table, right before dinner. This is amazing.’ And the conversation went from there.”

George Sink now uses four nodes in VMware Cloud on AWS for full disaster recovery backup. The firm replicates live data now, and plans to move to full production by the spring of 2019. Their local managed services partner is Stasmayer Inc.

Because the firm was already using VMware vSphere 6.7 on premises, moving to VMware Cloud on AWS was easy. Mullen did not need to spend more money on more engineers, or pay to retrain his staff on another system. “It’s good for me because that means that the burden of maintaining updates, any maintenance, all of that is handled by VMware engineers,” said Mullen.

Mullen also noted that “It gave me the flexibility to own and maintain my own set of nodes while abstracting the hardware from on-premise, and between VMware and AWS. I knew I was in good hands from both an infrastructure standpoint and a technical proficiency standpoint.”

Cost savings is another cloud benefit. “For a cost of a good engineer, we got four nodes in the cloud, which is a no brainer,” said Mullen. “VMware Cloud on AWS not only prevents me from making tremendous cash outlays for data centers every time we choose to expand, it allows me to plan. If I divide my monthly expenditure with AWS over a number of compute users, I can almost boil it down to a per-person cost to have my infrastructure running. That’s something that you’ve never been able to do with on-prem because nobody takes into account bandwidth, backups, hard drives, personnel costs, all of that.”

Mullen calls the instant support option on VMware Cloud on AWS “remarkable. I can pop onto my vCenter interface, and if I have a question, they know the answer. They aren’t putting me on hold to ask someone else. These people know what they’re talking about.”

Next Steps: Cloud Security and Desktops

Although George Sink did not use VMware NSX Data Center on-premises, Mullen is excited to learn about its capabilities in VMware Cloud on AWS. “I’ve got multiple IPSec VPNs connecting my SDDC with on-prem. NSX allows me to customize my network infrastructure however I want to match it.”

George Sink currently uses VMware Horizon 7 to virtualize and distribute all company apps, from Google Chrome to case management and accounting software. The firm uses a small number of virtual desktops, and is now expanding them from IT staff to other employees such as legal assistants. “We have been slowly increasing that base as we move Horizon to VMware Cloud on AWS, and it is working very well,” said Mullen.

The firm is in beta for Horizon Instant Clones, which delivers ultra-fast provisioning and zero-downtime updates. “Instant Clones will absolutely change the way we work for the better,” said Mullen.

George Sink user profiles, which are highly customized, will be applied to each new desktop using Instant Clones. “I’ll only have to patch one gold image, which saves time and cost versus the current RDS environment,” said Mullen. “One person hosing their desktop won’t take down 50 other people on the server. I’ll also be able to layer applications onto that Instant Clone based on entitlement, which will help me control software deployment and licensing requirements.”

For Mullen, one of the most important benefits of VMware software, especially VMware Cloud on AWS, is peace of mind. “All these employees’ livelihood is based on what we do here, and it’s critical to protect our clients’ legal data. The next time a hurricane is whirling down and Mr. George Sink asks what we’re doing to prepare, I can say ‘Nothing. It’s already done.’ Being able to know that, and say that honestly, is tremendous.”

The post George Sink Law Firm Makes Disaster Recovery Simple with VMware Cloud on AWS appeared first on VMware End-User Computing Blog.


Cloud Automation Services: To dispatch or not to dispatch?


To dispatch or not to dispatch? This is an old question we ask ourselves again and again, and it is still relevant in the new world of Cloud Automation Services (CAS). And the answer will always be the same: yes, we need to dispatch. We need a frame around the extensibility we want to build, to ensure it is well protected from changes, provides decoupling, and so on. We also need a more coherent and holistic approach to cloud automation services when multiple people work on the same project.

In this blog post, we will explore the different paths one might take when building extensibility for CAS and why dispatching is the best option.

Let’s have a look at what we can do in terms of extensibility within CAS, in particular Cloud Assembly (CA). CA is a cloud-based service for creating and deploying machines, applications, and services to a cloud infrastructure. The same is true for vRealize Automation (vRA). So, if we are currently using vRA, we might find the extensibility of CA familiar.

First, we create a subscription where we specify the event type, for example, the provisioning of a virtual machine. Then, we add some filters to receive only the events we are interested in.

Finally, we configure the event handler, which can be either a vRealize Orchestrator (vRO) workflow, or an action, which runs in the cloud.

Even though extending CA looks very similar to vRA extensibility, it doesn’t work the same way, because vRA and vRO both run on premises, while CA runs in the cloud. So, in order for CA to collect resources and provision machines, we have to deploy a data collector appliance on premises. The same mechanism applies when building extensibility: we use a workflow from a vRO instance that runs in our datacenter (see the example screenshot, the workflow DumpProperties).

The Challenge

Extending the provisioning process with a single workflow for a single blueprint seems quite straightforward. But let’s look at a real-world example. A typical enterprise may have 10 integration points and at least 4 different OS flavors. Make no mistake, the integrations can easily grow to a hundred in sectors like finance. And just to spice it up a bit more, the enterprise would like to provision to at least 2 clouds and on premises. So, how do we handle the complexity of the different combinations of OS flavors and integration points?

Option 1: Subscription per Workflow, per Workload Type

We use the CA out-of-the-box functionality, and we get a decoupled but very hard-to-follow architecture with hundreds of subscriptions that differ only by their filters, such as “I would like this workflow to run only on Windows machines, on premises, with a priority of 16.”

And here be dragons: how do we manage this huge number of subscriptions?

Each workflow receives input parameters in the form of a payload. The payload contains data such as the virtual machine ID, request ID, project ID, deployment ID, owner, etc. These IDs might be used as is in some cases, but most of the time the workflow will resolve the VM object based on the ID and then might need to retrieve some other data, for example the name of the project. Doing this in each workflow, we will quickly end up with a lot of duplicated code.

Option 2: Subscription per Event Type with a Master Workflow

We reduce the number of subscriptions we have to manage by having one subscription per event type and one master workflow (I’ve seen this a lot). This way, we have fewer subscriptions to worry about and a central entry point, which makes it easier to troubleshoot and reduces duplication. The master workflow handles the different types of integrations required for the different types of workloads.

It sounds like a good option, but what’s the catch?

If we do it directly in the workflow, with boxes and decision elements, we will end up with a huge workflow. And every time we want to update something, we will have to change a big chunk of logic. So, one must definitely take into account the possible drawbacks:

  • Increased chance of breaking other things, since they are tightly coupled and in the same “file”.
  • Increased chance of conflicts, because everyone will work on the same huge central piece of logic. One can even override someone else’s work by accident.
  • Everyone has to understand the whole thing in order to make changes in the automation.

Code will be duplicated only among the master workflows … hopefully. However, all integrations will have access to everything in the scope of the huge workflow, which makes it very easy to break data that others need and almost impossible to find a way around the hundreds of global vRO attributes.

Option 3: Subscription per Event Type with a Dispatch Workflow

We use the same approach as in option 2 but instead of using one master workflow, we use a generic dispatching mechanism. It will give us:

  1. An easy way to map which operation to run at which state for which workload.
  2. An enhanced payload, possibly lazy initialized, which would reduce the amount of code we would need and allow us to focus only on the business use case within the actual extensions. This increases the complexity in this central place, but it dramatically reduces the duplication and complexity everywhere else.
  3. Decoupling between the integration points and the extensibility mechanism provided by the system to allow different people:
    1. To focus on their task without worrying that they might break someone else’s work.
    2. To achieve a coherent and holistic approach to their extensibility development while working on a project independently.
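The “enhanced payload, possibly lazy initialized” idea in point 2 can be sketched in a few lines. This is a hypothetical Python illustration (the names and the resolver are made up; a real implementation would call the Cloud Assembly REST APIs from vRO):

```python
# Hypothetical lazily initialized payload wrapper (illustrative names only).
class LazyPayload:
    def __init__(self, raw, resolvers):
        self._raw = dict(raw)        # IDs delivered with the event payload
        self._resolvers = resolvers  # field name -> zero-arg fetch function
        self._cache = {}

    def __getitem__(self, key):
        if key in self._raw:
            return self._raw[key]
        if key not in self._cache:
            # The expensive lookup runs only on first access and is reused.
            self._cache[key] = self._resolvers[key]()
        return self._cache[key]

calls = []
def fetch_project_name():
    calls.append(1)  # stand-in for a REST call back to Cloud Assembly
    return "demo-project"

payload = LazyPayload({"projectId": "p-123"},
                      {"projectName": fetch_project_name})
```

Extensions then read payload["projectName"] without knowing, or repeating, the resolution logic, which is exactly the kind of duplication we want to remove.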

Now the question is not whether to dispatch, but how to design the dispatching in a way that makes it easier to specify what to run, when, and for what. We used to do this with custom properties, where everything is controlled in vRA, but now we have the opportunity to rethink our approach and find a better way. I am personally a fan of simple callback-based definitions in a code-first fashion:
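To make the callback-based idea concrete, here is a minimal, hypothetical Python sketch (none of these event names or functions are real CAS or vRO APIs). Each handler declares the event and the criteria it applies to, and the dispatcher picks the matching handlers at runtime:

```python
# Hypothetical code-first dispatch registry (illustrative names only).
HANDLERS = []

def on(event, **criteria):
    """Register a handler for an event, filtered by payload fields."""
    def register(fn):
        HANDLERS.append((event, criteria, fn))
        return fn
    return register

@on("compute.provision.post", os="windows", cloud="azure")
def join_domain(payload):
    return f"join {payload['name']} to the AD domain"

@on("compute.provision.post", os="linux")
def register_dns(payload):
    return f"register {payload['name']} in DNS"

def dispatch(event, payload):
    """Run every handler whose event and criteria match the payload."""
    return [fn(payload)
            for ev, criteria, fn in HANDLERS
            if ev == event
            and all(payload.get(k) == v for k, v in criteria.items())]
```

With this shape, dispatching a Windows/Azure provisioning event runs only the Windows/Azure handler, and the mapping of what runs when lives in one readable place.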

It gives a simple way to map the criteria to the actual execution code in plain source. Stacking a few of those in the same file, based on the context they run in, makes it really easy to manage the extensions in one central place. So, when it comes to what will run and when, one can quickly figure that out by looking at a few lines of code.

We are currently experimenting with some other things as well, so stay tuned for more on the topic.

The post Cloud Automation Services: To dispatch or not to dispatch? appeared first on VMware Cloud Management.

Workspace ONE Capabilities You Should Know About


Every industry across the globe is adopting digital technologies to transform the way employees work. One of the primary components of the enterprise digital technology puzzle is mobile apps. In 2017 alone, consumers downloaded 178.1 billion mobile apps worldwide. It’s no surprise enterprises are jumping in to deliver compelling business apps that rival the apps we use in our personal lives to make work easier.

While many out-of-the-box enterprise apps take a “one size fits all” approach, more enterprises than ever are developing their own custom enterprise apps to transform things like content delivery, training, line of business, communication, signatures, servicing customers and more. But developing apps for the enterprise isn’t an easy task. Developers and IT must work together on a strategy to align on things like security requirements, DLP policies, app access, and more, which can be difficult to do in a cost-effective, timely fashion. With the advent of GDPR and more organizations beginning to move toward a privacy-minded culture, privacy notices and consent prompts for usage analytics within apps become more critical both from a legal and end-user transparency perspective.

The Workspace ONE Software Development Kit (SDK) aims to solve this problem by providing a portfolio of code libraries for developers to build against based on individual app requirements. The Workspace ONE SDK gives developers an IT-approved solution to build enterprise-ready applications in a matter of minutes, reducing alignment complexities, time to market, and the overall cost of developing the app.

We also encourage independent software vendors to build with our SDK to make it easier for joint customers to deploy the ISV mobile application without additional development work for the customer. The most recent ISV to implement our SDK is Tableau Mobile for Workspace ONE. Read more in the Tableau announcement blog.  

“With today’s release of the Tableau Mobile app for Workspace ONE®, we are empowering IT administrators with the perfect way to allow users within their organization to visualize, analyze, and leverage data at scale, all while securing enterprise data at all times across devices.”

-Rahul Motwani, Tableau

Workspace ONE Capabilities

The Workspace ONE SDK supports a wealth of capabilities to meet various use cases today. Let’s take a look at the three most valuable capabilities our SDK provides, according to our customers.

Integrated authentication for SSO

App developers can easily integrate our integrated authentication feature to reduce end-user friction when authenticating to websites and other authenticated services. This is done by utilizing our SDK secure credential storage to automatically authenticate on behalf of users to avoid redundant login prompts being shown. One less prompt in the mobile app setup process can have a significant impact on user adoption of apps and ultimately help drive employee productivity.

Automatic network traffic tunneling

In today’s enterprise, mobile devices can span beyond the perimeter of the firewall in comparison to traditional endpoints of old. Being able to securely access resources from outside the corporate network in the blink of an eye on a mobile app has become crucial to improving employee productivity. By integrating the SDK in an app, the app can instantly tunnel traffic back to the corporate infrastructure and surface corporate data without the need to install a separate VPN app.

Security, app passcode, DLP policies

This new level of accessibility created by mobile devices also warrants additional due diligence to responsibly protect and secure this data. Many organizations choose to make use of the SDK’s security capabilities such as access control, DLP, and encryption to ensure corporate data is only accessible by entitled users, data does not cross boundaries between work and personal apps, and any data persisted is secured through FIPS validated data-at-rest encryption.

To learn more about the Workspace ONE SDK visit the VMware Workspace ONE Dev Center. Here, you’ll find a wealth of developer focused content and support for enabling apps to seamlessly integrate with Workspace ONE.

Want even more? Jump into the Conversation on Slack.

Anyone who signs up for VMware {code} automatically receives a personal Slack invite within seconds. The VMware {code} community already has over 2,500 participants on Slack. Join today.

Want to learn more about Workspace ONE, our award-winning digital workspace solution? Simply click here to sign-up for a free hands-on lab.

The post Workspace ONE Capabilities You Should Know About appeared first on VMware End-User Computing Blog.

Horizon 7 and Workspace ONE Lightning Labs Now Available


We know your time is valuable, and because of that, we have worked to identify the best way to learn about VMware products in small segments of time. We call them Lightning Labs.

What are Lightning Labs?

Lightning Labs help you learn about VMware products in small segments of time. We have broken down the content of a lab into small “bite-sized” sections that allow you to learn about a core product feature quickly.

This month as part of our end-user solution we are featuring Workspace ONE and Horizon 7 Lightning Labs.

What is in the Workspace ONE Lightning Lab?

The Workspace ONE: Advanced Integrations Lightning Lab explains and demonstrates Workspace ONE integration with SaaS applications, Workspace ONE Unified Endpoint Management (formerly AirWatch UEM), and Citrix. With VMware Workspace ONE, you can provide Single Sign-On (SSO) for your users to any web or Software-as-a-Service (SaaS) application supporting Security Assertion Markup Language (SAML).

The Lightning Lab is configured to demonstrate single sign-on for SAML-based apps with VMware Identity Manager and Workspace ONE. Learn how to:

  • Open the SAML Test App on Workspace ONE
  • Sign User Out of Workspace ONE
  • Overview of Workspace ONE integration with UEM and Citrix

In less than 30 minutes, learn about Workspace ONE: Advanced Integrations.

What is in the Horizon 7 Lightning Lab?

VMware Horizon comprises industry-leading solutions for all aspects of desktop and application management and delivery.

The Horizon 7: Components Explained Lightning Lab explores how these components address specific needs, how they can be combined to provide a comprehensive just-in-time management platform, and how they scale to cover the largest customer demands.

In this lab you will get a walk-through of the User Environment Manager Management Console:

  • Personalization for apps and Windows settings
  • Contextual policy management of your Windows devices
  • Overview of scenarios with multiple environments

In addition, familiarize yourself with the App Volumes Manager by exploring the manager dashboard, review AppStack details, writable volumes management options and much more.

How will you spend your next 30 minutes? Take the Horizon 7: Components Explained Lightning Lab today!

The post Horizon 7 and Workspace ONE Lightning Labs Now Available appeared first on VMware End-User Computing Blog.

Understanding vSphere Health


A powerful new feature of vSphere 6.7 is vSphere Health. vSphere Health works to identify and resolve potential issues before they have an impact on a customer’s environment. Telemetry data is collected from the vSphere environment and then used to analyze pre-conditions within customer environments. Discovered findings can relate to stability as well as incorrect configurations in vSphere. Leveraging vSphere Health has allowed the detection of more than 100,000 potential problems per day, of which approximately 1,000 are resolved daily.

Pete Koehler, Sr. Technical Marketing Manager in the Storage and Availability Business Unit (SABU), recently discussed vSAN Health and the framework around vSphere that allows for intelligent environments. vSphere & vSAN Health are very similar as they work in the same manner via data collection and analysis. Let’s dig in a bit more.

Health Checks

At present, vSphere Health contains roughly 30 different health checks that can be run against a vSphere environment. Based on recent reports, here are a few popular health checks in terms of the correlation between issues discovered and leveraging “Ask VMware” for remediation guidance:

VMware Analytics Cloud (VAC)

VMware Analytics Cloud (VAC) is the platform that enables VMware products to send telemetry data from on premises and SaaS products to VMware. vSphere Health works in conjunction with the Customer Experience Improvement Program (CEIP) to send anonymous data to VAC for analysis which in turn provides the assessment within the vSphere Client.

When issues are discovered after VAC has analyzed the vSphere data, resolutions and recommendations are provided to guide the customer through remediation. These health checks can be incredibly helpful in terms of awareness and guidance for a customer to understand and resolve any issues found. As vSphere Health alerts/alarms are introduced, they are asynchronously displayed in customer vSphere environments that are running vSphere 6.7GA or higher. This model enables VMware to enhance issue detection in the datacenter without updating/upgrading the vSphere installation.


Telemetry data is collected and passed to VAC for analysis, where data science tools and methodologies are used to provide analytical insights from the data collected. Once analyzed, insights and recommendations are sent to vSphere Health to be displayed for the customer to review and take action.

CEIP

It is important to understand the data that is collected. Telemetry data received from customer environments is anonymous and secured in VMware’s datacenters. By default, hostnames, virtual machine names, and network information are not exposed or saved within the dataset.

Data collected is available only for internal VMware usage and is strictly not shared with third parties. Data is accessible to VMware employees on a need-to-know basis only. All data flowing into VMware Analytics Cloud is governed by the VMware-wide CEIP program, which is pre-vetted by the Product Analytics Data Governance Board. The board comprises members from Legal, Engineering, Product, Security, Sales, and IT, who define the program guidelines and regularly monitor the program to ensure compliance.

What is CEIP? Since vSphere Health leverages CEIP, let’s start by explaining the program. “VMware’s Customer Experience Improvement Program (‘CEIP’) provides information that helps VMware to improve our products and services, fix problems and advise you on how best to deploy and use our products. When you choose to participate in the Customer Experience Improvement Program (CEIP), VMware receives anonymous information to improve the quality, reliability, and functionality of VMware products and services.”

What data is collected via CEIP?

Learn more: Security and Privacy for VMware’s Customer Experience Improvement Program (CEIP)

Enabling CEIP

Enabling CEIP can be accomplished in a few different ways: from the vSphere Client, during a vCenter Server migration or upgrade, or from the CLI. Here is an example of configuring CEIP via the vSphere Client.

Using vSphere Health

Now that we understand how vSphere Health is enabled and the prerequisites such as CEIP, we can now review how to use this feature of vSphere 6.7.

From the vSphere Client, begin by clicking on the vCenter Server in the Hosts & Clusters view. Next, click Monitor, and then Health. This view shows vSphere Health from the vCenter Server perspective, including any clusters and hosts it is managing. Customers can quickly see successful health checks as well as any potential issues.


By clicking on a specific health check, we can get more information. In this example, I have clicked on “Customer Experience Improvement Program (CEIP)” and can review the CEIP status, configure CEIP directly from here, or click Ask VMware, which jumps to a webpage or KB article that provides more information or steps to resolve the issue found.


This next example allows us to review a potential issue as well as its resolution from VMware.

We can see that vSphere Health has found the L1TF Intel processor vulnerability on all 4 of my ESXi hosts.


Next, when we click on Info, we can review the findings in more detail.


When I click on Ask VMware, I am taken to the VMware KB article that can help resolve the “L1 Terminal Fault” vulnerability.

In this last example, we review the vSphere Health check for vMotion when using a vSphere Distributed Switch. Since this health check is green, there are no issues, but we can also learn from the detailed info provided with the health check.


Clicking on the Info tab allows the customer to review how and why MTU settings are important for vMotion operations. If additional info is needed, simply click the Ask VMware link to learn more. This can be a useful way to understand how or why something is configured per VMware practices.


Conclusion

As you have learned, vSphere Health can be very valuable for VMware customers, as it allows the environment to gain intelligence about itself and dramatically reduces potential configuration mistakes and datacenter vulnerabilities. Since vSphere Health is an emerging feature of vSphere 6.7, keep in mind it will evolve over time from what it is today. This evolution of proactive issue detection is just the beginning of extending automation between cloud and on premises for datacenter remediation purposes.

To learn more about vSphere Health, please refer to the following:

Please do not hesitate to post questions or even suggestions for additional vSphere Health checks in the comments below!

The post Understanding vSphere Health appeared first on VMware vSphere Blog.

Go Beyond your CCIE or VCP6-NV


Free VCAP6-NV Certification Exam Prep

Have your CCIE or VCP6-NV? Keep advancing your career by signing up for our free online (on-demand) VMware Certified Advanced Professional – Network Virtualization (VCAP6-NV) certification exam prep and earn your certification.

Changing your Mindset

Why should you even care about a VCAP6-NV certification? Well, just like how achieving your CCIE was about challenging yourself to be a leader in the industry, earning a VCAP6-NV is about the next evolution of leadership as the enterprise moves to a software-driven multi-cloud infrastructure.

The transition from hardware-centric CCIE to software-defined VCAP6-NV is about a change in mindset and tooling. You need to think beyond the boundaries of a physical device and learn to use new tools and technologies to expand the scope of your current expertise and experience.

As the IT industry goes through massive shifts every decade or so, being able to not only embrace but also lead a paradigm shift is a key indicator of your ability to maintain a position of success and progress.

VMware NSX

The VCAP6-NV certification is the industry standard that validates your knowledge of VMware NSX. The test prep material takes you through real-world case studies that mimic the process of problem identification and resolution, and empowers you to synthesize different pieces of information into the construction of a technical solution with VMware NSX.

Get Started

Go to http://www.vmware.com/go/vcap-nv-prep.

The post Go Beyond your CCIE or VCP6-NV appeared first on Network Virtualization.

Join us at Tech Field Day 18


On February 7th VMware subject matter experts will join the Tech Field Day team at Tech Field Day 18 in Austin, TX!

The brainchild of GestaltIT, Tech Field Day is an independent IT influencer event that allows vendors to discuss products and features with industry-leading bloggers, speakers, and podcasters. From the About Tech Field Day page: “events bring together innovative IT product vendors and independent thought leaders to share information and opinions in a presentation and discussion format.” GestaltIT’s Stephen Foskett has been running Tech Field Day for about 10 years, and as a matter of fact, VMware presented at Tech Field Day 1 back in November 2009. We are very excited to be back on the docket in 2019!

 

 

The live stream starts at 3:30 PM CST on February 7. Bookmark this page and come back to watch LIVE!

 

Join VMware presenters John Nicholson, Sr. Technical Marketing Architect for vSAN, and Nigel Hickey, Technical Marketing Engineer for vCenter Server, at Tech Field Day 18 for discussions about vSAN/vSphere Health and how leveraging VMware’s intelligent framework to analyze preconditions can remove uncertainty as well as provide technical guidance for customer environments.

 

Learn more about vSAN/vSphere Health: 

 

 

 

The post Join us at Tech Field Day 18 appeared first on VMware vSphere Blog.

Introducing GitLab Integration in Cloud Assembly


Cloud Assembly is the infrastructure and platform provisioning component of VMware Cloud Automation Services. Among many others, one of the key themes for a user interacting with Cloud Assembly is blueprint authoring, not only through the native design canvas but also directly through Infrastructure as Code. For more information on Cloud Assembly, take a look at this earlier blog post – Cloud Assembly Technical Overview.

With the General Availability release of VMware Cloud Automation Services, we are loading many new features and capabilities into the platform. One of the most exciting features is the ability to leverage source control as a method for storing blueprints. Source control most typically refers to Git, which has become a first-class citizen in customer environments. Interactions with Git in the enterprise take a number of possible paths – administrators are storing software, scripts, documentation, and even full websites in Git source control. Storing content this way gives greater control and flexibility over that content: all changes are tracked through versioning, multiple teams can collaborate through a central repository, and a number of quality-of-life features help manage shared content, such as pull requests, issue tracking, tagging, and much more. Furthermore, the approach of committing into source control supports one of our key focuses for Cloud Assembly: being an Infrastructure as Code platform first.

In Cloud Assembly, we have now exposed the ability to bind an external GitLab account (from the Software as a Service offering) as a blueprint or CloudFormation template source control store! This addition allows cloud administrators and developers to directly commit blueprints into repositories and have them synchronize into VMware Cloud Assembly projects. From a Day-2 perspective, the same administrators can collaborate, iterate, and commit changes to blueprints stored in GitLab and expect a consistent synchronization experience into the platform! This feature is an example of how VMware is focused on enabling developers and the next generation of cloud administrators to interact with Cloud Assembly in more developer/SRE-native ways. In this post, we’re going to quickly unpack how to consume this integration, along with some key callouts.

 

Getting Started

 

For the purposes of this post, we’re going to assume that you have already set up a GitLab account and created an empty repository. If you have not, I recommend reading the following content – https://docs.gitlab.com/ee/user/project/repository/

In order to consume the integration, we’ll need to populate this repository with our Blueprints. A major callout here is that your repository must be structured in a specific way in order for the blueprints to be detected.

  • Create a folder for each of your Blueprints

  • Ensure your blueprint code is stored within a blueprint.yaml file – this name is important!

  • Confirm that the top of each blueprint includes the name: and version: properties; these are new properties, so many blueprints may not include them currently. Double-check!

  • Extract an API key from GitLab for the repository in question. This can be obtained by accessing your GitLab account, selecting your login on the top right, and navigating to the settings menu. Within that menu, select “Access Tokens” on the left, name your token, set an expiration, select the API scope, and create the token. Copy the resulting value and save it. We’re going to use this shortly!
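Putting those requirements together, a minimal blueprint.yaml might look like the sketch below. The folder name, blueprint name, and resource definition are illustrative (not from this post); the name: and version: properties at the top are the ones the synchronization looks for:

```yaml
# cas-blueprints/base-ubuntu/blueprint.yaml -- one folder per blueprint
name: base-ubuntu        # required: the blueprint name detected on sync
version: 1               # required: bump this to publish a new version on the next sync
formatVersion: 1
inputs: {}
resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu      # image/flavor mappings come from your cloud account setup
      flavor: small
```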

 

With all of these settings in place and values collected, we can configure Cloud Assembly for GitLab integration!

Configuring GitLab Integration in VMware Cloud Assembly

 

From within Cloud Assembly, on the Infrastructure menu, navigate to the Integrations tab on the left. Select Add New, and select the new GitLab integration.

On the configuration screen, provide the following details:

  • URL – in this case it’s gitlab.com; in the future this will expand to include on-premises options, but for now we’re leveraging the Software as a Service GitLab exclusively
  • Token – This is the Access Token we exported earlier
  • Name – Provide a friendly name for your repository
  • Description (Optional)

When we select Validate, we should be presented with a successful validation. We can now select Add and move forward!
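If you want to sanity-check the access token before wiring it into Cloud Assembly, a quick call to GitLab’s v4 REST API works. This is a sketch with a placeholder token; the endpoint and PRIVATE-TOKEN header are from GitLab’s public API documentation:

```python
def gitlab_request(token, path="/projects"):
    """Build kwargs for requests.get() against the GitLab v4 API."""
    return {
        "url": "https://gitlab.com/api/v4" + path,
        # GitLab personal access tokens go in the PRIVATE-TOKEN header
        "headers": {"PRIVATE-TOKEN": token},
        "params": {"membership": True},  # only projects the token's owner belongs to
    }

if __name__ == "__main__":
    import requests  # third-party: pip install requests
    kwargs = gitlab_request("YOUR-ACCESS-TOKEN")  # placeholder token value
    resp = requests.get(**kwargs)
    print(resp.status_code)  # 200 means the token has API access
```

A 401 response here usually means the token was created without the API scope described above.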

Configuring Our GitLab Integration

 

Repository synchronization is on a per-project basis, so we will need to bind our cas-blueprints repository to a project in our Cloud Assembly environment. Remember, projects are the binding of users to the resources they can consume. Blueprints are project-specific – so it makes sense that our blueprint repositories would share the same commonality.

From the Integrations screen, click on our newly configured GitLab integration. We will see our configuration details with new menus on the top of the page. Select Projects to move forward. From within the Projects menu, select “New Project” and select our project. In my environment the Project I have configured is called “Lab”.

We will now be prompted to configure our Repository. For the purposes of this walk-through, I am leveraging my standard GitLab blueprint repository located here – https://gitlab.com/codydearkland/cas-blueprints. You are welcome to fork this repository into your own GitLab instance and consume it as well (https://docs.gitlab.com/ee/gitlab-basics/fork-project.html)!

We need to fill in the details of our repository as shown in the screenshot below –

  • Repository – This is our repository path within GitLab, it can be found easily by taking the username of the main account you are interacting with (in my case, codydearkland) and appending the repository name (cas-blueprints). We can also see this on the end of our repository URL – codydearkland/cas-blueprints
  • Branch – We have the capability of consuming multiple branches of existing repositories. In my case, we are leveraging the master branch – so we will specify master
  • Folder – We can consume individual folders within a repository if we want to. In my case, I have many blueprints that I’d like to consume within Cloud Assembly so I’ll leave this blank to specify “all”
  • Type – We can either store Blueprints or Cloud Formation Templates in GitLab. I have all of my blueprints constructed in native YAML – so I’m going to specify “Blueprints” here

Select “Next” to finish adding our repository. When we select “Next”, an automated synchronization task will kick off to bring blueprints into the platform. When this sync is successful, it will notify you of the blueprints that have been imported. In our case, we can see that 8 blueprints have been brought into the platform and assigned directly to our project.

If we navigate to our Blueprints tab, we can see that our blueprints have been synchronized into the platform!

 

Further Capabilities

 

Moving forward, as blueprint changes are made directly to the GitLab repository, they will be reflected within Cloud Assembly, because this integration currently works as a PULL operation. What does this mean? Cloud Assembly will always pull changes in from GitLab if the integration is present. So what happens if we change the blueprint within Cloud Assembly directly? If a blueprint administrator makes a change, Cloud Assembly will recognize that this is now a new fork and establish that change as a new draft of the blueprint, halting the synchronization. This functionality allows administrators to bring content into the platform quickly, and then iterate on those blueprint changes from within the platform if they choose.

Additionally, if we increment the version tag within our blueprint YAML, new synchronizations will respect the new version and create additional versions within Cloud Assembly!

In the future, this functionality will be expanded to include pushing these changes back into source control as versions are established within Cloud Assembly. Stay tuned for this functionality in the future!

 

 

 

The post Introducing GitLab Integration in Cloud Assembly appeared first on VMware Cloud Management.


What’s New in VMware Digital Workspace Tech Zone Last Quarter


We are excited to announce the newest offerings added this past quarter to the VMware Digital Workspace Tech Zone site. You can see it all, from basic overviews and quick-start tutorials, to in-depth reference architectures and operational tutorials. You can find out everything you want to know about each product by watching walkthroughs and deep-dive videos, and start developing your own expertise with reference architectures and lab videos.

Videos

Soak in the details as the experts explain how the solutions work and demonstrate their latest features. In the expert series, you hear from those who are most knowledgeable about the VMware Workspace ONE and VMware Horizon 7 solutions.

Guides

You can find guidance in the form of operational tutorials, quick-start tutorials, and reference architectures. A variety of operational tutorials offer the opportunity to explore specific capabilities and learn the recommended procedures. In quick-start tutorials, you use sequential exercises to see how to set up and configure a basic deployment. In reference architectures, you can delve into the detail of the solution and its components and learn the extent of its capabilities.

Operational Tutorials

Quick-Start Tutorials

Reference Architectures

Blogs

Blog postings are one of the best places to find out about the newest features and updates, as well as the latest topics of debate.

Brian Watch

Be sure to watch for posts from Brian Madden. He’s had a lot to say since joining VMware End-User Computing early last year, and we’re delighted that he now hosts his opinions on Tech Zone. He is sure to kick up a storm.

Related: Hands-On Labs

You might also like to get into the nitty gritty by trying a free hands-on lab. The labs instruct you as you set up, configure, and explore possibilities in a controlled environment.

 

The post What’s New in VMware Digital Workspace Tech Zone Last Quarter appeared first on VMware End-User Computing Blog.

Advanced Solutions Customer Story Part 1: Why NSX-T?


 

Customer Overview

Advanced Solutions, a DXC Technology company, was formed in 2004 and employs about 500 staff to support the government of the Canadian province of British Columbia and other public sector customers with IT and business process solutions. For government agencies and services to continue operating efficiently and effectively, it is essential that the IT resources that they require are provided quickly and accurately.

Key Pain Points

All IT organizations are acutely familiar with the wide range of pain points and obstacles that can stand in the way of delivering resources to empower their businesses to move with speed and agility. One of the most common hindrances to IT, and therefore business, agility is painfully slow provisioning processes, which can take weeks just to provision an application. The most common bottleneck within these processes is provisioning networking and security services. This is a key pain point for Advanced Solutions, but one that VMware is helping them solve with the VMware NSX Data Center network virtualization platform.

Dan Deane, Solutions Lead at Advanced Solutions says, “The key IT pain points that VMware solutions are helping us solve are around networking and provisioning.”

New Use Cases

Advanced Solutions was already a user of NSX Data Center for vSphere, but some new initiatives and use cases led them to choose to deploy NSX-T Data Center.

Dan says, “Our primary reasons for deploying NSX-T were in support of a next generation SDN platform. We traditionally have been an NSX for vSphere customer, but with NSX-T we can begin looking at how we can provide broad data center networking and security not only for vSphere workloads, but for other hypervisors and physical workloads. On top of that, we’ve recently deployed PAS (Pivotal Application Service) and PKS. From an operational delivery point of view, it becomes very compelling.”

This highlights a fundamental benefit of the NSX-T Data Center platform: extension of consistent networking and security policies across heterogeneous hypervisors, clouds, and application frameworks (VMs, containers, bare metal). In addition to this, NSX-T is embedded into PKS, bringing enterprise-grade networking and security to what is already an enterprise-grade Kubernetes platform. The extension of networking and security policy to containers takes the concept of micro-segmentation to a new level, bringing the software-defined, policy-driven ability to secure individual workloads down to individual containers and microservices.

“The benefits we’re realizing from NSX-T are in the container management space. We’ve just recently deployed PAS and PKS, and we’re seeing that level of security that customers want when they do microservices and containers from the platform, so it’s great. What we can do today that we couldn’t do before is micro-segmentation for containerized workloads,” Dan says.

Closing Thoughts

These new and expanded use cases, based on customer demands, are enabling Advanced Solutions to deliver IT resources for traditional and containerized applications that drive business value and empower their customers to move with speed and agility as they continue their journeys of digital transformation.

“These new technologies are allowing us to help our customers digitally transform and enable their business and application services to mature,” Dan says.

Be sure to check the Network Virtualization blog in the coming week to get Part 2 of Advanced Solutions’ NSX-T story.

The post Advanced Solutions Customer Story Part 1: Why NSX-T? appeared first on Network Virtualization.

vMotion across hybrid cloud: performance and best practices


VMware Cloud on AWS is a hybrid cloud service that runs the VMware software-defined data center (SDDC) stack in the Amazon Web Services (AWS) public cloud. The service automatically provisions and deploys a vSphere environment on a bare-metal AWS infrastructure, and lets you run your applications in a hybrid IT environment across your on-premises data centers and AWS global infrastructure. A key benefit of VMware Cloud on AWS is the ability to vMotion workloads back and forth from your on-premises data center to the AWS public cloud as capacity and data privacy require.

In this blog post, we share the results of our vMotion performance tests across a hybrid cloud environment, which consisted of a vSphere on-premises data center located in Wenatchee, Washington, and an SDDC hosted in the AWS cloud, in various scenarios, including the hybrid migration of a database server. We also describe the best practices to follow when migrating virtual machines by vMotion across a hybrid cloud.

Test configuration

We set up the hybrid cloud environment with the following specifications:

VMware Cloud on AWS

  • 1-host SDDC instance with Amazon EC2 i3.metal (Intel Xeon E5-2686 @ 2.3 GHz, 36 cores, 512 GB)
  • SDDC version: vmc1.6 (M6 – Cycle 17)
  • Auto-provisioned with NSX networking and VSAN storage

On-premises host

  • Dell PowerEdge R730 (Intel Xeon E5-2699 v4 @ 2.2GHz, 22 cores, 1 TB memory)
  • ESXi and vCenter version: 6.7
  • Intel NVMe storage; Intel 1 GbE network

Figure 1: Logical layout of the hybrid cloud setup

Figure 1 illustrates the logical layout of our hybrid cloud environment. We deployed a single-host SDDC instance on AWS cloud. The SDDC was the latest M6 version and auto-configured with vSAN storage and NSX networking. Our on-premises data center, located in Washington state, featured hosts running ESXi 6.7.

AWS Direct Connect

We used a high-speed AWS Direct Connect link for connectivity between our data center and AWS. AWS Direct Connect provides a leased line from the AWS environment to the on-premises data center. VMware recommends you use this type of link because it guarantees sustained bandwidth during vMotion, which isn’t possible with VPN internet connections. In our environment, there was about 40 milliseconds of round-trip latency on the network.

L2 VPN tunnel

We set up a secure L2 VPN tunnel for the compute traffic that spanned the two vCenters. This connected the VMs on cloud and on-premises to the same address space (IP subnet). So, the VMs remained on the same subnet and kept their IP addresses even as we migrated them from on-premises to cloud and vice versa.

Figure 2: Extending VXLAN across on-premises and cloud using L2 VPN

As shown in figure 2, two NSX Edge VMs provided VPN capabilities and the bridge between the overlay world (VXLAN logical networks) and the physical infrastructure (IP networks). Each NSX Edge VM was equipped with two virtual interfaces (vNICs): one vNIC was used as an uplink to the physical network, and the second vNIC was used as the VXLAN trunk interface.

Hybrid linked mode

Figure 3: A single console to manage resources across on-premises and cloud environments

We created a hybrid linked mode between the cloud vCenter and the on-premises vCenter. This allowed us to use a single console to manage all our inventory across the hybrid cloud. As shown in Figure 3, the cloud inventory included a single client VM provisioned in the compute workload resource pool, and the on-premises inventory included three VMs: an NSX Edge VM, a client VM, and a server VM.

Measuring vMotion performance

The following metrics were used to understand the performance implications of vMotion:

  • Migration time: Total time taken for migration to complete
  • Switch-over time: Time during which the VM is quiesced to enable switchover from on-premises to cloud, and vice versa
  • Guest penalty: Performance impact on the applications running inside the VM during and after the migration

Benchmark methodology

We investigated the impact of hybrid vMotion on a Microsoft SQL Server database performance using the open-source DVD Store 3 (DS3) benchmark, which simulates many customers performing typical actions in an online DVD Store (logging in, browsing, buying, reviewing, and so on).

The test scenario used a Windows Server 2012 VM configured with 8 vCPUs, 8 GB memory, a 40 GB disk, and a SQL Server database size of 5 GB. As shown in figures 2 and 3, we used two concurrent DS3 clients: one client running on-premises, and a second client running on the cloud. Each client used a load of five DS3 users with 0.02 seconds of think time. We started the migration during the steady-state period of the benchmark, when the CPU utilization (esxtop %USED counter) of the SQL Server VM was close to 275%, and the average write IOPS was 80.

Test results

Figure 4: SQL Server throughput at given time: before, during, and after hybrid vMotions

Figure 4 plots the performance of the SQL Server VM in total orders processed per second during vMotion from on-premises to cloud, and vice versa. In our tests, both DS3 benchmark drivers were configured to report the performance data at a fine granularity of 1 second (the default is 10 seconds). As shown in figure 4, the impact on SQL Server throughput was minimal during vMotion in both directions. The total throughput remained steady, at around 75 orders per second, throughout the test period. The vMotion durations from on-premises to cloud and vice versa were 415 seconds and 382 seconds, respectively, with the network throughput ranging between 500 and 900 megabits per second (Mbps). The switch-over time was about 0.6 seconds in both vMotions. The few minor dips in throughput shown in the figure were due to the variance in available network bandwidth on the shared AWS Direct Connect link.

Figure 5: Breakdown of SQL Server throughput reported by the on-premises and cloud clients

Figure 5 illustrates the impact of network latency on the throughput. While the total SQL Server throughput remained steady during the entire test period, the throughput reported by the on-premises and cloud clients varied based on their proximity to the SQL Server VM. For instance, the throughput reported by the on-premises client dropped from 65 operations to 10 operations when the SQL Server VM was onboarded to the cloud, and jumped back to 65 operations after the SQL Server VM was migrated back to the on-premises environment.

The throughput variation seen by the two DS3 clients is not unique to our hybrid cloud environment and can be explained by Little’s Law.

Little’s Law

In queueing theory, Little’s Law states that the average number (L) of customers in a stable system equals the average arrival rate (λ) multiplied by the average time (W) that a customer spends in the system. Expressed algebraically: L = λ × W

Figure 6: Little’s Law applicability in hybrid cloud performance testing

Figure 6 shows how Little’s Law can be applied to our hybrid cloud environment to relate the DS3 users, SQL Server throughput, SQL Server processing time, and the network latency. The formula derived in figure 6 explains the impact of the network latency on the throughput (orders per second) when the benchmark load (DS3 users) is fixed. It should be noted, however, that although the throughput reported by the two clients varied due to the network latency, the aggregate throughput remained constant. This is because the throughput decrease seen by one client is offset by the throughput increase seen by the other client.
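That relationship can be sketched numerically. The figures below are illustrative, chosen to loosely match the throughput plots described above rather than taken from the actual test data:

```python
def throughput(users, avg_time_in_system):
    """Little's Law rearranged: lambda = L / W (orders per second)."""
    return users / avg_time_in_system

# A client close to the SQL Server sees a short time-in-system per order.
local = throughput(users=5, avg_time_in_system=0.077)   # ~65 orders/sec

# After the server migrates across a ~40 ms RTT link, the many client/server
# round trips per order inflate W, so the same 5 users push far fewer orders.
remote = throughput(users=5, avg_time_in_system=0.5)    # ~10 orders/sec

print(round(local), round(remote))  # → 65 10
```

Because the benchmark load (L) is fixed, the only way throughput (λ) can fall is for the time-in-system (W) to grow, which is exactly what added network latency does.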

This illustrates how important it is for you to monitor your application dependencies when you migrate workloads to and from the cloud. For example, if your database VM depends on a Java application server VM, you should consider migrating both VMs together; otherwise, the overall application throughput will suffer due to slow responses and timeouts.

One way to monitor your application dependencies is to use VMware vRealize Network Insight, which can mitigate business risk by mapping application dependencies in both private and hybrid cloud environments.

vMotion Stun During Page Send (SDPS)

We also tested vMotion performance by doubling the intensity of the DS3 workload on both the on-premises and cloud clients. Although vMotion succeeded, vmkernel logs indicated that vMotion SDPS kicked in during the test scenarios with the higher benchmark load. SDPS is an advanced feature of vMotion that ensures migration will not fail due to memory copy convergence issues. Whenever vMotion detects that the guest memory dirty rate is higher than the available network bandwidth, it injects microsecond latencies into guest execution to throttle the page dirty rate, so the network transfer can catch up with the dirty rate. So, we recommend you delay the vMotion of heavily loaded VMs in hybrid cloud environments with shared bandwidth links, which will prevent slowdowns in guest execution.

To learn more about SDPS, see “VMware vSphere vMotion Architecture, Performance, and Best Practices.”

vMotion across multiple availability zones in the SDDC

Every AWS region has multiple availability zones (AZ). Amazon does not provide service level agreements beyond an availability zone. For reasons such as failover support, VMware Cloud on AWS customers can choose an SDDC deployment that spans multiple availability zones in a single AWS region.

There are certain vMotion performance implications with respect to the SDDC deployment configuration.

Figure 7.  vMotion peak network throughput in a single availability zone vs. multiple availability zones

As shown in figure 7, vMotion peak network throughput depends on the host placement in the SDDC.

This is because vMotion uses a single TCP stream in the VMware Cloud environment. If the vMotion source and destination hosts are within the same availability zone, vMotion peak throughput can reach as high as 10 gigabits per second (Gbps), limited only by the CPU core speed. However, if the source and destination hosts are across availability zones, vMotion peak throughput is governed by the AWS rate limiter. The throughput of any single TCP or UDP stream across availability zones is limited to 5 Gbps by the AWS rate limiter.
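To get a feel for what that difference means, here is a back-of-the-envelope estimate. This is a sketch only: real vMotion durations also depend on the dirty-page rate and protocol overhead, and the 64 GB VM memory size is hypothetical:

```python
def min_precopy_seconds(memory_gb, link_gbps):
    """Lower bound on time to copy a VM's memory once over the vMotion link."""
    gigabits = memory_gb * 8          # convert gigabytes of memory to gigabits
    return gigabits / link_gbps       # ideal single-pass transfer time

# Same availability zone (~10 Gbps peak) vs. across AZs (5 Gbps AWS rate limit)
same_az = min_precopy_seconds(64, 10)   # 51.2 s
cross_az = min_precopy_seconds(64, 5)   # 102.4 s
print(same_az, cross_az)  # → 51.2 102.4
```

Halving the single-stream bandwidth roughly doubles the minimum pre-copy time, so host placement within the SDDC matters for large VMs.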

Conclusion

In summary, our performance test results show the following:

  • vMotion lets you migrate workloads seamlessly across traditional, on-premises data centers and software-defined data centers on AWS Cloud.
  • vMotion offers the same standard performance guarantees across hybrid cloud environments, including less than 1 second of vMotion execution switch-over time and minimal impact on guest performance.

References

The post vMotion across hybrid cloud: performance and best practices appeared first on VMware VROOM! Blog.

Extracting Data From vRealize Operations with the REST APIs


A frequent request from the field and VMware customers is, “How can I get data out of vROps using the REST APIs?”  The reasons for this vary, but typically there is a desire to incorporate the metrics and properties from vROps with another data set or monitoring tool.  In this blog post, I’ll give you an overview of how to use the vROps REST APIs to pull information.

By the way, there is a fantastic fling that I recommend customers investigate for this purpose. It also uses the REST APIs, but in an easier-to-use executable. For many, it provides exactly what is required. Others prefer to use the REST APIs directly because they need to run the extraction as part of another process and don’t want to use the fling; if that’s you, this blog will help.

Setting Up Your Postman Environment

I will be using the Postman client for this, and if you would like to download my Postman collection for vRealize Operations, you can find it here. If you aren’t familiar with Postman, I highly recommend it for exploring REST APIs and developing code or writing scripts that use the APIs.

To begin, I want to point out a few features I configure in Postman to save time.  First, I always set up an environment to hold variables such as authentication information, response data or other bits I will need in common with other REST calls.  For example, here’s my current environment:

I can set up several environments in Postman so I can easily switch between REST endpoints or solutions.  Many of the variables above are created and updated dynamically as part of my REST calls.  You will see that later in this blog.

At a minimum, to use my Postman collection for vROps, you will need an environment with three variables:

vrops: the FQDN or IP of your vROps node

user: a local vROps user with API permissions

pass: the user’s password (be careful here, as this is stored in the clear)

Authentication

Once the environment is ready, you can begin working with the collection.  The first request you will send is the one named “RUN FIRST – Get vR Ops Auth Token”

Notice the request URL and the request body contain items in double curly braces, like {{user}}. This is how we reference the environment variables set up earlier. The variables are replaced with your environment values when the request is sent.

Why am I making you run this first? While you could use basic authentication for the requests, that is not a best practice. Better, faster, and more secure is to use something called a bearer token, which can be saved in the environment and used for successive requests. This is what the Tests tab handles in this request:

The tests are just snippets of JavaScript that can perform post-request tasks, such as evaluating the response to make sure it is the expected response or contains an expected value. In my collection, I’m using them to store variables in the environment as well. For example, my request for a token returns:

Which is stored in my environment automatically, as the current value (the one on the right, ignore “INITIAL VALUE” for now):

Now my subsequent requests will use this token to authenticate.
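Outside Postman, the same token acquisition can be sketched in Python. The endpoint follows the vROps suite-api; the host name and credentials below are placeholders, and verify=False is only appropriate for lab appliances with self-signed certificates:

```python
import json

def acquire_token_request(vrops, user, password):
    """Build kwargs for requests.post() against POST /suite-api/api/auth/token/acquire."""
    return {
        "url": f"https://{vrops}/suite-api/api/auth/token/acquire",
        "headers": {"Content-Type": "application/json", "Accept": "application/json"},
        "data": json.dumps({"username": user, "password": password}),
    }

def auth_header(token):
    # vROps expects the vRealizeOpsToken scheme in the Authorization header
    return {"Authorization": f"vRealizeOpsToken {token}"}

if __name__ == "__main__":
    import requests  # third-party: pip install requests
    kwargs = acquire_token_request("vrops.example.com", "apiuser", "secret")  # placeholders
    token = requests.post(verify=False, **kwargs).json()["token"]
    print(auth_header(token))
```

Every subsequent call then carries that Authorization header instead of basic credentials.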

Getting Resource IDs

With authentication handled, let’s dive into the subject which is getting metrics and properties out of vROps.  Usually, customers want to get this data for particular resource kinds (like virtual machines) or for specific objects (like all hosts in a cluster).  I’ll use the example of getting metrics for all virtual machines first.

To begin, we need to get the resourceIds of each virtual machine resource.  The resourceId is a unique identifier used in vROps for each resource.  To do this, we will use the request titled “Get Resources by resourceKind and Save to var” – which will do exactly what it says!

First, note that I have a header for Authorization that includes the Bearer Token variable I saved earlier:

Now, take a look at the URL above.  The API is GET Resources and one optional query I can add to the URL template is the resourceKind (the query is the part of the URL to the right of the “?”).  Any valid resourceKind can be used in the query and you can even search for multiple resourceKinds in a single request.  For example:

You can add other query parameters, but let’s keep it simple for now.  When I submit the request, the variables are replaced with the values from your current environment.  You can see this by selecting the “Code” link to the right of the request window – Postman will generate a code snippet for your use in many different programming and scripting languages!  Here’s a Python snippet using the Requests module:
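A hand-written equivalent of such a generated snippet (placeholder host and token; verify=False only appropriate for self-signed lab certificates) might look like:

```python
def get_resources_kwargs(vrops, token, resource_kinds):
    """Build kwargs for requests.get() against GET /suite-api/api/resources."""
    return {
        "url": f"https://{vrops}/suite-api/api/resources",
        "headers": {"Accept": "application/json",
                    "Authorization": f"vRealizeOpsToken {token}"},
        # Repeating the query parameter searches multiple resourceKinds at once
        "params": [("resourceKind", kind) for kind in resource_kinds],
    }

if __name__ == "__main__":
    import requests  # third-party: pip install requests
    kwargs = get_resources_kwargs("vrops.example.com", "TOKEN", ["VirtualMachine"])
    print(requests.get(verify=False, **kwargs).json())
```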

When the response is returned, you can see the reply (in JSON) in the Body tab below the request.

I got 103 resources of resourceKind VirtualMachine. Fantastic, but what to do with this response? Once again, the Tests tab comes to the rescue. In this request, I’ve added a test to parse through the response, grab all the resourceIds, and place them into an environment variable named “resourceIds”:


I’m doing this for you in Postman, but in your own code you will obviously need to extract these resourceIds yourself.
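In Python, the equivalent of that Postman test is a short helper. I’m assuming the response shape the suite-api returns for GET /resources – a resourceList array whose entries each carry an identifier field:

```python
def extract_resource_ids(response_json):
    """Collect the identifier of every resource in a GET /resources reply."""
    return [r["identifier"] for r in response_json.get("resourceList", [])]
```

The resulting list plays the same role as the “resourceIds” environment variable in the collection.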

Time to Get Metrics!

Now that I have all the resourceIds I need, I’m going to send another request to query for some metrics.  Let’s say I want to get all CPU metrics for these VMs.

Here’s a pro-tip, since this comes up a lot and can cause your script or code to run very slowly even in moderately sized environments.  Do NOT use repeated GET requests to retrieve metrics or properties.  Instead, use POST to query for stats (the vROps API lingo for metrics) and properties.  For example, in my collection use “Query Metrics of Resources” instead of “Get Stats from Resource (not query)” to get metrics from multiple resources in a single request.

Now, this is where we will need to understand precisely what data your project requires from vROps.

  • What timeframes? E.g., the last 5 minutes?  The last day?
  • How should they be rolled up?
  • Which metrics or properties?

Now is as good a time as any to introduce the API documentation.  You can find it on your vROps appliance by browsing to https://{{your vrops IP}}/suite-api – a full tour of the documentation is a blog for another day.  I’m only showing this to give you an idea of what can be included in your query request:

For example, I want to get all CPU metrics for the virtual machines from the last 24 hours and I want the hourly average of each metric.

The body of my request would look like:
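Reconstructed as a sketch (the actual screenshot uses the collection variables {{resourceIds}}, {{cpuStatsJSON}}, {{epochPrev24}} and {{epoch}}), the body for POST /suite-api/api/resources/stats/query looks roughly like this – field names per the suite-api’s stats query model:

```python
def stats_query_body(resource_ids, stat_keys, begin_ms, end_ms):
    """Build the stats/query body; arguments map onto the Postman variables."""
    return {
        "resourceId": resource_ids,    # fills {{resourceIds}}
        "statKey": stat_keys,          # fills {{cpuStatsJSON}}, e.g. ["cpu|usage_average"]
        "begin": begin_ms,             # fills {{epochPrev24}} (epoch milliseconds)
        "end": end_ms,                 # fills {{epoch}}
        "rollUpType": "AVG",           # hourly average, per the example
        "intervalType": "HOURS",
        "intervalQuantifier": 1,
    }
```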

Notice I have a lot of environment variables in the body!  You probably understand why I have {{resourceIds}} since we covered that previously.  Let me explain the others.

{{cpuStatsJSON}} is a variable that results from the test run after “Get StatKeys for Resource and Load var” to make it easy to find and use statKeys in subsequent requests.  Take a look at the test and note that I’m only extracting statKeys that start with “cpu|” so you can modify this test if you would like to use a different set of statKeys.

Of course, you don’t need to use my environment variable.  You can simply replace the statKey array value with one or more statKeys in JSON format (i.e. ["statkey1","statkey2","statkey3"]).

{{epoch}} and {{epochPrev24}} are timestamps created by a Pre-request Script, the twin feature of Tests.  In this case, before the request is sent, the pre-request script runs and adds the epoch values for the current time and for 24 hours earlier to the environment variables.

No doubt, you would do something similar in your own code.
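In Python, the equivalent of that pre-request script is only a couple of lines – keeping in mind that vROps timestamps are epoch milliseconds, not seconds:

```python
import time

def epoch_window_ms(hours_back=24):
    """Return (begin, end) epoch-millisecond timestamps for the trailing window."""
    now_ms = int(time.time() * 1000)
    return now_ms - hours_back * 3600 * 1000, now_ms
```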

Finally, the query parameters for rollUpType, intervalType and intervalQuantifier provide the formatting for the response data.  In this query, we will get a response for the previous 24 hours for each metric with an hourly average.

Let’s say that instead you wanted to see the maximum for every 30 minutes.  In that case, you would change the format parameters as follows:

"rollUpType": "MAX",
"intervalType": "MINUTES",
"intervalQuantifier": "30",

However, change your “begin” and “end” values to a shorter time period – requesting that level of granularity over a long window will increase the response time.

Overview of the Response

Now we can send the request and get our data!

It may be a little hard to see from the screenshot, but notice the time (upper right corner) for the response was about 2 seconds (for 325 VMs).  Compare that with the response time for getting metrics from an individual resource (around 401ms): multiplied across all 325 VMs, it would take over two minutes to get the same information.  This is why I strongly recommend using the query APIs instead.

Now that you have your data, I want to point out a couple of things.  In the JSON response (XML is available, just change your Accept header) you will find an array of “values” for each resourceId that had metrics meeting the query parameters.  I’ve loaded this into jsonviewer.stack.hu to make it easier to navigate:

What you end up with is an array of timestamps and an array of values (the “data” key).  From here, you just need to write your code to present this information in whatever format you require!
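Assuming the response shape shown in the viewer – a “values” array whose entries each hold a “stat-list” of stats with parallel “timestamps” and “data” arrays – a sketch that flattens it into rows for your own presentation layer might look like:

```python
def flatten_stats(response_json):
    """Yield (resourceId, statKey, timestamp, value) rows from a stats/query reply."""
    for entry in response_json.get("values", []):
        rid = entry["resourceId"]
        for stat in entry["stat-list"]["stat"]:
            key = stat["statKey"]["key"]
            # timestamps and data are parallel arrays of equal length
            for ts, val in zip(stat["timestamps"], stat["data"]):
                yield rid, key, ts, val
```

From there, writing the rows to CSV, a database, or a dashboard is ordinary Python.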

I hope this blog post and the Postman collection helps you with extracting metrics and properties from vRealize Operations.  Feedback is welcome either via comment or Twitter @johnddias.

The post Extracting Data From vRealize Operations with the REST APIs appeared first on VMware Cloud Management.

Coppell ISD Integrates Security into Infrastructure via VMware AppDefense


What do you get when you provide 12,800 kids with technology and programming classes? You get 12,800 people who are getting ready for the modern workforce of today and tomorrow. You also get 12,800 potential vulnerabilities. With the growing quantity of phishing emails, ransomware and malware that Coppell Independent School District (CISD) already had to combat with a small staff, this Texas school system was looking for smarter solutions.

“All these students who have taken programming classes, they’re often looking to bypass administrative privileges, looking for ways around the internet filters, or looking for ways to play games on the school computers,” said Stephen McGilvray, CISD Executive Director of Technology. “So, in addition to all these external threats we have to worry about, we also have a bunch of homegrown, internal threats.”

The school district recently underwent a data center refresh, which included updates for VMware vSphere, VMware App Volumes and VMware Horizon, and launched the implementation of VMware NSX Data Center. During the refresh, their VMware sales rep told them about a relatively new security product called VMware AppDefense.

At its core, AppDefense shifts the advantage from attackers to defenders by determining and ensuring good application behavior within the data center, instead of continuing the losing battle of attempting to identify unknown threats within the internal network.

AppDefense looks to prevent the execution and propagation of attacks with a combination of application whitelisting and behavioral awareness. It also integrates into NSX Data Center to prevent the lateral spread of threats with adaptive micro-segmentation. Security is enforced at the most granular level of an app – the process level – and micro-segmentation of the network stops an attack in its tracks.

“Before, our security footprint had a strong dependence on keeping people out,” said Systems Engineer James Holloway. “We didn’t have much in place for mitigation, if and when somebody did get through.”

“The AppDefense message really resonated with us,” said McGilvray. “With a small technology team serving 17 schools, anything that saves us time is appreciated. We liked that AppDefense integrated easily with NSX and into our VMware stack, to add another layer of security.”

CISD started its AppDefense journey by protecting its most critical systems, including file servers and student information databases – “The things that would cause the most headaches if they were compromised,” said Holloway. “Along the way, we’ve gotten a better understanding of how all our systems communicate. You think of your servers, and each has different services, but from a technical standpoint we were surprised to see just how much data is going in and out. That drill-down opened our eyes to how much we might have been missing.”

 

Secure, Game-changing App Delivery

Horizon and App Volumes are another significant part of CISD’s VMware footprint. The school district uses Horizon to provide remote access for students and teachers. Students can securely access their school desktops and applications and access school apps from anywhere using their school-issued Apple iPads, with security underpinned by NSX Data Center and AppDefense.

Holloway calls App Volumes a “game changer. We’re able to easily provide whatever apps our teachers need and meet their expectations.” He noted that while 90 percent of the school district’s devices are Apple, the district has purchased many programs over the years that are Windows-only or have other system restrictions. CISD uses App Volumes to deliver those apps to any endpoint, regardless of infrastructure or operating system.

 

Flexible and Responsive for the Future

CISD’s IT team appreciates the flexibility they’ve seen from their software-defined data center refresh. Using virtualization and hyperconverged infrastructure, they were able to shrink their hardware footprint and related costs such as power and cooling. “Our systems are fairly static and don’t change that much,” said McGilvray. “Yet, when we do need to make a change, we can make it quickly, scale it out and give people what they need. That’s just gotten better and better as VMware has refined its products over the years.”

In the future, CISD plans to expand their AppDefense and NSX Data Center footprints to more secondary systems. With AppDefense, “We know we’ll be alerted when something isn’t operating within its normal parameters,” said Holloway. “For us, the value of AppDefense lies in having peace of mind.”

 

Learn more about VMware AppDefense.

The post Coppell ISD Integrates Security into Infrastructure via VMware AppDefense appeared first on Network Virtualization.
