
NSX Service Mesh on VMware Tanzu: CONNECT & PROTECT Applications Across Your Kubernetes Clusters and Clouds


Authors: Mark Schweighardt, Tom Spoonemore

Modern enterprises are sprawling and complicated. They are transitioning from private to public clouds to address, for example, performance, availability, and data residency requirements, and to gain access to advanced services such as analytics and ML. They are also transforming their application architectures from monoliths to distributed microservices.

In August 2019, VMware introduced VMware Tanzu, a new portfolio of products and services to transform the way enterprises BUILD modern applications on Kubernetes, consistently RUN Kubernetes across clouds, and MANAGE Kubernetes fleets from a single control point. This is a huge win for our customers: they can use Tanzu Mission Control to consistently create and manage the lifecycle of Kubernetes clusters across any cloud.

But how do we consistently connect and secure traffic between the services distributed across all of these clusters and clouds, while delivering on application SLAs? Today we further develop this picture by introducing NSX Service Mesh on VMware Tanzu. NSX Service Mesh provides an application connectivity and security fabric that can span all of your Kubernetes clusters and cloud environments. NSX Service Mesh allows you to:

  • CONNECT – Intelligently control and observe traffic and API calls between services. 
  • PROTECT – Implement consistent security and compliance policies across clouds. 

Now let’s look closer at how NSX Service Mesh provides consistent connectivity and security for microservices – across all of your Kubernetes clusters and clouds – in the most demanding enterprise architectures. 

CONNECT: Control & Observe Application Traffic 

NSX Service Mesh provides fine-grained, rule-based traffic management that gives you complete control over how traffic and API calls flow between your services, and across clusters and clouds.  

Easily configure global load balancing with failover across clusters in multiple zones or regions, split traffic and route requests based on percentages or request content, and implement application resiliency using timeouts/retries and circuit breakers. You can also test your application in production using traffic mirroring.
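NSX Service Mesh is built on Istio, so traffic rules like these can be pictured as declarative resources. As a rough sketch in stock Istio syntax (the service name and subsets are hypothetical, and the exact resources NSX Service Mesh exposes may differ), a 90/10 traffic split with retries and a timeout could look like this:

    # Illustrative only: send 90% of traffic to v1 and 10% to v2,
    # retrying failed requests and bounding overall request latency.
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: checkout
    spec:
      hosts:
        - checkout
      http:
        - route:
            - destination:
                host: checkout
                subset: v1
              weight: 90
            - destination:
                host: checkout
                subset: v2
              weight: 10
          retries:
            attempts: 3
            perTryTimeout: 2s
          timeout: 10s

Gradually shifting the weights is a common way to roll out a new version while limiting the blast radius of a bad release.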

NSX Service Mesh also integrates with open source and commercial monitoring and troubleshooting tools, including Wavefront by VMware, for full observability into application performance and faster root-cause analysis and mean time to repair.

PROTECT: Secure Sensitive Data 

As we have more traffic and data flowing across clusters and across clouds, we need to protect it. NSX Service Mesh helps you implement a zero-trust security model for modern, cloud-based applications. It does this by: 

  • Strongly authenticating the identities of users and services 
  • Authorizing users and services to communicate by analyzing the request context 
  • Encrypting application traffic traversing inside and across clusters and clouds
  • Logging all request and access events for auditing and compliance 

Together, these capabilities form the foundation of zero-trust security for applications, and protect the sensitive data flowing across all of your clusters and clouds.
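Because NSX Service Mesh is built on Istio, these controls can also be expressed as declarative policy. As an illustrative sketch (the namespace, workload, and service account names are hypothetical), an authorization rule that only lets the frontend's strongly authenticated identity call a payments workload could look like this:

    # Illustrative only: allow GET/POST to 'payments' exclusively from
    # the authenticated 'frontend' service account; all else is denied.
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: payments-allow-frontend
      namespace: prod
    spec:
      selector:
        matchLabels:
          app: payments
      action: ALLOW
      rules:
        - from:
            - source:
                principals: ["cluster.local/ns/prod/sa/frontend"]
          to:
            - operation:
                methods: ["GET", "POST"]

Requests from any other identity are rejected, which is exactly the default-deny posture a zero-trust model calls for.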

Strong Isolation for the Enterprise

Enterprise infrastructure and applications are not the only things becoming more complex; so are organizational structures, with multiple lines of business, teams, and diverse product offerings. This business transformation has resulted in an explosion of applications and data that extend beyond and across organizational boundaries.

Due to this complexity, enterprises need isolated environments for their application teams. To achieve strong isolation, NSX Service Mesh introduces a new construct called Global Namespaces (GNS). As its name implies, a global namespace can consist of microservices distributed across multiple Kubernetes clusters and clouds. Each global namespace provides its own identity management, policies (e.g., security and SLOs), naming / DNS, and traffic routing. 

Global namespaces provide ideal, strongly isolated environments that can be offered to different teams managing different applications and data, and can also be used to isolate dev, test, and prod environments. And of course, you can have as many global namespaces as you need. 

One more significant benefit of a GNS: it decouples your applications from the underlying Kubernetes infrastructure, so you can more easily move applications across cloud environments while maintaining consistent and continuous configs and policies, regardless of where your workloads are running.

A Complete Platform for Application Modernization and Journey to the Cloud 

With comprehensive build, run, manage, connect, and protect capabilities, VMware Tanzu and NSX Service Mesh provide benefits across your entire organization as you modernize your applications and adopt cloud.

Developers benefit from simplified application development and deployments. Operators benefit from application SLOs, resiliency, and observability – from a single management plane. And security teams benefit from consistent policies and visibility across application environments for auditing and compliance oversight.

NSX Service Mesh - CONNECT & PROTECT

 

Meet-Up with Us at KubeCon! 

The NSX Service Mesh team will be active at KubeCon in San Diego this week: 

Stop by VMware Booth D6 to see Tanzu and NSX Service Mesh in action. Or collaborate and interact with VMware contributors at Community Lounge 4.

You can also watch a recent webinar by Joe Beda and Pere Monclus discussing how Tanzu Mission Control, in conjunction with NSX Service Mesh, provides a consistent way to manage application-level connectivity and security for microservices running inside your Tanzu-managed Kubernetes clusters.



vRealize Network Insight Search Poster for SD-WAN & VeloCloud


Continuing our Search Poster series, we’ve arrived at the SD-WAN & VeloCloud search poster! Using the search engine inside VMware vRealize Network Insight can be a revealing experience. It has every single bit of data you ever wanted to see about anything in your infrastructure and it’s available at your fingertips. Because of the vast amount of available data, the search engine is extremely versatile and there are many options available in its syntax.

To make learning the search engine a bit easier, we’ve launched a Search Poster series that will be showcasing the most useful search queries for a specific area. Today, we’ve added the SD-WAN & VeloCloud search poster.

As you might know, vRealize Network Insight 5.0 introduced an integration with VeloCloud and launched Network Insight into a new market: SD-WAN. Using the search engine, any and all information on the SD-WAN is at your fingertips.

Kickstart learning the search engine using these search posters:

VeloCloud Search poster preview


vRealize Network Insight Search Poster for NSX-T


Continuing our Search Poster series, we’ve arrived at the NSX-T search poster! Using the search engine inside VMware vRealize Network Insight can be a revealing experience. It has every single bit of data you ever wanted to see about anything in your infrastructure and it’s available at your fingertips. Because of the vast amount of available data, the search engine is extremely versatile and there are many options available in its syntax.

To make learning the search engine a bit easier, we’ve launched a Search Poster series that will be showcasing the most useful search queries for a specific area. Today, we’ve added the NSX-T search poster.

vRealize Network Insight has full support for both NSX for vSphere and NSX-T. This search poster lists all NSX-T information that can be uncovered with the search engine.

Kickstart learning the search engine using these search posters:

vRealize Network Insight NSX-T Search Poster


Cloud Assembly Now Supports OpenShift Integration


 

VMware Cloud Management strategy is evolving very quickly these days. We are increasingly opening our arms to integration with other platforms, aiming to be a control plane layer above as many aspects of multi-cloud environments as possible. Our latest integration, expanding the platform to help IT operators maintain an ever-growing landscape of capabilities, is with OpenShift. In this blog I am going to walk through setting up the OpenShift integration with VMware Cloud Assembly and talk through some of the capabilities that come with it.

 

Requirements and Prerequisites:

 

  • The integration currently supports OpenShift 3.x environments (support for the latest version is coming soon)
  • An existing OpenShift environment needs to be available. Information on building an OpenShift environment via Cloud Assembly can be found here.

 

Setting up the Integration in Cloud Assembly:

From within Cloud Assembly you can easily set up the integration with OpenShift by simply selecting it from the list of integration types.

Once you are in the configuration screen, you simply need to enter the information for your OpenShift environment. This includes the following:

  • IP address (with port) of the OpenShift cluster
  • Location: Supports cloud-based OpenShift as well as on-premises environments. When using on-premises environments, a cloud proxy will be needed.
  • Cloud Proxy: Only needed for integrating with on-premises OpenShift environments.
  • Credentials type: Username, Certificate, or Token based credentials can be used. (In this example we use username and password.)
  • Name: The unique name you want for this environment.
  • Sharing: In Cloud Assembly you can make the OpenShift cluster available to all projects (Globally) within Cloud Assembly or just to a specific project.

 

 

Once you have entered the necessary information, click SAVE. You are then ready to start using the OpenShift environment in Cloud Assembly, and you will see that a Kubernetes Zone has been created automatically.

 

To understand how you can use any Kubernetes cluster within Cloud Assembly, you can refer to this blog on creating self-service namespaces in a Kubernetes environment through Cloud Assembly.

 

 

 

 

Other Features For vRealize Automation Cloud:

Input from GUI in the Blueprint

Explicit Deployment Ownership


Tweet Chat Recap: Getting Started with Project Pacific – vSphere & Kubernetes


We know you have a lot of questions about Project Pacific, so to help you gain a better understanding, we hosted a vSphere Tweet Chat with some of our experts. Our two main featured guests were Tasha Drew and Jared Rosoff, but a variety of other experts, such as Joe Beda, Nikitha Suryadevara, David Stamen, Dilpreet Bindra, and Timothy St. Clair, joined us as well. View their invaluable insights into the inner workings of Project Pacific, along with what’s in store, by checking out the full vSphere Tweet Chat recap:

Tasha Drew

Jared Rosoff

A1: Project Pacific is a re-architecture of vSphere that puts Kubernetes at the heart of the platform. It lets you use Kubernetes to manage not just containers, but all of your virtual infrastructure. As a part of VMware’s Tanzu portfolio of products, Project Pacific provides the layer that RUNs modern applications. Tanzu also brings all of the tools you need to BUILD and MANAGE modern apps too. Altogether, it’s a powerful portfolio of solutions. When I explain Project Pacific to folks, I usually say that it is really 3 things:

  • First, it is embedding a special purpose k8s cluster in vSphere in order to make vSphere better. It exposes a direct interface using k8s API patterns.
  • Second, Project Pacific is a set of capabilities to make Kubernetes run great on top of vSphere. This includes exciting things like vSphere Native Pods, where we adapt ESXi to run Pods natively.
  • Third, Project Pacific is an extended set of capabilities (using the supervisor cluster) for launching “Guest Clusters” for running applications.

Tasha Drew

A2: Kubernetes is awesome because it’s a widely adopted, fully open source API driven control plane that you can easily extend. It’s easy to build on top of, and since we’re building up from the infrastructure, that’s exactly what we, our users, developers, and partners need. As far as re-architecting vSphere, Kubernetes gives us the developer-focused interface we need to enable self-service and still gives Ops total control via the vSphere UI & tools they know and love.

Nikitha Suryadevara

A2: Rearchitecting vSphere gives you the best of both worlds!

David Stamen

A2: Why not? vSphere is already one of the best places to run your workloads, so why not integrate one of the well-known tools your developers are already using?


Q3: How did the Heptio acquisition influence Project Pacific?


Joe Beda

A3: I have a bit of a unique perspective here. I think one of the things that Heptio did was bring the perspective around using Kubernetes as a distributed control plane vs. just focusing on running containers. This led to the supervisor vs. guest cluster split.

Dilpreet Bindra

A3: While we had been working on aspects of Project Pacific prior to the acquisition, it was only after it that we were able to realize the full value of Pacific and Tanzu. Thanks to @forjared, @kostadis_tech and @jbeda – they all worked to nail down the current strategy.

David Stamen

A3: Allowed two amazing teams to come together and work on a great product!

 


Tasha Drew

A4: Being able to automate the lifecycle of applications on VMs as an infra choice is 100% the future, and that’s what we can do with Project Pacific and Kubernetes! Now your devs get VMs on demand (via a VM Operator), your Ops team still has complete mgmt/control via vCenter policy, and you can use the Operator/CRD pattern to automate your application’s lifecycle on VMs and containers.

Joe Beda

A4: VMs and Containers are complementary. The lines are blurring with things like vSphere Native Pods. And also having automated infra available makes managing clusters as cattle possible with things like Cluster API. Horses for courses.

David Stamen

A4: VMs will continue to exist, but know they can run alongside their best friends, the container!


Tasha Drew

A5: So many reasons!! You can create Namespaces in the vSphere UI, attach policy (compute, network, storage, security, availability…), and give devs access to it. Managing at a namespace/app level makes you way more powerful and unlocks cloud-like self-service to your devs.

Jared Rosoff

A5: VI admins are the lifeblood of enterprise infrastructure. They provide the virtualized compute, network and storage resources that run the majority of enterprise workloads today. With Project Pacific, VI admins extend their skills to manage ALL workloads, not just the VMs. The biggest challenge facing enterprise IT today is how to evolve their infrastructure architecture to support these new applications. Almost every solution out there before Project Pacific required that you build a new infrastructure platform and then refactor all your existing apps. With Project Pacific, we finally have an evolutionary path where your existing platforms, tools and skills can support these new applications. With little more than an upgrade of your existing infrastructure platform, you can turn your data center into a modern application platform.

Nikitha Suryadevara

A5: Kubernetes is so natively integrated into familiar screens and workflows in vSphere that the learning curve (if any) is going to be minimal.

Joe Beda

A5: Project Pacific gives VI admins new tools for application teams so they can be more effective. Also, there are more direct ways for those app teams to interact with vSphere.


Jared Rosoff

A6: Modern applications are not just a single VM or container, they’re groups of VMs, Containers, Disks, and Networks that implement a service or application. We want to move infrastructure management to the point where admins can manage logical applications, not just VMs. Project Pacific embraces Namespaces as the new unit of infrastructure management. Tasks like resource allocation, mobility, security and data protection can work at the application level rather than having to be configured on each VM. This creates massive time savings for the admin. Remember that this is a layer UNDERNEATH regular Kubernetes clusters. So, you can create a namespace in vSphere, and then provision a bunch of k8s clusters into it and treat it as a single unit of management. It’s turtles all the way down.

Dilpreet Bindra

A6: Kubernetes Namespace becomes the fundamental management unit for your complex workloads in Project Pacific.

David Stamen

A6: Namespaces are awesome! They let you allocate specific resources such as CPU, Memory and Storage to make sure your applications can run smoothly!


Tasha Drew

A7: Bring us your containerized cloud-native applications! Your legacy VM-based workloads! Your distributed stateful data services! Your FaaS!

Jared Rosoff

A7: We see a broad interest in Project Pacific for lots of different kinds of apps. Web applications, AI/ML, network function virtualization, internet of things, and stream processing are some common ones. There are two big trends we see across these apps. 1: Organizations that want to modernize their application development processes to be more agile and productive. 2: Organizations that have mission critical workloads that need the highest degrees of security and availability. One thing that I think is lost in this “pets vs. cattle” discussion is that if you’re a farmer, you might not be attached to an individual cow, but you DEFINITELY care whether or not your farm is producing milk. If you depend on your apps, vSphere is the best place to run them.

David Stamen

A7: Any application that can run as a container is welcome to run on Project Pacific!


Tasha Drew

A8: We are going to be shipping training docs and making things super easy to get started on the enablement front from vSphere!

Jared Rosoff

A8: If you’re a VI admin today, you’re already 90% of the way there. If you’ve just been doing compute virtualization, now is the time to start learning about network and storage virtualization as these technologies will become ubiquitous with modern applications. You should be engaging with your development teams and working with them to develop new workflows and processes for application lifecycle management. The more involved they are in this process, the better your evolution will go. Of course, you should also sign up for the Project Pacific beta, here.

David Stamen

A8: If you are not familiar with @kubernetesio, it’s time to start learning! I recommend the Kubernetes Academy (https://kubernetes.academy) and @nigelpoulton‘s Pluralsight series. Also, don’t forget to sign up for the beta!


Jared Rosoff

A9: Oh, it’s hard to pick a favorite child. The CRX is pretty exciting. The idea that you can get the same kinds of performance and simplicity of containers but with the security and performance properties of a virtual machine is pretty killer. I also love the fact that Kubernetes becomes the IaaS API. Having been an AWS user for more than 10 years, k8s based IaaS feels like what AWS should have been. Using declarative configuration that’s source control friendly is dope. Having it native to the platform is super dope.

Tasha Drew

A9: @Forjared Imma let you finish, but the vSphere Kubernetes Service + clusterAPI service + VM Operator is THE BEST FEATURE OF ALL TIME.

Nikitha Suryadevara

A9: Being able to manage all the different objects in a namespace (VMs, containers, native pods, Kubernetes clusters) transparently is an amazing capability offered by Project Pacific.

David Stamen

A9: I think my favorite capability is the ability to run native pods on vSphere. Think of all that performance!


Jared Rosoff

A10: Do you NEED it? no. But once you try it, you’ll never go back. Running Kubernetes needs a programmatically controlled infrastructure that is able to orchestrate all of the components of a Kubernetes cluster – not just provisioning, but also health checks, recovery, upgrades… Sound familiar? Kubernetes was designed to orchestrate complex applications. So, turning it on Kubernetes itself is like chocolate and peanut butter. It just FEELS right. And technically it’s not turtles all the way down. There’s a ring 0 turtle that is the most privileged turtle.

David Stamen

A10: You do not need Kubernetes to run Kubernetes, that is what Project Pacific is for! The best turtles are the @VMwareTurtles.


Tasha Drew

A11: It provides a powerful, consistent way for partners to provide their services on top of or as part of vSphere, using the Project Pacific k8s control plane, exposing things as k8s objects, and/or running Operators/CRDs in the conformant Kubernetes service layer!


Tasha Drew, David Stamen, and Dilpreet Bindra all answered Q12 with a GIF.


Bonus Q:

David Stamen

Bonus A: Definitely not, we utilize multi-master capability to allow services to be highly available.


Thank you for joining our Project Pacific vSphere Chat, featuring some insightful (and witty) experts from the #vSphere and #ProjectPacific teams! See you next time, and keep an eye out for more information in our blog.

A special shout out to our featured experts, Tasha Drew (@TashaDrew) and Jared Rosoff (@forjared), along with our other experts and vSphere fans from around the globe who tuned in and contributed.

Follow the additional vSphere experts who participated in this chat, including Joe Beda (@jbeda), Nikitha Suryadevara (@lafemmenikitha), David Stamen (@davidstamen), Dilpreet Bindra (@teran13) and Timothy St. Clair (@timothysc).

Remember to follow @VMwarevSphere and stay tuned for our monthly expert chats. Join the conversation by using the #vSphereChat hashtag and asking your own questions. Have a specific topic you’d like to cover? Reach out and we’ll bring the topic to our experts for consideration. For now, we can’t wait to connect with you in the Twittersphere!


Cloud Migration Series: Part 3 – The HCX Interconnect and Multi-Site Service Mesh


In Part 2 of our Cloud Migration Series, I provided an overview of HCX, including the various appliances that make up the solution and the migration options you gain access to. With the number of appliances that must be deployed, configuration might seem daunting and complicated, but the Service Mesh aims to ease that process significantly.

 

The HCX Interconnect

 

Initial deployment of HCX is fairly simple; on the cloud side, it’s deployed with a single click. Following the cloud deployment, we simply deploy the HCX Manager on-premises.

Following this, additional appliances are required at both the source and destination sites for the various services HCX offers – WAN Optimization, Network Extension, Migration, Disaster Recovery. All of these appliances must be interconnected. The interconnect provides an optimized fabric for the HCX services allowing for secure migration and network extension. We utilize the Multi-Site Service Mesh interface to create the interconnect.

 

Multi-Site Service Mesh

 

Rather than deploying and configuring all of the appliances manually, the Service Mesh allows you to easily configure, deploy, and manage the appliance pairs. Within the Interconnect interface, we have options for Site Pairing, Compute/Network Profiles, Service Mesh, and Sentinel Management. The benefits of this interface include:

 

  • Using the same configuration patterns across sites
  • Allowing multiple HCX sites to utilize the same Compute and Network Profiles
  • Deployment of appliances in parallel
  • Deployment diagrams
  • Operational task tracking
  • Provided firewall rule requirements

 

The Compute and Network Profiles allow us to define details about how the appliances should be deployed. This includes placement details – Cluster, Resource Pool, and Datastore. We will also define the networks (Management, Uplink, vMotion, vSphere Replication, and Guest) that the appliances and services should use as well as IP pool information. At the end of the profile creation, you’re provided with WAN and LAN firewall rule tables to aid in the configuration of on-premises and cloud firewall rule creation. In the example below, we have created a simple Compute Profile selecting resources in our on-premises data center. Here we show that we will be deploying appliances to the vSAN datastore, in the MGMT Resource Pool of our San Jose data center. We will be using the SJC-CORP-MGMT port group on the SJC-CORP distributed switch.

 

HCX Compute Profile

 

A Site Pair is established for the connection between HCX Managers on-premises and in VMware Cloud on AWS. This allows for full management, authentication, and orchestration of HCX services between sites. There are a few different HCX site topologies that can be used:

 

  1. One to One: One source and one destination
  2. Many to One: Multiple source sites to a single destination (think multiple on-premises data centers to a single VMware Cloud on AWS SDDC)
  3. One to Many: A single source site to multiple destinations (think a single large on-premises data center to multiple VMware Cloud on AWS SDDCs)

 

Below, you can see that we’ve established a site pair from our on-premises data center in San Jose to our VMware Cloud on AWS SDDC in US West 2 (Oregon).

 

HCX Interconnect Site Pair

 

The Service Mesh allows us to define the profile pairs at both sites, and once created, deploys the service appliances automatically. This also provides you with a topology diagram, a list of appliances, and running/completed tasks.

 

HCX Service Mesh Topology

 

Remember, HCX is an invaluable tool for workload mobility that’s included with VMware Cloud on AWS. The Multi-Site Service Mesh really simplifies the deployment and configuration of HCX overall. This allows you to quickly lay the groundwork to start extending networks and migrating virtual machines successfully to the cloud. If you’d like to step through the configuration yourself to get a better understanding, check out our feature walkthrough.

 

For a deeper dive into all things HCX, Gabe Rosas has a wealth of knowledge on his blog hcx.design. Stay tuned for the final part of our Cloud Migration Series where we’ll discuss planning, sizing, and additional considerations.

 

Cloud Migration Series

 

Part 1: Getting Started with Hybrid Cloud Migration
Part 2: VMware HCX Overview
Part 3: The HCX Interconnect and Multi-Site Service Mesh
Part 4: Planning and Considerations

 

Resources

 

 


VMware Security Advisories in vSphere Health


Determining which VMware products in your datacenter have been patched for new or existing vulnerabilities can be tedious at times. What if your vSphere environment was able to do this for you without much effort? Well, look no further than vSphere Health. vSphere Health’s newest health checks (vSphere Health will be renamed to Skyline Health in a future release) are now not only looking out for your vSphere environment’s health but also for possible security-related vulnerabilities in vCenter Server and ESXi.

VMware Security Advisories in vSphere Health

New to vSphere Health (vSphere 6.7U1 and higher) comes the ability to scan your vSphere environment for security vulnerabilities that are reported in VMware products and documented on the VMware Security Advisories [https://www.vmware.com/security/advisories.html] online listings page. Customers are now able to quickly see, via vSphere Health, if vCenter Server or ESXi have any applicable Security Advisories to be aware of.

vSphere Health

In the past, a bit more effort was required to discover such advisories and vulnerabilities. It meant visiting the VMware Security Advisories page to find out what security vulnerabilities had been reported in VMware products and then cross-referencing them to the versions deployed in a customer datacenter. Next may have involved some research to find what patch level or update would resolve the discovered vulnerability.

Security Health Checks

Today this process has become quite easy by simply leveraging vSphere Health. Within the vSphere Client, and while selecting vCenter Server, we can view the Security Health Checks that relate to the installed versions of vCenter Server or ESXi and quickly see any detected Security Advisories. As shown in the demo below, important details are listed such as: Security Advisory (name/number of advisory), Health, CVSSv3 (Common Vulnerability Scoring System; v3.x standard), and Resolution Patch.

 

The columns contain important information: Security Advisory maps to the VMSA advisory number (e.g., VMSA-2019-0013); Health shows yellow or green depending on whether an advisory’s fix has been applied; CVSSv3 stands for Common Vulnerability Scoring System version 3, the standard used for scoring the severity of software vulnerabilities; and Resolution Patch displays the version of vCenter Server or ESXi that includes the fix for the VMSA security advisory. This simple table view enables customers not only to be alerted to a particular security advisory, but also to see its name, severity, and solution.

VMware Security Advisories in vSphere Health

 

CVSS scores are often used as a standard measurement for calculating the severity of a vulnerability as well as a way to prioritize remediation efforts when patching affected systems. The Common Vulnerability Scoring System supports both version 2 (v2.0) and version 3 (v3.x) score standards. Understanding these severity rankings or score ranges can be helpful in coordinating remediation activities. This chart may come in handy during those exercises.

VMSA CVSS v3.0 Ratings
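For reference, the standard CVSS v3.x severity bands are:

Severity     CVSS v3.x Score Range
None         0.0
Low          0.1 – 3.9
Medium       4.0 – 6.9
High         7.0 – 8.9
Critical     9.0 – 10.0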

Closing

To learn more about vSphere Health, vSphere Security, or VMware Security Advisories please visit the below resources.

 

Take our vSphere 6.7: Getting Started Hands-On Lab here, and our vSphere 6.7: Advanced Topics Hands-On Lab here!

 


Cloud Migration Series: Part 4 – Planning and Considerations


In the first three parts of our Cloud Migration Series, we discussed the hybrid cloud and migration methods, provided an overview of HCX, and talked about the importance of the service mesh. These are all critical to understand, but even more important are the aspects of planning, design, and other considerations. Without proper planning, and without really understanding your workloads, the outcome of your migration will be grim. To be successful, it’s important to understand the key considerations below.

 

Application Dependencies

 

Before virtual machines can be moved to the cloud, application dependencies between other VMs, applications, and services must be understood. A single virtual machine might be communicating with many other virtual machines and services. Let’s think about a very simple example – a three-tier application stack – there could be a variety of consequences if we migrate only a portion of the application stack. When interacting with the application, latency might introduce poor performance, or cause the app to not function at all. Egress traffic could incur additional costs that weren’t accounted for. If other applications are leveraging this stack, they could be impacted as well.

 

It’s imperative to understand the dependencies, and the process of doing so could be dramatically simplified by using tools such as vRealize Network Insight to see the communication between virtual machines and applications. This will not only assist in determining which applications need to migrate to the cloud together in groups but can also help estimate data egress charges if some data remains on-premises.

 

In addition to using tools, it helps to conduct an interview with application owners. This could bring additional information to light that normally wouldn’t be found by using tools such as application sensitivity and other constraints that need to be considered prior to migration.

 

Sizing

 

Sizing your new cloud environment appropriately is incredibly important, and it’s a delicate juggling act from both performance and cost perspectives. The best way to approach sizing is to collect as much information as possible about the workloads you intend to move. I prefer to start off with everyone’s favorite – RVTools – and layer in data from vRealize Operations. Though, you can use any tool you wish to get the job done as long as it’s providing complete and accurate information. Once the data is collected, you can plug the numbers into the VMC Sizer to determine how your SDDC should be sized. Dan Florea wrote a couple of great articles detailing the general use of the sizer and using it for memory bound workloads.

 

Again, I can’t stress enough how much you need to understand the workloads and applications in your environment. This will aid in making design decisions around host and cluster types. For most workloads, I3 instances are perfect, but for workloads that have larger memory or storage footprints, R5 instances may be preferred. In addition, Tier 0 (mission-critical) applications may have higher SLAs and require a stretched cluster over a traditional cluster. These design requirements depend on the accuracy of the data collected and a good understanding of the applications and their dependencies.

 

The VMC Sizer will provide you with a full recommendation and breakdown of resources including the number of hosts, estimated cores and RAM used, estimated used and free storage, and breakdown of the storage footprint. It will also provide a list of assumptions made in the report.

 

VMC Sizer Report

 

Migrations

 

In the previous parts of this series, we talked about the various migration methods available, including HCX. Now that we understand the applications and dependencies, we can start to categorize them into three buckets – Downtime, Minimal Downtime, and No Downtime. Categorizing them like this allows you to choose an acceptable migration method:

 

Downtime: Cold Migration, vCenter Converter

Minimal Downtime: HCX Bulk Migration, HCX OS Assisted vMotion

No Downtime: vMotion, HCX vMotion, HCX Replication Assisted vMotion

 

After putting the VMs into categories, we can start building out waves. The intent here is to migrate the VMs that can incur downtime first, and use them to pilot the actual migration, management, performance, and operational aspects. This will help weed out any potential issues before the critical apps that cannot have any downtime are migrated.

Migration Waves

VMware Cloud Migration Solution

 

There are a lot of moving parts when embarking on this journey – planning, sizing, building, configuring, testing, monitoring, and the list goes on. There is a human element to this that makes all of these tasks seem daunting, in fact, it would be pretty easy to miss a step along the way. As such, VMware has developed the Cloud Migration Solution to help.

 

This is a step-by-step migration guide divided into three sections: Plan, Build, and Migrate. Each section provides the steps necessary, in sequential order, from deploying your SDDC all the way through migrating your virtual machines. You’re provided a checklist to help keep track of your progress, and the steps include instructions, tools, and links to make your migration successful.

 

Migration Solution

 

Cloud Migration Series

 

Part 1: Getting Started with Hybrid Cloud Migration
Part 2: VMware HCX Overview
Part 3: The HCX Interconnect and Multi-Site Service Mesh
Part 4: Planning and Considerations

 

Resources



The 10 Foundations of DevOps for Traditional Applications: Virtual Machines are Not Dead


Copyright © 2019 VMware, Inc. All Rights Reserved.

1.1        Introduction

Much has been written about the benefits of adopting containers and Kubernetes and about application refactoring and modernization to take advantage of the capabilities of these modern platforms. But what about cases where refactoring application code is either not possible or not worthwhile? Some examples could be:

  • A COTS application that is business critical but not provided in containerized form.
  • A large complex enterprise application where the effort and associated cost involved to refactor the entire codebase is simply not worthwhile or at least not worthwhile to complete in the near term.
  • A hybrid application where some traditional, virtual machine based pieces are being retained while other new functionality is being containerized.

These applications will require non-container based provisioning and we want to ensure our traditional applications can be maintained, updated and partially or completely refactored in the most expedient way. To meet these and many other use cases for traditional application deployment we need to apply modern deployment and management principles.

In this post I’m going to look at how we would apply our 10 Foundations of DevOps, as detailed in the blog post A Foundation for DevOps: Establishing Continuous Delivery, along with our Guiding Principles of DevOps described in the VMware white paper DevOps and Agile Development.

1.2        Goals and Constraints

The high level goal of this solution is to implement a modern approach to the deployment and management of traditional applications:

  1. Implement the Guiding Principles of DevOps with a focus on:
    • Infrastructure as code.
    • Automate the delivery pipeline.
    • Immutable Infrastructure.
  2. Address the 10 Foundations of DevOps
  3. Create a standardized solution set that could be implemented across multiple clouds both private and public.

1.3        The Components

The diagram below shows the major components selected for this solution and a basic indication of how they will fit together; this is subject to change as the build-out progresses.

Note: This diagram does not show the actual application code or the automated build of the artifact (application binaries).

 

10FoD Traditional Apps

The components for this solution have been selected with the above goals and constraints in mind. As we proceed through the build out of the solution additional components may need to be added. This is just a starting point.

Stack                           Component
Plan Stack                      Trello
Coding Stack                    VSCode
Commit Stack                    GitHub
Continuous Integration Stack    vRealize Cloud – Code Stream
Test Stack                      vRealize Cloud – Code Stream
Artifact Stack                  Artifactory
Continuous Deployment Stack     vRealize Cloud – Cloud Assembly, Bitnami Application Catalog
Control Stack                   vRealize Cloud – Cloud Assembly
Feedback Stack                  Git Issues

Some of the highlights of the component selections are as follows:

1.3.1     vRealize Cloud – Cloud Assembly

Cloud Assembly provides a GitHub integrated mechanism for creating yaml based blueprints. These blueprints can either be written to be cloud agnostic or with minor modifications can be utilized to provision to different cloud endpoints. Cloud endpoints available at the time of writing are:

  • AWS
  • Azure
  • GCP
  • vSphere-based clouds including VMware Cloud on AWS
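As a minimal sketch of such a blueprint (the 'ubuntu' image mapping, 'small' flavor mapping, and the constraint tag are assumptions for illustration), a cloud-agnostic machine could be declared like this:

    # Minimal Cloud Assembly blueprint sketch; 'ubuntu' and 'small' are
    # assumed image/flavor mappings, and the tag steers placement.
    formatVersion: 1
    inputs:
      env:
        type: string
        default: dev
    resources:
      Cloud_Machine_1:
        type: Cloud.Machine   # cloud-agnostic resource type
        properties:
          image: ubuntu
          flavor: small
          constraints:
            - tag: 'env:${input.env}'

Because the resource type is the agnostic Cloud.Machine rather than a cloud-specific type, the same blueprint can target any of the endpoints above, with placement resolved by tags on your cloud zones.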

1.3.2     vRealize Cloud – Code Stream

Code Stream is a CI/CD platform supporting the deployment of Cloud Assembly generated virtual machines, with extensibility to integrate with multiple third-party tools.

1.3.3     Bitnami Application Catalog

The Bitnami Application Catalog provides a huge selection of pre-configured templates of commonly used software stacks configured in a standardized way. These pre-configured application stacks are available for a wide range of deployment types, which, crucially for this solution, include:

  • AWS EC2
  • Azure
  • vSphere

Standardizing on Bitnami provided software stacks will mean that the baseline deployments are consistent across multiple clouds, whether we deploy to AWS EC2, Azure or vSphere.

1.4        Conclusion

By implementing the documented Guiding Principles of DevOps and the 10 Foundations of DevOps model, and by selecting a standardized set of components and tools that are interoperable with multiple cloud endpoints, customers can modernize the way their traditional applications are delivered, managed and maintained.

As noted above, the diagram excludes the piece of the process showing the application binary being built. There are many different tools and processes to achieve this outcome. A possible example could be to use a combination of Git, Maven and Jenkins to commit code, automate the binary build process and then store it in Artifactory.

 

James Wirth (@jameswwirth) is a member of the emerging technologies team within VMware Professional Services and focuses on DevOps, Site Reliability Engineering, Multi-Cloud architectures and new and emerging technologies. He is a proven cloud computing and virtualization industry veteran with over 15 years’ experience leading customers through their cloud computing journey.


The Evolution of Content Library


Content Library arrived in vSphere 6.0 as a storage container for virtual machine templates, as well as scripts, text files, and ISO images. The idea is to have an efficient and centralized method to manage important content required in a vSphere environment. Instead of mounting a share to each ESXi host to distribute these items, Content Library solves that by sharing library items between vCenter Servers. Sharing content in this manner allows for virtual machine templates, ISOs, or appliances to be deployed directly from a Content Library.

Content Library uses two types of libraries: Local and Subscribed. A Local library is used to store items locally in a single vCenter Server instance. Local libraries can be set to publish their content so other vCenter Servers can then subscribe to fetch the items they need. A Subscribed library is one that subscribes to the publishing Local library on another vCenter Server instance. The advantage of the Subscribed library is that you can choose to download all of the content items immediately after the library is created, or only download items as needed to save storage space on the Subscriber side.

The Evolution of Content Library

Its introduction in vSphere 6.0 was just the beginning for Content Library. Item types allowed in vSphere 6.0 were limited to OVF templates, ISOs, scripts, and text files. When adding virtual machine templates (vmtx) to Content Library, they were converted to OVF templates for storage, as Content Library in vSphere 6.0 did not yet support VM templates.

vSphere Content Library

Content Library in vSphere 6.5 added a few minor enhancements. Support for the HTML5 version of the vSphere Client gave customers a consistent look and feel when using this newer client. Support for mounting ISOs from Libraries, updating of existing templates, and Guest Customization when deploying a VM from an OVF template was also introduced for Content Library. Below is an example view of Content Library in vSphere 6.5 showing the libraries listed, along with Summary, Templates, and Other Type (file types) tabs to quickly find items.

Today in vSphere 6.7, Content Library has evolved once again. VM Templates can now be synchronized to other Content Libraries via the Publish feature. The publishing action from the local library will sync the VM template to the selected Subscriber Libraries. VMTX files are no longer converted to OVF to be stored within Content Library. This makes it possible to distribute content across vCenter Servers without recreating VM template configurations on the other end. We can see below that Content Library has not changed much visually from vSphere 6.5 to vSphere 6.7, although new features are introduced with each release.

Closing

Content Library has come a long way since its start in vSphere 6.0 and continues to evolve with each vSphere release. To learn more and stay tuned about additional Content Library information, please visit the below resources:

 

Take our vSphere 6.7: Getting Started Hands-On Lab here, and our vSphere 6.7: Advanced Topics Hands-On Lab here!

 


Planning Application Migration to VMware Cloud on AWS with vRealize Network Insight Cloud


When the decision is made to migrate applications to a cloud like VMware Cloud on AWS (VMC on AWS), a few things need to happen. First and foremost, insight into your applications and application behavior is critical. Questions that you need to ask yourself are:

  1. “Which components make up an application?”
  2. “To what resources does it need access to function?”
  3. “How much data is being shipped from and to that application?”
  4. “What is the network throughput profile (packets per second, routed or switched traffic, throughput) of the application?”

There are four steps in the application migration process:

  1. Assess the current application landscape and get insight into the application blueprints,
  2. Use this insight to determine suitable applications for migration, and create “migration waves”, which are groups of applications that are migrated together. Also, calculating the bandwidth requirements and related cloud costs cannot be skipped,
  3. Migrate the selected applications,
  4. Validate application behavior, post-migration.

vRealize Network Insight Cloud can help with steps 1, 2, and 4. VMware HCX can help with step 3. This blog post focuses heavily on step 2, which is critical in the planning phase. Future and past blog posts cover the other steps.

Note that vRealize Network Insight is available both as a VMware Cloud service and as an on-premises perpetual product. You can sign up for a 30-day free trial of vRealize Network Insight Cloud, or check out the on-premises vRealize Network Insight product.

Application Discovery & Assessment

It’s not uncommon to have an incomplete CMDB or to have multiple sources where applications and the workloads they consist of are documented. vRealize Network Insight can use a combination of sources (tags, naming conventions, CMDB input) to discover the applications from the infrastructure metadata. For more details on application discovery, check out this blog post.

After getting the applications into the database, there is a beautiful view of application dependencies and network requirements:

Application Dependency Mapping Application Details (Services, Recommended Firewall Rules)

I’ve focused on the CRM-Records application here, which consists of a MySQL database server. You can see the other applications that are talking to this application, and if you zoom in on the application itself, the network requirements for this application present themselves. In this case, this app is using up 727.5MB of total traffic, with a peak throughput of 79.9kbps. This same view can be used to implement security policies when migrating the application, as the Recommended Firewall Rules tab shows the exact firewall rules that need to be implemented for this application.

Internet Traffic

One of the perks of migration to a public cloud is that you can choose the optimal location to make sure your end-users benefit. But, without networking insights, this is often overlooked. For internet-facing applications, vRealize Network Insight can help with determining where the network traffic is coming from, so you can choose the most optimal cloud location.

The above search query gives the amount of traffic for the CRM-Records application, grouped by country. As you can see, most of the traffic is coming from Northern Europe. To ensure users have the best experience, this application should be migrated to London.

Note that the same information can be used to calculate egress traffic costs from the cloud.

Creating Migration Waves

After gaining visibility into the existing applications and their behavior, it’s time to define the migration waves and put together the applications that should be migrated together. From the application list that is curated inside vRealize Network Insight, assign priorities. There are always some applications more important than others (e.g. production vs. test).

Now, while I’m focusing on a per-application process (as most organizations tend to focus on applications), this same exercise can be done on a per-network (VLAN) basis; just change the grouping. Fun fact: it is possible to get a list of applications that are on an L2 network by executing the following search query:

Application where IP Endpoint.Network Interface.L2 Network = 'VLAN-10'

After assigning priorities, grouping of the applications can start – making migration wave groups. While doing this, there are questions that need to be asked:

  1. What are the network requirements in order to move this migration wave?
    1. How much traffic will the migration wave create between the cloud and on-prem network?
    2. Have you sized the connection to the cloud to support this traffic?
  2. Will there be any limits triggered on the destination cloud?
    1. Is there a limit for the number of VMs in a network?
    2. Is there a limit of the amount of throughput or packets per second?
  3. Is a temporary layer-2 bridge necessary?
    1. If you are migrating on a per-application basis, it might be required to create a layer-2 bridge to allow the IP subnet to exist in both the on-prem infra and the cloud infra. VMware HCX can create these layer-2 extensions, but there will be a limit to how much throughput and the number of layer-2 extensions it can handle.
    2. Should proximity routing be enabled while the layer-2 extension is online?

While number 1 is the most important to check, as it directly impacts application performance after migration, the rest can prevent the migration from finishing (as you would discover those limitations during the migration).

To start answering these questions, create another application within vRealize Network Insight that’s called “Migration Wave X” and place the applications that are in each migration wave in it. Essentially, you are creating nested applications. After creating these applications, it is possible to scope the security planner on each migration wave and see its dependencies.

As an example, the below screenshot lists the security planner for Migration Wave 1, which I built from the previously used CRM-Records. This migration wave 1 includes the CRM-Records app, and all of its dependencies (Test-CRM, Webshop, SAP, and tanzu tees).

Zooming in on (clicking) each of these connections is possible, to make sure the previously asked questions are answered. Using this view, question 1 can be answered fully.

Limitation Check

Question 2 is more around inventory and maximum usage. To understand the compute inventory requirements, use this search query:

sum(CPU Cores), sum(Memory Consumed) of VMs where application = 'Migration Wave 1'

To properly plan the destination infrastructure, we need to know the bandwidth throughput and packets per second usage of the VMs inside the migration wave. Because vRealize Network Insight correlates all possible information together, we can link these metrics back to the migration wave group.

Uncovering the required information can be done by using the search engine. Below are the search queries that can be used.

series(sum(byte rate),300) of flow where source application = 'Migration Wave 1' and flow type = 'Destination is Internet'

Graph of throughput per second from Migration Wave 1 to the internet

Also, get the maximum peak throughput by adding the max() operator:

max(series(sum(byte rate),300)) of flow where source application = 'Migration Wave 1' and flow type = 'Destination is Internet'

This will show you the peak throughput, which you can use for sizing the internet gateway.

Get these graphs and maximum values for a few different traffic types; internet, east-west, and within the migration wave. Splitting up the different types of traffic is easy; the flow type can be changed to view a bunch of different traffic types. From the Internet, to East-West, to Routed, to Switched, and a lot more. Have a look at the auto-completion and scroll through the options:

Sample of the flow type auto-complete

In some cases, there will be a limit of packets per second at the migration destination. To make sure there are no surprises, get the packets per second as well. All you have to do is to change the focus property in the search query, like this:

series(sum(flow.totalPackets.delta.summation.number),300) of flow where source application = 'Migration Wave 1' and flow type = 'Destination is Internet'

Graph of packets per second from Migration Wave 1 to the internet

Make sure the packet per second rate is also sized properly, and use the max operator to get the maximum number of packets per second:

max(series(sum(flow.totalPackets.delta.summation.number),300)) of flow where source application = 'Migration Wave 1' and flow type = 'Destination is Internet'

Rinse and repeat for routed and switched traffic by changing the flow types, to get the data you need.
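For example, swapping the flow type in the previous query returns the peak throughput for routed traffic (the 'Routed' flow type is one of the options shown in the auto-complete above):

max(series(sum(byte rate),300)) of flow where source application = 'Migration Wave 1' and flow type = 'Routed'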

Using the output of these searches will allow you to determine the minimum requirements for the network at the destination cloud. vRealize Network Insight tells you precisely what is needed.

Migrating the Applications

Now that the planning is done, the migration itself can begin. I cannot recommend VMware HCX enough for this task, and not only because it’s one of VMware’s products. It allows for cold and live migrations from and to different vSphere platforms, using already built-in tooling like vSphere Replication. HCX can extend layer-2 networks to stretch an IP subnet between on-premises and VMC on AWS.

Learn more about VMware HCX here.

Validating Application Behavior

Once an application is migrated, validate its behavior by going into the security planner and comparing connectivity from before and after the migration. This is done by using the built-in time machine:

There are more ways to validate an application’s behavior. For example, the Application Dashboard will show a topology of the application, all metrics that relate to it, and any and all events that might cause problems – a full picture.

More on day 2 operations with vRealize Network Insight, in a later post.

Learn more

To learn more about vRealize Network Insight and the application migration planning capabilities, check out our Hands-on Labs, and reach out to your VMware representative.

If you happen to be at AWS Re:Invent 2019 in Las Vegas, visit the VMware booth #2108 to learn more!

 


Extending vRealize Automation Custom Forms with vRealize Orchestrator


Within the vRealize Automation Service Broker is the Custom Forms designer which, much like previous versions of vRealize Automation, allows catalog administrators to create custom forms. These can be used to create a more dynamic and engaging user experience, especially for business users who may not have the knowledge (or desire) to make technical decisions.

In this blog post I’m going to use the Custom Forms feature to look up values in an external data source. My external data source will be a Hashicorp Vault server – if you’re not familiar with Vault you can read more on the introduction page, but simply put:

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.

However, the data source isn’t as important as the principle: leveraging an external data source to provide inputs for a vRealize Automation form. I’ve broken down the process into four steps:

    1. Create a blueprint with inputs for user creation and SSH keys
    2. Configure a Vault Secrets Engine with Secret Data
    3. Create vRealize Orchestrator Actions to retrieve and return Secret Data
    4. Customise the Service Broker form

Create a blueprint with inputs for user creation and SSH keys

Firstly, I’ve created a simple blueprint to deploy an Ubuntu VM onto my vSphere environment. This particular blueprint deploys an on-demand NSX network too, but that’s not required for this to work. The two important bits of configuration here are the inputs (user and sshkey), which are used to capture the custom user and key, and the remoteAccess section, which is used to take those inputs and configure the new user for public key authentication with the supplied public key.

Once created, the blueprint is versioned and released to the catalog.

Configure a Vault Secrets Engine with Secret Data

I’ve got a Vault server running in my lab under https://vault.definit.local:8200. I’ve created a new Secrets Engine called “vra”, with a Secret called “ssh” under which I’ve stored 3 user names and their SSH keys.

In addition to this I’ve created a new user called “vra” and a new access policy called “allow-vra-access”, which gives the User access to the Secret. The configuration of a Vault server is outside the scope of this post, but there are plenty of online guides available if you’re interested.

Vault Secrets

Create vRealize Orchestrator Actions to retrieve and return Secret Data

There are a few steps required for vRealize Orchestrator to query the Vault API. The first is authentication – access to the Secrets API is granted through the issuing of a Token, which is returned by the authentication API. The second step is to query the Secret API using the Token for a list of users, which we will return to the Custom Form to display in a drop down. The final step is an action to take a specific user name and query the Vault Secret API for the specific user’s SSH key.
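Under the hood, these Actions are just REST calls to Vault. Before detailing each Action, here is a hedged PowerShell sketch of the equivalent call sequence (not the Actions’ actual code), assuming userpass authentication and a KV version 2 secrets engine; the password and user name are placeholders, so adjust the paths if your Vault is configured differently:

    # Step 1: authenticate and obtain a token (what getVaultToken does)
    $login = Invoke-RestMethod -Method Post `
        -Uri "https://vault.definit.local:8200/v1/auth/userpass/login/vra" `
        -ContentType "application/json" `
        -Body (@{ password = "VaultUserPassword" } | ConvertTo-Json)
    $token = $login.auth.client_token

    # Step 2: read the secret data using the token (what getVaultSecretData does)
    $secret = Invoke-RestMethod -Method Get `
        -Uri "https://vault.definit.local:8200/v1/vra/data/ssh" `
        -Headers @{ "X-Vault-Token" = $token }

    # Step 3: list the user names, then look up one user's SSH key
    $sshKeys   = $secret.data.data                    # KV v2 nests the key/value pairs here
    $userNames = $sshKeys.PSObject.Properties.Name    # what getVaultSecretUserNameList returns
    $sshKeys.'svc-user'                               # the public key for one (hypothetical) user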

Action getVaultToken(String vaultUser, SecureString vaultPassword, String vaultServer, String vaultPort)

getVaultToken has four inputs (vaultUser, vaultPassword, vaultServer and vaultPort) and authenticates with the Vault API to return a SecureString containing the Vault Token.

getVaultToken inputs/outputs

Action getVaultSecretData(String vaultServer, String vaultPort, String vaultSecretPath, SecureString vaultToken)

getVaultSecretData has four inputs (vaultServer, vaultPort, vaultSecretPath and vaultToken) and makes an API call to the vaultSecretPath using the vaultToken for authentication. It returns a JSON string of the Secret Data.

Action getVaultSecretUserNameList()

Uses the previous two Actions and returns a String array of the Secret Data names – the user names.

getVaultSecretUserNameList inputs/outputs

Action getVaultSecretUserNameSshKey(String userName)

Returns the public SSH key from the Vault Secret based on the userName supplied.

getVaultSecretUserNameSshKey inputs/outputs

Customise the Service Broker form

The final step brings together the Actions to customise the form. Within the Service Broker Content & Polices section, locate the blueprint on the Content page and select the “Customize form” menu item:

Custom Forms - customize

Customise the following:

  • Select the User input
    • Change the Appearance > Display Type to “Dropdown”
    • Change the Values > Value option to “External source”, and select the “getVaultSecretUserNameList” action
  • Select the SshKey input
    • Change the Appearance > Read-only to “Yes”
    • Change the Values > Default value option to “External source”, select the “getVaultSecretUserNameSshKey” action, and configure the Action input to the User Field
    • (optional) Change the Appearance > Visible option to “No” to hide the field, but still capture the value when submitting the request.

Custom Form Input Configuration - UserCustom Form Input Configuration - SshKey

Test the Custom Form

We can now test the form. Initially, the User field will be blank, with the values returned from getVaultSecretUserNameList available to select from the dropdown. Once a value has been selected, getVaultSecretUserNameSshKey will take the value selected in the User dropdown and look up the public SSH key relating to that User.

Custom Form with vRA Lookup

As you can probably see, bringing the multi-tool flexibility of vRealize Orchestrator Actions into Service Broker Custom Forms allows you to create a powerful guided user experience, or to dynamically build out input values based on a range of decisions.

The post Extending vRealize Automation Custom Forms with vRealize Orchestrator appeared first on VMware Cloud Management.

VMware at Gartner IOCS

This post was originally published on this site ---

The speed and agility delivered by fast-moving cloud technologies and modern application architectures have become central to digital business transformation efforts.  There is an emerging realization that IT infrastructure and operations (I&O) teams cannot continue to rely on proprietary, bespoke, and expensive hardware to perform data center functions like networking, security, and load balancing.  These functions can be performed more efficiently at scale with distributed software running on x86 hardware while also achieving reduced complexity and cost.

VMware is excited to present this public cloud approach to infrastructure and operations at the Gartner IT Infrastructure, Operations & Cloud Strategies Conference next week, 9–12 December in Las Vegas.

Attend our Speaking Session

Tom Gillis, GM and SVP of VMware Networking and Security Business Unit, will deliver a session on Wednesday titled “A Public Cloud Experience Requires a Different Datacenter and WAN Design”.

 

Tom will talk about how you can bring the public cloud experience to your Data Center and WAN using a software-based, scale out architecture running on general purpose hardware.  Purpose-built hardware designed for homogeneous environments simply cannot handle the fast-moving realities of today’s business priorities.  Businesses shouldn’t have to carry the burden of exorbitant CapEx and OpEx costs associated with frequent hardware refreshes.  Tom will underline the importance of being able to automate and manage networking and security centrally, allowing you to secure and scale your infrastructure up and down effortlessly while deploying applications across data centers and clouds.

Meet Us at Gartner IOCS

We also invite you to come visit VMware in Booth #221 and see a demo of the industry-leading NSX networking and security virtualization platform. Here are some of the use cases our team will share:

  • NSX Distributed IDS/IPS. Learn how NSX provides a fully distributed software IDS/IPS solution to easily achieve compliance and detect lateral threat movement.
  • vRealize Network Insight. Gain end-to-end visibility, control, and troubleshooting across branch, data center, and public cloud environments.
  • Avi Modern Load Balancer. See how you can deliver a fast, scalable, and secure application experience using autoscaling elastic load balancers.

See you in Las Vegas!

The post VMware at Gartner IOCS appeared first on Network Virtualization.

Improved Application Awareness in vRealize Operations 8.0

This post was originally published on this site ---

Customers trust vRealize Operations for Self-Driving Operations powered by AI/ML to provide the best performance, optimize for efficiency and troubleshoot quickly. However, vRealize Operations doesn’t stop there, and if you aren’t already using the application-aware monitoring and troubleshooting capabilities of this solution, this blog will probably change your mind.

In this release, vRealize Operations brings native capability for discovering services running on virtual machines in your Software Defined Datacenter (SDDC) and automatically creating application containers based on dependencies between those services. Additionally, exciting new capabilities have been added to the virtual machine Action menu that allow you to quickly understand top processes and run scripts in guest. Finally, Application Monitoring using the Telegraf agent gets some updates to bring it to parity with Endpoint Operations, which will be removed from vRealize Operations in the next release. Let’s dig into these for more details.

Native Service Discovery

Previously, the Service Discovery Management Pack (SDMP) could be installed into vRealize Operations for discovery of services running on monitored virtual machines. The SDMP also provides dependency mapping between services and the ability to automatically create application objects that describe the tiers (e.g. web, app, database). Finally, Site Recovery Manager Protection Groups and Recovery Plans could be validated against discovered applications to make sure all services were protected.

Well, this is still the case, but we have made Service Discovery easier to use: it is now built into vRealize Operations, and you only need to enable it to begin discovering services in your SDDC.

How much easier? Just toggle a switch in the vCenter Cloud Account and provide credentials for your Windows and Linux guest OSes.

A few things to note here. First, you probably noticed the yellow banner referring to specific VMware Tools versions required. You should read KB75122 for more information, but in brief this feature requires VMware Tools version 11.0.01 or higher for Windows and 10.3.21 or higher for Linux (or open-vm-tools 11.0.1 for Linux).

Allow me to explain some of the fields and options on the setup. First, the “Use alternate credentials” option is present if you would prefer to use a different vCenter user account for Service Discovery. Why would you? Well, keep in mind that the vCenter user account will be the account that performs the Service Discovery via “Execute Program in Guest” using the vSphere APIs. It’s probably not a bad idea to use an account with permissions for only those APIs if you have strict requirements for account permission and usage in your environment.

For the guest OS accounts, a common account is assumed in the Cloud Account configuration. In other words, you would have one Windows account and one Linux account for all monitored VMs. If you do not, I will cover how to set up accounts for each VM in a moment.

The account required permissions are documented here, along with other OS requirements (including supported platforms).

For Site Recovery Manager, vRealize Operations will need an account with permissions to read SRM settings in vCenter.

You can ignore the password for Guest User Mapping CSV, as that is a holdover from earlier versions of the solution and is not required for the native implementation of Service Discovery in this version (8.0).

By enabling the last option, Business Application Discovery and Grouping, the Service Discovery will create application objects based on discovered services and incoming/outgoing dependencies. For example, if a Microsoft SQL service is discovered and vRealize Operations observes connections from other virtual machines on the ports used by SQL then it assumes an incoming dependency. Then, the connected virtual machines are analyzed and let’s say that Microsoft IIS is running on them. Service Discovery will assume a two-tier application with a web tier and a database tier and will create an application object containing the discovered services assigned to the appropriate tier.

You can then edit this application and add any other components and tiers you wish, or just rename them to something more meaningful.

I mentioned that we understand that not all customers will have a common user across all virtual machines. For that, we offer the capability to set up unique credentials for one or more virtual machines.

From Administration > Inventory, select the Manage Services tab. This is where you can check the status of Service Discovery and perform other administrative tasks. Let’s cover the credentials first.

Select one or more virtual machines from the list (you can filter the list if you wish). Click the icon to Provide Password and simply add them. On the next discovery run (which happens once per hour) the new credentials will be used for Service Discovery.

The other options here are to start or stop monitoring of services. By default, vRealize Operations will not monitor the discovered services. But if you wish, you can enable monitoring as shown below.

Once enabled, metric collection for the monitored service takes place every 5 minutes. Note that Service Discovery is not going to alert on the status of the service; if the service stops, it will simply be removed from inventory. So, if you want to monitor the status of the service and alert on availability, it is better to use Application Monitoring in vRealize Operations. Below you can see the metrics collected, including performance, connection information, and some basic properties of the service like version and path.

Powerful Troubleshooting VM Actions

Enabling Service Discovery also adds two new items to the virtual machine object Actions menu. They are Execute Script and Get Top Processes, and both can be extremely helpful when troubleshooting issues.

Two new action items enabled by Service Discovery - Execute Script and Get Top Processes

Let’s start with Get Top Processes, because I believe this is one that most customers will use on a regular basis. When you run this action, you can select the number of processes you want to see. The default of 10 is typically good enough for most troubleshooting.

Here’s the output from a Linux guest OS:

And a Windows OS:

For both OS outputs, you can sort by CPU or memory usage by clicking the radio button in the respective columns.

This gives you quick verification of what is going on in the guest OS, particularly if you have an alert for high CPU or memory usage. You can see if there is a hung process that needs to be restarted, or a service that’s running that should not be.

Which brings me to the next action, Execute Script. With this action, I can run any script that I could otherwise invoke from the command line on the guest OS directly. For example, assume I do find that a service is hung and needs to be restarted.
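As a hedged one-liner (the service name is purely a placeholder), the script body for that scenario could be as simple as:

    # Restart a hung Windows service from the Execute Script action
    Restart-Service -Name "W3SVC" -Force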

Notice you can set the timeout in case the script is a particularly long-running one (like a file search, for example). In addition, you get the exit code, stdout and stderr.

For an example of how this can help with troubleshooting (and even remediation) of an issue, watch this demonstration.

Application Monitoring Updates

Finally, the Application Monitoring capability, based on Telegraf agents, got a few new updates. Application Monitoring was introduced in 7.5 and we are getting closer to parity with Endpoint Operations capabilities in 8.0. Currently, we do not intend to continue Endpoint Operations after the 8.0 release, so if you are still using that, consider switching to Application Monitoring.

Now, what’s new with Application Monitoring? There are three new application services now available, bringing the total to 20 supported packaged applications.

As you can see, NTPD, Java and WebSphere cards have been added. Specific details can be found in the product documentation:

Configuration of supported applications

Pre-requirements for application services

Supported versions of application services

Metrics collected for application services

Personally, I’m excited about the addition of Custom Script Monitor in 8.0. This allows you to “roll your own metric” using a script that Application Monitoring will execute on every collection cycle. For example, consider this PowerShell script which counts the number of files in a folder in Windows:
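A minimal version of such a script might look like this (the folder path is a placeholder; Custom Script Monitor simply expects a single numeric value on standard output):

    # Count the files in a folder and emit the count as the metric value
    $folder = 'C:\Temp\Export'   # placeholder path
    (Get-ChildItem -Path $folder -File | Measure-Object).Count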

Save the file in a local folder on the virtual machine, then configure Custom Script Monitor with the path, prefix and arguments. The prefix is helpful if you’re not running a batch or shell script and need to invoke an interpreter.

The timeout value of 5 minutes matches the default collection interval, so it’s best to just leave it at default. Once you save, the script will be executed, and the single numerical value returned as a metric.

For a demonstration showing how this works, watch this video.

Get App Aware – Get vRealize Ops!

I think you will agree that these new features and capabilities give you better visibility and enable you to resolve issues in your environment faster. If you’re already running vRealize Operations, upgrade to 8.0 for these features and more. If you aren’t a vRealize Operations customer, download a free trial today!

The post Improved Application Awareness in vRealize Operations 8.0 appeared first on VMware Cloud Management.

Hot and Cold Migrations; Which Network is Used?

This post was originally published on this site ---

A common question arises when customers are migrating workloads between ESXi hosts, clusters, vCenter Servers or data centers. What network is being used when a hot or cold migration is initiated? This blog post explains the difference between the various services and networks stacks in ESXi, and which one is used for a specific type of migration.

How do we define what a hot or cold migration is? A cold migration is one where the virtual machine is powered off for the entire duration of the migration. A hot migration means that the workload and application remain available during the migration.

Both hot and cold migrations can be initiated through the vCenter Server UI or in an automated fashion using, for example, PowerCLI. To understand which network is used for a migration, we first need to understand the various enabled services and network stack options that are available in vSphere.
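As a hedged PowerCLI sketch (all names are placeholders), a scripted migration across hosts and datastores could look like this; whether it runs hot or cold simply depends on the power state of the VM:

    # Move a VM to another host and datastore; a powered-on VM triggers vMotion,
    # a powered-off VM results in a cold migration
    Connect-VIServer -Server vcenter.lab.local
    $vm = Get-VM -Name "app01"
    Move-VM -VM $vm -Destination (Get-VMHost "esxi02.lab.local") `
        -Datastore (Get-Datastore "datastore02")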

Enabled Services

In vSphere, we define the following services that can be enabled on VMkernel interfaces:

  • vMotion
  • Provisioning
  • Fault Tolerance logging
  • Management
  • vSphere Replication
  • vSphere Replication NFC (Network File Copy)
  • vSAN

When looking specifically at workload migrations, there are three services that play an important role: the vMotion, Provisioning and Management enabled networks.

Enabling a service on a specific VMkernel interface states that this network can now be used for the configured service. While the Management service is enabled by default on the first VMkernel interface, the other VMkernel interfaces and services are typically configured post-installation of ESXi. If you want vMotion or Provisioning traffic to use a specific VMkernel interface, you can configure it that way, as in the sketch below.
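As a hedged PowerCLI example (host, switch and addressing are placeholders), enabling the vMotion service while creating a VMkernel adapter looks like this:

    # Create a VMkernel adapter with the vMotion service enabled
    $vmhost = Get-VMHost "esxi01.lab.local"
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch0" `
        -PortGroup "vMotion-PG" -IP "192.168.50.11" -SubnetMask "255.255.255.0" `
        -VMotionEnabled $true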

Network Stacks

We also have the option to use a separate network, or TCP/IP, stack. vSphere provides the following options:

  • Default
  • vMotion
  • Provisioning

If no settings are changed, the Default stack is used for all the VMkernel interfaces. The purpose of using the other TCP/IP stacks is to gain more segregation of traffic and to isolate certain network flows. Configuring and using the Provisioning and vMotion stacks also allows you to use other gateway IP addresses, DNS servers and DHCP servers for these networks, rather than the default gateway that is defined in the Default TCP/IP stack.

Cold Migrations

Cold migrations are a mere re-registration and potential copy of a virtual machine and its data to another ESXi host and/or datastore.

Cold migrations are typically performed when you are moving virtual machines between ESXi hosts that are equipped with different CPU architectures, like Intel and AMD; vSphere vMotion cannot live-migrate between the two CPU architectures. Another reason for performing cold migrations could be that an application owner requires the virtual machine to be powered off during migrations, either to mitigate the risk of data loss or to avoid other availability challenges within the workload itself. Typically, vSphere vMotion does a really good job live-migrating workloads without any concessions on application availability or risk of data corruption, but there might be scenarios where hot (or ‘live’) migrations are not used by customers.

The biggest misconception about cold migrations and cold data is that the vMotion network is leveraged to perform the migration. However, cold data, like powered-off virtual machines, clones and snapshots, is migrated over the Provisioning network if that is enabled; it is not enabled by default. If it’s not configured, the Management enabled network will be used.

Hot Migrations

A hot migration is referred to as a live migration. It is a staged migration where the virtual machine stays powered on during the initial full synchronization and the subsequent delta sync, using the vSphere vMotion feature. If you want to know more about the vMotion process, check out this blog and video series.

Please note that if you create a VMkernel interface on the vMotion TCP/IP stack, you can use only this stack for vMotion on this host. The VMkernel interfaces on the default TCP/IP stack are disabled for the vMotion service.

Bringing it Together

Use the following diagram to determine what network is used for a migration:

This is all about on-premises migrations. When you are migrating workloads to a public cloud, as with VMware Cloud on AWS, you could use VMware HCX. HCX is completely agnostic to on-premises vMotion or Provisioning networks. When you define a compute profile for HCX, it will let you choose the appropriate network for the migration.

More Resources to Learn

Refer to the following online sources for more information on workload migrations:

 

 

 

The post Hot and Cold Migrations; Which Network is Used? appeared first on VMware vSphere Blog.


Performance of vRealize Operations REST APIs

This post was originally published on this site ---

About three years ago I posted a blog discussing the performance of the vRealize Operations REST APIs. That blog post still gets referenced, but I thought it was time for an update, since there have been overall platform performance improvements with each release and the original blog doesn’t address a specific question that I keep getting from customers.

Arguably the most common REST call made by customers is to export metric data.

The most popular use case for the REST APIs (in my opinion, since a scientific poll was not conducted) is exporting metric data from vRealize Operations for use with other monitoring systems, service desk solutions, home-grown analytics and reporting, and various other purposes. Because this type of use typically involves a high number of frequent requests and a large amount of response data, there are legitimate concerns about the performance of such requests as well as the impact on the vRealize Operations cluster. Additionally, as I mentioned in opening, a question of scale has started to come up from some customers who want to know how many concurrent requests can be handled (and whether those requests are throttled by default).

Performance Tests of the REST APIs

As before, I reached out to our engineering team to understand how they measure performance of the APIs with each release. I also asked the team to provide insight on the question of concurrent clients. Let’s start with overall performance numbers as of the 8.0 release of vRealize Operations.

First, the test bed is an HA cluster with four medium nodes monitoring around 8550 objects (of which 8000 are VMs). Total metric payload is 3 million configured and 1.3 million collecting.

Engineering used GET /api/resources/stats/query with the request payload specifying the virtual machine metric ‘cpu|demandmhz’ for 1000 virtual machines. Since 1000 is the default pageSize value, this provides a nice unit of measure, and the pageSize is typically left at default for most API calls. Here are the results:

 

Time Period (begin/end window)   Response Time (s), uncompressed   Response Time (s), compressed
5 minutes                        10                                8
1 day                            12                                9.5
1 month                          100                               26
1 year                           226                               50

 

The times are client-centric and include vRealize Operations processing time as well as the response data download time.

Something to note is that a major improvement was added to the APIs which allows for compression of the response payload, and as you can see it has a significant impact on the overall response time. This capability was added in the 7.5 release and you can read more at this link.
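If you want to experiment with this yourself, here is a hedged PowerShell sketch of acquiring a token and requesting stat data with a compressed response. The endpoint paths, query parameters and credentials are assumptions to verify against the API documentation for your version:

    # Acquire an authentication token (placeholder credentials)
    $vrops = "vrops.lab.local"
    $auth  = Invoke-RestMethod -Method Post `
        -Uri "https://$vrops/suite-api/api/auth/token/acquire" `
        -ContentType "application/json" `
        -Body (@{ username = "admin"; password = "changeme" } | ConvertTo-Json)

    # Query stats with a compressed response; PowerShell 7 decompresses gzip
    # automatically, and the exact query parameters vary by version
    $headers = @{
        "Authorization"   = "vRealizeOpsToken $($auth.token)"
        "Accept"          = "application/json"
        "Accept-Encoding" = "gzip"
    }
    $stats = Invoke-RestMethod -Method Get -Headers $headers `
        -Uri "https://$vrops/suite-api/api/resources/stats/query?statKey=cpu|demandmhz"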

What About Concurrency?

Engineering is still validating maximum concurrent REST calls per node and per cluster. I will be sure to update this blog or post another when that information is made public. However, you should know that for now, there is a configured throttle of 300 concurrent requests with an additional throttle of 50 concurrent requests from a single client (defined as an IP address). With that information, you at least know at which point your requests may begin to be throttled by the cluster.

Thanks for Ashot Avagyan and Arshak Galstyan for help in writing this blog and providing the technical information.

The post Performance of vRealize Operations REST APIs appeared first on VMware Cloud Management.

Explore the vSphere Hands on Labs Today

This post was originally published on this site ---

The VMware Hands on Labs (HOL for short) has been a tremendous tool for Customers, Partners, and even VMware Employees to learn, test, and try nearly every product VMware has produced. The HOL covers everything from Virtualization 101 for those who are just getting started all the way to testing a full Hybrid Cloud Infrastructure. There are also labs for emerging technologies such as Artificial Intelligence (AI) and Machine Learning (ML). It’s all completely free, and no prior experience is required. You also don’t need to follow the script: these are real, live environments, so you can use them to test an idea if you don’t have your own lab. There are so many possibilities with the HOL! Here’s a short video that explains:

To help you get started we’ve put together a quick list of labs focused on vSphere, the foundation of the Software Defined Data Center (SDDC). Before we get to the list, there are several different types of labs. First, there is the Standard lab, which is the most common. These labs are generally 2-4 hours in length, and you go at your own pace through a set of exercises that will teach you about a specific set of technologies. The next type of lab is the Lightning Lab. As the name suggests, these are typically 15-90 minutes and are a quicker way to hit the highlights of a product or solution. You can always start with a lightning lab and then move on to the standard lab for a more immersive experience.

The next two lab types are a bit different. We have the Challenge Lab, which is meant to test your knowledge in a non-competitive setting. But if you want to see how your skills measure up against vSphere Administrators from all over the globe, the Odyssey Labs might be for you. The Odyssey labs also test your skills but use automated scoring to measure success, which can then be compared to others who also participate in those labs. There is a global leaderboard to show where you stack up. It’s a fun way to keep improving your skills while climbing the global leaderboard!

When you visit the HOL catalog, you’ll notice over 100 available labs. If you’re interested in vSphere then here are some recommendations of where to start:

New to VMware or Virtualization (Beginner)

Expand vSphere Knowledge (Intermediate)

Performance, Automation, and Building Tools (Advanced)

Kubernetes and Containers

GPUs & AI/ML

Support and Troubleshooting

Odyssey

The team is always working on new lab content and we’ll be sure to announce new labs as they come out. If you don’t see something that meets your needs, drop us a note on Facebook or Twitter.

The post Explore the vSphere Hands on Labs Today appeared first on VMware vSphere Blog.

The Beginning of the End of 2019: Finishing the Decade with vSphere

This post was originally published on this site ---

The end of a year is always a time to contemplate the past and make changes for the future. For many, 2019 was a good year, filled with opportunity and accomplishments both personally and professionally. For some, 2019 can’t end fast enough, replaced by the hopes that 2020 will be better. Life is like that, ups and downs, and it’s alright. However, to mark the end of the year, and possibly the end of the decade, on a higher note we will be posting every day through December 31 with lighter vSphere and VMware topics. We hope you enjoy them as much as we do.

I can see the question in your mind now: “Possibly the end of a decade?” Like most things in IT, the answer to when the decade ends is “it depends.” The timekeepers of the world like to point out we, as a civilization, never had a year “zero.” Therefore, in their eyes, the new decade starts with a year ending in 1, like 2021. Others contend that we start counting our ages at zero, and we refer to certain eras like “the 1980s,” therefore it’s more intuitive to start a decade with a year ending in 0. Others, often wishing both groups would be quiet, point out that a decade is simply ten years long and we can define it however we want.

All we really know is that VMware has been around for 2.1 decades and is still going strong. Speaking of that, 2019 was a big year for this blog. Let’s look at five of the most popular posts for readers this last year:

10. “Thick vs Thin Disks and All Flash Arrays” comes in at #10 in popularity this year. It’s a post from 2014 all about thin & thick VMDK types, and like much of the vSphere content out there this post continues to be relevant. While we at VMware often write about current trends and new topics, vSphere has a deep, stable, and feature-rich heritage that it builds on. That’s especially true for storage technologies, which underpin everything in the software-defined data center. Reliable & performant cluster storage isn’t an easy thing to do and is not to be taken lightly. We are very proud of what the vSphere platform enables for our customers, from core storage technologies to vSAN and more, giving people great value and choice.

9. “Introducing vSphere 6.5” comes in at #9. A blast from the past? Absolutely, but not surprising. vSphere 6.5 was instrumental in setting the stage for the way vSphere looks today, with things like the HTML5 client, the vCenter Server Appliance, and modern migration tools, as well as many of the intrinsic security features we talk about all the time. VM Encryption and Secure Boot were two of those technologies, making it incredibly easy to secure workloads and infrastructure. As VMware users have been upgrading away from vSphere 5.5 and 6.0 it makes a lot of sense that they’d be examining what is in vSphere 6.5 and 6.7.

8. “vCenter Server Appliance 6.5 Migration Walkthrough” comes in as #8. There’s a very clear trend in what our readers are looking for: advice on getting to vSphere 6.5 and 6.7! Fortunately, there is a ton of advice on migrating, found both on the blog and on vSphere Central, another big resource for vSphere customers.

7. “Introducing VMware vSphere 6.7” is #7, which is the launch page for all things vSphere 6.7-related. vSphere 6.7 brought hundreds of improvements and features to the software-defined data center, including massive updates to operations management, APIs, the HTML5 client, support for things like Optane and persistent memory, support for security features like Microsoft Device Guard & Credential Guard, host attestation and TPM support, per-VM EVC, and more. Beyond the initial release, vSphere 6.7 has had 3 major update releases, with additional new features and functionality to help vSphere admins operate and secure their infrastructure in the face of challenges like CPU vulnerabilities.

6. The last post in the list for today is “Custom Certificate on the Outside VMware CA (VMCA) on the Inside – Replacing vCenter 6.0’s SSL Certificate.” We continue to get lots & lots of questions about certificates in vSphere. There’s a newer post, “10 Things to Know About vSphere Certificate Management,” which covers a lot of the common questions about certificates, including the pros & cons of the different VMware Certificate Authority modes, why hybrid mode is so popular, why self-signed certificates aren’t evil, and how to explain it all to a CISO or an auditor who might be mandating certain things. Check it out.

(What has armor but isn’t a knight, snaps but isn’t a twig, and is always at home even on the move? Come back tomorrow for a look at the creatures that inhabit the VMware headquarters pond! For more posts in this series visit the “2019 Wrap Up” tag.)

The post The Beginning of the End of 2019: Finishing the Decade with vSphere appeared first on VMware vSphere Blog.

Now Available: vSphere Upgrade the Inside Track

This post was originally published on this site ---

With such interest in our vSphere Upgrade workshops, David Stamen and Kev Johnson have put together a 7-part video series on upgrading to vSphere 6.7! We call it vSphere Upgrade: The Inside Track.

We have travelled all over the world for the last year and a half bringing a great workshop to our customers and partners. If you could not make it to one of our events, you may be interested in learning more about how to upgrade to vSphere 6.7, the latest version.

Introducing vSphere Upgrade: The Inside Track

vSphere Upgrade: The Inside Track is a 7-part video series that follows the same agenda as our workshop. Each video is only 3-7 minutes long, which makes it very easy to watch and follow along. Below is a description of each video.

Part 1: vSphere News
In Part 1 we cover an introduction to the video series as well as information about the latest vSphere News.

Part 2: Pre-Upgrade Considerations
In Part 2 we cover considerations you need to know prior to starting; this will ensure you have properly prepared before executing your vSphere upgrade!

Part 3: The Upgrade Process
In Part 3 we cover the proper order to upgrade your vSphere Environment.

Part 4: Upgrading and Migrating vCenter Server
In Part 4  we cover how to Upgrade or Migrate your vCenter Server.

Part 5: Upgrading Your ESXi Host, VM Tools and VM Compatibility
In Part 5 we cover how and why to Upgrade your ESXi Hosts, VM Tools and VM Compatibility.

Part 6: Upgrading Storage and Networking
In Part 6 we cover how to upgrade VMFS and vSAN to the latest versions, as well as considerations for upgrading your vSphere Distributed Switch.

Part 7: Post-Upgrade Considerations
In Part 7 we cover post-upgrade considerations: things to think about after your upgrade is complete, such as the vCenter Server Converge Tool, vCenter HA and the built-in file-based backup utility.

Additional Resources

 

Conclusion

Kev and I had a great time making these videos and we hope they have helped you understand how to upgrade your vSphere environment to the latest and greatest! Want to see more? Leave a comment below or reach out to us on Twitter @davidstamen or @kev_johnson.

Follow VMware vSphere On Twitter

The post Now Available: vSphere Upgrade the Inside Track appeared first on VMware vSphere Blog.

VMware Loves Turtles

This post was originally published on this site ---

(To mark the end of the year we are posting every day through January 1 with lighter vSphere and VMware topics. We hope you enjoy them as much as we do. See them all via the “2019 Wrap Up” tag!)

During a recent trip to VMware headquarters in Palo Alto, California I caught up with Jeff Goodall, Workplace Supervisor at VMware. There is a turtle pond in the middle of the campus, and I’ve always wanted to know how the turtles got started. I’ve known Jeff for a number of years, as he manages onsite events and is the resident historian when it comes to the campus. As it turns out, he’s also the VMware Turtle Guru, too. Anyone who knows Jeff knows he is always on the move, but that doesn’t stop him from being amazing. He will always do his best to stop for a selfie, a high-five, or a quick chat in between his hustle. I hope you enjoy this conversation as much as I did.

Nigel: So, turtles and VMware… how did this happen?

Jeff: In the beginning, there was a pond but no turtles. The original idea was to put koi fish in the pond but it became overgrown with algae. Since you’d only be able to see their mouths, the koi fish idea never came to fruition. Unfortunately, the pond was pretty empty for a while.

Nigel: An empty pond doesn’t sound very exciting.

Jeff: No, not at all. Then one day, back in 2007 I think, I was off to lunch at Rosati’s in Portola Valley. They have a great burger and fries that I was dying for. When I got there to order my food there was a very odd conversation going on.  The lady behind the counter ended up asking me if I knew anybody that wanted a turtle.

Nigel: At this point, I imagine you’re thinking, all I want is a burger & fries and this lady is trying to give me a turtle?

Jeff: Exactly! I said, “A turtle?” and she said “Yes. I have a turtle here and I don’t know what to do with it.” I said, “Hold on, I might be able to help.”

Nigel: So you just took the turtle?

Jeff: Well, I had to call my boss and see if the pond had any chemicals in it that’d kill them, but yeah. I hung up and told the restaurant owner, “I’ll take it!” I was the new owner of a red-eared slider turtle.

Nigel: I am sure that was a fun conversation with your boss.

Jeff: Oh yes. His response was, “What are you up to, Goodall?”… “Nothing, nothing at all…” I replied… “No reason, thanks, *click*”

I went back to work with this small turtle in a cooler riding next to me in my jeep. I got back to campus just in time for a stack of help tickets to work on. What started out as a manageable morning turned into a chaotic afternoon. I took the turtle with me as I drove around campus in one of our golf carts, answering help tickets. For four hours straight, this terrapin and I cruised around our site installing keyboard trays, covering windows to block out sunlight, solving plumbing issues, delivering file cabinet keys, etc. From noon until 4:00 p.m. the turtle sat in a cooler, in a golf cart, riding from building to building with me; it was crazy.

At 4:05 p.m., I named the turtle Rosie after my fortuitous happening at Rosati’s and discovering that “it” was female! It was splashdown time. I grabbed the turtle, held her behind my back, and snuck her through [the building closest to the pond]. We walked through the building and out the adjacent door. The turtle was flailing uncontrollably. I walked up to the edge of the pond, looked around to see if anyone was looking, and set her into the water.

Nigel: So just like that, in she went?

Jeff: She sunk like a rock! “Move, please move” was all I thought for what seemed like an eternity. First I saw a head poke out, then a leg and she was off.  Over the next couple of days I fed her leftover lettuce, sliced carrots, whatever I had leftover from lunch in the [nearby] cafe.

Nigel: Wow! From the back of a burger joint to secret campus keyboard tray replacements to our beloved pond. Such a wonderful journey of love and caring Jeff. I have to ask, did anybody notice her?

Jeff: An email went out on [an internal email list] one afternoon asking employees if they knew we had a turtle in the pond. I thought to myself, “This can go one of two ways.” An employee replied to the email: “Yes, I saw that, how very cool, but it needs a friend!”

Nigel: It seems like Rosie has more than one friend now. How many turtles do we have?

Jeff: We have 14 turtles in total: ROSIE, ROBBY, TRIXIE, PETEY, LARRY, HOUDINI, STEVE McQUEEN, SEA ROCK, SQUIRT, SUNNY, CARL, ZILLA, BUZZ, and PEBBLE.

Nigel: So are they there all year, or do you do something with them in the winter?

Jeff: Turtles need somewhere to dig (mud sand) so they can brumate (hibernate) during the winter season. The pond is a concrete bottom pond. Can you say “turtle ‘cicles?” They go to the Santa Clara Turtle and Tortoise Rescue and Adoption Society for winter. They have an amazing setup. It’s like “Wags” but for turtles.

Nigel: The last, burning question: do you run the VMware Turtles Twitter account?

Jeff: No, no, I cannot take credit for that. That account is managed by Rosie herself.

Nigel: *groan*

Wrapping Up

Thank you, Jeff, for sharing your story of our VMware Turtles! If you get the chance to visit the VMware Headquarters be sure to stop and say hello to Rosie and the gang… and Jeff too of course!

To learn more about Turtle Conservation, please visit these resources below.

 

 

Photos

(Come back tomorrow for a look into the people that work on vSphere! For more posts in this series visit the “2019 Wrap Up” tag.)

 

 

Take our vSphere 6.7: Getting Started Hands-On Lab here, and our vSphere 6.7: Advanced Topics Hands-On Lab here!

 

The post VMware Loves Turtles appeared first on VMware vSphere Blog.

Viewing all 1975 articles
Browse latest View live