Channel: VMPRO.AT – all about virtualization

What’s new in vRealize Log Insight 8.0


vRealize Log Insight 8.0 is generally available today and includes a number of customer-requested features and enhancements. Log Insight is VMware’s on-premises log analytics tool which makes troubleshooting and basic security monitoring a breeze. If you’re not using Log Insight today, then head over to the product page and take advantage of the 60-day evaluation right now! Whether you’re a long-time user or brand new to the world of Log Insight, let’s cover the new features while you download the latest bits.

To begin, let’s address the new version number. The last version of vRealize Log Insight was 4.8. What happened to versions 5.x-7.x? Yes, this jump in version numbers may seem a little bit jarring. However, as vRealize Log Insight has been released in lockstep with the rest of the vRealize Suite for some time, it made sense for us to make the jump and match our version number with the rest of the suite. Since vRealize Operations and vRealize Automation are at version 8.0, vRealize Log Insight is now 8.0 as well. With that out of the way, let’s get on with the new and improved features!

Unlimited Exports

This first new feature on our tour is a big one! Previous versions of Log Insight limited exports to 20,000 log messages at a time. 20,000 sounds like a large number, but in the world of logging, 20,000 log messages may account for only a few minutes or even seconds worth of log events. Log exports are often used to document the root cause of issues or even as a record of security incidents, and this hard limit sometimes meant that multiple exports were required to capture a complete scenario. In vRealize Log Insight 8.0 I’m happy to announce that this limit is no longer a thing! Users can now export an UNLIMITED number of log events!

When you initiate an export, Log Insight will run a check to see how many log messages need to be exported. If the number is less than or equal to 20,000 then Log Insight will download the log messages via your browser just as it has in the past. If the number of log messages is greater than 20,000, then Log Insight will ask for an NFS path. Once that’s been provided, Log Insight will create an archive on that share. As you can imagine, exporting a large number of logs can take some time. That’s why large log exports are performed in the background so you can continue to work while Log Insight handles the rest. A green status bar is provided at the top of your screen so you can keep an eye on the progress.

New OS

Log Insight 8.0 now runs on Photon OS 3.0! New deployments will come with Photon OS already installed, but what’s really cool is that upgrades from 4.8 will perform an in-place OS swap from SUSE to Photon! This means you don’t have to stand up a new cluster and migrate your old data. Instead, our team of brilliant engineers has figured out a way to reliably replace the OS in-place with a single PAK file. Yes! This means that upgrades are just as easy as they have been in the past. This does, however, require you to be on Log Insight 4.8 so you may have to perform an incremental upgrade if you’re running an older version. If you think this sounds like a risky maneuver, it’s actually quite safe.

The image above shows the before and after of a vRealize Log Insight instance as it gets upgraded from 4.8 to 8.0. Simply put, the existing SLES partition includes enough free space to accommodate Photon OS without impacting its ability to run. During the upgrade procedure, the SLES partition is shrunk and a new partition is created. Photon OS 3.0 then gets installed in this new partition and then gets added to the appliance’s boot loader as the new default OS. And because Photon is such a lightweight OS, there’s no need to remove SUSE so you can take comfort in knowing you can roll back instantly should you need to. As you can see the appliance’s data and logs remain untouched throughout this process. If this process sounds familiar to you, that’s because this is very similar to how ESXi handles upgrades.

New Audit Capabilities

For many organizations, logs and log analytics tools can be sacred grounds. Not only are they a potential bonanza of information for wrong-doers, but they’re also trusted to provide accurate information. How often do you go back and validate the queries behind your custom dashboard widgets haven’t been tampered with? It’s not a common practice for most of us. vRealize Log Insight 8.0 puts these concerns to rest with new audit logging capabilities. Activities such as users logging in and out, failed logins, content pack management, modification of dashboard widgets, configuration changes, and more are all recorded in the audit logs. These logs can be pulled as a part of the support bundle and reviewed at any time.

Agent Updates

The vRealize Log Insight Agent is a useful tool for pulling logs from applications and OSes that lack built-in syslog support. The agent is available in Linux and Windows flavors and now includes support for Photon OS 3.0, Ubuntu 18.04, and Windows Server 2019. But that’s not all! The vRealize Log Insight Agent is going open source! Keep an eye out for an upcoming blog post once the agent’s source is available on GitHub.

Content Pack Updates

Many content packs have been improved in this release as well. As VMware releases updates to its products, the logging capabilities of these products improve as well and Log Insight is there to capture it all! vSphere, vSAN, and NSX-T have all been improved to provide visibility into storage policy modifications, failed login attempts, NSX backup failures, and service disruptions, and a lot more! An updated Linux Content Pack is also available which includes visibility for changed passwords, telnet connections, file system mounts, and failed login attempts, as well as new dashboards. There are a lot of great improvements to these content packs and I haven’t even come close to listing them all. Of course, vRealize Log Insight 8.0 is backward compatible so you can continue to use your existing favorite content packs!

As you can see, our engineers have been hard at work on this new release of vRealize Log Insight. I’ve only highlighted a few of the new features here but you can check out the release notes for a complete list. And as always, visit the vRealize Log Insight product page for all the latest and greatest!

The post What’s new in vRealize Log Insight 8.0 appeared first on VMware Cloud Management.


Announcing General Availability of VMware vRealize Automation 8.0


Today, I am delighted to announce the general availability of VMware vRealize Automation 8.0, the latest release of our industry-leading hybrid cloud automation platform.

 

 

We announced vRealize Automation 8.0 at VMworld US 2019 in San Francisco. Sharing the same modern container-based architecture with the SaaS-based VMware vRealize Automation Cloud, this release of vRealize Automation also includes the latest versions of VMware vRealize Suite Lifecycle Manager 8.0 and VMware vRealize Orchestrator 8.0. With a new Easy Installer component, we have dramatically simplified the installation/configuration process and experience. You can learn more here.

 

We are seeing our customers use vRealize Automation across four key strategic use cases:

  • Self Service Hybrid Cloud
  • Multicloud Automation with Governance
  • Service Delivery with DevOps
  • Kubernetes Workload Management

At VMworld US 2019, we had IHS Markit discuss how they’re using vRealize Automation with VMware NSX Data Center to implement a network automation solution for their cloud environment. You can learn more here.

We are also extending the value of automation to the DevOps and SRE audiences within our customers. With vRealize Automation 8.0 Enterprise Edition, which now includes the VMware Code Stream component, we deliver Infrastructure-as-Code (IaC), application and infrastructure pipeline automation, continuous integration/continuous delivery (CI/CD) capabilities, and Kubernetes workload management.

To learn more about vRealize Automation 8.0, please visit our product page, and read our announcement blog.

The post Announcing General Availability of VMware vRealize Automation 8.0 appeared first on VMware Cloud Management.

Announcing General Availability of vRealize Suite 2019, vCloud Suite 2019, and vCloud Suite 2019 Platinum!


VMware vRealize Suite 2019, vCloud Suite 2019, and vCloud Suite 2019 Platinum, the industry leading cloud management platforms are now available for download.

VMware vRealize Suite

Modern Hybrid Cloud Management Platform

VMware vRealize Suite, vCloud Suite, and vCloud Suite Platinum are part of VMware’s hybrid cloud management platform offerings.

  • vRealize Suite is a hybrid cloud management platform that enables IT Operations to deliver a self-service private or hybrid cloud with automation, governance, and consistent operations.
  • vCloud Suite brings together vSphere, the industry-leading server virtualization software and the foundation of a modern SDDC (Software-Defined Data Center), and the vRealize Suite.
  • vCloud Suite Platinum brings vSphere Platinum and vRealize Suite together to help IT Operations deliver a self-service private or hybrid cloud with automation, governance, consistent operations, AND intrinsic security.

VMware Hybrid Cloud Management Platform

The VMware “Suites” solutions provide organizations an easy way to on-board to cloud, enabling consistent deployments with visibility and governance for virtualized and cloud native workloads deployed across hybrid and multicloud environments, including VMware Cloud on AWS, AWS, Azure, and Google Cloud Platform (GCP). The hybrid cloud management platform also helps IT organizations to optimize, plan, and scale their cloud operations.

What’s New in vRealize Suite 2019

As announced at VMworld 2019 in San Francisco, VMware is delivering a major refresh of our multi-cloud management platform based on advanced technologies, supporting our customers’ cloud journey, now and into the future. vRealize Suite 2019 includes the latest 8.0 versions of the vRealize products. The suite integrates vRealize Automation 8.0 and vRealize Operations 8.0 to deliver advanced closed-loop optimization capabilities that enable continuous performance, capacity, and cost optimization, simplifying IT operations and lowering IT costs.

vRealize Automation 8.0

Following vRealize Automation 7.6, vRealize Automation 8.0 is our latest next-gen hybrid cloud automation platform that helps customers increase productivity and efficiency through policy-based self-service provisioning and lifecycle management.  The vRealize Automation 8.0 platform enables (1) self-service hybrid clouds, (2) multi-cloud automation & governance, (3) DevOps service delivery, and (4) Kubernetes cluster management.

New Features include:

  • Platform: Modern container-based microservices architecture with simplified installation and deployment, improved performance & HA, and on-demand scalability
  • Ease of Use: Automated private/hybrid cloud set-up for VMware Cloud Foundation and VMware Cloud on AWS with API-driven configuration, deployment and management
  • Extensibility: Direct integration with VMware Cloud on AWS and enhanced integrations with AWS, Azure, GCP, Terraform, Infoblox, Ansible, Git, Jenkins, Bamboo; Closed loop optimization with vRealize Operations 8.0
  • DevOps: Infrastructure-as-Code (IaC)-based platform with YAML support and declarative templates; supports Kubernetes workloads (manage, monitor, discover, import, and request Kubernetes clusters and namespace); Intuitive visual pipeline designer with pipeline as code and built-in pipeline analytics for pipeline visibility and troubleshooting
  • Multi-cloud capabilities: out of the box support for public clouds including consistent orchestration across public cloud end points, embedding native public cloud services from AWS, Azure and GCP in blueprints and support for cloud agnostic blueprints.

 

vRealize Operations 8.0

Following vRealize Operations 7.5, vRealize Operations 8.0 is our latest next-gen self-driving cloud operations platform powered by AI/ML that enables (1) continuous performance optimization, (2) proactive capacity and cost management, (3) intelligent remediation, and (4) integrated compliance.

New Features include:

  • Performance Optimization: Continuous performance and workload optimization with vRealize Automation 8.0 integration; Workload optimization and cross-cluster balancing for VMware Cloud on AWS
  • Capacity and Cost Management: Complete capacity/cost management and planning for HCI and VMware Cloud on AWS; public cloud costing via CloudHealth integration; private cloud cost comparison for new workload planning
  • Intelligent Remediation: New ML-based troubleshooting workbench for expedited root cause analysis; knowledge-base insights via Skyline integration; more out-of-the-box app monitoring (20 total apps); support for top-N processes; one-time script execution; custom script execution; native service discovery; new management pack for Azure
  • Integrated Compliance: Updated UI; enhancement to regulatory and custom compliance for SDDC (vSphere, NSX-T, vSAN) and VMs in VMware Cloud on AWS; import/export custom compliance standards; automate drift remediation with vRealize Orchestrator integration

vRealize Log Insight 8.0

Starting with vRealize Suite 2019, vRealize Log Insight has also moved from its previous 4.8 release to version 8.0 to align with the rest of the suite. Log management and analytics are an integral part of VMware’s Cloud Management portfolio for faster monitoring and troubleshooting.

New Features include:

  • Enterprise Readiness: Photon OS, Certificate Management, IPv4/IPv6 dual-stack support
  • Unlimited Exports: log exports are no longer limited to 20,000 messages

vRealize Suite Lifecycle Manager 8.0

Similar to vRealize Log Insight, the version number for vRealize Suite Lifecycle Manager has lined up to version 8.0. vRealize Suite Lifecycle Manager delivers unified lifecycle management and is available with vRealize Suite. It enables Cloud Admins to focus on supporting business-oriented activities by automating the installation, configuration, upgrades, patches, configuration management, drift remediation, and health of the vRealize Suite platform and its components. It also provides content management capabilities for the individual products, such as vRealize Automation, vRealize Orchestrator, vSphere 6.x, and vRealize Operations. The content can also be source controlled with GitLab.

New Features include:

  • Enhanced UI: new UI for lifecycle management, new UI for directory management
  • Simplifying vRealize Automation installation
  • New Locker tile: In addition to enhanced certificate management, the new tile is also used to manage licenses and passwords
  • Common VMware Workspace ONE Access: formerly known as the VMware Identity Manager, this new implementation will support all the vRealize Suite products

vRealize Suite Advanced/Enterprise Packaging Enhancements

vRealize Business for Cloud capabilities integrated into vRealize Operations

What happened to vRealize Business for Cloud? Don’t worry. We’ve simplified the packaging for vRealize Suite Advanced and Enterprise. All editions of vRealize Operations (advanced and enterprise) in vRealize Suite will now natively include vRealize Business for Cloud capabilities for the hybrid cloud, eliminating the need for the separate vRealize Business for Cloud software. With the streamlined vRealize Suite Advanced and Enterprise packaging, Cloud Admins will have one less component to install, and users can perform the hybrid costing and pricing use cases natively within vRealize Operations. While vRealize Operations provides cost analysis and optimization for the hybrid cloud, CloudHealth by VMware is the go-to product for public cloud cost analysis and optimization.

VMware vRealize Suite Packaging & Licensing

What’s New in vCloud Suite 2019

VMware vCloud Suite 2019 includes vRealize Suite 2019 and vSphere Enterprise Plus for vCloud Suite.

VMware vCloud Suite 2019

What’s New in vCloud Suite 2019 Platinum

VMware vCloud Suite 2019 Platinum includes vRealize Suite 2019 and vSphere Platinum for vCloud Suite. vCloud Suite Platinum enables IT Operations to gain visibility into application context and behavior to accurately identify and eliminate legitimate threats in real time, while continuously hardening and protecting workloads.

VMware vCloud Suite 2019 Platinum

Learn More

To learn more about VMware hybrid cloud management platform solutions, visit us online at VMware vRealize Suite and vCloud Suite. See how you can:

  • Deliver a future proof hybrid cloud plan that transforms traditional IT infrastructure into a more agile and flexible platform to support global expansion and related business demands
  • Support multiple hybrid / public clouds and the latest development technologies (e.g. Kubernetes, containers, FaaS, etc.) to enable the business to accelerate time to market for delivering new products and services, enhancing customer experiences at every touch point
  • Optimize, plan and scale their hybrid cloud and HCI operations, while gaining multi-cloud observability

Additional Resources

vRealize Automation

vRealize Suite Lifecycle Manager

vRealize Operations

vRealize Log Insight 8.0

 

 

The post Announcing General Availability of vRealize Suite 2019, vCloud Suite 2019, and vCloud Suite 2019 Platinum! appeared first on VMware Cloud Management.

Automate Like A Service Provider With vRealize Automation – Part II


In my previous post, Automate Like A Service Provider With vRealize Automation – Part I, we discussed adding automation and self-service provisioning to your existing IT services. In this blog post, I’ll share some suggestions to help make it a reality.

Transitioning to an IT Service Delivery Model

Step 1: Identify the Service

Analyze your ticketing system and determine which requests are fairly mundane and repeatable, but also high volume. By starting with one of these services you will be able to quickly prove the value of moving to a service model. Server provisioning is usually a good service to offer initially, for example, a fully-configured Linux or Windows virtual machine available on-demand.

 

Step 2: Identify the Consumer

Determine who your consumer is, what their needs are, and what their preferred consumption model is – will they want to utilize a catalog to request and manage the service? Would they rather consume the service via API? Will the consumer interact with the system at all, or will there be an intermediary such as an operations team? Ideally, partner with your consumer from the outset. Understand how they request services, how they manage them from day 2, and how and why they release those resources when they are no longer needed.

 

Step 3: Identify the Service Owner

Identify a resource as the service owner. This person will work with the business units, developers, and the IT teams involved in providing the service. They will own the lifecycle of the service – from the initial release, through feature updates, to the retirement of the service – including which features are considered the minimum viable product (MVP). The service owner should understand the needs of the business and map them to use cases. They should understand enough about the technical details of providing the service to be able to prioritize feature updates so that they make the most sense from both the business and technical perspectives. This role is critical to the successful transition into an internal service provider.

Note: This role is similar to the ITIL Service Owner or Scrum Product Owner role, but you do not have to strictly adhere to either discipline to start realizing the benefits of making an individual accountable for the lifecycle of a service. This role can mature as you adopt a more formalized service strategy.

IT Service Owner

 

Step 4: Identify the Stakeholders

The Service Owner is responsible for identifying the stakeholders and the desired outcomes for each. These will need to be identified and communicated early in the process and revisited throughout the lifecycle of the service. The success of the service is dependent on meeting stakeholder needs, and these will need to be prioritized across business, technical, process and policy outcomes.

 

Step 5: Map out the Existing Process

Determine what the process is to deliver that service today. Not what you think it is, or what you have been told it is, but actually sit down with the consumer and walk through the process from start to finish. You will be amazed at how many steps are undocumented, inconsistent or add complexity but no real value. Decide what is critical to providing the service and start there.

 

Step 6: Automate the Minimum Viable Components

While it may be tempting to update and improve processes as you automate them, be careful. This can very quickly lead to scope creep and indeterminate delays. Services have a lifecycle of their own and will constantly be improved and updated. The important thing at this stage of the process is to deliver something. Delays will undermine confidence and slow momentum.

 

Step 7: Measure, Report, Promote

Metrics are critical to proving the success of the solution. The Service Owner should work with the business and technical teams to identify KPIs early on (provisioning time, build quality). Report on and share this information. You can build the greatest service in the company, but if nobody knows about it then it is not adding business value. You will need to prove success with each service delivered to be able to prioritize working on the next one.

 

Step 8: Rinse, Repeat

Now that you have proven to your consumers and your leadership that transitioning to a service-based delivery model makes sense for your organization, build on that success. Select the next service you want to deliver and repeat the process.

 

While this may seem oversimplified, starting small and building on small successes can quickly build into much larger success. You may also have noticed that none of these steps refer to a specific technology. That is because any transformation involves much more than technology. Selecting the right technology, however, can make the difference between spending time building services and functionality that directly improve user experience, or spending months writing custom code and processes that are difficult to support and manage in the long-term.

vRealize Automation Cloud is our next-generation automation platform, building on the industry-leading vRealize platform. With the most out-of-the-box functionality, native third-party integrations, and flexible extensibility of any cloud management solution, we make automating complex processes easier than ever.

 

Next Steps

Automate Like A Service Provider With vRealize Automation – Part I (From IT Operator to IT Service Provider)

 

If you haven’t already, you can get started with a free 30-day trial now!

 

The post Automate Like A Service Provider With vRealize Automation – Part II appeared first on VMware Cloud Management.

Considerations for vSphere Component Backup and Restore: Part 1


You spend time planning and implementing updates, upgrades, and configuration changes to vSphere environments. Rollback or recovery is usually the last thing on your mind. Did you know there are multiple ways to successfully back up each component in a vSphere environment? There are also many ways to recover those same components.

In this series we will talk about considerations for backing up and restoring or rolling back your vSphere environment should the need arise.

When we discuss vSphere components, I am talking about the core products:

  • Platform Services Controller / vCenter Server
  • ESXi Hosts
  • VM Compatibility / VM Tools
  • Storage (VMFS/VSAN)
  • Network (VDS)

The backup and recovery for each component is different. vCenter Server has a native file-based backup, whereas everything else does not have a native backup tool. Below I will provide guidance and considerations when backing up your vSphere Environment.

 

Introduction

Backups should be taken prior to starting any changes in your vSphere environment. There is no strict order in which components must be backed up, but if you think about dependencies, I like to back up my environments in the same order as an upgrade is completed.

Each component has different considerations. Is my vCenter Server running 6.0, 6.5 or 6.7? Is it appliance-based or Windows-based? Am I using features such as vCenter HA or Content Libraries? Those are all very important questions! vCenter Server 6.5 and 6.7 have a native file-based backup, which will include the configuration, inventory and historical data such as tasks, events and performance data. If using vCenter Server for Windows, backup can be accomplished using an image-based backup tool as well as a backup of the database. Features such as vCenter HA do not need to be backed up, as the server is restored as a standalone node. Features such as Content Library will have all the configuration and metadata restored, but the files contained in a Content Library are stored within the datastore itself, so those are some considerations to think about.

There are also many ways to backup each component, and not just differences like image-based vs file-based. Components can be backed up through the vSphere Client, API, vSphere CLI, ESXCLI, DCLI and PowerCLI.

 

Platform Services Controller / vCenter Server

When considering backups for the Platform Services Controller or vCenter Server there are a few methods that can be used. If using a Windows-based Platform Services Controller or vCenter Server, the supported backup process is to take an image-based backup of the machine and a database backup. If utilizing the vCenter Server Appliance on vSphere 6.5 or vSphere 6.7, the recommended backup method is the file-based backup. The file-based backup is not only available in the vCenter Appliance Management Interface (VAMI) but also has an API and DCLI commands.

Performing an online snapshot of your environment is not recommended or supported. If the need arises to snapshot the environment, all nodes in the SSO domain should be powered off, which can be problematic because shutting down every node causes downtime. In the event of a recovery you would not be able to recover just one node from a snapshot; the entire environment would need to be reverted.

 

ESXi Hosts

ESXi hosts do not have a native full backup tool like vCenter Server. There are some considerations to think about: Are my hosts stateless or stateful? Am I using Host Profiles?

If your environment is stateless, backup and recovery are simple. Update the image profile and perform a reboot of the host; during the boot process the new image will be used and the host profile configuration will be applied.

If your environment is stateful there are a couple options for backup. Using vSphere CLI or PowerCLI you can export the configuration of your ESXi Host, whereas if using host profiles, the configuration will be stored along with an answer file on the vCenter Server.

  • PowerCLI
    Get-VMHostFirmware -VMHost ESXi_host_IP_address -BackupConfiguration -DestinationPath output_directory
  • vSphere CLI:
    vicfg-cfgbackup --server=ESXi_host_IP_address --username=root -s output_file_name
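As a usage sketch building on the PowerCLI command above, the short loop below backs up the configuration of every connected host in a cluster. The vCenter name, cluster name, and destination folder are placeholders for illustration only:

    # Connect to vCenter (placeholder name) and export each connected host's config bundle
    Connect-VIServer -Server vcsa.lab.local
    Get-Cluster -Name 'Prod-Cluster' | Get-VMHost |
        Where-Object { $_.ConnectionState -eq 'Connected' } |
        ForEach-Object {
            # Each host produces a configBundle-<hostname>.tgz file in the destination folder
            Get-VMHostFirmware -VMHost $_ -BackupConfiguration -DestinationPath 'C:\Backups\ESXi'
        }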

 

Virtual Machines: VMware Tools / VM Compatibility

In order to recover a VM in the event you encounter issues with a VMware Tools or VM Compatibility update, it’s important to make sure you have a snapshot or full backup of your virtual machine. Recovery may not always require this backup, but it’s good to have a point in time to recover to.

Storage: VMFS / VSAN

Storage, whether it is vSAN, iSCSI, NFS or FC, cannot be backed up directly. In this case, we need to protect the workloads instead of the storage itself. The same holds true for rollback: in the event we need to roll back a version of VMFS or vSAN, the exact steps are covered in the next post in the series.

 

Network: vSphere Distributed Switch

Last but not least, it is important to take a backup of your vSphere Distributed Switch. A vCenter Server backup does include a copy of the VDS; however, if you have a need to recover it separately, a backup and export of the switch should be taken.
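As a hedged illustration of what that export can look like in PowerCLI (the switch name and destination path below are placeholders, and the VDS cmdlets require the distributed switch module included with recent PowerCLI releases):

    # Export the distributed switch configuration, including its port groups, to a zip file
    $vds = Get-VDSwitch -Name 'DSwitch-Prod'
    Export-VDSwitch -VDSwitch $vds -Destination 'C:\Backups\DSwitch-Prod.zip'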

 

Resources

 

Conclusion

As we think about backing up our vSphere Environment it’s good to know that we have options when it comes to backing up each component. It’s also good to understand the dependencies and order when backing up each piece. In the next part in the series we will discuss the rollback or recovery of each component and the considerations for each.

If you have any questions, comments or feedback, feel free to leave them below or reach out on twitter at @davidstamen.

The post Considerations for vSphere Component Backup and Restore: Part 1 appeared first on VMware vSphere Blog.

Cloud Migration Series – Part 1: Getting Started with Hybrid Cloud Migration


The hybrid cloud conversation, and subsequently cloud migration, is a hot topic amongst customers – some for it, some against it, and some conflicted. Not too long ago the hybrid cloud model was complicated and brought up feelings of uncertainty and doubt. Questions about re-architecting applications and re-educating staff, new tools and best practices, as well as security were (and still are) at the forefront of everyone’s minds. Of course, it’s technology, and nothing is ever easy; but with VMware Cloud on AWS we aim to ease that pain, confusion, and doubt associated with the native cloud model. The hybrid cloud model is quickly becoming a way of life, almost mandatory, in a variety of industries. But even if it isn’t mandatory for your industry or company, why shouldn’t you reap the benefits of it?

There are a number of reasons why an organization may want to move their virtual machines into the cloud. Scalability and elasticity allow you to quickly grow your cluster(s) for new projects or greater demand without having to go through a full procurement and deployment process. Physical data center limitations such as floor space, cabinet space, power, and cooling could stifle an organization’s ability to scale infrastructure and applications. Perhaps you need the ability to place applications closer to a particular department or user base in different countries around the world. Maybe none of these apply, but your CIO simply wants to implement a cloud first initiative.

The ability to migrate virtual machines into the cloud solves these problems and many more. Migrating virtual machines to VMware Cloud on AWS versus the native cloud comes with a great deal of benefits. First, the stress and cost of re-factoring goes right out the window; after all, you’re simply moving a vSphere virtual machine to another vSphere environment. Second, there’s no need to re-educate or re-train staff. There’s no learning curve because the software, tools, and processes largely remain the same. In fact, you might benefit more with a rich feature set and tools that will help you understand your workloads better, minimize downtime, and potentially increase SLAs for mission-critical applications.

In Part 1 of this series we will dive into the basics, to get you started moving virtual machines into VMware Cloud on AWS. Throughout the series we’ll introduce VMware HCX and dive into the components, migration methods, and details surrounding the Service Mesh. Finally, we’ll wrap up the series with some best practices and considerations around planning your application migrations for success.

 

Bridging On-Premises with the Cloud

Hybrid Linked Mode (HLM) allows you to link your VMware Cloud on AWS vCenter Server instance with your on-premises Single Sign-On domain. Similar to Enhanced Linked Mode (ELM), this will provide operational consistency allowing you to view and manage both vCenter inventories within a single pane of glass utilizing your on-premises LDAP or AD credentials. HLM also allows you to share tags and categories, as well as migrate virtual machines between vCenter Server instances.

There are two options for configuring HLM. The first, and preferred, option is to deploy the vCenter Cloud Gateway appliance on-premises. This appliance sits in between both vCenter Servers and maintains the link between them. Logging into either vCenter will only show you that vCenter’s local inventory but logging directly into the vCenter Cloud Gateway UI will allow you to see both inventories and allow you to kick off migrations. The appliance even updates itself automatically!

The second option is to link your cloud SDDC vCenter back to your on-premises data center, creating a one-way trust, and adding your Active Directory server as an identity source. With this option, you can obtain a single pane of glass only by logging into your Cloud vCenter Server instance.

 

Right-Click, Migrate!

 

It should be noted that HLM is NOT a requirement for migrating virtual machines by CLI or API, only for migrations that would be initiated from the vSphere Client.

For more information about the Hybrid Linked Mode requirements and how to deploy the vCenter Gateway appliance, check out the product walkthrough.

 

Hybrid Migration

A hybrid migration is the process of moving virtual machines between an on-premises data center and the cloud. There are three primary methods of migration:

  1. Cold Migration: Move a powered-off VM from one host to another. The VM incurs downtime during the entirety of the migration process.
  2. vMotion: Move a powered-on VM from one host to another without incurring any downtime.
  3. VMware HCX: Allows various methods of migrating VMs from on-premises to the SDDC with or without downtime. Takes advantage of replication, deduplication, and compression with greater flexibility for older on-premises vSphere versions. Includes a number of additional features to ease migration.

Note: In the first part of this series we are only going to cover cold migration and vMotion. VMware HCX, which has its own subset of migration methods, will be covered in Part 2.

As stated above, migrations can be initiated via the vSphere Client once Hybrid Linked Mode (HLM) is configured. They can also be initiated via the command-line using PowerCLI or leveraging the API. Kyle Ruddy wrote a great blog post on using PowerCLI for Cross vCenter vMotion, and it applies to this scenario as well.
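To make that concrete, here is a minimal, hypothetical sketch of a cross-vCenter migration with PowerCLI; the server names, VM name, and destination objects are placeholders, and both vCenter Servers must be connected in the same PowerCLI session:

    # Connect to the source and destination vCenter Servers
    Connect-VIServer -Server vcsa-onprem.lab.local
    Connect-VIServer -Server vcsa-cloud.example.com

    # Relocate the VM to a host, datastore, and port group owned by the destination vCenter
    $vm = Get-VM -Name 'app01' -Server 'vcsa-onprem.lab.local'
    Move-VM -VM $vm `
        -Destination (Get-VMHost -Server 'vcsa-cloud.example.com' | Select-Object -First 1) `
        -Datastore (Get-Datastore -Name 'WorkloadDatastore' -Server 'vcsa-cloud.example.com') `
        -NetworkAdapter (Get-NetworkAdapter -VM $vm) `
        -PortGroup (Get-VDPortgroup -Name 'Workload-PG' -Server 'vcsa-cloud.example.com')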

 

Hybrid Migration Requirements

Cold migration and vMotion both have prerequisites and specific configurations that need to be in place in order for them to be successful. A lot of the requirements are the same, but vMotion has some additional considerations. Both methods require the following:

  • On-premises vSphere version 6.5p03+ or 6.7u1+
  • On-premises vSphere VDS version 6.0 or higher
  • Hybrid Linked Mode configured if using the vSphere Client for migration
  • On-premises DNS needs to be able to resolve the Cloud vCenter server instance.

The firewall rules for both Cold Migration and vMotion are identical with one exception. In order to do a vMotion, you need one additional rule – to allow vMotion traffic (TCP 8000) between the on-premises vMotion VMkernel networks and the Cloud ESXi host subnet.

 

VMC Firewall Rules

 

Network connectivity between your on-premises data center and the SDDC is the major differentiator when using these two methods. For cold migrations you can make use of AWS Direct Connect (DX), but really only need an IPsec VPN configured between environments. For vMotion, it is a hard requirement to have AWS Direct Connect with Private VIF and L2 VPN configured. Remember, we’re moving a live instance of a VM; there needs to be a solid connection in place for the migration to be successful. Long distance/cross-vCenter vMotion requirements apply, requiring a minimum of 250 Mbps between source and destination with a maximum latency of 100 ms round trip. Lastly, virtual machine compatibility (hardware) needs to be at version 9 or later for vMotion.

 

VMC Connectivity

 

Once the requirements above have been met, you can migrate VMs as you see fit – whether they can afford downtime or not will determine the method you can use.

Stay tuned for Part 2 where we will dive into VMware HCX and how we can become even more flexible with our migration options.

 

Resources

For further information on Cloud Migration or VMware Cloud on AWS, check out the valuable resources below:

 

The post Cloud Migration Series – Part 1: Getting Started with Hybrid Cloud Migration appeared first on VMware vSphere Blog.

ESXi on Arm at the Edge, on the SmartNIC and in the Cloud


At VMware, we simplify how customers deploy and run workloads, run on the best available infrastructure, run efficiently, all wrapped around a familiar management experience. Coming back from Arm Techcon 2019, we saw a tremendous amount of innovation on display for running workloads. And in collaboration with Arm and some of its partners, we are also exploring several use cases for running our ESXi hypervisor on various Arm SoCs, at the Edge, on the SmartNIC, and in the Cloud, with the same set of principles to help our customers efficiently run and manage their workloads.

Arm Neoverse

(Arm NEOVERSE investments over the next several years)

ESXi on Arm at the Edge

At the Edge, where the environment is often difficult to reach and compute resources are limited, it’s important to have infrastructure that can provide resiliency in a compact form factor. These environments are not operated by the central IT teams, but rather by field operating teams whose servicing cost is driven more by the number of truck rolls than by the cost of the hardware. To explore this use case, we showcased running our hypervisor on a compact Marvell Armada A8040-based SolidRun MacchiatoBin providing vSphere vMotion and Fault Tolerance inside a wind turbine scenario. When there’s a need to patch the hardware, or even if one of the hardware nodes goes down, our technology enables the workload to continue running uninterrupted. We bring the same data center-grade resiliency to the Edge.

The second characteristic of Edge environments is that compute resources are limited, either due to the need to bring down the cost per node or limited power availability. Our ESXi hypervisor is known for deploying on big iron running monster workloads. But the same efficient hypervisor is just as good scaling down to small machines running small workloads. We explored just how small by running it on the Raspberry Pi 4. Our goal is to bring data center virtualization benefits such as vMotion, Fault Tolerance and lifecycle management into environments where these capabilities are not readily available.

ESXi on Arm on the SmartNIC

Bringing the discussion back a bit closer to home base, we’re also exploring helping our customers more efficiently run their data centers by leveraging hardware accelerators. Some of our work in this area includes virtualization support for GPUs and FPGAs, as well as remote acceleration through Bitfusion. An emerging accelerator is the SmartNIC, where the NIC is now equipped with a capable general-purpose processor and plenty of RAM. This enables customers to offload data plane processing such as running the virtual switch, security or storage functionality, or even analytics on the incoming traffic, all without interrupting the CPU. The goal here is to offload tasks to optimized hardware and focus the CPU on running the customer’s actual workload. In environments where network processing can chew up to 20-30% of the CPU cycles, SmartNICs can help customers achieve higher consolidation ratios AND faster workload execution. You can have your cake and eat it too.

(Showcasing ESXi running on various Arm SoCs, including Mellanox BlueField, Marvell Armada, Raspberry Pi4)

ESXi on Arm in the Cloud

If we dial back time a bit, all the way to November of 2018, AWS announced the availability of the Arm-powered EC2 A1 VM instance and more recently the availability of the A1 metal instance. The biggest draw for customers is the price point, up to 40% more cost effective than comparable x86 instances. The workloads targeted include webservers such as NGINX which can take advantage of the cores to scale-out, containerized services, filers which are often waiting for I/O and not compute heavy, and applications written in interpreted languages. In addition to the cost benefit, developers also gain access to flexible compute consumption and the availability of the AWS ecosystem.

For VMware customers leveraging VMware Cloud on AWS (VMC), this new A1 metal instance can complement their existing x86 instances. For example, customers can run cost-optimized workloads such as NGINX on a cluster of ESXi-powered A1 hosts, working alongside or communicating with a cluster of ESXi-powered x86 hosts running traditional database applications. The key here is that customers can choose the platform(s) best suited for their application mix, managing this heterogeneous environment with operational consistency via the vSphere control plane. We’re pretty excited with this use case and actively exploring it with AWS and Arm:

“The announcement of the Arm Neoverse-based EC2 A1 bare metal instances provides new opportunities for customers to leverage VMware ESXi in the Cloud,” said Mohamed Awad, vice president of Marketing, Infrastructure Line of Business, Arm. “Our collaboration with VMware in this customer evaluation phase is a step forward in enabling solutions for the next generation of Cloud-to-Edge workloads.”

(ESXi running on the AWS a1.metal instance managed by vCenter Server. The same vCenter Server is also managing various other ESXi Arm hosts and can also include ESXi x86 hosts.)

To Conclude

We’re still in the early days of the above Cloud effort, and we are continuing to explore new ways we can bring the benefits of our infrastructure stack to customers. The ESXi on Arm bits are not currently available to the public. However, if the use cases above are of interest to your organization, we’d like to hear your feedback and explore POC opportunities; please reach out to your account team to get in touch.

 

The post ESXi on Arm at the Edge, on the SmartNIC and in the Cloud appeared first on VMware vSphere Blog.

Considerations for vSphere Component Backup and Restore: Part 2


Now that we have covered the considerations to backup each component in your vSphere Environment, it’s time to think about the rollback or restore process. In the previous post we discussed how to backup each individual component, in this post we will continue the conversation and discuss considerations to restore or rollback your vSphere Environment components.

 

Platform Services Controller / vCenter Server

When things don’t go right with an update, upgrade, or configuration change of your Platform Services Controller or vCenter Server, you need to understand the proper steps to roll back. In these scenarios there may have been a failure in the process, or an incompatibility was found after a successful upgrade was completed, which would result in a rollback. A common reason to roll back a vCenter Server is when a supporting solution, such as a backup or monitoring product, is not compatible. In this case you will need to revert to the previous version.

Scenario 1: Embedded vCenter Server Appliance (No Enhanced Linked Mode)

When you need to roll back a standalone vCenter Server with an Embedded Platform Services Controller due to a failed upgrade or migration, shut down the new vCenter Server Appliance, power on the original, and rejoin it to the domain if it was previously joined.

In the event the rollback occurs due to a patch or configuration change, the rollback process will include restoring from the file-based or image-based backup taken.

Scenario 2: Embedded vCenter Server Appliance in Enhanced Linked Mode

In case you happen to have multiple Embedded vCenter Servers in Enhanced Linked Mode, the rollback and recovery of this is a bit different than Scenario 1.

As the Embedded vCenter Server contains PSC related data we cannot just revert a snapshot of a vCenter Server running in linked mode. There are pre-built checks in the vCenter Server that will look for inconsistencies and if any major issues occur the PSC service could go into read-only mode.

In order to properly restore an Embedded vCenter Server that is in Enhanced Linked Mode we must perform a restore that was taken with an Image or File-Based Backup. The File-Based Restore wizard contains logic to detect if the node was in linked mode and instead of restoring the stale data, it will perform a reconciliation and synchronize the PSC data from another vCenter Server in the SSO domain instead of restoring it. If you restore an Embedded vCenter Server from an image-based backup you must log into the VAMI and manually run reconciliation.

 

Scenario 3: External Platform Services Controller

As we discussed in Scenario 2, if a vCenter Server had PSC data we did not want to restore old data. The same applies to a standalone External PSC: if we need to roll back a single PSC, we will shut down the node and unregister it from the SSO domain. We will then redeploy the PSC using the same FQDN and IP and allow it to synchronize the data. The only time a PSC should be restored from backup is if all PSCs within an SSO domain are unavailable.

 

Scenario 4: External vCenter Server

Last but not least, as we wrap up we will talk about restoring an External vCenter Server. When you need to roll back an External vCenter Server due to a failed upgrade or migration, shut down the new vCenter Server Appliance, power on the original, and rejoin it to the domain if it was previously joined. In the event you need to roll back from a patch or configuration change, an External vCenter Server contains no PSC data, so it is fine to perform a restore from an image-based or file-based backup.

 

ESXi Server

Certain scenarios may require you to roll back an ESXi host. Whether there was an issue with an update, upgrade, or software installation, we will cover two scenarios for rolling back an ESXi host.

Scenario 1: Reverting using the Built-in Recovery

The most common method used to roll back an ESXi host is the built-in recovery. Using the Direct Console User Interface (DCUI) you have the ability to revert to a previous installation using the altbootbank. This is the simpler of the two scenarios, but a previous version is not always available. In the event a previous version is unavailable you will need to proceed with restoring using a configuration backup.

 

Scenario 2: Reverting using Configuration Backup

The next method to recover an ESXi host is to restore a previous configuration. If an altbootbank is not available for the built-in recovery, you must first install ESXi from media. Using the configuration backup, you can then perform a restore to bring the host back to its configured state. In some environments host profiles are not used, which is why having a configuration backup can make a restore simpler.
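As a rough sketch of what that restore can look like with PowerCLI (the host name, bundle path, and credentials are placeholders; the host must be in maintenance mode, the ESXi build must match the one the backup was taken from, and the host reboots as part of the restore):

    # Put the freshly installed host into maintenance mode, then push the saved config bundle back
    Set-VMHost -VMHost esx01.lab.local -State Maintenance
    Set-VMHostFirmware -VMHost esx01.lab.local -Restore `
        -SourcePath 'C:\Backups\ESXi\configBundle-esx01.lab.local.tgz' `
        -HostUser root -HostPassword 'VMware1!'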

 

VMware Tools / VM Compatibility

Downgrading of VMware Tools is not supported. In the event of a need to revert to a previous version of VMware Tools, you must first uninstall the newer version of VMware Tools and then install the older version of VMware Tools.

Unlike VMware Tools there is no process to perform an in-place downgrade of VM Compatibility / Virtual Hardware. To downgrade your virtual machine hardware version you have 3 options:

  1. Revert to a previous backup or snapshot of the Virtual Machine prior to upgrading virtual hardware.
  2. Create a new virtual machine mirroring the original specifications using a previous hardware version and attach the existing disk from the virtual machine.
  3. Use VMware Converter to perform a VM to VM conversion choosing a previous hardware version for the destination.
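Expanding on option 2 above, a hypothetical PowerCLI sketch might look like the following; the VM name, host, disk path, and hardware version are placeholders, and you should confirm which hardware version values your PowerCLI release accepts:

    # Create a replacement VM at an older hardware version and attach the original VMDK
    New-VM -Name 'app01-hw13' -VMHost (Get-VMHost -Name 'esx01.lab.local') `
        -Version v13 -NumCpu 2 -MemoryGB 8 `
        -DiskPath '[Datastore1] app01/app01.vmdk'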

 

Storage (VMFS/VSAN)

Rolling back a VMFS-6 volume follows the same steps as an upgrade to VMFS-6: just as there was no in-place upgrade when migrating from VMFS-5 to VMFS-6, there is no in-place downgrade. To downgrade a VMFS-6 volume to VMFS-5, you must first create a new VMFS-5 volume and then perform a storage migration of the virtual machines.
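A quick PowerCLI sketch of that storage migration (the datastore names are placeholders):

    # Storage vMotion every VM off the VMFS-6 datastore onto the newly created VMFS-5 datastore
    Get-VM -Datastore (Get-Datastore -Name 'VMFS6-Old') |
        Move-VM -Datastore (Get-Datastore -Name 'VMFS5-New')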

Generally, it is not recommended to downgrade a vSAN Disk Format. However, in some scenarios you may wish to format a vSAN Disk Group with a legacy format version. This process includes the ability to evacuate a disk group and choose a legacy disk version by modifying an advanced setting and then recreating a disk group. This is only applicable to vSAN 6.x versions.

 

Network (VDS)

Last but not least, let’s review the scenario of having to roll back to a previous vSphere Distributed Switch version. If you have ever performed an upgrade, you may have noticed on screen that a distributed switch cannot be downgraded; however, there is a restore process that allows us to go back to a previous distributed switch version.

In order to "restore" a distributed switch configuration you must first create a distributed switch at the correct target version and choose the option to import the configuration while preserving the original distributed switch and port group identifiers. If you were to restore a distributed switch configuration on top of the existing switch, the current distributed switch version would be left intact.
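For illustration, a hedged PowerCLI sketch of that restore, using the export taken in Part 1 of this series (the switch name, datacenter, and path are placeholders):

    # Recreate the distributed switch from the exported zip, keeping the original switch
    # and port group identifiers so hosts and VMs can be reconnected cleanly
    New-VDSwitch -Name 'DSwitch-Prod' `
        -Location (Get-Datacenter -Name 'Datacenter') `
        -BackupPath 'C:\Backups\DSwitch-Prod.zip' `
        -KeepIdentifiers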

 

Resources

 

Conclusion

Although we don’t like to anticipate any issues in our vSphere Environment, it is important to not only take regular backups of our environment, but to also practice restoring outside of any lifecycle activities. When thinking about backing up and restoring vSphere Components it is important to consider the different parts of your environment and the impact restoring that component will have in your environment.

If you have any questions, comments or feedback, feel free to leave them below or reach out on twitter at @davidstamen.

The post Considerations for vSphere Component Backup and Restore: Part 2 appeared first on VMware vSphere Blog.


VMware AppDefense 2.3: Vulnerability Scanning and More

$
0
0
This post was originally published on this site ---

One of the most interesting parts of working at VMware is watching a product as it evolves, and it’s been fun to watch AppDefense grow and progress. The release of VMware AppDefense 2.3 is no different. If you aren’t familiar with AppDefense, it’s VMware’s endpoint protection product that learns and protects the good behavior on a guest VM, versus trying to keep up with the bad things like traditional antivirus does. This new approach is more secure, more efficient, and much easier on IT staff time, which translates into much lower TCO (a very good thing).

This release of AppDefense adds a number of features to help with vulnerability detection and remediation, which will be a great help to vSphere admins that are tasked with detecting & fixing vulnerabilities. Patching & remediation is a thankless task because, to the untrained eye (i.e. everyone who isn’t a sysadmin), it seems like a lot of work where nothing happens as a result. That’s the point of patching, though: closing doors to bad actors so that nothing bad happens. Like many tasks IT professionals undertake, it falls into the “no news is good news” category, where you’ll never know what bad things were stopped.

AppDefense 2.3 helps make those patching processes easier now by adding visibility to risk inside the VMs, allowing operations teams to report directly on vulnerabilities & risk and help their organizations prioritize the repairs. This is a huge release, let’s take a look at the release notes and see not only what’s new but also why it’ll help us (spoiler: the vulnerability stuff is at the end):

Process Kill

AppDefense allows admins to configure automatic actions when untrusted activity occurs, which is nice because admins cannot be watching their systems at all times. vSphere admins have been able to choose whether AppDefense alerts on bad behavior, blocks the network traffic, quarantines the VM with NSX, suspends the VM, snapshots the VM, or powers the VM off. Now we can have it kill the suspicious process, too, as an evolution of response mechanisms.

AppDefense 2.3 - Remediation Actions

Behavior Timestamps

One of the core tenets of information security is the principle of least privilege, meaning that a person, process, or other entity should only have the permissions to do exactly their job, and no more. As AppDefense evolves we hear from customers about what they want to see in the product (which means that if you have suggestions, for AppDefense or any of VMware’s products, please tell us). One thing that was pointed out is that AppDefense may have learned behaviors on a system that, as the system evolves, we might no longer want to be allowed. Now we can see when each learned behavior last occurred, which helps us audit those behaviors.

AppDefense 2.3 - Last Seen

Alert Classification Enhancements & Severity-based Remediations

Part of having a service that uses machine learning is that… it learns things. As the AppDefense systems learn more they can make better judgements about whether network and process behavior looks similar to what’s already allowed, and then offer more reasonable alerts to the vSphere admins and security teams. When everything is an emergency nothing is, right?

Similarly, severity-based remediations now mean that the response from AppDefense can match the offense. Critical alerts get a very stern response (power off, quarantine, etc.), more minor alarms get more calm responses (blocking, alerting, etc.).

SaaS User Roles

Keeping with the principle of least privilege we’ve added some additional roles to the AppDefense SaaS portal. This allows you to invite other staff, such as infosec folks or SecOps teams, to do reporting but not alter the settings in the portal.

Rebootless Install and Upgrade

The AppDefense in-guest module ships with VMware Tools to help smooth out relations between guest OS support staff and vSphere admins. However, unless you’re running the absolute latest VMware Tools you won’t have the latest AppDefense guest modules. To further smooth relations between teams in an IT organization the AppDefense modules can now be updated without rebooting the guest VMs, if you’re at version 2.2.1 or higher. If you aren’t at that version then you might consider going to VMware Tools 11 as well, which has some major benefits in other areas, including the new ability to get vmxnet, pvscsi, and other drivers from Windows Update.

DNS support for Allowed Behaviors

In the world of content delivery networks, DevOps, and the trend towards more dynamic services it is very hard to maintain an authoritative list of IPs a service can connect to. Now you can use FQDNs, too, which is a huge improvement.

NSX-T Support

NSX-T is evolving and AppDefense can now use both NSX-V and NSX-T to quarantine a VM. This is a very powerful feature because it allows for a measured automatic response, at least until a human can get in to look.

Vulnerability Scanning and Risk Prioritization

Once you’re upgraded to AppDefense 2.3 and guest modules version 2.3.0.0 or newer you’ll start seeing new vulnerability information appear. To be quite honest, this feature really speaks for itself. I’ve attached some new screenshots from my own lab environment, which is set up as a medium-sized business would typically be configured. I am prompt about patching systems (it’s the best way to resolve vulnerabilities, after all) but to show the reporting tools I created an unpatched Windows Server 2016 VM. 729 vulnerabilities for the OS, and 36 for application components – ouch! You can also see the process integrity alarms that appear when something tampers with the Windows kernel. In this case it was me applying patches to my DR site’s AD DC.

(Screenshots: AppDefense 2.3 views of the main vCenter inventory and of an individual VM – OS vulnerabilities, application vulnerabilities, guest monitoring, guest integrity, process view, and OS integrity.)

Conclusion

In conclusion, AppDefense just keeps getting better, in very thoughtful ways. VMware tries hard to consider how best we can improve the security of applications and workloads while being respectful of IT staff time, skills, and the politics inside organizations. In that light it’s very exciting to think of the possibilities around the Carbon Black acquisition, too.

I cannot imagine how you wouldn't be interested in AppDefense at this point, especially in any sizeable IT environment where risk management and patching are big chores. AppDefense is delivered as part of vSphere Platinum, so please reach out to your account teams for more information or visit the vSphere Platinum test drive where you can take it, and a number of other VMware products, for a spin.

The post VMware AppDefense 2.3: Vulnerability Scanning and More appeared first on VMware vSphere Blog.

VMware Named a Leader in The Forrester Wave™: Infrastructure Automation Platforms, Q3 2019

This post was originally published on this site ---

I’m delighted to share that VMware has been named a Leader in The Forrester Wave™: Infrastructure Automation Platforms, Q3 2019. VMware was given the highest possible scores in the criteria of vision, market approach, planned enhancements, deployment, integrations, community support, and innovation in pricing.

Download the report here

We are grateful to our customers who gave us their votes of confidence, and believe this demonstrates our commitment towards helping our customers achieve true agility and address their ever-changing business needs.

Infrastructure Automation Platforms

 

In our view, this report recognizes VMware’s leadership in delivering a comprehensive infrastructure automation platform with our hybrid cloud management platform, VMware vRealize Suite, which consists of vRealize Automation, vRealize Operations, vRealize Log Insight, and vRealize Suite Lifecycle Manager. We enable customers to accelerate the delivery of IT services through automation and pre-defined policies, providing a high level of agility and flexibility for IT administrators, cloud administrators, DevOps administrators and developers, while maintaining consistent governance and control. With our vRealize Suite offerings, we provide customers a choice in how they purchase, consume, and operate their IT and Cloud operations —either software as a service (SaaS) via a subscription model or on-premises via a perpetual license model.

At the core of VMware’s cloud automation offerings are VMware vRealize Automation Cloud (formerly VMware Cloud Automation Services), a modern SaaS hybrid cloud automation platform, and VMware vRealize Automation, our on-premises cloud automation platform offering the same set of capabilities as our SaaS offering. Built on the same architecture, code base, and modern UX/UI, both offerings are designed to help customers implement key use cases across self-service hybrid clouds, multicloud automation with governance, Kubernetes cluster/namespace management, and DevOps-based service delivery models.

We are very excited about the new innovations and powerful capabilities we’re continuing to develop in our hybrid cloud automation platforms, vRealize Automation and vRealize Automation Cloud. We hope many of you will be able to join us at VMworld 2019 Europe in Barcelona, Nov. 4-7, 2019, where we will be sharing more details about our product and solutions, and how our customers are realizing business value from our hybrid cloud automation platforms.

 

You can download The Forrester Wave™: Infrastructure Automation Platforms, Q3 2019 to learn more about VMware’s performance in the evaluation, here.

The post VMware Named a Leader in The Forrester Wave™: Infrastructure Automation Platforms, Q3 2019 appeared first on VMware Cloud Management.

Project level deployment sharing in vRealize Automation Cloud

This post was originally published on this site ---

With the latest feature release in October 2019, vRealize Automation Cloud users can configure projects for project level deployment sharing, or for deployments to be owned by a specific user. When creating or editing a Project, you have a new toggle switch to enable or disable Deployment sharing:

When Deployment sharing is enabled, each member can view and manage the Deployments that are part of a Project. This is still the default model when you create a new project, and it applies to any projects created prior to the update.

With Deployment sharing disabled, members with the Administrator role can view and manage all Deployments in the Project. Users with the Member role can only view and manage Deployments that they own.

Example Deployment Sharing Permissions

Let’s take a look at an example – I’ve created three projects, and I have three users who have been assigned roles within these projects:

  • Project a, with Deployment sharing enabled
    • Brian is Administrator
    • Shawna and Scott are Members
  • Project b, with Deployment sharing disabled
    • Brian is Administrator
    • Shawna and Scott are Members
  • Project c, with Deployment sharing disabled
    • Brian, Shawna and Scott are Members

Each user has deployed one application in each Project, so the Project, User Role and Deployment layout is shown here:

This translates to the following ownership and access rights:

Configuring vRA Cloud

What does this look like when we configure it in vRA Cloud? You can see from the screenshots below that each Project is configured with the Deployment sharing setting and User Roles as described above:

  

Once the projects are created and configured, I can deploy each user's applications.

If I log in as Brian, I can see all the Deployments in Project a (because Deployment sharing is on), all the Deployments in Project b (because Brian is an Administrator), and only Brian's Deployment in Project c (because he is the owner) – 7 Deployments in total:

So far, so good! If I log out of Brian’s account and log in as Shawna, I can see all the Deployments in Project a, but only Shawna’s Deployments in Project b and Project c:

And finally, if I log in as Scott I can once again see all the Deployments in Project a, but only Scott’s Deployments in Project b and Project c:

But what happens if I revoke Scott’s membership of Project b? Well, Scott can no longer see his Deployment in Project b (note that membership is cached for ~30m, so he’ll need to log out and in to refresh it, and the same goes for the API):

It’s also worth noting that the filters are updated with the context – so, for example, Scott can now only filter based on Project a and Project c. This is true for any of the filter types (Cloud Accounts, Cloud Types, Deployment Status, Projects, etc.).

In Summary

As you can see, enabling or disabling shared deployment ownership provides quite a lot of possible combinations for controlling access to Deployments – to recap:

With Deployment sharing on:

  • Everyone can see all the Deployments in the Project

With Deployment sharing off:

  • Project Administrator roles can see all Deployments in the Project
  • Project Member roles can see all Deployments they own in the Project

The post Project level deployment sharing in vRealize Automation Cloud appeared first on VMware Cloud Management.

Remote Console Available in vRealize Automation Cloud

This post was originally published on this site ---

 

In our latest release of capabilities in vRealize Automation Cloud we have introduced a feature that was once only available in our on-premises vRealize Automation solution: the ability to console into vSphere-deployed virtual machines directly from vRealize Automation Cloud. This capability allows you to give users of vRealize Automation Cloud access to console into their deployed workloads in vSphere without having to give them direct access to vCenter. It's a great way to provide full access while keeping governance and security controls in your environment. Below I will walk through how to use the remote console in vRealize Automation Cloud.

 

Where to Access the Remote Console:

The new remote console in vRealize Automation Cloud can be accessed from the deployment details screen from either Service Broker or Cloud Assembly. First click on the Deployment tab in either service and then select the specific vSphere deployment that hosts the virtual machine you want to access.

 

Once on the deployment details screen, make sure you are on the topology tab, then select the virtual machine object from the diagram, and select “Connect to Remote Console” from the Actions drop down menu.

 

Then you are off to the races, consoled into your virtual machine.

 

There are a couple caveats to using remote console:

  • You must have network access to the vSphere infrastructure on which the virtual machine is hosted.
  • If you are using self-signed certificates, you will need to accept the certificate via the warning message shown below:

For additional information about requirements needed for the Remote Console feature to work in your environment see the documentation here.

 

Summary:

Now, with Remote Console access directly from vRealize Automation Cloud, you can give your customers everything they need to deploy and manage their workloads without giving them direct access to vCenter.

 

 

Other Cool Features For vRealize Automation Cloud:

Input from GUI in the Blueprint

Explicit Deployment Ownership

 

 

 

The post Remote Console Available in vRealize Automation Cloud appeared first on VMware Cloud Management.

Automating vRealize Network Insight with PowerShell and Python

This post was originally published on this site ---

vRealize Network Insight is a treasure trove of information on what’s happening in your network. It ingests network configuration data, application flows going through the network, metrics, and correlates all this data to the application workloads. One of the best-kept secrets is that all this information is also available via the public APIs of Network Insight.

From exporting network flow data, recommended firewall rules, analytics data, to pushing data sources and applications, there’s a lot there.

Public API

Network Insight's public API is built on modern standards: it is an HTTP-based, RESTful API. The documentation for this API is available directly from Network Insight, and the built-in API Explorer gives you the ability to review the available APIs, the expected format and expected results, and also allows you to test API calls directly from the same interface.

While using the raw API will be interesting for the developers amongst us, we've made it a bit easier for infrastructure engineers and administrators to use the data available from Network Insight. There are a few wrappers, or SDKs, available that do just that. It'll be easier to consume network data from Network Insight when you are using your favorite scripting language.

I’ll briefly describe the possibilities using PowerShell and Python below, and for an in-depth description and a real-time demo, watch the video below.

PowerShell

PowerShell is pretty popular within the VMware community. With PowerCLI, PowerNSX, PowervRA, and more being available, VMware products are being automated to create consistency and allow data sharing between multiple systems. Being a big fan of PowerShell myself, I created PowervRNI a few years ago.

PowervRNI is a PowerShell module that has almost all public API calls implemented. It can be used to export any data, create applications, data sources, and more. Check out the Github page to get more information and examples.

Python

While PowerShell can be used to create full scripts and do anything you need, the most common use is to do one-liners with it. Python typically comes around when you want to pull information from one source, process that information, and push it to another system or file – doing multiple operations in sequence.

The Python SDK for vRealize Network Insight abstracts the public API and makes them available via Python classes and functions. This SDK is 100% complete as it is automatically generated from the Network Insight API documentation, whenever a new version is released.

Check out the Github page for the Python SDK to get more information and example scripts.

Demo

Now, let’s have a look at some examples in this video:

 

The post Automating vRealize Network Insight with PowerShell and Python appeared first on VMware Cloud Management.

VMworld 2019 Europe: vRealize Automation Sessions

This post was originally published on this site ---

VMworld vRealize Automation

Hola y bienvenidos a VMworld 2019 en Barcelona, España!

Are you ready to take your VMware knowledge to the next level? With the event around the corner, November 4-7, the vRealize Automation team is excited to share with you the key breakout sessions. In addition, we encourage you to participate in the Hands-on Labs, join the Meet the Experts Roundtables and visit our Solutions Exchange pavilion to see live demos of our latest product releases!

 

Recommended vRealize Automation Sessions

Network Automation with VMware vRealize Automation Cloud and NSX [HBO1899BE]

Tuesday, November 5 | 12:30 – 13:30

Back with Act 2.0: SDDC, Automation, Apps, and Beyond [HBO1724BE]

Tuesday, November 5 | 12:30 – 13:30

What’s New in Cloud Automation Solutions [HBO1492BE]

Tuesday, November 5 | 17:00 – 18:00

Evolving vRealize Automation [HBO1323BE]

Wednesday, November 6 | 14:00 – 15:00

vRealize Automation Cloud with VMware Cloud on AWS: IHS Markit [HBO2498BE]

Wednesday, November 6 | 14:00 – 15:00

vRealize Automation Cloud: Developers’ Paradise on any cloud, on any device [HBO3559BE]

Wednesday, November 6 | 14:00 – 15:00

Continuous Application Delivery with Code Stream: A customer journey [HBO1080BE]

Thursday, November 7 | 15:00 – 16:00

 

Meet the Experts

Meet the Experts allows customers, partners, and any attendees of VMworld to spend time with VMware product experts in one-on-one settings or in a round-table format and engage in product discussions to learn more and get questions answered. Learn more.

Automation, self-service with vRealize Automation, Cloud Automation Service [MTE6050E]

Tuesday, November 5 | 11:15 – 12:00

Wednesday, November 6 | 14:15 – 15:00

Integrated operations management and automation with VMware vRealize Suite [MTE6405E]

Tuesday, November 5 | 15:15 – 16:00

DevOps-ready IT: Infra as Code with VMware Cloud Automation Services [MTE6052E]

Wednesday, November 6 | 17:15 – 18:00

 

Hands-on Labs

Hands-on Labs demonstrate the value of VMware solutions in real time. If you want to gain special access to our latest Automation technologies with product experts available to provide one-on-one guidance, don’t forget to schedule your session today! Learn more.

If you want to know how vRealize Suite can boost your SDDC or if you’re looking to quickly learn about the newest vRealize releases – now you can get a great t-shirt too! You just have to complete one VMware vRealize Lightning Lab in the HOL:

  • HOL-2001-91-ISM – What’s New in VMware vRealize Operations 8.0
  • HOL-2021-91-ISM – What’s New in vRealize Automation 8.0
  • HOL-2002-92-CMP – A Quick Introduction to vRealize Network Insight


 

If you haven’t registered yet for VMworld 2019 Europe, there is still time! I hope to see you there, hasta pronto!

 

 

The post VMworld 2019 Europe: vRealize Automation Sessions appeared first on VMware Cloud Management.

vRA Cloud Load Balancer with NSX-T Deep Dive

This post was originally published on this site ---

Cloud Assembly supports provisioning one-arm and two-arm load balancers on VMware NSX. Load balancers can be provisioned using existing networks data-collected from NSX-T or on-demand networks provisioned by Cloud Assembly. Cloud Assembly supports provisioning a new load balancing service in NSX-T or reusing an existing NSX-T load balancing service that was either data-collected from NSX-T or deployed by the administrator.

Blueprint schema

A load balancer is represented as a Cloud.LoadBalancer component on the blueprint canvas. The Cloud.LoadBalancer component schema consists of a collection of route settings, a reference to a machine component that will be used to configure the load balancer pool members, and a reference to a network component that will be used to configure the load balancer VIP.

Every Cloud.LoadBalancer route configuration will be translated into a load balancing pool and VIP combination in NSX-T. If a route has a healthCheckConfiguration specified, a monitor will be created in NSX-T.

## Cloud.LoadBalancer schema
web_lb:
    type: Cloud.LoadBalancer
    properties:
      routes: 
        - protocol: HTTP
          port: 80
          instanceProtocol: HTTP
          instancePort: 80
          healthCheckConfiguration: 
            protocol: HTTP
            port: 80
            urlPath: /index.html
            intervalSeconds: 5
            timeoutSeconds: 15
            unhealthyThreshold: 5
            healthyThreshold: 2
      network: '${resource.prod_net.id}'
      instances: '${resource.web_server.id}'
      internetFacing: false

Note: “internetFacing” configuration is not used for NSX-T

Create New Two-Arm Load Balancer

Here is a blueprint example with a Two-Arm Load Balancer:


  • “web_net” is a Cloud.Network component with an on-demand networkType (outbound, routed or private)
  • “prod_net” is a Cloud.Network component with an existing networkType (existing or public)
  • “web_server” is a Cloud.Machine component
  • “web_lb” is a Cloud.LoadBalancer component

The order in which the components of this blueprint will be provisioned is as follows:
“prod_net” and “web_net” → “web_server” → “web_lb”
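
For reference, here is a minimal blueprint sketch for this two-arm example, following the Cloud.LoadBalancer schema shown earlier. The image and flavor names are placeholders for whatever image and flavor mappings exist in your environment, and the health check settings are trimmed for brevity:

## Two-arm example (sketch) – image/flavor names are placeholders
web_net:
    type: Cloud.Network
    properties:
      networkType: outbound    # on-demand network for the web servers
prod_net:
    type: Cloud.Network
    properties:
      networkType: existing    # existing network that will host the VIP
web_server:
    type: Cloud.Machine
    properties:
      image: ubuntu            # placeholder image mapping
      flavor: small            # placeholder flavor mapping
      networks:
        - network: '${resource.web_net.id}'
web_lb:
    type: Cloud.LoadBalancer
    properties:
      routes:
        - protocol: HTTP
          port: 80
          instanceProtocol: HTTP
          instancePort: 80
      network: '${resource.prod_net.id}'
      instances: '${resource.web_server.id}'
      internetFacing: false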

 

NSX-T configurations for Two-Arm Load Balancer

1. Load balancing service is created in the NSX-T manager. Load balancing service is attached to the Tier-1 router of the “web_net” network.

Note: If the “web_net” network type is private, a Tier-1 logical router is not created for it during network provisioning. This is because private networks don’t need any routing capabilities. In order to support load balancing for this blueprint, a Tier-1 router is created during “web_lb” provisioning. The Tier-1 logical router is attached to the load balancing service as well as to the “web_net” private network.

 

2. Virtual Server is created for the Cloud.LoadBalancer “route” configuration.

The Virtual Server IP address is allocated from the static IP range configured for the network selected for the “prod_net” component. Note: the network selected in the network profile must be configured with a static IP range.

 

3. Application profile is created for the Cloud.LoadBalancer “route” configuration.

 

4. Server Pool is created for the Cloud.LoadBalancer “route” configuration.

The server pool is configured with the “SNAT Translation Mode” set to “Automap” to allow inline load balancing.

All machines provisioned for the “web_server” Cloud.Machine component are added to the pool as members.

 

5. Monitor is created for the “healthCheckConfiguration” of the Cloud.LoadBalancer “route” schema.

 

6. Tier-1 logical router is attached to the Edge cluster and the Tier-0 logical router which are selected in the network profile.

Tier-1 router is configured to advertise Load Balancer VIP address upwards to Tier-0 router.

 

Create New One-Arm Load Balancer

Here is a blueprint example with a One-Arm Load Balancer:


  • “prod_net” is a Cloud.Network component with an existing networkType (existing or public)
  • “web_server” is a Cloud.Machine component
  • “web_lb” is a Cloud.LoadBalancer component

The order in which the components of this blueprint will be provisioned is as follows:
“prod_net” → “web_server” → “web_lb”
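
A corresponding minimal blueprint sketch for the one-arm case might look like the following; again, the image and flavor names are just placeholders:

## One-arm example (sketch) – image/flavor names are placeholders
prod_net:
    type: Cloud.Network
    properties:
      networkType: existing    # existing network hosting both the VIP and the pool members
web_server:
    type: Cloud.Machine
    properties:
      image: ubuntu            # placeholder image mapping
      flavor: small            # placeholder flavor mapping
      networks:
        - network: '${resource.prod_net.id}'
web_lb:
    type: Cloud.LoadBalancer
    properties:
      routes:
        - protocol: HTTP
          port: 80
          instanceProtocol: HTTP
          instancePort: 80
      network: '${resource.prod_net.id}'
      instances: '${resource.web_server.id}'
      internetFacing: false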

 

NSX-T configurations for One-Arm Load Balancer

1. Load balancing service is created in the NSX-T manager. Similar to Two-Arm load balancer, “routes” are translated to virtual servers, application profiles, service pools and monitors in NSX-T.

 

2. A Tier-1 logical router is created and attached to the new load balancing service. The Tier-1 router is configured with the edge cluster allocated from the network profile. The Tier-1 router is NOT attached to a Tier-0 router. Instead, the network selected for the “prod_net” component is attached to the Tier-1 router via a Centralised service port. A Centralised service port allows attaching a logical switch to more than one Tier-1 logical router and enables VIP advertisement without the need to attach the Tier-1 router to a Tier-0 router.

 

Re-Use Load Balancing Service

You can choose to re-use an existing NSX-T load balancer service for all applications deployed within a project. You can create a load balancer service directly in the NSX-T manager, or you can deploy a new NSX-T load balancer service using a Cloud Assembly blueprint.

Select an existing load balancer service on the network profile Load Balancers tab. All blueprints with load balancer components that land on the network profile will use the selected load balancer service to provision virtual servers, application profiles, service pools and monitors. Virtual servers, application profiles, service pools and monitors created for the deployment will be removed from the existing load balancing service when the deployment is destroyed, but the load balancing service itself will not be removed.

Note: If a blueprint with a load balancer component is provisioned with an on-demand outbound or private network, a new load balancer service is always created in NSX-T. The load balancer selected in the network profile is not used for such applications. This is because machines provisioned with on-demand outbound or private networks will not be accessible from the Tier-1 router of the existing load balancer selected in the network profile. The Tier-1 routers created for the on-demand outbound or private networks are used to configure load balancing instead.

 

Deploy Re-Usable Load Balancer

In order to deploy a re-usable load balancer using a Cloud Assembly blueprint:

  1. Create a blueprint with a Cloud.LoadBalancer component. You can leave the instances and routes schema as empty collections (see the sketch after this list);
  2. Deploy the blueprint;
  3. When the deployment is created successfully, add the resulting Load Balancer to the network profile.
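
As a rough illustration of step 1, a re-usable load balancer blueprint could be as simple as the following sketch. The resource names are hypothetical, and the VIP is assumed to land on an existing network with a static IP range configured:

## Re-usable load balancer (sketch) – resource names are placeholders
prod_net:
    type: Cloud.Network
    properties:
      networkType: existing    # existing network with a static IP range for the VIP
shared_lb:
    type: Cloud.LoadBalancer
    properties:
      routes: []               # left empty per step 1
      instances: []            # left empty per step 1
      network: '${resource.prod_net.id}'
      internetFacing: false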

Note: Once the load balancer is used to deploy other applications, it will not be possible to destroy the deployment that created it. You will need to destroy all other deployments that use the network profile load balancer before you can destroy the deployment that created it.

 

Network Profile

When a Cloud.LoadBalancer object lands on an NSX-T network profile, a static IP address is allocated for the load balancer VIP. The existing or public Cloud.Network component attached to the Cloud.LoadBalancer component must be allocated on a network that has a static IP range configured.

You can configure an existing or public network to support static IP allocation:

  1. Click on the network name to edit network details;
  2. Specify IPv4 CIDR, optionally configure other IPAM settings;
  3. Click Save.

 

To create a static IP range for an existing or public network:

  1. Select a network that has CIDR configured and click “Manage IP Ranges” button;
  2. Click “New IP Range”;
  3. Give range a name, start IP address and end IP address;
  4. Click Create.

 

Network Policies

The Network profile → Network Policies → Network Resources section configures pre-existing NSX-T resources that are used to provision Cloud.LoadBalancer components. This section should be configured according to the load balancer type you would like to provision.

Two-Arm Load Balancer: 

  • Tier-0 router is required to provision a two-arm load balancer. Tier-0 router is attached to the Tier-1 router created for the Load Balancer. Load Balancer VIP is advertised upwards to the Tier-0 router.
  • Edge cluster is required to provision a two-arm load balancer. Tier-1 router is configured with the edge cluster to support load balancing services.

One-Arm Load Balancer:

  • Tier-0 router is NOT required to provision one-arm load balancer. Instead of connecting the Tier-1 router to a Tier-0 router, load balancer network is attached to the Tier-1 router in “Centralised” mode which allows load balancer VIP advertisement without attaching Tier-1 router to Tier-0 router.
  • Edge cluster is required to provision one-arm load balancer. Tier-1 router is configured with the edge cluster to support load balancing services.

Note: It is not required to select either a Tier-0 logical router or an edge cluster if you choose to re-use an existing load balancer service on the next tab.

Load Balancers

Alternatively, you can choose to re-use an existing NSX-T load balancing service to provision load balancer applications. If you select an existing load balancer service on this tab, all on-demand load balancers will use the existing load balancer service to configure service pools, VIPs, profiles and monitors.

The post vRA Cloud Load Balancer with NSX-T Deep Dive appeared first on VMware Cloud Management.


Cloud Migration Series: Part 2 – VMware HCX Overview

This post was originally published on this site ---

In Part 1 of our Cloud Migration series we discussed traditional migration methods – Cold Migration and vMotion – and their requirements. This gives you the ability to migrate virtual machines to the cloud with the same technology you use on-premises every day, sometimes without even thinking about it. In Part 2 we are going to discuss the third method of migration, which utilizes VMware HCX, and how this will provide additional features and flexibility to successfully migrate virtual machines into VMware Cloud on AWS.

 

What is it?

VMware HCX is a workload mobility platform – a migration tool that abstracts both on-premises and cloud resources and presents them as a single, fluid, resource for applications to consume. To put it another way, it allows for bi-directional migration of virtual machines from one location to another with (almost) total disregard for disparate vSphere versions, virtual networks, connectivity, etc. It does this by creating a secure, high performance, overlay that includes WAN optimization, load balancing, and VPN with Suite B encryption between your on-premises data center and your Cloud SDDC. This is made possible through a handful of appliances deployed at each location.

 

That’s great, but what does all of that REALLY mean?

  • It means you can migrate virtual machines in bulk instead of one at a time.
  • It means you can kick off a migration right now or schedule your migrations for a different date/time (like Sunday morning at 02:00).
  • It means you can migrate VMs from unsupported vSphere 5.x (and soon 6.0) without having to go through an upgrade process.
  • It means you can migrate VMs off of old virtual switches like Cisco Nexus 1000v.
  • It means you can upgrade VM Tools, Virtual Hardware, or even force unmount ISOs as part of the migration process.

 

Not only does VMware HCX let you migrate virtual machines from on-premises to the cloud, but also between on-premises locations, and Cloud to Cloud (C2C).

 

HCX Dashboard

 

Appliances

HCX Manager

Deployed next to the vCenter Server. There are two types of managers: Cloud and Enterprise. Cloud is deployed by the cloud provider, whereas Enterprise is deployed by the customer on-premises. The HCX Manager is where administration and configuration of HCX takes place. All other HCX components (services) within the HCX Service Catalog are deployed through the HCX Manager.

 

HCX Interconnect (HCX-WAN-IX)

Provides replication and vMotion migration capabilities over the internet and private lines to the HCX target site with strong encryption and traffic engineering.

 

HCX WAN Optimization (HCX-WAN-OPT)

Provides line conditioning and data de-duplication to create LAN-like performance to the cloud over VPN, without the need for AWS Direct Connect (DX).

 

HCX Network Extension (HCX-NET-EXT)

Provides secure Layer 2 extension capabilities (VLAN, VXLAN, and Geneve) for vSphere or 3rd party distributed switches and allows the virtual machine to keep the same IP and MAC address during migration. Essentially, we’re extending Layer 2 from on-premises to VMware Cloud on AWS to help your migrations remain seamless.

 

HCX Sentinel Gateway (HCX-SGW)

If the need arises to migrate non-vSphere guests such as KVM or Hyper-V based virtual machines, the SGW allows for connecting and forwarding the guest workloads between the Sentinel software installed on the virtual machine and the Sentinel Data Receiver in the target environment.

 

HCX Sentinel Data Receiver (HCX-SDR)

The SDR works in conjunction with the Sentinel Gateway to receive, manage, and monitor data replication operations at the destination environment.

 

HCX Appliance View

 

Advanced vs Enterprise

The HCX features available to you are based on the type of license you’re using. VMware HCX Advanced comes packaged with NSX Datacenter Enterprise Plus, VMware Cloud on AWS, VCF Enterprise, and from various VMware Cloud Provider Partners (VCPPs). VMware HCX Enterprise is an additional license available for purchase that you can add on to your existing licensing.

 

VMware HCX Advanced provides:

  • HCX Interconnect Services
  • WAN Optimization
  • Network Extension
  • Bulk Migration
  • vMotion Migration
  • Disaster Recovery

VMware HCX Enterprise provides these additional features:

  • OS Assisted Migration (OSAM)
  • Replication Assisted vMotion (RAV)
  • Site Recovery Manager Integration for Disaster Recovery

 

Migration Options

There are five migration options available when using VMware HCX.

 

Cold Migration

This method is similar to a cold migration initiated via the vSphere Client. It’s simply a Network File Copy (NFC) of the VM to the target site through the same network path an HCX vMotion would use. It’s automatically used for any powered off virtual machine migration.

 

HCX vMotion

HCX vMotion is the ability to transfer a live VM between HCX enabled sites capturing the active state of the VM in its entirety. The bonus to HCX vMotion is the additional compatibility it provides allowing migration from vSphere 5.5 – 6.7+, hardware v9+, migration without EVC, and it works with isolated and overlapping vMotion subnets.

 

HCX Bulk Migration

Bulk migration uses host-based replication (HBR) to move a live virtual machine. This is accomplished by replicating the virtual machine (seeding) to the target site. Once an initial full sync is completed, delta synchronization begins. Once the delta is complete, a switchover is triggered. Note that the VM stays online during the replication process up until the switchover. The switchover can be done immediately or scheduled for a specific date/time. Scheduling the switchover is ideal for maintenance windows. When the switchover occurs, the VM is powered off, and the replica is powered on. If there is a problem with the new VM powering on, the original VM will be powered back on. VMware HCX renames the powered off VM in the source site and appends a binary timestamp to the VM name. The original VM is then moved into a new folder called “Migrated VMs” to help avoid any confusion.

 

Replication Assisted vMotion (RAV)

Replication Assisted vMotion takes your migration to the next level by combining HCX Bulk Migration with HCX vMotion. This provides parallel operations, resiliency, and scheduling, with host-based replication for zero downtime VM migrations.

 

OS Assisted Migration (OSAM)

As with RAV, OS Assisted Migration also requires HCX Enterprise licensing. It uses Sentinel software that is installed on the Linux or Windows based guests. The Sentinel software obtains the guest configuration and assists with the replication to the destination through the Sentinel Gateway to the Sentinel Data Receiver. This migration type is ideal for moving virtual machines from non-vSphere environments such as KVM or Hyper-V. Similar to a bulk migration, the VM remains online during the replication process. Once initial replication is completed, HCX performs a hardware mapping, driver installation, and OS configuration for the new vSphere virtual machine, and reboots the VM. After the VM is rebooted, a delta sync is kicked off, and switchover is triggered. VMware Tools is installed on the migrated virtual machine as a final step.

 

Additional Options

The HCX interface also provides a set of options that can be applied pre or post migration depending on method and behavior. These options include:

  • Force Power-Off VM
  • Retain MAC
  • Upgrade Virtual Hardware
  • Upgrade VM Tools
  • Remove Snapshots
  • Force Unmount Container
  • Select Destination Container
  • Select Destination Storage
  • Select Virtual Disk Format
  • Select Destination Network

 

Migrating your workloads to the cloud might seem daunting, but VMware HCX aims to add flexibility and simplify the process, while maximizing uptime. Stay tuned for Part 3 where we’ll dive into VMware HCX just a little bit further and discuss the anatomy of the Service Mesh.

 

Resources

The post Cloud Migration Series: Part 2 – VMware HCX Overview appeared first on VMware vSphere Blog.

vSphere Tweet Chat Recap: National Cybersecurity Awareness Month

This post was originally published on this site ---

Did you know that October is National Cybersecurity Awareness Month? To ensure VMware users are equipped with the knowledge to stay secure all 365, the vSphere team hosted a tweet chat featuring our experts. From the team, we have Mike Foley and Bob Plankers, both Technical Marketing Architects for vSphere Security, who joined us to share their invaluable insights and tips. We discussed how vSphere can assist with cybersecurity woes, but you'll have to continue reading to find out more! Check out the full chat recap:

A1: Routine and regular patching and good account & password hygiene practices are the two biggest ways you can stay secure. Beyond that, defense in depth (multiple layers of protections), isolation and firewalling are big helps, too. – Bob

A1: People get scared of patching but it’s really a capacity planning issue, and a political one. Capacity planning being that we need spare capacity in the cluster to vMotion, so a rolling cluster restart can happen without issues. – Bob

A1: Patching is political insofar as it’s easy to make decisions that disable vMotion for some guest OSes, and then that throws a wrench into seamless updates. If you have to do those sorts of things, then it becomes an organizational discussion about how to patch. – Bob

A1: Sign up for the VMware Security Advisories here. They will point you to the latest info on security issues with VMware products and where to get more info. – Mike

A2: Before you install ESXi enable UEFI, Secure Boot, and make sure you have a trusted platform module (TPM) 2.0 installed and correctly enabled. – Bob

A2: If you have these things set up at install time ESXi will see them and do the right things from the start, saving you time and effort. – Bob

A2: Keep SSH turned off! Use it ONLY for troubleshooting. Limit just how many admins have root level access. If they have the “administrator” role then they ARE root! Consider normal or strict lockdown mode! – Mike

A2: Learn more about enabling Secure Boot here. – Mike

A3: Don’t run Bitlocker inside the guest when you have great options in VM Encryption and vSAN Encryption! Both of those are completely guest-agnostic, meaning you’ll have one process to protect your workloads, versus different processes for different releases of OSes. – Bob

A3: VM Encryption can protect a VM in-place, on the storage you have. Downside is that the VM becomes a big random blob of data on your storage array, so if you’re using deduplication that may be an issue. – Bob

A3: VM Encryption also gets you vTPMs and extra permissions in vCenter to help prevent things like decryption. Want to prevent someone from taking a copy of an Active Directory DC? That’ll help a lot with that. – Bob

A3: vSAN Encryption is a native part of vSAN and preserves the ability to deduplicate and compress, which is really nice. You can use the two together – enable VM Encryption to get the permissions and other features, and vSAN Encryption to do the disk part. – Bob

A3: Both features need a Key Management Server (KMS) so there’s some architectural considerations there. It’s important that the KMS be very reliable and protected, else you lose access to your VMs. – Bob

A3: To run Bitlocker you need a TPM. To enable a vTPM you need VM Encryption. So why go into the guest?? Read more about the different types of encryption here. – Mike

A4: Yes, get a TPM installed. They’re $20-40 extra and enable all the advanced attestation features in vSphere 6.7+. Also, if you’re doing vSAN follow the best practices (two disk groups per node, for example). – Bob

A5: Trusted Platform Module, it’s a little piece of hardware that stores information securely. It’s cryptographically bound to the server so it can’t be removed and read later. It is not fast, but it is essential. – Bob

A5: Learn more about TPM 2.0 and how we use it in ESXi with this quick video. And blog. – Mike

A6: The TPM on the server is only for use by ESXi, but if you enable VM Encryption you can use the vTPM feature, which is a TPM 2.0 compliant device for your virtual machines. – Bob

A6: a vTPM is a virtualize TPM device. Secured using VM Encryption. Looks and acts just like a “real” TPM. More info here. – Mike

A6: A quick video overview of a vTPM. vTPM’s root of trust is in the key manager, not the actual TPM hardware. A physical TPM has about 178k of space! FAR too small and slow for multitasking! – Mike

A7: Encrypted vMotion works by having vCenter generate a one-time encryption key, and it gives it to the two ESXi hosts to use while moving the VM. It doesn’t need a KMS or anything special. – Bob

A7: It’s a per-VM setting and by default it’s set to “opportunistic” meaning that it’ll do it if supported. You can set it to required, too (which is a good idea). Do that on your VM templates and use PowerCLI to retrofit existing VMs. – Bob

A8: You can, but you shouldn’t. It just confuses security perimeters. Have the hard conversation with your firewall admins and management. – Bob

A8: A 2nd NIC is only supported for VCHA. VCSA was not designed to straddle two security zones. We HIGHLY recommend you don’t add a 2nd NIC. There’s no way to say, “run this service on this NIC and that service on that NIC”. – Mike

A9: Enable secure boot and host attestation features in ESXi and you’ll have that covered. Please don’t install extra tools on ESXi, it’s an appliance and doing that sort of thing endangers stability and support. – Bob

A9: To that end, unless it’s VMware Global Support Services telling you to install or run something from the ESXi or vCenter Server CLI you should be careful of what you’re doing and the effects on support and stability. – Bob

A9: “Just because you can doesn’t mean you should.” 🙂 – Bob

A9: Enable Secure Boot! It will validate every VIB at boot time and will ensure that only signed code is run. Take it one step further and enable TPM 2.0 to provide a report that confirms to your security folks that Secure Boot is enabled. – Mike

A9: Then use a logging solution to monitor what root users on ESXi are doing. All shell commands get sent to syslog. You can look for strange behaviors. (And logging in to a shell on ESXi should be a break-glass scenario!!) – Mike

A10: vSphere has very secure defaults out of the box, but compliance is a measure of the whole solution and not a specific component. That said, vSphere is a trusted part of compliant infrastructures all around the world, as it’s the most secure hypervisor on the market. – Bob

A10: No software is PCI “out of the box”. All software that could be used in a PCI environment is going to need some level of security configuration. What is cool is that since 6.0 the number of steps to configure has shrunk dramatically!! – Mike

A11: Technically you can do anything you want, but if you want support and stability please don’t. ESXi and vCenter Server are appliances and, like your fridge says on the back, no user serviceable parts inside. – Bob

A11: If you do think you need to do something contact Support! They’ll know what to do and how to safely help you. Doubly so if it’s a security concern. We have experts that track all of our components and can advise you. – Bob

A11: This also goes for compliance scanners. They tend to treat ESXi like Linux and, let me be clear, ESXi IS NOT LINUX. Use a scanner that understands vSphere correctly. – Bob

A11: ESXi is not a general-purpose OS and the VCSA is a pre-configured and tested OS. As such, use only VMware mechanisms to keep them up to date and on ESXi, if your vendor supplies a driver then ensure the VIB is digitally signed. We don’t support installation of 3rd party s/w. – Mike

A11: Don’t configure additional repos for the VCSA and install other components. This puts your VCSA in an untested and unsupported configuration. Security patches will be released ASAP. You are only one click away from updating on VCSA! – Mike

A12: Lots of stuff. Check out the vSphere Security Configuration Guide and the VMware Validated Design NIST 800-53 Compliance Kit to start. – Bob

A12: Download the latest vSphere Security Configuration (nee “hardening”) Guide here. – Mike

A12: Since vSphere 6 we have been making as many of the settings as possible “secure by default.” The number of “hardening” settings is down to a handful. The rest are settings you should audit or settings that are site specific. – Mike

A12: There is not a guide for every release. A new guide only goes out when there’s significant changes. Always use the most recent guide for your release. e.g. 6.7 Update 1 is fine for 6.7 Update 2. – Mike

A12: The vSphere Security Configuration Guide is full of PowerCLI examples, too, and a great way to start automating things in your infrastructure. – Bob

A13: Right now, there are a number of options that plug in via the Active Directory and LDAP connectivity. There is also native RSA SecurID support. But… – Bob

A13: …if you’re going to be at VMworld in Barcelona you should check out @mikefoley’s sessions which will have exciting news and technical previews. – Bob

A13: Today we only support RSA SecurID and SmartCards for 2FA on vSphere. If you are at VMworld in Barcelona or were in San Francisco, we had a tech preview on some work we are considering around federated identity. – Mike

A13: For sure — identity federation is as hot of a topic as the Kubernetes & Project Pacific news. – Bob

A13: This would allow for a 3rd party identity provider to do the authentication and then redirect you to vCenter. The 3rd party idp (MS ADFS in our demo) would be able to support multiple authentication methods. – Mike

A14: @Mikefoley and I have a great session on “vSphere Security: News You Can Use” which will cover all the latest developments, and he also has a number of tech preview sessions about features being considered for the future. – Bob

A14: If you’re a TAM customer there are also sessions on securing AD and designing vSphere with security in mind. – Bob

A15: Since you’re clearly on Twitter already, follow @VMwarevSphere and @VMwareSecurity and @vSphereSecurity. Follow the blog in your feed reader. You can certainly follow @mikefoley and I as well, YMMV though. 🙂 – Bob

A15: Follow some of the other tech marketing folks on our blog to keep up on the latest and greatest! – Mike

A15: vSphere Central is a GREAT resource for FAQ’s and walkthroughs. – Mike

Thank you for joining our National Cybersecurity Awareness Month vSphere Chat, featuring (awesome) experts from #vSphere! See you next time, and keep an eye out for more information in our blog: https://blogs.vmware.com/vsphere/

A huge shout out to our experts, Bob Plankers, @plankers, and Mike Foley, @mikefoley, as well as all the other participants from around the globe who tuned in and contributed. Remember, #BeCyberSmart today and beyond!

Follow us at @vmwarevsphere and stay tuned for our monthly expert chats and join the conversation by using the #vSphereChat hashtag.

Have a specific topic you’d like to cover? Reach out and we’ll bring the topic to our experts for consideration. For now, we’ll see you in the Twittersphere!

The post vSphere Tweet Chat Recap: National Cybersecurity Awareness Month appeared first on VMware vSphere Blog.

NSX Cloud – Choice of Agented or Agentless Modes of Operation

This post was originally published on this site ---

VMware NSX, through its NSX Cloud offering, enables customers to implement a consistent networking and security framework for workloads hosted across the on-premises Data Center and public clouds such as AWS and Azure. Every cloud orchestration and management tool, immaterial of what use case it has set out to solve, has one question to answer: is it an agent-based solution or an agentless solution? More often than not, the answer to this question has direct implications for the ability of the cloud admin team to deploy and manage the solution. But do we really have to choose?! What if we can have both agented and agentless modes of operation?! That's the question we asked ourselves at VMware NSX, and here we are with NSX 2.5.

Meet the New NSX Cloud Modes of Operation…

NSX Enforced Mode provides a “consistent” security and networking policy framework between your on-premises DC and public cloud environment. You can have a unified, corporate-wide firewall policy which will be enforced as an NSX policy, by having an NSX footprint inside each virtual machine running in the cloud. Why is it required? Well, the NSX architecture has 3 layers – management-plane, control-plane, and data-plane. Within the data center, the NSX data-plane is deployed at the hypervisor layer of each physical host. However, in a public cloud, you, as a customer, do not have access to deploy anything at the physical host level. Hence, for public cloud workloads, we are required to deploy the NSX data-plane as an agent, i.e., nsxtools. One can think of nsxtools as being similar to the ‘vmtools’ within each workload VM running on-premises. Nsxtools allows us to enable the full feature set of NSX directly within each workload VM running in the public cloud, since it gives us a data-plane presence. Features such as NSX Distributed Firewall (DFW), policy-based traffic routing, and many more are enabled because of nsxtools. And as we add more and more NSX goodness, all these features will be available for your public cloud workloads when you are in NSX Enforced Mode. Now, we also realize the operational challenges that deploying and managing an agent brings, hence we have automated a large portion of the nsxtools lifecycle management, from deployment (auto-deploy through Azure VM Extensions, etc.) all the way to managing all upgrades directly through NSX.

For customers that cannot deploy nsxtools within their cloud VM workloads, but still need a consistent security/networking policy framework across the on-premises Data Center and public cloud, we now introduce the Cloud Enforced Mode in NSX 2.5. Cloud Enforced Mode provides a “common” security and networking policy framework between your on-premises Data Center and public cloud environment. You can still continue to have a unified, corporate-wide firewall policy. Instead of being enforced as an NSX policy, it will be translated into the respective public cloud's native security constructs. You do not need any footprint of NSX (aka nsxtools) in your public cloud workloads. As there is no footprint in the workload, policy enforcement is done directly by leveraging native cloud capabilities.

To put things into perspective, here is a snapshot of what your NSX deployment would look like:

  • NSX Manager and Cloud Service Manager comprise the management and control plane of NSX. They will be hosted in your on-premises DC.
  • You could have a DirectConnect, ExpressRoute, or site-to-site VPN connection from your on-premises DC to your public cloud environment.
  • In your transit VPC/VNet (a.k.a. service VPC/VNet or hub VPC/VNet), we deploy NSX Public Cloud Gateway (PCG). Think of PCG as an NSX edge footprint hosted in public cloud.
  • And finally, your data plane will be your workloads operating on either of the two modes.

 

How does the mapping work in Cloud Enforced Mode?

Here are two snapshots showing the mapping of NSX Distributed Firewall(DFW) Policy to Native Security Groups in AWS and Azure.

Snapshot of NSX DFW policy translated to Native AWS Security Groups

 

Snapshot of NSX DFW policy translated to Native Azure Security Groups

 

Can we have NSX Cloud discover Public Cloud Native Services?

Yes, We Can. NSX Cloud provides single-pane-of-glass visibility across your on-premises and public cloud environment. In addition to providing EC2 and VM inventory information, NSX Cloud also discovers native cloud service endpoints. It currently discovers four AWS services and four Azure services and enables a user to program security rules subject to capabilities from the Public Cloud provider. More services will be discovered in the future releases of NSX Cloud.

What are the feature differences between the two modes?

With “NSX Enforced Mode”, you get to leverage all the features that NSX has to offer as a platform. With “Cloud Enforced Mode”, since we do not have access to the data-plane, and given the fact that we are using public cloud’s native networking and security constructs, we are bound by what is being offered by your public cloud vendor of choice. Hence features such as L7 security, Policy-Based Overlay, Packet Capture are only available in NSX Enforced Mode.

Can we have both the modes?

Yes, We Can. The management boundary is at VPC/VNet level, i.e. you could have a set of VPCs/VNets operating in Cloud Enforced Mode and another set of them operating in NSX Enforced Mode within the same account/region/subscription. If you were to go with “Cloud Enforced Mode”, every workload inside a managed VPC/VNet is managed by NSX Cloud. If you were to go with “NSX Enforced Mode”, you get more granular control – granularity is at VM level, wherein VMs that are ‘tagged’ inside a managed VPC/VNet are managed by NSX Cloud.

What happens if I have a firewall policy that cannot be realized?

In Cloud Enforced Mode, you inherit the native networking and security constructs of your preferred public cloud vendor. Hence, it is possible that under certain circumstances, your intended firewall policies cannot be realized. NSX Cloud will flag these rules and notify you. You could then choose to either modify the firewall rule or switch to NSX Enforced Mode.

Final Thoughts

A pattern that we repeatedly see in our conversations with customers is that different lines of business within the same organization want different modes of operation. These calls are largely driven by business needs, operational efficacy, and security mandates. Sometimes, there is also a learning curve as one tries to figure out what model works best for them. With VMware NSX, we provide the flexibility to have both modes so that customers aren't bound by a tool's architecture as they start designing their hybrid cloud environment.

Here are some pointers if you would like to know more about VMware NSX

Resources

NSX-T Resources

The post NSX Cloud – Choice of Agented or Agentless Modes of Operation appeared first on Network Virtualization.

Must-See sessions at VMworld Europe

This post was originally published on this site ---

With less than one week before VMworld Europe, we are all pulling our schedules together to get the most out of the programming available in Barcelona.

In addition to the Solution Exchange, the Showcase Keynotes (and all the food and drinks), there are quite a few sessions in the Hybrid Cloud Operations track that are worth a look.

Here are my do-not-miss sessions over the course of the week, in no particular order:

 

HBO1633PE Tales from the Trenches: Realizing the Full Value of Self-Driving Ops

Come see the vRealize Operations team and a panel of customers share their use case stories and look at some of the new features available in vRealize Operations 8.0

 

HBO1724PE Back with Act 2.0: SDDC, Automation, Apps and Beyond

Etisalat, one of the world’s biggest telcos, will share their vRealize Automation story that started in 2016 with Act 1.0

 

HBO3559BE vRealize Automation Cloud: Developers’ Paradise on any cloud, on any device

Swisscom will present this session on their usage of vRealize Automation Cloud, formerly Cloud Automation Services.

 

HBO2789BE Drive Accountability and Efficiency with VMware’s Cloud Management Platform

This is a fantastic high-level session on using the VMware Cloud Management platform and its financial and operational benefits.

 

HBO2643BE The Story of VMware Multi-Cloud

Another great high level session, this one is hosted by two of our Europe-based technologists who will bring their combined years of experience in the field to this critical multi-cloud story.

 

HBO1689BE Get More Insight into Your Network with vRealize Network Insight

This session is a result of expert content from multiple VMware Business Units who have come together for this deep dive into vRealize Network Insight.

 

There are dozens of sessions that feature Cloud Management content this year, and we want to make sure you get the most out of the show. So register, go to the content catalog, and get enrolled in these sessions right away – space will be limited and many sessions are already at capacity.

Hope to see you all there!

 

Cloud Management VMworld Europe

The post Must-See sessions at VMworld Europe appeared first on VMware Cloud Management.

Explore vSphere at VMworld 2019 Europe

This post was originally published on this site ---

Explore where vSphere can take you at VMworld 2019 Europe

Barcelona | November 4-7

Don’t miss the opportunity to take your career and organization to the next level in Barcelona at VMworld 2019. Find out what’s driving anticipation of VMware’s Project Pacific and see what’s next for vSphere.

VMworld 2019 Europe will feature many opportunities to learn about the latest in vSphere server virtualization technology and operations. Discover hybrid cloud and multi-cloud solutions that will help you prepare to build, run, manage, and secure business-critical applications on any cloud.

Here’s a quick reference guide to some key sessions and activities that will help you discover what’s new in vSphere and learn more about Project Pacific.

  • Swing by the Hybrid Cloud pod at the VMware booth to get a close-up look at vSphere and Project Pacific.
  • Test drive the latest innovations with Hands-on Labs in Hall 6.0.
  • Meet the Experts
  • Register for vSphere and Project Pacific sessions at VMworld EU 2019.

 

Featured Breakout Sessions

Attend sessions that will help you learn how to power your Kubernetes environment with VMware, how to implement best practices for hybrid cloud infrastructure, how to get the most from business-critical applications running on vSphere, and much, much more. Here are a few to consider:

What’s New with vSphere? [HBI2893BE]

This session is a VMworld staple. Come find out what’s new with the most robust and widely adopted virtualization platform in the industry. Read more and register here.

Planning for a Successful vSphere Upgrade [HBI1462BE]

Does upgrading your VMware vSphere environment seem like an overwhelming task? Armed with the correct knowledge, it doesn’t have to be. Read more and register here.

How VMware vSphere and NVIDIA GPUs Accelerate Your Organization [HBI2497BE]

GPUs are invading the data center. From virtual desktops to simulations and media rendering, from big data analytics to machine learning and more… Read more and register here.

vSphere Security: News You Can Use [HBI1953BE]

We will be discussing the latest improvements to vSphere for security and compliance, current trends around vSphere Admins & information security, and VMware’s work to make it easy to be more secure now and into the future. Read more and register here.

Accelerating Oracle and SAP with vSphere 6.7 Hardware Accelerators [HBI1420BE]

In this session, you will learn how to accelerate Oracle and SAP Enterprise workloads with these new vSphere features for I/O-intensive workloads to enhance high availability, speed up data loading, and more. Read more and register here.

Leveraging the Latest Server Technologies in vSphere [HBI2362BE]

Join this session to learn how vSphere VMs can consume and expose the latest server hardware technologies to virtualized workloads. You will learn how these technologies are relevant to enterprise applications and explore the configuration options for putting these innovative hardware devices to work. Read more and register here.

 

And these sessions to learn more about Project Pacific and Tanzu Mission Control:

Project Pacific 101: The Future of vSphere [HBI1761BE]

Project Pacific, a technology preview, enables enterprises to accelerate development and operation of modern apps on the VMware vSphere platform while continuing to take advantage of existing investments in technology, tools and skillsets. Read more and register here.

Introducing Project Pacific: Transforming vSphere into the App Platform of the Future [HBI4937BE]

Project Pacific changes the game by bringing containers, Kubernetes and VMs natively into vSphere, running as peers on ESXi. Join this session to learn all that Project Pacific brings to the vSphere platform. Read more and register here.

Run Upstream Kubernetes Consistently Across Clouds with VMware Tanzu [KUB1840]

Attend this session to learn how VMware Tanzu Kubernetes Grid can provide common Kubernetes distribution across environments including vSphere (with Project Pacific), VMware Cloud on AWS, private cloud, public cloud, and edge. Read more and register here.

Managing Clusters: Project Pacific on vSphere & Tanzu Mission Control [KUB1851]

Join this session to understand how you can use Project Pacific and VMware Tanzu Mission Control to consistently and scalably enforce critical policies such as quota, access, networking, security, and backup policies across multiple clusters running on vSphere, as well as in hybrid- or multi-cloud environments. Read more and register here.

Project Pacific: Guest Clusters Deep Dive [HBI4500BE]

In this session we’ll dig into the Kubernetes cluster service and how it works. Along the way we’ll dive into Cluster API and look at the VM operator and how it exposes virtual machines through a Kubernetes API. We’ll also dig into networking and storage services for Kubernetes clusters. Read more and register here.

Project Pacific: Supervisor Cluster Deep Dive [HBI1452BE]

In this session we’ll dig into the technical details of the supervisor cluster. We’ll walk through the architecture of supervisor clusters and how we use Kubernetes to orchestrate a cluster of ESXi hypervisors. We’ll also look at how we link all of this to vCenter and how it fits into the vCenter UI. Read more and register here.

The Native Pods Deep Dive [HBI4501BE]

In this session we’ll dig into the details of native pods, how they work, and how they use ESXi to provide a fast and secure runtime for containers. Expect to go deep into the intersection of ESXi and Kubernetes and how the hypervisor can provide compute, network, storage, and image services to a Pod runtime. Read more and register here.

 

Explore These Tracks for Additional Breakouts

 

Meet the Experts

In Hall 8.0, take advantage of this small-table setting where about five people per table can sit down and speak with a subject matter expert on topics including Project Pacific, vSphere security, lifecycle, and hybrid cloud with VMware Cloud on AWS and VMware Cloud on Dell EMC. You can find a list of the Meet the Experts sessions here.

 

Solutions Exchange Booth

Join the vSphere team at any of our six booth stations for demos and information, and learn how you can:

  • Manage Hybrid Cloud at Scale with vSphere [VBD8100E]
  • Rapidly Deliver Modern Apps Hybrid Cloud Infrastructure with Project Pacific [VBD8102E]
  • Advance Modern Apps with Hardware Accelerators for AI, ML and HPC with vSphere [VBD8104E]
  • Bring the Cloud Services to your Data Center and Edge with VMware Cloud on Dell EMC [VBD8106E]

 

Half-day Workshops

Take a deeper dive into why and how to upgrade to the latest and greatest vSphere version:

What’s New in vSphere, and How to Upgrade to the Latest Version [HBI1812TE]

Mon., 04 November | 13:00 – 17:00 | Hall 8.0, Room 29

With vSphere at the core of your data center and its integration with other VMware products, upgrade time can be overwhelming. Learn what resources you can use to prepare. This technical tutorial will take you through the process from start to finish. Read more and register here.

Stress No More! How to Successfully Manage Infrastructure Lifecycle [HBI1472TE]

Mon., 04 November | 13:00 – 17:00 | Hall 8.0, Room 21

Learn the pain points, best practices, automation techniques, and other aspects of infrastructure lifecycle. A complete recording plus GitHub scripts and guides will be provided. Read more and register here.

 

Learn more

VMworld Europe

vSphere Blog

vSphere Blog: Project Pacific

Web: vSphere

Web: Project Pacific

Cloud Native Apps Blog: Tapas and Kubernetes: What to See and Do at VMworld 2019 Europe

Follow us on Twitter:

#VMwarevSphere

#ProjectPacific

#VMwareTanzu

The Venue

All talks will be held at the:

Fira Gran Via
Carrer del Foc 10
08038 Barcelona, Spain
www.firabarcelona.com/en/gran-via


The post Explore vSphere at VMworld 2019 Europe appeared first on VMware vSphere Blog.
