Channel: VMPRO.AT – all about virtualization

Who Are The People Behind vSphere?


(To mark the end of the year we are posting every day through January 1 with lighter vSphere and VMware topics. We hope you enjoy them as much as we do. See them all via the “2019 Wrap Up” tag!)

From time to time we are asked about the roles inside VMware. This sometimes happens when we use a lot of jargon, like "need to open an SR with GSS, ask your TAM!" As part of our year-end wrap-up we thought it might be interesting to list some of the many roles people have inside VMware. Keeping a product like vSphere moving forward takes an army of people of all types and specialties. Here's what we call them and what they do for you, as well as some other tidbits that might help someone new to the VMware ecosystem.

Account Executives (AE): These folks are probably the most familiar to customers, as they’re the “sales people” assigned to each account. They’re assigned to certain regions (Asia/Pacific/Japan or APJ, Europe/Middle East/Africa or EMEA, Latin America or LATAM, and so on), and to certain types of customers such as Commercial, Enterprise, State & Local government & Education (SLED), etc. They manage the relationship between VMware and customers and are often the main point of contact for many of our customers. By design, AEs tend to be less technical and more business-oriented, because they are teamed up with…

Sales Engineers (SE): These folks are the technical side of the sales teams, focusing less on the business aspects and more on the technology itself. SEs have a wide range of technical knowledge and often come from backgrounds where they were designing, building, and operating these technologies as part of corporate teams. SEs are the starting point for technology-oriented conversations and have access to other, deep technical resources if a customer needs them. Like Account Executives, Sales Engineers are regional and belong to a certain customer segment.

If you're a customer you should always know who your AE and SE are. Let us know if that isn't the case!

Specialist Teams: Many VMware products have teams of SEs that, instead of being assigned to a customer, are assigned to a product like NSX or vSAN instead. These folks can talk at length (and in depth), both technically and non-technically, and will lend a hand when a customer needs to know more about that particular product. These folks tend to be regional as well. Want to go deep about a potential vSAN or NSX deployment? Ask your AE, SE, or TAM to arrange a specialist for you.

Solution Architects: As you might guess, Solution Architects design solutions! They bridge the business and technical sides of IT together to help customers design and build world-class infrastructures and product deployments. They spend time with a customer to learn about their environment and then design something that really meets their needs.

Professional Services (PSO): Going hand in hand with Solution Architects are the folks in the PSO, or Professional Services Organization. Many VMware folks are great at designing solutions, but PSO is filled with folks who excel at building and running them. They are the people to call when you need a hand getting something off the ground.

Technical Account Managers (TAM): Some customers that have Enterprise License Agreements (ELAs) with VMware also have a Technical Account Manager. These folks are part of our professional services organization and assigned specifically to a customer. They become the customer’s single point of contact with VMware. Have a question about a product? Your TAM can help. Need a support case opened? Your TAM can do that. Want to know what kind of license you need for a product? How to implement vSAN? What the difference is between the VMware Certificate Authority modes? No problem, these folks will help. They’re a wonderful resource and often become trusted advisors.

Global Support Services (GSS): Sometimes VMware mentions "GSS," and when we do we mean our support organization. These folks around the world are the front line for questions of all types, from licensing to troubleshooting, and they often know more about trending issues than any other part of the company because they hear about them first. VMware is also guilty at times of referring to support cases as "SRs," short for Service Requests. When you open a case with VMware you'll get an SR number, which can be used to track the case through the online portals, with a TAM, or over the phone.

User Experience (UX): VMware has a small army of designers that focus on the human side of our products. Does the way a customer uses a product make sense? Is the interface clear? Is the interface accessible? Will something in the way a user experiences our product cause them to make a mistake or do something unintended and dangerous? These folks think about that all the time, and you can see it if you use things like the HTML5 vSphere Client, which is a very visible example of how they streamlined tasks so that vSphere admins’ lives are easier. They write about their work over at vmware.design and are definitely worth keeping an eye on.

Information Experience (IX): No software company would survive without documentation experts. VMware has wonderful people that develop our documentation, making it easy for everyone to know how to use a feature. These folks also do a TON of videos around vSphere features, too, which are worth the time to watch if you’re interested in a feature they cover. Like UX people, IX folks are unsung heroes inside any software company.

Product Managers (PM): Product managers are the people inside VMware that steer the ship when it comes to the products themselves. They manage the roadmaps for products, figure out what customers want and need, and help guide the organization to building amazing products. These folks have an enormous and tough job, being forced to simultaneously see the future but react to present issues, too. PMs tie everything together and manage features and products from beginning to end.

Engineers: VMware engineers are the folks who develop and maintain the products from a software development perspective. They share the tough job PMs have because they also must deal with both the present and the future. The future means working with product managers to implement what's on the roadmap, and to change the roadmap if it's discovered that something needs to change. The "present" comes in the form of escalations from GSS, which escalates to our engineering teams when a customer has a problem that looks like a bug (a purely theoretical exercise, of course, since there's no such thing as a bug in VMware software. *grin*). Engineering also handles continuous testing of our products, release management, and more. It is a large and multifaceted organization inside VMware and a single paragraph here cannot do it justice!

Product Marketing (PMM): Product Marketing Managers (PMMs) develop the materials and programs that promote VMware and vSphere. Their work is everywhere; there is too much of it to link to here. Did you catch VMware at Tech Field Day? If so, that was set up by product marketing. When you download an eBook, like the one on Upgrading to VMware vSphere 6.7, it was developed by PMMs in conjunction with…

Technical Marketing (TMM): Technical Marketing Management (TMM) is the technical side of Product Marketing. For vSphere, it's a small group of intensely technical people who generate content like white papers, videos, documentation, code samples and examples, and blog posts such as this one! TMM talks to customers, gives presentations at VMUG meetings, helps with beta programs, works with PMs and Engineers on ideas for new and improved features, and generally helps vSphere admins and VMware understand one another so everyone can use vSphere and other products most effectively.

There are hundreds of other roles at VMware, but hopefully this gives you some idea of who does what from a vSphere perspective. If you’re ever thinking about joining us please visit the VMware Careers site to see the wide range of opportunities that are open.

(Come back tomorrow for a look into the charitable giving and service work that VMware employees do! For more posts in this series visit the “2019 Wrap Up” tag.)

The post Who Are The People Behind vSphere? appeared first on VMware vSphere Blog.


VMware Loves Service & Giving – The VMware Foundation


To mark the end of the year, we are posting every day through January 1 with some less technical vSphere and VMware topics. We hope you enjoy them as much as we have enjoyed writing them. See them all via the "2019 Wrap Up" tag!

Tech as a Force for Good

It’s no secret that VMware, as an organization, strives to be a force for good in the technology sector and beyond. In fact, at VMworld Europe this year, CEO Pat Gelsinger closed out the opening keynote on the topic of Tech as a Force for Good.

As a company, VMware empowers its people to support their communities in many ways. Let’s hear from Philip Savage, Enterprise Account Manager, and VMware Foundation’s UK Giving Network Lead. “Our Citizen Philanthropy approach is not like other companies’ giving back programs. We have democratized it. It’s not about being told by a CxO that this is the charity we’re supporting this year.” Instead, VMware enables all employees to support the causes that mean the most to us.

VMware Foundation - VMware Rebuild Mexico Team

VMware’s strategic approach to giving, called Citizen Philanthropy, fosters a culture of service by amplifying the individual contributions of VMware people’s time, talents, and resources to their nonprofits of choice through VMware Foundation global programs.

Here are three examples of how these programs amplify our contributions:

  • Every full-time employee globally is eligible for 40 paid hours each year for Service Learning
  • Anyone who contributes all of their 40 Service Learning hours in the year can recommend a charity of their choice to receive a Citizen Philanthropy Investment grant from the VMware Foundation.
  • VMware employees can request 1:1 matching gifts for charitable donations up to $3,141.59 annually to qualified nonprofits of their choice. That number might ring some bells with the geekier readers…

As you can see, the VMware Foundation’s global programs can increase our impact as a VMware community. This allows us to support the causes and nonprofits we care most about.

What does Service Learning look like?

Through Service Learning, we have the opportunity to use our time and talents to support our community – with hands-on service, general skills, pro bono Service Learning, or serving on the Board of a nonprofit. We call it Service Learning rather than “volunteering.” We believe that when you serve others with a mindset to learn, there is a two-way exchange of value between us and the nonprofits with which we serve. We develop and grow in the process. 

In 2019, I worked with a local charity supporting vulnerable children. Next year I’m going with a team to Tijuana, Mexico to build a home for a family. It’s the third year that individuals on our team have chosen to serve together in this capacity. We have a brief video below that shows the work that we put in and the outcomes that we accomplished.

People do all kinds of things for service learning, on every imaginable scale. I’ve seen teams repainting hospitals, providing mentorship to students, helping out with food banks, and much much more!

The end of the year (and indeed the end of the decade) is traditionally a time for reflection. I’m incredibly proud to work for an organization that enables me to engage in Citizen Philanthropy. The VMware Foundation shows us that we can all make changes to make the world a better place.

 

(Come back tomorrow for a look into how products like vSphere get named. For more posts in this series, visit the “2019 Wrap Up” tag.)

The post VMware Loves Service & Giving – The VMware Foundation appeared first on VMware vSphere Blog.

vExpert Cloud Management November 2019 Blog Digest


 

Every month we publish a digest of blogs created by our vExpert Cloud Management Community. These blogs are written by industry professionals from around the vCommunity and often feature walkthroughs, feature highlights, and news for vRealize Operations, vRealize Network Insight, and vRealize Log Insight. The vExpert title is awarded to individuals who dedicate their time outside of work to share their experiences and useful information about VMware products with the rest of the community. In July of 2019, we spun off a sub-program called vExpert Cloud Management for vExperts who specialize in products from VMware’s Cloud Management Business unit, namely the vRealize Suite. vExperts share their knowledge in many ways including presentations at VMware User Groups (VMUG), writing blog posts, and even creating content on YouTube.

 

Below are some of the top blog posts for the month of November 2019 created by our Cloud Management vExperts. Enjoy!

 

Using vRealize Operations to identify Zombie VMs by Graham Barker (@virtualg_uk) Isle of Man

Posters: VMware vRealize Network Insight by Piotr Masztafiak (@pmaszt) Poland

PowervRNI 1.7: What’s New? by Martijn Smit (@smitmartijn) Netherlands

vRealize Operations Manager 6.0 Certificate Expired by Batuhan Demirdal (@batuhandemirdal) Turkey

My VMworld Europe 2019 highlights by Rick Verstegen (@VerstegenRick) Netherlands

My Thoughts About VMworld 2019 Europe by Batuhan Demirdal (@batuhandemirdal) Turkey

 

The post vExpert Cloud Management November 2019 Blog Digest appeared first on VMware Cloud Management.

Introducing the vCenter plugin for vRealize Network Insight


I am very excited to introduce a new fling: the vCenter plugin for vRealize Network Insight!

This fling integrates vRealize Network Insight data directly into vCenter, allowing you to view a dashboard of networking information relevant to that vCenter, and acting as a springboard to the full interface of vRealize Network Insight.

vCenter plugin for vRealize Network Insight dashboard

Detailed installation instructions are located on the fling page, here.

I want to thank Ayesha Karim, Manoj Krishnan, and Revathy Subburaja for their hard work on making this plugin a reality, and Ajay Kalla and Raminder Singh for their creativity! Let’s not forget the flings team.

As this is version 1.0 of the vRealize Network Insight plugin for vCenter fling, we have some exciting ideas for the upcoming releases; stay tuned!

 

The post Introducing the vCenter plugin for vRealize Network Insight appeared first on VMware Cloud Management.

What’s in a Name? How vSphere Came To Be Named


(To mark the end of the year we are posting every day through January 1 with lighter vSphere and VMware topics. We hope you enjoy them as much as we do. See them all via the “2019 Wrap Up” tag!)

As we look back on 2019 and look forward to 2020 we have been thinking a lot about where we came from. One of the things we thought would be fun as part of this series is finding out how the name “vSphere” came to be. To investigate this I interviewed Mike Adams, Senior Director of Product Marketing, to ask him about the name of vSphere and the general process of naming products and features.

Bob: Before joining VMware I was a customer for 15 years, since about ESX 1.5. I remember along the way we had Virtual Infrastructure 3 and 3.5, but then suddenly we had vSphere 4. What can you tell me about that?

Mike: So yeah, I joined in 2007. Right then the product marketing team was looking to make a big splash: could we make a name that's better than VI? People used "Virtual Infrastructure" generically to refer to what was in the data center. And vC was spelled out "VirtualCenter" then, too. So what we thought was, let's put out a contest specifically to the engineers to see what they think about names.

Bob: A vote?

Mike: We didn't have a list of names already; the product manager actually asked them to submit names. It was really interesting, the names that came about. I remember Diane Greene's name was in one of the names. Mendel's name was in a couple of the names. There were all sorts of things that came out. A couple of us were just talking about all of this and someone remembered that it was Paul Maritz who submitted "vSphere." We collected all these names and then we got to the voting part. When we voted, vSphere came out as the big winner.

I remember that we had the launch event at the campus in Palo Alto, in the gym. We brought everybody out. We had racks of equipment on stage and I think – if I remember this right – Pat Gelsinger was on stage as part of Intel. It was a big deal.

Bob: Makes sense. vSphere 4 had a lot of new features in it, things we take for granted today.

Mike: It was when Fault Tolerance first came out, we showed one of the first demos of that. A ton of management improvements. Storage vMotion. The whole day was a big one, too. It was also the day that closed the Oracle acquisition of Sun so there was a lot of other news going on to compete with.

Bob: Wow, a very eventful launch day for the industry.

Mike: Yeah. From that point forward we knew it as vSphere.

Bob: So how many name suggestions were there?

Flogos Snowflake (Tamarabepunkt/Wikimedia Commons)

Mike: We got over 1000 submissions for names. It consumed the marketing manager’s time for a month. We had a lot of good things going on at the time so they were really looking to make a mark with the name. Speaking of marks, at the launch we found this company called Flogos and they had a machine that would make this giant bubble with helium in it that looked like your logo, and it would float away. Remember that three square icon we used to have for VMware? We got this machine and it made that logo a bunch of times and they’d fly up to 5000 feet in the air. We had to tell the airports what was going on.

Bob: I bet SFO and SJC loved you.

Mike: Not really. *laughing* That was all part of you know making this huge impact and vSphere was at the center of all that. We needed a name that we thought would help conceptualize what we were doing. That’s also one of the first times people started melding ESX with vCenter. Nowadays we just say “vSphere” and we know we mean the whole thing, but it was two distinct products up to that point for everyone, not just Engineering.

Bob: Did vCenter get renamed as part of all of this?

Mike: Yeah, we went to the vCenter name instead of VirtualCenter. Everything went to small ‘v’ then.

Bob: So this was also the genesis of the small ‘v’ names for everything? We have you to blame?

Mike: *laughs* I actually had to work to undo some things because of that. Because vCenter manages compute, storage, and networking, everything was vCompute, vStorage, vNetwork. Like the distributed switch, we used to call that the VMware vNetwork Distributed Switch.

Bob: It’s still vDS pretty much everywhere.

Mike: Yeah. The idea that it’s virtual if it comes from VMware… kinda like how the term “tuna fish” is redundant. Tuna implies fish. VMware implies virtual. The names were longer than they needed to be. We got rid of all of that.

Bob: There’s a joke that there are only two hard problems in computer science: cache invalidation and naming things*. You invited me to be part of a naming exercise this last summer, and it was incredibly interesting. Can you elaborate on what goes into picking a name?

Mike: Naming things is like a dark arts project. Ideally when you want to do a name you gotta go into some type of brainstorm session, you go in and figure out what is the main idea you need to express. You come up with that first. Then there’s two ways you can go. You could go Dragnet “just the facts” and have a name that says exactly what it does. Or you could go what I call the “cutesy” route and pick something that’s really cool.

Naming something exactly what it is is pretty easy, right? You clean it up and shorten it up and make sure it makes sense. The cutesy route is a lot harder. You need more creative people around that. The other element that’s added into that over time is that a lot of good names are gone. There’s a lot of trademarking. Even when you pick something that you think is brilliant if someone has trademarked it you have to go back to the start.

Bob: Yeah, some dump truck company in South America owns it or something. Frustrating.

Mike: Exactly. So let’s use a couple of examples that are near and dear to our hearts. Whoever came up with vMotion – I don’t know who did it – it’s perfect. It’s so good. I wasn’t around yet when they came up with that but whoever did that… brilliant.

Bob: Cutesy and functional, the holy grail.

Mike: Right. Other things that came out, like DRS. That was an engineering name. Yes, it describes what it does and it’s gotten into our lexicon, but we all just call it DRS and I worry that it’s because nobody quite knows what it really means.

Bob: That’s what keeps tech marketing people like me in business.

Mike: *laughs* Yeah. One other example, I came up with the name for SRM. Site Recovery Manager. We went into a room and asked, “what does it do?” It’s got sites. It helps people recover from disasters. And we were on this kick of “manager” – Lab Manager, Stage Manager, Lifecycle Manager. So we decided to call it Site Recovery Manager. At the time there was a growing movement in the storage world for Storage Resource Management, but we talked about it and kept our SRM.

The other thing with names is that everybody wants to give you their input. Everybody wants to play Monday morning quarterback. You’ve got to get as much consensus as you can, get it approved as fast as you can, and then send it back to the Product Manager to use.

It’s gotten to be a really hard process to pick things, because of what’s not available, and it’s just hard to pick the cutesy names. VMware has traditionally gone with a lot of more functional names because it tends to be easier and more understandable. It’s also very hard to change a name. You get these legacy names and people make a big deal out of the change.

Bob: I get that, you have all the legacy documentation and huge corpus of blog posts and whitepapers and things that need to be updated. That’s tough. I’ve learned a lot about names this last year, things you only learn the hard way when you have to explain something repeatedly. I always think “next time we’re going to do a little better.” Names are really important. They can help us or hurt us, add clarity or cause confusion.

Mike: Yes, yes. I really do believe that. It’s like the Chevy Nova in Latin America.

Bob: Va, conjugated form of the Spanish verb ‘ir’ that means “to go.” No go.

Mike: Exactly. Or Coke in China. Coke sells well in China because the name Coke in Mandarin translates as something like "very pleasant to drink," and people like drinks that are very pleasant to drink. A good name sets you up for really good things.

Bob: Aren’t we doing interesting things with Tanzu in that regard?

Mike: Tanzu is an interesting one as well. The meaning in Swahili is branches. The meaning in Japanese is a container, but a very specific type of container, a chest of drawers built by a skilled craftsman sort of thing. It’s also spelled a little differently, with an S. Tanzu is also interesting because you want the name to cover a number of things, so it works really well.

 

* The correct form of the joke is that there are two hard problems in computer science: cache invalidation, naming things, and off-by-one errors.

(Come back Monday for a look into what people like the most about vSphere. For more posts in this series visit the “2019 Wrap Up” tag.)

Follow @VMwarevSphere on Twitter

The post What’s in a Name? How vSphere Came To Be Named appeared first on VMware vSphere Blog.

The Lightest Sides of vSphere


(To mark the end of the year we are posting every day through January 1 with lighter vSphere and VMware topics. We hope you enjoy them as much as we do. See them all via the “2019 Wrap Up” tag!)

Whether it's a calendar year, a fiscal year, or even a year of life in the form of a birthday, the end of a year-long period of time naturally brings retrospection and introspection. With the concentration of holidays at the end of the calendar year, retrospection is sometimes forced upon us, as family and friends we see and talk to remind us of old times and who we used to be. From there often comes introspection, creeping in as we let our brains and bodies relax. How do the person I used to be and the person I am now relate? What do I want to accomplish in the next year, personally and professionally, to become the person I think I'd like to be? What is my plan to accomplish this, beyond just thinking it's a good idea? How will I know if I'm on the right track?

As we’ve been writing this blog series it’s been fun to be both retrospective and introspective about VMware, too, because we can ask all those same questions. Playing on the recent astronomical turning point in our year – generally considered a positive event in the northern hemisphere but sadder news for our friends in the southern hemisphere as their days get shorter now – we asked some folks what the best feature of vSphere is. What is something about vSphere that has gotten them out of trouble, or made life a lot easier for them? Here’s what they said.

“vSphere [High Availability] saved many of my environments over the years.” – David S.

I, too, enjoy vSphere HA. It was always interesting to get a call from our operations center noting that it looks like something bad happened to a host, but all the workloads appear to be up again. This is especially true given that some of the most intensive (I/O- and CPU-intensive, with compression and deduplication) work done by a virtual infrastructure is at night during backup windows. If there’s an unstable piece of hardware (and it happens, moving parts wear out) the stress of the backup cycle will sometimes push it over the edge. Infrastructure that helps speed you towards recovery is extremely helpful, especially in the middle of the night.

Helpful tip: now is a good time to make sure your HA settings are set correctly. The defaults are conservative and might not be what you want. You can find great information about those settings in our documentation!
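
If you would like to review those settings at scale, PowerCLI can help. Below is a minimal, hedged sketch, assuming an existing vCenter connection and a cluster named "Prod-Cluster" (the server and cluster names are hypothetical examples, not defaults):

```powershell
# Connect to vCenter (hypothetical server name)
Connect-VIServer -Server vcenter.example.com

# Review the current HA configuration for a cluster
Get-Cluster -Name "Prod-Cluster" |
    Select-Object Name, HAEnabled, HAAdmissionControlEnabled, HARestartPriority, HAIsolationResponse

# Enable HA and admission control if they are not already on
Get-Cluster -Name "Prod-Cluster" |
    Set-Cluster -HAEnabled:$true -HAAdmissionControlEnabled:$true -Confirm:$false
```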

“Shared-nothing vMotion has really freed us from many barriers [we] faced.” – Anoop J.

At the beginning of this post I talked about retrospection & looking back on the past. There’s often a tendency to get nostalgic, but it’s a feature like vMotion that reminds me that there really weren’t “the good old days.” It’s easy to take vMotion for granted now but it was a game changer when it first shipped, and the improvements to it have enabled a flexibility in vSphere, and subsequently in our data centers, that is unmatched. It’s a wonderful thing being able to seamlessly move workloads around to enable patching and upgrades and outage avoidance, especially during the day when vSphere Admins are at work, properly caffeinated, and have teammates to help. Put simply, vMotion has been a real improvement to vSphere Admins’ work-life balance.

Helpful tip: there is a great VMware Fling called the “Cross vCenter Workload Migration Utility” that can really help if you’re thinking about vMotioning, cloning, or migrating more than a few virtual machines. Our own Niels Hagoort has written several in-depth posts on how vMotion works which are perfect for end-of-year reading. And yes, vMotion is both a noun and a verb!
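
For one-off moves, a shared-nothing vMotion can also be driven from PowerCLI. A minimal sketch follows; the VM, host, and datastore names are hypothetical placeholders for your own environment:

```powershell
# Shared-nothing vMotion: move both the running VM and its storage
# (VM, host, and datastore names are hypothetical examples)
$vm = Get-VM -Name "app01"
$targetHost = Get-VMHost -Name "esxi02.example.com"
$targetDatastore = Get-Datastore -Name "local-ds-02"

Move-VM -VM $vm -Destination $targetHost -Datastore $targetDatastore
```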

“EVC was a gift to future me.” – Sharon J.

EVC is Enhanced vMotion Compatibility and it is an amazing feature. Have you ever noticed that you can only get a certain model of CPU for 18 months or so? If you build a cluster and later need to expand it, but can’t get the same CPUs, how will you vMotion between the servers? That’s where EVC comes in. It’ll make the CPUs compatible with each other so that running VMs can move back & forth between the old and new servers. It’s an option that you need to turn on early in the life of a vSphere cluster (or during other maintenance), but by doing that you give your future self a gift of easy expandability, migrations, and hardware replacement.

Helpful tip: Two tips. First, people sometimes say that EVC decreases performance but that isn’t generally true, or the answer is very nuanced depending on the applications and systems. Most enterprise workloads don’t take advantage of the CPU features that EVC might mask, and if you have a workload that does you can fix it with per-VM EVC, which enables more flexibility to handle these situations. Second, Niels Hagoort has also written an in-depth post on Enhanced vMotion Compatibility, which is an excellent read because it explains all of this very well. Niels is the vSphere Technical Marketing team’s hardware-focused person and, along with two coauthors, he has quite literally written the book on these topics.
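
If you prefer to check or set EVC from PowerCLI rather than the client, here is a minimal sketch. The cluster name and EVC mode are hypothetical examples, and a cluster can only accept a baseline that all of its hosts' CPUs support:

```powershell
# Review the current EVC mode for each cluster
Get-Cluster | Select-Object Name, EVCMode

# Set a baseline EVC mode on a cluster (name and mode are hypothetical;
# choose the level that matches the oldest CPUs you expect to add)
Get-Cluster -Name "Prod-Cluster" |
    Set-Cluster -EVCMode "intel-broadwell" -Confirm:$false
```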

Have another candidate for the best vSphere feature ever? Leave it in the comments!

(Come back tomorrow and help us examine the most underrated features of vSphere! For more posts in this series visit the “2019 Wrap Up” tag.)

Follow @VMwarevSphere on Twitter

The post The Lightest Sides of vSphere appeared first on VMware vSphere Blog.

The Darkest Sides of vSphere


(To mark the end of the year we are posting every day through January 1 with lighter vSphere and VMware topics. We hope you enjoy them as much as we do. See them all via the “2019 Wrap Up” tag!)

In our post "The Lightest Sides of vSphere" we talked a lot about retrospection and introspection, and we asked people what they thought the best feature of vSphere was. Continuing the astronomical theme with longest days and longest nights, today we ask about the dark sides of vSphere. What is the most underrated vSphere feature, a feature you wish more people knew about?

“File-Based Backup & Restore on the vCenter Server appliances.” – David S.

How are you backing up your vCenter Server appliance? Are you just taking a snapshot of it or backing it up as an image somehow? vSphere 6.5 introduced File-Based Backup and Restore (FBBR) as a method to protect your vSphere environment from failures. In short, it exports configuration information as a file to a remote file share or system. If you have issues you can restore a vCenter Server appliance using the installer and that file. Pretty slick!

Helpful tip: if you’re already backing up an image of vCenter Server it’s a good idea to set this up as well. Make sure you’re not copying it into the infrastructure you’re protecting, though! It’s very easy to forget that the Windows file server you’re using is actually a VM inside your infrastructure. Copying it to a DR site (and the DR site back to the primary site) is often a good approach. There are a number of blog posts on the vSphere Blog about FBBR, too.

“The ESXi scheduler. Its efficiency and performance do not get the attention it deserves.” – Kevin H.

This would be my vote for the #1 underrated feature of vSphere. ESXi consistently demonstrates that it's the most efficient hypervisor on the market. This has a big effect on customers because it means less hardware is needed, even in small deployments. Less hardware then means less additional infrastructure. The indirect gains of fewer network and storage switches, lower power consumption, less consumed rack space, and less heat in the data center add up very quickly, especially with "soft costs" like staff time. Payroll is often the largest line item in an enterprise IT budget, and enabling IT to do more with the staff they have is a massive win. Similarly, if you're security-minded, fewer devices mean less to patch and audit.

Helpful tip: The VMware VROOM! Blog has a nice post about why containers running as part of Project Pacific under ESXi run 8% faster than bare metal. No joke! Go read it. When you're done, add that blog to your feed reader; it's written by the VMware performance experts, and they have lots of interesting things to say, both on their blog and in VMworld sessions.

“Snapshots. Not as backups but as ‘I’m going to do something that if it blows up I’m kinda [in trouble] and it would be great to revert with the click of a button.” – Féidhlim O.

We take a lot of things for granted when it comes to vSphere. Features like vMotion, DRS, HA, and snapshots were game-changing when they first shipped, changing the realities and possibilities of our workloads and our data centers. Speaking from my own experience, snapshots are massively underrated. A vSphere Admin, at an academic level, knows and understands snapshots. But does a vSphere Admin practice using them all the time? And does that same vSphere Admin advocate the use of snapshots to the admins of the workloads that run atop vSphere? Often the answer to both of these is no. We can do better!

Helpful tip: vSphere Admins, like all sysadmins, suffer from the problem where “no news is good news.” This means that the rest of your company only gets to see you when something bad is happening. Instead, let’s make this next year the year when your organization sees you for all the wonderful work you do. Start an internal email newsletter with tips for workload admins that they might not know. Snapshots, backup strategies, DRS affinity rules, VMware Tools tricks, VM encryption, etc. Give a brown bag presentation over lunch about what vSphere is and how it’s set up in your company, include tips & tricks for reducing risk during application upgrades and OS patching. Download and install the trial of vRealize Operations Manager and use that as a starting point for capacity conversations, remembering that rightsizing goes both ways (up and down). Telling a workload admin that it looks like they could use a few more CPUs to improve performance, or could mitigate their upgrade risks with a snapshot, are very positive conversation starters!

Helpful tip #2: Snapshots are great but be sure that you delete them when you’re done with them. A big snapshot can be a problem, too! 🙂 vRealize Operations Manager has reports & alarms for that, and community efforts like vCheck are also incredibly helpful for catching possible problems before they become operational issues. VMware also has a Knowledge Base article on snapshot best practices.
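
If you want a quick way to spot forgotten snapshots, a small PowerCLI report works well. This is a minimal sketch; the age and size thresholds are arbitrary examples you should tune for your environment:

```powershell
# Report snapshots older than 7 days or larger than 10 GB so they can be
# reviewed and cleaned up (thresholds are arbitrary examples)
Get-VM | Get-Snapshot |
    Where-Object { $_.Created -lt (Get-Date).AddDays(-7) -or $_.SizeGB -gt 10 } |
    Select-Object VM, Name, Created, SizeGB |
    Sort-Object SizeGB -Descending |
    Format-Table -AutoSize
```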

“Nested ESXi for testing stuff. It works well and allows you to test and play without affecting production or buying new HW.” – Féidhlim O.

I once heard someone say that everybody has a test environment, and some people are lucky enough to have a separate production environment, too. Test environments are great ideas but they’re almost never representative of production. They use different and often ancient hardware. They don’t run the same workloads. And if we break something while testing we have to go mess with consoles and things, which is so 1980s. Why do we keep them around? Let’s take a hint from the VMware Hands-On Labs and move our testing to nested vSphere. You can install ESXi inside a VM and deploy a test vCenter Server to manage it. When it’s all set up you can snapshot or clone it, and then when you need to you can simply revert to a known good state again. Easy, and if you use PowerCLI to script the snapshots and reverts it’s good automation practice, too.

Helpful tip: There is a tremendous community around nested virtualization, led by one of VMware’s own, William Lam. His Nested Virtualization page on his blog even has OVA images preconfigured with vSAN and serves as a reference for many of us inside of VMware, too.
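
To illustrate the snapshot-and-revert workflow for a nested lab mentioned above, here is a minimal PowerCLI sketch. The VM and snapshot names are hypothetical, and you would normally take the baseline snapshot while the nested host is in a known good state:

```powershell
# Snapshot a nested lab VM before an experiment, then revert when done
# (VM and snapshot names are hypothetical examples)
$labVM = Get-VM -Name "nested-esxi-01"

# Take a "known good" baseline snapshot of the lab host
New-Snapshot -VM $labVM -Name "known-good" -Description "Clean lab baseline"

# ...break things, test upgrades, experiment...

# Revert the lab VM to the known good state
$baseline = Get-Snapshot -VM $labVM -Name "known-good"
Set-VM -VM $labVM -Snapshot $baseline -Confirm:$false
```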

“Autodeploy, watching 100s of hosts spinning up in minutes from bare metal to a full setup ready for workloads is a kind of magic.” – Nigel V.

vSphere Auto Deploy is a feature that allows a server to boot via PXE/iPXE and configure itself to automatically join a cluster and run workloads. It's a wonderful time saver for organizations that have numerous hosts, and it's flexible enough to allow for things like local caching of ESXi, helping to eliminate a dependency that could be a problem during an incident.

Helpful tip: Even if you aren’t fully interested in Auto Deploy, the PXE-based network installation methods for installing ESXi can be a real time saver if you have multiple hosts to deploy. It’s all in the documentation!
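
For flavor, the Auto Deploy workflow is largely driven through PowerCLI deploy rules. The sketch below is a simplified, hedged example; the depot path, image profile name, cluster name, and host pattern are all hypothetical, and real image profile names include build numbers:

```powershell
# Add an ESXi offline depot and pick an image profile
# (depot path and profile name are hypothetical examples)
Add-EsxSoftwareDepot "C:\depot\VMware-ESXi-6.7.0-depot.zip"
$profile = Get-EsxImageProfile -Name "ESXi-6.7.0-standard"

# Create a rule: hosts matching the pattern get this image and join the cluster
New-DeployRule -Name "ProdHosts" -Item $profile, (Get-Cluster "Prod-Cluster") -Pattern "vendor=Dell Inc."
Add-DeployRule -DeployRule "ProdHosts"
```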

“VADP. It did nothing short of revolutionize the way we think and act when it comes to workload recovery. A crash consistent backup of a workload without application knowledge is mind boggling for the fact that it works every single time. I don’t know how many times either a precautionary ‘I did a VM backup even if the $owner says we don’t need one’ saved the day.” – Dominik Z.

The VMware vStorage APIs for Data Protection, or VADP, enable backup and replication products like vSphere Replication, VMware Site Recovery Manager, and offerings from our partners to safely, securely, and efficiently back up and restore a VM. As with snapshots and clones, these backups are often called "crash-consistent." This means that, when restored, it'll look like the virtual machine simply lost power. You might think that sounds awful, but the truth is that losing power like that is a situation that's very well understood by software. Filesystems know what to do when it looks like a crash occurred. Databases generally know what to do, too. More complicated backup schemes that cue applications to quiesce, or stop their activity, so that a better backup can happen are possible, but those schemes add complexity and can be unreliable at times, too. Testing is always important, especially of restores!

Helpful tip: You can clone a running VM. The process is called making a “hot clone.” Where possible, though, it’s always recommended to power a VM down so that the filesystems and applications are in a 100% known state: off! Also remember that crash-consistency might not work for distributed applications (such as multi-tier ones), where the VMs that make up an application should all be in sync with each other. Always ask your application vendors and backup vendors for their recommendations on how to safely back up and restore your workloads.
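
Cloning, including a hot clone of a running VM, can also be scripted. Here is a minimal PowerCLI sketch, with all names as hypothetical placeholders for your own environment:

```powershell
# "Hot clone": clone a running VM to another host and datastore
# (VM, host, and datastore names are hypothetical examples)
$source = Get-VM -Name "db01"
New-VM -Name "db01-clone" -VM $source `
    -VMHost (Get-VMHost "esxi03.example.com") `
    -Datastore (Get-Datastore "backup-ds")
```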

Have another thought on an underrated feature? Please leave it in the comments!

(Come back tomorrow for a lighthearted look into all of our wonderful users! For more posts in this series visit the “2019 Wrap Up” tag.)

Follow @VMwarevSphere on Twitter

The post The Darkest Sides of vSphere appeared first on VMware vSphere Blog.

VMware Believes In Our Users


(To mark the end of the year we are posting every day through January 1 with lighter vSphere and VMware topics. We hope you enjoy them as much as we do. See them all via the “2019 Wrap Up” tag!)

We at VMware take a lot of pride in what our platforms enable our customers to build and achieve. We have hundreds of thousands of customers across all seven continents, with deployments ranging in size from tiny to enormous. Our customers run vSphere and vSAN on thousands of different hardware platforms (as of this writing there are 2805 that are certified for ESXi 6.7 U3), providing a stable and secure platform for 242 supported Guest OSes (on ESXi 6.7 U3, as of this post). That doesn't even count the guest OSes that aren't supported but are still being run, such as Windows NT 4.0 and Novell NetWare!

Every year VMware surveys our customers to see what they’re thinking and how we can help them with their work. Here are three big areas that have our customers’ attention.

AI and ML are interesting to a lot of people in a lot of areas, and for us that shows up as interest in GPUs and BitFusion.

Graphics Processing Units, or GPUs, are specialized processors that have a lot in common with early supercomputers. They use a vector instruction set, versus the scalar instruction set found in general-purpose Central Processing Units, or CPUs. Vector instructions were originally used to improve graphics capabilities on PCs, but it wasn't long before people adapted other problems to them as well. Raw computing power can be measured in many ways, but GPUs are often measured in the same manner as their supercomputer predecessors, in Floating Point Operations per Second, or FLOPS. A single $1500 GPU, with its 15+ teraFLOPS of processing power, would have ranked among the fastest supercomputers in the world in 2007. It isn't surprising that so many people are interested in bringing that type of computing power to bear on corporate computing problems.

What does VMware bring to the table here? First, there’s growing support for GPUs in vSphere, both in terms of the ability to use them as well as smoothing out operational issues with GPUs (such as enabling vMotion). If you’re interested in GPUs we have a guide to Using GPUs for Machine Learning on VMware vSphere, which includes step-by-step instructions and insights for vSphere Admins.

Second, BitFusion technology allows GPUs to be virtualized, partitioned, shared, and managed as a pooled resource for workloads, in public clouds and on-premises. It’s very interesting technology, doing to GPUs what vSphere does to server hardware, allowing applications using standards-based OpenCL and NVIDIA CUDA frameworks to interact with virtualized GPUs without any modifications to the applications. The GPU the application is using could be somewhere else in the data center and being shared among other workloads and nobody is the wiser.

Attention to security and compliance is growing.

One of our messages is always that security should come first, because good security will lead to good compliance. vSphere is a wonderfully secure platform, with third-party international validation to back it up. There are lots of great security resources out there, and two great starting points are the vSphere Security Configuration Guide and the vSphere Central web site. Something people are increasingly interested in is the ability to script security controls, and the vSphere Security Configuration Guide has sample PowerCLI code for each & every control present. Not only is that helpful for securing infrastructure, it’s also helpful for learning PowerCLI & PowerShell, too.
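
The Security Configuration Guide's samples generally follow a check-and-remediate pattern using host advanced settings. Here is a hedged illustration of that pattern; the specific setting and value shown are examples only, so take the authoritative names and values from the guide for your vSphere version:

```powershell
# Find hosts where the ESXi Shell interactive timeout is not the desired
# value, then remediate (setting name and value are examples only)
Get-VMHost |
    Get-AdvancedSetting -Name "UserVars.ESXiShellInteractiveTimeOut" |
    Where-Object { $_.Value -ne 900 } |
    Set-AdvancedSetting -Value 900 -Confirm:$false
```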

Beyond infrastructure security there's a massive realization that security is everybody's job, and that workloads that aren't as secure as the infrastructure mean that everybody is vulnerable, especially with the newest CPU vulnerabilities. This is where a product like VMware's CB Defense comes in. CB Defense has been tested and shown to defend applications against everything in the MITRE ATT&CK framework, and its ability to scan workloads for vulnerabilities and help prioritize remediation is a massive help for vSphere Admins.

When it comes to compliance there’s no better platform out there than vSphere, again with official, published, legitimate US Department of Defense STIGs. VMware has also published guides to securing and auditing the products included in the VMware Validated Designs (VVDs). If you’re interested in NIST 800-53 compliance there is a compliance kit available publicly, as part of the VVD documentation (look on the left in the menu!), that will guide you and your risk auditors. The nice thing about NIST 800-53 is that it maps to other compliance frameworks, too, so PCI, HIPAA, CJIS, CIS, and others can follow the same guidance.

Last, the new Skyline Health collaborations between VMware Support and vSphere bring security monitoring to the table. Skyline will tell you proactively about issues in your environment so you can act before they become problems. vSphere Admins, like most sysadmins and network admins, often operate in a "no news is good news" manner, and Skyline helps keep it that way! Similarly, customers who have discovered the amazing capabilities of vRealize Operations Manager, both for monitoring and rightsizing, have also seen all the new work that has gone into its compliance and security monitoring. Monitoring speaks directly to availability, which is a key part of information security (confidentiality, integrity, AND availability!), so you can frame that vRealize Operations Manager purchase in your next Enterprise License Agreement as a security move (don't laugh, I'm serious!).

It isn’t a surprise that containers are on people’s minds.

You might have heard the news about VMware's Project Pacific, which builds native Kubernetes functionality directly into vSphere to support containers and container orchestration. I can't tell the story better here than Kit Colbert already did, so I direct you to his introduction. This is exciting news because direct, open-source Kubernetes installations can be very daunting; bringing Kubernetes into vSphere, so vSphere Admins can use the skills they already have to deliver this functionality to their enterprises, is huge. Plus, it brings all the security and operational controls and governance that you've had for decades in vSphere.

Help Us Help You

If you're ever presented with the opportunity to participate in a survey, a beta, or a focus group, I urge you to do it. Those are helpful to us because they let us know what you're thinking, and they're helpful to you because you will get products that better address your organization's needs. Similarly, if you have product feedback, let your account team know. I often tell people to "help us help you," and we appreciate it as a way to learn how to serve you better.

As always, thank you for being our customers.

(Come back tomorrow for a look into the work that VMware is doing to enable virtualization in Africa! For more posts in this series visit the “2019 Wrap Up” tag.)

Follow @VMwarevSphere on Twitter

The post VMware Believes In Our Users appeared first on VMware vSphere Blog.


VMware Believes In Africa


(To mark the end of the year we are posting every day through January 1 with lighter vSphere and VMware topics. We hope you enjoy them as much as we do. See them all via the “2019 Wrap Up” tag!)

(As part of this series we wanted to highlight the work going on as part of the Virtualize Africa program. The following post is from Rachel Onamusi, VMware partner and Virtualize Africa digital strategist, telling the story better than we ever could. I also really enjoy the proverb at the end. Thank you Rachel! -Bob)

There was a time when it was thought that the most valuable resources Africa had to offer were either in the ground or grew out of it. From diamonds to gold, copper, uranium, and palm oil, it seemed the road to development lay in extraction, not in input, involvement, or inclusion. The results of that exploitation have been dire. Africa has struggled to live up to its full potential, leaving many of her countries' citizens under-prepared and ill-equipped to partake in discussions that engender growth, development, or excellence in any of the indices by which developed nations are measured.

Given the disparity that exists, many have put their faith in leapfrogging, which is the idea that Africa can escape its challenges by skipping certain stages of development. Few fields lend themselves so admirably to this notion as technology does, with one of the biggest examples of leapfrogging in action being Africa's jump straight to mobile phones, bypassing fixed-line technology and making exposure and education accessible to millions of people.

Education, health, agriculture, business and industry are all being significantly impacted by Artificial Intelligence, Machine Learning, the Internet of Things and Cloud Computing. The need for these pillars of development to scale is pressing and the human capacity is certainly there: Africa remains home to some of the most inventive, creative, and efficient adopters of technology, creating solutions to meet local needs even under difficult circumstances.

What is sadly lacking is the education to maximise the potential of the burgeoning youth population – a population that is set to be either a bonus or a burden to the continent depending on the events of the next few years. There are not enough skilled people and partners to take on the vast opportunities in Africa. As automation spreads through the continent, from financial inclusion like the M-PESA platform and apps built on TensorFlow that aid agriculture, to developer training like Andela, these new technologies bring opportunity and hope. They also highlight the glaring gap between the tiny portion of the population knowledgeable in these technologies and the people still figuratively in the dark. The high-end jobs that are starting to emerge are being outsourced to capable foreigners, many of them flown in specifically for the project, at the expense of the millions of people on the ground.

The solution to this, the only way to embolden Africa with stability and the independence to progress, is to educate Africa. Handouts have proven to do more harm than good, and unequal partnerships can give way to exploitation. The freedom that comes with education and the confidence to come to the table as an equal partner in any business relationship cannot be overemphasised, and this is exactly what VMware is achieving with the Virtualize Africa Program.

The Virtualize Africa Program was started by the VMware IT Academy to upskill and reskill Africans so that they might lead in the Fourth Industrial Revolution. The program packages the VMware Certified Professional certification as virtual, self-paced training, taking Cloud Computing from the basic foundations to a standardised, globally-accepted certification level. Trainees do not need to have prior tech or Cloud experience; they simply need a data connection and the will to develop new opportunities for themselves.

The success of the program has proven the theory to be true – that with access to new skills, Africans can excel in the tech field too. From college undergraduates to insurance salesmen in need of more fulfilling career paths, the trainees in the Virtualize Africa Program are varied but in agreement on one thing: if they are to take control of their lives, then they must do it by acquiring new skills.

VMware has already signed a Memorandum of Understanding with the African Union in its bid to disseminate Cloud Computing training as efficiently as possible.  Over the next few years, the VMware IT Academy will also be working with universities and tech hubs across Africa to integrate the training into academic syllabuses, an effort that is already well under way.

An African proverb states that “If you want to go far, go together”. If there is any truth to this, then the partnership between VMware and the continent is poised to yield great benefits for a long time to come.

(Come back tomorrow for a look at VMworld! For more posts in this series visit the “2019 Wrap Up” tag.)

Follow @VMwarevSphere on Twitter

The post VMware Believes In Africa appeared first on VMware vSphere Blog.

VMware Enjoys VMworld


(To mark the end of the year we are posting every day through January 1 with lighter vSphere and VMware topics. We hope you enjoy them as much as we do. See them all via the “2019 Wrap Up” tag!)

One of the big milestones every year in the VMware ecosystem is "VMworld Season." The conferences this year in San Francisco and Barcelona were milestones both for VMware itself and for customers. Between the two conferences we announced some big things, as well as a whole bunch of new versions of products, like vRealize Network Insight 5.1, vRealize Operations Manager 8.0, VMware AppDefense/CB Workload 2.3, vSphere 6.7 Update 3, and many more. In keeping with our year-end theme of looking back, I asked a bunch of people about funny or interesting experiences at VMworld. I got some pretty interesting responses, too, and it's clear that VMworld can be a time for some to relax, especially across the Labor Day weekend that follows VMworld US. Here are some of their thoughts.

“Remember in 2014 at VMworld US, the theme was groundbreaking designs and the artwork for the conference was cracked grounds, walls, etc.? Then an earthquake hit the area the night before.” – Kevin H.

The 2014 South Napa earthquake happened at 3:20 AM and, though many were sleeping, it was the first experience with a major earthquake for many conference-goers. It was also the largest earthquake in the area since the incredibly damaging Loma Prieta quake in 1989. It didn’t affect the conference much, at least from an attendee’s point of view, but it does highlight the need for travelers to always know what to do if something like that happens.

“The power went out late in the day in Moscone West on Tuesday this year, that was pretty interesting!” – Joe T.

Yes, yes it was. Mike Foley and I were doing the annual vSphere Security Update presentation at that time, and suddenly everything went dark and the emergency lights came on. I got a laugh when I asked everybody where they were going. I've done a fair amount of theater work and can speak loudly; our slides were still up on the laptop on stage, everyone was safe, and there were no fire alarms, so we just kept speaking and answering questions in the dim light until someone came and gave us further instructions.

“Customer meetings are always interesting because VMware folks get to see what folks are doing out in IT, and customers can ask us anything. There was one last year where we were discussing security requirements, and whether vSphere could meet them. We asked the customer for the requirements they needed to meet, and they couldn’t give them to us, because of… wait for it… security.” – Ken H.

First, VMware vSphere is incredibly security- and compliance-oriented and can meet and exceed whatever compliance frameworks you have. Let your account team know how VMware can help you and they’ll set it up. Second, if you’re at VMworld it’s a great time to meet subject matter experts from deep inside VMware. You can do this through your Technical Account Manager (TAM) or your account teams, just let them know in advance. Additionally, there are “Meet the Expert” sessions that are held every day where you sit at a table and ask a subject matter expert anything that’s on your mind. Remember that next year when you’re registering for sessions – sign up for a Meet the Expert session, too.

“A T-Rex walking around the Vegas show floor in 2018.” – Sean M.

“Seeing a wedding happen during the show, and then the bride and groom walked around the conference in their wedding dress and tuxedo… That was the first big community-type event I saw as a customer and it made me want to get involved further.” – Joey W.

As for the dinosaur, I wonder if it was this fellow. The greater VMware community is a pretty interesting group of people, that’s for sure.

“The community takes care of itself. I’ve seen people that, at the last minute, lost their trip to the conference, only to have people & orgs pull together and supply travel, a bed, and a pass. It’s stunning when it happens once, but when it happens every year? I love the community of which I am a part.” – Jim M.

100% agreed. I wouldn’t have been as successful in my former job in enterprise IT without the VMware community, whether it’s locally or regionally through a VMUG, through podcasts and blogs, or through the VMware Communities and VMTN. VMware, as a company, shows a lot of kindness inside and out, and the community reflects and amplifies those values. It’s pretty amazing.

If you're wondering how you might get started, I'd suggest a couple of different paths. First, you could answer some questions in the Community forums. There's always someone needing a hint, and maybe you're the person to give it to them. The same goes for Twitter and Reddit; the r/vmware community is pretty active and kind. On Twitter you could watch the #virtualization, #vmware, and #vexpert tags to start.

Second, start a blog. Don’t worry if you’re not an amazing writer. Here’s a secret: people who seem like amazing writers have amazing editors who read their work and fix all the errors. The rest of us just do what we can, and we get better the more we write. I always think of what a community member told me early on to encourage me: “How do you get to be good at something? Be bad at it for a while first.” Don’t know what you’d write about? Write about what you did that day or week. Post links to interesting web sites that helped you, or things you read. Add a little commentary to each. Read other blogs and link to them, most will link back to you automatically, too.

Last, go to VMUGs. Pretend you're an extrovert for a little bit and ask questions when there's a panel discussion or roundtable. Chances are that if you thought of the question, someone else in the room has the same question but doesn't want to speak up and ask. Email the leaders and ask if they need a community presentation for a meeting. Offer to talk about your deployment or something you've learned. Like writers, presenters get better with practice, too.

Find your niche, some way for you to be comfortable while participating. Have fun! Don’t forget to say hi at VMworld, too – we all like meeting new people.

(Come back Monday for the top 5 vSphere blog posts from 2019! For more posts in this series visit the “2019 Wrap Up” tag. There’s a bunch of interesting stuff, like a look at our pet turtles and how vSphere got named!)

Follow @VMwarevSphere on Twitter

The post VMware Enjoys VMworld appeared first on VMware vSphere Blog.

Eventful Year for vSphere: Top Blog Posts of 2019

$
0
0
This post was originally published on this site ---

(To mark the end of the year we are posting every day through January 1 with lighter vSphere and VMware topics. We hope you enjoy them as much as we do. See them all via the “2019 Wrap Up” tag!)

We're in the home stretch, with mere hours until the end of the decade. To kick this series off we took a look at some of the top vSphere blog posts this year, 6 through 10. Today let's finish it out and see which post is most popular!

5. "What's New in vSphere 6.5: vCenter Server" and "What's New in vSphere 6.5: vCenter Management Clients" tie for fifth place. What's with all the interest in vSphere 6.5 when vSphere 6.7 is out? It's because people are upgrading from vSphere 5.5 and 6.0 and catching up on their reading. Both have excellent content about changes to how vCenter Server is delivered and operated, as well as how we humans will interact with the infrastructure.

4. Speaking of the HTML5 client, number 4 on the charts this year was “Goodbye, vSphere Web Client!” As Adobe Flash hits the end of its life in 2020 the HTML5 client is the client to use. Highly recommended, partly because it doesn’t need Flash and partly because it’s laid out better (by our UX Designers) for day-to-day tasks.

3. There was a lot of interesting content written in 2019 that just missed the cut… so I added them all together to get number 3!

2. "Which vSCSI Controller Should I Choose for Performance?" ranks at number 2 for the year. It's an older post but still very relevant. Hint: use the pvscsi and vmxnet3 drivers whenever you can (a quick way to spot VMs that aren't using them yet is sketched just after this list). Did you know they're now available on Windows Update?

1. vSphere 6.7 finally makes a big appearance on this chart, in its rightful place as number 1. "Introducing VMware vSphere 6.7!" was wildly popular, looking at the new features and capabilities in the core of the software-defined data center.
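As a quick, hedged illustration of the hint in number 2: the sketch below uses pyVmomi (the vSphere Python SDK) to flag VMs that are not yet using a paravirtual SCSI controller or a vmxnet3 NIC. The vCenter hostname and credentials are placeholders, and this is a starting point, not an official tool.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

def report_legacy_devices(si):
    # Walk every VM in the inventory and check its virtual hardware.
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        devices = vm.config.hardware.device if vm.config else []
        has_pvscsi = any(isinstance(d, vim.vm.device.ParaVirtualSCSIController) for d in devices)
        has_vmxnet3 = any(isinstance(d, vim.vm.device.VirtualVmxnet3) for d in devices)
        if not (has_pvscsi and has_vmxnet3):
            print(f"{vm.name}: pvscsi={has_pvscsi}, vmxnet3={has_vmxnet3}")

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        report_legacy_devices(si)
    finally:
        Disconnect(si)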

Please keep up with us in 2020 by following us on Twitter and by adding this blog to your RSS reader. Thank you!

(Come back tomorrow for a look into where vSphere has come from! For more posts in this series visit the “2019 Wrap Up” tag.)

Follow @VMwarevSphere on Twitter

The post Eventful Year for vSphere: Top Blog Posts of 2019 appeared first on VMware vSphere Blog.

The Importance of a Cloud Strategy

$
0
0
This post was originally published on this site ---

A clear, unambiguous, and widely-communicated cloud strategy is critical to the successful transformation of an IT organization. The creation and publication of the cloud strategy document serves to align the various teams across an enterprise to a cohesive plan which informs and guides the adoption of cloud services.

The creation of a cloud strategy may seem overwhelming at first, but it shouldn’t be. It is not intended to be a precise, detailed set of instructions for every technology loosely defined as “cloud”. Rather it should be a clear, concise, high-level analysis of which cloud technologies your organization should prioritize, with guidance to lower the risks associated with the adoption of these technologies.

 

Why do you need a cloud strategy?

While many enterprises have announced a “Cloud-First” approach, not all have a well-defined strategy that communicates the “why”, “how”, or even the “what” of cloud adoption. A cloud strategy is essential to guide your organization through the transition to the cloud by balancing expected benefits with guardrails to reduce risk exposure.

Without a strategy, business groups and users will adopt solutions to increase productivity as they become available. Many organizations have already experienced this in the form of “Shadow IT”.  The absence of a cloud strategy leaves these teams with little guidance for cloud service adoption leading to silos of technology, non-standard solution implementations, non-optimized costs, and greater exposure to risk from poorly configured environments.

Cloud Strategy

Finding Balance with a Cloud Strategy

 

Who should create a cloud strategy?

In many organizations the “Cloud Architect” will define and communicate the cloud strategy, working with leadership and business units as well as security and technology teams. Some organizations have already adopted several cloud technologies without a cloud architect or a defined strategy. In these cases, it is not important who creates the strategy as long as it is built collaboratively and supported by leadership before being communicated internally. Often, we find IT teams hear “Cloud-First” from leadership and then await further guidance. After all, you wouldn’t make such a statement without a clear plan, would you? Well, you might be surprised. When the cloud mandate gets out ahead of the cloud strategy, IT teams can help by drafting a strategy document and presenting it to leadership as the basis for further discussion and clarification.

 

What is the purpose of a cloud strategy?

  • Identify the key benefits associated with cloud service consumption that apply to your organization.

 

  • Determine which services you will build internally and which services will be consumed from a public cloud provider partner.

 

  • Identify the partnerships your organization has (or is developing) with public cloud providers for those services (approved service providers).

 

  • Communicate the approved service providers and associated services. State the risk of utilizing resources from a non-approved provider and determine the consequences of non-approved service consumption. Define and communicate the process to have a service provider or service added to the approved list.

 

  • Determine how your organization will provide consistent policy and governance across disparate services and environments, as well as a consistent consumption and management layer.

 

  • Determine protections needed to reduce the risks to data privacy, service costs, resource availability, and others. Until these protections have been established, an “on-premises”, private cloud strategy is best to safely provide your users with the resources they need.

 

Public / Private / Hybrid?

The cloud strategy will support alignment across teams supporting public and private environments. Your organization may be just starting on the journey to the cloud, or there may already be entire teams dedicated to supporting AWS and Azure workloads in your environment. Some services make more sense than others to consume from a public cloud provider. With IaaS workloads, for example, many organizations found that public cloud hosting led to higher costs and limited visibility. These organizations are taking another look at their private cloud offerings to expand the capabilities and prioritize on-premises environments for IaaS.  A private/hybrid cloud approach can provide many of the capabilities required to meet the needs of the business while providing a better consumer experience and superior governance to consuming those services directly from a public provider. Many organizations are looking to a management platform like the vRealize Suite to broker public and private cloud resources through a common consumption layer while ensuring full governance and compliance through automation.

 

What should a cloud strategy include?

At a minimum the cloud strategy should include the following sections:

  • Scope: The scope of the strategy – what is included and what is not – along with sufficient definition for consumption by non-IT readers.

 

  • Alignment to a larger strategic initiative – Ideally, there will be a larger “transformation” initiative to align with. This will help in several areas, including identifying business objectives, gaining leadership support, increasing organizational collaboration, and supporting requests for funding.

 

  • Business objectives driving cloud service adoption – For example, datacenter lease expiration, the need to optimize IT costs, or new capabilities required to support a new business offering.

 

  • Capabilities mapped to business objectives – High-level capabilities to support the business objectives, including identification of the consumer and consumption requirements, and capability prioritization. For example, on-demand application provisioning with application definition as code for IT teams via global catalog,  Kubernetes cluster deployments for application developers via API.

 

  • Current state – The current state of your organization in regard to cloud service adoption, supported by an analysis of cloud services currently in use. While an internal survey can be used to collect some information, a more effective analysis should be done using a “Shadow IT Discovery” tool. Many security organizations offer this service, installing a “Cloud Access Security Broker” (CASB) on the network for a period of time to analyze and report on traffic patterns to external cloud providers.

 

  • Desired state – The desired state for cloud service adoption for the next 2-5 years, including approved public cloud partners and services, capabilities to be supported by a private/hybrid cloud, and criteria and KPIs to measure and report on success.

 

  • Dependencies – What must be in place to support the successful adoption of the defined services? For example, data classification to determine workload placement requirements, a Cloud Management Platform to provide common governance and policy enforcement across private, public, and hybrid environments, or maybe an updated chargeback model.

 

  • Risks – Internal and external risk. For example, resource limitations or a lack of required skill sets.

 

  •  High-level path to adoption – A high-level plan of adoption for cloud services and supporting capabilities:
    • An evolutionary or revolutionary approach to cloud service adoption? Most Enterprises will favor an evolutionary approach so that existing investments can be utilized, and resources and skillsets can be updated over time. For example, a solid foundation for IaaS should be built and proven in a private cloud environment before extending to a hybrid or public cloud. One solution will build on another, for example, PaaS will build on IaaS.
    • Recommendations for the hosting criteria of services and workloads. The cloud strategy is not where the criteria should be maintained, but it should link to the defined criteria or list the criteria as a dependency for cloud service adoption.
    • Private cloud service creation – a prioritization of services to be built for the private/hybrid cloud environment and a high-level timeline.
    • Public cloud service adoption – which services will be consumed from a public cloud provider.
    • Strategic partnership development with public cloud providers, including security considerations for relevant regulatory and compliance requirements (PCI DSS, HIPAA, SOX, FISMA, DFARS, FedRAMP).
    • Implementation of a common security framework (for example, NIST, ISO 27001 or CIS) to streamline analysis for the authorization of new vendors or services.
    • Key Performance Indicator (KPI) definition for measurable outcomes for the adoption of services (for example, service consumption, service provisioning time, cost). It is important to measure the value derived from providing private/hybrid cloud services (or consuming public cloud services). This becomes critical when planning additional service development or requesting additional resources.

 

  • Organizational recommendations – What will the support model look like? Many organizations have implemented a “Cloud Architecture Team” or “Cloud Center of Excellence” (CCoE) to define and manage cloud services, and a “Cloud Operations Team” to support them. These are cross-functional teams with Subject Matter Experts (SMEs) from various technical disciplines such as networking, storage, Windows and Linux, application development and security. Other organizations separate teams by discipline, for example, dedicated Azure and AWS teams. We recommend the creation of a CCoE to define consistent standards and policies across public and private environments.

 

  • Exit strategy – Moving services and resources out of a public cloud environment requires as much upfront consideration as the initial provisioning or migration. Don't leave this analysis until you need to move workloads, as it may be too late. The complexity of moving data and resources once provisioned to a public cloud provider will vary by provider and should be well understood before any workloads are provisioned. The best way to protect resources and data from "lock-in" is to use a hybrid cloud approach – that is, to extend infrastructure resources to a public cloud provider to take advantage of the agility and flexibility of the hyper-scalers without changing the underlying architecture. VMware Cloud Foundation allows the extension of your private cloud into a public cloud provider (for flexible capacity, data localization, or temporary scale requirements) while maintaining the underlying vSphere constructs – meaning that the exit strategy for these workloads could be as simple as a vMotion. An exit strategy should be included in the analysis of any third-party service provider.

 

  • Stakeholders – Clearly defined roles and responsibilities for key stakeholders is critical to the successful creation and execution of any cloud strategy. Involve your stakeholders as early as possible to provide input to the strategy and to support its communication and implementation. Stakeholders will include, but may not be limited to, resources from the following teams:
    • Executive stakeholder – preferably the CIO or VP of Infrastructure or Applications
    • Security Team
    • Application Development Team
    • Infrastructure Team
    • Finance Team

 

  • Strategy review  – How often will the strategy be reviewed and updated? It should be high level enough that a review once a year is sufficient.

 

  • Supporting information – Include links to supporting documentation, supporting strategies, policies, and where to find more information or direct questions.

 

There may be initial resistance to the cloud strategy. Open and honest communication with stakeholder teams will help to alleviate many concerns. There will also be corner cases or exceptions that are not covered by the strategy. Use the 80/20 rule for guidance. The first draft will not be perfect; it will need to be reviewed and updated as your organization matures. If it seems to be getting too detailed, refer back to the original intent of the document and reduce the scope:

  • Identify the key benefits of cloud technology adoption for your organization.
  • Decide which capabilities will be provided from a private, on-premises cloud. Prioritize the creation of these services.
  • Partner with key public cloud providers and maintain a published list of approved cloud services.
  • Identify and implement the protections required to reduce the risk of moving data and resources to the cloud and ensure there is a clearly defined exit strategy.
  • Communicate with stakeholders and internal teams to increase adherence to the strategy and reduce the likelihood of “Shadow IT”.

 

How to communicate the cloud strategy?

Once the cloud strategy has been created it should be widely communicated throughout your organization. Share the document on an accessible internal repository and request that it be announced at the highest possible level, preferably at an “All-Hands” meeting of the VP of IT or CIO. Then request an audience at team meetings for each level until it has been presented to everyone in the organization. Request interaction and allow feedback within these formats to increase the likelihood that it has been absorbed and understood by everyone in the IT organization.

 

Final Thoughts

Prior to joining VMware, I developed and implemented the cloud strategy at 2 different organizations where I was the Cloud Architect. It is a time-consuming process that requires collaboration, patience, research, and documentation. That said, I consider it to be time well spent. The process is critical to the long term success of using cloud services to support an organization’s digital transformation. Organizational transformation requires alignment across IT teams, business teams, and application development teams. The earlier the stakeholder teams can be involved in the creation of the strategy the higher the probability that it will meet their requirements, gain their support and achieve the level of collaboration required for organizational success. The strategy will evolve as the technology evolves, but a strategy should always answer the basics – what are we doing, why are we doing it, how are we doing it, and how will we know if we are successful?

 

References

The NIST Definition of Cloud Computing: http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf

Center for Internet Security: https://www.cisecurity.org/cis-benchmarks/ 

 

The post The Importance of a Cloud Strategy appeared first on VMware Cloud Management.

Blast From The Past: Where We’ve Come From

$
0
0
This post was originally published on this site ---

(To mark the end of the year we are posting every day through January 1 with lighter vSphere and VMware topics. We hope you enjoy them as much as we do. See them all via the “2019 Wrap Up” tag!)

We've made it! By the time this blog post goes live it will be 2020 in some parts of the world (Samoa!). In fact, at 8 hours behind GMT, VMware headquarters in Palo Alto, California, USA will be among the last places in the world to get to 2020 (Alaska is 9 hours behind, and Hawaii and the Aleutian Islands are 10). Frankly, I hope to be asleep by the time it happens, saving my energy to watch the University of Wisconsin-Madison Badgers play the University of Oregon Ducks in American college football later on New Year's Day. I hope your New Year's Day is relaxing and interesting, too.

Depending on how you define a decade this is the end of the 2010s and the start of the 2020s. In writing this end-of-2019 series I've dug up a lot of old photos and screen captures, logos, and even boxes for the products. It's pretty interesting to see where we started the decade. For example, at the start of the decade Microsoft Windows 7 and Windows Server 2008 R2 were the new hotness for desktops and servers. Red Hat Enterprise Linux 6 was released in 2010, too, as was Ubuntu 10.04 LTS (the Lucid Lynx!).

None of these new guest operating systems were a problem for the newly-minted vSphere 4, released in the summer of 2009 and followed up by vSphere 4.1 in August of 2010. Back then it was still ESX (no 'i' — ESXi didn't fully take over until 2011 and didn't reach feature parity until vSphere 5). Not a VMUG has gone by this year where we haven't heard from someone still running vSphere 4.1. While we absolutely encourage folks to stay current with patches and updates it's also both a blessing and a curse that it's still working for some. Imagine how old that hardware must be, though!

Speaking of hardware, our partners at Dell unveiled their 11th Generation servers in 2010. Those were the venerable PowerEdge R610s, R710s, and so on. HP was still HP (they split into HPE and HP, Inc. in 2014) and they were shipping the G7 line of servers. IBM was selling the System x systems, prior to selling their x86 server line to Lenovo in 2014. Hard disks were predominantly 3.5″ in size, and SSDs were in their commercial infancy, still gaining reliability and performance. The fanciest CPU you could get in your new server was an Intel Xeon X7560, with 8 cores running at 2.27 GHz and 24 MB of L3 cache. The biggest DIMMs available were 32 GB, so while you could put 384 GB of RAM in those machines, the memory architecture caused the RAM to run at 800 MHz if you filled all 18 DIMM sockets. The price and performance sweet spot was twelve 8 GB DIMMs for 96 GB of RAM, running at the maximum 1333 MHz.

VMware has a great timeline of the company history, and it shows that 2010 was the year that the VMware Foundation began operating (see our post from two weeks ago on charitable giving at VMware!). We've also acquired 46 companies over the last 10 years, sneaking in the completion of the Pivotal acquisition yesterday so we can start the new decade as one. These companies bring amazing talent and technology to be integrated with in-house innovations, creating award-winning and industry-leading products like AirWatch, NSX, BitFusion, Kubernetes & Project Pacific, VeloCloud, and more. We've also been acquired and merged, too, with EMC and Dell Technologies. Interesting times, and throughout we've remained committed to our massive partner ecosystem, letting customers have the freedom to choose their own hardware and software and system designs as they see fit.

We couldn’t do any of this without our customers, though. Customers and our community inspire us, give us something to work towards, and provide us with feedback both positive and constructive, which we appreciate very deeply. Thank you, very much. Everyone on the vSphere Team enjoys talking with customers at VMworld, at VMUGs, and at other gatherings like roundtables and such. If you need anything please help us help you — please reach out. Thank you!

(Come back next year — HA! — for an SEC-approved look at what the future holds for vSphere! For more posts in this series visit the “2019 Wrap Up” tag.)

Follow @VMwarevSphere on Twitter

The post Blast From The Past: Where We’ve Come From appeared first on VMware vSphere Blog.

VMware Welcomes The Future

$
0
0
This post was originally published on this site ---

(To mark the end of the year we have been posting every day through January 1 with lighter vSphere and VMware topics. We hope you enjoy them as much as we do. See them all via the “2019 Wrap Up” tag!)

For the final blog post of 2019 and of the 2019 Wrap Up, let's take a look forward. As I write this, the acquisition of Pivotal has just closed and we can move into the next phase of integrating them into our modern applications vision. Pivotal brings some great people and technology that will enable some really exciting possibilities for our customers. Through 2019, it definitely seemed like Kubernetes (K8s) was on everyone's mind – even those who are new to containers. K8s is going to be instrumental to VMware's strategy going forward and that should be evident with the Pivotal acquisition, VMware Tanzu, and Project Pacific. It is going to be an exciting year as we execute on further integrating with the rest of the VMware portfolio.


And, while Kubernetes is really gaining momentum within VMware and our customers, we’re also going to continue to experience a bit of a renaissance of the foundation of this modern applications vision. As that foundation, we expect vSphere is going to have an exciting year! vSphere is truly powering the Hybrid Cloud as it continues to solve new problems and enable customers to run any workload in any cloud. Our Hybrid Cloud strategy will continue to unlock new possibilities in 2020 for customers across the globe.


Whether you’re new to virtualization and just beginning your transformation, you’re the Jill-of-all-trades managing a small-to-medium sized data center, a Cloud Architect driving innovation for a global enterprise, or a Developer building a new cloud-native application, you’ll want to pay attention to vSphere in 2020. The vSphere Product Team is working hard to bring great new features to market for every application and organization.

New years bring new challenges as well as innovation. The technology industry continues to evolve and change at a breakneck pace. One of the reasons I really love working for VMware is that it has a culture that accepts these challenges and faces them head-on. Whether it is a technical, social, environmental, or people challenge, VMware supports its workforce in addressing those challenges so we can ultimately deliver more value to customers. In other words, we're allowed – and encouraged – to evolve. In 2020, I look forward to seeing how we continue the evolution and what challenges we take on as a company and as a product team.

On a more personal level, the upcoming year is going to be exciting for my team. We’re working on some great new content and ways to deliver that content more effectively. People continue to want to consume content in diverse ways from whiteboards to podcasts to live interactive workshops. We’re exploring new mediums and new ways to connect to our users. And, as always, we want your help. Please leave a comment below if you are looking for information on a topic or want to see content in a new way.

The vSphere Product Team sincerely hopes you had a great 2019 and wishes you an even better 2020. We look forward to the new year and bringing you new technology that you’ll be excited about!

(Thank you for joining us for this end-of-the-year series. If you missed a post you can find it using the the “2019 Wrap Up” tag. Please subscribe to this blog in your feed/news reader, and follow @VMwarevSphere on Twitter, for more information and news throughout the year.)

Follow @VMwarevSphere on Twitter

The post VMware Welcomes The Future appeared first on VMware vSphere Blog.

Announcing vSphere Days 2020

$
0
0
This post was originally published on this site ---

vSphere Days are coming in 2020! VMware and VMUG are hosting a new event this year, and you will want to be the first to experience it. Each vSphere Day is a complimentary, content-heavy, one-day event focused on the current and future needs of vSphere users.

 

vSphere Days

 

Based on the success of the vSphere Roadshows and feedback from our valued members we have created this new one-day event. Join our community as we plant strong roots and provide a platform for innovative ideas to grow. vSphere is at the core of who we are and what we do, and these events are our way of celebrating our foundation.

 

Attend a 2020 vSphere Day to experience information-packed breakout sessions, discover the tools needed to upgrade your current platform, and get the most out of the new features in vSphere. We’re here to help you leverage and extend your vSphere environment, and also share insight into the emerging and future technologies to come. Take your skillset to the next level in just one day!

 

The first round of vSphere Days in 2020 will be hosted within the United States in Austin, Pittsburgh, Salt Lake City, Orlando, Washington DC, Central Ohio, BC Regional, Upstate NY and Las Vegas! We plan to expand to other regions and cities later in 2020. Want us to come to your city? Leave a comment below!

 

To learn more or to register for an upcoming vSphere Days visit the VMUG Website.

 

Follow @VMwarevSphere on Twitter

The post Announcing vSphere Days 2020 appeared first on VMware vSphere Blog.


vSphere Support for Intel FPGA

$
0
0
This post was originally published on this site ---

With VMware vSphere 6.7 Update 1, VMware expanded its array of supported hardware accelerators by introducing support for Intel® Arria® 10 GX Field Programmable Gate Array (FPGA) devices. This blog post zooms in on what an FPGA is, what the generic FPGA architecture looks like, and how to expose an Intel® Arria® 10 GX FPGA to workloads running on vSphere.

Why an FPGA?

FPGA devices have been around for decades, for example in industrial automation or in the automotive industry. Today, the FPGA is finding its way into the data center. We now have the ability to utilize these devices for our workloads.

An FPGA is a high-end device with embedded logic elements to offload the CPU. In essence, the logic elements are integrated circuits that contain logic gates and I/O circuitry. The latter ingests data from a source and outputs the resulting data. Using its logic elements, we can design any digital circuit. Once the digital circuit is completed, it can be compiled (also referred to as 'synthesizing') into a bitfile (or bitstream) and loaded into the FPGA. The FPGA now behaves like the digital circuit we designed.

The highly parallel architecture of FPGAs allows for higher performance, lower latency, lower power consumption, and higher throughput for computations. Typical workloads for FPGA devices include Network Function Virtualization (NFV), deep neural networks, digital signal processing, and cryptography workloads.

 

FPGA Architecture

Zooming in on the Intel® Arria® 10 GX FPGA, the following high-level architecture is used. Most noticeable is the FPGA's main chipset containing the logic elements. The number of logic elements sets the limit for the digital circuits as designed by developers. The Arria® 10 GX FPGA comes with networking interfaces as well, allowing remote connections to the FPGA.

 

FPGA on vSphere Support

To use the Intel® Arria® FPGA with VMware vSphere, version 6.7 Update 1 is required. DirectPath I/O is the only supported method today. This means that the hardware address of the PCIe device (the FPGA in this case) is directly exposed in the virtual machine. Using DirectPath I/O will introduce constraints on vSphere features like vMotion and DRS because of the one-to-one relationship between the PCIe device and the virtual machine. However, it will provide near bare-metal performance characteristics. VMware Infrastructure administrators can now make this programmable hardware accelerator available in VMware vSphere virtual machines.

To verify FPGA device availability on your ESXi host, you can check the vCenter Server UI under Hardware > PCI Devices. The Intel® Arria® FPGA will be listed as an Intel® Processing accelerator.

Another way to check for the presence of an FPGA device in an ESXi host is to issue the lspci | grep -i accel command. This requires root access to the ESXi host.
This will output an entry similar to this:

0000:5e:00.0 Processing accelerators: Intel Corporation Device 09c4
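If you prefer to check from a script rather than the UI or a root shell, here is a hedged pyVmomi sketch (hostnames and credentials are placeholders) that lists each host's PCI devices and flags the ones reported as passthrough-capable or passthrough-enabled, which is where a supported FPGA would show up:

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Index the host's passthrough info by PCI device id for quick lookup.
        passthru_list = host.config.pciPassthruInfo if host.config else []
        passthru = {p.id: p for p in passthru_list}
        for dev in host.hardware.pciDevice:
            info = passthru.get(dev.id)
            if info and (info.passthruCapable or info.passthruEnabled):
                print(f"{host.name}: {dev.id} {dev.vendorName} {dev.deviceName} "
                      f"(passthrough enabled: {info.passthruEnabled})")
finally:
    Disconnect(si)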

Once DirectPath I/O (also referred to as Passthrough-enabled Device) is enabled for the FPGA device, you can add a new PCIe device using the FPGA to a virtual machine. You are now set to use the FPGA device. Check the instructions in the Acceleration Stack Quick Start Guide for Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA on how to utilize the FPGA.

Remote Connections

The Intel® Arria® 10 GX FPGA provides the ability for remote connections using its onboard network interfaces. vSphere with Bitfusion can also help to assign these FPGA devices to remote workloads. Remote access to the FPGA provides the ability to use vMotion for the virtual workloads utilizing the FPGA remotely, even though that same FPGA device is also accessed directly by a local virtual machine using vSphere DirectPath I/O.

Check out the following VMworld video by Intel® to learn about their FPGA device running in vSphere, what the performance impact is and how vMotion can still be used when using remote connections to the FPGA device.

 

To Conclude

With innovations on FPGA devices and other hardware accelerators increasing, vSphere expands its support for these devices. Please refer to the VMware Hardware Compatibility List (HCL) to stay up-to-date on what devices are supported with vSphere as we will continue to expand support for hardware accelerators. Click here to learn more about Intel® FPGA devices.

More Resources to Learn

Exploring the GPU Architecture and why we need it.
Out now! Learning Guide – GPUs for Machine Learning on vSphere
Virtual GPU and FPGA with Bitfusion at Tech Field Day Extra at VMworld 2019

 

The post vSphere Support for Intel FPGA appeared first on VMware vSphere Blog.

Don’t waste your VMC logs. Put them to work for you.

$
0
0
This post was originally published on this site ---

Managed services such as VMware Cloud on AWS save you valuable time and effort that might otherwise be spent managing infrastructure and focusing on your workloads. However, it’s just as important to know what’s happening in your VMware Cloud SDDC as it is your on-premises data center.

Often, customers migrate or deploy applications to VMware Cloud on AWS – end of story. But how do they know their applications are running optimally and what changes are being made?

For example, did you know you can use the same vRealize Operations instance you use to manage the performance, capacity, and troubleshooting of your on-premises environment with VMware Cloud on AWS? I’d highly recommend it. If you’d like to learn more, Sunny Dua released a great blog post on this topic last year. As for logs, vRealize Log Insight Cloud makes this a no-brainer.

vRealize Log Insight Cloud (formerly Log Intelligence) was developed by VMware to help our Site Reliability Engineers keep a close eye on our customers' VMC SDDCs. It allows us to search for events to prevent issues and create alerts to tell us when events occur. Since we were already collecting every log in a centralized tool, it made sense to make this available to every VMware Cloud on AWS customer. Just request access to vRealize Log Insight Cloud from your Cloud Services Portal and you'll be automatically granted a 30-day trial of the fully featured product. This includes ingestion of your VMware Cloud on AWS audit logs, NSX-T audit logs, Cloud Service Portal logs (upon request), logs from your on-premises data center, and any other logs, such as application logs.

You’ll also have access to 50 content packs pre-built for VMware technologies as well as other solutions such as AWS services and Kubernetes. After the 30-day trial, you have the option to purchase vRealize Log Insight Cloud at a competitive per-GB rate or you can drop down to the edition that’s included with VMware Cloud on AWS. This edition still gives you all your VMware Cloud on AWS SDDC audit logs and the associated content pack, but limits the number of logs you can ingest daily for NSX-T, CSP, and other logs. For full comparison and pricing information, visit the vRealize Log Insight Cloud product page.

Now that you know vRealize Log Insight Cloud is available to all VMware Cloud on AWS customers (and can be purchased by non-VMC customers), let's talk about what kinds of information you can gather. Unlike your on-premises SDDC, where you have access to all logs, vRealize Log Insight Cloud only gives access to your audit logs. The reason is that we're already managing the underlying infrastructure and keeping watch for hardware or other related issues, so you don't need to worry about any of this.

Instead, you’ll see important information such as when someone logs into your SDDC, or fails to log in. You’ll also be able to see when someone creates or deletes a virtual machine, firewall rule, NAT rule, or logical network just to name a few. If you enable NSX-T firewall rule logging, you’ll also be able to see when traffic is accepted or rejected including source and destination IP.

 

Comparing NSX-T traffic being passed through vs rejected is simple to do with vRealize Log Insight Cloud.

 

vRealize Log Insight Cloud allows you to visualize and organize unstructured log data. You can quickly identify event types, trends, and anomalies thanks to vRealize Log Insight Cloud’s powerful AI and Machine Learning engine. And autocomplete helps you to build log queries with little to no experience. If you can use a search engine, you can use vRealize Log Insight Cloud.

Apart from making your logs easy to find and understand, vRealize Log Insight Cloud can also alert you when something happens. Alerts can be delivered via e-mail or even webhooks. For example, when a new VM is created, vRealize Log Insight Cloud can send a message to your team’s Slack channel letting everyone know and preventing unauthorized workloads from being spun up.
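The exact webhook payload format isn't covered here, so as a hedged sketch (with a placeholder Slack webhook URL, standard library only), here is a tiny relay that accepts an alert webhook on port 8080 and reposts the raw body to a Slack incoming webhook:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

class AlertRelay(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read whatever the alert sends and forward it as a plain Slack message.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8", errors="replace")
        payload = json.dumps({"text": "vRealize Log Insight Cloud alert:\n" + body[:1000]}).encode()
        urlopen(Request(SLACK_WEBHOOK_URL, data=payload,
                        headers={"Content-Type": "application/json"}), timeout=10)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AlertRelay).serve_forever()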

vRealize Log Insight Cloud includes the ability to overlay queries, making event correlation a snap.

 

vRealize Log Insight Cloud can also integrate with your Security Event and Incident Monitoring tool (SIEM). Whether you’re running your SIEM in the cloud or on-premises, forwarding logs from vRealize Log Insight Cloud is easy. Log forwarding rules allow you to choose your destination (on-premises or cloud) and forwarding protocol (HTTP/HTTPS, Syslog, or CFAPI), and whether you want to forward all logs or just a subset of your logs based on custom filters. Forwarding your logs on-premises is simply a matter of deploying a Cloud Proxy appliance in your data center. The Cloud Proxy communicates with vRealize Log Insight Cloud over HTTPS so no special firewall rules are required and all traffic is encrypted.

You may even have compliance requirements that hold you accountable for storing logs for a specific period. By default, trial and paid customers have access to their logs for 30 days (or seven days for the VMC included edition). However, all users of vRealize Log Insight Cloud can archive their logs to S3 storage for up to seven years. Just like log forwarding, you can choose to archive all your logs or just a subset based on custom filters.

Having the ability to see what’s happening in your VMC SDDC is not just a nice-to-have, but rather a necessity. Considering vRealize Log Insight Cloud is available to all VMware Cloud on AWS customers, there’s no reason not to spin it up and see what it can do for you. Its simple yet powerful log analytics make it easy for anyone to discover events both good and bad, while its ability to integrate with your existing tools helps keep you in the loop at all times. Don’t let mistakes or malicious activity catch you by surprise. Make your VMC logs work for you.

For more information on vRealize Log Insight Cloud, visit the product page.

The post Don’t waste your VMC logs. Put them to work for you. appeared first on VMware Cloud Management.

Managing VMware Cloud on AWS with vRealize Operations

$
0
0
This post was originally published on this site ---

So, you’re thinking about leveraging VMware Cloud on AWS (or maybe you’re already there). You might be retiring old HW in your datacenter, maybe expanding your dev-ops needs, or even hosting some of your business critical apps. Regardless of your reason to adopt a hybrid cloud, you’re starting to wonder how you are going to manage this new environment.

  • How do you manage the performance of your hybrid apps?
  • How do you ensure capacity and manage costs?
  • How do you troubleshoot outages if/when they occur?
  • What about compliance concerns for your workloads?

Fortunately, vRealize Operations is the easy answer to these types of hybrid cloud management needs. It treats VMware Cloud on AWS as a first-class companion and, powered by AI/ML, employs a 4-pillar approach to hybrid cloud management as shown below.

  • Continuous Performance Optimization
  • Efficient Capacity and Cost Management
  • Intelligent Remediation
  • Integrated Configuration and Compliance

These 4 pillars even show up on the start screen to help you navigate fast to the items you wish to manage.

This technical blog discusses the 4 pillars of hybrid cloud management, how vRealize Operations can help deliver those and even gives you some links to deploying it when you are ready. Let’s jump in!

Workload Optimization

The ability to meet your specific hybrid cloud business needs is paramount to your success. To that end, vRealize Operations Workload Optimization provides a self-driving component to your new VMware Cloud on AWS datacenter. The idea behind a self-driving datacenter is you just define your business and operational intent and then “take your hands off the wheel” and let vRealize Operations drive. It will monitor the environment and, when the datacenter deviates from your desired needs, it will quickly bring it back to a desired state all while honoring your intent.

Workload optimization works closely with DRS to ensure applications have the resources they need. Under its watch, VMs will be moved to other clusters within the same datacenter (or custom datacenter) to meet the performance, operational, and business intent you have defined.

What is Operational and Business Intent

Operational and business intent is about helping you to meet your VMware Cloud on AWS needs. First you need to ask yourself, “What are you trying to do in your hybrid cloud?”. Are you expanding to meet dev-ops needs? Are you moving some of your tier 2 applications to retire older HW back in your private datacenter? Will there be any business critical apps running there? Based on the answers to these questions you can set up your operational and business intent. Examples of operational and business intent can include one or more of these examples:

  • Assure the best application performance
  • Drive license enforcement
  • Meet compliance goals
  • Save money on infrastructure costs
  • Implement SLA tiering
  • And more!

Setting up your Operational and Business Intent

Setting up your operational and business intent is very easy and can be done in 3 simple steps. In the first step, you need to determine your target utilization objective for the datacenter. If application performance is your top concern, spread workloads evenly over the available resources by choosing Balance. If instead you are looking to place workloads into as few clusters as possible, lower your cost per VM, and possibly repurpose or retire some hosts, choose Consolidate.

In the second step, you need to configure the percentage of headroom.  Headroom allows you to choose how much risk is acceptable in a cluster.  It provides a percent buffer of CPU, memory and disk space and reduces the risk from bursts or unexpected demand spikes.  In a production environment it is not uncommon to have a 20% Headroom buffer.
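To make the headroom idea concrete, here is a trivial, illustrative calculation (the numbers are made up, and this is not the exact formula vRealize Operations uses internally):

def usable_capacity(raw, headroom_pct):
    # Headroom reserves a slice of capacity so bursts don't push a cluster to its limit.
    return raw * (1 - headroom_pct / 100.0)

cluster = {"cpu_ghz": 100.0, "memory_gb": 1024.0, "disk_tb": 40.0}
for resource, raw in cluster.items():
    print(resource, usable_capacity(raw, 20))  # with 20% headroom, cpu_ghz -> 80.0, etc.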

Finally, you need to determine how your business needs should drive VM placement across the clusters. vRealize Operations does this by leveraging and honoring vCenter tags. With vCenter tags you can define which VMs should be placed on which cluster. Simply tag the cluster(s) and the VM with the same tag and voilà… Workload Optimization will make sure that workload is placed on that cluster (or clusters). One of the most common use cases for this feature is SLA tiering to ensure your most critical applications have access to the resources they need. In this example you simply tag specific workloads and clusters as either Gold (Tier 1) or Silver (Tier 2). Then, when making move decisions, vRealize Operations will ensure the Gold (Tier 1) applications are given preferential access to the clusters with the most available resources.

vSAN-Aware Workload Optimization

It is important to understand vSAN, the storage layer of VMware Cloud on AWS, when moving workloads between clusters. vRealize Operations supports moving workloads within a vSAN-based datacenter in 3 key ways.

Resync Aware

vRealize Operations understands the underlying mechanics of vSAN (e.g., vSAN resyncs) and will not generate a workload optimization plan if any vSAN cluster is currently running a resynchronization, so that it does not get in the way during this time. Simple.

Respects Slack Space

Being a distributed storage solution, vSAN needs free space (aka slack space) to perform certain actions that are transparent to the user. vRealize Operations is aware of this need and will not make any workload optimization recommendations that may infringe on the slack space requirements of vSAN.

Honors Storage Policy

Storage Policies are an incredibly powerful tool that allows you to define storage protection and performance requirements using a set of rules that are applied prescriptively to VMs and VMDKs. vRealize Workload Optimization leverages vCenter and Storage vMotion to verify the target location will properly support the storage policies before making any moves.

Automate it!

vRealize Operations enables full automation of workload optimization so you can be sure your workloads are meeting both business and operational intents around the clock.  A simple click of the Automate button and vRealize Operations takes over.

Summary

Workload Optimization needs to be one of the pillars of your VMware Cloud on AWS management strategy. Fortunately, workload optimization is a main pillar of vRealize Operations, making it easy to drive your operational and business intent!

For more information on Workload Optimization in vRealize Operations check out this technical blog Start Running a Self-Driving Datacenter – vRealize Operations Workload Optimization!

Efficient Capacity and Cost Management

Capacity Planning

Capacity planning for VMware Cloud on AWS works the same way it does for your on-premises vSphere clusters. As you can see below, SDDC-Datacenter is just one of many datacenters managed by vRealize Operations.

You can even track capacity remaining and VMs remaining. Pay special attention to the High Availability column, which was added in vRealize Operations to help quantify how much of a cluster’s resources are reserved for HA based on admission control.

Reclaim

Now that you know how much capacity you have remaining, you might be wondering how you can reclaim capacity. The good news is that vRealize Operations has you covered here too. You can see which VMs are identified as reclaimable, which, if addressed, can free up capacity to run additional workloads that provide business value. The system automatically identifies powered-off and idle VMs, VMs with old snapshots, and orphaned disks.

Migration Assessments

Making the decision to migrate to VMware Cloud on AWS is often the first step you’ll encounter. With vRealize Operations, you can use the Migration Planning: Public Cloud What-If scenario to help make that decision. With the What-If scenario, you can determine how many hosts will be needed and the potential cost of a new VMware Cloud on AWS environment based on existing VMs in your environment or for net new VMs that will be provisioned for the first time in VMware Cloud on AWS.
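As a rough, hedged illustration of the kind of math behind such an assessment (a back-of-the-envelope sketch, not the actual What-If model, with hypothetical host capacity and VM demand figures):

import math

# Hypothetical usable capacity per host after overhead and headroom.
HOST = {"cpu_ghz": 80.0, "memory_gb": 450.0, "storage_gb": 10000.0}

# Hypothetical demand figures gathered from monitoring, repeated to simulate 100 VMs.
vms = [
    {"cpu_ghz": 2.0, "memory_gb": 16, "storage_gb": 200},
    {"cpu_ghz": 4.0, "memory_gb": 32, "storage_gb": 500},
] * 50

def hosts_needed(vms, host):
    totals = {k: sum(vm[k] for vm in vms) for k in host}
    # The most constrained resource determines the host count.
    return max(math.ceil(totals[k] / host[k]) for k in host), totals

count, totals = hosts_needed(vms, HOST)
print(f"Total demand: {totals}")
print(f"Estimated hosts required: {count}")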

Cost Management

The Management Pack for VMware Cloud on AWS allows you to monitor the cost of your VMware Cloud on AWS environment by collecting bills from the Cloud Services portal. The management pack allows you to track your outstanding expenses, year to date costs, as well as the cost for your subscriptions, on-demand hosts, network egress charges, and purchase history.

The Management Pack for VMware Cloud on AWS is available for download on Solutions Exchange and requires a refresh token that you can generate in your Cloud Services portal.

Intelligent Remediation

To troubleshoot your VMware Cloud on AWS environment, you get to take advantage of the existing dashboards and alerts delivered out of the box for vCenter. One of the most exciting features in vRealize Operations 8.0 is the new Troubleshooting Workbench powered by AI/ML. Check the blog post by John Dias, vRealize Operations Troubleshooting Powered by Machine Learning, to see it in action and why you need to be using the Troubleshooting Workbench.

Some customers have asked for VMware Cloud on AWS specific dashboards, so Sunny Dua has published some excellent dashboards at vRealize Operations dashboards to monitor VMware Cloud on AWS, which I encourage you to check out in your environment.

Integrated Configuration and Compliance

For the Integrated Compliance pillar, we now have support for monitoring compliance for vSAN and NSX-T out of the box. Compliance even supports VMware Cloud on AWS, which includes vSAN and NSX-T as well.

Deployment and Configuration Details

Deployment

Monitoring a VMware Cloud on AWS environment starts with reviewing your deployment architecture. Depending on whether you’re using an existing vRealize Operations instance or deploying a new instance, it’s recommended to review the architecture options described in Operate VMware Cloud on AWS using vRealize Operations.

Configuration

In the latest release of vRealize Operations, vCenter Servers are managed as Cloud Accounts, which is consistent with the way vCenter is managed in vRealize Automation. To add a VMware Cloud on AWS vCenter:

  1. Add a vCenter Cloud Account
  2. Select vCenter Account Type
  3. Enter the vCenter server name and credentials
  4. Set the Cloud Type under Advanced Settings to VMware Cloud on AWS
  5. Enable the vSAN checkbox on the vSAN tab

Making the transition to VMware Cloud on AWS doesn’t have to be painful or time consuming if you have the right management tools in place to make it easy, and vRealize Operations is all about making things “easier on you”! A great way to learn more is to download a trial of vRealize Operations and try it in your own environment! You can find more demos and videos on vrealize.vmware.com. Be sure to stay tuned here as we will have even more blogs about vRealize Operations coming soon.

The post Managing VMware Cloud on AWS with vRealize Operations appeared first on VMware Cloud Management.

The vExpert Cloud Management Zombie VM Hunt

$
0
0
This post was originally published on this site ---


 

Zombies are everywhere, needlessly consuming resources in every customer's environment. Zombie VMs are virtual machines that were provisioned and, for one reason or another, are unintentionally left unused. This can be due to stalled or abandoned projects or even incomplete decommissioning procedures. Despite being unused, zombie VMs continue to eat valuable compute, memory, storage, and even network resources, costing organizations money and exposing them to unnecessary security risks. What's worse is that zombie VMs have a negative impact on the environment as a result of the additional power and cooling required to feed these unused workloads.

 

VMware’s vRealize Operations provides value out of the box by identifying wasted resources in your organization and their financial impact.

 

VMware's vRealize Suite is in a prime position to help customers identify zombie VMs. vRealize Operations can analyze metrics and properties of VMs, applications, and physical hosts to identify wasted resources and associate a monthly cost with them. vRealize Log Insight can be used to validate that a VM or an application has not been accessed by analyzing system logs. And vRealize Network Insight can analyze traffic flows to and from your virtual machines and quickly identify VMs with little to no network traffic. With these tools, organizations can identify and eliminate a lot of waste with very little effort.
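If you want a scripted starting point alongside those tools, here is a hedged pyVmomi sketch (the vCenter details are placeholders) that flags two common zombie indicators, powered-off VMs and VMs with snapshots older than 30 days:

from datetime import datetime, timedelta, timezone
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

def walk_snapshots(snapshots):
    # Recursively yield every snapshot in a VM's snapshot tree.
    for snap in snapshots:
        yield snap
        yield from walk_snapshots(snap.childSnapshotList)

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)
    for vm in view.view:
        if vm.runtime.powerState == "poweredOff":
            print(f"Powered off: {vm.name}")
        if vm.snapshot:
            for snap in walk_snapshots(vm.snapshot.rootSnapshotList):
                if snap.createTime < cutoff:
                    print(f"Old snapshot ({snap.createTime:%Y-%m-%d}): {vm.name} / {snap.name}")
finally:
    Disconnect(si)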

 

VMware’s vRealize Network Insight can easily identify virtual machines with little to no network traffic flowing to or from them. This is one indicator of a zombie VM.

 

This seemed like a great way to challenge our Cloud Management vExperts. For the uninitiated, the vExpert Cloud Management program is a sub-program of the greater VMware vExpert community. vExperts are hand-picked every year and awarded the highly sought-after title in recognition of their efforts outside of their day-jobs. vExperts are primarily customers who have a passion for VMware’s products and are willing to share their experiences through blog posts, presentations, and other publicly available spaces such as YouTube. The VMware vExpert program is about to launch into its 11th year and the vExpert Cloud Management sub-program will be entering its second year in 2020.

On November 15th, 2019 we kicked off our vExpert Cloud Management Zombie VM Hunt with VMware's Director of Sustainability Innovation, Nicola Peill-Moelter, presenting on the Zombie VM problem. She shared with our group the economic and environmental impacts that zombies pose. We then challenged our vExperts to share how they would use vRealize Operations, Log Insight, or Network Insight to identify zombie VMs. In return for their efforts, we offered each participant a custom vExpert Cloud Management golf shirt and a donation to plant trees in the Amazon Rainforest in their name.

Since the Zombie VM Hunt began, news of the Australian wildfires began to pop up as a major global issue. According to CNN, these fires have destroyed thousands of homes, killed nearly 30 people, and impacted an estimated half a billion animals. As of this blog’s writing, the fires continue to burn due to extreme droughts and have burned more than 17.9 million acres. Because of the incredible scope of this devastation, it only seemed right to instead donate to the replanting of trees in Australia through the One Tree Planted organization. I’m happy to share that 60 new trees will be planted in Australia as a result of our vExpert’s efforts.

Our Cloud Management vExperts rose to the challenge and shared their ideas and experiences via blog posts, custom dashboards, and even by presenting the challenge and solutions at a local VMware User Group (VMUG) meeting in Myanmar!

 

Myint Htay Aung presenting to the Myanmar VMUG community about the Zombie VM problem.

 

In this first blog post, Graham Barker explains how he used vRealize Operations to identify zombie VMs. By analyzing the age of virtual machines, their uptime, virtual hardware and VMware Tools versions, as well as network utilization, Graham was able to identify zombie VMs and bundle it all into a single dashboard that you can use today!

https://virtualg.uk/using-vrealize-operations-to-identify-zombie-vms/

 

Matt Menkowski shows us how to create a custom view to identify powered-off VMs. While powered-off VMs don't consume physical CPU or memory, they still consume storage. Be sure to click on the two tabs at the top of Matt's blog post. The first shows you what is available out of the box with vRealize Operations, and the second walks you through the custom view creation. There's also a URL to download the view and use it right away.

https://vmscribble.com/vrops/vrops-8-vm-powered-off-timestamp-view/

 

Alain Greenrits shared a custom vRealize Operations view to identify idle VMs by color-coding them based on CPU, memory, disk, and network usage. VMs that are highlighted green are using less than 500 MHz of CPU, 500 MB of memory, and 100 kBps of network and disk throughput. He also incorporated this view into a custom dashboard that you can download via the link in his blog.

https://bitstream.geenrits.net/vmware/detect-historical-idle-vms-on-vrops/

 

And finally, Elizabeth Souza created a custom vRealize Operations dashboard to show idle and powered-off virtual machines, datastores with wasted space, and even oversized VMs.

Elizabeth’s blog is available here…

http://geeklady.com.br/index.php/2020/01/04/procurando-por-vms-zumbis/

And you can download her dashboard here…

https://code.vmware.com/samples/6753/identifying-zombies-vms?h=vRealize%20Ops%20Dashboard

 

We want to thank each of our Cloud Management vExperts for taking the time to share their solutions to the Zombie VM problem with the rest of the community. We hope you will find their ideas useful in using the vRealize Suite to save your organization from waste and reduce the carbon emissions generated by keeping unused VMs running. For more information on VMware's vExpert program, visit https://vexpert.vmware.com/. For more information on VMware's vRealize Suite, visit https://www.vmware.com/products/vrealize-suite.html.

 

Special thanks to Nicola Peill-Moelter and Steve Tilkens for making this program possible!

 

The post The vExpert Cloud Management Zombie VM Hunt appeared first on VMware Cloud Management.

Forwarding vRealize Log Insight events to a SIEM Solution

$
0
0
This post was originally published on this site ---

 

When setting up a log forwarding connection from vRealize Log Insight to a SIEM solution like Splunk or QRadar, you need to filter on a particular set of event types to limit the stream of logs sent to the receiving solution. With this post I make a humble attempt to offer some guidance on which events are typically forwarded for security auditing, such as logins, reboots, and so on.

Disclaimer: The actual set of events you may need to monitor in your SIEM solution may vary based on the needs of your security team.

 

Let me jump right into the nitty gritty details of what you need to do forward the relevant events based on my experiences with vRealize Log Insight customers.

Step 1: Ensure the user you use to log into vRealize Log Insight has full administrator privileges.

Step 2: Create a “New destination” by clicking Event Forwarding option in the Management section of the Administration UI in vRealize Log Insight.

 

Step 3: Give your destination a meaningful name like "Security events to QRadar@10.12.13.14", and enter the FQDN or IP of the destination receiver.

Step 4: Select syslog as the protocol and, optionally, give it a tag name. Note: I have seen issues with some customers where adding a tag name prevents the receiving end from parsing the event logs correctly, so only use a tag after testing it at the receiving end. Also note that event tags may alter the original event log through the addition of the tag.

Step 5: Use the Add Filter link, select 'text' with the 'matches' option, and provide a list of keywords that match the security-specific events.

The key here is to select a keyword list that covers all of the security-specific logs; the keywords must be entered manually. This is a list (not vSphere-specific) that I have compiled based on my experience with customers attempting to forward security-specific events to their SIEM solutions:

  • “password was changed”
  • “logged out”
  • “cannot login”
  • “logged in”
  • “rejected password for user”
  • “permission rule removed”
  • “DCUI has been enabled”
  • “firewall configuration has changed”
  • “permission created”

Step 6: Use the Run in Interactive Analytics link to verify that vRealize Log Insight is querying the correct set of events, so that you send exactly what you need to the receiving solution.

Step 7: Use the Test button to send a test event.

Step 8: Validate at the receiving end in QRadar, Splunk, or another solution that you have (a) received the event as expected and (b) that the event is in the expected format.

Step 9: Save the forwarder destination using the Save button in vRealize Log Insight.
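As an optional sanity check on the receiving side (complementing the Test button in Step 7), you could also emit a synthetic event straight at the SIEM with a few lines of Python. This assumes plain UDP syslog on port 514 and reuses the placeholder IP from the example destination name above; it helps confirm that your parsing rules pick up the keywords:

import logging
import logging.handlers

# Send one synthetic "cannot login" style event to the syslog receiver.
handler = logging.handlers.SysLogHandler(address=("10.12.13.14", 514))
logger = logging.getLogger("li-forwarding-test")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("test event: user vmpro-test cannot login from 192.0.2.10 (forwarder validation)")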

 

You are all set, and your SIEM solution should receive the events as received by vRealize Log Insight. If you come across more events that need to be sent, you can simply log in to vRealize Log Insight, edit your destination, and add more keywords to the same text filter to forward the newly identified events. You could alternatively clone the forwarder destination and send to another solution by modifying the name and host IP in the cloned forwarder destination.

 

Happy forwarding! I hope you find this information useful.

 

 

The post Forwarding vRealize Log Insight events to a SIEM Solution appeared first on VMware Cloud Management.

Viewing all 1975 articles
Browse latest View live