Thursday, October 9, 2014

ITSM adoption forces a streamlined IT operations culture at Desjardins, paves the way to cloud

Our next innovation case study interview highlights how Desjardins Group in Montréal is improving its IT operations through an advanced IT service management (ITSM) approach.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more, BriefingsDirect sat down with Trung Quach, ITSM Manager at Desjardins in Québec, at the recent HP Discover conference in Las Vegas. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: First, tell us a little bit about your organization. You have a large network of credit unions.

Quach: It's more like cooperative banking. We are around 50,000 people across Québec, and we've started expanding into the rest of Canada and the US.

Gardner: Tell us a little bit about your IT organization, the size, how many people, how many datacenters? What sort of IT organization do you have?

Quach: We're around 2,500 and counting. We're mainly based in Montréal and Lévis, which is near Québec City. Most of them are in Montréal, but some technical people are in Lévis. 

Gardner: Tell us about your role. What are you doing there as ITSM manager?

The ITIL process

Quach: I joined Desjardins last year in the ITSM leader position. The role is about process -- the ITIL process and everything involved with the tooling -- as well as supporting those overall processes.

Gardner: Tell us why ITSM has become important to you. What were some of the challenges, some of the requirements? What was the environment you were in that required you to adopt better ITSM principles?

Quach: A couple of years ago, when they merged 10-plus IT silos into one big group, Desjardins needed to centralize the process and put best practices in place to be more efficient and competitive -- and to give higher value to the business.

Gardner: What issues, in particular, cropped up as a result of that decentralization? Was it poor performance, too much cost, too many manual processes, all of the above?

Quach: We had a lot of manual processes, and a lot of tools. To be able to measure the performance of a team, you need to use the same process and the same tools, and then measure yourself on it. You need to optimize the way you do it, so that you can provide better IT services.

Gardner: What have been some of the results of your movement toward ITSM? What sort of benefits have you realized as a result?

Quach: We had many of them. Some were financial, but the most important, I think, are service quality and the availability of those services. One indicator is a 30 percent reduction in major incidents over the last two years.

Gardner: What is it about your use of ITSM that has led to that significant reduction in incidents? How does that translate?

Quach: We put our new problem-management approach to work, together with the problem processes. When we open tickets, we can take care of incidents in a coordinated way at the enterprise level, so the impact is felt everywhere. We can now advise the lines of business, follow up on an incident, and close it rapidly. We then follow up on any underlying problems and fix the real issues so they don't come back.
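
As a minimal sketch of that coordinated handling -- with hypothetical names and structures, not Desjardins' actual tooling -- incidents sharing a root-cause signature can be grouped under a single problem record, so the real issue is fixed once:

```python
from dataclasses import dataclass, field

@dataclass
class Problem:
    root_cause: str
    incident_ids: list = field(default_factory=list)
    fixed: bool = False

class ProblemRegistry:
    """Hypothetical sketch: group related incidents under one problem record."""
    def __init__(self):
        self.problems = {}  # signature -> Problem

    def record_incident(self, incident_id, signature):
        # Incidents sharing a signature share one problem record, so the
        # enterprise fixes the root cause once instead of per ticket.
        problem = self.problems.setdefault(signature, Problem(root_cause=signature))
        problem.incident_ids.append(incident_id)
        return problem

registry = ProblemRegistry()
registry.record_incident("INC-1001", "db-connection-pool-exhausted")
p = registry.record_incident("INC-1042", "db-connection-pool-exhausted")
print(len(registry.problems), len(p.incident_ids))  # 1 problem, 2 incidents
```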

Gardner: Have you used this to translate back to any applications development, or custom development in your organization? Or is this more on the operations side strictly?

Better support

Quach: We started all of this on the operations side, but last year we started on the development side, too. They're being brought into our processes gradually, and that's going to keep getting better, so we can better support the full IT lifecycle.

Gardner: Tell us about HP Discover. What's of interest to you? Have you been looking at what HP has been doing with their tools? What's of most importance to you in terms of what they do with their technology?

Quach: I can tell you how important it is for us. Last year we didn't go to HP Discover. This year, around eight people from my team and the architecture team are here. That shows you how important it is.

Now we've spread out. A lot of my team members went to explore tools and everything else that HP has to offer -- and HP has a lot to offer. We went to learn about the cloud, as well as big data. It all works together. That's why it was important for us to come here. ITSM is the main reason we're here, but I want to make sure that everything works together, because the IT processes touch everything.

Gardner: I've talked to a number of organizations, Trung, and they've mentioned that before they feel comfortable moving into more cloud activities, and before they feel comfortable adopting big data, analytics platforms, they want to make sure they have everything else in order. So ITSM is an important step for them to then go to larger, more complex undertakings. Is that your philosophy as well?

Quach: Yes. There are two ways to do this. You use that technology to force yourself to be disciplined, or you discipline yourself. ITSM is one way to do it. You force yourself to work in a certain manner, a streamlined manner, and then you can go to the cloud. It's easier that way.

Gardner: Then, of course, you also have standardization in culture, in organization, not just technology, but the people and the process, and that can be very powerful.

Quach: If you asked me about cloud -- and I have done this with another company -- in a 30-minute interview about cloud, I would use 29 minutes to talk not about technology, but about people and processes.

Gardner: How about the future of IT? Any thoughts on the big picture of where technology is going? Even as we face larger data volumes, more complexity, and mobile applications, what are your thoughts on how we solve some of those issues in the big picture?

Time to market

Quach: More and more, IT is going to be challenged to meet the speed demanded for improved time to market. To do that, you need processes, technology and, of course, people. The client, the business, is going to ask us to be faster. That's why we'll need to go to the cloud. But before we go to the cloud, we need to master our IT services. If we don't, we won't have that agility, and we won't be competitive.

Gardner: Looking back, now that you have gone through an ITSM advancement, for those who are just beginning, what are some thoughts that you could share with them?

Quach: In an ITSM project, it's very hard to manage change. I'm talking about the people change, not the change-management technology process. Most of the time, you put the tool in place and say that everybody has to work with it. If I were to redo it, I would bring in more people to understand the latest ITSM thinking and processes, and explain why, in five or 10 years, it's really going to help us.

After that, we put in the project, but we follow people and train them every year. ITSM is a never-ending story. You always have to be close to your clients. Even if they're IT, they're your clients or partners. You need to coach them, to make sure they understand why they're doing this. Sometimes it takes a bit longer to get it right at the beginning, but it's all worth it in the end.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tuesday, October 7, 2014

MIT Media Lab computing director details the virtues of cloud for agility and disaster recovery

The next BriefingsDirect innovator case study interview focuses on the MIT Media Lab in Cambridge, Mass., and how it's exploring cloud and hybrid cloud to gain such benefits as IT speed, agility, and robust three-tier disaster recovery (DR).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how the MIT Media Lab is exploiting cloud computing, we’re joined by Michail Bletsas, research scientist and Director of Computing at the MIT Media Lab. The discussion, at the recent VMworld 2014 Conference in San Francisco, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about the MIT Media Lab and how it manages its own compute requirements.

Bletsas: The organization is one of the many independent research labs within MIT. MIT is organized in departments, which do the academic teaching, and research labs, which carry out the research.

The Media Lab is a unique place within MIT. We deviate from the normal academic research lab in the sense that a lot of our funding comes from member companies, and it comes in a non-direct fashion. Companies become members of the lab, and then we get the freedom to do whatever we think is best.

We try to explore the future. We try to look at what our digital life will look like 10 years out, or more. We're not an applied research lab in the sense that we're not looking at what's going to happen two or three years from now. We're not looking at short-term future products. We're looking at major changes 15 years out.

I run the group that takes care of the computing infrastructure for the lab and, unlike a normal IT department, we're heavy on computing. We use computers as our medium. The Media Lab is all about human expression, which is the reason for the name, and computers are one of the main means of expression right now. We're much heavier than other departments in how many devices you'll see. We run a pretty complex network and a very dynamic environment.

Major piece

A lot has changed in our environment in recent years. I've been there for almost 20 years. We started with very exotic stuff. These days, you still build exotic stuff, but you're using commodity components. VMware, for us, is a major piece of this strategy because it allows more efficient utilization of our resources and lets us control, a little bit, the server proliferation that we experienced and that everybody has experienced.

We normally have about 350 people in the lab, distributed among staff, faculty members, graduate students, and undergraduate students, as well as affiliates from the various member companies. There is usually a one-to-five correspondence between virtual machines (VMs), physical computers, and devices, but there are at least 5 to 10 IPs per person on our network. You can imagine that having a platform that allows us to easily deploy resources in a very dynamic and quick fashion is very important to us.

We run a relatively small operation for the size of the scope of our domain. What's very important to us is to have tools that allow us to perform advanced functions with a relatively short learning curve. We don’t like long learning curves, because we just don’t have the resources and we just do too many things.

You are going to see functionality in our group that is usually only present in groups that are 10 times our size. Each person has to do too many things, and we like to focus on technologies that allow us to perform very advanced functions with little learning. I think we've been pretty successful with that.

Gardner: How have you created a data center that’s responsive, but also protects your property?

Bletsas: Unlike most people, we tend to have our resources concentrated close to us. We really need to interact with our infrastructure on a much shorter cycle than the average operation. We've been fortunate enough that we have multiple, small data centers concentrated close to where our researchers are. Having something on the other side of the city, the state, or the country doesn’t really work in an environment that’s as dynamic as we are.

We also have to support a much larger community that consists of our alumni and collaborators. If you look at our user database right now, it's on the order of 3,500, as opposed to 350. It's very dynamic in that it changes month to month. The important attribute of an environment like this is that we can't have too many restrictions. We don't have an approved list of equipment like you see in a normal corporate IT environment.

Our modus operandi is that if you bring it to us, we’ll make it work. If you need to use a specific piece of equipment in your research, we’ll try to figure out how to integrate it into your workflow and into what we have in there. We don’t tell people what to use. We just help them use whatever they bring to us.

In that respect, we need a flexible virtualization platform that doesn't impose too many restrictions on what operating systems you use or how the VMs are configured. That's why we find that solutions like the general public cloud are, for us, only applicable to a small part of our research. Pretty much every VM that we run is different from the one next to it.

Flexibility is very important to us. Having a robust platform is very, very important, because you have too many parameters changing and very little control of what's going on. Most importantly, we need a very solid, consistent management interface to that. For us, that’s one of the main benefits of the vSphere VMware environment that we’re on.

Public or hybrid

Gardner: What about taking advantage of public cloud and hybrid cloud to some degree, perhaps for DR or for backup failover? What's the rationale, even in your unique situation, for using a public or hybrid cloud?

Bletsas: We use a hybrid cloud right now that's three-tiered. MIT has a very large campus, with extensive digital infrastructure running our operations across the board. We also have facilities that are either all the way across campus or across the river in a large co-location facility in downtown Boston, and we take advantage of that for first-level DR.

A solution like the vCloud Air allows us to look at a real disaster scenario, where something really catastrophic happens at the campus, and we use it to keep certain critical databases, including all the access tools around them, in a farther-away location.

It’s a second level for us. We have our own VMware infrastructure and then we can migrate loads to our central organization. They're a much larger organization that takes care of all the administrative computing and general infrastructure at MIT at their own data centers across campus. We can also go a few states away to vCloud Air [and migrate our workloads there in an emergency].
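
A minimal sketch of how such tiered failover might be modeled -- the tier list and failure domains below are illustrative assumptions, not MIT's actual configuration:

```python
# Hypothetical sketch: pick the first DR tier that sits outside the
# failure domain of the event. Tier names are illustrative only.
DR_TIERS = [
    ("co-location across the river", "building"),  # survives a building-level event
    ("central IT data center",       "building"),
    ("vCloud Air, states away",      "campus"),    # survives a campus-wide event
]

FAILURE_DOMAINS = ["rack", "building", "campus"]

def failover_target(event_domain):
    """Return the first tier whose tolerated failure domain covers the event."""
    need = FAILURE_DOMAINS.index(event_domain)
    for name, tolerates in DR_TIERS:
        if FAILURE_DOMAINS.index(tolerates) >= need:
            return name
    return None

print(failover_target("building"))  # co-location across the river
print(failover_target("campus"))    # vCloud Air, states away
```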

So it's a very seamless transition using the same tools. The important attribute here is that, with an operation this small -- 10 people dealing with such a complex set of resources -- you can't do that unless you have a consistent user interface that lets you migrate those workloads using tools you already know and are familiar with.

We couldn’t do it with another solution, because the learning curve would be too hard. We know that remote events are remote, until they happen, and sometimes they do. This gives us, with minimum effort, the ability to deal with that eventuality without having to invest too much in learning a whole set of tools, a whole set of new APIs to be able to migrate.

We use public cloud services also. We use spot instances if we need a high compute load and for very specialized projects. But usually we don’t put persistent loads or critical loads on resources over which we don’t have much control. We like to exert as much control as possible.

Gardner: It sounds like you're essentially taking metadata and configuration data -- the things that will be important to spin an operation back up should there be some unfortunate occurrence -- and putting that into the vCloud Air public cloud. Perhaps it's DR-as-a-service, but only a slice of DR, not the entire data set. Is that correct?

Small set of databases

Bletsas: Yes. Not the entire organization. We run our operations out of a small set of databases that drive a lot of our websites and internal systems. They drive our CRM operation. They drive our events management. And there is a lot of knowledge embedded in those databases.

It's lucky for us, because we're not such a big operation. We're relatively small, so you can include everything -- all the methods and programs you need to access and manipulate that data -- within a small set of VMs. You don't normally use them out of those VMs, but you can keep them packaged so that, in a DR scenario, you can easily get access to them.

Fortunately, we've been doing that for a very long time because we started having them as complete containers. As the systems scaled out, we tended to migrate certain functions, but we kept the basic functionality together just in case we have to recover from something.

In the older days, we didn't have that multi-tiered cloud in place. All we had were backups in remote data centers. If something happened, you had to go in there, find some unused hardware similar to what you had, restore your backup, and so on.

Now, because most of MIT's administrative systems run under VMware virtualization, finding that capacity is a very simple proposition in a data center across campus. With vCloud Air, we can find that capacity in a data center across the state or somewhere else.

Gardner: For organizations intrigued by this tiered approach to DR, how did you decide which parts would go in which tier? Did you do that manually? Is there a part of the management infrastructure in the VMware suite that allowed you to do that? How did you slice and dice the tiers so that vCloud Air holds a certain part of the data?

Bletsas: We are fortunate enough to have a very good, intimate knowledge of our environment. We know where each piece lies. That’s the benefit of running a small organization. We occasionally use vSphere’s monitoring infrastructure. Sometimes it reveals to us certain usage patterns that we were not aware of. That’s one of the main benefits that we found there.

We realized that certain databases were used more than we thought. Just looking at those access patterns told us, “Look, maybe you should replicate this." It doesn’t cost much to replicate this across campus and then maybe we should look into pushing it even further out.

It's a combination of visibility and nice dashboards that reveal patterns of activity you might not be aware of, even in an environment that's not as large as ours.
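
The pattern Bletsas describes can be pictured as a simple threshold check over access counts -- the database names and threshold here are hypothetical:

```python
# Hypothetical sketch: flag unexpectedly busy databases as candidates
# for replication across campus, then possibly further out.
access_counts = {"events-db": 120_000, "crm-db": 45_000, "archive-db": 300}

def replication_candidates(counts, threshold=10_000):
    """Databases accessed more than expected are worth replicating."""
    return [name for name, hits in counts.items() if hits >= threshold]

print(replication_candidates(access_counts))  # ['events-db', 'crm-db']
```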

Gardner: At VMworld 2014, there was quite a bit of news, particularly in the vCloud Air arena. What intrigues you?

Standard building blocks

Bletsas: We like the move toward standardization of building blocks. That’s a good thing overall, because it allows you to scale out relatively quickly with a minor investment in learning a new system. That’s the most important trend out there for us. As I've said, we're a small operation. We need to standardize as much as possible, while at the same time, expanding the spectrum of services. So how do you do that? It’s not a very clear proposition.

The other thing that is of great interest to us is network virtualization. MIT is in a very peculiar situation compared to the rest of the world, in the sense that we have no shortage of IP addresses. Unlike most corporations where they expose a very small sliver of their systems to the outside world and everything happens on the back-end, our systems are mostly exposed out there to the public internet.

We don't run very extensive firewalls. We're a knowledge dissemination and distribution organization, and we don't have many things to hide. We operate in a different way than most corporations, and that shows in our networking. Our network looks nothing like what you see in the corporate world. The ability to move whole sets of IPs around our domain, which is rather large and fully under our control, is very important for us.

It allows for much faster DR. We can do DR using the same IPs across town right now, because our domain of control is large enough. That is very powerful, because you can do very quick and simple DR without having to reprogram IPs, DNS servers, load balancers, and the like. That is important.
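
A toy illustration of why IP mobility shortens failover -- the step lists are assumptions for illustration, not MIT's actual runbook:

```python
# Hypothetical sketch: if a service keeps its IP at the DR site, nothing
# downstream (DNS, load balancers, firewalls) has to be reconfigured.
def failover_steps(keep_ip: bool):
    steps = ["start replica VMs at the DR site"]
    if not keep_ip:
        steps += ["update DNS records",
                  "wait out DNS TTLs",
                  "reconfigure load balancers and firewall rules"]
    return steps

print(len(failover_steps(keep_ip=True)))   # 1 step
print(len(failover_steps(keep_ip=False)))  # 4 steps
```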

The other trend that's also important is storage virtualization and storage tiering, and you see that with all the vendors down in the exhibit space. Again, it allows you to match the application profile much more easily to the resources you have. For a rather small group like ours, which can't afford to keep all of its disk storage on very high-end systems, having a little bit of expensive flash storage and a lot of cheap storage is the way to go.

The layers that have been recently added to VMware, both on the network side and the storage side help us achieve that in a very cost-efficient way.

For us, experimentation is the most important thing. Spinning up a large number of VMs for a specific experiment is very valuable, and being able to commandeer resources across campus and across data centers is a necessary requirement for an environment like this. What we get out of that is flexibility, agility, and speed of operations.

In the older days, you had to go and procure hardware and switch hardware around. Now, we rarely go into our data centers. We used to live in them. We go there from time to time, but not as often as we used to, and that's very liberating. It's also very liberating for people like me, because it allows me to do my work anywhere.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.


Wednesday, October 1, 2014

Cloud services brokerages add needed elements of trust and oversight to complex cloud deals

Our BriefingsDirect discussion today focuses on an essential aspect of helping businesses make the best use of cloud computing.

We're examining the role and value of cloud services brokers, with an emphasis on small to medium-sized businesses (SMBs), regional businesses, and government, and looking at how to attain the best results from a specialist cloud services brokerage role within these different types of organizations.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

No two businesses have identical needs, and so specialized requirements need to be factored into the use of often commodity-type cloud services. An intermediary brokerage can help companies and government agencies make the best use of commodity and targeted IaaS clouds, and not fall prey to replacing an on-premises integration problem with a cloud complexity problem.

To learn more about the role and value of the specialist cloud services brokerage, we're joined by Todd Lyle, President of Duncan, LLC, a cloud services brokerage in Ohio, and Kevin Jackson, the Founder and CEO of GovCloud Network in Northern Virginia. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How do we get regular companies to effectively start using these new cloud services?

Lyle: Through education. That's our first step. The technology is clearly here -- the three of us will agree on that. It's been here for quite some time now. The beauty of it is that we're able to extract bits and pieces into bundles, much like you get from your cell phone or your cable TV folks. You can pull those together through a cloud services brokerage.

So brokerage firms will go out and deal with the cloud services providers like Amazon, Rackspace, Dell, and those types of organizations. They bring the strengths of each of those organizations together and bundle them. Then, the consumer gets that on a monthly basis. It's non-CAPEX, meaning there is no capital expenditure.

You're renting these services, so you can expand and contract as necessary. To liken this to a utility environment -- the organizations that provide electricity and power -- you flip the switch on or turn the faucet on and off. It's a metered service.

That's where you're going to get the largest return on your collective investment when you switch from a traditional IT environment on-premises, or even a private cloud, to the public cloud and the utility that this brings.
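
To make the metering analogy concrete, a bill in this model is just usage multiplied by unit rates -- the rates and categories below are invented for illustration, not any provider's real pricing:

```python
# Hypothetical metered-billing sketch: pay only for what was consumed,
# like a power or water meter. No capital expenditure is involved.
RATES = {"vm_hours": 0.08, "gb_storage_days": 0.004, "gb_egress": 0.09}

def monthly_bill(usage):
    """Sum metered usage times unit rate for each service category."""
    return sum(RATES[item] * qty for item, qty in usage.items())

july = {"vm_hours": 720, "gb_storage_days": 3_000, "gb_egress": 150}
print(f"${monthly_bill(july):,.2f}")  # $83.10
```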

Government agencies

Gardner: Kevin, you're involved more with government agencies. They've been using IT for an awfully long time. How is the adjustment to cloud models for them? Is it easier, is it better, or is it just a different type of approach, and therefore requires only adjustment?

Jackson: Thank you for bringing that up. Yes, I've been focused on providing advanced IT to the federal market and Fortune 500 businesses for quite a while. The advent of cloud computing and cloud services brokerages is a double-edged sword. At once, it provides a much greater agility with respect to the ability to leverage information technology.

But, at the same time, it brings a much greater amount of responsibility, because cloud service providers have a broad range of capabilities. That broad range has to be matched against the range of requirements within an enterprise, and that drives a change in the management style of IT professionals.

You're going more from your implementation skills to a management of IT skills. This is a great transition across IT, and is something that cloud services brokerages can really aid. [See Jackson's recent blog on brokerages.]

Gardner: Todd, it sounds as if we're moving this from an implementation and technology skill set into more of a procurement, governance, and contracts skill set -- creating the right service-level agreements (SLAs). These are, I think, new skills for many businesses. How is that coaching aspect of a cloud services brokerage coming out in the market? Is that something you're seeing a lot of demand for?

Lyle: It's customer service, plain and simple. We hear about it all the time, but we also pass it off all the time. You have to be accessible. If you're working with a 69-year-old business owner embracing a technology, it's going to be different than with someone who is 23 -- different in the approach that you take with that person.

As we all get more tenured, we'll see more adaptability to new technologies in the workplace, but that's a while out. That's the 35-and-younger crowd. For the 35-and-above crowd, it's what Kevin mentioned -- changing the culture, changing the way things are procured within those cultures, and centralizing command. That's where the brokerage or the exchange comes into play. [See Lyle's video on cloud brokerages.]

Gardner: One of the things that's interesting to me is that a lot of companies are now looking at this not just as a way of switching from one type of IT, say a server under a desk, to another type of IT, a server in a cloud.

It’s forcing companies to reevaluate how they do business and think of themselves as a new process-management function, regardless of where the services reside. This also requires more than just how to write a contract. It's really how to do business transformation.

Does that play into the cloud services brokerage? Do you find yourselves coaching companies on business management?

Jackson: Absolutely. One of the things cloud services is bringing to the forefront is the rapidity of change. We're going from an environment where organizations expect a homogenous IT platform to where hybrid IT is really the norm. Change management is a key aspect of being able to have an organization take on change as a normal aspect of their business.

This is also driving business models. The more effective business models today are taking advantage of the parallel and global nature of cloud computing. This requires experience, and cloud services brokerages have the experience of dealing with different providers, different technologies, and different business models. This is where they provide a tremendous amount of value.

Different types of services

Gardner: Todd, this notion of being a change agent also raises the notion that we're not just talking about one type of cloud service. We're talking about software as a service (SaaS), bringing communications applications like e-mail and calendar into a web or mobile environment. We're talking about platform as a service (PaaS), if you're doing development and DevOps. We're talking about even some analytics nowadays, as people try to think about how to use big data and business intelligence (BI) in the cloud.

Tell me a bit more about why being a change agent across these different models -- and not just a cloud implementer or integrator -- raises the value of this cloud service brokerage role?

Lyle: It’s a holistic approach. I've been talking to my team lately about being the Dale Carnegie of the cloud, hence the specialist cloud services brokerage, because it really does come down to personalities.

In a book I've recently written, called Grounding the Cloud: Basics and Brokerages, I talk about the human element. That's the personalities, expectations, and abilities of your workforce -- not only your present workforce but your future workforce, which we discussed just a moment ago as far as demographics are concerned.

It's constant change. Kevin said it, using a different term, but that's the world we live in. Some schools are doing this, adding it to their MBA programs. It's a common set of skills that you must have, and it's managing personalities more than managing technology, in my opinion.

Gardner: Tell me a bit more about this book, Todd; it's called Grounding the Cloud. When is it available, and how can people learn more about it?

Lyle: It's available now on Amazon, and they can find out more at www.groundingthecloud.org. This is a layman's introduction to cloud computing, and it helps business men and women get a better understanding of the cloud -- and how they can best maximize their time and their money as it relates to their IT needs.

Gardner: Does the book get into this concept of the specialist cloud services brokerage (SCSB), as opposed to just a general brokerage, and getting at what's the difference?

Lyle: That's an excellent question, Dana. There are a lot of perceptions -- you have one as well -- of what a cloud services brokerage is. But at the end of the day -- and we've been talking about this throughout the discussion -- it's about the human element, our personalities, and how to make these changes so that companies can actually speed up.

We discuss it here in "flyover country," in Ohio. In the book, we meet with Cleveland State University, with Allen Black Enterprises, and even with a small landscaping company, to demonstrate how the cloud is being applied from six and seven users all the way up to 25,000 users. And we're doing it here in the Midwest, where things tend to take a couple of years to change.

User advocate

Gardner: How is a cloud services brokerage different from a systems integrator? It seems there's some commonality, but you're not just a channel or reseller; you're really as much an advocate for the user.

Lyle: A specialist cloud services brokerage is going to be more like Underwriters Laboratories (UL). It’s going to go out, fielding all the different cloud flavors that are available, pick what they feel is best, and bring it together in a bundle. Then, the SCSB works with the entity to adapt to the culture and the change that's going to have to occur and the education within their particular businesses, as opposed to a very high-level vertical, where some things are just pushed out at an enterprise level.

Jackson: I see this cloud services brokerage and specialist cloud services brokerage as the new-age system integrator, because there are additional capabilities that are offered.

For example, you need a trusted third party to monitor and report on adherence to SLAs. The provider is not going to do that. That's a role for your cloud services brokerage. Also, you need to maintain viable options for alternative cloud service providers. The cloud services brokerage will identify your options and give you choices, should you need the change. A specialist cloud services brokerage also helps to ensure portability of your business processes and data from one cloud service provider to another.
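
As a sketch of that third-party monitoring role, a brokerage might compare measured downtime against the promised service level -- the figures here are hypothetical:

```python
# Hypothetical SLA check: convert minutes of downtime into measured
# uptime and flag a breach of the promised level.
def sla_report(minutes_down, sla_target=99.9, minutes_in_month=43_200):
    uptime = 100 * (1 - minutes_down / minutes_in_month)
    return {"measured_uptime": round(uptime, 3),
            "sla_target": sla_target,
            "breached": uptime < sla_target}

print(sla_report(minutes_down=90))
# {'measured_uptime': 99.792, 'sla_target': 99.9, 'breached': True}
```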

Management of change is more than a single aspect within the organization. It's how to adapt to constant change and make sure that your enterprise has options and doesn't get locked into a single vendor.

Lyle: It comes to the point, Kevin, of building for constant change. You're exactly right.

Gardner: You raise an interesting point too, Kevin, that one shouldn’t get lulled into thinking that they can just make a move to the cloud, and it will all be done. This is going to be a constant set of moves, a journey, and you're going to want to avail yourself of the cloud services marketplace that’s emerging.

We're seeing prices driven down. We're seeing competition among commodity-level cloud services. I expect we'll see other kinds of market forces at work. You want to be agile and be able to take advantage of that in your total cost of computing.

Jackson: There's a broad range of providers in the marketplace, and that range expands daily. Similarly, there's a large range of requirements within any enterprise of any size. Brokers act as matchmakers, avoiding common mistakes, and also help the organizations, the SMBs in particular, implement best practices in their adoption of this new model.

Gardner: Also, when you have a brokerage as your advocate, they're keeping their eye on the cloud marketplace so that you can keep your eye on your business and your vertical. You'll have somebody to tip you off when things change, somebody on the vanguard for deals. Is that something that comes up in your book, Todd -- the cloud services brokerage being an educated expert in a field, so the business can stick to its knitting?

Primary goal

Lyle: Absolutely. That's the primary goal, both at the strategic level, when you're deciding what products to use -- the Rackspaces, the Microsofts, the RightSignatures, and so on -- and all the way down to the tactical level of daily operations. When I leave the company, how soon can we lock Todd out? It becomes a security issue at a very granular level. Because it's metered, you turn it off -- you turn Todd off -- save his data, and put it someplace else.
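
A schematic of that lock-out scenario -- the service names and steps are hypothetical, meant only to show how metered, centrally brokered services turn offboarding into a checklist:

```python
# Hypothetical offboarding sketch: disable access everywhere at once,
# preserve the data, and stop the meter the same day.
def offboard(user, services):
    actions = []
    for svc in services:
        actions.append(f"disable {user} on {svc}")           # immediate lockout
        actions.append(f"archive {user}'s data from {svc}")  # save, don't delete
        actions.append(f"stop metering {user} on {svc}")     # billing ends now
    return actions

for step in offboard("todd", ["email", "file-sharing", "crm"]):
    print(step)
```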

That's a role that requires command, control, and oversight, and that's a responsibility. You're part butler, looking out for the day-to-day, the minute issues. Then you get up to a very high level. You're like UL, keeping an eye on everything that's occurring. UL comes to mind because they certify both things that are tactile and things that you can't touch -- and the cloud is definitely something you can't touch.

Jackson: Actually, I believe it represents the embracing of a cooperative model by the consumers of this information technology -- but embracing it with open eyes. This is particularly of interest within the federal marketplace, because federal procurement executives have to stop their adversarial attitude toward industry. Cloud services brokerages and specialist cloud services brokerages sit at the same table with these consumers.

Lyle: Kevin, your point is very well taken. I'll go one step further. We were talking up and down the scale, from strategic down to daily operations. One of the challenges we have to overcome is the signatories, the senior executives, who make these decisions. They're in a different age group, and they're used to doing things a certain way.

That being said, getting legislation changed at the federal level, with directives pushed down, will make the difference, because they do know how to take orders. I know I'm speaking frankly, but what has to occur for us to see significant change within the next five years is being told how the procurement process is going to happen.

You're taking the feather; I'm taking the stick, but it’s going to take both of those to accomplish that task at the federal level.

Gardner: We know that Duncan, LLC is a specialized cloud services brokerage. Kevin, tell us a little bit about the GovCloud Network. What is your organization, and how do you align with cloud brokerages?

Jackson: GovCloud Network is a specialty consultancy that helps organizations modify or change their mission and business processes in order to take advantage of this new style of system integrator.

Earlier, I said that the key to transitioning to cloud is adopting and adapting to the parallel and global nature of cloud computing. This requires a second look at your existing business and mission processes to do things in different ways. That's what GovCloud Network enables. It helps you redesign your business and mission processes for this constant change and this new model.

Notion of governance

Gardner: I'd like to go back to this notion of governance. It seems to me, Todd, that when you have different parts of your company procuring cloud services -- sometimes referred to as shadow IT -- they're not doing it in concert, through a gatekeeper like a cloud broker. Not only is there potential redundancy of effort and work in process, but there is also governance and security risk, because one hand doesn't know what the other is doing.

Let's address this issue about better security from better governance by having a common brokerage gatekeeper, rather than having different aspects of your company out buying and using cloud services independently.

Lyle: We're your trusted adviser. We’re also very much a trusted member of your team when you bring us into the fold. We provide oversight. We're big brother, if you want to look at it that way, but big brother is important when you are dealing with your business and your business resources. You don’t want to leave a window open at night. You certainly don't want to leave your network open.

There's a lot going on in today's world -- a lot of transition, the NSA, and everything else we worry about. It's important to have somebody providing command and control. We don't sit there and stare at a monitor all day. We use systems that watch this, and we can tell when there's an increase or decrease outside the norm of activity within your organization.

It really doesn't matter how big or how small you are; there are systems that allow us to monitor this and give a heads-up. If you're part of a leadership team, you'd be notified that, again, Todd Lyle has left a window open. But if you don't know that Todd even has a window, that's an even bigger concern. That comes down to leadership again -- how you want to manage your entity.

We all want to feel free to make decisions, but there are too many benefits available to us -- transparent benefits, as Kevin put it -- in using the cloud: hiding in plain sight, maximizing email at 100,000-plus users. Those are all good things, but they require oversight.

It's almost like an aviation model, where you have your ground control and your flight crew. Everybody on that team is providing oversight to the other. Ultimately, you have your control tower that's watching that, and the control tower, both in the air and on the ground, is your cloud services brokerage.

Jackson: It’s important to understand that cloud computing is the industrialization of information technology. You're going from an age where the IT infrastructure is a hand-designed and built work of art to where your IT infrastructure is a highly automated assembly-line platform that requires real-time monitoring and metering. Your specialist cloud services brokerage actually helps you in that transition and operations within this highly automated environment.

Gardner: Todd, we spoke earlier about how we're moving from implementation to procurement. We've also talked about governance being important, SLAs, and managing contracts across a variety of organizations providing cloud services. It seems to me that we're talking about financial types of relationships.

How does the cloud services brokerage help the financial people in a company? Maybe it's an individual who wears many hats, but you could think of them as akin to a chief financial officer, even though that might not be their title.

What is it that we're doing with the cloud services brokerage that is of special interest and value to the financial people? Is it unified billing, or is it one throat to choke? How does that work?

Lyle: Both, and then some. Ultimately, it's unified billing and unified management of daily operations. It's helping people understand that we're moving away from capitalized expenses -- the server, the software, things that are tactile, that we're used to touching. We're used to being able to count them, and we like to see our stuff.

So it's transitioning and letting go, especially for the people who watch the money. We have a fiduciary responsibility to the organizations we work for. Part of that is communicating, educating, and helping the CFO-type person understand the transition -- not only from CAPEX to OPEX, because they get that, but also how you're going to correlate it to productivity.

It's letting them know to be patient. It's going to take a couple months for your metering to level up. We have some statistics and we can read into that. It's holding their hand, helping them out. That's a very big deal as far as that's concerned.

Gardner: Let's think about how to get started. Obviously, every company is different. They're going to be at a different place in terms of maturity in their own IT, never mind the transition to cloud types of activities. Would you recommend the book as a starting point? Do you have some other materials or references? How do you help that education process get going for organizations that are really at the very beginning?

Gateway cloud

Lyle: We've created a gateway cloud in our book, so as not to confuse the cloud story. Ultimately, we have to take into consideration our economy, the world economy today. We're still very slow to move forward.

There are some activities occurring that are forcing us to make a change. Our contracts may be running out. Software like Windows XP is no longer supported. So we may be forced into making a change. That's when it's time to engage a cloud services brokerage or a specialist cloud services brokerage.

Go out and buy the book. It's available on Amazon. It gives you a breakdown: you can do an assessment of your organization as it currently is, and it will help you map your network. Then it will help you reach out to a cloud services brokerage, if you are so inclined, with points of interest for a request for proposal or request for information.

The fun part is that it gives you a recipe, using Rackspace, Jungle Disk, and gotomeeting.com, for building a baby cloud. Then you can go out and play with it.

You want to begin with three things: file sharing, remote access, and email. You can be a lighthouse or you can be a dry cleaner, but every organization needs file sharing, remote access, and email. We open-sourced this recipe, what we call the industrial bundle, for small businesses.

It's not daunting. We've got some time yet, but I would encourage you to get a handle on where your infrastructure is today, digest that information, go out and play with the gateway cloud that we've created, and reach out to us if you are so inclined.

We'd love for you to use one of our organizations, but ultimately know that there are people out there to help you. This book was written for us, not for the technical person. It is not in geek speak. It's written for the layperson. I've been told it's entertaining, which is the most important part, because then you'll actually read it.

Jackson: I would urge SMBs to take the plunge. Cloud can be scary to some, but there is very little risk, and there is much to gain for any SMB. Using and taking advantage of the cloud gateway that Todd mentioned is a very good, low-risk, high-reward path to the cloud.

Gardner: I would agree with what you both said -- the notion of a proof of concept and dipping your toe in. You don't have to buy it all at once, but find an area of your company where you're going to be forced to make a change anyway and then, to your point, Kevin, do it now. Take the plunge earlier rather than later.

Jackson: Before you're forced.

Large changes

Gardner: Before you're forced. You want to look at a tactical benefit and work toward the strategic benefit, because there are going to be some really large changes in what these cloud providers can do in a fairly short amount of time.

We're moving from discrete apps to the entire desktop -- a full PC experience as a service. That's going to be very attractive to people, and they're going to need to make some changes to get there. But rather than thinking about services discretely, more and more of what they're looking for is going to come as an entire IT services experience, with more analytics capabilities mixed in. So I'm glad to hear you both explain how to manage it at a proof-of-concept level. But I would say do it sooner rather than later.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Duncan, LLC.


Wednesday, September 24, 2014

University of New Mexico delivers efficient IT services by centralizing on secure, managed cloud automation

The latest BriefingsDirect discussion focuses on one of the toughest balancing acts in seeking the best of cloud computing benefits. This balance comes from obtaining the proper degree of centralization or "common good" for infrastructure efficiency, while preserving a sufficient culture of decentralization for agility, innovation, and departmental-level control.

The requirement for empowering centralization is nowhere more evident than in a large university setting, where support and consensus must be preserved among such constituencies as faculty, staff, students, and researchers -- across an expansive educational community.

But the typical IT model does not support localized agility when it takes weeks to spin up a server, if online services lack automation, or if manual processes hold back efficient ongoing IT operations. Too much IT infrastructure redundancy also means weak security, high costs, lack of agility, and slow upgrades.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We're joined by an IT executive from the University of New Mexico (UNM) to learn more about moving to a streamlined and automated private cloud model to gain a common-good benefit, while maintaining a vibrant and assured culture of innovation. We're also joined by a VMware executive to learn more about the latest ways to manage cloud architectures and processes to attain the best cloud efficiencies, while empowering improved services delivery and process agility.

They are: Brian Pietrewicz, Director of Computing Platforms at the University of New Mexico in Albuquerque, and Kurt Milne, Director of Product Marketing in the Management Business Unit at VMware. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about your IT organization at the university and how you've been able to drive change, but at the same time not alienate your users, who are, I imagine, used to having things their way.

Pietrewicz: The University of New Mexico is a highly decentralized organization. In most cases, the departments are responsible for their own IT, and that often means they don't have the resources to effectively run IT -- in particular, things like data centers, servers, storage, disaster recovery (DR), and backups.

What we're doing to improve the process is providing infrastructure as a service (IaaS) to those groups so that they don’t have to worry about the heavy lifting of the infrastructure pieces that I mentioned before. They can stay focused on their core mission, whether that’s physics, or psychology, or who knows what.

So we offer IaaS. We're running a VMware stack, and we're also running vCloud Automation Center (vCAC). We've deployed the Self-Service Portal. We give departments, faculty members, or departmental IT folks the ability to go into the portal and deploy their own machines at will.

Then, they are administrators of that machine. They also have additional management features through the vCAC console so that they can effectively do whatever they need to do with the server, but not have to worry about any of the underlying infrastructure.

Gardner: That sounds like the best of both worlds. In a sense, you're a service provider in the organization, getting the benefits of centralization and efficiency, but allowing them to still have a lot of hands-on control, which I assume that they want.

Pietrewicz: Correct. The other part is the agility -- the ability for them to react quickly, to consume infrastructure on demand as they need it, and to get all the benefits that virtualization brings: redundant infrastructure, lower cost of ownership, and those sorts of things.

New expectations

Milne: It’s an interesting time to be in the IT space, because there's this new set of expectations being imposed on IT by the business to be strategic, to quickly adopt new technology, and boost innovation.

At the same time, IT still has the full set of responsibilities they've always had -- to stay secure, to avoid legacy debt, to drive operational excellence so they maintain uptime, security, and quality of service for transactional systems and business-critical systems.

It’s really an interesting paradox. How do you do these two things that are seemingly mutually exclusive -- go fast, but at the same time, stay in control?

Brian's approach is what I call "push-button IT," where you give folks a button to push and they get what they need when they want it. But if IT controls the button, and controls what happens when the user pushes the button, IT is able to maintain control. It's really the best of both worlds.

Gardner: Brian, tell us a little bit about how long you have been there and what it was like before you began this journey?

Pietrewicz: I've been at UNM for about two-and-a-half years, and I can tell you the number one complaint. We suffer from a lot of the same problems that other large IT shops have, with funding and things like that. But the primary issue when I walked in the door was customers being upset because we didn't have clearly defined services, yet we had sold these services to customers.

We had sold virtual machines (VMs) with database backups, and all kinds of interesting things, with no service-level agreements (SLAs), no processes, nothing wrapped around it. The delivery of these services was completely inconsistent.

So I started down the new path. The first thing we did was make the services more consistent. To give you an example: deploying a virtual machine for a customer. The way it worked when I got here was that a ticket came into the service desk and went to a single technician, and whichever technician got that ticket figured out their own way of getting that machine deployed.

As the next step in that process, instead of having it done a different way by whoever received the ticket, we identified all the steps associated with it. In doing so, we identified over 100 manual steps that went through six completely separate groups inside our organization.

Those included operating systems, storage, virtualization, security, and networking for firewall changes. Across all the groups deploying their individual pieces of that puzzle, it was being done differently every time. Our deployment times were running as long as three weeks. You can imagine how painful that is when it takes 20 minutes to spin up a VM -- but it was taking three weeks to deliver it to a customer.

We identified all the steps and defined the process very clearly: exactly what it takes to deploy a VM. The interesting thing that came out of that was that it gave us the content necessary to start developing a true service description and an SLA.

Ticketing system

It also made the process consistent. After we did the process development, we generated workflows within our ticketing system, so that a single incoming ticket auto-generated all the necessary tickets to deploy the VM, and it happened in a very consistent way.

That dropped the deployment time from three weeks down to about three days, because it still had to go through certain approval process and things like that with security.

For the next step, we said, "Okay, how can we do this better?" We looked at all of the steps we had put in place and found that they were repetitive, manual steps that could easily be automated. Enter VMware vCAC.

Once we had the steps clearly defined, we automated all the steps that we could. We couldn't automate all of them -- for example, sending information to our billing system to bill the customer back. From vCAC, we shoot an email over to our ticketing system, which generates a ticket. The billing information is still entered manually; we're working on an upgrade to that.
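
A simplified sketch of that pipeline -- the step list is a guess at representative stages, with only the billing entry left manual, as Pietrewicz notes:

```python
# Hypothetical sketch of the UNM flow: automate every step that can be
# automated; the billing record is the one hand-keyed step remaining.
PIPELINE = [
    ("create VM from template",   True),
    ("attach storage",            True),
    ("apply firewall change",     True),
    ("record security approval",  True),
    ("email ticket to billing",   True),   # vCAC emails the ticketing system
    ("enter billing record",      False),  # still entered manually today
]

for step, automated in PIPELINE:
    print(f"[{'auto' if automated else 'MANUAL'}] {step}")
```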

UNM comprises approximately 45,000 faculty, staff, and students. We have about 100 departments or affiliates, and today we're running about 660 VMs for our organization. For central IT, we're between 98 percent and 99 percent virtualized.

When I first got here, the services were not defined and the processes were not defined. Since then, we have clearly defined the processes, narrowed them down into very specific processes and tasks, and begun automating. We're going through the process of automating every step.

Now we have a thing we call Lobo Cloud -- our mascot is the Lobo. Customers can go online and deploy a machine within 20 minutes. Everything has transformed from an extremely inconsistent service that took as long as three weeks to deliver, to the equivalent of going into McDonald's and ordering a Big Mac. It's extremely consistent, and it's down from three weeks to 20 minutes.

Gardner: I assume, Brian, that you've adopted some industry-standard methods, perhaps a framework, that gave you some guidance on this. How does your service delivery approach adhere to an industry standard like ITIL?

Pietrewicz: That's what we use. We follow ITIL, and we're at varying levels of maturity with it. ITIL is very challenging to implement, but it's extremely helpful, because it gives you a good overarching framework to work within -- to start narrowing down these processes, defining services, and setting SLAs.

The absolute hardest part of all of this is implementing the ITIL framework: identifying your processes, identifying what your services are, and identifying your SLAs. Walking through all of that is exponentially harder than putting the technology in place.

Gardner: It seems to me that you're not only going to get faster deployments, response times, and automation; there are some other significant benefits to this approach. I'm thinking about security, disaster recovery (DR), the ability to budget better through an OPEX model, and ultimately reduced total costs.

Is it too soon to tell, or have you seen some of these other benefits that typically come when people move to a more automated cloud approach? How is that working for you?

Less expensive

Pietrewicz: We don't really have good statistics on it. For the folks who had machines sitting underneath their desks and in closets before, we don't have the data to know exactly what those machines cost or how much time people were spending on them.

Anybody who works with virtualization quickly learns that once you hit a certain size, it becomes significantly less expensive. You become far more agile and you get a huge number of benefits, some of which you mentioned -- deployment time, DR, the ability to automate, and economies of scale.

Instead of deploying one $10,000 server per application, you're now loading up 70 machines on a $15,000 server -- on those numbers, roughly $214 of hardware per application instead of $10,000. All of those things come into play. But we really don't have good statistics, because we didn't have any good processes before we started.

What's interesting is that our next step is to automate the billing process. Once we do that, everything from our virtual infrastructure will feed into our billing system under either a charge-back or a show-back methodology.

So we'll have complete, detailed costs for all of our infrastructure, associated with every department and every application using our service. We'll be able to really show the total cost of ownership (TCO).
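
At its core, a show-back or charge-back report like the one he describes is a grouping problem: attribute each VM's cost to its owning department and sum. A minimal sketch, with a made-up inventory format standing in for the data the billing integration would supply:

    from collections import defaultdict

    # Hypothetical inventory rows: (vm_name, department, monthly_cost).
    # In practice these would come from the virtualization layer's API
    # plus the rates held in the billing system.
    INVENTORY = [
        ("web-01", "Athletics", 95.00),
        ("db-01",  "Athletics", 180.00),
        ("lms-01", "Registrar", 140.00),
    ]

    def showback_report(inventory):
        """Sum monthly VM costs per department."""
        totals = defaultdict(float)
        for _vm, dept, cost in inventory:
            totals[dept] += cost
        return dict(totals)

    print(showback_report(INVENTORY))
    # {'Athletics': 275.0, 'Registrar': 140.0}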

Milne: Brian, it sounds like you're on a path that a lot of our customers are on. What we see typically is that there is a change in consumption behavior when your customers know that they can get IaaS on demand. They stop hoarding resources. The same kind of tools and processes that can automate the delivery of those services can also automate tearing down those services when they're done.

Virtualization by itself increases capacity utilization quite a bit, but going to this kind of service delivery and service consumption for infrastructure further increases utilization and drives down over-provisioning.

Adding cost transparency to that service will further change your consumers' behavior. The ability to get resources when you need them, and to pay only for what you use, drives down the amount of equipment you have to keep in your data center.

Pietrewicz: Absolutely. It’s amazing what happens when you have to pay for something and it’s very visible.

Milne: I always feel that if IT is free, that really changes the supply-and-demand equation, if you study economics. People don't know what to do with free. They typically take too much.

Economic behavior

Pietrewicz: Right. This really starts driving basic economic and social behavior into the IT equation. It's a difficult thing for organizations to get their heads around, and they're gradually getting it here at the university. It's not completely in place. The way we look at it is as a "build it and they'll come" kind of thing.

Most folks have figured out that they can really save that money. Instead of going out and buying a $10,000 server, they can buy a $1,000 VM from us that does the exact same thing. If they don’t want it any more, they can turn it off and not pay any more. All of those things come into play.

Another piece is that the university has been experimenting with responsibility center management (RCM), a budgeting model that pushes revenue and costs down to the bottom line of each unit. That means people have to be transparent and make clear decisions about where they're spending their money. That's also starting to drive adoption.

Ancillary benefits

Gardner: We've talked about some of the ancillary benefits of your approach, but there are some direct benefits when you go to a cloud model, which gives you more options. You can have your private cloud, you can look to public cloud and other hosting models, and then you can start to see a path toward a hybrid cloud environment, where you might actually move workloads around based on the right infrastructure for the right job at the right time. Any thoughts about where your cloud goals are vis-à-vis that hybrid potential?

Pietrewicz: We have a few things in play that we're actively working on. Today, we have people using various cloud providers. The interesting part is that they're just paying for it with a credit card out of their department budget, and the university doesn't have any clear way of knowing exactly what's out there. We don't really have any good security mechanisms in place for determining whether sensitive data is being stored out there inadvertently.

We're working to develop consolidated accounts with the cloud providers we're already spending money with. One, we can save money through economies of scale. Two, we can get some visibility into what folks are actually using the cloud for. And three, IT can act as an adviser across the various cloud providers out there -- this provider is good at functionality, that one is good at security.

The first step is to corral the use of public cloud for UNM and create an escorted process to the cloud. The second step is a hybrid cloud that we'll set up from our private cloud here on site. We envision setting up hybrid cloud services with those public cloud providers to be able to move workloads back and forth when necessary.

The other major benefit that we very much look forward to is being able to do DR in the cloud -- taking advantage of the ability to replicate data and then spin up systems as you need them, rather than having a couple of million dollars in equipment sitting, waiting, hoping you never use it, and needing a refresh every four years to keep the DR plan viable.

Gardner: Is vCloud Automation Center something that will be useful in moving to this hybrid model? Will the one button to push, as it were, on the private cloud become one button to push in the hybrid model as well?

Pietrewicz: It will. I mentioned those various cloud service providers. Most of them are compatible with vCloud Connector, so you can simply connect up that hybrid cloud service and, with a little bit of work, massage your portal.

We can have a menu of public cloud providers in our portal, so customers could just select whether they want vCHS, Amazon, or Terremark, and then potentially move workloads back and forth. So vCAC and vCloud Connector are at the center of it.

The other interesting piece we're working on, and going to try to figure out as part of this, is that we really want to start looking into NSX and/or VIX to provide very clear security boundaries -- basically multi-tenancy -- and then potentially be able to move those multi-tenant environments back and forth in the cloud, or extend them between public and private clouds as well.

Software-defined networking

Gardner: Brian, you mentioned multi-tenancy earlier, and of course there is a lot going on with software-defined data centers, networking, and storage. What is it about software-defined networking (SDN), for example, that's interesting to you, and why is it a priority?

Pietrewicz: SDN is the next step in being able to truly automate your IaaS and your virtual environment. If you want to dynamically deploy systems and have them land in a sandbox that is multi-tenant by customer, you really need an SDN-type solution -- or at least it's extremely helpful.

One of the things we're looking at next is implementing something like NSX, so that we can deploy the equivalent of a virtual wire -- a multi-tenant environment -- to individual customers, so that they can see only their own resources and not their neighbors', and vice versa.

The key is the ability to orchestrate that on demand and not have to deal with the VLAN and firewall issues you have in a legacy environment.
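
To make the "virtual wire" idea concrete: with an SDN controller, tenant isolation becomes an API call instead of a VLAN-and-firewall change ticket. The endpoint and payload below are hypothetical -- not the actual NSX API -- and sketch only the orchestrate-on-demand pattern he describes:

    import requests

    SDN_API = "https://sdn-controller.example.edu/api"  # hypothetical controller

    def create_tenant_network(tenant):
        """Provision an isolated logical segment for one tenant.

        Each tenant gets its own segment, so one customer's VMs can't
        see a neighbor's -- with no manual VLAN or firewall change.
        """
        resp = requests.post(SDN_API + "/segments", json={
            "name": "tenant-" + tenant,
            "isolation": "strict",   # illustrative policy flag
        }, timeout=10)
        resp.raise_for_status()
        return resp.json()["segment_id"]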

Gardner: It’s interesting how a lot of these major trends -- service delivery, cloud, private cloud, DR, and SDN -- are interrelated. It’s a complex bundle, but the payoffs, when you do this inclusively, are pretty impressive.

Pietrewicz: Whenever you get to the point of abstracting things to the software level, you provide the ability to automate. When you have the ability to automate, you get tremendous flexibility. That sometimes can be an issue in and of itself, just making decisions on how you want to do something. But along with that flexibility, you get the ability to automate just about anything that you want or need to be able to do.

The second piece to that is that we're really excited about figuring out, when we build the hybrid cloud model, how we might be able to extend those tenants into the cloud, either as active running workloads or in a DR model, so that the multi-tenancy is retained.

Milne: From VMware’s perspective, that kind of network virtualization capability is critical for our hybrid cloud service. It’s that capability that NSX provides that creates that seamless experience from your data center out to the hybrid cloud.

As you said, Brian, that kind of network configuration, allocation, and reallocation of IP addresses, when you're moving things from one data center to another, is not something you want to do on a manual basis. So NSX is a key component of our hybrid cloud vision. It's something that a lot of the other cloud providers just don't have.

Pietrewicz: I see it as the next frontier in IT. I think that when SDN starts taking off, it’s going to be a game changer in ways that we are not even recognizing yet, and that’s one example. Moving a workload from one network to another network is extremely powerful.

Cloud broker

Gardner: Kurt, this sounds as if not only is Brian transitioning into being a service provider to his constituencies, but now he's also becoming a cloud broker. Is this typical of what you're seeing in the market as well?

Milne: It is. Some of our customers, to get their arms around shadow IT -- users going around IT -- will simply offer that provisioning option through the IT portal. It's like, "You're using Amazon? That's fine. We can help you do that." A button in the service catalog deploys the same kind of workload they've been running in a public cloud like Amazon, but it comes through IT, so IT is aware of it.

There's a saying I like: the "cloud boomerang." A lot of times, IT customers will put things out in the public cloud but, like a boomerang, those workloads seem to always come back. The customer wants to integrate with an existing system, or they realize they have to support it up in the cloud. A lot of those rogue deployments make their way back to the IT organization. So putting an Amazon service in the vCAC portal, without changing anything else, is a nice first step in corralling that.

Pietrewicz: That is exactly what we're seeing. At a university, because there isn't really strong central governance, it's more like build a good service and hope they come. We take the approach of trying to enable it. We want to be very transparent and say that they can use Amazon or vCHS, but there's a better way to do it: if they go through the portal, they may be able to move those workloads back and forth.

We're actually seeing exactly what you mentioned, Kurt. Folks are reaching the limitations of some of the cloud providers, because they need access to data back here at UNM, and they're doing the boomerang. They started out there, and now they're migrating their machines into our IaaS so they can get at the data they need.

Gardner: Kurt, we heard some very interesting things at VMworld recently around the cloud-management platform. Why don’t you tell us a little bit about that and how that fits into what we've been discussing in terms of this ongoing maturity and evolution that a large organization like the University of New Mexico is well into?

Milne: We recently announced the vRealize Suite, which is a cloud-management platform. So we're moving our management product strategy to a common platform.

Over the years, VMware has either built or acquired quite a few different management products. We've combined those products into a number of suites, like our automation, operations, and our business management suites. Now, we're taking that next step and combining a lot of those capabilities into a single platform.

There are a couple of guiding ideas there. What we see in organizations like Brian's is that the line between the automated provisioning of workloads and the ongoing operations, maintenance, and support of those workloads is really starting to blur.

So you have automation tasks that might happen when you're doing a support call. Maybe you want to provision some more resources, and there are operations tasks like checking system health that you might want to do as a step in an automation routine.

Shared services

Milne: Our product strategy change is to move toward a shared-services model, similar to a service-oriented architecture. The different services underlying our management products would be executable through a tool like vCAC, through a command-line interface, or through a REST API. There's a mix-and-match opportunity to execute those services in different ways.
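
As a sketch of the REST path into such a shared service -- the same action a portal button or CLI command would drive -- consider the following. The base URL, header, and payload shape are assumptions for illustration, not a documented vCAC endpoint:

    import requests

    def request_machine(api_base, token, blueprint):
        """Submit a catalog request to a provisioning service over REST."""
        resp = requests.post(
            api_base + "/catalog/requests",
            headers={"Authorization": "Bearer " + token},
            json={"blueprint": blueprint, "reason": "dev environment"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # e.g. request_machine("https://cloud.example.edu/api", token, "centos-small")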

To build that platform with the shared-services model on top, we need to start re-architecting some of our products on the back-end, so that we have a common orchestration engine, a common DR and backup engine, and a common policy engine. You don't want one tool to undo the work that another tool did yesterday. You can't have conflicting robots going out and doing automated tasks.

The general idea is to try to further consolidate these different management functions into a single platform. The overall goal is to try to help organizations maintain control, but then also increase flexibility and speed for their business users.

Gardner: Brian, is that something that you think is going to be on your radar? Is management so distributed now that you're looking for a more consolidated approach that’s inclusive?

Pietrewicz: That would be wonderful. We're doing things many different ways. Take orchestration: we're using Orchestrator, PowerShell, and Perl, and we're starting to experiment with Puppet.

It would be really good to have one standardized way to approach orchestration, as an example, and to have it tie into all the other pieces of back-end management, rather than handling it several different ways. As Kurt mentioned, one part starts to step on another part. Having that consolidated and consistent would be a huge value.
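
One way to live with that sprawl in the meantime is a thin dispatch layer: a single entry point that routes each task to whichever backend currently implements it, so callers see one interface even while the implementations vary. A minimal sketch; the script names are placeholders:

    import subprocess

    # Map each orchestration task to the backend that implements it today.
    # Re-implementing a task (say, moving from Perl to Puppet) changes this
    # table, not the callers.
    TASK_BACKENDS = {
        "deploy_vm":    ["powershell", "-File", "deploy_vm.ps1"],
        "configure_os": ["puppet", "apply", "roles/base.pp"],
        "snapshot":     ["perl", "snapshot.pl"],
    }

    def run_task(task, *args):
        """Single entry point for orchestration tasks, whatever the backend."""
        cmd = TASK_BACKENDS[task] + list(args)
        return subprocess.run(cmd, check=True).returncode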

Milne: The other part of the strategy is also to make that work across environments. So the same tools and services would be available if you are provisioning up to Amazon or to your private cloud or hybrid cloud service, and even different hypervisors.

We're fully aware of the heterogeneous nature of the modern data center, so we're working to create that kind of powerful common management stack, with a unified management experience across all of those environments. It's kind of a nirvana. When we talk to people, they say that's exactly what they want. So our vision is to march toward delivering on that.

Gardner: Kurt, I'm trying to recall from VMworld whether this is offered on-premises, as a service from the cloud, or some combination.

Service offerings

Milne: That's the other interesting part of this. We're starting to go down the path of offering a number of our management products as a service. For example, at VMworld we announced the availability of a beta of our vCAC product as software as a service (SaaS), so that, without installing any software, you can get a service portal with the workflow and policy engine and deploy infrastructure services across different environments.

We'll be rolling out betas for our other products in subsequent quarters over the next year or so. Then, potentially, the SaaS services could interact and combine with the services available through the products installed on-premises. Our goal is to get these out there and then understand what the best use cases are, but that kind of mix and match is part of the vision.

Gardner: It's interesting. We might have a reverse boomerang when it comes to the management of all of this. Does that sound appealing, Brian? Is that something you would look to as a cloud service -- comprehensive management?

Pietrewicz: Absolutely, but it's largely dependent on return on investment (ROI). When you get to a certain size as an IT shop, it's sometimes cheaper to do things in-house than to outsource, and sometimes not. You have to do the ROI analysis on whether it makes more sense to bring it in or use a SaaS.

As an example, we completely outsourced all of our email, because it's a lot of work to run in-house but very simple and easy to consume as a SaaS solution. So it's definitely something we would look into.

Milne: In a mid-sized organization that might have 300 different applications that the IT organization supports, maybe 50 of those are IT tools. Already we've seen progress with companies like ServiceNow that have a SaaS-based service desk. It makes sense to start to turn more of those management products into a SaaS delivery model.

Gardner: Brian, any thoughts for others who are starting to move in your direction -- their own Lobo Cloud, their own portal rationalizing these services, the ability to measure them better? With 20/20 hindsight, what would you recommend as they go about this? Any lessons learned you could share?

Process orientation

Pietrewicz: The biggest lesson learned, without a doubt, is the focus on process orientation, the ITIL model. The technology is really not that hard. It's determining what your service is, what you're trying to deliver, and how to build that into a consistently delivered service, complete with SLAs and service descriptions, that meets customer needs. That's the most difficult part.

The technical folks can definitely sling the technology. That doesn’t seem to be that big of a deal. The partners and providers do a very good job of putting together products that make it happen, but the hard part is defining the processes and defining the services and making sure that they are meeting the customer needs.

Gardner: Kurt, any thoughts in reaction to what Brian said in terms of getting started on the right path around cloud rationalization of your IT organization?

Milne: One of the things that I've seen is a lot of organizations go through this process that Brian has described, trying to clearly define their services and figure out which parts of those services they're going to automate.

A lot of organizations start that service-definition effort from an inside-out perspective: get a bunch of IT guys together and try to define what you do on a daily basis as a service. That's hard.

The easier approach is just to go talk to your customers and users and ask, "If I were going to give you a button you could click to get what you need, what would you put behind the button?" Then you define your services from an outside-in perspective. That seems to be where companies end up anyway, and you shortcut a lot of teeth-gnashing and internal meetings when you do it that way.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.
