Archive for the ‘vSphere’ Category

10 things we learnt in 2010 that can help make 2011 better

December 23, 2010

This is the end of a tough year for many organisations across all sectors. We found ourselves snowed in last winter, were stuck abroad by a volcanic eruption in spring, suffered the announcement of a tightened budget in summer, and had to start making drastic cost-saving plans following the Comprehensive Spending Review in autumn. Data security breaches and problems with unreliable service providers have also filled the press.

Somehow the majority of us have managed to survive all that, some better than others. As another winter approaches, it is time to ask ourselves: what helped us through the hard times, and what can we do better to prevent IT disruptions, data breaches and financial loss in the future?

Here are some things to learn from 2010 that may help us avoid repeating errors and at the same time increase awareness of current issues, for a more efficient, productive and fruitful 2011:

1- VDI to work from home or the Maldives

Plenty of things prevented us from getting to work in 2010; natural disasters, severe weather and industrial disputes were the biggest culprits. Remote access solutions have been around for a long time, but desktop virtualisation has taken things a stage further. With a virtual desktop, you access your own complete, customised workspace when out of the office, with performance similar to working in the office. Provided there is a strong and reliable connection, VDI minimises the technical need to be physically close to your IT.

2- Business continuity and resilience with server virtualisation

Server virtualisation is now mainstream, but plenty of organisations, large and small, have yet to virtualise their server platform. When disaster strikes, those who have virtualised are at a real advantage – building an all-encompassing recovery solution on a virtualised platform is far easier than dealing with individual physical machines and the applications running on them. If you have yet to fully embrace virtualisation, it is time to reassess that decision as you prepare for 2011.

3- Good Service Management to beat economic restrictions

With the recent economic crisis and the unstable business climate, the general message is that people should be doing more with less. It is easy to delay capital expenditure (unless there is a pressing need to replace something that is broken or out of warranty), but how else can you save money? Surprisingly, effective Service Management can deliver significant cost-efficiencies through efficient management of processes, tools and staff. Techniques include rearranging roles within the IT Service Desk to achieve more fixes earlier in the support process, and adopting automated tools to deal with the most common repeat incidents. Putting proper, effective measures on the service, right down to the individuals delivering it, also helps to set expectations, monitor performance and improve processes.
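As an illustration of what such an automated tool might look like, here is a minimal sketch of keyword-based incident triage. The categories, keywords and actions are invented for illustration, not the behaviour of any particular Service Desk product:

```python
# A minimal sketch of automated triage for common repeat incidents.
# Categories, keywords and actions are illustrative assumptions.
KNOWN_FIXES = {
    "password reset": ["password", "locked out", "forgot"],
    "printer queue restart": ["printer", "print job", "stuck"],
}

def triage(description: str) -> str:
    """Route a known repeat incident to a scripted fix,
    otherwise escalate to a human analyst."""
    text = description.lower()
    for action, keywords in KNOWN_FIXES.items():
        if any(word in text for word in keywords):
            return f"auto: {action}"
    return "escalate: assign to Service Desk analyst"

print(triage("User locked out of account after holiday"))  # auto: password reset
```

Even a simple rule table like this, fed from incident history, can take the highest-volume calls off analysts' hands.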

4- Flexible support for variable business

An unstable economic climate means that staffing may need to be reduced or increased for certain periods, then rescaled shortly afterwards. At the same time, epidemics, natural disasters and severe weather may require extra staff to cover absences, often at the last minute. Not all organisations, however, can afford a ‘floating’ team paid to be available in case of need, or can engage contractors easily and rapidly. An IT Support provider that offers flexibility and scalability can help minimise these kinds of disruption. Indeed, some providers have a team of widely-skilled, multi-site engineers who can be sent to any site in need of extra support, and kept only until no longer needed, without major contractual restrictions.

5- Look beyond the PC

Apple’s iPad captured the imagination this year. It is seen as a “cool” device, but its success stems as much from the wide range of applications available for it as from its innate functionality. That success is prompting organisations to look beyond the PC in delivering IT to their user base. Perhaps a more surprising story was the rise of the Amazon Kindle, which resurrected the idea of the single-function device. The Kindle succeeds because it is relatively cheap, delivers well on its specific function, is easy to use and has a long battery life. As a single-function device, it is also extremely easy to manage. Given the choice, I would rather face the challenge of managing and securing a fleet of Kindles than of iPads, which for all their sexiness add another set of security management challenges.

6- Protecting data from people

Even a secure police environment can become the setting for a data protection breach, as Gwent Police taught us: a mistake caused by the recipient auto-complete function led an officer to send some 10,000 unencrypted criminal records to a journalist. Had a data classification system been in place, with every document routinely classified at a level of sensitivity and restricted so that only authorised people can view it, the breach would not have taken place because the information could not have been sent. We can all learn from this incident: human error will occur and there is no way to avoid it completely, so countermeasures have to be implemented upfront to prevent breaches.
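To make that countermeasure concrete, here is a minimal sketch of a classification gate on outbound mail. The sensitivity levels, clearance registry and function are illustrative assumptions, not a description of any particular product:

```python
# A minimal sketch of a classification gate on outbound email.
# Levels, clearances and names are illustrative assumptions.
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    RESTRICTED = 2

# Hypothetical clearance registry: recipient domain -> highest level allowed
CLEARANCE = {
    "force.police.uk": Sensitivity.RESTRICTED,
    "example-news.com": Sensitivity.PUBLIC,
}

def may_send(document_level: Sensitivity, recipient: str) -> bool:
    """Allow a send only if the recipient's domain is cleared
    for the document's sensitivity level."""
    domain = recipient.rsplit("@", 1)[-1]
    return document_level <= CLEARANCE.get(domain, Sensitivity.PUBLIC)

# A mistyped, auto-completed journalist address fails the check:
assert not may_send(Sensitivity.RESTRICTED, "reporter@example-news.com")
```

The point is not the specific code but the principle: the check happens automatically, before the human error can do any damage.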

7- ISO27001 compliance to avoid tougher ICO fines

The Data Protection Act gained real teeth this year, with the ICO now able to impose a penalty of up to £500,000 for a data breach, and organisations have paid the highest fines ever seen. Zurich Insurance, for instance, had to pay over £2m after the loss of 46,000 records containing customers’ personal information; the figure would have been higher had it not agreed to settle at an early stage of the FSA investigation. ISO 27001 has gained advocates in the last year because it tackles the broad spectrum of good information security practice, not just the obvious points of exposure. A gap analysis and alignment with the ISO 27001 standard is a great first step to staying on the safe side. It is important, however, that any improved security measure is accompanied by extensive training, so that all staff who may deal with the systems gain a strong awareness of regulations, breaches and consequences.

8- IT is not just IT’s business – it is the business’ business as well

In an atmosphere where organisations are watching every penny, CFOs have acquired a stronger presence in IT, although neither they nor the heads of IT were particularly prepared for the move. The CIO now has to justify costs concretely, using financial language to propose projects and explain their likely ROI. The CFO’s role is changing too: discussing strategy and investment with the IT department will require a much better working knowledge of IT.

9- Choose your outsourcing strategy and partner carefully

In 2010 we heard about companies dropping their outsourcing partner and moving their Service Desk back in-house or to a safer Managed Service solution; about Virgin Blue losing reputation over a faulty booking system run by a provider; and about Singapore bank DBS suffering a critical IT failure that caused considerable inconvenience to customers. Outsourcing should not be avoided in 2011, but the strategy should include solutions which allow more control over assets, IP and data, and less upheaval should the choice of outsourcing partner prove to be the wrong one.

10- Education, awareness, training – efficiency starts from people

There is no use having the latest technologies, best-practice processes and security policies in place if staff are not trained to put them to use, as the events of 2010 amply demonstrated. Data protection awareness is vital to avoiding information security breaches; training in the latest applications will drastically reduce the volume of incident calls; and education in best practice will smooth operations and allow organisations to achieve the cost-efficiencies they seek.

Adrian Polley, CEO

This article has been published on Tech Republic: http://blogs.techrepublic.com.com/10things/?p=2100

Quick win, quick fall if you fail to plan ahead

January 11, 2010

Virtualisation seems to be the hot word of the year for businesses large and small, and as everyone concentrates on deciding whether VMware is better than Microsoft Hyper-V, a debate often driven by the media, they can overlook one of the major pitfalls of moving to virtual: the lack of forward planning.

Many organisations invest only a small amount of money and time in investigating solutions. Choosing one that is tailored to the business, rather than the coolest, latest or cheapest product on the market, can save an organisation from a false economy.

The second mistake organisations often make is to put together a virtual environment quickly for testing purposes, which then, almost without anyone realising, becomes the production environment. This happens either because market pressure on the business demands it, or because the IT department uses the new and only partly tested environment to provision services rapidly and score a “quick win” with the rest of the business.

But a system that has not been planned and correctly tested is rarely set up for success.

My advice would be to plan, plan and then plan some more.

I suggest that organisations thinking about virtualising their systems undertake a capacity planning exercise. They should start by accurately analysing the existing infrastructure; this gives the business the information required to correctly scope the hardware for a virtual environment, which in turn informs licensing.
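To give a feel for the sizing arithmetic involved, here is a minimal sketch; the server figures, headroom factor and host specification are illustrative assumptions, and in practice the inputs would come from monitoring data gathered over weeks, not guesses:

```python
import math

# Illustrative inventory: (name, average CPU GHz used, average RAM GB used).
# Real figures should come from a monitoring tool, not estimates.
servers = [
    ("mail01", 1.2, 6.0),
    ("sql01",  2.8, 14.0),
    ("file01", 0.6, 4.0),
    ("web01",  1.5, 8.0),
]

HEADROOM = 1.25   # 25% allowance for growth and host failover
HOST_GHZ = 16.0   # usable CPU per candidate host (assumed spec)
HOST_RAM = 64.0   # usable RAM in GB per candidate host (assumed spec)

cpu_needed = sum(cpu for _, cpu, _ in servers) * HEADROOM
ram_needed = sum(ram for _, _, ram in servers) * HEADROOM

# The host count is driven by whichever resource runs out first.
hosts = max(math.ceil(cpu_needed / HOST_GHZ),
            math.ceil(ram_needed / HOST_RAM))

print(f"CPU needed: {cpu_needed:.1f} GHz, RAM needed: {ram_needed:.1f} GB")
print(f"Minimum hosts of this specification: {hosts}")
```

The output of an exercise like this feeds directly into both the hardware scoping and the licensing calculation mentioned above.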

Do not move from “testing a new technology” to a “live/production environment” without sufficient testing and understanding of that technology, or the inefficiencies could damage business continuity and the quality of services.

All in all, I advise organisations outside the IT sector to engage a certified partner company to assist with design and planning and, equally importantly, to put staff through certified training courses so that they are prepared to work with the new system.

Will Rodbard, Senior Consultant

Cloud computing – Help your IT out of the Tetris effect

January 8, 2010

Do you enjoy playing human Tetris on the tube at rush hour? All that hot, sweaty physical contact; the effort of pushing your way out, slowly, uneasily; people in your way, blocking you, breathing on you… Of course not. You just wish you could share the carriage with three friendly, quiet companions and kick the rest of the lot out, bringing a small selection of them back in only when you need an extra chat, some heat in the carriage, or specific information they might have.

If you imagine that tube carriage as your IT system, you get a first glimpse of what Cloud Computing is about.

Cloud-based computing promises a number of advantages, but it is buying services “on-demand” that has caught the imagination. Rather than making a significant upfront investment in technology and capacity which you may never use, Cloud-based computing potentially allows you to tap into someone else’s investment and flex your resources up and down to suit your present circumstances.

Like all new computing buzzwords, the Cloud suffers from “scope creep”, as everyone wants to say that their own solution fits the buzzword, however spurious the claim. Many IT-savvy people think that ‘the cloud’ is nothing but old wine in a new bottle, pointing to its similarity to what used to be called managed or hosted application services. It is essentially based on those, only with new technology supporting their evolution, which complicates matters and raises new doubts and questions.

For most purposes, though, the Cloud extends to three types of solution: Software as a Service (SaaS), Managed Application Hosting and On-demand Infrastructure. These are all terms that have been used for some time; the Cloud sees the distinctions between them becoming increasingly blurred.

Software as a Service is what most people will already have experienced with systems such as Salesforce.com. The application is licensed on a monthly basis, is hosted and managed on the provider’s web servers, and is available to access from the client’s computers until the contract expires.

Managed Application Hosting takes this one step further: a provider takes responsibility for managing a system that the customer previously managed themselves. A big area here is Microsoft Exchange hosting; many companies struggle with the 24×7 obligation of providing access to email and find it easier to get a specialist to manage the environment for them.

With Software as a Service, the infrastructure and data are physically located with the provider. This can also be the model with Managed Application Hosting, although there are options for the provider to manage a system that remains within the customer’s perimeter. Both models raise a specific security concern in that the customer is obliged to give the provider access to the customer’s data. This is, of course, not an uncommon arrangement; outsourcing, with its implied trust, has been around for years.

The third type of Cloud solution is On-demand Infrastructure. Virtualisation has already got customers used to the idea that infrastructure can be flexible and dynamic: in the past, bringing a new physical server online could take weeks, particularly once procurement was factored in, whereas a new, fully-configured virtual server can now frequently be brought up in seconds. However, there is still the upfront investment in the virtualisation platform to be made – and what happens when you run out of capacity?

The promise of On-demand Infrastructure is that it removes the need for that big upfront investment, while allowing infrastructure to be added or taken away as circumstances change. This is potentially as powerful and radical a concept as virtualisation itself. So how could it work?

Different vendors are approaching it in different ways. Amazon, formerly just an online book store, has gone through a significant transformation and now offers its Elastic Compute Cloud service: if you develop web-based systems on this platform, you configure, and pay for, only the capacity you actually need. Equally, Salesforce.com is no longer just an application but a whole development environment, which the end user can extend with new functionality, buying additional computing capacity as required.
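To show the shape of the programming model rather than any real vendor’s API, here is a hypothetical sketch of requesting capacity on demand; the endpoint, request fields and function are invented for illustration and do not correspond to Amazon’s actual interface:

```python
# Hypothetical sketch of on-demand provisioning over a REST API.
# The endpoint and payload fields are invented for illustration.
import json
from urllib import request

API = "https://api.example-cloud.test/v1/instances"  # hypothetical endpoint

def start_instances(count: int, size: str = "small") -> list[str]:
    """Ask the provider for `count` new virtual servers and return
    their identifiers; billing runs only while they exist."""
    body = json.dumps({"count": count, "size": size}).encode()
    req = request.Request(API, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["instance_ids"]

# Scale up for a busy period, release the servers when demand subsides:
# ids = start_instances(3)
# ... run the workload ...
# a matching DELETE call would hand the capacity back
```

The significant shift is that capacity becomes a function call and a monthly bill, rather than a purchase order and a delivery date.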

One issue with both of these models is portability – if I develop my application for the Amazon platform, I’m tied into it and can’t go and buy my Cloud resources from someone else.

VMware has taken a slightly different approach with its vSphere suite, which it claims is the first “Cloud operating system”. What this means in practice is that VMware is partnering with dozens of service providers across the world to offer On-demand Infrastructure services which customers can then take advantage of. In this model, a customer could choose to buy virtual infrastructure from one provider, hosted in that provider’s premises. They could then join it with their own private virtual infrastructure, and with that of another provider, to gain provider flexibility. The real advantage of this approach comes when it is combined with VMware’s live migration technologies: a customer running out of capacity in their own virtual infrastructure could potentially live-migrate services into a provider’s infrastructure with no downtime. Infrastructure becomes truly available on demand, can be moved to a different provider with little fuss, and the customer only pays for what they use.

The vision is impressive. There are still tie-in issues, in that the customer is tied to VMware technology, but they should find themselves less tied to individual providers.

Of course, the kind of technology needed to virtualise data centres demands an investment not everyone is willing to make, which brings us to the question on everyone’s mind: ‘Why should I move to the cloud?’

According to a survey carried out by Quest Software in October, nearly 75% of CIOs are not sure what the benefits of cloud computing are, and half are unsure of the cost benefits, particularly since they find it difficult to calculate how much their current IT system is costing them. The firm interviewed 100 UK organisations with over 1,000 employees, only 20% of which said they were already actively using cloud services; their main worries revolved around security, technical complexity and cost.

Choosing between public and private cloud also has a different financial impact. The initial investment in the public cloud is clearly cheaper, because it avoids the hardware expenditure that private services require, but Gartner analysts reckon IT departments will invest more in the private cloud through 2012 while the virtual market matures, preparing both the technology and the business culture for a later move to the public cloud. The cost of long-lived storage is an issue only for public services, which are purchased on a monthly basis with a per-GB usage fee combined with bandwidth transfer charges, and are therefore ideal for relatively short-term storage. In any case, the experts say that not all IT services will move to a virtual environment; some will have to stay within the organisation’s own walls because of data security and sensitivity issues.
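A small worked example shows why that pricing model favours short-term storage; the rates below are invented purely for illustration:

```python
# Illustrative arithmetic only: the rates are made up to show how
# per-GB-month fees plus transfer charges accumulate over time.
STORAGE_RATE = 0.10    # currency units per GB stored, per month
TRANSFER_RATE = 0.08   # currency units per GB transferred in or out

def monthly_cost(stored_gb: float, transferred_gb: float) -> float:
    return stored_gb * STORAGE_RATE + transferred_gb * TRANSFER_RATE

# 500 GB held for a year, with 50 GB moved in or out each month:
total = sum(monthly_cost(500, 50) for _ in range(12))
print(f"One year of public cloud storage: {total:.2f}")  # 648.00
```

The recurring fee never stops, which is why long-lived data can eventually cost more there than on owned kit, while short-lived data costs far less.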

The promise of the Cloud is a virtualised data centre, with applications and services all managed by expert third parties, so that not only do business operations run more smoothly and efficiently but, more importantly, IT managers can finally stop worrying about technical issues and focus on the important parts of their business, taking the strategic decisions that will bring their organisation further success. Whether this becomes a reality, and how long it takes, is at this stage very hard to predict.

Adrian Polley, CEO