Archive for the ‘Cloud Computing’ Category

Focus on 2012: 5 key areas in Enterprise IT

December 19, 2011

According to industry analysts, experts and professionals, some of the changes and novelties introduced over the last few years are set to become established trends in 2012. Influenced by the ever-challenging economic climate, a more sober outlook on industry best practices and the need to obtain measurable efficiency from any IT project, these are the five key areas that will acquire growing importance next year:

1)      Greater use of non-desktop-based applications

This is due to a growing need for mobility and flexibility. Users need to be able to work while travelling, from any desk or office (for instance, in large or international companies) and from home, as home-working is growing because of the financial benefits involved. It is also a good way to guarantee business continuity in the face of unforeseen circumstances, such as natural disasters or strikes, which leave workers stranded or unable to reach the office. As well as cloud applications, virtualised desktops are becoming a must-have for many organisations. Companies whose older desktops need updating anyway will find the switch more financially convenient, as will those with a large number of mobile users who need to access applications from a smartphone or laptop while away from the main office. It can also give organisations considering or embracing home-working more control over the desktops, as they will be centralised and managed by the company rather than at user level.

2)      Greater use of outsourced management services

The ‘doing more with less’ concept that took hold at the beginning of the past recession has translated into practical measures. These include handing part or all of the Service Desk to an external service provider which, for a fixed cost, will know how to make the best of what the company has, and will provide skilled personnel, up-to-date technology and performance metrics. Managed services, IT outsourcing and cloud services will become even more prominent in 2012 and the following years because of their practical and financial convenience. With the right service provider, the outcome is improved efficiency, fewer losses from IT-related incidents and more manageable IT expenditure.

3)      Management plans for ‘big data’

There is much talk around the topic of ‘big data’, which describes the large volumes of varied data organisations now have to deal with. Some practical issues arise from this – mainly how to store it, share it and use it, all without breaching the Data Protection Act. At the moment, however, it is still very difficult to see how to take the next step: using this data strategically to create business advantage. This is something companies will have to look at in the years to come; for the coming year, they may simply concentrate on handling data safely and efficiently, possibly storing it on a private virtual server or using public cloud services.

4)      A more balanced approach to security

This new approach sees the over-adoption of security measures abandoned, after the realisation that it can hurt productivity by delaying business operations; it can also diminish the opportunities found in sharing data within the sector, which allow organisations to improve and grow; and it can be counter-productive, with employees bypassing the measures in place in order to work faster. Although compliance with current regulations is becoming vital, there will be more scoping and tailoring than large-scale technology adoption. Organisations will be analysed to understand which areas need security measures and to what extent. This way, heavy security measures will be applied only to high-risk areas rather than throughout the whole organisation, with less critical areas able to work more freely. In this approach, risks are balanced against efficiency and opportunity, and the end result is a tailored solution rather than a collection of off-the-shelf products.

5)      Less budget control for IT

Due to the challenging economic climate, other departments – in particular the finance department and therefore the DOF – will have more control over IT investments. CIOs and IT Managers will have to be able to evaluate whether an IT project is necessary or just a nice-to-have, and how it can bring business advantage. All proposed IT investment will have to be justified financially; it is therefore important to analyse each project and establish a reasonable ROI before presenting it to the finance decision-makers. This means IT professionals have to learn ‘business talk’ and translate difficult technical descriptions into business terms.
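To make that ‘business talk’ concrete, here is a minimal sketch, in Python and with purely hypothetical figures, of the sort of ROI and payback calculation a finance decision-maker will expect to see alongside a proposed project:

# Hypothetical example: expressing an IT project in financial terms.
project_cost = 120_000        # one-off investment: hardware, licences, implementation
annual_saving = 60_000        # e.g. reduced support effort and refresh costs
annual_running_cost = 10_000  # ongoing licences and management
years = 3                     # evaluation period agreed with finance

net_gain = (annual_saving - annual_running_cost) * years - project_cost
roi = net_gain / project_cost * 100
payback_years = project_cost / (annual_saving - annual_running_cost)

print(f"Net gain over {years} years: £{net_gain:,.0f}")   # £30,000
print(f"ROI: {roi:.0f}%")                                  # 25%
print(f"Payback period: {payback_years:.1f} years")        # 2.4 years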

All in all, developments within IT will not come to a halt next year – investment and changes will continue, but with a more careful outlook and a stronger focus on efficiency, safety and Return on Investment rather than on following trends or adopting the latest technology for its own sake. Seen this way, the difficult economic climate could even be a good thing: it pushes organisations to make wiser, more far-sighted choices that will create a solid base for future decisions made when times are less tough and spending capacity rises, increasing the potential of IT to serve the business.

Tony Rice, Service Delivery Manager

What is the impact of the Cloud on the existing IT environment?

March 10, 2011

As organisations look to embrace the cost-efficiency opportunities deriving from new technologies and services, there is a lot of talk about the benefits, risks and possible ROI of the blanket concept of ‘Cloud computing’. However, it is still unclear how using Cloud services will affect the existing network infrastructure and what impact it can have on IT support roles and the way end users deal with incidents.

The effect on an organisation’s infrastructure depends on the Cloud model adopted, which may vary based on company size. For example, small organisations which are less worried about owning IT and have simpler, more generic IT needs might want to buy a large part of their infrastructure as a shared service, purchasing on-demand software from different vendors.

Buying into the Software as a Service model has the benefit of simplicity and low cost, as it removes much of the responsibility and expense involved, which is a great advantage for SMEs. It also allows 24/7 service availability, something small firms might not be able to afford otherwise. The lack of flexibility of this service, since the software cannot be customised, is less of a problem for these types of organisation. There are still risks, however, related to performance and vendor lock-in.

Using this model, a small company’s retained IT infrastructure can be relatively simple, and therefore there might be little need for specialist technical skills within the IT department. A new skill set is required, however: IT personnel will need to be able to manage all the different relationships with the various vendors, checking that the service purchased is performing as agreed and that they are getting the quality they are paying for. The IT Service Desk will therefore require a smaller number of engineers, less technical but more commercially savvy. More specialist skills will shift towards the provider and 1st line analysts will have to escalate more calls to the various vendors.

Larger organisations, on the other hand, may well be keen on retaining more control over their infrastructure while purchasing IT resources on-demand. With this model, the organisation still manages much of its infrastructure, but at a virtual level – the vendor might provide hardware resources on-demand, for instance. The main advantage is that it makes the existing infrastructure incredibly flexible and scalable, able to adapt to changing business needs. Building a data centre, for example, is lengthy and expensive, and therefore a poor way to expand. By using a “virtual datacentre” provider, capacity can be increased in the space of an online card transaction, with great financial benefits – in the Cloud model only the necessary resources are paid for, with no investment in hardware or its maintenance.
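As a purely illustrative example of how quickly that capacity can be added, the sketch below requests one extra virtual server programmatically. It assumes Amazon’s EC2 service and the boto3 Python library, neither of which is named above; other ‘virtual datacentre’ providers expose broadly similar on-demand APIs, and the image and instance identifiers here are placeholders.

# Illustrative only: adding on-demand capacity with a single API call (assumes AWS EC2 / boto3).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.medium",         # placeholder server size
    MinCount=1,
    MaxCount=1,
)
print("New virtual server provisioned:", response["Instances"][0]["InstanceId"])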

With this second model, the change in roles within the IT department will mainly concern an increased need, as in the other model, for vendor management skills. Monitoring KPIs, SLAs and billing will be a day-to-day task, although there will still be a need for engineers to deal with the real and virtual infrastructure.

Both models generally have very little impact on the end user if the IT Service Desk has been running efficiently, as this does not disappear as a first point of contact. However, in certain cases the change might be more visible, for instance if desk-side support is eliminated – a cultural change that may need some adapting to.

All in all, change is not to be feared – with the necessary awareness, embracing Cloud services can improve IT efficiency significantly, and align it to the business. By leaving some IT support and management issues to expert providers an organisation can gain major strategic advantage, saving money and time that they can ultimately use in their search for business success.

 

Adrian Polley, Technical Services Director

This article appears on the National Outsourcing Association’s (NOA) online publication Sourcing Focus: http://www.sourcingfocus.com/index.php/site/featuresitem/3318/

Surviving IT spending cuts in the public sector

February 15, 2011

How to create cost-efficiencies in the post-Spending Review scenario

After the announcement of 25%-40% budget cuts last year, it is reasonable to expect IT to be one of the departments that suffers most in public sector organisations. However, cuts in IT support and projects can bring inefficiencies and disruption, which then lead to real losses and rising costs. More than ever, CIOs and IT Directors in public sector organisations are weighing up their options, from quick fixes to longer-term ideas, trying to find a solution that will produce savings without compromising on service quality and data security – and perhaps even increase efficiency. Here are some of the common options, analysed:

Solution 1: Reducing headcount

Firing half of your IT team will produce immediate savings, since you will not have to pay their salaries in the following months; but when Support staff are insufficient in number or skill to meet the organisation’s needs, the result can be excessive downtime, data loss, security breaches or the inability to access applications or the database. A ‘quick fix’ such as this is a false economy. Reviewing resource allocation and improving skill distribution at Service Desk level, on the other hand, can be a valid solution. Many IT departments find themselves top-heavy with expert, long-serving team members, where the supply of knowledge outweighs the demand. A larger proportion of lower-cost 1st line engineers with improved and broader skills, and a fair reduction of the more deeply skilled and costly 2nd and 3rd line technicians, can not only reduce staff spend but also create efficiencies, with more calls resolved at first-time fix.

Solution 2: Offshoring

Although the thought of employing staff who ask for only a small percentage of a normal UK salary may sound appealing, offshoring is not as simple as ABC. It requires a large upfront investment to set up the office abroad, with costs including hardware, software, office supplies, and travel and accommodation for any personnel managing the relationship with the supplier. Few public sector organisations can afford that kind of investment, especially since this solution only creates cost savings in the long term – and the public sector needs cost savings now. Furthermore, differences in culture and law can represent a risk to information security: data can be accessed by staff in a country thousands of miles away and sold for a couple of dollars, as various newspapers and TV channels have found out. Given the extreme sensitivity of the data processed by councils, charities and the NHS, no matter how hard foreign suppliers try to convince the public sector to offshore its IT, it is unlikely to happen – it is simply too risky.

Solution 3: IT Cost Transparency

Understanding the cost of IT and its value to the organisation, being able to prioritise and manage people and assets accordingly and knowing what can be sacrificed, can help identify where money is being wasted, which priorities need to be altered and what can be improved. For instance, do all employees need that piece of software if only three people actually use it more than twice a year, and do you need to upgrade it every year? Do all incidents need to be resolved now, or can some wait until the more urgent ones are dealt with? Do you need a printer in each room, and when it breaks do you need to buy a new one or could you make do with sharing one machine with another room? These and many other questions will lead to more efficient choices, but only after having identified and assessed the cost and value of each aspect of IT, including people and assets.

Solution 4: Cloud computing

There are contrasting opinions on this matter. The Government CIO, John Suffolk, encourages the use of this type of service and reckons the public sector could save £1.2bn by 2014 thanks to it. However, many believe that placing data in the hands of a service provider can be risky given the highly sensitive nature of the data involved, so traditional Cloud computing may not be an ideal solution.

A shared environment such as the G-Cloud, where various public sector organisations share private data centres or servers, may be a safer option that allows the public sector to achieve major efficiencies and cost savings while minimising issues related to data security.

Solution 5: Shared Services

A shared service desk is not for everyone – it can only work if the organisations sharing it have similar needs, culture and characteristics, and since IT can be a strategic advantage for competing businesses, sharing the same capability may mean losing that advantage. For the public sector, however, this solution may be ideal. Local councils with the same functions, services and needs will be able to afford a higher level of service at a reasonable price, sharing both the cost and the quality.

Solution 6: Service Management Good Practice

‘Doing more with less’ has been one of the most-used phrases since the recession started, and it is exactly what the public sector is looking for. Public organisations don’t want ITIL alignment, certifications and box-ticking for their own sake. All they want is efficiency and cost savings – and with the right Service Management moves, following an Efficiency Review to find out what needs improving and how, this can be achieved through the right choices regarding people, processes and technology.

Solution 7: Managed Services

A solution in which the IT Service Desk is kept internal, with its assets owned by the company but managed by a service provider, is becoming more and more popular among organisations in all sectors. When the sensitivity of data and a desire for a certain level of control over IT rule out full outsourcing, but in-house management cannot deliver the potential cost savings and efficiencies, a managed service can represent the ideal ‘in-between’ choice. The post-Spending Review public sector, then, may benefit from a flexible solution that is safer than outsourcing, but more cost-effective than an in-house operation.

Every challenge can be a new opportunity

Although budget reductions may affect investment in large IT projects and shiny new technology, they also represent the ideal opportunity to analyse what is essential and what is not, and to prioritise projects accordingly. The public sector then finds itself prioritising effectiveness over compliance, cost-efficiency over cheapness and experience over headline offers when choosing providers and tools for its IT. This will lead to solutions that help organisations run more smoothly and safely, invest their resources better and, ultimately, deliver a service that brings maximum customer and user satisfaction.

Martin Hill, Head of Support Operations

(also on Business Computing World: http://www.businesscomputingworld.co.uk/how-to-create-cost-efficiencies-in-the-post-spending-review-scenario/)

5 tips for moving Disaster Recovery to the Cloud

October 5, 2010

As virtualisation technologies become increasingly popular, more and more businesses are thinking about using cloud computing for Disaster Recovery. Experts in the field believe there are many advantages in embracing this solution – but there are also potential threats that need to be taken into account.

In order to consider cloud computing services, organisations need to evaluate the potential risks to their Information Assets and, in particular, how a 3rd party supplier will affect the Confidentiality, Integrity and Availability of their data.

Here are five tips on how to deal with the main challenges:

1. Risk Assessment and Asset Valuation

Right from the outset, organisations should try to understand what the greatest risks to the business are and identify which information assets are too important or too sensitive to hand over to a 3rd party supplier to control.

2. Smoke and Mirrors

To overcome the risks associated with choosing a new supplier, it is a good idea to carry out due diligence on the Cloud supplier – find out all you can about who you will be trusting with your information and review their facilities, processes and procedures, references and credentials, e.g. whether they are ISO 27001 certified.

3. Migrating Information

Once a decision is made to either partially or wholly migrate data/systems to the cloud, the biggest challenge is how to ensure there is a seamless migration to the external provider’s service. This is a very delicate step which, if dealt with inadequately, may result in data loss, leakage or downtime which could prove extremely costly to the business.

4. Service Level Management

When businesses trust 3rd parties with their vital corporate, personal and sensitive information, it is important to set up structured SLAs, Confidentiality Agreements, Security Incident handling procedures and reporting metrics, and above all to ensure the provider delivers compliant, transparent, real-time and accurate service performance and availability information.
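As an example of a reporting metric that makes such an SLA verifiable, the short sketch below, with hypothetical figures, turns the outage minutes logged in a month into an availability percentage and compares it with the agreed target:

# Hypothetical availability report checked against an agreed SLA target.
outage_minutes = [12, 45, 7]          # outages logged by the provider this month
minutes_in_month = 30 * 24 * 60

downtime = sum(outage_minutes)
availability = (minutes_in_month - downtime) / minutes_in_month * 100
sla_target = 99.9                     # availability percentage agreed in the contract

print(f"Availability this month: {availability:.3f}%")
print("SLA met" if availability >= sla_target else "SLA breached - review service credits")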

5. Retention and disposal

Depending on the policies and regulatory requirements applicable to the business, one of the main challenges with cloud computing is how to ensure that corporate retention policies are enforced when the information sits outside the company’s IT network perimeter. Obtaining certificates relating to the destruction of data is one thing; proving that information identified as sensitive or personal is only kept for as long as necessary is another. With the economies of scale often associated with cloud computing, total adherence to individual companies’ retention policies may prove difficult if resilience, backup and snapshot technologies are employed to safeguard the environment from outages or data loss.

David Cowan, Head of Infrastructure and Security

Find this article in the ‘5 tips’ section of Tech Republic: http://blogs.techrepublic.com.com/five-tips/?p=324

Cloud computing: how to minimise lock-in risks

June 10, 2010

Choosing more than one supplier is necessary until cloud computing comes of age.

Virtualising servers, purchasing space in data centres and using applications hosted and managed by third parties have undeniable advantages: they can increase efficiency, decrease IT-related costs, allow greater mobility and offer a greener alternative for organisations. But as the popularity of cloud computing grows, so do concerns about the unclear implications of the new technologies. If the initial worries were mostly about the security of data stored with a provider, an even bigger question is now arising: what would happen if an organisation wanted its data back, to bring it in-house as it grows, to transfer it to another provider as part of a merger, or to move some services (e.g. only email or back-up) to a cheaper, more efficient provider? Although it is possible to retrieve and migrate data, it is not an easy or straightforward operation, and the costs involved can be a barrier, leaving the organisation locked in with its provider – and forced to accept whatever price and conditions the provider decides to impose.

The problem with the newness of cloud computing technologies is that there are as yet no agreed standards for data formats and APIs to allow interoperability between infrastructures. Cloud computing providers are already working on improving portability and reducing latency during data transfers, but only within services and platforms hosted on their own, proprietary infrastructure. Migration to another vendor, by contrast, can be a lengthy and expensive procedure – apart from possible end-of-contract penalties, organisations will be charged both for format conversion and for the transfer itself, including additional charges for bandwidth usage, which can altogether amount to a very large figure. Migration costs can be prohibitive when dealing with large amounts of data; so even if it might seem easier and more convenient to have a single vendor provide all services, storing an entire organisation’s data within one infrastructure is a threat that can obstruct growth, structural change and the search for more cost-efficient, bespoke solutions.
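To illustrate why those costs become a barrier at scale, the rough sketch below estimates the one-off cost of moving data away from a provider. Every figure in it is invented purely for illustration; real charges vary by vendor, contract and data format.

# Purely illustrative estimate of the one-off cost of leaving a provider.
data_gb = 20_000              # volume of data to move out
egress_per_gb = 0.12          # hypothetical bandwidth/transfer charge per GB
conversion_per_gb = 0.05      # hypothetical charge for converting proprietary formats
exit_penalty = 5_000          # hypothetical end-of-contract penalty

migration_cost = data_gb * (egress_per_gb + conversion_per_gb) + exit_penalty
print(f"Estimated migration cost: £{migration_cost:,.0f}")  # grows linearly with data volume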

Experts reckon it could be a few years before data and service portability between vendors is possible, but organisations need not put off a move to the cloud – they just have to apply some smart thinking. The key to avoiding lock-in, it seems, is not to put all your eggs in one basket. The wisest organisations are already using this technique, cherry-picking different vendors for different services: one provider for email, another for back-up and another couple for applications and VDI. There are a few criteria for choosing, not necessarily based on the cheapest offer: ideal vendors should, first of all, provide modular packages, use popular formats for data and services, and be transparent about the rules and fees applied to data transfer.

Many benefits can be achieved with this strategy: organisations can create a bespoke, flexible solution and choose the best offer for each service. In some cases the overall price may be higher than the cost of a single provider for all services, but when switching vendors is slow and economically prohibitive, price becomes inelastic and can be raised at any time, leaving the organisation no choice but to pay. It is also essential to take into account the risk of a provider going bust: the recent security attacks on Google and the dotcom meltdown have taught us that no company is too big to go out of business.

To avoid data and financial loss, the only solution is to use more than one vendor. It is only through a game of pick and choose that lock-in risks and their consequences can be avoided, while still enjoying the cost-efficiencies made possible by cloud computing.

 

 

Ayodele Soleye, Senior Consultant

This article is featured on Director of Finance Online: http://www.dofonline.co.uk/content/view/4645/152/

Personal touch or fast resolution – what do end users really want from their IT Support?

April 19, 2010

When it comes to IT support, do end users really prefer face-to-face contact with an analyst, or would they rather have the problem fixed remotely in the shortest possible time? Does it make any difference to them, and what is their priority?

Many IT directors seem to invest a great part of their IT budget in an unnecessary number of desk side support analysts, rather than in new technology to speed up operations through remote intervention. Their justification is something along these lines: “Users like that personal touch. They feel awkward when they have to interact with a voice over the phone or deal with automated tools. They like to have a friendly face at their desk.”

Is this the truth, or an assumption made by IT? Do users just want their problem fixed whichever way is quickest, be that desk side or over the phone?

These are the results of our poll:

Answer – Votes
I don’t mind if my problem is solved remotely, I just want to get it fixed as quickly as possible. – 80%
I feel awkward when I have to interact with a voice over the phone or automated software. I like to have a friendly face at my desk. – 10%
I have no preference. Either way is fine with me. – 10%

Best Practice and Virtualisation: essential tools in Business Resilience and Continuity planning

March 25, 2010

Life in Venice doesn’t stop every time it floods. People roll up their trousers, pull on their wellies and still walk to the grocer’s, go to work, grab a vino with friends. And when it’s all over they mop the floor, dry the furniture, and go back to their pre-flood life. How do they do it? They choose not to have carpet or wooden flooring, keep up to date with water levels and have a spare pair of boots right next to the door. This is called prevention.

When it comes to faults in IT systems – which, like floods, can be both common and rare – prevention is not better than cure: it is the cure, the only one that allows business continuity and resilience.

Complicated machinery and analysis are a thing of the past: nowadays planning is far easier thanks to the expertise embodied in Best Practice processes and to new technologies such as virtualisation, which can bring user downtime close to zero.

First of all, it must be noted that virtualising servers, desktops or the data centre is not something that can be done overnight. Planning is needed to avoid choosing the wrong solution – one based on the latest product on the market and media talk rather than on what works best for the specific needs of the business – and to avoid inefficiencies, interruption of business services or even data loss during the process. Best Practice, then, is the essential framework within which all operations should be carried out for them to succeed.

Any change made to the system needs a mature level of Best Practice processes, such as the world-renowned ITIL (Information Technology Infrastructure Library), in place. These processes guide organisations in planning the best route through all operations and incidents, and are a key tool for avoiding wasted money and time and improving the performance of the IT department and of the business as a whole.

Once this is sorted, you can think about going virtual. From a technical point of view, virtualisation is gaining importance in business resilience and continuity planning thanks to the progress made by new technologies. Products such as VMware’s vSphere, for example, allow what is called “live migration”: the capacity and processing power of the virtual machines are treated as an aggregate pool rather than individually. As a consequence, not only is the load more evenly distributed, for faster, smoother operations, but whenever a machine fails its resources become immediately accessible from another connected device, without the user even noticing and without interrupting work.

Moreover, data is stored on virtual central storage, so it is accessible from different sources and does not get lost during system malfunctions, making business resilience faster and easier.

Guided by the expertise of Best Practice and with the help of virtualisation products that suit individual needs and goals, business resilience and continuity planning will not only be easier, but will also produce more effective results, allowing organisations to deliver their services and carry out their operations without fear of interruptions, inefficiencies or data loss.

 

Pete Canavan, Head of Service Transition

 

This article is in April’s issue of Contingency Today, and is also online at: http://www.contingencytoday.com/online_article/Best-Practice-and-Virtualisation/2242

Quick win, quick fall if you fail to plan ahead

January 11, 2010

Virtualisation seems to be the hot word of the year for businesses large and small, and as everyone concentrates on deciding whether VMware is better than Microsoft Hyper-V, often driven by the media, they can overlook one of the major pitfalls in moving to virtual – the lack of forward planning.

Many organisations invest only a small amount of money and time in investigating solutions; but choosing one that is tailored to the business, rather than the coolest, latest or cheapest product on the market, can save them from the illusion of cost-effectiveness.

The second mistake organisations often make is to put together a virtual environment quickly for testing purposes, which then, almost without anyone realising, becomes the production or live environment – either because of pressure to keep up with the rest of the business, or because the IT department uses the new and only partly tested environment to provision services rapidly and score a “quick win” with the rest of the business.

But a system that has not been planned and correctly tested is rarely set up for success.

My advice would be to plan, plan and then plan some more.

I suggest that organisations thinking about virtualising their systems undertake a capacity planning exercise. They should start by accurately analysing the existing infrastructure; this gives the business the information required to correctly scope the hardware needed for a virtual environment, and in turn provides the necessary information for licensing.
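A deliberately simplified sketch of that capacity planning exercise is shown below. It aggregates measured peak usage of the existing physical servers (the figures are hypothetical) and estimates how many virtualisation hosts of a given size would be needed, including headroom for growth and failover; a real exercise would of course be driven by monitoring data gathered over time.

# Simplified capacity planning: scope virtual hosts from measured peak usage (hypothetical figures).
import math

# (peak CPU in GHz, peak RAM in GB) measured for each existing physical server
servers = [(2.1, 6), (1.4, 8), (3.0, 12), (0.9, 4), (2.5, 16)]

total_cpu = sum(cpu for cpu, _ in servers)
total_ram = sum(ram for _, ram in servers)

host_cpu, host_ram = 16.0, 64   # capacity of one candidate virtualisation host
headroom = 1.25                 # 25% spare capacity for growth and failover

hosts_needed = max(
    math.ceil(total_cpu * headroom / host_cpu),
    math.ceil(total_ram * headroom / host_ram),
)
print(f"Peak demand: {total_cpu:.1f} GHz CPU, {total_ram} GB RAM")
print(f"Virtualisation hosts required: {hosts_needed}")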

Do not go from “testing of a new technology” on to a “live/production environment” without sufficient testing and understanding of the technology, or the inefficiencies could damage business continuity and the quality of services.

All in all, I advise organisations outside the IT sector to engage a certified partner company to assist with design and planning and, equally importantly, to undertake certified training courses to prepare staff to work with the new system.

Will Rodbard, Senior Consultant

Cloud computing – Help your IT out of the Tetris effect

January 8, 2010

Enjoy playing human Tetris on the tube at rush hour? All the hot, sweaty physical contact; the effort of pushing your way out, slowly, uneasily; people in your way, blocking you, breathing on you… Of course not. You just wish you could share the carriage with three friendly, quiet companions and kick the rest of the lot out, bringing a small selection of them back in only when you need an extra chat, some heat in the carriage, or specific information they might have.

If you imagine the tube situation as your IT system, you get a first glimpse of what Cloud Computing is about.

Cloud-based computing promises a number of advantages, but it is buying services “on-demand” that has caught the imagination. Rather than having to make a significant upfront investment in technology and capacity which you may never use, Cloud-based computing potentially allows you to tap into someone else’s investment and flex your resources up and down to suit your present circumstances.

Like all new computing buzzwords, the Cloud suffers from “scope creep”, as everyone wants to say that their own solution fits the buzzword – however spurious the claim. Many IT-savvy people think ‘the cloud’ is nothing but old wine in a new bottle, seeing similarities with what used to be called managed or hosted application services; it is essentially based on those, only with new technology to support their evolution – which makes matters more complicated and raises new doubts and queries.

But for most purposes, the Cloud extends to three types of solution – Software as a Service (SaaS), Managed Application Hosting and On-demand Infrastructure.  These are all terms that have been used for some time – the Cloud simply sees the distinctions between them become more blurred over time.

Software-as-a-Service is what most people will have already experienced with systems such as Salesforce.com.  The application is licensed on a monthly basis, is hosted and managed on the provider’s web server and is available to access from the client’s computers until the contract expires.

Managed Application Hosting simply takes this one step further where a provider takes responsibility for managing a system that the customer previously managed themselves.  A big area here is Microsoft Exchange hosting – many companies struggle with the 24×7 obligation of providing access to email and find it easier to get a specialist to manage the environment for them.

With Software as a Service, the infrastructure and data are physically located with the provider.  This can also be the model with Managed Application Hosting, although there are options for the provider to manage a system that remains within the customer’s perimeter.  Both models, however, raise a specific security concern in that the customer is obliged to give the provider access to the customer’s data.  This is, of course, not an uncommon model – outsourcing, and the trust it implies, has been around for years.

The third type of Cloud solution is On-demand Infrastructure.  Virtualisation has already got customers used to the idea that Infrastructure can now be more flexible and dynamic – in the past bringing a new physical server online could take weeks, particularly when procurement was factored in, but a new, fully-configured virtual server can now frequently be brought up in seconds.  However, there’s still the investment to be made in the virtualisation platform at the start – and what happens when you run out of capacity?

The promise of On-demand Infrastructure is that it removes the need to make a big upfront investment but allows Infrastructure to be added or taken away as circumstances arise.  This is potentially as powerful and radical a concept as virtualisation.  So how could it work?

Different vendors are approaching it in different ways.  Amazon, once known mainly as an online book store, has gone through a significant transformation and now offers its Elastic Compute Cloud service.  If you develop web-based systems on this platform, you can configure and pay for only the capacity you actually need.  Equally, Salesforce.com is no longer just an application but a whole development environment, which end users can extend with different functionality, buying additional computing capacity as required.

One issue with both of these models is portability – if I develop my application for the Amazon platform, I’m tied into it and can’t go and buy my Cloud resources from someone else.

VMware has taken a slightly different approach with its vSphere suite, which it is claiming to be the first “Cloud operating system”.  What this means in practice is that VMware is partnering with dozens of service providers across the world to provide On-Demand Infrastructure services which customers can then take advantage of.  In this model, a customer could choose to buy their virtual infrastructure from one provider located in that provider’s premises.  They could then join that with their own private virtual infrastructure and also that of another provider to give provider flexibility.  The real advantage of this approach is when it’s combined with VMware’s live migration technologies.  A customer who was running out of capacity in their own virtual infrastructure could potentially live-migrate services into a provider’s infrastructure with no down time.  Infrastructure becomes truly available On-Demand, can be moved to a different provider with little fuss, and the customer only pays for what they use. 

The vision is impressive.  There are still tie-in issues, in that the customer is tied to VMware technology, but they should find themselves less tied to individual providers.

Of course, the kind of technology needed to virtualise data centres demands an investment not everyone is willing to make, which brings us to the question on everyone’s mind: ‘Why should I move to the cloud?’

According to a survey carried out by Quest Software in October, nearly 75% of CIOs are unsure what the benefits of cloud computing are, and half are unsure of the cost benefits, particularly since they find it difficult to calculate how much their current IT is costing them. The firm interviewed 100 UK organisations with over 1,000 employees, of which only 20% said they were already actively using some cloud services; their main worries centre on security, technical complexity and cost.

Choosing between public and private cloud also has a different financial impact. Initial investment in the public cloud is clearly lower because there is no hardware expenditure, which private services do require; but Gartner analysts reckon IT departments will invest more in the private cloud through 2012 while the market matures, preparing the technology and the business culture to move to the public cloud later on. The cost of storage duration is an issue only for public services, which are purchased monthly with a per-GB usage fee combined with bandwidth transfer charges, and are therefore ideal for relatively short-term storage. In any case, the experts say that not all IT services will be moved to a virtual environment; some will have to remain within the organisation because of data security and sensitivity issues.

The promise of the Cloud is a virtualised data centre, applications and services all managed by expert third parties, so that not only will business operations run more smoothly and efficiently but, more importantly, IT managers can finally stop worrying about technical issues and focus on the important parts of their business, taking the strategic decisions that will bring their organisation further success.  Whether and how long this will take to become a reality is, at this stage, very hard to predict.

Adrian Polley, CEO