Archive for the ‘virtualisation’ Category

Focus on 2012: 5 key areas in Enterprise IT

December 19, 2011

According to industry analysts, experts and professionals, some of the changes and novelties introduced in the last few years are set to become genuine trends in 2012. Shaped by the ever-challenging economic climate, a more sober outlook on industry best practice and the need to obtain measurable efficiency from any IT project, these are the five key areas that will acquire growing importance next year:

1) Greater use of non-desktop-based applications

This is due to a growing need for mobility and flexibility. Users need to be able to work while travelling, from any desk or office (for instance, in large or international companies) and from home, as home-working is growing thanks to the financial benefits involved. It is also a good way to guarantee business continuity in the face of unforeseen circumstances such as natural disasters or strikes which leave workers stranded or unable to reach the office. As well as cloud applications, virtualised desktops are becoming a must-have for many organisations. Companies with older desktops that need updating anyway will find the switch more financially convenient, as will those with a large number of mobile users who need to access applications from a smartphone or laptop while away from their main office. It can also give organisations considering or embracing home-working more control over the desktops, as these will be centralised and managed by the company rather than at user level.

2) Greater use of outsourced management services

The ‘doing more with less’ concept that took hold at the beginning of the recent recession has translated into practical measures. These include handing part or all of the Service Desk to an external service provider which, for a fixed cost, will know how to make the best of what the company has, and will provide skilled personnel, up-to-date technology and performance metrics. Managed services, IT outsourcing and cloud services will become even more prominent in 2012 and the following years because of their practical and financial convenience. With the right service provider, the outcome is improved efficiency, fewer losses arising from IT-related incidents and more manageable IT expenditure.

3) Management plans for ‘big data’

There is much talk around the topic of ‘big data’, which describes the large volume of varied data organisations now have to deal with. Some practical issues arise from this – mainly how to store it, share it and use it, all without breaching the Data Protection Act. However, it is still very difficult to see how to take the next step: using this data strategically to create business advantage. This is something companies will have to look at in the years to come; for next year, they might simply concentrate on handling data safely and efficiently, perhaps storing it on a private virtual server or using public cloud services.

4) A more balanced approach to security

This new approach drops the over-adoption of security measures, following the realisation that it can hurt productivity by delaying business operations; it can also diminish the opportunities found in sharing data within the sector, which allow organisations to improve and grow; and it can be counter-productive, with employees bypassing the measures in place to make operations quicker. Although compliance with current regulations is becoming vital, there will be more scoping and tailoring than large-scale technology adoption. Organisations will be analysed to understand which areas need security measures and to what extent. This way, heavy security measures will be applied only to high-risk areas rather than throughout the whole organisation, with less critical areas able to work more freely. In this approach, risks are balanced against efficiency and opportunity, and the end result is a tailored solution rather than a collection of off-the-shelf products.

5) Less control over the IT budget

Due to the challenging economic climate, other departments, in particular the finance department and therefore the DOF, will have more control over IT investments. CIOs and IT Managers will have to be able to evaluate whether an IT project is necessary or just a nice-to-have, and how it can bring business advantage. All proposed IT investment will have to be justified financially; therefore, it is important to analyse each project and establish a reasonable ROI before presenting it to the finance decision-makers. This implies that IT professionals have to learn ‘business talk’ and translate difficult technical descriptions into business terms.
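
To make that translation concrete, here is a minimal sketch in Python (the project and every figure are hypothetical, purely for illustration) of how an IT investment might be expressed as simple ROI and payback numbers before it reaches the finance decision-makers:

# Hypothetical business case for an IT project; every figure below is illustrative.

def roi(total_benefit: float, total_cost: float) -> float:
    """Return on investment as a fraction: (benefit - cost) / cost."""
    return (total_benefit - total_cost) / total_cost

def payback_months(upfront_cost: float, monthly_saving: float) -> float:
    """Months needed for cumulative savings to cover the upfront cost."""
    return upfront_cost / monthly_saving

upfront_cost = 120_000       # e.g. licences, servers and implementation
monthly_saving = 5_000       # e.g. reduced support and hardware refresh costs
benefit_over_3_years = monthly_saving * 36

print(f"ROI over 3 years: {roi(benefit_over_3_years, upfront_cost):.0%}")              # 50%
print(f"Payback period:   {payback_months(upfront_cost, monthly_saving):.0f} months")  # 24 months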

All in all, developments within IT will not come to a halt next year – investment and change will continue, but with a more careful outlook and a stronger focus on efficiency, safety and Return on Investment rather than on following trends or adopting the latest technology for its own sake. Because of this, the difficult economic climate could also be seen as a good thing: organisations will make wiser, more far-sighted choices that create a solid base for future decisions made when times are less tough and spending capacity rises, increasing the efficiency potential of IT for business purposes.

Tony Rice, Service Delivery Manager

NEWS: Plan-Net wins 5-year IT outsourcing deal with Davenport Lyons

October 31, 2011

IT services provider Plan-Net plc has agreed a 5-year IT outsourcing contract with the West End law firm Davenport Lyons.

Plan-Net will provide Davenport Lyons with a new virtual infrastructure and 24/7 support delivered from an onsite team and dedicated legal IT support centre based in central London.

Plan-Net Director Adrian Polley commented:

‘We are extremely pleased to be adding Davenport Lyons to our growing list of legal clients and look forward to delivering the high levels of service required in this sector.’

  • For more information contact:

Samantha Selvini

Press Officer, Plan-Net plc

Tel: 020 7632 7990

Email: samantha.selvini@plan-net.co.uk

  • About Plan-Net

A specialist in transforming IT operations into high-performance, cost-efficient platforms for business success, Plan-Net works with clients of all sizes and needs to help them maintain high levels of service while still meeting demands for a reduction in IT spending.

Plan-Net has helped to enhance performance, flexibility, security, cost-efficiency and, ultimately, user productivity for clients large and small over the two decades of its existence.

Website: www.plan-net.co.uk

Blog: https://plannetplc.wordpress.com/

Twitter: www.twitter.com/PlanNetplc

Financial firms’ IP is safe with VDI

May 10, 2011

As with many other new technologies, financial organisations have been among the keenest to embrace desktop virtualisation. The main reason this particular technology is being widely adopted by the sector is that it suits the need for easier mobility: thanks to VDI, users can access their desktop from any PC with an internet connection, making it easy to reach large amounts of data and heavy applications from a lightweight netbook while travelling, and even making it unnecessary to carry a laptop around when visiting another office.

But although for many this is reason enough to adopt the technology, there is another advantage that makes desktop virtualisation even more attractive for the financial sector: it allows all Intellectual Property to be centralised, owned and managed not at end-user level but centrally by the IT department. It is extremely important for this particular sector to have control over data and IP, as these are vital to a firm creating competitive advantage in the market – to financial firms, IP is an important asset and therefore needs to be protected. With this solution, all data is processed and saved in a central hub rather than at user level, making it harder for users to take information to competitors or to copy it onto an external device and lose it, thereby protecting the company from breaches of the Data Protection Act.

Centralisation also means that individuals will not be able to freely download random software onto their desktop that may contain viruses or open a window for hacking, with all the extra security benefits that this entails. Given the sensitivity and importance of the information a financial firm deals with, being able to minimise these kinds of data security breach is a great advantage – it increases public and regulatory confidence and credibility, which can add value to the company. Data leakage, loss or theft may lead not only to costly fines; because of the obligation to inform clients and make an incident public, it is also likely to create a loss of reputation and, therefore, business – both with current and potential customers, who might opt for the competition. A safe environment is more attractive, so a correctly managed VDI solution can help retain clients and perhaps also win new business.

Of course, desktop virtualisation has a cost and is not particularly attractive financially for companies that have just upgraded their hardware. It is, however, a wise alternative when PCs are due a refresh anyway. In that case the ROI is almost immediate, and the short and medium-term benefits are not confined to reduced hardware costs. It also enables the IT department to achieve some important cost-efficiencies. Benefits include: enabling IT support personnel to carry out maintenance more easily and quickly; speeding up simple operations such as patching and rolling out application upgrades; and a smaller number of technicians needed to support remote users, especially the more expensive desk-side engineers.

Desktop virtualisation also allows for cost savings in the long run by extending the PC lifecycle and applying a concurrent-usage software licensing model. Pooling flexible server hardware extends its lifecycle too, and the simplified infrastructure helps bring downtime close to zero.
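
As a rough illustration of the concurrent-usage licensing point (all numbers hypothetical), licences are bought for the peak number of simultaneous users rather than for every named device:

# Hypothetical comparison of per-device and concurrent-usage licensing; figures are illustrative.

total_users = 500
peak_concurrency = 0.6      # assume at most 60% of users are logged on at once
licence_cost = 150          # annual cost per licence

per_device_cost = total_users * licence_cost
concurrent_cost = round(total_users * peak_concurrency) * licence_cost

print(f"Per-device licensing: £{per_device_cost:,}")                     # £75,000
print(f"Concurrent licensing: £{concurrent_cost:,}")                     # £45,000
print(f"Annual saving:        £{per_device_cost - concurrent_cost:,}")   # £30,000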

The advantages of VDI are evidently numerous, but being in control of data, IP and the way each individual desktop is managed by end users represents a major benefit for financial organisations in particular. If implemented and managed correctly, in fact, this technology can allow them to gain competitive advantage, minimise losses, increase their security and return on investment, ultimately improving business success.

Sharron Deakin, Principal Consultant

This article was written for Director of Finance Online: http://www.dofonline.co.uk/content/view/5270/118/

What is the impact of the Cloud on the existing IT environment?

March 10, 2011

As organisations look to embrace the cost-efficiency opportunities deriving from new technologies and services, there is a lot of talk about the benefits, risks and possible ROI of the blanket concept of ‘Cloud computing’. However, it is still unclear how using Cloud services will affect the existing network infrastructure and what impact it can have on IT support roles and the way end users deal with incidents.

The effect on an organisation’s infrastructure depends on the Cloud model adopted, which may vary based on company size. For example, small organisations which are less worried about owning IT and have simpler, more generic IT needs might want to buy a large part of their infrastructure as a shared service, purchasing on-demand software from different vendors.

Buying into the Software as a Service model has the benefit of simplicity and low cost, as it cuts out much of the responsibility and expense involved, which is a great advantage for SMEs. This solution also allows 24/7 service availability, something that small firms might not be able to afford otherwise. The lack of flexibility of this service, stemming from the inability to customise the software, is less of a problem for these types of organisation. But there are still risks around performance and vendor lock-in.

Using this model, a small company’s retained IT infrastructure can be relatively simple, and therefore there might be little need for specialist technical skills within the IT department. A new skill set is required, however: IT personnel will need to manage all the different relationships with the various vendors, checking that the service purchased is performing as agreed and that they are getting the quality they are paying for. The IT Service Desk will therefore require a smaller number of engineers, who are less technical but more commercially savvy. Specialist skills will shift towards the provider, and 1st line analysts will have to escalate more calls to the various vendors.

Larger organisations, on the other hand, may well be keen on retaining more control over their infrastructure while purchasing IT resources on demand. With this model, the organisation still manages much of its infrastructure, but at a virtual level – the vendor might provide hardware resources on demand, for instance. The main advantage of this model is that it makes the existing infrastructure highly flexible and scalable, able to adapt to changing business needs. For example, building a data centre is lengthy and expensive, and therefore not a convenient route to expansion. But by using a ‘virtual datacentre’ provider, capacity can be increased in the space of an online card transaction, with great financial benefits – in the Cloud model only the necessary resources are paid for, without investment in hardware or its maintenance.
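
As an illustration of what ‘increasing capacity in the space of an online card transaction’ might look like, here is a minimal sketch against a hypothetical virtual datacentre provider’s REST API – the endpoint, credential and payload fields are invented for illustration, and real providers’ APIs will differ:

# Hypothetical sketch: requesting extra capacity from a 'virtual datacentre' provider on demand.
# The endpoint, credential and fields below are invented for illustration only.
import requests

API_URL = "https://api.example-vdc.com/v1/servers"   # placeholder provider endpoint
API_KEY = "..."                                      # account credential

def provision_server(cpus: int, ram_gb: int, disk_gb: int) -> dict:
    """Ask the provider for one more virtual server, billed only for the hours it runs."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"cpus": cpus, "ram_gb": ram_gb, "disk_gb": disk_gb},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()   # e.g. the new server's ID and address

# Scale out for a month-end reporting run; de-provision again once the peak has passed.
new_server = provision_server(cpus=4, ram_gb=16, disk_gb=200)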

With this second model, the change in roles within the IT department will mainly concern an increased need, as in the other model, for vendor management skills. Monitoring KPIs, SLAs and billing will be a day-to-day task, although there will still be a need for engineers to deal with the physical and virtual infrastructure.

Both models generally have very little impact on the end user if the IT Service Desk has been running efficiently, as this does not disappear as a first point of contact. However, in certain cases the change might be more visible, for instance if desk-side support is eliminated – a cultural change that may need some adapting to.

All in all, change is not to be feared – with the necessary awareness, embracing Cloud services can improve IT efficiency significantly, and align it to the business. By leaving some IT support and management issues to expert providers an organisation can gain major strategic advantage, saving money and time that they can ultimately use in their search for business success.

 

Adrian Polley, Technical Services Director

This article appears on the National Outsourcing Association’s (NOA) online publication Sourcing Focus: http://www.sourcingfocus.com/index.php/site/featuresitem/3318/

Surviving IT spending cuts in the public sector

February 15, 2011

How to create cost-efficiencies in the post-Spending Review scenario

After the announcement of 25%-40% budget cuts last year, it is reasonable to expect IT to be one of the departments that suffers most in public sector organisations. However, cuts in IT support and projects may bring inefficiencies and disruption, which can then lead to real losses and rising costs. More than ever, CIOs and IT Directors in public sector organisations are weighing up their options, from quick fixes to more far-sighted ideas, trying to find a solution that will produce savings without compromising service quality and data security, and that perhaps even increases efficiency. Here are some common ideas, analysed:

Solution 1: Reducing headcount

Firing half of your IT team will produce immediate savings, since you will not have to pay their salaries in the following months, but when support staff are too few or not skilled enough to meet the organisation’s needs the result can be excessive downtime, data loss, security breaches or the inability to access applications or the database. A ‘quick fix’ such as this represents a false economy. Reviewing resource allocation and improving skill distribution at Service Desk level, on the other hand, can be a valid solution. Indeed, many IT departments find themselves top-heavy with expert, long-serving team members, where the supply of knowledge outweighs the demand. A larger proportion of lower-cost 1st line engineers with improved and broader skills, and a fair reduction in the more deeply skilled and costly 2nd and 3rd line technicians, can not only reduce staff spend but also create efficiencies, with more calls being resolved with a first-time fix.

Solution 2: Offshoring

Although the thought of employing staff who ask for only a small percentage of a normal UK salary may sound appealing, offshoring is not as simple as ABC. It requires a large upfront investment to set up the office abroad, with costs including hardware, software, office supplies, and travel and accommodation for any personnel managing the relationship with the supplier. Many organisations cannot afford that kind of investment, especially since this solution only creates cost savings in the long term – and the public sector needs cost savings now. Furthermore, differences in culture and law can represent a risk to information security: data could easily be accessed by staff in a country thousands of miles away and sold for a couple of dollars, as various newspapers and TV channels have found out. Given the extreme sensitivity of the data processed by councils, charities and the NHS, no matter how hard foreign suppliers try to convince the public sector to offshore its IT, it is unlikely this will happen – it is simply too risky.

Solution 3: IT Cost Transparency

Understanding the cost of IT and its value to the organisation, being able to prioritise and manage people and assets accordingly and knowing what can be sacrificed, can help identify where money is being wasted, which priorities need to be altered and what can be improved. For instance, do all employees need that piece of software if only three people actually use it more than twice a year, and do you need to upgrade it every year? Do all incidents need to be resolved now, or can some wait until the more urgent ones are dealt with? Do you need a printer in each room, and when it breaks do you need to buy a new one or could you make do with sharing one machine with another room? These and many other questions will lead to more efficient choices, but only after having identified and assessed the cost and value of each aspect of IT, including people and assets.

Solution 4: Cloud computing

There are contrasting opinions on this matter. The Government CIO, John Suffolk, encourages the use of this service and reckons that the public sector could save £1.2bn by 2014 thanks to this solution. However, many believe that placing data in the hands of a service provider can be risky due to the highly sensitive nature of the data involved, so traditional Cloud computing may not be an ideal solution.

A shared environment such as the G-cloud, where various public sector organisations share private data centres or servers, may be a safer option that allows the public sector to achieve major efficiencies and cost savings, while minimising issues related to data security.

Solution 5: Shared Services

A shared service desk is not for everyone – it can only work if the organisations sharing it have similar needs, culture and characteristics, and since IT can be a strategic advantage for competitive businesses, sharing the capability may mean losing that advantage. But for the public sector, this solution may be ideal. Local councils with the same functions, services and needs will be able to afford a higher level of service for a reasonable price, sharing both the cost and the quality.

Solution 6: Service Management Good Practice

‘Doing more with less’ has been one of the most used phrases since the recession started, and it is exactly what the public sector is looking for. Public organisations are not interested in ITIL alignment, certifications and box-ticking for their own sake. All they want is efficiency and cost savings – and, with the right Service Management moves after an Efficiency Review to find out what needs improvement and how, this can be obtained through the right choices regarding people, processes and technology.

Solution 7: Managed Services

A solution where the IT Service Desk is kept internal, with its assets owned by the company but managed by a service provider, is becoming more and more popular among organisations in all sectors. When the sensitivity of data and a desire for a certain level of control over IT rule out full outsourcing, but in-house management does not deliver the potential cost savings and efficiencies, a managed service may represent the ideal ‘in-between’ choice. The post-Spending Review public sector, then, may benefit from a flexible solution that is safer than outsourcing but more cost-effective than an in-house operation.

Every challenge can be a new opportunity

Although budget reduction may affect investment in large IT projects and shiny new technology, it also represents the ideal opportunity to analyse what is essential and what is not, and to prioritise projects accordingly. The public sector, then, finds itself favouring effectiveness over compliance, cost-efficiency over cheapness and experience over special offers when choosing providers and tools for its IT. This will lead to solutions that help organisations run more smoothly and safely, invest their resources better and, ultimately, deliver a service that brings maximum customer and user satisfaction.

Martin Hill, Head of Support Operations

(also on Business Computing World: http://www.businesscomputingworld.co.uk/how-to-create-cost-efficiencies-in-the-post-spending-review-scenario/)

10 things we learnt in 2010 that can help make 2011 better

December 23, 2010

This is the end of a tough year for many organisations across all sectors. We found ourselves snowed in last winter, were stuck abroad due to a volcanic eruption in spring, suffered the announcement of a tightened budget in summer, and had to start making drastic cost-saving plans following the Comprehensive Spending Review in autumn. Data security breaches and issues with unreliable service providers have also filled the press.

Somehow the majority of us have managed to survive all that; some better than others. As another winter approaches it is time to ask ourselves: what helped us through the hard times and what can we do better to prevent IT disruptions, data breaches and money loss in the future?

Here are some things to learn from 2010 that may help us avoid repeating errors and at the same time increase awareness of current issues, for a more efficient, productive and fruitful 2011:

1- VDI to work from home or the Maldives

Plenty of things prevented us from getting to work in 2010, natural disasters, severe weather and industrial disputes being the biggest culprits. Remote access solutions have been around for a long time, but desktop virtualisation has taken things a stage further. With a virtual desktop, you access your own complete, customised workspace when out of the office, with performance similar to working in the office. Provided there’s a strong and reliable connection, VDI minimises the technical need to be physically close to your IT.

2- Business continuity and resilience with server virtualisation

Server virtualisation is now mainstream, but there are plenty of organisations, large and small, who have yet to virtualise their server platform. When disaster strikes, those who have virtualised are at a real advantage – building an all-encompassing recovery solution on virtualised servers is just so much easier than having to deal with individual physical kit and the applications running on it. For anyone who has yet to fully embrace virtualisation, it’s time to reassess that decision as you prepare for 2011.

3- Good Service Management to beat economic restrictions

With the recent economic crisis and the unstable business climate, the general message is that people should be doing more with less. It’s easy to delay capital expenditure (unless there’s a pressing need to replace something that’s broken or out of warranty), but how else to go about saving money? Surprisingly, effective Service Management can deliver significant cost-efficiencies through efficient management of processes, tools and staff. Techniques include rearranging roles within the IT Service Desk to achieve more fixes earlier in the support process, and adopting automated tools to deal with the most common repeat incidents. Getting proper, effective measures of the service, down to the individuals delivering it, also helps to set the bar of expectation, monitor performance and improve the success of processes.
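
To give a flavour of what ‘proper and effective measures of the service’ can mean in practice, here is a minimal sketch that derives two common Service Desk measures from exported ticket records – the field names and sample data are invented for illustration, and real service desk tools export different formats:

# Minimal sketch: basic service measures from exported ticket records.
# Field names and sample data are invented for illustration; real tools differ.
from datetime import datetime, timedelta

tickets = [
    {"opened": datetime(2010, 11, 1, 9, 0), "resolved": datetime(2010, 11, 1, 9, 20), "escalated": False},
    {"opened": datetime(2010, 11, 1, 10, 0), "resolved": datetime(2010, 11, 2, 15, 0), "escalated": True},
    # ... remaining tickets would be loaded from the service desk tool
]

first_time_fix_rate = sum(1 for t in tickets if not t["escalated"]) / len(tickets)
average_resolution = sum((t["resolved"] - t["opened"] for t in tickets), timedelta()) / len(tickets)

print(f"First-time fix rate:     {first_time_fix_rate:.0%}")
print(f"Average resolution time: {average_resolution}")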

4- Flexible support for variable business

An unstable economic climate means that staffing may need to be reduced or increased for certain periods of time, but may need rescaling shortly afterwards. At the same time, epidemics, natural disasters and severe weather conditions may require extra staff to cover for absences, often at the last minute. Not all organisations, however, can afford to have a ‘floating’ team paid to be available in case of need, or manage to get contractors easily and rapidly. An IT Support provider that can offer flexibility and scalability may help minimise these kinds of disruption. In fact, some providers will have a team of widely skilled multi-site engineers who can be sent to any site in need of extra support, and kept only until no longer needed, without major contractual restrictions.

5- Look beyond the PC

Apple’s iPad captured the imagination this year. It’s seen as a ‘cool’ device, but its success stems as much from the wide range of applications available for it as from its innate functionality. The success of the iPad is prompting organisations to look beyond the PC in delivering IT to their user base. Perhaps a more surprising story was the rise of the Amazon Kindle, which resurrected the idea of a single-function device. The Kindle is good because it’s relatively cheap, delivers well on its specific function, is easy to use and has long battery life. As a single-function device, it’s also extremely easy to manage. Given the choice, I’d rather take on the challenge of managing and securing a fleet of Kindles than Apple iPads, which for all their sexiness add another set of security management challenges.

6- Protecting data from people

Even a secure police environment can become the setting for a data protection breach, as Gwent Police taught us. A mistake caused by the recipient auto-complete function led an officer to send some 10,000 unencrypted criminal records to a journalist. If a data classification system had been in place, where every document created is routinely classified with a level of sensitivity and restricted so that only authorised people can view it, the breach would not have taken place because the information could not have been sent. We can all learn from this incident – human error will occur and there is no way to avoid it completely, so counter-measures have to be implemented upfront to prevent breaches.
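
As a conceptual sketch of the kind of counter-measure a classification system makes possible (not any specific product – the labels, domain and rule below are invented for illustration), outbound mail can be checked against document classification labels before it leaves the organisation:

# Conceptual sketch: blocking outbound mail based on document classification labels.
# The labels, domain and rule are illustrative, not taken from any real product.

CLASSIFICATIONS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}
INTERNAL_DOMAIN = "force.example.police.uk"   # placeholder internal mail domain

def may_send(recipient: str, attachment_labels: list) -> bool:
    """Allow a message only if every attachment is cleared for the recipient."""
    external = not recipient.endswith("@" + INTERNAL_DOMAIN)
    highest = max(CLASSIFICATIONS[label] for label in attachment_labels)
    # Anything above 'internal' must never leave the organisation automatically.
    return not (external and highest > CLASSIFICATIONS["internal"])

# An auto-completed external address with thousands of criminal records attached:
print(may_send("journalist@newspaper.example", ["restricted"]))   # False - message blocked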

7- ISO27001 compliance to avoid tougher ICO fines

The Data Protection Act was enforced more strictly last year, with tighter rules and higher fines and the ICO able to impose a £500,000 penalty for a data breach. This resulted in organisations paying the highest fines ever seen. Zurich Insurance, for instance, had to pay over £2m after the loss of 46,000 records containing customers’ personal information – and it would have been more if they hadn’t agreed to settle at an early stage of the FSA investigation. ISO 27001 has gained advocates in the last year because it tackles the broad spectrum of good information security practice, not just the obvious points of exposure. A gap analysis and alignment with the ISO 27001 standard is a great first step towards staying on the safe side. However, it is important that any improved security measure is accompanied by extensive training, so that all staff who may deal with the systems gain a strong awareness of regulations, breaches and consequences.

8- IT is not just IT’s business – it is the business’ business as well

In an atmosphere where organisations are watching every penny, CFOs acquired a stronger presence in IT although neither they nor the IT heads were particularly prepared for this move. For this reason, now the CIO has to find ways to justify costs concretely, using financial language to propose projects and explain their possible ROI. Role changes will concern the CFO as well, with a need to acquire a better knowledge of IT so as to be able to discuss strategies and investments with the IT department.

9- Choose your outsourcing strategy and partner carefully

In 2010 we heard about companies dropping their outsourcing partner and moving their Service Desk back in-house or to a safer Managed Service solution; about Virgin Blue losing reputation due to a faulty booking system managed by a provider; and about Singapore bank DBS, which suffered a critical IT failure that caused considerable inconvenience to customers. In 2011, outsourcing should not be avoided, but the strategy should include solutions which allow more control over assets, IP and data, and less upheaval should the choice of outsourcing partner prove to be the wrong one.

10- Education, awareness, training – efficiency starts from people

There is no use in having the latest technologies, best practice processes and security policies in place if staff are not trained to put them to use, as the events of 2010 amply demonstrated. Data protection awareness is vital to avoid information security breaches; training in the latest applications will drastically reduce the number of incident calls; and education in best practice will smooth operations and allow organisations to achieve the cost-efficiencies they seek.

Adrian Polley, CEO

This article has been published on Tech Republic: http://blogs.techrepublic.com.com/10things/?p=2100

Doing more with less: an opportunity to learn

May 7, 2010

Budget reduction teaches organisations to prioritise – a lesson to be learnt not only by the public sector.

The recently announced budget has not been kind to public sector IT, just as expected. Large cuts mean that most technology projects will have to be shelved, but this does not put the level of performance the sector craves out of reach – on the contrary, budget reduction is the kind of incentive that drives organisations to prioritise and to seek efficiencies, focusing more on operational rather than capital expenditure. This does not apply exclusively to the public sector, of course: many private companies are struggling with similarly tight purse strings, so there is a lesson for them as well in such challenging circumstances.

Quick-fix plans which consist of simply reducing headcount and purchasing tools only to replace the most obsolete assets are unlikely to be the best way to preserve, let alone increase, efficiency. With most organisations nowadays recognising that IT forms their backbone, it is clear that a wiser roadmap must be designed. Clear-sighted organisations, then, will have a strategy which sees them realigning roles and improving skills within their IT department, implementing relevant Best Practice processes and adopting tools and technologies that can help reduce overall operating costs while improving efficiency, such as virtualised servers and automated service desk management software. Scoping and planning are vital in order to design a strategic solution that is bespoke, fit for purpose and scalable – fit not only for present conditions but for the medium term as well – and to demonstrate clearly what cost efficiencies a well-balanced mix of people, process and technology can achieve.

In terms of staffing, it seems that many IT Service Desks lack the skills and tools to deal with most calls at first-line level, and therefore become overburdened with an unnecessarily (not to mention costly) large number of second-line engineers, who are also, because of their more ‘flexible’ nature, often slower in dealing with incidents. Up-skilling first line support, in conjunction with Best Practice procedures and the adoption of automated software which can deal with simple, repetitive incidents such as password resets, may take the level of first-time fix from as little as 20-30 per cent to 60-70 per cent. This means that a smaller total number of support personnel are needed, especially at second line, and that the service will be markedly improved, with incidents taking less time to resolve, resulting in a more efficient service for users.
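
A rough worked example of what that shift means for call volumes (figures invented for illustration):

# Illustrative arithmetic: how raising first-time fix reduces escalated call volumes.

monthly_calls = 2000

def escalations(first_time_fix_rate: float) -> int:
    """Calls passed beyond first line each month."""
    return round(monthly_calls * (1 - first_time_fix_rate))

before = escalations(0.25)   # 25% fixed at first line -> 1,500 escalations a month
after = escalations(0.65)    # 65% fixed at first line ->   700 escalations a month

print(f"Escalations before up-skilling: {before}")
print(f"Escalations after up-skilling:  {after}")
print(f"Second/third-line workload cut by {(before - after) / before:.0%}")   # 53%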

Best Practice implementation is a key component in this cost-effective innovation project. The adoption of procedures based on a discipline such as ITIL (Information Technology Infrastructure Library) will help any organisation function in the best possible way. The processes described by ITIL deal, among other things, with the management of incidents, risks and change. The latter is of particular relevance: to deal with any alteration to the system, be it small or large, without causing inefficiencies, disruption and, consequently, business or client loss, it is important to have a mature level of Change Management already in place.

Because accepting change and truly understanding this new way of working is difficult, ITIL-based experiential learning sessions are an important aid in delivering the discipline so that change can actually happen, and in guaranteeing the active participation of all staff taking part in the training. This should not be limited to people who are directly affected, but should extend to management, who equally need to embrace the importance of best practice.

Another smart innovation, one that takes the idea of ‘doing more with less’ at its most literal, is virtualisation. By virtualising both the desktop and server environments, cost savings from reduced user downtime and further improvements in levels of remote (and therefore first line) fixing can be substantial, not to mention the additional benefits of reduced server maintenance costs (from personnel to energy consumption).

The steps to take may appear quite clear and straightforward, but current in-house skills, resources and experience might not be enough to deal with such innovation and, as a result, many organisations will need the expertise of a service provider. With regard to the public sector, the cheapest outsourcing option, commonly seen as offshoring, may be automatically ruled out due to information security issues. However, security concerns private organisations as well, especially those which hold extremely sensitive information, such as law firms and banks. These companies cannot risk the loss of reputation, not to mention the hefty fine, that can follow a breach of the Data Protection Act by an improperly trained employee or a non-secure service provider.

There is a solution, though, where cost-efficiency can be achieved at the same or a lower price than an in-house operation. As predicted by analysts in the sector, it is probable that many organisations will increasingly be driven towards adopting a managed service solution over the next couple of years. With Managed Services, Service Desk management is taken care of by a third party, often on the office premises, and while personnel and procedures are left in the hands of the provider, the organisation still retains ownership of assets and control over data – particularly important when information held within the system is sensitive and cannot risk leakage or loss.

It is not uncommon to achieve cost savings of 15 per cent or more when compared to a similar, in-house option, saving organisations money and improving the overall functioning of operations, in turn creating more business opportunities and enhancing the users’ ability to maximise productivity.

When it comes to innovation and change, and especially when that may involve reductions of any kind, it might be true that a view from the inside is not likely to be the most objective. With that in mind, working with a specialist partner would seem to be the most logical conclusion; however, doing more with less is far more likely to be attainable in the long term if management visibility and control is retained internally to ensure IT is kept close to the heart of the organisation at all times. Balance, it seems, is key to success.

 

Jerry Cave, Director

This article features on the BCS website and in the BCS Service Management e-newsletter: http://www.bcs.org/server.php?show=conWebDoc.35420

Personal touch or fast resolution – what do end users really want from their IT Support?

April 19, 2010

With regard to IT support, do end users really prefer face-to-face contact with an analyst, or would they rather have the problem fixed remotely in the shortest time possible? Does it make any difference to them, and what is their priority?

Many IT directors seem to invest a great part of their IT budget in an unnecessary number of desk side support analysts, rather than in new technology to speed up operations through remote intervention. Their justification is something along these lines: “Users like that personal touch. They feel awkward when they have to interact with a voice over the phone or deal with automated tools. They like to have a friendly face at their desk.”

Is this the truth or an assumption by IT? Do users just want their problem fixed whichever way is quickest, be that desk-side or over the phone?

These are the results of our poll:

Answer – Votes
  • I don't mind if my problem is solved remotely, I just want to get it fixed as quickly as possible. – 80%
  • I feel awkward when I have to interact with a voice over the phone or automated software. I like to have a friendly face at my desk. – 10%
  • I have no preference. Either way is fine with me. – 10%

Best Practice and Virtualisation: essential tools in Business Resilience and Continuity planning

March 25, 2010

Life in Venice doesn’t stop every time it floods. People roll up their trousers, pull on their wellies and still walk to the grocer’s, go to work, grab a vino with friends. And when it’s all over they mop the floor, dry the furniture, and go back to their pre-flood life. How do they do it? They choose not to have carpet or wooden flooring, keep updated on water levels and have a spare pair of boots right next to the door. This is called prevention.

When it comes to faults in IT systems – which, like floods, can be both common and rare – prevention is not better than cure: it is the cure, the only one that allows business continuity and resilience.

Complicated machinery and analysis are a thing of the past: nowadays planning is far easier thanks to the expertise embodied in Best Practice processes and to new technologies such as virtualisation that can bring user downtime close to zero.

First of all, it must be noted that virtualising servers, desktops or the data centre is not something that can be done overnight. Planning is needed to avoid choosing the wrong solution – one based on the latest product on the market and on media talk rather than on what works best for the specific needs of one’s business – and to avoid possible inefficiencies, interruption of business services, or even data loss during the process. Best Practice, then, emerges as the essential framework within which all operations should be carried out in order for them to be successful.

Any change made to the system, in fact, needs a mature level of Best Practice process, such as the world-renowned ITIL (Information Technology Infrastructure Library), already in place. Such processes guide organisations in planning the best route through all operations and incidents, and are a key tool for avoiding wasted money and time and for improving the performance of the IT department and of the business as a whole.

Once this is sorted, you can think about going virtual. From a technical point of view, virtualisation is gaining importance in business resilience and continuity planning thanks to the progress made by new technologies. Products such as VMware’s vSphere, for example, allow what is called ‘live migration’: the capacity and performance of the underlying hosts are treated as an aggregate pool rather than individually, so that not only is the load more evenly distributed, for faster, smoother operations, but whenever a machine fails its workloads are immediately picked up by another host in the pool, without the user even noticing and without interrupting the procedure.
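
The pooling idea can be illustrated with a small conceptual sketch – plain Python, not VMware code, with hosts, capacities and virtual machines invented for illustration – in which the virtual machines of a failed host are simply re-placed on whichever surviving hosts have spare capacity:

# Conceptual sketch of a pooled host cluster with failover; not VMware code.
# Hosts, capacities and virtual machines are invented for illustration.

hosts = {                      # spare RAM (GB) and running VMs per host
    "host-a": {"free_ram": 32, "vms": ["mail", "intranet"]},
    "host-b": {"free_ram": 48, "vms": ["crm"]},
    "host-c": {"free_ram": 16, "vms": ["file-server"]},
}
vm_ram = {"mail": 8, "intranet": 4, "crm": 16, "file-server": 8}

def fail_over(failed_host: str) -> None:
    """Re-place the failed host's VMs on the surviving hosts with the most free RAM."""
    orphans = hosts.pop(failed_host)["vms"]
    for vm in orphans:
        target = max(hosts, key=lambda h: hosts[h]["free_ram"])
        if hosts[target]["free_ram"] < vm_ram[vm]:
            raise RuntimeError(f"No spare capacity left for {vm}")
        hosts[target]["vms"].append(vm)
        hosts[target]["free_ram"] -= vm_ram[vm]

fail_over("host-a")            # users of 'mail' and 'intranet' carry on on other hosts
print(hosts)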

Moreover, data is stored on central, virtualised storage so that it is accessible from different sources and does not get lost during system malfunctions, making business recovery faster and easier.

Guided by the expertise of Best Practice and with the help of virtualisation products that suit individual needs and goals, business resilience and continuity planning will not only come more easily but also produce more effective results, allowing organisations to deliver their services and carry out their operations without fear of interruption, inefficiency or data loss.

 

Pete Canavan, Head of Service Transition

 

This article is in April’s issue of Contingency Today, and is also online at: http://www.contingencytoday.com/online_article/Best-Practice-and-Virtualisation/2242