Posts Tagged ‘IT trends’

The Post-ApocOlympic IT scenario: scalability, mobility and security

July 10, 2012

As organisations of all types and sizes prepare themselves for the Olympics as best they can, there is still a lot of uncertainty not only about what will happen during the summer event, but also about what to expect from the aftermath.

Uncertain forecasts

The latest post-Jubilee figures, issued by Visit England, show that the Queen’s Diamond Jubilee celebrations brought an estimated £700m boost to the UK economy, a figure based on four million people taking overnight trips and spending an average of £175 each. With the Olympics expected to attract an even bigger crowd to London for two weeks or possibly more, it is difficult to foresee the effect on UK businesses during the Games, let alone afterwards.

The Bank of England believes that the struggling UK economy will receive a boost that could spell the end of the double-dip recession, with output expected to be around 0.2% higher in the third quarter than it otherwise would have been. But others are not so optimistic. Citigroup research based on data from ten Olympics held between 1964 and 2008 shows that growth tends to rise in the six months before the Games, but that this is then followed by six months of much weaker growth, which can start even before the Games begin.

How are companies preparing?

With so much uncertainty, organisations aren’t really sure how to prepare for all eventualities. Their business might increase greatly during the Olympics, creating a need for more staff, a stronger IT infrastructure and greater IT support to deal with the growth in demand; but they also need a level of scalability that enables them to return to their previous size afterwards, or to accommodate any long-term changes if their business finds itself permanently altered. A flexible and scalable IT system and IT support service are vital to keep companies working in a cost-efficient way.

This need for scalability and flexibility has also pushed organisations to try new ways of working, such as mobile and home working, allowing individuals to work around the disruption of the summer events and reducing the need to travel into potentially congested areas.

The post-Olympics scenario

After trying mobile and home working during the Olympics, forward-thinking UK businesses might decide they want to adopt this as part of their longer-term IT strategy, finding it a cheaper, more efficient solution that allows them to scale up and down more easily. They will embrace desktop virtualisation to allow employees to work from their own PCs and laptops, and design BYOD (Bring Your Own Device) policies to use tablets and smartphones for work purposes.

This might be the start of a revolution. With the upcoming Windows 8 able to run on tablets, these devices will become more powerful and users will be able to do more with them, such as accessing their familiar Office applications, which at the moment is not always possible. These touchscreen devices could replace mice and desktop PCs, and as users move towards using a single device, it might well be that they will only be using tablets in a few years’ time.

However, right now the tablet doesn’t meet most people’s requirements as an everyday work device: its screen is too small, on-screen keyboards are not as accurate as a standard keyboard, and switching quickly between multiple applications is awkward. It will probably be a while before tablets replace desktop PCs, but they are already starting to replace laptops for tasks such as working on the go, sales visits and client presentations.

New issues

With this new way of working, hardware is not a problem anymore – employees can use their own PC, laptop or tablet, or the company might just set a budget and let the employee choose which device to purchase. The problem, in this scenario, is data.

The data saved, transmitted and processed on employees’ devices is part of the organisation’s intellectual property and therefore has great value. How do you make sure that it is secure, managed appropriately and stored in a safe place? Even if virtual desktops allow users to work from their home PCs, you cannot be sure that they don’t store data on their own machines. And when cloud services are used, where is the company’s data kept? Is it stored in a data centre in another country, where different laws apply with regard to data security and access? People are using the cloud because it is cheap and easy, but it is often not secure enough; you need to wrap something around it to make it more secure.

Companies need to adopt appropriate security measures, such as network access control, strong document management policies and robust encryption technologies, so that even if data is stolen or accessed by unauthorised people, it cannot be read.
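
By way of illustration only, the short sketch below shows what “wrapping something around” the data might look like in practice, using Python’s widely available cryptography library. The file names and the key handling are assumptions chosen for the example, not a recommendation of any particular product or process.

    # Minimal sketch: encrypting a sensitive file so a stolen copy is unreadable.
    # Requires the third-party "cryptography" package; file names and key storage
    # here are illustrative assumptions only.
    from cryptography.fernet import Fernet

    # In a real deployment the key would live in a central key-management system,
    # never alongside the data on the employee's own device.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    with open("client_list.csv", "rb") as f:
        plaintext = f.read()

    encrypted = cipher.encrypt(plaintext)

    with open("client_list.csv.enc", "wb") as f:
        f.write(encrypted)

    # Only a holder of the key can recover the original data.
    assert cipher.decrypt(encrypted) == plaintext

The detail will differ from one toolset to another, but the principle is the point: it is the data, not the device, that gets protected.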

A new, post-Olympics culture

Working from home PCs, tablets and smartphones is a big cultural shift for many, and has to be supported by other kinds of behavioural change. All the security tools and policies in the world are useless without the appropriate security training; human error is the leading cause of data security breaches, and if people don’t understand why they have to follow a security measure that adds time to their work, they will circumvent it.

So, as organisations evolve and adapt to more flexible ways of working, they shouldn’t forget the data. Hardware can be replaced, but can they afford to lose their client list to their competitors? Organisations must make users aware of the responsibility that comes with this new-found freedom. Users, and not just the IT department, are now custodians of the data and responsible for its security, and organisations have an obligation to make them aware of this. Data security should be included in everyone’s induction training and the promotion of good practice should be a continuous feature.

With the Olympics and technology innovations pushing companies towards more flexible ways of working, the revolution may be coming sooner than we think. But it is important to understand that everyone needs to be ready, not just the IT department, in order for it to take place without the company incurring a new risk that may outweigh all the benefits.

David Tuck, Principal Consultant 

This article can be found in the July/August edition of London Business Matters (page 40):

http://www.londonbusinessmatters.co.uk/archive/2012-07/index.html

ITIL V3 – should you bother?

November 24, 2010

With the retirement of version 2 of ITIL, the Information Technology Infrastructure Library, organisations across all sectors are considering the implications of this change and whether they should think about a possible move to version 3. A recurring question concerns not just the value of moving towards a V3-aligned approach, but also the overall value of the ITIL discipline itself.

There are many doubts regarding this Good Practice framework, despite it being one of the most widely adopted worldwide, and it is not only CEOs and financial directors who question its effectiveness, ROI and ability to deliver; even many CIOs, IT directors and, unfortunately, in some instances service management professionals themselves have started to look at ITIL with scepticism.

In this current climate of austerity, organisations are being extra cautious regarding their spending. This is leading both those who are considering the step up from V2 and those considering whether to start on the service management journey to wonder: what can V3 possibly add, and isn’t ITIL overrated anyway?

Let’s take the last question first. As with a lot of challenges within business, rather than deciding on a solution and then trying to relate everything back to it, look at what the overall objective is and which issues need to be resolved. ITIL, whichever version you choose, is not a panacea. It won’t fix everything, but it may be able to help if you take a pragmatic and realistic approach to activities.

ITIL’s approach to implementation in the early days was described as “adopt and adapt” – an approach that still rings true even with V3. However, this appears to have fallen out of the vocabulary recently. Adopting all processes regardless of their relevance to the business and following them religiously will not add any value. Nor will implementing them without ensuring that there is awareness and buy-in across the organisation, treating implementation as a one-off project rather than a continuously evolving process, or expecting the discipline to work on its own without positioning it alongside the existing behaviours, culture, processes and structure of the organisation.

ITIL’s contribution to an organisation is akin to raising children, where one asks oneself: is it nature or nurture that creates well-rounded individuals, and which parenting skills work best? You need to find the most compatible match, one that will in part depend on what that particular business wants from a Best Practice framework and whether it really understands how the framework works. Does it want to be told what to do, or to find out what works, what doesn’t and why, so it can learn from it?

All activities in a Best Practice framework have to be carefully selected and tailored in order to create value. Moreover, adoption of tools and processes must be supported by appropriate education and awareness sessions, so that all involved staff, including senior management, fully understand their purpose, usefulness and benefits and will therefore collaborate in producing successful results.

The other question raised by many organisations is: why should I move to V3 – isn’t V2 perfectly fine? It is hard to come up with a perfect answer, as there are a number of factors to take into consideration, but in part it comes back to what the overall objective was for the business. Looking at the move from V2 to V3 as an evolution, a number of the key principles expanded on in V3 already exist within V2, so there will be some organisations for whom the expanded areas relating to IT strategy and service transition are not core to their IT operation. However, the separation of request fulfilment from incident management and the focus on event management may lead an organisation to alter the way it deals with the day-to-day activity triggers into the IT department.

My personal view is that anything that helps organisations to communicate more effectively is a benefit. V3 provides more suggestions that can help with these objectives, as well as helping the IT department to operate with more of a service-oriented approach, again something that can help bridge the language gap between technology and the business. V3 provides a lifecycle approach to IT service, recommending continual review and improvement at an organisational level.

So, is V3 essential if you have already successfully adopted and adapted V2? For organisations that do not require maximum IT efficiency because IT is not strategic, V2 is probably enough to keep them doing well. For those that, instead, gain real competitive advantage from efficient IT, any improvement that can make their business outperform others in the market is one worth embracing.

As for all the organisations in the middle, a move to V3 is probably not essential in the immediate future. However, as publications and examinations are updated to match the latest version, and as the way their suppliers provide service changes, it will soon become necessary in order to stay up to date and, in turn, competitive within the market.

Samantha-Jane Scales, Senior Service Management consultant and ITSM Portal columnist

Find the column on ITSM Portal:  http://www.itsmportal.com/columns/itil-v3-%E2%80%93-should-you-bother

Does the future of business mobile computing lie in hybrid tablet devices?

September 28, 2010

As a legion of hybrid laptop/tablet devices is launched onto the market, riding the wave of the trendy but not-so-business-friendly iPad whilst trying to overcome its limitations in a bid to conquer a place in the corporate world, a few thoughts come to mind as a reflection on the future of business mobile computing.

Tablets in their pure and hybrid forms have been around for several years, but it is only recently that they have reached some sort of success thanks to the right marketing, targeting and perhaps timing. Perhaps they could only be accepted as the natural successor to smartphones and e-readers, and had to hit the gadget-thirsty consumer market before they could be introduced in a corporate environment.

However, tablets like the iPad aren’t specifically built for work. Apart from the security issues that are still to be fully assessed, there are some technical aspects which make this device unfit for business in its present form. Its touch-screen technology is not ideal for writing long documents and emails, for instance, and the attachable keyboard is an extra expense and an extra item to carry around, making it less convenient than a normal laptop. Another issue is that the screen does not stand on its own. To write, the user has to hold the device with one hand, leaving only one hand free to type, or place it flat on a surface or their lap, an unusual position which makes it harder to compose long texts. A holder can be purchased, but at an extra cost. It must be said that consumers of mobile computing are not eager to carry around extra detachable parts; that is what mobile computing is all about – compact and lightweight devices that give access to resources from different places or while travelling.

The latest hybrids launched on the market try to overcome these issues, for instance by keeping the laptop’s two-piece, foldable, all-in-one form and merging it with the touchscreen concept introduced by tablets. Dell, for example, is launching a 10-inch hybrid netbook/tablet device whose screen can be rotated to face upwards before closing the machine, so that the keyboard disappears but the screen remains on the upper surface, making it appear exactly like a tablet. Toshiba’s Libretto, meanwhile, is an even smaller device (7 inches) that looks like a normal mini-netbook but is composed of two touch-screens. One screen can be used for input and the other for displaying information, but they can also be used together as a double screen, for example to read a newspaper the ‘traditional’ way.

Although both hybrids show an effort to meet market requirements for a marketable device – small and fast, easy to carry around, one-piece, foldable, able to stand on its own, touchscreen – this still doesn’t make them ideal and safe for work. They may become popular among a niche of professionals to whom the design and some of the functionality appeal, but it is highly unlikely that they will replace traditional laptops in the IT department or in organisations where IT needs to be efficient and extremely secure.

First of all, the capacity and speed of these devices are limited, and so is the screen size. Furthermore, although touch-screen technology may well become the way forward at some point in the future, at the moment it is not advanced enough to be better than a traditional keyboard. When typing on a touchscreen there is no tactile feedback at the fingertips, so users have to keep glancing at their fingers to be sure they are hitting the right keys. Finally, the risk with a ‘cool’ device is that it is an easy target for theft, which can represent a risk to the business from a data protection point of view, especially if the device does not allow a sufficient level of security or has faults due to its newness.

Although the mass of tech-crazy professionals who populate organisations in all sectors are looking more and more for a one-for-all device, it is unlikely that this will become the mainstream solution. It is more likely that people will have a travel-size device for their free time or when they are on the go, a smartphone for calls and quick email checking, and a super-safe and bulky laptop for work.

The problem here will be how to access the same resources from the various devices without having to transfer and save all the documents and applications onto each of them. This could be overcome with desktop virtualisation, which makes a user’s desktop and resources reachable from any device and anywhere in the world – abroad, at home or on a train. Unfortunately this requires a reliable, strong and stable internet connection, which at present is still not available everywhere, and especially not outside homes and offices.

As for the more distant future, portable devices will probably be very different from what we are used to – as thin as a sheet of paper, with touchscreen technology far more advanced than today’s, and users will be able to roll them up and carry them in a pocket. The projected keyboard might become popular as well; although it already exists, consumers are not yet embracing this new way of inputting information, but this might change with time.

In fact, the future of computing is not determined by technological developments alone. Adoption into mainstream culture is essential, and it can only happen when consumers are ready to accept variants of what they are used to. It is only through cultural change that things can really progress into new forms, and it is through the choices and preferences of the new consumer/professional that the future of mobile computing will ultimately be determined.

Will Rodbard, Senior Consultant

Find this article on Business Computing World: http://www.businesscomputingworld.co.uk/does-the-future-of-business-mobile-computing-lie-in-hybrid-tablet-devices/

The perils of commoditising IT Support

September 2, 2010

The term ‘commoditisation’ seems to rear its head whenever there is a perceived trend for technology to become standardised and, however unlikely it is to become prevalent, there are often many positives that can be identified from its methods. After all, standardisation should mean technology becomes more affordable and reliable in the first instance and easier and cheaper to support once implemented. However, when this trend spills over into IT Support and Service Delivery, then the positives become much more difficult to identify.

Its stealthy advance into the marketplace is understandable. For a large-scale, multinational provider of IT Support, being able to implement ‘off-the-shelf’ models means quicker turnaround and less upkeep once the service is underway. It is also easier to market – do you want the gold, silver or bronze package, sir?

In fact, not only is it easier; due to the sheer size of these providers and the inevitable lack of mobility this brings, it is often the only type of solution they can offer. It is in their collective interest to tell you that your environment, and therefore the solution they provide, is the same as that of the business next door.

The obvious problem they then face is differentiating themselves from their competitors. Better customer service? More experienced account managers? They simply care that little bit more? The spiel is varied and endless, but it never really answers the question any IT Director assessing support providers should ask: what will your service do to meet the specific needs of my individual business?

For a convincing answer to that question, it is likely you will have to turn to a smaller, niche provider of IT Support. With an ear to the ground and, in many cases, a specialism in a specific vertical or business type, they will soon debunk the myth of the ‘one-size-fits-all’ approach to IT Support.

Of course, tailoring the model is only part of delivering the right IT Support service. Not only should the set-up be right in terms of balance, but the processes used also need to be considered. Again, here lies an area fraught with danger when it comes to standardisation. Best Practice guidelines such as ITIL can undoubtedly provide many benefits, in terms of both performance and efficiency; however, simply implementing ITIL to the letter, as many providers will, is likely not only to be a waste of money but also to inhibit the service in the long run. Even ITIL, the benchmark for Best Practice in IT Support, needs tailoring to the environment in question before it truly performs to its capabilities.

Once the service is up and running, the single largest and most important component is the people that staff it. As a result of technological evolution, advancements in software and the trend towards remote fixes, there has been a cultural change in the way engineers have to work, and the skills they need to bring to the table.

With advanced software able to take care of the most common incidents, the first-line engineer will have to take on some of the responsibilities usually attributed to second-line technicians – especially as virtualised environments allow many more fixes to take place remotely. As a result, they will have to acquire the skills necessary to resolve more challenging issues, and will therefore need to keep up to date with the newest technologies and maintain a broader, if shallower, knowledge, as the most technical problems can be left to the provider to deal with remotely.

Now, many of the larger IT Support providers will no doubt claim that this standardisation of skills will lead to IT Support becoming what in economic terms can be described as a ‘perfect market’, where a broader, shallower skill-set will mean lower salaries for engineers, price wars between support providers to win tenders, and competition not only within the same city or country but extended globally, to places where the standard skills can be accessed at a lower cost.

But the problem with this argument lies in the ‘one-size-fits-all’ approach itself: the way to deal with more calls being resolvable at first line is to overload the Service Desk with these ‘commoditised’ engineers. With people always the biggest cost, this increase in headcount will inevitably lead to more cost, negating the efficiencies that the so-called ‘less expensive’ engineers were supposed to generate.

A law firm, a financial institution and a charity need IT staff with different experience, skills and even mind-sets, in line with each organisation’s environment, business culture and goals. Staffing a service desk is never as simple as matching a skill set to a CV, and a niche provider should recognise that and provide the right mix of resource to keep numbers (and therefore cost) as low as possible.

Needless to say, there are huge differences between support providers, and it is not always the case that the big boys are the wrong choice; many smaller organisations provide standard services and are unwilling to create the service that best supports each individual client. The fact is, though, that unlike software and hardware, which could potentially benefit from a degree of commoditisation, a service does not come in an out-of-the-box package. Not all businesses are the same: they all have their individual needs, goals, ethics and, indeed, technologies, and therefore need someone with the right skills and expertise to understand their unique features and design the best strategy for them, tailored to the client rather than rolled out from a standard blueprint.

It is unusual for a support provider, or indeed an individual engineer, to have experience in all sectors, hence it is essential to find someone ‘niche’ enough to really add value to the business. Sure, an organisation could save money compared with insourcing by partnering with a standard support provider, but such a provider is unlikely to deliver any real assistance in driving the business forward.

Organisations are beginning to see IT as a vital part of the business, including it in their overall strategy and recognising its place as the number one tool for business success. An IT Support provider that can understand the particular needs, aims and environment of the organisation in question and be part of their business strategy is able to create business value simply by bucking the trend for standardisation.

Richard Forkan, Director

10 reasons to migrate to Exchange 2010

July 29, 2010

A Plan-Net survey found that 87% of organisations are currently using Exchange 2003 or earlier. There has been a reluctance to adopt the 2007 version, often considered to be the ‘Vista’ of the server platform – faulty and dispensable. But an upgrade to a modern, improved version is now becoming crucial: standard support for the 2003 version ended over a year ago and much technological progress has been made since then. It seems that unconvinced organisations need some good reasons to move from their well-known but obsolete system to the new and improved 2010 version, where business continuity and resilience are easier to obtain and virtualisation can be embraced, with all the benefits that follow.

Here are 10 reasons your organisation should migrate to Exchange 2010:

1- Continuous replication

International research shows that companies lose £10,000/$10,000 an hour to email downtime. This version of Exchange enables continuous replication of data, which can minimise disruptions dramatically and spare organisations such losses. Moreover, Microsoft reckons the costs of deploying Exchange 2010 can be recouped within six months thanks to the improvements in business continuity and resilience.
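
To put those figures into context, here is a rough, purely illustrative calculation; only the £10,000-an-hour cost comes from the research above, while the downtime hours and the project cost are assumptions chosen for the example.

    # Illustrative only: every figure except the £10,000/hour downtime cost is an assumption.
    cost_per_hour = 10_000       # cited cost of email downtime (GBP)
    downtime_before = 24         # assumed hours of email downtime per year today
    downtime_after = 4           # assumed hours per year with continuous replication
    deployment_cost = 100_000    # assumed cost of the Exchange 2010 project (GBP)

    annual_saving = (downtime_before - downtime_after) * cost_per_hour
    months_to_recoup = deployment_cost / (annual_saving / 12)

    print(f"Annual downtime saving: £{annual_saving:,}")            # £200,000
    print(f"Months to recoup deployment: {months_to_recoup:.1f}")   # 6.0

Your own numbers will differ, but the arithmetic shows how quickly avoided downtime can pay back a deployment.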

2- Allows Virtualisation

It supports virtualisation, allowing consolidation. Server virtualisation is not only a cost cutter, reducing expenditure on maintenance, support staff, power, cooling and space; it also improves business continuity, because when one host fails its virtual machines can be brought up on another with little or no downtime.

3- Cost savings on storage

Exchange 2010 has, according to Microsoft, 70% less disk I/O (input/output) than Exchange 2007. For this reason, the firm recommends moving away from SAN storage solutions and adopting less expensive direct-attached storage. This translates into real and significant cost savings for most businesses.

4- Larger mailboxes

Coupling the ability to use larger, slower SATA (or SAS) disks with changes to the underlying mailbox database architecture allows far larger mailbox sizes than before to become the norm.

5- Voicemail transcription

Unified Messaging, first introduced with Exchange 2007, offers the concept of the ‘universal inbox’ where email and voice mail are available from a single location and consequently accessed from any of the following clients:

  • Outlook 2007 and later
  • Outlook Web App
  • Outlook Voice Access – access from any phone
  • Windows Mobile 6.5 or later devices

A new feature in Exchange 2010, Voicemail Preview, delivers text transcripts of voicemail messages, saving the time it takes to listen to them. On receiving a voice message, the recipient can glance at the preview and decide whether it is an urgent matter. This and other improvements, such as managing voice and email from a single directory (using AD), offer organisations the opportunity to discard third-party voicemail solutions in favour of Exchange 2010.

6- Helpdesk cost reduction

Exchange 2010 offers the potential to reduce helpdesk costs by enabling users to perform common tasks which would normally require a helpdesk call. Role-Based Access Control (RBAC) allows delegation based on job function which, coupled with the web-based Exchange Control Panel (ECP), enables users to assume responsibility for distribution lists, update personal information held in AD and track messages. This reduces the call volume placed on the helpdesk, with obvious financial benefits.

7- High(er) Availability

Exchange 2010 builds upon the continuous replication technologies first introduced in Exchange 2007. The technology is far simpler to deploy than in Exchange 2007, as the complexities of a cluster install are taken away from the administrator. It incorporates easily with existing Mailbox servers and offers protection at the database level – with Database Availability Groups – rather than at the server level. By supporting automatic failover, this feature allows faster recovery times than before.

8- Native archiving

A large hole in previous Exchange offerings was the lack of a native managed archive solution. This saw either the proliferation of un-managed PSTs or the expense of deploying third-party solutions. With the advent of Exchange 2010 – and in particular the upcoming arrival of SP1 this year – a basic archiving suite is now available out-of-the-box.

9- Can be run on-premise or in the cloud

Exchange 2010 offers organisations the option to run Exchange ‘on-premise’ or in the ‘cloud’. This approach even allows organisations to run some mailboxes in the cloud and some on locally held Exchange resources, letting companies take advantage of very competitive rates for mailbox provision from cloud providers where appropriate, while deciding how much control to relinquish by still hosting most mailboxes on local servers.

10- Easier calendar sharing

With Federation in Exchange 2010, employees can share calendars and distribution lists with external recipients more easily; in fact, it allows them to schedule meetings with partners and customers as if they belonged to the same organisation. Whilst this might not appeal to most organisations, those investing in collaboration technologies will see the value Exchange 2010 offers.

Taking the leap

Due to the uncertain economy, many organisations are wary of investing their tight budgets in projects deemed non-essential. However, if they follow the ‘more with less’ rule and invest in some good service management for their IT Service Desk, the resulting cost savings will free resources that can be invested in this type of asset. The adoption of Exchange 2010, in turn, will allow end users to make more efficient use of IT and help the service desk run more smoothly, creating a cycle of reciprocal benefits.

Keith Smith, Senior Consultant

This article is featured on Tech Republic:  http://blogs.techrepublic.com.com/10things/?p=1681&tag=leftCol;post-1681

Are you Off-Sure about your IT Service Desk?

July 15, 2010

No matter the economic climate, or indeed the industry in which they operate, organisations are constantly seeking to lower the cost of IT while also trying to improve performance. The problem is that it can often seem impossible to achieve one without compromising the other, and in most cases cost cutting will take precedence, leading to a dip in service levels.

When things get tough the popularity of off-shoring inevitably increases, leading many decision-makers to consider sending the IT Service Desk off to India, China or Chile as a convenient solution financially – low-cost labour for high-level skills is how offshore service providers are advertising the service.

In reality, things are not so straightforward. The primary reason for off-shoring is to reduce costs, but according to experts average cost savings tend to lie between only 10% and 15%; what is more, additional costs can be created – research shows, in fact, that costs can in some cases increase by 25%.

Hidden costs, cultural differences and low customer and user satisfaction are reasons which have made nearly 40% of UK companies surveyed by the NCC Evaluation Centre change their mind and either reverse the move – a phenomenon known as ‘back-shoring’ or ‘reverse off-shoring’ – or think about doing so in the near future. Once an organisation decides to reverse the decision, however, the process is not trouble-free. Of those who have taken services back in-house, 30% say they have found it ‘difficult’ and nearly half, 49%, ‘moderately difficult’. Disruptions and inefficiencies often lead to business loss, loss of client base and, more importantly, a loss of reputation – it is in fact always the client and not the provider which suffers the most damage in this sense.

Data security is another great concern in off-shoring. An ITV news programme recently uncovered a market for data stolen at offshore service providers: bank details and medical information could easily be bought for only a few pounds, often directly from call centre workers. Of course, information security breaches can happen in-house too, caused by internal staff; however, with off-shoring the risk is increased by the distance and by the different culture and laws that apply abroad.

Not a decision to be taken lightly, then. Organisations should realise that the IT Service Desk is a vital business tool and while outsourcing has its advantages, if they do it by off-shoring they are placing the face of their IT system on the other side of the planet, and in the hands of a provider that might not have the same business culture, ethics and regulations as they do.

So before thinking about off-shoring part or all of the IT department, organisations would be wise to take the time to think about why their IT is so expensive and what they could do to improve it, cutting down on costs without affecting quality, efficiency and security, and without even having to move it from its existing location.

Here are some measures organisations could take in order to improve efficiency in the IT Service Desk while at the same time reducing costs:

Best practice implementation

Adoption of Best Practice is designed to make operations faster and more efficient, reducing downtime and preserving business continuity. The most common Best Practice in the UK is ITIL (Information Technology Infrastructure Library) which is divided into different disciplines – Change Management, Risk Management, Incident Management to name but a few.

ITIL processes can be seen as a guide to help organisations plan the most efficient routes when dealing with different types of issues, from everyday standard operations and common incidents up to rarer events and even emergencies.

Whilst incident management seems to be easily recognised as a useful tool, other applications of ITIL are unfairly seen by many as a nice to have. But implementing best practice processes to deal with change management, for example, is particularly important: if changes are carried out in a random way they can cause disruptions and inefficiencies, and when a user cannot access resources or has limited use of important tools to carry out their work, business loss can occur – and not without cost.

Every minute of downtime is a minute of unpaid work, but costs can also extend to customer relationship and perhaps loss of client base if the inefficiencies are frequent or very severe.

Realignment of roles within the Service Desk

With Best Practice in place, attention turns to the set-up of resources on the Service Desk. A survey conducted by Plan-Net showed that the average IT Service Desk is composed of 35% first-line analysts, 48% second-line and 17% third-line. According to Gartner statistics, the average first-line fix costs between £7 and £25, whereas second-line fixes normally vary from £24 to £170. Second- and third-line technicians have more specific skills, so their salaries are much higher than those of first-line engineers; however, most incidents do not require such specific skills or even a physical presence.

An efficient Service Desk will be able to resolve 70% of its calls remotely at first line, reducing the need for face-to-face interventions by second-line engineers. The perception of many within IT is that users prefer a face-to-face approach to a phone call or interaction with a machine, but in reality the culture is starting to change as efficiency acquires more importance within the business. With a second-line fix costing up to 600% more, it is better to invest in a Service Desk that hits a 70% first-time fix rate: users will for the most part be satisfied that their issues are fixed promptly, and the business will go a long way towards achieving the holy grail of reduced costs and improved performance simultaneously.

A recent survey carried out by Forrester for TeamQuest Corporation found that 50% of organisations normally use two to five people to resolve a performance issue, and that 35% of participants are unable to resolve up to 75% of their application performance issues within 24 hours. Once you calculate the cost as the number of staff involved multiplied by the number of hours taken to fix the incident, it is not difficult to see where the costly problem lies, as the rough calculation below illustrates. An efficient solution will allow IT to do more with fewer people, and faster.
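
As a purely illustrative sketch, the calculation below takes the midpoints of the Gartner cost ranges quoted above and an assumed volume of 1,000 incidents a month, and compares a low first-time fix rate of the kind some contracts guarantee with the 70% target; the volume and the two rates are assumptions for the example only.

    # Rough comparison of blended support cost at different first-time fix rates.
    # Uses midpoints of the Gartner ranges quoted above (£7-£25 first line,
    # £24-£170 second line); the monthly call volume is an assumption.
    first_line_cost = (7 + 25) / 2       # £16 midpoint per first-line fix
    second_line_cost = (24 + 170) / 2    # £97 midpoint per second-line fix
    calls_per_month = 1000               # assumed monthly incident volume

    def monthly_cost(first_time_fix_rate):
        first = calls_per_month * first_time_fix_rate * first_line_cost
        second = calls_per_month * (1 - first_time_fix_rate) * second_line_cost
        return first + second

    for rate in (0.30, 0.70):
        print(f"{rate:.0%} first-time fix: £{monthly_cost(rate):,.0f} per month")
    # 30% first-time fix: £72,700 per month
    # 70% first-time fix: £40,300 per month

Whatever the exact figures, the shape of the result is the same: pushing fixes to the first line is where the money is.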

Upskilling and Service Management toolset selection

Statistics show that the wider adoption of Best Practice processes and the arrival of new technologies are causing realignments of roles within the Service Desk. In many cases this also involves changes to the roles themselves, as the increased use of automated tools and virtualised solutions means more complex fixes can be conducted remotely and at first line. As this happens, first-line engineers will be required to have a broader knowledge base and be able to deal with more issues without passing them on.

With all these advancements leading to a Service Desk that requires less resource (and therefore commands less cost) while driving up fix rates and thereby reducing downtime, it seems less and less sensible for organisations to accept off-shore outsourcing contracts with Service Level Agreements (SLAs) that guarantee a first-time fix rate of as little as 20% or 30% for a diminished price. It seems the popularity of such models lies only in organisations not being aware that quality and efficiency are something they can indeed afford – without the risk of off-shoring.

The adoption of a better toolset and the upskilling of first-line analysts, especially through ITIL-related training, will help cut costs and undoubtedly improve service levels. However, while it will also remove the need for a large number of personnel, especially at the higher levels, the issues of finding, recruiting and training resource will still involve all the traditional headaches IT Managers have always faced. With this in mind it can often be prudent to engage with a service provider and have a co-sourced or managed desk that remains in-house and under internal management control. Personnel selected by an expert provider will have all the up-to-date skills necessary for the roles required, and only the exact number needed will be provided, while none of the risks associated with wholesale outsourcing, or worse, off-shoring, are taken.

Improving IT infrastructure and enhancing security

Improving efficiency in IT does not begin and end with the Service Desk, of course. The platform on which your organisation sits, the IT infrastructure itself, is of equal importance in terms of both cost and performance – and, crucially, is something that cannot be influenced by off-shoring. For example, investing in server virtualisation can deliver substantial cost savings in the medium to long term. Primarily this arises from energy savings, but costs can also be cut in relation to space and the building and maintenance of physical servers, not to mention the added green credentials. Increased business continuity is another advantage: virtualisation can minimise disruptions and inefficiencies, thereby reducing downtime – probably the quickest way to make this aspect of IT more efficient in the short, medium and long term.

Alongside the myriad of new technologies aimed squarely at improving efficiency and performance sits the issue of Information Security. With Data Protection law getting tougher under the new 2010 regulations, which force private companies to declare any breaches to the Information Commissioner, who has the right to make them public and to impose fines of up to £500,000, security is becoming more of an unavoidable cost than ever. Increased awareness is needed across the entire organisation, as data security is not only the concern of the IT department but applies to all personnel at all levels. The first step in the right direction is a thorough security review and gap analysis to assess compliance with ISO 27001 standards and identify any weak points where a breach could occur. Workshops are then needed to train non-IT staff in how to handle data protection. Management participation is particularly important in order to get the message across that data safety is vital to an organisation.

Taking a holistic view of IT

Whatever the area of IT under scrutiny, the use of external consultancies and service providers to provide assistance is often essential. That said, it is rare to find an occasion where moving IT away from the heart of the business results in improvements. The crucial element to consider then is balance. Many organisations, as predicted by Gartner at the beginning of this year, are investing in operational rather than capital expenditure as they begin to understand that adoption of the latest tools and assets is useless without a holistic view of IT. When taking this methodology and applying it to the Service Desk it soon becomes apparent that simply by applying a Best Practice approach to an internal desk and utilising the new technologies at your disposal, the quick-fix cost benefits of off-shoring soon become untenable.

Pete Canavan, Head of Support Services

This article is featured in the current issue of ServiceTalk