Posts Tagged ‘IT best practice’

Saving ITIL – how to protect the reputation of Best Practice frameworks

October 12, 2010

Since the news emerged that the Office of Government Commerce (OGC) had stated, in a report by the Office of Public Sector Information (OPSI), that it had ‘no policy remit’ to produce and develop the Information Technology Infrastructure Library (ITIL) methodology, various articles and blogs have declared the ‘death of ITIL’, or at least of the discipline as we know it.

This has been interpreted by some as an intention to drop official support due to lack of interest, since ITIL is admittedly not one of the OGC’s core responsibilities. Critics believe the move will make ITIL an even more lucrative money machine for vendors and service providers which may end in self-sabotage. Most opponents have focused their editorials on the consequences of this move on the Best Practice framework’s quality and credibility, or have taken this as an occasion to declare that ITIL is already overrated and over-praised.

Those who welcome the change, instead, believe it would be a good thing for ITIL to be free, open and available to all. However, there seems to be little analysis of what the choice made by the OGC might mean, the pros and cons of a liberated Best Practice framework and, ultimately, hardly any propositions on how to save the framework’s reputation.

Taking into account such pros and cons, it is difficult to form a clear opinion on the situation. There can certainly be benefits in liberating a framework – for instance, it creates an opportunity for professionals to provide recommendations and contribute ideas and innovations drawn from their personal experience. They can interact more fully with the discipline, allowing it to grow, improve and change with the market and the various business environments it operates in.

But labels like ‘ITIL’ – which have become brand names – are often used as a sales tool for books, memos and software, and making the discipline even more commercial risks eroding its authority. Take Neuro-Linguistic Programming (NLP) as an example. Because there is no regulation, people are free to call themselves NLP practitioners even though they are recognised only within their own training company, and their methodology may differ from that of practitioners trained elsewhere. There is no official recognition of what constitutes good and bad practice in NLP, so it may not be seen as a discipline one can rely on by itself.

So if any consultancy, training company, book author or software vendor were able to claim that their product or service is ‘ITIL aligned’, when it complies only with their own version of ITIL, which might differ from another’s, it would be impossible to establish measurable quality standards that buyers can use to evaluate and choose. If you take away standardisation and consistency – if there is no strong, consistent identity and no independent body that can set standards – the framework will, in practice, cease to exist.

To reassure readers, the ambiguous OPSI report does not state that the OGC has no interest in ITIL and, in fact, the OGC still owns the copyright on the product. The information in the report might mean that the body will outsource development but retain the last word on content and the power to approve a product or service. If this is the case, then the situation might prove ideal for the reasons stated above, balancing the pros and cons in a safer scenario.

But this is not the main problem with Best Practice frameworks, it seems. An example of one that is not supported by an official body yet is still popular and widely used is the Microsoft Operations Framework (MOF). Although it is free and available to everybody, it does not appear very different from ITIL in its recognition, methodology and principles. Nevertheless, it suffers from the same issues consultants see in ITIL as it stands – a heavy emphasis on gap-fill documents and selling books rather than on delivering a thorough understanding of the processes and aims. Unless the professional who downloads the templates and fills in the gaps understands the content and context of what they are doing, the exercise has little value and probably little effect. It is apparent, then, that freeing the discipline does not solve the issues behind Best Practice frameworks – but neither does keeping control over it.

Perhaps the problem is not about ITIL being endorsed by an official body or not, but rather how to enhance the reputation and effectiveness of Best Practice frameworks. Disciplines such as ITIL and MOF need to find a way to overcome their credibility issues, cease to be mere money machines and become what they are supposed to be – guidelines for carrying out operations in the best possible way to reach efficiencies and cost savings. Only if Service Management professionals start believing in the ‘wider aims’ and practicing the discipline with a thorough understanding of what is being done, will it be possible for such frameworks to regain trust and, ultimately, to really deliver results.

Samantha-Jane Scales, Service Management consultant

Find this column on ITSM Portal: http://www.itsmportal.com/columns/saving-itil-%E2%80%93-how-protect-reputation-best-practice-frameworks


Best Practice and Virtualisation: essential tools in Business Resilience and Continuity planning

March 25, 2010

Life in Venice doesn’t stop every time it floods. People roll up their trousers, pull on their wellies and still walk to the grocer’s, go to work, grab a vino with friends. And when it’s all over they mop the floor, dry the furniture, and go back to their pre-flood life. How do they do it? They choose not to have carpet or wooden flooring, keep updated on water level and have a spare pair of boots right next to the door. This is called prevention.

When it comes to faults in IT systems – which, like floods, can be either common or rare – prevention is not better than cure: it is the cure, the only one that allows business continuity and resilience.

Complicated machinery and analysis are a thing of the past: nowadays planning is remarkably straightforward, thanks to the expertise embodied in Best Practice processes and to new technologies such as virtualisation, which can bring user downtime close to zero.

First of all, it must be noted that virtualising servers, desktops and data centres is not something that can be done overnight. Planning is needed to avoid choosing the wrong solution – one picked because it is the latest product on the market or the subject of media talk, rather than what works best for the specific needs of the business – and to prevent inefficiencies, interruptions to business services, or even data loss during the process. Best Practice, then, emerges as the essential framework within which all operations should be carried out if they are to succeed.

Any change made to the system, in fact, needs a mature level of Best Practice processes – such as the world-renowned ITIL (Information Technology Infrastructure Library) – to be in place. These processes guide organisations in planning the best route through all operations and incidents, are a key tool for avoiding wasted money and time, and improve the performance of the IT department and of the business as a whole.

Once this is sorted, you can think about going virtual. From a technical point of view, virtualisation is gaining importance in business resilience and continuity planning thanks to the progress made by new technologies. Products such as VMware’s vSphere, for example, allow what is called “live migration”: the capacity and speed of the virtual machines are treated as an aggregate rather than individually. As a consequence, not only is the load distributed more evenly, for faster, smoother operations, but whenever a machine crashes its resources are immediately accessible from another connected device, without the user even noticing and without the procedure being interrupted.

Moreover, data is stored in central virtual storage, so it is accessible from different sources and is not lost during system malfunctions, making business recovery faster and easier.

Guided by the expertise of Best Practice and helped by virtualisation products that suit individual needs and goals, business resilience and continuity planning not only becomes easier but also delivers more effective results, allowing organisations to provide their services and carry out their operations without fear of interruption, inefficiency or data loss.

Pete Canavan, Head of Service Transition

This article is in April’s issue of Contingency Today, and is also online at: http://www.contingencytoday.com/online_article/Best-Practice-and-Virtualisation/2242

5 thoughts on the IT Service Desk that need re-thinking

March 10, 2010

Slowly recovering from the crisis and keeping a more careful eye on the unsteadiness of the market, many organisations across all sectors are considering ways to make their IT Service Desk more cost-efficient – but some of the ideas decision-makers have may be partially or totally wrong.
So if you are thinking any of the following, you might want to think again:

“Our Service Desk is costing us too much. Outsourcing it to [insert favourite low-cost country abroad] can solve the problem.”

Although outsourcing has its advantages, doing it off-shore is a huge investment with many hidden costs, including losses due to inefficiencies and disruptions during the transition or caused by poor performance – bad service can damage the business. Moreover, reversing the move can be a costly, lengthy and treacherous procedure. Before considering drastic moves, organisations should try to identify the reasons their IT expenditure is so high; likely causes include inefficient management, poor skills, or obsolete tools and processes. Implementing Best Practice, using automated ITIL-compliant software and updating IT skills are a first step towards efficiency; a more cost-effective outsourcing option, however, could be handing management of the Service Desk to a service provider that can take care of service improvement on site.

“If leading companies around the world are off-shoring, it must be convenient.”

Only global organisations seem to gain great benefits from off-shoring their IT department – for them it is often the sole way to reduce otherwise enormous spending. Just because many important organisations are doing it does not mean it is suitable for all. For example, there are significant cultural differences which may not be an issue for organisations with offices and clients spread worldwide, already used to a mixture of cultures, but which can certainly cause problems for a largely European company with a particular business mindset. Another issue is cost: many organisations find that, after the considerable initial investment, savings might not exceed 10%; what is more, the new facility sometimes creates unforeseen extra costs, actually increasing expenditure.

“Our system has always worked; I don’t see why we should change it.”

Technology is changing regardless of one’s eagerness, and it is important to keep up with the changing demands of the market in order to remain competitive. A certain system might have worked five years ago, but new technologies and procedures can make older ones obsolete and comparatively inefficient. Take server virtualisation for example: business continuity can reach astonishing levels thanks to live migration, guaranteeing a better service with the extra benefit of energy saving through consolidation. Adoption of ITIL Best Practice processes also helps increase efficiency not only in the Service Desk, but in the business as a whole. Thanks to its implementation, organisations can save time and money and enhance the smoothness and quality of all IT-reliant operations, which helps the entire business.

“We need more 2nd and 3rd line engineers.”

When problems need more second- and third-line resolution, it probably means the first line is not efficient enough. Automated software can handle simple incidents, and software as a service managed by an external provider can take care of the most complex ones, so part of a first-line engineer’s work and the whole of the third-line workload need no longer burden the organisation’s IT staff. The remaining incidents, however, still need more efficient resolution at first-line level: the more incidents are resolved there, the less need there is to increase the number of more expensive second-line staff. To improve the first-line fix rate, engineers need training in Best Practice processes that make incident resolution fast and effective, as well as helping the organisation deal with change and prevent risks connected to data security.

“I’d rather we managed our IT ourselves – control is key.”

An organisation might be proficient in its own field, yet find it difficult to manage its IT Service Desk as effectively. When cost-efficiency matters, it is best to leave one’s ego at the door and let experts do the job. The IT arena is constantly changing, continuous training and updating are necessary to keep up with market standards, and an organisation often cannot afford to invest in constant innovation of its IT. If outsourcing, on- or off-shore, gives organisations the impression of losing control, then managed services are a better solution: the existing team and tools, or new and improved ones, can be managed by a service provider directly on the office premises if needed. Organisations can then focus on the more important parts of their business, leaving IT to the techies while still keeping an eye on it.

Adrian Polley, CEO

Find this article online at Fresh Business Thinking: http://www.freshbusinessthinking.com/business_advice.php?CID=&AID=5004&Title=5+Thoughts+On+The+IT+Service+Desk+That+Need+Re-Thinking