Archive for the ‘IT Infrastructure’ Category

What is the impact of the Cloud on the existing IT environment?

March 10, 2011

As organisations look to embrace the cost-efficiency opportunities deriving from new technologies and services, there is a lot of talk about the benefits, risks and possible ROI of the blanket concept of ‘Cloud computing’. However, it is still unclear how using Cloud services will affect the existing network infrastructure and what impact it can have on IT support roles and the way end users deal with incidents.

The effect on an organisation’s infrastructure depends on the Cloud model adopted, which may vary based on company size. For example, small organisations which are less worried about owning IT and have simpler, more generic IT needs might want to buy a large part of their infrastructure as a shared service, purchasing on-demand software from different vendors.

Buying into the Software as a Service model has the benefit of being simple and inexpensive, as it removes much of the responsibility and cost involved – a great advantage for SMEs. It also provides 24/7 service availability, something small firms might not otherwise be able to afford. The relative inflexibility of the service, since the software cannot easily be customised, is less of a problem for these types of organisation, but there are still risks around performance and vendor lock-in.

Using this model, a small company’s retained IT infrastructure can be relatively simple, and therefore there might be little need for specialist technical skills within the IT department. A new skill set is required, however: IT personnel will need to be able to manage all the different relationships with the various vendors, checking that the service purchased is performing as agreed and that they are getting the quality they are paying for. The IT Service Desk will therefore require a smaller number of engineers, less technical but more commercially savvy. More specialist skills will shift towards the provider and 1st line analysts will have to escalate more calls to the various vendors.

Larger organisations, on the other hand, may well be keen on retaining more control over their infrastructure and purchasing IT resources on-demand. With this model, the organisation still manages much of its infrastructure, but at a virtual level – the vendor might provide it with hardware resources on-demand, for instance. The main advantage of this model is that it makes the existing infrastructure incredibly flexible and scalable, able to adapt to changing business needs. For example, building a data centre is lengthy and expensive, and therefore a poor route to expansion. By using a “virtual datacentre” provider, however, capacity can be increased in the space of an online card transaction, with great financial benefits – in the Cloud model only the necessary resources are paid for, without investment in hardware or its maintenance.
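To make the ‘pay only for what you use’ point concrete, here is a minimal back-of-the-envelope sketch in Python. The prices, server count and the one-month spike are hypothetical assumptions, there purely to show the shape of the comparison rather than any real provider’s rates.

```python
# Minimal sketch: pay-per-use vs. owned capacity (all figures hypothetical).
# Compares the cost of meeting a short-lived demand spike by renting virtual
# servers on demand against buying and maintaining physical servers.

def on_demand_cost(extra_servers: int, hours: int, rate_per_server_hour: float) -> float:
    """Cost of renting extra capacity only for the hours it is needed."""
    return extra_servers * hours * rate_per_server_hour

def owned_cost(extra_servers: int, purchase_price: float, annual_maintenance: float) -> float:
    """Up-front cost of buying the same capacity and running it for a year."""
    return extra_servers * (purchase_price + annual_maintenance)

if __name__ == "__main__":
    servers, spike_hours = 10, 720          # a one-month busy period
    rented = on_demand_cost(servers, spike_hours, rate_per_server_hour=0.50)
    bought = owned_cost(servers, purchase_price=3000, annual_maintenance=600)
    print(f"On-demand for the spike: £{rented:,.0f}")
    print(f"Purchased outright:      £{bought:,.0f}")
```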

With this second model, the change in roles within the IT department will mainly concern an increased need, as in the other model, for vendor management skills. Monitoring KPIs, SLAs and billing will be a day-to-day task, although there will still be a need for engineers to deal with both the physical and the virtual infrastructure.

Both models generally have very little impact on the end user if the IT Service Desk has been running efficiently, as it does not disappear as the first point of contact. However, in certain cases the change might be more visible, for instance if desk-side support is eliminated – a cultural change that may need some adapting to.

All in all, change is not to be feared – with the necessary awareness, embracing Cloud services can improve IT efficiency significantly and align it with the business. By leaving some IT support and management issues to expert providers, an organisation can gain major strategic advantage, saving money and time that it can ultimately put towards business success.

 

Adrian Polley, Technical Services Director

This article appears on the National Outsourcing Association’s (NOA) online publication Sourcing Focus: http://www.sourcingfocus.com/index.php/site/featuresitem/3318/


10 things we learnt in 2010 that can help make 2011 better

December 23, 2010

This is the end of a tough year for many organisations across all sectors. We found ourselves snowed-in last winter, were stuck abroad due to a volcano eruption in spring, suffered from the announcement of a tightened budget in summer, and had to start making drastic cost-saving plans following the Comprehensive Spending Review in autumn. Data security breaches and issues with unreliable service providers have also populated the press.

Somehow the majority of us have managed to survive all that; some better than others. As another winter approaches it is time to ask ourselves: what helped us through the hard times and what can we do better to prevent IT disruptions, data breaches and money loss in the future?

Here are some things to learn from 2010 that may help us avoid repeating errors and at the same time increase awareness of current issues, for a more efficient, productive and fruitful 2011:

1- VDI to work from home or the Maldives

Plenty of things prevented us from getting to work in 2010; natural disasters, severe weather and industrial disputes were the biggest culprits. Remote access solutions have been around for a long time, but desktop virtualisation has taken things a stage further. With a virtual desktop, you access your own complete and customised workspace when out of the office, with performance similar to working in the office. Provided there’s a strong and reliable connection, VDI minimises the technical need to be physically close to your IT.

2- Business continuity and resilience with server virtualisation

Server virtualisation is now mainstream, but there are plenty of organisations large and small who have yet to virtualise their server platform. When disaster strikes, those who have virtualised are at a real advantage – the ability to build an all-encompassing recovery solution when you’ve virtualised your servers is just so much easier than having to deal with individual physical kit and the applications running on them. For anyone who has yet to fully embrace the virtualisation path, it’s time to reassess that decision as you prepare for 2011.

3- Good Service Management to beat economic restrictions

With the recent economic crisis and the unstable business climate, the general message is that people should be doing more with less. It’s easy to delay capital expenditure (unless there’s a pressing need to replace something that’s broken or out of warranty), but how else to go about saving money? Surprisingly, effective Service Management can deliver significant cost-efficiencies through efficient management of processes, tools and staff. Techniques include rearranging roles within the IT Service Desk to achieve higher fix rates earlier in the support process, and adopting automated tools to deal with the most common repeat incidents, as in the sketch below. Putting proper, effective measures on the service, right down to the individuals delivering it, also helps to set expectations, monitor performance and improve the success of processes.
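By way of illustration, here is a minimal Python sketch of the kind of rules-based automation meant above. The incident categories, actions and routing are hypothetical examples; a real Service Desk tool would express these rules in its own workflow engine.

```python
# Minimal sketch of rules-based handling for common repeat incidents.
# The categories, actions and routing are hypothetical examples; a real
# Service Desk tool would map these onto its own workflow engine.

AUTO_ACTIONS = {
    "password reset":   "trigger self-service password reset and close ticket",
    "printer offline":  "restart print spooler on the named print server",
    "disk space alert": "run scheduled clean-up job and notify 2nd line if it recurs",
}

def route_incident(category: str) -> str:
    """Return the automated action for a known repeat incident,
    or queue the ticket for a 1st-line analyst if no rule matches."""
    return AUTO_ACTIONS.get(category.lower(), "queue for 1st-line analyst")

if __name__ == "__main__":
    for cat in ("Password reset", "VPN outage"):
        print(f"{cat}: {route_incident(cat)}")
```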

4- Flexible support for variable business

An unstable economic climate means that staffing may need to be reduced or increased for certain periods of time, but may need rescaling shortly afterwards. At the same time, epidemics, natural disasters and severe weather may require extra staff to cover for absences, often at the last minute. Not all organisations, however, can afford a ‘floating’ team paid to be available in case of need, or can source contractors easily and rapidly. An IT Support provider that offers flexibility and scalability can help minimise these kinds of disruption. Some providers maintain a team of widely skilled, multi-site engineers who can be sent to any site needing extra support and retained only for as long as required, without major contractual restrictions.

5- Look beyond the PC

Apple’s iPad captured the imagination this year. It’s seen as a “cool” device, but its success stems as much from the wide range of applications available for it as from its innate functionality. The success of the iPad is prompting organisations to look beyond the PC in delivering IT to their user base. Perhaps a more surprising story was the rise of the Amazon Kindle, which resurrected the idea of the single-function device. The Kindle is good because it’s relatively cheap, delivers well on its specific function, is easy to use and has a long battery life. As a single-function device, it’s also extremely easy to manage. Given the choice, I’d rather take on the challenge of managing and securing a fleet of Kindles than iPads, which for all their sexiness add another set of security management challenges.

6- Protecting data from people

Even a secure police environment can become the setting for a data protection breach, as Gwent Police taught us. A mistake caused by the recipient auto-complete function led an officer to send some 10,000 unencrypted criminal records to a journalist. If a data classification system had been in place – one where every document created is routinely classified at a level of sensitivity and restricted so that only authorised people can view it – the breach would not have taken place, because the information could not have been sent. We can all learn from this incident: human error will occur and there is no way to avoid it completely, so countermeasures have to be implemented up front to prevent breaches.
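As a rough sketch of how such a countermeasure might work, the Python below blocks outbound messages carrying a restricted label when the recipient is outside the organisation. The labels, domain and message shape are hypothetical; a real data classification product would enforce this at the mail client or gateway.

```python
# Minimal sketch of a classification check before an email leaves the organisation.
# The labels, domain list and example addresses are hypothetical; a real data
# classification product would enforce this at the mail gateway or client.

INTERNAL_DOMAINS = {"example-police.uk"}          # hypothetical trusted domain
BLOCKED_EXTERNALLY = {"restricted", "secret"}     # labels that must not leave

def may_send(recipient: str, classification: str) -> bool:
    """Allow a message unless it carries a restricted label and is
    addressed outside the organisation's own domains."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    external = domain not in INTERNAL_DOMAINS
    return not (external and classification.lower() in BLOCKED_EXTERNALLY)

if __name__ == "__main__":
    print(may_send("colleague@example-police.uk", "restricted"))   # True: stays internal
    print(may_send("journalist@newspaper.example", "restricted"))  # False: blocked
```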

7- ISO27001 compliance to avoid tougher ICO fines

The Data Protection Act gained stricter rules and higher penalties last year, with the ICO now able to impose a fine of up to £500,000 for a data breach. The result has been some of the highest fines ever seen. Zurich Insurance, for instance, had to pay over £2m after the loss of 46,000 records containing customers’ personal information – and the figure would have been higher had it not agreed to settle at an early stage of the FSA investigation. ISO 27001 has gained advocates in the last year because it tackles the broad spectrum of good information security practice, not just the obvious points of exposure. A gap analysis and alignment with the ISO 27001 standard is a great first step towards staying on the safe side. However, it is important that any improved security measure is accompanied by extensive training, so that all staff who may deal with the systems gain a strong awareness of regulations, breaches and their consequences.

8- IT is not just IT’s business – it is the business’ business as well

In an atmosphere where organisations are watching every penny, CFOs have acquired a stronger presence in IT, although neither they nor the IT heads were particularly prepared for this shift. The CIO now has to justify costs concretely, using financial language to propose projects and explain their likely ROI. Roles will change for the CFO as well, who will need a better knowledge of IT in order to discuss strategies and investments with the IT department.

9- Choose your outsourcing strategy and partner carefully

In 2010 we heard about companies dropping their outsourcing partner and moving their Service Desk back in-house or to a safer Managed Service solution; about Virgin Blue losing reputation because of a faulty booking system managed by a provider; and about Singapore bank DBS, which suffered a critical IT failure that caused considerable inconvenience to customers. In 2011, outsourcing should not be avoided, but the strategy should favour solutions which allow more control over assets, IP and data, and less upheaval should the choice of outsourcing partner prove to be the wrong one.

10- Education, awareness, training – efficiency starts from people

There is no use in having the latest technologies, best practice processes and security policies in place if staff are not trained to put them to use, as the events of 2010 amply demonstrated. Data protection awareness is vital to avoid information security breaches; training in the latest applications will drastically reduce the volume of incident calls; and education in best practices will smooth operations and allow organisations to achieve the cost-efficiencies they seek.

Adrian Polley, CEO

This article has been published on Tech Republic: http://blogs.techrepublic.com.com/10things/?p=2100

10 reasons to migrate to Exchange 2010

July 29, 2010

A Plan-Net survey found that 87% of organisations are currently using Exchange 2003 or earlier. There has been a reluctance to adopt the 2007 version, often considered to be the ‘Vista’ of the server platform – faulty and dispensable. But an upgrade to a modern, improved version is now becoming crucial: standard support for the 2003 version ended over a year ago and much technological progress has been made since then. It seems that unconvinced organisations need some good reasons to move from their well-known but obsolete system to the new and improved 2010 version, where business continuity and resilience are easier to obtain and virtualisation can be embraced, with all the benefits that follow.

Here are 10 reasons your organisation should migrate to Exchange 2010:

1- Continuous replication

International research shows that companies lose £10,000/$10,000 an hour to email downtime. This version of Exchange enables continuous replication of data, which can minimise disruption dramatically and spare organisations such losses. Moreover, Microsoft reckons the costs of deploying Exchange 2010 can be recouped within six months thanks to the improvements in business continuity and resilience.
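To put the hourly figure in context, here is a minimal Python sketch of the downtime-cost arithmetic. The £10,000/hour figure is the one quoted above; the availability levels and the always-on assumption are illustrative only.

```python
# Minimal sketch of the downtime-cost arithmetic behind this point.
# The £10,000/hour figure comes from the article; the availability levels
# and the 24x7 hours-per-year assumption are illustrative.

HOURS_PER_YEAR = 24 * 365
COST_PER_HOUR = 10_000  # GBP, figure quoted above

def annual_downtime_cost(availability: float) -> float:
    """Cost of the email downtime implied by a given availability level."""
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    return downtime_hours * COST_PER_HOUR

if __name__ == "__main__":
    for availability in (0.99, 0.999, 0.9999):
        print(f"{availability:.2%} availability -> "
              f"£{annual_downtime_cost(availability):,.0f} per year")
```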

2- Allows Virtualisation

It supports virtualisation, allowing consolidation. Server virtualisation is not only a cost cutter, reducing expenditure on maintenance, support staff, power, cooling and space. It also improves business continuity – when one host goes down, its virtual machines can be brought up on another with little or no downtime.

3- Cost savings on storage

Exchange 2010 generates, according to Microsoft, 70% less disk I/O (input/output) than Exchange 2007. For this reason, the firm recommends moving away from SAN storage solutions and adopting less expensive direct-attached storage. This translates into real and significant cost savings for most businesses.

4- Larger mailboxes

The ability to use larger, slower SATA (or SAS) disks, coupled with changes to the underlying mailbox database architecture, allows far larger mailboxes than before to become the norm.

5- Voicemail transcription

Unified Messaging, first introduced with Exchange 2007, offers the concept of the ‘universal inbox’ where email and voice mail are available from a single location and consequently accessed from any of the following clients:

  • Outlook 2007 and later
  • Outlook Web App
  • Outlook Voice Access – access from any phone
  • Windows Mobile 6.5 or later devices

A new feature in Exchange 2010, Voicemail Preview, delivers text transcripts of incoming voicemails, saving the time it takes to listen to the message. When a voice message arrives, the recipient can glance at the preview and decide whether it is an urgent matter. This and other improvements, such as managing voice and email from a single directory (using AD), give organisations the opportunity to discard third-party voicemail solutions in favour of Exchange 2010.

6- Helpdesk cost reduction

Exchange 2010 offers the potential to reduce helpdesk costs by enabling users to perform common tasks which would normally require a helpdesk call. Role-Based Access Control (RBAC) allows delegation based on job function which, coupled with the web-based Exchange Control Panel (ECP), enables users to assume responsibility for Distribution Lists, update personal information held in AD and track messages. This reduces the call volumes placed on the Helpdesk, with obvious financial benefits.
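As a rough illustration of the role-based idea – not Exchange’s actual RBAC implementation – the Python sketch below attaches permissions to roles rather than to individual users; the role names and permissions are hypothetical.

```python
# Minimal sketch of role-based access control: permissions belong to roles,
# users are granted roles, and a check consults the union of their roles.
# Role names and permissions are hypothetical, not Exchange's built-in roles.

ROLE_PERMISSIONS = {
    "dl-owner":     {"manage_distribution_list"},
    "self-service": {"update_personal_info", "track_own_messages"},
    "helpdesk":     {"reset_password", "track_any_message"},
}

USER_ROLES = {
    "alice": {"self-service", "dl-owner"},
    "bob":   {"self-service"},
}

def allowed(user: str, permission: str) -> bool:
    """True if any of the user's roles grants the requested permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

if __name__ == "__main__":
    print(allowed("alice", "manage_distribution_list"))  # True
    print(allowed("bob", "manage_distribution_list"))    # False: needs a helpdesk call
```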

7- High(er) Availability

Exchange 2010 builds upon the continuous replication technologies first introduced in Exchange 2007. The technology is far simpler to deploy than in Exchange 2007, as the complexities of a cluster install are taken away from the administrator. It integrates easily with existing Mailbox servers and offers protection at the database level – with Database Availability Groups – rather than at the server level. By supporting automatic failover, this feature allows faster recovery times than before.

8- Native archiving

A large hole in previous Exchange offerings was the lack of a native managed archive solution. This saw either the proliferation of un-managed PSTs or the expense of deploying third-party solutions. With the advent of Exchange 2010 – and in particular the upcoming arrival of SP1 this year – a basic archiving suite is now available out-of-the-box.

9- Can be run on-premise or in the cloud

Exchange 2010 offers organisations the option of running Exchange ‘on-premise’ or in the ‘cloud’. This approach even allows organisations to run some mailboxes in the cloud and some on locally held Exchange resources. Companies can therefore take advantage of very competitive rates for mailbox provision from cloud providers for certain mailboxes, while deciding how much control to relinquish by still hosting most mailboxes on local servers.

10- Easier calendar sharing

With Federation for Exchange 2010, employees can share calendars and distribution lists with external recipients more easily. In effect, it allows them to schedule meetings with partners and customers as if they belonged to the same organisation. Whilst this might not appeal to most organisations, those investing in collaboration technologies will see the value Exchange 2010 offers.

Taking the leap

Due to the uncertain economy many organisations are wary of investing their tight budgets in projects deemed unessential. However, if they follow the ‘more with less’ rule and invest in some good service management for their IT Service Desk, the resulting cost savings will free resources that can be invested in this type of asset. The adoption of Exchange 2010, in turn, will allow more efficient use of IT by end users and help the service desk run more smoothly, thus engaging in a cycle of reciprocal benefits.

Keith Smith, Senior Consultant

This article is featured on Tech Republic:  http://blogs.techrepublic.com.com/10things/?p=1681&tag=leftCol;post-1681

Will Tablets rule the future?

June 17, 2010

Apple CEO Steve Jobs recently announced the start of a new, post-PC era, declaring that Tablets such as the iPad might replace PCs just as ‘old trucks were replaced by modern cars’. Microsoft’s Steve Ballmer reacted by saying that PCs are undergoing many transformations and tablets are just one of the experimental forms we will see, adding that the PC market still has a lot of room to grow.

As an experiment Keith Smith, Senior Consultant and Adrian Polley, Technical Services Director take the sides of Jobs and Ballmer and discuss the two different viewpoints.

Are Tablets the future? 

It’s a strong possibility.

Keith Smith, Senior Consultant

Nowadays there is an increasing need for light and easily transportable devices, which are at the same time aesthetically pleasing. From this point of view, Tablets tick all the boxes: they offer flexibility and mobility in use as they are not restricted to a keyboard, and because of their shape they can be used in places or positions not conducive to a notebook such as in bed, standing or with one hand. Apart from this, what differentiates Tablets is that they give users the possibility to write directly into the device using their own handwriting, which is something normal laptops do not allow. Users can then share their “ink”, the data which is input and displayed as handwriting, with other tablet and non-tablet users and integrate it with other business applications, for instance Word. There is also the option of using the traditional mouse-keyboard combination, although the elements have to be purchased separately.

After the warm welcome the iPad received, it may prove difficult to go back to portable PCs as we know them. This is especially because the Tablet offers a “touch environment” which, in certain situations, makes navigation easier than the notebook equivalents of keyboard, mouse and touchpad, and offers faster input for creating diagrams or playing games.

The fact that users can input information with a stylus, which builds on people’s traditional use of a pen, makes it even more accessible, as for a lot of people it is easier to use than a keyboard.

The functions that characterise Tablets make them ideal for personal use first; adoption may then spread into the business world once issues such as security are properly addressed. Although a lot of work still needs to be done, especially to gain credibility in a business environment, Tablets can be seen as the first step towards a technology that is minimal, versatile and, why not, democratic.

 __________________________________________________

I don’t think so!

Adrian Polley, Technical Services Director

There is a lot of fervour around this ‘innovative’ piece of technology, but contrary to what many seem to believe, Tablets are not so shockingly original, nor can they really be considered the anti-PC – there have in fact been PC-based Tablets since 2001. These are generally standard laptops with a rotating screen that can be written on, giving the general functionality of a laptop with the convenience of a pen-based device. The form was lauded as the natural successor to the laptop, but even though marketing enthusiasm increased with the Windows Vista and then the Windows 7 launches, take-up has been relatively small compared to overall laptop sales. The dual functionality made these Tablets considerably more expensive than comparable laptops, which could be a reason for their limited success. There is commercial appeal in the iPad because of its ease of portability and accessible price, but both are possible only because it lacks traditional PC or Mac components. This might make it lighter and sexier, but does it meet normal functionality needs?

There is a major issue with Tablets that concerns user input.  In spite of 20 years’ worth of development of voice and handwriting-based input, the vast majority of user input to a computer is still done via the keyboard, which is considered to be fast and accurate. The lack of an equally efficient means of input into a Tablet device relegates it to those tasks which are primarily consumer based, such as viewing and interacting with content that is provided without having to input a lot of information.  This may suit consumer applications, but only a certain class of business applications. Until the input problem is resolved, Tablets will always be an item in the business world that is niche and not mainstream.  As has already been proven with PC-based Tablets, users are generally unwilling to pay the premium required to get the Tablet functions on top of a standard laptop, let alone to lose some of its main functions completely.

Apple’s Tablet may have sold to millions of technology fans, but widespread day-to-day and business adoption is probably not going to become a reality anytime soon.

Disclaimer: this is a role-play exercise and may not represent the writers’ real views on the subject.

1/3 of UK organisations put off Windows 7 roll-out, but are they wise to wait?

April 14, 2010

Data collected through a survey carried out by IT Services provider Plan-Net has shown that 42% of UK businesses are planning to roll out Windows 7 in the next 18 months. However the survey, of 100 IT decision makers in City-based businesses of over 250 users, discovered that 24% are waiting until 2011 to roll out the new OS while only 18% are either in the process or plan to start the transition in 2010. With only 6% of the surveyed organisations already using Windows 7 and a further 8% not making the leap for 2 to 3 years, a stunning 24% are not thinking about rolling out Windows 7 at all. Finally, 20% are still unsure of whether to make the leap and when.

According to Gartner, the process of a full-scale migration takes, on average, 12-18 months and with Microsoft stopping downgrade rights to XP on new Windows 7 machines in mid-2011, are these organisations wise to wait?

David Cowan, Head of Infrastructure Consulting at Plan-Net, examines the likely timeframes of a Windows 7 implementation for businesses of different sizes along with the possible problems, issues and concerns organisations might face during the inevitable roll-out – whether they begin in good time or alternatively, leave it too late…

“Planning changes is rather compelling at the moment, as many organisations have not invested in infrastructure or large projects for a couple of years. Only now are they beginning to plan their investments for the next 12-18 months, mostly driven towards upgrading ageing hardware, desktop refresh and storage solutions. But they can’t wait any longer – according to Gartner the process of a full-scale migration takes, on average, 12-18 months, without taking into account the time needed to adopt and become accustomed to Best Practice change management processes if they are not yet in place. So if you are expecting to be involved in new business opportunities brought along by the London 2012 Olympics, for example, you should start planning in mid-2010 to avoid finding yourself halfway through a rollout when the time comes.

What is more, Gartner analysts appear to be pro-migration, advising Vista-traumatised users not to bypass Windows 7 like they did with its predecessor and early adopters have given it positive feedback. Perhaps more importantly, there do not seem to be too many other options – the scent of change is in the air, and Windows 7 is only the blastoff.”

 
For more information, contact:
Samantha Selvini
Press Assistant, Plan-Net plc
Tel: 020 7632 7990
Email: samantha.selvini@plan-net.co.uk

Best Practice and Virtualisation: essential tools in Business Resilience and Continuity planning

March 25, 2010

Life in Venice doesn’t stop every time it floods. People roll up their trousers, pull on their wellies and still walk to the grocer’s, go to work, grab a vino with friends. And when it’s all over they mop the floor, dry the furniture, and go back to their pre-flood life. How do they do it? They choose not to have carpet or wooden flooring, keep updated on water level and have a spare pair of boots right next to the door. This is called prevention.

When it comes to faults in IT systems, both common and rare just like flooding can be, prevention is not better than cure – it is the cure, the only one to allow business continuity and resilience.

Complicated machinery and analysis are a thing of the past: nowadays planning is extraordinarily easy thanks to the expertise embodied in Best Practice processes, and to new technologies such as virtualisation that can bring user downtime close to zero.

First of all, it must be noted that virtualising servers, desktops and data centres is not something that can be done overnight. Planning is needed to avoid choosing the wrong solution – one based on the latest product on the market and media talk rather than on what works best for the specific needs of the business – and to avoid inefficiencies, interruption of business services, or even data loss during the process. Best Practice therefore emerges as the essential framework within which all operations should be carried out if they are to be successful.

Any change made to the system needs mature Best Practice processes, such as the world-renowned ITIL (Information Technology Infrastructure Library), to be in place to guide the organisation in planning the best route through all operations and incidents. These processes are a key tool for avoiding wasted money and time, and for improving the performance of the IT department and of the business as a whole.

Once this is sorted, you can think about going virtual. From a technical point of view, virtualisation is gaining importance in business resilience and continuity planning thanks to the progress made by new technologies. Products such as VMware’s vSphere, for example, allow what is called “live migration”: the capacity and speed of the virtual machines are treated as an aggregate rather than individually. As a consequence, not only is the load distributed more evenly, for faster, smoother operations, but whenever a machine fails its resources are immediately available from another host in the pool, without the user even noticing and without interrupting what they were doing.
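The pooling idea can be illustrated with a toy Python sketch: when a host fails, its workloads are simply reassigned to the least-loaded surviving hosts. This is a conceptual illustration only, not how vSphere’s scheduler actually works; the host names and workload sizes are hypothetical.

```python
# Toy illustration of treating host capacity as a pool: when a host fails,
# its workloads are reassigned to the least-loaded surviving hosts.
# This is a conceptual sketch, not how vSphere's scheduler actually works.

from typing import Dict, List

def redistribute(hosts: Dict[str, List[int]], failed: str) -> Dict[str, List[int]]:
    """Move the failed host's workloads (sized in arbitrary units)
    onto whichever surviving host currently carries the least load."""
    orphans = hosts.pop(failed, [])
    for workload in sorted(orphans, reverse=True):
        target = min(hosts, key=lambda h: sum(hosts[h]))
        hosts[target].append(workload)
    return hosts

if __name__ == "__main__":
    cluster = {"host-a": [4, 2], "host-b": [3], "host-c": [1, 1]}
    print(redistribute(cluster, failed="host-a"))
    # {'host-b': [3, 2], 'host-c': [1, 1, 4]}
```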

Moreover, data is held on centralised, virtualised storage, so it is accessible from different sources and does not get lost during system malfunctions, making business resilience faster and easier to achieve.

Guided by the expertise of Best Practice, and with the help of virtualisation products that suit individual needs and goals, business resilience and continuity planning will not only come more easily but also produce more effective results, allowing organisations to deliver their services and carry out their operations without fear of interruptions, inefficiencies or data loss.

 

Pete Canavan, Head of Service Transition

 

This article is in April’s issue of Contingency Today, and is also online at: http://www.contingencytoday.com/online_article/Best-Practice-and-Virtualisation/2242

5 thoughts on the IT Service Desk that need re-thinking

March 10, 2010

Slowly recovering from the crisis and with a more careful eye to the unsteadiness of the market, many organisations across all sectors are considering ways to make their IT Service Desk more cost-efficient, but some ideas decision-makers might have could be partially or totally wrong.
So if you are thinking any of the following, you might want to think again:

“Our Service Desk is costing us too much. Outsourcing it to [insert favourite low-cost country abroad] can solve the problem.”

Although outsourcing has its advantages, doing it off-shore is a huge investment with many hidden costs, including losses due to inefficiencies and disruption during the transition or caused by poor performance – bad service can damage the business. Moreover, reversing the move can be a costly, lengthy and treacherous procedure. Before considering drastic moves, organisations should try to identify the reasons their IT expenditure is so high. Likely causes include inefficient management, poor skills, or obsolete tools and processes. Implementing best practice, using automated ITIL-compliant software and updating IT skills are a first step towards efficiency; a more cost-effective outsourcing solution, however, could be handing management of the Service Desk to a service provider that can take care of service improvement on site.

“If leading companies around the world are off-shoring, it must be convenient.”

Only global organisations seem to gain great benefits from off-shoring their IT department; for them it is often the only way to reduce otherwise enormous spending. Just because many important organisations are doing it does not mean it is suitable for all. For example, there are cultural differences which may not be an issue for organisations with offices and clients spread worldwide that already deal with a mixture of cultures, but which can certainly cause problems for a predominantly European company with a particular business mindset. Another issue is cost: many organisations find that after the considerable initial investment, savings may not exceed 10%; what is more, the new facility sometimes creates unforeseen extra costs, actually increasing expenditure.

“Our system has always worked; I don’t see why we should change it.”

Technology is changing regardless of one’s eagerness, and it is important to keep up with the changing demands of the market in order to remain competitive. A certain system might have worked five years ago, but new technologies and procedures can make older ones obsolete and comparatively inefficient. Take server virtualisation for example: business continuity can reach astonishing levels thanks to live migration, guaranteeing a better service with the extra benefit of energy saving through consolidation. Adoption of ITIL Best Practice processes also helps increase efficiency not only in the Service Desk, but in the business as a whole. Thanks to its implementation, organisations can save time and money and enhance the smoothness and quality of all IT-reliant operations, which helps the entire business.

“We need more 2nd and 3rd line engineers.”

When more problems need second and third-line resolution, it probably means the first line is not efficient enough. With automated software handling simple incidents, and software as a service managed by an external provider covering the most complex ones, both ends of the spectrum are taken care of: part of the first-line workload and all of the third-line workload are no longer a concern for the organisation’s IT staff. The remaining incidents, however, still need more efficient resolution at first-line level: the more incidents are resolved there, the less need there is to add more expensive second-line staff. To improve first-line fix, engineers need to be trained in Best Practice processes that make incident resolution fast and effective, and that help the organisation deal with change and prevent risks connected to data security.

“I’d rather we managed our IT ourselves – control is key.”

An organisation might be proficient in its field, but may find it difficult to manage its IT Service Desk as effectively. When cost-efficiency is important, it is best to leave one’s ego at the door and have experts do the job. The IT arena is constantly changing and continuous training and updating is necessary in order to keep up with the market standards, and an organisation often cannot afford to invest in constant innovation of their IT. If outsourcing, on and off-shore, gives organisations the impression of losing control, then managed services is a better solution: the existing team and tools, or new and improved ones, can be managed by a service provider directly on the office premises, if needed. Thanks to this, organisations can focus on the more important parts of their business, leaving IT to the techies while still keeping an eye on it.

 

Adrian Polley, CEO

Find this article online at Fresh Business Thinking: http://www.freshbusinessthinking.com/business_advice.php?CID=&AID=5004&Title=5+Thoughts+On+The+IT+Service+Desk+That+Need+Re-Thinking

Microsoft System Center Service Manager 2010: a credible challenger in the Service Management software market?

February 17, 2010

After 3 years in beta, Microsoft is expected to launch System Center Service Manager (SCSM) sometime this year. Long-time Microsoft watchers will know that the company often “drip feeds” new markets with product information before products are ready as a way of generating interest.  This has the added benefit, from Microsoft’s perspective, of creating uncertainty and potentially delaying buying decisions for competing products.  But a 3-year beta is unusual even for Microsoft, and is largely explained by the company deciding that the product needed a ground-up rewrite after feedback from early tests to improve performance and integration.

Although an official release date has not been published, organisations are already starting to reflect upon the consequences of Microsoft entering a sector which is currently served by relatively small-sized, niche software companies.  Whilst BMC Remedy and HP Service Manager compete for very large installations, there isn’t really a stand-out market leader in the general Service Management software market, but rather a small group of vendors offering specific, focused products.

Microsoft expects, and frankly needs, to compete across the breadth of any market it enters. And here, Microsoft’s standard approach is at odds with what most buyers have come to expect. Microsoft’s competitors in the Service Management software market most commonly use a sales model in which they sell directly to the customer and provide related services – installation, configuration assistance, customisation and training – as well as the software. Microsoft has never used this model. Instead, it invests heavily in product marketing but sells through its partner network – which in the UK amounts to tens of thousands of IT service companies and resellers of all shapes and sizes, who in turn get their product from a distributor.

In choosing new Service Management software, companies frequently go through a tender process to ensure that they choose the most suitable product at the best price.  But here, Microsoft is at an immediate disadvantage.  A customer who wants to include Microsoft’s product in its tender will have to find a suitable partner to deal with, and may find there are multiple Microsoft partners who want to compete for the sale. And whereas niche vendors can genuinely offer an end-to-end solution including after-sale support which plugs directly into the software vendor, anyone considering Microsoft’s offering will be wary of the fact that they will potentially have multiple layers of support to deal with if they want answers or fixes to problems.

Of course, Microsoft can afford to compete on price, as it does in other markets. But the software cost in changing Service Management products is only one part of the overall cost of transition. And whilst Microsoft is likely to tout close integration with other System Center products as a key selling point, it has a major disadvantage in the UK market: the product does not align with the Information Technology Infrastructure Library (ITIL), but rather with the Microsoft Operations Framework (MOF). This in itself would be enough to see the product discounted in many tender processes. In a blog entry, a Microsoft employee points to ITIL licensing costs as the main reason for the lack of ITIL alignment, which is rather curious for a company of Microsoft’s resources.

All things considered, can Microsoft convince potential customers that the multi-tier sales and service model is better, and will it win the market by selling cheaper? It is hard to believe that Service Management-savvy buyers will be easily convinced – Service Management software is a big investment, not only in cost terms but especially because it sits at the heart of the support process, which it orchestrates. In such a particular market, where quality, reliability and ease of adoption matter more than price, Microsoft will have to work hard to win any form of trust, let alone take control of the market.


 

Adrian Polley, CEO

A new lease of IT life

February 11, 2010

The latest Gartner predictions state that by 2012, 20% of businesses will own no IT assets. Is IT following the paths of cars and mobile phones and will we end up leasing it?

It is actually not difficult to imagine. The growth of utility computing means organisations are already purchasing software and storage on demand, leaving their management to a third party. They do this not only because it makes economic sense, but for a more important reason: it spares them the responsibility of managing something that is not the main function of their business. As the trend grows, these technologies gain the incentive and resources to mature and develop further, and this is already starting to affect other markets. With new technologies evolving ever more frequently, the need to update or replace hardware and software becomes more business-critical, and that of course carries a heavy financial burden. So it is not too far-fetched to imagine that in the future all hardware and software will at some point be leased rather than owned, making constant updates and replacements less expensive in the long run, at least for businesses.

It is not unusual for this to happen with fast-evolving technology. In their first twenty years, mobile phones changed only in shape and size, but all models did the same thing: make and receive phone calls. At some point the SMS function was added, but it could often be activated on older handsets, so one could have kept the same mobile phone for a decade without being too ‘obsolete’. Then suddenly everything started to change at high speed: some mobiles had a camera, some had polyphonic ringtones, some had both, and when you bought the one which had both, another mobile would come out with ‘real tones’, a much higher-resolution camera and the new MMS function; and as soon as you became accustomed to that, along came new ones with video and internet browsing, and so on. Leasing mobile phones, then, seemed really sensible for a company: it left it free to upgrade to new technology with ease and without having to purchase each phone at full price. Maintenance and replacement are easy when management is in the hands of the provider – you only pay monthly fees to have all the services you need.

The same happens with cars – a company surely wouldn’t use the same car for ten years, and wouldn’t want to spend an awful amount of money every couple of years to buy a new model and perhaps go through the struggle of selling the old one, not to mention repair, revision and all the rest. So it sounds reasonable to pay for a service that provides you with perfectly-working cars, substitute them as soon as they start being faulty, and cares for maintenance issues.

Gartner analysts are probably right in their prediction, then. Just as with mobile phones and cars, many organisations will come to see ownership of their hardware as ‘nonstrategic’, as Gartner puts it. It is worth noting, though, that this is nothing new: there have been attempts to rent IT equipment in the past which ultimately proved unsuccessful. It is difficult to know whether new technologies will change that fate. What is obvious, however, is that with organisations shifting their IT spending towards operational rather than capital expenditure, they seem happy to delegate the responsibility of managing IT assets and services to the experts, in order to increase ROI and efficiency and, not least, to be free to focus on the important parts of their business – making it as successful as it can be.


 

Richard Forkan, Director of Business Development

A shorter version can be found in the comment section of CRN – Channelweb:  http://www.channelweb.co.uk/crn/comment/2257649/lease-life

Quick win, quick fall if you fail to plan ahead

January 11, 2010

Virtualisation seems to be the hot word of the year for businesses large and small, and as everyone concentrates on deciding whether VMware is better than Microsoft Hyper-V, often driven by the media, they risk overlooking one of the major pitfalls in moving to virtual – the lack of forward planning.

Many organisations invest only a small amount of money and time in investigating solutions, but choosing one that is tailored to the business, rather than the coolest, latest or cheapest product on the market, can save them from a false economy.

The second mistake organisations often make is to put a virtual environment together quickly for testing purposes, only for it to become the production or live environment almost without anyone realising. This happens either because of market pressure to keep up with the rest of the business, or because the IT department uses the new, only partly tested environment to provision services rapidly and score a “quick win” with the rest of the business.

But a system that is not planned and correctly tested is often not set for success.

My advice would be to plan, plan and then plan some more.

I suggest that organisations thinking about virtualising their systems undertake a capacity planning exercise, along the lines sketched below. They should start by accurately analysing the existing infrastructure; this gives the business the information required to correctly scope the hardware for a virtual environment, which in turn provides the information needed for licensing.
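As a minimal sketch of what such a capacity planning exercise boils down to, the Python below aggregates measured usage across the existing servers, adds headroom, and estimates a host count. The utilisation figures, host specification and 25% headroom factor are hypothetical assumptions.

```python
import math

# Minimal capacity-planning sketch: sum what the existing servers actually
# use, add headroom for growth and failover, and estimate how many
# virtualisation hosts are needed. All figures here are hypothetical.

# (cpu_ghz_used, ram_gb_used) measured on each existing physical server
EXISTING_SERVERS = [(1.2, 6), (0.8, 4), (2.5, 12), (0.5, 3), (1.6, 8)]

HOST_CPU_GHZ, HOST_RAM_GB = 24.0, 96.0   # hypothetical host specification
HEADROOM = 1.25                          # 25% growth/failover allowance

def hosts_needed(servers, host_cpu, host_ram, headroom):
    """Estimate host count from aggregate CPU and RAM demand plus headroom."""
    cpu_demand = sum(cpu for cpu, _ in servers) * headroom
    ram_demand = sum(ram for _, ram in servers) * headroom
    return max(math.ceil(cpu_demand / host_cpu), math.ceil(ram_demand / host_ram))

if __name__ == "__main__":
    print(hosts_needed(EXISTING_SERVERS, HOST_CPU_GHZ, HOST_RAM_GB, HEADROOM))
```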

Do not go from “testing of a new technology” on to a “live/production environment” without sufficient testing and understanding of the technology, or the inefficiencies could damage business continuity and the quality of services.

All in all, I advise organisations outside the IT sector to engage a certified partner to assist with design and planning and, equally importantly, to undertake certified training courses so that staff are prepared to work with the new system.


Will Rodbard, Senior Consultant