Posts Tagged ‘virtualisation’

NEWS: Plan-Net wins 5-year IT outsourcing deal with Davenport Lyons

October 31, 2011

IT Services provider Plan-Net plc has agreed a 5-year IT outsourcing contract with West End law firm Davenport Lyons.

Plan-Net will provide Davenport Lyons with a new virtual infrastructure and 24/7 support delivered from an onsite team and dedicated legal IT support centre based in central London.

Plan-Net Director Adrian Polley commented:

‘We are extremely pleased to be adding Davenport Lyons to our growing list of legal clients and look forward to delivering the high levels of service required in this sector.’

  • For more information contact:

Samantha Selvini

Press Officer, Plan-Net plc

Tel: 020 7632 7990

Email: samantha.selvini@plan-net.co.uk

  • About Plan-Net

A specialist in transforming IT operations into high-performance, cost-efficient platforms for business success, Plan-Net works with clients of all sizes and needs to help them maintain high levels of service while still meeting demands for a reduction in IT spending.

Plan-Net has helped to enhance performance, flexibility, security, cost-efficiency and, ultimately, user productivity at clients large and small over the two prosperous decades of its existence.

Website: www.plan-net.co.uk

Blog: https://plannetplc.wordpress.com/

Twitter: www.twitter.com/PlanNetplc

Financial firms’ IP is safe with VDI

May 10, 2011

As with many other new technologies, financial organisations have been among the keenest to embrace desktop virtualisation. The main reason the sector has adopted this particular technology so widely is that it suits the need for easier mobility: thanks to VDI, users can access their desktop from any PC with an internet connection. That makes it easy to reach large amounts of data and heavy applications from a lightweight netbook while travelling, and it even removes the need to carry a laptop when visiting another office.

But although for many this is reason enough on its own to adopt the technology, there is another advantage that makes desktop virtualisation even more attractive to the financial sector: it allows all Intellectual Property to be centralised, owned and managed by the IT department rather than at end-user level. Control over data and IP is extremely important to this sector, as both are vital to a firm creating competitive advantage in the market – to financial firms, IP is a key asset and therefore needs to be protected. With this solution, all data is processed and saved in a central hub rather than at user level, making it more difficult for users to take information to competitors or to copy it onto an external device and lose it, and so protecting the company from breaches of the Data Protection Act.

Centralisation also means that individuals cannot freely download unvetted software onto their desktops that may contain viruses or open a window for hacking, with all the extra security benefits that this entails. Given the sensitivity and importance of the information a financial firm deals with, being able to minimise these kinds of data security breach is a great advantage: it increases public and regulatory confidence and credibility, which can add value to the company. Data leakage, loss or theft may lead not only to costly fines but also, because of the obligation to inform clients and make an incident public, to a loss of reputation and therefore of business – both from current customers and from potential customers who might opt for the competition. A safe environment is more attractive, so a correctly managed VDI solution can help retain clients and perhaps also win new business.

Desktop virtualisation does, of course, have a cost, and it makes little financial sense for companies that have just upgraded their hardware. It is, however, a wise alternative when PCs are due a refresh, and in that case the return on investment is almost immediate. Even so, the short- and medium-term benefits are not confined to reduced hardware costs: VDI also enables the IT department to achieve some important cost-efficiencies. These include allowing IT support personnel to carry out maintenance more easily and quickly; speeding up routine operations such as patching and applying application upgrades; and reducing the number of technicians needed to support remote users, especially the more expensive desk-side engineers.

Desktop virtualisation also allows for cost savings in the long run by extending the PC lifecycle and applying a concurrent-usage software licensing model. Pooling flexible server hardware extends its lifecycle too, and the simplified infrastructure brings downtime close to zero.
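
To illustrate the concurrent-usage licensing point, here is a minimal sketch (not from the article; the session data is hypothetical) of how the peak number of simultaneous sessions, rather than the number of devices, could be used to size a licence pool:

```python
# Illustrative sketch: estimating how many concurrent-use licences a pooled
# VDI estate might need, based on session start/end times.
from datetime import datetime

# Hypothetical session log: (login, logout) pairs for one working day.
sessions = [
    (datetime(2011, 5, 10, 8, 30),  datetime(2011, 5, 10, 12, 0)),
    (datetime(2011, 5, 10, 9, 0),   datetime(2011, 5, 10, 17, 30)),
    (datetime(2011, 5, 10, 11, 45), datetime(2011, 5, 10, 16, 0)),
]

def peak_concurrency(sessions):
    """Return the maximum number of sessions open at the same time."""
    events = [(start, 1) for start, _ in sessions] + [(end, -1) for _, end in sessions]
    open_now = peak = 0
    for _, delta in sorted(events):
        open_now += delta
        peak = max(peak, open_now)
    return peak

# Under concurrent-usage licensing you pay for the peak, not for every device.
print("Licences needed:", peak_concurrency(sessions))
```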

The advantages of VDI are evidently numerous, but being in control of data, IP and the way each individual desktop is managed by end users represents a major benefit for financial organisations in particular. Implemented and managed correctly, this technology can allow them to gain competitive advantage, minimise losses and increase their security and return on investment, ultimately improving business success.

Sharron Deakin, Principal Consultant

This article was written for Director of Finance Online: http://www.dofonline.co.uk/content/view/5270/118/

10 things we learnt in 2010 that can help make 2011 better

December 23, 2010

This is the end of a tough year for many organisations across all sectors. We found ourselves snowed-in last winter, were stuck abroad due to a volcano eruption in spring, suffered from the announcement of a tightened budget in summer, and had to start making drastic cost-saving plans following the Comprehensive Spending Review in autumn. Data security breaches and issues with unreliable service providers have also populated the press.

Somehow the majority of us have managed to survive all that; some better than others. As another winter approaches it is time to ask ourselves: what helped us through the hard times and what can we do better to prevent IT disruptions, data breaches and money loss in the future?

Here are some things to learn from 2010 that may help us avoid repeating errors and at the same time increase awareness of current issues, for a more efficient, productive and fruitful 2011:

1- VDI to work from home or the Maldives

Plenty of things prevented us getting to work in 2010: natural disasters, severe weather and industrial disputes being the biggest culprits. Remote access solutions have been around for a long time, but desktop virtualisation has taken things a stage further. With a virtual desktop, you access your own complete and customised workspace when out of the office, with performance similar to working in the office. Provided there’s a strong and reliable connection, VDI minimises the technical need to be physically close to your IT.

2- Business continuity and resilience with server virtualisation

Server virtualisation is now mainstream, but there are plenty of organisations, large and small, who have yet to virtualise their server platform. When disaster strikes, those who have virtualised are at a real advantage – building an all-encompassing recovery solution is far easier once your servers are virtualised than dealing with individual physical kit and the applications running on it. For anyone who has yet to fully embrace virtualisation, it’s time to reassess that decision as you prepare for 2011.

3- Good Service Management to beat economic restrictions

With the recent economic crisis and the unstable business climate, the general message is that people should be doing more with less. It’s easy to delay capital expenditure (unless there’s a pressing need to replace something that’s broken or out of warranty), but how else to go about saving money? Surprisingly, effective Service Management can deliver significant cost-efficiencies through efficient management of processes, tools and staff. Techniques include rearranging roles within the IT Service Desk so that more incidents are fixed earlier in the support process, and adopting automated tools to deal with the most common repeat incidents. Putting proper, effective measures on the service, down to the individuals delivering it, also helps to set expectations, monitor performance and improve processes.
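
As a minimal sketch of the repeat-incident automation mentioned above (not from the article; the incident categories and fix actions are hypothetical), a Service Desk rule table might look like this:

```python
# Illustrative sketch: a rule table for automating the most common
# repeat Service Desk incidents; anything unrecognised is escalated.
def unlock_account(user):    return f"Unlocked AD account for {user}"
def reset_password(user):    return f"Password reset link sent to {user}"
def clear_print_queue(user): return f"Cleared stuck print jobs for {user}"

AUTO_FIXES = {
    "account_locked":   unlock_account,
    "password_expired": reset_password,
    "printer_stuck":    clear_print_queue,
}

def triage(category, user):
    """Apply a scripted fix if one exists; otherwise escalate to an engineer."""
    fix = AUTO_FIXES.get(category)
    return fix(user) if fix else f"Escalated '{category}' for {user} to second line"

print(triage("account_locked", "jsmith"))
print(triage("vpn_down", "jsmith"))
```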

4- Flexible support for variable business

An unstable economic climate means that staffing may need to be reduced or increased for certain periods, then rescaled again shortly afterwards. At the same time, epidemics, natural disasters and severe weather may require extra staff to cover absences, often at the last minute. Not all organisations, however, can afford a ‘floating’ team paid to be available in case of need, or can source contractors easily and rapidly. An IT Support provider that offers flexibility and scalability can help minimise these kinds of disruption. Some providers maintain a team of widely-skilled, multi-site engineers who can be sent to any site in need of extra support and kept only until no longer needed, without major contractual restrictions.

5- Look beyond the PC

Apple’s iPad captured the imagination this year. It’s seen as a “cool” device, but its success stems as much from the wide range of applications available for it as from its innate functionality. The success of the iPad is prompting organisations to look beyond the PC in delivering IT to their user base. Perhaps a more surprising story was the rise of the Amazon Kindle, which resurrected the idea of a single-function device. The Kindle is good because it’s relatively cheap, delivers well on its specific function, is easy to use and has a long battery life. As a single-function device, it’s also extremely easy to manage. Given the choice, I’d rather take on the challenge of managing and securing a fleet of Kindles than of iPads, which, for all their sexiness, add another set of security management challenges.

6- Protecting data from people

Even a secure police environment can become the setting for a data protection breach, as Gwent Police taught us. A mistake caused by the recipient auto-complete function led an officer to send some 10,000 unencrypted criminal records to a journalist. If a data classification system had been in place – one in which every document created is routinely classified at a level of sensitivity and restricted so that only authorised people can view it – the breach would not have taken place, because the information could not have been sent. We can all learn from this incident: human error will occur and there is no way to avoid it completely, so counter-measures have to be implemented upfront to prevent breaches.
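
As a minimal sketch of such a counter-measure (not from the article; the classification labels and domain are hypothetical), a pre-send check might work along these lines:

```python
# Illustrative sketch: block outbound attachments classified above 'public'
# when the recipient address is outside the organisation.
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}
INTERNAL_DOMAIN = "@example-police.uk"   # hypothetical internal domain

def may_send(recipient, attachment_label):
    """Allow anything internally; externally, only 'public' material."""
    if recipient.endswith(INTERNAL_DOMAIN):
        return True
    return SENSITIVITY[attachment_label] <= SENSITIVITY["public"]

# An auto-completed external address with a 'restricted' attachment is blocked.
print(may_send("reporter@newsdesk.example", "restricted"))    # False
print(may_send("colleague" + INTERNAL_DOMAIN, "restricted"))  # True
```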

7- ISO27001 compliance to avoid tougher ICO fines

Enforcement of the Data Protection Act was toughened last year with stricter rules and higher fines, the ICO now being able to impose a penalty of up to £500,000 for a data breach. The result has been some of the highest fines ever seen. Zurich Insurance, for instance, had to pay over £2m after the loss of 46,000 records containing customers’ personal information – and the figure would have been higher had it not agreed to settle at an early stage of the FSA investigation. ISO 27001 has gained advocates in the last year because it tackles the broad spectrum of good information security practice, not just the obvious points of exposure. A gap analysis and alignment with the ISO 27001 standard is a great first step to stay on the safe side. However, it is important that any improved security measure is accompanied by extensive training, so that all staff who may deal with the systems gain a strong awareness of regulations, breaches and consequences.

8- IT is not just IT’s business – it is the business’ business as well

In an atmosphere where organisations are watching every penny, CFOs have acquired a stronger presence in IT, although neither they nor the IT heads were particularly prepared for the move. As a result, the CIO now has to justify costs concretely, using financial language to propose projects and explain their likely ROI. The CFO’s role will change as well, with a need to acquire a better knowledge of IT in order to discuss strategies and investments with the IT department.

9- Choose your outsourcing strategy and partner carefully

In 2010 we heard about companies dropping their outsourcing partner and moving their Service Desk back in-house or to a safer Managed Service solution; about Virgin Blue losing reputation over a faulty booking system managed by a provider; and about Singapore bank DBS, which suffered a critical IT failure that caused considerable inconvenience to customers. In 2011, outsourcing should not be avoided, but the strategy should include solutions that allow more control over assets, IP and data, and less upheaval should the choice of outsourcing partner prove to be the wrong one.

10- Education, awareness, training – efficiency starts from people

There is no use in having the latest technologies, best-practice processes and security policies in place if staff are not trained to put them to use, as the events of 2010 amply demonstrated. Data protection awareness is vital to avoiding information security breaches; training in the latest applications will drastically reduce the number of incident calls; and education in best practice will smooth operations and allow organisations to achieve the cost-efficiencies they seek.

Adrian Polley, CEO

This article was published on TechRepublic: http://blogs.techrepublic.com.com/10things/?p=2100

Best Practice and Virtualisation: essential tools in Business Resilience and Continuity planning

March 25, 2010

Life in Venice doesn’t stop every time it floods. People roll up their trousers, pull on their wellies and still walk to the grocer’s, go to work, grab a vino with friends. And when it’s all over they mop the floor, dry the furniture, and go back to their pre-flood life. How do they do it? They choose not to have carpet or wooden flooring, keep updated on water level and have a spare pair of boots right next to the door. This is called prevention.

When it comes to faults in IT systems – which, like floods, can be both common and rare – prevention is not merely better than cure: it is the cure, the only one that allows business continuity and resilience.

Complicated machinery and analysis are a thing of the past: planning is now extraordinarily easy thanks to the expertise embodied in Best Practice processes and to new technologies, such as virtualisation, that can bring user downtime close to zero.

First of all, it must be noted that virtualising servers, desktops and the data centre is not something that can be done overnight. Planning is needed to avoid choosing the wrong solution – one picked on the basis of the latest product on the market and media talk rather than what works best for the specific needs of the business – and to avoid inefficiencies, interruption of business services, or even data loss during the process. Best Practice, then, is the essential framework within which all operations should be carried out in order for them to be successful.

Any change made to the system needs mature Best Practice processes in place, such as the world-renowned ITIL (Information Technology Infrastructure Library), to guide organisations in planning the best route through all operations and incidents. These processes are a key tool for avoiding wasted money and time, and for improving the performance of the IT department and of the business as a whole.

Once this is sorted, you can think about going virtual. From a technical point of view, virtualisation is gaining importance in business resilience and continuity planning thanks to the progress made by new technologies. Products such as VMware’s vSphere, for example, allow what is called “live migration”: the capacity and speed of the virtual machines are treated as an aggregate rather than individually. As a consequence, not only is the load more evenly distributed, for faster and smoother operations, but whenever a machine crashes, resources are immediately available from another connected device, without the user even noticing and without the work in hand being interrupted.
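
A conceptual sketch of that pooled-capacity idea follows (this is not VMware’s actual algorithm, and the host and VM figures are hypothetical): when one host fails, its workloads are re-placed on whichever surviving host has the most headroom.

```python
# Conceptual sketch: treat a cluster as an aggregate resource pool and
# re-place the VMs of a failed host on the survivor with most spare RAM.
hosts = {"esx01": 64, "esx02": 64, "esx03": 64}           # GB RAM per host
vms = {"vm-mail": ("esx01", 16), "vm-db": ("esx01", 24),   # vm -> (host, GB RAM)
       "vm-web": ("esx02", 8),   "vm-file": ("esx03", 12)}

def spare(host):
    used = sum(ram for h, ram in vms.values() if h == host)
    return hosts[host] - used

def evacuate(failed_host):
    """Move each VM from the failed host to the surviving host with most headroom."""
    survivors = [h for h in hosts if h != failed_host]
    for vm, (h, ram) in list(vms.items()):
        if h == failed_host:
            target = max(survivors, key=spare)
            if spare(target) >= ram:
                vms[vm] = (target, ram)
                print(f"{vm} ({ram} GB) restarted on {target}")
            else:
                print(f"{vm} ({ram} GB) cannot be placed: insufficient capacity")

evacuate("esx01")
```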

Moreover, data is stored in central, virtualised storage, so it is accessible from different sources and does not get lost during system malfunctions, making business recovery faster and easier.

Guided by the expertise of Best Practice and helped by virtualisation products that suit individual needs and goals, business resilience and continuity planning will not only be easier but will also deliver more effective results, allowing organisations to deliver their services and carry out their operations without fear of interruption, inefficiency or data loss.

 

Pete Canavan, Head of Service Transition

 

This article is in April’s issue of Contingency Today, and is also online at: http://www.contingencytoday.com/online_article/Best-Practice-and-Virtualisation/2242

Quick win, quick fall if you fail to plan ahead

January 11, 2010

Virtualisation seems to be the hot word of the year for businesses large and small, and as everyone concentrates on deciding whether VMware is better than Microsoft Hyper-V, often driven by the media, they may overlook one of the major pitfalls of moving to virtual – the lack of forward planning.

Many organisations invest only a small amount of money and time in investigating solutions, but choosing one that is tailored to the business, rather than the coolest, latest or cheapest product on the market, saves them from a false economy.

The second mistake organisations often make is to put together a virtual environment quickly for testing purposes, which then, almost without anyone realising, becomes the live production environment – either because of market pressure to keep up with the rest of the business, or because the IT department uses the new, only partly tested environment to provision services rapidly and gain a “quick win” with the rest of the business.

But a system that is not planned and correctly tested is rarely set up for success.

My advice would be to plan, plan and then plan some more.

I suggest that organisations thinking about virtualising their systems undertake a capacity planning exercise. They should start by accurately analysing the existing infrastructure; this gives the business the information needed to correctly scope the hardware for a virtual environment, which in turn informs licensing.
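
As a minimal sketch of such a capacity planning exercise (not from the article; all server figures and the host specification are hypothetical), the first-pass arithmetic might run as follows:

```python
# Illustrative sketch: aggregate measured peak resource use of existing servers
# and estimate how many virtualisation hosts are needed, with headroom and N+1.
import math

# (server name, peak CPU cores used, peak RAM in GB) from a monitoring exercise
existing = [("file01", 2, 8), ("mail01", 4, 16), ("sql01", 8, 32),
            ("web01", 2, 4),  ("app01", 4, 12)]

HOST_CORES, HOST_RAM_GB = 16, 96   # candidate host specification
HEADROOM = 0.25                    # keep 25% spare for spikes

total_cpu = sum(c for _, c, _ in existing)
total_ram = sum(r for _, _, r in existing)

hosts_for_cpu = math.ceil(total_cpu / (HOST_CORES * (1 - HEADROOM)))
hosts_for_ram = math.ceil(total_ram / (HOST_RAM_GB * (1 - HEADROOM)))
hosts_needed = max(hosts_for_cpu, hosts_for_ram) + 1   # +1 host for N+1 resilience

print(f"Peak demand: {total_cpu} cores, {total_ram} GB RAM")
print(f"Hosts required (incl. N+1): {hosts_needed}")
```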

Do not go from “testing of a new technology” on to a “live/production environment” without sufficient testing and understanding of the technology, or the inefficiencies could damage business continuity and the quality of services.

All in all, I advise organisations outside the IT sector to engage a certified partner company to assist with design and planning and, equally importantly, to undertake certified training courses that prepare staff to work with the new system.

Will Rodbard, Senior Consultant

Cloud computing – Help your IT out of the Tetris effect

January 8, 2010

Enjoy playing human Tetris on the tube at rush hour? All the hot, sweaty physical contact; the effort of pushing your way out, slowly, uneasily; people in your way, blocking you, breathing on you… Of course not. You just wish you could share the carriage with three friendly, quiet companions and kick the rest of the lot out, bringing a small selection of them back in only when you need an extra chat, some heat in the carriage, or specific information they might have.

If you imagine the tube situation to be your IT system, then you get a first glance at what Cloud Computing is about.

Cloud-based computing promises a number of advantages, but it is buying services “on-demand” that has caught the imagination. Rather than having to make a significant upfront investment in technology and capacity you may never use, cloud-based computing potentially allows you to tap into someone else’s investment and flex your resources up and down to suit your present circumstances.

Like all new computing buzzwords, the Cloud suffers from “scope creep” as everyone wants to say that their own solution fits the buzzword – however spurious the claim. Many IT-savvy people think that ‘the cloud’ is nothing but old wine in a new bottle, seeing similarities with what used to be called managed or hosted application services; it is essentially based on those, only with new technology supporting their evolution, which complicates matters and raises new doubts and questions.

But for most purposes, the Cloud extends to three types of solution – Software as a Service (SaaS), Managed Application Hosting and On-demand Infrastructure. These are all terms that have been in use for some time – the Cloud simply sees the distinction between them becoming more blurred.

Software-as-a-Service is what most people will have already experienced with systems such as Salesforce.com.  The application is licensed on a monthly basis, is hosted and managed on the provider’s web server and is available to access from the client’s computers until the contract expires.

Managed Application Hosting simply takes this one step further where a provider takes responsibility for managing a system that the customer previously managed themselves.  A big area here is Microsoft Exchange hosting – many companies struggle with the 24×7 obligation of providing access to email and find it easier to get a specialist to manage the environment for them.

With Software as a Service, the infrastructure and data are physically located with the provider. This can also be the model with Managed Application Hosting, although there are options for the provider to manage a system that remains within the customer’s perimeter. Both models raise a specific security concern, in that the customer is obliged to give the provider access to the customer’s data. This is, of course, not an uncommon arrangement – outsourcing, and the trust it implies, has been around for years.

The third type of Cloud solution is On-demand Infrastructure.  Virtualisation has already got customers used to the idea that Infrastructure can now be more flexible and dynamic – in the past bringing a new physical server online could take weeks, particularly when procurement was factored in, but a new, fully-configured virtual server can now frequently be brought up in seconds.  However, there’s still the investment to be made in the virtualisation platform at the start – and what happens when you run out of capacity?

The promise of On-demand Infrastructure is that it removes the need to make a big upfront investment but allows Infrastructure to be added or taken away as circumstances arise.  This is potentially as powerful and radical a concept as virtualisation.  So how could it work?

Different vendors are approaching it in different ways. Amazon, once known mainly as an online bookstore, has gone through significant transformation and now offers its Elastic Compute Cloud (EC2) service. If you develop web-based systems on this platform, you can configure, and pay for, only the capacity you actually need. Equally, Salesforce.com is no longer just an application but a whole development environment, which end users can extend with new functionality, buying additional computing capacity as required.
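
As a minimal sketch of that pay-for-what-you-use model (not from the article; it uses the modern boto3 library, which post-dates this piece, and the image ID, region and instance type are placeholders, not recommendations), capacity can be brought online and given back on demand:

```python
# Illustrative sketch: start an EC2 instance only when demand requires it,
# and terminate it (stop paying) when demand falls away.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Bring a server online on demand...
resp = ec2.run_instances(
    ImageId="ami-00000000000000000",   # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]
print("Started", instance_id)

# ...and give it back when it is no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])
print("Terminated", instance_id)
```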

One issue with both of these models is portability – if I develop my application for the Amazon platform, I’m tied into it and can’t go and buy my Cloud resources from someone else.

VMware has taken a slightly different approach with its vSphere suite, which it claims is the first “Cloud operating system”. What this means in practice is that VMware is partnering with dozens of service providers across the world to provide On-Demand Infrastructure services which customers can then take advantage of. In this model, a customer could choose to buy their virtual infrastructure from one provider, hosted on that provider’s premises. They could then join it with their own private virtual infrastructure, and with that of another provider, to gain provider flexibility. The real advantage of this approach comes when it is combined with VMware’s live migration technologies: a customer running out of capacity in their own virtual infrastructure could potentially live-migrate services into a provider’s infrastructure with no downtime. Infrastructure becomes truly available on demand, can be moved to a different provider with little fuss, and the customer pays only for what they use.

The vision is impressive. There are still tie-in issues, in that the customer is tied to VMware technology, but they should find themselves less tied to individual providers.

Of course, the kind of technology needed to virtualise data centres demands an investment not everyone is willing to make, which brings us to the question on everyone’s mind: ‘Why should I move to the cloud?’

According to a survey carried out by Quest Software in October, nearly 75% of CIOs are not sure what the benefits of cloud computing are, and half are not sure of the cost benefits, particularly since they find it difficult to calculate how much their current IT system is costing them. The firm interviewed 100 UK organisations with over 1,000 employees, of which only 20% said they were already actively using some of the cloud services on offer; their main worries centred on security, technical complexity and cost.

Choosing between public and private cloud also has a different financial impact. The initial investment in the public cloud is clearly lower because it requires no hardware expenditure, whereas a private cloud does; but Gartner analysts reckon IT departments will invest more in the private cloud through 2012 while the virtual market matures, preparing both the technology and the business culture for a later move to the public cloud. Storage duration is a cost issue only for public services, which are purchased on a monthly basis with a per-gigabyte usage fee combined with bandwidth transfer charges, making them ideal for relatively short-term storage. In any case, the experts say that not all IT services will move to a virtual environment; some will have to remain in-house because of data security and sensitivity issues.
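
To show why that pricing model favours short-term storage, here is a minimal sketch (not from the article; the per-gigabyte and transfer rates are hypothetical):

```python
# Illustrative sketch: per-GB-per-month storage plus bandwidth charges
# make public cloud attractive for short retention periods.
STORAGE_RATE_GB_MONTH = 0.10   # hypothetical £/GB/month
TRANSFER_RATE_GB = 0.08        # hypothetical £/GB transferred out

def public_cloud_cost(stored_gb, months, egress_gb):
    return stored_gb * STORAGE_RATE_GB_MONTH * months + egress_gb * TRANSFER_RATE_GB

# A 500 GB archive kept for 3 months vs 36 months, with one full retrieval.
print("Short term:", public_cloud_cost(500, 3, 500))    # £190
print("Long term: ", public_cloud_cost(500, 36, 500))   # £1,840
```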

The promise of the Cloud is virtualised data centres, applications and services all managed by expert third parties, so that not only will business operations run more smoothly and efficiently but, more importantly, IT managers can finally stop worrying about technical issues and focus on the important parts of their business, taking the strategic decisions that will bring their organisation further success. Whether this becomes a reality, and how long it takes, is very hard to predict at this stage.

Adrian Polley, CEO

One of you may be fired

December 17, 2009

Those of us old enough still remember the advertising slogan suggesting that ‘no one ever got fired for buying IBM’. And it was largely true. Many IT managers spent a lot of money on IBM systems because it appeared a risk-free option – even if they were not always convinced it was the best solution for the business.

The sentiment is not confined to IBM, of course. More recently you could easily replace IBM with names such as Microsoft, Cisco or Dell. The problem is that there are usually too many options available. And the same is true when it comes to virtualisation.

With a list of benefits as long as your arm, the decision to adopt a virtual desktop infrastructure in the first place seems a no brainer. But that’s where the easy decisions end. Once committed to virtualising the environment, many organisations quickly become bogged down with the sheer number of options, features and functionalities. 

So, rather than using an unbiased and well-researched approach to the platform selection process, far too many organisations are making snap judgements based on unfounded or irrelevant criteria – or simply on a name.

So who are the front runners? Unless IT managers have been living in the Himalayas for the last five years, they will certainly be aware of VMware, Microsoft Hyper-V and Citrix with its XenDesktop. But there are also a number of other suppliers, such as Quest and Sun, with their own lesser-known offerings that should not be ruled out.

The problem often lies in the criteria organisations use to select their platforms. What they need to do is carefully detail what is required and which platform best meets those needs. After all, the main benefits of virtualisation are achieved in the long term and these will be negated if an unsuitable platform is selected in the first instance.
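
One simple way to make that comparison concrete is a weighted scoring matrix. The sketch below is illustrative only (the criteria, weights, scores and platform names are hypothetical, not recommendations):

```python
# Illustrative sketch: score candidate platforms against the organisation's
# own weighted requirements rather than on name or headline price alone.
weights = {"fit_with_existing_estate": 0.30, "total_cost_over_5yrs": 0.25,
           "management_tooling": 0.20, "scalability": 0.15, "vendor_support": 0.10}

# Scores out of 10, as assessed by the organisation for its own environment.
candidates = {
    "Platform A": {"fit_with_existing_estate": 8, "total_cost_over_5yrs": 6,
                   "management_tooling": 9, "scalability": 9, "vendor_support": 8},
    "Platform B": {"fit_with_existing_estate": 9, "total_cost_over_5yrs": 8,
                   "management_tooling": 7, "scalability": 7, "vendor_support": 7},
}

for name, scores in candidates.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: {total:.2f}")
```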

For example, when deliberating between Microsoft Hyper-V and VMware, it is easy to get caught up in comparisons of up-front cost and perceived compatibility with the current operational platform. Hyper-V may appear cheaper than VMware at first glance, but this will only be the case for organisations for which it is the fit-for-purpose solution.

There are clearly many organisations for which Hyper-V is the right choice, but elsewhere, while there may be initial savings on up-front cost, these will soon be forgotten once the platform begins to come up short further down the line. Equally, choosing VMware on the strength of its reputation and positive press will be just as costly for organisations that cannot hope to use its vast scope within the requirements of their environments – or for those that discover incompatibility issues later, when it is too late.

It is surprising how often companies get this wrong. So, before you reach for the cheque book, make sure you have looked carefully at what you are signing up for or take independent expert advice. At the very least you can then blame it on someone else. 

 

David Cowan, Head of Infrastructure

This article appeared in the Dec 2009 issue of Networking+