Archive for the ‘Cloud risks’ Category

What is the impact of the Cloud on the existing IT environment?

March 10, 2011

As organisations look to embrace the cost-efficiency opportunities deriving from new technologies and services, there is a lot of talk about the benefits, risks and possible ROI of the blanket concept of ‘Cloud computing’. However, it is still unclear how using Cloud services will affect the existing network infrastructure and what impact it can have on IT support roles and the way end users deal with incidents.

The effect on an organisation’s infrastructure depends on the Cloud model adopted, which may vary based on company size. For example, small organisations which are less worried about owning IT and have simpler, more generic IT needs might want to buy a large part of their infrastructure as a shared service, purchasing on-demand software from different vendors.

Buying into the Software as a Service model has the benefits of simplicity and low cost, as it removes much of the responsibility and expense involved – a great advantage for SMEs. This solution also allows 24/7 service availability, something small firms might not otherwise be able to afford. The lack of flexibility of this service, since the software cannot be customised, is less of a problem for these types of organisation. But there are still risks related to performance and vendor lock-in.

Using this model, a small company’s retained IT infrastructure can be relatively simple, and therefore there might be little need for specialist technical skills within the IT department. A new skill set is required, however: IT personnel will need to be able to manage all the different relationships with the various vendors, checking that the service purchased is performing as agreed and that they are getting the quality they are paying for. The IT Service Desk will therefore require a smaller number of engineers, less technical but more commercially savvy. More specialist skills will shift towards the provider and 1st line analysts will have to escalate more calls to the various vendors.

Larger organisations, on the other hand, may well be keen on retaining more control over their infrastructure while purchasing IT resources on-demand. With this model, the organisation still manages much of its infrastructure, but at a virtual level – the vendor might provide it with hardware resources on-demand, for instance. The main advantage of this model is that it allows the existing infrastructure to be incredibly flexible and scalable, able to adapt to changing business needs. For example, building a data centre is lengthy and expensive, and therefore an impractical route to expansion. But by using a “virtual data centre” provider, capacity can be increased in the space of an online card transaction, with great financial benefits – in the Cloud model only the necessary resources are paid for, without investment in hardware or its maintenance.

With this second model, the change in roles within the IT department will mainly concern an increased need, as in the other model, for vendor management skills. Monitoring KPIs, SLAs and billing will be a day-to-day task, although engineers will still be needed to deal with both the physical and virtual infrastructure.

Both models generally have very little impact on the end user if the IT Service Desk has been running efficiently, since the Service Desk does not disappear as the first point of contact. However, in certain cases the change might be more visible – for instance, if desk-side support is eliminated, a cultural change that may need some adapting to.

All in all, change is not to be feared – with the necessary awareness, embracing Cloud services can improve IT efficiency significantly and align it with the business. By leaving some IT support and management issues to expert providers, an organisation can gain a major strategic advantage, saving money and time that it can ultimately put towards business success.


Adrian Polley, Technical Services Director

This article appears on the National Outsourcing Association’s (NOA) online publication Sourcing Focus: http://www.sourcingfocus.com/index.php/site/featuresitem/3318/

Data security: controlling the risks – The EGB Masterclass

February 23, 2011

by Dan Jellinek, E-Government Bulletin

Public sector information security breaches often hit the headlines, but are public bodies really any worse than the private sector in this area? What are the main risks, now and in a future of ‘any time, any place’ access to systems through cloud computing, and how can they best be tackled? We ask David Cowan, Head of Consulting at IT services provider Plan-Net.

Q: What are the main areas of risk for public bodies in keeping their data secure?

A: Public bodies are subject to a plethora of regulations, standards and frameworks on data security, due to the nature of the information they hold and the risks of handling such volumes of personal or sensitive data. The main areas of risk are: staff failing to observe the organisation’s information security procedures; malicious activity from both internal and external sources, such as staff unlawfully selling or obtaining personal data, and external threats from fraud, or from crime syndicates or rogue groups using ‘phishing’ or social engineering; and organisations and staff not being aware of their legal obligations, such as the obligation to report an information security breach under the Data Protection Act.

Q: What is the scale of risk faced by public bodies?

A: It is difficult to quantify the scale of the risk, but even with all the controls, standards and regulations in place to reduce risk, it is still not possible to eradicate it altogether. The size and geographical spread of public sector organisations increases the risk of data leakage and malicious activity occurring, as does the reliance on recording personal information within huge databases across the public services.

Q: What are the potential consequences of security breaches?

A: The greatest risk to an organisation is the potential reputational damage which could occur as a result of an information security incident reaching the public domain. The Information Commissioner’s Office is now empowered to hand out large fines of up to £500,000 for serious breaches of the Data Protection Act, and there is the possibility of criminal prosecution depending on the severity and scale of the breach. Public bodies could even lose access to public sector frameworks and IT networks as a result of a perceived systematic breakdown in their processes and procedures. The impact on individuals and society could include a lack of trust in public sector organisations handling their personal information; loss of productivity; and individual distress or harm caused by a breach.

(Read the rest of this interview on E-Government Bulletin issue 329: http://www.headstar.com/egblive/?p=771 )

5 tips for moving Disaster Recovery to the Cloud

October 5, 2010

As virtualization technologies become increasingly popular, more and more businesses are thinking about using cloud computing for Disaster Recovery. Experts in the field believe that there are many advantages in embracing this solution – however, there are also some potential threats that need to be taken into account.

In order to consider cloud computing services, organisations need to evaluate the potential risks to their Information Assets and, in particular, how a 3rd party supplier will affect the Confidentiality, Integrity and Availability of their data.

Here are five tips on how to deal with the main challenges:

1. Risk Assessment and Asset Valuation

Right from the outset, organisations should try to understand what the greatest risks to the business are and identify which information assets are too important or too sensitive to hand over to a 3rd party supplier to control.

2. Smoke and Mirrors

To mitigate the risks associated with choosing a new supplier, it is a good idea to carry out due diligence on the Cloud supplier – find out all you can about who you will be trusting with your information, and review their facilities, processes and procedures, references and credentials (e.g. whether they are ISO 27001 certified).

3. Migrating Information

Once a decision is made to either partially or wholly migrate data/systems to the cloud, the biggest challenge is how to ensure there is a seamless migration to the external provider’s service. This is a very delicate step which, if dealt with inadequately, may result in data loss, leakage or downtime which could prove extremely costly to the business.
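One practical safeguard against silent data loss during such a migration is to checksum every file before and after the transfer and compare the results. The sketch below is illustrative only – the idea of building a local manifest is an assumption on our part, not part of any particular provider's migration tooling:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Map each file's relative path to its digest, for pre/post comparison."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in root.rglob("*") if p.is_file()}

def verify_migration(source: Path, migrated: Path) -> list[str]:
    """Return the relative paths that are missing or corrupted after migration."""
    before, after = build_manifest(source), build_manifest(migrated)
    return [name for name, digest in before.items()
            if after.get(name) != digest]
```

Running `verify_migration` against the pre-migration copy and the migrated data gives an auditable list of anything that failed to arrive intact, before the original is decommissioned.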

4. Service Level Management

When businesses trust 3rd parties with their vital corporate, personal and sensitive information, it is important to set up structured SLAs, Confidentiality Agreements, Security Incident handling procedures, and reporting metrics, and above all ensure they provide compliant, transparent, real-time, accurate service performance and availability information.
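To make "accurate service performance information" concrete, an SLA check often boils down to simple arithmetic over downtime figures. A minimal sketch – the 99.9% target is an illustrative assumption, not a figure from any particular agreement:

```python
def availability(total_minutes: int, downtime_minutes: int) -> float:
    """Percentage of the measurement period the service was up."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def breaches_sla(total_minutes: int, downtime_minutes: int,
                 target_pct: float = 99.9) -> bool:
    """True if measured availability falls below the agreed target."""
    return availability(total_minutes, downtime_minutes) < target_pct

# A 30-day month is 43,200 minutes; at 99.9% only ~43 minutes of
# downtime are permitted, so 50 minutes of outage is a breach.
breached = breaches_sla(43_200, 50)
```

The point of agreeing reporting metrics up front is that both parties compute figures like these from the same data, rather than arguing after the fact about how availability was measured.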

5. Retention and disposal

Depending on the policies and regulatory requirements applicable to the business, one of the main challenges with cloud computing is how to ensure corporate retention policies are enforced when the information is located outside the company’s IT network perimeter. Obtaining certificates relating to the destruction of data is one thing, but proving that information identified as sensitive or personal is only kept for as long as necessary is another. With the economies of scale often associated with cloud computing, total adherence to the retention policies of individual companies may prove difficult if resilience, backup and snapshot technologies are employed to safeguard the environment from outages or data loss.
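The enforcement gap is easy to see with a sketch: a basic retention sweep is trivial to write against storage you control, but it cannot reach copies held in a provider's backups and snapshots. The record structure and retention periods below are illustrative assumptions only:

```python
from datetime import datetime, timedelta

# Illustrative retention periods per data category – real periods
# come from the organisation's policies and regulatory obligations.
RETENTION = {
    "personal": timedelta(days=365 * 2),
    "financial": timedelta(days=365 * 7),
}

def expired(records: list[dict], now: datetime) -> list[str]:
    """Return the IDs of records held longer than their category allows."""
    return [r["id"] for r in records
            if now - r["created"] > RETENTION[r["category"]]]
```

A sweep like this can delete expired records from the primary store, but the same records may persist in the provider's resilience, backup and snapshot copies – which is precisely the adherence problem described above.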

David Cowan, Head of Infrastructure and Security

Find this article in the ‘5 tips’ section of Tech Republic: http://blogs.techrepublic.com.com/five-tips/?p=324

Cloud computing: how to minimise lock-in risks

June 10, 2010

Choosing more than one supplier is necessary until a time when cloud computing comes of age.

Virtualising servers, purchasing space in data centres and utilising applications hosted and managed by third parties can have some undeniable advantages: they can increase efficiency, decrease IT-related costs, allow greater mobility and also represent a greener alternative for organisations. But as the popularity of cloud computing grows, so do concerns about the unclear implications of the new technologies. If the initial worries were mostly about the security of data stored at a provider, an even bigger question is now arising: what happens if an organisation wants its data back – to bring it in-house as it grows, to transfer it to another provider as part of a merger, or to move some services (e.g. only email or back-up) to a cheaper, more efficient provider? Although it is possible to retrieve and migrate data, the operation is neither easy nor straightforward, and the costs involved can be a barrier, locking the organisation in with the provider – and forcing it to accept whatever price and conditions the provider decides to impose.

The problem with the newness of cloud computing technologies is that standards for data formats and APIs, which would allow interoperability between infrastructures, have yet to be established. Cloud computing providers are already working on improving portability and reducing latency during data transfers, but only within services and platforms hosted on their own, proprietary infrastructure. Migration to another vendor, by contrast, can be a lengthy and expensive procedure – apart from possible end-of-contract penalties, organisations will be charged both for format conversion and for the transfer itself, including additional charges for bandwidth usage which, given the high latency, might altogether amount to a very large figure. Migration costs can be prohibitive when dealing with a large amount of data; so even if it might seem easier and more convenient to have a single vendor providing all services, storing the entire organisation’s data within one infrastructure represents a threat that could obstruct growth, structural changes and the search for more cost-efficient and bespoke solutions.
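Some back-of-the-envelope arithmetic shows how quickly these charges mount up. The per-gigabyte rates and penalty below are illustrative assumptions only – real tariffs vary by provider, region and contract:

```python
def migration_cost(data_gb: float, egress_per_gb: float,
                   conversion_per_gb: float, penalty: float = 0.0) -> float:
    """Rough migration cost: bandwidth egress + format conversion + exit penalty."""
    return data_gb * (egress_per_gb + conversion_per_gb) + penalty

# Moving 50 TB out at an assumed $0.12/GB egress and $0.05/GB
# conversion, with a $2,000 end-of-contract penalty:
cost = migration_cost(50_000, 0.12, 0.05, penalty=2_000)
# → about $10,500 under these assumptions
```

Even at modest per-gigabyte rates, the cost scales linearly with data volume – which is why a single-vendor store of all corporate data can quietly become too expensive to leave.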

Experts reckon it might be a few years before data and service portability between vendors becomes possible, but organisations need not put off a move to the cloud – they just have to apply some smart thinking. The key to avoiding lock-in, it seems, is not to put all the eggs in one basket. The wisest organisations are already using this technique, cherry-picking different vendors for different services: one provider for email, another for back-up and another couple for applications and VDI. There are a few criteria for choosing, not necessarily based on the cheapest offers: ideal vendors should above all provide modular packages, use popular formats for data and services, and be transparent about the regulations and fees applied to data transfer.

Many benefits can be achieved with this strategy: for instance, organisations can create a bespoke and flexible solution, and choose the best offer for each service. In some cases the overall price could be higher than the cost of a single provider for all services; but if switching vendors is slow and economically prohibitive, demand becomes inelastic and the incumbent can raise prices at any time, leaving the organisation no choice but to pay. It is also essential to take into account the risk of a provider going bust: the recent security attacks on Google and the dotcom meltdown have taught us that no company is too big to go out of business.

To avoid data and financial loss, the only solution is to use more than one vendor. It is only through a game of pick and choose that lock-in risks and their consequences can be avoided, while still enjoying the cost-efficiencies made possible by cloud computing.


Ayodele Soleye, Senior Consultant

This article is featured on Director of Finance Online: http://www.dofonline.co.uk/content/view/4645/152/