A data centre is a dedicated space where companies can keep and operate most of the ICT infrastructure that supports their business: the servers and storage equipment that run application software and process and store data and content. For some companies this might be a single cage or rack of equipment; for others, a room housing a few or many cabinets, depending on the scale of their operation.
This has expanded to the stage where companies can outsource their data centre needs without having anything on premises, or augment their own data centres with a modular or container-type data centre. A modern data centre is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and various security devices. Large data centres are industrial-scale operations using as much electricity as a small town.
Data Centre Consulting & Planning
With the rapidly changing application environment and increasing service requirements, how to plan a data centre architecture that supports service development has become a common concern. Data centre consulting & planning services help a customer develop a three-to-five-year plan for the data centre architecture, in accordance with the customer’s strategic service-development targets and current service environment, so that the data centre can effectively support both the current service systems and those to be created in the future.
Challenges to Customers
Existing data centres commonly face the challenges listed below; to solve them, you need to re-plan the existing data centre or create a new data centre based on a fresh plan.
- The power supply system is vulnerable.
- The cooling is inadequate, and airflow is not optimal.
- The structural load fails to meet exacting standards.
- Fire prevention facilities need to be improved.
- The total amount of IT resources increases year by year in an unplanned manner.
- The capacity and development of the data centres are limited.
Based on the experience of expert partners, the following methods are used to assess requirements and plan the architecture of a data centre:
- Analysis of the data centre architecture requirement
- Overall design and implementation of the data centre architecture
- Preparation and discussion for the design of data centre components
- Design of data centre infrastructure components and formulation of technical specifications
The following figure shows typical data centre architecture:
Data Centre Integration & Transformation
Nowadays, Chief Information Officers (CIOs) are under pressure to transform the IT infrastructure from a cost centre into a strategic element for business promotion. New tasks arrive one after another:
- Promoting the growth of the business
- Speeding up the launch of new products and services
- Rapidly integrating acquired enterprises
- Increasing investment in innovation
- Bringing the IT infrastructure into full play to gain new competitive edges
- Cutting costs
Almost all business flows depend on data centre resources, which include new and old technologies, software applications, and employee skills. Therefore, to sustain business growth, CIOs must coordinate these various resources and use new technologies to improve operational efficiency and business responsiveness.
Understanding the challenges and opportunities in this area allows us to help you determine the proper technology maturity level and then customize a solution for you based on your business targets and requirements.
Data Centre Managed Services
Because they carry services and data, data centres are essential to the informatization of enterprises, and good Operation and Maintenance (O&M) of data centres is the basis of normal enterprise operation. According to a Gartner report, there were more than 3.36 million data centres globally in 2012, and the number was expected to reach 3.72 million by 2016. The development of cloud computing places high demands on data centre O&M. As a result, enterprises can rarely perform data centre O&M entirely by themselves, and outsourcing it has become a hot topic.
With our partners’ years of experience in the O&M and management of data centres, we have a deep understanding of customer requirements. Proven in multiple successful deployments, this solution helps users gain accurate management control and continuously improve their management maturity and service quality.
Real-time Data Centre Monitoring
Real-time access to secure and reliable data is crucial in today’s competitive global business environment. New-generation Tier 3 (or above) data centre solutions allow companies to achieve this essential goal of high data centre reliability. These designs give IT managers the exact tools and capabilities they need to organize a company’s various servers effectively.
Energy consumption classifications are derived in real time from the actual device deployment of the data centre, together with historical Power Usage Effectiveness (PUE) records, to help the IT manager analyse and understand the power usage effectiveness of the data centre. Customized power consumption management and electricity cost analysis are also available to help you save energy.
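As a brief sketch of the underlying metric: PUE is the ratio of total facility power to IT equipment power. The function name and the sample readings below are illustrative only, not part of any particular monitoring product.

```python
from statistics import mean

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    A PUE of 1.0 would mean every watt reaches the IT load; real facilities
    are always higher because of cooling, power distribution losses, etc."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical readings in kW: (total facility power, IT equipment power)
history = [(500.0, 320.0), (520.0, 330.0), (480.0, 310.0)]

real_time_pue = pue(510.0, 325.0)
historical_pue = mean(pue(total, it) for total, it in history)

print(f"real-time PUE:  {real_time_pue:.2f}")
print(f"historical PUE: {historical_pue:.2f}")
```

Comparing the real-time figure against the historical average is what lets an IT manager spot a drifting cooling or power-distribution problem early.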
The status of all electrical equipment used for power – including uninterruptible power supply (UPS), power distribution cabinets (PDC), power distribution units (PDU), and power meters – can be monitored. The system also handles capacity management, power quality monitoring, power paths, rack current, and server power consumption analysis.
Cooling System Management
The system displays the current status of cooling equipment – including the precision cooling units, the chiller, and the cooling tower – together with temperature data and water-leakage alerts. The data centre facility’s heat-exchange layout is displayed in an easy-to-understand fashion using rack power consumption and cooling capacity data.
Various environmental parameters, including temperature, humidity, water leakage, smoke detection, door contact and more, are collected.
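A minimal sketch of how such collected readings might be checked against alert thresholds. The threshold values and field names here are illustrative assumptions, not the product's actual limits:

```python
# Illustrative thresholds for a data centre hall; real limits depend on the
# facility design and the deployed equipment's specifications.
THRESHOLDS = {
    "temperature_c": (18.0, 27.0),
    "humidity_pct": (20.0, 80.0),
}
# Sensors that simply trip (no numeric range).
BOOLEAN_ALARMS = ("water_leak", "smoke", "door_open")

def evaluate(readings: dict) -> list:
    """Return alert messages for out-of-range or tripped sensors."""
    alerts = []
    for name, (low, high) in THRESHOLDS.items():
        value = readings.get(name)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    for name in BOOLEAN_ALARMS:
        if readings.get(name):
            alerts.append(f"{name} alarm triggered")
    return alerts

print(evaluate({"temperature_c": 29.5, "humidity_pct": 45.0, "water_leak": True}))
```

In a real deployment the same evaluation loop would feed the alert and event-management pipeline rather than printing to a console.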
By combining the camera function, data centre managers can remotely observe the real-world status of the environment. The camera function includes full-time recording, event-triggered recording, playback, and image/video backup. Multiple door access control readers, controllers and user accounts can be integrated to provide accurate records of users’ entry and exit times.
The asset management module displays the exact installation position of servers and network equipment, and also provides capacity analysis, failure analysis, power paths, network topology, asset search, and event management advice.
Server and Network Equipment Monitoring and Management
This function covers server and network equipment monitoring, OS shutdown protection and server power capping. To look into the server chassis environment, IT managers can communicate with the server’s Baseboard Management Controller (BMC) to retrieve power information, PCB thermal sensor readings, and fan speeds. In addition, out-of-band (OOB) server power control is also available.
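To illustrate the power-capping idea, here is a hedged sketch of the decision logic only: the BMC communication itself is abstracted away, and the server names, readings and budget are hypothetical.

```python
def capping_action(rack_budget_w: float, server_power_w: dict) -> dict:
    """Decide per-server power caps so the rack stays within its budget.
    Caps are assigned proportionally to each server's current draw; in a
    real deployment the resulting caps would then be pushed to each
    server's BMC over the out-of-band management network."""
    total = sum(server_power_w.values())
    if total <= rack_budget_w:
        return {}  # already within budget, no capping needed
    scale = rack_budget_w / total
    return {name: round(power * scale, 1) for name, power in server_power_w.items()}

# Hypothetical power readings retrieved from each server's BMC (watts)
readings = {"srv-01": 450.0, "srv-02": 600.0, "srv-03": 500.0}
print(capping_action(1400.0, readings))
```

Proportional scaling is only one possible policy; a production system might instead prioritise by workload criticality.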
While the data centre must provide the resources necessary for the end users and the enterprise's applications, the provisioning and operation of a data centre is divided (sometimes uncomfortably) between IT, facilities and finance, each with its own unique perspective and responsibilities.
IT: It is the responsibility of the business’s IT group to decide what systems and applications are required to support the business’s operations. IT directly manages those aspects of the data centre that relate to the IT systems, while relying on facilities to provide the data centre’s power, cooling, access and physical space.
Facilities: The facilities group is generally responsible for the physical space, for provisioning, operations and maintenance, along with other building assets owned by the company. The facilities group will generally have a good idea of overall data centre efficiency and will have an understanding of and access to IT load information and total power consumption.
Finance: The finance group will be responsible for aligning near term vs. long term capital expenditures (CAPEX) to acquire or upgrade physical assets and operating expenses (OPEX) to run them with overall corporate financial operations (balance sheet and cash flow).
Perhaps the biggest challenge confronting these three groups is that, by its very nature, a data centre will rarely be operating at, or even close to, its optimal design point. With a typical life cycle of 10 years (or perhaps longer), it is essential that the data centre’s design remains sufficiently flexible to support increasing power densities and varying degrees of occupancy over a significant period of time.
This in-built flexibility should apply to power, cooling, space and network connectivity. When a facility is approaching its limits of power, cooling and space, the organization will be confronted by the need to optimize its existing facilities, expand them or establish new ones.
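The "approaching its limits" judgement above can be reduced to a simple headroom calculation across power, cooling and space. The capacities, usage figures and the 80% warning threshold below are illustrative assumptions:

```python
def headroom_report(capacity: dict, usage: dict, warn_at: float = 0.8) -> dict:
    """Compute utilisation per resource and flag any that has crossed the
    warning threshold (default 80%), signalling it is time to optimize,
    expand or build anew."""
    report = {}
    for resource, cap in capacity.items():
        utilisation = usage[resource] / cap
        report[resource] = {
            "utilisation": round(utilisation, 2),
            "warning": utilisation >= warn_at,
        }
    return report

# Hypothetical facility figures
capacity = {"power_kw": 800.0, "cooling_kw": 900.0, "racks": 120}
usage = {"power_kw": 680.0, "cooling_kw": 610.0, "racks": 100}
print(headroom_report(capacity, usage))
```

Note how power and space can cross the warning line while cooling still has headroom; the three resources rarely run out together, which is exactly why the planning decision is hard.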
Why Data Centres
Any entity that generates or uses data has the need for data centres on some level, including government agencies, educational bodies, telecommunications companies, financial institutions, retailers of all sizes, and the purveyors of online information and social networking services such as Google and Facebook.
Lack of fast and reliable access to data can mean an inability to provide vital services or loss of customer satisfaction and revenue. A study by International Data Corporation for EMC estimated that 1.8 trillion gigabytes (GB), or around 1.8 zettabytes (ZB), of digital information was created in 2011.
The amount of data in 2012 was approximately 2.8 ZB and is expected to rise to 40 ZB by the year 2020. Where it goes after 2020 is anybody’s guess but the amount of storage required is never going to reduce in size and so more and more information will move to data centres, and more data centres will be required.
All of this media has to be stored somewhere. And these days, more and more things are also moving into the cloud, meaning that rather than running or storing them on our own home or work computers, we are accessing them via the host servers of cloud providers. Many companies are also moving their professional applications to cloud services to cut back on the cost of running their own centralized computing networks and servers.
The cloud doesn't mean that the applications and data are not housed on computing hardware. It just means that someone else maintains the hardware and software at remote locations where the clients and their customers can access them via the Internet. And those locations are data centres.
There was a time when our information needs were simpler. We had TV shows broadcast into our homes at set times on just a handful of channels, we typed up memos and letters in triplicate for paper distribution and backup, and we had conversations on phones wired to the wall. Even cell phones used to be used just for making calls.
But since the dawn of the Internet, high-bandwidth broadband, smartphones and other new technologies, we are constantly online and constantly demanding that data be delivered to our computers, gaming systems, TVs and our phones. While paper documents still exist, we get lots of what used to be paperwork in the form of e-mail, Web pages, PDFs and other digitized files generated by software and rendered on computer screens. Even books are going from pulp to images on our computers, mobile devices and e-readers.
Electronic exchange of data is required for just about every type of business transaction, and is becoming the norm for many of our personal interactions. Even things that used to be analogue, like TV broadcasts and phone calls, are largely delivered in digital form over wires and radio waves, and at a far greater volume than ever before. Whether it’s government forms or instructions for baking a tuna casserole or a streamed TV show, we want to be able to call it up online, and we want it now.
With this massive demand for near-instantaneous delivery of digital information came the need for concentrations of computer and networking equipment that can handle the requests and serve up the goods. Thus, the modern data centre was born.
Data Centres and VDI in the Cloud
Initially, virtual desktops were touted as the replacement for physical desktops and a simple transition to a new type of end-point architecture. Managers and admins were promised a whole new type of environment that would help them transition to a more BYOD-friendly workplace and assist with the move to Windows 7 (and now Windows 8). The problem became clear when underlying infrastructure components began to suffer: running a VDI platform in fact required more resources. Greater demands on network bandwidth, storage and compute forced organizations to rethink exactly how they were going to deploy VDI and make it work.
So, let’s take a new approach to the VDI conversation. Instead of bashing the technology or dwelling on what it needs to work properly, we can examine where exactly VDI fits and where it has been most successful. Data centres and the Internet are now more than capable of running virtual desktops with little to no overhead for the companies involved.
Labs and Kiosks
Labs, kiosks and any other environment where many users access the same hardware are a great use case for VDI. Once a user is done with the end-point, the OS is reset to its pristine state. This is perfect for healthcare laboratories, task workers, libraries and even classrooms. Several large educational VDI deployments have already taken place as thin/zero clients begin to replace older fat clients. Furthermore, these lab environments can be hosted entirely in either a private or a public cloud. By using non-persistent cloud-based desktops, administrators can quickly provision and de-provision these labs.
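The non-persistence idea can be sketched in a few lines. The class, the golden-image name and the VM naming scheme below are purely illustrative and do not correspond to any vendor’s API:

```python
class NonPersistentPool:
    """Minimal sketch of a non-persistent VDI pool: every desktop is a clone
    of a golden image, and ending a session discards all user changes."""

    GOLDEN_IMAGE = "win10-lab-golden-v3"  # hypothetical image name

    def __init__(self, size: int):
        # Each desktop starts as a pristine clone of the golden image.
        self.desktops = {f"lab-vm-{i:02d}": self.GOLDEN_IMAGE for i in range(size)}

    def taint(self, name: str, state: str) -> None:
        """Simulate a user session modifying the desktop."""
        self.desktops[name] = state

    def end_session(self, name: str) -> None:
        """On logout, revert the desktop to the golden image (non-persistence)."""
        self.desktops[name] = self.GOLDEN_IMAGE

pool = NonPersistentPool(size=3)
pool.taint("lab-vm-01", "win10-lab-golden-v3 + user changes")
pool.end_session("lab-vm-01")
print(pool.desktops["lab-vm-01"])  # back to the pristine golden image
```

The point of the sketch is that "reset" is just reassignment to the golden image, which is why lab desktops are so cheap to churn through.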
Testing and Development
What better way to test out an application, service or new product than on an efficiently provisioned VDI image? Administrators can deploy and test new platforms within “live” environments without having to provision hardware resources. Once the testing is complete, they can simply spin down the VDI instance and roll out the new update, application or desktop environment. This can be done either internally or through a cloud provider.
Legacy Application Support
Recent updates within some organizations have forced applications onto 64-bit platforms, and some apps simply won’t run there, so administrators have had to get creative. This is where VDI can help: for those select finicky applications, VDI within a private cloud environment can be a lifesaver. Virtual desktops can run as 32-bit or 64-bit instances, allowing administrators to continue supporting older apps.
Contractors, Outside Employees and Branches
Some organizations have numerous contractors working within them. A great way to control contractor access is through a private cloud VDI platform: give a user access via controlled Active Directory (AD) policies and credentials, and allow them to connect to a virtual desktop. From there, administrators can quickly provision and de-provision desktop resources as needed for a given contractor. Outside consultants can bring in their own laptops, access centralized desktops and do their jobs; once done, simply power down or reset the VM. This creates a quick, easy-to-manage contractor VDI environment.
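The contractor lifecycle described above might look like the following. The AD group name, the desktop naming and both helper functions are hypothetical; a real implementation would call a directory service and a hypervisor API rather than mutate a dict:

```python
# Hypothetical AD group that gates contractor VDI access.
ALLOWED_GROUP = "VDI-Contractors"

def provision_desktop(user: str, ad_groups: set, active: dict) -> str:
    """Create a desktop for the contractor only if AD policy permits it."""
    if ALLOWED_GROUP not in ad_groups:
        raise PermissionError(f"{user} is not in {ALLOWED_GROUP}")
    desktop = f"ctr-{user}"
    active[desktop] = "running"
    return desktop

def deprovision_desktop(desktop: str, active: dict) -> None:
    """Power down and remove the VM once the engagement ends."""
    active.pop(desktop, None)

active = {}
desktop = provision_desktop("alice", {"VDI-Contractors", "Staff"}, active)
deprovision_desktop(desktop, active)
print(active)  # empty: nothing persists after the contractor leaves
```

Gating provisioning on group membership is what keeps the contractor’s own laptop a pure display device: no credentials or data ever land on it.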
Application virtualization aside, delivering desktops for BYOD can be a great solution for the end-user. Whether they’re working from home, internally or even internationally, the user can be presented a desktop with all of their settings intact. IT consumerization has created a true demand for BYOD, and this is where VDI can help: the end-point never retains the data, and the desktop, as well as the applications, are always controlled at the data centre level.
Heavy Workload Delivery
That’s right – you read that correctly. New technologies, like NVIDIA GRID, allow powerful resource sharing while still using a single GPU. Solutions like GRID accelerate virtual desktops and applications, allowing enterprise IT to deliver true graphics from the data centre to any user on the network. Unlike in the past, you can now place more resource-heavy users on a multi-tenant blade and GPU architecture. This opens up new possibilities for those few users who always needed a very expensive end-point.
VDI can be very successful if deployed in the right type of environment. One of the first steps in looking at a VDI solution is to understand how this type of platform will work within your organization. Is there a use-case? Is there an underlying infrastructure that will be able to support a VDI platform? By seeing the direct fit for VDI within an organization, the entire solution can actually have some great benefits.