Data Protection and Information Security – Together at Last.

Any IT professional who has spent several years around a corporate data center has more than likely grown accustomed to the separation of the data protection and data security fields. The division of specialties has long historical roots, but does it really make sense anymore?

Data Protection

Data protection is a major component of any corporate disaster recovery plan. A disaster recovery plan is a set of strategies and processes put in place to prevent, and to minimize the impact of, data loss in the event of a catastrophe. Data protection is essential to a disaster recovery plan because business-critical data cannot be substituted.

The only way to protect data is to make a copy of the original and store that copy adequately isolated from the primary, so that a single disaster cannot destroy both copies.
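
As a minimal illustration of keeping a verified secondary copy, the Python sketch below copies a primary file to separately mounted backup storage and checks it against the original with a checksum. The paths are hypothetical, and a real plan would replicate to a physically separate site.

    import hashlib
    import shutil
    from pathlib import Path

    # Hypothetical paths: the primary data file and a copy on separately
    # mounted (for example, offsite-replicated) backup storage.
    primary = Path("/data/critical/orders.db")
    secondary = Path("/mnt/offsite_backup/orders.db")

    def sha256(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    shutil.copy2(primary, secondary)                # make the second copy
    assert sha256(primary) == sha256(secondary)     # confirm it matches the original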

In fact, a complete disaster recovery plan should also include requirements for recovering applications, networks, and user data, as well as procedures for testing and for training staff.

Disaster recovery planning can be compared to information security planning in many ways: both aim to protect business-critical processes and data assets. However, InfoSec uses a variety of intertwined tactics that are exclusive to security.

Information Security

InfoSec has established its own terminology and its own set of strategies for securing vital data assets. These policies are then reinforced by constant monitoring and periodic analysis to ensure that security precautions are keeping data confidential.

Until recently there has been little exchange between the data protection and information security fields. However, when someone on the data protection side is worried about retrieving data that is encrypted, communication with the InfoSec team is mandatory.

On the other hand, the InfoSec team might collaborate with the data protection team only to confirm that continuous data protection (CDP) resources are being implemented and used. This allows speedy restoration in the wake of a cyber-attack by essentially rolling data back to a point prior to the attack.
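
As a rough sketch of that rollback idea, the Python snippet below simply picks the most recent protection checkpoint taken before a detected attack. The snapshot catalog and timestamps are invented for illustration and are not tied to any particular CDP product.

    from datetime import datetime

    # Hypothetical catalog of continuous data protection checkpoints.
    snapshots = [
        datetime(2017, 9, 1, 2, 0),
        datetime(2017, 9, 1, 8, 0),
        datetime(2017, 9, 1, 14, 0),
        datetime(2017, 9, 1, 20, 0),
    ]

    attack_detected = datetime(2017, 9, 1, 15, 30)

    # Choose the most recent checkpoint taken strictly before the attack,
    # so the restore excludes anything the attacker may have touched.
    candidates = [s for s in snapshots if s < attack_detected]
    restore_point = max(candidates) if candidates else None
    print("Roll back to:", restore_point)   # 2017-09-01 14:00:00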

Together at Last

Believe it or not, the data protection and InfoSec fields have a lot to learn from each other. Data protection has already dipped into quantitative techniques for matching protection services to specific data, given the threats facing the organization. These quantitative methods, Single Loss Expectancy (SLE) and Annual Loss Expectancy (ALE), proved simplistic at face value and were abandoned by disaster recovery experts.
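
For reference, the textbook form of those calculations multiplies an asset's value by an exposure factor to get the SLE, then multiplies the SLE by an annualized rate of occurrence to get the ALE. The figures in the short Python sketch below are invented purely for illustration.

    # Illustrative figures only; not data from the article.
    asset_value = 500_000              # value of the data asset, in dollars
    exposure_factor = 0.40             # fraction of the asset lost in one incident
    annual_rate_of_occurrence = 0.25   # expected incidents per year (one every four years)

    single_loss_expectancy = asset_value * exposure_factor                        # SLE = AV x EF
    annual_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence   # ALE = SLE x ARO

    print(f"SLE = ${single_loss_expectancy:,.0f}")           # SLE = $200,000
    print(f"ALE = ${annual_loss_expectancy:,.0f} per year")  # ALE = $50,000 per year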

InfoSec is moving down a similar path. Attack surface reduction modeling techniques are akin to the same pseudo-scientific, numerical-looking practices as ALE and SLE. Certain experts see these methods as an upgrade over the threat modeling applied by many InfoSec specialists in the 90s. Before the turn of the century, it was widely thought that the cost to protect data should not be much higher than the cost to hackers of sidestepping the security. In practice, the correlation was lopsided, as hackers incurred little to no expense in probing the protection of their targets or in defeating the measures taken to keep them out.

Selecting the Best Server for Your Data Center

Server Selection

To improve bottom-line performance, IT professionals should assess their top priorities and establish a protocol for choosing servers that run their workloads most efficiently.

Some may say that servers are the heart and lungs of the modern internet, but deliberating over how to select a server can often present a confusing range of hardware choices. Even though it’s possible to pack a data center with identical, virtualized, clustered systems that can handle any job, the cloud is forever altering how businesses run applications. As more organizations move workloads to the public cloud, local data centers need fewer resources to host the workloads that remain on site. This is encouraging IT administrators and business professionals to pursue more value and performance from the dwindling server fleet.

These days, the uniformity of computer hardware is being tested by a new trend toward customizing server attributes. Some businesses still operate on the idea that one size fits all when it comes to servers; however, you can select, and even design, server cluster hardware to accommodate specific usage categories.

Figure: server selection, from http://searchdatacenter.techtarget.com/tip/How-to-choose-a-server-based-on-your-data-centers-needs

VM Consolidation and Network I/O

An advantage of server virtualization is the capacity to host several virtual machines on the same physical server in order to use more of the server’s available resources. VMs depend largely on RAM and processor cores. It’s impractical to predict exactly how many VMs can fit on any given server, because they can be configured to use a wide range of memory and processor cores. However, selecting a server with more memory and more processor cores will usually allow more VMs to reside on the same hardware, improving consolidation.
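
As a back-of-the-envelope example, the Python sketch below estimates VM capacity from a server’s core count and RAM, assuming sample per-VM requirements and a CPU overcommit ratio; real consolidation ratios depend on the actual workload profile.

    # Assumed server and per-VM figures for a rough capacity estimate.
    server_cores = 28
    server_ram_gb = 768

    vm_vcpus = 4
    vm_ram_gb = 16

    cpu_overcommit = 4.0    # vCPUs scheduled per physical core
    ram_reserved_gb = 32    # headroom kept for the hypervisor itself

    vms_by_cpu = int(server_cores * cpu_overcommit // vm_vcpus)
    vms_by_ram = int((server_ram_gb - ram_reserved_gb) // vm_ram_gb)

    # Capacity is limited by whichever resource runs out first.
    print("Estimated VM capacity:", min(vms_by_cpu, vms_by_ram))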

For instance, a Dell EMC PowerEdge R940 rack server supports processors with up to 28 cores each and offers 48 DDR4 DIMM slots supporting up to 6 TB of memory. Some system administrators may pass on individual rack servers in favor of blade servers, another form factor, or hyper-converged infrastructure. Servers meant for high levels of VM consolidation should also include resiliency features.

Another consideration when choosing a server for consolidation is network I/O. Enterprise workloads regularly exchange data, access centralized storage resources, and interact with users across the LAN or WAN. Consolidated servers benefit from a fast network interface, such as a 10 Gigabit Ethernet port.
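
A quick way to sanity-check that is to compare the VMs’ aggregate network demand against the port’s capacity, as in the Python sketch below; every number here is an assumption for illustration.

    # Assumed numbers for a quick feasibility check of one 10 GbE port.
    vm_count = 40
    avg_vm_throughput_mbps = 150     # sustained network demand per VM
    nic_capacity_mbps = 10_000       # one 10 Gigabit Ethernet port
    target_utilization = 0.70        # keep headroom for bursts

    aggregate_mbps = vm_count * avg_vm_throughput_mbps
    print("Aggregate demand:", aggregate_mbps, "Mbps")
    print("Fits on one 10 GbE port:",
          aggregate_mbps <= nic_capacity_mbps * target_utilization)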

Visualization and Scientific Computing

Graphics processing units (GPUs) are surfacing at the server level more and more to help with computationally intensive tasks, from big data processing and scientific computing to modeling and visualization. GPUs also allow IT to retain and process sensitive, valuable data sets in a more secure data center rather than letting that data flow out to business endpoints.

Adding GPU capability usually requires little more than an extra GPU card in the server, since there is only a slight effect on the server’s traditional processor, memory, I/O, storage, networking, or other hardware. The GPU adapters found in enterprise-class servers are usually far more advanced than the GPU adapters offered for desktops, and GPUs are increasingly available as highly specialized modules for blade systems.

Take HPE’s ProLiant Graphics Server Blade, for instance. The graphics system boasts support for up to 48 GPUs through the use of multiple graphics server blades. That volume of supported GPU hardware lets many users and workloads share the graphics subsystem.

Info from: http://searchdatacenter.techtarget.com/tip/How-to-choose-a-server-based-on-your-data-centers-needs