
THREE REASONS TO START PLANNING YOUR IT INFRASTRUCTURE UPGRADE


Microsoft will soon be ending its customer support for Windows Server 2008. What does this mean for you and your organization?

Well, the end of one era always means the beginning of another. This could be the perfect opportunity to ramp up production, tighten security, and make improvements throughout your infrastructure.

As much as we all tend to preach about the importance of staying up to date with the latest and greatest equipment in the IT industry, it’s easier said than done.

The fact of the matter is that more than half of all servers in operation are five to seven years old and run aging software like Microsoft Windows Server 2008.

Image Courtesy of Microsoft

In recent news from the Microsoft Ignite conference, Microsoft will stop support for Windows Server 2008 and 2008 R2 effective January 14, 2020.

They also plan to terminate support for Microsoft SQL Server 2008 and 2008 R2 on July 9, 2019. If your organization is one of the many businesses that still uses these systems, you could be directly affected.

The news isn’t all bad though. Microsoft’s end of service could be the inspiration your organization needs in order to implement a full IT renovation, from up-to-date software solutions to the servers that propel them forward.

Need even more motivation for a data center facelift? We’ve put together three reasons to consider, based on challenges that technology experts are facing and the direct benefits they’re receiving from a well-orchestrated server overhaul.

Image Courtesy of Device42

REASON ONE: YOU’LL BE READY FOR MORE DEMANDING WORKLOADS

Recent surveys of IT professionals and industry leaders suggest that analytics and AI strategy are among their top priorities with regard to infrastructure investments.

What’s more, enterprise IP traffic is projected to triple by 2020. With these developments, it’s no surprise there’s a growing strain on IT that warrants an updated data center to sustain it.

Let’s be real here: there’s no such thing as “business as usual” anymore, not just in IT but in any industry. To stay competitive in any market, businesses must welcome change and embrace adaptability. In terms of IT, modernization is critical.

According to 71% of those surveyed, the biggest roadblock preventing their IT transformation is an aging infrastructure. Businesses that operate on legacy systems find it nearly impossible to compete.

Their archaic data centers just weren’t built to keep up with the modern demands of a digital world.

Image Courtesy of ComputerWorld.com

Modernization of your organization’s infrastructure is the most efficient strategy to stay competitive for the long haul.

A well-orchestrated renovation also brings opportunities to take full advantage of recent server technologies, effortlessly handling workloads that would bog down any legacy system.

For instance, new equipment running Windows Server 2019, optimized for Intel Xeon Scalable processors, delivers a 4X performance increase over comparable five-year-old systems.

REASON TWO: YOU’LL BENEFIT FROM INCREASED SECURITY

It’s no secret that the number of security breaches and cyber-attacks on businesses continues to grow astronomically, with a projected global impact of almost $2.1 trillion by 2019.

An older, weaker operating system leaves you vulnerable to a wide range of attacks on business-critical systems. The last thing any organization needs is a list of compliance failures that could end valued relationships.

Ensuring your system is safeguarded against ransomware, and protecting customers’ proprietary information to GDPR and HIPAA standards, is vital.

Having an updated IT infrastructure allows you to deploy the latest security measures for data protection and encryption.

Modern servers come furnished with a collection of multi-layered security resources, such as Windows Defender Advanced Threat Protection and Intel Trusted Execution Technology.

Modern security can be built deep within an organization’s infrastructure, and therefore out of reach of hackers. With features such as next-gen firewalls, software-defined networking security, and identity and access management, newer systems place much larger obstacles in the way of attacks.

REASON THREE: YOU’LL BE READY FOR THE FUTURE

Decrease total operating costs – Organizations that modernize experience up to 69 percent less revenue loss. The expenses of maintaining aging systems, unplanned downtime, and greater power usage all add up.

Simplify your transition to the cloud – Studies have shown that by 2020, 90 percent of businesses will have developed a cloud strategy to support mission-critical applications. Updating your IT infrastructure will ensure you don’t get left behind.

Support expanding workloads – Organizations that update their systems can speed time-to-insight from analytics and AI technologies.

Enjoy the benefits of Windows Server 2019 – The advantages of the server upgrade include improved application platforms, containerization, pervasive encryption, and more.

DON’T WAIT TO START PLANNING YOUR INFRASTRUCTURE UPGRADE

Even if your current legacy system is still ultra-reliable, you’ll want to take a proactive approach to planning a server upgrade before Windows Server 2008 support goes away.

There is still plenty of time to both plan a serviceable upgrade strategy, and to take the steps necessary to complete it.

No matter which modernization options you wish to explore, whether hybrid cloud, hyperconverged infrastructure, virtualized networks, or the full capabilities of Windows Server 2019, DTC Computer Supplies can help.

Data Protection and Information Security – Together at Last.


Any IT professional who has spent several years around a corporate data center has more than likely grown accustomed to the separation between the data protection and information security fields. The division in specialties has long historical roots, but does it really make sense anymore?

Data Protection

Data protection is a major component of any corporate disaster recovery plan. A disaster recovery plan is a set of strategies and processes put in place to prevent, avoid, and minimize the impact of a data loss in the event of a catastrophe. Data protection is essential to a disaster recovery plan as business-critical data cannot be substituted.

The only way to protect data is to make a copy of the original and store the copy adequately separated from the primary. That way, a single disaster cannot destroy both copies.
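The copy-and-separate principle above can be sketched in a few lines. This is a minimal illustration under our own assumptions (the function names and the checksum-verification step are not from any particular product), not a production backup tool:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so a copy can be verified against the original."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def protect(primary: Path, secondary_dir: Path) -> Path:
    """Copy a business-critical file to a separate location and verify the copy.

    In a real disaster recovery plan, secondary_dir would sit on separate
    hardware at a separate site, so one disaster cannot destroy both copies.
    """
    secondary_dir.mkdir(parents=True, exist_ok=True)
    copy = secondary_dir / primary.name
    shutil.copy2(primary, copy)  # preserves timestamps along with contents
    if sha256_of(copy) != sha256_of(primary):
        raise IOError("secondary copy does not match the primary")
    return copy
```

The verification step matters: an unverified copy is not protection, because only the checksum comparison proves the secondary copy is actually restorable.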

Beyond data protection, a sufficient disaster recovery plan should also include requirements for application, network, and user data recovery, as well as procedures for testing and training management.

Disaster recovery planning can be compared to information security planning in many ways. Both aim to protect business-critical practices and data assets. However, InfoSec uses various intertwining tactics that are exclusive to security.


Information Security

InfoSec has established its own terminology and set of strategies for securing vital data assets. These policies are then reinforced by constant monitoring and periodic analysis to ensure that security precautions are keeping data confidential.

Until recently, there was little exchange between the data protection and information security fields. However, when someone in data protection is worried about retrieving data that is encrypted, communication with the InfoSec team is mandatory.

On the other hand, the InfoSec team might only collaborate with the data protection team to confirm that continuous data protection resources are being implemented and used. This allows speedy restoration in the wake of a cyber-attack by rolling data back to a point prior to the attack.

Together at Last

Believe it or not, the data protection and InfoSec fields have a lot to learn from each other. Data protection has already dipped into quantitative techniques for matching protection services to data, given the threats facing the organization. These quantitative methods, Single Loss Expectancy (SLE) and Annual Loss Expectancy (ALE), proved trivial at face value and were abandoned by disaster recovery experts.
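For readers unfamiliar with the two methods named above, they reduce to a pair of simple formulas: SLE is the asset’s value multiplied by the fraction of that value lost in one incident, and ALE multiplies the SLE by the expected number of incidents per year. A minimal sketch (the dollar figures are illustrative, not taken from any study):

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE: expected loss from a single occurrence of a threat.
    exposure_factor is the fraction of the asset's value lost (0.0 to 1.0)."""
    return asset_value * exposure_factor

def annual_loss_expectancy(sle: float, aro: float) -> float:
    """ALE: expected yearly loss. aro is the annualized rate of occurrence,
    e.g. 0.2 for an incident expected once every five years."""
    return sle * aro

# Illustrative example: a $500,000 data asset, 40% of its value lost per
# incident, with an incident expected once every five years.
sle = single_loss_expectancy(500_000, 0.4)  # 200000.0
ale = annual_loss_expectancy(sle, 0.2)      # 40000.0
```

The simplicity is exactly the “trivial at face value” problem the text describes: the hard part is estimating the exposure factor and occurrence rate, not the multiplication.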

InfoSec is moving down a similar path. Attack surface reduction modeling techniques are akin to the same pseudo-scientific, numbers-driven practices as ALE and SLE. Certain experts see these methods as an upgrade over the threat modeling applied by many InfoSec specialists in the 90s. Before the turn of the century, it was widely thought that the cost to protect data should not be much higher than the cost to hackers of sidestepping the security. In practice, though, the correlation was lopsided, as hackers suffered little to no expense in testing the protection of their targets or in routing around the measures taken to keep them out.

Selecting the Best Server for Your Data Center


Server Selection

To improve bottom-line performance in the workplace, IT professionals should assess their top priorities and establish a protocol for choosing servers that fit their most important workloads.

Some may say that servers are the heart and lungs of the modern internet, but deliberating over how to select a server can often present a confusing range of hardware choices. Even though it’s possible to pack a data center with identical, virtualized, and clustered systems that can handle any job, the cloud is forever altering how businesses run applications. As more organizations move workloads to the public cloud, local data centers need fewer resources to host the workloads that remain on site. This is encouraging IT administrators and business professionals to pursue more value and performance from the dwindling server fleet.

These days, the uniformity of computer hardware is being tested by a new trend toward customizing server attributes. Some businesses still operate on the idea that one size fits all when it comes to servers. However, you can opt for, and even design, server cluster hardware to accommodate specific usage categories.


Figure from: http://searchdatacenter.techtarget.com/tip/How-to-choose-a-server-based-on-your-data-centers-needs

VM Consolidation and Network I/O

An advantage of server virtualization is the capacity to host several virtual machines on the same physical server in order to use more of the server’s existing resources. VMs largely depend on RAM and processor cores. It’s impractical to say exactly how many VMs can exist on any given server, because they can be configured to use widely varying amounts of memory and processor cores. However, selecting a server with more memory and processor cores will usually permit more VMs on the same server, improving consolidation.
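The sizing logic above can be expressed as a back-of-the-envelope estimate: a host supports whichever is smaller, the number of VMs its cores allow or the number its memory allows. The function, the hypervisor RAM reservation, and the example numbers below are illustrative assumptions, not vendor sizing guidance (real deployments often overcommit CPU, which raises the core-bound figure):

```python
def max_vms(server_cores: int, server_ram_gb: int,
            vm_cores: int, vm_ram_gb: int,
            host_reserved_ram_gb: int = 16) -> int:
    """Rough upper bound on identical VMs per host: the tighter of the
    core constraint and the memory constraint, after reserving some RAM
    for the hypervisor itself (a simplifying assumption)."""
    by_cores = server_cores // vm_cores
    by_ram = (server_ram_gb - host_reserved_ram_gb) // vm_ram_gb
    return max(0, min(by_cores, by_ram))

# E.g. a 56-core, 768 GB host running 4-vCPU / 32 GB VMs:
# min(56 // 4, (768 - 16) // 32) = min(14, 23) = 14 VMs, so this host
# is core-bound; adding cores, not RAM, would raise consolidation.
estimate = max_vms(56, 768, 4, 32)  # 14
```

The useful part of the sketch is seeing which constraint binds: a core-bound host benefits from more processors, a memory-bound host from more DIMMs.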

For instance, a Dell EMC PowerEdge R940 rack server can host up to four 28-core processors and offers 48 DDR4 DIMM slots that support up to 6 TB of memory. Some system administrators may pass on individual rack servers in favor of blade servers, either for the form factor or as part of a hyper-converged infrastructure. Servers meant for high levels of VM consolidation should also include server resiliency features.

Another thing to consider when choosing a server for consolidation is network I/O. Enterprise workloads regularly exchange data, access centralized storage resources, and interface with users across the LAN or WAN. Consolidated servers can take advantage of a fast network interface, such as a 10 Gigabit Ethernet port.

Visualization and Scientific Computing

Graphics processing units (GPUs) are surfacing at the server level more and more to help with mathematically intensive tasks, from big data processing and scientific computing to modeling and visualization. GPUs also allow IT to retain and process sensitive, valuable data sets in a more secure data center rather than let that data flow out to business endpoints.

Adding GPU capability often requires little more than an extra GPU card in the server, since there is only a slight effect on the server’s traditional processor, memory, I/O, storage, networking, and other hardware. The GPU adapters contained in enterprise-class servers are usually far more advanced than the GPU adapters offered for desktops. Graphics processing units are also increasingly available as highly specialized modules for blade systems.

Take HPE’s ProLiant Graphics Server Blade, for instance. The graphics system boasts support for up to 48 GPUs through the use of multiple graphics server blades. This huge volume of supported GPU hardware allows several users and workloads to share the graphics subsystem.

Info from: http://searchdatacenter.techtarget.com/tip/How-to-choose-a-server-based-on-your-data-centers-needs

Data Storage Faceoff: Tape vs Disk vs Cloud


In September 2010, Virgin Blue airline’s check-in and online booking systems went down for a period of 11 days, affecting around 50,000 passengers and 400 flights. Ultimately, the downtime ended up costing the company over $20 million. So, what is the true cost of downtime, you ask? Studies have shown that partial data storage outages cost an average of $5,600 per minute, or over $300,000 per hour! Depending on the industry, a single downtime event can run into millions of dollars of lost revenue. Just ask Virgin Blue.
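The per-minute average above scales linearly, which is all the arithmetic behind the hourly figure. A trivial sketch (the constant is the average cited above; real outage costs vary widely by industry and incident):

```python
# Average cost of a partial data storage outage, per the study cited above.
COST_PER_MINUTE = 5_600

def downtime_cost(minutes: float, cost_per_minute: float = COST_PER_MINUTE) -> float:
    """Linear estimate of the cost of an outage of the given length."""
    return minutes * cost_per_minute

one_hour = downtime_cost(60)  # 336000.0-dollar hourly figure
```

Even this crude linear model makes the business case for backup planning: at the average rate, every hour of outage avoided pays for a great deal of storage infrastructure.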

Most businesses have (or should have) plans in place for emergency outages, and data backup procedures are a critical part of those plans. However, as the number of possible backup options multiplies with the introduction of new technologies, it can be difficult to ensure you make the right data backup decision.

Whether companies are looking to upgrade and retire aging infrastructure or are just preparing for the future, an assessment of the options is needed. Welcome to the data storage faceoff: tape vs. disk vs. cloud backup.

Data Tape Storage Backup

Back in the 1960s and 1970s, tape was the main way to store any backup data. Since then, it has evolved and grown with changes in technology and expanding infrastructures for more than forty years. Deciding to implement or continue using tape will leave your organization needing very few infrastructure improvements.

Tape is also the most affordable data storage option, thanks to minimal infrastructure enhancements and the low cost of tape itself. Usually, tape is an inexpensive alternative to both disk and cloud storage options; however, it’s dependent on the total amount and type of data your organization backs up.

Another major benefit of using tape as a main data storage method is the ability to take advantage of multi-site storage. As with any organization’s emergency downtime initiative, it is ideal to ensure multi-site data storage, so that a disaster at one storage site doesn’t risk destroying all of the company’s proprietary data. Nevertheless, the multi-site tape storage method can be pricey once you factor in the costs of secure tape transfer and storage in data centers at off-site facilities. Although tapes have certainly been in existence the longest, like everything else they’re bound to evolve.

Hard Disk Storage Backup

Disk-based backup, such as hard drives, is vastly quicker and more reliable for data recovery than tape backup. Writing from disk to disk is simply a more efficient method of data transfer than writing from disk to tape.

Likewise, restoration from disk-based storage is a fairly efficient process, as it avoids retrieving, sequencing, and then replicating tapes one at a time.

The biggest downside to disk-based backups is that they’re located in an on-site facility. For multi-location storage, a third party is needed for off-site backup. If you don’t want to use an off-site storage provider, it can become extremely expensive to keep increasing disk space.

Cloud Storage Backup

Cloud-based backup refers to an online, off-site backup built on cloud enablement technologies. Organizations can typically store data in the cloud with superior cost efficiency, by eliminating the need to buy and refresh tapes or disks. Cloud backup is also a much less painstaking process, as data replication is done as a service.

Cloud backup normally consists of multi-site data storage. A local data copy can live in an on-site appliance like the SmartFrame, which also replicates data to your off-site data storage provider. These applications and enablement technologies run constantly in the background of your IT processes, removing some manual IT work.

Since cloud-based backup is still a relatively new technology, there are some perceived security concerns around the cloud. Questions remain over whether cloud backup opens an organization’s data to being hacked or even leaked by a third party. There are also concerns about accidental mixing of data with other customers’ data in the same data center.

Decision Time

No matter which data backup method an organization chooses to implement, it will have its fair share of both benefits and drawbacks, as technology continues to advance in speed, efficiency, and security. When the time is right for your business to install a data storage infrastructure or upgrade the one currently in place, let We Buy Used Tape do the work for you. Since 1965, We Buy Used Tape has been helping IT professionals and their companies efficiently update their data backup systems. Used and surplus tape and disk storage can be repurposed, helping businesses increase their initial return on investment while staying socially responsible. Contact one of our IT experts today for a hassle-free quote on your IT assets. Let us take care of your infrastructure update and security from start to finish.

 


Evolution of the Data Center


A data center is a large group of networked computer servers, typically used by organizations for the remote storage, processing, or distribution of large amounts of data. Not too long ago, the data center was the centerpiece of any IT structure. Although data centers may still be essential to various IT procedures and their responsibility remains the same, their evolution is something to behold.

The Rise of Computers

Personal computers were introduced to the world in 1981, leading to a rise in the microcomputing industry. In the early 1990s, microcomputers began filling old mainframe computer rooms as servers, and those server housings quickly became known as data centers. Before you knew it, businesses were building out banks of servers within their own facilities.

The .com Boom

By the mid-90s, the “.com boom,” as we all know it, pushed companies to want faster internet connectivity and around-the-clock operations. This surge in internet demands resulted in the construction of server rooms holding hundreds or even thousands of servers. By this time, the data-center-as-a-service model had become popular.

Cloud Services

It wasn’t until the turn of the millennium that cloud services came into the picture. Cloud storage is a cloud computing model in which data is stored on remote servers accessed via the internet. From 2002 to 2006, Amazon Web Services went from developing cloud-based services to offering IT infrastructure services. The Amazon infrastructure services included data storage, computation, and human intelligence through “Amazon MTurk.”

With the rapid spread of cloud services, the data center is no longer so much about metal server rooms as it is about strategic assets. In some cases, businesses’ IT infrastructures are not equipped for cloud services, or have explicit compliance needs that require a closer eye.

Evolution and Upgrading IT Services

As the cloud transition ensues, two thought-provoking shifts are taking place. First, businesses will be retiring data center assets long before their useful life has ended, giving others an opportunity to find higher-end servers and storage devices at more affordable prices.

Second, as physical data centers adapt to the ever-changing organizations, the server rooms of smaller companies will need to keep up with the newer equipment available to them.

Since 1965, We Buy Used IT Equipment has been helping organizations from healthcare, government, and educational institutes to small and large businesses alike. Making the transition from old to new data center services can be overwhelming, so let us make it easy on you.

We Buy Used IT Equipment has been saving both the environment and companies money by setting the standard for the secure handling and repurposing of used IT assets for over half a century. Our spotless reputation ensures your transactions are handled efficiently, ethically, and securely. With thousands of transactions processed, we have never had a single security breach or data loss. Contact one of our friendly IT product experts today for a fast and fair quote on your used assets.