Total traffic on the internet this year is going to surpass the one zettabyte (1 billion terabytes) mark. An IDC study sponsored by storage giant EMC tells us that we need to buy more storage because the digital universe will reach 40 zettabytes by 2020.
Let's put this in context, though. You won't be able to buy petabyte hard drives any time soon, let alone zettabytes; some of the largest storage arrays in the world today only contain about half a petabyte. It's also important to point out that much of the world's data (about three-quarters) is just copies. In reality, only about a third of a zettabyte of original, new information will be created during 2016.
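The unit arithmetic behind these figures is easy to check. The sketch below uses decimal (SI) units; the variable names are mine, chosen to mirror the comparisons above.

```python
# Decimal SI storage units used in the article's figures.
TB = 10**12   # terabyte
PB = 10**15   # petabyte
ZB = 10**21   # zettabyte

# A zettabyte really is a billion terabytes.
terabytes_per_zettabyte = ZB // TB

# How many of today's largest (~half-petabyte) storage arrays
# would it take to hold one zettabyte?
arrays_per_zettabyte = ZB // (PB // 2)
```

Run it and `terabytes_per_zettabyte` comes out at one billion, while `arrays_per_zettabyte` is two million, which gives a feel for how far even the biggest arrays are from zettabyte scale.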
Even so, this is still a lot of data.
How are we able to handle the processing of this massive throughput of data? Step forward Team Fiber Optics and Team 40GBASE-T Ethernet.
Currently, data centres use Gigabit and 10-Gigabit Ethernet as their backbone technology of choice, and in many cases run Gigabit to the desktop thanks to the falling cost of Gigabit Ethernet switches and interface cards. Until now this has been sufficient throughput for most SMB and enterprise networks, but with the sort of numbers I just mentioned it will no longer be enough. Users will require access to huge global databases as well as thousands of IP-enabled devices in order to run their business.
10GBASE-T, or 10-Gig Ethernet over copper twisted-pair cabling, was standardized back in 2006. It runs over Category 6 cabling (which can stretch to 55 metres), and over Category 6a and the newer Category 7 cabling (which can reach 100 metres). At the moment, 40GBASE-T, or 40-Gig Ethernet, is being developed alongside the Category 8 cabling that will be required to run 40-Gig over copper twisted pair.
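The copper reach limits quoted above are easy to capture in a small lookup table. This is an illustrative sketch, not a cable tester: the figures are the ones in the text (55 metres for Category 6, 100 metres for Category 6a and 7 at 10-Gig), and the helper name is mine.

```python
# Maximum 10GBASE-T run lengths per cable category, in metres,
# as quoted in the article.
MAX_RUN_10GBASE_T = {
    "cat6": 55,
    "cat6a": 100,
    "cat7": 100,
}

def run_supported(category: str, length_m: float) -> bool:
    """Return True if a 10GBASE-T link of the given length is within
    the quoted limit for the cable category; unknown categories fail."""
    limit = MAX_RUN_10GBASE_T.get(category.lower())
    return limit is not None and length_m <= limit
```

So a 60-metre run over plain Category 6 is out of spec for 10-Gig, while the same run over Category 6a is fine.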
As well as 10-Gig Ethernet, the data challenge can be met by fibre optic connectivity. Of course, fibre is usually more costly than a copper-based solution, but it can, right now, support terabits of throughput on a single strand of single-mode fibre. The higher system cost is down to the expensive laser-driven fibre transceivers required to transmit and receive voice, video and data packets at such speeds.
Fibre does have other positives, however. It uses less power to transmit over longer distances, and copper backbones and Intermediate Distribution Frames (IDFs) can be converted to fibre running direct from a centralized data centre all the way to the desktop. When you eliminate the need for IDFs, you eliminate the need for all the extra space, cooling, cabling and power backup they require.
Copper also has limited run distances, so getting data from A to B can be an issue if the connection is greater than 100 metres. Not so fibre.
One thing is for sure: the demand for data from businesses and consumers will keep growing exponentially, and using both fibre optic and high-speed Ethernet connectivity will help quench our thirst for even more data.
As the growth of ecommerce continues, server uptime is more essential to businesses than ever. Uptime is a measurement of how long servers remain operational without crashing or rebooting. The interactions between servers and their environment often pose a major risk to server availability.
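Uptime is usually expressed as availability: the fraction of a measurement window during which the server was operational. A minimal sketch of that calculation follows; the function name and the example figures are mine, not from the article.

```python
def availability_pct(total_hours: float, downtime_hours: float) -> float:
    """Percentage of the measurement window the server was operational."""
    return 100.0 * (total_hours - downtime_hours) / total_hours

# Example: 8.76 hours of downtime across a 8760-hour year
# works out to 99.9% availability ("three nines").
year_availability = availability_pct(8760, 8.76)
```

Framed this way, every extra "nine" of availability cuts the permitted annual downtime by a factor of ten, which is why environmental risks to servers matter so much.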
Damage caused by the environment can often go unnoticed or be incorrectly blamed on other causes. Condensation, rust, and heat damage are usually hidden inside machines, out of human sight.