Storage Considerations for Big Data and Analytics
By: Tom Rascon, Chief Technology Officer, US Public Sector - DoD/IC
As we have progressed into the age of data saturation, analysts have lost the ability to effectively filter data without the use of analytical tools. We are producing data at rates that have never been seen before, and those rates continue to grow as the Internet of Things (IoT) expands to include more and more devices. Currently, the largest named unit of data is the yottabyte (10 to the 24th power bytes, roughly 2 to the 80th power), and the International System of Units has been considering even larger prefixes beyond it.
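To make the scale concrete, here is a minimal sketch comparing the decimal (SI) yottabyte with its binary counterpart, the yobibyte; the unit values are standard, and the printed comparison is purely illustrative.

```python
# Compare decimal (SI) and binary data-unit scales.
# A yottabyte (YB) is 10**24 bytes; the closest binary unit,
# the yobibyte (YiB), is 2**80 bytes -- about 21% larger.
SI_YOTTABYTE = 10**24   # decimal prefix "yotta"
BIN_YOBIBYTE = 2**80    # binary prefix "yobi"

ratio = BIN_YOBIBYTE / SI_YOTTABYTE
print(f"1 YB  = {SI_YOTTABYTE:,} bytes")
print(f"1 YiB = {BIN_YOBIBYTE:,} bytes")
print(f"YiB / YB = {ratio:.4f}")  # roughly 1.2089
```

The gap between decimal and binary prefixes matters when sizing storage: vendors quote capacity in decimal units, while operating systems often report binary ones.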
With this increasing amount of data, we must turn to analytical tools to enable us to filter the noise. This filtration and distillation of the important information will allow analysts to focus on the significant data and make timely, effective decisions. This process is known as Big Data Analytics, and it is used throughout the Department of Defense and the Intelligence Community.
The client-server model we used successfully during the later stages of the twentieth century was effective at delivering relatively small amounts of data to the customers who needed it. But as the amount of data that users require grows dramatically, we must evolve to the next level, where analytical tools directly shape the information we receive. To do this, there are considerations we must make when planning our storage environment. Not all data and storage are created equal; therefore, we must account for those differences as we proceed into the future. This presentation will illustrate those differences and provide recommendations for storage considerations.
Not all data is created equally; therefore, your storage shouldn’t be either.
Data has surpassed the analysts’ ability to consume it.
The problem is only getting worse as more and more items become data producers.