
Data Processing: Size Matters

An article about the University of Illinois signing a $188 million supercomputer deal with Cray Inc. came out this month, and a lot of people were amazed by the huge amounts of data these supercomputers can process. Computing power is growing incredibly fast: the university will be working with about 1.5 petabytes of information each day, on a system with a capacity of 500 petabytes.

500 petabytes! If you have no inkling of how enormous that is, start with the fact that a single petabyte is a quadrillion bytes. Search engines like Google reportedly process roughly 24 petabytes every single day; that should give you an idea of how big a petabyte is.
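
If you want to see those figures written out, here is a quick back-of-the-envelope sketch in Python. It is purely illustrative and uses decimal (SI) units; the 24-petabytes-a-day number is simply the figure quoted above, not a fresh measurement.

```python
# Back-of-the-envelope look at the petabyte figures quoted above (decimal/SI units).
PETABYTE = 10 ** 15                 # one quadrillion bytes

google_daily_bytes = 24 * PETABYTE  # ~24 PB processed per day, as quoted above
seconds_per_day = 24 * 60 * 60

print(f"1 petabyte    = {PETABYTE:,} bytes")
print(f"24 PB per day = {google_daily_bytes:,} bytes")
print(f"...which works out to roughly "
      f"{google_daily_bytes / seconds_per_day / 10**9:,.0f} GB every second")
```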

But that is nothing compared to what IBM is planning to build for the Netherlands’ Foundation for Research in Astronomy. The $2.1 billion deal is not about supercomputers but about “exascale” computers. An exascale computer is a machine that can process an exabyte of data each day. And what is an exabyte? About 1,000 petabytes; yes, start multiplying those quadrillion bytes by one thousand.
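
The jump to exascale is easier to appreciate as a sustained data rate. Here is a minimal sketch, again assuming decimal units and taking “an exabyte of data each day” at face value.

```python
# What "an exabyte of data each day" means as a sustained rate (decimal/SI units).
PETABYTE = 10 ** 15
EXABYTE = 1000 * PETABYTE            # 10**18 bytes: a quadrillion bytes, times a thousand

seconds_per_day = 24 * 60 * 60
rate_bytes_per_sec = EXABYTE / seconds_per_day

print(f"1 exabyte = {EXABYTE:,} bytes")
print(f"Processing it in a day means about "
      f"{rate_bytes_per_sec / 10**12:.1f} TB every second, sustained")
```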

You probably have some idea of how much information moves across the internet on a daily basis. Well, an exabyte is roughly twice that amount. Colossal is an understatement! But it appears mankind isn’t satisfied with that yet; we’re just getting started.

The National Security Agency is planning to build a $2 billion facility in Utah that will process and store the entire agency’s data. We’re talking yottabytes of information to be stored in this facility. What is a yottabyte, you may ask? It’s equivalent to a quadrillion gigabytes, and it sits just about at the ceiling of data measurement (for now), since no prefix has been coined yet for the next order of magnitude. With this staggering amount of data comes a plan for a hugely powerful computer that can execute a petaflop, that is, a quadrillion floating-point operations per second.
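
To put the yottabyte and petaflop figures side by side, here is another small sketch. It assumes decimal units, and the 500-petabyte capacity is simply the Illinois figure from earlier, reused only for scale.

```python
# Sanity-checking the yottabyte and petaflop figures (decimal/SI units).
GIGABYTE = 10 ** 9
PETABYTE = 10 ** 15
YOTTABYTE = 10 ** 24

assert YOTTABYTE == 10 ** 15 * GIGABYTE   # a quadrillion gigabytes, as stated above

# How many 500 PB systems (the Illinois capacity mentioned earlier) would hold a yottabyte?
systems_needed = YOTTABYTE // (500 * PETABYTE)
print(f"1 yottabyte = {YOTTABYTE:,} bytes")
print(f"That is {systems_needed:,} machines with 500 PB of storage each")

PETAFLOP = 10 ** 15                        # floating-point operations per second
print(f"1 petaflop = {PETAFLOP:,} floating-point operations every second")
```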

These are mind-blowing figures, and some people might be wondering what kind of technology these computer makers will use to process data at that scale efficiently. The power draw alone is staggering – 200 megawatts – enough to supply up to 200,000 homes. So how on earth can a machine this enormous be built, maintained, and still kept efficient?
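
The power comparison is simple arithmetic, too. A tiny sketch, using only the 200-megawatt and 200,000-home figures quoted above:

```python
# The power comparison quoted above: 200 MW spread across 200,000 homes.
facility_watts = 200 * 10 ** 6   # 200 megawatts
homes = 200_000

watts_per_home = facility_watts / homes
print(f"That works out to about {watts_per_home / 1000:.1f} kW per home on average")
```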

Knowing that storing more data on your computer means using up essential resources, and that running low on those resources can mean poor performance, it will be interesting to see how the projects mentioned above come to life. When it comes to data processing, size does matter, and you’ve probably had your own dose of the irritating problems that come with processing huge amounts of data.

Just think: with personal computers, many users already experience poor PC performance with only a few hundred gigabytes of data, or even less. Some users find themselves turning to optimization tools such as PC Speed Up to improve their computer’s speed and performance.

Now we’re talking thousands and even quadrillions of gigabytes. It’s safe to assume that these supercomputers and exascale machines will have (or should have) their own built-in optimization tools to head off performance issues down the road.


This article was submitted through TechGeec’s article submission form.