Fighting Memory Loss with a New Kind of Computer Brain

Hewlett Packard Enterprise Partners with the DZNE to Apply Memory-Driven Computing in Brain Research

Bonn/Palo Alto/CeBIT, March 20th, 2017. The architecture of all computers in use today is more than 60 years old, and it is already failing to keep pace with the exponential growth of data in many fields. This causes problems, for example in medical research. Among those affected is the German Center for Neurodegenerative Diseases (DZNE) – an institution researching, among other things, Parkinson's and Alzheimer's disease – because imaging and the analysis of genetic information generate immense amounts of raw data.

Hewlett Packard Enterprise's (HPE) "The Machine" research program – one of the biggest research projects in the IT pioneer's history – aims to overcome the limitations of today's computer architecture. As a result of this research, HPE recently launched the first prototype of a radically new computer architecture. At the core of this architecture is no longer the processor but a large pool of memory.

HPE calls this new architecture "Memory-Driven Computing". It can increase compute power several thousandfold. HPE aims to make this technology pervasive – powering everything from tiny compute devices in sensors and cars to container-sized supercomputers. This opens up a whole new world of possibilities in a variety of settings.

HPE has now signed a cooperation agreement with DZNE. As a European partner, DZNE intends to utilize this innovative computer architecture for scientific and medical research. Computers in Palo Alto will be employed alongside on-premises development systems at DZNE. Ultimately, the partners want to accelerate the research process and improve its precision by analyzing larger volumes of data. 

Entirely New Insights for Dementia Research
The DZNE already generates huge amounts of data in many areas – for example when recording images of the brain via magnetic resonance imaging (MRI). Other examples include automated live-cell microscopy and the generation of genetic data. Applications like these already produce multiple terabytes of data today, and these volumes are expected to grow drastically.

While it is already possible at an experimental level to use nanotechnology to study individual genetic building blocks, processing the accumulated data poses enormous challenges for the computing infrastructure. Algorithms, software, and hardware infrastructures capable of handling these memory- and compute-intensive workloads need to be developed. The next step is to explore how the architecture from "The Machine" research program can be used to compute results faster and more comprehensively. DZNE and HPE researchers plan to start initial pilot developments in this area within the next few weeks. Given these possibilities, the scientists expect fundamentally new insights into the causes of Alzheimer's and other forms of dementia.

Appendix: Applications of Memory-Driven Computing in Brain Research
Using magnetic resonance imaging (MRI), the DZNE creates images of subjects' brains. High-speed, high-definition measurements can generate up to 0.5 gigabytes of raw data per second – i.e. up to roughly two terabytes per hour. This raw data, however, is not stored on the device; it is converted directly into images and then discarded. Ultimately, the information contained in these images is about 300 times smaller than the (unstored) raw data.
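To put these figures in perspective, here is a minimal back-of-envelope sketch in Python based on the numbers above; the 0.5 GB/s rate and the factor-300 reduction come from the text, while the one-hour scan duration is purely an illustrative assumption:

```python
# Back-of-envelope estimate of MRI raw-data volumes, using the figures
# quoted above. The one-hour scan duration is an assumption chosen
# purely for illustration.

RAW_RATE_GB_PER_S = 0.5   # raw data rate of a high-speed, high-definition scan
REDUCTION_FACTOR = 300    # stored images are ~300x smaller than the raw data
SCAN_DURATION_S = 3600    # assumed: one hour of scanning

raw_gb = RAW_RATE_GB_PER_S * SCAN_DURATION_S   # 1800 GB, i.e. ~1.8 TB
image_gb = raw_gb / REDUCTION_FACTOR           # ~6 GB of stored images

print(f"Raw data per hour:  {raw_gb / 1000:.1f} TB")
print(f"Stored images:      {image_gb:.0f} GB")
print(f"Discarded raw data: {raw_gb - image_gb:.0f} GB")
```

The sketch shows why the raw data is discarded today: keeping it would mean storing roughly 1.8 TB per scanning hour instead of about 6 GB of finished images.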

This approach, however, has crucial drawbacks: modern techniques could generate images of considerably better quality by filtering out interference signals caused by subject movement or random noise. Currently, this is difficult or even impossible to do after the fact, because the raw data volume is too large to keep. HPE's new architecture offers an ideal platform for storing this data in full and converting it into excellent MRI images. This would be a revolution, not only for MRI studies at the DZNE, but for diagnostic practice in general.

A further advantage: until now, images have been converted, correlated, and then evaluated. This analysis usually takes seven to 14 days. Only then can DZNE scientists define the next step and start a new analytics cycle. The new HPE technology could dramatically accelerate the evaluation of high-volume data sets, making immediate analysis possible. A subject could be re-examined within the same examination period, and scientists could immediately compare current images with previous ones. Transferred to the hospital setting, this would enable personalized diagnostics that are simply impossible today.

In their quest for new therapies, DZNE researchers also use automated microscopes capable of imaging molecular processes in individual living cells. These cells measure only a few micrometers (one micrometer is 1/1,000th of a millimeter). In a single project, researchers take pictures of millions of these cells, creating a data volume of several terabytes. Analyzing this image data requires considerable compute power and takes several days, even weeks, on a machine with 20 CPUs and 100 GB of RAM. Data evaluation thus takes far longer than the automated microscope imaging itself, which is completed in one or two days. Once again, compute power is the bottleneck, and it can only be eliminated by a new computing architecture.
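As a rough illustration of why evaluation becomes the bottleneck, the following sketch compares imaging and analysis throughput. The ranges come from the text above, but the concrete values (5 TB of data, 7 days of analysis, 1.5 days of imaging) are assumptions picked from within those ranges:

```python
# Rough throughput comparison for the live-cell microscopy pipeline.
# Concrete values are assumptions chosen from the ranges quoted above:
# "several terabytes" -> 5 TB, "several days, even weeks" -> 7 days,
# "one or two days" of imaging -> 1.5 days.

DATA_TB = 5.0        # assumed total data volume of one project
IMAGING_DAYS = 1.5   # assumed duration of automated imaging
ANALYSIS_DAYS = 7.0  # assumed analysis duration on 20 CPUs / 100 GB RAM
CPUS = 20

def tb_per_day(volume_tb: float, days: float) -> float:
    """Effective throughput in terabytes per day."""
    return volume_tb / days

imaging_rate = tb_per_day(DATA_TB, IMAGING_DAYS)    # ~3.3 TB/day produced
analysis_rate = tb_per_day(DATA_TB, ANALYSIS_DAYS)  # ~0.7 TB/day analyzed

print(f"Microscopes produce {imaging_rate:.1f} TB/day")
print(f"Analysis processes  {analysis_rate:.1f} TB/day "
      f"({analysis_rate / CPUS * 1000:.0f} GB/day per CPU)")
print(f"Analysis lags imaging by a factor of {imaging_rate / analysis_rate:.1f}")
```

Under these assumptions, the microscopes produce data several times faster than the cluster can analyze it, which is exactly the gap a memory-centric architecture is meant to close.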
