Memory-Driven Computing
Next-generation sequencing (NGS) and its manifold applications are transforming the life and medical sciences, as reflected by an exponential, 860-fold increase in published data sets during the last 10 years. Considering human genome sequencing alone, the volume of genomic data over the next decade has been estimated to exceed three exabytes (3×10^18 bytes). NGS data are projected to become on par with, or even surpass, the largest data collections such as those in astronomy or on YouTube in terms of data acquisition, storage, distribution, and analysis. To cope with this data avalanche, compute power has so far simply been increased by building larger high-performance computing (HPC) clusters (more processors/cores) and by scaling out and centralizing data storage (cloud solutions). While still viable today, it is already foreseeable that this approach will not be sustainable.
Traditionally, computers are based on the von Neumann architecture. Such systems are scaled by adding more resources, as in clusters or supercomputers, but this strategy is limited by the end of Moore's law. One approach to overcome this limitation is memory-driven computing, a paradigm shift that puts memory at the center of the compute infrastructure to support today's data-driven applications. In a memory-driven system, a pool of devices is connected into a single environment through the optical Gen-Z fabric. Gen-Z can address up to 4096 yottabytes of memory across up to 16 million devices, making it possible to combine the computational power of 1600 exascale computers.
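From an application's point of view, the key consequence of this architecture is that a large, fabric-attached memory pool appears as ordinary byte-addressable memory: data structures are manipulated in place with loads and stores rather than copied through a block I/O stack. The following C program is a minimal sketch of this programming model only, not of any Gen-Z or vendor API (real systems expose such pools through dedicated libraries, e.g., HPE's OpenFAM); here an anonymous mmap region stands in for a fabric-attached segment, and the segment size, structure layout, and the label "ngs-variant-index" are purely illustrative.

/*
 * Sketch of the memory-driven programming model: the application treats
 * a (here: simulated) fabric-attached memory segment as ordinary
 * byte-addressable memory and builds a data structure directly in it.
 * MAP_ANONYMOUS stands in for mapping a real fabric-attached device;
 * the segment size below is arbitrary.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

#define SEGMENT_SIZE (64UL * 1024 * 1024)  /* 64 MiB stand-in segment */

typedef struct {
    uint64_t record_count;   /* number of records stored in the segment */
    char     label[56];      /* human-readable segment name */
} segment_header_t;          /* 64 bytes, so records start aligned */

int main(void)
{
    /* On a real memory-driven system this mapping would expose a region
     * of the shared, fabric-attached pool; here we simulate it locally. */
    void *pool = mmap(NULL, SEGMENT_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (pool == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Data structures live directly in the pool: no serialization,
     * no file system, no block I/O -- just loads and stores. */
    segment_header_t *hdr = (segment_header_t *)pool;
    hdr->record_count = 0;
    strncpy(hdr->label, "ngs-variant-index", sizeof(hdr->label) - 1);

    uint64_t *records = (uint64_t *)(hdr + 1);
    for (uint64_t i = 0; i < 1000; i++) {
        records[i] = i * i;      /* store each record directly in the pool */
        hdr->record_count++;
    }

    printf("segment '%s' holds %llu records\n",
           hdr->label, (unsigned long long)hdr->record_count);

    munmap(pool, SEGMENT_SIZE);
    return 0;
}

Because the data never leave memory, analyses over very large data sets (such as NGS cohorts) avoid the serialization and I/O copying steps that dominate runtime on conventional storage-centric systems.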