Quick Memory


Computer memory capacity has expanded greatly, allowing machines to store vast amounts of data, but each trip the central processing unit, or CPU, makes to that memory takes time, slowing the machine and negating the gains that a large memory provides.

Song Jiang, a UTA associate professor in the Department of Computer Science and Engineering, received an NSF grant to improve how computers access memory.

To counteract this issue, known as the memory wall, computers use a cache, a hardware component that stores recently accessed data so it can be retrieved faster in the future. Song Jiang, an associate professor in the Department of Computer Science and Engineering at The University of Texas at Arlington, is using a three-year, $345,000 grant from the National Science Foundation to explore how to make better use of the cache by allowing programmers to directly access it in software.

“Efficient use of a software-defined cache allows quick access to data along with large memory. With memory becoming more expansive, we need to involve programmers to make it more efficient. The programmer knows best how to use the cache for a particular application, so they can add efficiency without making the cache a burden,” Jiang said.

When a computer accesses its memory, it must search through the index of all the data stored there, and it must repeat that search each time it returns to the memory. Each step slows the process. With a software-defined cache, the computer can combine or skip steps to reach the data it needs without traversing the index from the beginning each time. Jiang has studied these issues for several years and has developed four prototypes, which he will test to determine whether they can serve large memories without slowing CPU speeds.
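The idea described above can be sketched in a few lines of code. The example below is purely illustrative and is not drawn from Dr. Jiang's prototypes: all names (`SoftwareCache`, `slow_index_lookup`, the block keys) are assumptions made for the sketch. It shows how an application-level cache lets a program skip the slow full-index traversal on repeated accesses.

```python
# Minimal sketch of a software-managed cache: the application checks a
# small in-memory map before falling back to a slow full-index lookup.
# All names here are illustrative, not part of any real prototype.

class SoftwareCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}      # key -> value for recently accessed data
        self.lookups = 0       # counts slow index traversals performed

    def slow_index_lookup(self, key, index):
        # Stand-in for walking the full memory index from the beginning.
        self.lookups += 1
        return index[key]

    def get(self, key, index):
        if key in self.entries:        # cache hit: skip the index walk
            return self.entries[key]
        value = self.slow_index_lookup(key, index)
        if len(self.entries) >= self.capacity:
            # Evict the oldest-inserted entry to stay within capacity.
            self.entries.pop(next(iter(self.entries)))
        self.entries[key] = value
        return value

index = {f"block{i}": i * 100 for i in range(10)}
cache = SoftwareCache()
cache.get("block3", index)   # miss: performs one slow index lookup
cache.get("block3", index)   # hit: served from the cache, no lookup
print(cache.lookups)         # → 1
```

Because the cache lives in software, the programmer decides what to keep, when to evict, and which lookups to skip, which is exactly the kind of application-specific control the hardware cache alone cannot offer.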

The current trend in technology is toward non-volatile memory, or NVM. NVM is expected to be much denser, larger, and less expensive, providing many terabytes of memory. Access speeds will not change much, but the larger capacity will also increase the time needed to search the index. If Jiang is successful, speeds will keep pace with technology.

“As we ask our computer systems to work with increasingly large data sets, speed becomes an issue. Dr. Jiang’s work could provide a breakthrough in how software developers approach software-defined caches and, as a result, make it easier and less time-consuming to analyze big data,” Hong Jiang said.

Written by Jeremy Agor




Copyright © 2019 I-Connect007. All rights reserved.