The heart of the Big Data challenge

Data is fast becoming the most important economic commodity of the 21st century. Unlike finite resources, data can be created from nothing, which helps explain the explosion of data over the past two decades. The trend towards free digital products and services has also encouraged monetization through the sale of the data those products generate, providing a catalyst for the creation of even more data. The real value of this seemingly infinite resource lies in extracting quality intelligence from it.

From an engineering perspective, a number of new technical challenges have emerged as a result of this progression. One is the sheer volume of data to be stored and processed; several groups are working on new developments in high-density storage to accommodate these trends. The transmission of this data from user to data-center also challenges the speed and bandwidth of telecommunications infrastructure. The third challenge involves the analysis and extraction of intelligent information from the data. Machine learning algorithms have become key to analyzing these copious amounts of data and creating value from them, yet they operate above several layers of software, which makes the process inefficient. Beneath those layers of software lies the heart of the current limitations: the processor. Processors in today's data-centers are the fastest they have ever been, as described by Moore's law. For roughly 50 years, transistor counts have doubled about every two years, making chips ever smaller and faster, yet their architecture has remained the same. The laws of physics now limit how much smaller transistors can shrink, and Moore's law is hitting a wall.

One constant throughout this period of rapid progress has been processor architecture, which remains deterministic and hierarchical. The interaction between machine learning software and processors in the cloud has added software complexity and, with it, security issues. This architecture also makes scalability and multi-core interaction counter-intuitive, as well as power-hungry and physically large. To handle the rapid expansion of data volume, data-centers need processors that are high-speed, low-energy, and smaller yet scalable. A fundamental change in processor architecture is essential.

VIMOC Technologies is re-designing the processor by re-thinking how it is fundamentally structured. The aim is to build a bio-inspired chip that is naturally suited to machine learning algorithms. Rather than working through several layers of software, this processor will perform machine-learning operations at the hardware level.
