The heart of the Big Data challenge

Data is fast becoming the most important economic commodity of the 21st century. Unlike finite resources, data can be created from nothing, which explains the explosion in data over the past two decades. The trend towards free digital products and services has also promoted monetization through the sale of the data those products generate, a catalyst for the creation of even more data. The real value in this effectively unlimited resource lies in extracting quality intelligence from it.

From an engineering perspective, this progression has created a number of new technical challenges. The first is the sheer volume of data to be stored and processed; several groups are working on new high-density storage technologies to keep pace. The second is transmission: moving this data from user to data centre strains the speed and bandwidth of telecommunications infrastructure. The third is analysis, the extraction of useful intelligence from the data. Machine learning algorithms have become the key tool for analyzing these copious amounts of data and creating value from them, but they operate above several layers of software, which makes the process inefficient. Beneath those layers lies the heart of the current limitation: the processor. The processors in today’s data centres are the fastest they have ever been, a trend described by Moore’s law: for roughly 50 years, transistor counts have doubled about every 18 to 24 months, making chips ever faster and denser, yet their underlying architecture has remained the same. The laws of physics now limit how much smaller transistors can get, and Moore’s law is facing a brick wall.
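Taken at face value, that doubling rhythm compounds to a staggering factor over five decades. The short calculation below is purely illustrative, using the commonly quoted 18-to-24-month doubling period rather than measured industry data:

```python
# Rough illustration of how Moore's-law doubling compounds over 50 years.
# The 18- and 24-month periods are the commonly quoted figures, not
# measured industry data.
years = 50
for months_per_doubling in (18, 24):
    doublings = years * 12 / months_per_doubling
    growth = 2 ** doublings
    print(f"Doubling every {months_per_doubling} months for {years} years "
          f"-> about 2^{doublings:.0f}, or {growth:.1e}x")
```

Even at the slower 24-month pace this works out to more than a ten-million-fold improvement, which is why continuing the trend by shrinking transistors alone is no longer physically realistic.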

One constant throughout this period of rapid progress has been the architecture of the processor itself, which remains deterministic and hierarchical. The interaction between machine learning software and processors in the cloud has created software complexity, and with it security issues. The same architecture makes scalability and multi-core interaction counter-intuitive, and the resulting systems power-hungry and large. To handle the rapid expansion in data volume, data centres need processors that are faster, smaller and more energy-efficient, yet still scalable. A fundamental change in processor architecture is essential.

VIMOC Technologies is redesigning the processor by rethinking how it is fundamentally structured. The aim is a bio-inspired chip that is a natural fit for machine learning algorithms: rather than working through several layers of software, the processor conducts machine-learning operations at the hardware level.
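To make the contrast concrete, the sketch below is a purely conceptual illustration in Python; the nCell interface is proprietary, and none of the function names here reflect VIMOC’s actual design. It shows the same elementary machine-learning primitive, a neuron’s multiply-accumulate and activation, reached either through a stack of software layers or exposed directly, which is the essence of moving the operation into hardware:

```python
# Conceptual illustration only: contrasts a layered software path with a
# single fused operation. It does not represent VIMOC's nCell hardware,
# whose interface is proprietary.

def neuron(inputs, weights, bias):
    """The basic ML primitive: multiply-accumulate followed by a threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

# Conventional path: the primitive is reached through several software layers
# (application -> framework -> runtime -> driver) before a general-purpose
# core finally executes it.
def framework_layer(inputs, weights, bias):
    return runtime_layer(inputs, weights, bias)

def runtime_layer(inputs, weights, bias):
    return driver_layer(inputs, weights, bias)

def driver_layer(inputs, weights, bias):
    return neuron(inputs, weights, bias)

# A hardware-level approach exposes the primitive itself: the layering
# disappears because the operation is implemented directly in the processor.
def hardware_level(inputs, weights, bias):
    return neuron(inputs, weights, bias)  # stand-in for a silicon primitive

if __name__ == "__main__":
    x, w, b = [0.5, 0.2, 0.9], [0.4, -0.6, 0.3], -0.1
    assert framework_layer(x, w, b) == hardware_level(x, w, b)
    print("Same result, far fewer layers between algorithm and silicon.")
```

The point of the comparison is not the arithmetic, which is identical on both paths, but the number of layers an algorithm must traverse before any arithmetic happens at all.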

Revolutionizing Data Centres

VIMOC’s Cognitive-Core Technology enables a revolutionary architecture for hyper-scale computing environments such as cloud storage, analytics, web serving and media streaming. VIMOC’s unique combination of ultra-low-power ARM processors and proprietary neuro-Cell (nCell) technology lays the foundation for the next generation of cognitive computing server designs. The technology is designed to sustain large-scale applications with dramatic savings in power and space compared with today’s state-of-the-art installations, and gives software developers an efficient, flexible platform on which to implement advanced machine learning algorithms.

VIMOC Technologies’ products include IP silicon Cognitive-Core processors and hardware-software platforms, which allow OEMs to bring ultra-efficient, hyper-scale solutions to market quickly.