# The Future of AI is Analog? New Framework Strikes to Scale Analog AI Chips

*May 24, 2023*
As the industry continues to push for lower-power, higher-performance processing for machine learning (ML) and artificial intelligence (AI), a plethora of new concepts and technologies have taken center stage. Among these, analog computing has been revived as an exciting approach to more efficient processing.

Still, the technology is relatively new to this application, and there is significant room for improvement. This week, researchers from the Indian Institute of Science (IISc) published a new paper describing a novel framework for the future of scalable analog AI chips.

*IISc's ARYABHAT-1 chip. Image used courtesy of NeuRonICS Lab, DESE, IISc.*

This article will discuss the benefits of analog computation for AI, some challenges facing the technology, and the new research from IISc.

## Why is there a Shift to Analog?

Analog computation is a technology that predates digital computing but was largely forgotten as digital took off. Now, researchers are again looking to analog, and this time it appears to beat digital in several ways.

*A conventional von Neumann architecture is bottlenecked by data movement. Image used courtesy of IBM.*

As data rates have gotten faster, process nodes smaller, and global interconnects longer, an emerging trend in the industry has been the significant impact of data-movement energy. Increasing parasitics have caused the physical movement of data in and out of memory to become one of the most significant contributors to overall chip power consumption. Couple this with ML, an extremely data-intensive application, and the von Neumann architecture is no longer well suited for AI/ML.

*Analog AI brings the processing directly to the memory. Image used courtesy of IBM.*

Instead, analog computation allows for in-memory computing, where data can be processed where it is stored.
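The energy argument above can be made concrete with a toy model. All of the per-operation energy figures below are illustrative assumptions (roughly in line with commonly cited ballpark numbers for modern nodes), not measurements from the IISc paper:

```python
# Toy energy model: compute cost vs. data-movement cost for one
# matrix-vector multiply. All constants are illustrative assumptions.

DRAM_ACCESS_PJ = 640.0   # assumed energy to fetch one 32-bit word off-chip
MAC_PJ = 0.9             # assumed energy of one 32-bit multiply-accumulate

def von_neumann_energy_pj(n_macs: int, words_moved: int) -> float:
    """Energy when operands are shuttled between memory and compute."""
    return n_macs * MAC_PJ + words_moved * DRAM_ACCESS_PJ

def in_memory_energy_pj(n_macs: int) -> float:
    """Energy when computation happens where the data is stored;
    the data-movement term drops out of this toy model."""
    return n_macs * MAC_PJ

# One 1024x1024 weight matrix applied to a 1024-element input vector.
n_macs = 1024 * 1024
words_moved = 1024 * 1024 + 2 * 1024   # weights + input + output vectors

ratio = von_neumann_energy_pj(n_macs, words_moved) / in_memory_energy_pj(n_macs)
print(f"data movement inflates energy by roughly {ratio:.0f}x in this toy model")
```

Even with generous assumptions for the compute term, fetching every weight from off-chip memory dominates the budget — which is exactly the cost that in-memory analog computation targets.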
The major benefit is a significant decrease in data movement, which reduces overall energy expenditure. As a result, analog AI can offer power-efficiency improvements of up to 100x over traditional digital electronics for AI/ML applications.

## Challenges for Analog AI Scaling

Despite its efficiency benefits, analog computing still faces several challenges before it can be a legitimate competitor to digital computing.

One of the key challenges in designing analog computing for AI/ML is that, unlike digital chips, analog processors are difficult to test and co-design. A traditional digital VLSI (very-large-scale integration) design can consist of millions of transistors, yet engineers can synthesize it by compiling high-level code. This capability allows the same design to be easily ported across different process nodes and technology generations.

*Transconductance efficiency (gm/Id) as a function of (Vgs − Vth) at different process nodes. This plot shows the challenges in easily scaling analog designs. Image used courtesy of Kumar et al.*

Analog chips, however, do not scale as easily because of differences in transistor biasing regimes, temperature variation, and limited dynamic range. The result is that each new generation and process node must be individually customized and re-designed.
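The "biasing regimes" point is worth unpacking: a transistor obeys a different current-voltage law depending on its operating region, so a circuit tuned for one regime does not carry over to another. The toy device models and constants below are our own illustrative assumptions, not data from the article's figure:

```python
import math

def id_subthreshold(vgs: float, i0: float = 1e-9, n: float = 1.4,
                    vt: float = 0.026) -> float:
    """Weak-inversion (subthreshold) model: current grows exponentially with Vgs."""
    return i0 * math.exp(vgs / (n * vt))

def id_square_law(vgs: float, k: float = 2e-4, vth: float = 0.4) -> float:
    """Strong-inversion square-law model: quadratic above threshold, ~0 below."""
    return k * max(0.0, vgs - vth) ** 2

# Sweep the same gate voltages through both laws: the curve *shapes* differ,
# so an analog block designed around one regime must be re-tuned for the other.
for vgs in (0.35, 0.45, 0.55):
    print(f"Vgs={vgs:.2f} V  subthreshold={id_subthreshold(vgs):.3e} A  "
          f"square-law={id_square_law(vgs):.3e} A")
```

In the exponential regime, every fixed step in Vgs multiplies the current by a constant factor; in the square-law regime it does not — one reason analog behavior shifts when a design moves to a node with different threshold voltages and supply headroom.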
The need to re-design for every node not only makes development more time-consuming and expensive, it also makes the technology less scalable, since each transition to a new technology generation requires far more manual work. For analog AI to go mainstream, these design and scalability challenges must first be solved.

## IISc's Framework to Scale AI

To solve this problem, researchers at IISc have proposed a new framework for scalable analog compute design in their most recently published paper.

The key concept of their work is a generalization of margin propagation (MP), a mathematical tool that has previously shown value in synthesizing analog piecewise-linear computing circuits. From this generalization, the researchers developed a novel shape-based analog computing (S-AC) framework that lets them approximate different functions commonly used in ML architectures.

*Test setup for the chip built off of the proposed analog framework. Image used courtesy of NeuRonICS Lab, DESE, IISc.*

The result is a framework that can trade off accuracy against speed and power, much like digital designs, while also scaling across different process nodes and biasing regimes.

As a proof of concept, the researchers implemented a number of S-AC circuits representing common mathematical ML functions in several different processes. Circuit simulations demonstrated that the circuits' I/O characteristics remained essentially the same across both a planar 180 nm CMOS process and a 7 nm FinFET process.

With the new framework, the researchers hope to have created something that will enable more scalable and cost-efficient analog AI designs in the near future.
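To give a feel for the margin-propagation primitive the framework generalizes: in MP, the output z for inputs x_1…x_n is commonly defined by the constraint Σ max(0, x_i − z) = γ, which yields a piecewise-linear approximation of log-sum-exp (and hence of max-like and softmax-like functions). The bisection solver below is a minimal sketch of that definition under our reading of MP, not code from the paper:

```python
from math import exp, log

def margin_propagation(xs, gamma: float, iters: int = 60) -> float:
    """Solve sum(max(0, x - z) for x in xs) == gamma for z by bisection.

    The left-hand side is continuous, non-increasing in z, and strictly
    decreasing while positive, so bisection finds the root.
    """
    lo = min(xs) - gamma        # residual here >= len(xs) * gamma >= gamma
    hi = max(xs)                # residual here == 0 <= gamma
    for _ in range(iters):
        mid = (lo + hi) / 2
        residual = sum(max(0.0, x - mid) for x in xs)
        if residual > gamma:
            lo = mid            # z too small: too much margin left over
        else:
            hi = mid
    return (lo + hi) / 2

xs = [1.0, 2.0, 3.5]
z = margin_propagation(xs, gamma=0.5)
lse = log(sum(exp(x) for x in xs))   # the smooth function MP approximates
print(f"MP output = {z:.3f}, log-sum-exp = {lse:.3f}, max = {max(xs):.1f}")
```

Smaller γ pushes the MP output toward max(xs), while larger γ softens it — the kind of accuracy-versus-operating-point trade-off the S-AC framework is designed to exploit in hardware.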