The verification of processor architectures designed for Machine Learning (ML) applications represents a departure from conventional techniques. Conventional constrained-random testbenches, which focus on stimulus-driven coverage, cannot scale to many ML algorithm realizations. ML architectures involve neural networks of processors that “learn” by manipulating coefficients across the network to match ideal outputs to a large quantity of input data. Furthermore, smart compiler technology is employed to leverage the many paths available in the network. An effective verification strategy can leverage planning algorithms that start from the desired output and optimize input values to achieve that output. Ensuring that all the paths the compiler might trigger have been tested, and that the test content can scale from individual processors to the entire network, are critical challenges. Breker will share various approaches to this problem, developed through cooperation with three noted AI processor providers.
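The outcome-driven planning idea can be illustrated with a toy example. The sketch below is a hypothetical Python illustration, not Breker's actual technology: it models a design as a small dependency graph and searches backwards from a desired output to the primary inputs that must be driven to produce it, rather than pushing random stimulus forward and hoping coverage closes.

```python
# Illustrative sketch only (all names and the graph are hypothetical):
# an outcome-driven planner starts from a desired output and walks the
# dependency graph backwards to discover which primary inputs must be driven.
from collections import deque

# Hypothetical dependency graph: each node lists the nodes that produce it.
feeds = {
    "result":     ["mac_out"],
    "mac_out":    ["weight", "activation"],
    "weight":     [],            # primary input: nothing produces it
    "activation": [],            # primary input
}

def plan_inputs(goal):
    """Return the primary inputs that must be constrained to reach `goal`."""
    needed, work = set(), deque([goal])
    while work:
        node = work.popleft()
        producers = feeds.get(node, [])
        if not producers:
            needed.add(node)     # reached a primary input
        else:
            work.extend(producers)
    return needed

print(plan_inputs("result"))     # -> {'weight', 'activation'}
```

A real planner would also solve for the input values, not just identify the inputs, but the backward traversal is the essential difference from forward, stimulus-driven generation.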
The emergence of Artificial Intelligence is seen as the “next big thing” and presents a unique opportunity for disruptive semiconductor development. End applications could range from ADAS and 3D facial recognition to voice and image processing and intelligent search. The SoCs for AI applications, whether targeted for training or inference, will have their own unique characteristics, but they face quite common verification challenges, which we will present in this session.
Supporting designs as large as 15 billion gates, Mentor’s Veloce Strato has unique virtualization capabilities that enable highly accurate pre-silicon execution of AI benchmarking applications such as MLPerf. The Veloce Power App enables analysis of peak and average power. We will cover how Veloce Strato and its supporting solutions are the best tools to help address the verification challenges of SoCs targeted for AI applications.
Most AI chips and hardware accelerators that power machine learning (ML) and deep learning (DL) applications include floating-point units (FPUs). Algorithms used in neural networks are often based on the multiplication and addition of floating-point values. FPUs are difficult to implement: the IEEE 754 standard defines many corner-case scenarios and non-ordinary values, and even a minor rounding mistake can accumulate over many iterations and produce a large error. An FPU formal verification app compliant with IEEE 754 provides an efficient and rigorous solution to FPU functional verification.
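To make the two failure modes concrete, here is a small, hypothetical Python illustration (it mimics FPU behaviour in software and is not part of any formal verification app): the first part shows a few of the non-ordinary IEEE 754 values, and the second shows how a per-operation rounding error accumulates over many iterations.

```python
# Hypothetical illustration of the two FPU hazards mentioned above.
import math

# 1) IEEE 754 defines non-ordinary values with counter-intuitive behaviour.
inf = float("inf")
nan = float("nan")
print(inf - inf)                 # nan: subtracting infinities is undefined
print(nan == nan)                # False: NaN never compares equal, even to itself
print(math.copysign(1.0, -0.0))  # -1.0: signed zero is a distinct corner case

# 2) A tiny rounding error can accumulate across many iterations.
# 0.1 has no exact binary floating-point representation, so each addition
# rounds; after ten million additions the error is clearly visible.
total = 0.0
for _ in range(10_000_000):
    total += 0.1
print(total)                     # close to, but not exactly, 1000000.0
print(abs(total - 1_000_000.0))  # accumulated error, far above a single rounding step
```

A constrained-random testbench is unlikely to hit every such corner case, which is why an exhaustive, property-based formal approach is attractive for FPU verification.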
Name: Mike Bartley
Designation: Senior Vice President – VLSI Design
Title: Introduction
Biography:
Mike Bartley has a PhD in Mathematics from Bristol University, an MSc in Software Engineering, an MBA from the Open University, and over 25 years of experience in software testing and hardware verification. He has built and managed state-of-the-art test and verification teams in a number of companies that still use the methodologies he established. Since founding TVS in 2008, he has grown the company to over 100 employees worldwide. Dr Bartley is Chair of both the Bristol branch of the British Computer Society and the West of England Bristol Local Enterprise Partnership (LEP). He has had over 50 articles and presentations published on the subjects of hardware verification, software testing and outsourcing.