The number of transistors per silicon chip has been doubling roughly every 18 months for nearly two decades (according to Moore's law). This exponential growth is expected to continue for several more years, resulting in chips with hundreds of millions of gates. The result is that components (like microcontrollers) that used to occupy entire chips now fit in a tiny corner of a chip, and entire systems that used to occupy boards now fit on one chip (system-on-a-chip). The chip design industry is thus being transformed, as designers buy components as Intellectual Property (IP), also known as cores, and integrate those cores on a single chip.
The Dalton project addresses new design issues related to such IP-based design. The project focuses on a particular approach to IP-based design, wherein one starts with a pre-designed system-on-a-chip, called a "reference chip," that is "close" to one's desired application. One then configures this chip until it implements the desired application. After extensive verification, one finally generates a new chip. The main advantage of starting with a reference chip is that one can execute the chip in its real environment during verification, which many now see as a crucial aspect of system design since simulation-based approaches to verification are too slow. We refer to the approach as "configure-and-execute."
A key to the success of a configure-and-execute approach is having a reference design that is heavily parameterized, so that one can tune the design to one's own power, performance and size constraints. Parameters that we are focusing on include bus parameters (data bus size, encoding schemes), cache parameters (size, line size, mapping techniques, write techniques, associativity), bus arbitration schemes, DMA parameters (block sizes), and numerous core-dependent parameters like buffer sizes, algorithm options (such as for compression), precision options (such as for analog-to-digital conversion), etc. Such parameters heavily influence the power, performance and size of a design, and the constraints on those metrics vary greatly across applications. Therefore, methods for rapidly exploring the tradeoffs among various parameter settings are an important research area, and form the main thrust of the Dalton project.
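To make the exploration concrete, the following is a minimal sketch of how such a parameter space might be enumerated and pruned to its Pareto-optimal configurations. The parameter names, value ranges and cost models below are hypothetical placeholders, not the actual Dalton parameters or estimators; a real flow would substitute simulation- or estimation-based models of power, performance and size.

    from itertools import product

    # Hypothetical tunable parameters of a reference design (illustrative only).
    PARAMETER_SPACE = {
        "bus_width_bits":   [8, 16, 32],
        "cache_size_bytes": [1024, 4096, 16384],
        "cache_line_bytes": [16, 32],
        "associativity":    [1, 2, 4],
    }

    def estimate_metrics(cfg):
        """Placeholder estimators: return (power, exec_time, area) for one configuration."""
        power = cfg["bus_width_bits"] * 0.5 + cfg["cache_size_bytes"] * 0.001
        exec_time = 1e6 / (cfg["bus_width_bits"] * cfg["associativity"] * cfg["cache_line_bytes"])
        area = cfg["cache_size_bytes"] * 0.01 + cfg["bus_width_bits"] * 2
        return (power, exec_time, area)

    def dominates(a, b):
        """True if metrics tuple a is no worse than b everywhere and strictly better somewhere."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    # Enumerate every configuration, then keep only the Pareto-optimal ones.
    names = list(PARAMETER_SPACE)
    configs = [dict(zip(names, values)) for values in product(*PARAMETER_SPACE.values())]
    scored = [(cfg, estimate_metrics(cfg)) for cfg in configs]
    pareto = [(cfg, m) for cfg, m in scored
              if not any(dominates(other, m) for _, other in scored)]

    for cfg, (power, exec_time, area) in pareto:
        print(cfg, "power=%.1f time=%.1f area=%.1f" % (power, exec_time, area))

Exhaustive enumeration is shown here only because the example space is tiny; the real research question is how to prune or heuristically search much larger spaces quickly.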
Another important issue in IP-based design is core interfacing. The Virtual Socket Interface (VSI) Alliance was formed a few years back to address standardization needs for IP-based design. One area of their focus is developing a core interface standard that would enable cores to be easily integrated into any system, by separating each core into an internal part and an interface part. The Dalton project is presently investigating techniques to trade off power, performance and size in such interface parts; thus, the choice of an interface part with a particular power/performance/size profile becomes yet another parameter of a core.
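As a rough illustration of this separation, the sketch below models a core split into an internal part and swappable interface parts, each carrying its own relative power and area cost. The class names, the example function, and the metric numbers are invented for illustration; they do not reflect any actual VSI Alliance standard or Dalton core.

    class CoreInternal:
        """Application logic of the core, independent of any particular on-chip bus."""
        def transfer(self, data):
            # Stand-in for the core's real function (e.g., a block move or transform).
            return list(reversed(data))

    class BusInterface:
        """One selectable interface part; each variant has its own power/area cost."""
        def __init__(self, name, relative_power, relative_area):
            self.name = name
            self.relative_power = relative_power
            self.relative_area = relative_area

        def connect(self, internal, data):
            # Marshal data between the system bus and the core's internal part.
            return internal.transfer(data)

    # The choice of interface part becomes one more tunable parameter of the core.
    INTERFACE_OPTIONS = [
        BusInterface("low_power_serial",    relative_power=0.6, relative_area=0.8),
        BusInterface("high_speed_parallel", relative_power=1.4, relative_area=1.2),
    ]

    core = CoreInternal()
    for iface in INTERFACE_OPTIONS:
        result = iface.connect(core, [1, 2, 3])
        print(iface.name, result, iface.relative_power, iface.relative_area)

Because the internal part never changes, swapping interface parts lets a designer trade interface power, performance and size without re-verifying the core's function.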
Below, we provide some further details on why a reference chip approach is necessary.
During the past 15 years, the industry has shifted to a top-down simulation-based design methodology. In this methodology, a designer specifies a system's behavior in an HDL, also models the system's environment, and then simulates the system. The designer refines the system to a more detailed HDL model and simulates again. Finally, the designer generates a silicon chip and tests it in the real environment; the chip should, ideally, work because of all the earlier simulation.
Unfortunately, this approach may have run out of steam for systems-on-a-chip. Simulation is so slow that even a few seconds of real time may require hours or days to simulate. Furthermore, modeling the system's environment is often as much work as modeling the system itself, and such a model frequently omits undocumented features of the environment. In addition, fabricating and testing a system-on-a-chip is turning out to be a huge problem.
We therefore propose an entirely new approach for mainstream systems-on-a-chip. In this approach, a firm (perhaps a silicon vendor or independent design house) creates a REFERENCE DESIGN for a particular class of applications, such as control systems. The reference design contains microprocessors, microcontrollers, and/or DSP processors, coprocessors, memory, field-programmable logic, and a large collection of peripherals common to the application class -- more than any one application would ever need, but that's O.K. because there's room on the chip. These devices are preconfigured to work with one another, i.e., the operating systems are installed, all device drivers are written, etc. The internal bus can be brought to the pins to allow external devices to connect to the design, or to cascade reference designs. The tremendous effort put into building this reference design gets amortized over all the applications that utilize it. A software debugging environment can be built on top of it, using JTAG scan to control/observe internal registers. Note that the reference design approach is similar in idea to microcontrollers, which target a particular application class, contain numerous on-chip peripherals, and cost only a few dollars per device because of amortization; reference designs can be seen as "super-charged" microcontrollers.
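As a loose illustration of how an application maps onto such a chip, the sketch below selects, from the superset of on-chip peripherals, only those a given application needs; the unused peripherals simply remain idle on the chip. The peripheral names, base addresses, and the motor-control example are hypothetical and not drawn from any actual reference design.

    # A superset of on-chip peripherals provided by a hypothetical reference chip.
    AVAILABLE_PERIPHERALS = {
        "uart":  0x40000000,
        "timer": 0x40001000,
        "adc":   0x40002000,
        "pwm":   0x40003000,
        "can":   0x40004000,
    }

    def configure(application_needs):
        """Return the memory map of enabled peripherals; the rest sit unused on-chip."""
        unknown = set(application_needs) - set(AVAILABLE_PERIPHERALS)
        if unknown:
            raise ValueError("peripherals not on this reference chip: %s" % unknown)
        return {name: AVAILABLE_PERIPHERALS[name] for name in application_needs}

    # A hypothetical motor-control application uses only a few of the peripherals.
    memory_map = configure(["timer", "adc", "pwm"])
    for name, base in memory_map.items():
        print("%-6s @ 0x%08X" % (name, base))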