Machine Learning Optimizes FPGA Timing

By Bernard Murphy (*)

Machine learning (ML) is the hot new technology of our time, so EDA development teams are eagerly searching for new ways to optimize various facets of design, using ML to distill wisdom from the mountains of data generated by previous designs. Before ML, we had little interest in historical data and would mostly make localized comparisons with recent runs to pick what we felt were best-case implementations. Now, prompted by the demonstrated value of ML in other domains, we are starting to look for hidden intelligence in a broader range of data.

One such direction uses machine-learning methods to find a path to optimization. Plunify does this with their InTime optimizer for FPGA design. The tool operates as a plugin to a variety of standard FPGA design tools but does the clever part in the cloud (private or public, your choice), where the goal is to derive optimized strategies for synthesis and place-and-route.

There is a very limited way to do this toda…


FPGAs and GPUs: a Tour of SETI's Computer Hardware

David MacMahon is a research astronomer with the Berkeley SETI Research Center. Dave works on several projects at BSRC, including Breakthrough Listen, and designs many of the computer systems used to process data collected from the telescopes. If you've ever been curious about what hardware is required to search for ET, check out this behind-the-scenes tour of Berkeley SETI.

Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms

Machine Learning is a hot R&D topic.

One of the strategies used in Machine Learning is to learn by means of neural networks. You can get a free introduction to neural networks here.
I also warmly recommend Andrew Ng's introductory course to Machine Learning on Coursera.

Machine Learning neural networks were inspired by biological neural networks, and are easily applied and highly effective in image processing tasks, such as handwritten text recognition.

More complex neural network algorithms are being implemented in what is called Deep Learning, using neural networks with many layers.

Typically a neural network is trained, or learns, through exposure to thousands of 'good' and 'bad' examples of the image to be recognized or classified. For example, a neural network that has to recognize handwritten numbers will be exposed to thousands of examples of numbers written by different people, and even with changes in the orientation of the te…

Best FPGA development practices - Whitepaper

This whitepaper by Charles Fulk and RC Cofer is an excellent summary of several techniques, tools, and design guidelines for FPGA development:

- FPGA design process
- Revision control
- Coding guidelines
- Scripting automation
- PCB design for FPGA
- VHDL capture and simulation (including the OS-VVM package)
- Project management
- Design resources

The whitepaper is available here

Xilinx AXI Stream tutorial - Part 2

Hi again,

In the previous chapter of this tutorial we presented the AXI Streaming interface, its main signals, and some of its applications.

Now let's get to the more fun part, which is actually writing and testing some VHDL code to implement our AXI master. We will proceed gradually, adding features as we go. At the end of this tutorial you will have code that:

- Implements an AXI master with variable packet length
- Supports flow control (ready and valid)
- Offers several kinds of generated data patterns
- Includes a testbench to check that all features work correctly
- Instantiates Xilinx's AXI Stream protocol checker IP to verify the correctness of our AXI master core

So let's see the first version of an AXI master. In this version the packet will have a fixed data length, and the data will be an ascending sequence of numbers (the same counter that checks whether the packet length has been reached is used to generate the packet data):

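Since the original listing was cut off, here is a minimal sketch of what such a first-version master could look like. This is an illustrative reconstruction from the description above, not the post's actual code: the entity name, generic name, 32-bit data width, and the PACKET_LENGTH value of 16 are all assumptions.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity axis_master is
  generic (
    PACKET_LENGTH : natural := 16        -- fixed packet length in words (assumed value)
  );
  port (
    aclk          : in  std_logic;
    aresetn       : in  std_logic;       -- active-low synchronous reset
    m_axis_tdata  : out std_logic_vector(31 downto 0);
    m_axis_tvalid : out std_logic;
    m_axis_tlast  : out std_logic;
    m_axis_tready : in  std_logic
  );
end entity axis_master;

architecture rtl of axis_master is
  -- One counter does double duty: it tracks progress through the packet
  -- and is also driven out as the (ascending) packet data.
  signal count : unsigned(31 downto 0) := (others => '0');
begin
  m_axis_tdata  <= std_logic_vector(count);
  m_axis_tvalid <= '1';                  -- always offering data in this first version
  m_axis_tlast  <= '1' when count = PACKET_LENGTH - 1 else '0';

  process (aclk)
  begin
    if rising_edge(aclk) then
      if aresetn = '0' then
        count <= (others => '0');
      elsif m_axis_tready = '1' then     -- a beat is accepted only when tready is high
        if count = PACKET_LENGTH - 1 then
          count <= (others => '0');      -- wrap and start the next packet
        else
          count <= count + 1;
        end if;
      end if;
    end if;
  end process;
end architecture rtl;
```

Because tvalid is held high, backpressure handling reduces to gating the counter on tready, which is exactly the flow-control behavior we will refine in later versions.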

Xilinx AXI Stream tutorial - Part 1


In this series of articles I am going to present the design of an AXI4-Stream master. As I often do in my tutorials, I will try to show the design procedure for the block, starting from a "bare bones" solution and gradually adding features to it.

Xilinx provides a wide range of AXI peripherals/IPs from which to choose. My purpose in making my own block was to learn the protocol hands-on. As a side effect, this tutorial provides you with a synthesizable AXI4-Stream master, which I have not seen provided by Xilinx. The closest IP from Xilinx that I know of is an AXI memory-mapped to AXI-Stream block.

But first things first: what is AXI4-Stream? Streaming is a way of sending data from one block to another. The idea behind streaming interfaces is to provide a steady flow of high-speed data, so usually one new word of data is transferred every clock cycle. Also, to reduce overhead, streaming buses do not have addressing. Streaming connections are point to …
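The main signals of the interface can be summarized as a VHDL entity for the master side (the entity name and 32-bit data width here are illustrative assumptions; the signal names themselves are the standard AXI4-Stream ones). A transfer takes place on each rising clock edge where both tvalid and tready are high:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Master-side view of a minimal AXI4-Stream link.
entity axis_master_if is
  port (
    aclk          : in  std_logic;                      -- common clock
    m_axis_tdata  : out std_logic_vector(31 downto 0);  -- payload word
    m_axis_tvalid : out std_logic;  -- master asserts: tdata is valid
    m_axis_tready : in  std_logic;  -- slave asserts: ready to accept a word
    m_axis_tlast  : out std_logic   -- marks the last word of a packet
  );
end entity axis_master_if;
```

Note that tready flows against the direction of the data: it is how the downstream block applies backpressure without any addressing on the bus.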

Spartan 7 now available

"Xilinx announced today that its Spartan-7 family of FPGAs is now available for order and shipping to standard lead times. As a key member of Xilinx's Cost-Optimized Portfolio, this device family is designed to meet the needs of cost-sensitive markets by delivering low cost and low power entry points that are I/O optimized for connectivity with industry leading performance-per-watt"

For more information:
Spartan-7 general availability announcement
Spartan-7 device page