blairwaldorf

Reconfigurable logic -- you can set each lookup table (configuring the logic) associated with each logic block so that, wired together, they work exactly for your problem. This is as low-level as we can get without actually fabricating a new chip for your problem. However, this is not easy to program.
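To make the lookup-table idea concrete, here is a minimal sketch in C of what a single 4-input LUT computes. The `config` value is the 16-entry truth table the tools "program" into the LUT; the function name and layout are hypothetical, purely for illustration.

```c
#include <stdint.h>

/* A 4-input LUT is just a 16-entry truth table: the four inputs
   form an index that selects one of the 16 configuration bits as
   the output. Choosing `config` "programs" the logic it implements. */
static int lut4(uint16_t config, int a, int b, int c, int d) {
    int index = (a << 3) | (b << 2) | (c << 1) | d;  /* 0..15 */
    return (config >> index) & 1;
}

/* Example: config = 0x8000 makes this LUT a 4-input AND gate,
   since only index 15 (all inputs 1) selects a 1 bit. */
```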

Also, people are researching: is there a space of applications for FPGAs? As Kayvon mentioned in lecture, people will pay a lot to manufacture a chip that does important things. And if something is only important to some people and we want ease of programming, then optimizing for a CPU/GPU is probably better. What lies in the middle?

hofstee

@blairwaldorf the current space of applications (as heterogeneous compute) is trending heavily towards the reconfigurable part of the FPGA. An ASIC can only do one task. If you were, say, a cloud computing provider, it might not make sense to stock every single ASIC imaginable for all your clients; odds are that if a client truly needed an ASIC, they would have gone out of their way to get one. FPGAs, however, let you get as close to the hardware as possible without an ASIC, so you can build much more efficient systems than a CPU/GPU allows. This way, a cloud computing provider could host a large variety of clients with just FPGAs.

FPGAs are also used heavily in hardware design, as an intermediate step between simulation and silicon. You can prototype parts of a system on an FPGA to see if they work as expected. If they don't work on an FPGA, they probably will not work in silicon. There's a bit of discrepancy between simulation and silicon, but the use of FPGAs as hardware emulation to help alleviate this is pretty common right now. It's certainly much cheaper than making a prototype in silicon, and finding bugs there. By finding the bugs in the FPGA emulation, there will hopefully be fewer bugs and fewer iterations of silicon fabrication before the chip is complete.

FPGAs are also pretty common for a proof of concept prior to an ASIC. If you can prove that an FPGA implementation is much better, that gives you a stronger argument for making a dedicated ASIC.

Of course, that's just a few examples, and there are many others.

jhibshma

If a computer wanted to switch from one task to another, how quickly could it reconfigure an FPGA? Having adjustable hardware seems like a cool idea to me, but how does the speed compare?

Also, do any software-to-hardware compilers exist? Could a computer literally run a program through encoding it into hardware? It seems to me like this would technically be possible, but the ease would depend in large part on the nature of the software.

hofstee

@jhibshma if you had the bitfile already prepared, less than a second. FPGAs run at around a 250 MHz clock at the higher end for most applications, but they can have much shallower pipeline depths and better-tailored pipelining than a CPU or GPU, so it's difficult to compare the speed without a concrete implementation.

The second part of your comment is an active area of research. There are some limitations, but maybe the closest thing out there today is Xilinx HLS (high-level synthesis). I don't know the exact details surrounding it, but I know you can write a hardware image kernel in a few lines of C and it will produce a fairly decent implementation.
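As a rough illustration, here is a sketch of the kind of C an HLS tool consumes: a 3x3 image convolution written as plain loops. Treat the exact pragma placement, image size, and function signature as assumptions for illustration, not a verified Vivado HLS build.

```c
/* Sketch of an HLS-style 3x3 convolution kernel in plain C.
   An HLS tool unrolls and pipelines these loops into hardware. */
#define W 640
#define H 480

void conv3x3(const unsigned char in[H][W], unsigned char out[H][W],
             const int k[3][3]) {
    for (int y = 1; y < H - 1; y++) {
        for (int x = 1; x < W - 1; x++) {
#pragma HLS PIPELINE  /* hint to the tool: start a new pixel each clock */
            int acc = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    acc += k[dy + 1][dx + 1] * in[y + dy][x + dx];
            /* clamp the accumulator back into 8-bit pixel range */
            out[y][x] = acc < 0 ? 0 : (acc > 255 ? 255 : acc);
        }
    }
}
```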

jhibshma

@hofstee -- Thanks. :)

jpd

@jhibshma -- a significant subset of Haskell can be compiled to hardware (http://www.clash-lang.org). In most cases, this is really hard. A pure function can be turned into a straight sequence of logic gates, which won't require a clock and gives a value as soon as the electrical signal has propagated through. But if that function is recursive (or has a loop), you suddenly have to make the system clocked and do parts of the computation over time, or you'd need an infinitely big FPGA. So the quality of the FPGA design created from your program will vary greatly depending on how much that program already looks like a design in a hardware description language like Verilog or VHDL.
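To see the combinational-vs-clocked distinction without Haskell, here is the same idea sketched in C (an analogy only, not actual Clash input; the function names are made up):

```c
/* A fixed, non-recursive expression like this maps naturally to a
   tree of logic gates: the output settles as soon as the signals
   propagate through, no clock required. */
unsigned majority3(unsigned a, unsigned b, unsigned c) {
    return (a & b) | (b & c) | (a & c);
}

/* A data-dependent loop like this cannot be unrolled into a
   fixed-size circuit, because the iteration count depends on the
   input. A hardware version must iterate over clock cycles with a
   register holding the running state. */
unsigned count_bits(unsigned x) {
    unsigned n = 0;
    while (x) { n += x & 1; x >>= 1; }
    return n;
}
```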

TanXiaoFengSheng

@jhibshma, the use of FPGAs that might be most relevant to us is prototyping a CPU, which you might get hands-on experience with in a Computer Architecture class. Compared to more fundamental circuit design, Verilog makes it easier to reason about the logic design of a CPU.

bmperez

@blairwaldorf Other than for validation purposes, there is a space of applications for FPGAs. From an economic standpoint, there are certain times when using an FPGA makes more sense than developing an ASIC. An ASIC has a very high initial (non-recurring) cost: the investment in the design and verification of the chip, and (potentially) capital investment in production facilities. However, once this initial investment is made, it is extremely cheap to produce the ASIC chips. FPGAs, on the other hand, have a much lower initial investment; compared to an ASIC, the development and verification times are very low. However, the cost per unit of an FPGA is much higher, typically on the order of hundreds of dollars.

Thus, if a company is planning to produce a very large number of units, an ASIC makes more sense, because the savings from the lower per-unit cost outweigh the initial cost of creating the ASIC. However, if the company is producing a comparatively small number of units, the high initial investment in an ASIC negates the benefit of its cheaper per-unit cost, so using an FPGA for the application/product makes more economic sense.
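A back-of-the-envelope way to see the crossover, with made-up numbers purely for illustration: let $N$ be the ASIC's non-recurring cost and $u_A$, $u_F$ the per-unit costs of the ASIC and the FPGA. The ASIC wins once the volume $n$ exceeds

$$n^* = \frac{N}{u_F - u_A}.$$

For instance, with $N = \$2\,\text{M}$, $u_A = \$5$, and $u_F = \$200$, the breakeven is $n^* \approx 2{,}000{,}000 / 195 \approx 10{,}300$ units; below that volume, the FPGA is cheaper overall.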

cyl

Is this brain chip also an FPGA?

hofstee

@cyl no, it's not. It's completely different, and it's in silicon. The current dev boards use a Zynq board (which has an ARM core and an FPGA on it) in order to interact with the chip, but the chip itself is neither a CPU nor an FPGA. Fun fact: the chip itself uses less power than the LEDs on the dev board.