Hacker News

What good is an FPGA if it isn't 'workload-specific'? That's what they do. That workload might be a processor, it might be an interface, ... you get it.

Workload-specific FPGAs are here, now. They're not 'promising' because they're doing exactly what they were made to do!

>I don't know what the outlook for improving the memory latency situation is. It's probably going to involve gobs of on-chip embedded RAM, which is expensive.

They already have a name for that and you already named it.

It's called cache ;)
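To spell out why cache is the standard answer to memory latency, here's a quick average-memory-access-time (AMAT) sketch. The cycle counts are made up for illustration, not taken from any particular chip:

```python
# AMAT = hit_time + miss_rate * miss_penalty
# Illustrative numbers only: real latencies depend on the chip.
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, in the same units as the inputs."""
    return hit_time + miss_rate * miss_penalty

# e.g. a 4-cycle L1 hit, with 5% of accesses missing to 200-cycle DRAM:
print(amat(4, 0.05, 200))  # → 14.0 cycles on average
```

Even a modest hit rate turns 200-cycle DRAM into a ~14-cycle average, which is the whole game.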



    Workload-specific FPGAs are here, now. They're not 
    'promising' because they're doing exactly what they 
    were made to do!
Right. I was thinking in terms of them being more integrated into hardware and software toolchains. We already have software runtimes (various JavaScript VMs, .NET, the JVM and its offshoots, etc.) that can optimize code at runtime based on the hardware that's present, hotspots in the code, and so on.

Now imagine if FPGAs were integrated into your typical laptop/desktop logic board, and those software runtimes we talked about above could optimize your hardware at runtime as well. Or maybe it wouldn't be transparent; maybe FPGA hardware would be targetable like GPU hardware is today. Anyway, there are a lot of things that could happen there...
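As a toy sketch of that kind of runtime promotion: count calls, and once a function proves hot, swap in a faster backend. Everything here (the threshold, the `tiered` wrapper, the "offloaded" stand-in) is invented for illustration; a real runtime would hand the hot path to synthesized FPGA logic rather than another Python function:

```python
import functools

HOT_THRESHOLD = 1000  # assumed promotion threshold, purely illustrative

def tiered(slow_impl, fast_impl):
    """Call slow_impl until the call count proves it hot, then fast_impl."""
    calls = {"n": 0}

    @functools.wraps(slow_impl)
    def wrapper(*args, **kwargs):
        calls["n"] += 1
        impl = fast_impl if calls["n"] > HOT_THRESHOLD else slow_impl
        return impl(*args, **kwargs)

    return wrapper

def dot_interpreted(a, b):
    # The "cold" baseline path.
    return sum(x * y for x, y in zip(a, b))

def dot_offloaded(a, b):
    # Stand-in for a fast accelerator path; same semantics by contract.
    return sum(x * y for x, y in zip(a, b))

dot = tiered(dot_interpreted, dot_offloaded)
```

Tiered JITs do essentially this with interpreted vs. compiled code; the speculative part is making "compile" mean "reconfigure the fabric."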

    They already have a name for that and you already 
    named it. It's called cache ;)
I should have said "embedded DRAM" instead of "embedded RAM."

eDRAM is different from your typical CPU cache. Your CPU cache is almost always SRAM, and SRAM takes up roughly 3x as much die space as an equivalent amount of eDRAM. Of course, some processors, like IBM's POWER chips, use eDRAM as L3 cache, so the lines are blurred a little.
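A back-of-envelope check on that ~3x figure. The bit-cell areas below are assumed round numbers for illustration; real values vary a lot by process node:

```python
# Assumed (illustrative) bit-cell areas; a 6T SRAM cell is several times
# larger than a 1T1C eDRAM cell on the same process.
SRAM_CELL_UM2 = 0.050   # assumed 6T SRAM cell, um^2 per bit
EDRAM_CELL_UM2 = 0.017  # assumed 1T1C eDRAM cell, um^2 per bit

def array_area_mm2(megabytes, cell_um2):
    """Raw cell-array area (ignoring sense amps, tags, wiring overhead)."""
    bits = megabytes * 8 * 1024 * 1024
    return bits * cell_um2 / 1e6  # um^2 -> mm^2

sram_mm2 = array_area_mm2(32, SRAM_CELL_UM2)
edram_mm2 = array_area_mm2(32, EDRAM_CELL_UM2)
print(f"32 MB as SRAM: {sram_mm2:.1f} mm^2, as eDRAM: {edram_mm2:.1f} mm^2, "
      f"ratio {sram_mm2 / edram_mm2:.1f}x")
# → 32 MB as SRAM: 13.4 mm^2, as eDRAM: 4.6 mm^2, ratio 2.9x
```

The ratio is just the cell-area ratio, of course; the point is that tens of MB of on-die SRAM is a serious chunk of die, which is why eDRAM gets attractive at L3/L4 sizes.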

A lot of game consoles use embedded DRAM for stuff like graphics processing. On the Xbox 360 it was used as a framebuffer (the Xbox One moved to embedded SRAM for that role).


>We already have software runtimes (various Javascript VMs, .NET, the JVM and its offshoots, etc) that can optimize code at runtime based on the hardware that's present, hotspots in the code, etc.

Are you a hardware guy? :)

Thanks for elaborating. I couldn't see how you went from "memory latency is a problem, that's why we have L_n caches" to "memory latency is a problem, we'll probably solve it ... (with cache)." This helps.

The biggest problem I see with integrating FPGAs into designs (HW) is educational. An on-die accessory FPGA doesn't amount to much if it doesn't get used. It's coming though. One way or another we're going to see more flexible HW. Taking advantage of it won't be something we're used to.


    An on-die accessory FPGA doesn't amount to much if 
    it doesn't get used
I know! It's the usual chicken-and-egg hardware+software problem, right?



