Hi JT77,

My $0.02 worth:

1. Focus and Skills

How much time do you have to devote to this project? What finance, programming, networking and FPGA/circuit design skills do you have already? Does your advisor/supervisor have skills in any of these areas, or do you have someone else who can help guide you?

HFT takes many different forms, and both FPGA and GPU skills are useful, but so are skills in low-level multi-core, multi-threaded CPU programming and optimisation, real-time OS kernel tuning, network driver performance optimisation, protocols such as FIX, TCP/IP, UDP and PCIe, low-latency messaging, quirky fast databases, etc. I have no idea what you know already, but you will have to be clever in deciding what you want, so that you can actually build something reasonably impressive in your timescale while also learning useful new skills.

2. Comparing FPGAs versus GPUs?

There has already been a lot of work in this area, and there is certainly more to be done. However, to do a good job of comparing any two technologies you have to be able to design and implement performance benchmarks while trying to push each technology to its limits. Then you have to invent some way of comparing the technologies "fairly", given that some will be a better match for your benchmark than others. This is not easy to do at all, and many comparisons out there suffer from flaws in this department. And that is just the beginning... this is a lot of work! It would be easier to choose just one benchmark with already published results (say, a specific MC simulation) and see whether you can improve on it using another specific technology.

3.
FPGAs, VHDL/Verilog

If you are aiming to optimise performance on an FPGA, you need to understand the details of your specific FPGA architecture: its timing characteristics, its various on-chip resources, soft/hard CPU cores, the FPGA-vendor-specific design tools and your FPGA platform, and you need to know how to optimise your circuits for a specific performance goal. There is a great deal to learn here. It ultimately means learning VHDL or Verilog to a very high standard: not just the language itself, but how to write it for circuit synthesis and how to optimise circuits for FPGAs using the VHDL/Verilog tools plus the FPGA vendor's low-level tools. This may take you years, not months.

Yes, there are numerous C/Java/C++/etc. based approaches to FPGA "compilation", and these make the process easier, particularly for larger projects (improved productivity, simpler system integration, optimising only what needs to be optimised, and often out-of-the-box support for specific boards). Beware that these approaches work by imposing their own computing model on top of a generic FPGA architecture. Unless that computing model is a good match for the application you are implementing, you will find yourself trying to fit a square peg into a round hole. If your objective is to maximise performance, stick with VHDL/Verilog. One good thing about these languages is that you can combine low-level optimised code (i.e. gates/Boolean expressions connected with wires) with high-level sequential/parallel processes triggered by events; there are multiple levels of "abstraction", so VHDL/Verilog is really not like writing assembly. If you don't mind trading some performance for productivity, then there are more choices; go with the tool vendor that has good support for your board, as this will save you a lot of time.

The above holds for optimising on any architecture (Intel CPUs, NVIDIA GPUs, etc.); it is the age-old argument about C++ vs.
assembly, but things are more difficult with FPGAs because there are so many new concepts, "tricks of the trade" and know-how to master.

Compare this with GPUs: assuming you know C, you can learn everything you need to know about GPUs in a couple of weeks. Study the architecture, learn CUDA/OpenCL, look at the demos, case studies and examples, read the CUDA finance-related papers and 2-3 books, and you are pretty much set to go. GPUs are popular because learning GPU programming is relatively easy; it is very similar to learning to program a different, albeit parallel, processor. FPGAs are another ball game altogether.

I am not trying to put you off, just trying to get you thinking about what you want and what you can do in your timescale. For example, you might not want to risk spending all that time only to deliver a mediocre, half-working FPGA demo because you had to spend a month struggling with the tools, or writing your own high-throughput memory/PCIe/PHY interface because your FPGA vendor did not provide one for you (or you thought you could write a better one). Having said all that, FPGA skills are valued by many HFT houses.

4. Projects - some suggestions

- Something to do with low-level network processing in an FPGA. At the TCP/IP/UDP level this has been largely tackled by various TOE (TCP/IP Offload Engine) vendors, and there is also some interest in getting FIX implemented on FPGAs. Looking at how to implement a FIX engine on an FPGA (this is non-trivial, with many tradeoffs to consider) might be useful to your career prospects. Ideally, though, you should aim for something simpler: for example, start by getting QuickFIX running on a CPU core inside an FPGA, then try to accelerate only a small part of the FIX protocol. If you can get your hands on a good FPGA TCP/IP module, it would make your life much easier. Or just stick with UDP and the ITCH/OUCH protocols. This type of project is probably the most difficult and risky route you can take.
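To give a feel for the byte-level work a FIX engine has to do on every message, here is a minimal Python sketch (illustration only; the function names are mine, and a production engine would do this in HDL or C, and would frame messages using tag 9, BodyLength, rather than searching backwards for the checksum field):

```python
# Sketch of FIX-style message handling: fields are SOH-delimited
# tag=value pairs, and the trailing tag 10 carries a checksum equal
# to the sum of all preceding bytes modulo 256.
SOH = b"\x01"

def fix_checksum(data: bytes) -> int:
    # Sum of every byte before the "10=" field, mod 256.
    return sum(data) % 256

def parse_fix(msg: bytes) -> dict:
    # Split off the trailing checksum field and verify it.
    head, _, tail = msg.rpartition(b"10=")
    if fix_checksum(head) != int(tail.rstrip(SOH)):
        raise ValueError("checksum mismatch")
    fields = {}
    for pair in head.split(SOH):
        if pair:
            tag, _, value = pair.partition(b"=")
            fields[int(tag)] = value
    return fields

# Build a well-formed message and parse it back.
body = b"8=FIX.4.2\x0135=0\x0149=SENDER\x0156=TARGET\x01"
msg = body + b"10=%03d\x01" % fix_checksum(body)
print(parse_fix(msg))
```

Even in hardware the interesting tradeoffs start exactly here: the checksum is trivially parallelisable, while the stateful session logic around it is not, which is why "accelerate only a small part of FIX" is the sensible scope.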
If you cannot get the FPGA implementation finished in time, you will not have much to show for your dissertation.

- Something on accelerating Monte Carlo simulations on FPGAs and/or GPUs. A lot of work on this has already been done by several academic and industrial groups. You could look at how simulation speed changes with the complexity of the MC models and the required computational accuracy, how to do large multi-dimensional simulations well, and/or show how to do them quickly at sufficient accuracy. GPUs are an obvious choice for MC simulations, but FPGAs offer more flexibility (more opportunities to make the simulation run faster and/or to play with speed/accuracy tradeoffs) and also consume much less power.

- Something like the farmer's idea (nice): a game/application incorporating trading-like behaviour and a technology challenge. Make it fun, but keep it relevant (e.g. you could read up on standard HFT strategies and model some of them in a small network). Incorporate FPGAs if you wish, but that might well take your focus away from developing your HFT algorithmic skills.

- Something like a model of an optimised trading platform running on a standard multi-core CPU target. Forget FPGAs/GPUs altogether and see how far you can push a tuned Linux kernel with off-the-shelf 10G cards: write a custom network driver to speed up processing of streamed network data, play with the RT kernel scheduler and low-level assembly optimisation, find a way of measuring communication and processing latency/throughput, and so on. Your setup can be as simple as two servers pinging each other, or you can scale it up to an actual exchange protocol connecting the two. This might not be as fancy as playing with FPGAs and GPUs, but there are not enough people who know how to do this sort of thing well! This is probably the least risky route, as you will have "something working" at each stage as you progress through the project.
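On the measurement point, here is a hypothetical Python sketch of the simplest possible version of "two servers pinging each other": a local UDP echo loop timed with a monotonic clock. All names and sizes are mine; on real hardware you would pin threads, tune the NIC and so on, but the methodology (timestamp every round trip, report percentiles rather than just the mean) carries over:

```python
# Measure UDP round-trip latency over loopback and report the median
# and 99th-percentile, in microseconds.
import socket, statistics, threading, time

def udp_echo_server(sock):
    # Echo every datagram straight back to its sender.
    while True:
        data, addr = sock.recvfrom(2048)
        if data == b"stop":
            return
        sock.sendto(data, addr)

def measure_rtt(n=1000):
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))          # let the OS pick a free port
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    echo = threading.Thread(target=udp_echo_server, args=(server,))
    echo.start()
    addr = server.getsockname()
    rtts = []
    for _ in range(n):
        t0 = time.perf_counter_ns()        # monotonic, ns resolution
        client.sendto(b"ping", addr)
        client.recvfrom(2048)
        rtts.append(time.perf_counter_ns() - t0)
    client.sendto(b"stop", addr)           # shut the echo thread down
    echo.join()
    server.close(); client.close()
    rtts.sort()
    return {"median_us": statistics.median(rtts) / 1e3,
            "p99_us": rtts[int(0.99 * n)] / 1e3}

print(measure_rtt(200))
```

Reporting the tail matters: in trading it is the 99th percentile, not the average, that hurts, and a tuned kernel often shows up first as a shorter tail rather than a lower median.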
You will have useful results even if something unexpected goes wrong along the way.

I hope this helps. For the sake of readability I have not included references in the above, but feel free to ask for specific pointers if you cannot find more on the web.

Best of luck,
M.
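P.S. Since the Monte Carlo suggestion is the easiest to make concrete, here is a small Python sketch (parameter values are mine, purely illustrative) of the speed/accuracy tradeoff I mean: a European call priced by simulation and checked against the Black-Scholes closed form. The standard error shrinks like 1/sqrt(N), so each extra digit of accuracy costs roughly 100x the paths, and that is exactly the cost that GPU parallelism or FPGA precision tricks can buy back.

```python
# Monte Carlo price of a European call under geometric Brownian motion,
# compared with the Black-Scholes closed form.
import math, random

def bs_call(s, k, t, r, sigma):
    # Black-Scholes closed-form call price.
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    ncdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s * ncdf(d1) - k * math.exp(-r * t) * ncdf(d2)

def mc_call(s, k, t, r, sigma, paths, seed=42):
    # Average the discounted payoff over simulated terminal prices.
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(paths):
        st = s * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(st - k, 0.0)
    return math.exp(-r * t) * total / paths

exact = bs_call(100, 100, 1.0, 0.05, 0.2)
print(exact, mc_call(100, 100, 1.0, 0.05, 0.2, 100_000))
```

Try varying `paths` by factors of 10 and watch how slowly the error falls; that curve is the starting point for any speed/accuracy study on GPUs or FPGAs.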