Many of this site’s readers work in IT and come here looking to build home labs. Highly parallel computing becomes more prevalent with each passing quarter, but the barrier to entry remains relatively high. Parallella is a Kickstarter project looking to raise a not-insignificant $750,000, with a stretch goal of $3m. The goal: create the Raspberry Pi equivalent of an ARM/RISC parallel computing platform. Here’s a quick video explaining the project:
One of the ServeTheHome forum members, vv111y, recently posted about the Parallella Kickstarter project he supported, and I really liked the idea. This is, if nothing else, a great platform for aspiring students. The Kickstarter campaign is about half complete in both duration and funding. Here is a bit from him on the project.
Also, Andreas (founder and one of the four core team members of Adapteva) has written several insightful posts that shed more light on their thought process, their approach, and why it is so promising:
Ten Processor Myths Debunked by the Epiphany-IV 64-Core Microprocessor
What is the Cost of 1 Exaflop?
Ten Challenges that will shape the Future of Computing
And finally, for potential uses: it’s not just for hard-core researchers or hackers! 104 Parallel Computing Projects for next Summer
My take on it:
- If you are looking purely at cost per GFLOP, then GPU computing beats the Epiphany at the Parallella mini-computer’s current pricing. But you have to put down a fair chunk of change (thousands of dollars) just to enter; with Parallella you can start for as little as $99.
- Adapteva is marketing this as a hobby gadget akin to the Arduino and Raspberry Pi. Backers are looking at running anything and everything: XBMC, Apache servers, transcoders, real-time ray tracing, you name it.
- If you’re doing embedded work or anything power-constrained, then the Epiphany, at 2 watts, again wins.
- You're investing in the future and in a community. Adapteva considers this a starting point, and as Andreas states in the links above, they are going after the power wall problem for the future of supercomputing. They have a roadmap for scaling the Epiphany architecture to 4096 cores in a 524.3 mm^2 footprint; the 16-core chip measures 2 mm^2 and the 64-core 8.2 mm^2. They have room to grow. Plus, they are confident they can keep the power staggeringly low: 20 W for the 4096-core chip. GPUs, not so much.
- In this blog post, someone has graciously put together a chart showing how the Epiphany beats every competitor in GFLOPS/watt.
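The roadmap figures above imply a roughly constant silicon area per core as the design scales. A quick back-of-the-envelope check (using only the numbers quoted above, not official Adapteva specifications) makes the point:

```python
# Sanity-check the Epiphany scaling figures quoted above: cores vs. die area.
# These are the article's numbers, not official Adapteva data sheets.

chips = [
    ("Epiphany 16-core", 16, 2.0),      # 2 mm^2
    ("Epiphany 64-core", 64, 8.2),      # 8.2 mm^2
    ("Epiphany 4096-core (roadmap)", 4096, 524.3),  # 524.3 mm^2
]

for name, cores, area_mm2 in chips:
    # Area per core stays essentially flat (~0.125-0.128 mm^2),
    # which is what makes the 4096-core projection plausible.
    print(f"{name}: {area_mm2 / cores:.3f} mm^2 per core")

# The 4096-core part is projected at 20 W total, i.e. roughly 5 mW per core:
print(f"Roadmap power budget: {20.0 / 4096 * 1000:.1f} mW per core")
```

At roughly 0.128 mm^2 and 5 mW per core, the architecture scales by tiling the same small core, which is exactly the "room to grow" argument made above.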
Finally, I’ll quote another backer (Kai Staats) who I think says it best:
I spent ten years in supercomputing, building large scale systems built upon the POWER architecture. I was engaged in dozens of discussions with leading scientists at NASA, the DoE and DoD, at Universities and in the meetings with Freescale, IBM, and Sony — and this is what everyone wanted to do, but no one could make it happen for the burden of the corporate structure.
The big guys are too blinded by short-term profit, answering to their shareholders in a roller coaster market to be able to take a step back and consider the long-term, the value in establishing a new paradigm for open source supercomputing.
Yes. You are doing this right. This is exactly how supercomputing should be developed. It is the only way for advanced parallel systems to take the next, big leap forward. No one company has the resources to make the philosophical and real world breakthroughs in-house. Only through engaging the bigger landscape of closet genius, hacker kids, retired software engineers, and everyone in between can the true potential of an architecture be fully explored.
This is Arduino on steroids. Make it happen!
If this has you excited, then please go and pledge!
I will be supporting this project. In the future, feel free to contact us through the forums or via the contact link above if you see any other projects looking for funding.