Parallella: A Supercomputer for Everyone on Kickstarter


Many of this site’s readers work in IT and come here looking to build home labs. Highly parallel computing becomes more prevalent with each passing quarter, but the barrier to entry remains relatively high. Parallella is a Kickstarter project looking to raise a not-insignificant $750,000, with a stretch goal of $3 million. The goal: create the Raspberry Pi of ARM/RISC parallel computing platforms. Here’s a quick video explaining the project:

One of the ServeTheHome forum members, vv111y, recently posted about the Parallella Kickstarter project he supported, and I really liked the idea. This is, if nothing else, a great platform for aspiring students. The Kickstarter campaign is about halfway complete, both in duration and funding. Here is a bit from him on the project.

There’s plenty written about Adapteva and their Epiphany chip, and now about the Parallella Kickstarter project. The latest press is here:
The Register
ArsTechnica

Also, Andreas (founder and one of the four core team members of Adapteva) has written several insightful posts that shed more light on their thought process, their approach, and why it is so promising:
Ten Processor Myths Debunked by the Epiphany-IV 64-Core Microprocessor
What is the Cost of 1 Exaflop?
Ten Challenges that will shape the Future of Computing
And finally, for potential uses – it’s not just for hard-core researchers or hackers! 104 Parallel Computing Projects for next Summer

My take on it.

  • If you are looking at cost per GFLOP, then GPU processing will beat the Epiphany at the current pricing of the Parallella mini-computer. But you have to put down a fair chunk of change (in the thousands) just to enter. With Parallella you can start for as little as $99.
  • Adapteva is marketing this as a hobbyist gadget akin to the Arduino and Raspberry Pi. Backers are looking at running anything and everything: XBMC, Apache servers, transcoders, real-time ray tracing, you name it.
  • If you’re doing embedded work or anything power-constrained, then again the Epiphany, at 2 watts, wins.
  • You’re investing in the future and in a community. Adapteva considers this a starting point, and as Andreas states in the links above, they are going after the power wall problem for the future of supercomputing. They have a roadmap for scaling the Epiphany architecture to 4,096 cores in a 524.3 mm² footprint; the 16-core part is 2 mm² and the 64-core part 8.2 mm², so they have room to grow. Plus, they are confident they can keep power staggeringly low: 20 W for the 4,096-core chip. GPUs, not so much.
  • In this blog post, someone has graciously put together a chart showing how the Epiphany beats every competitor in terms of GFLOPS per watt.
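To put the efficiency argument above in concrete terms, here is a back-of-the-envelope comparison. The Epiphany figures follow from the post (roughly 100 GFLOPS peak for the 64-core part at about 2 W); the GPU figures are illustrative assumptions for a midrange card of that era, not measurements.

```python
# Rough GFLOPS-per-watt sketch using figures quoted in this post.
# Assumed numbers: Epiphany-IV 64-core ~100 GFLOPS peak at ~2 W;
# a hypothetical midrange GPU ~1000 GFLOPS at ~200 W board power.
epiphany_gflops, epiphany_watts = 100.0, 2.0
gpu_gflops, gpu_watts = 1000.0, 200.0

epiphany_efficiency = epiphany_gflops / epiphany_watts  # GFLOPS per watt
gpu_efficiency = gpu_gflops / gpu_watts

print(epiphany_efficiency)  # 50.0
print(gpu_efficiency)       # 5.0
```

Even with generous assumptions for the GPU, the order-of-magnitude gap in GFLOPS per watt is what makes the 20 W target for a 4,096-core chip plausible.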

Finally, I’ll quote another backer (Kai Staats) who I think says it best:

I spent ten years in supercomputing, building large-scale systems built upon the POWER architecture. I was engaged in dozens of discussions with leading scientists at NASA, the DoE, and DoD, at universities, and in meetings with Freescale, IBM, and Sony — and this is what everyone wanted to do, but no one could make it happen because of the burden of corporate structure.

The big guys are too blinded by short-term profit, answering to their shareholders in a roller coaster market to be able to take a step back and consider the long-term, the value in establishing a new paradigm for open source supercomputing.

Yes. You are doing this right. This is exactly how supercomputing should be developed. It is the only way for advanced parallel systems to take the next, big leap forward. No one company has the resources to make the philosophical and real world breakthroughs in-house. Only through engaging the bigger landscape of closet genius, hacker kids, retired software engineers, and everyone in between can the true potential of an architecture be fully explored.

This is Arduino on steroids. Make it happen!

If you want to dig into the details, you can get the architecture reference manual here, and the SDK manual here.

If this has got you excited then please go and pledge!

I will be supporting this project. In the future, feel free to contact us through the forums or via the contact link above if you see any other projects looking for funding.

Patrick has been running STH since 2009 and covers a wide variety of SME, SMB, and SOHO IT topics. Patrick is a consultant in the technology industry and has worked with numerous large hardware and storage vendors in Silicon Valley. The goal of STH is simply to help users find information about server, storage, and networking building blocks. If you have any helpful information, please feel free to post on the forums.

1 COMMENT

  1. “If you are looking at cost per GFLOP then GPU processing will beat the Epiphany and the current pricing of the Parallella mini-computer.”

    Only if your task is suitable for a GPU. GPU chips are SIMD; they can do the same operation on lots of data at once. If you have an algorithm where the operations depend on the data, it can be very inefficient to run on a GPU. The Epiphany is much closer to a multicore CPU: each core can work independently, so if you have an ‘if’ statement in your code, you don’t have half the cores waiting while the others do something.

    Also, $99 gets you an Epiphany processor plus a whole dual-core ARM A9 computer. So comparing a Parallella to a $99 GPU is a bit misleading, especially once you factor in a year’s worth of electricity.
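The divergence point in the comment above can be sketched with a toy step-count model (this is an illustration I've made up, not a real simulator): on a SIMD machine, every lane walks through both sides of a data-dependent ‘if’ (with inactive lanes masked off), while independent MIMD cores like the Epiphany's each execute only the branch their own data selects.

```python
# Toy cost model: SIMD lockstep vs. MIMD independent cores on a
# data-dependent branch. Costs are abstract "steps", chosen for
# illustration only.
def simd_steps(data, then_cost, else_cost):
    # Lockstep execution: if ANY lane takes a branch, every lane
    # pays for it (non-taking lanes are just masked off).
    steps = 0
    if any(x > 0 for x in data):
        steps += then_cost
    if any(x <= 0 for x in data):
        steps += else_cost
    return steps

def mimd_steps(data, then_cost, else_cost):
    # Independent cores: each runs only its own branch, so the
    # total time is just the slowest single core.
    return max(then_cost if x > 0 else else_cost for x in data)

data = [5, -3, 7, -1]            # mixed signs -> a divergent branch
print(simd_steps(data, 10, 10))  # 20: both paths executed in sequence
print(mimd_steps(data, 10, 10))  # 10: each core runs exactly one path
```

With uniform data (all lanes taking the same branch) the two models cost the same; the SIMD penalty appears only when the branch actually diverges, which is exactly the data-dependent case the comment describes.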
