Computers are funny animals. A quick survey of the applications installed on your computer will demonstrate just how many functions we expect our computers to perform. If you’re like me, you have multiple web browsers, a full suite of office applications, some games, an accounting app or two, some graphics software, some audio and video editing software…oh, and a calculator. Don’t forget that! I have multiple network connections and types, four displays, a scanner, speakers, and other miscellaneous peripherals attached. All of these parts, hardware and software, are needed for me to work or play.
While such a computer is great for all-around use, that flexibility doesn’t come without penalties. For example, how often do I use my scanner? Not very often, yet it’s always plugged in, consuming power and at least some bandwidth on the USB bus. How often do I play graphics-heavy games? “Too often,” says my wife, but not nearly as often as most. My two beefy graphics cards are, nonetheless, happily taking up chassis space, burning kilowatts like they’re free, and sitting largely in an idle and wasteful state. Aside from the obvious penalties of power draw, heat generation, and wasted bandwidth, what about the penalty of complexity? Ever had a hardware problem where one driver didn’t like another component of your system? Ever had raw hardware conflicts between two devices? How about when your computer crashes? Does the number of attached components make it difficult to narrow down the root cause or resolve the problem?
This is where Purpose-Built Computing enters the picture, riding on its white stallion of truth.
The goal of purpose-built computers is to perform a single task adequately and completely, with as little waste or overhead as possible. A good example is the Raspberry Pi, a single-board computer whose purpose is basically to run one application on its meager hardware. It’s extremely cheap, draws almost no power, and has very little in the way of peripheral options, but it gets one job done. Simplicity truly is beautiful: there is so little to break, so little to conflict, and so little to waste. Why can’t a similar approach be applied to render farms?
Don’t fear, my friends, we are already doing exactly this.
Our render nodes are models of efficiency, having been custom built to do only one thing: render quickly. Every aspect of hardware and software has been meticulously trimmed, tuned, and tweaked to keep our overhead at an absolute minimum. We’re constantly looking for things to remove from the equation. Put more simply, in the words of my father, we’re “taking out the slow parts.”
Lest you think that’s where the optimization ends, we are just getting warmed up. What about network infrastructure? Render farms have somewhat unique problems regarding scene file and content distribution for optimal retrieval by each render node. Where is that data being stored? Where are the network bottlenecks? Does each node have full, equal, and simultaneous access to the content? How about cooling? How are all of these machines kept at normal operating temperatures? If you regularly build datacenters, your stock answer will be: “Buy some full-height racks, arrange them in hot and cold aisles, and duct big, expensive HVAC units into the cold aisles.” The lack of innovation in that line of thinking is staggering. We carry the mantras of Purpose-Built Computing beyond the hardware itself, into the environment in which that hardware operates.
The bottom line is this: Purpose-Built Computing is the way forward, and we are the only Purpose-Built Render Farm. This enables us to become and remain the price leader in the rendering industry, as our overhead is minuscule. It is the only way to be the cheapest render farm. We hope you will realize these benefits by joining us and becoming part of the rendering revolution.