Virtualization is going the wrong way.
Virtualization and cluster computing aren't doing it for me; NLB and heartbeat failover are just multiple servers taking turns. These do boost uptime and parallelize relatively static content (even many dynamic websites are relatively static, updating only every few hours). Virtualization is a way to stuff more servers into a data center without having to have more machines; it's great for testing and pre-prod, and from a support standpoint it's really nice, since reboots are unbelievably fast.
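To be clear about what I mean by "taking turns": in a heartbeat pair, one box serves traffic while a standby watches for a periodic "I'm alive" signal and promotes itself when that signal stops. A toy sketch of that logic (the host names are mine; real setups use something like Linux-HA or keepalived):

```python
import time

# Hypothetical host names; in a real pair these would be two physical boxes.
PRIMARY, STANDBY = "web-01", "web-02"
HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before we declare the primary dead

last_beat = {PRIMARY: time.monotonic()}
active = PRIMARY

def on_heartbeat(host: str) -> None:
    """Called whenever a host's 'I'm alive' packet arrives."""
    last_beat[host] = time.monotonic()

def check_failover() -> str:
    """Promote the standby if the primary's heartbeat has gone stale."""
    global active
    if active == PRIMARY and time.monotonic() - last_beat[PRIMARY] > HEARTBEAT_TIMEOUT:
        active = STANDBY  # only one machine serves at a time: taking turns
    return active
```

Notice the standby contributes nothing until the primary dies. That idle capacity is exactly what bugs me.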
I see the value of these services, but what I really want is a "virtual cluster"; I think that's the best description of what I want. I want one machine spread across many computers. I want to be able to dynamically add capacity to a "virtual machine" by throwing more machines into the pool supporting that one "virtual machine." Keep in mind I don't want this for specific applications; Beowulf and other cluster computing platforms already do that. I'm after general-purpose computing spread across multiple computers. Think of it as peer-to-peer supercomputing.
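Nothing off the shelf does this transparently, but the scheduling half of the idea is easy to sketch: a pool whose capacity grows as you add nodes. Here's a toy model in Python, with worker processes standing in for physical machines (the names are mine, and it ignores the genuinely hard part, which is shared memory across boxes):

```python
from concurrent.futures import ProcessPoolExecutor
import math

def heavy_task(n: int) -> float:
    """Stand-in for the general-purpose work the virtual machine would run."""
    return sum(math.sqrt(i) for i in range(n))

def run_on_pool(node_count: int, tasks: list[int]) -> list[float]:
    # Each worker process stands in for a machine joining the pool;
    # adding nodes shrinks wall-clock time for the same workload.
    with ProcessPoolExecutor(max_workers=node_count) as pool:
        return list(pool.map(heavy_task, tasks))

if __name__ == "__main__":
    tasks = [2_000_000] * 8
    for nodes in (1, 2, 4):          # "throwing more machines into the pool"
        run_on_pool(nodes, tasks)    # time this loop to watch capacity scale
```

Timing that loop shows throughput climbing as nodes are added, which is the behavior I want from one big general-purpose machine, not just from embarrassingly parallel batch jobs.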
I realize this isn't a likely reality for the moment, but this type of computing will really help with the real-time, data-intensive applications of our children's children. The applications I see this being useful for aren't what we use the idea for now, like folding proteins, screening cancer results, or any of the other parallelized computing projects out there. In the data center, you would only phase out machines to get more power per square inch; older but serviceable machines would still have power to give to the grid running whatever app sits on it. If you have a graphic designer, you don't need one $6,000 machine with physics cards; you need five $500 machines that can team up.
For the basement web developer who accidentally grabs hold of the long tail and designs a killer app, scaling would be graceful and relatively easy to manage. I realize the external data bus would have to be fast. Don't we have PCI-E 16x eSATA cards now? Isn't that a pretty fast external bus? Two computers could be joined with that as a high-speed, low-latency bus, and two cards could be used to increase that speed. I'm not saying the software exists to leverage this, but it could. What about diskless, heavily interconnected clients sharing an iSCSI target: several machines that think like one?

I can see how the communication could get tricky with several machines (more than 3 or 5), and that this connectivity would be a mesh and web, or it could just be a chassis. The real winners in this model would be the blade market. They're already designing machines that share resources (power and Ethernet); why not extend the metaphor and look for that computational performance gain? Connections between chassis would be much more manageable than between individual rack-mounted servers. Right?
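To put a number on why the mesh gets tricky past 3 or 5 boxes: fully meshed point-to-point links grow quadratically with node count, which is exactly the wiring problem a shared chassis backplane solves. Quick back-of-the-envelope:

```python
def full_mesh_links(n: int) -> int:
    """Point-to-point links needed so every node talks directly to every other."""
    return n * (n - 1) // 2

for n in (2, 3, 5, 8, 16):
    print(n, "nodes ->", full_mesh_links(n), "links")
# 2 -> 1, 3 -> 3, 5 -> 10, 8 -> 28, 16 -> 120
```

Two boxes need one cable; sixteen need 120. A chassis collapses all of that into one backplane, which is why I think the blade vendors are sitting on the right hardware for this.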