
Is it possible to cluster two computers together and run one operating system across them? I have heard of Beowulf and the like, but can I also cluster things like PCI devices (graphics acceleration, specifically), storage, network cards, USB, and memory in addition to just computing power? I expect that not all of this is possible, but which of them are, and where would I look to start implementing them? (I'm not asking for a complete guide or anything ridiculous, just a step in the right direction.)

I am using Linux, in case that isn't clear already. I doubt the specific system I'm running matters all that much, but I'm currently using Ubuntu 17.10.

EDIT: Just to make some things clear, I'm not opposed to all of my clustered computers running a full Linux system (like Ubuntu Server, not a full desktop, but I think that much is clear) and having the clustering handled in userspace.

atoms118
  • Sure you can, given enough time. However, consider the following: latency and bandwidth. It's not practical. – Daniel B Nov 25 '17 at 20:08
  • You make a fair point, it's not at all practical. But, assuming I wanted to continue anyways, where would I start? – atoms118 Nov 25 '17 at 20:32
  • If you're clustering all the peripherals then just get a multi-CPU system. It'll be cheaper and easier, and it will work with no further effort. – Ignacio Vazquez-Abrams Nov 25 '17 at 21:09
  • [Plan 9](https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs) had things like CPU servers, so you could argue it was "one OS across several computers". I don't think you can do that with Linux in any meaningful way. – dirkt Nov 25 '17 at 21:12
  • @IgnacioVazquez-Abrams, I forgot do mention I'm doing this with a couple of old laptops I have lying around :D, should've made that clear in the question. If I had the money, I'd just buy a better computer. Besides, this is more for the kicks than to be actually practical. – atoms118 Nov 26 '17 at 00:42
  • It's possible to offload *specific* tasks to another machine as well as use NBD and similar technologies to make devices available, but there's nothing that provides a unified interface to treat both machines as one. – Ignacio Vazquez-Abrams Nov 26 '17 at 00:58
  • Also, [daaaaaamn...](https://www.supermicro.com/products/motherboard/Xeon/C600/X10QRH_.cfm) – Ignacio Vazquez-Abrams Nov 26 '17 at 01:00
  • @IgnacioVazquez-Abrams, I'm also doing my own research at this point, but is it possible to use xNBD to share PCI devices, not just storage devices? Also, what do you mean by "offload _specific_ tasks to another machine"? Do you mean like cluster computing or actually running a process entirely on my other machine? – atoms118 Nov 26 '17 at 15:04
  • General reference: [Is it possible to combine processing power of two computers?](https://superuser.com/q/122506/150988),  [How to share CPU or RAM?](https://superuser.com/q/256521/150988),  [Linking computers to increase performance](https://superuser.com/q/647132/150988),  and probably more. – Scott - Слава Україні Nov 27 '17 at 05:20

3 Answers


The question has stayed unanswered for some time, yet the answer is simple: it is possible (of course), but not practical, because of synchronization challenges. A single-processor system can be expanded into an SMP system; the next step is NUMA (or ccNUMA, which dominates today), and that is where the current options for a single OS image end.
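To get a feel for how far a single OS image already stretches on one box, you can inspect the NUMA topology the kernel exposes. A minimal sketch, assuming a typical Linux install (`numactl` may need to be installed, e.g. from Ubuntu's `numactl` package):

```shell
# How many NUMA nodes does this machine have, and which CPUs sit on each?
lscpu | grep -i "numa"

# More detail: per-node CPUs, memory sizes, and inter-node distances
# (guarded, since numactl is often not installed by default)
command -v numactl >/dev/null && numactl --hardware || echo "numactl not installed"
```

The inter-node "distances" numactl reports are exactly the latency problem the answer describes, already visible inside one machine, long before you add a network cable.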

For academic purposes: the first problem you need to solve is creating a low-latency connection between the computers and working out synchronization mechanisms. Think about how much of this you want to implement in additional hardware. Then think about which resources you need to share and how you will arbitrate access to them. Next, think about how you will schedule processes to run and how you will assign memory to them (locality). Finally, think about DMA (you want to share PCI resources) and how you want that to work.

Note that after you figure all of this out and optimize everything the best possible way, you'll end up with an OS that runs at a crawl.

I think we both deserve to be downvoted, for a silly question and a lame attempt to answer it :-)


I'm not sure how you can achieve running one OS on two PCs, but you might be able to run applications remotely, giving the impression that you are sharing one OS between the machines.

Let's say you have a less powerful laptop and a more powerful desktop, and that on the desktop you have Blender (the 3D software) installed. You should be able to remotely access Blender from your laptop and run it in a separate window, just as you would run any application installed on the laptop itself.

You can do this with SSH, if you have it properly set up. In order to start this kind of session, I think you need to do something like:

```
ssh -X me@myDesktopAddress
```

and then run the application, let's say:

```
blender
```

So this is the concept of remote access to a PC. You can search the web for something like "ssh gui"; here is a related question from Ask Ubuntu: https://askubuntu.com/questions/886313/what-is-the-simplest-way-to-have-remote-gui-access-to-ubuntu-16-04-server-from
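For X11 forwarding to work, the server side has to allow it. A minimal sketch of the whole setup, assuming the desktop runs Ubuntu's stock OpenSSH server (`me@myDesktopAddress` is just the placeholder address from above):

```shell
# On the desktop: make sure /etc/ssh/sshd_config contains the line
#   X11Forwarding yes
# and then reload the SSH server:
#   sudo systemctl restart ssh

# On the laptop: -X enables X11 forwarding; -C compresses the stream,
# which often helps noticeably over a slow LAN or Wi-Fi link
ssh -XC me@myDesktopAddress

# In the resulting remote shell, launch the app; its window appears on
# the laptop's display while the desktop's CPU does the work
blender
```

Note that with plain `-X`, rendering happens where the window is displayed, so the laptop's GPU (not the desktop's) draws the viewport; this is why heavy 3D work over X forwarding can feel sluggish.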

You can also do this with an application called X2Go. I have only tried it over a VPN, and it's not the best experience, but it does the job. You can connect the two PCs over your network, or perhaps directly for a better experience. I'm not sure how to connect the PCs directly, or whether that is even faster than your LAN connection, but it could probably be done with the network/Wi-Fi cards directly (I know I can connect my laptop to my phone to share the phone's internet connection), or maybe you could connect them with USB 3.x and set up some sort of tunneling (there is also an option to connect your PC to your phone over USB and then share the internet connection). Such a direct connection might give you more speed.
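A direct connection is simpler than it sounds: modern Ethernet ports auto-negotiate, so a plain cable between the two machines works without a crossover cable or switch. A minimal sketch, assuming the interface is named `eth0` on both ends and the 10.0.0.0/24 range is unused on your network:

```shell
# On the first machine: give the directly cabled NIC a static address
sudo ip addr add 10.0.0.1/24 dev eth0
sudo ip link set eth0 up

# On the second machine: same, with a different address in the subnet
sudo ip addr add 10.0.0.2/24 dev eth0
sudo ip link set eth0 up

# Verify the point-to-point link from the first machine
ping -c 3 10.0.0.2
```

With the link up, `ssh -X me@10.0.0.2` (or an X2Go session to that address) bypasses your router entirely.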

Again, with this concept you are not running the same operating system on two PCs, so there is no central resource management. But you will still be able to get more power than from just one PC, and the user-interface experience will be close to that of using a single operating system on one machine.

user1796624

Yes, it is possible. There are several such single-system-image solutions floating around, but the open-source ones are mostly deprecated and unmaintained. Here are some of the more notable examples:

Proprietary:

  • [MOSIX](https://en.m.wikipedia.org/wiki/MOSIX)

Open source:

  • [openMosix](https://en.m.wikipedia.org/wiki/OpenMosix)
  • [Kerrighed](https://en.m.wikipedia.org/wiki/Kerrighed)
  • [OpenSSI](https://en.m.wikipedia.org/wiki/OpenSSI)

The real reason development drifted away from this sort of technology is that it became less and less relevant as CPUs gained more and more cores and servers gained more and more sockets. Clustering of this sort made sense until the late '90s, when CPUs had only one core and servers with more than one socket were exotic and expensive. It is questionable how useful this technology is today, when you can easily and affordably get a CPU with 64 cores and a server with two such sockets capable of driving terabytes of RAM. It's interesting as a research toy, but of limited practical use.

Gordan Bobić