Nemo is a Hexarc module that implements distributed memory blocks. It is designed to be accessed by distributed computation processes.

Some properties of this system:

  • Designed for non-persistent data, i.e., data that is being manipulated. For persistent storage we still need Aeon.
  • Accessible to all modules on a machine (unlike Datum). We use a shared file mapping so that all modules can access the same memory blocks (see the sketch after this list).
  • Universal addresses (i.e., a way to specify the address of any byte on any machine).
  • Resilient to machine failure: if a machine dies, we can detect it and recover. For now we do not guarantee that no data will be lost (it is up to the caller to reload the data from Aeon), but we could eventually implement redundancy.
  • We expose locality (i.e., which machine the memory is on) to callers and give them tools to easily move data from machine to machine.
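
The shared file mapping could look something like the sketch below. This is a minimal illustration only: the function name is hypothetical, and it assumes a POSIX host (shm_open/mmap); a Windows implementation would use a different primitive such as CreateFileMapping.

    #include <cstddef>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    // Open (or create) a named shared mapping that any module on the machine
    // can map into its own address space; returns the base address of the
    // block, or nullptr on failure.
    void *MapSharedBlock(const char *name, size_t size)
    {
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd < 0)
            return nullptr;

        // Size the backing object to the requested block size.
        if (ftruncate(fd, (off_t)size) != 0) {
            close(fd);
            return nullptr;
        }

        void *block = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);  // the mapping remains valid after the descriptor is closed
        return (block == MAP_FAILED) ? nullptr : block;
    }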

Motivation

We will have several different distributed abstractions. For example, an abstraction for multidimensional arrays will be different from one for rendering images. Assume that we have an archon for each type of abstraction. Does having a common data store help us?

Imagine a multidimensional array archon. It would need to allocate storage for each array across machines. In most cases we might need to split up the array into pieces and have each piece processed on a different machine. We would need methods to track which data is on which machine, to move data from machine to machine, and to control or synchronize operations on data across machines.

Similarly, imagine a large database spread across multiple machines. We'd allocate a portion of the database on each machine and do distributed searches and sorting. Again we'd need a system for tracking memory allocations across machines, maybe even redundantly.

Lastly, imagine if we end up with common data structures (tables, arrays, etc.) and want multiple processing systems to operate on them across machines. Again, it would make sense to have a distributed memory system like Nemo.

Implementation Notes

Blocks

The unit of memory is a block. A block is a contiguous range of memory living on a specific machine. It has a BlockID, which is unique in the arcology. A BlockID consists of a machine index and a block index (on that machine).

  • 16 bits for the machine ID (65,536 machines)
  • 16 bits for the block ID (65,536 blocks per machine)
  • 32 bits for the offset (4 GB per block)

Thus a 64-bit universal address (machine ID, block ID, and offset) identifies any byte in the arcology. Eventually, we can expand the number of bits for the machine ID, etc.
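
As a minimal sketch (the type and function names here are illustrative, not actual Nemo APIs), the address could be packed and unpacked like this:

    #include <cstdint>

    typedef uint64_t NemoAddr;   // [16-bit machine | 16-bit block | 32-bit offset]

    // Pack a machine ID, block ID, and byte offset into a universal address.
    inline NemoAddr MakeAddr(uint16_t machine, uint16_t block, uint32_t offset)
    {
        return ((NemoAddr)machine << 48) | ((NemoAddr)block << 32) | (NemoAddr)offset;
    }

    inline uint16_t AddrMachine(NemoAddr addr) { return (uint16_t)(addr >> 48); }
    inline uint16_t AddrBlock(NemoAddr addr)   { return (uint16_t)(addr >> 32); }
    inline uint32_t AddrOffset(NemoAddr addr)  { return (uint32_t)addr; }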

Block Collections

A block collection is a logical grouping of blocks distributed across multiple machines. A machine keeps track of its own block collections, and is responsible for timing them out if appropriate.

A caller can operate on the entire collection by specifying a message to be sent for each block (with that block's ID as a parameter). The results of those messages are then aggregated and returned to the caller.
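
Something like the following sketch could implement that fan-out-and-aggregate pattern. The names are hypothetical, the per-block dispatch is abstracted behind a callback rather than Hexarc's actual messaging primitives, and the aggregation here is a simple sum:

    #include <cstdint>
    #include <functional>
    #include <vector>

    struct BlockID {
        uint16_t machine;   // which machine owns the block
        uint16_t block;     // block index on that machine
    };

    // Fan out an operation to every block in the collection and fold the
    // per-block results into one aggregate value for the caller.
    int64_t OperateOnCollection(
            const std::vector<BlockID> &collection,
            const std::function<int64_t(BlockID)> &sendBlockMessage)
    {
        int64_t total = 0;
        for (const BlockID &id : collection)
            total += sendBlockMessage(id);  // one message per block
        return total;                       // aggregated result
    }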

Lifetime

One potential problem is dealing with lifetime. How will we know when we should free blocks? A few possibilities:

  • Maybe we keep a timestamp for the last time we accessed the block. After a certain amount of idle time, we expire the block. Clients would be responsible for sending keep-alive messages.
  • Maybe we tag each block with the set of users accessing the block. Clients release access when appropriate, and when there are no more references we delete the block. We would probably still need to implement a timestamp in case of leaks (see the sketch after this list).
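
The sketch below combines both ideas: a reference set released by clients, plus an idle timestamp as a backstop against leaked references. The type and field names are illustrative only:

    #include <chrono>
    #include <set>
    #include <string>

    struct BlockLifetime {
        std::set<std::string> users;                        // clients holding a reference
        std::chrono::steady_clock::time_point lastAccess;   // updated on every access/keep-alive

        void Touch() { lastAccess = std::chrono::steady_clock::now(); }

        void AddUser(const std::string &client) { users.insert(client); Touch(); }
        void ReleaseUser(const std::string &client) { users.erase(client); Touch(); }

        // Free the block if no one references it, or if it has been idle so
        // long that we assume the remaining references leaked.
        bool CanFree(std::chrono::seconds idleTimeout) const
        {
            bool idle = (std::chrono::steady_clock::now() - lastAccess) > idleTimeout;
            return users.empty() || idle;
        }
    };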

Abstractions

We may want to implement common abstractions on top of the blocks (or block collections). For example, maybe we implement multidimensional arrays as a native type. Similarly, we could implement tables as a native type.

This might also help when aggregating result data. For example, when a render farm generates an image, we might allocate a block to hold the resulting image and then send each machine a message (with image coordinates) asking it to render part of the image. The renderers would update the final image by calling Nemo.
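
A renderer might write its finished tile back into the image block along these lines. This is a sketch only: it assumes the block holds a row-major 32-bit RGBA image and uses a local buffer in place of Nemo's actual block access.

    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Copy a rendered tile into the shared image block, row by row.
    // imageBlock holds the full image (cxImage pixels wide); the tile is
    // cxTile x cyTile pixels and lands at (xTile, yTile).
    void WriteTile(std::vector<uint32_t> &imageBlock, int cxImage,
            int xTile, int yTile, int cxTile, int cyTile,
            const std::vector<uint32_t> &tilePixels)
    {
        for (int y = 0; y < cyTile; y++) {
            // Offset of this tile row inside the image block
            size_t dest = (size_t)(yTile + y) * cxImage + xTile;
            std::memcpy(&imageBlock[dest], &tilePixels[(size_t)y * cxTile],
                    cxTile * sizeof(uint32_t));
        }
    }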