This article describes various management scenarios.


Install Local

  1. Navigate to
  2. Sign in (this will be the admin account on the arcology)
  3. Enter a (unique) arcology name
  4. Download and run Arcology.exe
  5. Arcology asks for:
    1. Sign in
    2. Arcology name (if more than one)
  6. AI2 gets installed automatically (via
  7. AI2 is available on localhost (by default), and the user can configure everything else through it.


  • If an arcology is already established, there should be a command in AI2 that connects it to (registers the unique arcology name).
  • There should be a way to federate user accounts so that you can use an arcology account in (perhaps with the arcology name as a prefix).
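The unique-name registration step above could be sketched like this. This is a hypothetical illustration; the registry class, method names, and uniqueness rules are assumptions, not the actual installer behavior:

```python
# Hypothetical sketch of the unique-arcology-name check described above.
# The class and its behavior are assumptions, not the real API.

class ArcologyRegistry:
    """Tracks registered arcology names and enforces uniqueness."""

    def __init__(self):
        self._names = set()

    def register(self, name: str) -> bool:
        """Register a new arcology name; returns False if the name is taken."""
        key = name.strip().lower()
        if not key or key in self._names:
            return False
        self._names.add(key)
        return True

registry = ArcologyRegistry()
assert registry.register("Kronosaur")      # first registration succeeds
assert not registry.register("kronosaur")  # duplicates (case-insensitive) are rejected
```

The case-insensitive comparison is one plausible way to keep names unique for human readers; the real rule might differ.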

Account Connections

We keep a list of accounts that we connect to and provide access via APIs/etc. For example:

  • AWS account to manipulate instances, containers, DNS, etc.
  • Azure account
  • Email account (e.g., Google account to send email)
  • GitHub account (?)
  • Twitter account to send out notifications, etc.
  • NetworkSolutions account to modify DNS, etc.

Basically, anytime we need to connect to a service that requires a username and password, we create an account connection. We should keep track of all the places that we're using an account connection.

NOTE: Account connections are orthogonal to API drivers. For example, the AWS account is used by multiple drivers: a container instance driver, a storage driver, a DNS driver, etc.

NOTE: We can have multiple accounts of the same type. For example, we might have multiple Gmail accounts connected. Accounts can enforce permissions, so that only certain services can access them.

NOTE: We should support an arbitrary number of connections, in case we want users to connect their accounts. Again, we rely on access control to determine which users have the right to use which accounts.
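The notes above (permissions per account, tracking every place a connection is used, multiple accounts per type) suggest a record shaped roughly like this. Field and method names are illustrative assumptions, not the real schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an account-connection record, based on the notes
# above; field names are assumptions, not the actual table layout.

@dataclass
class AccountConnection:
    provider: str                       # e.g., "AWS", "Gmail"
    label: str                          # distinguishes multiple accounts of one type
    allowed_services: set = field(default_factory=set)  # permission allow-list
    used_by: set = field(default_factory=set)           # every place it's in use

    def authorize(self, service: str) -> bool:
        """A service may use this connection only if it is on the allow-list."""
        if service in self.allowed_services:
            self.used_by.add(service)   # track where the connection is used
            return True
        return False

aws = AccountConnection("AWS", "prod",
                        allowed_services={"dns-driver", "storage-driver"})
assert aws.authorize("dns-driver")
assert not aws.authorize("billing-report")  # not on the allow-list
assert aws.used_by == {"dns-driver"}
```

Note how this keeps the orthogonality described above: the single "AWS/prod" connection is shared by several drivers, each authorized independently.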

See also

Modules & Machines

  • We have a list of hostnames that we listen on (from the services). We should do a DNS lookup for each hostname and see which machine we connect to. Then we can create a map of hostname to machine.
  • We can show appropriate errors if a particular hostname doesn't point to a machine in the arcology.
  • Hyperion should probably be available on all machines (at least optionally).
  • should have a list of available modules.
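The hostname-to-machine mapping in the first bullet could be sketched as follows. This assumes we already know the IP addresses of the arcology's machines; the function name and error format are made up for illustration:

```python
import socket

# Sketch of the hostname-to-machine map described above: resolve each
# hostname we listen on and match it to a known arcology machine.

def map_hostnames(hostnames, arcology_machines):
    """Return (hostname -> machine) plus errors for hostnames that
    fail DNS or point outside the arcology."""
    mapping, errors = {}, []
    for host in hostnames:
        try:
            ip = socket.gethostbyname(host)
        except socket.gaierror:
            errors.append(f"{host}: DNS lookup failed")
            continue
        if ip in arcology_machines:
            mapping[host] = arcology_machines[ip]
        else:
            errors.append(f"{host}: points to {ip}, not a machine in this arcology")
    return mapping, errors

machines = {"127.0.0.1": "local-machine"}
mapping, errors = map_hostnames(["localhost"], machines)
assert mapping == {"localhost": "local-machine"}
```

The errors list is what would back the "appropriate errors" in the second bullet: a hostname that resolves, but to an address we don't manage, is reported rather than silently ignored.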

Packages & Services

  • We need a table tracking service configuration for the arcology.
  • We can configure each top-level type independently. For example, we have entries for Arc.service, Arc.table, etc.
  • We can install new services from or upgrade existing services.
  • Services can require modules; when installing a new service, we install required modules.
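The last bullet (services require modules; installing a service installs its missing modules) amounts to a small dependency computation. Service and module names below are invented for illustration:

```python
# Sketch of resolving required modules when installing a service.
# The names here are placeholders, not real catalog entries.

def modules_to_install(service, requirements, installed_modules):
    """Return the modules a service requires that are not yet installed."""
    required = requirements.get(service, set())
    return required - installed_modules

requirements = {"chat-service": {"db-module", "queue-module"}}
installed = {"db-module"}
assert modules_to_install("chat-service", requirements, installed) == {"queue-module"}
```

A fuller version would recurse if modules themselves can have dependencies; this sketch assumes one level, since the notes don't say otherwise.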


  • We keep track of every arcology in a table on
  • Each arcology connects up to to get configuration information (such as Config.ars). This is also the easiest way to set up additional machines for an arcology.
  • You should be able to access AI2 from for your arcology. I.e., when you sign in to AI2, it goes to the appropriate arcology (perhaps via redirect or via internal routing).
  • You can deploy a new arcology easily from We have special code to handle AWS, Azure, and self-hosted servers.
  • Each arcology has a public ID to identify it to the world (and perhaps a unique human-readable name/address).
  • The Hexarc code (which is free/open source) does not require the service, but it works much better with it (including upgrades, etc.).
  • The Hexarc service is free for non-commercial deployments, but we charge a monthly fee for commercial use.
  • Of course, we would self-host the Kronosaur arcology.
  • would also be the source for binary upgrades and for service upgrades. I.e., it would have a catalog of services available to any arcology (potentially commercial).
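A registry row for the table of arcologies above might look roughly like this. The field names, and the use of a UUID for the public ID, are assumptions for illustration, not a decided design:

```python
import uuid

# Hypothetical sketch of a central-registry entry for one arcology;
# fields are drawn from the bullets above, not an actual table layout.

def new_arcology_record(name, host):
    """Create a registry row with a world-unique public ID."""
    return {
        "public_id": str(uuid.uuid4()),  # identifies the arcology to the world
        "name": name,                    # human-readable name/address
        "host": host,                    # where configuration (e.g., Config.ars) is fetched
        "services": [],                  # catalog services installed on this arcology
    }

rec = new_arcology_record("kronosaur", "example-arcology-host")
assert len(rec["public_id"]) == 36       # canonical UUID string
```

The public ID being machine-generated (and the name being separate) matches the bullet that an arcology has both a world-facing ID and perhaps a human-readable name.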