Automated deployment and testing

Docker Deployment

There are many projects that allow you to configure and manage the automated deployment of a service. From our experience, we have found that the small learning curve, mature tooling, and widespread ecosystem support of Docker make for a relatively painless and sufficiently reliable deployment.

🚧

In Rosetta, blockchain teams are expected to create and maintain a single Dockerfile (referencing any number of build stages) that starts the node runtime and all of its dependent services without human intervention.

At first glance, using a single Dockerfile to start all services required for a particular API (e.g. the node runtime and an indexer DB) may sound antithetical to the usual one-process-per-container convention. However, we have found that restricting deployment to a single container makes the orchestration of multiple nodes much easier because of coordinated start/stop and single volume mounting.
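
To make this concrete, a single Dockerfile for such an image might look roughly like the following sketch; the build stages, package names, paths, and entrypoint script here are hypothetical and will differ per blockchain:

```dockerfile
# Build stage: compile the node binary (repository layout is hypothetical)
FROM golang:1.18 AS builder
WORKDIR /src
COPY . .
RUN make build

# Runtime stage: bundle the node and every dependent service in one image
FROM ubuntu:20.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql \
    && rm -rf /var/lib/apt/lists/*
COPY --from=builder /src/bin/corenode /usr/local/bin/corenode
COPY entrypoint.sh /usr/local/bin/entrypoint.sh

# All state is written under a single mount point
VOLUME ["/data"]

# One entrypoint starts (and stops) every service without human intervention
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
```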

Coordinated Start/Stop

Some blockchain nodes rely on a number of dependent services to function correctly. These nodes often require an explicit startup and shutdown sequence to work properly and/or prevent state corruption. With distributed services (in multiple running containers), this sequencing of operations can require a custom deployer for each blockchain. Building and maintaining these deployers takes extensive communication with blockchain teams and complicated testing to ensure correctness.

With a single Dockerfile, blockchain teams can explicitly specify how services should be started and stopped using scripts, and can easily test for issues in various scenarios (as all services are confined to a single container instead of being spread across multiple systems).
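
As a sketch, such an entrypoint script might start dependencies first, then the node runtime, and trap termination signals so that shutdown happens in reverse order; the service commands and flags below are hypothetical:

```sh
#!/bin/sh

# Start dependent services first, then the node runtime
# (pg_ctlcluster and the corenode flags are hypothetical examples)
pg_ctlcluster 14 main start
corenode --datadir /data/node &
NODE_PID=$!

# On termination, stop services in reverse order to avoid state corruption
shutdown() {
  kill "$NODE_PID" 2>/dev/null
  wait "$NODE_PID"
  pg_ctlcluster 14 main stop
  exit 0
}
trap shutdown TERM INT

# Keep the container in the foreground while the node runs
wait "$NODE_PID"
```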

Middleware vs Embedded

Blockchain teams that do not wish to extend a core node to comply with the Rosetta API specifications can implement a middleware server that transforms native responses to the Rosetta format (as long as this additional service's startup and shutdown are coordinated by the Docker implementation).
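
Under this approach, the middleware simply becomes one more service in the coordinated startup and shutdown sequence; a hedged excerpt (binary names and flags are hypothetical):

```sh
# entrypoint.sh excerpt: start the unmodified core node, then the
# translation middleware that exposes the Rosetta API in front of it
corenode --rpc-addr 127.0.0.1:8545 &
rosetta-middleware --node-url http://127.0.0.1:8545 --listen 0.0.0.0:8080 &
```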

That being said, multiple teams initially chose this route but reversed course to avoid the increased maintenance burden of managing an interface in an external package (especially across upgrades of the core node).

Single Volume Mounting

When a deployment is started from a single Dockerfile, it is straightforward to mount a single volume to the new container and manage all of its state. Node deployments can be easily scaled by duplicating this volume to any number of new hosts without any sophisticated tooling. As mentioned previously, coordinated start/stop of all services provides strong guarantees against state corruption that would be much more difficult to achieve with distributed services, where specific ordering restrictions may apply.
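
For example, starting an implementation with all of its state on one volume, and snapshotting that volume to seed new hosts, requires nothing beyond standard Docker commands (image and volume names here are hypothetical):

```sh
# Run the implementation with all state on a single volume mounted at /data
docker run -d --name rosetta-node -v rosetta-data:/data rosetta-image

# Snapshot the volume with a throwaway container; the archive can be copied
# to any number of new hosts to bootstrap additional deployments
docker run --rm -v rosetta-data:/data -v "$(pwd)":/backup ubuntu \
  tar czf /backup/rosetta-data.tar.gz -C /data .
```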

Running multiple instances of a node configuration can get complicated quickly if the node utilizes multiple stateful containers (e.g. a node that stores historical state in an external database). In this scenario, the node orchestration engine must track which deployment talks to which services based on the state each node runtime is in. Scaling up is also more time-intensive: either a new deployment and all of its services must be synced from scratch to ensure correctness, or the volumes of another deployment's stateful containers must be used to bootstrap the new deployment (which can be a manual procedure).

Stateful Data API Implementations are OK

To efficiently populate responses, it may be necessary to preprocess and cache data in a Rosetta Data API implementation. For example, a Bitcoin Data API server may cache transaction outpoint information (otherwise n requests to the node may be needed to fully populate a transaction, where n is the number of inputs in that transaction).

🚧

There is no reason that additional caches (outside of what is stored by the node) cannot be used in a Data API implementation. The only expectation is that any state is stored in the /data directory (as mentioned previously) and that cache migrations are handled gracefully.
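
One way to handle cache migrations gracefully is to version the cache on disk and rebuild it whenever the format changes; a minimal sketch, in which the version file and cache path are hypothetical:

```sh
# Rebuild the preprocessed cache when its on-disk format version changes
EXPECTED_CACHE_VERSION=2
CACHE_DIR=/data/cache

if [ "$(cat "$CACHE_DIR/VERSION" 2>/dev/null)" != "$EXPECTED_CACHE_VERSION" ]; then
  rm -rf "$CACHE_DIR"
  mkdir -p "$CACHE_DIR"
  echo "$EXPECTED_CACHE_VERSION" > "$CACHE_DIR/VERSION"
  # The Data API server repopulates the cache from the node on startup
fi
```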

Stateful implementations of the Construction API, however, are prohibited. Clients accessing a Construction API implementation will provide all information necessary to construct a valid transaction.

Automated Testing

Integrating with a blockchain quickly is pointless if that integration isn't reliable or becomes less reliable across updates. From inception, the Rosetta API was designed to support comprehensive automated testing with little effort from teams building a Rosetta implementation or developers looking to integrate.

rosetta-cli

While developing the Rosetta API, we produced an automated testing tool called the rosetta-cli. This tool tests both the correctness of a Data API implementation (i.e. are responses from the implementation formatted correctly?) and its consistency (i.e. does the balance computed from all operations equal the balance returned by the node?). We are actively developing a similar testing suite for the Construction API.
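
For example, a Data API implementation running locally can be checked with a single command (the command syntax below follows recent rosetta-cli releases; the configuration file contents are implementation-specific):

```sh
# Run correctness and consistency checks against a Data API implementation
rosetta-cli check:data --configuration-file rosetta-config.json
```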

🚧

If you develop a testing tool while working on a Rosetta API implementation, please share it with us at [email protected] or in the community!