The information infrastructure of an organization evolves in much the same way as its software does. Any change to the infrastructure has to pass through a series of stages in a defined lifecycle until it finally reaches production. A typical lifecycle includes stages such as development, staging and production. In the Denodo ecosystem, each of these stages is called an environment.
In terms of composition, an environment is a set of servers, of the same or different types, working together for a common purpose. For example, an environment can be composed of several Virtual DataPort servers, one Scheduler server and one database server acting as the data cache. An environment also includes all the resources and data sources that its servers depend on.
Inside an environment, servers are organized into one or several clusters, with a load balancer per cluster. Every request that enters a cluster is preprocessed by the load balancer, which decides which server in the cluster will process the query. This is how organizations guarantee high availability in their systems.
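The dispatching role of the load balancer can be sketched with a generic round-robin policy. This is only an illustration of the idea, not the Solution Manager's or any specific load balancer's actual behavior; the server names are made up.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hypothetical round-robin dispatcher: each incoming query is
    handed to the next server in the cluster's rotation."""

    def __init__(self, servers):
        self._servers = cycle(servers)

    def route(self, query):
        # Pick the next server in the rotation and pair it with the query.
        server = next(self._servers)
        return server, query

# Three illustrative Virtual DataPort servers behind one balancer.
balancer = RoundRobinBalancer(["vdp-1", "vdp-2", "vdp-3"])
targets = [balancer.route(f"q{i}")[0] for i in range(4)]
# The fourth query wraps around to the first server again.
```

Real load balancers typically add health checks and weighting on top of a base policy like this, so an unresponsive server is skipped rather than blindly receiving its turn.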
Note that, for this to work, all the servers in the same environment have to share the same metadata, since they operate on the same resources and data sources. However, each environment manages its own set of data sources and resources, so server metadata differs from one environment to another.
The Solution Manager allows you to promote changes from one environment to the next one in the lifecycle. Since every environment has different needs in terms of consistency and service interruption, the Solution Manager implements several strategies for deploying changes, and each environment can be configured with its own deployment strategy.
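The trade-off behind deployment strategies can be illustrated with a generic rolling update, where servers are updated one at a time behind the load balancer so the rest keep serving queries. This is a conceptual sketch under assumed names, not the Solution Manager's actual implementation of its strategies.

```python
def rolling_deploy(servers, apply_update):
    """Illustrative rolling deployment: update one server at a time so
    the remaining servers keep answering queries (no full outage)."""
    for server in servers:
        # 1. Drain: the load balancer stops routing new queries here.
        # 2. Apply the metadata update to this server only.
        apply_update(server)
        # 3. Re-enable the server before moving on to the next one.

# Record the order in which (hypothetical) servers get updated.
updated = []
rolling_deploy(["vdp-1", "vdp-2"], updated.append)
```

A full-stop strategy would instead update every server at once: simpler and more consistent, but at the cost of a service interruption, which is why the choice is made per environment.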
A cluster is a group of Denodo servers that belong to the same environment.
To guarantee high availability, production environments are organized into one or several clusters behind a load balancer that decides which server will process each incoming request.
Moreover, production environments should provide low latency. Organizations meet this requirement by distributing several clusters geographically. For example, they may have one cluster in North America, another in Europe and a third in China.
A typical cluster includes several Virtual DataPort servers, one Scheduler server and one database server acting as the data cache.
All the Virtual DataPort servers in the same environment share the same metadata. This means that, before promoting changes from another environment, you have to define, at the environment level, the properties required to execute a deployment on the Virtual DataPort servers.
On the other hand, the metadata of the Scheduler server is shared at the cluster level, since it references servers or data sources local to the cluster. Therefore, before promoting changes from another environment, you have to define, on each cluster, the properties required to execute a deployment on the Scheduler servers.
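The two scopes described above can be modeled as a simple data structure: Virtual DataPort deployment properties live once per environment, while Scheduler properties live on each cluster. All names, hosts and property keys below are illustrative assumptions, not actual Denodo configuration.

```python
# Hypothetical model of property scoping (illustrative values only).
environment = {
    "name": "production",
    # Shared by every cluster in the environment (Virtual DataPort scope).
    "vdp_properties": {"CACHE_DB_URI": "jdbc:postgresql://cache-db:5432/cache"},
    # Defined per cluster (Scheduler scope).
    "clusters": {
        "us-east": {"scheduler_properties": {"SCHEDULER_HOST": "sched-us.example.com"}},
        "eu-west": {"scheduler_properties": {"SCHEDULER_HOST": "sched-eu.example.com"}},
    },
}

def deployment_properties(env, cluster_name):
    """Merge the environment-wide properties with the properties of one
    cluster, mirroring the two scopes used during a deployment."""
    props = dict(env["vdp_properties"])
    props.update(env["clusters"][cluster_name]["scheduler_properties"])
    return props
```

For example, `deployment_properties(environment, "eu-west")` yields the shared cache URI plus the Scheduler host that is local to the European cluster.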
The Solution Manager can work in two modes:
Standard: the intended mode for on-premises clusters. You have to add the cluster resources manually.
This mode is explained in Standard Mode.
Automated: all the resources are managed by the Solution Manager. You only have to set the desired capacities.
This mode is explained in Automated Cloud Mode (AWS).