The Distribware platform scales out both processing power and storage capacity by adding nodes (servers) to its various server farms. Scalability of this kind is of little value without fault tolerance (reliability) in the form of redundant nodes; in this way, scalability and reliability are related, yet remain independent concerns. A large Distribware system will have redundancy both within each location and between locations.
A system can scale out by adding more locations, or by adding more nodes to the farms within a location. Adding more locations is really only practical for small-scale systems. For larger systems, two locations is usually optimal; further scalability comes from adding more nodes to the farms within each location.
Processing nodes (web service APIs plus their matching Windows services) scale out as stateless server farms. In a typical web server farm each server is a generic clone of all the other nodes, but in a Distribware system that is not necessarily the case: specific web services can be deployed individually to any desired subset of processing nodes in the farm. Thanks to the load balancers, adding more nodes to a stateless web server farm increases both the reliability and the scalability of the system at the same time.
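Because the farm is stateless, a load balancer can send any request for a service to any node that currently hosts that service. A minimal round-robin sketch of that idea (the service and node names here are invented for illustration, not part of Distribware):

```python
import itertools

class FarmBalancer:
    """Round-robin dispatch across the processing nodes that host each service.

    Each service may be deployed to only a subset of the farm, so the
    balancer keeps a separate rotation per service name.
    """
    def __init__(self, nodes_by_service):
        # Map each service name to a cycling iterator over its hosting nodes.
        self._cycles = {svc: itertools.cycle(nodes)
                        for svc, nodes in nodes_by_service.items()}

    def next_node(self, service):
        # Return the next node in the rotation for this service.
        return next(self._cycles[service])

# Hypothetical deployment: OrderSvc on three nodes, AuditSvc on a subset.
balancer = FarmBalancer({
    "OrderSvc": ["node1", "node2", "node3"],
    "AuditSvc": ["node2", "node3"],
})
```

Adding a node to a service's list increases both capacity (one more rotation slot) and reliability (one more redundant copy), which is the point made above.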
Database storage nodes are different from the stateless web servers, but can still scale out as shards in their own “farm”. Mirrored databases are limited to a pair of servers, and their data is confined to a single location. Replica databases are similarly limited by convention to a pair of servers in each location (to simplify the failover logic in the APIs), but span multiple locations in the system. Either type of database can be sharded, which requires only a few modifications to the data access classes for the specific database. Inserts use a random “scatter” mechanism across the databases in the write list, while complex queries use a “gather” mechanism that runs the same query in parallel against all of the databases in the read list and then unions the result sets. Single-row selects, updates, and deletes all target the specific shard that contains the record.
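The scatter and gather mechanisms above can be sketched as follows. The `Shard` class and its in-memory row storage are hypothetical stand-ins for the real data access classes; only the shape of the logic (random choice on insert, parallel fan-out plus union on query) comes from the description above:

```python
import random
from concurrent.futures import ThreadPoolExecutor

class Shard:
    """Stand-in for one shard database reachable through a data access class."""
    def __init__(self, name):
        self.name, self.rows = name, []
    def insert(self, row):
        self.rows.append(row)
    def query(self, predicate):
        return [r for r in self.rows if predicate(r)]

def scatter_insert(write_list, row):
    """Insert each new row into one randomly chosen shard from the write list."""
    shard = random.choice(write_list)
    shard.insert(row)
    return shard

def gather_query(read_list, predicate):
    """Run the same query against every shard in parallel, then union the results."""
    with ThreadPoolExecutor(max_workers=len(read_list)) as pool:
        result_sets = pool.map(lambda s: s.query(predicate), read_list)
    union = []
    for rs in result_sets:
        union.extend(rs)
    return union
```

Single-row operations would bypass both methods and go straight to the one shard that holds the record.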
All of the simple backup logic for mirrors and replicas is made possible by a combination of techniques borrowed from MVCC (multiple versions of each row), functional programming (immutable rows, append-only tables), and current thinking on distributed system design (race and repair, etc.).
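A minimal illustration of the append-only, multi-version idea referenced above; the table structure and method names are assumptions for the sketch, not Distribware's actual schema:

```python
class AppendOnlyTable:
    """Rows are immutable: an update appends a new version of the row,
    and a delete appends a tombstone. Nothing is ever rewritten in place,
    which is what makes copying for mirrors/replicas simple."""
    def __init__(self):
        self.log = []  # entries: (key, version, payload, deleted)

    def _next_version(self, key):
        return 1 + max((v for k, v, _, _ in self.log if k == key), default=0)

    def upsert(self, key, payload):
        self.log.append((key, self._next_version(key), payload, False))

    def delete(self, key):
        # A delete is just another appended version, flagged as a tombstone.
        self.log.append((key, self._next_version(key), None, True))

    def current(self, key):
        versions = [e for e in self.log if e[0] == key]
        if not versions:
            return None
        _, _, payload, deleted = max(versions, key=lambda e: e[1])
        return None if deleted else payload
```

Because the log only grows, a backup or repair process can copy any missing tail of entries without coordinating with writers; readers always resolve the latest version per key.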
Distribware – Configuration Details
Multi-tenancy is achieved by sharing host servers among multiple owners (tenants). A database server can host many databases, a web server can host many web applications, and so on. This requires a set of naming conventions to keep track of which tenant owns what on each server. The names of databases, web apps, file repositories, etc. must be unique on the host they share, and database names are additionally subject to the database sharding convention. The conventions are as follows:
Schema name = what the name of the database would have been in a non-distributed environment.
Shard name = schemaname(#), which represents a segment (partition) of the data in the overall schema.
Example: SecDB1, SecDB2, SecDB3 …
Database name = ownername_shardname, which represents a database or pair of databases in a location.
Web app/svc name = ownername_appname(#)
Win svc name = ownername_svcname(#)
Important note: an optional a/b suffix at the end of the shard name is only used when multiple shards are hosted on the same server. Otherwise, the shard name is the database name, and will be identical on two different host servers in the location.
For both the web and Windows service conventions, the final numeric suffix allows multiple web services and/or Windows services of the same type to be installed on a single host server, all belonging to the same owner (purely optional, but useful for more fully utilizing the resources of some host servers). In other configuration data, the entities listed above must be globally unique within the system. This requires an additional convention, as follows:
RecSource = location-host-dbname
WebSvcName = location-host-websvcname
WinSvcName = location-host-winsvcname
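The naming conventions above can be expressed as simple string builders. The owner, location, and host values in the example are made up; only the `schemaname(#)`, `ownername_shardname`, and `location-host-dbname` patterns come from the conventions themselves:

```python
def shard_name(schema, shard_no, suffix=""):
    # schemaname(#), e.g. "SecDB2"; the a/b suffix is only added when two
    # shards of the same schema share one host server.
    return f"{schema}{shard_no}{suffix}"

def database_name(owner, schema, shard_no, suffix=""):
    # ownername_shardname: unique on the host the tenants share.
    return f"{owner}_{shard_name(schema, shard_no, suffix)}"

def global_db_name(location, host, owner, schema, shard_no, suffix=""):
    # location-host-dbname: globally unique within the whole system.
    return f"{location}-{host}-{database_name(owner, schema, shard_no, suffix)}"
```

The same two-level pattern (host-unique name, then `location-host-` prefix for global uniqueness) applies to the web service and Windows service names.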
- Replication should only copy data between identical shard databases – it is never used across shards.
- Only replication can copy data between different locations, and requires the replica schema to do so.
- The number of records in a single replication “batch” should be adjusted so that the size of the collection (DataTable) stays below 85 KB whenever possible (objects of 85,000 bytes or more are allocated on the .NET large object heap).
- Both mirroring and replication are designed to work with either one or two databases/servers configured as a shard. Any more than two copies will be ignored by the API logic.
- Configuration data is confined to a single location. The only exception is AppSettings, which are global in scope (and copied between locations as part of the SecDB by replication).
- API logic references a schema name for operations such as the scatter and gather methods that work across all the shards in a schema.
- API logic references a specific shard name for more specific actions such as single record selects, updates and deletes.
- API logic references a specific database name for replication methods, verification methods, etc.
- The scatter and gather methods are only used once more than one shard exists for a schema. The logic for specific shards and databases remains unchanged.
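Assuming a configuration shaped roughly like the conventions above (the map below is invented for illustration), the three levels of reference used by the API logic (schema, shard, and database) might be resolved like this:

```python
# Hypothetical configuration: a schema maps to its shards, and each shard
# maps to the physical databases (up to two per location) holding its copies.
CONFIG = {
    "SecDB": {
        "SecDB1": ["East-Srv01-Acme_SecDB1", "East-Srv02-Acme_SecDB1"],
        "SecDB2": ["East-Srv03-Acme_SecDB2", "East-Srv04-Acme_SecDB2"],
    },
}

def shards_for(schema):
    """Schema-level reference: scatter/gather operates across all shards."""
    return list(CONFIG[schema])

def databases_for(schema, shard):
    """Shard-level reference resolved down to the databases backing one shard.
    Per the rules above, any copies beyond the first two are ignored."""
    return CONFIG[schema][shard][:2]
```

A single-record select, update, or delete would call `databases_for` for the one shard that holds the record, while replication and verification methods would address one specific database name from that list.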