Post Deployment Tuning
When you install the SanteDB iCDR server or any of the SanteDB solutions (such as SanteMPI or SanteIMS), the default installer uses a "quick start" configuration. These configurations are intended to get implementers up and running quickly; however, the default options will not result in optimal performance. A common source of system performance issues is the interfaces to external systems. To decouple SanteDB responsiveness from external systems and obtain the best possible performance, SanteDB makes use of message queue services to manage outbound messages. The default message queue installation supports all operating systems, but further optimization is possible by directing SanteDB to use third-party tools. Read on to learn more about the optimizations that are available.
The dispatcher queue service reliably stores outbound messages so that:
- In-process actions aren't slowed by the response time of an external system,
- Messages are delivered in a reliable fashion
The default dispatcher queue used by SanteDB on all installations is a file-system based queue. This provides simple, reliable message management in which each outbound request or queue entry is written and read as a file on the file system. This service is useful when:
- The server will have a low volume of messages sent to it
- The server will have low latency connections to any subscribed services
The file-based dispatcher queue, however, can severely impact performance on high-volume SanteDB iCDR services.
In installations where Microsoft Windows Server is used exclusively as the underlying operating system for SanteDB, it is recommended that Microsoft Message Queuing (MSMQ) be used for message queueing services. MSMQ offers extremely high performance and reliable message delivery, but is an operating-system-level service provided by Microsoft Windows Server. When deploying SanteDB in non-Windows or mixed operating system environments, please see the following section on RabbitMQ.
In an MSMQ deployment, the MSMQ dispatcher allows the SanteDB iCDR to connect to a Microsoft Message Queue server. This service allows SanteDB iCDR servers to queue messages to the MSMQ service on the local machine (`.\\$Private`) or on a remote machine (with appropriate configuration). MSMQ is suitable when:
- The server will have a moderate to high volume of traffic sent to it
- The server may have longer-lasting connections or unreliable connection to remote machines (i.e. queue lifetime may be longer)
- There is no need for failover or scaled queue functions
RabbitMQ is the most widely deployed open source message broker. For SanteDB deployments that use Linux, Docker, or a mixture of Linux, Docker, and Windows Server as their underlying operating environment, RabbitMQ is the recommended high-performance message passing service. RabbitMQ can also be used in a Windows Server environment if desired.
The RabbitMQ plugin is still currently under development with a targeted release date of Q1 2022. RabbitMQ will allow users of SanteDB iCDR on Linux, Docker or Windows to connect SanteDB to a RabbitMQ server to dramatically improve the performance of external interfaces.
The SanteDB iCDR server generates large volumes of audits. Depending on the load under which the server is running, it may be beneficial to split the logical roles of the SanteDB databases. In general, SanteDB databases should be split by role and, where possible, run on different physical hardware (or, in virtual machines, on different disk infrastructure).
When running SanteDB iCDR in a production context, it is recommended that implementers enable pooling and tune their connection settings to match their environment. For example, the deployment of PostgreSQL may be modified to enable larger connection pool allowances (concurrent connections). These settings are highly dependent on the context in which SanteDB is operating (how many clients, load, traffic patterns, etc.).
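As a rough illustration on the PostgreSQL side, connection limits and memory can be raised with `ALTER SYSTEM`. The values below are placeholders, not recommendations; size them to your own hardware, client count, and traffic patterns:

```sql
-- Placeholder values: tune to your environment.
ALTER SYSTEM SET max_connections = 200;   -- maximum concurrent client connections
ALTER SYSTEM SET shared_buffers = '2GB';  -- shared page cache size
-- Both of these settings take effect only after a PostgreSQL restart.
```

On the SanteDB side, the `pooling=true` connection string flag (shown in the failover example below) enables client-side pooling so connections are reused rather than re-opened per request.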
Implementers are encouraged to run their PostgreSQL infrastructure in failover clusters and (if possible) in a scaled deployment.
To configure SanteDB iCDR to use a PostgreSQL failover, you should modify the connection string parameter to be a list of server addresses separated by a comma, for example:
```
value="server=hotserver:5432,coldfailover:5432; database=x; user id=x; password=x; pooling=true;"
```
Implementers can use PostgreSQL streaming replication (unidirectional, real-time replication) or asynchronous replication to enhance performance. In SanteDB's configuration file, it is possible to specify separate read/write and read-only connection strings (such as the `readWriterConnectionString`). These two connection strings should point at a pool of read/write primary addresses and a pool of read-only secondary addresses, respectively.
Assuming that a scaled SanteDB iCDR deployment has four PostgreSQL servers (two read/write primaries and two read-only standbys), a configuration may be set as:
```xml
<add name="READWRITE" value="server=192.168.0.100,192.168.0.101; Target Session Attributes=primary ..." provider="Npgsql" />
<add name="READONLY" value="server=192.168.0.102,192.168.0.103; Target Session Attributes=prefer-standby;Load Balance Hosts=true" provider="Npgsql" />
```
SanteDB provides opportunities to partition large tables when deployed on PostgreSQL. Depending on the type of data and the volumes of data being stored, partitioning can allow PostgreSQL to break apart large tables into smaller tables based on attributes within the table.
In SanteDB there are several tables which are candidates for partitioning, and how an implementer partitions these tables will depend on their use case for SanteDB.
An example of partitioning the `act_vrsn_tbl` is provided in the file `ZZ-Partition-PSQL11.sql` (for PostgreSQL 11+) and `ZZ-Partition-PSQL10.sql` (for PostgreSQL 10).
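As a rough sketch of what such a script does, the following partitions a hypothetical version table by creation time using PostgreSQL declarative partitioning. The column names and ranges here are illustrative assumptions; the `ZZ-Partition-PSQL*.sql` scripts define the authoritative structure for SanteDB:

```sql
-- Illustrative only: a range-partitioned version table keyed on creation time.
-- Column names are assumptions; consult the ZZ-Partition-PSQL*.sql scripts
-- for the actual table definitions used by SanteDB.
CREATE TABLE act_vrsn_part_tbl (
    act_id  UUID NOT NULL,
    crt_utc TIMESTAMPTZ NOT NULL
    -- ... remaining version columns ...
) PARTITION BY RANGE (crt_utc);

-- One partition per year; PostgreSQL routes rows to the matching child table,
-- and queries filtered on crt_utc scan only the relevant partitions.
CREATE TABLE act_vrsn_2023 PARTITION OF act_vrsn_part_tbl
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');
CREATE TABLE act_vrsn_2024 PARTITION OF act_vrsn_part_tbl
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
```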
By default, SanteDB iCDR's database (on PostgreSQL and Firebird) has a series of check constraints which are used to validate the codification of fields and relationships. These checks are implemented in the database to prevent rogue SQL scripts from assigning, for example, a GenderConcept to a Status (i.e. the status of an Entity cannot be Male) and to prevent nonsensical relationships (e.g. an invalid relationship involving Good Health Hospital).
Depending on the resources available, and the maturity of your deployment (i.e. are all new enhancements tested prior to rollout), the size of the deployment (i.e. resources for the database server), and processes (i.e. can external objects add new data to the iCDR) you may wish to disable these checks.
To disable these checks:
```sql
ALTER TABLE public.ent_rel_tbl DISABLE TRIGGER ent_rel_tbl_vrfy; -- RELATIONSHIP VALIDATION

-- For Version 2.0.x, 2.1.x, 2.2.x
DROP FUNCTION ck_is_cd_set_mem CASCADE; -- DROPS SEMANTIC FIELD VALIDATION

-- For Version 2.3.x
DROP FUNCTION ck_is_cd_set_mem_with_null CASCADE; -- DROPS SEMANTIC FIELD VALIDATION
DROP FUNCTION ck_is_cd_set_mem CASCADE; -- DROPS SEMANTIC FIELD VALIDATION
```
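Should relationship validation later need to be restored, the disabled trigger can be re-enabled as shown below; the dropped validation functions, however, would need to be re-created from the original schema scripts:

```sql
-- Re-enable relationship validation on ent_rel_tbl.
ALTER TABLE public.ent_rel_tbl ENABLE TRIGGER ent_rel_tbl_vrfy;
```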
The contents of the PostgreSQL database are, by default, not encrypted. If your context requires encryption at rest, you may do so by placing sensitive data in SanteDB onto a tablespace which is stored on an encrypted volume.
To do this:
1. Create a data volume which will be used to store the PostgreSQL database
2. Format the new volume
3. Encrypt the new volume (for example, with BitLocker on Windows or LUKS on Linux)
4. Mount the new volume (in Windows as a drive letter, or on Linux as a mount point)
5. Create a new tablespace in PostgreSQL which is on the protected volume (`CREATE TABLESPACE encrypted LOCATION '/var/......';`)
6. Move tables which require encryption (or the entire database) to the new tablespace. For example, to place identifier values on the encrypted tablespace: `ALTER TABLE ent_id_tbl SET TABLESPACE encrypted;`
The default installation of several SanteDB solutions will enable services which may not be required in every jurisdiction. Disabling these services may drastically increase throughput of the solution.
You can disable the default privacy enforcement service (the DATA_POLICY feature) to increase query and write performance. This service attaches to all disclosure pathways in the SanteDB server and applies appropriate masking and filtering based on user roles. Disabling this service will result in the disclosure of all data (regardless of policy tagging); however, it does reduce the overhead of applying policies.
Disable this service when:
- Your deployment uses an upstream privacy control solution (i.e. there is an intermediary which applies policies)
- Your deployment has no need for special data protection policies
You can also disable the default data quality service. By default, SanteDB iCDR allows implementers to specify a series of data quality rules which are applied on all data entering the iCDR. The data quality validation service executes these rules and tags/flags failures (rather than rejecting the request entirely).
Disabling this service means that no data quality validation is performed. This may be suitable if:
- Your user interface already ensures that these rules are followed
- The solution has C# logic or a scheduled job which can run these rules in the background
SanteDB iCDR can be deployed in a distributed manner; however, there are some caveats to choosing this type of deployment which will impact the rollout of SanteDB.
The caching environment in a clustered SanteDB iCDR application server environment should be configured so that cached data remains consistent across all nodes (for example, by using a shared distributed cache rather than each node's in-process memory cache).
Currently, in SanteDB iCDR - the service hosts use a local configuration file to start up and read application configuration settings (the exception to this is the Docker host which uses environment variables for settings). If clustering your application servers, it is important that these files reflect the role of the application server in the SanteDB cluster, and that each configuration file in the cluster is synchronized based on that role.
For example, a clustered environment may be deployed with different application servers fulfilling different roles, each with a configuration file matched to that role.
At the time of writing, only file-system based match configuration services are supported by SanteDB iCDR. Since a distributed application environment would yield varying results as clients bounce between application servers, when using SanteDB in a distributed application server environment it is recommended that the match configuration service be set to a UNC network share (where each application server can read configuration data from a common directory).