Scenario Redundant with User Mail Replica#
This section describes a Carbonio infrastructure that builds on the Scenario Redundant and adds the components necessary to provide Component redundancy and User Mail Replica.
The number of required Nodes, the necessary steps, and the overall complexity involved require careful attention to each task that needs to be carried out.
The installation of this scenario can be carried out only by using Ansible, so if you do not have Ansible installed yet, please refer to Section Ansible Setup, where you will find directions for its setup.
This section covers the components required to set up the scenario, including load balancers, a Kafka cluster, a PostgreSQL cluster, a supported Object Storage system, and a multi-master Carbonio Directory Server. A step-by-step approach to setting up the Nodes, configuring centralised storage, and deploying User Mail Replica will guide you through the procedure.
Procedure Overview#
The procedure to install this scenario is long and complex; for simplicity, it is divided into various parts to make it easier to follow.
In the remainder of this page you find a scenario overview, requirements, and pre-installation tasks.
The rest of the procedure consists of dedicated, self-contained guides, one for each of the parts required to successfully complete the procedure and use the Carbonio infrastructure. In more detail:
Carbonio Preliminaries and Installation describes how to install the scenario proposed in this page
User Mail Replica Installation shows how to install the User Mail Replica Components and configure them
Pre-installation checks contains a number of commands to check the status of User Mail Replica and related services.
Note
The parts must be executed in their entirety and in the order given to successfully complete the procedure and start using the Carbonio infrastructure in this scenario.
We strongly suggest looking through the whole procedure to become acquainted with it and to make sure you have no doubts before actually starting the installation.
Scenario Overview#
To install Scenario Redundant with User Mail Replica in a Carbonio infrastructure, you need to ensure redundancy for all critical services.
In a Carbonio User Mail Replica setup, each Component except Monitoring is deployed redundantly across multiple Nodes. This setup guarantees continuous service availability, even in the event of individual Node failures. Below is the recommended Node distribution and configuration for each service to achieve redundancy and optimal performance, with centralised S3 storage.
Each service, except for the Cluster service, has a mirrored Node, creating a reliable failover configuration. The (Core) Cluster service provides all the functionalities of a Core Node (Database, Mesh Server, and Directory Service) plus the Kafka software, which provides high-reliability services used by Carbonio, namely stream processing and distributed synchronisation of configuration information. The Cluster service is configured on three Nodes to maintain quorum and prevent split-brain scenarios, ensuring stability in the environment.
Requirements#
Each Node must satisfy the overall Software Requirements and Hardware Requirements.
To implement a Redundant with User Mail Replica Carbonio infrastructure, load balancers are required in front of the services that should always be available. Load balancers are not included in Carbonio: an open-source or commercial balancer can be used, with the requirement that it support per-port TCP balancing (see the sketch after this list).
A working Kafka cluster is needed to transfer metadata between mailboxes and to simplify the cluster installation
A Postgres cluster setup
A supported Object Storage
An additional carbonio-directory-server Node configured in MultiMaster mode (mmr)
A centralised Primary storage. Please refer to the following sections to set it up, either from the Admin Panel or from the CLI.
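As an illustration of the per-port TCP balancing requirement, here is a minimal sketch of an HAProxy configuration fragment (HAProxy being just one possible open-source balancer; the Node addresses, names, and the choice of SMTP on port 25 below are hypothetical examples, not part of this scenario's reference layout):

    # /etc/haproxy/haproxy.cfg (fragment) -- illustrative sketch only
    # Per-port TCP balancing of SMTP traffic towards two hypothetical MTA Nodes
    frontend smtp_in
        bind *:25
        mode tcp
        default_backend mta_nodes

    backend mta_nodes
        mode tcp
        balance roundrobin
        # health-checked MTA Nodes; replace with your actual addresses
        server mta1 192.0.2.11:25 check
        server mta2 192.0.2.12:25 check

Any balancer that can forward arbitrary TCP ports in this fashion is equally suitable.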
Detailed Node Specifications#
To meet Redundant with User Mail Replica requirements, each Component should meet the following recommended specifications:
Component | Purpose | Configuration
---|---|---
Mail Transfer Agent (MTA) | Ensures continuous mail transfer and reception, preventing downtime | Both Nodes are identically configured to handle failover, so if one MTA Node experiences an issue, the other seamlessly takes over to maintain service continuity
Proxy | Manages incoming and outgoing client requests, providing customers with consistent access to mail services | Identical setup across both Nodes enables a smooth transition if the primary Node fails, ensuring uninterrupted access
Mailstore | Responsible for mailbox storage and retrieval, utilising centralised S3 storage to ensure continuous data availability | Both Nodes share the S3 storage, ensuring real-time data redundancy, so customer data is always accessible
Core Cluster Services [1] | Manage core functions for cluster maintenance, including high availability and distributed consensus | A three-Node setup prevents split-brain scenarios, ensuring uninterrupted services by maintaining quorum even if one Node goes down
Files, Preview, Tasks, and Docs | Supports document handling, previews, and other file-related functions | Redundant Nodes ensure that document services are always available, minimising any impact from Node failure
Video Services | Supports video functionality for user communication | Both Nodes provide redundancy of video services
Chats | Supports chat functionality for communication between users | Both Nodes provide redundancy of chat services
The following software packages installed on a Carbonio infrastructure do not support redundancy; therefore, only a single instance of each can be installed and run at a time within the infrastructure: carbonio-message-broker and carbonio-message-dispatcher are used internally by Carbonio, while the carbonio-certbot command is used to generate and renew the Let’s Encrypt certificates.
Centralised S3 Storage Requirements#
Storage Performance: A high-performance, centralised S3 storage solution is crucial for Carbonio Mailstore Nodes. The centralised storage must be fast enough to handle real-time data retrieval and storage across Nodes, ensuring that data access times remain consistent and efficient.
Shared Access: The S3 storage must be accessible to both Carbonio Mailstore Nodes, facilitating redundancy in data storage and minimising potential data loss in the event of a Node failure.
Pre-installation checks#
The following is a list of essential pre-installation checks that you should carry out to ensure your setup is properly configured for a Carbonio Redundant with User Mail Replica installation. After all the software and hardware requirements are satisfied, carry out these tasks before attempting the installation, together with a couple of checks to verify that you are ready to install Carbonio.
For the sake of simplicity, we consider a three-Node scenario: core.example.com, mbox.example.com, and video.example.com, with IP addresses 10.176.134.101, 10.176.134.102, and 10.176.134.103, respectively. These will be used in the remainder of this section.
Note
Some of the CLI commands presented here, even if they should be installed by default on your system, may not be available, but equivalent alternatives are given. You can always install them or use other commands that you feel more confident with.
You need to put the FQDN and IP address of each Node in the infrastructure in file /etc/hosts. For example, on core.example.com, /etc/hosts must contain a line like (the IP address comes first, followed by the FQDN):

10.176.134.101 core.example.com

Similarly for the other Nodes.
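As a minimal sketch, assuming the three example Nodes above, the entries can be appended to /etc/hosts on each Node as follows (adjust names and addresses to your infrastructure):

    # run as root on every Node; FQDNs and IPs are those of the example scenario
    cat >> /etc/hosts <<'EOF'
    10.176.134.101 core.example.com
    10.176.134.102 mbox.example.com
    10.176.134.103 video.example.com
    EOF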
If you plan to install commercial SSL certificates, make sure you receive them in PEM format. Instructions on the procedure to request a certificate and deploy it on Carbonio after the installation can be found in Section Deploy a Commercial SSL Certificate.
All Nodes must be able to communicate with one another.
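For example, a quick way to verify connectivity is to ping each of the other Nodes, shown here from core.example.com:

    # from core.example.com, check that the other Nodes are reachable
    ping -c 3 mbox.example.com
    ping -c 3 video.example.com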
System clocks on all Carbonio Nodes need to be synchronised, otherwise some services (for example external LDAP or AD authentication) may not work correctly. The operating system usually takes care of this, but you can manually verify that system times are synchronised and that the timezone is correct by using command timedatectl, which will output a number of useful data about the current time:
Local time: Wed 2025-03-12 14:06:30 UTC
Universal time: Wed 2025-03-12 14:06:30 UTC
RTC time: Wed 2025-03-12 14:06:30
Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
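If the clock turns out not to be synchronised, on most systemd-based distributions you can enable NTP synchronisation with timedatectl itself (assuming an NTP service such as systemd-timesyncd or chrony is installed):

    # enable NTP-based clock synchronisation (run as root)
    timedatectl set-ntp true
    # set the timezone if needed, for example to UTC
    timedatectl set-timezone Etc/UTC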
There are a few points to highlight about volumes and disk space:

The Nodes hosting the Mailstore & Provisioning Component must have the Primary storage mounted on /opt/.

The Nodes hosting the Database and Video Server Components must have enough disk space on the / or /opt directories. This is especially required when there are many mobile devices that use Carbonio's ActiveSync feature (Database) and when video meetings are often recorded (Video Server). Command df -h will output the size, usage, and other information about each of the mounted partitions on the system, as shown in the example below.
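As a quick check, you can restrict the output of df to the partitions of interest:

    # show size and usage of the partitions hosting / and /opt
    df -h / /opt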
In case you use S3 buckets, check that they can be reached from the Mailstore & Provisioning Node using command carbonio core testS3Connection c6d71d55-9497-44e6-bf46-046d5598d940 as the zextras user, where the string is the bucket's UUID.
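For example, if you are logged in as root on the Mailstore & Provisioning Node, the check can be run like this (the UUID below is the example one from the text; replace it with your bucket's UUID):

    # run the S3 connectivity test as the zextras user
    su - zextras -c "carbonio core testS3Connection c6d71d55-9497-44e6-bf46-046d5598d940"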
If you think that the S3 bucket underperforms or is not efficient, you can use the S3 Benchmark to verify its status and performance.
Carbonio repository configuration is stored in file /etc/apt/sources.list.d/zextras.list (Ubuntu) or /etc/yum.repos.d/zextras.repo (RHEL), and you can choose between two channels from which to install and upgrade Carbonio packages: RC and RELEASE (see Section Configure repositories for details).
When using Ansible to install or upgrade Carbonio, it will look for that file and use the channel found there. However, if the file does not exist, cannot be read, or is for any other reason unavailable to Ansible, a new file will be installed using the RELEASE channel. Therefore, if you use the RC channel, make sure that the file is present and readable, otherwise Ansible will install or upgrade Carbonio using the RELEASE channel.
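As a quick sanity check before running Ansible, you can verify that the repository file is present, readable, and points to the channel you expect:

    # on Ubuntu
    cat /etc/apt/sources.list.d/zextras.list
    # on RHEL
    cat /etc/yum.repos.d/zextras.repo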