Hardware Requirements for 300 Mailboxes (100GB Each)

Discuss your pilot or production implementation with other Zimbra admins or our engineers.
DMSE-DESK
Posts: 5
Joined: Wed Jun 21, 2023 10:34 am

Hardware Requirements for 300 Mailboxes (100GB Each)

Post by DMSE-DESK »

Hi Team,

Could you please advise on how to calculate the required hardware specifications for hosting 300 mailboxes, each with a size of 100 GB?

Additionally, I would like to know:
  • What server roles are needed
  • Recommended specifications for each role
  • The best approach for implementing redundancy
Appreciate your guidance on this.

Thanks
jered
Advanced member
Posts: 112
Joined: Sat Sep 13, 2014 12:35 am
Location: Somerville, MA

Re: Hardware Requirements for 300 Mailboxes (100GB Each)

Post by jered »

This is a question probably better addressed by your presales engineer. (I assume you are not intending to run FOSS in this configuration.) 30 TB is not a particularly large install.
rojoblandino
Advanced member
Posts: 64
Joined: Sat Sep 13, 2014 1:36 am

Re: Hardware Requirements for 300 Mailboxes (100GB Each)

Post by rojoblandino »

DMSE-DESK wrote: Wed Aug 06, 2025 11:36 am Hi Team,

Could you please advise on how to calculate the required hardware specifications for hosting 300 mailboxes, each with a size of 100 GB?

Additionally, I would like to know:
  • What server roles are needed
  • Recommended specifications for each role
  • The best approach for implementing redundancy
Appreciate your guidance on this.

Thanks
NOTE: Remember this is just a recommendation guide; it may not be the complete solution for what you really want and need.

Calculating the hardware specifications for 300 mailboxes at 100 GB each (totaling 30 TB of raw mailbox storage) requires a robust and highly available architecture. The primary challenges are I/O performance for mailbox access, storage capacity, and ensuring service continuity.

The following proposal outlines a modern, scalable, and redundant infrastructure based on a virtualized environment.
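Before diving into roles, a quick back-of-envelope calculation helps frame the storage problem. This is only a sketch; the RAID efficiency and growth figures below are assumptions you should adjust to your own environment:

```python
# Back-of-envelope storage sizing (assumed figures, adjust to your environment):
# 300 mailboxes x 100 GB raw, plus RAID overhead and growth headroom.
mailboxes = 300
gb_per_mailbox = 100
raw_tb = mailboxes * gb_per_mailbox / 1000  # 30 TB of mailbox data

raid10_efficiency = 0.5  # RAID-10 keeps a full mirror copy of everything
growth_headroom = 1.3    # assume ~30% extra for growth, indexes, VM overhead

usable_needed_tb = raw_tb * growth_headroom
raw_disk_needed_tb = usable_needed_tb / raid10_efficiency

print(f"Mailbox data: {raw_tb:.0f} TB")
print(f"Usable space to provision: {usable_needed_tb:.0f} TB")
print(f"Raw disk needed at RAID-10: {raw_disk_needed_tb:.0f} TB")
```

Swap in a different efficiency factor for RAID-6 (roughly (n-2)/n for an n-disk group) to compare layouts.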

1. Core Architectural Principles

Virtualization: A hypervisor-based infrastructure is strongly recommended for flexibility, high availability (HA), and simplified resource management.

Separation of Roles: Dedicate virtual machines (VMs) to specific server roles (e.g., MTA, Mailbox Store, LDAP) for improved security, performance, and manageability.

Redundancy at Every Layer: Eliminate single points of failure in compute, storage, and networking.

2. Recommended Server Roles & Specifications

For a solution like Zimbra Collaboration, the key roles are:

Mailbox Store: Hosts the mailboxes and handles IMAP/POP3 traffic. This role is the most demanding in terms of I/O and RAM.

Message Transfer Agent (MTA) / Proxies: The MTA handles inbound and outbound SMTP traffic; the proxies front-end client connections (POP3, IMAP, webmail, etc.).

LDAP: Stores the directory and handles user authentication and mail routing lookups.

3. Proposed Infrastructure Specification

This design uses a hypervisor cluster with centralized storage for maximum resilience.

3.1. Hypervisor Cluster (Compute Layer)

Quantity: 3 physical servers (minimum); a 4th is ideal for N+1 redundancy during maintenance.

Recommended Models: Dell PowerEdge R6625 or similar models from other vendors (HPE, Supermicro).

Per-Server Specifications:

CPU: 2x AMD EPYC 7713 (64 cores each) or 2x Intel Xeon Gold 6430 (32 cores each), or equivalent. A high core count benefits virtualized workloads.

RAM: 256 GB (4x 64 GB RDIMMs; DDR4 or DDR5 depending on the CPU platform). Mailbox stores are memory-intensive due to caching.

Boot Drives: 2x 480GB SSD SAS/SATA in RAID 1 for the hypervisor OS.

Network: 1x Dual-Port or Quad-Port 25 GbE SFP28 OCP NIC. This is critical for VM and storage traffic.

Local Storage: Not required for mail data, as it will reside on dedicated storage.
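A quick way to sanity-check the host count is to verify the cluster can absorb a host failure. The VM RAM figures below are assumptions for illustration, not part of the original design; plug in your own allocation plan:

```python
# Rough N+1 capacity check (assumed VM RAM sizes; adjust to your plan):
# with 3 hosts of 256 GB each, can 2 surviving hosts carry every VM?
host_ram_gb = 256
hosts = 3

# Hypothetical RAM allocation for the server roles in this design.
vm_ram_gb = {
    "ldap-proxy-1": 16, "ldap-proxy-2": 16,
    "mailbox-1": 96, "mailbox-2": 96,
    "mta-1": 16,
}
total_vm_ram = sum(vm_ram_gb.values())          # 240 GB committed to VMs
surviving_capacity = (hosts - 1) * host_ram_gb  # 512 GB left after one host fails

print(f"Total VM RAM: {total_vm_ram} GB; capacity with one host down: {surviving_capacity} GB")
assert total_vm_ram <= surviving_capacity, "cluster cannot absorb a host failure"
```

If the assertion ever fails as you grow, that is the signal to add the 4th host rather than oversubscribe memory.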

3.2. Network Infrastructure

Quantity: 2 x Top-of-Rack (ToR) Switches for full redundancy.

Specifications: Managed switches with a sufficient number of 25 GbE SFP28 ports to accommodate all servers and storage devices. Support for LACP (Link Aggregation Control Protocol) and MLAG (Multi-chassis Link Aggregation) is mandatory.
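To make the LACP/MLAG requirement concrete, here is a minimal ifupdown-style bond configuration as used on Proxmox VE hosts. The interface names and addresses are placeholders; your NIC names and subnets will differ, and the switch ports must be configured in a matching LAG across both MLAG peers:

```
# /etc/network/interfaces (example fragment; interface names are assumptions)
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-miimon 100
    bond-mode 802.3ad          # LACP, one link to each ToR switch
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```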

3.3. Centralized Storage (Data & Backup Layer)

A centralized storage system is essential for allowing VMs to live-migrate between hosts.

Primary Storage (for running VMs):

Technology: A dedicated Storage Area Network (SAN) is preferred over NAS for block-level performance. Consider an all-flash array or a hybrid array with SSD cache.

Configuration: 2x storage controllers (heads) for high availability.

Capacity: Provision at least 40-60 TB of usable space based on a RAID-10 or RAID-6 configuration, accounting for the 30 TB mailbox data, VM overhead, and future growth.

Connectivity: Connect via iSCSI or NFS over the 25 GbE network.

Backup Storage:

Solution: 2x NAS devices (e.g., from QNAP or Synology) with equivalent raw capacity to the primary storage.

Purpose: These will serve as targets for VM backups, providing a "cold" or "warm" copy of the data.
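It is worth estimating the backup window up front, since a full copy of 30 TB is slow even on fast links. The throughput figure below is an assumption (a plausible sustained NAS write rate, well under 25 GbE line rate); measure your own:

```python
# Rough backup-window estimate (assumed throughput; measure your own):
# how long does a full copy of 30 TB to a NAS take over the 25 GbE network?
data_tb = 30
data_gb = data_tb * 1000

throughput_gb_per_s = 1.0  # assumed sustained NAS write rate

hours = data_gb / throughput_gb_per_s / 3600
print(f"Full backup of {data_tb} TB at {throughput_gb_per_s} GB/s: ~{hours:.1f} hours")
```

An ~8-hour full backup argues for incremental or changed-block backups on weekdays, with fulls reserved for weekends.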

4. Implementation for High Availability & Redundancy

Hypervisor Cluster: Install Proxmox VE hypervisor on the three physical servers and form a cluster. This enables features like Live Migration and High Availability (HA). If one host fails, VMs automatically restart on the remaining hosts.

Network Bonding: Configure the physical NICs on each host in an active-active or active-passive bond (using LACP) and connect them to both switches. This provides redundancy and increased throughput.

Storage Network: Create a separate VLAN or use dedicated physical NICs for storage traffic (iSCSI/NFS) to prevent congestion.

Virtual Machine Roles: Deploy the following VMs across your cluster:

VM 1 & 2: Two LDAP/Proxy servers. These can be balanced for client connections (IMAP, POP, webmail, SMTP).

VM 3 & 4: Two Mailbox Store servers, configured in a multi-server setup. This provides redundancy for the most critical data.

VM 5: A dedicated MTA server. A second can be added for load balancing.
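The redundancy above only holds if paired VMs never land on the same physical host. A small anti-affinity check makes that rule explicit; the host assignments here are hypothetical examples:

```python
# Sketch of an anti-affinity check (host assignments are assumptions):
# redundant pairs (LDAP/proxy, mailbox stores) must never share a host.
placement = {
    "ldap-proxy-1": "host1", "ldap-proxy-2": "host2",
    "mailbox-1": "host1",    "mailbox-2": "host3",
    "mta-1": "host2",
}
redundant_pairs = [("ldap-proxy-1", "ldap-proxy-2"), ("mailbox-1", "mailbox-2")]

for a, b in redundant_pairs:
    assert placement[a] != placement[b], f"{a} and {b} share a host"
print("All redundant pairs are on different hosts")
```

Most HA stacks let you enforce the same constraint natively (e.g., HA groups or anti-affinity rules) so it survives live migrations.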

5. Advanced Considerations

For Extreme Performance: You can deploy multiple smaller "proxy" VMs to create a pool of servers dedicated to handling client protocol traffic, further isolating the backend mailbox stores.

Scaling Out Mailbox Stores: For very large deployments, you can split users across additional mailbox store servers, but for 300 users a two-store redundant setup is sufficient.

Spare Parts: Maintain a stock of critical spare parts (hard drives, memory modules) to minimize downtime in case of hardware failure.

This architecture provides a solid foundation that is scalable, performant, and eliminates single points of failure, ensuring the reliability required for business-critical email services.