PaperAdvocate wrote: Both files are the same except for two things: the order of the line items is different, and the value of innodb_buffer_pool_size is different; 4163895296 on the server with the delay and 2511535718 on the server that is fine.
The 4163895296 value was given to me by Zimbra support when I sent them my logs, which is why it's different, but the issue was present before changing this value.
On both servers, innodb_max_dirty_pages_pct = 30.
Dunno, but the InnoDB Buffer Pool size should ideally be 1.25x the size of the InnoDB databases, or larger. If the databases are bigger than the buffer pool, portions of the databases can't stay cached in memory and get paged out to disk. If that's what's happening in your case, then MariaDB likely needs to pull that data back in and flush it to disk before allowing itself to shut down, and that can add to the shutdown time (users will also notice periodic "stalls" in UI responsiveness).
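As a rough sketch of that 1.25x rule of thumb (the 4096 MB database size below is an assumption for illustration, not a value from this thread; you'd get the real number from mysqltuner.pl or information_schema):

```shell
#!/bin/sh
# Hypothetical sizing sketch: assume the total InnoDB data + index
# size is 4096 MB, then apply the 1.25x rule using integer math.
innodb_mb=4096
pool_mb=$(( innodb_mb * 125 / 100 ))
echo "suggested: innodb_buffer_pool_size = ${pool_mb}M"
# prints: suggested: innodb_buffer_pool_size = 5120M
```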
You can use a tool like mysqltuner.pl to get the total InnoDB database size, and you can also use the M suffix to set the pool size in megabytes rather than bytes. Much easier to read and less likely to cause a typo.
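To see why the M notation is easier to read, here's a quick conversion of the two byte values from your config into mebibytes (1M = 1048576 bytes; plain shell arithmetic, nothing Zimbra-specific):

```shell
#!/bin/sh
# Convert the two innodb_buffer_pool_size byte values quoted above
# into the equivalent M notation (integer MiB, truncated).
for bytes in 4163895296 2511535718; do
  echo "${bytes} bytes = $(( bytes / 1048576 ))M"
done
# prints:
#   4163895296 bytes = 3971M
#   2511535718 bytes = 2395M
```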
So here you can see one system where, given the fullness of time and more and larger mailboxes, I eventually needed to increase the buffer pool to 7GB. A number of clients have buffer pools 20GB or larger, so it's a good thing to check periodically. Be sure to add more RAM to your server if needed, too!
Code:
zimbra@zimbra:~$ cat conf/my.cnf | grep -i innodb_buffer
# innodb_buffer_pool_size = 5047721164
# innodb_buffer_pool_size = 6144M
innodb_buffer_pool_size = 7168M
Hope that helps,