Sizing of innodb_buffer_pool_size
- pup_seba
- Outstanding Member
- Posts: 687
- Joined: Sat Sep 13, 2014 2:43 am
- Location: Tarragona - Spain
- Contact:
Sizing of innodb_buffer_pool_size
Hi,
I usually size innodb_buffer_pool_size based on what `du -sh /opt/zimbra/db/data` reports.
For example:
du -sh /opt/zimbra/db/data
9.6 GB
I would end up with an innodb_buffer_pool_size = 12GB (approx.).
Although running a script like mysqltuner gives information like this:
[--] Data in MyISAM tables: 4.0K (Tables: 1)
[--] Data in InnoDB tables: 5.4G (Tables: 1624)
[OK] Total fragmented tables: 0
If I run an optimize operation on all databases, then I can see that:
du -sh /opt/zimbra/db/data
5.4 GB
Which matches what mysqltuner said the size was.
Considering this:
1. Should we configure innodb_buffer_pool_size based on the result of the quoted "du" command, or on what a script like mysqltuner returns?
2. I thought that "optimize" operations were a way to defragment the tables, yet the mysqltuner script says that there aren't fragmented tables... then why does optimization decrease the on-disk size of these tables?
PS: Do you have any other recommendations for DB configuration in Zimbra, like adjusting innodb_buffer_pool_instances so we end up with n instances of 1GB (like the MariaDB documentation recommends)?
Thanks!
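For what it's worth, the sizing arithmetic I apply can be sketched in a few lines of shell. The ~25% headroom matches my 9.6G → 12GB example above, and the ~1 GB-per-instance split follows the MariaDB recommendation from the PS; the numbers use mysqltuner's 5.4G figure and are illustrative only, not an official formula:

```shell
# Sketch of the sizing arithmetic: take the InnoDB data size, add ~25%
# headroom, round up to whole GB, and aim for ~1 GB per buffer pool
# instance. Input value is illustrative (mysqltuner's 5.4G figure).
DATA_MB=5530                            # ~5.4G reported by mysqltuner
POOL_MB=$(( DATA_MB * 125 / 100 ))      # add ~25% headroom
POOL_GB=$(( (POOL_MB + 1023) / 1024 ))  # round up to whole GB
INSTANCES=$POOL_GB                      # ~1 GB per instance
echo "innodb_buffer_pool_size = ${POOL_GB}G"
echo "innodb_buffer_pool_instances = ${INSTANCES}"
```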
- JDunphy
- Outstanding Member
- Posts: 899
- Joined: Fri Sep 12, 2014 11:18 pm
- Location: Victoria, BC
- ZCS/ZD Version: 9.0.0_P39 NETWORK Edition
Re: Sizing of innodb_buffer_pool_size
Very interesting observation on your part. "If you frequently delete rows (or update rows with variable-length data types), you can end up with a lot of wasted space in your data file(s), similar to filesystem fragmentation," according to this source: https://www.datavail.com/blog/innodb-ta ... nd-fix-it/ I believe the optimize will perform a recreate + analyze, so you can shrink it via one of two methods:
mysql> optimize table sbtest1;
or
mysql> alter table sbtest1 engine=innodb;
Unfortunately, I only dabble in DB tuning when something bad happens, so I can't offer any real-world experience. If memory is in short supply, it sounds like using the tool's recommendation would be the preferred method. My guess is that in reality the empty space wouldn't be in memory anyway during a shortfall, so either method has its advantages depending on your sizing and job mix. If this is a dedicated DB server, you may gain more benefit by configuring swappiness lower, at the expense of the larger system caches the kernel would normally use; i.e., a high value favors higher I/O performance at the cost of swapping your DB process memory out of RAM during a memory shortfall.
- pup_seba
- Outstanding Member
- Posts: 687
- Joined: Sat Sep 13, 2014 2:43 am
- Location: Tarragona - Spain
- Contact:
Re: Sizing of innodb_buffer_pool_size
Hi mate!
The first thing you say is one of the things I don't fully understand. I could understand the space on disk (reported by the du command) being bigger than the actual data in the database (reported by mysqltuner), but then again mysqltuner says there is no fragmentation in the DB. Weird, huh?
You are absolutely right about the recreate + analyze thing, or at least that's the message given by the system when performing an "optimize" operation on the databases. This is the command that I use:
mysqlcheck -u root -p --socket=/opt/zimbra/data/tmp/mysql/mysql.sock --all-databases --optimize --skip-write-binlog
There is an argument to the "/opt/zimbra/libexec/zmdbintegritycheck" command (I think it's -o, but I don't remember) that says it should "optimize". I never tried it, but maybe it's a wrapper for "mysqlcheck"... I should give that a try one day.
No swappiness or memory shortage; it's just curiosity about how to configure the DB in the best possible way. For instance, I'm really curious about the "instances", or the open_files parameter, and I was hoping someone with better DB understanding could shed some light on the subject.
About mysqltuner reporting "5.4GB" and "du" reporting "9.6GB"... now it makes more sense to me why the "top" command never shows more than 6GB of memory consumed. I really wonder if I should ignore the "du" size for innodb_buffer_pool_size and focus on the "real" size occupied by tables as reported by mysqltuner.
I would love to see an updated performance tuning guide... so we don't need to become DBAs to tune Zimbra.
Thank you for your answer!!!
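One way to see where the du vs. mysqltuner gap lives might be to ask information_schema directly: data_length + index_length is the space actually holding rows and indexes (roughly what mysqltuner reports), while data_free is space allocated inside the tablespace files (so du counts it) but currently empty, which OPTIMIZE can reclaim. A sketch, to be run from the Zimbra mysql client:
mysql> SELECT SUM(data_length + index_length) / 1024 / 1024 / 1024 AS data_gb,
    ->        SUM(data_free) / 1024 / 1024 / 1024 AS free_gb
    -> FROM information_schema.tables
    -> WHERE engine = 'InnoDB';
If free_gb roughly matches the difference between du and mysqltuner, that would suggest mysqltuner's "fragmented tables" counter simply isn't including InnoDB free space, which could explain seeing 0 fragmented tables alongside gigabytes of reclaimable space.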
-
- Ambassador
- Posts: 2767
- Joined: Mon Dec 16, 2013 11:35 am
- Location: France - Drôme
- ZCS/ZD Version: All of them
- Contact:
Re: Sizing of innodb_buffer_pool_size
Funny, I was looking at strange disk usage today...
Here's the "official" wiki page on this: https://wiki.zimbra.com/wiki/DB_not_rel ... eting_data
You definitely need to OPTIMIZE the tables; ZCS won't do it for you (it checks and repairs the tables weekly, but that's all).
-
- Ambassador
- Posts: 2767
- Joined: Mon Dec 16, 2013 11:35 am
- Location: France - Drôme
- ZCS/ZD Version: All of them
- Contact:
Re: Sizing of innodb_buffer_pool_size
I did some tests on a live server.
137 accounts, about 600 GB live data (store + index + DB + HSM).
Before
7.8G /opt/zimbra/db/data/
After
5.1G /opt/zimbra/db/data/
- L. Mark Stone
- Ambassador
- Posts: 2802
- Joined: Wed Oct 09, 2013 11:35 am
- Location: Portland, Maine, US
- ZCS/ZD Version: 10.0.7 Network Edition
- Contact:
Re: Sizing of innodb_buffer_pool_size
Not sure this is a cause for worry.
Take a look at:
https://mariadb.com/kb/en/library/defra ... blespaces/
https://mariadb.org/defragmenting-unuse ... ablespace/
As I understand it, if you have a mailbox server with 5,000 mailboxes and you delete 3,000 mailboxes, the mboxgroup* tables will be defragmented, but the free space will not be compacted away; i.e., there will be lots of blank rows that used to contain data for the 3,000 mailboxes you deleted. But when you add back another 2,000 mailboxes, those blank rows should/will be reused, so the docs would lead one to conclude that adding back 2K mailboxes will not result in any more InnoDB data growth on disk at that time.
If correct, then so long as you have enough RAM to accommodate organic growth in the InnoDB Buffer Pool size (by increasing the variable in my.cnf), you should be OK, even if you delete and create mailboxes frequently.
I can also say that, having tuned scores of Zimbra systems that were crawling on account of too small an innodb_buffer_pool_size setting, I consistently reset that variable to the size reported by mysqltuner.pl plus 25%, and in all cases saw performance return to as-new. I'm therefore speculating that MySQL/MariaDB is smart about buffer pool management of tables with lots of blank rows. It would be nice to get firm confirmation of this, though.
So except in some extreme cases, it doesn't seem that we would need to OPTIMIZE tables all that often -- and even then, it might be more expedient and less impactful to customers to migrate the mailboxes to a new mailbox server (or split them between multiple mailbox servers) and then zmprov ds the old mailbox server. The certified wiki, as I recall, says to stop Zimbra and start just MySQL before running the optimize tables command -- as well as to do each table sequentially. Not necessarily a quick process when you have 100 mboxgroup tables!
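Written down as a my.cnf fragment, that heuristic would look something like the below. The numbers are illustrative assumptions only, derived from the 5.4G mysqltuner figure earlier in this thread plus ~25% headroom, with ~1 GB instances per the MariaDB recommendation; they are not official Zimbra defaults:
[mysqld]
innodb_buffer_pool_size = 7G
innodb_buffer_pool_instances = 7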
Hope that helps,
Mark
___________________________________
L. Mark Stone
Mission Critical Email - Zimbra VAR/BSP/Training Partner https://www.missioncriticalemail.com/
AWS Certified Solutions Architect-Associate
-
- Ambassador
- Posts: 2767
- Joined: Mon Dec 16, 2013 11:35 am
- Location: France - Drôme
- ZCS/ZD Version: All of them
- Contact:
Re: Sizing of innodb_buffer_pool_size
Hello Mark,
I'm not really worrying about this.
Let me try to explain how I came to this.
I'm talking about a multi-server setup, with SAN storage (no object storage).
So each mailbox server has dedicated storage (primary and secondary).
As the mailboxes grow, they end up filling the dedicated storage.
When this happens, you can either grow the storage or move some of the mailboxes to another server.
The second option was chosen: some space is freed on the servers this way, by moving mailboxes.
No new mailboxes are added to the server, it's about letting the remaining ones grow.
However, the space used by the SQL tables is not freed.
Would it be used again by other records? Yes, as long as they're in the same tables as the deleted ones.
I just did other tests.
/opt/zimbra/db/data/ went from 18 to 8.5 GB in about 10 minutes, 127 accounts on that server for about one TB of storage.
It can be much longer on a server with lots of IMAP users (because of the IMAP tables).
- L. Mark Stone
- Ambassador
- Posts: 2802
- Joined: Wed Oct 09, 2013 11:35 am
- Location: Portland, Maine, US
- ZCS/ZD Version: 10.0.7 Network Edition
- Contact:
Re: Sizing of innodb_buffer_pool_size
Hi David,
If I have a healthy mailbox server getting low on space, I typically just add another primary volume and make the new volume current; especially in a SAN-backed environment. And very nice to see that the optimize completed so quickly, thank you!
But really the more important question for me, when we are changing the value for innodb_buffer_pool_size in my.cnf: can we rely on the lower value reported by mysqltuner.pl, or do we need to use the higher value reported by du -csh?
I've historically used the lower value reported by mysqltuner.pl and seen good performance improvements. Would be nice to know if I can do better...
Thanks for your replies.
All the best,
Mark
___________________________________
L. Mark Stone
Mission Critical Email - Zimbra VAR/BSP/Training Partner https://www.missioncriticalemail.com/
AWS Certified Solutions Architect-Associate
- pup_seba
- Outstanding Member
- Posts: 687
- Joined: Sat Sep 13, 2014 2:43 am
- Location: Tarragona - Spain
- Contact:
Re: Sizing of innodb_buffer_pool_size
Hi guys!
I've historically sized innodb_buffer_pool_size according to du -sh of /opt/zimbra/db/data. But the thing is that when I run "top", I've never seen the mysql process using significantly more memory than what mysqltuner reports. So I truly think my approach is overkill and Mark's approach is more precise (better).
Maybe I should try asking this in a MariaDB/MySQL forum? I'll try to do that and report back here.
The number of instances, though... that one wouldn't make much sense to ask in a general MariaDB forum, as the recommendation there is pretty clear. But the omission of this parameter in Zimbra makes me wonder whether maybe it shouldn't be configured in Zimbra at all.
- L. Mark Stone
- Ambassador
- Posts: 2802
- Joined: Wed Oct 09, 2013 11:35 am
- Location: Portland, Maine, US
- ZCS/ZD Version: 10.0.7 Network Edition
- Contact:
Re: Sizing of innodb_buffer_pool_size
Seb,
pup_seba wrote:
Hi guys!
I've historically sized innodb_buffer_pool_size according to du -sh of /opt/zimbra/db/data. But the thing is that when I run "top", I've never seen the mysql process using significantly more memory than what mysqltuner reports. So I truly think my approach is overkill and Mark's approach is more precise (better).
Maybe I should try asking this in a MariaDB/MySQL forum? I'll try to do that and report back here.
The number of instances, though... that one wouldn't make much sense to ask in a general MariaDB forum, as the recommendation there is pretty clear. But the omission of this parameter in Zimbra makes me wonder whether maybe it shouldn't be configured in Zimbra at all.
Yes, if you get an answer in the MariaDB forums about du vs mysqltuner.pl memory usage, that would be great!
Thanks,
Mark
___________________________________
L. Mark Stone
Mission Critical Email - Zimbra VAR/BSP/Training Partner https://www.missioncriticalemail.com/
AWS Certified Solutions Architect-Associate