Zimbra 8.8.15 patch23 LDAP segmentation fault
Hi All,
Just a warning in case you deploy the latest patch to Zimbra 8.8.15 and encounter LDAP segmentation faults (with slapd crashing and nothing working afterwards), and you see something like this in dmesg:
[33525.226945] slapd[851]: segfault at 18 ip 00007f1634292b43 sp 00007eee32857310 error 6 in syncprov-2.4.so.2.10.12[7f1634286000+10000]
The workaround that worked for me:
1. Stop all Zimbra services (well, they will be stopped anyway... but just to be sure).
2. Take a backup copy of "/opt/zimbra/conf/zmconfigd.cf".
3. From "/opt/zimbra/conf/zmconfigd.cf", remove the following lines:
LOCAL ldap_overlay_syncprov_sessionlog
LDAP ldap_overlay_syncprov_sessionlog LOCAL ldap_overlay_syncprov_sessionlog
4. Start the Zimbra services.
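The steps above can be sketched as a small shell helper (the function name is mine, and the path is from a default Zimbra install; run it with Zimbra services stopped via zmcontrol stop, then zmcontrol start afterwards):

```shell
# Hedged sketch of the cleanup: back up zmconfigd.cf, then delete every
# line mentioning ldap_overlay_syncprov_sessionlog.
strip_sessionlog() {
    cp "$1" "$1.bak"                                   # keep a backup copy first
    sed -i '/ldap_overlay_syncprov_sessionlog/d' "$1"  # drop the overlay lines
}

# On a real server (path from a default install):
#   strip_sessionlog /opt/zimbra/conf/zmconfigd.cf
```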
There should be a big red warning at https://wiki.zimbra.com/wiki/Zimbra_Releases/8.8.15/P23
Re: Zimbra 8.8.15 patch23 LDAP segmentation fault
@sensor - yes, this workaround is correct and has been verified by Zimbra Support. We should have a fix in an upcoming patch.
This error occurs for the few customers whose LDAP config is missing 'olcSpSessionlog: 1000000'.
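For reference, that attribute lives on the syncprov overlay entry in cn=config. A sketch of what a healthy entry might look like (the DN indices and the checkpoint value are illustrative; only olcSpSessionlog: 1000000 is taken from the note above):

```
dn: olcOverlay={0}syncprov,olcDatabase={2}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: {0}syncprov
olcSpCheckpoint: 20 10
olcSpSessionlog: 1000000
```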
Re: Zimbra 8.8.15 patch23 LDAP segmentation fault
Howdy,
I just wanted to add some context here. I bet you are wondering why there isn't a big red warning.
This segfault happens under a very specific set of circumstances that will not affect the vast majority of users. It only happens if the LDAP server was once in an LDAP MMR configuration and no longer is.
It's unclear whether that is because the configuration wasn't removed properly due to user error or a script error in a previous release, but either way, we've had fewer than a handful of reports about this.
Re: Zimbra 8.8.15 patch23 LDAP segmentation fault
jholder wrote: This segfault happens in a very specific set of circumstances, which will not happen to the vast majority of users. It only happens if the LDAP server was once in an LDAP MMR configuration and is no longer.

Well, it happened in our company as well after updating to patch 23.
So many thanks for the workaround, @sensor!!!
Re: Zimbra 8.8.15 patch23 LDAP segmentation fault
jholder wrote: It only happens if the LDAP server was once in an LDAP MMR configuration and is no longer.

We had it yesterday on a single server that has never seen a master-replica or MMR configuration.
We had it today on a master-replica multiserver installation that has never been an MMR.
Edit: We had it even on a multi-master replica with custom schema.
Edit: We had it on a brand new install.
Edit: We had it today on a FOSS MMR install. All the previous ones were Network Edition.
jholder wrote: It's unclear if it's because it wasn't removed properly due to user error or a script error in a previous release, but either way, we've had fewer than a handful of reports about this.

We NEVER revert from an MMR installation, as stated in the KB:
"WARNING: Configuring MMR is a one-way trip! Once you have configured MMR, you must not remove all nodes from the MMR configuration! If you're removing nodes, you must retain at least one replication agreement on your MMR nodes."
https://wiki.zimbra.com/wiki/LDAP_Multi ... eplication
For that reason we are abandoning MMR in new installations and preferring master-replica setups.
Last edited by gabrieles on Mon Jul 26, 2021 9:04 am, edited 2 times in total.
- DavidMerrill
Re: Zimbra 8.8.15 patch23 LDAP segmentation fault
gabrieles wrote: We NEVER revert from an MMR installation, as stated in the KB: "WARNING: Configuring MMR is a one-way trip! Once you have configured MMR, you must not remove all nodes from the MMR configuration! If you're removing nodes, you must retain at least one replication agreement on your MMR nodes."

Couple of questions:
- I've seen these warnings too, but so far nothing specific on the details. Why exactly is there no way back?
- What happens when, say, you lose an MMR node in a 2-node system?
- Assuming you make the appropriate changes to ldap_url/ldap_master_url so nothing tries to talk to the missing MMR node, does Zimbra continue to work (albeit super-cranky about not being able to replicate with the missing node)?
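For reference, repointing a node's localconfig at a surviving LDAP server looks like this (the hostname is illustrative; a sketch, not official guidance):

```
# Run as the zimbra user on each remaining node.
# ldap2.example.com stands in for the surviving LDAP master.
zmlocalconfig -e ldap_url="ldap://ldap2.example.com:389"
zmlocalconfig -e ldap_master_url="ldap://ldap2.example.com:389"
zmcontrol restart    # pick up the new LDAP endpoints
```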
___________________________________
David Merrill - Zimbra Practice Lead
OTELCO Zimbra Hosting, Licensing and Professional Services
Zeta Alliance
Re: Zimbra 8.8.15 patch23 LDAP segmentation fault
2. AFAIK, nothing specific happens beyond what you've described in 3.
No more replication, and if the other servers are not using the one that is down, they don't even notice.
3. Yes, it does, flawlessly (it just can't replicate).
The problem might arise if a new MMR server is added.
To avoid any issue, I would set up a fully new server (new FQDN) as an MMR node with another ID (see the wiki page).
I would not try to reuse a previously used FQDN and then have to find out whether I first need to remove it from the setup (zmprov ds server.domain.tld) or not, and other questions of the same type.
About 2 and 3: in any case, the best approach would be to set up a load balancer in front of the LDAP servers and point all other servers at the load balancer.
HAProxy does this very well, both for reads and writes; it is also able to health-check the backends and stop using an LDAP server if it's down.
The very best setup would be a pair of load balancers in HA.
The one pain point is that you have to set the real FQDN back in the localconfig when upgrading your servers (not patching), because the upgrade scripts need the real LDAP server.
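A minimal sketch of that HAProxy idea, assuming two LDAP masters (hostnames, ports, and timeouts are illustrative):

```
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend ldap_in
    bind *:389
    default_backend ldap_masters

backend ldap_masters
    balance roundrobin
    option ldap-check              # LDAP-aware health check
    server ldap1 ldap1.example.com:389 check
    server ldap2 ldap2.example.com:389 check
```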
Re: Zimbra 8.8.15 patch23 LDAP segmentation fault
DavidMerrill wrote: I've seen these warnings too, but as of yet nothing specific on details, why exactly is there no way back?

I'm lacking details too; I think it's related to the impossibility of deleting the last MMR replication agreement for a node.
If someone has details, or workarounds, they are very welcome!
DavidMerrill wrote: What happens when you do say for example you lose an MMR node in a 2-node system?

I'm not an OpenLDAP black belt, but I suppose that only the first node keeps getting all the writes, and synchronization with the other masters in the agreement fails every time.
Dumb question: MMR sync is a push sync, no? A master that has new data tells the other masters "hey, someone gave me this updated data, these are the values, this is the timestamp", and the timestamp becomes the contextCSN.
If this is true, an "orphaned" LDAP master basically keeps trying to push updated data to the dead master, failing each time. But this should not impact Zimbra operations.
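Side note on contextCSN: the values begin with a UTC timestamp, so two CSNs from the same replica ID can be compared as plain strings to see which side is ahead. A throwaway helper (the name is mine) to illustrate:

```shell
# contextCSN values look like 20210726090000.123456Z#000000#001#000000;
# the leading timestamp makes lexicographic order match time order
# for CSNs from the same replica ID.
csn_newer() {
    # print the newer of two contextCSN strings
    if [ "$1" \> "$2" ]; then printf '%s\n' "$1"; else printf '%s\n' "$2"; fi
}
```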
DavidMerrill wrote: Assume you make appropriate changes to ldap_url/ldap_master_url so nothing tries to talk to the missing MMR-node, does Zimbra continue to work (albeit super-cranky about not being able to replicate with the missing node)?

These are exactly the steps on which I based the previous answer...
- DavidMerrill
Re: Zimbra 8.8.15 patch23 LDAP segmentation fault
gabrieles wrote: I'm lacking details too; I think it's related to the impossibility of deleting the last MMR replication agreement for a node. If someone has details, or workarounds, they are very welcome!

Yeah, that's the frustrating part: why is it impossible?
- Did the original LDAP authors or the LDAP-MMR implementors/designers never consider the possibility of going back to one node?
- Maybe they figured one would just dump the LDAP database and stand up a new LDAP server (not a trivial ask in a Zimbra environment, and thus hard to support).
___________________________________
David Merrill - Zimbra Practice Lead
OTELCO Zimbra Hosting, Licensing and Professional Services
Zeta Alliance