Discuss your pilot or production implementation with other Zimbra admins or our engineers.
armitage318
Advanced member
Posts: 98 Joined: Sat Sep 13, 2014 2:01 am
by armitage318 » Wed Oct 31, 2018 5:04 pm
Hi,
I use ZCS Release 8.8.7
I am trying to restore a zip-format mailbox backup (about 14 GB).
I used this command:
Code:
$ zmmailbox -z -m user@domain.com postRestURL "//?fmt=zip&resolve=reset" /tmp/user\@domain.com.zip
but I get this:
Code:
ERROR: zclient.IO_ERROR (Broken pipe (Write failed)) (cause: java.net.SocketException Broken pipe (Write failed))
how can I solve this issue?
thank you very much
Code:
$ free
              total        used        free      shared  buff/cache   available
Mem:        3881988     2419276      376392        7568     1086320     1190500
Swap:       2097148        5760     2091388
$ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   57G   21G   37G  36% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  8.6M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1               1014M  189M  826M  19% /boot
tmpfs                    380M     0  380M   0% /run/user/998
tmpfs                    380M     0  380M   0% /run/user/0
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
stepping : 4
microcode : 0x1a
cpu MHz : 2394.000
cache size : 12288 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc pni ssse3 cx16 sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer hypervisor lahf_lm tsc_adjust arat
bogomips : 4788.00
clflush size : 64
cache_alignment : 64
address sizes : 42 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
stepping : 4
microcode : 0x1a
cpu MHz : 2394.000
cache size : 12288 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc pni ssse3 cx16 sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer hypervisor lahf_lm tsc_adjust arat
bogomips : 4788.00
clflush size : 64
cache_alignment : 64
address sizes : 42 bits physical, 48 bits virtual
power management:
armitage318
Advanced member
Posts: 98 Joined: Sat Sep 13, 2014 2:01 am
by armitage318 » Wed Oct 31, 2018 5:31 pm
With a smaller account (about 500 MB), the restore went fine with the same command, so it seems there is some issue with large mailbox backup files. Is there a way to solve this?
Thank you!
tonster
Zimbra Employee
Posts: 313 Joined: Fri Feb 21, 2014 10:14 am
Location: Ypsilanti, MI
ZCS/ZD Version: Release 8.7.0_GA_1659.RHEL6_64_2016
by tonster » Wed Oct 31, 2018 11:00 pm
armitage318 wrote: With a smaller account (about 500 MB), the restore went fine with the same command, so it seems there is some issue with large mailbox backup files. Is there a way to solve this?
Thank you!
This is ultimately caused by Bug 101760, but one workaround that often works is to send the archive directly to mailboxd. Run this on the server that houses the user you're restoring:
Code:
curl -k -u 'admin@domain.com:password' --data-binary @/path-to/zzz.tgz "https://host.domain.com:7071/service/ho ... solve=skip"
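For very large files, a variant that may also be worth trying (an assumption on my part; I haven't verified that mailboxd accepts HTTP PUT on this endpoint) is curl's `-T`/`--upload-file`, which streams the file from disk instead of buffering it in memory the way `--data-binary @file` does:

```shell
# -T/--upload-file sends the archive as an HTTP PUT, reading and
# transmitting it in chunks rather than loading it all into RAM.
# Account, host, and path are placeholders; match fmt= to your archive type.
curl -k -u 'admin@domain.com:password' \
  -T /path-to/zzz.tgz \
  "https://host.domain.com:7071/service/home/user@domain.com/?fmt=tgz&resolve=skip"
```

`--data-binary @file` reads the entire file into memory before sending, which is why a 14 GB archive can exhaust RAM on a small box; `-T` avoids that buffering.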
Hope that helps!
Tony
armitage318
Advanced member
Posts: 98 Joined: Sat Sep 13, 2014 2:01 am
by armitage318 » Fri Nov 02, 2018 4:49 pm
Hi Tony,
thank you very much for your reply!
I used the command you provided:
Code:
# curl -k -u 'admin:XXXXXXXXXXXXXXXXX' --data-binary @/tmp/user@domain.com.zip "https://mailserver.domain.com:7071/service/home/user@domain.com/?fmt=zip&resolve=skip"
(I just changed format from tgz to zip)
but I got similar error:
Code:
curl: option --data-binary: out of memory
curl: try 'curl --help' or 'curl --manual' for more information
What could I try?
Thank you!
tonster
Zimbra Employee
Posts: 313 Joined: Fri Feb 21, 2014 10:14 am
Location: Ypsilanti, MI
ZCS/ZD Version: Release 8.7.0_GA_1659.RHEL6_64_2016
by tonster » Fri Nov 02, 2018 4:51 pm
armitage318 wrote: Hi Tony,
thank you very much for your reply!
I used the command you provided:
Code:
# curl -k -u 'admin:XXXXXXXXXXXXXXXXX' --data-binary @/tmp/user@domain.com.zip "https://mailserver.domain.com:7071/service/home/user@domain.com/?fmt=zip&resolve=skip"
(I just changed format from tgz to zip)
but I got similar error:
Code:
curl: option --data-binary: out of memory
curl: try 'curl --help' or 'curl --manual' for more information
What could I try?
Thank you!
If you're running out of memory, you're probably going to need to add RAM to the system. How much memory does the system have?
armitage318
Advanced member
Posts: 98 Joined: Sat Sep 13, 2014 2:01 am
by armitage318 » Mon Nov 05, 2018 5:07 pm
Hi, I have 4 GB.
I've now added another 4 GB (8 GB RAM total).
It seems to be running fine.
I'll keep you updated.
Thank you
tonster
Zimbra Employee
Posts: 313 Joined: Fri Feb 21, 2014 10:14 am
Location: Ypsilanti, MI
ZCS/ZD Version: Release 8.7.0_GA_1659.RHEL6_64_2016
by tonster » Mon Nov 05, 2018 5:11 pm
armitage318 wrote: Hi, I have 4 GB.
I've now added another 4 GB (8 GB RAM total).
It seems to be running fine.
I'll keep you updated.
Thank you
Glad to hear it. I would note that 8 GB is the minimum recommended amount of RAM, so I would expect you should be good now.
armitage318
Advanced member
Posts: 98 Joined: Sat Sep 13, 2014 2:01 am
by armitage318 » Mon Nov 05, 2018 5:14 pm
Still no luck:
Code:
[ 441.251306] Out of memory: Kill process 3059 (curl) score 751 or sacrifice child
[ 441.251345] Killed process 3059 (curl) total-vm:8567108kB, anon-rss:6007268kB, file-rss:32kB, shmem-rss:0kB
I'm wondering how it's possible to manage restores of such big mailboxes.
Thank you
JDunphy
Outstanding Member
Posts: 899 Joined: Fri Sep 12, 2014 11:18 pm
Location: Victoria, BC
ZCS/ZD Version: 9.0.0_P39 NETWORK Edition
by JDunphy » Mon Nov 05, 2018 6:22 pm
armitage318 wrote: Still no luck:
Code:
[ 441.251306] Out of memory: Kill process 3059 (curl) score 751 or sacrifice child
[ 441.251345] Killed process 3059 (curl) total-vm:8567108kB, anon-rss:6007268kB, file-rss:32kB, shmem-rss:0kB
I'm wondering how it's possible to manage restores of such big mailboxes.
Thank you
I have never run into this myself, but it sounds like curl is malloc'ing the entire 14 GB, so a few ideas.
Do you have any swap configured? If so, is it large enough to demand-page some of that 14 GB to disk? If not, you can add a swap file large enough that RAM + swap comfortably exceeds 14 GB, with three simple commands from the command line (i.e. dd, mkswap, swapon).
Another idea would be to break that zip file into smaller pieces, e.g. with zipsplit, or by extracting the archive and rebuilding it into smaller tar/zip files by hand and importing those smaller pieces. I would think either of these should work.
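A minimal sketch of both ideas follows. Paths and sizes are illustrative, the swap steps need root, and note that zipsplit cannot split a single archive entry larger than the chosen chunk size:

```shell
# Idea 1: add a 16 GB swap file so curl's ~14 GB allocation can be
# demand-paged to disk (run as root; /swapfile is an arbitrary path).
dd if=/dev/zero of=/swapfile bs=1M count=16384
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Idea 2: split the archive into roughly 2 GB zips (-n is the max part
# size in bytes, -b the output directory), then import each piece with
# resolve=skip so already-restored items are not duplicated.
zipsplit -n 2000000000 -b /tmp/parts /tmp/user@domain.com.zip
for part in /tmp/parts/*.zip; do
  zmmailbox -z -m user@domain.com postRestURL "//?fmt=zip&resolve=skip" "$part"
done
```

Either way the peak memory needed at any one time stays well under the size of the full 14 GB backup.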
DualBoot
Elite member
Posts: 1326 Joined: Mon Apr 18, 2016 8:18 pm
Location: France - Earth
ZCS/ZD Version: ZCS FLOSS - 8.8.15 Multi servers
by DualBoot » Tue Nov 06, 2018 10:36 am
Hello,
maybe you can try to tune the memory allocated to the command: the intermediate zm* wrapper commands run through Java, so see this example:
Code:
zmlocalconfig zimbra_zmjava_options
zimbra_zmjava_options = -Xmx256m -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2 -Djdk.tls.client.protocols=TLSv1,TLSv1.1,TLSv1.2 -Djava.net.preferIPv4Stack=true
So you can tune the -Xmx option; in my case I would double the value:
Code:
zmlocalconfig -e zimbra_zmjava_options='-Xmx512m -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2 -Djdk.tls.client.protocols=TLSv1,TLSv1.1,TLSv1.2 -Djava.net.preferIPv4Stack=true'
Do not forget the single quotes. No need to restart anything; this takes effect dynamically.
Regards,