Stopping attacks in real-time

Have a great idea for extending Zimbra? Share ideas, ask questions, contribute, and get feedback.
JDunphy
Outstanding Member
Posts: 889
Joined: Fri Sep 12, 2014 11:18 pm
Location: Victoria, BC
ZCS/ZD Version: 9.0.0_P39 NETWORK Edition

Stopping attacks in real-time

Post by JDunphy »

I have been investigating what could be done with an extension like ModSecurity 3 for nginx. I have a high-level idea and could use some feedback. In many ways it's a case of having a hammer (ModSecurity 3, or Lua) and wondering what I could do with it. Is there really a need, given that fail2ban already solves some of the same problems?

Note: if you already run zero-trust then this has no value; it is an attempt to provide some of the benefits of zero-trust. Geo-blocking is another simple firewall mitigation, and it's dead simple given the free availability of CIDR lists by country.

The big goal is to shrink the 2^32 possible IPv4 source addresses down to the set that can legitimately reach us.
  1. Ideally, limit access to our users by IP address if we know them
  2. No false positives, so err on the side of allowing access
  3. Don't apply other rules, such as the OWASP Core Rule Set (CRS), to our own users
  4. Block attacking IPs at the firewall with an ipset called blacklist24hr that removes them automatically when their timeout expires
  5. Use nginx + ModSecurity 3 to determine which IPs our users have
  6. Use nginx + ModSecurity 3 to determine which IPs to add to blacklist24hr
The firewall has a rule like this:

Code: Select all

-A INPUT -m set --match-set blacklist24hr src -j DROP
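For completeness, the set itself is created once with a default timeout so entries age out on their own. A sketch of that one-time setup (needs root; names match the rule above):

```shell
# create the set: a hash of IPv4 addresses whose entries expire after 86400s (24h)
ipset create blacklist24hr hash:ip timeout 86400

# drop matching sources early in the INPUT chain
iptables -I INPUT -m set --match-set blacklist24hr src -j DROP

# a per-entry timeout overrides the set default, e.g.:
#   ipset add blacklist24hr 203.0.113.7 timeout 3600
```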
We have a tool that can prime the whitelist from successful logins (the audit logs) to mitigate the performance and false-positive implications.
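That priming might be as simple as pulling the client IP (the oip= field) out of successful Auth lines in Zimbra's audit.log. A sketch; the exact field names and line layout are an assumption, so check your own logs before trusting it:

```shell
#!/bin/sh
# Sketch: extract client IPs from successful Auth entries in Zimbra audit.log.
# Assumes lines carry "cmd=Auth;" plus an "oip=<addr>" field, and that failed
# attempts contain the words "authentication failed".
prime_whitelist() {
    grep 'cmd=Auth;' \
        | grep -v 'authentication failed' \
        | grep -oE 'oip=[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' \
        | cut -d= -f2 \
        | sort -u
}

# usage: prime_whitelist < /opt/zimbra/log/audit.log >> whitelist_ips.txt
```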

The rules then become this once they authenticate:

Code: Select all

SecRule RESPONSE_HEADERS:/Set-Cookie/ "(ZM_AUTH_TOKEN|ZM_TRUST_TOKEN|JSESSIONID)" \
    "id:1001,phase:3,nolog,pass,exec:/path/to/add_ip_to_whitelist.sh %{REMOTE_ADDR}"
    
At that point they will hit this first rule (or something like it), which turns off further rule processing for them:

Code: Select all

SecRule REMOTE_ADDR "@ipMatchFromFile /path/to/whitelist_ips.txt" \
    "id:1000,phase:1,nolog,pass,ctl:ruleEngine=Off"
    
Other IPs, such as requests with user-agents like python, go through further rule exploration, where they may or may not be placed into our ipset on the fly with a rule like this:

Code: Select all

SecRule REQUEST_HEADERS:User-Agent "(python|wget|curl)" \
    "id:12345,phase:2,t:none,log,deny,status:403,\
    msg:'Blocked User-Agent: %{REQUEST_HEADERS.User-Agent}',\
    setenv:ip.blocked=1,\
    exec:/usr/bin/ipset add blacklist24hr %{REMOTE_ADDR}"
    

Code: Select all

SecRule REQUEST_URI|REQUEST_HEADERS|REQUEST_BODY "(mboximport|wp-login)" \
    "id:12347,phase:2,t:none,log,deny,status:403,\
    msg:'Blocked request: %{MATCHED_VAR}',\
    setenv:ip.blocked=1,\
    exec:/usr/bin/ipset add blacklist24hr %{REMOTE_ADDR}"
    
or they are potentially blocked by the OWASP Core Rule Set (CRS) if they attempt to exploit an unknown flaw.

I started looking at some cookies but don't yet have a good handle on what constitutes a valid authenticated user. I have debugging turned on in ModSecurity, so that will show me the response headers, but I am more at the "how could this work" phase: what would be the best way to identify a valid user? I am also a little worried about how not to kill performance with the check for a valid IP address, and about the best way to dynamically add and remove entries, which could be the killer reason not to try this.

I don't have any solution yet for POP/IMAP/submission, so my "hammer" may not be capable enough. ;-)

Thoughts, ideas, concerns?

Jim
L. Mark Stone
Ambassador
Posts: 2796
Joined: Wed Oct 09, 2013 11:35 am
Location: Portland, Maine, US
ZCS/ZD Version: 10.0.6 Network Edition

Re: Stopping attacks in real-time

Post by L. Mark Stone »

Hi Jim,

I like how you are thinking (if I am understanding it correctly), but I'm wondering if trying to do this in Zimbra's nginx is optimal?

The Zimbra wiki Fail2ban filters parsing mailbox.log and zimbra.log are generally looking just for repeated login failures. In our case, I have added additional filter regexes, like when I see bad actors impersonating From: addresses of gmail.com users, but sending from a non-Gmail mail server. None of these emails have ever gotten through, but if I can ban the bad actors' IPs, it lightens the load on Zimbra and makes it just a little harder for bad actors to approach...
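For anyone wanting to do the same, a custom filter drops in alongside the wiki ones. The fragment below is purely illustrative (the audit.log line format varies by ZCS release, so build the regex from your own logs):

```
# /etc/fail2ban/filter.d/zimbra-audit-example.conf  (illustrative only)
[Definition]
failregex = oip=<HOST>;.*\] security - cmd=Auth;.*authentication failed
ignoreregex =
```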

Anyway, I've always thought of modsecurity3 as essentially a WAF. Commercial WAFs with which I'm familiar have rule sets updated very frequently by the vendor much like DNSBLs update their database entries.

I also have in my mind that some of the now-plugged Zimbra exploits, like the mailbox import thing, have legitimate uses, so they can't be blocked outright. Further, in our case, with customers traveling and domiciled all over the globe, GeoIP blocking isn't really an option, and it doesn't buy us anything since most of the bad behavior we see in the logs comes from countries with governments friendly to (or at least tolerant of) the United States anyway.

I guess I'm asking if your ultimate goal is to build a kind of extensible Zimbra-specific WAF? And if so, would it be better if it sat in front of Zimbra, like a Layer 7 load balancer with WAF-like functionality? It could then have remote-update capability, as you and/or the community spot different flavors of attacks.

As regards things like Lua, no one I have seen has been able to come up with bulletproof AppArmor/SELinux configurations that would let us keep them enabled reliably with Zimbra; one customer last year tried to install Zimbra on a CIS-compliant version of Ubuntu and had to make a number of adjustments to get it to work. Knowing that the file system changed could be helpful, but it also might come too late, like if the Lua alert is sent by email but the Active mailq is already in the tens of thousands...

Apologies if I've totally misunderstood the direction in which you are trying to head!

I absolutely believe you have identified an area needing attention to be clear.

All the best,
Mark
___________________________________
L. Mark Stone
Mission Critical Email - Zimbra VAR/BSP/Training Partner https://www.missioncriticalemail.com/
AWS Certified Solutions Architect-Associate
JDunphy
Outstanding Member
Posts: 889
Joined: Fri Sep 12, 2014 11:18 pm
Location: Victoria, BC
ZCS/ZD Version: 9.0.0_P39 NETWORK Edition

Re: Stopping attacks in real-time

Post by JDunphy »

L. Mark Stone wrote: Mon May 01, 2023 2:54 pm
I also have in my mind that some of the now-plugged Zimbra exploits, like the mailbox import thing, have legitimate uses, so can't be blocked outright. Further, in our case with customers traveling and domiciled all over the globe, GEOIP blocking isn't really an option, and doesn't buy us anything anyways since most bad behavior we see in the logs comes from countries with governments friendly to (or at least tolerant of) the United States anyway.

I guess I'm asking if your ultimate goal is to build a kind of extensible Zimbra-specifc WAF? And if so, would it be better if it sat it front of Zimbra, like a Layer 7 load balancer with WAF-like functionality? It could then have remote-update capability, as you and/or the community spot different flavors of attacks.
Hi Mark,

Thanks for the response, and an "extensible Zimbra-specific WAF" is exactly what I am investigating.

Definitely one should never block on things like the mailbox import, even with a 400 status code; that was more an example of an overly simplified rule, because it's a pattern some of us have recently seen. If I have learned anything from watching nginx.access.log, it's that lots of 400 and 500 errors are perfectly normal in Zimbra's operation, as are errors returned in response headers that can cause some of the OWASP rules to fail. There are patterns that are obvious, but the more important issue is what tools we have available for future unknowns and attacks, which is really what I am looking into. Keying off the logs after the fact is a reactive approach by comparison, but nonetheless a powerful one given its simplicity. That could also be done at this level: those ModSecurity pattern rules could generate fail2ban jail additions instead of ipset entries.

I see this as a layered approach where various rules can be included or created depending on the risk the server is experiencing, or is soon to experience. Adding or backing out capability (think include statements pointing at text rule files, as a conceptual model) would allow each admin to create or reuse rules, building a customized solution tailored to their specific environment without Synacor/Zimbra intervention. There is remote support for rules with a key (think customer password), so a vetted or commercial rule service is a possibility, but that is probably a bridge too far for me to trust at this time. It might be a different story if Zimbra introduced a vetted option, but that is the cart before the horse.

The overall performance impact is unknown. I did locate some benchmarking research papers: a few said ModSecurity 3 is faster, and a few said it is worse than ModSecurity 2, which was heavily based on Apache's internal architecture and is still the only ModSecurity version that passes the CRS test suite. Not to mention that Trustwave (owner of ModSecurity) has announced end of support for ModSecurity in 2024 and seems to be moving toward Coraza, so it can focus on rules rather than the engine.
Ref: https://coreruleset.org/20211222/talkin ... oraza-waf/

I was also looking at what would be necessary to modify ModSecurity 3 and add native support for performance bottlenecks, which is where embedding Lua (the language) or adding an extension in C could come into play. Adding another smart proxy/load balancer (or whatever we are calling them these days) has advantages and disadvantages. I use Cloudflare's WAF (which is nginx + OpenResty (Lua) + BPF, etc.), which gives us anycast plus custom rules, but would I put that in front of our mail server, which we go to great lengths to manage for privacy reasons? That CF WAF is also an indication of how this might work at scale, given the size of their network. Do I add another layer in front when I am a single server with enough CPU cycles to spare for the extra complexity? You could, but you might also scale the proxies and keep this all under Zimbra management. I don't know the answer here, nor do I "know what I don't know", to be exact. Given these are rules, the location and method would be at the admin's discretion, meaning future engines like Coraza, with associated connectors for HAProxy/nginx/etc., should be able to run them.

I am at the investigative stage, but I did jump the gun a little yesterday given the simplicity of the approach: I was surprised that, after a few adjustments, I was able to enable the default rule set and run Zimbra through it without issue in my limited testing. That isn't what I want to do initially, though, nor is it the goal here. The CRS (Core Rule Set) might be the ultimate goal, but would I ever trust a large set of rules for Zimbra that I would also need to test and verify with every update? I don't think many of us are ready to sign up for more testing work, and what do we do after every Zimbra patch or upgrade? That is why I thought that if we could disable rules for our own users, it might give us more options.

No, what I have in mind is far more limited in scope as a result.
  1. Never apply rules to our users
  2. A limited subset of rules with 100% no false positives to identify some bots, scanners, etc.
  3. An enhanced and vetted smaller set of best-practice rules to identify common and future unknown attacks ... eventually leading toward a threat-sharing network
I don't care if they go after robots.txt files, for example, but I am keenly interested when I see them testing a buffer by sending a loaded 4K header in the request phase.

Part of me believes this has been investigated before and there could be very good reasons not to do it. I just hope it doesn't turn into another check_attacks.pl, where I thought the problem and solution were simple. :-) And given that I can't seem to quit on that program, it is now getting options to generate ModSecurity rules and to replay any discovered attacks found in my nginx.access.log back at my Zimbra server. Not to mention I can log the POST data now, so there's even more for me to key on. That program is a sickness. :-)

Keep your ideas and concerns coming.

Jim
JDunphy
Outstanding Member
Posts: 889
Joined: Fri Sep 12, 2014 11:18 pm
Location: Victoria, BC
ZCS/ZD Version: 9.0.0_P39 NETWORK Edition

Re: Stopping attacks in real-time

Post by JDunphy »

I should mention, for those not familiar, that rules can operate in DetectionOnly mode to learn how they would behave without causing harm (other than performance, of course). They also add improved debugging and logging at the various phases:

Code: Select all

* phase 1: request headers
* phase 2: request body
* phase 3: response headers
* phase 4: response body
* phase 5: logging
Rules can operate at any or all of these phases and can be chained with other rules, which is why performance matters when picking a select set of rules.
All rules can be turned off completely (modsecurity off;) or, even better, the module can be left unloaded entirely while still being available should one need it in an emergency.
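In modsecurity.conf terms, the DetectionOnly setup looks roughly like this (log paths are examples):

```
# evaluate rules but never block; what *would* have been denied is only logged
SecRuleEngine DetectionOnly

# verbose per-transaction debugging (0=off .. 9=everything); costly, dev only
SecDebugLog /var/log/nginx/modsec_debug.log
SecDebugLogLevel 9

# the audit log records the phases above for flagged transactions
SecAuditEngine RelevantOnly
SecAuditLog /var/log/nginx/modsec_audit.log
```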

Jim
JDunphy
Outstanding Member
Posts: 889
Joined: Fri Sep 12, 2014 11:18 pm
Location: Victoria, BC
ZCS/ZD Version: 9.0.0_P39 NETWORK Edition

Re: Stopping attacks in real-time

Post by JDunphy »

This project continues, but I have made a few mistakes by thinking GPT-4 could help me close the learning-curve gap between ModSecurity 2.9 and ModSecurity 3, which has a lot of incomplete functionality. It looks like I have to read the manual, which is itself incomplete at times for version 3. ;-) In the process, I discovered that ModSecurity 3 never finished implementing exec, so I had to add Lua to execute external scripts, for example. GPT-4's hallucinations are fairly bad for this kind of rule writing. As a result I have changed my plan of attack, which is now:

Code: Select all

# The order of file inclusion in your webserver configuration should always be:
# 1. modsecurity.conf
# 2. crs-setup.conf (this file)
# 3. rules/*.conf (the CRS rule files for OWASP_CRS 4+)
I have this working now; the next step is to look at the OWASP_CRS rules for a deeper dive into rule writing before attempting some custom Zimbra rules.
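For reference, the wiring that makes nginx pick all this up amounts to only a few lines (paths match my /etc/nginx/modsec layout; adjust to taste):

```
# nginx.conf, main context: load the dynamic connector module
load_module modules/ngx_http_modsecurity_module.so;

# http or server context of the proxy templates:
modsecurity on;
modsecurity_rules_file /etc/nginx/modsec/main.conf;

# /etc/nginx/modsec/main.conf -- the include order from the comment above
Include /etc/nginx/modsec/modsecurity.conf
Include /etc/nginx/modsec/crs-setup.conf
Include /etc/nginx/modsec/rules/*.conf
```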

On the positive side, I only had to make the following changes to modsecurity.conf, because the Zimbra client says it is sending soap+xml when it is really sending JSON, and the default rule attempted to parse it as XML. The solution was to verify the content type and replace one generic rule with two rules, one for each type.

Code: Select all

# If SecArgumentsLimit has been set, you probably want to reject any
# request body that has only been partly parsed. The value used in this
# rule should match what was used with SecArgumentsLimit
#JAD#SecRule &ARGS "@ge 1000" \
SecRule &ARGS "@ge 2000" \
"id:'200007', phase:2,t:none,log,deny,status:400,msg:'Failed to fully parse request body due to large argument count',severity:2"

# Verify that we've correctly processed the request body.
# As a rule of thumb, when failing to process a request body
# you should reject the request (when deployed in blocking mode)
# or log a high-severity alert (when deployed in detection-only mode).
#
#
#=========================================================================
# BEGIN Zimbra: the client says it is sending soap+xml but it is really JSON,
#       and this rule parsed it as XML, so the workaround is to check for both
#       XML and JSON
#
#JAD#SecRule REQBODY_ERROR "!@eq 0" \
#JAD#"id:'200002', phase:2,t:none,log,deny,status:400,msg:'Failed to parse request body.',logdata:'%{reqbody_error_msg}',severity:2"
# Workaround: use two separate rules, one for each content type
#
#

# detect content type and set tx.is_json when JSON and tx.is_xml when XML
SecRule REQUEST_HEADERS:Content-Type "application/json" "id:'100001',phase:1,t:none,pass,nolog,setvar:tx.is_json=1"
SecRule REQUEST_HEADERS:Content-Type "application/xml|text/xml|application/soap\+xml" "id:'100002',phase:1,t:none,pass,nolog,setvar:tx.is_xml=1"

# Evaluate these rules during phase 2 (request body processing)
# Chain the following 2 rules together
#  so If request is JSON and a parsing error
SecRule REQBODY_ERROR "!@eq 0" "id:'300001', phase:2,t:none,chain,deny,status:400,msg:'Failed to parse request body.',logdata:'%{reqbody_error_msg}',severity:2"
  SecRule TX:IS_JSON "@eq 1"

# Chain the following 2 rules together
#  so If request is XML and a parsing error
SecRule REQBODY_ERROR "!@eq 0" "id:'300002', phase:2,t:none,chain,deny,status:400,msg:'Failed to parse request body.',logdata:'%{reqbody_error_msg}',severity:2"
  SecRule TX:IS_XML "@eq 1"

# END Zimbra
I don't know where this is heading, but I opened up a staging server that has no users (this is NETWORK 8.8.15 P39) to allow external attacks, and I continue to create scripts to automate the process of building and installing.
A word of warning: if you use the Zimbra-supplied third-party tree to build the ModSecurity connector against their nginx, know in advance that the default target in the makefile removes all packages, and therefore your installed Zimbra in /opt/zimbra. Building should therefore really just be:

Code: Select all

% make getsrc    (1 time only)
% make build
And never

Code: Select all

% make
Which would do a clean before the build. I can build the connector with or without the Zimbra third-party support, and my build script already handles things like one-time patching of the appropriate nginx conf files, installing rules, testing nginx conf files, restarting the proxy, etc.

At this point, however, I can use the admin console and the web interface without problems, with all the default ModSecurity rules enabled.

Jim
L. Mark Stone
Ambassador
Posts: 2796
Joined: Wed Oct 09, 2013 11:35 am
Location: Portland, Maine, US
ZCS/ZD Version: 10.0.6 Network Edition

Re: Stopping attacks in real-time

Post by L. Mark Stone »

Hi Jim,

Are you really using ChatGPT Plus (the paid web access version of ChatGPT-4)?

I am using the free version, and I am finding the more domain-specific a question I ask, the less useful answers I get. I haven't felt the need yet to spend the $20/month for a Plus subscription, so really can't compare the two.

I did ask (free) ChatGPT: "How can I add the nginx ModSecurity 3 module to an existing nginx 1.20.0 installation on Ubuntu Server 20.04?" and the answer included downloading a new source code tarball, compiling and installing it, which is the opposite of what I asked it to do.

So I then asked it: "As regards your previous response, I already have nginx installed; I want only to add the ModSecurity 3 module to it without otherwise modifying the existing nginx installation." and then it gave me what I would have expected the first time (not sure it's entirely correct though!).

All the best,
Mark
JDunphy
Outstanding Member
Posts: 889
Joined: Fri Sep 12, 2014 11:18 pm
Location: Victoria, BC
ZCS/ZD Version: 9.0.0_P39 NETWORK Edition

Re: Stopping attacks in real-time

Post by JDunphy »

L. Mark Stone wrote: Tue May 09, 2023 3:26 pm Hi Jim,

Are you really using ChatGPT Plus (the paid web access version of ChatGPT-4)?
Hi Mark,

Yes, I am using the ChatGPT Plus version with GPT-4.

Initially, it was to see if it was better than GPT-3.5, but now I can't go back to search engines ... hahahhaha ;-) My first experience was to build something I know with GPT-3.5, but it took me 3-4 hours to get a working program, as it lacked context from previous questions and kept making mistakes, losing concentration, and creating new problems I would then have to debug too. ;-) I switched to the paid version the next day with GPT-4 and it did the same program in 30 seconds. That was the Perl script I had shared that manages IP addresses based on timestamps and can import/export for use in black/whitelists.

For some things it works really well, but it had a really difficult time with BIRD 2 (BGP) and now seems completely clueless with ModSecurity 3. The problem appears worse when syntax has changed and documentation is out of sync with the actual code, but there are areas where it can be beneficial: explaining working rules, resolving install dependencies, resolving some error messages, creating a new rule that you will then need to debug ;-) etc. Often it can get you close, but ultimately it is up to you to fix the last 10% to get it where you want it. The more you know about the subject, the better the tool works for you. For example, with its help I had a really good day with CRS yesterday and can see how we could use this with minor tweaks in Zimbra. It is pretty impressive how CRS has layered its rules so that different paranoia levels and/or tagging can bring in different rules; those techniques could be used for specialized Zimbra rules. So I used it to explain exactly what I was seeing without having to track down documentation and concepts, and I could ask follow-up questions to make sure I really understood the explanations it was giving me.

I think I view it as an intern, an assistant, or maybe even one of my kids. ;-) It gets most of the research legwork out of the way, and then with a little guidance, depending on its previous training, you can get something useful with little time expended. It isn't always evident at first when it is hallucinating versus generating correct answers, so that is one of the challenges.

Where it has succeeded is with simple tasks where I have enough knowledge of the subject matter to spot those hallucinations. Quite often we post logs and then have to obfuscate the email and IP addresses in these forums, for example. I started to type in this program as I normally would, but then remembered I have ChatGPT 4 and had it generate a small program that does it all... So I can do this:

Code: Select all

% check_attacks.pl | obfusicate.pl 
or
% cat /var/log/zimbra/log | obfusicate.pl
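I haven't posted what's inside obfusicate.pl, but as an assumption about what such a filter does, a sed pipeline covers the basics (masking IPv4 addresses and the local part of email addresses):

```shell
#!/bin/sh
# Stand-in for an obfuscation filter: masks IPv4 addresses and email local
# parts on stdin. Purely illustrative of the idea, not the actual script.
obfuscate() {
    sed -E -e 's/([0-9]{1,3}\.){3}[0-9]{1,3}/x.x.x.x/g' \
           -e 's/[A-Za-z0-9._%+-]+@/user@/g'
}

# usage: check_attacks.pl | obfuscate
```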
To code something like that up might take me 20 minutes with my typing ability, debugging included, but it was done in about 15 seconds with a simple cut and paste. I don't find GPT-3.5 very good at this with Perl because it generates really bad code with a lot of hallucinations. I also need it to remember context so it can fix its mistakes or misunderstandings of the requirements; I find the paid version (GPT-4) really good at that extra context and remembering. The other problem is it wants to install everything, with no understanding that we sometimes need to work with what is available on our production systems and don't want to bring in lots of Perl modules and their dependencies, so I have to keep dialing it back. Often I will tell it to generate bash instead, because I can't tolerate the Perl code it is generating even if it is really elegant. Making fewer changes to production systems is my primary goal here.

Now this might be something you would be interested in.

I tried a Chrome plugin last week that worked well enough in Zimbra that I could see where we might be headed, and it would be a game changer for our users. It will certainly be a competitive advantage for Gmail/Outlook if we don't have it. It would be a really nice enhancement for future Zimbra versions. The plugin worked really well in Gmail, Twitter, Facebook, etc. It basically has a menu with what I will call pre-selected questions to query ChatGPT 4. You highlight the text, right-click, and up pops this menu:
* generate professional tone
* make it shorter
* make it longer
* reply
* etc.

It only works in text-only mode in Zimbra's compose window, because the Zimbra HTML is too complex for it to parse. It requires a tab be open to ChatGPT and uses that. There is no reason Zimbra could not monetize a feature like that with their own backend service, or provide this new functionality with something like this plugin that feeds these preset questions to ChatGPT 4.

All this is moving fairly fast, but it has become obvious to me after using it for 3+ weeks that much of the drudgery of wading through search engine noise to find answers is far more fun when interacting with GPT-4, which gives me a lot more time for the things I would like to explore.

Jim
L. Mark Stone
Ambassador
Posts: 2796
Joined: Wed Oct 09, 2013 11:35 am
Location: Portland, Maine, US
ZCS/ZD Version: 10.0.6 Network Edition

Re: Stopping attacks in real-time

Post by L. Mark Stone »

JDunphy wrote: Tue May 09, 2023 5:02 pm
Now this might be something you would be interested in.

I tried a chrome plugin last week that worked well enough in zimbra that I could see where we might be headed and a game changer for our users. It will certainly be a competitive advantage for gmail/outlook if we don't have that. It would be a really nice enhancement for future zimbra versions. The plugin worked really well in gmail, twitter, facebook, etc. The plugin basically had a menu with what I will call pre-selects of questions to query chatgpt4. You highlight the text, right mouse click a menu item and up pops this menu
* generate profession tone
* make it shorter
* make it longer
* reply
* etc.

It can work with text only mode in zimbra's compose window because the zimbra html is too complex for it to parse. It requires a tab be open to chatgpt and uses that. There is no reason that Zimbra could not monetize a feature like that with their own backend service or provide this new functionality with something like this plugin that feeds these preset questions to chatgpt4.

All this is moving fairly fast but it is becoming obvious to me after using it for 3+ weeks that a lot of that drudgery of going through lots of search engine noise to find answers is a lot more fun with interacting with GPT4 which gives me a lot more time to do things I would like to explore.

Jim

Jim,

I would ABSOLUTELY suggest you post this as a feature request on the new pm.zimbra.com! Smells like a new zimlet to me...

Is it using the web version of ChatGPT Plus? Or does it require OpenAI API access?

Either way, if it requires a Zimbra user to have a (free or paid) ChatGPT account, Zimbra ought to be able, behind the scenes of the Compose window, to pull out the ASCII version of the email so that the zimlet (if that's the best way to do this) can ship the selected message snippet out to ChatGPT using the user's (or a corporate?) account.

I could also see a popup menu item like "Is this text truthful and accurate?" as well as allowing the user to enter a custom query to ChatGPT concerning the selected text.


All the best,
Mark
JDunphy
Outstanding Member
Posts: 889
Joined: Fri Sep 12, 2014 11:18 pm
Location: Victoria, BC
ZCS/ZD Version: 9.0.0_P39 NETWORK Edition

Re: Stopping attacks in real-time

Post by JDunphy »

Requirements:
- The ModSecurity library, which is in /usr/local in my environment
- The ModSecurity 3 connector for nginx, which is in /etc/nginx/modsec in my environment
- One-line patches to /opt/zimbra/nginx (web and main) to load the above connector module and include a ModSecurity .conf file

The more I use this, the more I am convinced it is a valuable and essential tool with a lot of application to Zimbra. This morning I checked my blacklist24hr ipset and saw that it had stopped 80+ active IP addresses in a little over 10 minutes. I have one Zimbra-specific rule in place, plus the complete OWASP_CRS at paranoia level 1 (the least likely to cause false positives) and 100% sampling, meaning every request is being inspected. All of that is configurable. It is also checking for other active attacks, including bots, web shells, cross-site scripting, etc. I have all these rules enabled:

Code: Select all

crawlers-user-agents.data			     REQUEST-901-INITIALIZATION.conf		  REQUEST-942-APPLICATION-ATTACK-SQLI.conf		restricted-files.data
iis-errors.data					     REQUEST-905-COMMON-EXCEPTIONS.conf		  REQUEST-943-APPLICATION-ATTACK-SESSION-FIXATION.conf	restricted-upload.data
java-classes.data				     REQUEST-911-METHOD-ENFORCEMENT.conf	  REQUEST-944-APPLICATION-ATTACK-JAVA.conf		scanners-headers.data
java-code-leakages.data				     REQUEST-913-SCANNER-DETECTION.conf		  REQUEST-949-BLOCKING-EVALUATION.conf			scanners-urls.data
java-errors.data				     REQUEST-920-PROTOCOL-ENFORCEMENT.conf	  RESPONSE-950-DATA-LEAKAGES.conf			scanners-user-agents.data
lfi-os-files.data				     REQUEST-921-PROTOCOL-ATTACK.conf		  RESPONSE-951-DATA-LEAKAGES-SQL.conf			scripting-user-agents.data
php-config-directives.data			     REQUEST-922-MULTIPART-ATTACK.conf		  RESPONSE-952-DATA-LEAKAGES-JAVA.conf			sql-errors.data
php-errors.data					     REQUEST-930-APPLICATION-ATTACK-LFI.conf	  RESPONSE-953-DATA-LEAKAGES-PHP.conf			ssrf.data
php-errors-pl2.data				     REQUEST-931-APPLICATION-ATTACK-RFI.conf	  RESPONSE-954-DATA-LEAKAGES-IIS.conf			unix-shell.data
php-function-names-933150.data			     REQUEST-932-APPLICATION-ATTACK-RCE.conf	  RESPONSE-955-WEB-SHELLS.conf				web-shells-php.data
php-function-names-933151.data			     REQUEST-933-APPLICATION-ATTACK-PHP.conf	  RESPONSE-959-BLOCKING-EVALUATION.conf			windows-powershell-commands.data
php-variables.data				     REQUEST-934-APPLICATION-ATTACK-GENERIC.conf  RESPONSE-980-CORRELATION.conf
REQUEST-900-EXCLUSION-RULES-BEFORE-CRS.conf.example  REQUEST-941-APPLICATION-ATTACK-XSS.conf	  RESPONSE-999-EXCLUSION-RULES-AFTER-CRS.conf.example
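The paranoia level and sampling rate mentioned above are both set in crs-setup.conf. With CRS 4 the knobs look like this (CRS 3 used tx.paranoia_level; verify against your shipped crs-setup.conf):

```
# least aggressive paranoia level: fewest false positives
SecAction "id:900000, phase:1, pass, t:none, nolog, \
    setvar:tx.blocking_paranoia_level=1"

# inspect 100% of requests; lower this to sample only part of the traffic
SecAction "id:900400, phase:1, pass, t:none, nolog, \
    setvar:tx.sampling_percentage=100"
```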
From my perspective, it feels no different with the rules on or off, though no performance testing is being done at this point in the evaluation. I am convinced we can disable almost all the rules for our own users, and large sites can move into this gradually, using sampling to make sure performance isn't impacted. There is also this from Trustwave:
Ref: https://www.trustwave.com/en-us/resourc ... endations/

The logging and debugging levels are really helpful for seeing what lurks in some of those innocent-looking requests we often see in our logs.

I am almost convinced at this point to do this as a zimbra plugin for modsecurity vs publishing rules only.

My first and only working rule is the following. It is an interesting example because it extends ModSecurity 3 with the ability to execute commands. The rule puts bots into a 24-hour timeout if they come at us using an IP address instead of our FQDN on port 443. It is accomplished with the following rule, which may not be optimal but appears to work:

Code: Select all

SecRule REQUEST_HEADERS:Host "@rx ^(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})" \
   "id:950000,log, pass, t:none, capture, setvar:'tx.host_was_ip=1', chain"
    SecRule REMOTE_ADDR "^(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})" \
        "capture,nolog,setvar:tx.addr=%{tx.1},setvar:tx.ip_block=1,chain"
    SecRule TX:ip_block "@streq 1" \
       "id:950003, block, log, msg:'Blocked IP address: %{REMOTE_ADDR}', status:403, setenv:REMOTE_ADDR=tx.addr,\
        exec:/etc/nginx/modsec/add_to_blacklist.lua"
And the following included Lua code. Note: checking rules with nginx -t will also check the Lua syntax in case you have errors. The Lua code below could be adapted to fail2ban, but be warned that nginx runs this in worker threads as the zimbra user, so make sure any database or application called is re-entrant. I have some ideas for mapping an ipset into a fail2ban jail if that is desired, using an external script that switches an empty ipset into place while the script drains the current IPs into the fail2ban jail. In other words, use an ipset, but one not attached to a firewall rule.

Code: Select all

-- verify what uid we are using
function get_process_uid()
    local handle = io.popen("id -u")
    local uid = handle:read("*a")
    handle:close()
    return tonumber(uid)
end

-- Note: cp /usr/sbin/ipset /usr/local/bin/ipset; chmod 4555 !$; or sudo entry for /usr/sbin/ipset
-- add_to_blacklist.lua (this runs as zimbra's uid but ipset needs root)
function add_to_blacklist(remote_addr)
    -- %%% ipset is thread safe but is fail2ban or its db, or other programs?
    -- sanitize before building a shell command: accept only a dotted quad,
    -- rather than trusting REMOTE_ADDR as parsed by nginx
    if not remote_addr or not remote_addr:match("^%d+%.%d+%.%d+%.%d+$") then
        return
    end
    local cmd = "/usr/local/bin/ipset add blacklist24hr " .. remote_addr .. " -exist"
    os.execute(cmd)
    --local uid = get_process_uid()
    --local cmd = "/bin/logger -p local2.info NETWORK setting " .. remote_addr .. " uid: " .. uid
    cmd = "/bin/logger -p local2.info NETWORK zimbra is adding blacklist24hr " .. remote_addr
    os.execute(cmd)
end

--add_to_blacklist("2.1.1.1")
-- the modsecurity rule sets REMOTE_ADDR and we extract it here
add_to_blacklist(m.getvar("REMOTE_ADDR"))
These rules will get tighter and better as I become more adept with the tool.
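The ipset-to-fail2ban handoff mentioned above could be sketched as below: swap a fresh empty set into place so new additions aren't raced, then walk the drained members into a jail. The jail name (zimbra-modsec) and temporary set name are my assumptions, not anything Zimbra or fail2ban define; set DRY_RUN=1 to print the commands instead of running them, since ipset and fail2ban-client need root:

```shell
#!/bin/sh
# Drain blacklist24hr into a fail2ban jail without losing new entries:
# swap an empty set into place first, then feed the old members to fail2ban.
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

drain_to_fail2ban() {
    run ipset create blacklist24hr_tmp hash:ip timeout 86400 -exist
    run ipset swap blacklist24hr_tmp blacklist24hr  # live set is now empty
    # member lines start with a digit, one address (plus timeout) per line
    ipset list blacklist24hr_tmp 2>/dev/null | awk '/^[0-9]/{print $1}' |
    while read -r ip; do
        run fail2ban-client set zimbra-modsec banip "$ip"
    done
    run ipset destroy blacklist24hr_tmp
}

# example: DRY_RUN=1 drain_to_fail2ban
```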

If anyone needs the steps to install ModSecurity 3, let me know and I'll post more information. I broke them into four steps/scripts:
  • step0-modsecurity.sh - install the components needed to compile everything
  • step1-modsecurity.sh - grab, build, and install ModSecurity into /usr/local
  • step2-connector.sh - build and install the nginx connector; currently two ways, one using the Zimbra third-party nginx and one without
  • step3-install-modsecurity.sh - bash script with options; it will be renamed to do all the steps at some point
I run the step3 script to test my rules and restart the proxy. It can also patch Zimbra's nginx and install /etc/nginx/modsec.

I am unsure of the best way to build the connector. If I had any pull with Zimbra, I would ask them to include two files (the ModSecurity library and the nginx connector) with their nginx rpm. I currently modify their spec file myself, adding two lines for the connector. I have also built it statically into nginx, the way they build all their nginx modules, but I like the option of loading the module only when I need it.

Code: Select all

% cat step0-modsecurity.sh
dnf groupinstall "Development Tools"
dnf install openssl-devel pcre-devel zlib-devel libxml2-devel libxslt-devel gd-devel perl-ExtUtils-Embed GeoIP-devel
dnf install expat-devel yajl-devel

# so we can do exec
dnf install lua-devel

# to use syslog from lua we need luaposix; luarocks is the easiest way to install it
dnf install luarocks
luarocks install luaposix
And the ModSecurity library:

Code: Select all

% cat step1-modsecurity.sh
git clone --depth 1 -b v3.0.9 --single-branch https://github.com/SpiderLabs/ModSecurity
#git clone --depth 1 -b v3/master --single-branch https://github.com/SpiderLabs/ModSecurity
cd ModSecurity
git submodule init
git submodule update
./build.sh
make clean    # optional: if you want to reconfigure anything with ./configure
./configure --with-pcre2 --with-lua
make
make install
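For the connector itself (my step2), a dynamic-module build is one option. This is only a sketch under the assumption that you build against an nginx source tree matching the running binary; for Zimbra's patched nginx you would reuse its exact ./configure flags (visible via nginx -V) instead of the minimal ones shown here:

```shell
# Build the ModSecurity-nginx connector as a loadable module.
git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx
# from inside an nginx source tree matching your running nginx version:
./configure --with-compat --add-dynamic-module=../ModSecurity-nginx
make modules
# copy objs/ngx_http_modsecurity_module.so into nginx's modules dir, then
# add to nginx.conf:  load_module modules/ngx_http_modsecurity_module.so;
```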
It should go without saying that I am running this on a staging server, which is acting as a honeypot while I learn how the tool behaves and how to write better rules. Frankly, I had no idea it could be extended this easily, or that it had scoring, the ability to turn off large swaths of rule sets, or even the ability to identify slow-performing rules. It is a flexible tool: we could put up a captcha or redirect to a landing page instead of strictly blocking.

Jim
liverpoolfcfan
Elite member
Elite member
Posts: 1096
Joined: Sat Sep 13, 2014 12:47 am

Re: Stopping attacks in real-time

Post by liverpoolfcfan »

Jim,

This is really interesting stuff. As always, thank you for your fantastic work and for taking the time to explain it to the rest of us.

Vincent