Merge branch '0.11' into wc/debian

debian
sebres 2020-11-20 18:51:36 +01:00
commit a5ea34c51b
116 changed files with 2411 additions and 999 deletions

.github/workflows/main.yml (new file)

@@ -0,0 +1,66 @@
name: CI
# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
push:
paths-ignore:
- 'doc/**'
- 'files/**'
- 'man/**'
pull_request:
paths-ignore:
- 'doc/**'
- 'files/**'
- 'man/**'
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-20.04
strategy:
matrix:
python-version: [2.7, 3.5, 3.6, 3.7, 3.8, 3.9, pypy2, pypy3]
fail-fast: false
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Python version
run: |
F2B_PY=$(python -c "import sys; print(sys.version)")
echo "Python: ${{ matrix.python-version }} -- $F2B_PY"
F2B_PY=${F2B_PY:0:1}
echo "Set F2B_PY=$F2B_PY"
echo "F2B_PY=$F2B_PY" >> $GITHUB_ENV
- name: Install dependencies
run: |
python -m pip install --upgrade pip
if [[ "$F2B_PY" = 3 ]] && ! command -v 2to3x -v 2to3 > /dev/null; then
pip install 2to3
fi
pip install systemd-python || echo 'systemd not available'
pip install pyinotify || echo 'inotify not available'
- name: Before scripts
run: |
cd "$GITHUB_WORKSPACE"
# Manually execute 2to3 for now
if [[ "$F2B_PY" = 3 ]]; then echo "2to3 ..." && ./fail2ban-2to3; fi
# (debug) output current preferred encoding:
python -c 'import locale, sys; from fail2ban.helpers import PREFER_ENC; print(PREFER_ENC, locale.getpreferredencoding(), (sys.stdout and sys.stdout.encoding))'
- name: Test suite
run: if [[ "$F2B_PY" = 2 ]]; then python setup.py test; else python bin/fail2ban-testcases --verbosity=2; fi
#- name: Test initd scripts
# run: shellcheck -s bash -e SC1090,SC1091 files/debian-initd
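The "Python version" step above derives the major Python version with bash substring expansion and hands it to later steps through the runner-provided `$GITHUB_ENV` file. A stand-alone sketch of that mechanism (the version string and the temp file stand in for the real runner environment):

```shell
# What `python -c "import sys; print(sys.version)"` may print (illustrative):
F2B_PY="3.9.1 (default, Nov 20 2020, 18:51:36)"
# Keep only the first character, i.e. the major version:
F2B_PY=${F2B_PY:0:1}
# Stand-in for the file GitHub Actions provides; appended KEY=VALUE lines
# become environment variables in all subsequent steps of the job:
GITHUB_ENV=$(mktemp)
echo "F2B_PY=$F2B_PY" >> "$GITHUB_ENV"
```

Later steps then branch on `$F2B_PY` exactly as the "Install dependencies" and "Test suite" steps do.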


@@ -18,14 +18,14 @@ matrix:
   - python: 2.7
     name: 2.7 (xenial)
   - python: pypy
-    dist: trusty
   - python: 3.3
     dist: trusty
   - python: 3.4
   - python: 3.5
   - python: 3.6
   - python: 3.7
-  - python: 3.8-dev
+  - python: 3.8
+  - python: 3.9-dev
   - python: pypy3.5
 before_install:
   - echo "running under $TRAVIS_PYTHON_VERSION"
@@ -69,8 +69,8 @@ script:
   - if [[ "$F2B_PY" = 3 ]]; then coverage run bin/fail2ban-testcases --verbosity=2; fi
   # Use $VENV_BIN (not python) or else sudo will always run the system's python (2.7)
   - sudo $VENV_BIN/pip install .
-  # Doc files should get installed on Travis under Linux (python >= 3.8 seem to use another path segment)
-  - if [[ $TRAVIS_PYTHON_VERSION < 3.8 ]]; then test -e /usr/share/doc/fail2ban/FILTERS; fi
+  # Doc files should get installed on Travis under Linux (some builds/python's seem to use another path segment)
+  - test -e /usr/share/doc/fail2ban/FILTERS && echo 'found' || echo 'not found'
   # Test initd script
   - shellcheck -s bash -e SC1090,SC1091 files/debian-initd
 after_success:


@@ -6,7 +6,7 @@
 Fail2Ban: Changelog
 ===================
-ver. 0.11.1 (2020/01/11) - this-is-the-way
+ver. 0.11.2-dev (20??/??/??) - development edition
 -----------
 ### Compatibility:
@@ -37,6 +37,70 @@ ver. 0.11.1 (2020/01/11) - this-is-the-way
 - Since v0.10 fail2ban supports the matching of IPv6 addresses, but not all ban actions are
   IPv6-capable now.
### Fixes
* [stability] prevent race condition - no ban if filter (backend) is continuously busy if
too many messages will be found in log, e. g. initial scan of large log-file or journal (gh-2660)
* pyinotify-backend sporadically avoided initial scanning of log-file by start
* python 3.9 compatibility (and Travis CI support)
* restoring a large number (500+ depending on files ulimit) of current bans when using PyPy fixed
* manual ban is written to database, so can be restored by restart (gh-2647)
* `jail.conf`: don't specify `action` directly in jails (use `action_` or `banaction` instead)
* no mails-action added per default anymore (e. g. to allow that `action = %(action_mw)s` should be specified
per jail or in default section in jail.local), closes gh-2357
* ensure we've unique action name per jail (also if parameter `actname` is not set but name deviates from standard name, gh-2686)
* don't use `%(banaction)s` interpolation because it can be complex value (containing `[...]` and/or quotes),
so would bother the action interpolation
* fixed type conversion in config readers (take place after all interpolations get ready), that allows to
specify typed parameters variable (as substitutions) as well as to supply it in other sections or as init parameters.
* `action.d/*-ipset*.conf`: several ipset actions fixed (no timeout per default anymore), so no discrepancy
between ipset and fail2ban (removal from ipset will be managed by fail2ban only, gh-2703)
* `action.d/cloudflare.conf`: fixed `actionunban` (considering new-line chars and optionally real json-parsing
with `jq`, gh-2140, gh-2656)
* `action.d/nftables.conf` (type=multiport only): fixed port range selector, replacing `:` with `-` (gh-2763)
* `action.d/firewallcmd-*.conf` (multiport only): fixed port range selector, replacing `:` with `-` (gh-2821)
* `action.d/bsd-ipfw.conf`: fixed selection of rule-no by large list or initial `lowest_rule_num` (gh-2836)
* `filter.d/common.conf`: avoid substitute of default values in related `lt_*` section, `__prefix_line`
should be interpolated in definition section (inside the filter-config, gh-2650)
* `filter.d/courier-smtp.conf`: prefregex extended to consider port in log-message (gh-2697)
* `filter.d/traefik-auth.conf`: filter extended with parameter mode (`normal`, `ddos`, `aggressive`) to handle
the match of username differently (gh-2693):
- `normal`: matches 401 with supplied username only
- `ddos`: matches 401 without supplied username only
- `aggressive`: matches 401 and any variant (with and without username)
* `filter.d/sshd.conf`: normalizing of user pattern in all RE's, allowing empty user (gh-2749)
### New Features and Enhancements
* fail2ban-regex:
- speedup formatted output (bypass unneeded stats creation)
- extended with prefregex statistic
- more informative output for `datepattern` (e. g. set from filter) - pattern : description
* parsing of action in jail-configs considers space between action-names as separator also
(previously only new-line was allowed), for example `action = a b` would specify 2 actions `a` and `b`
* new filter and jail for GitLab recognizing failed application logins (gh-2689)
* new filter and jail for Grafana recognizing failed application logins (gh-2855)
* new filter and jail for SoftEtherVPN recognizing failed application logins (gh-2723)
* `filter.d/guacamole.conf` extended with `logging` parameter to follow webapp-logging if it's configured (gh-2631)
* `filter.d/bitwarden.conf` enhanced to support syslog (gh-2778)
* introduced new prefix `{UNB}` for `datepattern` to disable word boundaries in regex;
* datetemplate: improved anchor detection for capturing groups `(^...)`;
* datepattern: improved handling with wrong recognized timestamps (timezones, no datepattern, etc)
as well as some warnings signaling user about invalid pattern or zone (gh-2814):
- filter gets mode in-operation, which gets activated if filter starts processing of new messages;
in this mode a timestamp read from log-line that appeared recently (not an old line), deviating too much
from now (up to 24h), will be considered as now (assuming a timezone issue), so could avoid unexpected
bypass of failure (previously exceeding `findtime`);
- better interaction with non-matching optional datepattern or invalid timestamps;
- implements special datepattern `{NONE}` - allow to find failures totally without date-time in log messages,
whereas filter will use now as timestamp (gh-2802)
* performance optimization of `datepattern` (better search algorithm in datedetector, especially for single template);
* fail2ban-client: extended to unban IP range(s) by subnet (CIDR/mask) or hostname (DNS), gh-2791;
* extended capturing of alternate tags in filter, allowing combine of multiple groups to single tuple token with new tag
prefix `<F-TUPLE_`, that would combine value of `<F-V>` with all value of `<F-TUPLE_V?_n?>` tags (gh-2755)
ver. 0.11.1 (2020/01/11) - this-is-the-way
-----------
### Fixes
* purge database will be executed now (within observer).
* restoring currently banned ip after service restart fixed


@@ -227,6 +227,8 @@ fail2ban/tests/clientreadertestcase.py
 fail2ban/tests/config/action.d/action.conf
 fail2ban/tests/config/action.d/brokenaction.conf
 fail2ban/tests/config/fail2ban.conf
+fail2ban/tests/config/filter.d/checklogtype.conf
+fail2ban/tests/config/filter.d/checklogtype_test.conf
 fail2ban/tests/config/filter.d/simple.conf
 fail2ban/tests/config/filter.d/test.conf
 fail2ban/tests/config/filter.d/test.local


@@ -21,14 +21,13 @@
 #
 # Example, for ssh bruteforce (in section [sshd] of `jail.local`):
 # action = %(known/action)s
-#          %(action_abuseipdb)s[abuseipdb_apikey="my-api-key", abuseipdb_category="18,22"]
+#          abuseipdb[abuseipdb_apikey="my-api-key", abuseipdb_category="18,22"]
 #
-# See below for catagories.
+# See below for categories.
 #
-# Original Ref: https://wiki.shaunc.com/wikka.php?wakka=ReportingToAbuseIPDBWithFail2Ban
 # Added to fail2ban by Andrew James Collett (ajcollett)
-## abuseIPDB Catagories, `the abuseipdb_category` MUST be set in the jail.conf action call.
+## abuseIPDB Categories, `the abuseipdb_category` MUST be set in the jail.conf action call.
 # Example, for ssh bruteforce: action = %(action_abuseipdb)s[abuseipdb_category="18,22"]
 # ID Title Description
 # 3 Fraud Orders


@@ -14,7 +14,10 @@
 # Notes.: command executed on demand at the first ban (or at the start of Fail2Ban if actionstart_on_demand is set to false).
 # Values: CMD
 #
-actionstart = ipfw show | fgrep -c -m 1 -s 'table(<table>)' > /dev/null 2>&1 || ( ipfw show | awk 'BEGIN { b = <lowest_rule_num> } { if ($1 < b) {} else if ($1 == b) { b = $1 + 1 } else { e = b } } END { if (e) exit e; else exit b }'; num=$?; ipfw -q add $num <blocktype> <block> from table\(<table>\) to me <port>; echo $num > "<startstatefile>" )
+actionstart = ipfw show | fgrep -c -m 1 -s 'table(<table>)' > /dev/null 2>&1 || (
+              num=$(ipfw show | awk 'BEGIN { b = <lowest_rule_num> } { if ($1 == b) { b = $1 + 1 } } END { print b }');
+              ipfw -q add "$num" <blocktype> <block> from table\(<table>\) to me <port>; echo "$num" > "<startstatefile>"
+              )
 # Option: actionstop
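The rewritten `actionstart` picks the first free ipfw rule number at or above `<lowest_rule_num>` by scanning the (ascending) output of `ipfw show` and bumping the candidate while it collides with an existing rule. A minimal reproduction with canned output and `100` substituted for the `<lowest_rule_num>` tag:

```shell
# Simulated `ipfw show` output; rule number is in column 1 and sorted ascending,
# which is what lets the single-pass awk scan work.
rules='00100 12 3456 allow ip from any to any via lo0
00101 0 0 deny ip from any to 127.0.0.0/8
65535 99 9999 allow ip from any to any'
num=$(printf '%s\n' "$rules" \
      | awk 'BEGIN { b = 100 } { if ($1 == b) { b = $1 + 1 } } END { print b }')
echo "$num"   # first free rule number at/above 100
```

Here rules 100 and 101 are taken, so the scan settles on 102.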


@@ -5,7 +5,7 @@
 #
 # Please set jail.local's permission to 640 because it contains your CF API key.
 #
-# This action depends on curl.
+# This action depends on curl (and optionally jq).
 # Referenced from http://www.normyee.net/blog/2012/02/02/adding-cloudflare-support-to-fail2ban by NORM YEE
 #
 # To get your CloudFlare API Key: https://www.cloudflare.com/a/account/my-account
@@ -43,9 +43,9 @@ actioncheck =
 # API v1
 #actionban = curl -s -o /dev/null https://www.cloudflare.com/api_json.html -d 'a=ban' -d 'tkn=<cftoken>' -d 'email=<cfuser>' -d 'key=<ip>'
 # API v4
-actionban = curl -s -o /dev/null -X POST -H 'X-Auth-Email: <cfuser>' -H 'X-Auth-Key: <cftoken>' \
-            -H 'Content-Type: application/json' -d '{ "mode": "block", "configuration": { "target": "ip", "value": "<ip>" } }' \
-            https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules
+actionban = curl -s -o /dev/null -X POST <_cf_api_prms> \
+            -d '{"mode":"block","configuration":{"target":"ip","value":"<ip>"},"notes":"Fail2Ban <name>"}' \
+            <_cf_api_url>
 # Option: actionunban
 # Notes.: command executed when unbanning an IP. Take care that the
@@ -58,9 +58,14 @@ actionban = curl -s -o /dev/null -X POST -H 'X-Auth-Email: <cfuser>' -H 'X-Auth-
 # API v1
 #actionunban = curl -s -o /dev/null https://www.cloudflare.com/api_json.html -d 'a=nul' -d 'tkn=<cftoken>' -d 'email=<cfuser>' -d 'key=<ip>'
 # API v4
-actionunban = curl -s -o /dev/null -X DELETE -H 'X-Auth-Email: <cfuser>' -H 'X-Auth-Key: <cftoken>' \
-              https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules/$(curl -s -X GET -H 'X-Auth-Email: <cfuser>' -H 'X-Auth-Key: <cftoken>' \
-              'https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules?mode=block&configuration_target=ip&configuration_value=<ip>&page=1&per_page=1' | cut -d'"' -f6)
+actionunban = id=$(curl -s -X GET <_cf_api_prms> \
+              "<_cf_api_url>?mode=block&configuration_target=ip&configuration_value=<ip>&page=1&per_page=1&notes=Fail2Ban%%20<name>" \
+              | { jq -r '.result[0].id' 2>/dev/null || tr -d '\n' | sed -nE 's/^.*"result"\s*:\s*\[\s*\{\s*"id"\s*:\s*"([^"]+)".*$/\1/p'; })
+              if [ -z "$id" ]; then echo "<name>: id for <ip> cannot be found"; exit 0; fi;
+              curl -s -o /dev/null -X DELETE <_cf_api_prms> "<_cf_api_url>/$id"
+
+_cf_api_url = https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules
+_cf_api_prms = -H 'X-Auth-Email: <cfuser>' -H 'X-Auth-Key: <cftoken>' -H 'Content-Type: application/json'
 [Init]
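The new `actionunban` prefers real JSON parsing with `jq` and falls back to flattening the response and extracting the rule id with `sed` when `jq` is absent. The fallback in isolation, against an illustrative (abbreviated) API response; requires GNU sed for `-E` with `\s`:

```shell
# Abbreviated Cloudflare v4 list response (illustrative, not a real API reply):
resp='{
  "result": [ { "id": "92f17bae0", "notes": "Fail2Ban ssh" } ],
  "success": true
}'
# Flatten to one line, then pull the first "id" out of the "result" array:
id=$(printf '%s' "$resp" | tr -d '\n' \
     | sed -nE 's/^.*"result"\s*:\s*\[\s*\{\s*"id"\s*:\s*"([^"]+)".*$/\1/p')
echo "$id"
```

With `jq` installed the same value comes from `jq -r '.result[0].id'`; the regex fallback only covers the first element, which is enough here because the query asks for `per_page=1`.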


@@ -18,7 +18,7 @@ before = firewallcmd-common.conf
 [Definition]
-actionstart = ipset create <ipmset> hash:ip timeout <default-timeout><familyopt>
+actionstart = ipset create <ipmset> hash:ip timeout <default-ipsettime> <familyopt>
              firewall-cmd --direct --add-rule <family> filter <chain> 0 <actiontype> -m set --match-set <ipmset> src -j <blocktype>
 actionflush = ipset flush <ipmset>
@@ -27,9 +27,9 @@ actionstop = firewall-cmd --direct --remove-rule <family> filter <chain> 0 <acti
              <actionflush>
              ipset destroy <ipmset>
-actionban = ipset add <ipmset> <ip> timeout <bantime> -exist
-actionprolong = %(actionban)s
+actionban = ipset add <ipmset> <ip> timeout <ipsettime> -exist
+# actionprolong = %(actionban)s
 actionunban = ipset del <ipmset> <ip> -exist
@@ -42,11 +42,19 @@ actionunban = ipset del <ipmset> <ip> -exist
 #
 chain = INPUT_direct
-# Option: default-timeout
+# Option: default-ipsettime
 # Notes: specifies default timeout in seconds (handled default ipset timeout only)
-# Values: [ NUM ] Default: 600
-default-timeout = 600
+# Values: [ NUM ] Default: 0 (no timeout, managed by fail2ban by unban)
+default-ipsettime = 0
+# Option: ipsettime
+# Notes: specifies ticket timeout (handled ipset timeout only)
+# Values: [ NUM ] Default: 0 (managed by fail2ban by unban)
+ipsettime = 0
+# expression to calculate timeout from bantime, example:
+# banaction = %(known/banaction)s[ipsettime='<timeout-bantime>']
+timeout-bantime = $([ "<bantime>" -le 2147483 ] && echo "<bantime>" || echo 0)
 # Option: actiontype
 # Notes.: defines additions to the blocking rule
@@ -63,7 +71,7 @@ allports = -p <protocol>
 # Option: multiport
 # Notes.: addition to block access only to specific ports
 # Usage.: use in jail config: banaction = firewallcmd-ipset[actiontype=<multiport>]
-multiport = -p <protocol> -m multiport --dports <port>
+multiport = -p <protocol> -m multiport --dports "$(echo '<port>' | sed s/:/-/g)"
 ipmset = f2b-<name>
 familyopt =
@@ -71,7 +79,7 @@ familyopt =
 [Init?family=inet6]
 ipmset = f2b-<name>6
-familyopt = <sp>family inet6
+familyopt = family inet6
 # DEV NOTES:
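The new `multiport` definition normalizes port ranges before passing them to `--dports`, replacing `:` with `-` (the port-range-selector fix from gh-2821 in the changelog). The substitution itself, with an example value standing in for the `<port>` tag:

```shell
# Example value of the <port> tag; single ports and a colon-delimited range:
port='80,443,8000:8010'
# Same transformation the action performs inline via $(echo '<port>' | sed s/:/-/g):
dports=$(echo "$port" | sed s/:/-/g)
echo "$dports"
```

Single ports pass through untouched; only range delimiters are rewritten.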


@@ -11,9 +11,9 @@ before = firewallcmd-common.conf
 actionstart = firewall-cmd --direct --add-chain <family> filter f2b-<name>
               firewall-cmd --direct --add-rule <family> filter f2b-<name> 1000 -j RETURN
-              firewall-cmd --direct --add-rule <family> filter <chain> 0 -m conntrack --ctstate NEW -p <protocol> -m multiport --dports <port> -j f2b-<name>
+              firewall-cmd --direct --add-rule <family> filter <chain> 0 -m conntrack --ctstate NEW -p <protocol> -m multiport --dports "$(echo '<port>' | sed s/:/-/g)" -j f2b-<name>
-actionstop = firewall-cmd --direct --remove-rule <family> filter <chain> 0 -m conntrack --ctstate NEW -p <protocol> -m multiport --dports <port> -j f2b-<name>
+actionstop = firewall-cmd --direct --remove-rule <family> filter <chain> 0 -m conntrack --ctstate NEW -p <protocol> -m multiport --dports "$(echo '<port>' | sed s/:/-/g)" -j f2b-<name>
              firewall-cmd --direct --remove-rules <family> filter f2b-<name>
              firewall-cmd --direct --remove-chain <family> filter f2b-<name>


@@ -10,9 +10,9 @@ before = firewallcmd-common.conf
 actionstart = firewall-cmd --direct --add-chain <family> filter f2b-<name>
               firewall-cmd --direct --add-rule <family> filter f2b-<name> 1000 -j RETURN
-              firewall-cmd --direct --add-rule <family> filter <chain> 0 -m state --state NEW -p <protocol> -m multiport --dports <port> -j f2b-<name>
+              firewall-cmd --direct --add-rule <family> filter <chain> 0 -m state --state NEW -p <protocol> -m multiport --dports "$(echo '<port>' | sed s/:/-/g)" -j f2b-<name>
-actionstop = firewall-cmd --direct --remove-rule <family> filter <chain> 0 -m state --state NEW -p <protocol> -m multiport --dports <port> -j f2b-<name>
+actionstop = firewall-cmd --direct --remove-rule <family> filter <chain> 0 -m state --state NEW -p <protocol> -m multiport --dports "$(echo '<port>' | sed s/:/-/g)" -j f2b-<name>
              firewall-cmd --direct --remove-rules <family> filter f2b-<name>
              firewall-cmd --direct --remove-chain <family> filter f2b-<name>


@@ -1,6 +1,6 @@
 # Fail2Ban configuration file
 #
-# Author: Donald Yandt
+# Authors: Donald Yandt, Sergey G. Brester
 #
 # Because of the rich rule commands requires firewalld-0.3.1+
 # This action uses firewalld rich-rules which gives you a cleaner iptables since it stores rules according to zones and not
@@ -10,36 +10,15 @@
 #
 # If you use the --permanent rule you get a xml file in /etc/firewalld/zones/<zone>.xml that can be shared and parsed easily
 #
-# Example commands to view rules:
-# firewall-cmd [--zone=<zone>] --list-rich-rules
-# firewall-cmd [--zone=<zone>] --list-all
-# firewall-cmd [--zone=zone] --query-rich-rule='rule'
+# This is a derivative of firewallcmd-rich-rules.conf, see there for details and other parameters.
 [INCLUDES]
-before = firewallcmd-common.conf
+before = firewallcmd-rich-rules.conf
 [Definition]
-actionstart =
-actionstop =
-actioncheck =
-# you can also use zones and/or service names.
-#
-# zone example:
-# firewall-cmd --zone=<zone> --add-rich-rule="rule family='<family>' source address='<ip>' port port='<port>' protocol='<protocol>' log prefix='f2b-<name>' level='<level>' limit value='<rate>/m' <rich-blocktype>"
-#
-# service name example:
-# firewall-cmd --zone=<zone> --add-rich-rule="rule family='<family>' source address='<ip>' service name='<service>' log prefix='f2b-<name>' level='<level>' limit value='<rate>/m' <rich-blocktype>"
-#
-# Because rich rules can only handle single or a range of ports we must split ports and execute the command for each port. Ports can be single and ranges separated by a comma or space for an example: http, https, 22-60, 18 smtp
-actionban = ports="<port>"; for p in $(echo $ports | tr ", " " "); do firewall-cmd --add-rich-rule="rule family='<family>' source address='<ip>' port port='$p' protocol='<protocol>' log prefix='f2b-<name>' level='<level>' limit value='<rate>/m' <rich-blocktype>"; done
-actionunban = ports="<port>"; for p in $(echo $ports | tr ", " " "); do firewall-cmd --remove-rich-rule="rule family='<family>' source address='<ip>' port port='$p' protocol='<protocol>' log prefix='f2b-<name>' level='<level>' limit value='<rate>/m' <rich-blocktype>"; done
+rich-suffix = log prefix='f2b-<name>' level='<level>' limit value='<rate>/m' <rich-blocktype>
 [Init]
@@ -48,4 +27,3 @@ level = info
 # log rate per minute
 rate = 1


@@ -35,8 +35,10 @@ actioncheck =
 #
 # Because rich rules can only handle single or a range of ports we must split ports and execute the command for each port. Ports can be single and ranges separated by a comma or space for an example: http, https, 22-60, 18 smtp
-actionban = ports="<port>"; for p in $(echo $ports | tr ", " " "); do firewall-cmd --add-rich-rule="rule family='<family>' source address='<ip>' port port='$p' protocol='<protocol>' <rich-blocktype>"; done
-actionunban = ports="<port>"; for p in $(echo $ports | tr ", " " "); do firewall-cmd --remove-rich-rule="rule family='<family>' source address='<ip>' port port='$p' protocol='<protocol>' <rich-blocktype>"; done
+fwcmd_rich_rule = rule family='<family>' source address='<ip>' port port='$p' protocol='<protocol>' %(rich-suffix)s
+actionban = ports="$(echo '<port>' | sed s/:/-/g)"; for p in $(echo $ports | tr ", " " "); do firewall-cmd --add-rich-rule="%(fwcmd_rich_rule)s"; done
+actionunban = ports="$(echo '<port>' | sed s/:/-/g)"; for p in $(echo $ports | tr ", " " "); do firewall-cmd --remove-rich-rule="%(fwcmd_rich_rule)s"; done
+rich-suffix = <rich-blocktype>
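Since a rich rule carries only a single port or range, the action splits the comma/space separated `<port>` list (after the `:` to `-` normalization) and issues one `firewall-cmd` per entry. The splitting in isolation, with `echo` standing in for `firewall-cmd`:

```shell
# Example <port> value mixing service names, single ports and a range:
ports="$(echo 'http, https, 22:60' | sed s/:/-/g)"
# tr turns commas into spaces; unquoted expansion then word-splits the list:
for p in $(echo $ports | tr ", " " "); do
  echo "add rich rule for port '$p'"
done
```

Each iteration would invoke `firewall-cmd --add-rich-rule=...` with `$p` interpolated into the rule, which is why `fwcmd_rich_rule` above references `$p` rather than `<port>`.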


@@ -26,7 +26,7 @@ before = iptables-common.conf
 # Notes.: command executed on demand at the first ban (or at the start of Fail2Ban if actionstart_on_demand is set to false).
 # Values: CMD
 #
-actionstart = ipset create <ipmset> hash:ip timeout <default-timeout><familyopt>
+actionstart = ipset create <ipmset> hash:ip timeout <default-ipsettime> <familyopt>
              <iptables> -I <chain> -m set --match-set <ipmset> src -j <blocktype>
 # Option: actionflush
@@ -49,9 +49,9 @@ actionstop = <iptables> -D <chain> -m set --match-set <ipmset> src -j <blocktype
 # Tags: See jail.conf(5) man page
 # Values: CMD
 #
-actionban = ipset add <ipmset> <ip> timeout <bantime> -exist
-actionprolong = %(actionban)s
+actionban = ipset add <ipmset> <ip> timeout <ipsettime> -exist
+# actionprolong = %(actionban)s
 # Option: actionunban
 # Notes.: command executed when unbanning an IP. Take care that the
@@ -63,11 +63,19 @@ actionunban = ipset del <ipmset> <ip> -exist
 [Init]
-# Option: default-timeout
+# Option: default-ipsettime
 # Notes: specifies default timeout in seconds (handled default ipset timeout only)
-# Values: [ NUM ] Default: 600
-default-timeout = 600
+# Values: [ NUM ] Default: 0 (no timeout, managed by fail2ban by unban)
+default-ipsettime = 0
+# Option: ipsettime
+# Notes: specifies ticket timeout (handled ipset timeout only)
+# Values: [ NUM ] Default: 0 (managed by fail2ban by unban)
+ipsettime = 0
+# expression to calculate timeout from bantime, example:
+# banaction = %(known/banaction)s[ipsettime='<timeout-bantime>']
+timeout-bantime = $([ "<bantime>" -le 2147483 ] && echo "<bantime>" || echo 0)
 ipmset = f2b-<name>
 familyopt =
@@ -76,4 +84,4 @@ familyopt =
 [Init?family=inet6]
 ipmset = f2b-<name>6
-familyopt = <sp>family inet6
+familyopt = family inet6


@@ -26,7 +26,7 @@ before = iptables-common.conf
 # Notes.: command executed on demand at the first ban (or at the start of Fail2Ban if actionstart_on_demand is set to false).
 # Values: CMD
 #
-actionstart = ipset create <ipmset> hash:ip timeout <default-timeout><familyopt>
+actionstart = ipset create <ipmset> hash:ip timeout <default-ipsettime> <familyopt>
              <iptables> -I <chain> -p <protocol> -m multiport --dports <port> -m set --match-set <ipmset> src -j <blocktype>
 # Option: actionflush
@@ -49,9 +49,9 @@ actionstop = <iptables> -D <chain> -p <protocol> -m multiport --dports <port> -m
 # Tags: See jail.conf(5) man page
 # Values: CMD
 #
-actionban = ipset add <ipmset> <ip> timeout <bantime> -exist
-actionprolong = %(actionban)s
+actionban = ipset add <ipmset> <ip> timeout <ipsettime> -exist
+# actionprolong = %(actionban)s
 # Option: actionunban
 # Notes.: command executed when unbanning an IP. Take care that the
@@ -63,11 +63,19 @@ actionunban = ipset del <ipmset> <ip> -exist
 [Init]
-# Option: default-timeout
+# Option: default-ipsettime
 # Notes: specifies default timeout in seconds (handled default ipset timeout only)
-# Values: [ NUM ] Default: 600
-default-timeout = 600
+# Values: [ NUM ] Default: 0 (no timeout, managed by fail2ban by unban)
+default-ipsettime = 0
+# Option: ipsettime
+# Notes: specifies ticket timeout (handled ipset timeout only)
+# Values: [ NUM ] Default: 0 (managed by fail2ban by unban)
+ipsettime = 0
+# expression to calculate timeout from bantime, example:
+# banaction = %(known/banaction)s[ipsettime='<timeout-bantime>']
+timeout-bantime = $([ "<bantime>" -le 2147483 ] && echo "<bantime>" || echo 0)
 ipmset = f2b-<name>
 familyopt =
@@ -76,4 +84,4 @@ familyopt =
 [Init?family=inet6]
 ipmset = f2b-<name>6
-familyopt = <sp>family inet6
+familyopt = family inet6


@@ -34,7 +34,7 @@ type = multiport
 rule_match-custom =
 rule_match-allports = meta l4proto \{ <protocol> \}
-rule_match-multiport = $proto dport \{ <port> \}
+rule_match-multiport = $proto dport \{ $(echo '<port>' | sed s/:/-/g) \}
 match = <rule_match-<type>>
 # Option: rule_stat


@@ -103,6 +103,8 @@ actionstop = %(actionflush)s
 actioncheck =
-actionban = echo "\\\\<fid> 1;" >> '%(blck_lst_file)s'; %(blck_lst_reload)s
-actionunban = id=$(echo "<fid>" | sed -e 's/[]\/$*.^|[]/\\&/g'); sed -i "/^\\\\$id 1;$/d" %(blck_lst_file)s; %(blck_lst_reload)s
+_echo_blck_row = printf '\%%s 1;\n' "<fid>"
+actionban = %(_echo_blck_row)s >> '%(blck_lst_file)s'; %(blck_lst_reload)s
+actionunban = id=$(%(_echo_blck_row)s | sed -e 's/[]\/$*.^|[]/\\&/g'); sed -i "/^$id$/d" %(blck_lst_file)s; %(blck_lst_reload)s
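`actionunban` must delete exactly the row `actionban` appended, so the row is regex-escaped before being used as a `sed` address: every character that is special in a basic regular expression gets a backslash prefix. The escaping step in isolation, on an illustrative id containing `.`, `$` and `*`:

```shell
# Illustrative <fid> value with regex metacharacters:
fid='abc.example$key*'
# Same bracket expression as in the action: escapes ] \ / $ * . ^ | [
id=$(printf '%s' "$fid" | sed -e 's/[]\/$*.^|[]/\\&/g')
echo "$id"
```

The escaped value can then be interpolated into `sed -i "/^$id$/d" ...` without `.` or `*` matching unintended lines.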


@ -51,7 +51,7 @@
# Values: CMD # Values: CMD
# #
actionstart = if ! ipset -quiet -name list f2b-<name> >/dev/null; actionstart = if ! ipset -quiet -name list f2b-<name> >/dev/null;
then ipset -quiet -exist create f2b-<name> hash:ip timeout <default-timeout>; then ipset -quiet -exist create f2b-<name> hash:ip timeout <default-ipsettime>;
fi fi
# Option: actionstop # Option: actionstop
@ -66,9 +66,9 @@ actionstop = ipset flush f2b-<name>
# Tags: See jail.conf(5) man page # Tags: See jail.conf(5) man page
# Values: CMD # Values: CMD
# #
actionban = ipset add f2b-<name> <ip> timeout <bantime> -exist actionban = ipset add f2b-<name> <ip> timeout <ipsettime> -exist
actionprolong = %(actionban)s # actionprolong = %(actionban)s
# Option: actionunban # Option: actionunban
# Notes.: command executed when unbanning an IP. Take care that the # Notes.: command executed when unbanning an IP. Take care that the
@ -78,8 +78,16 @@ actionprolong = %(actionban)s
# #
actionunban = ipset del f2b-<name> <ip> -exist actionunban = ipset del f2b-<name> <ip> -exist
# Option: default-timeout # Option: default-ipsettime
# Notes: specifies default timeout in seconds (handled default ipset timeout only) # Notes: specifies default timeout in seconds (handled default ipset timeout only)
# Values: [ NUM ] Default: 600 # Values: [ NUM ] Default: 0 (no timeout, managed by fail2ban by unban)
default-ipsettime = 0
default-timeout = 600 # Option: ipsettime
# Notes: specifies ticket timeout (handled ipset timeout only)
# Values: [ NUM ] Default: 0 (managed by fail2ban by unban)
ipsettime = 0
# expresion to caclulate timeout from bantime, example:
# banaction = %(known/banaction)s[ipsettime='<timeout-bantime>']
timeout-bantime = $([ "<bantime>" -le 2147483 ] && echo "<bantime>" || echo 0)
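The `timeout-bantime` shell expression clamps the ipset entry timeout: values above 2147483 seconds are treated as too large for ipset, so the expression falls back to 0 and fail2ban removes the entry itself on unban. A minimal Python sketch of that logic:

```python
IPSET_MAX_TIMEOUT = 2147483  # seconds; the upper bound used by the shell expression above

def ipset_time(bantime: int) -> int:
    """Mirror of the timeout-bantime expression: pass bantime through as
    the ipset entry timeout when it fits, otherwise return 0 (no ipset
    timeout; fail2ban removes the entry itself on unban)."""
    return bantime if bantime <= IPSET_MAX_TIMEOUT else 0

print(ipset_time(600))         # ordinary bantime passes through -> 600
print(ipset_time(86400 * 30))  # a 30-day ban exceeds the limit -> 0
```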


@ -19,7 +19,7 @@
# NOTICE # NOTICE
# INFO # INFO
# DEBUG # DEBUG
# Values: [ LEVEL ] Default: ERROR # Values: [ LEVEL ] Default: INFO
# #
loglevel = INFO loglevel = INFO


@ -2,5 +2,12 @@
# Detecting failed login attempts # Detecting failed login attempts
# Logged in bwdata/logs/identity/Identity/log.txt # Logged in bwdata/logs/identity/Identity/log.txt
[INCLUDES]
before = common.conf
[Definition] [Definition]
failregex = ^\s*\[WRN\]\s+Failed login attempt(?:, 2FA invalid)?\. <HOST>$ _daemon = Bitwarden-Identity
failregex = ^%(__prefix_line)s\s*\[(?:W(?:RN|arning)|Bit\.Core\.[^\]]+)\]\s+Failed login attempt(?:, 2FA invalid)?\. <ADDR>$
# DEV Notes:
# __prefix_line can result in an empty string, so it can support syslog and non-syslog at once.


@ -25,7 +25,7 @@ __pid_re = (?:\[\d+\])
# Daemon name (with optional source_file:line or whatever) # Daemon name (with optional source_file:line or whatever)
# EXAMPLES: pam_rhosts_auth, [sshd], pop(pam_unix) # EXAMPLES: pam_rhosts_auth, [sshd], pop(pam_unix)
__daemon_re = [\[\(]?%(_daemon)s(?:\(\S+\))?[\]\)]?:? __daemon_re = [\[\(]?<_daemon>(?:\(\S+\))?[\]\)]?:?
# extra daemon info # extra daemon info
# EXAMPLE: [ID 800047 auth.info] # EXAMPLE: [ID 800047 auth.info]
@ -33,7 +33,7 @@ __daemon_extra_re = \[ID \d+ \S+\]
# Combinations of daemon name and PID # Combinations of daemon name and PID
# EXAMPLES: sshd[31607], pop(pam_unix)[4920] # EXAMPLES: sshd[31607], pop(pam_unix)[4920]
__daemon_combs_re = (?:%(__pid_re)s?:\s+%(__daemon_re)s|%(__daemon_re)s%(__pid_re)s?:?) __daemon_combs_re = (?:<__pid_re>?:\s+<__daemon_re>|<__daemon_re><__pid_re>?:?)
# Some messages have a kernel prefix with a timestamp # Some messages have a kernel prefix with a timestamp
# EXAMPLES: kernel: [769570.846956] # EXAMPLES: kernel: [769570.846956]
@ -69,12 +69,12 @@ datepattern = <lt_<logtype>/datepattern>
[lt_file] [lt_file]
# Common line prefixes for logtype "file": # Common line prefixes for logtype "file":
__prefix_line = %(__date_ambit)s?\s*(?:%(__bsd_syslog_verbose)s\s+)?(?:%(__hostname)s\s+)?(?:%(__kernel_prefix)s\s+)?(?:%(__vserver)s\s+)?(?:%(__daemon_combs_re)s\s+)?(?:%(__daemon_extra_re)s\s+)? __prefix_line = <__date_ambit>?\s*(?:<__bsd_syslog_verbose>\s+)?(?:<__hostname>\s+)?(?:<__kernel_prefix>\s+)?(?:<__vserver>\s+)?(?:<__daemon_combs_re>\s+)?(?:<__daemon_extra_re>\s+)?
datepattern = {^LN-BEG} datepattern = {^LN-BEG}
[lt_short] [lt_short]
# Common (short) line prefix for logtype "journal" (corresponds to the output of formatJournalEntry): __prefix_line = \s*(?:<__hostname>\s+)?(?:<_daemon><__pid_re>?:?\s+)?(?:<__kernel_prefix>\s+)?
__prefix_line = \s*(?:%(__hostname)s\s+)?(?:%(_daemon)s%(__pid_re)s?:?\s+)?(?:%(__kernel_prefix)s\s+)? __prefix_line = \s*(?:<__hostname>\s+)?(?:<_daemon><__pid_re>?:?\s+)?(?:<__kernel_prefix>\s+)?
datepattern = %(lt_file/datepattern)s datepattern = %(lt_file/datepattern)s
[lt_journal] [lt_journal]
__prefix_line = %(lt_short/__prefix_line)s __prefix_line = %(lt_short/__prefix_line)s


@ -12,7 +12,7 @@ before = common.conf
_daemon = courieresmtpd _daemon = courieresmtpd
prefregex = ^%(__prefix_line)serror,relay=<HOST>,<F-CONTENT>.+</F-CONTENT>$ prefregex = ^%(__prefix_line)serror,relay=<HOST>,(?:port=\d+,)?<F-CONTENT>.+</F-CONTENT>$
failregex = ^[^:]*: 550 User (<.*> )?unknown\.?$ failregex = ^[^:]*: 550 User (<.*> )?unknown\.?$
^msg="535 Authentication failed\.",cmd:( AUTH \S+)?( [0-9a-zA-Z\+/=]+)?(?: \S+)$ ^msg="535 Authentication failed\.",cmd:( AUTH \S+)?( [0-9a-zA-Z\+/=]+)?(?: \S+)$


@ -0,0 +1,6 @@
# Fail2Ban filter for Gitlab
# Detecting unauthorized access to the Gitlab Web portal
# typically logged in /var/log/gitlab/gitlab-rails/application.log
[Definition]
failregex = ^: Failed Login: username=<F-USER>.+</F-USER> ip=<HOST>$
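For reference, the gitlab failregex above can be approximated with plain `re`: `<F-USER>…</F-USER>` becomes a named group and `<HOST>` is simplified here to an IPv4 pattern (fail2ban's real `<HOST>` tag also handles IPv6 and hostnames). The sample log line is illustrative only:

```python
import re

# Simplified re-translation of the fail2ban filter pattern.
failregex = re.compile(
    r'^: Failed Login: username=(?P<user>.+) ip=(?P<host>\d{1,3}(?:\.\d{1,3}){3})$'
)

# Hypothetical log line, for illustration only:
line = ': Failed Login: username=admin ip=192.0.2.1'
m = failregex.match(line)
print(m.group('user'), m.group('host'))
```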


@ -0,0 +1,9 @@
# Fail2Ban filter for Grafana
# Detecting unauthorized access
# Typically logged in /var/log/grafana/grafana.log
[Init]
datepattern = ^t=%%Y-%%m-%%dT%%H:%%M:%%S%%z
[Definition]
failregex = ^(?: lvl=err?or)? msg="Invalid username or password"(?: uname=(?:"<F-ALT_USER>[^"]+</F-ALT_USER>"|<F-USER>\S+</F-USER>)| error="<F-ERROR>[^"]+</F-ERROR>"| \S+=(?:\S*|"[^"]+"))* remote_addr=<ADDR>$


@ -5,21 +5,47 @@
[Definition] [Definition]
# Option: failregex logging = catalina
# Notes.: regex to match the password failures messages in the logfile. failregex = <L_<logging>/failregex>
# Values: TEXT maxlines = <L_<logging>/maxlines>
# datepattern = <L_<logging>/datepattern>
[L_catalina]
failregex = ^.*\nWARNING: Authentication attempt from <HOST> for user "[^"]*" failed\.$ failregex = ^.*\nWARNING: Authentication attempt from <HOST> for user "[^"]*" failed\.$
# Option: ignoreregex
# Notes.: regex to ignore. If this regex matches, the line is ignored.
# Values: TEXT
#
ignoreregex =
# "maxlines" is number of log lines to buffer for multi-line regex searches
maxlines = 2 maxlines = 2
datepattern = ^%%b %%d, %%ExY %%I:%%M:%%S %%p datepattern = ^%%b %%d, %%ExY %%I:%%M:%%S %%p
^WARNING:()** ^WARNING:()**
{^LN-BEG} {^LN-BEG}
[L_webapp]
failregex = ^ \[\S+\] WARN \S+ - Authentication attempt from <HOST> for user "<F-USER>[^"]+</F-USER>" failed.
maxlines = 1
datepattern = ^%%H:%%M:%%S.%%f
# DEV Notes:
#
# failregex is based on the default pattern given in the Guacamole documentation: # failregex is based on the default pattern given in the Guacamole documentation:
# https://guacamole.apache.org/doc/gug/configuring-guacamole.html#webapp-logging
#
# The following logback.xml Guacamole configuration file can then be used accordingly: # The following logback.xml Guacamole configuration file can then be used accordingly:
# <configuration>
# <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
# <file>/var/log/guacamole.log</file>
# <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
# <fileNamePattern>/var/log/guacamole.%d.log.gz</fileNamePattern>
# <maxHistory>32</maxHistory>
# </rollingPolicy>
# <encoder>
# <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
# </encoder>
# </appender>
# <root level="info">
# <appender-ref ref="FILE" />
# </root>
# </configuration>
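The catalina `datepattern` above uses ini-escaped `%%` tokens; once unescaped, they map closely onto Python `strptime` directives (fail2ban's `%ExY` extended-year token is approximated here by plain `%Y`). A quick sketch of parsing a Tomcat-style timestamp:

```python
from datetime import datetime

# "%b %d, %ExY %I:%M:%S %p" approximated with standard strptime codes.
fmt = '%b %d, %Y %I:%M:%S %p'
ts = datetime.strptime('Jul 14, 2015 8:15:28 AM', fmt)
print(ts.isoformat())  # -> 2015-07-14T08:15:28
```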


@ -8,13 +8,17 @@
# common.local # common.local
before = common.conf before = common.conf
# [DEFAULT]
# logtype = short
[Definition] [Definition]
_daemon = monit _daemon = monit
_prefix = Warning|HttpRequest
# Regexp for previous (accessing monit httpd) and new (access denied) versions # Regexp for previous (accessing monit httpd) and new (access denied) versions
failregex = ^\[\s*\]\s*error\s*:\s*Warning:\s+Client '<HOST>' supplied (?:unknown user '[^']+'|wrong password for user '[^']*') accessing monit httpd$ failregex = ^%(__prefix_line)s(?:error\s*:\s+)?(?:%(_prefix)s):\s+(?:access denied\s+--\s+)?[Cc]lient '?<HOST>'?(?:\s+supplied|\s*:)\s+(?:unknown user '<F-ALT_USER>[^']+</F-ALT_USER>'|wrong password for user '<F-USER>[^']*</F-USER>'|empty password)
^%(__prefix_line)s\w+: access denied -- client <HOST>: (?:unknown user '[^']+'|wrong password for user '[^']*'|empty password)$
# Ignore login with empty user (first connect, no user specified) # Ignore login with empty user (first connect, no user specified)
# ignoreregex = %(__prefix_line)s\w+: access denied -- client <HOST>: (?:unknown user '') # ignoreregex = %(__prefix_line)s\w+: access denied -- client <HOST>: (?:unknown user '')


@ -17,7 +17,7 @@ before = common.conf
_daemon = mysqld _daemon = mysqld
failregex = ^%(__prefix_line)s(?:(?:\d{6}|\d{4}-\d{2}-\d{2})[ T]\s?\d{1,2}:\d{2}:\d{2} )?(?:\d+ )?\[\w+\] (?:\[[^\]]+\] )*Access denied for user '[^']+'@'<HOST>' (to database '[^']*'|\(using password: (YES|NO)\))*\s*$ failregex = ^%(__prefix_line)s(?:(?:\d{6}|\d{4}-\d{2}-\d{2})[ T]\s?\d{1,2}:\d{2}:\d{2} )?(?:\d+ )?\[\w+\] (?:\[[^\]]+\] )*Access denied for user '<F-USER>[^']+</F-USER>'@'<HOST>' (to database '[^']*'|\(using password: (YES|NO)\))*\s*$
ignoreregex = ignoreregex =


@ -37,7 +37,7 @@ mdre-rbl = ^RCPT from [^[]*\[<HOST>\]%(_port)s: [45]54 [45]\.7\.1 Service unava
mdpr-more = %(mdpr-normal)s mdpr-more = %(mdpr-normal)s
mdre-more = %(mdre-normal)s mdre-more = %(mdre-normal)s
mdpr-ddos = lost connection after(?! DATA) [A-Z]+ mdpr-ddos = (?:lost connection after(?! DATA) [A-Z]+|disconnect(?= from \S+(?: \S+=\d+)* auth=0/(?:[1-9]|\d\d+)))
mdre-ddos = ^from [^[]*\[<HOST>\]%(_port)s:? mdre-ddos = ^from [^[]*\[<HOST>\]%(_port)s:?
mdpr-extra = (?:%(mdpr-auth)s|%(mdpr-normal)s) mdpr-extra = (?:%(mdpr-auth)s|%(mdpr-normal)s)


@ -14,16 +14,15 @@ before = common.conf
_daemon = proftpd _daemon = proftpd
__suffix_failed_login = (User not authorized for login|No such user found|Incorrect password|Password expired|Account disabled|Invalid shell: '\S+'|User in \S+|Limit (access|configuration) denies login|Not a UserAlias|maximum login length exceeded).? __suffix_failed_login = ([uU]ser not authorized for login|[nN]o such user found|[iI]ncorrect password|[pP]assword expired|[aA]ccount disabled|[iI]nvalid shell: '\S+'|[uU]ser in \S+|[lL]imit (access|configuration) denies login|[nN]ot a UserAlias|[mM]aximum login length exceeded)
prefregex = ^%(__prefix_line)s%(__hostname)s \(\S+\[<HOST>\]\)[: -]+ <F-CONTENT>(?:USER|SECURITY|Maximum).+</F-CONTENT>$ prefregex = ^%(__prefix_line)s%(__hostname)s \(\S+\[<HOST>\]\)[: -]+ <F-CONTENT>(?:USER|SECURITY|Maximum) .+</F-CONTENT>$
failregex = ^USER .*: no such user found from \S+ \[\S+\] to \S+:\S+ *$ failregex = ^USER <F-USER>\S+|.*?</F-USER>(?: \(Login failed\))?: %(__suffix_failed_login)s
^USER .* \(Login failed\): %(__suffix_failed_login)s\s*$ ^SECURITY VIOLATION: <F-USER>\S+|.*?</F-USER> login attempted
^SECURITY VIOLATION: .* login attempted\. *$ ^Maximum login attempts \(\d+\) exceeded
^Maximum login attempts \(\d+\) exceeded *$
ignoreregex = ignoreregex =


@ -8,11 +8,14 @@ before = common.conf
[Definition] [Definition]
_daemon = (?:sendmail|sm-(?:mta|acceptingconnections)) _daemon = (?:sendmail|sm-(?:mta|acceptingconnections))
# "\w{14,20}" will give support for IDs from 14 up to 20 characters long
__prefix_line = %(known/__prefix_line)s(?:\w{14,20}: )? __prefix_line = %(known/__prefix_line)s(?:\w{14,20}: )?
addr = (?:IPv6:<IP6>|<IP4>)
# "w{14,20}" will give support for IDs from 14 up to 20 characters long prefregex = ^<F-MLFID>%(__prefix_line)s</F-MLFID><F-CONTENT>.+</F-CONTENT>$
failregex = ^%(__prefix_line)s(\S+ )?\[(?:IPv6:<IP6>|<IP4>)\]( \(may be forged\))?: possible SMTP attack: command=AUTH, count=\d+$
failregex = ^(\S+ )?\[%(addr)s\]( \(may be forged\))?: possible SMTP attack: command=AUTH, count=\d+$
^AUTH failure \(LOGIN\):(?: [^:]+:)? authentication failure: checkpass failed, user=<F-USER>(?:\S+|.*?)</F-USER>, relay=(?:\S+ )?\[%(addr)s\](?: \(may be forged\))?$
ignoreregex = ignoreregex =
journalmatch = _SYSTEMD_UNIT=sendmail.service journalmatch = _SYSTEMD_UNIT=sendmail.service


@ -21,19 +21,20 @@ before = common.conf
_daemon = (?:(sm-(mta|acceptingconnections)|sendmail)) _daemon = (?:(sm-(mta|acceptingconnections)|sendmail))
__prefix_line = %(known/__prefix_line)s(?:\w{14,20}: )? __prefix_line = %(known/__prefix_line)s(?:\w{14,20}: )?
addr = (?:IPv6:<IP6>|<IP4>)
prefregex = ^<F-MLFID>%(__prefix_line)s</F-MLFID><F-CONTENT>.+</F-CONTENT>$ prefregex = ^<F-MLFID>%(__prefix_line)s</F-MLFID><F-CONTENT>.+</F-CONTENT>$
cmnfailre = ^ruleset=check_rcpt, arg1=(?P<email><\S+@\S+>), relay=(\S+ )?\[(?:IPv6:<IP6>|<IP4>)\](?: \(may be forged\))?, reject=(550 5\.7\.1 (?P=email)\.\.\. Relaying denied\. (IP name possibly forged \[(\d+\.){3}\d+\]|Proper authentication required\.|IP name lookup failed \[(\d+\.){3}\d+\])|553 5\.1\.8 (?P=email)\.\.\. Domain of sender address \S+ does not exist|550 5\.[71]\.1 (?P=email)\.\.\. (Rejected: .*|User unknown))$ cmnfailre = ^ruleset=check_rcpt, arg1=(?P<email><\S+@\S+>), relay=(\S+ )?\[%(addr)s\](?: \(may be forged\))?, reject=(550 5\.7\.1 (?P=email)\.\.\. Relaying denied\. (IP name possibly forged \[(\d+\.){3}\d+\]|Proper authentication required\.|IP name lookup failed \[(\d+\.){3}\d+\])|553 5\.1\.8 (?P=email)\.\.\. Domain of sender address \S+ does not exist|550 5\.[71]\.1 (?P=email)\.\.\. (Rejected: .*|User unknown))$
^ruleset=check_relay, arg1=(?P<dom>\S+), arg2=(?:IPv6:<IP6>|<IP4>), relay=((?P=dom) )?\[(\d+\.){3}\d+\](?: \(may be forged\))?, reject=421 4\.3\.2 (Connection rate limit exceeded\.|Too many open connections\.)$ ^ruleset=check_relay, arg1=(?P<dom>\S+), arg2=%(addr)s, relay=((?P=dom) )?\[(\d+\.){3}\d+\](?: \(may be forged\))?, reject=421 4\.3\.2 (Connection rate limit exceeded\.|Too many open connections\.)$
^rejecting commands from (\S* )?\[(?:IPv6:<IP6>|<IP4>)\] due to pre-greeting traffic after \d+ seconds$ ^rejecting commands from (\S* )?\[%(addr)s\] due to pre-greeting traffic after \d+ seconds$
^(?:\S+ )?\[(?:IPv6:<IP6>|<IP4>)\]: (?:(?i)expn|vrfy) \S+ \[rejected\]$ ^(?:\S+ )?\[%(addr)s\]: (?:(?i)expn|vrfy) \S+ \[rejected\]$
^<[^@]+@[^>]+>\.\.\. No such user here$ ^<[^@]+@[^>]+>\.\.\. No such user here$
^<F-NOFAIL>from=<[^@]+@[^>]+></F-NOFAIL>, size=\d+, class=\d+, nrcpts=\d+, bodytype=\w+, proto=E?SMTP, daemon=MTA, relay=\S+ \[(?:IPv6:<IP6>|<IP4>)\]$ ^<F-NOFAIL>from=<[^@]+@[^>]+></F-NOFAIL>, size=\d+, class=\d+, nrcpts=\d+, bodytype=\w+, proto=E?SMTP, daemon=MTA, relay=\S+ \[%(addr)s\]$
mdre-normal = mdre-normal =
mdre-extra = ^(?:\S+ )?\[(?:IPv6:<IP6>|<IP4>)\](?: \(may be forged\))? did not issue (?:[A-Z]{4}[/ ]?)+during connection to (?:TLS)?M(?:TA|S[PA])(?:-\w+)?$ mdre-extra = ^(?:\S+ )?\[%(addr)s\](?: \(may be forged\))? did not issue \S+ during connection
mdre-aggressive = %(mdre-extra)s mdre-aggressive = %(mdre-extra)s


@ -0,0 +1,9 @@
# Fail2Ban filter for SoftEtherVPN
# Detecting unauthorized access to SoftEtherVPN
# typically logged in /usr/local/vpnserver/security_log/*/sec.log, or in syslog, depending on configuration
[INCLUDES]
before = common.conf
[Definition]
failregex = ^%(__prefix_line)s(?:(?:\([\d\-]+ [\d:.]+\) )?<SECURITY_LOG>: )?Connection "[^"]+": User authentication failed. The user name that has been provided was "<F-USER>(?:[^"]+|.+)</F-USER>", from <ADDR>\.$


@ -25,7 +25,7 @@ __pref = (?:(?:error|fatal): (?:PAM: )?)?
__suff = (?: (?:port \d+|on \S+|\[preauth\])){0,3}\s* __suff = (?: (?:port \d+|on \S+|\[preauth\])){0,3}\s*
__on_port_opt = (?: (?:port \d+|on \S+)){0,2} __on_port_opt = (?: (?:port \d+|on \S+)){0,2}
# close by authenticating user: # close by authenticating user:
__authng_user = (?: (?:invalid|authenticating) user <F-USER>\S+|.+?</F-USER>)? __authng_user = (?: (?:invalid|authenticating) user <F-USER>\S+|.*?</F-USER>)?
# for all possible (also future) forms of "no matching (cipher|mac|MAC|compression method|key exchange method|host key type) found", # for all possible (also future) forms of "no matching (cipher|mac|MAC|compression method|key exchange method|host key type) found",
# see ssherr.c for all possible SSH_ERR_..._ALG_MATCH errors. # see ssherr.c for all possible SSH_ERR_..._ALG_MATCH errors.
@ -40,39 +40,45 @@ prefregex = ^<F-MLFID>%(__prefix_line)s</F-MLFID>%(__pref)s<F-CONTENT>.+</F-CONT
cmnfailre = ^[aA]uthentication (?:failure|error|failed) for <F-USER>.*</F-USER> from <HOST>( via \S+)?%(__suff)s$ cmnfailre = ^[aA]uthentication (?:failure|error|failed) for <F-USER>.*</F-USER> from <HOST>( via \S+)?%(__suff)s$
^User not known to the underlying authentication module for <F-USER>.*</F-USER> from <HOST>%(__suff)s$ ^User not known to the underlying authentication module for <F-USER>.*</F-USER> from <HOST>%(__suff)s$
^Failed publickey for invalid user <F-USER>(?P<cond_user>\S+)|(?:(?! from ).)*?</F-USER> from <HOST>%(__on_port_opt)s(?: ssh\d*)?(?(cond_user): |(?:(?:(?! from ).)*)$) <cmnfailre-failed-pub-<publickey>>
^Failed \b(?!publickey)\S+ for (?P<cond_inv>invalid user )?<F-USER>(?P<cond_user>\S+)|(?(cond_inv)(?:(?! from ).)*?|[^:]+)</F-USER> from <HOST>%(__on_port_opt)s(?: ssh\d*)?(?(cond_user): |(?:(?:(?! from ).)*)$) ^Failed <cmnfailed> for (?P<cond_inv>invalid user )?<F-USER>(?P<cond_user>\S+)|(?(cond_inv)(?:(?! from ).)*?|[^:]+)</F-USER> from <HOST>%(__on_port_opt)s(?: ssh\d*)?(?(cond_user): |(?:(?:(?! from ).)*)$)
^<F-USER>ROOT</F-USER> LOGIN REFUSED FROM <HOST> ^<F-USER>ROOT</F-USER> LOGIN REFUSED FROM <HOST>
^[iI](?:llegal|nvalid) user <F-USER>.*?</F-USER> from <HOST>%(__suff)s$ ^[iI](?:llegal|nvalid) user <F-USER>.*?</F-USER> from <HOST>%(__suff)s$
^User <F-USER>.+</F-USER> from <HOST> not allowed because not listed in AllowUsers%(__suff)s$ ^User <F-USER>\S+|.*?</F-USER> from <HOST> not allowed because not listed in AllowUsers%(__suff)s$
^User <F-USER>.+</F-USER> from <HOST> not allowed because listed in DenyUsers%(__suff)s$ ^User <F-USER>\S+|.*?</F-USER> from <HOST> not allowed because listed in DenyUsers%(__suff)s$
^User <F-USER>.+</F-USER> from <HOST> not allowed because not in any group%(__suff)s$ ^User <F-USER>\S+|.*?</F-USER> from <HOST> not allowed because not in any group%(__suff)s$
^refused connect from \S+ \(<HOST>\) ^refused connect from \S+ \(<HOST>\)
^Received <F-MLFFORGET>disconnect</F-MLFFORGET> from <HOST>%(__on_port_opt)s:\s*3: .*: Auth fail%(__suff)s$ ^Received <F-MLFFORGET>disconnect</F-MLFFORGET> from <HOST>%(__on_port_opt)s:\s*3: .*: Auth fail%(__suff)s$
^User <F-USER>.+</F-USER> from <HOST> not allowed because a group is listed in DenyGroups%(__suff)s$ ^User <F-USER>\S+|.*?</F-USER> from <HOST> not allowed because a group is listed in DenyGroups%(__suff)s$
^User <F-USER>.+</F-USER> from <HOST> not allowed because none of user's groups are listed in AllowGroups%(__suff)s$ ^User <F-USER>\S+|.*?</F-USER> from <HOST> not allowed because none of user's groups are listed in AllowGroups%(__suff)s$
^<F-NOFAIL>%(__pam_auth)s\(sshd:auth\):\s+authentication failure;</F-NOFAIL>(?:\s+(?:(?:logname|e?uid|tty)=\S*)){0,4}\s+ruser=<F-ALT_USER>\S*</F-ALT_USER>\s+rhost=<HOST>(?:\s+user=<F-USER>\S*</F-USER>)?%(__suff)s$ ^<F-NOFAIL>%(__pam_auth)s\(sshd:auth\):\s+authentication failure;</F-NOFAIL>(?:\s+(?:(?:logname|e?uid|tty)=\S*)){0,4}\s+ruser=<F-ALT_USER>\S*</F-ALT_USER>\s+rhost=<HOST>(?:\s+user=<F-USER>\S*</F-USER>)?%(__suff)s$
^(error: )?maximum authentication attempts exceeded for <F-USER>.*</F-USER> from <HOST>%(__on_port_opt)s(?: ssh\d*)?%(__suff)s$ ^maximum authentication attempts exceeded for <F-USER>.*</F-USER> from <HOST>%(__on_port_opt)s(?: ssh\d*)?%(__suff)s$
^User <F-USER>.+</F-USER> not allowed because account is locked%(__suff)s ^User <F-USER>\S+|.*?</F-USER> not allowed because account is locked%(__suff)s
^<F-MLFFORGET>Disconnecting</F-MLFFORGET>(?: from)?(?: (?:invalid|authenticating)) user <F-USER>\S+</F-USER> <HOST>%(__on_port_opt)s:\s*Change of username or service not allowed:\s*.*\[preauth\]\s*$ ^<F-MLFFORGET>Disconnecting</F-MLFFORGET>(?: from)?(?: (?:invalid|authenticating)) user <F-USER>\S+</F-USER> <HOST>%(__on_port_opt)s:\s*Change of username or service not allowed:\s*.*\[preauth\]\s*$
^<F-MLFFORGET>Disconnecting</F-MLFFORGET>: Too many authentication failures(?: for <F-USER>.+?</F-USER>)?%(__suff)s$ ^Disconnecting: Too many authentication failures(?: for <F-USER>\S+|.*?</F-USER>)?%(__suff)s$
^<F-NOFAIL>Received <F-MLFFORGET>disconnect</F-MLFFORGET></F-NOFAIL> from <HOST>%(__on_port_opt)s:\s*11: ^<F-NOFAIL>Received <F-MLFFORGET>disconnect</F-MLFFORGET></F-NOFAIL> from <HOST>%(__on_port_opt)s:\s*11:
<mdre-<mode>-other> <mdre-<mode>-other>
^<F-MLFFORGET><F-MLFGAINED>Accepted \w+</F-MLFGAINED></F-MLFFORGET> for <F-USER>\S+</F-USER> from <HOST>(?:\s|$) ^<F-MLFFORGET><F-MLFGAINED>Accepted \w+</F-MLFGAINED></F-MLFFORGET> for <F-USER>\S+</F-USER> from <HOST>(?:\s|$)
cmnfailed-any = \S+
cmnfailed-ignore = \b(?!publickey)\S+
cmnfailed-invalid = <cmnfailed-ignore>
cmnfailed-nofail = (?:<F-NOFAIL>publickey</F-NOFAIL>|\S+)
cmnfailed = <cmnfailed-<publickey>>
mdre-normal = mdre-normal =
# used to differentiate "connection closed" with and without `[preauth]` (fail/nofail cases in ddos mode) # used to differentiate "connection closed" with and without `[preauth]` (fail/nofail cases in ddos mode)
mdre-normal-other = ^<F-NOFAIL><F-MLFFORGET>(Connection closed|Disconnected)</F-MLFFORGET></F-NOFAIL> (?:by|from)%(__authng_user)s <HOST>(?:%(__suff)s|\s*)$ mdre-normal-other = ^<F-NOFAIL><F-MLFFORGET>(Connection closed|Disconnected)</F-MLFFORGET></F-NOFAIL> (?:by|from)%(__authng_user)s <HOST>(?:%(__suff)s|\s*)$
mdre-ddos = ^Did not receive identification string from <HOST> mdre-ddos = ^Did not receive identification string from <HOST>
^kex_exchange_identification: (?:[Cc]lient sent invalid protocol identifier|[Cc]onnection closed by remote host)
^Bad protocol version identification '.*' from <HOST> ^Bad protocol version identification '.*' from <HOST>
^Connection <F-MLFFORGET>reset</F-MLFFORGET> by <HOST>
^<F-NOFAIL>SSH: Server;Ltype:</F-NOFAIL> (?:Authname|Version|Kex);Remote: <HOST>-\d+;[A-Z]\w+: ^<F-NOFAIL>SSH: Server;Ltype:</F-NOFAIL> (?:Authname|Version|Kex);Remote: <HOST>-\d+;[A-Z]\w+:
^Read from socket failed: Connection <F-MLFFORGET>reset</F-MLFFORGET> by peer ^Read from socket failed: Connection <F-MLFFORGET>reset</F-MLFFORGET> by peer
# same as mdre-normal-other, but as failure (without <F-NOFAIL>) and [preauth] only: # same as mdre-normal-other, but as failure (without <F-NOFAIL>) and [preauth] only:
mdre-ddos-other = ^<F-MLFFORGET>(Connection closed|Disconnected)</F-MLFFORGET> (?:by|from)%(__authng_user)s <HOST>%(__on_port_opt)s\s+\[preauth\]\s*$ mdre-ddos-other = ^<F-MLFFORGET>(Connection (?:closed|reset)|Disconnected)</F-MLFFORGET> (?:by|from)%(__authng_user)s <HOST>%(__on_port_opt)s\s+\[preauth\]\s*$
mdre-extra = ^Received <F-MLFFORGET>disconnect</F-MLFFORGET> from <HOST>%(__on_port_opt)s:\s*14: No supported authentication methods available mdre-extra = ^Received <F-MLFFORGET>disconnect</F-MLFFORGET> from <HOST>%(__on_port_opt)s:\s*14: No(?: supported)? authentication methods available
^Unable to negotiate with <HOST>%(__on_port_opt)s: no matching <__alg_match> found. ^Unable to negotiate with <HOST>%(__on_port_opt)s: no matching <__alg_match> found.
^Unable to negotiate a <__alg_match> ^Unable to negotiate a <__alg_match>
^no matching <__alg_match> found: ^no matching <__alg_match> found:
@ -84,6 +90,17 @@ mdre-aggressive = %(mdre-ddos)s
# mdre-extra-other is fully included within mdre-ddos-other: # mdre-extra-other is fully included within mdre-ddos-other:
mdre-aggressive-other = %(mdre-ddos-other)s mdre-aggressive-other = %(mdre-ddos-other)s
# Parameter "publickey": nofail (default), invalid, any, ignore
publickey = nofail
# consider failed publickey for invalid users only:
cmnfailre-failed-pub-invalid = ^Failed publickey for invalid user <F-USER>(?P<cond_user>\S+)|(?:(?! from ).)*?</F-USER> from <HOST>%(__on_port_opt)s(?: ssh\d*)?(?(cond_user): |(?:(?:(?! from ).)*)$)
# consider failed publickey for valid users too (don't need RE, see cmnfailed):
cmnfailre-failed-pub-any =
# same as invalid, but consider failed publickey for valid users too, just as no failure (helper to get IP and user-name only, see cmnfailed):
cmnfailre-failed-pub-nofail = <cmnfailre-failed-pub-invalid>
# don't consider failed publickey as failures (don't need RE, see cmnfailed):
cmnfailre-failed-pub-ignore =
cfooterre = ^<F-NOFAIL>Connection from</F-NOFAIL> <HOST> cfooterre = ^<F-NOFAIL>Connection from</F-NOFAIL> <HOST>
failregex = %(cmnfailre)s failregex = %(cmnfailre)s


@ -51,6 +51,26 @@
[Definition] [Definition]
failregex = ^<HOST> \- (?!- )\S+ \[\] \"(GET|POST|HEAD) [^\"]+\" 401\b # Parameter "req-method" can be used to specify the request method
req-method = \S+
# Usage example (for jail.local):
# filter = traefik-auth[req-method="GET|POST|HEAD"]
failregex = ^<HOST> \- <usrre-<mode>> \[\] \"(?:<req-method>) [^\"]+\" 401\b
ignoreregex = ignoreregex =
# Parameter "mode": normal (default), ddos or aggressive
# Usage example (for jail.local):
# [traefik-auth]
# mode = aggressive
# # or another jail (rewrite filter parameters of jail):
# [traefik-auth-ddos]
# filter = traefik-auth[mode=ddos]
#
mode = normal
# the part of failregex that matches the user name (must be present in normal mode, empty in ddos mode, and either in aggressive mode):
usrre-normal = (?!- )<F-USER>\S+</F-USER>
usrre-ddos = -
usrre-aggressive = <F-USER>\S+</F-USER>
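The `mode` and `req-method` parameters compose into the final failregex by tag substitution before the pattern is compiled. A rough Python sketch of that composition, with `<HOST>` simplified to a plain group and an illustrative, made-up access-log line:

```python
import re

# Template mirroring the failregex above; {usrre} and {req_method}
# stand in for fail2ban's <usrre-<mode>> and <req-method> tags.
template = r'^(?P<host>\S+) \- {usrre} \[\] \"(?:{req_method}) [^\"]+\" 401\b'
usrre = {
    'normal':     r'(?!- )(?P<user>\S+)',  # a real user name must be present
    'ddos':       r'-',                    # no user name at all
    'aggressive': r'(?P<user>\S+)',        # either form
}

rx = re.compile(template.format(usrre=usrre['normal'], req_method=r'\S+'))
# Hypothetical log line, for illustration only:
m = rx.match('192.0.2.1 - admin [] "GET /secure HTTP/1.1" 401 17')
print(m.group('host'), m.group('user'))
```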


@ -52,7 +52,7 @@ before = paths-debian.conf
# to prevent "clever" botnets from calculating the exact time an IP can be unbanned again: # to prevent "clever" botnets from calculating the exact time an IP can be unbanned again:
#bantime.rndtime = #bantime.rndtime =
# "bantime.maxtime" is the max number of seconds using the ban time can reach (don't grows further) # "bantime.maxtime" is the max number of seconds using the ban time can reach (doesn't grow further)
#bantime.maxtime = #bantime.maxtime =
# "bantime.factor" is a coefficient to calculate exponent growing of the formula or common multiplier, # "bantime.factor" is a coefficient to calculate exponent growing of the formula or common multiplier,
@ -60,7 +60,7 @@ before = paths-debian.conf
# grows by 1, 2, 4, 8, 16 ... # grows by 1, 2, 4, 8, 16 ...
#bantime.factor = 1 #bantime.factor = 1
# "bantime.formula" used by default to calculate next value of ban time, default value bellow, # "bantime.formula" used by default to calculate next value of ban time, default value below,
# the same ban time growing will be reached by multipliers 1, 2, 4, 8, 16, 32... # the same ban time growing will be reached by multipliers 1, 2, 4, 8, 16, 32...
#bantime.formula = ban.Time * (1<<(ban.Count if ban.Count<20 else 20)) * banFactor #bantime.formula = ban.Time * (1<<(ban.Count if ban.Count<20 else 20)) * banFactor
# #
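Assuming `bantime.factor = 1` and an initial ban time of 600 seconds, the default formula (which fail2ban evaluates as a Python expression) can be sketched directly:

```python
# Exponential ban-time growth per the default bantime.formula,
# capped at a shift of 20 so the multiplier stops growing.
ban_time, ban_factor = 600, 1  # assumed initial bantime and factor

def next_bantime(ban_count: int) -> int:
    # ban.Time * (1 << (ban.Count if ban.Count < 20 else 20)) * banFactor
    return ban_time * (1 << (ban_count if ban_count < 20 else 20)) * ban_factor

print([next_bantime(n) for n in range(5)])  # multipliers 1, 2, 4, 8, 16
```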
@ -209,28 +209,28 @@ banaction = iptables-multiport
banaction_allports = iptables-allports banaction_allports = iptables-allports
# The simplest action to take: ban only # The simplest action to take: ban only
action_ = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"] action_ = %(banaction)s[port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
# ban & send an e-mail with whois report to the destemail. # ban & send an e-mail with whois report to the destemail.
action_mw = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"] action_mw = %(action_)s
%(mta)s-whois[name=%(__name__)s, sender="%(sender)s", dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s"] %(mta)s-whois[sender="%(sender)s", dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s"]
# ban & send an e-mail with whois report and relevant log lines # ban & send an e-mail with whois report and relevant log lines
# to the destemail. # to the destemail.
action_mwl = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"] action_mwl = %(action_)s
%(mta)s-whois-lines[name=%(__name__)s, sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"] %(mta)s-whois-lines[sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"]
# See the IMPORTANT note in action.d/xarf-login-attack for when to use this action # See the IMPORTANT note in action.d/xarf-login-attack for when to use this action
# #
# ban & send a xarf e-mail to abuse contact of IP address and include relevant log lines # ban & send a xarf e-mail to abuse contact of IP address and include relevant log lines
# to the destemail. # to the destemail.
action_xarf = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"] action_xarf = %(action_)s
xarf-login-attack[service=%(__name__)s, sender="%(sender)s", logpath="%(logpath)s", port="%(port)s"] xarf-login-attack[service=%(__name__)s, sender="%(sender)s", logpath="%(logpath)s", port="%(port)s"]
# ban IP on CloudFlare & send an e-mail with whois report and relevant log lines # ban IP on CloudFlare & send an e-mail with whois report and relevant log lines
# to the destemail. # to the destemail.
action_cf_mwl = cloudflare[cfuser="%(cfemail)s", cftoken="%(cfapikey)s"] action_cf_mwl = cloudflare[cfuser="%(cfemail)s", cftoken="%(cfapikey)s"]
%(mta)s-whois-lines[name=%(__name__)s, sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"] %(mta)s-whois-lines[sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"]
# Report block via blocklist.de fail2ban reporting service API # Report block via blocklist.de fail2ban reporting service API
# #
@ -240,7 +240,7 @@ action_cf_mwl = cloudflare[cfuser="%(cfemail)s", cftoken="%(cfapikey)s"]
# in your `jail.local` globally (section [DEFAULT]) or per specific jail section (resp. in # in your `jail.local` globally (section [DEFAULT]) or per specific jail section (resp. in
# corresponding jail.d/my-jail.local file). # corresponding jail.d/my-jail.local file).
# #
action_blocklist_de = blocklist_de[email="%(sender)s", service=%(filter)s, apikey="%(blocklist_de_apikey)s", agent="%(fail2ban_agent)s"] action_blocklist_de = blocklist_de[email="%(sender)s", service="%(__name__)s", apikey="%(blocklist_de_apikey)s", agent="%(fail2ban_agent)s"]
# Report ban via badips.com, and use as blacklist # Report ban via badips.com, and use as blacklist
# #
@ -371,7 +371,7 @@ maxretry = 1
[openhab-auth] [openhab-auth]
filter = openhab filter = openhab
action = iptables-allports[name=NoAuthFailures] banaction = %(banaction_allports)s
logpath = /opt/openhab/logs/request.log logpath = /opt/openhab/logs/request.log
@ -478,6 +478,7 @@ backend = %(syslog_backend)s
port = http,https port = http,https
logpath = /var/log/tomcat*/catalina.out logpath = /var/log/tomcat*/catalina.out
#logpath = /var/log/guacamole.log
[monit] [monit]
#Ban clients brute-forcing the monit gui login #Ban clients brute-forcing the monit gui login
@ -744,8 +745,8 @@ logpath = /var/log/named/security.log
[nsd] [nsd]
port = 53 port = 53
action = %(banaction)s[name=%(__name__)s-tcp, port="%(port)s", protocol="tcp", chain="%(chain)s", actname=%(banaction)s-tcp] action_ = %(default/action_)s[name=%(__name__)s-tcp, protocol="tcp"]
-           %(banaction)s[name=%(__name__)s-udp, port="%(port)s", protocol="udp", chain="%(chain)s", actname=%(banaction)s-udp]
+           %(default/action_)s[name=%(__name__)s-udp, protocol="udp"]
 logpath = /var/log/nsd.log
@@ -756,9 +757,8 @@ logpath = /var/log/nsd.log
 [asterisk]
 port     = 5060,5061
-action   = %(banaction)s[name=%(__name__)s-tcp, port="%(port)s", protocol="tcp", chain="%(chain)s", actname=%(banaction)s-tcp]
-           %(banaction)s[name=%(__name__)s-udp, port="%(port)s", protocol="udp", chain="%(chain)s", actname=%(banaction)s-udp]
-           %(mta)s-whois[name=%(__name__)s, dest="%(destemail)s"]
+action_  = %(default/action_)s[name=%(__name__)s-tcp, protocol="tcp"]
+           %(default/action_)s[name=%(__name__)s-udp, protocol="udp"]
 logpath  = /var/log/asterisk/messages
 maxretry = 10
@@ -766,9 +766,8 @@ maxretry = 10
 [freeswitch]
 port     = 5060,5061
-action   = %(banaction)s[name=%(__name__)s-tcp, port="%(port)s", protocol="tcp", chain="%(chain)s", actname=%(banaction)s-tcp]
-           %(banaction)s[name=%(__name__)s-udp, port="%(port)s", protocol="udp", chain="%(chain)s", actname=%(banaction)s-udp]
-           %(mta)s-whois[name=%(__name__)s, dest="%(destemail)s"]
+action_  = %(default/action_)s[name=%(__name__)s-tcp, protocol="tcp"]
+           %(default/action_)s[name=%(__name__)s-udp, protocol="udp"]
 logpath  = /var/log/freeswitch.log
 maxretry = 10
@@ -853,11 +852,23 @@ logpath = /var/log/ejabberd/ejabberd.log
 [counter-strike]
 logpath = /opt/cstrike/logs/L[0-9]*.log
-# Firewall: http://www.cstrike-planet.com/faq/6
 tcpport = 27030,27031,27032,27033,27034,27035,27036,27037,27038,27039
 udpport = 1200,27000,27001,27002,27003,27004,27005,27006,27007,27008,27009,27010,27011,27012,27013,27014,27015
-action  = %(banaction)s[name=%(__name__)s-tcp, port="%(tcpport)s", protocol="tcp", chain="%(chain)s", actname=%(banaction)s-tcp]
-          %(banaction)s[name=%(__name__)s-udp, port="%(udpport)s", protocol="udp", chain="%(chain)s", actname=%(banaction)s-udp]
+action_ = %(default/action_)s[name=%(__name__)s-tcp, port="%(tcpport)s", protocol="tcp"]
+          %(default/action_)s[name=%(__name__)s-udp, port="%(udpport)s", protocol="udp"]
+
+[softethervpn]
+port     = 500,4500
+protocol = udp
+logpath  = /usr/local/vpnserver/security_log/*/sec.log
+
+[gitlab]
+port    = http,https
+logpath = /var/log/gitlab/gitlab-rails/application.log
+
+[grafana]
+port    = http,https
+logpath = /var/log/grafana/grafana.log

 [bitwarden]
 port = http,https
@@ -909,8 +920,8 @@ findtime = 1
 [murmur]
 # AKA mumble-server
 port     = 64738
-action   = %(banaction)s[name=%(__name__)s-tcp, port="%(port)s", protocol=tcp, chain="%(chain)s", actname=%(banaction)s-tcp]
-           %(banaction)s[name=%(__name__)s-udp, port="%(port)s", protocol=udp, chain="%(chain)s", actname=%(banaction)s-udp]
+action_  = %(default/action_)s[name=%(__name__)s-tcp, protocol="tcp"]
+           %(default/action_)s[name=%(__name__)s-udp, protocol="udp"]
 logpath = /var/log/mumble-server/mumble-server.log
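The jail entries above replace hand-rolled per-protocol `%(banaction)s[...]` pairs with the shared `%(default/action_)s` template. The `default/` cross-section lookup is fail2ban-specific, but the underlying %-style interpolation, including multi-line continuation values, can be sketched with the stdlib parser. Section and option values below are illustrative only, not taken from any shipped jail.conf:

```python
from configparser import ConfigParser, BasicInterpolation

# Illustrative jail snippet: one action template, instantiated per protocol.
cfg = ConfigParser(interpolation=BasicInterpolation())
cfg.read_string("""\
[DEFAULT]
banaction = iptables-multiport
port = 5060,5061

[asterisk]
action_ = %(banaction)s[port="%(port)s", protocol="tcp"]
          %(banaction)s[port="%(port)s", protocol="udp"]
""")
# Continuation lines are joined with newlines, then each %(tag)s is expanded:
print(cfg.get("asterisk", "action_"))
```

Each indented continuation line becomes its own action invocation after interpolation, which is why one `action_` value can drive both a tcp and a udp rule.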
@@ -38,28 +38,32 @@ class ActionReader(DefinitionInitConfigReader):

 	_configOpts = {
 		"actionstart": ["string", None],
-		"actionstart_on_demand": ["string", None],
+		"actionstart_on_demand": ["bool", None],
 		"actionstop": ["string", None],
 		"actionflush": ["string", None],
 		"actionreload": ["string", None],
 		"actioncheck": ["string", None],
 		"actionrepair": ["string", None],
-		"actionrepair_on_unban": ["string", None],
+		"actionrepair_on_unban": ["bool", None],
 		"actionban": ["string", None],
 		"actionprolong": ["string", None],
 		"actionreban": ["string", None],
 		"actionunban": ["string", None],
-		"norestored": ["string", None],
+		"norestored": ["bool", None],
 	}

 	def __init__(self, file_, jailName, initOpts, **kwargs):
+		# always supply jail name as name parameter if not specified in options:
+		n = initOpts.get("name")
+		if n is None:
+			initOpts["name"] = n = jailName
 		actname = initOpts.get("actname")
 		if actname is None:
 			actname = file_
+			# ensure we've unique action name per jail:
+			if n != jailName:
+				actname += n[len(jailName):] if n.startswith(jailName) else '-' + n
 			initOpts["actname"] = actname
-		# always supply jail name as name parameter if not specified in options:
-		if initOpts.get("name") is None:
-			initOpts["name"] = jailName
 		self._name = actname
 		DefinitionInitConfigReader.__init__(
 			self, file_, jailName, initOpts, **kwargs)
@@ -80,11 +84,6 @@ class ActionReader(DefinitionInitConfigReader):
 	def convert(self):
 		opts = self.getCombined(
 			ignore=CommandAction._escapedTags | set(('timeout', 'bantime')))
-		# type-convert only after combined (otherwise boolean converting prevents substitution):
-		for o in ('norestored', 'actionstart_on_demand', 'actionrepair_on_unban'):
-			if opts.get(o):
-				opts[o] = self._convert_to_boolean(opts[o])
-		# stream-convert:
+		# stream-convert:
 		head = ["set", self._jailName]
 		stream = list()
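The removed per-option boolean loop moves into the generic, template-driven conversion in configreader below; the option templates now declare `"bool"` instead of `"string"`. The ordering constraint both versions encode, convert only after interpolation, can be shown with a toy sketch. `_as_bool` here is a simplified stand-in for fail2ban's helper, and the option names are illustrative:

```python
# Why boolean conversion must happen only after interpolation: converting
# the raw "%(really)s" to bool before tag substitution would be meaningless.
def _as_bool(v):  # simplified stand-in for fail2ban.helpers._as_bool
    if isinstance(v, bool):
        return v
    return str(v).lower() in ("1", "true", "yes", "on")

opts = {"norestored": "%(really)s", "really": "true"}

# 1) interpolate first (trivial %-style substitution here):
combined = {k: (v % opts if isinstance(v, str) else v) for k, v in opts.items()}
# 2) only then type-convert the options declared as "bool":
for name in ("norestored",):
    combined[name] = _as_bool(combined[name])

print(combined["norestored"])  # → True
```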
@@ -29,7 +29,7 @@ import re
 import sys
 from ..helpers import getLogger

-if sys.version_info >= (3,2):
+if sys.version_info >= (3,): # pragma: 2.x no cover
 	# SafeConfigParser deprecated from Python 3.2 (renamed to ConfigParser)
 	from configparser import ConfigParser as SafeConfigParser, BasicInterpolation, \
@@ -61,7 +61,7 @@ if sys.version_info >= (3,2):
 			return super(BasicInterpolationWithName, self)._interpolate_some(
 				parser, option, accum, rest, section, map, *args, **kwargs)

-else: # pragma: no cover
+else: # pragma: 3.x no cover
 	from ConfigParser import SafeConfigParser, \
 		InterpolationMissingOptionError, NoOptionError, NoSectionError
@@ -372,7 +372,8 @@ after = 1.conf
 			s2 = alls.get(n)
 			if isinstance(s2, dict):
 				# save previous known values, for possible using in local interpolations later:
-				self.merge_section('KNOWN/'+n, s2, '')
+				self.merge_section('KNOWN/'+n,
+					dict(filter(lambda i: i[0] in s, s2.iteritems())), '')
 				# merge section
 				s2.update(s)
 			else:
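The `merge_section('KNOWN/'+n, ...)` change stops copying the whole previously read section and keeps only the keys the new section is about to override. In Python 3 terms (the source uses py2's `iteritems`, rewritten by 2to3), the filter amounts to this, with made-up option values for illustration:

```python
# s: options from the section being merged in; s2: previously known options.
s = {"maxretry": "5"}
s2 = {"maxretry": "3", "bantime": "600", "enabled": "true"}

# keep under KNOWN/<name> only the values the new section overwrites:
known = {k: v for k, v in s2.items() if k in s}
print(known)  # → {'maxretry': '3'}
```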
@@ -34,6 +34,30 @@ from ..helpers import getLogger, _as_bool, _merge_dicts, substituteRecursiveTags
 # Gets the instance of the logger.
 logSys = getLogger(__name__)

+CONVERTER = {
+	"bool": _as_bool,
+	"int": int,
+}
+def _OptionsTemplateGen(options):
+	"""Iterator over the options template with default options.
+
+	Each options entry is composed of an array or tuple with:
+		[[type, name, ?default?], ...]
+	Or it is a dict:
+		{name: [type, default], ...}
+	"""
+	if isinstance(options, (list,tuple)):
+		for optname in options:
+			if len(optname) > 2:
+				opttype, optname, optvalue = optname
+			else:
+				(opttype, optname), optvalue = optname, None
+			yield opttype, optname, optvalue
+	else:
+		for optname in options:
+			opttype, optvalue = options[optname]
+			yield opttype, optname, optvalue
+
 class ConfigReader():
 	"""Generic config reader class.
@@ -120,6 +144,10 @@ class ConfigReader():
 		except AttributeError:
 			return False

+	def has_option(self, sec, opt, withDefault=True):
+		return self._cfg.has_option(sec, opt) if withDefault \
+			else opt in self._cfg._sections.get(sec, {})
+
 	def merge_defaults(self, d):
 		self._cfg.get_defaults().update(d)
@@ -224,31 +252,22 @@ class ConfigReaderUnshared(SafeConfigParserWithIncludes):
 	# Or it is a dict:
 	#  {name: [type, default], ...}
-	def getOptions(self, sec, options, pOptions=None, shouldExist=False):
+	def getOptions(self, sec, options, pOptions=None, shouldExist=False, convert=True):
 		values = dict()
 		if pOptions is None:
 			pOptions = {}
 		# Get only specified options:
-		for optname in options:
-			if isinstance(options, (list,tuple)):
-				if len(optname) > 2:
-					opttype, optname, optvalue = optname
-				else:
-					(opttype, optname), optvalue = optname, None
-			else:
-				opttype, optvalue = options[optname]
+		for opttype, optname, optvalue in _OptionsTemplateGen(options):
 			if optname in pOptions:
 				continue
 			try:
-				if opttype == "bool":
-					v = self.getboolean(sec, optname)
-					if v is None: continue
-				elif opttype == "int":
-					v = self.getint(sec, optname)
-					if v is None: continue
-				else:
-					v = self.get(sec, optname, vars=pOptions)
+				v = self.get(sec, optname, vars=pOptions)
 				values[optname] = v
+				if convert:
+					conv = CONVERTER.get(opttype)
+					if conv:
+						if v is None: continue
+						values[optname] = conv(v)
 			except NoSectionError as e:
 				if shouldExist:
 					raise
@@ -261,8 +280,8 @@ class ConfigReaderUnshared(SafeConfigParserWithIncludes):
 					logSys.warning("'%s' not defined in '%s'. Using default one: %r"
 						% (optname, sec, optvalue))
 					values[optname] = optvalue
-				elif logSys.getEffectiveLevel() <= logLevel:
-					logSys.log(logLevel, "Non essential option '%s' not defined in '%s'.", optname, sec)
+				# elif logSys.getEffectiveLevel() <= logLevel:
+				# 	logSys.log(logLevel, "Non essential option '%s' not defined in '%s'.", optname, sec)
 			except ValueError:
 				logSys.warning("Wrong value for '" + optname + "' in '" + sec +
 					"'. Using default one: '" + repr(optvalue) + "'")
@@ -320,8 +339,9 @@ class DefinitionInitConfigReader(ConfigReader):
 		pOpts = dict()
 		if self._initOpts:
 			pOpts = _merge_dicts(pOpts, self._initOpts)
+		# type-convert only in combined (otherwise int/bool converting prevents substitution):
 		self._opts = ConfigReader.getOptions(
-			self, "Definition", self._configOpts, pOpts)
+			self, "Definition", self._configOpts, pOpts, convert=False)
 		self._pOpts = pOpts
 		if self.has_section("Init"):
 			# get only own options (without options from default):
@@ -342,10 +362,21 @@ class DefinitionInitConfigReader(ConfigReader):
 			if opt == '__name__' or opt in self._opts: continue
 			self._opts[opt] = self.get("Definition", opt)

+	def convertOptions(self, opts, configOpts):
+		"""Convert interpolated combined options to expected type.
+		"""
+		for opttype, optname, optvalue in _OptionsTemplateGen(configOpts):
+			conv = CONVERTER.get(opttype)
+			if conv:
+				v = opts.get(optname)
+				if v is None: continue
+				try:
+					opts[optname] = conv(v)
+				except ValueError:
+					logSys.warning("Wrong %s value %r for %r. Using default one: %r",
+						opttype, v, optname, optvalue)
+					opts[optname] = optvalue
+
-	def _convert_to_boolean(self, value):
-		return _as_bool(value)
-
 	def getCombOption(self, optname):
 		"""Get combined definition option (as string) using pre-set and init
 		options as preselection (values with higher precedence as specified in section).
@@ -380,6 +411,8 @@ class DefinitionInitConfigReader(ConfigReader):
 			ignore=ignore, addrepl=self.getCombOption)
 		if not opts:
 			raise ValueError('recursive tag definitions unable to be resolved')
+		# convert options after all interpolations:
+		self.convertOptions(opts, self._configOpts)
 		return opts

 	def convert(self):
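`_OptionsTemplateGen` unifies the two template shapes the readers pass around (a list of `(type, name, ?default?)` tuples, or a `{name: [type, default]}` dict) into one `(type, name, default)` stream. A self-contained equivalent, with made-up option names:

```python
def options_template_gen(options):
    """Yield (type, name, default) from either template form: a list/tuple
    of (type, name, ?default?) entries, or a dict {name: [type, default]}.
    Mirrors the _OptionsTemplateGen helper added in the diff above."""
    if isinstance(options, (list, tuple)):
        for entry in options:
            if len(entry) > 2:
                opttype, optname, optvalue = entry
            else:
                (opttype, optname), optvalue = entry, None
            yield opttype, optname, optvalue
    else:
        for optname, (opttype, optvalue) in options.items():
            yield opttype, optname, optvalue

print(list(options_template_gen([("int", "maxretry", 3), ("string", "logpath")])))
print(list(options_template_gen({"enabled": ["bool", False]})))
```

A two-element list entry gets `None` as its default, matching the behavior of the inline loop that this generator replaces in `getOptions`.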
@@ -48,7 +48,8 @@ class CSocket:
 	def send(self, msg, nonblocking=False, timeout=None):
 		# Convert every list member to string
 		obj = dumps(map(CSocket.convert, msg), HIGHEST_PROTOCOL)
-		self.__csock.send(obj + CSPROTO.END)
+		self.__csock.send(obj)
+		self.__csock.send(CSPROTO.END)
 		return self.receive(self.__csock, nonblocking, timeout)

 	def settimeout(self, timeout):
@@ -81,9 +82,12 @@ class CSocket:
 		msg = CSPROTO.EMPTY
 		if nonblocking: sock.setblocking(0)
 		if timeout: sock.settimeout(timeout)
-		while msg.rfind(CSPROTO.END) == -1:
-			chunk = sock.recv(512)
-			if chunk in ('', b''): # python 3.x may return b'' instead of ''
-				raise RuntimeError("socket connection broken")
+		bufsize = 1024
+		while msg.rfind(CSPROTO.END, -32) == -1:
+			chunk = sock.recv(bufsize)
+			if not len(chunk):
+				raise socket.error(104, 'Connection reset by peer')
+			if chunk == CSPROTO.END: break
 			msg = msg + chunk
+			if bufsize < 32768: bufsize <<= 1
 		return loads(msg)
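The patched `receive` grows the read buffer exponentially (1 KiB up to 32 KiB) and looks for the protocol terminator only in the message tail, so large responses are read in fewer syscalls without rescanning the whole accumulated buffer each iteration. A standalone sketch; the marker value is an assumption here, the real one is defined by `CSPROTO`:

```python
import socket

END = b"<END-MARKER>"  # assumed terminator; fail2ban's real value lives in CSPROTO

def receive_all(sock):
    """Read until the end marker, doubling the buffer size up to 32 KiB and
    searching for the marker only in the last 32 bytes of the buffer."""
    msg = b""
    bufsize = 1024
    # rfind(..., -32) scans only the tail; since data is appended at the end,
    # a complete marker (shorter than 32 bytes) always lands there.
    while msg.rfind(END, -32) == -1:
        chunk = sock.recv(bufsize)
        if not chunk:
            raise socket.error(104, "Connection reset by peer")
        msg += chunk
        if bufsize < 32768:
            bufsize <<= 1
    return msg[:msg.rfind(END)]  # strip the marker for the caller
```

If the marker is split across two `recv` calls, the tail search simply fails until the rest arrives, so no data is lost.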
@@ -168,19 +168,6 @@ class Fail2banClient(Fail2banCmdLine, Thread):
 		if not ret:
 			return None

-		# verify that directory for the socket file exists
-		socket_dir = os.path.dirname(self._conf["socket"])
-		if not os.path.exists(socket_dir):
-			logSys.error(
-				"There is no directory %s to contain the socket file %s."
-				% (socket_dir, self._conf["socket"]))
-			return None
-		if not os.access(socket_dir, os.W_OK | os.X_OK): # pragma: no cover
-			logSys.error(
-				"Directory %s exists but not accessible for writing"
-				% (socket_dir,))
-			return None
-
 		# Check already running
 		if not self._conf["force"] and os.path.exists(self._conf["socket"]):
 			logSys.error("Fail2ban seems to be in unexpected state (not running but the socket exists)")
@@ -27,15 +27,20 @@ import sys

 from ..version import version, normVersion
 from ..protocol import printFormatted
-from ..helpers import getLogger, str2LogLevel, getVerbosityFormat
+from ..helpers import getLogger, str2LogLevel, getVerbosityFormat, BrokenPipeError

 # Gets the instance of the logger.
 logSys = getLogger("fail2ban")

 def output(s): # pragma: no cover
-	print(s)
+	try:
+		print(s)
+	except (BrokenPipeError, IOError) as e: # pragma: no cover
+		if e.errno != 32: # closed / broken pipe
+			raise

-CONFIG_PARAMS = ("socket", "pidfile", "logtarget", "loglevel", "syslogsocket",)
+# Config parameters required to start fail2ban which can be also set via command line (overwrite fail2ban.conf),
+CONFIG_PARAMS = ("socket", "pidfile", "logtarget", "loglevel", "syslogsocket")

 # Used to signal - we are in test cases (ex: prevents change logging params, log capturing, etc)
 PRODUCTION = True
@@ -94,9 +99,10 @@ class Fail2banCmdLine():
 		output("and bans the corresponding IP addresses using firewall rules.")
 		output("")
 		output("Options:")
-		output("    -c <DIR>                configuration directory")
-		output("    -s <FILE>               socket path")
-		output("    -p <FILE>               pidfile path")
+		output("    -c, --conf <DIR>        configuration directory")
+		output("    -s, --socket <FILE>     socket path")
+		output("    -p, --pidfile <FILE>    pidfile path")
+		output("    --pname <NAME>          name of the process (main thread) to identify instance (default fail2ban-server)")
 		output("    --loglevel <LEVEL>      logging level")
 		output("    --logtarget <TARGET>    logging target, use file-name or stdout, stderr, syslog or sysout.")
 		output("    --syslogsocket auto|<FILE>")
@@ -129,17 +135,15 @@ class Fail2banCmdLine():
 		"""
 		for opt in optList:
 			o = opt[0]
-			if o == "-c":
+			if o in ("-c", "--conf"):
 				self._conf["conf"] = opt[1]
-			elif o == "-s":
+			elif o in ("-s", "--socket"):
 				self._conf["socket"] = opt[1]
-			elif o == "-p":
+			elif o in ("-p", "--pidfile"):
 				self._conf["pidfile"] = opt[1]
-			elif o.startswith("--log") or o.startswith("--sys"):
-				self._conf[ o[2:] ] = opt[1]
-			elif o in ["-d", "--dp", "--dump-pretty"]:
+			elif o in ("-d", "--dp", "--dump-pretty"):
 				self._conf["dump"] = True if o == "-d" else 2
-			elif o == "-t" or o == "--test":
+			elif o in ("-t", "--test"):
 				self.cleanConfOnly = True
 				self._conf["test"] = True
 			elif o == "-v":
@@ -163,12 +167,14 @@ class Fail2banCmdLine():
 				from ..server.mytime import MyTime
 				output(MyTime.str2seconds(opt[1]))
 				return True
-			elif o in ["-h", "--help"]:
+			elif o in ("-h", "--help"):
 				self.dispUsage()
 				return True
-			elif o in ["-V", "--version"]:
+			elif o in ("-V", "--version"):
 				self.dispVersion(o == "-V")
 				return True
+			elif o.startswith("--"): # other long named params (see also resetConf)
+				self._conf[ o[2:] ] = opt[1]
 		return None

 	def initCmdLine(self, argv):
@@ -185,6 +191,7 @@ class Fail2banCmdLine():
 		try:
 			cmdOpts = 'hc:s:p:xfbdtviqV'
 			cmdLongOpts = ['loglevel=', 'logtarget=', 'syslogsocket=', 'test', 'async',
+				'conf=', 'pidfile=', 'pname=', 'socket=',
 				'timeout=', 'str2sec=', 'help', 'version', 'dp', '--dump-pretty']
 			optList, self._args = getopt.getopt(self._argv[1:], cmdOpts, cmdLongOpts)
 		except getopt.GetoptError:
@@ -227,7 +234,8 @@ class Fail2banCmdLine():
 			if not conf:
 				self.configurator.readEarly()
 				conf = self.configurator.getEarlyOptions()
-			self._conf[o] = conf[o]
+			if o in conf:
+				self._conf[o] = conf[o]

 		logSys.info("Using socket file %s", self._conf["socket"])
@@ -304,18 +312,24 @@ class Fail2banCmdLine():
 	# since method is also exposed in API via globally bound variable
 	@staticmethod
 	def _exit(code=0):
-		if hasattr(os, '_exit') and os._exit:
-			os._exit(code)
-		else:
-			sys.exit(code)
+		# implicit flush without to produce broken pipe error (32):
+		sys.stderr.close()
+		try:
+			sys.stdout.flush()
+			# exit:
+			if hasattr(sys, 'exit') and sys.exit:
+				sys.exit(code)
+			else:
+				os._exit(code)
+		except (BrokenPipeError, IOError) as e: # pragma: no cover
+			if e.errno != 32: # closed / broken pipe
+				raise

 	@staticmethod
 	def exit(code=0):
 		logSys.debug("Exit with code %s", code)
 		# because of possible buffered output in python, we should flush it before exit:
 		logging.shutdown()
-		sys.stdout.flush()
-		sys.stderr.flush()
 		# exit
 		Fail2banCmdLine._exit(code)
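Both the new `output()` and `_exit()` swallow errno 32 so that piping client output into something like `head`, which closes the pipe early, no longer produces a traceback. The pattern in isolation, a minimal sketch rather than fail2ban's exact helper:

```python
import contextlib
import errno
import io
import sys

def safe_output(s):
    """Print without crashing when stdout is a closed pipe; any other
    I/O error is still propagated."""
    try:
        print(s)
        sys.stdout.flush()
    except (BrokenPipeError, IOError) as e:
        if e.errno != errno.EPIPE:  # 32: closed / broken pipe
            raise

# normal case: the text is simply printed
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    safe_output("hello")
print(buf.getvalue(), end="")  # → hello
```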
@@ -21,7 +21,6 @@
 Fail2Ban reads log file that contains password failure report
 and bans the corresponding IP addresses using firewall rules.
-This tools can test regular expressions for "fail2ban".
 """

 __author__ = "Fail2Ban Developers"
@@ -109,19 +108,22 @@ class _f2bOptParser(OptionParser):
 	def format_help(self, *args, **kwargs):
 		""" Overwritten format helper with full ussage."""
 		self.usage = ''
-		return "Usage: " + usage() + __doc__ + """
+		return "Usage: " + usage() + "\n" + __doc__ + """
 LOG:
  string                  a string representing a log line
  filename                path to a log file (/var/log/auth.log)
-"systemd-journal"        search systemd journal (systemd-python required)
+ systemd-journal         search systemd journal (systemd-python required),
+                         optionally with backend parameters, see `man jail.conf`
+                         for usage and examples (systemd-journal[journalflags=1]).

 REGEX:
  string                  a string representing a 'failregex'
+ filter                  name of filter, optionally with options (sshd[mode=aggressive])
  filename                path to a filter file (filter.d/sshd.conf)

 IGNOREREGEX:
  string                  a string representing an 'ignoreregex'
  filename                path to a filter file (filter.d/sshd.conf)
 \n""" + OptionParser.format_help(self, *args, **kwargs) + """\n
 Report bugs to https://github.com/fail2ban/fail2ban/issues\n
 """ + __copyright__ + "\n"
@@ -252,6 +254,8 @@ class Fail2banRegex(object):
 		self.share_config=dict()
 		self._filter = Filter(None)
+		self._prefREMatched = 0
+		self._prefREGroups = list()
 		self._ignoreregex = list()
 		self._failregex = list()
 		self._time_elapsed = None
@@ -272,6 +276,10 @@ class Fail2banRegex(object):
 			self._filter.returnRawHost = opts.raw
 		self._filter.checkFindTime = False
 		self._filter.checkAllRegex = opts.checkAllRegex and not opts.out
+		# ignore pending (without ID/IP), added to matches if it hits later (if ID/IP can be retreved)
+		self._filter.ignorePending = opts.out
+		# callback to increment ignored RE's by index (during process):
+		self._filter.onIgnoreRegex = self._onIgnoreRegex
 		self._backend = 'auto'

 	def output(self, line):
@@ -288,8 +296,8 @@ class Fail2banRegex(object):
 			self._filter.setDatePattern(pattern)
 			self._datepattern_set = True
 			if pattern is not None:
-				self.output( "Use      datepattern : %s" % (
-					self._filter.getDatePattern()[1], ) )
+				self.output( "Use      datepattern : %s : %s" % (
+					pattern, self._filter.getDatePattern()[1], ) )

 	def setMaxLines(self, v):
 		if not self._maxlines_set:
@@ -372,11 +380,8 @@ class Fail2banRegex(object):
 			if not ret:
 				output( "ERROR: failed to load filter %s" % value )
 				return False
-			# overwrite default logtype (considering that the filter could specify this too in Definition/Init sections):
-			if not fltOpt.get('logtype'):
-				reader.merge_defaults({
-					'logtype': ['file','journal'][int(self._backend.startswith("systemd"))]
-				})
+			# set backend-related options (logtype):
+			reader.applyAutoOptions(self._backend)
 			# get, interpolate and convert options:
 			reader.getOptions(None)
 			# show real options if expected:
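`applyAutoOptions` on the reader replaces the inline default-logtype computation removed here. The selection logic it subsumes is just an index into a two-element list, shown standalone with the same expression:

```python
def auto_logtype(backend):
    # journal-style parsing for systemd backends, file-style otherwise
    # (same bool-to-index trick as the removed inline code)
    return ["file", "journal"][int(backend.startswith("systemd"))]

print(auto_logtype("auto"))     # → file
print(auto_logtype("systemd"))  # → journal
```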
@@ -436,71 +441,140 @@ class Fail2banRegex(object):
 				'add%sRegex' % regextype.title())(regex.getFailRegex())
 		return True

-	def testIgnoreRegex(self, line):
-		found = False
-		try:
-			ret = self._filter.ignoreLine([(line, "", "")])
-			if ret is not None:
-				found = True
-				regex = self._ignoreregex[ret].inc()
-		except RegexException as e: # pragma: no cover
-			output( 'ERROR: %s' % e )
-			return False
-		return found
+	def _onIgnoreRegex(self, idx, ignoreRegex):
+		self._lineIgnored = True
+		self._ignoreregex[idx].inc()

 	def testRegex(self, line, date=None):
 		orgLineBuffer = self._filter._Filter__lineBuffer
+		# duplicate line buffer (list can be changed inplace during processLine):
+		if self._filter.getMaxLines() > 1:
+			orgLineBuffer = orgLineBuffer[:]
 		fullBuffer = len(orgLineBuffer) >= self._filter.getMaxLines()
-		is_ignored = False
+		is_ignored = self._lineIgnored = False
 		try:
 			found = self._filter.processLine(line, date)
 			lines = []
-			line = self._filter.processedLine()
 			ret = []
 			for match in found:
-				# Append True/False flag depending if line was matched by
-				# more than one regex
-				match.append(len(ret)>1)
-				regex = self._failregex[match[0]]
-				regex.inc()
-				regex.appendIP(match)
+				if not self._opts.out:
+					# Append True/False flag depending if line was matched by
+					# more than one regex
+					match.append(len(ret)>1)
+					regex = self._failregex[match[0]]
+					regex.inc()
+					regex.appendIP(match)
 				if not match[3].get('nofail'):
 					ret.append(match)
 				else:
 					is_ignored = True
+			if self._opts.out: # (formated) output - don't need stats:
+				return None, ret, None
+			# prefregex stats:
+			if self._filter.prefRegex:
+				pre = self._filter.prefRegex
+				if pre.hasMatched():
+					self._prefREMatched += 1
+					if self._verbose:
+						if len(self._prefREGroups) < self._maxlines:
+							self._prefREGroups.append(pre.getGroups())
+						else:
+							if len(self._prefREGroups) == self._maxlines:
+								self._prefREGroups.append('...')
 		except RegexException as e: # pragma: no cover
 			output( 'ERROR: %s' % e )
-			return False
+			return None, 0, None
-		for bufLine in orgLineBuffer[int(fullBuffer):]:
-			if bufLine not in self._filter._Filter__lineBuffer:
-				try:
-					self._line_stats.missed_lines.pop(
-						self._line_stats.missed_lines.index("".join(bufLine)))
-					if self._debuggex:
-						self._line_stats.missed_lines_timeextracted.pop(
-							self._line_stats.missed_lines_timeextracted.index(
-								"".join(bufLine[::2])))
-				except ValueError:
-					pass
-				# if buffering - add also another lines from match:
-				if self._print_all_matched:
-					if not self._debuggex:
-						self._line_stats.matched_lines.append("".join(bufLine))
-					else:
-						lines.append(bufLine[0] + bufLine[2])
-				self._line_stats.matched += 1
-				self._line_stats.missed -= 1
+		if self._filter.getMaxLines() > 1:
+			for bufLine in orgLineBuffer[int(fullBuffer):]:
+				if bufLine not in self._filter._Filter__lineBuffer:
+					try:
+						self._line_stats.missed_lines.pop(
+							self._line_stats.missed_lines.index("".join(bufLine)))
+						if self._debuggex:
+							self._line_stats.missed_lines_timeextracted.pop(
+								self._line_stats.missed_lines_timeextracted.index(
+									"".join(bufLine[::2])))
+					except ValueError:
+						pass
+					# if buffering - add also another lines from match:
+					if self._print_all_matched:
+						if not self._debuggex:
+							self._line_stats.matched_lines.append("".join(bufLine))
+						else:
+							lines.append(bufLine[0] + bufLine[2])
+					self._line_stats.matched += 1
+					self._line_stats.missed -= 1
 		if lines: # pre-lines parsed in multiline mode (buffering)
-			lines.append(line)
+			lines.append(self._filter.processedLine())
 			line = "\n".join(lines)
-		return line, ret, is_ignored
+		return line, ret, (is_ignored or self._lineIgnored)

+	def _prepaireOutput(self):
+		"""Prepares output- and fetch-function corresponding given '--out' option (format)"""
+		ofmt = self._opts.out
+		if ofmt in ('id', 'ip'):
+			def _out(ret):
+				for r in ret:
+					output(r[1])
+		elif ofmt == 'msg':
+			def _out(ret):
+				for r in ret:
+					for r in r[3].get('matches'):
+						if not isinstance(r, basestring):
+							r = ''.join(r for r in r)
+						output(r)
+		elif ofmt == 'row':
+			def _out(ret):
+				for r in ret:
+					output('[%r,\t%r,\t%r],' % (r[1],r[2],dict((k,v) for k, v in r[3].iteritems() if k != 'matches')))
+		elif '<' not in ofmt:
+			def _out(ret):
+				for r in ret:
+					output(r[3].get(ofmt))
+		else: # extended format with tags substitution:
+			from ..server.actions import Actions, CommandAction, BanTicket
+			def _escOut(t, v):
+				# use safe escape (avoid inject on pseudo tag "\x00msg\x00"):
+				if t not in ('msg',):
+					return v.replace('\x00', '\\x00')
+				return v
+			def _out(ret):
+				rows = []
+				wrap = {'NL':0}
+				for r in ret:
+					ticket = BanTicket(r[1], time=r[2], data=r[3])
+					aInfo = Actions.ActionInfo(ticket)
+					# if msg tag is used - output if single line (otherwise let it as is to wrap multilines later):
+					def _get_msg(self):
+						if not wrap['NL'] and len(r[3].get('matches', [])) <= 1:
+							return self['matches']
+						else: # pseudo tag for future replacement:
+							wrap['NL'] = 1
+							return "\x00msg\x00"
+					aInfo['msg'] = _get_msg
+					# not recursive interpolation (use safe escape):
+					v = CommandAction.replaceDynamicTags(ofmt, aInfo, escapeVal=_escOut)
+					if wrap['NL']: # contains multiline tags (msg):
+						rows.append((r, v))
+						continue
+					output(v)
+				# wrap multiline tag (msg) interpolations to single line:
+				for r, v in rows:
+					for r in r[3].get('matches'):
+						if not isinstance(r, basestring):
+							r = ''.join(r for r in r)
+						r = v.replace("\x00msg\x00", r)
+						output(r)
+		return _out
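`_prepaireOutput` picks the formatter once, before the processing loop, returning a closure instead of re-branching on `--out` for every line (as the removed inline code in `process` did). The dispatch shape in isolation, with a simplified, made-up `(n, id, time, data)` match tuple:

```python
def make_formatter(ofmt):
    """Build the per-match output function once, up front. The format names
    mirror a subset of the '--out' values; the data layout is illustrative."""
    if ofmt in ("id", "ip"):
        def _out(ret):
            return [r[1] for r in ret]
    elif ofmt == "row":
        def _out(ret):
            return [(r[1], r[2]) for r in ret]
    else:  # any other name is treated as a key into the match-data dict
        def _out(ret):
            return [r[3].get(ofmt) for r in ret]
    return _out

rows = [(0, "192.0.2.1", 1700000000.0, {"user": "root"})]
print(make_formatter("ip")(rows))    # → ['192.0.2.1']
print(make_formatter("user")(rows))  # → ['root']
```

The closure also lets the expensive branch (tag substitution via `replaceDynamicTags` in the real code) import its dependencies only when that format is actually requested.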
 	def process(self, test_lines):
 		t0 = time.time()
+		if self._opts.out: # get out function
+			out = self._prepaireOutput()
 		for line in test_lines:
 			if isinstance(line, tuple):
-				line_datetimestripped, ret, is_ignored = self.testRegex(
-					line[0], line[1])
+				line_datetimestripped, ret, is_ignored = self.testRegex(line[0], line[1])
 				line = "".join(line[0])
 			else:
 				line = line.rstrip('\r\n')
@@ -508,8 +582,10 @@ class Fail2banRegex(object):
 				# skip comment and empty lines
 				continue
 			line_datetimestripped, ret, is_ignored = self.testRegex(line)
-			if not is_ignored:
-				is_ignored = self.testIgnoreRegex(line_datetimestripped)
-			if len(ret) > 0 and not is_ignored:
+
+			if self._opts.out: # (formated) output:
+				out(ret)
+				continue

 			if is_ignored:
 				self._line_stats.ignored += 1
@@ -517,42 +593,25 @@ class Fail2banRegex(object):
 				self._line_stats.ignored_lines.append(line)
 				if self._debuggex:
 					self._line_stats.ignored_lines_timeextracted.append(line_datetimestripped)
-
-			if len(ret) > 0:
-				assert(not is_ignored)
-				if self._opts.out:
-					if self._opts.out in ('id', 'ip'):
-						for ret in ret:
-							output(ret[1])
-					elif self._opts.out == 'msg':
-						for ret in ret:
-							output('\n'.join(map(lambda v:''.join(v for v in v), ret[3].get('matches'))))
-					elif self._opts.out == 'row':
-						for ret in ret:
-							output('[%r,\t%r,\t%r],' % (ret[1],ret[2],dict((k,v) for k, v in ret[3].iteritems() if k != 'matches')))
-					else:
-						for ret in ret:
-							output(ret[3].get(self._opts.out))
-					continue
+			elif len(ret) > 0:
 				self._line_stats.matched += 1
 				if self._print_all_matched:
 					self._line_stats.matched_lines.append(line)
 					if self._debuggex:
 						self._line_stats.matched_lines_timeextracted.append(line_datetimestripped)
 			else:
-				if not is_ignored:
-					self._line_stats.missed += 1
-					if not self._print_no_missed and (self._print_all_missed or self._line_stats.missed <= self._maxlines + 1):
-						self._line_stats.missed_lines.append(line)
-						if self._debuggex:
-							self._line_stats.missed_lines_timeextracted.append(line_datetimestripped)
+				self._line_stats.missed += 1
+				if not self._print_no_missed and (self._print_all_missed or self._line_stats.missed <= self._maxlines + 1):
+					self._line_stats.missed_lines.append(line)
+					if self._debuggex:
+						self._line_stats.missed_lines_timeextracted.append(line_datetimestripped)
 			self._line_stats.tested += 1

 		self._time_elapsed = time.time() - t0
def printLines(self, ltype): def printLines(self, ltype):
lstats = self._line_stats lstats = self._line_stats
assert(self._line_stats.missed == lstats.tested - (lstats.matched + lstats.ignored)) assert(lstats.missed == lstats.tested - (lstats.matched + lstats.ignored))
lines = lstats[ltype] lines = lstats[ltype]
l = lstats[ltype + '_lines'] l = lstats[ltype + '_lines']
multiline = self._filter.getMaxLines() > 1 multiline = self._filter.getMaxLines() > 1
@ -610,7 +669,18 @@ class Fail2banRegex(object):
pprint_list(out, " #) [# of hits] regular expression") pprint_list(out, " #) [# of hits] regular expression")
return total return total
# Print title # Print prefregex:
if self._filter.prefRegex:
#self._filter.prefRegex.hasMatched()
pre = self._filter.prefRegex
out = [pre.getRegex()]
if self._verbose:
for grp in self._prefREGroups:
out.append(" %s" % (grp,))
output( "\n%s: %d total" % ("Prefregex", self._prefREMatched) )
pprint_list(out)
# Print regex's:
total = print_failregexes("Failregex", self._failregex) total = print_failregexes("Failregex", self._failregex)
_ = print_failregexes("Ignoreregex", self._ignoreregex) _ = print_failregexes("Ignoreregex", self._ignoreregex)
@ -689,10 +759,10 @@ class Fail2banRegex(object):
test_lines = journal_lines_gen(flt, myjournal) test_lines = journal_lines_gen(flt, myjournal)
else: else:
# if single line parsing (without buffering) # if single line parsing (without buffering)
if self._filter.getMaxLines() <= 1: if self._filter.getMaxLines() <= 1 and '\n' not in cmd_log:
self.output( "Use single line : %s" % shortstr(cmd_log.replace("\n", r"\n")) ) self.output( "Use single line : %s" % shortstr(cmd_log.replace("\n", r"\n")) )
test_lines = [ cmd_log ] test_lines = [ cmd_log ]
else: # multi line parsing (with buffering) else: # multi line parsing (with and without buffering)
test_lines = cmd_log.split("\n") test_lines = cmd_log.split("\n")
self.output( "Use multi line : %s line(s)" % len(test_lines) ) self.output( "Use multi line : %s line(s)" % len(test_lines) )
for i, l in enumerate(test_lines): for i, l in enumerate(test_lines):
@ -712,6 +782,7 @@ class Fail2banRegex(object):
def exec_command_line(*args): def exec_command_line(*args):
logging.exitOnIOError = True
parser = get_opt_parser() parser = get_opt_parser()
(opts, args) = parser.parse_args(*args) (opts, args) = parser.parse_args(*args)
errors = [] errors = []

View File

@@ -53,6 +53,14 @@ class FilterReader(DefinitionInitConfigReader):
 	def getFile(self):
 		return self.__file
 
+	def applyAutoOptions(self, backend):
+		# set init option to backend-related logtype, considering
+		# that the filter settings may be overwritten in its local:
+		if (not self._initOpts.get('logtype') and
+		    not self.has_option('Definition', 'logtype', False)
+		):
+			self._initOpts['logtype'] = ['file','journal'][int(backend.startswith("systemd"))]
+
 	def convert(self):
 		stream = list()
 		opts = self.getCombined()
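The `['file','journal'][int(backend.startswith("systemd"))]` idiom used by `applyAutoOptions` selects the logtype by indexing a list with a boolean. A standalone sketch of just that selection (the function name here is illustrative, not part of fail2ban):

```python
def auto_logtype(backend):
    # startswith() yields a bool; int() maps it to a list index,
    # so any "systemd*" backend selects 'journal', everything else 'file'
    return ['file', 'journal'][int(backend.startswith("systemd"))]

print(auto_logtype("systemd"))   # journal
print(auto_logtype("pyinotify")) # file
```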

View File

@@ -149,11 +149,8 @@ class JailReader(ConfigReader):
 			ret = self.__filter.read()
 			if not ret:
 				raise JailDefError("Unable to read the filter %r" % filterName)
-			if not filterOpt.get('logtype'):
-				# overwrite default logtype backend-related (considering that the filter settings may be overwritten):
-				self.__filter.merge_defaults({
-					'logtype': ['file','journal'][int(self.__opts.get('backend', '').startswith("systemd"))]
-				})
+			# set backend-related options (logtype):
+			self.__filter.applyAutoOptions(self.__opts.get('backend', ''))
 			# merge options from filter as 'known/...' (all options unfiltered):
 			self.__filter.getOptions(self.__opts, all=True)
 			ConfigReader.merge_section(self, self.__name, self.__filter.getCombined(), 'known/')

View File

@@ -208,6 +208,26 @@ class FormatterWithTraceBack(logging.Formatter):
 		return logging.Formatter.format(self, record)
 
+logging.exitOnIOError = False
+def __stopOnIOError(logSys=None, logHndlr=None): # pragma: no cover
+	if logSys and len(logSys.handlers):
+		logSys.removeHandler(logSys.handlers[0])
+	if logHndlr:
+		logHndlr.close = lambda: None
+	logging.StreamHandler.flush = lambda self: None
+	#sys.excepthook = lambda *args: None
+	if logging.exitOnIOError:
+		try:
+			sys.stderr.close()
+		except:
+			pass
+		sys.exit(0)
+
+try:
+	BrokenPipeError = BrokenPipeError
+except NameError: # pragma: 3.x no cover
+	BrokenPipeError = IOError
+
 __origLog = logging.Logger._log
 def __safeLog(self, level, msg, args, **kwargs):
 	"""Safe log inject to avoid possible errors by unsafe log-handlers,
@@ -223,6 +243,10 @@ def __safeLog(self, level, msg, args, **kwargs):
 	try:
 		# if isEnabledFor(level) already called...
 		__origLog(self, level, msg, args, **kwargs)
+	except (BrokenPipeError, IOError) as e: # pragma: no cover
+		if e.errno == 32: # closed / broken pipe
+			__stopOnIOError(self)
+		raise
 	except Exception as e: # pragma: no cover - unreachable if log-handler safe in this python-version
 		try:
 			for args in (
@@ -237,6 +261,18 @@ def __safeLog(self, level, msg, args, **kwargs):
 			pass
 logging.Logger._log = __safeLog
 
+__origLogFlush = logging.StreamHandler.flush
+def __safeLogFlush(self):
+	"""Safe flush inject stopping endless logging on closed streams (redirected pipe).
+	"""
+	try:
+		__origLogFlush(self)
+	except (BrokenPipeError, IOError) as e: # pragma: no cover
+		if e.errno == 32: # closed / broken pipe
+			__stopOnIOError(None, self)
+		raise
+logging.StreamHandler.flush = __safeLogFlush
+
 def getLogger(name):
 	"""Get logging.Logger instance with Fail2Ban logger name convention
 	"""
@@ -267,7 +303,7 @@ def getVerbosityFormat(verbosity, fmt=' %(message)s', addtime=True, padding=True
 		if addtime:
 			fmt = ' %(asctime)-15s' + fmt
 	else: # default (not verbose):
-		fmt = "%(name)-23.23s [%(process)d]: %(levelname)-7s" + fmt
+		fmt = "%(name)-24s[%(process)d]: %(levelname)-7s" + fmt
 		if addtime:
 			fmt = "%(asctime)s " + fmt
 	# remove padding if not needed:
@@ -291,7 +327,7 @@ def splitwords(s):
 	"""
 	if not s:
 		return []
-	return filter(bool, map(str.strip, re.split('[ ,\n]+', s)))
+	return filter(bool, map(lambda v: v.strip(), re.split('[ ,\n]+', s)))
 
 if sys.version_info >= (3,5):
 	eval(compile(r'''if 1:
@@ -338,7 +374,7 @@ OPTION_EXTRACT_CRE = re.compile(
 	r'([\w\-_\.]+)=(?:"([^"]*)"|\'([^\']*)\'|([^,\]]*))(?:,|\]\s*\[|$)', re.DOTALL)
 # split by new-line considering possible new-lines within options [...]:
 OPTION_SPLIT_CRE = re.compile(
-	r'(?:[^\[\n]+(?:\s*\[\s*(?:[\w\-_\.]+=(?:"[^"]*"|\'[^\']*\'|[^,\]]*)\s*(?:,|\]\s*\[)?\s*)*\])?\s*|[^\n]+)(?=\n\s*|$)', re.DOTALL)
+	r'(?:[^\[\s]+(?:\s*\[\s*(?:[\w\-_\.]+=(?:"[^"]*"|\'[^\']*\'|[^,\]]*)\s*(?:,|\]\s*\[)?\s*)*\])?\s*|\S+)(?=\n\s*|\s+|$)', re.DOTALL)
 
 def extractOptions(option):
 	match = OPTION_CRE.match(option)
@@ -363,8 +399,8 @@ def splitWithOptions(option):
 # tags (<tag>) in tagged options.
 #
-# max tag replacement count:
-MAX_TAG_REPLACE_COUNT = 10
+# max tag replacement count (considering tag X in tag Y repeat):
+MAX_TAG_REPLACE_COUNT = 25
 
 # compiled RE for tag name (replacement name)
 TAG_CRE = re.compile(r'<([^ <>]+)>')
@@ -398,6 +434,7 @@ def substituteRecursiveTags(inptags, conditional='',
 	done = set()
 	noRecRepl = hasattr(tags, "getRawItem")
 	# repeat substitution while embedded-recursive (repFlag is True)
+	repCounts = {}
 	while True:
 		repFlag = False
 		# substitute each value:
@@ -409,7 +446,7 @@ def substituteRecursiveTags(inptags, conditional='',
 			value = orgval = uni_string(tags[tag])
 			# search and replace all tags within value, that can be interpolated using other tags:
 			m = tre_search(value)
-			refCounts = {}
+			rplc = repCounts.get(tag, {})
 			#logSys.log(5, 'TAG: %s, value: %s' % (tag, value))
 			while m:
 				# found replacement tag:
@@ -419,13 +456,13 @@ def substituteRecursiveTags(inptags, conditional='',
 					m = tre_search(value, m.end())
 					continue
 				#logSys.log(5, 'found: %s' % rtag)
-				if rtag == tag or refCounts.get(rtag, 1) > MAX_TAG_REPLACE_COUNT:
+				if rtag == tag or rplc.get(rtag, 1) > MAX_TAG_REPLACE_COUNT:
 					# recursive definitions are bad
 					#logSys.log(5, 'recursion fail tag: %s value: %s' % (tag, value) )
 					raise ValueError(
 						"properties contain self referencing definitions "
 						"and cannot be resolved, fail tag: %s, found: %s in %s, value: %s" %
-						(tag, rtag, refCounts, value))
+						(tag, rtag, rplc, value))
 				repl = None
 				if conditional:
 					repl = tags.get(rtag + '?' + conditional)
@@ -445,7 +482,7 @@ def substituteRecursiveTags(inptags, conditional='',
 					value = value.replace('<%s>' % rtag, repl)
 					#logSys.log(5, 'value now: %s' % value)
 					# increment reference count:
-					refCounts[rtag] = refCounts.get(rtag, 0) + 1
+					rplc[rtag] = rplc.get(rtag, 0) + 1
 					# the next match for replace:
 					m = tre_search(value, m.start())
 			#logSys.log(5, 'TAG: %s, newvalue: %s' % (tag, value))
@@ -453,6 +490,7 @@ def substituteRecursiveTags(inptags, conditional='',
 			if orgval != value:
 				# check still contains any tag - should be repeated (possible embedded-recursive substitution):
 				if tre_search(value):
+					repCounts[tag] = rplc
 					repFlag = True
 	# copy return tags dict to prevent modifying of inptags:
 	if id(tags) == id(inptags):
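The `repCounts`/`MAX_TAG_REPLACE_COUNT` change above makes the per-tag replacement counter survive across substitution rounds, so mutually embedded tags terminate instead of looping. A minimal, simplified model of that guard (not fail2ban's actual implementation, just the idea):

```python
import re

MAX_TAG_REPLACE_COUNT = 25
TAG_CRE = re.compile(r'<([^ <>]+)>')

def substitute_tags(tags):
    # naive repeated substitution with a per-tag replacement counter
    tags = dict(tags)
    for tag in tags:
        counts = {}          # how often each referenced tag was substituted
        value = tags[tag]
        m = TAG_CRE.search(value)
        while m:
            rtag = m.group(1)
            # self-reference or too many substitutions -> recursive definition
            if rtag == tag or counts.get(rtag, 0) > MAX_TAG_REPLACE_COUNT:
                raise ValueError("self referencing definition: %s" % tag)
            repl = tags.get(rtag)
            if repl is None:
                # unknown tag - leave it as-is and continue after it
                m = TAG_CRE.search(value, m.end())
                continue
            value = value.replace('<%s>' % rtag, repl)
            counts[rtag] = counts.get(rtag, 0) + 1
            m = TAG_CRE.search(value, m.start())
        tags[tag] = value
    return tags
```

`substitute_tags({'a': 'x<b>y', 'b': 'B'})` resolves `a` to `xBy`, while `{'a': '<a>'}` raises the same kind of `ValueError` the diff produces for self-referencing definitions.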

View File

@@ -55,6 +55,8 @@ protocol = [
 ["stop", "stops all jails and terminate the server"],
 ["unban --all", "unbans all IP addresses (in all jails and database)"],
 ["unban <IP> ... <IP>", "unbans <IP> (in all jails and database)"],
+["banned", "return jails with banned IPs as dictionary"],
+["banned <IP> ... <IP>]", "return list(s) of jails where given IP(s) are banned"],
 ["status", "gets the current status of the server"],
 ["ping", "tests if the server is alive"],
 ["echo", "for internal usage, returns back and outputs a given string"],
@@ -120,6 +122,8 @@ protocol = [
 ["set <JAIL> action <ACT> <PROPERTY> <VALUE>", "sets the <VALUE> of <PROPERTY> for the action <ACT> for <JAIL>"],
 ["set <JAIL> action <ACT> <METHOD>[ <JSONKWARGS>]", "calls the <METHOD> with <JSONKWARGS> for the action <ACT> for <JAIL>"],
 ['', "JAIL INFORMATION", ""],
+["get <JAIL> banned", "return banned IPs of <JAIL>"],
+["get <JAIL> banned <IP> ... <IP>]", "return 1 if IP is banned in <JAIL> otherwise 0, or a list of 1/0 for multiple IPs"],
 ["get <JAIL> logpath", "gets the list of the monitored files for <JAIL>"],
 ["get <JAIL> logencoding", "gets the encoding of the log files for <JAIL>"],
 ["get <JAIL> journalmatch", "gets the journal filter match for <JAIL>"],

View File

@@ -404,10 +404,13 @@ class CommandAction(ActionBase):
 	def _getOperation(self, tag, family):
 		# replace operation tag (interpolate all values), be sure family is enclosed as conditional value
 		# (as lambda in addrepl so only if not overwritten in action):
-		return self.replaceTag(tag, self._properties,
+		cmd = self.replaceTag(tag, self._properties,
 			conditional=('family='+family if family else ''),
-			addrepl=(lambda tag:family if tag == 'family' else None),
 			cache=self.__substCache)
+		if '<' not in cmd or not family: return cmd
+		# replace family as dynamic tags, important - don't cache, no recursion and auto-escape here:
+		cmd = self.replaceDynamicTags(cmd, {'family':family})
+		return cmd
 
 	def _operationExecuted(self, tag, family, *args):
 		""" Get, set or delete command of operation considering family.
@@ -452,7 +455,18 @@ class CommandAction(ActionBase):
 				ret = True
 				# avoid double execution of same command for both families:
 				if cmd and cmd not in self._operationExecuted(tag, lambda f: f != famoper):
-					ret = self.executeCmd(cmd, self.timeout)
+					realCmd = cmd
+					if self._jail:
+						# simulate action info with "empty" ticket:
+						aInfo = getattr(self._jail.actions, 'actionInfo', None)
+						if not aInfo:
+							aInfo = self._jail.actions._getActionInfo(None)
+							setattr(self._jail.actions, 'actionInfo', aInfo)
+						aInfo['time'] = MyTime.time()
+						aInfo['family'] = famoper
+						# replace dynamical tags, important - don't cache, no recursion and auto-escape here
+						realCmd = self.replaceDynamicTags(cmd, aInfo)
+					ret = self.executeCmd(realCmd, self.timeout)
 					res &= ret
 					if afterExec: afterExec(famoper, ret)
 					self._operationExecuted(tag, famoper, cmd if ret else None)
@@ -806,7 +820,7 @@ class CommandAction(ActionBase):
 	ESCAPE_VN_CRE = re.compile(r"\W")
 
 	@classmethod
-	def replaceDynamicTags(cls, realCmd, aInfo):
+	def replaceDynamicTags(cls, realCmd, aInfo, escapeVal=None):
 		"""Replaces dynamical tags in `query` with property values.
 
 		**Important**
@@ -831,16 +845,17 @@ class CommandAction(ActionBase):
 		# array for escaped vars:
 		varsDict = dict()
 
-		def escapeVal(tag, value):
-			# if the value should be escaped:
-			if cls.ESCAPE_CRE.search(value):
-				# That one needs to be escaped since its content is
-				# out of our control
-				tag = 'f2bV_%s' % cls.ESCAPE_VN_CRE.sub('_', tag)
-				varsDict[tag] = value # add variable
-				value = '$'+tag # replacement as variable
-			# replacement for tag:
-			return value
+		if not escapeVal:
+			def escapeVal(tag, value):
+				# if the value should be escaped:
+				if cls.ESCAPE_CRE.search(value):
+					# That one needs to be escaped since its content is
+					# out of our control
+					tag = 'f2bV_%s' % cls.ESCAPE_VN_CRE.sub('_', tag)
+					varsDict[tag] = value # add variable
+					value = '$'+tag # replacement as variable
+				# replacement for tag:
+				return value
 
 		# additional replacement as calling map:
 		ADD_REPL_TAGS_CM = CallingMap(ADD_REPL_TAGS)
@@ -864,7 +879,7 @@ class CommandAction(ActionBase):
 			tickData = aInfo.get("F-*")
 			if not tickData: tickData = {}
 			def substTag(m):
-				tag = mapTag2Opt(m.groups()[0])
+				tag = mapTag2Opt(m.group(1))
 				try:
 					value = uni_string(tickData[tag])
 				except KeyError:
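The `f2bV_` escaping in `replaceDynamicTags` avoids inlining untrusted match text into a shell command: a dangerous value is parked in a variable and the command only references `$f2bV_<tag>`. A simplified standalone sketch of that idea (the regexes here are deliberately reduced, not fail2ban's exact `ESCAPE_CRE`):

```python
import re

# reduced set of shell-significant characters (assumption, for illustration):
ESCAPE_CRE = re.compile(r"""[\\"'`$;&|<>\n]""")
# anything that is not a word character gets normalized in the variable name:
ESCAPE_VN_CRE = re.compile(r"\W")

def escape_tag_value(tag, value, vars_dict):
    # dangerous characters: hand the value over as a shell variable
    if ESCAPE_CRE.search(value):
        var = 'f2bV_%s' % ESCAPE_VN_CRE.sub('_', tag)
        vars_dict[var] = value   # actual content goes into the variable
        return '$' + var         # the command only references the variable
    return value                 # harmless value may be inlined directly

vars_dict = {}
print(escape_tag_value('matches', "evil'; rm -rf /", vars_dict))  # $f2bV_matches
```

The command string never contains the attacker-controlled text itself; the executor is expected to export `vars_dict` into the child environment.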

View File

@@ -211,6 +211,14 @@ class Actions(JailThread, Mapping):
 	def getBanTime(self):
 		return self.__banManager.getBanTime()
 
+	def getBanned(self, ids):
+		lst = self.__banManager.getBanList()
+		if not ids:
+			return lst
+		if len(ids) == 1:
+			return 1 if ids[0] in lst else 0
+		return map(lambda ip: 1 if ip in lst else 0, ids)
+
 	def getBanList(self, withTime=False):
 		"""Returns the list of banned IP addresses.
 
@@ -254,7 +262,7 @@ class Actions(JailThread, Mapping):
 		if ip is None:
 			return self.__flushBan(db)
 		# Multiple IPs:
-		if isinstance(ip, list):
+		if isinstance(ip, (list, tuple)):
 			missed = []
 			cnt = 0
 			for i in ip:
@@ -276,6 +284,14 @@ class Actions(JailThread, Mapping):
 			# Unban the IP.
 			self.__unBan(ticket)
 		else:
+			# Multiple IPs by subnet or dns:
+			if not isinstance(ip, IPAddr):
+				ipa = IPAddr(ip)
+				if not ipa.isSingle: # subnet (mask/cidr) or raw (may be dns/hostname):
+					ips = filter(ipa.contains, self.__banManager.getBanList())
+					if ips:
+						return self.removeBannedIP(ips, db, ifexists)
+			# not found:
 			msg = "%s is not banned" % ip
 			logSys.log(logging.MSG, msg)
 			if ifexists:
@@ -322,23 +338,33 @@ class Actions(JailThread, Mapping):
 					self._jail.name, name, e,
 					exc_info=logSys.getEffectiveLevel()<=logging.DEBUG)
 		while self.active:
-			if self.idle:
-				logSys.debug("Actions: enter idle mode")
-				Utils.wait_for(lambda: not self.active or not self.idle,
-					lambda: False, self.sleeptime)
-				logSys.debug("Actions: leave idle mode")
-				continue
-			# wait for ban (stop if gets inactive):
-			bancnt = Utils.wait_for(lambda: not self.active or self.__checkBan(), self.sleeptime)
-			cnt += bancnt
-			# unban if nothing is banned not later than banned tickets >= banPrecedence
-			if not bancnt or cnt >= self.banPrecedence:
-				if self.active:
-					# let shrink the ban list faster
-					bancnt *= 2
-					self.__checkUnBan(bancnt if bancnt and bancnt < self.unbanMaxCount else self.unbanMaxCount)
-				cnt = 0
+			try:
+				if self.idle:
+					logSys.debug("Actions: enter idle mode")
+					Utils.wait_for(lambda: not self.active or not self.idle,
+						lambda: False, self.sleeptime)
+					logSys.debug("Actions: leave idle mode")
+					continue
+				# wait for ban (stop if gets inactive, pending ban or unban):
+				bancnt = 0
+				wt = min(self.sleeptime, self.__banManager._nextUnbanTime - MyTime.time())
+				logSys.log(5, "Actions: wait for pending tickets %s (default %s)", wt, self.sleeptime)
+				if Utils.wait_for(lambda: not self.active or self._jail.hasFailTickets, wt):
+					bancnt = self.__checkBan()
+				cnt += bancnt
+				# unban if nothing is banned not later than banned tickets >= banPrecedence
+				if not bancnt or cnt >= self.banPrecedence:
+					if self.active:
+						# let shrink the ban list faster
+						bancnt *= 2
+						logSys.log(5, "Actions: check-unban %s, bancnt %s, max: %s", bancnt if bancnt and bancnt < self.unbanMaxCount else self.unbanMaxCount, bancnt, self.unbanMaxCount)
+						self.__checkUnBan(bancnt if bancnt and bancnt < self.unbanMaxCount else self.unbanMaxCount)
+					cnt = 0
+			except Exception as e: # pragma: no cover
+				logSys.error("[%s] unhandled error in actions thread: %s",
+					self._jail.name, e,
+					exc_info=logSys.getEffectiveLevel()<=logging.DEBUG)
 
 		self.__flushBan(stop=True)
 		self.stopActions()
 		return True
@@ -431,7 +457,9 @@ class Actions(JailThread, Mapping):
 			return mi[idx] if mi[idx] is not None else self.__ticket
 
-	def __getActionInfo(self, ticket):
+	def _getActionInfo(self, ticket):
+		if not ticket:
+			ticket = BanTicket("", MyTime.time())
 		aInfo = Actions.ActionInfo(ticket, self._jail)
 		return aInfo
 
@@ -465,7 +493,7 @@ class Actions(JailThread, Mapping):
 			bTicket = BanTicket.wrap(ticket)
 			btime = ticket.getBanTime(self.__banManager.getBanTime())
 			ip = bTicket.getIP()
-			aInfo = self.__getActionInfo(bTicket)
+			aInfo = self._getActionInfo(bTicket)
 			reason = {}
 			if self.__banManager.addBanTicket(bTicket, reason=reason):
 				cnt += 1
@@ -476,7 +504,7 @@ class Actions(JailThread, Mapping):
 				# do actions :
 				for name, action in self._actions.iteritems():
 					try:
-						if ticket.restored and getattr(action, 'norestored', False):
+						if bTicket.restored and getattr(action, 'norestored', False):
 							continue
 						if not aInfo.immutable: aInfo.reset()
 						action.ban(aInfo)
@@ -522,6 +550,8 @@ class Actions(JailThread, Mapping):
 						cnt += self.__reBan(bTicket, actions=rebanacts)
 				else: # pragma: no cover - unexpected: ticket is not banned for some reasons - reban using all actions:
 					cnt += self.__reBan(bTicket)
+				# add ban to database moved to observer (should previously check not already banned
+				# and increase ticket time if "bantime.increment" set)
 		if cnt:
 			logSys.debug("Banned %s / %s, %s ticket(s) in %r", cnt,
 				self.__banManager.getBanTotal(), self.__banManager.size(), self._jail.name)
@@ -540,7 +570,7 @@ class Actions(JailThread, Mapping):
 		"""
 		actions = actions or self._actions
 		ip = ticket.getIP()
-		aInfo = self.__getActionInfo(ticket)
+		aInfo = self._getActionInfo(ticket)
 		if log:
 			logSys.notice("[%s] Reban %s%s", self._jail.name, aInfo["ip"], (', action %r' % actions.keys()[0] if len(actions) == 1 else ''))
 		for name, action in actions.iteritems():
@@ -574,7 +604,7 @@ class Actions(JailThread, Mapping):
 					if not action._prolongable:
 						continue
 					if aInfo is None:
-						aInfo = self.__getActionInfo(ticket)
+						aInfo = self._getActionInfo(ticket)
 						if not aInfo.immutable: aInfo.reset()
 					action.prolong(aInfo)
 			except Exception as e:
@@ -668,7 +698,7 @@ class Actions(JailThread, Mapping):
 		else:
 			unbactions = actions
 		ip = ticket.getIP()
-		aInfo = self.__getActionInfo(ticket)
+		aInfo = self._getActionInfo(ticket)
 		if log:
 			logSys.notice("[%s] Unban %s", self._jail.name, aInfo["ip"])
 		for name, action in unbactions.iteritems():
@@ -687,13 +717,19 @@ class Actions(JailThread, Mapping):
 		"""Status of current and total ban counts and current banned IP list.
 		"""
 		# TODO: Allow this list to be printed as 'status' output
-		supported_flavors = ["basic", "cymru"]
+		supported_flavors = ["short", "basic", "cymru"]
 		if flavor is None or flavor not in supported_flavors:
 			logSys.warning("Unsupported extended jail status flavor %r. Supported: %s" % (flavor, supported_flavors))
 		# Always print this information (basic)
-		ret = [("Currently banned", self.__banManager.size()),
-		       ("Total banned", self.__banManager.getBanTotal()),
-		       ("Banned IP list", self.__banManager.getBanList())]
+		if flavor != "short":
+			banned = self.__banManager.getBanList()
+			cnt = len(banned)
+		else:
+			cnt = self.__banManager.size()
+		ret = [("Currently banned", cnt),
+		       ("Total banned", self.__banManager.getBanTotal())]
+		if flavor != "short":
+			ret += [("Banned IP list", banned)]
 		if flavor == "cymru":
 			cymru_info = self.__banManager.getBanListExtendedCymruInfo()
 			ret += \

View File

@@ -57,7 +57,7 @@ class BanManager:
 		## Total number of banned IP address
 		self.__banTotal = 0
 		## The time for next unban process (for performance and load reasons):
-		self.__nextUnbanTime = BanTicket.MAX_TIME
+		self._nextUnbanTime = BanTicket.MAX_TIME
 
 	##
 	# Set the ban time.
@@ -66,7 +66,6 @@ class BanManager:
 	# @param value the time
 
 	def setBanTime(self, value):
-		with self.__lock:
-			self.__banTime = int(value)
+		self.__banTime = int(value)
 
 	##
@@ -76,7 +75,6 @@ class BanManager:
 	# @return the time
 
 	def getBanTime(self):
-		with self.__lock:
-			return self.__banTime
+		return self.__banTime
 
 	##
@@ -85,7 +83,6 @@ class BanManager:
 	# @param value total number
 
 	def setBanTotal(self, value):
-		with self.__lock:
-			self.__banTotal = value
+		self.__banTotal = value
 
 	##
@@ -94,7 +91,6 @@ class BanManager:
 	# @return the total number
 
 	def getBanTotal(self):
-		with self.__lock:
-			return self.__banTotal
+		return self.__banTotal
 
 	##
@@ -103,21 +99,21 @@ class BanManager:
 	# @return IP list
 
 	def getBanList(self, ordered=False, withTime=False):
+		if not ordered:
+			return list(self.__banList.keys())
 		with self.__lock:
-			if not ordered:
-				return self.__banList.keys()
 			lst = []
 			for ticket in self.__banList.itervalues():
 				eob = ticket.getEndOfBanTime(self.__banTime)
 				lst.append((ticket,eob))
 			lst.sort(key=lambda t: t[1])
 			t2s = MyTime.time2str
 			if withTime:
 				return ['%s \t%s + %d = %s' % (
 						t[0].getID(),
 						t2s(t[0].getTime()), t[0].getBanTime(self.__banTime), t2s(t[1])
 					) for t in lst]
 			return [t[0].getID() for t in lst]
 
 	##
 	# Returns a iterator to ban list (used in reload, so idle).
@@ -125,8 +121,8 @@ class BanManager:
 	# @return ban list iterator
 
 	def __iter__(self):
-		with self.__lock:
-			return self.__banList.itervalues()
+		# ensure iterator is safe - traverse over the list in snapshot created within lock (GIL):
+		return iter(list(self.__banList.values()))
 
 	##
 	# Returns normalized value
@@ -297,8 +293,8 @@ class BanManager:
 			self.__banTotal += 1
 			ticket.incrBanCount()
 			# correct next unban time:
-			if self.__nextUnbanTime > eob:
-				self.__nextUnbanTime = eob
+			if self._nextUnbanTime > eob:
+				self._nextUnbanTime = eob
 			return True
 
 	##
@@ -329,12 +325,8 @@ class BanManager:
 
 	def unBanList(self, time, maxCount=0x7fffffff):
 		with self.__lock:
-			# Permanent banning
-			if self.__banTime < 0:
-				return list()
-
 			# Check next unban time:
-			nextUnbanTime = self.__nextUnbanTime
+			nextUnbanTime = self._nextUnbanTime
 			if nextUnbanTime > time:
 				return list()
 
@@ -347,12 +339,12 @@ class BanManager:
 				if time > eob:
 					unBanList[fid] = ticket
 					if len(unBanList) >= maxCount: # stop search cycle, so reset back the next check time
-						nextUnbanTime = self.__nextUnbanTime
+						nextUnbanTime = self._nextUnbanTime
 						break
 				elif nextUnbanTime > eob:
 					nextUnbanTime = eob
-			self.__nextUnbanTime = nextUnbanTime
+			self._nextUnbanTime = nextUnbanTime
 
 			# Removes tickets.
 			if len(unBanList):
 				if len(unBanList) / 2.0 <= len(self.__banList) / 3.0:
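The `_nextUnbanTime` bookkeeping above lets the actions loop sleep until the earliest end-of-ban instead of scanning the whole list every cycle. A toy model of that scheduling idea (an assumed simplification, class and method names are illustrative):

```python
MAX_TIME = float('inf')

class UnbanSchedule:
    def __init__(self):
        self.tickets = {}                # id -> end-of-ban time
        self.next_unban_time = MAX_TIME  # earliest expiry among all tickets

    def add(self, fid, eob):
        self.tickets[fid] = eob
        # keep the earliest end-of-ban, so the caller knows how long to sleep
        if self.next_unban_time > eob:
            self.next_unban_time = eob

    def unban_due(self, now):
        if self.next_unban_time > now:
            return []                    # nothing expires yet - cheap exit
        due = [fid for fid, eob in self.tickets.items() if now > eob]
        for fid in due:
            del self.tickets[fid]
        # recompute the next wake-up from the remaining tickets
        self.next_unban_time = min(self.tickets.values(), default=MAX_TIME)
        return due
```

Like the diff's `wt = min(self.sleeptime, ... _nextUnbanTime - MyTime.time())`, a caller would sleep no longer than `next_unban_time - now` before checking again.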

View File

@ -489,22 +489,24 @@ class Fail2BanDb(object):
If log was already present in database, value of last position If log was already present in database, value of last position
in the log file; else `None` in the log file; else `None`
""" """
return self._addLog(cur, jail, container.getFileName(), container.getPos(), container.getHash())
def _addLog(self, cur, jail, name, pos=0, md5=None):
lastLinePos = None lastLinePos = None
cur.execute( cur.execute(
"SELECT firstlinemd5, lastfilepos FROM logs " "SELECT firstlinemd5, lastfilepos FROM logs "
"WHERE jail=? AND path=?", "WHERE jail=? AND path=?",
(jail.name, container.getFileName())) (jail.name, name))
try: try:
firstLineMD5, lastLinePos = cur.fetchone() firstLineMD5, lastLinePos = cur.fetchone()
except TypeError: except TypeError:
firstLineMD5 = False firstLineMD5 = None
cur.execute( if not firstLineMD5 and (pos or md5):
"INSERT OR REPLACE INTO logs(jail, path, firstlinemd5, lastfilepos) " cur.execute(
"VALUES(?, ?, ?, ?)", "INSERT OR REPLACE INTO logs(jail, path, firstlinemd5, lastfilepos) "
(jail.name, container.getFileName(), "VALUES(?, ?, ?, ?)", (jail.name, name, md5, pos))
container.getHash(), container.getPos())) if md5 is not None and md5 != firstLineMD5:
if container.getHash() != firstLineMD5:
lastLinePos = None lastLinePos = None
return lastLinePos return lastLinePos
@@ -533,7 +535,7 @@ class Fail2BanDb(object):
 		return set(row[0] for row in cur.fetchmany())
 
 	@commitandrollback
-	def updateLog(self, cur, *args, **kwargs):
+	def updateLog(self, cur, jail, container):
 		"""Updates hash and last position in log file.
 
 		Parameters
@@ -543,14 +545,48 @@ class Fail2BanDb(object):
 		container : FileContainer
 			File container of the log file being updated.
 		"""
-		self._updateLog(cur, *args, **kwargs)
+		self._updateLog(cur, jail, container.getFileName(), container.getPos(), container.getHash())
 
-	def _updateLog(self, cur, jail, container):
+	def _updateLog(self, cur, jail, name, pos, md5):
 		cur.execute(
 			"UPDATE logs SET firstlinemd5=?, lastfilepos=? "
-				"WHERE jail=? AND path=?",
-			(container.getHash(), container.getPos(),
-				jail.name, container.getFileName()))
+				"WHERE jail=? AND path=?", (md5, pos, jail.name, name))
+		# be sure it is set (if not available):
+		if not cur.rowcount:
+			cur.execute(
+				"INSERT OR REPLACE INTO logs(jail, path, firstlinemd5, lastfilepos) "
+					"VALUES(?, ?, ?, ?)", (jail.name, name, md5, pos))
+
+	@commitandrollback
+	def getJournalPos(self, cur, jail, name, time=0, iso=None):
+		"""Get journal position from database.
+
+		Parameters
+		----------
+		jail : Jail
+			Jail of which the journal belongs to.
+		name, time, iso :
+			Journal name (typically systemd-journal) and last known time.
+
+		Returns
+		-------
+		int (or float)
+			Last position (as time) if it was already present in database; else `None`
+		"""
+		return self._addLog(cur, jail, name, time, iso); # no hash, just time as iso
+
+	@commitandrollback
+	def updateJournal(self, cur, jail, name, time, iso):
+		"""Updates last position (as time) of journal.
+
+		Parameters
+		----------
+		jail : Jail
+			Jail of which the journal belongs to.
+		name, time, iso :
+			Journal name (typically systemd-journal) and last known time.
+		"""
+		self._updateLog(cur, jail, name, time, iso); # no hash, just time as iso
+
 	@commitandrollback
 	def addBan(self, cur, jail, ticket):
@@ -754,7 +790,8 @@ class Fail2BanDb(object):
 		if overalljails or jail is None:
 			query += " GROUP BY ip ORDER BY timeofban DESC LIMIT 1"
 		cur = self._db.cursor()
-		return cur.execute(query, queryArgs)
+		# repack iterator as long as in lock:
+		return list(cur.execute(query, queryArgs))
 
 	def _getCurrentBans(self, cur, jail = None, ip = None, forbantime=None, fromtime=None):
 		queryArgs = []
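The reworked `_updateLog` above uses an UPDATE-then-INSERT upsert: try the UPDATE first and fall back to `INSERT OR REPLACE` only when `cur.rowcount` reports that no row was touched. A minimal standalone sketch of that pattern (simplified schema, not fail2ban's actual table definition):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logs(jail TEXT, path TEXT, firstlinemd5 TEXT, "
            "lastfilepos INTEGER, PRIMARY KEY(jail, path))")

def update_log(con, jail, name, pos, md5):
    # try in-place update first:
    cur = con.execute(
        "UPDATE logs SET firstlinemd5=?, lastfilepos=? WHERE jail=? AND path=?",
        (md5, pos, jail, name))
    if not cur.rowcount:  # no existing row - create it
        con.execute(
            "INSERT OR REPLACE INTO logs(jail, path, firstlinemd5, lastfilepos) "
            "VALUES(?, ?, ?, ?)", (jail, name, md5, pos))

update_log(con, "sshd", "/var/log/auth.log", 123, "d41d8cd9")
update_log(con, "sshd", "/var/log/auth.log", 456, "d41d8cd9")
print(con.execute("SELECT lastfilepos FROM logs").fetchone()[0])  # → 456
```

Compared with unconditional `INSERT OR REPLACE`, this keeps the common case (row exists) to a single statement and avoids rewriting the row's other columns.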


@@ -282,6 +282,8 @@ class DateDetector(object):
 		elif "{DATE}" in key:
 			self.addDefaultTemplate(preMatch=pattern, allDefaults=False)
 			return
+		elif key == "{NONE}":
+			template = _getPatternTemplate('{UNB}^', key)
 		else:
 			template = _getPatternTemplate(pattern, key)
 
@@ -337,65 +339,76 @@ class DateDetector(object):
 		# if no templates specified - default templates should be used:
 		if not len(self.__templates):
 			self.addDefaultTemplate()
-		logSys.log(logLevel-1, "try to match time for line: %.120s", line)
-		match = None
+		log = logSys.log if logSys.getEffectiveLevel() <= logLevel else lambda *args: None
+		log(logLevel-1, "try to match time for line: %.120s", line)
 		# first try to use last template with same start/end position:
+		match = None
+		found = None, 0x7fffffff, 0x7fffffff, -1
 		ignoreBySearch = 0x7fffffff
 		i = self.__lastTemplIdx
 		if i < len(self.__templates):
 			ddtempl = self.__templates[i]
 			template = ddtempl.template
 			if template.flags & (DateTemplate.LINE_BEGIN|DateTemplate.LINE_END):
-				if logSys.getEffectiveLevel() <= logLevel-1: # pragma: no cover - very-heavy debug
-					logSys.log(logLevel-1, "  try to match last anchored template #%02i ...", i)
+				log(logLevel-1, "  try to match last anchored template #%02i ...", i)
 				match = template.matchDate(line)
 				ignoreBySearch = i
 			else:
 				distance, endpos = self.__lastPos[0], self.__lastEndPos[0]
-				if logSys.getEffectiveLevel() <= logLevel-1:
-					logSys.log(logLevel-1, "  try to match last template #%02i (from %r to %r): ...%r==%r %s %r==%r...",
-						i, distance, endpos,
-						line[distance-1:distance], self.__lastPos[1],
-						line[distance:endpos],
-						line[endpos:endpos+1], self.__lastEndPos[1])
-				# check same boundaries left/right, otherwise possible collision/pattern switch:
-				if (line[distance-1:distance] == self.__lastPos[1] and
-						line[endpos:endpos+1] == self.__lastEndPos[1]
-				):
+				log(logLevel-1, "  try to match last template #%02i (from %r to %r): ...%r==%r %s %r==%r...",
+					i, distance, endpos,
+					line[distance-1:distance], self.__lastPos[1],
+					line[distance:endpos],
+					line[endpos:endpos+1], self.__lastEndPos[2])
+				# check same boundaries left/right, outside fully equal, inside only if not alnum (e. g. bound RE
+				# with space or some special char), otherwise possible collision/pattern switch:
+				if ((
+					line[distance-1:distance] == self.__lastPos[1] or
+					(line[distance] == self.__lastPos[2] and not self.__lastPos[2].isalnum())
+				) and (
+					line[endpos:endpos+1] == self.__lastEndPos[2] or
+					(line[endpos-1] == self.__lastEndPos[1] and not self.__lastEndPos[1].isalnum())
+				)):
+					# search in line part only:
+					log(logLevel-1, "  boundaries are correct, search in part %r", line[distance:endpos])
 					match = template.matchDate(line, distance, endpos)
+				else:
+					log(logLevel-1, "  boundaries show conflict, try whole search")
+					match = template.matchDate(line)
+					ignoreBySearch = i
 			if match:
 				distance = match.start()
 				endpos = match.end()
 				# if different position, possible collision/pattern switch:
 				if (
+					len(self.__templates) == 1 or # single template:
 					template.flags & (DateTemplate.LINE_BEGIN|DateTemplate.LINE_END) or
 					(distance == self.__lastPos[0] and endpos == self.__lastEndPos[0])
 				):
-					logSys.log(logLevel, "  matched last time template #%02i", i)
+					log(logLevel, "  matched last time template #%02i", i)
 				else:
-					logSys.log(logLevel, "  ** last pattern collision - pattern change, search ...")
+					log(logLevel, "  ** last pattern collision - pattern change, reserve & search ...")
+					found = match, distance, endpos, i; # save current best alternative
 					match = None
 			else:
-				logSys.log(logLevel, "  ** last pattern not found - pattern change, search ...")
+				log(logLevel, "  ** last pattern not found - pattern change, search ...")
 		# search template and better match:
 		if not match:
-			logSys.log(logLevel, " search template (%i) ...", len(self.__templates))
-			found = None, 0x7fffffff, 0x7fffffff, -1
+			log(logLevel, " search template (%i) ...", len(self.__templates))
 			i = 0
 			for ddtempl in self.__templates:
-				if logSys.getEffectiveLevel() <= logLevel-1:
-					logSys.log(logLevel-1, "  try template #%02i: %s", i, ddtempl.name)
 				if i == ignoreBySearch:
 					i += 1
 					continue
+				log(logLevel-1, "  try template #%02i: %s", i, ddtempl.name)
 				template = ddtempl.template
 				match = template.matchDate(line)
 				if match:
 					distance = match.start()
 					endpos = match.end()
-					if logSys.getEffectiveLevel() <= logLevel:
-						logSys.log(logLevel, "  matched time template #%02i (at %r <= %r, %r) %s",
-							i, distance, ddtempl.distance, self.__lastPos[0], template.name)
+					log(logLevel, "  matched time template #%02i (at %r <= %r, %r) %s",
+						i, distance, ddtempl.distance, self.__lastPos[0], template.name)
 					## last (or single) template - fast stop:
 					if i+1 >= len(self.__templates):
 						break
@@ -408,7 +421,7 @@ class DateDetector(object):
 					## [grave] if distance changed, possible date-match was found somewhere
 					## in body of message, so save this template, and search further:
 					if distance > ddtempl.distance or distance > self.__lastPos[0]:
-						logSys.log(logLevel, "  ** distance collision - pattern change, reserve")
+						log(logLevel, "  ** distance collision - pattern change, reserve")
 						## shortest of both:
 						if distance < found[1]:
 							found = match, distance, endpos, i
@@ -422,7 +435,7 @@ class DateDetector(object):
 			# check other template was found (use this one with shortest distance):
 			if not match and found[0]:
 				match, distance, endpos, i = found
-				logSys.log(logLevel, "  use best time template #%02i", i)
+				log(logLevel, "  use best time template #%02i", i)
 				ddtempl = self.__templates[i]
 				template = ddtempl.template
 		# we've winner, incr hits, set distance, usage, reorder, etc:
@@ -432,8 +445,8 @@ class DateDetector(object):
 			ddtempl.distance = distance
 			if self.__firstUnused == i:
 				self.__firstUnused += 1
-			self.__lastPos = distance, line[distance-1:distance]
-			self.__lastEndPos = endpos, line[endpos:endpos+1]
+			self.__lastPos = distance, line[distance-1:distance], line[distance]
+			self.__lastEndPos = endpos, line[endpos-1], line[endpos:endpos+1]
 			# if not first - try to reorder current template (bubble up), they will be not sorted anymore:
 			if i and i != self.__lastTemplIdx:
 				i = self._reorderTemplate(i)
@@ -442,7 +455,7 @@ class DateDetector(object):
 			return (match, template)
 		# not found:
-		logSys.log(logLevel, " no template.")
+		log(logLevel, " no template.")
 		return (None, None)
 
 	@property
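The `matchTime` hunk above replaces per-call `logSys.getEffectiveLevel() <= logLevel` guards with a single up-front selection: `log` becomes either the real logger method or a no-op lambda, so the hot matching loop pays the level check once. A minimal sketch of the idiom:

```python
import logging

logging.basicConfig(level=logging.INFO)
logSys = logging.getLogger("demo")
logLevel = 5  # heavy-debug level, an assumption for this demo

# select the callable once; afterwards every log(...) call is either
# a direct logger call or a cheap no-op, with no repeated level checks:
log = logSys.log if logSys.getEffectiveLevel() <= logLevel else (lambda *args: None)

log(logLevel - 1, "try to match time for line: %.120s",
    "Jan  1 00:00:00 host sshd[1]: Failed password ...")
print(log is logSys.log)  # False at INFO level: the no-op lambda was selected
```

The trade-off is that a level change after the selection is not picked up, which is acceptable here because the selection happens at the top of each `matchTime` call.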


@@ -36,15 +36,16 @@ logSys = getLogger(__name__)
 RE_GROUPED = re.compile(r'(?<!(?:\(\?))(?<!\\)\((?!\?)')
 RE_GROUP = ( re.compile(r'^((?:\(\?\w+\))?\^?(?:\(\?\w+\))?)(.*?)(\$?)$'), r"\1(\2)\3" )
 
+RE_EXLINE_NO_BOUNDS = re.compile(r'^\{UNB\}')
 RE_EXLINE_BOUND_BEG = re.compile(r'^\{\^LN-BEG\}')
-RE_EXSANC_BOUND_BEG = re.compile(r'^\(\?:\^\|\\b\|\\W\)')
+RE_EXSANC_BOUND_BEG = re.compile(r'^\((?:\?:)?\^\|\\b\|\\W\)')
 RE_EXEANC_BOUND_BEG = re.compile(r'\(\?=\\b\|\\W\|\$\)$')
-RE_NO_WRD_BOUND_BEG = re.compile(r'^\(*(?:\(\?\w+\))?(?:\^|\(*\*\*|\(\?:\^)')
+RE_NO_WRD_BOUND_BEG = re.compile(r'^\(*(?:\(\?\w+\))?(?:\^|\(*\*\*|\((?:\?:)?\^)')
 RE_NO_WRD_BOUND_END = re.compile(r'(?<!\\)(?:\$\)?|\\b|\\s|\*\*\)*)$')
 RE_DEL_WRD_BOUNDS = ( re.compile(r'^\(*(?:\(\?\w+\))?\(*\*\*|(?<!\\)\*\*\)*$'),
 	lambda m: m.group().replace('**', '') )
-RE_LINE_BOUND_BEG = re.compile(r'^(?:\(\?\w+\))?(?:\^|\(\?:\^(?!\|))')
+RE_LINE_BOUND_BEG = re.compile(r'^(?:\(\?\w+\))?(?:\^|\((?:\?:)?\^(?!\|))')
 RE_LINE_BOUND_END = re.compile(r'(?<![\\\|])(?:\$\)?)$')
 RE_ALPHA_PATTERN = re.compile(r'(?<!\%)\%[aAbBpc]')
 
@@ -119,7 +120,7 @@ class DateTemplate(object):
 		if boundBegin:
 			self.flags |= DateTemplate.WORD_BEGIN if wordBegin != 'start' else DateTemplate.LINE_BEGIN
 			if wordBegin != 'start':
-				regex = r'(?:^|\b|\W)' + regex
+				regex = r'(?=^|\b|\W)' + regex
 			else:
 				regex = r"^(?:\W{0,2})?" + regex
 				if not self.name.startswith('{^LN-BEG}'):
@@ -128,8 +129,10 @@ class DateTemplate(object):
 		if boundEnd:
 			self.flags |= DateTemplate.WORD_END
 			regex += r'(?=\b|\W|$)'
-		if RE_LINE_BOUND_BEG.search(regex): self.flags |= DateTemplate.LINE_BEGIN
-		if RE_LINE_BOUND_END.search(regex): self.flags |= DateTemplate.LINE_END
+		if not (self.flags & DateTemplate.LINE_BEGIN) and RE_LINE_BOUND_BEG.search(regex):
+			self.flags |= DateTemplate.LINE_BEGIN
+		if not (self.flags & DateTemplate.LINE_END) and RE_LINE_BOUND_END.search(regex):
+			self.flags |= DateTemplate.LINE_END
 		# remove possible special pattern "**" in front and end of regex:
 		regex = RE_DEL_WRD_BOUNDS[0].sub(RE_DEL_WRD_BOUNDS[1], regex)
 		self._regex = regex
@@ -188,7 +191,7 @@ class DateTemplate(object):
 	def unboundPattern(pattern):
 		return RE_EXEANC_BOUND_BEG.sub('',
 			RE_EXSANC_BOUND_BEG.sub('',
-				RE_EXLINE_BOUND_BEG.sub('', pattern)
+				RE_EXLINE_BOUND_BEG.sub('', RE_EXLINE_NO_BOUNDS.sub('', pattern))
 			)
 		)
 
@@ -297,6 +300,10 @@ class DatePatternRegex(DateTemplate):
 	def setRegex(self, pattern, wordBegin=True, wordEnd=True):
 		# original pattern:
 		self._pattern = pattern
+		# if unbound signalled - reset boundaries left and right:
+		if RE_EXLINE_NO_BOUNDS.search(pattern):
+			pattern = RE_EXLINE_NO_BOUNDS.sub('', pattern)
+			wordBegin = wordEnd = False
 		# if explicit given {^LN-BEG} - remove it from pattern and set 'start' in wordBegin:
 		if wordBegin and RE_EXLINE_BOUND_BEG.search(pattern):
 			pattern = RE_EXLINE_BOUND_BEG.sub('', pattern)
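The `(?:^|\b|\W)` → `(?=^|\b|\W)` change above swaps a consuming word-boundary prefix for a zero-width lookahead. The difference matters when `match.start()` is used as the date's position: the consuming form eats the boundary character and shifts the reported start one character left. A small illustration with an assumed date-like pattern:

```python
import re

line = " 2020-11-20 18:51:36 error"

# consuming alternative group eats the leading boundary char (the space),
# so the whole-match start is the space, not the date:
m1 = re.search(r"(?:^|\b|\W)(\d{4}-\d{2}-\d{2})", line)

# zero-width lookahead asserts the boundary without consuming it,
# so the whole-match start coincides with the date itself:
m2 = re.search(r"(?=^|\b|\W)(\d{4}-\d{2}-\d{2})", line)

print(m1.start(), m2.start())  # 0 1
print(m1.group(1) == m2.group(1))  # True: the captured date is identical
```

Since the detector caches `match.start()`/`match.end()` to retry the same position on the next line, a stable, non-shifted start position is what makes that cache reliable.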


@@ -43,26 +43,20 @@ class FailManager:
 		self.__maxRetry = 3
 		self.__maxTime = 600
 		self.__failTotal = 0
-		self.maxMatches = 50
+		self.maxMatches = 5
 		self.__bgSvc = BgService()
 
 	def setFailTotal(self, value):
-		with self.__lock:
-			self.__failTotal = value
+		self.__failTotal = value
 
 	def getFailTotal(self):
-		with self.__lock:
-			return self.__failTotal
+		return self.__failTotal
 
 	def getFailCount(self):
 		# may be slow on large list of failures, should be used for test purposes only...
 		with self.__lock:
 			return len(self.__failList), sum([f.getRetry() for f in self.__failList.values()])
 
-	def getFailTotal(self):
-		with self.__lock:
-			return self.__failTotal
-
 	def setMaxRetry(self, value):
 		self.__maxRetry = value
 
@@ -92,10 +86,7 @@ class FailManager:
 			if attempt <= 0:
 				attempt += 1
 			unixTime = ticket.getTime()
-			fData.setLastTime(unixTime)
-			if fData.getLastReset() < unixTime - self.__maxTime:
-				fData.setLastReset(unixTime)
-				fData.setRetry(0)
+			fData.adjustTime(unixTime, self.__maxTime)
 			fData.inc(matches, attempt, count)
 			# truncate to maxMatches:
 			if self.maxMatches:
@@ -133,13 +124,12 @@ class FailManager:
 		return attempts
 
 	def size(self):
-		with self.__lock:
-			return len(self.__failList)
+		return len(self.__failList)
 
 	def cleanup(self, time):
 		with self.__lock:
 			todelete = [fid for fid,item in self.__failList.iteritems() \
-				if item.getLastTime() + self.__maxTime <= time]
+				if item.getTime() + self.__maxTime <= time]
 			if len(todelete) == len(self.__failList):
 				# remove all:
 				self.__failList = dict()
@@ -153,7 +143,7 @@ class FailManager:
 			else:
 				# create new dictionary without items to be deleted:
 				self.__failList = dict((fid,item) for fid,item in self.__failList.iteritems() \
-					if item.getLastTime() + self.__maxTime > time)
+					if item.getTime() + self.__maxTime > time)
 		self.__bgSvc.service()
 
 	def delFailure(self, fid):
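The hunk above collapses the inlined "reset retry counter when the observation window expired" logic into a single `fData.adjustTime(unixTime, maxTime)` call. The body of `adjustTime` is not shown in this diff; the following is a hypothetical reconstruction of the equivalent behavior, based only on the removed lines, to make the semantics concrete:

```python
class FailData:
    """Hypothetical stand-in for fail2ban's FailData (names assumed)."""
    def __init__(self):
        self.time = 0
        self.lastReset = 0
        self.retry = 0

    def adjustTime(self, t, maxTime):
        if t > self.time:
            self.time = t
        # if the last reset is older than the observation window,
        # start a fresh window and drop the retry counter:
        if self.lastReset < t - maxTime:
            self.lastReset = t
            self.retry = 0

f = FailData()
f.adjustTime(1000, 600)
f.retry += 1          # one failure inside the window
f.adjustTime(1700, 600)  # 700s later: window (600s) expired, counter resets
print(f.retry)  # 0
```

Moving this into the data object keeps `addFailure` focused on accounting and avoids three getter/setter round-trips per failure.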


@@ -87,20 +87,24 @@ RH4TAG = {
 
 # default failure groups map for customizable expressions (with different group-id):
 R_MAP = {
-	"ID": "fid",
-	"PORT": "fport",
+	"id": "fid",
+	"port": "fport",
 }
 
 def mapTag2Opt(tag):
-	try: # if should be mapped:
-		return R_MAP[tag]
-	except KeyError:
-		return tag.lower()
+	tag = tag.lower()
+	return R_MAP.get(tag, tag)
 
-# alternate names to be merged, e. g. alt_user_1 -> user ...
+# complex names:
+# ALT_ - alternate names to be merged, e. g. alt_user_1 -> user ...
 ALTNAME_PRE = 'alt_'
-ALTNAME_CRE = re.compile(r'^' + ALTNAME_PRE + r'(.*)(?:_\d+)?$')
+# TUPLE_ - names of parts to be combined to single value as tuple
+TUPNAME_PRE = 'tuple_'
+
+COMPLNAME_PRE = (ALTNAME_PRE, TUPNAME_PRE)
+COMPLNAME_CRE = re.compile(r'^(' + '|'.join(COMPLNAME_PRE) + r')(.*?)(?:_\d+)?$')
 
 ##
 # Regular expression class.
@@ -127,17 +131,27 @@ class Regex:
 		try:
 			self._regexObj = re.compile(regex, re.MULTILINE if multiline else 0)
 			self._regex = regex
-			self._altValues = {}
+			self._altValues = []
+			self._tupleValues = []
 			for k in filter(
-				lambda k: len(k) > len(ALTNAME_PRE) and k.startswith(ALTNAME_PRE),
-				self._regexObj.groupindex
+				lambda k: len(k) > len(COMPLNAME_PRE[0]), self._regexObj.groupindex
 			):
-				n = ALTNAME_CRE.match(k).group(1)
-				self._altValues[k] = n
-			self._altValues = list(self._altValues.items()) if len(self._altValues) else None
+				n = COMPLNAME_CRE.match(k)
+				if n:
+					g, n = n.group(1), mapTag2Opt(n.group(2))
+					if g == ALTNAME_PRE:
+						self._altValues.append((k,n))
+					else:
+						self._tupleValues.append((k,n))
+			self._altValues.sort()
+			self._tupleValues.sort()
+			self._altValues = self._altValues if len(self._altValues) else None
+			self._tupleValues = self._tupleValues if len(self._tupleValues) else None
 		except sre_constants.error:
 			raise RegexException("Unable to compile regular expression '%s'" %
 				regex)
+		# set fetch handler depending on presence of alternate (or tuple) tags:
+		self.getGroups = self._getGroupsWithAlt if (self._altValues or self._tupleValues) else self._getGroups
 
 	def __str__(self):
 		return "%s(%r)" % (self.__class__.__name__, self._regex)
@@ -277,18 +291,33 @@ class Regex:
 	# Returns all matched groups.
 	#
-	def getGroups(self):
-		if not self._altValues:
-			return self._matchCache.groupdict()
-		# merge alternate values (e. g. 'alt_user_1' -> 'user' or 'alt_host' -> 'host'):
+	def _getGroups(self):
+		return self._matchCache.groupdict()
+
+	def _getGroupsWithAlt(self):
 		fail = self._matchCache.groupdict()
 		#fail = fail.copy()
-		for k,n in self._altValues:
-			v = fail.get(k)
-			if v and not fail.get(n):
-				fail[n] = v
+		# merge alternate values (e. g. 'alt_user_1' -> 'user' or 'alt_host' -> 'host'):
+		if self._altValues:
+			for k,n in self._altValues:
+				v = fail.get(k)
+				if v and not fail.get(n):
+					fail[n] = v
+		# combine tuple values (e. g. 'id', 'tuple_id' ... 'tuple_id_N' -> 'id'):
+		if self._tupleValues:
+			for k,n in self._tupleValues:
+				v = fail.get(k)
+				t = fail.get(n)
+				if isinstance(t, tuple):
+					t += (v,)
+				else:
+					t = (t,v,)
+				fail[n] = t
 		return fail
 
+	def getGroups(self): # pragma: no cover - abstract function (replaced in __init__)
+		pass
+
 	##
 	# Returns skipped lines.
 	#
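The `alt_*` merging that `_getGroupsWithAlt` performs can be shown in isolation. This is a simplified illustration, not fail2ban's exact API: a pattern with two alternatives captures the user name into either `user` or `alt_user_1`, and the merge step folds the alternate back into the canonical key when it is empty:

```python
import re

# two alternatives for the same logical value, captured under different names:
rx = re.compile(r"user (?P<user>\S+)|login of (?P<alt_user_1>\S+)")

fail = rx.search("login of alice").groupdict()
# fail == {'user': None, 'alt_user_1': 'alice'}

# merge alternate values back into their canonical group name:
for k, n in [("alt_user_1", "user")]:
    v = fail.get(k)
    if v and not fail.get(n):
        fail[n] = v

print(fail["user"])  # alice
```

The `tuple_*` variant added in this commit works the same way, except that instead of a fallback it accumulates all captured parts into one tuple under the canonical key.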


@ -81,6 +81,7 @@ class Filter(JailThread):
## Ignore own IPs flag: ## Ignore own IPs flag:
self.__ignoreSelf = True self.__ignoreSelf = True
## The ignore IP list. ## The ignore IP list.
self.__ignoreIpSet = set()
self.__ignoreIpList = [] self.__ignoreIpList = []
## External command ## External command
self.__ignoreCommand = False self.__ignoreCommand = False
@ -106,8 +107,16 @@ class Filter(JailThread):
self.returnRawHost = False self.returnRawHost = False
## check each regex (used for test purposes): ## check each regex (used for test purposes):
self.checkAllRegex = False self.checkAllRegex = False
## avoid finding of pending failures (without ID/IP, used in fail2ban-regex):
self.ignorePending = True
## callback called on ignoreregex match :
self.onIgnoreRegex = None
## if true ignores obsolete failures (failure time < now - findTime): ## if true ignores obsolete failures (failure time < now - findTime):
self.checkFindTime = True self.checkFindTime = True
## shows that filter is in operation mode (processing new messages):
self.inOperation = True
## if true prevents against retarded banning in case of RC by too many failures (disabled only for test purposes):
self.banASAP = True
## Ticks counter ## Ticks counter
self.ticks = 0 self.ticks = 0
## Thread name: ## Thread name:
@ -169,7 +178,7 @@ class Filter(JailThread):
# @param value the regular expression # @param value the regular expression
def addFailRegex(self, value): def addFailRegex(self, value):
multiLine = self.getMaxLines() > 1 multiLine = self.__lineBufferSize > 1
try: try:
regex = FailRegex(value, prefRegex=self.__prefRegex, multiline=multiLine, regex = FailRegex(value, prefRegex=self.__prefRegex, multiline=multiLine,
useDns=self.__useDns) useDns=self.__useDns)
@ -452,10 +461,10 @@ class Filter(JailThread):
logSys.info( logSys.info(
"[%s] Attempt %s - %s", self.jailName, ip, datetime.datetime.fromtimestamp(unixTime).strftime("%Y-%m-%d %H:%M:%S") "[%s] Attempt %s - %s", self.jailName, ip, datetime.datetime.fromtimestamp(unixTime).strftime("%Y-%m-%d %H:%M:%S")
) )
self.failManager.addFailure(ticket, len(matches) or 1) attempts = self.failManager.addFailure(ticket, len(matches) or 1)
# Perform the ban if this attempt is resulted to: # Perform the ban if this attempt is resulted to:
self.performBan(ip) if attempts >= self.failManager.getMaxRetry():
self.performBan(ip)
return 1 return 1
@ -484,28 +493,36 @@ class Filter(JailThread):
# Create IP address object # Create IP address object
ip = IPAddr(ipstr) ip = IPAddr(ipstr)
# Avoid exact duplicates # Avoid exact duplicates
if ip in self.__ignoreIpList: if ip in self.__ignoreIpSet or ip in self.__ignoreIpList:
logSys.warn(" Ignore duplicate %r (%r), already in ignore list", ip, ipstr) logSys.log(logging.MSG, " Ignore duplicate %r (%r), already in ignore list", ip, ipstr)
return return
# log and append to ignore list # log and append to ignore list
logSys.debug(" Add %r to ignore list (%r)", ip, ipstr) logSys.debug(" Add %r to ignore list (%r)", ip, ipstr)
self.__ignoreIpList.append(ip) # if single IP (not DNS or a subnet) add to set, otherwise to list:
if ip.isSingle:
self.__ignoreIpSet.add(ip)
else:
self.__ignoreIpList.append(ip)
def delIgnoreIP(self, ip=None): def delIgnoreIP(self, ip=None):
# clear all: # clear all:
if ip is None: if ip is None:
self.__ignoreIpSet.clear()
del self.__ignoreIpList[:] del self.__ignoreIpList[:]
return return
# delete by ip: # delete by ip:
logSys.debug(" Remove %r from ignore list", ip) logSys.debug(" Remove %r from ignore list", ip)
self.__ignoreIpList.remove(ip) if ip in self.__ignoreIpSet:
self.__ignoreIpSet.remove(ip)
else:
self.__ignoreIpList.remove(ip)
def logIgnoreIp(self, ip, log_ignore, ignore_source="unknown source"): def logIgnoreIp(self, ip, log_ignore, ignore_source="unknown source"):
if log_ignore: if log_ignore:
logSys.info("[%s] Ignore %s by %s", self.jailName, ip, ignore_source) logSys.info("[%s] Ignore %s by %s", self.jailName, ip, ignore_source)
def getIgnoreIP(self): def getIgnoreIP(self):
return self.__ignoreIpList return self.__ignoreIpList + list(self.__ignoreIpSet)
## ##
# Check if IP address/DNS is in the ignore list. # Check if IP address/DNS is in the ignore list.
@ -545,8 +562,11 @@ class Filter(JailThread):
if self.__ignoreCache: c.set(key, True) if self.__ignoreCache: c.set(key, True)
return True return True
# check if the IP is covered by ignore IP (in set or in subnet/dns):
if ip in self.__ignoreIpSet:
self.logIgnoreIp(ip, log_ignore, ignore_source="ip")
return True
for net in self.__ignoreIpList: for net in self.__ignoreIpList:
# check if the IP is covered by ignore IP
if ip.isInNet(net): if ip.isInNet(net):
self.logIgnoreIp(ip, log_ignore, ignore_source=("ip" if net.isValid else "dns")) self.logIgnoreIp(ip, log_ignore, ignore_source=("ip" if net.isValid else "dns"))
if self.__ignoreCache: c.set(key, True) if self.__ignoreCache: c.set(key, True)
@ -569,29 +589,89 @@ class Filter(JailThread):
if self.__ignoreCache: c.set(key, False) if self.__ignoreCache: c.set(key, False)
return False return False
def _logWarnOnce(self, nextLTM, *args):
"""Log some issue as warning once per day, otherwise level 7"""
if MyTime.time() < getattr(self, nextLTM, 0):
if logSys.getEffectiveLevel() <= 7: logSys.log(7, *(args[0]))
else:
setattr(self, nextLTM, MyTime.time() + 24*60*60)
for args in args:
logSys.warning('[%s] ' + args[0], self.jailName, *args[1:])
def processLine(self, line, date=None): def processLine(self, line, date=None):
"""Split the time portion from log msg and return findFailures on them """Split the time portion from log msg and return findFailures on them
""" """
logSys.log(7, "Working on line %r", line)
noDate = False
if date: if date:
tupleLine = line tupleLine = line
self.__lastTimeText = tupleLine[1]
self.__lastDate = date
else: else:
l = line.rstrip('\r\n') # try to parse date:
logSys.log(7, "Working on line %r", line) timeMatch = self.dateDetector.matchTime(line)
m = timeMatch[0]
(timeMatch, template) = self.dateDetector.matchTime(l) if m:
if timeMatch: s = m.start(1)
tupleLine = ( e = m.end(1)
l[:timeMatch.start(1)], m = line[s:e]
l[timeMatch.start(1):timeMatch.end(1)], tupleLine = (line[:s], m, line[e:])
l[timeMatch.end(1):], if m: # found and not empty - retrive date:
(timeMatch, template) date = self.dateDetector.getTime(m, timeMatch)
) if date is not None:
# Lets get the time part
date = date[0]
self.__lastTimeText = m
self.__lastDate = date
else:
logSys.error("findFailure failed to parse timeText: %s", m)
# matched empty value - date is optional or not available - set it to last known or now:
elif self.__lastDate and self.__lastDate > MyTime.time() - 60:
# set it to last known:
tupleLine = ("", self.__lastTimeText, line)
date = self.__lastDate
else:
# set it to now:
date = MyTime.time()
else: else:
tupleLine = (l, "", "", None) tupleLine = ("", "", line)
# still no date - try to use last known:
if date is None:
noDate = True
if self.__lastDate and self.__lastDate > MyTime.time() - 60:
tupleLine = ("", self.__lastTimeText, line)
date = self.__lastDate
if self.checkFindTime:
# if in operation (modifications have been really found):
if self.inOperation:
# if weird date - we'd simulate now for timeing issue (too large deviation from now):
if (date is None or date < MyTime.time() - 60 or date > MyTime.time() + 60):
# log time zone issue as warning once per day:
self._logWarnOnce("_next_simByTimeWarn",
("Simulate NOW in operation since found time has too large deviation %s ~ %s +/- %s",
date, MyTime.time(), 60),
("Please check jail has possibly a timezone issue. Line with odd timestamp: %s",
line))
# simulate now as date:
date = MyTime.time()
self.__lastDate = date
else:
# in initialization (restore) phase, if too old - ignore:
if date is not None and date < MyTime.time() - self.getFindTime():
# log time zone issue as warning once per day:
self._logWarnOnce("_next_ignByTimeWarn",
("Ignore line since time %s < %s - %s",
date, MyTime.time(), self.getFindTime()),
("Please check jail has possibly a timezone issue. Line with odd timestamp: %s",
line))
# ignore - too old (obsolete) entry:
return []
# save last line (lazy convert of process line tuple to string on demand): # save last line (lazy convert of process line tuple to string on demand):
self.processedLine = lambda: "".join(tupleLine[::2]) self.processedLine = lambda: "".join(tupleLine[::2])
return self.findFailure(tupleLine, date) return self.findFailure(tupleLine, date, noDate=noDate)
def processLineAndAdd(self, line, date=None): def processLineAndAdd(self, line, date=None):
"""Processes the line for failures and populates failManager """Processes the line for failures and populates failManager
@ -603,13 +683,20 @@ class Filter(JailThread):
fail = element[3] fail = element[3]
logSys.debug("Processing line with time:%s and ip:%s", logSys.debug("Processing line with time:%s and ip:%s",
unixTime, ip) unixTime, ip)
# ensure the time is not in the future, e. g. by some estimated (assumed) time:
if self.checkFindTime and unixTime > MyTime.time():
unixTime = MyTime.time()
tick = FailTicket(ip, unixTime, data=fail) tick = FailTicket(ip, unixTime, data=fail)
if self._inIgnoreIPList(ip, tick): if self._inIgnoreIPList(ip, tick):
continue continue
logSys.info( logSys.info(
"[%s] Found %s - %s", self.jailName, ip, MyTime.time2str(unixTime) "[%s] Found %s - %s", self.jailName, ip, MyTime.time2str(unixTime)
) )
self.failManager.addFailure(tick) attempts = self.failManager.addFailure(tick)
# avoid RC on busy filter (too many failures) - if attempts for IP/ID reached maxretry,
# we can speedup ban, so do it as soon as possible:
if self.banASAP and attempts >= self.failManager.getMaxRetry():
self.performBan(ip)
# report to observer - failure was found, for possibly increasing of it retry counter (asynchronous) # report to observer - failure was found, for possibly increasing of it retry counter (asynchronous)
if Observers.Main is not None: if Observers.Main is not None:
Observers.Main.add('failureFound', self.failManager, self.jail, tick) Observers.Main.add('failureFound', self.failManager, self.jail, tick)
@@ -632,20 +719,26 @@ class Filter(JailThread):
 			self._errors //= 2
 		self.idle = True

-	##
-	# Returns true if the line should be ignored.
-	#
-	# Uses ignoreregex.
-	# @param line: the line
-	# @return: a boolean
-
-	def ignoreLine(self, tupleLines):
-		buf = Regex._tupleLinesBuf(tupleLines)
+	def _ignoreLine(self, buf, orgBuffer, failRegex=None):
+		# if multi-line buffer - use matched only, otherwise (single line) - original buf:
+		if failRegex and self.__lineBufferSize > 1:
+			orgBuffer = failRegex.getMatchedTupleLines()
+			buf = Regex._tupleLinesBuf(orgBuffer)
+		# search ignored:
+		fnd = None
 		for ignoreRegexIndex, ignoreRegex in enumerate(self.__ignoreRegex):
-			ignoreRegex.search(buf, tupleLines)
+			ignoreRegex.search(buf, orgBuffer)
 			if ignoreRegex.hasMatched():
-				return ignoreRegexIndex
-		return None
+				fnd = ignoreRegexIndex
+				logSys.log(7, "  Matched ignoreregex %d and was ignored", fnd)
+				if self.onIgnoreRegex: self.onIgnoreRegex(fnd, ignoreRegex)
+				# remove ignored match:
+				if not self.checkAllRegex or self.__lineBufferSize > 1:
+					# todo: check ignoreRegex.getUnmatchedTupleLines() would be better (fix testGetFailuresMultiLineIgnoreRegex):
+					if failRegex:
+						self.__lineBuffer = failRegex.getUnmatchedTupleLines()
+				if not self.checkAllRegex: break
+		return fnd
 	def _updateUsers(self, fail, user=()):
 		users = fail.get('users')

@@ -655,54 +748,31 @@ class Filter(JailThread):
 			fail['users'] = users = set()
 		users.add(user)
 		return users
-		return None
-
-	# # ATM incremental (non-empty only) merge deactivated ...
-	# @staticmethod
-	# def _updateFailure(self, mlfidGroups, fail):
-	# 	# reset old failure-ids when new types of id available in this failure:
-	# 	fids = set()
-	# 	for k in ('fid', 'ip4', 'ip6', 'dns'):
-	# 		if fail.get(k):
-	# 			fids.add(k)
-	# 	if fids:
-	# 		for k in ('fid', 'ip4', 'ip6', 'dns'):
-	# 			if k not in fids:
-	# 				try:
-	# 					del mlfidGroups[k]
-	# 				except:
-	# 					pass
-	# 	# update not empty values:
-	# 	mlfidGroups.update(((k,v) for k,v in fail.iteritems() if v))
+		return users

 	def _mergeFailure(self, mlfid, fail, failRegex):
 		mlfidFail = self.mlfidCache.get(mlfid) if self.__mlfidCache else None
 		users = None
 		nfflgs = 0
 		if fail.get("mlfgained"):
-			nfflgs |= 9
+			nfflgs |= (8|1)
 			if not fail.get('nofail'):
 				fail['nofail'] = fail["mlfgained"]
 		elif fail.get('nofail'): nfflgs |= 1
-		if fail.get('mlfforget'): nfflgs |= 2
+		if fail.pop('mlfforget', None): nfflgs |= 2
 		# if multi-line failure id (connection id) known:
 		if mlfidFail:
 			mlfidGroups = mlfidFail[1]
 			# update users set (hold all users of connect):
 			users = self._updateUsers(mlfidGroups, fail.get('user'))
 			# be sure we've correct current state ('nofail' and 'mlfgained' only from last failure)
-			try:
-				del mlfidGroups['nofail']
-				del mlfidGroups['mlfgained']
-			except KeyError:
-				pass
+			if mlfidGroups.pop('nofail', None): nfflgs |= 4
+			if mlfidGroups.pop('mlfgained', None): nfflgs |= 4
+			# if we had no pending failures then clear the matches (they are already provided):
+			if (nfflgs & 4) == 0 and not mlfidGroups.get('mlfpending', 0):
+				mlfidGroups.pop("matches", None)
+			# # ATM incremental (non-empty only) merge deactivated (for future version only),
+			# # it can be simulated using alternate value tags, like <F-ALT_VAL>...</F-ALT_VAL>,
+			# # so previous value 'val' will be overwritten only if 'alt_val' is not empty...
+			# _updateFailure(mlfidGroups, fail)
+			#
 			# overwrite multi-line failure with all values, available in fail:
-			mlfidGroups.update(fail)
+			mlfidGroups.update(((k,v) for k,v in fail.iteritems() if v is not None))
 			# new merged failure data:
 			fail = mlfidGroups
 			# if forget (disconnect/reset) - remove cached entry:

@@ -713,24 +783,19 @@ class Filter(JailThread):
 			mlfidFail = [self.__lastDate, fail]
 			self.mlfidCache.set(mlfid, mlfidFail)
 		# check users in order to avoid reset failure by multiple logon-attempts:
-		if users and len(users) > 1:
-			# we've new user, reset 'nofail' because of multiple users attempts:
-			try:
-				del fail['nofail']
-				nfflgs &= ~1 # reset nofail
-			except KeyError:
-				pass
+		if fail.pop('mlfpending', 0) or users and len(users) > 1:
+			# we've pending failures or new user, reset 'nofail' because of failures or multiple users attempts:
+			fail.pop('nofail', None)
+			fail.pop('mlfgained', None)
+			nfflgs &= ~(8|1) # reset nofail and gained
 		# merge matches:
-		if not (nfflgs & 1): # current nofail state (corresponding users)
-			try:
-				m = fail.pop("nofail-matches")
-				m += fail.get("matches", [])
-			except KeyError:
-				m = fail.get("matches", [])
-			if not (nfflgs & 8): # no gain signaled
+		if (nfflgs & 1) == 0: # current nofail state (corresponding users)
+			m = fail.pop("nofail-matches", [])
+			m += fail.get("matches", [])
+			if (nfflgs & 8) == 0: # no gain signaled
 				m += failRegex.getMatchedTupleLines()
 			fail["matches"] = m
-		elif not (nfflgs & 2) and (nfflgs & 1): # not mlfforget and nofail:
+		elif (nfflgs & 3) == 1: # not mlfforget and nofail:
 			fail["nofail-matches"] = fail.get("nofail-matches", []) + failRegex.getMatchedTupleLines()
 		# return merged:
 		return fail
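The `nfflgs` rewrite above packs several boolean states into one integer and tests them with masks: `(8|1)` sets two flags at once, `&= ~(8|1)` clears them, and `(nfflgs & 3) == 1` checks "bit 1 set, bit 2 clear" in a single comparison. A small self-contained sketch of that bit-flag idiom (the flag names are illustrative, chosen to match how the diff appears to use bits 1/2/4/8):

```python
NOFAIL, FORGET, RESET, GAINED = 1, 2, 4, 8  # illustrative names for the bits

nfflgs = 0
nfflgs |= (GAINED | NOFAIL)    # same shape as "nfflgs |= (8|1)" in the diff
assert nfflgs == 9
nfflgs &= ~(GAINED | NOFAIL)   # "nfflgs &= ~(8|1)": clear both flags at once
assert nfflgs == 0

nfflgs = NOFAIL
# "(nfflgs & 3) == 1" reads: NOFAIL is set AND FORGET is clear, in one test
assert (nfflgs & (FORGET | NOFAIL)) == NOFAIL
```
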
@@ -743,7 +808,7 @@ class Filter(JailThread):
 	# to find the logging time.
 	# @return a dict with IP and timestamp.

-	def findFailure(self, tupleLine, date=None):
+	def findFailure(self, tupleLine, date, noDate=False):
 		failList = list()

 		ll = logSys.getEffectiveLevel()

@@ -753,62 +818,33 @@ class Filter(JailThread):
 			returnRawHost = True
 			cidr = IPAddr.CIDR_RAW

-		# Checks if we mut ignore this line.
-		if self.ignoreLine([tupleLine[::2]]) is not None:
-			# The ignoreregex matched. Return.
-			if ll <= 7: logSys.log(7, "Matched ignoreregex and was \"%s\" ignored",
-				"".join(tupleLine[::2]))
-			return failList
-
-		timeText = tupleLine[1]
-		if date:
-			self.__lastTimeText = timeText
-			self.__lastDate = date
-		elif timeText:
-			dateTimeMatch = self.dateDetector.getTime(timeText, tupleLine[3])
-			if dateTimeMatch is None:
-				logSys.error("findFailure failed to parse timeText: %s", timeText)
-				date = self.__lastDate
-			else:
-				# Lets get the time part
-				date = dateTimeMatch[0]
-				self.__lastTimeText = timeText
-				self.__lastDate = date
-		else:
-			timeText = self.__lastTimeText or "".join(tupleLine[::2])
-			date = self.__lastDate
-
-		if self.checkFindTime and date is not None and date < MyTime.time() - self.getFindTime():
-			if ll <= 5: logSys.log(5, "Ignore line since time %s < %s - %s",
-				date, MyTime.time(), self.getFindTime())
-			return failList
-
 		if self.__lineBufferSize > 1:
-			orgBuffer = self.__lineBuffer = (
-				self.__lineBuffer + [tupleLine[:3]])[-self.__lineBufferSize:]
+			self.__lineBuffer.append(tupleLine)
+			orgBuffer = self.__lineBuffer = self.__lineBuffer[-self.__lineBufferSize:]
 		else:
-			orgBuffer = self.__lineBuffer = [tupleLine[:3]]
-		if ll <= 5: logSys.log(5, "Looking for match of %r", self.__lineBuffer)
-		buf = Regex._tupleLinesBuf(self.__lineBuffer)
+			orgBuffer = self.__lineBuffer = [tupleLine]
+		if ll <= 5: logSys.log(5, "Looking for match of %r", orgBuffer)
+		buf = Regex._tupleLinesBuf(orgBuffer)

+		# Checks if we must ignore this line (only if fewer ignoreregex than failregex).
+		if self.__ignoreRegex and len(self.__ignoreRegex) < len(self.__failRegex) - 2:
+			if self._ignoreLine(buf, orgBuffer) is not None:
+				# The ignoreregex matched. Return.
+				return failList
+
 		# Pre-filter fail regex (if available):
 		preGroups = {}
 		if self.__prefRegex:
 			if ll <= 5: logSys.log(5, "  Looking for prefregex %r", self.__prefRegex.getRegex())
-			self.__prefRegex.search(buf, self.__lineBuffer)
+			self.__prefRegex.search(buf, orgBuffer)
 			if not self.__prefRegex.hasMatched():
 				if ll <= 5: logSys.log(5, "  Prefregex not matched")
 				return failList
 			preGroups = self.__prefRegex.getGroups()
 			if ll <= 7: logSys.log(7, "  Pre-filter matched %s", preGroups)
-			repl = preGroups.get('content')
+			repl = preGroups.pop('content', None)
 			# Content replacement:
 			if repl:
-				del preGroups['content']
 				self.__lineBuffer, buf = [('', '', repl)], None
 		# Iterates over all the regular expressions.

@@ -826,28 +862,21 @@ class Filter(JailThread):
 				# The failregex matched.
 				if ll <= 7: logSys.log(7, "  Matched failregex %d: %s", failRegexIndex, fail)
 				# Checks if we must ignore this match.
-				if self.ignoreLine(failRegex.getMatchedTupleLines()) \
-						is not None:
+				if self.__ignoreRegex and self._ignoreLine(buf, orgBuffer, failRegex) is not None:
 					# The ignoreregex matched. Remove ignored match.
-					self.__lineBuffer, buf = failRegex.getUnmatchedTupleLines(), None
-					if ll <= 7: logSys.log(7, "  Matched ignoreregex and was ignored")
+					buf = None
 					if not self.checkAllRegex:
 						break
-					else:
-						continue
-
-				if date is None:
-					logSys.warning(
-						"Found a match for %r but no valid date/time "
-						"found for %r. Please try setting a custom "
-						"date pattern (see man page jail.conf(5)). "
-						"If format is complex, please "
-						"file a detailed issue on"
-						" https://github.com/fail2ban/fail2ban/issues "
-						"in order to get support for this format.",
-						"\n".join(failRegex.getMatchedLines()), timeText)
 					continue
+				if noDate:
+					self._logWarnOnce("_next_noTimeWarn",
+						("Found a match but no valid date/time found for %r.", tupleLine[1]),
+						("Match without a timestamp: %s", "\n".join(failRegex.getMatchedLines())),
+						("Please try setting a custom date pattern (see man page jail.conf(5)).",)
+					)
+				if date is None and self.checkFindTime: continue
 				# we should check all regex (bypass on multi-line, otherwise too complex):
-				if not self.checkAllRegex or self.getMaxLines() > 1:
+				if not self.checkAllRegex or self.__lineBufferSize > 1:
 					self.__lineBuffer, buf = failRegex.getUnmatchedTupleLines(), None
 				# merge data if multi-line failure:
 				raw = returnRawHost

@@ -892,7 +921,8 @@ class Filter(JailThread):
 					if host is None:
 						if ll <= 7: logSys.log(7, "No failure-id by mlfid %r in regex %s: %s",
 							mlfid, failRegexIndex, fail.get('mlfforget', "waiting for identifier"))
-						if not self.checkAllRegex: return failList
+						fail['mlfpending'] = 1; # mark failure is pending
+						if not self.checkAllRegex and self.ignorePending: return failList
 						ips = [None]
 				# if raw - add single ip or failure-id,
 				# otherwise expand host to multiple ips using dns (or ignore it if not valid):

@@ -905,6 +935,9 @@ class Filter(JailThread):
 					# otherwise, try to use dns conversion:
 					else:
 						ips = DNSUtils.textToIp(host, self.__useDns)
+				# if checkAllRegex we must make a copy (to be sure next RE doesn't change merged/cached failure):
+				if self.checkAllRegex and mlfid is not None:
+					fail = fail.copy()
 				# append failure with match to the list:
 				for ip in ips:
 					failList.append([failRegexIndex, ip, date, fail])
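The `fail = fail.copy()` guard added above exists because, with `checkAllRegex`, several regexes can report into the same cached (merged) failure dict; appending the shared reference would let a later iteration mutate results already collected. A stripped-down illustration (the dict contents are made up):

```python
cached = {"user": "root", "matches": ["line1"]}  # stands in for the mlfid cache entry

fail_list = []
for regex_index in (0, 1):
	fail = cached.copy()   # the diff's "fail = fail.copy()" guard
	fail["regex"] = regex_index
	fail_list.append(fail)

# without the copy, both list entries would alias one dict, and the cache
# entry itself would carry the key set by the last iteration
```
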
@@ -950,7 +983,7 @@ class FileFilter(Filter):
 			log.setPos(lastpos)
 		self.__logs[path] = log
 		logSys.info("Added logfile: %r (pos = %s, hash = %s)" , path, log.getPos(), log.getHash())
-		if autoSeek:
+		if autoSeek and not tail:
 			self.__autoSeek[path] = autoSeek
 		self._addLogPath(path) # backend specific

@@ -1034,7 +1067,7 @@ class FileFilter(Filter):
 	# MyTime.time()-self.findTime. When a failure is detected, a FailTicket
 	# is created and is added to the FailManager.

-	def getFailures(self, filename):
+	def getFailures(self, filename, inOperation=None):
 		log = self.getLog(filename)
 		if log is None:
 			logSys.error("Unable to get failures in %s", filename)

@@ -1079,10 +1112,15 @@ class FileFilter(Filter):
 			if has_content:
 				while not self.idle:
 					line = log.readline()
-					if not line or not self.active:
-						# The jail reached the bottom or has been stopped
+					if not self.active: break; # jail has been stopped
+					if not line:
+						# The jail reached the bottom, simply set in operation for this log
+						# (since we are first time at end of file, growing is only possible after modifications):
+						log.inOperation = True
 						break
-					self.processLineAndAdd(line)
+					# acquire in operation from log and process:
+					self.inOperation = inOperation if inOperation is not None else log.inOperation
+					self.processLineAndAdd(line.rstrip('\r\n'))
 		finally:
 			log.close()
 		db = self.jail.database
@@ -1220,7 +1258,7 @@ except ImportError: # pragma: no cover
 class FileContainer:

-	def __init__(self, filename, encoding, tail = False):
+	def __init__(self, filename, encoding, tail=False):
 		self.__filename = filename
 		self.setEncoding(encoding)
 		self.__tail = tail

@@ -1241,6 +1279,8 @@ class FileContainer:
 				self.__pos = 0
 		finally:
 			handler.close()
+		## shows that log is in operation mode (expecting new messages only from here):
+		self.inOperation = tail

 	def getFileName(self):
 		return self.__filename

@@ -1314,16 +1354,17 @@ class FileContainer:
 			return line.decode(enc, 'strict')
 		except (UnicodeDecodeError, UnicodeEncodeError) as e:
 			global _decode_line_warn
-			lev = logging.DEBUG
-			if _decode_line_warn.get(filename, 0) <= MyTime.time():
+			lev = 7
+			if not _decode_line_warn.get(filename, 0):
 				lev = logging.WARNING
-				_decode_line_warn[filename] = MyTime.time() + 24*60*60
+				_decode_line_warn.set(filename, 1)
 			logSys.log(lev,
-				"Error decoding line from '%s' with '%s'."
-				" Consider setting logencoding=utf-8 (or another appropriate"
-				" encoding) for this jail. Continuing"
-				" to process line ignoring invalid characters: %r",
-				filename, enc, line)
+				"Error decoding line from '%s' with '%s'.", filename, enc)
+			if logSys.getEffectiveLevel() <= lev:
+				logSys.log(lev, "Consider setting logencoding=utf-8 (or another appropriate"
+					" encoding) for this jail. Continuing"
+					" to process line ignoring invalid characters: %r",
+					line)
 			# decode with replacing error chars:
 			line = line.decode(enc, 'replace')
 		return line

@@ -1344,7 +1385,7 @@ class FileContainer:
 		## print "D: Closed %s with pos %d" % (handler, self.__pos)
 		## sys.stdout.flush()

-_decode_line_warn = {}
+_decode_line_warn = Utils.Cache(maxCount=1000, maxTime=24*60*60);

 ##
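The decode path above first tries a strict decode and, on failure, logs once per file and re-decodes with `'replace'` so a single bad byte does not drop the whole log line. The fallback itself is plain Python and can be sketched standalone:

```python
raw = b"caf\xe9 login failed"   # e.g. Latin-1 bytes in a log declared as UTF-8

try:
	line = raw.decode("utf-8", "strict")
except UnicodeDecodeError:
	# keep processing the line; undecodable bytes become U+FFFD markers
	line = raw.decode("utf-8", "replace")
```

The replacement character makes the damage visible while the failregex can still match the rest of the line.
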
@@ -79,7 +79,8 @@ class FilterGamin(FileFilter):
 		this is a common logic and must be shared/provided by FileFilter
 		"""
 		self.getFailures(path)
-		self.performBan()
+		if not self.banASAP: # pragma: no cover
+			self.performBan()
 		self.__modified = False

 	##
@@ -111,13 +111,16 @@ class FilterPoll(FileFilter):
 				modlst = []
 				Utils.wait_for(lambda: not self.active or self.getModified(modlst),
 					self.sleeptime)
+				if not self.active: # pragma: no cover - timing
+					break
 				for filename in modlst:
 					self.getFailures(filename)
 					self.__modified = True
 				self.ticks += 1
 				if self.__modified:
-					self.performBan()
+					if not self.banASAP: # pragma: no cover
+						self.performBan()
 					self.__modified = False
 			except Exception as e: # pragma: no cover
 				if not self.active: # if not active - error by stop...

@@ -139,7 +142,7 @@ class FilterPoll(FileFilter):
 		try:
 			logStats = os.stat(filename)
 			stats = logStats.st_mtime, logStats.st_ino, logStats.st_size
-			pstats = self.__prevStats.get(filename, (0))
+			pstats = self.__prevStats.get(filename, (0,))
 			if logSys.getEffectiveLevel() <= 4:
 				# we do not want to waste time on strftime etc if not necessary
 				dt = logStats.st_mtime - pstats[0]
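The `(0)` → `(0,)` fix above corrects a classic Python pitfall: parentheses alone do not create a tuple, so the old default was the integer `0` and `pstats[0]` would raise `TypeError`. In brief:

```python
pstats = (0)    # just the int 0 wrapped in parentheses
assert isinstance(pstats, int)

pstats = (0,)   # the trailing comma makes a one-element tuple
assert isinstance(pstats, tuple)
value = pstats[0]  # indexing now works; (0)[0] would raise TypeError
```
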
@@ -140,7 +140,8 @@ class FilterPyinotify(FileFilter):
 		"""
 		if not self.idle:
 			self.getFailures(path)
-			self.performBan()
+			if not self.banASAP: # pragma: no cover
+				self.performBan()
 			self.__modified = False

 	def _addPending(self, path, reason, isDir=False):

@@ -187,7 +188,8 @@ class FilterPyinotify(FileFilter):
 		for path, isDir in found.iteritems():
 			self._delPending(path)
 			# refresh monitoring of this:
-			self._refreshWatcher(path, isDir=isDir)
+			if isDir is not None:
+				self._refreshWatcher(path, isDir=isDir)
 			if isDir:
 				# check all files belong to this dir:
 				for logpath in self.__watchFiles:

@@ -270,7 +272,13 @@ class FilterPyinotify(FileFilter):
 	def _addLogPath(self, path):
 		self._addFileWatcher(path)
-		self._process_file(path)
+		# initial scan:
+		if self.active:
+			# we can execute it right now:
+			self._process_file(path)
+		else:
+			# retard until filter gets started, isDir=None signals special case: process file only (don't need to refresh monitor):
+			self._addPending(path, ('INITIAL', path), isDir=None)

 	##
 	# Delete a log path

@@ -278,9 +286,9 @@ class FilterPyinotify(FileFilter):
 	# @param path the log file to delete

 	def _delLogPath(self, path):
+		self._delPending(path)
 		if not self._delFileWatcher(path): # pragma: no cover
 			logSys.error("Failed to remove watch on path: %s", path)
-		self._delPending(path)

 		path_dir = dirname(path)
 		for k in self.__watchFiles:

@@ -290,8 +298,8 @@ class FilterPyinotify(FileFilter):
 		if path_dir:
 			# Remove watches for the directory
 			# since there is no other monitored file under this directory
-			self._delDirWatcher(path_dir)
 			self._delPending(path_dir)
+			self._delDirWatcher(path_dir)

 	# pyinotify.ProcessEvent default handler:
 	def __process_default(self, event):
@@ -190,6 +190,13 @@ class FilterSystemd(JournalFilter): # pragma: systemd no cover
 	def getJournalReader(self):
 		return self.__journal

+	def getJrnEntTime(self, logentry):
+		""" Returns time of entry as tuple (ISO-str, Posix)."""
+		date = logentry.get('_SOURCE_REALTIME_TIMESTAMP')
+		if date is None:
+			date = logentry.get('__REALTIME_TIMESTAMP')
+		return (date.isoformat(), time.mktime(date.timetuple()) + date.microsecond/1.0E6)
+
 	##
 	# Format journal log entry into syslog style
 	#

@@ -222,9 +229,8 @@ class FilterSystemd(JournalFilter): # pragma: systemd no cover
 					logelements[-1] += v
 			logelements[-1] += ":"
 			if logelements[-1] == "kernel:":
-				if '_SOURCE_MONOTONIC_TIMESTAMP' in logentry:
-					monotonic = logentry.get('_SOURCE_MONOTONIC_TIMESTAMP')
-				else:
+				monotonic = logentry.get('_SOURCE_MONOTONIC_TIMESTAMP')
+				if monotonic is None:
 					monotonic = logentry.get('__MONOTONIC_TIMESTAMP')[0]
 				logelements.append("[%12.6f]" % monotonic.total_seconds())
 		msg = logentry.get('MESSAGE','')

@@ -235,13 +241,11 @@ class FilterSystemd(JournalFilter): # pragma: systemd no cover
 		logline = " ".join(logelements)

-		date = logentry.get('_SOURCE_REALTIME_TIMESTAMP',
-				logentry.get('__REALTIME_TIMESTAMP'))
+		date = self.getJrnEntTime(logentry)
 		logSys.log(5, "[%s] Read systemd journal entry: %s %s", self.jailName,
-			date.isoformat(), logline)
+			date[0], logline)
 		## use the same type for 1st argument:
-		return ((logline[:0], date.isoformat(), logline.replace('\n', '\\n')),
-			time.mktime(date.timetuple()) + date.microsecond/1.0E6)
+		return ((logline[:0], date[0], logline.replace('\n', '\\n')), date[1])

 	def seekToTime(self, date):
 		if not isinstance(date, datetime.datetime):

@@ -262,9 +266,12 @@ class FilterSystemd(JournalFilter): # pragma: systemd no cover
 				"Jail regexs will be checked against all journal entries, "
 				"which is not advised for performance reasons.")

-			# Seek to now - findtime in journal
-			start_time = datetime.datetime.now() - \
-					datetime.timedelta(seconds=int(self.getFindTime()))
+			# Try to obtain the last known time (position of journal)
+			start_time = 0
+			if self.jail.database is not None:
+				start_time = self.jail.database.getJournalPos(self.jail, 'systemd-journal') or 0
+			# Seek to max(last_known_time, now - findtime) in journal
+			start_time = max( start_time, MyTime.time() - int(self.getFindTime()) )
 			self.seekToTime(start_time)
 			# Move back one entry to ensure do not end up in dead space
 			# if start time beyond end of journal

@@ -303,16 +310,20 @@ class FilterSystemd(JournalFilter): # pragma: systemd no cover
 						e, exc_info=logSys.getEffectiveLevel() <= logging.DEBUG)
 					self.ticks += 1
 					if logentry:
-						self.processLineAndAdd(
-							*self.formatJournalEntry(logentry))
+						line = self.formatJournalEntry(logentry)
+						self.processLineAndAdd(*line)
 						self.__modified += 1
 						if self.__modified >= 100: # todo: should be configurable
 							break
 					else:
 						break
 				if self.__modified:
-					self.performBan()
+					if not self.banASAP: # pragma: no cover
+						self.performBan()
 					self.__modified = 0
+					# update position in log (time and iso string):
+					if self.jail.database is not None:
+						self.jail.database.updateJournal(self.jail, 'systemd-journal', line[1], line[0][1])
 			except Exception as e: # pragma: no cover
 				if not self.active: # if not active - error by stop...
 					break
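The new `getJrnEntTime` helper above prefers the entry's source timestamp and falls back to the receive timestamp, returning both an ISO string and POSIX seconds. The conversion is plain `datetime`/`time` arithmetic and can be reproduced standalone (the dict keys mirror the journal field names used in the diff; the sample entry is synthetic):

```python
import datetime, time

def entry_time(entry):
	"""Return (ISO string, POSIX seconds) for a journal-like entry dict."""
	date = entry.get('_SOURCE_REALTIME_TIMESTAMP')
	if date is None:
		date = entry.get('__REALTIME_TIMESTAMP')
	# mktime handles whole seconds (local time); microseconds are added back:
	return (date.isoformat(),
		time.mktime(date.timetuple()) + date.microsecond / 1.0e6)

d = datetime.datetime(2020, 11, 20, 18, 51, 36, 500000)
iso, posix = entry_time({'__REALTIME_TIMESTAMP': d})
```
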
@@ -337,7 +337,7 @@ class IPAddr(object):
 		return repr(self.ntoa)

 	def __str__(self):
-		return self.ntoa
+		return self.ntoa if isinstance(self.ntoa, basestring) else str(self.ntoa)

 	def __reduce__(self):
 		"""IPAddr pickle-handler, that simply wraps IPAddr to the str

@@ -379,6 +379,12 @@ class IPAddr(object):
 		"""
 		return self._family != socket.AF_UNSPEC

+	@property
+	def isSingle(self):
+		"""Returns whether the object is a single IP address (not DNS and subnet)
+		"""
+		return self._plen == {socket.AF_INET: 32, socket.AF_INET6: 128}.get(self._family, -1000)
+
 	def __eq__(self, other):
 		if self._family == IPAddr.CIDR_RAW and not isinstance(other, IPAddr):
 			return self._raw == other

@@ -511,6 +517,11 @@ class IPAddr(object):
 		return (self.addr & mask) == net.addr

+	def contains(self, ip):
+		"""Return whether the object (as network) contains given IP
+		"""
+		return isinstance(ip, IPAddr) and (ip == self or ip.isInNet(self))
+
 	# Pre-calculated map: addr to maskplen
 	def __getMaskMap():
 		m6 = (1 << 128)-1
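The new `isSingle` property treats an address as "single" when its prefix length equals the full width of the family (32 for IPv4, 128 for IPv6), and `contains` checks network membership. The same two checks can be expressed with the standard-library `ipaddress` module (this is an analogous stdlib sketch, not fail2ban's `IPAddr` API):

```python
import ipaddress

def is_single(net_str):
	"""True if the string denotes exactly one address (a /32 or /128)."""
	net = ipaddress.ip_network(net_str, strict=False)
	return net.num_addresses == 1

# containment check, analogous to IPAddr.contains:
net = ipaddress.ip_network("192.0.2.0/24")
member = ipaddress.ip_address("192.0.2.7") in net
```
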
@@ -161,6 +161,10 @@ class Jail(object):
 		"""
 		return self.__db

+	@database.setter
+	def database(self, value):
+		self.__db = value;
+
 	@property
 	def filter(self):
 		"""The filter which the jail is using to monitor log files.

@@ -192,6 +196,12 @@ class Jail(object):
 			("Actions", self.actions.status(flavor=flavor)),
 		]

+	@property
+	def hasFailTickets(self):
+		"""Retrieve whether queue has tickets to ban.
+		"""
+		return not self.__queue.empty()
+
 	def putFailTicket(self, ticket):
 		"""Add a fail ticket to the jail.
@@ -120,3 +120,6 @@ class JailThread(Thread):
 ## python 2.x replace binding of private __bootstrap method:
 if sys.version_info < (3,): # pragma: 3.x no cover
 	JailThread._Thread__bootstrap = JailThread._JailThread__bootstrap
+## python 3.9, restore isAlive method:
+elif not hasattr(JailThread, 'isAlive'): # pragma: 2.x no cover
+	JailThread.isAlive = JailThread.is_alive
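The hunk above restores `isAlive` on Python 3.9, where the long-deprecated camelCase alias of `Thread.is_alive` was finally removed. The same compatibility shim works for any `Thread` subclass:

```python
import threading

class Worker(threading.Thread):
	pass

# Python 3.9 removed the camelCase alias; restore it if missing so legacy
# call sites keep working on both old and new interpreters:
if not hasattr(Worker, 'isAlive'):
	Worker.isAlive = Worker.is_alive

t = Worker()
alive = t.isAlive()  # thread not started yet
```
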
@@ -121,8 +121,11 @@ class MyTime:
 		@return ISO-capable string representation of given unixTime
 		"""
-		return datetime.datetime.fromtimestamp(
-			unixTime).replace(microsecond=0).strftime(format)
+		# consider end of 9999th year (in GMT+23 to avoid year overflow in other TZ)
+		dt = datetime.datetime.fromtimestamp(
+			unixTime).replace(microsecond=0
+			) if unixTime < 253402214400 else datetime.datetime(9999, 12, 31, 23, 59, 59)
+		return dt.strftime(format)

 	## precreate/precompile primitives used in str2seconds:
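The clamp above guards `fromtimestamp` against timestamps past year 9999 (e.g. a "permanent" ban time), which would otherwise raise; 253402214400 is 9999-12-31 00:00:00 UTC, leaving a day of margin so no time zone up to GMT+23 overflows. A standalone sketch of the same guard (function name is illustrative):

```python
import datetime

END_OF_9999 = 253402214400  # 9999-12-31 00:00:00 UTC

def time2str(unix_time, fmt="%Y-%m-%d %H:%M:%S"):
	# clamp to the representable range instead of raising for huge timestamps
	if unix_time < END_OF_9999:
		dt = datetime.datetime.fromtimestamp(unix_time).replace(microsecond=0)
	else:
		dt = datetime.datetime(9999, 12, 31, 23, 59, 59)
	return dt.strftime(fmt)

s = time2str(10**13)  # far beyond year 9999: clamped, not an OverflowError
```
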
@@ -87,7 +87,7 @@ class ObserverThread(JailThread):
 		except KeyError:
 			raise KeyError("Invalid event index : %s" % i)

-	def __delitem__(self, name):
+	def __delitem__(self, i):
 		try:
 			del self._queue[i]
 		except KeyError:

@@ -146,9 +146,11 @@ class ObserverThread(JailThread):
 	def pulse_notify(self):
 		"""Notify wakeup (sets /and resets/ notify event)
 		"""
-		if not self._paused and self._notify:
-			self._notify.set()
-			#self._notify.clear()
+		if not self._paused:
+			n = self._notify
+			if n:
+				n.set()
+				#n.clear()

 	def add(self, *event):
 		"""Add a event to queue and notify thread to wake up.

@@ -237,6 +239,7 @@ class ObserverThread(JailThread):
 					break
 			## end of main loop - exit
 			logSys.info("Observer stopped, %s events remaining.", len(self._queue))
+			self._notify = None
 			#print("Observer stopped, %s events remaining." % len(self._queue))
 		except Exception as e:
 			logSys.error('Observer stopped after error: %s', e, exc_info=True)

@@ -262,9 +265,8 @@ class ObserverThread(JailThread):
 		if not self.active:
 			super(ObserverThread, self).start()

-	def stop(self):
+	def stop(self, wtime=5, forceQuit=True):
 		if self.active and self._notify:
-			wtime = 5
 			logSys.info("Observer stop ... try to end queue %s seconds", wtime)
 			#print("Observer stop ....")
 			# just add shutdown job to make possible wait later until full (events remaining)

@@ -276,10 +278,15 @@ class ObserverThread(JailThread):
 			#self.pulse_notify()
 			self._notify = None
 			# wait max wtime seconds until full (events remaining)
-			self.wait_empty(wtime)
-			n.clear()
-			self.active = False
-			self.wait_idle(0.5)
+			if self.wait_empty(wtime) or forceQuit:
+				n.clear()
+				self.active = False; # leave outer (active) loop
+				self._paused = True; # leave inner (queue) loop
+				self.__db = None
+			else:
+				self._notify = n
+				return self.wait_idle(min(wtime, 0.5)) and not self.is_full
+		return True

 	@property
 	def is_full(self):
@@ -58,6 +58,23 @@ except ImportError: # pragma: no cover
	def _thread_name():
		return threading.current_thread().__class__.__name__

+try:
+	FileExistsError
+except NameError: # pragma: 3.x no cover
+	FileExistsError = OSError
+
+def _make_file_path(name):
+	"""Creates the path of a file (last level only) on demand"""
+	name = os.path.dirname(name)
+	# only if it is absolute (e. g. important for socket, so if unix path):
+	if os.path.isabs(name):
+		# be sure path exists (create last level of directory on demand):
+		try:
+			os.mkdir(name)
+		except (OSError, FileExistsError) as e:
+			if e.errno != 17: # pragma: no cover - everything but EEXIST is not covered
+				raise
class Server:

@@ -81,8 +98,6 @@ class Server:
			'Linux': '/dev/log',
		}
		self.__prev_signals = {}
-		# replace real thread name with short process name (for top/ps/pstree or diagnostic):
-		prctl_set_th_name('f2b/server')
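The new `_make_file_path` helper creates only the last directory level of an absolute path, swallowing the EEXIST error. A standalone sketch of the same behavior (hypothetical file names, using the `errno` constant instead of the literal 17):

```python
import errno
import os
import tempfile

def make_file_path(name):
    """Create the parent directory of *name* (last level only) on demand."""
    d = os.path.dirname(name)
    if os.path.isabs(d):
        try:
            os.mkdir(d)  # one level only; the grandparent must already exist
        except OSError as e:
            if e.errno != errno.EEXIST:  # errno 17
                raise

base = tempfile.mkdtemp()
make_file_path(os.path.join(base, "run", "f2b.sock"))
make_file_path(os.path.join(base, "run", "f2b.pid"))  # second call: EEXIST swallowed
print(os.path.isdir(os.path.join(base, "run")))  # True
```

Because only one level is created, a path with a missing grandparent still raises, which matches the intent of creating just the socket/PID directory itself.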
	def __sigTERMhandler(self, signum, frame): # pragma: no cover - indirect tested
		logSys.debug("Caught signal %d. Exiting", signum)
@@ -99,7 +114,7 @@ class Server:
	def start(self, sock, pidfile, force=False, observer=True, conf={}):
		# First set the mask to only allow access to owner
-		os.umask(0077)
+		os.umask(0o077)
		# Second daemonize before logging etc, because it will close all handles:
		if self.__daemon: # pragma: no cover
			logSys.info("Starting in daemon mode")
@@ -113,6 +128,9 @@ class Server:
				logSys.error(err)
				raise ServerInitializationError(err)
			# We are daemon.
+		# replace main thread (and process) name to identify server (for top/ps/pstree or diagnostic):
+		prctl_set_th_name(conf.get("pname", "fail2ban-server"))

		# Set all logging parameters (or use default if not specified):
		self.__verbose = conf.get("verbose", None)
@@ -141,6 +159,7 @@ class Server:
		# Creates a PID file.
		try:
			logSys.debug("Creating PID file %s", pidfile)
+			_make_file_path(pidfile)
			pidFile = open(pidfile, 'w')
			pidFile.write("%s\n" % os.getpid())
			pidFile.close()
@@ -156,6 +175,7 @@ class Server:
		# Start the communication
		logSys.debug("Starting communication")
		try:
+			_make_file_path(sock)
			self.__asyncServer = AsyncServer(self.__transm)
			self.__asyncServer.onstart = conf.get('onstart')
			self.__asyncServer.start(sock, force)
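The `os.umask(0077)` → `os.umask(0o077)` change above is a Python 3 compatibility fix: a bare leading-zero octal literal is a `SyntaxError` on 3.x, while the `0o` prefix is accepted on both 2.6+ and 3.x. A quick check of the value and the owner-only semantics:

```python
import os

# 0o077 == 63 masks all group/other permission bits,
# so files created afterwards are accessible by the owner only
old = os.umask(0o077)
os.umask(old)  # restore the previous mask
print(0o077)   # 63
```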
@@ -193,23 +213,26 @@ class Server:
			signal.signal(s, sh)

		# Give observer a small chance to complete its work before exit
-		if Observers.Main is not None:
-			Observers.Main.stop()
+		obsMain = Observers.Main
+		if obsMain is not None:
+			if obsMain.stop(forceQuit=False):
+				obsMain = None
+			Observers.Main = None
		# Now stop all the jails
		self.stopAllJail()
+		# Stop observer ultimately
+		if obsMain is not None:
+			obsMain.stop()
		# Explicit close database (server can leave in a thread,
		# so delayed GC can prevent committing changes)
		if self.__db:
			self.__db.close()
			self.__db = None
-		# Stop observer and exit
-		if Observers.Main is not None:
-			Observers.Main.stop()
-			Observers.Main = None
-		# Stop async
+		# Stop async and exit
		if self.__asyncServer is not None:
			self.__asyncServer.stop()
			self.__asyncServer = None
@@ -517,6 +540,32 @@ class Server:
			cnt += jail.actions.removeBannedIP(value, ifexists=ifexists)
		return cnt

+	def banned(self, name=None, ids=None):
+		if name is not None:
+			# single jail:
+			jails = [self.__jails[name]]
+		else:
+			# in all jails:
+			jails = self.__jails.values()
+		# check banned ids:
+		res = []
+		if name is None and ids:
+			for ip in ids:
+				ret = []
+				for jail in jails:
+					if jail.actions.getBanned([ip]):
+						ret.append(jail.name)
+				res.append(ret)
+		else:
+			for jail in jails:
+				ret = jail.actions.getBanned(ids)
+				if name is not None:
+					return ret
+					res.append(ret)
+				else:
+					res.append({jail.name: ret})
+		return res
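The result shape of `banned()` depends on how it is called. A simplified, self-contained re-implementation with fake jails (all names and IPs hypothetical) illustrates the cases:

```python
class FakeActions(object):
    def __init__(self, banned):
        self._banned = banned
    def getBanned(self, ids=None):
        # with ids: subset of ids that are banned; without: all banned IDs
        if ids:
            return [i for i in ids if i in self._banned]
        return sorted(self._banned)

class FakeJail(object):
    def __init__(self, name, banned):
        self.name, self.actions = name, FakeActions(banned)

def banned(jails, name=None, ids=None):
    if name is not None:
        jails = [j for j in jails if j.name == name]
    res = []
    if name is None and ids:
        # per given ID: the list of jail names where it is banned
        for ip in ids:
            res.append([j.name for j in jails if j.actions.getBanned([ip])])
    else:
        for jail in jails:
            ret = jail.actions.getBanned(ids)
            if name is not None:
                return ret                # single jail: plain list of banned IDs
            res.append({jail.name: ret})  # all jails: per-jail dicts
    return res

jails = [FakeJail('sshd', {'192.0.2.1'}),
         FakeJail('apache', {'192.0.2.1', '192.0.2.9'})]
print(banned(jails, 'sshd'))             # ['192.0.2.1']
print(banned(jails, ids=['192.0.2.9']))  # [['apache']]
```

This is only a sketch of the shapes; the real method consults each jail's `Actions` object and is exposed through the new `banned` transmitter command below.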
	def getBanTime(self, name):
		return self.__jails[name].actions.getBanTime()
@@ -777,6 +826,7 @@ class Server:
			self.__db = None
		else:
			if Fail2BanDb is not None:
+				_make_file_path(filename)
				self.__db = Fail2BanDb(filename)
				self.__db.delAllJails()
			else: # pragma: no cover


@@ -291,9 +291,8 @@ def reGroupDictStrptime(found_dict, msec=False, default_tz=None):
			date_result -= datetime.timedelta(days=1)
		if assume_year:
			if not now: now = MyTime.now()
-			if date_result > now:
-				# Could be last year?
-				# also reset month and day as it's not yesterday...
+			if date_result > now + datetime.timedelta(days=1): # tolerate timezone issues (+24h)
+				# assume last year - also reset month and day as it's not yesterday...
				date_result = date_result.replace(
					year=year-1, month=month, day=day)
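The hunk above adds a 24-hour tolerance before assuming a year-less timestamp belongs to the previous year, so timezone skew no longer triggers a spurious rollback. A sketch of just that rule (omitting the month/day reset of the real code):

```python
from datetime import datetime, timedelta

def apply_assumed_year(parsed, now):
    # if a year-less timestamp lands more than a day in the future,
    # assume it belongs to the previous year (tolerates +24h timezone skew)
    if parsed > now + timedelta(days=1):
        parsed = parsed.replace(year=parsed.year - 1)
    return parsed

now = datetime(2020, 1, 1, 0, 30)
late = datetime(2020, 12, 31, 23, 0)  # "Dec 31 23:00" parsed with the current year
print(apply_assumed_year(late, now).year)  # 2019
near = datetime(2020, 1, 1, 12, 0)    # within the 24h tolerance: year kept
print(apply_assumed_year(near, now).year)  # 2020
```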


@@ -227,15 +227,14 @@ class FailTicket(Ticket):
	def __init__(self, ip=None, time=None, matches=None, data={}, ticket=None):
		# this class variables:
-		self._retry = 0
-		self._lastReset = None
+		self._firstTime = None
+		self._retry = 1
		# create/copy using default ticket constructor:
		Ticket.__init__(self, ip, time, matches, data, ticket)
		# init:
-		if ticket is None:
-			self._lastReset = time if time is not None else self.getTime()
-		if not self._retry:
-			self._retry = self._data['failures'];
+		if not isinstance(ticket, FailTicket):
+			self._firstTime = time if time is not None else self.getTime()
+		self._retry = self._data.get('failures', 1)

	def setRetry(self, value):
		""" Set artificial retry count, normally equal failures / attempt,
@@ -252,7 +251,20 @@ class FailTicket(Ticket):
		""" Returns failures / attempt count or
		artificial retry count increased for bad IPs
		"""
-		return max(self._retry, self._data['failures'])
+		return self._retry

+	def adjustTime(self, time, maxTime):
+		""" Adjust time of ticket and current attempts count considering given maxTime
+		as estimation from rate by previous known interval (if it exceeds the findTime)
+		"""
+		if time > self._time:
+			# expand current interval and attempts count (considering maxTime):
+			if self._firstTime < time - maxTime:
+				# adjust retry calculated as estimation from rate by previous known interval:
+				self._retry = int(round(self._retry / float(time - self._firstTime) * maxTime))
+				self._firstTime = time - maxTime
+			# last time of failure:
+			self._time = time
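The retry estimation in `adjustTime` rescales the known attempt count by rate when the observed interval exceeds `maxTime` (the find-time window). Worked through with concrete, made-up numbers:

```python
# 10 failures seen over 200s, but the window (maxTime/findTime) is only 100s:
retry, first_time, now, max_time = 10, 1000.0, 1200.0, 100
if first_time < now - max_time:
    # estimate the count for the last max_time seconds from the observed rate
    retry = int(round(retry / float(now - first_time) * max_time))
    first_time = now - max_time
print(retry, first_time)  # 5 1100.0
```

That is, a failure rate of 10/200s extrapolated over a 100s window yields an estimated 5 attempts, and the interval start is clamped to `now - max_time`.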
	def inc(self, matches=None, attempt=1, count=1):
		self._retry += count
@@ -264,19 +276,6 @@ class FailTicket(Ticket):
		else:
			self._data['matches'] = matches

-	def setLastTime(self, value):
-		if value > self._time:
-			self._time = value
-
-	def getLastTime(self):
-		return self._time
-
-	def getLastReset(self):
-		return self._lastReset
-
-	def setLastReset(self, value):
-		self._lastReset = value
-
	@staticmethod
	def wrap(o):
		o.__class__ = FailTicket


@@ -118,6 +118,9 @@ class Transmitter:
			if len(value) == 1 and value[0] == "--all":
				return self.__server.setUnbanIP()
			return self.__server.setUnbanIP(None, value)
+		elif name == "banned":
+			# check IP is banned in all jails:
+			return self.__server.banned(None, command[1:])
		elif name == "echo":
			return command[1:]
		elif name == "server-status":
@@ -274,7 +277,8 @@ class Transmitter:
			value = command[2]
			self.__server.setPrefRegex(name, value)
			if self.__quiet: return
-			return self.__server.getPrefRegex(name)
+			v = self.__server.getPrefRegex(name)
+			return v.getRegex() if v else ""
		elif command[1] == "addfailregex":
			value = command[2]
			self.__server.addFailRegex(name, value, multiple=multiple)
@@ -430,7 +434,10 @@ class Transmitter:
				return None
			else:
				return db.purgeage
-		# Filter
+		# Jail, Filter
+		elif command[1] == "banned":
+			# check IP is banned in all jails:
+			return self.__server.banned(name, command[2:])
		elif command[1] == "logpath":
			return self.__server.getLogPath(name)
		elif command[1] == "logencoding":
@@ -446,7 +453,8 @@ class Transmitter:
		elif command[1] == "ignorecache":
			return self.__server.getIgnoreCache(name)
		elif command[1] == "prefregex":
-			return self.__server.getPrefRegex(name)
+			v = self.__server.getPrefRegex(name)
+			return v.getRegex() if v else ""
		elif command[1] == "failregex":
			return self.__server.getFailRegex(name)
		elif command[1] == "ignoreregex":


@@ -125,6 +125,10 @@ class Utils():
			with self.__lock:
				self._cache.pop(k, None)

+		def clear(self):
+			with self.__lock:
+				self._cache.clear()

	@staticmethod
	def setFBlockMode(fhandle, value):
@@ -260,7 +264,6 @@ class Utils():
			if stdout is not None and stdout != '' and std_level >= logSys.getEffectiveLevel():
				for l in stdout.splitlines():
					logSys.log(std_level, "%x -- stdout: %r", realCmdId, uni_decode(l))
-			popen.stdout.close()
		if popen.stderr:
			try:
				if retcode is None or retcode < 0:
@@ -271,7 +274,9 @@ class Utils():
			if stderr is not None and stderr != '' and std_level >= logSys.getEffectiveLevel():
				for l in stderr.splitlines():
					logSys.log(std_level, "%x -- stderr: %r", realCmdId, uni_decode(l))
-			popen.stderr.close()
+
+		if popen.stdout: popen.stdout.close()
+		if popen.stderr: popen.stderr.close()

		success = False
		if retcode in success_codes:
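The change above defers closing the pipes until both streams have been consumed (previously stdout was closed before stderr was read). A sketch of the safe ordering with plain `subprocess` (the tiny outputs here cannot deadlock; the real code reads non-blocking):

```python
import subprocess

popen = subprocess.Popen(["sh", "-c", "echo out; echo err >&2"],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out = popen.stdout.read()
err = popen.stderr.read()
popen.wait()
# close both pipes only after both streams were read:
if popen.stdout: popen.stdout.close()
if popen.stderr: popen.stderr.close()
print(out.strip(), err.strip())
```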


@@ -96,6 +96,8 @@ class ExecuteActions(LogCaptureTestCase):
		self.assertLogged("stdout: %r" % 'ip flush', "stdout: %r" % 'ip stop')
		self.assertEqual(self.__actions.status(), [("Currently banned", 0),
			("Total banned", 0), ("Banned IP list", [])])
+		self.assertEqual(self.__actions.status('short'), [("Currently banned", 0),
+			("Total banned", 0)])

	def testAddActionPython(self):
		self.__actions.add(


@@ -252,7 +252,7 @@ class CommandActionTest(LogCaptureTestCase):
		delattr(self.__action, 'ac')
		# produce self-referencing query except:
		self.assertRaisesRegexp(ValueError, r"possible self referencing definitions in query",
-			lambda: self.__action.replaceTag("<x<x<x<x<x<x<x<x<x<x<x<x<x<x<x<x<x<x<x<x<x>>>>>>>>>>>>>>>>>>>>>",
+			lambda: self.__action.replaceTag("<x"*30+">"*30,
				self.__action._properties, conditional="family=inet6")
		)


@@ -154,6 +154,21 @@ class AddFailure(unittest.TestCase):
		finally:
			self.__banManager.setBanTime(btime)

+	def testBanList(self):
+		tickets = [
+			BanTicket('192.0.2.1', 1167605999.0),
+			BanTicket('192.0.2.2', 1167605999.0),
+		]
+		tickets[1].setBanTime(-1)
+		for t in tickets:
+			self.__banManager.addBanTicket(t)
+		self.assertSortedEqual(self.__banManager.getBanList(ordered=True, withTime=True),
+			[
+				'192.0.2.1 \t2006-12-31 23:59:59 + 600 = 2007-01-01 00:09:59',
+				'192.0.2.2 \t2006-12-31 23:59:59 + -1 = 9999-12-31 23:59:59'
+			]
+		)

class StatusExtendedCymruInfo(unittest.TestCase):
	def setUp(self):


@@ -87,6 +87,21 @@ option = %s
		self.assertTrue(self.c.read(f))  # we got some now
		return self.c.getOptions('section', [("int", 'option')])['option']

+	def testConvert(self):
+		self.c.add_section("Definition")
+		self.c.set("Definition", "a", "1")
+		self.c.set("Definition", "b", "1")
+		self.c.set("Definition", "c", "test")
+		opts = self.c.getOptions("Definition",
+			(('int', 'a', 0), ('bool', 'b', 0), ('int', 'c', 0)))
+		self.assertSortedEqual(opts, {'a': 1, 'b': True, 'c': 0})
+		opts = self.c.getOptions("Definition",
+			(('int', 'a'), ('bool', 'b'), ('int', 'c')))
+		self.assertSortedEqual(opts, {'a': 1, 'b': True, 'c': None})
+		opts = self.c.getOptions("Definition",
+			{'a': ('int', 0), 'b': ('bool', 0), 'c': ('int', 0)})
+		self.assertSortedEqual(opts, {'a': 1, 'b': True, 'c': 0})

	def testInaccessibleFile(self):
		f = os.path.join(self.d, "d.conf")  # inaccessible file
		self._write('d.conf', 0)
@@ -249,6 +264,17 @@ class JailReaderTest(LogCaptureTestCase):
	def __init__(self, *args, **kwargs):
		super(JailReaderTest, self).__init__(*args, **kwargs)

+	def testSplitWithOptions(self):
+		# covering all separators - new-line, tab and space:
+		for sep in ('\n', '\t', ' '):
+			self.assertEqual(splitWithOptions('a%sb' % (sep,)), ['a', 'b'])
+			self.assertEqual(splitWithOptions('a[x=y]%sb' % (sep,)), ['a[x=y]', 'b'])
+			self.assertEqual(splitWithOptions('a[x=y][z=z]%sb' % (sep,)), ['a[x=y][z=z]', 'b'])
+			self.assertEqual(splitWithOptions('a[x="y][z"]%sb' % (sep,)), ['a[x="y][z"]', 'b'])
+			self.assertEqual(splitWithOptions('a[x="y z"]%sb' % (sep,)), ['a[x="y z"]', 'b'])
+			self.assertEqual(splitWithOptions('a[x="y\tz"]%sb' % (sep,)), ['a[x="y\tz"]', 'b'])
+			self.assertEqual(splitWithOptions('a[x="y\nz"]%sb' % (sep,)), ['a[x="y\nz"]', 'b'])

	def testIncorrectJail(self):
		jail = JailReader('XXXABSENTXXX', basedir=CONFIG_DIR, share_config=CONFIG_DIR_SHARE_CFG)
		self.assertRaises(ValueError, jail.read)
@@ -328,7 +354,22 @@ class JailReaderTest(LogCaptureTestCase):
				self.assertFalse(len(o) > 2 and o[2].endswith('regex'))
				i += 1
				if i > usednsidx: break

+	def testLogTypeOfBackendInJail(self):
+		unittest.F2B.SkipIfCfgMissing(stock=True); # expected include of common.conf
+		# test twice to check the cache works properly:
+		for i in (1, 2):
+			# backend-related, overwritten in definition, specified in init parameters:
+			for prefline in ('JRNL', 'FILE', 'TEST', 'INIT'):
+				jail = JailReader('checklogtype_'+prefline.lower(), basedir=IMPERFECT_CONFIG,
+					share_config=IMPERFECT_CONFIG_SHARE_CFG, force_enable=True)
+				self.assertTrue(jail.read())
+				self.assertTrue(jail.getOptions())
+				stream = jail.convert()
+				# 'JRNL' for systemd, 'FILE' for file backend, 'TEST' for custom logtype (overwrite it):
+				self.assertEqual([['set', jail.getName(), 'addfailregex', '^%s failure from <HOST>$' % prefline]],
+					[o for o in stream if len(o) > 2 and o[2] == 'addfailregex'])

	def testSplitOption(self):
		# Simple example
		option = "mail-whois[name=SSH]"
@@ -468,14 +509,12 @@ class JailReaderTest(LogCaptureTestCase):
		self.assertRaises(NoSectionError, c.getOptions, 'test', {})

-class FilterReaderTest(unittest.TestCase):
-
-	def __init__(self, *args, **kwargs):
-		super(FilterReaderTest, self).__init__(*args, **kwargs)
-		self.__share_cfg = {}
+class FilterReaderTest(LogCaptureTestCase):

	def testConvert(self):
-		output = [['multi-set', 'testcase01', 'addfailregex', [
+		output = [
+			['set', 'testcase01', 'maxlines', 1],
+			['multi-set', 'testcase01', 'addfailregex', [
			"^\\s*(?:\\S+ )?(?:kernel: \\[\\d+\\.\\d+\\] )?(?:@vserver_\\S+ )"
			"?(?:(?:\\[\\d+\\])?:\\s+[\\[\\(]?sshd(?:\\(\\S+\\))?[\\]\\)]?:?|"
			"[\\[\\(]?sshd(?:\\(\\S+\\))?[\\]\\)]?:?(?:\\[\\d+\\])?:)?\\s*(?:"
@@ -497,7 +536,6 @@ class FilterReaderTest(unittest.TestCase):
			['set', 'testcase01', 'addjournalmatch',
				"FIELD= with spaces ", "+", "AFIELD= with + char and spaces"],
			['set', 'testcase01', 'datepattern', "%Y %m %d %H:%M:%S"],
-			['set', 'testcase01', 'maxlines', 1], # Last for override test
		]
		filterReader = FilterReader("testcase01", "testcase01", {})
		filterReader.setBaseDir(TEST_FILES_DIR)
@@ -514,9 +552,18 @@ class FilterReaderTest(unittest.TestCase):
		filterReader.read()
		#filterReader.getOptions(["failregex", "ignoreregex"])
		filterReader.getOptions(None)
-		output[-1][-1] = "5"
+		output[0][-1] = 5; # maxlines = 5
		self.assertSortedEqual(filterReader.convert(), output)
+	def testConvertOptions(self):
+		filterReader = FilterReader("testcase01", "testcase01", {'maxlines': '<test>', 'test': 'X'},
+			share_config=TEST_FILES_DIR_SHARE_CFG, basedir=TEST_FILES_DIR)
+		filterReader.read()
+		filterReader.getOptions(None)
+		opts = filterReader.getCombined();
+		self.assertNotEqual(opts['maxlines'], 'X'); # wrong int value 'X' for 'maxlines'
+		self.assertLogged("Wrong int value 'X' for 'maxlines'. Using default one:")

	def testFilterReaderSubstitionDefault(self):
		output = [['set', 'jailname', 'addfailregex', 'to=sweet@example.com fromip=<IP>']]
		filterReader = FilterReader('substition', "jailname", {},
@@ -526,6 +573,17 @@ class FilterReaderTest(unittest.TestCase):
		c = filterReader.convert()
		self.assertSortedEqual(c, output)

+	def testFilterReaderSubstKnown(self):
+		# testcase02.conf + testcase02.local, test covering that known/option is not overridden
+		# with the unmodified (not available) value of the option from the .local config file,
+		# so it wouldn't cause self-recursion if the option already references known/option in the .conf file.
+		filterReader = FilterReader('testcase02', "jailname", {},
+			share_config=TEST_FILES_DIR_SHARE_CFG, basedir=TEST_FILES_DIR)
+		filterReader.read()
+		filterReader.getOptions(None)
+		opts = filterReader.getCombined()
+		self.assertTrue('sshd' in opts['failregex'])

	def testFilterReaderSubstitionSet(self):
		output = [['set', 'jailname', 'addfailregex', 'to=sour@example.com fromip=<IP>']]
		filterReader = FilterReader('substition', "jailname", {'honeypot': 'sour@example.com'},


@@ -0,0 +1,31 @@
# Fail2Ban configuration file
#
[INCLUDES]
# Read common prefixes (logtype is set in default section)
before = ../../../../config/filter.d/common.conf
[Definition]
_daemon = test
failregex = ^<lt_<logtype>/__prefix_line> failure from <HOST>$
ignoreregex =
# following sections define prefix line considering logtype:
# backend-related (retrieved from backend, overwrite default):
[lt_file]
__prefix_line = FILE
[lt_journal]
__prefix_line = JRNL
# specified in definition section of filter (see filter checklogtype_test.conf):
[lt_test]
__prefix_line = TEST
# specified in init parameter of jail (see ../jail.conf, jail checklogtype_init):
[lt_init]
__prefix_line = INIT


@@ -0,0 +1,12 @@
# Fail2Ban configuration file
#
[INCLUDES]
# Read common prefixes (logtype is set in default section)
before = checklogtype.conf
[Definition]
# overwrite logtype in definition (no backend anymore):
logtype = test


@@ -37,7 +37,7 @@ __pam_auth = pam_[a-z]+
cmnfailre = ^%(__prefix_line_sl)s[aA]uthentication (?:failure|error|failed) for .* from <HOST>( via \S+)?\s*%(__suff)s$
            ^%(__prefix_line_sl)sUser not known to the underlying authentication module for .* from <HOST>\s*%(__suff)s$
            ^%(__prefix_line_sl)sFailed \S+ for invalid user <F-USER>(?P<cond_user>\S+)|(?:(?! from ).)*?</F-USER> from <HOST>%(__on_port_opt)s(?: ssh\d*)?(?(cond_user): |(?:(?:(?! from ).)*)$)
-           ^%(__prefix_line_sl)sFailed \b(?!publickey)\S+ for (?P<cond_inv>invalid user )?<F-USER>(?P<cond_user>\S+)|(?(cond_inv)(?:(?! from ).)*?|[^:]+)</F-USER> from <HOST>%(__on_port_opt)s(?: ssh\d*)?(?(cond_user): |(?:(?:(?! from ).)*)$)
+           ^%(__prefix_line_sl)sFailed (?:<F-NOFAIL>publickey</F-NOFAIL>|\S+) for (?P<cond_inv>invalid user )?<F-USER>(?P<cond_user>\S+)|(?(cond_inv)(?:(?! from ).)*?|[^:]+)</F-USER> from <HOST>%(__on_port_opt)s(?: ssh\d*)?(?(cond_user): |(?:(?:(?! from ).)*)$)
            ^%(__prefix_line_sl)sROOT LOGIN REFUSED FROM <HOST>
            ^%(__prefix_line_sl)s[iI](?:llegal|nvalid) user .*? from <HOST>%(__suff)s$
            ^%(__prefix_line_sl)sUser .+ from <HOST> not allowed because not listed in AllowUsers\s*%(__suff)s$
@@ -57,11 +57,10 @@ mdre-normal =
mdre-ddos = ^%(__prefix_line_sl)sDid not receive identification string from <HOST>
            ^%(__prefix_line_sl)sBad protocol version identification '.*' from <HOST>
-           ^%(__prefix_line_sl)sConnection closed by%(__authng_user)s <HOST>%(__on_port_opt)s\s+\[preauth\]\s*$
-           ^%(__prefix_line_sl)sConnection reset by <HOST>
+           ^%(__prefix_line_sl)sConnection (?:closed|reset) by%(__authng_user)s <HOST>%(__on_port_opt)s\s+\[preauth\]\s*$
            ^%(__prefix_line_ml1)sSSH: Server;Ltype: (?:Authname|Version|Kex);Remote: <HOST>-\d+;[A-Z]\w+:.*%(__prefix_line_ml2)sRead from socket failed: Connection reset by peer%(__suff)s$

-mdre-extra = ^%(__prefix_line_sl)sReceived disconnect from <HOST>%(__on_port_opt)s:\s*14: No supported authentication methods available
+mdre-extra = ^%(__prefix_line_sl)sReceived disconnect from <HOST>%(__on_port_opt)s:\s*14: No(?: supported)? authentication methods available
            ^%(__prefix_line_sl)sUnable to negotiate with <HOST>%(__on_port_opt)s: no matching <__alg_match> found.
            ^%(__prefix_line_ml1)sConnection from <HOST>%(__on_port_opt)s%(__prefix_line_ml2)sUnable to negotiate a <__alg_match>
            ^%(__prefix_line_ml1)sConnection from <HOST>%(__on_port_opt)s%(__prefix_line_ml2)sno matching <__alg_match> found:


@@ -74,3 +74,28 @@ journalmatch = _COMM=test
maxlines = 2
usedns = no
enabled = false
[checklogtype_jrnl]
filter = checklogtype
backend = systemd
action = action
enabled = false
[checklogtype_file]
filter = checklogtype
backend = polling
logpath = README.md
action = action
enabled = false
[checklogtype_test]
filter = checklogtype_test
backend = systemd
action = action
enabled = false
[checklogtype_init]
filter = checklogtype_test[logtype=init]
backend = systemd
action = action
enabled = false


@@ -262,6 +262,15 @@ class DatabaseTest(LogCaptureTestCase):
			self.db.addLog(self.jail, self.fileContainer), None)
		os.remove(filename)
def testUpdateJournal(self):
self.testAddJail() # Jail required
# not yet updated:
self.assertEqual(self.db.getJournalPos(self.jail, 'systemd-journal'), None)
# update 3 times (insert and 2 updates) and check it was set (and overwritten):
for t in (1500000000, 1500000001, 1500000002):
self.db.updateJournal(self.jail, 'systemd-journal', t, 'TEST'+str(t))
self.assertEqual(self.db.getJournalPos(self.jail, 'systemd-journal'), t)
	def testAddBan(self):
		self.testAddJail()
		ticket = FailTicket("127.0.0.1", 0, ["abc\n"])
@@ -534,6 +543,7 @@ class DatabaseTest(LogCaptureTestCase):
		# test action together with database functionality
		self.testAddJail()  # Jail required
		self.jail.database = self.db
+		self.db.addJail(self.jail)
		actions = Actions(self.jail)
		actions.add(
			"action_checkainfo",


@@ -330,6 +330,27 @@ class DateDetectorTest(LogCaptureTestCase):
		dt = '2005 Jun 03'; self.assertEqual(t.matchDate(dt).group(1), dt)
		dt = '2005 JUN 03'; self.assertEqual(t.matchDate(dt).group(1), dt)
def testNotAnchoredCollision(self):
# try for patterns with and without word boundaries:
for dp in (r'%H:%M:%S', r'{UNB}%H:%M:%S'):
dd = DateDetector()
dd.appendTemplate(dp)
# boundary of timestamp changes right and left (and time is left and right in line):
for fmt in ('%s test', '%8s test', 'test %s', 'test %8s'):
for dt in (
'00:01:02',
'00:01:2',
'00:1:2',
'0:1:2',
'00:1:2',
'00:01:2',
'00:01:02',
'0:1:2',
'00:01:02',
):
t = dd.getTime(fmt % dt)
self.assertEqual((t[0], t[1].group()), (1123970462.0, dt))
	def testAmbiguousInOrderedTemplates(self):
		dd = self.datedetector
		for (debit, line, cnt) in (


@@ -40,7 +40,6 @@ class DummyJail(Jail):
		self.lock = Lock()
		self.queue = []
		super(DummyJail, self).__init__(name=name, backend=backend)
-		self.__db = None
		self.__actions = DummyActions(self)

	def __len__(self):
@@ -55,6 +54,10 @@ class DummyJail(Jail):
		with self.lock:
			return bool(self.queue)

+	@property
+	def hasFailTickets(self):
+		return bool(self.queue)

	def putFailTicket(self, ticket):
		with self.lock:
			self.queue.append(ticket)
@@ -74,14 +77,6 @@ class DummyJail(Jail):
	def idle(self, value):
		pass

-	@property
-	def database(self):
-		return self.__db;
-
-	@database.setter
-	def database(self, value):
-		self.__db = value;

	@property
	def actions(self):
		return self.__actions;


@@ -37,7 +37,7 @@ from threading import Thread
from ..client import fail2banclient, fail2banserver, fail2bancmdline
from ..client.fail2bancmdline import Fail2banCmdLine
-from ..client.fail2banclient import exec_command_line as _exec_client, VisualWait
+from ..client.fail2banclient import exec_command_line as _exec_client, CSocket, VisualWait
from ..client.fail2banserver import Fail2banServer, exec_command_line as _exec_server
from .. import protocol
from ..server import server
@@ -343,6 +343,7 @@ def with_foreground_server_thread(startextra={}):
	# to wait for end of server, default accept any exit code, because multi-threaded,
	# thus server can exit in-between...
	def _stopAndWaitForServerEnd(code=(SUCCESS, FAILED)):
+		tearDownMyTime()
		# if it seems to be down - try to catch the end phase (wait a bit for end:True to recognize down state):
		if not phase.get('end', None) and not os.path.exists(pjoin(tmp, "f2b.pid")):
			Utils.wait_for(lambda: phase.get('end', None) is not None, MID_WAITTIME)
@@ -452,6 +453,14 @@ class Fail2banClientServerBase(LogCaptureTestCase):
		self.assertRaises(exitType, self.exec_command_line[0],
			(self.exec_command_line[1:] + startparams + args))

+	def execCmdDirect(self, startparams, *args):
+		sock = startparams[startparams.index('-s')+1]
+		s = CSocket(sock)
+		try:
+			return s.send(args)
+		finally:
+			s.close()

	#
	# Common tests
	#
@@ -469,14 +478,14 @@ class Fail2banClientServerBase(LogCaptureTestCase):
	@with_foreground_server_thread(startextra={'f2b_local':(
		"[Thread]",
-		"stacksize = 32"
+		"stacksize = 128"
		"",
	)})
	def testStartForeground(self, tmp, startparams):
		# check thread options were set:
		self.pruneLog()
		self.execCmd(SUCCESS, startparams, "get", "thread")
-		self.assertLogged("{'stacksize': 32}")
+		self.assertLogged("{'stacksize': 128}")
		# several commands to server:
		self.execCmd(SUCCESS, startparams, "ping")
		self.execCmd(FAILED, startparams, "~~unknown~cmd~failed~~")
@ -646,12 +655,6 @@ class Fail2banClientTest(Fail2banClientServerBase):
self.assertLogged("Base configuration directory " + pjoin(tmp, "miss") + " does not exist") self.assertLogged("Base configuration directory " + pjoin(tmp, "miss") + " does not exist")
self.pruneLog() self.pruneLog()
## wrong socket
self.execCmd(FAILED, (),
"--async", "-c", pjoin(tmp, "config"), "-s", pjoin(tmp, "miss/f2b.sock"), "start")
self.assertLogged("There is no directory " + pjoin(tmp, "miss") + " to contain the socket file")
self.pruneLog()
## not running ## not running
self.execCmd(FAILED, (), self.execCmd(FAILED, (),
"-c", pjoin(tmp, "config"), "-s", pjoin(tmp, "f2b.sock"), "reload") "-c", pjoin(tmp, "config"), "-s", pjoin(tmp, "f2b.sock"), "reload")
@ -747,12 +750,6 @@ class Fail2banServerTest(Fail2banClientServerBase):
self.assertLogged("Base configuration directory " + pjoin(tmp, "miss") + " does not exist") self.assertLogged("Base configuration directory " + pjoin(tmp, "miss") + " does not exist")
self.pruneLog() self.pruneLog()
## wrong socket
self.execCmd(FAILED, (),
"-c", pjoin(tmp, "config"), "-x", "-s", pjoin(tmp, "miss/f2b.sock"))
self.assertLogged("There is no directory " + pjoin(tmp, "miss") + " to contain the socket file")
self.pruneLog()
## already exists: ## already exists:
open(pjoin(tmp, "f2b.sock"), 'a').close() open(pjoin(tmp, "f2b.sock"), 'a').close()
self.execCmd(FAILED, (), self.execCmd(FAILED, (),
@ -891,7 +888,7 @@ class Fail2banServerTest(Fail2banClientServerBase):
"action = ", "action = ",
" test-action2[name='%(__name__)s', restore='restored: <restored>', info=', err-code: <F-ERRCODE>']" \ " test-action2[name='%(__name__)s', restore='restored: <restored>', info=', err-code: <F-ERRCODE>']" \
if 2 in actions else "", if 2 in actions else "",
" test-action2[name='%(__name__)s', actname=test-action3, _exec_once=1, restore='restored: <restored>']" " test-action2[name='%(__name__)s', actname=test-action3, _exec_once=1, restore='restored: <restored>',"
" actionflush=<_use_flush_>]" \ " actionflush=<_use_flush_>]" \
if 3 in actions else "", if 3 in actions else "",
"logpath = " + test2log, "logpath = " + test2log,
@ -1004,8 +1001,8 @@ class Fail2banServerTest(Fail2banClientServerBase):
# leave action2 just to test restored interpolation: # leave action2 just to test restored interpolation:
_write_jail_cfg(actions=[2,3]) _write_jail_cfg(actions=[2,3])
# write new failures:
self.pruneLog("[test-phase 2b]") self.pruneLog("[test-phase 2b]")
# write new failures:
_write_file(test2log, "w+", *( _write_file(test2log, "w+", *(
(str(int(MyTime.time())) + " error 403 from 192.0.2.2: test 2",) * 3 + (str(int(MyTime.time())) + " error 403 from 192.0.2.2: test 2",) * 3 +
(str(int(MyTime.time())) + " error 403 from 192.0.2.3: test 2",) * 3 + (str(int(MyTime.time())) + " error 403 from 192.0.2.3: test 2",) * 3 +
@ -1018,13 +1015,19 @@ class Fail2banServerTest(Fail2banClientServerBase):
self.assertLogged( self.assertLogged(
"2 ticket(s) in 'test-jail2", "2 ticket(s) in 'test-jail2",
"5 ticket(s) in 'test-jail1", all=True, wait=MID_WAITTIME) "5 ticket(s) in 'test-jail1", all=True, wait=MID_WAITTIME)
# ban manually to cover restore in restart (phase 2c):
self.execCmd(SUCCESS, startparams,
"set", "test-jail2", "banip", "192.0.2.9")
self.assertLogged(
"3 ticket(s) in 'test-jail2", wait=MID_WAITTIME)
self.assertLogged( self.assertLogged(
"[test-jail1] Ban 192.0.2.2", "[test-jail1] Ban 192.0.2.2",
"[test-jail1] Ban 192.0.2.3", "[test-jail1] Ban 192.0.2.3",
"[test-jail1] Ban 192.0.2.4", "[test-jail1] Ban 192.0.2.4",
"[test-jail1] Ban 192.0.2.8", "[test-jail1] Ban 192.0.2.8",
"[test-jail2] Ban 192.0.2.4", "[test-jail2] Ban 192.0.2.4",
"[test-jail2] Ban 192.0.2.8", all=True) "[test-jail2] Ban 192.0.2.8",
"[test-jail2] Ban 192.0.2.9", all=True)
# test ips at all not visible for jail2: # test ips at all not visible for jail2:
self.assertNotLogged( self.assertNotLogged(
"[test-jail2] Found 192.0.2.2", "[test-jail2] Found 192.0.2.2",
@ -1034,6 +1037,30 @@ class Fail2banServerTest(Fail2banClientServerBase):
all=True) all=True)
# if observer available wait for it becomes idle (write all tickets to db): # if observer available wait for it becomes idle (write all tickets to db):
_observer_wait_idle() _observer_wait_idle()
# test banned command:
self.assertSortedEqual(self.execCmdDirect(startparams,
'banned'), (0, [
{'test-jail1': ['192.0.2.4', '192.0.2.1', '192.0.2.8', '192.0.2.3', '192.0.2.2']},
{'test-jail2': ['192.0.2.4', '192.0.2.9', '192.0.2.8']}
]
))
self.assertSortedEqual(self.execCmdDirect(startparams,
'banned', '192.0.2.1', '192.0.2.4', '192.0.2.222'), (0, [
['test-jail1'], ['test-jail1', 'test-jail2'], []
]
))
self.assertSortedEqual(self.execCmdDirect(startparams,
'get', 'test-jail1', 'banned')[1], [
'192.0.2.4', '192.0.2.1', '192.0.2.8', '192.0.2.3', '192.0.2.2'])
self.assertSortedEqual(self.execCmdDirect(startparams,
'get', 'test-jail2', 'banned')[1], [
'192.0.2.4', '192.0.2.9', '192.0.2.8'])
self.assertEqual(self.execCmdDirect(startparams,
'get', 'test-jail1', 'banned', '192.0.2.3')[1], 1)
self.assertEqual(self.execCmdDirect(startparams,
'get', 'test-jail1', 'banned', '192.0.2.9')[1], 0)
self.assertEqual(self.execCmdDirect(startparams,
'get', 'test-jail1', 'banned', '192.0.2.3', '192.0.2.9')[1], [1, 0])
# rotate logs: # rotate logs:
_write_file(test1log, "w+") _write_file(test1log, "w+")
@ -1046,15 +1073,17 @@ class Fail2banServerTest(Fail2banClientServerBase):
self.assertLogged( self.assertLogged(
"Reload finished.", "Reload finished.",
"Restore Ban", "Restore Ban",
"2 ticket(s) in 'test-jail2", all=True, wait=MID_WAITTIME) "3 ticket(s) in 'test-jail2", all=True, wait=MID_WAITTIME)
# stop/start and unban/restore ban: # stop/start and unban/restore ban:
self.assertLogged( self.assertLogged(
"Jail 'test-jail2' stopped",
"Jail 'test-jail2' started",
"[test-jail2] Unban 192.0.2.4", "[test-jail2] Unban 192.0.2.4",
"[test-jail2] Unban 192.0.2.8", "[test-jail2] Unban 192.0.2.8",
"[test-jail2] Unban 192.0.2.9",
"Jail 'test-jail2' stopped",
"Jail 'test-jail2' started",
"[test-jail2] Restore Ban 192.0.2.4", "[test-jail2] Restore Ban 192.0.2.4",
"[test-jail2] Restore Ban 192.0.2.8", all=True "[test-jail2] Restore Ban 192.0.2.8",
"[test-jail2] Restore Ban 192.0.2.9", all=True
) )
# test restored is 1 (only test-action2): # test restored is 1 (only test-action2):
self.assertLogged( self.assertLogged(
@ -1099,7 +1128,8 @@ class Fail2banServerTest(Fail2banClientServerBase):
"Jail 'test-jail2' stopped", "Jail 'test-jail2' stopped",
"Jail 'test-jail2' started", "Jail 'test-jail2' started",
"[test-jail2] Unban 192.0.2.4", "[test-jail2] Unban 192.0.2.4",
"[test-jail2] Unban 192.0.2.8", all=True "[test-jail2] Unban 192.0.2.8",
"[test-jail2] Unban 192.0.2.9", all=True
) )
# test unban (action2): # test unban (action2):
self.assertLogged( self.assertLogged(
@ -1173,13 +1203,41 @@ class Fail2banServerTest(Fail2banClientServerBase):
self.assertNotLogged("[test-jail1] Found 192.0.2.5") self.assertNotLogged("[test-jail1] Found 192.0.2.5")
# unban single ips: # unban single ips:
self.pruneLog("[test-phase 6]") self.pruneLog("[test-phase 6a]")
self.execCmd(SUCCESS, startparams, self.execCmd(SUCCESS, startparams,
"--async", "unban", "192.0.2.5", "192.0.2.6") "--async", "unban", "192.0.2.5", "192.0.2.6")
self.assertLogged( self.assertLogged(
"192.0.2.5 is not banned", "192.0.2.5 is not banned",
"[test-jail1] Unban 192.0.2.6", all=True, wait=MID_WAITTIME "[test-jail1] Unban 192.0.2.6", all=True, wait=MID_WAITTIME
) )
# unban ips by subnet (cidr/mask):
self.pruneLog("[test-phase 6b]")
self.execCmd(SUCCESS, startparams,
"--async", "unban", "192.0.2.2/31")
self.assertLogged(
"[test-jail1] Unban 192.0.2.2",
"[test-jail1] Unban 192.0.2.3", all=True, wait=MID_WAITTIME
)
self.execCmd(SUCCESS, startparams,
"--async", "unban", "192.0.2.8/31", "192.0.2.100/31")
self.assertLogged(
"[test-jail1] Unban 192.0.2.8",
"192.0.2.100/31 is not banned", all=True, wait=MID_WAITTIME)
# ban/unban subnet(s):
self.pruneLog("[test-phase 6c]")
self.execCmd(SUCCESS, startparams,
"--async", "set", "test-jail1", "banip", "192.0.2.96/28", "192.0.2.112/28")
self.assertLogged(
"[test-jail1] Ban 192.0.2.96/28",
"[test-jail1] Ban 192.0.2.112/28", all=True, wait=MID_WAITTIME
)
self.execCmd(SUCCESS, startparams,
"--async", "set", "test-jail1", "unbanip", "192.0.2.64/26"); # contains both subnets .96/28 and .112/28
self.assertLogged(
"[test-jail1] Unban 192.0.2.96/28",
"[test-jail1] Unban 192.0.2.112/28", all=True, wait=MID_WAITTIME
)
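The comment on the last unban in phase 6c — that `192.0.2.64/26` contains both of the subnets banned just before it — can be checked with the stdlib `ipaddress` module (a quick sketch, independent of fail2ban's own IP handling):

```python
import ipaddress

# the /26 spans 192.0.2.64 - 192.0.2.127:
outer = ipaddress.ip_network('192.0.2.64/26')
banned = [ipaddress.ip_network('192.0.2.96/28'),   # .96  - .111
          ipaddress.ip_network('192.0.2.112/28')]  # .112 - .127

# a single "unbanip 192.0.2.64/26" therefore covers both banned /28 subnets:
print([net.subnet_of(outer) for net in banned])  # → [True, True]
```

This is why the test expects both `Unban 192.0.2.96/28` and `Unban 192.0.2.112/28` from the one command.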
 		# reload all (one jail) with unban all:
 		self.pruneLog("[test-phase 7]")
@@ -1190,8 +1248,6 @@ class Fail2banServerTest(Fail2banClientServerBase):
 		self.assertLogged(
 			"Jail 'test-jail1' reloaded",
 			"[test-jail1] Unban 192.0.2.1",
-			"[test-jail1] Unban 192.0.2.2",
-			"[test-jail1] Unban 192.0.2.3",
 			"[test-jail1] Unban 192.0.2.4", all=True
 		)
 		# no restart occurred, no more ban (unbanned all using option "--unban"):
@@ -1199,8 +1255,6 @@ class Fail2banServerTest(Fail2banClientServerBase):
 			"Jail 'test-jail1' stopped",
 			"Jail 'test-jail1' started",
 			"[test-jail1] Ban 192.0.2.1",
-			"[test-jail1] Ban 192.0.2.2",
-			"[test-jail1] Ban 192.0.2.3",
 			"[test-jail1] Ban 192.0.2.4", all=True
 		)
@@ -1570,6 +1624,37 @@ class Fail2banServerTest(Fail2banClientServerBase):
 		self.assertLogged(
 			"192.0.2.11", "+ 600 =", all=True, wait=MID_WAITTIME)
+		# test stop with busy observer:
+		self.pruneLog("[test-phase end) stop on busy observer]")
+		tearDownMyTime()
+		a = {'state': 0}
+		obsMain = Observers.Main
+		def _long_action():
+			logSys.info('++ observer enters busy state ...')
+			a['state'] = 1
+			Utils.wait_for(lambda: a['state'] == 2, MAX_WAITTIME)
+			obsMain.db_purge(); # does nothing (db is already None)
+			logSys.info('-- observer leaves busy state.')
+		obsMain.add('call', _long_action)
+		obsMain.add('call', lambda: None)
+		# wait observer enter busy state:
+		Utils.wait_for(lambda: a['state'] == 1, MAX_WAITTIME)
+		# overwrite default wait time (normally 5 seconds):
+		obsMain_stop = obsMain.stop
+		def _stop(wtime=(0.01 if unittest.F2B.fast else 0.1), forceQuit=True):
+			return obsMain_stop(wtime, forceQuit)
+		obsMain.stop = _stop
+		# stop server and wait for end:
+		self.stopAndWaitForServerEnd(SUCCESS)
+		# check observer and db state:
+		self.assertNotLogged('observer leaves busy state')
+		self.assertFalse(obsMain.idle)
+		self.assertEqual(obsMain._ObserverThread__db, None)
+		# server is exited without wait for observer, stop it now:
+		a['state'] = 2
+		self.assertLogged('observer leaves busy state', wait=True)
+		obsMain.join()
 	# test multiple start/stop of the server (threaded in foreground) --
 	if False: # pragma: no cover
 		@with_foreground_server_thread()
@@ -81,15 +81,32 @@ def _test_exec_command_line(*args):
 	return _exit_code
 STR_00 = "Dec 31 11:59:59 [sshd] error: PAM: Authentication failure for kevin from 192.0.2.0"
+STR_00_NODT = "[sshd] error: PAM: Authentication failure for kevin from 192.0.2.0"
 RE_00 = r"(?:(?:Authentication failure|Failed [-/\w+]+) for(?: [iI](?:llegal|nvalid) user)?|[Ii](?:llegal|nvalid) user|ROOT LOGIN REFUSED) .*(?: from|FROM) <HOST>"
-RE_00_ID = r"Authentication failure for <F-ID>.*?</F-ID> from <HOST>$"
-RE_00_USER = r"Authentication failure for <F-USER>.*?</F-USER> from <HOST>$"
+RE_00_ID = r"Authentication failure for <F-ID>.*?</F-ID> from <ADDR>$"
+RE_00_USER = r"Authentication failure for <F-USER>.*?</F-USER> from <ADDR>$"
 FILENAME_01 = os.path.join(TEST_FILES_DIR, "testcase01.log")
 FILENAME_02 = os.path.join(TEST_FILES_DIR, "testcase02.log")
 FILENAME_WRONGCHAR = os.path.join(TEST_FILES_DIR, "testcase-wrong-char.log")
+# STR_ML_SSHD -- multiline log-excerpt with two sessions:
+#   192.0.2.1 (sshd[32307]) makes 2 failed attempts using public keys (without "Disconnecting: Too many authentication"),
+#     and delayed success on accepted (STR_ML_SSHD_OK) or no success by close on preauth phase (STR_ML_SSHD_FAIL)
+#   192.0.2.2 (sshd[32310]) makes 2 failed attempts using public keys (with "Disconnecting: Too many authentication"),
+#     and closed on preauth phase
+STR_ML_SSHD = """Nov 28 09:16:03 srv sshd[32307]: Failed publickey for git from 192.0.2.1 port 57904 ssh2: ECDSA 0e:ff:xx:xx:xx:xx:xx:xx:xx:xx:xx:...
+Nov 28 09:16:03 srv sshd[32307]: Failed publickey for git from 192.0.2.1 port 57904 ssh2: RSA 04:bc:xx:xx:xx:xx:xx:xx:xx:xx:xx:...
+Nov 28 09:16:03 srv sshd[32307]: Postponed publickey for git from 192.0.2.1 port 57904 ssh2 [preauth]
+Nov 28 09:16:05 srv sshd[32310]: Failed publickey for git from 192.0.2.2 port 57910 ssh2: ECDSA 1e:fe:xx:xx:xx:xx:xx:xx:xx:xx:xx:...
+Nov 28 09:16:05 srv sshd[32310]: Failed publickey for git from 192.0.2.2 port 57910 ssh2: RSA 14:ba:xx:xx:xx:xx:xx:xx:xx:xx:xx:...
+Nov 28 09:16:05 srv sshd[32310]: Disconnecting: Too many authentication failures for git [preauth]
+Nov 28 09:16:05 srv sshd[32310]: Connection closed by 192.0.2.2 [preauth]"""
+STR_ML_SSHD_OK = "Nov 28 09:16:06 srv sshd[32307]: Accepted publickey for git from 192.0.2.1 port 57904 ssh2: DSA 36:48:xx:xx:xx:xx:xx:xx:xx:xx:xx:..."
+STR_ML_SSHD_FAIL = "Nov 28 09:16:06 srv sshd[32307]: Connection closed by 192.0.2.1 [preauth]"
 FILENAME_SSHD = os.path.join(TEST_FILES_DIR, "logs", "sshd")
 FILTER_SSHD = os.path.join(CONFIG_DIR, 'filter.d', 'sshd.conf')
 FILENAME_ZZZ_SSHD = os.path.join(TEST_FILES_DIR, 'zzz-sshd-obsolete-multiline.log')
@@ -156,7 +173,7 @@ class Fail2banRegexTest(LogCaptureTestCase):
 			"--print-all-matched",
 			FILENAME_01, RE_00
 		))
-		self.assertLogged('Lines: 19 lines, 0 ignored, 13 matched, 6 missed')
+		self.assertLogged('Lines: 19 lines, 0 ignored, 16 matched, 3 missed')
 		self.assertLogged('Error decoding line');
 		self.assertLogged('Continuing to process line ignoring invalid characters')
@@ -170,7 +187,7 @@ class Fail2banRegexTest(LogCaptureTestCase):
 			"--print-all-matched", "--raw",
 			FILENAME_01, RE_00
 		))
-		self.assertLogged('Lines: 19 lines, 0 ignored, 16 matched, 3 missed')
+		self.assertLogged('Lines: 19 lines, 0 ignored, 19 matched, 0 missed')
 	def testDirectRE_1raw_noDns(self):
 		self.assertTrue(_test_exec(
@@ -178,7 +195,7 @@ class Fail2banRegexTest(LogCaptureTestCase):
 			"--print-all-matched", "--raw", "--usedns=no",
 			FILENAME_01, RE_00
 		))
-		self.assertLogged('Lines: 19 lines, 0 ignored, 13 matched, 6 missed')
+		self.assertLogged('Lines: 19 lines, 0 ignored, 16 matched, 3 missed')
 		# usage of <F-ID>\S+</F-ID> causes raw handling automatically:
 		self.pruneLog()
 		self.assertTrue(_test_exec(
@@ -291,10 +308,10 @@ class Fail2banRegexTest(LogCaptureTestCase):
 		#
 		self.assertTrue(_test_exec(
 			"--usedns", "no", "-d", "^Epoch", "--print-all-matched",
-			"1490349000 FAIL: failure\nhost: 192.0.2.35", "-L", "2",
+			"1490349000 FAIL: failure\nhost: 192.0.2.35",
 			r"^\s*FAIL:\s*.*\nhost:\s+<HOST>$"
 		))
-		self.assertLogged('Lines: 1 lines, 0 ignored, 1 matched, 0 missed')
+		self.assertLogged('Lines: 2 lines, 0 ignored, 2 matched, 0 missed')
 	def testRegexEpochPatterns(self):
 		self.assertTrue(_test_exec(
@@ -324,6 +341,23 @@ class Fail2banRegexTest(LogCaptureTestCase):
 		self.assertTrue(_test_exec('-o', 'id', STR_00, RE_00_ID))
 		self.assertLogged('kevin')
 		self.pruneLog()
+		# multiple id combined to a tuple (id, tuple_id):
+		self.assertTrue(_test_exec('-o', 'id',
+			'1591983743.667 192.0.2.1 192.0.2.2',
+			r'^\s*<F-ID/> <F-TUPLE_ID>\S+</F-TUPLE_ID>'))
+		self.assertLogged(str(('192.0.2.1', '192.0.2.2')))
+		self.pruneLog()
+		# multiple id combined to a tuple, id first - (id, tuple_id_1, tuple_id_2):
+		self.assertTrue(_test_exec('-o', 'id',
+			'1591983743.667 left 192.0.2.3 right',
+			r'^\s*<F-TUPLE_ID_1>\S+</F-TUPLE_ID_1> <F-ID/> <F-TUPLE_ID_2>\S+</F-TUPLE_ID_2>'))
+		self.pruneLog()
+		# id had higher precedence as ip-address:
+		self.assertTrue(_test_exec('-o', 'id',
+			'1591983743.667 left [192.0.2.4]:12345 right',
+			r'^\s*<F-TUPLE_ID_1>\S+</F-TUPLE_ID_1> <F-ID><ADDR>:<F-PORT/></F-ID> <F-TUPLE_ID_2>\S+</F-TUPLE_ID_2>'))
+		self.assertLogged(str(('[192.0.2.4]:12345', 'left', 'right')))
+		self.pruneLog()
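The `<F-ID/>`/`<F-TUPLE_ID>` cases in the hunk above combine several captured groups into one tuple identifier. The grouping idea — though not fail2ban's actual tag engine — can be mimicked with plain `re` named groups on the same sample line:

```python
import re

line = '1591983743.667 192.0.2.1 192.0.2.2'
# the first token is the epoch timestamp; the id is built as a tuple
# of the two remaining captures (sketch of the <F-ID>/<F-TUPLE_ID> pairing):
m = re.search(r'^\S+\s+(?P<fid>\S+) (?P<tuple_id>\S+)$', line)
fid = (m.group('fid'), m.group('tuple_id'))
print(fid)  # → ('192.0.2.1', '192.0.2.2')
```

That tuple is what the test expects in the log via `str(('192.0.2.1', '192.0.2.2'))`.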
 		# row with id :
 		self.assertTrue(_test_exec('-o', 'row', STR_00, RE_00_ID))
 		self.assertLogged("['kevin'", "'ip4': '192.0.2.0'", "'fid': 'kevin'", all=True)
@@ -340,6 +374,73 @@ class Fail2banRegexTest(LogCaptureTestCase):
 		self.assertTrue(_test_exec('-o', 'user', STR_00, RE_00_USER))
 		self.assertLogged('kevin')
 		self.pruneLog()
+		# complex substitution using tags (ip, user, family):
+		self.assertTrue(_test_exec('-o', '<ip>, <F-USER>, <family>', STR_00, RE_00_USER))
+		self.assertLogged('192.0.2.0, kevin, inet4')
+		self.pruneLog()
+	def testNoDateTime(self):
+		# datepattern doesn't match:
+		self.assertTrue(_test_exec('-d', '{^LN-BEG}EPOCH', '-o', 'Found-ID:<F-ID>', STR_00_NODT, RE_00_ID))
+		self.assertLogged(
+			"Found a match but no valid date/time found",
+			"Match without a timestamp:",
+			"Found-ID:kevin", all=True)
+		self.pruneLog()
+		# explicitly no datepattern:
+		self.assertTrue(_test_exec('-d', '{NONE}', '-o', 'Found-ID:<F-ID>', STR_00_NODT, RE_00_ID))
+		self.assertLogged(
+			"Found-ID:kevin", all=True)
+		self.assertNotLogged(
+			"Found a match but no valid date/time found",
+			"Match without a timestamp:", all=True)
+		self.pruneLog()
+	def testFrmtOutputWrapML(self):
+		unittest.F2B.SkipIfCfgMissing(stock=True)
+		# complex substitution using tags and message (ip, user, msg):
+		self.assertTrue(_test_exec('-o', '<ip>, <F-USER>, <msg>',
+			'-c', CONFIG_DIR, '--usedns', 'no',
+			STR_ML_SSHD + "\n" + STR_ML_SSHD_OK, 'sshd[logtype=short, publickey=invalid]'))
+		# be sure we don't have IP in one line and have it in another:
+		lines = STR_ML_SSHD.split("\n")
+		self.assertTrue('192.0.2.2' not in lines[-2] and '192.0.2.2' in lines[-1])
+		# but both are in output "merged" with IP and user:
+		self.assertLogged(
+			'192.0.2.2, git, '+lines[-2],
+			'192.0.2.2, git, '+lines[-1],
+			all=True)
+		# nothing should be found for 192.0.2.1 (mode is not aggressive):
+		self.assertNotLogged('192.0.2.1, git, ')
+		# test with publickey (nofail) - would not produce output for 192.0.2.1 because accepted:
+		self.pruneLog("[test-phase 1] mode=aggressive & publickey=nofail + OK (accepted)")
+		self.assertTrue(_test_exec('-o', '<ip>, <F-USER>, <msg>',
+			'-c', CONFIG_DIR, '--usedns', 'no',
+			STR_ML_SSHD + "\n" + STR_ML_SSHD_OK, 'sshd[logtype=short, mode=aggressive]'))
+		self.assertLogged(
+			'192.0.2.2, git, '+lines[-4],
+			'192.0.2.2, git, '+lines[-3],
+			'192.0.2.2, git, '+lines[-2],
+			'192.0.2.2, git, '+lines[-1],
+			all=True)
+		# nothing should be found for 192.0.2.1 (access gained so failures ignored):
+		self.assertNotLogged('192.0.2.1, git, ')
+		# now same test but "accepted" replaced with "closed" on preauth phase:
+		self.pruneLog("[test-phase 2] mode=aggressive & publickey=nofail + FAIL (closed on preauth)")
+		self.assertTrue(_test_exec('-o', '<ip>, <F-USER>, <msg>',
+			'-c', CONFIG_DIR, '--usedns', 'no',
+			STR_ML_SSHD + "\n" + STR_ML_SSHD_FAIL, 'sshd[logtype=short, mode=aggressive]'))
+		# 192.0.2.1 should be found for every failure (2x failed key + 1x closed):
+		lines = STR_ML_SSHD.split("\n")[0:2] + STR_ML_SSHD_FAIL.split("\n")[-1:]
+		self.assertLogged(
+			'192.0.2.1, git, '+lines[-3],
+			'192.0.2.1, git, '+lines[-2],
+			'192.0.2.1, git, '+lines[-1],
+			all=True)
 	def testWrongFilterFile(self):
 		# use test log as filter file to cover eror cases...
@@ -420,7 +521,7 @@ class Fail2banRegexTest(LogCaptureTestCase):
 	def testLogtypeSystemdJournal(self): # pragma: no cover
 		if not fail2banregex.FilterSystemd:
-			raise unittest.SkipTest('Skip test because no systemd backand available')
+			raise unittest.SkipTest('Skip test because no systemd backend available')
 		self.assertTrue(_test_exec(
 			"systemd-journal", FILTER_ZZZ_GEN
 			+'[journalmatch="SYSLOG_IDENTIFIER=\x01\x02dummy\x02\x01",'
@@ -0,0 +1,12 @@
+[INCLUDES]
+# Read common prefixes. If any customizations available -- read them from
+# common.local
+before = testcase-common.conf
+[Definition]
+_daemon = sshd
+__prefix_line = %(known/__prefix_line)s(?:\w{14,20}: )?
+failregex = %(__prefix_line)s test
@@ -0,0 +1,4 @@
+[Definition]
+# no options here, coverage for testFilterReaderSubstKnown:
+# avoid to overwrite known/option with unmodified (not available) value of option from .local config file
@@ -6,3 +6,6 @@
 # failJSON: { "time": "2018-09-28T09:18:06", "match": true , "host": "192.0.2.1", "desc": "two client entries in message (gh-2247)" }
 [Sat Sep 28 09:18:06 2018] [error] [client 192.0.2.1:55555] [client 192.0.2.1] ModSecurity: [file "/etc/httpd/modsecurity.d/10_asl_rules.conf"] [line "635"] [id "340069"] [rev "4"] [msg "Atomicorp.com UNSUPPORTED DELAYED Rules: Web vulnerability scanner"] [severity "CRITICAL"] Access denied with code 403 (phase 2). Pattern match "(?:nessus(?:_is_probing_you_|test)|^/w00tw00t\\\\.at\\\\.)" at REQUEST_URI. [hostname "192.81.249.191"] [uri "/w00tw00t.at.blackhats.romanian.anti-sec:)"] [unique_id "4Q6RdsBR@b4AAA65LRUAAAAA"]
+# failJSON: { "time": "2020-05-09T00:35:52", "match": true , "host": "192.0.2.2", "desc": "new format - apache 2.4 and php-fpm (gh-2717)" }
+[Sat May 09 00:35:52.389262 2020] [:error] [pid 22406:tid 139985298601728] [client 192.0.2.2:47762] [client 192.0.2.2] ModSecurity: Access denied with code 401 (phase 2). Operator EQ matched 1 at IP:blocked. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_wp_login.conf"] [line "14"] [id "500000"] [msg "Ip address blocked for 15 minutes, more than 5 login attempts in 3 minutes."] [hostname "example.com"] [uri "/wp-login.php"] [unique_id "XrYlGL5IY3I@EoLOgAAAA8"], referer: https://example.com/wp-login.php
@@ -3,3 +3,9 @@
 # failJSON: { "time": "2019-11-25T21:39:58", "match": true , "host": "192.168.0.21" }
 2019-11-25 21:39:58.464 +01:00 [WRN] Failed login attempt, 2FA invalid. 192.168.0.21
+# failJSON: { "time": "2019-11-25T21:39:58", "match": true , "host": "192.168.0.21" }
+2019-11-25 21:39:58.464 +01:00 [Warning] Failed login attempt, 2FA invalid. 192.168.0.21
+# failJSON: { "time": "2019-09-24T13:16:50", "match": true , "host": "192.168.0.23" }
+2019-09-24T13:16:50 e5a81dbf7fd1 Bitwarden-Identity[1]: [Bit.Core.IdentityServer.ResourceOwnerPasswordValidator] Failed login attempt. 192.168.0.23
@@ -8,7 +8,9 @@ Jul 4 18:39:39 mail courieresmtpd: error,relay=::ffff:1.2.3.4,from=<picaro@astr
 Jul 6 03:42:28 whistler courieresmtpd: error,relay=::ffff:1.2.3.4,from=<>,to=<admin at memcpy>: 550 User unknown.
 # failJSON: { "time": "2004-11-21T23:16:17", "match": true , "host": "1.2.3.4" }
 Nov 21 23:16:17 server courieresmtpd: error,relay=::ffff:1.2.3.4,from=<>,to=<>: 550 User unknown.
-# failJSON: { "time": "2004-08-14T12:51:04", "match": true , "host": "1.2.3.4" }
+# failJSON: { "time": "2005-08-14T12:51:04", "match": true , "host": "1.2.3.4" }
 Aug 14 12:51:04 HOSTNAME courieresmtpd: error,relay=::ffff:1.2.3.4,from=<firozquarl@aclunc.org>,to=<BOGUSUSER@HOSTEDDOMAIN.org>: 550 User unknown.
-# failJSON: { "time": "2004-08-14T12:51:04", "match": true , "host": "1.2.3.4" }
+# failJSON: { "time": "2005-08-14T12:51:04", "match": true , "host": "1.2.3.4" }
 Aug 14 12:51:04 mail.server courieresmtpd[26762]: error,relay=::ffff:1.2.3.4,msg="535 Authentication failed.",cmd: AUTH PLAIN AAAAABBBBCCCCWxlZA== admin
+# failJSON: { "time": "2005-08-14T12:51:05", "match": true , "host": "192.0.2.3" }
+Aug 14 12:51:05 mail.server courieresmtpd[425070]: error,relay=::ffff:192.0.2.3,port=43632,msg="535 Authentication failed.",cmd: AUTH LOGIN PlcmSpIp@example.com
@@ -0,0 +1,5 @@
+# Access of unauthorized host in /var/log/gitlab/gitlab-rails/application.log
+# failJSON: { "time": "2020-04-09T16:04:00", "match": true , "host": "80.10.11.12" }
+2020-04-09T14:04:00.667Z: Failed Login: username=admin ip=80.10.11.12
+# failJSON: { "time": "2020-04-09T16:15:09", "match": true , "host": "80.10.11.12" }
+2020-04-09T14:15:09.344Z: Failed Login: username=user name ip=80.10.11.12
@@ -0,0 +1,5 @@
+# Access of unauthorized host in /var/log/grafana/grafana.log
+# failJSON: { "time": "2020-10-19T17:44:33", "match": true , "host": "182.56.23.12" }
+t=2020-10-19T17:44:33+0200 lvl=eror msg="Invalid username or password" logger=context userId=0 orgId=0 uname= error="Invalid Username or Password" remote_addr=182.56.23.12
+# failJSON: { "time": "2020-10-19T18:44:33", "match": true , "host": "182.56.23.13" }
+t=2020-10-19T18:44:33+0200 lvl=eror msg="Invalid username or password" logger=context userId=0 orgId=0 uname= error="User not found" remote_addr=182.56.23.13
@@ -10,3 +10,8 @@ WARNING: Authentication attempt from 192.0.2.0 for user "null" failed.
 apr 16, 2013 8:32:28 AM org.slf4j.impl.JCLLoggerAdapter warn
 # failJSON: { "time": "2013-04-16T08:32:28", "match": true , "host": "192.0.2.0" }
 WARNING: Authentication attempt from 192.0.2.0 for user "pippo" failed.
+# filterOptions: {"logging": "webapp"}
+# failJSON: { "time": "2005-08-13T12:57:32", "match": true , "host": "182.23.72.36" }
+12:57:32.907 [http-nio-8080-exec-10] WARN  o.a.g.r.auth.AuthenticationService - Authentication attempt from 182.23.72.36 for user "guacadmin" failed.
@@ -33,3 +33,7 @@ Sep 16 21:30:32 catinthehat mysqld: 130916 21:30:32 [Warning] Access denied for
 2019-09-06T01:45:18 srv mysqld: 2019-09-06  1:45:18 140581192722176 [Warning] Access denied for user 'global'@'192.0.2.2' (using password: YES)
 # failJSON: { "time": "2019-09-24T13:16:50", "match": true , "host": "192.0.2.3", "desc": "ISO timestamp within log message" }
 2019-09-24T13:16:50 srv mysqld[1234]: 2019-09-24 13:16:50 8756 [Warning] Access denied for user 'root'@'192.0.2.3' (using password: YES)
+# filterOptions: [{"logtype": "file"}, {"logtype": "short"}, {"logtype": "journal"}]
+# failJSON: { "match": true , "host": "192.0.2.1", "user":"root", "desc": "mariadb 10.4 log format, gh-2611" }
+2020-01-16 21:34:14 4644 [Warning] Access denied for user 'root'@'192.0.2.1' (using password: YES)
@@ -137,6 +137,11 @@ Jan 14 16:18:16 xxx postfix/smtpd[14933]: warning: host[192.0.2.5]: SASL CRAM-MD
 # filterOptions: [{"mode": "ddos"}, {"mode": "aggressive"}]
+# failJSON: { "time": "2005-02-10T13:26:34", "match": true , "host": "192.0.2.1" }
+Feb 10 13:26:34 srv postfix/smtpd[123]: disconnect from unknown[192.0.2.1] helo=1 auth=0/1 quit=1 commands=2/3
+# failJSON: { "time": "2005-02-10T13:26:34", "match": true , "host": "192.0.2.2" }
+Feb 10 13:26:34 srv postfix/smtpd[123]: disconnect from unknown[192.0.2.2] ehlo=1 auth=0/1 rset=1 quit=1 commands=3/4
 # failJSON: { "time": "2005-02-18T09:45:10", "match": true , "host": "192.0.2.10" }
 Feb 18 09:45:10 xxx postfix/smtpd[42]: lost connection after CONNECT from spammer.example.com[192.0.2.10]
 # failJSON: { "time": "2005-02-18T09:45:12", "match": true , "host": "192.0.2.42" }
@@ -1,6 +1,6 @@
# failJSON: { "time": "2005-01-10T00:00:00", "match": true , "host": "123.123.123.123", "user": "username" }
Jan 10 00:00:00 myhost proftpd[12345] myhost.domain.com (123.123.123.123[123.123.123.123]): USER username (Login failed): User in /etc/ftpusers
# failJSON: { "time": "2005-02-01T00:00:00", "match": true , "host": "123.123.123.123", "user": "username" }
Feb 1 00:00:00 myhost proftpd[12345] myhost.domain.com (123.123.123.123[123.123.123.123]): USER username: no such user found from 123.123.123.123 [123.123.123.123] to 234.234.234.234:21
# failJSON: { "time": "2005-06-09T07:30:58", "match": true , "host": "67.227.224.66" }
Jun 09 07:30:58 platypus.ace-hosting.com.au proftpd[11864] platypus.ace-hosting.com.au (mail.bloodymonster.net[::ffff:67.227.224.66]): USER username (Login failed): Incorrect password.
@@ -12,7 +12,9 @@ Jun 13 22:07:23 platypus.ace-hosting.com.au proftpd[15719] platypus.ace-hosting.
Jun 14 00:09:59 platypus.ace-hosting.com.au proftpd[17839] platypus.ace-hosting.com.au (::ffff:59.167.242.100[::ffff:59.167.242.100]): USER platypus.ace-hosting.com.au proftpd[17424] platypus.ace-hosting.com.au (hihoinjection[1.2.3.44]): no such user found from ::ffff:59.167.242.100 [::ffff:59.167.242.100] to ::ffff:113.212.99.194:21
# failJSON: { "time": "2005-05-31T10:53:25", "match": true , "host": "1.2.3.4" }
May 31 10:53:25 mail proftpd[15302]: xxxxxxxxxx (::ffff:1.2.3.4[::ffff:1.2.3.4]) - Maximum login attempts (3) exceeded
# failJSON: { "time": "2004-10-02T15:45:44", "match": true , "host": "192.0.2.13", "user": "Root", "desc": "dot at end is optional (mod_sftp, gh-2246)" }
Oct 2 15:45:44 ftp01 proftpd[5517]: 192.0.2.13 (192.0.2.13[192.0.2.13]) - SECURITY VIOLATION: Root login attempted
# failJSON: { "time": "2004-12-05T15:44:32", "match": true , "host": "1.2.3.4", "user": "jtittle@domain.org" }
Dec 5 15:44:32 serv1 proftpd[70944]: serv1.domain.com (example.com[1.2.3.4]) - USER jtittle@domain.org: no such user found from example.com [1.2.3.4] to 1.2.3.4:21
# failJSON: { "time": "2013-11-16T21:59:30", "match": true , "host": "1.2.3.4", "desc": "proftpd-basic 1.3.5~rc3-2.1 on Debian uses date format with milliseconds if logging under /var/log/proftpd/proftpd.log" }
2013-11-16 21:59:30,121 novo proftpd[25891] localhost (andy[1.2.3.4]): USER kjsad: no such user found from andy [1.2.3.5] to ::ffff:192.168.1.14:21


@@ -17,3 +17,8 @@ Feb 24 14:00:00 server sendmail[26592]: u0CB32qX026592: [192.0.2.1]: possible SM
# failJSON: { "time": "2005-02-24T14:00:01", "match": true , "host": "192.0.2.2", "desc": "long PID, ID longer as 14 chars (gh-2563)" }
Feb 24 14:00:01 server sendmail[3529566]: xA32R2PQ3529566: [192.0.2.2]: possible SMTP attack: command=AUTH, count=5
# failJSON: { "time": "2005-02-25T04:02:27", "match": true , "host": "192.0.2.3", "desc": "sendmail 8.16.1, AUTH_FAIL_LOG_USER (gh-2757)" }
Feb 25 04:02:27 relay1 sendmail[16664]: 06I02CNi016764: AUTH failure (LOGIN): authentication failure (-13) SASL(-13): authentication failure: checkpass failed, user=user@example.com, relay=example.com [192.0.2.3] (may be forged)
# failJSON: { "time": "2005-02-25T04:02:28", "match": true , "host": "192.0.2.4", "desc": "injection attempt on user name" }
Feb 25 04:02:28 relay1 sendmail[16665]: 06I02CNi016765: AUTH failure (LOGIN): authentication failure (-13) SASL(-13): authentication failure: checkpass failed, user=criminal, relay=[192.0.2.100], relay=[192.0.2.4] (may be forged)


@@ -103,3 +103,7 @@ Mar 29 22:51:42 kismet sm-mta[24202]: x2TMpAlI024202: internettl.org [104.152.52
# failJSON: { "time": "2005-03-29T22:51:43", "match": true , "host": "192.0.2.2", "desc": "long PID, ID longer as 14 chars (gh-2563)" }
Mar 29 22:51:43 server sendmail[3529565]: xA32R2PQ3529565: [192.0.2.2] did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA
# failJSON: { "time": "2005-03-29T22:51:45", "match": true , "host": "192.0.2.3", "desc": "sendmail 8.15.2 default names IPv4/6 (gh-2787)" }
Mar 29 22:51:45 server sm-mta[50437]: 06QDQnNf050437: example.com [192.0.2.3] did not issue MAIL/EXPN/VRFY/ETRN during connection to IPv4
# failJSON: { "time": "2005-03-29T22:51:46", "match": true , "host": "2001:DB8::1", "desc": "IPv6" }
Mar 29 22:51:46 server sm-mta[50438]: 06QDQnNf050438: example.com [IPv6:2001:DB8::1] did not issue MAIL/EXPN/VRFY/ETRN during connection to IPv6


@@ -0,0 +1,7 @@
# Access of unauthorized host in /usr/local/vpnserver/security_log/*/sec.log
# failJSON: { "time": "2020-05-12T10:53:19", "match": true , "host": "80.10.11.12" }
2020-05-12 10:53:19.781 Connection "CID-72": User authentication failed. The user name that has been provided was "bob", from 80.10.11.12.
# Access of unauthorized host in syslog
# failJSON: { "time": "2020-05-13T10:53:19", "match": true , "host": "80.10.11.13" }
2020-05-13T10:53:19 localhost [myserver.com/VPN/defaultvpn] (2020-05-13 10:53:19.591) <SECURITY_LOG>: Connection "CID-594": User authentication failed. The user name that has been provided was "alice", from 80.10.11.13.
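Each `# failJSON:` comment above declares the expected filter result (time, match, host, user, …) for the log sample on the very next line. A minimal sketch of how such annotation/sample pairs could be read, using a hypothetical `parse_testcases` helper rather than fail2ban's actual test runner:

```python
import json

def parse_testcases(text):
    """Pair each '# failJSON: {...}' annotation with the log line below it."""
    cases, pending = [], None
    for line in text.splitlines():
        if line.startswith("# failJSON:"):
            # the rest of the line is a JSON object with the expected result
            pending = json.loads(line[len("# failJSON:"):])
        elif line.startswith("#") or not line.strip():
            continue  # other comments (e.g. filterOptions) and blank lines
        elif pending is not None:
            cases.append((pending, line))
            pending = None
    return cases

sample = (
    '# failJSON: { "time": "2020-05-12T10:53:19", "match": true , "host": "80.10.11.12" }\n'
    '2020-05-12 10:53:19.781 Connection "CID-72": User authentication failed.\n'
)
print(parse_testcases(sample)[0][0]["host"])  # prints 80.10.11.12
```

This sketch ignores `filterOptions` and `constraint` handling, which the real test harness also interprets.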


@@ -134,7 +134,7 @@ Sep 29 17:15:02 spaceman sshd[12946]: Failed password for user from 127.0.0.1 po
# failJSON: { "time": "2004-09-29T17:15:02", "match": true , "host": "127.0.0.1", "desc": "Injecting while exhausting initially present {0,100} match length limits set for ruser etc" }
Sep 29 17:15:02 spaceman sshd[12946]: Failed password for user from 127.0.0.1 port 20000 ssh1: ruser XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX from 1.2.3.4
# failJSON: { "time": "2004-09-29T17:15:03", "match": true , "host": "aaaa:bbbb:cccc:1234::1:1", "desc": "Injecting while exhausting initially present {0,100} match length limits set for ruser etc" }
Sep 29 17:15:03 spaceman sshd[12947]: Failed password for user from aaaa:bbbb:cccc:1234::1:1 port 20000 ssh1: ruser XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX from 1.2.3.4
# failJSON: { "time": "2004-11-11T08:04:51", "match": true , "host": "127.0.0.1", "desc": "Injecting on username ssh 'from 10.10.1.1'@localhost" }
Nov 11 08:04:51 redbamboo sshd[2737]: Failed password for invalid user from 10.10.1.1 from 127.0.0.1 port 58946 ssh2
@@ -166,9 +166,11 @@ Nov 28 09:16:03 srv sshd[32307]: Connection closed by 192.0.2.1
Nov 28 09:16:05 srv sshd[32310]: Failed publickey for git from 192.0.2.111 port 57910 ssh2: ECDSA 1e:fe:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
# failJSON: { "match": false }
Nov 28 09:16:05 srv sshd[32310]: Failed publickey for git from 192.0.2.111 port 57910 ssh2: RSA 14:ba:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
# failJSON: { "constraint": "name == 'sshd'", "time": "2004-11-28T09:16:05", "match": true , "attempts": 3, "desc": "Should catch failure - no success/no accepted public key" }
Nov 28 09:16:05 srv sshd[32310]: Disconnecting: Too many authentication failures for git [preauth]
# failJSON: { "constraint": "opts.get('mode') != 'aggressive'", "match": false, "desc": "Nofail in normal mode, failure already produced above" }
Nov 28 09:16:05 srv sshd[32310]: Connection closed by 192.0.2.111 [preauth]
# failJSON: { "constraint": "opts.get('mode') == 'aggressive'", "time": "2004-11-28T09:16:05", "match": true , "host": "192.0.2.111", "attempts":1, "desc": "Matches in aggressive mode only" }
Nov 28 09:16:05 srv sshd[32310]: Connection closed by 192.0.2.111 [preauth]
# failJSON: { "match": false }
@@ -215,7 +217,7 @@ Apr 27 13:02:04 host sshd[29116]: Received disconnect from 1.2.3.4: 11: Normal S
# Match sshd auth errors on OpenSUSE systems (gh-1024)
# failJSON: { "match": false, "desc": "No failure until closed or another fail (e. g. F-MLFFORGET by success/accepted password can avoid failure, see gh-2070)" }
2015-04-16T18:02:50.321974+00:00 host sshd[2716]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.0.2.112 user=root
# failJSON: { "constraint": "opts.get('mode') == 'aggressive'", "time": "2015-04-16T20:02:50", "match": true , "host": "192.0.2.112", "desc": "Should catch failure - no success/no accepted password" }
2015-04-16T18:02:50.568798+00:00 host sshd[2716]: Connection closed by 192.0.2.112 [preauth]
# disable this test-cases block for obsolete multi-line filter (zzz-sshd-obsolete...):
@@ -238,7 +240,7 @@ Mar 7 18:53:20 bar sshd[1556]: Connection closed by 192.0.2.113
Mar 7 18:53:22 bar sshd[1558]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser=root rhost=192.0.2.114
# failJSON: { "time": "2005-03-07T18:53:23", "match": true , "attempts": 2, "users": ["root", "sudoer"], "host": "192.0.2.114", "desc": "Failure: attempt 2nd user" }
Mar 7 18:53:23 bar sshd[1558]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser=sudoer rhost=192.0.2.114
# failJSON: { "time": "2005-03-07T18:53:24", "match": true , "attempts": 1, "users": ["root", "sudoer", "known"], "host": "192.0.2.114", "desc": "Failure: attempt 3rd user" }
Mar 7 18:53:24 bar sshd[1558]: Accepted password for known from 192.0.2.114 port 52100 ssh2
# failJSON: { "match": false , "desc": "No failure" }
Mar 7 18:53:24 bar sshd[1558]: pam_unix(sshd:session): session opened for user known by (uid=0)
@@ -248,14 +250,14 @@ Mar 7 18:53:24 bar sshd[1558]: pam_unix(sshd:session): session opened for user
Mar 7 18:53:32 bar sshd[1559]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser=root rhost=192.0.2.116
# failJSON: { "match": false , "desc": "Still no failure (second try, same user)" }
Mar 7 18:53:32 bar sshd[1559]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser=root rhost=192.0.2.116
# failJSON: { "time": "2005-03-07T18:53:34", "match": true , "attempts": 3, "users": ["root", "known"], "host": "192.0.2.116", "desc": "Failure: attempt 2nd user" }
Mar 7 18:53:34 bar sshd[1559]: Accepted password for known from 192.0.2.116 port 52100 ssh2
# failJSON: { "match": false , "desc": "No failure" }
Mar 7 18:53:38 bar sshd[1559]: Connection closed by 192.0.2.116
# failJSON: { "time": "2005-03-19T16:47:48", "match": true , "attempts": 1, "user": "admin", "host": "192.0.2.117", "desc": "Failure: attempt invalid user" }
Mar 19 16:47:48 test sshd[5672]: Invalid user admin from 192.0.2.117 port 44004
# failJSON: { "time": "2005-03-19T16:47:49", "match": true , "attempts": 1, "user": "admin", "host": "192.0.2.117", "desc": "Failure: attempt to change user (disallowed)" }
Mar 19 16:47:49 test sshd[5672]: Disconnecting invalid user admin 192.0.2.117 port 44004: Change of username or service not allowed: (admin,ssh-connection) -> (user,ssh-connection) [preauth]
# failJSON: { "time": "2005-03-19T16:47:50", "match": false, "desc": "Disconnected during preauth phase (no failure in normal mode)" }
Mar 19 16:47:50 srv sshd[5672]: Disconnected from authenticating user admin 192.0.2.6 port 33553 [preauth]
@@ -294,6 +296,9 @@ Nov 24 23:46:43 host sshd[32686]: fatal: Read from socket failed: Connection res
# failJSON: { "time": "2005-03-15T09:20:57", "match": true , "host": "192.0.2.39", "desc": "Singleline for connection reset by" }
Mar 15 09:20:57 host sshd[28972]: Connection reset by 192.0.2.39 port 14282 [preauth]
# failJSON: { "time": "2005-03-16T09:29:50", "match": true , "host": "192.0.2.20", "desc": "connection reset by user (gh-2662)" }
Mar 16 09:29:50 host sshd[19131]: Connection reset by authenticating user root 192.0.2.20 port 1558 [preauth]
# failJSON: { "time": "2005-07-17T23:03:05", "match": true , "host": "192.0.2.10", "user": "root", "desc": "user name additionally, gh-2185" }
Jul 17 23:03:05 srv sshd[1296]: Connection closed by authenticating user root 192.0.2.10 port 46038 [preauth]
# failJSON: { "time": "2005-07-17T23:04:00", "match": true , "host": "192.0.2.11", "user": "test 127.0.0.1", "desc": "check inject on username, gh-2185" }
@@ -303,6 +308,13 @@ Jul 17 23:04:01 srv sshd[1300]: Connection closed by authenticating user test 12
# filterOptions: [{"test.condition":"name=='sshd'", "mode": "ddos"}, {"test.condition":"name=='sshd'", "mode": "aggressive"}]
# failJSON: { "match": false }
Feb 17 17:40:17 sshd[19725]: Connection from 192.0.2.10 port 62004 on 192.0.2.10 port 22
# failJSON: { "time": "2005-02-17T17:40:17", "match": true , "host": "192.0.2.10", "desc": "ddos: port scanner (invalid protocol identifier)" }
Feb 17 17:40:17 sshd[19725]: error: kex_exchange_identification: client sent invalid protocol identifier ""
# failJSON: { "time": "2005-02-17T17:40:18", "match": true , "host": "192.0.2.10", "desc": "ddos: flood attack vector, gh-2850" }
Feb 17 17:40:18 sshd[19725]: error: kex_exchange_identification: Connection closed by remote host
# failJSON: { "time": "2005-03-15T09:21:01", "match": true , "host": "192.0.2.212", "desc": "DDOS mode causes failure on close within preauth stage" }
Mar 15 09:21:01 host sshd[2717]: Connection closed by 192.0.2.212 [preauth]
# failJSON: { "time": "2005-03-15T09:21:02", "match": true , "host": "192.0.2.212", "desc": "DDOS mode causes failure on close within preauth stage" }
@@ -311,6 +323,11 @@ Mar 15 09:21:02 host sshd[2717]: Connection closed by 192.0.2.212 [preauth]
# failJSON: { "time": "2005-07-18T17:19:11", "match": true , "host": "192.0.2.4", "desc": "ddos: disconnect on preauth phase, gh-2115" }
Jul 18 17:19:11 srv sshd[2101]: Disconnected from 192.0.2.4 port 36985 [preauth]
# failJSON: { "time": "2005-06-06T04:17:04", "match": true , "host": "192.0.2.68", "dns": null, "user": "", "desc": "empty user, gh-2749" }
Jun 6 04:17:04 host sshd[1189074]: Invalid user from 192.0.2.68 port 34916
# failJSON: { "time": "2005-06-06T04:17:09", "match": true , "host": "192.0.2.68", "dns": null, "user": "", "desc": "empty user, gh-2749" }
Jun 6 04:17:09 host sshd[1189074]: Connection closed by invalid user 192.0.2.68 port 34916 [preauth]
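The `constraint` fields used in several annotations (e.g. `opts.get('mode') == 'aggressive'` or `name == 'sshd'`) are Python expressions that restrict a test case to particular filter names or option sets. A rough sketch of the idea, assuming a hypothetical `constraint_holds` helper evaluated against the filter name and its options (not the project's actual test-runner code):

```python
def constraint_holds(constraint, name, opts):
    """Return True if a failJSON 'constraint' expression applies.

    An absent constraint always applies; otherwise the expression is
    evaluated with the filter's name and option dict in scope.
    """
    if not constraint:
        return True
    return bool(eval(constraint, {"name": name, "opts": opts}))

# aggressive-mode-only case matches only when mode is set accordingly
print(constraint_holds("opts.get('mode') == 'aggressive'",
                       "sshd", {"mode": "aggressive"}))  # prints True
```

Using `eval` is acceptable here because the constraint strings come from the project's own trusted test files, not from untrusted input.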
# filterOptions: [{"mode": "extra"}, {"mode": "aggressive"}]
# several other cases from gh-864:
@@ -320,6 +337,8 @@ Nov 25 01:34:12 srv sshd[123]: Received disconnect from 127.0.0.1: 14: No suppor
Nov 25 01:35:13 srv sshd[123]: error: Received disconnect from 127.0.0.1: 14: No supported authentication methods available [preauth]
# failJSON: { "time": "2004-11-25T01:35:14", "match": true , "host": "192.168.2.92", "desc": "Optional space after port" }
Nov 25 01:35:14 srv sshd[3625]: error: Received disconnect from 192.168.2.92 port 1684:14: No supported authentication methods available [preauth]
# failJSON: { "time": "2004-11-25T01:35:15", "match": true , "host": "192.168.2.93", "desc": "No authentication methods available (supported is optional, gh-2682)" }
Nov 25 01:35:15 srv sshd[3626]: error: Received disconnect from 192.168.2.93 port 1883:14: No authentication methods available [preauth]
# gh-1545:
# failJSON: { "time": "2004-11-26T13:03:29", "match": true , "host": "192.0.2.1", "desc": "No matching cipher" }
@@ -332,7 +351,7 @@ Nov 26 13:03:30 srv sshd[45]: fatal: Unable to negotiate with 192.0.2.2 port 554
Nov 26 15:03:30 host sshd[22440]: Connection from 192.0.2.3 port 39678 on 192.168.1.9 port 22
# failJSON: { "time": "2004-11-26T15:03:31", "match": true , "host": "192.0.2.3", "desc": "Multiline - no matching key exchange method" }
Nov 26 15:03:31 host sshd[22440]: fatal: Unable to negotiate a key exchange method [preauth]
# failJSON: { "time": "2004-11-26T15:03:32", "match": true , "host": "192.0.2.3", "constraint": "name == 'sshd'", "desc": "Second attempt within the same connect" }
Nov 26 15:03:32 host sshd[22440]: fatal: Unable to negotiate a key exchange method [preauth]
# gh-1943 (previous OpenSSH log-format)


@@ -135,7 +135,7 @@ srv sshd[12946]: Failed password for user from 127.0.0.1 port 20000 ssh1: ruser
# failJSON: { "match": true , "host": "127.0.0.1", "desc": "Injecting while exhausting initially present {0,100} match length limits set for ruser etc" }
srv sshd[12946]: Failed password for user from 127.0.0.1 port 20000 ssh1: ruser XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX from 1.2.3.4
# failJSON: { "match": true , "host": "aaaa:bbbb:cccc:1234::1:1", "desc": "Injecting while exhausting initially present {0,100} match length limits set for ruser etc" }
srv sshd[12947]: Failed password for user from aaaa:bbbb:cccc:1234::1:1 port 20000 ssh1: ruser XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX from 1.2.3.4
# failJSON: { "match": true , "host": "127.0.0.1", "desc": "Injecting on username ssh 'from 10.10.1.1'@localhost" }
srv sshd[2737]: Failed password for invalid user from 10.10.1.1 from 127.0.0.1 port 58946 ssh2
@@ -167,9 +167,11 @@ srv sshd[32307]: Connection closed by 192.0.2.1
srv sshd[32310]: Failed publickey for git from 192.0.2.111 port 57910 ssh2: ECDSA 1e:fe:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
# failJSON: { "match": false }
srv sshd[32310]: Failed publickey for git from 192.0.2.111 port 57910 ssh2: RSA 14:ba:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
# failJSON: { "match": true , "attempts": 3, "desc": "Should catch failure - no success/no accepted public key" }
srv sshd[32310]: Disconnecting: Too many authentication failures for git [preauth]
# failJSON: { "constraint": "opts.get('mode') != 'aggressive'", "match": false, "desc": "Nofail in normal mode, failure already produced above" }
srv sshd[32310]: Connection closed by 192.0.2.111 [preauth]
# failJSON: { "constraint": "opts.get('mode') == 'aggressive'", "match": true , "host": "192.0.2.111", "attempts":1, "desc": "Matches in aggressive mode only" }
srv sshd[32310]: Connection closed by 192.0.2.111 [preauth]
# failJSON: { "match": false }
@@ -216,7 +218,7 @@ srv sshd[29116]: Received disconnect from 1.2.3.4: 11: Normal Shutdown, Thank yo
# Match sshd auth errors on OpenSUSE systems (gh-1024)
# failJSON: { "match": false, "desc": "No failure until closed or another fail (e. g. F-MLFFORGET by success/accepted password can avoid failure, see gh-2070)" }
srv sshd[2716]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.0.2.112 user=root
# failJSON: { "constraint": "opts.get('mode') == 'aggressive'", "match": true , "host": "192.0.2.112", "desc": "Should catch failure - no success/no accepted password" }
srv sshd[2716]: Connection closed by 192.0.2.112 [preauth]
# filterOptions: [{}]
@@ -238,7 +240,7 @@ srv sshd[1556]: Connection closed by 192.0.2.113
srv sshd[1558]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser=root rhost=192.0.2.114
# failJSON: { "match": true , "attempts": 2, "users": ["root", "sudoer"], "host": "192.0.2.114", "desc": "Failure: attempt 2nd user" }
srv sshd[1558]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser=sudoer rhost=192.0.2.114
# failJSON: { "match": true , "attempts": 1, "users": ["root", "sudoer", "known"], "host": "192.0.2.114", "desc": "Failure: attempt 3rd user" }
srv sshd[1558]: Accepted password for known from 192.0.2.114 port 52100 ssh2
# failJSON: { "match": false , "desc": "No failure" }
srv sshd[1558]: pam_unix(sshd:session): session opened for user known by (uid=0)
@@ -248,14 +250,14 @@ srv sshd[1558]: pam_unix(sshd:session): session opened for user known by (uid=0)
srv sshd[1559]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser=root rhost=192.0.2.116
# failJSON: { "match": false , "desc": "Still no failure (second try, same user)" }
srv sshd[1559]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser=root rhost=192.0.2.116
# failJSON: { "match": true , "attempts": 3, "users": ["root", "known"], "host": "192.0.2.116", "desc": "Failure: attempt 2nd user" }
srv sshd[1559]: Accepted password for known from 192.0.2.116 port 52100 ssh2
# failJSON: { "match": false , "desc": "No failure" }
srv sshd[1559]: Connection closed by 192.0.2.116
# failJSON: { "match": true , "attempts": 1, "user": "admin", "host": "192.0.2.117", "desc": "Failure: attempt invalid user" }
srv sshd[5672]: Invalid user admin from 192.0.2.117 port 44004
# failJSON: { "match": true , "attempts": 1, "user": "admin", "host": "192.0.2.117", "desc": "Failure: attempt to change user (disallowed)" }
srv sshd[5672]: Disconnecting invalid user admin 192.0.2.117 port 44004: Change of username or service not allowed: (admin,ssh-connection) -> (user,ssh-connection) [preauth]
# failJSON: { "match": false, "desc": "Disconnected during preauth phase (no failure in normal mode)" } # failJSON: { "match": false, "desc": "Disconnected during preauth phase (no failure in normal mode)" }
srv sshd[5672]: Disconnected from authenticating user admin 192.0.2.6 port 33553 [preauth] srv sshd[5672]: Disconnected from authenticating user admin 192.0.2.6 port 33553 [preauth]
@ -325,7 +327,7 @@ srv sshd[45]: fatal: Unable to negotiate with 192.0.2.2 port 55419: no matching
srv sshd[22440]: Connection from 192.0.2.3 port 39678 on 192.168.1.9 port 22 srv sshd[22440]: Connection from 192.0.2.3 port 39678 on 192.168.1.9 port 22
# failJSON: { "match": true , "host": "192.0.2.3", "desc": "Multiline - no matching key exchange method" } # failJSON: { "match": true , "host": "192.0.2.3", "desc": "Multiline - no matching key exchange method" }
srv sshd[22440]: fatal: Unable to negotiate a key exchange method [preauth] srv sshd[22440]: fatal: Unable to negotiate a key exchange method [preauth]
# failJSON: { "match": true , "host": "192.0.2.3", "filter": "sshd", "desc": "Second attempt within the same connect" } # failJSON: { "match": true , "host": "192.0.2.3", "constraint": "name == 'sshd'", "desc": "Second attempt within the same connect" }
srv sshd[22440]: fatal: Unable to negotiate a key exchange method [preauth] srv sshd[22440]: fatal: Unable to negotiate a key exchange method [preauth]
# gh-1943 (previous OpenSSH log-format) # gh-1943 (previous OpenSSH log-format)
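The `# failJSON:` annotations in these sample files describe the expected filter result for the log line that follows them. A minimal sketch of how such annotated samples could be paired up (the parser below is illustrative, not the actual samples test factory):

```python
import json

def parse_samples(text):
    """Pair each '# failJSON: {...}' annotation with the log line that follows it."""
    expected = None
    for line in text.splitlines():
        if line.startswith('# failJSON:'):
            expected = json.loads(line[len('# failJSON:'):])
        elif line.startswith('#') or not line.strip():
            continue  # other comments (e.g. filterOptions) and blank lines
        elif expected is not None:
            yield expected, line
            expected = None

samples = '''\
# failJSON: { "match": true , "attempts": 1, "user": "admin", "host": "192.0.2.117" }
srv sshd[5672]: Invalid user admin from 192.0.2.117 port 44004
'''
pairs = list(parse_samples(samples))
```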


@@ -1,6 +1,23 @@
# filterOptions: [{"mode": "normal"}]
# failJSON: { "match": false }
10.0.0.2 - - [18/Nov/2018:21:34:30 +0000] "GET /dashboard/ HTTP/2.0" 401 17 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0" 72 "Auth for frontend-Host-traefik-0" "/dashboard/" 0ms
# filterOptions: [{"mode": "ddos"}]
# failJSON: { "match": false }
10.0.0.2 - username [18/Nov/2018:21:34:30 +0000] "GET /dashboard/ HTTP/2.0" 401 17 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0" 72 "Auth for frontend-Host-traefik-0" "/dashboard/" 0ms
# filterOptions: [{"mode": "normal"}, {"mode": "aggressive"}]
# failJSON: { "time": "2018-11-18T22:34:34", "match": true , "host": "10.0.0.2" }
10.0.0.2 - username [18/Nov/2018:21:34:34 +0000] "GET /dashboard/ HTTP/2.0" 401 17 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0" 72 "Auth for frontend-Host-traefik-0" "/dashboard/" 0ms
# failJSON: { "time": "2018-11-18T22:34:34", "match": true , "host": "10.0.0.2", "desc": "other request method" }
10.0.0.2 - username [18/Nov/2018:21:34:34 +0000] "TRACE /dashboard/ HTTP/2.0" 401 17 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0" 72 "Auth for frontend-Host-traefik-0" "/dashboard/" 0ms
# failJSON: { "match": false }
10.0.0.2 - username [27/Nov/2018:23:33:31 +0000] "GET /dashboard/ HTTP/2.0" 200 716 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0" 118 "Host-traefik-0" "/dashboard/" 4ms
# filterOptions: [{"mode": "ddos"}, {"mode": "aggressive"}]
# failJSON: { "time": "2018-11-18T22:34:30", "match": true , "host": "10.0.0.2" }
10.0.0.2 - - [18/Nov/2018:21:34:30 +0000] "GET /dashboard/ HTTP/2.0" 401 17 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0" 72 "Auth for frontend-Host-traefik-0" "/dashboard/" 0ms
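All of the traefik auth-failure lines above hinge on the 401 status; a failregex-style pattern for them might look like the following (this pattern is an illustrative approximation, not the shipped filter regex):

```python
import re

# Illustrative failregex-style pattern: capture the client address only
# when the response status is 401 (failed auth), as in the samples above.
FAIL_RE = re.compile(r'^(?P<host>\S+) - \S+ \[[^\]]+\] "[A-Z]+ [^"]*" 401 ')

fail_line = ('10.0.0.2 - username [18/Nov/2018:21:34:34 +0000] '
             '"GET /dashboard/ HTTP/2.0" 401 17 "-" "Mozilla/5.0" 72 '
             '"Auth for frontend-Host-traefik-0" "/dashboard/" 0ms')
ok_line = fail_line.replace(' 401 17 ', ' 200 716 ')
m = FAIL_RE.match(fail_line)
```

Note the ddos-mode samples carry `-` as the user field, which `\S+` matches as well.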


@@ -30,8 +30,8 @@ Jun 21 16:55:02 <auth.info> machine kernel: [ 970.699396] @vserver_demo test-
# failJSON: { "time": "2005-06-21T16:55:03", "match": true , "host": "192.0.2.3" }
[Jun 21 16:55:03] <auth.info> machine kernel: [ 970.699396] @vserver_demo test-demo(pam_unix)[13709] [ID 255 test] F2B: failure from 192.0.2.3
-# -- wrong time direct in journal-line (used last known date):
-# failJSON: { "time": "2005-06-21T16:55:03", "match": true , "host": "192.0.2.1" }
+# -- wrong time direct in journal-line (used last known date or now, but null because no checkFindTime in samples test factory):
+# failJSON: { "time": null, "match": true , "host": "192.0.2.1" }
0000-12-30 00:00:00 server test-demo[47831]: F2B: failure from 192.0.2.1
# -- wrong time after newline in message (plist without escaped newlines):
# failJSON: { "match": false }
@@ -42,8 +42,8 @@ Jun 22 20:37:04 server test-demo[402]: writeToStorage plist={
applicationDate = "0000-12-30 00:00:00 +0000";
# failJSON: { "match": false }
}
-# -- wrong time direct in journal-line (used last known date):
-# failJSON: { "time": "2005-06-22T20:37:04", "match": true , "host": "192.0.2.2" }
+# -- wrong time direct in journal-line (used last known date, but null because no checkFindTime in samples test factory):
+# failJSON: { "time": null, "match": true , "host": "192.0.2.2" }
0000-12-30 00:00:00 server test-demo[47831]: F2B: failure from 192.0.2.2
# -- test no zone and UTC/GMT named zone "2005-06-21T14:55:10 UTC" == "2005-06-21T16:55:10 CEST" (diff +2h in CEST):
@@ -60,3 +60,6 @@ Jun 22 20:37:04 server test-demo[402]: writeToStorage plist={
[Jun 21 16:56:03] machine test-demo(pam_unix)[13709] F2B: error from 192.0.2.251
# failJSON: { "match": false, "desc": "test 2nd ignoreregex" }
[Jun 21 16:56:04] machine test-demo(pam_unix)[13709] F2B: error from 192.0.2.252
# failJSON: { "match": false, "desc": "ignore other daemon" }
[Jun 21 16:56:04] machine captain-nemo(pam_unix)[55555] F2B: error from 192.0.2.2
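The UTC/CEST sample above depends on the +2h summer-time offset. This can be checked with the stdlib (Europe/Berlin is picked here as an arbitrary CEST zone; `zoneinfo` requires Python 3.9+):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# "2005-06-21T16:55:10 CEST" should equal "2005-06-21T14:55:10 UTC" (+2h DST offset)
local = datetime(2005, 6, 21, 16, 55, 10, tzinfo=ZoneInfo("Europe/Berlin"))
utc = local.astimezone(timezone.utc)
```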


@@ -43,7 +43,8 @@ from ..server.failmanager import FailManagerEmpty
from ..server.ipdns import asip, getfqdn, DNSUtils, IPAddr
from ..server.mytime import MyTime
from ..server.utils import Utils, uni_decode
-from .utils import setUpMyTime, tearDownMyTime, mtimesleep, with_tmpdir, LogCaptureTestCase, \
+from .databasetestcase import getFail2BanDb
+from .utils import setUpMyTime, tearDownMyTime, mtimesleep, with_alt_time, with_tmpdir, LogCaptureTestCase, \
logSys as DefLogSys, CONFIG_DIR as STOCK_CONF_DIR
from .dummyjail import DummyJail
@@ -62,10 +63,7 @@ def open(*args):
if len(args) == 2:
# ~50kB buffer should be sufficient for all tests here.
args = args + (50000,)
-if sys.version_info >= (3,):
-return fopen(*args, **{'encoding': 'utf-8', 'errors': 'ignore'})
-else:
-return fopen(*args)
+return fopen(*args)
def _killfile(f, name):
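With the wrapper above reduced to a plain binary-capable `fopen`, the test helpers in this file decode lines explicitly (via `FileContainer.decode_line`). A minimal stand-in with the same intent, decode with replacement and strip the line ending, could be:

```python
def decode_line(raw, enc='UTF-8'):
    """Decode one raw (bytes) log line, replacing undecodable bytes, and
    strip the trailing line ending -- an illustrative stand-in only."""
    return raw.decode(enc, 'replace').rstrip('\r\n')

line = decode_line(b'srv sshd[123]: failure from 192.0.2.1\r\n')
```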
@@ -199,7 +197,7 @@ def _copy_lines_between_files(in_, fout, n=None, skip=0, mode='a', terminal_line
# polling filter could detect the change
mtimesleep()
if isinstance(in_, str): # pragma: no branch - only used with str in test cases
-fin = open(in_, 'r')
+fin = open(in_, 'rb')
else:
fin = in_
# Skip
@@ -209,7 +207,7 @@ def _copy_lines_between_files(in_, fout, n=None, skip=0, mode='a', terminal_line
i = 0
lines = []
while n is None or i < n:
-l = fin.readline()
+l = FileContainer.decode_line(in_, 'UTF-8', fin.readline()).rstrip('\r\n')
if terminal_line is not None and l == terminal_line:
break
lines.append(l)
@@ -217,7 +215,7 @@ def _copy_lines_between_files(in_, fout, n=None, skip=0, mode='a', terminal_line
# Write: all at once and flush
if isinstance(fout, str):
fout = open(fout, mode)
-fout.write('\n'.join(lines))
+fout.write('\n'.join(lines)+'\n')
fout.flush()
if isinstance(in_, str): # pragma: no branch - only used with str in test cases
# Opened earlier, therefore must close it
@@ -237,7 +235,7 @@ def _copy_lines_to_journal(in_, fields={},n=None, skip=0, terminal_line=""): # p
Returns None
"""
if isinstance(in_, str): # pragma: no branch - only used with str in test cases
-fin = open(in_, 'r')
+fin = open(in_, 'rb')
else:
fin = in_
# Required for filtering
@@ -248,7 +246,7 @@ def _copy_lines_to_journal(in_, fields={},n=None, skip=0, terminal_line=""): # p
# Read/Write
i = 0
while n is None or i < n:
-l = fin.readline()
+l = FileContainer.decode_line(in_, 'UTF-8', fin.readline()).rstrip('\r\n')
if terminal_line is not None and l == terminal_line:
break
journal.send(MESSAGE=l.strip(), **fields)
@@ -396,11 +394,13 @@ class IgnoreIP(LogCaptureTestCase):
finally:
tearDownMyTime()
-def testTimeJump(self):
+def _testTimeJump(self, inOperation=False):
try:
self.filter.addFailRegex('^<HOST>')
self.filter.setDatePattern(r'{^LN-BEG}%Y-%m-%d %H:%M:%S(?:\s*%Z)?\s')
self.filter.setFindTime(10); # max 10 seconds back
+self.filter.setMaxRetry(5); # don't ban here
+self.filter.inOperation = inOperation
#
self.pruneLog('[phase 1] DST time jump')
# check local time jump (DST hole):
@@ -431,6 +431,47 @@ class IgnoreIP(LogCaptureTestCase):
self.assertNotLogged('Ignore line')
finally:
tearDownMyTime()
def testTimeJump(self):
self._testTimeJump(inOperation=False)
def testTimeJump_InOperation(self):
self._testTimeJump(inOperation=True)
def testWrongTimeZone(self):
try:
self.filter.addFailRegex('fail from <ADDR>$')
self.filter.setDatePattern(r'{^LN-BEG}%Y-%m-%d %H:%M:%S(?:\s*%Z)?\s')
self.filter.setMaxRetry(5); # don't ban here
self.filter.inOperation = True; # real processing (all messages are new)
# current time is 1h later than log-entries:
MyTime.setTime(1572138000+3600)
#
self.pruneLog("[phase 1] simulate wrong TZ")
for i in (1,2,3):
self.filter.processLineAndAdd('2019-10-27 02:00:00 fail from 192.0.2.15'); # +3 = 3
self.assertLogged(
"Simulate NOW in operation since found time has too large deviation",
"Please check jail has possibly a timezone issue.",
"192.0.2.15:1", "192.0.2.15:2", "192.0.2.15:3",
"Total # of detected failures: 3.", wait=True)
#
self.pruneLog("[phase 2] wrong TZ given in log")
for i in (1,2,3):
self.filter.processLineAndAdd('2019-10-27 04:00:00 GMT fail from 192.0.2.16'); # +3 = 6
self.assertLogged(
"192.0.2.16:1", "192.0.2.16:2", "192.0.2.16:3",
"Total # of detected failures: 6.", all=True, wait=True)
self.assertNotLogged("Found a match but no valid date/time found")
#
self.pruneLog("[phase 3] other timestamp (don't match datepattern), regex matches")
for i in range(3):
self.filter.processLineAndAdd('27.10.2019 04:00:00 fail from 192.0.2.17'); # +3 = 9
self.assertLogged(
"Found a match but no valid date/time found",
"Match without a timestamp:",
"192.0.2.17:1", "192.0.2.17:2", "192.0.2.17:3",
"Total # of detected failures: 9.", all=True, wait=True)
finally:
tearDownMyTime()
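`testWrongTimeZone` exercises the filter's heuristic of treating a far-off parsed timestamp as "now" (a likely timezone misconfiguration in the jail). The core decision can be sketched as follows (function name and threshold here are illustrative, not fail2ban's actual code):

```python
def effective_failure_time(parsed, now, max_deviation=60):
    """Fall back to `now` when the parsed log time is missing or deviates
    implausibly, e.g. because the log uses a wrong/unconfigured time zone."""
    if parsed is None or abs(now - parsed) > max_deviation:
        return now
    return parsed

# current time 1h ahead of the log entries, as in phase 1 of the test above
now = 1572138000 + 3600
```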
def testAddAttempt(self):
self.filter.setMaxRetry(3)
@@ -759,6 +800,7 @@ class LogFileMonitor(LogCaptureTestCase):
_, self.name = tempfile.mkstemp('fail2ban', 'monitorfailures')
self.file = open(self.name, 'a')
self.filter = FilterPoll(DummyJail())
+self.filter.banASAP = False # avoid immediate ban in this tests
self.filter.addLogPath(self.name, autoSeek=False)
self.filter.active = True
self.filter.addFailRegex(r"(?:(?:Authentication failure|Failed [-/\w+]+) for(?: [iI](?:llegal|nvalid) user)?|[Ii](?:llegal|nvalid) user|ROOT LOGIN REFUSED) .*(?: from|FROM) <HOST>")
@@ -878,7 +920,7 @@ class LogFileMonitor(LogCaptureTestCase):
self.assertRaises(FailManagerEmpty, self.filter.failManager.toBan)
# and it should have not been enough
-_copy_lines_between_files(GetFailures.FILENAME_01, self.file, skip=5)
+_copy_lines_between_files(GetFailures.FILENAME_01, self.file, skip=12, n=3)
self.filter.getFailures(self.name)
_assert_correct_last_attempt(self, self.filter, GetFailures.FAILURES_01)
@@ -897,7 +939,7 @@ class LogFileMonitor(LogCaptureTestCase):
# filter "marked" as the known beginning, otherwise it
# would not detect "rotation"
self.file = _copy_lines_between_files(GetFailures.FILENAME_01, self.name,
-skip=3, mode='w')
+skip=12, n=3, mode='w')
self.filter.getFailures(self.name)
#self.assertRaises(FailManagerEmpty, self.filter.failManager.toBan)
_assert_correct_last_attempt(self, self.filter, GetFailures.FAILURES_01)
@@ -916,7 +958,7 @@ class LogFileMonitor(LogCaptureTestCase):
# move aside, but leaving the handle still open...
os.rename(self.name, self.name + '.bak')
-_copy_lines_between_files(GetFailures.FILENAME_01, self.name, skip=14).close()
+_copy_lines_between_files(GetFailures.FILENAME_01, self.name, skip=14, n=1).close()
self.filter.getFailures(self.name)
_assert_correct_last_attempt(self, self.filter, GetFailures.FAILURES_01)
self.assertEqual(self.filter.failManager.getFailTotal(), 3)
@@ -976,6 +1018,7 @@ def get_monitor_failures_testcase(Filter_):
self.file = open(self.name, 'a')
self.jail = DummyJail()
self.filter = Filter_(self.jail)
+self.filter.banASAP = False # avoid immediate ban in this tests
self.filter.addLogPath(self.name, autoSeek=False)
# speedup search using exact date pattern:
self.filter.setDatePattern(r'^(?:%a )?%b %d %H:%M:%S(?:\.%f)?(?: %ExY)?')
@@ -1026,13 +1069,13 @@ def get_monitor_failures_testcase(Filter_):
self.assertRaises(FailManagerEmpty, self.filter.failManager.toBan)
# Now let's feed it with entries from the file
-_copy_lines_between_files(GetFailures.FILENAME_01, self.file, n=5)
+_copy_lines_between_files(GetFailures.FILENAME_01, self.file, n=12)
self.assertRaises(FailManagerEmpty, self.filter.failManager.toBan)
# and our dummy jail is empty as well
self.assertFalse(len(self.jail))
# since it should have not been enough
-_copy_lines_between_files(GetFailures.FILENAME_01, self.file, skip=5)
+_copy_lines_between_files(GetFailures.FILENAME_01, self.file, skip=12, n=3)
if idle:
self.waitForTicks(1)
self.assertTrue(self.isEmpty(1))
@@ -1051,7 +1094,7 @@ def get_monitor_failures_testcase(Filter_):
#return
# just for fun let's copy all of them again and see if that results
# in a new ban
-_copy_lines_between_files(GetFailures.FILENAME_01, self.file, n=100)
+_copy_lines_between_files(GetFailures.FILENAME_01, self.file, skip=12, n=3)
self.assert_correct_last_attempt(GetFailures.FAILURES_01)
def test_rewrite_file(self):
@@ -1065,7 +1108,7 @@ def get_monitor_failures_testcase(Filter_):
# filter "marked" as the known beginning, otherwise it
# would not detect "rotation"
self.file = _copy_lines_between_files(GetFailures.FILENAME_01, self.name,
-skip=3, mode='w')
+skip=12, n=3, mode='w')
self.assert_correct_last_attempt(GetFailures.FAILURES_01)
def _wait4failures(self, count=2):
@@ -1086,13 +1129,13 @@ def get_monitor_failures_testcase(Filter_):
# move aside, but leaving the handle still open...
os.rename(self.name, self.name + '.bak')
-_copy_lines_between_files(GetFailures.FILENAME_01, self.name, skip=14).close()
+_copy_lines_between_files(GetFailures.FILENAME_01, self.name, skip=14, n=1).close()
self.assert_correct_last_attempt(GetFailures.FAILURES_01)
self.assertEqual(self.filter.failManager.getFailTotal(), 3)
# now remove the moved file
_killfile(None, self.name + '.bak')
-_copy_lines_between_files(GetFailures.FILENAME_01, self.name, n=100).close()
+_copy_lines_between_files(GetFailures.FILENAME_01, self.name, skip=12, n=3).close()
self.assert_correct_last_attempt(GetFailures.FAILURES_01)
self.assertEqual(self.filter.failManager.getFailTotal(), 6)
@@ -1168,8 +1211,7 @@ def get_monitor_failures_testcase(Filter_):
def _test_move_into_file(self, interim_kill=False):
# if we move a new file into the location of an old (monitored) file
-_copy_lines_between_files(GetFailures.FILENAME_01, self.name,
-n=100).close()
+_copy_lines_between_files(GetFailures.FILENAME_01, self.name).close()
# make sure that it is monitored first
self.assert_correct_last_attempt(GetFailures.FAILURES_01)
self.assertEqual(self.filter.failManager.getFailTotal(), 3)
@@ -1180,14 +1222,14 @@ def get_monitor_failures_testcase(Filter_):
# now create a new one to override old one
_copy_lines_between_files(GetFailures.FILENAME_01, self.name + '.new',
-n=100).close()
+skip=12, n=3).close()
os.rename(self.name + '.new', self.name)
self.assert_correct_last_attempt(GetFailures.FAILURES_01)
self.assertEqual(self.filter.failManager.getFailTotal(), 6)
# and to make sure that it now monitored for changes
_copy_lines_between_files(GetFailures.FILENAME_01, self.name,
-n=100).close()
+skip=12, n=3).close()
self.assert_correct_last_attempt(GetFailures.FAILURES_01)
self.assertEqual(self.filter.failManager.getFailTotal(), 9)
@@ -1206,7 +1248,7 @@ def get_monitor_failures_testcase(Filter_):
# create a bogus file in the same directory and see if that doesn't affect
open(self.name + '.bak2', 'w').close()
-_copy_lines_between_files(GetFailures.FILENAME_01, self.name, n=100).close()
+_copy_lines_between_files(GetFailures.FILENAME_01, self.name, skip=12, n=3).close()
self.assert_correct_last_attempt(GetFailures.FAILURES_01)
self.assertEqual(self.filter.failManager.getFailTotal(), 6)
_killfile(None, self.name + '.bak2')
@@ -1238,8 +1280,8 @@ def get_monitor_failures_testcase(Filter_):
self.assert_correct_last_attempt(GetFailures.FAILURES_01, count=6) # was needed if we write twice above
# now copy and get even more
-_copy_lines_between_files(GetFailures.FILENAME_01, self.file, n=100)
+_copy_lines_between_files(GetFailures.FILENAME_01, self.file, skip=12, n=3)
# check for 3 failures (not 9), because 6 already get above...
self.assert_correct_last_attempt(GetFailures.FAILURES_01)
# total count in this test:
self.assertEqual(self.filter.failManager.getFailTotal(), 12)
@@ -1274,6 +1316,7 @@ def get_monitor_failures_journal_testcase(Filter_): # pragma: systemd no cover
def _initFilter(self, **kwargs):
self._getRuntimeJournal() # check journal available
self.filter = Filter_(self.jail, **kwargs)
+self.filter.banASAP = False # avoid immediate ban in this tests
self.filter.addJournalMatch([
"SYSLOG_IDENTIFIER=fail2ban-testcases",
"TEST_FIELD=1",
@@ -1397,6 +1440,52 @@ def get_monitor_failures_journal_testcase(Filter_): # pragma: systemd no cover
self.test_file, self.journal_fields, skip=5, n=4)
self.assert_correct_ban("193.168.0.128", 3)
@with_alt_time
def test_grow_file_with_db(self):
def _gen_falure(ip):
# insert new failures ans check it is monitored:
fields = self.journal_fields
fields.update(TEST_JOURNAL_FIELDS)
journal.send(MESSAGE="error: PAM: Authentication failure for test from "+ip, **fields)
self.waitForTicks(1)
self.assert_correct_ban(ip, 1)
# coverage for update log:
self.jail.database = getFail2BanDb(':memory:')
self.jail.database.addJail(self.jail)
MyTime.setTime(time.time())
self._test_grow_file()
# stop:
self.filter.stop()
self.filter.join()
MyTime.setTime(time.time() + 2)
# update log manually (should cause a seek to end of log without wait for next second):
self.jail.database.updateJournal(self.jail, 'systemd-journal', MyTime.time(), 'TEST')
# check seek to last (simulated) position succeeds (without bans of previous copied tickets):
self._failTotal = 0
self._initFilter()
self.filter.setMaxRetry(1)
self.filter.start()
self.waitForTicks(1)
# check new IP but no old IPs found:
_gen_falure("192.0.2.5")
self.assertFalse(self.jail.getFailTicket())
# now the same with increased time (check now - findtime case):
self.filter.stop()
self.filter.join()
MyTime.setTime(time.time() + 10000)
self._failTotal = 0
self._initFilter()
self.filter.setMaxRetry(1)
self.filter.start()
self.waitForTicks(1)
MyTime.setTime(time.time() + 3)
# check new IP but no old IPs found:
_gen_falure("192.0.2.6")
self.assertFalse(self.jail.getFailTicket())
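`test_grow_file_with_db` drives `updateJournal` and the subsequent seek-to-last-position logic. The underlying bookkeeping, remembering the last processed journal position per jail, can be sketched with a plain SQLite table (the schema and function names here are illustrative, not fail2ban's actual database layout):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE journal_pos (jail TEXT PRIMARY KEY, utime REAL)')

def update_journal(jail, utime):
    # remember the newest processed position (time) for this jail's journal
    db.execute('INSERT OR REPLACE INTO journal_pos VALUES (?, ?)', (jail, utime))

def last_position(jail):
    # seek target after a restart: last stored position, if any
    row = db.execute('SELECT utime FROM journal_pos WHERE jail = ?', (jail,)).fetchone()
    return row[0] if row else None

update_journal('systemd-journal', 1000.0)
update_journal('systemd-journal', 2000.0)
```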
def test_delJournalMatch(self):
self._initFilter()
self.filter.start()
@@ -1481,6 +1570,7 @@ class GetFailures(LogCaptureTestCase):
setUpMyTime()
self.jail = DummyJail()
self.filter = FileFilter(self.jail)
+self.filter.banASAP = False # avoid immediate ban in this tests
self.filter.active = True
# speedup search using exact date pattern:
self.filter.setDatePattern(r'^(?:%a )?%b %d %H:%M:%S(?:\.%f)?(?: %ExY)?')
@@ -1536,9 +1626,9 @@ class GetFailures(LogCaptureTestCase):
# We first adjust logfile/failures to end with CR+LF
fname = tempfile.mktemp(prefix='tmp_fail2ban', suffix='crlf')
# poor man unix2dos:
-fin, fout = open(GetFailures.FILENAME_01), open(fname, 'w')
-for l in fin.readlines():
-fout.write('%s\r\n' % l.rstrip('\n'))
+fin, fout = open(GetFailures.FILENAME_01, 'rb'), open(fname, 'wb')
+for l in fin.read().splitlines():
+fout.write(l + b'\r\n')
fin.close()
fout.close()
@@ -1557,16 +1647,24 @@ class GetFailures(LogCaptureTestCase):
_assert_correct_last_attempt(self, self.filter, output)
def testGetFailures03(self):
-output = ('203.162.223.135', 7, 1124013544.0)
+output = ('203.162.223.135', 6, 1124013600.0)
self.filter.addLogPath(GetFailures.FILENAME_03, autoSeek=0)
self.filter.addFailRegex(r"error,relay=<HOST>,.*550 User unknown")
self.filter.getFailures(GetFailures.FILENAME_03)
_assert_correct_last_attempt(self, self.filter, output)
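The "poor man unix2dos" above now works byte-wise; as a standalone sketch of the same conversion:

```python
def unix2dos(data):
    """Convert LF (or mixed) line endings to CR+LF, byte-wise,
    as the CRLF test fixture above does."""
    return b'\r\n'.join(data.splitlines()) + b'\r\n'

out = unix2dos(b'line1\nline2\n')
```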
def testGetFailures03_InOperation(self):
output = ('203.162.223.135', 9, 1124013600.0)
self.filter.addLogPath(GetFailures.FILENAME_03, autoSeek=0)
self.filter.addFailRegex(r"error,relay=<HOST>,.*550 User unknown")
self.filter.getFailures(GetFailures.FILENAME_03, inOperation=True)
_assert_correct_last_attempt(self, self.filter, output)
def testGetFailures03_Seek1(self):
# same test as above but with seek to 'Aug 14 11:55:04' - so other output ...
-output = ('203.162.223.135', 5, 1124013544.0)
+output = ('203.162.223.135', 3, 1124013600.0)
self.filter.addLogPath(GetFailures.FILENAME_03, autoSeek=output[2] - 4*60)
self.filter.addFailRegex(r"error,relay=<HOST>,.*550 User unknown")
@@ -1575,7 +1673,7 @@ class GetFailures(LogCaptureTestCase):
def testGetFailures03_Seek2(self):
# same test as above but with seek to 'Aug 14 11:59:04' - so other output ...
-output = ('203.162.223.135', 1, 1124013544.0)
+output = ('203.162.223.135', 2, 1124013600.0)
self.filter.setMaxRetry(1)
self.filter.addLogPath(GetFailures.FILENAME_03, autoSeek=output[2])
@@ -1603,6 +1701,7 @@ class GetFailures(LogCaptureTestCase):
 		_assert_correct_last_attempt(self, self.filter, output)
 
 	def testGetFailuresWrongChar(self):
+		self.filter.checkFindTime = False
 		# write wrong utf-8 char:
 		fname = tempfile.mktemp(prefix='tmp_fail2ban', suffix='crlf')
 		fout = fopen(fname, 'wb')
@@ -1623,6 +1722,8 @@ class GetFailures(LogCaptureTestCase):
 		for enc in (None, 'utf-8', 'ascii'):
 			if enc is not None:
 				self.tearDown();self.setUp();
+				if DefLogSys.getEffectiveLevel() > 7: DefLogSys.setLevel(7); # ensure decode_line logs always
+				self.filter.checkFindTime = False;
 			self.filter.setLogEncoding(enc);
 			# speedup search using exact date pattern:
 			self.filter.setDatePattern(r'^%ExY-%Exm-%Exd %ExH:%ExM:%ExS')
@@ -1670,6 +1771,7 @@ class GetFailures(LogCaptureTestCase):
 			self.pruneLog("[test-phase useDns=%s]" % useDns)
 			jail = DummyJail()
 			filter_ = FileFilter(jail, useDns=useDns)
+			filter_.banASAP = False # avoid immediate ban in this tests
 			filter_.active = True
 			filter_.failManager.setMaxRetry(1) # we might have just few failures
@@ -1849,7 +1951,9 @@ class DNSUtilsNetworkTests(unittest.TestCase):
 		ip4 = IPAddr('192.0.2.1')
 		ip6 = IPAddr('2001:DB8::')
 		self.assertTrue(ip4.isIPv4)
+		self.assertTrue(ip4.isSingle)
 		self.assertTrue(ip6.isIPv6)
+		self.assertTrue(ip6.isSingle)
 		self.assertTrue(asip('192.0.2.1').isIPv4)
 		self.assertTrue(id(asip(ip4)) == id(ip4))
@@ -1858,6 +1962,7 @@ class DNSUtilsNetworkTests(unittest.TestCase):
 		r = IPAddr('xxx', IPAddr.CIDR_RAW)
 		self.assertFalse(r.isIPv4)
 		self.assertFalse(r.isIPv6)
+		self.assertFalse(r.isSingle)
 		self.assertTrue(r.isValid)
 		self.assertEqual(r, 'xxx')
 		self.assertEqual('xxx', str(r))
@@ -1866,6 +1971,7 @@ class DNSUtilsNetworkTests(unittest.TestCase):
 		r = IPAddr('1:2', IPAddr.CIDR_RAW)
 		self.assertFalse(r.isIPv4)
 		self.assertFalse(r.isIPv6)
+		self.assertFalse(r.isSingle)
 		self.assertTrue(r.isValid)
 		self.assertEqual(r, '1:2')
 		self.assertEqual('1:2', str(r))
@@ -1888,7 +1994,7 @@ class DNSUtilsNetworkTests(unittest.TestCase):
 	def testUseDns(self):
 		res = DNSUtils.textToIp('www.example.com', 'no')
 		self.assertSortedEqual(res, [])
-		unittest.F2B.SkipIfNoNetwork()
+		#unittest.F2B.SkipIfNoNetwork()
 		res = DNSUtils.textToIp('www.example.com', 'warn')
 		# sort ipaddr, IPv4 is always smaller as IPv6
 		self.assertSortedEqual(res, ['93.184.216.34', '2606:2800:220:1:248:1893:25c8:1946'])
@@ -1897,7 +2003,7 @@ class DNSUtilsNetworkTests(unittest.TestCase):
 		self.assertSortedEqual(res, ['93.184.216.34', '2606:2800:220:1:248:1893:25c8:1946'])
 
 	def testTextToIp(self):
-		unittest.F2B.SkipIfNoNetwork()
+		#unittest.F2B.SkipIfNoNetwork()
 		# Test hostnames
 		hostnames = [
 			'www.example.com',
@@ -1921,7 +2027,7 @@ class DNSUtilsNetworkTests(unittest.TestCase):
 			self.assertTrue(isinstance(ip, IPAddr))
 
 	def testIpToName(self):
-		unittest.F2B.SkipIfNoNetwork()
+		#unittest.F2B.SkipIfNoNetwork()
 		res = DNSUtils.ipToName('8.8.4.4')
 		self.assertTrue(res.endswith(('.google', '.google.com')))
 		# same as above, but with IPAddr:
@@ -1943,8 +2049,10 @@ class DNSUtilsNetworkTests(unittest.TestCase):
 		self.assertEqual(res.addr, 167772160L)
 		res = IPAddr('10.0.0.1', cidr=32L)
 		self.assertEqual(res.addr, 167772161L)
+		self.assertTrue(res.isSingle)
 		res = IPAddr('10.0.0.1', cidr=31L)
 		self.assertEqual(res.addr, 167772160L)
+		self.assertFalse(res.isSingle)
 		self.assertEqual(IPAddr('10.0.0.0').hexdump, '0a000000')
 		self.assertEqual(IPAddr('1::2').hexdump, '00010000000000000000000000000002')
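The `hexdump` values asserted in the hunk above are simply the hex encoding of the packed address bytes. A minimal sketch of that behaviour using only the Python standard library (assuming `IPAddr.hexdump` is defined this way; the `hexdump` helper here is illustrative, not fail2ban's API):

```python
import binascii
import socket

def hexdump(ip):
    """Hex string of the packed (network-order) address bytes."""
    family = socket.AF_INET6 if ':' in ip else socket.AF_INET
    return binascii.hexlify(socket.inet_pton(family, ip)).decode('ascii')

print(hexdump('10.0.0.0'))  # 0a000000
print(hexdump('1::2'))      # 00010000000000000000000000000002
```

This also explains why the IPv4 dump is 4 bytes (8 hex digits) and the IPv6 dump 16 bytes (32 hex digits).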
@@ -1969,6 +2077,8 @@ class DNSUtilsNetworkTests(unittest.TestCase):
 	def testIPAddr_InInet(self):
 		ip4net = IPAddr('93.184.0.1/24')
 		ip6net = IPAddr('2606:2800:220:1:248:1893:25c8:0/120')
+		self.assertFalse(ip4net.isSingle)
+		self.assertFalse(ip6net.isSingle)
 		# ip4:
 		self.assertTrue(IPAddr('93.184.0.1').isInNet(ip4net))
 		self.assertTrue(IPAddr('93.184.0.255').isInNet(ip4net))
@@ -2064,6 +2174,7 @@ class DNSUtilsNetworkTests(unittest.TestCase):
 		)
 
 	def testIPAddr_CompareDNS(self):
+		#unittest.F2B.SkipIfNoNetwork()
 		ips = IPAddr('example.com')
 		self.assertTrue(IPAddr("93.184.216.34").isInNet(ips))
 		self.assertTrue(IPAddr("2606:2800:220:1:248:1893:25c8:1946").isInNet(ips))
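The new `isSingle` assertions in these tests check whether an address object denotes exactly one host (a full-length CIDR such as /32 or /128) rather than a network or a raw, unparsable value. A hedged sketch of that semantics using the stdlib `ipaddress` module instead of fail2ban's `IPAddr` (the `is_single` helper is an assumption for illustration, not the actual implementation):

```python
import ipaddress

def is_single(value):
    """True if value parses as an IP/CIDR covering exactly one address.

    Raw values that are not IPs (e.g. 'xxx') are never single, matching
    the assertFalse(r.isSingle) checks for IPAddr.CIDR_RAW above.
    """
    try:
        net = ipaddress.ip_network(value, strict=False)
    except ValueError:
        return False
    return net.num_addresses == 1

print(is_single('192.0.2.1'))      # True  (implicit /32)
print(is_single('10.0.0.1/31'))    # False (covers two addresses)
print(is_single('93.184.0.1/24'))  # False (a whole network)
print(is_single('xxx'))            # False (raw value)
```

Actions can use this distinction to decide between banning a single host and banning a subnet.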
Some files were not shown because too many files have changed in this diff.