Merge remote-tracking branch 'remotes/gh-origin/f2b-perfom-prepare-716' into ban-time-incr

pull/1460/head
sebres 2015-12-29 14:04:41 +01:00
commit 21f058a9f7
117 changed files with 3781 additions and 1558 deletions

.mailmap (new file)

@ -0,0 +1,5 @@
Lee Clemens <java@leeclemens.net>
Serg G. Brester <info@sebres.de>
Serg G. Brester <serg.brester@sebres.de>
Serg G. Brester <sergey.brester@W7-DEHBG0189.wincor-nixdorf.com>
Viktor Szépe <viktor@szepe.net>


@ -1,12 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?eclipse-pydev version="1.0"?>
<pydev_project>
<pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.3</pydev_property>
<pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
<path>/fail2ban-0.8/client</path>
<path>/fail2ban-0.8/server</path>
<path>/fail2ban-0.8/testcases</path>
<path>/fail2ban-0.8</path>
</pydev_pathproperty>
</pydev_project>


@ -6,7 +6,8 @@ python:
- 2.6
- 2.7
- pypy
-- 3.2
+# disabled until coverage module fixes up compatibility issue
+# - 3.2
- 3.3
- 3.4
- pypy3

ChangeLog

@ -6,12 +6,92 @@
Fail2Ban: Changelog
===================
-ver. 0.9.3 (2015/XX/XXX) - increment ban time
+ver. 0.9.5 (2015/XX/XXX) - increment ban time
-----------
- New Features:
* increment ban time (+ observer) functionality introduced.
Thanks Serg G. Brester (sebres)
ver. 0.9.4 (2015/XX/XXX) - wanna-be-released
-----------
- Fixes:
* roundcube-auth jail typo for logpath
* Fix dnsToIp resolver for fqdn with large list of IPs (gh-1164)
* filter.d/apache-badbots.conf
- Updated useragent string regex adding escape for `+`
* filter.d/mysqld-auth.conf
- Updated "Access denied ..." regex for MySQL 5.6 and later (gh-1211)
* filter.d/sshd.conf
- Updated "Auth fail" regex for OpenSSH 5.9 and later
* Treat failed and killed execution of commands identically (only
different log messages), which addresses different behavior on different
exit codes of dash and bash (gh-1155)
* Fix jail.conf.5 man's section (gh-1226)
* Fixed default banaction for allports jails like pam-generic, recidive, etc
with new default variable `banaction_allports` (gh-1216)
* Fixed `fail2ban-regex` stopping on an invalid (wrongly encoded) character
for Python versions < 3.x (gh-1248)
* Use postfix_log logpath for postfix-rbl jail
* filters.d/postfix.conf - add 'Sender address rejected: Domain not found' failregex
- New Features:
* New interpolation feature for definition config readers - `<known/parameter>`
(means the last known init definition of a filter or action parameter named
`parameter`). This makes it possible to extend the parameters of a stock
filter or action directly in the jail inside the jail.local file, without
creating a separate filter.d/*.local file (see the sketch after this
feature list).
It extends the interpolation `%(known/parameter)s`, which does not work for
filter and action init parameters
* New filters:
- openhab - domotic software authentication failure with the
rest api and web interface (gh-1223)
- nginx-limit-req - ban hosts that exceed the nginx request-rate
limit (ngx_http_limit_req_module)
- murmur - ban hosts that repeatedly attempt to connect to
murmur/mumble-server with an invalid server password or certificate.
* New jails:
- murmur - bans TCP and UDP from the bad host on the default murmur port.
* sshd filter got new failregex to match "maximum authentication
attempts exceeded" (introduced in openssh 6.8)
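A hedged sketch of the `<known/parameter>` feature in a jail.local (the jail, filter, parameter and extra value are illustrative assumptions, not taken from the stock configuration):

    # jail.local - sketch only; "mode" is a hypothetical filter init parameter
    [myservice]
    enabled = true
    # extend the known (stock) value of the init parameter instead of replacing it
    filter  = myfilter[mode="<known/mode> extra"]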
- Enhancements:
* Do not rotate empty log files
* Added new date pattern with year after day (e.g. Sun Jan 23 2005 21:59:59)
http://bugs.debian.org/798923
* Added openSUSE path configuration (Thanks Johannes Weberhofer)
* Allow to split ignoreip entries by ',' as well as by ' ' (gh-1197)
* Added a timeout (3 sec) to urlopen within badips.py action
(Thanks M. Maraun)
* Added check against attacker's fake Googlebot PTR records
(Thanks Pablo Rodriguez Fernandez)
* Enhance filter against attacker's fake Googlebot PTR records
(gh-1226)
* Nginx log paths extended (prefixed with "*" wildcard) (gh-1237)
* Added filter for openhab domotic software authentication failure with the
rest api and web interface (gh-1223)
* Add *_backend options for services to allow distros to set the default
backend per service, set default to systemd for Fedora as appropriate
* Performance improvements while monitoring large number of files (gh-1265).
Use associative array (dict) for monitored log files to speed up lookup
operations. Thanks @kshetragia
ver. 0.9.3 (2015/08/01) - lets-all-stay-friends
----------
- IMPORTANT incompatible changes:
* filter.d/roundcube-auth.conf
- Changed logpath to 'errors' log (was 'userlogins')
* action.d/iptables-common.conf
- All calls to the iptables command now use the -w switch introduced in
iptables 1.4.20 (some distributions may have patched it into earlier
versions as well) to provide locking, which avoids contention between
concurrent iptables calls under heavy load.
If you need to disable it, define 'action.d/iptables-common.local'
with an empty value for 'lockingopt' in the `[Init]` section (see the
sketch after this list).
* mail-whois-lines, sendmail-geoip-lines and sendmail-whois-lines
actions now include by default only the first 1000 log lines in
the emails. Adjust <grepopts> to change the behavior.
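A minimal sketch of the opt-out described in the iptables note above (the file path is the one named there; the rest is stock syntax):

    # /etc/fail2ban/action.d/iptables-common.local
    [Init]
    # disable the -w locking switch, e.g. for iptables older than 1.4.20
    lockingopt =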
- Fixes:
* purge database will be executed now (within observer).
@ -28,18 +108,32 @@ ver. 0.9.3 (2015/XX/XXX) - increment ban time
* filter.d/roundcube-auth.conf
- Updated regex to work with 'errors' log (1.0.5 and 1.1.1)
- Added regex to work with 'userlogins' log
* action.d/sendmail*.conf - use LC_ALL (superseeding LC_TIME) to override
locale on systems with customized LC_ALL
* performance fix: minimizes connection overhead, close socket only at
communication end (gh-1099)
* unbanip always deletes ip from database (independent of bantime, also if
currently not banned or persistent)
* guarantee order of dbfile to be before dbpurgeage (gh-1048)
* always set 'dbfile' before other database options (gh-1050)
* kill the entire process group of the child process upon timeout (gh-1129).
Otherwise could lead to resource exhaustion due to hanging whois
processes.
* resolve /var/run/fail2ban path in setup.py to help installation
on platforms with /var/run -> /run symlink (gh-1142)
- New Features:
-* increment ban time (+ observer) functionality introduced.
-Thanks Serg G. Brester (sebres)
+* RETURN iptables target is now a variable: <returntype>
+* New type of operation: pass2allow, use fail2ban for "knocking",
+opening a closed port by swapping blocktype and returntype (see the
+sketch after this feature list)
* New filters:
-- froxlor-auth Thanks Joern Muehlencord
+- froxlor-auth - Thanks Joern Muehlencord
- apache-pass - filter Apache access log for successful authentication
* New actions:
- shorewall-ipset-proto6 - uses the proto feature of Shorewall. Still requires
manual pre-configuration of Shorewall. See the action file for details.
* New jails:
- pass2allow-ftp - allows FTP traffic after successful HTTP authentication
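A hedged sketch of the pass2allow idea in a jail.local (the values shown are illustrative assumptions; consult the stock pass2allow-ftp jail added by this release for the real defaults):

    # jail.local - sketch only
    [pass2allow-ftp]
    enabled    = true
    # swap the usual roles: hosts matched by the filter get the port opened,
    # everyone else keeps hitting the blocking rule
    blocktype  = RETURN
    returntype = DROP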
- Enhancements:
* action.d/cloudflare.conf - improved documentation on how to allow
@ -50,6 +144,19 @@ ver. 0.9.3 (2015/XX/XXX) - increment ban time
* filter.d/apache-badbots.conf, filter.d/nginx-botsearch.conf - add
HEAD method verb
* Revamp of Travis and coverage automated testing
* Added a space between IP address and the following colon
in notification emails for easier text selection
* Character set detection heuristics for whois output via an optional setting
in mail-whois*.conf. Thanks Thomas Mayer.
Not enabled by default; if _whois_command is set to
%(_whois_convert_charset)s (e.g. in action.d/mail-whois-common.local), it
- detects the character set of the whois output (which is undefined by
RFC 3912) via heuristics of the file command
- converts the whois data to the UTF-8 character set with iconv
- sends the whois output in UTF-8 to the mail program
- avoids heirloom mailx creating a binary attachment for input with an
unknown character set
ver. 0.9.2 (2015/04/29) - better-quick-now-than-later


@ -180,7 +180,6 @@ fail2ban/server/banmanager.py
fail2ban/server/database.py
fail2ban/server/datedetector.py
fail2ban/server/datetemplate.py
fail2ban/server/faildata.py
fail2ban/server/failmanager.py
fail2ban/server/failregex.py
fail2ban/server/filter.py
@ -197,6 +196,7 @@ fail2ban/server/server.py
fail2ban/server/strptime.py
fail2ban/server/ticket.py
fail2ban/server/transmitter.py
fail2ban/server/utils.py
fail2ban/tests/__init__.py
fail2ban/tests/action_d/__init__.py
fail2ban/tests/action_d/test_badips.py
@ -331,6 +331,7 @@ fail2ban/tests/misctestcase.py
fail2ban/tests/samplestestcase.py
fail2ban/tests/servertestcase.py
fail2ban/tests/sockettestcase.py
fail2ban/tests/tickettestcase.py
fail2ban/tests/utils.py
fail2ban/version.py
files/bash-completion


@ -2,17 +2,19 @@
/ _|__ _(_) |_ ) |__ __ _ _ _
| _/ _` | | |/ /| '_ \/ _` | ' \
|_| \__,_|_|_/___|_.__/\__,_|_||_|
-v0.9.2.dev0 2015/xx/xx
+v0.9.3.dev 2015/XX/XX
## Fail2Ban: ban hosts that cause multiple authentication errors
-Fail2Ban scans log files like /var/log/pwdfail and bans IP that makes too many
-password failures. It updates firewall rules to reject the IP address. These
-rules can be defined by the user. Fail2Ban can read multiple log files such as
-sshd or Apache web server ones.
+Fail2Ban scans log files like `/var/log/auth.log` and bans IP addresses having
+too many failed login attempts. It does this by updating system firewall rules
+to reject new connections from those IP addresses, for a configurable amount
+of time. Fail2Ban comes out-of-the-box ready to read many standard log files,
+such as those for sshd and Apache, and is easy to configure to read any log
+file you choose, for any error you choose.
-Fail2Ban is able to reduce the rate of incorrect authentications attempts
-however it cannot eliminate the risk that weak authentication presents.
+Though Fail2Ban is able to reduce the rate of incorrect authentications
+attempts, it cannot eliminate the risk that weak authentication presents.
Configure services to use only two factor or public/private authentication
mechanisms if you really want to protect services.
@ -37,12 +39,12 @@ Optional:
To install, just do:
-tar xvfj fail2ban-0.9.2.tar.bz2
-cd fail2ban-0.9.2
+tar xvfj fail2ban-0.9.3.tar.bz2
+cd fail2ban-0.9.3
python setup.py install
This will install Fail2Ban into the python library directory. The executable
-scripts are placed into /usr/bin, and configuration under /etc/fail2ban.
+scripts are placed into `/usr/bin`, and configuration under `/etc/fail2ban`.
Fail2Ban should be correctly installed now. Just type:
@ -51,11 +53,20 @@ Fail2Ban should be correctly installed now. Just type:
to see if everything is alright. You should always use fail2ban-client and
never call fail2ban-server directly.
Please note that the system init/service script is not automatically installed.
To enable fail2ban as an automatic service, simply copy the script for your
distro from the `files` directory to `/etc/init.d`. Example (on a Debian-based
system):
cp files/debian-initd /etc/init.d/fail2ban
update-rc.d fail2ban defaults
service fail2ban start
Configuration:
--------------
-You can configure Fail2Ban using the files in /etc/fail2ban. It is possible to
-configure the server using commands sent to it by fail2ban-client. The
+You can configure Fail2Ban using the files in `/etc/fail2ban`. It is possible to
+configure the server using commands sent to it by `fail2ban-client`. The
available commands are described in the fail2ban-client(1) manpage. Also see
fail2ban(1) and jail.conf(5) manpages for further references.

RELEASE

@ -61,24 +61,24 @@ Preparation
* Which indicates that testcases/files/logs/mysqld.log has been moved or is a directory::
-tar -C /tmp -jxf dist/fail2ban-0.9.3.tar.bz2
+tar -C /tmp -jxf dist/fail2ban-0.9.4.tar.bz2
* clean up current direcory::
-diff -rul --exclude \*.pyc . /tmp/fail2ban-0.9.3/
+diff -rul --exclude \*.pyc . /tmp/fail2ban-0.9.4/
* Only differences should be files that you don't want distributed.
* Ensure the tests work from the tarball::
-cd /tmp/fail2ban-0.9.3/ && bin/fail2ban-testcases
+cd /tmp/fail2ban-0.9.4/ && bin/fail2ban-testcases
* Add/finalize the corresponding entry in the ChangeLog
* To generate a list of committers use e.g.::
-git shortlog -sn 0.9.3.. | sed -e 's,^[ 0-9\t]*,,g' | tr '\n' '\|' | sed -e 's:|:, :g'
+git shortlog -sn 0.9.4.. | sed -e 's,^[ 0-9\t]*,,g' | tr '\n' '\|' | sed -e 's:|:, :g'
* Ensure the top of the ChangeLog has the right version and current date.
* Ensure the top entry of the ChangeLog has the right version and current date.
@ -101,7 +101,7 @@ Preparation
* Tag the release by using a signed (and annotated) tag. Cut/paste
release ChangeLog entry as tag annotation::
-git tag -s 0.9.3
+git tag -s 0.9.4
Pre Release
===========
@ -185,7 +185,7 @@ Post Release
Add the following to the top of the ChangeLog::
-ver. 0.9.3 (2014/XX/XXX) - wanna-be-released
+ver. 0.9.5 (2015/XX/XXX) - wanna-be-released
-----------
- Fixes:

THANKS

@ -40,6 +40,7 @@ Eric Gerbier
Enrico Labedzki
Eugene Hopkinson (SlowRiot)
ftoppi
Florian Robert (1technophile)
François Boulogne
Frantisek Sumsal
Frédéric
@ -65,12 +66,14 @@ Joël Bertrand
JP Espinosa
jserrachinha
Justin Shore
Kevin Locke
Kévin Drapel
kjohnsonecl
kojiro
Lars Kneschke
Lee Clemens
leftyfb (Mike Rushton)
M. Maraun
Manuel Arostegui Ramirez
Marcel Dopita
Mark Edgington
@ -87,6 +90,7 @@ Mika (mkl)
Nick Munger
onorua
Orion Poplawski
Pablo Rodriguez Fernandez
Paul Marrapese
Paul Traina
Noel Butler
@ -109,6 +113,7 @@ Stefan Tatschner
Stephen Gildea
Steven Hiscocks
TESTOVIK
Thomas Mayer
Tom Pike
Tomas Pihl
Tony Lawrence


@ -29,563 +29,6 @@ __author__ = "Fail2Ban Developers"
__copyright__ = "Copyright (c) 2004-2008 Cyril Jaquier, 2012-2014 Yaroslav Halchenko"
__license__ = "GPL"
-import getopt
+from fail2ban.client.fail2banregex import exec_command_line
import locale
import logging
import os
import shlex
import sys
import time
import time
import urllib
from optparse import OptionParser, Option
-from ConfigParser import NoOptionError, NoSectionError, MissingSectionHeaderError
+exec_command_line()
try:
from systemd import journal
from fail2ban.server.filtersystemd import FilterSystemd
except ImportError:
journal = None
from fail2ban.version import version
from fail2ban.client.filterreader import FilterReader
from fail2ban.server.filter import Filter
from fail2ban.server.failregex import RegexException
from fail2ban.helpers import FormatterWithTraceBack, getLogger
# Gets the instance of the logger.
logSys = getLogger("fail2ban")
def debuggexURL(sample, regex):
q = urllib.urlencode({ 're': regex.replace('<HOST>', '(?&.ipv4)'),
'str': sample,
'flavor': 'python' })
return 'http://www.debuggex.com/?' + q
def shortstr(s, l=53):
"""Return shortened string
"""
if len(s) > l:
return s[:l-3] + '...'
return s
def pprint_list(l, header=None):
if not len(l):
return
if header:
s = "|- %s\n" % header
else:
s = ''
print s + "| " + "\n| ".join(l) + '\n`-'
def file_lines_gen(hdlr):
for line in hdlr:
try:
line = line.decode(fail2banRegex.encoding, 'strict')
except UnicodeDecodeError:
if sys.version_info >= (3,): # Python 3 must be decoded
line = line.decode(fail2banRegex.encoding, 'ignore')
yield line
def journal_lines_gen(myjournal):
while True:
try:
entry = myjournal.get_next()
except OSError:
continue
if not entry:
break
yield FilterSystemd.formatJournalEntry(entry)
def get_opt_parser():
# use module docstring for help output
p = OptionParser(
usage="%s [OPTIONS] <LOG> <REGEX> [IGNOREREGEX]\n" % sys.argv[0] + __doc__
+ """
LOG:
string a string representing a log line
filename path to a log file (/var/log/auth.log)
"systemd-journal" search systemd journal (systemd-python required)
REGEX:
string a string representing a 'failregex'
filename path to a filter file (filter.d/sshd.conf)
IGNOREREGEX:
string a string representing an 'ignoreregex'
filename path to a filter file (filter.d/sshd.conf)
Copyright (c) 2004-2008 Cyril Jaquier, 2008- Fail2Ban Contributors
Copyright of modifications held by their respective authors.
Licensed under the GNU General Public License v2 (GPL).
Written by Cyril Jaquier <cyril.jaquier@fail2ban.org>.
Many contributions by Yaroslav O. Halchenko and Steven Hiscocks.
Report bugs to https://github.com/fail2ban/fail2ban/issues
""",
version="%prog " + version)
p.add_options([
Option("-d", "--datepattern",
help="set custom pattern used to match date/times"),
Option("-e", "--encoding",
help="File encoding. Default: system locale"),
Option("-L", "--maxlines", type=int, default=0,
help="maxlines for multi-line regex"),
Option("-m", "--journalmatch",
help="journalctl style matches overriding filter file. "
"\"systemd-journal\" only"),
Option('-l', "--log-level", type="choice",
dest="log_level",
choices=('heavydebug', 'debug', 'info', 'notice', 'warning', 'error', 'critical'),
default=None,
help="Log level for the Fail2Ban logger to use"),
Option("-v", "--verbose", action='store_true',
help="Be verbose in output"),
Option("-D", "--debuggex", action='store_true',
help="Produce debuggex.com urls for debugging there"),
Option("--print-no-missed", action='store_true',
help="Do not print any missed lines"),
Option("--print-no-ignored", action='store_true',
help="Do not print any ignored lines"),
Option("--print-all-matched", action='store_true',
help="Print all matched lines"),
Option("--print-all-missed", action='store_true',
help="Print all missed lines, no matter how many"),
Option("--print-all-ignored", action='store_true',
help="Print all ignored lines, no matter how many"),
Option("-t", "--log-traceback", action='store_true',
help="Enrich log-messages with compressed tracebacks"),
Option("--full-traceback", action='store_true',
help="Either to make the tracebacks full, not compressed (as by default)"),
])
return p
class RegexStat(object):
def __init__(self, failregex):
self._stats = 0
self._failregex = failregex
self._ipList = list()
def __str__(self):
return "%s(%r) %d failed: %s" \
% (self.__class__, self._failregex, self._stats, self._ipList)
def inc(self):
self._stats += 1
def getStats(self):
return self._stats
def getFailRegex(self):
return self._failregex
def appendIP(self, value):
self._ipList.append(value)
def getIPList(self):
return self._ipList
class LineStats(object):
"""Just a convenience container for stats
"""
def __init__(self):
self.tested = self.matched = 0
self.matched_lines = []
self.missed = 0
self.missed_lines = []
self.missed_lines_timeextracted = []
self.ignored = 0
self.ignored_lines = []
self.ignored_lines_timeextracted = []
def __str__(self):
return "%(tested)d lines, %(ignored)d ignored, %(matched)d matched, %(missed)d missed" % self
# just for convenient str
def __getitem__(self, key):
return getattr(self, key)
class Fail2banRegex(object):
def __init__(self, opts):
self._verbose = opts.verbose
self._debuggex = opts.debuggex
self._maxlines = 20
self._print_no_missed = opts.print_no_missed
self._print_no_ignored = opts.print_no_ignored
self._print_all_matched = opts.print_all_matched
self._print_all_missed = opts.print_all_missed
self._print_all_ignored = opts.print_all_ignored
self._maxlines_set = False # so we allow to override maxlines in cmdline
self._datepattern_set = False
self._journalmatch = None
self.share_config=dict()
self._filter = Filter(None)
self._ignoreregex = list()
self._failregex = list()
self._time_elapsed = None
self._line_stats = LineStats()
if opts.maxlines:
self.setMaxLines(opts.maxlines)
if opts.journalmatch is not None:
self.setJournalMatch(opts.journalmatch.split())
if opts.datepattern:
self.setDatePattern(opts.datepattern)
if opts.encoding:
self.encoding = opts.encoding
else:
self.encoding = locale.getpreferredencoding()
def setDatePattern(self, pattern):
if not self._datepattern_set:
self._filter.setDatePattern(pattern)
self._datepattern_set = True
if pattern is not None:
print "Use datepattern : %s" % (
self._filter.getDatePattern()[1], )
def setMaxLines(self, v):
if not self._maxlines_set:
self._filter.setMaxLines(int(v))
self._maxlines_set = True
print "Use maxlines : %d" % self._filter.getMaxLines()
def setJournalMatch(self, v):
if self._journalmatch is None:
self._journalmatch = v
def readRegex(self, value, regextype):
assert(regextype in ('fail', 'ignore'))
regex = regextype + 'regex'
if os.path.isfile(value) or os.path.isfile(value + '.conf'):
if os.path.basename(os.path.dirname(value)) == 'filter.d':
## within filter.d folder - use standard loading algorithm to load filter completely (with .local etc.):
basedir = os.path.dirname(os.path.dirname(value))
value = os.path.splitext(os.path.basename(value))[0]
print "Use %11s filter file : %s, basedir: %s" % (regex, value, basedir)
reader = FilterReader(value, 'fail2ban-regex-jail', {}, share_config=self.share_config, basedir=basedir)
if not reader.read():
print "ERROR: failed to load filter %s" % value
return False
else:
## foreign file - readexplicit this file and includes if possible:
print "Use %11s file : %s" % (regex, value)
reader = FilterReader(value, 'fail2ban-regex-jail', {}, share_config=self.share_config)
reader.setBaseDir(None)
if not reader.readexplicit():
print "ERROR: failed to read %s" % value
return False
reader.getOptions(None)
readercommands = reader.convert()
regex_values = [
RegexStat(m[3])
for m in filter(
lambda x: x[0] == 'set' and x[2] == "add%sregex" % regextype,
readercommands)]
# Read out and set possible value of maxlines
for command in readercommands:
if command[2] == "maxlines":
maxlines = int(command[3])
try:
self.setMaxLines(maxlines)
except ValueError:
print "ERROR: Invalid value for maxlines (%(maxlines)r) " \
"read from %(value)s" % locals()
return False
elif command[2] == 'addjournalmatch':
journalmatch = command[3:]
self.setJournalMatch(journalmatch)
elif command[2] == 'datepattern':
datepattern = command[3]
self.setDatePattern(datepattern)
else:
print "Use %11s line : %s" % (regex, shortstr(value))
regex_values = [RegexStat(value)]
setattr(self, "_" + regex, regex_values)
for regex in regex_values:
getattr(
self._filter,
'add%sRegex' % regextype.title())(regex.getFailRegex())
return True
def testIgnoreRegex(self, line):
found = False
try:
ret = self._filter.ignoreLine([(line, "", "")])
if ret is not None:
found = True
regex = self._ignoreregex[ret].inc()
except RegexException, e:
print e
return False
return found
def testRegex(self, line, date=None):
orgLineBuffer = self._filter._Filter__lineBuffer
fullBuffer = len(orgLineBuffer) >= self._filter.getMaxLines()
try:
line, ret = self._filter.processLine(line, date, checkAllRegex=True)
for match in ret:
# Append True/False flag depending if line was matched by
# more than one regex
match.append(len(ret)>1)
regex = self._failregex[match[0]]
regex.inc()
regex.appendIP(match)
except RegexException, e:
print e
return False
except IndexError:
print "Sorry, but no <HOST> found in regex"
return False
for bufLine in orgLineBuffer[int(fullBuffer):]:
if bufLine not in self._filter._Filter__lineBuffer:
try:
self._line_stats.missed_lines.pop(
self._line_stats.missed_lines.index("".join(bufLine)))
self._line_stats.missed_lines_timeextracted.pop(
self._line_stats.missed_lines_timeextracted.index(
"".join(bufLine[::2])))
except ValueError:
pass
else:
self._line_stats.matched += 1
self._line_stats.missed -= 1
return line, ret
def process(self, test_lines):
t0 = time.time()
for line_no, line in enumerate(test_lines):
if isinstance(line, tuple):
line_datetimestripped, ret = fail2banRegex.testRegex(
line[0], line[1])
line = "".join(line[0])
else:
line = line.rstrip('\r\n')
if line.startswith('#') or not line:
# skip comment and empty lines
continue
line_datetimestripped, ret = fail2banRegex.testRegex(line)
is_ignored = fail2banRegex.testIgnoreRegex(line_datetimestripped)
if is_ignored:
self._line_stats.ignored += 1
if not self._print_no_ignored and (self._print_all_ignored or self._line_stats.ignored <= self._maxlines + 1):
self._line_stats.ignored_lines.append(line)
self._line_stats.ignored_lines_timeextracted.append(line_datetimestripped)
if len(ret) > 0:
assert(not is_ignored)
self._line_stats.matched += 1
if self._print_all_matched:
self._line_stats.matched_lines.append(line)
else:
if not is_ignored:
self._line_stats.missed += 1
if not self._print_no_missed and (self._print_all_missed or self._line_stats.missed <= self._maxlines + 1):
self._line_stats.missed_lines.append(line)
self._line_stats.missed_lines_timeextracted.append(line_datetimestripped)
self._line_stats.tested += 1
if line_no % 10 == 0 and self._filter.dateDetector is not None:
self._filter.dateDetector.sortTemplate()
self._time_elapsed = time.time() - t0
def printLines(self, ltype):
lstats = self._line_stats
assert(self._line_stats.missed == lstats.tested - (lstats.matched + lstats.ignored))
lines = lstats[ltype]
l = lstats[ltype + '_lines']
if lines:
header = "%s line(s):" % (ltype.capitalize(),)
if self._debuggex:
if ltype == 'missed' or ltype == 'matched':
regexlist = self._failregex
else:
regexlist = self._ignoreregex
l = lstats[ltype + '_lines_timeextracted']
if lines < self._maxlines or getattr(self, '_print_all_' + ltype):
ans = [[]]
for arg in [l, regexlist]:
ans = [ x + [y] for x in ans for y in arg ]
b = map(lambda a: a[0] + ' | ' + a[1].getFailRegex() + ' | ' + debuggexURL(a[0], a[1].getFailRegex()), ans)
pprint_list([x.rstrip() for x in b], header)
else:
print "%s too many to print. Use --print-all-%s " \
"to print all %d lines" % (header, ltype, lines)
elif lines < self._maxlines or getattr(self, '_print_all_' + ltype):
pprint_list([x.rstrip() for x in l], header)
else:
print "%s too many to print. Use --print-all-%s " \
"to print all %d lines" % (header, ltype, lines)
def printStats(self):
print
print "Results"
print "======="
def print_failregexes(title, failregexes):
# Print title
total, out = 0, []
for cnt, failregex in enumerate(failregexes):
match = failregex.getStats()
total += match
if (match or self._verbose):
out.append("%2d) [%d] %s" % (cnt+1, match, failregex.getFailRegex()))
if self._verbose and len(failregex.getIPList()):
for ip in failregex.getIPList():
timeTuple = time.localtime(ip[2])
timeString = time.strftime("%a %b %d %H:%M:%S %Y", timeTuple)
out.append(
" %s %s%s" % (
ip[1],
timeString,
ip[-1] and " (multiple regex matched)" or ""))
print "\n%s: %d total" % (title, total)
pprint_list(out, " #) [# of hits] regular expression")
return total
# Print title
total = print_failregexes("Failregex", self._failregex)
_ = print_failregexes("Ignoreregex", self._ignoreregex)
if self._filter.dateDetector is not None:
print "\nDate template hits:"
out = []
for template in self._filter.dateDetector.templates:
if self._verbose or template.hits:
out.append("[%d] %s" % (
template.hits, template.name))
pprint_list(out, "[# of hits] date format")
print "\nLines: %s" % self._line_stats,
if self._time_elapsed is not None:
print "[processed in %.2f sec]" % self._time_elapsed,
print
if self._print_all_matched:
self.printLines('matched')
if not self._print_no_ignored:
self.printLines('ignored')
if not self._print_no_missed:
self.printLines('missed')
return True
if __name__ == "__main__":
parser = get_opt_parser()
(opts, args) = parser.parse_args()
if opts.print_no_missed and opts.print_all_missed:
sys.stderr.write("ERROR: --print-no-missed and --print-all-missed are mutually exclusive.\n\n")
parser.print_help()
sys.exit(-1)
if opts.print_no_ignored and opts.print_all_ignored:
sys.stderr.write("ERROR: --print-no-ignored and --print-all-ignored are mutually exclusive.\n\n")
parser.print_help()
sys.exit(-1)
print
print "Running tests"
print "============="
print
fail2banRegex = Fail2banRegex(opts)
# We need 2 or 3 parameters
if not len(args) in (2, 3):
sys.stderr.write("ERROR: provide both <LOG> and <REGEX>.\n\n")
parser.print_help()
sys.exit(-1)
# TODO: taken from -testcases -- move common functionality somewhere
if opts.log_level is not None: # pragma: no cover
# so we had explicit settings
logSys.setLevel(getattr(logging, opts.log_level.upper()))
else: # pragma: no cover
# suppress the logging but it would leave unittests' progress dots
# ticking, unless like with '-l critical' which would be silent
# unless error occurs
logSys.setLevel(getattr(logging, 'CRITICAL'))
# Add the default logging handler
stdout = logging.StreamHandler(sys.stdout)
fmt = 'D: %(message)s'
if opts.log_traceback:
Formatter = FormatterWithTraceBack
fmt = (opts.full_traceback and ' %(tb)s' or ' %(tbc)s') + fmt
else:
Formatter = logging.Formatter
# Custom log format for the verbose tests runs
if opts.verbose: # pragma: no cover
stdout.setFormatter(Formatter(' %(asctime)-15s %(thread)s' + fmt))
else: # pragma: no cover
# just prefix with the space
stdout.setFormatter(Formatter(fmt))
logSys.addHandler(stdout)
cmd_log, cmd_regex = args[:2]
fail2banRegex.readRegex(cmd_regex, 'fail') or sys.exit(-1)
if len(args) == 3:
fail2banRegex.readRegex(args[2], 'ignore') or sys.exit(-1)
if os.path.isfile(cmd_log):
try:
hdlr = open(cmd_log, 'rb')
print "Use log file : %s" % cmd_log
print "Use encoding : %s" % fail2banRegex.encoding
test_lines = file_lines_gen(hdlr)
except IOError, e:
print e
sys.exit(-1)
elif cmd_log == "systemd-journal":
if not journal:
print "Error: systemd library not found. Exiting..."
sys.exit(-1)
myjournal = journal.Reader(converters={'__CURSOR': lambda x: x})
journalmatch = fail2banRegex._journalmatch
fail2banRegex.setDatePattern(None)
if journalmatch:
try:
for element in journalmatch:
if element == "+":
myjournal.add_disjunction()
else:
myjournal.add_match(element)
except ValueError:
print "Error: Invalid journalmatch: %s" % shortstr(" ".join(journalmatch))
sys.exit(-1)
print "Use journal match : %s" % " ".join(journalmatch)
test_lines = journal_lines_gen(myjournal)
else:
print "Use single line : %s" % shortstr(cmd_log)
test_lines = [ cmd_log ]
print
fail2banRegex.process(test_lines)
fail2banRegex.printStats() or sys.exit(-1)


@ -58,6 +58,15 @@ def get_opt_parser():
Option('-n', "--no-network", action="store_true",
dest="no_network",
help="Do not run tests that require the network"),
Option('-g', "--no-gamin", action="store_true",
dest="no_gamin",
help="Do not run tests that require the gamin"),
Option('-m', "--memory-db", action="store_true",
dest="memory_db",
help="Run database tests using memory instead of file"),
Option('-f', "--fast", action="store_true",
dest="fast",
help="Try to increase speed of the tests, decreasing of wait intervals, memory database"),
Option("-t", "--log-traceback", action='store_true',
help="Enrich log-messages with compressed tracebacks"),
Option("--full-traceback", action='store_true',
@ -120,7 +129,7 @@ if not opts.log_level or opts.log_level != 'critical': # pragma: no cover
print("Fail2ban %s test suite. Python %s. Please wait..." \
% (version, str(sys.version).replace('\n', '')))
-tests = gatherTests(regexps, opts.no_network)
+tests = gatherTests(regexps, opts)
#
# Run the tests
#
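A usage sketch for the new test-runner switches introduced above (the particular combinations are only illustrations):

    bin/fail2ban-testcases --fast             # shorter wait intervals, memory database
    bin/fail2ban-testcases --memory-db -g     # memory database, skip gamin-dependent tests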


@ -117,7 +117,7 @@ class BadIPsAction(ActionBase):
"""
try:
response = urlopen(
-self._Request("/".join([self._badips, "get", "categories"])))
+self._Request("/".join([self._badips, "get", "categories"])), None, 3)
except HTTPError as response:
messages = json.loads(response.read().decode('utf-8'))
self._logSys.error(
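For context, urlopen's positional parameters are (url, data, timeout), so None and 3 keep a plain GET and add a 3-second socket timeout. A minimal sketch (the URL is illustrative; the real request object is built by the action):

    from urllib2 import urlopen   # urllib.request.urlopen on Python 3
    # data=None (no POST body), timeout=3 seconds
    response = urlopen("https://www.badips.com/get/categories", None, 3)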


@ -9,6 +9,8 @@
# Referenced from http://www.normyee.net/blog/2012/02/02/adding-cloudflare-support-to-fail2ban by NORM YEE
#
# To get your CloudFlare API Key: https://www.cloudflare.com/a/account/my-account
#
# CloudFlare API error codes: https://www.cloudflare.com/docs/host-api.html#s4.2
[Definition]


@ -17,23 +17,23 @@ before = iptables-common.conf
# Notes.: command executed once at the start of Fail2Ban.
# Values: CMD
#
-actionstart = iptables -N f2b-<name>
-iptables -A f2b-<name> -j RETURN
-iptables -I <chain> -p <protocol> -j f2b-<name>
+actionstart = <iptables> -N f2b-<name>
+<iptables> -A f2b-<name> -j <returntype>
+<iptables> -I <chain> -p <protocol> -j f2b-<name>
# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
#
-actionstop = iptables -D <chain> -p <protocol> -j f2b-<name>
-iptables -F f2b-<name>
-iptables -X f2b-<name>
+actionstop = <iptables> -D <chain> -p <protocol> -j f2b-<name>
+<iptables> -F f2b-<name>
+<iptables> -X f2b-<name>
# Option: actioncheck
# Notes.: command executed once before each actionban command
# Values: CMD
#
-actioncheck = iptables -n -L <chain> | grep -q 'f2b-<name>[ \t]'
+actioncheck = <iptables> -n -L <chain> | grep -q 'f2b-<name>[ \t]'
# Option: actionban
# Notes.: command executed when banning an IP. Take care that the
@ -41,7 +41,7 @@ actioncheck = iptables -n -L <chain> | grep -q 'f2b-<name>[ \t]'
# Tags: See jail.conf(5) man page
# Values: CMD
#
-actionban = iptables -I f2b-<name> 1 -s <ip> -j <blocktype>
+actionban = <iptables> -I f2b-<name> 1 -s <ip> -j <blocktype>
# Option: actionunban
# Notes.: command executed when unbanning an IP. Take care that the
@ -49,7 +49,7 @@ actionban = iptables -I f2b-<name> 1 -s <ip> -j <blocktype>
# Tags: See jail.conf(5) man page
# Values: CMD
#
-actionunban = iptables -D f2b-<name> -s <ip> -j <blocktype>
+actionunban = <iptables> -D f2b-<name> -s <ip> -j <blocktype>
[Init]


@ -43,3 +43,22 @@ protocol = tcp
# REJECT, REJECT --reject-with icmp-port-unreachable
# Values: STRING
blocktype = REJECT --reject-with icmp-port-unreachable
# Option: returntype
# Note: This is the default rule on "actionstart". This should be RETURN
# in all (blocking) actions, except REJECT in allowing actions.
# Values: STRING
returntype = RETURN
# Option: lockingopt
# Notes.: Option was introduced to iptables to prevent multiple instances from
# running concurrently and causing erratic behavior. -w was introduced
# in iptables 1.4.20, so might be absent on older systems
# See https://github.com/fail2ban/fail2ban/issues/1122
# Values: STRING
lockingopt = -w
# Option: iptables
# Notes.: Actual command to be executed, including options common to all calls
# Values: STRING
iptables = iptables <lockingopt>
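With these defaults, every `<iptables>` reference in the action files expands to `iptables -w`; a sketch of the resulting ban command (the jail name and IP are illustrative):

    # actionban = <iptables> -I f2b-<name> 1 -s <ip> -j <blocktype>  expands to e.g.
    iptables -w -I f2b-sshd 1 -s 192.0.2.1 -j REJECT --reject-with icmp-port-unreachable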


@ -28,13 +28,13 @@ before = iptables-common.conf
# Values: CMD
#
actionstart = ipset --create f2b-<name> iphash
-iptables -I <chain> -p <protocol> -m multiport --dports <port> -m set --match-set f2b-<name> src -j <blocktype>
+<iptables> -I <chain> -p <protocol> -m multiport --dports <port> -m set --match-set f2b-<name> src -j <blocktype>
# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
#
-actionstop = iptables -D <chain> -p <protocol> -m multiport --dports <port> -m set --match-set f2b-<name> src -j <blocktype>
+actionstop = <iptables> -D <chain> -p <protocol> -m multiport --dports <port> -m set --match-set f2b-<name> src -j <blocktype>
ipset --flush f2b-<name>
ipset --destroy f2b-<name>


@ -24,13 +24,13 @@ before = iptables-common.conf
# Values: CMD
#
actionstart = ipset create f2b-<name> hash:ip timeout <bantime>
-iptables -I <chain> -m set --match-set f2b-<name> src -j <blocktype>
+<iptables> -I <chain> -m set --match-set f2b-<name> src -j <blocktype>
# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
#
-actionstop = iptables -D <chain> -m set --match-set f2b-<name> src -j <blocktype>
+actionstop = <iptables> -D <chain> -m set --match-set f2b-<name> src -j <blocktype>
ipset flush f2b-<name>
ipset destroy f2b-<name>


@ -24,13 +24,13 @@ before = iptables-common.conf
# Values: CMD
#
actionstart = ipset create f2b-<name> hash:ip timeout <bantime>
-iptables -I <chain> -p <protocol> -m multiport --dports <port> -m set --match-set f2b-<name> src -j <blocktype>
+<iptables> -I <chain> -p <protocol> -m multiport --dports <port> -m set --match-set f2b-<name> src -j <blocktype>
# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
#
-actionstop = iptables -D <chain> -p <protocol> -m multiport --dports <port> -m set --match-set f2b-<name> src -j <blocktype>
+actionstop = <iptables> -D <chain> -p <protocol> -m multiport --dports <port> -m set --match-set f2b-<name> src -j <blocktype>
ipset flush f2b-<name>
ipset destroy f2b-<name>


@ -19,28 +19,28 @@ before = iptables-common.conf
# Notes.: command executed once at the start of Fail2Ban.
# Values: CMD
#
-actionstart = iptables -N f2b-<name>
-iptables -A f2b-<name> -j RETURN
-iptables -I <chain> 1 -p <protocol> -m multiport --dports <port> -j f2b-<name>
-iptables -N f2b-<name>-log
-iptables -I f2b-<name>-log -j LOG --log-prefix "$(expr f2b-<name> : '\(.\{1,23\}\)'):DROP " --log-level warning -m limit --limit 6/m --limit-burst 2
-iptables -A f2b-<name>-log -j <blocktype>
+actionstart = <iptables> -N f2b-<name>
+<iptables> -A f2b-<name> -j <returntype>
+<iptables> -I <chain> 1 -p <protocol> -m multiport --dports <port> -j f2b-<name>
+<iptables> -N f2b-<name>-log
+<iptables> -I f2b-<name>-log -j LOG --log-prefix "$(expr f2b-<name> : '\(.\{1,23\}\)'):DROP " --log-level warning -m limit --limit 6/m --limit-burst 2
+<iptables> -A f2b-<name>-log -j <blocktype>
# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
#
-actionstop = iptables -D <chain> -p <protocol> -m multiport --dports <port> -j f2b-<name>
-iptables -F f2b-<name>
-iptables -F f2b-<name>-log
-iptables -X f2b-<name>
-iptables -X f2b-<name>-log
+actionstop = <iptables> -D <chain> -p <protocol> -m multiport --dports <port> -j f2b-<name>
+<iptables> -F f2b-<name>
+<iptables> -F f2b-<name>-log
+<iptables> -X f2b-<name>
+<iptables> -X f2b-<name>-log
# Option: actioncheck
# Notes.: command executed once before each actionban command
# Values: CMD
#
-actioncheck = iptables -n -L f2b-<name>-log >/dev/null
+actioncheck = <iptables> -n -L f2b-<name>-log >/dev/null
# Option: actionban
# Notes.: command executed when banning an IP. Take care that the
@ -48,7 +48,7 @@ actioncheck = iptables -n -L f2b-<name>-log >/dev/null
# Tags: See jail.conf(5) man page
# Values: CMD
#
-actionban = iptables -I f2b-<name> 1 -s <ip> -j f2b-<name>-log
+actionban = <iptables> -I f2b-<name> 1 -s <ip> -j f2b-<name>-log
# Option: actionunban
# Notes.: command executed when unbanning an IP. Take care that the
@ -56,7 +56,7 @@ actionban = iptables -I f2b-<name> 1 -s <ip> -j f2b-<name>-log
# Tags: See jail.conf(5) man page
# Values: CMD
#
-actionunban = iptables -D f2b-<name> -s <ip> -j f2b-<name>-log
+actionunban = <iptables> -D f2b-<name> -s <ip> -j f2b-<name>-log
[Init]


@ -14,23 +14,23 @@ before = iptables-common.conf
# Notes.: command executed once at the start of Fail2Ban.
# Values: CMD
#
-actionstart = iptables -N f2b-<name>
-iptables -A f2b-<name> -j RETURN
-iptables -I <chain> -p <protocol> -m multiport --dports <port> -j f2b-<name>
+actionstart = <iptables> -N f2b-<name>
+<iptables> -A f2b-<name> -j <returntype>
+<iptables> -I <chain> -p <protocol> -m multiport --dports <port> -j f2b-<name>
# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
#
-actionstop = iptables -D <chain> -p <protocol> -m multiport --dports <port> -j f2b-<name>
-iptables -F f2b-<name>
-iptables -X f2b-<name>
+actionstop = <iptables> -D <chain> -p <protocol> -m multiport --dports <port> -j f2b-<name>
+<iptables> -F f2b-<name>
+<iptables> -X f2b-<name>
# Option: actioncheck
# Notes.: command executed once before each actionban command
# Values: CMD
#
-actioncheck = iptables -n -L <chain> | grep -q 'f2b-<name>[ \t]'
+actioncheck = <iptables> -n -L <chain> | grep -q 'f2b-<name>[ \t]'
# Option: actionban
# Notes.: command executed when banning an IP. Take care that the
@ -38,7 +38,7 @@ actioncheck = iptables -n -L <chain> | grep -q 'f2b-<name>[ \t]'
# Tags: See jail.conf(5) man page
# Values: CMD
#
-actionban = iptables -I f2b-<name> 1 -s <ip> -j <blocktype>
+actionban = <iptables> -I f2b-<name> 1 -s <ip> -j <blocktype>
# Option: actionunban
# Notes.: command executed when unbanning an IP. Take care that the
@ -46,7 +46,7 @@ actionban = iptables -I f2b-<name> 1 -s <ip> -j <blocktype>
# Tags: See jail.conf(5) man page
# Values: CMD
#
-actionunban = iptables -D f2b-<name> -s <ip> -j <blocktype>
+actionunban = <iptables> -D f2b-<name> -s <ip> -j <blocktype>
[Init]


@ -16,23 +16,23 @@ before = iptables-common.conf
# Notes.: command executed once at the start of Fail2Ban.
# Values: CMD
#
-actionstart = iptables -N f2b-<name>
-iptables -A f2b-<name> -j RETURN
-iptables -I <chain> -m state --state NEW -p <protocol> --dport <port> -j f2b-<name>
+actionstart = <iptables> -N f2b-<name>
+<iptables> -A f2b-<name> -j <returntype>
+<iptables> -I <chain> -m state --state NEW -p <protocol> --dport <port> -j f2b-<name>
# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
#
-actionstop = iptables -D <chain> -m state --state NEW -p <protocol> --dport <port> -j f2b-<name>
-iptables -F f2b-<name>
-iptables -X f2b-<name>
+actionstop = <iptables> -D <chain> -m state --state NEW -p <protocol> --dport <port> -j f2b-<name>
+<iptables> -F f2b-<name>
+<iptables> -X f2b-<name>
# Option: actioncheck
# Notes.: command executed once before each actionban command
# Values: CMD
#
-actioncheck = iptables -n -L <chain> | grep -q 'f2b-<name>[ \t]'
+actioncheck = <iptables> -n -L <chain> | grep -q 'f2b-<name>[ \t]'
# Option: actionban
# Notes.: command executed when banning an IP. Take care that the
@ -40,7 +40,7 @@ actioncheck = iptables -n -L <chain> | grep -q 'f2b-<name>[ \t]'
# Tags: See jail.conf(5) man page
# Values: CMD
#
-actionban = iptables -I f2b-<name> 1 -s <ip> -j <blocktype>
+actionban = <iptables> -I f2b-<name> 1 -s <ip> -j <blocktype>
# Option: actionunban
# Notes.: command executed when unbanning an IP. Take care that the
@ -48,7 +48,7 @@ actionban = iptables -I f2b-<name> 1 -s <ip> -j <blocktype>
# Tags: See jail.conf(5) man page
# Values: CMD
#
-actionunban = iptables -D f2b-<name> -s <ip> -j <blocktype>
+actionunban = <iptables> -D f2b-<name> -s <ip> -j <blocktype>
[Init]


@ -32,14 +32,14 @@ before = iptables-common.conf
# own rules. The 3600 second timeout is independent and acts as a
# safeguard in case the fail2ban process dies unexpectedly. The
# shorter of the two timeouts actually matters.
-actionstart = if [ `id -u` -eq 0 ];then iptables -I <chain> -m recent --update --seconds 3600 --name f2b-<name> -j <blocktype>;fi
+actionstart = if [ `id -u` -eq 0 ];then <iptables> -I <chain> -m recent --update --seconds 3600 --name f2b-<name> -j <blocktype>;fi
# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
#
actionstop = echo / > /proc/net/xt_recent/f2b-<name>
-if [ `id -u` -eq 0 ];then iptables -D <chain> -m recent --update --seconds 3600 --name f2b-<name> -j <blocktype>;fi
+if [ `id -u` -eq 0 ];then <iptables> -D <chain> -m recent --update --seconds 3600 --name f2b-<name> -j <blocktype>;fi
# Option: actioncheck
# Notes.: command executed once before each actionban command


@ -14,23 +14,23 @@ before = iptables-common.conf
# Notes.: command executed once at the start of Fail2Ban.
# Values: CMD
#
-actionstart = iptables -N f2b-<name>
-iptables -A f2b-<name> -j RETURN
-iptables -I <chain> -p <protocol> --dport <port> -j f2b-<name>
+actionstart = <iptables> -N f2b-<name>
+<iptables> -A f2b-<name> -j <returntype>
+<iptables> -I <chain> -p <protocol> --dport <port> -j f2b-<name>
# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
#
-actionstop = iptables -D <chain> -p <protocol> --dport <port> -j f2b-<name>
-iptables -F f2b-<name>
-iptables -X f2b-<name>
+actionstop = <iptables> -D <chain> -p <protocol> --dport <port> -j f2b-<name>
+<iptables> -F f2b-<name>
+<iptables> -X f2b-<name>
# Option: actioncheck
# Notes.: command executed once before each actionban command
# Values: CMD
#
-actioncheck = iptables -n -L <chain> | grep -q 'f2b-<name>[ \t]'
+actioncheck = <iptables> -n -L <chain> | grep -q 'f2b-<name>[ \t]'
# Option: actionban
# Notes.: command executed when banning an IP. Take care that the
@ -38,7 +38,7 @@ actioncheck = iptables -n -L <chain> | grep -q 'f2b-<name>[ \t]'
# Tags: See jail.conf(5) man page
# Values: CMD
#
-actionban = iptables -I f2b-<name> 1 -s <ip> -j <blocktype>
+actionban = <iptables> -I f2b-<name> 1 -s <ip> -j <blocktype>
# Option: actionunban
# Notes.: command executed when unbanning an IP. Take care that the
@ -46,7 +46,7 @@ actionban = iptables -I f2b-<name> 1 -s <ip> -j <blocktype>
# Tags: See jail.conf(5) man page
# Values: CMD
#
-actionunban = iptables -D f2b-<name> -s <ip> -j <blocktype>
+actionunban = <iptables> -D f2b-<name> -s <ip> -j <blocktype>
[Init]


@ -0,0 +1,28 @@
# Fail2Ban configuration file
#
# Common settings for mail actions
#
# Users can override the defaults in mail-whois-common.local
[INCLUDES]
# Load customizations if any available
after = mail-whois-common.local
[DEFAULT]
# by default, the original character set of the whois output is sent to the mail program
_whois = whois <ip> || echo "missing whois program"
# use heuristics to convert the charset of the whois output to a target
# character set before sending it to the mail program
# make sure the 'file' and 'iconv' commands are installed when opting for that
_whois_target_charset = UTF-8
_whois_convert_charset = whois <ip> |
{ WHOIS_OUTPUT=$(cat) ; WHOIS_CHARSET=$(printf %%b "$WHOIS_OUTPUT" | file -b --mime-encoding -) ; printf %%b "$WHOIS_OUTPUT" | iconv -f $WHOIS_CHARSET -t %(_whois_target_charset)s//TRANSLIT - ; }
# choose between _whois and _whois_convert_charset in mail-whois-common.local
# or other *.local which include mail-whois-common.conf.
_whois_command = %(_whois)s
#_whois_command = %(_whois_convert_charset)s
[Init]
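A minimal sketch of the override these comments describe, placed in action.d/mail-whois-common.local (the file name is the one given in the comments above):

    [DEFAULT]
    # use the charset-detecting/converting variant instead of raw whois output
    _whois_command = %(_whois_convert_charset)s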


@ -4,6 +4,10 @@
# Modified-By: Yaroslav Halchenko to include grepping on IP over log files
#
[INCLUDES]
before = mail-whois-common.conf
[Definition]
# Option: actionstart
@ -39,10 +43,10 @@ actioncheck =
actionban = printf %%b "Hi,\n
The IP <ip> has just been banned by Fail2Ban after
<failures> attempts against <name>.\n\n
-Here is more information about <ip>:\n
-`whois <ip> || echo missing whois program`\n\n
+Here is more information about <ip> :\n
+`%(_whois_command)s`\n\n
Lines containing IP:<ip> in <logpath>\n
-`grep -E '(^|[^0-9])<ip>([^0-9]|$)' <logpath>`\n\n
+`grep -E <grepopts> '(^|[^0-9])<ip>([^0-9]|$)' <logpath>`\n\n
Regards,\n
Fail2Ban"|mail -s "[Fail2Ban] <name>: banned <ip> from `uname -n`" <dest>
@ -67,3 +71,7 @@ dest = root
# Path to the log files which contain relevant lines for the abuser IP
#
logpath = /dev/null
# Number of log lines to include in the email
#
grepopts = -m 1000
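To include more or fewer lines, a hedged sketch of an override (the file name follows the usual .local convention and the value 200 is arbitrary), e.g. in action.d/mail-whois-lines.local:

    [Init]
    # include at most 200 matching log lines per notification mail
    grepopts = -m 200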


@ -4,6 +4,10 @@
#
#
[INCLUDES]
before = mail-whois-common.conf
[Definition]
# Option: actionstart
@ -39,8 +43,8 @@ actioncheck =
actionban = printf %%b "Hi,\n
The IP <ip> has just been banned by Fail2Ban after
<failures> attempts against <name>.\n\n
-Here is more information about <ip>:\n
-`whois <ip> || echo missing whois program`\n
+Here is more information about <ip> :\n
+`%(_whois_command)s`\n
Regards,\n
Fail2Ban"|mail -s "[Fail2Ban] <name>: banned <ip> from `uname -n`" <dest>


@ -17,6 +17,9 @@
[Definition]
actionban = ip route add <blocktype> <ip>
actionunban = ip route del <blocktype> <ip>
actioncheck =
actionstart =
actionstop =
[Init]


@ -26,7 +26,7 @@ actionban = printf %%b "Subject: [Fail2Ban] <name>: banned <ip> from `uname -n`
Hi,\n
The IP <ip> has just been banned by Fail2Ban after
<failures> attempts against <name>.\n\n
-Here is more information about <ip>:\n
+Here is more information about <ip> :\n
http://bgp.he.net/ip/<ip>
http://www.projecthoneypot.org/ip_<ip>
http://whois.domaintools.com/<ip>\n\n
@ -34,7 +34,7 @@ actionban = printf %%b "Subject: [Fail2Ban] <name>: banned <ip> from `uname -n`
AS:`geoiplookup -f /usr/share/GeoIP/GeoIPASNum.dat "<ip>" | cut -d':' -f2-`
hostname: `host -t A <ip> 2>&1`\n\n
Lines containing IP:<ip> in <logpath>\n
-`grep -E '(^|[^0-9])<ip>([^0-9]|$)' <logpath>`\n\n
+`grep -E <grepopts> '(^|[^0-9])<ip>([^0-9]|$)' <logpath>`\n\n
Regards,\n
Fail2Ban" | /usr/sbin/sendmail -f <sender> <dest>
@ -47,3 +47,7 @@ name = default
# Path to the log files which contain relevant lines for the abuser IP
#
logpath = /dev/null
# Number of log lines to include in the email
#
grepopts = -m 1000

View File

@ -23,7 +23,7 @@ actionban = printf %%b "Subject: [Fail2Ban] <name>: banned <ip> from `uname -n`
Hi,\n Hi,\n
The IP <ip> has just been banned by Fail2Ban after The IP <ip> has just been banned by Fail2Ban after
<failures> attempts against <name>.\n\n <failures> attempts against <name>.\n\n
Here is more information about <ip>:\n Here is more information about <ip> :\n
`/usr/bin/whois <ip>`\n\n `/usr/bin/whois <ip>`\n\n
Matches for <name> with <ipjailfailures> failures IP:<ip>\n Matches for <name> with <ipjailfailures> failures IP:<ip>\n
<ipjailmatches>\n\n <ipjailmatches>\n\n

View File

@ -23,7 +23,7 @@ actionban = printf %%b "Subject: [Fail2Ban] <name>: banned <ip> from `uname -n`
Hi,\n Hi,\n
The IP <ip> has just been banned by Fail2Ban after The IP <ip> has just been banned by Fail2Ban after
<failures> attempts against <name>.\n\n <failures> attempts against <name>.\n\n
Here is more information about <ip>:\n Here is more information about <ip> :\n
`/usr/bin/whois <ip>`\n\n `/usr/bin/whois <ip>`\n\n
Matches with <ipfailures> failures IP:<ip>\n Matches with <ipfailures> failures IP:<ip>\n
<ipmatches>\n\n <ipmatches>\n\n

View File

@ -23,10 +23,10 @@ actionban = printf %%b "Subject: [Fail2Ban] <name>: banned <ip> from `uname -n`
Hi,\n Hi,\n
The IP <ip> has just been banned by Fail2Ban after The IP <ip> has just been banned by Fail2Ban after
<failures> attempts against <name>.\n\n <failures> attempts against <name>.\n\n
Here is more information about <ip>:\n Here is more information about <ip> :\n
`/usr/bin/whois <ip> || echo missing whois program`\n\n `/usr/bin/whois <ip> || echo missing whois program`\n\n
Lines containing IP:<ip> in <logpath>\n Lines containing IP:<ip> in <logpath>\n
`grep -E '(^|[^0-9])<ip>([^0-9]|$)' <logpath>`\n\n `grep -E <grepopts> '(^|[^0-9])<ip>([^0-9]|$)' <logpath>`\n\n
Regards,\n Regards,\n
Fail2Ban" | /usr/sbin/sendmail -f <sender> <dest> Fail2Ban" | /usr/sbin/sendmail -f <sender> <dest>
@ -40,3 +40,6 @@ name = default
# #
logpath = /dev/null logpath = /dev/null
# Number of log lines to include in the email
#
grepopts = -m 1000

View File

@ -23,7 +23,7 @@ actionban = printf %%b "Subject: [Fail2Ban] <name>: banned <ip> from `uname -n`
Hi,\n Hi,\n
The IP <ip> has just been banned by Fail2Ban after The IP <ip> has just been banned by Fail2Ban after
<failures> attempts against <name>.\n\n <failures> attempts against <name>.\n\n
Here is more information about <ip>:\n Here is more information about <ip> :\n
`/usr/bin/whois <ip>`\n\n `/usr/bin/whois <ip>`\n\n
Matches:\n Matches:\n
<matches>\n\n <matches>\n\n

View File

@ -23,7 +23,7 @@ actionban = printf %%b "Subject: [Fail2Ban] <name>: banned <ip> from `uname -n`
Hi,\n Hi,\n
The IP <ip> has just been banned by Fail2Ban after The IP <ip> has just been banned by Fail2Ban after
<failures> attempts against <name>.\n\n <failures> attempts against <name>.\n\n
Here is more information about <ip>:\n Here is more information about <ip> :\n
`/usr/bin/whois <ip> || echo missing whois program`\n `/usr/bin/whois <ip> || echo missing whois program`\n
Regards,\n Regards,\n
Fail2Ban" | /usr/sbin/sendmail -f <sender> <dest> Fail2Ban" | /usr/sbin/sendmail -f <sender> <dest>

View File

@ -0,0 +1,85 @@
# Fail2Ban configuration file
#
# Author: Eduardo Diaz
#
# This is for ipset protocol 6 (and hopefully later) (ipset v6.14),
# for use with Shorewall.
#
# Use this setting in jail.conf (or jail.local) to use this action instead
# of the default one
#
# banaction = shorewall-ipset-proto6
#
# This requires the program ipset which is normally in a package called ipset.
#
# ipset was introduced in the Linux 2.6.39 and 3.0.0 kernels, and you
# need Shorewall >= 4.5.5 to use this action.
#
# The default Shorewall configuration is with "BLACKLISTNEWONLY=Yes" (see
# file /etc/shorewall/shorewall.conf). This means that when Fail2ban adds a
# new shorewall rule to ban an IP address, that rule will affect only new
# connections. So if the attacker keeps trying over the same connection
# he could even log in. In order to get the same behavior as the iptables
# action (so that the ban is immediate), the /etc/shorewall/shorewall.conf
# file should be modified with "BLACKLISTNEWONLY=No".
#
#
# Enable blacklisting in Shorewall by creating the file
# /etc/shorewall/blrules and adding "DROP net:+f2b-ssh all" and
# similar lines for every jail. To have your ipsets restored you
# must set SAVE_IPSETS=Yes in shorewall.conf. You can read more
# about ipset handling in Shorewall at http://shorewall.net/ipsets.html
#
# To force creation of the ipset in case somebody deletes it, create the
# file /etc/shorewall/initdone and add one line for every ipset (these
# files are Perl) and add 1 at the end of the file.
# For example:
# system("/usr/sbin/ipset -quiet -exist create f2b-ssh hash:ip timeout 600 ");
# 1;
#
# To destroy the ipsets when Shorewall stops, add to the file /etc/shorewall/stopped:
# # One line for every ipset
# system("/usr/sbin/ipset -quiet destroy f2b-ssh ");
# 1; # This must go at the end of the file, otherwise Shorewall compilation fails
#
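# A rough end-to-end sketch of the pieces described above (the jail name "ssh"
# is only an example and must match your own jails, as in the f2b-ssh lines):
#
# jail.local:
#   [ssh]
#   enabled   = true
#   banaction = shorewall-ipset-proto6
#
# /etc/shorewall/blrules:
#   DROP net:+f2b-ssh all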
[Definition]
# Option: actionstart
# Notes.: command executed once at the start of Fail2Ban.
# Values: CMD
#
actionstart = if ! ipset -quiet -name list f2b-<name> >/dev/null;
then ipset -quiet -exist create f2b-<name> hash:ip timeout <bantime>;
fi
# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
#
actionstop = ipset flush f2b-<name>
# Option: actionban
# Notes.: command executed when banning an IP. Take care that the
# command is executed with Fail2Ban user rights.
# Tags: See jail.conf(5) man page
# Values: CMD
#
actionban = ipset add f2b-<name> <ip> timeout <bantime> -exist
# Option: actionunban
# Notes.: command executed when unbanning an IP. Take care that the
# command is executed with Fail2Ban user rights.
# Tags: See jail.conf(5) man page
# Values: CMD
#
actionunban = ipset del f2b-<name> <ip> -exist
[Init]
# Option: bantime
# Notes: specifies the bantime in seconds (handled internally rather than by fail2ban)
# Values: [ NUM ] Default: 600
#
bantime = 600

View File

@ -3,6 +3,9 @@
# Author: Yaroslav Halchenko # Author: Yaroslav Halchenko
# #
[INCLUDES]
before = iptables-common.conf
[Definition] [Definition]
@ -22,21 +25,21 @@ actionstop =
# Notes.: command executed once before each actionban command # Notes.: command executed once before each actionban command
# Values: CMD # Values: CMD
# #
actioncheck = iptables -n -L <chain> actioncheck = <iptables> -n -L <chain>
# Option: actionban # Option: actionban
# Notes.: command executed when banning an IP. # Notes.: command executed when banning an IP.
# Values: CMD # Values: CMD
# #
actionban = echo 'all' >| /etc/symbiosis/firewall/blacklist.d/<ip>.auto actionban = echo 'all' >| /etc/symbiosis/firewall/blacklist.d/<ip>.auto
iptables -I <chain> 1 -s <ip> -j <blocktype> <iptables> -I <chain> 1 -s <ip> -j <blocktype>
# Option: actionunban # Option: actionunban
# Notes.: command executed when unbanning an IP. # Notes.: command executed when unbanning an IP.
# Values: CMD # Values: CMD
# #
actionunban = rm -f /etc/symbiosis/firewall/blacklist.d/<ip>.auto actionunban = rm -f /etc/symbiosis/firewall/blacklist.d/<ip>.auto
iptables -D <chain> -s <ip> -j <blocktype> || : <iptables> -D <chain> -s <ip> -j <blocktype> || :
[Init] [Init]

View File

@ -8,7 +8,7 @@
[Definition] [Definition]
badbotscustom = EmailCollector|WebEMailExtrac|TrackBack/1\.02|sogou music spider badbotscustom = EmailCollector|WebEMailExtrac|TrackBack/1\.02|sogou music spider
badbots = Atomic_Email_Hunter/4\.0|atSpider/1\.0|autoemailspider|bwh3_user_agent|China Local Browse 2\.6|ContactBot/0\.2|ContentSmartz|DataCha0s/2\.0|DBrowse 1\.4b|DBrowse 1\.4d|Demo Bot DOT 16b|Demo Bot Z 16b|DSurf15a 01|DSurf15a 71|DSurf15a 81|DSurf15a VA|EBrowse 1\.4b|Educate Search VxB|EmailSiphon|EmailSpider|EmailWolf 1\.00|ESurf15a 15|ExtractorPro|Franklin Locator 1\.8|FSurf15a 01|Full Web Bot 0416B|Full Web Bot 0516B|Full Web Bot 2816B|Guestbook Auto Submitter|Industry Program 1\.0\.x|ISC Systems iRc Search 2\.1|IUPUI Research Bot v 1\.9a|LARBIN-EXPERIMENTAL \(efp@gmx\.net\)|LetsCrawl\.com/1\.0 +http\://letscrawl\.com/|Lincoln State Web Browser|LMQueueBot/0\.2|LWP\:\:Simple/5\.803|Mac Finder 1\.0\.xx|MFC Foundation Class Library 4\.0|Microsoft URL Control - 6\.00\.8xxx|Missauga Locate 1\.0\.0|Missigua Locator 1\.9|Missouri College Browse|Mizzu Labs 2\.2|Mo College 1\.9|MVAClient|Mozilla/2\.0 \(compatible; NEWT ActiveX; Win32\)|Mozilla/3\.0 \(compatible; Indy Library\)|Mozilla/3\.0 \(compatible; scan4mail \(advanced version\) http\://www\.peterspages\.net/?scan4mail\)|Mozilla/4\.0 \(compatible; Advanced Email Extractor v2\.xx\)|Mozilla/4\.0 \(compatible; Iplexx Spider/1\.0 http\://www\.iplexx\.at\)|Mozilla/4\.0 \(compatible; MSIE 5\.0; Windows NT; DigExt; DTS Agent|Mozilla/4\.0 efp@gmx\.net|Mozilla/5\.0 \(Version\: xxxx Type\:xx\)|NameOfAgent \(CMS Spider\)|NASA Search 1\.0|Nsauditor/1\.x|PBrowse 1\.4b|PEval 1\.4b|Poirot|Port Huron Labs|Production Bot 0116B|Production Bot 2016B|Production Bot DOT 3016B|Program Shareware 1\.0\.2|PSurf15a 11|PSurf15a 51|PSurf15a VA|psycheclone|RSurf15a 41|RSurf15a 51|RSurf15a 81|searchbot admin@google\.com|ShablastBot 1\.0|snap\.com beta crawler v0|Snapbot/1\.0|Snapbot/1\.0 \(Snap Shots&#44; +http\://www\.snap\.com\)|sogou develop spider|Sogou Orion spider/3\.0\(+http\://www\.sogou\.com/docs/help/webmasters\.htm#07\)|sogou spider|Sogou web spider/3\.0\(+http\://www\.sogou\.com/docs/help/webmasters\.htm#07\)|sohu agent|SSurf15a 11 |TSurf15a 11|Under the Rainbow 2\.2|User-Agent\: Mozilla/4\.0 \(compatible; MSIE 6\.0; Windows NT 5\.1\)|VadixBot|WebVulnCrawl\.unknown/1\.0 libwww-perl/5\.803|Wells Search II|WEP Search 00 badbots = Atomic_Email_Hunter/4\.0|atSpider/1\.0|autoemailspider|bwh3_user_agent|China Local Browse 2\.6|ContactBot/0\.2|ContentSmartz|DataCha0s/2\.0|DBrowse 1\.4b|DBrowse 1\.4d|Demo Bot DOT 16b|Demo Bot Z 16b|DSurf15a 01|DSurf15a 71|DSurf15a 81|DSurf15a VA|EBrowse 1\.4b|Educate Search VxB|EmailSiphon|EmailSpider|EmailWolf 1\.00|ESurf15a 15|ExtractorPro|Franklin Locator 1\.8|FSurf15a 01|Full Web Bot 0416B|Full Web Bot 0516B|Full Web Bot 2816B|Guestbook Auto Submitter|Industry Program 1\.0\.x|ISC Systems iRc Search 2\.1|IUPUI Research Bot v 1\.9a|LARBIN-EXPERIMENTAL \(efp@gmx\.net\)|LetsCrawl\.com/1\.0 \+http\://letscrawl\.com/|Lincoln State Web Browser|LMQueueBot/0\.2|LWP\:\:Simple/5\.803|Mac Finder 1\.0\.xx|MFC Foundation Class Library 4\.0|Microsoft URL Control - 6\.00\.8xxx|Missauga Locate 1\.0\.0|Missigua Locator 1\.9|Missouri College Browse|Mizzu Labs 2\.2|Mo College 1\.9|MVAClient|Mozilla/2\.0 \(compatible; NEWT ActiveX; Win32\)|Mozilla/3\.0 \(compatible; Indy Library\)|Mozilla/3\.0 \(compatible; scan4mail \(advanced version\) http\://www\.peterspages\.net/?scan4mail\)|Mozilla/4\.0 \(compatible; Advanced Email Extractor v2\.xx\)|Mozilla/4\.0 \(compatible; Iplexx Spider/1\.0 http\://www\.iplexx\.at\)|Mozilla/4\.0 \(compatible; MSIE 5\.0; Windows NT; DigExt; DTS Agent|Mozilla/4\.0 efp@gmx\.net|Mozilla/5\.0 \(Version\: xxxx 
Type\:xx\)|NameOfAgent \(CMS Spider\)|NASA Search 1\.0|Nsauditor/1\.x|PBrowse 1\.4b|PEval 1\.4b|Poirot|Port Huron Labs|Production Bot 0116B|Production Bot 2016B|Production Bot DOT 3016B|Program Shareware 1\.0\.2|PSurf15a 11|PSurf15a 51|PSurf15a VA|psycheclone|RSurf15a 41|RSurf15a 51|RSurf15a 81|searchbot admin@google\.com|ShablastBot 1\.0|snap\.com beta crawler v0|Snapbot/1\.0|Snapbot/1\.0 \(Snap Shots&#44; \+http\://www\.snap\.com\)|sogou develop spider|Sogou Orion spider/3\.0\(\+http\://www\.sogou\.com/docs/help/webmasters\.htm#07\)|sogou spider|Sogou web spider/3\.0\(\+http\://www\.sogou\.com/docs/help/webmasters\.htm#07\)|sohu agent|SSurf15a 11 |TSurf15a 11|Under the Rainbow 2\.2|User-Agent\: Mozilla/4\.0 \(compatible; MSIE 6\.0; Windows NT 5\.1\)|VadixBot|WebVulnCrawl\.unknown/1\.0 libwww-perl/5\.803|Wells Search II|WEP Search 00
failregex = ^<HOST> -.*"(GET|POST|HEAD).*HTTP.*"(?:%(badbots)s|%(badbotscustom)s)"$ failregex = ^<HOST> -.*"(GET|POST|HEAD).*HTTP.*"(?:%(badbots)s|%(badbotscustom)s)"$

View File

@ -0,0 +1,20 @@
# Fail2Ban Apache pass filter
# This filter is for access.log, NOT for error.log
#
# The knocking request must have a referer.
[INCLUDES]
before = apache-common.conf
[Definition]
failregex = ^<HOST> - \w+ \[\] "GET <knocking_url> HTTP/1\.[01]" 200 \d+ ".*" "[^-].*"$
ignoreregex =
[Init]
knocking_url = /knocking/
# Author: Viktor Szépe
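# For illustration only (not part of this commit): a filter.d/apache-pass.local
# overriding the knocking URL to some secret value could look like
#
# [Init]
# knocking_url = /my-secret-knock/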

View File

@ -26,7 +26,10 @@ def is_googlebot(ip):
from fail2ban.server.filter import DNSUtils from fail2ban.server.filter import DNSUtils
host = DNSUtils.ipToName(ip) host = DNSUtils.ipToName(ip)
sys.exit(0 if (host and re.match('crawl-.*\.googlebot\.com', host)) else 1) if not host or not re.match('.*\.google(bot)?\.com$', host):
sys.exit(1)
host_ips = DNSUtils.dnsToIp(host)
sys.exit(0 if ip in host_ips else 1)
if __name__ == '__main__': if __name__ == '__main__':
is_googlebot(process_args(sys.argv)) is_googlebot(process_args(sys.argv))

View File

@ -0,0 +1,28 @@
# Fail2Ban filter for murmur/mumble-server
#
[INCLUDES]
before = common.conf
[Definition]
_daemon = murmurd
# N.B. If you allow users to have usernames that include the '>' character you
# should change this to match the regex assigned to the 'username'
# variable in your server config file (murmur.ini / mumble-server.ini).
_usernameregex = [^>]+
_prefix = <W>[\n\s]*(\.\d{3})?\s+\d+ => <\d+:%(_usernameregex)s\(-1\)> Rejected connection from <HOST>:\d+:
failregex = ^%(_prefix)s Invalid server password$
^%(_prefix)s Wrong certificate or password for existing user$
ignoreregex =
# DEV Notes:
#
# Author: Ross Brown
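# For illustration only: a filter.d/murmur.local could override the username
# pattern to mirror the 'username' regex from your murmur.ini, for example
# (the value below is just a placeholder, copy your own setting):
#
# [Definition]
# _usernameregex = [-=\w\[\]\{\}\(\)\@\|\.]+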

View File

@ -17,7 +17,7 @@ before = common.conf
_daemon = mysqld _daemon = mysqld
failregex = ^%(__prefix_line)s(\d{6} \s?\d{1,2}:\d{2}:\d{2} )?\[Warning\] Access denied for user '\w+'@'<HOST>' (to database '[^']*'|\(using password: (YES|NO)\))*\s*$ failregex = ^%(__prefix_line)s(?:\d+ |\d{6} \s?\d{1,2}:\d{2}:\d{2} )?\[Warning\] Access denied for user '\w+'@'<HOST>' (to database '[^']*'|\(using password: (YES|NO)\))*\s*$
ignoreregex = ignoreregex =

View File

@ -0,0 +1,45 @@
# Fail2ban filter configuration for nginx :: limit_req
# used to ban hosts that nginx rejected while limiting the request processing rate
#
# Author: Serg G. Brester (sebres)
#
# To use 'nginx-limit-req' filter you should have `ngx_http_limit_req_module`
# and define `limit_req` and `limit_req_zone` as described in nginx documentation
# http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
#
# Example:
#
# http {
# ...
# limit_req_zone $binary_remote_addr zone=lr_zone:10m rate=1r/s;
# ...
# # http, server, or location:
# location ... {
# limit_req zone=lr_zone burst=1 nodelay;
# ...
# }
# ...
# }
# ...
#
[Definition]
# Specify the following expression to define exact zones, if you want to ban
# only IPs that were limited in the specified zones.
# Example:
#
# ngx_limit_req_zones = lr_zone|lr_zone2
#
ngx_limit_req_zones = [^"]+
# Use the following full expression if you need to restrict matching to
# specific servers, requests, referrers etc. only:
#
# failregex = ^\s*\[error\] \d+#\d+: \*\d+ limiting requests, excess: [\d\.]+ by zone "(?:%(ngx_limit_req_zones)s)", client: <HOST>, server: \S*, request: "\S+ \S+ HTTP/\d+\.\d+", host: "\S+"(, referrer: "\S+")?\s*$
# A shorter, much faster and more stable version of the regexp:
failregex = ^\s*\[error\] \d+#\d+: \*\d+ limiting requests, excess: [\d\.]+ by zone "(?:%(ngx_limit_req_zones)s)", client: <HOST>
ignoreregex =
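# A log line of roughly this shape (nginx error_log, illustrative values) is
# what the failregex above is expected to match:
#
# 2015/12/29 14:04:41 [error] 5303#0: *1 limiting requests, excess: 1.000 by zone "lr_zone", client: 192.0.2.1, server: example.com, request: "GET / HTTP/1.1", host: "example.com"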

View File

@ -0,0 +1,16 @@
# Openhab brute force auth filter: /etc/fail2ban/filter.d/openhab.conf:
#
# Blocks IPs that fail to authenticate to openHAB via the web interface or REST API
#
# Matches e.g.
# 12.34.33.22 - - [26/sept./2015:18:04:43 +0200] "GET /openhab.app HTTP/1.1" 401 1382
# 175.18.15.10 - - [02/sept./2015:00:11:31 +0200] "GET /rest/bindings HTTP/1.1" 401 1384
[Definition]
failregex = ^<HOST>\s+-\s+-\s+\[\]\s+"[A-Z]+ .*" 401 \d+\s*$
[Init]
datepattern = %%d/%%b[^/]*/%%Y:%%H:%%M:%%S %%z

View File

@ -16,6 +16,7 @@ failregex = ^%(__prefix_line)sNOQUEUE: reject: RCPT from \S+\[<HOST>\]: 554 5\.7
^%(__prefix_line)sNOQUEUE: reject: RCPT from \S+\[<HOST>\]: 450 4\.7\.1 Client host rejected: cannot find your hostname, (\[\S*\]); from=<\S*> to=<\S+> proto=ESMTP helo=<\S*>$ ^%(__prefix_line)sNOQUEUE: reject: RCPT from \S+\[<HOST>\]: 450 4\.7\.1 Client host rejected: cannot find your hostname, (\[\S*\]); from=<\S*> to=<\S+> proto=ESMTP helo=<\S*>$
^%(__prefix_line)sNOQUEUE: reject: RCPT from \S+\[<HOST>\]: 450 4\.7\.1 : Helo command rejected: Host not found; from=<> to=<> proto=ESMTP helo= *$ ^%(__prefix_line)sNOQUEUE: reject: RCPT from \S+\[<HOST>\]: 450 4\.7\.1 : Helo command rejected: Host not found; from=<> to=<> proto=ESMTP helo= *$
^%(__prefix_line)sNOQUEUE: reject: VRFY from \S+\[<HOST>\]: 550 5\.1\.1 .*$ ^%(__prefix_line)sNOQUEUE: reject: VRFY from \S+\[<HOST>\]: 550 5\.1\.1 .*$
^%(__prefix_line)sNOQUEUE: reject: RCPT from \S+\[<HOST>\]: 450 4\.1\.8 <\S*>: Sender address rejected: Domain not found; from=<\S*> to=<\S+> proto=ESMTP helo=<\S*>$
^%(__prefix_line)simproper command pipelining after \S+ from [^[]*\[<HOST>\]:?$ ^%(__prefix_line)simproper command pipelining after \S+ from [^[]*\[<HOST>\]:?$
ignoreregex = ignoreregex =

View File

@ -27,12 +27,13 @@ failregex = ^%(__prefix_line)s(?:error: PAM: )?[aA]uthentication (?:failure|erro
^%(__prefix_line)sUser .+ from <HOST> not allowed because listed in DenyUsers\s*$ ^%(__prefix_line)sUser .+ from <HOST> not allowed because listed in DenyUsers\s*$
^%(__prefix_line)sUser .+ from <HOST> not allowed because not in any group\s*$ ^%(__prefix_line)sUser .+ from <HOST> not allowed because not in any group\s*$
^%(__prefix_line)srefused connect from \S+ \(<HOST>\)\s*$ ^%(__prefix_line)srefused connect from \S+ \(<HOST>\)\s*$
^%(__prefix_line)sReceived disconnect from <HOST>: 3: \S+: Auth fail$ ^%(__prefix_line)s(?:error: )?Received disconnect from <HOST>: 3: .*: Auth fail(?: \[preauth\])?$
^%(__prefix_line)sUser .+ from <HOST> not allowed because a group is listed in DenyGroups\s*$ ^%(__prefix_line)sUser .+ from <HOST> not allowed because a group is listed in DenyGroups\s*$
^%(__prefix_line)sUser .+ from <HOST> not allowed because none of user's groups are listed in AllowGroups\s*$ ^%(__prefix_line)sUser .+ from <HOST> not allowed because none of user's groups are listed in AllowGroups\s*$
^(?P<__prefix>%(__prefix_line)s)User .+ not allowed because account is locked<SKIPLINES>(?P=__prefix)(?:error: )?Received disconnect from <HOST>: 11: .+ \[preauth\]$ ^(?P<__prefix>%(__prefix_line)s)User .+ not allowed because account is locked<SKIPLINES>(?P=__prefix)(?:error: )?Received disconnect from <HOST>: 11: .+ \[preauth\]$
^(?P<__prefix>%(__prefix_line)s)Disconnecting: Too many authentication failures for .+? \[preauth\]<SKIPLINES>(?P=__prefix)(?:error: )?Connection closed by <HOST> \[preauth\]$ ^(?P<__prefix>%(__prefix_line)s)Disconnecting: Too many authentication failures for .+? \[preauth\]<SKIPLINES>(?P=__prefix)(?:error: )?Connection closed by <HOST> \[preauth\]$
^(?P<__prefix>%(__prefix_line)s)Connection from <HOST> port \d+(?: on \S+ port \d+)?<SKIPLINES>(?P=__prefix)Disconnecting: Too many authentication failures for .+? \[preauth\]$ ^(?P<__prefix>%(__prefix_line)s)Connection from <HOST> port \d+(?: on \S+ port \d+)?<SKIPLINES>(?P=__prefix)Disconnecting: Too many authentication failures for .+? \[preauth\]$
^%(__prefix_line)s(error: )?maximum authentication attempts exceeded for .* from <HOST>(?: port \d*)?(?: ssh\d*)? \[preauth\]$
^%(__prefix_line)spam_unix\(sshd:auth\):\s+authentication failure;\s*logname=\S*\s*uid=\d*\s*euid=\d*\s*tty=\S*\s*ruser=\S*\s*rhost=<HOST>\s.*$ ^%(__prefix_line)spam_unix\(sshd:auth\):\s+authentication failure;\s*logname=\S*\s*uid=\d*\s*euid=\d*\s*tty=\S*\s*ruser=\S*\s*rhost=<HOST>\s.*$
ignoreregex = ignoreregex =

View File

@ -84,7 +84,7 @@ before = paths-debian.conf
# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not # "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not
# ban a host which matches an address in this list. Several addresses can be # ban a host which matches an address in this list. Several addresses can be
# defined using space separator. # defined using space (and/or comma) separator.
ignoreip = 127.0.0.1/8 ignoreip = 127.0.0.1/8
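# With the comma support added here, an override in jail.local could for
# example look like (addresses are illustrative):
# ignoreip = 127.0.0.1/8, 192.0.2.10 198.51.100.0/24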
# External command that will take tagged arguments to ignore, e.g. <ip>,
@ -118,7 +118,7 @@ maxretry = 5
# auto: will try to use the following backends, in order: # auto: will try to use the following backends, in order:
# pyinotify, gamin, polling. # pyinotify, gamin, polling.
# #
# Note: if systemd backend is choses as the default but you enable a jail # Note: if systemd backend is chosen as the default but you enable a jail
# for which logs are present only in its own log files, specify some other # for which logs are present only in its own log files, specify some other
# backend for that jail (e.g. polling) and provide empty value for # backend for that jail (e.g. polling) and provide empty value for
# journalmatch. See https://github.com/fail2ban/fail2ban/issues/959#issuecomment-74901200 # journalmatch. See https://github.com/fail2ban/fail2ban/issues/959#issuecomment-74901200
@ -192,6 +192,7 @@ port = 0:65535
# action_* variables. Can be overridden globally or per # action_* variables. Can be overridden globally or per
# section within jail.local file # section within jail.local file
banaction = iptables-multiport banaction = iptables-multiport
banaction_allports = iptables-allports
# The simplest action to take: ban only # The simplest action to take: ban only
action_ = %(banaction)s[name=%(__name__)s, bantime="%(bantime)s", port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"] action_ = %(banaction)s[name=%(__name__)s, bantime="%(bantime)s", port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
@ -254,6 +255,7 @@ action = %(action_)s
port = ssh port = ssh
logpath = %(sshd_log)s logpath = %(sshd_log)s
backend = %(sshd_backend)s
[sshd-ddos] [sshd-ddos]
@ -262,12 +264,14 @@ logpath = %(sshd_log)s
# in the body. # in the body.
port = ssh port = ssh
logpath = %(sshd_log)s logpath = %(sshd_log)s
backend = %(sshd_backend)s
[dropbear] [dropbear]
port = ssh port = ssh
logpath = %(dropbear_log)s logpath = %(dropbear_log)s
backend = %(dropbear_backend)s
[selinux-ssh] [selinux-ssh]
@ -344,11 +348,25 @@ port = http,https
logpath = %(apache_error_log)s logpath = %(apache_error_log)s
maxretry = 1 maxretry = 1
[openhab-auth]
filter = openhab
action = iptables-allports[name=NoAuthFailures]
logpath = /opt/openhab/logs/request.log
[nginx-http-auth] [nginx-http-auth]
port = http,https port = http,https
logpath = %(nginx_error_log)s logpath = %(nginx_error_log)s
# To use 'nginx-limit-req' jail you should have `ngx_http_limit_req_module`
# and define `limit_req` and `limit_req_zone` as described in nginx documentation
# http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
# or for example see in 'config/filter.d/nginx-limit-req.conf'
[nginx-limit-req]
port = http,https
logpath = %(nginx_error_log)s
[nginx-botsearch] [nginx-botsearch]
port = http,https port = http,https
@ -386,7 +404,7 @@ logpath = %(lighttpd_error_log)s
[roundcube-auth] [roundcube-auth]
port = http,https port = http,https
logpath = logpath = %(roundcube_errors_log)s logpath = %(roundcube_errors_log)s
[openwebmail] [openwebmail]
@ -431,6 +449,7 @@ maxretry = 5
port = http,https port = http,https
logpath = %(syslog_daemon)s logpath = %(syslog_daemon)s
backend = %(syslog_backend)s
[guacamole] [guacamole]
@ -448,12 +467,14 @@ logpath = /var/log/monit
port = 10000 port = 10000
logpath = %(syslog_authpriv)s logpath = %(syslog_authpriv)s
backend = %(syslog_backend)s
[froxlor-auth] [froxlor-auth]
port = http,https port = http,https
logpath = %(syslog_authpriv)s logpath = %(syslog_authpriv)s
backend = %(syslog_backend)s
# #
@ -482,12 +503,14 @@ logpath = /var/log/3proxy.log
port = ftp,ftp-data,ftps,ftps-data port = ftp,ftp-data,ftps,ftps-data
logpath = %(proftpd_log)s logpath = %(proftpd_log)s
backend = %(proftpd_backend)s
[pure-ftpd] [pure-ftpd]
port = ftp,ftp-data,ftps,ftps-data port = ftp,ftp-data,ftps,ftps-data
logpath = %(pureftpd_log)s logpath = %(pureftpd_log)s
backend = %(pureftpd_backend)s
maxretry = 6 maxretry = 6
@ -495,6 +518,7 @@ maxretry = 6
port = ftp,ftp-data,ftps,ftps-data port = ftp,ftp-data,ftps,ftps-data
logpath = %(syslog_daemon)s logpath = %(syslog_daemon)s
backend = %(syslog_backend)s
maxretry = 6 maxretry = 6
@ -502,6 +526,7 @@ maxretry = 6
port = ftp,ftp-data,ftps,ftps-data port = ftp,ftp-data,ftps,ftps-data
logpath = %(wuftpd_log)s logpath = %(wuftpd_log)s
backend = %(wuftpd_backend)s
maxretry = 6 maxretry = 6
@ -529,18 +554,21 @@ logpath = /root/path/to/assp/logs/maillog.txt
port = smtp,465,submission port = smtp,465,submission
logpath = %(syslog_mail)s logpath = %(syslog_mail)s
backend = %(syslog_backend)s
[postfix] [postfix]
port = smtp,465,submission port = smtp,465,submission
logpath = %(postfix_log)s logpath = %(postfix_log)s
backend = %(postfix_backend)s
[postfix-rbl] [postfix-rbl]
port = smtp,465,submission port = smtp,465,submission
logpath = %(syslog_mail)s logpath = %(postfix_log)s
backend = %(postfix_backend)s
maxretry = 1 maxretry = 1
@ -548,12 +576,14 @@ maxretry = 1
port = submission,465,smtp port = submission,465,smtp
logpath = %(syslog_mail)s logpath = %(syslog_mail)s
backend = %(syslog_backend)s
[sendmail-reject] [sendmail-reject]
port = smtp,465,submission port = smtp,465,submission
logpath = %(syslog_mail)s logpath = %(syslog_mail)s
backend = %(syslog_backend)s
[qmail-rbl] [qmail-rbl]
@ -569,12 +599,14 @@ logpath = /service/qmail/log/main/current
port = pop3,pop3s,imap,imaps,submission,465,sieve port = pop3,pop3s,imap,imaps,submission,465,sieve
logpath = %(dovecot_log)s logpath = %(dovecot_log)s
backend = %(dovecot_backend)s
[sieve] [sieve]
port = smtp,465,submission port = smtp,465,submission
logpath = %(dovecot_log)s logpath = %(dovecot_log)s
backend = %(dovecot_backend)s
[solid-pop3d] [solid-pop3d]
@ -610,6 +642,7 @@ logpath = /opt/kerio/mailserver/store/logs/security.log
port = smtp,465,submission,imap3,imaps,pop3,pop3s port = smtp,465,submission,imap3,imaps,pop3,pop3s
logpath = %(syslog_mail)s logpath = %(syslog_mail)s
backend = %(syslog_backend)s
[postfix-sasl] [postfix-sasl]
@ -619,12 +652,14 @@ port = smtp,465,submission,imap3,imaps,pop3,pop3s
# running postfix since it would provide the same log lines at the # running postfix since it would provide the same log lines at the
# "warn" level but overall at the smaller filesize. # "warn" level but overall at the smaller filesize.
logpath = %(postfix_log)s logpath = %(postfix_log)s
backend = %(postfix_backend)s
[perdition] [perdition]
port = imap3,imaps,pop3,pop3s port = imap3,imaps,pop3,pop3s
logpath = %(syslog_mail)s logpath = %(syslog_mail)s
backend = %(syslog_backend)s
[squirrelmail] [squirrelmail]
@ -637,12 +672,14 @@ logpath = /var/lib/squirrelmail/prefs/squirrelmail_access_log
port = imap3,imaps port = imap3,imaps
logpath = %(syslog_mail)s logpath = %(syslog_mail)s
backend = %(syslog_backend)s
[uwimap-auth] [uwimap-auth]
port = imap3,imaps port = imap3,imaps
logpath = %(syslog_mail)s logpath = %(syslog_mail)s
backend = %(syslog_backend)s
# #
@ -724,6 +761,7 @@ maxretry = 10
port = 3306 port = 3306
logpath = %(mysql_log)s logpath = %(mysql_log)s
backend = %(mysql_backend)s
maxretry = 5 maxretry = 5
@ -737,7 +775,7 @@ maxretry = 5
[recidive] [recidive]
logpath = /var/log/fail2ban.log logpath = /var/log/fail2ban.log
banaction = iptables-allports banaction = %(banaction_allports)s
bantime = 1w bantime = 1w
findtime = 1d findtime = 1d
maxretry = 5 maxretry = 5
@ -748,14 +786,16 @@ maxretry = 5
[pam-generic] [pam-generic]
# pam-generic filter can be customized to monitor specific subset of 'tty's # pam-generic filter can be customized to monitor specific subset of 'tty's
banaction = iptables-allports banaction = %(banaction_allports)s
logpath = %(syslog_authpriv)s logpath = %(syslog_authpriv)s
backend = %(syslog_backend)s
[xinetd-fail] [xinetd-fail]
banaction = iptables-multiport-log banaction = iptables-multiport-log
logpath = %(syslog_daemon)s logpath = %(syslog_daemon)s
backend = %(syslog_backend)s
maxretry = 2 maxretry = 2
@ -786,6 +826,7 @@ action = %(banaction)s[name=%(__name__)s-tcp, port="%(tcpport)s", protocol="tcp
enabled = false enabled = false
logpath = %(syslog_daemon)s ; nrpe.cfg may define a different log_facility logpath = %(syslog_daemon)s ; nrpe.cfg may define a different log_facility
backend = %(syslog_backend)s
maxretry = 1 maxretry = 1
@ -794,7 +835,7 @@ maxretry = 1
enabled = false enabled = false
logpath = /opt/sun/comms/messaging64/log/mail.log_current logpath = /opt/sun/comms/messaging64/log/mail.log_current
maxretry = 6 maxretry = 6
banaction = iptables-allports banaction = %(banaction_allports)s
[directadmin] [directadmin]
enabled = false enabled = false
@ -805,3 +846,25 @@ port = 2222
enabled = false enabled = false
logpath = /var/lib/portsentry/portsentry.history logpath = /var/lib/portsentry/portsentry.history
maxretry = 1 maxretry = 1
[pass2allow-ftp]
# this pass2allow example allows FTP traffic after successful HTTP authentication
port = ftp,ftp-data,ftps,ftps-data
# knocking_url variable must be overridden to some secret value in filter.d/apache-pass.local
filter = apache-pass
# access log of the website with HTTP auth
logpath = %(apache_access_log)s
blocktype = RETURN
returntype = DROP
bantime = 1h
maxretry = 1
findtime = 1
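# note: blocktype/returntype above are deliberately inverted compared to a
# normal ban action, so a "banned" (i.e. successfully authenticated) IP is
# RETURNed from the fail2ban chain and thereby allowed, while all other
# sources fall through to the final DROP rule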
[murmur]
# AKA mumble-server
port = 64738
filter = murmur
action = %(banaction)s[name=%(__name__)s-tcp, port="%(port)s", protocol=tcp, chain="%(chain)s", actname=%(banaction)s-tcp]
%(banaction)s[name=%(__name__)s-udp, port="%(port)s", protocol=udp, chain="%(chain)s", actname=%(banaction)s-udp]
logpath = /var/log/mumble-server/mumble-server.log

View File

@ -7,9 +7,13 @@ after = paths-overrides.local
[DEFAULT] [DEFAULT]
default_backend = auto
sshd_log = %(syslog_authpriv)s sshd_log = %(syslog_authpriv)s
sshd_backend = %(default_backend)s
dropbear_log = %(syslog_authpriv)s dropbear_log = %(syslog_authpriv)s
dropbear_backend = %(default_backend)s
# There is no sensible generic defaults for syslog log targets, thus # There is no sensible generic defaults for syslog log targets, thus
# leaving them empty here so that no errors while parsing/interpolating configs # leaving them empty here so that no errors while parsing/interpolating configs
@ -18,15 +22,17 @@ syslog_ftp =
syslog_local0 = syslog_local0 =
syslog_mail_warn = syslog_mail_warn =
syslog_user = syslog_user =
# Set the default syslog backend target to default_backend
syslog_backend = %(default_backend)s
# from /etc/audit/auditd.conf # from /etc/audit/auditd.conf
auditd_log = /var/log/audit/audit.log auditd_log = /var/log/audit/audit.log
exim_main_log = /var/log/exim/mainlog exim_main_log = /var/log/exim/mainlog
nginx_error_log = /var/log/nginx/error.log nginx_error_log = /var/log/nginx/*error.log
nginx_access_log = /var/log/nginx/access.log nginx_access_log = /var/log/nginx/*access.log
lighttpd_error_log = /var/log/lighttpd/error.log lighttpd_error_log = /var/log/lighttpd/error.log
@ -38,14 +44,17 @@ suhosin_log = %(syslog_user)s %(lighttpd_error_log)s
# defaults to ftp or local2 if ftp doesn't exist # defaults to ftp or local2 if ftp doesn't exist
proftpd_log = %(syslog_ftp)s proftpd_log = %(syslog_ftp)s
proftpd_backend = %(default_backend)s
# http://svnweb.freebsd.org/ports/head/ftp/proftpd/files/patch-src_proftpd.8.in?view=markup # http://svnweb.freebsd.org/ports/head/ftp/proftpd/files/patch-src_proftpd.8.in?view=markup
# defaults to ftp but can be overwritten. # defaults to ftp but can be overwritten.
pureftpd_log = %(syslog_ftp)s pureftpd_log = %(syslog_ftp)s
pureftpd_backend = %(default_backend)s
# ftp, daemon and then local7 are tried at configure time however it is overwriteable at configure time # ftp, daemon and then local7 are tried at configure time however it is overwriteable at configure time
# #
wuftpd_log = %(syslog_ftp)s wuftpd_log = %(syslog_ftp)s
wuftpd_backend = %(default_backend)s
# syslog_enable defaults to no. so it defaults to vsftpd_log_file setting of /var/log/vsftpd.log # syslog_enable defaults to no. so it defaults to vsftpd_log_file setting of /var/log/vsftpd.log
# No distro seems to set it to syslog by default # No distro seems to set it to syslog by default
@ -54,13 +63,16 @@ vsftpd_log = /var/log/vsftpd.log
# Technically syslog_facility in main.cf can overwrite but no-one sane does this. # Technically syslog_facility in main.cf can overwrite but no-one sane does this.
postfix_log = %(syslog_mail_warn)s postfix_log = %(syslog_mail_warn)s
postfix_backend = %(default_backend)s
dovecot_log = %(syslog_mail_warn)s dovecot_log = %(syslog_mail_warn)s
dovecot_backend = %(default_backend)s
# Seems to be set at compile time only to LOG_LOCAL0 (src/const.h) at Notice level # Seems to be set at compile time only to LOG_LOCAL0 (src/const.h) at Notice level
solidpop3d_log = %(syslog_local0)s solidpop3d_log = %(syslog_local0)s
mysql_log = %(syslog_daemon)s mysql_log = %(syslog_daemon)s
mysql_backend = %(default_backend)s
roundcube_errors_log = /var/log/roundcube/errors roundcube_errors_log = /var/log/roundcube/errors

View File

@ -37,3 +37,15 @@ exim_main_log = /var/log/exim/main.log
mysql_log = /var/lib/mysql/mysqld.log mysql_log = /var/lib/mysql/mysqld.log
roundcube_errors_log = /var/log/roundcubemail/errors roundcube_errors_log = /var/log/roundcubemail/errors
# These services will log to the journal via syslog, so use the journal by
# default.
syslog_backend = systemd
sshd_backend = systemd
dropbear_backend = systemd
proftpd_backend = systemd
pureftpd_backend = systemd
wuftpd_backend = systemd
postfix_backend = systemd
dovecot_backend = systemd
mysql_backend = systemd

View File

@ -0,0 +1,38 @@
# openSUSE log-file locations
[INCLUDES]
before = paths-common.conf
after = paths-overrides.local
[DEFAULT]
syslog_local0 = /var/log/messages
syslog_mail = /var/log/mail
syslog_mail_warn = %(syslog_mail)s
syslog_authpriv = %(syslog_local0)s
syslog_user = %(syslog_local0)s
syslog_ftp = %(syslog_local0)s
syslog_daemon = %(syslog_local0)s
apache_error_log = /var/log/apache2/*error_log
apache_access_log = /var/log/apache2/*access_log
pureftpd_log = %(syslog_local0)s
exim_main_log = /var/log/exim/main.log
mysql_log = /var/log/mysql/mysqld.log
roundcube_errors_log = /srv/www/roundcubemail/logs/errors
solidpop3d_log = %(syslog_mail)s

View File

@ -10,7 +10,6 @@ fail2ban.server package
fail2ban.server.database fail2ban.server.database
fail2ban.server.datedetector fail2ban.server.datedetector
fail2ban.server.datetemplate fail2ban.server.datetemplate
fail2ban.server.faildata
fail2ban.server.failmanager fail2ban.server.failmanager
fail2ban.server.failregex fail2ban.server.failregex
fail2ban.server.filter fail2ban.server.filter
@ -26,3 +25,4 @@ fail2ban.server package
fail2ban.server.strptime fail2ban.server.strptime
fail2ban.server.ticket fail2ban.server.ticket
fail2ban.server.transmitter fail2ban.server.transmitter
fail2ban.server.utils

View File

@ -1,7 +1,7 @@
fail2ban.server.faildata module fail2ban.server.utils module
=============================== ===============================
.. automodule:: fail2ban.server.faildata .. automodule:: fail2ban.server.utils
:members: :members:
:undoc-members: :undoc-members:
:show-inheritance: :show-inheritance:

View File

@ -285,8 +285,10 @@ class DefinitionInitConfigReader(ConfigReader):
if self.has_section("Init"): if self.has_section("Init"):
for opt in self.options("Init"): for opt in self.options("Init"):
v = self.get("Init", opt)
self._initOpts['known/'+opt] = v
if not opt in self._initOpts: if not opt in self._initOpts:
self._initOpts[opt] = self.get("Init", opt) self._initOpts[opt] = v
def convert(self): def convert(self):
raise NotImplementedError raise NotImplementedError

View File

@ -53,12 +53,14 @@ class Fail2banReader(ConfigReader):
self.__opts = ConfigReader.getOptions(self, "Definition", opts) self.__opts = ConfigReader.getOptions(self, "Definition", opts)
def convert(self): def convert(self):
order = {"loglevel":0, "logtarget":1, "syslogsocket":2, "dbfile":50, "dbpurgeage":51} # Ensure logtarget/level set first so any db errors are captured
# Also dbfile should be set before all other database options.
# So adding order indices into items, to be stripped after sorting, upon return
order = {"syslogsocket":0, "loglevel":1, "logtarget":2,
"dbfile":50, "dbpurgeage":51}
stream = list() stream = list()
for opt in self.__opts: for opt in self.__opts:
if opt in order: if opt in order:
stream.append((order[opt], ["set", opt, self.__opts[opt]])) stream.append((order[opt], ["set", opt, self.__opts[opt]]))
# Ensure logtarget/level set first so any db errors are captured
# and dbfile set before all other database options
return [opt[1] for opt in sorted(stream)] return [opt[1] for opt in sorted(stream)]
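# Illustrative sketch (not part of the commit) of the "attach index, sort,
# strip" pattern used in convert() above:
#
#   order  = {"syslogsocket": 0, "loglevel": 1, "logtarget": 2, "dbfile": 50}
#   opts   = {"logtarget": "x.log", "dbfile": ":memory:", "loglevel": "INFO"}
#   stream = [(order[o], ["set", o, v]) for o, v in opts.items() if o in order]
#   [cmd for _, cmd in sorted(stream)]
#   # -> [['set', 'loglevel', 'INFO'], ['set', 'logtarget', 'x.log'],
#   #     ['set', 'dbfile', ':memory:']]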

597
fail2ban/client/fail2banregex.py Executable file
View File

@ -0,0 +1,597 @@
#!/usr/bin/python
# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-
# vi: set ft=python sts=4 ts=4 sw=4 noet :
#
# This file is part of Fail2Ban.
#
# Fail2Ban is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# Fail2Ban is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Fail2Ban; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
"""
Fail2Ban reads log files that contain password failure reports
and bans the corresponding IP addresses using firewall rules.
This tool can test regular expressions for "fail2ban".
"""
__author__ = "Fail2Ban Developers"
__copyright__ = "Copyright (c) 2004-2008 Cyril Jaquier, 2012-2014 Yaroslav Halchenko"
__license__ = "GPL"
import getopt
import locale
import logging
import os
import shlex
import sys
import time
import urllib
from optparse import OptionParser, Option
from ConfigParser import NoOptionError, NoSectionError, MissingSectionHeaderError
try:
from systemd import journal
from ..server.filtersystemd import FilterSystemd
except ImportError:
journal = None
from ..version import version
from .filterreader import FilterReader
from ..server.filter import Filter, FileContainer
from ..server.failregex import RegexException
from ..helpers import FormatterWithTraceBack, getLogger
# Gets the instance of the logger.
logSys = getLogger("fail2ban")
def debuggexURL(sample, regex):
q = urllib.urlencode({ 're': regex.replace('<HOST>', '(?&.ipv4)'),
'str': sample,
'flavor': 'python' })
return 'http://www.debuggex.com/?' + q
def output(args):
print(args)
def shortstr(s, l=53):
"""Return shortened string
"""
if len(s) > l:
return s[:l-3] + '...'
return s
def pprint_list(l, header=None):
if not len(l):
return
if header:
s = "|- %s\n" % header
else:
s = ''
output( s + "| " + "\n| ".join(l) + '\n`-' )
def journal_lines_gen(myjournal):
while True:
try:
entry = myjournal.get_next()
except OSError:
continue
if not entry:
break
yield FilterSystemd.formatJournalEntry(entry)
def get_opt_parser():
# use module docstring for help output
p = OptionParser(
usage="%s [OPTIONS] <LOG> <REGEX> [IGNOREREGEX]\n" % sys.argv[0] + __doc__
+ """
LOG:
string a string representing a log line
filename path to a log file (/var/log/auth.log)
"systemd-journal" search systemd journal (systemd-python required)
REGEX:
string a string representing a 'failregex'
filename path to a filter file (filter.d/sshd.conf)
IGNOREREGEX:
string a string representing an 'ignoreregex'
filename path to a filter file (filter.d/sshd.conf)
Copyright (c) 2004-2008 Cyril Jaquier, 2008- Fail2Ban Contributors
Copyright of modifications held by their respective authors.
Licensed under the GNU General Public License v2 (GPL).
Written by Cyril Jaquier <cyril.jaquier@fail2ban.org>.
Many contributions by Yaroslav O. Halchenko and Steven Hiscocks.
Report bugs to https://github.com/fail2ban/fail2ban/issues
""",
version="%prog " + version)
p.add_options([
Option("-d", "--datepattern",
help="set custom pattern used to match date/times"),
Option("-e", "--encoding",
help="File encoding. Default: system locale"),
Option("-L", "--maxlines", type=int, default=0,
help="maxlines for multi-line regex"),
Option("-m", "--journalmatch",
help="journalctl style matches overriding filter file. "
"\"systemd-journal\" only"),
Option('-l', "--log-level", type="choice",
dest="log_level",
choices=('heavydebug', 'debug', 'info', 'notice', 'warning', 'error', 'critical'),
default=None,
help="Log level for the Fail2Ban logger to use"),
Option("-v", "--verbose", action='store_true',
help="Be verbose in output"),
Option("-D", "--debuggex", action='store_true',
help="Produce debuggex.com urls for debugging there"),
Option("--print-no-missed", action='store_true',
help="Do not print any missed lines"),
Option("--print-no-ignored", action='store_true',
help="Do not print any ignored lines"),
Option("--print-all-matched", action='store_true',
help="Print all matched lines"),
Option("--print-all-missed", action='store_true',
help="Print all missed lines, no matter how many"),
Option("--print-all-ignored", action='store_true',
help="Print all ignored lines, no matter how many"),
Option("-t", "--log-traceback", action='store_true',
help="Enrich log-messages with compressed tracebacks"),
Option("--full-traceback", action='store_true',
help="Either to make the tracebacks full, not compressed (as by default)"),
])
return p
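# Typical invocations of this tool (illustrative paths and log line):
#   fail2ban-regex /var/log/auth.log /etc/fail2ban/filter.d/sshd.conf
#   fail2ban-regex 'Dec 29 14:04:41 srv sshd[123]: Failed password for root from 192.0.2.1 port 22 ssh2' \
#                  'Failed password for .* from <HOST>'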
class RegexStat(object):
def __init__(self, failregex):
self._stats = 0
self._failregex = failregex
self._ipList = list()
def __str__(self):
return "%s(%r) %d failed: %s" \
% (self.__class__, self._failregex, self._stats, self._ipList)
def inc(self):
self._stats += 1
def getStats(self):
return self._stats
def getFailRegex(self):
return self._failregex
def appendIP(self, value):
self._ipList.append(value)
def getIPList(self):
return self._ipList
class LineStats(object):
"""Just a convenience container for stats
"""
def __init__(self):
self.tested = self.matched = 0
self.matched_lines = []
self.missed = 0
self.missed_lines = []
self.missed_lines_timeextracted = []
self.ignored = 0
self.ignored_lines = []
self.ignored_lines_timeextracted = []
def __str__(self):
return "%(tested)d lines, %(ignored)d ignored, %(matched)d matched, %(missed)d missed" % self
# just for convenient str
def __getitem__(self, key):
return getattr(self, key) if hasattr(self, key) else ''
class Fail2banRegex(object):
def __init__(self, opts):
self._verbose = opts.verbose
self._debuggex = opts.debuggex
self._maxlines = 20
self._print_no_missed = opts.print_no_missed
self._print_no_ignored = opts.print_no_ignored
self._print_all_matched = opts.print_all_matched
self._print_all_missed = opts.print_all_missed
self._print_all_ignored = opts.print_all_ignored
self._maxlines_set = False # so we allow to override maxlines in cmdline
self._datepattern_set = False
self._journalmatch = None
self.share_config=dict()
self._filter = Filter(None)
self._ignoreregex = list()
self._failregex = list()
self._time_elapsed = None
self._line_stats = LineStats()
if opts.maxlines:
self.setMaxLines(opts.maxlines)
if opts.journalmatch is not None:
self.setJournalMatch(opts.journalmatch.split())
if opts.datepattern:
self.setDatePattern(opts.datepattern)
if opts.encoding:
self.encoding = opts.encoding
else:
self.encoding = locale.getpreferredencoding()
def decode_line(self, line):
return FileContainer.decode_line('<LOG>', self.encoding, line)
def encode_line(self, line):
return line.encode(self.encoding, 'ignore')
def setDatePattern(self, pattern):
if not self._datepattern_set:
self._filter.setDatePattern(pattern)
self._datepattern_set = True
if pattern is not None:
output( "Use datepattern : %s" % (
self._filter.getDatePattern()[1], ) )
def setMaxLines(self, v):
if not self._maxlines_set:
self._filter.setMaxLines(int(v))
self._maxlines_set = True
output( "Use maxlines : %d" % self._filter.getMaxLines() )
def setJournalMatch(self, v):
if self._journalmatch is None:
self._journalmatch = v
def readRegex(self, value, regextype):
assert(regextype in ('fail', 'ignore'))
regex = regextype + 'regex'
if os.path.isfile(value) or os.path.isfile(value + '.conf'):
if os.path.basename(os.path.dirname(value)) == 'filter.d':
## within filter.d folder - use standard loading algorithm to load filter completely (with .local etc.):
basedir = os.path.dirname(os.path.dirname(value))
value = os.path.splitext(os.path.basename(value))[0]
output( "Use %11s filter file : %s, basedir: %s" % (regex, value, basedir) )
reader = FilterReader(value, 'fail2ban-regex-jail', {}, share_config=self.share_config, basedir=basedir)
if not reader.read():
output( "ERROR: failed to load filter %s" % value )
return False
else:
## foreign file - readexplicit this file and includes if possible:
output( "Use %11s file : %s" % (regex, value) )
reader = FilterReader(value, 'fail2ban-regex-jail', {}, share_config=self.share_config)
reader.setBaseDir(None)
if not reader.readexplicit():
output( "ERROR: failed to read %s" % value )
return False
reader.getOptions(None)
readercommands = reader.convert()
regex_values = [
RegexStat(m[3])
for m in filter(
lambda x: x[0] == 'set' and x[2] == "add%sregex" % regextype,
readercommands)]
# Read out and set possible value of maxlines
for command in readercommands:
if command[2] == "maxlines":
maxlines = int(command[3])
try:
self.setMaxLines(maxlines)
except ValueError:
output( "ERROR: Invalid value for maxlines (%(maxlines)r) " \
"read from %(value)s" % locals() )
return False
elif command[2] == 'addjournalmatch':
journalmatch = command[3:]
self.setJournalMatch(journalmatch)
elif command[2] == 'datepattern':
datepattern = command[3]
self.setDatePattern(datepattern)
else:
output( "Use %11s line : %s" % (regex, shortstr(value)) )
regex_values = [RegexStat(value)]
setattr(self, "_" + regex, regex_values)
for regex in regex_values:
getattr(
self._filter,
'add%sRegex' % regextype.title())(regex.getFailRegex())
return True
def testIgnoreRegex(self, line):
found = False
try:
ret = self._filter.ignoreLine([(line, "", "")])
if ret is not None:
found = True
regex = self._ignoreregex[ret].inc()
except RegexException, e:
output( e )
return False
return found
def testRegex(self, line, date=None):
orgLineBuffer = self._filter._Filter__lineBuffer
fullBuffer = len(orgLineBuffer) >= self._filter.getMaxLines()
try:
line, ret = self._filter.processLine(line, date, checkAllRegex=True)
for match in ret:
# Append True/False flag depending if line was matched by
# more than one regex
match.append(len(ret)>1)
regex = self._failregex[match[0]]
regex.inc()
regex.appendIP(match)
except RegexException, e:
output( e )
return False
except IndexError:
output( "Sorry, but no <HOST> found in regex" )
return False
for bufLine in orgLineBuffer[int(fullBuffer):]:
if bufLine not in self._filter._Filter__lineBuffer:
try:
self._line_stats.missed_lines.pop(
self._line_stats.missed_lines.index("".join(bufLine)))
self._line_stats.missed_lines_timeextracted.pop(
self._line_stats.missed_lines_timeextracted.index(
"".join(bufLine[::2])))
except ValueError:
pass
else:
self._line_stats.matched += 1
self._line_stats.missed -= 1
return line, ret
def process(self, test_lines):
t0 = time.time()
for line in test_lines:
if isinstance(line, tuple):
line_datetimestripped, ret = self.testRegex(
line[0], line[1])
line = "".join(line[0])
else:
line = line.rstrip('\r\n')
if line.startswith('#') or not line:
# skip comment and empty lines
continue
line_datetimestripped, ret = self.testRegex(line)
is_ignored = self.testIgnoreRegex(line_datetimestripped)
if is_ignored:
self._line_stats.ignored += 1
if not self._print_no_ignored and (self._print_all_ignored or self._line_stats.ignored <= self._maxlines + 1):
self._line_stats.ignored_lines.append(line)
self._line_stats.ignored_lines_timeextracted.append(line_datetimestripped)
if len(ret) > 0:
assert(not is_ignored)
self._line_stats.matched += 1
if self._print_all_matched:
self._line_stats.matched_lines.append(line)
else:
if not is_ignored:
self._line_stats.missed += 1
if not self._print_no_missed and (self._print_all_missed or self._line_stats.missed <= self._maxlines + 1):
self._line_stats.missed_lines.append(line)
self._line_stats.missed_lines_timeextracted.append(line_datetimestripped)
self._line_stats.tested += 1
self._time_elapsed = time.time() - t0
def printLines(self, ltype):
lstats = self._line_stats
assert(self._line_stats.missed == lstats.tested - (lstats.matched + lstats.ignored))
lines = lstats[ltype]
l = lstats[ltype + '_lines']
if lines:
header = "%s line(s):" % (ltype.capitalize(),)
if self._debuggex:
if ltype == 'missed' or ltype == 'matched':
regexlist = self._failregex
else:
regexlist = self._ignoreregex
l = lstats[ltype + '_lines_timeextracted']
if lines < self._maxlines or getattr(self, '_print_all_' + ltype):
ans = [[]]
for arg in [l, regexlist]:
ans = [ x + [y] for x in ans for y in arg ]
b = map(lambda a: a[0] + ' | ' + a[1].getFailRegex() + ' | ' +
debuggexURL(self.encode_line(a[0]), a[1].getFailRegex()), ans)
pprint_list([x.rstrip() for x in b], header)
else:
output( "%s too many to print. Use --print-all-%s " \
"to print all %d lines" % (header, ltype, lines) )
elif lines < self._maxlines or getattr(self, '_print_all_' + ltype):
pprint_list([x.rstrip() for x in l], header)
else:
output( "%s too many to print. Use --print-all-%s " \
"to print all %d lines" % (header, ltype, lines) )
def printStats(self):
output( "" )
output( "Results" )
output( "=======" )
def print_failregexes(title, failregexes):
# Print title
total, out = 0, []
for cnt, failregex in enumerate(failregexes):
match = failregex.getStats()
total += match
if (match or self._verbose):
out.append("%2d) [%d] %s" % (cnt+1, match, failregex.getFailRegex()))
if self._verbose and len(failregex.getIPList()):
for ip in failregex.getIPList():
timeTuple = time.localtime(ip[2])
timeString = time.strftime("%a %b %d %H:%M:%S %Y", timeTuple)
out.append(
" %s %s%s" % (
ip[1],
timeString,
ip[-1] and " (multiple regex matched)" or ""))
output( "\n%s: %d total" % (title, total) )
pprint_list(out, " #) [# of hits] regular expression")
return total
# Print title
total = print_failregexes("Failregex", self._failregex)
_ = print_failregexes("Ignoreregex", self._ignoreregex)
if self._filter.dateDetector is not None:
output( "\nDate template hits:" )
out = []
for template in self._filter.dateDetector.templates:
if self._verbose or template.hits:
out.append("[%d] %s" % (
template.hits, template.name))
pprint_list(out, "[# of hits] date format")
output( "\nLines: %s" % self._line_stats, )
if self._time_elapsed is not None:
output( "[processed in %.2f sec]" % self._time_elapsed, )
output( "" )
if self._print_all_matched:
self.printLines('matched')
if not self._print_no_ignored:
self.printLines('ignored')
if not self._print_no_missed:
self.printLines('missed')
return True
def file_lines_gen(self, hdlr):
for line in hdlr:
yield self.decode_line(line)
def start(self, opts, args):
cmd_log, cmd_regex = args[:2]
if not self.readRegex(cmd_regex, 'fail'):
return False
if len(args) == 3 and not self.readRegex(args[2], 'ignore'):
return False
if os.path.isfile(cmd_log):
try:
hdlr = open(cmd_log, 'rb')
output( "Use log file : %s" % cmd_log )
output( "Use encoding : %s" % self.encoding )
test_lines = self.file_lines_gen(hdlr)
except IOError, e:
output( e )
return False
elif cmd_log == "systemd-journal": # pragma: no cover
if not journal:
output( "Error: systemd library not found. Exiting..." )
return False
myjournal = journal.Reader(converters={'__CURSOR': lambda x: x})
journalmatch = self._journalmatch
self.setDatePattern(None)
if journalmatch:
try:
for element in journalmatch:
if element == "+":
myjournal.add_disjunction()
else:
myjournal.add_match(element)
except ValueError:
output( "Error: Invalid journalmatch: %s" % shortstr(" ".join(journalmatch)) )
return False
output( "Use journal match : %s" % " ".join(journalmatch) )
test_lines = journal_lines_gen(myjournal)
else:
output( "Use single line : %s" % shortstr(cmd_log) )
test_lines = [ cmd_log ]
output( "" )
self.process(test_lines)
if not self.printStats():
return False
return True
def exec_command_line():
parser = get_opt_parser()
(opts, args) = parser.parse_args()
if opts.print_no_missed and opts.print_all_missed:
sys.stderr.write("ERROR: --print-no-missed and --print-all-missed are mutually exclusive.\n\n")
parser.print_help()
sys.exit(-1)
if opts.print_no_ignored and opts.print_all_ignored:
sys.stderr.write("ERROR: --print-no-ignored and --print-all-ignored are mutually exclusive.\n\n")
parser.print_help()
sys.exit(-1)
# We need 2 or 3 parameters
if not len(args) in (2, 3):
sys.stderr.write("ERROR: provide both <LOG> and <REGEX>.\n\n")
parser.print_help()
return False
output( "" )
output( "Running tests" )
output( "=============" )
output( "" )
# TODO: taken from -testcases -- move common functionality somewhere
if opts.log_level is not None:
# so we had explicit settings
logSys.setLevel(getattr(logging, opts.log_level.upper()))
else:
# suppress the logging but it would leave unittests' progress dots
# ticking, unless like with '-l critical' which would be silent
# unless error occurs
logSys.setLevel(getattr(logging, 'CRITICAL'))
# Add the default logging handler
stdout = logging.StreamHandler(sys.stdout)
fmt = 'D: %(message)s'
if opts.log_traceback:
Formatter = FormatterWithTraceBack
fmt = (opts.full_traceback and ' %(tb)s' or ' %(tbc)s') + fmt
else:
Formatter = logging.Formatter
# Custom log format for the verbose tests runs
if opts.verbose:
stdout.setFormatter(Formatter(' %(asctime)-15s %(thread)s' + fmt))
else:
# just prefix with the space
stdout.setFormatter(Formatter(fmt))
logSys.addHandler(stdout)
fail2banRegex = Fail2banRegex(opts)
if not fail2banRegex.start(opts, args):
sys.exit(-1)

View File

@ -33,6 +33,7 @@ from .configreader import ConfigReaderUnshared, ConfigReader
from .filterreader import FilterReader from .filterreader import FilterReader
from .actionreader import ActionReader from .actionreader import ActionReader
from ..helpers import getLogger from ..helpers import getLogger
from ..helpers import splitcommaspace
# Gets the instance of the logger. # Gets the instance of the logger.
logSys = getLogger(__name__) logSys = getLogger(__name__)
@ -215,10 +216,8 @@ class JailReader(ConfigReader):
elif opt == "maxretry": elif opt == "maxretry":
stream.append(["set", self.__name, "maxretry", self.__opts[opt]]) stream.append(["set", self.__name, "maxretry", self.__opts[opt]])
elif opt == "ignoreip": elif opt == "ignoreip":
for ip in self.__opts[opt].split(): for ip in splitcommaspace(self.__opts[opt]):
# Do not send a command if the rule is empty. stream.append(["set", self.__name, "addignoreip", ip])
if ip != '':
stream.append(["set", self.__name, "addignoreip", ip])
elif opt == "findtime": elif opt == "findtime":
stream.append(["set", self.__name, "findtime", self.__opts[opt]]) stream.append(["set", self.__name, "findtime", self.__opts[opt]])
elif opt == "bantime": elif opt == "bantime":

View File

@ -20,11 +20,16 @@
__author__ = "Cyril Jaquier, Arturo 'Buanzo' Busleiman, Yaroslav Halchenko" __author__ = "Cyril Jaquier, Arturo 'Buanzo' Busleiman, Yaroslav Halchenko"
__license__ = "GPL" __license__ = "GPL"
import sys import gc
import os
import traceback
import re
import logging import logging
import os
import re
import sys
import traceback
from threading import Lock
from .server.mytime import MyTime
def formatExceptionInfo(): def formatExceptionInfo():
@ -127,3 +132,58 @@ def excepthook(exctype, value, traceback):
getLogger("fail2ban").critical( getLogger("fail2ban").critical(
"Unhandled exception in Fail2Ban:", exc_info=True) "Unhandled exception in Fail2Ban:", exc_info=True)
return sys.__excepthook__(exctype, value, traceback) return sys.__excepthook__(exctype, value, traceback)
def splitcommaspace(s):
"""Helper to split on any comma or space
Returns empty list if input is empty (or None) and filters
out empty entries
"""
if not s:
return []
return filter(bool, re.split('[ ,]', s))
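For illustration, a minimal standalone sketch of the splitting rule used above on an ignoreip-style value (the function name and the sample addresses are made up; a list comprehension stands in for filter(bool, ...) so the result is a list on Python 3 as well):

	import re

	def splitcommaspace_demo(s):
		# same rule as above: split on any single comma or space, drop empty entries
		if not s:
			return []
		return [v for v in re.split('[ ,]', s) if v]

	print(splitcommaspace_demo("127.0.0.1, 192.0.2.0/24  ::1"))  # ['127.0.0.1', '192.0.2.0/24', '::1']
	print(splitcommaspace_demo(None))                            # []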
class BgService(object):
"""Background servicing
Prevents memory leak on some platforms/python versions,
using forced GC in periodical intervals.
"""
_mutex = Lock()
_instance = None
def __new__(cls):
if not cls._instance:
cls._instance = \
super(BgService, cls).__new__(cls)
return cls._instance
def __init__(self):
self.__serviceTime = -0x7fffffff
self.__periodTime = 30
self.__threshold = 100;
self.__count = self.__threshold;
if hasattr(gc, 'set_threshold'):
gc.set_threshold(0)
gc.disable()
def service(self, force=False, wait=False):
self.__count -= 1
# avoid locking if the next service time has not been reached yet
if not force and (self.__count > 0 or MyTime.time() < self.__serviceTime):
return False
# return immediately if mutex already locked (other thread in servicing):
if not BgService._mutex.acquire(wait):
return False
try:
# check again in lock:
if MyTime.time() < self.__serviceTime:
return False
gc.collect()
self.__serviceTime = MyTime.time() + self.__periodTime
self.__count = self.__threshold
return True
finally:
BgService._mutex.release()
return False
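A hedged usage sketch: any long-running loop may simply call service() on each iteration and let the shared singleton decide when a forced collection is actually due (the import path and the loop below are illustrative assumptions, not part of the patch):

	from fail2ban.helpers import BgService   # assumed location of the class above
	import time

	bgSvc = BgService()        # __new__ always hands back the same instance
	for _ in range(200):
		# ... one unit of real work ...
		bgSvc.service()        # cheap counter decrement; gc.collect() only runs when due
		time.sleep(0.01)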


@ -89,7 +89,7 @@ protocol = [
["set <JAIL> unbanip <IP>", "manually Unban <IP> in <JAIL>"], ["set <JAIL> unbanip <IP>", "manually Unban <IP> in <JAIL>"],
["set <JAIL> maxretry <RETRY>", "sets the number of failures <RETRY> before banning the host for <JAIL>"], ["set <JAIL> maxretry <RETRY>", "sets the number of failures <RETRY> before banning the host for <JAIL>"],
["set <JAIL> maxlines <LINES>", "sets the number of <LINES> to buffer for regex search for <JAIL>"], ["set <JAIL> maxlines <LINES>", "sets the number of <LINES> to buffer for regex search for <JAIL>"],
["set <JAIL> addaction <ACT>[ <PYTHONFILE> <JSONKWARGS>]", "adds a new action named <NAME> for <JAIL>. Optionally for a Python based action, a <PYTHONFILE> and <JSONKWARGS> can be specified, else will be a Command Action"], ["set <JAIL> addaction <ACT>[ <PYTHONFILE> <JSONKWARGS>]", "adds a new action named <ACT> for <JAIL>. Optionally for a Python based action, a <PYTHONFILE> and <JSONKWARGS> can be specified, else will be a Command Action"],
["set <JAIL> delaction <ACT>", "removes the action <ACT> from <JAIL>"], ["set <JAIL> delaction <ACT>", "removes the action <ACT> from <JAIL>"],
["", "COMMAND ACTION CONFIGURATION", ""], ["", "COMMAND ACTION CONFIGURATION", ""],
["set <JAIL> action <ACT> actionstart <CMD>", "sets the start command <CMD> of the action <ACT> for <JAIL>"], ["set <JAIL> action <ACT> actionstart <CMD>", "sets the start command <CMD> of the action <ACT> for <JAIL>"],


@ -32,6 +32,7 @@ import time
from abc import ABCMeta from abc import ABCMeta
from collections import MutableMapping from collections import MutableMapping
from .utils import Utils
from ..helpers import getLogger from ..helpers import getLogger
# Gets the instance of the logger. # Gets the instance of the logger.
@ -40,21 +41,6 @@ logSys = getLogger(__name__)
# Create a lock for running system commands # Create a lock for running system commands
_cmd_lock = threading.Lock() _cmd_lock = threading.Lock()
# Some hints on common abnormal exit codes
_RETCODE_HINTS = {
127: '"Command not found". Make sure that all commands in %(realCmd)r '
'are in the PATH of fail2ban-server process '
'(grep -a PATH= /proc/`pidof -x fail2ban-server`/environ). '
'You may want to start '
'"fail2ban-server -f" separately, initiate it with '
'"fail2ban-client reload" in another shell session and observe if '
'additional informative error messages appear in the terminals.'
}
# Dictionary to lookup signal name from number
signame = dict((num, name)
for name, num in signal.__dict__.iteritems() if name.startswith("SIG"))
class CallingMap(MutableMapping): class CallingMap(MutableMapping):
"""A Mapping type which returns the result of callable values. """A Mapping type which returns the result of callable values.
@ -559,58 +545,5 @@ class CommandAction(ActionBase):
logSys.debug("Nothing to do") logSys.debug("Nothing to do")
return True return True
_cmd_lock.acquire() with _cmd_lock:
try: # Try wrapped within another try needed for python version < 2.5 return Utils.executeCmd(realCmd, timeout, shell=True, output=False)
stdout = tempfile.TemporaryFile(suffix=".stdout", prefix="fai2ban_")
stderr = tempfile.TemporaryFile(suffix=".stderr", prefix="fai2ban_")
try:
popen = subprocess.Popen(
realCmd, stdout=stdout, stderr=stderr, shell=True)
stime = time.time()
retcode = popen.poll()
while time.time() - stime <= timeout and retcode is None:
time.sleep(0.1)
retcode = popen.poll()
if retcode is None:
logSys.error("%s -- timed out after %i seconds." %
(realCmd, timeout))
os.kill(popen.pid, signal.SIGTERM) # Terminate the process
time.sleep(0.1)
retcode = popen.poll()
if retcode is None: # Still going...
os.kill(popen.pid, signal.SIGKILL) # Kill the process
time.sleep(0.1)
retcode = popen.poll()
except OSError, e:
logSys.error("%s -- failed with %s" % (realCmd, e))
finally:
_cmd_lock.release()
std_level = retcode == 0 and logging.DEBUG or logging.ERROR
if std_level >= logSys.getEffectiveLevel():
stdout.seek(0); msg = stdout.read()
if msg != '':
logSys.log(std_level, "%s -- stdout: %r", realCmd, msg)
stderr.seek(0); msg = stderr.read()
if msg != '':
logSys.log(std_level, "%s -- stderr: %r", realCmd, msg)
stdout.close()
stderr.close()
if retcode == 0:
logSys.debug("%s -- returned successfully" % realCmd)
return True
elif retcode is None:
logSys.error("%s -- unable to kill PID %i" % (realCmd, popen.pid))
elif retcode < 0:
logSys.error("%s -- killed with %s" %
(realCmd, signame.get(-retcode, "signal %i" % -retcode)))
else:
msg = _RETCODE_HINTS.get(retcode, None)
logSys.error("%s -- returned %i" % (realCmd, retcode))
if msg:
logSys.info("HINT on %i: %s"
% (retcode, msg % locals()))
return False
raise RuntimeError("Command execution failed: %s" % realCmd)


@ -43,6 +43,7 @@ from .observer import Observers
from .jailthread import JailThread from .jailthread import JailThread
from .action import ActionBase, CommandAction, CallingMap from .action import ActionBase, CommandAction, CallingMap
from .mytime import MyTime from .mytime import MyTime
from .utils import Utils
from ..helpers import getLogger from ..helpers import getLogger
# Gets the instance of the logger. # Gets the instance of the logger.
@ -226,14 +227,11 @@ class Actions(JailThread, Mapping):
self._jail.name, name, e, self._jail.name, name, e,
exc_info=logSys.getEffectiveLevel()<=logging.DEBUG) exc_info=logSys.getEffectiveLevel()<=logging.DEBUG)
while self.active: while self.active:
if not self.idle: if self.idle:
#logSys.debug(self._jail.name + ": action")
ret = self.__checkBan()
if not ret:
self.__checkUnBan()
time.sleep(self.sleeptime)
else:
time.sleep(self.sleeptime) time.sleep(self.sleeptime)
continue
if not Utils.wait_for(self.__checkBan, self.sleeptime):
self.__checkUnBan()
self.__flushBan() self.__flushBan()
actions = self._actions.items() actions = self._actions.items()
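A rough standalone sketch of the Utils.wait_for semantics the loop above now relies on: poll a callable until it returns a truthy value or a timeout elapses, and return the last result. The default interval and the exact return contract here are assumptions, not taken from utils.py:

	import time

	def wait_for_sketch(cond, timeout, interval=0.1):
		# poll cond() until truthy or until 'timeout' seconds have passed
		until = time.time() + timeout
		while True:
			ret = cond()
			if ret or time.time() >= until:
				return ret
			time.sleep(interval)

	# e.g. wait up to one sleeptime for a pending ban; fall through to unban checks otherwise:
	#   if not wait_for_sketch(check_ban, sleeptime):
	#       check_unban()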


@ -27,12 +27,14 @@ __license__ = "GPL"
from pickle import dumps, loads, HIGHEST_PROTOCOL from pickle import dumps, loads, HIGHEST_PROTOCOL
import asynchat import asynchat
import asyncore import asyncore
import errno
import fcntl import fcntl
import os import os
import socket import socket
import sys import sys
import traceback import traceback
from .utils import Utils
from ..protocol import CSPROTO from ..protocol import CSPROTO
from ..helpers import getLogger,formatExceptionInfo from ..helpers import getLogger,formatExceptionInfo
@ -89,6 +91,29 @@ class RequestHandler(asynchat.async_chat):
self.close() self.close()
def loop(active, timeout=None, use_poll=False):
# Use poll instead of asyncore.loop, so the active flag is honoured,
# because the loop timeout is interpreted differently by poll and poll2 (sec vs ms),
# and to prevent sporadic errors like EBADF 'Bad file descriptor' etc. (see gh-161)
if timeout is None:
timeout = Utils.DEFAULT_SLEEP_TIME
poll = asyncore.poll
if use_poll and asyncore.poll2 and hasattr(asyncore.select, 'poll'): # pragma: no cover
logSys.debug('Server listener (select) uses poll')
# poll2 expected a timeout in milliseconds (but poll and loop in seconds):
timeout = float(timeout) / 1000
poll = asyncore.poll2
# Poll as long as active:
while active():
try:
poll(timeout)
except Exception as e: # pragma: no cover
if e.args[0] in (errno.ENOTCONN, errno.EBADF): # (errno.EBADF, 'Bad file descriptor')
logSys.info('Server connection was closed: %s', str(e))
else:
logSys.error('Server connection was closed: %s', str(e))
## ##
# Asynchronous server class. # Asynchronous server class.
# #
@ -102,6 +127,7 @@ class AsyncServer(asyncore.dispatcher):
self.__transmitter = transmitter self.__transmitter = transmitter
self.__sock = "/var/run/fail2ban/fail2ban.sock" self.__sock = "/var/run/fail2ban/fail2ban.sock"
self.__init = False self.__init = False
self.__active = False
## ##
# Returns False as we only read the socket first. # Returns False as we only read the socket first.
@ -129,7 +155,7 @@ class AsyncServer(asyncore.dispatcher):
# @param sock: socket file. # @param sock: socket file.
# @param force: remove the socket file if exists. # @param force: remove the socket file if exists.
def start(self, sock, force): def start(self, sock, force, use_poll=False):
self.__sock = sock self.__sock = sock
# Remove socket # Remove socket
if os.path.exists(sock): if os.path.exists(sock):
@ -149,28 +175,31 @@ class AsyncServer(asyncore.dispatcher):
AsyncServer.__markCloseOnExec(self.socket) AsyncServer.__markCloseOnExec(self.socket)
self.listen(1) self.listen(1)
# Sets the init flag. # Sets the init flag.
self.__init = True self.__init = self.__active = True
# TODO Add try..catch # Event loop as long as active:
# There's a bug report for Python 2.6/3.0 that use_poll=True yields some 2.5 incompatibilities: loop(lambda: self.__active)
if (sys.version_info >= (2, 7) and sys.version_info < (2, 8)) \ # Cleanup all
or (sys.version_info >= (3, 4)): # if python 2.7 ... self.stop()
logSys.debug("Detected Python 2.7. asyncore.loop() using poll")
asyncore.loop(use_poll=True) # workaround for the "Bad file descriptor" issue on Python 2.7, gh-161
else: def close(self):
asyncore.loop(use_poll=False) # fixes the "Unexpected communication problem" issue on Python 2.6 and 3.0 if self.__active:
asyncore.dispatcher.close(self)
# Remove socket (file) only if it was created:
if self.__init and os.path.exists(self.__sock):
logSys.debug("Removed socket file " + self.__sock)
os.remove(self.__sock)
logSys.debug("Socket shutdown")
self.__active = False
## ##
# Stops the communication server. # Stops the communication server.
def stop(self): def stop(self):
if self.__init: self.close()
# Only closes the socket if it was initialized first.
self.close() def isActive(self):
# Remove socket return self.__active
if os.path.exists(self.__sock):
logSys.debug("Removed socket file " + self.__sock)
os.remove(self.__sock)
logSys.debug("Socket shutdown")
## ##
# Marks socket as close-on-exec to avoid leaking file descriptors when # Marks socket as close-on-exec to avoid leaking file descriptors when


@ -247,16 +247,10 @@ class BanManager:
@staticmethod @staticmethod
def createBanTicket(ticket): def createBanTicket(ticket):
ip = ticket.getIP()
# if ticked was restored from database - set time of original restored ticket:
# we should always use correct time to calculate correct end time (ban time is variable now, # we should always use correct time to calculate correct end time (ban time is variable now,
# + possible double banning by restore from database and from log file) # + possible double banning by restore from database and from log file)
lastTime = ticket.getTime() # so use as lastTime always time from ticket.
# if not ticket.getRestored(): return BanTicket(ticket=ticket)
# lastTime = MyTime.time()
banTicket = BanTicket(ip, lastTime, ticket.getMatches())
banTicket.setAttempt(ticket.getAttempt())
return banTicket
## ##
# Add a ban ticket. # Add a ban ticket.
@ -269,19 +263,19 @@ class BanManager:
try: try:
self.__lock.acquire() self.__lock.acquire()
# check already banned # check already banned
for i in self.__banList: for oldticket in self.__banList:
if ticket.getIP() == i.getIP(): if ticket.getIP() == oldticket.getIP():
# if already permanent # if already permanent
btorg, torg = i.getBanTime(self.__banTime), i.getTime() btold, told = oldticket.getBanTime(self.__banTime), oldticket.getTime()
if btorg == -1: if btold == -1:
return False return False
# if given time is less than already banned time # if given time is less than already banned time
btnew, tnew = ticket.getBanTime(self.__banTime), ticket.getTime() btnew, tnew = ticket.getBanTime(self.__banTime), ticket.getTime()
if btnew != -1 and tnew + btnew <= torg + btorg: if btnew != -1 and tnew + btnew <= told + btold:
return False return False
# we have longest ban - set new (increment) ban time # we have longest ban - set new (increment) ban time
i.setTime(tnew) oldticket.setTime(tnew)
i.setBanTime(btnew) oldticket.setBanTime(btnew)
return False return False
# not yet banned - add new # not yet banned - add new
self.__banList.append(ticket) self.__banList.append(ticket)
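A small worked example of the prolongation rule above, with made-up numbers: an existing ban set at t=1000 for 600s ends at 1600, so only a new ticket whose own end time lies beyond 1600 replaces the stored time and ban time:

	told, btold = 1000, 600                      # existing ban ends at 1600
	for tnew, btnew in ((1300, 600), (1300, 200)):
		prolong = btold != -1 and (btnew == -1 or tnew + btnew > told + btold)
		print("%s+%s: %s" % (tnew, btnew, "prolongs" if prolong else "ignored"))
	# (1300, 600) ends at 1900 > 1600 -> prolongs; (1300, 200) ends at 1500 -> ignored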


@ -178,6 +178,7 @@ class Fail2BanDb(object):
def __init__(self, filename, purgeAge=24*60*60, outDatedFactor=3): def __init__(self, filename, purgeAge=24*60*60, outDatedFactor=3):
self.maxEntries = 50
try: try:
self._lock = RLock() self._lock = RLock()
self._db = sqlite3.connect( self._db = sqlite3.connect(
@ -334,7 +335,7 @@ class Fail2BanDb(object):
cur.execute("UPDATE jails SET enabled=0") cur.execute("UPDATE jails SET enabled=0")
@commitandrollback @commitandrollback
def getJailNames(self, cur): def getJailNames(self, cur, enabled=None):
"""Get name of jails in database. """Get name of jails in database.
Currently only used for testing purposes. Currently only used for testing purposes.
@ -344,7 +345,11 @@ class Fail2BanDb(object):
set set
Set of jail names. Set of jail names.
""" """
cur.execute("SELECT name FROM jails") if enabled is None:
cur.execute("SELECT name FROM jails")
else:
cur.execute("SELECT name FROM jails WHERE enabled=%s" %
(int(enabled),))
return set(row[0] for row in cur.fetchmany()) return set(row[0] for row in cur.fetchmany())
@commitandrollback @commitandrollback
@ -450,8 +455,7 @@ class Fail2BanDb(object):
cur.execute( cur.execute(
"INSERT INTO bans(jail, ip, timeofban, bantime, bancount, data) VALUES(?, ?, ?, ?, ?, ?)", "INSERT INTO bans(jail, ip, timeofban, bantime, bancount, data) VALUES(?, ?, ?, ?, ?, ?)",
(jail.name, ticket.getIP(), int(round(ticket.getTime())), ticket.getBanTime(jail.actions.getBanTime()), ticket.getBanCount(), (jail.name, ticket.getIP(), int(round(ticket.getTime())), ticket.getBanTime(jail.actions.getBanTime()), ticket.getBanCount(),
{"matches": ticket.getMatches(), ticket.getData()))
"failures": ticket.getAttempt()}))
cur.execute( cur.execute(
"INSERT OR REPLACE INTO bips(ip, jail, timeofban, bantime, bancount, data) VALUES(?, ?, ?, ?, ?, ?)", "INSERT OR REPLACE INTO bips(ip, jail, timeofban, bantime, bancount, data) VALUES(?, ?, ?, ?, ?, ?)",
(ticket.getIP(), jail.name, int(round(ticket.getTime())), ticket.getBanTime(jail.actions.getBanTime()), ticket.getBanCount(), (ticket.getIP(), jail.name, int(round(ticket.getTime())), ticket.getBanTime(jail.actions.getBanTime()), ticket.getBanCount(),
@ -491,7 +495,7 @@ class Fail2BanDb(object):
if ip is not None: if ip is not None:
query += " AND ip=?" query += " AND ip=?"
queryArgs.append(ip) queryArgs.append(ip)
query += " ORDER BY ip, timeofban" query += " ORDER BY ip, timeofban desc"
return cur.execute(query, queryArgs) return cur.execute(query, queryArgs)
@ -517,8 +521,8 @@ class Fail2BanDb(object):
tickets = [] tickets = []
for ip, timeofban, data in self._getBans(**kwargs): for ip, timeofban, data in self._getBans(**kwargs):
#TODO: Implement data parts once arbitrary match keys completed #TODO: Implement data parts once arbitrary match keys completed
tickets.append(FailTicket(ip, timeofban, data.get('matches'))) tickets.append(FailTicket(ip, timeofban))
tickets[-1].setAttempt(data.get('failures', 1)) tickets[-1].setData(data)
return tickets return tickets
def getBansMerged(self, ip=None, jail=None, bantime=None): def getBansMerged(self, ip=None, jail=None, bantime=None):
@ -560,6 +564,7 @@ class Fail2BanDb(object):
prev_banip = results[0][0] prev_banip = results[0][0]
matches = [] matches = []
failures = 0 failures = 0
tickdata = {}
for banip, timeofban, data in results: for banip, timeofban, data in results:
#TODO: Implement data parts once arbitrary match keys completed #TODO: Implement data parts once arbitrary match keys completed
if banip != prev_banip: if banip != prev_banip:
@ -570,11 +575,21 @@ class Fail2BanDb(object):
prev_banip = banip prev_banip = banip
matches = [] matches = []
failures = 0 failures = 0
matches.extend(data.get('matches', [])) tickdata = {}
m = data.get('matches', [])
# pre-insert "maxadd" enries (because tickets are ordered desc by time)
maxadd = self.maxEntries - len(matches)
if maxadd > 0:
if len(m) <= maxadd:
matches = m + matches
else:
matches = m[-maxadd:] + matches
failures += data.get('failures', 1) failures += data.get('failures', 1)
tickdata.update(data.get('data', {}))
prev_timeofban = timeofban prev_timeofban = timeofban
ticket = FailTicket(banip, prev_timeofban, matches) ticket = FailTicket(banip, prev_timeofban, matches)
ticket.setAttempt(failures) ticket.setAttempt(failures)
ticket.setData(**tickdata)
tickets.append(ticket) tickets.append(ticket)
if cacheKey: if cacheKey:
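A toy illustration of the truncation above, assuming maxEntries were 5: the rows arrive newest first and each older row is prepended, so the merged list stays chronological and the most recent entries win once the cap is hit:

	maxEntries = 5
	matches = []
	for m in (['new1', 'new2'], ['mid1', 'mid2', 'mid3'], ['old1', 'old2']):  # newest row first
		maxadd = maxEntries - len(matches)
		if maxadd > 0:
			matches = (m if len(m) <= maxadd else m[-maxadd:]) + matches
	print(matches)   # ['mid1', 'mid2', 'mid3', 'new1', 'new2'] - capped at 5, oldest row dropped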


@ -21,6 +21,8 @@ __author__ = "Cyril Jaquier and Fail2Ban Contributors"
__copyright__ = "Copyright (c) 2004 Cyril Jaquier" __copyright__ = "Copyright (c) 2004 Cyril Jaquier"
__license__ = "GPL" __license__ = "GPL"
import time
from threading import Lock from threading import Lock
from .datetemplate import DatePatternRegex, DateTai64n, DateEpoch from .datetemplate import DatePatternRegex, DateTai64n, DateEpoch
@ -32,6 +34,82 @@ logSys = getLogger(__name__)
logLevel = 6 logLevel = 6
class DateDetectorCache(object):
def __init__(self):
self.__lock = Lock()
self.__templates = list()
@property
def templates(self):
"""List of template instances managed by the detector.
"""
with self.__lock:
if self.__templates:
return self.__templates
self._addDefaultTemplate()
return self.__templates
def _cacheTemplate(self, template):
"""Cache Fail2Ban's default template.
"""
if isinstance(template, str):
template = DatePatternRegex(template)
self.__templates.append(template)
def _addDefaultTemplate(self):
"""Add resp. cache Fail2Ban's default set of date templates.
"""
# asctime with optional day, subsecond and/or year:
# Sun Jan 23 21:59:59.011 2005
self._cacheTemplate("(?:%a )?%b %d %H:%M:%S(?:\.%f)?(?: %Y)?")
# asctime with optional day, subsecond and/or year coming after day
# http://bugs.debian.org/798923
# Sun Jan 23 2005 21:59:59.011
self._cacheTemplate("(?:%a )?%b %d %Y %H:%M:%S(?:\.%f)?")
# simple date, optional subsecond (proftpd):
# 2005-01-23 21:59:59
# simple date: 2005/01/23 21:59:59
# custom for syslog-ng 2006.12.21 06:43:20
self._cacheTemplate("%Y(?P<_sep>[-/.])%m(?P=_sep)%d %H:%M:%S(?:,%f)?")
# simple date too (from x11vnc): 23/01/2005 21:59:59
# and with optional year given by 2 digits: 23/01/05 21:59:59
# (See http://bugs.debian.org/537610)
# 17-07-2008 17:23:25
self._cacheTemplate("%d(?P<_sep>[-/])%m(?P=_sep)(?:%Y|%y) %H:%M:%S")
# Apache format optional time zone:
# [31/Oct/2006:09:22:55 -0000]
# 26-Jul-2007 15:20:52
self._cacheTemplate("%d(?P<_sep>[-/])%b(?P=_sep)%Y[ :]?%H:%M:%S(?:\.%f)?(?: %z)?")
# CPanel 05/20/2008:01:57:39
self._cacheTemplate("%m/%d/%Y:%H:%M:%S")
# named 26-Jul-2007 15:20:52.252
# roundcube 26-Jul-2007 15:20:52 +0200
# 01-27-2012 16:22:44.252
# subseconds explicit to avoid possible %m<->%d confusion
# with previous
self._cacheTemplate("%m-%d-%Y %H:%M:%S\.%f")
# TAI64N
template = DateTai64n()
template.name = "TAI64N"
self._cacheTemplate(template)
# Epoch
template = DateEpoch()
template.name = "Epoch"
self._cacheTemplate(template)
# ISO 8601
self._cacheTemplate("%Y-%m-%d[T ]%H:%M:%S(?:\.%f)?(?:%z)?")
# Only time information in the log
self._cacheTemplate("^%H:%M:%S")
# <09/16/08@05:03:30>
self._cacheTemplate("^<%m/%d/%y@%H:%M:%S>")
# MySQL: 130322 11:46:11
self._cacheTemplate("^%y%m%d ?%H:%M:%S")
# Apache Tomcat
self._cacheTemplate("%b %d, %Y %I:%M:%S %p")
# ASSP: Apr-27-13 02:33:06
self._cacheTemplate("^%b-%d-%y %H:%M:%S")
class DateDetector(object): class DateDetector(object):
"""Manages one or more date templates to find a date within a log line. """Manages one or more date templates to find a date within a log line.
@ -39,11 +117,14 @@ class DateDetector(object):
---------- ----------
templates templates
""" """
_defCache = DateDetectorCache()
def __init__(self): def __init__(self):
self.__lock = Lock() self.__lock = Lock()
self.__templates = list() self.__templates = list()
self.__known_names = set() self.__known_names = set()
# how long a template may stay unused before it is considered stale (currently 300 == 5m):
self.__unusedTime = 300
def _appendTemplate(self, template): def _appendTemplate(self, template):
name = template.name name = template.name
@ -75,55 +156,9 @@ class DateDetector(object):
def addDefaultTemplate(self): def addDefaultTemplate(self):
"""Add Fail2Ban's default set of date templates. """Add Fail2Ban's default set of date templates.
""" """
self.__lock.acquire() with self.__lock:
try: for template in DateDetector._defCache.templates:
# asctime with optional day, subsecond and/or year: self._appendTemplate(template)
# Sun Jan 23 21:59:59.011 2005
self.appendTemplate("(?:%a )?%b %d %H:%M:%S(?:\.%f)?(?: %Y)?")
# simple date, optional subsecond (proftpd):
# 2005-01-23 21:59:59
# simple date: 2005/01/23 21:59:59
# custom for syslog-ng 2006.12.21 06:43:20
self.appendTemplate("%Y(?P<_sep>[-/.])%m(?P=_sep)%d %H:%M:%S(?:,%f)?")
# simple date too (from x11vnc): 23/01/2005 21:59:59
# and with optional year given by 2 digits: 23/01/05 21:59:59
# (See http://bugs.debian.org/537610)
# 17-07-2008 17:23:25
self.appendTemplate("%d(?P<_sep>[-/])%m(?P=_sep)(?:%Y|%y) %H:%M:%S")
# Apache format optional time zone:
# [31/Oct/2006:09:22:55 -0000]
# 26-Jul-2007 15:20:52
self.appendTemplate("%d(?P<_sep>[-/])%b(?P=_sep)%Y[ :]?%H:%M:%S(?:\.%f)?(?: %z)?")
# CPanel 05/20/2008:01:57:39
self.appendTemplate("%m/%d/%Y:%H:%M:%S")
# named 26-Jul-2007 15:20:52.252
# roundcube 26-Jul-2007 15:20:52 +0200
# 01-27-2012 16:22:44.252
# subseconds explicit to avoid possible %m<->%d confusion
# with previous
self.appendTemplate("%m-%d-%Y %H:%M:%S\.%f")
# TAI64N
template = DateTai64n()
template.name = "TAI64N"
self.appendTemplate(template)
# Epoch
template = DateEpoch()
template.name = "Epoch"
self.appendTemplate(template)
# ISO 8601
self.appendTemplate("%Y-%m-%d[T ]%H:%M:%S(?:\.%f)?(?:%z)?")
# Only time information in the log
self.appendTemplate("^%H:%M:%S")
# <09/16/08@05:03:30>
self.appendTemplate("^<%m/%d/%y@%H:%M:%S>")
# MySQL: 130322 11:46:11
self.appendTemplate("^%y%m%d ?%H:%M:%S")
# Apache Tomcat
self.appendTemplate("%b %d, %Y %I:%M:%S %p")
# ASSP: Apr-27-13 02:33:06
self.appendTemplate("^%b-%d-%y %H:%M:%S")
finally:
self.__lock.release()
@property @property
def templates(self): def templates(self):
@ -149,22 +184,29 @@ class DateDetector(object):
The regex match returned from the first successfully matched The regex match returned from the first successfully matched
template. template.
""" """
self.__lock.acquire() i = 0
try: with self.__lock:
for template in self.__templates: for template in self.__templates:
match = template.matchDate(line) match = template.matchDate(line)
if not match is None: if not match is None:
if logSys.getEffectiveLevel() <= logLevel: if logSys.getEffectiveLevel() <= logLevel:
logSys.log(logLevel, "Matched time template %s", template.name) logSys.log(logLevel, "Matched time template %s", template.name)
template.hits += 1 template.hits += 1
template.lastUsed = time.time()
# if not the first template - try to reorder it (bubble up); the list is no longer kept fully sorted:
if i:
self._reorderTemplate(i)
# return tuple with match and template reference used for parsing:
return (match, template) return (match, template)
return (None, None) i += 1
finally: # not found:
self.__lock.release() return (None, None)
def getTime(self, line): def getTime(self, line):
"""Attempts to return the date on a log line using templates. """Attempts to return the date on a log line using templates.
Obsolete: Use "getTime2" instead.
This uses the templates' `getDate` method in an attempt to find This uses the templates' `getDate` method in an attempt to find
a date. a date.
@ -179,8 +221,7 @@ class DateDetector(object):
The Unix timestamp returned from the first successfully matched The Unix timestamp returned from the first successfully matched
template or None if not found. template or None if not found.
""" """
self.__lock.acquire() with self.__lock:
try:
for template in self.__templates: for template in self.__templates:
try: try:
date = template.getDate(line) date = template.getDate(line)
@ -193,8 +234,6 @@ class DateDetector(object):
except ValueError: # pragma: no cover except ValueError: # pragma: no cover
pass pass
return None return None
finally:
self.__lock.release()
def getTime2(self, line, timeMatch = None): def getTime2(self, line, timeMatch = None):
"""Attempts to return the date on a log line using given template. """Attempts to return the date on a log line using given template.
@ -228,21 +267,28 @@ class DateDetector(object):
return date return date
return self.getTime(line) return self.getTime(line)
def sortTemplate(self): def _reorderTemplate(self, num):
"""Sort the date templates by number of hits """Reorder template (bubble up) in template list if hits grows enough.
Sort the template lists using the hits score. This method is not Parameters
called in this object and thus should be called from time to time. ----------
This ensures the most commonly matched templates are checked first, num : int
improving performance of matchTime and getTime. Index of template should be moved.
""" """
self.__lock.acquire() if num:
try: templates = self.__templates
if logSys.getEffectiveLevel() <= logLevel: template = templates[num]
logSys.log(logLevel, "Sorting the template list") ## current hits and time the template was long unused:
self.__templates.sort(key=lambda x: x.hits, reverse=True) untime = template.lastUsed - self.__unusedTime
t = self.__templates[0] hits = template.hits
if logSys.getEffectiveLevel() <= logLevel: ## don't move too often (multiline logs resp. log's with different date patterns),
logSys.log(logLevel, "Winning template: %s with %d hits", t.name, t.hits) ## if template not used too long, replace it also :
finally: if hits > templates[num-1].hits + 5 or templates[num-1].lastUsed < untime:
self.__lock.release() ## try to move faster (half of part to current template):
pos = num // 2
## if not larger - move slow (exact 1 position):
if hits <= templates[pos].hits or templates[pos].lastUsed < untime:
pos = num-1
templates[pos], templates[num] = template, templates[pos]
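A simplified standalone model of the bubble-up step, with made-up hit counts and the lastUsed staleness check left out: a template that clearly out-hits its predecessor first tries to jump half way up and otherwise just swaps with its neighbour:

	def reorder_sketch(hits, num):
		# hits: per-template hit counters in the order the detector currently tries them
		if num and hits[num] > hits[num - 1] + 5:
			pos = num // 2                       # try to jump half way up
			if hits[num] <= hits[pos]:           # not better than that slot - swap neighbours only
				pos = num - 1
			hits[pos], hits[num] = hits[num], hits[pos]
		return hits

	print(reorder_sketch([50, 40, 30, 8, 7, 6, 20], 6))   # -> [50, 40, 30, 20, 7, 6, 8]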


@ -50,6 +50,7 @@ class DateTemplate(object):
self._regex = "" self._regex = ""
self._cRegex = None self._cRegex = None
self.hits = 0 self.hits = 0
self.lastUsed = 0
@property @property
def name(self): def name(self):
@ -85,7 +86,6 @@ class DateTemplate(object):
if (wordBegin and not re.search(r'^\^', regex)): if (wordBegin and not re.search(r'^\^', regex)):
regex = r'\b' + regex regex = r'\b' + regex
self._regex = regex self._regex = regex
self._cRegex = re.compile(regex, re.UNICODE | re.IGNORECASE)
regex = property(getRegex, setRegex, doc= regex = property(getRegex, setRegex, doc=
"""Regex used to search for date. """Regex used to search for date.
@ -94,6 +94,8 @@ class DateTemplate(object):
def matchDate(self, line): def matchDate(self, line):
"""Check if regex for date matches on a log line. """Check if regex for date matches on a log line.
""" """
if not self._cRegex:
self._cRegex = re.compile(self.regex, re.UNICODE | re.IGNORECASE)
dateMatch = self._cRegex.search(line) dateMatch = self._cRegex.search(line)
return dateMatch return dateMatch
@ -170,7 +172,7 @@ class DatePatternRegex(DateTemplate):
regex regex
pattern pattern
""" """
_patternRE = r"%%(%%|[%s])" % "".join(timeRE.keys()) _patternRE = re.compile(r"%%(%%|[%s])" % "".join(timeRE.keys()))
_patternName = { _patternName = {
'a': "DAY", 'A': "DAYNAME", 'b': "MON", 'B': "MONTH", 'd': "Day", 'a': "DAY", 'A': "DAYNAME", 'b': "MON", 'B': "MONTH", 'd': "Day",
'H': "24hour", 'I': "12hour", 'j': "Yearday", 'm': "Month", 'H': "24hour", 'I': "12hour", 'j': "Yearday", 'm': "Month",
@ -201,10 +203,9 @@ class DatePatternRegex(DateTemplate):
@pattern.setter @pattern.setter
def pattern(self, pattern): def pattern(self, pattern):
self._pattern = pattern self._pattern = pattern
self._name = re.sub( fmt = self._patternRE.sub(r'%(\1)s', pattern)
self._patternRE, r'%(\1)s', pattern) % self._patternName self._name = fmt % self._patternName
super(DatePatternRegex, self).setRegex( super(DatePatternRegex, self).setRegex(fmt % timeRE)
re.sub(self._patternRE, r'%(\1)s', pattern) % timeRE)
def setRegex(self, value): def setRegex(self, value):
raise NotImplementedError("Regex derived from pattern") raise NotImplementedError("Regex derived from pattern")
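A standalone illustration of the substitution trick used by the pattern setter: strftime-style directives become %(key)s placeholders once, and the same intermediate string is then expanded twice, into a readable name and into the matching regex. The tiny mapping tables below are made up, not fail2ban's real timeRE:

	import re

	pattern   = "%Y-%m-%d %H:%M"
	patternRE = re.compile(r"%(%|[YmdHM])")
	fmt = patternRE.sub(r'%(\1)s', pattern)       # -> '%(Y)s-%(m)s-%(d)s %(H)s:%(M)s'
	names  = {'Y': "Year", 'm': "Month", 'd': "Day", 'H': "24hour", 'M': "Minute", '%': "%"}
	regexp = {'Y': r"\d{4}", 'm': r"\d{2}", 'd': r"\d{2}", 'H': r"\d{2}", 'M': r"\d{2}", '%': "%"}
	print(fmt % names)    # Year-Month-Day 24hour:Minute
	print(fmt % regexp)   # \d{4}-\d{2}-\d{2} \d{2}:\d{2}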


@ -1,71 +0,0 @@
# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-
# vi: set ft=python sts=4 ts=4 sw=4 noet :
# This file is part of Fail2Ban.
#
# Fail2Ban is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# Fail2Ban is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Fail2Ban; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
# Author: Cyril Jaquier
#
__author__ = "Cyril Jaquier"
__copyright__ = "Copyright (c) 2004 Cyril Jaquier"
__license__ = "GPL"
from ..helpers import getLogger
# Gets the instance of the logger.
logSys = getLogger(__name__)
class FailData:
def __init__(self):
self.__retry = 0
self.__lastTime = 0
self.__lastReset = 0
self.__matches = []
def setRetry(self, value):
self.__retry = value
# keep only the last matches or reset entirely
# Explicit if/else for compatibility with Python 2.4
if value:
self.__matches = self.__matches[-min(len(self.__matches, value)):]
else:
self.__matches = []
def getRetry(self):
return self.__retry
def getMatches(self):
return self.__matches
def inc(self, matches=None, count=1):
self.__retry += count
self.__matches += matches or []
def setLastTime(self, value):
if value > self.__lastTime:
self.__lastTime = value
def getLastTime(self):
return self.__lastTime
def getLastReset(self):
return self.__lastReset
def setLastReset(self, value):
self.__lastReset = value


@ -27,12 +27,12 @@ __license__ = "GPL"
from threading import Lock from threading import Lock
import logging import logging
from .faildata import FailData
from .ticket import FailTicket from .ticket import FailTicket
from ..helpers import getLogger from ..helpers import getLogger, BgService
# Gets the instance of the logger. # Gets the instance of the logger.
logSys = getLogger(__name__) logSys = getLogger(__name__)
logLevel = logging.DEBUG
class FailManager: class FailManager:
@ -43,120 +43,123 @@ class FailManager:
self.__maxRetry = 3 self.__maxRetry = 3
self.__maxTime = 600 self.__maxTime = 600
self.__failTotal = 0 self.__failTotal = 0
self.maxEntries = 50
self.__bgSvc = BgService()
def setFailTotal(self, value): def setFailTotal(self, value):
try: with self.__lock:
self.__lock.acquire()
self.__failTotal = value self.__failTotal = value
finally:
self.__lock.release()
def getFailTotal(self): def getFailTotal(self):
try: with self.__lock:
self.__lock.acquire()
return self.__failTotal return self.__failTotal
finally:
self.__lock.release()
def setMaxRetry(self, value): def setMaxRetry(self, value):
try: self.__maxRetry = value
self.__lock.acquire()
self.__maxRetry = value
finally:
self.__lock.release()
def getMaxRetry(self): def getMaxRetry(self):
try: return self.__maxRetry
self.__lock.acquire()
return self.__maxRetry
finally:
self.__lock.release()
def setMaxTime(self, value): def setMaxTime(self, value):
try: self.__maxTime = value
self.__lock.acquire()
self.__maxTime = value
finally:
self.__lock.release()
def getMaxTime(self): def getMaxTime(self):
try: return self.__maxTime
self.__lock.acquire()
return self.__maxTime
finally:
self.__lock.release()
def addFailure(self, ticket, count=1, observed=False): def addFailure(self, ticket, count=1, observed=False):
try: attempts = 1
self.__lock.acquire() with self.__lock:
ip = ticket.getIP() ip = ticket.getIP()
unixTime = ticket.getTime() try:
matches = ticket.getMatches()
if ip in self.__failList:
fData = self.__failList[ip] fData = self.__failList[ip]
# if the same object - the same matches but +1 attempt:
if fData is ticket:
matches = None
attempt = 1
else:
# will be incremented / extended (be sure we have at least +1 attempt):
matches = ticket.getMatches()
attempt = ticket.getAttempt()
if attempt <= 0:
attempt += 1
unixTime = ticket.getTime()
fData.setLastTime(unixTime)
if fData.getLastReset() < unixTime - self.__maxTime: if fData.getLastReset() < unixTime - self.__maxTime:
fData.setLastReset(unixTime) fData.setLastReset(unixTime)
fData.setRetry(0) fData.setRetry(0)
fData.inc(matches, count) fData.inc(matches, attempt, count)
fData.setLastTime(unixTime) # truncate to maxEntries:
else: matches = fData.getMatches()
## not found - already banned - prevent to add failure if comes from observer: if len(matches) > self.maxEntries:
fData.setMatches(matches[-self.maxEntries:])
except KeyError:
# not found (already banned) - do not add the failure if it comes from the observer:
if observed: if observed:
return return
fData = FailData() # if already FailTicket - add it direct, otherwise create (using copy all ticket data):
fData.inc(matches, count) if isinstance(ticket, FailTicket):
fData.setLastReset(unixTime) fData = ticket;
fData.setLastTime(unixTime) else:
fData = FailTicket(ticket=ticket)
if count > ticket.getAttempt():
fData.setRetry(count)
self.__failList[ip] = fData self.__failList[ip] = fData
attempts = fData.getRetry()
self.__failTotal += 1 self.__failTotal += 1
if logSys.getEffectiveLevel() <= logging.DEBUG: if logSys.getEffectiveLevel() <= logLevel:
# yoh: Since composing this list might be somewhat time consuming # yoh: Since composing this list might be somewhat time consuming
# in case of having many active failures, it should be ran only # in case of having many active failures, it should be ran only
# if debug level is "low" enough # if debug level is "low" enough
failures_summary = ', '.join(['%s:%d' % (k, v.getRetry()) failures_summary = ', '.join(['%s:%d' % (k, v.getRetry())
for k,v in self.__failList.iteritems()]) for k,v in self.__failList.iteritems()])
logSys.debug("Total # of detected failures: %d. Current failures from %d IPs (IP:count): %s" logSys.log(logLevel, "Total # of detected failures: %d. Current failures from %d IPs (IP:count): %s"
% (self.__failTotal, len(self.__failList), failures_summary)) % (self.__failTotal, len(self.__failList), failures_summary))
finally:
self.__lock.release() self.__bgSvc.service()
return attempts
def size(self): def size(self):
try: with self.__lock:
self.__lock.acquire()
return len(self.__failList) return len(self.__failList)
finally:
self.__lock.release()
def cleanup(self, time): def cleanup(self, time):
try: with self.__lock:
self.__lock.acquire() todelete = [ip for ip,item in self.__failList.iteritems() \
tmp = self.__failList.copy() if item.getLastTime() + self.__maxTime <= time]
for item in tmp: if len(todelete) == len(self.__failList):
if tmp[item].getLastTime() < time - self.__maxTime: # remove all:
self.__delFailure(item) self.__failList = dict()
finally: elif not len(todelete):
self.__lock.release() # nothing:
return
if len(todelete) / 2.0 <= len(self.__failList) / 3.0:
# no more than 2/3 of the entries would be removed - delete the individual items:
for ip in todelete:
del self.__failList[ip]
else:
# create new dictionary without items to be deleted:
self.__failList = dict((ip,item) for ip,item in self.__failList.iteritems() \
if item.getLastTime() + self.__maxTime > time)
self.__bgSvc.service()
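Toy numbers for the 2/3 rule above: with 30 tracked IPs, up to 20 stale entries are deleted one by one, while removing more than that rebuilds the dictionary in one pass instead:

	tracked, todelete = 30, 12
	print(todelete / 2.0 <= tracked / 3.0)   # True  -> delete the individual items
	tracked, todelete = 30, 24
	print(todelete / 2.0 <= tracked / 3.0)   # False -> build a fresh dict without them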
def __delFailure(self, ip): def delFailure(self, ip):
if ip in self.__failList: with self.__lock:
del self.__failList[ip] try:
del self.__failList[ip]
except KeyError:
pass
def toBan(self, ip=None): def toBan(self, ip=None):
try: with self.__lock:
self.__lock.acquire()
for ip in ([ip] if ip != None and ip in self.__failList else self.__failList): for ip in ([ip] if ip != None and ip in self.__failList else self.__failList):
data = self.__failList[ip] data = self.__failList[ip]
if data.getRetry() >= self.__maxRetry: if data.getRetry() >= self.__maxRetry:
del self.__failList[ip] del self.__failList[ip]
# Create a FailTicket from BanData return data
failTicket = FailTicket(ip, data.getLastTime(), data.getMatches()) self.__bgSvc.service()
failTicket.setAttempt(data.getRetry()) raise FailManagerEmpty
return failTicket
raise FailManagerEmpty
finally:
self.__lock.release()
class FailManagerEmpty(Exception): class FailManagerEmpty(Exception):


@ -428,11 +428,11 @@ class Filter(JailThread):
ip = element[1] ip = element[1]
unixTime = element[2] unixTime = element[2]
lines = element[3] lines = element[3]
logSys.debug("Processing line with time:%s and ip:%s" logSys.debug("Processing line with time:%s and ip:%s",
% (unixTime, ip)) unixTime, ip)
if unixTime < MyTime.time() - self.getFindTime(): if unixTime < MyTime.time() - self.getFindTime():
logSys.debug("Ignore line since time %s < %s - %s" logSys.debug("Ignore line since time %s < %s - %s",
% (unixTime, MyTime.time(), self.getFindTime())) unixTime, MyTime.time(), self.getFindTime())
break break
if self.inIgnoreIPList(ip, log_ignore=True): if self.inIgnoreIPList(ip, log_ignore=True):
continue continue
@ -501,6 +501,7 @@ class Filter(JailThread):
self.__lineBuffer = ( self.__lineBuffer = (
self.__lineBuffer + [tupleLine[:3]])[-self.__lineBufferSize:] self.__lineBuffer + [tupleLine[:3]])[-self.__lineBufferSize:]
logSys.log(5, "Looking for failregex match of %r" % self.__lineBuffer)
# Iterates over all the regular expressions. # Iterates over all the regular expressions.
for failRegexIndex, failRegex in enumerate(self.__failRegex): for failRegexIndex, failRegex in enumerate(self.__failRegex):
@ -562,7 +563,8 @@ class FileFilter(Filter):
def __init__(self, jail, **kwargs): def __init__(self, jail, **kwargs):
Filter.__init__(self, jail, **kwargs) Filter.__init__(self, jail, **kwargs)
## The log file path. ## The log file path.
self.__logPath = [] self.__logs = dict()
self.__autoSeek = dict()
self.setLogEncoding("auto") self.setLogEncoding("auto")
## ##
@ -570,18 +572,23 @@ class FileFilter(Filter):
# #
# @param path log file path # @param path log file path
def addLogPath(self, path, tail=False): def addLogPath(self, path, tail=False, autoSeek=True):
if self.containsLogPath(path): if path in self.__logs:
logSys.error(path + " already exists") logSys.error(path + " already exists")
else: else:
container = FileContainer(path, self.getLogEncoding(), tail) log = FileContainer(path, self.getLogEncoding(), tail)
db = self.jail.database db = self.jail.database
if db is not None: if db is not None:
lastpos = db.addLog(self.jail, container) lastpos = db.addLog(self.jail, log)
if lastpos and not tail: if lastpos and not tail:
container.setPos(lastpos) log.setPos(lastpos)
self.__logPath.append(container) self.__logs[path] = log
logSys.info("Added logfile = %s (pos = %s, hash = %s)" , path, container.getPos(), container.getHash()) logSys.info("Added logfile = %s (pos = %s, hash = %s)" , path, log.getPos(), log.getHash())
if autoSeek:
# if default, seek to "current time" - "find time":
if isinstance(autoSeek, bool):
autoSeek = MyTime.time() - self.getFindTime()
self.__autoSeek[path] = autoSeek
self._addLogPath(path) # backend specific self._addLogPath(path) # backend specific
def _addLogPath(self, path): def _addLogPath(self, path):
@ -595,15 +602,16 @@ class FileFilter(Filter):
# @param path the log file to delete # @param path the log file to delete
def delLogPath(self, path): def delLogPath(self, path):
for log in self.__logPath: try:
if log.getFileName() == path: log = self.__logs.pop(path)
self.__logPath.remove(log) except KeyError:
db = self.jail.database return
if db is not None: db = self.jail.database
db.updateLog(self.jail, log) if db is not None:
logSys.info("Removed logfile = %s" % path) db.updateLog(self.jail, log)
self._delLogPath(path) logSys.info("Removed logfile = %s" % path)
return self._delLogPath(path)
return
def _delLogPath(self, path): # pragma: no cover - overwritten function def _delLogPath(self, path): # pragma: no cover - overwritten function
# nothing to do by default # nothing to do by default
@ -611,12 +619,28 @@ class FileFilter(Filter):
pass pass
## ##
# Get the log file path # Get the log file names
# #
# @return log file path # @return log paths
def getLogPath(self): def getLogPaths(self):
return self.__logPath return self.__logs.keys()
##
# Get the log containers
#
# @return log containers
def getLogs(self):
return self.__logs.values()
##
# Get the count of log containers
#
# @return count of log containers
def getLogCount(self):
return len(self.__logs)
## ##
# Check whether path is already monitored. # Check whether path is already monitored.
@ -625,10 +649,7 @@ class FileFilter(Filter):
# @return True if the path is already monitored else False # @return True if the path is already monitored else False
def containsLogPath(self, path): def containsLogPath(self, path):
for log in self.__logPath: return path in self.__logs
if log.getFileName() == path:
return True
return False
## ##
# Set the log file encoding # Set the log file encoding
@ -639,7 +660,7 @@ class FileFilter(Filter):
if encoding.lower() == "auto": if encoding.lower() == "auto":
encoding = locale.getpreferredencoding() encoding = locale.getpreferredencoding()
codecs.lookup(encoding) # Raise LookupError if invalid codec codecs.lookup(encoding) # Raise LookupError if invalid codec
for log in self.getLogPath(): for log in self.__logs.itervalues():
log.setEncoding(encoding) log.setEncoding(encoding)
self.__encoding = encoding self.__encoding = encoding
logSys.info("Set jail log file encoding to %s" % encoding) logSys.info("Set jail log file encoding to %s" % encoding)
@ -652,11 +673,8 @@ class FileFilter(Filter):
def getLogEncoding(self): def getLogEncoding(self):
return self.__encoding return self.__encoding
def getFileContainer(self, path): def getLog(self, path):
for log in self.__logPath: return self.__logs.get(path, None)
if log.getFileName() == path:
return log
return None
## ##
# Gets all the failure in the log file. # Gets all the failure in the log file.
@ -666,13 +684,13 @@ class FileFilter(Filter):
# is created and is added to the FailManager. # is created and is added to the FailManager.
def getFailures(self, filename, startTime=None): def getFailures(self, filename, startTime=None):
container = self.getFileContainer(filename) log = self.getLog(filename)
if container is None: if log is None:
logSys.error("Unable to get failures in " + filename) logSys.error("Unable to get failures in " + filename)
return False return False
# Try to open log file. # Try to open log file.
try: try:
has_content = container.open() has_content = log.open()
# see http://python.org/dev/peps/pep-3151/ # see http://python.org/dev/peps/pep-3151/
except IOError, e: except IOError, e:
logSys.error("Unable to open %s" % filename) logSys.error("Unable to open %s" % filename)
@ -687,13 +705,17 @@ class FileFilter(Filter):
logSys.exception(e) logSys.exception(e)
return False return False
# prevent completely read of big files first time (after start of service), initial seek to start time using half-interval search algorithm: # seek to find time for first usage only (prevent performance decline with polling of big files)
if container.getPos() == 0 and startTime is not None: if self.__autoSeek.get(filename):
startTime = self.__autoSeek[filename]
del self.__autoSeek[filename]
# prevent completely read of big files first time (after start of service),
# initial seek to start time using half-interval search algorithm:
try: try:
# startTime = MyTime.time() - self.getFindTime() self.seekToTime(log, startTime)
self.seekToTime(container, startTime)
except Exception, e: # pragma: no cover except Exception, e: # pragma: no cover
logSys.error("Error during seek to start time in \"%s\"", filename) logSys.error("Error during seek to start time in \"%s\"", filename)
raise
logSys.exception(e) logSys.exception(e)
return False return False
@ -703,92 +725,109 @@ class FileFilter(Filter):
# start reading tested to be empty container -- race condition # start reading tested to be empty container -- race condition
# might occur leading at least to tests failures. # might occur leading at least to tests failures.
while has_content: while has_content:
line = container.readline() line = log.readline()
if not line or not self.active: if not line or not self.active:
# The jail reached the bottom or has been stopped # The jail reached the bottom or has been stopped
break break
self.processLineAndAdd(line) self.processLineAndAdd(line)
container.close() log.close()
db = self.jail.database db = self.jail.database
if db is not None: if db is not None:
db.updateLog(self.jail, container) db.updateLog(self.jail, log)
return True return True
## ##
# Seeks to line with date (search using half-interval search algorithm), to start polling from it # Seeks to line with date (search using half-interval search algorithm), to start polling from it
# #
def seekToTime(self, container, date): def seekToTime(self, container, date, accuracy=3):
fs = container.getFileSize() fs = container.getFileSize()
if logSys.getEffectiveLevel() <= logging.DEBUG: if logSys.getEffectiveLevel() <= logging.DEBUG:
logSys.debug("Seek to find time %s (%s), file size %s", date, logSys.debug("Seek to find time %s (%s), file size %s", date,
datetime.datetime.fromtimestamp(date).strftime("%Y-%m-%d %H:%M:%S"), fs) datetime.datetime.fromtimestamp(date).strftime("%Y-%m-%d %H:%M:%S"), fs)
date -= 0.009 minp = container.getPos()
minp = 0
maxp = fs maxp = fs
lastpos = 0 tryPos = minp
lastFew = 0 lastPos = -1
lastTime = None foundPos = 0
foundTime = None
cntr = 0 cntr = 0
unixTime = None unixTime = None
lasti = 0 movecntr = accuracy
movecntr = 1
while maxp > minp: while maxp > minp:
i = int(minp + (maxp - minp) / 2) if tryPos is None:
pos = container.seek(i) pos = int(minp + (maxp - minp) / 2)
else:
pos, tryPos = tryPos, None
# because container seek will go to start of next line (minus CRLF):
pos = max(0, pos-2)
seekpos = pos = container.seek(pos)
cntr += 1 cntr += 1
# within next 5 lines try to find any legal datetime: # within next 5 lines try to find any legal datetime:
lncntr = 5; lncntr = 5;
dateTimeMatch = None dateTimeMatch = None
llen = 0 nextp = None
if lastpos == pos:
i = pos
while True: while True:
line = container.readline() line = container.readline()
if not line: if not line:
break break
llen += len(line) (timeMatch, template) = self.dateDetector.matchTime(line)
l = line.rstrip('\r\n')
(timeMatch, template) = self.dateDetector.matchTime(l)
if timeMatch: if timeMatch:
dateTimeMatch = self.dateDetector.getTime2(l[timeMatch.start():timeMatch.end()], (timeMatch, template)) dateTimeMatch = self.dateDetector.getTime2(line[timeMatch.start():timeMatch.end()], (timeMatch, template))
else:
nextp = container.tell()
if nextp > maxp:
pos = seekpos
break
pos = nextp
if not dateTimeMatch and lncntr: if not dateTimeMatch and lncntr:
lncntr -= 1 lncntr -= 1
continue continue
break break
# not found at this step - stop searching
if dateTimeMatch:
unixTime = dateTimeMatch[0]
if unixTime >= date:
if foundTime is None or unixTime <= foundTime:
foundPos = pos
foundTime = unixTime
if pos == maxp:
pos = seekpos
if pos < maxp:
maxp = pos
else:
if foundTime is None or unixTime >= foundTime:
foundPos = pos
foundTime = unixTime
if nextp is None:
nextp = container.tell()
pos = nextp
if pos > minp:
minp = pos
# if we can't move (position not changed) # if we can't move (position not changed)
if i + llen == lasti: if pos == lastPos:
movecntr -= 1 movecntr -= 1
if movecntr <= 0: if movecntr <= 0:
break break
lasti = i + llen; # we have found large area without any date mached
# not found at this step - stop searching # or end of search - try min position (because can be end of previous line):
if not dateTimeMatch: if minp != lastPos:
lastPos = tryPos = minp
continue
break break
unixTime = dateTimeMatch[0] lastPos = pos
if unixTime >= date: # always use smallest pos, that could be found:
maxp = i foundPos = container.seek(minp, False)
else: container.setPos(foundPos)
minp = i + llen
lastFew = pos;
lastTime = unixTime
lastpos = pos
# if found position have a time greater as given - use smallest time we have found
if unixTime is None or unixTime > date:
unixTime = lastTime
lastpos = container.seek(lastFew, False)
else:
lastpos = container.seek(lastpos, False)
container.setPos(lastpos)
if logSys.getEffectiveLevel() <= logging.DEBUG: if logSys.getEffectiveLevel() <= logging.DEBUG:
logSys.debug("Position %s from %s, found time %s (%s) within %s seeks", lastpos, fs, unixTime, logSys.debug("Position %s from %s, found time %s (%s) within %s seeks", lastPos, fs, foundTime,
(datetime.datetime.fromtimestamp(unixTime).strftime("%Y-%m-%d %H:%M:%S") if unixTime is not None else ''), cntr) (datetime.datetime.fromtimestamp(foundTime).strftime("%Y-%m-%d %H:%M:%S") if foundTime is not None else ''), cntr)
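A simplified standalone sketch of the half-interval idea above, assuming a file of ascending "<epoch> <message>" lines (real date detection, the accuracy retries and the container abstraction are left out): binary-search a byte offset whose next full line is still older than the wanted time, then scan forward to the first line that is not:

	def seek_to_time_sketch(path, date):
		def ts(line):
			# epoch timestamp at the start of a line, e.g. b"1450789047 Failed password ..."
			return float(line.split(None, 1)[0].decode('ascii'))
		with open(path, 'rb') as f:
			f.seek(0, 2)
			lo, hi = 0, f.tell()
			# phase 1: binary search an offset whose next full line is still older than 'date'
			while lo < hi:
				mid = (lo + hi) // 2
				f.seek(mid)
				f.readline()                     # move to the start of the next full line
				line = f.readline()
				if line and ts(line) < date:
					lo = mid + 1                 # lines around here are still too old
				else:
					hi = mid                     # at/after 'date' (or EOF) - go left
			# phase 2: scan forward to the first line that is not older than 'date'
			f.seek(lo)
			if lo:
				f.readline()                     # skip the (possibly partial) line we landed in
			while True:
				pos = f.tell()
				line = f.readline()
				if not line or ts(line) >= date:
					return pos                   # byte offset to start reading from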
def status(self, flavor="basic"): def status(self, flavor="basic"):
"""Status of Filter plus files being monitored. """Status of Filter plus files being monitored.
""" """
ret = super(FileFilter, self).status(flavor=flavor) ret = super(FileFilter, self).status(flavor=flavor)
path = [m.getFileName() for m in self.getLogPath()] path = self.__logs.keys()
ret.append(("File list", path)) ret.append(("File list", path))
return ret return ret
@ -885,33 +924,41 @@ class FileContainer:
self.__handler.seek(self.__pos) self.__handler.seek(self.__pos)
return True return True
def seek(self, offs, endLine = True): def seek(self, offs, endLine=True):
h = self.__handler h = self.__handler
# seek to given position # seek to given position
h.seek(offs, 0) h.seek(offs, 0)
# goto end of next line # goto end of next line
if endLine: if offs and endLine:
h.readline() h.readline()
# get current real position # get current real position
return h.tell() return h.tell()
def readline(self): def tell(self):
if self.__handler is None: # get current real position
return "" return self.__handler.tell()
line = self.__handler.readline()
@staticmethod
def decode_line(filename, enc, line):
try: try:
line = line.decode(self.getEncoding(), 'strict') line = line.decode(enc, 'strict')
except UnicodeDecodeError: except UnicodeDecodeError:
logSys.warning( logSys.warning(
"Error decoding line from '%s' with '%s'." "Error decoding line from '%s' with '%s'."
" Consider setting logencoding=utf-8 (or another appropriate" " Consider setting logencoding=utf-8 (or another appropriate"
" encoding) for this jail. Continuing" " encoding) for this jail. Continuing"
" to process line ignoring invalid characters: %r" % " to process line ignoring invalid characters: %r" %
(self.getFileName(), self.getEncoding(), line)) (filename, enc, line))
# decode with replacing error chars: # decode with replacing error chars:
line = line.decode(self.getEncoding(), 'replace') line = line.decode(enc, 'replace')
return line return line
def readline(self):
if self.__handler is None:
return ""
return FileContainer.decode_line(
self.getFileName(), self.getEncoding(), self.__handler.readline())
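A standalone illustration of the decode fallback above: strict decoding fails on a broken byte, so the same line is decoded again with errors replaced (the sample bytes are made up):

	raw = b"Failed password for invalid user adm\xefn from 192.0.2.1\n"
	try:
		line = raw.decode('utf-8', 'strict')
	except UnicodeDecodeError:
		line = raw.decode('utf-8', 'replace')    # invalid byte becomes U+FFFD, parsing continues
	print(line)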
def close(self): def close(self):
if not self.__handler is None: if not self.__handler is None:
# Saves the last position. # Saves the last position.
@ -947,31 +994,50 @@ class JournalFilter(Filter): # pragma: systemd no cover
import socket import socket
import struct import struct
from .utils import Utils
class DNSUtils: class DNSUtils:
IP_CRE = re.compile("^(?:\d{1,3}\.){3}\d{1,3}$") IP_CRE = re.compile("^(?:\d{1,3}\.){3}\d{1,3}$")
# todo: make configurable the expired time and max count of cache entries:
CACHE_dnsToIp = Utils.Cache(maxCount=1000, maxTime=5*60)
CACHE_ipToName = Utils.Cache(maxCount=1000, maxTime=5*60)
@staticmethod @staticmethod
def dnsToIp(dns): def dnsToIp(dns):
""" Convert a DNS into an IP address using the Python socket module. """ Convert a DNS into an IP address using the Python socket module.
Thanks to Kevin Drapel. Thanks to Kevin Drapel.
""" """
# cache, also prevent long wait during retrieving of ip for wrong dns or lazy dns-system:
v = DNSUtils.CACHE_dnsToIp.get(dns)
if v is not None:
return v
# retrieve ip (todo: use AF_INET6 for IPv6)
try: try:
return set(socket.gethostbyname_ex(dns)[2]) v = set([i[4][0] for i in socket.getaddrinfo(dns, None, socket.AF_INET, 0, socket.IPPROTO_TCP)])
except socket.error, e: except socket.error, e:
logSys.warning("Unable to find a corresponding IP address for %s: %s" # todo: make configurable the expired time of cache entry:
% (dns, e)) logSys.warning("Unable to find a corresponding IP address for %s: %s", dns, e)
return list() v = list()
DNSUtils.CACHE_dnsToIp.set(dns, v)
return v
@staticmethod @staticmethod
def ipToName(ip): def ipToName(ip):
# cache, also prevent long wait during retrieving of name for wrong addresses, lazy dns:
v = DNSUtils.CACHE_ipToName.get(ip, ())
if v != ():
return v
# retrieve name
try: try:
return socket.gethostbyaddr(ip)[0] v = socket.gethostbyaddr(ip)[0]
except socket.error, e: except socket.error, e:
logSys.debug("Unable to find a name for the IP %s: %s" % (ip, e)) logSys.debug("Unable to find a name for the IP %s: %s", ip, e)
return None v = None
DNSUtils.CACHE_ipToName.set(ip, v)
return v
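A hedged sketch of the cache-guard pattern above, reusing the Utils.Cache calls visible in this hunk (constructor, get and set); the helper name and the import path are assumptions:

	from fail2ban.server.utils import Utils    # assumed module path
	import socket

	resolve_cache = Utils.Cache(maxCount=1000, maxTime=5 * 60)

	def cached_dns_to_ip(dns):
		ips = resolve_cache.get(dns)
		if ips is not None:
			return ips                          # hit - skip the potentially slow resolver
		try:
			ips = set(i[4][0] for i in socket.getaddrinfo(
				dns, None, socket.AF_INET, 0, socket.IPPROTO_TCP))
		except socket.error:
			ips = list()                        # negative results are cached as well
		resolve_cache.set(dns, ips)
		return ips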
@staticmethod @staticmethod
def searchIP(text): def searchIP(text):


@ -31,6 +31,7 @@ import gamin
from .failmanager import FailManagerEmpty from .failmanager import FailManagerEmpty
from .filter import FileFilter from .filter import FileFilter
from .mytime import MyTime from .mytime import MyTime
from .utils import Utils
from ..helpers import getLogger from ..helpers import getLogger
# Gets the instance of the logger. # Gets the instance of the logger.
@ -83,7 +84,6 @@ class FilterGamin(FileFilter):
self.jail.putFailTicket(ticket) self.jail.putFailTicket(ticket)
except FailManagerEmpty: except FailManagerEmpty:
self.failManager.cleanup(MyTime.time()) self.failManager.cleanup(MyTime.time())
self.dateDetector.sortTemplate()
self.__modified = False self.__modified = False
## ##
@ -102,6 +102,15 @@ class FilterGamin(FileFilter):
def _delLogPath(self, path): def _delLogPath(self, path):
self.monitor.stop_watch(path) self.monitor.stop_watch(path)
def _handleEvents(self):
ret = False
mon = self.monitor
while mon and mon.event_pending():
mon.handle_events()
mon = self.monitor
ret = True
return ret
## ##
# Main loop. # Main loop.
# #
@ -112,12 +121,10 @@ class FilterGamin(FileFilter):
def run(self): def run(self):
# Gamin needs a loop to collect and dispatch events # Gamin needs a loop to collect and dispatch events
while self.active: while self.active:
if not self.idle: if self.idle:
# We cannot block here because we want to be able to time.sleep(self.sleeptime)
# exit. continue
if self.monitor.event_pending(): Utils.wait_for(self._handleEvents, self.sleeptime)
self.monitor.handle_events()
time.sleep(self.sleeptime)
logSys.debug(self.jail.name + ": filter terminated") logSys.debug(self.jail.name + ": filter terminated")
return True return True
@ -129,6 +136,6 @@ class FilterGamin(FileFilter):
# Desallocates the resources used by Gamin. # Desallocates the resources used by Gamin.
def __cleanup(self): def __cleanup(self):
for path in self.getLogPath(): for filename in self.getLogPaths():
self.monitor.stop_watch(path.getFileName()) self.monitor.stop_watch(filename)
del self.monitor self.monitor = None
@ -31,6 +31,7 @@ from .failmanager import FailManagerEmpty
from .filter import FileFilter from .filter import FileFilter
from .mytime import MyTime from .mytime import MyTime
from ..helpers import getLogger from ..helpers import getLogger
from ..server.utils import Utils
# Gets the instance of the logger. # Gets the instance of the logger.
logSys = getLogger(__name__) logSys = getLogger(__name__)
@ -78,6 +79,15 @@ class FilterPoll(FileFilter):
del self.__prevStats[path] del self.__prevStats[path]
del self.__file404Cnt[path] del self.__file404Cnt[path]
##
# Get a modified log path at once
#
def getModified(self, modlst):
for filename in self.getLogPaths():
if self.isModified(filename):
modlst.append(filename)
return modlst
## ##
# Main loop. # Main loop.
# #
@ -89,31 +99,27 @@ class FilterPoll(FileFilter):
while self.active: while self.active:
if logSys.getEffectiveLevel() <= 6: if logSys.getEffectiveLevel() <= 6:
logSys.log(6, "Woke up idle=%s with %d files monitored", logSys.log(6, "Woke up idle=%s with %d files monitored",
self.idle, len(self.getLogPath())) self.idle, self.getLogCount())
if not self.idle: if self.idle:
# Get file modification if not Utils.wait_for(lambda: not self.idle,
for container in self.getLogPath(): self.sleeptime * 100, self.sleeptime
filename = container.getFileName() ):
if self.isModified(filename): continue
# set start time as now - find time for first usage only (prevent performance bug with polling of big files) # Get file modification
self.getFailures(filename, modlst = []
(MyTime.time() - self.getFindTime()) if not self.__initial.get(filename) else None Utils.wait_for(lambda: self.getModified(modlst), self.sleeptime)
) for filename in modlst:
self.__initial[filename] = True self.getFailures(filename)
self.__modified = True self.__modified = True
if self.__modified: if self.__modified:
try: try:
while True: while True:
ticket = self.failManager.toBan() ticket = self.failManager.toBan()
self.jail.putFailTicket(ticket) self.jail.putFailTicket(ticket)
except FailManagerEmpty: except FailManagerEmpty:
self.failManager.cleanup(MyTime.time()) self.failManager.cleanup(MyTime.time())
self.dateDetector.sortTemplate() self.__modified = False
self.__modified = False
time.sleep(self.sleeptime)
else:
time.sleep(self.sleeptime)
logSys.debug( logSys.debug(
(self.jail is not None and self.jail.name or "jailless") + (self.jail is not None and self.jail.name or "jailless") +
" filter terminated") " filter terminated")
@ -129,7 +135,7 @@ class FilterPoll(FileFilter):
try: try:
logStats = os.stat(filename) logStats = os.stat(filename)
stats = logStats.st_mtime, logStats.st_ino, logStats.st_size stats = logStats.st_mtime, logStats.st_ino, logStats.st_size
pstats = self.__prevStats[filename] pstats = self.__prevStats.get(filename, ())
self.__file404Cnt[filename] = 0 self.__file404Cnt[filename] = 0
if logSys.getEffectiveLevel() <= 7: if logSys.getEffectiveLevel() <= 7:
# we do not want to waste time on strftime etc if not necessary # we do not want to waste time on strftime etc if not necessary
@ -139,10 +145,9 @@ class FilterPoll(FileFilter):
# os.system("stat %s | grep Modify" % filename) # os.system("stat %s | grep Modify" % filename)
if pstats == stats: if pstats == stats:
return False return False
else: logSys.debug("%s has been modified", filename)
logSys.debug("%s has been modified", filename) self.__prevStats[filename] = stats
self.__prevStats[filename] = stats return True
return True
except OSError, e: except OSError, e:
logSys.error("Unable to get stat on %s because of: %s" logSys.error("Unable to get stat on %s because of: %s"
% (filename, e)) % (filename, e))
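isModified treats a log file as changed whenever its (mtime, inode, size) triple differs from the one recorded on the previous poll, and the .get(filename, ()) default makes a file seen for the first time always count as modified. A self-contained sketch of that check (helper name and temporary file are illustrative):

import os
import tempfile

_prev_stats = {}

def is_modified(filename):
    st = os.stat(filename)
    stats = (st.st_mtime, st.st_ino, st.st_size)
    if _prev_stats.get(filename, ()) == stats:
        return False
    _prev_stats[filename] = stats
    return True

fd, name = tempfile.mkstemp()
os.close(fd)
print(is_modified(name))    # True  - first time the file is seen
print(is_modified(name))    # False - stats unchanged since the last poll
with open(name, "a") as f:
    f.write("one more log line\n")
print(is_modified(name))    # True  - size (and mtime) differ now
os.unlink(name)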
@ -108,7 +108,6 @@ class FilterPyinotify(FileFilter):
self.jail.putFailTicket(ticket) self.jail.putFailTicket(ticket)
except FailManagerEmpty: except FailManagerEmpty:
self.failManager.cleanup(MyTime.time()) self.failManager.cleanup(MyTime.time())
self.dateDetector.sortTemplate()
self.__modified = False self.__modified = False
def _addFileWatcher(self, path): def _addFileWatcher(self, path):
@ -36,7 +36,7 @@ from .mytime import MyTime
logSys = getLogger(__name__) logSys = getLogger(__name__)
class Jail: class Jail(object):
"""Fail2Ban jail, which manages a filter and associated actions. """Fail2Ban jail, which manages a filter and associated actions.
The class handles the initialisation of a filter, and actions. It's The class handles the initialisation of a filter, and actions. It's
@ -299,7 +299,7 @@ class Jail:
self.actions.join() self.actions.join()
logSys.info("Jail '%s' stopped" % self.name) logSys.info("Jail '%s' stopped" % self.name)
def is_alive(self): def isAlive(self):
"""Check jail "is_alive" by checking filter and actions threads. """Check jail "isAlive" by checking filter and actions threads.
""" """
return self.filter.is_alive() or self.actions.is_alive() return self.filter.isAlive() or self.actions.isAlive()
@ -28,6 +28,7 @@ import sys
from threading import Thread from threading import Thread
from abc import abstractmethod from abc import abstractmethod
from .utils import Utils
from ..helpers import excepthook from ..helpers import excepthook
@ -55,7 +56,7 @@ class JailThread(Thread):
## Control the idle state of the thread. ## Control the idle state of the thread.
self.idle = False self.idle = False
## The time the thread sleeps in the loop. ## The time the thread sleeps in the loop.
self.sleeptime = 1 self.sleeptime = Utils.DEFAULT_SLEEP_TIME
# excepthook workaround for threads, derived from: # excepthook workaround for threads, derived from:
# http://bugs.python.org/issue1230540#msg91244 # http://bugs.python.org/issue1230540#msg91244
@ -98,7 +98,6 @@ class MyTime:
else: else:
return time.localtime(MyTime.myTime) return time.localtime(MyTime.myTime)
@staticmethod @staticmethod
def str2seconds(val): def str2seconds(val):
"""Wraps string expression like "1h 2m 3s" into number contains seconds (3723). """Wraps string expression like "1h 2m 3s" into number contains seconds (3723).
@ -45,7 +45,7 @@ logSys = getLogger(__name__)
try: try:
from .database import Fail2BanDb from .database import Fail2BanDb
except ImportError: except ImportError: # pragma: no cover
# Dont print error here, as database may not even be used # Dont print error here, as database may not even be used
Fail2BanDb = None Fail2BanDb = None
@ -68,10 +68,10 @@ class Server:
'FreeBSD': '/var/run/log', 'FreeBSD': '/var/run/log',
'Linux': '/dev/log', 'Linux': '/dev/log',
} }
self.setSyslogSocket("auto")
# Set logging level # Set logging level
self.setLogLevel("INFO") self.setLogLevel("INFO")
self.setLogTarget("STDOUT") self.setLogTarget("STDOUT")
self.setSyslogSocket("auto")
def __sigTERMhandler(self, signum, frame): def __sigTERMhandler(self, signum, frame):
logSys.debug("Caught signal %d. Exiting" % signum) logSys.debug("Caught signal %d. Exiting" % signum)
@ -168,7 +168,7 @@ class Server:
def startJail(self, name): def startJail(self, name):
try: try:
self.__lock.acquire() self.__lock.acquire()
if not self.__jails[name].is_alive(): if not self.__jails[name].isAlive():
self.__jails[name].start() self.__jails[name].start()
finally: finally:
self.__lock.release() self.__lock.release()
@ -177,7 +177,7 @@ class Server:
logSys.debug("Stopping jail %s" % name) logSys.debug("Stopping jail %s" % name)
try: try:
self.__lock.acquire() self.__lock.acquire()
if self.__jails[name].is_alive(): if self.__jails[name].isAlive():
self.__jails[name].stop() self.__jails[name].stop()
self.delJail(name) self.delJail(name)
finally: finally:
@ -222,8 +222,7 @@ class Server:
def getLogPath(self, name): def getLogPath(self, name):
filter_ = self.__jails[name].filter filter_ = self.__jails[name].filter
if isinstance(filter_, FileFilter): if isinstance(filter_, FileFilter):
return [m.getFileName() return filter_.getLogPaths()
for m in filter_.getLogPath()]
else: # pragma: systemd no cover else: # pragma: systemd no cover
logSys.info("Jail %s is not a FileFilter instance" % name) logSys.info("Jail %s is not a FileFilter instance" % name)
return [] return []
@ -341,6 +340,14 @@ class Server:
def getBanTimeExtra(self, name, opt): def getBanTimeExtra(self, name, opt):
return self.__jails[name].getBanTimeExtra(opt) return self.__jails[name].getBanTimeExtra(opt)
def isAlive(self, jailnum=None):
if jailnum is not None and len(self.__jails) != jailnum:
return 0
for jail in self.__jails.values():
if not jail.isAlive():
return 0
return 1
# Status # Status
def status(self): def status(self):
try: try:
@ -24,16 +24,20 @@ __author__ = "Cyril Jaquier"
__copyright__ = "Copyright (c) 2004 Cyril Jaquier" __copyright__ = "Copyright (c) 2004 Cyril Jaquier"
__license__ = "GPL" __license__ = "GPL"
import sys
from ..helpers import getLogger from ..helpers import getLogger
from .mytime import MyTime from .mytime import MyTime
# Gets the instance of the logger. # Gets the instance of the logger.
logSys = getLogger(__name__) logSys = getLogger(__name__)
RESTORED = 0x01
class Ticket: class Ticket:
def __init__(self, ip, time=None, matches=None): def __init__(self, ip=None, time=None, matches=None, ticket=None):
"""Ticket constructor """Ticket constructor
@param ip the IP address @param ip the IP address
@ -42,17 +46,22 @@ class Ticket:
""" """
self.setIP(ip) self.setIP(ip)
self.__restored = False; self._flags = 0;
self.__banCount = 0; self._banCount = 0;
self.__banTime = None; self._banTime = None;
self.__time = time if time is not None else MyTime.time() self._time = time if time is not None else MyTime.time()
self.__attempt = 0 self._data = {'matches': [], 'failures': 0}
self.__file = None if ticket:
self.__matches = matches or [] # ticket available - copy whole information from ticket:
self.__dict__.update(i for i in ticket.__dict__.iteritems() if i[0] in self.__dict__)
else:
self._data['matches'] = matches or []
def __str__(self): def __str__(self):
return "%s: ip=%s time=%s bantime=%s bancount=%s #attempts=%d matches=%r" % \ return "%s: ip=%s time=%s bantime=%s bancount=%s #attempts=%d matches=%r" % \
(self.__class__.__name__.split('.')[-1], self.__ip, self.__time, self.__banTime, self.__banCount, self.__attempt, self.__matches) (self.__class__.__name__.split('.')[-1], self.__ip, self._time,
				self._banTime, self._banCount,
self._data['failures'], self._data.get('matches', []))
def __repr__(self): def __repr__(self):
return str(self) return str(self)
@ -60,9 +69,8 @@ class Ticket:
def __eq__(self, other): def __eq__(self, other):
try: try:
return self.__ip == other.__ip and \ return self.__ip == other.__ip and \
round(self.__time, 2) == round(other.__time, 2) and \ round(self._time, 2) == round(other._time, 2) and \
self.__attempt == other.__attempt and \ self._data == other._data
self.__matches == other.__matches
except AttributeError: except AttributeError:
return False return False
@ -76,56 +84,141 @@ class Ticket:
return self.__ip return self.__ip
def setTime(self, value): def setTime(self, value):
self.__time = value self._time = value
def getTime(self): def getTime(self):
return self.__time return self._time
def setBanTime(self, value): def setBanTime(self, value):
self.__banTime = value; self._banTime = value;
def getBanTime(self, defaultBT = None): def getBanTime(self, defaultBT=None):
return (self.__banTime if not self.__banTime is None else defaultBT); return (self._banTime if not self._banTime is None else defaultBT);
def setBanCount(self, value): def setBanCount(self, value):
self.__banCount = value; self._banCount = value;
def incrBanCount(self, value = 1): def incrBanCount(self, value = 1):
self.__banCount += value; self._banCount += value;
def getBanCount(self): def getBanCount(self):
return self.__banCount; return self._banCount;
def isTimedOut(self, time, defaultBT = None): def isTimedOut(self, time, defaultBT=None):
bantime = (self.__banTime if not self.__banTime is None else defaultBT); bantime = (self._banTime if not self._banTime is None else defaultBT);
# permanent # permanent
if bantime == -1: if bantime == -1:
return False return False
# timed out # timed out
return (time > self.__time + bantime) return (time > self._time + bantime)
def setAttempt(self, value): def setAttempt(self, value):
self.__attempt = value self._data['failures'] = value
def getAttempt(self): def getAttempt(self):
return self.__attempt return self._data['failures']
def setMatches(self, matches): def setMatches(self, matches):
self.__matches = matches self._data['matches'] = matches or []
def getMatches(self): def getMatches(self):
return self.__matches return self._data.get('matches', [])
def setRestored(self, value): def setRestored(self, value):
self.__restored = value self._flags |= RESTORED
def getRestored(self): def getRestored(self):
return self.__restored return 1 if self._flags & RESTORED else 0
def setData(self, *args, **argv):
# if overwrite - set data and filter None values:
if len(args) == 1:
# todo: if support >= 2.7 only:
# self._data = {k:v for k,v in args[0].iteritems() if v is not None}
self._data = dict([(k,v) for k,v in args[0].iteritems() if v is not None])
# add k,v list or dict (merge):
elif len(args) == 2:
self._data.update((args,))
elif len(args) > 2:
self._data.update((k,v) for k,v in zip(*[iter(args)]*2))
if len(argv):
self._data.update(argv)
# filter (delete) None values:
# todo: if support >= 2.7 only:
# self._data = {k:v for k,v in self._data.iteritems() if v is not None}
self._data = dict([(k,v) for k,v in self._data.iteritems() if v is not None])
def getData(self, key=None, default=None):
# return whole data dict:
if key is None:
return self._data
# return default if not exists:
if not self._data:
return default
if not isinstance(key,(str,unicode,type(None),int,float,bool,complex)):
# return filtered by lambda/function:
if callable(key):
# todo: if support >= 2.7 only:
# return {k:v for k,v in self._data.iteritems() if key(k)}
return dict([(k,v) for k,v in self._data.iteritems() if key(k)])
# return filtered by keys:
if hasattr(key, '__iter__'):
# todo: if support >= 2.7 only:
# return {k:v for k,v in self._data.iteritems() if k in key}
return dict([(k,v) for k,v in self._data.iteritems() if k in key])
# return single value of data:
return self._data.get(key, default)
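setData accepts a whole dict (which replaces the stored data), a single key/value pair, a flat key, value, key, value sequence, or keyword arguments, and silently drops None values; getData returns everything, one key, a subset of keys, or the entries selected by a predicate. A short usage sketch, assuming the fail2ban package from this branch is importable under a Python 2 interpreter (the module still relies on iteritems/unicode); the option names are arbitrary examples:

from fail2ban.server.ticket import FailTicket

t = FailTicket('192.0.2.1', 1167605999.0)
t.setData('user', 'root')                      # one key/value pair
t.setData('port', 22, 'proto', 'tcp')          # flat key, value, ... sequence
t.setData(service='sshd', bogus=None)          # keyword args; None entries are dropped

print(t.getData('user'))                       # 'root'
print(t.getData(('port', 'proto')))            # {'port': 22, 'proto': 'tcp'} (order may vary)
print(t.getData(lambda k: k.startswith('p')))  # same subset, chosen by a predicate
print(t.getData())                             # full dict, incl. 'failures' and 'matches'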
class FailTicket(Ticket): class FailTicket(Ticket):
pass
def __init__(self, ip=None, time=None, matches=None, ticket=None):
# this class variables:
self.__retry = 0
self.__lastReset = None
# create/copy using default ticket constructor:
Ticket.__init__(self, ip, time, matches, ticket)
# init:
if ticket is None:
self.__lastReset = time if time is not None else self.getTime()
if not self.__retry:
self.__retry = self._data['failures'];
def setRetry(self, value):
""" Set artificial retry count, normally equal failures / attempt,
used in incremental features (BanTimeIncr) to increase retry count for bad IPs
"""
self.__retry = value
if not self._data['failures']:
self._data['failures'] = 1
if not value:
self._data['failures'] = 0
self._data['matches'] = []
def getRetry(self):
""" Returns failures / attempt count or
artificial retry count increased for bad IPs
"""
return max(self.__retry, self._data['failures'])
def inc(self, matches=None, attempt=1, count=1):
self.__retry += count
self._data['failures'] += attempt
if matches:
self._data['matches'] += matches
def setLastTime(self, value):
if value > self._time:
self._time = value
def getLastTime(self):
return self._time
def getLastReset(self):
return self.__lastReset
def setLastReset(self, value):
self.__lastReset = value
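FailTicket keeps two related counters: the real failure count in _data['failures'] and an artificial retry counter that the ban-time-increment logic can raise for IPs with a ban history; getRetry always reports the larger of the two. A usage sketch under the same import assumption as above:

from fail2ban.server.ticket import FailTicket

ft = FailTicket('192.0.2.1', 1167605999.0)
ft.inc(matches=['Failed password for root'], attempt=1)
print(ft.getAttempt())   # 1 - one real failure recorded
print(ft.getRetry())     # 1 - max(retry counter, failures)

ft.setRetry(5)           # raised artificially, e.g. for an already known bad IP
print(ft.getRetry())     # 5, although only one real failure was observed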
## ##
# Ban Ticket. # Ban Ticket.
@ -95,7 +95,7 @@ class Transmitter:
return None return None
elif command[0] == "sleep": elif command[0] == "sleep":
value = command[1] value = command[1]
time.sleep(int(value)) time.sleep(float(value))
return None return None
elif command[0] == "flushlogs": elif command[0] == "flushlogs":
return self.__server.flushLogs() return self.__server.flushLogs()
@ -139,6 +139,7 @@ class Transmitter:
elif name == "dbpurgeage": elif name == "dbpurgeage":
db = self.__server.getDatabase() db = self.__server.getDatabase()
if db is None: if db is None:
logSys.warning("dbpurgeage setting was not in effect since no db yet")
return None return None
else: else:
db.purgeage = command[1] db.purgeage = command[1]
245
fail2ban/server/utils.py Normal file
@ -0,0 +1,245 @@
# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-
# vi: set ft=python sts=4 ts=4 sw=4 noet :
# This file is part of Fail2Ban.
#
# Fail2Ban is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# Fail2Ban is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Fail2Ban; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
__author__ = "Serg G. Brester (sebres) and Fail2Ban Contributors"
__copyright__ = "Copyright (c) 2004 Cyril Jaquier, 2011-2012 Yaroslav Halchenko, 2012-2015 Serg G. Brester"
__license__ = "GPL"
import logging, os, fcntl, subprocess, time, signal
from ..helpers import getLogger
# Gets the instance of the logger.
logSys = getLogger(__name__)
# Some hints on common abnormal exit codes
_RETCODE_HINTS = {
127: '"Command not found". Make sure that all commands in %(realCmd)r '
'are in the PATH of fail2ban-server process '
'(grep -a PATH= /proc/`pidof -x fail2ban-server`/environ). '
'You may want to start '
'"fail2ban-server -f" separately, initiate it with '
'"fail2ban-client reload" in another shell session and observe if '
'additional informative error messages appear in the terminals.'
}
# Dictionary to lookup signal name from number
signame = dict((num, name)
for name, num in signal.__dict__.iteritems() if name.startswith("SIG"))
class Utils():
"""Utilities provide diverse static methods like executes OS shell commands, etc.
"""
DEFAULT_SLEEP_TIME = 0.1
DEFAULT_SLEEP_INTERVAL = 0.01
class Cache(dict):
def __init__(self, *args, **kwargs):
self.setOptions(*args, **kwargs)
def setOptions(self, maxCount=1000, maxTime=60):
self.maxCount = maxCount
self.maxTime = maxTime
def get(self, k, defv=None):
v = dict.get(self, k)
if v:
if v[1] > time.time():
return v[0]
del self[k]
return defv
def set(self, k, v):
t = time.time()
# clean cache if max count reached:
if len(self) >= self.maxCount:
for (ck,cv) in self.items():
if cv[1] < t:
del self[ck]
# if still max count - remove any one:
if len(self) >= self.maxCount:
self.popitem()
self[k] = (v, t + self.maxTime)
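Utils.Cache is a small dict subclass whose values carry an expiry timestamp: get() falls back to the default once an entry is older than maxTime, and set() evicts entries when maxCount is reached. A usage sketch with deliberately tiny limits (the numbers are only for the demonstration; Python 2 and an importable fail2ban package assumed):

import time
from fail2ban.server.utils import Utils

cache = Utils.Cache(maxCount=2, maxTime=0.5)
cache.set('k1', 'v1')
print(cache.get('k1'))          # 'v1' while the entry is younger than maxTime
cache.set('k2', 'v2')
cache.set('k3', 'v3')           # maxCount reached: one entry gets evicted
time.sleep(0.6)
print(cache.get('k1', 'gone'))  # 'gone' - expired (or evicted) entries yield the default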
@staticmethod
def setFBlockMode(fhandle, value):
flags = fcntl.fcntl(fhandle, fcntl.F_GETFL)
if not value:
flags |= os.O_NONBLOCK
else:
flags &= ~os.O_NONBLOCK
fcntl.fcntl(fhandle, fcntl.F_SETFL, flags)
return flags
@staticmethod
def executeCmd(realCmd, timeout=60, shell=True, output=False, tout_kill_tree=True):
"""Executes a command.
Parameters
----------
realCmd : str
The command to execute.
timeout : int
The time out in seconds for the command.
shell : bool
If shell is True (default), the specified command (may be a string) will be
executed through the shell.
output : bool
If output is True, the function returns tuple (success, stdoutdata, stderrdata, returncode)
Returns
-------
bool
True if the command succeeded.
Raises
------
OSError
If command fails to be executed.
RuntimeError
If command execution times out.
"""
stdout = stderr = None
retcode = None
if not callable(timeout):
stime = time.time()
timeout_expr = lambda: time.time() - stime <= timeout
else:
timeout_expr = timeout
try:
popen = subprocess.Popen(
realCmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=shell,
preexec_fn=os.setsid # so that killpg does not kill our process
)
retcode = popen.poll()
while retcode is None and timeout_expr():
time.sleep(Utils.DEFAULT_SLEEP_INTERVAL)
retcode = popen.poll()
if retcode is None:
logSys.error("%s -- timed out after %s seconds." %
(realCmd, timeout))
pgid = os.getpgid(popen.pid)
# if not tree - first try to terminate and then kill, otherwise - kill (-9) only:
os.killpg(pgid, signal.SIGTERM) # Terminate the process
time.sleep(Utils.DEFAULT_SLEEP_INTERVAL)
retcode = popen.poll()
#logSys.debug("%s -- terminated %s ", realCmd, retcode)
if retcode is None or tout_kill_tree: # Still going...
os.killpg(pgid, signal.SIGKILL) # Kill the process
time.sleep(Utils.DEFAULT_SLEEP_INTERVAL)
retcode = popen.poll()
#logSys.debug("%s -- killed %s ", realCmd, retcode)
if retcode is None and not Utils.pid_exists(pgid):
retcode = signal.SIGKILL
except OSError as e:
logSys.error("%s -- failed with %s" % (realCmd, e))
std_level = retcode == 0 and logging.DEBUG or logging.ERROR
# if we need output (to return or to log it):
if output or std_level >= logSys.getEffectiveLevel():
# if was timeouted (killed/terminated) - to prevent waiting, set std handles to non-blocking mode.
if popen.stdout:
try:
if retcode is None or retcode < 0:
Utils.setFBlockMode(popen.stdout, False)
stdout = popen.stdout.read()
except IOError as e:
logSys.error(" ... -- failed to read stdout %s", e)
if stdout is not None and stdout != '':
logSys.log(std_level, "%s -- stdout: %r", realCmd, stdout)
popen.stdout.close()
if popen.stderr:
try:
if retcode is None or retcode < 0:
Utils.setFBlockMode(popen.stderr, False)
stderr = popen.stderr.read()
except IOError as e:
logSys.error(" ... -- failed to read stderr %s", e)
if stderr is not None and stderr != '':
logSys.log(std_level, "%s -- stderr: %r", realCmd, stderr)
popen.stderr.close()
if retcode == 0:
logSys.debug("%s -- returned successfully", realCmd)
return True if not output else (True, stdout, stderr, retcode)
elif retcode is None:
logSys.error("%s -- unable to kill PID %i" % (realCmd, popen.pid))
elif retcode < 0 or retcode > 128:
# dash would return negative while bash 128 + n
sigcode = -retcode if retcode < 0 else retcode - 128
logSys.error("%s -- killed with %s (return code: %s)" %
(realCmd, signame.get(sigcode, "signal %i" % sigcode), retcode))
else:
msg = _RETCODE_HINTS.get(retcode, None)
logSys.error("%s -- returned %i" % (realCmd, retcode))
if msg:
logSys.info("HINT on %i: %s", retcode, msg % locals())
return False if not output else (False, stdout, stderr, retcode)
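executeCmd returns a plain success flag by default, a (success, stdout, stderr, returncode) tuple with output=True, and accepts either a number of seconds or a callable as timeout for deciding when the spawned process group is terminated or killed. A usage sketch (same Python 2 / importable-package assumption; log messages only appear if logging is configured):

from fail2ban.server.utils import Utils

print(Utils.executeCmd('true'))    # True
print(Utils.executeCmd('false'))   # False - the non-zero return code is logged

# output=True also returns the captured streams
ok, out, err, rc = Utils.executeCmd('echo hello', output=True)
print(out)                         # 'hello\n'

# long-running commands are killed once the timeout expires
print(Utils.executeCmd('sleep 5', timeout=1))   # False, after roughly one second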
@staticmethod
def wait_for(cond, timeout, interval=None):
"""Wait until condition expression `cond` is True, up to `timeout` sec
"""
ini = 1
while True:
ret = cond()
if ret:
return ret
if ini:
ini = stm = 0
time0 = time.time() + timeout
if not interval:
interval = Utils.DEFAULT_SLEEP_INTERVAL
if time.time() > time0:
break
stm = min(stm + interval, Utils.DEFAULT_SLEEP_TIME)
time.sleep(stm)
return ret
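wait_for polls the condition with a short, growing sleep interval (bounded by DEFAULT_SLEEP_TIME) and hands the condition's truthy result straight back to the caller, which is why the filters and tests above can wait on arbitrary lambdas. A usage sketch:

import time
from fail2ban.server.utils import Utils

start = time.time()
ready_after_half_second = lambda: time.time() - start > 0.5 and "ready"

print(Utils.wait_for(ready_after_half_second, 2))   # "ready" - the truthy value itself
print(Utils.wait_for(lambda: False, 1))             # False - returned once the timeout expires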
# Solution from http://stackoverflow.com/questions/568271/how-to-check-if-there-exists-a-process-with-a-given-pid
# under cc by-sa 3.0
if os.name == 'posix':
@staticmethod
def pid_exists(pid):
"""Check whether pid exists in the current process table."""
import errno
if pid < 0:
return False
try:
os.kill(pid, 0)
except OSError as e:
return e.errno == errno.EPERM
else:
return True
else:
@staticmethod
def pid_exists(pid):
import ctypes
kernel32 = ctypes.windll.kernel32
SYNCHRONIZE = 0x100000
process = kernel32.OpenProcess(SYNCHRONIZE, 0, pid)
if process != 0:
kernel32.CloseHandle(process)
return True
else:
return False
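pid_exists is consulted after a timeout to confirm that the spawned process (group) really went away. A usage sketch:

import os
from fail2ban.server.utils import Utils

print(Utils.pid_exists(os.getpid()))   # True - this very process
print(Utils.pid_exists(2 ** 22 + 1))   # almost certainly False - beyond usual PID ranges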
@ -29,6 +29,8 @@ if sys.version_info >= (2,7):
def setUp(self): def setUp(self):
"""Call before every test case.""" """Call before every test case."""
unittest.F2B.SkipIfNoNetwork()
self.jail = DummyJail() self.jail = DummyJail()
self.jail.actions.add("test") self.jail.actions.add("test")

import os import os
import smtpd import smtpd
import asyncore
import threading import threading
import unittest import unittest
import sys import sys
@ -30,7 +29,7 @@ else:
from ..dummyjail import DummyJail from ..dummyjail import DummyJail
from ..utils import CONFIG_DIR from ..utils import CONFIG_DIR, asyncserver
class TestSMTPServer(smtpd.SMTPServer): class TestSMTPServer(smtpd.SMTPServer):
@ -62,13 +61,16 @@ class SMTPActionTest(unittest.TestCase):
self.action = customActionModule.Action( self.action = customActionModule.Action(
self.jail, "test", host="127.0.0.1:%i" % port) self.jail, "test", host="127.0.0.1:%i" % port)
## because of bug in loop (see loop in asyncserver.py) use it's loop instead of asyncore.loop:
self._active = True
self._loop_thread = threading.Thread( self._loop_thread = threading.Thread(
target=asyncore.loop, kwargs={'timeout': 1}) target=asyncserver.loop, kwargs={'active': lambda: self._active})
self._loop_thread.start() self._loop_thread.start()
def tearDown(self): def tearDown(self):
"""Call after every test case.""" """Call after every test case."""
self.smtpd.close() self.smtpd.close()
self._active = False
self._loop_thread.join() self._loop_thread.join()
def testStart(self): def testStart(self):

from ..server.actions import Actions from ..server.actions import Actions
from ..server.ticket import FailTicket from ..server.ticket import FailTicket
from ..server.utils import Utils
from .dummyjail import DummyJail from .dummyjail import DummyJail
from .utils import LogCaptureTestCase from .utils import LogCaptureTestCase
@ -81,8 +82,7 @@ class ExecuteActions(LogCaptureTestCase):
self.defaultActions() self.defaultActions()
self.__actions.start() self.__actions.start()
with open(self.__tmpfilename) as f: with open(self.__tmpfilename) as f:
time.sleep(3) self.assertTrue( Utils.wait_for(lambda: (f.read() == "ip start 64\n"), 3) )
self.assertEqual(f.read(),"ip start 64\n")
self.__actions.stop() self.__actions.stop()
self.__actions.join() self.__actions.join()
@ -94,15 +94,14 @@ class ExecuteActions(LogCaptureTestCase):
"Action", os.path.join(TEST_FILES_DIR, "action.d/action.py"), "Action", os.path.join(TEST_FILES_DIR, "action.d/action.py"),
{'opt1': 'value'}) {'opt1': 'value'})
self.assertTrue(self._is_logged("TestAction initialised")) self.assertLogged("TestAction initialised")
self.__actions.start() self.__actions.start()
time.sleep(3) self.assertTrue( Utils.wait_for(lambda: self._is_logged("TestAction action start"), 3) )
self.assertTrue(self._is_logged("TestAction action start"))
self.__actions.stop() self.__actions.stop()
self.__actions.join() self.__actions.join()
self.assertTrue(self._is_logged("TestAction action stop")) self.assertLogged("TestAction action stop")
self.assertRaises(IOError, self.assertRaises(IOError,
self.__actions.add, "Action3", "/does/not/exist.py", {}) self.__actions.add, "Action3", "/does/not/exist.py", {})
@ -135,11 +134,10 @@ class ExecuteActions(LogCaptureTestCase):
"action.d/action_errors.py"), "action.d/action_errors.py"),
{}) {})
self.__actions.start() self.__actions.start()
time.sleep(3) self.assertTrue( Utils.wait_for(lambda: self._is_logged("Failed to start"), 3) )
self.assertTrue(self._is_logged("Failed to start"))
self.__actions.stop() self.__actions.stop()
self.__actions.join() self.__actions.join()
self.assertTrue(self._is_logged("Failed to stop")) self.assertLogged("Failed to stop")
def testBanActionsAInfo(self): def testBanActionsAInfo(self):
# Action which deletes IP address from aInfo # Action which deletes IP address from aInfo
@ -155,13 +153,13 @@ class ExecuteActions(LogCaptureTestCase):
self.__actions._Actions__checkBan() self.__actions._Actions__checkBan()
# Will fail if modification of aInfo from first action propagates # Will fail if modification of aInfo from first action propagates
# to second action, as both delete same key # to second action, as both delete same key
self.assertFalse(self._is_logged("Failed to execute ban")) self.assertNotLogged("Failed to execute ban")
self.assertTrue(self._is_logged("action1 ban deleted aInfo IP")) self.assertLogged("action1 ban deleted aInfo IP")
self.assertTrue(self._is_logged("action2 ban deleted aInfo IP")) self.assertLogged("action2 ban deleted aInfo IP")
self.__actions._Actions__flushBan() self.__actions._Actions__flushBan()
# Will fail if modification of aInfo from first action propagates # Will fail if modification of aInfo from first action propagates
# to second action, as both delete same key # to second action, as both delete same key
self.assertFalse(self._is_logged("Failed to execute unban")) self.assertNotLogged("Failed to execute unban")
self.assertTrue(self._is_logged("action1 unban deleted aInfo IP")) self.assertLogged("action1 unban deleted aInfo IP")
self.assertTrue(self._is_logged("action2 unban deleted aInfo IP")) self.assertLogged("action2 unban deleted aInfo IP")
@ -24,12 +24,16 @@ __author__ = "Cyril Jaquier"
__copyright__ = "Copyright (c) 2004 Cyril Jaquier" __copyright__ = "Copyright (c) 2004 Cyril Jaquier"
__license__ = "GPL" __license__ = "GPL"
import os
import tempfile
import time import time
import unittest
from ..server.action import CommandAction, CallingMap from ..server.action import CommandAction, CallingMap
from ..server.utils import Utils
from .utils import LogCaptureTestCase from .utils import LogCaptureTestCase
from .utils import pid_exists
class CommandActionTest(LogCaptureTestCase): class CommandActionTest(LogCaptureTestCase):
@ -141,17 +145,17 @@ class CommandActionTest(LogCaptureTestCase):
self.__action.actionunban = "true" self.__action.actionunban = "true"
self.assertEqual(self.__action.actionunban, 'true') self.assertEqual(self.__action.actionunban, 'true')
self.assertFalse(self._is_logged('returned')) self.assertNotLogged('returned')
# no action was actually executed yet # no action was actually executed yet
self.__action.ban({'ip': None}) self.__action.ban({'ip': None})
self.assertTrue(self._is_logged('Invariant check failed')) self.assertLogged('Invariant check failed')
self.assertTrue(self._is_logged('returned successfully')) self.assertLogged('returned successfully')
def testExecuteActionEmptyUnban(self): def testExecuteActionEmptyUnban(self):
self.__action.actionunban = "" self.__action.actionunban = ""
self.__action.unban({}) self.__action.unban({})
self.assertTrue(self._is_logged('Nothing to do')) self.assertLogged('Nothing to do')
def testExecuteActionStartCtags(self): def testExecuteActionStartCtags(self):
self.__action.HOST = "192.0.2.0" self.__action.HOST = "192.0.2.0"
@ -166,7 +170,7 @@ class CommandActionTest(LogCaptureTestCase):
self.__action.actionban = "rm /tmp/fail2ban.test" self.__action.actionban = "rm /tmp/fail2ban.test"
self.__action.actioncheck = "[ -e /tmp/fail2ban.test ]" self.__action.actioncheck = "[ -e /tmp/fail2ban.test ]"
self.assertRaises(RuntimeError, self.__action.ban, {'ip': None}) self.assertRaises(RuntimeError, self.__action.ban, {'ip': None})
self.assertTrue(self._is_logged('Unable to restore environment')) self.assertLogged('Unable to restore environment')
def testExecuteActionChangeCtags(self): def testExecuteActionChangeCtags(self):
self.assertRaises(AttributeError, getattr, self.__action, "ROST") self.assertRaises(AttributeError, getattr, self.__action, "ROST")
@ -185,30 +189,93 @@ class CommandActionTest(LogCaptureTestCase):
def testExecuteActionStartEmpty(self): def testExecuteActionStartEmpty(self):
self.__action.actionstart = "" self.__action.actionstart = ""
self.__action.start() self.__action.start()
self.assertTrue(self._is_logged('Nothing to do')) self.assertLogged('Nothing to do')
def testExecuteIncorrectCmd(self): def testExecuteIncorrectCmd(self):
CommandAction.executeCmd('/bin/ls >/dev/null\nbogusXXX now 2>/dev/null') CommandAction.executeCmd('/bin/ls >/dev/null\nbogusXXX now 2>/dev/null')
self.assertTrue(self._is_logged('HINT on 127: "Command not found"')) self.assertLogged('HINT on 127: "Command not found"')
def testExecuteTimeout(self): def testExecuteTimeout(self):
unittest.F2B.SkipIfFast()
stime = time.time() stime = time.time()
# Should take a minute # Should take a minute
self.assertRaises( self.assertFalse(CommandAction.executeCmd('sleep 30', timeout=1))
RuntimeError, CommandAction.executeCmd, 'sleep 60', timeout=2)
# give a test still 1 second, because system could be too busy # give a test still 1 second, because system could be too busy
self.assertTrue(time.time() >= stime + 2 and time.time() <= stime + 3) self.assertTrue(time.time() >= stime + 1 and time.time() <= stime + 2)
self.assertTrue(self._is_logged('sleep 60 -- timed out after 2 seconds') self.assertLogged(
or self._is_logged('sleep 60 -- timed out after 3 seconds')) 'sleep 30 -- timed out after 1 seconds',
self.assertTrue(self._is_logged('sleep 60 -- killed with SIGTERM')) 'sleep 30 -- timed out after 2 seconds'
)
self.assertLogged('sleep 30 -- killed with SIGTERM')
def testExecuteTimeoutWithNastyChildren(self):
# temporary file for a nasty kid shell script
tmpFilename = tempfile.mktemp(".sh", "fail2ban_")
# Create a nasty script which would hang there for a while
with open(tmpFilename, 'w') as f:
f.write("""#!/bin/bash
trap : HUP EXIT TERM
echo "$$" > %s.pid
echo "my pid $$ . sleeping lo-o-o-ong"
sleep 30
""" % tmpFilename)
stime = 0
# timeout as long as pid-file was not created, but max 5 seconds
def getnasty_tout():
return (
getnastypid() is None
and time.time() - stime <= 5
)
def getnastypid():
cpid = None
if os.path.isfile(tmpFilename + '.pid'):
with open(tmpFilename + '.pid') as f:
try:
cpid = int(f.read())
except ValueError:
pass
return cpid
# First test if can kill the bastard
stime = time.time()
self.assertFalse(CommandAction.executeCmd(
'bash %s' % tmpFilename, timeout=getnasty_tout))
# Wait up to 3 seconds, the child got killed
cpid = getnastypid()
# Verify that the process itself got killed
self.assertTrue(Utils.wait_for(lambda: not pid_exists(cpid), 3)) # process should have been killed
self.assertLogged('my pid ', 'Resource temporarily unavailable')
self.assertLogged('timed out')
self.assertLogged('killed with SIGTERM',
'killed with SIGKILL')
os.unlink(tmpFilename + '.pid')
# A bit evolved case even though, previous test already tests killing children processes
stime = time.time()
self.assertFalse(CommandAction.executeCmd(
'out=`bash %s`; echo ALRIGHT' % tmpFilename, timeout=getnasty_tout))
# Wait up to 3 seconds, the child got killed
cpid = getnastypid()
# Verify that the process itself got killed
self.assertTrue(Utils.wait_for(lambda: not pid_exists(cpid), 3))
self.assertLogged('my pid ', 'Resource temporarily unavailable')
self.assertLogged('timed out')
self.assertLogged('killed with SIGTERM',
'killed with SIGKILL')
os.unlink(tmpFilename)
os.unlink(tmpFilename + '.pid')
def testCaptureStdOutErr(self): def testCaptureStdOutErr(self):
CommandAction.executeCmd('echo "How now brown cow"') CommandAction.executeCmd('echo "How now brown cow"')
self.assertTrue(self._is_logged("'How now brown cow\\n'")) self.assertLogged("'How now brown cow\\n'")
CommandAction.executeCmd( CommandAction.executeCmd(
'echo "The rain in Spain stays mainly in the plain" 1>&2') 'echo "The rain in Spain stays mainly in the plain" 1>&2')
self.assertTrue(self._is_logged( self.assertLogged(
"'The rain in Spain stays mainly in the plain\\n'")) "'The rain in Spain stays mainly in the plain\\n'")
def testCallingMap(self): def testCallingMap(self):
mymap = CallingMap(callme=lambda: str(10), error=lambda: int('a'), mymap = CallingMap(callme=lambda: str(10), error=lambda: int('a'),
@ -35,27 +35,53 @@ class AddFailure(unittest.TestCase):
"""Call before every test case.""" """Call before every test case."""
self.__ticket = BanTicket('193.168.0.128', 1167605999.0) self.__ticket = BanTicket('193.168.0.128', 1167605999.0)
self.__banManager = BanManager() self.__banManager = BanManager()
self.assertTrue(self.__banManager.addBanTicket(self.__ticket))
def tearDown(self): def tearDown(self):
"""Call after every test case.""" """Call after every test case."""
pass pass
def testAdd(self): def testAdd(self):
self.assertTrue(self.__banManager.addBanTicket(self.__ticket))
self.assertEqual(self.__banManager.size(), 1) self.assertEqual(self.__banManager.size(), 1)
self.assertEqual(self.__banManager.getBanTotal(), 1) self.assertEqual(self.__banManager.getBanTotal(), 1)
self.__banManager.setBanTotal(0) self.__banManager.setBanTotal(0)
self.assertEqual(self.__banManager.getBanTotal(), 0) self.assertEqual(self.__banManager.getBanTotal(), 0)
def testAddDuplicate(self): def testAddDuplicate(self):
self.assertTrue(self.__banManager.addBanTicket(self.__ticket))
self.assertFalse(self.__banManager.addBanTicket(self.__ticket)) self.assertFalse(self.__banManager.addBanTicket(self.__ticket))
self.assertEqual(self.__banManager.size(), 1) self.assertEqual(self.__banManager.size(), 1)
def testAddDuplicateWithTime(self):
# add again a duplicate :
# 1) with newer start time and the same ban time
# 2) with same start time and longer ban time
# 3) with permanent ban time (-1)
for tnew, btnew in (
(1167605999.0 + 100, None),
(1167605999.0, 24*60*60),
(1167605999.0, -1),
):
ticket1 = BanTicket('193.168.0.128', 1167605999.0)
ticket2 = BanTicket('193.168.0.128', tnew)
if btnew is not None:
ticket2.setBanTime(btnew)
self.assertTrue(self.__banManager.addBanTicket(ticket1))
self.assertFalse(self.__banManager.addBanTicket(ticket2))
self.assertEqual(self.__banManager.size(), 1)
# pop ticket and check it was prolonged :
banticket = self.__banManager.getTicketByIP(ticket2.getIP())
self.assertEqual(banticket.getTime(), ticket2.getTime())
self.assertEqual(banticket.getTime(), ticket2.getTime())
self.assertEqual(banticket.getBanTime(), ticket2.getBanTime(self.__banManager.getBanTime()))
def testInListOK(self): def testInListOK(self):
self.assertTrue(self.__banManager.addBanTicket(self.__ticket))
ticket = BanTicket('193.168.0.128', 1167605999.0) ticket = BanTicket('193.168.0.128', 1167605999.0)
self.assertTrue(self.__banManager._inBanList(ticket)) self.assertTrue(self.__banManager._inBanList(ticket))
def testInListNOK(self): def testInListNOK(self):
self.assertTrue(self.__banManager.addBanTicket(self.__ticket))
ticket = BanTicket('111.111.1.111', 1167605999.0) ticket = BanTicket('111.111.1.111', 1167605999.0)
self.assertFalse(self.__banManager._inBanList(ticket)) self.assertFalse(self.__banManager._inBanList(ticket))
@ -77,10 +103,29 @@ class AddFailure(unittest.TestCase):
self.assertEqual(str(self.__banManager.getTicketByIP(ticket.getIP())), self.assertEqual(str(self.__banManager.getTicketByIP(ticket.getIP())),
"BanTicket: ip=%s time=%s bantime=%s bancount=0 #attempts=0 matches=[]" % (ticket.getIP(), ticket.getTime(), -1)) "BanTicket: ip=%s time=%s bantime=%s bancount=0 #attempts=0 matches=[]" % (ticket.getIP(), ticket.getTime(), -1))
def testUnban(self):
btime = self.__banManager.getBanTime()
self.assertTrue(self.__banManager.addBanTicket(self.__ticket))
self.assertTrue(self.__banManager._inBanList(self.__ticket))
self.assertEqual(self.__banManager.unBanList(self.__ticket.getTime() + btime + 1), [self.__ticket])
self.assertEqual(self.__banManager.size(), 0)
def testUnbanPermanent(self):
btime = self.__banManager.getBanTime()
self.__banManager.setBanTime(-1)
try:
self.assertTrue(self.__banManager.addBanTicket(self.__ticket))
self.assertTrue(self.__banManager._inBanList(self.__ticket))
self.assertEqual(self.__banManager.unBanList(self.__ticket.getTime() + btime + 1), [])
self.assertEqual(self.__banManager.size(), 1)
finally:
self.__banManager.setBanTime(btime)
class StatusExtendedCymruInfo(unittest.TestCase): class StatusExtendedCymruInfo(unittest.TestCase):
def setUp(self): def setUp(self):
"""Call before every test case.""" """Call before every test case."""
unittest.F2B.SkipIfNoNetwork()
self.__ban_ip = "93.184.216.34" self.__ban_ip = "93.184.216.34"
self.__asn = "15133" self.__asn = "15133"
self.__country = "EU" self.__country = "EU"
@ -38,12 +38,15 @@ from ..client.configurator import Configurator
from .utils import LogCaptureTestCase from .utils import LogCaptureTestCase
TEST_FILES_DIR = os.path.join(os.path.dirname(__file__), "files") TEST_FILES_DIR = os.path.join(os.path.dirname(__file__), "files")
TEST_FILES_DIR_SHARE_CFG = {}
from .utils import CONFIG_DIR from .utils import CONFIG_DIR
CONFIG_DIR_SHARE_CFG = unittest.F2B.share_config
STOCK = os.path.exists(os.path.join('config','fail2ban.conf')) STOCK = os.path.exists(os.path.join('config','fail2ban.conf'))
IMPERFECT_CONFIG = os.path.join(os.path.dirname(__file__), 'config') IMPERFECT_CONFIG = os.path.join(os.path.dirname(__file__), 'config')
IMPERFECT_CONFIG_SHARE_CFG = {}
class ConfigReaderTest(unittest.TestCase): class ConfigReaderTest(unittest.TestCase):
@ -162,40 +165,44 @@ class JailReaderTest(LogCaptureTestCase):
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super(JailReaderTest, self).__init__(*args, **kwargs) super(JailReaderTest, self).__init__(*args, **kwargs)
self.__share_cfg = {}
def testIncorrectJail(self): def testIncorrectJail(self):
jail = JailReader('XXXABSENTXXX', basedir=CONFIG_DIR, share_config = self.__share_cfg) jail = JailReader('XXXABSENTXXX', basedir=CONFIG_DIR, share_config=CONFIG_DIR_SHARE_CFG)
self.assertRaises(ValueError, jail.read) self.assertRaises(ValueError, jail.read)
def testJailActionEmpty(self): def testJailActionEmpty(self):
jail = JailReader('emptyaction', basedir=IMPERFECT_CONFIG, share_config = self.__share_cfg) jail = JailReader('emptyaction', basedir=IMPERFECT_CONFIG, share_config=IMPERFECT_CONFIG_SHARE_CFG)
self.assertTrue(jail.read()) self.assertTrue(jail.read())
self.assertTrue(jail.getOptions()) self.assertTrue(jail.getOptions())
self.assertTrue(jail.isEnabled()) self.assertTrue(jail.isEnabled())
self.assertTrue(self._is_logged('No filter set for jail emptyaction')) self.assertLogged('No filter set for jail emptyaction')
self.assertTrue(self._is_logged('No actions were defined for emptyaction')) self.assertLogged('No actions were defined for emptyaction')
def testJailActionFilterMissing(self): def testJailActionFilterMissing(self):
jail = JailReader('missingbitsjail', basedir=IMPERFECT_CONFIG, share_config = self.__share_cfg) jail = JailReader('missingbitsjail', basedir=IMPERFECT_CONFIG, share_config=IMPERFECT_CONFIG_SHARE_CFG)
self.assertTrue(jail.read()) self.assertTrue(jail.read())
self.assertFalse(jail.getOptions()) self.assertFalse(jail.getOptions())
self.assertTrue(jail.isEnabled()) self.assertTrue(jail.isEnabled())
self.assertTrue(self._is_logged("Found no accessible config files for 'filter.d/catchallthebadies' under %s" % IMPERFECT_CONFIG)) self.assertLogged("Found no accessible config files for 'filter.d/catchallthebadies' under %s" % IMPERFECT_CONFIG)
self.assertTrue(self._is_logged('Unable to read the filter')) self.assertLogged('Unable to read the filter')
def TODOtestJailActionBrokenDef(self): def testJailActionBrokenDef(self):
jail = JailReader('brokenactiondef', basedir=IMPERFECT_CONFIG, share_config = self.__share_cfg) jail = JailReader('brokenactiondef', basedir=IMPERFECT_CONFIG,
share_config=IMPERFECT_CONFIG_SHARE_CFG)
self.assertTrue(jail.read()) self.assertTrue(jail.read())
self.assertFalse(jail.getOptions()) self.assertFalse(jail.getOptions())
self.assertTrue(jail.isEnabled()) self.assertTrue(jail.isEnabled())
self.printLog() self.assertLogged('Error in action definition joho[foo')
self.assertTrue(self._is_logged('Error in action definition joho[foo')) # This unittest has been deactivated for some time...
self.assertTrue(self._is_logged('Caught exception: While reading action joho[foo we should have got 1 or 2 groups. Got: 0')) # self.assertLogged(
# 'Caught exception: While reading action joho[foo we should have got 1 or 2 groups. Got: 0')
# let's test for what is actually logged and handle changes in the future
self.assertLogged(
"Caught exception: 'NoneType' object has no attribute 'endswith'")
if STOCK: if STOCK:
def testStockSSHJail(self): def testStockSSHJail(self):
jail = JailReader('sshd', basedir=CONFIG_DIR, share_config = self.__share_cfg) # we are running tests from root project dir atm jail = JailReader('sshd', basedir=CONFIG_DIR, share_config=CONFIG_DIR_SHARE_CFG) # we are running tests from root project dir atm
self.assertTrue(jail.read()) self.assertTrue(jail.read())
self.assertTrue(jail.getOptions()) self.assertTrue(jail.getOptions())
self.assertFalse(jail.isEnabled()) self.assertFalse(jail.isEnabled())
@ -216,7 +223,7 @@ class JailReaderTest(LogCaptureTestCase):
self.assertEqual(('mail--ho_is', {}), JailReader.extractOptions("mail--ho_is['s']")) self.assertEqual(('mail--ho_is', {}), JailReader.extractOptions("mail--ho_is['s']"))
#self.printLog() #self.printLog()
#self.assertTrue(self._is_logged("Invalid argument ['s'] in ''s''")) #self.assertLogged("Invalid argument ['s'] in ''s''")
self.assertEqual(('mail', {'a': ','}), JailReader.extractOptions("mail[a=',']")) self.assertEqual(('mail', {'a': ','}), JailReader.extractOptions("mail[a=',']"))
@ -260,7 +267,7 @@ class JailReaderTest(LogCaptureTestCase):
self.assertEqual(JailReader._glob(os.path.join(d, '*')), [f1]) self.assertEqual(JailReader._glob(os.path.join(d, '*')), [f1])
# since f2 is dangling -- empty list # since f2 is dangling -- empty list
self.assertEqual(JailReader._glob(f2), []) self.assertEqual(JailReader._glob(f2), [])
self.assertTrue(self._is_logged('File %s is a dangling link, thus cannot be monitored' % f2)) self.assertLogged('File %s is a dangling link, thus cannot be monitored' % f2)
self.assertEqual(JailReader._glob(os.path.join(d, 'nonexisting')), []) self.assertEqual(JailReader._glob(os.path.join(d, 'nonexisting')), [])
os.remove(f1) os.remove(f1)
os.remove(f2) os.remove(f2)
@ -269,6 +276,10 @@ class JailReaderTest(LogCaptureTestCase):
class FilterReaderTest(unittest.TestCase): class FilterReaderTest(unittest.TestCase):
def __init__(self, *args, **kwargs):
super(FilterReaderTest, self).__init__(*args, **kwargs)
self.__share_cfg = {}
def testConvert(self): def testConvert(self):
output = [['set', 'testcase01', 'addfailregex', output = [['set', 'testcase01', 'addfailregex',
"^\\s*(?:\\S+ )?(?:kernel: \\[\\d+\\.\\d+\\] )?(?:@vserver_\\S+ )" "^\\s*(?:\\S+ )?(?:kernel: \\[\\d+\\.\\d+\\] )?(?:@vserver_\\S+ )"
@ -306,9 +317,8 @@ class FilterReaderTest(unittest.TestCase):
# is unreliable # is unreliable
self.assertEqual(sorted(filterReader.convert()), sorted(output)) self.assertEqual(sorted(filterReader.convert()), sorted(output))
filterReader = FilterReader( filterReader = FilterReader("testcase01", "testcase01", {'maxlines': "5"},
"testcase01", "testcase01", {'maxlines': "5"}) share_config=TEST_FILES_DIR_SHARE_CFG, basedir=TEST_FILES_DIR)
filterReader.setBaseDir(TEST_FILES_DIR)
filterReader.read() filterReader.read()
#filterReader.getOptions(["failregex", "ignoreregex"]) #filterReader.getOptions(["failregex", "ignoreregex"])
filterReader.getOptions(None) filterReader.getOptions(None)
@ -317,8 +327,8 @@ class FilterReaderTest(unittest.TestCase):
def testFilterReaderSubstitionDefault(self): def testFilterReaderSubstitionDefault(self):
output = [['set', 'jailname', 'addfailregex', 'to=sweet@example.com fromip=<IP>']] output = [['set', 'jailname', 'addfailregex', 'to=sweet@example.com fromip=<IP>']]
filterReader = FilterReader('substition', "jailname", {}) filterReader = FilterReader('substition', "jailname", {},
filterReader.setBaseDir(TEST_FILES_DIR) share_config=TEST_FILES_DIR_SHARE_CFG, basedir=TEST_FILES_DIR)
filterReader.read() filterReader.read()
filterReader.getOptions(None) filterReader.getOptions(None)
c = filterReader.convert() c = filterReader.convert()
@ -326,16 +336,34 @@ class FilterReaderTest(unittest.TestCase):
def testFilterReaderSubstitionSet(self): def testFilterReaderSubstitionSet(self):
output = [['set', 'jailname', 'addfailregex', 'to=sour@example.com fromip=<IP>']] output = [['set', 'jailname', 'addfailregex', 'to=sour@example.com fromip=<IP>']]
filterReader = FilterReader('substition', "jailname", {'honeypot': 'sour@example.com'}) filterReader = FilterReader('substition', "jailname", {'honeypot': 'sour@example.com'},
filterReader.setBaseDir(TEST_FILES_DIR) share_config=TEST_FILES_DIR_SHARE_CFG, basedir=TEST_FILES_DIR)
filterReader.read()
filterReader.getOptions(None)
c = filterReader.convert()
self.assertEqual(sorted(c), sorted(output))
def testFilterReaderSubstitionKnown(self):
output = [['set', 'jailname', 'addfailregex', 'to=test,sweet@example.com,test2,sweet@example.com fromip=<IP>']]
filterName, filterOpt = JailReader.extractOptions(
'substition[honeypot="<sweet>,<known/honeypot>", sweet="test,<known/honeypot>,test2"]')
filterReader = FilterReader('substition', "jailname", filterOpt,
share_config=TEST_FILES_DIR_SHARE_CFG, basedir=TEST_FILES_DIR)
filterReader.read() filterReader.read()
filterReader.getOptions(None) filterReader.getOptions(None)
c = filterReader.convert() c = filterReader.convert()
self.assertEqual(sorted(c), sorted(output)) self.assertEqual(sorted(c), sorted(output))
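The test above drives the new <known/...> interpolation through JailReader.extractOptions, which splits a name[key=value, ...] filter or action specification into its name and an options dict. A usage sketch of just that parsing step (Python 2, importable package assumed):

from fail2ban.client.jailreader import JailReader

name, opts = JailReader.extractOptions('substition[honeypot="<sweet>,<known/honeypot>"]')
print(name)   # 'substition'
print(opts)   # {'honeypot': '<sweet>,<known/honeypot>'}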
def testFilterReaderSubstitionFail(self): def testFilterReaderSubstitionFail(self):
filterReader = FilterReader('substition', "jailname", {'honeypot': '<sweet>', 'sweet': '<honeypot>'}) # directly subst the same var :
filterReader.setBaseDir(TEST_FILES_DIR) filterReader = FilterReader('substition', "jailname", {'honeypot': '<honeypot>'},
share_config=TEST_FILES_DIR_SHARE_CFG, basedir=TEST_FILES_DIR)
filterReader.read()
filterReader.getOptions(None)
self.assertRaises(ValueError, FilterReader.convert, filterReader)
# cross subst the same var :
filterReader = FilterReader('substition', "jailname", {'honeypot': '<sweet>', 'sweet': '<honeypot>'},
share_config=TEST_FILES_DIR_SHARE_CFG, basedir=TEST_FILES_DIR)
filterReader.read() filterReader.read()
filterReader.getOptions(None) filterReader.getOptions(None)
self.assertRaises(ValueError, FilterReader.convert, filterReader) self.assertRaises(ValueError, FilterReader.convert, filterReader)
@ -378,6 +406,7 @@ class JailsReaderTestCache(LogCaptureTestCase):
return cnt return cnt
def testTestJailConfCache(self): def testTestJailConfCache(self):
unittest.F2B.SkipIfFast()
saved_ll = configparserinc.logLevel saved_ll = configparserinc.logLevel
configparserinc.logLevel = logging.DEBUG configparserinc.logLevel = logging.DEBUG
basedir = tempfile.mkdtemp("fail2ban_conf") basedir = tempfile.mkdtemp("fail2ban_conf")
@ -420,7 +449,6 @@ class JailsReaderTest(LogCaptureTestCase):
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super(JailsReaderTest, self).__init__(*args, **kwargs) super(JailsReaderTest, self).__init__(*args, **kwargs)
self.__share_cfg = {}
def testProvidingBadBasedir(self): def testProvidingBadBasedir(self):
if not os.path.exists('/XXX'): if not os.path.exists('/XXX'):
@ -428,7 +456,7 @@ class JailsReaderTest(LogCaptureTestCase):
self.assertRaises(ValueError, reader.read) self.assertRaises(ValueError, reader.read)
def testReadTestJailConf(self): def testReadTestJailConf(self):
jails = JailsReader(basedir=IMPERFECT_CONFIG, share_config=self.__share_cfg) jails = JailsReader(basedir=IMPERFECT_CONFIG, share_config=IMPERFECT_CONFIG_SHARE_CFG)
self.assertTrue(jails.read()) self.assertTrue(jails.read())
self.assertFalse(jails.getOptions()) self.assertFalse(jails.getOptions())
self.assertRaises(ValueError, jails.convert) self.assertRaises(ValueError, jails.convert)
@ -458,8 +486,8 @@ class JailsReaderTest(LogCaptureTestCase):
['start', 'missinglogfiles'], ['start', 'missinglogfiles'],
['start', 'brokenaction'], ['start', 'brokenaction'],
['start', 'parse_to_end_of_jail.conf'],])) ['start', 'parse_to_end_of_jail.conf'],]))
self.assertTrue(self._is_logged("Errors in jail 'missingbitsjail'. Skipping...")) self.assertLogged("Errors in jail 'missingbitsjail'. Skipping...")
self.assertTrue(self._is_logged("No file(s) found for glob /weapons/of/mass/destruction")) self.assertLogged("No file(s) found for glob /weapons/of/mass/destruction")
if STOCK: if STOCK:
def testReadStockActionConf(self): def testReadStockActionConf(self):
@ -478,7 +506,7 @@ class JailsReaderTest(LogCaptureTestCase):
msg="Action file %r is lacking [Init] section" % actionConfig) msg="Action file %r is lacking [Init] section" % actionConfig)
def testReadStockJailConf(self): def testReadStockJailConf(self):
jails = JailsReader(basedir=CONFIG_DIR, share_config=self.__share_cfg) # we are running tests from root project dir atm jails = JailsReader(basedir=CONFIG_DIR, share_config=CONFIG_DIR_SHARE_CFG) # we are running tests from root project dir atm
self.assertTrue(jails.read()) # opens fine self.assertTrue(jails.read()) # opens fine
self.assertTrue(jails.getOptions()) # reads fine self.assertTrue(jails.getOptions()) # reads fine
comm_commands = jails.convert() comm_commands = jails.convert()
@ -491,7 +519,7 @@ class JailsReaderTest(LogCaptureTestCase):
#old_comm_commands = comm_commands[:] # make a copy #old_comm_commands = comm_commands[:] # make a copy
#self.assertRaises(ValueError, jails.getOptions, "BOGUS") #self.assertRaises(ValueError, jails.getOptions, "BOGUS")
#self.printLog() #self.printLog()
#self.assertTrue(self._is_logged("No section: 'BOGUS'")) #self.assertLogged("No section: 'BOGUS'")
## and there should be no side-effects ## and there should be no side-effects
#self.assertEqual(jails.convert(), old_comm_commands) #self.assertEqual(jails.convert(), old_comm_commands)
@ -503,12 +531,13 @@ class JailsReaderTest(LogCaptureTestCase):
if jail == 'INCLUDES': if jail == 'INCLUDES':
continue continue
filterName = jails.get(jail, 'filter') filterName = jails.get(jail, 'filter')
filterName, filterOpt = JailReader.extractOptions(filterName)
allFilters.add(filterName) allFilters.add(filterName)
self.assertTrue(len(filterName)) self.assertTrue(len(filterName))
# moreover we must have a file for it # moreover we must have a file for it
# and it must be readable as a Filter # and it must be readable as a Filter
filterReader = FilterReader(filterName, jail, {}) filterReader = FilterReader(filterName, jail, filterOpt,
filterReader.setBaseDir(CONFIG_DIR) share_config=CONFIG_DIR_SHARE_CFG, basedir=CONFIG_DIR)
self.assertTrue(filterReader.read(),"Failed to read filter:" + filterName) # opens fine self.assertTrue(filterReader.read(),"Failed to read filter:" + filterName) # opens fine
filterReader.getOptions({}) # reads fine filterReader.getOptions({}) # reads fine
@ -527,8 +556,8 @@ class JailsReaderTest(LogCaptureTestCase):
if actName == 'iptables-multiport': if actName == 'iptables-multiport':
self.assertTrue('port' in actOpt) self.assertTrue('port' in actOpt)
actionReader = ActionReader( actionReader = ActionReader(actName, jail, {},
actName, jail, {}, basedir=CONFIG_DIR) share_config=CONFIG_DIR_SHARE_CFG, basedir=CONFIG_DIR)
self.assertTrue(actionReader.read()) self.assertTrue(actionReader.read())
actionReader.getOptions({}) # populate _opts actionReader.getOptions({}) # populate _opts
cmds = actionReader.convert() cmds = actionReader.convert()
@ -539,14 +568,17 @@ class JailsReaderTest(LogCaptureTestCase):
# Verify that all filters found under config/ have a jail # Verify that all filters found under config/ have a jail
def testReadStockJailFilterComplete(self): def testReadStockJailFilterComplete(self):
jails = JailsReader(basedir=CONFIG_DIR, force_enable=True, share_config=self.__share_cfg) jails = JailsReader(basedir=CONFIG_DIR, force_enable=True, share_config=CONFIG_DIR_SHARE_CFG)
self.assertTrue(jails.read()) # opens fine self.assertTrue(jails.read()) # opens fine
self.assertTrue(jails.getOptions()) # reads fine self.assertTrue(jails.getOptions()) # reads fine
# grab all filter names # grab all filter names
filters = set(os.path.splitext(os.path.split(a)[1])[0] filters = set(os.path.splitext(os.path.split(a)[1])[0]
for a in glob.glob(os.path.join('config', 'filter.d', '*.conf')) for a in glob.glob(os.path.join('config', 'filter.d', '*.conf'))
if not a.endswith('common.conf')) if not a.endswith('common.conf'))
filters_jail = set(jail.options['filter'] for jail in jails.jails) # get filters of all jails (filter names without options inside filter[...])
filters_jail = set(
JailReader.extractOptions(jail.options['filter'])[0] for jail in jails.jails
)
self.maxDiff = None self.maxDiff = None
self.assertTrue(filters.issubset(filters_jail), self.assertTrue(filters.issubset(filters_jail),
"More filters exists than are referenced in stock jail.conf %r" % filters.difference(filters_jail)) "More filters exists than are referenced in stock jail.conf %r" % filters.difference(filters_jail))
@ -556,7 +588,7 @@ class JailsReaderTest(LogCaptureTestCase):
def testReadStockJailConfForceEnabled(self): def testReadStockJailConfForceEnabled(self):
# more of a smoke test to make sure that no obvious surprises # more of a smoke test to make sure that no obvious surprises
# on users' systems when enabling shipped jails # on users' systems when enabling shipped jails
jails = JailsReader(basedir=CONFIG_DIR, force_enable=True, share_config=self.__share_cfg) # we are running tests from root project dir atm jails = JailsReader(basedir=CONFIG_DIR, force_enable=True, share_config=CONFIG_DIR_SHARE_CFG) # we are running tests from root project dir atm
self.assertTrue(jails.read()) # opens fine self.assertTrue(jails.read()) # opens fine
self.assertTrue(jails.getOptions()) # reads fine self.assertTrue(jails.getOptions()) # reads fine
comm_commands = jails.convert(allow_no_files=True) comm_commands = jails.convert(allow_no_files=True)
@ -617,6 +649,22 @@ class JailsReaderTest(LogCaptureTestCase):
configurator.getOptions() configurator.getOptions()
configurator.convertToProtocol() configurator.convertToProtocol()
commands = configurator.getConfigStream() commands = configurator.getConfigStream()
# verify that dbfile comes before dbpurgeage
def find_set(option):
for i, e in enumerate(commands):
if e[0] == 'set' and e[1] == option:
return i
raise ValueError("Did not find command 'set %s' among commands %s"
% (option, commands))
# Set up of logging should come first
self.assertEqual(find_set('syslogsocket'), 0)
self.assertEqual(find_set('loglevel'), 1)
self.assertEqual(find_set('logtarget'), 2)
# then dbfile should be before dbpurgeage
self.assertTrue(find_set('dbpurgeage') > find_set('dbfile'))
# and there is logging information left to be passed into the # and there is logging information left to be passed into the
# server # server
self.assertEqual(sorted(commands), self.assertEqual(sorted(commands),
@ -651,7 +699,7 @@ action = testaction1[actname=test1]
filter = testfilter1 filter = testfilter1
""") """)
jailfd.close() jailfd.close()
jails = JailsReader(basedir=basedir, share_config=self.__share_cfg) jails = JailsReader(basedir=basedir, share_config={})
self.assertTrue(jails.read()) self.assertTrue(jails.read())
self.assertTrue(jails.getOptions()) self.assertTrue(jails.getOptions())
comm_commands = jails.convert(allow_no_files=True) comm_commands = jails.convert(allow_no_files=True)
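
The hunks above route each jail's `filter` and `action` value through `JailReader.extractOptions`, so a value such as `sshd[mode=aggressive]` or `testaction1[actname=test1]` is split into a plain name plus an option dict before the corresponding FilterReader/ActionReader is constructed. A rough standalone sketch of that kind of splitting (not the actual fail2ban implementation, which also handles quoting and escaping) could look like this:

import re

def extract_options(spec):
    # "sshd[mode=aggressive, port=2222]" -> ("sshd", {"mode": "aggressive", "port": "2222"})
    m = re.match(r"^\s*([\w\-.]+)\s*(?:\[(.*)\])?\s*$", spec)
    if not m:
        raise ValueError("invalid filter/action specification: %r" % spec)
    name, optstr = m.group(1), m.group(2) or ""
    opts = {}
    for part in (p.strip() for p in optstr.split(",") if p.strip()):
        key, _, value = part.partition("=")
        opts[key.strip()] = value.strip()
    return name, opts

print(extract_options("testaction1[actname=test1]"))  # ('testaction1', {'actname': 'test1'})
print(extract_options("sshd"))                        # ('sshd', {})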

View File

@ -35,14 +35,21 @@ from ..server.ticket import FailTicket
from ..server.actions import Actions from ..server.actions import Actions
from .dummyjail import DummyJail from .dummyjail import DummyJail
try: try:
from ..server.database import Fail2BanDb from ..server.database import Fail2BanDb as Fail2BanDb
except ImportError: except ImportError: # pragma: no cover
Fail2BanDb = None Fail2BanDb = None
from .utils import LogCaptureTestCase from .utils import LogCaptureTestCase
TEST_FILES_DIR = os.path.join(os.path.dirname(__file__), "files") TEST_FILES_DIR = os.path.join(os.path.dirname(__file__), "files")
# for test performance, use an in-memory database instead of a file:
def getFail2BanDb(filename):
if unittest.F2B.memory_db: # pragma: no cover
return Fail2BanDb(':memory:')
return Fail2BanDb(filename)
class DatabaseTest(LogCaptureTestCase): class DatabaseTest(LogCaptureTestCase):
def setUp(self): def setUp(self):
@ -54,8 +61,10 @@ class DatabaseTest(LogCaptureTestCase):
"available.") "available.")
elif Fail2BanDb is None: elif Fail2BanDb is None:
return return
_, self.dbFilename = tempfile.mkstemp(".db", "fail2ban_") self.dbFilename = None
self.db = Fail2BanDb(self.dbFilename) if not unittest.F2B.memory_db:
_, self.dbFilename = tempfile.mkstemp(".db", "fail2ban_")
self.db = getFail2BanDb(self.dbFilename)
def tearDown(self): def tearDown(self):
"""Call after every test case.""" """Call after every test case."""
@ -63,10 +72,11 @@ class DatabaseTest(LogCaptureTestCase):
if Fail2BanDb is None: # pragma: no cover if Fail2BanDb is None: # pragma: no cover
return return
# Cleanup # Cleanup
os.remove(self.dbFilename) if self.dbFilename is not None:
os.remove(self.dbFilename)
def testGetFilename(self): def testGetFilename(self):
if Fail2BanDb is None: # pragma: no cover if Fail2BanDb is None or self.db.filename == ':memory:': # pragma: no cover
return return
self.assertEqual(self.dbFilename, self.db.filename) self.assertEqual(self.dbFilename, self.db.filename)
@ -88,7 +98,7 @@ class DatabaseTest(LogCaptureTestCase):
"/this/path/should/not/exist") "/this/path/should/not/exist")
def testCreateAndReconnect(self): def testCreateAndReconnect(self):
if Fail2BanDb is None: # pragma: no cover if Fail2BanDb is None or self.db.filename == ':memory:': # pragma: no cover
return return
self.testAddJail() self.testAddJail()
# Reconnect... # Reconnect...
@ -101,6 +111,9 @@ class DatabaseTest(LogCaptureTestCase):
def testUpdateDb(self): def testUpdateDb(self):
if Fail2BanDb is None: # pragma: no cover if Fail2BanDb is None: # pragma: no cover
return return
self.db = None
if self.dbFilename is None: # pragma: no cover
_, self.dbFilename = tempfile.mkstemp(".db", "fail2ban_")
shutil.copyfile( shutil.copyfile(
os.path.join(TEST_FILES_DIR, 'database_v1.db'), self.dbFilename) os.path.join(TEST_FILES_DIR, 'database_v1.db'), self.dbFilename)
self.db = Fail2BanDb(self.dbFilename) self.db = Fail2BanDb(self.dbFilename)
@ -146,7 +159,7 @@ class DatabaseTest(LogCaptureTestCase):
self.jail = DummyJail() self.jail = DummyJail()
self.db.addJail(self.jail) self.db.addJail(self.jail)
self.assertTrue( self.assertTrue(
self.jail.name in self.db.getJailNames(), self.jail.name in self.db.getJailNames(True),
"Jail not added to database") "Jail not added to database")
def testAddLog(self): def testAddLog(self):
@ -265,6 +278,37 @@ class DatabaseTest(LogCaptureTestCase):
# be returned # be returned
self.assertEqual(len(self.db.getBans(jail=self.jail,bantime=-1)), 2) self.assertEqual(len(self.db.getBans(jail=self.jail,bantime=-1)), 2)
def testGetBansMerged_MaxEntries(self):
if Fail2BanDb is None: # pragma: no cover
return
self.testAddJail()
maxEntries = 2
failures = ["abc\n", "123\n", "ABC\n", "1234\n"]
# add failures sequentially:
i = 80
for f in failures:
i -= 10
ticket = FailTicket("127.0.0.1", MyTime.time() - i, [f])
ticket.setAttempt(1)
self.db.addBan(self.jail, ticket)
# should retrieve 2 matches only, but count of all attempts:
self.db.maxEntries = maxEntries;
ticket = self.db.getBansMerged("127.0.0.1")
self.assertEqual(ticket.getIP(), "127.0.0.1")
self.assertEqual(ticket.getAttempt(), len(failures))
self.assertEqual(len(ticket.getMatches()), maxEntries)
self.assertEqual(ticket.getMatches(), failures[len(failures) - maxEntries:])
# add more failures at once:
ticket = FailTicket("127.0.0.1", MyTime.time() - 10, failures)
ticket.setAttempt(len(failures))
self.db.addBan(self.jail, ticket)
# should retrieve 2 matches only, but count of all attempts:
self.db.maxEntries = maxEntries;
ticket = self.db.getBansMerged("127.0.0.1")
self.assertEqual(ticket.getAttempt(), 2 * len(failures))
self.assertEqual(len(ticket.getMatches()), maxEntries)
self.assertEqual(ticket.getMatches(), failures[len(failures) - maxEntries:])
def testGetBansMerged(self): def testGetBansMerged(self):
if Fail2BanDb is None: # pragma: no cover if Fail2BanDb is None: # pragma: no cover
return return
@ -353,7 +397,26 @@ class DatabaseTest(LogCaptureTestCase):
ticket.setMatches(['test', 'test']) ticket.setMatches(['test', 'test'])
self.jail.putFailTicket(ticket) self.jail.putFailTicket(ticket)
actions._Actions__checkBan() actions._Actions__checkBan()
self.assertTrue(self._is_logged("ban ainfo %s, %s, %s, %s" % (True, True, True, True))) self.assertLogged("ban ainfo %s, %s, %s, %s" % (True, True, True, True))
def testDelAndAddJail(self):
self.testAddJail() # Add jail
# Delete jail (just disabled it):
self.db.delJail(self.jail)
jails = self.db.getJailNames()
self.assertTrue(len(jails) == 1 and self.jail.name in jails)
jails = self.db.getJailNames(enabled=False)
self.assertTrue(len(jails) == 1 and self.jail.name in jails)
jails = self.db.getJailNames(enabled=True)
self.assertTrue(len(jails) == 0)
# Add it again - should just enable it:
self.db.addJail(self.jail)
jails = self.db.getJailNames()
self.assertTrue(len(jails) == 1 and self.jail.name in jails)
jails = self.db.getJailNames(enabled=True)
self.assertTrue(len(jails) == 1 and self.jail.name in jails)
jails = self.db.getJailNames(enabled=False)
self.assertTrue(len(jails) == 0)
def testPurge(self): def testPurge(self):
if Fail2BanDb is None: # pragma: no cover if Fail2BanDb is None: # pragma: no cover
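
The database-test changes above let the suite run against an in-memory SQLite database (":memory:") instead of a temporary file, which is also why testGetFilename and testCreateAndReconnect now skip themselves when no real file backs the connection. The sketch below only illustrates the underlying sqlite3 behaviour; getFail2BanDb and unittest.F2B.memory_db are fail2ban test helpers, and USE_MEMORY_DB here is just a stand-in flag:

import os
import sqlite3
import tempfile

USE_MEMORY_DB = True  # stand-in for the tests' unittest.F2B.memory_db switch

def open_test_db():
    if USE_MEMORY_DB:
        return sqlite3.connect(":memory:"), None     # nothing to clean up afterwards
    fd, path = tempfile.mkstemp(".db", "fail2ban_")
    os.close(fd)
    return sqlite3.connect(path), path               # caller removes the file in tearDown

conn, path = open_test_db()
conn.execute("CREATE TABLE jails (name TEXT PRIMARY KEY, enabled INTEGER)")
conn.execute("INSERT INTO jails VALUES (?, ?)", ("sshd", 1))
print(conn.execute("SELECT name FROM jails WHERE enabled = 1").fetchall())  # [('sshd',)]
conn.close()
if path is not None:
    os.remove(path)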

View File

@ -36,6 +36,7 @@ from ..helpers import getLogger
logSys = getLogger("fail2ban") logSys = getLogger("fail2ban")
class DateDetectorTest(LogCaptureTestCase): class DateDetectorTest(LogCaptureTestCase):
def setUp(self): def setUp(self):
@ -82,6 +83,7 @@ class DateDetectorTest(LogCaptureTestCase):
(False, "Jan 23 21:59:59"), (False, "Jan 23 21:59:59"),
(False, "Sun Jan 23 21:59:59 2005"), (False, "Sun Jan 23 21:59:59 2005"),
(False, "Sun Jan 23 21:59:59"), (False, "Sun Jan 23 21:59:59"),
(False, "Sun Jan 23 2005 21:59:59"),
(False, "2005/01/23 21:59:59"), (False, "2005/01/23 21:59:59"),
(False, "2005.01.23 21:59:59"), (False, "2005.01.23 21:59:59"),
(False, "23/01/2005 21:59:59"), (False, "23/01/2005 21:59:59"),
@ -141,13 +143,6 @@ class DateDetectorTest(LogCaptureTestCase):
else: else:
self.assertEqual(logtime, None, "getTime should have not matched for %r Got: %s" % (sdate, logtime)) self.assertEqual(logtime, None, "getTime should have not matched for %r Got: %s" % (sdate, logtime))
def testStableSortTemplate(self):
old_names = [x.name for x in self.__datedetector.templates]
self.__datedetector.sortTemplate()
# If there were no hits -- sorting should not change the order
for old_name, n in zip(old_names, self.__datedetector.templates):
self.assertEqual(old_name, n.name) # "Sort must be stable"
def testAllUniqueTemplateNames(self): def testAllUniqueTemplateNames(self):
self.assertRaises(ValueError, self.__datedetector.appendTemplate, self.assertRaises(ValueError, self.__datedetector.appendTemplate,
self.__datedetector.templates[0]) self.__datedetector.templates[0])
@ -162,13 +157,11 @@ class DateDetectorTest(LogCaptureTestCase):
( logTime, logMatch ) = logdate ( logTime, logMatch ) = logdate
self.assertEqual(logTime, mu) self.assertEqual(logTime, mu)
self.assertEqual(logMatch.group(), '2012/10/11 02:37:17') self.assertEqual(logMatch.group(), '2012/10/11 02:37:17')
self.__datedetector.sortTemplate()
# confuse it with year being at the end # confuse it with year being at the end
for i in xrange(10): for i in xrange(10):
( logTime, logMatch ) = self.__datedetector.getTime('11/10/2012 02:37:17 [error] 18434#0') ( logTime, logMatch ) = self.__datedetector.getTime('11/10/2012 02:37:17 [error] 18434#0')
self.assertEqual(logTime, mu) self.assertEqual(logTime, mu)
self.assertEqual(logMatch.group(), '11/10/2012 02:37:17') self.assertEqual(logMatch.group(), '11/10/2012 02:37:17')
self.__datedetector.sortTemplate()
# and now back to the original # and now back to the original
( logTime, logMatch ) = self.__datedetector.getTime('2012/10/11 02:37:17 [error] 18434#0') ( logTime, logMatch ) = self.__datedetector.getTime('2012/10/11 02:37:17 [error] 18434#0')
self.assertEqual(logTime, mu) self.assertEqual(logTime, mu)
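
The DateDetector tests above drop the explicit sortTemplate() calls, but the idea being tested stays the same: the detector tries its date templates against a line and takes the matching one, so "2012/10/11 02:37:17" and "11/10/2012 02:37:17" resolve to the same timestamp. A much simplified, strptime-based sketch of that matching loop (the real DateDetector works with regex templates and reorders them by hit count internally):

import calendar
import time

TEMPLATES = [
    "%Y/%m/%d %H:%M:%S",   # 2012/10/11 02:37:17
    "%d/%m/%Y %H:%M:%S",   # 11/10/2012 02:37:17 (year at the end)
]

def get_time(line):
    head = line[:19]       # both formats above are 19 characters long
    for fmt in TEMPLATES:
        try:
            return calendar.timegm(time.strptime(head, fmt)), fmt
        except ValueError:
            continue
    return None

print(get_time("2012/10/11 02:37:17 [error] 18434#0"))
print(get_time("11/10/2012 02:37:17 [error] 18434#0"))  # same timestamp, other template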

View File

@ -33,7 +33,7 @@ class DummyActions(Actions):
return self._Actions__checkBan() return self._Actions__checkBan()
class DummyJail(Jail, object): class DummyJail(Jail):
"""A simple 'jail' to suck in all the tickets generated by Filter's """A simple 'jail' to suck in all the tickets generated by Filter's
""" """
def __init__(self, backend=None): def __init__(self, backend=None):
@ -44,28 +44,27 @@ class DummyJail(Jail, object):
self.__actions = DummyActions(self) self.__actions = DummyActions(self)
def __len__(self): def __len__(self):
try: with self.lock:
self.lock.acquire()
return len(self.queue) return len(self.queue)
finally:
self.lock.release() def isEmpty(self):
with self.lock:
return not self.queue
def isFilled(self):
with self.lock:
return bool(self.queue)
def putFailTicket(self, ticket): def putFailTicket(self, ticket):
try: with self.lock:
self.lock.acquire()
self.queue.append(ticket) self.queue.append(ticket)
finally:
self.lock.release()
def getFailTicket(self): def getFailTicket(self):
try: with self.lock:
self.lock.acquire()
try: try:
return self.queue.pop() return self.queue.pop()
except IndexError: except IndexError:
return False return False
finally:
self.lock.release()
@property @property
def name(self): def name(self):
@ -91,5 +90,5 @@ class DummyJail(Jail, object):
def actions(self): def actions(self):
return self.__actions; return self.__actions;
def is_alive(self): def isAlive(self):
return True; return True;
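
The DummyJail rewrite above replaces the manual lock.acquire()/lock.release() pairs with the lock's context-manager form, which cannot forget the release on an early return or exception, and adds the isEmpty/isFilled helpers that the filter tests below poll on. A minimal standalone illustration of the same pattern:

import threading

class TicketQueue(object):
    def __init__(self):
        self.lock = threading.Lock()
        self.queue = []

    def put(self, ticket):
        with self.lock:              # acquired here, released when the block exits
            self.queue.append(ticket)

    def pop(self):
        with self.lock:
            try:
                return self.queue.pop()
            except IndexError:
                return False

    def is_empty(self):
        with self.lock:
            return not self.queue

q = TicketQueue()
q.put("ticket-1")
print(q.pop())       # 'ticket-1'
print(q.is_empty())  # True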

View File

@ -0,0 +1,181 @@
# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-
# vi: set ft=python sts=4 ts=4 sw=4 noet :
# This file is part of Fail2Ban.
#
# Fail2Ban is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# Fail2Ban is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Fail2Ban; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
# Fail2Ban developers
__author__ = "Serg Brester"
__copyright__ = "Copyright (c) 2015 Serg G. Brester (sebres), 2008- Fail2Ban Contributors"
__license__ = "GPL"
from __builtin__ import open as fopen
import unittest
import getpass
import os
import sys
import time
import tempfile
import uuid
try:
from systemd import journal
except ImportError:
journal = None
from ..client import fail2banregex
from ..client.fail2banregex import Fail2banRegex, get_opt_parser, output
from .utils import LogCaptureTestCase, logSys
fail2banregex.logSys = logSys
def _test_output(*args):
logSys.info(args[0])
fail2banregex.output = _test_output
CONF_FILES_DIR = os.path.abspath(
os.path.join(os.path.dirname(__file__),"..", "..", "config"))
TEST_FILES_DIR = os.path.join(os.path.dirname(__file__), "files")
def _Fail2banRegex(*args):
parser = get_opt_parser()
(opts, args) = parser.parse_args(list(args))
return (opts, args, Fail2banRegex(opts))
class Fail2banRegexTest(LogCaptureTestCase):
RE_00 = r"(?:(?:Authentication failure|Failed [-/\w+]+) for(?: [iI](?:llegal|nvalid) user)?|[Ii](?:llegal|nvalid) user|ROOT LOGIN REFUSED) .*(?: from|FROM) <HOST>"
FILENAME_01 = os.path.join(TEST_FILES_DIR, "testcase01.log")
FILENAME_02 = os.path.join(TEST_FILES_DIR, "testcase02.log")
FILENAME_WRONGCHAR = os.path.join(TEST_FILES_DIR, "testcase-wrong-char.log")
FILTER_SSHD = os.path.join(CONF_FILES_DIR, 'filter.d', 'sshd.conf')
def setUp(self):
"""Call before every test case."""
LogCaptureTestCase.setUp(self)
def tearDown(self):
"""Call after every test case."""
LogCaptureTestCase.tearDown(self)
def testWrongRE(self):
(opts, args, fail2banRegex) = _Fail2banRegex(
"test", r".** from <HOST>$"
)
self.assertRaises(Exception, lambda: fail2banRegex.start(opts, args))
self.assertLogged("Unable to compile regular expression")
def testWrongIgnoreRE(self):
(opts, args, fail2banRegex) = _Fail2banRegex(
"test", r".*? from <HOST>$", r".**"
)
self.assertRaises(Exception, lambda: fail2banRegex.start(opts, args))
self.assertLogged("Unable to compile regular expression")
def testDirectFound(self):
(opts, args, fail2banRegex) = _Fail2banRegex(
"--print-all-matched", "--print-no-missed",
"Dec 31 11:59:59 [sshd] error: PAM: Authentication failure for kevin from 192.0.2.0",
r"Authentication failure for .*? from <HOST>$"
)
self.assertTrue(fail2banRegex.start(opts, args))
self.assertLogged('Lines: 1 lines, 0 ignored, 1 matched, 0 missed')
def testDirectNotFound(self):
(opts, args, fail2banRegex) = _Fail2banRegex(
"--print-all-missed",
"Dec 31 11:59:59 [sshd] error: PAM: Authentication failure for kevin from 192.0.2.0",
r"XYZ from <HOST>$"
)
self.assertTrue(fail2banRegex.start(opts, args))
self.assertLogged('Lines: 1 lines, 0 ignored, 0 matched, 1 missed')
def testDirectIgnored(self):
(opts, args, fail2banRegex) = _Fail2banRegex(
"--print-all-ignored",
"Dec 31 11:59:59 [sshd] error: PAM: Authentication failure for kevin from 192.0.2.0",
r"Authentication failure for .*? from <HOST>$",
r"kevin from 192.0.2.0$"
)
self.assertTrue(fail2banRegex.start(opts, args))
self.assertLogged('Lines: 1 lines, 1 ignored, 0 matched, 0 missed')
def testDirectRE_1(self):
(opts, args, fail2banRegex) = _Fail2banRegex(
"--print-all-matched",
Fail2banRegexTest.FILENAME_01,
Fail2banRegexTest.RE_00
)
self.assertTrue(fail2banRegex.start(opts, args))
self.assertLogged('Lines: 19 lines, 0 ignored, 13 matched, 6 missed')
self.assertLogged('Error decoding line');
self.assertLogged('Continuing to process line ignoring invalid characters')
self.assertLogged('Dez 31 11:59:59 [sshd] error: PAM: Authentication failure for kevin from 193.168.0.128')
self.assertLogged('Dec 31 11:59:59 [sshd] error: PAM: Authentication failure for kevin from 87.142.124.10')
def testDirectRE_2(self):
(opts, args, fail2banRegex) = _Fail2banRegex(
"--print-all-matched",
Fail2banRegexTest.FILENAME_02,
Fail2banRegexTest.RE_00
)
self.assertTrue(fail2banRegex.start(opts, args))
self.assertLogged('Lines: 13 lines, 0 ignored, 5 matched, 8 missed')
def testVerbose(self):
(opts, args, fail2banRegex) = _Fail2banRegex(
"--verbose", "--print-no-missed",
Fail2banRegexTest.FILENAME_02,
Fail2banRegexTest.RE_00
)
self.assertTrue(fail2banRegex.start(opts, args))
self.assertLogged('Lines: 13 lines, 0 ignored, 5 matched, 8 missed')
self.assertLogged('141.3.81.106 Fri Aug 14 11:53:59 2015')
self.assertLogged('141.3.81.106 Fri Aug 14 11:54:59 2015')
def testWrongChar(self):
(opts, args, fail2banRegex) = _Fail2banRegex(
Fail2banRegexTest.FILENAME_WRONGCHAR, Fail2banRegexTest.FILTER_SSHD
)
self.assertTrue(fail2banRegex.start(opts, args))
self.assertLogged('Lines: 4 lines, 0 ignored, 2 matched, 2 missed')
self.assertLogged('Error decoding line');
self.assertLogged('Continuing to process line ignoring invalid characters:', '2015-01-14 20:00:58 user ');
self.assertLogged('Continuing to process line ignoring invalid characters:', '2015-01-14 20:00:59 user ');
self.assertLogged('Nov 8 00:16:12 main sshd[32548]: input_userauth_request: invalid user llinco')
self.assertLogged('Nov 8 00:16:12 main sshd[32547]: pam_succeed_if(sshd:auth): error retrieving information about user llinco')
def testWrongCharDebuggex(self):
(opts, args, fail2banRegex) = _Fail2banRegex(
"--debuggex", "--print-all-matched",
Fail2banRegexTest.FILENAME_WRONGCHAR, Fail2banRegexTest.FILTER_SSHD
)
self.assertTrue(fail2banRegex.start(opts, args))
self.assertLogged('Lines: 4 lines, 0 ignored, 2 matched, 2 missed')
self.assertLogged('http://')
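
The new Fail2banRegexTest module above captures the tool's console output by reassigning the module-level output (and logSys) attributes of fail2banregex to test doubles, so assertLogged can check what would normally be printed. A generic sketch of that monkeypatching pattern, with purely illustrative names:

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("capture")

def output(msg):       # stand-in for a module-level print helper
    print(msg)

def run_tool():        # stand-in for the code under test
    output("Lines: 1 lines, 0 ignored, 1 matched, 0 missed")

def _captured(msg):    # test double, analogous to _test_output above
    log.info(msg)

_orig_output = output
output = _captured     # redirect the output into the logger
try:
    run_tool()         # the summary line now lands in the captured log
finally:
    output = _orig_output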

View File

@ -26,6 +26,7 @@ __license__ = "GPL"
import unittest import unittest
from ..server import failmanager
from ..server.failmanager import FailManager, FailManagerEmpty from ..server.failmanager import FailManager, FailManagerEmpty
from ..server.ticket import FailTicket from ..server.ticket import FailTicket
@ -34,6 +35,13 @@ class AddFailure(unittest.TestCase):
def setUp(self): def setUp(self):
"""Call before every test case.""" """Call before every test case."""
self.__items = None
self.__failManager = FailManager()
def tearDown(self):
"""Call after every test case."""
def _addDefItems(self):
self.__items = [[u'193.168.0.128', 1167605999.0], self.__items = [[u'193.168.0.128', 1167605999.0],
[u'193.168.0.128', 1167605999.0], [u'193.168.0.128', 1167605999.0],
[u'193.168.0.128', 1167605999.0], [u'193.168.0.128', 1167605999.0],
@ -47,44 +55,87 @@ class AddFailure(unittest.TestCase):
['100.100.10.10', 1000001000.0], ['100.100.10.10', 1000001000.0],
['100.100.10.10', 1000001500.0], ['100.100.10.10', 1000001500.0],
['100.100.10.10', 1000002000.0]] ['100.100.10.10', 1000002000.0]]
self.__failManager = FailManager()
for i in self.__items: for i in self.__items:
self.__failManager.addFailure(FailTicket(i[0], i[1])) self.__failManager.addFailure(FailTicket(i[0], i[1]))
def tearDown(self):
"""Call after every test case."""
def testFailManagerAdd(self): def testFailManagerAdd(self):
self._addDefItems()
self.assertEqual(self.__failManager.size(), 3) self.assertEqual(self.__failManager.size(), 3)
self.assertEqual(self.__failManager.getFailTotal(), 13) self.assertEqual(self.__failManager.getFailTotal(), 13)
self.__failManager.setFailTotal(0) self.__failManager.setFailTotal(0)
self.assertEqual(self.__failManager.getFailTotal(), 0) self.assertEqual(self.__failManager.getFailTotal(), 0)
self.__failManager.setFailTotal(13) self.__failManager.setFailTotal(13)
def testFailManagerAdd_MaxEntries(self):
maxEntries = 2
self.__failManager.maxEntries = maxEntries
failures = ["abc\n", "123\n", "ABC\n", "1234\n"]
# add failures sequentially:
i = 80
for f in failures:
i -= 10
ticket = FailTicket("127.0.0.1", 1000002000 - i, [f])
ticket.setAttempt(1)
self.__failManager.addFailure(ticket)
#
manFailList = self.__failManager._FailManager__failList
self.assertEqual(len(manFailList), 1)
ticket = manFailList["127.0.0.1"]
# should retrieve 2 matches only, but count of all attempts (4):
self.assertEqual(ticket.getAttempt(), len(failures))
self.assertEqual(len(ticket.getMatches()), maxEntries)
self.assertEqual(ticket.getMatches(), failures[len(failures) - maxEntries:])
# add more failures at once:
ticket = FailTicket("127.0.0.1", 1000002000 - 10, failures)
ticket.setAttempt(len(failures))
self.__failManager.addFailure(ticket)
#
manFailList = self.__failManager._FailManager__failList
self.assertEqual(len(manFailList), 1)
ticket = manFailList["127.0.0.1"]
# should retrieve 2 matches only, but count of all attempts (8):
self.assertEqual(ticket.getAttempt(), 2 * len(failures))
self.assertEqual(len(ticket.getMatches()), maxEntries)
self.assertEqual(ticket.getMatches(), failures[len(failures) - maxEntries:])
# add the same ticket again:
self.__failManager.addFailure(ticket)
#
manFailList = self.__failManager._FailManager__failList
self.assertEqual(len(manFailList), 1)
ticket = manFailList["127.0.0.1"]
# same matches, but +1 attempt (9)
self.assertEqual(ticket.getAttempt(), 2 * len(failures) + 1)
self.assertEqual(len(ticket.getMatches()), maxEntries)
self.assertEqual(ticket.getMatches(), failures[len(failures) - maxEntries:])
def testFailManagerMaxTime(self): def testFailManagerMaxTime(self):
self._addDefItems()
self.assertEqual(self.__failManager.getMaxTime(), 600) self.assertEqual(self.__failManager.getMaxTime(), 600)
self.__failManager.setMaxTime(13) self.__failManager.setMaxTime(13)
self.assertEqual(self.__failManager.getMaxTime(), 13) self.assertEqual(self.__failManager.getMaxTime(), 13)
self.__failManager.setMaxTime(600) self.__failManager.setMaxTime(600)
def _testDel(self): def testDel(self):
self._addDefItems()
self.__failManager.delFailure('193.168.0.128') self.__failManager.delFailure('193.168.0.128')
self.__failManager.delFailure('111.111.1.111') self.__failManager.delFailure('111.111.1.111')
self.assertEqual(self.__failManager.size(), 1) self.assertEqual(self.__failManager.size(), 2)
def testCleanupOK(self): def testCleanupOK(self):
self._addDefItems()
timestamp = 1167606999.0 timestamp = 1167606999.0
self.__failManager.cleanup(timestamp) self.__failManager.cleanup(timestamp)
self.assertEqual(self.__failManager.size(), 0) self.assertEqual(self.__failManager.size(), 0)
def testCleanupNOK(self): def testCleanupNOK(self):
self._addDefItems()
timestamp = 1167605990.0 timestamp = 1167605990.0
self.__failManager.cleanup(timestamp) self.__failManager.cleanup(timestamp)
self.assertEqual(self.__failManager.size(), 2) self.assertEqual(self.__failManager.size(), 2)
def testbanOK(self): def testbanOK(self):
self._addDefItems()
self.__failManager.setMaxRetry(5) self.__failManager.setMaxRetry(5)
#ticket = FailTicket('193.168.0.128', None) #ticket = FailTicket('193.168.0.128', None)
ticket = self.__failManager.toBan() ticket = self.__failManager.toBan()
@ -111,12 +162,90 @@ class AddFailure(unittest.TestCase):
'FailTicket: ip=193.168.0.128 time=1000002000.0 bantime=None bancount=0 #attempts=5 matches=[]') 'FailTicket: ip=193.168.0.128 time=1000002000.0 bantime=None bancount=0 #attempts=5 matches=[]')
def testbanNOK(self): def testbanNOK(self):
self._addDefItems()
self.__failManager.setMaxRetry(10) self.__failManager.setMaxRetry(10)
self.assertRaises(FailManagerEmpty, self.__failManager.toBan) self.assertRaises(FailManagerEmpty, self.__failManager.toBan)
def testWindow(self): def testWindow(self):
self._addDefItems()
ticket = self.__failManager.toBan() ticket = self.__failManager.toBan()
self.assertNotEqual(ticket.getIP(), "100.100.10.10") self.assertNotEqual(ticket.getIP(), "100.100.10.10")
ticket = self.__failManager.toBan() ticket = self.__failManager.toBan()
self.assertNotEqual(ticket.getIP(), "100.100.10.10") self.assertNotEqual(ticket.getIP(), "100.100.10.10")
self.assertRaises(FailManagerEmpty, self.__failManager.toBan) self.assertRaises(FailManagerEmpty, self.__failManager.toBan)
def testBgService(self):
bgSvc = self.__failManager._FailManager__bgSvc
failManager2nd = FailManager()
# test singleton (same object):
bgSvc2 = failManager2nd._FailManager__bgSvc
self.assertTrue(id(bgSvc) == id(bgSvc2))
bgSvc2 = None
# test service :
self.assertTrue(bgSvc.service(True, True))
self.assertFalse(bgSvc.service())
# bypass threshold and time:
for i in range(1, bgSvc._BgService__threshold):
self.assertFalse(bgSvc.service())
# bypass time check:
bgSvc._BgService__serviceTime = -0x7fffffff
self.assertTrue(bgSvc.service())
# bypass threshold and time:
bgSvc._BgService__serviceTime = -0x7fffffff
for i in range(1, bgSvc._BgService__threshold):
self.assertFalse(bgSvc.service())
self.assertTrue(bgSvc.service(False, True))
self.assertFalse(bgSvc.service(False, True))
class FailmanagerComplex(unittest.TestCase):
def setUp(self):
"""Call before every test case."""
super(FailmanagerComplex, self).setUp()
self.__failManager = FailManager()
# lower the logging level for all these tests, because of the extremely large failure count (several GB on heavydebug)
self.__saved_ll = failmanager.logLevel
failmanager.logLevel = 3
def tearDown(self):
super(FailmanagerComplex, self).tearDown()
# restore level
failmanager.logLevel = self.__saved_ll
@staticmethod
def _ip_range(maxips):
class _ip(list):
def __str__(self):
return '.'.join(map(str, self))
def __repr__(self):
return str(self)
def __key__(self):
return str(self)
def __hash__(self):
#return (int)(struct.unpack('I', struct.pack("BBBB",*self))[0])
return (int)(self[0] << 24 | self[1] << 16 | self[2] << 8 | self[3])
i = 0
c = [127,0,0,0]
while i < maxips:
for n in range(3,0,-1):
if c[n] < 255:
c[n] += 1
break
c[n] = 0
yield (i, _ip(c))
i += 1
def testCheckIPGenerator(self):
for i, ip in self._ip_range(65536 if not unittest.F2B.fast else 1000):
if i == 254:
self.assertEqual(str(ip), '127.0.0.255')
elif i == 255:
self.assertEqual(str(ip), '127.0.1.0')
elif i == 1000:
self.assertEqual(str(ip), '127.0.3.233')
elif i == 65534:
self.assertEqual(str(ip), '127.0.255.255')
elif i == 65535:
self.assertEqual(str(ip), '127.1.0.0')
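
The new FailManager tests above pin down the maxEntries behaviour: attempts keep accumulating per IP, but only the newest maxEntries matched log lines are retained on the merged ticket (the database's getBansMerged test earlier checks the same contract). A standalone sketch of that bookkeeping; the real FailTicket/FailManager also track times, ban counts and more:

class MergedFailure(object):
    def __init__(self, ip, max_entries=2):
        self.ip = ip
        self.max_entries = max_entries   # assumed >= 1
        self.attempts = 0
        self.matches = []

    def add(self, new_matches, attempt_count=1):
        self.attempts += attempt_count
        self.matches.extend(new_matches)
        del self.matches[:-self.max_entries]   # keep only the newest max_entries lines

failures = ["abc\n", "123\n", "ABC\n", "1234\n"]
t = MergedFailure("127.0.0.1", max_entries=2)
for line in failures:                          # add failures one by one
    t.add([line])
print(t.attempts, t.matches)                   # 4 ['ABC\n', '1234\n']
t.add(failures, attempt_count=len(failures))   # and once more in bulk
print(t.attempts, t.matches)                   # 8 ['ABC\n', '1234\n']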

View File

@ -0,0 +1,2 @@
# failJSON: { "time": "2013-06-27T11:55:44", "match": true , "host": "192.0.2.12" }
192.0.2.12 - user1 [27/Jun/2013:11:55:44] "GET /knocking/ HTTP/1.1" 200 266 "http://domain.net/hello-world/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:40.0) Gecko/20100101 Firefox/40.0"

View File

@ -0,0 +1,5 @@
# failJSON: { "time": "2015-11-29T16:38:01", "match": true , "host": "192.168.0.1" }
<W>2015-11-29 16:38:01.818 1 => <4:testUsernameOne(-1)> Rejected connection from 192.168.0.1:29530: Invalid server password
# failJSON: { "time": "2015-11-29T17:18:20", "match": true , "host": "192.168.1.2" }
<W>2015-11-29 17:18:20.962 1 => <8:testUsernameTwo(-1)> Rejected connection from 192.168.1.2:29761: Wrong certificate or password for existing user

View File

@ -15,3 +15,5 @@ Sep 16 21:30:26 catinthehat mysqld: 130916 21:30:26 [Warning] Access denied for
# failJSON: { "time": "2004-09-16T21:30:32", "match": true , "host": "74.207.241.159" } # failJSON: { "time": "2004-09-16T21:30:32", "match": true , "host": "74.207.241.159" }
Sep 16 21:30:32 catinthehat mysqld: 130916 21:30:32 [Warning] Access denied for user 'hacker'@'74.207.241.159' (using password: NO) Sep 16 21:30:32 catinthehat mysqld: 130916 21:30:32 [Warning] Access denied for user 'hacker'@'74.207.241.159' (using password: NO)
# failJSON: { "time": "2015-10-07T06:09:42", "match": true , "host": "127.0.0.1", "desc": "mysql 5.6 log format" }
2015-10-07 06:09:42 5907 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)

View File

@ -0,0 +1,6 @@
# failJSON: { "time": "2015-10-29T20:01:02", "match": true , "host": "1.2.3.4" }
2015/10/29 20:01:02 [error] 256554#0: *99927 limiting requests, excess: 1.852 by zone "one", client: 1.2.3.4, server: example.com, request: "POST /index.htm HTTP/1.0", host: "exmaple.com"
# failJSON: { "time": "2015-10-29T19:24:05", "match": true , "host": "192.0.2.0" }
2015/10/29 19:24:05 [error] 12684#12684: *22174 limiting requests, excess: 1.495 by zone "one", client: 192.0.2.0, server: example.com, request: "GET /index.php HTTP/1.1", host: "example.com", referrer: "https://example.com"

View File

@ -0,0 +1,11 @@
# should match
# failJSON: { "time": "2015-09-02T00:11:31", "match": true , "host": "175.18.15.10" }
175.18.15.10 - - [02/sept./2015:00:11:31 +0200] "GET /openhab.app HTTP/1.1" 401 1382
# failJSON: { "time": "2015-09-02T00:11:31", "match": true , "host": "175.18.15.10" }
175.18.15.10 - - [02/sept./2015:00:11:31 +0200] "GET /rest/bindings HTTP/1.1" 401 1384
# Should not match
# failJSON: { "match": false }
175.18.15.11 - - [17/oct./2015:00:35:12 +0200] "GET /openhab.app?sitemap=default&poll=true&__async=true&__source=waHome HTTP/1.1" 200 92
# failJSON: { "match": false }
175.18.15.11 - - [16/oct./2015:20:29:38 +0200] "GET /rest/sitemaps/default/maison HTTP/1.1" 200 2837

View File

@ -23,3 +23,6 @@ Dec 18 02:05:46 platypus postfix/smtpd[16349]: improper command pipelining after
# failJSON: { "time": "2004-12-21T21:17:29", "match": true , "host": "93.184.216.34" } # failJSON: { "time": "2004-12-21T21:17:29", "match": true , "host": "93.184.216.34" }
Dec 21 21:17:29 xxx postfix/smtpd[7150]: NOQUEUE: reject: RCPT from badserver.example.com[93.184.216.34]: 450 4.7.1 Client host rejected: cannot find your hostname, [93.184.216.34]; from=<badactor@example.com> to=<goodguy@example.com> proto=ESMTP helo=<badserver.example.com> Dec 21 21:17:29 xxx postfix/smtpd[7150]: NOQUEUE: reject: RCPT from badserver.example.com[93.184.216.34]: 450 4.7.1 Client host rejected: cannot find your hostname, [93.184.216.34]; from=<badactor@example.com> to=<goodguy@example.com> proto=ESMTP helo=<badserver.example.com>
# failJSON: { "time": "2004-11-22T22:33:44", "match": true , "host": "1.2.3.4" }
Nov 22 22:33:44 xxx postfix/smtpd[11111]: NOQUEUE: reject: RCPT from 1-2-3-4.example.com[1.2.3.4]: 450 4.1.8 <some@nonexistant.tld>: Sender address rejected: Domain not found; from=<some@nonexistant.tld> to=<goodguy@example.com> proto=ESMTP helo=<1-2-3-4.example.com>

View File

@ -132,6 +132,12 @@ Nov 23 21:50:37 sshd[7148]: Connection closed by 61.0.0.1 [preauth]
# failJSON: { "time": "2005-07-13T18:44:28", "match": true , "host": "89.24.13.192", "desc": "from gh-289" } # failJSON: { "time": "2005-07-13T18:44:28", "match": true , "host": "89.24.13.192", "desc": "from gh-289" }
Jul 13 18:44:28 mdop sshd[4931]: Received disconnect from 89.24.13.192: 3: com.jcraft.jsch.JSchException: Auth fail Jul 13 18:44:28 mdop sshd[4931]: Received disconnect from 89.24.13.192: 3: com.jcraft.jsch.JSchException: Auth fail
# failJSON: { "time": "2004-10-01T17:27:44", "match": true , "host": "94.249.236.6", "desc": "newer format per commit 36919d9f" }
Oct 1 17:27:44 localhost sshd[24077]: error: Received disconnect from 94.249.236.6: 3: com.jcraft.jsch.JSchException: Auth fail [preauth]
# failJSON: { "time": "2004-10-01T17:27:44", "match": true , "host": "94.249.236.6", "desc": "space in disconnect description per commit 36919d9f" }
Oct 1 17:27:44 localhost sshd[24077]: error: Received disconnect from 94.249.236.6: 3: Ha ha, suckers!: Auth fail [preauth]
# failJSON: { "match": false } # failJSON: { "match": false }
Feb 12 04:09:18 localhost sshd[26713]: Connection from 115.249.163.77 port 51353 Feb 12 04:09:18 localhost sshd[26713]: Connection from 115.249.163.77 port 51353
# failJSON: { "time": "2005-02-12T04:09:21", "match": true , "host": "115.249.163.77", "desc": "from gh-457" } # failJSON: { "time": "2005-02-12T04:09:21", "match": true , "host": "115.249.163.77", "desc": "from gh-457" }
@ -142,6 +148,9 @@ Feb 12 04:09:18 localhost sshd[26713]: Connection from 115.249.163.77 port 51353
# failJSON: { "time": "2005-02-12T04:09:21", "match": true , "host": "115.249.163.77", "desc": "Multiline match with interface address" } # failJSON: { "time": "2005-02-12T04:09:21", "match": true , "host": "115.249.163.77", "desc": "Multiline match with interface address" }
Feb 12 04:09:21 localhost sshd[26713]: Disconnecting: Too many authentication failures for root [preauth] Feb 12 04:09:21 localhost sshd[26713]: Disconnecting: Too many authentication failures for root [preauth]
# failJSON: { "time": "2004-11-23T21:50:37", "match": true , "host": "61.0.0.1", "desc": "New logline format as openssh 6.8 to replace prev multiline version" }
Nov 23 21:50:37 myhost sshd[21810]: error: maximum authentication attempts exceeded for root from 61.0.0.1 port 49940 ssh2 [preauth]
# failJSON: { "match": false } # failJSON: { "match": false }
Apr 27 13:02:04 host sshd[29116]: User root not allowed because account is locked Apr 27 13:02:04 host sshd[29116]: User root not allowed because account is locked
# failJSON: { "match": false } # failJSON: { "match": false }

View File

@ -0,0 +1,4 @@
Nov 8 00:16:12 main sshd[32547]: Invalid user llinco\361ir from 192.0.2.0
Nov 8 00:16:12 main sshd[32548]: input_userauth_request: invalid user llinco\361ir
Nov 8 00:16:12 main sshd[32547]: pam_succeed_if(sshd:auth): error retrieving information about user llincoñir
Nov 8 00:16:14 main sshd[32547]: Failed password for invalid user llinco\361ir from 192.0.2.0 port 57025 ssh2

View File

@ -41,6 +41,7 @@ from ..server.filterpoll import FilterPoll
from ..server.filter import Filter, FileFilter, FileContainer, DNSUtils from ..server.filter import Filter, FileFilter, FileContainer, DNSUtils
from ..server.failmanager import FailManagerEmpty from ..server.failmanager import FailManagerEmpty
from ..server.mytime import MyTime from ..server.mytime import MyTime
from ..server.utils import Utils
from .utils import setUpMyTime, tearDownMyTime, mtimesleep, LogCaptureTestCase from .utils import setUpMyTime, tearDownMyTime, mtimesleep, LogCaptureTestCase
from .dummyjail import DummyJail from .dummyjail import DummyJail
@ -80,6 +81,39 @@ def _killfile(f, name):
_killfile(None, name + '.bak') _killfile(None, name + '.bak')
def _maxWaitTime(wtime):
if unittest.F2B.fast:
wtime /= 10
return wtime
class _tmSerial():
_last_s = -0x7fffffff
_last_m = -0x7fffffff
_str_s = ""
_str_m = ""
@staticmethod
def _tm(time):
# ## strftime is too slow for the large time serializer:
# return datetime.datetime.fromtimestamp(time).strftime("%Y-%m-%d %H:%M:%S")
c = _tmSerial
sec = (time % 60)
if c._last_s == time - sec:
return "%s%02u" % (c._str_s, sec)
mt = (time % 3600)
if c._last_m == time - mt:
c._last_s = time - sec
c._str_s = "%s%02u:" % (c._str_m, mt // 60)
return "%s%02u" % (c._str_s, sec)
c._last_m = time - mt
c._str_m = datetime.datetime.fromtimestamp(time).strftime("%Y-%m-%d %H:")
c._last_s = time - sec
c._str_s = "%s%02u:" % (c._str_m, mt // 60)
return "%s%02u" % (c._str_s, sec)
_tm = _tmSerial._tm
def _assert_equal_entries(utest, found, output, count=None): def _assert_equal_entries(utest, found, output, count=None):
"""Little helper to unify comparisons with the target entries """Little helper to unify comparisons with the target entries
@ -90,7 +124,11 @@ def _assert_equal_entries(utest, found, output, count=None):
found_time, output_time = \ found_time, output_time = \
MyTime.localtime(found[2]),\ MyTime.localtime(found[2]),\
MyTime.localtime(output[2]) MyTime.localtime(output[2])
utest.assertEqual(found_time, output_time) try:
utest.assertEqual(found_time, output_time)
except AssertionError as e:
# assert more structured:
utest.assertEqual((float(found[2]), found_time), (float(output[2]), output_time))
if len(output) > 3 and count is None: # match matches if len(output) > 3 and count is None: # match matches
# do not check if custom count (e.g. going through them twice) # do not check if custom count (e.g. going through them twice)
if os.linesep != '\n' or sys.platform.startswith('cygwin'): if os.linesep != '\n' or sys.platform.startswith('cygwin'):
@ -117,9 +155,15 @@ def _assert_correct_last_attempt(utest, filter_, output, count=None):
Test filter to contain target ticket Test filter to contain target ticket
""" """
if isinstance(filter_, DummyJail): if isinstance(filter_, DummyJail):
# get fail ticket from jail
found = _ticket_tuple(filter_.getFailTicket()) found = _ticket_tuple(filter_.getFailTicket())
else: else:
# when we are testing without jails # when we are testing without jails
# wait for failures (up to max time)
Utils.wait_for(
lambda: filter_.failManager.getFailTotal() >= (count if count else output[1]),
_maxWaitTime(10))
# get fail ticket from filter
found = _ticket_tuple(filter_.failManager.toBan()) found = _ticket_tuple(filter_.failManager.toBan())
_assert_equal_entries(utest, found, output, count) _assert_equal_entries(utest, found, output, count)
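
Several changes above and below replace fixed time.sleep() pauses and hand-rolled polling loops with a single Utils.wait_for-style helper that polls a condition until it holds or a deadline passes, while _maxWaitTime shrinks that deadline for fast test runs. A generic sketch of such a helper; the real Utils.wait_for signature and sleep strategy may differ:

import time

def wait_for(cond, timeout, interval=0.1):
    """Poll cond() until it returns a truthy value or `timeout` seconds elapse;
    the last (possibly falsy) result is returned either way."""
    deadline = time.time() + timeout
    while True:
        result = cond()
        if result or time.time() >= deadline:
            return result
        time.sleep(interval)

# usage, analogous to the tests: wait until a worker has filled the queue
queue = []
queue.append("ticket")                       # pretend a filter thread did this
print(wait_for(lambda: len(queue) > 0, 2))   # True
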
@ -158,7 +202,7 @@ def _copy_lines_between_files(in_, fout, n=None, skip=0, mode='a', terminal_line
# Opened earlier, therefore must close it # Opened earlier, therefore must close it
fin.close() fin.close()
# to give other threads possibly some time to crunch # to give other threads possibly some time to crunch
time.sleep(0.1) time.sleep(Utils.DEFAULT_SLEEP_INTERVAL)
return fout return fout
@ -216,6 +260,22 @@ class BasicFilter(unittest.TestCase):
("^%Y-%m-%d-%H%M%S.%f %z", ("^%Y-%m-%d-%H%M%S.%f %z",
"^Year-Month-Day-24hourMinuteSecond.Microseconds Zone offset")) "^Year-Month-Day-24hourMinuteSecond.Microseconds Zone offset"))
def testAssertWrongTime(self):
self.assertRaises(AssertionError,
lambda: _assert_equal_entries(self,
('1.1.1.1', 1, 1421262060.0),
('1.1.1.1', 1, 1421262059.0),
1)
)
def testTest_tm(self):
unittest.F2B.SkipIfFast()
## test function "_tm" works correct (returns the same as slow strftime):
for i in xrange(1417512352, (1417512352 // 3600 + 3) * 3600):
tm = datetime.datetime.fromtimestamp(i).strftime("%Y-%m-%d %H:%M:%S")
if _tm(i) != tm:
self.assertEqual((_tm(i), i), (tm, i))
class IgnoreIP(LogCaptureTestCase): class IgnoreIP(LogCaptureTestCase):
@ -260,14 +320,14 @@ class IgnoreIP(LogCaptureTestCase):
self.filter.addIgnoreIP('192.168.1.0/25') self.filter.addIgnoreIP('192.168.1.0/25')
self.filter.addFailRegex('<HOST>') self.filter.addFailRegex('<HOST>')
self.filter.processLineAndAdd('1387203300.222 192.168.1.32') self.filter.processLineAndAdd('1387203300.222 192.168.1.32')
self.assertTrue(self._is_logged('Ignore 192.168.1.32')) self.assertLogged('Ignore 192.168.1.32')
tearDownMyTime() tearDownMyTime()
def testIgnoreAddBannedIP(self): def testIgnoreAddBannedIP(self):
self.filter.addIgnoreIP('192.168.1.0/25') self.filter.addIgnoreIP('192.168.1.0/25')
self.filter.addBannedIP('192.168.1.32') self.filter.addBannedIP('192.168.1.32')
self.assertFalse(self._is_logged('Ignore 192.168.1.32')) self.assertNotLogged('Ignore 192.168.1.32')
self.assertTrue(self._is_logged('Requested to manually ban an ignored IP 192.168.1.32. User knows best. Proceeding to ban it.')) self.assertLogged('Requested to manually ban an ignored IP 192.168.1.32. User knows best. Proceeding to ban it.')
def testIgnoreCommand(self): def testIgnoreCommand(self):
self.filter.setIgnoreCommand(sys.executable + ' ' + os.path.join(TEST_FILES_DIR, "ignorecommand.py <ip>")) self.filter.setIgnoreCommand(sys.executable + ' ' + os.path.join(TEST_FILES_DIR, "ignorecommand.py <ip>"))
@ -278,15 +338,20 @@ class IgnoreIP(LogCaptureTestCase):
ip = "93.184.216.34" ip = "93.184.216.34"
for ignore_source in ["dns", "ip", "command"]: for ignore_source in ["dns", "ip", "command"]:
self.filter.logIgnoreIp(ip, True, ignore_source=ignore_source) self.filter.logIgnoreIp(ip, True, ignore_source=ignore_source)
self.assertTrue(self._is_logged("[%s] Ignore %s by %s" % (self.jail.name, ip, ignore_source))) self.assertLogged("[%s] Ignore %s by %s" % (self.jail.name, ip, ignore_source))
def testIgnoreCauseNOK(self): def testIgnoreCauseNOK(self):
self.filter.logIgnoreIp("example.com", False, ignore_source="NOT_LOGGED") self.filter.logIgnoreIp("example.com", False, ignore_source="NOT_LOGGED")
self.assertFalse(self._is_logged("[%s] Ignore %s by %s" % (self.jail.name, "example.com", "NOT_LOGGED"))) self.assertNotLogged("[%s] Ignore %s by %s" % (self.jail.name, "example.com", "NOT_LOGGED"))
class IgnoreIPDNS(IgnoreIP): class IgnoreIPDNS(IgnoreIP):
def setUp(self):
"""Call before every test case."""
unittest.F2B.SkipIfNoNetwork()
IgnoreIP.setUp(self)
def testIgnoreIPDNSOK(self): def testIgnoreIPDNSOK(self):
self.filter.addIgnoreIP("www.epfl.ch") self.filter.addIgnoreIP("www.epfl.ch")
self.assertTrue(self.filter.inIgnoreIPList("128.178.50.12")) self.assertTrue(self.filter.inIgnoreIPList("128.178.50.12"))
@ -334,59 +399,132 @@ class LogFileFilterPoll(unittest.TestCase):
self.assertTrue(self.filter.isModified(LogFileFilterPoll.FILENAME)) self.assertTrue(self.filter.isModified(LogFileFilterPoll.FILENAME))
self.assertFalse(self.filter.isModified(LogFileFilterPoll.FILENAME)) self.assertFalse(self.filter.isModified(LogFileFilterPoll.FILENAME))
def testSeekToTime(self): def testSeekToTimeSmallFile(self):
fname = tempfile.mktemp(prefix='tmp_fail2ban', suffix='.log') fname = tempfile.mktemp(prefix='tmp_fail2ban', suffix='.log')
tm = lambda time: datetime.datetime.fromtimestamp(time).strftime("%Y-%m-%d %H:%M:%S")
time = 1417512352 time = 1417512352
f = open(fname, 'w') f = open(fname, 'w')
fc = FileContainer(fname, self.filter.getLogEncoding()) fc = None
fc.open()
fc.setPos(0); self.filter.seekToTime(fc, time)
try: try:
fc = FileContainer(fname, self.filter.getLogEncoding())
fc.open()
fc.setPos(0); self.filter.seekToTime(fc, time)
f.flush() f.flush()
# empty : # empty :
fc.setPos(0); self.filter.seekToTime(fc, time) fc.setPos(0); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 0) self.assertEqual(fc.getPos(), 0)
# one entry with exact time: # one entry with exact time:
f.write("%s [sshd] error: PAM: failure len 1\n" % tm(time)) f.write("%s [sshd] error: PAM: failure len 1\n" % _tm(time))
f.flush() f.flush()
fc.setPos(0); self.filter.seekToTime(fc, time) fc.setPos(0); self.filter.seekToTime(fc, time)
# one entry with smaller time:
# rewrite :
f.seek(0) f.seek(0)
f.write("%s [sshd] error: PAM: failure len 1\n" % tm(time - 10)) f.truncate()
fc.close()
fc = FileContainer(fname, self.filter.getLogEncoding())
fc.open()
# no time - nothing should be found :
for i in xrange(10):
f.write("[sshd] error: PAM: failure len 1\n")
f.flush()
fc.setPos(0); self.filter.seekToTime(fc, time)
# rewrite
f.seek(0)
f.truncate()
fc.close()
fc = FileContainer(fname, self.filter.getLogEncoding())
fc.open()
# one entry with smaller time:
f.write("%s [sshd] error: PAM: failure len 2\n" % _tm(time - 10))
f.flush() f.flush()
fc.setPos(0); self.filter.seekToTime(fc, time) fc.setPos(0); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 0) self.assertEqual(fc.getPos(), 53)
f.write("%s [sshd] error: PAM: failure len 3 2 1\n" % tm(time - 9)) # two entries with smaller time:
f.flush() f.write("%s [sshd] error: PAM: failure len 3 2 1\n" % _tm(time - 9))
fc.setPos(0); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 0)
# add exact time between:
f.write("%s [sshd] error: PAM: failure\n" % tm(time - 1))
f.flush() f.flush()
fc.setPos(0); self.filter.seekToTime(fc, time) fc.setPos(0); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 110) self.assertEqual(fc.getPos(), 110)
# check move after end (all times are smaller):
f.write("%s [sshd] error: PAM: failure\n" % _tm(time - 1))
f.flush()
self.assertEqual(fc.getFileSize(), 157)
fc.setPos(0); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 157)
# still one exact line: # still one exact line:
f.write("%s [sshd] error: PAM: Authentication failure\n" % tm(time)) f.write("%s [sshd] error: PAM: Authentication failure\n" % _tm(time))
f.write("%s [sshd] error: PAM: failure len 1\n" % tm(time)) f.write("%s [sshd] error: PAM: failure len 1\n" % _tm(time))
f.flush() f.flush()
fc.setPos(0); self.filter.seekToTime(fc, time) fc.setPos(0); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 110) self.assertEqual(fc.getPos(), 157)
# add something hereafter: # add something hereafter:
f.write("%s [sshd] error: PAM: failure len 3 2 1\n" % tm(time + 2)) f.write("%s [sshd] error: PAM: failure len 3 2 1\n" % _tm(time + 2))
f.write("%s [sshd] error: PAM: Authentication failure\n" % tm(time + 3)) f.write("%s [sshd] error: PAM: Authentication failure\n" % _tm(time + 3))
f.flush() f.flush()
fc.setPos(0); self.filter.seekToTime(fc, time) fc.setPos(0); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 110) self.assertEqual(fc.getPos(), 157)
# add something hereafter: # add something hereafter:
f.write("%s [sshd] error: PAM: failure\n" % tm(time + 9)) f.write("%s [sshd] error: PAM: failure\n" % _tm(time + 9))
f.write("%s [sshd] error: PAM: failure len 3 2 1\n" % tm(time + 9)) f.write("%s [sshd] error: PAM: failure len 4 3 2\n" % _tm(time + 9))
f.flush() f.flush()
fc.setPos(0); self.filter.seekToTime(fc, time) fc.setPos(0); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 110) self.assertEqual(fc.getPos(), 157)
# start search from current pos :
fc.setPos(157); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 157)
# start search from current pos :
fc.setPos(110); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 157)
finally: finally:
fc.close() if fc:
fc.close()
_killfile(f, fname)
def testSeekToTimeLargeFile(self):
fname = tempfile.mktemp(prefix='tmp_fail2ban', suffix='.log')
time = 1417512352
f = open(fname, 'w')
fc = None
count = 1000 if unittest.F2B.fast else 10000
try:
fc = FileContainer(fname, self.filter.getLogEncoding())
fc.open()
f.seek(0)
# variable length of file (ca 45K or 450K before and hereafter):
# write lines with times smaller than the search time:
t = time - count - 1
for i in xrange(count):
f.write("%s [sshd] error: PAM: failure\n" % _tm(t))
t += 1
f.flush()
fc.setPos(0); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 47*count)
# write lines with exact search time:
for i in xrange(10):
f.write("%s [sshd] error: PAM: failure\n" % _tm(time))
f.flush()
fc.setPos(0); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 47*count)
fc.setPos(4*count); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 47*count)
# write lines with times greater than the search time:
t = time+1
for i in xrange(count//500):
for j in xrange(500):
f.write("%s [sshd] error: PAM: failure\n" % _tm(t))
t += 1
f.flush()
fc.setPos(0); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 47*count)
fc.setPos(53); self.filter.seekToTime(fc, time)
self.assertEqual(fc.getPos(), 47*count)
finally:
if fc:
fc.close()
_killfile(f, fname) _killfile(f, fname)
class LogFileMonitor(LogCaptureTestCase): class LogFileMonitor(LogCaptureTestCase):
@ -400,7 +538,7 @@ class LogFileMonitor(LogCaptureTestCase):
_, self.name = tempfile.mkstemp('fail2ban', 'monitorfailures') _, self.name = tempfile.mkstemp('fail2ban', 'monitorfailures')
self.file = open(self.name, 'a') self.file = open(self.name, 'a')
self.filter = FilterPoll(DummyJail()) self.filter = FilterPoll(DummyJail())
self.filter.addLogPath(self.name) self.filter.addLogPath(self.name, autoSeek=False)
self.filter.active = True self.filter.active = True
self.filter.addFailRegex("(?:(?:Authentication failure|Failed [-/\w+]+) for(?: [iI](?:llegal|nvalid) user)?|[Ii](?:llegal|nvalid) user|ROOT LOGIN REFUSED) .*(?: from|FROM) <HOST>") self.filter.addFailRegex("(?:(?:Authentication failure|Failed [-/\w+]+) for(?: [iI](?:llegal|nvalid) user)?|[Ii](?:llegal|nvalid) user|ROOT LOGIN REFUSED) .*(?: from|FROM) <HOST>")
@ -413,16 +551,12 @@ class LogFileMonitor(LogCaptureTestCase):
def isModified(self, delay=2.): def isModified(self, delay=2.):
"""Wait up to `delay` sec to assure that it was modified or not """Wait up to `delay` sec to assure that it was modified or not
""" """
time0 = time.time() return Utils.wait_for(lambda: self.filter.isModified(self.name), _maxWaitTime(delay))
while time.time() < time0 + delay:
if self.filter.isModified(self.name):
return True
time.sleep(0.1)
return False
def notModified(self): def notModified(self, delay=2.):
# shorter wait time for not modified status """Wait up to `delay` sec as long as it was not modified
return not self.isModified(0.4) """
return Utils.wait_for(lambda: not self.filter.isModified(self.name), _maxWaitTime(delay))
def testUnaccessibleLogFile(self): def testUnaccessibleLogFile(self):
os.chmod(self.name, 0) os.chmod(self.name, 0)
@ -436,18 +570,17 @@ class LogFileMonitor(LogCaptureTestCase):
def testNoLogFile(self): def testNoLogFile(self):
_killfile(self.file, self.name) _killfile(self.file, self.name)
self.filter.getFailures(self.name) self.filter.getFailures(self.name)
failure_was_logged = self._is_logged('Unable to open %s' % self.name) self.assertLogged('Unable to open %s' % self.name)
self.assertTrue(failure_was_logged)
def testRemovingFailRegex(self): def testRemovingFailRegex(self):
self.filter.delFailRegex(0) self.filter.delFailRegex(0)
self.assertFalse(self._is_logged('Cannot remove regular expression. Index 0 is not valid')) self.assertNotLogged('Cannot remove regular expression. Index 0 is not valid')
self.filter.delFailRegex(0) self.filter.delFailRegex(0)
self.assertTrue(self._is_logged('Cannot remove regular expression. Index 0 is not valid')) self.assertLogged('Cannot remove regular expression. Index 0 is not valid')
def testRemovingIgnoreRegex(self): def testRemovingIgnoreRegex(self):
self.filter.delIgnoreRegex(0) self.filter.delIgnoreRegex(0)
self.assertTrue(self._is_logged('Cannot remove regular expression. Index 0 is not valid')) self.assertLogged('Cannot remove regular expression. Index 0 is not valid')
def testNewChangeViaIsModified(self): def testNewChangeViaIsModified(self):
# it is a brand new one -- so first we think it is modified # it is a brand new one -- so first we think it is modified
@ -466,7 +599,7 @@ class LogFileMonitor(LogCaptureTestCase):
os.rename(self.name, self.name + '.old') os.rename(self.name, self.name + '.old')
# we are not signaling as modified whenever # we are not signaling as modified whenever
# it gets away # it gets away
self.assertTrue(self.notModified()) self.assertTrue(self.notModified(1))
f = open(self.name, 'a') f = open(self.name, 'a')
self.assertTrue(self.isModified()) self.assertTrue(self.isModified())
self.assertTrue(self.notModified()) self.assertTrue(self.notModified())
@ -550,7 +683,7 @@ def get_monitor_failures_testcase(Filter_):
self.file = open(self.name, 'a') self.file = open(self.name, 'a')
self.jail = DummyJail() self.jail = DummyJail()
self.filter = Filter_(self.jail) self.filter = Filter_(self.jail)
self.filter.addLogPath(self.name) self.filter.addLogPath(self.name, autoSeek=False)
self.filter.active = True self.filter.active = True
self.filter.addFailRegex("(?:(?:Authentication failure|Failed [-/\w+]+) for(?: [iI](?:llegal|nvalid) user)?|[Ii](?:llegal|nvalid) user|ROOT LOGIN REFUSED) .*(?: from|FROM) <HOST>") self.filter.addFailRegex("(?:(?:Authentication failure|Failed [-/\w+]+) for(?: [iI](?:llegal|nvalid) user)?|[Ii](?:llegal|nvalid) user|ROOT LOGIN REFUSED) .*(?: from|FROM) <HOST>")
self.filter.start() self.filter.start()
@ -572,29 +705,24 @@ def get_monitor_failures_testcase(Filter_):
#time.sleep(0.2) # Give FS time to ack the removal #time.sleep(0.2) # Give FS time to ack the removal
pass pass
def isFilled(self, delay=2.): def isFilled(self, delay=1.):
"""Wait up to `delay` sec to assure that it was modified or not """Wait up to `delay` sec to assure that it was modified or not
""" """
time0 = time.time() return Utils.wait_for(self.jail.isFilled, _maxWaitTime(delay))
while time.time() < time0 + delay:
if len(self.jail):
return True
time.sleep(0.1)
return False
def _sleep_4_poll(self): def _sleep_4_poll(self):
# Since FilterPoll relies on time stamps and some # Since FilterPoll relies on time stamps and some
# actions might be happening too fast in the tests, # actions might be happening too fast in the tests,
# sleep a bit to guarantee reliable time stamps # sleep a bit to guarantee reliable time stamps
if isinstance(self.filter, FilterPoll): if isinstance(self.filter, FilterPoll):
mtimesleep() Utils.wait_for(self.filter.isAlive, _maxWaitTime(5))
def isEmpty(self, delay=0.4): def isEmpty(self, delay=_maxWaitTime(5)):
# shorter wait time for not modified status # shorter wait time for not modified status
return not self.isFilled(delay) return Utils.wait_for(self.jail.isEmpty, _maxWaitTime(delay))
def assert_correct_last_attempt(self, failures, count=None): def assert_correct_last_attempt(self, failures, count=None):
self.assertTrue(self.isFilled(20)) # give Filter a chance to react self.assertTrue(self.isFilled(10)) # give Filter a chance to react
_assert_correct_last_attempt(self, self.jail, failures, count=count) _assert_correct_last_attempt(self, self.jail, failures, count=count)
def test_grow_file(self): def test_grow_file(self):
@ -609,7 +737,7 @@ def get_monitor_failures_testcase(Filter_):
# since it should have not been enough # since it should have not been enough
_copy_lines_between_files(GetFailures.FILENAME_01, self.file, skip=5) _copy_lines_between_files(GetFailures.FILENAME_01, self.file, skip=5)
self.assertTrue(self.isFilled(6)) self.assertTrue(self.isFilled(10))
# so we sleep for up to 2 sec for it not to become empty, # so we sleep for up to 2 sec for it not to become empty,
# and meanwhile pass to other thread(s) and filter should # and meanwhile pass to other thread(s) and filter should
# have gathered new failures and passed them into the # have gathered new failures and passed them into the
@ -646,10 +774,11 @@ def get_monitor_failures_testcase(Filter_):
self.file = _copy_lines_between_files(GetFailures.FILENAME_01, self.name, self.file = _copy_lines_between_files(GetFailures.FILENAME_01, self.name,
n=14, mode='w') n=14, mode='w')
# Poll might need more time # Poll might need more time
self.assertTrue(self.isEmpty(4 + int(isinstance(self.filter, FilterPoll))*2), self.assertTrue(self.isEmpty(_maxWaitTime(5)),
"Queue must be empty but it is not: %s." "Queue must be empty but it is not: %s."
% (', '.join([str(x) for x in self.jail.queue]))) % (', '.join([str(x) for x in self.jail.queue])))
self.assertRaises(FailManagerEmpty, self.filter.failManager.toBan) self.assertRaises(FailManagerEmpty, self.filter.failManager.toBan)
Utils.wait_for(lambda: self.filter.failManager.getFailTotal() == 2, _maxWaitTime(10))
self.assertEqual(self.filter.failManager.getFailTotal(), 2) self.assertEqual(self.filter.failManager.getFailTotal(), 2)
# move aside, but leaving the handle still open... # move aside, but leaving the handle still open...
@ -674,7 +803,7 @@ def get_monitor_failures_testcase(Filter_):
if interim_kill: if interim_kill:
_killfile(None, self.name) _killfile(None, self.name)
time.sleep(0.2) # let them know time.sleep(Utils.DEFAULT_SLEEP_INTERVAL) # let them know
# now create a new one to override old one # now create a new one to override old one
_copy_lines_between_files(GetFailures.FILENAME_01, self.name + '.new', _copy_lines_between_files(GetFailures.FILENAME_01, self.name + '.new',
@ -721,10 +850,10 @@ def get_monitor_failures_testcase(Filter_):
_copy_lines_between_files(GetFailures.FILENAME_01, self.file, n=100) _copy_lines_between_files(GetFailures.FILENAME_01, self.file, n=100)
# so we should get no more failures detected # so we should get no more failures detected
self.assertTrue(self.isEmpty(2)) self.assertTrue(self.isEmpty(_maxWaitTime(10)))
# but then if we add it back again # but then if we add it back again (no seek-to-time in FileFilter, because the file reuses the same times)
self.filter.addLogPath(self.name) self.filter.addLogPath(self.name, autoSeek=False)
# Tricky catch here is that it should get them from the # Tricky catch here is that it should get them from the
# tail written before, so let's not copy anything yet # tail written before, so let's not copy anything yet
#_copy_lines_between_files(GetFailures.FILENAME_01, self.name, n=100) #_copy_lines_between_files(GetFailures.FILENAME_01, self.name, n=100)
@ -778,22 +907,17 @@ def get_monitor_failures_journal_testcase(Filter_): # pragma: systemd no cover
return "MonitorJournalFailures%s(%s)" \ return "MonitorJournalFailures%s(%s)" \
% (Filter_, hasattr(self, 'name') and self.name or 'tempfile') % (Filter_, hasattr(self, 'name') and self.name or 'tempfile')
def isFilled(self, delay=2.): def isFilled(self, delay=1.):
"""Wait up to `delay` sec to assure that it was modified or not """Wait up to `delay` sec to assure that it was modified or not
""" """
time0 = time.time() return Utils.wait_for(self.jail.isFilled, _maxWaitTime(delay))
while time.time() < time0 + delay:
if len(self.jail):
return True
time.sleep(0.1)
return False
def isEmpty(self, delay=0.4): def isEmpty(self, delay=_maxWaitTime(5)):
# shorter wait time for not modified status # shorter wait time for not modified status
return not self.isFilled(delay) return Utils.wait_for(self.jail.isEmpty, _maxWaitTime(delay))
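Note: the tests above replace fixed sleeps with Utils.wait_for polling. The following is a minimal sketch of such a polling helper, assuming only the semantics these tests rely on (return the truthy condition result, or False once the timeout is reached); the actual implementation in fail2ban/server/utils.py may differ in detail:

import time

def wait_for(cond, timeout, interval=0.1):
    # Poll `cond` until it returns a truthy value or `timeout` seconds elapse.
    # Returns the truthy result of cond(), or False on timeout (sketch only).
    deadline = time.time() + timeout
    while True:
        ret = cond()
        if ret:
            return ret
        if time.time() > deadline:
            return False
        time.sleep(interval)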
def assert_correct_ban(self, test_ip, test_attempts): def assert_correct_ban(self, test_ip, test_attempts):
self.assertTrue(self.isFilled(10)) # give Filter a chance to react self.assertTrue(self.isFilled(_maxWaitTime(10))) # give Filter a chance to react
ticket = self.jail.getFailTicket() ticket = self.jail.getFailTicket()
attempts = ticket.getAttempt() attempts = ticket.getAttempt()
@ -816,7 +940,7 @@ def get_monitor_failures_journal_testcase(Filter_): # pragma: systemd no cover
_copy_lines_to_journal( _copy_lines_to_journal(
self.test_file, self.journal_fields, skip=2, n=3) self.test_file, self.journal_fields, skip=2, n=3)
self.assertTrue(self.isFilled(6)) self.assertTrue(self.isFilled(10))
# so we sleep for up to 6 sec for it not to become empty, # so we sleep for up to 6 sec for it not to become empty,
# and meanwhile pass to other thread(s) and filter should # and meanwhile pass to other thread(s) and filter should
# have gathered new failures and passed them into the # have gathered new failures and passed them into the
@ -849,7 +973,7 @@ def get_monitor_failures_journal_testcase(Filter_): # pragma: systemd no cover
_copy_lines_to_journal( _copy_lines_to_journal(
self.test_file, self.journal_fields, n=5, skip=5) self.test_file, self.journal_fields, n=5, skip=5)
# so we should get no more failures detected # so we should get no more failures detected
self.assertTrue(self.isEmpty(2)) self.assertTrue(self.isEmpty(_maxWaitTime(10)))
# but then if we add it back again # but then if we add it back again
self.filter.addJournalMatch([ self.filter.addJournalMatch([
@ -860,12 +984,12 @@ def get_monitor_failures_journal_testcase(Filter_): # pragma: systemd no cover
_copy_lines_to_journal( _copy_lines_to_journal(
self.test_file, self.journal_fields, n=6, skip=10) self.test_file, self.journal_fields, n=6, skip=10)
# we should detect the failures # we should detect the failures
self.assertTrue(self.isFilled(6)) self.assertTrue(self.isFilled(10))
return MonitorJournalFailures return MonitorJournalFailures
class GetFailures(unittest.TestCase): class GetFailures(LogCaptureTestCase):
FILENAME_01 = os.path.join(TEST_FILES_DIR, "testcase01.log") FILENAME_01 = os.path.join(TEST_FILES_DIR, "testcase01.log")
FILENAME_02 = os.path.join(TEST_FILES_DIR, "testcase02.log") FILENAME_02 = os.path.join(TEST_FILES_DIR, "testcase02.log")
@ -880,6 +1004,7 @@ class GetFailures(unittest.TestCase):
def setUp(self): def setUp(self):
"""Call before every test case.""" """Call before every test case."""
LogCaptureTestCase.setUp(self)
setUpMyTime() setUpMyTime()
self.jail = DummyJail() self.jail = DummyJail()
self.filter = FileFilter(self.jail) self.filter = FileFilter(self.jail)
@ -891,20 +1016,43 @@ class GetFailures(unittest.TestCase):
def tearDown(self): def tearDown(self):
"""Call after every test case.""" """Call after every test case."""
tearDownMyTime() tearDownMyTime()
LogCaptureTestCase.tearDown(self)
def testFilterAPI(self):
self.assertEqual(self.filter.getLogs(), [])
self.assertEqual(self.filter.getLogCount(), 0)
self.filter.addLogPath(GetFailures.FILENAME_01, tail=True)
self.assertEqual(self.filter.getLogCount(), 1)
self.assertEqual(self.filter.getLogPaths(), [GetFailures.FILENAME_01])
self.filter.addLogPath(GetFailures.FILENAME_02, tail=True)
self.assertEqual(self.filter.getLogCount(), 2)
self.assertEqual(sorted(self.filter.getLogPaths()), sorted([GetFailures.FILENAME_01, GetFailures.FILENAME_02]))
def testTail(self): def testTail(self):
# There must be no containers registered, otherwise [-1] indexing would be wrong
self.assertEqual(self.filter.getLogs(), [])
self.filter.addLogPath(GetFailures.FILENAME_01, tail=True) self.filter.addLogPath(GetFailures.FILENAME_01, tail=True)
self.assertEqual(self.filter.getLogPath()[-1].getPos(), 1653) self.assertEqual(self.filter.getLogs()[-1].getPos(), 1653)
self.filter.getLogPath()[-1].close() self.filter.getLogs()[-1].close()
self.assertEqual(self.filter.getLogPath()[-1].readline(), "") self.assertEqual(self.filter.getLogs()[-1].readline(), "")
self.filter.delLogPath(GetFailures.FILENAME_01) self.filter.delLogPath(GetFailures.FILENAME_01)
self.assertEqual(self.filter.getLogPath(),[]) self.assertEqual(self.filter.getLogs(), [])
def testNoLogAdded(self):
self.filter.addLogPath(GetFailures.FILENAME_01, tail=True)
self.assertTrue(self.filter.containsLogPath(GetFailures.FILENAME_01))
self.filter.delLogPath(GetFailures.FILENAME_01)
self.assertFalse(self.filter.containsLogPath(GetFailures.FILENAME_01))
# and unknown (safety and cover)
self.assertFalse(self.filter.containsLogPath('unknown.log'))
self.filter.delLogPath('unknown.log')
def testGetFailures01(self, filename=None, failures=None): def testGetFailures01(self, filename=None, failures=None):
filename = filename or GetFailures.FILENAME_01 filename = filename or GetFailures.FILENAME_01
failures = failures or GetFailures.FAILURES_01 failures = failures or GetFailures.FAILURES_01
self.filter.addLogPath(filename) self.filter.addLogPath(filename, autoSeek=0)
self.filter.addFailRegex("(?:(?:Authentication failure|Failed [-/\w+]+) for(?: [iI](?:llegal|nvalid) user)?|[Ii](?:llegal|nvalid) user|ROOT LOGIN REFUSED) .*(?: from|FROM) <HOST>$") self.filter.addFailRegex("(?:(?:Authentication failure|Failed [-/\w+]+) for(?: [iI](?:llegal|nvalid) user)?|[Ii](?:llegal|nvalid) user|ROOT LOGIN REFUSED) .*(?: from|FROM) <HOST>$")
self.filter.getFailures(filename) self.filter.getFailures(filename)
_assert_correct_last_attempt(self, self.filter, failures) _assert_correct_last_attempt(self, self.filter, failures)
@ -928,7 +1076,7 @@ class GetFailures(unittest.TestCase):
[u'Aug 14 11:%d:59 i60p295 sshd[12365]: Failed publickey for roehl from ::ffff:141.3.81.106 port 51332 ssh2' [u'Aug 14 11:%d:59 i60p295 sshd[12365]: Failed publickey for roehl from ::ffff:141.3.81.106 port 51332 ssh2'
% m for m in 53, 54, 57, 58]) % m for m in 53, 54, 57, 58])
self.filter.addLogPath(GetFailures.FILENAME_02) self.filter.addLogPath(GetFailures.FILENAME_02, autoSeek=0)
self.filter.addFailRegex("Failed .* from <HOST>") self.filter.addFailRegex("Failed .* from <HOST>")
self.filter.getFailures(GetFailures.FILENAME_02) self.filter.getFailures(GetFailures.FILENAME_02)
_assert_correct_last_attempt(self, self.filter, output) _assert_correct_last_attempt(self, self.filter, output)
@ -936,25 +1084,35 @@ class GetFailures(unittest.TestCase):
def testGetFailures03(self): def testGetFailures03(self):
output = ('203.162.223.135', 7, 1124013544.0) output = ('203.162.223.135', 7, 1124013544.0)
self.filter.addLogPath(GetFailures.FILENAME_03) self.filter.addLogPath(GetFailures.FILENAME_03, autoSeek=0)
self.filter.addFailRegex("error,relay=<HOST>,.*550 User unknown") self.filter.addFailRegex("error,relay=<HOST>,.*550 User unknown")
self.filter.getFailures(GetFailures.FILENAME_03) self.filter.getFailures(GetFailures.FILENAME_03)
_assert_correct_last_attempt(self, self.filter, output) _assert_correct_last_attempt(self, self.filter, output)
def testGetFailures03_seek(self): def testGetFailures03_Seek1(self):
# same test as above but with seek to 'Aug 14 11:55:04' - so other output ... # same test as above but with seek to 'Aug 14 11:55:04' - so other output ...
output = ('203.162.223.135', 5, 1124013544.0) output = ('203.162.223.135', 5, 1124013544.0)
self.filter.addLogPath(GetFailures.FILENAME_03) self.filter.addLogPath(GetFailures.FILENAME_03, autoSeek=output[2] - 4*60)
self.filter.addFailRegex("error,relay=<HOST>,.*550 User unknown") self.filter.addFailRegex("error,relay=<HOST>,.*550 User unknown")
self.filter.getFailures(GetFailures.FILENAME_03, output[2] - 4*60 + 1) self.filter.getFailures(GetFailures.FILENAME_03)
_assert_correct_last_attempt(self, self.filter, output)
def testGetFailures03_Seek2(self):
# same test as above but with seek to 'Aug 14 11:59:04' - so other output ...
output = ('203.162.223.135', 1, 1124013544.0)
self.filter.setMaxRetry(1)
self.filter.addLogPath(GetFailures.FILENAME_03, autoSeek=output[2])
self.filter.addFailRegex("error,relay=<HOST>,.*550 User unknown")
self.filter.getFailures(GetFailures.FILENAME_03)
_assert_correct_last_attempt(self, self.filter, output) _assert_correct_last_attempt(self, self.filter, output)
def testGetFailures04(self): def testGetFailures04(self):
output = [('212.41.96.186', 4, 1124013600.0), output = [('212.41.96.186', 4, 1124013600.0),
('212.41.96.185', 4, 1124017198.0)] ('212.41.96.185', 4, 1124017198.0)]
self.filter.addLogPath(GetFailures.FILENAME_04) self.filter.addLogPath(GetFailures.FILENAME_04, autoSeek=0)
self.filter.addFailRegex("Invalid user .* <HOST>") self.filter.addFailRegex("Invalid user .* <HOST>")
self.filter.getFailures(GetFailures.FILENAME_04) self.filter.getFailures(GetFailures.FILENAME_04)
@ -964,7 +1122,43 @@ class GetFailures(unittest.TestCase):
except FailManagerEmpty: except FailManagerEmpty:
pass pass
def testGetFailuresWrongChar(self):
# write wrong utf-8 char:
fname = tempfile.mktemp(prefix='tmp_fail2ban', suffix='crlf')
fout = fopen(fname, 'wb')
try:
# write:
for l in (
b'2015-01-14 20:00:58 user \"test\xf1ing\" from \"192.0.2.0\"\n', # wrong utf-8 char
b'2015-01-14 20:00:59 user \"\xd1\xe2\xe5\xf2\xe0\" from \"192.0.2.0\"\n', # wrong utf-8 chars
b'2015-01-14 20:01:00 user \"testing\" from \"192.0.2.0\"\n' # correct utf-8 chars
):
fout.write(l)
fout.close()
#
output = ('192.0.2.0', 3, 1421262060.0)
failregex = "^\s*user \"[^\"]*\" from \"<HOST>\"\s*$"
# test encoding auto or direct set of encoding:
for enc in (None, 'utf-8', 'ascii'):
if enc is not None:
self.tearDown(); self.setUp()
self.filter.setLogEncoding(enc)
self.assertNotLogged('Error decoding line')
self.filter.addLogPath(fname)
self.filter.addFailRegex(failregex)
self.filter.getFailures(fname)
_assert_correct_last_attempt(self, self.filter, output)
self.assertLogged('Error decoding line')
self.assertLogged('Continuing to process line ignoring invalid characters:', '2015-01-14 20:00:58 user ')
self.assertLogged('Continuing to process line ignoring invalid characters:', '2015-01-14 20:00:59 user ')
finally:
_killfile(fout, fname)
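The test above asserts that lines containing bytes invalid for the configured encoding are still processed after an "Error decoding line" warning. Below is a minimal sketch of that fallback, using a hypothetical decode_line helper (the real handling sits in the filter's file-reading path):

def decode_line(raw, enc='utf-8'):
    # Try a strict decode first; on failure substitute the invalid characters
    # and keep going, as the filter does after logging "Error decoding line".
    try:
        return raw.decode(enc, 'strict')
    except UnicodeDecodeError:
        return raw.decode(enc, 'replace')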
def testGetFailuresUseDNS(self): def testGetFailuresUseDNS(self):
unittest.F2B.SkipIfNoNetwork()
# We should still catch failures with usedns = no ;-) # We should still catch failures with usedns = no ;-)
output_yes = ('93.184.216.34', 2, 1124013539.0, output_yes = ('93.184.216.34', 2, 1124013539.0,
[u'Aug 14 11:54:59 i60p295 sshd[12365]: Failed publickey for roehl from example.com port 51332 ssh2', [u'Aug 14 11:54:59 i60p295 sshd[12365]: Failed publickey for roehl from example.com port 51332 ssh2',
@ -985,7 +1179,7 @@ class GetFailures(unittest.TestCase):
filter_.active = True filter_.active = True
filter_.failManager.setMaxRetry(1) # we might have just few failures filter_.failManager.setMaxRetry(1) # we might have just few failures
filter_.addLogPath(GetFailures.FILENAME_USEDNS) filter_.addLogPath(GetFailures.FILENAME_USEDNS, autoSeek=False)
filter_.addFailRegex("Failed .* from <HOST>") filter_.addFailRegex("Failed .* from <HOST>")
filter_.getFailures(GetFailures.FILENAME_USEDNS) filter_.getFailures(GetFailures.FILENAME_USEDNS)
_assert_correct_last_attempt(self, filter_, output) _assert_correct_last_attempt(self, filter_, output)
@ -993,14 +1187,14 @@ class GetFailures(unittest.TestCase):
def testGetFailuresMultiRegex(self): def testGetFailuresMultiRegex(self):
output = ('141.3.81.106', 8, 1124013541.0) output = ('141.3.81.106', 8, 1124013541.0)
self.filter.addLogPath(GetFailures.FILENAME_02) self.filter.addLogPath(GetFailures.FILENAME_02, autoSeek=False)
self.filter.addFailRegex("Failed .* from <HOST>") self.filter.addFailRegex("Failed .* from <HOST>")
self.filter.addFailRegex("Accepted .* from <HOST>") self.filter.addFailRegex("Accepted .* from <HOST>")
self.filter.getFailures(GetFailures.FILENAME_02) self.filter.getFailures(GetFailures.FILENAME_02)
_assert_correct_last_attempt(self, self.filter, output) _assert_correct_last_attempt(self, self.filter, output)
def testGetFailuresIgnoreRegex(self): def testGetFailuresIgnoreRegex(self):
self.filter.addLogPath(GetFailures.FILENAME_02) self.filter.addLogPath(GetFailures.FILENAME_02, autoSeek=False)
self.filter.addFailRegex("Failed .* from <HOST>") self.filter.addFailRegex("Failed .* from <HOST>")
self.filter.addFailRegex("Accepted .* from <HOST>") self.filter.addFailRegex("Accepted .* from <HOST>")
self.filter.addIgnoreRegex("for roehl") self.filter.addIgnoreRegex("for roehl")
@ -1012,7 +1206,7 @@ class GetFailures(unittest.TestCase):
def testGetFailuresMultiLine(self): def testGetFailuresMultiLine(self):
output = [("192.0.43.10", 2, 1124013599.0), output = [("192.0.43.10", 2, 1124013599.0),
("192.0.43.11", 1, 1124013598.0)] ("192.0.43.11", 1, 1124013598.0)]
self.filter.addLogPath(GetFailures.FILENAME_MULTILINE) self.filter.addLogPath(GetFailures.FILENAME_MULTILINE, autoSeek=False)
self.filter.addFailRegex("^.*rsyncd\[(?P<pid>\d+)\]: connect from .+ \(<HOST>\)$<SKIPLINES>^.+ rsyncd\[(?P=pid)\]: rsync error: .*$") self.filter.addFailRegex("^.*rsyncd\[(?P<pid>\d+)\]: connect from .+ \(<HOST>\)$<SKIPLINES>^.+ rsyncd\[(?P=pid)\]: rsync error: .*$")
self.filter.setMaxLines(100) self.filter.setMaxLines(100)
self.filter.setMaxRetry(1) self.filter.setMaxRetry(1)
@ -1030,7 +1224,7 @@ class GetFailures(unittest.TestCase):
def testGetFailuresMultiLineIgnoreRegex(self): def testGetFailuresMultiLineIgnoreRegex(self):
output = [("192.0.43.10", 2, 1124013599.0)] output = [("192.0.43.10", 2, 1124013599.0)]
self.filter.addLogPath(GetFailures.FILENAME_MULTILINE) self.filter.addLogPath(GetFailures.FILENAME_MULTILINE, autoSeek=False)
self.filter.addFailRegex("^.*rsyncd\[(?P<pid>\d+)\]: connect from .+ \(<HOST>\)$<SKIPLINES>^.+ rsyncd\[(?P=pid)\]: rsync error: .*$") self.filter.addFailRegex("^.*rsyncd\[(?P<pid>\d+)\]: connect from .+ \(<HOST>\)$<SKIPLINES>^.+ rsyncd\[(?P=pid)\]: rsync error: .*$")
self.filter.addIgnoreRegex("rsync error: Received SIGINT") self.filter.addIgnoreRegex("rsync error: Received SIGINT")
self.filter.setMaxLines(100) self.filter.setMaxLines(100)
@ -1046,7 +1240,7 @@ class GetFailures(unittest.TestCase):
output = [("192.0.43.10", 2, 1124013599.0), output = [("192.0.43.10", 2, 1124013599.0),
("192.0.43.11", 1, 1124013598.0), ("192.0.43.11", 1, 1124013598.0),
("192.0.43.15", 1, 1124013598.0)] ("192.0.43.15", 1, 1124013598.0)]
self.filter.addLogPath(GetFailures.FILENAME_MULTILINE) self.filter.addLogPath(GetFailures.FILENAME_MULTILINE, autoSeek=False)
self.filter.addFailRegex("^.*rsyncd\[(?P<pid>\d+)\]: connect from .+ \(<HOST>\)$<SKIPLINES>^.+ rsyncd\[(?P=pid)\]: rsync error: .*$") self.filter.addFailRegex("^.*rsyncd\[(?P<pid>\d+)\]: connect from .+ \(<HOST>\)$<SKIPLINES>^.+ rsyncd\[(?P=pid)\]: rsync error: .*$")
self.filter.addFailRegex("^.* sendmail\[.*, msgid=<(?P<msgid>[^>]+).*relay=\[<HOST>\].*$<SKIPLINES>^.+ spamd: result: Y \d+ .*,mid=<(?P=msgid)>(,bayes=[.\d]+)?(,autolearn=\S+)?\s*$") self.filter.addFailRegex("^.* sendmail\[.*, msgid=<(?P<msgid>[^>]+).*relay=\[<HOST>\].*$<SKIPLINES>^.+ spamd: result: Y \d+ .*,mid=<(?P=msgid)>(,bayes=[.\d]+)?(,autolearn=\S+)?\s*$")
self.filter.setMaxLines(100) self.filter.setMaxLines(100)
@ -1066,6 +1260,56 @@ class GetFailures(unittest.TestCase):
class DNSUtilsTests(unittest.TestCase): class DNSUtilsTests(unittest.TestCase):
def testCache(self):
c = Utils.Cache(maxCount=5, maxTime=60)
# not available :
self.assertTrue(c.get('a') is None)
self.assertEqual(c.get('a', 'test'), 'test')
# exact 5 elements :
for i in xrange(5):
c.set(i, i)
for i in xrange(5):
self.assertEqual(c.get(i), i)
def testCacheMaxSize(self):
c = Utils.Cache(maxCount=5, maxTime=60)
# exact 5 elements :
for i in xrange(5):
c.set(i, i)
self.assertEqual([c.get(i) for i in xrange(5)], [i for i in xrange(5)])
self.assertFalse(-1 in [c.get(i, -1) for i in xrange(5)])
# add one - too many:
c.set(10, i)
# one element should be removed :
self.assertTrue(-1 in [c.get(i, -1) for i in xrange(5)])
# test max size (not expired):
for i in xrange(10):
c.set(i, 1)
self.assertEqual(len(c), 5)
def testCacheMaxTime(self):
# test max time (expired, timeout reached) :
c = Utils.Cache(maxCount=5, maxTime=0.0005)
for i in xrange(10):
c.set(i, 1)
st = time.time()
self.assertTrue(Utils.wait_for(lambda: time.time() >= st + 0.0005, 1))
# we still have 5 elements (or fewer on a too slow test machine):
self.assertTrue(len(c) <= 5)
# but all of them are expired as well:
for i in xrange(10):
self.assertTrue(c.get(i) is None)
# here the whole cache should be empty:
self.assertEqual(len(c), 0)
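The three cache tests above pin down the behaviour expected of Utils.Cache: at most maxCount entries, each expiring maxTime seconds after it was set. A minimal sketch that satisfies those assertions follows (class name and layout are hypothetical; the real implementation lives in fail2ban/server/utils.py):

import time
from collections import OrderedDict

class TTLCache(object):
    # Size- and time-bounded cache: keeps at most maxCount entries,
    # each entry expires maxTime seconds after it was set.
    def __init__(self, maxCount=1000, maxTime=60):
        self.maxCount = maxCount
        self.maxTime = maxTime
        self._cache = OrderedDict()

    def set(self, k, v):
        # drop the oldest entry if the cache is full and the key is new
        if len(self._cache) >= self.maxCount and k not in self._cache:
            self._cache.popitem(last=False)
        self._cache[k] = (v, time.time() + self.maxTime)

    def get(self, k, defv=None):
        try:
            v, expires = self._cache[k]
        except KeyError:
            return defv
        if time.time() > expires:
            del self._cache[k]  # expired entries are purged on access
            return defv
        return v

    def __len__(self):
        # purge expired entries, then report the remaining count
        now = time.time()
        for k in [k for k, (v, e) in self._cache.items() if now > e]:
            del self._cache[k]
        return len(self._cache)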
class DNSUtilsNetworkTests(unittest.TestCase):
def setUp(self):
"""Call before every test case."""
unittest.F2B.SkipIfNoNetwork()
def testUseDns(self): def testUseDns(self):
res = DNSUtils.textToIp('www.example.com', 'no') res = DNSUtils.textToIp('www.example.com', 'no')
self.assertEqual(res, []) self.assertEqual(res, [])
@ -1089,8 +1333,9 @@ class DNSUtilsTests(unittest.TestCase):
self.assertEqual(res, []) self.assertEqual(res, [])
def testIpToName(self): def testIpToName(self):
res = DNSUtils.ipToName('66.249.66.1') unittest.F2B.SkipIfNoNetwork()
self.assertEqual(res, 'crawl-66-249-66-1.googlebot.com') res = DNSUtils.ipToName('8.8.4.4')
self.assertEqual(res, 'google-public-dns-b.google.com')
# invalid ip (TEST-NET-1 according to RFC 5737) # invalid ip (TEST-NET-1 according to RFC 5737)
res = DNSUtils.ipToName('192.0.2.0') res = DNSUtils.ipToName('192.0.2.0')
self.assertEqual(res, None) self.assertEqual(res, None)


@ -33,6 +33,7 @@ from glob import glob
from StringIO import StringIO from StringIO import StringIO
from ..helpers import formatExceptionInfo, mbasename, TraceBack, FormatterWithTraceBack, getLogger from ..helpers import formatExceptionInfo, mbasename, TraceBack, FormatterWithTraceBack, getLogger
from ..helpers import splitcommaspace
from ..server.datetemplate import DatePatternRegex from ..server.datetemplate import DatePatternRegex
from ..server.mytime import MyTime from ..server.mytime import MyTime
@ -56,16 +57,42 @@ class HelpersTest(unittest.TestCase):
# might be fragile due to ' vs " # might be fragile due to ' vs "
self.assertEqual(args, "('Very bad', None)") self.assertEqual(args, "('Very bad', None)")
def testsplitcommaspace(self):
self.assertEqual(splitcommaspace(None), [])
self.assertEqual(splitcommaspace(''), [])
self.assertEqual(splitcommaspace(' '), [])
self.assertEqual(splitcommaspace('1'), ['1'])
self.assertEqual(splitcommaspace(' 1 2 '), ['1', '2'])
self.assertEqual(splitcommaspace(' 1, 2 , '), ['1', '2'])
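A minimal sketch of the splitcommaspace helper exercised above, assuming it only needs to split on commas and whitespace and drop empty tokens (the real helper is in fail2ban/helpers.py):

import re

def splitcommaspace(s):
    # split on commas and/or whitespace, discarding empty tokens
    if not s:
        return []
    return [t for t in re.split(r'[ ,\t\r\n]+', s) if t]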
def _getSysPythonVersion():
import subprocess, locale
sysVerCmd = "python -c 'import sys; print(tuple(sys.version_info))'"
if sys.version_info >= (2,7):
sysVer = subprocess.check_output(sysVerCmd, shell=True)
else:
sysVer = subprocess.Popen(sysVerCmd, shell=True, stdout=subprocess.PIPE).stdout.read()
if sys.version_info >= (3,):
sysVer = sysVer.decode(locale.getpreferredencoding(), 'replace')
return str(sysVer).rstrip()
class SetupTest(unittest.TestCase): class SetupTest(unittest.TestCase):
def setUp(self): def setUp(self):
unittest.F2B.SkipIfFast()
setup = os.path.join(os.path.dirname(__file__), '..', '..', 'setup.py') setup = os.path.join(os.path.dirname(__file__), '..', '..', 'setup.py')
self.setup = os.path.exists(setup) and setup or None self.setup = os.path.exists(setup) and setup or None
if not self.setup and sys.version_info >= (2,7): # pragma: no cover - running not out of the source if not self.setup and sys.version_info >= (2,7): # pragma: no cover - running not out of the source
raise unittest.SkipTest( raise unittest.SkipTest(
"Seems to be running not out of source distribution" "Seems to be running not out of source distribution"
" -- cannot locate setup.py") " -- cannot locate setup.py")
# compare the currently installed python version with the currently active one:
sysVer = _getSysPythonVersion()
if sysVer != str(tuple(sys.version_info)):
raise unittest.SkipTest(
"Seems to be running with python distribution %s"
" -- install can be tested only with system distribution %s" % (str(tuple(sys.version_info)), sysVer))
def testSetupInstallRoot(self): def testSetupInstallRoot(self):
if not self.setup: if not self.setup:


@ -64,7 +64,8 @@ def testSampleRegexsFactory(name):
def testFilter(self): def testFilter(self):
# Check filter exists # Check filter exists
filterConf = FilterReader(name, "jail", {}, basedir=CONFIG_DIR) filterConf = FilterReader(name, "jail", {},
basedir=CONFIG_DIR, share_config=unittest.F2B.share_config)
self.assertEqual(filterConf.getFile(), name) self.assertEqual(filterConf.getFile(), name)
self.assertEqual(filterConf.getJailName(), "jail") self.assertEqual(filterConf.getJailName(), "jail")
filterConf.read() filterConf.read()


@ -36,6 +36,7 @@ from ..server.failregex import Regex, FailRegex, RegexException
from ..server.server import Server from ..server.server import Server
from ..server.jail import Jail from ..server.jail import Jail
from ..server.jailthread import JailThread from ..server.jailthread import JailThread
from ..server.utils import Utils
from .utils import LogCaptureTestCase from .utils import LogCaptureTestCase
from ..helpers import getLogger from ..helpers import getLogger
from .. import version from .. import version
@ -46,6 +47,7 @@ except ImportError: # pragma: no cover
filtersystemd = None filtersystemd = None
TEST_FILES_DIR = os.path.join(os.path.dirname(__file__), "files") TEST_FILES_DIR = os.path.join(os.path.dirname(__file__), "files")
FAST_BACKEND = "polling"
class TestServer(Server): class TestServer(Server):
@ -61,27 +63,36 @@ class TransmitterBase(unittest.TestCase):
def setUp(self): def setUp(self):
"""Call before every test case.""" """Call before every test case."""
self.transm = self.server._Server__transm self.transm = self.server._Server__transm
self.tmp_files = []
sock_fd, sock_name = tempfile.mkstemp('fail2ban.sock', 'transmitter') sock_fd, sock_name = tempfile.mkstemp('fail2ban.sock', 'transmitter')
os.close(sock_fd) os.close(sock_fd)
self.tmp_files.append(sock_name)
pidfile_fd, pidfile_name = tempfile.mkstemp( pidfile_fd, pidfile_name = tempfile.mkstemp(
'fail2ban.pid', 'transmitter') 'fail2ban.pid', 'transmitter')
os.close(pidfile_fd) os.close(pidfile_fd)
self.tmp_files.append(pidfile_name)
self.server.start(sock_name, pidfile_name, force=False) self.server.start(sock_name, pidfile_name, force=False)
self.jailName = "TestJail1" self.jailName = "TestJail1"
self.server.addJail(self.jailName, "auto") self.server.addJail(self.jailName, FAST_BACKEND)
def tearDown(self): def tearDown(self):
"""Call after every test case.""" """Call after every test case."""
self.server.quit() self.server.quit()
for f in self.tmp_files:
if os.path.exists(f):
os.remove(f)
def setGetTest(self, cmd, inValue, outValue=None, outCode=0, jail=None, repr_=False): def setGetTest(self, cmd, inValue, outValue=(None,), outCode=0, jail=None, repr_=False):
"""Process set/get commands and compare both return values
with outValue if it was given, otherwise with inValue"""
setCmd = ["set", cmd, inValue] setCmd = ["set", cmd, inValue]
getCmd = ["get", cmd] getCmd = ["get", cmd]
if jail is not None: if jail is not None:
setCmd.insert(1, jail) setCmd.insert(1, jail)
getCmd.insert(1, jail) getCmd.insert(1, jail)
if outValue is None: # if outValue was not given (None is now also an allowed return/compare value)
if outValue == (None,):
outValue = inValue outValue = inValue
def v(x): def v(x):
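The change from outValue=None to outValue=(None,) above is the usual sentinel-default idiom: None becomes a legitimate expected value, so "not given" has to be marked by something a caller would never pass. A tiny illustration of the same pattern with an object() sentinel (hypothetical names, not the test code itself):

_unset = object()  # unique sentinel no caller can pass by accident

def expected_value(in_value, out_value=_unset):
    # None is now a valid expected result; "not given" is detected via the sentinel
    return in_value if out_value is _unset else out_value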
@ -113,19 +124,15 @@ class TransmitterBase(unittest.TestCase):
self.assertEqual( self.assertEqual(
self.transm.proceed(["get", jail, cmd]), (0, [])) self.transm.proceed(["get", jail, cmd]), (0, []))
for n, value in enumerate(values): for n, value in enumerate(values):
self.assertEqual( ret = self.transm.proceed(["set", jail, cmdAdd, value])
self.transm.proceed(["set", jail, cmdAdd, value]), self.assertEqual((ret[0], sorted(ret[1])), (0, sorted(values[:n+1])))
(0, values[:n+1])) ret = self.transm.proceed(["get", jail, cmd])
self.assertEqual( self.assertEqual((ret[0], sorted(ret[1])), (0, sorted(values[:n+1])))
self.transm.proceed(["get", jail, cmd]),
(0, values[:n+1]))
for n, value in enumerate(values): for n, value in enumerate(values):
self.assertEqual( ret = self.transm.proceed(["set", jail, cmdDel, value])
self.transm.proceed(["set", jail, cmdDel, value]), self.assertEqual((ret[0], sorted(ret[1])), (0, sorted(values[n+1:])))
(0, values[n+1:])) ret = self.transm.proceed(["get", jail, cmd])
self.assertEqual( self.assertEqual((ret[0], sorted(ret[1])), (0, sorted(values[n+1:])))
self.transm.proceed(["get", jail, cmd]),
(0, values[n+1:]))
def jailAddDelRegexTest(self, cmd, inValues, outValues, jail): def jailAddDelRegexTest(self, cmd, inValues, outValues, jail):
cmdAdd = "add" + cmd cmdAdd = "add" + cmd
@ -165,14 +172,21 @@ class Transmitter(TransmitterBase):
self.assertEqual(self.transm.proceed(["version"]), (0, version.version)) self.assertEqual(self.transm.proceed(["version"]), (0, version.version))
def testSleep(self): def testSleep(self):
t0 = time.time() if not unittest.F2B.fast:
self.assertEqual(self.transm.proceed(["sleep", "1"]), (0, None)) t0 = time.time()
t1 = time.time() self.assertEqual(self.transm.proceed(["sleep", "0.1"]), (0, None))
# Approx 1 second delay t1 = time.time()
self.assertAlmostEqual(t1 - t0, 1, places=1) # Approx 0.1 second delay but not faster
dt = t1 - t0
self.assertTrue(0.09 < dt < 0.2, msg="Sleep was %g sec" % dt)
else: # pragma: no cover
self.assertEqual(self.transm.proceed(["sleep", "0.0001"]), (0, None))
def testDatabase(self): def testDatabase(self):
tmp, tmpFilename = tempfile.mkstemp(".db", "fail2ban_") if not unittest.F2B.memory_db:
tmp, tmpFilename = tempfile.mkstemp(".db", "fail2ban_")
else: # pragma: no cover
tmpFilename = ':memory:'
# Jails present, can't change database # Jails present, can't change database
self.setGetTestNOK("dbfile", tmpFilename) self.setGetTestNOK("dbfile", tmpFilename)
self.server.delJail(self.jailName) self.server.delJail(self.jailName)
@ -182,7 +196,7 @@ class Transmitter(TransmitterBase):
self.setGetTest("dbpurgeage", "600", 600) self.setGetTest("dbpurgeage", "600", 600)
self.setGetTestNOK("dbpurgeage", "LIZARD") self.setGetTestNOK("dbpurgeage", "LIZARD")
# the same file name (again with jails / not changed): # the same file name (again with jails / not changed):
self.server.addJail(self.jailName, "auto") self.server.addJail(self.jailName, FAST_BACKEND)
self.setGetTest("dbfile", tmpFilename) self.setGetTest("dbfile", tmpFilename)
self.server.delJail(self.jailName) self.server.delJail(self.jailName)
@ -200,12 +214,13 @@ class Transmitter(TransmitterBase):
["get", "dbpurgeage"]), ["get", "dbpurgeage"]),
(0, None)) (0, None))
# the same (again with jails / not changed): # the same (again with jails / not changed):
self.server.addJail(self.jailName, "auto") self.server.addJail(self.jailName, FAST_BACKEND)
self.assertEqual(self.transm.proceed( self.assertEqual(self.transm.proceed(
["set", "dbfile", "None"]), ["set", "dbfile", "None"]),
(0, None)) (0, None))
os.close(tmp) if not unittest.F2B.memory_db:
os.unlink(tmpFilename) os.close(tmp)
os.unlink(tmpFilename)
def testAddJail(self): def testAddJail(self):
jail2 = "TestJail2" jail2 = "TestJail2"
@ -228,13 +243,17 @@ class Transmitter(TransmitterBase):
def testStartStopJail(self): def testStartStopJail(self):
self.assertEqual( self.assertEqual(
self.transm.proceed(["start", self.jailName]), (0, None)) self.transm.proceed(["start", self.jailName]), (0, None))
time.sleep(1) time.sleep(Utils.DEFAULT_SLEEP_TIME)
# wait until started (up to 3 seconds, for as long as any RuntimeError is still raised, e.g. RuntimeError('cannot join thread before it is started',)):
self.assertTrue( Utils.wait_for(
lambda: self.server.isAlive(1) and not isinstance(self.transm.proceed(["status", self.jailName]), RuntimeError),
3) )
self.assertEqual( self.assertEqual(
self.transm.proceed(["stop", self.jailName]), (0, None)) self.transm.proceed(["stop", self.jailName]), (0, None))
self.assertTrue(self.jailName not in self.server._Server__jails) self.assertTrue(self.jailName not in self.server._Server__jails)
def testStartStopAllJail(self): def testStartStopAllJail(self):
self.server.addJail("TestJail2", "auto") self.server.addJail("TestJail2", FAST_BACKEND)
self.assertEqual( self.assertEqual(
self.transm.proceed(["start", self.jailName]), (0, None)) self.transm.proceed(["start", self.jailName]), (0, None))
self.assertEqual( self.assertEqual(
@ -242,9 +261,12 @@ class Transmitter(TransmitterBase):
# yoh: workaround for gh-146. I still think that there is some # yoh: workaround for gh-146. I still think that there is some
# race condition and missing locking somewhere, but for now # race condition and missing locking somewhere, but for now
# giving it a small delay reliably helps to proceed with tests # giving it a small delay reliably helps to proceed with tests
time.sleep(0.1) time.sleep(Utils.DEFAULT_SLEEP_TIME)
self.assertTrue( Utils.wait_for(
lambda: self.server.isAlive(2) and not isinstance(self.transm.proceed(["status", self.jailName]), RuntimeError),
3) )
self.assertEqual(self.transm.proceed(["stop", "all"]), (0, None)) self.assertEqual(self.transm.proceed(["stop", "all"]), (0, None))
time.sleep(1) self.assertTrue( Utils.wait_for( lambda: not len(self.server._Server__jails), 3) )
self.assertTrue(self.jailName not in self.server._Server__jails) self.assertTrue(self.jailName not in self.server._Server__jails)
self.assertTrue("TestJail2" not in self.server._Server__jails) self.assertTrue("TestJail2" not in self.server._Server__jails)
@ -262,6 +284,7 @@ class Transmitter(TransmitterBase):
def testJailFindTime(self): def testJailFindTime(self):
self.setGetTest("findtime", "120", 120, jail=self.jailName) self.setGetTest("findtime", "120", 120, jail=self.jailName)
self.setGetTest("findtime", "60", 60, jail=self.jailName) self.setGetTest("findtime", "60", 60, jail=self.jailName)
self.setGetTest("findtime", "30m", 30*60, jail=self.jailName)
self.setGetTest("findtime", "-60", -60, jail=self.jailName) self.setGetTest("findtime", "-60", -60, jail=self.jailName)
self.setGetTestNOK("findtime", "Dog", jail=self.jailName) self.setGetTestNOK("findtime", "Dog", jail=self.jailName)
@ -269,6 +292,7 @@ class Transmitter(TransmitterBase):
self.setGetTest("bantime", "600", 600, jail=self.jailName) self.setGetTest("bantime", "600", 600, jail=self.jailName)
self.setGetTest("bantime", "50", 50, jail=self.jailName) self.setGetTest("bantime", "50", 50, jail=self.jailName)
self.setGetTest("bantime", "-50", -50, jail=self.jailName) self.setGetTest("bantime", "-50", -50, jail=self.jailName)
self.setGetTest("bantime", "15d 5h 30m", 1315800, jail=self.jailName)
self.setGetTestNOK("bantime", "Cat", jail=self.jailName) self.setGetTestNOK("bantime", "Cat", jail=self.jailName)
def testDatePattern(self): def testDatePattern(self):
@ -298,11 +322,11 @@ class Transmitter(TransmitterBase):
self.assertEqual( self.assertEqual(
self.transm.proceed(["set", self.jailName, "banip", "127.0.0.1"]), self.transm.proceed(["set", self.jailName, "banip", "127.0.0.1"]),
(0, "127.0.0.1")) (0, "127.0.0.1"))
time.sleep(1) # Give chance to ban time.sleep(Utils.DEFAULT_SLEEP_TIME) # Give chance to ban
self.assertEqual( self.assertEqual(
self.transm.proceed(["set", self.jailName, "banip", "Badger"]), self.transm.proceed(["set", self.jailName, "banip", "Badger"]),
(0, "Badger")) #NOTE: Is IP address validated? Is DNS Lookup done? (0, "Badger")) #NOTE: Is IP address validated? Is DNS Lookup done?
time.sleep(1) # Give chance to ban time.sleep(Utils.DEFAULT_SLEEP_TIME) # Give chance to ban
# Unban IP # Unban IP
self.assertEqual( self.assertEqual(
self.transm.proceed( self.transm.proceed(
@ -474,7 +498,7 @@ class Transmitter(TransmitterBase):
jails = [self.jailName] jails = [self.jailName]
self.assertEqual(self.transm.proceed(["status"]), self.assertEqual(self.transm.proceed(["status"]),
(0, [('Number of jail', len(jails)), ('Jail list', ", ".join(jails))])) (0, [('Number of jail', len(jails)), ('Jail list', ", ".join(jails))]))
self.server.addJail("TestJail2", "auto") self.server.addJail("TestJail2", FAST_BACKEND)
jails.append("TestJail2") jails.append("TestJail2")
self.assertEqual(self.transm.proceed(["status"]), self.assertEqual(self.transm.proceed(["status"]),
(0, [('Number of jail', len(jails)), ('Jail list', ", ".join(jails))])) (0, [('Number of jail', len(jails)), ('Jail list', ", ".join(jails))]))
@ -942,7 +966,7 @@ class LoggingTests(LogCaptureTestCase):
badThread = _BadThread() badThread = _BadThread()
badThread.start() badThread.start()
badThread.join() badThread.join()
self.assertTrue(self._is_logged("Unhandled exception")) self.assertLogged("Unhandled exception")
finally: finally:
sys.__excepthook__ = prev_exchook sys.__excepthook__ = prev_exchook
self.assertEqual(len(x), 1) self.assertEqual(len(x), 1)


@ -33,6 +33,7 @@ import unittest
from .. import protocol from .. import protocol
from ..server.asyncserver import AsyncServer, AsyncServerException from ..server.asyncserver import AsyncServer, AsyncServerException
from ..server.utils import Utils
from ..client.csocket import CSocket from ..client.csocket import CSocket
@ -54,14 +55,39 @@ class Socket(unittest.TestCase):
"""Test transmitter proceed method which just returns first arg""" """Test transmitter proceed method which just returns first arg"""
return message return message
def testStopPerCloseUnexpected(self):
# start in separate thread :
serverThread = threading.Thread(
target=self.server.start, args=(self.sock_name, False))
serverThread.daemon = True
serverThread.start()
self.assertTrue(Utils.wait_for(self.server.isActive, unittest.F2B.maxWaitTime(10)))
# unexpected stop directly after start:
self.server.close()
# wait for end of thread :
Utils.wait_for(lambda: not serverThread.isAlive()
or serverThread.join(Utils.DEFAULT_SLEEP_INTERVAL), unittest.F2B.maxWaitTime(10))
self.assertFalse(serverThread.isAlive())
# clean :
self.server.stop()
self.assertFalse(self.server.isActive())
self.assertFalse(os.path.exists(self.sock_name))
def _serverSocket(self):
try:
return CSocket(self.sock_name)
except Exception as e:
return None
def testSocket(self): def testSocket(self):
serverThread = threading.Thread( serverThread = threading.Thread(
target=self.server.start, args=(self.sock_name, False)) target=self.server.start, args=(self.sock_name, False))
serverThread.daemon = True serverThread.daemon = True
serverThread.start() serverThread.start()
time.sleep(1) self.assertTrue(Utils.wait_for(self.server.isActive, unittest.F2B.maxWaitTime(10)))
time.sleep(Utils.DEFAULT_SLEEP_TIME)
client = CSocket(self.sock_name) client = Utils.wait_for(self._serverSocket, 2)
testMessage = ["A", "test", "message"] testMessage = ["A", "test", "message"]
self.assertEqual(client.send(testMessage), testMessage) self.assertEqual(client.send(testMessage), testMessage)
@ -71,7 +97,11 @@ class Socket(unittest.TestCase):
client.close() client.close()
self.server.stop() self.server.stop()
serverThread.join(1) # wait for end of thread :
Utils.wait_for(lambda: not serverThread.isAlive()
or serverThread.join(Utils.DEFAULT_SLEEP_INTERVAL), unittest.F2B.maxWaitTime(10))
self.assertFalse(serverThread.isAlive())
self.assertFalse(self.server.isActive())
self.assertFalse(os.path.exists(self.sock_name)) self.assertFalse(os.path.exists(self.sock_name))
def testSocketForce(self): def testSocketForce(self):
@ -85,10 +115,13 @@ class Socket(unittest.TestCase):
target=self.server.start, args=(self.sock_name, True)) target=self.server.start, args=(self.sock_name, True))
serverThread.daemon = True serverThread.daemon = True
serverThread.start() serverThread.start()
time.sleep(1) self.assertTrue(Utils.wait_for(self.server.isActive, unittest.F2B.maxWaitTime(10)))
self.server.stop() self.server.stop()
serverThread.join(1) # wait for end of thread :
Utils.wait_for(lambda: not serverThread.isAlive()
or serverThread.join(Utils.DEFAULT_SLEEP_INTERVAL), unittest.F2B.maxWaitTime(10))
self.assertFalse(self.server.isActive())
self.assertFalse(os.path.exists(self.sock_name)) self.assertFalse(os.path.exists(self.sock_name))


@ -0,0 +1,176 @@
# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-
# vi: set ft=python sts=4 ts=4 sw=4 noet :
# This file is part of Fail2Ban.
#
# Fail2Ban is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# Fail2Ban is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Fail2Ban; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
__author__ = "Serg G. Brester (sebres)"
__copyright__ = "Copyright (c) 2015 Serg G. Brester, 2015- Fail2Ban Contributors"
__license__ = "GPL"
from ..server.mytime import MyTime
import unittest
from ..server.ticket import Ticket, FailTicket, BanTicket
class TicketTests(unittest.TestCase):
def testTicket(self):
tm = MyTime.time()
matches = ['first', 'second']
matches2 = ['first', 'second']
matches3 = ['first', 'second', 'third']
# Ticket
t = Ticket('193.168.0.128', tm, matches)
self.assertEqual(t.getIP(), '193.168.0.128')
self.assertEqual(t.getTime(), tm)
self.assertEqual(t.getMatches(), matches2)
t.setAttempt(2)
self.assertEqual(t.getAttempt(), 2)
t.setBanCount(10)
self.assertEqual(t.getBanCount(), 10)
# default ban time (from manager):
self.assertEqual(t.getBanTime(60*60), 60*60)
self.assertFalse(t.isTimedOut(tm + 60 + 1, 60*60))
self.assertTrue(t.isTimedOut(tm + 60*60 + 1, 60*60))
t.setBanTime(60)
self.assertEqual(t.getBanTime(60*60), 60)
self.assertEqual(t.getBanTime(), 60)
self.assertFalse(t.isTimedOut(tm))
self.assertTrue(t.isTimedOut(tm + 60 + 1))
# permanent :
t.setBanTime(-1)
self.assertFalse(t.isTimedOut(tm + 60 + 1))
t.setBanTime(60)
# BanTicket
tm = MyTime.time()
matches = ['first', 'second']
ft = FailTicket('193.168.0.128', tm, matches)
ft.setBanTime(60*60)
self.assertEqual(ft.getIP(), '193.168.0.128')
self.assertEqual(ft.getTime(), tm)
self.assertEqual(ft.getMatches(), matches2)
ft.setAttempt(2)
self.assertEqual(ft.getAttempt(), 2)
# retry is max of set retry and failures:
self.assertEqual(ft.getRetry(), 2)
ft.setRetry(1)
self.assertEqual(ft.getRetry(), 2)
ft.setRetry(3)
self.assertEqual(ft.getRetry(), 3)
ft.inc()
self.assertEqual(ft.getAttempt(), 3)
self.assertEqual(ft.getRetry(), 4)
self.assertEqual(ft.getMatches(), matches2)
# with 1 match, 1 failure and factor 10 (retry count) :
ft.inc(['third'], 1, 10)
self.assertEqual(ft.getAttempt(), 4)
self.assertEqual(ft.getRetry(), 14)
self.assertEqual(ft.getMatches(), matches3)
# last time (ignored if smaller than the current ticket time):
self.assertEqual(ft.getLastTime(), tm)
ft.setLastTime(tm-60)
self.assertEqual(ft.getTime(), tm)
self.assertEqual(ft.getLastTime(), tm)
ft.setLastTime(tm+60)
self.assertEqual(ft.getTime(), tm+60)
self.assertEqual(ft.getLastTime(), tm+60)
ft.setData('country', 'DE')
self.assertEqual(ft.getData(),
{'matches': ['first', 'second', 'third'], 'failures': 4, 'country': 'DE'})
# copy all from another ticket:
ft2 = FailTicket(ticket=ft)
self.assertEqual(ft, ft2)
self.assertEqual(ft.getData(), ft2.getData())
self.assertEqual(ft2.getAttempt(), 4)
self.assertEqual(ft2.getRetry(), 14)
self.assertEqual(ft2.getMatches(), matches3)
self.assertEqual(ft2.getTime(), ft.getTime())
self.assertEqual(ft2.getLastTime(), ft.getLastTime())
self.assertEqual(ft2.getBanTime(), ft.getBanTime())
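The ban-time assertions above imply a small amount of logic: a ticket's own ban time wins over the manager default, and -1 means a permanent ban. A sketch of that check, written as free functions for brevity (the real methods live on the Ticket class):

def effective_ban_time(ticket_ban_time, default_ban_time):
    # ticket-specific ban time if set, otherwise the manager/jail default
    return default_ban_time if ticket_ban_time is None else ticket_ban_time

def is_timed_out(now, ticket_time, ticket_ban_time, default_ban_time=None):
    # a ban time of -1 never expires; otherwise expire after ticket_time + ban_time
    ban_time = effective_ban_time(ticket_ban_time, default_ban_time)
    if ban_time == -1:
        return False
    return now > ticket_time + ban_time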
def testTicketData(self):
t = BanTicket('193.168.0.128', None, ['first', 'second'])
# expand data (no overwrites, matches are available) :
t.setData('region', 'Hamburg', 'country', 'DE', 'city', 'Hamburg')
self.assertEqual(
t.getData(),
{'matches': ['first', 'second'], 'failures':0, 'region': 'Hamburg', 'country': 'DE', 'city': 'Hamburg'})
# at once as dict (single argument, overwrites it completely, no more matches/failures) :
t.setData({'region': None, 'country': 'FR', 'city': 'Paris'},)
self.assertEqual(
t.getData(),
{'city': 'Paris', 'country': 'FR'})
# at once as dict (overwrites it completely, no more matches/failures) :
t.setData({'region': 'Hamburg', 'country': 'DE', 'city': None})
self.assertEqual(
t.getData(),
{'region': 'Hamburg', 'country': 'DE'})
self.assertEqual(
t.getData('region'),
'Hamburg')
self.assertEqual(
t.getData('country'),
'DE')
# again, named arguments:
t.setData(region='Bremen', city='Bremen')
self.assertEqual(t.getData(),
{'region': 'Bremen', 'country': 'DE', 'city': 'Bremen'})
# again, but as args (key value pair):
t.setData('region', 'Brandenburg', 'city', 'Berlin')
self.assertEqual(
t.getData('region'),
'Brandenburg')
self.assertEqual(
t.getData('city'),
'Berlin')
self.assertEqual(
t.getData(),
{'city':'Berlin', 'region': 'Brandenburg', 'country': 'DE'})
# iterator filter :
self.assertEqual(
t.getData(('city', 'country')),
{'city':'Berlin', 'country': 'DE'})
# callable filter :
self.assertEqual(
t.getData(lambda k: k.upper() == 'COUNTRY'),
{'country': 'DE'})
# remove one data entry:
t.setData('city', None)
self.assertEqual(
t.getData(),
{'region': 'Brandenburg', 'country': 'DE'})
# default if not available:
self.assertEqual(
t.getData('city', 'Unknown'),
'Unknown')
# add continent :
t.setData('continent', 'Europe')
# again, but as an argument list (overwrites only the given keys, leaves continent unchanged) :
t.setData(*['country', 'RU', 'region', 'Moscow'])
self.assertEqual(
t.getData(),
{'continent': 'Europe', 'country': 'RU', 'region': 'Moscow'})
# clear:
t.setData({})
self.assertEqual(t.getData(), {})
self.assertEqual(t.getData('anything', 'default'), 'default')
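testTicketData exercises a deliberately flexible setData/getData API: setData accepts a single dict (full replacement), keyword arguments, or a flat key/value argument list, and entries set to None are dropped; getData accepts no filter, a single key with an optional default, an iterable of keys, or a callable predicate. A minimal dispatch sketch consistent with those assertions (not the actual Ticket implementation):

class DataMixin(object):
    def __init__(self):
        self._data = {}

    def setData(self, *args, **kwargs):
        if len(args) == 1 and isinstance(args[0], dict):
            # a single dict argument replaces the data completely
            self._data = dict(args[0])
        else:
            # a flat argument list is treated as key/value pairs
            self._data.update(dict(zip(args[0::2], args[1::2])))
        self._data.update(kwargs)
        # entries explicitly set to None are removed
        self._data = dict((k, v) for k, v in self._data.items() if v is not None)

    def getData(self, key=None, default=None):
        if key is None:
            return self._data
        if callable(key):
            # callable filter: keep entries whose key satisfies the predicate
            return dict((k, v) for k, v in self._data.items() if key(k))
        if isinstance(key, (tuple, list, set)):
            # iterable filter: keep only the listed keys
            return dict((k, v) for k, v in self._data.items() if k in key)
        return self._data.get(key, default)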

Some files were not shown because too many files have changed in this diff.