v6.3.18.0

Changelog: https://roxy-wi.org/changelog#6_3_18
pull/364/head v6.3.18.0
Aidaho 2023-08-17 14:37:08 +03:00
parent 2a50d5b869
commit 4a83b3696c
26 changed files with 610 additions and 128 deletions

View File

@ -2,8 +2,6 @@
Web interface (user-friendly web GUI, alerting, monitoring and security) for managing HAProxy, Nginx and Keepalived servers. Leave your [feedback](https://github.com/hap-wi/roxy-wi/issues)
# Get involved
* [Youtube Demo video](https://www.youtube.com/channel/UCo0lCg24j-H4f0S9kMjp-_w)
* [Twitter](https://twitter.com/roxy_wi), subscribe!
* [Telegram Channel](https://t.me/roxy_wi_channel) about Roxy-WI, talks and questions are welcome
# Demo site
@ -12,17 +10,17 @@ Web interface(user-friendly web GUI, alerting, monitoring and secure) for managi
![alt text](https://roxy-wi.org/static/images/viewstat.png "HAProxy state page")
# Features:
1. Installing and updating HAProxy, Nginx and Keepalived with Roxy-WI as a system service
1. Installing and updating HAProxy, Nginx, Apache and Keepalived with Roxy-WI as a system service
2. Installing and updating HAProxy and Nginx with Roxy-WI as a Docker service
3. Installing and updating Grafana, Prometheus servers with Roxy-WI
4. Installing and updating HAProxy and Nginx exporters with Roxy-WI
4. Installing and updating HAProxy, Nginx, Apache, Keepalived and Node exporters with Roxy-WI
5. Server provisioning on AWS, DigitalOcean and G-Core Labs
6. Downloading, updating and formatting the GeoIP database into a format acceptable to HAProxy with Roxy-WI
7. Dynamic changes of Maxconn, black/white lists and backend IP addresses and ports, with changes saved to the config file
8. Configuring HAProxy, Nginx, Apache and Keepalived in a jiffy with Roxy-WI
9. Viewing and analysing the status of all Frontend/backend servers via Roxy-WI from a single control panel
10. Enabling/disabling servers through stats page without rebooting HAProxy
11. Viewing/Analysing HAProxy, Nginx and Apache logs right from the Roxy-WI web interface
11. Viewing/Analysing HAProxy, Nginx, Apache and Keepalived logs right from the Roxy-WI web interface
12. Creating and visualizing the HAProxy workflow from the web UI
13. Pushing your changes to your HAProxy, Nginx, Apache and Keepalived servers with a single click via the web interface
14. Getting info on past changes, evaluating your config files and restoring the previous stable config at any time with a single click right from the web interface
@ -32,8 +30,8 @@ Web interface(user-friendly web GUI, alerting, monitoring and secure) for managi
18. Managing the ports assigned to Frontend automatically
19. Evaluating the changes of recent configs pushed to HAProxy, Nginx, Apache and Keepalived instances right from the Web UI
20. Multiple user roles for privilege-based viewing and editing of configs
21. Creating Groups and adding/removing servers to ensure the proper identification for your HAProxy and Nginx Clusters
22. Sending notifications from Roxy-WI via Telegram, Slack, Email and via the web interface
21. Creating Groups and adding/removing servers to ensure the proper identification for your HAProxy, Nginx and Apache Clusters
22. Sending notifications from Roxy-WI via Telegram, Slack, Email, PagerDuty and via the web interface
23. Supporting high availability to ensure uptime for all configured master/slave servers
24. Support of SSL (including Let's Encrypt)
25. Support of SSH Key for managing multiple HAProxy, Nginx, Apache and Keepalived Servers straight from Roxy-WI
@ -47,7 +45,7 @@ Web interface(user-friendly web GUI, alerting, monitoring and secure) for managi
33. Keeping HAProxy, Nginx, Apache and Keepalived services active
34. Possibility to hide parts of the config with tags for users with "guest" role: "HideBlockStart" and "HideBlockEnd"
35. Mobile-ready design
36. Simple port monitoring (SMON)
36. [SMON](https://roxy-wi.org/services/smon) (Check: Ping, TCP/UDP, HTTP(s), SSL expiry, HTTP body answer, DNS records)
37. Backup HAProxy, Nginx, Apache and Keepalived config files through Roxy-WI
38. Managing OpenVPN3 as a client via Roxy-WI
@ -108,7 +106,7 @@ Login https://roxy-wi-server/users.py, and add: users, groups and servers. Defau
If you get this error:
```
Forbidden
You don't have permission to access /app/overview.py on this server.
You don't have permission to access /app/overview.py on this server.
```
Check the owner (it must be apache, or whichever other user Apache runs as)
@ -123,7 +121,7 @@ Do this:
$ cd /var/www/haproxy-wi/app
$ ./create_db.py
```
and check executable py files
and check executable .py files
If you see plain text, check the "Directory" section in the httpd config
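A minimal sketch of the checks above, assuming the default install path /var/www/haproxy-wi and that Apache runs as the `apache` user (adjust both to your setup):
```
$ chown -R apache:apache /var/www/haproxy-wi
$ chmod +x /var/www/haproxy-wi/app/*.py
$ grep -A 6 'Directory "/var/www/haproxy-wi' /etc/httpd/conf.d/*.conf   # the section must allow CGI execution of .py files
```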

View File

@ -19,9 +19,6 @@ def default_values():
'The directory must be owned by the user specified in SSH settings', 'group': '1'},
{'param': 'cert_path', 'value': '/etc/ssl/certs/', 'section': 'main',
'desc': 'Path to SSL dir. Folder owner must be a user which set in the SSH settings. Path must exist', 'group': '1'},
{'param': 'ssl_local_path', 'value': 'certs', 'section': 'main',
'desc': 'Path to the directory with the saved local SSL certificates. The value of this parameter is '
'specified as a relative path beginning with $HOME_ROXY_WI/app/', 'group': '1'},
{'param': 'maxmind_key', 'value': '', 'section': 'main', 'desc': 'License key for downloading GeoIP DB. You can create it on maxmind.com', 'group': '1'},
{'param': 'haproxy_path_logs', 'value': '/var/log/haproxy/', 'section': 'haproxy', 'desc': 'The path for HAProxy logs', 'group': '1'},
{'param': 'syslog_server_enable', 'value': '0', 'section': 'logs', 'desc': 'Enable getting logs from a syslog server', 'group': '1'},
@ -841,9 +838,18 @@ def update_db_v_6_3_17():
print("Updating... DB has been updated to version 6.3.17")
def update_db_v_6_3_18():
try:
Setting.delete().where(Setting.param == 'ssl_local_path').execute()
except Exception as e:
print("An error occurred:", e)
else:
print("Updating... DB has been updated to version 6.3.18")
def update_ver():
try:
Version.update(version='6.3.17.0').execute()
Version.update(version='6.3.18.0').execute()
except Exception:
print('Cannot update version')
@ -876,6 +882,7 @@ def update_all():
update_db_v_6_3_13_4()
update_db_v_6_3_13_5()
update_db_v_6_3_17()
update_db_v_6_3_18()
update_ver()

View File

@ -419,13 +419,9 @@ def del_ssl_cert(server_ip: str, cert_id: str) -> None:
def upload_ssl_cert(server_ip: str, ssl_name: str, ssl_cont: str) -> None:
cert_local_dir = f"{os.path.dirname(os.getcwd())}/{sql.get_setting('ssl_local_path')}"
cert_path = sql.get_setting('cert_path')
name = ''
if not os.path.exists(cert_local_dir):
os.makedirs(cert_local_dir)
if ssl_name is None:
print('error: Please enter a desired name')
else:
@ -450,9 +446,9 @@ def upload_ssl_cert(server_ip: str, ssl_name: str, ssl_cont: str) -> None:
print(f'success: the SSL file has been uploaded to {server_ip} into: {cert_path}/{name}')
except Exception as e:
roxywi_common.logging('Roxy-WI server', e.args[0], roxywi=1)
try:
os.rename(name, cert_local_dir)
except OSError as e:
roxywi_common.logging('Roxy-WI server', e.args[0], roxywi=1)
# try:
# os.rename(name, cert_local_dir)
# except OSError as e:
# roxywi_common.logging('Roxy-WI server', e.args[0], roxywi=1)
roxywi_common.logging(server_ip, f"add.py#ssl uploaded a new SSL cert {name}", roxywi=1, login=1)

View File

@ -209,6 +209,20 @@ class Backup(BaseModel):
table_name = 'backups'
class S3Backup(BaseModel):
id = AutoField()
server = CharField()
s3_server = CharField()
bucket = CharField()
secret_key = CharField()
access_key = CharField()
time = CharField()
description = CharField(null=True)
class Meta:
table_name = 's3_backups'
class Metrics(BaseModel):
serv = CharField()
curr_con = IntegerField()
@ -669,4 +683,4 @@ def create_tables():
ProvisionedServers, MetricsHttpStatus, SMON, WafRules, Alerts, GeoipCodes, NginxMetrics,
SystemInfo, Services, UserName, GitSetting, CheckerSetting, ApacheMetrics, ProvisionParam,
WafNginx, ServiceStatus, KeepaliveRestart, PD, SmonHistory, SmonTcpCheck, SmonHttpCheck,
SmonPingCheck, SmonDnsCheck])
SmonPingCheck, SmonDnsCheck, S3Backup])

View File

@ -26,7 +26,7 @@ def get_setting(param, **kwargs):
except Exception:
pass
if user_group == '' or param in ('ssl_local_path', 'proxy'):
if user_group == '' or param in ('proxy',):
user_group = 1
if kwargs.get('all'):
@ -1019,6 +1019,19 @@ def insert_backup_job(server, rserver, rpath, backup_type, time, cred, descripti
return True
def insert_s3_backup_job(server, s3_server, bucket, secret_key, access_key, time, description):
try:
S3Backup.insert(
server=server, s3_server=s3_server, bucket=bucket, secret_key=secret_key, access_key=access_key, time=time,
description=description
).execute()
except Exception as e:
out_error(e)
return False
else:
return True
def select_backups(**kwargs):
if kwargs.get("server") is not None and kwargs.get("rserver") is not None:
query = Backup.select().where((Backup.server == kwargs.get("server")) & (Backup.rhost == kwargs.get("rserver")))
@ -1033,6 +1046,24 @@ def select_backups(**kwargs):
return query_res
def select_s3_backups(**kwargs):
if kwargs.get("server") is not None and kwargs.get("bucket") is not None:
query = S3Backup.select().where(
(S3Backup.server == kwargs.get("server")) &
(S3Backup.s3_server == kwargs.get("s3_server")) &
(S3Backup.bucket == kwargs.get("bucket"))
)
else:
query = S3Backup.select().order_by(S3Backup.id)
try:
query_res = query.execute()
except Exception as e:
out_error(e)
else:
return query_res
def update_backup(server, rserver, rpath, backup_type, time, cred, description, backup_id):
backup_update = Backup.update(
server=server, rhost=rserver, rpath=rpath, backup_type=backup_type, time=time,
@ -1058,6 +1089,17 @@ def delete_backups(backup_id: int) -> bool:
return True
def delete_s3_backups(backup_id: int) -> bool:
query = S3Backup.delete().where(S3Backup.id == backup_id)
try:
query.execute()
except Exception as e:
out_error(e)
return False
else:
return True
def check_exists_backup(server: str) -> bool:
try:
backup = Backup.get(Backup.server == server)
@ -1070,9 +1112,9 @@ def check_exists_backup(server: str) -> bool:
return False
def check_exists_s3_backup(server_id: int) -> bool:
def check_exists_s3_backup(server: str) -> bool:
try:
backup = S3Backup.get(S3Backup.server_id == server_id)
backup = S3Backup.get(S3Backup.server == server)
except Exception:
pass
else:

View File

@ -492,10 +492,10 @@ def delete_server(server_id: int) -> None:
server_ip = s[2]
if sql.check_exists_backup(server_ip):
print('warning: Delete the backup first ')
print('warning: Delete the backup first')
return
if sql.check_exists_s3_backup(server_id):
print('warning: Delete the S3 backup first ')
if sql.check_exists_s3_backup(server_ip):
print('warning: Delete the S3 backup first')
return
if sql.delete_server(server_id):
sql.delete_waf_server(server_id)

View File

@ -6,6 +6,7 @@ import modules.db.sql as sql
import modules.server.ssh as ssh_mod
import modules.server.server as server_mod
import modules.roxywi.common as roxywi_common
import modules.service.installation as installation_mod
def backup(serv, rpath, time, backup_type, rserver, cred, deljob, update, description) -> None:
@ -51,8 +52,7 @@ def backup(serv, rpath, time, backup_type, rserver, cred, deljob, update, descri
)
print(template)
print('success: Backup job has been created')
roxywi_common.logging('backup ', f' a new backup job for server {serv} has been created', roxywi=1,
login=1)
roxywi_common.logging('backup ', f' a new backup job for server {serv} has been created', roxywi=1, login=1)
else:
print('error: Cannot add the job into DB')
elif deljob:
@ -67,13 +67,104 @@ def backup(serv, rpath, time, backup_type, rserver, cred, deljob, update, descri
os.remove(script)
def create_s3_backup() -> None:
...
def s3_backup(server, s3_server, bucket, secret_key, access_key, time, deljob, description) -> None:
script = 's3_backup.sh'
tag = 'add'
if deljob:
time = ''
secret_key = ''
access_key = ''
tag = 'delete'
else:
if sql.check_exists_s3_backup(server):
raise Exception(f'error: Backup job for {server} already exists')
os.system(f"cp scripts/{script} .")
commands = [
f"chmod +x {script} && ./{script} SERVER={server} S3_SERVER={s3_server} BUCKET={bucket} SECRET_KEY={secret_key} ACCESS_KEY={access_key} TIME={time} TAG={tag}"
]
return_out = server_mod.subprocess_execute_with_rc(commands[0])
if not deljob:
try:
if installation_mod.show_installation_output(return_out['error'], return_out['output'], 'S3 backup', rc=return_out['rc'], api=1):
try:
sql.insert_s3_backup_job(server, s3_server, bucket, secret_key, access_key, time, description)
except Exception as e:
raise Exception(f'error: {e}')
except Exception as e:
raise Exception(e)
env = Environment(loader=FileSystemLoader('templates/ajax'), autoescape=True)
template = env.get_template('new_s3_backup.html')
template = template.render(backups=sql.select_s3_backups(server=server, s3_server=s3_server, bucket=bucket))
print(template)
print('success: Backup job has been created')
roxywi_common.logging('backup ', f' a new S3 backup job for server {server} has been created', roxywi=1, login=1)
elif deljob:
sql.delete_s3_backups(deljob)
print('Ok')
roxywi_common.logging('backup ', f' a S3 backup job for server {server} has been deleted', roxywi=1, login=1)
def delete_s3_backup() -> None:
...
def git_backup(server_id, service_id, git_init, repo, branch, period, cred, deljob, description) -> None:
servers = roxywi_common.get_dick_permit()
proxy = sql.get_setting('proxy')
services = sql.select_services()
server_ip = sql.select_server_ip_by_id(server_id)
service_name = sql.select_service_name_by_id(service_id).lower()
service_config_dir = sql.get_setting(service_name + '_dir')
script = 'git_backup.sh'
proxy_serv = ''
ssh_settings = ssh_mod.return_ssh_keys_path('localhost', id=int(cred))
os.system(f"cp scripts/{script} .")
def show_s3_backup():
...
if proxy is not None and proxy != '' and proxy != 'None':
proxy_serv = proxy
if repo is None or git_init == '0':
repo = ''
if branch is None or branch == '0':
branch = 'main'
commands = [
f"chmod +x {script} && ./{script} HOST={server_ip} DELJOB={deljob} SERVICE={service_name} INIT={git_init} "
f"SSH_PORT={ssh_settings['port']} PERIOD={period} REPO={repo} BRANCH={branch} CONFIG_DIR={service_config_dir} "
f"PROXY={proxy_serv} USER={ssh_settings['user']} KEY={ssh_settings['key']}"
]
output, error = server_mod.subprocess_execute(commands[0])
for line in output:
if any(s in line for s in ("Traceback", "FAILED")):
try:
print('error: ' + line)
break
except Exception:
print('error: ' + output)
break
else:
if deljob == '0':
if sql.insert_new_git(
server_id=server_id, service_id=service_id, repo=repo, branch=branch,
period=period, cred=cred, description=description
):
gits = sql.select_gits(server_id=server_id, service_id=service_id)
sshs = sql.select_ssh()
lang = roxywi_common.get_user_lang()
env = Environment(loader=FileSystemLoader('templates/ajax'), autoescape=True)
template = env.get_template('new_git.html')
template = template.render(gits=gits, sshs=sshs, servers=servers, services=services, new_add=1, lang=lang)
print(template)
print('success: Git job has been created')
roxywi_common.logging(
server_ip, ' A new git job has been created', roxywi=1, login=1, keep_history=1, service=service_name
)
else:
if sql.delete_git(deljob):
print('Ok')
os.remove(script)

View File

@ -450,7 +450,7 @@ if form.getvalue('keepalived_exp_install'):
if form.getvalue('backup') or form.getvalue('deljob') or form.getvalue('backupupdate'):
import modules.service.backup as backup_mod
serv = common.is_ip_or_dns(form.getvalue('server'))
server = common.is_ip_or_dns(form.getvalue('server'))
rpath = common.checkAjaxInput(form.getvalue('rpath'))
time = common.checkAjaxInput(form.getvalue('time'))
backup_type = common.checkAjaxInput(form.getvalue('type'))
@ -460,10 +460,28 @@ if form.getvalue('backup') or form.getvalue('deljob') or form.getvalue('backupup
update = common.checkAjaxInput(form.getvalue('backupupdate'))
description = common.checkAjaxInput(form.getvalue('description'))
backup_mod.backup(serv, rpath, time, backup_type, rserver, cred, deljob, update, description)
try:
backup_mod.backup(server, rpath, time, backup_type, rserver, cred, deljob, update, description)
except Exception as e:
print(e)
if any((form.getvalue('s3_backup_server'), form.getvalue('dels3job'))):
import modules.service.backup as backup_mod
server = common.is_ip_or_dns(form.getvalue('s3_backup_server'))
s3_server = common.checkAjaxInput(form.getvalue('s3_server'))
bucket = common.checkAjaxInput(form.getvalue('s3_bucket'))
secret_key = common.checkAjaxInput(form.getvalue('s3_secret_key'))
access_key = common.checkAjaxInput(form.getvalue('s3_access_key'))
time = common.checkAjaxInput(form.getvalue('time'))
deljob = common.checkAjaxInput(form.getvalue('dels3job'))
description = common.checkAjaxInput(form.getvalue('description'))
try:
backup_mod.s3_backup(server, s3_server, bucket, secret_key, access_key, time, deljob, description)
except Exception as e:
print(e)
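The handler reads the same form fields that the web UI posts from addS3Backup() (see the JS changes further down). A rough curl sketch, assuming an authenticated session cookie and a valid token; all values here are hypothetical. Deleting a job posts `dels3job=<job id>` instead of `s3_backup_server`:
```
curl -s 'https://roxy-wi-server/options.py' \
  -b 'uuid=<session-uuid>' \
  --data-urlencode 's3_backup_server=10.0.0.5' \
  --data-urlencode 's3_server=s3.example.com' \
  --data-urlencode 's3_bucket=roxy-wi-configs' \
  --data-urlencode 's3_access_key=<access-key>' \
  --data-urlencode 's3_secret_key=<secret-key>' \
  --data-urlencode 'time=weekly' \
  --data-urlencode 'description=config backup to S3' \
  --data-urlencode 'token=<token>'
```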
if form.getvalue('git_backup'):
import modules.service.backup as backup_mod
server_id = form.getvalue('server')
service_id = form.getvalue('git_service')
git_init = form.getvalue('git_init')
@ -473,64 +491,8 @@ if form.getvalue('git_backup'):
cred = form.getvalue('cred')
deljob = form.getvalue('git_deljob')
description = form.getvalue('description')
servers = roxywi_common.get_dick_permit()
proxy = sql.get_setting('proxy')
services = sql.select_services()
server_ip = sql.select_server_ip_by_id(server_id)
service_name = sql.select_service_name_by_id(service_id).lower()
service_config_dir = sql.get_setting(service_name + '_dir')
script = 'git_backup.sh'
proxy_serv = ''
ssh_settings = ssh_mod.return_ssh_keys_path('localhost', id=int(cred))
os.system(f"cp scripts/{script} .")
if proxy is not None and proxy != '' and proxy != 'None':
proxy_serv = proxy
if repo is None or git_init == '0':
repo = ''
if branch is None or branch == '0':
branch = 'main'
commands = [
f"chmod +x {script} && ./{script} HOST={server_ip} DELJOB={deljob} SERVICE={service_name} INIT={git_init} "
f"SSH_PORT={ssh_settings['port']} PERIOD={period} REPO={repo} BRANCH={branch} CONFIG_DIR={service_config_dir} "
f"PROXY={proxy_serv} USER={ssh_settings['user']} KEY={ssh_settings['key']}"
]
output, error = server_mod.subprocess_execute(commands[0])
for line in output:
if any(s in line for s in ("Traceback", "FAILED")):
try:
print('error: ' + line)
break
except Exception:
print('error: ' + output)
break
else:
if deljob == '0':
if sql.insert_new_git(
server_id=server_id, service_id=service_id, repo=repo, branch=branch,
period=period, cred=cred, description=description
):
gits = sql.select_gits(server_id=server_id, service_id=service_id)
sshs = sql.select_ssh()
lang = roxywi_common.get_user_lang()
env = Environment(loader=FileSystemLoader('templates/ajax'), autoescape=True)
template = env.get_template('new_git.html')
template = template.render(gits=gits, sshs=sshs, servers=servers, services=services, new_add=1, lang=lang)
print(template)
print('success: Git job has been created')
roxywi_common.logging(
server_ip, ' A new git job has been created', roxywi=1, login=1, keep_history=1, service=service_name
)
else:
if sql.delete_git(form.getvalue('git_backup')):
print('Ok')
os.remove(script)
backup_mod.git_backup(server_id, service_id, git_init, repo, branch, period, cred, deljob, description)
if form.getvalue('install_service'):
server_ip = common.is_ip_or_dns(form.getvalue('install_service'))

View File

@ -0,0 +1,49 @@
- hosts: 127.0.0.1
connection: local
become: yes
become_method: sudo
gather_facts: no
tasks:
- name: Add S3 Job
tags: add
block:
- name: Install s3cmd
package:
name: s3cmd
state: present
- name: Find full path to s3cmd
shell: which s3cmd
register: which_s3cmd
- name: Add keys var
set_fact: keys="--access_key={{ACCESS_KEY}} --secret_key={{SECRET_KEY}} --host={{S3_SERVER}} --host-bucket={{S3_SERVER}}:443"
- name: Create bucket
shell: "{{ which_s3cmd.stdout }} mb s3://{{ BUCKET }} {{ keys }}"
ignore_errors: true
- name: Add CRON job
cron:
name: "Roxy-WI S3 Backup configs for server {{ SERVER }} {{ BUCKET }} {{ item }}"
special_time: "{{ TIME }}"
job: "{{ which_s3cmd.stdout }} sync /var/lib/roxy-wi/configs/{{ item }}/{{ SERVER }}*.conf s3://{{ BUCKET }}/{{ SERVER }}/{{ item }}/ {{ keys }}"
with_items:
- kp_config
- hap_config
- nginx_config
- apache_config
- name: Delete S3 Job
tags: delete
block:
- name: Removes backup jobs
cron:
name: "Roxy-WI S3 Backup configs for server {{ SERVER }} {{ BUCKET }} {{ item }}"
state: absent
with_items:
- kp_config
- hap_config
- nginx_config
- apache_config
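With the variables substituted, the cron entry created by the add block looks roughly like this (SERVER, BUCKET, keys, endpoint and the s3cmd path are hypothetical; the weekly period maps to cron's @weekly):
```
#Ansible: Roxy-WI S3 Backup configs for server 10.0.0.5 roxy-wi-configs hap_config
@weekly /usr/bin/s3cmd sync /var/lib/roxy-wi/configs/hap_config/10.0.0.5*.conf s3://roxy-wi-configs/10.0.0.5/hap_config/ --access_key=<access-key> --secret_key=<secret-key> --host=s3.example.com --host-bucket=s3.example.com:443
```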

app/scripts/s3_backup.sh Normal file, 34 lines
View File

@ -0,0 +1,34 @@
#!/bin/bash
for ARGUMENT in "$@"
do
KEY=$(echo $ARGUMENT | cut -f1 -d=)
VALUE=$(echo $ARGUMENT | cut -f2 -d=)
case "$KEY" in
SERVER) SERVER=${VALUE} ;;
S3_SERVER) S3_SERVER=${VALUE} ;;
BUCKET) BUCKET=${VALUE} ;;
SECRET_KEY) SECRET_KEY=${VALUE} ;;
ACCESS_KEY) ACCESS_KEY=${VALUE} ;;
TAG) TAG=${VALUE} ;;
TIME) TIME=${VALUE} ;;
*)
esac
done
export ANSIBLE_HOST_KEY_CHECKING=False
export ANSIBLE_DISPLAY_SKIPPED_HOSTS=False
export ACTION_WARNINGS=False
export LOCALHOST_WARNING=False
export COMMAND_WARNINGS=False
PWD=`pwd`
PWD=$PWD/scripts/ansible/
ansible-playbook $PWD/roles/s3_backup.yml -e "SERVER=$SERVER S3_SERVER=$S3_SERVER BUCKET=$BUCKET SECRET_KEY=$SECRET_KEY ACCESS_KEY=$ACCESS_KEY TIME=$TIME" -t $TAG -i $PWD/$HOST
if [ $? -gt 0 ]
then
echo "error: Cannot create a S3 backup job"
exit 1
fi
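modules/service/backup.py copies this script next to the app and calls it with key=value arguments; an example invocation with hypothetical values (TAG is either add or delete):
```
$ ./s3_backup.sh SERVER=10.0.0.5 S3_SERVER=s3.example.com BUCKET=roxy-wi-configs SECRET_KEY=<secret-key> ACCESS_KEY=<access-key> TIME=weekly TAG=add
```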

View File

@ -23,19 +23,19 @@ except Exception:
sys.exit()
roxywi_auth.page_for_admin(level=2)
try:
ldap_enable = sql.get_setting('ldap_enable')
user_group = roxywi_common.get_user_group(id=1)
settings = sql.get_setting('', all=1)
geoip_country_codes = sql.select_geoip_country_codes()
services = sql.select_services()
gits = sql.select_gits()
servers = roxywi_common.get_dick_permit(virt=1, disable=0, only_group=1)
masters = sql.select_servers(get_master_servers=1, uuid=user_params['user_uuid'].value)
is_needed_tool = common.is_tool('ansible')
user_roles = sql.select_user_roles_by_group(user_group)
except Exception:
pass
ldap_enable = sql.get_setting('ldap_enable')
user_group = roxywi_common.get_user_group(id=1)
settings = sql.get_setting('', all=1)
geoip_country_codes = sql.select_geoip_country_codes()
services = sql.select_services()
gits = sql.select_gits()
servers = roxywi_common.get_dick_permit(virt=1, disable=0, only_group=1)
masters = sql.select_servers(get_master_servers=1, uuid=user_params['user_uuid'].value)
is_needed_tool = common.is_tool('ansible')
user_roles = sql.select_user_roles_by_group(user_group)
backups = sql.select_backups()
s3_backups = sql.select_s3_backups()
try:
user_subscription = roxywi_common.return_user_status()
@ -52,7 +52,7 @@ rendered_template = template.render(
h2=1, title=title, role=user_params['role'], user=user_params['user'], users=sql.select_users(group=user_group),
groups=sql.select_groups(), servers=servers, roles=sql.select_roles(), sshs=sql.select_ssh(group=user_group),
masters=masters, group=user_group, services=services, timezones=pytz.all_timezones, guide_me=1,
token=user_params['token'], settings=settings, backups=sql.select_backups(), page="servers.py",
token=user_params['token'], settings=settings, backups=backups, s3_backups=s3_backups, page="servers.py",
geoip_country_codes=geoip_country_codes, user_services=user_params['user_services'], ldap_enable=ldap_enable,
user_status=user_subscription['user_status'], user_plan=user_subscription['user_plan'], gits=gits,
is_needed_tool=is_needed_tool, lang=user_params['lang'], user_roles=user_roles

View File

@ -0,0 +1,37 @@
{% for b in backups %}
<tr class="newbackup" id="s3-backup-table-{{b.id}}">
<td class="padding10 first-collumn">
<span id="backup-s3-server-{{b.id}}" style="display: none">{{ b.server }}</span>
{{ b.server }}
</td>
<td>
<span id="s3-server-{{b.id}}">{{b.s3_server}}</span>
</td>
<td>
<span id="bucket-{{b.id}}">{{b.bucket}}</span>
</td>
<td>
<span id="s3-backup-time-{{b.id}}">{{b.time}}</span>
</td>
<td>
{% if b.description != 'None' %}
<span id="s3-backup-description-{{b.id}}">{{b.description}}</span>
{% else %}
<span id="s3-backup-description-{{b.id}}"></span>
{% endif %}
</td>
<!-- <td>-->
<!-- <a class="add" onclick="cloneS3Backup({{b.id}})" id="clone-s3-backup{{b.id}}" title="Clone S3 {{b.server}}" style="cursor: pointer;"></a>-->
<!-- </td>-->
<td>
<a class="delete" onclick="confirmDeleteS3Backup({{b.id}})" title="Delete S3 backup {{b.server}}" style="cursor: pointer;"></a>
</td>
</tr>
<script>
$( function() {
$("#s3-backup-time-{{ b.id}}" ).selectmenu({
width: 100
});
});
</script>
{% endfor %}

View File

@ -0,0 +1,68 @@
<tr>
<td class="padding20" style="width: 40%;">
Select a server for backup
<span class="need-field">*</span>
</td>
<td>
<select autofocus required name="s3-backup-server" id="s3-backup-server">
<option disabled>------</option>
{% for s in servers %}
<option value="{{ s.2 }}">{{ s.1 }}</option>
{% endfor %}
</select>
</td>
</tr>
<tr>
<td class="padding20">
Enter an S3 server
<span class="need-field">*</span>
</td>
<td>
{{ input('s3_server', size='30', required="required") }}
</td>
</tr>
<tr>
<td class="padding20">
Enter bucket
<span class="need-field">*</span>
</td>
<td>
{{ input('s3_bucket', size='30', required="required") }}
</td>
</tr>
<tr>
<td class="padding20">
Access key
<span class="need-field">*</span>
</td>
<td>
{{ input('s3_access_key', size='30', required="required", type='password') }}
</td>
</tr>
<tr>
<td class="padding20">
Secret key
<span class="need-field">*</span>
</td>
<td>
{{ input('s3_secret_key', size='30', required="required", type='password') }}
</td>
</tr>
<tr>
<td class="padding20">
Period time
<span class="need-field">*</span>
</td>
<td>
{% set values = {'hourly':'hourly','daily':'daily','weekly':'weekly', 'monthly':'monthly'} %}
{{ select('s3-backup-time', values=values, selected='weekly', required='required', class='force_close') }}
</td>
</tr>
<tr>
<td class="padding20">
Description
</td>
<td>
{{ input('s3-backup-description', size='30') }}
</td>
</tr>

View File

@ -46,15 +46,15 @@
{% include 'include/no_sub.html' %}
{% else %}
<table class="overview" id="ajax-backup-table">
<caption><h3>Filesystem</h3></caption>
<caption><h3>Remote server</h3></caption>
<tr class="overviewHead">
<td class="padding10 first-collumn">{{lang.words.servers|title()}}</td>
<td class="padding10">{{lang.words.remote|title()}} {{lang.words.server}}</td>
<td class="padding10">{{lang.words.remote|title()}} {{lang.words.folder2}}</td>
<td class="padding10">{{lang.words.backup|title()}} {{lang.words.type}}</td>
<td class="padding10">{{lang.words.period|title()}}</td>
<td class="padding10">{{lang.words.creds|title()}}</td>
<td class="padding10">{{lang.words.desc|title()}}</td>
<td style="width: 10%">{{lang.words.remote|title()}} {{lang.words.server}}</td>
<td style="width: 10%">{{lang.words.remote|title()}} {{lang.words.folder2}}</td>
<td style="width: 15%">{{lang.words.backup|title()}} {{lang.words.type}}</td>
<td style="width: 15%">{{lang.words.period|title()}}</td>
<td style="width: 15%">{{lang.words.creds|title()}}</td>
<td style="width: 100%">{{lang.words.desc|title()}}</td>
<td style="margin-left: 5px;"></td>
<td></td>
</tr>
@ -114,6 +114,58 @@
</table>
<br /><span class="add-button" title="{{lang.words.add|title()}} {{lang.words.w_a}} {{lang.words.new}} {{lang.words.backup}} {{lang.words.job}}" id="add-backup-button">+ {{lang.words.add|title()}} {{lang.words.backup}}</span>
<br /><br />
<table class="overview" id="ajax-backup-s3-table">
<thead>
<caption><h3>S3</h3></caption>
<tr class="overviewHead">
<td class="padding10 first-collumn">{{lang.words.servers|title()}}</td>
<td style="width: 10%">S3 {{lang.words.server}}</td>
<td style="width: 10%">Bucket</td>
<td style="width: 15%">{{lang.words.period|title()}}</td>
<td style="width: 100%">{{lang.words.desc|title()}}</td>
<td style="margin-left: 5px;"></td>
<!-- <td></td>-->
</tr>
</thead>
<tbody id="tbody-s3">
{% for b in s3_backups %}
{% for s in servers %}
{% if b.server in s.2 %}
<tr id="s3-backup-table-{{b.id}}">
<td class="padding10 first-collumn">
<span id="backup-s3-server-{{b.id}}" style="display: none">{{ b.server }}</span>
{{s.1}}
</td>
<td>
<span id="s3-server-{{b.id}}">{{b.s3_server}}</span>
</td>
<td>
<span id="bucket-{{b.id}}">{{b.bucket}}</span>
</td>
<td>
<span id="s3-backup-time-{{b.id}}">{{b.time}}</span>
</td>
<td>
{% if b.description != 'None' %}
<span id="s3-backup-description-{{b.id}}">{{b.description}}</span>
{% else %}
<span id="s3-backup-description-{{b.id}}"></span>
{% endif %}
</td>
<!-- <td>-->
<!-- <a class="add" onclick="cloneS3Backup({{b.id}})" id="clone-s3-backup{{b.id}}" title="Clone S3 {{b.server}}" style="cursor: pointer;"></a>-->
<!-- </td>-->
<td>
<a class="delete" onclick="confirmDeleteS3Backup({{b.id}})" title="Delete S3 backup {{b.server}}" style="cursor: pointer;"></a>
</td>
</tr>
{% endif %}
{% endfor %}
{% endfor %}
</tbody>
</table>
<br /><span class="add-button" title="{{lang.words.add|title()}} {{lang.words.w_a}} {{lang.words.new}} S3 {{lang.words.backup}} {{lang.words.job}}" id="add-backup-s3-button">+ {{lang.words.add|title()}} {{lang.words.backup}}</span>
<br /><br />
<div id="ajax-backup"></div>
<div class="add-note alert addName alert-info" style="width: inherit; margin-right: 15px;">
{{lang.phrases.read_about_parameters}} <a href="https://roxy-wi.org/description/backup" title="{{lang.words.backup|title()}} {{lang.words.desc}}" target="_blank">{{lang.words.here}}</a>

View File

@ -291,6 +291,12 @@
</tr>
</table>
</div>
<div id="s3-backup-add-table" style="display: none;">
<table class="overview" id="s3-backup-add-table-overview" title="{{lang.words.add|title()}} {{lang.words.w_a}} {{lang.words.new}} S3 {{lang.words.backup}}">
{% include 'include/tr_validate_tips.html' %}
{% include 'include/add_s3_backup.html' %}
</table>
</div>
<div id="git-add-table" style="display: none;">
<table class="overview" id="git-add-table-overview" title="{{lang.words.add|title()}} {{lang.words.w_a}} {{lang.words.new}} git {{lang.words.job}}">
{% include 'include/tr_validate_tips.html' %}

View File

@ -55,7 +55,7 @@
<td class="padding10 first-collumn" style="width: 20%;">
{% set values = dict() %}
{% set values = {'0.9.0':'0.9.0', '0.10.0':'0.10.0', '0.11.0':'0.11.0', '0.12.0':'0.12.0', '0.13.0':'0.13.0', '0.14.0':'0.14.0', '0.15.0':'0.15.0'} %}
{{ select('hapexpver', values=values, selected='0.14.0') }}
{{ select('hapexpver', values=values, selected='0.15.0') }}
</td>
<td class="padding10 first-collumn">
<select autofocus required name="haproxy_exp_addserv" id="haproxy_exp_addserv">
@ -116,8 +116,8 @@
<td id="cur_apache_exp_ver" class="padding10 first-collumn"></td>
<td class="padding10 first-collumn" style="width: 20%;">
{% set values = dict() %}
{% set values = {'0.7.0':'0.7.0', '0.8.0':'0.8.0', '0.9.0':'0.9.0', '0.10.0':'0.10.0'} %}
{{ select('apacheexpver', values=values, selected='0.10.0') }}
{% set values = {'0.10.0':'0.10.0', '0.13.4':'0.13.4', '1.0.1':'1.0.1'} %}
{{ select('apacheexpver', values=values, selected='1.0.1') }}
</td>
<td class="padding10 first-collumn">
<select autofocus required name="apache_exp_addserv" id="apache_exp_addserv">
@ -178,8 +178,8 @@
<td id="cur_node_exp_ver" class="padding10 first-collumn"></td>
<td class="padding10 first-collumn" style="width: 20%;">
{% set values = dict() %}
{% set values = {'1.1.1':'1.1.1', '1.1.2':'1.1.2', '1.2.0':'1.2.0', '1.2.2':'1.2.2', '1.3.0':'1.3.0', '1.3.1':'1.3.1', '1.5.0':'1.5.0'} %}
{{ select('nodeexpver', values=values, selected='1.5.0') }}
{% set values = {'1.2.0':'1.2.0', '1.2.2':'1.2.2', '1.3.0':'1.3.0', '1.3.1':'1.3.1', '1.5.0':'1.5.0', '1.6.1':'1.6.1'} %}
{{ select('nodeexpver', values=values, selected='1.6.1') }}
</td>
<td class="padding10 first-collumn">
<select autofocus required name="node_exp_addserv" id="node_exp_addserv">
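After an exporter is upgraded to one of the new default versions, the running version can be confirmed from its metrics endpoint; a quick sketch, assuming the stock ports (9100 for node_exporter, 9101 for haproxy_exporter):
```
$ curl -s http://<server>:9100/metrics | grep node_exporter_build_info
$ curl -s http://<server>:9101/metrics | grep haproxy_exporter_build_info
```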

View File

@ -170,7 +170,6 @@
"token_ttl": "TTL for a user token (in days)",
"tmp_config_path": "Path to the temporary directory. A valid path should be specified as the value of this parameter. The directory must be owned by the user specified in SSH settings",
"cert_path": "Path to SSL dir. Folder owner must be a user which set in the SSH settings. Path must exist",
"ssl_local_path": "Path to the directory with the saved local SSL certificates. The value of this parameter is specified as a relative path beginning with $HOME_ROXY_WI/app/",
"maxmind_key": "License key for downloading to GeoLite2 DB. You can create it on maxmind.com",
},
"mail": {

View File

@ -170,7 +170,6 @@
"token_ttl": "TTL pour le jeton de l\'utilisateur (en jours)",
"tmp_config_path": "Chemin pour le dossier temporaire. Un chemin valide doit être spécifier pour ce paramètre. Le répèrtoire doit appartenir au même utilisateur celui spécifié dans les paramètres SSH",
"cert_path": "Chemin pour le dossier SSL. Le répèrtoire doit appartenir au même utilisateur celui spécifié dans les paramètres SSH. Le chemin doit éxister",
"ssl_local_path": "Chemin pour le dossier local contenant les certificats SSL sauvegardés. La valeur spécifiée est relative au chemin commençant par $HOME_ROXY_WI/app/",
"maxmind_key": "Clé de licence à télécharger sur GeoLite2 DB. Vous pouvez la créer sur maxmind.com",
},
"mail": {

View File

@ -170,7 +170,6 @@
"token_ttl": "TTL de token de usuario (em dias)",
"tmp_config_path": "Caminho para o diretório temporário.. Indica um caminho válidos. O dono do directorio deve ser o usuario indicado na configuração de SSH. O camihno deve existir",
"cert_path": "Caminho para o diretório SSL. O dono do directorio deve ser o usuario indicado na configuração de SSH. O camihno deve existir",
"ssl_local_path": "aminho para o diretório com certificados SSL locais. O valor desse parâmetro debe ser o caminho relativo que começa com $HOME_ROXY_WI/app/",
"maxmind_key": "A chave de licença para carregar GeoliteDB. Você pode cria-lo no maxmind.com",
},
"mail": {

View File

@ -170,7 +170,6 @@
"token_ttl": "Время жизни пользовательских токенов (в днях)",
"tmp_config_path": "Путь до временной директории. Путь должен существовать. Директория должна принадлежать пользователю, от имени которого подключается SSH",
"cert_path": "Путь до SSL-директории. Путь должен существовать. Директория должна принадлежать пользователю, от имени которого подключается SSH",
"ssl_local_path": "Локальный путь для хранения SSL-сертификатов. Укажите относительный путь от $HOME_ROXY_WI/app/",
"maxmind_key": "Лицензионный ключ для загрузки GeoLite2 DB. Создается на сайте maxmind.com",
},
"mail": {

View File

@ -37,6 +37,8 @@ gits = sql.select_gits()
masters = sql.select_servers(get_master_servers=1)
is_needed_tool = common.is_tool('ansible')
grafana = 0
backups = sql.select_backups()
s3_backups = sql.select_s3_backups()
if not roxywi.is_docker():
grafana, stderr = server_mod.subprocess_execute("systemctl is-active grafana-server")
@ -51,7 +53,7 @@ except Exception as e:
rendered_template = template.render(
h2=1, role=user_params['role'], user=user_params['user'], users=users, groups=sql.select_groups(),
servers=sql.select_servers(full=1), masters=masters, sshs=sql.select_ssh(), roles=sql.select_roles(),
settings=settings, backups=sql.select_backups(), services=services, timezones=pytz.all_timezones,
settings=settings, backups=backups, s3_backups=s3_backups, services=services, timezones=pytz.all_timezones,
page="users.py", user_services=user_params['user_services'], ldap_enable=ldap_enable, gits=gits, guide_me=1,
user_status=user_subscription['user_status'], user_plan=user_subscription['user_plan'], token=user_params['token'],
is_needed_tool=is_needed_tool, lang=user_params['lang'], grafana=grafana

View File

@ -7,3 +7,4 @@ peewee>=3.14.10
PyMySQL>=1.0.2
retry>=0.9.2
pdpyras>=4.5.2
pika>=1.3.1

View File

@ -8,3 +8,4 @@ PyMySQL>=1.0.2
bottle>=0.12.18
retry>=0.9.2
pdpyras>=4.5.2
pika>=1.3.1

View File

@ -8,3 +8,4 @@ PyMySQL>=1.0.2
bottle>=0.12.18
retry>=0.9.2
pdpyras>=4.5.2
pika>=1.3.1

View File

@ -648,7 +648,7 @@ $( function() {
}
});
$('#add-backup-s3-button').click(function() {
addBackupDialog.dialog('open');
addS3BackupDialog.dialog('open');
});
var s3_backup_tabel_title = $( "#s3-backup-add-table-overview" ).attr('title');
var addS3BackupDialog = $( "#s3-backup-add-table" ).dialog({
@ -1330,6 +1330,51 @@ function addBackup(dialog_id) {
} );
}
}
function addS3Backup(dialog_id) {
var valid = true;
toastr.clear();
allFields = $( [] ).add( $('#s3-backup-server') ).add( $('#s3_server') ).add( $('#s3_bucket') ).add( $('#s3_secret_key') ).add( $('#s3_access_key') )
allFields.removeClass( "ui-state-error" );
valid = valid && checkLength( $('#s3-backup-server'), "backup server ", 1 );
valid = valid && checkLength( $('#s3_server'), "S3 server", 1 );
valid = valid && checkLength( $('#s3_bucket'), "S3 bucket", 1 );
valid = valid && checkLength( $('#s3_secret_key'), "S3 secret key", 1 );
valid = valid && checkLength( $('#s3_access_key'), "S3 access key", 1 );
if (valid) {
$.ajax( {
url: "options.py",
data: {
s3_backup_server: $('#s3-backup-server').val(),
s3_server: $('#s3_server').val(),
s3_bucket: $('#s3_bucket').val(),
s3_secret_key: $('#s3_secret_key').val(),
s3_access_key: $('#s3_access_key').val(),
time: $('#s3-backup-time').val(),
description: $('#s3-backup-description').val(),
token: $('#token').val()
},
type: "POST",
success: function( data ) {
data = data.replace(/\s+/g,' ');
if (data.indexOf('error:') != '-1') {
toastr.error(data);
} else if (data.indexOf('success: ') != '-1') {
common_ajax_action_after_success(dialog_id, 'newbackup', 'ajax-backup-s3-table', data);
$( "select" ).selectmenu();
} else if (data.indexOf('info: ') != '-1') {
toastr.clear();
toastr.info(data);
} else if (data.indexOf('warning: ') != '-1') {
toastr.clear();
toastr.warning(data);
} else if (data.indexOf('error: ') != '-1') {
toastr.clear();
toastr.error(data);
}
}
} );
}
}
function addGit(dialog_id) {
var valid = true;
toastr.clear();
@ -1555,6 +1600,29 @@ function confirmDeleteBackup(id) {
}]
});
}
function confirmDeleteS3Backup(id) {
var delete_word = $('#translate').attr('data-delete');
var cancel_word = $('#translate').attr('data-cancel');
$( "#dialog-confirm" ).dialog({
resizable: false,
height: "auto",
width: 400,
modal: true,
title: delete_word + " " + $('#backup-s3-server-'+id).text() + "?",
buttons: [{
text: delete_word,
click: function () {
$(this).dialog("close");
removeS3Backup(id);
}
}, {
text: cancel_word,
click: function () {
$(this).dialog("close");
}
}]
});
}
function confirmDeleteGit(id) {
var delete_word = $('#translate').attr('data-delete');
var cancel_word = $('#translate').attr('data-cancel');
@ -1759,6 +1827,28 @@ function removeBackup(id) {
}
} );
}
function removeS3Backup(id) {
$("#backup-table-s3-"+id).css("background-color", "#f2dede");
$.ajax( {
url: "options.py",
data: {
dels3job: id,
s3_backup_server: $('#backup-s3-server-'+id).text(),
s3_server: $('#s3-server-'+id).text(),
s3_bucket: $('#bucket-'+id).text(),
token: $('#token').val()
},
type: "POST",
success: function( data ) {
data = data.replace(/\s+/g,' ');
if(data.indexOf('Ok') != '-1') {
$("#s3-backup-table-"+id).remove();
} else if (data.indexOf('error:') != '-1' || data.indexOf('unique') != '-1') {
toastr.error(data);
}
}
} );
}
function removeGit(id) {
$("#git-table-"+id).css("background-color", "#f2dede");
$.ajax( {
@ -2040,6 +2130,40 @@ function updateBackup(id) {
} );
}
}
function updateS3Backup(id) {
toastr.clear();
if ($( "#backup-type-"+id+" option:selected" ).val() == "-------" || $('#backup-rserver-'+id).val() == '' || $('#backup-rpath-'+id).val() == '') {
toastr.error('All fields must be completed');
} else {
$.ajax( {
url: "options.py",
data: {
s3_backupupdate: id,
server: $('#backup-server-'+id).text(),
rserver: $('#backup-rserver-'+id).val(),
rpath: $('#backup-rpath-'+id).val(),
type: $('#backup-type-'+id).val(),
time: $('#backup-time-'+id).val(),
cred: $('#backup-credentials-'+id).val(),
description: $('#backup-description-'+id).val(),
token: $('#token').val()
},
type: "POST",
success: function( data ) {
data = data.replace(/\s+/g,' ');
if (data.indexOf('error:') != '-1' || data.indexOf('unique') != '-1') {
toastr.error(data);
} else {
toastr.clear();
$("#backup-table-"+id).addClass( "update", 1000 );
setTimeout(function() {
$( "#backup-table-"+id ).removeClass( "update" );
}, 2500 );
}
}
} );
}
}
function showApacheLog(serv) {
var rows = $('#rows').val()
var grep = $('#grep').val()

View File

@ -14,3 +14,4 @@ distro>=1.2.0
bottle>=0.12.20
psutil>=5.9.1
pdpyras>=4.5.2
pika>=1.3.1