diff -Nru pcs-0.9.155+dfsg/CHANGELOG.md pcs-0.9.159/CHANGELOG.md
--- pcs-0.9.155+dfsg/CHANGELOG.md 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/CHANGELOG.md 2017-06-30 15:33:01.000000000 +0000
@@ -1,5 +1,273 @@
 # Change Log
 
+## [0.9.159] - 2017-06-30
+
+### Added
+- Option to create a cluster with or without corosync encryption enabled;
+  by default the encryption is disabled ([rhbz#1165821])
+- It is now possible to disable, enable, unmanage and manage bundle resources
+  and set their meta attributes ([rhbz#1447910])
+- Pcs now warns against using the `action` option of stonith devices
+  ([rhbz#1421702])
+
+### Fixed
+- Fixed crash of the `pcs cluster setup` command when the `--force` flag was
+  used ([rhbz#1176018])
+- Fixed crash of the `pcs cluster destroy --all` command when the cluster was
+  not running ([rhbz#1176018])
+- Fixed crash of the `pcs config restore` command when restoring pacemaker
+  authkey ([rhbz#1176018])
+- Fixed "Error: unable to get cib" when adding a node to a stopped cluster
+  ([rhbz#1176018])
+- Fixed a crash in the `pcs cluster node add-remote` command when an id
+  conflict occurs ([rhbz#1386114])
+- Fixed creating a new cluster from the web UI ([rhbz#1284404])
+- `pcs cluster node add-guest` now works with the flag `--skip-offline`
+  ([rhbz#1176018])
+- `pcs cluster node remove-guest` can be run again when the guest node was
+  unreachable the first time ([rhbz#1176018])
+- Fixed "Error: Unable to read /etc/corosync/corosync.conf" when running
+  `pcs resource create` ([rhbz#1386114])
+- It is now possible to set `debug` and `verbose` parameters of stonith devices
+  ([rhbz#1432283])
+- Resource operation ids are now properly validated and no longer ignored in
+  `pcs resource create`, `pcs resource update` and `pcs resource op add`
+  commands ([rhbz#1443418])
+- Flag `--force` works correctly when an operation is not successful on some
+  nodes during `pcs cluster node add-remote` or `pcs cluster node add-guest`
+  ([rhbz#1464781])
+
+### Changed
+- Binary data are stored in corosync authkey ([rhbz#1165821])
+- It is now mandatory to specify the container type in the `resource bundle
+  create` command
+- When creating a new cluster, corosync communication encryption is disabled
+  by default (in 0.9.158 it was enabled by default, in 0.9.157 and older it was
+  disabled)
+
+[rhbz#1165821]: https://bugzilla.redhat.com/show_bug.cgi?id=1165821
+[rhbz#1176018]: https://bugzilla.redhat.com/show_bug.cgi?id=1176018
+[rhbz#1284404]: https://bugzilla.redhat.com/show_bug.cgi?id=1284404
+[rhbz#1386114]: https://bugzilla.redhat.com/show_bug.cgi?id=1386114
+[rhbz#1421702]: https://bugzilla.redhat.com/show_bug.cgi?id=1421702
+[rhbz#1432283]: https://bugzilla.redhat.com/show_bug.cgi?id=1432283
+[rhbz#1443418]: https://bugzilla.redhat.com/show_bug.cgi?id=1443418
+[rhbz#1447910]: https://bugzilla.redhat.com/show_bug.cgi?id=1447910
+[rhbz#1464781]: https://bugzilla.redhat.com/show_bug.cgi?id=1464781
+
+
+## [0.9.158] - 2017-05-23
+
+### Added
+- Support for bundle resources (CLI only) ([rhbz#1433016])
+- Commands for adding and removing guest and remote nodes, including handling
+  of the pacemaker authkey (CLI only) ([rhbz#1176018], [rhbz#1254984],
+  [rhbz#1386114], [rhbz#1386512])
+- Command `pcs cluster node clear` to remove a node from pacemaker's
+  configuration and caches
+- Backing up and restoring cluster configuration with the `pcs config backup`
+  and `pcs config restore` commands now supports corosync and pacemaker
+  authkeys ([rhbz#1165821], [rhbz#1176018])
+
+### Deprecated
+- The `pcs cluster remote-node add` and `pcs cluster remote-node remove`
+  commands have been deprecated in favor of the `pcs cluster node add-guest`
+  and `pcs cluster node remove-guest` commands ([rhbz#1386512])
+
+### Fixed
+- Fixed a bug which under specific conditions caused pcsd to crash on start
+  when running under systemd ([ghissue#134])
+- `pcs resource unmanage` now sets the unmanaged flag on primitive resources
+  even if a clone or master/slave resource is specified. Thus the primitive
+  resources will not become managed just by uncloning. This also prevents some
+  discrepancies between disabled monitor operations and the unmanaged flag.
+  ([rhbz#1303969])
+- `pcs resource unmanage --monitor` now properly disables monitor operations
+  even if a clone or master/slave resource is specified. ([rhbz#1303969])
+- The `--help` option now shows help just for the specified command. Previously
+  the usage for a whole group of commands was shown.
+- Fixed a crash when `pcs cluster cib-push` is called with an explicit value of
+  the `--wait` flag ([rhbz#1422667])
+- Fixed a pcsd crash when an unusable address is set in `PCSD_BIND_ADDR`
+  ([rhbz#1373614])
+- Removal of a pacemaker remote resource no longer causes the respective remote
+  node to be fenced ([rhbz#1390609])
+
+### Changed
+- Newly created clusters are set up to encrypt corosync communication
+  ([rhbz#1165821], [ghissue#98])
+
+[ghissue#98]: https://github.com/ClusterLabs/pcs/issues/98
+[ghissue#134]: https://github.com/ClusterLabs/pcs/issues/134
+[rhbz#1176018]: https://bugzilla.redhat.com/show_bug.cgi?id=1176018
+[rhbz#1254984]: https://bugzilla.redhat.com/show_bug.cgi?id=1254984
+[rhbz#1303969]: https://bugzilla.redhat.com/show_bug.cgi?id=1303969
+[rhbz#1373614]: https://bugzilla.redhat.com/show_bug.cgi?id=1373614
+[rhbz#1386114]: https://bugzilla.redhat.com/show_bug.cgi?id=1386114
+[rhbz#1386512]: https://bugzilla.redhat.com/show_bug.cgi?id=1386512
+[rhbz#1390609]: https://bugzilla.redhat.com/show_bug.cgi?id=1390609
+[rhbz#1422667]: https://bugzilla.redhat.com/show_bug.cgi?id=1422667
+[rhbz#1433016]: https://bugzilla.redhat.com/show_bug.cgi?id=1433016
+[rhbz#1165821]: https://bugzilla.redhat.com/show_bug.cgi?id=1165821
+
+
+## [0.9.157] - 2017-04-10
+
+### Added
+- Resources in location constraints may now be specified by resource name
+  patterns in addition to resource names ([rhbz#1362493])
+- Proxy settings description in pcsd configuration file ([rhbz#1315627])
+- Man page for pcsd ([rhbz#1378742])
+- Pcs now allows setting the `trace_ra` and `trace_file` options of
+  `ocf:heartbeat` and `ocf:pacemaker` resources ([rhbz#1421702])
+- `pcs resource describe` and `pcs stonith describe` commands now show all
+  information about the specified agent if the `--full` flag is used
+- `pcs resource manage | unmanage` enables or disables monitor operations,
+  respectively, when the `--monitor` flag is specified ([rhbz#1303969])
+- Support for shared storage in SBD. Currently, there is very limited support
+  in the web UI ([rhbz#1413958])
+
+### Changed
+- It is now possible to specify more than one resource in the `pcs resource
+  enable` and `pcs resource disable` commands.
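+  For example, several resources may now be disabled at once (the resource
+  names here are illustrative):
+  `pcs resource disable resA resB`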
+
+### Fixed
+- Python 3: pcs no longer spams stderr with error messages when communicating
+  with another node
+- Stopping a cluster does not time out too early and it generally works better
+  even if the cluster is running Virtual IP resources ([rhbz#1334429])
+- `pcs booth remove` now works correctly even if the booth resource group is
+  disabled (another fix) ([rhbz#1389941])
+- Fixed a cross-site scripting (XSS) vulnerability in the web UI
+  ([CVE-2017-2661], [rhbz#1434111])
+- Pcs no longer allows creating a stonith resource based on an agent whose
+  name contains a colon ([rhbz#1415080])
+- The pcs command now launches the Python interpreter with "sane" options
+  (`python -Es`) ([rhbz#1328882])
+- Clufter is now supported on both Python 2 and Python 3 ([rhbz#1428350])
+- Do not colorize clufter output if saved to a file
+
+[CVE-2017-2661]: https://access.redhat.com/security/cve/CVE-2017-2661
+[rhbz#1303969]: https://bugzilla.redhat.com/show_bug.cgi?id=1303969
+[rhbz#1315627]: https://bugzilla.redhat.com/show_bug.cgi?id=1315627
+[rhbz#1328882]: https://bugzilla.redhat.com/show_bug.cgi?id=1328882
+[rhbz#1334429]: https://bugzilla.redhat.com/show_bug.cgi?id=1334429
+[rhbz#1362493]: https://bugzilla.redhat.com/show_bug.cgi?id=1362493
+[rhbz#1378742]: https://bugzilla.redhat.com/show_bug.cgi?id=1378742
+[rhbz#1389941]: https://bugzilla.redhat.com/show_bug.cgi?id=1389941
+[rhbz#1413958]: https://bugzilla.redhat.com/show_bug.cgi?id=1413958
+[rhbz#1415080]: https://bugzilla.redhat.com/show_bug.cgi?id=1415080
+[rhbz#1421702]: https://bugzilla.redhat.com/show_bug.cgi?id=1421702
+[rhbz#1428350]: https://bugzilla.redhat.com/show_bug.cgi?id=1428350
+[rhbz#1434111]: https://bugzilla.redhat.com/show_bug.cgi?id=1434111
+
+
+## [0.9.156] - 2017-02-10
+
+### Added
+- Fencing levels may now be targeted in the CLI by a node name pattern or a
+  node attribute in addition to a node name ([rhbz#1261116])
+- `pcs cluster cib-push` allows pushing a diff obtained internally by comparing
+  CIBs in specified files ([rhbz#1404233], [rhbz#1419903])
+- Added the flags `--wait`, `--disabled`, `--group`, `--after`, `--before` to
+  the `pcs stonith create` command
+- Added the commands `pcs stonith enable` and `pcs stonith disable`
+- Command line option `--request-timeout` ([rhbz#1292858])
+- Check whether a proxy is set when unable to connect to a node ([rhbz#1315627])
+
+### Changed
+- `pcs node [un]standby` and `pcs node [un]maintenance` are now atomic even if
+  more than one node is specified ([rhbz#1315992])
+- Restarting pcsd initiated from pcs is now a synchronous operation
+  ([rhbz#1284404])
+- Stopped bundling fonts used in the pcsd GUI ([ghissue#125])
+- In `pcs resource create` the flags `--master` and `--clone` were changed to
+  the keywords `master` and `clone`
+- libcurl is now used for node-to-node communication
+
+### Fixed
+- When upgrading the CIB to the latest schema version, check for the minimal
+  common version across the cluster ([rhbz#1389443])
+- `pcs booth remove` now works correctly even if the booth resource group is
+  disabled ([rhbz#1389941])
+- Adding a node in a CMAN cluster does not cause the new node to be fenced
+  immediately ([rhbz#1394846])
+- Show a proper error message when there is an HTTP communication failure
+  ([rhbz#1394273])
+- Fixed searching for files to remove in the `/var/lib` directory ([ghpull#119],
+  [ghpull#120])
+- Fixed messages when managing services (start, stop, enable, disable...)
+- Fixed disabling services on systemd systems when using instances
+  ([rhbz#1389501])
+- Fixed parsing of command line options ([rhbz#1404229])
+- Pcs no longer exits with a false error message when pcsd-cli.rb outputs to
+  stderr ([ghissue#124])
+- Pcs now exits with an error when both `--all` and a list of nodes are
+  specified in the `pcs cluster start | stop | enable | disable` commands
+  ([rhbz#1339355])
+- Built-in help and man page fixes and improvements ([rhbz#1347335])
+- In `pcs resource create` the flag `--clone` no longer steals arguments from
+  the keywords `meta` and `op` ([rhbz#1395226])
+- `pcs resource create` does not produce an invalid CIB when the group id is
+  already occupied by a non-resource element ([rhbz#1382004])
+- Fixed misbehavior of the flag `--master` in the `pcs resource create` command
+  ([rhbz#1378107])
+- Fixed tacit acceptance of an invalid resource operation in
+  `pcs resource create` ([rhbz#1398562])
+- Fixed misplaced metadata for disabling when running `pcs resource create`
+  with the flags `--clone` and `--disabled` ([rhbz#1402475])
+- Fixed incorrect acceptance of an invalid resource operation attribute in
+  `pcs resource create` ([rhbz#1382597])
+- Fixed validation of options of resource operations in `pcs resource create`
+  ([rhbz#1390071])
+- Fixed silent omission of duplicate options ([rhbz#1390066])
+- Added more validation for resource agent names ([rhbz#1387670])
+- Fixed network communication issues in pcsd when a node was specified by an
+  IPv6 address
+- Fixed a JS error in the web UI when an empty cluster status is received
+  ([rhbz#1396462])
+- Fixed sending the user group in cookies from Python 3
+- Fixed pcsd restart in Python 3
+- Fixed parsing XML in Python 3 (caused crashes when reading resource agent
+  metadata) ([rhbz#1419639])
+- Fixed the recognition of the structure of a resource agent name that contains
+  a systemd instance ([rhbz#1419661])
+
+### Removed
+- Ruby 1.8 and 1.9 are no longer supported due to bad libcurl support
+
+[ghissue#124]: https://github.com/ClusterLabs/pcs/issues/124
+[ghissue#125]: https://github.com/ClusterLabs/pcs/issues/125
+[ghpull#119]: https://github.com/ClusterLabs/pcs/pull/119
+[ghpull#120]: https://github.com/ClusterLabs/pcs/pull/120
+[rhbz#1261116]: https://bugzilla.redhat.com/show_bug.cgi?id=1261116
+[rhbz#1284404]: https://bugzilla.redhat.com/show_bug.cgi?id=1284404
+[rhbz#1292858]: https://bugzilla.redhat.com/show_bug.cgi?id=1292858
+[rhbz#1315627]: https://bugzilla.redhat.com/show_bug.cgi?id=1315627
+[rhbz#1315992]: https://bugzilla.redhat.com/show_bug.cgi?id=1315992
+[rhbz#1339355]: https://bugzilla.redhat.com/show_bug.cgi?id=1339355
+[rhbz#1347335]: https://bugzilla.redhat.com/show_bug.cgi?id=1347335
+[rhbz#1378107]: https://bugzilla.redhat.com/show_bug.cgi?id=1378107
+[rhbz#1382004]: https://bugzilla.redhat.com/show_bug.cgi?id=1382004
+[rhbz#1382597]: https://bugzilla.redhat.com/show_bug.cgi?id=1382597
+[rhbz#1387670]: https://bugzilla.redhat.com/show_bug.cgi?id=1387670
+[rhbz#1389443]: https://bugzilla.redhat.com/show_bug.cgi?id=1389443
+[rhbz#1389501]: https://bugzilla.redhat.com/show_bug.cgi?id=1389501
+[rhbz#1389941]: https://bugzilla.redhat.com/show_bug.cgi?id=1389941
+[rhbz#1390066]: https://bugzilla.redhat.com/show_bug.cgi?id=1390066
+[rhbz#1390071]: https://bugzilla.redhat.com/show_bug.cgi?id=1390071
+[rhbz#1394273]: https://bugzilla.redhat.com/show_bug.cgi?id=1394273
+[rhbz#1394846]: https://bugzilla.redhat.com/show_bug.cgi?id=1394846
+[rhbz#1395226]: https://bugzilla.redhat.com/show_bug.cgi?id=1395226
+[rhbz#1396462]: https://bugzilla.redhat.com/show_bug.cgi?id=1396462 +[rhbz#1398562]: https://bugzilla.redhat.com/show_bug.cgi?id=1398562 +[rhbz#1402475]: https://bugzilla.redhat.com/show_bug.cgi?id=1402475 +[rhbz#1404229]: https://bugzilla.redhat.com/show_bug.cgi?id=1404229 +[rhbz#1404233]: https://bugzilla.redhat.com/show_bug.cgi?id=1404233 +[rhbz#1419639]: https://bugzilla.redhat.com/show_bug.cgi?id=1419639 +[rhbz#1419661]: https://bugzilla.redhat.com/show_bug.cgi?id=1419661 +[rhbz#1419903]: https://bugzilla.redhat.com/show_bug.cgi?id=1419903 + + ## [0.9.155] - 2016-11-03 ### Added @@ -20,7 +288,7 @@ - When stopping a cluster with some of the nodes unreachable, stop the cluster completely on all reachable nodes ([rhbz#1380372]) - Fixed pcsd crash when rpam rubygem is installed ([ghissue#109]) -- Fixed occasional crashes / failures when using locale other than en_US.UTF8 +- Fixed occasional crashes / failures when using locale other than en\_US.UTF8 ([rhbz#1387106]) - Fixed starting and stopping cluster services on systemd machines without the `service` executable ([ghissue#115]) diff -Nru pcs-0.9.155+dfsg/debian/changelog pcs-0.9.159/debian/changelog --- pcs-0.9.155+dfsg/debian/changelog 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/changelog 2017-09-02 05:12:10.000000000 +0000 @@ -1,3 +1,46 @@ +pcs (0.9.159-3ubuntu2) artful; urgency=medium + + * But also unset the https proxy. + + -- Steve Langasek Fri, 01 Sep 2017 22:12:10 -0700 + +pcs (0.9.159-3ubuntu1) artful; urgency=medium + + * Fix autopkgtest to not complain if a proxy is configured in the + environment. + + -- Steve Langasek Fri, 01 Sep 2017 21:12:50 -0700 + +pcs (0.9.159-3) unstable; urgency=medium + + * d/control: build requires fontconfig (Closes: #867304) + + -- Valentin Vidic Wed, 05 Jul 2017 19:01:46 +0200 + +pcs (0.9.159-2) unstable; urgency=medium + + * d/patches: refresh for fuzz + * d/patches: fix Makefile to work with dash (Closes: #867287) + + -- Valentin Vidic Wed, 05 Jul 2017 16:44:21 +0200 + +pcs (0.9.159-1) unstable; urgency=medium + + * New upstream version 0.9.159 + * Remove upstream repack + * d/control: add ruby-ethon to Depends + * d/control: remove ${shlibs:Depends} + * d/control: update Standards-Version to 4.0.0 + * d/rules: use dh_missing --fail-missing + * d/rules: use debhelper version 10 + * d/source/lintian-overrides: cleanup comments + * d/patches: refresh for new version + * d/patches: update numbering + * d/patches: fix failure in d/tests/testsuite-pcs + * d/control: add fonts-dejavu-core to Depends + + -- Valentin Vidic Wed, 05 Jul 2017 13:49:33 +0200 + pcs (0.9.155+dfsg-2) unstable; urgency=medium * Add upstream fix for CVE-2017-2661 (Closes: #858379) diff -Nru pcs-0.9.155+dfsg/debian/compat pcs-0.9.159/debian/compat --- pcs-0.9.155+dfsg/debian/compat 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/compat 2017-06-29 01:30:26.000000000 +0000 @@ -1 +1 @@ -9 +10 diff -Nru pcs-0.9.155+dfsg/debian/control pcs-0.9.159/debian/control --- pcs-0.9.155+dfsg/debian/control 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/control 2017-09-02 04:12:54.000000000 +0000 @@ -1,28 +1,30 @@ Source: pcs Section: admin Priority: extra -Maintainer: Debian HA Maintainers +Maintainer: Ubuntu Developers +XSBC-Original-Maintainer: Debian HA Maintainers Uploaders: Richard B Winters , Christoph Berg , Valentin Vidic -Build-Depends: debhelper (>= 9.0.0), +Build-Depends: debhelper (>= 10), dh-python, dh-systemd, + fontconfig, libpam0g-dev, python-all (>= 2.6.6-3~), python-setuptools 
-Standards-Version: 3.9.8 +Standards-Version: 4.0.0 Homepage: https://github.com/ClusterLabs/pcs Vcs-Git: https://alioth.debian.org/anonscm/git/debian-ha/pcs.git Vcs-Browser: https://anonscm.debian.org/cgit/debian-ha/pcs.git Package: pcs Architecture: all -Depends: ${shlibs:Depends}, - ${python:Depends}, +Depends: ${python:Depends}, ${misc:Depends}, lsb-base (>= 3.0-6), psmisc, + fonts-dejavu-core, fonts-liberation, python2.7 (>=2.7.9), python-lxml, @@ -30,6 +32,7 @@ ruby, ruby-activesupport, ruby-backports, + ruby-ethon, ruby-highline, ruby-json, ruby-multi-json, diff -Nru pcs-0.9.155+dfsg/debian/copyright pcs-0.9.159/debian/copyright --- pcs-0.9.155+dfsg/debian/copyright 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/copyright 2017-06-29 01:30:26.000000000 +0000 @@ -1,7 +1,6 @@ Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Upstream-Name: pcs Source: http://github.com/feist/pcs -Files-Excluded: pcsd/public/css/LiberationSans-*.ttf Files: * Copyright: (c) 2012-2015 Chris Feist diff -Nru pcs-0.9.155+dfsg/debian/patches/0001-Remove-Gemlock.file-on-Debian.patch pcs-0.9.159/debian/patches/0001-Remove-Gemlock.file-on-Debian.patch --- pcs-0.9.155+dfsg/debian/patches/0001-Remove-Gemlock.file-on-Debian.patch 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/patches/0001-Remove-Gemlock.file-on-Debian.patch 2017-06-29 01:30:26.000000000 +0000 @@ -17,14 +17,16 @@ index 3140d6f..0000000 --- a/pcsd/Gemfile.lock +++ /dev/null -@@ -1,44 +0,0 @@ +@@ -1,48 +0,0 @@ -GEM - remote: https://rubygems.org/ - remote: https://tojeline.fedorapeople.org/rubygems/ - specs: - backports (3.6.8) -- json (1.8.3) -- multi_json (1.12.0) +- ethon (0.10.1) +- ffi (1.9.17) +- json (2.0.3) +- multi_json (1.12.1) - open4 (1.3.4) - orderedhash (0.0.6) - rack (1.6.4) @@ -33,7 +35,7 @@ - rack-test (0.6.3) - rack (>= 1.0) - rpam-ruby19 (1.2.1) -- sinatra (1.4.7) +- sinatra (1.4.8) - rack (~> 1.4) - rack-protection (~> 1.4) - tilt (>= 1.3, < 3) @@ -44,13 +46,15 @@ - rack-test - sinatra (~> 1.4.0) - tilt (>= 1.3, < 3) -- tilt (2.0.3) +- tilt (2.0.6) - -PLATFORMS - ruby - -DEPENDENCIES - backports +- ethon +- ffi - json - multi_json - open4 diff -Nru pcs-0.9.155+dfsg/debian/patches/0003-Fix-spelling.patch pcs-0.9.159/debian/patches/0003-Fix-spelling.patch --- pcs-0.9.155+dfsg/debian/patches/0003-Fix-spelling.patch 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/patches/0003-Fix-spelling.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,25 +0,0 @@ -Description: Fix spelling errors reported by lintian -Author: Valentin Vidic -Last-Update: 2016-11-13 ---- -This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ ---- a/pcs/pcs.8 -+++ b/pcs/pcs.8 -@@ -272,7 +272,7 @@ - Edit the cib in the editor specified by the $EDITOR environment variable and push out any changes upon saving. Specify scope to edit a specific section of the CIB. Valid values of the scope are: configuration, nodes, resources, constraints, crm_config, rsc_defaults, op_defaults. \fB\-\-config\fR is the same as scope=configuration. Use of \fB\-\-config\fR is recommended. Do not specify a scope if you need to edit the whole CIB or be warned in the case of outdated CIB. - .TP - node add [\fB\-\-start\fR [\fB\-\-wait\fR[=]]] [\fB\-\-enable\fR] [\fB\-\-watchdog\fR=] --Add the node to corosync.conf and corosync on all nodes in the cluster and sync the new corosync.conf to the new node. 
If \fB\-\-start\fR is specified also start corosync/pacemaker on the new node, if \fB\-\-wait\fR is sepcified wait up to 'n' seconds for the new node to start. If \fB\-\-enable\fR is specified enable corosync/pacemaker on new node. When using Redundant Ring Protocol (RRP) with udpu transport, specify the ring 0 address first followed by a ',' and then the ring 1 address. Use \fB\-\-watchdog\fR to specify path to watchdog on newly added node, when SBD is enabled in cluster. -+Add the node to corosync.conf and corosync on all nodes in the cluster and sync the new corosync.conf to the new node. If \fB\-\-start\fR is specified also start corosync/pacemaker on the new node, if \fB\-\-wait\fR is specified wait up to 'n' seconds for the new node to start. If \fB\-\-enable\fR is specified enable corosync/pacemaker on new node. When using Redundant Ring Protocol (RRP) with udpu transport, specify the ring 0 address first followed by a ',' and then the ring 1 address. Use \fB\-\-watchdog\fR to specify path to watchdog on newly added node, when SBD is enabled in cluster. - .TP - node remove - Shutdown specified node and remove it from pacemaker and corosync on all other nodes in the cluster. -@@ -520,7 +520,7 @@ - .br - ( ) - .br --where duration options and date spec options are: hours, monthdays, weekdays, yeardays, months, weeks, years, weekyears, moon If score is ommited it defaults to INFINITY. If id is ommited one is generated from the constraint id. -+where duration options and date spec options are: hours, monthdays, weekdays, yeardays, months, weeks, years, weekyears, moon If score is omitted it defaults to INFINITY. If id is omitted one is generated from the constraint id. - .TP - rule remove - Remove a rule if a rule id is specified, if rule is last rule in its constraint, the constraint will be removed. diff -Nru pcs-0.9.155+dfsg/debian/patches/0003-Remove-pcsd-test-.gitignore-file.patch pcs-0.9.159/debian/patches/0003-Remove-pcsd-test-.gitignore-file.patch --- pcs-0.9.155+dfsg/debian/patches/0003-Remove-pcsd-test-.gitignore-file.patch 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/debian/patches/0003-Remove-pcsd-test-.gitignore-file.patch 2017-06-29 01:30:26.000000000 +0000 @@ -0,0 +1,21 @@ +From: Richard B Winters +Date: Tue, 26 Jan 2016 15:23:05 -0500 +Subject: Remove pcsd/test .gitignore file + + - We don't install vcs ignore files in a Debian package, + and do not recommend packing them in release tarballs. + +Change-Id: Ica2a07c880d30f34e2d5c19f550823b951126bff +Signed-off-by: Richard B Winters +--- + pcsd/test/.gitignore | 1 - + 1 file changed, 1 deletion(-) + delete mode 100644 pcsd/test/.gitignore + +diff --git a/pcsd/test/.gitignore b/pcsd/test/.gitignore +deleted file mode 100644 +index 1944fd6..0000000 +--- a/pcsd/test/.gitignore ++++ /dev/null +@@ -1 +0,0 @@ +-*.tmp diff -Nru pcs-0.9.155+dfsg/debian/patches/0004-Remove-pcsd-test-.gitignore-file.patch pcs-0.9.159/debian/patches/0004-Remove-pcsd-test-.gitignore-file.patch --- pcs-0.9.155+dfsg/debian/patches/0004-Remove-pcsd-test-.gitignore-file.patch 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/patches/0004-Remove-pcsd-test-.gitignore-file.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,21 +0,0 @@ -From: Richard B Winters -Date: Tue, 26 Jan 2016 15:23:05 -0500 -Subject: Remove pcsd/test .gitignore file - - - We don't install vcs ignore files in a Debian package, - and do not recommend packing them in release tarballs. 
- -Change-Id: Ica2a07c880d30f34e2d5c19f550823b951126bff -Signed-off-by: Richard B Winters ---- - pcsd/test/.gitignore | 1 - - 1 file changed, 1 deletion(-) - delete mode 100644 pcsd/test/.gitignore - -diff --git a/pcsd/test/.gitignore b/pcsd/test/.gitignore -deleted file mode 100644 -index 1944fd6..0000000 ---- a/pcsd/test/.gitignore -+++ /dev/null -@@ -1 +0,0 @@ --*.tmp diff -Nru pcs-0.9.155+dfsg/debian/patches/0004-settings.py pcs-0.9.159/debian/patches/0004-settings.py --- pcs-0.9.155+dfsg/debian/patches/0004-settings.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/debian/patches/0004-settings.py 2017-06-29 01:30:26.000000000 +0000 @@ -0,0 +1,33 @@ +Description: Update locations of binaries for Debian +Author: Valentin Vidic +Last-Update: 2016-11-13 +--- +This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ +--- a/pcs/settings.py.debian ++++ b/pcs/settings.py.debian +@@ -1,7 +1,8 @@ + from pcs.settings_default import * +-pengine_binary = "/usr/lib/DEB_HOST_MULTIARCH/pacemaker/pengine" +-crmd_binary = "/usr/lib/DEB_HOST_MULTIARCH/pacemaker/crmd" +-cib_binary = "/usr/lib/DEB_HOST_MULTIARCH/pacemaker/cib" +-stonithd_binary = "/usr/lib/DEB_HOST_MULTIARCH/pacemaker/stonithd" ++service_binary = "/usr/sbin/service" ++pengine_binary = "/usr/lib/pacemaker/pengine" ++crmd_binary = "/usr/lib/pacemaker/crmd" ++cib_binary = "/usr/lib/pacemaker/cib" ++stonithd_binary = "/usr/lib/pacemaker/stonithd" + pcsd_exec_location = "/usr/share/pcsd/" + sbd_config = "/etc/default/sbd" +--- a/pcsd/settings.rb.debian ++++ b/pcsd/settings.rb.debian +@@ -6,8 +6,8 @@ + KEY_FILE = PCSD_VAR_LOCATION + 'pcsd.key' + COOKIE_FILE = PCSD_VAR_LOCATION + 'pcsd.cookiesecret' + +-PENGINE = "/usr/lib/DEB_HOST_MULTIARCH/pacemaker/pengine" +-CIB_BINARY = '/usr/lib/DEB_HOST_MULTIARCH/pacemaker/cib' ++PENGINE = "/usr/lib/pacemaker/pengine" ++CIB_BINARY = '/usr/lib/pacemaker/cib' + CRM_MON = "/usr/sbin/crm_mon" + CRM_NODE = "/usr/sbin/crm_node" + CRM_ATTRIBUTE = "/usr/sbin/crm_attribute" diff -Nru pcs-0.9.155+dfsg/debian/patches/0005-Replace-orderedhash.patch pcs-0.9.159/debian/patches/0005-Replace-orderedhash.patch --- pcs-0.9.155+dfsg/debian/patches/0005-Replace-orderedhash.patch 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/debian/patches/0005-Replace-orderedhash.patch 2017-06-29 01:30:26.000000000 +0000 @@ -0,0 +1,102 @@ +Description: Replace orderedhash gem with active_support + Gem orderedhash + has several problems: + * not packaged in Debian (so not used by some other software) + * does not look maintained (last version 0.0.6 is from 2008) + * no license file included (just one mention of public domain + in a source file) + . + On the other hand, replacement active_support gem is rather + popular (albeit somewhat big) and does not experience any of + these problems. 
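+ .
+ A minimal sketch of the drop-in replacement (the keys and values below are
+ illustrative; it assumes only that ActiveSupport::OrderedHash keeps the
+ plain Hash API, as the call sites in this patch rely on):
+ .
+   require 'json'
+   require 'active_support/ordered_hash'
+   h = ActiveSupport::OrderedHash.new
+   h['format_version'] = 2    # keys come back in insertion order
+   h['clusters'] = []
+   puts h.to_json             # => {"format_version":2,"clusters":[]}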
+Author: Valentin Vidic +Last-Update: 2017-06-17 +--- +This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ +--- a/pcsd/config.rb ++++ b/pcsd/config.rb +@@ -1,5 +1,5 @@ + require 'json' +-require 'orderedhash' ++require 'active_support/ordered_hash' + + require 'cluster.rb' + require 'permissions.rb' +@@ -124,15 +124,15 @@ + end + + def text() +- out_hash = OrderedHash.new ++ out_hash = ActiveSupport::OrderedHash.new + out_hash['format_version'] = CURRENT_FORMAT + out_hash['data_version'] = @data_version + out_hash['clusters'] = [] +- out_hash['permissions'] = OrderedHash.new ++ out_hash['permissions'] = ActiveSupport::OrderedHash.new + out_hash['permissions']['local_cluster'] = [] + + @clusters.each { |c| +- c_hash = OrderedHash.new ++ c_hash = ActiveSupport::OrderedHash.new + c_hash['name'] = c.name + c_hash['nodes'] = c.nodes.uniq.sort + out_hash['clusters'] << c_hash +@@ -226,10 +226,10 @@ + end + + def text() +- tokens_hash = OrderedHash.new ++ tokens_hash = ActiveSupport::OrderedHash.new + @tokens.keys.sort.each { |key| tokens_hash[key] = @tokens[key] } + +- out_hash = OrderedHash.new ++ out_hash = ActiveSupport::OrderedHash.new + out_hash['format_version'] = CURRENT_FORMAT + out_hash['data_version'] = @data_version + out_hash['tokens'] = tokens_hash +--- a/pcsd/pcsd-cli.rb ++++ b/pcsd/pcsd-cli.rb +@@ -4,14 +4,14 @@ + require 'etc' + require 'json' + require 'stringio' +-require 'orderedhash' ++require 'active_support/ordered_hash' + + require 'bootstrap.rb' + require 'pcs.rb' + require 'auth.rb' + + def cli_format_response(status, text=nil, data=nil) +- response = OrderedHash.new ++ response = ActiveSupport::OrderedHash.new + response['status'] = status + response['text'] = text if text + response['data'] = data if data +--- a/pcsd/permissions.rb ++++ b/pcsd/permissions.rb +@@ -1,4 +1,4 @@ +-require 'orderedhash' ++require 'active_support/ordered_hash' + + module Permissions + +@@ -104,7 +104,7 @@ + end + + def to_hash() +- perm_hash = OrderedHash.new ++ perm_hash = ActiveSupport::OrderedHash.new + perm_hash['type'] = @type + perm_hash['name'] = @name + perm_hash['allow'] = @allow_list.uniq.sort +--- a/pcsd/Gemfile ++++ b/pcsd/Gemfile +@@ -15,6 +15,6 @@ + gem 'json' + gem 'multi_json' + gem 'open4' +-gem 'orderedhash' ++gem 'activesupport' + gem 'ffi' + gem 'ethon' diff -Nru pcs-0.9.155+dfsg/debian/patches/0005-settings.py pcs-0.9.159/debian/patches/0005-settings.py --- pcs-0.9.155+dfsg/debian/patches/0005-settings.py 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/patches/0005-settings.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,33 +0,0 @@ -Description: Update locations of binaries for Debian -Author: Valentin Vidic -Last-Update: 2016-11-13 ---- -This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ ---- a/pcs/settings.py.debian -+++ b/pcs/settings.py.debian -@@ -1,7 +1,8 @@ - from pcs.settings_default import * --pengine_binary = "/usr/lib/DEB_HOST_MULTIARCH/pacemaker/pengine" --crmd_binary = "/usr/lib/DEB_HOST_MULTIARCH/pacemaker/crmd" --cib_binary = "/usr/lib/DEB_HOST_MULTIARCH/pacemaker/cib" --stonithd_binary = "/usr/lib/DEB_HOST_MULTIARCH/pacemaker/stonithd" -+service_binary = "/usr/sbin/service" -+pengine_binary = "/usr/lib/pacemaker/pengine" -+crmd_binary = "/usr/lib/pacemaker/crmd" -+cib_binary = "/usr/lib/pacemaker/cib" -+stonithd_binary = "/usr/lib/pacemaker/stonithd" - pcsd_exec_location = "/usr/share/pcsd/" - sbd_config = "/etc/default/sbd" ---- a/pcsd/settings.rb.debian -+++ b/pcsd/settings.rb.debian -@@ -5,8 +5,8 @@ - KEY_FILE = 
PCSD_VAR_LOCATION + 'pcsd.key' - COOKIE_FILE = PCSD_VAR_LOCATION + 'pcsd.cookiesecret' - --PENGINE = "/usr/lib/DEB_HOST_MULTIARCH/pacemaker/pengine" --CIB_BINARY = '/usr/lib/DEB_HOST_MULTIARCH/pacemaker/cib' -+PENGINE = "/usr/lib/pacemaker/pengine" -+CIB_BINARY = '/usr/lib/pacemaker/cib' - CRM_MON = "/usr/sbin/crm_mon" - CRM_NODE = "/usr/sbin/crm_node" - CRM_ATTRIBUTE = "/usr/sbin/crm_attribute" diff -Nru pcs-0.9.155+dfsg/debian/patches/0006-Fix-corosync-log.patch pcs-0.9.159/debian/patches/0006-Fix-corosync-log.patch --- pcs-0.9.155+dfsg/debian/patches/0006-Fix-corosync-log.patch 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/debian/patches/0006-Fix-corosync-log.patch 2017-07-05 14:39:49.000000000 +0000 @@ -0,0 +1,16 @@ +Description: Update corosync log location for Debian +Author: Valentin Vidic +Last-Update: 2016-05-20 +--- +This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ +--- a/pcs/cluster.py ++++ b/pcs/cluster.py +@@ -847,7 +847,7 @@ + quorum_section.add_attribute("two_node", "1") + + logging_section.add_attribute("to_logfile", "yes") +- logging_section.add_attribute("logfile", "/var/log/cluster/corosync.log") ++ logging_section.add_attribute("logfile", "/var/log/corosync/corosync.log") + logging_section.add_attribute("to_syslog", "yes") + + return str(corosync_conf), messages diff -Nru pcs-0.9.155+dfsg/debian/patches/0006-Replace-orderedhash.patch pcs-0.9.159/debian/patches/0006-Replace-orderedhash.patch --- pcs-0.9.155+dfsg/debian/patches/0006-Replace-orderedhash.patch 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/patches/0006-Replace-orderedhash.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,100 +0,0 @@ -Description: Replace orderedhash gem with active_support - Gem orderedhash - has several problems: - * not packaged in Debian (so not used by some other software) - * does not look maintained (last version 0.0.6 is from 2008) - * no license file included (just one mention of public domain - in a source file) - . - On the other hand, replacement active_support gem is rather - popular (albeit somewhat big) and does not experience any of - these problems. 
-Author: Valentin Vidic -Last-Update: 2017-06-17 ---- -This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ ---- a/pcsd/config.rb -+++ b/pcsd/config.rb -@@ -1,5 +1,5 @@ - require 'json' --require 'orderedhash' -+require 'active_support/ordered_hash' - - require 'cluster.rb' - require 'permissions.rb' -@@ -124,15 +124,15 @@ - end - - def text() -- out_hash = OrderedHash.new -+ out_hash = ActiveSupport::OrderedHash.new - out_hash['format_version'] = CURRENT_FORMAT - out_hash['data_version'] = @data_version - out_hash['clusters'] = [] -- out_hash['permissions'] = OrderedHash.new -+ out_hash['permissions'] = ActiveSupport::OrderedHash.new - out_hash['permissions']['local_cluster'] = [] - - @clusters.each { |c| -- c_hash = OrderedHash.new -+ c_hash = ActiveSupport::OrderedHash.new - c_hash['name'] = c.name - c_hash['nodes'] = c.nodes.uniq.sort - out_hash['clusters'] << c_hash -@@ -226,10 +226,10 @@ - end - - def text() -- tokens_hash = OrderedHash.new -+ tokens_hash = ActiveSupport::OrderedHash.new - @tokens.keys.sort.each { |key| tokens_hash[key] = @tokens[key] } - -- out_hash = OrderedHash.new -+ out_hash = ActiveSupport::OrderedHash.new - out_hash['format_version'] = CURRENT_FORMAT - out_hash['data_version'] = @data_version - out_hash['tokens'] = tokens_hash ---- a/pcsd/pcsd-cli.rb -+++ b/pcsd/pcsd-cli.rb -@@ -4,14 +4,14 @@ - require 'etc' - require 'json' - require 'stringio' --require 'orderedhash' -+require 'active_support/ordered_hash' - - require 'bootstrap.rb' - require 'pcs.rb' - require 'auth.rb' - - def cli_format_response(status, text=nil, data=nil) -- response = OrderedHash.new -+ response = ActiveSupport::OrderedHash.new - response['status'] = status - response['text'] = text if text - response['data'] = data if data ---- a/pcsd/permissions.rb -+++ b/pcsd/permissions.rb -@@ -1,4 +1,4 @@ --require 'orderedhash' -+require 'active_support/ordered_hash' - - module Permissions - -@@ -104,7 +104,7 @@ - end - - def to_hash() -- perm_hash = OrderedHash.new -+ perm_hash = ActiveSupport::OrderedHash.new - perm_hash['type'] = @type - perm_hash['name'] = @name - perm_hash['allow'] = @allow_list.uniq.sort ---- a/pcsd/Gemfile -+++ b/pcsd/Gemfile -@@ -15,4 +15,4 @@ - gem 'json' - gem 'multi_json' - gem 'open4' --gem 'orderedhash' -+gem 'activesupport' diff -Nru pcs-0.9.155+dfsg/debian/patches/0007-Fix-corosync-log.patch pcs-0.9.159/debian/patches/0007-Fix-corosync-log.patch --- pcs-0.9.155+dfsg/debian/patches/0007-Fix-corosync-log.patch 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/patches/0007-Fix-corosync-log.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,16 +0,0 @@ -Description: Update corosync log location for Debian -Author: Valentin Vidic -Last-Update: 2016-05-20 ---- -This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ ---- a/pcs/cluster.py -+++ b/pcs/cluster.py -@@ -729,7 +729,7 @@ - quorum_section.add_attribute("two_node", "1") - - logging_section.add_attribute("to_logfile", "yes") -- logging_section.add_attribute("logfile", "/var/log/cluster/corosync.log") -+ logging_section.add_attribute("logfile", "/var/log/corosync/corosync.log") - logging_section.add_attribute("to_syslog", "yes") - - return str(corosync_conf), messages diff -Nru pcs-0.9.155+dfsg/debian/patches/0007-Fix-testsuite.patch pcs-0.9.159/debian/patches/0007-Fix-testsuite.patch --- pcs-0.9.155+dfsg/debian/patches/0007-Fix-testsuite.patch 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/debian/patches/0007-Fix-testsuite.patch 2017-06-30 16:15:51.000000000 +0000 @@ -0,0 +1,392 @@ 
+Description: Update testsuite to work with Debian +Author: Valentin Vidic +Forwarded: not-needed +Last-Update: 2016-11-13 +--- +This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ +--- a/pcs/test/test_cluster.py ++++ b/pcs/test/test_cluster.py +@@ -256,7 +256,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """ +@@ -315,7 +315,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -458,7 +458,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -503,7 +503,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -566,7 +566,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -611,7 +611,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -652,7 +652,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -698,7 +698,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -740,7 +740,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -775,7 +775,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -823,7 +823,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -869,7 +869,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -910,7 +910,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -960,7 +960,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -1006,7 +1006,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -1408,7 +1408,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -1530,7 +1530,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -1589,7 +1589,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -1648,7 +1648,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -1709,7 +1709,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: 
/var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -1775,7 +1775,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -1840,7 +1840,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -1905,7 +1905,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -1994,7 +1994,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -2582,7 +2582,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +@@ -2820,7 +2820,7 @@ + + logging { + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + } + """) +--- a/pcs/test/test_lib_corosync_config_parser.py ++++ b/pcs/test/test_lib_corosync_config_parser.py +@@ -1020,7 +1020,7 @@ + # Log to a log file. When set to "no", the "logfile" option + # must not be set. + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + # Log to the system log daemon. When in doubt, set to yes. + to_syslog: yes + # Log debug messages (very verbose). When in doubt, leave off. +@@ -1060,7 +1060,7 @@ + fileline: off + to_stderr: no + to_logfile: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + to_syslog: yes + debug: off + timestamp: on +@@ -1097,7 +1097,7 @@ + fileline: off + to_logfile: yes + to_syslog: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + debug: off + timestamp: on + logger_subsys { +@@ -1155,7 +1155,7 @@ + fileline: off + to_logfile: yes + to_syslog: yes +- logfile: /var/log/cluster/corosync.log ++ logfile: /var/log/corosync/corosync.log + debug: off + timestamp: on + +--- a/pcs/test/test_stonith.py ++++ b/pcs/test/test_stonith.py +@@ -279,7 +279,7 @@ + + output, returnVal = pcs( + temp_cib, +- "stonith create f4 fence_xvm meta provides=something" ++ "stonith create f4 fence_dummy meta provides=something" + ) + ac(output, "") + self.assertEqual(0, returnVal) +@@ -295,7 +295,7 @@ + Resource: f3 (class=stonith type=fence_scsi) + Meta Attrs: provides=unfencing + Operations: monitor interval=60s (f3-monitor-interval-60s) +- Resource: f4 (class=stonith type=fence_xvm) ++ Resource: f4 (class=stonith type=fence_dummy) + Meta Attrs: provides=something + Operations: monitor interval=60s (f4-monitor-interval-60s) + """) +@@ -328,7 +328,7 @@ + + output, returnVal = pcs( + temp_cib, +- "stonith create f4 fence_xvm meta provides=something" ++ "stonith create f4 fence_dummy meta provides=something" + ) + ac(output, "") + self.assertEqual(0, returnVal) +@@ -347,7 +347,7 @@ + Attributes: key=abc + Meta Attrs: provides=unfencing + Operations: monitor interval=60s (f3-monitor-interval-60s) +- Resource: f4 (class=stonith type=fence_xvm) ++ Resource: f4 (class=stonith type=fence_dummy) + Meta Attrs: provides=something + Operations: monitor interval=60s (f4-monitor-interval-60s) + """) +--- a/pcs/test/test_resource.py ++++ b/pcs/test/test_resource.py +@@ -2521,13 +2521,13 @@ + + def testLSBResource(self): + self.assert_pcs_fail( +- "resource create --no-default-ops D2 lsb:network foo=bar", ++ 
"resource create --no-default-ops D2 lsb:networking foo=bar", + "Error: invalid resource option 'foo', there are no options" + " allowed, use --force to override\n" + ) + + self.assert_pcs_success( +- "resource create --no-default-ops D2 lsb:network foo=bar --force", ++ "resource create --no-default-ops D2 lsb:networking foo=bar --force", + "Warning: invalid resource option 'foo', there are no options" + " allowed\n" + ) +@@ -2536,7 +2536,7 @@ + "resource show --full", + outdent( + """\ +- Resource: D2 (class=lsb type=network) ++ Resource: D2 (class=lsb type=networking) + Attributes: foo=bar + Operations: monitor interval=15 timeout=15 (D2-monitor-interval-15) + """ +@@ -2559,7 +2559,7 @@ + "resource show --full", + outdent( + """\ +- Resource: D2 (class=lsb type=network) ++ Resource: D2 (class=lsb type=networking) + Attributes: foo=bar bar=baz + Operations: monitor interval=15 timeout=15 (D2-monitor-interval-15) + """ +--- a/pcs/lib/test/test_resource_agent.py ++++ b/pcs/lib/test/test_resource_agent.py +@@ -1449,7 +1449,7 @@ + ) + + self.mock_runner.run.assert_called_once_with( +- ["/usr/libexec/pacemaker/stonithd", "metadata"] ++ ["/usr/lib/pacemaker/stonithd", "metadata"] + ) + + def test_failed_to_get_xml(self): +@@ -1465,7 +1465,7 @@ + ) + + self.mock_runner.run.assert_called_once_with( +- ["/usr/libexec/pacemaker/stonithd", "metadata"] ++ ["/usr/lib/pacemaker/stonithd", "metadata"] + ) + + def test_invalid_xml(self): +@@ -1481,7 +1481,7 @@ + ) + + self.mock_runner.run.assert_called_once_with( +- ["/usr/libexec/pacemaker/stonithd", "metadata"] ++ ["/usr/lib/pacemaker/stonithd", "metadata"] + ) + + +@@ -1832,7 +1832,7 @@ + } + ), + mock.call( +- ["/usr/libexec/pacemaker/stonithd", "metadata"] ++ ["/usr/lib/pacemaker/stonithd", "metadata"] + ), + ]) + diff -Nru pcs-0.9.155+dfsg/debian/patches/0008-Fix-cluster-destroy-cleanup.patch pcs-0.9.159/debian/patches/0008-Fix-cluster-destroy-cleanup.patch --- pcs-0.9.155+dfsg/debian/patches/0008-Fix-cluster-destroy-cleanup.patch 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/patches/0008-Fix-cluster-destroy-cleanup.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,23 +0,0 @@ -Description: Fix: "find" should run only in specific directories - Some users reported that running find over "/var/lib" for cleanup - purposes can take too long depending on what you have installed. - A particular example was having "lxcfs" fuse mounted in /var/lib. - That can make the search for cluster leftovers to take quite some - time, making user to believe the process has hang. 
-Author: Rafael David Tinoco -Applied-Upstream: https://github.com/ClusterLabs/pcs/commit/8a2b8b337bcb0f26da593973e7b1f27bdef86449 -Reviewed-by: Valentin Vidic -Last-Update: 2016-12-11 ---- -This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ ---- a/pcs/cluster.py -+++ b/pcs/cluster.py -@@ -1893,7 +1893,7 @@ - state_files = ["cib.xml*", "cib-*", "core.*", "hostcache", "cts.*", - "pe*.bz2","cib.*"] - for name in state_files: -- os.system("find /var/lib -name '"+name+"' -exec rm -f \{\} \;") -+ os.system("find /var/lib/pacemaker -name '"+name+"' -exec rm -f \{\} \;") - try: - qdevice_net.client_destroy() - except: diff -Nru pcs-0.9.155+dfsg/debian/patches/0008-Replace-chkconfig.patch pcs-0.9.159/debian/patches/0008-Replace-chkconfig.patch --- pcs-0.9.155+dfsg/debian/patches/0008-Replace-chkconfig.patch 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/debian/patches/0008-Replace-chkconfig.patch 2017-06-30 15:56:45.000000000 +0000 @@ -0,0 +1,242 @@ +Description: Replace chkconfig calls + All chkconfig calls should be replaced with update-rc.d + and insserv calls to work on Debian. +Author: Valentin Vidic +Last-Update: 2016-11-13 +--- +This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ +--- a/pcsd/pcs.rb ++++ b/pcsd/pcs.rb +@@ -2068,11 +2068,22 @@ + def is_service_enabled?(service) + if ISSYSTEMCTL + cmd = ['systemctl', 'is-enabled', "#{service}.service"] ++ _, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), *cmd) ++ return (retcode == 0) + else +- cmd = ['chkconfig', service] ++ cmd = ['/sbin/insserv', '-s'] ++ stdout, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), *cmd) ++ if retcode != 0 ++ return nil ++ end ++ stdout.each { |line| ++ parts = line.split(':') ++ if parts[3] == service and parts[0] == 'S' ++ return true ++ end ++ } ++ return false + end +- _, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), *cmd) +- return (retcode == 0) + end + + def is_service_running?(service) +@@ -2087,12 +2098,13 @@ + + def is_service_installed?(service) + unless ISSYSTEMCTL +- stdout, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), 'chkconfig') ++ cmd = ['/sbin/insserv', '-s'] ++ stdout, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), *cmd) + if retcode != 0 + return nil + end + stdout.each { |line| +- if line.split(' ')[0] == service ++ if line.split(':')[3] == service + return true + end + } +@@ -2122,7 +2134,7 @@ + cmd = ['systemctl', 'enable', "#{service}.service"] + else + # fails when the service is not installed +- cmd = ['chkconfig', service, 'on'] ++ cmd = ['update-rc.d', service, 'enable'] + end + _, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), *cmd) + return (retcode == 0) +@@ -2137,7 +2149,7 @@ + if ISSYSTEMCTL + cmd = ['systemctl', 'disable', "#{service}.service"] + else +- cmd = ['chkconfig', service, 'off'] ++ cmd = ['update-rc.d', service, 'disable'] + end + _, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), *cmd) + return (retcode == 0) +--- a/pcs/lib/external.py ++++ b/pcs/lib/external.py +@@ -128,7 +128,7 @@ + _systemctl, "disable", _get_service_name(service, instance) + ]) + else: +- stdout, stderr, retval = runner.run([_chkconfig, service, "off"]) ++ stdout, stderr, retval = runner.run([_chkconfig, service, "disable"]) + if retval != 0: + raise DisableServiceError( + service, +@@ -152,7 +152,7 @@ + _systemctl, "enable", _get_service_name(service, instance) + ]) + else: +- stdout, stderr, retval = runner.run([_chkconfig, service, "on"]) ++ stdout, stderr, retval = runner.run([_chkconfig, service, "enable"]) + if retval != 0: + raise 
EnableServiceError( + service, +@@ -235,10 +235,17 @@ + dummy_stdout, dummy_stderr, retval = runner.run( + [_systemctl, "is-enabled", _get_service_name(service, instance)] + ) ++ return retval == 0 + else: +- dummy_stdout, dummy_stderr, retval = runner.run([_chkconfig, service]) +- +- return retval == 0 ++ stdout, dummy_stderr, retval = runner.run(["/sbin/insserv", "-s"]) ++ if retval != 0: ++ return False ++ ++ for line in stdout.splitlines(): ++ fields = line.split(":") ++ if fields[3] == service and fields[0] == "S": ++ return True ++ return False + + + def is_service_running(runner, service, instance=None): +@@ -286,13 +293,13 @@ + if is_systemctl(): + return [] + +- stdout, dummy_stderr, return_code = runner.run([_chkconfig]) ++ stdout, dummy_stderr, return_code = runner.run(["/sbin/insserv", "-s"]) + if return_code != 0: + return [] + + service_list = [] + for service in stdout.splitlines(): +- service = service.split(" ", 1)[0] ++ service = service.split(":")[3] + if service: + service_list.append(service) + return service_list +--- a/pcs/test/test_lib_external.py ++++ b/pcs/test/test_lib_external.py +@@ -1320,7 +1320,7 @@ + self.mock_runner, self.service, None + ) + self.mock_runner.run.assert_called_once_with( +- [_chkconfig, self.service, "off"] ++ [_chkconfig, self.service, "disable"] + ) + + def test_not_systemctl_failed(self, mock_is_installed, mock_systemctl): +@@ -1335,7 +1335,7 @@ + self.mock_runner, self.service, None + ) + self.mock_runner.run.assert_called_once_with( +- [_chkconfig, self.service, "off"] ++ [_chkconfig, self.service, "disable"] + ) + + def test_systemctl_not_installed( +@@ -1385,7 +1385,7 @@ + self.mock_runner, self.service, instance + ) + self.mock_runner.run.assert_called_once_with( +- [_chkconfig, self.service, "off"] ++ [_chkconfig, self.service, "disable"] + ) + + @mock.patch("pcs.lib.external.is_systemctl") +@@ -1418,7 +1418,7 @@ + self.mock_runner.run.return_value = ("", "", 0) + lib.enable_service(self.mock_runner, self.service) + self.mock_runner.run.assert_called_once_with( +- [_chkconfig, self.service, "on"] ++ [_chkconfig, self.service, "enable"] + ) + + def test_not_systemctl_failed(self, mock_systemctl): +@@ -1429,7 +1429,7 @@ + lambda: lib.enable_service(self.mock_runner, self.service) + ) + self.mock_runner.run.assert_called_once_with( +- [_chkconfig, self.service, "on"] ++ [_chkconfig, self.service, "enable"] + ) + + def test_instance_systemctl(self, mock_systemctl): +@@ -1447,7 +1447,7 @@ + self.mock_runner.run.return_value = ("", "", 0) + lib.enable_service(self.mock_runner, self.service, instance="test") + self.mock_runner.run.assert_called_once_with( +- [_chkconfig, self.service, "on"] ++ [_chkconfig, self.service, "enable"] + ) + + +@@ -1656,18 +1656,18 @@ + + def test_not_systemctl_enabled(self, mock_systemctl): + mock_systemctl.return_value = False +- self.mock_runner.run.return_value = ("", "", 0) ++ self.mock_runner.run.return_value = ("S:02:2 3 4 5:" + self.service, "", 0) + self.assertTrue(lib.is_service_enabled(self.mock_runner, self.service)) + self.mock_runner.run.assert_called_once_with( +- [_chkconfig, self.service] ++ ["/sbin/insserv", "-s"] + ) + + def test_not_systemctl_disabled(self, mock_systemctl): + mock_systemctl.return_value = False +- self.mock_runner.run.return_value = ("", "", 3) ++ self.mock_runner.run.return_value = ("K:01:0 1 6:" + self.service, "", 0) + self.assertFalse(lib.is_service_enabled(self.mock_runner, self.service)) + self.mock_runner.run.assert_called_once_with( +- [_chkconfig, self.service] ++ 
["/sbin/insserv", "-s"] + ) + + +@@ -1852,24 +1852,23 @@ + mock_is_systemctl.return_value = False + self.mock_runner.run.return_value = (outdent( + """\ +- pcsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off +- sbd 0:off 1:on 2:on 3:on 4:on 5:on 6:off +- pacemaker 0:off 1:off 2:off 3:off 4:off 5:off 6:off +- """ ++ K:01:0 1 6:pcsd ++ S:02:2 3 4 5:sbd ++ S:02:2 3 4 5:pacemaker""" + ), "", 0) + self.assertEqual( + lib.get_non_systemd_services(self.mock_runner), + ["pcsd", "sbd", "pacemaker"] + ) + self.assertEqual(mock_is_systemctl.call_count, 1) +- self.mock_runner.run.assert_called_once_with([_chkconfig]) ++ self.mock_runner.run.assert_called_once_with(["/sbin/insserv", "-s"]) + + def test_failed(self, mock_is_systemctl): + mock_is_systemctl.return_value = False + self.mock_runner.run.return_value = ("stdout", "failed", 1) + self.assertEqual(lib.get_non_systemd_services(self.mock_runner), []) + self.assertEqual(mock_is_systemctl.call_count, 1) +- self.mock_runner.run.assert_called_once_with([_chkconfig]) ++ self.mock_runner.run.assert_called_once_with(["/sbin/insserv", "-s"]) + + def test_systemd(self, mock_is_systemctl): + mock_is_systemctl.return_value = True +--- a/pcs/settings.py.debian ++++ b/pcs/settings.py.debian +@@ -6,3 +6,4 @@ + stonithd_binary = "/usr/lib/pacemaker/stonithd" + pcsd_exec_location = "/usr/share/pcsd/" + sbd_config = "/etc/default/sbd" ++chkconfig_binary = "/usr/sbin/update-rc.d" diff -Nru pcs-0.9.155+dfsg/debian/patches/0009-Fix-python-lxml.patch pcs-0.9.159/debian/patches/0009-Fix-python-lxml.patch --- pcs-0.9.155+dfsg/debian/patches/0009-Fix-python-lxml.patch 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/debian/patches/0009-Fix-python-lxml.patch 2017-06-30 15:56:49.000000000 +0000 @@ -0,0 +1,18 @@ +Description: Update pcs testsuite for python-lxml 3.7.1-1 + New lxml version has changed the error messages a bit so + some of the pcs tests started failing because of that. 
+Author: Valentin Vidic +Last-Update: 2017-01-06 +--- +This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ +--- a/pcs/test/tools/assertions.py ++++ b/pcs/test/tools/assertions.py +@@ -23,7 +23,7 @@ + """ + msg = "Start tag expected, '<' not found, line 1, column 1" + if LXML_VERSION >= (3, 7, 0, 0): +- msg += " (, line 1)" ++ msg += " (line 1)" + return msg + + def console_report(*lines): diff -Nru pcs-0.9.155+dfsg/debian/patches/0009-Fix-testsuite.patch pcs-0.9.159/debian/patches/0009-Fix-testsuite.patch --- pcs-0.9.155+dfsg/debian/patches/0009-Fix-testsuite.patch 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/patches/0009-Fix-testsuite.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,360 +0,0 @@ -Description: Update testsuite to work with Debian -Author: Valentin Vidic -Forwarded: not-needed -Last-Update: 2016-11-13 ---- -This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ ---- a/pcs/test/test_cluster.py -+++ b/pcs/test/test_cluster.py -@@ -193,7 +193,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """ -@@ -252,7 +252,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -399,7 +399,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -444,7 +444,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -485,7 +485,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -531,7 +531,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -573,7 +573,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -608,7 +608,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -656,7 +656,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -702,7 +702,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -743,7 +743,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -793,7 +793,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -839,7 +839,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -1241,7 +1241,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -1363,7 +1363,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -1422,7 +1422,7 @@ - - logging { - to_logfile: yes -- logfile: 
/var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -1481,7 +1481,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -1542,7 +1542,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -1608,7 +1608,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -1673,7 +1673,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -1738,7 +1738,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -1827,7 +1827,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -2415,7 +2415,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) -@@ -2655,7 +2655,7 @@ - - logging { - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - } - """) ---- a/pcs/test/test_lib_corosync_config_parser.py -+++ b/pcs/test/test_lib_corosync_config_parser.py -@@ -1020,7 +1020,7 @@ - # Log to a log file. When set to "no", the "logfile" option - # must not be set. - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - # Log to the system log daemon. When in doubt, set to yes. - to_syslog: yes - # Log debug messages (very verbose). When in doubt, leave off. 
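Every corosync-related hunk in this removed 0009-Fix-testsuite.patch makes the same one-line change: Debian writes the corosync log to /var/log/corosync rather than the upstream /var/log/cluster, so the expected test fixtures have to be rewritten. A minimal sketch of that mechanical substitution (hypothetical helper, not code shipped in the package):

    import re

    # Debian's corosync log location; upstream fixtures use /var/log/cluster.
    DEBIAN_LOGFILE = "/var/log/corosync/corosync.log"

    def debianize_corosync_fixture(text):
        # Rewrite an expected corosync.conf fixture for the Debian layout.
        return re.sub(r"/var/log/cluster/corosync\.log", DEBIAN_LOGFILE, text)
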
-@@ -1060,7 +1060,7 @@ - fileline: off - to_stderr: no - to_logfile: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - to_syslog: yes - debug: off - timestamp: on -@@ -1097,7 +1097,7 @@ - fileline: off - to_logfile: yes - to_syslog: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - debug: off - timestamp: on - logger_subsys { -@@ -1155,7 +1155,7 @@ - fileline: off - to_logfile: yes - to_syslog: yes -- logfile: /var/log/cluster/corosync.log -+ logfile: /var/log/corosync/corosync.log - debug: off - timestamp: on - ---- a/pcs/test/test_stonith.py -+++ b/pcs/test/test_stonith.py -@@ -236,7 +236,7 @@ - - output, returnVal = pcs( - temp_cib, -- "stonith create f4 fence_xvm meta provides=something" -+ "stonith create f4 fence_dummy meta provides=something" - ) - ac(output, "") - self.assertEqual(0, returnVal) -@@ -252,7 +252,7 @@ - Resource: f3 (class=stonith type=fence_scsi) - Meta Attrs: provides=unfencing - Operations: monitor interval=60s (f3-monitor-interval-60s) -- Resource: f4 (class=stonith type=fence_xvm) -+ Resource: f4 (class=stonith type=fence_dummy) - Meta Attrs: provides=something - Operations: monitor interval=60s (f4-monitor-interval-60s) - """) -@@ -285,7 +285,7 @@ - - output, returnVal = pcs( - temp_cib, -- "stonith create f4 fence_xvm meta provides=something" -+ "stonith create f4 fence_dummy meta provides=something" - ) - ac(output, "") - self.assertEqual(0, returnVal) -@@ -304,7 +304,7 @@ - Attributes: key=abc - Meta Attrs: provides=unfencing - Operations: monitor interval=60s (f3-monitor-interval-60s) -- Resource: f4 (class=stonith type=fence_xvm) -+ Resource: f4 (class=stonith type=fence_dummy) - Meta Attrs: provides=something - Operations: monitor interval=60s (f4-monitor-interval-60s) - """) ---- a/pcs/test/test_resource.py -+++ b/pcs/test/test_resource.py -@@ -2859,7 +2859,7 @@ - def testLSBResource(self): - output, returnVal = pcs( - temp_cib, -- "resource create --no-default-ops D2 lsb:network" -+ "resource create --no-default-ops D2 lsb:networking" - ) - assert returnVal == 0 - assert output == "", [output] ---- a/pcsd/test/test_config.rb -+++ b/pcsd/test/test_config.rb -@@ -125,7 +125,7 @@ - assert_equal( - [[ - 'error', -- "Unable to parse pcs_settings file: 399: unexpected token at '\"rh71-node2\"\n ]\n }\n ]\n}'" -+ "Unable to parse pcs_settings file: 409: unexpected token at '\"rh71-node2\"\n ]\n }\n ]\n}'" - ]], - $logger.log - ) ---- a/pcs/lib/test/test_resource_agent.py -+++ b/pcs/lib/test/test_resource_agent.py -@@ -1040,7 +1040,7 @@ - ) - - self.mock_runner.run.assert_called_once_with( -- ["/usr/libexec/pacemaker/stonithd", "metadata"] -+ ["/usr/lib/pacemaker/stonithd", "metadata"] - ) - - -@@ -1057,7 +1057,7 @@ - ) - - self.mock_runner.run.assert_called_once_with( -- ["/usr/libexec/pacemaker/stonithd", "metadata"] -+ ["/usr/lib/pacemaker/stonithd", "metadata"] - ) - - -@@ -1074,7 +1074,7 @@ - ) - - self.mock_runner.run.assert_called_once_with( -- ["/usr/libexec/pacemaker/stonithd", "metadata"] -+ ["/usr/lib/pacemaker/stonithd", "metadata"] - ) - - -@@ -1415,7 +1415,7 @@ - } - ), - mock.call( -- ["/usr/libexec/pacemaker/stonithd", "metadata"] -+ ["/usr/lib/pacemaker/stonithd", "metadata"] - ), - ]) - diff -Nru pcs-0.9.155+dfsg/debian/patches/0010-Fix-Makefile-dash.patch pcs-0.9.159/debian/patches/0010-Fix-Makefile-dash.patch --- pcs-0.9.155+dfsg/debian/patches/0010-Fix-Makefile-dash.patch 1970-01-01 00:00:00.000000000 +0000 +++ 
pcs-0.9.159/debian/patches/0010-Fix-Makefile-dash.patch 2017-07-05 14:42:43.000000000 +0000 @@ -0,0 +1,17 @@ +Description: Update Makefile to work with dash + echo -e does not work in dash, use printf instead +Author: Valentin Vidic +Last-Update: 2017-07-05 +--- +This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ +--- a/Makefile ++++ b/Makefile +@@ -100,7 +100,7 @@ + + install: + # make Python interpreter execution sane (via -Es flags) +- echo -e "[build]\nexecutable = $(PYTHON) -Es\n" > setup.cfg ++ printf "[build]\nexecutable = $(PYTHON) -Es\n" > setup.cfg + $(PYTHON) setup.py install --root=$(or ${DESTDIR}, /) ${EXTRA_SETUP_OPTS} + # fix excessive script interpreting "executable" quoting with old setuptools: + # https://github.com/pypa/setuptools/issues/188 diff -Nru pcs-0.9.155+dfsg/debian/patches/0010-Replace-chkconfig.patch pcs-0.9.159/debian/patches/0010-Replace-chkconfig.patch --- pcs-0.9.155+dfsg/debian/patches/0010-Replace-chkconfig.patch 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/patches/0010-Replace-chkconfig.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,241 +0,0 @@ -Description: Replace chkconfig calls - All chkconfig calls should be replaced with update-rc.d - and insserv calls to work on Debian. -Author: Valentin Vidic -Last-Update: 2016-11-13 ---- -This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ ---- a/pcsd/pcs.rb -+++ b/pcsd/pcs.rb -@@ -1968,11 +1968,22 @@ - def is_service_enabled?(service) - if ISSYSTEMCTL - cmd = ['systemctl', 'is-enabled', "#{service}.service"] -+ _, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), *cmd) -+ return (retcode == 0) - else -- cmd = ['chkconfig', service] -+ cmd = ['/sbin/insserv', '-s'] -+ stdout, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), *cmd) -+ if retcode != 0 -+ return nil -+ end -+ stdout.each { |line| -+ parts = line.split(':') -+ if parts[3] == service and parts[0] == 'S' -+ return true -+ end -+ } -+ return false - end -- _, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), *cmd) -- return (retcode == 0) - end - - def is_service_running?(service) -@@ -1987,12 +1998,13 @@ - - def is_service_installed?(service) - unless ISSYSTEMCTL -- stdout, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), 'chkconfig') -+ cmd = ['/sbin/insserv', '-s'] -+ stdout, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), *cmd) - if retcode != 0 - return nil - end - stdout.each { |line| -- if line.split(' ')[0] == service -+ if line.split(':')[3] == service - return true - end - } -@@ -2019,7 +2031,7 @@ - cmd = ['systemctl', 'enable', "#{service}.service"] - else - # fails when the service is not installed -- cmd = ['chkconfig', service, 'on'] -+ cmd = ['update-rc.d', service, 'enable'] - end - _, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), *cmd) - return (retcode == 0) -@@ -2034,7 +2046,7 @@ - if ISSYSTEMCTL - cmd = ['systemctl', 'disable', "#{service}.service"] - else -- cmd = ['chkconfig', service, 'off'] -+ cmd = ['update-rc.d', service, 'disable'] - end - _, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), *cmd) - return (retcode == 0) ---- a/pcs/lib/external.py -+++ b/pcs/lib/external.py -@@ -145,7 +145,7 @@ - _systemctl, "disable", _get_service_name(service, instance) - ]) - else: -- stdout, stderr, retval = runner.run([_chkconfig, service, "off"]) -+ stdout, stderr, retval = runner.run([_chkconfig, service, "disable"]) - if retval != 0: - raise DisableServiceError( - service, -@@ -169,7 +169,7 @@ - _systemctl, "enable", _get_service_name(service, instance) - ]) - else: -- stdout, stderr, 
retval = runner.run([_chkconfig, service, "on"]) -+ stdout, stderr, retval = runner.run([_chkconfig, service, "enable"]) - if retval != 0: - raise EnableServiceError( - service, -@@ -252,10 +252,17 @@ - dummy_stdout, dummy_stderr, retval = runner.run( - [_systemctl, "is-enabled", _get_service_name(service, instance)] - ) -+ return retval == 0 - else: -- dummy_stdout, dummy_stderr, retval = runner.run([_chkconfig, service]) -- -- return retval == 0 -+ stdout, dummy_stderr, retval = runner.run(["/sbin/insserv", "-s"]) -+ if retval != 0: -+ return False -+ -+ for line in stdout.splitlines(): -+ fields = line.split(":") -+ if fields[3] == service and fields[0] == "S": -+ return True -+ return False - - - def is_service_running(runner, service, instance=None): -@@ -301,13 +308,13 @@ - if is_systemctl(): - return [] - -- stdout, dummy_stderr, return_code = runner.run([_chkconfig]) -+ stdout, dummy_stderr, return_code = runner.run(["/sbin/insserv", "-s"]) - if return_code != 0: - return [] - - service_list = [] - for service in stdout.splitlines(): -- service = service.split(" ", 1)[0] -+ service = service.split(":")[3] - if service: - service_list.append(service) - return service_list ---- a/pcs/test/test_lib_external.py -+++ b/pcs/test/test_lib_external.py -@@ -1082,7 +1082,7 @@ - self.mock_runner.run.return_value = ("", "", 0) - lib.disable_service(self.mock_runner, self.service) - self.mock_runner.run.assert_called_once_with( -- [_chkconfig, self.service, "off"] -+ [_chkconfig, self.service, "disable"] - ) - - def test_not_systemctl_failed(self, mock_is_installed, mock_systemctl): -@@ -1094,7 +1094,7 @@ - lambda: lib.disable_service(self.mock_runner, self.service) - ) - self.mock_runner.run.assert_called_once_with( -- [_chkconfig, self.service, "off"] -+ [_chkconfig, self.service, "disable"] - ) - - def test_systemctl_not_installed( -@@ -1130,7 +1130,7 @@ - self.mock_runner.run.return_value = ("", "", 0) - lib.disable_service(self.mock_runner, self.service, instance="test") - self.mock_runner.run.assert_called_once_with( -- [_chkconfig, self.service, "off"] -+ [_chkconfig, self.service, "disable"] - ) - - @mock.patch("pcs.lib.external.is_systemctl") -@@ -1163,7 +1163,7 @@ - self.mock_runner.run.return_value = ("", "", 0) - lib.enable_service(self.mock_runner, self.service) - self.mock_runner.run.assert_called_once_with( -- [_chkconfig, self.service, "on"] -+ [_chkconfig, self.service, "enable"] - ) - - def test_not_systemctl_failed(self, mock_systemctl): -@@ -1174,7 +1174,7 @@ - lambda: lib.enable_service(self.mock_runner, self.service) - ) - self.mock_runner.run.assert_called_once_with( -- [_chkconfig, self.service, "on"] -+ [_chkconfig, self.service, "enable"] - ) - - def test_instance_systemctl(self, mock_systemctl): -@@ -1192,7 +1192,7 @@ - self.mock_runner.run.return_value = ("", "", 0) - lib.enable_service(self.mock_runner, self.service, instance="test") - self.mock_runner.run.assert_called_once_with( -- [_chkconfig, self.service, "on"] -+ [_chkconfig, self.service, "enable"] - ) - - -@@ -1401,18 +1401,18 @@ - - def test_not_systemctl_enabled(self, mock_systemctl): - mock_systemctl.return_value = False -- self.mock_runner.run.return_value = ("", "", 0) -+ self.mock_runner.run.return_value = ("S:02:2 3 4 5:" + self.service, "", 0) - self.assertTrue(lib.is_service_enabled(self.mock_runner, self.service)) - self.mock_runner.run.assert_called_once_with( -- [_chkconfig, self.service] -+ ["/sbin/insserv", "-s"] - ) - - def test_not_systemctl_disabled(self, mock_systemctl): - 
mock_systemctl.return_value = False -- self.mock_runner.run.return_value = ("", "", 3) -+ self.mock_runner.run.return_value = ("K:01:0 1 6:" + self.service, "", 0) - self.assertFalse(lib.is_service_enabled(self.mock_runner, self.service)) - self.mock_runner.run.assert_called_once_with( -- [_chkconfig, self.service] -+ ["/sbin/insserv", "-s"] - ) - - -@@ -1555,23 +1555,23 @@ - def test_success(self, mock_is_systemctl): - mock_is_systemctl.return_value = False - self.mock_runner.run.return_value = ("""\ --pcsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off --sbd 0:off 1:on 2:on 3:on 4:on 5:on 6:off --pacemaker 0:off 1:off 2:off 3:off 4:off 5:off 6:off -+K:01:0 1 6:pcsd -+S:02:2 3 4 5:sbd -+S:02:2 3 4 5:pacemaker - """, "", 0) - self.assertEqual( - lib.get_non_systemd_services(self.mock_runner), - ["pcsd", "sbd", "pacemaker"] - ) - self.assertEqual(mock_is_systemctl.call_count, 1) -- self.mock_runner.run.assert_called_once_with([_chkconfig]) -+ self.mock_runner.run.assert_called_once_with(["/sbin/insserv", "-s"]) - - def test_failed(self, mock_is_systemctl): - mock_is_systemctl.return_value = False - self.mock_runner.run.return_value = ("stdout", "failed", 1) - self.assertEqual(lib.get_non_systemd_services(self.mock_runner), []) - self.assertEqual(mock_is_systemctl.call_count, 1) -- self.mock_runner.run.assert_called_once_with([_chkconfig]) -+ self.mock_runner.run.assert_called_once_with(["/sbin/insserv", "-s"]) - - def test_systemd(self, mock_is_systemctl): - mock_is_systemctl.return_value = True ---- a/pcs/settings.py.debian -+++ b/pcs/settings.py.debian -@@ -6,3 +6,4 @@ - stonithd_binary = "/usr/lib/pacemaker/stonithd" - pcsd_exec_location = "/usr/share/pcsd/" - sbd_config = "/etc/default/sbd" -+chkconfig_binary = "/usr/sbin/update-rc.d" diff -Nru pcs-0.9.155+dfsg/debian/patches/0011-Fix-python-lxml.patch pcs-0.9.159/debian/patches/0011-Fix-python-lxml.patch --- pcs-0.9.155+dfsg/debian/patches/0011-Fix-python-lxml.patch 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/patches/0011-Fix-python-lxml.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,60 +0,0 @@ -Description: Update pcs testsuite for python-lxml 3.7.1-1 - New lxml version has changed the error messages a bit so - some of the pcs tests started failing because of that. 
-Author: Valentin Vidic -Last-Update: 2017-01-06 ---- -This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ ---- a/pcs/lib/commands/test/test_resource_agent.py -+++ b/pcs/lib/commands/test/test_resource_agent.py -@@ -353,7 +353,7 @@ - report_codes.UNABLE_TO_GET_AGENT_METADATA, - { - "agent": "ocf:test:Dummy", -- "reason": "Start tag expected, '<' not found, line 1, column 1", -+ "reason": "Start tag expected, '<' not found, line 1, column 1 (line 1)", - } - ) - ) ---- a/pcs/lib/commands/test/test_stonith_agent.py -+++ b/pcs/lib/commands/test/test_stonith_agent.py -@@ -204,7 +204,7 @@ - report_codes.UNABLE_TO_GET_AGENT_METADATA, - { - "agent": "fence_dummy", -- "reason": "Start tag expected, '<' not found, line 1, column 1", -+ "reason": "Start tag expected, '<' not found, line 1, column 1 (line 1)", - } - ) - ) ---- a/pcs/lib/test/test_resource_agent.py -+++ b/pcs/lib/test/test_resource_agent.py -@@ -1069,7 +1069,7 @@ - self.agent._get_metadata, - { - "agent": "stonithd", -- "message": "Start tag expected, '<' not found, line 1, column 1", -+ "message": "Start tag expected, '<' not found, line 1, column 1 (line 1)", - } - ) - -@@ -1196,7 +1196,7 @@ - self.agent._get_metadata, - { - "agent": self.agent_name, -- "message": "Start tag expected, '<' not found, line 1, column 1", -+ "message": "Start tag expected, '<' not found, line 1, column 1 (line 1)", - } - ) - ---- a/pcs/test/test_lib_cib_tools.py -+++ b/pcs/test/test_lib_cib_tools.py -@@ -488,7 +488,7 @@ - report_codes.CIB_UPGRADE_FAILED, - { - "reason": -- "Start tag expected, '<' not found, line 1, column 1", -+ "Start tag expected, '<' not found, line 1, column 1 (line 1)", - } - ) - ) diff -Nru pcs-0.9.155+dfsg/debian/patches/0012-CVE-2017-2661.patch pcs-0.9.159/debian/patches/0012-CVE-2017-2661.patch --- pcs-0.9.155+dfsg/debian/patches/0012-CVE-2017-2661.patch 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/patches/0012-CVE-2017-2661.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,41 +0,0 @@ -From: Ondrej Mular -Date: Sat, 4 Mar 2017 14:01:43 +0100 -Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1428948 -Subject: [PATCH] web UI: fixed XSS vulnerability - ---- - pcsd/public/js/nodes-ember.js | 4 ++-- - pcsd/public/js/pcsd.js | 2 +- - 3 files changed, 7 insertions(+), 3 deletions(-) - ---- a/pcsd/public/js/nodes-ember.js -+++ b/pcsd/public/js/nodes-ember.js -@@ -75,7 +75,7 @@ - var banned_options = ["SBD_OPTS", "SBD_WATCHDOG_DEV", "SBD_PACEMAKER"]; - $.each(this.get("sbd_config"), function(opt, val) { - if (banned_options.indexOf(opt) == -1) { -- out += '' + opt + '' + val + '\n'; -+ out += '' + htmlEncode(opt) + '' + htmlEncode(val) + '\n'; - } - }); - return out + ''; -@@ -879,7 +879,7 @@ - }.property("status_val"), - show_status: function() { - return '' -- + this.get('status') + (this.get("is_unmanaged") ? " (unmanaged)" : "") -+ + htmlEncode(this.get('status')) + (this.get("is_unmanaged") ? 
" (unmanaged)" : "") - + ''; - }.property("status_style", "disabled"), - status_class: function() { ---- a/pcsd/public/js/pcsd.js -+++ b/pcsd/public/js/pcsd.js -@@ -822,7 +822,7 @@ - - dialog_obj.find('#auth_nodes_list').empty(); - unauth_nodes.forEach(function(node) { -- dialog_obj.find('#auth_nodes_list').append("\t\t\t" + node + '\n'); -+ dialog_obj.find('#auth_nodes_list').append("\t\t\t" + htmlEncode(node) + '\n'); - }); - - } diff -Nru pcs-0.9.155+dfsg/debian/patches/series pcs-0.9.159/debian/patches/series --- pcs-0.9.155+dfsg/debian/patches/series 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/patches/series 2017-07-05 14:40:21.000000000 +0000 @@ -1,12 +1,10 @@ 0001-Remove-Gemlock.file-on-Debian.patch 0002-Remove-require-in-pcsd-ssl.rb.patch -0003-Fix-spelling.patch -0004-Remove-pcsd-test-.gitignore-file.patch -0005-settings.py -0006-Replace-orderedhash.patch -0007-Fix-corosync-log.patch -0008-Fix-cluster-destroy-cleanup.patch -0009-Fix-testsuite.patch -0010-Replace-chkconfig.patch -0011-Fix-python-lxml.patch -0012-CVE-2017-2661.patch +0003-Remove-pcsd-test-.gitignore-file.patch +0004-settings.py +0005-Replace-orderedhash.patch +0006-Fix-corosync-log.patch +0007-Fix-testsuite.patch +0008-Replace-chkconfig.patch +0009-Fix-python-lxml.patch +0010-Fix-Makefile-dash.patch diff -Nru pcs-0.9.155+dfsg/debian/rules pcs-0.9.159/debian/rules --- pcs-0.9.155+dfsg/debian/rules 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/rules 2017-06-29 01:30:26.000000000 +0000 @@ -11,7 +11,10 @@ # main packaging script based on dh7 syntax %: - dh $@ --with python2,systemd --fail-missing + dh $@ --with python2 + +override_dh_missing: + dh_missing --fail-missing override_dh_clean: dh_clean --exclude="corosync.conf.orig" diff -Nru pcs-0.9.155+dfsg/debian/source/lintian-overrides pcs-0.9.159/debian/source/lintian-overrides --- pcs-0.9.155+dfsg/debian/source/lintian-overrides 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/source/lintian-overrides 2017-06-29 01:30:26.000000000 +0000 @@ -1,5 +1,2 @@ -# Missing sources for jquery and jquery-ui are provided in debian/missing-sources -# ... 
but lintian still complains about the real jquery-ui: - # handlebars-v1.2.1.js has long lines in the real source code: pcs source: source-is-missing *handlebars-v1.2.1.js* diff -Nru pcs-0.9.155+dfsg/debian/tests/testsuite-pcs pcs-0.9.159/debian/tests/testsuite-pcs --- pcs-0.9.155+dfsg/debian/tests/testsuite-pcs 2017-03-21 19:37:55.000000000 +0000 +++ pcs-0.9.159/debian/tests/testsuite-pcs 2017-09-02 05:12:03.000000000 +0000 @@ -1,6 +1,8 @@ #!/bin/sh set -e +unset http_proxy +unset https_proxy cat >>/etc/hosts < setup.cfg $(PYTHON) setup.py install --root=$(or ${DESTDIR}, /) ${EXTRA_SETUP_OPTS} + # fix excessive script interpreting "executable" quoting with old setuptools: + # https://github.com/pypa/setuptools/issues/188 + # https://bugzilla.redhat.com/1353934 + sed -i '1s|^\(#!\)"\(.*\)"$$|\1\2|' ${DESTDIR}${PREFIX}/bin/pcs + rm setup.cfg mkdir -p ${DESTDIR}${PREFIX}/sbin/ mv ${DESTDIR}${PREFIX}/bin/pcs ${DESTDIR}${PREFIX}/sbin/pcs - install -D -m644 pcs/bash_completion.sh ${BASH_COMPLETION_DIR}/pcs + install -D -m644 pcs/bash_completion ${BASH_COMPLETION_DIR}/pcs install -m644 -D pcs/pcs.8 ${DESTDIR}/${MANDIR}/man8/pcs.8 ifeq ($(IS_DEBIAN),true) ifeq ($(install_settings),true) @@ -123,23 +147,30 @@ install -m 755 -D pcsd/pcsd.debian ${DESTDIR}/${initdir}/pcsd endif else - mkdir -p ${DESTDIR}${PREFIX}/lib/ - cp -r pcsd ${DESTDIR}${PREFIX}/lib/ + mkdir -p ${DESTDIR}${PCSD_PARENT_DIR}/ + cp -r pcsd ${DESTDIR}${PCSD_PARENT_DIR}/ install -m 644 -D pcsd/pcsd.conf ${DESTDIR}/etc/sysconfig/pcsd install -d ${DESTDIR}/etc/pam.d install pcsd/pcsd.pam ${DESTDIR}/etc/pam.d/pcsd ifeq ($(IS_SYSTEMCTL),true) install -d ${DESTDIR}/${systemddir}/system/ install -m 644 pcsd/pcsd.service ${DESTDIR}/${systemddir}/system/ -# ${DESTDIR}${PREFIX}/lib/pcsd/pcsd holds the selinux context - install -m 755 pcsd/pcsd.service-runner ${DESTDIR}${PREFIX}/lib/pcsd/pcsd - rm ${DESTDIR}${PREFIX}/lib/pcsd/pcsd.service-runner +# ${DESTDIR}${PCSD_PARENT_DIR}/pcsd/pcsd holds the selinux context + install -m 755 pcsd/pcsd.service-runner ${DESTDIR}${PCSD_PARENT_DIR}/pcsd/pcsd + rm ${DESTDIR}${PCSD_PARENT_DIR}/pcsd/pcsd.service-runner else install -m 755 -D pcsd/pcsd ${DESTDIR}/${initdir}/pcsd endif endif install -m 700 -d ${DESTDIR}/var/lib/pcsd install -m 644 -D pcsd/pcsd.logrotate ${DESTDIR}/etc/logrotate.d/pcsd + install -m644 -D pcsd/pcsd.8 ${DESTDIR}/${MANDIR}/man8/pcsd.8 + $(foreach font,$(pcsd_fonts),\ + $(eval font_file = $(word 1,$(subst ;, ,$(font)))) \ + $(eval font_def = $(word 2,$(subst ;, ,$(font)))) \ + $(eval font_path = $(shell fc-match '--format=%{file}' '$(font_def)')) \ + $(if $(font_path),ln -s -f $(font_path) ${DESTDIR}${PCSD_PARENT_DIR}/pcsd/public/css/$(font_file);,$(error Font $(font_def) not found)) \ + ) uninstall: rm -f ${DESTDIR}${PREFIX}/sbin/pcs diff -Nru pcs-0.9.155+dfsg/MANIFEST.in pcs-0.9.159/MANIFEST.in --- pcs-0.9.155+dfsg/MANIFEST.in 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/MANIFEST.in 2017-06-30 15:33:01.000000000 +0000 @@ -1,7 +1,7 @@ include Makefile include COPYING include pcs/pcs.8 -include pcs/bash_completion.sh +include pcs/bash_completion include pcsd/.bundle/config graft pcsd graft pcsd/vendor/cache diff -Nru pcs-0.9.155+dfsg/newversion.py pcs-0.9.159/newversion.py --- pcs-0.9.155+dfsg/newversion.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/newversion.py 2017-06-30 15:33:01.000000000 +0000 @@ -29,11 +29,14 @@ print(os.system("sed -i 's/"+settings.pcs_version+"/"+new_version+"/' pcsd/bootstrap.rb")) print(os.system("sed -i 's/\#\# \[Unreleased\]/\#\# 
["+new_version+"] - "+datetime.date.today().strftime('%Y-%m-%d')+"/' CHANGELOG.md")) -manpage_head = '.TH PCS "8" "{date}" "pcs {version}" "System Administration Utilities"'.format( - date=datetime.date.today().strftime('%B %Y'), - version=new_version -) -print(os.system("sed -i '1c " + manpage_head + "' pcs/pcs.8")) +def manpage_head(component): + return '.TH {component} "8" "{date}" "pcs {version}" "System Administration Utilities"'.format( + component=component.upper(), + date=datetime.date.today().strftime('%B %Y'), + version=new_version + ) +print(os.system("sed -i '1c " + manpage_head("pcs") + "' pcs/pcs.8")) +print(os.system("sed -i '1c " + manpage_head("pcsd") + "' pcsd/pcsd.8")) print(os.system("git diff")) print("Look good? (y/n)") diff -Nru pcs-0.9.155+dfsg/pcs/acl.py pcs-0.9.159/pcs/acl.py --- pcs-0.9.155+dfsg/pcs/acl.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/acl.py 2017-06-30 15:33:01.000000000 +0000 @@ -12,10 +12,10 @@ usage, utils, ) -from pcs.lib.pacemaker_values import is_true from pcs.cli.common.console_report import indent from pcs.cli.common.errors import CmdLineInputError from pcs.lib.errors import LibraryError +from pcs.lib.pacemaker.values import is_true def acl_cmd(lib, argv, modifiers): if len(argv) < 1: diff -Nru pcs-0.9.155+dfsg/pcs/alert.py pcs-0.9.159/pcs/alert.py --- pcs-0.9.155+dfsg/pcs/alert.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/alert.py 2017-06-30 15:33:01.000000000 +0000 @@ -18,7 +18,7 @@ from pcs.cli.common.console_report import indent from pcs.lib.errors import LibraryError -parse_cmd_sections = partial(group_by_keywords, implicit_first_keyword="main") +parse_cmd_sections = partial(group_by_keywords, implicit_first_group_key="main") def alert_cmd(*args): argv = args[1] diff -Nru pcs-0.9.155+dfsg/pcs/app.py pcs-0.9.159/pcs/app.py --- pcs-0.9.155+dfsg/pcs/app.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/app.py 2017-06-30 15:33:01.000000000 +0000 @@ -9,7 +9,6 @@ import os import sys import logging -logging.basicConfig() from pcs import ( acl, @@ -31,9 +30,10 @@ alert, ) -from pcs.cli.common import completion +from pcs.cli.common import completion, parse_args +logging.basicConfig() usefile = False filename = "" def main(argv=None): @@ -49,90 +49,41 @@ global filename, usefile orig_argv = argv[:] utils.pcs_options = {} - modified_argv = [] - real_argv = [] - try: - # we change --cloneopt to "clone" for backwards compatibility - new_argv = [] - for arg in argv: - if arg == "--cloneopt" or arg == "--clone": - new_argv.append("clone") - elif arg.startswith("--cloneopt="): - new_argv.append("clone") - new_argv.append(arg.split('=',1)[1]) - else: - new_argv.append(arg) - argv = new_argv - # we want to support optional arguments for --wait, so if an argument - # is specified with --wait (ie. 
--wait=30) then we use them - waitsecs = None - new_argv = [] - for arg in argv: - if arg.startswith("--wait="): - tempsecs = arg.replace("--wait=","") - if len(tempsecs) > 0: - waitsecs = tempsecs - arg = "--wait" - new_argv.append(arg) - argv = new_argv - - # h = help, f = file, - # p = password (cluster auth), u = user (cluster auth), - # V = verbose (cluster verify) - pcs_short_options = "hf:p:u:V" - pcs_long_options = [ - "debug", "version", "help", "fullhelp", - "force", "skip-offline", "autocorrect", "interactive", "autodelete", - "all", "full", "groups", "local", "wait", "config", - "start", "enable", "disabled", "off", - "pacemaker", "corosync", - "no-default-ops", "defaults", "nodesc", - "clone", "master", "name=", "group=", "node=", - "from=", "to=", "after=", "before=", - "transport=", "rrpmode=", "ipv6", - "addr0=", "bcast0=", "mcast0=", "mcastport0=", "ttl0=", "broadcast0", - "addr1=", "bcast1=", "mcast1=", "mcastport1=", "ttl1=", "broadcast1", - "wait_for_all=", "auto_tie_breaker=", "last_man_standing=", - "last_man_standing_window=", - "token=", "token_coefficient=", "consensus=", "join=", - "miss_count_const=", "fail_recv_const=", - "corosync_conf=", "cluster_conf=", - "booth-conf=", "booth-key=", - "remote", "watchdog=", - #in pcs status - do not display resorce status on inactive node - "hide-inactive", - ] - # pull out negative number arguments and add them back after getopt - prev_arg = "" - for arg in argv: - if len(arg) > 0 and arg[0] == "-": - if arg[1:].isdigit() or arg[1:].startswith("INFINITY"): - real_argv.append(arg) - else: - modified_argv.append(arg) - else: - # If previous argument required an argument, then this arg - # should not be added back in - if not prev_arg or (not (prev_arg[0] == "-" and prev_arg[1:] in pcs_short_options) and not (prev_arg[0:2] == "--" and (prev_arg[2:] + "=") in pcs_long_options)): - real_argv.append(arg) - modified_argv.append(arg) - prev_arg = arg + argv = parse_args.upgrade_args(argv) + + # we want to support optional arguments for --wait, so if an argument + # is specified with --wait (ie. 
--wait=30) then we use them + waitsecs = None + new_argv = [] + for arg in argv: + if arg.startswith("--wait="): + tempsecs = arg.replace("--wait=","") + if len(tempsecs) > 0: + waitsecs = tempsecs + arg = "--wait" + new_argv.append(arg) + argv = new_argv - pcs_options, argv = getopt.gnu_getopt(modified_argv, pcs_short_options, pcs_long_options) + try: + pcs_options, dummy_argv = getopt.gnu_getopt( + parse_args.filter_out_non_option_negative_numbers(argv), + parse_args.PCS_SHORT_OPTIONS, + parse_args.PCS_LONG_OPTIONS, + ) except getopt.GetoptError as err: print(err) usage.main() sys.exit(1) - argv = real_argv + argv = parse_args.filter_out_options(argv) for o, a in pcs_options: if not o in utils.pcs_options: - if o == "--watchdog": + if o in ["--watchdog", "--device"]: a = [a] utils.pcs_options[o] = a else: # If any options are a list then they've been entered twice which isn't valid - if o != "--watchdog": + if o not in ["--watchdog", "--device"]: utils.err("%s can only be used once" % o) else: utils.pcs_options[o].append(a) @@ -160,6 +111,22 @@ sys.exit() elif o == "--wait": utils.pcs_options[o] = waitsecs + elif o == "--request-timeout": + request_timeout_valid = False + try: + timeout = int(a) + if timeout > 0: + utils.pcs_options[o] = timeout + request_timeout_valid = True + except ValueError: + pass + if not request_timeout_valid: + utils.err( + ( + "'{0}' is not a valid --request-timeout value, use " + "a positive integer" + ).format(a) + ) if len(argv) == 0: usage.main() @@ -189,7 +156,11 @@ "status": status.status_cmd, "config": config.config_cmd, "pcsd": pcsd.pcsd_cmd, - "node": node.node_cmd, + "node": lambda argv: node.node_cmd( + utils.get_library_wrapper(), + argv, + utils.get_modificators() + ), "quorum": lambda argv: quorum.quorum_cmd( utils.get_library_wrapper(), argv, diff -Nru pcs-0.9.155+dfsg/pcs/bash_completion pcs-0.9.159/pcs/bash_completion --- pcs-0.9.155+dfsg/pcs/bash_completion 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/bash_completion 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,38 @@ +# bash completion for pcs +_pcs_completion(){ + + LENGTHS=() + for WORD in "${COMP_WORDS[@]}"; do + LENGTHS+=(${#WORD}) + done + + + COMPREPLY=( $( \ + env COMP_WORDS="${COMP_WORDS[*]}" \ + COMP_LENGTHS="${LENGTHS[*]}" \ + COMP_CWORD=$COMP_CWORD \ + PCS_AUTO_COMPLETE=1 pcs \ + ) ) + + #examples what we get: + #pcs + #COMP_WORDS: pcs COMP_LENGTHS: 3 + #pcs co + #COMP_WORDS: pcs co COMP_LENGTHS: 3 2 + # pcs config + #COMP_WORDS: pcs config COMP_LENGTHS: 3 6 + # pcs config " + #COMP_WORDS: pcs config " COMP_LENGTHS: 3 6 4 + # pcs config "'\\n + #COMP_WORDS: pcs config "'\\n COMP_LENGTHS: 3 6 5'" +} + +# -o default +# Use readline's default filename completion if the compspec generates no +# matches. +# -F function +# The shell function function is executed in the current shell environment. +# When it finishes, the possible completions are retrieved from the value of +# the COMPREPLY array variable. 
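The comments above in the new pcs/bash_completion file show the hand-off protocol: the shell joins COMP_WORDS with single spaces and passes COMP_LENGTHS alongside, so the Python side can split the words apart again even when a word itself contains a space. A sketch of the inverse operation (hypothetical helper; the real parsing lives inside pcs):

    import os

    def words_from_environment():
        # Rebuild the completion word list from COMP_WORDS/COMP_LENGTHS.
        joined = os.environ["COMP_WORDS"]
        lengths = [int(n) for n in os.environ["COMP_LENGTHS"].split()]
        words, position = [], 0
        for length in lengths:
            words.append(joined[position:position + length])
            position += length + 1  # skip the single joining space
        return words
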
+ +complete -o default -F _pcs_completion pcs diff -Nru pcs-0.9.155+dfsg/pcs/bash_completion.sh pcs-0.9.159/pcs/bash_completion.sh --- pcs-0.9.155+dfsg/pcs/bash_completion.sh 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/bash_completion.sh 1970-01-01 00:00:00.000000000 +0000 @@ -1,38 +0,0 @@ -# bash completion for pcs -_pcs_completion(){ - - LENGTHS=() - for WORD in "${COMP_WORDS[@]}"; do - LENGTHS+=(${#WORD}) - done - - - COMPREPLY=( $( \ - env COMP_WORDS="${COMP_WORDS[*]}" \ - COMP_LENGTHS="${LENGTHS[*]}" \ - COMP_CWORD=$COMP_CWORD \ - PCS_AUTO_COMPLETE=1 pcs \ - ) ) - - #examples what we get: - #pcs - #COMP_WORDS: pcs COMP_LENGTHS: 3 - #pcs co - #COMP_WORDS: pcs co COMP_LENGTHS: 3 2 - # pcs config - #COMP_WORDS: pcs config COMP_LENGTHS: 3 6 - # pcs config " - #COMP_WORDS: pcs config " COMP_LENGTHS: 3 6 4 - # pcs config "'\\n - #COMP_WORDS: pcs config "'\\n COMP_LENGTHS: 3 6 5'" -} - -# -o default -# Use readline's default filename completion if the compspec generates no -# matches. -# -F function -# The shell function function is executed in the current shell environment. -# When it finishes, the possible completions are retrieved from the value of -# the COMPREPLY array variable. - -complete -o default -F _pcs_completion pcs diff -Nru pcs-0.9.155+dfsg/pcs/booth.py pcs-0.9.159/pcs/booth.py --- pcs-0.9.155+dfsg/pcs/booth.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/booth.py 2017-06-30 15:33:01.000000000 +0000 @@ -12,7 +12,7 @@ from pcs.cli.booth import command from pcs.cli.common.errors import CmdLineInputError from pcs.lib.errors import LibraryError -from pcs.resource import resource_create, resource_remove, resource_restart +from pcs.resource import resource_remove, resource_restart def booth_cmd(lib, argv, modifiers): @@ -26,7 +26,7 @@ sub_cmd, argv_next = argv[0], argv[1:] try: if sub_cmd == "help": - usage.booth(argv) + usage.booth([" ".join(argv_next)] if argv_next else []) elif sub_cmd == "config": command.config_show(lib, argv_next, modifiers) elif sub_cmd == "setup": @@ -47,9 +47,7 @@ else: raise CmdLineInputError() elif sub_cmd == "create": - command.get_create_in_cluster(resource_create, resource_remove)( - lib, argv_next, modifiers - ) + command.create_in_cluster(lib, argv_next, modifiers) elif sub_cmd == "remove": command.get_remove_from_cluster(resource_remove)( lib, argv_next, modifiers diff -Nru pcs-0.9.155+dfsg/pcs/cli/booth/command.py pcs-0.9.159/pcs/cli/booth/command.py --- pcs-0.9.155+dfsg/pcs/cli/booth/command.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/cli/booth/command.py 2017-06-30 15:33:01.000000000 +0000 @@ -94,21 +94,14 @@ def ticket_grant(lib, arg_list, modifiers): ticket_operation(lib.booth.ticket_grant, arg_list, modifiers) -def get_create_in_cluster(resource_create, resource_remove): - #TODO resource_remove is provisional hack until resources are not moved to - #lib - def create_in_cluster(lib, arg_list, modifiers): - if len(arg_list) != 2 or arg_list[0] != "ip": - raise CmdLineInputError() - ip = arg_list[1] - - lib.booth.create_in_cluster( - __get_name(modifiers), - ip, - resource_create, - resource_remove, - ) - return create_in_cluster +def create_in_cluster(lib, arg_list, modifiers): + if len(arg_list) != 2 or arg_list[0] != "ip": + raise CmdLineInputError() + lib.booth.create_in_cluster( + __get_name(modifiers), + ip=arg_list[1], + allow_absent_resource_agent=modifiers["force"] + ) def get_remove_from_cluster(resource_remove): #TODO resource_remove is provisional hack until resources are not moved to diff -Nru 
pcs-0.9.155+dfsg/pcs/cli/booth/console_report.py pcs-0.9.159/pcs/cli/booth/console_report.py --- pcs-0.9.155+dfsg/pcs/cli/booth/console_report.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/cli/booth/console_report.py 2017-06-30 15:33:01.000000000 +0000 @@ -10,9 +10,10 @@ def format_booth_default(value, template): return "" if value in ("booth", "", None) else template.format(value) -#Each value (callable taking report_item.info) returns string template. -#Optionaly the template can contain placehodler {force} for next processing. -#Placeholder {force} will be appended if is necessary and if is not presset +#Each value (a callable taking report_item.info) returns a message. +#Force text will be appended if necessary. +#If it is necessary to put the force text inside the string then the callable +#must take the force_text parameter. CODE_TO_MESSAGE_BUILDER_MAP = { codes.BOOTH_LACK_OF_SITES: lambda info: "lack of sites for booth configuration (need 2 at least): sites {0}" diff -Nru pcs-0.9.155+dfsg/pcs/cli/booth/env.py pcs-0.9.159/pcs/cli/booth/env.py --- pcs-0.9.155+dfsg/pcs/cli/booth/env.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/cli/booth/env.py 2017-06-30 15:33:01.000000000 +0000 @@ -5,58 +5,12 @@ unicode_literals, ) -import os.path - from pcs.cli.common import console_report -from pcs.common import report_codes, env_file_role_codes as file_role_codes +from pcs.common.env_file_role_codes import BOOTH_CONFIG, BOOTH_KEY from pcs.lib.errors import LibraryEnvError +from pcs.cli.common import env_file -def read_env_file(path): - try: - return { - "content": open(path).read() if os.path.isfile(path) else None - } - except EnvironmentError as e: - raise console_report.error( - "Unable to read {0}: {1}".format(path, e.strerror) - ) - -def write_env_file(env_file, file_path): - try: - f = open(file_path, "wb" if env_file.get("is_binary", False) else "w") - f.write(env_file["content"]) - f.close() - except EnvironmentError as e: - raise console_report.error( - "Unable to write {0}: {1}".format(file_path, e.strerror) - ) - -def process_no_existing_file_expectation(file_role, env_file, file_path): - if( - env_file["no_existing_file_expected"] - and - os.path.exists(file_path) - ): - msg = "{0} {1} already exists".format(file_role, file_path) - if not env_file["can_overwrite_existing_file"]: - raise console_report.error( - "{0}, use --force to override".format(msg) - ) - console_report.warn(msg) - -def is_missing_file_report(report, file_role_code): - return ( - report.code == report_codes.FILE_DOES_NOT_EXIST - and - report.info["file_role"] == file_role_code - ) - -def report_missing_file(file_role, file_path): - console_report.error( - "{0} '{1}' does not exist".format(file_role, file_path) - ) - def middleware_config(name, config_path, key_path): if config_path and not key_path: raise console_report.error( @@ -75,8 +29,8 @@ return {"name": name} return { "name": name, - "config_file": read_env_file(config_path), - "key_file": read_env_file(key_path), + "config_file": env_file.read(config_path), + "key_file": env_file.read(key_path, is_binary=True), "key_path": key_path, } @@ -89,31 +43,30 @@ #pcs.cli.common.lib_wrapper.lib_env_to_cli_env raise console_report.error("Error during library communication") - process_no_existing_file_expectation( + env_file.process_no_existing_file_expectation( "booth config file", modified_env["config_file"], config_path ) - process_no_existing_file_expectation( + env_file.process_no_existing_file_expectation( "booth key file", 
modified_env["key_file"], key_path ) - write_env_file(modified_env["key_file"], key_path) - write_env_file(modified_env["config_file"], config_path) + env_file.write(modified_env["key_file"], key_path) + env_file.write(modified_env["config_file"], config_path) def apply(next_in_line, env, *args, **kwargs): env.booth = create_booth_env() try: result_of_next = next_in_line(env, *args, **kwargs) except LibraryEnvError as e: - for report in e.args: - if is_missing_file_report(report, file_role_codes.BOOTH_CONFIG): - report_missing_file("Booth config file", config_path) - e.sign_processed(report) - if is_missing_file_report(report, file_role_codes.BOOTH_KEY): - report_missing_file("Booth key file", key_path) - e.sign_processed(report) + missing_file = env_file.MissingFileCandidateInfo + + env_file.evaluate_for_missing_files(e, [ + missing_file(BOOTH_CONFIG, "Booth config file", config_path), + missing_file(BOOTH_KEY, "Booth key file", key_path), + ]) raise e flush(env.booth["modified_env"]) return result_of_next diff -Nru pcs-0.9.155+dfsg/pcs/cli/booth/test/test_env.py pcs-0.9.159/pcs/cli/booth/test/test_env.py --- pcs-0.9.155+dfsg/pcs/cli/booth/test/test_env.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/cli/booth/test/test_env.py 2017-06-30 15:33:01.000000000 +0000 @@ -7,16 +7,23 @@ from pcs.test.tools.pcs_unittest import TestCase -from pcs.cli.booth.env import middleware_config +from pcs.cli.booth import env from pcs.common import report_codes, env_file_role_codes from pcs.lib.errors import LibraryEnvError, ReportItem from pcs.test.tools.pcs_unittest import mock +from pcs.test.tools.misc import create_setup_patch_mixin +SetupPatchMixin = create_setup_patch_mixin(env) -class BoothConfTest(TestCase): - @mock.patch("pcs.cli.booth.env.os.path.isfile") - def test_sucessfully_care_about_local_file(self, mock_is_file): - #setup, fixtures +class BoothConfTest(TestCase, SetupPatchMixin): + def setUp(self): + self.write = self.setup_patch("env_file.write") + self.read = self.setup_patch("env_file.read") + self.process_no_existing_file_expectation = self.setup_patch( + "env_file.process_no_existing_file_expectation" + ) + + def test_sucessfully_care_about_local_file(self): def next_in_line(env): env.booth["modified_env"] = { "config_file": { @@ -29,55 +36,62 @@ } } return "call result" - mock_is_file.return_value = True + mock_env = mock.MagicMock() + booth_conf_middleware = env.middleware_config( + "booth-name", + "/local/file/path.conf", + "/local/file/path.key", + ) - mock_open = mock.mock_open() - with mock.patch( - "pcs.cli.booth.env.open", - mock_open, - create=True - ): - #run tested code - booth_conf_middleware = middleware_config( - "booth-name", - "/local/file/path.conf", - "/local/file/path.key", - ) + self.assertEqual( + "call result", + booth_conf_middleware(next_in_line, mock_env) + ) - self.assertEqual( - "call result", - booth_conf_middleware(next_in_line, mock_env) - ) + self.assertEqual(self.read.mock_calls, [ + mock.call('/local/file/path.conf'), + mock.call('/local/file/path.key', is_binary=True), + ]) - #assertions - self.assertEqual(mock_is_file.mock_calls,[ - mock.call("/local/file/path.conf"), - mock.call("/local/file/path.key"), + self.assertEqual(self.process_no_existing_file_expectation.mock_calls, [ + mock.call( + 'booth config file', + { + 'content': 'file content', + 'no_existing_file_expected': False + }, + '/local/file/path.conf' + ), + mock.call( + 'booth key file', + { + 'content': 'key file content', + 'no_existing_file_expected': False + }, + 
'/local/file/path.key' + ), ]) - self.assertEqual(mock_env.booth["name"], "booth-name") - self.assertEqual(mock_env.booth["config_file"], {"content": ""}) - self.assertEqual(mock_env.booth["key_file"], {"content": ""}) - - self.assertEqual(mock_open.mock_calls, [ - mock.call(u'/local/file/path.conf'), - mock.call().read(), - mock.call(u'/local/file/path.key'), - mock.call().read(), - mock.call(u'/local/file/path.key', u'w'), - mock.call().write(u'key file content'), - mock.call().close(), - mock.call(u'/local/file/path.conf', u'w'), - mock.call().write(u'file content'), - mock.call().close(), + self.assertEqual(self.write.mock_calls, [ + mock.call( + { + 'content': 'key file content', + 'no_existing_file_expected': False + }, + '/local/file/path.key' + ), + mock.call( + { + 'content': 'file content', + 'no_existing_file_expected': False + }, + '/local/file/path.conf' + ) ]) - @mock.patch("pcs.cli.booth.env.console_report") - @mock.patch("pcs.cli.booth.env.os.path.isfile") - def test_catch_exactly_his_exception( - self, mock_is_file, mock_console_report - ): + def test_catch_exactly_his_exception(self): + report_missing = self.setup_patch("env_file.report_missing") next_in_line = mock.Mock(side_effect=LibraryEnvError( ReportItem.error(report_codes.FILE_DOES_NOT_EXIST, info={ "file_role": env_file_role_codes.BOOTH_CONFIG, @@ -87,11 +101,10 @@ }), ReportItem.error("OTHER ERROR", info={}), )) - mock_is_file.return_value = False mock_env = mock.MagicMock() + self.read.return_value = {"content": None} - #run tested code - booth_conf_middleware = middleware_config( + booth_conf_middleware = env.middleware_config( "booth-name", "/local/file/path.conf", "/local/file/path.key", @@ -103,16 +116,10 @@ except Exception as e: raised_exception.append(e) raise e - self.assertRaises(LibraryEnvError, run_middleware) self.assertEqual(1, len(raised_exception[0].unprocessed)) self.assertEqual("OTHER ERROR", raised_exception[0].unprocessed[0].code) - - self.assertEqual(mock_console_report.error.mock_calls, [ - mock.call( - "Booth config file '/local/file/path.conf' does not exist" - ), - mock.call( - "Booth key file '/local/file/path.key' does not exist" - ), + self.assertEqual(report_missing.mock_calls, [ + mock.call('Booth config file', '/local/file/path.conf'), + mock.call('Booth key file', '/local/file/path.key'), ]) diff -Nru pcs-0.9.155+dfsg/pcs/cli/cluster/command.py pcs-0.9.159/pcs/cli/cluster/command.py --- pcs-0.9.155+dfsg/pcs/cli/cluster/command.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/cli/cluster/command.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,105 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from pcs.cli.resource.parse_args import( + parse_create_simple as parse_resource_create_args +) +from pcs.cli.common.errors import CmdLineInputError +from pcs.cli.common.parse_args import prepare_options + +def _node_add_remote_separate_host_and_name(arg_list): + node_host = arg_list[0] + if len(arg_list) == 1: + node_name = node_host + rest_args = [] + elif "=" in arg_list[1] or arg_list[1] in ["op", "meta"]: + node_name = node_host + rest_args = arg_list[1:] + else: + node_name = arg_list[1] + rest_args = arg_list[2:] + + return node_host, node_name, rest_args + +def node_add_remote(lib, arg_list, modifiers): + if not arg_list: + raise CmdLineInputError() + + node_host, node_name, rest_args = _node_add_remote_separate_host_and_name( + arg_list + ) + + parts = parse_resource_create_args(rest_args) + force = modifiers["force"] 
+ + lib.cluster.node_add_remote( + node_host, + node_name, + parts["op"], + parts["meta"], + parts["options"], + skip_offline_nodes=modifiers["skip_offline_nodes"], + allow_incomplete_distribution=force, + allow_pacemaker_remote_service_fail=force, + allow_invalid_operation=force, + allow_invalid_instance_attributes=force, + use_default_operations=not modifiers["no-default-ops"], + wait=modifiers["wait"], + ) + +def create_node_remove_remote(remove_resource): + def node_remove_remote(lib, arg_list, modifiers): + if not arg_list: + raise CmdLineInputError() + lib.cluster.node_remove_remote( + arg_list[0], + remove_resource, + skip_offline_nodes=modifiers["skip_offline_nodes"], + allow_remove_multiple_nodes=modifiers["force"], + allow_pacemaker_remote_service_fail=modifiers["force"], + ) + return node_remove_remote + +def node_add_guest(lib, arg_list, modifiers): + if len(arg_list) < 2: + raise CmdLineInputError() + + + node_name = arg_list[0] + resource_id = arg_list[1] + meta_options = prepare_options(arg_list[2:]) + + lib.cluster.node_add_guest( + node_name, + resource_id, + meta_options, + skip_offline_nodes=modifiers["skip_offline_nodes"], + allow_incomplete_distribution=modifiers["force"], + allow_pacemaker_remote_service_fail=modifiers["force"], + wait=modifiers["wait"], + ) + +def node_remove_guest(lib, arg_list, modifiers): + if not arg_list: + raise CmdLineInputError() + + lib.cluster.node_remove_guest( + arg_list[0], + skip_offline_nodes=modifiers["skip_offline_nodes"], + allow_remove_multiple_nodes=modifiers["force"], + allow_pacemaker_remote_service_fail=modifiers["force"], + wait=modifiers["wait"], + ) + +def node_clear(lib, arg_list, modifiers): + if len(arg_list) != 1: + raise CmdLineInputError() + + lib.cluster.node_clear( + arg_list[0], + allow_clear_cluster_node=modifiers["force"] + ) diff -Nru pcs-0.9.155+dfsg/pcs/cli/cluster/test/test_command.py pcs-0.9.159/pcs/cli/cluster/test/test_command.py --- pcs-0.9.155+dfsg/pcs/cli/cluster/test/test_command.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/cli/cluster/test/test_command.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,24 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from pcs.test.tools.pcs_unittest import TestCase +from pcs.cli.cluster import command + +class ParseNodeAddRemote(TestCase): + def test_deal_with_explicit_name(self): + self.assertEqual( + command._node_add_remote_separate_host_and_name( + ["host", "name", "a=b"] + ), + ("host", "name", ["a=b"]) + ) + + def test_deal_with_implicit_name(self): + self.assertEqual( + command._node_add_remote_separate_host_and_name(["host", "a=b"]), + ("host", "host", ["a=b"]) + ) diff -Nru pcs-0.9.155+dfsg/pcs/cli/common/console_report.py pcs-0.9.159/pcs/cli/common/console_report.py --- pcs-0.9.155+dfsg/pcs/cli/common/console_report.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/cli/common/console_report.py 2017-06-30 15:33:01.000000000 +0000 @@ -5,11 +5,13 @@ unicode_literals, ) -import sys +from collections import Iterable from functools import partial +import sys from pcs.common import report_codes as codes -from collections import Iterable +from pcs.common.fencing_topology import TARGET_TYPE_ATTRIBUTE +from pcs.common.tools import is_string INSTANCE_SUFFIX = "@{0}" NODE_PREFIX = "{0}: " @@ -35,11 +37,16 @@ for line in line_list ] -def format_optional(value, template): - return "" if not value else template.format(value) +def format_optional(value, template, empty_case=""): + return 
empty_case if not value else template.format(value) + +def format_fencing_level_target(target_type, target_value): + if target_type == TARGET_TYPE_ATTRIBUTE: + return "{0}={1}".format(target_value[0], target_value[1]) + return target_value def service_operation_started(operation, info): - return "{operation}{service}{instance_suffix}...".format( + return "{operation} {service}{instance_suffix}...".format( operation=operation, instance_suffix=format_optional(info["instance"], INSTANCE_SUFFIX), **info @@ -56,7 +63,7 @@ **info ) -def service_opration_success(operation, info): +def service_operation_success(operation, info): return "{node_prefix}{service}{instance_suffix} {operation}".format( operation=operation, instance_suffix=format_optional(info["instance"], INSTANCE_SUFFIX), @@ -66,7 +73,7 @@ def service_operation_skipped(operation, info): return ( - "{node_prefix}not {operation}{service}{instance_suffix} - {reason}" + "{node_prefix}not {operation} {service}{instance_suffix}: {reason}" ).format( operation=operation, instance_suffix=format_optional(info["instance"], INSTANCE_SUFFIX), @@ -74,10 +81,84 @@ **info ) +def id_belongs_to_unexpected_type(info): + translate_expected = { + "acl_group": "an acl group", + "acl_target": "an acl user", + "group": "a group", + } + return "'{id}' is not {expected_type}".format( + id=info["id"], + expected_type="/".join([ + translate_expected.get(tag, "{0}".format(tag)) + for tag in info["expected_types"] + ]), + ) + +def id_not_found(info): + desc = format_optional(info["id_description"], "{0} ") + if not info["context_type"] or not info["context_id"]: + return "{desc}'{id}' does not exist".format(desc=desc, id=info["id"]) + + return ( + "there is no {desc}'{id}' in the {context_type} '{context_id}'".format( + desc=desc, + id=info["id"], + context_type=info["context_type"], + context_id=info["context_id"], + ) + ) + +def resource_running_on_nodes(info): + role_label_map = { + "Started": "running", + } + state_info = {} + for state, node_list in info["roles_with_nodes"].items(): + state_info.setdefault( + role_label_map.get(state, state.lower()), + [] + ).extend(node_list) + + return "resource '{resource_id}' is {detail_list}".format( + resource_id=info["resource_id"], + detail_list="; ".join(sorted([ + "{run_type} on node{s} {node_list}".format( + run_type=run_type, + s="s" if len(node_list) > 1 else "", + node_list=joined_list(node_list) + ) + for run_type, node_list in state_info.items() + ])) + ) + +def build_node_description(node_types): + if not node_types: + return "Node" + + label = "{0} node".format + + if is_string(node_types): + return label(node_types) -#Each value (callable taking report_item.info) returns string template. -#Optionaly the template can contain placehodler {force} for next processing. -#Placeholder {force} will be appended if is necessary and if is not presset + if len(node_types) == 1: + return label(node_types[0]) + + return "nor " + " or ".join([label(ntype) for ntype in node_types]) + +def joined_list(item_list, optional_transformations=None): + if not optional_transformations: + optional_transformations={} + + return ", ".join(sorted([ + "'{0}'".format(optional_transformations.get(item, item)) + for item in item_list + ])) + +#Each value (a callable taking report_item.info) returns a message. +#Force text will be appended if necessary. +#If it is necessary to put the force text inside the string then the callable +#must take the force_text parameter. 
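The comment above states the contract for CODE_TO_MESSAGE_BUILDER_MAP entries: a plain string, or a callable taking report_item.info, optionally accepting force_text when the force hint has to land inside the message rather than be appended. A dispatcher honouring that contract might look like this (illustrative sketch only, not the actual pcs implementation):

    import inspect

    def build_message(entry, info, force_text=""):
        # Plain string entries just get the force hint appended.
        if not callable(entry):
            return entry + force_text
        # Builders that declare force_text place the hint themselves.
        if "force_text" in inspect.signature(entry).parameters:
            return entry(info, force_text=force_text)
        return entry(info) + force_text
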
CODE_TO_MESSAGE_BUILDER_MAP = { codes.COMMON_ERROR: lambda info: info["text"], @@ -87,35 +168,121 @@ codes.EMPTY_RESOURCE_SET_LIST: "Resource set list is empty", codes.REQUIRED_OPTION_IS_MISSING: lambda info: - "required option '{option_name}' is missing" - .format(**info) + "required {desc}option{s} {option_names_list} {are} missing" + .format( + desc=format_optional(info["option_type"], "{0} "), + option_names_list=joined_list(info["option_names"]), + s=("s" if len(info["option_names"]) > 1 else ""), + are=( + "are" if len(info["option_names"]) > 1 + else "is" + ) + ) , - codes.INVALID_OPTION: lambda info: - "invalid {desc}option '{option_name}', allowed options are: {allowed_values}" + codes.PREREQUISITE_OPTION_IS_MISSING: lambda info: + ( + "If {opt_desc}option '{option_name}' is specified, " + "{pre_desc}option '{prerequisite_name}' must be specified as well" + ).format( + opt_desc=format_optional(info.get("option_type"), "{0} "), + pre_desc=format_optional(info.get("prerequisite_type"), "{0} "), + **info + ) + , + + codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING: lambda info: + "{desc}option {option_names_list} has to be specified" .format( + desc=format_optional(info.get("option_type"), "{0} "), + option_names_list=" or ".join(sorted([ + "'{0}'".format(name) + for name in info["option_names"] + ])), + ) + , + + codes.INVALID_OPTION: lambda info: + ( + "invalid {desc}option{s} {option_names_list}," + + + ( + " allowed option{are} {allowed_values}" if info["allowed"] + else " there are no options allowed" + ) + ).format( desc=format_optional(info["option_type"], "{0} "), - allowed_values=", ".join(info["allowed"]), + allowed_values=", ".join(sorted(info["allowed"])), + option_names_list=joined_list(info["option_names"]), + s=("s:" if len(info["option_names"]) > 1 else ""), + are=("s are:" if len(info["allowed"]) > 1 else " is"), **info ) , codes.INVALID_OPTION_VALUE: lambda info: + #value on key "allowed_values" is overloaded: + # * it can be a list - then it express possible option values + # * it can be a string - then it is verbal description of value "'{option_value}' is not a valid {option_name} value, use {hint}" .format( hint=( - ", ".join(info["allowed_values"]) - if ( + ", ".join(sorted(info["allowed_values"])) if ( isinstance(info["allowed_values"], Iterable) and - not isinstance(info["allowed_values"], "".__class__) - ) - else info["allowed_values"] + not is_string(info["allowed_values"]) + ) else info["allowed_values"] + ), + **info + ) + , + + codes.INVALID_OPTION_TYPE: lambda info: + #value on key "allowed_types" is overloaded: + # * it can be a list - then it express possible option types + # * it can be a string - then it is verbal description of the type + "specified {option_name} is not valid, use {hint}" + .format( + hint=( + ", ".join(sorted(info["allowed_types"])) if ( + isinstance(info["allowed_types"], Iterable) + and + not is_string(info["allowed_types"]) + ) else info["allowed_types"] + ), + **info + ) + , + + codes.DEPRECATED_OPTION: lambda info: + ( + "{desc}option '{option_name}' is deprecated and should not be " + "used, use {hint} instead" + ).format( + desc=format_optional(info["option_type"], "{0} "), + hint=( + ", ".join(sorted(info["replaced_by"])) if ( + isinstance(info["replaced_by"], Iterable) + and + not is_string(info["replaced_by"]) + ) else info["replaced_by"] ), **info ) , + codes.MUTUALLY_EXCLUSIVE_OPTIONS: lambda info: + # "{desc}options {option_names} are muttually exclusive".format( + "Only one of {desc}options {option_names} can be 
used".format( + desc=format_optional(info["option_type"], "{0} "), + option_names=( + joined_list(sorted(info["option_names"])[:-1]) + + + " and '{0}'".format(sorted(info["option_names"])[-1]) + ) + ) + , + codes.EMPTY_ID: lambda info: "{id_description} cannot be empty" .format(**info) @@ -147,11 +314,17 @@ codes.RUN_EXTERNAL_PROCESS_STARTED: lambda info: - "Running: {command}\n{stdin_part}".format( + "Running: {command}\nEnvironment:{env_part}\n{stdin_part}".format( stdin_part=format_optional( info["stdin"], "--Debug Input Start--\n{0}\n--Debug Input End--\n" ), + env_part=( + "" if not info["environment"] else "\n" + "\n".join([ + " {0}={1}".format(key, val) + for key, val in sorted(info["environment"].items()) + ]) + ), **info ) , @@ -174,6 +347,15 @@ .format(**info) , + codes.NODE_COMMUNICATION_DEBUG_INFO: lambda info: + ( + "Communication debug info for calling: {target}\n" + "--Debug Communication Info Start--\n" + "{data}\n" + "--Debug Communication Info End--\n" + ).format(**info) + , + codes.NODE_COMMUNICATION_STARTED: lambda info: "Sending HTTP Request to: {target}\n{data_part}".format( data_part=format_optional( @@ -232,6 +414,33 @@ .format(**info) , + codes.NODE_COMMUNICATION_ERROR_TIMED_OUT: lambda info: + "{node}: Connection timeout ({reason})" + .format(**info) + , + + codes.NODE_COMMUNICATION_PROXY_IS_SET: + "Proxy is set in environment variables, try disabling it" + , + + codes.CANNOT_ADD_NODE_IS_IN_CLUSTER: lambda info: + "cannot add the node '{node}' because it is in a cluster" + .format(**info) + , + + codes.CANNOT_ADD_NODE_IS_RUNNING_SERVICE: lambda info: + ( + "cannot add the node '{node}' because it is running service" + " '{service}'{guess}" + ).format( + guess=( + "" if info["service"] not in ["pacemaker", "pacemaker_remote"] + else " (is not the node already in a cluster?)" + ), + **info + ) + , + codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED: "Sending updated corosync.conf to nodes..." 
, @@ -403,19 +612,18 @@ .format(**info) , - codes.ID_NOT_FOUND: lambda info: - "{desc}'{id}' does not exist" + codes.ID_BELONGS_TO_UNEXPECTED_TYPE: id_belongs_to_unexpected_type, + + codes.ID_NOT_FOUND: id_not_found, + + codes.STONITH_RESOURCES_DO_NOT_EXIST: lambda info: + "Stonith resource(s) '{stonith_id_list}' do not exist" .format( - desc=format_optional(info["id_description"], "{0} "), + stonith_id_list="', '".join(info["stonith_ids"]), **info ) , - codes.RESOURCE_DOES_NOT_EXIST: lambda info: - "Resource '{resource_id}' does not exist" - .format(**info) - , - codes.CIB_ACL_ROLE_IS_ALREADY_ASSIGNED_TO_TARGET: lambda info: "Role '{role_id}' is already asigned to '{target_id}'" .format(**info) @@ -431,6 +639,37 @@ .format(**info) , + codes.CIB_FENCING_LEVEL_ALREADY_EXISTS: lambda info: + ( + "Fencing level for '{target}' at level '{level}' " + "with device(s) '{device_list}' already exists" + ).format( + device_list=",".join(info["devices"]), + target=format_fencing_level_target( + info["target_type"], info["target_value"] + ), + **info + ) + , + + codes.CIB_FENCING_LEVEL_DOES_NOT_EXIST: lambda info: + "Fencing level {part_target}{part_level}{part_devices}does not exist" + .format( + part_target=( + "for '{0}' ".format(format_fencing_level_target( + info["target_type"], info["target_value"] + )) + if info["target_type"] and info["target_value"] + else "" + ), + part_level=format_optional(info["level"], "at level '{0}' "), + part_devices=format_optional( + ",".join(info["devices"]) if info["devices"] else "", + "with device(s) '{0}' " + ) + ) + , + codes.CIB_LOAD_ERROR: "unable to get cib", codes.CIB_LOAD_ERROR_SCOPE_MISSING: lambda info: @@ -452,6 +691,11 @@ .format(**info) , + codes.CIB_SAVE_TMP_ERROR: lambda info: + "Unable to save CIB to a temporary file: {reason}" + .format(**info) + , + codes.CRM_MON_ERROR: "error running crm_mon, is pacemaker running?" 
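The builders above lean heavily on the format_optional helper. Judging by the FormatOptional tests later in this patch, it formats a truthy value into a template and otherwise falls back to a default; a sketch under that assumption (the real helper lives in pcs/cli/common/console_report.py):

    def format_optional(value, template, default=""):
        # a falsy value yields the default, anything else is formatted in
        return default if not value else template.format(value)

    assert format_optional("A", "{0}: ") == "A: "
    assert format_optional("", "{0}: ") == ""
    assert format_optional("", "{0}: ", "DEFAULT") == "DEFAULT"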
, @@ -460,20 +704,31 @@ "cannot load cluster status, xml does not conform to the schema" , - codes.RESOURCE_WAIT_NOT_SUPPORTED: + codes.WAIT_FOR_IDLE_NOT_SUPPORTED: "crm_resource does not support --wait, please upgrade pacemaker" , - codes.RESOURCE_WAIT_TIMED_OUT: lambda info: + codes.WAIT_FOR_IDLE_NOT_LIVE_CLUSTER: + "Cannot use '-f' together with '--wait'" + , + + codes.WAIT_FOR_IDLE_TIMED_OUT: lambda info: "waiting timeout\n\n{reason}" .format(**info) , - codes.RESOURCE_WAIT_ERROR: lambda info: + codes.WAIT_FOR_IDLE_ERROR: lambda info: "{reason}" .format(**info) , + codes.RESOURCE_BUNDLE_ALREADY_CONTAINS_A_RESOURCE: lambda info: + ( + "bundle '{bundle_id}' already contains resource '{resource_id}'" + ", a bundle may contain at most one resource" + ).format(**info) + , + codes.RESOURCE_CLEANUP_ERROR: lambda info: ( "Unable to cleanup resource: {resource}\n{reason}" @@ -491,11 +746,84 @@ ).format(**info) , + codes.RESOURCE_OPERATION_INTERVAL_DUPLICATION: lambda info: ( + "multiple specification of the same operation with the same interval:\n" + +"\n".join([ + "{0} with intervals {1}".format(name, ", ".join(intervals)) + for name, intervals_list in info["duplications"].items() + for intervals in intervals_list + ]) + ), + + codes.RESOURCE_OPERATION_INTERVAL_ADAPTED: lambda info: + ( + "changing a {operation_name} operation interval" + " from {original_interval}" + " to {adapted_interval} to make the operation unique" + ).format(**info) + , + + codes.RESOURCE_RUNNING_ON_NODES: resource_running_on_nodes, + + codes.RESOURCE_DOES_NOT_RUN: lambda info: + "resource '{resource_id}' is not running on any node" + .format(**info) + , + + codes.RESOURCE_IS_UNMANAGED: lambda info: + "'{resource_id}' is unmanaged" + .format(**info) + , + + codes.RESOURCE_IS_GUEST_NODE_ALREADY: lambda info: + "the resource '{resource_id}' is already a guest node" + .format(**info) + , + + codes.RESOURCE_MANAGED_NO_MONITOR_ENABLED: lambda info: + ( + "Resource '{resource_id}' has no enabled monitor operations." + " Re-run with '--monitor' to enable them." 
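The RESOURCE_OPERATION_INTERVAL_DUPLICATION builder that follows flattens a dict mapping operation names to lists of interval groups into one multi-line message. A standalone rendering of the same logic (items() is sorted here only to make the output deterministic), fed with the data its test later in this patch uses:

    def render_duplications(duplications):
        return (
            "multiple specification of the same operation with the same"
            " interval:\n"
            + "\n".join([
                "{0} with intervals {1}".format(name, ", ".join(intervals))
                for name, intervals_list in sorted(duplications.items())
                for intervals in intervals_list
            ])
        )

    print(render_duplications(
        {"monitor": [["3600s", "60m", "1h"], ["60s", "1m"]]}
    ))
    # multiple specification of the same operation with the same interval:
    # monitor with intervals 3600s, 60m, 1h
    # monitor with intervals 60s, 1m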
+ ) + .format(**info) + , + codes.NODE_NOT_FOUND: lambda info: - "node '{node}' does not appear to exist in configuration" + "{desc} '{node}' does not appear to exist in configuration".format( + desc=build_node_description(info["searched_types"]), + node=info["node"] + ) + , + + codes.NODE_REMOVE_IN_PACEMAKER_FAILED: lambda info: + "unable to remove node '{node_name}' from pacemaker{reason_part}" + .format( + reason_part=format_optional(info["reason"], ": {0}"), + **info + + ) + , + + codes.NODE_TO_CLEAR_IS_STILL_IN_CLUSTER: lambda info: + ( + "node '{node}' seems to be still in the cluster" + "; this command should be used only with nodes that have been" + " removed from the cluster" + ) .format(**info) , + codes.MULTIPLE_RESULTS_FOUND: lambda info: + "multiple {result_type} {search_description} found: {what_found}" + .format( + what_found=joined_list(info["result_identifier_list"]), + search_description="" if not info["search_description"] + else "for '{0}'".format(info["search_description"]) + , + result_type=info["result_type"] + ) + , + codes.PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND: lambda info: "unable to get local node name from pacemaker: {reason}" .format(**info) @@ -526,18 +854,18 @@ codes.SERVICE_START_STARTED: partial(service_operation_started, "Starting"), codes.SERVICE_START_ERROR: partial(service_operation_error, "start"), - codes.SERVICE_START_SUCCESS: partial(service_opration_success, "started"), + codes.SERVICE_START_SUCCESS: partial(service_operation_success, "started"), codes.SERVICE_START_SKIPPED: partial(service_operation_skipped, "starting"), codes.SERVICE_STOP_STARTED: partial(service_operation_started, "Stopping"), codes.SERVICE_STOP_ERROR: partial(service_operation_error, "stop"), - codes.SERVICE_STOP_SUCCESS: partial(service_opration_success, "stopped"), + codes.SERVICE_STOP_SUCCESS: partial(service_operation_success, "stopped"), codes.SERVICE_ENABLE_STARTED: partial( service_operation_started, "Enabling" ), codes.SERVICE_ENABLE_ERROR: partial(service_operation_error, "enable"), - codes.SERVICE_ENABLE_SUCCESS: partial(service_opration_success, "enabled"), + codes.SERVICE_ENABLE_SUCCESS: partial(service_operation_success, "enabled"), codes.SERVICE_ENABLE_SKIPPED: partial( service_operation_skipped, "enabling" ), @@ -546,7 +874,7 @@ partial(service_operation_started, "Disabling") , codes.SERVICE_DISABLE_ERROR: partial(service_operation_error, "disable"), - codes.SERVICE_DISABLE_SUCCESS: partial(service_opration_success, "disabled"), + codes.SERVICE_DISABLE_SUCCESS: partial(service_operation_success, "disabled"), codes.SERVICE_KILL_ERROR: lambda info: "Unable to kill {service_list}: {reason}" @@ -574,13 +902,24 @@ codes.INVALID_RESOURCE_AGENT_NAME: lambda info: ( "Invalid resource agent name '{name}'." - " Use standard:provider:type or standard:type." + " Use standard:provider:type when standard is 'ocf' or" + " standard:type otherwise." " List of standards and providers can be obtained by using commands" " 'pcs resource standards' and 'pcs resource providers'" ) .format(**info) , + codes.INVALID_STONITH_AGENT_NAME: lambda info: + ( + "Invalid stonith agent name '{name}'." + " List of agents can be obtained by using command" + " 'pcs stonith list'. Do not use the 'stonith:' prefix. Agent name" + " cannot contain the ':' character." 
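The SERVICE_* entries above bind a verb into one shared builder via functools.partial instead of writing a lambda per code. A sketch of the pattern; the builder body here is an assumption modelled on the ServiceStartSuccess tests later in this patch, not the actual pcs implementation:

    from functools import partial

    def service_operation_success(action, info):
        # optional "node: " prefix and "@instance" suffix, per the tests
        node_prefix = "{0}: ".format(info["node"]) if info["node"] else ""
        instance = "@{0}".format(info["instance"]) if info["instance"] else ""
        return "{0}{1}{2} {3}".format(
            node_prefix, info["service"], instance, action
        )

    service_start_success = partial(service_operation_success, "started")
    assert service_start_success(
        {"service": "a_service", "node": "a_node", "instance": None}
    ) == "a_node: a_service started"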
+ ) + .format(**info) + , + codes.AGENT_NAME_GUESS_FOUND_MORE_THAN_ONE: lambda info: ( "Multiple agents match '{agent}'" @@ -632,11 +971,153 @@ codes.SBD_DISABLING_STARTED: "Disabling SBD service...", + codes.SBD_DEVICE_INITIALIZATION_STARTED: lambda info: + "Initializing device(s) {devices}..." + .format(devices=", ".join(info["device_list"])) + , + + codes.SBD_DEVICE_INITIALIZATION_SUCCESS: + "Device(s) initialized successfuly", + + codes.SBD_DEVICE_INITIALIZATION_ERROR: lambda info: + "Initialization of device(s) failed: {reason}" + .format(**info) + , + + codes.SBD_DEVICE_LIST_ERROR: lambda info: + "Unable to get list of messages from device '{device}': {reason}" + .format(**info) + , + + codes.SBD_DEVICE_MESSAGE_ERROR: lambda info: + "Unable to set message '{message}' for node '{node}' on device " + "'{device}'" + .format(**info) + , + + codes.SBD_DEVICE_DUMP_ERROR: lambda info: + "Unable to get SBD headers from device '{device}': {reason}" + .format(**info) + , + + codes.FILES_DISTRIBUTION_STARTED: lambda info: + "Sending {description}{where}".format( + where=( + "" if not info["node_list"] + else " to " + joined_list(info["node_list"]) + ), + description=info["description"] if info["description"] + else joined_list(info["file_list"]) + ) + , + + codes.FILE_DISTRIBUTION_SUCCESS: lambda info: + "{node}: successful distribution of the file '{file_description}'" + .format( + **info + ) + , + + + codes.FILE_DISTRIBUTION_ERROR: lambda info: + "{node}: unable to distribute file '{file_description}': {reason}" + .format( + **info + ) + , + + codes.FILES_REMOVE_FROM_NODE_STARTED: lambda info: + "Requesting remove {description}{where}".format( + where=( + "" if not info["node_list"] + else " from " + joined_list(info["node_list"]) + ), + description=info["description"] if info["description"] + else joined_list(info["file_list"]) + ) + , + + codes.FILE_REMOVE_FROM_NODE_SUCCESS: lambda info: + "{node}: successful removal of the file '{file_description}'" + .format( + **info + ) + , + + + codes.FILE_REMOVE_FROM_NODE_ERROR: lambda info: + "{node}: unable to remove file '{file_description}': {reason}" + .format( + **info + ) + , + + codes.SERVICE_COMMANDS_ON_NODES_STARTED: lambda info: + "Requesting {description}{where}".format( + where=( + "" if not info["node_list"] + else " on " + joined_list(info["node_list"]) + ), + description=info["description"] if info["description"] + else joined_list(info["action_list"]) + ) + , + + codes.SERVICE_COMMAND_ON_NODE_SUCCESS: lambda info: + "{node}: successful run of '{service_command_description}'" + .format( + **info + ) + , + + codes.SERVICE_COMMAND_ON_NODE_ERROR: lambda info: + ( + "{node}: service command failed:" + " {service_command_description}: {reason}" + ) + .format( + **info + ) + , + + codes.SBD_DEVICE_PATH_NOT_ABSOLUTE: lambda info: + "Device path '{device}'{on_node} is not absolute" + .format( + on_node=format_optional( + info["node"], " on node '{0}'".format(info["node"]) + ), + **info + ) + , + + codes.SBD_DEVICE_DOES_NOT_EXIST: lambda info: + "{node}: device '{device}' not found" + .format(**info) + , + + codes.SBD_DEVICE_IS_NOT_BLOCK_DEVICE: lambda info: + "{node}: device '{device}' is not a block device" + .format(**info) + , + codes.INVALID_RESPONSE_FORMAT: lambda info: "{node}: Invalid format of response" .format(**info) , + codes.SBD_NO_DEVICE_FOR_NODE: lambda info: + "No device defined for node '{node}'" + .format(**info) + , + + codes.SBD_TOO_MANY_DEVICES_FOR_NODE: lambda info: + ( + "More than {max_devices} devices defined for node 
'{node}' " + "(devices: {devices})" + ) + .format(devices=", ".join(info["device_list"]), **info) + , + codes.SBD_NOT_INSTALLED: lambda info: "SBD is not installed on node '{node}'" .format(**info) @@ -674,14 +1155,8 @@ .format(**info) , - codes.CIB_ALERT_NOT_FOUND: lambda info: - "Alert '{alert}' not found." - .format(**info) - , - - codes.CIB_UPGRADE_SUCCESSFUL: lambda info: + codes.CIB_UPGRADE_SUCCESSFUL: "CIB has been upgraded to the latest schema version." - .format(**info) , codes.CIB_UPGRADE_FAILED: lambda info: @@ -701,7 +1176,7 @@ codes.FILE_ALREADY_EXISTS: lambda info: "{node_prefix}{role_prefix}file {file_path} already exists" .format( - node_prefix=format_optional(info["node"], NODE_PREFIX), + node_prefix=format_optional(info["node"], NODE_PREFIX), role_prefix=format_optional(info["file_role"], "{0} "), **info ) @@ -736,7 +1211,49 @@ codes.LIVE_ENVIRONMENT_REQUIRED: lambda info: "This command does not support {forbidden_options}" - .format(forbidden_options=", ".join(info["forbidden_options"])) + .format( + forbidden_options=joined_list(info["forbidden_options"], { + "BOOTH_CONF": "--booth-conf", + "BOOTH_KEY": "--booth-key", + "CIB": "-f", + "COROSYNC_CONF": "--corosync_conf", + }) + ) + , + + codes.LIVE_ENVIRONMENT_REQUIRED_FOR_LOCAL_NODE: + "Node(s) must be specified if -f is used" + , + + codes.NOLIVE_SKIP_FILES_DISTRIBUTION: lambda info: + ( + "the distribution of {files} to {nodes} was skipped because command" + " does not run on live cluster (e.g. -f was used)." + " You will have to do it manually." + ).format( + files=joined_list(info["files_description"]), + nodes=joined_list(info["nodes"]), + ) + , + codes.NOLIVE_SKIP_FILES_REMOVE: lambda info: + ( + "{files} remove from {nodes} was skipped because command" + " does not run on live cluster (e.g. -f was used)." + " You will have to do it manually." + ).format( + files=joined_list(info["files_description"]), + nodes=joined_list(info["nodes"]), + ) + , + codes.NOLIVE_SKIP_SERVICE_COMMAND_ON_NODES: lambda info: + ( + "running '{command}' on {nodes} was skipped" + " because command does not run on live cluster (e.g. -f was" + " used). You will have to run it manually." 
+ ).format( + command="{0} {1}".format(info["service"], info["command"]), + nodes=joined_list(info["nodes"]), + ) , codes.COROSYNC_QUORUM_CANNOT_DISABLE_ATB_DUE_TO_SBD: lambda info: @@ -758,4 +1275,22 @@ "Unable to read {path}: {reason}" .format(**info) , + codes.USE_COMMAND_NODE_ADD_REMOTE: lambda info: + ( + "this command is not sufficient for creating a remote connection," + " use 'pcs cluster node add-remote'" + ) + , + codes.USE_COMMAND_NODE_ADD_GUEST: lambda info: + ( + "this command is not sufficient for creating a guest node, use" + " 'pcs cluster node add-guest'" + ) + , + codes.USE_COMMAND_NODE_REMOVE_GUEST: lambda info: + ( + "this command is not sufficient for removing a guest node, use" + " 'pcs cluster node remove-guest'" + ) + , } diff -Nru pcs-0.9.155+dfsg/pcs/cli/common/env_cli.py pcs-0.9.159/pcs/cli/common/env_cli.py --- pcs-0.9.155+dfsg/pcs/cli/common/env_cli.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/cli/common/env_cli.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,21 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +class Env(object): + #pylint: disable=too-many-instance-attributes + def __init__(self): + self.cib_data = None + self.cib_upgraded = False + self.user = None + self.groups = None + self.corosync_conf_data = None + self.booth = None + self.pacemaker = None + self.auth_tokens_getter = None + self.debug = False + self.cluster_conf_data = None + self.request_timeout = None diff -Nru pcs-0.9.155+dfsg/pcs/cli/common/env_file.py pcs-0.9.159/pcs/cli/common/env_file.py --- pcs-0.9.155+dfsg/pcs/cli/common/env_file.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/cli/common/env_file.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,75 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +import os.path +from collections import namedtuple + +from pcs.cli.common import console_report +from pcs.common import report_codes + + +def report_missing(file_role, file_path): + console_report.error( + "{0} '{1}' does not exist".format(file_role, file_path) + ) + +def is_missing_report(report, file_role_code): + return ( + report.code == report_codes.FILE_DOES_NOT_EXIST + and + report.info["file_role"] == file_role_code + ) + +def process_no_existing_file_expectation(file_role, env_file, file_path): + if( + env_file["no_existing_file_expected"] + and + os.path.exists(file_path) + ): + msg = "{0} {1} already exists".format(file_role, file_path) + if not env_file["can_overwrite_existing_file"]: + raise console_report.error( + "{0}, use --force to override".format(msg) + ) + console_report.warn(msg) + +def write(env_file, file_path): + try: + f = open(file_path, "wb" if env_file.get("is_binary", False) else "w") + f.write(env_file["content"]) + f.close() + except EnvironmentError as e: + raise console_report.error( + "Unable to write {0}: {1}".format(file_path, e.strerror) + ) + +def read(path, is_binary=False): + try: + mode = "rb" if is_binary else "r" + return { + "content": open(path, mode).read() if os.path.isfile(path) else None + } + except EnvironmentError as e: + raise console_report.error( + "Unable to read {0}: {1}".format(path, e.strerror) + ) + +MissingFileCandidateInfo = namedtuple( + "MissingFileCandidateInfo", + "code desc path" +) + +def evaluate_for_missing_files(exception, file_info_list): + """ + list of MissingFileCandidateInfo file_info_list contains the info for files + that can be missing + """ + for report in exception.args: + 
for file_info in file_info_list: + if is_missing_report(report, file_info.code): + report_missing(file_info.desc, file_info.path) + exception.sign_processed(report) diff -Nru pcs-0.9.155+dfsg/pcs/cli/common/env.py pcs-0.9.159/pcs/cli/common/env.py --- pcs-0.9.155+dfsg/pcs/cli/common/env.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/cli/common/env.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,19 +0,0 @@ -from __future__ import ( - absolute_import, - division, - print_function, - unicode_literals, -) - -class Env(object): - #pylint: disable=too-many-instance-attributes - def __init__(self): - self.cib_data = None - self.cib_upgraded = False - self.user = None - self.groups = None - self.corosync_conf_data = None - self.booth = None - self.auth_tokens_getter = None - self.debug = False - self.cluster_conf_data = None diff -Nru pcs-0.9.155+dfsg/pcs/cli/common/errors.py pcs-0.9.159/pcs/cli/common/errors.py --- pcs-0.9.155+dfsg/pcs/cli/common/errors.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/cli/common/errors.py 2017-06-30 15:33:01.000000000 +0000 @@ -5,6 +5,12 @@ unicode_literals, ) + +ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE = ( + "Cannot specify both --all and a list of nodes." +) + + class CmdLineInputError(Exception): """ Exception express that user entered incorrect commad in command line. diff -Nru pcs-0.9.155+dfsg/pcs/cli/common/lib_wrapper.py pcs-0.9.159/pcs/cli/common/lib_wrapper.py --- pcs-0.9.155+dfsg/pcs/cli/common/lib_wrapper.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/cli/common/lib_wrapper.py 2017-06-30 15:33:01.000000000 +0000 @@ -18,9 +18,14 @@ acl, alert, booth, + cluster, + fencing_topology, + node, qdevice, quorum, resource_agent, + resource, + stonith, sbd, stonith_agent, ) @@ -49,6 +54,7 @@ booth=cli_env.booth, auth_tokens_getter=cli_env.auth_tokens_getter, cluster_conf_data=cli_env.cluster_conf_data, + request_timeout=cli_env.request_timeout, ) def lib_env_to_cli_env(lib_env, cli_env): @@ -176,6 +182,22 @@ } ) + if name == "cluster": + return bind_all( + env, + middleware.build( + middleware_factory.cib, + middleware_factory.corosync_conf_existing, + ), + { + "node_add_remote": cluster.node_add_remote, + "node_add_guest": cluster.node_add_guest, + "node_remove_remote": cluster.node_remove_remote, + "node_remove_guest": cluster.node_remove_guest, + "node_clear": cluster.node_clear, + } + ) + if name == 'constraint_colocation': return bind_all( env, @@ -208,6 +230,37 @@ } ) + if name == "fencing_topology": + return bind_all( + env, + middleware.build(middleware_factory.cib), + { + "add_level": fencing_topology.add_level, + "get_config": fencing_topology.get_config, + "remove_all_levels": fencing_topology.remove_all_levels, + "remove_levels_by_params": + fencing_topology.remove_levels_by_params, + "verify": fencing_topology.verify, + } + ) + + if name == "node": + return bind_all( + env, + middleware.build(middleware_factory.cib), + { + "maintenance_unmaintenance_all": + node.maintenance_unmaintenance_all, + "maintenance_unmaintenance_list": + node.maintenance_unmaintenance_list, + "maintenance_unmaintenance_local": + node.maintenance_unmaintenance_local, + "standby_unstandby_all": node.standby_unstandby_all, + "standby_unstandby_list": node.standby_unstandby_list, + "standby_unstandby_local": node.standby_unstandby_local, + } + ) + if name == "qdevice": return bind_all( env, @@ -261,6 +314,41 @@ } ) + if name == "resource": + return bind_all( + env, + middleware.build( + middleware_factory.cib, + 
middleware_factory.corosync_conf_existing, + ), + { + "bundle_create": resource.bundle_create, + "bundle_update": resource.bundle_update, + "create": resource.create, + "create_as_master": resource.create_as_master, + "create_as_clone": resource.create_as_clone, + "create_in_group": resource.create_in_group, + "create_into_bundle": resource.create_into_bundle, + "disable": resource.disable, + "enable": resource.enable, + "manage": resource.manage, + "unmanage": resource.unmanage, + } + ) + if name == "stonith": + return bind_all( + env, + middleware.build( + middleware_factory.cib, + middleware_factory.corosync_conf_existing, + ), + { + "create": stonith.create, + "create_in_group": stonith.create_in_group, + } + ) + + if name == "sbd": return bind_all( env, @@ -271,6 +359,9 @@ "get_cluster_sbd_status": sbd.get_cluster_sbd_status, "get_cluster_sbd_config": sbd.get_cluster_sbd_config, "get_local_sbd_config": sbd.get_local_sbd_config, + "initialize_block_devices": sbd.initialize_block_devices, + "get_local_devices_info": sbd.get_local_devices_info, + "set_message": sbd.set_message, } ) diff -Nru pcs-0.9.155+dfsg/pcs/cli/common/parse_args.py pcs-0.9.159/pcs/cli/common/parse_args.py --- pcs-0.9.155+dfsg/pcs/cli/common/parse_args.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/cli/common/parse_args.py 2017-06-30 15:33:01.000000000 +0000 @@ -7,48 +7,300 @@ from pcs.cli.common.errors import CmdLineInputError + +ARG_TYPE_DELIMITER = "%" + +# h = help, f = file, +# p = password (cluster auth), u = user (cluster auth), +# V = verbose (cluster verify) +PCS_SHORT_OPTIONS = "hf:p:u:V" +PCS_LONG_OPTIONS = [ + "debug", "version", "help", "fullhelp", + "force", "skip-offline", "autocorrect", "interactive", "autodelete", + "all", "full", "groups", "local", "wait", "config", "async", + "start", "enable", "disabled", "off", "request-timeout=", + "pacemaker", "corosync", + "no-default-ops", "defaults", "nodesc", + "clone", "master", "name=", "group=", "node=", + "from=", "to=", "after=", "before=", + "transport=", "rrpmode=", "ipv6", + "addr0=", "bcast0=", "mcast0=", "mcastport0=", "ttl0=", "broadcast0", + "addr1=", "bcast1=", "mcast1=", "mcastport1=", "ttl1=", "broadcast1", + "wait_for_all=", "auto_tie_breaker=", "last_man_standing=", + "last_man_standing_window=", + "token=", "token_coefficient=", "consensus=", "join=", + "miss_count_const=", "fail_recv_const=", + "corosync_conf=", "cluster_conf=", + "booth-conf=", "booth-key=", + "remote", "watchdog=", "device=", "encryption=", + #in pcs status - do not display resorce status on inactive node + "hide-inactive", + # pcs resource (un)manage - enable or disable monitor operations + "monitor", +] + def split_list(arg_list, separator): """return list of list of arg_list using separator as delimiter""" separator_indexes = [i for i, x in enumerate(arg_list) if x == separator] bounds = zip([0]+[i+1 for i in separator_indexes], separator_indexes+[None]) return [arg_list[i:j] for i, j in bounds] +def split_option(arg): + """ + Get (key, value) from a key=value commandline argument. + + Split the argument by the first = and return resulting parts. Raise + CmdLineInputError if the argument cannot be splitted. 
+
+    string arg -- commandline argument
+    """
+    if "=" not in arg:
+        raise CmdLineInputError("missing value of '{0}' option".format(arg))
+    if arg.startswith("="):
+        raise CmdLineInputError("missing key in '{0}' option".format(arg))
+    return arg.split("=", 1)
+
 def prepare_options(cmdline_args):
-    """return dictionary of options from comandline key=value args"""
+    """return dictionary of options from commandline key=value args"""
     options = dict()
     for arg in cmdline_args:
-        if "=" not in arg:
-            raise CmdLineInputError("missing value of '{0}' option".format(arg))
-        if arg.startswith("="):
-            raise CmdLineInputError("missing key in '{0}' option".format(arg))
-
-        name, value = arg.split("=", 1)
-        options[name] = value
+        name, value = split_option(arg)
+        if name not in options:
+            options[name] = value
+        elif options[name] != value:
+            raise CmdLineInputError(
+                "duplicate option '{0}' with different values '{1}' and '{2}'"
+                .format(name, options[name], value)
+            )
     return options
 
 def group_by_keywords(
     arg_list, keyword_set,
-    implicit_first_keyword=None, keyword_repeat_allowed=True,
+    implicit_first_group_key=None, keyword_repeat_allowed=True,
+    group_repeated_keywords=None, only_found_keywords=False
 ):
-    groups = dict([(keyword, []) for keyword in keyword_set])
-    if implicit_first_keyword:
-        groups[implicit_first_keyword] = []
-
-    if not arg_list:
-        return groups
-
-    used_keywords = []
-    if implicit_first_keyword:
-        used_keywords.append(implicit_first_keyword)
-    elif arg_list[0] not in keyword_set:
-        raise CmdLineInputError()
+    """
+    Return a dictionary with keywords as keys and the following arguments as
+    values. For example, when the keywords are "first" and "second", then for
+    arg_list ["first", 1, 2, "second", 3] it returns
+    {"first": [1, 2], "second": [3]}
 
-    for arg in arg_list:
-        if arg in list(groups.keys()):
-            if arg in used_keywords and not keyword_repeat_allowed:
+    list arg_list is commandline arguments containing keywords
+    set keyword_set contains all expected keywords
+    string implicit_first_group_key is the key for capturing arguments that
+        appear before the first keyword. implicit_first_group_key is not
+        a keyword => its occurrence in args is treated as an ordinary argument.
+    bool keyword_repeat_allowed is the flag to turn on/off checking the
+        uniqueness of each keyword in arg_list.
+    list group_repeated_keywords contains keywords for which each occurrence
+        is packed separately. For example, when the keywords are "first" and
+        "second" and group_repeated_keywords is ["first"], then for arg_list
+        ["first", 1, 2, "second", 3, "first", 4] it returns
+        {"first": [[1, 2], [4]], "second": [3]}.
+        Repeating is always allowed for these keywords.
+    bool only_found_keywords is the flag that decides whether keywords which
+        do not appear in arg_list are left out of the result.
+    """
+
+    def get_keywords_for_grouping():
+        if not group_repeated_keywords:
+            return []
+        #implicit_first_group_key is not a keyword: when it is in
+        #group_repeated_keywords but not in keyword_set it is considered
+        #unknown.
+ unknown_keywords = set(group_repeated_keywords) - set(keyword_set) + if unknown_keywords: + #to avoid developer mistake + raise AssertionError( + "Keywords in grouping not in keyword set: {0}" + .format(", ".join(unknown_keywords)) + ) + return group_repeated_keywords + + def get_completed_groups(): + completed_groups = groups.copy() + if not only_found_keywords: + for keyword in keyword_set: + if keyword not in completed_groups: + completed_groups[keyword] = [] + if( + implicit_first_group_key + and + implicit_first_group_key not in completed_groups + ): + completed_groups[implicit_first_group_key] = [] + return completed_groups + + def is_acceptable_keyword_occurence(keyword): + return ( + keyword not in groups.keys() + or + keyword_repeat_allowed + or + keyword in keywords_for_grouping + ) + + def process_keyword(keyword): + if not is_acceptable_keyword_occurence(keyword): + raise CmdLineInputError( + "'{0}' cannot be used more than once".format(keyword) + ) + groups.setdefault(keyword, []) + if keyword in keywords_for_grouping: + groups[keyword].append([]) + + def process_non_keyword(keyword, arg): + place = groups[keyword] + if keyword in keywords_for_grouping: + place = place[-1] + place.append(arg) + + groups = {} + keywords_for_grouping = get_keywords_for_grouping() + + if arg_list: + current_keyword = None + if arg_list[0] not in keyword_set: + if not implicit_first_group_key: raise CmdLineInputError() - used_keywords.append(arg) - else: - groups[used_keywords[-1]].append(arg) + process_keyword(implicit_first_group_key) + current_keyword = implicit_first_group_key + + for arg in arg_list: + if arg in keyword_set: + process_keyword(arg) + current_keyword = arg + else: + process_non_keyword(current_keyword, arg) - return groups + return get_completed_groups() + +def parse_typed_arg(arg, allowed_types, default_type): + """ + Get (type, value) from a typed commandline argument. + + Split the argument by the type separator and return the type and the value. + Raise CmdLineInputError in the argument format or type is not valid. + string arg -- commandline argument + Iterable allowed_types -- list of allowed argument types + string default_type -- type to return if the argument doesn't specify a type + """ + if ARG_TYPE_DELIMITER not in arg: + return default_type, arg + arg_type, arg_value = arg.split(ARG_TYPE_DELIMITER, 1) + if not arg_type: + return default_type, arg_value + if arg_type not in allowed_types: + raise CmdLineInputError( + "'{arg_type}' is not an allowed type for '{arg_full}', use {hint}" + .format( + arg_type=arg_type, + arg_full=arg, + hint=", ".join(sorted(allowed_types)) + ) + ) + return arg_type, arg_value + +def is_num(arg): + return arg.isdigit() or arg.lower() == "infinity" + +def is_negative_num(arg): + return arg.startswith("-") and is_num(arg[1:]) + +def is_short_option_expecting_value(arg): + return ( + len(arg) == 2 + and + arg[0] == "-" + and + "{0}:".format(arg[1]) in PCS_SHORT_OPTIONS + ) + +def is_long_option_expecting_value(arg): + return ( + len(arg) > 2 + and + arg[0:2] == "--" + and + "{0}=".format(arg[2:]) in PCS_LONG_OPTIONS + ) + +def is_option_expecting_value(arg): + return ( + is_short_option_expecting_value(arg) + or + is_long_option_expecting_value(arg) + ) + +def filter_out_non_option_negative_numbers(arg_list): + """ + Return arg_list without non-option negative numbers. + Negative numbers following the option expecting value are kept. + + There is the problematic legacy. 
+ Argumet "--" has special meaning: can be used to signal that no more + options will follow. This would solve the problem with negative numbers in + a standard way: there would be no special approach to negative numbers, + everything would be left in the hands of users. But now it would be + backward incompatible change. + + list arg_list contains command line arguments + """ + args_without_negative_nums = [] + for i, arg in enumerate(arg_list): + prev_arg = arg_list[i-1] if i > 0 else "" + if not is_negative_num(arg) or is_option_expecting_value(prev_arg): + args_without_negative_nums.append(arg) + + return args_without_negative_nums + +def filter_out_options(arg_list): + """ + Return arg_list without options and its negative numbers. + + list arg_list contains command line arguments + """ + args_without_options = [] + for i, arg in enumerate(arg_list): + prev_arg = arg_list[i-1] if i > 0 else "" + if( + not is_option_expecting_value(prev_arg) + and ( + not arg.startswith("-") + or + arg == "-" + or + is_negative_num(arg) + ) + ): + args_without_options.append(arg) + return args_without_options + +def upgrade_args(arg_list): + """ + Return modified copy of arg_list. + This function transform some old syntax to new syntax to keep backward + compatibility. + + list arg_list contains command line arguments + """ + upgraded_args = [] + args_without_options = filter_out_options(arg_list) + for arg in arg_list: + if arg in ["--cloneopt", "--clone"]: + #for every commands - kept as it was previously + upgraded_args.append("clone") + elif arg.startswith("--cloneopt="): + #for every commands - kept as it was previously + upgraded_args.append("clone") + upgraded_args.append(arg.split('=', 1)[1]) + elif( + #only for resource create - currently the only known problematic + #place + arg == "--master" + and + args_without_options[:2] == ["resource", "create"] + ): + upgraded_args.append("master") + else: + upgraded_args.append(arg) + return upgraded_args diff -Nru pcs-0.9.155+dfsg/pcs/cli/common/reports.py pcs-0.9.159/pcs/cli/common/reports.py --- pcs-0.9.155+dfsg/pcs/cli/common/reports.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/cli/common/reports.py 2017-06-30 15:33:01.000000000 +0000 @@ -6,6 +6,7 @@ ) import sys +import inspect from functools import partial from pcs.cli.booth.console_report import ( @@ -36,45 +37,74 @@ if report_item.code not in code_builder_map: return build_default_message_from_report(report_item, force_text) - template = code_builder_map[report_item.code] + message = code_builder_map[report_item.code] #Sometimes report item info is not needed for message building. - #In this case template is string. Otherwise, template is callable. - if callable(template): - try: - template = template(report_item.info) - except(TypeError, KeyError): - return build_default_message_from_report(report_item, force_text) - + #In that case the message is a string. Otherwise the message is a callable. + if not callable(message): + return message + force_text + + try: + # Object functools.partial cannot be used with inspect because it is not + # regular python function. We have to use original function for that. 
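+        # An illustration of the unwrapping below (assuming a partial such as
+        #     p = partial(service_operation_success, "started")
+        # as used by the SERVICE_* map entries): p.func is the wrapped
+        # function, p.args == ("started",), and p.keywords holds any bound
+        # keyword arguments. inspect.getargspec(p.func).args lists all
+        # parameter names, so dropping the first len(p.args) names and every
+        # name bound in p.keywords leaves exactly the parameters a caller
+        # must still supply; "force_text" is then looked up in that remainder.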
+ if isinstance(message, partial): + keywords = message.keywords if message.keywords is not None else {} + args = inspect.getargspec(message.func).args + del args[:len(message.args)] + args = [arg for arg in args if arg not in keywords] + else: + args = inspect.getargspec(message).args + if "force_text" in args: + return message(report_item.info, force_text) + return message(report_item.info) + force_text + except(TypeError, KeyError): + return build_default_message_from_report(report_item, force_text) - #Message can contain {force} placeholder if there is need to have it on - #specific position. Otherwise is appended to the end (if necessary). This - #removes the need to explicitly specify placeholder for each message. - if force_text and "{force}" not in template: - template += "{force}" - return template.format(force=force_text) build_report_message = partial(build_message_from_report, __CODE_BUILDER_MAP) class LibraryReportProcessorToConsole(object): def __init__(self, debug=False): self.debug = debug + self.items = [] + + def append(self, report_item): + self.items.append(report_item) + return self + + def extend(self, report_item_list): + self.items.extend(report_item_list) + return self + + @property + def errors_count(self): + return len([ + item for item in self.items + if item.severity == ReportItemSeverity.ERROR + ]) def process(self, report_item): - self.process_list([report_item]) + self.append(report_item) + self.send() def process_list(self, report_item_list): + self.extend(report_item_list) + self.send() + + def send(self): errors = [] - for report_item in report_item_list: + for report_item in self.items: if report_item.severity == ReportItemSeverity.ERROR: errors.append(report_item) elif report_item.severity == ReportItemSeverity.WARNING: print("Warning: " + build_report_message(report_item)) elif self.debug or report_item.severity != ReportItemSeverity.DEBUG: print(build_report_message(report_item)) + self.items = [] if errors: raise LibraryError(*errors) + def _prepare_force_text(report_item): if report_item.forceable == codes.SKIP_OFFLINE_NODES: return ", use --skip-offline to override" diff -Nru pcs-0.9.155+dfsg/pcs/cli/common/test/test_console_report.py pcs-0.9.159/pcs/cli/common/test/test_console_report.py --- pcs-0.9.155+dfsg/pcs/cli/common/test/test_console_report.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/cli/common/test/test_console_report.py 2017-06-30 15:33:01.000000000 +0000 @@ -12,6 +12,11 @@ format_optional, ) from pcs.common import report_codes as codes +from pcs.common.fencing_topology import ( + TARGET_TYPE_NODE, + TARGET_TYPE_REGEXP, + TARGET_TYPE_ATTRIBUTE, +) class IndentTest(TestCase): def test_indent_list_of_lines(self): @@ -43,9 +48,9 @@ self.assert_message_from_info( "invalid TYPE option 'NAME', allowed options are: FIRST, SECOND", { - "option_name": "NAME", + "option_names": ["NAME"], "option_type": "TYPE", - "allowed": sorted(["FIRST", "SECOND"]), + "allowed": ["SECOND", "FIRST"], } ) @@ -53,9 +58,59 @@ self.assert_message_from_info( "invalid option 'NAME', allowed options are: FIRST, SECOND", { - "option_name": "NAME", + "option_names": ["NAME"], + "option_type": "", + "allowed": ["FIRST", "SECOND"], + } + ) + + def test_build_message_with_multiple_names(self): + self.assert_message_from_info( + "invalid options: 'ANOTHER', 'NAME', allowed option is FIRST", + { + "option_names": ["NAME", "ANOTHER"], + "option_type": "", + "allowed": ["FIRST"], + } + ) + + def test_no_allowed_options(self): + self.assert_message_from_info( + 
"invalid options: 'ANOTHER', 'NAME', there are no options allowed", + { + "option_names": ["NAME", "ANOTHER"], + "option_type": "", + "allowed": [], + } + ) + + +class RequiredOptionIsMissing(NameBuildTest): + code = codes.REQUIRED_OPTION_IS_MISSING + def test_build_message_with_type(self): + self.assert_message_from_info( + "required TYPE option 'NAME' is missing", + { + "option_names": ["NAME"], + "option_type": "TYPE", + } + ) + + def test_build_message_without_type(self): + self.assert_message_from_info( + "required option 'NAME' is missing", + { + "option_names": ["NAME"], + "option_type": "", + } + ) + + def test_build_message_with_multiple_names(self): + self.assert_message_from_info( + "required options 'ANOTHER', 'NAME' are missing", + { + "option_names": ["NAME", "ANOTHER"], "option_type": "", - "allowed": sorted(["FIRST", "SECOND"]), } ) @@ -155,13 +210,23 @@ } ) -class BuildRunExternalaStartedTest(NameBuildTest): +class BuildRunExternalStartedTest(NameBuildTest): code = codes.RUN_EXTERNAL_PROCESS_STARTED + def test_build_message_minimal(self): + self.assert_message_from_info( + "Running: COMMAND\nEnvironment:\n", + { + "command": "COMMAND", + "stdin": "", + "environment": dict(), + } + ) + def test_build_message_with_stdin(self): self.assert_message_from_info( ( - "Running: COMMAND\n" + "Running: COMMAND\nEnvironment:\n" "--Debug Input Start--\n" "STDIN\n" "--Debug Input End--\n" @@ -169,18 +234,58 @@ { "command": "COMMAND", "stdin": "STDIN", + "environment": dict(), } ) - def test_build_message_without_stdin(self): + def test_build_message_with_env(self): self.assert_message_from_info( - "Running: COMMAND\n", + ( + "Running: COMMAND\nEnvironment:\n" + " env_a=A\n" + " env_b=B\n" + ), { "command": "COMMAND", "stdin": "", + "environment": {"env_a": "A", "env_b": "B",}, + } + ) + + def test_build_message_maximal(self): + self.assert_message_from_info( + ( + "Running: COMMAND\nEnvironment:\n" + " env_a=A\n" + " env_b=B\n" + "--Debug Input Start--\n" + "STDIN\n" + "--Debug Input End--\n" + ), + { + "command": "COMMAND", + "stdin": "STDIN", + "environment": {"env_a": "A", "env_b": "B",}, + } + ) + + def test_insidious_environment(self): + self.assert_message_from_info( + ( + "Running: COMMAND\nEnvironment:\n" + " test=a:{green},b:{red}\n" + "--Debug Input Start--\n" + "STDIN\n" + "--Debug Input End--\n" + ), + { + "command": "COMMAND", + "stdin": "STDIN", + "environment": {"test": "a:{green},b:{red}",}, } ) + class BuildNodeCommunicationStartedTest(NameBuildTest): code = codes.NODE_COMMUNICATION_STARTED @@ -214,6 +319,10 @@ def test_info_key_is_not_falsy(self): self.assertEqual("A: ", format_optional("A", "{0}: ")) + def test_default_value(self): + self.assertEqual("DEFAULT", format_optional("", "{0}: ", "DEFAULT")) + + class AgentNameGuessedTest(NameBuildTest): code = codes.AGENT_NAME_GUESSED def test_build_message_with_data(self): @@ -230,12 +339,1401 @@ def test_build_message_with_data(self): self.assert_message_from_info( "Invalid resource agent name ':name'." - " Use standard:provider:type or standard:type." - " List of standards and providers can be obtained by using" - " commands 'pcs resource standards' and" + " Use standard:provider:type when standard is 'ocf' or" + " standard:type otherwise. 
List of standards and providers can" + " be obtained by using commands 'pcs resource standards' and" " 'pcs resource providers'" , { "name": ":name", } ) + +class InvalidiStonithAgentNameTest(NameBuildTest): + code = codes.INVALID_STONITH_AGENT_NAME + def test_build_message_with_data(self): + self.assert_message_from_info( + "Invalid stonith agent name 'fence:name'. List of agents can be" + " obtained by using command 'pcs stonith list'. Do not use the" + " 'stonith:' prefix. Agent name cannot contain the ':'" + " character." + , + { + "name": "fence:name", + } + ) + +class InvalidOptionType(NameBuildTest): + code = codes.INVALID_OPTION_TYPE + def test_allowed_string(self): + self.assert_message_from_info( + "specified option name is not valid, use allowed types", + { + "option_name": "option name", + "allowed_types": "allowed types", + } + ) + + def test_allowed_list(self): + self.assert_message_from_info( + "specified option name is not valid, use allowed, types", + { + "option_name": "option name", + "allowed_types": ["allowed", "types"], + } + ) + + +class DeprecatedOption(NameBuildTest): + code = codes.DEPRECATED_OPTION + + def test_no_desc_hint_array(self): + self.assert_message_from_info( + "option 'option name' is deprecated and should not be used," + " use new_a, new_b instead" + , + { + "option_name": "option name", + "option_type": "", + "replaced_by": ["new_b", "new_a"], + } + ) + + def test_desc_hint_string(self): + self.assert_message_from_info( + "option type option 'option name' is deprecated and should not be" + " used, use new option instead" + , + { + "option_name": "option name", + "option_type": "option type", + "replaced_by": "new option", + } + ) + + +class StonithResourcesDoNotExist(NameBuildTest): + code = codes.STONITH_RESOURCES_DO_NOT_EXIST + def test_success(self): + self.assert_message_from_info( + "Stonith resource(s) 'device1', 'device2' do not exist", + { + "stonith_ids": ["device1", "device2"], + } + ) + +class FencingLevelAlreadyExists(NameBuildTest): + code = codes.CIB_FENCING_LEVEL_ALREADY_EXISTS + def test_target_node(self): + self.assert_message_from_info( + "Fencing level for 'nodeA' at level '1' with device(s) " + "'device1,device2' already exists", + { + "level": "1", + "target_type": TARGET_TYPE_NODE, + "target_value": "nodeA", + "devices": ["device1", "device2"], + } + ) + + def test_target_pattern(self): + self.assert_message_from_info( + "Fencing level for 'node-\d+' at level '1' with device(s) " + "'device1,device2' already exists", + { + "level": "1", + "target_type": TARGET_TYPE_REGEXP, + "target_value": "node-\d+", + "devices": ["device1", "device2"], + } + ) + + def test_target_attribute(self): + self.assert_message_from_info( + "Fencing level for 'name=value' at level '1' with device(s) " + "'device1,device2' already exists", + { + "level": "1", + "target_type": TARGET_TYPE_ATTRIBUTE, + "target_value": ("name", "value"), + "devices": ["device1", "device2"], + } + ) + +class FencingLevelDoesNotExist(NameBuildTest): + code = codes.CIB_FENCING_LEVEL_DOES_NOT_EXIST + def test_full_info(self): + self.assert_message_from_info( + "Fencing level for 'nodeA' at level '1' with device(s) " + "'device1,device2' does not exist", + { + "level": "1", + "target_type": TARGET_TYPE_NODE, + "target_value": "nodeA", + "devices": ["device1", "device2"], + } + ) + + def test_only_level(self): + self.assert_message_from_info( + "Fencing level at level '1' does not exist", + { + "level": "1", + "target_type": None, + "target_value": None, + "devices": None, + } + 
) + + def test_only_target(self): + self.assert_message_from_info( + "Fencing level for 'name=value' does not exist", + { + "level": None, + "target_type": TARGET_TYPE_ATTRIBUTE, + "target_value": ("name", "value"), + "devices": None, + } + ) + + def test_only_devices(self): + self.assert_message_from_info( + "Fencing level with device(s) 'device1,device2' does not exist", + { + "level": None, + "target_type": None, + "target_value": None, + "devices": ["device1", "device2"], + } + ) + + def test_no_info(self): + self.assert_message_from_info( + "Fencing level does not exist", + { + "level": None, + "target_type": None, + "target_value": None, + "devices": None, + } + ) + + +class ResourceBundleAlreadyContainsAResource(NameBuildTest): + code = codes.RESOURCE_BUNDLE_ALREADY_CONTAINS_A_RESOURCE + def test_build_message_with_data(self): + self.assert_message_from_info( + ( + "bundle 'test_bundle' already contains resource " + "'test_resource', a bundle may contain at most one resource" + ), + { + "resource_id": "test_resource", + "bundle_id": "test_bundle", + } + ) + + +class ResourceOperationIntevalDuplicationTest(NameBuildTest): + code = codes.RESOURCE_OPERATION_INTERVAL_DUPLICATION + def test_build_message_with_data(self): + self.assert_message_from_info( + "multiple specification of the same operation with the same" + " interval:" + "\nmonitor with intervals 3600s, 60m, 1h" + "\nmonitor with intervals 60s, 1m" + , + { + "duplications": { + "monitor": [ + ["3600s", "60m", "1h"], + ["60s", "1m"], + ], + }, + } + ) + +class ResourceOperationIntevalAdaptedTest(NameBuildTest): + code = codes.RESOURCE_OPERATION_INTERVAL_ADAPTED + def test_build_message_with_data(self): + self.assert_message_from_info( + "changing a monitor operation interval from 10 to 11 to make the" + " operation unique" + , + { + "operation_name": "monitor", + "original_interval": "10", + "adapted_interval": "11", + } + ) + +class IdBelongsToUnexpectedType(NameBuildTest): + code = codes.ID_BELONGS_TO_UNEXPECTED_TYPE + def test_build_message_with_data(self): + self.assert_message_from_info("'ID' is not primitive/master/clone", { + "id": "ID", + "expected_types": ["primitive", "master", "clone"], + "current_type": "op", + }) + + def test_build_message_with_transformation(self): + self.assert_message_from_info("'ID' is not a group", { + "id": "ID", + "expected_types": ["group"], + "current_type": "op", + }) + +class ResourceRunOnNodes(NameBuildTest): + code = codes.RESOURCE_RUNNING_ON_NODES + def test_one_node(self): + self.assert_message_from_info( + "resource 'R' is running on node 'node1'", + { + "resource_id": "R", + "roles_with_nodes": {"Started": ["node1"]}, + } + ) + def test_multiple_nodes(self): + self.assert_message_from_info( + "resource 'R' is running on nodes 'node1', 'node2'", + { + "resource_id": "R", + "roles_with_nodes": {"Started": ["node1","node2"]}, + } + ) + def test_multiple_role_multiple_nodes(self): + self.assert_message_from_info( + "resource 'R' is master on node 'node3'" + "; running on nodes 'node1', 'node2'" + , + { + "resource_id": "R", + "roles_with_nodes": { + "Started": ["node1","node2"], + "Master": ["node3"], + }, + } + ) + +class ResourceDoesNotRun(NameBuildTest): + code = codes.RESOURCE_DOES_NOT_RUN + def test_build_message(self): + self.assert_message_from_info( + "resource 'R' is not running on any node", + { + "resource_id": "R", + } + ) + +class MutuallyExclusiveOptions(NameBuildTest): + code = codes.MUTUALLY_EXCLUSIVE_OPTIONS + def test_build_message(self): + 
self.assert_message_from_info( + "Only one of some options 'a' and 'b' can be used", + { + "option_type": "some", + "option_names": ["a", "b"], + } + ) + +class ResourceIsUnmanaged(NameBuildTest): + code = codes.RESOURCE_IS_UNMANAGED + def test_build_message(self): + self.assert_message_from_info( + "'R' is unmanaged", + { + "resource_id": "R", + } + ) + +class ResourceManagedNoMonitorEnabled(NameBuildTest): + code = codes.RESOURCE_MANAGED_NO_MONITOR_ENABLED + def test_build_message(self): + self.assert_message_from_info( + "Resource 'R' has no enabled monitor operations." + " Re-run with '--monitor' to enable them." + , + { + "resource_id": "R", + } + ) + +class NodeIsInCluster(NameBuildTest): + code = codes.CANNOT_ADD_NODE_IS_IN_CLUSTER + def test_build_message(self): + self.assert_message_from_info( + "cannot add the node 'N1' because it is in a cluster", + { + "node": "N1", + } + ) + +class NodeIsRunningPacemakerRemote(NameBuildTest): + code = codes.CANNOT_ADD_NODE_IS_RUNNING_SERVICE + def test_build_message(self): + self.assert_message_from_info( + "cannot add the node 'N1' because it is running service" + " 'pacemaker_remote' (is not the node already in a cluster?)" + , + { + "node": "N1", + "service": "pacemaker_remote", + } + ) + def test_build_message_with_unknown_service(self): + self.assert_message_from_info( + "cannot add the node 'N1' because it is running service 'unknown'", + { + "node": "N1", + "service": "unknown", + } + ) + + +class SbdDeviceInitializationStarted(NameBuildTest): + code = codes.SBD_DEVICE_INITIALIZATION_STARTED + def test_build_message(self): + self.assert_message_from_info( + "Initializing device(s) /dev1, /dev2, /dev3...", + { + "device_list": ["/dev1", "/dev2", "/dev3"], + } + ) + + +class SbdDeviceInitializationError(NameBuildTest): + code = codes.SBD_DEVICE_INITIALIZATION_ERROR + def test_build_message(self): + self.assert_message_from_info( + "Initialization of device(s) failed: this is reason", + { + "reason": "this is reason" + } + ) + + +class SbdDeviceListError(NameBuildTest): + code = codes.SBD_DEVICE_LIST_ERROR + def test_build_message(self): + self.assert_message_from_info( + "Unable to get list of messages from device '/dev': this is reason", + { + "device": "/dev", + "reason": "this is reason", + } + ) + + +class SbdDeviceMessageError(NameBuildTest): + code = codes.SBD_DEVICE_MESSAGE_ERROR + def test_build_message(self): + self.assert_message_from_info( + "Unable to set message 'test' for node 'node1' on device '/dev1'", + { + "message": "test", + "node": "node1", + "device": "/dev1", + } + ) + + +class SbdDeviceDumpError(NameBuildTest): + code = codes.SBD_DEVICE_DUMP_ERROR + def test_build_message(self): + self.assert_message_from_info( + "Unable to get SBD headers from device '/dev1': this is reason", + { + "device": "/dev1", + "reason": "this is reason", + } + ) + + +class SbdDevcePathNotAbsolute(NameBuildTest): + code = codes.SBD_DEVICE_PATH_NOT_ABSOLUTE + def test_build_message(self): + self.assert_message_from_info( + "Device path '/dev' on node 'node1' is not absolute", + { + "device": "/dev", + "node": "node1", + } + ) + + def test_build_message_without_node(self): + self.assert_message_from_info( + "Device path '/dev' is not absolute", + { + "device": "/dev", + "node": None, + } + ) + + +class SbdDeviceDoesNotExist(NameBuildTest): + code = codes.SBD_DEVICE_DOES_NOT_EXIST + def test_build_message(self): + self.assert_message_from_info( + "node1: device '/dev' not found", + { + "node": "node1", + "device": "/dev", + } + ) + + +class 
SbdDeviceISNotBlockDevice(NameBuildTest): + code = codes.SBD_DEVICE_IS_NOT_BLOCK_DEVICE + def test_build_message(self): + self.assert_message_from_info( + "node1: device '/dev' is not a block device", + { + "node": "node1", + "device": "/dev", + } + ) + + +class SbdNoDEviceForNode(NameBuildTest): + code = codes.SBD_NO_DEVICE_FOR_NODE + def test_build_message(self): + self.assert_message_from_info( + "No device defined for node 'node1'", + { + "node": "node1", + } + ) + + +class SbdTooManyDevicesForNode(NameBuildTest): + code = codes.SBD_TOO_MANY_DEVICES_FOR_NODE + def test_build_messages(self): + self.assert_message_from_info( + "More than 3 devices defined for node 'node1' (devices: /dev1, " + "/dev2, /dev3)", + { + "max_devices": 3, + "node": "node1", + "device_list": ["/dev1", "/dev2", "/dev3"] + } + ) + +class RequiredOptionOfAlternativesIsMissing(NameBuildTest): + code = codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING + def test_without_type(self): + self.assert_message_from_info( + "option 'aAa' or 'bBb' or 'cCc' has to be specified", + { + "option_names": ["aAa", "bBb", "cCc"], + } + ) + + def test_with_type(self): + self.assert_message_from_info( + "test option 'aAa' or 'bBb' or 'cCc' has to be specified", + { + "option_type": "test", + "option_names": ["aAa", "bBb", "cCc"], + } + ) + +class PrerequisiteOptionIsMissing(NameBuildTest): + code = codes.PREREQUISITE_OPTION_IS_MISSING + def test_without_type(self): + self.assert_message_from_info( + "If option 'a' is specified, option 'b' must be specified as well", + { + "option_name": "a", + "prerequisite_name": "b", + } + ) + + def test_with_type(self): + self.assert_message_from_info( + "If some option 'a' is specified, " + "other option 'b' must be specified as well" + , + { + "option_name": "a", + "option_type": "some", + "prerequisite_name": "b", + "prerequisite_type": "other", + } + ) + +class FileDistributionStarted(NameBuildTest): + code = codes.FILES_DISTRIBUTION_STARTED + def test_build_messages(self): + self.assert_message_from_info( + "Sending 'first', 'second'", + { + "file_list": ["first", "second"], + "node_list": None, + "description": None, + } + ) + + def test_build_messages_with_nodes(self): + self.assert_message_from_info( + "Sending 'first', 'second' to 'node1', 'node2'", + { + "file_list": ["first", "second"], + "node_list": ["node1", "node2"], + "description": None, + } + ) + + def test_build_messages_with_description(self): + self.assert_message_from_info( + "Sending configuration files to 'node1', 'node2'", + { + "file_list": ["first", "second"], + "node_list": ["node1", "node2"], + "description": "configuration files", + } + ) + +class FileDistributionSucess(NameBuildTest): + code = codes.FILE_DISTRIBUTION_SUCCESS + def test_build_messages(self): + self.assert_message_from_info( + "node1: successful distribution of the file 'some authfile'", + { + "nodes_success_files": None, + "node": "node1", + "file_description": "some authfile", + } + ) + +class FileDistributionError(NameBuildTest): + code = codes.FILE_DISTRIBUTION_ERROR + def test_build_messages(self): + self.assert_message_from_info( + "node1: unable to distribute file 'file1': permission denied", + { + "node_file_errors": None, + "node": "node1", + "file_description": "file1", + "reason": "permission denied", + } + ) + +class FileRemoveFromNodeStarted(NameBuildTest): + code = codes.FILES_REMOVE_FROM_NODE_STARTED + def test_build_messages(self): + self.assert_message_from_info( + "Requesting remove 'first', 'second' from 'node1', 'node2'", + { + 
"file_list": ["first", "second"], + "node_list": ["node1", "node2"], + "description": None, + } + ) + + def test_build_messages_with_description(self): + self.assert_message_from_info( + "Requesting remove remote configuration files from 'node1'," + " 'node2'" + , + { + "file_list": ["first", "second"], + "node_list": ["node1", "node2"], + "description": "remote configuration files", + } + ) + +class FileRemoveFromNodeSucess(NameBuildTest): + code = codes.FILE_REMOVE_FROM_NODE_SUCCESS + def test_build_messages(self): + self.assert_message_from_info( + "node1: successful removal of the file 'some authfile'", + { + "nodes_success_files": None, + "node": "node1", + "file_description": "some authfile", + } + ) + +class FileRemoveFromNodeError(NameBuildTest): + code = codes.FILE_REMOVE_FROM_NODE_ERROR + def test_build_messages(self): + self.assert_message_from_info( + "node1: unable to remove file 'file1': permission denied", + { + "node_file_errors": None, + "node": "node1", + "file_description": "file1", + "reason": "permission denied", + } + ) + + +class ActionsOnNodesStarted(NameBuildTest): + code = codes.SERVICE_COMMANDS_ON_NODES_STARTED + def test_build_messages(self): + self.assert_message_from_info( + "Requesting 'first', 'second'", + { + "action_list": ["first", "second"], + "node_list": None, + "description": None, + } + ) + + def test_build_messages_with_nodes(self): + self.assert_message_from_info( + "Requesting 'first', 'second' on 'node1', 'node2'", + { + "action_list": ["first", "second"], + "node_list": ["node1", "node2"], + "description": None, + } + ) + + def test_build_messages_with_description(self): + self.assert_message_from_info( + "Requesting running pacemaker_remote on 'node1', 'node2'", + { + "action_list": ["first", "second"], + "node_list": ["node1", "node2"], + "description": "running pacemaker_remote", + } + ) + +class ActionsOnNodesSuccess(NameBuildTest): + code = codes.SERVICE_COMMAND_ON_NODE_SUCCESS + def test_build_messages(self): + self.assert_message_from_info( + "node1: successful run of 'service enable'", + { + "nodes_success_actions": None, + "node": "node1", + "service_command_description": "service enable", + } + ) + +class ActionOnNodesError(NameBuildTest): + code = codes.SERVICE_COMMAND_ON_NODE_ERROR + def test_build_messages(self): + self.assert_message_from_info( + "node1: service command failed: service1 start: permission denied", + { + "node_action_errors": None, + "node": "node1", + "service_command_description": "service1 start", + "reason": "permission denied", + } + ) + +class resource_is_guest_node_already(NameBuildTest): + code = codes.RESOURCE_IS_GUEST_NODE_ALREADY + def test_build_messages(self): + self.assert_message_from_info( + "the resource 'some-resource' is already a guest node", + {"resource_id": "some-resource"} + ) + +class live_environment_required(NameBuildTest): + code = codes.LIVE_ENVIRONMENT_REQUIRED + def test_build_messages(self): + self.assert_message_from_info( + "This command does not support '--corosync_conf'", + { + "forbidden_options": ["--corosync_conf"] + } + ) + + def test_build_messages_transformable_codes(self): + self.assert_message_from_info( + "This command does not support '--corosync_conf', '-f'", + { + "forbidden_options": ["COROSYNC_CONF", "CIB"] + } + ) + +class nolive_skip_files_distribution(NameBuildTest): + code = codes.NOLIVE_SKIP_FILES_DISTRIBUTION + def test_build_messages(self): + self.assert_message_from_info( + "the distribution of 'file1', 'file2' to 'node1', 'node2' was" + " skipped because 
command" + " does not run on live cluster (e.g. -f was used)." + " You will have to do it manually." + , + { + "files_description": ["file1", 'file2'], + "nodes": ["node1", "node2"], + } + ) + +class nolive_skip_files_remove(NameBuildTest): + code = codes.NOLIVE_SKIP_FILES_REMOVE + def test_build_messages(self): + self.assert_message_from_info( + "'file1', 'file2' remove from 'node1', 'node2'" + " was skipped because command" + " does not run on live cluster (e.g. -f was used)." + " You will have to do it manually." + , + { + "files_description": ["file1", 'file2'], + "nodes": ["node1", "node2"], + } + ) + +class nolive_skip_service_command_on_nodes(NameBuildTest): + code = codes.NOLIVE_SKIP_SERVICE_COMMAND_ON_NODES + def test_build_messages(self): + self.assert_message_from_info( + "running 'pacemaker_remote start' on 'node1', 'node2' was skipped" + " because command does not run on live cluster (e.g. -f was" + " used). You will have to run it manually." + , + { + "service": "pacemaker_remote", + "command": "start", + "nodes": ["node1", "node2"] + } + ) + +class NodeNotFound(NameBuildTest): + code = codes.NODE_NOT_FOUND + def test_build_messages(self): + self.assert_message_from_info( + "Node 'SOME_NODE' does not appear to exist in configuration", + { + "node": "SOME_NODE", + "searched_types": [] + } + ) + + def test_build_messages_with_one_search_types(self): + self.assert_message_from_info( + "remote node 'SOME_NODE' does not appear to exist in configuration", + { + "node": "SOME_NODE", + "searched_types": ["remote"] + } + ) + + def test_build_messages_with_string_search_types(self): + self.assert_message_from_info( + "remote node 'SOME_NODE' does not appear to exist in configuration", + { + "node": "SOME_NODE", + "searched_types": "remote" + } + ) + + def test_build_messages_with_multiple_search_types(self): + self.assert_message_from_info( + "nor remote node or guest node 'SOME_NODE' does not appear to exist" + " in configuration" + , + { + "node": "SOME_NODE", + "searched_types": ["remote", "guest"] + } + ) + +class MultipleResultFound(NameBuildTest): + code = codes.MULTIPLE_RESULTS_FOUND + def test_build_messages(self): + self.assert_message_from_info( + "multiple resource for 'NODE-NAME' found: 'ID1', 'ID2'", + { + "result_type": "resource", + "result_identifier_list": ["ID1", "ID2"], + "search_description": "NODE-NAME", + } + ) + +class UseCommandNodeAddRemote(NameBuildTest): + code = codes.USE_COMMAND_NODE_ADD_REMOTE + def test_build_messages(self): + self.assert_message_from_info( + "this command is not sufficient for creating a remote connection," + " use 'pcs cluster node add-remote'" + , + {} + ) + +class UseCommandNodeAddGuest(NameBuildTest): + code = codes.USE_COMMAND_NODE_ADD_GUEST + def test_build_messages(self): + self.assert_message_from_info( + "this command is not sufficient for creating a guest node, use " + "'pcs cluster node add-guest'", + {} + ) + +class UseCommandNodeRemoveGuest(NameBuildTest): + code = codes.USE_COMMAND_NODE_REMOVE_GUEST + def test_build_messages(self): + self.assert_message_from_info( + "this command is not sufficient for removing a guest node, use " + "'pcs cluster node remove-guest'", + {} + ) + +class NodeRemoveInPacemakerFailed(NameBuildTest): + code = codes.NODE_REMOVE_IN_PACEMAKER_FAILED + def test_build_messages(self): + self.assert_message_from_info( + "unable to remove node 'NODE' from pacemaker: reason", + { + "node_name": "NODE", + "reason": "reason" + } + ) + +class NodeToClearIsStillInCluster(NameBuildTest): + code = 
codes.NODE_TO_CLEAR_IS_STILL_IN_CLUSTER + def test_build_messages(self): + self.assert_message_from_info( + "node 'node1' seems to be still in the cluster" + "; this command should be used only with nodes that have been" + " removed from the cluster" + , + { + "node": "node1" + } + ) + + +class ServiceStartStarted(NameBuildTest): + code = codes.SERVICE_START_STARTED + def test_minimal(self): + self.assert_message_from_info( + "Starting a_service...", + { + "service": "a_service", + "instance": None, + } + ) + + def test_with_instance(self): + self.assert_message_from_info( + "Starting a_service@an_instance...", + { + "service": "a_service", + "instance": "an_instance", + } + ) + + +class ServiceStartError(NameBuildTest): + code = codes.SERVICE_START_ERROR + def test_minimal(self): + self.assert_message_from_info( + "Unable to start a_service: a_reason", + { + "service": "a_service", + "reason": "a_reason", + "node": None, + "instance": None, + } + ) + + def test_node(self): + self.assert_message_from_info( + "a_node: Unable to start a_service: a_reason", + { + "service": "a_service", + "reason": "a_reason", + "node": "a_node", + "instance": None, + } + ) + + def test_instance(self): + self.assert_message_from_info( + "Unable to start a_service@an_instance: a_reason", + { + "service": "a_service", + "reason": "a_reason", + "node": None, + "instance": "an_instance", + } + ) + + def test_all(self): + self.assert_message_from_info( + "a_node: Unable to start a_service@an_instance: a_reason", + { + "service": "a_service", + "reason": "a_reason", + "node": "a_node", + "instance": "an_instance", + } + ) + + +class ServiceStartSuccess(NameBuildTest): + code = codes.SERVICE_START_SUCCESS + def test_minimal(self): + self.assert_message_from_info( + "a_service started", + { + "service": "a_service", + "node": None, + "instance": None, + } + ) + + def test_node(self): + self.assert_message_from_info( + "a_node: a_service started", + { + "service": "a_service", + "node": "a_node", + "instance": None, + } + ) + + def test_instance(self): + self.assert_message_from_info( + "a_service@an_instance started", + { + "service": "a_service", + "node": None, + "instance": "an_instance", + } + ) + + def test_all(self): + self.assert_message_from_info( + "a_node: a_service@an_instance started", + { + "service": "a_service", + "node": "a_node", + "instance": "an_instance", + } + ) + + +class ServiceStartSkipped(NameBuildTest): + code = codes.SERVICE_START_SKIPPED + def test_minimal(self): + self.assert_message_from_info( + "not starting a_service: a_reason", + { + "service": "a_service", + "reason": "a_reason", + "node": None, + "instance": None, + } + ) + + def test_node(self): + self.assert_message_from_info( + "a_node: not starting a_service: a_reason", + { + "service": "a_service", + "reason": "a_reason", + "node": "a_node", + "instance": None, + } + ) + + def test_instance(self): + self.assert_message_from_info( + "not starting a_service@an_instance: a_reason", + { + "service": "a_service", + "reason": "a_reason", + "node": None, + "instance": "an_instance", + } + ) + + def test_all(self): + self.assert_message_from_info( + "a_node: not starting a_service@an_instance: a_reason", + { + "service": "a_service", + "reason": "a_reason", + "node": "a_node", + "instance": "an_instance", + } + ) + + +class ServiceStopStarted(NameBuildTest): + code = codes.SERVICE_STOP_STARTED + def test_minimal(self): + self.assert_message_from_info( + "Stopping a_service...", + { + "service": "a_service", + "instance": None, + } + ) 
+class ServiceStopStarted(NameBuildTest):
+    code = codes.SERVICE_STOP_STARTED
+    def test_minimal(self):
+        self.assert_message_from_info(
+            "Stopping a_service...",
+            {
+                "service": "a_service",
+                "instance": None,
+            }
+        )
+
+    def test_with_instance(self):
+        self.assert_message_from_info(
+            "Stopping a_service@an_instance...",
+            {
+                "service": "a_service",
+                "instance": "an_instance",
+            }
+        )
+
+
+class ServiceStopError(NameBuildTest):
+    code = codes.SERVICE_STOP_ERROR
+    def test_minimal(self):
+        self.assert_message_from_info(
+            "Unable to stop a_service: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": None,
+                "instance": None,
+            }
+        )
+
+    def test_node(self):
+        self.assert_message_from_info(
+            "a_node: Unable to stop a_service: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": "a_node",
+                "instance": None,
+            }
+        )
+
+    def test_instance(self):
+        self.assert_message_from_info(
+            "Unable to stop a_service@an_instance: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": None,
+                "instance": "an_instance",
+            }
+        )
+
+    def test_all(self):
+        self.assert_message_from_info(
+            "a_node: Unable to stop a_service@an_instance: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": "a_node",
+                "instance": "an_instance",
+            }
+        )
+
+
+class ServiceStopSuccess(NameBuildTest):
+    code = codes.SERVICE_STOP_SUCCESS
+    def test_minimal(self):
+        self.assert_message_from_info(
+            "a_service stopped",
+            {
+                "service": "a_service",
+                "node": None,
+                "instance": None,
+            }
+        )
+
+    def test_node(self):
+        self.assert_message_from_info(
+            "a_node: a_service stopped",
+            {
+                "service": "a_service",
+                "node": "a_node",
+                "instance": None,
+            }
+        )
+
+    def test_instance(self):
+        self.assert_message_from_info(
+            "a_service@an_instance stopped",
+            {
+                "service": "a_service",
+                "node": None,
+                "instance": "an_instance",
+            }
+        )
+
+    def test_all(self):
+        self.assert_message_from_info(
+            "a_node: a_service@an_instance stopped",
+            {
+                "service": "a_service",
+                "node": "a_node",
+                "instance": "an_instance",
+            }
+        )
+
+
+class ServiceEnableStarted(NameBuildTest):
+    code = codes.SERVICE_ENABLE_STARTED
+    def test_minimal(self):
+        self.assert_message_from_info(
+            "Enabling a_service...",
+            {
+                "service": "a_service",
+                "instance": None,
+            }
+        )
+
+    def test_with_instance(self):
+        self.assert_message_from_info(
+            "Enabling a_service@an_instance...",
+            {
+                "service": "a_service",
+                "instance": "an_instance",
+            }
+        )
+
+
+class ServiceEnableError(NameBuildTest):
+    code = codes.SERVICE_ENABLE_ERROR
+    def test_minimal(self):
+        self.assert_message_from_info(
+            "Unable to enable a_service: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": None,
+                "instance": None,
+            }
+        )
+
+    def test_node(self):
+        self.assert_message_from_info(
+            "a_node: Unable to enable a_service: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": "a_node",
+                "instance": None,
+            }
+        )
+
+    def test_instance(self):
+        self.assert_message_from_info(
+            "Unable to enable a_service@an_instance: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": None,
+                "instance": "an_instance",
+            }
+        )
+
+    def test_all(self):
+        self.assert_message_from_info(
+            "a_node: Unable to enable a_service@an_instance: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": "a_node",
+                "instance": "an_instance",
+            }
+        )
+
+
+class ServiceEnableSuccess(NameBuildTest):
+    code = codes.SERVICE_ENABLE_SUCCESS
+    def test_minimal(self):
+        self.assert_message_from_info(
+            "a_service enabled",
+            {
+                "service": "a_service",
+                "node": None,
+                "instance": None,
+            }
+        )
+
+    def test_node(self):
+        self.assert_message_from_info(
+            "a_node: a_service enabled",
+            {
+                "service": "a_service",
+                "node": "a_node",
+                "instance": None,
+            }
+        )
+
+    def test_instance(self):
+        self.assert_message_from_info(
+            "a_service@an_instance enabled",
+            {
+                "service": "a_service",
+                "node": None,
+                "instance": "an_instance",
+            }
+        )
+
+    def test_all(self):
+        self.assert_message_from_info(
+            "a_node: a_service@an_instance enabled",
+            {
+                "service": "a_service",
+                "node": "a_node",
+                "instance": "an_instance",
+            }
+        )
+
+
+class ServiceEnableSkipped(NameBuildTest):
+    code = codes.SERVICE_ENABLE_SKIPPED
+    def test_minimal(self):
+        self.assert_message_from_info(
+            "not enabling a_service: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": None,
+                "instance": None,
+            }
+        )
+
+    def test_node(self):
+        self.assert_message_from_info(
+            "a_node: not enabling a_service: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": "a_node",
+                "instance": None,
+            }
+        )
+
+    def test_instance(self):
+        self.assert_message_from_info(
+            "not enabling a_service@an_instance: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": None,
+                "instance": "an_instance",
+            }
+        )
+
+    def test_all(self):
+        self.assert_message_from_info(
+            "a_node: not enabling a_service@an_instance: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": "a_node",
+                "instance": "an_instance",
+            }
+        )
+
+
+class ServiceDisableStarted(NameBuildTest):
+    code = codes.SERVICE_DISABLE_STARTED
+    def test_minimal(self):
+        self.assert_message_from_info(
+            "Disabling a_service...",
+            {
+                "service": "a_service",
+                "instance": None,
+            }
+        )
+
+    def test_with_instance(self):
+        self.assert_message_from_info(
+            "Disabling a_service@an_instance...",
+            {
+                "service": "a_service",
+                "instance": "an_instance",
+            }
+        )
+
+
+class ServiceDisableError(NameBuildTest):
+    code = codes.SERVICE_DISABLE_ERROR
+    def test_minimal(self):
+        self.assert_message_from_info(
+            "Unable to disable a_service: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": None,
+                "instance": None,
+            }
+        )
+
+    def test_node(self):
+        self.assert_message_from_info(
+            "a_node: Unable to disable a_service: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": "a_node",
+                "instance": None,
+            }
+        )
+
+    def test_instance(self):
+        self.assert_message_from_info(
+            "Unable to disable a_service@an_instance: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": None,
+                "instance": "an_instance",
+            }
+        )
+
+    def test_all(self):
+        self.assert_message_from_info(
+            "a_node: Unable to disable a_service@an_instance: a_reason",
+            {
+                "service": "a_service",
+                "reason": "a_reason",
+                "node": "a_node",
+                "instance": "an_instance",
+            }
+        )
+
+
+class ServiceDisableSuccess(NameBuildTest):
+    code = codes.SERVICE_DISABLE_SUCCESS
+    def test_minimal(self):
+        self.assert_message_from_info(
+            "a_service disabled",
+            {
+                "service": "a_service",
+                "node": None,
+                "instance": None,
+            }
+        )
+
+    def test_node(self):
+        self.assert_message_from_info(
+            "a_node: a_service disabled",
+            {
+                "service": "a_service",
+                "node": "a_node",
+                "instance": None,
+            }
+        )
+
+    def test_instance(self):
+        self.assert_message_from_info(
+            "a_service@an_instance disabled",
+            {
+                "service": "a_service",
+                "node": None,
+                "instance": "an_instance",
+            }
+        )
+
+    def test_all(self):
+        self.assert_message_from_info(
+            "a_node: a_service@an_instance disabled",
+            {
+                "service": "a_service",
+                "node": "a_node",
+                "instance": "an_instance",
+            }
+        )
diff -Nru pcs-0.9.155+dfsg/pcs/cli/common/test/test_env_file.py pcs-0.9.159/pcs/cli/common/test/test_env_file.py
--- pcs-0.9.155+dfsg/pcs/cli/common/test/test_env_file.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/cli/common/test/test_env_file.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,138 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from pcs.test.tools.pcs_unittest import TestCase
+from pcs.test.tools.pcs_unittest import mock
+from pcs.cli.common import env_file
+from pcs.test.tools.misc import create_patcher, create_setup_patch_mixin
+
+from pcs.lib.errors import ReportItem
+from pcs.common import report_codes
+
+patch_env_file = create_patcher(env_file)
+SetupPatchMixin = create_setup_patch_mixin(patch_env_file)
+
+FILE_PATH = "/path/to/local/file"
+
+class Write(TestCase, SetupPatchMixin):
+    def setUp(self):
+        self.mock_open = mock.mock_open()
+        self.mock_error = self.setup_patch("console_report.error")
+
+    def assert_params_causes_calls(self, env_file_dict, calls, path=FILE_PATH):
+        with patch_env_file("open", self.mock_open, create=True):
+            env_file.write(env_file_dict, path)
+        self.assertEqual(self.mock_open.mock_calls, calls)
+
+    def test_successfully_write(self):
+        self.assert_params_causes_calls(
+            {"content": "filecontent"},
+            [
+                mock.call(FILE_PATH, "w"),
+                mock.call().write("filecontent"),
+                mock.call().close(),
+            ]
+        )
+
+    def test_successfully_write_binary(self):
+        self.assert_params_causes_calls(
+            {"content": "filecontent", "is_binary": True},
+            [
+                mock.call(FILE_PATH, "wb"),
+                mock.call().write("filecontent"),
+                mock.call().close(),
+            ]
+        )
+
+    def test_exit_when_cannot_open_file(self):
+        self.mock_open.side_effect = EnvironmentError()
+        self.mock_error.side_effect = SystemExit()
+        self.assertRaises(
+            SystemExit,
+            lambda: env_file.write({"content": "filecontent"}, FILE_PATH)
+        )
+
+class Read(TestCase, SetupPatchMixin):
+    def setUp(self):
+        self.is_file = self.setup_patch('os.path.isfile')
+        self.mock_open = mock.mock_open(read_data='filecontent')
+        self.mock_error = self.setup_patch("console_report.error")
+
+    def assert_returns_content(self, content, is_file):
+        self.is_file.return_value = is_file
+        with patch_env_file("open", self.mock_open, create=True):
+            self.assertEqual(
+                content,
+                env_file.read(FILE_PATH)
+            )
+
+    def test_successfully_read(self):
+        self.assert_returns_content({"content": "filecontent"}, is_file=True)
+
+    def test_successfully_return_empty_content(self):
+        self.assert_returns_content({"content": None}, is_file=False)
+
+    def test_exit_when_cannot_open_file(self):
+        self.mock_open.side_effect = EnvironmentError()
+        self.mock_error.side_effect = SystemExit()
+        self.assertRaises(SystemExit, lambda: env_file.read(FILE_PATH))
+
+class ProcessNoExistingFileExpectation(TestCase, SetupPatchMixin):
+    def setUp(self):
+        self.exists = self.setup_patch('os.path.exists')
+        self.mock_error = self.setup_patch("console_report.error")
+
+    def run_process(
+        self, no_existing_file_expected, file_exists, overwrite=False
+    ):
+        self.exists.return_value = file_exists
+        env_file.process_no_existing_file_expectation(
+            "role",
+            {
+                "no_existing_file_expected": no_existing_file_expected,
+                "can_overwrite_existing_file": overwrite,
+            },
+            FILE_PATH
+        )
+
+    def test_do_nothing_when_expectation_does_not_conflict(self):
+        self.run_process(no_existing_file_expected=False, file_exists=True)
+        self.run_process(no_existing_file_expected=False, file_exists=False)
+        self.run_process(no_existing_file_expected=True, file_exists=False)
+
+    def test_overwrite_permission_produce_console_warning(self):
+        warn = self.setup_patch("console_report.warn")
+        self.run_process(
+            no_existing_file_expected=True,
+            file_exists=True,
+            overwrite=True
+        )
+        warn.assert_called_once_with("role /path/to/local/file already exists")
+
+    def test_non_overwritable_conflict_exits(self):
+        self.mock_error.side_effect = SystemExit()
+        self.assertRaises(
+            SystemExit,
+            lambda:
+                self.run_process(no_existing_file_expected=True, file_exists=True)
+        )
+
+class ReportMissing(TestCase):
+    @patch_env_file("console_report.error")
+    def test_report_to_console(self, error):
+        env_file.report_missing("role", "path")
+        error.assert_called_once_with("role 'path' does not exist")
+
+class IsMissingReport(TestCase):
+    def test_recognize_missing_report(self):
+        self.assertTrue(env_file.is_missing_report(
+            ReportItem.error(
+                report_codes.FILE_DOES_NOT_EXIST,
+                info={"file_role": "role"}
+            ),
+            "role"
+        ))
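
Taken together, these tests pin down the env_file contract: write() opens the file in "w" or "wb" depending on the is_binary flag, read() returns a {"content": ...} dict with None for a missing file, and failures are routed through console_report.error. A condensed sketch of that behavior (error handling omitted; the real code lives in pcs.cli.common.env_file):

# Condensed sketch of the behavior the tests above pin down; the real
# implementation also reports EnvironmentError via console_report.error.
import os.path

def write(env_file_dict, file_path):
    mode = "wb" if env_file_dict.get("is_binary") else "w"
    file = open(file_path, mode)
    file.write(env_file_dict["content"])
    file.close()

def read(file_path):
    if not os.path.isfile(file_path):
        return {"content": None}
    return {"content": open(file_path).read()}
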
set(["first", "second"]), keyword_repeat_allowed=False, )) + + def test_group_repeating_keyword_occurences(self): + self.assertEqual( + group_by_keywords( + ["first", 1, 2, "second", 3, "first", 4], + set(["first", "second"]), + group_repeated_keywords=["first"] + ), + { + "first": [[1, 2], [4]], + "second": [3], + } + ) + + def test_raises_on_group_repeated_keywords_inconsistency(self): + self.assertRaises(AssertionError, lambda: group_by_keywords( + [], + set(["first", "second"]), + group_repeated_keywords=["first", "third"], + implicit_first_group_key="third" + )) + + def test_implicit_first_kw_not_applyed_in_the_middle(self): + self.assertEqual( + group_by_keywords( + [1, 2, "first", 3, "zero", 4], + set(["first"]), + implicit_first_group_key="zero" + ), + { + "zero": [1, 2], + "first": [3, "zero", 4], + } + ) + def test_implicit_first_kw_applyed_in_the_middle_when_is_in_kwds(self): + self.assertEqual( + group_by_keywords( + [1, 2, "first", 3, "zero", 4], + set(["first", "zero"]), + implicit_first_group_key="zero" + ), + { + "zero": [1, 2, 4], + "first": [3], + } + ) + + +class ParseTypedArg(TestCase): + def assert_parse(self, arg, parsed): + self.assertEqual( + parse_typed_arg(arg, ["t0", "t1", "t2"], "t0"), + parsed + ) + + def test_no_type(self): + self.assert_parse("value", ("t0", "value")) + + def test_escape(self): + self.assert_parse("%value", ("t0", "value")) + + def test_allowed_type(self): + self.assert_parse("t1%value", ("t1", "value")) + + def test_bad_type(self): + self.assertRaises( + CmdLineInputError, + lambda: self.assert_parse("tX%value", "aaa") + ) + + def test_escape_delimiter(self): + self.assert_parse("%%value", ("t0", "%value")) + self.assert_parse("%val%ue", ("t0", "val%ue")) + + def test_more_delimiters(self): + self.assert_parse("t2%va%lu%e", ("t2", "va%lu%e")) + self.assert_parse("t2%%va%lu%e", ("t2", "%va%lu%e")) + +class FilterOutNonOptionNegativeNumbers(TestCase): + def test_does_not_remove_anything_when_no_negative_numbers(self): + args = ["first", "second"] + self.assertEqual(args, filter_out_non_option_negative_numbers(args)) + + def test_remove_negative_number(self): + self.assertEqual( + ["first"], + filter_out_non_option_negative_numbers(["first", "-1"]) + ) + + def test_remove_negative_infinity(self): + self.assertEqual( + ["first"], + filter_out_non_option_negative_numbers(["first", "-INFINITY"]) + ) + self.assertEqual( + ["first"], + filter_out_non_option_negative_numbers(["first", "-infinity"]) + ) + + def test_not_remove_follower_of_short_signed_option(self): + self.assertEqual( + ["first", "-f", "-1"], + filter_out_non_option_negative_numbers(["first", "-f", "-1"]) + ) + + def test_remove_follower_of_short_unsigned_option(self): + self.assertEqual( + ["first", "-h"], + filter_out_non_option_negative_numbers(["first", "-h", "-1"]) + ) + + def test_not_remove_follower_of_long_signed_option(self): + self.assertEqual( + ["first", "--name", "-1"], + filter_out_non_option_negative_numbers(["first", "--name", "-1"]) + ) + + def test_remove_follower_of_long_unsigned_option(self): + self.assertEqual( + ["first", "--master"], + filter_out_non_option_negative_numbers(["first", "--master", "-1"]) + ) + + def test_does_not_remove_dash(self): + self.assertEqual( + ["first", "-"], + filter_out_non_option_negative_numbers(["first", "-"]) + ) + + def test_does_not_remove_dash_dash(self): + self.assertEqual( + ["first", "--"], + filter_out_non_option_negative_numbers(["first", "--"]) + ) + +class FilterOutOptions(TestCase): + def 
test_does_not_remove_anything_when_no_options(self): + args = ["first", "second"] + self.assertEqual(args, filter_out_options(args)) + + def test_remove_unsigned_short_option(self): + self.assertEqual( + ["first", "second"], + filter_out_options(["first", "-h", "second"]) + ) + + def test_remove_signed_short_option_with_value(self): + self.assertEqual( + ["first"], + filter_out_options(["first", "-f", "second"]) + ) + + def test_not_remove_value_of_signed_short_option_when_value_bundled(self): + self.assertEqual( + ["first", "second"], + filter_out_options(["first", "-fvalue", "second"]) + ) + + def test_remove_unsigned_long_option(self): + self.assertEqual( + ["first", "second"], + filter_out_options(["first", "--master", "second"]) + ) + + def test_remove_signed_long_option_with_value(self): + self.assertEqual( + ["first"], + filter_out_options(["first", "--name", "second"]) + ) + + def test_not_remove_value_of_signed_long_option_when_value_bundled(self): + self.assertEqual( + ["first", "second"], + filter_out_options(["first", "--name=value", "second"]) + ) + + def test_does_not_remove_dash(self): + self.assertEqual( + ["first", "-"], + filter_out_options(["first", "-"]) + ) + + def test_remove_dash_dash(self): + self.assertEqual( + ["first"], + filter_out_options(["first", "--"]) + ) + +class IsNum(TestCase): + def test_returns_true_on_number(self): + self.assertTrue(is_num("10")) + + def test_returns_true_on_infinity(self): + self.assertTrue(is_num("infinity")) + + def test_returns_false_on_no_number(self): + self.assertFalse(is_num("no-num")) + +class IsNegativeNum(TestCase): + def test_returns_true_on_negative_number(self): + self.assertTrue(is_negative_num("-10")) + + def test_returns_true_on_infinity(self): + self.assertTrue(is_negative_num("-INFINITY")) + + def test_returns_false_on_positive_number(self): + self.assertFalse(is_negative_num("10")) + + def test_returns_false_on_no_number(self): + self.assertFalse(is_negative_num("no-num")) + +class IsShortOptionExpectingValue(TestCase): + def test_returns_true_on_short_option_with_value(self): + self.assertTrue(is_short_option_expecting_value("-f")) + + def test_returns_false_on_short_option_without_value(self): + self.assertFalse(is_short_option_expecting_value("-h")) + + def test_returns_false_on_unknown_short_option(self): + self.assertFalse(is_short_option_expecting_value("-x")) + + def test_returns_false_on_dash(self): + self.assertFalse(is_short_option_expecting_value("-")) + + def test_returns_false_on_option_without_dash(self): + self.assertFalse(is_short_option_expecting_value("ff")) + + def test_returns_false_on_option_including_value(self): + self.assertFalse(is_short_option_expecting_value("-fvalue")) + +class IsLongOptionExpectingValue(TestCase): + def test_returns_true_on_long_option_with_value(self): + self.assertTrue(is_long_option_expecting_value("--name")) + + def test_returns_false_on_long_option_without_value(self): + self.assertFalse(is_long_option_expecting_value("--master")) + + def test_returns_false_on_unknown_long_option(self): + self.assertFalse(is_long_option_expecting_value("--not-specified-long-opt")) + + def test_returns_false_on_dash_dash(self): + self.assertFalse(is_long_option_expecting_value("--")) + + def test_returns_false_on_option_without_dash_dash(self): + self.assertFalse(is_long_option_expecting_value("-long-option")) + + def test_returns_false_on_option_including_value(self): + self.assertFalse(is_long_option_expecting_value("--name=Name")) + +class IsOptionExpectingValue(TestCase): + def 
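
The filtering tests above encode a single rule: a standalone negative number (or -INFINITY) is a score value, not an option, and is dropped, unless it directly follows an option that expects a value (such as -f or --name). A sketch of that rule in terms of the helpers this test module imports (pcs ships its own implementation):

# Sketch only; pcs ships its own filter_out_non_option_negative_numbers.
from pcs.cli.common.parse_args import (
    is_negative_num,
    is_option_expecting_value,
)

def filter_negative_numbers_sketch(arg_list):
    kept = []
    keep_next = False
    for arg in arg_list:
        if keep_next:
            # this argument is an option's value, keep it even if it is -1
            kept.append(arg)
            keep_next = False
        elif is_option_expecting_value(arg):
            kept.append(arg)
            keep_next = True
        elif not is_negative_num(arg):
            kept.append(arg)
    return kept
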
+class IsOptionExpectingValue(TestCase):
+    def test_returns_true_on_short_option_with_value(self):
+        self.assertTrue(is_option_expecting_value("-f"))
+
+    def test_returns_true_on_long_option_with_value(self):
+        self.assertTrue(is_option_expecting_value("--name"))
+
+    def test_returns_false_on_short_option_without_value(self):
+        self.assertFalse(is_option_expecting_value("-h"))
+
+    def test_returns_false_on_long_option_without_value(self):
+        self.assertFalse(is_option_expecting_value("--master"))
+
+    def test_returns_false_on_unknown_short_option(self):
+        self.assertFalse(is_option_expecting_value("-x"))
+
+    def test_returns_false_on_unknown_long_option(self):
+        self.assertFalse(is_option_expecting_value("--not-specified-long-opt"))
+
+    def test_returns_false_on_dash(self):
+        self.assertFalse(is_option_expecting_value("-"))
+
+    def test_returns_false_on_dash_dash(self):
+        self.assertFalse(is_option_expecting_value("--"))
+
+    def test_returns_false_on_option_including_value(self):
+        self.assertFalse(is_option_expecting_value("--name=Name"))
+        self.assertFalse(is_option_expecting_value("-fvalue"))
+
+class UpgradeArgs(TestCase):
+    def test_returns_the_same_args_when_no_older_versions_detected(self):
+        args = ["first", "second"]
+        self.assertEqual(args, upgrade_args(args))
+
+    def test_upgrade_2dash_cloneopt(self):
+        self.assertEqual(
+            ["first", "clone", "second"],
+            upgrade_args(["first", "--cloneopt", "second"])
+        )
+
+    def test_upgrade_2dash_clone(self):
+        self.assertEqual(
+            ["first", "clone", "second"],
+            upgrade_args(["first", "--clone", "second"])
+        )
+
+    def test_upgrade_2dash_cloneopt_with_value(self):
+        self.assertEqual(
+            ["first", "clone", "1", "second"],
+            upgrade_args(["first", "--cloneopt=1", "second"])
+        )
+
+    def test_upgrade_2dash_master_in_resource_create(self):
+        self.assertEqual(
+            ["resource", "create", "master", "second"],
+            upgrade_args(["resource", "create", "--master", "second"])
+        )
+
+    def test_dont_upgrade_2dash_master_outside_of_resource_create(self):
+        self.assertEqual(
+            ["first", "--master", "second"],
+            upgrade_args(["first", "--master", "second"])
+        )
+
+    def test_upgrade_2dash_master_in_resource_create_with_complications(self):
+        self.assertEqual(
+            [
+                "-f", "path/to/file", "resource", "-V", "create", "master",
+                "second"
+            ],
+            upgrade_args([
+                "-f", "path/to/file", "resource", "-V", "create", "--master",
+                "second"
+            ])
+        )
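
For reference, the deprecated spellings exercised above map onto the current keyword syntax like this (usage sketch mirroring the UpgradeArgs test cases):

# Usage sketch mirroring the UpgradeArgs test cases above.
from pcs.cli.common.parse_args import upgrade_args

upgrade_args(["first", "--clone", "second"])
# -> ["first", "clone", "second"]
upgrade_args(["first", "--cloneopt=1", "second"])
# -> ["first", "clone", "1", "second"]
upgrade_args(["resource", "create", "--master", "x"])
# -> ["resource", "create", "master", "x"]  (only inside "resource create")
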
diff -Nru pcs-0.9.155+dfsg/pcs/cli/common/test/test_reports.py pcs-0.9.159/pcs/cli/common/test/test_reports.py
--- pcs-0.9.155+dfsg/pcs/cli/common/test/test_reports.py 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/cli/common/test/test_reports.py 2017-06-30 15:33:01.000000000 +0000
@@ -6,7 +6,9 @@
 )
 
 from pcs.test.tools.pcs_unittest import TestCase
+
 from collections import namedtuple
+from functools import partial
 
 from pcs.cli.common.reports import build_message_from_report
@@ -29,7 +31,9 @@
             "Message force text is inside",
             build_message_from_report(
                 {
-                    "SOME": "Message {force} is inside",
+                    "SOME": lambda info, force_text:
+                        "Message "+force_text+" is inside"
+                    ,
                 },
                 ReportItem("SOME", {}),
                 "force text"
@@ -82,3 +86,40 @@
                 ReportItem("SOME", {}),
             )
         )
+
+    def test_callable_is_partial_object(self):
+        code_builder_map = {
+            "SOME": partial(
+                lambda title, info: "{title}: {message}".format(
+                    title=title, **info
+                ),
+                "Info"
+            )
+        }
+        self.assertEqual(
+            "Info: MESSAGE",
+            build_message_from_report(
+                code_builder_map,
+                ReportItem("SOME", {"message": "MESSAGE"})
+            )
+        )
+
+    def test_callable_is_partial_object_with_force(self):
+        code_builder_map = {
+            "SOME": partial(
+                lambda title, info, force_text:
+                    "{title}: {message} {force_text}".format(
+                        title=title, force_text=force_text, **info
+                    ),
+                "Info"
+            )
+        }
+        self.assertEqual(
+            "Info: MESSAGE force text",
+            build_message_from_report(
+                code_builder_map,
+                ReportItem("SOME", {"message": "MESSAGE"}),
+                "force text"
+            )
+        )
+
diff -Nru pcs-0.9.155+dfsg/pcs/cli/constraint_all/console_report.py pcs-0.9.159/pcs/cli/constraint_all/console_report.py
--- pcs-0.9.155+dfsg/pcs/cli/constraint_all/console_report.py 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/cli/constraint_all/console_report.py 2017-06-30 15:33:01.000000000 +0000
@@ -43,12 +43,14 @@
     return type_report_map[constraint_type](options_dict, with_id)
 
 
-#Each value (callable taking report_item.info) returns string template.
-#Optionaly the template can contain placehodler {force} for next processing.
-#Placeholder {force} will be appended if is necessary and if is not presset
+#Each value (a callable taking report_item.info) returns a message.
+#Force text will be appended if necessary.
+#If it is necessary to put the force text inside the string then the callable
+#must take the force_text parameter.
 CODE_TO_MESSAGE_BUILDER_MAP = {
-    codes.DUPLICATE_CONSTRAINTS_EXIST: lambda info:
-        "duplicate constraint already exists{force}\n" + "\n".join([
+    codes.DUPLICATE_CONSTRAINTS_EXIST: lambda info, force_text:
+        "duplicate constraint already exists{0}\n".format(force_text)
+        + "\n".join([
             " " + constraint(info["constraint_type"], constraint_info)
             for constraint_info in info["constraint_info_list"]
         ])
@@ -59,7 +61,9 @@
         "{resource_id} is a {mode} resource, you should use the"
         " {parent_type} id: {parent_id} when adding constraints"
     ).format(
-        mode="master/slave" if info["parent_type"] == "master" else "clone",
+        mode="master/slave" if info["parent_type"] == "master"
+            else info["parent_type"]
+        ,
         **info
     )
     ,
diff -Nru pcs-0.9.155+dfsg/pcs/cli/constraint_all/test/test_console_report.py pcs-0.9.159/pcs/cli/constraint_all/test/test_console_report.py
--- pcs-0.9.155+dfsg/pcs/cli/constraint_all/test/test_console_report.py 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/cli/constraint_all/test/test_console_report.py 2017-06-30 15:33:01.000000000 +0000
@@ -72,13 +72,20 @@
 
         self.assertEqual(
             "\n".join([
-                "duplicate constraint already exists{force}",
+                "duplicate constraint already exists force text",
                 " constraint info"
             ]),
-            self.build({
-                "constraint_info_list": [{"options": {"a": "b"}}],
-                "constraint_type": "rsc_some"
-            })
+            self.build(
+                {
+                    "constraint_info_list": [{"options": {"a": "b"}}],
+                    "constraint_type": "rsc_some"
+                },
+                force_text=" force text"
+            )
+        )
+        mock_constraint.assert_called_once_with(
+            "rsc_some",
+            {"options": {"a": "b"}}
         )
 
 class ResourceForConstraintIsMultiinstanceTest(TestCase):
@@ -110,3 +117,15 @@
                 "parent_id": "RESOURCE_CLONE"
             })
         )
+
+    def test_build_message_for_bundle(self):
+        self.assertEqual(
+            "RESOURCE_PRIMITIVE is a bundle resource, you should use the"
+            " bundle id: RESOURCE_CLONE when adding constraints"
+            ,
+            self.build({
+                "resource_id": "RESOURCE_PRIMITIVE",
+                "parent_type": "bundle",
+                "parent_id": "RESOURCE_CLONE"
+            })
+        )
diff -Nru pcs-0.9.155+dfsg/pcs/cli/constraint_order/console_report.py pcs-0.9.159/pcs/cli/constraint_order/console_report.py
--- pcs-0.9.155+dfsg/pcs/cli/constraint_order/console_report.py 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/cli/constraint_order/console_report.py 2017-06-30 15:33:01.000000000 +0000
@@ -4,7 +4,7 @@
     print_function,
     unicode_literals,
 )
-from pcs.lib.pacemaker_values import is_true
+from pcs.lib.pacemaker.values import is_true
 
 def constraint_plain(constraint_info, with_id=False):
     """
diff -Nru pcs-0.9.155+dfsg/pcs/cli/fencing_topology.py pcs-0.9.159/pcs/cli/fencing_topology.py
--- pcs-0.9.155+dfsg/pcs/cli/fencing_topology.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/cli/fencing_topology.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,24 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from pcs.common.fencing_topology import (
+    TARGET_TYPE_NODE,
+    TARGET_TYPE_REGEXP,
+    TARGET_TYPE_ATTRIBUTE,
+)
+
+__target_type_map = {
+    "attrib": TARGET_TYPE_ATTRIBUTE,
+    "node": TARGET_TYPE_NODE,
+    "regexp": TARGET_TYPE_REGEXP,
+}
+
+target_type_map_cli_to_lib = __target_type_map
+
+target_type_map_lib_to_cli = dict([
+    (value, key) for key, value in __target_type_map.items()
+])
diff -Nru pcs-0.9.155+dfsg/pcs/cli/resource/parse_args.py pcs-0.9.159/pcs/cli/resource/parse_args.py
--- pcs-0.9.155+dfsg/pcs/cli/resource/parse_args.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/cli/resource/parse_args.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,191 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+from pcs.cli.common.parse_args import group_by_keywords, prepare_options
+from pcs.cli.common.errors import CmdLineInputError
+
+
+def parse_create_simple(arg_list):
+    groups = group_by_keywords(
+        arg_list,
+        set(["op", "meta"]),
+        implicit_first_group_key="options",
+        group_repeated_keywords=["op"],
+    )
+
+    parts = {
+        "meta": prepare_options(groups.get("meta", [])),
+        "options": prepare_options(groups.get("options", [])),
+        "op": [
+            prepare_options(op)
+            for op in build_operations(groups.get("op", []))
+        ],
+    }
+
+    return parts
+
+def parse_create(arg_list):
+    groups = group_by_keywords(
+        arg_list,
+        set(["op", "meta", "clone", "master", "bundle"]),
+        implicit_first_group_key="options",
+        group_repeated_keywords=["op"],
+        only_found_keywords=True,
+    )
+
+    parts = {
+        "meta": prepare_options(groups.get("meta", [])),
+        "options": prepare_options(groups.get("options", [])),
+        "op": [
+            prepare_options(op)
+            for op in build_operations(groups.get("op", []))
+        ],
+    }
+
+    if "clone" in groups:
+        parts["clone"] = prepare_options(groups["clone"])
+
+    if "master" in groups:
+        parts["master"] = prepare_options(groups["master"])
+
+    if "bundle" in groups:
+        parts["bundle"] = groups["bundle"]
+
+    return parts
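
A usage sketch of parse_create, matching the test cases in pcs/cli/resource/test/test_parse_args.py further below: leading key=value pairs land in "options" via the implicit first group, each keyword opens its own group, and repeated "op" groups are collected per occurrence:

# Usage sketch; the exact output shape is pinned down by the tests below.
from pcs.cli.resource.parse_args import parse_create

parse_create(["a=b", "meta", "e=f", "op", "monitor", "i=j", "clone", "o=p"])
# -> {
#     "options": {"a": "b"},
#     "meta": {"e": "f"},
#     "op": [{"name": "monitor", "i": "j"}],
#     "clone": {"o": "p"},
# }
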
"container": prepare_options(container_options), + "network": prepare_options(groups.get("network", [])), + "port_map": [ + prepare_options(port_map) + for port_map in groups.get("port-map", []) + ], + "storage_map": [ + prepare_options(storage_map) + for storage_map in groups.get("storage-map", []) + ], + "meta": prepare_options(groups.get("meta", [])) + } + return parts + +def _split_bundle_map_update_op_and_options( + map_arg_list, result_parts, map_name +): + if len(map_arg_list) < 2: + raise _bundle_map_update_not_valid(map_name) + op, options = map_arg_list[0], map_arg_list[1:] + if op == "add": + result_parts[op].append(prepare_options(options)) + elif op == "remove": + result_parts[op].extend(options) + else: + raise _bundle_map_update_not_valid(map_name) + +def _bundle_map_update_not_valid(map_name): + return CmdLineInputError( + ( + "When using '{map}' you must specify either 'add' and options or " + "'remove' and id(s)" + ).format(map=map_name) + ) + +def parse_bundle_update_options(arg_list): + groups = _parse_bundle_groups(arg_list) + port_map = {"add": [], "remove": []} + for map_group in groups.get("port-map", []): + _split_bundle_map_update_op_and_options( + map_group, port_map, "port-map" + ) + storage_map = {"add": [], "remove": []} + for map_group in groups.get("storage-map", []): + _split_bundle_map_update_op_and_options( + map_group, storage_map, "storage-map" + ) + parts = { + "container": prepare_options(groups.get("container", [])), + "network": prepare_options(groups.get("network", [])), + "port_map_add": port_map["add"], + "port_map_remove": port_map["remove"], + "storage_map_add": storage_map["add"], + "storage_map_remove": storage_map["remove"], + "meta": prepare_options(groups.get("meta", [])) + } + return parts + +def build_operations(op_group_list): + """ + Return a list of dicts. Each dict represents one operation. 
+ list of list op_group_list contains items that have parameters after "op" + (so item can contain multiple operations) for example: [ + [monitor timeout=1 start timeout=2], + [monitor timeout=3 interval=10], + ] + """ + operation_list = [] + for op_group in op_group_list: + #empty operation is not allowed + if not op_group: + raise __not_enough_parts_in_operation() + + #every operation group needs to start with operation name + if "=" in op_group[0]: + raise __every_operation_needs_name() + + for arg in op_group: + if "=" not in arg: + operation_list.append(["name={0}".format(arg)]) + else: + operation_list[-1].append(arg) + + #every operation needs at least name and one option + #there can be more than one operation in op_group: check is after processing + if any([len(operation) < 2 for operation in operation_list]): + raise __not_enough_parts_in_operation() + + return operation_list + +def __not_enough_parts_in_operation(): + return CmdLineInputError( + "When using 'op' you must specify an operation name" + " and at least one option" + ) + +def __every_operation_needs_name(): + return CmdLineInputError( + "When using 'op' you must specify an operation name after 'op'" + ) diff -Nru pcs-0.9.155+dfsg/pcs/cli/resource/test/test_parse_args.py pcs-0.9.159/pcs/cli/resource/test/test_parse_args.py --- pcs-0.9.155+dfsg/pcs/cli/resource/test/test_parse_args.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/cli/resource/test/test_parse_args.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,742 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from pcs.test.tools.pcs_unittest import TestCase +from pcs.cli.resource import parse_args +from pcs.cli.common.errors import CmdLineInputError + +class ParseCreateArgs(TestCase): + def assert_produce(self, arg_list, result): + self.assertEqual(parse_args.parse_create(arg_list), result) + + def test_no_args(self): + self.assert_produce([], { + "meta": {}, + "options": {}, + "op": [], + }) + + def test_only_instance_attributes(self): + self.assert_produce(["a=b", "c=d"], { + "meta": {}, + "options": { + "a": "b", + "c": "d", + }, + "op": [], + }) + + def test_only_meta(self): + self.assert_produce(["meta", "a=b", "c=d"], { + "options": {}, + "op": [], + "meta": { + "a": "b", + "c": "d", + }, + }) + + def test_only_clone(self): + self.assert_produce(["clone", "a=b", "c=d"], { + "meta": {}, + "options": {}, + "op": [], + "clone": { + "a": "b", + "c": "d", + }, + }) + + def test_only_operations(self): + self.assert_produce([ + "op", "monitor", "a=b", "c=d", "start", "e=f", + ], { + "meta": {}, + "options": {}, + "op": [ + {"name": "monitor", "a": "b", "c": "d"}, + {"name": "start", "e": "f"}, + ], + }) + + def test_args_op_clone_meta(self): + self.assert_produce([ + "a=b", "c=d", + "meta", "e=f", "g=h", + "op", "monitor", "i=j", "k=l", "start", "m=n", + "clone", "o=p", "q=r", + ], { + "options": { + "a": "b", + "c": "d", + }, + "op": [ + {"name": "monitor", "i": "j", "k": "l"}, + {"name": "start", "m": "n"}, + ], + "meta": { + "e": "f", + "g": "h", + }, + "clone": { + "o": "p", + "q": "r", + }, + }) + + def assert_raises_cmdline(self, args): + self.assertRaises( + CmdLineInputError, + lambda: parse_args.parse_create(args) + ) + + def test_raises_when_operation_name_does_not_follow_op_keyword(self): + self.assert_raises_cmdline(["op", "a=b"]) + self.assert_raises_cmdline(["op", "monitor", "a=b", "op", "c=d"]) + + def test_raises_when_operation_have_no_option(self): + self.assert_raises_cmdline( + 
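
A usage sketch of build_operations, matching the BuildOperations tests at the end of the new test module below: every argument without "=" starts a new operation, and everything else attaches to the most recent one:

# Usage sketch; behavior is pinned down by the BuildOperations tests below.
from pcs.cli.resource.parse_args import build_operations

build_operations([["monitor", "interval=10s", "start", "timeout=20s"]])
# -> [["name=monitor", "interval=10s"], ["name=start", "timeout=20s"]]
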
["op", "monitor", "a=b", "start", "stop", "c=d"] + ) + self.assert_raises_cmdline( + ["op", "monitor", "a=b", "stop", "c=d", "op", "start"] + ) + + def test_allow_to_repeat_op(self): + self.assert_produce([ + "op", "monitor", "a=b", "c=d", + "op", "start", "e=f", + ], { + "meta": {}, + "options": {}, + "op": [ + {"name": "monitor", "a": "b", "c": "d"}, + {"name": "start", "e": "f"}, + ], + }) + + def test_deal_with_empty_operatins(self): + self.assert_raises_cmdline(["op", "monitoring", "a=b", "op"]) + + +class ParseCreateSimple(TestCase): + def assert_produce(self, arg_list, result): + self.assertEqual(parse_args.parse_create_simple(arg_list), result) + + def test_without_args(self): + self.assert_produce([], { + "meta": {}, + "options": {}, + "op": [], + }) + + def test_only_instance_attributes(self): + self.assert_produce(["a=b", "c=d"], { + "meta": {}, + "options": { + "a": "b", + "c": "d", + }, + "op": [], + }) + + def test_only_meta(self): + self.assert_produce(["meta", "a=b", "c=d"], { + "options": {}, + "op": [], + "meta": { + "a": "b", + "c": "d", + }, + }) + + def test_only_operations(self): + self.assert_produce([ + "op", "monitor", "a=b", "c=d", "start", "e=f", + ], { + "meta": {}, + "options": {}, + "op": [ + {"name": "monitor", "a": "b", "c": "d"}, + {"name": "start", "e": "f"}, + ], + }) + + def assert_raises_cmdline(self, args): + self.assertRaises( + CmdLineInputError, + lambda: parse_args.parse_create_simple(args) + ) + + def test_raises_when_operation_name_does_not_follow_op_keyword(self): + self.assert_raises_cmdline(["op", "a=b"]) + self.assert_raises_cmdline(["op", "monitor", "a=b", "op", "c=d"]) + + def test_raises_when_operation_have_no_option(self): + self.assert_raises_cmdline( + ["op", "monitor", "a=b", "start", "stop", "c=d"] + ) + self.assert_raises_cmdline( + ["op", "monitor", "a=b", "stop", "c=d", "op", "start"] + ) + + def test_allow_to_repeat_op(self): + self.assert_produce([ + "op", "monitor", "a=b", "c=d", + "op", "start", "e=f", + ], { + "meta": {}, + "options": {}, + "op": [ + {"name": "monitor", "a": "b", "c": "d"}, + {"name": "start", "e": "f"}, + ], + }) + + +class ParseBundleCreateOptions(TestCase): + def assert_produce(self, arg_list, result): + self.assertEqual( + result, + parse_args.parse_bundle_create_options(arg_list) + ) + + def assert_raises_cmdline(self, arg_list): + self.assertRaises( + CmdLineInputError, + lambda: parse_args.parse_bundle_create_options(arg_list) + ) + + def test_no_args(self): + self.assert_produce( + [], + { + "container_type": "", + "container": {}, + "network": {}, + "port_map": [], + "storage_map": [], + "meta": {}, + } + ) + + def test_container_empty(self): + self.assert_raises_cmdline(["container"]) + + def test_container_type(self): + self.assert_produce( + ["container", "docker"], + { + "container_type": "docker", + "container": {}, + "network": {}, + "port_map": [], + "storage_map": [], + "meta": {}, + } + ) + + def test_container_options(self): + self.assert_produce( + ["container", "a=b", "c=d"], + { + "container_type": "", + "container": {"a": "b", "c": "d"}, + "network": {}, + "port_map": [], + "storage_map": [], + "meta": {}, + } + ) + + def test_container_type_and_options(self): + self.assert_produce( + ["container", "docker", "a=b", "c=d"], + { + "container_type": "docker", + "container": {"a": "b", "c": "d"}, + "network": {}, + "port_map": [], + "storage_map": [], + "meta": {}, + } + ) + + def test_container_type_must_be_first(self): + self.assert_raises_cmdline(["container", "a=b", "docker", "c=d"]) + + 
def test_container_missing_value(self): + self.assert_raises_cmdline(["container", "docker", "a", "c=d"]) + + def test_container_missing_key(self): + self.assert_raises_cmdline(["container", "docker", "=b", "c=d"]) + + def test_network(self): + self.assert_produce( + ["network", "a=b", "c=d"], + { + "container_type": "", + "container": {}, + "network": {"a": "b", "c": "d"}, + "port_map": [], + "storage_map": [], + "meta": {}, + } + ) + + def test_network_empty(self): + self.assert_raises_cmdline(["network"]) + + def test_network_missing_value(self): + self.assert_raises_cmdline(["network", "a", "c=d"]) + + def test_network_missing_key(self): + self.assert_raises_cmdline(["network", "=b", "c=d"]) + + def test_port_map_empty(self): + self.assert_raises_cmdline(["port-map"]) + + def test_one_of_port_map_empty(self): + self.assert_raises_cmdline( + ["port-map", "a=b", "port-map", "network", "c=d"] + ) + + def test_port_map_one(self): + self.assert_produce( + ["port-map", "a=b", "c=d"], + { + "container_type": "", + "container": {}, + "network": {}, + "port_map": [{"a": "b", "c": "d"}], + "storage_map": [], + "meta": {}, + } + ) + + def test_port_map_more(self): + self.assert_produce( + ["port-map", "a=b", "c=d", "port-map", "e=f"], + { + "container_type": "", + "container": {}, + "network": {}, + "port_map": [{"a": "b", "c": "d"}, {"e": "f"}], + "storage_map": [], + "meta": {}, + } + ) + + def test_port_map_missing_value(self): + self.assert_raises_cmdline(["port-map", "a", "c=d"]) + + def test_port_map_missing_key(self): + self.assert_raises_cmdline(["port-map", "=b", "c=d"]) + + def test_storage_map_empty(self): + self.assert_raises_cmdline(["storage-map"]) + + def test_one_of_storage_map_empty(self): + self.assert_raises_cmdline( + ["storage-map", "port-map", "a=b", "storage-map", "c=d"] + ) + + def test_storage_map_one(self): + self.assert_produce( + ["storage-map", "a=b", "c=d"], + { + "container_type": "", + "container": {}, + "network": {}, + "port_map": [], + "storage_map": [{"a": "b", "c": "d"}], + "meta": {}, + } + ) + + def test_storage_map_more(self): + self.assert_produce( + ["storage-map", "a=b", "c=d", "storage-map", "e=f"], + { + "container_type": "", + "container": {}, + "network": {}, + "port_map": [], + "storage_map": [{"a": "b", "c": "d"}, {"e": "f"}], + "meta": {}, + } + ) + + def test_storage_map_missing_value(self): + self.assert_raises_cmdline(["storage-map", "a", "c=d"]) + + def test_storage_map_missing_key(self): + self.assert_raises_cmdline(["storage-map", "=b", "c=d"]) + + def test_meta(self): + self.assert_produce( + ["meta", "a=b", "c=d"], + { + "container_type": "", + "container": {}, + "network": {}, + "port_map": [], + "storage_map": [], + "meta": {"a": "b", "c": "d"}, + } + ) + + def test_meta_empty(self): + self.assert_raises_cmdline(["meta"]) + + def test_meta_missing_value(self): + self.assert_raises_cmdline(["meta", "a", "c=d"]) + + def test_meta_missing_key(self): + self.assert_raises_cmdline(["meta", "=b", "c=d"]) + + def test_all(self): + self.assert_produce( + [ + "container", "docker", "a=b", "c=d", + "network", "e=f", "g=h", + "port-map", "i=j", "k=l", + "port-map", "m=n", "o=p", + "storage-map", "q=r", "s=t", + "storage-map", "u=v", "w=x", + "meta", "y=z", "A=B", + ], + { + "container_type": "docker", + "container": {"a": "b", "c": "d"}, + "network": {"e": "f", "g": "h"}, + "port_map": [{"i": "j", "k": "l"}, {"m": "n", "o": "p"}], + "storage_map": [{"q": "r", "s": "t"}, {"u": "v", "w": "x"}], + "meta": {"y": "z", "A": "B"}, + } + ) + + def 
test_all_mixed(self): + self.assert_produce( + [ + "storage-map", "q=r", "s=t", + "meta", "y=z", + "port-map", "i=j", "k=l", + "network", "e=f", + "container", "docker", "a=b", + "storage-map", "u=v", "w=x", + "port-map", "m=n", "o=p", + "meta", "A=B", + "network", "g=h", + "container", "c=d", + ], + { + "container_type": "docker", + "container": {"a": "b", "c": "d"}, + "network": {"e": "f", "g": "h"}, + "port_map": [{"i": "j", "k": "l"}, {"m": "n", "o": "p"}], + "storage_map": [{"q": "r", "s": "t"}, {"u": "v", "w": "x"}], + "meta": {"y": "z", "A": "B"}, + } + ) + + +class ParseBundleUpdateOptions(TestCase): + def assert_produce(self, arg_list, result): + self.assertEqual( + result, + parse_args.parse_bundle_update_options(arg_list) + ) + + def assert_raises_cmdline(self, arg_list): + self.assertRaises( + CmdLineInputError, + lambda: parse_args.parse_bundle_update_options(arg_list) + ) + + def test_no_args(self): + self.assert_produce( + [], + { + "container": {}, + "network": {}, + "port_map_add": [], + "port_map_remove": [], + "storage_map_add": [], + "storage_map_remove": [], + "meta": {}, + } + ) + + def test_container_options(self): + self.assert_produce( + ["container", "a=b", "c=d"], + { + "container": {"a": "b", "c": "d"}, + "network": {}, + "port_map_add": [], + "port_map_remove": [], + "storage_map_add": [], + "storage_map_remove": [], + "meta": {}, + } + ) + + def test_container_empty(self): + self.assert_raises_cmdline(["container"]) + + def test_container_missing_value(self): + self.assert_raises_cmdline(["container", "a", "c=d"]) + + def test_container_missing_key(self): + self.assert_raises_cmdline(["container", "=b", "c=d"]) + + def test_network(self): + self.assert_produce( + ["network", "a=b", "c=d"], + { + "container": {}, + "network": {"a": "b", "c": "d"}, + "port_map_add": [], + "port_map_remove": [], + "storage_map_add": [], + "storage_map_remove": [], + "meta": {}, + } + ) + + def test_network_empty(self): + self.assert_raises_cmdline(["network"]) + + def test_network_missing_value(self): + self.assert_raises_cmdline(["network", "a", "c=d"]) + + def test_network_missing_key(self): + self.assert_raises_cmdline(["network", "=b", "c=d"]) + + def test_port_map_empty(self): + self.assert_raises_cmdline(["port-map"]) + + def test_one_of_port_map_empty(self): + self.assert_raises_cmdline( + ["port-map", "a=b", "port-map", "network", "c=d"] + ) + + def test_port_map_missing_params(self): + self.assert_raises_cmdline(["port-map"]) + self.assert_raises_cmdline(["port-map add"]) + self.assert_raises_cmdline(["port-map remove"]) + + def test_port_map_wrong_keyword(self): + self.assert_raises_cmdline(["port-map", "wrong", "a=b"]) + + def test_port_map_missing_value(self): + self.assert_raises_cmdline(["port-map", "add", "a", "c=d"]) + + def test_port_map_missing_key(self): + self.assert_raises_cmdline(["port-map", "add", "=b", "c=d"]) + + def test_port_map_more(self): + self.assert_produce( + [ + "port-map", "add", "a=b", + "port-map", "remove", "c", "d", + "port-map", "add", "e=f", "g=h", + "port-map", "remove", "i", + ], + { + "container": {}, + "network": {}, + "port_map_add": [ + {"a": "b", }, + {"e": "f", "g": "h",}, + ], + "port_map_remove": ["c", "d", "i"], + "storage_map_add": [], + "storage_map_remove": [], + "meta": {}, + } + ) + + def test_storage_map_empty(self): + self.assert_raises_cmdline(["storage-map"]) + + def test_one_of_storage_map_empty(self): + self.assert_raises_cmdline( + ["storage-map", "port-map", "a=b", "storage-map", "c=d"] + ) + + def 
+    def test_storage_map_missing_params(self):
+        self.assert_raises_cmdline(["storage-map"])
+        self.assert_raises_cmdline(["storage-map add"])
+        self.assert_raises_cmdline(["storage-map remove"])
+
+    def test_storage_map_wrong_keyword(self):
+        self.assert_raises_cmdline(["storage-map", "wrong", "a=b"])
+
+    def test_storage_map_missing_value(self):
+        self.assert_raises_cmdline(["storage-map", "add", "a", "c=d"])
+
+    def test_storage_map_missing_key(self):
+        self.assert_raises_cmdline(["storage-map", "add", "=b", "c=d"])
+
+    def test_storage_map_more(self):
+        self.assert_produce(
+            [
+                "storage-map", "add", "a=b",
+                "storage-map", "remove", "c", "d",
+                "storage-map", "add", "e=f", "g=h",
+                "storage-map", "remove", "i",
+            ],
+            {
+                "container": {},
+                "network": {},
+                "port_map_add": [],
+                "port_map_remove": [],
+                "storage_map_add": [
+                    {"a": "b", },
+                    {"e": "f", "g": "h",},
+                ],
+                "storage_map_remove": ["c", "d", "i"],
+                "meta": {},
+            }
+        )
+
+    def test_meta(self):
+        self.assert_produce(
+            ["meta", "a=b", "c=d"],
+            {
+                "container": {},
+                "network": {},
+                "port_map_add": [],
+                "port_map_remove": [],
+                "storage_map_add": [],
+                "storage_map_remove": [],
+                "meta": {"a": "b", "c": "d"},
+            }
+        )
+
+    def test_meta_empty(self):
+        self.assert_raises_cmdline(["meta"])
+
+    def test_meta_missing_value(self):
+        self.assert_raises_cmdline(["meta", "a", "c=d"])
+
+    def test_meta_missing_key(self):
+        self.assert_raises_cmdline(["meta", "=b", "c=d"])
+
+
+    def test_all(self):
+        self.assert_produce(
+            [
+                "container", "a=b", "c=d",
+                "network", "e=f", "g=h",
+                "port-map", "add", "i=j", "k=l",
+                "port-map", "add", "m=n",
+                "port-map", "remove", "o", "p",
+                "port-map", "remove", "q",
+                "storage-map", "add", "r=s", "t=u",
+                "storage-map", "add", "v=w",
+                "storage-map", "remove", "x", "y",
+                "storage-map", "remove", "z",
+                "meta", "A=B", "C=D",
+            ],
+            {
+                "container": {"a": "b", "c": "d"},
+                "network": {"e": "f", "g": "h"},
+                "port_map_add": [
+                    {"i": "j", "k": "l"},
+                    {"m": "n"},
+                ],
+                "port_map_remove": ["o", "p", "q"],
+                "storage_map_add": [
+                    {"r": "s", "t": "u"},
+                    {"v": "w"},
+                ],
+                "storage_map_remove": ["x", "y", "z"],
+                "meta": {"A": "B", "C": "D"},
+            }
+        )
+
+    def test_all_mixed(self):
+        self.assert_produce(
+            [
+                "storage-map", "remove", "x", "y",
+                "meta", "A=B",
+                "port-map", "remove", "o", "p",
+                "network", "e=f", "g=h",
+                "storage-map", "add", "r=s", "t=u",
+                "port-map", "add", "i=j", "k=l",
+                "container", "a=b", "c=d",
+                "meta", "C=D",
+                "port-map", "remove", "q",
+                "storage-map", "remove", "z",
+                "storage-map", "add", "v=w",
+                "port-map", "add", "m=n",
+            ],
+            {
+                "container": {"a": "b", "c": "d"},
+                "network": {"e": "f", "g": "h"},
+                "port_map_add": [
+                    {"i": "j", "k": "l"},
+                    {"m": "n"},
+                ],
+                "port_map_remove": ["o", "p", "q"],
+                "storage_map_add": [
+                    {"r": "s", "t": "u"},
+                    {"v": "w"},
+                ],
+                "storage_map_remove": ["x", "y", "z"],
+                "meta": {"A": "B", "C": "D"},
+            }
+        )
+
+
+class BuildOperations(TestCase):
+    def assert_produce(self, arg_list, result):
+        self.assertEqual(result, parse_args.build_operations(arg_list))
+
+    def assert_raises_cmdline(self, arg_list):
+        self.assertRaises(
+            CmdLineInputError,
+            lambda: parse_args.build_operations(arg_list)
+        )
+
+    def test_return_empty_list_on_empty_input(self):
+        self.assert_produce([], [])
+
+    def test_return_all_operations_specified_in_the_same_group(self):
+        self.assert_produce(
+            [
+                ["monitor", "interval=10s", "start", "timeout=20s"]
+            ],
+            [
+                ["name=monitor", "interval=10s"],
+                ["name=start", "timeout=20s"],
+            ]
+        )
+
+    def test_return_all_operations_specified_in_different_groups(self):
+        self.assert_produce(
+            [
+                ["monitor", "interval=10s"],
+                ["start", "timeout=20s"],
+            ],
+            [
+                ["name=monitor", "interval=10s"],
+                ["name=start", "timeout=20s"],
+            ]
+        )
+
+    def test_refuse_empty_operation(self):
+        self.assert_raises_cmdline([[]])
+
+    def test_refuse_operation_without_attribute(self):
+        self.assert_raises_cmdline([["monitor"]])
+
+    def test_refuse_operation_without_name(self):
+        self.assert_raises_cmdline([["interval=10s"]])
diff -Nru pcs-0.9.155+dfsg/pcs/cluster.py pcs-0.9.159/pcs/cluster.py
--- pcs-0.9.155+dfsg/pcs/cluster.py 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/cluster.py 2017-06-30 15:33:01.000000000 +0000
@@ -30,25 +30,29 @@
     resource,
     settings,
     status,
-    stonith,
     usage,
     utils,
 )
 from pcs.utils import parallel_for_nodes
 from pcs.common import report_codes
+from pcs.cli.common.errors import (
+    CmdLineInputError,
+    ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE,
+)
 from pcs.cli.common.reports import process_library_reports, build_report_message
+import pcs.cli.cluster.command as cluster_command
 from pcs.lib import (
-    pacemaker as lib_pacemaker,
     sbd as lib_sbd,
     reports as lib_reports,
 )
 from pcs.lib.booth import sync as booth_sync
-from pcs.lib.nodes_task import check_corosync_offline_on_nodes
+from pcs.lib.commands.cluster import _share_authkey, _destroy_pcmk_remote_env
 from pcs.lib.commands.quorum import _add_device_model_net
 from pcs.lib.corosync import (
     config_parser as corosync_conf_utils,
     qdevice_net,
 )
+from pcs.cli.common.console_report import warn, error
 from pcs.lib.corosync.config_facade import ConfigFacade as corosync_conf_facade
 from pcs.lib.errors import (
     LibraryError,
@@ -57,11 +61,20 @@
 from pcs.lib.external import (
     disable_service,
     is_systemctl,
+    NodeCommandUnsuccessfulException,
     NodeCommunicationException,
     node_communicator_exception_to_report_item,
 )
-from pcs.lib.node import NodeAddresses
-from pcs.lib.tools import environment_file_to_dict
+from pcs.lib.env_tools import get_nodes
+from pcs.lib.node import NodeAddresses, NodeAddressesList
+from pcs.lib.nodes_task import check_corosync_offline_on_nodes, distribute_files
+from pcs.lib import node_communication_format
+import pcs.lib.pacemaker.live as lib_pacemaker
+from pcs.lib.tools import (
+    environment_file_to_dict,
+    generate_binary_key,
+    generate_key,
+)
 
 def cluster_cmd(argv):
     if len(argv) == 0:
@@ -70,7 +83,7 @@
 
     sub_cmd = argv.pop(0)
     if (sub_cmd == "help"):
-        usage.cluster(argv)
+        usage.cluster([" ".join(argv)] if argv else [])
     elif (sub_cmd == "setup"):
         if "--name" in utils.pcs_options:
             cluster_setup([utils.pcs_options["--name"]] + argv)
@@ -94,32 +107,63 @@
         cluster_token_nodes(argv)
     elif (sub_cmd == "start"):
         if "--all" in utils.pcs_options:
+            if argv:
+                utils.err(ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE)
             start_cluster_all()
         else:
             start_cluster(argv)
     elif (sub_cmd == "stop"):
         if "--all" in utils.pcs_options:
+            if argv:
+                utils.err(ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE)
             stop_cluster_all()
         else:
             stop_cluster(argv)
     elif (sub_cmd == "kill"):
         kill_cluster(argv)
     elif (sub_cmd == "standby"):
-        node.node_standby(argv)
+        try:
+            node.node_standby_cmd(
+                utils.get_library_wrapper(),
+                argv,
+                utils.get_modificators(),
+                True
+            )
+        except LibraryError as e:
+            utils.process_library_reports(e.args)
+        except CmdLineInputError as e:
+            utils.exit_on_cmdline_input_errror(e, "node", "standby")
     elif (sub_cmd == "unstandby"):
-        node.node_standby(argv, False)
+        try:
+            node.node_standby_cmd(
+                utils.get_library_wrapper(),
+                argv,
+
utils.get_modificators(), + False + ) + except LibraryError as e: + utils.process_library_reports(e.args) + except CmdLineInputError as e: + utils.exit_on_cmdline_input_errror(e, "node", "unstandby") elif (sub_cmd == "enable"): if "--all" in utils.pcs_options: + if argv: + utils.err(ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE) enable_cluster_all() else: enable_cluster(argv) elif (sub_cmd == "disable"): if "--all" in utils.pcs_options: + if argv: + utils.err(ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE) disable_cluster_all() else: disable_cluster(argv) elif (sub_cmd == "remote-node"): - cluster_remote_node(argv) + try: + cluster_remote_node(argv) + except LibraryError as e: + utils.process_library_reports(e.args) elif (sub_cmd == "cib"): get_cib(argv) elif (sub_cmd == "cib-push"): @@ -129,7 +173,34 @@ elif (sub_cmd == "edit"): cluster_edit(argv) elif (sub_cmd == "node"): - cluster_node(argv) + if not argv: + usage.cluster(["node"]) + sys.exit(1) + + remote_node_command_map = { + "add-remote": cluster_command.node_add_remote, + "add-guest": cluster_command.node_add_guest, + "remove-remote": cluster_command.create_node_remove_remote( + resource.resource_remove + ), + "remove-guest": cluster_command.node_remove_guest, + "clear": cluster_command.node_clear, + } + if argv[0] in remote_node_command_map: + try: + remote_node_command_map[argv[0]]( + utils.get_library_wrapper(), + argv[1:], + utils.get_modificators() + ) + except LibraryError as e: + utils.process_library_reports(e.args) + except CmdLineInputError as e: + utils.exit_on_cmdline_input_errror( + e, "cluster", "node " + argv[0] + ) + else: + cluster_node(argv) elif (sub_cmd == "localnode"): cluster_localnode(argv) elif (sub_cmd == "uidgid"): @@ -231,6 +302,18 @@ def cluster_setup(argv): + modifiers = utils.get_modificators() + allowed_encryption_values = ["0", "1"] + if modifiers["encryption"] not in allowed_encryption_values: + process_library_reports([ + lib_reports.invalid_option_value( + "--encryption", + modifiers["encryption"], + allowed_encryption_values, + severity=ReportItemSeverity.ERROR, + forceable=None + ) + ]) if len(argv) < 2: usage.cluster(["setup"]) sys.exit(1) @@ -313,7 +396,8 @@ node_list, options["transport_options"], options["totem_options"], - options["quorum_options"] + options["quorum_options"], + modifiers["encryption"] == "1" ) process_library_reports(messages) @@ -358,10 +442,14 @@ else: # verify and ensure no cluster is set up on the nodes # checks that nodes are authenticated as well + lib_env = utils.get_lib_env() if "--force" not in utils.pcs_options: all_nodes_available = True for node in primary_addr_list: - available, message = utils.canAddNodeToCluster(node) + available, message = utils.canAddNodeToCluster( + lib_env.node_communicator(), + NodeAddresses(node) + ) if not available: all_nodes_available = False utils.err("{0}: {1}".format(node, message), False) @@ -376,6 +464,32 @@ destroy_cluster(primary_addr_list) print() + try: + file_definitions = {} + file_definitions.update( + node_communication_format.pcmk_authkey_file(generate_key()) + ) + if modifiers["encryption"] == "1": + file_definitions.update( + node_communication_format.corosync_authkey_file( + generate_binary_key(random_bytes_count=128) + ) + ) + + distribute_files( + lib_env.node_communicator(), + lib_env.report_processor, + file_definitions, + NodeAddressesList( + [NodeAddresses(node) for node in primary_addr_list] + ), + skip_offline_nodes=modifiers["skip_offline_nodes"], + allow_incomplete_distribution="--force" in utils.pcs_options + ) + 
except LibraryError as e: #Theoretically, this should not happen + utils.process_library_reports(e.args) + + # send local cluster pcsd configs to the new nodes print("Sending cluster config files to the nodes...") pcsd_data = { @@ -420,7 +534,9 @@ # sync certificates as the last step because it restarts pcsd print() - pcsd.pcsd_sync_certs([], exit_after_error=False) + pcsd.pcsd_sync_certs( + [], exit_after_error=False, async_restart=modifiers["async"] + ) if wait: print() wait_for_nodes_started(primary_addr_list, wait_timeout) @@ -638,7 +754,8 @@ return parsed, messages def cluster_setup_create_corosync_conf( - cluster_name, node_list, transport_options, totem_options, quorum_options + cluster_name, node_list, transport_options, totem_options, quorum_options, + encrypted ): messages = [] @@ -653,8 +770,9 @@ corosync_conf.add_section(logging_section) totem_section.add_attribute("version", "2") - totem_section.add_attribute("secauth", "off") totem_section.add_attribute("cluster_name", cluster_name) + if not encrypted: + totem_section.add_attribute("secauth", "off") transport_options_names = ( "transport", @@ -934,6 +1052,7 @@ def wait_for_local_node_started(stop_at, interval): try: while True: + time.sleep(interval) node_status = lib_pacemaker.get_local_node_status( utils.cmd_runner() ) @@ -941,7 +1060,6 @@ return 0, "Started" if datetime.datetime.now() > stop_at: return 1, "Waiting timeout" - time.sleep(interval) except LibraryError as e: return 1, "Unable to get node status: {0}".format( "\n".join([build_report_message(item) for item in e.args]) @@ -949,6 +1067,7 @@ def wait_for_remote_node_started(node, stop_at, interval): while True: + time.sleep(interval) code, output = utils.getPacemakerNodeStatus(node) # HTTP error, permission denied or unable to auth # there is no point in trying again as it won't get magically fixed @@ -964,7 +1083,6 @@ return 1, "Unable to get node status" if datetime.datetime.now() > stop_at: return 1, "Waiting timeout" - time.sleep(interval) def wait_for_nodes_started(node_list, timeout=None): timeout = 60 * 15 if timeout is None else timeout @@ -1036,7 +1154,11 @@ ) was_error = False - node_errors = parallel_for_nodes(utils.stopPacemaker, nodes, quiet=True) + node_errors = parallel_for_nodes( + utils.repeat_if_timeout(utils.stopPacemaker), + nodes, + quiet=True + ) accessible_nodes = [ node for node in nodes if node not in node_errors.keys() ] @@ -1047,7 +1169,7 @@ ) was_error = True - for node in node_errors.keys(): + for node in node_errors: print("{0}: Not stopping cluster - node is unreachable".format(node)) node_errors = parallel_for_nodes( @@ -1102,7 +1224,11 @@ if len(argv) > 0: # stop pacemaker and resources while cluster is still quorate nodes = argv - node_errors = parallel_for_nodes(utils.stopPacemaker, nodes, quiet=True) + node_errors = parallel_for_nodes( + utils.repeat_if_timeout(utils.stopPacemaker), + nodes, + quiet=True + ) # proceed with destroy regardless of errors # destroy will stop any remaining cluster daemons node_errors = parallel_for_nodes(utils.destroyCluster, nodes, quiet=True) @@ -1228,6 +1354,8 @@ filename = None scope = None timeout = None + diff_against = None + if "--wait" in utils.pcs_options: timeout = utils.validate_wait_get_timeout() for arg in argv: @@ -1235,16 +1363,22 @@ filename = arg else: arg_name, arg_value = arg.split("=", 1) - if arg_name == "scope" and "--config" not in utils.pcs_options: + if arg_name == "scope": + if "--config" in utils.pcs_options: + utils.err("Cannot use both scope and --config") if not 
utils.is_valid_cib_scope(arg_value):
+                    utils.err("invalid CIB scope '%s'" % arg_value)
+                else:
+                    scope = arg_value
+            elif arg_name == "diff-against":
+                diff_against = arg_value
             else:
                 usage.cluster(["cib-push"])
                 sys.exit(1)
     if "--config" in utils.pcs_options:
         scope = "configuration"
+    if diff_against and scope:
+        utils.err("Cannot use both scope and diff-against")
     if not filename:
         usage.cluster(["cib-push"])
         sys.exit(1)
@@ -1259,18 +1393,48 @@
     except (EnvironmentError, xml.parsers.expat.ExpatError) as e:
         utils.err("unable to parse new cib: %s" % e)
 
-    command = ["cibadmin", "--replace", "--xml-file", filename]
-    if scope:
-        command.append("--scope=%s" % scope)
-    output, retval = utils.run(command)
-    if retval != 0:
-        utils.err("unable to push cib\n" + output)
+    if diff_against:
+        try:
+            xml.dom.minidom.parse(diff_against)
+        except (EnvironmentError, xml.parsers.expat.ExpatError) as e:
+            utils.err("unable to parse original cib: %s" % e)
+        runner = utils.cmd_runner()
+        command = [
+            "crm_diff", "--original", diff_against, "--new", filename,
+            "--no-version"
+        ]
+        patch, error, dummy_retval = runner.run(command)
+        # dummy_retval == -1 means one of two things:
+        # a) an error has occurred
+        # b) --original and --new differ
+        # therefore it's of no use to see if an error occurred
+        if error.strip():
+            utils.err("unable to diff the CIBs:\n" + error)
+        if not patch.strip():
+            utils.err(
+                "The new CIB is the same as the original CIB, nothing to push."
+            )
+
+        command = ["cibadmin", "--patch", "--xml-pipe"]
+        output, error, retval = runner.run(command, patch)
+        if retval != 0:
+            utils.err("unable to push cib\n" + error + output)
+
+    else:
+        command = ["cibadmin", "--replace", "--xml-file", filename]
+        if scope:
+            command.append("--scope=%s" % scope)
+        output, retval = utils.run(command)
+        if retval != 0:
+            utils.err("unable to push cib\n" + output)
+
     print("CIB updated")
+
     if "--wait" not in utils.pcs_options:
         return
     cmd = ["crm_resource", "--wait"]
     if timeout:
-        cmd.extend(["--timeout", timeout])
+        cmd.extend(["--timeout", str(timeout)])
     output, retval = utils.run(cmd)
     if retval != 0:
         msg = []
@@ -1397,16 +1561,30 @@
 
 def cluster_node(argv):
-    if len(argv) != 2:
-        usage.cluster()
+    if len(argv) < 1:
+        usage.cluster(["node"])
         sys.exit(1)
 
     if argv[0] == "add":
         add_node = True
     elif argv[0] in ["remove","delete"]:
         add_node = False
+    elif argv[0] == "add-outside":
+        try:
+            node_add_outside_cluster(
+                utils.get_library_wrapper(),
+                argv[1:],
+                utils.get_modificators(),
+            )
+        except CmdLineInputError as e:
+            utils.exit_on_cmdline_input_errror(e, "cluster", "node")
+        return
     else:
-        usage.cluster()
+        usage.cluster(["node"])
+        sys.exit(1)
+
+    if len(argv) != 2:
+        usage.cluster([" ".join(["node", argv[0]])])
         sys.exit(1)
 
     node = argv[1]
@@ -1435,238 +1613,336 @@
     modifiers = utils.get_modificators()
 
     if add_node == True:
-        wait = False
-        wait_timeout = None
-        if "--start" in utils.pcs_options and "--wait" in utils.pcs_options:
-            wait_timeout = utils.validate_wait_get_timeout(False)
-            wait = True
-        need_ring1_address = utils.need_ring1_address(utils.getCorosyncConf())
-        if not node1 and need_ring1_address:
-            utils.err(
-                "cluster is configured for RRP, "
-                "you have to specify ring 1 address for the node"
-            )
-        elif node1 and not need_ring1_address:
-            utils.err(
-                "cluster is not configured for RRP, "
-                "you must not specify ring 1 address for the node"
-            )
-        (canAdd, error) = utils.canAddNodeToCluster(node0)
-        if not canAdd:
-            utils.err("Unable to add '%s' to cluster: %s" % (node0, error))
-
-        report_processor = lib_env.report_processor
-        node_communicator = lib_env.node_communicator()
-        node_addr = NodeAddresses(node0, node1)
-
-        # First set up everything else than corosync. Once the new node is
-        # present in corosync.conf / cluster.conf, it's considered part of a
-        # cluster and the node add command cannot be run again. So we need to
-        # minimize the amout of actions (and therefore possible failures) after
-        # adding the node to corosync.
-        try:
-            # qdevice setup
-            if not utils.is_rhel6():
-                conf_facade = corosync_conf_facade.from_string(
-                    utils.getCorosyncConf()
-                )
-                qdevice_model, qdevice_model_options, _ = conf_facade.get_quorum_device_settings()
-                if qdevice_model == "net":
-                    _add_device_model_net(
-                        lib_env,
-                        qdevice_model_options["host"],
-                        conf_facade.get_cluster_name(),
-                        [node_addr],
-                        skip_offline_nodes=False
-                    )
+        node_add(lib_env, node0, node1, modifiers)
+    else:
+        node_remove(lib_env, node0, modifiers)
 
-            # sbd setup
-            if lib_sbd.is_sbd_enabled(utils.cmd_runner()):
-                if "--watchdog" not in utils.pcs_options:
-                    watchdog = settings.sbd_watchdog_default
-                    print("Warning: using default watchdog '{0}'".format(
-                        watchdog
-                    ))
-                else:
-                    watchdog = utils.pcs_options["--watchdog"][0]
-                _ensure_cluster_is_offline_if_atb_should_be_enabled(
-                    lib_env, 1, modifiers["skip_offline_nodes"]
-                )
+def node_add_outside_cluster(lib, argv, modifiers):
+    if len(argv) != 2:
+        raise CmdLineInputError(
+            "Usage: pcs cluster node add-outside <node[,node-altaddr]>"
+            " <cluster node>"
+        )
 
-            report_processor.process(lib_reports.sbd_check_started())
-            lib_sbd.check_sbd_on_node(
-                report_processor, node_communicator, node_addr, watchdog
-            )
-            sbd_cfg = environment_file_to_dict(
-                lib_sbd.get_local_sbd_config()
-            )
-            report_processor.process(
-                lib_reports.sbd_config_distribution_started(
+    if len(modifiers["watchdog"]) > 1:
+        raise CmdLineInputError("Multiple watchdogs defined")
+
+    node_ring0, node_ring1 = utils.parse_multiring_node(argv[0])
+    cluster_node = argv[1]
+    data = [
+        ("new_nodename", node_ring0),
+    ]
+
+    if node_ring1:
+        data.append(("new_ring1addr", node_ring1))
+    if modifiers["watchdog"]:
+        data.append(("watchdog", modifiers["watchdog"][0]))
+    if modifiers["device"]:
+        # way to send data in array
+        data += [("devices[]", device) for device in modifiers["device"]]
+
+    communicator = utils.get_lib_env().node_communicator()
+    try:
+        communicator.call_host(
+            cluster_node,
+            "remote/add_node_all",
+            communicator.format_data_dict(data),
+        )
+    except NodeCommandUnsuccessfulException as e:
+        print(e.reason)
+    except NodeCommunicationException as e:
+        process_library_reports([node_communicator_exception_to_report_item(e)])
+
+
+def node_add(lib_env, node0, node1, modifiers):
+    wait = False
+    wait_timeout = None
+    if "--start" in utils.pcs_options and "--wait" in utils.pcs_options:
+        wait_timeout = utils.validate_wait_get_timeout(False)
+        wait = True
+    need_ring1_address = utils.need_ring1_address(utils.getCorosyncConf())
+    if not node1 and need_ring1_address:
+        utils.err(
+            "cluster is configured for RRP, "
+            "you have to specify ring 1 address for the node"
+        )
+    elif node1 and not need_ring1_address:
+        utils.err(
+            "cluster is not configured for RRP, "
+            "you must not specify ring 1 address for the node"
+        )
+    node_addr = NodeAddresses(node0, node1)
+    node_communicator = lib_env.node_communicator()
+    (canAdd, error) = utils.canAddNodeToCluster(node_communicator, node_addr)
+
+    if not canAdd:
+        utils.err("Unable to add '%s' to cluster: %s" % (node0, error))
+
+    report_processor = lib_env.report_processor
+
+    # First set up everything else 
than corosync. Once the new node is + # present in corosync.conf / cluster.conf, it's considered part of a + # cluster and the node add command cannot be run again. So we need to + # minimize the amout of actions (and therefore possible failures) after + # adding the node to corosync. + try: + # qdevice setup + if not utils.is_rhel6(): + conf_facade = corosync_conf_facade.from_string( + utils.getCorosyncConf() + ) + qdevice_model, qdevice_model_options, _ = conf_facade.get_quorum_device_settings() + if qdevice_model == "net": + _add_device_model_net( + lib_env, + qdevice_model_options["host"], + conf_facade.get_cluster_name(), + [node_addr], + skip_offline_nodes=False ) - lib_sbd.set_sbd_config_on_node( - report_processor, - node_communicator, - node_addr, - sbd_cfg, + + # sbd setup + if lib_sbd.is_sbd_enabled(utils.cmd_runner()): + if "--watchdog" not in utils.pcs_options: + watchdog = settings.sbd_watchdog_default + print("Warning: using default watchdog '{0}'".format( watchdog - ) - report_processor.process(lib_reports.sbd_enabling_started()) - lib_sbd.enable_sbd_service_on_node( - report_processor, node_communicator, node_addr - ) + )) else: - report_processor.process(lib_reports.sbd_disabling_started()) - lib_sbd.disable_sbd_service_on_node( - report_processor, node_communicator, node_addr + watchdog = utils.pcs_options["--watchdog"][0] + + _ensure_cluster_is_offline_if_atb_should_be_enabled( + lib_env, 1, modifiers["skip_offline_nodes"] + ) + + report_processor.process(lib_reports.sbd_check_started()) + + device_list = utils.pcs_options.get("--device", []) + device_num = len(device_list) + sbd_with_device = lib_sbd.is_device_set_local() + sbd_cfg = environment_file_to_dict(lib_sbd.get_local_sbd_config()) + + if sbd_with_device and device_num not in range(1, 4): + utils.err( + "SBD is configured to use shared storage, therefore it " +\ + "is required to specify at least one device and at most " +\ + "{0} devices (option --device),".format( + settings.sbd_max_device_num + ) + ) + elif not sbd_with_device and device_num > 0: + utils.err( + "SBD is not configured to use shared device, " +\ + "therefore --device should not be specified" ) - # booth setup - booth_sync.send_all_config_to_node( - node_communicator, + lib_sbd.check_sbd_on_node( + report_processor, node_communicator, node_addr, watchdog, + device_list + ) + + report_processor.process( + lib_reports.sbd_config_distribution_started() + ) + lib_sbd.set_sbd_config_on_node( report_processor, + node_communicator, node_addr, - rewrite_existing=modifiers["force"], - skip_wrong_config=modifiers["force"] + sbd_cfg, + watchdog, + device_list, ) - except LibraryError as e: - process_library_reports(e.args) - except NodeCommunicationException as e: - process_library_reports( - [node_communicator_exception_to_report_item(e)] + report_processor.process(lib_reports.sbd_enabling_started()) + lib_sbd.enable_sbd_service_on_node( + report_processor, node_communicator, node_addr + ) + else: + report_processor.process(lib_reports.sbd_disabling_started()) + lib_sbd.disable_sbd_service_on_node( + report_processor, node_communicator, node_addr ) - # Now add the new node to corosync.conf / cluster.conf - corosync_conf = None - for my_node in utils.getNodesFromCorosyncConf(): - retval, output = utils.addLocalNode(my_node, node0, node1) - if retval != 0: - utils.err( - "unable to add %s on %s - %s" % (node0, my_node, output.strip()), - False - ) - else: - print("%s: Corosync updated" % my_node) - corosync_conf = output - # corosync.conf must be reloaded 
before the new node is started
+        # booth setup
+        booth_sync.send_all_config_to_node(
+            node_communicator,
+            report_processor,
+            node_addr,
+            rewrite_existing=modifiers["force"],
+            skip_wrong_config=modifiers["force"]
+        )
+
+        if os.path.isfile(settings.corosync_authkey_file):
+            distribute_files(
+                lib_env.node_communicator(),
+                lib_env.report_processor,
+                node_communication_format.corosync_authkey_file(
+                    open(settings.corosync_authkey_file).read()
+                ),
+                NodeAddressesList([node_addr]),
+            )
+
+        # do not send pcmk authkey to guest and remote nodes, they either have
+        # it or are not working anyway
+        # if the cluster is stopped, we cannot get the cib anyway
+        _share_authkey(
+            lib_env,
+            get_nodes(lib_env.get_corosync_conf()),
+            node_addr,
+            skip_offline_nodes=modifiers["skip_offline_nodes"],
+            allow_incomplete_distribution=modifiers["skip_offline_nodes"]
+        )
+
+    except LibraryError as e:
+        process_library_reports(e.args)
+    except NodeCommunicationException as e:
+        process_library_reports(
+            [node_communicator_exception_to_report_item(e)]
+        )
+
+    # Now add the new node to corosync.conf / cluster.conf
+    corosync_conf = None
+    for my_node in utils.getNodesFromCorosyncConf():
+        retval, output = utils.addLocalNode(my_node, node0, node1)
+        if retval != 0:
+            utils.err(
+                "unable to add %s on %s - %s" % (node0, my_node, output.strip()),
+                False
+            )
+        else:
+            print("%s: Corosync updated" % my_node)
+            corosync_conf = output
+    if not utils.is_cman_cluster():
+        # When corosync 2 is in use, the procedure for adding a node is:
+        # 1. add the new node to corosync.conf
+        # 2. reload corosync.conf before the new node is started
+        # 3. start the new node
+        # If done otherwise, membership gets broken and qdevice hangs. The
+        # cluster will recover after a minute or so, but it is still the
+        # wrong way to do it.
+        # When corosync 1 is in use, the procedure for adding a node is:
+        # 1. add the new node to cluster.conf
+        # 2. start the new node
+        # Starting the node will automatically reload cluster.conf on all
+        # nodes. 
If the config is reloaded before the new node is started, + # the new node gets fenced by the cluster, output, retval = utils.reloadCorosync() - if corosync_conf != None: - # send local cluster pcsd configs to the new node - # may be used for sending corosync config as well in future - pcsd_data = { - 'nodes': [node0], - 'force': True, - } - output, retval = utils.run_pcsdcli('send_local_configs', pcsd_data) + if corosync_conf != None: + # send local cluster pcsd configs to the new node + # may be used for sending corosync config as well in future + pcsd_data = { + 'nodes': [node0], + 'force': True, + } + output, retval = utils.run_pcsdcli('send_local_configs', pcsd_data) + if retval != 0: + utils.err("Unable to set pcsd configs") + if output['status'] == 'notauthorized': + utils.err( + "Unable to authenticate to " + node0 + + ", try running 'pcs cluster auth'" + ) + if output['status'] == 'ok' and output['data']: + try: + node_response = output['data'][node0] + if node_response['status'] not in ['ok', 'not_supported']: + utils.err("Unable to set pcsd configs") + except: + utils.err('Unable to communicate with pcsd') + + print("Setting up corosync...") + utils.setCorosyncConfig(node0, corosync_conf) + if "--enable" in utils.pcs_options: + retval, err = utils.enableCluster(node0) if retval != 0: - utils.err("Unable to set pcsd configs") - if output['status'] == 'notauthorized': - utils.err( - "Unable to authenticate to " + node0 - + ", try running 'pcs cluster auth'" - ) - if output['status'] == 'ok' and output['data']: - try: - node_response = output['data'][node0] - if node_response['status'] not in ['ok', 'not_supported']: - utils.err("Unable to set pcsd configs") - except: - utils.err('Unable to communicate with pcsd') - - print("Setting up corosync...") - utils.setCorosyncConfig(node0, corosync_conf) - if "--enable" in utils.pcs_options: - retval, err = utils.enableCluster(node0) - if retval != 0: - print("Warning: enable cluster - {0}".format(err)) - if "--start" in utils.pcs_options or utils.is_rhel6(): - # always start new node on cman cluster - # otherwise it will get fenced - retval, err = utils.startCluster(node0) - if retval != 0: - print("Warning: start cluster - {0}".format(err)) - - pcsd.pcsd_sync_certs([node0], exit_after_error=False) - else: - utils.err("Unable to update any nodes") - if utils.is_cman_with_udpu_transport(): - print("Warning: Using udpu transport on a CMAN cluster, " - + "cluster restart is required to apply node addition") - if wait: - print() - wait_for_nodes_started([node0], wait_timeout) + print("Warning: enable cluster - {0}".format(err)) + if "--start" in utils.pcs_options or utils.is_rhel6(): + # Always start the new node on cman cluster in order to reload + # cluster.conf (see above). 
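The comments in this hunk pin down a strict ordering for adding a node: update the membership config on the existing nodes, reload it, and only then start the new node; on cman/corosync 1 the new node must always be started because the start itself reloads cluster.conf everywhere. A condensed sketch of that ordering follows; the callables are hypothetical stand-ins for the pcs helpers (utils.addLocalNode, utils.reloadCorosync, utils.startCluster), not the pcs API itself.

```python
# Illustrative sketch only; the callables are hypothetical stand-ins for
# pcs helpers, and the ordering mirrors the comments in the hunk above.
def add_node_ordered(existing_nodes, new_node, update_conf, reload_conf,
                     start_node, corosync2=True):
    # 1. add the new node to corosync.conf / cluster.conf on existing nodes
    for node in existing_nodes:
        update_conf(node, new_node)
    if corosync2:
        # 2. reload the config BEFORE the new node starts; starting first
        #    breaks membership and can hang qdevice for a while
        reload_conf()
        # 3. start the new node (pcs only does this when --start is given)
        start_node(new_node)
    else:
        # on corosync 1 (cman) starting the node reloads cluster.conf on all
        # nodes, so the new node must always be started or it gets fenced
        start_node(new_node)

add_node_ordered(
    ["node1", "node2"], "node3",
    update_conf=lambda node, new: print("update conf on", node),
    reload_conf=lambda: print("reload corosync.conf"),
    start_node=lambda node: print("start", node),
)
```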
+ retval, err = utils.startCluster(node0) + if retval != 0: + print("Warning: start cluster - {0}".format(err)) + + pcsd.pcsd_sync_certs([node0], exit_after_error=False) else: - if node0 not in utils.getNodesFromCorosyncConf(): + utils.err("Unable to update any nodes") + if utils.is_cman_with_udpu_transport(): + print("Warning: Using udpu transport on a CMAN cluster, " + + "cluster restart is required to apply node addition") + if wait: + print() + wait_for_nodes_started([node0], wait_timeout) + +def node_remove(lib_env, node0, modifiers): + if node0 not in utils.getNodesFromCorosyncConf(): + utils.err( + "node '%s' does not appear to exist in configuration" % node0 + ) + if "--force" not in utils.pcs_options: + retval, data = utils.get_remote_quorumtool_output(node0) + if retval != 0: utils.err( - "node '%s' does not appear to exist in configuration" % node0 + "Unable to determine whether removing the node will cause " + + "a loss of the quorum, use --force to override\n" + + data ) - if "--force" not in utils.pcs_options: - retval, data = utils.get_remote_quorumtool_output(node0) - if retval != 0: - utils.err( - "Unable to determine whether removing the node will cause " - + "a loss of the quorum, use --force to override\n" - + data - ) - # we are sure whether we are on cman cluster or not because only - # nodes from a local cluster can be stopped (see nodes validation - # above) - if utils.is_rhel6(): - quorum_info = utils.parse_cman_quorum_info(data) - else: - quorum_info = utils.parse_quorumtool_output(data) - if quorum_info: - if utils.is_node_stop_cause_quorum_loss( - quorum_info, local=False, node_list=[node0] - ): - utils.err( - "Removing the node will cause a loss of the quorum" - + ", use --force to override" - ) - elif not utils.is_node_offline_by_quorumtool_output(data): + # we are sure whether we are on cman cluster or not because only + # nodes from a local cluster can be stopped (see nodes validation + # above) + if utils.is_rhel6(): + quorum_info = utils.parse_cman_quorum_info(data) + else: + quorum_info = utils.parse_quorumtool_output(data) + if quorum_info: + if utils.is_node_stop_cause_quorum_loss( + quorum_info, local=False, node_list=[node0] + ): utils.err( - "Unable to determine whether removing the node will cause " - + "a loss of the quorum, use --force to override\n" - + data + "Removing the node will cause a loss of the quorum" + + ", use --force to override" ) - # else the node seems to be stopped already, we're ok to proceed - - try: - _ensure_cluster_is_offline_if_atb_should_be_enabled( - lib_env, -1, modifiers["skip_offline_nodes"] + elif not utils.is_node_offline_by_quorumtool_output(data): + utils.err( + "Unable to determine whether removing the node will cause " + + "a loss of the quorum, use --force to override\n" + + data ) - except LibraryError as e: - utils.process_library_reports(e.args) + # else the node seems to be stopped already, we're ok to proceed - nodesRemoved = False - c_nodes = utils.getNodesFromCorosyncConf() - destroy_cluster([node0], keep_going=("--force" in utils.pcs_options)) - for my_node in c_nodes: - if my_node == node0: - continue - retval, output = utils.removeLocalNode(my_node, node0) - if retval != 0: + try: + _ensure_cluster_is_offline_if_atb_should_be_enabled( + lib_env, -1, modifiers["skip_offline_nodes"] + ) + except LibraryError as e: + utils.process_library_reports(e.args) + + nodesRemoved = False + c_nodes = utils.getNodesFromCorosyncConf() + destroy_cluster([node0], keep_going=("--force" in utils.pcs_options)) + for 
my_node in c_nodes: + if my_node == node0: + continue + retval, output = utils.removeLocalNode(my_node, node0) + if retval != 0: + utils.err( + "unable to remove %s on %s - %s" % (node0,my_node,output.strip()), + False + ) + else: + if output[0] == 0: + print("%s: Corosync updated" % my_node) + nodesRemoved = True + else: utils.err( - "unable to remove %s on %s - %s" % (node0,my_node,output.strip()), + "%s: Error executing command occured: %s" % (my_node, "".join(output[1])), False ) - else: - if output[0] == 0: - print("%s: Corosync updated" % my_node) - nodesRemoved = True - else: - utils.err( - "%s: Error executing command occured: %s" % (my_node, "".join(output[1])), - False - ) - if nodesRemoved == False: - utils.err("Unable to update any nodes") + if nodesRemoved == False: + utils.err("Unable to update any nodes") - output, retval = utils.reloadCorosync() - output, retval = utils.run(["crm_node", "--force", "-R", node0]) - if utils.is_cman_with_udpu_transport(): - print("Warning: Using udpu transport on a CMAN cluster, " - + "cluster restart is required to apply node removal") + output, retval = utils.reloadCorosync() + output, retval = utils.run(["crm_node", "--force", "-R", node0]) + if utils.is_cman_with_udpu_transport(): + print("Warning: Using udpu transport on a CMAN cluster, " + + "cluster restart is required to apply node removal") def cluster_localnode(argv): if len(argv) != 2: @@ -1864,6 +2140,30 @@ # Code taken from cluster-clean script in pacemaker def cluster_destroy(argv): if "--all" in utils.pcs_options: + # destroy remote and guest nodes + cib = None + lib_env = utils.get_lib_env() + try: + cib = lib_env.get_cib() + except LibraryError as e: + warn( + "Unable to load CIB to get guest and remote nodes from it, " + "those nodes will not be deconfigured." 
+            )
+        if cib is not None:
+            try:
+                all_remote_nodes = get_nodes(tree=cib)
+                if len(all_remote_nodes) > 0:
+                    _destroy_pcmk_remote_env(
+                        lib_env,
+                        all_remote_nodes,
+                        skip_offline_nodes=True,
+                        allow_fails=True
+                    )
+            except LibraryError as e:
+                utils.process_library_reports(e.args)
+
+        # destroy full-stack nodes
         destroy_cluster(utils.getNodesFromCorosyncConf())
     else:
         print("Shutting down pacemaker/corosync services...")
@@ -1890,10 +2190,12 @@
             os.system("rm -f /etc/cluster/cluster.conf")
         else:
             os.system("rm -f /etc/corosync/corosync.conf")
+            os.system("rm -f {0}".format(settings.corosync_authkey_file))
         state_files = ["cib.xml*", "cib-*", "core.*", "hostcache",
                 "cts.*", "pe*.bz2","cib.*"]
         for name in state_files:
-            os.system("find /var/lib -name '"+name+"' -exec rm -f \{\} \;")
+            os.system("find /var/lib/pacemaker -name '"+name+"' -exec rm -f \{\} \;")
+        os.system("rm -f {0}".format(settings.pacemaker_authkey_file))
         try:
             qdevice_net.client_destroy()
         except:
@@ -1917,12 +2219,16 @@
     else:
         options.append("--xml-file")
         options.append(filename)
-    output, retval = utils.run([settings.crm_verify] + options)
-    if output != "": print(output)
-    stonith.stonith_level_verify()
+
+    lib = utils.get_library_wrapper()
+    try:
+        lib.fencing_topology.verify()
+    except LibraryError as e:
+        utils.process_library_reports(e.args)
+
     return retval
 
 def cluster_report(argv):
@@ -1972,25 +2278,63 @@
     print(newoutput)
 
 def cluster_remote_node(argv):
+    usage_add = """\
+    remote-node add <hostname> <resource id> [options]
+        Enables the specified resource as a remote-node resource on the
+        specified hostname (hostname should be the same as 'uname -n')."""
+    usage_remove = """\
+    remote-node remove <hostname>
+        Disables any resources configured to be remote-node resource on the
+        specified hostname (hostname should be the same as 'uname -n')."""
+
     if len(argv) < 1:
-        usage.cluster(["remote-node"])
+        print("\nUsage: pcs cluster remote-node...")
+        print(usage_add)
+        print()
+        print(usage_remove)
+        print()
         sys.exit(1)
     command = argv.pop(0)
     if command == "add":
         if len(argv) < 2:
-            usage.cluster(["remote-node"])
+            print("\nUsage: pcs cluster remote-node add...")
+            print(usage_add)
+            print()
            sys.exit(1)
+        if "--force" in utils.pcs_options:
+            warn("this command is deprecated, use 'pcs cluster node add-guest'")
+        else:
+            raise error(
+                "this command is deprecated, use 'pcs cluster node add-guest'"
+                ", use --force to override"
+            )
         hostname = argv.pop(0)
         rsc = argv.pop(0)
         if not utils.dom_get_resource(utils.get_cib_dom(), rsc):
             utils.err("unable to find resource '%s'" % rsc)
-        resource.resource_update(rsc, ["meta", "remote-node="+hostname] + argv)
+        resource.resource_update(
+            rsc,
+            ["meta", "remote-node="+hostname] + argv,
+            deal_with_guest_change=False
+        )
     elif command in ["remove","delete"]:
         if len(argv) < 1:
-            usage.cluster(["remote-node"])
+            print("\nUsage: pcs cluster remote-node remove...")
+            print(usage_remove)
+            print()
             sys.exit(1)
+        if "--force" in utils.pcs_options:
+            warn(
+                "this command is deprecated, use"
+                " 'pcs cluster node remove-guest'"
+            )
+        else:
+            raise error(
+                "this command is deprecated, use 'pcs cluster node"
+                " remove-guest', use --force to override"
+            )
         hostname = argv.pop(0)
         dom = utils.get_cib_dom()
         nvpairs = dom.getElementsByTagName("nvpair")
@@ -2015,6 +2359,9 @@
         if retval != 0:
             utils.err("unable to remove: {0}".format(output))
     else:
-        usage.cluster(["remote-node"])
+        print("\nUsage: pcs cluster remote-node...")
+        print(usage_add)
+        print()
+        print(usage_remove)
+        print()
         sys.exit(1)
-
diff -Nru 
pcs-0.9.155+dfsg/pcs/common/env_file_role_codes.py pcs-0.9.159/pcs/common/env_file_role_codes.py --- pcs-0.9.155+dfsg/pcs/common/env_file_role_codes.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/common/env_file_role_codes.py 2017-06-30 15:33:01.000000000 +0000 @@ -7,3 +7,4 @@ BOOTH_CONFIG = "BOOTH_CONFIG" BOOTH_KEY = "BOOTH_KEY" +PACEMAKER_AUTHKEY = "PACEMAKER_AUTHKEY" diff -Nru pcs-0.9.155+dfsg/pcs/common/fencing_topology.py pcs-0.9.159/pcs/common/fencing_topology.py --- pcs-0.9.155+dfsg/pcs/common/fencing_topology.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/common/fencing_topology.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,10 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +TARGET_TYPE_NODE = "node" +TARGET_TYPE_REGEXP = "regexp" +TARGET_TYPE_ATTRIBUTE = "attribute" diff -Nru pcs-0.9.155+dfsg/pcs/common/pcs_pycurl.py pcs-0.9.159/pcs/common/pcs_pycurl.py --- pcs-0.9.155+dfsg/pcs/common/pcs_pycurl.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/common/pcs_pycurl.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,34 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +import sys +from pycurl import * + +# This package defines constants which are not present in some older versions +# of pycurl but pcs needs to use them + +required_constants = { + "PROTOCOLS": 181, + "PROTO_HTTPS": 2, + "E_OPERATION_TIMEDOUT": 28, + # these are types of debug messages + # see https://curl.haxx.se/libcurl/c/CURLOPT_DEBUGFUNCTION.html + "DEBUG_TEXT": 0, + "DEBUG_HEADER_IN": 1, + "DEBUG_HEADER_OUT": 2, + "DEBUG_DATA_IN": 3, + "DEBUG_DATA_OUT": 4, + "DEBUG_SSL_DATA_IN": 5, + "DEBUG_SSL_DATA_OUT": 6, + "DEBUG_END": 7, +} + +__current_module = sys.modules[__name__] + +for constant, value in required_constants.items(): + if not hasattr(__current_module, constant): + setattr(__current_module, constant, value) diff -Nru pcs-0.9.155+dfsg/pcs/common/report_codes.py pcs-0.9.159/pcs/common/report_codes.py --- pcs-0.9.155+dfsg/pcs/common/report_codes.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/common/report_codes.py 2017-06-30 15:33:01.000000000 +0000 @@ -10,15 +10,22 @@ FORCE_ALERT_RECIPIENT_VALUE_NOT_UNIQUE = "FORCE_ALERT_RECIPIENT_VALUE_NOT_UNIQUE" FORCE_BOOTH_DESTROY = "FORCE_BOOTH_DESTROY" FORCE_BOOTH_REMOVE_FROM_CIB = "FORCE_BOOTH_REMOVE_FROM_CIB" +FORCE_REMOVE_MULTIPLE_NODES = "FORCE_REMOVE_MULTIPLE_NODES" FORCE_CONSTRAINT_DUPLICATE = "CONSTRAINT_DUPLICATE" FORCE_CONSTRAINT_MULTIINSTANCE_RESOURCE = "CONSTRAINT_MULTIINSTANCE_RESOURCE" FORCE_FILE_OVERWRITE = "FORCE_FILE_OVERWRITE" FORCE_LOAD_THRESHOLD = "LOAD_THRESHOLD" FORCE_METADATA_ISSUE = "METADATA_ISSUE" +FORCE_NODE_DOES_NOT_EXIST = "FORCE_NODE_DOES_NOT_EXIST" FORCE_OPTIONS = "OPTIONS" FORCE_QDEVICE_MODEL = "QDEVICE_MODEL" FORCE_QDEVICE_USED = "QDEVICE_USED" +FORCE_STONITH_RESOURCE_DOES_NOT_EXIST = "FORCE_STONITH_RESOURCE_DOES_NOT_EXIST" +FORCE_NOT_SUITABLE_COMMAND = "FORCE_NOT_SUITABLE_COMMAND" +FORCE_CLEAR_CLUSTER_NODE = "FORCE_CLEAR_CLUSTER_NODE" SKIP_OFFLINE_NODES = "SKIP_OFFLINE_NODES" +SKIP_FILE_DISTRIBUTION_ERRORS = "SKIP_FILE_DISTRIBUTION_ERRORS" +SKIP_ACTION_ON_NODES_ERRORS = "SKIP_ACTION_ON_NODES_ERRORS" SKIP_UNREADABLE_CONFIG = "SKIP_UNREADABLE_CONFIG" AGENT_NAME_GUESS_FOUND_MORE_THAN_ONE = "AGENT_NAME_GUESS_FOUND_MORE_THAN_ONE" @@ -56,14 +63,16 @@ CIB_ACL_ROLE_IS_ALREADY_ASSIGNED_TO_TARGET = "CIB_ACL_ROLE_IS_ALREADY_ASSIGNED_TO_TARGET" CIB_ACL_ROLE_IS_NOT_ASSIGNED_TO_TARGET = 
"CIB_ACL_ROLE_IS_NOT_ASSIGNED_TO_TARGET" CIB_ACL_TARGET_ALREADY_EXISTS = "CIB_ACL_TARGET_ALREADY_EXISTS" -CIB_ALERT_NOT_FOUND = "CIB_ALERT_NOT_FOUND" CIB_ALERT_RECIPIENT_ALREADY_EXISTS = "CIB_ALERT_RECIPIENT_ALREADY_EXISTS" CIB_ALERT_RECIPIENT_VALUE_INVALID = "CIB_ALERT_RECIPIENT_VALUE_INVALID" CIB_CANNOT_FIND_MANDATORY_SECTION = "CIB_CANNOT_FIND_MANDATORY_SECTION" +CIB_FENCING_LEVEL_ALREADY_EXISTS = "CIB_FENCING_LEVEL_ALREADY_EXISTS" +CIB_FENCING_LEVEL_DOES_NOT_EXIST = "CIB_FENCING_LEVEL_DOES_NOT_EXIST" CIB_LOAD_ERROR_BAD_FORMAT = "CIB_LOAD_ERROR_BAD_FORMAT" CIB_LOAD_ERROR = "CIB_LOAD_ERROR" CIB_LOAD_ERROR_SCOPE_MISSING = "CIB_LOAD_ERROR_SCOPE_MISSING" CIB_PUSH_ERROR = "CIB_PUSH_ERROR" +CIB_SAVE_TMP_ERROR = "CIB_SAVE_TMP_ERROR" CIB_UPGRADE_FAILED = "CIB_UPGRADE_FAILED" CIB_UPGRADE_FAILED_TO_MINIMAL_REQUIRED_VERSION = "CIB_UPGRADE_FAILED_TO_MINIMAL_REQUIRED_VERSION" CIB_UPGRADE_SUCCESSFUL = "CIB_UPGRADE_SUCCESSFUL" @@ -76,6 +85,7 @@ COMMON_ERROR = 'COMMON_ERROR' COMMON_INFO = 'COMMON_INFO' LIVE_ENVIRONMENT_REQUIRED = "LIVE_ENVIRONMENT_REQUIRED" +LIVE_ENVIRONMENT_REQUIRED_FOR_LOCAL_NODE = "LIVE_ENVIRONMENT_REQUIRED_FOR_LOCAL_NODE" COROSYNC_CONFIG_ACCEPTED_BY_NODE = "COROSYNC_CONFIG_ACCEPTED_BY_NODE" COROSYNC_CONFIG_DISTRIBUTION_STARTED = "COROSYNC_CONFIG_DISTRIBUTION_STARTED" COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR = "COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR" @@ -90,40 +100,64 @@ COROSYNC_QUORUM_SET_EXPECTED_VOTES_ERROR = "COROSYNC_QUORUM_SET_EXPECTED_VOTES_ERROR" COROSYNC_RUNNING_ON_NODE = "COROSYNC_RUNNING_ON_NODE" CRM_MON_ERROR = "CRM_MON_ERROR" +DEPRECATED_OPTION = "DEPRECATED_OPTION" DUPLICATE_CONSTRAINTS_EXIST = "DUPLICATE_CONSTRAINTS_EXIST" EMPTY_RESOURCE_SET_LIST = "EMPTY_RESOURCE_SET_LIST" EMPTY_ID = "EMPTY_ID" FILE_ALREADY_EXISTS = "FILE_ALREADY_EXISTS" FILE_DOES_NOT_EXIST = "FILE_DOES_NOT_EXIST" FILE_IO_ERROR = "FILE_IO_ERROR" +FILES_DISTRIBUTION_STARTED = "FILES_DISTRIBUTION_STARTED" +FILE_DISTRIBUTION_ERROR = "FILE_DISTRIBUTION_ERROR" +FILE_DISTRIBUTION_SUCCESS = "FILE_DISTRIBUTION_SUCCESS" +FILES_REMOVE_FROM_NODE_STARTED = "FILES_REMOVE_FROM_NODE_STARTED" +FILE_REMOVE_FROM_NODE_ERROR = "FILE_REMOVE_FROM_NODE_ERROR" +FILE_REMOVE_FROM_NODE_SUCCESS = "FILE_REMOVE_FROM_NODE_SUCCESS" ID_ALREADY_EXISTS = 'ID_ALREADY_EXISTS' +ID_BELONGS_TO_UNEXPECTED_TYPE = "ID_BELONGS_TO_UNEXPECTED_TYPE" ID_NOT_FOUND = 'ID_NOT_FOUND' IGNORED_CMAN_UNSUPPORTED_OPTION = 'IGNORED_CMAN_UNSUPPORTED_OPTION' INVALID_ID = "INVALID_ID" INVALID_OPTION = "INVALID_OPTION" +INVALID_OPTION_TYPE = "INVALID_OPTION_TYPE" INVALID_OPTION_VALUE = "INVALID_OPTION_VALUE" INVALID_RESOURCE_NAME = 'INVALID_RESOURCE_NAME' INVALID_RESOURCE_AGENT_NAME = 'INVALID_RESOURCE_AGENT_NAME' INVALID_RESPONSE_FORMAT = "INVALID_RESPONSE_FORMAT" INVALID_SCORE = "INVALID_SCORE" +INVALID_STONITH_AGENT_NAME = "INVALID_STONITH_AGENT_NAME" INVALID_TIMEOUT_VALUE = "INVALID_TIMEOUT_VALUE" MULTIPLE_SCORE_OPTIONS = "MULTIPLE_SCORE_OPTIONS" +MULTIPLE_RESULTS_FOUND = "MULTIPLE_RESULTS_FOUND" +MUTUALLY_EXCLUSIVE_OPTIONS = "MUTUALLY_EXCLUSIVE_OPTIONS" +CANNOT_ADD_NODE_IS_IN_CLUSTER = "CANNOT_ADD_NODE_IS_IN_CLUSTER" +CANNOT_ADD_NODE_IS_RUNNING_SERVICE = "CANNOT_ADD_NODE_IS_RUNNING_SERVICE" NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL = "NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL" +NODE_COMMUNICATION_DEBUG_INFO = "NODE_COMMUNICATION_DEBUG_INFO" NODE_COMMUNICATION_ERROR = "NODE_COMMUNICATION_ERROR" NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED = "NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED" NODE_COMMUNICATION_ERROR_PERMISSION_DENIED = 
"NODE_COMMUNICATION_ERROR_PERMISSION_DENIED" NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT = "NODE_COMMUNICATION_ERROR_UNABLE_TO_CONNECT" NODE_COMMUNICATION_ERROR_UNSUPPORTED_COMMAND = "NODE_COMMUNICATION_ERROR_UNSUPPORTED_COMMAND" +NODE_COMMUNICATION_ERROR_TIMED_OUT = "NODE_COMMUNICATION_ERROR_TIMED_OUT" NODE_COMMUNICATION_FINISHED = "NODE_COMMUNICATION_FINISHED" NODE_COMMUNICATION_NOT_CONNECTED = "NODE_COMMUNICATION_NOT_CONNECTED" +NODE_COMMUNICATION_PROXY_IS_SET = "NODE_COMMUNICATION_PROXY_IS_SET" NODE_COMMUNICATION_STARTED = "NODE_COMMUNICATION_STARTED" NODE_NOT_FOUND = "NODE_NOT_FOUND" +NODE_REMOVE_IN_PACEMAKER_FAILED = "NODE_REMOVE_IN_PACEMAKER_FAILED" NON_UDP_TRANSPORT_ADDR_MISMATCH = 'NON_UDP_TRANSPORT_ADDR_MISMATCH' +NOLIVE_SKIP_FILES_DISTRIBUTION="NOLIVE_SKIP_FILES_DISTRIBUTION" +NOLIVE_SKIP_FILES_REMOVE="NOLIVE_SKIP_FILES_REMOVE" +NOLIVE_SKIP_SERVICE_COMMAND_ON_NODES="NOLIVE_SKIP_SERVICE_COMMAND_ON_NODES" +NODE_TO_CLEAR_IS_STILL_IN_CLUSTER = "NODE_TO_CLEAR_IS_STILL_IN_CLUSTER" OMITTING_NODE = "OMITTING_NODE" +OBJECT_WITH_ID_IN_UNEXPECTED_CONTEXT = "OBJECT_WITH_ID_IN_UNEXPECTED_CONTEXT" PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND = "PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND" PARSE_ERROR_COROSYNC_CONF_MISSING_CLOSING_BRACE = "PARSE_ERROR_COROSYNC_CONF_MISSING_CLOSING_BRACE" PARSE_ERROR_COROSYNC_CONF = "PARSE_ERROR_COROSYNC_CONF" PARSE_ERROR_COROSYNC_CONF_UNEXPECTED_CLOSING_BRACE = "PARSE_ERROR_COROSYNC_CONF_UNEXPECTED_CLOSING_BRACE" +PREREQUISITE_OPTION_IS_MISSING = "PREREQUISITE_OPTION_IS_MISSING" QDEVICE_ALREADY_DEFINED = "QDEVICE_ALREADY_DEFINED" QDEVICE_ALREADY_INITIALIZED = "QDEVICE_ALREADY_INITIALIZED" QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE = "QDEVICE_CERTIFICATE_ACCEPTED_BY_NODE" @@ -144,26 +178,43 @@ QDEVICE_REMOVE_OR_CLUSTER_STOP_NEEDED = "QDEVICE_REMOVE_OR_CLUSTER_STOP_NEEDED" QDEVICE_USED_BY_CLUSTERS = "QDEVICE_USED_BY_CLUSTERS" REQUIRED_OPTION_IS_MISSING = "REQUIRED_OPTION_IS_MISSING" +REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING = "REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING" +RESOURCE_BUNDLE_ALREADY_CONTAINS_A_RESOURCE = "RESOURCE_BUNDLE_ALREADY_CONTAINS_A_RESOURCE" +RESOURCE_CANNOT_BE_NEXT_TO_ITSELF_IN_GROUP = "RESOURCE_CANNOT_BE_NEXT_TO_ITSELF_IN_GROUP" RESOURCE_CLEANUP_ERROR = "RESOURCE_CLEANUP_ERROR" RESOURCE_CLEANUP_TOO_TIME_CONSUMING = 'RESOURCE_CLEANUP_TOO_TIME_CONSUMING' -RESOURCE_DOES_NOT_EXIST = 'RESOURCE_DOES_NOT_EXIST' +RESOURCE_DOES_NOT_RUN = "RESOURCE_DOES_NOT_RUN" RESOURCE_FOR_CONSTRAINT_IS_MULTIINSTANCE = 'RESOURCE_FOR_CONSTRAINT_IS_MULTIINSTANCE' -RESOURCE_WAIT_ERROR = "RESOURCE_WAIT_ERROR" -RESOURCE_WAIT_NOT_SUPPORTED = "RESOURCE_WAIT_NOT_SUPPORTED" -RESOURCE_WAIT_TIMED_OUT = "RESOURCE_WAIT_TIMED_OUT" +RESOURCE_IS_GUEST_NODE_ALREADY = "RESOURCE_IS_GUEST_NODE_ALREADY" +RESOURCE_IS_UNMANAGED = "RESOURCE_IS_UNMANAGED" +RESOURCE_MANAGED_NO_MONITOR_ENABLED = "RESOURCE_MANAGED_NO_MONITOR_ENABLED" +RESOURCE_OPERATION_INTERVAL_DUPLICATION = "RESOURCE_OPERATION_INTERVAL_DUPLICATION" +RESOURCE_OPERATION_INTERVAL_ADAPTED = "RESOURCE_OPERATION_INTERVAL_ADAPTED" +RESOURCE_RUNNING_ON_NODES = "RESOURCE_RUNNING_ON_NODES" RRP_ACTIVE_NOT_SUPPORTED = 'RRP_ACTIVE_NOT_SUPPORTED' RUN_EXTERNAL_PROCESS_ERROR = "RUN_EXTERNAL_PROCESS_ERROR" RUN_EXTERNAL_PROCESS_FINISHED = "RUN_EXTERNAL_PROCESS_FINISHED" RUN_EXTERNAL_PROCESS_STARTED = "RUN_EXTERNAL_PROCESS_STARTED" SBD_CHECK_STARTED = "SBD_CHECK_STARTED" SBD_CHECK_SUCCESS = "SBD_CHECK_SUCCESS" -SBD_CONFIG_DISTRIBUTION_STARTED = "SBD_CONFIG_DISTRIBUTION_STARTED" SBD_CONFIG_ACCEPTED_BY_NODE = "SBD_CONFIG_ACCEPTED_BY_NODE" 
+SBD_CONFIG_DISTRIBUTION_STARTED = "SBD_CONFIG_DISTRIBUTION_STARTED"
+SBD_DEVICE_DOES_NOT_EXIST = "SBD_DEVICE_DOES_NOT_EXIST"
+SBD_DEVICE_DUMP_ERROR = "SBD_DEVICE_DUMP_ERROR"
+SBD_DEVICE_INITIALIZATION_ERROR = "SBD_DEVICE_INITIALIZATION_ERROR"
+SBD_DEVICE_INITIALIZATION_STARTED = "SBD_DEVICE_INITIALIZATION_STARTED"
+SBD_DEVICE_INITIALIZATION_SUCCESS = "SBD_DEVICE_INITIALIZATION_SUCCESS"
+SBD_DEVICE_IS_NOT_BLOCK_DEVICE = "SBD_DEVICE_IS_NOT_BLOCK_DEVICE"
+SBD_DEVICE_LIST_ERROR = "SBD_DEVICE_LIST_ERROR"
+SBD_DEVICE_MESSAGE_ERROR = "SBD_DEVICE_MESSAGE_ERROR"
+SBD_DEVICE_PATH_NOT_ABSOLUTE = "SBD_DEVICE_PATH_NOT_ABSOLUTE"
 SBD_DISABLING_STARTED = "SBD_DISABLING_STARTED"
 SBD_ENABLING_STARTED = "SBD_ENABLING_STARTED"
-SBD_NOT_INSTALLED = "SBD_NOT_INSTALLED"
+SBD_NO_DEVICE_FOR_NODE = "SBD_NO_DEVICE_FOR_NODE"
 SBD_NOT_ENABLED = "SBD_NOT_ENABLED"
+SBD_NOT_INSTALLED = "SBD_NOT_INSTALLED"
 SBD_REQUIRES_ATB = "SBD_REQUIRES_ATB"
+SBD_TOO_MANY_DEVICES_FOR_NODE = "SBD_TOO_MANY_DEVICES_FOR_NODE"
 SERVICE_DISABLE_ERROR = "SERVICE_DISABLE_ERROR"
 SERVICE_DISABLE_STARTED = "SERVICE_DISABLE_STARTED"
 SERVICE_DISABLE_SUCCESS = "SERVICE_DISABLE_SUCCESS"
@@ -180,6 +231,10 @@
 SERVICE_STOP_ERROR = "SERVICE_STOP_ERROR"
 SERVICE_STOP_STARTED = "SERVICE_STOP_STARTED"
 SERVICE_STOP_SUCCESS = "SERVICE_STOP_SUCCESS"
+STONITH_RESOURCES_DO_NOT_EXIST = "STONITH_RESOURCES_DO_NOT_EXIST"
+SERVICE_COMMANDS_ON_NODES_STARTED = "SERVICE_COMMANDS_ON_NODES_STARTED"
+SERVICE_COMMAND_ON_NODE_ERROR = "SERVICE_COMMAND_ON_NODE_ERROR"
+SERVICE_COMMAND_ON_NODE_SUCCESS = "SERVICE_COMMAND_ON_NODE_SUCCESS"
 UNABLE_TO_DETERMINE_USER_UID = "UNABLE_TO_DETERMINE_USER_UID"
 UNABLE_TO_DETERMINE_GROUP_GID = "UNABLE_TO_DETERMINE_GROUP_GID"
 UNABLE_TO_GET_AGENT_METADATA = 'UNABLE_TO_GET_AGENT_METADATA'
@@ -189,4 +244,11 @@
 UNKNOWN_COMMAND = 'UNKNOWN_COMMAND'
 WATCHDOG_INVALID = "WATCHDOG_INVALID"
 UNSUPPORTED_OPERATION_ON_NON_SYSTEMD_SYSTEMS = "UNSUPPORTED_OPERATION_ON_NON_SYSTEMD_SYSTEMS"
+USE_COMMAND_NODE_ADD_REMOTE = "USE_COMMAND_NODE_ADD_REMOTE"
+USE_COMMAND_NODE_ADD_GUEST = "USE_COMMAND_NODE_ADD_GUEST"
+USE_COMMAND_NODE_REMOVE_GUEST = "USE_COMMAND_NODE_REMOVE_GUEST"
+WAIT_FOR_IDLE_ERROR = "WAIT_FOR_IDLE_ERROR"
+WAIT_FOR_IDLE_NOT_LIVE_CLUSTER = "WAIT_FOR_IDLE_NOT_LIVE_CLUSTER"
+WAIT_FOR_IDLE_NOT_SUPPORTED = "WAIT_FOR_IDLE_NOT_SUPPORTED"
+WAIT_FOR_IDLE_TIMED_OUT = "WAIT_FOR_IDLE_TIMED_OUT"
 WATCHDOG_NOT_FOUND = "WATCHDOG_NOT_FOUND"
diff -Nru pcs-0.9.155+dfsg/pcs/common/test/test_tools.py pcs-0.9.159/pcs/common/test/test_tools.py
--- pcs-0.9.155+dfsg/pcs/common/test/test_tools.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/common/test/test_tools.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,24 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from pcs.test.tools.pcs_unittest import TestCase
+from pcs.common.tools import is_string
+
+class IsString(TestCase):
+    def test_recognize_plain_string(self):
+        self.assertTrue(is_string(""))
+
+    def test_recognize_unicode_string(self):
+        #in python3 this is str type
+        self.assertTrue(is_string(u""))
+
+    def test_rcognize_bytes(self):
+        #in python2 this is str type
+        self.assertTrue(is_string(b""))
+
+    def test_list_of_string_is_not_string(self):
+        self.assertFalse(is_string(["a", "b"]))
diff -Nru pcs-0.9.155+dfsg/pcs/common/tools.py pcs-0.9.159/pcs/common/tools.py
--- pcs-0.9.155+dfsg/pcs/common/tools.py 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/common/tools.py 2017-06-30 15:33:01.000000000 +0000
@@ -5,6 +5,7 @@
     unicode_literals,
 )
 
+from lxml import etree
 import threading
@@ -41,3 +42,31 @@
 
 def join_multilines(strings):
     return "\n".join([a.strip() for a in strings if a.strip()])
+
+def is_string(candidate):
+    """
+    Return if candidate is a string.
+    A simple-looking solution isinstance(candidate, "".__class__) does not work:
+
+    >>> isinstance("", "".__class__), isinstance(u"", "".__class__)
+    (True, False)
+
+    This code also needs to deal with python2 and python3; the unicode type
+    exists in python2 but not in python3.
+    """
+    string_list = [str, bytes]
+    try:
+        string_list.append(unicode)
+    except NameError: #unicode is not present in python3
+        pass
+
+    return any([isinstance(candidate, string) for string in string_list])
+
+def xml_fromstring(xml):
+    # If the xml contains an encoding declaration such as:
+    # <?xml version="1.0" encoding="UTF-8"?>
+    # we get an exception in python3:
+    # ValueError: Unicode strings with encoding declaration are not supported.
+    # Please use bytes input or XML fragments without declaration.
+    # So we encode the string to bytes.
+    return etree.fromstring(xml.encode("utf-8"))
diff -Nru pcs-0.9.155+dfsg/pcs/config.py pcs-0.9.159/pcs/config.py
--- pcs-0.9.155+dfsg/pcs/config.py 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/config.py 2017-06-30 15:33:01.000000000 +0000
@@ -17,8 +17,10 @@
 import logging
 import pwd
 import grp
+import tempfile
 import time
 import platform
+import shutil
 
 try:
     import clufter.facts
@@ -122,20 +124,26 @@
     utils.process_library_reports(e.args)
 
 def config_show_cib():
+    lib = utils.get_library_wrapper()
+    modificators = utils.get_modificators()
+
     print("Resources:")
     utils.pcs_options["--all"] = 1
     utils.pcs_options["--full"] = 1
     resource.resource_show([])
+    print()
 
     print("Stonith Devices:")
     resource.resource_show([], True)
     print("Fencing Levels:")
-    stonith.stonith_level_show()
-    print()
+    levels = stonith.stonith_level_config_to_str(
+        lib.fencing_topology.get_config()
+    )
+    if levels:
+        print("\n".join(indent(levels, 2)))
 
-    lib = utils.get_library_wrapper()
+    print()
     constraint.location_show([])
-    modificators = utils.get_modificators()
     order_command.show(lib, [], modificators)
     colocation_command.show(lib, [], modificators)
     ticket_command.show(lib, [], modificators)
@@ -341,6 +349,7 @@
     file_list = config_backup_path_list(with_uid_gid=True)
     tarball_file_list = []
     version = None
+    tmp_dir = None
     try:
         tarball = tarfile.open(infile_name, "r|*", infile_obj)
         while True:
@@ -387,15 +396,30 @@
                     path = os.path.dirname(path)
             if not extract_info:
                 continue
-            path_extract = os.path.dirname(extract_info["path"])
-            tarball.extractall(path_extract, [tar_member_info])
-            path_full = os.path.join(path_extract, tar_member_info.name)
+            path_full = None
+            if hasattr(extract_info.get("pre_store_call"), '__call__'):
+                extract_info["pre_store_call"]()
+            if "rename" in extract_info and extract_info["rename"]:
+                if tmp_dir is None:
+                    tmp_dir = tempfile.mkdtemp()
+                tarball.extractall(tmp_dir, [tar_member_info])
+                path_full = extract_info["path"]
+                os.rename(
+                    os.path.join(tmp_dir, tar_member_info.name), path_full
+                )
+            else:
+                dir_path = os.path.dirname(extract_info["path"])
+                tarball.extractall(dir_path, [tar_member_info])
+                path_full = os.path.join(dir_path, tar_member_info.name)
             file_attrs = extract_info["attrs"]
             os.chmod(path_full, file_attrs["mode"])
             os.chown(path_full, file_attrs["uid"], file_attrs["gid"])
         tarball.close()
-    except (tarfile.TarError, EnvironmentError) as e:
+    except (tarfile.TarError, EnvironmentError, OSError) as e:
         utils.err("unable to restore the cluster: %s" % e)
+    finally:
+        if tmp_dir:
+            shutil.rmtree(tmp_dir, 
ignore_errors=True) try: sig_path = os.path.join(settings.cib_dir, "cib.xml.sig") @@ -414,6 +438,8 @@ "uid": 0, "gid": 0, } + corosync_authkey_attrs = dict(corosync_attrs) + corosync_authkey_attrs["mode"] = 0o400 cib_attrs = { "mtime": int(time.time()), "mode": 0o600, @@ -421,25 +447,32 @@ "gname": settings.pacemaker_gname, } if with_uid_gid: - try: - cib_attrs["uid"] = pwd.getpwnam(cib_attrs["uname"]).pw_uid - except KeyError: - utils.err( - "Unable to determine uid of user '%s'" % cib_attrs["uname"] - ) - try: - cib_attrs["gid"] = grp.getgrnam(cib_attrs["gname"]).gr_gid - except KeyError: - utils.err( - "Unable to determine gid of group '%s'" % cib_attrs["gname"] - ) + cib_attrs["uid"] = _get_uid(cib_attrs["uname"]) + cib_attrs["gid"] = _get_gid(cib_attrs["gname"]) + pcmk_authkey_attrs = dict(cib_attrs) + pcmk_authkey_attrs["mode"] = 0o440 file_list = { "cib.xml": { "path": os.path.join(settings.cib_dir, "cib.xml"), "required": True, "attrs": dict(cib_attrs), }, + "corosync_authkey": { + "path": settings.corosync_authkey_file, + "required": False, + "attrs": corosync_authkey_attrs, + "restore_procedure": None, + "rename": True, + }, + "pacemaker_authkey": { + "path": settings.pacemaker_authkey_file, + "required": False, + "attrs": pcmk_authkey_attrs, + "restore_procedure": None, + "rename": True, + "pre_store_call": _ensure_etc_pacemaker_exists, + }, } if rhel6: file_list["cluster.conf"] = { @@ -472,6 +505,35 @@ } return file_list + +def _get_uid(user_name): + try: + return pwd.getpwnam(user_name).pw_uid + except KeyError: + utils.err("Unable to determine uid of user '{0}'".format(user_name)) + + +def _get_gid(group_name): + try: + return grp.getgrnam(group_name).gr_gid + except KeyError: + utils.err( + "Unable to determine gid of group '{0}'".format(group_name) + ) + + +def _ensure_etc_pacemaker_exists(): + dir_name = os.path.dirname(settings.pacemaker_authkey_file) + if not os.path.exists(dir_name): + os.mkdir(dir_name) + os.chmod(dir_name, 0o750) + os.chown( + dir_name, + _get_uid(settings.pacemaker_uname), + _get_gid(settings.pacemaker_gname) + ) + + def config_backup_check_version(version): try: version_number = int(version) @@ -621,8 +683,6 @@ "batch": True, "sys": "linux", "dist": dist, - # Make it work on RHEL6 as well for sure - "color": "always" if sys.stdout.isatty() else "never" } if interactive: if "EDITOR" not in os.environ: @@ -670,7 +730,7 @@ if output_format in ("pcs-commands", "pcs-commands-verbose"): ok, message = utils.write_file( dry_run_output, - clufter_args_obj.output["passout"] + clufter_args_obj.output["passout"].decode() ) if not ok: utils.err(message) @@ -692,14 +752,14 @@ config_backup_add_version_to_tarball(tarball) utils.tar_add_file_data( tarball, - clufter_args_obj.cib["passout"].encode("utf-8"), + clufter_args_obj.cib["passout"], "cib.xml", **file_list["cib.xml"]["attrs"] ) if output_format == "cluster.conf": utils.tar_add_file_data( tarball, - clufter_args_obj.ccs_pcmk["passout"].encode("utf-8"), + clufter_args_obj.ccs_pcmk["passout"], "cluster.conf", **file_list["cluster.conf"]["attrs"] ) @@ -720,7 +780,7 @@ )("bytestring") utils.tar_add_file_data( tarball, - corosync_conf_data.encode("utf-8"), + corosync_conf_data, "corosync.conf", **file_list["corosync.conf"]["attrs"] ) @@ -738,7 +798,7 @@ )("bytestring") utils.tar_add_file_data( tarball, - uidgid_data.encode("utf-8"), + uidgid_data, "uidgid.d/" + filename, **file_list["uidgid.d"]["attrs"] ) @@ -796,8 +856,6 @@ "batch": True, "sys": "linux", "dist": dist, - # Make it work on RHEL6 as well for sure - 
"color": "always" if sys.stdout.isatty() else "never", "coro": settings.corosync_conf_file, "ccs": settings.cluster_conf_file, "start_wait": "60", @@ -839,7 +897,7 @@ if output_file: ok, message = utils.write_file( output_file, - clufter_args_obj.output["passout"] + clufter_args_obj.output["passout"].decode() ) if not ok: utils.err(message) diff -Nru pcs-0.9.155+dfsg/pcs/constraint.py pcs-0.9.159/pcs/constraint.py --- pcs-0.9.155+dfsg/pcs/constraint.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/constraint.py 2017-06-30 15:33:01.000000000 +0000 @@ -10,8 +10,6 @@ from collections import defaultdict from xml.dom.minidom import parseString -import pcs.cli.constraint_colocation.command as colocation_command -import pcs.cli.constraint_order.command as order_command from pcs import ( rule as rule_utils, usage, @@ -21,11 +19,15 @@ constraint_colocation, constraint_order, ) -from pcs.cli.constraint_ticket import command as ticket_command +from pcs.cli.common import parse_args from pcs.cli.common.errors import CmdLineInputError +import pcs.cli.constraint_colocation.command as colocation_command +import pcs.cli.constraint_order.command as order_command +from pcs.cli.constraint_ticket import command as ticket_command from pcs.lib.cib.constraint import resource_set from pcs.lib.cib.constraint.order import ATTRIB as order_attrib from pcs.lib.errors import LibraryError +from pcs.lib.pacemaker.values import sanitize_id OPTIONS_ACTION = resource_set.ATTRIB["action"] @@ -36,111 +38,120 @@ OPTIONS_SYMMETRICAL = order_attrib["symmetrical"] OPTIONS_KIND = order_attrib["kind"] +RESOURCE_TYPE_RESOURCE = "resource" +RESOURCE_TYPE_REGEXP = "regexp" + def constraint_cmd(argv): lib = utils.get_library_wrapper() modificators = utils.get_modificators() + if len(argv) == 0: argv = ["list"] - sub_cmd = argv.pop(0) - if (sub_cmd == "help"): - usage.constraint(argv) - elif (sub_cmd == "location"): - if len (argv) == 0: - sub_cmd2 = "show" - else: - sub_cmd2 = argv.pop(0) - if (sub_cmd2 == "add"): - location_add(argv) - elif (sub_cmd2 in ["remove","delete"]): - location_add(argv,True) - elif (sub_cmd2 == "show"): - location_show(argv) - elif len(argv) >= 2: - if argv[0] == "rule": - location_rule([sub_cmd2] + argv) + try: + if (sub_cmd == "help"): + usage.constraint(argv) + elif (sub_cmd == "location"): + if len (argv) == 0: + sub_cmd2 = "show" else: - location_prefer([sub_cmd2] + argv) - else: - usage.constraint() - sys.exit(1) - elif (sub_cmd == "order"): - if (len(argv) == 0): - sub_cmd2 = "show" - else: - sub_cmd2 = argv.pop(0) + sub_cmd2 = argv.pop(0) - if (sub_cmd2 == "set"): - try: - order_command.create_with_set(lib, argv, modificators) - except CmdLineInputError as e: - utils.exit_on_cmdline_input_errror(e, "constraint", 'order set') - except LibraryError as e: - utils.process_library_reports(e.args) - elif (sub_cmd2 in ["remove","delete"]): - order_rm(argv) - elif (sub_cmd2 == "show"): - order_command.show(lib, argv, modificators) - else: - order_start([sub_cmd2] + argv) - elif sub_cmd == "ticket": - usage_name = "ticket" - try: - command_map = { - "set": ticket_command.create_with_set, - "add": ticket_command.add, - "remove": ticket_command.remove, - "show": ticket_command.show, - } - sub_command = argv[0] if argv else "show" - if sub_command not in command_map: - raise CmdLineInputError() - usage_name = "ticket "+sub_command - - command_map[sub_command](lib, argv[1:], modificators) - except LibraryError as e: - utils.process_library_reports(e.args) - except CmdLineInputError as e: - 
utils.exit_on_cmdline_input_errror(e, "constraint", usage_name) - - elif (sub_cmd == "colocation"): - if (len(argv) == 0): - sub_cmd2 = "show" - else: - sub_cmd2 = argv.pop(0) + if (sub_cmd2 == "add"): + location_add(argv) + elif (sub_cmd2 in ["remove","delete"]): + location_add(argv,True) + elif (sub_cmd2 == "show"): + location_show(argv) + elif len(argv) >= 2: + if argv[0] == "rule": + location_rule([sub_cmd2] + argv) + else: + location_prefer([sub_cmd2] + argv) + else: + usage.constraint() + sys.exit(1) + elif (sub_cmd == "order"): + if (len(argv) == 0): + sub_cmd2 = "show" + else: + sub_cmd2 = argv.pop(0) - if (sub_cmd2 == "add"): - colocation_add(argv) - elif (sub_cmd2 in ["remove","delete"]): - colocation_rm(argv) - elif (sub_cmd2 == "set"): + if (sub_cmd2 == "set"): + try: + order_command.create_with_set(lib, argv, modificators) + except CmdLineInputError as e: + utils.exit_on_cmdline_input_errror(e, "constraint", 'order set') + except LibraryError as e: + utils.process_library_reports(e.args) + elif (sub_cmd2 in ["remove","delete"]): + order_rm(argv) + elif (sub_cmd2 == "show"): + order_command.show(lib, argv, modificators) + else: + order_start([sub_cmd2] + argv) + elif sub_cmd == "ticket": + usage_name = "ticket" try: + command_map = { + "set": ticket_command.create_with_set, + "add": ticket_command.add, + "remove": ticket_command.remove, + "show": ticket_command.show, + } + sub_command = argv[0] if argv else "show" + if sub_command not in command_map: + raise CmdLineInputError() + usage_name = "ticket "+sub_command - colocation_command.create_with_set(lib, argv, modificators) + command_map[sub_command](lib, argv[1:], modificators) except LibraryError as e: utils.process_library_reports(e.args) except CmdLineInputError as e: - utils.exit_on_cmdline_input_errror(e, "constraint", "colocation set") - elif (sub_cmd2 == "show"): + utils.exit_on_cmdline_input_errror(e, "constraint", usage_name) + + elif (sub_cmd == "colocation"): + if (len(argv) == 0): + sub_cmd2 = "show" + else: + sub_cmd2 = argv.pop(0) + + if (sub_cmd2 == "add"): + colocation_add(argv) + elif (sub_cmd2 in ["remove","delete"]): + colocation_rm(argv) + elif (sub_cmd2 == "set"): + try: + + colocation_command.create_with_set(lib, argv, modificators) + except LibraryError as e: + utils.process_library_reports(e.args) + except CmdLineInputError as e: + utils.exit_on_cmdline_input_errror(e, "constraint", "colocation set") + elif (sub_cmd2 == "show"): + colocation_command.show(lib, argv, modificators) + else: + usage.constraint() + sys.exit(1) + elif (sub_cmd in ["remove","delete"]): + constraint_rm(argv) + elif (sub_cmd == "show" or sub_cmd == "list"): + location_show(argv) + order_command.show(lib, argv, modificators) colocation_command.show(lib, argv, modificators) + ticket_command.show(lib, argv, modificators) + elif (sub_cmd == "ref"): + constraint_ref(argv) + elif (sub_cmd == "rule"): + constraint_rule(argv) else: usage.constraint() sys.exit(1) - elif (sub_cmd in ["remove","delete"]): - constraint_rm(argv) - elif (sub_cmd == "show" or sub_cmd == "list"): - location_show(argv) - order_command.show(lib, argv, modificators) - colocation_command.show(lib, argv, modificators) - ticket_command.show(lib, argv, modificators) - elif (sub_cmd == "ref"): - constraint_ref(argv) - elif (sub_cmd == "rule"): - constraint_rule(argv) - else: - usage.constraint() - sys.exit(1) + except LibraryError as e: + utils.process_library_reports(e.args) + except CmdLineInputError as e: + utils.exit_on_cmdline_input_errror(e, "resource", sub_cmd) 
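The hunk above reworks constraint_cmd so every subcommand runs inside a single try/except that turns LibraryError and CmdLineInputError into user-facing errors instead of tracebacks. A minimal, runnable sketch of that dispatch-and-translate pattern; the exception classes and report_error helper below are local stand-ins, not the pcs implementations:

    # Sketch of centralized dispatch plus exception translation.
    # These classes mirror the pcs exception names only; report_error()
    # is a hypothetical stand-in for the utils.err / usage handling.
    class LibraryError(Exception):
        pass

    class CmdLineInputError(Exception):
        pass

    def report_error(message):
        raise SystemExit("Error: {0}".format(message))

    def constraint_cmd(argv, command_map):
        sub_cmd = argv.pop(0) if argv else "list"
        try:
            command_map[sub_cmd](argv)
        except KeyError:
            report_error("unknown command '{0}'".format(sub_cmd))
        except LibraryError as e:
            report_error("; ".join(str(arg) for arg in e.args))
        except CmdLineInputError:
            report_error("invalid arguments for '{0}'".format(sub_cmd))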
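The new RESOURCE_TYPE_RESOURCE and RESOURCE_TYPE_REGEXP constants are consumed by parse_typed_arg in the location hunks that follow. Assuming the "type%value" argument syntax pcs documents for these commands (the real helper is pcs.cli.common.parse_args.parse_typed_arg), its semantics reduce to roughly:

    # Rough sketch of typed-argument parsing as used by the location
    # commands below; the "%" separator is an assumption based on pcs
    # usage texts, and pcs raises CmdLineInputError instead of ValueError.
    def parse_typed_arg(arg, allowed_types, default_type):
        if "%" not in arg:
            return default_type, arg
        arg_type, value = arg.split("%", 1)
        if arg_type not in allowed_types:
            raise ValueError("invalid type '{0}'".format(arg_type))
        return arg_type, value

    # "regexp%dummy.*" selects the regexp type; a bare id keeps the default.
    assert parse_typed_arg(
        "regexp%dummy.*", ["resource", "regexp"], "resource"
    ) == ("regexp", "dummy.*")
    assert parse_typed_arg(
        "dummy1", ["resource", "regexp"], "resource"
    ) == ("resource", "dummy1")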
@@ -526,7 +537,17 @@ showDetail = False if len(argv) > 1: - valid_noderes = argv[1:] + if byNode: + valid_noderes = argv[1:] + else: + valid_noderes = [ + parse_args.parse_typed_arg( + arg, + [RESOURCE_TYPE_RESOURCE, RESOURCE_TYPE_REGEXP], + RESOURCE_TYPE_RESOURCE + ) + for arg in argv[1:] + ] else: valid_noderes = [] @@ -540,17 +561,24 @@ print("Location Constraints:") for rsc_loc in all_loc_constraints: - lc_node = rsc_loc.getAttribute("node") - lc_rsc = rsc_loc.getAttribute("rsc") + if rsc_loc.hasAttribute("rsc-pattern"): + lc_rsc_type = RESOURCE_TYPE_REGEXP + lc_rsc_value = rsc_loc.getAttribute("rsc-pattern") + lc_name = "Resource pattern: {0}".format(lc_rsc_value) + else: + lc_rsc_type = RESOURCE_TYPE_RESOURCE + lc_rsc_value = rsc_loc.getAttribute("rsc") + lc_name = "Resource: {0}".format(lc_rsc_value) + lc_rsc = lc_rsc_type, lc_rsc_value, lc_name lc_id = rsc_loc.getAttribute("id") + lc_node = rsc_loc.getAttribute("node") lc_score = rsc_loc.getAttribute("score") lc_role = rsc_loc.getAttribute("role") - lc_name = "Resource: " + lc_rsc lc_resource_discovery = rsc_loc.getAttribute("resource-discovery") for child in rsc_loc.childNodes: if child.nodeType == child.ELEMENT_NODE and child.tagName == "rule": - ruleshash[lc_name].append(child) + ruleshash[lc_rsc].append(child) # NEED TO FIX FOR GROUP LOCATION CONSTRAINTS (where there are children of # rsc_location) @@ -573,18 +601,36 @@ nodeshash = nodehashoff rschash = rschashoff + hash_element = { + "id": lc_id, + "rsc_type": lc_rsc_type, + "rsc_value": lc_rsc_value, + "rsc_label": lc_name, + "node": lc_node, + "score": lc_score, + "role": lc_role, + "resource-discovery": lc_resource_discovery, + } if lc_node in nodeshash: - nodeshash[lc_node].append((lc_id,lc_rsc,lc_score, lc_role, lc_resource_discovery)) + nodeshash[lc_node].append(hash_element) else: - nodeshash[lc_node] = [(lc_id, lc_rsc,lc_score, lc_role, lc_resource_discovery)] - + nodeshash[lc_node] = [hash_element] if lc_rsc in rschash: - rschash[lc_rsc].append((lc_id,lc_node,lc_score, lc_role, lc_resource_discovery)) + rschash[lc_rsc].append(hash_element) else: - rschash[lc_rsc] = [(lc_id,lc_node,lc_score, lc_role, lc_resource_discovery)] + rschash[lc_rsc] = [hash_element] - nodelist = list(set(list(nodehashon.keys()) + list(nodehashoff.keys()))) - rsclist = list(set(list(rschashon.keys()) + list(rschashoff.keys()))) + nodelist = sorted(set(list(nodehashon.keys()) + list(nodehashoff.keys()))) + rsclist = sorted( + set(list(rschashon.keys()) + list(rschashoff.keys())), + key=lambda item: ( + { + RESOURCE_TYPE_RESOURCE: 1, + RESOURCE_TYPE_REGEXP: 0, + }[item[0]], + item[1] + ) + ) if byNode == True: for node in nodelist: @@ -601,25 +647,29 @@ if node in nodehash: print(label) for options in nodehash[node]: - line_parts = [ - " " + options[1] + " (" + options[0] + ")", - ] - if options[3]: - line_parts.append("(role: {0})".format(options[3])) - if options[4]: + line_parts = [( + " " + options["rsc_label"] + + " (" + options["id"] + ")" + )] + if options["role"]: line_parts.append( - "(resource-discovery={0})".format(options[4]) + "(role: {0})".format(options["role"]) ) - line_parts.append("Score: " + options[2]) + if options["resource-discovery"]: + line_parts.append( + "(resource-discovery={0})".format( + options["resource-discovery"] + ) + ) + line_parts.append("Score: " + options["score"]) print(" ".join(line_parts)) - show_location_rules(ruleshash,showDetail) + show_location_rules(ruleshash, showDetail) else: - rsclist.sort() for rsc in rsclist: if len(valid_noderes) != 0: - if rsc 
not in valid_noderes: + if rsc[0:2] not in valid_noderes: continue - print(" Resource: " + rsc) + print(" {0}".format(rsc[2])) rschash_label = ( (rschashon, " Enabled on:"), (rschashoff, " Disabled on:"), @@ -627,32 +677,45 @@ for rschash, label in rschash_label: if rsc in rschash: for options in rschash[rsc]: - if not options[1]: + if not options["node"]: continue line_parts = [ label, - options[1], - "(score:{0})".format(options[2]), + options["node"], + "(score:{0})".format(options["score"]), ] - if options[3]: - line_parts.append("(role: {0})".format(options[3])) - if options[4]: + if options["role"]: + line_parts.append( + "(role: {0})".format(options["role"]) + ) + if options["resource-discovery"]: line_parts.append( - "(resource-discovery={0})".format(options[4]) + "(resource-discovery={0})".format( + options["resource-discovery"] + ) ) if showDetail: - line_parts.append("(id:{0})".format(options[0])) + line_parts.append("(id:{0})".format(options["id"])) print(" ".join(line_parts)) miniruleshash={} - miniruleshash["Resource: " + rsc] = ruleshash["Resource: " + rsc] - show_location_rules(miniruleshash,showDetail, True) + miniruleshash[rsc] = ruleshash[rsc] + show_location_rules(miniruleshash, showDetail, True) -def show_location_rules(ruleshash,showDetail,noheader=False): +def show_location_rules(ruleshash, showDetail, noheader=False): constraint_options = {} - for rsc in ruleshash: - constrainthash= defaultdict(list) + for rsc in sorted( + ruleshash.keys(), + key=lambda item: ( + { + RESOURCE_TYPE_RESOURCE: 1, + RESOURCE_TYPE_REGEXP: 0, + }[item[0]], + item[1] + ) + ): + constrainthash = defaultdict(list) if not noheader: - print(" " + rsc) + print(" {0}".format(rsc[2])) for rule in ruleshash[rsc]: constraint_id = rule.parentNode.getAttribute("id") constrainthash[constraint_id].append(rule) @@ -676,6 +739,12 @@ rsc = argv.pop(0) prefer_option = argv.pop(0) + dummy_rsc_type, rsc_value = parse_args.parse_typed_arg( + rsc, + [RESOURCE_TYPE_RESOURCE, RESOURCE_TYPE_REGEXP], + RESOURCE_TYPE_RESOURCE + ) + if prefer_option == "prefers": prefer = True elif prefer_option == "avoids": @@ -703,80 +772,139 @@ else: score = "-" + score node = nodeconf_a[0] - location_add(["location-" +rsc+"-"+node+"-"+score,rsc,node,score]) + location_add([ + sanitize_id("location-{0}-{1}-{2}".format(rsc_value, node, score)), + rsc, + node, + score + ]) def location_add(argv,rm=False): - if len(argv) < 4 and (rm == False or len(argv) < 1): - usage.constraint() + if rm: + location_remove(argv) + return + + if len(argv) < 4: + usage.constraint(["location add"]) sys.exit(1) constraint_id = argv.pop(0) - - # If we're removing, we only care about the id - if (rm == True): - resource_name = "" - node = "" - score = "" - else: - id_valid, id_error = utils.validate_xml_id(constraint_id, 'constraint id') - if not id_valid: - utils.err(id_error) - resource_name = argv.pop(0) - node = argv.pop(0) - score = argv.pop(0) - options = [] - # For now we only allow setting resource-discovery - if len(argv) > 0: - for arg in argv: - if '=' in arg: - options.append(arg.split('=',1)) - else: - print("Error: bad option '%s'" % arg) - usage.constraint(["location add"]) - sys.exit(1) - if options[-1][0] != "resource-discovery" and "--force" not in utils.pcs_options: - utils.err("bad option '%s', use --force to override" % options[-1][0]) - - - resource_valid, resource_error, correct_id \ - = utils.validate_constraint_resource( - utils.get_cib_dom(), resource_name - ) + rsc_type, rsc_value = parse_args.parse_typed_arg( + argv.pop(0), + 
[RESOURCE_TYPE_RESOURCE, RESOURCE_TYPE_REGEXP], + RESOURCE_TYPE_RESOURCE + ) + node = argv.pop(0) + score = argv.pop(0) + options = [] + # For now we only allow setting resource-discovery + if len(argv) > 0: + for arg in argv: + if '=' in arg: + options.append(arg.split('=',1)) + else: + print("Error: bad option '%s'" % arg) + usage.constraint(["location add"]) + sys.exit(1) + if options[-1][0] != "resource-discovery" and "--force" not in utils.pcs_options: + utils.err("bad option '%s', use --force to override" % options[-1][0]) + + id_valid, id_error = utils.validate_xml_id(constraint_id, 'constraint id') + if not id_valid: + utils.err(id_error) + + if not utils.is_score(score): + utils.err("invalid score '%s', use integer or INFINITY or -INFINITY" % score) + + required_version = None + if rsc_type == RESOURCE_TYPE_REGEXP: + required_version = 2, 6, 0 + + if required_version: + dom = utils.cluster_upgrade_to_version(required_version) + else: + dom = utils.get_cib_dom() + + if rsc_type == RESOURCE_TYPE_RESOURCE: + rsc_valid, rsc_error, correct_id = utils.validate_constraint_resource( + dom, rsc_value + ) if "--autocorrect" in utils.pcs_options and correct_id: - resource_name = correct_id - elif not resource_valid: - utils.err(resource_error) - if not utils.is_score(score): - utils.err("invalid score '%s', use integer or INFINITY or -INFINITY" % score) + rsc_value = correct_id + elif not rsc_valid: + utils.err(rsc_error) # Verify current constraint doesn't already exist # If it does we replace it with the new constraint - (dom,constraintsElement) = getCurrentConstraints() + dummy_dom, constraintsElement = getCurrentConstraints(dom) elementsToRemove = [] - # If the id matches, or the rsc & node match, then we replace/remove for rsc_loc in constraintsElement.getElementsByTagName('rsc_location'): - if (constraint_id == rsc_loc.getAttribute("id")) or \ - (rsc_loc.getAttribute("rsc") == resource_name and \ - rsc_loc.getAttribute("node") == node and not rm): + if ( + rsc_loc.getAttribute("id") == constraint_id + or + ( + rsc_loc.getAttribute("node") == node + and + ( + ( + RESOURCE_TYPE_RESOURCE == rsc_type + and + rsc_loc.getAttribute("rsc") == rsc_value + ) + or + ( + RESOURCE_TYPE_REGEXP == rsc_type + and + rsc_loc.getAttribute("rsc-pattern") == rsc_value + ) + ) + ) + ): elementsToRemove.append(rsc_loc) - for etr in elementsToRemove: constraintsElement.removeChild(etr) - if (rm == True and len(elementsToRemove) == 0): - utils.err("resource location id: " + constraint_id + " not found.") + element = dom.createElement("rsc_location") + element.setAttribute("id",constraint_id) + if rsc_type == RESOURCE_TYPE_RESOURCE: + element.setAttribute("rsc", rsc_value) + elif rsc_type == RESOURCE_TYPE_REGEXP: + element.setAttribute("rsc-pattern", rsc_value) + element.setAttribute("node",node) + element.setAttribute("score",score) + for option in options: + element.setAttribute(option[0], option[1]) + constraintsElement.appendChild(element) + + utils.replace_cib_configuration(dom) - if (not rm): - element = dom.createElement("rsc_location") - element.setAttribute("id",constraint_id) - element.setAttribute("rsc",resource_name) - element.setAttribute("node",node) - element.setAttribute("score",score) - for option in options: - element.setAttribute(option[0], option[1]) - constraintsElement.appendChild(element) +def location_remove(argv): + # This code was originally merged in the location_add function and was + # documented to take 1 or 4 arguments: + # location remove [ ] + # However it has always ignored all 
arguments but constraint id. Therefore + # this command / function has no use as it can be fully replaced by "pcs + # constraint remove" which also removes constraints by id. For now I keep + # things as they are but we should solve this when moving these functions + # to pcs.lib. + if len(argv) != 1: + usage.constraint(["location remove"]) + sys.exit(1) + + constraint_id = argv.pop(0) + dom, constraintsElement = getCurrentConstraints() + + elementsToRemove = [] + for rsc_loc in constraintsElement.getElementsByTagName('rsc_location'): + if constraint_id == rsc_loc.getAttribute("id"): + elementsToRemove.append(rsc_loc) + + if (len(elementsToRemove) == 0): + utils.err("resource location id: " + constraint_id + " not found.") + for etr in elementsToRemove: + constraintsElement.removeChild(etr) utils.replace_cib_configuration(dom) @@ -785,29 +913,52 @@ usage.constraint(["location", "rule"]) sys.exit(1) - res_name = argv.pop(0) - resource_valid, resource_error, correct_id \ - = utils.validate_constraint_resource(utils.get_cib_dom(), res_name) - if "--autocorrect" in utils.pcs_options and correct_id: - res_name = correct_id - elif not resource_valid: - utils.err(resource_error) - + rsc_type, rsc_value = parse_args.parse_typed_arg( + argv.pop(0), + [RESOURCE_TYPE_RESOURCE, RESOURCE_TYPE_REGEXP], + RESOURCE_TYPE_RESOURCE + ) argv.pop(0) # pop "rule" + options, rule_argv = rule_utils.parse_argv( + argv, + { + "constraint-id": None, + "resource-discovery": None, + } + ) + resource_discovery = ( + "resource-discovery" in options + and + options["resource-discovery"] + ) - options, rule_argv = rule_utils.parse_argv(argv, {"constraint-id": None, "resource-discovery": None,}) + required_version = None + if resource_discovery: + required_version = 2, 2, 0 + if rsc_type == RESOURCE_TYPE_REGEXP: + required_version = 2, 6, 0 + + if required_version: + dom = utils.cluster_upgrade_to_version(required_version) + else: + dom = utils.get_cib_dom() + + if rsc_type == RESOURCE_TYPE_RESOURCE: + rsc_valid, rsc_error, correct_id = utils.validate_constraint_resource( + dom, rsc_value + ) + if "--autocorrect" in utils.pcs_options and correct_id: + rsc_value = correct_id + elif not rsc_valid: + utils.err(rsc_error) + + cib, constraints = getCurrentConstraints(dom) + lc = cib.createElement("rsc_location") # If resource-discovery is specified, we use it with the rsc_location # element not the rule - if "resource-discovery" in options and options["resource-discovery"]: - utils.checkAndUpgradeCIB(2,2,0) - cib, constraints = getCurrentConstraints(utils.get_cib_dom()) - lc = cib.createElement("rsc_location") + if resource_discovery: lc.setAttribute("resource-discovery", options.pop("resource-discovery")) - else: - cib, constraints = getCurrentConstraints(utils.get_cib_dom()) - lc = cib.createElement("rsc_location") - constraints.appendChild(lc) if options.get("constraint-id"): @@ -816,7 +967,7 @@ ) if not id_valid: utils.err(id_error) - if utils.does_id_exist(cib, options["constraint-id"]): + if utils.does_id_exist(dom, options["constraint-id"]): utils.err( "id '%s' is already in use, please specify another one" % options["constraint-id"] @@ -824,8 +975,14 @@ lc.setAttribute("id", options["constraint-id"]) del options["constraint-id"] else: - lc.setAttribute("id", utils.find_unique_id(cib, "location-" + res_name)) - lc.setAttribute("rsc", res_name) + lc.setAttribute( + "id", + utils.find_unique_id(dom, sanitize_id("location-" + rsc_value)) + ) + if rsc_type == RESOURCE_TYPE_RESOURCE: + lc.setAttribute("rsc", rsc_value) + elif 
rsc_type == RESOURCE_TYPE_REGEXP: + lc.setAttribute("rsc-pattern", rsc_value) rule_utils.dom_rule_add(lc, options, rule_argv) location_rule_check_duplicates(constraints, lc) @@ -849,8 +1006,18 @@ def location_rule_find_duplicates(dom, constraint_el): def normalize(constraint_el): + if constraint_el.hasAttribute("rsc-pattern"): + rsc = ( + RESOURCE_TYPE_REGEXP, + constraint_el.getAttribute("rsc-pattern") + ) + else: + rsc = ( + RESOURCE_TYPE_RESOURCE, + constraint_el.getAttribute("rsc") + ) return ( - constraint_el.getAttribute("rsc"), + rsc, [ rule_utils.ExportAsExpression().get_string(rule_el, True) for rule_el in constraint_el.getElementsByTagName("rule") diff -Nru pcs-0.9.155+dfsg/pcs/lib/booth/config_files.py pcs-0.9.159/pcs/lib/booth/config_files.py --- pcs-0.9.155+dfsg/pcs/lib/booth/config_files.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/booth/config_files.py 2017-06-30 15:33:01.000000000 +0000 @@ -6,7 +6,6 @@ ) import os -import binascii from pcs.common import report_codes, env_file_role_codes as file_roles from pcs.common.tools import format_environment_error @@ -16,9 +15,6 @@ from pcs.settings import booth_config_dir as BOOTH_CONFIG_DIR -def generate_key(): - return binascii.hexlify(os.urandom(32)) - def get_all_configs_file_names(): """ Returns list of all file names ending with '.conf' in booth configuration diff -Nru pcs-0.9.155+dfsg/pcs/lib/booth/config_structure.py pcs-0.9.159/pcs/lib/booth/config_structure.py --- pcs-0.9.155+dfsg/pcs/lib/booth/config_structure.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/booth/config_structure.py 2017-06-30 15:33:01.000000000 +0000 @@ -109,14 +109,17 @@ reports = [] for key in sorted(options): if key in GLOBAL_KEYS: - reports.append( - common_reports.invalid_option(key, TICKET_KEYS, "booth ticket") - ) + reports.append(common_reports.invalid_option( + [key], + TICKET_KEYS, + "booth ticket", + )) elif key not in TICKET_KEYS: reports.append( common_reports.invalid_option( - key, TICKET_KEYS, + [key], + TICKET_KEYS, "booth ticket", severity=( severities.WARNING if allow_unknown_options diff -Nru pcs-0.9.155+dfsg/pcs/lib/booth/env.py pcs-0.9.159/pcs/lib/booth/env.py --- pcs-0.9.155+dfsg/pcs/lib/booth/env.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/booth/env.py 2017-06-30 15:33:01.000000000 +0000 @@ -78,7 +78,8 @@ self.__key_path = env_data["key_path"] self.__key = GhostFile( file_role=env_file_role_codes.BOOTH_KEY, - content=env_data["key_file"]["content"] + content=env_data["key_file"]["content"], + is_binary=True ) else: self.__config = RealFile( @@ -92,13 +93,14 @@ self.__key = RealFile( file_role=env_file_role_codes.BOOTH_KEY, file_path=path, + is_binary=True ) def command_expect_live_env(self): if not self.__config.is_live: raise LibraryError(common_reports.live_environment_required([ - "--booth-conf", - "--booth-key", + "BOOTH_CONF", + "BOOTH_KEY", ])) def set_key_path(self, path): @@ -131,7 +133,7 @@ self.__report_processor, can_overwrite_existing ) - self.__key.write(key_content, set_keyfile_access, is_binary=True) + self.__key.write(key_content, set_keyfile_access) def push_config(self, content): self.__config.write(content) diff -Nru pcs-0.9.155+dfsg/pcs/lib/booth/resource.py pcs-0.9.159/pcs/lib/booth/resource.py --- pcs-0.9.155+dfsg/pcs/lib/booth/resource.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/booth/resource.py 2017-06-30 15:33:01.000000000 +0000 @@ -13,49 +13,22 @@ resources_section.getroottree(), "booth-{0}-{1}".format(name, suffix) ) -def 
get_creator(resource_create, resource_remove=None): - #TODO resource_create is provisional hack until resources are not moved to - #lib - def create_booth_in_cluster(ip, booth_config_file_path, create_id): - ip_id = create_id("ip") - booth_id = create_id("service") - group_id = create_id("group") - - resource_create( - ra_id=ip_id, - ra_type="ocf:heartbeat:IPaddr2", - ra_values=["ip={0}".format(ip)], - op_values=[], - meta_values=[], - clone_opts=[], - group=group_id, - ) - try: - resource_create( - ra_id=booth_id, - ra_type="ocf:pacemaker:booth-site", - ra_values=["config={0}".format(booth_config_file_path)], - op_values=[], - meta_values=[], - clone_opts=[], - group=group_id, - ) - except SystemExit: - resource_remove(ip_id) - return create_booth_in_cluster - def is_ip_resource(resource_element): - return resource_element.attrib["type"] == "IPaddr2" + return resource_element.attrib.get("type", "") == "IPaddr2" def find_grouped_ip_element_to_remove(booth_element): - if booth_element.getparent().tag != "group": + group = booth_element.getparent() + + if group.tag != "group": return None - group = booth_element.getparent() - if len(group) != 2: - #when something else in group, ip is not for remove + primitives = group.xpath("./primitive") + if len(primitives) != 2: + # Don't remove the IP resource if some other resources are in the group. + # It is most likely manually configured by the user so we cannot delete + # it automatically. return None - for element in group: + for element in primitives: if is_ip_resource(element): return element return None diff -Nru pcs-0.9.155+dfsg/pcs/lib/booth/test/test_config_structure.py pcs-0.9.159/pcs/lib/booth/test/test_config_structure.py --- pcs-0.9.155+dfsg/pcs/lib/booth/test/test_config_structure.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/booth/test/test_config_structure.py 2017-06-30 15:33:01.000000000 +0000 @@ -59,7 +59,7 @@ severities.ERROR, report_codes.INVALID_OPTION, { - "option_name": "site", + "option_names": ["site"], "option_type": "booth ticket", "allowed": list(config_structure.TICKET_KEYS), }, @@ -68,7 +68,7 @@ severities.ERROR, report_codes.INVALID_OPTION, { - "option_name": "port", + "option_names": ["port"], "option_type": "booth ticket", "allowed": list(config_structure.TICKET_KEYS), }, @@ -86,7 +86,7 @@ severities.ERROR, report_codes.INVALID_OPTION, { - "option_name": "unknown", + "option_names": ["unknown"], "option_type": "booth ticket", "allowed": list(config_structure.TICKET_KEYS), }, @@ -118,7 +118,7 @@ severities.ERROR, report_codes.INVALID_OPTION, { - "option_name": "site", + "option_names": ["site"], "option_type": "booth ticket", "allowed": list(config_structure.TICKET_KEYS), }, @@ -141,7 +141,7 @@ severities.WARNING, report_codes.INVALID_OPTION, { - "option_name": "unknown", + "option_names": ["unknown"], "option_type": "booth ticket", "allowed": list(config_structure.TICKET_KEYS), }, diff -Nru pcs-0.9.155+dfsg/pcs/lib/booth/test/test_env.py pcs-0.9.159/pcs/lib/booth/test/test_env.py --- pcs-0.9.155+dfsg/pcs/lib/booth/test/test_env.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/booth/test/test_env.py 2017-06-30 15:33:01.000000000 +0000 @@ -5,17 +5,13 @@ unicode_literals, ) -import grp -import os -import pwd from pcs.test.tools.pcs_unittest import TestCase -from pcs import settings from pcs.common import report_codes from pcs.lib.booth import env from pcs.lib.errors import ReportItemSeverity as severities from pcs.test.tools.assertions import assert_raise_library_error -from 
pcs.test.tools.misc import get_test_resource as rc, create_patcher +from pcs.test.tools.misc import create_patcher from pcs.test.tools.pcs_unittest import mock patch_env = create_patcher("pcs.lib.booth.env") @@ -109,7 +105,7 @@ "content": "secure", "can_overwrite_existing_file": False, "no_existing_file_expected": False, - "is_binary": False, + "is_binary": True, }, } ) @@ -121,38 +117,25 @@ ) class SetKeyfileAccessTest(TestCase): - def test_set_desired_file_access(self): - #setup - file_path = rc("temp-keyfile") - if os.path.exists(file_path): - os.remove(file_path) - with open(file_path, "w") as file: - file.write("content") - - #check assumptions - stat = os.stat(file_path) - self.assertNotEqual('600', oct(stat.st_mode)[-3:]) - current_user = pwd.getpwuid(os.getuid())[0] - if current_user != settings.pacemaker_uname: - file_user = pwd.getpwuid(stat.st_uid)[0] - self.assertNotEqual(file_user, settings.pacemaker_uname) - current_group = grp.getgrgid(os.getgid())[0] - if current_group != settings.pacemaker_gname: - file_group = grp.getgrgid(stat.st_gid)[0] - self.assertNotEqual(file_group, settings.pacemaker_gname) - - #run tested method + @patch_env("os.chmod") + @patch_env("os.chown") + @patch_env("grp.getgrnam") + @patch_env("pwd.getpwnam") + @patch_env("settings") + def test_do_everything_to_set_desired_file_access( + self, settings, getpwnam, getgrnam, chown, chmod + ): + file_path = "/tmp/some_booth_file" env.set_keyfile_access(file_path) - #check - stat = os.stat(file_path) - self.assertEqual('600', oct(stat.st_mode)[-3:]) + getpwnam.assert_called_once_with(settings.pacemaker_uname) + getgrnam.assert_called_once_with(settings.pacemaker_gname) - file_user = pwd.getpwuid(stat.st_uid)[0] - self.assertEqual(file_user, settings.pacemaker_uname) - - file_group = grp.getgrgid(stat.st_gid)[0] - self.assertEqual(file_group, settings.pacemaker_gname) + chown.assert_called_once_with( + file_path, + getpwnam.return_value.pw_uid, + getgrnam.return_value.gr_gid, + ) @patch_env("pwd.getpwnam", mock.MagicMock(side_effect=KeyError)) @patch_env("settings.pacemaker_uname", "some-user") diff -Nru pcs-0.9.155+dfsg/pcs/lib/booth/test/test_resource.py pcs-0.9.159/pcs/lib/booth/test/test_resource.py --- pcs-0.9.155+dfsg/pcs/lib/booth/test/test_resource.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/booth/test/test_resource.py 2017-06-30 15:33:01.000000000 +0000 @@ -11,7 +11,6 @@ import pcs.lib.booth.resource as booth_resource from pcs.test.tools.pcs_unittest import mock -from pcs.test.tools.misc import get_test_resource as rc def fixture_resources_with_booth(booth_config_file_path): @@ -91,6 +90,9 @@ booth_resource.get_remover(mock_resource_remove)(element_list) return mock_resource_remove + def find_booth_resources(self, tree): + return tree.xpath('.//primitive[@type="booth-site"]') + def test_remove_ip_when_is_only_booth_sibling_in_group(self): group = etree.fromstring(''' @@ -103,7 +105,7 @@ ''') - mock_resource_remove = self.call(group.getchildren()[1:]) + mock_resource_remove = self.call(self.find_booth_resources(group)) self.assertEqual( mock_resource_remove.mock_calls, [ mock.call('ip'), @@ -111,43 +113,73 @@ ] ) -class CreateInClusterTest(TestCase): - def test_remove_ip_when_booth_resource_add_failed(self): - mock_resource_create = mock.Mock(side_effect=[None, SystemExit(1)]) - mock_resource_remove = mock.Mock() - mock_create_id = mock.Mock(side_effect=["ip_id","booth_id","group_id"]) - ip = "1.2.3.4" - booth_config_file_path = rc("/path/to/booth.conf") - - 
booth_resource.get_creator(mock_resource_create, mock_resource_remove)( - ip, - booth_config_file_path, - mock_create_id - ) - self.assertEqual(mock_resource_create.mock_calls, [ - mock.call( - clone_opts=[], - group=u'group_id', - meta_values=[], - op_values=[], - ra_id=u'ip_id', - ra_type=u'ocf:heartbeat:IPaddr2', - ra_values=[u'ip=1.2.3.4'], - ), - mock.call( - clone_opts=[], - group='group_id', - meta_values=[], - op_values=[], - ra_id='booth_id', - ra_type='ocf:pacemaker:booth-site', - ra_values=['config=/path/to/booth.conf'], - ) - ]) - mock_resource_remove.assert_called_once_with("ip_id") + def test_remove_ip_when_group_is_disabled_1(self): + group = etree.fromstring(''' + + + + + + + + + + + + ''') + + mock_resource_remove = self.call(self.find_booth_resources(group)) + self.assertEqual( + mock_resource_remove.mock_calls, [ + mock.call('ip'), + mock.call('booth'), + ] + ) + + def test_remove_ip_when_group_is_disabled_2(self): + group = etree.fromstring(''' + + + + + + + + + + + + ''') + + mock_resource_remove = self.call(self.find_booth_resources(group)) + self.assertEqual( + mock_resource_remove.mock_calls, [ + mock.call('ip'), + mock.call('booth'), + ] + ) + def test_dont_remove_ip_when_group_has_other_resources(self): + group = etree.fromstring(''' + + + + + + + + + + ''') + + mock_resource_remove = self.call(self.find_booth_resources(group)) + self.assertEqual( + mock_resource_remove.mock_calls, [ + mock.call('booth'), + ] + ) -class FindBindedIpTest(TestCase): +class FindBoundIpTest(TestCase): def fixture_resource_section(self, ip_element_list): resources_section = etree.fromstring('') group = etree.SubElement(resources_section, "group") diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/acl.py pcs-0.9.159/pcs/lib/cib/acl.py --- pcs-0.9.155+dfsg/pcs/lib/cib/acl.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/acl.py 2017-06-30 15:33:01.000000000 +0000 @@ -5,40 +5,32 @@ unicode_literals, ) +from functools import partial + from lxml import etree from pcs.lib import reports from pcs.lib.errors import LibraryError from pcs.lib.cib.tools import ( - etree_element_attibutes_to_dict, check_new_id_applicable, does_id_exist, find_unique_id, - get_acls, + find_element_by_tag_and_id, ) +from pcs.lib.xml_tools import etree_element_attibutes_to_dict -class AclError(Exception): - pass - - -class AclRoleNotFound(AclError): - # pylint: disable=super-init-not-called - def __init__(self, role_id): - self.role_id = role_id - - -class AclTargetNotFound(AclError): - # pylint: disable=super-init-not-called - def __init__(self, target_id): - self.target_id = target_id - - -class AclGroupNotFound(AclError): - # pylint: disable=super-init-not-called - def __init__(self, group_id): - self.group_id = group_id - +TAG_GROUP = "acl_group" +TAG_ROLE = "acl_role" +TAG_TARGET = "acl_target" +TAG_PERMISSION = "acl_permission" + +TAG_DESCRIPTION_MAP = { + TAG_GROUP: "group", + TAG_ROLE: "role", + TAG_TARGET: "user", + TAG_PERMISSION: "permission" +} def validate_permissions(tree, permission_info_list): """ @@ -74,35 +66,57 @@ raise LibraryError(*report_items) -def find_role(tree, role_id): - """ - Returns acl_role element with specified role_id in given tree. - Raise AclRoleNotFound if role doesn't exist. 
-
-    tree -- etree node
-    role_id -- id of role
-    """
-    role = tree.find('.//acl_role[@id="{0}"]'.format(role_id))
-    if role is not None:
-        return role
-    raise AclRoleNotFound(role_id)
-
+def _find(
+    tag, acl_section, element_id, none_if_id_unused=False, id_description=None
+):
+    if tag not in TAG_DESCRIPTION_MAP.keys():
+        raise AssertionError("Unknown acl tag '{0}'".format(tag))
+
+    return find_element_by_tag_and_id(
+        tag,
+        acl_section,
+        element_id,
+        id_description=id_description if id_description
+            else TAG_DESCRIPTION_MAP[tag]
+        ,
+        none_if_id_unused=none_if_id_unused,
+    )
 
-def _find_permission(tree, permission_id):
-    """
-    Returns acl_permission element with specified id.
-    Raises LibraryError if that permission doesn't exist.
+find_group = partial(_find, TAG_GROUP)
+find_role = partial(_find, TAG_ROLE)
+find_target = partial(_find, TAG_TARGET)
+
+def find_target_or_group(acl_section, target_or_group_id):
+    """
+    Returns acl_target or acl_group element with id target_or_group_id. The
+    target element takes precedence: if there are both a target and a group
+    with the same id, only the target element will be affected by this
+    function.
+    Raises LibraryError if there is no target or group element with
+    specified id.
+
+    This approach is DEPRECATED and it is there only for backward
+    compatibility reasons. It is better to know explicitly whether we need
+    target(user) or group.
+
+    acl_section -- cib etree node
+    target_or_group_id -- id of target/group element which should be returned
+    """
+    target = find_target(
+        acl_section,
+        target_or_group_id,
+        none_if_id_unused=True
+    )
 
-    tree -- etree node
-    permisson_id -- id of permision element
-    """
-    permission = tree.find(".//acl_permission[@id='{0}']".format(permission_id))
-    if permission is not None:
-        return permission
-    raise LibraryError(reports.id_not_found(permission_id, "permission"))
+    if target is not None:
+        return target
+    return find_group(
+        acl_section,
+        target_or_group_id,
+        id_description="user/group"
+    )
 
 
-def create_role(tree, role_id, description=None):
+def create_role(acl_section, role_id, description=None):
     """
     Create new role element and add it to cib.
     Returns newly created role element.
@@ -110,31 +124,46 @@
     role_id id of desired role
     description role description
     """
-    check_new_id_applicable(tree, "ACL role", role_id)
-    role = etree.SubElement(get_acls(tree), "acl_role", id=role_id)
+    check_new_id_applicable(acl_section, "ACL role", role_id)
+    role = etree.SubElement(acl_section, TAG_ROLE, id=role_id)
     if description:
         role.set("description", description)
     return role
 
 
-def remove_role(tree, role_id, autodelete_users_groups=False):
+def remove_role(acl_section, role_id, autodelete_users_groups=False):
     """
     Remove role with specified id from CIB and all references to it.
 
-    tree -- etree node
+    acl_section -- etree node
     role_id -- id of role to be removed
     autodelete_users_groups -- if True, remove targets with no role after removing
     """
-    acl_role = find_role(tree, role_id)
+    acl_role = find_role(acl_section, role_id)
     acl_role.getparent().remove(acl_role)
-    for role_el in tree.findall(".//role[@id='{0}']".format(role_id)):
+    for role_el in acl_section.findall(".//role[@id='{0}']".format(role_id)):
         role_parent = role_el.getparent()
         role_parent.remove(role_el)
         if autodelete_users_groups and role_parent.find(".//role") is None:
            role_parent.getparent().remove(role_parent)
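With the AclRoleNotFound-style exceptions gone, each finder above is just functools.partial applied to the generic _find. The same specialization pattern, sketched self-contained; a plain find() call stands in for find_element_by_tag_and_id:

    from functools import partial

    from lxml import etree

    # Generic finder specialized per ACL tag, mirroring the partial()
    # pattern in the hunk above.
    def _find(tag, acl_section, element_id):
        return acl_section.find('./{0}[@id="{1}"]'.format(tag, element_id))

    find_role = partial(_find, "acl_role")
    find_target = partial(_find, "acl_target")

    acls = etree.fromstring(
        '<acls><acl_role id="admin"/><acl_target id="alice"/></acls>'
    )
    assert find_role(acls, "admin") is not None
    assert find_target(acls, "alice") is not None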
+
+def _assign_role(acl_section, role_id, target_el):
+    try:
+        role_el = find_role(acl_section, role_id)
+    except LibraryError as e:
+        return list(e.args)
+    assigned_role = target_el.find(
+        "./role[@id='{0}']".format(role_el.get("id"))
+    )
+    if assigned_role is not None:
+        return [reports.acl_role_is_already_assigned_to_target(
+            role_el.get("id"), target_el.get("id")
+        )]
+    etree.SubElement(target_el, "role", {"id": role_el.get("id")})
+    return []
+
 
-def assign_role(target_el, role_el):
+def assign_role(acl_section, role_id, target_el):
     """
     Assign role element to specified target/group element.
     Raise LibraryError if role is already assigned to target/group.
@@ -142,14 +171,25 @@
     target_el -- etree element of target/group to which role should be assigned
     role_el -- etree element of role
     """
-    assigned_role = target_el.find(
-        "./role[@id='{0}']".format(role_el.get("id"))
-    )
-    if assigned_role is not None:
-        raise LibraryError(reports.acl_role_is_already_assigned_to_target(
-            role_el.get("id"), target_el.get("id")
-        ))
-    etree.SubElement(target_el, "role", {"id": role_el.get("id")})
+    report_list = _assign_role(acl_section, role_id, target_el)
+    if report_list:
+        raise LibraryError(*report_list)
+
+
+def assign_all_roles(acl_section, role_id_list, element):
+    """
+    Assign roles from role_id_list to element.
+    Raises LibraryError on any failure.
+
+    acl_section -- cib etree node
+    element -- element to which specified roles should be assigned
+    role_id_list -- list of role id
+    """
+    report_list = []
+    for role_id in role_id_list:
+        report_list.extend(_assign_role(acl_section, role_id, element))
+    if report_list:
+        raise LibraryError(*report_list)
+
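Factoring _assign_role out to return a list of report items lets assign_role keep its raise-immediately behavior, while assign_all_roles above gathers the failures for every role and raises one LibraryError carrying all of them. A standalone sketch of that accumulate-then-raise idiom; this LibraryError is a local stand-in for pcs.lib.errors.LibraryError:

    class LibraryError(Exception):
        pass

    def _assign_role(assigned_ids, role_id):
        # Return report items instead of raising, so callers can batch them.
        if role_id in assigned_ids:
            return ["role '{0}' is already assigned".format(role_id)]
        assigned_ids.add(role_id)
        return []

    def assign_all_roles(assigned_ids, role_id_list):
        report_list = []
        for role_id in role_id_list:
            report_list.extend(_assign_role(assigned_ids, role_id))
        if report_list:
            # One exception carrying every collected report item.
            raise LibraryError(*report_list)

    try:
        assign_all_roles({"admin"}, ["admin", "viewer", "admin"])
    except LibraryError as e:
        print(e.args)  # both failures reported together, not just the first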
 
 def unassign_role(target_el, role_id, autodelete_target=False):
@@ -172,101 +212,67 @@
         target_el.getparent().remove(target_el)
 
 
-def find_target(tree, target_id):
-    """
-    Return acl_target etree element with specified id.
-    Raise AclTargetNotFound if target with specified id doesn't exist.
-
-    tree -- etree node
-    target_id -- id of target to find
-    """
-    role = get_acls(tree).find('./acl_target[@id="{0}"]'.format(target_id))
-    if role is None:
-        raise AclTargetNotFound(target_id)
-    return role
-
-
-def find_group(tree, group_id):
-    """
-    Returns acl_group etree element with specified id.
-    Raise AclGroupNotFound if group with group_id doesn't exist.
-
-    tree -- etree node
-    group_id -- id of group to find
-    """
-    role = get_acls(tree).find('./acl_group[@id="{0}"]'.format(group_id))
-    if role is None:
-        raise AclGroupNotFound(group_id)
-    return role
-
-
-def provide_role(tree, role_id):
+def provide_role(acl_section, role_id):
     """
     Returns role with id role_id. If it doesn't exist, it will be created.
 
     role_id id of desired role
     """
-    try:
-        return find_role(tree, role_id)
-    except AclRoleNotFound:
-        return create_role(tree, role_id)
+    role = find_role(acl_section, role_id, none_if_id_unused=True)
+    return role if role is not None else create_role(acl_section, role_id)
 
 
-def create_target(tree, target_id):
+def create_target(acl_section, target_id):
     """
     Creates new acl_target element with id target_id.
     Raises LibraryError if target with specified id already exists.
 
-    tree -- etree node
+    acl_section -- etree node
    target_id -- id of new target
    """
-    acl_el = get_acls(tree)
     # id of element acl_target is not type ID in CIB ACL schema so we don't need
     # to check if it is unique ID in whole CIB
-    if acl_el.find("./acl_target[@id='{0}']".format(target_id)) is not None:
+    if(
+        acl_section.find("./{0}[@id='{1}']".format(TAG_TARGET, target_id))
+        is not None
+    ):
         raise LibraryError(reports.acl_target_already_exists(target_id))
-    return etree.SubElement(get_acls(tree), "acl_target", id=target_id)
+    return etree.SubElement(acl_section, TAG_TARGET, id=target_id)
 
 
-def create_group(tree, group_id):
+def create_group(acl_section, group_id):
     """
     Creates new acl_group element with specified id.
     Raises LibraryError if tree contains element with id group_id.
 
-    tree -- etree node
+    acl_section -- etree node
     group_id -- id of new group
     """
-    check_new_id_applicable(tree, "ACL group", group_id)
-    return etree.SubElement(get_acls(tree), "acl_group", id=group_id)
+    check_new_id_applicable(acl_section, "ACL group", group_id)
+    return etree.SubElement(acl_section, TAG_GROUP, id=group_id)
 
 
-def remove_target(tree, target_id):
+def remove_target(acl_section, target_id):
     """
-    Removes acl_target element from tree with specified id.
+    Removes acl_target element from acl_section with specified id.
     Raises LibraryError if target with id target_id doesn't exist.
 
-    tree -- etree node
    target_id -- id of target element to remove
+    acl_section -- etree node
    """
-    try:
-        target = find_target(tree, target_id)
-        target.getparent().remove(target)
-    except AclTargetNotFound:
-        raise LibraryError(reports.id_not_found(target_id, "user"))
+    target = find_target(acl_section, target_id)
+    target.getparent().remove(target)
 
 
-def remove_group(tree, group_id):
+def remove_group(acl_section, group_id):
     """
     Removes acl_group element from tree with specified id.
     Raises LibraryError if group with id group_id doesn't exist.
 
-    tree -- etree node
+    acl_section -- etree node
     group_id -- id of group element to remove
     """
-    try:
-        group = find_group(tree, group_id)
-        group.getparent().remove(group)
-    except AclGroupNotFound:
-        raise LibraryError(reports.id_not_found(group_id, "group"))
+    group = find_group(acl_section, group_id)
+    group.getparent().remove(group)
 
 
 def add_permissions_to_role(role_el, permission_info_list):
@@ -294,20 +300,20 @@
         perm.set(area_type_attribute_map[scope_type], scope)
 
 
-def remove_permission(tree, permission_id):
+def remove_permission(acl_section, permission_id):
     """
-    Remove permission with id permission_id from tree.
+    Remove permission with id permission_id from acl_section.
 
-    tree -- etree node
     permission_id -- id of permission element to be removed
+    acl_section -- etree node
     """
-    permission = _find_permission(tree, permission_id)
+    permission = _find(TAG_PERMISSION, acl_section, permission_id)
     permission.getparent().remove(permission)
 
 
-def get_role_list(tree):
+def get_role_list(acl_section):
     """
-    Returns list of all acl_role elements from tree.
+    Returns list of all acl_role elements from acl_section.
Format of items of output list: { "id": , @@ -315,10 +321,10 @@ "permission_list": [, ...] } - tree -- etree node + acl_section -- etree node """ output_list = [] - for role_el in get_acls(tree).findall("./acl_role"): + for role_el in acl_section.findall("./{0}".format(TAG_ROLE)): role = etree_element_attibutes_to_dict( role_el, ["id", "description"] ) @@ -356,7 +362,7 @@ return output_list -def get_target_list(tree): +def get_target_list(acl_section): """ Returns list of acl_target elements in format: { @@ -364,12 +370,12 @@ "role_list": [, ...] } - tree -- etree node + acl_section -- etree node """ - return _get_target_like_list_with_tag(tree, "acl_target") + return get_target_like_list(acl_section, TAG_TARGET) -def get_group_list(tree): +def get_group_list(acl_section): """ Returns list of acl_group elements in format: { @@ -377,14 +383,14 @@ "role_list": [, ...] } - tree -- etree node + acl_section -- etree node """ - return _get_target_like_list_with_tag(tree, "acl_group") + return get_target_like_list(acl_section, TAG_GROUP) -def _get_target_like_list_with_tag(tree, tag): +def get_target_like_list(acl_section, tag): output_list = [] - for target_el in get_acls(tree).findall("./{0}".format(tag)): + for target_el in acl_section.findall("./{0}".format(tag)): output_list.append({ "id": target_el.get("id"), "role_list": _get_role_list_of_target(target_el), @@ -421,13 +427,3 @@ for permission in dom.getElementsByTagName("acl_permission"): if permission.getAttribute("reference") == reference: permission.parentNode.removeChild(permission) - - -def acl_error_to_report_item(e): - if e.__class__ == AclTargetNotFound: - return reports.id_not_found(e.target_id, "user") - elif e.__class__ == AclGroupNotFound: - return reports.id_not_found(e.group_id, "group") - elif e.__class__ == AclRoleNotFound: - return reports.id_not_found(e.role_id, "role") - raise e diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/alert.py pcs-0.9.159/pcs/lib/cib/alert.py --- pcs-0.9.155+dfsg/pcs/lib/cib/alert.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/alert.py 2017-06-30 15:33:01.000000000 +0000 @@ -10,16 +10,22 @@ from pcs.common import report_codes from pcs.lib import reports -from pcs.lib.errors import LibraryError, ReportItemSeverity as Severities +from pcs.lib.errors import ReportItemSeverity as Severities from pcs.lib.cib.nvpair import arrange_first_nvset, get_nvset from pcs.lib.cib.tools import ( check_new_id_applicable, - get_sub_element, find_unique_id, get_alerts, validate_id_does_not_exist, + find_element_by_tag_and_id, ) +from pcs.lib.xml_tools import get_sub_element +TAG_ALERT = "alert" +TAG_RECIPIENT = "recipient" + +find_alert = partial(find_element_by_tag_and_id, TAG_ALERT) +find_recipient = partial(find_element_by_tag_and_id, TAG_RECIPIENT) update_instance_attributes = partial( arrange_first_nvset, @@ -43,38 +49,6 @@ elif attribute in element.attrib: del element.attrib[attribute] - -def get_alert_by_id(tree, alert_id): - """ - Returns alert element with specified id. - Raises LibraryError if alert with specified id doesn't exist. - - tree -- cib etree node - alert_id -- id of alert - """ - alert = get_alerts(tree).find("./alert[@id='{0}']".format(alert_id)) - if alert is None: - raise LibraryError(reports.cib_alert_not_found(alert_id)) - return alert - - -def get_recipient_by_id(tree, recipient_id): - """ - Returns recipient element with value recipient_value which belong to - specified alert. - Raises LibraryError if recipient doesn't exist. 
- - tree -- cib etree node - recipient_id -- id of recipient - """ - recipient = get_alerts(tree).find( - "./alert/recipient[@id='{0}']".format(recipient_id) - ) - if recipient is None: - raise LibraryError(reports.id_not_found(recipient_id, "Recipient")) - return recipient - - def ensure_recipient_value_is_unique( reporter, alert, recipient_value, recipient_id="", allow_duplicity=False ): @@ -138,7 +112,7 @@ description -- new value of description, stay unchanged if None, remove if empty """ - alert = get_alert_by_id(tree, alert_id) + alert = find_alert(get_alerts(tree), alert_id) if path: alert.set("path", path) _update_optional_attribute(alert, "description", description) @@ -153,7 +127,7 @@ tree -- cib etree node alert_id -- id of alert which should be removed """ - alert = get_alert_by_id(tree, alert_id) + alert = find_alert(get_alerts(tree), alert_id) alert.getparent().remove(alert) @@ -184,7 +158,7 @@ else: validate_id_does_not_exist(tree, recipient_id) - alert = get_alert_by_id(tree, alert_id) + alert = find_alert(get_alerts(tree), alert_id) ensure_recipient_value_is_unique( reporter, alert, recipient_value, allow_duplicity=allow_same_value ) @@ -218,7 +192,7 @@ if None allow_same_value -- if True unique recipient value is not required """ - recipient = get_recipient_by_id(tree, recipient_id) + recipient = find_recipient(get_alerts(tree), recipient_id) if recipient_value is not None: ensure_recipient_value_is_unique( reporter, @@ -240,7 +214,7 @@ tree -- cib etree node recipient_id -- id of recipient to be removed """ - recipient = get_recipient_by_id(tree, recipient_id) + recipient = find_recipient(get_alerts(tree), recipient_id) recipient.getparent().remove(recipient) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/constraint/colocation.py pcs-0.9.159/pcs/lib/cib/constraint/colocation.py --- pcs-0.9.155+dfsg/pcs/lib/cib/constraint/colocation.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/constraint/colocation.py 2017-06-30 15:33:01.000000000 +0000 @@ -11,7 +11,7 @@ from pcs.lib.cib.constraint import constraint from pcs.lib.cib.tools import check_new_id_applicable from pcs.lib.errors import LibraryError -from pcs.lib.pacemaker_values import is_score_value, SCORE_INFINITY +from pcs.lib.pacemaker.values import is_score, SCORE_INFINITY TAG_NAME = 'rsc_colocation' DESCRIPTION = "constraint id" @@ -25,7 +25,7 @@ partial(check_new_id_applicable, cib, DESCRIPTION), ) - if "score" in options and not is_score_value(options["score"]): + if "score" in options and not is_score(options["score"]): raise LibraryError(reports.invalid_score(options["score"])) score_attrs_count = len([ diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/constraint/constraint.py pcs-0.9.159/pcs/lib/cib/constraint/constraint.py --- pcs-0.9.155+dfsg/pcs/lib/cib/constraint/constraint.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/constraint/constraint.py 2017-06-30 15:33:01.000000000 +0000 @@ -12,29 +12,42 @@ from pcs.lib import reports from pcs.lib.cib import resource from pcs.lib.cib.constraint import resource_set -from pcs.lib.cib.tools import export_attributes, find_unique_id, find_parent +from pcs.lib.cib.tools import ( + find_unique_id, + find_element_by_tag_and_id, +) from pcs.lib.errors import LibraryError, ReportItemSeverity +from pcs.lib.xml_tools import ( + export_attributes, + find_parent, +) def _validate_attrib_names(attrib_names, options): - for option_name in options.keys(): - if option_name not in attrib_names: - raise LibraryError( - reports.invalid_option(option_name, attrib_names, 
None) - ) + invalid_names = [ + name for name in options.keys() + if name not in attrib_names + ] + if invalid_names: + raise LibraryError( + reports.invalid_option(invalid_names, attrib_names, None) + ) def find_valid_resource_id( report_processor, cib, can_repair_to_clone, in_clone_allowed, id ): - resource_element = resource.find_by_id(cib, id) + parent_tags = resource.clone.ALL_TAGS + [resource.bundle.TAG] + resource_element = find_element_by_tag_and_id( + parent_tags + [resource.primitive.TAG, resource.group.TAG], + cib, + id, + id_description="resource" + ) - if(resource_element is None): - raise LibraryError(reports.resource_does_not_exist(id)) - - if resource_element.tag in resource.TAGS_CLONE: + if resource_element.tag in parent_tags: return resource_element.attrib["id"] - clone = find_parent(resource_element, resource.TAGS_CLONE) + clone = find_parent(resource_element, parent_tags) if clone is None: return resource_element.attrib["id"] diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/constraint/resource_set.py pcs-0.9.159/pcs/lib/cib/constraint/resource_set.py --- pcs-0.9.155+dfsg/pcs/lib/cib/constraint/resource_set.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/constraint/resource_set.py 2017-06-30 15:33:01.000000000 +0000 @@ -8,11 +8,9 @@ from lxml import etree from pcs.lib import reports -from pcs.lib.cib.tools import ( - find_unique_id, - export_attributes, -) +from pcs.lib.cib.tools import find_unique_id from pcs.lib.errors import LibraryError +from pcs.lib.xml_tools import export_attributes ATTRIB = { "sequential": ("true", "false"), @@ -35,7 +33,7 @@ for name, value in options.items(): if name not in ATTRIB: raise LibraryError( - reports.invalid_option(name, list(ATTRIB.keys()), None) + reports.invalid_option([name], list(ATTRIB.keys()), None) ) if value not in ATTRIB[name]: raise LibraryError( diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/constraint/ticket.py pcs-0.9.159/pcs/lib/cib/constraint/ticket.py --- pcs-0.9.155+dfsg/pcs/lib/cib/constraint/ticket.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/constraint/ticket.py 2017-06-30 15:33:01.000000000 +0000 @@ -54,7 +54,7 @@ ) report = _validate_options_common(options) if "ticket" not in options or not options["ticket"].strip(): - report.append(reports.required_option_is_missing('ticket')) + report.append(reports.required_option_is_missing(['ticket'])) if report: raise LibraryError(*report) return options @@ -65,11 +65,11 @@ report = _validate_options_common(options) if not ticket: - report.append(reports.required_option_is_missing('ticket')) + report.append(reports.required_option_is_missing(['ticket'])) options["ticket"] = ticket if not resource_id: - report.append(reports.required_option_is_missing('rsc')) + report.append(reports.required_option_is_missing(['rsc'])) options["rsc"] = resource_id if "rsc-role" in options: diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/fencing_topology.py pcs-0.9.159/pcs/lib/cib/fencing_topology.py --- pcs-0.9.155+dfsg/pcs/lib/cib/fencing_topology.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/fencing_topology.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,322 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree + +from pcs.common import report_codes +from pcs.common.fencing_topology import ( + TARGET_TYPE_NODE, + TARGET_TYPE_REGEXP, + TARGET_TYPE_ATTRIBUTE, +) +from pcs.lib import reports +from pcs.lib.cib.stonith import is_stonith_resource +from pcs.lib.cib.tools import 
find_unique_id
+from pcs.lib.errors import ReportItemSeverity
+from pcs.lib.pacemaker.values import sanitize_id, validate_id
+
+def add_level(
+    reporter, topology_el, resources_el, level, target_type, target_value,
+    devices, cluster_status_nodes, force_device=False, force_node=False
+):
+    """
+    Validate and add a new fencing level. Raise LibraryError if not valid.
+
+    object reporter -- report processor
+    etree topology_el -- etree element to add the level to
+    etree resources_el -- etree element with resources definitions
+    int|string level -- level (index) of the new fencing level
+    constant target_type -- the new fencing level target value type
+    mixed target_value -- the new fencing level target value
+    Iterable devices -- list of stonith devices for the new fencing level
+    Iterable cluster_status_nodes -- list of status of existing cluster nodes
+    bool force_device -- continue even if a stonith device does not exist
+    bool force_node -- continue even if a node (target) does not exist
+    """
+    valid_level = _validate_level(reporter, level)
+    _validate_target(
+        reporter, cluster_status_nodes, target_type, target_value, force_node
+    )
+    _validate_devices(reporter, resources_el, devices, force_device)
+    reporter.send()
+    _validate_level_target_devices_does_not_exist(
+        reporter, topology_el, level, target_type, target_value, devices
+    )
+    reporter.send()
+    _append_level_element(
+        topology_el, valid_level, target_type, target_value, devices
+    )
+
+def remove_all_levels(topology_el):
+    """
+    Remove all fencing levels.
+    etree topology_el -- etree element to remove the levels from
+    """
+    for level_el in topology_el.findall("fencing-level"):
+        level_el.getparent().remove(level_el)
+
+def remove_levels_by_params(
+    reporter, topology_el, level=None, target_type=None, target_value=None,
+    devices=None, ignore_if_missing=False
+):
+    """
+    Remove specified fencing level(s). Raise LibraryError if not found.
+
+    object reporter -- report processor
+    etree topology_el -- etree element to remove the levels from
+    int|string level -- level (index) of the fencing level to remove
+    constant target_type -- the fencing level target value type
+    mixed target_value -- the fencing level target value
+    Iterable devices -- list of stonith devices of the fencing level
+    bool ignore_if_missing -- when True, do not raise if level not found
+    """
+    if target_type:
+        _validate_target_typewise(reporter, target_type)
+        reporter.send()
+
+    level_el_list = _find_level_elements(
+        topology_el, level, target_type, target_value, devices
+    )
+
+    if not level_el_list:
+        if ignore_if_missing:
+            return
+        reporter.process(reports.fencing_level_does_not_exist(
+            level, target_type, target_value, devices
+        ))
+    for el in level_el_list:
+        el.getparent().remove(el)
+
+def remove_device_from_all_levels(topology_el, device_id):
+    """
+    Remove specified stonith device from all fencing levels.
+
+    etree topology_el -- etree element with levels to remove the device from
+    string device_id -- stonith device to remove
+    """
+    for level_el in topology_el.findall("fencing-level"):
+        new_devices = [
+            dev
+            for dev in level_el.get("devices").split(",")
+            if dev != device_id
+        ]
+        if new_devices:
+            level_el.set("devices", ",".join(new_devices))
+        else:
+            level_el.getparent().remove(level_el)
+
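remove_device_from_all_levels above rewrites the comma-separated devices attribute of every fencing-level and drops any level that would be left with no devices. A runnable sketch of that pruning logic on a plain lxml tree:

    from lxml import etree

    # Sketch of the devices-list pruning done by
    # remove_device_from_all_levels in the hunk above.
    def remove_device(topology_el, device_id):
        for level_el in topology_el.findall("fencing-level"):
            new_devices = [
                dev for dev in level_el.get("devices").split(",")
                if dev != device_id
            ]
            if new_devices:
                level_el.set("devices", ",".join(new_devices))
            else:
                level_el.getparent().remove(level_el)

    topology = etree.fromstring(
        '<fencing-topology>'
        '<fencing-level index="1" target="node1" devices="f1,f2"/>'
        '<fencing-level index="2" target="node1" devices="f2"/>'
        '</fencing-topology>'
    )
    remove_device(topology, "f2")
    # level 1 keeps "f1"; level 2 is removed entirely
    print(etree.tostring(topology).decode())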
+def export(topology_el):
+    """
+    Export all fencing levels.
+
+    Return a list of levels where each level is a dict with keys: target_type,
+    target_value, level and devices. Devices is a list of stonith device ids.
+
+    etree topology_el -- etree element to export
+    """
+    export_levels = []
+    for level_el in topology_el.iterfind("fencing-level"):
+        target_type = target_value = None
+        if "target" in level_el.attrib:
+            target_type = TARGET_TYPE_NODE
+            target_value = level_el.get("target")
+        elif "target-pattern" in level_el.attrib:
+            target_type = TARGET_TYPE_REGEXP
+            target_value = level_el.get("target-pattern")
+        elif "target-attribute" in level_el.attrib:
+            target_type = TARGET_TYPE_ATTRIBUTE
+            target_value = (
+                level_el.get("target-attribute"),
+                level_el.get("target-value")
+            )
+        if target_type and target_value:
+            export_levels.append({
+                "target_type": target_type,
+                "target_value": target_value,
+                "level": level_el.get("index"),
+                "devices": level_el.get("devices").split(",")
+            })
+    return export_levels
+
+def verify(reporter, topology_el, resources_el, cluster_status_nodes):
+    """
+    Check if all cluster nodes and stonith devices used in fencing levels exist.
+
+    All errors are stored into the passed reporter. The calling function is
+    responsible for processing the report.
+
+    object reporter -- report processor
+    etree topology_el -- etree element with fencing levels to check
+    etree resources_el -- etree element with resources definitions
+    Iterable cluster_status_nodes -- list of status of existing cluster nodes
+    """
+    used_nodes = set()
+    used_devices = set()
+
+    for level_el in topology_el.iterfind("fencing-level"):
+        used_devices.update(level_el.get("devices").split(","))
+        if "target" in level_el.attrib:
+            used_nodes.add(level_el.get("target"))
+
+    if used_devices:
+        _validate_devices(
+            reporter,
+            resources_el,
+            sorted(used_devices),
+            allow_force=False
+        )
+
+    for node in sorted(used_nodes):
+        _validate_target_valuewise(
+            reporter,
+            cluster_status_nodes,
+            TARGET_TYPE_NODE,
+            node,
+            allow_force=False
+        )
+
+def _validate_level(reporter, level):
+    try:
+        candidate = int(level)
+        if candidate > 0:
+            return candidate
+    except ValueError:
+        pass
+    reporter.append(
+        reports.invalid_option_value("level", level, "a positive integer")
+    )
+
+def _validate_target(
+    reporter, cluster_status_nodes, target_type, target_value,
+    force_node=False
+):
+    _validate_target_typewise(reporter, target_type)
+    _validate_target_valuewise(
+        reporter, cluster_status_nodes, target_type, target_value, force_node
+    )
+
+def _validate_target_typewise(reporter, target_type):
+    if target_type not in [
+        TARGET_TYPE_NODE, TARGET_TYPE_ATTRIBUTE, TARGET_TYPE_REGEXP
+    ]:
+        reporter.append(reports.invalid_option_type(
+            "target",
+            ["node", "regular expression", "attribute_name=value"]
+        ))
+
+def _validate_target_valuewise(
+    reporter, cluster_status_nodes, target_type, target_value, force_node=False,
+    allow_force=True
+):
+    if target_type == TARGET_TYPE_NODE:
+        node_found = False
+        for node in cluster_status_nodes:
+            if target_value == node.attrs.name:
+                node_found = True
+                break
+        if not node_found:
+            reporter.append(
+                reports.node_not_found(
+                    target_value,
+                    severity=ReportItemSeverity.WARNING
+                        if force_node and allow_force
+                        else ReportItemSeverity.ERROR
+                    ,
+                    forceable=None if force_node or not allow_force
+                        else report_codes.FORCE_NODE_DOES_NOT_EXIST
+                )
+            )
+
+def _validate_devices(
+    reporter, resources_el, devices, force_device=False, allow_force=True
+):
+    if not devices:
+        reporter.append(
+            reports.required_option_is_missing(["stonith devices"])
+        )
+    invalid_devices = []
+    for dev in devices:
+        errors = reporter.errors_count
+        validate_id(dev, description="device id", reporter=reporter)
+        if
reporter.errors_count > errors: + continue + # TODO use the new finding function + if not is_stonith_resource(resources_el, dev): + invalid_devices.append(dev) + if invalid_devices: + reporter.append( + reports.stonith_resources_do_not_exist( + invalid_devices, + ReportItemSeverity.WARNING if force_device and allow_force + else ReportItemSeverity.ERROR + , + None if force_device or not allow_force + else report_codes.FORCE_STONITH_RESOURCE_DOES_NOT_EXIST + ) + ) + +def _validate_level_target_devices_does_not_exist( + reporter, tree, level, target_type, target_value, devices +): + if _find_level_elements(tree, level, target_type, target_value, devices): + reporter.append( + reports.fencing_level_already_exists( + level, target_type, target_value, devices + ) + ) + +def _append_level_element(tree, level, target_type, target_value, devices): + level_el = etree.SubElement( + tree, + "fencing-level", + index=str(level), + devices=",".join(devices) + ) + if target_type == TARGET_TYPE_NODE: + level_el.set("target", target_value) + id_part = target_value + elif target_type == TARGET_TYPE_REGEXP: + level_el.set("target-pattern", target_value) + id_part = target_value + elif target_type == TARGET_TYPE_ATTRIBUTE: + level_el.set("target-attribute", target_value[0]) + level_el.set("target-value", target_value[1]) + id_part = target_value[0] + level_el.set( + "id", + find_unique_id(tree, sanitize_id("fl-{0}-{1}".format(id_part, level))) + ) + return level_el + +def _find_level_elements( + tree, level=None, target_type=None, target_value=None, devices=None +): + xpath_target = "" + if target_type and target_value: + if target_type == TARGET_TYPE_NODE: + xpath_target = "@target='{0}'".format(target_value) + elif target_type == TARGET_TYPE_REGEXP: + xpath_target = "@target-pattern='{0}'".format(target_value) + elif target_type == TARGET_TYPE_ATTRIBUTE: + xpath_target = ( + "@target-attribute='{0}' and @target-value='{1}'".format( + target_value[0], target_value[1] + ) + ) + xpath_devices = "" + if devices: + xpath_devices = "@devices='{0}'".format(",".join(devices)) + xpath_level = "" + if level: + xpath_level = "@index='{0}'".format(level) + + xpath_attrs = " and ".join( + filter(None, [xpath_level, xpath_devices, xpath_target]) + ) + if xpath_attrs: + return tree.xpath("fencing-level[{0}]".format(xpath_attrs)) + return tree.findall("fencing-level") diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/node.py pcs-0.9.159/pcs/lib/cib/node.py --- pcs-0.9.155+dfsg/pcs/lib/cib/node.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/node.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,91 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree + +from pcs.lib import reports +from pcs.lib.cib.nvpair import update_nvset +from pcs.lib.cib.tools import get_nodes, find_unique_id +from pcs.lib.errors import LibraryError + + +def update_node_instance_attrs(cib, node_name, attrs, state_nodes=None): + """ + Update nvpairs in instance_attributes for a node specified by its name. + + Automatically creates instance_attributes element if needed. If the node has + more than one instance_attributes element, the first one is modified. If the + node is missing in the CIB, it is automatically created if its state is + provided in state_nodes. + + etree cib -- cib + string node_name -- name of the node to be updated + dict attrs -- attrs to update, e.g. 
{'A': 'a', 'B': ''} + iterable state_nodes -- optional list of node state objects + """ + node_el = _ensure_node_exists(get_nodes(cib), node_name, state_nodes) + # If no instance_attributes id is specified, crm_attribute modifies the + # first one found. So we just mimic this behavior here. + attrs_el = node_el.find("./instance_attributes") + if attrs_el is None: + attrs_el = etree.SubElement( + node_el, + "instance_attributes", + id=find_unique_id(cib, "nodes-{0}".format(node_el.get("id"))) + ) + update_nvset(attrs_el, attrs) + +def _ensure_node_exists(tree, node_name, state_nodes=None): + """ + Make sure node with specified name exists in the tree. + + If the node doesn't exist, raise LibraryError. If state_nodes is provided + and contains state of a node with the specified name, create the node in + the tree. Return existing or created node element. + + etree tree -- node parent element + string name -- node name + iterable state_nodes -- optional list of node state objects + """ + node_el = _get_node_by_uname(tree, node_name) + if node_el is None and state_nodes: + for node_state in state_nodes: + if node_state.attrs.name == node_name: + node_el = _create_node( + tree, + node_state.attrs.id, + node_state.attrs.name, + node_state.attrs.type + ) + break + if node_el is None: + raise LibraryError(reports.node_not_found(node_name)) + return node_el + +def _get_node_by_uname(tree, uname): + """ + Return a node element with specified uname in the tree or None if not found + + etree tree -- node parent element + string uname -- node name + """ + return tree.find("./node[@uname='{0}']".format(uname)) + +def _create_node(tree, node_id, uname, node_type=None): + """ + Create new node element as a direct child of the tree element + + etree tree -- node parent element + string node_id -- node id + string uname -- node name + string node_type -- optional node type (normal, member, ping, remote) + """ + node = etree.SubElement(tree, "node", id=node_id, uname=uname) + if node_type: + node.set("type", node_type) + return node + diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/nvpair.py pcs-0.9.159/pcs/lib/cib/nvpair.py --- pcs-0.9.155+dfsg/pcs/lib/cib/nvpair.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/nvpair.py 2017-06-30 15:33:01.000000000 +0000 @@ -6,12 +6,27 @@ ) from lxml import etree +from functools import partial -from pcs.lib.cib.tools import ( - get_sub_element, - create_subelement_id, -) +from pcs.lib.cib.tools import create_subelement_id +from pcs.lib.xml_tools import get_sub_element + +def _append_new_nvpair(nvset_element, name, value, id_provider=None): + """ + Create nvpair with name and value as subelement of nvset_element. 
+ etree.Element nvset_element is context of new nvpair + string name is name attribute of new nvpair + string value is value attribute of new nvpair + IdProvider id_provider -- elements' ids generator + """ + etree.SubElement( + nvset_element, + "nvpair", + id=create_subelement_id(nvset_element, name, id_provider), + name=name, + value=value + ) def set_nvpair_in_nvset(nvset_element, name, value): """ @@ -25,38 +40,29 @@ nvpair = nvset_element.find("./nvpair[@name='{0}']".format(name)) if nvpair is None: if value: - etree.SubElement( - nvset_element, - "nvpair", - id=create_subelement_id(nvset_element, name), - name=name, - value=value - ) + _append_new_nvpair(nvset_element, name, value) else: if value: nvpair.set("value", value) else: nvset_element.remove(nvpair) -def arrange_first_nvset(tag_name, context_element, attribute_dict): +def arrange_first_nvset(tag_name, context_element, nvpair_dict): """ - Arrange to context_element contains some nvset (with tag_name) with nvpairs - corresponing to attribute_dict. + Put nvpairs to the first tag_name nvset in the context_element. + + If the nvset does not exist, it will be created. - WARNING: does not solve multiple nvset (with tag_name) under - context_element! Consider carefully if this is your case. Probably not. + WARNING: does not solve multiple nvsets (with the same tag_name) in the + context_element! Consider carefully if this is your use case. Probably not. There could be more than one nvset. This function is DEPRECATED. Try to use update_nvset etc. - This method updates nvset specified by tag_name. If specified nvset - doesn't exist it will be created. Returns updated nvset element or None if - attribute_dict is empty. - - tag_name -- tag name of nvset element - context_element -- parent element of nvset - attribute_dict -- dictionary of nvpairs + string tag_name -- tag name of nvset element + etree context_element -- parent element of nvset + dict nvpair_dict -- dictionary of nvpairs """ - if not attribute_dict: + if not nvpair_dict: return nvset_element = get_sub_element( @@ -66,11 +72,48 @@ new_index=0 ) - update_nvset(nvset_element, attribute_dict) + update_nvset(nvset_element, nvpair_dict) + +def append_new_nvset(tag_name, context_element, nvpair_dict, id_provider=None): + """ + Append new nvset_element comprising nvpairs children (corresponding + nvpair_dict) to the context_element + + string tag_name should be "instance_attributes" or "meta_attributes" + etree.Element context_element is element where new nvset will be appended + dict nvpair_dict contains source for nvpair children + IdProvider id_provider -- elements' ids generator + """ + nvset_element = etree.SubElement(context_element, tag_name, { + "id": create_subelement_id(context_element, tag_name, id_provider) + }) + for name, value in sorted(nvpair_dict.items()): + _append_new_nvpair(nvset_element, name, value, id_provider) + +append_new_instance_attributes = partial( + append_new_nvset, + "instance_attributes" +) + +append_new_meta_attributes = partial( + append_new_nvset, + "meta_attributes" +) -def update_nvset(nvset_element, attribute_dict): - for name, value in sorted(attribute_dict.items()): +def update_nvset(nvset_element, nvpair_dict): + """ + Add, remove or update nvpairs according to nvpair_dict into nvset_element + + If the resulting nvset is empty, it will be removed. 
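+
+    Example (illustrative): update_nvset(el, {"a": "1", "b": ""}) creates or
+    updates the nvpair "a" and removes the nvpair "b" if it is present.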
+ + etree nvset_element -- container where nvpairs are set + dict nvpair_dict -- contains source for nvpair children + """ + for name, value in sorted(nvpair_dict.items()): set_nvpair_in_nvset(nvset_element, name, value) + # remove an empty nvset + if not list(nvset_element): + nvset_element.getparent().remove(nvset_element) def get_nvset(nvset): """ @@ -94,3 +137,43 @@ "value": nvpair.get("value", "") }) return nvpair_list + +def get_value(tag_name, context_element, name, default=None): + """ + Return value from nvpair. + + WARNING: does not solve multiple nvsets (with the same tag_name) in the + context_element nor multiple nvpair with the same name + + string tag_name should be "instance_attributes" or "meta_attributes" + etree.Element context_element is searched element + string name specify nvpair name + """ + value_list = context_element.xpath(""" + ./{0} + /nvpair[ + @name="{1}" + and + string-length(@value) > 0 + ] + /@value + """.format(tag_name, name)) + return value_list[0] if value_list else default + +def has_meta_attribute(resource_el, name): + """ + Return if the element contains meta attribute 'name' + + etree.Element resource_el is researched element + string name specifies attribute + """ + return 0 < len(resource_el.xpath( + './meta_attributes/nvpair[@name="{0}"]'.format(name) + )) + +arrange_first_meta_attributes = partial( + arrange_first_nvset, + "meta_attributes" +) + +get_meta_attribute_value = partial(get_value, "meta_attributes") diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/resource/bundle.py pcs-0.9.159/pcs/lib/cib/resource/bundle.py --- pcs-0.9.155+dfsg/pcs/lib/cib/resource/bundle.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/resource/bundle.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,529 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree + +from pcs.common import report_codes +from pcs.lib import reports, validate +from pcs.lib.cib.nvpair import ( + append_new_meta_attributes, + arrange_first_meta_attributes, +) +from pcs.lib.cib.resource.primitive import TAG as TAG_PRIMITIVE +from pcs.lib.cib.tools import find_element_by_tag_and_id +from pcs.lib.errors import ( + LibraryError, + ReportListAnalyzer, +) +from pcs.lib.pacemaker.values import sanitize_id +from pcs.lib.xml_tools import ( + get_sub_element, + update_attributes_remove_empty, +) + +TAG = "bundle" + +_docker_options = set(( + "image", + "masters", + "network", + "options", + "run-command", + "replicas", + "replicas-per-host", +)) + +_network_options = set(( + "control-port", + "host-interface", + "host-netmask", + "ip-range-start", +)) + +def is_bundle(resource_el): + return resource_el.tag == TAG + +def validate_new( + id_provider, bundle_id, container_type, container_options, network_options, + port_map, storage_map, force_options=False +): + """ + Validate new bundle parameters, return list of report items + + IdProvider id_provider -- elements' ids generator and uniqueness checker + string bundle_id -- id of the bundle + string container_type -- bundle container type + dict container_options -- container options + dict network_options -- network options + list of dict port_map -- list of port mapping options + list of dict storage_map -- list of storage mapping options + bool force_options -- return warnings instead of forceable errors + """ + report_list = [] + + report_list.extend( + validate.run_collection_of_option_validators( + {"id": bundle_id}, + [ + # with id_provider it validates that the id is 
available as well + validate.value_id("id", "bundle name", id_provider), + ] + ) + ) + + aux_reports = _validate_container_type(container_type) + report_list.extend(aux_reports) + if not ReportListAnalyzer(aux_reports).error_list: + report_list.extend( + # TODO call the proper function once more container_types are + # supported by pacemaker + _validate_container_docker_options_new( + container_options, + force_options + ) + ) + report_list.extend( + _validate_network_options_new(network_options, force_options) + ) + report_list.extend( + _validate_port_map_list(port_map, id_provider, force_options) + ) + report_list.extend( + _validate_storage_map_list(storage_map, id_provider, force_options) + ) + + return report_list + +def append_new( + parent_element, id_provider, bundle_id, container_type, container_options, + network_options, port_map, storage_map, meta_attributes +): + """ + Create new bundle and add it to the CIB + + etree parent_element -- the bundle will be appended to this element + IdProvider id_provider -- elements' ids generator + string bundle_id -- id of the bundle + string container_type -- bundle container type + dict container_options -- container options + dict network_options -- network options + list of dict port_map -- list of port mapping options + list of dict storage_map -- list of storage mapping options + dict meta_attributes -- meta attributes + """ + bundle_element = etree.SubElement(parent_element, TAG, {"id": bundle_id}) + # TODO create the proper element once more container_types are supported + # by pacemaker + docker_element = etree.SubElement(bundle_element, "docker") + # Do not add options with empty values. When updating, an empty value means + # remove the option. + update_attributes_remove_empty(docker_element, container_options) + if network_options or port_map: + network_element = etree.SubElement(bundle_element, "network") + # Do not add options with empty values. When updating, an empty value + # means remove the option. 
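+        # (illustrative: passing {"control-port": "3121"} sets the attribute,
+        # while {"control-port": ""} simply leaves it out of the new element)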
+ update_attributes_remove_empty(network_element, network_options) + for port_map_options in port_map: + _append_port_map( + network_element, id_provider, bundle_id, port_map_options + ) + if storage_map: + storage_element = etree.SubElement(bundle_element, "storage") + for storage_map_options in storage_map: + _append_storage_map( + storage_element, id_provider, bundle_id, storage_map_options + ) + if meta_attributes: + append_new_meta_attributes(bundle_element, meta_attributes, id_provider) + return bundle_element + +def validate_update( + id_provider, bundle_el, container_options, network_options, + port_map_add, port_map_remove, storage_map_add, storage_map_remove, + force_options=False +): + """ + Validate modifying an existing bundle, return list of report items + + IdProvider id_provider -- elements' ids generator and uniqueness checker + etree bundle_el -- the bundle to be updated + dict container_options -- container options to modify + dict network_options -- network options to modify + list of dict port_map_add -- list of port mapping options to add + list of string port_map_remove -- list of port mapping ids to remove + list of dict storage_map_add -- list of storage mapping options to add + list of string storage_map_remove -- list of storage mapping ids to remove + bool force_options -- return warnings instead of forceable errors + """ + report_list = [] + + container_el = _get_container_element(bundle_el) + if container_el.tag == "docker": + # TODO call the proper function once more container types are + # supported by pacemaker + report_list.extend( + _validate_container_docker_options_update( + container_el, + container_options, + force_options + ) + ) + + network_el = bundle_el.find("network") + if network_el is None: + report_list.extend( + _validate_network_options_new(network_options, force_options) + ) + else: + report_list.extend( + _validate_network_options_update( + network_el, + network_options, + force_options + ) + ) + + # TODO It will probably be needed to split the following validators to + # create and update variants. It should be done once the need exists and + # not sooner. 
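+    # (illustrative examples of the validated input:
+    #  port_map_add=[{"port": "80", "internal-port": "8080"}],
+    #  storage_map_add=[{"source-dir": "/srv/www", "target-dir": "/var/www"}])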
+ report_list.extend( + _validate_port_map_list(port_map_add, id_provider, force_options) + ) + report_list.extend( + _validate_storage_map_list(storage_map_add, id_provider, force_options) + ) + report_list.extend( + _validate_map_ids_exist( + bundle_el, "port-mapping", "port-map", port_map_remove + ) + ) + report_list.extend( + _validate_map_ids_exist( + bundle_el, "storage-mapping", "storage-map", storage_map_remove + ) + ) + return report_list + +def update( + id_provider, bundle_el, container_options, network_options, + port_map_add, port_map_remove, storage_map_add, storage_map_remove, + meta_attributes +): + """ + Modify an existing bundle (does not touch encapsulated resources) + + IdProvider id_provider -- elements' ids generator and uniqueness checker + etree bundle_el -- the bundle to be updated + dict container_options -- container options to modify + dict network_options -- network options to modify + list of dict port_map_add -- list of port mapping options to add + list of string port_map_remove -- list of port mapping ids to remove + list of dict storage_map_add -- list of storage mapping options to add + list of string storage_map_remove -- list of storage mapping ids to remove + dict meta_attributes -- meta attributes to update + """ + bundle_id = bundle_el.get("id") + update_attributes_remove_empty( + _get_container_element(bundle_el), + container_options + ) + + network_element = get_sub_element(bundle_el, "network") + if network_options: + update_attributes_remove_empty(network_element, network_options) + # It's crucial to remove port maps prior to appending new ones: If we are + # adding a port map which in any way conflicts with another one and that + # another one is being removed in the very same command, the removal must + # be done first, otherwise the conflict would manifest itself (and then + # possibly the old mapping would be removed) + if port_map_remove: + _remove_map_elements( + network_element.findall("port-mapping"), + port_map_remove + ) + for port_map_options in port_map_add: + _append_port_map( + network_element, id_provider, bundle_id, port_map_options + ) + + storage_element = get_sub_element(bundle_el, "storage") + # See the comment above about removing port maps prior to adding new ones. 
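+    # (illustrative: replacing a mapping in one command removes the old
+    # storage-mapping element first, so the new one cannot conflict with it)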
+ if storage_map_remove: + _remove_map_elements( + storage_element.findall("storage-mapping"), + storage_map_remove + ) + for storage_map_options in storage_map_add: + _append_storage_map( + storage_element, id_provider, bundle_id, storage_map_options + ) + + if meta_attributes: + arrange_first_meta_attributes(bundle_el, meta_attributes) + + # remove empty elements with no attributes + # meta attributes are handled in their own function + for element in (network_element, storage_element): + if len(element) < 1 and not element.attrib: + element.getparent().remove(element) + +def add_resource(bundle_element, primitive_element): + """ + Add an existing resource to an existing bundle + + etree bundle_element -- where to add the resource to + etree primitive_element -- the resource to be added to the bundle + """ + # TODO possibly split to 'validate' and 'do' functions + # a bundle may currently contain at most one primitive resource + inner_primitive = bundle_element.find(TAG_PRIMITIVE) + if inner_primitive is not None: + raise LibraryError(reports.resource_bundle_already_contains_a_resource( + bundle_element.get("id"), inner_primitive.get("id") + )) + bundle_element.append(primitive_element) + +def get_inner_resource(bundle_el): + resources = bundle_el.xpath("./primitive") + if resources: + return resources[0] + return None + +def _validate_container_type(container_type): + return validate.value_in("type", ("docker", ), "container type")({ + "type": container_type, + }) + +def _validate_container_docker_options_new(options, force_options): + validators = [ + validate.is_required("image", "container"), + validate.value_not_empty("image", "image name"), + validate.value_nonnegative_integer("masters"), + validate.value_positive_integer("replicas"), + validate.value_positive_integer("replicas-per-host"), + ] + return ( + validate.run_collection_of_option_validators(options, validators) + + + validate.names_in( + _docker_options, + options.keys(), + "container", + report_codes.FORCE_OPTIONS, + force_options + ) + ) + +def _validate_container_docker_options_update( + docker_el, options, force_options +): + validators = [ + # image is a mandatory attribute and cannot be removed + validate.value_not_empty("image", "image name"), + validate.value_empty_or_valid( + "masters", + validate.value_nonnegative_integer("masters") + ), + validate.value_empty_or_valid( + "replicas", + validate.value_positive_integer("replicas") + ), + validate.value_empty_or_valid( + "replicas-per-host", + validate.value_positive_integer("replicas-per-host") + ), + ] + return ( + validate.run_collection_of_option_validators(options, validators) + + + validate.names_in( + # allow to remove options even if they are not allowed + _docker_options | _options_to_remove(options), + options.keys(), + "container", + report_codes.FORCE_OPTIONS, + force_options + ) + ) + +def _validate_network_options_new(options, force_options): + validators = [ + # TODO add validators for other keys (ip-range-start - IPv4) + validate.value_port_number("control-port"), + _value_host_netmask("host-netmask", force_options), + ] + return ( + validate.run_collection_of_option_validators(options, validators) + + + validate.names_in( + _network_options, + options.keys(), + "network", + report_codes.FORCE_OPTIONS, + force_options + ) + ) + +def _validate_network_options_update(network_el, options, force_options): + validators = [ + # TODO add validators for other keys (ip-range-start - IPv4) + validate.value_empty_or_valid( + "control-port", + 
validate.value_port_number("control-port"), + ), + validate.value_empty_or_valid( + "host-netmask", + _value_host_netmask("host-netmask", force_options), + ), + ] + return ( + validate.run_collection_of_option_validators(options, validators) + + + validate.names_in( + # allow to remove options even if they are not allowed + _network_options | _options_to_remove(options), + options.keys(), + "network", + report_codes.FORCE_OPTIONS, + force_options + ) + ) + +def _validate_port_map_list(options_list, id_provider, force_options): + allowed_options = [ + "id", + "port", + "internal-port", + "range", + ] + validators = [ + validate.value_id("id", "port-map id", id_provider), + validate.depends_on_option( + "internal-port", "port", "port-map", "port-map" + ), + validate.is_required_some_of(["port", "range"], "port-map"), + validate.mutually_exclusive(["port", "range"], "port-map"), + validate.value_port_number("port"), + validate.value_port_number("internal-port"), + validate.value_port_range( + "range", + code_to_allow_extra_values=report_codes.FORCE_OPTIONS, + allow_extra_values=force_options + ), + ] + report_list = [] + for options in options_list: + report_list.extend( + validate.run_collection_of_option_validators(options, validators) + + + validate.names_in( + allowed_options, + options.keys(), + "port-map", + report_codes.FORCE_OPTIONS, + force_options + ) + ) + return report_list + +def _validate_storage_map_list(options_list, id_provider, force_options): + allowed_options = [ + "id", + "options", + "source-dir", + "source-dir-root", + "target-dir", + ] + source_dir_options = ["source-dir", "source-dir-root"] + validators = [ + validate.value_id("id", "storage-map id", id_provider), + validate.is_required_some_of(source_dir_options, "storage-map"), + validate.mutually_exclusive(source_dir_options, "storage-map"), + validate.is_required("target-dir", "storage-map"), + ] + report_list = [] + for options in options_list: + report_list.extend( + validate.run_collection_of_option_validators(options, validators) + + + validate.names_in( + allowed_options, + options.keys(), + "storage-map", + report_codes.FORCE_OPTIONS, + force_options + ) + ) + return report_list + +def _validate_map_ids_exist(bundle_el, map_type, map_label, id_list): + report_list = [] + for id in id_list: + try: + find_element_by_tag_and_id( + map_type, bundle_el, id, id_description=map_label + ) + except LibraryError as e: + report_list.extend(e.args) + return report_list + +def _value_host_netmask(option_name, force_options): + return validate.value_cond( + option_name, + lambda value: validate.is_integer(value, 1, 32), + "a number of bits of the mask (1-32)", + # Leaving a possibility to force this validation, if pacemaker + # starts supporting IPv6 or other format of the netmask + code_to_allow_extra_values=report_codes.FORCE_OPTIONS, + allow_extra_values=force_options + ) + +def _append_port_map(parent_element, id_provider, id_base, port_map_options): + if "id" not in port_map_options: + id_suffix = None + if "port" in port_map_options: + id_suffix = port_map_options["port"] + elif "range" in port_map_options: + id_suffix = port_map_options["range"] + if id_suffix: + port_map_options["id"] = id_provider.allocate_id( + sanitize_id("{0}-port-map-{1}".format(id_base, id_suffix)) + ) + port_map_element = etree.SubElement(parent_element, "port-mapping") + # Do not add options with empty values. When updating, an empty value means + # remove the option. 
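+    # (illustrative: {"id": "b1-port-map-80", "port": "80"} ends up as the
+    # attributes of the resulting port-mapping element)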
+    update_attributes_remove_empty(port_map_element, port_map_options)
+    return port_map_element
+
+def _append_storage_map(
+    parent_element, id_provider, id_base, storage_map_options
+):
+    if "id" not in storage_map_options:
+        storage_map_options["id"] = id_provider.allocate_id(
+            # use just numbers to keep the ids reasonably short
+            "{0}-storage-map".format(id_base)
+        )
+    storage_map_element = etree.SubElement(parent_element, "storage-mapping")
+    # Do not add options with empty values. When updating, an empty value means
+    # remove the option.
+    update_attributes_remove_empty(storage_map_element, storage_map_options)
+    return storage_map_element
+
+def _get_container_element(bundle_el):
+    # TODO get different types of container once supported by pacemaker
+    return bundle_el.find("docker")
+
+def _remove_map_elements(element_list, id_to_remove_list):
+    for el in element_list:
+        if el.get("id", "") in id_to_remove_list:
+            el.getparent().remove(el)
+
+def _options_to_remove(options):
+    return set([
+        name for name, value in options.items()
+        if validate.is_empty_string(value)
+    ])
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/resource/clone.py pcs-0.9.159/pcs/lib/cib/resource/clone.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/resource/clone.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/resource/clone.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,71 @@
+"""
+Module for stuff related to clones.
+Multi-state resources are a specialization of clone resources, so this module
+also includes stuff related to masters.
+"""
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from lxml import etree
+
+from pcs.lib.cib.nvpair import append_new_meta_attributes
+from pcs.lib.cib.tools import find_unique_id
+
+
+TAG_CLONE = "clone"
+TAG_MASTER = "master"
+ALL_TAGS = [TAG_CLONE, TAG_MASTER]
+
+def is_clone(resource_el):
+    return resource_el.tag == TAG_CLONE
+
+def is_master(resource_el):
+    return resource_el.tag == TAG_MASTER
+
+def is_any_clone(resource_el):
+    return resource_el.tag in ALL_TAGS
+
+def create_id(clone_tag, primitive_element):
+    """
+    Create id for clone element based on contained primitive_element.
+
+    string clone_tag is tag of clone element. Specialization of "clone" is
+    "master" and this function is common for both - "clone" and "master".
+    etree.Element primitive_element is resource which will be cloned.
+    It must be connected into the cib to ensure that the resulting id is
+    unique!
+    """
+    return find_unique_id(
+        primitive_element,
+        "{0}-{1}".format(primitive_element.get("id"), clone_tag)
+    )
+
+def append_new(clone_tag, resources_section, primitive_element, options):
+    """
+    Append a new clone element (containing the primitive_element) to the
+    resources_section.
+
+    string clone_tag is tag of clone element. Expected values are "clone" and
+    "master".
+    etree.Element resources_section is place where new clone will be appended.
+    etree.Element primitive_element is resource which will be cloned.
+    dict options is source for clone meta options
+    """
+    clone_element = etree.SubElement(
+        resources_section,
+        clone_tag,
+        id=create_id(clone_tag, primitive_element),
+    )
+    clone_element.append(primitive_element)
+
+    if options:
+        append_new_meta_attributes(clone_element, options)
+
+    return clone_element
+
+def get_inner_resource(clone_el):
+    return clone_el.xpath("./primitive | ./group")[0]
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/resource/common.py pcs-0.9.159/pcs/lib/cib/resource/common.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/resource/common.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/resource/common.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,219 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+
+from pcs.lib.cib import nvpair
+from pcs.lib.cib.resource.bundle import (
+    is_bundle,
+    get_inner_resource as get_bundle_inner_resource,
+)
+from pcs.lib.cib.resource.clone import (
+    is_any_clone,
+    get_inner_resource as get_clone_inner_resource,
+)
+from pcs.lib.cib.resource.group import (
+    is_group,
+    get_inner_resources as get_group_inner_resources,
+)
+from pcs.lib.cib.resource.primitive import is_primitive
+from pcs.lib.xml_tools import find_parent
+
+
+def are_meta_disabled(meta_attributes):
+    return meta_attributes.get("target-role", "Started").lower() == "stopped"
+
+def _can_be_evaluated_as_positive_num(value):
+    string_wo_leading_zeros = str(value).lstrip("0")
+    return string_wo_leading_zeros and string_wo_leading_zeros[0].isdigit()
+
+def is_clone_deactivated_by_meta(meta_attributes):
+    return are_meta_disabled(meta_attributes) or any([
+        not _can_be_evaluated_as_positive_num(meta_attributes.get(key, "1"))
+        for key in ["clone-max", "clone-node-max"]
+    ])
+
+def find_primitives(resource_el):
+    """
+    Get list of primitives contained in a given resource
+    etree resource_el -- resource element
+    """
+    if is_bundle(resource_el):
+        in_bundle = get_bundle_inner_resource(resource_el)
+        return [in_bundle] if in_bundle is not None else []
+    if is_any_clone(resource_el):
+        resource_el = get_clone_inner_resource(resource_el)
+    if is_group(resource_el):
+        return get_group_inner_resources(resource_el)
+    if is_primitive(resource_el):
+        return [resource_el]
+    return []
+
+def find_resources_to_enable(resource_el):
+    """
+    Get resources to enable in order to enable specified resource successfully
+    etree resource_el -- resource element
+    """
+    if is_bundle(resource_el):
+        to_enable = [resource_el]
+        in_bundle = get_bundle_inner_resource(resource_el)
+        if in_bundle is not None:
+            to_enable.append(in_bundle)
+        return to_enable
+
+    if is_any_clone(resource_el):
+        return [resource_el, get_clone_inner_resource(resource_el)]
+
+    to_enable = [resource_el]
+    parent = resource_el.getparent()
+    if is_any_clone(parent) or is_bundle(parent):
+        to_enable.append(parent)
+    return to_enable
+
+def enable(resource_el):
+    """
+    Enable specified resource
+    etree resource_el -- resource element
+    """
+    nvpair.arrange_first_nvset(
+        "meta_attributes",
+        resource_el,
+        {
+            "target-role": "",
+        }
+    )
+
+def disable(resource_el):
+    """
+    Disable specified resource
+    etree resource_el -- resource element
+    """
+    nvpair.arrange_first_nvset(
+        "meta_attributes",
+        resource_el,
+        {
+            "target-role": "Stopped",
+        }
+    )
+
+def find_resources_to_manage(resource_el):
+    """
+    Get resources to manage in order to manage the specified resource successfully
+    etree resource_el -- resource element
+    """
+    # If the resource_el is a primitive in a group, we set both the group and
+    # the primitive to managed mode. Otherwise the resource_el, all its
+    # children and parents need to be set to managed mode. We do it to make
+    # sure to remove the unmanaged flag from the whole tree. The flag could be
+    # put there manually. If we didn't do it, the resource may stay unmanaged,
+    # as a managed primitive in an unmanaged clone / group is still unmanaged
+    # and vice versa.
+    res_id = resource_el.attrib["id"]
+    return (
+        [resource_el] # the resource itself
+        +
+        # its parents
+        find_parent(resource_el, "resources").xpath(
+            # a master or a clone which contains a group, a primitive, or a
+            # grouped primitive with the specified id
+            # OR
+            # a group (in a clone, master, etc. - hence //) which contains a
+            # primitive with the specified id
+            # OR
+            # a bundle which contains a primitive with the specified id
+            """
+                (./master|./clone)[(group|group/primitive|primitive)[@id='{r}']]
+                |
+                //group[primitive[@id='{r}']]
+                |
+                ./bundle[primitive[@id='{r}']]
+            """
+            .format(r=res_id)
+        )
+        +
+        # its children
+        resource_el.xpath("(./group|./primitive|./group/primitive)")
+    )
+
+def find_resources_to_unmanage(resource_el):
+    """
+    Get resources to unmanage in order to unmanage the specified resource successfully
+    etree resource_el -- resource element
+    """
+    # resource hierarchy - specified resource - what to return
+    # a primitive - the primitive - the primitive
+    #
+    # a cloned primitive - the primitive - the primitive
+    # a cloned primitive - the clone - the primitive
+    # The resource will run on all nodes after unclone. However that doesn't
+    # seem to be bad behavior. Moreover, if monitor operations were disabled,
+    # they wouldn't enable on unclone, but the resource would become managed,
+    # which is definitely bad.
+    #
+    # a primitive in a group - the primitive - the primitive
+    # Otherwise all primitives in the group would become unmanaged.
+    # a primitive in a group - the group - all primitives in the group
+    # If only the group was set to unmanaged, setting any primitive in the
+    # group to managed would set all the primitives in the group to managed.
+    # If the group as well as all its primitives were set to unmanaged, any
+    # primitive added to the group would become unmanaged. This new primitive
+    # would become managed if any original group primitive becomes managed.
+    # Therefore changing one primitive influences another one, which we do
+    # not want to happen.
+    #
+    # a primitive in a cloned group - the primitive - the primitive
+    # a primitive in a cloned group - the group - all primitives in the group
+    # See group notes above
+    # a primitive in a cloned group - the clone - all primitives in the group
+    # See clone notes above
+    #
+    # a bundled primitive - the primitive - the primitive
+    # a bundled primitive - the bundle - the bundle and the primitive
+    # We need to unmanage implicit resources created by pacemaker and there is
+    # no other way to do it than unmanage the bundle itself.
+    # Since it is not possible to unbundle a resource, the concerns described
+    # at unclone don't apply here. However to prevent future bugs, in case
+    # unbundling becomes possible, we unmanage the primitive as well.
+    # an empty bundle - the bundle - the bundle
+    # There is nothing else to unmanage.
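+    #
+    # Illustrative example: for a cloned group containing primitives P1 and
+    # P2, unmanaging the clone or the group returns [P1, P2], while
+    # unmanaging P1 alone returns just [P1].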
+    if is_bundle(resource_el):
+        in_bundle = get_bundle_inner_resource(resource_el)
+        return (
+            [resource_el, in_bundle] if in_bundle is not None else [resource_el]
+        )
+    if is_any_clone(resource_el):
+        resource_el = get_clone_inner_resource(resource_el)
+    if is_group(resource_el):
+        return get_group_inner_resources(resource_el)
+    if is_primitive(resource_el):
+        return [resource_el]
+    return []
+
+def manage(resource_el):
+    """
+    Set the resource to be managed by the cluster
+    etree resource_el -- resource element
+    """
+    nvpair.arrange_first_nvset(
+        "meta_attributes",
+        resource_el,
+        {
+            "is-managed": "",
+        }
+    )
+
+def unmanage(resource_el):
+    """
+    Set the resource not to be managed by the cluster
+    etree resource_el -- resource element
+    """
+    nvpair.arrange_first_nvset(
+        "meta_attributes",
+        resource_el,
+        {
+            "is-managed": "false",
+        }
+    )
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/resource/group.py pcs-0.9.159/pcs/lib/cib/resource/group.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/resource/group.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/resource/group.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,82 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from lxml import etree
+
+from pcs.lib import reports
+from pcs.lib.cib.tools import find_element_by_tag_and_id
+from pcs.lib.errors import LibraryError
+
+
+TAG = "group"
+
+def is_group(resource_el):
+    return resource_el.tag == TAG
+
+def provide_group(resources_section, group_id):
+    """
+    Provide group with id=group_id. Create a new group if a group with
+    id=group_id does not exist.
+
+    etree.Element resources_section is place where new group will be appended
+    string group_id is id of group
+    """
+    group_element = find_element_by_tag_and_id(
+        TAG,
+        resources_section,
+        group_id,
+        none_if_id_unused=True
+    )
+    if group_element is None:
+        group_element = etree.SubElement(resources_section, TAG, id=group_id)
+    return group_element
+
+def place_resource(
+    group_element, primitive_element,
+    adjacent_resource_id=None, put_after_adjacent=False
+):
+    """
+    Add a resource to the group. This function can also be used to change the
+    position of a resource, since the primitive element is replanted from
+    anywhere (including the group itself) to a concrete place inside the group.
+
+    etree.Element group_element is element where to put primitive_element
+    etree.Element primitive_element is element for placement
+    string adjacent_resource_id is id of the existing resource in group.
+    primitive_element will be put beside adjacent_resource_id if specified.
+    bool put_after_adjacent is a flag saying where to put primitive_element:
+        before adjacent_resource_id if put_after_adjacent=False
+        after adjacent_resource_id if put_after_adjacent=True
+        Note that it makes sense only if adjacent_resource_id is specified
+    """
+    if primitive_element.attrib["id"] == adjacent_resource_id:
+        raise LibraryError(reports.resource_cannot_be_next_to_itself_in_group(
+            adjacent_resource_id,
+            group_element.attrib["id"],
+        ))
+
+    if not adjacent_resource_id:
+        return group_element.append(primitive_element)
+
+    adjacent_resource = find_element_by_tag_and_id(
+        "primitive",
+        group_element,
+        adjacent_resource_id,
+        id_description="resource",
+    )
+
+    if put_after_adjacent and adjacent_resource.getnext() is None:
+        return group_element.append(primitive_element)
+
+    index = group_element.index(
+        adjacent_resource.getnext() if put_after_adjacent
+        else adjacent_resource
+    )
+    group_element.insert(index, primitive_element)
+
+def get_inner_resources(group_el):
+    return group_el.xpath("./primitive")
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/resource/guest_node.py pcs-0.9.159/pcs/lib/cib/resource/guest_node.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/resource/guest_node.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/resource/guest_node.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,243 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from pcs.lib import reports, validate
+from pcs.lib.cib.tools import does_id_exist
+from pcs.lib.cib.nvpair import(
+    has_meta_attribute,
+    arrange_first_meta_attributes,
+    get_meta_attribute_value,
+)
+from pcs.lib.node import (
+    NodeAddresses,
+    node_addresses_contain_host,
+    node_addresses_contain_name,
+)
+
+
+#TODO pcs currently does not care about multiple meta_attributes and here
+#we don't care as well
+GUEST_OPTIONS = [
+    'remote-port',
+    'remote-addr',
+    'remote-connect-timeout',
+]
+
+def validate_conflicts(tree, nodes, node_name, options):
+    report_list = []
+    if(
+        does_id_exist(tree, node_name)
+        or
+        node_addresses_contain_name(nodes, node_name)
+        or (
+            "remote-addr" not in options
+            and
+            node_addresses_contain_host(nodes, node_name)
+        )
+    ):
+        report_list.append(reports.id_already_exists(node_name))
+
+    if(
+        "remote-addr" in options
+        and
+        node_addresses_contain_host(nodes, options["remote-addr"])
+    ):
+        report_list.append(reports.id_already_exists(options["remote-addr"]))
+    return report_list
+
+def is_node_name_in_options(options):
+    return "remote-node" in options
+
+def get_guest_option_value(options, default=None):
+    return options.get("remote-node", default)
+
+
+def validate_set_as_guest(tree, nodes, node_name, options):
+    report_list = validate.names_in(
+        GUEST_OPTIONS,
+        options.keys(),
+        "guest",
+    )
+
+    validator_list = [
+        validate.value_time_interval("remote-connect-timeout"),
+        validate.value_port_number("remote-port"),
+    ]
+
+    report_list.extend(
+        validate.run_collection_of_option_validators(options, validator_list)
+    )
+
+    report_list.extend(
+        validate_conflicts(tree, nodes, node_name, options)
+    )
+
+    if not node_name.strip():
+        report_list.append(
+            reports.invalid_option_value(
+                "node name",
+                node_name,
+                "no empty value",
+            )
+        )
+
+    return report_list
+
+def is_guest_node(resource_element):
+    """
+    Return True if resource_element is already set as guest node.
+
+    etree.Element resource_element is the element to check
+    """
+    return has_meta_attribute(resource_element, "remote-node")
+
+def validate_is_not_guest(resource_element):
+    """
+    etree.Element resource_element
+    """
+    if not is_guest_node(resource_element):
+        return []
+
+    return [
+        reports.resource_is_guest_node_already(
+            resource_element.attrib["id"]
+        )
+    ]
+
+def set_as_guest(
+    resource_element, node, addr=None, port=None, connect_timeout=None
+):
+    """
+    Set resource as guest node.
+
+    etree.Element resource_element
+
+    """
+    meta_options = {"remote-node": str(node)}
+    if addr:
+        meta_options["remote-addr"] = str(addr)
+    if port:
+        meta_options["remote-port"] = str(port)
+    if connect_timeout:
+        meta_options["remote-connect-timeout"] = str(connect_timeout)
+
+    arrange_first_meta_attributes(resource_element, meta_options)
+
+def unset_guest(resource_element):
+    """
+    Unset resource as guest node.
+
+    etree.Element resource_element
+    """
+    guest_nvpair_list = resource_element.xpath(
+        "./meta_attributes/nvpair[{0}]".format(
+            " or ".join([
+                '@name="{0}"'.format(option)
+                for option in (GUEST_OPTIONS + ["remote-node"])
+            ])
+        )
+    )
+    for nvpair in guest_nvpair_list:
+        meta_attributes = nvpair.getparent()
+        meta_attributes.remove(nvpair)
+        if not len(meta_attributes):
+            meta_attributes.getparent().remove(meta_attributes)
+
+def get_node(meta_attributes):
+    """
+    Return NodeAddresses corresponding to the guest node in meta_attributes.
+    Return None if meta_attributes does not define a guest node
+
+    etree.Element meta_attributes is the searched element
+    """
+    host = None
+    name = None
+    for nvpair in meta_attributes:
+        if nvpair.attrib.get("name", "") == "remote-addr":
+            host = nvpair.attrib["value"]
+        if nvpair.attrib.get("name", "") == "remote-node":
+            name = nvpair.attrib["value"]
+            if host is None:
+                host = name
+    return NodeAddresses(host, name=name) if name else None
+
+def get_host_from_options(node_name, meta_options):
+    """
+    Return host from node_name meta options.
+    dict meta_options
+    """
+    return meta_options.get("remote-addr", node_name)
+
+def get_node_name_from_options(meta_options, default=None):
+    """
+    Return node_name from meta options.
+    dict meta_options
+    """
+    return meta_options.get("remote-node", default)
+
+
+def get_host(resource_element):
+    host = get_meta_attribute_value(resource_element, "remote-addr")
+    if host:
+        return host
+
+    return get_meta_attribute_value(resource_element, "remote-node")
+
+def find_node_list(resources_section):
+    """
+    Return list of nodes from resources_section
+
+    etree.Element resources_section is the searched element
+    """
+    return [
+        get_node(meta_attrs) for meta_attrs in resources_section.xpath("""
+            .//primitive
+                /meta_attributes[
+                    nvpair[
+                        @name="remote-node"
+                        and
+                        string-length(@value) > 0
+                    ]
+                ]
+        """)
+    ]
+
+def find_node_resources(resources_section, node_identifier):
+    """
+    Return list of etree.Element primitives that are guest nodes.
+ + etree.Element resources_section is a researched element + string node_identifier could be id of resource, node name or node address + """ + resources = resources_section.xpath(""" + .//primitive[ + ( + @id="{0}" + and + meta_attributes[ + nvpair[ + @name="remote-node" + and + string-length(@value) > 0 + ] + ] + ) + or + meta_attributes[ + nvpair[ + ( + @name="remote-addr" + or + @name="remote-node" + ) + and + @value="{0}" + ] + ] + ] + """.format(node_identifier)) + return resources diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/resource/__init__.py pcs-0.9.159/pcs/lib/cib/resource/__init__.py --- pcs-0.9.155+dfsg/pcs/lib/cib/resource/__init__.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/resource/__init__.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,17 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from pcs.lib.cib.resource import ( + bundle, + clone, + common, + group, + guest_node, + operations, + primitive, + remote_node, +) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/resource/operations.py pcs-0.9.159/pcs/lib/cib/resource/operations.py --- pcs-0.9.155+dfsg/pcs/lib/cib/resource/operations.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/resource/operations.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,373 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from collections import defaultdict + +from lxml import etree + +from pcs.common import report_codes +from pcs.lib import reports, validate +from pcs.lib.resource_agent import get_default_interval, complete_all_intervals +from pcs.lib.cib.nvpair import append_new_instance_attributes +from pcs.lib.cib.tools import ( + create_subelement_id, + does_id_exist, +) +from pcs.lib.errors import LibraryError +from pcs.lib.pacemaker.values import ( + is_true, + timeout_to_seconds, +) + +OPERATION_NVPAIR_ATTRIBUTES = [ + "OCF_CHECK_LEVEL", +] + +ATTRIBUTES = [ + "id", + "description", + "enabled", + "interval", + "interval-origin", + "name", + "on-fail", + "record-pending", + "requires", + "role", + "start-delay", + "timeout", + "OCF_CHECK_LEVEL", +] + +ROLE_VALUES = [ + "Stopped", + "Started", + "Slave", + "Master", +] + +REQUIRES_VALUES = [ + "nothing", + "quorum", + "fencing", + "unfencing", +] + +ON_FAIL_VALUES = [ + "ignore", + "block", + "stop", + "restart", + "standby", + "fence", + "restart-container", +] + +BOOLEAN_VALUES = [ + "0", + "1", + "true", + "false", +] + +#normalize(key, value) -> normalized_value +normalize = validate.option_value_normalization({ + "role": lambda value: value.lower().capitalize(), + "requires": lambda value: value.lower(), + "on-fail": lambda value: value.lower(), + "record-pending": lambda value: value.lower(), + "enabled": lambda value: value.lower(), +}) + +def prepare( + report_processor, raw_operation_list, default_operation_list, + allowed_operation_name_list, allow_invalid=False +): + """ + Return operation_list prepared from raw_operation_list and + default_operation_list. 
+ + report_processor is tool for warning/info/error reporting + list of dicts raw_operation_list are entered operations that require + follow-up care + list of dicts default_operation_list are operations defined as default by + (most probably) resource agent + bool allow_invalid is flag for validation skipping + """ + operations_to_validate = operations_to_normalized(raw_operation_list) + + report_list = [] + report_list.extend( + validate_operation_list( + operations_to_validate, + allowed_operation_name_list, + allow_invalid + ) + ) + + operation_list = normalized_to_operations(operations_to_validate) + + report_list.extend(validate_different_intervals(operation_list)) + + #can raise LibraryError + report_processor.process_list(report_list) + + return complete_all_intervals(operation_list) + get_remaining_defaults( + report_processor, + operation_list, + default_operation_list + ) + +def operations_to_normalized(raw_operation_list): + return [ + validate.values_to_pairs(op, normalize) for op in raw_operation_list + ] + +def normalized_to_operations(normalized_pairs): + return [ + validate.pairs_to_values(op) for op in normalized_pairs + ] + +def validate_operation_list( + operation_list, allowed_operation_name_list, allow_invalid=False +): + options_validators = [ + validate.is_required("name", "resource operation"), + validate.value_in("role", ROLE_VALUES), + validate.value_in("requires", REQUIRES_VALUES), + validate.value_in("on-fail", ON_FAIL_VALUES), + validate.value_in("record-pending", BOOLEAN_VALUES), + validate.value_in("enabled", BOOLEAN_VALUES), + validate.mutually_exclusive( + ["interval-origin", "start-delay"], + "resource operation" + ), + validate.value_in( + "name", + allowed_operation_name_list, + option_name_for_report="operation name", + code_to_allow_extra_values=report_codes.FORCE_OPTIONS, + allow_extra_values=allow_invalid, + ), + validate.value_id("id", option_name_for_report="operation id"), + ] + report_list = [] + for operation in operation_list: + report_list.extend( + validate_operation(operation, options_validators) + ) + return report_list + +def validate_operation(operation, options_validator_list): + """ + Return a list with reports (ReportItems) about problems inside + operation. + dict operation contains attributes of operation + """ + report_list = validate.names_in( + ATTRIBUTES, + operation.keys(), + "resource operation", + ) + + report_list.extend(validate.run_collection_of_option_validators( + operation, + options_validator_list + )) + + return report_list + +def get_remaining_defaults( + report_processor, operation_list, default_operation_list +): + """ + Return operations not mentioned in operation_list but contained in + default_operation_list. + report_processor is tool for warning/info/error reporting + list operation_list contains dictionaries with attributes of operation + list default_operation_list contains dictionaries with attributes of the + operation + """ + return make_unique_intervals( + report_processor, + [ + default_operation for default_operation in default_operation_list + if default_operation["name"] not in [ + operation["name"] for operation in operation_list + ] + ] + ) + +def get_interval_uniquer(): + used_intervals_map = defaultdict(set) + def get_uniq_interval(name, initial_interval): + """ + Return unique interval for name based on initial_interval if + initial_interval is valid or return initial_interval otherwise. 
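+
+        Example (illustrative): if a monitor operation already uses interval
+        "10", a second monitor asking for "10" is given "11".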
+
+        string name is the operation name for searching interval
+        initial_interval is starting point for finding free value
+        """
+        used_intervals = used_intervals_map[name]
+        normalized_interval = timeout_to_seconds(initial_interval)
+        if normalized_interval is None:
+            return initial_interval
+
+        if normalized_interval not in used_intervals:
+            used_intervals.add(normalized_interval)
+            return initial_interval
+
+        while normalized_interval in used_intervals:
+            normalized_interval += 1
+        used_intervals.add(normalized_interval)
+        return str(normalized_interval)
+    return get_uniq_interval
+
+def make_unique_intervals(report_processor, operation_list):
+    """
+    Return operation list similar to operation_list where intervals for the same
+    operation are unique
+    report_processor is a tool for warning/info/error reporting
+    list operation_list contains dictionaries with attributes of operation
+    """
+    get_unique_interval = get_interval_uniquer()
+    adapted_operation_list = []
+    for operation in operation_list:
+        adapted = operation.copy()
+        if "interval" in adapted:
+            adapted["interval"] = get_unique_interval(
+                operation["name"],
+                operation["interval"]
+            )
+            if adapted["interval"] != operation["interval"]:
+                report_processor.process(
+                    reports.resource_operation_interval_adapted(
+                        operation["name"],
+                        operation["interval"],
+                        adapted["interval"],
+                    )
+                )
+        adapted_operation_list.append(adapted)
+    return adapted_operation_list
+
+def validate_different_intervals(operation_list):
+    """
+    Check that the same operations (e.g. monitor) have different intervals.
+    list operation_list contains dictionaries with attributes of operation
+    return see resource operation in pcs/lib/exchange_formats.md
+    """
+    duplication_map = defaultdict(lambda: defaultdict(list))
+    for operation in operation_list:
+        interval = operation.get(
+            "interval",
+            get_default_interval(operation["name"])
+        )
+        seconds = timeout_to_seconds(interval)
+        duplication_map[operation["name"]][seconds].append(interval)
+
+    duplications = defaultdict(list)
+    for name, interval_map in duplication_map.items():
+        for timeout in sorted(interval_map.values()):
+            if len(timeout) > 1:
+                duplications[name].append(timeout)
+
+    if duplications:
+        return [reports.resource_operation_interval_duplication(
+            dict(duplications)
+        )]
+    return []
+
+def create_id(context_element, name, interval):
+    """
+    Create id for op element.
+    etree context_element is used for the name building
+    string name is the name of the operation
+    mixed interval is the interval attribute of operation
+    """
+    return create_subelement_id(
+        context_element,
+        "{0}-interval-{1}".format(name, interval)
+    )
+
+def create_operations(primitive_element, operation_list):
+    """
+    Create operation element containing operations from operation_list
+    list operation_list contains dictionaries with attributes of operation
+    etree primitive_element is context element
+    """
+    operations_element = etree.SubElement(primitive_element, "operations")
+    for operation in sorted(operation_list, key=lambda op: op["name"]):
+        append_new_operation(operations_element, operation)
+
+def append_new_operation(operations_element, options):
+    """
+    Create op element and append it to operations_element.
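+
+    Example (illustrative): options={"name": "monitor", "interval": "60s"}
+    becomes <op name="monitor" interval="60s"/> with a generated id.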
+ etree operations_element is the context element + dict options are attributes of operation + """ + attribute_map = dict( + (key, value) for key, value in options.items() + if key not in OPERATION_NVPAIR_ATTRIBUTES + ) + if "id" in attribute_map: + if does_id_exist(operations_element, attribute_map["id"]): + raise LibraryError(reports.id_already_exists(attribute_map["id"])) + else: + attribute_map.update({ + "id": create_id( + operations_element.getparent(), + options["name"], + options["interval"] + ) + }) + op_element = etree.SubElement( + operations_element, + "op", + attribute_map, + ) + nvpair_attribute_map = dict( + (key, value) for key, value in options.items() + if key in OPERATION_NVPAIR_ATTRIBUTES + ) + + if nvpair_attribute_map: + append_new_instance_attributes(op_element, nvpair_attribute_map) + + return op_element + +def get_resource_operations(resource_el, names=None): + """ + Get operations of a given resource, optionally filtered by name + etree resource_el -- resource element + iterable names -- return only operations of these names if specified + """ + return [ + op_el + for op_el in resource_el.xpath("./operations/op") + if not names or op_el.attrib.get("name", "") in names + ] + +def disable(operation_element): + """ + Disable the specified operation + etree operation_element -- the operation + """ + operation_element.attrib["enabled"] = "false" + +def enable(operation_element): + """ + Enable the specified operation + etree operation_element -- the operation + """ + operation_element.attrib.pop("enabled", None) + +def is_enabled(operation_element): + """ + Check if the specified operation is enabled + etree operation_element -- the operation + """ + return is_true(operation_element.attrib.get("enabled", "true")) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/resource/primitive.py pcs-0.9.159/pcs/lib/cib/resource/primitive.py --- pcs-0.9.155+dfsg/pcs/lib/cib/resource/primitive.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/resource/primitive.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,136 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree + +from pcs.lib import reports +from pcs.lib.cib.nvpair import ( + append_new_instance_attributes, + append_new_meta_attributes, +) +from pcs.lib.cib.resource.operations import( + prepare as prepare_operations, + create_operations, +) +from pcs.lib.cib.tools import does_id_exist +from pcs.lib.errors import LibraryError +from pcs.lib.pacemaker.values import validate_id + + +TAG = "primitive" + +def is_primitive(resource_el): + return resource_el.tag == TAG + +def create( + report_processor, resources_section, resource_id, resource_agent, + raw_operation_list=None, meta_attributes=None, instance_attributes=None, + allow_invalid_operation=False, + allow_invalid_instance_attributes=False, + use_default_operations=True, + resource_type="resource" +): + """ + Prepare all parts of primitive resource and append it into cib. 
+ + report_processor is a tool for warning/info/error reporting + etree.Element resources_section is place where new element will be appended + string resource_id is id of new resource + lib.resource_agent.CrmAgent resource_agent + list of dict raw_operation_list specifies operations of resource + dict meta_attributes specifies meta attributes of resource + dict instance_attributes specifies instance attributes of resource + bool allow_invalid_operation is flag for skipping validation of operations + bool allow_invalid_instance_attributes is flag for skipping validation of + instance_attributes + bool use_default_operations is flag for completion operations with default + actions specified in resource agent + string resource_type -- describes the resource for reports + """ + if raw_operation_list is None: + raw_operation_list = [] + if meta_attributes is None: + meta_attributes = {} + if instance_attributes is None: + instance_attributes = {} + + if does_id_exist(resources_section, resource_id): + raise LibraryError(reports.id_already_exists(resource_id)) + validate_id(resource_id, "{0} name".format(resource_type)) + + operation_list = prepare_operations( + report_processor, + raw_operation_list, + resource_agent.get_cib_default_actions( + necessary_only=not use_default_operations + ), + [operation["name"] for operation in resource_agent.get_actions()], + allow_invalid=allow_invalid_operation, + ) + + report_processor.process_list( + resource_agent.validate_parameters( + instance_attributes, + parameters_type=resource_type, + allow_invalid=allow_invalid_instance_attributes, + ) + ) + + return append_new( + resources_section, + resource_id, + resource_agent.get_standard(), + resource_agent.get_provider(), + resource_agent.get_type(), + instance_attributes=instance_attributes, + meta_attributes=meta_attributes, + operation_list=operation_list + ) + +def append_new( + resources_section, resource_id, standard, provider, agent_type, + instance_attributes=None, + meta_attributes=None, + operation_list=None +): + """ + Append a new primitive element to the resources_section. + + etree.Element resources_section is place where new element will be appended + string resource_id is id of new resource + string standard is a standard of resource agent (e.g. ocf) + string agent_type is a type of resource agent (e.g. IPaddr2) + string provider is a provider of resource agent (e.g. heartbeat) + dict instance_attributes will be nvpairs inside instance_attributes element + dict meta_attributes will be nvpairs inside meta_attributes element + list operation_list contains dicts representing operations + (e.g. 
[{"name": "monitor"}, {"name": "start"}]) + """ + attributes = { + "id": resource_id, + "class": standard, + "type": agent_type, + } + if provider: + attributes["provider"] = provider + primitive_element = etree.SubElement(resources_section, TAG, attributes) + + if instance_attributes: + append_new_instance_attributes( + primitive_element, + instance_attributes + ) + + if meta_attributes: + append_new_meta_attributes(primitive_element, meta_attributes) + + create_operations( + primitive_element, + operation_list if operation_list else [] + ) + + return primitive_element diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/resource/remote_node.py pcs-0.9.159/pcs/lib/cib/resource/remote_node.py --- pcs-0.9.155+dfsg/pcs/lib/cib/resource/remote_node.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/resource/remote_node.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,219 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from pcs.common import report_codes +from pcs.lib import reports +from pcs.lib.errors import LibraryError +from pcs.lib.cib.resource import primitive +from pcs.lib.node import( + NodeAddresses, + node_addresses_contain_host, + node_addresses_contain_name, +) +from pcs.lib.resource_agent import( + find_valid_resource_agent_by_name, + ResourceAgentName, +) + +AGENT_NAME = ResourceAgentName("ocf", "pacemaker", "remote") + +def get_agent(report_processor, cmd_runner): + return find_valid_resource_agent_by_name( + report_processor, + cmd_runner, + AGENT_NAME.full_name, + ) + +_IS_REMOTE_AGENT_XPATH_SNIPPET = """ + @class="{0}" and @provider="{1}" and @type="{2}" +""".format(AGENT_NAME.standard, AGENT_NAME.provider, AGENT_NAME.type) + +_HAS_SERVER_XPATH_SNIPPET = """ + instance_attributes/nvpair[ + @name="server" + and + string-length(@value) > 0 + ] +""" + + + +def find_node_list(resources_section): + node_list = [ + NodeAddresses( + nvpair.attrib["value"], + name=nvpair.getparent().getparent().attrib["id"] + ) + for nvpair in resources_section.xpath( + ".//primitive[{is_remote}]/{has_server}" + .format( + is_remote=_IS_REMOTE_AGENT_XPATH_SNIPPET, + has_server=_HAS_SERVER_XPATH_SNIPPET, + ) + ) + ] + + node_list.extend([ + NodeAddresses(primitive.attrib["id"], name=primitive.attrib["id"]) + for primitive in resources_section.xpath( + ".//primitive[{is_remote} and not({has_server})]" + .format( + is_remote=_IS_REMOTE_AGENT_XPATH_SNIPPET, + has_server=_HAS_SERVER_XPATH_SNIPPET, + ) + ) + ]) + + return node_list + +def find_node_resources(resources_section, node_identifier): + """ + Return list of resource elements that match to node_identifier + + etree.Element resources_section is a search element + string node_identifier could be id of the resource or its instance attribute + "server" + """ + return resources_section.xpath( + """ + .//primitive[ + {is_remote} and ( + @id="{identifier}" + or + instance_attributes/nvpair[ + @name="server" + and + @value="{identifier}" + ] + ) + ] + """ + .format( + is_remote=_IS_REMOTE_AGENT_XPATH_SNIPPET, + identifier=node_identifier + ) + ) + +def get_host(resource_element): + """ + Return first host from resource element if is there. Return None if host is + not there. 
+ + etree.Element resource_element + """ + if not ( + resource_element.attrib.get("class", "") == AGENT_NAME.standard + and + resource_element.attrib.get("provider", "") == AGENT_NAME.provider + and + resource_element.attrib.get("type", "") == AGENT_NAME.type + ): + return None + + + host_list = resource_element.xpath( + "./{has_server}/@value".format(has_server=_HAS_SERVER_XPATH_SNIPPET) + ) + if host_list: + return host_list[0] + return resource_element.attrib["id"] + +def _validate_server_not_used(agent, option_dict): + if "server" in option_dict: + return [reports.invalid_option( + ["server"], + sorted([ + attr["name"] for attr in agent.get_parameters() + if attr["name"] != "server" + ]), + "resource", + )] + return [] + + +def validate_host_not_conflicts(nodes, node_name, instance_attributes): + host = instance_attributes.get("server", node_name) + if node_addresses_contain_host(nodes, host): + return [reports.id_already_exists(host)] + return [] + +def validate_create( + nodes, resource_agent, host, node_name, instance_attributes +): + """ + validate inputs for create + + list of NodeAddresses nodes -- nodes already used + string node_name -- name of future node + dict instance_attributes -- data for future resource instance attributes + """ + report_list = _validate_server_not_used(resource_agent, instance_attributes) + + host_is_used = False + if node_addresses_contain_host(nodes, host): + report_list.append(reports.id_already_exists(host)) + host_is_used = True + + if not host_is_used or host != node_name: + if node_addresses_contain_name(nodes, node_name): + report_list.append(reports.id_already_exists(node_name)) + + return report_list + +def prepare_instance_atributes(instance_attributes, host): + enriched_instance_attributes = instance_attributes.copy() + enriched_instance_attributes["server"] = host + return enriched_instance_attributes + +def create( + report_processor, resource_agent, resources_section, host, node_name, + raw_operation_list=None, meta_attributes=None, + instance_attributes=None, + allow_invalid_operation=False, + allow_invalid_instance_attributes=False, + use_default_operations=True, +): + """ + Prepare all parts of remote resource and append it into the cib. 
+
+    report_processor is a tool for warning/info/error reporting
+    lib.resource_agent.CrmAgent resource_agent
+    etree.Element resources_section is place where new element will be appended
+    string host is an address of the remote node
+    string node_name is name of the remote node and id of new resource as well
+    list of dict raw_operation_list specifies operations of resource
+    dict meta_attributes specifies meta attributes of resource
+    dict instance_attributes specifies instance attributes of resource
+    bool allow_invalid_operation is flag for skipping validation of operations
+    bool allow_invalid_instance_attributes is flag for skipping validation of
+        instance_attributes
+    bool use_default_operations is flag for completion operations with default
+        actions specified in resource agent
+    """
+    all_instance_attributes = instance_attributes.copy()
+    if host != node_name:
+        all_instance_attributes.update({"server": host})
+    try:
+        return primitive.create(
+            report_processor,
+            resources_section,
+            node_name,
+            resource_agent,
+            raw_operation_list,
+            meta_attributes,
+            all_instance_attributes,
+            allow_invalid_operation,
+            allow_invalid_instance_attributes,
+            use_default_operations,
+        )
+    except LibraryError as e:
+        for report in e.args:
+            if report.code == report_codes.INVALID_OPTION:
+                report.info["allowed"] = [
+                    value for value in report.info["allowed"]
+                    if value != "server"
+                ]
+        raise e
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/resource.py pcs-0.9.159/pcs/lib/cib/resource.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/resource.py 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/resource.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,15 +0,0 @@
-from __future__ import (
-    absolute_import,
-    division,
-    print_function,
-    unicode_literals,
-)
-
-TAGS_CLONE = "clone", "master"
-TAGS_ALL = TAGS_CLONE + ("primitive", "group")
-
-def find_by_id(tree, id):
-    for element in tree.findall('.//*[@id="{0}"]'.format(id)):
-        if element is not None and element.tag in TAGS_ALL:
-            return element
-    return None
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/stonith.py pcs-0.9.159/pcs/lib/cib/stonith.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/stonith.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/stonith.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,15 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+# TODO replace by the new finding function
+def is_stonith_resource(resources_el, name):
+    return len(
+        resources_el.xpath(
+            "primitive[@id='{0}' and @class='stonith']".format(name)
+        )
+    ) > 0
+
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_acl.py pcs-0.9.159/pcs/lib/cib/test/test_acl.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_acl.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/test/test_acl.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,1016 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from lxml import etree
+
+from pcs.test.tools.assertions import (
+    assert_raise_library_error,
+    assert_xml_equal,
+    ExtendedAssertionsMixin,
+)
+from pcs.test.tools.misc import get_test_resource as rc
+from pcs.test.tools.xml import get_xml_manipulation_creator_from_file
+from pcs.test.tools.pcs_unittest import mock, TestCase
+
+from pcs.common import report_codes
+from pcs.lib.cib import acl as lib
+from pcs.lib.cib.tools import get_acls
+from pcs.lib.errors import ReportItemSeverity as severities, LibraryError
+
+class LibraryAclTest(TestCase):
+    def setUp(self):
+        self.create_cib =
get_xml_manipulation_creator_from_file( + rc("cib-empty.xml") + ) + self.cib = self.create_cib() + + @property + def acls(self): + return get_acls(self.cib.tree) + + def fixture_add_role(self, role_id): + self.cib.append_to_first_tag_name( + 'configuration', + ''.format(role_id) + ) + + def assert_cib_equal(self, expected_cib): + got_xml = str(self.cib) + expected_xml = str(expected_cib) + assert_xml_equal(expected_xml, got_xml) + + +class ValidatePermissionsTest(LibraryAclTest): + def setUp(self): + self.xml = """ + + + + + + + """ + self.tree = etree.XML(self.xml) + self.allowed_permissions = ["read", "write", "deny"] + self.allowed_scopes = ["xpath", "id"] + + def test_success(self): + permissions = [ + ("read", "id", "test-id"), + ("write", "id", "another-id"), + ("deny", "id", "last-id"), + ("read", "xpath", "any string"), + ("write", "xpath", "maybe xpath"), + ("deny", "xpath", "xpath") + ] + lib.validate_permissions(self.tree, permissions) + + def test_unknown_permission(self): + permissions = [ + ("read", "id", "test-id"), + ("unknown", "id", "another-id"), + ("write", "xpath", "my xpath"), + ("allow", "xpath", "xpath") + ] + assert_raise_library_error( + lambda: lib.validate_permissions(self.tree, permissions), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_value": "unknown", + "option_name": "permission", + "allowed_values": self.allowed_permissions, + }, + None + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_value": "allow", + "option_name": "permission", + "allowed_values": self.allowed_permissions, + }, + None + ) + ) + + def test_unknown_scope(self): + permissions = [ + ("read", "id", "test-id"), + ("write", "not_id", "test-id"), + ("deny", "not_xpath", "some xpath"), + ("read", "xpath", "xpath") + ] + assert_raise_library_error( + lambda: lib.validate_permissions(self.tree, permissions), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_value": "not_id", + "option_name": "scope type", + "allowed_values": self.allowed_scopes, + }, + None + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_value": "not_xpath", + "option_name": "scope type", + "allowed_values": self.allowed_scopes, + }, + None + ) + ) + + def test_not_existing_id(self): + permissions = [ + ("read", "id", "test-id"), + ("write", "id", "id"), + ("deny", "id", "last"), + ("write", "xpath", "maybe xpath") + ] + assert_raise_library_error( + lambda: lib.validate_permissions(self.tree, permissions), + ( + severities.ERROR, + report_codes.ID_NOT_FOUND, + { + "id": "id", + "id_description": "id", + } + ), + ( + severities.ERROR, + report_codes.ID_NOT_FOUND, + { + "id": "last", + "id_description": "id", + } + ) + ) + + +class CreateRoleTest(LibraryAclTest): + def test_create_for_new_role_id(self): + role_id = 'new-id' + lib.create_role(self.acls, role_id) + + self.assert_cib_equal( + self.create_cib().append_to_first_tag_name( + 'configuration', + ''.format(role_id) + ) + ) + + def test_refuse_invalid_id(self): + assert_raise_library_error( + lambda: lib.create_role(self.cib.tree, '#invalid'), + ( + severities.ERROR, + report_codes.INVALID_ID, + {'id': '#invalid'}, + ), + ) + + def test_refuse_existing_non_role_id(self): + self.cib.append_to_first_tag_name( + 'nodes', + '' + ) + + assert_raise_library_error( + lambda: lib.create_role(self.cib.tree, 'node-id'), + ( + severities.ERROR, + report_codes.ID_ALREADY_EXISTS, + {'id': 'node-id'}, + ), + ) + +class RemoveRoleTest(LibraryAclTest, 
ExtendedAssertionsMixin): + def setUp(self): + self.xml = """ + + + + + + + + + + + + + + + """ + self.tree = etree.XML(self.xml) + + def test_success(self): + expected_xml = """ + + + + + + + + + + + """ + lib.remove_role(self.tree, "role-id") + assert_xml_equal(expected_xml, etree.tostring(self.tree).decode()) + + def test_autodelete(self): + expected_xml = """ + + + + + + + + + + """ + lib.remove_role(self.tree, "role-id", autodelete_users_groups=True) + assert_xml_equal(expected_xml, etree.tostring(self.tree).decode()) + + def test_id_not_exists(self): + assert_raise_library_error( + lambda: lib.remove_role(self.tree.find(".//acls"), "id-of-role"), + ( + severities.ERROR, + report_codes.ID_NOT_FOUND, + { + "context_type": "acls", + "context_id": "", + "id": "id-of-role", + }, + ), + ) + +class AssignRoleTest(LibraryAclTest): + def setUp(self): + LibraryAclTest.setUp(self) + self.cib.append_to_first_tag_name( + "configuration", + """ + + + + + + + + + """ + ) + + def test_success_target(self): + target = self.cib.tree.find(".//acl_target[@id='target1']") + lib.assign_role(self.cib.tree, "role1", target) + self.assert_cib_equal(self.create_cib().append_to_first_tag_name( + "configuration", + """ + + + + + + + + + + """ + )) + + def test_sucess_group(self): + group = self.cib.tree.find(".//acl_group[@id='group1']") + lib.assign_role(self.cib.tree, "role1", group) + self.assert_cib_equal(self.create_cib().append_to_first_tag_name( + "configuration", + """ + + + + + + + + + + + """ + )) + + def test_role_already_assigned(self): + target = self.cib.tree.find(".//acl_target[@id='target1']") + assert_raise_library_error( + lambda: lib.assign_role(self.cib.tree, "role2", target), + ( + severities.ERROR, + report_codes.CIB_ACL_ROLE_IS_ALREADY_ASSIGNED_TO_TARGET, + { + "role_id": "role2", + "target_id": "target1", + } + ) + ) + + +@mock.patch("pcs.lib.cib.acl._assign_role") +class AssignAllRoles(TestCase): + def test_success(self, assign_role): + assign_role.return_value = [] + lib.assign_all_roles("acl_section", ["1", "2", "3"], "element") + assign_role.assert_has_calls([ + mock.call("acl_section", "1", "element"), + mock.call("acl_section", "2", "element"), + mock.call("acl_section", "3", "element"), + ], any_order=True) + + def test_fail_on_error_report(self, assign_role): + assign_role.return_value = ['report'] + self.assertRaises( + LibraryError, + lambda: + lib.assign_all_roles("acl_section", ["1", "2", "3"], "element") + ) + + + +class UnassignRoleTest(LibraryAclTest): + def setUp(self): + LibraryAclTest.setUp(self) + self.cib.append_to_first_tag_name( + "configuration", + """ + + + + + + + + + + + + """ + ) + + def test_success_target(self): + target = self.cib.tree.find( + ".//acl_target[@id='{0}']".format("target1") + ) + lib.unassign_role(target, "role2") + self.assert_cib_equal(self.create_cib().append_to_first_tag_name( + "configuration", + """ + + + + + + + + + + + """ + )) + + def test_success_group(self): + group = self.cib.tree.find(".//acl_group[@id='{0}']".format("group1")) + lib.unassign_role(group, "role1") + self.assert_cib_equal(self.create_cib().append_to_first_tag_name( + "configuration", + """ + + + + + + + + + + """ + )) + + def test_not_existing_role(self): + target = self.cib.tree.find( + ".//acl_target[@id='{0}']".format("target1") + ) + lib.unassign_role(target, "role3") + self.assert_cib_equal(self.create_cib().append_to_first_tag_name( + "configuration", + """ + + + + + + + + + + + """ + )) + + def test_role_not_assigned(self): + target = self.cib.tree.find( + 
".//acl_target[@id='{0}']".format("target1") + ) + assert_raise_library_error( + lambda: lib.unassign_role(target, "role1"), + ( + severities.ERROR, + report_codes.CIB_ACL_ROLE_IS_NOT_ASSIGNED_TO_TARGET, + { + "role_id": "role1", + "target_id": "target1", + } + ) + ) + + def test_autodelete(self): + target = self.cib.tree.find(".//acl_group[@id='{0}']".format("group1")) + lib.unassign_role(target, "role1", True) + self.assert_cib_equal(self.create_cib().append_to_first_tag_name( + "configuration", + """ + + + + + + + + + """ + )) + + +class AddPermissionsToRoleTest(LibraryAclTest): + def test_add_for_correct_permissions(self): + role_id = 'role1' + self.fixture_add_role(role_id) + + lib.add_permissions_to_role( + self.cib.tree.find(".//acl_role[@id='{0}']".format(role_id)), + [('read', 'xpath', '/whatever')] + ) + + self.assert_cib_equal( + self.create_cib().append_to_first_tag_name('configuration', ''' + + + + + + '''.format(role_id)) + ) + + +class ProvideRoleTest(LibraryAclTest): + def test_add_role_for_nonexisting_id(self): + role_id = 'new-id' + lib.provide_role(self.acls, role_id) + + self.assert_cib_equal( + self.create_cib().append_to_first_tag_name('configuration', ''' + + + + '''.format(role_id)) + ) + + def test_add_role_for_nonexisting_role_id(self): + self.fixture_add_role('role1') + + role_id = 'role1' + lib.provide_role(self.cib.tree, role_id) + + self.assert_cib_equal( + self.create_cib().append_to_first_tag_name('configuration', ''' + + + + '''.format(role_id)) + ) + + +class CreateTargetTest(LibraryAclTest): + def setUp(self): + LibraryAclTest.setUp(self) + self.fixture_add_role("target3") + self.cib.append_to_first_tag_name("acls", '') + + def test_success(self): + lib.create_target(self.acls, "target1") + self.assert_cib_equal(self.create_cib().append_to_first_tag_name( + "configuration", + """ + + + + + + """ + )) + + def test_target_id_is_not_unique_id(self): + lib.create_target(self.acls, "target3") + self.assert_cib_equal(self.create_cib().append_to_first_tag_name( + "configuration", + """ + + + + + + """ + )) + + def test_target_id_is_not_unique_target_id(self): + assert_raise_library_error( + lambda: lib.create_target(self.acls, "target2"), + ( + severities.ERROR, + report_codes.CIB_ACL_TARGET_ALREADY_EXISTS, + {"target_id":"target2"} + ) + ) + + +class CreateGroupTest(LibraryAclTest): + def setUp(self): + LibraryAclTest.setUp(self) + self.fixture_add_role("group2") + + def test_success(self): + lib.create_group(self.acls, "group1") + self.assert_cib_equal(self.create_cib().append_to_first_tag_name( + "configuration", + """ + + + + + """ + )) + + def test_existing_id(self): + assert_raise_library_error( + lambda: lib.create_group(self.acls, "group2"), + ( + severities.ERROR, + report_codes.ID_ALREADY_EXISTS, + {"id": "group2"} + ) + ) + + +class RemoveTargetTest(LibraryAclTest, ExtendedAssertionsMixin): + def setUp(self): + LibraryAclTest.setUp(self) + self.fixture_add_role("target2") + self.cib.append_to_first_tag_name("acls", '') + + def test_success(self): + lib.remove_target(self.cib.tree, "target1") + self.assert_cib_equal(self.create_cib().append_to_first_tag_name( + "configuration", + """ + + + + """ + )) + + def test_not_existing(self): + assert_raise_library_error( + lambda: lib.remove_target(self.acls, "target2"), + ( + severities.ERROR, + report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE, + { + "id": "target2", + "expected_types": ["acl_target"], + "current_type": "acl_role", + } + ) + ) + + +class RemoveGroupTest(LibraryAclTest, ExtendedAssertionsMixin): + 
def setUp(self): + LibraryAclTest.setUp(self) + self.fixture_add_role("group2") + self.cib.append_to_first_tag_name("acls", '') + + def test_success(self): + lib.remove_group(self.cib.tree, "group1") + self.assert_cib_equal(self.create_cib().append_to_first_tag_name( + "configuration", + """ + + + + """ + )) + + def test_not_existing(self): + assert_raise_library_error( + lambda: lib.remove_group(self.cib.tree, "group2"), + ( + severities.ERROR, + report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE, + { + "id": "group2", + "expected_types": ["acl_group"], + "current_type": "acl_role", + } + ) + ) + + +class RemovePermissionForReferenceTest(LibraryAclTest): + def test_has_no_efect_when_id_not_referenced(self): + lib.remove_permissions_referencing(self.cib.tree, 'dummy') + self.assert_cib_equal(self.create_cib()) + + def test_remove_all_references(self): + self.cib.append_to_first_tag_name('configuration', ''' + + + + + + + + + + ''') + + lib.remove_permissions_referencing(self.cib.tree, 'dummy') + + self.assert_cib_equal( + self.create_cib().append_to_first_tag_name('configuration', ''' + + + + + + + ''') + ) + + +class RemovePermissionTest(LibraryAclTest): + def setUp(self): + self.xml = """ + + + + + + + + + + + """ + self.tree = etree.XML(self.xml) + + def test_success(self): + expected_xml = """ + + + + + + + + + + """ + lib.remove_permission(self.tree, "permission-id") + assert_xml_equal(expected_xml, etree.tostring(self.tree).decode()) + + def test_not_existing_id(self): + assert_raise_library_error( + lambda: lib.remove_permission(self.tree, "role-id"), + ( + severities.ERROR, + report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE, + { + "id": "role-id", + "expected_types": ["acl_permission"], + "current_type": "acl_role", + } + ) + ) + + +class GetRoleListTest(LibraryAclTest): + def test_success(self): + self.cib.append_to_first_tag_name( + "configuration", + """ + + + + + + + + + + """ + ) + expected = [ + { + "id": "role1", + "description": "desc1", + "permission_list": [ + { + "id": "role1-perm1", + "description": None, + "kind": "read", + "xpath": "XPATH", + "reference": None, + "object-type": None, + "attribute": None, + }, + { + "id": "role1-perm2", + "description": "desc", + "kind": "write", + "xpath": None, + "reference": "id", + "object-type": None, + "attribute": None, + }, + { + "id": "role1-perm3", + "description": None, + "kind": "deny", + "xpath": None, + "reference": None, + "object-type": "type", + "attribute": "attr", + } + ] + }, + { + "id": "role2", + "description": None, + "permission_list": [], + } + ] + self.assertEqual(expected, lib.get_role_list(self.acls)) + + +class GetPermissionListTest(LibraryAclTest): + def test_success(self): + role_el = etree.Element("acl_role") + etree.SubElement( + role_el, + "acl_permission", + { + "id":"role1-perm1", + "kind": "read", + "xpath": "XPATH", + } + ) + etree.SubElement( + role_el, + "acl_permission", + { + "id": "role1-perm2", + "description": "desc", + "kind": "write", + "reference": "id", + } + ) + etree.SubElement( + role_el, + "acl_permission", + { + "id": "role1-perm3", + "kind": "deny", + "object-type": "type", + "attribute": "attr", + } + ) + expected = [ + { + "id": "role1-perm1", + "description": None, + "kind": "read", + "xpath": "XPATH", + "reference": None, + "object-type": None, + "attribute": None, + }, + { + "id": "role1-perm2", + "description": "desc", + "kind": "write", + "xpath": None, + "reference": "id", + "object-type": None, + "attribute": None, + }, + { + "id": "role1-perm3", + "description": None, + "kind": "deny", 
+ "xpath": None, + "reference": None, + "object-type": "type", + "attribute": "attr", + } + ] + self.assertEqual(expected, lib._get_permission_list(role_el)) + + +@mock.patch("pcs.lib.cib.acl.get_target_like_list") +class GetTargetListTest(TestCase): + def test_success(self, mock_fn): + mock_fn.return_value = "returned data" + self.assertEqual("returned data", lib.get_target_list("tree")) + mock_fn.assert_called_once_with("tree", "acl_target") + + +@mock.patch("pcs.lib.cib.acl.get_target_like_list") +class GetGroupListTest(TestCase): + def test_success(self, mock_fn): + mock_fn.return_value = "returned data" + self.assertEqual("returned data", lib.get_group_list("tree")) + mock_fn.assert_called_once_with("tree", "acl_group") + + +class GetTargetLikeListWithTagTest(LibraryAclTest): + def setUp(self): + LibraryAclTest.setUp(self) + self.cib.append_to_first_tag_name( + "configuration", + """ + + + + + + + + + + + + + """ + ) + + def test_success_targets(self): + self.assertEqual( + [ + { + "id": "target1", + "role_list": [], + }, + { + "id": "target2", + "role_list": ["role1", "role2", "role3"], + } + ], + lib.get_target_like_list(self.acls, "acl_target") + ) + + def test_success_groups(self): + self.assertEqual( + [ + { + "id": "group1", + "role_list": ["role1"], + }, + { + "id": "group2", + "role_list": [], + } + ], + lib.get_target_like_list(self.acls, "acl_group") + ) + + +class GetRoleListOfTargetTest(LibraryAclTest): + def test_success(self): + target_el = etree.Element("target") + etree.SubElement(target_el, "role", {"id": "role1"}) + etree.SubElement(target_el, "role", {"id": "role2"}) + etree.SubElement(target_el, "role") + etree.SubElement(target_el, "role", {"id": "role3"}) + self.assertEqual( + ["role1", "role2", "role3"], lib._get_role_list_of_target(target_el) + ) + + +@mock.patch("pcs.lib.cib.acl.find_group") +@mock.patch("pcs.lib.cib.acl.find_target") +class FindTargetOrGroup(TestCase): + def test_returns_target(self, find_target, find_group): + find_target.return_value = "target_element" + self.assertEqual( + lib.find_target_or_group("acl_section", "target_id"), + "target_element" + ) + find_target.assert_called_once_with( + "acl_section", + "target_id", + none_if_id_unused=True + ) + + def test_returns_group_if_target_is_none(self, find_target, find_group): + find_target.return_value = None + find_group.return_value = "group_element" + self.assertEqual( + lib.find_target_or_group("acl_section", "group_id"), + "group_element" + ) + find_target.assert_called_once_with( + "acl_section", + "group_id", + none_if_id_unused=True + ) + find_group.assert_called_once_with( + "acl_section", + "group_id", + id_description="user/group" + ) + + +class Find(TestCase): + def test_refuses_bad_tag(self): + self.assertRaises( + AssertionError, + lambda: lib._find("bad_tag", "acl_section", "id") + ) + + @mock.patch("pcs.lib.cib.acl.find_element_by_tag_and_id") + def test_map_well_to_common_finder(self, common_finder): + common_finder.return_value = "element" + self.assertEqual("element", lib._find( + lib.TAG_GROUP, "acl_section", "group_id", + none_if_id_unused=True, + id_description="some description" + )) + common_finder.assert_called_once_with( + lib.TAG_GROUP, + "acl_section", + "group_id", + none_if_id_unused=True, + id_description="some description" + ) + + @mock.patch("pcs.lib.cib.acl.find_element_by_tag_and_id") + def test_map_well_to_common_finder_with_automatic_desc(self, common_finder): + common_finder.return_value = "element" + self.assertEqual("element", lib._find( + lib.TAG_GROUP, 
"acl_section", "group_id", none_if_id_unused=True + )) + common_finder.assert_called_once_with( + lib.TAG_GROUP, + "acl_section", + "group_id", + none_if_id_unused=True, + id_description=lib.TAG_DESCRIPTION_MAP[lib.TAG_GROUP] + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_alert.py pcs-0.9.159/pcs/lib/cib/test/test_alert.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_alert.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_alert.py 2017-06-30 15:33:01.000000000 +0000 @@ -36,136 +36,6 @@ alert._update_optional_attribute(element, "attr", "") self.assertTrue(element.get("attr") is None) - -class GetAlertByIdTest(TestCase): - def test_found(self): - xml = """ - - - - - - - - - """ - assert_xml_equal( - '', - etree.tostring( - alert.get_alert_by_id(etree.XML(xml), "alert-2") - ).decode() - ) - - def test_different_place(self): - xml = """ - - - - - - - - - """ - assert_raise_library_error( - lambda: alert.get_alert_by_id(etree.XML(xml), "alert-2"), - ( - severities.ERROR, - report_codes.CIB_ALERT_NOT_FOUND, - {"alert": "alert-2"} - ) - ) - - def test_not_exist(self): - xml = """ - - - - - - - - """ - assert_raise_library_error( - lambda: alert.get_alert_by_id(etree.XML(xml), "alert-2"), - ( - severities.ERROR, - report_codes.CIB_ALERT_NOT_FOUND, - {"alert": "alert-2"} - ) - ) - - -class GetRecipientByIdTest(TestCase): - def setUp(self): - self.xml = etree.XML( - """ - - - - - - - - - - - - - - - - """ - ) - - def test_exist(self): - assert_xml_equal( - '', - etree.tostring( - alert.get_recipient_by_id(self.xml, "rec-1") - ).decode() - ) - - def test_different_place(self): - assert_raise_library_error( - lambda: alert.get_recipient_by_id(self.xml, "rec-4"), - ( - severities.ERROR, - report_codes.ID_NOT_FOUND, - { - "id": "rec-4", - "id_description": "Recipient" - } - ) - ) - - def test_not_in_alert(self): - assert_raise_library_error( - lambda: alert.get_recipient_by_id(self.xml, "rec-2"), - ( - severities.ERROR, - report_codes.ID_NOT_FOUND, - { - "id": "rec-2", - "id_description": "Recipient" - } - ) - ) - - def test_not_recipient(self): - assert_raise_library_error( - lambda: alert.get_recipient_by_id(self.xml, "rec-3"), - ( - severities.ERROR, - report_codes.ID_NOT_FOUND, - { - "id": "rec-3", - "id_description": "Recipient" - } - ) - ) - - class EnsureRecipientValueIsUniqueTest(TestCase): def setUp(self): self.mock_reporter = MockLibraryReportProcessor() @@ -472,8 +342,13 @@ lambda: alert.update_alert(self.tree, "alert0", "/test"), ( severities.ERROR, - report_codes.CIB_ALERT_NOT_FOUND, - {"alert": "alert0"} + report_codes.ID_NOT_FOUND, + { + "id": "alert0", + "context_type": "alerts", + "context_id": "", + "id_description": "alert" + } ) ) @@ -513,8 +388,13 @@ lambda: alert.remove_alert(self.tree, "not-existing-id"), ( severities.ERROR, - report_codes.CIB_ALERT_NOT_FOUND, - {"alert": "not-existing-id"} + report_codes.ID_NOT_FOUND, + { + "id": "not-existing-id", + "context_type": "alerts", + "context_id": "", + "id_description": "alert" + } ) ) @@ -668,8 +548,13 @@ ), ( severities.ERROR, - report_codes.CIB_ALERT_NOT_FOUND, - {"alert": "alert1"} + report_codes.ID_NOT_FOUND, + { + "id": "alert1", + "context_type": "alerts", + "context_id": "", + "id_description": "alert" + } ) ) @@ -990,7 +875,7 @@ report_codes.ID_NOT_FOUND, { "id": "recipient", - "id_description": "Recipient" + "id_description": "recipient" } ) ) @@ -1038,7 +923,9 @@ report_codes.ID_NOT_FOUND, { "id": "recipient", - "id_description": "Recipient" + "context_type": "alerts", + "context_id": "", + 
"id_description": "recipient", } ) ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_constraint_colocation.py pcs-0.9.159/pcs/lib/cib/test/test_constraint_colocation.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_constraint_colocation.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_constraint_colocation.py 2017-06-30 15:33:01.000000000 +0000 @@ -86,7 +86,7 @@ severities.ERROR, report_codes.INVALID_OPTION, { - "option_name": "unknown", + "option_names": ["unknown"], "option_type": None, "allowed": [ "id", diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_constraint_order.py pcs-0.9.159/pcs/lib/cib/test/test_constraint_order.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_constraint_order.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_constraint_order.py 2017-06-30 15:33:01.000000000 +0000 @@ -95,7 +95,7 @@ severities.ERROR, report_codes.INVALID_OPTION, { - "option_name": "unknown", + "option_names": ["unknown"], "option_type": None, "allowed": [ "id", "kind", "symmetrical"], } diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_constraint.py pcs-0.9.159/pcs/lib/cib/test/test_constraint.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_constraint.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_constraint.py 2017-06-30 15:33:01.000000000 +0000 @@ -31,7 +31,7 @@ return element @mock.patch("pcs.lib.cib.constraint.constraint.find_parent") -@mock.patch("pcs.lib.cib.constraint.constraint.resource.find_by_id") +@mock.patch("pcs.lib.cib.constraint.constraint.find_element_by_tag_and_id") class FindValidResourceId(TestCase): def setUp(self): self.cib = "cib" @@ -44,28 +44,47 @@ in_clone_allowed=False, ) - def test_raises_when_element_not_found(self, mock_find_by_id, _): - mock_find_by_id.return_value = None - assert_raise_library_error( - lambda: self.find(id="resourceA"), - ( - severities.ERROR, - report_codes.RESOURCE_DOES_NOT_EXIST, - {"resource_id": "resourceA"} - ), + def fixture_error_multiinstance(self, parent_type, parent_id): + return ( + severities.ERROR, + report_codes.RESOURCE_FOR_CONSTRAINT_IS_MULTIINSTANCE, + { + "resource_id": "resourceA", + "parent_type": parent_type, + "parent_id": parent_id, + }, + report_codes.FORCE_CONSTRAINT_MULTIINSTANCE_RESOURCE + ) + + def fixture_warning_multiinstance(self, parent_type, parent_id): + return ( + severities.WARNING, + report_codes.RESOURCE_FOR_CONSTRAINT_IS_MULTIINSTANCE, + { + "resource_id": "resourceA", + "parent_type": parent_type, + "parent_id": parent_id, + }, + None ) def test_return_same_id_when_resource_is_clone(self, mock_find_by_id, _): mock_find_by_id.return_value = fixture_element("clone", "resourceA") self.assertEqual("resourceA", self.find(id="resourceA")) + def test_return_same_id_when_resource_is_master(self, mock_find_by_id, _): + mock_find_by_id.return_value = fixture_element("master", "resourceA") + self.assertEqual("resourceA", self.find(id="resourceA")) - def test_return_same_id_when_is_primitive_but_not_in_clone( + def test_return_same_id_when_resource_is_bundle(self, mock_find_by_id, _): + mock_find_by_id.return_value = fixture_element("bundle", "resourceA") + self.assertEqual("resourceA", self.find(id="resourceA")) + + def test_return_same_id_when_resource_is_standalone_primitive( self, mock_find_by_id, mock_find_parent ): mock_find_by_id.return_value = fixture_element("primitive", "resourceA") mock_find_parent.return_value = None - self.assertEqual("resourceA", self.find(id="resourceA")) def test_refuse_when_resource_is_in_clone( @@ -73,19 
+92,29 @@ ): mock_find_by_id.return_value = fixture_element("primitive", "resourceA") mock_find_parent.return_value = fixture_element("clone", "clone_id") + assert_raise_library_error( + lambda: self.find(id="resourceA"), + self.fixture_error_multiinstance("clone", "clone_id"), + ) + def test_refuse_when_resource_is_in_master( + self, mock_find_by_id, mock_find_parent + ): + mock_find_by_id.return_value = fixture_element("primitive", "resourceA") + mock_find_parent.return_value = fixture_element("master", "master_id") assert_raise_library_error( lambda: self.find(id="resourceA"), - ( - severities.ERROR, - report_codes.RESOURCE_FOR_CONSTRAINT_IS_MULTIINSTANCE, - { - "resource_id": "resourceA", - "parent_type": "clone", - "parent_id": "clone_id", - }, - report_codes.FORCE_CONSTRAINT_MULTIINSTANCE_RESOURCE - ), + self.fixture_error_multiinstance("master", "master_id"), + ) + + def test_refuse_when_resource_is_in_bundle( + self, mock_find_by_id, mock_find_parent + ): + mock_find_by_id.return_value = fixture_element("primitive", "resourceA") + mock_find_parent.return_value = fixture_element("bundle", "bundle_id") + assert_raise_library_error( + lambda: self.find(id="resourceA"), + self.fixture_error_multiinstance("bundle", "bundle_id"), ) def test_return_clone_id_when_repair_allowed( @@ -102,6 +131,34 @@ self.report_processor.report_item_list, [] ) + def test_return_master_id_when_repair_allowed( + self, mock_find_by_id, mock_find_parent + ): + mock_find_by_id.return_value = fixture_element("primitive", "resourceA") + mock_find_parent.return_value = fixture_element("master", "master_id") + + self.assertEqual( + "master_id", + self.find(can_repair_to_clone=True, id="resourceA") + ) + assert_report_item_list_equal( + self.report_processor.report_item_list, [] + ) + + def test_return_bundle_id_when_repair_allowed( + self, mock_find_by_id, mock_find_parent + ): + mock_find_by_id.return_value = fixture_element("primitive", "resourceA") + mock_find_parent.return_value = fixture_element("bundle", "bundle_id") + + self.assertEqual( + "bundle_id", + self.find(can_repair_to_clone=True, id="resourceA") + ) + assert_report_item_list_equal( + self.report_processor.report_item_list, [] + ) + def test_return_resource_id_when_in_clone_allowed( self, mock_find_by_id, mock_find_parent ): @@ -112,15 +169,46 @@ "resourceA", self.find(in_clone_allowed=True, id="resourceA") ) - assert_report_item_list_equal(self.report_processor.report_item_list, [( - severities.WARNING, - report_codes.RESOURCE_FOR_CONSTRAINT_IS_MULTIINSTANCE, - { - "resource_id": "resourceA", - "parent_type": "clone", - "parent_id": "clone_id", - }, - )]) + assert_report_item_list_equal( + self.report_processor.report_item_list, + [ + self.fixture_warning_multiinstance("clone", "clone_id"), + ] + ) + + def test_return_resource_id_when_in_master_allowed( + self, mock_find_by_id, mock_find_parent + ): + mock_find_by_id.return_value = fixture_element("primitive", "resourceA") + mock_find_parent.return_value = fixture_element("master", "master_id") + + self.assertEqual( + "resourceA", + self.find(in_clone_allowed=True, id="resourceA") + ) + assert_report_item_list_equal( + self.report_processor.report_item_list, + [ + self.fixture_warning_multiinstance("master", "master_id"), + ] + ) + + def test_return_resource_id_when_in_bundle_allowed( + self, mock_find_by_id, mock_find_parent + ): + mock_find_by_id.return_value = fixture_element("primitive", "resourceA") + mock_find_parent.return_value = fixture_element("bundle", "bundle_id") + + self.assertEqual( 
+ "resourceA", + self.find(in_clone_allowed=True, id="resourceA") + ) + assert_report_item_list_equal( + self.report_processor.report_item_list, + [ + self.fixture_warning_multiinstance("bundle", "bundle_id"), + ] + ) class PrepareOptionsTest(TestCase): def test_refuse_unknown_option(self): @@ -132,7 +220,7 @@ severities.ERROR, report_codes.INVALID_OPTION, { - "option_name": "b", + "option_names": ["b"], "option_type": None, "allowed": ["a", "id"], } diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_constraint_ticket.py pcs-0.9.159/pcs/lib/cib/test/test_constraint_ticket.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_constraint_ticket.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_constraint_ticket.py 2017-06-30 15:33:01.000000000 +0000 @@ -72,7 +72,7 @@ severities.ERROR, report_codes.INVALID_OPTION, { - "option_name": "unknown", + "option_names": ["unknown"], "option_type": None, "allowed": ["id", "loss-policy", "rsc", "rsc-role", "ticket"], } @@ -100,7 +100,7 @@ severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, { - "option_name": "ticket" + "option_names": ["ticket"] } ), ) @@ -114,7 +114,7 @@ severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, { - "option_name": "rsc", + "option_names": ["rsc"], } ), ) @@ -223,7 +223,7 @@ ( severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, - {"option_name": "ticket"} + {"option_names": ["ticket"]} ) ) @@ -237,7 +237,7 @@ ( severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, - {"option_name": "ticket"} + {"option_names": ["ticket"]} ) ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_fencing_topology.py pcs-0.9.159/pcs/lib/cib/test/test_fencing_topology.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_fencing_topology.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_fencing_topology.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,984 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree + +from pcs.common import report_codes +from pcs.lib import reports +from pcs.lib.errors import LibraryError, ReportItemSeverity as severity +from pcs.lib.pacemaker.state import ClusterState +from pcs.test.tools.assertions import ( + assert_raise_library_error, + assert_report_item_list_equal, + assert_xml_equal, +) +from pcs.test.tools.custom_mock import MockLibraryReportProcessor +from pcs.test.tools.misc import create_patcher +from pcs.test.tools.pcs_unittest import TestCase#, mock +from pcs.test.tools.xml import etree_to_str + +from pcs.common.fencing_topology import ( + TARGET_TYPE_NODE, + TARGET_TYPE_REGEXP, + TARGET_TYPE_ATTRIBUTE, +) +from pcs.lib.cib import fencing_topology as lib + + +patch_lib = create_patcher("pcs.lib.cib.fencing_topology") + + +class CibMixin(object): + def get_cib(self): + return etree.fromstring(""" + + + + + + + + + + + + + """) + + +class StatusNodesMixin(object): + def get_status(self): + return ClusterState(""" + + + + + + + + + + + + """).node_section.nodes + + +@patch_lib("_append_level_element") +@patch_lib("_validate_level_target_devices_does_not_exist") +@patch_lib("_validate_devices") +@patch_lib("_validate_target") +@patch_lib("_validate_level", return_value="valid_level") +class AddLevel(TestCase): + def setUp(self): + self.reporter = MockLibraryReportProcessor() + + def assert_validators_called( + self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, + dupl_called=True + ): + mock_val_level.assert_called_once_with(self.reporter, "level") + 
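+        # The assertions in this helper mirror the signature of lib.add_level
+        # one to one: each validator is expected to be called exactly once
+        # with the values passed straight through by add_level.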
mock_val_target.assert_called_once_with( + self.reporter, "cluster_status_nodes", "target_type", + "target_value", "force_node" + ) + mock_val_devices.assert_called_once_with( + self.reporter, "resources_el", "devices", "force_device" + ) + if dupl_called: + mock_val_dupl.assert_called_once_with( + self.reporter, "topology_el", "level", "target_type", + "target_value", "devices" + ) + else: + mock_val_dupl.assert_not_called() + + def assert_called_invalid( + self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, + mock_append, dupl_called=True + ): + self.assertRaises( + LibraryError, + lambda: lib.add_level( + self.reporter, "topology_el", "resources_el", "level", + "target_type", "target_value", "devices", + "cluster_status_nodes", "force_device", "force_node" + ) + ) + self.assert_validators_called( + mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, + dupl_called + ) + mock_append.assert_not_called() + + def test_success( + self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, + mock_append + ): + lib.add_level( + self.reporter, "topology_el", "resources_el", "level", + "target_type", "target_value", "devices", "cluster_status_nodes", + "force_device", "force_node" + ) + self.assert_validators_called( + mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl + ) + mock_append.assert_called_once_with( + "topology_el", "valid_level", "target_type", "target_value", + "devices" + ) + + def test_invalid_level( + self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, + mock_append + ): + mock_val_level.side_effect = lambda reporter, level: reporter.append( + reports.invalid_option_value("level", level, "a positive integer") + ) + self.assert_called_invalid( + mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, + mock_append, dupl_called=False + ) + + def test_invalid_target( + self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, + mock_append + ): + mock_val_target.side_effect = ( + lambda reporter, status_nodes, target_type, target_value, force: + reporter.append( + reports.node_not_found(target_value) + ) + ) + self.assert_called_invalid( + mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, + mock_append, dupl_called=False + ) + + def test_invalid_devices( + self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, + mock_append + ): + mock_val_devices.side_effect = ( + lambda reporter, resources, devices, force: + reporter.append( + reports.stonith_resources_do_not_exist(["device"]) + ) + ) + self.assert_called_invalid( + mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, + mock_append, dupl_called=False + ) + + def test_already_exists( + self, mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, + mock_append + ): + mock_val_dupl.side_effect = ( + lambda reporter, tree, level, target_type, target_value, devices: + reporter.append( + reports.fencing_level_already_exists( + level, target_type, target_value, devices + ) + ) + ) + self.assert_called_invalid( + mock_val_level, mock_val_target, mock_val_devices, mock_val_dupl, + mock_append, dupl_called=True + ) + + +class RemoveAllLevels(TestCase, CibMixin): + def setUp(self): + self.tree = self.get_cib() + + def test_success(self): + lib.remove_all_levels(self.tree) + assert_xml_equal( + "", + etree_to_str(self.tree) + ) + + +class RemoveLevelsByParams(TestCase, CibMixin): + def setUp(self): + self.tree = self.get_cib() + self.reporter = MockLibraryReportProcessor() + + def 
get_remaining_ids(self): + return [el.get("id") for el in self.tree.findall("fencing-level")] + + def test_level(self): + lib.remove_levels_by_params( + self.reporter, self.tree, level=2 + ) + self.assertEqual( + self.get_remaining_ids(), + ["fl1", "fl3", "fl5", "fl7", "fl8", "fl9", "fl10"] + ) + assert_report_item_list_equal(self.reporter.report_item_list, []) + + def test_target_node(self): + lib.remove_levels_by_params( + self.reporter, self.tree, target_type=TARGET_TYPE_NODE, + target_value="nodeA" + ) + self.assertEqual( + self.get_remaining_ids(), + ["fl3", "fl4", "fl5", "fl6", "fl7", "fl8", "fl9", "fl10"] + ) + assert_report_item_list_equal(self.reporter.report_item_list, []) + + def test_target_pattern(self): + lib.remove_levels_by_params( + self.reporter, self.tree, target_type=TARGET_TYPE_REGEXP, + target_value="node\d+" + ) + self.assertEqual( + self.get_remaining_ids(), + ["fl1", "fl2", "fl3", "fl4", "fl7", "fl8", "fl9", "fl10"] + ) + assert_report_item_list_equal(self.reporter.report_item_list, []) + + def test_target_attrib(self): + lib.remove_levels_by_params( + self.reporter, self.tree, target_type=TARGET_TYPE_ATTRIBUTE, + target_value=("fencing", "improved") + ) + self.assertEqual( + self.get_remaining_ids(), + ["fl1", "fl2", "fl3", "fl4", "fl5", "fl6", "fl9", "fl10"] + ) + assert_report_item_list_equal(self.reporter.report_item_list, []) + + def test_one_device(self): + lib.remove_levels_by_params( + self.reporter, self.tree, devices=["d3"] + ) + self.assertEqual( + self.get_remaining_ids(), + ["fl1", "fl3", "fl5", "fl6", "fl7", "fl8", "fl9", "fl10"] + ) + assert_report_item_list_equal(self.reporter.report_item_list, []) + + def test_more_devices(self): + lib.remove_levels_by_params( + self.reporter, self.tree, devices=["d2", "d1"] + ) + self.assertEqual( + self.get_remaining_ids(), + ["fl1", "fl2", "fl4", "fl5", "fl6", "fl7", "fl8", "fl9", "fl10"] + ) + assert_report_item_list_equal(self.reporter.report_item_list, []) + + def test_combination(self): + lib.remove_levels_by_params( + self.reporter, self.tree, 2, TARGET_TYPE_NODE, "nodeB", ["d3"] + ) + self.assertEqual( + self.get_remaining_ids(), + ["fl1", "fl2", "fl3", "fl5", "fl6", "fl7", "fl8", "fl9", "fl10"] + ) + assert_report_item_list_equal(self.reporter.report_item_list, []) + + def test_invalid_target(self): + assert_raise_library_error( + lambda: lib.remove_levels_by_params( + self.reporter, self.tree, target_type="bad_target", + target_value="nodeA" + ), + ( + severity.ERROR, + report_codes.INVALID_OPTION_TYPE, + { + "option_name": "target", + "allowed_types": [ + "node", + "regular expression", + "attribute_name=value" + ] + }, + None + ), + ) + self.assertEqual( + self.get_remaining_ids(), + [ + "fl1", "fl2", "fl3", "fl4", "fl5", "fl6", "fl7", "fl8", "fl9", + "fl10" + ] + ) + + def test_no_such_level(self): + assert_raise_library_error( + lambda: lib.remove_levels_by_params( + self.reporter, self.tree, 9, TARGET_TYPE_NODE, "nodeB", ["d3"] + ), + ( + severity.ERROR, + report_codes.CIB_FENCING_LEVEL_DOES_NOT_EXIST, + { + "devices": ["d3", ], + "target_type": TARGET_TYPE_NODE, + "target_value": "nodeB", + "level": 9, + }, + None + ), + ) + self.assertEqual( + self.get_remaining_ids(), + [ + "fl1", "fl2", "fl3", "fl4", "fl5", "fl6", "fl7", "fl8", "fl9", + "fl10" + ] + ) + + def test_no_such_level_ignore_missing(self): + lib.remove_levels_by_params( + self.reporter, self.tree, 9, TARGET_TYPE_NODE, "nodeB", ["d3"], True + ) + self.assertEqual( + self.get_remaining_ids(), + [ + "fl1", "fl2", "fl3", "fl4", "fl5", 
"fl6", "fl7", "fl8", "fl9", + "fl10" + ] + ) + +class RemoveDeviceFromAllLevels(TestCase, CibMixin): + def setUp(self): + self.tree = self.get_cib() + + def test_success(self): + lib.remove_device_from_all_levels(self.tree, "d3") + assert_xml_equal( + """ + + + + + + + + + + + """, + etree_to_str(self.tree) + ) + + def test_no_such_device(self): + original_xml = etree_to_str(self.tree) + lib.remove_device_from_all_levels(self.tree, "dX") + assert_xml_equal(original_xml, etree_to_str(self.tree)) + + +class Export(TestCase, CibMixin): + def test_empty(self): + self.assertEqual( + lib.export(etree.fromstring("")), + [] + ) + + def test_success(self): + self.assertEqual( + lib.export(self.get_cib()), + [ + { + "level": "1", + "target_type": "node", + "target_value": "nodeA", + "devices": ["d1", "d2"], + }, + { + "level": "2", + "target_type": "node", + "target_value": "nodeA", + "devices": ["d3"], + }, + { + "level": "1", + "target_type": "node", + "target_value": "nodeB", + "devices": ["d2", "d1"], + }, + { + "level": "2", + "target_type": "node", + "target_value": "nodeB", + "devices": ["d3"], + }, + { + "level": "1", + "target_type": "regexp", + "target_value": "node\d+", + "devices": ["d3", "d4"], + }, + { + "level": "2", + "target_type": "regexp", + "target_value": "node\d+", + "devices": ["d1"], + }, + { + "level": "3", + "target_type": "attribute", + "target_value": ("fencing", "improved"), + "devices": ["d3", "d4"], + }, + { + "level": "4", + "target_type": "attribute", + "target_value": ("fencing", "improved"), + "devices": ["d5"], + }, + { + "level": "3", + "target_type": "regexp", + "target_value": "node-R.*", + "devices": ["dR"], + }, + { + "level": "4", + "target_type": "attribute", + "target_value": ("fencing", "remote-special"), + "devices": ["dR-special"], + } + ] + ) + + +class Verify(TestCase, CibMixin, StatusNodesMixin): + def fixture_resource(self, tree, name): + el = etree.SubElement(tree, "primitive", id=name, type="fence_dummy") + el.set("class", "stonith") + + def test_empty(self): + resources = etree.fromstring("") + topology = etree.fromstring("") + reporter = MockLibraryReportProcessor() + + lib.verify(reporter, topology, resources, self.get_status()) + + assert_report_item_list_equal(reporter.report_item_list, []) + + def test_success(self): + resources = etree.fromstring("") + for name in ["d1", "d2", "d3", "d4", "d5", "dR", "dR-special"]: + self.fixture_resource(resources, name) + reporter = MockLibraryReportProcessor() + + lib.verify(reporter, self.get_cib(), resources, self.get_status()) + + assert_report_item_list_equal(reporter.report_item_list, []) + + def test_failures(self): + resources = etree.fromstring("") + reporter = MockLibraryReportProcessor() + + lib.verify(reporter, self.get_cib(), resources, []) + + report = [ + ( + severity.ERROR, + report_codes.STONITH_RESOURCES_DO_NOT_EXIST, + { + "stonith_ids": [ + "d1", "d2", "d3", "d4", "d5", "dR", "dR-special" + ], + }, + None + ), + ( + severity.ERROR, + report_codes.NODE_NOT_FOUND, + { + "node": "nodeA", + }, + None + ), + ( + severity.ERROR, + report_codes.NODE_NOT_FOUND, + { + "node": "nodeB", + }, + None + ), + ] + assert_report_item_list_equal(reporter.report_item_list, report) + + +class ValidateLevel(TestCase): + def test_success(self): + reporter = MockLibraryReportProcessor() + lib._validate_level(reporter, 1) + lib._validate_level(reporter, "1") + lib._validate_level(reporter, 9) + lib._validate_level(reporter, "9") + lib._validate_level(reporter, "05") + 
assert_report_item_list_equal(reporter.report_item_list, []) + + def test_invalid(self): + reporter = MockLibraryReportProcessor() + lib._validate_level(reporter, "") + lib._validate_level(reporter, 0) + lib._validate_level(reporter, "0") + lib._validate_level(reporter, -1) + lib._validate_level(reporter, "-1") + lib._validate_level(reporter, "1abc") + reports = [] + for value in ["", 0, "0", -1, "-1", "1abc"]: + reports.append(( + severity.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_value": value, + "option_name": "level", + "allowed_values": "a positive integer", + }, + None + )) + assert_report_item_list_equal(reporter.report_item_list, reports) + + +@patch_lib("_validate_target_valuewise") +@patch_lib("_validate_target_typewise") +class ValidateTarget(TestCase): + def test_delegate(self, validate_type, validate_value): + lib._validate_target("reporter", "status", "type", "value", "force") + validate_type.assert_called_once_with("reporter", "type") + validate_value.assert_called_once_with( + "reporter", "status", "type", "value", "force" + ) + + +class ValidateTargetTypewise(TestCase): + def test_success(self): + reporter = MockLibraryReportProcessor() + lib._validate_target_typewise(reporter, TARGET_TYPE_NODE) + lib._validate_target_typewise(reporter, TARGET_TYPE_ATTRIBUTE) + lib._validate_target_typewise(reporter, TARGET_TYPE_REGEXP) + assert_report_item_list_equal(reporter.report_item_list, []) + + def test_empty(self): + reporter = MockLibraryReportProcessor() + lib._validate_target_typewise(reporter, "") + report = [( + severity.ERROR, + report_codes.INVALID_OPTION_TYPE, + { + "option_name": "target", + "allowed_types": [ + "node", + "regular expression", + "attribute_name=value" + ], + }, + None + )] + assert_report_item_list_equal(reporter.report_item_list, report) + + def test_invalid(self): + reporter = MockLibraryReportProcessor() + lib._validate_target_typewise(reporter, "bad_target") + report = [( + severity.ERROR, + report_codes.INVALID_OPTION_TYPE, + { + "option_name": "target", + "allowed_types": [ + "node", + "regular expression", + "attribute_name=value" + ], + }, + None + )] + assert_report_item_list_equal(reporter.report_item_list, report) + + +class ValidateTargetValuewise(TestCase, StatusNodesMixin): + def setUp(self): + self.state = self.get_status() + + def test_node_valid(self): + reporter = MockLibraryReportProcessor() + lib._validate_target_valuewise( + reporter, self.state, TARGET_TYPE_NODE, "nodeA" + ) + assert_report_item_list_equal(reporter.report_item_list, []) + + def test_node_empty(self): + reporter = MockLibraryReportProcessor() + lib._validate_target_valuewise( + reporter, self.state, TARGET_TYPE_NODE, "" + ) + report = [( + severity.ERROR, + report_codes.NODE_NOT_FOUND, + { + "node": "", + }, + report_codes.FORCE_NODE_DOES_NOT_EXIST + )] + assert_report_item_list_equal(reporter.report_item_list, report) + + def test_node_invalid(self): + reporter = MockLibraryReportProcessor() + lib._validate_target_valuewise( + reporter, self.state, TARGET_TYPE_NODE, "rh7-x" + ) + report = [( + severity.ERROR, + report_codes.NODE_NOT_FOUND, + { + "node": "rh7-x", + }, + report_codes.FORCE_NODE_DOES_NOT_EXIST + )] + assert_report_item_list_equal(reporter.report_item_list, report) + + def test_node_invalid_force(self): + reporter = MockLibraryReportProcessor() + lib._validate_target_valuewise( + reporter, self.state, TARGET_TYPE_NODE, "rh7-x", force_node=True + ) + report = [( + severity.WARNING, + report_codes.NODE_NOT_FOUND, + { + "node": "rh7-x", + }, 
+            None
+        )]
+        assert_report_item_list_equal(reporter.report_item_list, report)
+
+    def test_node_invalid_not_forceable(self):
+        reporter = MockLibraryReportProcessor()
+        lib._validate_target_valuewise(
+            reporter, self.state, TARGET_TYPE_NODE, "rh7-x", allow_force=False
+        )
+        report = [(
+            severity.ERROR,
+            report_codes.NODE_NOT_FOUND,
+            {
+                "node": "rh7-x",
+            },
+            None
+        )]
+        assert_report_item_list_equal(reporter.report_item_list, report)
+
+
+class ValidateDevices(TestCase):
+    def setUp(self):
+        self.resources_el = etree.fromstring("""
+
+
+
+
+
+        """)
+
+    def test_success(self):
+        reporter = MockLibraryReportProcessor()
+        lib._validate_devices(
+            reporter, self.resources_el, ["stonith1"]
+        )
+        lib._validate_devices(
+            reporter, self.resources_el, ["stonith1", "stonith2"]
+        )
+        assert_report_item_list_equal(reporter.report_item_list, [])
+
+    def test_empty(self):
+        reporter = MockLibraryReportProcessor()
+        lib._validate_devices(reporter, self.resources_el, [])
+        report = [(
+            severity.ERROR,
+            report_codes.REQUIRED_OPTION_IS_MISSING,
+            {
+                "option_type": None,
+                "option_names": ["stonith devices"],
+            },
+            None
+        )]
+        assert_report_item_list_equal(reporter.report_item_list, report)
+
+    def test_invalid(self):
+        reporter = MockLibraryReportProcessor()
+        lib._validate_devices(reporter, self.resources_el, ["dummy", "fenceX"])
+        report = [(
+            severity.ERROR,
+            report_codes.STONITH_RESOURCES_DO_NOT_EXIST,
+            {
+                "stonith_ids": ["dummy", "fenceX"],
+            },
+            report_codes.FORCE_STONITH_RESOURCE_DOES_NOT_EXIST
+        )]
+        assert_report_item_list_equal(reporter.report_item_list, report)
+
+    def test_invalid_forced(self):
+        reporter = MockLibraryReportProcessor()
+        lib._validate_devices(
+            reporter, self.resources_el, ["dummy", "fenceX"], force_device=True
+        )
+        report = [(
+            severity.WARNING,
+            report_codes.STONITH_RESOURCES_DO_NOT_EXIST,
+            {
+                "stonith_ids": ["dummy", "fenceX"],
+            },
+            None
+        )]
+        assert_report_item_list_equal(reporter.report_item_list, report)
+
+    def test_invalid_not_forceable(self):
+        reporter = MockLibraryReportProcessor()
+        lib._validate_devices(
+            reporter, self.resources_el, ["dummy", "fenceX"], allow_force=False
+        )
+        report = [(
+            severity.ERROR,
+            report_codes.STONITH_RESOURCES_DO_NOT_EXIST,
+            {
+                "stonith_ids": ["dummy", "fenceX"],
+            },
+            None
+        )]
+        assert_report_item_list_equal(reporter.report_item_list, report)
+
+
+@patch_lib("_find_level_elements")
+class ValidateLevelTargetDevicesDoesNotExist(TestCase):
+    def test_success(self, mock_find):
+        mock_find.return_value = []
+        reporter = MockLibraryReportProcessor()
+
+        lib._validate_level_target_devices_does_not_exist(
+            reporter, "tree", "level", "target_type", "target_value", "devices"
+        )
+
+        mock_find.assert_called_once_with(
+            "tree", "level", "target_type", "target_value", "devices"
+        )
+        assert_report_item_list_equal(reporter.report_item_list, [])
+
+    def test_error(self, mock_find):
+        mock_find.return_value = ["element"]
+        reporter = MockLibraryReportProcessor()
+
+        lib._validate_level_target_devices_does_not_exist(
+            reporter, "tree", "level", "target_type", "target_value", "devices"
+        )
+
+        mock_find.assert_called_once_with(
+            "tree", "level", "target_type", "target_value", "devices"
+        )
+        report = [(
+            severity.ERROR,
+            report_codes.CIB_FENCING_LEVEL_ALREADY_EXISTS,
+            {
+                "devices": "devices",
+                "target_type": "target_type",
+                "target_value": "target_value",
+                "level": "level",
+            },
+            None
+        )]
+        assert_report_item_list_equal(reporter.report_item_list, report)
+
+
+class AppendLevelElement(TestCase):
+    def setUp(self):
+        self.tree = etree.fromstring("")
+
+    def test_node_name(self):
+        lib._append_level_element(
+            self.tree, 1, TARGET_TYPE_NODE, "node1", ["d1"]
+        )
+        assert_xml_equal(
+            """
+
+
+
+            """,
+            etree_to_str(self.tree)
+        )
+
+    def test_node_pattern(self):
+        lib._append_level_element(
+            self.tree, "2", TARGET_TYPE_REGEXP, "node-\d+", ["d1", "d2"]
+        )
+        assert_xml_equal(
+            """
+
+
+
+            """,
+            etree_to_str(self.tree)
+        )
+
+    def test_node_attribute(self):
+        lib._append_level_element(
+            self.tree, 3, TARGET_TYPE_ATTRIBUTE, ("name%@x", "val%@x"), ["d1"],
+        )
+        assert_xml_equal(
+            """
+
+
+
+            """,
+            etree_to_str(self.tree)
+        )
+
+
+class FindLevelElements(TestCase, CibMixin):
+    def setUp(self):
+        self.tree = self.get_cib()
+
+    def get_ids(self, elements):
+        return [el.get("id") for el in elements]
+
+    def test_no_filter(self):
+        self.assertEqual(
+            self.get_ids(lib._find_level_elements(self.tree)),
+            [
+                "fl1", "fl2", "fl3", "fl4", "fl5", "fl6", "fl7", "fl8", "fl9",
+                "fl10"
+            ]
+        )
+
+    def test_no_such_level(self):
+        self.assertEqual(
+            self.get_ids(lib._find_level_elements(
+                self.tree, level=2, target_type=TARGET_TYPE_NODE,
+                target_value="nodeB", devices=["d5"]
+            )),
+            []
+        )
+
+    def test_level(self):
+        self.assertEqual(
+            self.get_ids(lib._find_level_elements(
+                self.tree, level=1
+            )),
+            ["fl1", "fl3", "fl5"]
+        )
+
+    def test_target_node(self):
+        self.assertEqual(
+            self.get_ids(lib._find_level_elements(
+                self.tree, target_type=TARGET_TYPE_NODE, target_value="nodeB"
+            )),
+            ["fl3", "fl4"]
+        )
+
+    def test_target_pattern(self):
+        self.assertEqual(
+            self.get_ids(lib._find_level_elements(
+                self.tree, target_type=TARGET_TYPE_REGEXP,
+                target_value="node-R.*"
+            )),
+            ["fl9"]
+        )
+
+    def test_target_attribute(self):
+        self.assertEqual(
+            self.get_ids(lib._find_level_elements(
+                self.tree, target_type=TARGET_TYPE_ATTRIBUTE,
+                target_value=("fencing", "improved")
+            )),
+            ["fl7", "fl8"]
+        )
+
+    def test_devices(self):
+        self.assertEqual(
+            self.get_ids(lib._find_level_elements(
+                self.tree, devices=["d3"]
+            )),
+            ["fl2", "fl4"]
+        )
+
+        self.assertEqual(
+            self.get_ids(lib._find_level_elements(
+                self.tree, devices=["d1", "d2"]
+            )),
+            ["fl1"]
+        )
+
+    def test_combination(self):
+        self.assertEqual(
+            self.get_ids(lib._find_level_elements(
+                self.tree, 2, TARGET_TYPE_NODE, "nodeB", ["d3"]
+            )),
+            ["fl4"]
+        )
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_node.py pcs-0.9.159/pcs/lib/cib/test/test_node.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_node.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/test/test_node.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,233 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from lxml import etree
+
+from pcs.common import report_codes
+from pcs.lib.errors import ReportItemSeverity as severity
+from pcs.lib.pacemaker.state import ClusterState
+from pcs.test.tools.assertions import (
+    assert_raise_library_error,
+    assert_xml_equal,
+)
+from pcs.test.tools.pcs_unittest import TestCase, mock
+from pcs.test.tools.xml import etree_to_str
+
+from pcs.lib.cib import node
+
+
+@mock.patch("pcs.lib.cib.node._ensure_node_exists")
+class UpdateNodeInstanceAttrs(TestCase):
+    def setUp(self):
+        self.node1 = etree.fromstring("""
+
+        """)
+        self.node2 = etree.fromstring("""
+
+
+
+
+
+
+
+        """)
+        self.node3 = etree.fromstring("""
+
+
+
+
+
+
+
+
+        """)
+        self.cib = etree.fromstring("""
+
+
+            {0}{1}{2}
+
+
+        """.format(*[
+            etree_to_str(el) for el in [self.node1, self.node2, self.node3]
+        ]))
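+        # NOTE: _ensure_node_exists is patched out by the class decorator
+        # above, so the tests below exercise only the nvset update logic; the
+        # string below is an opaque placeholder passed through to that mock.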
+        self.state = "node state list"
+
+    def test_empty_node(self, mock_get_node):
+        mock_get_node.return_value = self.node1
+        node.update_node_instance_attrs(
+            self.cib, "rh73-node1", {"x": "X"}, self.state
+        )
+        assert_xml_equal(
+            etree_to_str(self.node1),
+            """
+
+
+
+
+
+            """
+        )
+
+    def test_existing_attrs(self, mock_get_node):
+        mock_get_node.return_value = self.node2
+        node.update_node_instance_attrs(
+            self.cib, "rh73-node2", {"a": "", "b": "b", "x": "X"}, self.state
+        )
+        assert_xml_equal(
+            etree_to_str(self.node2),
+            """
+
+
+
+
+
+
+
+            """
+        )
+
+    def test_multiple_attrs_sets(self, mock_get_node):
+        mock_get_node.return_value = self.node3
+        node.update_node_instance_attrs(
+            self.cib, "rh73-node3", {"x": "X"}, self.state
+        )
+        assert_xml_equal(
+            etree_to_str(self.node3),
+            """
+
+
+
+
+
+
+
+
+
+            """
+        )
+
+class EnsureNodeExists(TestCase):
+    def setUp(self):
+        self.node1 = etree.fromstring("""
+
+        """)
+        self.node2 = etree.fromstring("""
+
+        """)
+        self.nodes = etree.Element("nodes")
+        self.nodes.append(self.node1)
+
+        self.state = ClusterState("""
+
+
+
+
+
+
+
+
+
+
+
+        """).node_section.nodes
+
+    def test_node_already_exists(self):
+        assert_xml_equal(
+            etree_to_str(node._ensure_node_exists(self.nodes, "name-test1")),
+            etree_to_str(self.node1)
+        )
+
+    def test_node_missing_no_state(self):
+        assert_raise_library_error(
+            lambda: node._ensure_node_exists(self.nodes, "name-missing"),
+            (
+                severity.ERROR,
+                report_codes.NODE_NOT_FOUND,
+                {"node": "name-missing"},
+                None
+            ),
+        )
+
+    def test_node_missing_not_in_state(self):
+        assert_raise_library_error(
+            lambda: node._ensure_node_exists(
+                self.nodes, "name-missing", self.state
+            ),
+            (
+                severity.ERROR,
+                report_codes.NODE_NOT_FOUND,
+                {"node": "name-missing"},
+                None
+            ),
+        )
+
+    def test_node_missing_and_gets_created(self):
+        assert_xml_equal(
+            etree_to_str(
+                node._ensure_node_exists(self.nodes, "name-test2", self.state)
+            ),
+            etree_to_str(self.node2)
+        )
+
+class GetNodeByUname(TestCase):
+    def setUp(self):
+        self.node1 = etree.fromstring("""
+
+        """)
+        self.node2 = etree.fromstring("""
+
+        """)
+        self.nodes = etree.Element("nodes")
+        self.nodes.append(self.node1)
+        self.nodes.append(self.node2)
+
+    def test_found(self):
+        assert_xml_equal(
+            etree_to_str(node._get_node_by_uname(self.nodes, "name-test1")),
+            """"""
+        )
+
+    def test_not_found(self):
+        self.assertTrue(
+            node._get_node_by_uname(self.nodes, "id-test1") is None
+        )
+
+class CreateNode(TestCase):
+    def setUp(self):
+        self.nodes = etree.Element("nodes")
+
+    def test_minimal(self):
+        node._create_node(self.nodes, "id-test", "name-test")
+        assert_xml_equal(
+            """
+
+
+
+            """,
+            etree_to_str(self.nodes)
+        )
+
+    def test_with_type(self):
+        node._create_node(self.nodes, "id-test", "name-test", "type-test")
+        assert_xml_equal(
+            """
+
+
+
+            """,
+            etree_to_str(self.nodes)
+        )
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_nvpair.py pcs-0.9.159/pcs/lib/cib/test/test_nvpair.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_nvpair.py 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/test/test_nvpair.py 2017-06-30 15:33:01.000000000 +0000
@@ -8,8 +8,37 @@
 from lxml import etree
 
 from pcs.lib.cib import nvpair
+from pcs.lib.cib.tools import IdProvider
 from pcs.test.tools.assertions import assert_xml_equal
 from pcs.test.tools.pcs_unittest import TestCase, mock
+from pcs.test.tools.xml import etree_to_str
+
+class AppendNewNvpair(TestCase):
+    def test_append_new_nvpair_to_given_element(self):
+        nvset_element = etree.fromstring('')
+        nvpair._append_new_nvpair(nvset_element, "b", "c")
+        assert_xml_equal(
+            etree_to_str(nvset_element),
+            """
+
+
+
+            """
+        )
+
+    def test_with_id_provider(self):
+        nvset_element = etree.fromstring('')
+        provider = IdProvider(nvset_element)
+        provider.book_ids("a-b")
+        nvpair._append_new_nvpair(nvset_element, "b", "c", provider)
+        assert_xml_equal(
+            etree_to_str(nvset_element),
+            """
+
+
+
+            """
+        )
 
 class UpdateNvsetTest(TestCase):
@@ -38,7 +38,7 @@
             """,
-            etree.tostring(nvset_element).decode()
+            etree_to_str(nvset_element)
         )
 
     def test_empty_value_has_no_effect(self):
         xml = """
@@ -50,7 +79,25 @@
             """
         nvset_element = etree.fromstring(xml)
         nvpair.update_nvset(nvset_element, {})
-        assert_xml_equal(xml, etree.tostring(nvset_element).decode())
+        assert_xml_equal(xml, etree_to_str(nvset_element))
+
+    def test_remove_empty_nvset(self):
+        xml_pre = """
+
+
+
+
+
+            """
+        xml_post = """
+
+
+            """
+        xml = etree.fromstring(xml_pre)
+        nvset_element = xml.find("instance_attributes")
+        nvpair.update_nvset(nvset_element, {"a": ""})
+        assert_xml_equal(xml_post, etree_to_str(xml))
+
 
 class SetNvpairInNvsetTest(TestCase):
     def setUp(self):
@@ -75,7 +122,7 @@
             """,
-            etree.tostring(self.nvset).decode()
+            etree_to_str(self.nvset)
         )
 
     def test_add(self):
@@ -89,7 +136,7 @@
             """,
-            etree.tostring(self.nvset).decode()
+            etree_to_str(self.nvset)
         )
 
     def test_remove(self):
@@ -101,7 +148,7 @@
             """,
-            etree.tostring(self.nvset).decode()
+            etree_to_str(self.nvset)
         )
 
     def test_remove_not_existing(self):
@@ -114,11 +161,55 @@
             """,
-            etree.tostring(self.nvset).decode()
+            etree_to_str(self.nvset)
+        )
+
+class AppendNewNvsetTest(TestCase):
+    def test_append_new_nvset_to_given_element(self):
+        context_element = etree.fromstring('')
+        nvpair.append_new_nvset("instance_attributes", context_element, {
+            "a": "b",
+            "c": "d",
+        })
+        assert_xml_equal(
+            """
+
+
+
+
+
+
+            """,
+            etree_to_str(context_element)
+        )
+
+    def test_with_id_provider(self):
+        context_element = etree.fromstring('')
+        provider = IdProvider(context_element)
+        provider.book_ids("a-instance_attributes", "a-instance_attributes-1-a")
+        nvpair.append_new_nvset(
+            "instance_attributes",
+            context_element,
+            {
+                "a": "b",
+                "c": "d",
+            },
+            provider
+        )
+        assert_xml_equal(
+            """
+
+
+
+
+
+
+            """,
+            etree_to_str(context_element)
         )
 
-class ArrangeSomeNvsetTest(TestCase):
+class ArrangeFirstNvsetTest(TestCase):
     def setUp(self):
         self.root = etree.Element("root", id="root")
         self.nvset = etree.SubElement(self.root, "nvset", id="nvset")
@@ -142,7 +233,7 @@
             """,
-            etree.tostring(self.nvset).decode()
+            etree_to_str(self.nvset)
         )
 
     def test_update_existing_nvset(self):
@@ -161,7 +252,7 @@
             """,
-            etree.tostring(self.nvset).decode()
+            etree_to_str(self.nvset)
         )
 
     def test_create_new_nvset_if_does_not_exist(self):
@@ -183,7 +274,7 @@
             """,
-            etree.tostring(root).decode()
+            etree_to_str(root)
         )
 
 
@@ -218,3 +309,111 @@
         ],
         nvpair.get_nvset(nvset)
     )
+
+class GetValue(TestCase):
+    def assert_find_value(self, tag_name, name, value, xml, default=None):
+        self.assertEqual(
+            value,
+            nvpair.get_value(tag_name, etree.fromstring(xml), name, default)
+        )
+
+    def test_return_value_when_name_exists(self):
+        self.assert_find_value(
+            "meta_attributes",
+            "SOME-NAME",
+            "some-value",
+            """
+
+
+
+
+
+
+            """,
+        )
+
+    def test_return_none_when_name_not_exists(self):
+        self.assert_find_value(
+            "instance_attributes",
+            "SOME-NAME",
+            value=None,
+            xml="""
+
+
+
+
+
+            """,
+        )
+
+    def test_return_default_when_name_not_exists(self):
+        self.assert_find_value(
+            "instance_attributes",
+            "SOME-NAME",
value="DEFAULT", + xml=""" + + + + + + """, + default="DEFAULT", + ) + + def test_return_none_when_no_nvpair(self): + self.assert_find_value( + "instance_attributes", + "SOME-NAME", + value=None, + xml=""" + + + + """, + ) + + def test_return_none_when_no_nvset(self): + self.assert_find_value( + "instance_attributes", + "SOME-NAME", + value=None, + xml=""" + + + """, + ) + +class HasMetaAttribute(TestCase): + def test_return_false_if_does_not_have_such_attribute(self): + resource_element = etree.fromstring("""""") + self.assertFalse( + nvpair.has_meta_attribute(resource_element, "attr_name") + ) + + def test_return_true_if_such_meta_attribute_exists(self): + resource_element = etree.fromstring(""" + + + + + + + """) + self.assertTrue( + nvpair.has_meta_attribute(resource_element, "attr_name") + ) + + def test_return_false_if_meta_attribute_exists_but_in_nested_element(self): + resource_element = etree.fromstring(""" + + + + + + + + """) + self.assertFalse( + nvpair.has_meta_attribute(resource_element, "attr_name") + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_bundle.py pcs-0.9.159/pcs/lib/cib/test/test_resource_bundle.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_bundle.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_resource_bundle.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,42 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree + +from pcs.lib.cib.resource import bundle +from pcs.test.tools.pcs_unittest import TestCase + +# pcs.lib.cib.resource.bundle is covered by: +# - pcs.lib.commands.test.resource.test_bundle_create +# - pcs.lib.commands.test.resource.test_bundle_update +# - pcs.lib.commands.test.resource.test_resource_create + +class IsBundle(TestCase): + def test_is_bundle(self): + self.assertTrue(bundle.is_bundle(etree.fromstring(""))) + self.assertFalse(bundle.is_bundle(etree.fromstring(""))) + self.assertFalse(bundle.is_bundle(etree.fromstring(""))) + + +class GetInnerResource(TestCase): + def assert_inner_resource(self, resource_id, xml): + self.assertEqual( + resource_id, + bundle.get_inner_resource(etree.fromstring(xml)).get("id", "") + ) + + def test_primitive(self): + self.assert_inner_resource( + "A", + """ + + + + + + """ + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_clone.py pcs-0.9.159/pcs/lib/cib/test/test_resource_clone.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_clone.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_resource_clone.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,109 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree + +from pcs.lib.cib.resource import clone +from pcs.test.tools.pcs_unittest import TestCase +from pcs.test.tools.assertions import assert_xml_equal + +class AppendNewCommon(TestCase): + def setUp(self): + self.cib = etree.fromstring(""" + + + + + + """) + self.resources = self.cib.find(".//resources") + self.primitive = self.cib.find(".//primitive") + + def assert_clone_effect(self, options, xml): + clone.append_new( + clone.TAG_CLONE, + self.resources, + self.primitive, + options + ) + assert_xml_equal(etree.tostring(self.cib).decode(), xml) + + def test_add_without_options(self): + self.assert_clone_effect({}, """ + + + + + + + + """) + + def test_add_with_options(self): + self.assert_clone_effect({"a": "b"}, """ + + + + + + + + + + + """) + + +class IsAnyClone(TestCase): + 
+    def test_is_clone(self):
+        self.assertTrue(clone.is_clone(etree.fromstring("")))
+        self.assertFalse(clone.is_clone(etree.fromstring("")))
+        self.assertFalse(clone.is_clone(etree.fromstring("")))
+
+    def test_is_master(self):
+        self.assertTrue(clone.is_master(etree.fromstring("")))
+        self.assertFalse(clone.is_master(etree.fromstring("")))
+        self.assertFalse(clone.is_master(etree.fromstring("")))
+
+    def test_is_any_clone(self):
+        self.assertTrue(clone.is_any_clone(etree.fromstring("")))
+        self.assertTrue(clone.is_any_clone(etree.fromstring("")))
+        self.assertFalse(clone.is_any_clone(etree.fromstring("")))
+
+
+class GetInnerResource(TestCase):
+    def assert_inner_resource(self, resource_id, xml):
+        self.assertEqual(
+            resource_id,
+            clone.get_inner_resource(etree.fromstring(xml)).get("id", "")
+        )
+
+    def test_primitive(self):
+        self.assert_inner_resource(
+            "A",
+            """
+
+
+
+
+
+            """
+        )
+
+    def test_group(self):
+        self.assert_inner_resource(
+            "A",
+            """
+
+
+
+
+
+            """
+        )
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_common.py pcs-0.9.159/pcs/lib/cib/test/test_resource_common.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_common.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/test/test_resource_common.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,570 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from lxml import etree
+
+from pcs.lib.cib.resource import common
+from pcs.test.tools.assertions import assert_xml_equal
+from pcs.test.tools.pcs_unittest import TestCase
+from pcs.test.tools.xml import etree_to_str
+
+
+fixture_cib = etree.fromstring("""
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+""")
+
+
+class AreMetaDisabled(TestCase):
+    def test_detect_is_disabled(self):
+        self.assertTrue(common.are_meta_disabled({"target-role": "Stopped"}))
+        self.assertTrue(common.are_meta_disabled({"target-role": "stopped"}))
+
+    def test_detect_is_not_disabled(self):
+        self.assertFalse(common.are_meta_disabled({}))
+        self.assertFalse(common.are_meta_disabled({"target-role": "any"}))
+
+
+class IsCloneDeactivatedByMeta(TestCase):
+    def assert_is_disabled(self, meta_attributes):
+        self.assertTrue(common.is_clone_deactivated_by_meta(meta_attributes))
+
+    def assert_is_not_disabled(self, meta_attributes):
+        self.assertFalse(common.is_clone_deactivated_by_meta(meta_attributes))
+
+    def test_detect_is_disabled(self):
+        self.assert_is_disabled({"target-role": "Stopped"})
+        self.assert_is_disabled({"target-role": "stopped"})
+        self.assert_is_disabled({"clone-max": "0"})
+        self.assert_is_disabled({"clone-max": "00"})
+        self.assert_is_disabled({"clone-max": 0})
+        self.assert_is_disabled({"clone-node-max": "0"})
+        self.assert_is_disabled({"clone-node-max": "abc1"})
+
+    def test_detect_is_not_disabled(self):
+        self.assert_is_not_disabled({})
+        self.assert_is_not_disabled({"target-role": "any"})
+        self.assert_is_not_disabled({"clone-max": "1"})
+        self.assert_is_not_disabled({"clone-max": "01"})
+        self.assert_is_not_disabled({"clone-max": 1})
+        self.assert_is_not_disabled({"clone-node-max": "1"})
+        self.assert_is_not_disabled({"clone-node-max": 1})
+        self.assert_is_not_disabled({"clone-node-max": "1abc"})
+        self.assert_is_not_disabled({"clone-node-max": "1.1"})
+
+
+class FindPrimitives(TestCase):
+    def assert_find_resources(self, input_resource_id, output_resource_ids):
+        self.assertEqual(
+            output_resource_ids,
+            [
+                element.get("id", "")
+                for element in
+                common.find_primitives(
+                    fixture_cib.find(
+                        './/*[@id="{0}"]'.format(input_resource_id)
+                    )
+                )
+            ]
+        )
+
+    def test_primitive(self):
+        self.assert_find_resources("A", ["A"])
+
+    def test_primitive_in_clone(self):
+        self.assert_find_resources("B", ["B"])
+
+    def test_primitive_in_master(self):
+        self.assert_find_resources("C", ["C"])
+
+    def test_primitive_in_group(self):
+        self.assert_find_resources("D1", ["D1"])
+        self.assert_find_resources("D2", ["D2"])
+        self.assert_find_resources("E1", ["E1"])
+        self.assert_find_resources("E2", ["E2"])
+        self.assert_find_resources("F1", ["F1"])
+        self.assert_find_resources("F2", ["F2"])
+
+    def test_primitive_in_bundle(self):
+        self.assert_find_resources("H", ["H"])
+
+    def test_group(self):
+        self.assert_find_resources("D", ["D1", "D2"])
+
+    def test_group_in_clone(self):
+        self.assert_find_resources("E", ["E1", "E2"])
+
+    def test_group_in_master(self):
+        self.assert_find_resources("F", ["F1", "F2"])
+
+    def test_cloned_primitive(self):
+        self.assert_find_resources("B-clone", ["B"])
+
+    def test_cloned_group(self):
+        self.assert_find_resources("E-clone", ["E1", "E2"])
+
+    def test_mastered_primitive(self):
+        self.assert_find_resources("C-master", ["C"])
+
+    def test_mastered_group(self):
+        self.assert_find_resources("F-master", ["F1", "F2"])
+
+    def test_bundle_empty(self):
+        self.assert_find_resources("G-bundle", [])
+
+    def test_bundle_with_primitive(self):
+        self.assert_find_resources("H-bundle", ["H"])
+
+
+class FindResourcesToEnable(TestCase):
+    def assert_find_resources(self, input_resource_id, output_resource_ids):
+        self.assertEqual(
+            output_resource_ids,
+            [
+                element.get("id", "")
+                for element in
+                common.find_resources_to_enable(
+                    fixture_cib.find(
+                        './/*[@id="{0}"]'.format(input_resource_id)
+                    )
+                )
+            ]
+        )
+
+    def test_primitive(self):
+        self.assert_find_resources("A", ["A"])
+
+    def test_primitive_in_clone(self):
+        self.assert_find_resources("B", ["B", "B-clone"])
+
+    def test_primitive_in_master(self):
+        self.assert_find_resources("C", ["C", "C-master"])
+
+    def test_primitive_in_group(self):
+        self.assert_find_resources("D1", ["D1"])
+        self.assert_find_resources("D2", ["D2"])
+        self.assert_find_resources("E1", ["E1"])
+        self.assert_find_resources("E2", ["E2"])
+        self.assert_find_resources("F1", ["F1"])
+        self.assert_find_resources("F2", ["F2"])
+
+    def test_primitive_in_bundle(self):
+        self.assert_find_resources("H", ["H", "H-bundle"])
+
+    def test_group(self):
+        self.assert_find_resources("D", ["D"])
+
+    def test_group_in_clone(self):
+        self.assert_find_resources("E", ["E", "E-clone"])
+
+    def test_group_in_master(self):
+        self.assert_find_resources("F", ["F", "F-master"])
+
+    def test_cloned_primitive(self):
+        self.assert_find_resources("B-clone", ["B-clone", "B"])
+
+    def test_cloned_group(self):
+        self.assert_find_resources("E-clone", ["E-clone", "E"])
+
+    def test_mastered_primitive(self):
+        self.assert_find_resources("C-master", ["C-master", "C"])
+
+    def test_mastered_group(self):
+        self.assert_find_resources("F-master", ["F-master", "F"])
+
+    def test_bundle_empty(self):
+        self.assert_find_resources("G-bundle", ["G-bundle"])
+
+    def test_bundle_with_primitive(self):
+        self.assert_find_resources("H-bundle", ["H-bundle", "H"])
+
+
+class Enable(TestCase):
+    def assert_enabled(self, pre, post):
+        resource = etree.fromstring(pre)
+        common.enable(resource)
+        assert_xml_equal(post, etree_to_str(resource))
+
+    def test_disabled(self):
+        self.assert_enabled(
+            """
+
+
+
+
+
+            """,
+            """
+
+
+            """
+        )
+
+    def test_enabled(self):
+        self.assert_enabled(
+            """
+
+
+            """,
+            """
+
+
+            """
+        )
+
+    def test_only_first_meta(self):
+        # this captures the current behavior
+        # once pcs supports more instance and meta attributes for each resource,
+        # this test should be reconsidered
+        self.assert_enabled(
+            """
+
+
+
+
+
+
+
+
+            """,
+            """
+
+
+
+
+
+            """
+        )
+
+
+class Disable(TestCase):
+    def assert_disabled(self, pre, post):
+        resource = etree.fromstring(pre)
+        common.disable(resource)
+        assert_xml_equal(post, etree_to_str(resource))
+
+    def test_disabled(self):
+        xml = """
+
+
+
+
+
+            """
+        self.assert_disabled(xml, xml)
+
+    def test_enabled(self):
+        self.assert_disabled(
+            """
+
+
+            """,
+            """
+
+
+
+
+
+            """
+        )
+
+    def test_only_first_meta(self):
+        # this captures the current behavior
+        # once pcs supports more instance and meta attributes for each resource,
+        # this test should be reconsidered
+        self.assert_disabled(
+            """
+
+
+
+
+
+
+            """,
+            """
+
+
+
+
+
+
+
+            """
+        )
+
+
+class FindResourcesToManage(TestCase):
+    def assert_find_resources(self, input_resource_id, output_resource_ids):
+        self.assertEqual(
+            output_resource_ids,
+            [
+                element.get("id", "")
+                for element in
+                common.find_resources_to_manage(
+                    fixture_cib.find(
+                        './/*[@id="{0}"]'.format(input_resource_id)
+                    )
+                )
+            ]
+        )
+
+    def test_primitive(self):
+        self.assert_find_resources("A", ["A"])
+
+    def test_primitive_in_clone(self):
+        self.assert_find_resources("B", ["B", "B-clone"])
+
+    def test_primitive_in_master(self):
+        self.assert_find_resources("C", ["C", "C-master"])
+
+    def test_primitive_in_group(self):
+        self.assert_find_resources("D1", ["D1", "D"])
+        self.assert_find_resources("D2", ["D2", "D"])
+        self.assert_find_resources("E1", ["E1", "E-clone", "E"])
+        self.assert_find_resources("E2", ["E2", "E-clone", "E"])
+        self.assert_find_resources("F1", ["F1", "F-master", "F"])
+        self.assert_find_resources("F2", ["F2", "F-master", "F"])
+
+    def test_primitive_in_bundle(self):
+        self.assert_find_resources("H", ["H", "H-bundle"])
+
+    def test_group(self):
+        self.assert_find_resources("D", ["D", "D1", "D2"])
+
+    def test_group_in_clone(self):
+        self.assert_find_resources("E", ["E", "E-clone", "E1", "E2"])
+
+    def test_group_in_master(self):
+        self.assert_find_resources("F", ["F", "F-master", "F1", "F2"])
+
+    def test_cloned_primitive(self):
+        self.assert_find_resources("B-clone", ["B-clone", "B"])
+
+    def test_cloned_group(self):
+        self.assert_find_resources("E-clone", ["E-clone", "E", "E1", "E2"])
+
+    def test_mastered_primitive(self):
+        self.assert_find_resources("C-master", ["C-master", "C"])
+
+    def test_mastered_group(self):
+        self.assert_find_resources("F-master", ["F-master", "F", "F1", "F2"])
+
+    def test_bundle_empty(self):
+        self.assert_find_resources("G-bundle", ["G-bundle"])
+
+    def test_bundle_with_primitive(self):
+        self.assert_find_resources("H-bundle", ["H-bundle", "H"])
+
+
["E1"]) + self.assert_find_resources("E2", ["E2"]) + self.assert_find_resources("F1", ["F1"]) + self.assert_find_resources("F2", ["F2"]) + + def test_primitive_in_bundle(self): + self.assert_find_resources("H", ["H"]) + + def test_group(self): + self.assert_find_resources("D", ["D1", "D2"]) + + def test_group_in_clone(self): + self.assert_find_resources("E", ["E1", "E2"]) + + def test_group_in_master(self): + self.assert_find_resources("F", ["F1", "F2"]) + + def test_cloned_primitive(self): + self.assert_find_resources("B-clone", ["B"]) + + def test_cloned_group(self): + self.assert_find_resources("E-clone", ["E1", "E2"]) + + def test_mastered_primitive(self): + self.assert_find_resources("C-master", ["C"]) + + def test_mastered_group(self): + self.assert_find_resources("F-master", ["F1", "F2"]) + + def test_bundle_empty(self): + self.assert_find_resources("G-bundle", ["G-bundle"]) + + def test_bundle_with_primitive(self): + self.assert_find_resources("H-bundle", ["H-bundle", "H"]) + + +class Manage(TestCase): + def assert_managed(self, pre, post): + resource = etree.fromstring(pre) + common.manage(resource) + assert_xml_equal(post, etree_to_str(resource)) + + def test_unmanaged(self): + self.assert_managed( + """ + + + + + + """, + """ + + + """ + ) + + def test_managed(self): + self.assert_managed( + """ + + + """, + """ + + + """ + ) + + def test_only_first_meta(self): + # this captures the current behavior + # once pcs supports more instance and meta attributes for each resource, + # this test should be reconsidered + self.assert_managed( + """ + + + + + + + + + """, + """ + + + + + + """ + ) + + +class Unmanage(TestCase): + def assert_unmanaged(self, pre, post): + resource = etree.fromstring(pre) + common.unmanage(resource) + assert_xml_equal(post, etree_to_str(resource)) + + def test_unmanaged(self): + xml = """ + + + + + + """ + self.assert_unmanaged(xml, xml) + + def test_managed(self): + self.assert_unmanaged( + """ + + + """, + """ + + + + + + """ + ) + + def test_only_first_meta(self): + # this captures the current behavior + # once pcs supports more instance and meta attributes for each resource, + # this test should be reconsidered + self.assert_unmanaged( + """ + + + + + + + """, + """ + + + + + + + + """ + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_group.py pcs-0.9.159/pcs/lib/cib/test/test_resource_group.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_group.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_resource_group.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,163 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree + +from pcs.common import report_codes +from pcs.lib.cib.resource import group +from pcs.lib.errors import ReportItemSeverity as severities +from pcs.test.tools.assertions import assert_raise_library_error, assert_xml_equal +from pcs.test.tools.pcs_unittest import TestCase, mock + + +class IsGroup(TestCase): + def test_is_group(self): + self.assertTrue(group.is_group(etree.fromstring(""))) + self.assertFalse(group.is_group(etree.fromstring(""))) + self.assertFalse(group.is_group(etree.fromstring(""))) + + +@mock.patch("pcs.lib.cib.resource.group.find_element_by_tag_and_id") +class ProvideGroup(TestCase): + def setUp(self): + self.cib = etree.fromstring( + '' + ) + self.group_element = self.cib.find('.//group') + self.resources_section = self.cib.find('.//resources') + + def test_search_in_whole_tree(self, 
find_element_by_tag_and_id): + def find_group(*args, **kwargs): + return self.group_element + + find_element_by_tag_and_id.side_effect = find_group + + self.assertEqual( + self.group_element, + group.provide_group(self.resources_section, "g") + ) + + def test_create_group_when_not_exists(self, find_element_by_tag_and_id): + find_element_by_tag_and_id.return_value = None + group_element = group.provide_group(self.resources_section, "g2") + self.assertEqual('group', group_element.tag) + self.assertEqual('g2', group_element.attrib["id"]) + +class PlaceResource(TestCase): + def setUp(self): + self.group_element = etree.fromstring(""" + + + + + """) + self.primitive_element = etree.Element("primitive", {"id": "c"}) + + def assert_final_order( + self, id_list=None, adjacent_resource_id=None, put_after_adjacent=False + ): + group.place_resource( + self.group_element, + self.primitive_element, + adjacent_resource_id, + put_after_adjacent + ) + assert_xml_equal( + etree.tostring(self.group_element).decode(), + """ + + + + + + """.format(*id_list) + ) + + def test_append_at_the_end_when_adjacent_is_not_specified(self): + self.assert_final_order(["a", "b", "c"]) + + def test_insert_before_adjacent(self): + self.assert_final_order(["c", "a", "b"], "a") + + def test_insert_after_adjacent(self): + self.assert_final_order(["a", "c", "b"], "a", put_after_adjacent=True) + + def test_insert_after_adjacent_which_is_last(self): + self.assert_final_order(["a", "b", "c"], "b", put_after_adjacent=True) + + def test_refuse_to_put_next_to_the_same_resource_id(self): + assert_raise_library_error( + lambda: group.place_resource( + self.group_element, + self.primitive_element, + adjacent_resource_id="c", + ), + ( + severities.ERROR, + report_codes.RESOURCE_CANNOT_BE_NEXT_TO_ITSELF_IN_GROUP, + { + "group_id": "g", + "resource_id": "c", + }, + ), + ) + + def test_raises_when_adjacent_resource_not_in_group(self): + assert_raise_library_error( + lambda: group.place_resource( + self.group_element, + self.primitive_element, + adjacent_resource_id="r", + ), + ( + severities.ERROR, + report_codes.ID_NOT_FOUND, + { + "id": "r", + "id_description": "resource", + "context_type": "group", + "context_id": "g", + }, + ), + ) + + +class GetInnerResource(TestCase): + def assert_inner_resource(self, resource_id, xml): + self.assertEqual( + resource_id, + [ + element.attrib.get("id", "") + for element in group.get_inner_resources(etree.fromstring(xml)) + ] + ) + + def test_one(self): + self.assert_inner_resource( + ["A"], + """ + + + + + + """ + ) + + def test_more(self): + self.assert_inner_resource( + ["A", "C", "B"], + """ + + + + + + + + """ + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_guest_node.py pcs-0.9.159/pcs/lib/cib/test/test_resource_guest_node.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_guest_node.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_resource_guest_node.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,444 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree + +from pcs.common import report_codes +from pcs.lib.cib.resource import guest_node +from pcs.lib.errors import ReportItemSeverity as severities +from pcs.test.tools.assertions import( + assert_xml_equal, + assert_report_item_list_equal, +) +from pcs.test.tools.misc import create_setup_patch_mixin +from pcs.test.tools.pcs_unittest import TestCase +from pcs.lib.node import NodeAddresses + + +SetupPatchMixin = 
+SetupPatchMixin = create_setup_patch_mixin(guest_node)
+
+class ValidateHostConflicts(TestCase):
+    def validate(self, node_name, options):
+        tree = etree.fromstring("""
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+        """)
+        nodes = [
+            NodeAddresses("RING0", "RING1", name="R1"),
+            NodeAddresses("REMOTE_CONFLICT", name="B"),
+            NodeAddresses("GUEST_CONFLICT", name="GUEST_CONFLICT"),
+            NodeAddresses("GUEST_ADDR_CONFLICT", name="some"),
+        ]
+        return guest_node.validate_conflicts(tree, nodes, node_name, options)
+
+    def assert_already_exists_error(
+        self, conflict_name, node_name, options=None
+    ):
+        assert_report_item_list_equal(
+            self.validate(node_name, options if options else {}),
+            [
+                (
+                    severities.ERROR,
+                    report_codes.ID_ALREADY_EXISTS,
+                    {
+                        "id": conflict_name,
+                    },
+                    None
+                ),
+            ]
+        )
+
+
+    def test_report_conflict_with_id(self):
+        self.assert_already_exists_error("CONFLICT", "CONFLICT")
+
+    def test_report_conflict_guest_node(self):
+        self.assert_already_exists_error("GUEST_CONFLICT", "GUEST_CONFLICT")
+
+    def test_report_conflict_guest_addr(self):
+        self.assert_already_exists_error(
+            "GUEST_ADDR_CONFLICT",
+            "GUEST_ADDR_CONFLICT",
+        )
+
+    def test_report_conflict_guest_addr_by_addr(self):
+        self.assert_already_exists_error(
+            "GUEST_ADDR_CONFLICT",
+            "GUEST_ADDR_CONFLICT",
+        )
+
+    def test_no_conflict_guest_node_when_addr_is_different(self):
+        self.assertEqual([], self.validate("GUEST_ADDR_CONFLICT", {
+            "remote-addr": "different",
+        }))
+
+    def test_report_conflict_remote_node(self):
+        self.assert_already_exists_error("REMOTE_CONFLICT", "REMOTE_CONFLICT")
+
+    def test_no_conflict_remote_node_when_addr_is_different(self):
+        self.assertEqual([], self.validate("REMOTE_CONFLICT", {
+            "remote-addr": "different",
+        }))
+
+    def test_report_conflict_remote_node_by_addr(self):
+        self.assert_already_exists_error("REMOTE_CONFLICT", "different", {
+            "remote-addr": "REMOTE_CONFLICT",
+        })
+
+class ValidateOptions(TestCase):
+    def validate(self, options, name="some_name"):
+        return guest_node.validate_set_as_guest(
+            etree.fromstring(''),
+            [NodeAddresses(
+                "EXISTING-HOST-RING0",
+                "EXISTING-HOST-RING0",
+                name="EXISTING-HOST-NAME"
+            )],
+            name,
+            options
+        )
+
+    def test_no_report_on_valid(self):
+        self.assertEqual(
+            [],
+            self.validate({}, "node1")
+        )
+
+    def test_report_invalid_option(self):
+        assert_report_item_list_equal(
+            self.validate({"invalid": "invalid"}, "node1"),
+            [
+                (
+                    severities.ERROR,
+                    report_codes.INVALID_OPTION,
+                    {
+                        "option_type": "guest",
+                        "option_names": ["invalid"],
+                        "allowed": sorted(guest_node.GUEST_OPTIONS),
+                    },
+                    None
+                ),
+            ]
+        )
+
+    def test_report_invalid_interval(self):
+        assert_report_item_list_equal(
+            self.validate({"remote-connect-timeout": "invalid"}, "node1"),
+            [
+                (
+                    severities.ERROR,
+                    report_codes.INVALID_OPTION_VALUE,
+                    {
+                        "option_name": "remote-connect-timeout",
+                        "option_value": "invalid",
+                    },
+                    None
+                ),
+            ]
+        )
+
+    def test_report_invalid_node_name(self):
+        assert_report_item_list_equal(
+            self.validate({}, "EXISTING-HOST-NAME"),
+            [
+                (
+                    severities.ERROR,
+                    report_codes.ID_ALREADY_EXISTS,
+                    {
+                        "id": "EXISTING-HOST-NAME",
+                    },
+                    None
+                ),
+            ]
+        )
+
+
+class ValidateIsNotGuest(TestCase):
+    # guest_node.is_guest_node is tested here as well
+    def test_no_report_on_non_guest(self):
+        self.assertEqual(
+            [],
+            guest_node.validate_is_not_guest(etree.fromstring(""))
+        )
+
+    def test_report_when_is_guest(self):
+        assert_report_item_list_equal(
+            guest_node.validate_is_not_guest(etree.fromstring("""
+
+
+
+
+
+            """)),
+            [
+                (
+                    severities.ERROR,
+                    report_codes.RESOURCE_IS_GUEST_NODE_ALREADY,
+                    {
+                        "resource_id": "resource_id",
+                    },
+                    None
+                ),
+            ]
+        )
+
+class SetAsGuest(TestCase):
+    def test_set_guest_meta_correctly(self):
+        resource_element = etree.fromstring('')
+        guest_node.set_as_guest(resource_element, "node1", connect_timeout="10")
+        assert_xml_equal(
+            etree.tostring(resource_element).decode(),
+            """
+
+
+
+
+
+
+
+            """
+        )
+
+class UnsetGuest(TestCase):
+    def test_unset_all_guest_attributes(self):
+        resource_element = etree.fromstring("""
+
+
+
+
+
+
+
+
+
+        """)
+        guest_node.unset_guest(resource_element)
+        assert_xml_equal(
+            etree.tostring(resource_element).decode(),
+            """
+
+
+
+
+
+            """
+        )
+
+    def test_unset_all_guest_attributes_and_empty_meta_tag(self):
+        resource_element = etree.fromstring("""
+
+
+
+
+
+
+
+
+        """)
+        guest_node.unset_guest(resource_element)
+        assert_xml_equal(
+            etree.tostring(resource_element).decode(),
+            ''
+        )
+
+
+class FindNodeList(TestCase, SetupPatchMixin):
+    def assert_find_meta_attributes(self, xml, meta_attributes_xml_list):
+        get_node = self.setup_patch("get_node", return_value=None)
+
+        self.assertEquals(
+            [None] * len(meta_attributes_xml_list),
+            guest_node.find_node_list(etree.fromstring(xml))
+        )
+
+        for i, call in enumerate(get_node.mock_calls):
+            assert_xml_equal(
+                meta_attributes_xml_list[i],
+                etree.tostring(call[1][0]).decode()
+            )
+
+    def test_get_no_nodes_when_no_primitives(self):
+        self.assert_find_meta_attributes("", [])
+
+    def test_get_no_nodes_when_no_meta_remote_node(self):
+        self.assert_find_meta_attributes(
+            """
+
+
+
+
+
+
+
+            """,
+            []
+        )
+
+    def test_get_multiple_nodes(self):
+        self.assert_find_meta_attributes(
+            """
+
+
+
+
+
+
+
+
+
+
+
+
+
+            """,
+            [
+                """
+
+
+
+
+                """,
+                """
+
+
+
+                """,
+            ]
+        )
+
+class GetNode(TestCase):
+    def assert_node(self, xml, expected_node):
+        node = guest_node.get_node(etree.fromstring(xml))
+        self.assertEquals(expected_node, (node.ring0, node.name))
+
+    def test_return_none_when_is_not_guest_node(self):
+        self.assertIsNone(guest_node.get_node(etree.fromstring(
+            """
+
+
+
+            """
+        )))
+
+    def test_return_same_host_and_name_when_remote_node_only(self):
+        self.assert_node(
+            """
+
+
+
+            """,
+            ("G1", "G1")
+        )
+
+    def test_return_different_host_and_name_when_remote_addr_there(self):
+        self.assert_node(
+            """
+
+
+
+
+            """,
+            ("G1addr", "G1")
+        )
+
+class GetHost(TestCase):
+    def assert_find_host(self, host, xml):
+        self.assertEqual(host, guest_node.get_host(etree.fromstring(xml)))
+
+    def test_return_host_from_remote_addr(self):
+        self.assert_find_host("HOST", """
+
+
+
+
+
+
+        """)
+
+    def test_return_host_from_remote_node(self):
+        self.assert_find_host("HOST", """
+
+
+
+
+
+        """)
+
+    def test_return_none(self):
+        self.assert_find_host(None, """
+
+
+
+
+
+        """)
+
+class FindNodeResources(TestCase):
+    def assert_return_resources(self, identifier):
+        resources_section = etree.fromstring("""
+
+
+
+
+
+
+
+
+        """)
+        self.assertEquals(
+            "RESOURCE_ID",
+            guest_node.find_node_resources(resources_section, identifier)[0]
+                .attrib["id"]
+        )
+
+    def test_return_resources_by_resource_id(self):
+        self.assert_return_resources("RESOURCE_ID")
+
+    def test_return_resources_by_node_name(self):
+        self.assert_return_resources("NODE_NAME")
+
+    def test_return_resources_by_node_host(self):
+        self.assert_return_resources("NODE_HOST")
+
+    def test_no_result_when_no_guest_nodes(self):
+        resources_section = etree.fromstring(
+            ''
+        )
+        self.assertEquals([], guest_node.find_node_resources(
+            resources_section,
+            "RESOURCE_ID"
+        ))
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_operations.py pcs-0.9.159/pcs/lib/cib/test/test_resource_operations.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_operations.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/test/test_resource_operations.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,414 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from functools import partial
+from lxml import etree
+
+from pcs.common import report_codes
+from pcs.lib.cib.resource import operations
+from pcs.lib.errors import ReportItemSeverity as severities
+from pcs.lib.validate import ValuePair
+from pcs.test.tools.assertions import assert_report_item_list_equal
+from pcs.test.tools.custom_mock import MockLibraryReportProcessor
+from pcs.test.tools.misc import create_patcher
+from pcs.test.tools.pcs_unittest import TestCase, mock
+
+
+patch_operations = create_patcher("pcs.lib.cib.resource.operations")
+
+@patch_operations("get_remaining_defaults")
+@patch_operations("complete_all_intervals")
+@patch_operations("validate_different_intervals")
+@patch_operations("validate_operation_list")
+@patch_operations("normalized_to_operations")
+@patch_operations("operations_to_normalized")
+class Prepare(TestCase):
+    def test_prepare(
+        self, operations_to_normalized, normalized_to_operations,
+        validate_operation_list, validate_different_intervals,
+        complete_all_intervals, get_remaining_defaults
+    ):
+        validate_operation_list.return_value = ["options_report"]
+        validate_different_intervals.return_value = [
+            "different_interval_report"
+        ]
+        operations_to_normalized.return_value = [
+            {"name": ValuePair("Start", "start")},
+            {"name": ValuePair("Monitor", "monitor")},
+        ]
+        normalized_to_operations.return_value = [
+            {"name": "start"},
+            {"name": "monitor"},
+        ]
+
+        report_processor = mock.MagicMock()
+        raw_operation_list = [
+            {"name": "Start"},
+            {"name": "Monitor"},
+        ]
+        default_operation_list = [
+            {"name": "stop"},
+        ]
+        allowed_operation_name_list = ["start", "stop", "monitor"]
+        allow_invalid = True
+
+        operations.prepare(
+            report_processor,
+            raw_operation_list,
+            default_operation_list,
+            allowed_operation_name_list,
+            allow_invalid,
+        )
+
+        operations_to_normalized.assert_called_once_with(raw_operation_list)
+        normalized_to_operations.assert_called_once_with(
+            operations_to_normalized.return_value
+        )
+        validate_operation_list.assert_called_once_with(
+            operations_to_normalized.return_value,
+            allowed_operation_name_list,
+            allow_invalid
+        )
+        validate_different_intervals.assert_called_once_with(
+            normalized_to_operations.return_value
+        )
+        complete_all_intervals.assert_called_once_with(
+            normalized_to_operations.return_value
+        )
+        get_remaining_defaults.assert_called_once_with(
+            report_processor,
+            normalized_to_operations.return_value,
+            default_operation_list
+        )
+        report_processor.process_list.assert_called_once_with([
+            "options_report",
+            "different_interval_report",
+        ])
+
+
+class ValidateDifferentIntervals(TestCase):
+    def test_return_empty_reports_on_empty_list(self):
+        operations.validate_different_intervals([])
+
+    def test_return_empty_reports_on_operations_without_duplication(self):
+        operations.validate_different_intervals([
+            {"name": "monitor", "interval": "10s"},
+            {"name": "monitor", "interval": "5s"},
+            {"name": "start", "interval": "5s"},
+        ])
+
+    def test_return_report_on_duplicated_intervals(self):
+        assert_report_item_list_equal(
+            operations.validate_different_intervals([
+                {"name": "monitor", "interval": "3600s"},
+                {"name": "monitor", "interval": "60m"},
+                {"name": "monitor", "interval": "1h"},
+                {"name": "monitor", "interval": "60s"},
+                {"name": "monitor", "interval": "1m"},
+                {"name": "monitor", "interval": "5s"},
+            ]),
+            [(
+                severities.ERROR,
+                report_codes.RESOURCE_OPERATION_INTERVAL_DUPLICATION,
+                {
+                    "duplications": {
+                        "monitor": [
+                            ["3600s", "60m", "1h"],
+                            ["60s", "1m"],
+                        ],
+                    },
+                },
+            )]
+        )
+
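+# NOTE: judging by the test data above and below, intervals are compared
+# after normalization to seconds ("60s" == "1m", "3600s" == "60m" == "1h"),
+# and make_unique_intervals keeps the first occurrence of a duplicated
+# interval while bumping each later duplicate to the nearest unused number
+# of seconds (e.g. "1m" -> "61", a repeated "5s" -> "7" when "6s" is taken).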
"monitor", "interval": "3600s"}, + {"name": "monitor", "interval": "60m"}, + {"name": "monitor", "interval": "1h"}, + {"name": "monitor", "interval": "60s"}, + {"name": "monitor", "interval": "1m"}, + {"name": "monitor", "interval": "5s"}, + ]), + [( + severities.ERROR, + report_codes.RESOURCE_OPERATION_INTERVAL_DUPLICATION, + { + "duplications": { + "monitor": [ + ["3600s", "60m", "1h"], + ["60s", "1m"], + ], + }, + }, + )] + ) + +class MakeUniqueIntervals(TestCase): + def setUp(self): + self.report_processor = MockLibraryReportProcessor() + self.run = partial( + operations.make_unique_intervals, + self.report_processor + ) + + def test_return_copy_input_when_no_interval_duplication(self): + operation_list = [ + {"name": "monitor", "interval": "10s"}, + {"name": "monitor", "interval": "5s"}, + {"name": "monitor", }, + {"name": "monitor", "interval": ""}, + {"name": "start", "interval": "5s"}, + ] + self.assertEqual(operation_list, self.run(operation_list)) + + def test_adopt_duplicit_values(self): + self.assertEqual( + self.run([ + {"name": "monitor", "interval": "60s"}, + {"name": "monitor", "interval": "1m"}, + {"name": "monitor", "interval": "5s"}, + {"name": "monitor", "interval": "6s"}, + {"name": "monitor", "interval": "5s"}, + {"name": "start", "interval": "5s"}, + ]), + [ + {"name": "monitor", "interval": "60s"}, + {"name": "monitor", "interval": "61"}, + {"name": "monitor", "interval": "5s"}, + {"name": "monitor", "interval": "6s"}, + {"name": "monitor", "interval": "7"}, + {"name": "start", "interval": "5s"}, + ] + ) + + assert_report_item_list_equal(self.report_processor.report_item_list, [ + ( + severities.WARNING, + report_codes.RESOURCE_OPERATION_INTERVAL_ADAPTED, + { + "operation_name": "monitor", + "original_interval": "1m", + "adapted_interval": "61", + }, + ), + ( + severities.WARNING, + report_codes.RESOURCE_OPERATION_INTERVAL_ADAPTED, + { + "operation_name": "monitor", + "original_interval": "5s", + "adapted_interval": "7", + }, + ), + ]) + + def test_keep_duplicit_values_when_are_not_valid_interval(self): + self.assertEqual( + self.run([ + {"name": "monitor", "interval": "some"}, + {"name": "monitor", "interval": "some"}, + ]), + [ + {"name": "monitor", "interval": "some"}, + {"name": "monitor", "interval": "some"}, + ] + ) + + +class Normalize(TestCase): + def test_return_operation_with_the_same_values(self): + operation = { + "name": "monitor", + "role": "Master", + "timeout": "10", + } + + self.assertEqual(operation, dict([ + (key, operations.normalize(key, value)) + for key, value in operation.items() + ])) + + def test_return_operation_with_normalized_values(self): + self.assertEqual( + { + "name": "monitor", + "role": "Master", + "timeout": "10", + "requires": "nothing", + "on-fail": "ignore", + "record-pending": "true", + "enabled": "1", + }, + dict([(key, operations.normalize(key, value)) for key, value in { + "name": "monitor", + "role": "master", + "timeout": "10", + "requires": "Nothing", + "on-fail": "Ignore", + "record-pending": "True", + "enabled": "1", + }.items()]) + ) + +class ValidateOperation(TestCase): + def assert_operation_produces_report(self, operation, report_list): + assert_report_item_list_equal( + operations.validate_operation_list( + [operation], + ["monitor"], + ), + report_list + ) + + def test_return_empty_report_on_valid_operation(self): + self.assert_operation_produces_report( + { + "name": "monitor", + "role": "Master" + }, + [] + ) + + def test_validate_all_individual_options(self): + self.assertEqual( + ["REQUIRES REPORT", "ROLE 
REPORT"], + sorted(operations.validate_operation({"name": "monitor"}, [ + mock.Mock(return_value=["ROLE REPORT"]), + mock.Mock(return_value=["REQUIRES REPORT"]), + ])) + ) + + def test_return_error_when_unknown_operation_attribute(self): + self.assert_operation_produces_report( + { + "name": "monitor", + "unknown": "invalid", + }, + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION, + { + "option_names": ["unknown"], + "option_type": "resource operation", + "allowed": sorted(operations.ATTRIBUTES), + }, + None + ), + ], + ) + + def test_return_errror_when_missing_key_name(self): + self.assert_operation_produces_report( + { + "role": "Master" + }, + [ + ( + severities.ERROR, + report_codes.REQUIRED_OPTION_IS_MISSING, + { + "option_names": ["name"], + "option_type": "resource operation", + }, + None + ), + ], + ) + + def test_return_error_when_both_interval_origin_and_start_delay(self): + self.assert_operation_produces_report( + { + "name": "monitor", + "interval-origin": "a", + "start-delay": "b", + }, + [ + ( + severities.ERROR, + report_codes.MUTUALLY_EXCLUSIVE_OPTIONS, + { + "option_names": ["interval-origin", "start-delay"], + "option_type": "resource operation", + }, + None + ), + ], + ) + + def test_return_error_on_invalid_id(self): + self.assert_operation_produces_report( + { + "name": "monitor", + "id": "a#b", + }, + [ + ( + severities.ERROR, + report_codes.INVALID_ID, + { + "id": "a#b", + "id_description": "operation id", + "invalid_character": "#", + "is_first_char": False, + }, + None + ), + ], + ) + + +class GetRemainingDefaults(TestCase): + @mock.patch("pcs.lib.cib.resource.operations.make_unique_intervals") + def test_returns_remining_operations(self, make_unique_intervals): + make_unique_intervals.side_effect = ( + lambda report_processor, operations: operations + ) + self.assertEqual( + operations.get_remaining_defaults( + report_processor=None, + operation_list =[{"name": "monitor"}], + default_operation_list=[{"name": "monitor"}, {"name": "start"}] + ), + [{"name": "start"}] + ) + + +class GetResourceOperations(TestCase): + resource_el = etree.fromstring(""" + + + + + + + + + """) + resource_noop_el = etree.fromstring(""" + + + """) + + def assert_op_list(self, op_list, expected_ids): + self.assertEqual( + [op.attrib.get("id") for op in op_list], + expected_ids + ) + + def test_all_operations(self): + self.assert_op_list( + operations.get_resource_operations(self.resource_el), + ["dummy-start", "dummy-stop", "dummy-monitor-m", "dummy-monitor-s"] + ) + + def test_filter_operations(self): + self.assert_op_list( + operations.get_resource_operations(self.resource_el, ["start"]), + ["dummy-start"] + ) + + def test_filter_more_operations(self): + self.assert_op_list( + operations.get_resource_operations( + self.resource_el, + ["monitor", "stop"] + ), + ["dummy-stop", "dummy-monitor-m", "dummy-monitor-s"] + ) + + def test_filter_none(self): + self.assert_op_list( + operations.get_resource_operations(self.resource_el, ["promote"]), + [] + ) + + def test_no_operations(self): + self.assert_op_list( + operations.get_resource_operations(self.resource_noop_el), + [] + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_primitive.py pcs-0.9.159/pcs/lib/cib/test/test_resource_primitive.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_primitive.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_resource_primitive.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,96 @@ +from __future__ import ( + absolute_import, + division, + print_function, + 
+    unicode_literals,
+)
+
+from functools import partial
+
+from lxml import etree
+
+from pcs.lib.cib.resource import primitive
+from pcs.test.tools.pcs_unittest import TestCase, mock
+
+@mock.patch("pcs.lib.cib.resource.primitive.append_new_instance_attributes")
+@mock.patch("pcs.lib.cib.resource.primitive.append_new_meta_attributes")
+@mock.patch("pcs.lib.cib.resource.primitive.create_operations")
+class AppendNew(TestCase):
+    def setUp(self):
+        self.resources_section = etree.fromstring("")
+
+        self.instance_attributes = {"a": "b"}
+        self.meta_attributes = {"c": "d"}
+        self.operation_list = [{"name": "monitoring"}]
+
+        self.run = partial(
+            primitive.append_new,
+            self.resources_section,
+            instance_attributes=self.instance_attributes,
+            meta_attributes=self.meta_attributes,
+            operation_list=self.operation_list,
+        )
+
+    def check_mocks(
+        self,
+        primitive_element,
+        create_operations,
+        append_new_meta_attributes,
+        append_new_instance_attributes,
+    ):
+        create_operations.assert_called_once_with(
+            primitive_element,
+            self.operation_list
+        )
+        append_new_meta_attributes.assert_called_once_with(
+            primitive_element,
+            self.meta_attributes
+        )
+        append_new_instance_attributes.assert_called_once_with(
+            primitive_element,
+            self.instance_attributes
+        )
+
+    def test_append_without_provider(
+        self,
+        create_operations,
+        append_new_meta_attributes,
+        append_new_instance_attributes,
+    ):
+        primitive_element = self.run("RESOURCE_ID", "OCF", None, "DUMMY")
+        self.assertEqual(
+            primitive_element,
+            self.resources_section.find(".//primitive")
+        )
+        self.assertEqual(primitive_element.attrib["class"], "OCF")
+        self.assertEqual(primitive_element.attrib["type"], "DUMMY")
+        self.assertNotIn("provider", primitive_element.attrib)
+
+        self.check_mocks(
+            primitive_element,
+            create_operations,
+            append_new_meta_attributes,
+            append_new_instance_attributes,
+        )
+
+    def test_append_with_provider(
+        self,
+        create_operations,
+        append_new_meta_attributes,
+        append_new_instance_attributes,
+    ):
+        primitive_element = self.run("RESOURCE_ID", "OCF", "HEARTBEAT", "DUMMY")
+        self.assertEqual(
+            primitive_element,
+            self.resources_section.find(".//primitive")
+        )
+        self.assertEqual(primitive_element.attrib["class"], "OCF")
+        self.assertEqual(primitive_element.attrib["type"], "DUMMY")
+        self.assertEqual(primitive_element.attrib["provider"], "HEARTBEAT")
+
+        self.check_mocks(
+            primitive_element,
+            create_operations,
+            append_new_meta_attributes,
+            append_new_instance_attributes,
+        )
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource.py pcs-0.9.159/pcs/lib/cib/test/test_resource.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource.py 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/test/test_resource.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,21 +0,0 @@
-from __future__ import (
-    absolute_import,
-    division,
-    print_function,
-    unicode_literals,
-)
-
-from pcs.test.tools.pcs_unittest import TestCase
-from lxml import etree
-from pcs.lib.cib.resource import find_by_id
-
-class FindByIdTest(TestCase):
-    def test_find_correct_tag(self):
-        tree = etree.XML("""
-
-
-
-            """)
-        element = find_by_id(tree, "A")
-        self.assertEqual(element.tag, "primitive")
diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_remote_node.py pcs-0.9.159/pcs/lib/cib/test/test_resource_remote_node.py
--- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_remote_node.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/cib/test/test_resource_remote_node.py 2017-06-30 15:33:01.000000000 +0000
+1,287 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree + +from pcs.common import report_codes +from pcs.lib.cib.resource import remote_node +from pcs.lib.errors import ReportItemSeverity as severities +from pcs.lib.node import NodeAddresses +from pcs.test.tools.assertions import assert_report_item_list_equal +from pcs.test.tools.pcs_unittest import TestCase, mock + + +class FindNodeList(TestCase): + def assert_nodes_equals(self, xml, expected_nodes): + self.assertEquals( + expected_nodes, + [ + (node.ring0, node.name) + for node in remote_node.find_node_list(etree.fromstring(xml)) + ] + ) + def test_find_multiple_nodes(self): + self.assert_nodes_equals( + """ + + + + + + + + + + + + + """, + [ + ("H1", "R1"), + ("H2", "R2"), + ] + ) + + def test_find_no_nodes(self): + self.assert_nodes_equals( + """ + + + + + + + + """, + [] + ) + + def test_find_nodes_without_server(self): + self.assert_nodes_equals( + """ + + + + + """, + [ + ("R1", "R1"), + ] + ) + + def test_find_nodes_with_empty_server(self): + #it does not work, but the node "R1" is visible as remote node in the + #status + self.assert_nodes_equals( + """ + + + + + + + + """, + [ + ("R1", "R1"), + ] + ) + + +class FindNodeResources(TestCase): + def assert_resources_equals(self, node_identifier, xml, resource_id_list): + self.assertEqual( + resource_id_list, + [ + resource_element.attrib["id"] + for resource_element in remote_node.find_node_resources( + etree.fromstring(xml), + node_identifier + ) + ] + ) + + def test_find_all_resources(self): + self.assert_resources_equals( + "HOST", + """ + + + + + + + + + + + """, + ["R1", "R2"] + ) + + def test_find_by_resource_id(self): + self.assert_resources_equals( + "HOST", + """ + + """, + ["HOST"] + ) + + def test_ignore_non_remote_primitives(self): + self.assert_resources_equals( + "HOST", + """ + + """, + [] + ) + + +class GetHost(TestCase): + def test_return_host_when_there(self): + self.assertEqual( + "HOST", + remote_node.get_host(etree.fromstring(""" + + + + + + """)) + ) + + def test_return_none_when_host_not_found(self): + self.assertIsNone(remote_node.get_host(etree.fromstring(""" + + """))) + + def test_return_none_when_primitive_is_without_agent(self): + case_list = [ + '', + '', + '', + ] + for case in case_list: + self.assertIsNone( + remote_node.get_host(etree.fromstring(case)), + "for '{0}' is not returned None".format(case) + ) + + def test_return_host_from_resource_id(self): + self.assertEqual( + "R", + remote_node.get_host(etree.fromstring(""" + + """)) + ) + +class Validate(TestCase): + def validate( + self, instance_attributes=None, node_name="NODE-NAME", host="node-host" + ): + nodes = [ + NodeAddresses("RING0", "RING1", name="R"), + ] + resource_agent = mock.MagicMock() + return remote_node.validate_create( + nodes, + resource_agent, + host, + node_name, + instance_attributes if instance_attributes else {}, + ) + + def test_report_conflict_node_name(self): + assert_report_item_list_equal( + self.validate( + node_name="R", + host="host", + ), + [ + ( + severities.ERROR, + report_codes.ID_ALREADY_EXISTS, + { + "id": "R", + }, + None + ) + ] + ) + + def test_report_conflict_node_host(self): + assert_report_item_list_equal( + self.validate( + host="RING0", + ), + [ + ( + severities.ERROR, + report_codes.ID_ALREADY_EXISTS, + { + "id": "RING0", + }, + None + ) + ] + ) + + def test_report_conflict_node_host_ring1(self): + assert_report_item_list_equal( + self.validate( + host="RING1", + ), + [ + ( 
+ severities.ERROR, + report_codes.ID_ALREADY_EXISTS, + { + "id": "RING1", + }, + None + ) + ] + ) + + def test_report_used_disallowed_server(self): + assert_report_item_list_equal( + self.validate( + instance_attributes={"server": "A"} + ), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION, + { + 'option_type': 'resource', + 'option_names': ['server'], + 'allowed': [], + }, + None + ) + ] + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_set.py pcs-0.9.159/pcs/lib/cib/test/test_resource_set.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_resource_set.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_resource_set.py 2017-06-30 15:33:01.000000000 +0000 @@ -41,7 +41,7 @@ severities.ERROR, report_codes.INVALID_OPTION, { - "option_name": "invalid_name", + "option_names": ["invalid_name"], "option_type": None, "allowed": ["action", "require-all", "role", "sequential"], }), diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/test/test_tools.py pcs-0.9.159/pcs/lib/cib/test/test_tools.py --- pcs-0.9.155+dfsg/pcs/lib/cib/test/test_tools.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/test/test_tools.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,558 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from functools import partial + +from lxml import etree + +from pcs.test.tools.pcs_unittest import TestCase +from pcs.test.tools.assertions import ( + assert_raise_library_error, + assert_report_item_list_equal, +) +from pcs.test.tools.misc import get_test_resource as rc +from pcs.test.tools.pcs_unittest import mock +from pcs.test.tools.xml import get_xml_manipulation_creator_from_file + +from pcs.common import report_codes +from pcs.lib.errors import ReportItemSeverity as severities + +from pcs.lib.cib import tools as lib + +class CibToolsTest(TestCase): + def setUp(self): + self.create_cib = get_xml_manipulation_creator_from_file( + rc("cib-empty.xml") + ) + self.cib = self.create_cib() + + def fixture_add_primitive_with_id(self, element_id): + self.cib.append_to_first_tag_name( + "resources", + '' + .format(element_id) + ) + + +class IdProviderTest(CibToolsTest): + def setUp(self): + super(IdProviderTest, self).setUp() + self.provider = lib.IdProvider(self.cib.tree) + + def fixture_report(self, id): + return ( + severities.ERROR, + report_codes.ID_ALREADY_EXISTS, + { + "id": id, + }, + None + ) + + +class IdProviderBook(IdProviderTest): + def test_nonexisting_id(self): + assert_report_item_list_equal( + self.provider.book_ids("myId"), + [] + ) + + def test_existing_id(self): + self.fixture_add_primitive_with_id("myId") + assert_report_item_list_equal( + self.provider.book_ids("myId"), + [ + self.fixture_report("myId"), + ] + ) + + def test_double_book(self): + assert_report_item_list_equal( + self.provider.book_ids("myId"), + [] + ) + assert_report_item_list_equal( + self.provider.book_ids("myId"), + [ + self.fixture_report("myId"), + ] + ) + + def test_more_ids(self): + assert_report_item_list_equal( + self.provider.book_ids("myId1", "myId2"), + [] + ) + assert_report_item_list_equal( + self.provider.book_ids("myId1", "myId2"), + [ + self.fixture_report("myId1"), + self.fixture_report("myId2"), + ] + ) + + def test_complex(self): + # test ids existing in the cib, double booked, available + # test reports not repeated + self.fixture_add_primitive_with_id("myId1") + self.fixture_add_primitive_with_id("myId2") + assert_report_item_list_equal( + self.provider.book_ids( + "myId1", "myId2", "myId3", 
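+                #"myId2" and "myId3" are passed more than once below; each
+                #conflicting id should be reported only once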
"myId2", "myId3", "myId4", "myId3" + ), + [ + self.fixture_report("myId1"), + self.fixture_report("myId2"), + self.fixture_report("myId3"), + ] + ) + + +class IdProviderAllocate(IdProviderTest): + def test_nonexisting_id(self): + self.assertEqual("myId", self.provider.allocate_id("myId")) + + def test_existing_id(self): + self.fixture_add_primitive_with_id("myId") + self.assertEqual("myId-1", self.provider.allocate_id("myId")) + + def test_allocate_books(self): + self.assertEqual("myId", self.provider.allocate_id("myId")) + self.assertEqual("myId-1", self.provider.allocate_id("myId")) + + def test_booked_ids(self): + self.fixture_add_primitive_with_id("myId") + assert_report_item_list_equal( + self.provider.book_ids("myId-1"), + [] + ) + self.assertEqual("myId-2", self.provider.allocate_id("myId")) + + +class DoesIdExistTest(CibToolsTest): + def test_existing_id(self): + self.fixture_add_primitive_with_id("myId") + self.assertTrue(lib.does_id_exist(self.cib.tree, "myId")) + + def test_nonexisting_id(self): + self.fixture_add_primitive_with_id("myId") + self.assertFalse(lib.does_id_exist(self.cib.tree, "otherId")) + self.assertFalse(lib.does_id_exist(self.cib.tree, "myid")) + self.assertFalse(lib.does_id_exist(self.cib.tree, " myId")) + self.assertFalse(lib.does_id_exist(self.cib.tree, "myId ")) + self.assertFalse(lib.does_id_exist(self.cib.tree, "my Id")) + + def test_ignore_status_section(self): + self.cib.append_to_first_tag_name("status", """ + + + + + + + + + + + """) + self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1")) + self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1a")) + self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1aa")) + self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1ab")) + self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1b")) + self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1ba")) + self.assertFalse(lib.does_id_exist(self.cib.tree, "status-1bb")) + + def test_ignore_acl_target(self): + self.cib.append_to_first_tag_name( + "configuration", + """ + + + + """ + ) + self.assertFalse(lib.does_id_exist(self.cib.tree, "target1")) + + def test_ignore_acl_role_references(self): + self.cib.append_to_first_tag_name( + "configuration", + """ + + + + + + + """ + ) + self.assertFalse(lib.does_id_exist(self.cib.tree, "role1")) + self.assertFalse(lib.does_id_exist(self.cib.tree, "role2")) + + def test_ignore_sections_directly_under_cib(self): + #this is side effect of current implementation but is not problem since + #id attribute is not allowed for elements directly under cib + tree = etree.fromstring('') + self.assertFalse(lib.does_id_exist(tree, "a")) + + def test_find_id_when_cib_is_not_root_element(self): + #for example we have only part of xml + tree = etree.fromstring('') + self.assertTrue(lib.does_id_exist(tree, "a")) + + def test_find_remote_node_pacemaker_internal_id(self): + tree = etree.fromstring(""" + + + + + + + + + + + + """) + self.assertTrue(lib.does_id_exist(tree, "a")) + +class FindUniqueIdTest(CibToolsTest): + def test_already_unique(self): + self.fixture_add_primitive_with_id("myId") + self.assertEqual("other", lib.find_unique_id(self.cib.tree, "other")) + + def test_add_suffix(self): + self.fixture_add_primitive_with_id("myId") + self.assertEqual("myId-1", lib.find_unique_id(self.cib.tree, "myId")) + + self.fixture_add_primitive_with_id("myId-1") + self.assertEqual("myId-2", lib.find_unique_id(self.cib.tree, "myId")) + + def test_suffix_not_needed(self): + self.fixture_add_primitive_with_id("myId-1") 
+ self.assertEqual("myId", lib.find_unique_id(self.cib.tree, "myId")) + + def test_add_first_available_suffix(self): + self.fixture_add_primitive_with_id("myId") + self.fixture_add_primitive_with_id("myId-1") + self.fixture_add_primitive_with_id("myId-3") + self.assertEqual("myId-2", lib.find_unique_id(self.cib.tree, "myId")) + + def test_reserved_ids(self): + self.fixture_add_primitive_with_id("myId-1") + self.assertEqual( + "myId-3", + lib.find_unique_id(self.cib.tree, "myId", ["myId", "myId-2"]) + ) + +class CreateNvsetIdTest(TestCase): + def test_create_plain_id_when_no_confilicting_id_there(self): + context = etree.fromstring('') + self.assertEqual( + "b-name", + lib.create_subelement_id(context.find(".//a"), "name") + ) + + def test_create_decorated_id_when_conflicting_id_there(self): + context = etree.fromstring( + '' + ) + self.assertEqual( + "b-name-1", + lib.create_subelement_id(context.find(".//a"), "name") + ) + +class GetConfigurationTest(CibToolsTest): + def test_success_if_exists(self): + self.assertEqual( + "configuration", + lib.get_configuration(self.cib.tree).tag + ) + + def test_raise_if_missing(self): + for conf in self.cib.tree.findall(".//configuration"): + conf.getparent().remove(conf) + assert_raise_library_error( + lambda: lib.get_configuration(self.cib.tree), + ( + severities.ERROR, + report_codes.CIB_CANNOT_FIND_MANDATORY_SECTION, + { + "section": "configuration", + } + ), + ) + +class GetConstraintsTest(CibToolsTest): + def test_success_if_exists(self): + self.assertEqual( + "constraints", + lib.get_constraints(self.cib.tree).tag + ) + + def test_raise_if_missing(self): + for section in self.cib.tree.findall(".//configuration/constraints"): + section.getparent().remove(section) + assert_raise_library_error( + lambda: lib.get_constraints(self.cib.tree), + ( + severities.ERROR, + report_codes.CIB_CANNOT_FIND_MANDATORY_SECTION, + { + "section": "configuration/constraints", + } + ), + ) + +class GetResourcesTest(CibToolsTest): + def test_success_if_exists(self): + self.assertEqual( + "resources", + lib.get_resources(self.cib.tree).tag + ) + + def test_raise_if_missing(self): + for section in self.cib.tree.findall(".//configuration/resources"): + section.getparent().remove(section) + assert_raise_library_error( + lambda: lib.get_resources(self.cib.tree), + ( + severities.ERROR, + report_codes.CIB_CANNOT_FIND_MANDATORY_SECTION, + { + "section": "configuration/resources", + } + ), + ) + +class GetNodes(CibToolsTest): + def test_success_if_exists(self): + self.assertEqual( + "nodes", + lib.get_nodes(self.cib.tree).tag + ) + + def test_raise_if_missing(self): + for section in self.cib.tree.findall(".//configuration/nodes"): + section.getparent().remove(section) + assert_raise_library_error( + lambda: lib.get_nodes(self.cib.tree), + ( + severities.ERROR, + report_codes.CIB_CANNOT_FIND_MANDATORY_SECTION, + { + "section": "configuration/nodes", + }, + None + ), + ) + +class GetAclsTest(CibToolsTest): + def test_success_if_exists(self): + self.cib.append_to_first_tag_name( + "configuration", + '' + ) + self.assertEqual( + "test_role", + lib.get_acls(self.cib.tree)[0].get("id") + ) + + def test_success_if_missing(self): + acls = lib.get_acls(self.cib.tree) + self.assertEqual("acls", acls.tag) + self.assertEqual("configuration", acls.getparent().tag) + +class GetFencingTopology(CibToolsTest): + def test_success_if_exists(self): + self.cib.append_to_first_tag_name( + "configuration", + "" + ) + self.assertEqual( + "fencing-topology", + 
lib.get_fencing_topology(self.cib.tree).tag + ) + + def test_success_if_missing(self): + ft = lib.get_fencing_topology(self.cib.tree) + self.assertEqual("fencing-topology", ft.tag) + self.assertEqual("configuration", ft.getparent().tag) + + +@mock.patch('pcs.lib.cib.tools.does_id_exist') +class ValidateIdDoesNotExistsTest(TestCase): + def test_success_when_id_does_not_exists(self, does_id_exists): + does_id_exists.return_value = False + lib.validate_id_does_not_exist("tree", "some-id") + does_id_exists.assert_called_once_with("tree", "some-id") + + def test_raises_whne_id_exists(self, does_id_exists): + does_id_exists.return_value = True + assert_raise_library_error( + lambda: lib.validate_id_does_not_exist("tree", "some-id"), + ( + severities.ERROR, + report_codes.ID_ALREADY_EXISTS, + {"id": "some-id"}, + ), + ) + does_id_exists.assert_called_once_with("tree", "some-id") + + +class GetPacemakerVersionByWhichCibWasValidatedTest(TestCase): + def test_missing_attribute(self): + assert_raise_library_error( + lambda: lib.get_pacemaker_version_by_which_cib_was_validated( + etree.XML("") + ), + ( + severities.ERROR, + report_codes.CIB_LOAD_ERROR_BAD_FORMAT, + {} + ) + ) + + def test_invalid_version(self): + assert_raise_library_error( + lambda: lib.get_pacemaker_version_by_which_cib_was_validated( + etree.XML('') + ), + ( + severities.ERROR, + report_codes.CIB_LOAD_ERROR_BAD_FORMAT, + {} + ) + ) + + def test_no_revision(self): + self.assertEqual( + (1, 2, 0), + lib.get_pacemaker_version_by_which_cib_was_validated( + etree.XML('') + ) + ) + + def test_with_revision(self): + self.assertEqual( + (1, 2, 3), + lib.get_pacemaker_version_by_which_cib_was_validated( + etree.XML('') + ) + ) + + +find_group = partial(lib.find_element_by_tag_and_id, "group") +class FindTagWithId(TestCase): + def test_returns_element_when_exists(self): + tree = etree.fromstring( + '' + ) + element = find_group(tree.find(".//resources"), "a") + self.assertEqual("group", element.tag) + self.assertEqual("a", element.attrib["id"]) + + def test_returns_element_when_exists_one_of_tags(self): + tree = etree.fromstring(""" + + + + + + + """) + element = lib.find_element_by_tag_and_id( + ["group", "primitive"], + tree.find(".//resources"), + "a" + ) + self.assertEqual("group", element.tag) + self.assertEqual("a", element.attrib["id"]) + + def test_raises_when_is_under_another_tag(self): + tree = etree.fromstring( + '' + ) + + assert_raise_library_error( + lambda: find_group(tree.find(".//resources"), "a"), + ( + severities.ERROR, + report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE, + { + "id": "a", + "expected_types": ["group"], + "current_type": "primitive", + }, + ), + ) + + def test_raises_when_is_under_another_context(self): + tree = etree.fromstring(""" + + + + + + + """) + assert_raise_library_error( + lambda: lib.find_element_by_tag_and_id( + "primitive", + tree.find('.//resources/group[@id="g2"]'), + "a" + ), + ( + severities.ERROR, + report_codes.OBJECT_WITH_ID_IN_UNEXPECTED_CONTEXT, + { + "type": "primitive", + "id": "a", + "expected_context_type": "group", + "expected_context_id": "g2", + }, + ), + ) + + def test_raises_when_id_does_not_exists(self): + tree = etree.fromstring('') + assert_raise_library_error( + lambda: find_group(tree.find('.//resources'), "a"), + ( + severities.ERROR, + report_codes.ID_NOT_FOUND, + { + "id": "a", + "id_description": "group", + "context_type": "resources", + "context_id": "", + }, + ), + ) + assert_raise_library_error( + lambda: find_group( + tree.find('.//resources'), + "a", + 
id_description="resource group" + ), + ( + severities.ERROR, + report_codes.ID_NOT_FOUND, + { + "id": "a", + "id_description": "resource group", + }, + ), + ) + + def test_returns_none_if_id_do_not_exists(self): + tree = etree.fromstring('') + self.assertIsNone(find_group( + tree.find('.//resources'), + "a", + none_if_id_unused=True + )) diff -Nru pcs-0.9.155+dfsg/pcs/lib/cib/tools.py pcs-0.9.159/pcs/lib/cib/tools.py --- pcs-0.9.155+dfsg/pcs/lib/cib/tools.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cib/tools.py 2017-06-30 15:33:01.000000000 +0000 @@ -5,16 +5,54 @@ unicode_literals, ) -import os import re -import tempfile -from lxml import etree -from pcs import settings -from pcs.common.tools import join_multilines +from pcs.common.tools import is_string from pcs.lib import reports from pcs.lib.errors import LibraryError -from pcs.lib.pacemaker_values import validate_id +from pcs.lib.pacemaker.values import validate_id +from pcs.lib.xml_tools import ( + get_root, + get_sub_element, +) + +class IdProvider(object): + """ + Book ids for future use in the CIB and generate new ids accordingly + """ + def __init__(self, cib_element): + """ + etree cib_element -- any element of the xml to being check against + """ + self._cib = get_root(cib_element) + self._booked_ids = set() + + def allocate_id(self, proposed_id): + """ + Generate a new unique id based on the proposal and keep track of it + string proposed_id -- requested id + """ + final_id = find_unique_id(self._cib, proposed_id, self._booked_ids) + self._booked_ids.add(final_id) + return final_id + + def book_ids(self, *id_list): + """ + Check if the ids are not already used and reserve them for future use + strings *id_list -- ids + """ + reported_ids = set() + report_list = [] + for id in id_list: + if id in reported_ids: + continue + if id in self._booked_ids or does_id_exist(self._cib, id): + report_list.append(reports.id_already_exists(id)) + reported_ids.add(id) + continue + self._booked_ids.add(id) + return report_list + def does_id_exist(tree, check_id): """ @@ -22,16 +60,40 @@ tree cib etree node check_id id to check """ - # ElementTree has getroot, Elemet has getroottree - root = tree.getroot() if hasattr(tree, "getroot") else tree.getroottree() + # do not search in /cib/status, it may contain references to previously # existing and deleted resources and thus preventing creating them again - existing = root.xpath( + + #pacemaker creates an implicit resource for the pacemaker_remote connection, + #which will be named the same as the value of the remote-node attribute of + #the explicit resource. 
So the value of nvpair named "remote-node" is + #considered to be id + existing = get_root(tree).xpath(""" ( - '(/cib/*[name()!="status"]|/*[name()!="cib"])' + - '//*[name()!="acl_target" and name()!="role" and @id="{0}"]' - ).format(check_id) - ) + /cib/*[name()!="status"] + | + /*[name()!="cib"] + ) + //*[ + ( + name()!="acl_target" + and + name()!="role" + and + @id="{0}" + ) or ( + name()="primitive" + and + meta_attributes[ + nvpair[ + @name="remote-node" + and + @value="{0}" + ] + ] + ) + ] + """.format(check_id)) return len(existing) > 0 def validate_id_does_not_exist(tree, id): @@ -41,26 +103,86 @@ if does_id_exist(tree, id): raise LibraryError(reports.id_already_exists(id)) -def find_unique_id(tree, check_id): +def find_unique_id(tree, check_id, reserved_ids=None): """ Returns check_id if it doesn't exist in the dom, otherwise it adds an integer to the end of the id and increments it until a unique id is found - tree cib etree node - check_id id to check + etree tree -- cib etree node + string check_id -- id to check + iterable reserved_ids -- ids to think about as already used """ + if not reserved_ids: + reserved_ids = set() counter = 1 temp_id = check_id - while does_id_exist(tree, temp_id): + while temp_id in reserved_ids or does_id_exist(tree, temp_id): temp_id = "{0}-{1}".format(check_id, counter) counter += 1 return temp_id -def create_subelement_id(context_element, suffix): - return find_unique_id( - context_element, - "{0}-{1}".format(context_element.get("id"), suffix) +def find_element_by_tag_and_id( + tag, context_element, element_id, none_if_id_unused=False, id_description="" +): + """ + Return element with given tag and element_id under context_element. When + element does not exists raises LibraryError or return None if specified in + none_if_id_unused. 
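+
+    An illustrative call (all names here are hypothetical):
+        find_element_by_tag_and_id("group", resources_el, "g1")
+    returns the group element with id "g1" under resources_el, raises
+    LibraryError when "g1" belongs to a different tag or context, and
+    returns None when "g1" is unused and none_if_id_unused is True.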
+ + etree.Element(Tree) context_element is part of tree for element scan + string|list tag is expected tag (or list of tags) of search element + string element_id is id of search element + bool none_if_id_unused if the element is not found then return None if True + or raise a LibraryError if False + string id_description optional description for id + """ + tag_list = [tag] if is_string(tag) else tag + element_list = context_element.xpath( + './/*[({0}) and @id="{1}"]'.format( + " or ".join(["self::{0}".format(one_tag) for one_tag in tag_list]), + element_id + ) + ) + + if element_list: + return element_list[0] + + element = get_root(context_element).find( + './/*[@id="{0}"]'.format(element_id) + ) + + if element is not None: + raise LibraryError( + reports.id_belongs_to_unexpected_type( + element_id, + expected_types=tag_list, + current_type=element.tag + ) if element.tag not in tag_list + else reports.object_with_id_in_unexpected_context( + element.tag, + element_id, + context_element.tag, + context_element.attrib.get("id", "") + ) + ) + + if none_if_id_unused: + return None + + raise LibraryError( + reports.id_not_found( + element_id, + id_description if id_description else "/".join(tag_list), + context_element.tag, + context_element.attrib.get("id", "") + ) ) +def create_subelement_id(context_element, suffix, id_provider=None): + proposed_id = "{0}-{1}".format(context_element.get("id"), suffix) + if id_provider: + return id_provider.allocate_id(proposed_id) + return find_unique_id(context_element, proposed_id) + def check_new_id_applicable(tree, description, id): validate_id(id, description) validate_id_does_not_exist(tree, id) @@ -87,11 +209,7 @@ Return 'acls' element from tree, create a new one if missing tree cib etree node """ - acls = tree.find(".//acls") - if acls is None: - acls = etree.SubElement(get_configuration(tree), "acls") - return acls - + return get_sub_element(get_configuration(tree), "acls") def get_alerts(tree): """ @@ -100,7 +218,6 @@ """ return get_sub_element(get_configuration(tree), "alerts") - def get_constraints(tree): """ Return 'constraint' element from tree @@ -108,47 +225,26 @@ """ return _get_mandatory_section(tree, "configuration/constraints") -def get_resources(tree): +def get_fencing_topology(tree): """ - Return 'resources' element from tree - tree cib etree node + Return the 'fencing-topology' element from the tree + tree -- cib etree node """ - return _get_mandatory_section(tree, "configuration/resources") + return get_sub_element(get_configuration(tree), "fencing-topology") -def find_parent(element, tag_names): - candidate = element - while True: - if candidate is None or candidate.tag in tag_names: - return candidate - candidate = candidate.getparent() - -def export_attributes(element): - return dict((key, value) for key, value in element.attrib.items()) - - -def get_sub_element(element, sub_element_tag, new_id=None, new_index=None): - """ - Returns the FIRST sub-element sub_element_tag of element. It will create new - element if such doesn't exist yet. Id of new element will be new_if if - it's not None. new_index specify where will be new element added, if None - it will be appended. 
- - element -- parent element - sub_element_tag -- tag of wanted element - new_id -- id of new element - new_index -- index for new element - """ - sub_element = element.find("./{0}".format(sub_element_tag)) - if sub_element is None: - sub_element = etree.Element(sub_element_tag) - if new_id: - sub_element.set("id", new_id) - if new_index is None: - element.append(sub_element) - else: - element.insert(new_index, sub_element) - return sub_element +def get_nodes(tree): + """ + Return 'nodes' element from the tree + tree cib etree node + """ + return _get_mandatory_section(tree, "configuration/nodes") +def get_resources(tree): + """ + Return the 'resources' element from the tree + tree -- cib etree node + """ + return _get_mandatory_section(tree, "configuration/resources") def get_pacemaker_version_by_which_cib_was_validated(cib): """ @@ -173,86 +269,3 @@ int(match.group("minor")), int(match.group("rev") or 0) ) - - -def upgrade_cib(cib, runner): - """ - Upgrade CIB to the latest schema of installed pacemaker. Returns upgraded - CIB as string. - Raises LibraryError on any failure. - - cib -- cib etree - runner -- CommandRunner - """ - temp_file = None - try: - temp_file = tempfile.NamedTemporaryFile("w+", suffix=".pcs") - temp_file.write(etree.tostring(cib).decode()) - temp_file.flush() - stdout, stderr, retval = runner.run( - [ - os.path.join(settings.pacemaker_binaries, "cibadmin"), - "--upgrade", - "--force" - ], - env_extend={"CIB_file": temp_file.name} - ) - - if retval != 0: - temp_file.close() - raise LibraryError( - reports.cib_upgrade_failed(join_multilines([stderr, stdout])) - ) - - temp_file.seek(0) - return etree.fromstring(temp_file.read()) - except (EnvironmentError, etree.XMLSyntaxError, etree.DocumentInvalid) as e: - raise LibraryError(reports.cib_upgrade_failed(str(e))) - finally: - if temp_file: - temp_file.close() - - -def ensure_cib_version(runner, cib, version): - """ - This method ensures that specified cib is verified by pacemaker with - version 'version' or newer. If cib doesn't correspond to this version, - method will try to upgrade cib. - Returns cib which was verified by pacemaker version 'version' or later. - Raises LibraryError on any failure. - - runner -- CommandRunner - cib -- cib tree - version -- tuple of integers (, , ) - """ - current_version = get_pacemaker_version_by_which_cib_was_validated( - cib - ) - if current_version >= version: - return None - - upgraded_cib = upgrade_cib(cib, runner) - current_version = get_pacemaker_version_by_which_cib_was_validated( - upgraded_cib - ) - - if current_version >= version: - return upgraded_cib - - raise LibraryError(reports.unable_to_upgrade_cib_to_required_version( - current_version, version - )) - - -def etree_element_attibutes_to_dict(etree_el, required_key_list): - """ - Returns all attributes of etree_el from required_key_list in dictionary, - where keys are attributes and values are values of attributes or None if - it's not present. 
- - etree_el -- etree element from which attributes should be extracted - required_key_list -- list of strings, attributes names which should be - extracted - """ - return dict([(key, etree_el.get(key)) for key in required_key_list]) - diff -Nru pcs-0.9.155+dfsg/pcs/lib/cluster_conf_facade.py pcs-0.9.159/pcs/lib/cluster_conf_facade.py --- pcs-0.9.155+dfsg/pcs/lib/cluster_conf_facade.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/cluster_conf_facade.py 2017-06-30 15:33:01.000000000 +0000 @@ -7,6 +7,7 @@ from lxml import etree +from pcs.common.tools import xml_fromstring from pcs.lib import reports from pcs.lib.errors import LibraryError from pcs.lib.node import NodeAddresses, NodeAddressesList @@ -24,7 +25,7 @@ config_string -- cluster.conf file content as string """ try: - return cls(etree.fromstring(config_string)) + return cls(xml_fromstring(config_string)) except (etree.XMLSyntaxError, etree.DocumentInvalid) as e: raise LibraryError(reports.cluster_conf_invalid_format(str(e))) @@ -56,4 +57,3 @@ id=node.get("nodeid") )) return result - diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/acl.py pcs-0.9.159/pcs/lib/commands/acl.py --- pcs-0.9.155+dfsg/pcs/lib/commands/acl.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/acl.py 2017-06-30 15:33:01.000000000 +0000 @@ -5,13 +5,19 @@ unicode_literals, ) -from pcs.lib import reports +from contextlib import contextmanager + from pcs.lib.cib import acl -from pcs.lib.errors import LibraryError +from pcs.lib.cib.tools import get_acls REQUIRED_CIB_VERSION = (2, 0, 0) +@contextmanager +def cib_acl_section(env): + cib = env.get_cib(REQUIRED_CIB_VERSION) + yield get_acls(cib) + env.push_cib(cib) def create_role(lib_env, role_id, permission_info_list, description): """ @@ -24,16 +30,12 @@ (, , ) description -- text description for role """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - - if permission_info_list: - acl.validate_permissions(cib, permission_info_list) - role_el = acl.create_role(cib, role_id, description) - if permission_info_list: - acl.add_permissions_to_role(role_el, permission_info_list) - - lib_env.push_cib(cib) - + with cib_acl_section(lib_env) as acl_section: + if permission_info_list: + acl.validate_permissions(acl_section, permission_info_list) + role_el = acl.create_role(acl_section, role_id, description) + if permission_info_list: + acl.add_permissions_to_role(role_el, permission_info_list) def remove_role(lib_env, role_id, autodelete_users_groups=False): """ @@ -45,56 +47,26 @@ autodelete_users_groups -- if True targets and groups which are empty after removal will be removed """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - try: - acl.remove_role(cib, role_id, autodelete_users_groups) - except acl.AclRoleNotFound as e: - raise LibraryError(acl.acl_error_to_report_item(e)) - lib_env.push_cib(cib) - + with cib_acl_section(lib_env) as acl_section: + acl.remove_role(acl_section, role_id, autodelete_users_groups) def assign_role_not_specific(lib_env, role_id, target_or_group_id): """ - Assign role wth id role_id to target or group with id target_or_group_id. - Target element has bigger pririty so if there are target and group with same - id only target element will be affected by this function. + Assign role with id role_id to target or group with id target_or_group_id. + Target element has bigger priority so if there are target and group with + the same id only target element will be affected by this function. Raises LibraryError on any failure. 
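+    For illustration (ids are hypothetical): if both an acl_target and an
+    acl_group with the id "devs" exist, assign_role_not_specific(env, "read",
+    "devs") assigns the role "read" only to the acl_target.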
lib_env -- LibraryEnviroment - role_id -- id of role which should be assigne to target/group + role_id -- id of role which should be assigned to target/group target_or_group_id -- id of target/group element """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - try: + with cib_acl_section(lib_env) as acl_section: acl.assign_role( - _get_target_or_group(cib, target_or_group_id), - acl.find_role(cib, role_id) + acl_section, + role_id, + acl.find_target_or_group(acl_section, target_or_group_id), ) - except acl.AclError as e: - raise LibraryError(acl.acl_error_to_report_item(e)) - lib_env.push_cib(cib) - - -def _get_target_or_group(cib, target_or_group_id): - """ - Returns acl_target or acl_group element with id target_or_group_id. Target - element has bigger pririty so if there are target and group with same id - only target element will be affected by this function. - Raises LibraryError if there is no target or group element with - specified id. - - cib -- cib etree node - target_or_group_id -- id of target/group element which should be returned - """ - try: - return acl.find_target(cib, target_or_group_id) - except acl.AclTargetNotFound: - try: - return acl.find_group(cib, target_or_group_id) - except acl.AclGroupNotFound: - raise LibraryError( - reports.id_not_found(target_or_group_id, "user/group") - ) def assign_role_to_target(lib_env, role_id, target_id): """ @@ -105,15 +77,12 @@ role_id -- id of acl_role element which should be assigned to target target_id -- id of acl_target element to which role should be assigned """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - try: + with cib_acl_section(lib_env) as acl_section: acl.assign_role( - acl.find_target(cib, target_id), acl.find_role(cib, role_id) + acl_section, + role_id, + acl.find_target(acl_section, target_id), ) - except acl.AclError as e: - raise LibraryError(acl.acl_error_to_report_item(e)) - lib_env.push_cib(cib) - def assign_role_to_group(lib_env, role_id, group_id): """ @@ -124,23 +93,20 @@ role_id -- id of acl_role element which should be assigned to group group_id -- id of acl_group element to which role should be assigned """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - try: + with cib_acl_section(lib_env) as acl_section: acl.assign_role( - acl.find_group(cib, group_id), acl.find_role(cib, role_id) + acl_section, + role_id, + acl.find_group(acl_section, group_id), ) - except acl.AclError as e: - raise LibraryError(acl.acl_error_to_report_item(e)) - lib_env.push_cib(cib) - def unassign_role_not_specific( lib_env, role_id, target_or_group_id, autodelete_target_group=False ): """ Unassign role with role_id from target/group with id target_or_group_id. - Target element has bigger pririty so if there are target and group with same - id only target element will be affected by this function. + Target element has bigger priority so if there are target and group with + the same id only target element will be affected by this function. Raises LibraryError on any failure. 
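+    For illustration (ids are hypothetical): unassign_role_not_specific(env,
+    "read", "devs", autodelete_target_group=True) also removes the "devs"
+    element itself when "read" was the last role assigned to it.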
lib_env -- LibraryEnvironment @@ -149,14 +115,12 @@ autodelete_target_group -- if True remove target/group element if has no more role assigned """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - acl.unassign_role( - _get_target_or_group(cib, target_or_group_id), - role_id, - autodelete_target_group - ) - lib_env.push_cib(cib) - + with cib_acl_section(lib_env) as acl_section: + acl.unassign_role( + acl.find_target_or_group(acl_section, target_or_group_id), + role_id, + autodelete_target_group + ) def unassign_role_from_target( lib_env, role_id, target_id, autodelete_target=False @@ -171,17 +135,12 @@ autodelete_target -- if True remove target element if has no more role assigned """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - try: + with cib_acl_section(lib_env) as acl_section: acl.unassign_role( - acl.find_target(cib, target_id), + acl.find_target(acl_section, target_id), role_id, autodelete_target ) - except acl.AclError as e: - raise LibraryError(acl.acl_error_to_report_item(e)) - lib_env.push_cib(cib) - def unassign_role_from_group( lib_env, role_id, group_id, autodelete_group=False @@ -196,36 +155,12 @@ autodelete_target -- if True remove group element if has no more role assigned """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - try: + with cib_acl_section(lib_env) as acl_section: acl.unassign_role( - acl.find_group(cib, group_id), + acl.find_group(acl_section, group_id), role_id, autodelete_group ) - except acl.AclError as e: - raise LibraryError(acl.acl_error_to_report_item(e)) - lib_env.push_cib(cib) - - -def _assign_roles_to_element(cib, element, role_id_list): - """ - Assign roles from role_id_list to element. - Raises LibraryError on any failure. - - cib -- cib etree node - element -- element to which specified roles should be assigned - role_id_list -- list of role id - """ - report_list = [] - for role_id in role_id_list: - try: - acl.assign_role(element, acl.find_role(cib, role_id)) - except acl.AclError as e: - report_list.append(acl.acl_error_to_report_item(e)) - if report_list: - raise LibraryError(*report_list) - def create_target(lib_env, target_id, role_list): """ @@ -236,10 +171,12 @@ target_id -- id of new target role_list -- list of roles to assign to new target """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - _assign_roles_to_element(cib, acl.create_target(cib, target_id), role_list) - lib_env.push_cib(cib) - + with cib_acl_section(lib_env) as acl_section: + acl.assign_all_roles( + acl_section, + role_list, + acl.create_target(acl_section, target_id) + ) def create_group(lib_env, group_id, role_list): """ @@ -250,10 +187,12 @@ group_id -- id of new group role_list -- list of roles to assign to new group """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - _assign_roles_to_element(cib, acl.create_group(cib, group_id), role_list) - lib_env.push_cib(cib) - + with cib_acl_section(lib_env) as acl_section: + acl.assign_all_roles( + acl_section, + role_list, + acl.create_group(acl_section, group_id) + ) def remove_target(lib_env, target_id): """ @@ -263,10 +202,8 @@ lib_env -- LibraryEnvironment target_id -- id of taget which should be removed """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - acl.remove_target(cib, target_id) - lib_env.push_cib(cib) - + with cib_acl_section(lib_env) as acl_section: + acl.remove_target(acl_section, target_id) def remove_group(lib_env, group_id): """ @@ -276,10 +213,8 @@ lib_env -- LibraryEnvironment group_id -- id of group which should be removed """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - acl.remove_group(cib, group_id) 
- lib_env.push_cib(cib) - + with cib_acl_section(lib_env) as acl_section: + acl.remove_group(acl_section, group_id) def add_permission(lib_env, role_id, permission_info_list): """ @@ -292,13 +227,12 @@ permission_info_list -- list of permissons, items of list should be tuples: (, , ) """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - acl.validate_permissions(cib, permission_info_list) - acl.add_permissions_to_role( - acl.provide_role(cib, role_id), permission_info_list - ) - lib_env.push_cib(cib) - + with cib_acl_section(lib_env) as acl_section: + acl.validate_permissions(acl_section, permission_info_list) + acl.add_permissions_to_role( + acl.provide_role(acl_section, role_id), + permission_info_list + ) def remove_permission(lib_env, permission_id): """ @@ -308,14 +242,12 @@ lib_env -- LibraryEnvironment permission_id -- id of permission element which should be removed """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) - acl.remove_permission(cib, permission_id) - lib_env.push_cib(cib) - + with cib_acl_section(lib_env) as acl_section: + acl.remove_permission(acl_section, permission_id) def get_config(lib_env): """ - Returns ACL configuration in disctionary. Fromat of output: + Returns ACL configuration in dictionary. Format of output: { "target_list": , "group_list": , @@ -324,10 +256,9 @@ lib_env -- LibraryEnvironment """ - cib = lib_env.get_cib(REQUIRED_CIB_VERSION) + acl_section = get_acls(lib_env.get_cib(REQUIRED_CIB_VERSION)) return { - "target_list": acl.get_target_list(cib), - "group_list": acl.get_group_list(cib), - "role_list": acl.get_role_list(cib), + "target_list": acl.get_target_list(acl_section), + "group_list": acl.get_group_list(acl_section), + "role_list": acl.get_role_list(acl_section), } - diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/alert.py pcs-0.9.159/pcs/lib/commands/alert.py --- pcs-0.9.155+dfsg/pcs/lib/commands/alert.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/alert.py 2017-06-30 15:33:01.000000000 +0000 @@ -33,7 +33,7 @@ description -- alert description description """ if not path: - raise LibraryError(reports.required_option_is_missing("path")) + raise LibraryError(reports.required_option_is_missing(["path"])) cib = lib_env.get_cib(REQUIRED_CIB_VERSION) @@ -86,6 +86,7 @@ alert.remove_alert(cib, alert_id) except LibraryError as e: report_list += e.args + lib_env.report_processor.process_list(report_list) lib_env.push_cib(cib) @@ -114,7 +115,7 @@ """ if not recipient_value: raise LibraryError( - reports.required_option_is_missing("value") + reports.required_option_is_missing(["value"]) ) cib = lib_env.get_cib(REQUIRED_CIB_VERSION) diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/booth.py pcs-0.9.159/pcs/lib/commands/booth.py --- pcs-0.9.155+dfsg/pcs/lib/commands/booth.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/booth.py 2017-06-30 15:33:01.000000000 +0000 @@ -11,7 +11,8 @@ from pcs import settings from pcs.common.tools import join_multilines -from pcs.lib import external, reports +from pcs.lib import external, reports, tools +from pcs.lib.cib.resource import primitive, group from pcs.lib.booth import ( config_exchange, config_files, @@ -26,6 +27,7 @@ from pcs.lib.cib.tools import get_resources from pcs.lib.errors import LibraryError, ReportItemSeverity from pcs.lib.node import NodeAddresses +from pcs.lib.resource_agent import find_valid_resource_agent_by_name def config_setup(env, booth_configuration, overwrite_existing=False): @@ -40,7 +42,7 @@ *config_structure.take_peers(config_content) ) - 
env.booth.create_key(config_files.generate_key(), overwrite_existing) + env.booth.create_key(tools.generate_key(), overwrite_existing) config_content = config_structure.set_authfile( config_content, env.booth.key_path @@ -145,24 +147,56 @@ ) env.booth.push_config(build(booth_configuration)) -def create_in_cluster(env, name, ip, resource_create, resource_remove): - #TODO resource_create is provisional hack until resources are not moved to - #lib - resources_section = get_resources(env.get_cib()) +def create_in_cluster(env, name, ip, allow_absent_resource_agent=False): + """ + Create group with ip resource and booth resource + + LibraryEnvironment env provides all for communication with externals + string name identifies booth instance + string ip determines float ip for the operation of the booth + bool allow_absent_resource_agent is flag allowing create booth resource even + if its agent is not installed + """ + cib = env.get_cib() + resources_section = get_resources(cib) booth_config_file_path = get_config_file_name(name) if resource.find_for_config(resources_section, booth_config_file_path): raise LibraryError(booth_reports.booth_already_in_cib(name)) - resource.get_creator(resource_create, resource_remove)( - ip, - booth_config_file_path, - create_id = partial( - resource.create_resource_id, - resources_section, - name - ) + create_id = partial( + resource.create_resource_id, + resources_section, + name ) + get_agent = partial( + find_valid_resource_agent_by_name, + env.report_processor, + env.cmd_runner(), + allowed_absent=allow_absent_resource_agent + ) + create_primitive = partial( + primitive.create, + env.report_processor, + resources_section, + ) + into_booth_group = partial( + group.place_resource, + group.provide_group(resources_section, create_id("group")), + ) + + into_booth_group(create_primitive( + create_id("ip"), + get_agent("ocf:heartbeat:IPaddr2"), + instance_attributes={"ip": ip}, + )) + into_booth_group(create_primitive( + create_id("service"), + get_agent("ocf:pacemaker:booth-site"), + instance_attributes={"config": booth_config_file_path}, + )) + + env.push_cib(cib) def remove_from_cluster(env, name, resource_remove, allow_remove_multiple): #TODO resource_remove is provisional hack until resources are not moved to diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/cluster.py pcs-0.9.159/pcs/lib/commands/cluster.py --- pcs-0.9.155+dfsg/pcs/lib/commands/cluster.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/cluster.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,533 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from pcs.common import report_codes +from pcs.lib import reports, nodes_task, node_communication_format +from pcs.lib.node import( + NodeAddresses, + NodeAddressesList, + node_addresses_contain_name, + node_addresses_contain_host, +) +from pcs.lib.tools import generate_key +from pcs.lib.cib.resource import guest_node, primitive, remote_node +from pcs.lib.cib.tools import get_resources, find_element_by_tag_and_id +from pcs.lib.env_tools import get_nodes, get_nodes_remote, get_nodes_guest +from pcs.lib.errors import LibraryError +from pcs.lib.pacemaker import state +from pcs.lib.pacemaker.live import remove_node + +def _ensure_can_add_node_to_remote_cluster( + env, node_addresses, warn_on_communication_exception=False +): + report_items = [] + nodes_task.check_can_add_node_to_cluster( + env.node_communicator(), + node_addresses, + report_items, + 
check_response=nodes_task.availability_checker_remote_node, + warn_on_communication_exception=warn_on_communication_exception, + ) + env.report_processor.process_list(report_items) + +def _share_authkey( + env, current_nodes, candidate_node_addresses, + skip_offline_nodes=False, + allow_incomplete_distribution=False +): + if env.pacemaker.has_authkey: + authkey_content = env.pacemaker.get_authkey_content() + node_addresses_list = NodeAddressesList([candidate_node_addresses]) + else: + authkey_content = generate_key() + node_addresses_list = current_nodes + [candidate_node_addresses] + + nodes_task.distribute_files( + env.node_communicator(), + env.report_processor, + node_communication_format.pcmk_authkey_file(authkey_content), + node_addresses_list, + skip_offline_nodes, + allow_incomplete_distribution, + description="remote node configuration files" + ) + +def _start_and_enable_pacemaker_remote( + env, node_list, skip_offline_nodes=False, allow_fails=False +): + nodes_task.run_actions_on_multiple_nodes( + env.node_communicator(), + env.report_processor, + node_communication_format.create_pcmk_remote_actions([ + "start", + "enable", + ]), + lambda key, response: response.code == "success", + node_list, + skip_offline_nodes, + allow_fails, + description="start of service pacemaker_remote" + ) + +def _prepare_pacemaker_remote_environment( + env, current_nodes, node_host, skip_offline_nodes, + allow_incomplete_distribution, allow_fails +): + if not env.is_corosync_conf_live: + env.report_processor.process_list([ + reports.nolive_skip_files_distribution( + ["pacemaker authkey"], + [node_host] + ), + reports.nolive_skip_service_command_on_nodes( + "pacemaker_remote", + "start", + [node_host] + ), + reports.nolive_skip_service_command_on_nodes( + "pacemaker_remote", + "enable", + [node_host] + ), + ]) + return + + candidate_node = NodeAddresses(node_host) + _ensure_can_add_node_to_remote_cluster( + env, + candidate_node, + skip_offline_nodes + ) + _share_authkey( + env, + current_nodes, + candidate_node, + skip_offline_nodes, + allow_incomplete_distribution + ) + _start_and_enable_pacemaker_remote( + env, + [candidate_node], + skip_offline_nodes, + allow_fails + ) + +def _ensure_resource_running(env, resource_id): + env.report_processor.process( + state.ensure_resource_running(env.get_cluster_state(), resource_id) + ) + +def _ensure_consistently_live_env(env): + if env.is_cib_live and env.is_corosync_conf_live: + return + + #we accept is as well, we need it for tests + if not env.is_cib_live and not env.is_corosync_conf_live: + return + + raise LibraryError(reports.live_environment_required([ + "CIB" if not env.is_cib_live else "COROSYNC_CONF" + ])) + + +def node_add_remote( + env, host, node_name, operations, meta_attributes, instance_attributes, + skip_offline_nodes=False, + allow_incomplete_distribution=False, + allow_pacemaker_remote_service_fail=False, + allow_invalid_operation=False, + allow_invalid_instance_attributes=False, + use_default_operations=True, + wait=False, +): + """ + create resource ocf:pacemaker:remote and use it as remote node + + LibraryEnvironment env provides all for communication with externals + list of dict operations contains attributes for each entered operation + dict meta_attributes contains attributes for primitive/meta_attributes + dict instance_attributes contains attributes for + primitive/instance_attributes + bool skip_offline_nodes -- a flag for ignoring when some nodes are offline + bool allow_incomplete_distribution -- is a flag for allowing 
this command to
+        finish successfully even if file distribution did not succeed
+    bool allow_pacemaker_remote_service_fail -- is a flag for allowing this
+        command to finish successfully even if starting/enabling
+        pacemaker_remote did not succeed
+    bool allow_invalid_operation is a flag allowing the use of operations
+        that are not listed in the resource agent metadata
+    bool allow_invalid_instance_attributes is a flag allowing the use of
+        instance attributes that are not listed in the resource agent
+        metadata, or allowing the omission of instance_attributes that are
+        required in the resource agent metadata
+    bool use_default_operations is a flag controlling the addition of default
+        cib operations (specified in a resource agent)
+    mixed wait is a flag for controlling waiting for the pacemaker idle
+        mechanism
+    """
+    _ensure_consistently_live_env(env)
+    env.ensure_wait_satisfiable(wait)
+
+    cib = env.get_cib()
+    current_nodes = get_nodes(env.get_corosync_conf(), cib)
+
+    resource_agent = remote_node.get_agent(
+        env.report_processor,
+        env.cmd_runner()
+    )
+
+    report_list = remote_node.validate_create(
+        current_nodes,
+        resource_agent,
+        host,
+        node_name,
+        instance_attributes
+    )
+
+    try:
+        remote_resource_element = remote_node.create(
+            env.report_processor,
+            resource_agent,
+            get_resources(cib),
+            host,
+            node_name,
+            operations,
+            meta_attributes,
+            instance_attributes,
+            allow_invalid_operation,
+            allow_invalid_instance_attributes,
+            use_default_operations,
+        )
+    except LibraryError as e:
+        #Check for id conflicts against the nodes as well. Until resource
+        #create validation is separated out, we need to deduplicate the
+        #ID_ALREADY_EXISTS reports after the fact.
+        already_exists = []
+        unified_report_list = []
+        for report in report_list + list(e.args):
+            if report.code != report_codes.ID_ALREADY_EXISTS:
+                unified_report_list.append(report)
+            elif report.info["id"] not in already_exists:
+                unified_report_list.append(report)
+                already_exists.append(report.info["id"])
+        report_list = unified_report_list
+
+    env.report_processor.process_list(report_list)
+
+    _prepare_pacemaker_remote_environment(
+        env,
+        current_nodes,
+        host,
+        skip_offline_nodes,
+        allow_incomplete_distribution,
+        allow_pacemaker_remote_service_fail,
+    )
+    env.push_cib(cib, wait)
+    if wait:
+        _ensure_resource_running(env, remote_resource_element.attrib["id"])
+
+def node_add_guest(
+    env, node_name, resource_id, options,
+    skip_offline_nodes=False,
+    allow_incomplete_distribution=False,
+    allow_pacemaker_remote_service_fail=False, wait=False,
+):
+
+    """
+    set up the resource (resource_id) as a guest node and set the node up as
+    a guest
+
+    LibraryEnvironment env provides all for communication with externals
+    string resource_id -- specifies the resource that should become a guest
+        node
+    dict options could contain keys remote-node, remote-port, remote-addr,
+        remote-connect-timeout
+    bool skip_offline_nodes -- a flag for ignoring when some nodes are offline
+    bool allow_incomplete_distribution -- is a flag for allowing this command
+        to finish successfully even if file distribution did not succeed
+    bool allow_pacemaker_remote_service_fail -- is a flag for allowing this
+        command to finish successfully even if starting/enabling
+        pacemaker_remote did not succeed
+    mixed wait is a flag for controlling waiting for the pacemaker idle
+        mechanism
+    """
+    _ensure_consistently_live_env(env)
+    env.ensure_wait_satisfiable(wait)
+
+    cib = env.get_cib()
+    current_nodes = get_nodes(env.get_corosync_conf(), cib)
+
+    report_list = guest_node.validate_set_as_guest(
+        cib,
+        current_nodes,
+        node_name,
+
options + ) + try: + resource_element = find_element_by_tag_and_id( + primitive.TAG, + get_resources(cib), + resource_id + ) + report_list.extend(guest_node.validate_is_not_guest(resource_element)) + except LibraryError as e: + report_list.extend(e.args) + + env.report_processor.process_list(report_list) + + guest_node.set_as_guest( + resource_element, + node_name, + options.get("remote-addr", None), + options.get("remote-port", None), + options.get("remote-connect-timeout", None), + ) + + _prepare_pacemaker_remote_environment( + env, + current_nodes, + guest_node.get_host_from_options(node_name, options), + skip_offline_nodes, + allow_incomplete_distribution, + allow_pacemaker_remote_service_fail, + ) + + env.push_cib(cib, wait) + if wait: + _ensure_resource_running(env, resource_id) + +def _find_resources_to_remove( + cib, report_processor, + node_type, node_identifier, allow_remove_multiple_nodes, + find_resources +): + resource_element_list = find_resources(get_resources(cib), node_identifier) + + if not resource_element_list: + raise LibraryError(reports.node_not_found(node_identifier, node_type)) + + if len(resource_element_list) > 1: + report_processor.process( + reports.get_problem_creator( + report_codes.FORCE_REMOVE_MULTIPLE_NODES, + allow_remove_multiple_nodes + )( + reports.multiple_result_found, + "resource", + [resource.attrib["id"] for resource in resource_element_list], + node_identifier + ) + ) + + return resource_element_list + +def _get_node_addresses_from_resources(nodes, resource_element_list, get_host): + node_addresses_set = set() + for resource_element in resource_element_list: + for node in nodes: + #remote nodes uses ring0 only + if get_host(resource_element) == node.ring0: + node_addresses_set.add(node) + return sorted(node_addresses_set, key=lambda node: node.ring0) + +def _destroy_pcmk_remote_env( + env, node_addresses_list, skip_offline_nodes, allow_fails +): + actions = node_communication_format.create_pcmk_remote_actions([ + "stop", + "disable", + ]) + files = { + "pacemaker_remote authkey": {"type": "pcmk_remote_authkey"}, + } + + nodes_task.run_actions_on_multiple_nodes( + env.node_communicator(), + env.report_processor, + actions, + lambda key, response: response.code == "success", + node_addresses_list, + skip_offline_nodes, + allow_fails, + description="stop of service pacemaker_remote" + ) + + nodes_task.remove_files( + env.node_communicator(), + env.report_processor, + files, + node_addresses_list, + skip_offline_nodes, + allow_fails, + description="remote node files" + ) + +def _report_skip_live_parts_in_remove(node_addresses_list): + #remote nodes uses ring0 only + node_host_list = [addresses.ring0 for addresses in node_addresses_list] + return [ + reports.nolive_skip_service_command_on_nodes( + "pacemaker_remote", + "stop", + node_host_list + ), + reports.nolive_skip_service_command_on_nodes( + "pacemaker_remote", + "disable", + node_host_list + ), + reports.nolive_skip_files_remove(["pacemaker authkey"], node_host_list) + ] + +def node_remove_remote( + env, node_identifier, remove_resource, + skip_offline_nodes=False, + allow_remove_multiple_nodes=False, + allow_pacemaker_remote_service_fail=False +): + """ + remove a resource representing remote node and destroy remote node + + LibraryEnvironment env provides all for communication with externals + string node_identifier -- node name or hostname + callable remove_resource -- function for remove resource + bool skip_offline_nodes -- a flag for ignoring when some nodes are offline + bool 
allow_remove_multiple_nodes -- a flag allowing
+        removal of unexpected multiple occurrences of the remote node for
+        node_identifier
+    bool allow_pacemaker_remote_service_fail -- is a flag for allowing this
+        command to finish successfully even if stopping/disabling
+        pacemaker_remote did not succeed
+    """
+
+    _ensure_consistently_live_env(env)
+    cib = env.get_cib()
+    resource_element_list = _find_resources_to_remove(
+        cib,
+        env.report_processor,
+        "remote",
+        node_identifier,
+        allow_remove_multiple_nodes,
+        remote_node.find_node_resources,
+    )
+
+    node_addresses_list = _get_node_addresses_from_resources(
+        get_nodes_remote(cib),
+        resource_element_list,
+        remote_node.get_host,
+    )
+
+    if not env.is_corosync_conf_live:
+        env.report_processor.process_list(
+            _report_skip_live_parts_in_remove(node_addresses_list)
+        )
+    else:
+        _destroy_pcmk_remote_env(
+            env,
+            node_addresses_list,
+            skip_offline_nodes,
+            allow_pacemaker_remote_service_fail
+        )
+
+    #removing the node from pcmk caches is currently integrated in the
+    #remove_resource function
+    for resource_element in resource_element_list:
+        remove_resource(
+            resource_element.attrib["id"],
+            is_remove_remote_context=True,
+        )
+
+def node_remove_guest(
+    env, node_identifier,
+    skip_offline_nodes=False,
+    allow_remove_multiple_nodes=False,
+    allow_pacemaker_remote_service_fail=False,
+    wait=False,
+):
+    """
+    remove the guest node configuration from its resource and destroy the
+    guest node
+
+    LibraryEnvironment env provides all for communication with externals
+    string node_identifier -- node name, hostname or resource id
+    bool skip_offline_nodes -- a flag for ignoring when some nodes are offline
+    bool allow_remove_multiple_nodes -- a flag allowing removal of unexpected
+        multiple occurrences of the guest node for node_identifier
+    bool allow_pacemaker_remote_service_fail -- is a flag for allowing this
+        command to finish successfully even if stopping/disabling
+        pacemaker_remote did not succeed
+    """
+    _ensure_consistently_live_env(env)
+    env.ensure_wait_satisfiable(wait)
+    cib = env.get_cib()
+
+    resource_element_list = _find_resources_to_remove(
+        cib,
+        env.report_processor,
+        "guest",
+        node_identifier,
+        allow_remove_multiple_nodes,
+        guest_node.find_node_resources,
+    )
+
+    node_addresses_list = _get_node_addresses_from_resources(
+        get_nodes_guest(cib),
+        resource_element_list,
+        guest_node.get_host,
+    )
+
+    if not env.is_corosync_conf_live:
+        env.report_processor.process_list(
+            _report_skip_live_parts_in_remove(node_addresses_list)
+        )
+    else:
+        _destroy_pcmk_remote_env(
+            env,
+            node_addresses_list,
+            skip_offline_nodes,
+            allow_pacemaker_remote_service_fail
+        )
+
+    for resource_element in resource_element_list:
+        guest_node.unset_guest(resource_element)
+
+    env.push_cib(cib, wait)
+
+    #remove the node from pcmk caches
+    if env.is_cib_live:
+        for node_addresses in node_addresses_list:
+            remove_node(env.cmd_runner(), node_addresses.name)
+
+
+def node_clear(env, node_name, allow_clear_cluster_node=False):
+    """
+    Remove the specified node from various cluster caches.
+
+    LibraryEnvironment env provides all for communication with externals
+    string node_name
+    bool allow_clear_cluster_node -- a flag allowing the node to be cleared
+        even if it's still in a cluster
+    """
+    mocked_envs = []
+    if not env.is_cib_live:
+        mocked_envs.append("CIB")
+    if not env.is_corosync_conf_live:
+        mocked_envs.append("COROSYNC_CONF")
+    if mocked_envs:
+        raise LibraryError(reports.live_environment_required(mocked_envs))
+
+    current_nodes = get_nodes(env.get_corosync_conf(), env.get_cib())
+    if(
+        node_addresses_contain_name(current_nodes, node_name)
+        or
+        node_addresses_contain_host(current_nodes, node_name)
+    ):
+        env.report_processor.process(
+            reports.get_problem_creator(
+                report_codes.FORCE_CLEAR_CLUSTER_NODE,
+                allow_clear_cluster_node
+            )(
+                reports.node_to_clear_is_still_in_cluster,
+                node_name
+            )
+        )
+
+    remove_node(env.cmd_runner(), node_name)
diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/fencing_topology.py pcs-0.9.159/pcs/lib/commands/fencing_topology.py
--- pcs-0.9.155+dfsg/pcs/lib/commands/fencing_topology.py	1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/commands/fencing_topology.py	2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,122 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from pcs.common.fencing_topology import (
+    TARGET_TYPE_REGEXP,
+    TARGET_TYPE_ATTRIBUTE,
+)
+from pcs.lib.cib import fencing_topology as cib_fencing_topology
+from pcs.lib.cib.tools import (
+    get_fencing_topology,
+    get_resources,
+)
+from pcs.lib.pacemaker.live import get_cluster_status_xml
+from pcs.lib.pacemaker.state import ClusterState
+
+def add_level(
+    lib_env, level, target_type, target_value, devices,
+    force_device=False, force_node=False
+):
+    """
+    Validate and add a new fencing level
+
+    LibraryEnvironment lib_env -- environment
+    int|string level -- level (index) of the new fencing level
+    constant target_type -- the new fencing level target value type
+    mixed target_value -- the new fencing level target value
+    Iterable devices -- list of stonith devices for the new fencing level
+    bool force_device -- continue even if a stonith device does not exist
+    bool force_node -- continue even if a node (target) does not exist
+    """
+    version_check = None
+    if target_type == TARGET_TYPE_REGEXP:
+        version_check = (2, 3, 0)
+    elif target_type == TARGET_TYPE_ATTRIBUTE:
+        version_check = (2, 4, 0)
+
+    cib = lib_env.get_cib(version_check)
+    cib_fencing_topology.add_level(
+        lib_env.report_processor,
+        get_fencing_topology(cib),
+        get_resources(cib),
+        level,
+        target_type,
+        target_value,
+        devices,
+        ClusterState(
+            get_cluster_status_xml(lib_env.cmd_runner())
+        ).node_section.nodes,
+        force_device,
+        force_node
+    )
+    lib_env.report_processor.send()
+    lib_env.push_cib(cib)
+
+def get_config(lib_env):
+    """
+    Get the fencing levels configuration.
+
+    Return a list of levels where each level is a dict with the keys:
+    target_type, target_value, level and devices. Devices is a list of
+    stonith device ids.
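+
+    An illustrative return value (all ids and values here are hypothetical):
+        [
+            {
+                "target_type": "node",
+                "target_value": "node1",
+                "level": "1",
+                "devices": ["fence1", "fence2"],
+            },
+        ]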
+
+    LibraryEnvironment lib_env -- environment
+    """
+    cib = lib_env.get_cib()
+    return cib_fencing_topology.export(get_fencing_topology(cib))
+
+def remove_all_levels(lib_env):
+    """
+    Remove all fencing levels
+
+    LibraryEnvironment lib_env -- environment
+    """
+    cib = lib_env.get_cib()
+    cib_fencing_topology.remove_all_levels(get_fencing_topology(cib))
+    lib_env.push_cib(cib)
+
+def remove_levels_by_params(
+    lib_env, level=None, target_type=None, target_value=None, devices=None,
+    ignore_if_missing=False
+):
+    """
+    Remove specified fencing level(s)
+
+    LibraryEnvironment lib_env -- environment
+    int|string level -- level (index) of the fencing level to remove
+    constant target_type -- target value type of the fencing level to remove
+    mixed target_value -- target value of the fencing level to remove
+    Iterable devices -- list of stonith devices of the fencing level to remove
+    bool ignore_if_missing -- when True, do not report if level not found
+    """
+    cib = lib_env.get_cib()
+    cib_fencing_topology.remove_levels_by_params(
+        lib_env.report_processor,
+        get_fencing_topology(cib),
+        level,
+        target_type,
+        target_value,
+        devices,
+        ignore_if_missing
+    )
+    lib_env.report_processor.send()
+    lib_env.push_cib(cib)
+
+def verify(lib_env):
+    """
+    Check if all cluster nodes and stonith devices used in fencing levels exist
+
+    LibraryEnvironment lib_env -- environment
+    """
+    cib = lib_env.get_cib()
+    cib_fencing_topology.verify(
+        lib_env.report_processor,
+        get_fencing_topology(cib),
+        get_resources(cib),
+        ClusterState(
+            get_cluster_status_xml(lib_env.cmd_runner())
+        ).node_section.nodes
+    )
+    lib_env.report_processor.send()
diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/node.py pcs-0.9.159/pcs/lib/commands/node.py
--- pcs-0.9.155+dfsg/pcs/lib/commands/node.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/commands/node.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,166 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from contextlib import contextmanager
+
+from pcs.lib import reports
+from pcs.lib.cib.node import update_node_instance_attrs
+from pcs.lib.errors import LibraryError
+from pcs.lib.pacemaker.live import (
+    get_cluster_status_xml,
+    get_local_node_name,
+)
+from pcs.lib.pacemaker.state import ClusterState
+
+
+@contextmanager
+def cib_runner_nodes(lib_env, wait):
+    lib_env.ensure_wait_satisfiable(wait)
+    runner = lib_env.cmd_runner()
+    cib = lib_env.get_cib()
+
+    state_nodes = ClusterState(
+        get_cluster_status_xml(runner)
+    ).node_section.nodes
+
+    yield (cib, runner, state_nodes)
+    lib_env.push_cib(cib, wait)
+
+
+def standby_unstandby_local(lib_env, standby, wait=False):
+    """
+    Change local node standby mode
+
+    LibraryEnvironment lib_env
+    bool standby -- True: enable standby, False: disable standby
+    mixed wait -- False: no wait, None: wait with default timeout, str or int:
+        wait with specified timeout
+    """
+    return _set_instance_attrs_local_node(
+        lib_env,
+        _create_standby_unstandby_dict(standby),
+        wait
+    )
+
+def standby_unstandby_list(lib_env, standby, node_names, wait=False):
+    """
+    Change specified nodes standby mode
+
+    LibraryEnvironment lib_env
+    bool standby -- True: enable standby, False: disable standby
+    iterable node_names -- nodes to apply the change to
+    mixed wait -- False: no wait, None: wait with default timeout, str or int:
+        wait with specified timeout
+    """
+    return _set_instance_attrs_node_list(
+        lib_env,
+        _create_standby_unstandby_dict(standby),
+        node_names,
+        wait
+    )
+
+def standby_unstandby_all(lib_env, standby,
wait=False): + """ + Change all nodes standby mode + + LibraryEnvironment lib_env + bool standby -- True: enable standby, False: disable standby + mixed wait -- False: no wait, None: wait with default timeout, str or int: + wait with specified timeout + """ + return _set_instance_attrs_all_nodes( + lib_env, + _create_standby_unstandby_dict(standby), + wait + ) + +def maintenance_unmaintenance_local(lib_env, maintenance, wait=False): + """ + Change local node maintenance mode + + LibraryEnvironment lib_env + bool maintenance -- True: enable maintenance, False: disable maintenance + mixed wait -- False: no wait, None: wait with default timeout, str or int: + wait with specified timeout + """ + return _set_instance_attrs_local_node( + lib_env, + _create_maintenance_unmaintenance_dict(maintenance), + wait + ) + +def maintenance_unmaintenance_list( + lib_env, maintenance, node_names, wait=False +): + """ + Change specified nodes maintenance mode + + LibraryEnvironment lib_env + bool maintenance -- True: enable maintenance, False: disable maintenance + iterable node_names -- nodes to apply the change to + mixed wait -- False: no wait, None: wait with default timeout, str or int: + wait with specified timeout + """ + return _set_instance_attrs_node_list( + lib_env, + _create_maintenance_unmaintenance_dict(maintenance), + node_names, + wait + ) + +def maintenance_unmaintenance_all(lib_env, maintenance, wait=False): + """ + Change all nodes maintenance mode + + LibraryEnvironment lib_env + bool maintenance -- True: enable maintenance, False: disable maintenance + mixed wait -- False: no wait, None: wait with default timeout, str or int: + wait with specified timeout + """ + return _set_instance_attrs_all_nodes( + lib_env, + _create_maintenance_unmaintenance_dict(maintenance), + wait + ) + +def _create_standby_unstandby_dict(standby): + return {"standby": "on" if standby else ""} + +def _create_maintenance_unmaintenance_dict(maintenance): + return {"maintenance": "on" if maintenance else ""} + +def _set_instance_attrs_local_node(lib_env, attrs, wait): + if not lib_env.is_cib_live: + # If we are not working with a live cluster we cannot get the local node + # name. 
+        raise LibraryError(reports.live_environment_required_for_local_node())
+
+    with cib_runner_nodes(lib_env, wait) as (cib, runner, state_nodes):
+        update_node_instance_attrs(
+            cib,
+            get_local_node_name(runner),
+            attrs,
+            state_nodes
+        )
+
+def _set_instance_attrs_node_list(lib_env, attrs, node_names, wait):
+    with cib_runner_nodes(lib_env, wait) as (cib, dummy_runner, state_nodes):
+        known_nodes = [node.attrs.name for node in state_nodes]
+        report = []
+        for node in node_names:
+            if node not in known_nodes:
+                report.append(reports.node_not_found(node))
+        if report:
+            raise LibraryError(*report)
+
+        for node in node_names:
+            update_node_instance_attrs(cib, node, attrs, state_nodes)
+
+def _set_instance_attrs_all_nodes(lib_env, attrs, wait):
+    with cib_runner_nodes(lib_env, wait) as (cib, dummy_runner, state_nodes):
+        for node in [node.attrs.name for node in state_nodes]:
+            update_node_instance_attrs(cib, node, attrs, state_nodes)
diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/resource_agent.py pcs-0.9.159/pcs/lib/commands/resource_agent.py
--- pcs-0.9.155+dfsg/pcs/lib/commands/resource_agent.py 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/lib/commands/resource_agent.py 2017-06-30 15:33:01.000000000 +0000
@@ -95,9 +95,20 @@
                 agent_list.append(agent_metadata.get_description_info())
             else:
                 agent_list.append(agent_metadata.get_name_info())
-        except resource_agent.UnableToGetAgentMetadata:
-            # if we cannot get valid metadata, it's not a resource agent and
-            # we don't return it in the list
+        except resource_agent.ResourceAgentError:
+            # We do not return the agent in the list:
+            #
+            # UnableToGetAgentMetadata - if we cannot get valid metadata, it
+            # is not a resource agent.
+            #
+            # InvalidResourceAgentName - an invalid name cannot be used with
+            # a new resource. The list of names comes from "crm_resource"
+            # while pcs does the validation, so the list may contain a name
+            # that pcs does not recognize as valid.
+            #
+            # Emitting a warning is not an option (currently): other
+            # components read this list and do not expect warnings in it,
+            # and using stderr to separate warnings is currently difficult.
pass return agent_list @@ -111,5 +122,6 @@ lib_env.report_processor, lib_env.cmd_runner(), agent_name, + absent_agent_supported=False ) return agent.get_full_info() diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/resource.py pcs-0.9.159/pcs/lib/commands/resource.py --- pcs-0.9.155+dfsg/pcs/lib/commands/resource.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/resource.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,798 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from contextlib import contextmanager +from functools import partial + +from pcs.common import report_codes +from pcs.lib import reports +from pcs.lib.cib import resource +from pcs.lib.cib.resource import operations, remote_node, guest_node +from pcs.lib.cib.tools import ( + find_element_by_tag_and_id, + get_resources, + IdProvider, +) +from pcs.lib.env_tools import get_nodes +from pcs.lib.errors import LibraryError +from pcs.lib.pacemaker.values import validate_id +from pcs.lib.pacemaker.state import ( + ensure_resource_state, + info_resource_state, + is_resource_managed, + ResourceNotFound, +) +from pcs.lib.resource_agent import( + find_valid_resource_agent_by_name as get_agent +) + +@contextmanager +def resource_environment( + env, + wait=False, + wait_for_resource_ids=None, + resource_state_reporter=info_resource_state, + required_cib_version=None +): + env.ensure_wait_satisfiable(wait) + cib = env.get_cib(required_cib_version) + yield get_resources(cib) + env.push_cib(cib, wait) + if wait is not False and wait_for_resource_ids: + state = env.get_cluster_state() + env.report_processor.process_list([ + resource_state_reporter(state, res_id) + for res_id in wait_for_resource_ids + ]) + +def _ensure_disabled_after_wait(disabled_after_wait): + def inner(state, resource_id): + return ensure_resource_state( + not disabled_after_wait, + state, + resource_id + ) + return inner + +def _validate_remote_connection( + resource_agent, nodes_to_validate_against, resource_id, instance_attributes, + allow_not_suitable_command +): + if resource_agent.get_name() != remote_node.AGENT_NAME.full_name: + return [] + + report_list = [] + report_list.append( + reports.get_problem_creator( + report_codes.FORCE_NOT_SUITABLE_COMMAND, + allow_not_suitable_command + )(reports.use_command_node_add_remote) + ) + + report_list.extend( + remote_node.validate_host_not_conflicts( + nodes_to_validate_against, + resource_id, + instance_attributes + ) + ) + return report_list + +def _validate_guest_change( + tree, nodes_to_validate_against, meta_attributes, + allow_not_suitable_command, detect_remove=False +): + if not guest_node.is_node_name_in_options(meta_attributes): + return [] + + node_name = guest_node.get_node_name_from_options(meta_attributes) + + report_list = [] + create_report = reports.use_command_node_add_guest + if detect_remove and not guest_node.get_guest_option_value(meta_attributes): + create_report = reports.use_command_node_remove_guest + + report_list.append( + reports.get_problem_creator( + report_codes.FORCE_NOT_SUITABLE_COMMAND, + allow_not_suitable_command + )(create_report) + ) + + report_list.extend( + guest_node.validate_conflicts( + tree, + nodes_to_validate_against, + node_name, + meta_attributes + ) + ) + + return report_list + +def _get_nodes_to_validate_against(env, tree): + if not env.is_corosync_conf_live and env.is_cib_live: + raise LibraryError( + reports.live_environment_required(["COROSYNC_CONF"]) + ) + + if not env.is_cib_live and 
env.is_corosync_conf_live:
+        # we do not try to get corosync.conf from the live cluster when the
+        # cib is not taken from the live cluster
+        return get_nodes(tree=tree)
+
+    return get_nodes(env.get_corosync_conf(), tree)
+
+
+def _check_special_cases(
+    env, resource_agent, resources_section, resource_id, meta_attributes,
+    instance_attributes, allow_not_suitable_command
+):
+    if(
+        resource_agent.get_name() != remote_node.AGENT_NAME.full_name
+        and
+        not guest_node.is_node_name_in_options(meta_attributes)
+    ):
+        # if no special case applies, we do not need to care about
+        # corosync.conf, which is needed for getting nodes to validate against
+        return
+
+    nodes_to_validate_against = _get_nodes_to_validate_against(
+        env,
+        resources_section
+    )
+
+    report_list = []
+    report_list.extend(_validate_remote_connection(
+        resource_agent,
+        nodes_to_validate_against,
+        resource_id,
+        instance_attributes,
+        allow_not_suitable_command,
+    ))
+    report_list.extend(_validate_guest_change(
+        resources_section,
+        nodes_to_validate_against,
+        meta_attributes,
+        allow_not_suitable_command,
+    ))
+
+    env.report_processor.process_list(report_list)
+
+def create(
+    env, resource_id, resource_agent_name,
+    operations, meta_attributes, instance_attributes,
+    allow_absent_agent=False,
+    allow_invalid_operation=False,
+    allow_invalid_instance_attributes=False,
+    use_default_operations=True,
+    ensure_disabled=False,
+    wait=False,
+    allow_not_suitable_command=False,
+):
+    """
+    Create resource in a cib.
+
+    LibraryEnvironment env provides everything for communication with externals
+    string resource_id is an identifier of the resource
+    string resource_agent_name contains a name identifying the agent
+    list of dict operations contains attributes for each entered operation
+    dict meta_attributes contains attributes for primitive/meta_attributes
+    dict instance_attributes contains attributes for
+        primitive/instance_attributes
+    bool allow_absent_agent is a flag for allowing an agent that is not
+        installed in the system
+    bool allow_invalid_operation is a flag for allowing operations that are
+        not listed in the resource agent metadata
+    bool allow_invalid_instance_attributes is a flag for allowing instance
+        attributes that are not listed in the resource agent metadata or for
+        omitting instance attributes required by the metadata
+    bool use_default_operations is a flag controlling whether default cib
+        operations (specified in a resource agent) are added
+    bool ensure_disabled is a flag that keeps the resource in target-role
+        "Stopped"
+    mixed wait is a flag for controlling waiting for the pacemaker idle
+        mechanism
+    bool allow_not_suitable_command -- flag for FORCE_NOT_SUITABLE_COMMAND
+    """
+    resource_agent = get_agent(
+        env.report_processor,
+        env.cmd_runner(),
+        resource_agent_name,
+        allow_absent_agent,
+    )
+    with resource_environment(
+        env,
+        wait,
+        [resource_id],
+        _ensure_disabled_after_wait(
+            ensure_disabled
+            or
+            resource.common.are_meta_disabled(meta_attributes)
+        )
+    ) as resources_section:
+        _check_special_cases(
+            env,
+            resource_agent,
+            resources_section,
+            resource_id,
+            meta_attributes,
+            instance_attributes,
+            allow_not_suitable_command
+        )
+
+        primitive_element = resource.primitive.create(
+            env.report_processor, resources_section,
+            resource_id, resource_agent,
+            operations, meta_attributes, instance_attributes,
+            allow_invalid_operation,
+            allow_invalid_instance_attributes,
+            use_default_operations,
+        )
+        if ensure_disabled:
+            resource.common.disable(primitive_element)
+
+def _create_as_clone_common(
+    tag, env, resource_id, resource_agent_name,
+    operations, meta_attributes, instance_attributes, clone_meta_options,
+    allow_absent_agent=False,
+    allow_invalid_operation=False,
+    allow_invalid_instance_attributes=False,
+    use_default_operations=True,
+    ensure_disabled=False,
+    wait=False,
+    allow_not_suitable_command=False,
+):
+    """
+    Create a resource in some kind of clone (clone or master).
+
+    Currently the only difference between the commands "create_as_clone" and
+    "create_as_master" is the tag, so both commands are created by passing
+    the tag with partial.
+
+    string tag is any clone tag. Currently it can be "clone" or "master".
+    LibraryEnvironment env provides everything for communication with externals
+    string resource_id is an identifier of the resource
+    string resource_agent_name contains a name identifying the agent
+    list of dict operations contains attributes for each entered operation
+    dict meta_attributes contains attributes for primitive/meta_attributes
+    dict instance_attributes contains attributes for
+        primitive/instance_attributes
+    dict clone_meta_options contains attributes for clone/meta_attributes
+    bool allow_absent_agent is a flag for allowing an agent that is not
+        installed in the system
+    bool allow_invalid_operation is a flag for allowing operations that are
+        not listed in the resource agent metadata
+    bool allow_invalid_instance_attributes is a flag for allowing instance
+        attributes that are not listed in the resource agent metadata or for
+        omitting instance attributes required by the metadata
+    bool use_default_operations is a flag controlling whether default cib
+        operations (specified in a resource agent) are added
+    bool ensure_disabled is a flag that keeps the resource in target-role
+        "Stopped"
+    mixed wait is a flag for controlling waiting for the pacemaker idle
+        mechanism
+    bool allow_not_suitable_command -- flag for FORCE_NOT_SUITABLE_COMMAND
+    """
+    resource_agent = get_agent(
+        env.report_processor,
+        env.cmd_runner(),
+        resource_agent_name,
+        allow_absent_agent,
+    )
+    with resource_environment(
+        env,
+        wait,
+        [resource_id],
+        _ensure_disabled_after_wait(
+            ensure_disabled
+            or
+            resource.common.are_meta_disabled(meta_attributes)
+            or
+            resource.common.is_clone_deactivated_by_meta(clone_meta_options)
+        )
+    ) as resources_section:
+        _check_special_cases(
+            env,
+            resource_agent,
+            resources_section,
+            resource_id,
+            meta_attributes,
+            instance_attributes,
+            allow_not_suitable_command
+        )
+
+        primitive_element = resource.primitive.create(
+            env.report_processor, resources_section,
+            resource_id, resource_agent,
+            operations, meta_attributes, instance_attributes,
+            allow_invalid_operation,
+            allow_invalid_instance_attributes,
+            use_default_operations,
+        )
+        clone_element = resource.clone.append_new(
+            tag,
+            resources_section,
+            primitive_element,
+            clone_meta_options,
+        )
+        if ensure_disabled:
+            resource.common.disable(clone_element)
+
+def create_in_group(
+    env, resource_id, resource_agent_name, group_id,
+    operations, meta_attributes, instance_attributes,
+    allow_absent_agent=False,
+    allow_invalid_operation=False,
+    allow_invalid_instance_attributes=False,
+    use_default_operations=True,
+    ensure_disabled=False,
+    adjacent_resource_id=None,
+    put_after_adjacent=False,
+    wait=False,
+    allow_not_suitable_command=False,
+):
+    """
+    Create a resource in a cib and put it into a defined group
+
+    LibraryEnvironment env provides everything for communication with externals
+    string resource_id is an identifier of the resource
+    string resource_agent_name contains a name identifying the agent
+    string group_id is an identifier of the group to put the primitive
+        resource in
+    list of dict operations contains attributes for each entered operation
+    dict meta_attributes contains attributes for primitive/meta_attributes
+    dict instance_attributes contains attributes for
+        primitive/instance_attributes
+    bool allow_absent_agent is a flag for allowing an agent that is not
+        installed in the system
+    bool allow_invalid_operation is a flag for allowing operations that are
+        not listed in the resource agent metadata
+    bool allow_invalid_instance_attributes is a flag for allowing instance
+        attributes that are not listed in the resource agent metadata or for
+        omitting instance attributes required by the metadata
+    bool use_default_operations is a flag controlling whether default cib
+        operations (specified in a resource agent) are added
+    bool ensure_disabled is a flag that keeps the resource in target-role
+        "Stopped"
+    string adjacent_resource_id identifies a neighbor of the newly created
+        resource
+    bool put_after_adjacent is a flag controlling whether the newly created
+        resource is put after/before the adjacent resource
+    mixed wait is a flag for controlling waiting for the pacemaker idle
+        mechanism
+    bool allow_not_suitable_command -- flag for FORCE_NOT_SUITABLE_COMMAND
+    """
+    resource_agent = get_agent(
+        env.report_processor,
+        env.cmd_runner(),
+        resource_agent_name,
+        allow_absent_agent,
+    )
+    with resource_environment(
+        env,
+        wait,
+        [resource_id],
+        _ensure_disabled_after_wait(
+            ensure_disabled
+            or
+            resource.common.are_meta_disabled(meta_attributes)
+        )
+    ) as resources_section:
+        _check_special_cases(
+            env,
+            resource_agent,
+            resources_section,
+            resource_id,
+            meta_attributes,
+            instance_attributes,
+            allow_not_suitable_command
+        )
+
+        primitive_element = resource.primitive.create(
+            env.report_processor, resources_section,
+            resource_id, resource_agent,
+            operations, meta_attributes, instance_attributes,
+            allow_invalid_operation,
+            allow_invalid_instance_attributes,
+            use_default_operations,
+        )
+        if ensure_disabled:
+            resource.common.disable(primitive_element)
+        validate_id(group_id, "group name")
+        resource.group.place_resource(
+            resource.group.provide_group(resources_section, group_id),
+            primitive_element,
+            adjacent_resource_id,
+            put_after_adjacent,
+        )
+
+create_as_clone = partial(_create_as_clone_common, resource.clone.TAG_CLONE)
+create_as_master = partial(_create_as_clone_common, resource.clone.TAG_MASTER)
+
+def create_into_bundle(
+    env, resource_id, resource_agent_name,
+    operations, meta_attributes, instance_attributes,
+    bundle_id,
+    allow_absent_agent=False,
+    allow_invalid_operation=False,
+    allow_invalid_instance_attributes=False,
+    use_default_operations=True,
+    ensure_disabled=False,
+    wait=False,
+    allow_not_suitable_command=False,
+):
+    """
+    Create a new resource in a cib and put it into an existing bundle
+
+    LibraryEnvironment env provides everything for communication with externals
+    string resource_id is an identifier of the resource
+    string resource_agent_name contains a name identifying the agent
+    list of dict operations contains attributes for each entered operation
+    dict meta_attributes contains attributes for primitive/meta_attributes
+    dict instance_attributes contains attributes for
+        primitive/instance_attributes
+    string bundle_id is the id of an existing bundle to put the created
+        resource in
+    bool allow_absent_agent is a flag for allowing an agent that is not
+        installed in the system
+    bool allow_invalid_operation is a flag for allowing operations that are
+        not listed in the resource agent metadata
+    bool allow_invalid_instance_attributes is a flag for allowing instance
+        attributes that are not listed in the resource agent metadata or for
+        omitting instance attributes required by the metadata
+    bool use_default_operations is a flag controlling whether default cib
+        operations (specified in a resource agent) are added
+    bool ensure_disabled is a flag that keeps the resource in target-role
+        "Stopped"
+    mixed wait is a flag for controlling waiting for the pacemaker idle
+        mechanism
+    bool allow_not_suitable_command -- flag for FORCE_NOT_SUITABLE_COMMAND
+    """
+    resource_agent = get_agent(
+        env.report_processor,
+        env.cmd_runner(),
+        resource_agent_name,
+        allow_absent_agent,
+    )
+    with resource_environment(
+        env,
+        wait,
+        [resource_id],
+        _ensure_disabled_after_wait(
+            ensure_disabled
+            or
+            resource.common.are_meta_disabled(meta_attributes)
+        ),
+        required_cib_version=(2, 8, 0)
+    ) as resources_section:
+        _check_special_cases(
+            env,
+            resource_agent,
+            resources_section,
+            resource_id,
+            meta_attributes,
+            instance_attributes,
+            allow_not_suitable_command
+        )
+
+        primitive_element = resource.primitive.create(
+            env.report_processor, resources_section,
+            resource_id, resource_agent,
+            operations, meta_attributes, instance_attributes,
+            allow_invalid_operation,
+            allow_invalid_instance_attributes,
+            use_default_operations,
+        )
+        if ensure_disabled:
+            resource.common.disable(primitive_element)
+        resource.bundle.add_resource(
+            find_element_by_tag_and_id(
+                "bundle", resources_section, bundle_id
+            ),
+            primitive_element
+        )
+
+def bundle_create(
+    env, bundle_id, container_type, container_options=None,
+    network_options=None, port_map=None, storage_map=None,
+    meta_attributes=None,
+    force_options=False,
+    ensure_disabled=False,
+    wait=False,
+):
+    """
+    Create a new bundle containing no resources
+
+    LibraryEnvironment env -- provides communication with externals
+    string bundle_id -- id of the new bundle
+    string container_type -- container engine name (docker, lxc...)
+ dict container_options -- container options + dict network_options -- network options + list of dict port_map -- a list of port mapping options + list of dict storage_map -- a list of storage mapping options + dict meta_attributes -- bundle's meta attributes + bool force_options -- return warnings instead of forceable errors + bool ensure_disabled -- set the bundle's target-role to "Stopped" + mixed wait -- False: no wait, None: wait default timeout, int: wait timeout + """ + container_options = container_options or {} + network_options = network_options or {} + port_map = port_map or [] + storage_map = storage_map or [] + meta_attributes = meta_attributes or {} + + with resource_environment( + env, + wait, + [bundle_id], + _ensure_disabled_after_wait( + ensure_disabled + or + resource.common.are_meta_disabled(meta_attributes) + ), + required_cib_version=(2, 8, 0) + ) as resources_section: + # no need to run validations related to remote and guest nodes as those + # nodes can only be created from primitive resources + id_provider = IdProvider(resources_section) + env.report_processor.process_list( + resource.bundle.validate_new( + id_provider, + bundle_id, + container_type, + container_options, + network_options, + port_map, + storage_map, + # TODO meta attributes - there is no validation for now + force_options + ) + ) + bundle_element = resource.bundle.append_new( + resources_section, + id_provider, + bundle_id, + container_type, + container_options, + network_options, + port_map, + storage_map, + meta_attributes + ) + if ensure_disabled: + resource.common.disable(bundle_element) + +def bundle_update( + env, bundle_id, container_options=None, network_options=None, + port_map_add=None, port_map_remove=None, storage_map_add=None, + storage_map_remove=None, meta_attributes=None, + force_options=False, + wait=False, +): + """ + Modify an existing bundle (does not touch encapsulated resources) + + LibraryEnvironment env -- provides communication with externals + string bundle_id -- id of the bundle to modify + dict container_options -- container options to modify + dict network_options -- network options to modify + list of dict port_map_add -- list of port mapping options to add + list of string port_map_remove -- list of port mapping ids to remove + list of dict storage_map_add -- list of storage mapping options to add + list of string storage_map_remove -- list of storage mapping ids to remove + dict meta_attributes -- meta attributes to update + bool force_options -- return warnings instead of forceable errors + mixed wait -- False: no wait, None: wait default timeout, int: wait timeout + """ + container_options = container_options or {} + network_options = network_options or {} + port_map_add = port_map_add or [] + port_map_remove = port_map_remove or [] + storage_map_add = storage_map_add or [] + storage_map_remove = storage_map_remove or [] + meta_attributes = meta_attributes or {} + + with resource_environment( + env, + wait, + [bundle_id], + required_cib_version=(2, 8, 0) + ) as resources_section: + # no need to run validations related to remote and guest nodes as those + # nodes can only be created from primitive resources + id_provider = IdProvider(resources_section) + bundle_element = find_element_by_tag_and_id( + resource.bundle.TAG, + resources_section, + bundle_id + ) + env.report_processor.process_list( + resource.bundle.validate_update( + id_provider, + bundle_element, + container_options, + network_options, + port_map_add, + port_map_remove, + storage_map_add, + 
storage_map_remove, + # TODO meta attributes - there is no validation for now + force_options + ) + ) + resource.bundle.update( + id_provider, + bundle_element, + container_options, + network_options, + port_map_add, + port_map_remove, + storage_map_add, + storage_map_remove, + meta_attributes + ) + +def disable(env, resource_ids, wait): + """ + Disallow specified resource to be started by the cluster + LibraryEnvironment env -- + strings resource_ids -- ids of the resources to be disabled + mixed wait -- False: no wait, None: wait default timeout, int: wait timeout + """ + with resource_environment( + env, wait, resource_ids, _ensure_disabled_after_wait(True) + ) as resources_section: + resource_el_list = _find_resources_or_raise( + resources_section, + resource_ids + ) + env.report_processor.process_list( + _resource_list_enable_disable( + resource_el_list, + resource.common.disable, + env.get_cluster_state() + ) + ) + +def enable(env, resource_ids, wait): + """ + Allow specified resource to be started by the cluster + LibraryEnvironment env -- + strings resource_ids -- ids of the resources to be enabled + mixed wait -- False: no wait, None: wait default timeout, int: wait timeout + """ + with resource_environment( + env, wait, resource_ids, _ensure_disabled_after_wait(False) + ) as resources_section: + resource_el_list = _find_resources_or_raise( + resources_section, + resource_ids, + resource.common.find_resources_to_enable + ) + env.report_processor.process_list( + _resource_list_enable_disable( + resource_el_list, + resource.common.enable, + env.get_cluster_state() + ) + ) + +def _resource_list_enable_disable(resource_el_list, func, cluster_state): + report_list = [] + for resource_el in resource_el_list: + res_id = resource_el.attrib["id"] + try: + if not is_resource_managed(cluster_state, res_id): + report_list.append(reports.resource_is_unmanaged(res_id)) + func(resource_el) + except ResourceNotFound: + report_list.append( + reports.id_not_found( + res_id, + id_description="resource/clone/master/group/bundle" + ) + ) + return report_list + +def unmanage(env, resource_ids, with_monitor=False): + """ + Set specified resources not to be managed by the cluster + LibraryEnvironment env -- + strings resource_ids -- ids of the resources to become unmanaged + bool with_monitor -- disable resources' monitor operations + """ + with resource_environment(env) as resources_section: + resource_el_list = _find_resources_or_raise( + resources_section, + resource_ids, + resource.common.find_resources_to_unmanage + ) + primitives = [] + + for resource_el in resource_el_list: + resource.common.unmanage(resource_el) + if with_monitor: + primitives.extend( + resource.common.find_primitives(resource_el) + ) + + for resource_el in set(primitives): + for op in operations.get_resource_operations( + resource_el, + ["monitor"] + ): + operations.disable(op) + +def manage(env, resource_ids, with_monitor=False): + """ + Set specified resource to be managed by the cluster + LibraryEnvironment env -- + strings resource_ids -- ids of the resources to become managed + bool with_monitor -- enable resources' monitor operations + """ + with resource_environment(env) as resources_section: + report_list = [] + resource_el_list = _find_resources_or_raise( + resources_section, + resource_ids, + resource.common.find_resources_to_manage + ) + primitives = [] + + for resource_el in resource_el_list: + resource.common.manage(resource_el) + primitives.extend( + resource.common.find_primitives(resource_el) + ) + + for 
resource_el in sorted( + set(primitives), + key=lambda element: element.get("id", "") + ): + op_list = operations.get_resource_operations( + resource_el, + ["monitor"] + ) + if with_monitor: + for op in op_list: + operations.enable(op) + else: + monitor_enabled = False + for op in op_list: + if operations.is_enabled(op): + monitor_enabled = True + break + if op_list and not monitor_enabled: + # do not advise enabling monitors if there are none defined + report_list.append( + reports.resource_managed_no_monitor_enabled( + resource_el.get("id", "") + ) + ) + + env.report_processor.process_list(report_list) + +def _find_resources_or_raise( + resources_section, resource_ids, additional_search=None +): + if not additional_search: + additional_search = lambda x: [x] + report_list = [] + resource_el_list = [] + resource_tags = ( + resource.clone.ALL_TAGS + + + [resource.group.TAG, resource.primitive.TAG, resource.bundle.TAG] + ) + for res_id in resource_ids: + try: + resource_el_list.extend( + additional_search( + find_element_by_tag_and_id( + resource_tags, + resources_section, + res_id, + id_description="resource/clone/master/group/bundle" + ) + ) + ) + except LibraryError as e: + report_list.extend(e.args) + if report_list: + raise LibraryError(*report_list) + return resource_el_list diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/sbd.py pcs-0.9.159/pcs/lib/commands/sbd.py --- pcs-0.9.155+dfsg/pcs/lib/commands/sbd.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/sbd.py 2017-06-30 15:33:01.000000000 +0000 @@ -33,6 +33,11 @@ NodeAddressesList, NodeNotFound ) +from pcs.lib.validate import ( + names_in, + run_collection_of_option_validators, + value_nonnegative_integer, +) def _validate_sbd_options(sbd_config, allow_unknown_opts=False): @@ -46,7 +51,7 @@ report_item_list = [] unsupported_sbd_option_list = [ - "SBD_WATCHDOG_DEV", "SBD_OPTS", "SBD_PACEMAKER" + "SBD_WATCHDOG_DEV", "SBD_OPTS", "SBD_PACEMAKER", "SBD_DEVICE" ] allowed_sbd_options = [ "SBD_DELAY_START", "SBD_STARTMODE", "SBD_WATCHDOG_TIMEOUT" @@ -54,12 +59,12 @@ for sbd_opt in sbd_config: if sbd_opt in unsupported_sbd_option_list: report_item_list.append(reports.invalid_option( - sbd_opt, allowed_sbd_options, None + [sbd_opt], allowed_sbd_options, None )) elif sbd_opt not in allowed_sbd_options: report_item_list.append(reports.invalid_option( - sbd_opt, + [sbd_opt], allowed_sbd_options, None, Severities.WARNING if allow_unknown_opts else Severities.ERROR, @@ -69,7 +74,7 @@ report_item = reports.invalid_option_value( "SBD_WATCHDOG_TIMEOUT", sbd_config["SBD_WATCHDOG_TIMEOUT"], - "nonnegative integer" + "a non-negative integer" ) try: if int(sbd_config["SBD_WATCHDOG_TIMEOUT"]) < 0: @@ -80,38 +85,89 @@ return report_item_list -def _get_full_watchdog_list(node_list, default_watchdog, watchdog_dict): +def _validate_watchdog_dict(watchdog_dict): """ - Validate if all nodes in watchdog_dict does exist and returns dictionary - where keys are nodes and value is corresponding watchdog. - Raises LibraryError if any of nodes doesn't belong to cluster. + Validates if all watchdogs are specified by absolute path. + Returns list of ReportItem. 
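+
+    For illustration (node is a NodeAddresses instance, paths are
+    hypothetical): {node: "/dev/watchdog"} produces no reports, while
+    {node: "wd0"} or an empty value yields an invalid_watchdog_path report.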
-    node_list -- NodeAddressesList
-    default_watchdog -- watchdog for nodes which are not specified
-        in watchdog_dict
-    watchdog_dict -- dictionary with node names as keys and value as watchdog
+    watchdog_dict -- dictionary with NodeAddresses as keys and value as watchdog
     """
-    full_dict = dict([(node, default_watchdog) for node in node_list])
-    report_item_list = []
+    return [
+        reports.invalid_watchdog_path(watchdog)
+        for watchdog in watchdog_dict.values()
+        if not watchdog or not os.path.isabs(watchdog)
+    ]
+

-    for node_name, watchdog in watchdog_dict.items():
-        if not watchdog or not os.path.isabs(watchdog):
-            report_item_list.append(reports.invalid_watchdog_path(watchdog))
+def _validate_device_dict(node_device_dict):
+    """
+    Validates the device list for each node: there has to be at least one
+    device and at most settings.sbd_max_device_num devices, and every device
+    has to be specified by an absolute path.
+    Returns list of ReportItem
+
+    node_device_dict -- dictionary with NodeAddresses as keys and lists of
+        devices as values
+    """
+    report_item_list = []
+    for node, device_list in node_device_dict.items():
+        if not device_list:
+            report_item_list.append(
+                reports.sbd_no_device_for_node(node.label)
+            )
             continue
+        elif len(device_list) > settings.sbd_max_device_num:
+            report_item_list.append(reports.sbd_too_many_devices_for_node(
+                node.label, device_list, settings.sbd_max_device_num
+            ))
+            continue
+        for device in device_list:
+            if not device or not os.path.isabs(device):
+                report_item_list.append(
+                    reports.sbd_device_path_not_absolute(device, node.label)
+                )
+
+    return report_item_list
+
+
+def _check_node_names_in_cluster(node_list, node_name_list):
+    """
+    Check whether all node names from node_name_list exist in node_list.
+    Returns list of ReportItem
+
+    node_list -- NodeAddressesList
+    node_name_list -- list of strings
+    """
+    not_existing_node_set = set()
+    for node_name in node_name_list:
         try:
-            full_dict[node_list.find_by_label(node_name)] = watchdog
+            node_list.find_by_label(node_name)
         except NodeNotFound:
-            report_item_list.append(reports.node_not_found(node_name))
+            not_existing_node_set.add(node_name)
+
+    return [reports.node_not_found(node) for node in not_existing_node_set]

-    if report_item_list:
-        raise LibraryError(*report_item_list)
-    return full_dict

+def _get_full_node_dict(node_list, node_value_dict, default_value):
+    """
+    Returns a dictionary where keys are NodeAddresses of all nodes in the
+    cluster and the value is obtained from node_value_dict by node name, or
+    default_value if the node name is not specified in node_value_dict.
+
+    node_list -- NodeAddressesList
+    node_value_dict -- dictionary, keys: node names, values: some value
+    default_value -- some default value
+    """
+    return dict([
+        (node, node_value_dict.get(node.label, default_value))
+        for node in node_list
+    ])
 
 def enable_sbd(
-    lib_env, default_watchdog, watchdog_dict, sbd_options,
-    allow_unknown_opts=False, ignore_offline_nodes=False
+    lib_env, default_watchdog, watchdog_dict, sbd_options,
+    default_device_list=None, node_device_dict=None, allow_unknown_opts=False,
+    ignore_offline_nodes=False,
 ):
     """
     Enable SBD on all nodes in cluster.
@@ -119,44 +175,64 @@
     lib_env -- LibraryEnvironment
     default_watchdog -- watchdog for nodes which are not specified in
         watchdog_dict. Uses default value from settings if None.
- watchdog_dict -- dictionary with NodeAddresses as keys and watchdog path + watchdog_dict -- dictionary with node names as keys and watchdog path as value sbd_options -- dictionary in format: : + default_device_list -- list of devices for all nodes + node_device_dict -- dictionary with node names as keys and list of devices + as value allow_unknown_opts -- if True, accept also unknown options. ignore_offline_nodes -- if True, omit offline nodes """ node_list = _get_cluster_nodes(lib_env) - + using_devices = not ( + default_device_list is None and node_device_dict is None + ) + if default_device_list is None: + default_device_list = [] + if node_device_dict is None: + node_device_dict = {} if not default_watchdog: default_watchdog = settings.sbd_watchdog_default + sbd_options = dict([(opt.upper(), val) for opt, val in sbd_options.items()]) - # input validation begin - full_watchdog_dict = _get_full_watchdog_list( - node_list, default_watchdog, watchdog_dict + full_watchdog_dict = _get_full_node_dict( + node_list, watchdog_dict, default_watchdog + ) + full_device_dict = _get_full_node_dict( + node_list, node_device_dict, default_device_list ) - # config validation - sbd_options = dict([(opt.upper(), val) for opt, val in sbd_options.items()]) lib_env.report_processor.process_list( + _check_node_names_in_cluster( + node_list, watchdog_dict.keys() + node_device_dict.keys() + ) + + + _validate_watchdog_dict(full_watchdog_dict) + + + _validate_device_dict(full_device_dict) if using_devices else [] + + _validate_sbd_options(sbd_options, allow_unknown_opts) ) - # check nodes status online_nodes = _get_online_nodes(lib_env, node_list, ignore_offline_nodes) - for node in list(full_watchdog_dict): - if node not in online_nodes: - full_watchdog_dict.pop(node, None) - # input validation end + + node_data_dict = {} + for node in online_nodes: + node_data_dict[node] = { + "watchdog": full_watchdog_dict[node], + "device_list": full_device_dict[node] if using_devices else [], + } # check if SBD can be enabled sbd.check_sbd_on_all_nodes( lib_env.report_processor, lib_env.node_communicator(), - full_watchdog_dict + node_data_dict, ) # enable ATB if needed - if not lib_env.is_cman_cluster: + if not lib_env.is_cman_cluster and not using_devices: corosync_conf = lib_env.get_corosync_conf() if sbd.atb_has_to_be_enabled_pre_enable_check(corosync_conf): lib_env.report_processor.process(reports.sbd_requires_atb()) @@ -173,7 +249,8 @@ lib_env.node_communicator(), online_nodes, config, - full_watchdog_dict + full_watchdog_dict, + full_device_dict, ) # remove cluster prop 'stonith_watchdog_timeout' @@ -280,9 +357,11 @@ def get_sbd_status(node): try: status_list.append({ - "node": node, + "node": node.label, "status": json.loads( - sbd.check_sbd(lib_env.node_communicator(), node, "") + # here we just need info about sbd service, + # therefore watchdog and device list is empty + sbd.check_sbd(lib_env.node_communicator(), node, "", []) )["sbd"] }) successful_node_list.append(node) @@ -307,7 +386,7 @@ for node in node_list: if node not in successful_node_list: status_list.append({ - "node": node, + "node": node.label, "status": { "installed": None, "enabled": None, @@ -341,7 +420,7 @@ def get_sbd_config(node): try: config_list.append({ - "node": node, + "node": node.label, "config": environment_file_to_dict( sbd.get_sbd_config(lib_env.node_communicator(), node) ) @@ -373,13 +452,18 @@ for node in node_list: if node not in successful_node_list: config_list.append({ - "node": node, + "node": node.label, "config": None }) 
     return config_list
 
 
 def get_local_sbd_config(lib_env):
+    """
+    Returns local SBD config as a dictionary.
+
+    lib_env -- LibraryEnvironment
+    """
     return environment_file_to_dict(sbd.get_local_sbd_config())
 
 
@@ -389,3 +473,104 @@
     else:
         return lib_env.get_corosync_conf().get_nodes()
+
+def initialize_block_devices(lib_env, device_list, option_dict):
+    """
+    Initialize SBD devices in device_list with option_dict.
+
+    lib_env -- LibraryEnvironment
+    device_list -- list of strings
+    option_dict -- dictionary
+    """
+    report_item_list = []
+    if not device_list:
+        report_item_list.append(reports.required_option_is_missing(["device"]))
+
+    supported_options = sbd.DEVICE_INITIALIZATION_OPTIONS_MAPPING.keys()
+
+    report_item_list += names_in(supported_options, option_dict.keys())
+    validator_list = [
+        value_nonnegative_integer(key)
+        for key in supported_options
+    ]
+
+    report_item_list += run_collection_of_option_validators(
+        option_dict, validator_list
+    )
+
+    lib_env.report_processor.process_list(report_item_list)
+    sbd.initialize_block_devices(
+        lib_env.report_processor, lib_env.cmd_runner(), device_list, option_dict
+    )
+
+
+def get_local_devices_info(lib_env, dump=False):
+    """
+    Returns a list of local devices info in the format:
+    {
+        "device": <device path>,
+        "list": <output of the "sbd list" command for the device>,
+        "dump": <output of the "sbd dump" command for the device> if dump
+            is True, None otherwise
+    }
+    If sbd is not enabled, an empty list will be returned.
+
+    lib_env -- LibraryEnvironment
+    dump -- if True returns also output of command 'sbd dump'
+    """
+    if not sbd.is_sbd_enabled(lib_env.cmd_runner()):
+        return []
+    device_list = sbd.get_local_sbd_device_list()
+    report_item_list = []
+    output = []
+    for device in device_list:
+        obj = {
+            "device": device,
+            "list": None,
+            "dump": None,
+        }
+        try:
+            obj["list"] = sbd.get_device_messages_info(
+                lib_env.cmd_runner(), device
+            )
+            if dump:
+                obj["dump"] = sbd.get_device_sbd_header_dump(
+                    lib_env.cmd_runner(), device
+                )
+        except LibraryError as e:
+            report_item_list += e.args
+
+        output.append(obj)
+
+    for report_item in report_item_list:
+        report_item.severity = Severities.WARNING
+    lib_env.report_processor.process_list(report_item_list)
+    return output
+
+
+def set_message(lib_env, device, node_name, message):
+    """
+    Set message on device for node_name.
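+
+    A minimal usage sketch (device, node name and message are hypothetical;
+    the message must be one of settings.sbd_message_types):
+        set_message(lib_env, "/dev/sdb1", "node1", "test")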
+
+    lib_env -- LibraryEnvironment
+    device -- string, absolute path to device
+    node_name -- string
+    message -- string, message type, should be one of
+        settings.sbd_message_types
+    """
+    report_item_list = []
+    missing_options = []
+    if not device:
+        missing_options.append("device")
+    if not node_name:
+        missing_options.append("node")
+    if missing_options:
+        report_item_list.append(
+            reports.required_option_is_missing(missing_options)
+        )
+    supported_messages = settings.sbd_message_types
+    if message not in supported_messages:
+        report_item_list.append(
+            reports.invalid_option_value("message", message, supported_messages)
+        )
+    lib_env.report_processor.process_list(report_item_list)
+    sbd.set_message(lib_env.cmd_runner(), device, node_name, message)
+
diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/stonith_agent.py pcs-0.9.159/pcs/lib/commands/stonith_agent.py
--- pcs-0.9.155+dfsg/pcs/lib/commands/stonith_agent.py 2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/lib/commands/stonith_agent.py 2017-06-30 15:33:01.000000000 +0000
@@ -7,7 +7,6 @@
 
 from pcs.lib import resource_agent
 from pcs.lib.commands.resource_agent import _complete_agent_list
-from pcs.lib.errors import LibraryError
 
 
 def list_agents(lib_env, describe=True, search=None):
@@ -32,14 +31,10 @@
     Get agent's description (metadata) in a structure
     string agent_name name of the agent (not containing "stonith:" prefix)
     """
-    try:
-        metadata = resource_agent.StonithAgent(
-            lib_env.cmd_runner(),
-            agent_name
-        )
-        return metadata.get_full_info()
-    except resource_agent.ResourceAgentError as e:
-        raise LibraryError(
-            resource_agent.resource_agent_error_to_report_item(e)
-        )
-
+    agent = resource_agent.find_valid_stonith_agent_by_name(
+        lib_env.report_processor,
+        lib_env.cmd_runner(),
+        agent_name,
+        absent_agent_supported=False
+    )
+    return agent.get_full_info()
diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/stonith.py pcs-0.9.159/pcs/lib/commands/stonith.py
--- pcs-0.9.155+dfsg/pcs/lib/commands/stonith.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/commands/stonith.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,148 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from pcs.lib.resource_agent import find_valid_stonith_agent_by_name as get_agent
+from pcs.lib.cib import resource
+from pcs.lib.cib.resource.common import are_meta_disabled
+from pcs.lib.pacemaker.values import validate_id
+from pcs.lib.commands.resource import resource_environment
+
+def create(
+    env, stonith_id, stonith_agent_name,
+    operations, meta_attributes, instance_attributes,
+    allow_absent_agent=False,
+    allow_invalid_operation=False,
+    allow_invalid_instance_attributes=False,
+    use_default_operations=True,
+    ensure_disabled=False,
+    wait=False,
+):
+    """
+    Create stonith as a resource in a cib.
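+
+    A minimal usage sketch (the id and agent name are hypothetical):
+        create(env, "fence-node1", "fence_xvm", operations=[],
+            meta_attributes={}, instance_attributes={})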
+
+    LibraryEnvironment env provides everything for communication with externals
+    string stonith_id is an identifier of the stonith resource
+    string stonith_agent_name contains a name identifying the agent
+    list of dict operations contains attributes for each entered operation
+    dict meta_attributes contains attributes for primitive/meta_attributes
+    dict instance_attributes contains attributes for
+        primitive/instance_attributes
+    bool allow_absent_agent is a flag for allowing an agent that is not
+        installed in the system
+    bool allow_invalid_operation is a flag for allowing operations that are
+        not listed in the stonith agent metadata
+    bool allow_invalid_instance_attributes is a flag for allowing instance
+        attributes that are not listed in the stonith agent metadata or for
+        omitting instance attributes required by the metadata
+    bool use_default_operations is a flag controlling whether default cib
+        operations (specified in a stonith agent) are added
+    bool ensure_disabled is a flag that keeps the resource in target-role
+        "Stopped"
+    mixed wait is a flag for controlling waiting for the pacemaker idle
+        mechanism
+    """
+    stonith_agent = get_agent(
+        env.report_processor,
+        env.cmd_runner(),
+        stonith_agent_name,
+        allow_absent_agent,
+    )
+    if stonith_agent.get_provides_unfencing():
+        meta_attributes["provides"] = "unfencing"
+
+    with resource_environment(
+        env,
+        wait,
+        stonith_id,
+        ensure_disabled or are_meta_disabled(meta_attributes),
+    ) as resources_section:
+        stonith_element = resource.primitive.create(
+            env.report_processor,
+            resources_section,
+            stonith_id,
+            stonith_agent,
+            raw_operation_list=operations,
+            meta_attributes=meta_attributes,
+            instance_attributes=instance_attributes,
+            allow_invalid_operation=allow_invalid_operation,
+            allow_invalid_instance_attributes=allow_invalid_instance_attributes,
+            use_default_operations=use_default_operations,
+            resource_type="stonith"
+        )
+        if ensure_disabled:
+            resource.common.disable(stonith_element)
+
+def create_in_group(
+    env, stonith_id, stonith_agent_name, group_id,
+    operations, meta_attributes, instance_attributes,
+    allow_absent_agent=False,
+    allow_invalid_operation=False,
+    allow_invalid_instance_attributes=False,
+    use_default_operations=True,
+    ensure_disabled=False,
+    adjacent_resource_id=None,
+    put_after_adjacent=False,
+    wait=False,
+):
+    """
+    Create stonith as a resource in a cib and put it into a defined group.
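+
+    A minimal usage sketch (the ids and agent name are hypothetical):
+        create_in_group(env, "fence-node1", "fence_xvm", "fencing-group",
+            operations=[], meta_attributes={}, instance_attributes={})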
+
+    LibraryEnvironment env provides everything for communication with externals
+    string stonith_id is an identifier of the stonith resource
+    string stonith_agent_name contains a name identifying the agent
+    string group_id is an identifier of the group to put the stonith in
+    list of dict operations contains attributes for each entered operation
+    dict meta_attributes contains attributes for primitive/meta_attributes
+    dict instance_attributes contains attributes for
+        primitive/instance_attributes
+    bool allow_absent_agent is a flag for allowing an agent that is not
+        installed in the system
+    bool allow_invalid_operation is a flag for allowing operations that are
+        not listed in the stonith agent metadata
+    bool allow_invalid_instance_attributes is a flag for allowing instance
+        attributes that are not listed in the stonith agent metadata or for
+        omitting instance attributes required by the metadata
+    bool use_default_operations is a flag controlling whether default cib
+        operations (specified in a stonith agent) are added
+    bool ensure_disabled is a flag that keeps the resource in target-role
+        "Stopped"
+    string adjacent_resource_id identifies a neighbor of the newly created
+        stonith
+    bool put_after_adjacent is a flag controlling whether the newly created
+        resource is put after/before the adjacent stonith
+    mixed wait is a flag for controlling waiting for the pacemaker idle
+        mechanism
+    """
+    stonith_agent = get_agent(
+        env.report_processor,
+        env.cmd_runner(),
+        stonith_agent_name,
+        allow_absent_agent,
+    )
+    if stonith_agent.get_provides_unfencing():
+        meta_attributes["provides"] = "unfencing"
+
+    with resource_environment(
+        env,
+        wait,
+        stonith_id,
+        ensure_disabled or are_meta_disabled(meta_attributes),
+    ) as resources_section:
+        stonith_element = resource.primitive.create(
+            env.report_processor, resources_section,
+            stonith_id, stonith_agent,
+            operations, meta_attributes, instance_attributes,
+            allow_invalid_operation,
+            allow_invalid_instance_attributes,
+            use_default_operations,
+        )
+        if ensure_disabled:
+            resource.common.disable(stonith_element)
+        validate_id(group_id, "group name")
+        resource.group.place_resource(
+            resource.group.provide_group(resources_section, group_id),
+            stonith_element,
+            adjacent_resource_id,
+            put_after_adjacent,
+        )
diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/common.py pcs-0.9.159/pcs/lib/commands/test/resource/common.py
--- pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/common.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/commands/test/resource/common.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,76 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+import logging
+
+import pcs.lib.commands.test.resource.fixture as fixture
+from pcs.lib.env import LibraryEnvironment
+from pcs.test.tools.custom_mock import MockLibraryReportProcessor
+from pcs.test.tools.integration_lib import Runner
+from pcs.test.tools.misc import get_test_resource as rc
+from pcs.test.tools.pcs_unittest import TestCase, mock
+
+class CommonResourceTest(TestCase):
+    @classmethod
+    def setUpClass(cls):
+        cls.runner = Runner()
+        cls.patcher = mock.patch.object(
+            LibraryEnvironment,
+            "cmd_runner",
+            lambda self: cls.runner
+        )
+        cls.patcher.start()
+
+        cls.patcher_corosync = mock.patch.object(
+            LibraryEnvironment,
+            "get_corosync_conf_data",
+            lambda self: open(rc("corosync.conf")).read()
+        )
+        cls.patcher_corosync.start()
+
+    @classmethod
+    def tearDownClass(cls):
+        cls.patcher.stop()
+
cls.patcher_corosync.stop() + + def setUp(self): + self.env = LibraryEnvironment( + mock.MagicMock(logging.Logger), + MockLibraryReportProcessor() + ) + self.cib_base_file = "cib-empty.xml" + + +class ResourceWithoutStateTest(CommonResourceTest): + def assert_command_effect(self, cib_pre, cmd, cib_post, reports=None): + self.runner.set_runs( + fixture.calls_cib( + cib_pre, + cib_post, + cib_base_file=self.cib_base_file + ) + ) + cmd() + self.env.report_processor.assert_reports(reports if reports else []) + self.runner.assert_everything_launched() + + +class ResourceWithStateTest(CommonResourceTest): + def assert_command_effect( + self, cib_pre, status, cmd, cib_post, reports=None + ): + self.runner.set_runs( + fixture.calls_cib_and_status( + cib_pre, + status, + cib_post, + cib_base_file=self.cib_base_file + ) + ) + cmd() + self.env.report_processor.assert_reports(reports if reports else []) + self.runner.assert_everything_launched() diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/fixture.py pcs-0.9.159/pcs/lib/commands/test/resource/fixture.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/fixture.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/resource/fixture.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,201 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree + +from pcs.common import report_codes +from pcs.lib.errors import ReportItemSeverity as severities +from pcs.test.tools.integration_lib import Call +from pcs.test.tools.misc import get_test_resource as rc +from pcs.test.tools.xml import etree_to_str + + +def call_cib_load(cib): + return [ + Call("cibadmin --local --query", cib), + ] + +def call_cib_push(cib): + return [ + Call( + "cibadmin --replace --verbose --xml-pipe --scope configuration", + check_stdin=Call.create_check_stdin_xml(cib) + ), + ] + +def call_cib_upgrade(): + return [ + Call("cibadmin --upgrade --force"), + ] + +def call_status(status): + return [ + Call("/usr/sbin/crm_mon --one-shot --as-xml --inactive", status), + ] + +def call_wait_supported(): + return [ + Call("crm_resource -?", "--wait"), + ] + +def call_wait(timeout, retval=0, stderr=""): + return [ + Call( + "crm_resource --wait --timeout={0}".format(timeout), + stderr=stderr, + returncode=retval + ), + ] + +def call_dummy_metadata(): + return [ + Call( + "crm_resource --show-metadata ocf:heartbeat:Dummy", + open(rc("resource_agent_ocf_heartbeat_dummy.xml")).read() + ), + ] + +def calls_cib(cib_pre, cib_post, cib_base_file=None): + return ( + call_cib_load(cib_resources(cib_pre, cib_base_file=cib_base_file)) + + + call_cib_push(cib_resources(cib_post, cib_base_file=cib_base_file)) + ) + +def calls_cib_and_status(cib_pre, status, cib_post, cib_base_file=None): + return ( + call_cib_load(cib_resources(cib_pre, cib_base_file=cib_base_file)) + + + call_status(state_complete(status)) + + + call_cib_push(cib_resources(cib_post, cib_base_file=cib_base_file)) + ) + +def calls_cib_load_and_upgrade(cib_old_version): + return ( + call_cib_load(cib_resources(cib_old_version)) + + + call_cib_upgrade() + ) + + + +def cib_resources(cib_resources_xml, cib_base_file=None): + cib_xml = open(rc(cib_base_file or "cib-empty.xml")).read() + cib = etree.fromstring(cib_xml) + resources_section = cib.find(".//resources") + for child in etree.fromstring(cib_resources_xml): + resources_section.append(child) + return etree_to_str(cib) + + +def state_complete(resource_status_xml): + status = 
etree.parse(rc("crm_mon.minimal.xml")).getroot()
+    resource_status = etree.fromstring(resource_status_xml)
+    for resource in resource_status.xpath(".//resource"):
+        _default_element_attributes(
+            resource,
+            {
+                "active": "true",
+                "managed": "true",
+                "failed": "false",
+                "failure_ignored": "false",
+                "nodes_running_on": "1",
+                "orphaned": "false",
+                "resource_agent": "ocf::heartbeat:Dummy",
+                "role": "Started",
+            }
+        )
+    for clone in resource_status.xpath(".//clone"):
+        _default_element_attributes(
+            clone,
+            {
+                "failed": "false",
+                "failure_ignored": "false",
+            }
+        )
+    for bundle in resource_status.xpath(".//bundle"):
+        _default_element_attributes(
+            bundle,
+            {
+                "type": "docker",
+                "image": "image:name",
+                "unique": "false",
+                "failed": "false",
+            }
+        )
+    status.append(resource_status)
+    return etree_to_str(status)
+
+def _default_element_attributes(element, default_attributes):
+    for name, value in default_attributes.items():
+        if name not in element.attrib:
+            element.attrib[name] = value
+
+
+def report_not_found(res_id, context_type=""):
+    return (
+        severities.ERROR,
+        report_codes.ID_NOT_FOUND,
+        {
+            "context_type": context_type,
+            "context_id": "",
+            "id": res_id,
+            "id_description": "resource/clone/master/group/bundle",
+        },
+        None
+    )
+
+def report_resource_not_running(resource, severity=severities.INFO):
+    return (
+        severity,
+        report_codes.RESOURCE_DOES_NOT_RUN,
+        {
+            "resource_id": resource,
+        },
+        None
+    )
+
+def report_resource_running(resource, roles, severity=severities.INFO):
+    return (
+        severity,
+        report_codes.RESOURCE_RUNNING_ON_NODES,
+        {
+            "resource_id": resource,
+            "roles_with_nodes": roles,
+        },
+        None
+    )
+
+def report_unexpected_element(element_id, element_type, expected_types):
+    return (
+        severities.ERROR,
+        report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE,
+        {
+            "id": element_id,
+            "expected_types": expected_types,
+            "current_type": element_type,
+        },
+        None
+    )
+
+def report_not_for_bundles(element_id):
+    return report_unexpected_element(
+        element_id,
+        "bundle",
+        ["clone", "master", "group", "primitive"]
+    )
+
+def report_wait_for_idle_timed_out(reason):
+    return (
+        severities.ERROR,
+        report_codes.WAIT_FOR_IDLE_TIMED_OUT,
+        {
+            "reason": reason.strip(),
+        },
+        None
+    )
diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/test_bundle_create.py pcs-0.9.159/pcs/lib/commands/test/resource/test_bundle_create.py
--- pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/test_bundle_create.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/commands/test/resource/test_bundle_create.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,1273 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from textwrap import dedent
+
+from pcs.common import report_codes
+from pcs.lib.commands import resource
+from pcs.lib.commands.test.resource.common import ResourceWithoutStateTest
+import pcs.lib.commands.test.resource.fixture as fixture
+from pcs.lib.errors import ReportItemSeverity as severities
+from pcs.test.tools.assertions import assert_raise_library_error
+from pcs.test.tools.misc import skip_unless_pacemaker_supports_bundle
+
+
+class CommonTest(ResourceWithoutStateTest):
+    fixture_cib_pre = "<resources />"
+    fixture_resources_bundle_simple = """
+        <resources>
+            <bundle id="B1">
+                <docker image="pcs:test" />
+            </bundle>
+        </resources>
+    """
+
+    def setUp(self):
+        super(CommonTest, self).setUp()
+        self.cib_base_file = "cib-empty-2.8.xml"
+
+    def fixture_cib_resources(self, cib):
+        return fixture.cib_resources(cib, cib_base_file=self.cib_base_file)
+
+
+class MinimalCreate(CommonTest):
+    def
test_success(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + container_options={"image": "pcs:test", } + ), + self.fixture_resources_bundle_simple + ) + + def test_errors(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_pre) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_create(self.env, "B#1", "nonsense"), + ( + severities.ERROR, + report_codes.INVALID_ID, + { + "invalid_character": "#", + "id": "B#1", + "id_description": "bundle name", + "is_first_char": False, + }, + None + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "container type", + "option_value": "nonsense", + "allowed_values": ("docker", ), + }, + None + ), + ) + self.runner.assert_everything_launched() + + def test_cib_upgrade(self): + self.runner.set_runs( + fixture.calls_cib_load_and_upgrade(self.fixture_cib_pre) + + + fixture.calls_cib( + self.fixture_cib_pre, + self.fixture_resources_bundle_simple, + cib_base_file=self.cib_base_file + ) + ) + + resource.bundle_create( + self.env, "B1", "docker", + container_options={"image": "pcs:test", } + ) + + self.env.report_processor.assert_reports([ + ( + severities.INFO, + report_codes.CIB_UPGRADE_SUCCESSFUL, + { + }, + None + ), + ]) + self.runner.assert_everything_launched() + + + +class CreateDocker(CommonTest): + allowed_options = [ + "image", + "masters", + "network", + "options", + "replicas", + "replicas-per-host", + "run-command", + ] + + def test_minimal(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + container_options={"image": "pcs:test", } + ), + self.fixture_resources_bundle_simple + ) + + def test_all_options(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + container_options={ + "image": "pcs:test", + "masters": "0", + "network": "extra network settings", + "options": "extra options", + "run-command": "/bin/true", + "replicas": "4", + "replicas-per-host": "2", + } + ), + """ + + + + + + """ + ) + + def test_options_errors(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_pre) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_create( + self.env, "B1", "docker", + container_options={ + "replicas-per-host": "0", + "replicas": "0", + "masters": "-1", + }, + force_options=True + ), + ( + severities.ERROR, + report_codes.REQUIRED_OPTION_IS_MISSING, + { + "option_type": "container", + "option_names": ["image", ], + }, + None + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "masters", + "option_value": "-1", + "allowed_values": "a non-negative integer", + }, + None + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "replicas", + "option_value": "0", + "allowed_values": "a positive integer", + }, + None + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "replicas-per-host", + "option_value": "0", + "allowed_values": "a positive integer", + }, + None + ), + ) + self.runner.assert_everything_launched() + + def test_empty_image(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_pre) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_create( + self.env, "B1", "docker", + container_options={ + "image": "", + }, + 
force_options=True + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "image", + "option_value": "", + "allowed_values": "image name", + }, + None + ), + ) + self.runner.assert_everything_launched() + + def test_unknow_option(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_pre) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_create( + self.env, "B1", "docker", + container_options={ + "image": "pcs:test", + "extra": "option", + } + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION, + { + "option_names": ["extra", ], + "option_type": "container", + "allowed": self.allowed_options, + }, + report_codes.FORCE_OPTIONS + ), + ) + self.runner.assert_everything_launched() + + def test_unknow_option_forced(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + container_options={ + "image": "pcs:test", + "extra": "option", + }, + force_options=True + ), + """ + + + + + + """, + [ + ( + severities.WARNING, + report_codes.INVALID_OPTION, + { + "option_names": ["extra", ], + "option_type": "container", + "allowed": self.allowed_options, + }, + None + ), + ] + ) + + +class CreateWithNetwork(CommonTest): + allowed_options = [ + "control-port", + "host-interface", + "host-netmask", + "ip-range-start", + ] + + def test_no_options(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + {"image": "pcs:test", }, + network_options={} + ), + self.fixture_resources_bundle_simple + ) + + def test_all_options(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + {"image": "pcs:test", }, + network_options={ + "control-port": "12345", + "host-interface": "eth0", + "host-netmask": "24", + "ip-range-start": "192.168.100.200", + } + ), + """ + + + + + + + """ + ) + + def test_options_errors(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_pre) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_create( + self.env, "B1", "docker", + {"image": "pcs:test", }, + network_options={ + "control-port": "0", + "host-netmask": "abc", + "extra": "option", + } + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "control-port", + "option_value": "0", + "allowed_values": "a port number (1-65535)", + }, + None + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "host-netmask", + "option_value": "abc", + "allowed_values": "a number of bits of the mask (1-32)", + }, + report_codes.FORCE_OPTIONS + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION, + { + "option_names": ["extra", ], + "option_type": "network", + "allowed": self.allowed_options, + }, + report_codes.FORCE_OPTIONS + ), + ) + self.runner.assert_everything_launched() + + def test_options_forced(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + { + "image": "pcs:test", + }, + network_options={ + "host-netmask": "abc", + "extra": "option", + }, + force_options=True + ), + """ + + + + + + + """, + [ + ( + severities.WARNING, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "host-netmask", + "option_value": "abc", + "allowed_values": "a number of bits of the mask (1-32)", + }, + None + ), + ( + severities.WARNING, + 
report_codes.INVALID_OPTION, + { + "option_names": ["extra", ], + "option_type": "network", + "allowed": self.allowed_options, + }, + None + ), + ] + ) + + +class CreateWithPortMap(CommonTest): + allowed_options = [ + "id", + "internal-port", + "port", + "range", + ] + + def test_no_options(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + {"image": "pcs:test", }, + port_map=[] + ), + self.fixture_resources_bundle_simple + ) + + def test_several_mappings_and_handle_their_ids(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + {"image": "pcs:test", }, + port_map=[ + { + "port": "1001", + }, + { + # use an autogenerated id of the previous item + "id": "B1-port-map-1001", + "port": "2000", + "internal-port": "2002", + }, + { + "range": "3000-3300", + }, + ] + ), + """ + + + + + + + + + + + """ + ) + + def test_options_errors(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_pre) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_create( + self.env, "B1", "docker", + {"image": "pcs:test", }, + port_map=[ + { + }, + { + "id": "not#valid", + }, + { + "internal-port": "1000", + }, + { + "port": "abc", + }, + { + "port": "2000", + "range": "3000-4000", + "internal-port": "def", + }, + ], + force_options=True + ), + # first + ( + severities.ERROR, + report_codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING, + { + "option_type": "port-map", + "option_names": ["port", "range"], + }, + None + ), + # second + ( + severities.ERROR, + report_codes.INVALID_ID, + { + "invalid_character": "#", + "id": "not#valid", + "id_description": "port-map id", + "is_first_char": False, + }, + None + ), + ( + severities.ERROR, + report_codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING, + { + "option_type": "port-map", + "option_names": ["port", "range"], + }, + None + ), + # third + ( + severities.ERROR, + report_codes.PREREQUISITE_OPTION_IS_MISSING, + { + "option_type": "port-map", + "option_name": "internal-port", + "prerequisite_type": "port-map", + "prerequisite_name": "port", + }, + None + ), + ( + severities.ERROR, + report_codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING, + { + "option_type": "port-map", + "option_names": ["port", "range"], + }, + None + ), + # fourth + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "port", + "option_value": "abc", + "allowed_values": "a port number (1-65535)", + }, + None + ), + # fifth + ( + severities.ERROR, + report_codes.MUTUALLY_EXCLUSIVE_OPTIONS, + { + "option_names": ["port", "range", ], + "option_type": "port-map", + }, + None + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "internal-port", + "option_value": "def", + "allowed_values": "a port number (1-65535)", + }, + None + ), + ) + self.runner.assert_everything_launched() + + def test_forceable_options_errors(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_pre) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_create( + self.env, "B1", "docker", + {"image": "pcs:test", }, + port_map=[ + { + "range": "3000", + "extra": "option", + }, + ] + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION, + { + "option_names": ["extra", ], + "option_type": "port-map", + "allowed": self.allowed_options, + }, + report_codes.FORCE_OPTIONS + ), + ( + severities.ERROR, + 
report_codes.INVALID_OPTION_VALUE, + { + "option_name": "range", + "option_value": "3000", + "allowed_values": "port-port", + }, + report_codes.FORCE_OPTIONS + ), + ) + + def test_forceable_options_errors_forced(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + { + "image": "pcs:test", + }, + port_map=[ + { + "range": "3000", + "extra": "option", + }, + ], + force_options=True + ), + """ + + + + + + + + + """, + [ + ( + severities.WARNING, + report_codes.INVALID_OPTION, + { + "option_names": ["extra", ], + "option_type": "port-map", + "allowed": self.allowed_options, + }, + None + ), + ( + severities.WARNING, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "range", + "option_value": "3000", + "allowed_values": "port-port", + }, + None + ), + ] + ) + + +class CreateWithStorageMap(CommonTest): + allowed_options = [ + "id", + "options", + "source-dir", + "source-dir-root", + "target-dir", + ] + + def test_several_mappings_and_handle_their_ids(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + {"image": "pcs:test", }, + storage_map=[ + { + "source-dir": "/tmp/docker1a", + "target-dir": "/tmp/docker1b", + }, + { + # use an autogenerated id of the previous item + "id": "B1-storage-map", + "source-dir": "/tmp/docker2a", + "target-dir": "/tmp/docker2b", + "options": "extra options 1" + }, + { + "source-dir-root": "/tmp/docker3a", + "target-dir": "/tmp/docker3b", + }, + { + # use an autogenerated id of the previous item + "id": "B1-storage-map-2", + "source-dir-root": "/tmp/docker4a", + "target-dir": "/tmp/docker4b", + "options": "extra options 2" + }, + ] + ), + """ + + + + + + + + + + + + """ + ) + + def test_options_errors(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_pre) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_create( + self.env, "B1", "docker", + {"image": "pcs:test", }, + storage_map=[ + { + }, + { + "id": "not#valid", + "source-dir": "/tmp/docker1a", + "source-dir-root": "/tmp/docker1b", + "target-dir": "/tmp/docker1c", + }, + ], + force_options=True + ), + # first + ( + severities.ERROR, + report_codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING, + { + "option_type": "storage-map", + "option_names": ["source-dir", "source-dir-root"], + }, + None + ), + ( + severities.ERROR, + report_codes.REQUIRED_OPTION_IS_MISSING, + { + "option_type": "storage-map", + "option_names": ["target-dir", ], + }, + None + ), + # second + ( + severities.ERROR, + report_codes.INVALID_ID, + { + "invalid_character": "#", + "id": "not#valid", + "id_description": "storage-map id", + "is_first_char": False, + }, + None + ), + ( + severities.ERROR, + report_codes.MUTUALLY_EXCLUSIVE_OPTIONS, + { + "option_type": "storage-map", + "option_names": ["source-dir", "source-dir-root"], + }, + None + ), + ) + + def test_forceable_options_errors(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_pre) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_create( + self.env, "B1", "docker", + {"image": "pcs:test", }, + storage_map=[ + { + "source-dir": "/tmp/docker1a", + "target-dir": "/tmp/docker1b", + "extra": "option", + }, + ] + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION, + { + "option_names": ["extra", ], + "option_type": "storage-map", + "allowed": self.allowed_options, + }, + report_codes.FORCE_OPTIONS + ), + ) + + 
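+ # A note on the force_options pattern exercised by the *_errors and
+ # *_errors_forced pairs in these classes: the very same invalid option
+ # is reported as a forceable ERROR by default (the report tuple carries
+ # report_codes.FORCE_OPTIONS in its last slot) and as a plain WARNING
+ # with no forceable marker when force_options=True. A minimal sketch of
+ # a helper that could build both variants of such a tuple (illustrative
+ # only, not part of pcs; severities and report_codes are imported above):
+ #
+ #   def fixture_report_invalid_option(names, opt_type, allowed, forced):
+ #       return (
+ #           severities.WARNING if forced else severities.ERROR,
+ #           report_codes.INVALID_OPTION,
+ #           {
+ #               "option_names": names,
+ #               "option_type": opt_type,
+ #               "allowed": allowed,
+ #           },
+ #           None if forced else report_codes.FORCE_OPTIONS,
+ #       )
+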
def test_forceable_options_errors_forced(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + { + "image": "pcs:test", + }, + storage_map=[ + { + "source-dir": "/tmp/docker1a", + "target-dir": "/tmp/docker1b", + "extra": "option", + }, + ], + force_options=True + ), + """ + + + + + + + + + """, + [ + ( + severities.WARNING, + report_codes.INVALID_OPTION, + { + "option_names": ["extra", ], + "option_type": "storage-map", + "allowed": self.allowed_options, + }, + None + ), + ] + ) + + +class CreateWithMeta(CommonTest): + def test_success(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + container_options={"image": "pcs:test", }, + meta_attributes={ + "target-role": "Stopped", + "is-managed": "false", + } + ), + """ + + + + + + + + + + """ + ) + + def test_disabled(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + container_options={"image": "pcs:test", }, + ensure_disabled=True + ), + """ + + + + + + + + + """ + ) + +class CreateWithAllOptions(CommonTest): + def test_success(self): + self.assert_command_effect( + self.fixture_cib_pre, + lambda: resource.bundle_create( + self.env, "B1", "docker", + container_options={ + "image": "pcs:test", + "masters": "0", + "network": "extra network settings", + "options": "extra options", + "run-command": "/bin/true", + "replicas": "4", + "replicas-per-host": "2", + }, + network_options={ + "control-port": "12345", + "host-interface": "eth0", + "host-netmask": "24", + "ip-range-start": "192.168.100.200", + }, + port_map=[ + { + "port": "1001", + }, + { + # use an autogenerated id of the previous item + "id": "B1-port-map-1001", + "port": "2000", + "internal-port": "2002", + }, + { + "range": "3000-3300", + }, + ], + storage_map=[ + { + "source-dir": "/tmp/docker1a", + "target-dir": "/tmp/docker1b", + }, + { + # use an autogenerated id of the previous item + "id": "B1-storage-map", + "source-dir": "/tmp/docker2a", + "target-dir": "/tmp/docker2b", + "options": "extra options 1" + }, + { + "source-dir-root": "/tmp/docker3a", + "target-dir": "/tmp/docker3b", + }, + { + # use an autogenerated id of the previous item + "id": "B1-port-map-1001-1", + "source-dir-root": "/tmp/docker4a", + "target-dir": "/tmp/docker4b", + "options": "extra options 2" + }, + ] + ), + """ + + + + + + + + + + + + + + + + + """ + ) + + +class Wait(CommonTest): + fixture_status_running = """ + + + + + + + + + + + + + + + """ + + fixture_status_not_running = """ + + + + + + + + + + + """ + + fixture_resources_bundle_simple_disabled = """ + + + + + + + + + """ + + timeout = 10 + + def simple_bundle_create(self, wait=False, disabled=False): + return resource.bundle_create( + self.env, "B1", "docker", + container_options={"image": "pcs:test"}, + ensure_disabled=disabled, + wait=wait, + ) + + def test_wait_fail(self): + fixture_wait_timeout_error = dedent( + """\ + Pending actions: + Action 12: B1-node2-stop on node2 + Error performing operation: Timer expired + """ + ) + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib( + self.fixture_cib_pre, + self.fixture_resources_bundle_simple, + cib_base_file=self.cib_base_file, + ) + + + fixture.call_wait(self.timeout, 62, fixture_wait_timeout_error) + ) + assert_raise_library_error( + lambda: self.simple_bundle_create(self.timeout), + fixture.report_wait_for_idle_timed_out( + fixture_wait_timeout_error 
+ ), + ) + self.runner.assert_everything_launched() + + @skip_unless_pacemaker_supports_bundle + def test_wait_ok_run_ok(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib( + self.fixture_cib_pre, + self.fixture_resources_bundle_simple, + cib_base_file=self.cib_base_file, + ) + + + fixture.call_wait(self.timeout) + + + fixture.call_status(fixture.state_complete( + self.fixture_status_running + )) + ) + self.simple_bundle_create(self.timeout) + self.env.report_processor.assert_reports([ + fixture.report_resource_running( + "B1", {"Started": ["node1", "node2"]} + ), + ]) + self.runner.assert_everything_launched() + + @skip_unless_pacemaker_supports_bundle + def test_wait_ok_run_fail(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib( + self.fixture_cib_pre, + self.fixture_resources_bundle_simple, + cib_base_file=self.cib_base_file, + ) + + + fixture.call_wait(self.timeout) + + + fixture.call_status(fixture.state_complete( + self.fixture_status_not_running + )) + ) + assert_raise_library_error( + lambda: self.simple_bundle_create(self.timeout), + fixture.report_resource_not_running("B1", severities.ERROR), + ) + self.runner.assert_everything_launched() + + @skip_unless_pacemaker_supports_bundle + def test_disabled_wait_ok_run_ok(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib( + self.fixture_cib_pre, + self.fixture_resources_bundle_simple_disabled, + cib_base_file=self.cib_base_file, + ) + + + fixture.call_wait(self.timeout) + + + fixture.call_status(fixture.state_complete( + self.fixture_status_not_running + )) + ) + self.simple_bundle_create(self.timeout, disabled=True) + self.runner.assert_everything_launched() + + @skip_unless_pacemaker_supports_bundle + def test_disabled_wait_ok_run_fail(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib( + self.fixture_cib_pre, + self.fixture_resources_bundle_simple_disabled, + cib_base_file=self.cib_base_file, + ) + + + fixture.call_wait(self.timeout) + + + fixture.call_status(fixture.state_complete( + self.fixture_status_running + )) + ) + assert_raise_library_error( + lambda: self.simple_bundle_create(self.timeout, disabled=True), + fixture.report_resource_running( + "B1", {"Started": ["node1", "node2"]}, severities.ERROR + ) + ) + self.runner.assert_everything_launched() diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/test_bundle_update.py pcs-0.9.159/pcs/lib/commands/test/resource/test_bundle_update.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/test_bundle_update.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/resource/test_bundle_update.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,916 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from textwrap import dedent + +from pcs.common import report_codes +from pcs.lib.commands import resource +from pcs.lib.commands.test.resource.common import ResourceWithoutStateTest +import pcs.lib.commands.test.resource.fixture as fixture +from pcs.lib.errors import ReportItemSeverity as severities +from pcs.test.tools.assertions import assert_raise_library_error +from pcs.test.tools.misc import skip_unless_pacemaker_supports_bundle + +class CommonTest(ResourceWithoutStateTest): + fixture_cib_minimal = """ + + + + + + """ + + def setUp(self): + super(CommonTest, self).setUp() + self.cib_base_file = "cib-empty-2.8.xml" + + def fixture_cib_resources(self, cib): + return 
fixture.cib_resources(cib, cib_base_file=self.cib_base_file) + + +class Basics(CommonTest): + def test_nonexisting_id(self): + fixture_cib_pre = "" + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(fixture_cib_pre) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_update(self.env, "B1"), + ( + severities.ERROR, + report_codes.ID_NOT_FOUND, + { + "id": "B1", + "id_description": "bundle", + "context_type": "resources", + "context_id": "", + }, + None + ), + ) + self.runner.assert_everything_launched() + + def test_not_bundle_id(self): + fixture_cib_pre = """ + + + + """ + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(fixture_cib_pre) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_update(self.env, "B1"), + ( + severities.ERROR, + report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE, + { + "id": "B1", + "expected_types": ["bundle"], + "current_type": "primitive", + }, + None + ), + ) + self.runner.assert_everything_launched() + + def test_no_updates(self): + fixture_cib_pre = """ + + + + + + """ + self.assert_command_effect( + fixture_cib_pre, + lambda: resource.bundle_update(self.env, "B1"), + fixture_cib_pre + ) + + def test_cib_upgrade(self): + fixture_cib_pre = """ + + + + + + """ + self.runner.set_runs( + fixture.calls_cib_load_and_upgrade(fixture_cib_pre) + + + fixture.calls_cib( + fixture_cib_pre, + fixture_cib_pre, + cib_base_file=self.cib_base_file + ) + ) + + resource.bundle_update(self.env, "B1") + + self.env.report_processor.assert_reports([ + ( + severities.INFO, + report_codes.CIB_UPGRADE_SUCCESSFUL, + { + }, + None + ), + ]) + self.runner.assert_everything_launched() + + +class ContainerDocker(CommonTest): + allowed_options = [ + "image", + "masters", + "network", + "options", + "replicas", + "replicas-per-host", + "run-command", + ] + + fixture_cib_extra_option = """ + + + + + + """ + + def test_success(self): + fixture_cib_pre = """ + + + + + + """ + fixture_cib_post = """ + + + + + + """ + self.assert_command_effect( + fixture_cib_pre, + lambda: resource.bundle_update( + self.env, "B1", + container_options={ + "options": "test", + "replicas": "3", + "masters": "", + } + ), + fixture_cib_post + ) + + def test_cannot_remove_required_options(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_minimal) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_update( + self.env, "B1", + container_options={ + "image": "", + "options": "test", + }, + force_options=True + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "image", + "option_value": "", + "allowed_values": "image name", + }, + None + ), + ) + self.runner.assert_everything_launched() + + def test_unknow_option(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_minimal) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_update( + self.env, "B1", + container_options={ + "extra": "option", + } + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION, + { + "option_names": ["extra", ], + "option_type": "container", + "allowed": self.allowed_options, + }, + report_codes.FORCE_OPTIONS + ), + ) + self.runner.assert_everything_launched() + + def test_unknow_option_forced(self): + self.assert_command_effect( + self.fixture_cib_minimal, + lambda: resource.bundle_update( + self.env, "B1", + container_options={ + "extra": "option", + }, + force_options=True + ), + self.fixture_cib_extra_option, + [ 
+ ( + severities.WARNING, + report_codes.INVALID_OPTION, + { + "option_names": ["extra", ], + "option_type": "container", + "allowed": self.allowed_options, + }, + None + ), + ] + ) + + def test_unknown_option_remove(self): + self.assert_command_effect( + self.fixture_cib_extra_option, + lambda: resource.bundle_update( + self.env, "B1", + container_options={ + "extra": "", + } + ), + self.fixture_cib_minimal, + ) + + +class Network(CommonTest): + allowed_options = [ + "control-port", + "host-interface", + "host-netmask", + "ip-range-start", + ] + + fixture_cib_interface = """ + + + + + + + """ + + fixture_cib_extra_option = """ + + + + + + + """ + + def test_add_network(self): + self.assert_command_effect( + self.fixture_cib_minimal, + lambda: resource.bundle_update( + self.env, "B1", + network_options={ + "host-interface": "eth0", + } + ), + self.fixture_cib_interface + ) + + def test_remove_network(self): + self.assert_command_effect( + self.fixture_cib_interface, + lambda: resource.bundle_update( + self.env, "B1", + network_options={ + "host-interface": "", + } + ), + self.fixture_cib_minimal + ) + + def test_keep_network_when_port_map_set(self): + fixture_cib_pre = """ + + + + + + + + + """ + fixture_cib_post = """ + + + + + + + + + """ + self.assert_command_effect( + fixture_cib_pre, + lambda: resource.bundle_update( + self.env, "B1", + network_options={ + "host-interface": "", + } + ), + fixture_cib_post + ) + + def test_success(self): + fixture_cib_pre = """ + + + + + + + """ + fixture_cib_post = """ + + + + + + + """ + self.assert_command_effect( + fixture_cib_pre, + lambda: resource.bundle_update( + self.env, "B1", + network_options={ + "control-port": "", + "host-netmask": "24", + } + ), + fixture_cib_post + ) + + def test_unknow_option(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_interface) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_update( + self.env, "B1", + network_options={ + "extra": "option", + } + ), + ( + severities.ERROR, + report_codes.INVALID_OPTION, + { + "option_names": ["extra", ], + "option_type": "network", + "allowed": self.allowed_options, + }, + report_codes.FORCE_OPTIONS + ), + ) + self.runner.assert_everything_launched() + + def test_unknow_option_forced(self): + self.assert_command_effect( + self.fixture_cib_interface, + lambda: resource.bundle_update( + self.env, "B1", + network_options={ + "extra": "option", + }, + force_options=True + ), + self.fixture_cib_extra_option, + [ + ( + severities.WARNING, + report_codes.INVALID_OPTION, + { + "option_names": ["extra", ], + "option_type": "network", + "allowed": self.allowed_options, + }, + None + ), + ] + ) + + def test_unknown_option_remove(self): + self.assert_command_effect( + self.fixture_cib_extra_option, + lambda: resource.bundle_update( + self.env, "B1", + network_options={ + "extra": "", + } + ), + self.fixture_cib_interface, + ) + + +class PortMap(CommonTest): + allowed_options = [ + "id", + "port", + "internal-port", + "range", + ] + + fixture_cib_port_80 = """ + + + + + + + + + """ + + fixture_cib_port_80_8080 = """ + + + + + + + + + + """ + + def test_add_network(self): + self.assert_command_effect( + self.fixture_cib_minimal, + lambda: resource.bundle_update( + self.env, "B1", + port_map_add=[ + { + "port": "80", + } + ] + ), + self.fixture_cib_port_80 + ) + + def test_remove_network(self): + self.assert_command_effect( + self.fixture_cib_port_80, + lambda: resource.bundle_update( + self.env, "B1", + port_map_remove=[ + 
"B1-port-map-80", + ] + ), + self.fixture_cib_minimal + ) + + def test_keep_network_when_options_set(self): + fixture_cib_pre = """ + + + + + + + + + """ + fixture_cib_post = """ + + + + + + + """ + self.assert_command_effect( + fixture_cib_pre, + lambda: resource.bundle_update( + self.env, "B1", + port_map_remove=[ + "B1-port-map-80", + ] + ), + fixture_cib_post + ) + + def test_add(self): + self.assert_command_effect( + self.fixture_cib_port_80, + lambda: resource.bundle_update( + self.env, "B1", + port_map_add=[ + { + "port": "8080", + } + ] + ), + self.fixture_cib_port_80_8080 + ) + + def test_remove(self): + self.assert_command_effect( + self.fixture_cib_port_80_8080, + lambda: resource.bundle_update( + self.env, "B1", + port_map_remove=[ + "B1-port-map-8080", + ] + ), + self.fixture_cib_port_80 + ) + + def test_remove_missing(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_port_80) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_update( + self.env, "B1", + port_map_remove=[ + "B1-port-map-8080", + ] + ), + ( + severities.ERROR, + report_codes.ID_NOT_FOUND, + { + "id": "B1-port-map-8080", + "id_description": "port-map", + "context_type": "bundle", + "context_id": "B1", + }, + None + ), + ) + self.runner.assert_everything_launched() + + +class StorageMap(CommonTest): + allowed_options = [ + "id", + "options", + "source-dir", + "source-dir-root", + "target-dir", + ] + + fixture_cib_storage_1 = """ + + + + + + + + + """ + + fixture_cib_storage_1_2 = """ + + + + + + + + + + """ + + def test_add_storage(self): + self.assert_command_effect( + self.fixture_cib_minimal, + lambda: resource.bundle_update( + self.env, "B1", + storage_map_add=[ + { + "source-dir": "/tmp/docker1a", + "target-dir": "/tmp/docker1b", + } + ] + ), + self.fixture_cib_storage_1 + ) + + def test_remove_storage(self): + self.assert_command_effect( + self.fixture_cib_storage_1, + lambda: resource.bundle_update( + self.env, "B1", + storage_map_remove=[ + "B1-storage-map", + ] + ), + self.fixture_cib_minimal + ) + + def test_add(self): + self.assert_command_effect( + self.fixture_cib_storage_1, + lambda: resource.bundle_update( + self.env, "B1", + storage_map_add=[ + { + "source-dir": "/tmp/docker2a", + "target-dir": "/tmp/docker2b", + } + ] + ), + self.fixture_cib_storage_1_2 + ) + + def test_remove(self): + self.assert_command_effect( + self.fixture_cib_storage_1_2, + lambda: resource.bundle_update( + self.env, "B1", + storage_map_remove=[ + "B1-storage-map-1", + ] + ), + self.fixture_cib_storage_1 + ) + + def test_remove_missing(self): + self.runner.set_runs( + fixture.call_cib_load( + self.fixture_cib_resources(self.fixture_cib_storage_1) + ) + ) + assert_raise_library_error( + lambda: resource.bundle_update( + self.env, "B1", + storage_map_remove=[ + "B1-storage-map-1", + ] + ), + ( + severities.ERROR, + report_codes.ID_NOT_FOUND, + { + "id": "B1-storage-map-1", + "id_description": "storage-map", + "context_type": "bundle", + "context_id": "B1", + }, + None + ), + ) + self.runner.assert_everything_launched() + + +class Meta(CommonTest): + fixture_no_meta = """ + + + + + + """ + + fixture_meta_stopped = """ + + + + + + + + + """ + + def test_add_meta_element(self): + self.assert_command_effect( + self.fixture_no_meta, + lambda: resource.bundle_update( + self.env, "B1", + meta_attributes={ + "target-role": "Stopped", + } + ), + self.fixture_meta_stopped + ) + + def test_remove_meta_element(self): + self.assert_command_effect( + 
self.fixture_meta_stopped, + lambda: resource.bundle_update( + self.env, "B1", + meta_attributes={ + "target-role": "", + } + ), + self.fixture_no_meta + ) + + def test_change_meta(self): + fixture_cib_pre = """ + + + + + + + + + + + """ + fixture_cib_post = """ + + + + + + + + + + + """ + self.assert_command_effect( + fixture_cib_pre, + lambda: resource.bundle_update( + self.env, "B1", + meta_attributes={ + "priority": "10", + "resource-stickiness": "100", + "is-managed": "", + } + ), + fixture_cib_post + ) + + +class Wait(CommonTest): + fixture_status_running = """ + + + + + + + + + + + + + + + """ + + fixture_status_not_running = """ + + + + + + + + + + + """ + + fixture_cib_pre = """ + + + + + + """ + + fixture_resources_bundle_simple = """ + + + + + + """ + + timeout = 10 + + def fixture_calls_initial(self): + return ( + fixture.call_wait_supported() + + fixture.calls_cib( + self.fixture_cib_pre, + self.fixture_resources_bundle_simple, + cib_base_file=self.cib_base_file, + ) + ) + + def simple_bundle_update(self, wait=False): + return resource.bundle_update( + self.env, "B1", {"image": "new:image"}, wait=wait, + ) + + def test_wait_fail(self): + fixture_wait_timeout_error = dedent( + """\ + Pending actions: + Action 12: B1-node2-stop on node2 + Error performing operation: Timer expired + """ + ) + self.runner.set_runs( + self.fixture_calls_initial() + + fixture.call_wait(self.timeout, 62, fixture_wait_timeout_error) + ) + assert_raise_library_error( + lambda: self.simple_bundle_update(self.timeout), + fixture.report_wait_for_idle_timed_out( + fixture_wait_timeout_error + ), + ) + self.runner.assert_everything_launched() + + @skip_unless_pacemaker_supports_bundle + def test_wait_ok_running(self): + self.runner.set_runs( + self.fixture_calls_initial() + + fixture.call_wait(self.timeout) + + fixture.call_status(fixture.state_complete( + self.fixture_status_running + )) + ) + self.simple_bundle_update(self.timeout) + self.env.report_processor.assert_reports([ + fixture.report_resource_running( + "B1", {"Started": ["node1", "node2"]} + ), + ]) + self.runner.assert_everything_launched() + + @skip_unless_pacemaker_supports_bundle + def test_wait_ok_not_running(self): + self.runner.set_runs( + self.fixture_calls_initial() + + fixture.call_wait(self.timeout) + + fixture.call_status(fixture.state_complete( + self.fixture_status_not_running + )) + ) + self.simple_bundle_update(self.timeout) + self.env.report_processor.assert_reports([ + fixture.report_resource_not_running("B1", severities.INFO), + ]) + self.runner.assert_everything_launched() diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/test_resource_create.py pcs-0.9.159/pcs/lib/commands/test/resource/test_resource_create.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/test_resource_create.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/resource/test_resource_create.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,1295 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from functools import partial +import logging + +from lxml import etree + +from pcs.test.tools.pcs_unittest import TestCase, mock +from pcs.common import report_codes +from pcs.lib.env import LibraryEnvironment +from pcs.lib.commands import resource +from pcs.lib.errors import ReportItemSeverity as severities +from pcs.lib.commands.test.resource.common import ResourceWithoutStateTest +import pcs.lib.commands.test.resource.fixture as fixture +from pcs.test.tools.assertions import 
assert_raise_library_error +from pcs.test.tools.custom_mock import MockLibraryReportProcessor +from pcs.test.tools.integration_lib import ( + Call, + Runner, +) +from pcs.test.tools.misc import ( + get_test_resource as rc, + outdent, + skip_unless_pacemaker_supports_bundle, +) +from pcs.test.tools.xml import etree_to_str + + +runner = Runner() + +fixture_cib_resources_xml_simplest = """ + + + + + + + +""" + +fixture_cib_resources_xml_simplest_disabled = """ + + + + + + + + + + +""" + +fixture_cib_resources_xml_master_simplest = """ + + + + + + + + + +""" + + +fixture_cib_resources_xml_master_simplest_disabled = """ + + + + + + + + + + + + +""" + +fixture_cib_resources_xml_master_simplest_disabled_meta_after = """ + + + + + + + + + + + + +""" + +fixture_cib_resources_xml_group_simplest = """ + + + + + + + + + +""" + + +fixture_cib_resources_xml_group_simplest_disabled = """ + + + + + + + + + + + + +""" + + +fixture_cib_resources_xml_clone_simplest = """ + + + + + + + + + +""" + +fixture_cib_resources_xml_clone_simplest_disabled = """ + + + + + + + + + + + + +""" + +def fixture_state_resources_xml(role="Started", failed="false"): + return( + """ + + + + + + """.format( + role=role, + failed=failed, + ) + ) + +def fixture_cib_calls(cib_resources_xml): + cib_xml = open(rc("cib-empty.xml")).read() + + cib = etree.fromstring(cib_xml) + resources_section = cib.find(".//resources") + for child in etree.fromstring(cib_resources_xml): + resources_section.append(child) + + return [ + Call("cibadmin --local --query", cib_xml), + Call( + "cibadmin --replace --verbose --xml-pipe --scope configuration", + check_stdin=Call.create_check_stdin_xml(etree_to_str(cib)) + ), + ] + +def fixture_agent_load_calls(): + return [ + Call( + "crm_resource --show-metadata ocf:heartbeat:Dummy", + open(rc("resource_agent_ocf_heartbeat_dummy.xml")).read() + ), + ] + + +def fixture_pre_timeout_calls(cib_resources_xml): + return ( + fixture_agent_load_calls() + + + [ + Call("crm_resource -?", "--wait"), + ] + + + fixture_cib_calls(cib_resources_xml) + ) + +def fixture_wait_and_get_state_calls(state_resource_xml): + crm_mon = etree.fromstring(open(rc("crm_mon.minimal.xml")).read()) + crm_mon.append(etree.fromstring(state_resource_xml)) + + return [ + Call("crm_resource --wait --timeout=10"), + Call( + "crm_mon --one-shot --as-xml --inactive", + etree_to_str(crm_mon), + ), + ] + +def fixture_calls_including_waiting(cib_resources_xml, state_resources_xml): + return ( + fixture_pre_timeout_calls(cib_resources_xml) + + + fixture_wait_and_get_state_calls(state_resources_xml) + ) + +class CommonResourceTest(TestCase): + @classmethod + def setUpClass(cls): + cls.patcher = mock.patch.object( + LibraryEnvironment, + "cmd_runner", + lambda self: runner + ) + cls.patcher.start() + cls.patcher_corosync = mock.patch.object( + LibraryEnvironment, + "get_corosync_conf_data", + lambda self: open(rc("corosync.conf")).read() + ) + cls.patcher_corosync.start() + + @classmethod + def tearDownClass(cls): + cls.patcher.stop() + cls.patcher_corosync.stop() + + def setUp(self): + self.env = LibraryEnvironment( + mock.MagicMock(logging.Logger), + MockLibraryReportProcessor() + ) + self.create = partial(self.get_create(), self.env) + + def assert_command_effect(self, cmd, cib_resources_xml, reports=None): + runner.set_runs( + fixture_agent_load_calls() + + + fixture_cib_calls(cib_resources_xml) + ) + cmd() + self.env.report_processor.assert_reports(reports if reports else []) + runner.assert_everything_launched() + + def assert_wait_fail(self, 
command, cib_resources_xml): + wait_error_message = outdent( + """\ + Pending actions: + Action 39: stonith-vm-rhel72-1-reboot on vm-rhel72-1 + Error performing operation: Timer expired + """ + ) + + runner.set_runs(fixture_pre_timeout_calls(cib_resources_xml) + [ + Call( + "crm_resource --wait --timeout=10", + stderr=wait_error_message, + returncode=62, + ), + ]) + + assert_raise_library_error( + command, + ( + severities.ERROR, + report_codes.WAIT_FOR_IDLE_TIMED_OUT, + { + "reason": wait_error_message.strip(), + }, + None + ) + ) + runner.assert_everything_launched() + + def assert_wait_ok_run_fail( + self, command, cib_resources_xml, state_resources_xml + ): + runner.set_runs(fixture_calls_including_waiting( + cib_resources_xml, + state_resources_xml + )) + + assert_raise_library_error( + command, + ( + severities.ERROR, + report_codes.RESOURCE_DOES_NOT_RUN, + { + "resource_id": "A", + }, + None + ) + ) + runner.assert_everything_launched() + + def assert_wait_ok_run_ok( + self, command, cib_resources_xml, state_resources_xml + ): + runner.set_runs(fixture_calls_including_waiting( + cib_resources_xml, + state_resources_xml + )) + command() + self.env.report_processor.assert_reports([ + ( + severities.INFO, + report_codes.RESOURCE_RUNNING_ON_NODES, + { + "roles_with_nodes": {"Started": ["node1"]}, + "resource_id": "A", + }, + None + ), + ]) + runner.assert_everything_launched() + + def assert_wait_ok_disable_fail( + self, command, cib_resources_xml, state_resources_xml + ): + runner.set_runs(fixture_calls_including_waiting( + cib_resources_xml, + state_resources_xml + )) + + assert_raise_library_error( + command, + ( + severities.ERROR, + report_codes.RESOURCE_RUNNING_ON_NODES, + { + 'roles_with_nodes': {'Started': ['node1']}, + 'resource_id': 'A' + }, + None + ) + ) + runner.assert_everything_launched() + + def assert_wait_ok_disable_ok( + self, command, cib_resources_xml, state_resources_xml + ): + runner.set_runs(fixture_calls_including_waiting( + cib_resources_xml, + state_resources_xml + )) + command() + self.env.report_processor.assert_reports([ + ( + severities.INFO, + report_codes.RESOURCE_DOES_NOT_RUN, + { + "resource_id": "A", + }, + None + ), + ]) + runner.assert_everything_launched() + +class Create(CommonResourceTest): + def get_create(self): + return resource.create + + def simplest_create(self, wait=False, disabled=False, meta_attributes=None): + return self.create( + "A", "ocf:heartbeat:Dummy", + operations=[], + meta_attributes=meta_attributes if meta_attributes else {}, + instance_attributes={}, + wait=wait, + ensure_disabled=disabled + ) + + def test_simplest_resource(self): + self.assert_command_effect( + self.simplest_create, + fixture_cib_resources_xml_simplest + ) + + def test_resource_with_operation(self): + self.assert_command_effect( + lambda: self.create( + "A", "ocf:heartbeat:Dummy", + operations=[ + {"name": "monitor", "timeout": "10s", "interval": "10"} + ], + meta_attributes={}, + instance_attributes={}, + ), + """ + + + + + + + + """ + ) + + def test_fail_wait(self): + self.assert_wait_fail( + lambda: self.simplest_create(wait="10"), + fixture_cib_resources_xml_simplest, + ) + + def test_wait_ok_run_fail(self): + self.assert_wait_ok_run_fail( + lambda: self.simplest_create(wait="10"), + fixture_cib_resources_xml_simplest, + fixture_state_resources_xml(failed="true"), + ) + + def test_wait_ok_run_ok(self): + self.assert_wait_ok_run_ok( + lambda: self.simplest_create(wait="10"), + fixture_cib_resources_xml_simplest, + fixture_state_resources_xml(), + ) + + 
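+ # The wait-related tests above and below all replay one mocked command
+ # sequence (see fixture_pre_timeout_calls and
+ # fixture_wait_and_get_state_calls at the top of this file):
+ #
+ #   crm_resource --show-metadata ocf:heartbeat:Dummy  (load the agent)
+ #   crm_resource -?                   (check that --wait is supported)
+ #   cibadmin --local --query          (read the current CIB)
+ #   cibadmin --replace --verbose --xml-pipe --scope configuration
+ #   crm_resource --wait --timeout=10  (wait for the cluster to settle)
+ #   crm_mon --one-shot --as-xml --inactive  (read the resulting state)
+ #
+ # A further wait scenario could be assembled from the same pieces
+ # (an illustrative sketch, not an existing test):
+ #
+ #   runner.set_runs(fixture_calls_including_waiting(
+ #       fixture_cib_resources_xml_simplest,
+ #       fixture_state_resources_xml(role="Started"),
+ #   ))
+ #   self.simplest_create(wait="10")
+ #   runner.assert_everything_launched()
+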
def test_wait_ok_disable_fail(self): + self.assert_wait_ok_disable_fail( + lambda: self.simplest_create(wait="10", disabled=True), + fixture_cib_resources_xml_simplest_disabled, + fixture_state_resources_xml(), + ) + + def test_wait_ok_disable_ok(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create(wait="10", disabled=True), + fixture_cib_resources_xml_simplest_disabled, + fixture_state_resources_xml(role="Stopped"), + ) + + def test_wait_ok_disable_ok_by_target_role(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create( + wait="10", + meta_attributes={"target-role": "Stopped"} + ), + fixture_cib_resources_xml_simplest_disabled, + fixture_state_resources_xml(role="Stopped"), + ) + +class CreateAsMaster(CommonResourceTest): + def get_create(self): + return resource.create_as_master + + def simplest_create( + self, wait=False, disabled=False, meta_attributes=None, + master_meta_options=None + ): + return self.create( + "A", "ocf:heartbeat:Dummy", + operations=[], + meta_attributes=meta_attributes if meta_attributes else {}, + instance_attributes={}, + clone_meta_options=master_meta_options if master_meta_options + else {} + , + wait=wait, + ensure_disabled=disabled + ) + + def test_simplest_resource(self): + self.assert_command_effect( + self.simplest_create, + fixture_cib_resources_xml_master_simplest + ) + + def test_fail_wait(self): + self.assert_wait_fail( + lambda: self.simplest_create(wait="10"), + fixture_cib_resources_xml_master_simplest, + ) + + def test_wait_ok_run_fail(self): + self.assert_wait_ok_run_fail( + lambda: self.simplest_create(wait="10"), + fixture_cib_resources_xml_master_simplest, + fixture_state_resources_xml(failed="true"), + ) + + def test_wait_ok_run_ok(self): + self.assert_wait_ok_run_ok( + lambda: self.simplest_create(wait="10"), + fixture_cib_resources_xml_master_simplest, + fixture_state_resources_xml(), + ) + + def test_wait_ok_disable_fail(self): + self.assert_wait_ok_disable_fail( + lambda: self.simplest_create(wait="10", disabled=True), + fixture_cib_resources_xml_master_simplest_disabled, + fixture_state_resources_xml(), + ) + + def test_wait_ok_disable_ok(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create(wait="10", disabled=True), + fixture_cib_resources_xml_master_simplest_disabled, + fixture_state_resources_xml(role="Stopped"), + ) + + def test_wait_ok_disable_ok_by_target_role(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create( + wait="10", + meta_attributes={"target-role": "Stopped"} + ), + """ + + + + + + + + + + + + + """, + fixture_state_resources_xml(role="Stopped"), + ) + + def test_wait_ok_disable_ok_by_target_role_in_master(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create( + wait="10", + master_meta_options={"target-role": "Stopped"} + ), + fixture_cib_resources_xml_master_simplest_disabled_meta_after, + fixture_state_resources_xml(role="Stopped"), + ) + + def test_wait_ok_disable_ok_by_clone_max(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create( + wait="10", + master_meta_options={"clone-max": "0"} + ), + """ + + + + + + + + + + + + + """, + fixture_state_resources_xml(role="Stopped"), + ) + + def test_wait_ok_disable_ok_by_clone_node_max(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create( + wait="10", + master_meta_options={"clone-node-max": "0"} + ), + """ + + + + + + + + + + + + + """, + fixture_state_resources_xml(role="Stopped"), + ) + +class CreateInGroup(CommonResourceTest): + def 
get_create(self): + return resource.create_in_group + + def simplest_create(self, wait=False, disabled=False, meta_attributes=None): + return self.create( + "A", "ocf:heartbeat:Dummy", "G", + operations=[], + meta_attributes=meta_attributes if meta_attributes else {}, + instance_attributes={}, + wait=wait, + ensure_disabled=disabled + ) + + def test_simplest_resource(self): + self.assert_command_effect(self.simplest_create, """ + + + + + + + + + + """) + + def test_fail_wait(self): + self.assert_wait_fail( + lambda: self.simplest_create(wait="10"), + fixture_cib_resources_xml_group_simplest, + ) + + def test_wait_ok_run_fail(self): + self.assert_wait_ok_run_fail( + lambda: self.simplest_create(wait="10"), + fixture_cib_resources_xml_group_simplest, + fixture_state_resources_xml(failed="true"), + ) + + def test_wait_ok_run_ok(self): + self.assert_wait_ok_run_ok( + lambda: self.simplest_create(wait="10"), + fixture_cib_resources_xml_group_simplest, + fixture_state_resources_xml(), + ) + + def test_wait_ok_disable_fail(self): + self.assert_wait_ok_disable_fail( + lambda: self.simplest_create(wait="10", disabled=True), + fixture_cib_resources_xml_group_simplest_disabled, + fixture_state_resources_xml(), + ) + + def test_wait_ok_disable_ok(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create(wait="10", disabled=True), + fixture_cib_resources_xml_group_simplest_disabled, + fixture_state_resources_xml(role="Stopped"), + ) + + def test_wait_ok_disable_ok_by_target_role(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create( + wait="10", + meta_attributes={"target-role": "Stopped"} + ), + fixture_cib_resources_xml_group_simplest_disabled, + fixture_state_resources_xml(role="Stopped"), + ) + +class CreateAsClone(CommonResourceTest): + def get_create(self): + return resource.create_as_clone + + def simplest_create( + self, wait=False, disabled=False, meta_attributes=None, + clone_options=None + ): + return self.create( + "A", "ocf:heartbeat:Dummy", + operations=[], + meta_attributes=meta_attributes if meta_attributes else {}, + instance_attributes={}, + clone_meta_options=clone_options if clone_options else {}, + wait=wait, + ensure_disabled=disabled + ) + + def test_simplest_resource(self): + self.assert_command_effect( + self.simplest_create, + fixture_cib_resources_xml_clone_simplest + ) + + def test_fail_wait(self): + self.assert_wait_fail( + lambda: self.simplest_create(wait="10"), + fixture_cib_resources_xml_clone_simplest, + ) + + def test_wait_ok_run_fail(self): + self.assert_wait_ok_run_fail( + lambda: self.simplest_create(wait="10"), + fixture_cib_resources_xml_clone_simplest, + fixture_state_resources_xml(failed="true"), + ) + + def test_wait_ok_run_ok(self): + self.assert_wait_ok_run_ok( + lambda: self.simplest_create(wait="10"), + fixture_cib_resources_xml_clone_simplest, + fixture_state_resources_xml(), + ) + + def test_wait_ok_disable_fail(self): + self.assert_wait_ok_disable_fail( + lambda: self.simplest_create(wait="10", disabled=True), + fixture_cib_resources_xml_clone_simplest_disabled, + fixture_state_resources_xml(), + ) + + def test_wait_ok_disable_ok(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create(wait="10", disabled=True), + fixture_cib_resources_xml_clone_simplest_disabled, + fixture_state_resources_xml(role="Stopped"), + ) + + def test_wait_ok_disable_ok_by_target_role(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create( + wait="10", + meta_attributes={"target-role": "Stopped"} + ), + """ + + + + 
+ + + + + + + + + """ + , + fixture_state_resources_xml(role="Stopped"), + ) + + def test_wait_ok_disable_ok_by_target_role_in_clone(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create( + wait="10", + clone_options={"target-role": "Stopped"} + ), + """ + + + + + + + + + + + + + """, + fixture_state_resources_xml(role="Stopped"), + ) + + def test_wait_ok_disable_ok_by_clone_max(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create( + wait="10", + clone_options={"clone-max": "0"} + ), + """ + + + + + + + + + + + + + """, + fixture_state_resources_xml(role="Stopped"), + ) + + def test_wait_ok_disable_ok_by_clone_node_max(self): + self.assert_wait_ok_disable_ok( + lambda: self.simplest_create( + wait="10", + clone_options={"clone-node-max": "0"} + ), + """ + + + + + + + + + + + + + """, + fixture_state_resources_xml(role="Stopped"), + ) + + +class CreateInToBundle(ResourceWithoutStateTest): + upgraded_cib = "cib-empty-2.8.xml" + + fixture_empty_resources = "" + + fixture_resources_pre = """ + + + + """ + + fixture_resources_post_simple = """ + + + + + + + + + + + + """ + + fixture_resources_post_disabled = """ + + + + + + + + + + + + + + + """ + + fixture_status_stopped = """ + + + + + + + + """ + + fixture_status_running_with_primitive = """ + + + + + + + + + + + + + """ + + fixture_status_primitive_not_running = """ + + + + + + + + + + + """ + + fixture_wait_timeout_error = outdent( + """\ + Pending actions: + Action 12: B-node2-stop on node2 + Error performing operation: Timer expired + """ + ) + + def simplest_create(self, wait=False, disabled=False, meta_attributes=None): + return resource.create_into_bundle( + self.env, + "A", "ocf:heartbeat:Dummy", + operations=[], + meta_attributes=meta_attributes if meta_attributes else {}, + instance_attributes={}, + bundle_id="B", + wait=wait, + ensure_disabled=disabled + ) + + def test_upgrade_cib(self): + self.runner.set_runs( + fixture_agent_load_calls() + + + fixture.calls_cib_load_and_upgrade(self.fixture_empty_resources) + + + fixture.calls_cib( + self.fixture_resources_pre, + self.fixture_resources_post_simple, + self.upgraded_cib, + ) + ) + self.simplest_create() + self.runner.assert_everything_launched() + + def test_simplest_resource(self): + self.runner.set_runs( + fixture_agent_load_calls() + + + fixture.calls_cib( + self.fixture_resources_pre, + self.fixture_resources_post_simple, + self.upgraded_cib, + ) + ) + self.simplest_create() + self.runner.assert_everything_launched() + + def test_bundle_doesnt_exist(self): + self.runner.set_runs( + fixture_agent_load_calls() + + + fixture.call_cib_load(fixture.cib_resources( + self.fixture_empty_resources, self.upgraded_cib, + )) + ) + assert_raise_library_error( + self.simplest_create, + ( + severities.ERROR, + report_codes.ID_NOT_FOUND, + { + "id": "B", + "id_description": "bundle", + "context_type": "resources", + "context_id": "", + } + ) + ) + + def test_id_not_bundle(self): + resources_pre_update = """ + + """ + self.runner.set_runs( + fixture_agent_load_calls() + + + fixture.call_cib_load(fixture.cib_resources( + resources_pre_update, self.upgraded_cib, + )) + ) + assert_raise_library_error( + self.simplest_create, + ( + severities.ERROR, + report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE, + { + "id": "B", + "expected_types": ["bundle"], + "current_type": "primitive", + } + ) + ) + + def test_bundle_not_empty(self): + resources_pre_update = """ + + + + """ + self.runner.set_runs( + fixture_agent_load_calls() + + + 
fixture.call_cib_load(fixture.cib_resources( + resources_pre_update, self.upgraded_cib, + )) + ) + assert_raise_library_error( + self.simplest_create, + ( + severities.ERROR, + report_codes.RESOURCE_BUNDLE_ALREADY_CONTAINS_A_RESOURCE, + { + "bundle_id": "B", + "resource_id": "P", + } + ) + ) + + def test_wait_fail(self): + self.runner.set_runs( + fixture.call_dummy_metadata() + + fixture.call_wait_supported() + + fixture.calls_cib( + self.fixture_resources_pre, + self.fixture_resources_post_simple, + cib_base_file=self.upgraded_cib, + ) + + fixture.call_wait(10, 62, self.fixture_wait_timeout_error) + ) + assert_raise_library_error( + lambda: self.simplest_create(10), + fixture.report_wait_for_idle_timed_out( + self.fixture_wait_timeout_error + ), + ) + self.runner.assert_everything_launched() + + @skip_unless_pacemaker_supports_bundle + def test_wait_ok_run_ok(self): + self.runner.set_runs( + fixture.call_dummy_metadata() + + fixture.call_wait_supported() + + fixture.calls_cib( + self.fixture_resources_pre, + self.fixture_resources_post_simple, + cib_base_file=self.upgraded_cib, + ) + + fixture.call_wait(10) + + fixture.call_status(fixture.state_complete( + self.fixture_status_running_with_primitive + )) + ) + self.simplest_create(10) + self.env.report_processor.assert_reports([ + fixture.report_resource_running("A", {"Started": ["node1"]}), + ]) + self.runner.assert_everything_launched() + + @skip_unless_pacemaker_supports_bundle + def test_wait_ok_run_fail(self): + self.runner.set_runs( + fixture.call_dummy_metadata() + + fixture.call_wait_supported() + + fixture.calls_cib( + self.fixture_resources_pre, + self.fixture_resources_post_simple, + cib_base_file=self.upgraded_cib, + ) + + fixture.call_wait(10) + + fixture.call_status(fixture.state_complete( + self.fixture_status_primitive_not_running + )) + ) + assert_raise_library_error( + lambda: self.simplest_create(10), + fixture.report_resource_not_running("A", severities.ERROR), + ) + self.runner.assert_everything_launched() + + @skip_unless_pacemaker_supports_bundle + def test_disabled_wait_ok_not_running(self): + self.runner.set_runs( + fixture.call_dummy_metadata() + + fixture.call_wait_supported() + + fixture.calls_cib( + self.fixture_resources_pre, + self.fixture_resources_post_disabled, + cib_base_file=self.upgraded_cib, + ) + + fixture.call_wait(10) + + fixture.call_status(fixture.state_complete( + self.fixture_status_primitive_not_running + )) + ) + self.simplest_create(10, disabled=True) + self.env.report_processor.assert_reports([ + fixture.report_resource_not_running("A") + ]) + self.runner.assert_everything_launched() + + @skip_unless_pacemaker_supports_bundle + def test_disabled_wait_ok_running(self): + self.runner.set_runs( + fixture.call_dummy_metadata() + + fixture.call_wait_supported() + + fixture.calls_cib( + self.fixture_resources_pre, + self.fixture_resources_post_disabled, + cib_base_file=self.upgraded_cib, + ) + + fixture.call_wait(10) + + fixture.call_status(fixture.state_complete( + self.fixture_status_running_with_primitive + )) + ) + assert_raise_library_error( + lambda: self.simplest_create(10, disabled=True), + fixture.report_resource_running( + "A", {"Started": ["node1"]}, severities.ERROR + ), + ) + self.runner.assert_everything_launched() diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/test_resource_enable_disable.py pcs-0.9.159/pcs/lib/commands/test/resource/test_resource_enable_disable.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/test_resource_enable_disable.py 1970-01-01 
00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/resource/test_resource_enable_disable.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,1578 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from pcs.common import report_codes +from pcs.lib.commands import resource +from pcs.lib.commands.test.resource.common import ResourceWithStateTest +import pcs.lib.commands.test.resource.fixture as fixture +from pcs.lib.errors import ReportItemSeverity as severities +from pcs.test.tools.assertions import assert_raise_library_error +from pcs.test.tools.misc import ( + outdent, + skip_unless_pacemaker_supports_bundle, +) + + +fixture_primitive_cib_enabled = """ + + + + +""" +fixture_primitive_cib_disabled = """ + + + + + + + +""" +fixture_primitive_status_managed = """ + + + +""" +fixture_primitive_status_unmanaged = """ + + + +""" + +fixture_two_primitives_cib_enabled = """ + + + + + + +""" +fixture_two_primitives_cib_disabled = """ + + + + + + + + + +""" +fixture_two_primitives_cib_disabled_both = """ + + + + + + + + + + + + +""" +fixture_two_primitives_status_managed = """ + + + + +""" + +fixture_group_cib_enabled = """ + + + + + + + + +""" +fixture_group_cib_disabled_group = """ + + + + + + + + + + + +""" +fixture_group_cib_disabled_primitive = """ + + + + + + + + + + + +""" +fixture_group_cib_disabled_both = """ + + + + + + + + + + + + + + +""" +fixture_group_status_managed = """ + + + + + + +""" +fixture_group_status_unmanaged = """ + + + + + + +""" + +fixture_clone_cib_enabled = """ + + + + + + +""" +fixture_clone_cib_disabled_clone = """ + + + + + + + + + +""" +fixture_clone_cib_disabled_primitive = """ + + + + + + + + + +""" +fixture_clone_cib_disabled_both = """ + + + + + + + + + + + + +""" +fixture_clone_status_managed = """ + + + + + + +""" +fixture_clone_status_unmanaged = """ + + + + + + +""" + +fixture_master_cib_enabled = """ + + + + + + +""" +fixture_master_cib_disabled_master = """ + + + + + + + + + +""" +fixture_master_cib_disabled_primitive = """ + + + + + + + + + +""" +fixture_master_cib_disabled_both = """ + + + + + + + + + + + + +""" +fixture_master_status_managed = """ + + + + + + +""" +fixture_master_status_unmanaged = """ + + + + + + +""" + +fixture_clone_group_cib_enabled = """ + + + + + + + + + + +""" +fixture_clone_group_cib_disabled_clone = """ + + + + + + + + + + + + + +""" +fixture_clone_group_cib_disabled_group = """ + + + + + + + + + + + + + +""" +fixture_clone_group_cib_disabled_primitive = """ + + + + + + + + + + + + + +""" +fixture_clone_group_cib_disabled_clone_group = """ + + + + + + + + + + + + + + + + +""" +fixture_clone_group_cib_disabled_all = """ + + + + + + + + + + + + + + + + + + + +""" +fixture_clone_group_status_managed = """ + + + + + + + + + + + + +""" +fixture_clone_group_status_unmanaged = """ + + + + + + + + + + + + +""" + +fixture_bundle_cib_enabled = """ + + + + + + + +""" +fixture_bundle_cib_disabled_primitive = """ + + + + + + + + + + +""" +fixture_bundle_cib_disabled_bundle = """ + + + + + + + + + +""" +fixture_bundle_cib_disabled_both = """ + + + + + + + + + + + + + +""" +fixture_bundle_status_managed = """ + + + + + + + + + + +""" +fixture_bundle_status_unmanaged = """ + + + + + + + + + + +""" + +def fixture_report_unmanaged(resource): + return ( + severities.WARNING, + report_codes.RESOURCE_IS_UNMANAGED, + { + "resource_id": resource, + }, + None + ) + + +class DisablePrimitive(ResourceWithStateTest): + def test_nonexistent_resource(self): + self.runner.set_runs( + 
fixture.call_cib_load( + fixture.cib_resources(fixture_primitive_cib_enabled) + ) + ) + + assert_raise_library_error( + lambda: resource.disable(self.env, ["B"], False), + fixture.report_not_found("B", "resources") + ) + self.runner.assert_everything_launched() + + def test_nonexistent_resource_in_status(self): + self.runner.set_runs( + fixture.call_cib_load( + fixture.cib_resources(fixture_two_primitives_cib_enabled) + ) + + + fixture.call_status( + fixture.state_complete(fixture_primitive_status_managed) + ) + ) + + assert_raise_library_error( + lambda: resource.disable(self.env, ["B"], False), + fixture.report_not_found("B") + ) + self.runner.assert_everything_launched() + + def test_correct_resource(self): + self.assert_command_effect( + fixture_two_primitives_cib_enabled, + fixture_two_primitives_status_managed, + lambda: resource.disable(self.env, ["A"], False), + fixture_two_primitives_cib_disabled + ) + + def test_unmanaged(self): + # The code doesn't care what causes the resource to be unmanaged + # (cluster property, the resource's meta-attribute, etc.). It only + # checks the cluster state (crm_mon). + self.assert_command_effect( + fixture_primitive_cib_enabled, + fixture_primitive_status_unmanaged, + lambda: resource.disable(self.env, ["A"], False), + fixture_primitive_cib_disabled, + reports=[ + fixture_report_unmanaged("A"), + ] + ) + + +class EnablePrimitive(ResourceWithStateTest): + def test_nonexistent_resource(self): + self.runner.set_runs( + fixture.call_cib_load( + fixture.cib_resources(fixture_primitive_cib_disabled) + ) + ) + + assert_raise_library_error( + lambda: resource.enable(self.env, ["B"], False), + fixture.report_not_found("B", "resources") + ) + self.runner.assert_everything_launched() + + def test_nonexistent_resource_in_status(self): + self.runner.set_runs( + fixture.call_cib_load( + fixture.cib_resources(fixture_two_primitives_cib_disabled) + ) + + + fixture.call_status( + fixture.state_complete(fixture_primitive_status_managed) + ) + ) + + assert_raise_library_error( + lambda: resource.enable(self.env, ["B"], False), + fixture.report_not_found("B") + ) + self.runner.assert_everything_launched() + + def test_correct_resource(self): + self.assert_command_effect( + fixture_two_primitives_cib_disabled_both, + fixture_two_primitives_status_managed, + lambda: resource.enable(self.env, ["B"], False), + fixture_two_primitives_cib_disabled + ) + + def test_unmanaged(self): + # The code doesn't care what causes the resource to be unmanaged + # (cluster property, the resource's meta-attribute, etc.). It only + # checks the cluster state (crm_mon). 
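A note on what "checks the cluster state (crm_mon)" means in practice: pacemaker's status XML carries a `managed` attribute on each resource element, and that is the only thing these commands consult. A minimal illustrative sketch of such a check (not pcs's actual implementation; the helper name `is_unmanaged` is made up, and the XML layout assumed here follows crm_mon's status output):

```python
# Illustrative sketch only -- decide "unmanaged" purely from the
# crm_mon status XML, regardless of what caused the state.
from lxml import etree

def is_unmanaged(crm_mon_xml, resource_id):
    state = etree.fromstring(crm_mon_xml)
    for res in state.iter("resource"):
        if res.get("id") == resource_id:
            # crm_mon reports managed="true" / managed="false"
            return res.get("managed") == "false"
    return False
```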
+ self.assert_command_effect( + fixture_primitive_cib_disabled, + fixture_primitive_status_unmanaged, + lambda: resource.enable(self.env, ["A"], False), + fixture_primitive_cib_enabled, + reports=[ + fixture_report_unmanaged("A"), + ] + ) + + +class MoreResources(ResourceWithStateTest): + fixture_cib_enabled = """ + + + + + + + + + + + """ + fixture_cib_disabled = """ + + + + + + + + + + + + + + + + + + + + + + + """ + fixture_status = """ + + + + + + + """ + def test_success_enable(self): + fixture_enabled = """ + + + + + + + + + + + + + + """ + self.assert_command_effect( + self.fixture_cib_disabled, + self.fixture_status, + lambda: resource.enable(self.env, ["A", "B", "D"], False), + fixture_enabled, + reports=[ + fixture_report_unmanaged("B"), + fixture_report_unmanaged("D"), + ] + ) + + def test_success_disable(self): + fixture_disabled = """ + + + + + + + + + + + + + + + + + + + + """ + self.assert_command_effect( + self.fixture_cib_enabled, + self.fixture_status, + lambda: resource.disable(self.env, ["A", "B", "D"], False), + fixture_disabled, + reports=[ + fixture_report_unmanaged("B"), + fixture_report_unmanaged("D"), + ] + ) + + def test_bad_resource_enable(self): + self.runner.set_runs( + fixture.call_cib_load( + fixture.cib_resources(self.fixture_cib_disabled) + ) + ) + + assert_raise_library_error( + lambda: resource.enable(self.env, ["B", "X", "Y", "A"], False), + fixture.report_not_found("X", "resources"), + fixture.report_not_found("Y", "resources"), + ) + self.runner.assert_everything_launched() + + def test_bad_resource_disable(self): + self.runner.set_runs( + fixture.call_cib_load( + fixture.cib_resources(self.fixture_cib_enabled) + ) + ) + + assert_raise_library_error( + lambda: resource.disable(self.env, ["B", "X", "Y", "A"], False), + fixture.report_not_found("X", "resources"), + fixture.report_not_found("Y", "resources"), + ) + self.runner.assert_everything_launched() + + +class Wait(ResourceWithStateTest): + fixture_status_running = """ + + + + + + + + + """ + fixture_status_stopped = """ + + + + + + + """ + fixture_status_mixed = """ + + + + + + + """ + fixture_wait_timeout_error = outdent( + """\ + Pending actions: + Action 12: B-node2-stop on node2 + Error performing operation: Timer expired + """ + ) + + def test_enable_dont_wait_on_error(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.call_cib_load( + fixture.cib_resources(fixture_primitive_cib_disabled) + ) + ) + + assert_raise_library_error( + lambda: resource.enable(self.env, ["B"], 10), + fixture.report_not_found("B", "resources"), + ) + self.runner.assert_everything_launched() + + def test_disable_dont_wait_on_error(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.call_cib_load( + fixture.cib_resources(fixture_primitive_cib_enabled) + ) + ) + + assert_raise_library_error( + lambda: resource.disable(self.env, ["B"], 10), + fixture.report_not_found("B", "resources"), + ) + self.runner.assert_everything_launched() + + def test_enable_resource_stopped(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib_and_status( + fixture_two_primitives_cib_disabled_both, + self.fixture_status_stopped, + fixture_two_primitives_cib_enabled + ) + + + fixture.call_wait(10) + + + fixture.call_status( + fixture.state_complete(self.fixture_status_stopped) + ) + ) + + assert_raise_library_error( + lambda: resource.enable(self.env, ["A", "B"], 10), + fixture.report_resource_not_running("A", severities.ERROR), + 
fixture.report_resource_not_running("B", severities.ERROR), + ) + self.runner.assert_everything_launched() + + def test_disable_resource_stopped(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib_and_status( + fixture_two_primitives_cib_enabled, + self.fixture_status_running, + fixture_two_primitives_cib_disabled_both + ) + + + fixture.call_wait(10) + + + fixture.call_status( + fixture.state_complete(self.fixture_status_stopped) + ) + ) + + resource.disable(self.env, ["A", "B"], 10) + self.env.report_processor.assert_reports([ + fixture.report_resource_not_running("A"), + fixture.report_resource_not_running("B"), + ]) + self.runner.assert_everything_launched() + + def test_enable_resource_running(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib_and_status( + fixture_two_primitives_cib_disabled_both, + self.fixture_status_stopped, + fixture_two_primitives_cib_enabled + ) + + + fixture.call_wait(10) + + + fixture.call_status( + fixture.state_complete(self.fixture_status_running) + ) + ) + + resource.enable(self.env, ["A", "B"], 10) + + self.env.report_processor.assert_reports([ + fixture.report_resource_running("A", {"Started": ["node1"]}), + fixture.report_resource_running("B", {"Started": ["node2"]}), + ]) + self.runner.assert_everything_launched() + + def test_disable_resource_running(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib_and_status( + fixture_two_primitives_cib_enabled, + self.fixture_status_running, + fixture_two_primitives_cib_disabled_both + ) + + + fixture.call_wait(10) + + + fixture.call_status( + fixture.state_complete(self.fixture_status_running) + ) + ) + + assert_raise_library_error( + lambda: resource.disable(self.env, ["A", "B"], 10), + fixture.report_resource_running( + "A", {"Started": ["node1"]}, severities.ERROR + ), + fixture.report_resource_running( + "B", {"Started": ["node2"]}, severities.ERROR + ), + ) + self.runner.assert_everything_launched() + + def test_enable_wait_timeout(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib_and_status( + fixture_primitive_cib_disabled, + self.fixture_status_stopped, + fixture_primitive_cib_enabled + ) + + + fixture.call_wait( + 10, retval=62, stderr=self.fixture_wait_timeout_error + ) + ) + + assert_raise_library_error( + lambda: resource.enable(self.env, ["A"], 10), + fixture.report_wait_for_idle_timed_out( + self.fixture_wait_timeout_error + ), + ) + self.runner.assert_everything_launched() + + def test_disable_wait_timeout(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib_and_status( + fixture_primitive_cib_enabled, + self.fixture_status_running, + fixture_primitive_cib_disabled + ) + + + fixture.call_wait( + 10, retval=62, stderr=self.fixture_wait_timeout_error + ) + ) + + assert_raise_library_error( + lambda: resource.disable(self.env, ["A"], 10), + fixture.report_wait_for_idle_timed_out( + self.fixture_wait_timeout_error + ), + ) + self.runner.assert_everything_launched() + + +class WaitClone(ResourceWithStateTest): + fixture_status_running = """ + + + + + + + + + + + """ + fixture_status_stopped = """ + + + + + + + + + """ + def test_disable_clone(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib_and_status( + fixture_clone_cib_enabled, + self.fixture_status_running, + fixture_clone_cib_disabled_clone + ) + + + fixture.call_wait(10) + + + fixture.call_status( + 
fixture.state_complete(self.fixture_status_stopped) + ) + ) + + resource.disable(self.env, ["A-clone"], 10) + self.env.report_processor.assert_reports([ + ( + severities.INFO, + report_codes.RESOURCE_DOES_NOT_RUN, + { + "resource_id": "A-clone", + }, + None + ) + ]) + self.runner.assert_everything_launched() + + def test_enable_clone(self): + self.runner.set_runs( + fixture.call_wait_supported() + + + fixture.calls_cib_and_status( + fixture_clone_cib_disabled_clone, + self.fixture_status_stopped, + fixture_clone_cib_enabled + ) + + + fixture.call_wait(10) + + + fixture.call_status( + fixture.state_complete(self.fixture_status_running) + ) + ) + + resource.enable(self.env, ["A-clone"], 10) + + self.env.report_processor.assert_reports([ + ( + severities.INFO, + report_codes.RESOURCE_RUNNING_ON_NODES, + { + "resource_id": "A-clone", + "roles_with_nodes": {"Started": ["node1", "node2"]}, + }, + None + ) + ]) + self.runner.assert_everything_launched() + + +class DisableGroup(ResourceWithStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_group_cib_enabled, + fixture_group_status_managed, + lambda: resource.disable(self.env, ["A1"], False), + fixture_group_cib_disabled_primitive + ) + + def test_group(self): + self.assert_command_effect( + fixture_group_cib_enabled, + fixture_group_status_managed, + lambda: resource.disable(self.env, ["A"], False), + fixture_group_cib_disabled_group + ) + + def test_primitive_unmanaged(self): + self.assert_command_effect( + fixture_group_cib_enabled, + fixture_group_status_unmanaged, + lambda: resource.disable(self.env, ["A1"], False), + fixture_group_cib_disabled_primitive, + reports=[ + fixture_report_unmanaged("A1"), + ] + ) + + def test_group_unmanaged(self): + self.assert_command_effect( + fixture_group_cib_enabled, + fixture_group_status_unmanaged, + lambda: resource.disable(self.env, ["A"], False), + fixture_group_cib_disabled_group, + reports=[ + fixture_report_unmanaged("A"), + ] + ) + + +class EnableGroup(ResourceWithStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_group_cib_disabled_primitive, + fixture_group_status_managed, + lambda: resource.enable(self.env, ["A1"], False), + fixture_group_cib_enabled + ) + + def test_primitive_disabled_both(self): + self.assert_command_effect( + fixture_group_cib_disabled_both, + fixture_group_status_managed, + lambda: resource.enable(self.env, ["A1"], False), + fixture_group_cib_disabled_group + ) + + def test_group(self): + self.assert_command_effect( + fixture_group_cib_disabled_group, + fixture_group_status_managed, + lambda: resource.enable(self.env, ["A"], False), + fixture_group_cib_enabled + ) + + def test_group_both_disabled(self): + self.assert_command_effect( + fixture_group_cib_disabled_both, + fixture_group_status_managed, + lambda: resource.enable(self.env, ["A"], False), + fixture_group_cib_disabled_primitive + ) + + def test_primitive_unmanaged(self): + self.assert_command_effect( + fixture_group_cib_disabled_primitive, + fixture_group_status_unmanaged, + lambda: resource.enable(self.env, ["A1"], False), + fixture_group_cib_enabled, + reports=[ + fixture_report_unmanaged("A1"), + ] + ) + + def test_group_unmanaged(self): + self.assert_command_effect( + fixture_group_cib_disabled_group, + fixture_group_status_unmanaged, + lambda: resource.enable(self.env, ["A"], False), + fixture_group_cib_enabled, + reports=[ + fixture_report_unmanaged("A"), + ] + ) + + +class DisableClone(ResourceWithStateTest): + def test_primitive(self): + 
self.assert_command_effect( + fixture_clone_cib_enabled, + fixture_clone_status_managed, + lambda: resource.disable(self.env, ["A"], False), + fixture_clone_cib_disabled_primitive + ) + + def test_clone(self): + self.assert_command_effect( + fixture_clone_cib_enabled, + fixture_clone_status_managed, + lambda: resource.disable(self.env, ["A-clone"], False), + fixture_clone_cib_disabled_clone + ) + + def test_primitive_unmanaged(self): + self.assert_command_effect( + fixture_clone_cib_enabled, + fixture_clone_status_unmanaged, + lambda: resource.disable(self.env, ["A"], False), + fixture_clone_cib_disabled_primitive, + reports=[ + fixture_report_unmanaged("A"), + ] + ) + + def test_clone_unmanaged(self): + self.assert_command_effect( + fixture_clone_cib_enabled, + fixture_clone_status_unmanaged, + lambda: resource.disable(self.env, ["A-clone"], False), + fixture_clone_cib_disabled_clone, + reports=[ + fixture_report_unmanaged("A-clone"), + ] + ) + + +class EnableClone(ResourceWithStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_clone_cib_disabled_primitive, + fixture_clone_status_managed, + lambda: resource.enable(self.env, ["A"], False), + fixture_clone_cib_enabled + ) + + def test_primitive_disabled_both(self): + self.assert_command_effect( + fixture_clone_cib_disabled_both, + fixture_clone_status_managed, + lambda: resource.enable(self.env, ["A"], False), + fixture_clone_cib_enabled + ) + + def test_clone(self): + self.assert_command_effect( + fixture_clone_cib_disabled_clone, + fixture_clone_status_managed, + lambda: resource.enable(self.env, ["A-clone"], False), + fixture_clone_cib_enabled + ) + + def test_clone_disabled_both(self): + self.assert_command_effect( + fixture_clone_cib_disabled_both, + fixture_clone_status_managed, + lambda: resource.enable(self.env, ["A-clone"], False), + fixture_clone_cib_enabled + ) + + def test_primitive_unmanaged(self): + self.assert_command_effect( + fixture_clone_cib_disabled_primitive, + fixture_clone_status_unmanaged, + lambda: resource.enable(self.env, ["A"], False), + fixture_clone_cib_enabled, + reports=[ + fixture_report_unmanaged("A-clone"), + fixture_report_unmanaged("A"), + ] + ) + + def test_clone_unmanaged(self): + self.assert_command_effect( + fixture_clone_cib_disabled_clone, + fixture_clone_status_unmanaged, + lambda: resource.enable(self.env, ["A-clone"], False), + fixture_clone_cib_enabled, + reports=[ + fixture_report_unmanaged("A-clone"), + fixture_report_unmanaged("A"), + ] + ) + + +class DisableMaster(ResourceWithStateTest): + # masters behave the same as clones; only a minimal set of tests is kept here + def test_primitive(self): + self.assert_command_effect( + fixture_master_cib_enabled, + fixture_master_status_managed, + lambda: resource.disable(self.env, ["A"], False), + fixture_master_cib_disabled_primitive + ) + + def test_master(self): + self.assert_command_effect( + fixture_master_cib_enabled, + fixture_master_status_managed, + lambda: resource.disable(self.env, ["A-master"], False), + fixture_master_cib_disabled_master + ) + + +class EnableMaster(ResourceWithStateTest): + # masters behave the same as clones; only a minimal set of tests is kept here + def test_primitive(self): + self.assert_command_effect( + fixture_master_cib_disabled_primitive, + fixture_master_status_managed, + lambda: resource.enable(self.env, ["A"], False), + fixture_master_cib_enabled + ) + + def test_primitive_disabled_both(self): + self.assert_command_effect( + fixture_master_cib_disabled_both, + fixture_master_status_managed, + lambda: resource.enable(self.env, ["A"], False), + 
fixture_master_cib_enabled + ) + + def test_master(self): + self.assert_command_effect( + fixture_master_cib_disabled_master, + fixture_master_status_managed, + lambda: resource.enable(self.env, ["A-master"], False), + fixture_master_cib_enabled + ) + + def test_master_disabled_both(self): + self.assert_command_effect( + fixture_master_cib_disabled_both, + fixture_master_status_managed, + lambda: resource.enable(self.env, ["A-master"], False), + fixture_master_cib_enabled + ) + +class DisableClonedGroup(ResourceWithStateTest): + def test_clone(self): + self.assert_command_effect( + fixture_clone_group_cib_enabled, + fixture_clone_group_status_managed, + lambda: resource.disable(self.env, ["A-clone"], False), + fixture_clone_group_cib_disabled_clone + ) + + def test_group(self): + self.assert_command_effect( + fixture_clone_group_cib_enabled, + fixture_clone_group_status_managed, + lambda: resource.disable(self.env, ["A"], False), + fixture_clone_group_cib_disabled_group + ) + + def test_primitive(self): + self.assert_command_effect( + fixture_clone_group_cib_enabled, + fixture_clone_group_status_managed, + lambda: resource.disable(self.env, ["A1"], False), + fixture_clone_group_cib_disabled_primitive + ) + + def test_clone_unmanaged(self): + self.assert_command_effect( + fixture_clone_group_cib_enabled, + fixture_clone_group_status_unmanaged, + lambda: resource.disable(self.env, ["A-clone"], False), + fixture_clone_group_cib_disabled_clone, + reports=[ + fixture_report_unmanaged("A-clone"), + ] + ) + + def test_group_unmanaged(self): + self.assert_command_effect( + fixture_clone_group_cib_enabled, + fixture_clone_group_status_unmanaged, + lambda: resource.disable(self.env, ["A"], False), + fixture_clone_group_cib_disabled_group, + reports=[ + fixture_report_unmanaged("A"), + ] + ) + + def test_primitive_unmanaged(self): + self.assert_command_effect( + fixture_clone_group_cib_enabled, + fixture_clone_group_status_unmanaged, + lambda: resource.disable(self.env, ["A1"], False), + fixture_clone_group_cib_disabled_primitive, + reports=[ + fixture_report_unmanaged("A1"), + ] + ) + + +class EnableClonedGroup(ResourceWithStateTest): + def test_clone(self): + self.assert_command_effect( + fixture_clone_group_cib_disabled_clone, + fixture_clone_group_status_managed, + lambda: resource.enable(self.env, ["A-clone"], False), + fixture_clone_group_cib_enabled, + ) + + def test_clone_disabled_all(self): + self.assert_command_effect( + fixture_clone_group_cib_disabled_all, + fixture_clone_group_status_managed, + lambda: resource.enable(self.env, ["A-clone"], False), + fixture_clone_group_cib_disabled_primitive + ) + + def test_group(self): + self.assert_command_effect( + fixture_clone_group_cib_disabled_group, + fixture_clone_group_status_managed, + lambda: resource.enable(self.env, ["A"], False), + fixture_clone_group_cib_enabled + ) + + def test_group_disabled_all(self): + self.assert_command_effect( + fixture_clone_group_cib_disabled_all, + fixture_clone_group_status_managed, + lambda: resource.enable(self.env, ["A"], False), + fixture_clone_group_cib_disabled_primitive + ) + + def test_primitive(self): + self.assert_command_effect( + fixture_clone_group_cib_disabled_primitive, + fixture_clone_group_status_managed, + lambda: resource.enable(self.env, ["A1"], False), + fixture_clone_group_cib_enabled + ) + + def test_primitive_disabled_all(self): + self.assert_command_effect( + fixture_clone_group_cib_disabled_all, + fixture_clone_group_status_managed, + lambda: resource.enable(self.env, ["A1"], False), 
+ fixture_clone_group_cib_disabled_clone_group + ) + + def test_clone_unmanaged(self): + self.assert_command_effect( + fixture_clone_group_cib_disabled_clone, + fixture_clone_group_status_unmanaged, + lambda: resource.enable(self.env, ["A-clone"], False), + fixture_clone_group_cib_enabled, + reports=[ + fixture_report_unmanaged("A-clone"), + fixture_report_unmanaged("A"), + ] + ) + + def test_group_unmanaged(self): + self.assert_command_effect( + fixture_clone_group_cib_disabled_group, + fixture_clone_group_status_unmanaged, + lambda: resource.enable(self.env, ["A"], False), + fixture_clone_group_cib_enabled, + reports=[ + fixture_report_unmanaged("A"), + fixture_report_unmanaged("A-clone"), + ] + ) + + def test_primitive_unmanaged(self): + self.assert_command_effect( + fixture_clone_group_cib_disabled_primitive, + fixture_clone_group_status_unmanaged, + lambda: resource.enable(self.env, ["A1"], False), + fixture_clone_group_cib_enabled, + reports=[ + fixture_report_unmanaged("A1"), + ] + ) + + +@skip_unless_pacemaker_supports_bundle +class DisableBundle(ResourceWithStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_bundle_cib_enabled, + fixture_bundle_status_managed, + lambda: resource.disable(self.env, ["A"], False), + fixture_bundle_cib_disabled_primitive + ) + + def test_bundle(self): + self.assert_command_effect( + fixture_bundle_cib_enabled, + fixture_bundle_status_managed, + lambda: resource.disable(self.env, ["A-bundle"], False), + fixture_bundle_cib_disabled_bundle + ) + + def test_primitive_unmanaged(self): + self.assert_command_effect( + fixture_bundle_cib_enabled, + fixture_bundle_status_unmanaged, + lambda: resource.disable(self.env, ["A"], False), + fixture_bundle_cib_disabled_primitive, + reports=[ + fixture_report_unmanaged("A"), + ] + ) + + def test_bundle_unmanaged(self): + self.assert_command_effect( + fixture_bundle_cib_enabled, + fixture_bundle_status_unmanaged, + lambda: resource.disable(self.env, ["A-bundle"], False), + fixture_bundle_cib_disabled_bundle, + reports=[ + fixture_report_unmanaged("A-bundle"), + ] + ) + + +@skip_unless_pacemaker_supports_bundle +class EnableBundle(ResourceWithStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_bundle_cib_disabled_primitive, + fixture_bundle_status_managed, + lambda: resource.enable(self.env, ["A"], False), + fixture_bundle_cib_enabled + ) + + def test_primitive_disabled_both(self): + self.assert_command_effect( + fixture_bundle_cib_disabled_both, + fixture_bundle_status_managed, + lambda: resource.enable(self.env, ["A"], False), + fixture_bundle_cib_enabled + ) + + def test_bundle(self): + self.assert_command_effect( + fixture_bundle_cib_disabled_bundle, + fixture_bundle_status_managed, + lambda: resource.enable(self.env, ["A-bundle"], False), + fixture_bundle_cib_enabled + ) + + def test_bundle_disabled_both(self): + self.assert_command_effect( + fixture_bundle_cib_disabled_both, + fixture_bundle_status_managed, + lambda: resource.enable(self.env, ["A-bundle"], False), + fixture_bundle_cib_enabled + ) + + def test_primitive_unmanaged(self): + self.assert_command_effect( + fixture_bundle_cib_disabled_primitive, + fixture_bundle_status_unmanaged, + lambda: resource.enable(self.env, ["A"], False), + fixture_bundle_cib_enabled, + reports=[ + fixture_report_unmanaged("A"), + fixture_report_unmanaged("A-bundle"), + ] + ) + + def test_bundle_unmanaged(self): + self.assert_command_effect( + fixture_bundle_cib_disabled_primitive, + fixture_bundle_status_unmanaged, + 
lambda: resource.enable(self.env, ["A-bundle"], False), + fixture_bundle_cib_enabled, + reports=[ + fixture_report_unmanaged("A-bundle"), + fixture_report_unmanaged("A"), + ] + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/test_resource_manage_unmanage.py pcs-0.9.159/pcs/lib/commands/test/resource/test_resource_manage_unmanage.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/resource/test_resource_manage_unmanage.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/resource/test_resource_manage_unmanage.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,1247 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + + +from pcs.common import report_codes +from pcs.lib.commands import resource +from pcs.lib.commands.test.resource.common import ResourceWithoutStateTest +import pcs.lib.commands.test.resource.fixture as fixture +from pcs.lib.errors import ReportItemSeverity as severities +from pcs.test.tools.assertions import assert_raise_library_error + + +fixture_primitive_cib_managed = """ + + + + +""" +fixture_primitive_cib_unmanaged = """ + + + + + + + +""" + +fixture_primitive_cib_managed_op_enabled = """ + + + + + + + + + + +""" +fixture_primitive_cib_managed_op_disabled = """ + + + + + + + + + + +""" +fixture_primitive_cib_unmanaged_op_enabled = """ + + + + + + + + + + + + + +""" +fixture_primitive_cib_unmanaged_op_disabled = """ + + + + + + + + + + + + + +""" + +fixture_group_cib_managed = """ + + + + + + + + +""" +fixture_group_cib_unmanaged_resource = """ + + + + + + + + + + + +""" +fixture_group_cib_unmanaged_resource_and_group = """ + + + + + + + + + + + + + + +""" +fixture_group_cib_unmanaged_all_resources = """ + + + + + + + + + + + + + + +""" + +fixture_clone_cib_managed = """ + + + + + + +""" +fixture_clone_cib_unmanaged_clone = """ + + + + + + + + + +""" +fixture_clone_cib_unmanaged_primitive = """ + + + + + + + + + +""" +fixture_clone_cib_unmanaged_both = """ + + + + + + + + + + + + +""" + +fixture_clone_cib_managed_op_enabled = """ + + + + + + + + + + + +""" +fixture_clone_cib_unmanaged_primitive_op_disabled = """ + + + + + + + + + + + + + + +""" + +fixture_master_cib_managed = """ + + + + + + +""" +fixture_master_cib_unmanaged_master = """ + + + + + + + + + +""" +fixture_master_cib_unmanaged_primitive = """ + + + + + + + + + +""" +fixture_master_cib_unmanaged_both = """ + + + + + + + + + + + + +""" + +fixture_master_cib_managed_op_enabled = """ + + + + + + + + + + + +""" +fixture_master_cib_unmanaged_primitive_op_disabled = """ + + + + + + + + + + + + + + +""" + +fixture_clone_group_cib_managed = """ + + + + + + + + + + +""" +fixture_clone_group_cib_unmanaged_primitive = """ + + + + + + + + + + + + + +""" +fixture_clone_group_cib_unmanaged_all_primitives = """ + + + + + + + + + + + + + + + + +""" +fixture_clone_group_cib_unmanaged_clone = """ + + + + + + + + + + + + + +""" +fixture_clone_group_cib_unmanaged_everything = """ + + + + + + + + + + + + + + + + + + + + + + +""" + +fixture_clone_group_cib_managed_op_enabled = """ + + + + + + + + + + + + + + + + + + + + +""" +fixture_clone_group_cib_unmanaged_primitive_op_disabled = """ + + + + + + + + + + + + + + + + + + + + + + + +""" +fixture_clone_group_cib_unmanaged_all_primitives_op_disabled = """ + + + + + + + + + + + + + + + + + + + + + + + + + + +""" + + +fixture_bundle_empty_cib_managed = """ + + + + + +""" +fixture_bundle_empty_cib_unmanaged_bundle = """ + + + + + + + + +""" + +fixture_bundle_cib_managed = """ + + + + + + + +""" 
+fixture_bundle_cib_unmanaged_bundle = """ + + + + + + + + + + +""" +fixture_bundle_cib_unmanaged_primitive = """ + + + + + + + + + + +""" +fixture_bundle_cib_unmanaged_both = """ + + + + + + + + + + + + + +""" + +fixture_bundle_cib_managed_op_enabled = """ + + + + + + + + + + + + +""" +fixture_bundle_cib_unmanaged_primitive_op_disabled = """ + + + + + + + + + + + + + + + +""" +fixture_bundle_cib_unmanaged_both_op_disabled = """ + + + + + + + + + + + + + + + + + + +""" + +def fixture_report_no_monitors(resource): + return ( + severities.WARNING, + report_codes.RESOURCE_MANAGED_NO_MONITOR_ENABLED, + { + "resource_id": resource, + }, + None + ) + + +class UnmanagePrimitive(ResourceWithoutStateTest): + def test_nonexistent_resource(self): + self.runner.set_runs( + fixture.call_cib_load( + fixture.cib_resources(fixture_primitive_cib_managed) + ) + ) + + assert_raise_library_error( + lambda: resource.unmanage(self.env, ["B"]), + fixture.report_not_found("B", "resources") + ) + self.runner.assert_everything_launched() + + def test_primitive(self): + self.assert_command_effect( + fixture_primitive_cib_managed, + lambda: resource.unmanage(self.env, ["A"]), + fixture_primitive_cib_unmanaged + ) + + def test_primitive_unmanaged(self): + self.assert_command_effect( + fixture_primitive_cib_unmanaged, + lambda: resource.unmanage(self.env, ["A"]), + fixture_primitive_cib_unmanaged + ) + + +class ManagePrimitive(ResourceWithoutStateTest): + def test_nonexistent_resource(self): + self.runner.set_runs( + fixture.call_cib_load( + fixture.cib_resources(fixture_primitive_cib_unmanaged) + ) + ) + + assert_raise_library_error( + lambda: resource.manage(self.env, ["B"]), + fixture.report_not_found("B", "resources") + ) + self.runner.assert_everything_launched() + + def test_primitive(self): + self.assert_command_effect( + fixture_primitive_cib_unmanaged, + lambda: resource.manage(self.env, ["A"]), + fixture_primitive_cib_managed + ) + + def test_primitive_managed(self): + self.assert_command_effect( + fixture_primitive_cib_managed, + lambda: resource.manage(self.env, ["A"]), + fixture_primitive_cib_managed + ) + + +class UnmanageGroup(ResourceWithoutStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_group_cib_managed, + lambda: resource.unmanage(self.env, ["A1"]), + fixture_group_cib_unmanaged_resource + ) + + def test_group(self): + self.assert_command_effect( + fixture_group_cib_managed, + lambda: resource.unmanage(self.env, ["A"]), + fixture_group_cib_unmanaged_all_resources + ) + + +class ManageGroup(ResourceWithoutStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_group_cib_unmanaged_all_resources, + lambda: resource.manage(self.env, ["A2"]), + fixture_group_cib_unmanaged_resource + ) + + def test_primitive_unmanaged_group(self): + self.assert_command_effect( + fixture_group_cib_unmanaged_resource_and_group, + lambda: resource.manage(self.env, ["A1"]), + fixture_group_cib_managed + ) + + def test_group(self): + self.assert_command_effect( + fixture_group_cib_unmanaged_all_resources, + lambda: resource.manage(self.env, ["A"]), + fixture_group_cib_managed + ) + + def test_group_unmanaged_group(self): + self.assert_command_effect( + fixture_group_cib_unmanaged_resource_and_group, + lambda: resource.manage(self.env, ["A"]), + fixture_group_cib_managed + ) + + +class UnmanageClone(ResourceWithoutStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_clone_cib_managed, + lambda: resource.unmanage(self.env, ["A"]), + 
fixture_clone_cib_unmanaged_primitive + ) + + def test_clone(self): + self.assert_command_effect( + fixture_clone_cib_managed, + lambda: resource.unmanage(self.env, ["A-clone"]), + fixture_clone_cib_unmanaged_primitive + ) + + +class ManageClone(ResourceWithoutStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_clone_cib_unmanaged_clone, + lambda: resource.manage(self.env, ["A"]), + fixture_clone_cib_managed + ) + + def test_primitive_unmanaged_primitive(self): + self.assert_command_effect( + fixture_clone_cib_unmanaged_primitive, + lambda: resource.manage(self.env, ["A"]), + fixture_clone_cib_managed + ) + + def test_primitive_unmanaged_both(self): + self.assert_command_effect( + fixture_clone_cib_unmanaged_both, + lambda: resource.manage(self.env, ["A"]), + fixture_clone_cib_managed + ) + + def test_clone(self): + self.assert_command_effect( + fixture_clone_cib_unmanaged_clone, + lambda: resource.manage(self.env, ["A-clone"]), + fixture_clone_cib_managed + ) + + def test_clone_unmanaged_primitive(self): + self.assert_command_effect( + fixture_clone_cib_unmanaged_primitive, + lambda: resource.manage(self.env, ["A-clone"]), + fixture_clone_cib_managed + ) + + def test_clone_unmanaged_both(self): + self.assert_command_effect( + fixture_clone_cib_unmanaged_both, + lambda: resource.manage(self.env, ["A-clone"]), + fixture_clone_cib_managed + ) + + +class UnmanageMaster(ResourceWithoutStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_master_cib_managed, + lambda: resource.unmanage(self.env, ["A"]), + fixture_master_cib_unmanaged_primitive + ) + + def test_master(self): + self.assert_command_effect( + fixture_master_cib_managed, + lambda: resource.unmanage(self.env, ["A-master"]), + fixture_master_cib_unmanaged_primitive + ) + + +class ManageMaster(ResourceWithoutStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_master_cib_unmanaged_primitive, + lambda: resource.manage(self.env, ["A"]), + fixture_master_cib_managed + ) + + def test_primitive_unmanaged_master(self): + self.assert_command_effect( + fixture_master_cib_unmanaged_master, + lambda: resource.manage(self.env, ["A"]), + fixture_master_cib_managed + ) + + def test_primitive_unmanaged_both(self): + self.assert_command_effect( + fixture_master_cib_unmanaged_both, + lambda: resource.manage(self.env, ["A"]), + fixture_master_cib_managed + ) + + def test_master(self): + self.assert_command_effect( + fixture_master_cib_unmanaged_master, + lambda: resource.manage(self.env, ["A-master"]), + fixture_master_cib_managed + ) + + def test_master_unmanaged_primitive(self): + self.assert_command_effect( + fixture_master_cib_unmanaged_primitive, + lambda: resource.manage(self.env, ["A-master"]), + fixture_master_cib_managed + ) + + def test_master_unmanaged_both(self): + self.assert_command_effect( + fixture_master_cib_unmanaged_both, + lambda: resource.manage(self.env, ["A-master"]), + fixture_master_cib_managed + ) + + +class UnmanageClonedGroup(ResourceWithoutStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_clone_group_cib_managed, + lambda: resource.unmanage(self.env, ["A1"]), + fixture_clone_group_cib_unmanaged_primitive + ) + + def test_group(self): + self.assert_command_effect( + fixture_clone_group_cib_managed, + lambda: resource.unmanage(self.env, ["A"]), + fixture_clone_group_cib_unmanaged_all_primitives + ) + + def test_clone(self): + self.assert_command_effect( + fixture_clone_group_cib_managed, + lambda: 
resource.unmanage(self.env, ["A-clone"]), + fixture_clone_group_cib_unmanaged_all_primitives + ) + + +class ManageClonedGroup(ResourceWithoutStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_clone_group_cib_unmanaged_primitive, + lambda: resource.manage(self.env, ["A1"]), + fixture_clone_group_cib_managed + ) + + def test_primitive_unmanaged_all(self): + self.assert_command_effect( + fixture_clone_group_cib_unmanaged_everything, + lambda: resource.manage(self.env, ["A2"]), + fixture_clone_group_cib_unmanaged_primitive + ) + + def test_group(self): + self.assert_command_effect( + fixture_clone_group_cib_unmanaged_all_primitives, + lambda: resource.manage(self.env, ["A"]), + fixture_clone_group_cib_managed + ) + + def test_group_unmanaged_all(self): + self.assert_command_effect( + fixture_clone_group_cib_unmanaged_everything, + lambda: resource.manage(self.env, ["A"]), + fixture_clone_group_cib_managed + ) + + def test_clone(self): + self.assert_command_effect( + fixture_clone_group_cib_unmanaged_clone, + lambda: resource.manage(self.env, ["A-clone"]), + fixture_clone_group_cib_managed + ) + + def test_clone_unmanaged_all(self): + self.assert_command_effect( + fixture_clone_group_cib_unmanaged_everything, + lambda: resource.manage(self.env, ["A-clone"]), + fixture_clone_group_cib_managed + ) + + +class UnmanageBundle(ResourceWithoutStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_bundle_cib_managed, + lambda: resource.unmanage(self.env, ["A"]), + fixture_bundle_cib_unmanaged_primitive + ) + + def test_bundle(self): + self.assert_command_effect( + fixture_bundle_cib_managed, + lambda: resource.unmanage(self.env, ["A-bundle"]), + fixture_bundle_cib_unmanaged_both + ) + + def test_bundle_empty(self): + self.assert_command_effect( + fixture_bundle_empty_cib_managed, + lambda: resource.unmanage(self.env, ["A-bundle"]), + fixture_bundle_empty_cib_unmanaged_bundle + ) + + +class ManageBundle(ResourceWithoutStateTest): + def test_primitive(self): + self.assert_command_effect( + fixture_bundle_cib_unmanaged_primitive, + lambda: resource.manage(self.env, ["A"]), + fixture_bundle_cib_managed, + ) + + def test_primitive_unmanaged_bundle(self): + self.assert_command_effect( + fixture_bundle_cib_unmanaged_bundle, + lambda: resource.manage(self.env, ["A"]), + fixture_bundle_cib_managed, + ) + + def test_primitive_unmanaged_both(self): + self.assert_command_effect( + fixture_bundle_cib_unmanaged_both, + lambda: resource.manage(self.env, ["A"]), + fixture_bundle_cib_managed, + ) + + def test_bundle(self): + self.assert_command_effect( + fixture_bundle_cib_unmanaged_bundle, + lambda: resource.manage(self.env, ["A-bundle"]), + fixture_bundle_cib_managed, + ) + + def test_bundle_unmanaged_primitive(self): + self.assert_command_effect( + fixture_bundle_cib_unmanaged_primitive, + lambda: resource.manage(self.env, ["A-bundle"]), + fixture_bundle_cib_managed, + ) + + def test_bundle_unmanaged_both(self): + self.assert_command_effect( + fixture_bundle_cib_unmanaged_both, + lambda: resource.manage(self.env, ["A-bundle"]), + fixture_bundle_cib_managed, + ) + + def test_bundle_empty(self): + self.assert_command_effect( + fixture_bundle_empty_cib_unmanaged_bundle, + lambda: resource.manage(self.env, ["A-bundle"]), + fixture_bundle_empty_cib_managed + ) + + +class MoreResources(ResourceWithoutStateTest): + fixture_cib_managed = """ + + + + + + + + + """ + fixture_cib_unmanaged = """ + + + + + + + + + + + + + + + + + + """ + + def test_success_unmanage(self): + 
fixture_cib_unmanaged = """ + + + + + + + + + + + + + + + """ + self.assert_command_effect( + self.fixture_cib_managed, + lambda: resource.unmanage(self.env, ["A", "C"]), + fixture_cib_unmanaged + ) + + def test_success_manage(self): + fixture_cib_managed = """ + + + + + + + + + + + + """ + self.assert_command_effect( + self.fixture_cib_unmanaged, + lambda: resource.manage(self.env, ["A", "C"]), + fixture_cib_managed + ) + + def test_bad_resource_unmanage(self): + self.runner.set_runs( + fixture.call_cib_load( + fixture.cib_resources(self.fixture_cib_managed) + ) + ) + + assert_raise_library_error( + lambda: resource.unmanage(self.env, ["B", "X", "Y", "A"]), + fixture.report_not_found("X", "resources"), + fixture.report_not_found("Y", "resources"), + ) + self.runner.assert_everything_launched() + + def test_bad_resource_enable(self): + self.runner.set_runs( + fixture.call_cib_load( + fixture.cib_resources(self.fixture_cib_unmanaged) + ) + ) + + assert_raise_library_error( + lambda: resource.manage(self.env, ["B", "X", "Y", "A"]), + fixture.report_not_found("X", "resources"), + fixture.report_not_found("Y", "resources"), + ) + self.runner.assert_everything_launched() + + +class WithMonitor(ResourceWithoutStateTest): + def test_unmanage_noop(self): + self.assert_command_effect( + fixture_primitive_cib_managed, + lambda: resource.unmanage(self.env, ["A"], True), + fixture_primitive_cib_unmanaged + ) + + def test_manage_noop(self): + self.assert_command_effect( + fixture_primitive_cib_unmanaged, + lambda: resource.manage(self.env, ["A"], True), + fixture_primitive_cib_managed + ) + + def test_unmanage(self): + self.assert_command_effect( + fixture_primitive_cib_managed_op_enabled, + lambda: resource.unmanage(self.env, ["A"], True), + fixture_primitive_cib_unmanaged_op_disabled + ) + + def test_manage(self): + self.assert_command_effect( + fixture_primitive_cib_unmanaged_op_disabled, + lambda: resource.manage(self.env, ["A"], True), + fixture_primitive_cib_managed_op_enabled + ) + + def test_unmanage_enabled_monitors(self): + self.assert_command_effect( + fixture_primitive_cib_managed_op_enabled, + lambda: resource.unmanage(self.env, ["A"], False), + fixture_primitive_cib_unmanaged_op_enabled + ) + + def test_manage_disabled_monitors(self): + self.assert_command_effect( + fixture_primitive_cib_unmanaged_op_disabled, + lambda: resource.manage(self.env, ["A"], False), + fixture_primitive_cib_managed_op_disabled, + [ + fixture_report_no_monitors("A"), + ] + ) + + def test_unmanage_clone(self): + self.assert_command_effect( + fixture_clone_cib_managed_op_enabled, + lambda: resource.unmanage(self.env, ["A-clone"], True), + fixture_clone_cib_unmanaged_primitive_op_disabled + ) + + def test_unmanage_in_clone(self): + self.assert_command_effect( + fixture_clone_cib_managed_op_enabled, + lambda: resource.unmanage(self.env, ["A"], True), + fixture_clone_cib_unmanaged_primitive_op_disabled + ) + + def test_unmanage_master(self): + self.assert_command_effect( + fixture_master_cib_managed_op_enabled, + lambda: resource.unmanage(self.env, ["A-master"], True), + fixture_master_cib_unmanaged_primitive_op_disabled + ) + + def test_unmanage_in_master(self): + self.assert_command_effect( + fixture_master_cib_managed_op_enabled, + lambda: resource.unmanage(self.env, ["A"], True), + fixture_master_cib_unmanaged_primitive_op_disabled + ) + + def test_unmanage_clone_with_group(self): + self.assert_command_effect( + fixture_clone_group_cib_managed_op_enabled, + lambda: resource.unmanage(self.env, ["A-clone"], True), + 
fixture_clone_group_cib_unmanaged_all_primitives_op_disabled + ) + + def test_unmanage_group_in_clone(self): + self.assert_command_effect( + fixture_clone_group_cib_managed_op_enabled, + lambda: resource.unmanage(self.env, ["A"], True), + fixture_clone_group_cib_unmanaged_all_primitives_op_disabled + ) + + def test_unmanage_in_cloned_group(self): + self.assert_command_effect( + fixture_clone_group_cib_managed_op_enabled, + lambda: resource.unmanage(self.env, ["A1"], True), + fixture_clone_group_cib_unmanaged_primitive_op_disabled + ) + + def test_unmanage_bundle(self): + self.assert_command_effect( + fixture_bundle_cib_managed_op_enabled, + lambda: resource.unmanage(self.env, ["A-bundle"], True), + fixture_bundle_cib_unmanaged_both_op_disabled + ) + + def test_unmanage_in_bundle(self): + self.assert_command_effect( + fixture_bundle_cib_managed_op_enabled, + lambda: resource.unmanage(self.env, ["A"], True), + fixture_bundle_cib_unmanaged_primitive_op_disabled + ) + + def test_unmanage_bundle_empty(self): + self.assert_command_effect( + fixture_bundle_empty_cib_managed, + lambda: resource.unmanage(self.env, ["A-bundle"], True), + fixture_bundle_empty_cib_unmanaged_bundle + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/test_acl.py pcs-0.9.159/pcs/lib/commands/test/test_acl.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/test_acl.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/test_acl.py 2017-06-30 15:33:01.000000000 +0000 @@ -5,24 +5,12 @@ unicode_literals, ) - -from pcs.test.tools.assertions import ( - assert_raise_library_error, - ExtendedAssertionsMixin, -) +import pcs.lib.commands.acl as cmd_acl +from pcs.lib.env import LibraryEnvironment +from pcs.test.tools.assertions import ExtendedAssertionsMixin from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.pcs_unittest import mock, TestCase -from pcs.common import report_codes -from pcs.lib.errors import ( - LibraryError, - ReportItemSeverity as Severities, -) -from pcs.lib.env import LibraryEnvironment - -import pcs.lib.commands.acl as cmd_acl -import pcs.lib.cib.acl as acl_lib - REQUIRED_CIB_VERSION = (2, 0, 0) @@ -44,8 +32,26 @@ def assert_cib_not_pushed(self): self.assertEqual(0, self.mock_env.push_cib.call_count) +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) +class CibAclSection(TestCase): + def test_push_cib_on_success(self): + env = mock.MagicMock() + env.get_cib = mock.Mock(return_value="cib") + with cmd_acl.cib_acl_section(env): + pass + env.get_cib.assert_called_once_with(cmd_acl.REQUIRED_CIB_VERSION) + env.push_cib.assert_called_once_with("cib") + + def test_does_not_push_cib_on_exception(self): + env = mock.MagicMock() + def run(): + with cmd_acl.cib_acl_section(env): + raise AssertionError() + self.assertRaises(AssertionError, run) + env.get_cib.assert_called_once_with(cmd_acl.REQUIRED_CIB_VERSION) + env.push_cib.assert_not_called() - +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.validate_permissions") @mock.patch("pcs.lib.cib.acl.create_role") @mock.patch("pcs.lib.cib.acl.add_permissions_to_role") @@ -72,358 +78,99 @@ self.assert_same_cib_pushed() +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.remove_role") class RemoveRoleTest(AclCommandsTest): - def test_success_no_autodelete(self, mock_remove): + def test_success(self, mock_remove): cmd_acl.remove_role(self.mock_env, "role_id", False) 
self.assert_get_cib_called() mock_remove.assert_called_once_with(self.cib, "role_id", False) self.assert_same_cib_pushed() - def test_success_autodelete(self, mock_remove): - cmd_acl.remove_role(self.mock_env, "role_id", True) - self.assert_get_cib_called() - mock_remove.assert_called_once_with(self.cib, "role_id", True) - self.assert_same_cib_pushed() - def test_role_not_found(self, mock_remove): - mock_remove.side_effect = acl_lib.AclRoleNotFound("role_id") - assert_raise_library_error( - lambda: cmd_acl.remove_role(self.mock_env, "role_id", True), - ( - Severities.ERROR, - report_codes.ID_NOT_FOUND, - { - "id": "role_id", - "id_description": "role", - } - ) - ) - self.assert_get_cib_called() - mock_remove.assert_called_once_with(self.cib, "role_id", True) - self.assert_cib_not_pushed() - - -@mock.patch("pcs.lib.commands.acl._get_target_or_group") +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) +@mock.patch("pcs.lib.cib.acl.find_target_or_group") @mock.patch("pcs.lib.cib.acl.assign_role") -@mock.patch("pcs.lib.cib.acl.find_role") -@mock.patch("pcs.lib.cib.acl.acl_error_to_report_item") class AssignRoleNotSpecific(AclCommandsTest, ExtendedAssertionsMixin): - def test_success( - self, mock_error_convert, mock_find_role, mock_assign, mock_get_tg - ): - mock_get_tg.return_value = "target_el" - mock_find_role.return_value = "role_el" + def test_success(self, mock_assign, find_target_or_group): + find_target_or_group.return_value = "target_el" cmd_acl.assign_role_not_specific(self.mock_env, "role_id", "target_id") self.assert_get_cib_called() - mock_get_tg.assert_called_once_with(self.cib, "target_id") - mock_find_role.assert_called_once_with(self.cib, "role_id") - mock_assign.assert_called_once_with("target_el", "role_el") - self.assertEqual(0, mock_error_convert.call_count) + find_target_or_group.assert_called_once_with(self.cib, "target_id") + mock_assign.assert_called_once_with(self.cib, "role_id", "target_el") self.assert_same_cib_pushed() - def test_failure( - self, mock_error_convert, mock_find_role, mock_assign, mock_get_tg - ): - mock_get_tg.return_value = "target_el" - exception_obj = acl_lib.AclRoleNotFound("role_id") - mock_find_role.side_effect = exception_obj - self.assert_raises( - LibraryError, - lambda: cmd_acl.assign_role_not_specific( - self.mock_env, "role_id", "target_id" - ) - ) - self.assert_get_cib_called() - self.assertEqual(0, mock_assign.call_count) - mock_error_convert.assert_called_once_with(exception_obj) - self.assert_cib_not_pushed() - +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.find_target") -@mock.patch("pcs.lib.cib.acl.find_group") -class GetTargetOrGroupTest(AclCommandsTest): - def test_target(self, mock_find_group, mock_find_target): - mock_find_target.return_value = "target_el" - self.assertEqual( - "target_el", cmd_acl._get_target_or_group(self.cib, "target_id") - ) - mock_find_target.assert_called_once_with(self.cib, "target_id") - self.assertEqual(0, mock_find_group.call_count) - - def test_group(self, mock_find_group, mock_find_target): - mock_find_target.side_effect = acl_lib.AclTargetNotFound("group_id") - mock_find_group.return_value = "group_el" - self.assertEqual( - "group_el", cmd_acl._get_target_or_group(self.cib, "group_id") - ) - mock_find_target.assert_called_once_with(self.cib, "group_id") - mock_find_group.assert_called_once_with(self.cib, "group_id") - - def test_not_found(self, mock_find_group, mock_find_target): - 
mock_find_target.side_effect = acl_lib.AclTargetNotFound("id") - mock_find_group.side_effect = acl_lib.AclGroupNotFound("id") - assert_raise_library_error( - lambda: cmd_acl._get_target_or_group(self.cib, "id"), - ( - Severities.ERROR, - report_codes.ID_NOT_FOUND, - { - "id": "id", - "id_description": "user/group", - } - ) - ) - mock_find_target.assert_called_once_with(self.cib, "id") - mock_find_group.assert_called_once_with(self.cib, "id") - - @mock.patch("pcs.lib.cib.acl.assign_role") -@mock.patch("pcs.lib.cib.acl.find_role") -@mock.patch("pcs.lib.cib.acl.find_target") -@mock.patch("pcs.lib.cib.acl.acl_error_to_report_item") class AssignRoleToTargetTest(AclCommandsTest): - def test_success( - self, mock_error_convert, mock_target, mock_role, mock_assign - ): - mock_target.return_value = "target_el" - mock_role.return_value = "role_el" + def test_success(self, mock_assign, find_target): + find_target.return_value = "target_el" cmd_acl.assign_role_to_target(self.mock_env, "role_id", "target_id") self.assert_get_cib_called() - mock_target.assert_called_once_with(self.cib, "target_id") - mock_role.assert_called_once_with(self.cib, "role_id") - mock_assign.assert_called_once_with("target_el", "role_el") + mock_assign.assert_called_once_with(self.cib, "role_id", "target_el") self.assert_same_cib_pushed() - self.assertEqual(0, mock_error_convert.call_count) - - def test_failure( - self, mock_error_convert, mock_target, mock_role, mock_assign - ): - exception_obj = acl_lib.AclTargetNotFound("target_id") - mock_target.side_effect = exception_obj - mock_role.return_value = "role_el" - self.assert_raises( - LibraryError, - lambda: cmd_acl.assign_role_to_target( - self.mock_env, "role_id", "target_id" - ) - ) - self.assert_get_cib_called() - mock_target.assert_called_once_with(self.cib, "target_id") - mock_error_convert.assert_called_once_with(exception_obj) - self.assertEqual(0, mock_assign.call_count) - self.assert_cib_not_pushed() - -@mock.patch("pcs.lib.cib.acl.assign_role") -@mock.patch("pcs.lib.cib.acl.find_role") +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.find_group") -@mock.patch("pcs.lib.cib.acl.acl_error_to_report_item") +@mock.patch("pcs.lib.cib.acl.assign_role") class AssignRoleToGroupTest(AclCommandsTest): - def test_success( - self, mock_error_convert, mock_group, mock_role, mock_assign - ): - mock_group.return_value = "group_el" - mock_role.return_value = "role_el" + def test_success(self, mock_assign, find_group): + find_group.return_value = "group_el" cmd_acl.assign_role_to_group(self.mock_env, "role_id", "group_id") self.assert_get_cib_called() - mock_group.assert_called_once_with(self.cib, "group_id") - mock_role.assert_called_once_with(self.cib, "role_id") - mock_assign.assert_called_once_with("group_el", "role_el") + mock_assign.assert_called_once_with(self.cib, "role_id", "group_el") self.assert_same_cib_pushed() - self.assertEqual(0, mock_error_convert.call_count) - - def test_failure( - self, mock_error_convert, mock_group, mock_role, mock_assign - ): - exception_obj = acl_lib.AclGroupNotFound("group_id") - mock_group.side_effect = exception_obj - mock_role.return_value = "role_el" - self.assert_raises( - LibraryError, - lambda: cmd_acl.assign_role_to_group( - self.mock_env, "role_id", "group_id" - ) - ) - self.assert_get_cib_called() - mock_group.assert_called_once_with(self.cib, "group_id") - mock_error_convert.assert_called_once_with(exception_obj) - self.assertEqual(0, mock_assign.call_count) - 
self.assert_cib_not_pushed() - -@mock.patch("pcs.lib.commands.acl._get_target_or_group") +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.unassign_role") +@mock.patch("pcs.lib.cib.acl.find_target_or_group") class UnassignRoleNotSpecificTest(AclCommandsTest): - def test_success(self, mock_unassign, mock_tg): - mock_tg.return_value = "target_el" + def test_success(self, find_target_or_group, mock_unassign): + find_target_or_group.return_value = "target_el" cmd_acl.unassign_role_not_specific( self.mock_env, "role_id", "target_id", False ) self.assert_get_cib_called() - mock_tg.assert_called_once_with(self.cib, "target_id") + find_target_or_group.assert_called_once_with(self.cib, "target_id") mock_unassign.assert_called_once_with("target_el", "role_id", False) self.assert_same_cib_pushed() - def test_success_with_autodelete(self, mock_unassign, mock_tg): - mock_tg.return_value = "target_el" - cmd_acl.unassign_role_not_specific( - self.mock_env, "role_id", "target_id", True - ) - self.assert_get_cib_called() - mock_tg.assert_called_once_with(self.cib, "target_id") - mock_unassign.assert_called_once_with("target_el", "role_id", True) - self.assert_same_cib_pushed() - +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.unassign_role") @mock.patch("pcs.lib.cib.acl.find_target") -@mock.patch("pcs.lib.cib.acl.acl_error_to_report_item") class UnassignRoleFromTargetTest(AclCommandsTest): - def test_success(self, mock_error_convert, mock_find_el, mock_unassign): - mock_find_el.return_value = "el" + def test_success(self, find_target, mock_unassign): + find_target.return_value = "el" cmd_acl.unassign_role_from_target( self.mock_env, "role_id", "el_id", False ) self.assert_get_cib_called() - mock_find_el.assert_called_once_with(self.cib, "el_id") + find_target.assert_called_once_with(self.cib, "el_id") mock_unassign.assert_called_once_with("el", "role_id", False) self.assert_same_cib_pushed() - self.assertEqual(0, mock_error_convert.call_count) - - def test_success_autodelete( - self, mock_error_convert, mock_find_el, mock_unassign - ): - mock_find_el.return_value = "el" - cmd_acl.unassign_role_from_target( - self.mock_env, "role_id", "el_id", True - ) - self.assert_get_cib_called() - mock_find_el.assert_called_once_with(self.cib, "el_id") - mock_unassign.assert_called_once_with("el", "role_id", True) - self.assert_same_cib_pushed() - self.assertEqual(0, mock_error_convert.call_count) - - def test_failure(self, mock_error_convert, mock_find_el, mock_unassign): - exception_obj = acl_lib.AclTargetNotFound("el_id") - mock_find_el.side_effect = exception_obj - self.assert_raises( - LibraryError, - lambda: cmd_acl.unassign_role_from_target( - self.mock_env, "role_id", "el_id", False - ) - ) - self.assert_get_cib_called() - mock_find_el.assert_called_once_with(self.cib, "el_id") - self.assertEqual(0, mock_unassign.call_count) - self.assert_cib_not_pushed() - mock_error_convert.assert_called_once_with(exception_obj) +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.unassign_role") @mock.patch("pcs.lib.cib.acl.find_group") -@mock.patch("pcs.lib.cib.acl.acl_error_to_report_item") class UnassignRoleFromGroupTest(AclCommandsTest): - def test_success(self, mock_error_convert, mock_find_el, mock_unassign): - mock_find_el.return_value = "el" + def test_success(self, find_group, mock_unassign): + find_group.return_value = "el" 
cmd_acl.unassign_role_from_group( self.mock_env, "role_id", "el_id", False ) self.assert_get_cib_called() - mock_find_el.assert_called_once_with(self.cib, "el_id") + find_group.assert_called_once_with(self.cib, "el_id") mock_unassign.assert_called_once_with("el", "role_id", False) self.assert_same_cib_pushed() - self.assertEqual(0, mock_error_convert.call_count) - - def test_success_autodelete( - self, mock_error_convert, mock_find_el, mock_unassign - ): - mock_find_el.return_value = "el" - cmd_acl.unassign_role_from_group( - self.mock_env, "role_id", "el_id", True - ) - self.assert_get_cib_called() - mock_find_el.assert_called_once_with(self.cib, "el_id") - mock_unassign.assert_called_once_with("el", "role_id", True) - self.assert_same_cib_pushed() - self.assertEqual(0, mock_error_convert.call_count) - - def test_failure(self, mock_error_convert, mock_find_el, mock_unassign): - exception_obj = acl_lib.AclGroupNotFound("el_id") - mock_find_el.side_effect = exception_obj - self.assert_raises( - LibraryError, - lambda: cmd_acl.unassign_role_from_group( - self.mock_env, "role_id", "el_id", False - ) - ) - self.assert_get_cib_called() - mock_find_el.assert_called_once_with(self.cib, "el_id") - self.assertEqual(0, mock_unassign.call_count) - self.assert_cib_not_pushed() - mock_error_convert.assert_called_once_with(exception_obj) - - -@mock.patch("pcs.lib.cib.acl.assign_role") -@mock.patch("pcs.lib.cib.acl.find_role") -class AssignRolesToElement(AclCommandsTest): - def test_success(self, mock_role, mock_assign): - mock_role.side_effect = lambda _, el_id: "{0}_el".format(el_id) - cmd_acl._assign_roles_to_element( - self.cib, "el", ["role1", "role2", "role3"] - ) - mock_role.assert_has_calls([ - mock.call(self.cib, "role1"), - mock.call(self.cib, "role2"), - mock.call(self.cib, "role3") - ]) - mock_assign.assert_has_calls([ - mock.call("el", "role1_el"), - mock.call("el", "role2_el"), - mock.call("el", "role3_el") - ]) - - def test_failure(self, mock_role, mock_assign): - def _mock_role(_, el_id): - if el_id in ["role1", "role3"]: - raise acl_lib.AclRoleNotFound(el_id) - elif el_id == "role2": - return "role2_el" - else: - raise AssertionError("unexpected input") - - mock_role.side_effect = _mock_role - assert_raise_library_error( - lambda: cmd_acl._assign_roles_to_element( - self.cib, "el", ["role1", "role2", "role3"] - ), - ( - Severities.ERROR, - report_codes.ID_NOT_FOUND, - { - "id": "role1", - "id_description": "role", - } - ), - ( - Severities.ERROR, - report_codes.ID_NOT_FOUND, - { - "id": "role3", - "id_description": "role", - } - ) - ) - mock_role.assert_has_calls([ - mock.call(self.cib, "role1"), - mock.call(self.cib, "role2"), - mock.call(self.cib, "role3") - ]) - mock_assign.assert_called_once_with("el", "role2_el") +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.create_target") -@mock.patch("pcs.lib.commands.acl._assign_roles_to_element") +@mock.patch("pcs.lib.cib.acl.assign_all_roles") class CreateTargetTest(AclCommandsTest): def test_success(self, mock_assign, mock_create): mock_create.return_value = "el" @@ -436,8 +183,9 @@ self.assert_same_cib_pushed() +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.create_group") -@mock.patch("pcs.lib.commands.acl._assign_roles_to_element") +@mock.patch("pcs.lib.cib.acl.assign_all_roles") class CreateGroupTest(AclCommandsTest): def test_success(self, mock_assign, mock_create): mock_create.return_value = "el" @@ -450,6 +198,7 
@@ self.assert_same_cib_pushed() +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.remove_target") class RemoveTargetTest(AclCommandsTest): def test_success(self, mock_remove): @@ -459,6 +208,7 @@ self.assert_same_cib_pushed() +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.remove_group") class RemoveGroupTest(AclCommandsTest): def test_success(self, mock_remove): @@ -468,6 +218,7 @@ self.assert_same_cib_pushed() +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.validate_permissions") @mock.patch("pcs.lib.cib.acl.provide_role") @mock.patch("pcs.lib.cib.acl.add_permissions_to_role") @@ -482,6 +233,7 @@ self.assert_same_cib_pushed() +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) @mock.patch("pcs.lib.cib.acl.remove_permission") class RemovePermission(AclCommandsTest): def test_success(self, mock_remove): @@ -494,6 +246,7 @@ @mock.patch("pcs.lib.cib.acl.get_target_list") @mock.patch("pcs.lib.cib.acl.get_group_list") @mock.patch("pcs.lib.cib.acl.get_role_list") +@mock.patch("pcs.lib.commands.acl.get_acls", mock.Mock(side_effect=lambda x:x)) class GetConfigTest(AclCommandsTest): def test_success(self, mock_role, mock_group, mock_target): mock_role.return_value = "role" @@ -507,4 +260,3 @@ }, cmd_acl.get_config(self.mock_env) ) - diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/test_alert.py pcs-0.9.159/pcs/lib/commands/test/test_alert.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/test_alert.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/test_alert.py 2017-06-30 15:33:01.000000000 +0000 @@ -26,7 +26,7 @@ import pcs.lib.commands.alert as cmd_alert -@mock.patch("pcs.lib.cib.tools.upgrade_cib") +@mock.patch("pcs.lib.env.ensure_cib_version") class CreateAlertTest(TestCase): def setUp(self): self.mock_log = mock.MagicMock(spec_set=logging.Logger) @@ -36,7 +36,7 @@ self.mock_log, self.mock_rep, cib_data="" ) - def test_no_path(self, mock_upgrade_cib): + def test_no_path(self, mock_ensure_cib_version): assert_raise_library_error( lambda: cmd_alert.create_alert( self.mock_env, None, None, None, None @@ -44,21 +44,20 @@ ( Severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, - {"option_name": "path"} + {"option_names": ["path"]} ) ) - self.assertEqual(0, mock_upgrade_cib.call_count) + mock_ensure_cib_version.assert_not_called() - def test_upgrade_needed(self, mock_upgrade_cib): - self.mock_env._push_cib_xml( - """ + def test_upgrade_needed(self, mock_ensure_cib_version): + original_cib_xml = """ - """ - ) - mock_upgrade_cib.return_value = etree.XML( + """ + self.mock_env._push_cib_xml(original_cib_xml) + mock_ensure_cib_version.return_value = etree.XML( """ @@ -109,7 +108,7 @@ """, self.mock_env._get_cib_xml() ) - self.assertEqual(1, mock_upgrade_cib.call_count) + self.assertEqual(1, mock_ensure_cib_version.call_count) class UpdateAlertTest(TestCase): @@ -264,8 +263,8 @@ ), ( Severities.ERROR, - report_codes.CIB_ALERT_NOT_FOUND, - {"alert": "unknown"} + report_codes.ID_NOT_FOUND, + {"id": "unknown"} ) ) @@ -348,13 +347,13 @@ report_list = [ ( Severities.ERROR, - report_codes.CIB_ALERT_NOT_FOUND, - {"alert": "unknown"} + report_codes.ID_NOT_FOUND, + {"id": "unknown"} ), ( Severities.ERROR, - report_codes.CIB_ALERT_NOT_FOUND, - {"alert": "unknown2"} + report_codes.ID_NOT_FOUND, + {"id": "unknown2"} ) ] assert_raise_library_error( @@ -388,18 +387,6 @@ 
self.mock_log, self.mock_rep, cib_data=cib ) - def test_alert_not_found(self): - assert_raise_library_error( - lambda: cmd_alert.add_recipient( - self.mock_env, "unknown", "recipient", {}, {} - ), - ( - Severities.ERROR, - report_codes.CIB_ALERT_NOT_FOUND, - {"alert": "unknown"} - ) - ) - def test_value_not_defined(self): assert_raise_library_error( lambda: cmd_alert.add_recipient( @@ -408,7 +395,7 @@ ( Severities.ERROR, report_codes.REQUIRED_OPTION_IS_MISSING, - {"option_name": "value"} + {"option_names": ["value"]} ) ) @@ -596,7 +583,7 @@ report_codes.ID_NOT_FOUND, { "id": "recipient", - "id_description": "Recipient" + "id_description": "recipient" } ) ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/test_booth.py pcs-0.9.159/pcs/lib/commands/test/test_booth.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/test_booth.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/test_booth.py 2017-06-30 15:33:01.000000000 +0000 @@ -35,7 +35,7 @@ patch_commands = create_patcher("pcs.lib.commands.booth") -@mock.patch("pcs.lib.booth.config_files.generate_key", return_value="key value") +@mock.patch("pcs.lib.tools.generate_key", return_value="key value") @mock.patch("pcs.lib.commands.booth.build", return_value="config content") @mock.patch("pcs.lib.booth.config_structure.validate_peers") class ConfigSetupTest(TestCase): @@ -111,6 +111,7 @@ @patch_commands("parse", mock.Mock(side_effect=LibraryError())) def test_raises_when_cannot_get_content_of_config(self): env = mock.MagicMock() + env.booth.name = "somename" assert_raise_library_error( lambda: commands.config_destroy(env), ( @@ -126,6 +127,7 @@ @patch_commands("parse", mock.Mock(side_effect=LibraryError())) def test_remove_config_even_if_cannot_get_its_content_when_forced(self): env = mock.MagicMock() + env.booth.name = "somename" env.report_processor = MockLibraryReportProcessor() commands.config_destroy(env, ignore_config_load_problems=True) env.booth.remove_config.assert_called_once_with() @@ -544,7 +546,6 @@ assert_raise_library_error( lambda: commands.create_in_cluster( mock.MagicMock(), "somename", ip="1.2.3.4", - resource_create=None, resource_remove=None, ), ( Severities.ERROR, diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/test_fencing_topology.py pcs-0.9.159/pcs/lib/commands/test/test_fencing_topology.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/test_fencing_topology.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/test_fencing_topology.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,257 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from functools import partial +import logging + +from pcs.common.fencing_topology import ( + TARGET_TYPE_REGEXP, + TARGET_TYPE_ATTRIBUTE, +) +from pcs.lib.env import LibraryEnvironment +from pcs.test.tools.misc import create_patcher +from pcs.test.tools.pcs_unittest import mock, TestCase +from pcs.test.tools.custom_mock import MockLibraryReportProcessor + +from pcs.lib.commands import fencing_topology as lib + + +create_lib_env = partial( + LibraryEnvironment, + mock.MagicMock(logging.Logger), + MockLibraryReportProcessor() +) +patch_env = partial(mock.patch.object, LibraryEnvironment) +patch_command = create_patcher("pcs.lib.commands.fencing_topology") + + +@patch_command("cib_fencing_topology.add_level") +@patch_command("get_resources") +@patch_command("get_fencing_topology") +@patch_env("push_cib") +@patch_command("ClusterState") +@patch_command("get_cluster_status_xml") +@patch_env("get_cib") 
+@patch_env("cmd_runner", lambda self: "mocked cmd_runner") +class AddLevel(TestCase): + def prepare_mocks( + self, mock_get_cib, mock_status_xml, mock_status, mock_get_topology, + mock_get_resources + ): + mock_get_cib.return_value = "mocked cib" + mock_status_xml.return_value = "mock get_cluster_status_xml" + mock_status.return_value = mock.MagicMock( + node_section=mock.MagicMock(nodes="nodes") + ) + mock_get_topology.return_value = "topology el" + mock_get_resources.return_value = "resources_el" + + def assert_mocks( + self, mock_status_xml, mock_status, mock_get_topology, + mock_get_resources, mock_push_cib + ): + mock_status_xml.assert_called_once_with("mocked cmd_runner") + mock_status.assert_called_once_with("mock get_cluster_status_xml") + mock_get_topology.assert_called_once_with("mocked cib") + mock_get_resources.assert_called_once_with("mocked cib") + mock_push_cib.assert_called_once_with("mocked cib") + + def test_success( + self, mock_get_cib, mock_status_xml, mock_status, mock_push_cib, + mock_get_topology, mock_get_resources, mock_add_level + ): + self.prepare_mocks( + mock_get_cib, mock_status_xml, mock_status, mock_get_topology, + mock_get_resources + ) + lib_env = create_lib_env() + + lib.add_level( + lib_env, "level", "target type", "target value", "devices", + "force device", "force node" + ) + + mock_add_level.assert_called_once_with( + lib_env.report_processor, + "topology el", + "resources_el", + "level", + "target type", + "target value", + "devices", + "nodes", + "force device", + "force node" + ) + mock_get_cib.assert_called_once_with(None) + self.assert_mocks( + mock_status_xml, mock_status, mock_get_topology, mock_get_resources, + mock_push_cib + ) + + def test_target_attribute_updates_cib( + self, mock_get_cib, mock_status_xml, mock_status, mock_push_cib, + mock_get_topology, mock_get_resources, mock_add_level + ): + self.prepare_mocks( + mock_get_cib, mock_status_xml, mock_status, mock_get_topology, + mock_get_resources + ) + lib_env = create_lib_env() + + lib.add_level( + lib_env, "level", TARGET_TYPE_ATTRIBUTE, "target value", "devices", + "force device", "force node" + ) + + mock_add_level.assert_called_once_with( + lib_env.report_processor, + "topology el", + "resources_el", + "level", + TARGET_TYPE_ATTRIBUTE, + "target value", + "devices", + "nodes", + "force device", + "force node" + ) + mock_get_cib.assert_called_once_with((2, 4, 0)) + self.assert_mocks( + mock_status_xml, mock_status, mock_get_topology, mock_get_resources, + mock_push_cib + ) + + def test_target_regexp_updates_cib( + self, mock_get_cib, mock_status_xml, mock_status, mock_push_cib, + mock_get_topology, mock_get_resources, mock_add_level + ): + self.prepare_mocks( + mock_get_cib, mock_status_xml, mock_status, mock_get_topology, + mock_get_resources + ) + lib_env = create_lib_env() + + lib.add_level( + lib_env, "level", TARGET_TYPE_REGEXP, "target value", "devices", + "force device", "force node" + ) + + mock_add_level.assert_called_once_with( + lib_env.report_processor, + "topology el", + "resources_el", + "level", + TARGET_TYPE_REGEXP, + "target value", + "devices", + "nodes", + "force device", + "force node" + ) + mock_get_cib.assert_called_once_with((2, 3, 0)) + self.assert_mocks( + mock_status_xml, mock_status, mock_get_topology, mock_get_resources, + mock_push_cib + ) + +@patch_command("cib_fencing_topology.export") +@patch_command("get_fencing_topology") +@patch_env("push_cib") +@patch_env("get_cib", lambda self: "mocked cib") +class GetConfig(TestCase): + def test_success(self, 
mock_push_cib, mock_get_topology, mock_export): + mock_get_topology.return_value = "topology el" + mock_export.return_value = "exported config" + lib_env = create_lib_env() + + self.assertEqual( + "exported config", + lib.get_config(lib_env) + ) + + mock_export.assert_called_once_with("topology el") + mock_get_topology.assert_called_once_with("mocked cib") + mock_push_cib.assert_not_called() + + +@patch_command("cib_fencing_topology.remove_all_levels") +@patch_command("get_fencing_topology") +@patch_env("push_cib") +@patch_env("get_cib", lambda self: "mocked cib") +class RemoveAllLevels(TestCase): + def test_success(self, mock_push_cib, mock_get_topology, mock_remove): + mock_get_topology.return_value = "topology el" + lib_env = create_lib_env() + + lib.remove_all_levels(lib_env) + + mock_remove.assert_called_once_with("topology el") + mock_get_topology.assert_called_once_with("mocked cib") + mock_push_cib.assert_called_once_with("mocked cib") + + +@patch_command("cib_fencing_topology.remove_levels_by_params") +@patch_command("get_fencing_topology") +@patch_env("push_cib") +@patch_env("get_cib", lambda self: "mocked cib") +class RemoveLevelsByParams(TestCase): + def test_success(self, mock_push_cib, mock_get_topology, mock_remove): + mock_get_topology.return_value = "topology el" + lib_env = create_lib_env() + + lib.remove_levels_by_params( + lib_env, "level", "target type", "target value", "devices", "ignore" + ) + + mock_remove.assert_called_once_with( + lib_env.report_processor, + "topology el", + "level", + "target type", + "target value", + "devices", + "ignore" + ) + mock_get_topology.assert_called_once_with("mocked cib") + mock_push_cib.assert_called_once_with("mocked cib") + + +@patch_command("cib_fencing_topology.verify") +@patch_command("get_resources") +@patch_command("get_fencing_topology") +@patch_env("push_cib") +@patch_command("ClusterState") +@patch_command("get_cluster_status_xml") +@patch_env("get_cib", lambda self: "mocked cib") +@patch_env("cmd_runner", lambda self: "mocked cmd_runner") +class Verify(TestCase): + def test_success( + self, mock_status_xml, mock_status, mock_push_cib, mock_get_topology, + mock_get_resources, mock_verify + ): + mock_status_xml.return_value = "mock get_cluster_status_xml" + mock_status.return_value = mock.MagicMock( + node_section=mock.MagicMock(nodes="nodes") + ) + mock_get_topology.return_value = "topology el" + mock_get_resources.return_value = "resources_el" + lib_env = create_lib_env() + + lib.verify(lib_env) + + mock_verify.assert_called_once_with( + lib_env.report_processor, + "topology el", + "resources_el", + "nodes" + ) + mock_status_xml.assert_called_once_with("mocked cmd_runner") + mock_status.assert_called_once_with("mock get_cluster_status_xml") + mock_get_topology.assert_called_once_with("mocked cib") + mock_get_resources.assert_called_once_with("mocked cib") + mock_push_cib.assert_not_called() diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/test_node.py pcs-0.9.159/pcs/lib/commands/test/test_node.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/test_node.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/test_node.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,296 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from functools import partial +from contextlib import contextmanager + +from lxml import etree +import logging + +from pcs.test.tools.assertions import assert_raise_library_error +from pcs.test.tools.custom_mock import 
MockLibraryReportProcessor +from pcs.test.tools.pcs_unittest import mock, TestCase +from pcs.test.tools.misc import create_patcher + +from pcs.common import report_codes +from pcs.lib.env import LibraryEnvironment +from pcs.lib.errors import ReportItemSeverity as severity, LibraryError + +from pcs.lib.commands import node as lib + + +mocked_cib = etree.fromstring("") + +patch_env = partial(mock.patch.object, LibraryEnvironment) +patch_command = create_patcher("pcs.lib.commands.node") + +create_env = partial( + LibraryEnvironment, + mock.MagicMock(logging.Logger), + MockLibraryReportProcessor() +) + +def fixture_node(order_num): + node = mock.MagicMock(attrs=mock.MagicMock()) + node.attrs.name = "node-{0}".format(order_num) + return node + +class StandbyMaintenancePassParameters(TestCase): + def setUp(self): + self.lib_env = "lib_env" + self.nodes = "nodes" + self.wait = "wait" + self.standby_on = {"standby": "on"} + self.standby_off = {"standby": ""} + self.maintenance_on = {"maintenance": "on"} + self.maintenance_off = {"maintenance": ""} + +@patch_command("_set_instance_attrs_local_node") +class StandbyMaintenancePassParametersLocal(StandbyMaintenancePassParameters): + def test_standby(self, mock_doer): + lib.standby_unstandby_local(self.lib_env, True, self.wait) + mock_doer.assert_called_once_with( + self.lib_env, + self.standby_on, + self.wait + ) + + def test_unstandby(self, mock_doer): + lib.standby_unstandby_local(self.lib_env, False, self.wait) + mock_doer.assert_called_once_with( + self.lib_env, + self.standby_off, + self.wait + ) + + def test_maintenance(self, mock_doer): + lib.maintenance_unmaintenance_local(self.lib_env, True, self.wait) + mock_doer.assert_called_once_with( + self.lib_env, + self.maintenance_on, + self.wait + ) + + def test_unmaintenance(self, mock_doer): + lib.maintenance_unmaintenance_local(self.lib_env, False, self.wait) + mock_doer.assert_called_once_with( + self.lib_env, + self.maintenance_off, + self.wait + ) + +@patch_command("_set_instance_attrs_node_list") +class StandbyMaintenancePassParametersList(StandbyMaintenancePassParameters): + def test_standby(self, mock_doer): + lib.standby_unstandby_list(self.lib_env, True, self.nodes, self.wait) + mock_doer.assert_called_once_with( + self.lib_env, + self.standby_on, + self.nodes, + self.wait + ) + + def test_unstandby(self, mock_doer): + lib.standby_unstandby_list(self.lib_env, False, self.nodes, self.wait) + mock_doer.assert_called_once_with( + self.lib_env, + self.standby_off, + self.nodes, + self.wait + ) + + def test_maintenance(self, mock_doer): + lib.maintenance_unmaintenance_list( + self.lib_env, True, self.nodes, self.wait + ) + mock_doer.assert_called_once_with( + self.lib_env, + self.maintenance_on, + self.nodes, + self.wait + ) + + def test_unmaintenance(self, mock_doer): + lib.maintenance_unmaintenance_list( + self.lib_env, False, self.nodes, self.wait + ) + mock_doer.assert_called_once_with( + self.lib_env, + self.maintenance_off, + self.nodes, + self.wait + ) + +@patch_command("_set_instance_attrs_all_nodes") +class StandbyMaintenancePassParametersAll(StandbyMaintenancePassParameters): + def test_standby(self, mock_doer): + lib.standby_unstandby_all(self.lib_env, True, self.wait) + mock_doer.assert_called_once_with( + self.lib_env, + self.standby_on, + self.wait + ) + + def test_unstandby(self, mock_doer): + lib.standby_unstandby_all(self.lib_env, False, self.wait) + mock_doer.assert_called_once_with( + self.lib_env, + self.standby_off, + self.wait + ) + + def test_maintenance(self, 
mock_doer): + lib.maintenance_unmaintenance_all(self.lib_env, True, self.wait) + mock_doer.assert_called_once_with( + self.lib_env, + self.maintenance_on, + self.wait + ) + + def test_unmaintenance(self, mock_doer): + lib.maintenance_unmaintenance_all(self.lib_env, False, self.wait) + mock_doer.assert_called_once_with( + self.lib_env, + self.maintenance_off, + self.wait + ) + +class SetInstaceAttrsBase(TestCase): + node_count = 2 + def setUp(self): + self.cluster_nodes = [fixture_node(i) for i in range(self.node_count)] + + self.launch = {"pre": False, "post": False} + @contextmanager + def cib_runner_nodes_contextmanager(env, wait): + self.launch["pre"] = True + yield ("cib", "mock_runner", self.cluster_nodes) + self.launch["post"] = True + + patcher = patch_command('cib_runner_nodes') + self.addCleanup(patcher.stop) + patcher.start().side_effect = cib_runner_nodes_contextmanager + + def assert_context_manager_launched(self, pre=False, post=False): + self.assertEqual(self.launch, {"pre": pre, "post": post}) + +@patch_command("update_node_instance_attrs") +@patch_command("get_local_node_name") +class SetInstaceAttrsLocal(SetInstaceAttrsBase): + node_count = 2 + + def test_not_possible_with_cib_file(self, mock_name, mock_attrs): + assert_raise_library_error( + lambda: lib._set_instance_attrs_local_node( + create_env(cib_data=""), + "attrs", + "wait" + ), + ( + severity.ERROR, + report_codes.LIVE_ENVIRONMENT_REQUIRED_FOR_LOCAL_NODE, + {} + ) + ) + self.assert_context_manager_launched(pre=False, post=False) + mock_name.assert_not_called() + mock_attrs.assert_not_called() + + def test_success(self, mock_name, mock_attrs): + mock_name.return_value = "node-1" + + lib._set_instance_attrs_local_node(create_env(), "attrs", False) + + self.assert_context_manager_launched(pre=True, post=True) + mock_name.assert_called_once_with("mock_runner") + mock_attrs.assert_called_once_with( + "cib", "node-1", "attrs", self.cluster_nodes + ) + +@patch_command("update_node_instance_attrs") +class SetInstaceAttrsAll(SetInstaceAttrsBase): + node_count = 2 + + def test_success(self, mock_attrs): + lib._set_instance_attrs_all_nodes(create_env(), "attrs", False) + + self.assertEqual(2, len(mock_attrs.mock_calls)) + mock_attrs.assert_has_calls([ + mock.call("cib", "node-0", "attrs", self.cluster_nodes), + mock.call("cib", "node-1", "attrs", self.cluster_nodes), + ]) + +@patch_command("update_node_instance_attrs") +class SetInstaceAttrsList(SetInstaceAttrsBase): + node_count = 4 + + def test_success(self, mock_attrs): + lib._set_instance_attrs_node_list( + create_env(), "attrs", ["node-1", "node-2"], False + ) + + self.assert_context_manager_launched(pre=True, post=True) + self.assertEqual(2, len(mock_attrs.mock_calls)) + mock_attrs.assert_has_calls([ + mock.call("cib", "node-1", "attrs", self.cluster_nodes), + mock.call("cib", "node-2", "attrs", self.cluster_nodes), + ]) + + def test_bad_node(self, mock_attrs): + assert_raise_library_error( + lambda: lib._set_instance_attrs_node_list( + create_env(), "attrs", ["node-1", "node-9"], False + ), + ( + severity.ERROR, + report_codes.NODE_NOT_FOUND, + { + "node": "node-9", + } + ) + ) + mock_attrs.assert_not_called() + +@patch_env("push_cib") +class CibRunnerNodes(TestCase): + def setUp(self): + self.env = create_env() + + @patch_env("get_cib", lambda self: "mocked cib") + @patch_env("cmd_runner", lambda self: "mocked cmd_runner") + @patch_env("ensure_wait_satisfiable") + @patch_command("ClusterState") + @patch_command("get_cluster_status_xml") + def 
test_wire_together_all_expected_dependecies( + self, get_cluster_status_xml, ClusterState, ensure_wait_satisfiable, + push_cib + ): + ClusterState.return_value = mock.MagicMock( + node_section=mock.MagicMock(nodes="nodes") + ) + get_cluster_status_xml.return_value = "mock get_cluster_status_xml" + wait = 10 + + with lib.cib_runner_nodes(self.env, wait) as (cib, runner, nodes): + self.assertEqual(cib, "mocked cib") + self.assertEqual(runner, "mocked cmd_runner") + self.assertEqual(nodes, "nodes") + ensure_wait_satisfiable.assert_called_once_with(wait) + get_cluster_status_xml.assert_called_once_with("mocked cmd_runner") + ClusterState.assert_called_once_with("mock get_cluster_status_xml") + + push_cib.assert_called_once_with("mocked cib", wait) + + @patch_env("ensure_wait_satisfiable", mock.Mock(side_effect=LibraryError)) + def test_raises_when_wait_is_not_satisfiable(self, push_cib): + def run(): + #pylint: disable=unused-variable + with lib.cib_runner_nodes(self.env, "wait") as (cib, runner, nodes): + pass + + self.assertRaises(LibraryError, run) + push_cib.assert_not_called() diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/test_resource_agent.py pcs-0.9.159/pcs/lib/commands/test/test_resource_agent.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/test_resource_agent.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/test_resource_agent.py 2017-06-30 15:33:01.000000000 +0000 @@ -8,7 +8,7 @@ import logging from lxml import etree -from pcs.test.tools.assertions import assert_raise_library_error +from pcs.test.tools.assertions import assert_raise_library_error, start_tag_error_text from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.pcs_unittest import mock, TestCase @@ -238,9 +238,9 @@ @mock.patch.object(lib_ra.Agent, "_get_metadata", autospec=True) def test_describe(self, mock_metadata): def mock_metadata_func(self): - if self._full_agent_name == "ocf:test:Stateful": + if self.get_name() == "ocf:test:Stateful": raise lib_ra.UnableToGetAgentMetadata( - self._full_agent_name, + self.get_name(), "test exception" ) return etree.XML(""" @@ -252,7 +252,7 @@ - """.format(name=self._full_agent_name)) + """.format(name=self.get_name())) mock_metadata.side_effect = mock_metadata_func # Stateful is missing as it does not provide valid metadata - see above @@ -284,6 +284,26 @@ ) +class CompleteAgentList(TestCase): + def test_skip_agent_name_when_InvalidResourceAgentName_raised(self): + invalid_agent_name = "systemd:lvm2-pvscan@252:2"#suppose it is invalid + class Agent(object): + def __init__(self, runner, name): + if name == invalid_agent_name: + raise lib_ra.InvalidResourceAgentName(name) + self.name = name + + def get_name_info(self): + return self.name + + self.assertEqual(["ocf:heartbeat:Dummy"], lib._complete_agent_list( + mock.MagicMock(), + ["ocf:heartbeat:Dummy", invalid_agent_name], + describe=False, + search=False, + metadata_class=Agent, + )) + @mock.patch.object(lib_ra.ResourceAgent, "_load_metadata", autospec=True) @mock.patch("pcs.lib.resource_agent.guess_exactly_one_resource_agent_full_name") @mock.patch.object( @@ -312,6 +332,7 @@ "longdesc": "long desc", "parameters": [], "actions": [], + "default_actions": [{"interval": "60s", "name": "monitor"}], } @@ -353,7 +374,7 @@ report_codes.UNABLE_TO_GET_AGENT_METADATA, { "agent": "ocf:test:Dummy", - "reason": "Start tag expected, '<' not found, line 1, column 1", + "reason": start_tag_error_text(), } ) ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/test_stonith_agent.py 
pcs-0.9.159/pcs/lib/commands/test/test_stonith_agent.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/test_stonith_agent.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/test_stonith_agent.py 2017-06-30 15:33:01.000000000 +0000 @@ -8,7 +8,11 @@ import logging from lxml import etree -from pcs.test.tools.assertions import assert_raise_library_error +from pcs.test.tools.assertions import ( + assert_raise_library_error, + assert_report_item_list_equal, + start_tag_error_text, +) from pcs.test.tools.custom_mock import MockLibraryReportProcessor from pcs.test.tools.pcs_unittest import mock, TestCase @@ -16,6 +20,7 @@ from pcs.lib import resource_agent as lib_ra from pcs.lib.env import LibraryEnvironment from pcs.lib.errors import ReportItemSeverity as severity +from pcs.lib.external import CommandRunner from pcs.lib.commands import stonith_agent as lib @@ -97,10 +102,11 @@ @mock.patch.object(lib_ra.Agent, "_get_metadata", autospec=True) def test_describe(self, mock_metadata): + self.maxDiff = None def mock_metadata_func(self): - if self._full_agent_name == "ocf:test:Stateful": + if self.get_name() == "ocf:test:Stateful": raise lib_ra.UnableToGetAgentMetadata( - self._full_agent_name, + self.get_name(), "test exception" ) return etree.XML(""" @@ -112,7 +118,7 @@ - """.format(name=self._full_agent_name)) + """.format(name=self.get_name())) mock_metadata.side_effect = mock_metadata_func # Stateful is missing as it does not provide valid metadata - see above @@ -121,22 +127,22 @@ [ { "name": "fence_apc", - "shortdesc": "short stonith:fence_apc", - "longdesc": "long stonith:fence_apc", + "shortdesc": "short fence_apc", + "longdesc": "long fence_apc", "parameters": [], "actions": [], }, { "name": "fence_dummy", - "shortdesc": "short stonith:fence_dummy", - "longdesc": "long stonith:fence_dummy", + "shortdesc": "short fence_dummy", + "longdesc": "long fence_dummy", "parameters": [], "actions": [], }, { "name": "fence_xvm", - "shortdesc": "short stonith:fence_xvm", - "longdesc": "long stonith:fence_xvm", + "shortdesc": "short fence_xvm", + "longdesc": "long fence_xvm", "parameters": [], "actions": [], }, @@ -176,6 +182,7 @@ "longdesc": "long desc", "parameters": [], "actions": [], + "default_actions": [{"name": "monitor", "interval": "60s"}], } @@ -204,9 +211,115 @@ report_codes.UNABLE_TO_GET_AGENT_METADATA, { "agent": "fence_dummy", - "reason": "Start tag expected, '<' not found, line 1, column 1", + "reason": start_tag_error_text(), } ) ) self.assertEqual(len(mock_metadata.mock_calls), 1) + + +class ValidateParameters(TestCase): + def setUp(self): + self.agent = lib_ra.StonithAgent( + mock.MagicMock(spec_set=CommandRunner), + "fence_dummy" + ) + self.metadata = etree.XML(""" + + + + Long description + short description + + + + + + + + Fencing action + + + + """) + patcher = mock.patch.object(lib_ra.StonithAgent, "_get_metadata") + self.addCleanup(patcher.stop) + self.get_metadata = patcher.start() + self.get_metadata.return_value = self.metadata + + patcher_stonithd = mock.patch.object( + lib_ra.StonithdMetadata, "_get_metadata" + ) + self.addCleanup(patcher_stonithd.stop) + self.get_stonithd_metadata = patcher_stonithd.start() + self.get_stonithd_metadata.return_value = etree.XML(""" + + + + """) + + def test_action_is_deprecated(self): + assert_report_item_list_equal( + self.agent.validate_parameters({ + "action": "reboot", + "required_param": "value", + }), + [ + ( + severity.ERROR, + report_codes.DEPRECATED_OPTION, + { + "option_name": "action", + "option_type": "stonith", 
+ "replaced_by": [ + "pcmk_off_action", + "pcmk_reboot_action" + ], + }, + report_codes.FORCE_OPTIONS + ), + ], + ) + + def test_action_is_deprecated_forced(self): + assert_report_item_list_equal( + self.agent.validate_parameters({ + "action": "reboot", + "required_param": "value", + }, allow_invalid=True), + [ + ( + severity.WARNING, + report_codes.DEPRECATED_OPTION, + { + "option_name": "action", + "option_type": "stonith", + "replaced_by": [ + "pcmk_off_action", + "pcmk_reboot_action" + ], + }, + None + ), + ], + ) + + def test_action_not_reported_deprecated_when_empty(self): + assert_report_item_list_equal( + self.agent.validate_parameters({ + "action": "", + "required_param": "value", + }), + [ + ], + ) + + def test_required_not_specified_on_update(self): + assert_report_item_list_equal( + self.agent.validate_parameters({ + "test_param": "value", + }, update=True), + [ + ], + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/commands/test/test_ticket.py pcs-0.9.159/pcs/lib/commands/test/test_ticket.py --- pcs-0.9.155+dfsg/pcs/lib/commands/test/test_ticket.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/commands/test/test_ticket.py 2017-06-30 15:33:01.000000000 +0000 @@ -64,8 +64,13 @@ ), ( severities.ERROR, - report_codes.RESOURCE_DOES_NOT_EXIST, - {"resource_id": "resourceA"}, + report_codes.ID_NOT_FOUND, + { + "context_type": "cib", + "context_id": "", + "id": "resourceA", + "id_description": "resource" + }, ), ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/corosync/config_facade.py pcs-0.9.159/pcs/lib/corosync/config_facade.py --- pcs-0.9.155+dfsg/pcs/lib/corosync/config_facade.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/corosync/config_facade.py 2017-06-30 15:33:01.000000000 +0000 @@ -147,7 +147,7 @@ allowed_names = self.__class__.QUORUM_OPTIONS if name not in allowed_names: report_items.append( - reports.invalid_option(name, allowed_names, "quorum") + reports.invalid_option([name], allowed_names, "quorum") ) continue @@ -386,6 +386,7 @@ ]) allowed_options = required_options | optional_options model_options_names = frozenset(model_options.keys()) + missing_options = [] report_items = [] severity = ( ReportItemSeverity.WARNING if force else ReportItemSeverity.ERROR @@ -393,13 +394,12 @@ forceable = None if force else report_codes.FORCE_OPTIONS if need_required: - for missing in sorted(required_options - model_options_names): - report_items.append(reports.required_option_is_missing(missing)) + missing_options += required_options - model_options_names for name, value in sorted(model_options.items()): if name not in allowed_options: report_items.append(reports.invalid_option( - name, + [name], allowed_options, "quorum device model", severity, @@ -410,9 +410,7 @@ if value == "": # do not allow to remove required options if name in required_options: - report_items.append( - reports.required_option_is_missing(name) - ) + missing_options.append(name) else: continue @@ -455,6 +453,11 @@ name, value, allowed_values, severity, forceable )) + if missing_options: + report_items.append( + reports.required_option_is_missing(sorted(missing_options)) + ) + return report_items def __validate_quorum_device_generic_options( @@ -476,7 +479,7 @@ # model is never allowed in generic options, it is passed # in its own argument report_items.append(reports.invalid_option( - name, + [name], allowed_options, "quorum device", severity if name != "model" else ReportItemSeverity.ERROR, diff -Nru pcs-0.9.155+dfsg/pcs/lib/env_file.py pcs-0.9.159/pcs/lib/env_file.py --- 
pcs-0.9.155+dfsg/pcs/lib/env_file.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/env_file.py 2017-06-30 15:33:01.000000000 +0000 @@ -15,12 +15,12 @@ class GhostFile(object): is_live = False - def __init__(self, file_role, content=None): + def __init__(self, file_role, content=None, is_binary=False): self.__file_role = file_role self.__content = content self.__no_existing_file_expected = False self.__can_overwrite_existing_file = False - self.__is_binary = False + self.__is_binary = is_binary def read(self): if self.__content is None: @@ -30,15 +30,20 @@ return self.__content + @property + def exists(self): + #file will be considered to exist after writing: it is symmetrical with + #RealFile + return self.__content is not None + def remove(self, silence_no_existence): raise AssertionError("Remove GhostFile is not supported.") - def write(self, content, file_operation=None, is_binary=False): + def write(self, content, file_operation=None): """ callable file_operation is there only for RealFile compatible interface it has no effect """ - self.__is_binary = is_binary self.__content = content def assert_no_conflict_with_existing( @@ -58,32 +63,33 @@ class RealFile(object): is_live = True - def __init__( - self, file_role, file_path, - overwrite_code=report_codes.FORCE_FILE_OVERWRITE - ): + def __init__(self, file_role, file_path, is_binary=False): self.__file_role = file_role self.__file_path = file_path - self.__overwrite_code = overwrite_code + self.__is_binary=is_binary def assert_no_conflict_with_existing( self, report_processor, can_overwrite_existing=False ): - if os.path.exists(self.__file_path): + if self.exists: report_processor.process(reports.file_already_exists( self.__file_role, self.__file_path, ReportItemSeverity.WARNING if can_overwrite_existing else ReportItemSeverity.ERROR, forceable=None if can_overwrite_existing - else self.__overwrite_code, + else report_codes.FORCE_FILE_OVERWRITE, )) - def write(self, content, file_operation=None, is_binary=False): + @property + def exists(self): + return os.path.exists(self.__file_path) + + def write(self, content, file_operation=None): """ callable file_operation takes a path and performs an operation on it, e.g.
chmod """ - mode = "wb" if is_binary else "w" + mode = "wb" if self.__is_binary else "w" try: with open(self.__file_path, mode) as config_file: config_file.write(content) @@ -94,13 +100,14 @@ def read(self): try: - with open(self.__file_path, "r") as file: + mode = "rb" if self.__is_binary else "r" + with open(self.__file_path, mode) as file: return file.read() except EnvironmentError as e: raise self.__report_io_error(e, "read") def remove(self, silence_no_existence=False): - if os.path.exists(self.__file_path): + if self.exists: try: os.remove(self.__file_path) except EnvironmentError as e: diff -Nru pcs-0.9.155+dfsg/pcs/lib/env.py pcs-0.9.159/pcs/lib/env.py --- pcs-0.9.155+dfsg/pcs/lib/env.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/env.py 2017-06-30 15:33:01.000000000 +0000 @@ -5,14 +5,14 @@ unicode_literals, ) -import os.path - from lxml import etree +import os.path +import tempfile from pcs import settings from pcs.lib import reports from pcs.lib.booth.env import BoothEnv -from pcs.lib.cib.tools import ensure_cib_version +from pcs.lib.pacemaker.env import PacemakerEnv from pcs.lib.cluster_conf_facade import ClusterConfFacade from pcs.lib.corosync.config_facade import ConfigFacade as CorosyncConfigFacade from pcs.lib.corosync.live import ( @@ -33,12 +33,17 @@ check_corosync_offline_on_nodes, qdevice_reload_on_nodes, ) -from pcs.lib.pacemaker import ( +from pcs.lib.pacemaker.live import ( + ensure_wait_for_idle_support, + ensure_cib_version, get_cib, get_cib_xml, replace_cib_configuration_xml, + wait_for_idle, + get_cluster_status_xml, ) - +from pcs.lib.pacemaker.state import get_cluster_state_dom +from pcs.lib.pacemaker.values import get_valid_timeout_seconds class LibraryEnvironment(object): # pylint: disable=too-many-instance-attributes @@ -52,8 +57,10 @@ cib_data=None, corosync_conf_data=None, booth=None, + pacemaker=None, auth_tokens_getter=None, cluster_conf_data=None, + request_timeout=None, ): self._logger = logger self._report_processor = report_processor @@ -65,6 +72,10 @@ self._booth = ( BoothEnv(report_processor, booth) if booth is not None else None ) + #pacemaker is currently not mocked and it provides only access to + #the authkey + self._pacemaker = PacemakerEnv() + self._request_timeout = request_timeout self._is_cman_cluster = None # TODO tokens probably should not be inserted from outside, but we're # postponing dealing with them, because it's not that easy to move @@ -72,6 +83,9 @@ self._auth_tokens_getter = auth_tokens_getter self._auth_tokens = None self._cib_upgraded = False + self._cib_data_tmp_file = None + + self.__timeout_cache = {} @property def logger(self): @@ -109,32 +123,60 @@ cib = get_cib(self._get_cib_xml()) if minimal_version is not None: upgraded_cib = ensure_cib_version( - self.cmd_runner(), cib, minimal_version + self.cmd_runner(), + cib, + minimal_version ) if upgraded_cib is not None: cib = upgraded_cib + if self.is_cib_live and not self._cib_upgraded: + self.report_processor.process( + reports.cib_upgrade_successful() + ) self._cib_upgraded = True return cib + def get_cluster_state(self): + return get_cluster_state_dom(get_cluster_status_xml(self.cmd_runner())) + def _push_cib_xml(self, cib_data): if self.is_cib_live: - replace_cib_configuration_xml( - self.cmd_runner(), cib_data, self._cib_upgraded - ) - if self._cib_upgraded: - self._cib_upgraded = False - self.report_processor.process(reports.cib_upgrade_successful()) + replace_cib_configuration_xml(self.cmd_runner(), cib_data) + self._cib_upgraded = False else:
self._cib_data = cib_data + def _get_wait_timeout(self, wait): + if wait is False: + return False + + if wait not in self.__timeout_cache: + if not self.is_cib_live: + raise LibraryError(reports.wait_for_idle_not_live_cluster()) + ensure_wait_for_idle_support(self.cmd_runner()) + self.__timeout_cache[wait] = get_valid_timeout_seconds(wait) + return self.__timeout_cache[wait] + + + def ensure_wait_satisfiable(self, wait): + """ + Raise when wait is not supported or when wait is not a valid wait value. + + mixed wait -- can be False when waiting is not required, or a valid timeout + """ + self._get_wait_timeout(wait) - def push_cib(self, cib): + def push_cib(self, cib, wait=False): + timeout = self._get_wait_timeout(wait) #etree returns bytes: b'xml' #python 3 removed .encode() from bytes #run(...) calls subprocess.Popen.communicate which calls encode... #so here is bytes to str conversion self._push_cib_xml(etree.tostring(cib).decode()) + if timeout is not False: + wait_for_idle(self.cmd_runner(), timeout) + @property def is_cib_live(self): return self._cib_data is None @@ -214,9 +256,9 @@ def command_expect_live_corosync_env(self): if not self.is_corosync_conf_live: - raise LibraryError(reports.live_environment_required([ - "--corosync_conf" - ])) + raise LibraryError( + reports.live_environment_required(["COROSYNC_CONF"]) + ) @property def is_corosync_conf_live(self): @@ -227,8 +269,26 @@ # make sure to get output of external processes in English and ASCII "LC_ALL": "C", } + if self.user_login: runner_env["CIB_user"] = self.user_login + + if not self.is_cib_live: + # Dump CIB data to a temporary file and set it up in the runner. + # This way every called pacemaker tool can access the CIB and we + # don't need to take care of it every time the runner is called.
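+ # Pacemaker CLI tools honor the CIB_file environment variable and + # operate on the file it points to instead of the live CIB.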
+ if not self._cib_data_tmp_file: + try: + self._cib_data_tmp_file = tempfile.NamedTemporaryFile( + "w+", + suffix=".pcs" + ) + self._cib_data_tmp_file.write(self._get_cib_xml()) + self._cib_data_tmp_file.flush() + except EnvironmentError as e: + raise LibraryError(reports.cib_save_tmp_error(str(e))) + runner_env["CIB_file"] = self._cib_data_tmp_file.name + return CommandRunner(self.logger, self.report_processor, runner_env) def node_communicator(self): @@ -237,7 +297,8 @@ self.report_processor, self.__get_auth_tokens(), self.user_login, - self.user_groups + self.user_groups, + self._request_timeout ) def __get_auth_tokens(self): @@ -251,3 +312,7 @@ @property def booth(self): return self._booth + + @property + def pacemaker(self): + return self._pacemaker diff -Nru pcs-0.9.155+dfsg/pcs/lib/env_tools.py pcs-0.9.159/pcs/lib/env_tools.py --- pcs-0.9.155+dfsg/pcs/lib/env_tools.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/env_tools.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,35 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from pcs.lib.cib.resource import remote_node, guest_node +from pcs.lib.xml_tools import get_root +from pcs.lib.node import NodeAddressesList + + +def get_nodes(corosync_conf=None, tree=None): + return NodeAddressesList( + ( + corosync_conf.get_nodes() if corosync_conf + else NodeAddressesList([]) + ) + + + ( + get_nodes_remote(tree) if tree is not None + else NodeAddressesList([]) + ) + + + ( + get_nodes_guest(tree) if tree is not None + else NodeAddressesList([]) + ) + ) + +def get_nodes_remote(tree): + return NodeAddressesList(remote_node.find_node_list(get_root(tree))) + +def get_nodes_guest(tree): + return NodeAddressesList(guest_node.find_node_list(get_root(tree))) diff -Nru pcs-0.9.155+dfsg/pcs/lib/errors.py pcs-0.9.159/pcs/lib/errors.py --- pcs-0.9.155+dfsg/pcs/lib/errors.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/errors.py 2017-06-30 15:33:01.000000000 +0000 @@ -59,3 +59,26 @@ code=self.code, info=self.info ) + +class ReportListAnalyzer(object): + def __init__(self, report_list): + self.__error_list = None + self.__report_list = report_list + + def reports_with_severities(self, severity_list): + return [ + report_item for report_item in self.report_list + if report_item.severity in severity_list + ] + + @property + def report_list(self): + return self.__report_list + + @property + def error_list(self): + if self.__error_list is None: + self.__error_list = self.reports_with_severities( + [ReportItemSeverity.ERROR] + ) + return self.__error_list diff -Nru pcs-0.9.155+dfsg/pcs/lib/exchange_formats.md pcs-0.9.159/pcs/lib/exchange_formats.md --- pcs-0.9.155+dfsg/pcs/lib/exchange_formats.md 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/exchange_formats.md 2017-06-30 15:33:01.000000000 +0000 @@ -32,3 +32,15 @@ "resource_sets": {"options": {"id": "id"}, "ids": ["resourceA", "resourceB"]}, } ``` + +Resource operation interval duplication +--------------------------------------- +Dictionary. Key is operation name. Value is a list of lists of intervals.
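+Each inner list groups intervals that express the same duration in different units.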
+```python +{ + "monitor": [ + ["3600s", "60m", "1h"], + ["60s", "1m"], + ], +}, +``` diff -Nru pcs-0.9.155+dfsg/pcs/lib/external.py pcs-0.9.159/pcs/lib/external.py --- pcs-0.9.155+dfsg/pcs/lib/external.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/external.py 2017-06-30 15:33:01.000000000 +0000 @@ -6,9 +6,10 @@ ) import base64 -import inspect +import io import json import os + try: # python 2 from pipes import quote as shell_quote @@ -17,7 +18,6 @@ from shlex import quote as shell_quote import re import signal -import ssl import subprocess import sys try: @@ -26,29 +26,12 @@ except ImportError: # python3 from urllib.parse import urlencode as urllib_urlencode -try: - # python2 - from urllib2 import ( - build_opener as urllib_build_opener, - HTTPCookieProcessor as urllib_HTTPCookieProcessor, - HTTPSHandler as urllib_HTTPSHandler, - HTTPError as urllib_HTTPError, - URLError as urllib_URLError - ) -except ImportError: - # python3 - from urllib.request import ( - build_opener as urllib_build_opener, - HTTPCookieProcessor as urllib_HTTPCookieProcessor, - HTTPSHandler as urllib_HTTPSHandler - ) - from urllib.error import ( - HTTPError as urllib_HTTPError, - URLError as urllib_URLError - ) from pcs import settings -from pcs.common import report_codes +from pcs.common import ( + pcs_pycurl as pycurl, + report_codes, +) from pcs.common.tools import ( join_multilines, simple_cache, @@ -138,7 +121,7 @@ instance -- instance name; it has no effect on non-systemd systems. If None no instance name will be used. """ - if not is_service_installed(runner, service): + if not is_service_installed(runner, service, instance): return if is_systemctl(): stdout, stderr, retval = runner.run([ @@ -279,15 +262,17 @@ return retval == 0 -def is_service_installed(runner, service): +def is_service_installed(runner, service, instance=None): """ Check if specified service is installed on local system. runner -- CommandRunner service -- name of service + instance -- systemd service instance """ if is_systemctl(): - return service in get_systemd_services(runner) + service_name = "{0}{1}".format(service, "" if instance is None else "@") + return service_name in get_systemd_services(runner) else: return service in get_non_systemd_services(runner) @@ -356,6 +341,20 @@ return match is not None and match.group(1) == "1" +def is_proxy_set(env_dict): + """ + Returns True whenever any of the proxy environment variables (https_proxy, + HTTPS_PROXY, all_proxy, ALL_PROXY) is set in env_dict. False otherwise. + + env_dict -- environment variables in dict + """ + proxy_list = ["https_proxy", "all_proxy"] + for var in proxy_list + [v.upper() for v in proxy_list]: + if env_dict.get(var, "") != "": + return True + return False + + class CommandRunner(object): def __init__(self, logger, reporter, env_vars=None): self._logger = logger @@ -375,18 +374,31 @@ # set own PATH or CIB_file, we must allow it. I.e. it wants to run # a pacemaker tool on a CIB in a file but cannot afford the risk of # changing the CIB in the file specified by the user.
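+ # work on a copy so that env_extend does not pollute the environment + # shared by subsequent runner invocations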
- env_vars = self._env_vars + env_vars = self._env_vars.copy() env_vars.update( dict(env_extend) if env_extend else dict() ) log_args = " ".join([shell_quote(x) for x in args]) - msg = "Running: {args}" - if stdin_string: - msg += "\n--Debug Input Start--\n{stdin}\n--Debug Input End--" - self._logger.debug(msg.format(args=log_args, stdin=stdin_string)) + self._logger.debug( + "Running: {args}\nEnvironment:{env_vars}{stdin_string}".format( + args=log_args, + stdin_string=("" if not stdin_string else ( + "\n--Debug Input Start--\n{0}\n--Debug Input End--" + .format(stdin_string) + )), + env_vars=("" if not env_vars else ( + "\n" + "\n".join([ + " {0}={1}".format(key, val) + for key, val in sorted(env_vars.items()) + ]) + )) + ) + ) self._reporter.process( - reports.run_external_process_started(log_args, stdin_string) + reports.run_external_process_started( + log_args, stdin_string, env_vars + ) ) try: @@ -456,6 +468,10 @@ pass +class NodeConnectionTimedOutException(NodeCommunicationException): + pass + + def node_communicator_exception_to_report_item( e, severity=ReportItemSeverity.ERROR, forceable=None ): @@ -479,6 +495,8 @@ reports.node_communication_error_other_error, NodeConnectionException: reports.node_communication_error_unable_to_connect, + NodeConnectionTimedOutException: + reports.node_communication_error_timed_out, } if e.__class__ in exception_to_report: return exception_to_report[e.__class__]( @@ -509,42 +527,90 @@ """ return json.dumps(data) - def __init__(self, logger, reporter, auth_tokens, user=None, groups=None): + def __init__( + self, logger, reporter, auth_tokens, user=None, groups=None, + request_timeout=None + ): """ auth_tokens authorization tokens for nodes: {node: token} user username groups groups the user is a member of + request_timeout -- positive integer, time for one request in seconds """ self._logger = logger self._reporter = reporter self._auth_tokens = auth_tokens self._user = user self._groups = groups + self._request_timeout = request_timeout - def call_node(self, node_addr, request, data): + @property + def request_timeout(self): + return ( + settings.default_request_timeout + if self._request_timeout is None + else self._request_timeout + ) + + def call_node(self, node_addr, request, data, request_timeout=None): """ Send a request to a node node_addr destination node, instance of NodeAddresses request command to be run on the node data command parameters, encoded by format_data_* method """ - return self.call_host(node_addr.ring0, request, data) + return self.call_host(node_addr.ring0, request, data, request_timeout) - def call_host(self, host, request, data): + def call_host(self, host, request, data, request_timeout=None): """ Send a request to a host host host address request command to be run on the host data command parameters, encoded by format_data_* method + request_timeout float timeout for the request; if not set, the object + property will be used """ - opener = self.__get_opener() + def __debug_callback(data_type, debug_data): + prefixes = { + pycurl.DEBUG_TEXT: b"* ", + pycurl.DEBUG_HEADER_IN: b"< ", + pycurl.DEBUG_HEADER_OUT: b"> ", + pycurl.DEBUG_DATA_IN: b"<< ", + pycurl.DEBUG_DATA_OUT: b">> ", + } + if data_type in prefixes: + debug_output.write(prefixes[data_type]) + debug_output.write(debug_data) + if not debug_data.endswith(b"\n"): + debug_output.write(b"\n") + + output = io.BytesIO() + debug_output = io.BytesIO() + cookies = self.__prepare_cookies(host) + timeout = ( + request_timeout + if request_timeout is not None + else
self.request_timeout + ) url = "https://{host}:2224/{request}".format( host=("[{0}]".format(host) if ":" in host else host), request=request ) - cookies = self.__prepare_cookies(host) + + handler = pycurl.Curl() + handler.setopt(pycurl.PROTOCOLS, pycurl.PROTO_HTTPS) + handler.setopt(pycurl.TIMEOUT_MS, int(timeout * 1000)) + handler.setopt(pycurl.URL, url.encode("utf-8")) + handler.setopt(pycurl.WRITEFUNCTION, output.write) + handler.setopt(pycurl.VERBOSE, 1) + handler.setopt(pycurl.DEBUGFUNCTION, __debug_callback) + handler.setopt(pycurl.SSL_VERIFYHOST, 0) + handler.setopt(pycurl.SSL_VERIFYPEER, 0) + handler.setopt(pycurl.NOSIGNAL, 1) # required for multi-threading if cookies: - opener.addheaders.append(("Cookie", ";".join(cookies))) + handler.setopt(pycurl.COOKIE, ";".join(cookies).encode("utf-8")) + if data: + handler.setopt(pycurl.COPYPOSTFIELDS, data.encode("utf-8")) msg = "Sending HTTP Request to: {url}" if data: @@ -559,81 +625,73 @@ ) try: - # python3 requires data to be bytes not str - if data: - data = data.encode("utf-8") - result = opener.open(url, data) - # python3 returns bytes not str - response_data = result.read().decode("utf-8") + handler.perform() + response_data = output.getvalue().decode("utf-8") + response_code = handler.getinfo(pycurl.RESPONSE_CODE) self._logger.debug(result_msg.format( url=url, - code=result.getcode(), + code=response_code, response=response_data )) - self._reporter.process( - reports.node_communication_finished( - url, result.getcode(), response_data - ) - ) - return response_data - except urllib_HTTPError as e: - # python3 returns bytes not str - response_data = e.read().decode("utf-8") - self._logger.debug(result_msg.format( - url=url, - code=e.code, - response=response_data + self._reporter.process(reports.node_communication_finished( + url, response_code, response_data )) - self._reporter.process( - reports.node_communication_finished(url, e.code, response_data) - ) - if e.code == 400: + if response_code == 400: # old pcsd protocol: error messages are commonly passed in plain # text in response body with HTTP code 400 # we need to be backward compatible with that raise NodeCommandUnsuccessfulException( host, request, response_data.rstrip() ) - elif e.code == 401: + elif response_code == 401: raise NodeAuthenticationException( - host, request, "HTTP error: {0}".format(e.code) + host, request, "HTTP error: {0}".format(response_code) ) - elif e.code == 403: + elif response_code == 403: raise NodePermissionDeniedException( - host, request, "HTTP error: {0}".format(e.code) + host, request, "HTTP error: {0}".format(response_code) ) - elif e.code == 404: + elif response_code == 404: raise NodeUnsupportedCommandException( - host, request, "HTTP error: {0}".format(e.code) + host, request, "HTTP error: {0}".format(response_code) ) - else: + elif response_code >= 400: raise NodeCommunicationException( - host, request, "HTTP error: {0}".format(e.code) + host, request, "HTTP error: {0}".format(response_code) ) - except urllib_URLError as e: + return response_data + except pycurl.error as e: + # In pycurl versions lower than 7.19.3 it is not possible to set + # NOPROXY option. Therefore for the proper support of proxy settings + # we have to use environment variables.
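+ # pycurl raises pycurl.error for every failure; e.args carries + # (errno, reason) and errno is what distinguishes a timeout from + # other connection problems.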
+ if is_proxy_set(os.environ): + self._logger.warning("Proxy is set") + self._reporter.process( + reports.node_communication_proxy_is_set() + ) + errno, reason = e.args msg = "Unable to connect to {node} ({reason})" - self._logger.debug(msg.format(node=host, reason=e.reason)) + self._logger.debug(msg.format(node=host, reason=reason)) self._reporter.process( - reports.node_communication_not_connected(host, e.reason) + reports.node_communication_not_connected(host, reason) ) - raise NodeConnectionException(host, request, e.reason) - - def __get_opener(self): - # enable self-signed certificates - # https://www.python.org/dev/peps/pep-0476/ - # http://bugs.python.org/issue21308 - if ( - hasattr(ssl, "_create_unverified_context") - and - "context" in inspect.getargspec(urllib_HTTPSHandler.__init__).args - ): - opener = urllib_build_opener( - urllib_HTTPSHandler(context=ssl._create_unverified_context()), - urllib_HTTPCookieProcessor() + if errno == pycurl.E_OPERATION_TIMEDOUT: + raise NodeConnectionTimedOutException(host, request, reason) + else: + raise NodeConnectionException(host, request, reason) + finally: + debug_data = debug_output.getvalue().decode("utf-8", "ignore") + self._logger.debug( + ( + "Communication debug info for calling: {url}\n" + "--Debug Communication Info Start--\n" + "{data}\n" + "--Debug Communication Info End--" + ).format(url=url, data=debug_data) + ) + self._reporter.process( + reports.node_communication_debug_info(url, debug_data) ) - else: - opener = urllib_build_opener(urllib_HTTPCookieProcessor()) - return opener def __prepare_cookies(self, host): # Let's be safe about characters in variables (they can come from env) @@ -649,7 +707,9 @@ if self._groups: cookies.append("CIB_user_groups={0}".format( # python3 requires the value to be bytes not str - base64.b64encode(" ".join(self._groups).encode("utf-8")) + base64.b64encode( + " ".join(self._groups).encode("utf-8") + ).decode("utf-8") )) return cookies diff -Nru pcs-0.9.155+dfsg/pcs/lib/node_communication_format.py pcs-0.9.159/pcs/lib/node_communication_format.py --- pcs-0.9.155+dfsg/pcs/lib/node_communication_format.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/node_communication_format.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,161 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) +from collections import namedtuple +from pcs.lib import reports +from pcs.lib.errors import LibraryError +import base64 + +def create_pcmk_remote_actions(action_list): + return dict([ + ( + "pacemaker_remote {0}".format(action), + service_cmd_format( + "pacemaker_remote", + action + ) + ) + for action in action_list + ]) + +def pcmk_authkey_format(authkey_content): + """ + Return a dict usable in the communication with a remote/put_file + authkey_content is raw authkey content + """ + return { + "data": base64.b64encode(authkey_content).decode("utf-8"), + "type": "pcmk_remote_authkey", + "rewrite_existing": True, + } + +def corosync_authkey_format(authkey_content): + """ + Return a dict usable in the communication with a remote/put_file + authkey_content is raw authkey content + """ + return { + "data": base64.b64encode(authkey_content).decode("utf-8"), + "type": "corosync_authkey", + "rewrite_existing": True, + } + +def pcmk_authkey_file(authkey_content): + return { + "pacemaker_remote authkey": pcmk_authkey_format(authkey_content) + } + +def corosync_authkey_file(authkey_content): + return { + "corosync authkey": corosync_authkey_format(authkey_content) + } + 
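+# For example, pcmk_authkey_file(b"key") returns: +# {"pacemaker_remote authkey": {"data": "a2V5", +# "type": "pcmk_remote_authkey", "rewrite_existing": True}} +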
+def service_cmd_format(service, command): + """ + Return a dict usable in the communication with a remote/run_action + string service is name of requested service (e.g. pacemaker_remote) + string command specifies an action on service (e.g. start) + """ + return { + "type": "service_command", + "service": service, + "command": command, + } + +class Result(namedtuple("Result", "code message")): + """ Wrapper over some call results """ + +def unpack_items_from_response(main_response, main_key, node_label): + """ + Check format of main_response and return main_response[main_key]. + dict main_response holds, under the key 'main_key', a dict with item names + as keys and result dicts as values. E.g. + { + "files": { + "file1": {"code": "success", "message": ""} + } + } + string main_key is the name of the key under which the results dict is stored + string node_label is a node label for reporting an invalid format + """ + is_in_expected_format = ( + isinstance(main_response, dict) + and + main_key in main_response + and + isinstance(main_response[main_key], dict) + ) + + if not is_in_expected_format: + raise LibraryError(reports.invalid_response_format(node_label)) + + return main_response[main_key] + +def response_items_to_result(response_items, expected_keys, node_label): + """ + Check format of response_items and return a dict whose values are + transformed to Result. E.g. + {"file1": {"code": "success", "message": ""}} + -> + {"file1": Result("success", "")} + + dict response_items has item names as keys and result dicts as values. + list expected_keys contains the keys expected in response_items + string node_label is a node label for reporting an invalid format + """ + if set(expected_keys) != set(response_items.keys()): + raise LibraryError(reports.invalid_response_format(node_label)) + + for result in response_items.values(): + if( + not isinstance(result, dict) + or + "code" not in result + or + "message" not in result + ): + raise LibraryError(reports.invalid_response_format(node_label)) + + return dict([ + ( + file_key, + Result(raw_result["code"], raw_result["message"]) + ) + for file_key, raw_result in response_items.items() + ]) + + +def response_to_result( + main_response, main_key, expected_keys, node_label +): + """ + Validate response (from remote/put_file or remote/run_action) and transform + results from dict to Result. + + dict main_response holds, under the key 'main_key', a dict with item names + as keys and result dicts as values. E.g.
+    {
+        "files": {
+            "file1": {"code": "success", "message": ""}
+        }
+    }
+    string main_key is the name of the key under which the results dict lives
+    list expected_keys contains the expected keys of main_response[main_key]
+    string node_label is a node label for reporting an invalid format
+    """
+    return response_items_to_result(
+        unpack_items_from_response(main_response, main_key, node_label),
+        expected_keys,
+        node_label
+    )
+
+def get_format_result(code_message_map):
+    def format_result(result):
+        if result.code in code_message_map:
+            return code_message_map[result.code]
+
+        return result.message
+    return format_result
diff -Nru pcs-0.9.155+dfsg/pcs/lib/node.py pcs-0.9.159/pcs/lib/node.py
--- pcs-0.9.155+dfsg/pcs/lib/node.py	2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/lib/node.py	2017-06-30 15:33:01.000000000 +0000
@@ -9,6 +9,16 @@
 class NodeNotFound(Exception):
     pass
 
+def node_addresses_contain_host(node_addresses_list, host):
+    return (
+        host in [node.ring0 for node in node_addresses_list]
+        or
+        host in [node.ring1 for node in node_addresses_list if node.ring1]
+    )
+
+def node_addresses_contain_name(node_addresses_list, name):
+    return name in [node.name for node in node_addresses_list]
+
 
 class NodeAddresses(object):
     def __init__(self, ring0, ring1=None, name=None, id=None):
@@ -29,6 +39,20 @@
     def __lt__(self, other):
         return self.label < other.label
 
+    def __repr__(self):
+        #the "dict" with name and id is "written" inside the string because in
+        #python3 the order of dict items is not guaranteed
+        return str("<{0}.{1} {2}, {{'name': {3}, 'id': {4}}}>").format(
+            self.__module__,
+            self.__class__.__name__,
+            repr(
+                [self.ring0] if self.ring1 is None
+                else [self.ring0, self.ring1]
+            ),
+            repr(self.name),
+            repr(self.id),
+        )
+
     @property
     def ring0(self):
         return self._ring0
@@ -71,6 +95,13 @@
     def __reversed__(self):
         return self._list.__reversed__()
 
+    def __add__(self, other):
+        if isinstance(other, NodeAddressesList):
+            return NodeAddressesList(self._list + other._list)
+        #Assume the other operand is a plain list. If it is not a list, the
+        #concatenation below correctly raises.
+        return NodeAddressesList(self._list + other)
+
     def find_by_label(self, label):
         for node in self._list:
             if node.label == label:
diff -Nru pcs-0.9.155+dfsg/pcs/lib/nodes_task.py pcs-0.9.159/pcs/lib/nodes_task.py
--- pcs-0.9.155+dfsg/pcs/lib/nodes_task.py	2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/lib/nodes_task.py	2017-06-30 15:33:01.000000000 +0000
@@ -5,15 +5,17 @@
     unicode_literals,
 )
 
+from collections import defaultdict
 import json
 
 from pcs.common import report_codes
 from pcs.common.tools import run_parallel as tools_run_parallel
-from pcs.lib import reports
-from pcs.lib.errors import LibraryError, ReportItemSeverity
+from pcs.lib import reports, node_communication_format
+from pcs.lib.errors import LibraryError, ReportItemSeverity, ReportListAnalyzer
 from pcs.lib.external import (
     NodeCommunicator,
     NodeCommunicationException,
+    NodeCommandUnsuccessfulException,
     node_communicator_exception_to_report_item,
     parallel_nodes_communication_helper,
 )
@@ -23,6 +25,56 @@
 )
 
 
+def _call_for_json(
+    node_communicator, node, request_path, report_items,
+    data=None, request_timeout=None, warn_on_communication_exception=False
+):
+    """
+    Return a python object parsed from a json call response.
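+
+    Communication failures are appended to report_items instead of being
+    raised, with a severity chosen by warn_on_communication_exception; a
+    response that is not valid json is reported as an invalid response format.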
+ """ + try: + return json.loads(node_communicator.call_node( + node, + request_path, + data=None if data is None + else NodeCommunicator.format_data_dict(data) + , + request_timeout=request_timeout + )) + except NodeCommandUnsuccessfulException as e: + report_items.append( + reports.node_communication_command_unsuccessful( + e.node, + e.command, + e.reason, + severity=( + ReportItemSeverity.WARNING + if warn_on_communication_exception else + ReportItemSeverity.ERROR + ), + forceable=( + None if warn_on_communication_exception + else report_codes.SKIP_OFFLINE_NODES + ), + ) + ) + + except NodeCommunicationException as e: + report_items.append( + node_communicator_exception_to_report_item( + e, + ReportItemSeverity.WARNING if warn_on_communication_exception + else ReportItemSeverity.ERROR + , + forceable=None if warn_on_communication_exception + else report_codes.SKIP_OFFLINE_NODES + ) + ) + except ValueError: + #e.g. response is not in json format + report_items.append(reports.invalid_response_format(node.label)) + + def distribute_corosync_conf( node_communicator, reporter, node_addr_list, config_text, skip_offline_nodes=False @@ -177,3 +229,293 @@ "remote/check_auth", NodeCommunicator.format_data_dict({"check_auth_only": 1}) ) + +def availability_checker_node(availability_info, report_items, node_label): + """ + Check if availability_info means that the node is suitable as cluster + (corosync) node. + """ + if availability_info["node_available"]: + return + + if availability_info.get("pacemaker_running", False): + report_items.append(reports.cannot_add_node_is_running_service( + node_label, + "pacemaker" + )) + return + + if availability_info.get("pacemaker_remote", False): + report_items.append(reports.cannot_add_node_is_running_service( + node_label, + "pacemaker_remote" + )) + return + + report_items.append(reports.cannot_add_node_is_in_cluster(node_label)) + +def availability_checker_remote_node( + availability_info, report_items, node_label +): + """ + Check if availability_info means that the node is suitable as remote node. + """ + if availability_info["node_available"]: + return + + if availability_info.get("pacemaker_running", False): + report_items.append(reports.cannot_add_node_is_running_service( + node_label, + "pacemaker" + )) + return + + if not availability_info.get("pacemaker_remote", False): + report_items.append(reports.cannot_add_node_is_in_cluster(node_label)) + return + + +def check_can_add_node_to_cluster( + node_communicator, node, report_items, + check_response=availability_checker_node, + warn_on_communication_exception=False, +): + """ + Analyze result of node_available check if it is possible use the node as + cluster node. + + NodeCommunicator node_communicator is an object for making the http request + NodeAddresses node specifies the destination url + list report_items is place where report items should be collected + callable check_response -- make decision about availability based on + response info + """ + safe_report_items = [] + availability_info = _call_for_json( + node_communicator, + node, + "remote/node_available", + safe_report_items, + warn_on_communication_exception=warn_on_communication_exception + ) + report_items.extend(safe_report_items) + + if ReportListAnalyzer(safe_report_items).error_list: + return + + #If there was a communication error and --skip-offline is in effect, no + #exception was raised. If there is no result cannot process it. 
+    #Note: the error may be caused by an older pcsd daemon not supporting
+    #commands sent by a newer client.
+    if not availability_info:
+        return
+
+    is_in_expected_format = (
+        isinstance(availability_info, dict)
+        and
+        #node_available is a mandatory field
+        "node_available" in availability_info
+    )
+
+    if not is_in_expected_format:
+        report_items.append(reports.invalid_response_format(node.label))
+        return
+
+    check_response(availability_info, report_items, node.label)
+
+def run_actions_on_node(
+    node_communicator, path, response_key, report_processor, node, actions,
+    warn_on_communication_exception=False
+):
+    """
+    NodeCommunicator node_communicator is an object for making the http request
+    NodeAddresses node specifies the destination url
+    dict actions has a key that identifies the action and a value that is
+        a dict with data specific to the action type. A mandatory key there is:
+        * type - the type of the action, eg. "service_command"
+        For type == 'service_command' these keys are mandatory as well:
+        * service - specifies the service (eg. pacemaker_remote)
+        * command - specifies the command to be applied to the service
+            (eg. enable or start)
+    """
+    report_items = []
+    action_results = _call_for_json(
+        node_communicator,
+        node,
+        path,
+        report_items,
+        [("data_json", json.dumps(actions))],
+        warn_on_communication_exception=warn_on_communication_exception
+    )
+
+    #can raise
+    report_processor.process_list(report_items)
+    #If there was a communication error and --skip-offline is in effect, no
+    #exception was raised. If there is no result, we cannot process it.
+    #Note: the error may be caused by an older pcsd daemon not supporting
+    #commands sent by a newer client.
+    if not action_results:
+        return
+
+
+    return node_communication_format.response_to_result(
+        action_results,
+        response_key,
+        actions.keys(),
+        node.label,
+    )
+
+def _run_actions_on_multiple_nodes(
+    node_communicator, url, response_key, report_processor, create_start_report,
+    actions, node_addresses_list, is_success,
+    create_success_report, create_error_report, force_code, format_result,
+    skip_offline_nodes=False,
+    allow_incomplete_distribution=False, description=""
+):
+    error_map = defaultdict(dict)
+    def worker(node_addresses):
+        result = run_actions_on_node(
+            node_communicator,
+            url,
+            response_key,
+            report_processor,
+            node_addresses,
+            actions,
+            warn_on_communication_exception=skip_offline_nodes,
+        )
+        #If there was a communication error and --skip-offline is in effect, no
+        #exception was raised. If there is no result, we cannot process it.
+        #Note: the error may be caused by an older pcsd daemon not supporting
+        #commands sent by a newer client.
+        if not result:
+            return
+
+        for key, item_response in sorted(result.items()):
+            if is_success(key, item_response):
+                #only successes are processed individually; errors are
+                #collected and reported together at the end
+                report_processor.process(
+                    create_success_report(node_addresses.label, key)
+                )
+            else:
+                error_map[node_addresses.label][key] = format_result(
+                    item_response
+                )
+
+    report_processor.process(create_start_report(
+        actions.keys(),
+        [node.label for node in node_addresses_list],
+        description
+    ))
+
+    parallel_nodes_communication_helper(
+        worker,
+        [([node_addresses], {}) for node_addresses in node_addresses_list],
+        report_processor,
+        allow_incomplete_distribution,
+    )
+
+    #now we process the errors
+    if error_map:
+        make_report = reports.get_problem_creator(
+            force_code,
+            allow_incomplete_distribution
+        )
+        report_processor.process_list([
+            make_report(create_error_report, node_name, action_key, message)
+            for node_name, errors in error_map.items()
+            for action_key, message in errors.items()
+        ])
+
+def distribute_files(
+    node_communicator, report_processor, file_definitions, node_addresses_list,
+    skip_offline_nodes=False,
+    allow_incomplete_distribution=False, description=""
+):
+    """
+    Put the files specified in file_definitions to the nodes specified in
+    node_addresses_list.
+
+    NodeCommunicator node_communicator is an object for making the http request
+    NodeAddresses node specifies the destination url
+    dict file_definitions has a key that identifies the file and a value that
+        is a dict with data specific to the file type. Mandatory keys there are:
+        * type - the type of the file, like "booth_authfile" or
+            "pcmk_remote_authkey"
+        * data - the content of the file in a file specific format (e.g.
+            binary content is encoded by base64)
+        A common optional key is "rewrite_existing" (True/False) which
+        specifies the behaviour when the file already exists.
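+    bool skip_offline_nodes -- if True, an unreachable node produces only
+        a warning instead of an error (the --skip-offline behaviour)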
+    bool allow_incomplete_distribution -- report success even if some node(s)
+        are unavailable
+    """
+    _run_actions_on_multiple_nodes(
+        node_communicator,
+        "remote/put_file",
+        "files",
+        report_processor,
+        reports.files_distribution_started,
+        file_definitions,
+        node_addresses_list,
+        lambda key, response: response.code in [
+            "written",
+            "rewritten",
+            "same_content",
+        ],
+        reports.file_distribution_success,
+        reports.file_distribution_error,
+        report_codes.SKIP_FILE_DISTRIBUTION_ERRORS,
+        node_communication_format.get_format_result({
+            "conflict": "File already exists",
+        }),
+        skip_offline_nodes,
+        allow_incomplete_distribution,
+        description,
+    )
+
+def remove_files(
+    node_communicator, report_processor, file_definitions, node_addresses_list,
+    skip_offline_nodes=False,
+    allow_incomplete_distribution=False, description=""
+):
+    _run_actions_on_multiple_nodes(
+        node_communicator,
+        "remote/remove_file",
+        "files",
+        report_processor,
+        reports.files_remove_from_node_started,
+        file_definitions,
+        node_addresses_list,
+        lambda key, response: response.code in ["deleted", "not_found"],
+        reports.file_remove_from_node_success,
+        reports.file_remove_from_node_error,
+        report_codes.SKIP_FILE_DISTRIBUTION_ERRORS,
+        node_communication_format.get_format_result({}),
+        skip_offline_nodes,
+        allow_incomplete_distribution,
+        description,
+    )
+
+def run_actions_on_multiple_nodes(
+    node_communicator, report_processor, action_definitions, is_success,
+    node_addresses_list,
+    skip_offline_nodes=False,
+    allow_fails=False, description=""
+):
+    _run_actions_on_multiple_nodes(
+        node_communicator,
+        "remote/manage_services",
+        "actions",
+        report_processor,
+        reports.service_commands_on_nodes_started,
+        action_definitions,
+        node_addresses_list,
+        is_success,
+        reports.service_command_on_node_success,
+        reports.service_command_on_node_error,
+        report_codes.SKIP_ACTION_ON_NODES_ERRORS,
+        node_communication_format.get_format_result({
+            "fail": "Operation failed.",
+        }),
+        skip_offline_nodes,
+        allow_fails,
+        description,
+    )
diff -Nru pcs-0.9.155+dfsg/pcs/lib/pacemaker/env.py pcs-0.9.159/pcs/lib/pacemaker/env.py
--- pcs-0.9.155+dfsg/pcs/lib/pacemaker/env.py	1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/pacemaker/env.py	2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,28 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from pcs.common import env_file_role_codes
+from pcs.lib.env_file import RealFile
+from pcs import settings
+
+
+class PacemakerEnv(object):
+    def __init__(self):
+        """
+        Provide access to the pacemaker authkey file.
+        """
+        self.__authkey = RealFile(
+            file_role=env_file_role_codes.PACEMAKER_AUTHKEY,
+            file_path=settings.pacemaker_authkey_file,
+        )
+
+    @property
+    def has_authkey(self):
+        return self.__authkey.exists
+
+    def get_authkey_content(self):
+        return self.__authkey.read()
diff -Nru pcs-0.9.155+dfsg/pcs/lib/pacemaker/live.py pcs-0.9.159/pcs/lib/pacemaker/live.py
--- pcs-0.9.155+dfsg/pcs/lib/pacemaker/live.py	1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/pacemaker/live.py	2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,285 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+import os.path
+from lxml import etree
+
+from pcs import settings
+from pcs.common.tools import (
+    join_multilines,
+    xml_fromstring
+)
+from pcs.lib import reports
+from pcs.lib.cib.tools import get_pacemaker_version_by_which_cib_was_validated
+from pcs.lib.errors import 
LibraryError
+from pcs.lib.pacemaker.state import ClusterState
+
+
+__EXITCODE_WAIT_TIMEOUT = 62
+__EXITCODE_CIB_SCOPE_VALID_BUT_NOT_PRESENT = 6
+__EXITCODE_CIB_SCHEMA_IS_THE_LATEST_AVAILABLE = 211
+__RESOURCE_CLEANUP_OPERATION_COUNT_THRESHOLD = 100
+
+class CrmMonErrorException(LibraryError):
+    pass
+
+
+### status
+
+def get_cluster_status_xml(runner):
+    stdout, stderr, retval = runner.run(
+        [__exec("crm_mon"), "--one-shot", "--as-xml", "--inactive"]
+    )
+    if retval != 0:
+        raise CrmMonErrorException(
+            reports.cluster_state_cannot_load(join_multilines([stderr, stdout]))
+        )
+    return stdout
+
+### cib
+
+def get_cib_xml(runner, scope=None):
+    command = [__exec("cibadmin"), "--local", "--query"]
+    if scope:
+        command.append("--scope={0}".format(scope))
+    stdout, stderr, retval = runner.run(command)
+    if retval != 0:
+        if retval == __EXITCODE_CIB_SCOPE_VALID_BUT_NOT_PRESENT and scope:
+            raise LibraryError(
+                reports.cib_load_error_scope_missing(
+                    scope,
+                    join_multilines([stderr, stdout])
+                )
+            )
+        else:
+            raise LibraryError(
+                reports.cib_load_error(join_multilines([stderr, stdout]))
+            )
+    return stdout
+
+def parse_cib_xml(xml):
+    return xml_fromstring(xml)
+
+def get_cib(xml):
+    try:
+        return parse_cib_xml(xml)
+    except (etree.XMLSyntaxError, etree.DocumentInvalid):
+        raise LibraryError(reports.cib_load_error_invalid_format())
+
+def replace_cib_configuration_xml(runner, xml):
+    cmd = [
+        __exec("cibadmin"),
+        "--replace",
+        "--verbose",
+        "--xml-pipe",
+        "--scope", "configuration",
+    ]
+    stdout, stderr, retval = runner.run(cmd, stdin_string=xml)
+    if retval != 0:
+        raise LibraryError(reports.cib_push_error(stderr, stdout))
+
+def replace_cib_configuration(runner, tree):
+    #etree returns bytes: b'xml'
+    #python3 removed .encode() from bytes
+    #run(...) calls subprocess.Popen.communicate which calls encode...
+    #so here is a bytes to str conversion
+    xml = etree.tostring(tree).decode()
+    return replace_cib_configuration_xml(runner, xml)
+
+def ensure_cib_version(runner, cib, version):
+    """
+    Ensure that the specified cib is validated by pacemaker with the schema
+    version 'version' or newer. If the cib does not correspond to this version,
+    try to upgrade the cib.
+    Return a cib which was validated by pacemaker version 'version' or newer,
+    or None if the specified cib already complies.
+    Raise LibraryError on any failure.
+
+    CommandRunner runner
+    etree cib cib tree
+    tuple version tuple of integers (<major>, <minor>, <revision>)
+    """
+    current_version = get_pacemaker_version_by_which_cib_was_validated(cib)
+    if current_version >= version:
+        return None
+
+    _upgrade_cib(runner)
+    new_cib_xml = get_cib_xml(runner)
+
+    try:
+        new_cib = parse_cib_xml(new_cib_xml)
+    except (etree.XMLSyntaxError, etree.DocumentInvalid) as e:
+        raise LibraryError(reports.cib_upgrade_failed(str(e)))
+
+    current_version = get_pacemaker_version_by_which_cib_was_validated(new_cib)
+    if current_version >= version:
+        return new_cib
+
+    raise LibraryError(reports.unable_to_upgrade_cib_to_required_version(
+        current_version, version
+    ))
+
+def _upgrade_cib(runner):
+    """
+    Upgrade the CIB to the latest schema available locally or cluster-wide.
+    CommandRunner runner
+    """
+    stdout, stderr, retval = runner.run(
+        [__exec("cibadmin"), "--upgrade", "--force"]
+    )
+    # If we are already on the latest schema available, do not consider it an
+    # error. We do not know here what version is required. The caller however
+    # knows and is responsible for dealing with it.
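+    # (cibadmin exits with 211, "Schema is already the latest available",
+    # in that case - see __EXITCODE_CIB_SCHEMA_IS_THE_LATEST_AVAILABLE above.)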
+    if retval not in (0, __EXITCODE_CIB_SCHEMA_IS_THE_LATEST_AVAILABLE):
+        raise LibraryError(
+            reports.cib_upgrade_failed(join_multilines([stderr, stdout]))
+        )
+
+### wait for idle
+
+def has_wait_for_idle_support(runner):
+    # returns 1 on success so we don't care about retval
+    stdout, stderr, dummy_retval = runner.run(
+        [__exec("crm_resource"), "-?"]
+    )
+    # help goes to stderr but we check stdout as well in case that gets changed
+    return "--wait" in stderr or "--wait" in stdout
+
+def ensure_wait_for_idle_support(runner):
+    if not has_wait_for_idle_support(runner):
+        raise LibraryError(reports.wait_for_idle_not_supported())
+
+def wait_for_idle(runner, timeout=None):
+    """
+    Run the waiting command. Raise LibraryError if the command fails.
+
+    runner is a preconfigured object for running external programs
+    string timeout is the waiting timeout
+    """
+    args = [__exec("crm_resource"), "--wait"]
+    if timeout is not None:
+        args.append("--timeout={0}".format(timeout))
+    stdout, stderr, retval = runner.run(args)
+    if retval != 0:
+        # Useful info goes to stderr - not only error messages, a list of
+        # pending actions in case of timeout goes there as well.
+        # We include stdout as well in case that gets changed.
+        if retval == __EXITCODE_WAIT_TIMEOUT:
+            raise LibraryError(
+                reports.wait_for_idle_timed_out(
+                    join_multilines([stderr, stdout])
+                )
+            )
+        else:
+            raise LibraryError(
+                reports.wait_for_idle_error(
+                    join_multilines([stderr, stdout])
+                )
+            )
+
+### nodes
+
+def get_local_node_name(runner):
+    # It would be possible to run "crm_node --name" to get the name in one
+    # call, but it returns false names when the cluster is not running (or we
+    # are on a remote node). Getting the node id first is reliable since it
+    # fails in those cases.
+    stdout, dummy_stderr, retval = runner.run(
+        [__exec("crm_node"), "--cluster-id"]
+    )
+    if retval != 0:
+        raise LibraryError(
+            reports.pacemaker_local_node_name_not_found("node id not found")
+        )
+    node_id = stdout.strip()
+
+    stdout, dummy_stderr, retval = runner.run(
+        [__exec("crm_node"), "--name-for-id={0}".format(node_id)]
+    )
+    if retval != 0:
+        raise LibraryError(
+            reports.pacemaker_local_node_name_not_found("node name not found")
+        )
+    node_name = stdout.strip()
+
+    if node_name == "(null)":
+        raise LibraryError(
+            reports.pacemaker_local_node_name_not_found("node name is null")
+        )
+    return node_name
+
+def get_local_node_status(runner):
+    try:
+        cluster_status = ClusterState(get_cluster_status_xml(runner))
+    except CrmMonErrorException:
+        return {"offline": True}
+    node_name = get_local_node_name(runner)
+    for node_status in cluster_status.node_section.nodes:
+        if node_status.attrs.name == node_name:
+            result = {
+                "offline": False,
+            }
+            for attr in (
+                'id', 'name', 'type', 'online', 'standby', 'standby_onfail',
+                'maintenance', 'pending', 'unclean', 'shutdown', 'expected_up',
+                'is_dc', 'resources_running',
+            ):
+                result[attr] = getattr(node_status.attrs, attr)
+            return result
+    raise LibraryError(reports.node_not_found(node_name))
+
+def remove_node(runner, node_name):
+    stdout, stderr, retval = runner.run([
+        __exec("crm_node"),
+        "--force",
+        "--remove",
+        node_name,
+    ])
+    if retval != 0:
+        raise LibraryError(
+            reports.node_remove_in_pacemaker_failed(
+                node_name,
+                reason=join_multilines([stderr, stdout])
+            )
+        )
+
+### resources
+
+def resource_cleanup(runner, resource=None, node=None, force=False):
+    if not force and not node and not resource:
+        summary = ClusterState(get_cluster_status_xml(runner)).summary
+        operations = summary.nodes.attrs.count 
* summary.resources.attrs.count
+        if operations > __RESOURCE_CLEANUP_OPERATION_COUNT_THRESHOLD:
+            raise LibraryError(
+                reports.resource_cleanup_too_time_consuming(
+                    __RESOURCE_CLEANUP_OPERATION_COUNT_THRESHOLD
+                )
+            )
+
+    cmd = [__exec("crm_resource"), "--cleanup"]
+    if resource:
+        cmd.extend(["--resource", resource])
+    if node:
+        cmd.extend(["--node", node])
+
+    stdout, stderr, retval = runner.run(cmd)
+
+    if retval != 0:
+        raise LibraryError(
+            reports.resource_cleanup_error(
+                join_multilines([stderr, stdout]),
+                resource,
+                node
+            )
+        )
+    # useful output (what has been done) goes to stderr
+    return join_multilines([stdout, stderr])
+
+### tools
+
+# shortcut for getting a full path to a pacemaker executable
+def __exec(name):
+    return os.path.join(settings.pacemaker_binaries, name)
diff -Nru pcs-0.9.155+dfsg/pcs/lib/pacemaker/state.py pcs-0.9.159/pcs/lib/pacemaker/state.py
--- pcs-0.9.155+dfsg/pcs/lib/pacemaker/state.py	1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/pacemaker/state.py	2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,289 @@
+'''
+The intention is to keep the knowledge about the cluster state structure in
+this module. Hiding the details of the underlying xml is desired too.
+'''
+
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+import os.path
+from collections import defaultdict
+
+from lxml import etree
+
+from pcs import settings
+from pcs.common.tools import xml_fromstring
+from pcs.lib import reports
+from pcs.lib.errors import LibraryError, ReportItemSeverity as severities
+from pcs.lib.pacemaker.values import (
+    is_false,
+    is_true,
+)
+from pcs.lib.xml_tools import find_parent
+
+class ResourceNotFound(Exception):
+    pass
+
+class _Attrs(object):
+    def __init__(self, owner_name, attrib, required_attrs):
+        '''
+        attrib lxml.etree._Attrib - wrapped attribute collection
+        required_attrs dict of required attribute names
+            object_name:xml_attribute
+        '''
+        self.owner_name = owner_name
+        self.attrib = attrib
+        self.required_attrs = required_attrs
+
+    def __getattr__(self, name):
+        if name in self.required_attrs.keys():
+            try:
+                attr_specification = self.required_attrs[name]
+                if isinstance(attr_specification, tuple):
+                    attr_name, attr_transform = attr_specification
+                    return attr_transform(self.attrib[attr_name])
+                else:
+                    return self.attrib[attr_specification]
+            except KeyError:
+                raise AttributeError(
+                    "Missing attribute '{0}' ('{1}' in source) in '{2}'"
+                    .format(name, self.required_attrs[name], self.owner_name)
+                )
+
+        raise AttributeError(
+            "'{0}' does not declare attribute '{1}'"
+            .format(self.owner_name, name)
+        )
+
+class _Children(object):
+    def __init__(self, owner_name, dom_part, children, sections):
+        self.owner_name = owner_name
+        self.dom_part = dom_part
+        self.children = children
+        self.sections = sections
+
+    def __getattr__(self, name):
+        if name in self.children.keys():
+            element_name, wrapper = self.children[name]
+            return [
+                wrapper(element)
+                for element in self.dom_part.findall('.//' + element_name)
+            ]
+
+        if name in self.sections.keys():
+            element_name, wrapper = self.sections[name]
+            return wrapper(self.dom_part.findall('.//' + element_name)[0])
+
+        raise AttributeError(
+            "'{0}' does not declare child or section '{1}'"
+            .format(self.owner_name, name)
+        )
+
+class _Element(object):
+    required_attrs = {}
+    children = {}
+    sections = {}
+
+    def __init__(self, dom_part):
+        self.dom_part = dom_part
+        self.attrs = _Attrs(
+            self.__class__.__name__,
+            self.dom_part.attrib,
+            self.required_attrs
+        )
+        
self.children_access = _Children( + self.__class__.__name__, + self.dom_part, + self.children, + self.sections, + ) + + def __getattr__(self, name): + return getattr(self.children_access, name) + +class _SummaryNodes(_Element): + required_attrs = { + 'count': ('number', int), + } + +class _SummaryResources(_Element): + required_attrs = { + 'count': ('number', int), + } + +class _SummarySection(_Element): + sections = { + 'nodes': ('nodes_configured', _SummaryNodes), + 'resources': ('resources_configured', _SummaryResources), + } + +class _Node(_Element): + required_attrs = { + 'id': 'id', + 'name': 'name', + 'type': 'type', + 'online': ('online', is_true), + 'standby': ('standby', is_true), + 'standby_onfail': ('standby_onfail', is_true), + 'maintenance': ('maintenance', is_true), + 'pending': ('pending', is_true), + 'unclean': ('unclean', is_true), + 'shutdown': ('shutdown', is_true), + 'expected_up': ('expected_up', is_true), + 'is_dc': ('is_dc', is_true), + 'resources_running': ('resources_running', int), + } + +class _NodeSection(_Element): + children = { + 'nodes': ('node', _Node), + } + +def get_cluster_state_dom(xml): + try: + dom = xml_fromstring(xml) + if os.path.isfile(settings.crm_mon_schema): + etree.RelaxNG(file=settings.crm_mon_schema).assertValid(dom) + return dom + except (etree.XMLSyntaxError, etree.DocumentInvalid): + raise LibraryError(reports.cluster_state_invalid_format()) + +class ClusterState(_Element): + sections = { + 'summary': ('summary', _SummarySection), + 'node_section': ('nodes', _NodeSection), + } + + def __init__(self, xml): + self.dom = get_cluster_state_dom(xml) + super(ClusterState, self).__init__(self.dom) + +def _id_xpath_predicate(resource_id): + return """(@id="{0}" or starts-with(@id, "{0}:"))""".format(resource_id) + +def _get_primitives_for_state_check( + cluster_state, resource_id, expected_running +): + primitives = cluster_state.xpath(""" + .//resource[{predicate_id}] + | + .//group[{predicate_id}]/resource[{predicate_position}] + | + .//clone[@id="{id}"]/resource + | + .//clone[@id="{id}"]/group/resource[{predicate_position}] + | + .//bundle[@id="{id}"]/replica/resource + """.format( + id=resource_id, + predicate_id=_id_xpath_predicate(resource_id), + predicate_position=("last()" if expected_running else "1") + )) + return [ + element for element in primitives + if not is_true(element.attrib.get("failed", "")) + ] + +def _get_primitive_roles_with_nodes(primitive_el_list): + # Clone resources are represented by multiple primitive elements. 
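+    # The roles and the nodes they report are therefore merged below: for
+    # every reported role we accumulate the set of nodes running that role.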
+ roles_with_nodes = defaultdict(set) + for resource_element in primitive_el_list: + if resource_element.attrib["role"] in ["Started", "Master", "Slave"]: + roles_with_nodes[resource_element.attrib["role"]].update([ + node.attrib["name"] + for node in resource_element.findall(".//node") + ]) + return dict([ + (role, sorted(nodes)) + for role, nodes in roles_with_nodes.items() + ]) + +def info_resource_state(cluster_state, resource_id): + roles_with_nodes = _get_primitive_roles_with_nodes( + _get_primitives_for_state_check( + cluster_state, + resource_id, + expected_running=True + ) + ) + if not roles_with_nodes: + return reports.resource_does_not_run( + resource_id, + severities.INFO + ) + return reports.resource_running_on_nodes( + resource_id, + roles_with_nodes, + severities.INFO + ) + +def ensure_resource_state(expected_running, cluster_state, resource_id): + roles_with_nodes = _get_primitive_roles_with_nodes( + _get_primitives_for_state_check( + cluster_state, + resource_id, + expected_running + ) + ) + if not roles_with_nodes: + return reports.resource_does_not_run( + resource_id, + severities.INFO if not expected_running else severities.ERROR + ) + return reports.resource_running_on_nodes( + resource_id, + roles_with_nodes, + severities.INFO if expected_running else severities.ERROR + ) + +def ensure_resource_running(cluster_state, resource_id): + return ensure_resource_state( + expected_running=True, + cluster_state=cluster_state, + resource_id=resource_id, + ) + +def is_resource_managed(cluster_state, resource_id): + """ + Check if the resource is managed + + etree cluster_state -- status of the cluster + string resource_id -- id of the resource + """ + primitive_list = cluster_state.xpath(""" + .//resource[{predicate_id}] + | + .//group[{predicate_id}]/resource + """.format(predicate_id=_id_xpath_predicate(resource_id)) + ) + if primitive_list: + for primitive in primitive_list: + if is_false(primitive.attrib.get("managed", "")): + return False + parent = find_parent(primitive, ["clone", "bundle"]) + if ( + parent is not None + and + is_false(parent.attrib.get("managed", "")) + ): + return False + return True + + parent_list = cluster_state.xpath(""" + .//clone[@id="{0}"] + | + .//bundle[@id="{0}"] + """.format(resource_id) + ) + for parent in parent_list: + if is_false(parent.attrib.get("managed", "")): + return False + for primitive in parent.xpath(".//resource"): + if is_false(primitive.attrib.get("managed", "")): + return False + return True + + raise ResourceNotFound(resource_id) diff -Nru pcs-0.9.155+dfsg/pcs/lib/pacemaker/test/test_live.py pcs-0.9.159/pcs/lib/pacemaker/test/test_live.py --- pcs-0.9.155+dfsg/pcs/lib/pacemaker/test/test_live.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/pacemaker/test/test_live.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,993 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree +import os.path + +from pcs.test.tools.assertions import ( + assert_raise_library_error, + assert_xml_equal, + start_tag_error_text, +) +from pcs.test.tools.misc import get_test_resource as rc +from pcs.test.tools.pcs_unittest import TestCase, mock +from pcs.test.tools.xml import XmlManipulation + +from pcs import settings +from pcs.common import report_codes +import pcs.lib.pacemaker.live as lib +from pcs.lib.errors import ReportItemSeverity as Severity +from pcs.lib.external import CommandRunner + + +class LibraryPacemakerTest(TestCase): + def path(self, name): + return 
os.path.join(settings.pacemaker_binaries, name)
+
+    def crm_mon_cmd(self):
+        return [self.path("crm_mon"), "--one-shot", "--as-xml", "--inactive"]
+
+class LibraryPacemakerNodeStatusTest(LibraryPacemakerTest):
+    def setUp(self):
+        self.status = XmlManipulation.from_file(rc("crm_mon.minimal.xml"))
+
+    def fixture_get_node_status(self, node_name, node_id):
+        return {
+            "id": node_id,
+            "name": node_name,
+            "type": "member",
+            "online": True,
+            "standby": False,
+            "standby_onfail": False,
+            "maintenance": True,
+            "pending": True,
+            "unclean": False,
+            "shutdown": False,
+            "expected_up": True,
+            "is_dc": True,
+            "resources_running": 7,
+        }
+
+    def fixture_add_node_status(self, node_attrs):
+        xml_attrs = []
+        for name, value in node_attrs.items():
+            if value is True:
+                value = "true"
+            elif value is False:
+                value = "false"
+            xml_attrs.append('{0}="{1}"'.format(name, value))
+        node_xml = "<node {0} />".format(" ".join(xml_attrs))
+        self.status.append_to_first_tag_name("nodes", node_xml)
+
+class GetClusterStatusXmlTest(LibraryPacemakerTest):
+    def test_success(self):
+        expected_stdout = "<xml />"
+        expected_stderr = ""
+        expected_retval = 0
+        mock_runner = mock.MagicMock(spec_set=CommandRunner)
+        mock_runner.run.return_value = (
+            expected_stdout,
+            expected_stderr,
+            expected_retval
+        )
+
+        real_xml = lib.get_cluster_status_xml(mock_runner)
+
+        mock_runner.run.assert_called_once_with(self.crm_mon_cmd())
+        self.assertEqual(expected_stdout, real_xml)
+
+    def test_error(self):
+        expected_stdout = "some info"
+        expected_stderr = "some error"
+        expected_retval = 1
+        mock_runner = mock.MagicMock(spec_set=CommandRunner)
+        mock_runner.run.return_value = (
+            expected_stdout,
+            expected_stderr,
+            expected_retval
+        )
+
+        assert_raise_library_error(
+            lambda: lib.get_cluster_status_xml(mock_runner),
+            (
+                Severity.ERROR,
+                report_codes.CRM_MON_ERROR,
+                {
+                    "reason": expected_stderr + "\n" + expected_stdout,
+                }
+            )
+        )
+
+        mock_runner.run.assert_called_once_with(self.crm_mon_cmd())
+
+class GetCibXmlTest(LibraryPacemakerTest):
+    def test_success(self):
+        expected_stdout = "<xml />"
+        expected_stderr = ""
+        expected_retval = 0
+        mock_runner = mock.MagicMock(spec_set=CommandRunner)
+        mock_runner.run.return_value = (
+            expected_stdout,
+            expected_stderr,
+            expected_retval
+        )
+
+        real_xml = lib.get_cib_xml(mock_runner)
+
+        mock_runner.run.assert_called_once_with(
+            [self.path("cibadmin"), "--local", "--query"]
+        )
+        self.assertEqual(expected_stdout, real_xml)
+
+    def test_error(self):
+        expected_stdout = "some info"
+        expected_stderr = "some error"
+        expected_retval = 1
+        mock_runner = mock.MagicMock(spec_set=CommandRunner)
+        mock_runner.run.return_value = (
+            expected_stdout,
+            expected_stderr,
+            expected_retval
+        )
+
+        assert_raise_library_error(
+            lambda: lib.get_cib_xml(mock_runner),
+            (
+                Severity.ERROR,
+                report_codes.CIB_LOAD_ERROR,
+                {
+                    "reason": expected_stderr + "\n" + expected_stdout,
+                }
+            )
+        )
+
+        mock_runner.run.assert_called_once_with(
+            [self.path("cibadmin"), "--local", "--query"]
+        )
+
+    def test_success_scope(self):
+        expected_stdout = "<xml />"
+        expected_stderr = ""
+        expected_retval = 0
+        scope = "test_scope"
+        mock_runner = mock.MagicMock(spec_set=CommandRunner)
+        mock_runner.run.return_value = (
+            expected_stdout,
+            expected_stderr,
+            expected_retval
+        )
+
+        real_xml = lib.get_cib_xml(mock_runner, scope)
+
+        mock_runner.run.assert_called_once_with(
+            [
+                self.path("cibadmin"),
+                "--local", "--query", "--scope={0}".format(scope)
+            ]
+        )
+        self.assertEqual(expected_stdout, real_xml)
+
+    def test_scope_error(self):
+        
expected_stdout = "some info" + expected_stderr = "some error" + expected_retval = 6 + scope = "test_scope" + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ( + expected_stdout, + expected_stderr, + expected_retval + ) + + assert_raise_library_error( + lambda: lib.get_cib_xml(mock_runner, scope=scope), + ( + Severity.ERROR, + report_codes.CIB_LOAD_ERROR_SCOPE_MISSING, + { + "scope": scope, + "reason": expected_stderr + "\n" + expected_stdout, + } + ) + ) + + mock_runner.run.assert_called_once_with( + [ + self.path("cibadmin"), + "--local", "--query", "--scope={0}".format(scope) + ] + ) + +class GetCibTest(LibraryPacemakerTest): + def test_success(self): + xml = "" + assert_xml_equal(xml, str(XmlManipulation((lib.get_cib(xml))))) + + def test_invalid_xml(self): + xml = "" + assert_raise_library_error( + lambda: lib.get_cib(xml), + ( + Severity.ERROR, + report_codes.CIB_LOAD_ERROR_BAD_FORMAT, + { + } + ) + ) + +class ReplaceCibConfigurationTest(LibraryPacemakerTest): + def test_success(self): + xml = "" + expected_stdout = "expected output" + expected_stderr = "" + expected_retval = 0 + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ( + expected_stdout, + expected_stderr, + expected_retval + ) + + lib.replace_cib_configuration( + mock_runner, + XmlManipulation.from_str(xml).tree + ) + + mock_runner.run.assert_called_once_with( + [ + self.path("cibadmin"), "--replace", "--verbose", "--xml-pipe", + "--scope", "configuration" + ], + stdin_string=xml + ) + + def test_error(self): + xml = "" + expected_stdout = "expected output" + expected_stderr = "expected stderr" + expected_retval = 1 + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ( + expected_stdout, + expected_stderr, + expected_retval + ) + + assert_raise_library_error( + lambda: lib.replace_cib_configuration( + mock_runner, + XmlManipulation.from_str(xml).tree + ) + , + ( + Severity.ERROR, + report_codes.CIB_PUSH_ERROR, + { + "reason": expected_stderr, + "pushed_cib": expected_stdout, + } + ) + ) + + mock_runner.run.assert_called_once_with( + [ + self.path("cibadmin"), "--replace", "--verbose", "--xml-pipe", + "--scope", "configuration" + ], + stdin_string=xml + ) + +class UpgradeCibTest(TestCase): + def setUp(self): + self.mock_runner = mock.MagicMock(spec_set=CommandRunner) + + def test_success(self): + self.mock_runner.run.return_value = "", "", 0 + lib._upgrade_cib(self.mock_runner) + self.mock_runner.run.assert_called_once_with( + ["/usr/sbin/cibadmin", "--upgrade", "--force"] + ) + + def test_error(self): + error = "Call cib_upgrade failed (-62): Timer expired" + self.mock_runner.run.return_value = "", error, 62 + assert_raise_library_error( + lambda: lib._upgrade_cib(self.mock_runner), + ( + Severity.ERROR, + report_codes.CIB_UPGRADE_FAILED, + { + "reason": error, + } + ) + ) + self.mock_runner.run.assert_called_once_with( + ["/usr/sbin/cibadmin", "--upgrade", "--force"] + ) + + def test_already_at_latest_schema(self): + error = ("Call cib_upgrade failed (-211): Schema is already " + "the latest available") + self.mock_runner.run.return_value = "", error, 211 + lib._upgrade_cib(self.mock_runner) + self.mock_runner.run.assert_called_once_with( + ["/usr/sbin/cibadmin", "--upgrade", "--force"] + ) + +@mock.patch("pcs.lib.pacemaker.live.get_cib_xml") +@mock.patch("pcs.lib.pacemaker.live._upgrade_cib") +class EnsureCibVersionTest(TestCase): + def setUp(self): + self.mock_runner = mock.MagicMock(spec_set=CommandRunner) + 
self.cib = etree.XML('<cib validate-with="pacemaker-2.3.4"/>')
+
+    def test_same_version(self, mock_upgrade, mock_get_cib):
+        self.assertTrue(
+            lib.ensure_cib_version(
+                self.mock_runner, self.cib, (2, 3, 4)
+            ) is None
+        )
+        mock_upgrade.assert_not_called()
+        mock_get_cib.assert_not_called()
+
+    def test_higher_version(self, mock_upgrade, mock_get_cib):
+        self.assertTrue(
+            lib.ensure_cib_version(
+                self.mock_runner, self.cib, (2, 3, 3)
+            ) is None
+        )
+        mock_upgrade.assert_not_called()
+        mock_get_cib.assert_not_called()
+
+    def test_upgraded_same_version(self, mock_upgrade, mock_get_cib):
+        upgraded_cib = '<cib validate-with="pacemaker-2.3.5"/>'
+        mock_get_cib.return_value = upgraded_cib
+        assert_xml_equal(
+            upgraded_cib,
+            etree.tostring(
+                lib.ensure_cib_version(
+                    self.mock_runner, self.cib, (2, 3, 5)
+                )
+            ).decode()
+        )
+        mock_upgrade.assert_called_once_with(self.mock_runner)
+        mock_get_cib.assert_called_once_with(self.mock_runner)
+
+    def test_upgraded_higher_version(self, mock_upgrade, mock_get_cib):
+        upgraded_cib = '<cib validate-with="pacemaker-2.3.6"/>'
+        mock_get_cib.return_value = upgraded_cib
+        assert_xml_equal(
+            upgraded_cib,
+            etree.tostring(
+                lib.ensure_cib_version(
+                    self.mock_runner, self.cib, (2, 3, 5)
+                )
+            ).decode()
+        )
+        mock_upgrade.assert_called_once_with(self.mock_runner)
+        mock_get_cib.assert_called_once_with(self.mock_runner)
+
+    def test_upgraded_lower_version(self, mock_upgrade, mock_get_cib):
+        mock_get_cib.return_value = etree.tostring(self.cib).decode()
+        assert_raise_library_error(
+            lambda: lib.ensure_cib_version(
+                self.mock_runner, self.cib, (2, 3, 5)
+            ),
+            (
+                Severity.ERROR,
+                report_codes.CIB_UPGRADE_FAILED_TO_MINIMAL_REQUIRED_VERSION,
+                {
+                    "required_version": "2.3.5",
+                    "current_version": "2.3.4"
+                }
+            )
+        )
+        mock_upgrade.assert_called_once_with(self.mock_runner)
+        mock_get_cib.assert_called_once_with(self.mock_runner)
+
+    def test_cib_parse_error(self, mock_upgrade, mock_get_cib):
+        mock_get_cib.return_value = "not xml"
+        assert_raise_library_error(
+            lambda: lib.ensure_cib_version(
+                self.mock_runner, self.cib, (2, 3, 5)
+            ),
+            (
+                Severity.ERROR,
+                report_codes.CIB_UPGRADE_FAILED,
+                {
+                    "reason":
+                        start_tag_error_text(),
+                }
+            )
+        )
+        mock_upgrade.assert_called_once_with(self.mock_runner)
+        mock_get_cib.assert_called_once_with(self.mock_runner)
+
+class GetLocalNodeStatusTest(LibraryPacemakerNodeStatusTest):
+    def test_offline(self):
+        expected_stdout = "some info"
+        expected_stderr = "some error"
+        expected_retval = 1
+        mock_runner = mock.MagicMock(spec_set=CommandRunner)
+        mock_runner.run.return_value = (
+            expected_stdout,
+            expected_stderr,
+            expected_retval
+        )
+
+        self.assertEqual(
+            {"offline": True},
+            lib.get_local_node_status(mock_runner)
+        )
+        mock_runner.run.assert_called_once_with(self.crm_mon_cmd())
+
+    def test_invalid_status(self):
+        expected_stdout = "invalid xml"
+        expected_stderr = ""
+        expected_retval = 0
+        mock_runner = mock.MagicMock(spec_set=CommandRunner)
+        mock_runner.run.return_value = (
+            expected_stdout,
+            expected_stderr,
+            expected_retval
+        )
+
+        assert_raise_library_error(
+            lambda: lib.get_local_node_status(mock_runner),
+            (
+                Severity.ERROR,
+                report_codes.BAD_CLUSTER_STATE_FORMAT,
+                {}
+            )
+        )
+        mock_runner.run.assert_called_once_with(self.crm_mon_cmd())
+
+    def test_success(self):
+        node_id = "id_1"
+        node_name = "name_1"
+        node_status = self.fixture_get_node_status(node_name, node_id)
+        expected_status = dict(node_status, offline=False)
+        self.fixture_add_node_status(
+            self.fixture_get_node_status("name_2", "id_2")
+        )
+        self.fixture_add_node_status(node_status)
+        self.fixture_add_node_status(
+            
self.fixture_get_node_status("name_3", "id_3") + ) + + mock_runner = mock.MagicMock(spec_set=CommandRunner) + call_list = [ + mock.call(self.crm_mon_cmd()), + mock.call([self.path("crm_node"), "--cluster-id"]), + mock.call( + [self.path("crm_node"), "--name-for-id={0}".format(node_id)] + ), + ] + return_value_list = [ + (str(self.status), "", 0), + (node_id, "", 0), + (node_name, "", 0) + ] + mock_runner.run.side_effect = return_value_list + + real_status = lib.get_local_node_status(mock_runner) + + self.assertEqual(len(return_value_list), len(call_list)) + self.assertEqual(len(return_value_list), mock_runner.run.call_count) + mock_runner.run.assert_has_calls(call_list) + self.assertEqual(expected_status, real_status) + + def test_node_not_in_status(self): + node_id = "id_1" + node_name = "name_1" + node_name_bad = "name_X" + node_status = self.fixture_get_node_status(node_name, node_id) + self.fixture_add_node_status(node_status) + + mock_runner = mock.MagicMock(spec_set=CommandRunner) + call_list = [ + mock.call(self.crm_mon_cmd()), + mock.call([self.path("crm_node"), "--cluster-id"]), + mock.call( + [self.path("crm_node"), "--name-for-id={0}".format(node_id)] + ), + ] + return_value_list = [ + (str(self.status), "", 0), + (node_id, "", 0), + (node_name_bad, "", 0) + ] + mock_runner.run.side_effect = return_value_list + + assert_raise_library_error( + lambda: lib.get_local_node_status(mock_runner), + ( + Severity.ERROR, + report_codes.NODE_NOT_FOUND, + {"node": node_name_bad} + ) + ) + + self.assertEqual(len(return_value_list), len(call_list)) + self.assertEqual(len(return_value_list), mock_runner.run.call_count) + mock_runner.run.assert_has_calls(call_list) + + def test_error_1(self): + node_id = "id_1" + node_name = "name_1" + node_status = self.fixture_get_node_status(node_name, node_id) + self.fixture_add_node_status(node_status) + + mock_runner = mock.MagicMock(spec_set=CommandRunner) + call_list = [ + mock.call(self.crm_mon_cmd()), + mock.call([self.path("crm_node"), "--cluster-id"]), + ] + return_value_list = [ + (str(self.status), "", 0), + ("", "some error", 1), + ] + mock_runner.run.side_effect = return_value_list + + assert_raise_library_error( + lambda: lib.get_local_node_status(mock_runner), + ( + Severity.ERROR, + report_codes.PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND, + {"reason": "node id not found"} + ) + ) + + self.assertEqual(len(return_value_list), len(call_list)) + self.assertEqual(len(return_value_list), mock_runner.run.call_count) + mock_runner.run.assert_has_calls(call_list) + + def test_error_2(self): + node_id = "id_1" + node_name = "name_1" + node_status = self.fixture_get_node_status(node_name, node_id) + self.fixture_add_node_status(node_status) + + mock_runner = mock.MagicMock(spec_set=CommandRunner) + call_list = [ + mock.call(self.crm_mon_cmd()), + mock.call([self.path("crm_node"), "--cluster-id"]), + mock.call( + [self.path("crm_node"), "--name-for-id={0}".format(node_id)] + ), + ] + return_value_list = [ + (str(self.status), "", 0), + (node_id, "", 0), + ("", "some error", 1), + ] + mock_runner.run.side_effect = return_value_list + + assert_raise_library_error( + lambda: lib.get_local_node_status(mock_runner), + ( + Severity.ERROR, + report_codes.PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND, + {"reason": "node name not found"} + ) + ) + + self.assertEqual(len(return_value_list), len(call_list)) + self.assertEqual(len(return_value_list), mock_runner.run.call_count) + mock_runner.run.assert_has_calls(call_list) + + def test_error_3(self): + node_id = "id_1" + node_name = 
"name_1" + node_status = self.fixture_get_node_status(node_name, node_id) + self.fixture_add_node_status(node_status) + + mock_runner = mock.MagicMock(spec_set=CommandRunner) + call_list = [ + mock.call(self.crm_mon_cmd()), + mock.call([self.path("crm_node"), "--cluster-id"]), + mock.call( + [self.path("crm_node"), "--name-for-id={0}".format(node_id)] + ), + ] + return_value_list = [ + (str(self.status), "", 0), + (node_id, "", 0), + ("(null)", "", 0), + ] + mock_runner.run.side_effect = return_value_list + + assert_raise_library_error( + lambda: lib.get_local_node_status(mock_runner), + ( + Severity.ERROR, + report_codes.PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND, + {"reason": "node name is null"} + ) + ) + + self.assertEqual(len(return_value_list), len(call_list)) + self.assertEqual(len(return_value_list), mock_runner.run.call_count) + mock_runner.run.assert_has_calls(call_list) + +class RemoveNode(LibraryPacemakerTest): + def test_success(self): + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ("", "", 0) + lib.remove_node( + mock_runner, + "NODE_NAME" + ) + mock_runner.run.assert_called_once_with([ + self.path("crm_node"), + "--force", + "--remove", + "NODE_NAME", + ]) + + def test_error(self): + mock_runner = mock.MagicMock(spec_set=CommandRunner) + expected_stderr = "expected stderr" + mock_runner.run.return_value = ("", expected_stderr, 1) + assert_raise_library_error( + lambda: lib.remove_node(mock_runner, "NODE_NAME") , + ( + Severity.ERROR, + report_codes.NODE_REMOVE_IN_PACEMAKER_FAILED, + { + "node_name": "NODE_NAME", + "reason": expected_stderr, + } + ) + ) + +class ResourceCleanupTest(LibraryPacemakerTest): + def fixture_status_xml(self, nodes, resources): + xml_man = XmlManipulation.from_file(rc("crm_mon.minimal.xml")) + doc = xml_man.tree.getroottree() + doc.find("/summary/nodes_configured").set("number", str(nodes)) + doc.find("/summary/resources_configured").set("number", str(resources)) + return str(XmlManipulation(doc)) + + def test_basic(self): + expected_stdout = "expected output" + expected_stderr = "expected stderr" + mock_runner = mock.MagicMock(spec_set=CommandRunner) + call_list = [ + mock.call(self.crm_mon_cmd()), + mock.call([self.path("crm_resource"), "--cleanup"]), + ] + return_value_list = [ + (self.fixture_status_xml(1, 1), "", 0), + (expected_stdout, expected_stderr, 0), + ] + mock_runner.run.side_effect = return_value_list + + real_output = lib.resource_cleanup(mock_runner) + + self.assertEqual(len(return_value_list), len(call_list)) + self.assertEqual(len(return_value_list), mock_runner.run.call_count) + mock_runner.run.assert_has_calls(call_list) + self.assertEqual( + expected_stdout + "\n" + expected_stderr, + real_output + ) + + def test_threshold_exceeded(self): + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ( + self.fixture_status_xml(1000, 1000), + "", + 0 + ) + + assert_raise_library_error( + lambda: lib.resource_cleanup(mock_runner), + ( + Severity.ERROR, + report_codes.RESOURCE_CLEANUP_TOO_TIME_CONSUMING, + {"threshold": 100}, + report_codes.FORCE_LOAD_THRESHOLD + ) + ) + + mock_runner.run.assert_called_once_with(self.crm_mon_cmd()) + + def test_forced(self): + expected_stdout = "expected output" + expected_stderr = "expected stderr" + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = (expected_stdout, expected_stderr, 0) + + real_output = lib.resource_cleanup(mock_runner, force=True) + + mock_runner.run.assert_called_once_with( + 
[self.path("crm_resource"), "--cleanup"] + ) + self.assertEqual( + expected_stdout + "\n" + expected_stderr, + real_output + ) + + def test_resource(self): + resource = "test_resource" + expected_stdout = "expected output" + expected_stderr = "expected stderr" + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = (expected_stdout, expected_stderr, 0) + + real_output = lib.resource_cleanup(mock_runner, resource=resource) + + mock_runner.run.assert_called_once_with( + [self.path("crm_resource"), "--cleanup", "--resource", resource] + ) + self.assertEqual( + expected_stdout + "\n" + expected_stderr, + real_output + ) + + def test_node(self): + node = "test_node" + expected_stdout = "expected output" + expected_stderr = "expected stderr" + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = (expected_stdout, expected_stderr, 0) + + real_output = lib.resource_cleanup(mock_runner, node=node) + + mock_runner.run.assert_called_once_with( + [self.path("crm_resource"), "--cleanup", "--node", node] + ) + self.assertEqual( + expected_stdout + "\n" + expected_stderr, + real_output + ) + + def test_node_and_resource(self): + node = "test_node" + resource = "test_resource" + expected_stdout = "expected output" + expected_stderr = "expected stderr" + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = (expected_stdout, expected_stderr, 0) + + real_output = lib.resource_cleanup( + mock_runner, resource=resource, node=node + ) + + mock_runner.run.assert_called_once_with( + [ + self.path("crm_resource"), + "--cleanup", "--resource", resource, "--node", node + ] + ) + self.assertEqual( + expected_stdout + "\n" + expected_stderr, + real_output + ) + + def test_error_state(self): + expected_stdout = "some info" + expected_stderr = "some error" + expected_retval = 1 + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ( + expected_stdout, + expected_stderr, + expected_retval + ) + + assert_raise_library_error( + lambda: lib.resource_cleanup(mock_runner), + ( + Severity.ERROR, + report_codes.CRM_MON_ERROR, + { + "reason": expected_stderr + "\n" + expected_stdout, + } + ) + ) + + mock_runner.run.assert_called_once_with(self.crm_mon_cmd()) + + def test_error_cleanup(self): + expected_stdout = "some info" + expected_stderr = "some error" + expected_retval = 1 + mock_runner = mock.MagicMock(spec_set=CommandRunner) + call_list = [ + mock.call(self.crm_mon_cmd()), + mock.call([self.path("crm_resource"), "--cleanup"]), + ] + return_value_list = [ + (self.fixture_status_xml(1, 1), "", 0), + (expected_stdout, expected_stderr, expected_retval), + ] + mock_runner.run.side_effect = return_value_list + + assert_raise_library_error( + lambda: lib.resource_cleanup(mock_runner), + ( + Severity.ERROR, + report_codes.RESOURCE_CLEANUP_ERROR, + { + "reason": expected_stderr + "\n" + expected_stdout, + } + ) + ) + + self.assertEqual(len(return_value_list), len(call_list)) + self.assertEqual(len(return_value_list), mock_runner.run.call_count) + mock_runner.run.assert_has_calls(call_list) + +class ResourcesWaitingTest(LibraryPacemakerTest): + def test_has_support(self): + expected_stdout = "" + expected_stderr = "something --wait something else" + expected_retval = 1 + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ( + expected_stdout, + expected_stderr, + expected_retval + ) + + self.assertTrue( + lib.has_wait_for_idle_support(mock_runner) + ) + 
mock_runner.run.assert_called_once_with( + [self.path("crm_resource"), "-?"] + ) + + def test_has_support_stdout(self): + expected_stdout = "something --wait something else" + expected_stderr = "" + expected_retval = 1 + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ( + expected_stdout, + expected_stderr, + expected_retval + ) + + self.assertTrue( + lib.has_wait_for_idle_support(mock_runner) + ) + mock_runner.run.assert_called_once_with( + [self.path("crm_resource"), "-?"] + ) + + def test_doesnt_have_support(self): + expected_stdout = "something something else" + expected_stderr = "something something else" + expected_retval = 1 + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ( + expected_stdout, + expected_stderr, + expected_retval + ) + + self.assertFalse( + lib.has_wait_for_idle_support(mock_runner) + ) + mock_runner.run.assert_called_once_with( + [self.path("crm_resource"), "-?"] + ) + + @mock.patch( + "pcs.lib.pacemaker.live.has_wait_for_idle_support", + autospec=True + ) + def test_ensure_support_success(self, mock_obj): + mock_obj.return_value = True + self.assertEqual(None, lib.ensure_wait_for_idle_support(mock.Mock())) + + @mock.patch( + "pcs.lib.pacemaker.live.has_wait_for_idle_support", + autospec=True + ) + def test_ensure_support_error(self, mock_obj): + mock_obj.return_value = False + assert_raise_library_error( + lambda: lib.ensure_wait_for_idle_support(mock.Mock()), + ( + Severity.ERROR, + report_codes.WAIT_FOR_IDLE_NOT_SUPPORTED, + {} + ) + ) + + def test_wait_success(self): + expected_stdout = "expected output" + expected_stderr = "expected stderr" + expected_retval = 0 + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ( + expected_stdout, + expected_stderr, + expected_retval + ) + + self.assertEqual(None, lib.wait_for_idle(mock_runner)) + + mock_runner.run.assert_called_once_with( + [self.path("crm_resource"), "--wait"] + ) + + def test_wait_timeout_success(self): + expected_stdout = "expected output" + expected_stderr = "expected stderr" + expected_retval = 0 + timeout = 10 + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ( + expected_stdout, + expected_stderr, + expected_retval + ) + + self.assertEqual(None, lib.wait_for_idle(mock_runner, timeout)) + + mock_runner.run.assert_called_once_with( + [ + self.path("crm_resource"), + "--wait", "--timeout={0}".format(timeout) + ] + ) + + def test_wait_error(self): + expected_stdout = "some info" + expected_stderr = "some error" + expected_retval = 1 + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ( + expected_stdout, + expected_stderr, + expected_retval + ) + + assert_raise_library_error( + lambda: lib.wait_for_idle(mock_runner), + ( + Severity.ERROR, + report_codes.WAIT_FOR_IDLE_ERROR, + { + "reason": expected_stderr + "\n" + expected_stdout, + } + ) + ) + + mock_runner.run.assert_called_once_with( + [self.path("crm_resource"), "--wait"] + ) + + def test_wait_error_timeout(self): + expected_stdout = "some info" + expected_stderr = "some error" + expected_retval = 62 + mock_runner = mock.MagicMock(spec_set=CommandRunner) + mock_runner.run.return_value = ( + expected_stdout, + expected_stderr, + expected_retval + ) + + assert_raise_library_error( + lambda: lib.wait_for_idle(mock_runner), + ( + Severity.ERROR, + report_codes.WAIT_FOR_IDLE_TIMED_OUT, + { + "reason": expected_stderr + "\n" + expected_stdout, + } + ) + ) + + 
mock_runner.run.assert_called_once_with( + [self.path("crm_resource"), "--wait"] + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/pacemaker/test/test_state.py pcs-0.9.159/pcs/lib/pacemaker/test/test_state.py --- pcs-0.9.155+dfsg/pcs/lib/pacemaker/test/test_state.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/pacemaker/test/test_state.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,964 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from pcs.test.tools.pcs_unittest import TestCase, mock +from lxml import etree + +from pcs.test.tools.assertions import ( + assert_raise_library_error, + assert_report_item_equal, +) +from pcs.test.tools.misc import get_test_resource as rc +from pcs.test.tools.xml import get_xml_manipulation_creator_from_file +from pcs.lib.pacemaker import state +from pcs.lib.pacemaker.state import ( + ClusterState, + _Attrs, + _Children, +) + +from pcs.common import report_codes +from pcs.lib.errors import ReportItemSeverity as severities + +class AttrsTest(TestCase): + def test_get_declared_attr(self): + attrs = _Attrs('test', {'node-name': 'node1'}, {'name': 'node-name'}) + self.assertEqual('node1', attrs.name) + + def test_raises_on_undeclared_attribute(self): + attrs = _Attrs('test', {'node-name': 'node1'}, {}) + self.assertRaises(AttributeError, lambda: attrs.name) + + def test_raises_on_missing_required_attribute(self): + attrs = _Attrs('test', {}, {'name': 'node-name'}) + self.assertRaises(AttributeError, lambda: attrs.name) + + def test_attr_transformation_success(self): + attrs = _Attrs('test', {'number': '7'}, {'count': ('number', int)}) + self.assertEqual(7, attrs.count) + + def test_attr_transformation_fail(self): + attrs = _Attrs('test', {'number': 'abc'}, {'count': ('number', int)}) + self.assertRaises(ValueError, lambda: attrs.count) + +class ChildrenTest(TestCase): + def setUp(self): + self.dom = etree.fromstring( + '
' + ) + + def wrap(self, element): + return '{0}.{1}'.format(element.tag, element.attrib['name']) + + def test_get_declared_section(self): + children = _Children( + 'test', self.dom, {}, {'some_section': ('some', self.wrap)} + ) + self.assertEqual('some.0', children.some_section) + + def test_get_declared_children(self): + children = _Children('test', self.dom, {'anys': ('any', self.wrap)}, {}) + self.assertEqual(['any.1', 'any.2'], children.anys) + + def test_raises_on_undeclared_children(self): + children = _Children('test', self.dom, {}, {}) + self.assertRaises(AttributeError, lambda: children.some_section) + + +class TestBase(TestCase): + def setUp(self): + self.create_covered_status = get_xml_manipulation_creator_from_file( + rc('crm_mon.minimal.xml') + ) + self.covered_status = self.create_covered_status() + +class ClusterStatusTest(TestBase): + def test_minimal_crm_mon_is_valid(self): + ClusterState(str(self.covered_status)) + + def test_refuse_invalid_xml(self): + assert_raise_library_error( + lambda: ClusterState('invalid xml'), + (severities.ERROR, report_codes.BAD_CLUSTER_STATE_FORMAT, {}) + ) + + def test_refuse_invalid_document(self): + self.covered_status.append_to_first_tag_name( + 'nodes', + '' + ) + + assert_raise_library_error( + lambda: ClusterState(str(self.covered_status)), + (severities.ERROR, report_codes.BAD_CLUSTER_STATE_FORMAT, {}) + ) + + +class WorkWithClusterStatusNodesTest(TestBase): + def fixture_node_string(self, **kwargs): + attrs = dict(name='name', id='id', type='member') + attrs.update(kwargs) + return ''''''.format(**attrs) + + def test_can_get_node_names(self): + self.covered_status.append_to_first_tag_name( + 'nodes', + self.fixture_node_string(name='node1', id='1'), + self.fixture_node_string(name='node2', id='2'), + ) + xml = str(self.covered_status) + self.assertEqual( + ['node1', 'node2'], + [node.attrs.name for node in ClusterState(xml).node_section.nodes] + ) + + def test_can_filter_out_remote_nodes(self): + self.covered_status.append_to_first_tag_name( + 'nodes', + self.fixture_node_string(name='node1', id='1'), + self.fixture_node_string(name='node2', type='remote', id='2'), + ) + xml = str(self.covered_status) + self.assertEqual( + ['node1'], + [ + node.attrs.name + for node in ClusterState(xml).node_section.nodes + if node.attrs.type != 'remote' + ] + ) + + +class WorkWithClusterStatusSummaryTest(TestBase): + def test_nodes_count(self): + xml = str(self.covered_status) + self.assertEqual(0, ClusterState(xml).summary.nodes.attrs.count) + + def test_resources_count(self): + xml = str(self.covered_status) + self.assertEqual(0, ClusterState(xml).summary.resources.attrs.count) + + +class GetPrimitiveRolesWithNodes(TestCase): + def test_success(self): + primitives_xml = [ + """ + + + + """, + """ + + + + """, + """ + + + + """, + """ + + + + """, + """ + + + """, + """ + + + + """, + ] + primitives = [ + etree.fromstring(xml) for xml in primitives_xml + ] + + self.assertEqual( + state._get_primitive_roles_with_nodes(primitives), + { + "Started": ["node1", "node5"], + "Master": ["node2"], + "Slave": ["node3", "node4"] + } + ) + + def test_empty(self): + self.assertEqual( + state._get_primitive_roles_with_nodes([]), + { + } + ) + + +class GetPrimitivesForStateCheck(TestCase): + status_xml = etree.fromstring(""" + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + """) + 
+ def setUp(self): + self.status = etree.parse(rc("crm_mon.minimal.xml")).getroot() + self.status.append(self.status_xml) + for resource in self.status.xpath(".//resource"): + resource.attrib.update({ + "resource_agent": "ocf::pacemaker:Stateful", + "role": "Started", + "active": "true", + "orphaned": "false", + "blocked": "false", + "managed": "true", + "failure_ignored": "false", + "nodes_running_on": "1", + }) + + def assert_primitives(self, resource_id, primitive_ids, expected_running): + self.assertEqual( + [ + elem.attrib["id"] + for elem in state._get_primitives_for_state_check( + self.status, resource_id, expected_running + ) + ], + primitive_ids + ) + + def test_missing(self): + self.assert_primitives("Rxx", [], True) + self.assert_primitives("Rxx", [], False) + + def test_primitive(self): + self.assert_primitives("R01", ["R01"], True) + self.assert_primitives("R01", ["R01"], False) + + def test_primitive_failed(self): + self.assert_primitives("R02", [], True) + self.assert_primitives("R02", [], False) + + def test_group(self): + self.assert_primitives("G1", ["R04"], True) + self.assert_primitives("G1", ["R03"], False) + + def test_group_failed_primitive(self): + self.assert_primitives("G2", [], True) + self.assert_primitives("G2", [], False) + + def test_primitive_in_group(self): + self.assert_primitives("R03", ["R03"], True) + self.assert_primitives("R03", ["R03"], False) + + def test_primitive_in_group_failed(self): + self.assert_primitives("R05", [], True) + self.assert_primitives("R05", [], False) + + def test_clone(self): + self.assert_primitives("R07-clone", ["R07", "R07"], True) + self.assert_primitives("R07-clone", ["R07", "R07"], False) + self.assert_primitives("R10-clone", ["R10:0", "R10:1"], True) + self.assert_primitives("R10-clone", ["R10:0", "R10:1"], False) + + def test_clone_partially_failed(self): + self.assert_primitives("R08-clone", ["R08"], True) + self.assert_primitives("R08-clone", ["R08"], False) + self.assert_primitives("R11-clone", ["R11:0"], True) + self.assert_primitives("R11-clone", ["R11:0"], False) + + def test_clone_failed(self): + self.assert_primitives("R09-clone", [], True) + self.assert_primitives("R09-clone", [], False) + self.assert_primitives("R12-clone", [], True) + self.assert_primitives("R12-clone", [], False) + + def test_primitive_in_clone(self): + self.assert_primitives("R07", ["R07", "R07"], True) + self.assert_primitives("R07", ["R07", "R07"], False) + self.assert_primitives("R10", ["R10:0", "R10:1"], True) + self.assert_primitives("R10", ["R10:0", "R10:1"], False) + + def test_primitive_in_clone_partially_failed(self): + self.assert_primitives("R08", ["R08"], True) + self.assert_primitives("R08", ["R08"], False) + self.assert_primitives("R11", ["R11:0"], True) + self.assert_primitives("R11", ["R11:0"], False) + + def test_primitive_in_clone_failed(self): + self.assert_primitives("R09", [], True) + self.assert_primitives("R09", [], False) + self.assert_primitives("R12", [], True) + self.assert_primitives("R12", [], False) + + def test_clone_containing_group(self): + self.assert_primitives("G3-clone", ["R14", "R14"], True) + self.assert_primitives("G3-clone", ["R13", "R13"], False) + self.assert_primitives("G6-clone", ["R20:0", "R20:1"], True) + self.assert_primitives("G6-clone", ["R19:0", "R19:1"], False) + + def test_clone_containing_group_partially_failed(self): + self.assert_primitives("G4-clone", ["R16"], True) + self.assert_primitives("G4-clone", ["R15"], False) + self.assert_primitives("G7-clone", ["R22:1"], True) + 
self.assert_primitives("G7-clone", ["R21:1"], False) + + def test_clone_containing_group_failed(self): + self.assert_primitives("G5-clone", [], True) + self.assert_primitives("G5-clone", [], False) + self.assert_primitives("G8-clone", [], True) + self.assert_primitives("G8-clone", [], False) + + def test_group_in_clone_containing_group(self): + self.assert_primitives("G3", ["R14", "R14"], True) + self.assert_primitives("G3", ["R13", "R13"], False) + self.assert_primitives("G6", ["R20:0", "R20:1"], True) + self.assert_primitives("G6", ["R19:0", "R19:1"], False) + + def test_group_in_clone_containing_group_partially_failed(self): + self.assert_primitives("G4", ["R16"], True) + self.assert_primitives("G4", ["R15"], False) + self.assert_primitives("G7", ["R22:1"], True) + self.assert_primitives("G7", ["R21:1"], False) + + def test_group_in_clone_containing_group_failed(self): + self.assert_primitives("G5", [], True) + self.assert_primitives("G5", [], False) + self.assert_primitives("G8", [], True) + self.assert_primitives("G8", [], False) + + def test_primitive_in_clone_containing_group(self): + self.assert_primitives("R14", ["R14", "R14"], True) + self.assert_primitives("R14", ["R14", "R14"], False) + self.assert_primitives("R20", ["R20:0", "R20:1"], True) + self.assert_primitives("R20", ["R20:0", "R20:1"], False) + + def test_primitive_in_clone_containing_group_partially_failed(self): + self.assert_primitives("R16", ["R16"], True) + self.assert_primitives("R16", ["R16"], False) + self.assert_primitives("R22", ["R22:1"], True) + self.assert_primitives("R22", ["R22:1"], False) + + def test_primitive_in_clone_containing_group_failed(self): + self.assert_primitives("R18", [], True) + self.assert_primitives("R18", [], False) + self.assert_primitives("R24", [], True) + self.assert_primitives("R24", [], False) + + def test_bundle(self): + self.assert_primitives("B1", ["B1-R1", "B1-R2"], True) + self.assert_primitives("B1", ["B1-R1", "B1-R2"], False) + self.assert_primitives("B2", ["B2-R2", "B2-R1", "B2-R2"], True) + self.assert_primitives("B2", ["B2-R2", "B2-R1", "B2-R2"], False) + + def test_primitive_in_bundle(self): + self.assert_primitives("B1-R1", ["B1-R1"], True) + self.assert_primitives("B1-R1", ["B1-R1"], False) + self.assert_primitives("B2-R1", ["B2-R1"], True) + self.assert_primitives("B2-R1", ["B2-R1"], False) + self.assert_primitives("B2-R2", ["B2-R2", "B2-R2"], True) + self.assert_primitives("B2-R2", ["B2-R2", "B2-R2"], False) + + +class CommonResourceState(TestCase): + resource_id = "R" + def setUp(self): + self.cluster_state = "state" + + patcher_primitives = mock.patch( + "pcs.lib.pacemaker.state._get_primitives_for_state_check" + ) + self.addCleanup(patcher_primitives.stop) + self.get_primitives_for_state_check = patcher_primitives.start() + + patcher_roles = mock.patch( + "pcs.lib.pacemaker.state._get_primitive_roles_with_nodes" + ) + self.addCleanup(patcher_roles.stop) + self.get_primitive_roles_with_nodes = patcher_roles.start() + + def fixture_running_state_info(self): + return { + "Started": ["node1"], + "Master": ["node2"], + "Slave": ["node3", "node4"], + } + + def fixture_running_report(self, severity): + return (severity, report_codes.RESOURCE_RUNNING_ON_NODES, { + "resource_id": self.resource_id, + "roles_with_nodes": self.fixture_running_state_info(), + }) + + def fixture_not_running_report(self, severity): + return (severity, report_codes.RESOURCE_DOES_NOT_RUN, { + "resource_id": self.resource_id + }) + + +class EnsureResourceState(CommonResourceState): + def 
assert_running_info_transform(self, run_info, report, expected_running): + self.get_primitives_for_state_check.return_value = ["elem1", "elem2"] + self.get_primitive_roles_with_nodes.return_value = run_info + assert_report_item_equal( + state.ensure_resource_state( + expected_running, + self.cluster_state, + self.resource_id + ), + report + ) + self.get_primitives_for_state_check.assert_called_once_with( + self.cluster_state, + self.resource_id, + expected_running + ) + self.get_primitive_roles_with_nodes.assert_called_once_with( + ["elem1", "elem2"] + ) + + def test_report_info_running(self): + self.assert_running_info_transform( + self.fixture_running_state_info(), + self.fixture_running_report(severities.INFO), + expected_running=True, + ) + + def test_report_error_running(self): + self.assert_running_info_transform( + self.fixture_running_state_info(), + self.fixture_running_report(severities.ERROR), + expected_running=False, + ) + + def test_report_error_not_running(self): + self.assert_running_info_transform( + [], + self.fixture_not_running_report(severities.ERROR), + expected_running=True, + ) + + def test_report_info_not_running(self): + self.assert_running_info_transform( + [], + self.fixture_not_running_report(severities.INFO), + expected_running=False, + ) + + +class InfoResourceState(CommonResourceState): + def assert_running_info_transform(self, run_info, report): + self.get_primitives_for_state_check.return_value = ["elem1", "elem2"] + self.get_primitive_roles_with_nodes.return_value = run_info + assert_report_item_equal( + state.info_resource_state(self.cluster_state, self.resource_id), + report + ) + self.get_primitives_for_state_check.assert_called_once_with( + self.cluster_state, + self.resource_id, + expected_running=True + ) + self.get_primitive_roles_with_nodes.assert_called_once_with( + ["elem1", "elem2"] + ) + + def test_report_info_running(self): + self.assert_running_info_transform( + self.fixture_running_state_info(), + self.fixture_running_report(severities.INFO) + ) + def test_report_info_not_running(self): + self.assert_running_info_transform( + [], + self.fixture_not_running_report(severities.INFO) + ) + + +class IsResourceManaged(TestCase): + status_xml = etree.fromstring(""" + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + """) + + def setUp(self): + self.status = etree.parse(rc("crm_mon.minimal.xml")).getroot() + self.status.append(self.status_xml) + for resource in self.status.xpath(".//resource"): + resource.attrib.update({ + "resource_agent": "ocf::pacemaker:Stateful", + "role": "Started", + "active": "true", + "orphaned": "false", + "blocked": "false", + "failed": "false", + "failure_ignored": "false", + "nodes_running_on": "1", + }) + + def assert_managed(self, resource, managed): + self.assertEqual( + managed, + state.is_resource_managed(self.status, resource) + ) + + def test_missing(self): + self.assertRaises( + state.ResourceNotFound, + self.assert_managed, "Rxx", True + ) + + def test_primitive(self): + self.assert_managed("R01", True) + self.assert_managed("R02", False) + + def test_group(self): + self.assert_managed("G1", True) + self.assert_managed("G2", False) + 
self.assert_managed("G3", False) + self.assert_managed("G4", False) + + def test_primitive_in_group(self): + self.assert_managed("R03", True) + self.assert_managed("R04", True) + self.assert_managed("R05", False) + self.assert_managed("R06", True) + self.assert_managed("R07", True) + self.assert_managed("R08", False) + self.assert_managed("R09", False) + self.assert_managed("R10", False) + + def test_clone(self): + self.assert_managed("R11-clone", True) + self.assert_managed("R12-clone", False) + self.assert_managed("R13-clone", False) + self.assert_managed("R14-clone", False) + + self.assert_managed("R15-clone", True) + self.assert_managed("R16-clone", False) + self.assert_managed("R17-clone", False) + self.assert_managed("R18-clone", False) + + def test_primitive_in_clone(self): + self.assert_managed("R11", True) + self.assert_managed("R12", False) + self.assert_managed("R13", False) + self.assert_managed("R14", False) + + def test_primitive_in_unique_clone(self): + self.assert_managed("R15", True) + self.assert_managed("R16", False) + self.assert_managed("R17", False) + self.assert_managed("R18", False) + + def test_clone_containing_group(self): + self.assert_managed("G5-clone", True) + self.assert_managed("G6-clone", False) + self.assert_managed("G7-clone", False) + self.assert_managed("G8-clone", False) + self.assert_managed("G9-clone", False) + + self.assert_managed("G10-clone", True) + self.assert_managed("G11-clone", False) + self.assert_managed("G12-clone", False) + self.assert_managed("G13-clone", False) + self.assert_managed("G14-clone", False) + + def test_group_in_clone(self): + self.assert_managed("G5", True) + self.assert_managed("G6", False) + self.assert_managed("G7", False) + self.assert_managed("G8", False) + self.assert_managed("G9", False) + + def test_group_in_unique_clone(self): + self.assert_managed("G10", True) + self.assert_managed("G11", False) + self.assert_managed("G12", False) + self.assert_managed("G13", False) + self.assert_managed("G14", False) + + def test_primitive_in_group_in_clone(self): + self.assert_managed("R19", True) + self.assert_managed("R20", True) + self.assert_managed("R21", False) + self.assert_managed("R22", False) + self.assert_managed("R23", False) + self.assert_managed("R24", True) + self.assert_managed("R25", True) + self.assert_managed("R26", False) + self.assert_managed("R27", False) + self.assert_managed("R28", False) + + def test_primitive_in_group_in_unique_clone(self): + self.assert_managed("R29", True) + self.assert_managed("R30", True) + self.assert_managed("R31", False) + self.assert_managed("R32", False) + self.assert_managed("R33", False) + self.assert_managed("R34", True) + self.assert_managed("R35", True) + self.assert_managed("R36", False) + self.assert_managed("R37", False) + self.assert_managed("R38", False) + + def test_bundle(self): + self.assert_managed("B1", True) + self.assert_managed("B2", False) + self.assert_managed("B3", True) + self.assert_managed("B4", False) + self.assert_managed("B5", False) + self.assert_managed("B6", False) + self.assert_managed("B7", False) + + def test_primitive_in_bundle(self): + self.assert_managed("R39", True) + self.assert_managed("R40", True) + self.assert_managed("R41", False) + self.assert_managed("R42", False) + self.assert_managed("R43", False) + self.assert_managed("R44", True) + self.assert_managed("R45", True) + self.assert_managed("R46", False) + self.assert_managed("R47", False) + self.assert_managed("R48", False) diff -Nru 
pcs-0.9.155+dfsg/pcs/lib/pacemaker/test/test_values.py pcs-0.9.159/pcs/lib/pacemaker/test/test_values.py --- pcs-0.9.155+dfsg/pcs/lib/pacemaker/test/test_values.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/pacemaker/test/test_values.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,299 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from pcs.test.tools.pcs_unittest import TestCase + +from pcs.test.tools.assertions import assert_raise_library_error + +from pcs.common import report_codes +from pcs.lib.errors import ReportItemSeverity as severity + +import pcs.lib.pacemaker.values as lib + + +class BooleanTest(TestCase): + def test_true_is_true(self): + self.assertTrue(lib.is_true("true")) + self.assertTrue(lib.is_true("tRue")) + self.assertTrue(lib.is_true("on")) + self.assertTrue(lib.is_true("ON")) + self.assertTrue(lib.is_true("yes")) + self.assertTrue(lib.is_true("yeS")) + self.assertTrue(lib.is_true("y")) + self.assertTrue(lib.is_true("Y")) + self.assertTrue(lib.is_true("1")) + + def test_nontrue_is_not_true(self): + self.assertFalse(lib.is_true("")) + self.assertFalse(lib.is_true(" 1 ")) + self.assertFalse(lib.is_true("a")) + self.assertFalse(lib.is_true("2")) + self.assertFalse(lib.is_true("10")) + self.assertFalse(lib.is_true("yes please")) + + def test_true_is_boolean(self): + self.assertTrue(lib.is_boolean("true")) + self.assertTrue(lib.is_boolean("tRue")) + self.assertTrue(lib.is_boolean("on")) + self.assertTrue(lib.is_boolean("ON")) + self.assertTrue(lib.is_boolean("yes")) + self.assertTrue(lib.is_boolean("yeS")) + self.assertTrue(lib.is_boolean("y")) + self.assertTrue(lib.is_boolean("Y")) + self.assertTrue(lib.is_boolean("1")) + + def test_false_is_false(self): + self.assertTrue(lib.is_false("false")) + self.assertTrue(lib.is_false("faLse")) + self.assertTrue(lib.is_false("off")) + self.assertTrue(lib.is_false("OFF")) + self.assertTrue(lib.is_false("no")) + self.assertTrue(lib.is_false("nO")) + self.assertTrue(lib.is_false("n")) + self.assertTrue(lib.is_false("N")) + self.assertTrue(lib.is_false("0")) + + def test_nonfalse_is_not_false(self): + self.assertFalse(lib.is_false("")) + self.assertFalse(lib.is_false(" 0 ")) + self.assertFalse(lib.is_false("x")) + self.assertFalse(lib.is_false("-1")) + self.assertFalse(lib.is_false("10")) + self.assertFalse(lib.is_false("heck no")) + + def test_false_is_boolean(self): + self.assertTrue(lib.is_boolean("false")) + self.assertTrue(lib.is_boolean("fAlse")) + self.assertTrue(lib.is_boolean("off")) + self.assertTrue(lib.is_boolean("oFf")) + self.assertTrue(lib.is_boolean("no")) + self.assertTrue(lib.is_boolean("nO")) + self.assertTrue(lib.is_boolean("n")) + self.assertTrue(lib.is_boolean("N")) + self.assertTrue(lib.is_boolean("0")) + + def test_nonboolean_is_not_boolean(self): + self.assertFalse(lib.is_boolean("")) + self.assertFalse(lib.is_boolean("a")) + self.assertFalse(lib.is_boolean("2")) + self.assertFalse(lib.is_boolean("10")) + self.assertFalse(lib.is_boolean("yes please")) + self.assertFalse(lib.is_boolean(" y")) + self.assertFalse(lib.is_boolean("n ")) + self.assertFalse(lib.is_boolean("NO!")) + + +class TimeoutTest(TestCase): + def test_valid(self): + self.assertEqual(10, lib.timeout_to_seconds(10)) + self.assertEqual(10, lib.timeout_to_seconds("10")) + self.assertEqual(10, lib.timeout_to_seconds("10s")) + self.assertEqual(10, lib.timeout_to_seconds("10sec")) + self.assertEqual(600, lib.timeout_to_seconds("10m")) + self.assertEqual(600, 
lib.timeout_to_seconds("10min")) + self.assertEqual(36000, lib.timeout_to_seconds("10h")) + self.assertEqual(36000, lib.timeout_to_seconds("10hr")) + + def test_invalid(self): + self.assertEqual(None, lib.timeout_to_seconds(-10)) + self.assertEqual(None, lib.timeout_to_seconds("1a1s")) + self.assertEqual(None, lib.timeout_to_seconds("10mm")) + self.assertEqual(None, lib.timeout_to_seconds("10mim")) + self.assertEqual(None, lib.timeout_to_seconds("aaa")) + self.assertEqual(None, lib.timeout_to_seconds("")) + + self.assertEqual(-10, lib.timeout_to_seconds(-10, True)) + self.assertEqual("1a1s", lib.timeout_to_seconds("1a1s", True)) + self.assertEqual("10mm", lib.timeout_to_seconds("10mm", True)) + self.assertEqual("10mim", lib.timeout_to_seconds("10mim", True)) + self.assertEqual("aaa", lib.timeout_to_seconds("aaa", True)) + self.assertEqual("", lib.timeout_to_seconds("", True)) + + +class ValidateIdTest(TestCase): + def test_valid(self): + self.assertEqual(None, lib.validate_id("dummy")) + self.assertEqual(None, lib.validate_id("DUMMY")) + self.assertEqual(None, lib.validate_id("dUmMy")) + self.assertEqual(None, lib.validate_id("dummy0")) + self.assertEqual(None, lib.validate_id("dum0my")) + self.assertEqual(None, lib.validate_id("dummy-")) + self.assertEqual(None, lib.validate_id("dum-my")) + self.assertEqual(None, lib.validate_id("dummy.")) + self.assertEqual(None, lib.validate_id("dum.my")) + self.assertEqual(None, lib.validate_id("_dummy")) + self.assertEqual(None, lib.validate_id("dummy_")) + self.assertEqual(None, lib.validate_id("dum_my")) + + def test_invalid_empty(self): + assert_raise_library_error( + lambda: lib.validate_id("", "test id"), + ( + severity.ERROR, + report_codes.EMPTY_ID, + { + "id": "", + "id_description": "test id", + } + ) + ) + + def test_invalid_first_character(self): + desc = "test id" + info = { + "id": "", + "id_description": desc, + "invalid_character": "", + "is_first_char": True, + } + report = (severity.ERROR, report_codes.INVALID_ID, info) + + info["id"] = "0" + info["invalid_character"] = "0" + assert_raise_library_error( + lambda: lib.validate_id("0", desc), + report + ) + + info["id"] = "-" + info["invalid_character"] = "-" + assert_raise_library_error( + lambda: lib.validate_id("-", desc), + report + ) + + info["id"] = "." + info["invalid_character"] = "." + assert_raise_library_error( + lambda: lib.validate_id(".", desc), + report + ) + + info["id"] = ":" + info["invalid_character"] = ":" + assert_raise_library_error( + lambda: lib.validate_id(":", desc), + report + ) + + info["id"] = "0dummy" + info["invalid_character"] = "0" + assert_raise_library_error( + lambda: lib.validate_id("0dummy", desc), + report + ) + + info["id"] = "-dummy" + info["invalid_character"] = "-" + assert_raise_library_error( + lambda: lib.validate_id("-dummy", desc), + report + ) + + info["id"] = ".dummy" + info["invalid_character"] = "." 
+ assert_raise_library_error( + lambda: lib.validate_id(".dummy", desc), + report + ) + + info["id"] = ":dummy" + info["invalid_character"] = ":" + assert_raise_library_error( + lambda: lib.validate_id(":dummy", desc), + report + ) + + def test_invalid_character(self): + desc = "test id" + info = { + "id": "", + "id_description": desc, + "invalid_character": "", + "is_first_char": False, + } + report = (severity.ERROR, report_codes.INVALID_ID, info) + + info["id"] = "dum:my" + info["invalid_character"] = ":" + assert_raise_library_error( + lambda: lib.validate_id("dum:my", desc), + report + ) + + info["id"] = "dummy:" + info["invalid_character"] = ":" + assert_raise_library_error( + lambda: lib.validate_id("dummy:", desc), + report + ) + + info["id"] = "dum?my" + info["invalid_character"] = "?" + assert_raise_library_error( + lambda: lib.validate_id("dum?my", desc), + report + ) + + info["id"] = "dummy?" + info["invalid_character"] = "?" + assert_raise_library_error( + lambda: lib.validate_id("dummy?", desc), + report + ) + +class SanitizeId(TestCase): + def test_dont_change_valid_id(self): + self.assertEqual("d", lib.sanitize_id("d")) + self.assertEqual("dummy", lib.sanitize_id("dummy")) + self.assertEqual("dum0my", lib.sanitize_id("dum0my")) + self.assertEqual("dum-my", lib.sanitize_id("dum-my")) + self.assertEqual("dum.my", lib.sanitize_id("dum.my")) + self.assertEqual("dum_my", lib.sanitize_id("dum_my")) + self.assertEqual("_dummy", lib.sanitize_id("_dummy")) + + def test_empty(self): + self.assertEqual("", lib.sanitize_id("")) + + def test_invalid_id(self): + self.assertEqual("", lib.sanitize_id("0")) + self.assertEqual("", lib.sanitize_id("-")) + self.assertEqual("", lib.sanitize_id(".")) + self.assertEqual("", lib.sanitize_id(":", "_")) + + self.assertEqual("dummy", lib.sanitize_id("0dummy")) + self.assertEqual("dummy", lib.sanitize_id("-dummy")) + self.assertEqual("dummy", lib.sanitize_id(".dummy")) + self.assertEqual("dummy", lib.sanitize_id(":dummy", "_")) + + self.assertEqual("dummy", lib.sanitize_id("dum:my")) + self.assertEqual("dum_my", lib.sanitize_id("dum:my", "_")) + +class IsScoreValueTest(TestCase): + def test_returns_true_for_number(self): + self.assertTrue(lib.is_score("1")) + + def test_returns_true_for_minus_number(self): + self.assertTrue(lib.is_score("-1")) + + def test_returns_true_for_plus_number(self): + self.assertTrue(lib.is_score("+1")) + + def test_returns_true_for_infinity(self): + self.assertTrue(lib.is_score("INFINITY")) + + def test_returns_true_for_minus_infinity(self): + self.assertTrue(lib.is_score("-INFINITY")) + + def test_returns_true_for_plus_infinity(self): + self.assertTrue(lib.is_score("+INFINITY")) + + def test_returns_false_for_nonumber_noinfinity(self): + self.assertFalse(lib.is_score("something else")) + + def test_returns_false_for_multiple_operators(self): + self.assertFalse(lib.is_score("++INFINITY")) diff -Nru pcs-0.9.155+dfsg/pcs/lib/pacemaker/values.py pcs-0.9.159/pcs/lib/pacemaker/values.py --- pcs-0.9.155+dfsg/pcs/lib/pacemaker/values.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/pacemaker/values.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,135 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +import re + +from pcs.lib import reports +from pcs.lib.errors import LibraryError + + +_BOOLEAN_TRUE = frozenset(["true", "on", "yes", "y", "1"]) +_BOOLEAN_FALSE = frozenset(["false", "off", "no", "n", "0"]) +_BOOLEAN = _BOOLEAN_TRUE | _BOOLEAN_FALSE 
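# Editorial sketch, not part of the upstream patch: how the helpers defined
# below in this module behave, with expected values taken from
# pcs/lib/pacemaker/test/test_values.py above.
#
#     from pcs.lib.pacemaker.values import (
#         is_true, is_boolean, timeout_to_seconds, sanitize_id,
#     )
#     is_true("yeS")                     # True - case-insensitive
#     is_boolean("off")                  # True - "off" is a valid false value
#     is_boolean("10")                   # False - only "0"/"1" among digits
#     timeout_to_seconds("10min")        # 600
#     timeout_to_seconds("10mim")        # None - unknown suffix
#     timeout_to_seconds("10mim", True)  # "10mim" - returned unchanged
#     sanitize_id("dum:my", "_")         # "dum_my"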
+_ID_FIRST_CHAR_NOT_RE = re.compile("[^a-zA-Z_]") +_ID_REST_CHARS_NOT_RE = re.compile("[^a-zA-Z0-9_.-]") +SCORE_INFINITY = "INFINITY" + +
+def is_boolean(val): + """ + Does pacemaker consider a value to be a boolean? + See crm_is_true in pacemaker/lib/common/utils.c + val checked value + """ + return val.lower() in _BOOLEAN +
+def is_true(val): + """ + Does pacemaker consider a value to be true? + See crm_is_true in pacemaker/lib/common/utils.c + val checked value + """ + return val.lower() in _BOOLEAN_TRUE +
+def is_false(val): + """ + Does pacemaker consider a value to be false? + See crm_is_true in pacemaker/lib/common/utils.c + val checked value + """ + return val.lower() in _BOOLEAN_FALSE +
+def is_score(value): + # a score is an optionally signed integer or INFINITY + if not value: + return False + unsigned_value = value[1:] if value[0] in ("+", "-") else value + return unsigned_value == SCORE_INFINITY or unsigned_value.isdigit() +
+def timeout_to_seconds(timeout, return_unknown=False): + """ + Transform pacemaker style timeout to number of seconds + timeout timeout string + return_unknown if timeout is not valid then return None on False or timeout + on True (default False) + """ + try: + candidate = int(timeout) + if candidate >= 0: + return candidate + return timeout if return_unknown else None + except ValueError: + pass + # now we know the timeout is not an integer nor an integer string + suffix_multiplier = { + "s": 1, + "sec": 1, + "m": 60, + "min": 60, + "h": 3600, + "hr": 3600, + } + for suffix, multiplier in suffix_multiplier.items(): + if timeout.endswith(suffix) and timeout[:-len(suffix)].isdigit(): + return int(timeout[:-len(suffix)]) * multiplier + return timeout if return_unknown else None +
+def get_valid_timeout_seconds(timeout_candidate): + """ + Transform pacemaker style timeout to number of seconds, raise LibraryError + on invalid timeout + timeout_candidate timeout string or None + """ + if timeout_candidate is None: + return None + wait_timeout = timeout_to_seconds(timeout_candidate) + if wait_timeout is None: + raise LibraryError(reports.invalid_timeout(timeout_candidate)) + return wait_timeout +
+def validate_id(id_candidate, description="id", reporter=None): + """ + Validate a pacemaker id, raise LibraryError on invalid id. 
+ + id_candidate id's value + description id's role description (default "id") + """ + # see NCName definition + # http://www.w3.org/TR/REC-xml-names/#NT-NCName + # http://www.w3.org/TR/REC-xml/#NT-Name + if len(id_candidate) < 1: + report = reports.invalid_id_is_empty(id_candidate, description) + if reporter is not None: + # we check for None so it works with an empty list as well + reporter.append(report) + return + else: + raise LibraryError(report) + if _ID_FIRST_CHAR_NOT_RE.match(id_candidate[0]): + report = reports.invalid_id_bad_char( + id_candidate, description, id_candidate[0], True + ) + if reporter is not None: + reporter.append(report) + else: + raise LibraryError(report) + for char in id_candidate[1:]: + if _ID_REST_CHARS_NOT_RE.match(char): + report = reports.invalid_id_bad_char( + id_candidate, description, char, False + ) + if reporter is not None: + reporter.append(report) + else: + raise LibraryError(report) + +def sanitize_id(id_candidate, replacement=""): + if not id_candidate: + return id_candidate + return "".join([ + "" if _ID_FIRST_CHAR_NOT_RE.match(id_candidate[0]) else id_candidate[0], + _ID_REST_CHARS_NOT_RE.sub(replacement, id_candidate[1:]) + ]) diff -Nru pcs-0.9.155+dfsg/pcs/lib/pacemaker.py pcs-0.9.159/pcs/lib/pacemaker.py --- pcs-0.9.155+dfsg/pcs/lib/pacemaker.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/pacemaker.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,237 +0,0 @@ -from __future__ import ( - absolute_import, - division, - print_function, - unicode_literals, -) - -import os.path -from lxml import etree - -from pcs import settings -from pcs.common.tools import join_multilines -from pcs.lib import reports -from pcs.lib.errors import LibraryError -from pcs.lib.pacemaker_state import ClusterState - - -__EXITCODE_WAIT_TIMEOUT = 62 -__EXITCODE_CIB_SCOPE_VALID_BUT_NOT_PRESENT = 6 -__RESOURCE_CLEANUP_OPERATION_COUNT_THRESHOLD = 100 - -class CrmMonErrorException(LibraryError): - pass - -# syntactic sugar for getting a full path to a pacemaker executable -def __exec(name): - return os.path.join(settings.pacemaker_binaries, name) - -def get_cluster_status_xml(runner): - stdout, stderr, retval = runner.run( - [__exec("crm_mon"), "--one-shot", "--as-xml", "--inactive"] - ) - if retval != 0: - raise CrmMonErrorException( - reports.cluster_state_cannot_load(join_multilines([stderr, stdout])) - ) - return stdout - -def get_cib_xml(runner, scope=None): - command = [__exec("cibadmin"), "--local", "--query"] - if scope: - command.append("--scope={0}".format(scope)) - stdout, stderr, retval = runner.run(command) - if retval != 0: - if retval == __EXITCODE_CIB_SCOPE_VALID_BUT_NOT_PRESENT and scope: - raise LibraryError( - reports.cib_load_error_scope_missing( - scope, - join_multilines([stderr, stdout]) - ) - ) - else: - raise LibraryError( - reports.cib_load_error(join_multilines([stderr, stdout])) - ) - return stdout - -def get_cib(xml): - try: - return etree.fromstring(xml) - except (etree.XMLSyntaxError, etree.DocumentInvalid): - raise LibraryError(reports.cib_load_error_invalid_format()) - -def replace_cib_configuration_xml(runner, xml, cib_upgraded=False): - cmd = [__exec("cibadmin"), "--replace", "--verbose", "--xml-pipe"] - if not cib_upgraded: - cmd += ["--scope", "configuration"] - stdout, stderr, retval = runner.run(cmd, stdin_string=xml) - if retval != 0: - raise LibraryError(reports.cib_push_error(stderr, stdout)) - -def replace_cib_configuration(runner, tree, cib_upgraded=False): - #etree returns bytes: b'xml' - #python 3 removed .encode() 
from bytes - #run(...) calls subprocess.Popen.communicate which calls encode... - #so here is bytes to str conversion - xml = etree.tostring(tree).decode() - return replace_cib_configuration_xml(runner, xml, cib_upgraded) - -def get_local_node_status(runner): - try: - cluster_status = ClusterState(get_cluster_status_xml(runner)) - except CrmMonErrorException: - return {"offline": True} - node_name = __get_local_node_name(runner) - for node_status in cluster_status.node_section.nodes: - if node_status.attrs.name == node_name: - result = { - "offline": False, - } - for attr in ( - 'id', 'name', 'type', 'online', 'standby', 'standby_onfail', - 'maintenance', 'pending', 'unclean', 'shutdown', 'expected_up', - 'is_dc', 'resources_running', - ): - result[attr] = getattr(node_status.attrs, attr) - return result - raise LibraryError(reports.node_not_found(node_name)) - -def resource_cleanup(runner, resource=None, node=None, force=False): - if not force and not node and not resource: - summary = ClusterState(get_cluster_status_xml(runner)).summary - operations = summary.nodes.attrs.count * summary.resources.attrs.count - if operations > __RESOURCE_CLEANUP_OPERATION_COUNT_THRESHOLD: - raise LibraryError( - reports.resource_cleanup_too_time_consuming( - __RESOURCE_CLEANUP_OPERATION_COUNT_THRESHOLD - ) - ) - - cmd = [__exec("crm_resource"), "--cleanup"] - if resource: - cmd.extend(["--resource", resource]) - if node: - cmd.extend(["--node", node]) - - stdout, stderr, retval = runner.run(cmd) - - if retval != 0: - raise LibraryError( - reports.resource_cleanup_error( - join_multilines([stderr, stdout]), - resource, - node - ) - ) - # usefull output (what has been done) goes to stderr - return join_multilines([stdout, stderr]) - -def nodes_standby(runner, node_list=None, all_nodes=False): - return __nodes_standby_unstandby(runner, True, node_list, all_nodes) - -def nodes_unstandby(runner, node_list=None, all_nodes=False): - return __nodes_standby_unstandby(runner, False, node_list, all_nodes) - -def has_resource_wait_support(runner): - # returns 1 on success so we don't care about retval - stdout, stderr, dummy_retval = runner.run( - [__exec("crm_resource"), "-?"] - ) - # help goes to stderr but we check stdout as well if that gets changed - return "--wait" in stderr or "--wait" in stdout - -def ensure_resource_wait_support(runner): - if not has_resource_wait_support(runner): - raise LibraryError(reports.resource_wait_not_supported()) - -def wait_for_resources(runner, timeout=None): - args = [__exec("crm_resource"), "--wait"] - if timeout is not None: - args.append("--timeout={0}".format(timeout)) - stdout, stderr, retval = runner.run(args) - if retval != 0: - # Usefull info goes to stderr - not only error messages, a list of - # pending actions in case of timeout goes there as well. - # We use stdout just to be sure if that's get changed. - if retval == __EXITCODE_WAIT_TIMEOUT: - raise LibraryError( - reports.resource_wait_timed_out( - join_multilines([stderr, stdout]) - ) - ) - else: - raise LibraryError( - reports.resource_wait_error( - join_multilines([stderr, stdout]) - ) - ) - -def __nodes_standby_unstandby( - runner, standby=True, node_list=None, all_nodes=False -): - if node_list or all_nodes: - # TODO once we switch to editing CIB instead of running crm_stanby, we - # cannot always relly on getClusterState. If we're not editing a CIB - # from a live cluster, there is no status. 
- state = ClusterState(get_cluster_status_xml(runner)).node_section.nodes - known_nodes = [node.attrs.name for node in state] - - if all_nodes: - node_list = known_nodes - elif node_list: - report = [] - for node in node_list: - if node not in known_nodes: - report.append(reports.node_not_found(node)) - if report: - raise LibraryError(*report) - - # TODO Edit CIB directly instead of running commands for each node; be aware - # remote nodes might not be in the CIB yet so we need to put them there. - cmd_template = [__exec("crm_standby")] - cmd_template.extend(["-v", "on"] if standby else ["-D"]) - cmd_list = [] - if node_list: - for node in node_list: - cmd_list.append(cmd_template + ["-N", node]) - else: - cmd_list.append(cmd_template) - report = [] - for cmd in cmd_list: - stdout, stderr, retval = runner.run(cmd) - if retval != 0: - report.append( - reports.common_error(join_multilines([stderr, stdout])) - ) - if report: - raise LibraryError(*report) - -def __get_local_node_name(runner): - # It would be possible to run "crm_node --name" to get the name in one call, - # but it returns false names when cluster is not running (or we are on - # a remote node). Getting node id first is reliable since it fails in those - # cases. - stdout, dummy_stderr, retval = runner.run( - [__exec("crm_node"), "--cluster-id"] - ) - if retval != 0: - raise LibraryError( - reports.pacemaker_local_node_name_not_found("node id not found") - ) - node_id = stdout.strip() - - stdout, dummy_stderr, retval = runner.run( - [__exec("crm_node"), "--name-for-id={0}".format(node_id)] - ) - if retval != 0: - raise LibraryError( - reports.pacemaker_local_node_name_not_found("node name not found") - ) - node_name = stdout.strip() - - if node_name == "(null)": - raise LibraryError( - reports.pacemaker_local_node_name_not_found("node name is null") - ) - return node_name diff -Nru pcs-0.9.155+dfsg/pcs/lib/pacemaker_state.py pcs-0.9.159/pcs/lib/pacemaker_state.py --- pcs-0.9.155+dfsg/pcs/lib/pacemaker_state.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/pacemaker_state.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,153 +0,0 @@ -''' -The intention is put there knowledge about cluster state structure. -Hide information about underlaying xml is desired too. 
-''' - -from __future__ import ( - absolute_import, - division, - print_function, - unicode_literals, -) - -import os.path - -from lxml import etree - -from pcs import settings -from pcs.lib import reports -from pcs.lib.errors import LibraryError -from pcs.lib.pacemaker_values import is_true - -class _Attrs(object): - def __init__(self, owner_name, attrib, required_attrs): - ''' - attrib lxml.etree._Attrib - wrapped attribute collection - required_attrs dict of required atribute names object_name:xml_attribute - ''' - self.owner_name = owner_name - self.attrib = attrib - self.required_attrs = required_attrs - - def __getattr__(self, name): - if name in self.required_attrs.keys(): - try: - attr_specification = self.required_attrs[name] - if isinstance(attr_specification, tuple): - attr_name, attr_transform = attr_specification - return attr_transform(self.attrib[attr_name]) - else: - return self.attrib[attr_specification] - except KeyError: - raise AttributeError( - "Missing attribute '{0}' ('{1}' in source) in '{2}'" - .format(name, self.required_attrs[name], self.owner_name) - ) - - raise AttributeError( - "'{0}' does not declare attribute '{1}'" - .format(self.owner_name, name) - ) - -class _Children(object): - def __init__(self, owner_name, dom_part, children, sections): - self.owner_name = owner_name - self.dom_part = dom_part - self.children = children - self.sections = sections - - def __getattr__(self, name): - if name in self.children.keys(): - element_name, wrapper = self.children[name] - return [ - wrapper(element) - for element in self.dom_part.findall('.//' + element_name) - ] - - if name in self.sections.keys(): - element_name, wrapper = self.sections[name] - return wrapper(self.dom_part.findall('.//' + element_name)[0]) - - raise AttributeError( - "'{0}' does not declare child or section '{1}'" - .format(self.owner_name, name) - ) - -class _Element(object): - required_attrs = {} - children = {} - sections = {} - - def __init__(self, dom_part): - self.dom_part = dom_part - self.attrs = _Attrs( - self.__class__.__name__, - self.dom_part.attrib, - self.required_attrs - ) - self.children_access = _Children( - self.__class__.__name__, - self.dom_part, - self.children, - self.sections, - ) - - def __getattr__(self, name): - return getattr(self.children_access, name) - -class _SummaryNodes(_Element): - required_attrs = { - 'count': ('number', int), - } - -class _SummaryResources(_Element): - required_attrs = { - 'count': ('number', int), - } - -class _SummarySection(_Element): - sections = { - 'nodes': ('nodes_configured', _SummaryNodes), - 'resources': ('resources_configured', _SummaryResources), - } - -class _Node(_Element): - required_attrs = { - 'id': 'id', - 'name': 'name', - 'type': 'type', - 'online': ('online', is_true), - 'standby': ('standby', is_true), - 'standby_onfail': ('standby_onfail', is_true), - 'maintenance': ('maintenance', is_true), - 'pending': ('pending', is_true), - 'unclean': ('unclean', is_true), - 'shutdown': ('shutdown', is_true), - 'expected_up': ('expected_up', is_true), - 'is_dc': ('is_dc', is_true), - 'resources_running': ('resources_running', int), - } - -class _NodeSection(_Element): - children = { - 'nodes': ('node', _Node), - } - -def _get_valid_cluster_state_dom(xml): - try: - dom = etree.fromstring(xml) - if os.path.isfile(settings.crm_mon_schema): - etree.RelaxNG(file=settings.crm_mon_schema).assertValid(dom) - return dom - except (etree.XMLSyntaxError, etree.DocumentInvalid): - raise LibraryError(reports.cluster_state_invalid_format()) - 
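# Editorial sketch, not part of either version of this file: the declarative
# wrapper pattern removed here lives on in pcs/lib/pacemaker/state.py (see
# test_state.py above). _Attrs maps python attribute access onto declared xml
# attributes, optionally transforming the raw string value:
#
#     attrs = _Attrs(
#         "node",                                       # owner name, for errors
#         {"name": "node1", "resources_running": "3"},  # element.attrib
#         {"name": "name", "running": ("resources_running", int)},
#     )
#     attrs.name      # -> "node1"
#     attrs.running   # -> 3, transformed by int()
#     attrs.other     # -> AttributeError, attribute not declared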
-class ClusterState(_Element): - sections = { - 'summary': ('summary', _SummarySection), - 'node_section': ('nodes', _NodeSection), - } - - def __init__(self, xml): - self.dom = _get_valid_cluster_state_dom(xml) - super(ClusterState, self).__init__(self.dom) diff -Nru pcs-0.9.155+dfsg/pcs/lib/pacemaker_values.py pcs-0.9.159/pcs/lib/pacemaker_values.py --- pcs-0.9.155+dfsg/pcs/lib/pacemaker_values.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/pacemaker_values.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,100 +0,0 @@ -from __future__ import ( - absolute_import, - division, - print_function, - unicode_literals, -) - -import re - -from pcs.lib import reports -from pcs.lib.errors import LibraryError - - -__BOOLEAN_TRUE = ["true", "on", "yes", "y", "1"] -__BOOLEAN_FALSE = ["false", "off", "no", "n", "0"] -SCORE_INFINITY = "INFINITY" - - -def is_true(val): - """ - Does pacemaker consider a value to be true? - See crm_is_true in pacemaker/lib/common/utils.c - var checked value - """ - return val.lower() in __BOOLEAN_TRUE - -def is_boolean(val): - """ - Does pacemaker consider a value to be a boolean? - See crm_is_true in pacemaker/lib/common/utils.c - val checked value - """ - return val.lower() in __BOOLEAN_TRUE + __BOOLEAN_FALSE - -def timeout_to_seconds(timeout, return_unknown=False): - """ - Transform pacemaker style timeout to number of seconds - timeout timeout string - return_unknown if timeout is not valid then return None on False or timeout - on True (default False) - """ - if timeout.isdigit(): - return int(timeout) - suffix_multiplier = { - "s": 1, - "sec": 1, - "m": 60, - "min": 60, - "h": 3600, - "hr": 3600, - } - for suffix, multiplier in suffix_multiplier.items(): - if timeout.endswith(suffix) and timeout[:-len(suffix)].isdigit(): - return int(timeout[:-len(suffix)]) * multiplier - return timeout if return_unknown else None - -def get_valid_timeout_seconds(timeout_candidate): - """ - Transform pacemaker style timeout to number of seconds, raise LibraryError - on invalid timeout - timeout_candidate timeout string or None - """ - if timeout_candidate is None: - return None - wait_timeout = timeout_to_seconds(timeout_candidate) - if wait_timeout is None: - raise LibraryError(reports.invalid_timeout(timeout_candidate)) - return wait_timeout - -def validate_id(id_candidate, description="id"): - """ - Validate a pacemaker id, raise LibraryError on invalid id. 
- - id_candidate id's value - description id's role description (default "id") - """ - # see NCName definition - # http://www.w3.org/TR/REC-xml-names/#NT-NCName - # http://www.w3.org/TR/REC-xml/#NT-Name - if len(id_candidate) < 1: - raise LibraryError(reports.invalid_id_is_empty( - id_candidate, description - )) - first_char_re = re.compile("[a-zA-Z_]") - if not first_char_re.match(id_candidate[0]): - raise LibraryError(reports.invalid_id_bad_char( - id_candidate, description, id_candidate[0], True - )) - char_re = re.compile("[a-zA-Z0-9_.-]") - for char in id_candidate[1:]: - if not char_re.match(char): - raise LibraryError(reports.invalid_id_bad_char( - id_candidate, description, char, False - )) - -def is_score_value(value): - if not value: - return False - unsigned_value = value[1:] if value[0] in ("+", "-") else value - return unsigned_value == SCORE_INFINITY or unsigned_value.isdigit() diff -Nru pcs-0.9.155+dfsg/pcs/lib/reports.py pcs-0.9.159/pcs/lib/reports.py --- pcs-0.9.155+dfsg/pcs/lib/reports.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/reports.py 2017-06-30 15:33:01.000000000 +0000 @@ -5,9 +5,86 @@ unicode_literals, ) +from functools import partial + from pcs.common import report_codes from pcs.lib.errors import ReportItem, ReportItemSeverity +
+def forceable_error(force_code, report_creator, *args, **kwargs): + """ + Return ReportItem created by report_creator. + + This is an experimental shortcut for a common pattern. It is intended to + cooperate with the functions "error" and "warning". + + string force_code is code for forcing error + callable report_creator is a function that produces ReportItem. It must take + parameters forceable (None or force code) and severity + (from ReportItemSeverity) + rest of args are for the report_creator + """ + return report_creator( + *args, + forceable=force_code, + severity=ReportItemSeverity.ERROR, + **kwargs + ) +
+def warning(report_creator, *args, **kwargs): + """ + Return ReportItem created by report_creator. + + This is an experimental shortcut for a common pattern. It is intended to + cooperate with the functions "error" and "forceable_error". + + callable report_creator is a function that produces ReportItem. It must take + parameters forceable (None or force code) and severity + (from ReportItemSeverity) + rest of args are for the report_creator + """ + return report_creator( + *args, + forceable=None, + severity=ReportItemSeverity.WARNING, + **kwargs + ) +
+def error(report_creator, *args, **kwargs): + """ + Return ReportItem created by report_creator. + + This is an experimental shortcut for a common pattern. It is intended to + cooperate with the functions "warning" and "forceable_error". + + callable report_creator is a function that produces ReportItem. It must take + parameters forceable (None or force code) and severity + (from ReportItemSeverity) + rest of args are for the report_creator + """ + return report_creator( + *args, + forceable=None, + severity=ReportItemSeverity.ERROR, + **kwargs + ) +
+def get_problem_creator(force_code=None, is_forced=False): + """ + Returns a report creator wrapper (forceable_error or warning). + + This is an experimental shortcut for deciding whether a ReportItem will be + either a forceable_error or a warning. + + string force_code is code for forcing error. It could be useful to prepare + it for a whole module by using functools.partial. 
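(Editorial usage sketch, not part of the upstream patch, using the report creator required_option_is_missing defined below: + creator = get_problem_creator(force_code, is_forced) + creator(required_option_is_missing, ["name"], "option type") + With a force_code given this returns a forceable error, or a warning when is_forced is set; with no force_code it returns a plain error.)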
+ bool is_forced is a flag for selecting the wrapper + """ + if not force_code: + return error + if is_forced: + return warning + return partial(forceable_error, force_code) def common_error(text): # TODO replace by more specific reports @@ -81,40 +158,105 @@ report_codes.EMPTY_RESOURCE_SET_LIST, )
-def required_option_is_missing(name): +def required_option_is_missing( + option_names, option_type=None, + severity=ReportItemSeverity.ERROR, forceable=None +): """ required option has not been specified, command cannot continue + list option_names required options that were not entered + option_type describes the option + severity report item severity + forceable is this report item forceable? by what category? """ - return ReportItem.error( + return ReportItem( report_codes.REQUIRED_OPTION_IS_MISSING, + severity, + forceable=forceable, info={ - "option_name": name + "option_names": option_names, + "option_type": option_type, + } + ) +
+def prerequisite_option_is_missing( + option_name, prerequisite_name, option_type="", prerequisite_type="" +): + """ + if the option_name is specified, the prerequisite option must be specified + string option_name -- an option which depends on the prerequisite option + string prerequisite_name -- the prerequisite option + string option_type -- describes the option + string prerequisite_type -- describes the prerequisite option + """ + return ReportItem.error( + report_codes.PREREQUISITE_OPTION_IS_MISSING, + info={ + "option_name": option_name, + "option_type": option_type, + "prerequisite_name": prerequisite_name, + "prerequisite_type": prerequisite_type, + } + ) +
+def required_option_of_alternatives_is_missing( + option_names, option_type=None +): + """ + at least one option has to be specified + iterable option_names -- options from which at least one has to be specified + string option_type -- describes the option + """ + severity = ReportItemSeverity.ERROR + forceable = None + return ReportItem( + report_codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING, + severity, + forceable=forceable, + info={ + "option_names": option_names, + "option_type": option_type, } )
def invalid_option( - option_name, allowed_options, option_type, + option_names, allowed_options, option_type, severity=ReportItemSeverity.ERROR, forceable=None ): """ specified option name is not valid, usually an error or a warning - option_name specified invalid option name + list option_names specified invalid option names allowed_options iterable of possible allowed option names option_type describes the option severity report item severity forceable is this report item forceable? by what category? 
""" + return ReportItem( report_codes.INVALID_OPTION, severity, forceable, info={ - "option_name": option_name, + "option_names": option_names, "option_type": option_type, "allowed": sorted(allowed_options), } ) +def invalid_option_type(option_name, allowed_types): + """ + specified value is not of a valid type for the option + string option_name -- option name whose value is not of a valid type + list|string allowed_types -- list of allowed types or string description + """ + return ReportItem.error( + report_codes.INVALID_OPTION_TYPE, + info={ + "option_name": option_name, + "allowed_types": allowed_types, + }, + ) + def invalid_option_value( option_name, option_value, allowed_values, severity=ReportItemSeverity.ERROR, forceable=None @@ -138,6 +280,45 @@ forceable=forceable ) +def deprecated_option( + option_name, replaced_by_options, option_type, + severity=ReportItemSeverity.ERROR, forceable=None +): + """ + Specified option name is deprecated and has been replaced by other option(s) + + string option_name -- the deprecated option + iterable or string replaced_by_options -- new option(s) to be used instead + string option_type -- option description + string severity -- report item severity + string forceable -- a category by which the report is forceable + """ + return ReportItem( + report_codes.DEPRECATED_OPTION, + severity, + info={ + "option_name": option_name, + "option_type": option_type, + "replaced_by": sorted(replaced_by_options), + }, + forceable=forceable + ) + +def mutually_exclusive_options(option_names, option_type): + """ + entered options can not coexist + set option_names contain entered mutually exclusive options + string option_type decsribes the option + """ + return ReportItem.error( + report_codes.MUTUALLY_EXCLUSIVE_OPTIONS, + info={ + "option_names": option_names, + "option_type": option_type, + }, + ) + + def invalid_id_is_empty(id, id_description): """ empty string was specified as an id, which is not valid @@ -201,7 +382,7 @@ report_codes.MULTIPLE_SCORE_OPTIONS, ) -def run_external_process_started(command, stdin): +def run_external_process_started(command, stdin, environment): """ information about running an external process command string the external process command @@ -212,6 +393,7 @@ info={ "command": command, "stdin": stdin, + "environment": environment, } ) @@ -277,6 +459,20 @@ } ) + +def node_communication_debug_info(target, data): + """ + Node communication debug info from pycurl + """ + return ReportItem.debug( + report_codes.NODE_COMMUNICATION_DEBUG_INFO, + info={ + "target": target, + "data": data, + } + ) + + def node_communication_not_connected(node, reason): """ an error occured when connecting to a remote node, debug info @@ -351,21 +547,26 @@ forceable=forceable ) -def node_communication_command_unsuccessful(node, command, reason): +def node_communication_command_unsuccessful( + node, command, reason, severity=ReportItemSeverity.ERROR, forceable=None +): """ node rejected a request for another reason with a plain text explanation node string node address / name reason string decription of the error """ - return ReportItem.error( + return ReportItem( report_codes.NODE_COMMUNICATION_COMMAND_UNSUCCESSFUL, + severity, info={ "node": node, "command": command, "reason": reason, - } + }, + forceable=forceable ) + def node_communication_error_other_error( node, command, reason, severity=ReportItemSeverity.ERROR, forceable=None @@ -406,6 +607,55 @@ forceable=forceable ) + +def node_communication_error_timed_out( + node, command, reason, + 
severity=ReportItemSeverity.ERROR, forceable=None +): + """ + Communication with node timed out. + """ + return ReportItem( + report_codes.NODE_COMMUNICATION_ERROR_TIMED_OUT, + severity, + info={ + "node": node, + "command": command, + "reason": reason, + }, + forceable=forceable + ) +
+def node_communication_proxy_is_set(): + """ + Warning when connection failed and there is proxy set in environment + variables + """ + return ReportItem.warning(report_codes.NODE_COMMUNICATION_PROXY_IS_SET) +
+def cannot_add_node_is_in_cluster(node): + """ + Node is in cluster. It is not possible to add it as a new cluster node. + """ + return ReportItem.error( + report_codes.CANNOT_ADD_NODE_IS_IN_CLUSTER, + info={"node": node} + ) +
+def cannot_add_node_is_running_service(node, service): + """ + Node is running service. It is not possible to add it as a new cluster node. + string node address of desired node + string service name of service (pacemaker, pacemaker_remote) + """ + return ReportItem.error( + report_codes.CANNOT_ADD_NODE_IS_RUNNING_SERVICE, + info={ + "node": node, + "service": service, + } + ) + def corosync_config_distribution_started(): """ corosync configuration is about to be sent to nodes @@ -801,28 +1051,167 @@ info={"id": id} )
-def id_not_found(id, id_description): +def id_belongs_to_unexpected_type(id, expected_types, current_type): + """ + Specified id exists but for another element than expected. + For example, the user wants to create a resource in a group specified by id, + but the id does not belong to a group. + """ + return ReportItem.error( + report_codes.ID_BELONGS_TO_UNEXPECTED_TYPE, + info={ + "id": id, + "expected_types": expected_types, + "current_type": current_type, + } + ) +
+def object_with_id_in_unexpected_context( + object_type, object_id, expected_context_type, expected_context_id +): + """ + Object specified by object_type (tag) and object_id exists but not inside + given context (expected_context_type, expected_context_id). + """ + return ReportItem.error( + report_codes.OBJECT_WITH_ID_IN_UNEXPECTED_CONTEXT, + info={ + "type": object_type, + "id": object_id, + "expected_context_type": expected_context_type, + "expected_context_id": expected_context_id, + } + ) + +
def id_not_found(id, id_description, context_type="", context_id=""): """ specified id does not exist in CIB, user referenced a nonexisting id - use "resource_does_not_exist" if id is a resource id - id string specified id - id_description string decribe id's role + string id specified id + string id_description describes id's role + string context_id specifies the search area """ return ReportItem.error( report_codes.ID_NOT_FOUND, info={ "id": id, "id_description": id_description, + "context_type": context_type, + "context_id": context_id, } ) +
+def resource_bundle_already_contains_a_resource(bundle_id, resource_id): + """ + The bundle already contains a resource, another one cannot be added + + string bundle_id -- id of the bundle + string resource_id -- id of the resource already contained in the bundle + """ + return ReportItem.error( + report_codes.RESOURCE_BUNDLE_ALREADY_CONTAINS_A_RESOURCE, + info={ + "bundle_id": bundle_id, + "resource_id": resource_id, + } + ) +
+def resource_cannot_be_next_to_itself_in_group(resource_id, group_id): + """ + Cannot put resource(id=resource_id) into group(id=group_id) next to itself: + resource(id=resource_id). 
+ """ + return ReportItem.error( + report_codes.RESOURCE_CANNOT_BE_NEXT_TO_ITSELF_IN_GROUP, + info={ + "resource_id": resource_id, + "group_id": group_id, + } + ) + +def stonith_resources_do_not_exist( + stonith_ids, severity=ReportItemSeverity.ERROR, forceable=None +): + """ + specified stonith resource doesn't exist (e.g. when creating in constraints) + iterable stoniths -- list of specified stonith id + """ + return ReportItem( + report_codes.STONITH_RESOURCES_DO_NOT_EXIST, + severity, + info={ + "stonith_ids": stonith_ids, + }, + forceable=forceable + ) + +def resource_running_on_nodes( + resource_id, roles_with_nodes, severity=ReportItemSeverity.INFO +): + """ + Resource is running on some nodes. Taken from cluster state. + + string resource_id represent the resource + list of tuple roles_with_nodes contain pairs (role, node) + """ + return ReportItem( + report_codes.RESOURCE_RUNNING_ON_NODES, + severity, + info={ + "resource_id": resource_id, + "roles_with_nodes": roles_with_nodes, + } + ) + +def resource_does_not_run(resource_id, severity=ReportItemSeverity.INFO): + """ + Resource is not running on any node. Taken from cluster state. + + string resource_id represent the resource + """ + return ReportItem( + report_codes.RESOURCE_DOES_NOT_RUN, + severity, + info={ + "resource_id": resource_id, } ) -def resource_does_not_exist(resource_id): +def resource_is_guest_node_already(resource_id): """ - specified resource does not exist (e.g. when creating in constraints) - resource_id string specified resource id + The resource is already used as guest node (i.e. has meta attribute + remote-node). + + string resource_id -- id of the resource that is guest node """ return ReportItem.error( - report_codes.RESOURCE_DOES_NOT_EXIST, + report_codes.RESOURCE_IS_GUEST_NODE_ALREADY, + info={ + "resource_id": resource_id, + } + ) + +def resource_is_unmanaged(resource_id): + """ + The resource the user works with is unmanaged (e.g. 
+def resource_is_unmanaged(resource_id): + """ + The resource the user works with is unmanaged (e.g. in enable/disable) + + string resource_id -- id of the unmanaged resource + """ + return ReportItem.warning( + report_codes.RESOURCE_IS_UNMANAGED, + info={ + "resource_id": resource_id, + } + ) + +def resource_managed_no_monitor_enabled(resource_id): + """ + The resource which was set to managed mode has no monitor operations enabled + + string resource_id -- id of the resource + """ + return ReportItem.warning( + report_codes.RESOURCE_MANAGED_NO_MONITOR_ENABLED, info={ "resource_id": resource_id, } @@ -888,6 +1277,18 @@ } ) +def cib_save_tmp_error(reason): + """ + cannot save CIB into a temporary file + string reason error description + """ + return ReportItem.error( + report_codes.CIB_SAVE_TMP_ERROR, + info={ + "reason": reason, + } + ) + def cluster_state_cannot_load(reason): """ cannot load cluster status from crm_mon, crm_mon exited with non-zero code @@ -908,38 +1309,46 @@ report_codes.BAD_CLUSTER_STATE_FORMAT, ) -def resource_wait_not_supported(): +def wait_for_idle_not_supported(): """ crm_resource does not support --wait """ return ReportItem.error( - report_codes.RESOURCE_WAIT_NOT_SUPPORTED, + report_codes.WAIT_FOR_IDLE_NOT_SUPPORTED, ) -def resource_wait_timed_out(reason): +def wait_for_idle_timed_out(reason): """ waiting for resources (crm_resource --wait) failed, timeout expired string reason error description """ return ReportItem.error( - report_codes.RESOURCE_WAIT_TIMED_OUT, + report_codes.WAIT_FOR_IDLE_TIMED_OUT, info={ "reason": reason, } ) -def resource_wait_error(reason): +def wait_for_idle_error(reason): """ waiting for resources (crm_resource --wait) failed string reason error description """ return ReportItem.error( - report_codes.RESOURCE_WAIT_ERROR, + report_codes.WAIT_FOR_IDLE_ERROR, info={ "reason": reason, } ) +def wait_for_idle_not_live_cluster(): + """ + cannot wait for the cluster if not running with a live cluster + """ + return ReportItem.error( + report_codes.WAIT_FOR_IDLE_NOT_LIVE_CLUSTER, + ) + def resource_cleanup_error(reason, resource=None, node=None): """ an error occurred when deleting resource history in pacemaker @@ -967,27 +1376,125 @@ forceable=report_codes.FORCE_LOAD_THRESHOLD ) -def node_not_found(node): +def resource_operation_interval_duplication(duplications): """ - specified node does not exist - node string specified node + Multiple operations with the same name and the same interval appeared. + Each operation with the same name (e.g. monitoring) needs to have a unique + interval. + dict duplications see resource operation interval duplication + in pcs/lib/exchange_formats.md """ return ReportItem.error( - report_codes.NODE_NOT_FOUND, - info={"node": node} + report_codes.RESOURCE_OPERATION_INTERVAL_DUPLICATION, + info={ + "duplications": duplications, + } ) -def pacemaker_local_node_name_not_found(reason): - """ - we are unable to figure out pacemaker's local node's name - reason string error message +def resource_operation_interval_adapted( + operation_name, original_interval, adapted_interval +): """ - return ReportItem.error( - report_codes.PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND, - info={"reason": reason} - ) + Interval of a resource operation was adapted so that intervals of operations + with the same name are unique. + Each operation with the same name (e.g. monitoring) needs to have a unique + interval. -def rrp_active_not_supported(warning=False): + """ + return ReportItem.warning( + report_codes.RESOURCE_OPERATION_INTERVAL_ADAPTED, + info={ + "operation_name": operation_name, + "original_interval": original_interval, + "adapted_interval": adapted_interval, + } + ) +
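To make the adaptation concrete: a small illustrative sketch (not pcs's actual algorithm) that bumps duplicate intervals of same-named operations until each (name, interval) pair is unique:

    def make_intervals_unique(operations):
        # operations: list of dicts with "name" and integer "interval" (seconds).
        # Bump a duplicate interval by one second until it is unique among
        # operations with the same name; purely illustrative.
        seen = set()
        for op in operations:
            interval = op["interval"]
            while (op["name"], interval) in seen:
                interval += 1
            seen.add((op["name"], interval))
            op["interval"] = interval
        return operations

    # Example: two monitors with interval 10 -> the second becomes 11.
    ops = [
        {"name": "monitor", "interval": 10},
        {"name": "monitor", "interval": 10},
    ]
    print(make_intervals_unique(ops))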
+def node_not_found( + node, searched_types=None, severity=ReportItemSeverity.ERROR, forceable=None +): + """ + specified node does not exist + node string specified node + searched_types list|string + """ + return ReportItem( + report_codes.NODE_NOT_FOUND, + severity, + info={ + "node": node, + "searched_types": searched_types if searched_types else [] + }, + forceable=forceable + ) + +def node_to_clear_is_still_in_cluster( + node, severity=ReportItemSeverity.ERROR, forceable=None +): + """ + specified node is still in cluster and `crm_node --remove` should not be + used + + node string specified node + """ + return ReportItem( + report_codes.NODE_TO_CLEAR_IS_STILL_IN_CLUSTER, + severity, + info={ + "node": node, + }, + forceable=forceable + ) + +def node_remove_in_pacemaker_failed(node_name, reason): + """ + call of crm_node --remove failed + string reason -- the caught error description + """ + return ReportItem.error( + report_codes.NODE_REMOVE_IN_PACEMAKER_FAILED, + info={ + "node_name": node_name, + "reason": reason, + } + ) + +def multiple_result_found( + result_type, result_identifier_list, search_description="", + severity=ReportItemSeverity.ERROR, forceable=None +): + """ + Multiple results were found when something was looked for. E.g. resource for + remote node. + + string result_type specifies what was looked for, e.g. "resource" + list result_identifier_list contains identifiers of results + e.g. resource ids + string search_description e.g. name of remote_node + """ + return ReportItem( + report_codes.MULTIPLE_RESULTS_FOUND, + severity, + info={ + "result_type": result_type, + "result_identifier_list": result_identifier_list, + "search_description": search_description, + }, + forceable=forceable + ) + + +def pacemaker_local_node_name_not_found(reason): + """ + we are unable to figure out pacemaker's local node's name + reason string error message + """ + return ReportItem.error( + report_codes.PACEMAKER_LOCAL_NODE_NAME_NOT_FOUND, + info={"reason": reason} + ) + +def rrp_active_not_supported(warning=False): """ active RRP mode is not supported, require user confirmation warning set to True if user confirmed he/she wants to proceed @@ -1321,6 +1828,18 @@ } ) +def invalid_stonith_agent_name(name): + """ + The entered stonith agent name is not valid. + string name -- entered stonith agent name + """ + return ReportItem.error( + report_codes.INVALID_STONITH_AGENT_NAME, + info={ + "name": name, + } + ) + def agent_name_guessed(entered_name, guessed_name): """ Resource agent name was deduced from the entered name.
@@ -1455,6 +1974,242 @@ ) + +def sbd_device_initialization_started(device_list): + """ + initialization of SBD device(s) started + """ + return ReportItem.info( + report_codes.SBD_DEVICE_INITIALIZATION_STARTED, + info={ + "device_list": device_list, + } + ) + + +def sbd_device_initialization_success(device_list): + """ + initialization of SBD device(s) succeeded + """ + return ReportItem.info( + report_codes.SBD_DEVICE_INITIALIZATION_SUCCESS, + info={ + "device_list": device_list, + } + ) + + +def sbd_device_initialization_error(device_list, reason): + """ + initialization of SBD device failed + """ + return ReportItem.error( + report_codes.SBD_DEVICE_INITIALIZATION_ERROR, + info={ + "device_list": device_list, + "reason": reason, + } + ) + + +def sbd_device_list_error(device, reason): + """ + command 'sbd list' failed + """ + return ReportItem.error( + report_codes.SBD_DEVICE_LIST_ERROR, + info={ + "device": device, + "reason": reason, + } + ) + + +def sbd_device_message_error(device, node, message, reason): + """ + unable to set message 'message' on shared block device 'device' + for node 'node'. + """ + return ReportItem.error( + report_codes.SBD_DEVICE_MESSAGE_ERROR, + info={ + "device": device, + "node": node, + "message": message, + "reason": reason, + } + ) + + +def sbd_device_dump_error(device, reason): + """ + command 'sbd dump' failed + """ + return ReportItem.error( + report_codes.SBD_DEVICE_DUMP_ERROR, + info={ + "device": device, + "reason": reason, + } + ) + +def files_distribution_started(file_list, node_list=None, description=None): + """ + files are about to be sent to nodes + """ + file_list = file_list if file_list else [] + return ReportItem.info( + report_codes.FILES_DISTRIBUTION_STARTED, + info={ + "file_list": file_list, + "node_list": node_list, + "description": description, + } + ) + +def file_distribution_success(node=None, file_description=None): + """ + a file was successfully distributed to a node + + string node -- name of destination node + string file_description -- name (code) of the successfully distributed file + """ + return ReportItem.info( + report_codes.FILE_DISTRIBUTION_SUCCESS, + info={ + "node": node, + "file_description": file_description, + }, + ) + +def file_distribution_error( + node=None, file_description=None, reason=None, + severity=ReportItemSeverity.ERROR, forceable=None +): + """ + cannot put files to specific nodes + + string node -- name of destination node + string file_description -- file code + string reason -- error message + """ + return ReportItem( + report_codes.FILE_DISTRIBUTION_ERROR, + severity, + info={ + "node": node, + "file_description": file_description, + "reason": reason, + }, + forceable=forceable + ) + +def files_remove_from_node_started(file_list, node_list=None, description=None): + """ + files are about to be removed from nodes + """ + file_list = file_list if file_list else [] + return ReportItem.info( + report_codes.FILES_REMOVE_FROM_NODE_STARTED, + info={ + "file_list": file_list, + "node_list": node_list, + "description": description, + } + ) + +def file_remove_from_node_success(node=None, file_description=None): + """ + a file was successfully removed from a node + + string node -- name of destination node + string file_description -- name (code) of the successfully removed file + """ + return ReportItem.info( + report_codes.FILE_REMOVE_FROM_NODE_SUCCESS, + info={ + "node": node, + "file_description": file_description, + }, + ) +
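A sketch of how the file-distribution reports chain together during one operation (node names and file codes are made up; assumes pcs 0.9.159 is importable):

    from pcs.lib import reports

    nodes = ["node-1", "node-2"]
    report_list = [
        reports.files_distribution_started(["pacemaker_remote authkey"], nodes)
    ]
    # One success or error report per node, emitted by the distribution code.
    report_list.append(
        reports.file_distribution_success("node-1", "pacemaker_remote authkey")
    )
    report_list.append(
        reports.file_distribution_error(
            "node-2", "pacemaker_remote authkey", "connection refused"
        )
    )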
+def file_remove_from_node_error( + node=None, file_description=None, reason=None, + severity=ReportItemSeverity.ERROR, forceable=None +): + """ + cannot remove files from specific nodes + + string node -- name of destination node + string file_description -- file code + string reason -- error message + """ + return ReportItem( + report_codes.FILE_REMOVE_FROM_NODE_ERROR, + severity, + info={ + "node": node, + "file_description": file_description, + "reason": reason, + }, + forceable=forceable + ) + +def service_commands_on_nodes_started( + action_list, node_list=None, description=None +): + """ + nodes were requested to run service actions + """ + action_list = action_list if action_list else [] + return ReportItem.info( + report_codes.SERVICE_COMMANDS_ON_NODES_STARTED, + info={ + "action_list": action_list, + "node_list": node_list, + "description": description, + } + ) + +def service_command_on_node_success( + node=None, service_command_description=None +): + """ + a service command was successfully run on a node + + string service_command_description -- name (code) of the successfully run + service command + """ + return ReportItem.info( + report_codes.SERVICE_COMMAND_ON_NODE_SUCCESS, + info={ + "node": node, + "service_command_description": service_command_description, + }, + ) + +def service_command_on_node_error( + node=None, service_command_description=None, reason=None, + severity=ReportItemSeverity.ERROR, forceable=None +): + """ + an action on a node failed + + string service_command_description -- name (code) of the failed service + command + string reason -- error message + """ + return ReportItem( + report_codes.SERVICE_COMMAND_ON_NODE_ERROR, + severity, + info={ + "node": node, + "service_command_description": service_command_description, + "reason": reason, + }, + forceable=forceable + ) + + def invalid_response_format(node): """ error message that response in invalid format has been received from @@ -1468,6 +2223,69 @@ ) + +def sbd_no_device_for_node(node): + """ + there is no device defined for the node when enabling SBD with devices + """ + return ReportItem.error( + report_codes.SBD_NO_DEVICE_FOR_NODE, + info={"node": node} + ) + + +def sbd_too_many_devices_for_node(node, device_list, max_devices): + """ + more devices than the allowed maximum defined for the node + """ + return ReportItem.error( + report_codes.SBD_TOO_MANY_DEVICES_FOR_NODE, + info={ + "node": node, + "device_list": device_list, + "max_devices": max_devices, + } + ) + + +def sbd_device_path_not_absolute(device, node=None): + """ + path of SBD device is not absolute + """ + return ReportItem.error( + report_codes.SBD_DEVICE_PATH_NOT_ABSOLUTE, + info={ + "device": device, + "node": node, + } + ) + + +def sbd_device_does_not_exist(device, node): + """ + specified device on node doesn't exist + """ + return ReportItem.error( + report_codes.SBD_DEVICE_DOES_NOT_EXIST, + info={ + "device": device, + "node": node, + } + ) + + +def sbd_device_is_not_block_device(device, node): + """ + specified device on node is not a block device + """ + return ReportItem.error( + report_codes.SBD_DEVICE_IS_NOT_BLOCK_DEVICE, + info={ + "device": device, + "node": node, + } + ) + + def sbd_not_installed(node): """ sbd is not installed on specified node @@ -1564,18 +2382,6 @@ info={"recipient": recipient_value} ) -def cib_alert_not_found(alert_id): - """ - Alert with specified id doesn't exist. - - alert_id -- id of alert - """ - return ReportItem.error( - report_codes.CIB_ALERT_NOT_FOUND, - info={"alert": alert_id} - ) - - def cib_upgrade_successful(): """ Upgrade of CIB schema was successful.
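The service-command reports follow the same started/success/error sequence; a short hypothetical sketch (the action strings are made up for illustration):

    from pcs.lib import reports

    report_list = [
        reports.service_commands_on_nodes_started(
            ["pacemaker_remote enable", "pacemaker_remote start"],
            ["node-1", "node-2"],
        ),
        reports.service_command_on_node_success(
            "node-1", "pacemaker_remote enable"
        ),
        reports.service_command_on_node_error(
            "node-2", "pacemaker_remote enable", "unit not found"
        ),
    ]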
@@ -1682,6 +2488,58 @@ } ) +def live_environment_required_for_local_node(): + """ + The operation cannot be performed on CIB in a file (not live cluster) if no + node name is specified, i.e. when working with the local node + """ + return ReportItem.error( + report_codes.LIVE_ENVIRONMENT_REQUIRED_FOR_LOCAL_NODE, + ) + +def nolive_skip_files_distribution(files_description, nodes): + """ + When running an action with e.g. -f, the files were not distributed to nodes. + list files_description -- contains description of files + list nodes -- destinations the files should be distributed to + """ + return ReportItem.info( + report_codes.NOLIVE_SKIP_FILES_DISTRIBUTION, + info={ + "files_description": files_description, + "nodes": nodes, + } + ) + +def nolive_skip_files_remove(files_description, nodes): + """ + When running an action with e.g. -f, the files were not removed from nodes. + list files_description -- contains description of files + list nodes -- destinations the files should be removed from + """ + return ReportItem.info( + report_codes.NOLIVE_SKIP_FILES_REMOVE, + info={ + "files_description": files_description, + "nodes": nodes, + } + ) + +def nolive_skip_service_command_on_nodes(service, command, nodes): + """ + When running an action with e.g. -f, the service command is not run on nodes. + string service -- e.g. pacemaker, pacemaker_remote, corosync + string command -- e.g. start, enable, stop, disable + list nodes -- destinations where the command should be run + """ + return ReportItem.info( + report_codes.NOLIVE_SKIP_SERVICE_COMMAND_ON_NODES, + info={ + "service": service, + "command": command, + "nodes": nodes, + } + ) def quorum_cannot_disable_atb_due_to_sbd( severity=ReportItemSeverity.ERROR, forceable=None @@ -1766,3 +2624,70 @@ "reason": reason, } ) + +def fencing_level_already_exists(level, target_type, target_value, devices): + """ + Fencing level already exists, it cannot be created + """ + return ReportItem.error( + report_codes.CIB_FENCING_LEVEL_ALREADY_EXISTS, + info={ + "level": level, + "target_type": target_type, + "target_value": target_value, + "devices": devices, + } + ) + +def fencing_level_does_not_exist(level, target_type, target_value, devices): + """ + Fencing level does not exist, it cannot be updated or deleted + """ + return ReportItem.error( + report_codes.CIB_FENCING_LEVEL_DOES_NOT_EXIST, + info={ + "level": level, + "target_type": target_type, + "target_value": target_value, + "devices": devices, + } + ) + +def use_command_node_add_remote( + severity=ReportItemSeverity.ERROR, forceable=None +): + """ + Advise the user to use a more appropriate command. + """ + return ReportItem( + report_codes.USE_COMMAND_NODE_ADD_REMOTE, + severity, + info={}, + forceable=forceable + ) + +def use_command_node_add_guest( + severity=ReportItemSeverity.ERROR, forceable=None +): + """ + Advise the user to use a more appropriate command. + """ + return ReportItem( + report_codes.USE_COMMAND_NODE_ADD_GUEST, + severity, + info={}, + forceable=forceable + ) + +def use_command_node_remove_guest( + severity=ReportItemSeverity.ERROR, forceable=None +): + """ + Advise the user to use a more appropriate command.
+ """ + return ReportItem( + report_codes.USE_COMMAND_NODE_REMOVE_GUEST, + severity, + info={}, + forceable=forceable + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/resource_agent.py pcs-0.9.159/pcs/lib/resource_agent.py --- pcs-0.9.155+dfsg/pcs/lib/resource_agent.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/resource_agent.py 2017-06-30 15:33:01.000000000 +0000 @@ -5,19 +5,76 @@ unicode_literals, ) +from collections import namedtuple +from lxml import etree import os import re -from lxml import etree from pcs import settings +from pcs.common import report_codes +from pcs.common.tools import xml_fromstring from pcs.lib import reports from pcs.lib.errors import LibraryError, ReportItemSeverity -from pcs.lib.pacemaker_values import is_true -from pcs.common import report_codes +from pcs.lib.pacemaker.values import is_true _crm_resource = os.path.join(settings.pacemaker_binaries, "crm_resource") +DEFAULT_RESOURCE_CIB_ACTION_NAMES = [ + "monitor", + "start", + "stop", + "promote", + "demote", +] +DEFAULT_STONITH_CIB_ACTION_NAMES = ["monitor"] + +# Operation monitor is required always! No matter if --no-default-ops was +# entered or if agent does not specify it. See +# http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_resource_operations +NECESSARY_CIB_ACTION_NAMES = ["monitor"] + +#These are all standards valid in cib. To get a list of standards supported by +#pacemaker in local environment use result of "pcs resource standards". +STANDARD_LIST = [ + "ocf", + "lsb", + "heartbeat", + "stonith", + "upstart", + "service", + "systemd", + "nagios", +] + +DEFAULT_INTERVALS = { + "monitor": "60s" +} + +_STONITH_ACTION_REPLACED_BY = ("pcmk_off_action", "pcmk_reboot_action") + + +def get_default_interval(operation_name): + """ + Return default interval for given operation_name. + string operation_name + """ + return DEFAULT_INTERVALS.get(operation_name, "0s") + +def complete_all_intervals(raw_operation_list): + """ + Return operation_list based on raw_operation_list where each item has key + "interval". + + list of dict raw_operation_list can include items withou key "interval". 
+ """ + operation_list = [] + for raw_operation in raw_operation_list: + operation = raw_operation.copy() + if "interval" not in operation: + operation["interval"] = get_default_interval(operation["name"]) + operation_list.append(operation) + return operation_list class ResourceAgentError(Exception): # pylint: disable=super-init-not-called @@ -32,6 +89,56 @@ class InvalidResourceAgentName(ResourceAgentError): pass +class InvalidStonithAgentName(ResourceAgentError): + pass + +class ResourceAgentName( + namedtuple("ResourceAgentName", "standard provider type") +): + @property + def full_name(self): + return ":".join( + filter( + None, + [self.standard, self.provider, self.type] + ) + ) + +def get_resource_agent_name_from_string(full_agent_name): + #full_agent_name could be for example systemd:lvm2-pvscan@252:2 + #note that the second colon is not separator of provider and type + match = re.match( + "^(?Psystemd|service):(?P[^:@]+@.*)$", + full_agent_name + ) + if match: + return ResourceAgentName( + match.group("standard"), + None, + match.group("agent_type") + ) + + match = re.match( + "^(?P[^:]+)(:(?P[^:]+))?:(?P[^:]+)$", + full_agent_name + ) + if not match: + raise InvalidResourceAgentName(full_agent_name) + + standard = match.group("standard") + provider = match.group("provider") if match.group("provider") else None + agent_type = match.group("type") + + if standard not in STANDARD_LIST: + raise InvalidResourceAgentName(full_agent_name) + + if standard == "ocf" and not provider: + raise InvalidResourceAgentName(full_agent_name) + + if standard != "ocf" and provider: + raise InvalidResourceAgentName(full_agent_name) + + return ResourceAgentName(standard, provider, agent_type) def list_resource_agents_standards(runner): """ @@ -186,8 +293,18 @@ return agents[0] def find_valid_resource_agent_by_name( - report_processor, runner, name, allowed_absent=False + report_processor, runner, name, + allowed_absent=False, absent_agent_supported=True ): + """ + Return instance of ResourceAgent corresponding to name + + report_processor is tool for warning/info/error reporting + runner is tool for launching external commands + string name specifies a searched agent + bool absent_agent_supported flag decides if is possible to allow to return + absent agent: if is produced forceable/no-forcable error + """ if ":" not in name: agent = guess_exactly_one_resource_agent_full_name(runner, name) report_processor.process( @@ -195,26 +312,59 @@ ) return agent + return _find_valid_agent_by_name( + report_processor, + runner, + name, + ResourceAgent, + AbsentResourceAgent if allowed_absent else None, + absent_agent_supported=absent_agent_supported, + ) + +def find_valid_stonith_agent_by_name( + report_processor, runner, name, + allowed_absent=False, absent_agent_supported=True +): + return _find_valid_agent_by_name( + report_processor, + runner, + name, + StonithAgent, + AbsentStonithAgent if allowed_absent else None, + absent_agent_supported=absent_agent_supported, + ) + +def _find_valid_agent_by_name( + report_processor, runner, name, PresentAgentClass, AbsentAgentClass, + absent_agent_supported=True +): try: - return ResourceAgent(runner, name).validate_metadata() - except InvalidResourceAgentName as e: + return PresentAgentClass(runner, name).validate_metadata() + except (InvalidResourceAgentName, InvalidStonithAgentName) as e: raise LibraryError(resource_agent_error_to_report_item(e)) except UnableToGetAgentMetadata as e: - if not allowed_absent: + if not absent_agent_supported: raise 
LibraryError(resource_agent_error_to_report_item(e)) + if not AbsentAgentClass: + raise LibraryError(resource_agent_error_to_report_item( + e, + forceable=True + )) + report_processor.process(resource_agent_error_to_report_item( e, severity=ReportItemSeverity.WARNING, - forceable=True )) - return AbsentResourceAgent(runner, name) + return AbsentAgentClass(runner, name) class Agent(object): """ Base class for providing convinient access to an agent's metadata """ + DEFAULT_CIB_ACTION_NAMES = [] + def __init__(self, runner): """ create an instance which reads metadata by itself on demand @@ -258,6 +408,7 @@ agent_info = self.get_description_info() agent_info["parameters"] = self.get_parameters() agent_info["actions"] = self.get_actions() + agent_info["default_actions"] = self.get_cib_default_actions() return agent_info @@ -298,10 +449,12 @@ params_element = self._get_metadata().find("parameters") if params_element is None: return [] - return [ - self._get_parameter(parameter) - for parameter in params_element.iter("parameter") - ] + param_list = [] + for param_el in params_element.iter("parameter"): + param = self._get_parameter(param_el) + if not param["obsoletes"]: + param_list.append(param) + return param_list def _get_parameter(self, parameter_element): @@ -312,7 +465,7 @@ value_type = content_element.get("type", value_type) default_value = content_element.get("default", default_value) - return { + return self._create_parameter({ "name": parameter_element.get("name", ""), "longdesc": self._get_text_from_dom_element( parameter_element.find("longdesc") @@ -324,8 +477,45 @@ "default": default_value, "required": is_true(parameter_element.get("required", "0")), "advanced": False, - } + "deprecated": is_true(parameter_element.get("deprecated", "0")), + "obsoletes": parameter_element.get("obsoletes", None), + }) + + def validate_parameters( + self, parameters, + parameters_type="resource", + allow_invalid=False, + update=False + ): + forceable = report_codes.FORCE_OPTIONS if not allow_invalid else None + severity = ( + ReportItemSeverity.ERROR if not allow_invalid + else ReportItemSeverity.WARNING + ) + + report_list = [] + bad_opts, missing_req_opts = self.validate_parameters_values( + parameters + ) + + if bad_opts: + report_list.append(reports.invalid_option( + bad_opts, + sorted([attr["name"] for attr in self.get_parameters()]), + parameters_type, + severity=severity, + forceable=forceable, + )) + + if not update and missing_req_opts: + report_list.append(reports.required_option_is_missing( + missing_req_opts, + parameters_type, + severity=severity, + forceable=forceable, + )) + return report_list def validate_parameters_values(self, parameters_values): """ @@ -350,11 +540,7 @@ required_missing ) - - def get_actions(self): - """ - Get list of agent's actions (operations) - """ + def _get_raw_actions(self): actions_element = self._get_metadata().find("actions") if actions_element is None: return [] @@ -366,6 +552,41 @@ for action in actions_element.iter("action") ] + def get_actions(self): + """ + Get list of agent's actions (operations). Each action is represented as + dict. 
Example: [{"name": "monitor", "timeout": 20, "interval": 10}] + """ + action_list = [] + for raw_action in self._get_raw_actions(): + action = {} + for key, value in raw_action.items(): + if key != "depth": + action[key] = value + elif value != "0": + action["OCF_CHECK_LEVEL"] = value + action_list.append(action) + return action_list + + def get_cib_default_actions(self, necessary_only=False): + """ + List actions that should be put to resource on its creation. + Note that every action has at least attribute name. + """ + + action_list = [ + action for action in self.get_actions() + if action.get("name", "") in ( + NECESSARY_CIB_ACTION_NAMES if necessary_only + else self.DEFAULT_CIB_ACTION_NAMES + ) + ] + + for action_name in NECESSARY_CIB_ACTION_NAMES: + if action_name not in [action["name"] for action in action_list]: + action_list.append({"name": action_name}) + + return complete_all_intervals(action_list) def _get_metadata(self): """ @@ -384,7 +605,7 @@ def _parse_metadata(self, metadata): try: - dom = etree.fromstring(metadata) + dom = xml_fromstring(metadata) # TODO Majority of agents don't provide valid metadata, so we skip # the validation for now. We want to enable it once the schema # and/or agents are fixed. @@ -401,14 +622,26 @@ return "" return element.text.strip() - -class FakeAgentMetadata(Agent): - def get_name(self): - raise NotImplementedError() + def _create_parameter(self, properties): + new_param = { + "name": "", + "longdesc": "", + "shortdesc": "", + "type": "string", + "default": None, + "required": False, + "advanced": False, + "deprecated": False, + "obsoletes": None, + "pcs_deprecated_warning": "", + } + new_param.update(properties) + return new_param - def _load_metadata(self): - raise NotImplementedError() +class FakeAgentMetadata(Agent): + #pylint:disable=abstract-method + pass class StonithdMetadata(FakeAgentMetadata): @@ -443,19 +676,29 @@ class CrmAgent(Agent): - def __init__(self, runner, full_agent_name): + #pylint:disable=abstract-method + def __init__(self, runner, name): """ init CommandRunner runner - string full_agent_name standard:provider:type or standard:type """ super(CrmAgent, self).__init__(runner) - self._full_agent_name = full_agent_name + self._name_parts = self._prepare_name_parts(name) + def _prepare_name_parts(self, name): + raise NotImplementedError() - def get_name(self): - return self._full_agent_name + def _get_full_name(self): + return self._name_parts.full_name + def get_standard(self): + return self._name_parts.standard + + def get_provider(self): + return self._name_parts.provider + + def get_type(self): + return self._name_parts.type def is_valid_metadata(self): """ @@ -486,7 +729,7 @@ "/usr/bin/", ]) stdout, stderr, retval = self._runner.run( - [_crm_resource, "--show-metadata", self._full_agent_name], + [_crm_resource, "--show-metadata", self._get_full_name()], env_extend={ "PATH": env_path, } @@ -497,40 +740,101 @@ class ResourceAgent(CrmAgent): + DEFAULT_CIB_ACTION_NAMES = DEFAULT_RESOURCE_CIB_ACTION_NAMES """ Provides convinient access to a resource agent's metadata """ - def __init__(self, runner, full_agent_name): - if not re.match("^[^:]+(:[^:]+){1,2}$", full_agent_name): - raise InvalidResourceAgentName(full_agent_name) - super(ResourceAgent, self).__init__(runner, full_agent_name) + def _prepare_name_parts(self, name): + return get_resource_agent_name_from_string(name) + + def get_name(self): + return self._get_full_name() + + def get_parameters(self): + parameters = super(ResourceAgent, self).get_parameters() + if ( + 
self.get_standard() == "ocf" + and + (self.get_provider() in ("heartbeat", "pacemaker")) + ): + trace_ra_found = False + trace_file_found = False + for param in parameters: + param_name = param["name"].lower() + if param_name == "trace_ra": + trace_ra_found = True + if param_name == "trace_file": + trace_file_found = True + if trace_file_found and trace_ra_found: + break + + if not trace_ra_found: + shortdesc = ( + "Set to 1 to turn on resource agent tracing" + " (expect large output)" + ) + parameters.append(self._create_parameter({ + "name": "trace_ra", + "longdesc": ( + shortdesc + + + " The trace output will be saved to trace_file, if set," + " or by default to" + " $HA_VARRUN/ra_trace//.." + " e.g. $HA_VARRUN/ra_trace/oracle/" + "db.start.2012-11-27.08:37:08" + ), + "shortdesc": shortdesc, + "type": "integer", + "default": 0, + "required": False, + "advanced": True, + })) + if not trace_file_found: + shortdesc = ( + "Path to a file to store resource agent tracing log" + ) + parameters.append(self._create_parameter({ + "name": "trace_file", + "longdesc": shortdesc, + "shortdesc": shortdesc, + "type": "string", + "default": "", + "required": False, + "advanced": True, + })) + + return parameters + -class AbsentResourceAgent(ResourceAgent): +class AbsentAgentMixin(object): def _load_metadata(self): return "" def validate_parameters_values(self, parameters_values): return ([], []) + +class AbsentResourceAgent(AbsentAgentMixin, ResourceAgent): + pass + + class StonithAgent(CrmAgent): """ Provides convinient access to a stonith agent's metadata """ + DEFAULT_CIB_ACTION_NAMES = DEFAULT_STONITH_CIB_ACTION_NAMES _stonithd_metadata = None - - def __init__(self, runner, agent_name): - super(StonithAgent, self).__init__( - runner, - "stonith:{0}".format(agent_name) - ) - self._agent_name = agent_name - + def _prepare_name_parts(self, name): + # pacemaker doesn't support stonith (nor resource) agents with : in type + if ":" in name: + raise InvalidStonithAgentName(name) + return ResourceAgentName("stonith", None, name) def get_name(self): - return self._agent_name - + return self.get_type() def get_parameters(self): return ( @@ -541,6 +845,32 @@ self._get_stonithd_metadata().get_parameters() ) + def validate_parameters( + self, parameters, + parameters_type="stonith", + allow_invalid=False, + update=False + ): + report_list = super(StonithAgent, self).validate_parameters( + parameters, + parameters_type=parameters_type, + allow_invalid=allow_invalid, + update=update + ) + if parameters.get("action", ""): + report_list.append(reports.deprecated_option( + "action", + _STONITH_ACTION_REPLACED_BY, + parameters_type, + severity=( + ReportItemSeverity.ERROR if not allow_invalid + else ReportItemSeverity.WARNING + ), + forceable=( + report_codes.FORCE_OPTIONS if not allow_invalid else None + ) + )) + return report_list def _filter_parameters(self, parameters): """ @@ -549,9 +879,7 @@ # We don't allow the user to change these options which are only # intended to be used interactively on command line. remove_parameters = frozenset([ - "debug", "help", - "verbose", "version", ]) filtered = [] @@ -561,15 +889,19 @@ elif param["name"] == "action": # However we still need the user to be able to set 'action' due # to backward compatibility reasons. So we just mark it as not - # required. + # required. We also move it to advanced params to indicate users + # should not set it in most cases. 
new_param = dict(param) - new_param["shortdesc"] = "\n".join(filter(None, [ - param.get("shortdesc", ""), - "WARNING: specifying 'action' is deprecated and not " - "necessary with current Pacemaker versions." - , - ])) new_param["required"] = False + new_param["advanced"] = True + new_param["pcs_deprecated_warning"] = ( + "Specifying 'action' is deprecated and not necessary with" + " current Pacemaker versions. Use {0} instead." + ).format( + ", ".join( + ["'{0}'".format(x) for x in _STONITH_ACTION_REPLACED_BY] + ) + ) filtered.append(new_param) else: filtered.append(param) @@ -579,35 +911,14 @@ # Pacemaker marks the 'port' parameter as not required for us. return filtered - def _get_stonithd_metadata(self): if not self.__class__._stonithd_metadata: self.__class__._stonithd_metadata = StonithdMetadata(self._runner) return self.__class__._stonithd_metadata - - def get_actions(self): - # In previous versions of pcs there was no way to read actions from - # stonith agents, the functions always returned an empty list. It - # wasn't clear if that is a mistake or an intention. We keep it that - # way for two reasons: - # 1) Fence agents themselfs specify the actions without any attributes - # (interval, timeout) - # 2) Pacemaker explained shows an example stonith agent configuration - # in CIB with only monitor operation specified (and that pcs creates - # automatically in "pcs stonith create" regardless of provided actions - # from here). - # It may be better to return real actions from this class and deal ommit - # them in higher layers, which can decide if the actions are desired or - # not. For now there is not enough information to do that. Code which - # uses this is not clean enough. Once everything is cleaned we should - # decide if it is better to move this to higher level. - return [] - - def get_provides_unfencing(self): # self.get_actions returns an empty list - for action in super(StonithAgent, self).get_actions(): + for action in self._get_raw_actions(): if ( action.get("name", "") == "on" and @@ -619,6 +930,11 @@ return False +class AbsentStonithAgent(AbsentAgentMixin, StonithAgent): + def get_parameters(self): + return [] + + def resource_agent_error_to_report_item( e, severity=ReportItemSeverity.ERROR, forceable=False ): @@ -634,4 +950,6 @@ ) if e.__class__ == InvalidResourceAgentName: return reports.invalid_resource_agent_name(e.agent) + if e.__class__ == InvalidStonithAgentName: + return reports.invalid_stonith_agent_name(e.agent) raise e diff -Nru pcs-0.9.155+dfsg/pcs/lib/sbd.py pcs-0.9.159/pcs/lib/sbd.py --- pcs-0.9.155+dfsg/pcs/lib/sbd.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/sbd.py 2017-06-30 15:33:01.000000000 +0000 @@ -6,6 +6,7 @@ ) import json +from os import path from pcs import settings from pcs.common import tools @@ -13,7 +14,7 @@ external, reports, ) -from pcs.lib.tools import dict_to_environment_file +from pcs.lib.tools import dict_to_environment_file, environment_file_to_dict from pcs.lib.external import ( NodeCommunicator, node_communicator_exception_to_report_item, @@ -22,6 +23,14 @@ from pcs.lib.errors import LibraryError +DEVICE_INITIALIZATION_OPTIONS_MAPPING = { + "watchdog-timeout": "-1", + "allocate-timeout": "-2", + "loop-timeout": "-3", + "msgwait-timeout": "-4", +} + + def _run_parallel_and_raise_lib_error_on_failure(func, param_list): """ Run function func in parallel for all specified parameters in arg_list. 
@@ -86,6 +95,8 @@ is_sbd_installed(runner) and is_sbd_enabled(runner) + and + not is_device_set_local() ) @@ -123,22 +134,29 @@ ) -def check_sbd(communicator, node, watchdog): +def check_sbd(communicator, node, watchdog, device_list): """ - Check SBD on specified 'node' and existence of specified watchdog. + Check SBD on specified 'node' and existence of specified watchdog and + devices. communicator -- NodeCommunicator node -- NodeAddresses watchdog -- watchdog path + device_list -- list of strings """ return communicator.call_node( node, "remote/check_sbd", - NodeCommunicator.format_data_dict([("watchdog", watchdog)]) + NodeCommunicator.format_data_dict([ + ("watchdog", watchdog), + ("device_list", NodeCommunicator.format_data_json(device_list)), + ]) ) -def check_sbd_on_node(report_processor, node_communicator, node, watchdog): +def check_sbd_on_node( + report_processor, node_communicator, node, watchdog, device_list +): """ Check if SBD can be enabled on specified 'node'. Raises LibraryError if check fails. @@ -148,15 +166,29 @@ node_communicator -- NodeCommunicator node -- NodeAddresses watchdog -- watchdog path + device_list -- list of strings """ report_list = [] try: - data = json.loads(check_sbd(node_communicator, node, watchdog)) + data = json.loads( + check_sbd(node_communicator, node, watchdog, device_list) + ) if not data["sbd"]["installed"]: report_list.append(reports.sbd_not_installed(node.label)) if not data["watchdog"]["exist"]: report_list.append(reports.watchdog_not_found(node.label, watchdog)) - except (ValueError, KeyError): + for device in data.get("device_list", []): + if not device["exist"]: + report_list.append(reports.sbd_device_does_not_exist( + device["path"], node.label + )) + elif not device["block_device"]: + report_list.append(reports.sbd_device_is_not_block_device( + device["path"], node.label + )) + # TODO maybe we can check whether device is initialized by sbd (by + # running 'sbd -d dump;') except (ValueError, KeyError, TypeError): raise LibraryError(reports.invalid_response_format(node.label)) if report_list:
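For orientation, the remote/check_sbd response consumed above is JSON of roughly this shape, reconstructed from the keys the code reads (the exact payload produced by the remote side may contain more fields):

    import json

    # Shape inferred from data["sbd"]["installed"], data["watchdog"]["exist"]
    # and the per-device "exist"/"block_device" checks above.
    response = json.loads("""
    {
        "sbd": {"installed": true},
        "watchdog": {"exist": true},
        "device_list": [
            {"path": "/dev/sdb1", "exist": true, "block_device": true}
        ]
    }
    """)
    assert response["sbd"]["installed"]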
@@ -164,25 +196,29 @@ report_processor.process(reports.sbd_check_success(node.label)) -def check_sbd_on_all_nodes(report_processor, node_communicator, nodes_watchdog): +def check_sbd_on_all_nodes(report_processor, node_communicator, nodes_data): """ Checks SBD (if SBD is installed and watchdog exists) on all NodeAddresses - defined as keys in data. + defined as keys in nodes_data. Raises LibraryError with all ReportItems in case of any failure. report_processor -- node_communicator -- NodeCommunicator - nodes_watchdog -- dictionary with NodeAddresses as keys and watchdog path - as value + nodes_data -- dictionary with NodeAddresses as keys and dict (with keys + 'watchdog' and 'device_list') as value """ report_processor.process(reports.sbd_check_started()) - _run_parallel_and_raise_lib_error_on_failure( - check_sbd_on_node, - [ - ([report_processor, node_communicator, node, watchdog], {}) - for node, watchdog in sorted(nodes_watchdog.items()) - ] - ) + data_list = [] + for node, data in sorted(nodes_data.items()): + data_list.append(( + [ + report_processor, node_communicator, node, data["watchdog"], + data["device_list"] + ], + {} + )) + + _run_parallel_and_raise_lib_error_on_failure(check_sbd_on_node, data_list) def set_sbd_config(communicator, node, config): @@ -201,7 +237,8 @@ def set_sbd_config_on_node( - report_processor, node_communicator, node, config, watchdog + report_processor, node_communicator, node, config, watchdog, + device_list=None ): """ Send SBD configuration to 'node' with specified watchdog set. Also puts @@ -212,11 +249,14 @@ node -- NodeAddresses config -- dictionary in format: <option>: <value> watchdog -- path to watchdog device + device_list -- list of strings """ config = dict(config) config["SBD_OPTS"] = '"-n {node_name}"'.format(node_name=node.label) if watchdog: config["SBD_WATCHDOG_DEV"] = watchdog + if device_list: + config["SBD_DEVICE"] = '"{0}"'.format(";".join(device_list)) set_sbd_config(node_communicator, node, dict_to_environment_file(config)) report_processor.process( reports.sbd_config_accepted_by_node(node.label) @@ -224,7 +264,8 @@ def set_sbd_config_on_all_nodes( - report_processor, node_communicator, node_list, config, watchdog_dict + report_processor, node_communicator, node_list, config, watchdog_dict, + device_dict ): """ Send SBD configuration 'config' to all nodes in 'node_list'. Option @@ -237,6 +278,8 @@ config -- dictionary in format: <option>: <value> watchdog_dict -- dictionary of watchdogs where key is NodeAddresses object and value is path to watchdog + device_dict -- dictionary with NodeAddresses as keys and lists of devices + as values """ report_processor.process(reports.sbd_config_distribution_started()) _run_parallel_and_raise_lib_error_on_failure( @@ -245,7 +288,7 @@ ( [ report_processor, node_communicator, node, config, - watchdog_dict.get(node) + watchdog_dict.get(node), device_dict.get(node) ], {} ) @@ -412,7 +455,7 @@ return { "SBD_DELAY_START": "no", "SBD_PACEMAKER": "yes", - "SBD_STARTMODE": "clean", + "SBD_STARTMODE": "always", "SBD_WATCHDOG_DEV": settings.sbd_watchdog_default, "SBD_WATCHDOG_TIMEOUT": "5" } @@ -468,3 +511,114 @@ """ return external.is_service_installed(runner, get_sbd_service_name()) + +def initialize_block_devices( + report_processor, cmd_runner, device_list, option_dict +): + """ + Initialize devices with specified options in option_dict. + Raise LibraryError on failure.
+ + report_processor -- report processor + cmd_runner -- CommandRunner + device_list -- list of strings + option_dict -- dictionary of options and their values + """ + report_processor.process( + reports.sbd_device_initialization_started(device_list) + ) + + cmd = [settings.sbd_binary] + for device in device_list: + cmd += ["-d", device] + + for option, value in sorted(option_dict.items()): + cmd += [DEVICE_INITIALIZATION_OPTIONS_MAPPING[option], str(value)] + + cmd.append("create") + _, std_err, ret_val = cmd_runner.run(cmd) + if ret_val != 0: + raise LibraryError( + reports.sbd_device_initialization_error(device_list, std_err) + ) + report_processor.process( + reports.sbd_device_initialization_success(device_list) + ) + + +def get_local_sbd_device_list(): + """ + Returns list of devices specified in local SBD config + """ + if not path.exists(settings.sbd_config): + return [] + + cfg = environment_file_to_dict(get_local_sbd_config()) + if "SBD_DEVICE" not in cfg: + return [] + devices = cfg["SBD_DEVICE"] + if devices.startswith('"') and devices.endswith('"'): + devices = devices[1:-1] + return [ + device.strip() + for device in devices.split(";") if device.strip() + ] + + +def is_device_set_local(): + """ + Returns True if there is at least one device specified in local SBD config, + False otherwise. + """ + return len(get_local_sbd_device_list()) > 0 + + +def get_device_messages_info(cmd_runner, device): + """ + Returns info about messages (string) stored on specified SBD device. + + cmd_runner -- CommandRunner + device -- string + """ + std_out, dummy_std_err, ret_val = cmd_runner.run( + [settings.sbd_binary, "-d", device, "list"] + ) + if ret_val != 0: + # sbd writes error message into std_out + raise LibraryError(reports.sbd_device_list_error(device, std_out)) + return std_out + + +def get_device_sbd_header_dump(cmd_runner, device): + """ + Returns header dump (string) of specified SBD device. + + cmd_runner -- CommandRunner + device -- string + """ + std_out, dummy_std_err, ret_val = cmd_runner.run( + [settings.sbd_binary, "-d", device, "dump"] + ) + if ret_val != 0: + # sbd writes error message into std_out + raise LibraryError(reports.sbd_device_dump_error(device, std_out)) + return std_out + + +def set_message(cmd_runner, device, node_name, message): + """ + Set message of specified type 'message' on SBD device for node. 
+ + cmd_runner -- CommandRunner + device -- string, device path + node_name -- string, name of node for which message should be set + message -- string, message type + """ + dummy_std_out, std_err, ret_val = cmd_runner.run( + [settings.sbd_binary, "-d", device, "message", node_name, message] + ) + if ret_val != 0: + raise LibraryError(reports.sbd_device_message_error( + device, node_name, message, std_err + )) + diff -Nru pcs-0.9.155+dfsg/pcs/lib/test/test_env_file.py pcs-0.9.159/pcs/lib/test/test_env_file.py --- pcs-0.9.155+dfsg/pcs/lib/test/test_env_file.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/test/test_env_file.py 2017-06-30 15:33:01.000000000 +0000 @@ -8,8 +8,9 @@ from pcs.test.tools.pcs_unittest import TestCase from pcs.common import report_codes -from pcs.lib.env_file import RealFile, GhostFile +from pcs.lib import env_file from pcs.lib.errors import ReportItemSeverity as severities +from pcs.test.tools.misc import create_patcher from pcs.test.tools.assertions import( assert_raise_library_error, assert_report_item_list_equal @@ -18,10 +19,29 @@ from pcs.test.tools.pcs_unittest import mock +patch_env_file = create_patcher(env_file) + +FILE_PATH = "/path/to.file" +MISSING_PATH = "/no/existing/file.path" +CONF_PATH = "/etc/booth/some-name.conf" + +class GhostFileInit(TestCase): + def test_is_not_binary_default(self): + ghost_file = env_file.GhostFile("some role", content=None) + self.assertFalse(ghost_file.export()["is_binary"]) + + def test_accepts_is_binary_attribute(self): + ghost_file = env_file.GhostFile( + "some role", + content=None, + is_binary=True + ) + self.assertTrue(ghost_file.export()["is_binary"]) + class GhostFileReadTest(TestCase): def test_raises_when_trying_read_nonexistent_file(self): assert_raise_library_error( - lambda: GhostFile("some role", content=None).read(), + lambda: env_file.GhostFile("some role", content=None).read(), ( severities.ERROR, report_codes.FILE_DOES_NOT_EXIST, @@ -31,10 +51,31 @@ ), ) -@mock.patch("pcs.lib.env_file.os.path.exists", return_value=True) +class GhostFileExists(TestCase): + def test_return_true_if_file_exists(self): + self.assertTrue(env_file.GhostFile("some_role", "any content").exists) + + def test_return_false_if_file_does_not_exist(self): + self.assertFalse(env_file.GhostFile("some_role").exists) + + def test_return_true_after_write(self): + ghost_file = env_file.GhostFile("some_role") + ghost_file.write("any content") + self.assertTrue(ghost_file.exists) + +class RealFileExists(TestCase): + @patch_env_file("os.path.exists", return_value=True) + def test_return_true_if_file_exists(self, exists): + self.assertTrue(env_file.RealFile("some role", FILE_PATH).exists) + + @patch_env_file("os.path.exists", return_value=False) + def test_return_false_if_file_does_not_exist(self, exists): + self.assertFalse(env_file.RealFile("some role", FILE_PATH).exists) + +@patch_env_file("os.path.exists", return_value=True) class RealFileAssertNoConflictWithExistingTest(TestCase): def check(self, report_processor, can_overwrite_existing=False): - real_file = RealFile("some role", "/etc/booth/some-name.conf") + real_file = env_file.RealFile("some role", CONF_PATH) real_file.assert_no_conflict_with_existing( report_processor, can_overwrite_existing @@ -53,7 +94,7 @@ severities.ERROR, report_codes.FILE_ALREADY_EXISTS, { - "file_path": "/etc/booth/some-name.conf" + "file_path": CONF_PATH }, report_codes.FORCE_FILE_OVERWRITE, ), @@ -66,7 +107,7 @@ severities.WARNING, report_codes.FILE_ALREADY_EXISTS, { - "file_path":
"/etc/booth/some-name.conf" + "file_path": CONF_PATH }, )]) @@ -74,88 +115,93 @@ def test_success_write_content_to_path(self): mock_open = mock.mock_open() mock_file_operation = mock.Mock() - with mock.patch("pcs.lib.env_file.open", mock_open, create=True): - RealFile("some role", "/etc/booth/some-name.conf").write( + with patch_env_file("open", mock_open, create=True): + env_file.RealFile("some role", CONF_PATH).write( "config content", file_operation=mock_file_operation ) - mock_open.assert_called_once_with("/etc/booth/some-name.conf", "w") + mock_open.assert_called_once_with(CONF_PATH, "w") mock_open().write.assert_called_once_with("config content") - mock_file_operation.assert_called_once_with( - "/etc/booth/some-name.conf" - ) + mock_file_operation.assert_called_once_with(CONF_PATH) def test_success_binary(self): mock_open = mock.mock_open() mock_file_operation = mock.Mock() - with mock.patch("pcs.lib.env_file.open", mock_open, create=True): - RealFile("some role", "/etc/booth/some-name.conf").write( + with patch_env_file("open", mock_open, create=True): + env_file.RealFile("some role", CONF_PATH, is_binary=True).write( "config content".encode("utf-8"), file_operation=mock_file_operation, - is_binary=True ) - mock_open.assert_called_once_with("/etc/booth/some-name.conf", "wb") + mock_open.assert_called_once_with(CONF_PATH, "wb") mock_open().write.assert_called_once_with( "config content".encode("utf-8") ) - mock_file_operation.assert_called_once_with( - "/etc/booth/some-name.conf" - ) + mock_file_operation.assert_called_once_with(CONF_PATH) def test_raises_when_could_not_write(self): assert_raise_library_error( lambda: - RealFile("some role", "/no/existing/file.path").write(["content"]), + env_file.RealFile("some role", MISSING_PATH).write(["content"]), ( severities.ERROR, report_codes.FILE_IO_ERROR, { "reason": - "No such file or directory: '/no/existing/file.path'" + "No such file or directory: '{0}'".format(MISSING_PATH) , } ) ) class RealFileReadTest(TestCase): - def test_success_read_content_from_file(self): + def assert_read_in_correct_mode(self, real_file, mode): mock_open = mock.mock_open() - with mock.patch("pcs.lib.env_file.open", mock_open, create=True): + with patch_env_file("open", mock_open, create=True): mock_open().read.return_value = "test booth\nconfig" - self.assertEqual( - "test booth\nconfig", - RealFile("some role", "/path/to.file").read() - ) + self.assertEqual("test booth\nconfig", real_file.read()) + mock_open.assert_has_calls([mock.call(FILE_PATH, mode)]) + + def test_success_read_content_from_file(self): + self.assert_read_in_correct_mode( + env_file.RealFile("some role", FILE_PATH, is_binary=False), + mode="r" + ) + + def test_success_read_content_from_binary_file(self): + self.assert_read_in_correct_mode( + env_file.RealFile("some role", FILE_PATH, is_binary=True), + mode="rb" + ) def test_raises_when_could_not_read(self): assert_raise_library_error( - lambda: RealFile("some role", "/no/existing/file.path").read(), + lambda: env_file.RealFile("some role", MISSING_PATH).read(), ( severities.ERROR, report_codes.FILE_IO_ERROR, { "reason": - "No such file or directory: '/no/existing/file.path'" + "No such file or directory: '{0}'".format(MISSING_PATH) , } ) ) class RealFileRemoveTest(TestCase): - @mock.patch("pcs.lib.env_file.os.remove") - @mock.patch("pcs.lib.env_file.os.path.exists", return_value=True) + @patch_env_file("os.remove") + @patch_env_file("os.path.exists", return_value=True) def test_success_remove_file(self, _, mock_remove): - RealFile("some 
role", "/path/to.file").remove() - mock_remove.assert_called_once_with("/path/to.file") + env_file.RealFile("some role", FILE_PATH).remove() + mock_remove.assert_called_once_with(FILE_PATH) - @mock.patch( - "pcs.lib.env_file.os.remove", - side_effect=EnvironmentError(1, "mock remove failed", "/path/to.file") + @patch_env_file( + "os.remove", + side_effect=EnvironmentError(1, "mock remove failed", FILE_PATH) ) - @mock.patch("pcs.lib.env_file.os.path.exists", return_value=True) + @patch_env_file("os.path.exists", return_value=True) def test_raise_library_error_when_remove_failed(self, _, dummy): assert_raise_library_error( - lambda: RealFile("some role", "/path/to.file").remove(), + lambda: env_file.RealFile("some role", FILE_PATH).remove(), ( severities.ERROR, report_codes.FILE_IO_ERROR, @@ -167,10 +213,10 @@ ) ) - @mock.patch("pcs.lib.env_file.os.path.exists", return_value=False) + @patch_env_file("os.path.exists", return_value=False) def test_existence_is_required(self, _): assert_raise_library_error( - lambda: RealFile("some role", "/path/to.file").remove(), + lambda: env_file.RealFile("some role", FILE_PATH).remove(), ( severities.ERROR, report_codes.FILE_IO_ERROR, @@ -182,6 +228,8 @@ ) ) - @mock.patch("pcs.lib.env_file.os.path.exists", return_value=False) + @patch_env_file("os.path.exists", return_value=False) def test_noexistent_can_be_silenced(self, _): - RealFile("some role", "/path/to.file").remove(silence_no_existence=True) + env_file.RealFile("some role", FILE_PATH).remove( + silence_no_existence=True + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/test/test_env.py pcs-0.9.159/pcs/lib/test/test_env.py --- pcs-0.9.155+dfsg/pcs/lib/test/test_env.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/test/test_env.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,761 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from pcs.test.tools.pcs_unittest import TestCase +import logging +from functools import partial +from lxml import etree + +from pcs.test.tools.assertions import ( + assert_raise_library_error, + assert_xml_equal, + assert_report_item_list_equal, +) +from pcs.test.tools.custom_mock import MockLibraryReportProcessor +from pcs.test.tools.misc import get_test_resource as rc, create_patcher +from pcs.test.tools.pcs_unittest import mock + +from pcs.lib.env import LibraryEnvironment +from pcs.common import report_codes +from pcs.lib import reports +from pcs.lib.cluster_conf_facade import ClusterConfFacade +from pcs.lib.corosync.config_facade import ConfigFacade as CorosyncConfigFacade +from pcs.lib.errors import ( + LibraryError, + ReportItemSeverity as severity, +) + + +patch_env = create_patcher("pcs.lib.env") +patch_env_object = partial(mock.patch.object, LibraryEnvironment) + +class LibraryEnvironmentTest(TestCase): + def setUp(self): + self.mock_logger = mock.MagicMock(logging.Logger) + self.mock_reporter = MockLibraryReportProcessor() + + def test_logger(self): + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + self.assertEqual(self.mock_logger, env.logger) + + def test_report_processor(self): + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + self.assertEqual(self.mock_reporter, env.report_processor) + + def test_user_set(self): + user = "testuser" + env = LibraryEnvironment( + self.mock_logger, + self.mock_reporter, + user_login=user + ) + self.assertEqual(user, env.user_login) + + def test_user_not_set(self): + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + 
self.assertEqual(None, env.user_login) + + def test_usergroups_set(self): + groups = ["some", "group"] + env = LibraryEnvironment( + self.mock_logger, + self.mock_reporter, + user_groups=groups + ) + self.assertEqual(groups, env.user_groups) + + def test_usergroups_not_set(self): + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + self.assertEqual([], env.user_groups) + + @patch_env("is_cman_cluster") + def test_is_cman_cluster(self, mock_is_cman): + mock_is_cman.return_value = True + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + self.assertTrue(env.is_cman_cluster) + self.assertTrue(env.is_cman_cluster) + self.assertEqual(1, mock_is_cman.call_count) + + @patch_env("replace_cib_configuration_xml") + @patch_env("get_cib_xml") + def test_cib_set(self, mock_get_cib, mock_push_cib): + cib_data = "test cib data" + new_cib_data = "new test cib data" + env = LibraryEnvironment( + self.mock_logger, + self.mock_reporter, + cib_data=cib_data + ) + + self.assertFalse(env.is_cib_live) + + self.assertEqual(cib_data, env._get_cib_xml()) + self.assertEqual(0, mock_get_cib.call_count) + + env._push_cib_xml(new_cib_data) + self.assertEqual(0, mock_push_cib.call_count) + + self.assertEqual(new_cib_data, env._get_cib_xml()) + self.assertEqual(0, mock_get_cib.call_count) + + @patch_env("replace_cib_configuration_xml") + @patch_env("get_cib_xml") + def test_cib_not_set(self, mock_get_cib, mock_push_cib): + cib_data = "test cib data" + new_cib_data = "new test cib data" + mock_get_cib.return_value = cib_data + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + + self.assertTrue(env.is_cib_live) + + self.assertEqual(cib_data, env._get_cib_xml()) + self.assertEqual(1, mock_get_cib.call_count) + + env._push_cib_xml(new_cib_data) + self.assertEqual(1, mock_push_cib.call_count) + + @patch_env("ensure_cib_version") + @patch_env("get_cib_xml") + def test_get_cib_no_version_live( + self, mock_get_cib_xml, mock_ensure_cib_version + ): + mock_get_cib_xml.return_value = '<cib/>' + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + assert_xml_equal('<cib/>', etree.tostring(env.get_cib()).decode()) + self.assertEqual(1, mock_get_cib_xml.call_count) + self.assertEqual(0, mock_ensure_cib_version.call_count) + self.assertFalse(env.cib_upgraded) + + @patch_env("ensure_cib_version") + @patch_env("get_cib_xml") + def test_get_cib_upgrade_live( + self, mock_get_cib_xml, mock_ensure_cib_version + ): + mock_get_cib_xml.return_value = '<old_cib/>' + mock_ensure_cib_version.return_value = etree.XML('<new_cib/>') + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + assert_xml_equal( + '<new_cib/>', etree.tostring(env.get_cib((1, 2, 3))).decode() + ) + self.assertEqual(1, mock_get_cib_xml.call_count) + self.assertEqual(1, mock_ensure_cib_version.call_count) + assert_report_item_list_equal( + env.report_processor.report_item_list, + [( + severity.INFO, + report_codes.CIB_UPGRADE_SUCCESSFUL, + {} + )] + ) + self.assertTrue(env.cib_upgraded) + + @patch_env("ensure_cib_version") + @patch_env("get_cib_xml") + def test_get_cib_no_upgrade_live( + self, mock_get_cib_xml, mock_ensure_cib_version + ): + mock_get_cib_xml.return_value = '<cib/>' + mock_ensure_cib_version.return_value = None + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + assert_xml_equal( + '<cib/>', etree.tostring(env.get_cib((1, 2, 3))).decode() + ) + self.assertEqual(1, mock_get_cib_xml.call_count) + self.assertEqual(1, mock_ensure_cib_version.call_count) + self.assertFalse(env.cib_upgraded) +
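These tests exercise the live vs. in-memory ("ghost") CIB paths of LibraryEnvironment; a minimal sketch of the same distinction, assuming pcs 0.9.159 is importable (the '<cib/>' string is just a stand-in document):

    import logging
    from pcs.lib.env import LibraryEnvironment
    from pcs.test.tools.custom_mock import MockLibraryReportProcessor

    env = LibraryEnvironment(
        logging.getLogger("pcs"),
        MockLibraryReportProcessor(),
        cib_data="<cib/>",  # passing cib_data makes the environment non-live
    )
    print(env.is_cib_live)  # False: CIB reads and writes stay in memory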
@patch_env("get_cib_xml") + def test_get_cib_no_version_file( + self, mock_get_cib_xml, mock_ensure_cib_version + ): + env = LibraryEnvironment( + self.mock_logger, self.mock_reporter, cib_data='' + ) + assert_xml_equal('', etree.tostring(env.get_cib()).decode()) + self.assertEqual(0, mock_get_cib_xml.call_count) + self.assertEqual(0, mock_ensure_cib_version.call_count) + self.assertFalse(env.cib_upgraded) + + @patch_env("ensure_cib_version") + @patch_env("get_cib_xml") + def test_get_cib_upgrade_file( + self, mock_get_cib_xml, mock_ensure_cib_version + ): + mock_ensure_cib_version.return_value = etree.XML('') + env = LibraryEnvironment( + self.mock_logger, self.mock_reporter, cib_data='' + ) + assert_xml_equal( + '', etree.tostring(env.get_cib((1, 2, 3))).decode() + ) + self.assertEqual(0, mock_get_cib_xml.call_count) + self.assertEqual(1, mock_ensure_cib_version.call_count) + self.assertTrue(env.cib_upgraded) + + @patch_env("ensure_cib_version") + @patch_env("get_cib_xml") + def test_get_cib_no_upgrade_file( + self, mock_get_cib_xml, mock_ensure_cib_version + ): + mock_ensure_cib_version.return_value = None + env = LibraryEnvironment( + self.mock_logger, self.mock_reporter, cib_data='' + ) + assert_xml_equal( + '', etree.tostring(env.get_cib((1, 2, 3))).decode() + ) + self.assertEqual(0, mock_get_cib_xml.call_count) + self.assertEqual(1, mock_ensure_cib_version.call_count) + self.assertFalse(env.cib_upgraded) + + @patch_env("replace_cib_configuration_xml") + @mock.patch.object( + LibraryEnvironment, + "cmd_runner", + lambda self: "mock cmd runner" + ) + def test_push_cib_not_upgraded_live(self, mock_replace_cib): + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + env.push_cib(etree.XML('')) + mock_replace_cib.assert_called_once_with( + "mock cmd runner", + '' + ) + self.assertEqual([], env.report_processor.report_item_list) + + @patch_env("replace_cib_configuration_xml") + @mock.patch.object( + LibraryEnvironment, + "cmd_runner", + lambda self: "mock cmd runner" + ) + def test_push_cib_upgraded_live(self, mock_replace_cib): + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + env._cib_upgraded = True + env.push_cib(etree.XML('')) + mock_replace_cib.assert_called_once_with( + "mock cmd runner", + '' + ) + self.assertFalse(env.cib_upgraded) + + @patch_env("qdevice_reload_on_nodes") + @patch_env("check_corosync_offline_on_nodes") + @patch_env("reload_corosync_config") + @patch_env("distribute_corosync_conf") + @patch_env("get_local_corosync_conf") + @mock.patch.object( + LibraryEnvironment, + "node_communicator", + lambda self: "mock node communicator" + ) + @mock.patch.object( + LibraryEnvironment, + "cmd_runner", + lambda self: "mock cmd runner" + ) + def test_corosync_conf_set( + self, mock_get_corosync, mock_distribute, mock_reload, + mock_check_offline, mock_qdevice_reload + ): + corosync_data = "totem {\n version: 2\n}\n" + new_corosync_data = "totem {\n version: 3\n}\n" + env = LibraryEnvironment( + self.mock_logger, + self.mock_reporter, + corosync_conf_data=corosync_data + ) + + self.assertFalse(env.is_corosync_conf_live) + + self.assertEqual(corosync_data, env.get_corosync_conf_data()) + self.assertEqual(corosync_data, env.get_corosync_conf().config.export()) + self.assertEqual(0, mock_get_corosync.call_count) + + env.push_corosync_conf( + CorosyncConfigFacade.from_string(new_corosync_data) + ) + self.assertEqual(0, mock_distribute.call_count) + + self.assertEqual(new_corosync_data, env.get_corosync_conf_data()) + self.assertEqual(0, 
mock_get_corosync.call_count) + mock_check_offline.assert_not_called() + mock_reload.assert_not_called() + mock_qdevice_reload.assert_not_called() + + @patch_env("qdevice_reload_on_nodes") + @patch_env("reload_corosync_config") + @patch_env("is_service_running") + @patch_env("distribute_corosync_conf") + @patch_env("get_local_corosync_conf") + @mock.patch.object( + CorosyncConfigFacade, + "get_nodes", + lambda self: "mock node list" + ) + @mock.patch.object( + LibraryEnvironment, + "node_communicator", + lambda self: "mock node communicator" + ) + @mock.patch.object( + LibraryEnvironment, + "cmd_runner", + lambda self: "mock cmd runner" + ) + def test_corosync_conf_not_set_online( + self, mock_get_corosync, mock_distribute, mock_is_running, mock_reload, + mock_qdevice_reload + ): + corosync_data = open(rc("corosync.conf")).read() + new_corosync_data = corosync_data.replace("version: 2", "version: 3") + mock_get_corosync.return_value = corosync_data + mock_is_running.return_value = True + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + + self.assertTrue(env.is_corosync_conf_live) + + self.assertEqual(corosync_data, env.get_corosync_conf_data()) + self.assertEqual(corosync_data, env.get_corosync_conf().config.export()) + self.assertEqual(2, mock_get_corosync.call_count) + + env.push_corosync_conf( + CorosyncConfigFacade.from_string(new_corosync_data) + ) + mock_distribute.assert_called_once_with( + "mock node communicator", + self.mock_reporter, + "mock node list", + new_corosync_data, + False + ) + mock_is_running.assert_called_once_with("mock cmd runner", "corosync") + mock_reload.assert_called_once_with("mock cmd runner") + mock_qdevice_reload.assert_not_called() + + @patch_env("qdevice_reload_on_nodes") + @patch_env("reload_corosync_config") + @patch_env("is_service_running") + @patch_env("distribute_corosync_conf") + @patch_env("get_local_corosync_conf") + @mock.patch.object( + CorosyncConfigFacade, + "get_nodes", + lambda self: "mock node list" + ) + @mock.patch.object( + LibraryEnvironment, + "node_communicator", + lambda self: "mock node communicator" + ) + @mock.patch.object( + LibraryEnvironment, + "cmd_runner", + lambda self: "mock cmd runner" + ) + def test_corosync_conf_not_set_offline( + self, mock_get_corosync, mock_distribute, mock_is_running, mock_reload, + mock_qdevice_reload + ): + corosync_data = open(rc("corosync.conf")).read() + new_corosync_data = corosync_data.replace("version: 2", "version: 3") + mock_get_corosync.return_value = corosync_data + mock_is_running.return_value = False + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + + self.assertTrue(env.is_corosync_conf_live) + + self.assertEqual(corosync_data, env.get_corosync_conf_data()) + self.assertEqual(corosync_data, env.get_corosync_conf().config.export()) + self.assertEqual(2, mock_get_corosync.call_count) + + env.push_corosync_conf( + CorosyncConfigFacade.from_string(new_corosync_data) + ) + mock_distribute.assert_called_once_with( + "mock node communicator", + self.mock_reporter, + "mock node list", + new_corosync_data, + False + ) + mock_is_running.assert_called_once_with("mock cmd runner", "corosync") + mock_reload.assert_not_called() + mock_qdevice_reload.assert_not_called() + + @patch_env("qdevice_reload_on_nodes") + @patch_env("check_corosync_offline_on_nodes") + @patch_env("reload_corosync_config") + @patch_env("is_service_running") + @patch_env("distribute_corosync_conf") + @patch_env("get_local_corosync_conf") + @mock.patch.object( + CorosyncConfigFacade, + 
"get_nodes", + lambda self: "mock node list" + ) + @mock.patch.object( + LibraryEnvironment, + "node_communicator", + lambda self: "mock node communicator" + ) + @mock.patch.object( + LibraryEnvironment, + "cmd_runner", + lambda self: "mock cmd runner" + ) + def test_corosync_conf_not_set_need_qdevice_reload_success( + self, mock_get_corosync, mock_distribute, mock_is_running, mock_reload, + mock_check_offline, mock_qdevice_reload + ): + corosync_data = open(rc("corosync.conf")).read() + new_corosync_data = corosync_data.replace("version: 2", "version: 3") + mock_get_corosync.return_value = corosync_data + mock_is_running.return_value = True + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + + self.assertTrue(env.is_corosync_conf_live) + + self.assertEqual(corosync_data, env.get_corosync_conf_data()) + self.assertEqual(corosync_data, env.get_corosync_conf().config.export()) + self.assertEqual(2, mock_get_corosync.call_count) + + conf_facade = CorosyncConfigFacade.from_string(new_corosync_data) + conf_facade._need_qdevice_reload = True + env.push_corosync_conf(conf_facade) + mock_check_offline.assert_not_called() + mock_distribute.assert_called_once_with( + "mock node communicator", + self.mock_reporter, + "mock node list", + new_corosync_data, + False + ) + mock_reload.assert_called_once_with("mock cmd runner") + mock_qdevice_reload.assert_called_once_with( + "mock node communicator", + self.mock_reporter, + "mock node list", + False + ) + + @patch_env("qdevice_reload_on_nodes") + @patch_env("check_corosync_offline_on_nodes") + @patch_env("reload_corosync_config") + @patch_env("is_service_running") + @patch_env("distribute_corosync_conf") + @patch_env("get_local_corosync_conf") + @mock.patch.object( + CorosyncConfigFacade, + "get_nodes", + lambda self: "mock node list" + ) + @mock.patch.object( + LibraryEnvironment, + "node_communicator", + lambda self: "mock node communicator" + ) + def test_corosync_conf_not_set_need_offline_success( + self, mock_get_corosync, mock_distribute, mock_is_running, mock_reload, + mock_check_offline, mock_qdevice_reload + ): + corosync_data = open(rc("corosync.conf")).read() + new_corosync_data = corosync_data.replace("version: 2", "version: 3") + mock_get_corosync.return_value = corosync_data + mock_is_running.return_value = False + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + + self.assertTrue(env.is_corosync_conf_live) + + self.assertEqual(corosync_data, env.get_corosync_conf_data()) + self.assertEqual(corosync_data, env.get_corosync_conf().config.export()) + self.assertEqual(2, mock_get_corosync.call_count) + + conf_facade = CorosyncConfigFacade.from_string(new_corosync_data) + conf_facade._need_stopped_cluster = True + env.push_corosync_conf(conf_facade) + mock_check_offline.assert_called_once_with( + "mock node communicator", + self.mock_reporter, + "mock node list", + False + ) + mock_distribute.assert_called_once_with( + "mock node communicator", + self.mock_reporter, + "mock node list", + new_corosync_data, + False + ) + mock_reload.assert_not_called() + mock_qdevice_reload.assert_not_called() + + @patch_env("qdevice_reload_on_nodes") + @patch_env("check_corosync_offline_on_nodes") + @patch_env("reload_corosync_config") + @patch_env("distribute_corosync_conf") + @patch_env("get_local_corosync_conf") + @mock.patch.object( + CorosyncConfigFacade, + "get_nodes", + lambda self: "mock node list" + ) + @mock.patch.object( + LibraryEnvironment, + "node_communicator", + lambda self: "mock node communicator" + ) + def 
test_corosync_conf_not_set_need_offline_fail( + self, mock_get_corosync, mock_distribute, mock_reload, + mock_check_offline, mock_qdevice_reload + ): + corosync_data = open(rc("corosync.conf")).read() + new_corosync_data = corosync_data.replace("version: 2", "version: 3") + mock_get_corosync.return_value = corosync_data + def raiser(dummy_communicator, dummy_reporter, dummy_nodes, dummy_force): + raise LibraryError( + reports.corosync_not_running_check_node_error("test node") + ) + mock_check_offline.side_effect = raiser + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + + self.assertTrue(env.is_corosync_conf_live) + + self.assertEqual(corosync_data, env.get_corosync_conf_data()) + self.assertEqual(corosync_data, env.get_corosync_conf().config.export()) + self.assertEqual(2, mock_get_corosync.call_count) + + conf_facade = CorosyncConfigFacade.from_string(new_corosync_data) + conf_facade._need_stopped_cluster = True + assert_raise_library_error( + lambda: env.push_corosync_conf(conf_facade), + ( + severity.ERROR, + report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, + {"node": "test node"} + ) + ) + mock_check_offline.assert_called_once_with( + "mock node communicator", + self.mock_reporter, + "mock node list", + False + ) + mock_distribute.assert_not_called() + mock_reload.assert_not_called() + mock_qdevice_reload.assert_not_called() + + @patch_env("NodeCommunicator") + def test_node_communicator_no_options(self, mock_comm): + expected_comm = mock.MagicMock() + mock_comm.return_value = expected_comm + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + comm = env.node_communicator() + self.assertEqual(expected_comm, comm) + mock_comm.assert_called_once_with( + self.mock_logger, + self.mock_reporter, + {}, + None, + [], + None + ) + + @patch_env("NodeCommunicator") + def test_node_communicator_all_options(self, mock_comm): + expected_comm = mock.MagicMock() + mock_comm.return_value = expected_comm + user = "testuser" + groups = ["some", "group"] + tokens = {"node": "token"} + timeout = 10 + env = LibraryEnvironment( + self.mock_logger, + self.mock_reporter, + user_login=user, + user_groups=groups, + auth_tokens_getter=lambda:tokens, + request_timeout=timeout + ) + comm = env.node_communicator() + self.assertEqual(expected_comm, comm) + mock_comm.assert_called_once_with( + self.mock_logger, + self.mock_reporter, + tokens, + user, + groups, + timeout + ) + + @patch_env("get_local_cluster_conf") + def test_get_cluster_conf_live(self, mock_get_local_cluster_conf): + env = LibraryEnvironment( + self.mock_logger, self.mock_reporter, cluster_conf_data=None + ) + mock_get_local_cluster_conf.return_value = "cluster.conf data" + self.assertEqual("cluster.conf data", env.get_cluster_conf_data()) + mock_get_local_cluster_conf.assert_called_once_with() + + @patch_env("get_local_cluster_conf") + def test_get_cluster_conf_not_live(self, mock_get_local_cluster_conf): + env = LibraryEnvironment( + self.mock_logger, self.mock_reporter, cluster_conf_data="data" + ) + self.assertEqual("data", env.get_cluster_conf_data()) + self.assertEqual(0, mock_get_local_cluster_conf.call_count) + + @mock.patch.object( + LibraryEnvironment, + "get_cluster_conf_data", + lambda self: "" + ) + def test_get_cluster_conf(self): + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + facade_obj = env.get_cluster_conf() + self.assertTrue(isinstance(facade_obj, ClusterConfFacade)) + assert_xml_equal( + '', etree.tostring(facade_obj._config).decode() + ) + + def 
test_is_cluster_conf_live_live(self): + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + self.assertTrue(env.is_cluster_conf_live) + + def test_is_cluster_conf_live_not_live(self): + env = LibraryEnvironment( + self.mock_logger, self.mock_reporter, cluster_conf_data="data" + ) + self.assertFalse(env.is_cluster_conf_live) + +@patch_env("CommandRunner") +class CmdRunner(TestCase): + def setUp(self): + self.mock_logger = mock.MagicMock(logging.Logger) + self.mock_reporter = MockLibraryReportProcessor() + + def test_no_options(self, mock_runner): + expected_runner = mock.MagicMock() + mock_runner.return_value = expected_runner + env = LibraryEnvironment(self.mock_logger, self.mock_reporter) + runner = env.cmd_runner() + self.assertEqual(expected_runner, runner) + mock_runner.assert_called_once_with( + self.mock_logger, + self.mock_reporter, + { + "LC_ALL": "C", + } + ) + + def test_user(self, mock_runner): + expected_runner = mock.MagicMock() + mock_runner.return_value = expected_runner + user = "testuser" + env = LibraryEnvironment( + self.mock_logger, + self.mock_reporter, + user_login=user + ) + runner = env.cmd_runner() + self.assertEqual(expected_runner, runner) + mock_runner.assert_called_once_with( + self.mock_logger, + self.mock_reporter, + { + "CIB_user": user, + "LC_ALL": "C", + } + ) + + @patch_env("tempfile.NamedTemporaryFile") + def test_dump_cib_file(self, mock_tmpfile, mock_runner): + expected_runner = mock.MagicMock() + mock_runner.return_value = expected_runner + mock_instance = mock.MagicMock() + mock_instance.name = rc("file.tmp") + mock_tmpfile.return_value = mock_instance + env = LibraryEnvironment( + self.mock_logger, + self.mock_reporter, + cib_data="" + ) + runner = env.cmd_runner() + self.assertEqual(expected_runner, runner) + mock_runner.assert_called_once_with( + self.mock_logger, + self.mock_reporter, + { + "LC_ALL": "C", + "CIB_file": rc("file.tmp"), + } + ) + mock_instance.write.assert_called_once_with("") + +@patch_env_object("cmd_runner", lambda self: "runner") +class EnsureValidWait(TestCase): + def setUp(self): + self.create_env = partial( + LibraryEnvironment, + mock.MagicMock(logging.Logger), + MockLibraryReportProcessor() + ) + + @property + def env_live(self): + return self.create_env() + + @property + def env_fake(self): + return self.create_env(cib_data="") + + + def test_not_raises_if_waiting_false_no_matter_if_env_is_live(self): + self.env_live.ensure_wait_satisfiable(False) + self.env_fake.ensure_wait_satisfiable(False) + + def test_raises_when_is_not_live(self): + env = self.env_fake + assert_raise_library_error( + lambda: env.ensure_wait_satisfiable(10), + ( + severity.ERROR, + report_codes.WAIT_FOR_IDLE_NOT_LIVE_CLUSTER, + {} + ) + ) + + @patch_env("get_valid_timeout_seconds") + @patch_env("ensure_wait_for_idle_support") + def test_do_checks(self, ensure_wait_for_idle_support, get_valid_timeout): + env = self.env_live + env.ensure_wait_satisfiable(10) + ensure_wait_for_idle_support.assert_called_once_with(env.cmd_runner()) + get_valid_timeout.assert_called_once_with(10) + + +@patch_env_object("cmd_runner", lambda self: "runner") +@patch_env_object("_get_wait_timeout", lambda self, wait: wait) +@patch_env_object("_push_cib_xml") +@patch_env("wait_for_idle") +class PushCib(TestCase): + def setUp(self): + self.env = LibraryEnvironment( + mock.MagicMock(logging.Logger), + MockLibraryReportProcessor() + ) + + def test_run_only_push_when_without_wait(self, wait_for_idle, push_cib_xml): + self.env.push_cib(etree.fromstring("")) + 
push_cib_xml.assert_called_once_with("") + wait_for_idle.assert_not_called() + + def test_run_wait_when_wait_specified(self, wait_for_idle, push_cib_xml): + self.env.push_cib(etree.fromstring(""), 10) + push_cib_xml.assert_called_once_with("") + wait_for_idle.assert_called_once_with(self.env.cmd_runner(), 10) diff -Nru pcs-0.9.155+dfsg/pcs/lib/test/test_errors.py pcs-0.9.159/pcs/lib/test/test_errors.py --- pcs-0.9.155+dfsg/pcs/lib/test/test_errors.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/test/test_errors.py 2017-06-30 15:33:01.000000000 +0000 @@ -7,14 +7,74 @@ from pcs.test.tools.pcs_unittest import TestCase -from pcs.lib.errors import LibraryEnvError +from pcs.lib import errors class LibraryEnvErrorTest(TestCase): def test_can_sign_solved_reports(self): - e = LibraryEnvError("first", "second", "third") + e = errors.LibraryEnvError("first", "second", "third") for report in e.args: if report == "second": e.sign_processed(report) self.assertEqual(["first", "third"], e.unprocessed) + +class ReportListAnalyzerSelectSeverities(TestCase): + def setUp(self): + self.severities = [ + errors.ReportItemSeverity.WARNING, + errors.ReportItemSeverity.INFO, + errors.ReportItemSeverity.DEBUG, + ] + + def assert_select_reports(self, all_reports, expected_errors): + self.assertEqual( + expected_errors, + errors.ReportListAnalyzer(all_reports) + .reports_with_severities(self.severities) + ) + + def test_returns_empty_on_no_reports(self): + self.assert_select_reports([], []) + + def test_returns_empty_on_reports_with_other_severities(self): + self.assert_select_reports([errors.ReportItem.error("ERR")], []) + + def test_returns_selection_of_desired_severities(self): + err = errors.ReportItem.error("ERR") + warn = errors.ReportItem.warning("WARN") + info = errors.ReportItem.info("INFO") + debug = errors.ReportItem.debug("DEBUG") + self.assert_select_reports( + [ + err, + warn, + info, + debug, + ], + [ + warn, + info, + debug, + ] + ) + +class ReportListAnalyzerErrorList(TestCase): + def assert_select_reports(self, all_reports, expected_errors): + self.assertEqual( + expected_errors, + errors.ReportListAnalyzer(all_reports).error_list + ) + + def test_returns_empty_on_no_reports(self): + self.assert_select_reports([], []) + + def test_returns_empty_on_no_errors(self): + self.assert_select_reports([errors.ReportItem.warning("WARN")], []) + + def test_returns_only_errors_on_mixed_content(self): + err = errors.ReportItem.error("ERR") + self.assert_select_reports( + [errors.ReportItem.warning("WARN"), err], + [err] + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/test/test_node_communication_format.py pcs-0.9.159/pcs/lib/test/test_node_communication_format.py --- pcs-0.9.155+dfsg/pcs/lib/test/test_node_communication_format.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/test/test_node_communication_format.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,119 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from pcs.test.tools.assertions import assert_raise_library_error +from pcs.test.tools.misc import create_setup_patch_mixin +from pcs.test.tools.pcs_unittest import TestCase + +from pcs.common import report_codes +from pcs.lib import node_communication_format +from pcs.lib.errors import ReportItemSeverity as severity + +SetupPatchMixin = create_setup_patch_mixin(node_communication_format) + +class PcmkAuthkeyFormat(TestCase, SetupPatchMixin): + def test_create_expected_dict(self): + b64encode = self.setup_patch("base64.b64encode") + 
b64encode.return_value = "encoded_content".encode() + self.assertEqual( + node_communication_format.pcmk_authkey_format("content"), + { + "data": b64encode.return_value.decode("utf-8"), + "type": "pcmk_remote_authkey", + "rewrite_existing": True, + } + ) + + +class ServiceCommandFormat(TestCase): + def test_create_expected_dict(self): + self.assertEqual( + node_communication_format.service_cmd_format("pcsd", "start"), + { + "type": "service_command", + "service": "pcsd", + "command": "start", + } + ) + +def fixture_invalid_response_format(node_label): + return ( + severity.ERROR, + report_codes.INVALID_RESPONSE_FORMAT, + { + "node": node_label + }, + None + ) + + +class ResponseToNodeActionResults(TestCase): + def setUp(self): + self.expected_keys = ["file"] + self.main_key = "files" + self.node_label = "node1" + + def assert_result_causes_invalid_format(self, result): + assert_raise_library_error( + lambda: node_communication_format.response_to_result( + result, + self.main_key, + self.expected_keys, + self.node_label, + ), + fixture_invalid_response_format(self.node_label) + ) + + def test_report_response_is_not_dict(self): + self.assert_result_causes_invalid_format("bad answer") + + def test_report_dict_without_mandatory_key(self): + self.assert_result_causes_invalid_format({}) + + def test_report_when_on_files_is_not_dict(self): + self.assert_result_causes_invalid_format({"files": True}) + + def test_report_when_on_some_result_is_not_dict(self): + self.assert_result_causes_invalid_format({ + "files": { + "file": True + } + }) + + def test_report_when_on_some_result_is_without_code(self): + self.assert_result_causes_invalid_format({ + "files": { + "file": {"message": "some_message"} + } + }) + + def test_report_when_on_some_result_is_without_message(self): + self.assert_result_causes_invalid_format({ + "files": { + "file": {"code": "some_code"} + } + }) + + def test_report_when_some_result_key_is_missing(self): + self.assert_result_causes_invalid_format({ + "files": { + } + }) + + def test_report_when_some_result_key_is_extra(self): + self.assert_result_causes_invalid_format({ + "files": { + "file": { + "code": "some_code", + "message": "some_message", + }, + "extra": { + "code": "some_extra_code", + "message": "some_extra_message", + } + } + }) diff -Nru pcs-0.9.155+dfsg/pcs/lib/test/test_nodes_task.py pcs-0.9.159/pcs/lib/test/test_nodes_task.py --- pcs-0.9.155+dfsg/pcs/lib/test/test_nodes_task.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/test/test_nodes_task.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,826 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +import json + +from pcs.test.tools.pcs_unittest import TestCase + +from pcs.test.tools.assertions import ( + assert_raise_library_error, + assert_report_item_list_equal, +) +from pcs.test.tools.custom_mock import MockLibraryReportProcessor +from pcs.test.tools.pcs_unittest import mock +from pcs.test.tools.misc import create_patcher + +from pcs.common import report_codes +from pcs.lib.external import NodeCommunicator, NodeAuthenticationException +from pcs.lib.node import NodeAddresses, NodeAddressesList +from pcs.lib.errors import ReportItemSeverity as severity + +import pcs.lib.nodes_task as lib + +patch_nodes_task = create_patcher(lib) + +class DistributeCorosyncConfTest(TestCase): + def setUp(self): + self.mock_reporter = MockLibraryReportProcessor() + self.mock_communicator = "mock node communicator" + + @patch_nodes_task("corosync_live") + def 
test_success(self, mock_corosync_live): + conf_text = "test conf text" + nodes = ["node1", "node2"] + node_addrs_list = NodeAddressesList( + [NodeAddresses(addr) for addr in nodes] + ) + mock_corosync_live.set_remote_corosync_conf = mock.MagicMock() + + lib.distribute_corosync_conf( + self.mock_communicator, + self.mock_reporter, + node_addrs_list, + conf_text + ) + + corosync_live_calls = [ + mock.call.set_remote_corosync_conf( + "mock node communicator", node_addrs_list[0], conf_text + ), + mock.call.set_remote_corosync_conf( + "mock node communicator", node_addrs_list[1], conf_text + ), + ] + self.assertEqual( + len(corosync_live_calls), + len(mock_corosync_live.mock_calls) + ) + mock_corosync_live.set_remote_corosync_conf.assert_has_calls( + corosync_live_calls, + any_order=True + ) + + assert_report_item_list_equal( + self.mock_reporter.report_item_list, + [ + ( + severity.INFO, + report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED, + {} + ), + ( + severity.INFO, + report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, + {"node": nodes[0]} + ), + ( + severity.INFO, + report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, + {"node": nodes[1]} + ), + ] + ) + + @patch_nodes_task("corosync_live") + def test_one_node_down(self, mock_corosync_live): + conf_text = "test conf text" + nodes = ["node1", "node2"] + node_addrs_list = NodeAddressesList( + [NodeAddresses(addr) for addr in nodes] + ) + mock_corosync_live.set_remote_corosync_conf = mock.MagicMock() + def raiser(comm, node, conf): + if node.ring0 == nodes[1]: + raise NodeAuthenticationException( + nodes[1], "command", "HTTP error: 401" + ) + mock_corosync_live.set_remote_corosync_conf.side_effect = raiser + + assert_raise_library_error( + lambda: lib.distribute_corosync_conf( + self.mock_communicator, + self.mock_reporter, + node_addrs_list, + conf_text + ), + ( + severity.ERROR, + report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED, + { + "node": nodes[1], + "command": "command", + "reason" : "HTTP error: 401", + }, + report_codes.SKIP_OFFLINE_NODES + ), + ( + severity.ERROR, + report_codes.COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR, + { + "node": nodes[1], + }, + report_codes.SKIP_OFFLINE_NODES + ) + ) + + corosync_live_calls = [ + mock.call.set_remote_corosync_conf( + "mock node communicator", nodes[0], conf_text + ), + mock.call.set_remote_corosync_conf( + "mock node communicator", nodes[1], conf_text + ), + ] + self.assertEqual( + len(corosync_live_calls), + len(mock_corosync_live.mock_calls) + ) + mock_corosync_live.set_remote_corosync_conf.assert_has_calls([ + mock.call("mock node communicator", node_addrs_list[0], conf_text), + mock.call("mock node communicator", node_addrs_list[1], conf_text), + ], any_order=True) + + assert_report_item_list_equal( + self.mock_reporter.report_item_list, + [ + ( + severity.INFO, + report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED, + {} + ), + ( + severity.INFO, + report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, + {"node": nodes[0]} + ), + ( + severity.ERROR, + report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED, + { + "node": nodes[1], + "command": "command", + "reason" : "HTTP error: 401", + }, + report_codes.SKIP_OFFLINE_NODES + ), + ( + severity.ERROR, + report_codes.COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR, + { + "node": nodes[1], + }, + report_codes.SKIP_OFFLINE_NODES + ) + ] + ) + + @patch_nodes_task("corosync_live") + def test_one_node_down_forced(self, mock_corosync_live): + conf_text = "test conf text" + nodes = ["node1", "node2"] + node_addrs_list = NodeAddressesList( + [NodeAddresses(addr) for addr in 
nodes] + ) + mock_corosync_live.set_remote_corosync_conf = mock.MagicMock() + def raiser(comm, node, conf): + if node.ring0 == nodes[1]: + raise NodeAuthenticationException( + nodes[1], "command", "HTTP error: 401" + ) + mock_corosync_live.set_remote_corosync_conf.side_effect = raiser + + lib.distribute_corosync_conf( + self.mock_communicator, + self.mock_reporter, + node_addrs_list, + conf_text, + skip_offline_nodes=True + ) + + corosync_live_calls = [ + mock.call.set_remote_corosync_conf( + "mock node communicator", nodes[0], conf_text + ), + mock.call.set_remote_corosync_conf( + "mock node communicator", nodes[1], conf_text + ), + ] + self.assertEqual( + len(corosync_live_calls), + len(mock_corosync_live.mock_calls) + ) + mock_corosync_live.set_remote_corosync_conf.assert_has_calls([ + mock.call("mock node communicator", node_addrs_list[0], conf_text), + mock.call("mock node communicator", node_addrs_list[1], conf_text), + ], any_order=True) + + assert_report_item_list_equal( + self.mock_reporter.report_item_list, + [ + ( + severity.INFO, + report_codes.COROSYNC_CONFIG_DISTRIBUTION_STARTED, + {} + ), + ( + severity.INFO, + report_codes.COROSYNC_CONFIG_ACCEPTED_BY_NODE, + {"node": nodes[0]} + ), + ( + severity.WARNING, + report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED, + { + "node": nodes[1], + "command": "command", + "reason" : "HTTP error: 401", + } + ), + ( + severity.WARNING, + report_codes.COROSYNC_CONFIG_DISTRIBUTION_NODE_ERROR, + { + "node": nodes[1], + } + ), + ] + ) + +class CheckCorosyncOfflineTest(TestCase): + def setUp(self): + self.mock_reporter = MockLibraryReportProcessor() + self.mock_communicator = mock.MagicMock(NodeCommunicator) + + def test_success(self): + nodes = ["node1", "node2"] + node_addrs_list = NodeAddressesList( + [NodeAddresses(addr) for addr in nodes] + ) + self.mock_communicator.call_node.return_value = '{"corosync": false}' + + lib.check_corosync_offline_on_nodes( + self.mock_communicator, + self.mock_reporter, + node_addrs_list + ) + + assert_report_item_list_equal( + self.mock_reporter.report_item_list, + [ + ( + severity.INFO, + report_codes.COROSYNC_NOT_RUNNING_CHECK_STARTED, + {} + ), + ( + severity.INFO, + report_codes.COROSYNC_NOT_RUNNING_ON_NODE, + {"node": nodes[0]} + ), + ( + severity.INFO, + report_codes.COROSYNC_NOT_RUNNING_ON_NODE, + {"node": nodes[1]} + ), + ] + ) + + def test_one_node_running(self): + node_responses = { + "node1": '{"corosync": false}', + "node2": '{"corosync": true}', + } + node_addrs_list = NodeAddressesList( + [NodeAddresses(addr) for addr in node_responses.keys()] + ) + + self.mock_communicator.call_node.side_effect = ( + lambda node, request, data: node_responses[node.label] + ) + + + assert_raise_library_error( + lambda: lib.check_corosync_offline_on_nodes( + self.mock_communicator, + self.mock_reporter, + node_addrs_list + ), + ( + severity.ERROR, + report_codes.COROSYNC_RUNNING_ON_NODE, + { + "node": "node2", + } + ) + ) + + def test_json_error(self): + nodes = ["node1", "node2"] + node_addrs_list = NodeAddressesList( + [NodeAddresses(addr) for addr in nodes] + ) + self.mock_communicator.call_node.side_effect = [ + '{}', # missing key + '{', # not valid json + ] + + assert_raise_library_error( + lambda: lib.check_corosync_offline_on_nodes( + self.mock_communicator, + self.mock_reporter, + node_addrs_list + ), + ( + severity.ERROR, + report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, + { + "node": nodes[0], + }, + report_codes.SKIP_OFFLINE_NODES + ), + ( + severity.ERROR, + 
report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, + { + "node": nodes[1], + }, + report_codes.SKIP_OFFLINE_NODES + ) + ) + + def test_node_down(self): + nodes = ["node1", "node2"] + node_addrs_list = NodeAddressesList( + [NodeAddresses(addr) for addr in nodes] + ) + def side_effect(node, request, data): + if node.ring0 == nodes[1]: + raise NodeAuthenticationException( + nodes[1], "command", "HTTP error: 401" + ) + return '{"corosync": false}' + self.mock_communicator.call_node.side_effect = side_effect + + assert_raise_library_error( + lambda: lib.check_corosync_offline_on_nodes( + self.mock_communicator, + self.mock_reporter, + node_addrs_list + ), + ( + severity.ERROR, + report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED, + { + "node": nodes[1], + "command": "command", + "reason" : "HTTP error: 401", + }, + report_codes.SKIP_OFFLINE_NODES + ), + ( + severity.ERROR, + report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, + { + "node": nodes[1], + }, + report_codes.SKIP_OFFLINE_NODES + ) + ) + + def test_errors_forced(self): + nodes = ["node1", "node2"] + node_addrs_list = NodeAddressesList( + [NodeAddresses(addr) for addr in nodes] + ) + def side_effect(node, request, data): + if node.ring0 == nodes[1]: + raise NodeAuthenticationException( + nodes[1], "command", "HTTP error: 401" + ) + return '{' # invalid json + self.mock_communicator.call_node.side_effect = side_effect + + lib.check_corosync_offline_on_nodes( + self.mock_communicator, + self.mock_reporter, + node_addrs_list, + skip_offline_nodes=True + ) + + assert_report_item_list_equal( + self.mock_reporter.report_item_list, + [ + ( + severity.INFO, + report_codes.COROSYNC_NOT_RUNNING_CHECK_STARTED, + {} + ), + ( + severity.WARNING, + report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, + { + "node": nodes[0], + } + ), + ( + severity.WARNING, + report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED, + { + "node": nodes[1], + "command": "command", + "reason" : "HTTP error: 401", + } + ), + ( + severity.WARNING, + report_codes.COROSYNC_NOT_RUNNING_CHECK_NODE_ERROR, + { + "node": nodes[1], + } + ) + ] + ) + + +@patch_nodes_task("qdevice_client.remote_client_stop") +@patch_nodes_task("qdevice_client.remote_client_start") +class QdeviceReloadOnNodesTest(TestCase): + def setUp(self): + self.mock_reporter = MockLibraryReportProcessor() + self.mock_communicator = mock.MagicMock(spec_set=NodeCommunicator) + + def test_success(self, mock_remote_start, mock_remote_stop): + nodes = ["node1", "node2"] + node_addrs_list = NodeAddressesList( + [NodeAddresses(addr) for addr in nodes] + ) + + lib.qdevice_reload_on_nodes( + self.mock_communicator, + self.mock_reporter, + node_addrs_list + ) + + node_calls = [ + mock.call( + self.mock_reporter, self.mock_communicator, node_addrs_list[0] + ), + mock.call( + self.mock_reporter, self.mock_communicator, node_addrs_list[1] + ), + ] + self.assertEqual(len(node_calls), len(mock_remote_stop.mock_calls)) + self.assertEqual(len(node_calls), len(mock_remote_start.mock_calls)) + mock_remote_stop.assert_has_calls(node_calls, any_order=True) + mock_remote_start.assert_has_calls(node_calls, any_order=True) + + assert_report_item_list_equal( + self.mock_reporter.report_item_list, + [ + ( + severity.INFO, + report_codes.QDEVICE_CLIENT_RELOAD_STARTED, + {} + ), + ] + ) + + def test_fail_doesnt_prevent_start( + self, mock_remote_start, mock_remote_stop + ): + nodes = ["node1", "node2"] + node_addrs_list = NodeAddressesList( + [NodeAddresses(addr) for addr in nodes] + ) + def raiser(reporter, communicator, node): + if 
node.ring0 == nodes[1]:
+ raise NodeAuthenticationException(
+ node.label, "command", "HTTP error: 401"
+ )
+ mock_remote_start.side_effect = raiser
+
+ assert_raise_library_error(
+ lambda: lib.qdevice_reload_on_nodes(
+ self.mock_communicator,
+ self.mock_reporter,
+ node_addrs_list
+ ),
+ # why the same error twice?
+ # 1. Tested piece of code calls a function which puts an error
+ # into the reporter. The reporter raises an exception. The
+ # exception is caught in the tested piece of code, stored, and
+ # later put to the reporter again.
+ # 2. Mock reporter remembers everything that goes through it
+ # and by the mechanism described in 1, the error goes through it
+ # twice.
+ (
+ severity.ERROR,
+ report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED,
+ {
+ "node": nodes[1],
+ "command": "command",
+ "reason" : "HTTP error: 401",
+ },
+ report_codes.SKIP_OFFLINE_NODES
+ ),
+ (
+ severity.ERROR,
+ report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED,
+ {
+ "node": nodes[1],
+ "command": "command",
+ "reason" : "HTTP error: 401",
+ },
+ report_codes.SKIP_OFFLINE_NODES
+ )
+ )
+
+ node_calls = [
+ mock.call(
+ self.mock_reporter, self.mock_communicator, node_addrs_list[0]
+ ),
+ mock.call(
+ self.mock_reporter, self.mock_communicator, node_addrs_list[1]
+ ),
+ ]
+ self.assertEqual(len(node_calls), len(mock_remote_stop.mock_calls))
+ self.assertEqual(len(node_calls), len(mock_remote_start.mock_calls))
+ mock_remote_stop.assert_has_calls(node_calls, any_order=True)
+ mock_remote_start.assert_has_calls(node_calls, any_order=True)
+
+ assert_report_item_list_equal(
+ self.mock_reporter.report_item_list,
+ [
+ (
+ severity.INFO,
+ report_codes.QDEVICE_CLIENT_RELOAD_STARTED,
+ {}
+ ),
+ # why the same error twice?
+ # 1. Tested piece of code calls a function which puts an error
+ # into the reporter. The reporter raises an exception. The
+ # exception is caught in the tested piece of code, stored, and
+ # later put to the reporter again.
+ # 2. Mock reporter remembers everything that goes through it
+ # and by the mechanism described in 1, the error goes through it
+ # twice.
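+ # A minimal sketch of that double-report flow (illustrative only;
+ # the process()/process_list() names here are assumptions, not
+ # necessarily the real reporter API):
+ #     try:
+ #         reporter.process(error)       # error recorded, then raised
+ #     except LibraryError as e:
+ #         saved = e.args                # caught and stored by caller
+ #     reporter.process_list(saved)      # same error recorded a second time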
+ ( + severity.ERROR, + report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED, + { + "node": nodes[1], + "command": "command", + "reason" : "HTTP error: 401", + }, + report_codes.SKIP_OFFLINE_NODES + ), + ( + severity.ERROR, + report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED, + { + "node": nodes[1], + "command": "command", + "reason" : "HTTP error: 401", + }, + report_codes.SKIP_OFFLINE_NODES + ), + ] + ) + + +class NodeCheckAuthTest(TestCase): + def test_success(self): + mock_communicator = mock.MagicMock(spec_set=NodeCommunicator) + node = NodeAddresses("node1") + lib.node_check_auth(mock_communicator, node) + mock_communicator.call_node.assert_called_once_with( + node, "remote/check_auth", "check_auth_only=1" + ) + + +def fixture_invalid_response_format(node_label): + return ( + severity.ERROR, + report_codes.INVALID_RESPONSE_FORMAT, + { + "node": node_label + }, + None + ) + +def assert_call_cause_reports(call, expected_report_items): + report_items = [] + call(report_items) + assert_report_item_list_equal(report_items, expected_report_items) + +class CallForJson(TestCase): + def setUp(self): + self.node = NodeAddresses("node1") + self.node_communicator = mock.MagicMock(spec_set=NodeCommunicator) + + def make_call(self, report_items): + lib._call_for_json( + self.node_communicator, + self.node, + "some/path", + report_items + ) + + def test_report_no_json_response(self): + #leads to ValueError + self.node_communicator.call_node = mock.Mock(return_value="bad answer") + assert_call_cause_reports(self.make_call, [ + fixture_invalid_response_format(self.node.label) + ]) + + def test_process_communication_exception(self): + self.node_communicator.call_node = mock.Mock( + side_effect=NodeAuthenticationException("node", "request", "reason") + ) + assert_call_cause_reports(self.make_call, [ + ( + severity.ERROR, + report_codes.NODE_COMMUNICATION_ERROR_NOT_AUTHORIZED, + { + 'node': 'node', + 'reason': 'reason', + 'command': 'request' + }, + report_codes.SKIP_OFFLINE_NODES, + ) + ]) + +class AvailabilityCheckerNode(TestCase): + def setUp(self): + self.node = "node1" + + def assert_result_causes_reports( + self, availability_info, expected_report_items + ): + report_items = [] + lib.availability_checker_node( + availability_info, + report_items, + self.node + ) + assert_report_item_list_equal(report_items, expected_report_items) + + def test_no_reports_when_available(self): + self.assert_result_causes_reports({"node_available": True}, []) + + def test_report_node_is_in_cluster(self): + self.assert_result_causes_reports({"node_available": False}, [ + ( + severity.ERROR, + report_codes.CANNOT_ADD_NODE_IS_IN_CLUSTER, + { + "node": self.node + } + ), + ]) + + def test_report_node_is_running_pacemaker_remote(self): + self.assert_result_causes_reports( + {"node_available": False, "pacemaker_remote": True}, + [ + ( + severity.ERROR, + report_codes.CANNOT_ADD_NODE_IS_RUNNING_SERVICE, + { + "node": self.node, + "service": "pacemaker_remote", + } + ), + ] + ) + + def test_report_node_is_running_pacemaker(self): + self.assert_result_causes_reports( + {"node_available": False, "pacemaker_running": True}, + [ + ( + severity.ERROR, + report_codes.CANNOT_ADD_NODE_IS_RUNNING_SERVICE, + { + "node": self.node, + "service": "pacemaker", + } + ), + ] + ) + +class AvailabilityCheckerRemoteNode(TestCase): + def setUp(self): + self.node = "node1" + + def assert_result_causes_reports( + self, availability_info, expected_report_items + ): + report_items = [] + lib.availability_checker_remote_node( + availability_info, 
+ report_items, + self.node + ) + assert_report_item_list_equal(report_items, expected_report_items) + + def test_no_reports_when_available(self): + self.assert_result_causes_reports({"node_available": True}, []) + + def test_report_node_is_running_pacemaker(self): + self.assert_result_causes_reports( + {"node_available": False, "pacemaker_running": True}, + [ + ( + severity.ERROR, + report_codes.CANNOT_ADD_NODE_IS_RUNNING_SERVICE, + { + "node": self.node, + "service": "pacemaker", + } + ), + ] + ) + + def test_report_node_is_in_cluster(self): + self.assert_result_causes_reports({"node_available": False}, [ + ( + severity.ERROR, + report_codes.CANNOT_ADD_NODE_IS_IN_CLUSTER, + { + "node": self.node + } + ), + ]) + + def test_no_reports_when_pacemaker_remote_there(self): + self.assert_result_causes_reports( + {"node_available": False, "pacemaker_remote": True}, + [] + ) + +class CheckCanAddNodeToCluster(TestCase): + def setUp(self): + self.node = NodeAddresses("node1") + self.node_communicator = mock.MagicMock(spec_set=NodeCommunicator) + + def assert_result_causes_invalid_format(self, result): + self.node_communicator.call_node = mock.Mock( + return_value=json.dumps(result) + ) + assert_call_cause_reports( + self.make_call, + [fixture_invalid_response_format(self.node.label)], + ) + + def make_call(self, report_items): + lib.check_can_add_node_to_cluster( + self.node_communicator, + self.node, + report_items, + check_response=( + lambda availability_info, report_items, node_label: None + ) + ) + + def test_report_no_dict_in_json_response(self): + self.assert_result_causes_invalid_format("bad answer") + +class OnNodeTest(TestCase): + def setUp(self): + self.reporter = MockLibraryReportProcessor() + self.node = NodeAddresses("node1") + self.node_communicator = mock.MagicMock(spec_set=NodeCommunicator) + + def set_call_result(self, result): + self.node_communicator.call_node = mock.Mock( + return_value=json.dumps(result) + ) + +class RunActionOnNode(OnNodeTest): + def make_call(self): + return lib.run_actions_on_node( + self.node_communicator, + "remote/run_action", + "actions", + self.reporter, + self.node, + {"action": {"type": "any_mock_type"}} + ) + + def test_return_node_action_result(self): + self.set_call_result({ + "actions": { + "action": { + "code": "some_code", + "message": "some_message", + } + } + }) + result = self.make_call()["action"] + self.assertEqual(result.code, "some_code") + self.assertEqual(result.message, "some_message") diff -Nru pcs-0.9.155+dfsg/pcs/lib/test/test_pacemaker_values.py pcs-0.9.159/pcs/lib/test/test_pacemaker_values.py --- pcs-0.9.155+dfsg/pcs/lib/test/test_pacemaker_values.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/test/test_pacemaker_values.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,251 +0,0 @@ -from __future__ import ( - absolute_import, - division, - print_function, - unicode_literals, -) - -from pcs.test.tools.pcs_unittest import TestCase - -from pcs.test.tools.assertions import assert_raise_library_error - -from pcs.common import report_codes -from pcs.lib.errors import ReportItemSeverity as severity - -import pcs.lib.pacemaker_values as lib - - -class BooleanTest(TestCase): - def test_true_is_true(self): - self.assertTrue(lib.is_true("true")) - self.assertTrue(lib.is_true("tRue")) - self.assertTrue(lib.is_true("on")) - self.assertTrue(lib.is_true("ON")) - self.assertTrue(lib.is_true("yes")) - self.assertTrue(lib.is_true("yeS")) - self.assertTrue(lib.is_true("y")) - self.assertTrue(lib.is_true("Y")) - 
self.assertTrue(lib.is_true("1")) - - def test_nontrue_is_not_true(self): - self.assertFalse(lib.is_true("")) - self.assertFalse(lib.is_true(" 1 ")) - self.assertFalse(lib.is_true("a")) - self.assertFalse(lib.is_true("2")) - self.assertFalse(lib.is_true("10")) - self.assertFalse(lib.is_true("yes please")) - - def test_true_is_boolean(self): - self.assertTrue(lib.is_boolean("true")) - self.assertTrue(lib.is_boolean("tRue")) - self.assertTrue(lib.is_boolean("on")) - self.assertTrue(lib.is_boolean("ON")) - self.assertTrue(lib.is_boolean("yes")) - self.assertTrue(lib.is_boolean("yeS")) - self.assertTrue(lib.is_boolean("y")) - self.assertTrue(lib.is_boolean("Y")) - self.assertTrue(lib.is_boolean("1")) - - def test_false_is_boolean(self): - self.assertTrue(lib.is_boolean("false")) - self.assertTrue(lib.is_boolean("fAlse")) - self.assertTrue(lib.is_boolean("off")) - self.assertTrue(lib.is_boolean("oFf")) - self.assertTrue(lib.is_boolean("no")) - self.assertTrue(lib.is_boolean("nO")) - self.assertTrue(lib.is_boolean("n")) - self.assertTrue(lib.is_boolean("N")) - self.assertTrue(lib.is_boolean("0")) - - def test_nonboolean_is_not_boolean(self): - self.assertFalse(lib.is_boolean("")) - self.assertFalse(lib.is_boolean("a")) - self.assertFalse(lib.is_boolean("2")) - self.assertFalse(lib.is_boolean("10")) - self.assertFalse(lib.is_boolean("yes please")) - self.assertFalse(lib.is_boolean(" y")) - self.assertFalse(lib.is_boolean("n ")) - self.assertFalse(lib.is_boolean("NO!")) - - -class TimeoutTest(TestCase): - def test_valid(self): - self.assertEqual(10, lib.timeout_to_seconds("10")) - self.assertEqual(10, lib.timeout_to_seconds("10s")) - self.assertEqual(10, lib.timeout_to_seconds("10sec")) - self.assertEqual(600, lib.timeout_to_seconds("10m")) - self.assertEqual(600, lib.timeout_to_seconds("10min")) - self.assertEqual(36000, lib.timeout_to_seconds("10h")) - self.assertEqual(36000, lib.timeout_to_seconds("10hr")) - - def test_invalid(self): - self.assertEqual(None, lib.timeout_to_seconds("1a1s")) - self.assertEqual(None, lib.timeout_to_seconds("10mm")) - self.assertEqual(None, lib.timeout_to_seconds("10mim")) - self.assertEqual(None, lib.timeout_to_seconds("aaa")) - self.assertEqual(None, lib.timeout_to_seconds("")) - - self.assertEqual("1a1s", lib.timeout_to_seconds("1a1s", True)) - self.assertEqual("10mm", lib.timeout_to_seconds("10mm", True)) - self.assertEqual("10mim", lib.timeout_to_seconds("10mim", True)) - self.assertEqual("aaa", lib.timeout_to_seconds("aaa", True)) - self.assertEqual("", lib.timeout_to_seconds("", True)) - - -class ValidateIdTest(TestCase): - def test_valid(self): - self.assertEqual(None, lib.validate_id("dummy")) - self.assertEqual(None, lib.validate_id("DUMMY")) - self.assertEqual(None, lib.validate_id("dUmMy")) - self.assertEqual(None, lib.validate_id("dummy0")) - self.assertEqual(None, lib.validate_id("dum0my")) - self.assertEqual(None, lib.validate_id("dummy-")) - self.assertEqual(None, lib.validate_id("dum-my")) - self.assertEqual(None, lib.validate_id("dummy.")) - self.assertEqual(None, lib.validate_id("dum.my")) - self.assertEqual(None, lib.validate_id("_dummy")) - self.assertEqual(None, lib.validate_id("dummy_")) - self.assertEqual(None, lib.validate_id("dum_my")) - - def test_invalid_empty(self): - assert_raise_library_error( - lambda: lib.validate_id("", "test id"), - ( - severity.ERROR, - report_codes.EMPTY_ID, - { - "id": "", - "id_description": "test id", - } - ) - ) - - def test_invalid_first_character(self): - desc = "test id" - info = { - "id": "", - 
"id_description": desc, - "invalid_character": "", - "is_first_char": True, - } - report = (severity.ERROR, report_codes.INVALID_ID, info) - - info["id"] = "0" - info["invalid_character"] = "0" - assert_raise_library_error( - lambda: lib.validate_id("0", desc), - report - ) - - info["id"] = "-" - info["invalid_character"] = "-" - assert_raise_library_error( - lambda: lib.validate_id("-", desc), - report - ) - - info["id"] = "." - info["invalid_character"] = "." - assert_raise_library_error( - lambda: lib.validate_id(".", desc), - report - ) - - info["id"] = ":" - info["invalid_character"] = ":" - assert_raise_library_error( - lambda: lib.validate_id(":", desc), - report - ) - - info["id"] = "0dummy" - info["invalid_character"] = "0" - assert_raise_library_error( - lambda: lib.validate_id("0dummy", desc), - report - ) - - info["id"] = "-dummy" - info["invalid_character"] = "-" - assert_raise_library_error( - lambda: lib.validate_id("-dummy", desc), - report - ) - - info["id"] = ".dummy" - info["invalid_character"] = "." - assert_raise_library_error( - lambda: lib.validate_id(".dummy", desc), - report - ) - - info["id"] = ":dummy" - info["invalid_character"] = ":" - assert_raise_library_error( - lambda: lib.validate_id(":dummy", desc), - report - ) - - def test_invalid_character(self): - desc = "test id" - info = { - "id": "", - "id_description": desc, - "invalid_character": "", - "is_first_char": False, - } - report = (severity.ERROR, report_codes.INVALID_ID, info) - - info["id"] = "dum:my" - info["invalid_character"] = ":" - assert_raise_library_error( - lambda: lib.validate_id("dum:my", desc), - report - ) - - info["id"] = "dummy:" - info["invalid_character"] = ":" - assert_raise_library_error( - lambda: lib.validate_id("dummy:", desc), - report - ) - - info["id"] = "dum?my" - info["invalid_character"] = "?" - assert_raise_library_error( - lambda: lib.validate_id("dum?my", desc), - report - ) - - info["id"] = "dummy?" - info["invalid_character"] = "?" 
- assert_raise_library_error( - lambda: lib.validate_id("dummy?", desc), - report - ) - - -class IsScoreValueTest(TestCase): - def test_returns_true_for_number(self): - self.assertTrue(lib.is_score_value("1")) - - def test_returns_true_for_minus_number(self): - self.assertTrue(lib.is_score_value("-1")) - - def test_returns_true_for_plus_number(self): - self.assertTrue(lib.is_score_value("+1")) - - def test_returns_true_for_infinity(self): - self.assertTrue(lib.is_score_value("INFINITY")) - - def test_returns_true_for_minus_infinity(self): - self.assertTrue(lib.is_score_value("-INFINITY")) - - def test_returns_true_for_plus_infinity(self): - self.assertTrue(lib.is_score_value("+INFINITY")) - - def test_returns_false_for_nonumber_noinfinity(self): - self.assertFalse(lib.is_score_value("something else")) - - def test_returns_false_for_multiple_operators(self): - self.assertFalse(lib.is_score_value("++INFINITY")) diff -Nru pcs-0.9.155+dfsg/pcs/lib/test/test_resource_agent.py pcs-0.9.159/pcs/lib/test/test_resource_agent.py --- pcs-0.9.155+dfsg/pcs/lib/test/test_resource_agent.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/test/test_resource_agent.py 2017-06-30 15:33:01.000000000 +0000 @@ -11,7 +11,9 @@ from pcs.test.tools.assertions import ( ExtendedAssertionsMixin, assert_raise_library_error, + assert_report_item_list_equal, assert_xml_equal, + start_tag_error_text, ) from pcs.test.tools.misc import create_patcher from pcs.test.tools.pcs_unittest import TestCase, mock @@ -23,6 +25,94 @@ from pcs.lib.external import CommandRunner patch_agent = create_patcher("pcs.lib.resource_agent") +patch_agent_object = partial(mock.patch.object, lib_ra.Agent) + + +class GetDefaultInterval(TestCase): + def test_return_0s_on_name_different_from_monitor(self): + self.assertEqual("0s", lib_ra.get_default_interval("start")) + def test_return_60s_on_monitor(self): + self.assertEqual("60s", lib_ra.get_default_interval("monitor")) + + +@patch_agent("get_default_interval", mock.Mock(return_value="10s")) +class CompleteAllIntervals(TestCase): + def test_add_intervals_everywhere_is_missing(self): + self.assertEqual( + [ + {"name": "monitor", "interval": "20s"}, + {"name": "start", "interval": "10s"}, + ], + lib_ra.complete_all_intervals([ + {"name": "monitor", "interval": "20s"}, + {"name": "start"}, + ]) + ) + +class GetResourceAgentNameFromString(TestCase): + def test_returns_resource_agent_name_when_is_valid(self): + self.assertEqual( + lib_ra.ResourceAgentName("ocf", "heartbeat", "Dummy"), + lib_ra.get_resource_agent_name_from_string("ocf:heartbeat:Dummy") + ) + + def test_refuses_string_if_is_not_valid(self): + self.assertRaises( + lib_ra.InvalidResourceAgentName, + lambda: lib_ra.get_resource_agent_name_from_string( + "invalid:resource:agent:string" + ) + ) + + def test_refuses_with_unknown_standard(self): + self.assertRaises( + lib_ra.InvalidResourceAgentName, + lambda: lib_ra.get_resource_agent_name_from_string("unknown:Dummy") + ) + + def test_refuses_ocf_agent_name_without_provider(self): + self.assertRaises( + lib_ra.InvalidResourceAgentName, + lambda: lib_ra.get_resource_agent_name_from_string("ocf:Dummy") + ) + + def test_refuses_non_ocf_agent_name_with_provider(self): + self.assertRaises( + lib_ra.InvalidResourceAgentName, + lambda: + lib_ra.get_resource_agent_name_from_string("lsb:provider:Dummy") + ) + + def test_returns_resource_agent_containing_sytemd_instance(self): + self.assertEqual( + lib_ra.ResourceAgentName("systemd", None, "lvm2-pvscan@252:2"), + 
lib_ra.get_resource_agent_name_from_string( + "systemd:lvm2-pvscan@252:2" + ) + ) + + def test_returns_resource_agent_containing_service_instance(self): + self.assertEqual( + lib_ra.ResourceAgentName("service", None, "lvm2-pvscan@252:2"), + lib_ra.get_resource_agent_name_from_string( + "service:lvm2-pvscan@252:2" + ) + ) + + def test_returns_resource_agent_containing_systemd_instance_short(self): + self.assertEqual( + lib_ra.ResourceAgentName("service", None, "getty@tty1"), + lib_ra.get_resource_agent_name_from_string("service:getty@tty1") + ) + + def test_refuses_systemd_agent_name_with_provider(self): + self.assertRaises( + lib_ra.InvalidResourceAgentName, + lambda: lib_ra.get_resource_agent_name_from_string( + "sytemd:lvm2-pvscan252:@2" + ) + ) + class ListResourceAgentsStandardsTest(TestCase): def test_success_and_filter_stonith_out(self): @@ -57,7 +147,6 @@ "/usr/sbin/crm_resource", "--list-standards" ]) - def test_success_filter_whitespace(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) agents = [ @@ -93,7 +182,6 @@ "/usr/sbin/crm_resource", "--list-standards" ]) - def test_empty(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.return_value = ("", "", 0) @@ -107,7 +195,6 @@ "/usr/sbin/crm_resource", "--list-standards" ]) - def test_error(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.return_value = ("lsb", "error", 1) @@ -152,7 +239,6 @@ "/usr/sbin/crm_resource", "--list-ocf-providers" ]) - def test_success_filter_whitespace(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) providers = [ @@ -183,7 +269,6 @@ "/usr/sbin/crm_resource", "--list-ocf-providers" ]) - def test_empty(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.return_value = ("", "", 0) @@ -197,7 +282,6 @@ "/usr/sbin/crm_resource", "--list-ocf-providers" ]) - def test_error(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.return_value = ("booth", "error", 1) @@ -294,7 +378,6 @@ "/usr/sbin/crm_resource", "--list-agents", "ocf" ]) - def test_success_standard_provider(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.return_value = ( @@ -325,7 +408,6 @@ "/usr/sbin/crm_resource", "--list-agents", "ocf:pacemaker" ]) - def test_bad_standard(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.return_value = ( @@ -375,7 +457,6 @@ "/usr/sbin/crm_resource", "--list-agents", "stonith" ]) - def test_no_agents(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.return_value = ( @@ -393,7 +474,6 @@ "/usr/sbin/crm_resource", "--list-agents", "stonith" ]) - def test_filter_hidden_agents(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.return_value = ( @@ -449,7 +529,6 @@ ("Dummy\nStateful\n", "", 0), ] - def test_one_agent_list(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.side_effect = ( @@ -468,7 +547,6 @@ ["ocf:heartbeat:Delay"] ) - def test_one_agent_exception(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.side_effect = ( @@ -487,7 +565,6 @@ "ocf:heartbeat:Delay" ) - def test_two_agents_list(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.side_effect = ( @@ -507,7 +584,6 @@ ["ocf:heartbeat:Dummy", "ocf:pacemaker:Dummy"] ) - def test_two_agents_one_valid_list(self): mock_runner = mock.MagicMock(spec_set=CommandRunner) mock_runner.run.side_effect = ( @@ -527,7 +603,6 @@ ["ocf:heartbeat:Dummy"] ) - def 
test_two_agents_exception(self):
        mock_runner = mock.MagicMock(spec_set=CommandRunner)
        mock_runner.run.side_effect = (
@@ -557,7 +632,6 @@
             ),
         )
-
     def test_no_agents_list(self):
         mock_runner = mock.MagicMock(spec_set=CommandRunner)
         mock_runner.run.side_effect = self.mock_runner_side_effect
@@ -567,7 +641,6 @@
             []
         )
-
     def test_no_agents_exception(self):
         mock_runner = mock.MagicMock(spec_set=CommandRunner)
         mock_runner.run.side_effect = self.mock_runner_side_effect
@@ -586,7 +659,6 @@
             ),
         )
-
     def test_no_valids_agent_list(self):
         mock_runner = mock.MagicMock(spec_set=CommandRunner)
         mock_runner.run.side_effect = (
@@ -603,14 +675,13 @@
     )
 
 
-@mock.patch.object(lib_ra.Agent, "_get_metadata")
+@patch_agent_object("_get_metadata")
 class AgentMetadataGetShortdescTest(TestCase):
     def setUp(self):
         self.agent = lib_ra.Agent(
             mock.MagicMock(spec_set=CommandRunner)
         )
-
     def test_no_desc(self, mock_metadata):
         xml = ''
         mock_metadata.return_value = etree.XML(xml)
@@ -619,7 +690,6 @@
             ""
         )
-
     def test_shortdesc_attribute(self, mock_metadata):
         xml = ''
         mock_metadata.return_value = etree.XML(xml)
@@ -628,7 +698,6 @@
             "short description"
         )
-
     def test_shortdesc_element(self, mock_metadata):
         xml = """
@@ -642,14 +711,13 @@
     )
 
 
-@mock.patch.object(lib_ra.Agent, "_get_metadata")
+@patch_agent_object("_get_metadata")
 class AgentMetadataGetLongdescTest(TestCase):
     def setUp(self):
         self.agent = lib_ra.Agent(
             mock.MagicMock(spec_set=CommandRunner)
         )
-
     def test_no_desc(self, mock_metadata):
         xml = ''
         mock_metadata.return_value = etree.XML(xml)
@@ -658,7 +726,6 @@
             ""
         )
-
     def test_longdesc_element(self, mock_metadata):
         xml = """
@@ -672,14 +739,13 @@
     )
 
 
-@mock.patch.object(lib_ra.Agent, "_get_metadata")
+@patch_agent_object("_get_metadata")
 class AgentMetadataGetParametersTest(TestCase):
     def setUp(self):
         self.agent = lib_ra.Agent(
             mock.MagicMock(spec_set=CommandRunner)
         )
-
     def test_no_parameters(self, mock_metadata):
         xml = """
@@ -691,7 +757,6 @@
             []
         )
-
     def test_empty_parameters(self, mock_metadata):
         xml = """
@@ -704,7 +769,6 @@
             []
         )
-
     def test_empty_parameter(self, mock_metadata):
         xml = """
@@ -725,6 +789,9 @@
                     "required": False,
                     "default": None,
                     "advanced": False,
+                    "deprecated": False,
+                    "obsoletes": None,
+                    "pcs_deprecated_warning": "",
                 }
             ]
         )
@@ -756,6 +823,9 @@
                     "required": True,
                     "default": "default_value",
                     "advanced": False,
+                    "deprecated": False,
+                    "obsoletes": None,
+                    "pcs_deprecated_warning": "",
                 },
                 {
                     "name": "another parameter",
@@ -765,19 +835,48 @@
                     "required": False,
                     "default": None,
                     "advanced": False,
+                    "deprecated": False,
+                    "obsoletes": None,
+                    "pcs_deprecated_warning": "",
                 }
             ]
         )
+    def test_remove_obsoletes_keep_deprecated(self, mock_metadata):
+        xml = """
+
+
+
+
+
+
+        """
+        mock_metadata.return_value = etree.XML(xml)
+        self.assertEqual(
+            self.agent.get_parameters(),
+            [
+                {
+                    "name": "deprecated",
+                    "longdesc": "",
+                    "shortdesc": "",
+                    "type": "string",
+                    "required": False,
+                    "default": None,
+                    "advanced": False,
+                    "deprecated": True,
+                    "obsoletes": None,
+                    "pcs_deprecated_warning": "",
+                },
+            ]
+        )
 
 
-@mock.patch.object(lib_ra.Agent, "_get_metadata")
+@patch_agent_object("_get_metadata")
 class AgentMetadataGetActionsTest(TestCase):
     def setUp(self):
         self.agent = lib_ra.Agent(
             mock.MagicMock(spec_set=CommandRunner)
         )
-
     def test_no_actions(self, mock_metadata):
         xml = """
@@ -789,7 +888,6 @@
             []
         )
-
     def test_empty_actions(self, mock_metadata):
         xml = """
@@ -802,7 +900,6 @@
             []
         )
-
     def test_empty_action(self, mock_metadata):
         xml = """
@@ -817,7 +914,6 @@
             [{}]
         )
-
     def test_more_actions(self, mock_metadata):
         xml = """
@@ -843,9 +939,101 @@
             ]
         )
 
+    def test_remove_depth_with_0(self, mock_metadata):
+        xml = """
+
+
+
+
+
+        """
+        mock_metadata.return_value = etree.XML(xml)
+        self.assertEqual(
+            self.agent.get_actions(),
+            [
+                {
+                    "name": "monitor",
+                    "timeout": "20"
+                },
+            ]
+        )
+
+    def test_transform_depth_to_OCF_CHECK_LEVEL(self, mock_metadata):
+        xml = """
+
+
+
+
+
+        """
+        mock_metadata.return_value = etree.XML(xml)
+        self.assertEqual(
+            self.agent.get_actions(),
+            [
+                {
+                    "name": "monitor",
+                    "timeout": "20",
+                    "OCF_CHECK_LEVEL": "1",
+                },
+            ]
+        )
+
+
+@patch_agent_object("DEFAULT_CIB_ACTION_NAMES", ["monitor", "start"])
+@patch_agent_object("get_actions")
+class AgentMetadataGetCibDefaultActions(TestCase):
+    def setUp(self):
+        self.agent = lib_ra.Agent(
+            mock.MagicMock(spec_set=CommandRunner)
+        )
+
+    def test_select_only_actions_for_cib(self, get_actions):
+        get_actions.return_value = [
+            {"name": "metadata"},
+            {"name": "start", "interval": "40s"},
+            {"name": "monitor", "interval": "10s", "timeout": "30s"},
+        ]
+        self.assertEqual(
+            [
+                {"name": "start", "interval": "40s"},
+                {"name": "monitor", "interval": "10s", "timeout": "30s"}
+            ],
+            self.agent.get_cib_default_actions()
+        )
+
+    def test_complete_monitor(self, get_actions):
+        get_actions.return_value = [{"name": "metadata"}]
+        self.assertEqual(
+            [{"name": "monitor", "interval": "60s"}],
+            self.agent.get_cib_default_actions()
+        )
 
-@mock.patch.object(lib_ra.Agent, "_get_metadata")
-@mock.patch.object(lib_ra.Agent, "get_name", lambda self: "agent-name")
+    def test_complete_intervals(self, get_actions):
+        get_actions.return_value = [
+            {"name": "metadata"},
+            {"name": "monitor", "timeout": "30s"},
+        ]
+        self.assertEqual(
+            [{"name": "monitor", "interval": "60s", "timeout": "30s"}],
+            self.agent.get_cib_default_actions()
+        )
+
+    def test_select_only_necessary_actions_for_cib(self, get_actions):
+        get_actions.return_value = [
+            {"name": "metadata"},
+            {"name": "start", "interval": "40s"},
+            {"name": "monitor", "interval": "10s", "timeout": "30s"},
+        ]
+        self.assertEqual(
+            [
+                {"name": "monitor", "interval": "10s", "timeout": "30s"}
+            ],
+            self.agent.get_cib_default_actions(necessary_only=True)
+        )
+
+
+@patch_agent_object("_get_metadata")
+@patch_agent_object("get_name", lambda self: "agent-name")
 class AgentMetadataGetInfoTest(TestCase):
     def setUp(self):
         self.agent = lib_ra.Agent(
@@ -872,7 +1060,6 @@
         """)
-
     def test_name_info(self, mock_metadata):
         mock_metadata.return_value = self.metadata
         self.assertEqual(
@@ -886,7 +1073,6 @@
             }
         )
-
     def test_description_info(self, mock_metadata):
         mock_metadata.return_value = self.metadata
         self.assertEqual(
@@ -900,7 +1086,6 @@
             }
        )
-
     def test_full_info(self, mock_metadata):
         mock_metadata.return_value = self.metadata
         self.assertEqual(
@@ -918,6 +1103,9 @@
                         "required": True,
                         "default": "default_value",
                         "advanced": False,
+                        "deprecated": False,
+                        "obsoletes": None,
+                        "pcs_deprecated_warning": "",
                    },
                    {
                        "name": "another parameter",
@@ -927,6 +1115,9 @@
                        "required": False,
                        "default": None,
                        "advanced": False,
+                        "deprecated": False,
+                        "obsoletes": None,
+                        "pcs_deprecated_warning": "",
                    }
                ],
                "actions": [
@@ -936,11 +1127,12 @@
                    },
                    {"name": "off"},
                ],
+                "default_actions": [{"name": "monitor", "interval": "60s"}],
            }
        )
 
 
-@mock.patch.object(lib_ra.Agent, "_get_metadata")
+@patch_agent_object("_get_metadata")
 class AgentMetadataValidateParametersValuesTest(TestCase):
     def setUp(self):
         self.agent = lib_ra.Agent(
@@ -964,7 +1156,6 @@
         """)
-
     def test_all_required(self, mock_metadata):
         mock_metadata.return_value = self.metadata
         self.assertEqual(
@@ 
-975,7 +1166,6 @@ ([], []) ) - def test_all_required_and_optional(self, mock_metadata): mock_metadata.return_value = self.metadata self.assertEqual( @@ -987,7 +1177,6 @@ ([], []) ) - def test_all_required_and_invalid(self, mock_metadata): mock_metadata.return_value = self.metadata self.assertEqual( @@ -999,7 +1188,6 @@ (["invalid_param"], []) ) - def test_missing_required(self, mock_metadata): mock_metadata.return_value = self.metadata self.assertEqual( @@ -1008,7 +1196,6 @@ ([], ["required_param", "another_required_param"]) ) - def test_missing_required_and_invalid(self, mock_metadata): mock_metadata.return_value = self.metadata self.assertEqual( @@ -1019,13 +1206,235 @@ (["invalid_param"], ["required_param"]) ) + def test_ignore_obsoletes_use_deprecated(self, mock_metadata): + xml = """ + + + + + + + """ + mock_metadata.return_value = etree.XML(xml) + self.assertEqual( + self.agent.validate_parameters_values({ + }), + ([], ["deprecated"]) + ) + + def test_dont_allow_obsoletes_use_deprecated(self, mock_metadata): + xml = """ + + + + + + + """ + mock_metadata.return_value = etree.XML(xml) + self.assertEqual( + self.agent.validate_parameters_values({ + "obsoletes": "value", + }), + (["obsoletes"], ["deprecated"]) + ) + + +class AgentMetadataValidateParameters(TestCase): + def setUp(self): + self.agent = lib_ra.Agent(mock.MagicMock(spec_set=CommandRunner)) + self.metadata = etree.XML(""" + + + + Long description + short description + + + + + + + + + + + """) + patcher = patch_agent_object("_get_metadata") + self.addCleanup(patcher.stop) + self.get_metadata = patcher.start() + self.get_metadata.return_value = self.metadata + + def test_returns_empty_report_when_all_required_there(self): + self.assertEqual( + [], + self.agent.validate_parameters({ + "another_required_param": "value1", + "required_param": "value2", + }), + ) + + def test_returns_empty_report_when_all_required_and_optional_there(self): + self.assertEqual( + [], + self.agent.validate_parameters({ + "another_required_param": "value1", + "required_param": "value2", + "test_param": "value3", + }) + ) + + def test_report_invalid_option(self): + assert_report_item_list_equal( + self.agent.validate_parameters({ + "another_required_param": "value1", + "required_param": "value2", + "invalid_param": "value3", + }), + [ + ( + severity.ERROR, + report_codes.INVALID_OPTION, + { + "option_names": ["invalid_param"], + "option_type": "resource", + "allowed": [ + "another_required_param", + "required_param", + "test_param", + ] + }, + report_codes.FORCE_OPTIONS + ), + ], + ) + + def test_report_missing_option(self): + assert_report_item_list_equal( + self.agent.validate_parameters({}), + [ + ( + severity.ERROR, + report_codes.REQUIRED_OPTION_IS_MISSING, + { + "option_names": [ + "required_param", + "another_required_param", + ], + "option_type": "resource", + }, + report_codes.FORCE_OPTIONS + ), + ], + ) + + def test_warn_missing_required(self): + assert_report_item_list_equal( + self.agent.validate_parameters({}, allow_invalid=True), + [ + ( + severity.WARNING, + report_codes.REQUIRED_OPTION_IS_MISSING, + { + "option_names": [ + "required_param", + "another_required_param", + ], + "option_type": "resource", + }, + ), + ] + ) + + def test_ignore_obsoletes_use_deprecated(self): + xml = """ + + + + + + + """ + self.get_metadata.return_value = etree.XML(xml) + assert_report_item_list_equal( + self.agent.validate_parameters({}), + [ + ( + severity.ERROR, + report_codes.REQUIRED_OPTION_IS_MISSING, + { + "option_names": [ + "deprecated", + ], + 
"option_type": "resource", + }, + report_codes.FORCE_OPTIONS + ), + ] + ) + + def test_dont_allow_obsoletes_use_deprecated(self): + xml = """ + + + + + + + """ + self.get_metadata.return_value = etree.XML(xml) + assert_report_item_list_equal( + self.agent.validate_parameters({"obsoletes": "value"}), + [ + ( + severity.ERROR, + report_codes.REQUIRED_OPTION_IS_MISSING, + { + "option_names": [ + "deprecated", + ], + "option_type": "resource", + }, + report_codes.FORCE_OPTIONS + ), + ( + severity.ERROR, + report_codes.INVALID_OPTION, + { + "option_names": ["obsoletes"], + "option_type": "resource", + "allowed": [ + "deprecated", + ] + }, + report_codes.FORCE_OPTIONS + ), + ] + ) + + def test_required_not_specified_on_update(self): + assert_report_item_list_equal( + self.agent.validate_parameters({ + "test_param": "value", + }, update=True), + [ + ], + ) + class StonithdMetadataGetMetadataTest(TestCase, ExtendedAssertionsMixin): def setUp(self): self.mock_runner = mock.MagicMock(spec_set=CommandRunner) self.agent = lib_ra.StonithdMetadata(self.mock_runner) - def test_success(self): metadata = """ @@ -1043,7 +1452,6 @@ ["/usr/libexec/pacemaker/stonithd", "metadata"] ) - def test_failed_to_get_xml(self): self.mock_runner.run.return_value = ("", "some error", 1) @@ -1060,7 +1468,6 @@ ["/usr/libexec/pacemaker/stonithd", "metadata"] ) - def test_invalid_xml(self): self.mock_runner.run.return_value = ("some garbage", "", 0) @@ -1069,7 +1476,7 @@ self.agent._get_metadata, { "agent": "stonithd", - "message": "Start tag expected, '<' not found, line 1, column 1", + "message": start_tag_error_text(), } ) @@ -1078,14 +1485,13 @@ ) -@mock.patch.object(lib_ra.Agent, "_get_metadata") +@patch_agent_object("_get_metadata") class StonithdMetadataGetParametersTest(TestCase): def setUp(self): self.agent = lib_ra.StonithdMetadata( mock.MagicMock(spec_set=CommandRunner) ) - def test_success(self, mock_metadata): xml = """ @@ -1116,7 +1522,10 @@ "type": "test_type", "required": False, "default": "default_value", - "advanced": True + "advanced": True, + "deprecated": False, + "obsoletes": None, + "pcs_deprecated_warning": "", }, { "name": "another parameter", @@ -1125,27 +1534,27 @@ "type": "string", "required": False, "default": None, - "advanced": False + "advanced": False, + "deprecated": False, + "obsoletes": None, + "pcs_deprecated_warning": "", } ] ) -class CrmAgentMetadataGetNameTest(TestCase, ExtendedAssertionsMixin): - def test_success(self): - mock_runner = mock.MagicMock(spec_set=CommandRunner) - agent_name = "ocf:pacemaker:Dummy" - agent = lib_ra.CrmAgent(mock_runner, agent_name) +class CrmAgentDescendant(lib_ra.CrmAgent): + def _prepare_name_parts(self, name): + return lib_ra.ResourceAgentName("STANDARD", None, name) - self.assertEqual(agent.get_name(), agent_name) + def get_name(self): + return self.get_type() class CrmAgentMetadataGetMetadataTest(TestCase, ExtendedAssertionsMixin): def setUp(self): self.mock_runner = mock.MagicMock(spec_set=CommandRunner) - self.agent_name = "ocf:pacemaker:Dummy" - self.agent = lib_ra.CrmAgent(self.mock_runner, self.agent_name) - + self.agent = CrmAgentDescendant(self.mock_runner, "TYPE") def test_success(self): metadata = """ @@ -1161,13 +1570,16 @@ ) self.mock_runner.run.assert_called_once_with( - ["/usr/sbin/crm_resource", "--show-metadata", self.agent_name], + [ + "/usr/sbin/crm_resource", + "--show-metadata", + self.agent._get_full_name() + ], env_extend={ "PATH": "/usr/sbin/:/bin/:/usr/bin/", } ) - def test_failed_to_get_xml(self): 
self.mock_runner.run.return_value = ("", "some error", 1) @@ -1175,19 +1587,22 @@ lib_ra.UnableToGetAgentMetadata, self.agent._get_metadata, { - "agent": self.agent_name, + "agent": self.agent.get_name(), "message": "some error", } ) self.mock_runner.run.assert_called_once_with( - ["/usr/sbin/crm_resource", "--show-metadata", self.agent_name], + [ + "/usr/sbin/crm_resource", + "--show-metadata", + self.agent._get_full_name() + ], env_extend={ "PATH": "/usr/sbin/:/bin/:/usr/bin/", } ) - def test_invalid_xml(self): self.mock_runner.run.return_value = ("some garbage", "", 0) @@ -1195,13 +1610,17 @@ lib_ra.UnableToGetAgentMetadata, self.agent._get_metadata, { - "agent": self.agent_name, - "message": "Start tag expected, '<' not found, line 1, column 1", + "agent": self.agent.get_name(), + "message": start_tag_error_text(), } ) self.mock_runner.run.assert_called_once_with( - ["/usr/sbin/crm_resource", "--show-metadata", self.agent_name], + [ + "/usr/sbin/crm_resource", + "--show-metadata", + self.agent._get_full_name() + ], env_extend={ "PATH": "/usr/sbin/:/bin/:/usr/bin/", } @@ -1211,9 +1630,7 @@ class CrmAgentMetadataIsValidAgentTest(TestCase): def setUp(self): self.mock_runner = mock.MagicMock(spec_set=CommandRunner) - self.agent_name = "ocf:pacemaker:Dummy" - self.agent = lib_ra.CrmAgent(self.mock_runner, self.agent_name) - + self.agent = CrmAgentDescendant(self.mock_runner, "TYPE") def test_success(self): metadata = """ @@ -1225,7 +1642,6 @@ self.assertTrue(self.agent.is_valid_metadata()) - def test_fail(self): self.mock_runner.run.return_value = ("", "", 1) @@ -1252,11 +1668,9 @@ self.agent_name ) - def tearDown(self): lib_ra.StonithAgent._stonithd_metadata = None - def test_success(self): metadata = """ @@ -1282,37 +1696,6 @@ ) -@mock.patch.object(lib_ra.Agent, "_get_metadata") -class StonithAgentMetadataGetActionsTest(TestCase): - def setUp(self): - self.agent = lib_ra.StonithAgent( - mock.MagicMock(spec_set=CommandRunner), - "fence_dummy" - ) - - - def tearDown(self): - lib_ra.StonithAgent._stonithd_metadata = None - - - def test_more_actions(self, mock_metadata): - xml = """ - - - - - - - - - """ - mock_metadata.return_value = etree.XML(xml) - self.assertEqual( - self.agent.get_actions(), - [] - ) - - class StonithAgentMetadataGetParametersTest(TestCase): def setUp(self): self.mock_runner = mock.MagicMock(spec_set=CommandRunner) @@ -1322,11 +1705,9 @@ self.agent_name ) - def tearDown(self): lib_ra.StonithAgent._stonithd_metadata = None - def test_success(self): metadata = """ @@ -1360,26 +1741,56 @@ self.agent.get_parameters(), [ { + "name": "debug", + "longdesc": "", + "shortdesc": "", + "type": "string", + "required": False, + "default": None, + "advanced": False, + "deprecated": False, + "obsoletes": None, + "pcs_deprecated_warning": "", + }, + { "name": "valid_param", "longdesc": "", "shortdesc": "", "type": "string", "required": False, "default": None, - "advanced": False + "advanced": False, + "deprecated": False, + "obsoletes": None, + "pcs_deprecated_warning": "", + }, + { + "name": "verbose", + "longdesc": "", + "shortdesc": "", + "type": "string", + "required": False, + "default": None, + "advanced": False, + "deprecated": False, + "obsoletes": None, + "pcs_deprecated_warning": "", }, { "name": "action", "longdesc": "", - "shortdesc": - "Fencing Action\nWARNING: specifying 'action' is" - " deprecated and not necessary with current Pacemaker" - " versions." 
-                    ,
+                    "shortdesc": "Fencing Action",
                     "type": "string",
                     "required": False,
                     "default": None,
-                    "advanced": False
+                    "advanced": True,
+                    "deprecated": False,
+                    "obsoletes": None,
+                    "pcs_deprecated_warning": "Specifying 'action' is"
+                        " deprecated and not necessary with current Pacemaker"
+                        " versions. Use 'pcmk_off_action',"
+                        " 'pcmk_reboot_action' instead."
+                    ,
                 },
                 {
                     "name": "another_param",
@@ -1388,7 +1799,10 @@
                     "type": "string",
                     "required": False,
                     "default": None,
-                    "advanced": False
+                    "advanced": False,
+                    "deprecated": False,
+                    "obsoletes": None,
+                    "pcs_deprecated_warning": "",
                 },
                 {
                     "name": "stonithd_param",
@@ -1397,7 +1811,10 @@
                     "type": "string",
                     "required": False,
                     "default": None,
-                    "advanced": False
+                    "advanced": False,
+                    "deprecated": False,
+                    "obsoletes": None,
+                    "pcs_deprecated_warning": "",
                 },
             ]
         )
@@ -1420,7 +1837,7 @@
         ])
 
 
-@mock.patch.object(lib_ra.Agent, "_get_metadata")
+@patch_agent_object("_get_metadata")
 class StonithAgentMetadataGetProvidesUnfencingTest(TestCase):
     def setUp(self):
         self.agent = lib_ra.StonithAgent(
@@ -1428,11 +1845,9 @@
             "fence_dummy"
         )
-
     def tearDown(self):
         lib_ra.StonithAgent._stonithd_metadata = None
-
     def test_true(self, mock_metadata):
         xml = """
@@ -1447,7 +1862,6 @@
         mock_metadata.return_value = etree.XML(xml)
         self.assertTrue(self.agent.get_provides_unfencing())
-
     def test_no_action_on(self, mock_metadata):
         xml = """
@@ -1461,7 +1875,6 @@
         mock_metadata.return_value = etree.XML(xml)
         self.assertFalse(self.agent.get_provides_unfencing())
-
     def test_no_target(self, mock_metadata):
         xml = """
@@ -1476,7 +1889,6 @@
         mock_metadata.return_value = etree.XML(xml)
         self.assertFalse(self.agent.get_provides_unfencing())
-
     def test_no_automatic(self, mock_metadata):
         xml = """
@@ -1491,6 +1903,7 @@
         mock_metadata.return_value = etree.XML(xml)
         self.assertFalse(self.agent.get_provides_unfencing())
 
+
 class ResourceAgentTest(TestCase):
     def test_raises_on_invalid_name(self):
         self.assertRaises(
@@ -1499,7 +1912,60 @@
         )
 
     def test_does_not_raise_on_valid_name(self):
-        lib_ra.ResourceAgent(mock.MagicMock(), "formal:valid:name")
+        lib_ra.ResourceAgent(mock.MagicMock(), "ocf:heartbeat:name")
+
+
+@patch_agent_object("_get_metadata")
+class ResourceAgentGetParameters(TestCase):
+    def fixture_metadata(self, params):
+        return etree.XML("""
+
+            {0}
+
+        """.format([''.format(name) for name in params])
+        )
+
+    def assert_param_names(self, expected_names, actual_params):
+        self.assertEqual(
+            expected_names,
+            [param["name"] for param in actual_params]
+        )
+
+    def test_add_trace_parameters_to_ocf(self, mock_metadata):
+        mock_metadata.return_value = self.fixture_metadata(["test_param"])
+        agent = lib_ra.ResourceAgent(
+            mock.MagicMock(spec_set=CommandRunner),
+            "ocf:pacemaker:test"
+        )
+        self.assert_param_names(
+            ["test_param", "trace_ra", "trace_file"],
+            agent.get_parameters()
+        )
+
+    def test_do_not_add_trace_parameters_if_present(self, mock_metadata):
+        mock_metadata.return_value = self.fixture_metadata([
+            "trace_ra", "test_param", "trace_file"
+        ])
+        agent = lib_ra.ResourceAgent(
+            mock.MagicMock(spec_set=CommandRunner),
+            "ocf:pacemaker:test"
+        )
+        self.assert_param_names(
+            ["trace_ra", "test_param", "trace_file"],
+            agent.get_parameters()
+        )
+
+    def test_do_not_add_trace_parameters_to_others(self, mock_metadata):
+        mock_metadata.return_value = self.fixture_metadata(["test_param"])
+        agent = lib_ra.ResourceAgent(
+            mock.MagicMock(spec_set=CommandRunner),
+            "service:test"
+        )
+        self.assert_param_names(
+            ["test_param"],
+            agent.get_parameters()
+        )
+
+
 class 
FindResourceAgentByNameTest(TestCase): def setUp(self): @@ -1563,7 +2029,7 @@ ResourceAgent.assert_called_once_with(self.runner, name) AbsentResourceAgent.assert_called_once_with(self.runner, name) error_to_report_item.assert_called_once_with( - e, severity=severity.WARNING, forceable=True + e, severity=severity.WARNING ) self.report_processor.process.assert_called_once_with(report) @@ -1584,7 +2050,7 @@ self.assertEqual(report, context_manager.exception.args[0]) ResourceAgent.assert_called_once_with(self.runner, name) - error_to_report_item.assert_called_once_with(e) + error_to_report_item.assert_called_once_with(e, forceable=True) @patch_agent("resource_agent_error_to_report_item") @patch_agent("ResourceAgent") @@ -1603,6 +2069,52 @@ ResourceAgent.assert_called_once_with(self.runner, name) error_to_report_item.assert_called_once_with(e) + +class FindStonithAgentByName(TestCase): + # It is quite similar to find_valid_stonith_agent_by_name, so only minimum + # tests here: + # - test success + # - test with ":" in agent name - there was a bug + def setUp(self): + self.report_processor = mock.MagicMock() + self.runner = mock.MagicMock() + self.run = partial( + lib_ra.find_valid_stonith_agent_by_name, + self.report_processor, + self.runner, + ) + + @patch_agent("StonithAgent") + def test_returns_real_agent_when_is_there(self, StonithAgent): + #setup + name = "fence_xvm" + + agent = mock.MagicMock() + agent.validate_metadata = mock.Mock(return_value=agent) + StonithAgent.return_value = agent + + #test + self.assertEqual(agent, self.run(name)) + StonithAgent.assert_called_once_with(self.runner, name) + + @patch_agent("resource_agent_error_to_report_item") + @patch_agent("StonithAgent") + def test_raises_on_invalid_name(self, StonithAgent, error_to_report_item): + name = "fence_xvm:invalid" + report = "INVALID_STONITH_AGENT_NAME" + e = lib_ra.InvalidStonithAgentName(name, "invalid agent name") + + StonithAgent.side_effect = e + error_to_report_item.return_value = report + + with self.assertRaises(LibraryError) as context_manager: + self.run(name) + + self.assertEqual(report, context_manager.exception.args[0]) + StonithAgent.assert_called_once_with(self.runner, name) + error_to_report_item.assert_called_once_with(e) + + class AbsentResourceAgentTest(TestCase): @mock.patch.object(lib_ra.CrmAgent, "_load_metadata") def test_behaves_like_a_proper_agent(self, load_metadata): diff -Nru pcs-0.9.155+dfsg/pcs/lib/test/test_validate.py pcs-0.9.159/pcs/lib/test/test_validate.py --- pcs-0.9.155+dfsg/pcs/lib/test/test_validate.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/test/test_validate.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,1045 @@ +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from lxml import etree +import re + +from pcs.common import report_codes +from pcs.lib import validate +from pcs.lib.cib.tools import IdProvider +from pcs.lib.errors import ReportItemSeverity as severities +from pcs.test.tools.assertions import assert_report_item_list_equal +from pcs.test.tools.pcs_unittest import TestCase + +class ValuesToPairs(TestCase): + def test_create_from_plain_values(self): + self.assertEqual( + { + "first": validate.ValuePair("A", "a"), + "second": validate.ValuePair("B", "b"), + }, + validate.values_to_pairs( + { + "first": "A", + "second": "B", + }, + lambda key, value: value.lower() + ) + ) + + def test_keep_pair_if_is_already_there(self): + self.assertEqual( + { + "first": validate.ValuePair("A", "aaa"), + "second": 
validate.ValuePair("B", "b"), + }, + validate.values_to_pairs( + { + "first": validate.ValuePair("A", "aaa"), + "second": "B", + }, + lambda key, value: value.lower() + ) + ) + +class PairsToValues(TestCase): + def test_keep_values_if_is_not_pair(self): + self.assertEqual( + { + "first": "A", + "second": "B", + }, + validate.pairs_to_values( + { + "first": "A", + "second": "B", + } + ) + ) + + def test_extract_normalized_values(self): + self.assertEqual( + { + "first": "aaa", + "second": "B", + }, + validate.pairs_to_values( + { + "first": validate.ValuePair( + original="A", + normalized="aaa" + ), + "second": "B", + } + ) + ) + +class OptionValueNormalization(TestCase): + def test_return_normalized_value_if_normalization_for_key_specified(self): + normalize = validate.option_value_normalization({ + "first": lambda value: value.upper() + }) + self.assertEqual("ONE", normalize("first", "one")) + + def test_return_value_if_normalization_for_key_unspecified(self): + normalize = validate.option_value_normalization({}) + self.assertEqual("one", normalize("first", "one")) + + +class DependsOn(TestCase): + def test_success_when_dependency_present(self): + assert_report_item_list_equal( + validate.depends_on_option("name", "prerequisite", "type")({ + "name": "value", + "prerequisite": "value", + }), + [] + ) + + def test_report_when_dependency_missing(self): + assert_report_item_list_equal( + validate.depends_on_option( + "name", "prerequisite", "type1", "type2" + )({ + "name": "value", + }), + [ + ( + severities.ERROR, + report_codes.PREREQUISITE_OPTION_IS_MISSING, + { + "option_name": "name", + "option_type": "type1", + "prerequisite_name": "prerequisite", + "prerequisite_type": "type2", + }, + None + ), + ] + ) + + +class IsRequired(TestCase): + def test_returns_no_report_when_required_is_present(self): + assert_report_item_list_equal( + validate.is_required("name", "some type")({"name": "monitor"}), + [] + ) + + def test_returns_report_when_required_is_missing(self): + assert_report_item_list_equal( + validate.is_required("name", "some type")({}), + [ + ( + severities.ERROR, + report_codes.REQUIRED_OPTION_IS_MISSING, + { + "option_names": ["name"], + "option_type": "some type", + }, + None + ), + ] + ) + + +class IsRequiredSomeOf(TestCase): + def test_returns_no_report_when_first_is_present(self): + assert_report_item_list_equal( + validate.is_required_some_of(["first", "second"], "type")({ + "first": "value", + }), + [] + ) + + def test_returns_no_report_when_second_is_present(self): + assert_report_item_list_equal( + validate.is_required_some_of(["first", "second"], "type")({ + "second": "value", + }), + [] + ) + + def test_returns_report_when_missing(self): + assert_report_item_list_equal( + validate.is_required_some_of(["first", "second"], "type")({ + "third": "value", + }), + [ + ( + severities.ERROR, + report_codes.REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING, + { + "option_names": ["first", "second"], + "option_type": "type", + }, + None + ), + ] + ) + + +class ValueCondTest(TestCase): + def setUp(self): + self.predicate = lambda a: a == "b" + + def test_returns_empty_report_on_valid_option(self): + self.assertEqual( + [], + validate.value_cond("a", self.predicate, "test")({"a": "b"}) + ) + + def test_returns_empty_report_on_valid_normalized_option(self): + self.assertEqual( + [], + validate.value_cond("a", self.predicate, "test")( + {"a": validate.ValuePair(original="C", normalized="b")} + ), + ) + + def test_returns_report_about_invalid_option(self): + assert_report_item_list_equal( + 
validate.value_cond("a", self.predicate, "test")({"a": "c"}), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "a", + "option_value": "c", + "allowed_values": "test", + }, + None + ), + ] + ) + + def test_support_OptionValuePair(self): + assert_report_item_list_equal( + validate.value_cond("a", self.predicate, "test")( + {"a": validate.ValuePair(original="b", normalized="c")} + ), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "a", + "option_value": "b", + "allowed_values": "test", + }, + None + ), + ] + ) + + def test_supports_another_report_option_name(self): + assert_report_item_list_equal( + validate.value_cond( + "a", self.predicate, "test", option_name_for_report="option a" + )( + {"a": "c"} + ), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "option a", + "option_value": "c", + "allowed_values": "test", + }, + None + ), + ] + ) + + def test_supports_forceable_errors(self): + assert_report_item_list_equal( + validate.value_cond( + "a", self.predicate, "test", code_to_allow_extra_values="FORCE" + )( + {"a": "c"} + ), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "a", + "option_value": "c", + "allowed_values": "test", + }, + "FORCE" + ), + ] + ) + + def test_supports_warning(self): + assert_report_item_list_equal( + validate.value_cond( + "a", + self.predicate, + "test", + code_to_allow_extra_values="FORCE", + allow_extra_values=True + )( + {"a": "c"} + ), + [ + ( + severities.WARNING, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "a", + "option_value": "c", + "allowed_values": "test", + }, + None + ), + ] + ) + + +class ValueEmptyOrValid(TestCase): + def setUp(self): + self.validator = validate.value_cond("a", lambda a: a == "b", "test") + + def test_missing(self): + assert_report_item_list_equal( + validate.value_empty_or_valid("a", self.validator)({"b": "c"}), + [ + ] + ) + + def test_empty(self): + assert_report_item_list_equal( + validate.value_empty_or_valid("a", self.validator)({"a": ""}), + [ + ] + ) + + def test_valid(self): + assert_report_item_list_equal( + validate.value_empty_or_valid("a", self.validator)({"a": "b"}), + [ + ] + ) + + def test_not_valid(self): + assert_report_item_list_equal( + validate.value_empty_or_valid("a", self.validator)({"a": "c"}), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "a", + "option_value": "c", + "allowed_values": "test", + }, + None + ), + ] + ) + + +class ValueId(TestCase): + def test_empty_id(self): + assert_report_item_list_equal( + validate.value_id("id", "test id")({"id": ""}), + [ + ( + severities.ERROR, + report_codes.EMPTY_ID, + { + "id": "", + "id_description": "test id", + }, + None + ), + ] + ) + + def test_invalid_first_char(self): + assert_report_item_list_equal( + validate.value_id("id", "test id")({"id": "0-test"}), + [ + ( + severities.ERROR, + report_codes.INVALID_ID, + { + "id": "0-test", + "id_description": "test id", + "invalid_character": "0", + "is_first_char": True, + }, + None + ), + ] + ) + + def test_invalid_char(self): + assert_report_item_list_equal( + validate.value_id("id", "test id")({"id": "te#st"}), + [ + ( + severities.ERROR, + report_codes.INVALID_ID, + { + "id": "te#st", + "id_description": "test id", + "invalid_character": "#", + "is_first_char": False, + }, + None + ), + ] + ) + + def test_used_id(self): + id_provider = IdProvider(etree.fromstring("")) + assert_report_item_list_equal( + 
validate.value_id("id", "test id", id_provider)({"id": "used"}), + [ + ( + severities.ERROR, + report_codes.ID_ALREADY_EXISTS, + { + "id": "used", + }, + None + ), + ] + ) + + def test_pair_invalid(self): + assert_report_item_list_equal( + validate.value_id("id", "test id")({ + "id": validate.ValuePair("@&#", "") + }), + [ + ( + severities.ERROR, + report_codes.EMPTY_ID, + { + # TODO: This should be "@&#". However an old validator + # is used and it doesn't work with pairs. + "id": "", + "id_description": "test id", + }, + None + ), + ] + ) + + def test_pair_used_id(self): + id_provider = IdProvider(etree.fromstring("")) + assert_report_item_list_equal( + validate.value_id("id", "test id", id_provider)({ + "id": validate.ValuePair("not-used", "used") + }), + [ + ( + severities.ERROR, + report_codes.ID_ALREADY_EXISTS, + { + # TODO: This should be "not-used". However an old + # validator is used and it doesn't work with pairs. + "id": "used", + }, + None + ), + ] + ) + + def test_success(self): + id_provider = IdProvider(etree.fromstring("")) + assert_report_item_list_equal( + validate.value_id("id", "test id", id_provider)({"id": "correct"}), + [] + ) + + def test_pair_success(self): + id_provider = IdProvider(etree.fromstring("")) + assert_report_item_list_equal( + validate.value_id("id", "test id", id_provider)({ + "id": validate.ValuePair("correct", "correct") + }), + [] + ) + + +class ValueIn(TestCase): + def test_returns_empty_report_on_valid_option(self): + self.assertEqual( + [], + validate.value_in("a", ["b"])({"a": "b"}) + ) + + def test_returns_empty_report_on_valid_normalized_option(self): + self.assertEqual( + [], + validate.value_in("a", ["b"])( + {"a": validate.ValuePair(original="C", normalized="b")} + ), + ) + + def test_returns_report_about_invalid_option(self): + assert_report_item_list_equal( + validate.value_in("a", ["b"])({"a": "c"}), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "a", + "option_value": "c", + "allowed_values": ["b"], + }, + None + ), + ] + ) + + def test_support_OptionValuePair(self): + assert_report_item_list_equal( + validate.value_in("a", ["b"])( + {"a": validate.ValuePair(original="C", normalized="c")} + ), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "a", + "option_value": "C", + "allowed_values": ["b"], + }, + None + ), + ] + ) + + def test_supports_another_report_option_name(self): + assert_report_item_list_equal( + validate.value_in("a", ["b"], option_name_for_report="option a")( + {"a": "c"} + ), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "option a", + "option_value": "c", + "allowed_values": ["b"], + }, + None + ), + ] + ) + + def test_supports_forceable_errors(self): + assert_report_item_list_equal( + validate.value_in("a", ["b"], code_to_allow_extra_values="FORCE")( + {"a": "c"} + ), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "a", + "option_value": "c", + "allowed_values": ["b"], + }, + "FORCE" + ), + ] + ) + + def test_supports_warning(self): + assert_report_item_list_equal( + validate.value_in( + "a", + ["b"], + code_to_allow_extra_values="FORCE", + allow_extra_values=True + )( + {"a": "c"} + ), + [ + ( + severities.WARNING, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "a", + "option_value": "c", + "allowed_values": ["b"], + }, + None + ), + ] + ) + + +class ValueNonnegativeInteger(TestCase): + # The real code only calls value_cond => only basic tests here. 
+ def test_empty_report_on_valid_option(self): + assert_report_item_list_equal( + validate.value_nonnegative_integer("key")({"key": "10"}), + [] + ) + + def test_report_invalid_value(self): + assert_report_item_list_equal( + validate.value_nonnegative_integer("key")({"key": "-10"}), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "key", + "option_value": "-10", + "allowed_values": "a non-negative integer", + }, + None + ), + ] + ) + + +class ValueNotEmpty(TestCase): + def test_empty_report_on_not_empty_value(self): + assert_report_item_list_equal( + validate.value_not_empty("key", "description")({"key": "abc"}), + [] + ) + + def test_empty_report_on_zero_int_value(self): + assert_report_item_list_equal( + validate.value_not_empty("key", "description")({"key": 0}), + [] + ) + + def test_report_on_empty_string(self): + assert_report_item_list_equal( + validate.value_not_empty("key", "description")({"key": ""}), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "key", + "option_value": "", + "allowed_values": "description", + }, + None + ), + ] + ) + + +class ValuePortNumber(TestCase): + # The real code only calls value_cond => only basic tests here. + def test_empty_report_on_valid_option(self): + assert_report_item_list_equal( + validate.value_port_number("key")({"key": "54321"}), + [] + ) + + def test_report_invalid_value(self): + assert_report_item_list_equal( + validate.value_port_number("key")({"key": "65536"}), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "key", + "option_value": "65536", + "allowed_values": "a port number (1-65535)", + }, + None + ), + ] + ) + + +class ValuePortRange(TestCase): + # The real code only calls value_cond => only basic tests here. + def test_empty_report_on_valid_option(self): + assert_report_item_list_equal( + validate.value_port_range("key")({"key": "100-200"}), + [] + ) + + def test_report_nonsense(self): + assert_report_item_list_equal( + validate.value_port_range("key")({"key": "10-20-30"}), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "key", + "option_value": "10-20-30", + "allowed_values": "port-port", + }, + None + ), + ] + ) + + def test_report_bad_start(self): + assert_report_item_list_equal( + validate.value_port_range("key")({"key": "0-100"}), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "key", + "option_value": "0-100", + "allowed_values": "port-port", + }, + None + ), + ] + ) + + def test_report_bad_end(self): + assert_report_item_list_equal( + validate.value_port_range("key")({"key": "100-65536"}), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "key", + "option_value": "100-65536", + "allowed_values": "port-port", + }, + None + ), + ] + ) + + +class ValuePositiveInteger(TestCase): + # The real code only calls value_cond => only basic tests here. 
+    def test_empty_report_on_valid_option(self):
+        assert_report_item_list_equal(
+            validate.value_positive_integer("key")({"key": "10"}),
+            []
+        )
+
+    def test_report_invalid_value(self):
+        assert_report_item_list_equal(
+            validate.value_positive_integer("key")({"key": "0"}),
+            [
+                (
+                    severities.ERROR,
+                    report_codes.INVALID_OPTION_VALUE,
+                    {
+                        "option_name": "key",
+                        "option_value": "0",
+                        "allowed_values": "a positive integer",
+                    },
+                    None
+                ),
+            ]
+        )
+
+
+class MutuallyExclusive(TestCase):
+    def test_returns_empty_report_when_valid(self):
+        assert_report_item_list_equal(
+            validate.mutually_exclusive(["a", "b"])({"a": "A"}),
+            [],
+        )
+
+    def test_returns_mutually_exclusive_report_on_2_names_conflict(self):
+        assert_report_item_list_equal(
+            validate.mutually_exclusive(["a", "b", "c"])({
+                "a": "A",
+                "b": "B",
+                "d": "D",
+            }),
+            [
+                (
+                    severities.ERROR,
+                    report_codes.MUTUALLY_EXCLUSIVE_OPTIONS,
+                    {
+                        "option_type": "option",
+                        "option_names": ["a", "b"],
+                    },
+                    None
+                ),
+            ],
+        )
+
+    def test_returns_mutually_exclusive_report_on_multiple_name_conflict(self):
+        assert_report_item_list_equal(
+            validate.mutually_exclusive(["a", "b", "c", "e"])({
+                "a": "A",
+                "b": "B",
+                "c": "C",
+                "d": "D",
+            }),
+            [
+                (
+                    severities.ERROR,
+                    report_codes.MUTUALLY_EXCLUSIVE_OPTIONS,
+                    {
+                        "option_type": "option",
+                        "option_names": ["a", "b", "c"],
+                    },
+                    None
+                ),
+            ],
+        )
+
+class CollectOptionValidations(TestCase):
+    def test_collect_all_errors_from_specifications(self):
+        specification = [
+            lambda option_dict: ["A{0}".format(option_dict["x"])],
+            lambda option_dict: ["B"],
+        ]
+
+        self.assertEqual(
+            ["Ay", "B"],
+            validate.run_collection_of_option_validators(
+                {"x": "y"},
+                specification
+            )
+        )
+
+class NamesIn(TestCase):
+    def test_return_empty_report_on_allowed_names(self):
+        assert_report_item_list_equal(
+            validate.names_in(
+                ["a", "b", "c"],
+                ["a", "b"],
+            ),
+            [],
+        )
+
+    def test_return_error_on_not_allowed_names(self):
+        assert_report_item_list_equal(
+            validate.names_in(
+                ["a", "b", "c"],
+                ["x", "y"],
+            ),
+            [
+                (
+                    severities.ERROR,
+                    report_codes.INVALID_OPTION,
+                    {
+                        "option_names": ["x", "y"],
+                        "allowed": ["a", "b", "c"],
+                        "option_type": "option",
+                    },
+                    None
+                )
+            ]
+        )
+
+    def test_return_error_on_not_allowed_names_without_force_code(self):
+        assert_report_item_list_equal(
+            validate.names_in(
+                ["a", "b", "c"],
+                ["x", "y"],
+                #does not work without code_to_allow_extra_names
+                allow_extra_names=True,
+            ),
+            [
+                (
+                    severities.ERROR,
+                    report_codes.INVALID_OPTION,
+                    {
+                        "option_names": ["x", "y"],
+                        "allowed": ["a", "b", "c"],
+                        "option_type": "option",
+                    },
+                    None
+                )
+            ]
+        )
+
+    def test_return_forceable_error_on_not_allowed_names(self):
+        assert_report_item_list_equal(
+            validate.names_in(
+                ["a", "b", "c"],
+                ["x", "y"],
+                option_type="some option",
+                code_to_allow_extra_names="FORCE_CODE",
+            ),
+            [
+                (
+                    severities.ERROR,
+                    report_codes.INVALID_OPTION,
+                    {
+                        "option_names": ["x", "y"],
+                        "allowed": ["a", "b", "c"],
+                        "option_type": "some option",
+                    },
+                    "FORCE_CODE"
+                )
+            ]
+        )
+
+    def test_return_warning_on_not_allowed_names(self):
+        assert_report_item_list_equal(
+            validate.names_in(
+                ["a", "b", "c"],
+                ["x", "y"],
+                option_type="some option",
+                code_to_allow_extra_names="FORCE_CODE",
+                allow_extra_names=True,
+            ),
+            [
+                (
+                    severities.WARNING,
+                    report_codes.INVALID_OPTION,
+                    {
+                        "option_names": ["x", "y"],
+                        "allowed": ["a", "b", "c"],
+                        "option_type": "some option",
+                    },
+                    None
+                )
+            ]
+        )
+
+
+class IsInteger(TestCase):
+    def test_no_range(self):
+
self.assertTrue(validate.is_integer(1)) + self.assertTrue(validate.is_integer("1")) + self.assertTrue(validate.is_integer(-1)) + self.assertTrue(validate.is_integer("-1")) + self.assertTrue(validate.is_integer(+1)) + self.assertTrue(validate.is_integer("+1")) + self.assertTrue(validate.is_integer(" 1")) + self.assertTrue(validate.is_integer("-1 ")) + self.assertTrue(validate.is_integer("+1 ")) + + self.assertFalse(validate.is_integer("")) + self.assertFalse(validate.is_integer("1a")) + self.assertFalse(validate.is_integer("a1")) + self.assertFalse(validate.is_integer("aaa")) + self.assertFalse(validate.is_integer(1.0)) + self.assertFalse(validate.is_integer("1.0")) + + def test_at_least(self): + self.assertTrue(validate.is_integer(5, 5)) + self.assertTrue(validate.is_integer(5, 4)) + self.assertTrue(validate.is_integer("5", 5)) + self.assertTrue(validate.is_integer("5", 4)) + + self.assertFalse(validate.is_integer(5, 6)) + self.assertFalse(validate.is_integer("5", 6)) + + def test_at_most(self): + self.assertTrue(validate.is_integer(5, None, 5)) + self.assertTrue(validate.is_integer(5, None, 6)) + self.assertTrue(validate.is_integer("5", None, 5)) + self.assertTrue(validate.is_integer("5", None, 6)) + + self.assertFalse(validate.is_integer(5, None, 4)) + self.assertFalse(validate.is_integer("5", None, 4)) + + def test_range(self): + self.assertTrue(validate.is_integer(5, 5, 5)) + self.assertTrue(validate.is_integer(5, 4, 6)) + self.assertTrue(validate.is_integer("5", 5, 5)) + self.assertTrue(validate.is_integer("5", 4, 6)) + + self.assertFalse(validate.is_integer(3, 4, 6)) + self.assertFalse(validate.is_integer(7, 4, 6)) + self.assertFalse(validate.is_integer("3", 4, 6)) + self.assertFalse(validate.is_integer("7", 4, 6)) + + +class IsPortNumber(TestCase): + def test_valid_port(self): + self.assertTrue(validate.is_port_number(1)) + self.assertTrue(validate.is_port_number("1")) + self.assertTrue(validate.is_port_number(65535)) + self.assertTrue(validate.is_port_number("65535")) + self.assertTrue(validate.is_port_number(8192)) + self.assertTrue(validate.is_port_number(" 8192 ")) + + def test_bad_port(self): + self.assertFalse(validate.is_port_number(0)) + self.assertFalse(validate.is_port_number("0")) + self.assertFalse(validate.is_port_number(65536)) + self.assertFalse(validate.is_port_number("65536")) + self.assertFalse(validate.is_port_number(-128)) + self.assertFalse(validate.is_port_number("-128")) + self.assertFalse(validate.is_port_number("abcd")) + + +class MatchesRegexp(TestCase): + def test_matches_string(self): + self.assertTrue(validate.matches_regexp("abcdcba", "^[a-d]+$")) + + def test_matches_regexp(self): + self.assertTrue(validate.matches_regexp( + "abCDCBa", + re.compile("^[a-d]+$", re.IGNORECASE) + )) + + def test_not_matches_string(self): + self.assertFalse(validate.matches_regexp("abcDcba", "^[a-d]+$")) + + def test_not_matches_regexp(self): + self.assertFalse(validate.matches_regexp( + "abCeCBa", + re.compile("^[a-d]+$", re.IGNORECASE) + )) + + +class IsEmptyString(TestCase): + def test_empty_string(self): + self.assertTrue(validate.is_empty_string("")) + + def test_not_empty_string(self): + self.assertFalse(validate.is_empty_string("a")) + self.assertFalse(validate.is_empty_string("0")) + self.assertFalse(validate.is_empty_string(0)) + + +class IsTimeInterval(TestCase): + def test_no_reports_for_valid_time_interval(self): + for interval in ["0", "1s", "2sec", "3m", "4min", "5h", "6hr"]: + self.assertEquals( + [], + validate.value_time_interval("a")({"a": interval}), + 
"interval: {0}".format(interval) + ) + + def test_reports_about_invalid_interval(self): + assert_report_item_list_equal( + validate.value_time_interval("a")({"a": "invalid_value"}), + [ + ( + severities.ERROR, + report_codes.INVALID_OPTION_VALUE, + { + "option_name": "a", + "option_value": "invalid_value", + "allowed_values": + "time interval (e.g. 1, 2s, 3m, 4h, ...)" + , + }, + None + ), + ] + ) diff -Nru pcs-0.9.155+dfsg/pcs/lib/tools.py pcs-0.9.159/pcs/lib/tools.py --- pcs-0.9.155+dfsg/pcs/lib/tools.py 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/lib/tools.py 2017-06-30 15:33:01.000000000 +0000 @@ -4,8 +4,16 @@ print_function, unicode_literals, ) +import binascii +import os +def generate_key(random_bytes_count=32): + return binascii.hexlify(generate_binary_key(random_bytes_count)) + +def generate_binary_key(random_bytes_count): + return os.urandom(random_bytes_count) + def environment_file_to_dict(config): """ Parse systemd Environment file. This parser is simplified version of diff -Nru pcs-0.9.155+dfsg/pcs/lib/validate.py pcs-0.9.159/pcs/lib/validate.py --- pcs-0.9.155+dfsg/pcs/lib/validate.py 1970-01-01 00:00:00.000000000 +0000 +++ pcs-0.9.159/pcs/lib/validate.py 2017-06-30 15:33:01.000000000 +0000 @@ -0,0 +1,532 @@ +""" +Module contains list of functions that should be useful for validation. +Example of use (how things play together): + >>> option_dict = {"some_option": "A"} + >>> validators = [ + ... is_required("name"), + ... value_in("some_option", ["B", "C"]) + ... ] + >>> report_list = run_collection_of_option_validators( + ... option_dict, + ... validators + ... ) + >>> for report in report_list: + ... print(report) + ... + ... + ERROR REQUIRED_OPTION_IS_MISSING: { + 'option_type': 'option', + 'option_names': ['name'] + } + ERROR INVALID_OPTION_VALUE: { + 'option_name': 'some_option', + 'option_value': 'A', + 'allowed_values': ['B', 'C'] + } + +Sometimes we need to validate the normalized value but in report we need the +original value. For this purposes is ValuePair and helpers like values_to_pairs +and pairs_to_values. + +TODO provide parameters to provide forceable error/warning for functions that + does not support it +""" +from __future__ import ( + absolute_import, + division, + print_function, + unicode_literals, +) + +from collections import namedtuple +import re + +from pcs.common.tools import is_string +from pcs.lib import reports +from pcs.lib.pacemaker.values import ( + timeout_to_seconds, + validate_id, +) + + +### normalization + +class ValuePair(namedtuple("ValuePair", "original normalized")): + """ + Storage for the original value and its normalized form + """ + + @staticmethod + def get(val): + return val if isinstance(val, ValuePair) else ValuePair(val, val) + +def values_to_pairs(option_dict, normalize): + """ + Return a dict derived from option_dict where every value is instance of + ValuePair. + + dict option_dict contains values that should be paired with the normalized + form + callable normalize should take key and value and return normalized form. + Function option_value_normalization can be good base for create such + callable. + """ + option_dict_with_pairs = {} + for key, value in option_dict.items(): + if not isinstance(value, ValuePair): + value = ValuePair( + original=value, + normalized=normalize(key, value), + ) + option_dict_with_pairs[key] = value + return option_dict_with_pairs + +def pairs_to_values(option_dict): + """ + Take a dict which has OptionValuePairs as its values and return dict with + normalized forms as its values. 
 def environment_file_to_dict(config):
     """
     Parse systemd Environment file. This parser is simplified version of
diff -Nru pcs-0.9.155+dfsg/pcs/lib/validate.py pcs-0.9.159/pcs/lib/validate.py
--- pcs-0.9.155+dfsg/pcs/lib/validate.py 1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/validate.py 2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,532 @@
+"""
+This module contains functions useful for validation.
+Example of use (how things play together):
+    >>> option_dict = {"some_option": "A"}
+    >>> validators = [
+    ...     is_required("name"),
+    ...     value_in("some_option", ["B", "C"])
+    ... ]
+    >>> report_list = run_collection_of_option_validators(
+    ...     option_dict,
+    ...     validators
+    ... )
+    >>> for report in report_list:
+    ...     print(report)
+    ...
+    ...
+    ERROR REQUIRED_OPTION_IS_MISSING: {
+        'option_type': 'option',
+        'option_names': ['name']
+    }
+    ERROR INVALID_OPTION_VALUE: {
+        'option_name': 'some_option',
+        'option_value': 'A',
+        'allowed_values': ['B', 'C']
+    }
+
+Sometimes we need to validate the normalized value but report the original
+value. For this purpose there is ValuePair and the helpers values_to_pairs
+and pairs_to_values.
+
+TODO provide parameters to provide forceable error/warning for functions that
+    do not support it
+"""
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from collections import namedtuple
+import re
+
+from pcs.common.tools import is_string
+from pcs.lib import reports
+from pcs.lib.pacemaker.values import (
+    timeout_to_seconds,
+    validate_id,
+)
+
+
+### normalization
+
+class ValuePair(namedtuple("ValuePair", "original normalized")):
+    """
+    Storage for the original value and its normalized form
+    """
+
+    @staticmethod
+    def get(val):
+        return val if isinstance(val, ValuePair) else ValuePair(val, val)
+
+def values_to_pairs(option_dict, normalize):
+    """
+    Return a dict derived from option_dict where every value is an instance of
+    ValuePair.
+
+    dict option_dict contains values that should be paired with the normalized
+        form
+    callable normalize should take a key and a value and return the normalized
+        form. The function option_value_normalization can be a good base for
+        creating such a callable.
+    """
+    option_dict_with_pairs = {}
+    for key, value in option_dict.items():
+        if not isinstance(value, ValuePair):
+            value = ValuePair(
+                original=value,
+                normalized=normalize(key, value),
+            )
+        option_dict_with_pairs[key] = value
+    return option_dict_with_pairs
+
+def pairs_to_values(option_dict):
+    """
+    Take a dict which has ValuePairs as its values and return a dict with the
+    normalized forms as its values. It is the reverse of values_to_pairs.
+
+    dict option_dict contains ValuePairs as its values
+    """
+    raw_option_dict = {}
+    for key, value in option_dict.items():
+        if isinstance(value, ValuePair):
+            value = value.normalized
+        raw_option_dict[key] = value
+    return raw_option_dict
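+# NOTE (editorial example, not part of the upstream change): a round trip
+# through the two helpers above, assuming a lower-casing normalization:
+#
+#     pairs = values_to_pairs({"a": "X"}, lambda key, value: value.lower())
+#     pairs["a"].original     # "X"
+#     pairs["a"].normalized   # "x"
+#     pairs_to_values(pairs)  # {"a": "x"}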
+
+def option_value_normalization(normalization_map):
+    """
+    Return a function that takes a key and a value and returns the normalized
+    form.
+
+    dict normalization_map maps each key to a function that takes a value and
+        returns its normalized form.
+    """
+    def normalize(key, value):
+        return(
+            value if key not in normalization_map
+            else normalization_map[key](value)
+        )
+    return normalize
+
+### keys validators
+
+def depends_on_option(
+    option_name, prerequisite_option, option_type="", prerequisite_type=""
+):
+    """
+    Get a validator reporting PREREQUISITE_OPTION_IS_MISSING when the
+    option_dict does not contain the prerequisite_option and contains the
+    option_name.
+
+    string option_name -- name of the option to check
+    string prerequisite_option -- name of the option which is a prerequisite
+    string option_type -- describes a type of the option for reporting purposes
+    """
+    def validate(option_dict):
+        if (
+            option_name in option_dict
+            and
+            prerequisite_option not in option_dict
+        ):
+            return [reports.prerequisite_option_is_missing(
+                option_name,
+                prerequisite_option,
+                option_type,
+                prerequisite_type
+            )]
+        return []
+    return validate
+
+def is_required(option_name, option_type=""):
+    """
+    Return a function that takes option_dict and returns a report list
+    (with REQUIRED_OPTION_IS_MISSING when option_dict does not contain
+    option_name).
+
+    string option_name is the name of the option of option_dict to be tested
+    string option_type describes the type of the option for reporting purposes
+    """
+    def validate(option_dict):
+        if option_name not in option_dict:
+            return [reports.required_option_is_missing(
+                [option_name],
+                option_type,
+            )]
+        return []
+    return validate
+
+def is_required_some_of(option_name_list, option_type=""):
+    """
+    Get a validator reporting REQUIRED_OPTION_OF_ALTERNATIVES_IS_MISSING when
+    the option_dict does not contain at least one item from the
+    option_name_list.
+
+    iterable option_name_list -- names of options of the option_dict to test
+    string option_type -- describes a type of the option for reporting purposes
+    """
+    def validate(option_dict):
+        found_names = set.intersection(
+            set(option_dict.keys()),
+            set(option_name_list)
+        )
+        if len(found_names) < 1:
+            return [reports.required_option_of_alternatives_is_missing(
+                sorted(option_name_list),
+                option_type,
+            )]
+        return []
+    return validate
+
+def mutually_exclusive(mutually_exclusive_names, option_type="option"):
+    """
+    Return a validator reporting MUTUALLY_EXCLUSIVE_OPTIONS when more than one
+    of mutually_exclusive_names appears in option_dict.
+
+    list|set mutually_exclusive_names contains option names that cannot appear
+        together
+    string option_type describes the type of the option for reporting purposes
+    """
+    def validate(option_dict):
+        found_names = set.intersection(
+            set(option_dict.keys()),
+            set(mutually_exclusive_names)
+        )
+        if len(found_names) > 1:
+            return [reports.mutually_exclusive_options(
+                sorted(found_names),
+                option_type,
+            )]
+        return []
+    return validate
+
+def names_in(
+    allowed_name_list, name_list, option_type="option",
+    code_to_allow_extra_names=None, allow_extra_names=False
+):
+    """
+    Return a list with an INVALID_OPTION report when name_list contains a name
+    that is not in allowed_name_list.
+
+    list allowed_name_list contains names which are valid
+    list name_list contains names for validation
+    string option_type describes the type of the option for reporting purposes
+    string code_to_allow_extra_names is a code for forcing invalid names. If it
+        is empty, the INVALID_OPTION report is a non-forceable error. If it is
+        not empty, the INVALID_OPTION report is a forceable error or a warning.
+    bool allow_extra_names is a flag that complements code_to_allow_extra_names
+        and determines whether the INVALID_OPTION report is a forceable error
+        or a warning.
+    """
+    invalid_names = set(name_list) - set(allowed_name_list)
+    if not invalid_names:
+        return []
+
+    create_report = reports.get_problem_creator(
+        code_to_allow_extra_names,
+        allow_extra_names
+    )
+    return [create_report(
+        reports.invalid_option,
+        sorted(invalid_names),
+        sorted(allowed_name_list),
+        option_type,
+    )]
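+# NOTE (editorial example, not part of the upstream change): names_in is the
+# only validator here that takes the names directly instead of returning a
+# validate(option_dict) callable:
+#
+#     names_in(["a", "b", "c"], ["x"])
+#         # -> one non-forceable INVALID_OPTION error report
+#     names_in(["a", "b", "c"], ["x"], code_to_allow_extra_names="FORCE_CODE")
+#         # -> the same report, but forceable by "FORCE_CODE"
+#     names_in(["a", "b", "c"], ["x"], code_to_allow_extra_names="FORCE_CODE",
+#         allow_extra_names=True)
+#         # -> a warning instead of an error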
+ """ + @_if_option_exists(option_name) + def validate(option_dict): + value = ValuePair.get(option_dict[option_name]) + + if not predicate(value.normalized): + create_report = reports.get_problem_creator( + code_to_allow_extra_values, + allow_extra_values + ) + return [create_report( + reports.invalid_option_value, + option_name_for_report if option_name_for_report is not None + else option_name + , + value.original, + value_type_or_enum, + )] + + return [] + return validate + +def value_empty_or_valid(option_name, validator): + """ + Get a validator running the specified validator if the value is not empty + + string option_name -- name of the option to check + function validator -- validator to run when the value is not an empty string + """ + @_if_option_exists(option_name) + def validate(option_dict): + value = ValuePair.get(option_dict[option_name]) + return ( + [] if is_empty_string(value.normalized) + else validator(option_dict) + ) + return validate + +def value_id(option_name, option_name_for_report=None, id_provider=None): + """ + Get a validator reporting ID errors and optionally booking IDs along the way + + string option_name -- name of the option to check + string option_name_for_report -- substitued by the option_name if not set + IdProvider id_provider -- used to check id uniqueness if set + """ + @_if_option_exists(option_name) + def validate(option_dict): + value = ValuePair.get(option_dict[option_name]) + report_list = [] + validate_id(value.normalized, option_name_for_report, report_list) + if id_provider is not None and not report_list: + report_list.extend( + id_provider.book_ids(value.normalized) + ) + return report_list + return validate + +def value_in( + option_name, allowed_values, option_name_for_report=None, + code_to_allow_extra_values=None, allow_extra_values=False +): + """ + Special case of value_cond function.returned function checks whenever value + is included allowed_values. If not list of ReportItem will be returned. + + option_name -- string, name of option to check + allowed_values -- list of strings, list of possible values + option_name_for_report -- string, it is substitued by option name if is None + code_to_allow_extra_values -- string, code for forcing invalid names. If it + is empty report INVALID_OPTION is non-forceable error. If it is not + empty report INVALID_OPTION is forceable error or warning. + allow_extra_values -- bool, flag that complements code_to_allow_extra_values + and determines wheter is report INVALID_OPTION forceable error or + warning. 
+
+def value_nonnegative_integer(
+    option_name, option_name_for_report=None,
+    code_to_allow_extra_values=None, allow_extra_values=False
+):
+    """
+    Get a validator reporting INVALID_OPTION_VALUE when the value is not
+    an integer greater than -1
+
+    string option_name -- name of the option to check
+    string option_name_for_report -- substituted by the option_name if not set
+    string code_to_allow_extra_values -- create a report forceable by this code
+    bool allow_extra_values -- create a warning instead of an error if True
+    """
+    return value_cond(
+        option_name,
+        lambda value: is_integer(value, 0),
+        "a non-negative integer",
+        option_name_for_report=option_name_for_report,
+        code_to_allow_extra_values=code_to_allow_extra_values,
+        allow_extra_values=allow_extra_values,
+    )
+
+def value_not_empty(
+    option_name, value_type_or_enum, option_name_for_report=None,
+    code_to_allow_extra_values=None, allow_extra_values=False
+):
+    """
+    Get a validator reporting INVALID_OPTION_VALUE when the value is empty
+
+    string option_name -- name of the option to check
+    mixed value_type_or_enum -- list of possible values or a string
+        description of the value type, used for reporting
+    string option_name_for_report -- substituted by the option_name if not set
+    string code_to_allow_extra_values -- create a report forceable by this code
+    bool allow_extra_values -- create a warning instead of an error if True
+    """
+    return value_cond(
+        option_name,
+        lambda value: not is_empty_string(value),
+        value_type_or_enum,
+        option_name_for_report=option_name_for_report,
+        code_to_allow_extra_values=code_to_allow_extra_values,
+        allow_extra_values=allow_extra_values,
+    )
+
+def value_port_number(
+    option_name, option_name_for_report=None,
+    code_to_allow_extra_values=None, allow_extra_values=False
+):
+    """
+    Get a validator reporting INVALID_OPTION_VALUE when the value is not a TCP
+    or UDP port number
+
+    string option_name -- name of the option to check
+    string option_name_for_report -- substituted by the option_name if not set
+    string code_to_allow_extra_values -- create a report forceable by this code
+    bool allow_extra_values -- create a warning instead of an error if True
+    """
+    return value_cond(
+        option_name,
+        is_port_number,
+        "a port number (1-65535)",
+        option_name_for_report=option_name_for_report,
+        code_to_allow_extra_values=code_to_allow_extra_values,
+        allow_extra_values=allow_extra_values,
+    )
+
+def value_port_range(
+    option_name, option_name_for_report=None,
+    code_to_allow_extra_values=None, allow_extra_values=False
+):
+    """
+    Get a validator reporting INVALID_OPTION_VALUE when the value is not a TCP
+    or UDP port range
+
+    string option_name -- name of the option to check
+    string option_name_for_report -- substituted by the option_name if not set
+    string code_to_allow_extra_values -- create a report forceable by this code
+    bool allow_extra_values -- create a warning instead of an error if True
+    """
+    return value_cond(
+        option_name,
+        lambda value: (
+            matches_regexp(value, "^[0-9]+-[0-9]+$")
+            and
+            all([is_port_number(part) for part in value.split("-", 1)])
+        ),
+        "port-port",
+        option_name_for_report=option_name_for_report,
+        code_to_allow_extra_values=code_to_allow_extra_values,
+        allow_extra_values=allow_extra_values,
+    )
+
+def value_positive_integer(
+    option_name, option_name_for_report=None,
+    code_to_allow_extra_values=None, allow_extra_values=False
+):
+    """
+    Get a validator reporting INVALID_OPTION_VALUE when the value is not
+    an integer greater than zero
+
+    string option_name -- name of the option to check
+    string option_name_for_report -- substituted by the option_name if not set
+    string code_to_allow_extra_values -- create a report forceable by this code
+    bool allow_extra_values -- create a warning instead of an error if True
+    """
+    return value_cond(
+        option_name,
+        lambda value: is_integer(value, 1),
+        "a positive integer",
+        option_name_for_report=option_name_for_report,
+        code_to_allow_extra_values=code_to_allow_extra_values,
+        allow_extra_values=allow_extra_values,
+    )
+
+def value_time_interval(option_name, option_name_for_report=None):
+    return value_cond(
+        option_name,
+        lambda normalized_value:
+            timeout_to_seconds(normalized_value) is not None,
+        "time interval (e.g. 1, 2s, 3m, 4h, ...)",
+        option_name_for_report=option_name_for_report,
+    )
+
+### tools and predicates
+
+def run_collection_of_option_validators(option_dict, validator_list):
+    """
+    Return a list of reports (ReportItems) about problems inside items of
+    option_dict.
+
+    dict option_dict is the source of values to validate according to
+        specification
+    list validator_list contains callables that take option_dict and return
+        a list of reports
+    """
+    report_list = []
+    for validate in validator_list:
+        report_list.extend(validate(option_dict))
+    return report_list
+
+def is_empty_string(value):
+    """
+    Check if the specified value is an empty string
+
+    mixed value -- value to check
+    """
+    return is_string(value) and not value
+
+def is_integer(value, at_least=None, at_most=None):
+    """
+    Check if the specified value is an integer, optionally check a range
+
+    mixed value -- string, int or float, value to check
+    """
+    try:
+        if isinstance(value, float):
+            return False
+        value_int = int(value)
+        if at_least is not None and value_int < at_least:
+            return False
+        if at_most is not None and value_int > at_most:
+            return False
+    except ValueError:
+        return False
+    return True
+
+def is_port_number(value):
+    """
+    Check if the specified value is a TCP or UDP port number
+
+    mixed value -- string, int or float, value to check
+    """
+    return is_integer(value, 1, 65535)
+
+def matches_regexp(value, regexp):
+    """
+    Check if the specified value matches the specified regular expression
+
+    mixed value -- string, int or float, value to check
+    mixed regexp -- string or a compiled regular expression to match the
+        value against
+    """
+    if not hasattr(regexp, "match"):
+        regexp = re.compile(regexp)
+    return regexp.match(value) is not None
+
+def _if_option_exists(option_name):
+    def params_wrapper(validate_func):
+        def prepare(option_dict):
+            if option_name not in option_dict:
+                return []
+            return validate_func(option_dict)
+        return prepare
+    return params_wrapper
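A sketch of how the validators and helpers above are meant to compose
(illustrative only; the option names are hypothetical):

    validators = [
        value_port_number("port"),
        value_positive_integer("sync_timeout"),
        value_in("mode", ["active", "passive"]),
    ]
    options = {"port": "70000", "sync_timeout": "30", "mode": "active"}
    # "port" is out of range, so exactly one INVALID_OPTION_VALUE report is
    # collected; the other options validate cleanly.
    report_list = run_collection_of_option_validators(options, validators)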
diff -Nru pcs-0.9.155+dfsg/pcs/lib/xml_tools.py pcs-0.9.159/pcs/lib/xml_tools.py
--- pcs-0.9.155+dfsg/pcs/lib/xml_tools.py	1970-01-01 00:00:00.000000000 +0000
+++ pcs-0.9.159/pcs/lib/xml_tools.py	2017-06-30 15:33:01.000000000 +0000
@@ -0,0 +1,86 @@
+from __future__ import (
+    absolute_import,
+    division,
+    print_function,
+    unicode_literals,
+)
+
+from lxml import etree
+
+def get_root(tree):
+    # ElementTree has getroot, Element has getroottree
+    return tree.getroot() if hasattr(tree, "getroot") else tree.getroottree()
+
+def find_parent(element, tag_names):
+    """
+    Find the parent of an element based on the parent's tag name. Return the
+    parent element or None if no such element exists.
+
+    etree element -- the element whose parent we want to find
+    strings tag_names -- allowed tag names of the parent we are looking for
+    """
+    candidate = element
+    while True:
+        if candidate is None or candidate.tag in tag_names:
+            return candidate
+        candidate = candidate.getparent()
+
+def get_sub_element(element, sub_element_tag, new_id=None, new_index=None):
+    """
+    Return the FIRST sub-element sub_element_tag of element. Create a new
+    element if one does not exist yet.
+
+    element -- parent element
+    sub_element_tag -- tag of the wanted new element
+    new_id -- id of the new element, None means no id will be set
+    new_index -- where the new element will be added, None means at the end
+    """
+    sub_element = element.find("./{0}".format(sub_element_tag))
+    if sub_element is None:
+        sub_element = etree.Element(sub_element_tag)
+        if new_id:
+            sub_element.set("id", new_id)
+        if new_index is None:
+            element.append(sub_element)
+        else:
+            element.insert(new_index, sub_element)
+    return sub_element
+
+def export_attributes(element):
+    return dict((key, value) for key, value in element.attrib.items())
+
+def update_attribute_remove_empty(element, name, value):
+    """
+    Set an attribute's value or remove the attribute if the value is ""
+
+    etree element -- element to be updated
+    string name -- attribute name
+    mixed value -- attribute value
+    """
+    if len(value) < 1:
+        if name in element.attrib:
+            del element.attrib[name]
+        return
+    element.set(name, value)
+
+def update_attributes_remove_empty(element, attributes):
+    """
+    Set attributes' values or remove an attribute if its new value is ""
+
+    etree element -- element to be updated
+    dict attributes -- new attributes' values
+    """
+    for name, value in attributes.items():
+        update_attribute_remove_empty(element, name, value)
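A short sketch of how these helpers behave (illustrative only, assuming an
lxml element as input):

    from lxml import etree

    el = etree.fromstring('<resource id="r1"/>')
    ops = get_sub_element(el, "operations", new_id="r1-ops")
    # A second call finds the existing sub-element instead of creating one.
    assert get_sub_element(el, "operations") is ops
    update_attributes_remove_empty(el, {"id": "r1", "description": ""})
    # An empty value removes (or never sets) the attribute.
    assert "description" not in el.attrib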
+
+def etree_element_attibutes_to_dict(etree_el, required_key_list):
+    """
+    Return a dictionary of all attributes of etree_el listed in
+    required_key_list, where keys are attribute names and values are the
+    attribute values, or None if an attribute is not present.
+
+    etree_el -- etree element from which attributes should be extracted
+    required_key_list -- list of strings, attribute names which should be
+        extracted
+    """
+    return dict([(key, etree_el.get(key)) for key in required_key_list])
diff -Nru pcs-0.9.155+dfsg/pcs/node.py pcs-0.9.159/pcs/node.py
--- pcs-0.9.155+dfsg/pcs/node.py	2016-11-07 16:12:24.000000000 +0000
+++ pcs-0.9.159/pcs/node.py	2017-06-30 15:33:01.000000000 +0000
@@ -12,142 +12,86 @@
     usage,
     utils,
 )
-from pcs.cli.common.errors import CmdLineInputError
+from pcs.cli.common.errors import (
+    CmdLineInputError,
+    ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE,
+)
 from pcs.cli.common.parse_args import prepare_options
 from pcs.lib.errors import LibraryError
-import pcs.lib.pacemaker as lib_pacemaker
-from pcs.lib.pacemaker_values import get_valid_timeout_seconds
+import pcs.lib.pacemaker.live as lib_pacemaker
 
-def node_cmd(argv):
-    if len(argv) == 0:
+def node_cmd(lib, argv, modifiers):
+    if len(argv) < 1:
         usage.node()
         sys.exit(1)
-    sub_cmd = argv.pop(0)
-    if sub_cmd == "help":
-        usage.node(argv)
-    elif sub_cmd == "maintenance":
-        node_maintenance(argv)
-    elif sub_cmd == "unmaintenance":
-        node_maintenance(argv, False)
-    elif sub_cmd == "standby":
-        node_standby(argv)
-    elif sub_cmd == "unstandby":
-        node_standby(argv, False)
-    elif sub_cmd == "attribute":
-        if "--name" in utils.pcs_options and len(argv) > 1:
-            usage.node("attribute")
-            sys.exit(1)
-        filter_attr=utils.pcs_options.get("--name", None)
-        if len(argv) == 0:
-            attribute_show_cmd(filter_attr=filter_attr)
-        elif len(argv) == 1:
-            attribute_show_cmd(argv.pop(0), filter_attr=filter_attr)
-        else:
-            attribute_set_cmd(argv.pop(0), argv)
-    elif sub_cmd == "utilization":
-        if "--name" in utils.pcs_options and len(argv) > 1:
-            usage.node("utilization")
-            sys.exit(1)
-        filter_name=utils.pcs_options.get("--name", None)
-        if len(argv) == 0:
-            print_node_utilization(filter_name=filter_name)
-        elif len(argv) == 1:
-            print_node_utilization(argv.pop(0), filter_name=filter_name)
+    sub_cmd, argv_next = argv[0], argv[1:]
+
+    try:
+        if sub_cmd == "help":
+            usage.node([" ".join(argv_next)] if argv_next else [])
+        elif sub_cmd == "maintenance":
+            node_maintenance_cmd(lib, argv_next, modifiers, True)
+        elif sub_cmd == "unmaintenance":
+            node_maintenance_cmd(lib, argv_next, modifiers, False)
+        elif sub_cmd == "standby":
+            node_standby_cmd(lib, argv_next, modifiers, True)
+        elif sub_cmd == "unstandby":
+            node_standby_cmd(lib, argv_next, modifiers, False)
+        elif sub_cmd == "attribute":
+            node_attribute_cmd(lib, argv_next, modifiers)
+        elif sub_cmd == "utilization":
+            node_utilization_cmd(lib, argv_next, modifiers)
+        # pcs-to-pcsd use only
+        elif sub_cmd == "pacemaker-status":
+            node_pacemaker_status(lib, argv_next, modifiers)
         else:
-            try:
-                set_node_utilization(argv.pop(0), argv)
-            except CmdLineInputError as e:
-                utils.exit_on_cmdline_input_errror(e, "node", "utilization")
-    # pcs-to-pcsd use only
-    elif sub_cmd == "pacemaker-status":
-        node_pacemaker_status()
-    else:
-        usage.node()
-        sys.exit(1)
+            raise CmdLineInputError()
+    except LibraryError as e:
+        utils.process_library_reports(e.args)
+    except CmdLineInputError as e:
+        utils.exit_on_cmdline_input_errror(e, "node", sub_cmd)
 
+def node_attribute_cmd(lib, argv, modifiers):
+    if modifiers["name"] and len(argv) > 1:
+        raise CmdLineInputError()
+    if len(argv) == 0:
+        attribute_show_cmd(filter_attr=modifiers["name"])
+    elif len(argv) == 1:
+        attribute_show_cmd(argv.pop(0), filter_attr=modifiers["name"])
+    else:
+        attribute_set_cmd(argv.pop(0), argv)
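For orientation (illustrative only, not part of the patch): each subcommand
handler now receives the library facade, its remaining arguments and the
parsed modifiers. A call mirroring `pcs node attribute node1` would look
roughly like:

    # node_attribute_cmd does not use "lib" yet and still goes through utils,
    # so this only works on a cluster node; "name" mirrors the --name flag.
    node_attribute_cmd(lib=None, argv=["node1"], modifiers={"name": None})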
-def node_maintenance(argv, on=True): - action = ["-v", "on"] if on else ["-D"] +def node_utilization_cmd(lib, argv, modifiers): + if modifiers["name"] and len(argv) > 1: + raise CmdLineInputError() + if len(argv) == 0: + print_node_utilization(filter_name=modifiers["name"]) + elif len(argv) == 1: + print_node_utilization(argv.pop(0), filter_name=modifiers["name"]) + else: + set_node_utilization(argv.pop(0), argv) - cluster_nodes = utils.getNodesFromPacemaker() - nodes = [] - failed_count = 0 - if "--all" in utils.pcs_options: - nodes = cluster_nodes +def node_maintenance_cmd(lib, argv, modifiers, enable): + if len(argv) > 0 and modifiers["all"]: + raise CmdLineInputError(ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE) + if modifiers["all"]: + lib.node.maintenance_unmaintenance_all(enable, modifiers["wait"]) elif argv: - for node in argv: - if node not in cluster_nodes: - utils.err( - "Node '{0}' does not appear to exist in " - "configuration".format(node), - False - ) - failed_count += 1 - else: - nodes.append(node) + lib.node.maintenance_unmaintenance_list(enable, argv, modifiers["wait"]) else: - nodes.append("") - - if failed_count > 0: - sys.exit(1) + lib.node.maintenance_unmaintenance_local(enable, modifiers["wait"]) - for node in nodes: - node_attr = ["-N", node] if node else [] - output, retval = utils.run( - ["crm_attribute", "-t", "nodes", "-n", "maintenance"] + action + - node_attr - ) - if retval != 0: - node_name = ("node '{0}'".format(node)) if argv else "current node" - failed_count += 1 - if on: - utils.err( - "Unable to put {0} to maintenance mode: {1}".format( - node_name, output - ), - False - ) - else: - utils.err( - "Unable to remove {0} from maintenance mode: {1}".format( - node_name, output - ), - False - ) - if failed_count > 0: - sys.exit(1) - -def node_standby(argv, standby=True): - if (len(argv) > 1) or (len(argv) > 0 and "--all" in utils.pcs_options): - usage.node(["standby" if standby else "unstandby"]) - sys.exit(1) - - all_nodes = "--all" in utils.pcs_options - node_list = [argv[0]] if argv else [] - wait = False - timeout = None - if "--wait" in utils.pcs_options: - wait = True - timeout = utils.pcs_options["--wait"] - - try: - if wait: - lib_pacemaker.ensure_resource_wait_support(utils.cmd_runner()) - valid_timeout = get_valid_timeout_seconds(timeout) - if standby: - lib_pacemaker.nodes_standby( - utils.cmd_runner(), node_list, all_nodes - ) - else: - lib_pacemaker.nodes_unstandby( - utils.cmd_runner(), node_list, all_nodes - ) - if wait: - lib_pacemaker.wait_for_resources(utils.cmd_runner(), valid_timeout) - except LibraryError as e: - utils.process_library_reports(e.args) +def node_standby_cmd(lib, argv, modifiers, enable): + if len(argv) > 0 and modifiers["all"]: + raise CmdLineInputError(ERR_NODE_LIST_AND_ALL_MUTUALLY_EXCLUSIVE) + if modifiers["all"]: + lib.node.standby_unstandby_all(enable, modifiers["wait"]) + elif argv: + lib.node.standby_unstandby_list(enable, argv, modifiers["wait"]) + else: + lib.node.standby_unstandby_local(enable, modifiers["wait"]) def set_node_utilization(node, argv): cib = utils.get_cib_dom() @@ -213,13 +157,10 @@ for node in sorted(utilization): print(" {0}: {1}".format(node, utilization[node])) -def node_pacemaker_status(): - try: - print(json.dumps( - lib_pacemaker.get_local_node_status(utils.cmd_runner()) - )) - except LibraryError as e: - utils.process_library_reports(e.args) +def node_pacemaker_status(lib, argv, modifiers): + print(json.dumps( + lib_pacemaker.get_local_node_status(utils.cmd_runner()) + )) def 
attribute_show_cmd(filter_node=None, filter_attr=None): node_attributes = utils.get_node_attributes( @@ -230,11 +171,7 @@ attribute_print(node_attributes) def attribute_set_cmd(node, argv): - try: - attrs = prepare_options(argv) - except CmdLineInputError as e: - utils.exit_on_cmdline_input_errror(e, "node", "attribute") - for name, value in attrs.items(): + for name, value in prepare_options(argv).items(): utils.set_node_attribute(name, value, node) def attribute_print(node_attributes): diff -Nru pcs-0.9.155+dfsg/pcs/pcs.8 pcs-0.9.159/pcs/pcs.8 --- pcs-0.9.155+dfsg/pcs/pcs.8 2016-11-07 16:12:24.000000000 +0000 +++ pcs-0.9.159/pcs/pcs.8 2017-06-30 15:33:01.000000000 +0000 @@ -1,4 +1,4 @@ -.TH PCS "8" "November 2016" "pcs 0.9.155" "System Administration Utilities" +.TH PCS "8" "June 2017" "pcs 0.9.159" "System Administration Utilities" .SH NAME pcs \- pacemaker/corosync configuration system .SH SYNOPSIS @@ -19,73 +19,76 @@ .TP \fB\-\-version\fR Print pcs version information. +.TP +\fB\-\-request\-timeout=\fR +Timeout for each outgoing request to another node in seconds. Default is 60s. .SS "Commands:" .TP cluster -Configure cluster options and nodes. + Configure cluster options and nodes. .TP resource -Manage cluster resources. + Manage cluster resources. .TP stonith -Configure fence devices. + Manage fence devices. .TP constraint -Set resource constraints. + Manage resource constraints. .TP property -Set pacemaker properties. + Manage pacemaker properties. .TP acl -Set pacemaker access control lists. + Manage pacemaker access control lists. .TP qdevice -Manage quorum device provider on the local host. + Manage quorum device provider on the local host. .TP quorum -Manage cluster quorum settings. + Manage cluster quorum settings. .TP booth -Manage booth (cluster ticket manager). + Manage booth (cluster ticket manager). .TP status -View cluster status. + View cluster status. .TP config -View and manage cluster configuration. + View and manage cluster configuration. .TP pcsd -Manage pcs daemon. + Manage pcs daemon. .TP node -Manage cluster nodes. + Manage cluster nodes. .TP alert -Manage pacemaker alerts. + Manage pacemaker alerts. .SS "resource" .TP [show [] | \fB\-\-full\fR | \fB\-\-groups\fR | \fB\-\-hide\-inactive\fR] Show all currently configured resources or if a resource is specified show the options for the configured resource. If \fB\-\-full\fR is specified, all configured resource options will be displayed. If \fB\-\-groups\fR is specified, only show groups (and their resources). If \fB\-\-hide\-inactive\fR is specified, only show active resources. .TP list [filter] [\fB\-\-nodesc\fR] -Show list of all available resource agents (if filter is provided then only resource agents matching the filter will be shown). If --nodesc is used then descriptions of resource agents are not printed. +Show list of all available resource agents (if filter is provided then only resource agents matching the filter will be shown). If \fB\-\-nodesc\fR is used then descriptions of resource agents are not printed. .TP -describe [:[:]] -Show options for the specified resource. +describe [:[:]] [\fB\-\-full\fR] +Show options for the specified resource. If \fB\-\-full\fR is specified, all options including advanced ones are shown. .TP -create [:[:]] [resource options] [op [ ]...] [meta ...] [\fB\-\-clone\fR | \fB\-\-master\fR | \fB\-\-group\fR [\fB\-\-before\fR | \fB\-\-after\fR ]] [\fB\-\-disabled\fR] [\fB\-\-wait\fR[=n]] -Create specified resource. If \fB\-\-clone\fR is used a clone resource is created. 
If \fB\-\-master\fR is specified a master/slave resource is created. If \fB\-\-group\fR is specified the resource is added to the group named. You can use \fB\-\-before\fR or \fB\-\-after\fR to specify the position of the added resource relatively to some resource already existing in the group. If \fB\-\-disabled\fR is specified the resource is not started automatically. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to start and then return 0 if the resource is started, or 1 if the resource has not yet started. If 'n' is not specified it defaults to 60 minutes.
+create [:[:]] [resource options] [\fBop\fR [ ]...] [\fBmeta\fR ...] [\fBclone\fR [] | \fBmaster\fR [] | \fB\-\-group\fR [\fB\-\-before\fR | \fB\-\-after\fR ] | \fBbundle\fR ] [\fB\-\-disabled\fR] [\fB\-\-no\-default\-ops\fR] [\fB\-\-wait\fR[=n]]
+Create specified resource. If \fBclone\fR is used a clone resource is created. If \fBmaster\fR is specified a master/slave resource is created. If \fB\-\-group\fR is specified the resource is added to the group named. You can use \fB\-\-before\fR or \fB\-\-after\fR to specify the position of the added resource relatively to some resource already existing in the group. If \fBbundle\fR is specified, the resource will be created inside the specified bundle. If \fB\-\-disabled\fR is specified the resource is not started automatically. If \fB\-\-no\-default\-ops\fR is specified, only monitor operations are created for the resource and all other operations use default settings. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to start and then return 0 if the resource is started, or 1 if the resource has not yet started. If 'n' is not specified it defaults to 60 minutes.
 Example: Create a new resource called 'VirtualIP' with IP address 192.168.0.99, netmask of 32, monitored everything 30 seconds, on eth2:
 pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
 .TP
 delete
 Deletes the resource, group, master or clone (and all resources within the group/master/clone).
 .TP
-enable [\fB\-\-wait\fR[=n]]
-Allow the cluster to start the resource. Depending on the rest of the configuration (constraints, options, failures, etc), the resource may remain stopped. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to start and then return 0 if the resource is started, or 1 if the resource has not yet started. If 'n' is not specified it defaults to 60 minutes.
+enable ... [\fB\-\-wait\fR[=n]]
+Allow the cluster to start the resources. Depending on the rest of the configuration (constraints, options, failures, etc), the resources may remain stopped. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resources to start and then return 0 if the resources are started, or 1 if the resources have not yet started. If 'n' is not specified it defaults to 60 minutes.
 .TP
-disable [\fB\-\-wait\fR[=n]]
-Attempt to stop the resource if it is running and forbid the cluster from starting it again. Depending on the rest of the configuration (constraints, options, failures, etc), the resource may remain started. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to stop and then return 0 if the resource is stopped or 1 if the resource has not stopped. If 'n' is not specified it defaults to 60 minutes.
+disable ... 
[\fB\-\-wait\fR[=n]] +Attempt to stop the resources if they are running and forbid the cluster from starting them again. Depending on the rest of the configuration (constraints, options, failures, etc), the resources may remain started. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resources to stop and then return 0 if the resources are stopped or 1 if the resources have not stopped. If 'n' is not specified it defaults to 60 minutes. .TP restart [node] [\fB\-\-wait\fR=n] Restart the resource specified. If a node is specified and if the resource is a clone or master/slave it will be restarted only on the node specified. If \fB\-\-wait\fR is specified, then we will wait up to 'n' seconds for the resource to be restarted and return 0 if the restart was successful or 1 if it was not. @@ -103,7 +106,7 @@ This command will force the specified resource to be demoted on this node ignoring the cluster recommendations and print the output from demoting the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to demote. .TP debug\-monitor [\fB\-\-full\fR] -This command will force the specified resource to be moniored on this node ignoring the cluster recommendations and print the output from monitoring the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to be monitored. +This command will force the specified resource to be monitored on this node ignoring the cluster recommendations and print the output from monitoring the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to be monitored. .TP move [destination node] [\fB\-\-master\fR] [lifetime=] [\fB\-\-wait\fR[=n]] Move the resource off the node it is currently running on by creating a \-INFINITY location constraint to ban the node. If destination node is specified the resource will be moved to that node by creating an INFINITY location constraint to prefer the destination node. If \fB\-\-master\fR is used the scope of the command is limited to the master role and you must use the master id (instead of the resource id). If lifetime is specified then the constraint will expire after that time, otherwise it defaults to infinity and the constraint can be cleared manually with 'pcs resource clear' or 'pcs constraint delete'. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to move and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes. If you want the resource to preferably avoid running on some nodes but be able to failover to them use 'pcs location avoids'. @@ -136,7 +139,7 @@ Remove the specified operation id. .TP op defaults [options] -Set default values for operations, if no options are passed, lists currently configured defaults. +Set default values for operations, if no options are passed, lists currently configured defaults. Defaults do not apply to resources which override them with their own defined operations. .TP meta [\fB\-\-wait\fR[=n]] Add specified options to the specified resource, group, master/slave or clone. Meta options should be in the format of name=value, options may be removed by setting an option without a value. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the changes to take effect and then return 0 if the changes have been processed or 1 otherwise. If 'n' is not specified it defaults to 60 minutes. 
Example: pcs resource meta TestResource failure\-timeout=50 stickiness=
@@ -145,13 +148,13 @@
 Add the specified resource to the group, creating the group if it does not exist. If the resource is present in another group it is moved to the new group. You can use \fB\-\-before\fR or \fB\-\-after\fR to specify the position of the added resources relatively to some resource already existing in the group. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
 .TP
 group remove [resource id] ... [resource id] [\fB\-\-wait\fR[=n]]
-Remove the specified resource(s) from the group, removing the group if it no resources remain. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
+Remove the specified resource(s) from the group, removing the group if no resources remain in it. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
 .TP
 ungroup [resource id] ... [resource id] [\fB\-\-wait\fR[=n]]
 Remove the group (note: this does not remove any resources from the cluster) or if resources are specified, remove the specified resources from the group. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
 .TP
 clone [clone options]... [\fB\-\-wait\fR[=n]]
-Setup up the specified resource or group as a clone. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including starting clone instances if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
+Set up the specified resource or group as a clone. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including starting clone instances if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
 .TP
 unclone [\fB\-\-wait\fR[=n]]
 Remove the clone which contains the specified group or resource (the resource or group will not be removed). If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including stopping clone instances if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
@@ -159,17 +162,23 @@
 master [] [options] [\fB\-\-wait\fR[=n]]
 Configure a resource or group as a multi\-state (master/slave) resource. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including starting and promoting resource instances if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes. Note: to remove a master you must remove the resource/group it contains.
 .TP
-manage ... [resource n]
-Set resources listed to managed mode (default).
+bundle create container [] [network ] [port\-map ]... [storage\-map ]... [meta ] [\fB\-\-disabled\fR] [\fB\-\-wait\fR[=n]]
+Create a new bundle encapsulating no resources. The bundle can be used either as it is or a resource may be put into it at any time. If \fB\-\-disabled\fR is specified, the bundle is not started automatically. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the bundle to start and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
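A hypothetical invocation (illustrative only; the bundle id, image and option values are made up, following the synopsis above): pcs resource bundle create httpd\-bundle container docker image=pcs/httpd replicas=2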
+.TP
+bundle update [container ] [network ] [port\-map (add ) | (remove ...)]... [storage\-map (add ) | (remove ...)]... [meta ] [\fB\-\-wait\fR[=n]]
+Add, remove or change options of the specified bundle. If you wish to update a resource encapsulated in the bundle, use the 'pcs resource update' command instead and specify the resource id. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
+.TP
+manage ... [\fB\-\-monitor\fR]
+Set resources listed to managed mode (default). If \fB\-\-monitor\fR is specified, enable all monitor operations of the resources.
 .TP
-unmanage ... [resource n]
-Set resources listed to unmanaged mode.
+unmanage ... [\fB\-\-monitor\fR]
+Set resources listed to unmanaged mode. When a resource is in unmanaged mode, the cluster is not allowed to start or stop the resource. If \fB\-\-monitor\fR is specified, disable all monitor operations of the resources.
 .TP
 defaults [options]
-Set default values for resources, if no options are passed, lists currently configured defaults.
+Set default values for resources, if no options are passed, lists currently configured defaults. Defaults do not apply to resources which override them with their own defined values.
 .TP
 cleanup [] [\fB\-\-node\fR ]
-Cleans up the resource in the lrmd (useful to reset the resource status and failcount). This tells the cluster to forget the operation history of a resource and re-detect its current state. This can be useful to purge knowledge of past failures that have since been resolved. If a resource id is not specified then all resources/stonith devices will be cleaned up. If a node is not specified then resources on all nodes will be cleaned up.
+Make the cluster forget the operation history of the resource and re\-detect its current state. This can be useful to purge knowledge of past failures that have since been resolved. If a resource id is not specified then all resources/stonith devices will be cleaned up. If a node is not specified then resources/stonith devices on all nodes will be cleaned up.
 .TP
 failcount show [node]
 Show current failcount for specified resource from all nodes or only on specified node.
@@ -177,7 +186,7 @@
 failcount reset [node]
 Reset failcount for specified resource on all nodes or only on specified node. This tells the cluster to forget how many times a resource has failed in the past. This may allow the resource to be started or moved to a more preferred location.
 .TP
-relocate dry-run [resource1] [resource2] ...
+relocate dry\-run [resource1] [resource2] ...
 The same as 'relocate run' but has no effect on the cluster.
 .TP
 relocate run [resource1] [resource2] ...
@@ -194,9 +203,9 @@
 .SS "cluster"
 .TP
 auth [node] [...] [\fB\-u\fR username] [\fB\-p\fR password] [\fB\-\-force\fR] [\fB\-\-local\fR]
-Authenticate pcs to pcsd on nodes specified, or on all nodes configured in corosync.conf if no nodes are specified (authorization tokens are stored in ~/.pcs/tokens or /var/lib/pcsd/tokens for root).
By default all nodes are also authenticated to each other, using \fB\-\-local\fR only authenticates the local node (and does not authenticate the remote nodes with each other). Using \fB\-\-force\fR forces re-authentication to occur. +Authenticate pcs to pcsd on nodes specified, or on all nodes configured in the local cluster if no nodes are specified (authorization tokens are stored in ~/.pcs/tokens or /var/lib/pcsd/tokens for root). By default all nodes are also authenticated to each other, using \fB\-\-local\fR only authenticates the local node (and does not authenticate the remote nodes with each other). Using \fB\-\-force\fR forces re\-authentication to occur. .TP -setup [\fB\-\-start\fR [\fB\-\-wait\fR[=]]] [\fB\-\-local\fR] [\fB\-\-enable\fR] \fB\-\-name\fR [] [...] [\fB\-\-transport\fR udpu|udp] [\fB\-\-rrpmode\fR active|passive] [\fB\-\-addr0\fR [[[\fB\-\-mcast0\fR
] [\fB\-\-mcastport0\fR ] [\fB\-\-ttl0\fR ]] | [\fB\-\-broadcast0\fR]] [\fB\-\-addr1\fR [[[\fB\-\-mcast1\fR
] [\fB\-\-mcastport1\fR ] [\fB\-\-ttl1\fR ]] | [\fB\-\-broadcast1\fR]]]] [\fB\-\-wait_for_all\fR=<0|1>] [\fB\-\-auto_tie_breaker\fR=<0|1>] [\fB\-\-last_man_standing\fR=<0|1> [\fB\-\-last_man_standing_window\fR=